<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<records xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    
    <record>
        <title>Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703108</link>
        <id>10.14569/IJACSA.2026.01703108</id>
        <doi>10.14569/IJACSA.2026.01703108</doi>
        <lastModDate>2026-03-31T07:33:43.1100000+00:00</lastModDate>
        
        <creator>Adel Alkhalil</creator>
        
        <creator>Ikhlaq Ahmed</creator>
        
        <creator>Zafran Khan</creator>
        
        <creator>Mazhar Abbas</creator>
        
        <creator>Aakash Ahmad</creator>
        
        <creator>Abdulrahman Albarrak</creator>
        
        <subject>Contextual sequential recommendation; bidirectional transformers; contrastive learning; auxiliary information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Contrastive learning (CL) based on Transformer sequence encoders offers a robust framework for sequential recommendation by effectively addressing data noise and sparsity. By leveraging the advantages of CL, these models learn rich representations from sequences of users’ historical interactions, leading to improved recommendations and user satisfaction. However, recent CL methods suffer from two limitations. First, CL approaches are mainly designed to process input sequences in a single direction, i.e., left to right, which is sub-optimal for sequential prediction because users’ historical interactions may not follow a fixed single-direction order. Second, these models design CL objectives based solely on the input sequence, overlooking the valuable self-supervision signals available in auxiliary descriptive text. To overcome these limitations, we introduce a new framework named Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation (CCLRec). Specifically, bidirectional Transformers are extended to incorporate auxiliary information through sentence embeddings formulated from items’ textual descriptions. We then introduce the rolling glass step technique for handling lengthy user sequences and the descriptive features of corresponding items, which enables more refined partitioning of user sequences. Finally, the cloze task, random occlusion, and dropout masking strategies are jointly applied to generate high-quality positive samples, improving the performance of the contrastive learning objective. Comprehensive experiments on three benchmark datasets demonstrate that CCLRec consistently outperforms state-of-the-art baselines, achieving improvements of 5.69% to 6.34% in NDCG@10 across the MovieLens-1M, Amazon Beauty, and Amazon Toys datasets.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_108-Bi_Transformers_Aided_Contextual_Contrastive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Modeling and Control Framework for Intelligent Wheelchairs Using Timed Petri Nets and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703107</link>
        <id>10.14569/IJACSA.2026.01703107</id>
        <doi>10.14569/IJACSA.2026.01703107</doi>
        <lastModDate>2026-03-31T07:33:43.0970000+00:00</lastModDate>
        
        <creator>Ayoub Elbazzazi</creator>
        
        <creator>Ikram Dahamou</creator>
        
        <creator>Cherki Daoui</creator>
        
        <subject>Timed Petri Nets; assistive robotics; adaptive control; neural networks; fuzzy logic; FPGA; human-machine interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study proposes a hybrid modeling and control framework for intelligent wheelchair systems that integrates formal methods with adaptive artificial intelligence to ensure safety, robustness, and real-time performance. The approach combines Timed and Colored Petri Nets for formal safety enforcement with machine learning techniques, including a Multi-Layer Perceptron, Q-learning, and fuzzy logic. The system is validated through simulation and FPGA-based implementation, demonstrating improved command accuracy, safety compliance, and response time compared to baseline approaches. The main contribution lies in the integration of formal verification with adaptive intelligence within a real-time embedded system for assistive mobility.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_107-A_Hybrid_Modeling_and_Control_Framework_for_Intelligent_Wheelchairs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evidence-Aware and Risk-Sensitive Retrieval-Augmented Generation Framework for Internal Auditing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703106</link>
        <id>10.14569/IJACSA.2026.01703106</id>
        <doi>10.14569/IJACSA.2026.01703106</doi>
        <lastModDate>2026-03-31T07:33:43.0630000+00:00</lastModDate>
        
        <creator>Tareq Fahad Aljabri</creator>
        
        <creator>Mariam Abdulaziz Alnajim</creator>
        
        <subject>LLM; RAG; Audit; digitalization; automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) can aid internal auditing, particularly in document search and analysis. However, most RAG-based audit tools prioritize quick document access and ease of use over deeper audit reasoning. They offer little support for core audit procedures such as maintaining clear evidence, assessing risk, and making informed decisions. As a result, they have yet to find a place in continuous internal auditing, which demands rigorous evidence and adherence to recognized auditing standards. This study introduces an Evidence-Aware and Risk-Sensitive Retrieval-Augmented Generation framework (ER2-RAG) to support internal auditing. The framework not only retrieves documents but also manages audit evidence and accounts for risk. It links audit conclusions to supporting documents with confidence levels, adapts information retrieval to audit risk and materiality, and constrains the generation process to standard audit reasoning practices. These design choices make AI assistance more transparent, reliable, and defensible in audit judgments. ER2-RAG was developed and evaluated using typical audit scenarios involving the analysis of exceptions, the evaluation of control effectiveness, and the monitoring of procedural adherence. The research follows a design science methodology. Compared with conventional RAG methods, ER2-RAG is efficient, provides broader evidence coverage, references sources more accurately, and produces clearer argumentation. The results indicate that risk sensitivity and evidence management must be taken into account when adopting AI systems for continuous internal audits. This research transforms RAG from an information-retrieval aid into a reasoning foundation for professional assurance, striving to enhance audit reliability and guide the future development of evidence-aware AI systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_106-An_Evidence_Aware_and_Risk_Sensitive_Retrieval_Augmented_Generation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PicLingo: A GenAI-Based System for Language-Disabled Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703105</link>
        <id>10.14569/IJACSA.2026.01703105</id>
        <doi>10.14569/IJACSA.2026.01703105</doi>
        <lastModDate>2026-03-31T07:33:43.0330000+00:00</lastModDate>
        
        <creator>Razan Alatawi</creator>
        
        <creator>Shahad Alamri</creator>
        
        <creator>Renad Almaghthawi</creator>
        
        <creator>Shada Alofi</creator>
        
        <creator>Ghada Alharbi</creator>
        
        <creator>Rehab Albeladi</creator>
        
        <subject>PicLingo; Generative AI; text-to-image generation; speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Children with language disorders often face challenges in understanding sentences and communicating effectively with others. While previous studies have utilized static digital games and automated feedback to support vocabulary and spelling, there remains a significant gap in leveraging generative models to provide dynamic, personalized visual reinforcement for verbal tasks. This study presents a new approach to support language development through PicLingo, a GenAI-powered system developed to assist both children and their mentors. A comparative experimental methodology is used to evaluate multiple generative models using MS COCO samples and standard metrics, including Inception Score (IS), Fr&#233;chet Inception Distance (FID), and human evaluation. PicLingo’s primary feature is a text-to-image generation (TTI) task that generates illustrative images from textual descriptions. Additionally, the system includes an interactive game that uses speech recognition technology to encourage active verbal participation. This approach aims to enhance language development and overall communication skills. The experimental results demonstrate that the proposed Stable Diffusion-based architecture significantly outperforms baseline models in generating high-quality, semantically accurate images, suggesting PicLingo as a promising, interactive tool for enhancing verbal communication and tracking linguistic progress in children with language disorders.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_105-PicLingo_A_GenAI_Based_System_for_Language_Disabled_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time-Aware Hierarchical Attention Recurrent Neural Networks for Multi-Criteria Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703104</link>
        <id>10.14569/IJACSA.2026.01703104</id>
        <doi>10.14569/IJACSA.2026.01703104</doi>
        <lastModDate>2026-03-31T07:33:42.9870000+00:00</lastModDate>
        
        <creator>Manogna Vankayalapati</creator>
        
        <creator>V Ramanjaneyulu Yannam</creator>
        
        <creator>Sarada Korrapati</creator>
        
        <creator>Murali Krishna Enduri</creator>
        
        <subject>Recommendation system; multi-criteria ratings; time-aware; hierarchical attention; recurrent neural networks; user preferences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Recommendation systems are an important component for various online platforms, especially in the e-commerce domain. Recommendation systems suggest items to users using information from their past interactions such as reviews, ratings, and purchase history. Traditional recommendation systems allow users to give only a single rating for an item. Recently, deep learning approaches have been used to improve recommendation accuracy in single rating systems, but these systems do not provide enough information about user preferences for an item. Domains such as gaming, movies, and tourism enable users to give ratings on multiple criteria for an item, which makes it easier to understand user preferences compared to single rating systems. In this study, we propose a Time-Aware Hierarchical Attention Recurrent Neural Network (TAH-RNN), a deep learning-based approach designed to utilize ratings from multiple criteria. Our proposed approach helps understand the association between multiple criteria ratings and overall ratings for each user. The model integrates temporal dynamics with multi-criteria ratings by applying a Time-Aware Importance-Based Sequence Formation mechanism, which assigns importance weights to each criterion based on interaction time and enables hierarchical attention to learn their relationships over sequential user behavior. Experiments using real-world datasets (TripAdvisor, BeerAdvocate, and Skytrax Airlines) indicate that the proposed approach performs well compared to single rating systems and multiple criteria approaches across various metrics.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_104-Time_Aware_Hierarchical_Attention_Recurrent_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Human Parsing with Multi-Scale Context for Edge Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703103</link>
        <id>10.14569/IJACSA.2026.01703103</id>
        <doi>10.14569/IJACSA.2026.01703103</doi>
        <lastModDate>2026-03-31T07:33:42.9570000+00:00</lastModDate>
        
        <creator>Abderrahim Ouza</creator>
        
        <creator>Mohamed El Ghmary</creator>
        
        <creator>Ali Choukri</creator>
        
        <subject>Human parsing; lightweight networks; multi-scale representation; edge computing; real-time segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>For human parsing in wild and cluttered environments, deep architectures are widely used because they yield strong segmentation performance, but at the price of large model size and computational complexity. These properties severely limit deployment on resource-constrained platforms, particularly for real-time edge intelligence. In this study, we propose a lightweight human parsing framework, named Fast DSPP+PGN+Attn, which targets the efficiency-accuracy trade-off. The proposed model consists of a MobileNetV2 backbone (i.e., the AirLab-Net), a Dilated Spatial Pyramid Pooling (DSPP) block to capture multi-scale contextual information, a pixel grouping decoder employing PGN for improved part-boundary consistency, and spatial and squeeze-and-excitation attention modules for feature refinement. Despite its relatively compact size of 2.14M parameters and 5.70 GFLOPs, the model achieves 40.67% mean IoU (mIoU) and 87.3% pixel accuracy on the CIHP benchmark, while running at 51.9 frames per second on a single GPU. These findings indicate that combining contextual aggregation methods with structured pixel grouping exploits complementary cues and can improve segmentation quality without sacrificing real-time performance. The proposed method is therefore applicable to embedded vision systems, surveillance, and mobile perception.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_103-Lightweight_Human_Parsing_with_Multi_Scale_Context_for_Edge_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Crowd Density Estimation Using Deep Learning Techniques: State-of-the-Art Methods and Future Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703102</link>
        <id>10.14569/IJACSA.2026.01703102</id>
        <doi>10.14569/IJACSA.2026.01703102</doi>
        <lastModDate>2026-03-31T07:33:42.9230000+00:00</lastModDate>
        
        <creator>Norah Aloufi</creator>
        
        <creator>Liyakathunisa Syed</creator>
        
        <subject>Crowd density estimation; computer vision; deep learning; PRISMA 2020; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Estimating crowd density is a cornerstone of modern urban management and public safety, particularly in the aftermath of catastrophic incidents, such as the 2015 Mina stampede. With the rapid advancement of artificial intelligence (AI) technologies, deep learning (DL) has emerged as a powerful tool for addressing these challenges. This systematic review provides a comprehensive evaluation of current crowd density estimation methodologies, analyzing model architectures, datasets, and research trends. The review was conducted in accordance with PRISMA 2020 guidelines, and the search encompassed five major electronic databases (IEEE Xplore, Scopus, Google Scholar, Web of Science, and ScienceDirect) for the period 2020 to 2025. The selection process relied on rigorous eligibility criteria, including English-language publications that offer methodological contributions or empirical assessments in the field of computer vision and machine learning (ML). Twenty final studies were included, 70% of which were published in scientific journals. The analysis revealed that 55% of the studies relied entirely on DL models, while 30% leaned towards hybrid modelling. The ShanghaiTech dataset remained the most frequently used benchmark, accounting for 50% of the studies, followed by UCF CC 50 and WorldExpo’10 datasets. Although some models achieved a high accuracy of 99.88%, they still faced challenges in highly congested scenes and visual obstructions. This review reveals a growing shift towards edge intelligence and lightweight models to reduce latency, with a pressing need for more diverse datasets to minimize bias. This study concludes that bridging the gap between simulation and reality requires integrating contextual information and behavioral analysis to enable more reliable, proactive, and real-time crowd management.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_102-A_Systematic_Review_on_Crowd_Density_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Supervised Machine Learning Models for Fake News Detection with Interpretability and Statistical Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703101</link>
        <id>10.14569/IJACSA.2026.01703101</id>
        <doi>10.14569/IJACSA.2026.01703101</doi>
        <lastModDate>2026-03-31T07:33:42.8930000+00:00</lastModDate>
        
        <creator>Bayan M. Alsharbi</creator>
        
        <subject>Fake news detection; supervised learning; Decision Tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid proliferation of fake news across digital platforms has intensified the need for reliable and computationally efficient automated detection systems. While deep learning models have demonstrated strong performance, their high computational cost and limited interpretability restrict practical deployment in real-time systems. This study proposes a structured comparative framework that evaluates seven supervised machine learning algorithms—Decision Tree, Passive Aggressive, Support Vector Machine (SVM), Random Forest, Logistic Regression, Perceptron, and Na&#239;ve Bayes—under identical preprocessing and feature engineering conditions using a balanced dataset of 44,989 news articles. Unlike prior works that emphasize accuracy alone, this research integrates statistical validation, computational efficiency analysis, and interpretability assessment using SHAP explanations. Experimental results show that the Decision Tree model achieved the highest accuracy of 99.58%, closely followed by Passive Aggressive (99.57%) and SVM (99.45%). Additionally, tree-based and linear classifiers demonstrated superior stability and lower computational overhead compared to more complex architectures. The findings indicate that interpretable and computationally efficient supervised models remain highly competitive for large-scale fake news detection, offering practical advantages for real-time deployment in digital media monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_101-Comparative_Study_of_Supervised_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Concession Curves of Negotiating Agents Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01703100</link>
        <id>10.14569/IJACSA.2026.01703100</id>
        <doi>10.14569/IJACSA.2026.01703100</doi>
        <lastModDate>2026-03-31T07:33:42.8770000+00:00</lastModDate>
        
        <creator>Khalid Mansour</creator>
        
        <subject>Automated negotiation; strategy classification; machine learning; feature engineering; strategic agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Accurate opponent modeling is critical for effective automated negotiation, enabling agents to adapt their strategies based on the type of opponent. This study investigates machine learning approaches for classifying negotiation agent strategies from offer sequences across three scenarios: time-dependent agents following predetermined concession functions, strategic agents adapting to opponent behavior with deadline-only termination, and strategic agents with realistic termination through mutual agreement or deadline expiration. We systematically evaluate four algorithms—Naive Bayes, Random Forest, Support Vector Machines, and Neural Networks— on a number of simulated negotiations, comparing classification performance with and without temporal feature augmentation. A key contribution of this work is the introduction of temporal feature augmentation, where quarterly concession patterns and variance metrics are used to capture adaptive negotiation behavior that raw offer sequences alone cannot reveal. The augmented features encode temporal adaptation characteristics that distinguish Boulware, Linear, Conceder, and strategic negotiation behaviors. Feature augmentation produced statistically significant improvements in 7 of 12 model–scenario combinations, with the most notable gains observed in strategic agent identification.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_100-Predicting_Concession_Curves_of_Negotiating_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RollupFL: An Auditable Federated Learning Framework for Byzantine Client Accountability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170399</link>
        <id>10.14569/IJACSA.2026.0170399</id>
        <doi>10.14569/IJACSA.2026.0170399</doi>
        <lastModDate>2026-03-31T07:33:42.8470000+00:00</lastModDate>
        
        <creator>Md Tahmid Ashraf Chowdhury</creator>
        
        <creator>Fasee Ullah</creator>
        
        <creator>Shanjida Islam Labonno</creator>
        
        <creator>Shahid Kamal</creator>
        
        <creator>Mohammad Ahsanul Islam</creator>
        
        <subject>Federated learning; Byzantine attacks; audit layer; accountability; attacker attribution; tamper detection; robust aggregation; FedAvg; blockchain audit; sign-flip attack; model-replacement attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Federated learning (FL) trains a shared model without sending raw data, but some clients can be Byzantine and send harmful updates. Robust aggregation methods like Median and Krum can reduce poisoning damage, but they do not clearly show which client attacked. In this study, we propose RollupFL, an audit layer for FL that improves accountability under Byzantine attacks. RollupFL keeps aggregation and auditing separate, so it can work with FedAvg, Median, or Krum without changing how aggregation is computed. We study two audit designs: simple logging, which is fast, but assumes a trusted server, and blockchain-based audit, which gives stronger integrity and attribution, but adds more latency. We evaluate MNIST training for 20 rounds with 10%–30% Byzantine clients under sign-flip and model-replacement attacks. Results show that auditing does not meaningfully change accuracy, but it improves accountability. At 30% Byzantine, blockchain audit achieves higher attribution (0.95) and tamper detection (0.92) than logging (0.65 and 0.58). Logging adds small per-round latency, while blockchain adds larger latency mainly due to ledger writing.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_99-RollupFL_An_Auditable_Federated_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ANN-Based Employee Performance Prediction: A Comparative Analysis of Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170398</link>
        <id>10.14569/IJACSA.2026.0170398</id>
        <doi>10.14569/IJACSA.2026.0170398</doi>
        <lastModDate>2026-03-31T07:33:42.8000000+00:00</lastModDate>
        
        <creator>Rahaf Mohammed Bajhzer</creator>
        
        <creator>Yousef Alsenani</creator>
        
        <creator>Sahar Jambi</creator>
        
        <creator>Tawfiq Hasanin</creator>
        
        <subject>Employee performance prediction; artificial neural networks; data preprocessing; model optimization; HR analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>With the increasing use of artificial intelligence in decision-making systems, predicting employee performance has attracted growing attention in human resource analytics. This study aims to systematically evaluate the impact of data preprocessing and model optimization techniques on artificial neural network (ANN)-based prediction of employee performance in HR analytics. Three publicly available HR datasets were used, and multiple configurations involving feature selection, feature extraction, principal component analysis (PCA), reduced architectures, and regularization were evaluated. The experimental results show that appropriate feature selection and regularization consistently improve predictive performance across datasets, whereas PCA-based dimensionality reduction resulted in lower accuracy in the evaluated datasets, possibly due to the loss of discriminative information. Additionally, simplified ANN architectures yielded modest, but consistent improvements in generalization performance across datasets, highlighting the importance of controlling model complexity. The top-performing configurations across the assessed datasets achieved accuracies ranging from 81% to 96%. These findings offer practical guidance on selecting efficient preprocessing and architectural techniques when applying ANN-based models in human resource analytics.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_98-ANN_Based_Employee_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Domain-Agnostic Knowledge Graph Construction for Systematic Hallucination Reduction and Knowledge Reusability in Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170397</link>
        <id>10.14569/IJACSA.2026.0170397</id>
        <doi>10.14569/IJACSA.2026.0170397</doi>
        <lastModDate>2026-03-31T07:33:42.7670000+00:00</lastModDate>
        
        <creator>Durvesh Narkhede</creator>
        
        <creator>Rama Gaikwad</creator>
        
        <creator>Saniya Jadhav</creator>
        
        <creator>Pratiksha Ovhal</creator>
        
        <creator>Nigam Roy</creator>
        
        <creator>Prasad Dhanade</creator>
        
        <subject>Large Language Models; knowledge graph construction; hallucination reduction; Retrieval-Augmented Generation; web-grounded reasoning; decentralized knowledge systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Large Language Models (LLMs) have rapidly advanced the capabilities of automated reasoning and text generation, yet they continue to hallucinate when responding to domain-specific or rapidly evolving queries due to limitations in their static, parametric knowledge. This challenge is especially significant in high-stakes domains where factual accuracy is critical. To address this gap, the present study introduces a domain-agnostic framework called the Web-Constructed Knowledge Graph (WCKG), designed to ground LLM outputs in verifiable, web-retrieved information. Unlike conventional Retrieval-Augmented Generation (RAG) pipelines, WCKG transforms ad-hoc retrieval into structured, reusable knowledge through automated, query-triggered web searches that extract entities and relations and synthesize them into lightweight, provenance-aware knowledge graphs maintained locally within user sessions. A global registry stores only abstracted metadata, ensuring decentralized knowledge management and privacy while enabling efficient indexing and discovery. Web-grounded reasoning is achieved by serializing relevant graph fragments directly into LLM prompts. Experimental evaluation demonstrates that this framework generates coherent knowledge graphs, supports iterative refinement through user interactions, and improves the reliability of model responses across diverse domains, achieving an average hallucination reduction of 3.3% over a RAG baseline. The findings imply that WCKG can convert transient LLM interactions into evolving knowledge resources, offering a practical foundation for long-term reasoning, model adaptation, and decentralized knowledge sharing in future AI systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_97-Domain_Agnostic_Knowledge_Graph_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Retrieval-Augmented Generation System for Automated Functional Safety Analysis of AUTOSAR Basic Software Module Dependencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170396</link>
        <id>10.14569/IJACSA.2026.0170396</id>
        <doi>10.14569/IJACSA.2026.0170396</doi>
        <lastModDate>2026-03-31T07:33:42.7370000+00:00</lastModDate>
        
        <creator>Mohand Hammad</creator>
        
        <creator>Ahmed Moro</creator>
        
        <creator>Mohamed Taher</creator>
        
        <subject>AUTOSAR; RAG; Retrieval-Augmented Generation; functional safety; FMEA; FTA; DFA; ISO 26262; LangChain; vector database; automotive software; safety analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study presents an advanced Retrieval-Augmented Generation (RAG) system designed to assist functional safety engineers in performing safety analysis of AUTOSAR Classic Platform Basic Software (BSW) module dependencies. The system extracts structured dependency information from 128 AUTOSAR Software Specification (SWS) documents in ARXML format and generates draft Failure Mode and Effects Analysis (FMEA), Fault Tree Analysis (FTA), and Dependent Failure Analysis (DFA) tables compliant with AIAG VDA, IEC 60812, IEC 61025, and ISO 26262 standards for human expert review and approval. Key innovations include: 1) LLM-driven table definition extraction that designs optimal analysis output formats based on merged AUTOSAR safety context, ISO 26262 lifecycle considerations, and standard methodologies; 2) content-based inter-module dependency validation that prevents hallucination of non-existent module interactions; 3) ASIL-aware analysis that prioritizes lower-integrity components corrupting higher-integrity components per ISO 26262 freedom from interference; 4) a modular architecture with dual interfaces (CLI tool and LangGraph-based conversational chatbot) where the chatbot reuses core RAG functions, enabling single-source maintenance. The architecture combines semantic chunking with metadata-based filtering for precise module retrieval, episodic and working memory for multi-turn sessions, and automated Excel report generation with source traceability. A comparative evaluation against an LLM-only baseline and a standard semantic-search RAG baseline demonstrates that metadata filtering with content validation eliminates hallucinated dependencies. On a curated stress-test dataset of 15 safety-critical modules representing the most complex BSW interdependencies (watchdog supervision, diagnostics, memory management, communication stacks), the system achieves perfect micro-averaged precision/recall across 95 documented dependencies. Preliminary expert validation by three functional safety engineers confirmed the practical utility of the generated analyses as draft starting points for formal safety assessments.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_96-A_Retrieval_Augmented_Generation_System_for_Automated_Functional_Safety_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overcoming Temporal Shuffling in Non-Profiled SCA: A Translation-Invariant Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170395</link>
        <id>10.14569/IJACSA.2026.0170395</id>
        <doi>10.14569/IJACSA.2026.0170395</doi>
        <lastModDate>2026-03-31T07:33:42.7070000+00:00</lastModDate>
        
        <creator>Ahmed Ismail</creator>
        
        <creator>Eid Emary</creator>
        
        <creator>Hala Abbas</creator>
        
        <subject>Side-Channel Analysis; Deep Learning; collision attack; shuffling countermeasure; Global Average Pooling; AES</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Side-Channel Analysis (SCA) utilizing deep learning has demonstrated significant potential in recovering secret keys from cryptographic implementations. However, the efficiency of these attacks is often severely compromised by hardware countermeasures such as temporal shuffling, which desynchronizes leakage traces. Existing non-profiled collision attacks successfully mitigate shuffling, but often rely on a “Grey-Box” threat model, requiring prior knowledge of the shuffle permutation to align traces before analysis. This study presents a Global Average Pooling Convolutional Neural Network (GAP-CNN) designed to exploit side-channel collisions in a strict Black-Box setting. By integrating a translation-invariant GAP layer, the proposed architecture forces the network to learn the presence of leakage signatures regardless of their temporal location, effectively neutralizing the shuffling countermeasure end-to-end without pre-processing. The methodology is evaluated on the DPA Contest v4.2 dataset, a highly protected AES-128 implementation. The empirical results demonstrate that the proposed Black-Box approach successfully recovers a majority of the target bytes, outperforming previous Grey-Box baselines. Furthermore, the study demonstrates strong cross-byte portability and cross-dataset robustness against masking countermeasures (ASCAD), confirming the existence of exploitable leakage clusters that persist despite advanced randomization.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_95-Overcoming_Temporal_Shuffling_in_Non_Profiled_SCA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trend-Based Encoding of Exogenous Time-Series for Interpretable Financial Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170394</link>
        <id>10.14569/IJACSA.2026.0170394</id>
        <doi>10.14569/IJACSA.2026.0170394</doi>
        <lastModDate>2026-03-31T07:33:42.6730000+00:00</lastModDate>
        
        <creator>Khudran M. Alzhrani</creator>
        
        <subject>Trend-based encoding; exogenous time-series; interpretable machine learning; financial prediction; forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Integrating heterogeneous exogenous data into financial prediction models is challenging due to scale mismatches and semantic ambiguity. We propose a trend-encoding framework that transforms raw exogenous time-series into directional binary representations, improving predictive robustness while preserving interpretability. Using Saudi stock market data with COVID-19 indicators, we evaluate predictive models under baseline and trend-enhanced configurations. Results show that trend encoding consistently enhances predictive stability over raw inputs. Interpretable models benefit disproportionately, achieving performance comparable to black-box methods. Sectoral analysis reveals heterogeneous sensitivities: Banking responds strongly to case and mortality trends, Energy to recovery indicators, while Food &amp; Beverages shows weaker alignment. These findings show that trend-based encoding of exogenous signals can improve cross-domain financial prediction, particularly for interpretable models.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_94-Trend_Based_Encoding_of_Exogenous_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ProGem: A Hybrid AI Framework for Task Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170393</link>
        <id>10.14569/IJACSA.2026.0170393</id>
        <doi>10.14569/IJACSA.2026.0170393</doi>
        <lastModDate>2026-03-31T07:33:42.6570000+00:00</lastModDate>
        
        <creator>Shahid Islam</creator>
        
        <creator>Shazia Arshad</creator>
        
        <creator>Natasha Nigar</creator>
        
        <creator>Jose Lukose</creator>
        
        <subject>Task effort estimation; software project management; time-series forecasting; real-time task insights</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Accurate effort estimation at the task level is essential for effective project planning, resource allocation, and meeting delivery timelines in software development. Traditional approaches have focused primarily on project-level estimation, leaving a critical gap in predicting the duration of individual tasks. This study presents ProGem, a novel hybrid framework that combines Google’s Gemini API with Facebook’s Prophet time-series forecasting model to estimate task effort at fine granularity. ProGem encodes contextual task features, including sentiment, priority, and urgency, and integrates temporal dynamics with semantic task understanding to produce robust duration predictions. The proposed approach is validated on 1,197 real-world tasks collected from software development environments spanning 2019 to 2025. Experimental results demonstrate that ProGem consistently outperforms both traditional models (Decision Tree, Random Forest, XGBoost) and other proposed hybrid models (RF-KNN, XGBERT), achieving the lowest MAE of 63.75, MSE of 9,987.54, RMSE of 100.45, and the highest coefficient of determination (R2 = 0.4750). On individual real-world tasks, ProGem produced estimates of 9.16, 3.00, 6.08, 4.10, and 2.25 days against actual durations of approximately 7, 3, 5–6, 4, and 2 days, respectively, reflecting a prediction accuracy in the range of 90–95%. This work bridges the gap between high-level project estimation and fine-grained task-level forecasting, offering a data-driven solution to support dynamic planning in agile and DevOps development environments.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_93-ProGem_A_Hybrid_AI_Framework_for_Task_Effort.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning and Optimization-Driven Intrusion Detection Systems for Internet of Things Security: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170392</link>
        <id>10.14569/IJACSA.2026.0170392</id>
        <doi>10.14569/IJACSA.2026.0170392</doi>
        <lastModDate>2026-03-31T07:33:42.6270000+00:00</lastModDate>
        
        <creator>Rosilawati Mohamad</creator>
        
        <creator>Muhammad Arif Mohamad</creator>
        
        <creator>Mohd Faizal Ab Razak</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Sri Winiarti</creator>
        
        <creator>Herman Yuliansyah</creator>
        
        <subject>Internet of Things (IoT); intrusion detection system (IDS); deep learning (DL); metaheuristic optimization; systematic literature review (SLR); IoT security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid expansion of Internet of Things (IoT) deployments has increased the exposure of interconnected devices to cyber threats, particularly in heterogeneous and resource-constrained environments. Although recent research increasingly emphasizes learning-based detection, classical intrusion detection system (IDS) paradigms remain widely deployed in practical IoT settings due to their interpretability, deterministic behavior, and low computational overhead. This study presents a systematic literature review focused exclusively on classical IDS for IoT environments, including signature-based, anomaly-based, specification-based, and hybrid classical approaches. Following PRISMA-aligned procedures, peer-reviewed studies published between 2021 and 2026 were identified, screened, and synthesized using qualitative comparative analysis. The review examines detection principles, deployment contexts, datasets, evaluation practices, and reported limitations across the classical paradigms. The findings indicate that classical IDS continues to function as a baseline defensive mechanism, particularly at gateway and edge levels. However, persistent challenges remain, including limited capability against zero-day attacks, high false-positive behavior in dynamic environments, scalability constraints, rule maintenance overhead, and restricted adaptability to evolving IoT behavior. This study contributes a consolidated taxonomy and evidence-based analysis of classical IDS deployment characteristics in IoT environments, providing a validated baseline for future intrusion detection research and evaluation.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_92-Deep_Learning_and_Optimization_Driven_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithmic Model Based on Optimization of the Production Rules for Phishing Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170391</link>
        <id>10.14569/IJACSA.2026.0170391</id>
        <doi>10.14569/IJACSA.2026.0170391</doi>
        <lastModDate>2026-03-31T07:33:42.5970000+00:00</lastModDate>
        
        <creator>Anvar Kabulov</creator>
        
        <creator>Erkin Urinbaev</creator>
        
        <creator>Inomjon Yarashov</creator>
        
        <creator>Alisher Otakhonov</creator>
        
        <subject>Petri nets; phishing; production rules; URL; functioning table</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Phishing attacks, which pose a serious cyber threat to audit, monitoring, control, and data acquisition systems in digitalized environments, aim to mislead participants in a complex system and to alter personal data through unauthorized access. This research uses functioning tables, production rules, and algorithmic and mathematical modeling as the foundation for formulating, analyzing, and synthesizing the discrete adaptive behavior of large systems. Phishing identification technologies based on production rules are applied by integrating access to digital resources into control and management operations and processes. A set of production rules is created to distinguish malicious from legitimate resources by their URLs: URL features are extracted from a dataset of trusted platforms, logical rules are generated from these features, and the authenticity of URLs is then verified against this rule set. The results are compared with other existing models and algorithms, and two different approaches to generating production rules are developed. The study also develops a logical model for building a knowledge base from URL features and demonstrates the representation of malicious attacks through logical implications, conjunctions, and disjunctions. Finally, optimized expressions based on monotone Boolean functions and their canonical (perfect) disjunctive normal form (CDNF) are tested on an independent dataset in order to select the most efficient rule system.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_91-An_Algorithmic_Model_Based_on_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noncommunicable Eye Diseases Trend Related to Artificial Intelligence: A Bibliometric and Visualization Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170390</link>
        <id>10.14569/IJACSA.2026.0170390</id>
        <doi>10.14569/IJACSA.2026.0170390</doi>
        <lastModDate>2026-03-31T07:33:42.5630000+00:00</lastModDate>
        
        <creator>Marizuana Mat Daud</creator>
        
        <creator>W Mimi Diyana W Zaki</creator>
        
        <creator>Laily Azyan Ramlan</creator>
        
        <creator>Fazlina Mohd Ali</creator>
        
        <creator>Jun Kit Chaw</creator>
        
        <subject>Artificial intelligence; noncommunicable eye disease; cataract; keratoconus; glaucoma; diabetic retinopathy; age-related macular degeneration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>In recent years, artificial intelligence (AI) has transformed numerous sectors, including healthcare, and ophthalmology is no exception. The field has seen remarkable progress in using AI to detect, diagnose, and manage noncommunicable eye diseases (NCEDs), such as cataract, keratoconus, glaucoma, diabetic retinopathy, and age-related macular degeneration. This study presents a comprehensive bibliometric analysis of 4,280 articles between 2004 and 2026, revealing significant trends in AI-based NCED research. The literature search focused on a highly reputable database: Scopus. The selection of this database ensured a thorough exploration of the field, given its broad coverage of both technical and medical literature. The search strategy employed a carefully curated set of keywords to capture relevant articles and reviews. The field has experienced robust growth, with an average annual increase of 19.41% in publications, peaking in 2023 with 516 articles. Deep learning, particularly Convolutional Neural Networks (CNNs), has emerged as the leading approach, surpassing traditional image processing techniques. Research in medical image analysis has primarily focused on age-related macular degeneration, glaucoma, and diabetic retinopathy, with an increasing emphasis on automated screening systems for early detection. Future trends may include a focus on explainable AI and attention mechanisms, integration with telemedicine, and development of more robust, generalizable models, highlighting its potential to revolutionize early diagnosis and management of eye diseases.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_90-Noncommunicable_Eye_Diseases_Trend_Related_to_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LoRA-Based Fine-Tuning of Local LLMs for Hallucination Detection in Indonesian RAG Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170389</link>
        <id>10.14569/IJACSA.2026.0170389</id>
        <doi>10.14569/IJACSA.2026.0170389</doi>
        <lastModDate>2026-03-31T07:33:42.5500000+00:00</lastModDate>
        
        <creator>I Ketut Resika Arthana</creator>
        
        <creator>Nyoman Gunantara</creator>
        
        <creator>Made Sudarma</creator>
        
        <creator>Made Sukarsa</creator>
        
        <subject>Hallucination detection; Retrieval-Augmented Generation; LoRA fine-tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Retrieval-Augmented Generation (RAG) improves the factual grounding of Large Language Models (LLMs) by incorporating external knowledge. However, RAG systems may still generate hallucinated responses, and this issue remains underexplored in Indonesian language settings, particularly where local deployment is preferred. This study proposes a hallucination detection approach for Indonesian RAG systems using Low-Rank Adaptation (LoRA) fine-tuning. To support this objective, the study constructs a dataset in the Human-Computer Interaction domain consisting of 908 context, question, and answer pairs. The dataset is classified into four categories: FACT-H, FAITH-H, LOG-H, and FAITHFUL. Three local LLMs, namely Gemma-7B-it, LLaMA-2-7B-chat, and Phi-3-medium-4k-instruct, were evaluated using 5-fold cross-validation. The results show that Gemma-7B-it achieved the best performance in the four-class setting, with a Macro F1 score of 0.846. In the binary classification setting, Gemma achieved an accuracy of 98.1%. Further analysis shows that Gemma was particularly effective in recognizing FAITHFUL, FAITH-H, and FACT-H, while LOG-H remained the most difficult class to distinguish consistently.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_89-LoRA_Based_Fine_Tuning_of_Local_LLMs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bibliometric Mapping and Systematic Review of Deep Learning Approaches in Film and Multimedia Recommendation Systems within New Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170388</link>
        <id>10.14569/IJACSA.2026.0170388</id>
        <doi>10.14569/IJACSA.2026.0170388</doi>
        <lastModDate>2026-03-31T07:33:42.5030000+00:00</lastModDate>
        
        <creator>Linlin Hou</creator>
        
        <subject>Deep learning; multimedia recommendation systems; film recommendation; new media platforms; multimodal learning; graph-based recommender systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid growth of film, video, and multimedia content on new media platforms has intensified information overload, increasing the importance of effective recommender systems. Traditional recommendation approaches face limitations in modeling complex content semantics and dynamic user preferences. Deep learning techniques have been widely adopted to enhance film and multimedia recommendation performance. This study presents a bibliometric mapping and systematic literature review of deep learning film and multimedia recommendation systems in new media. Scopus was used as the primary data source, yielding 679 peer-reviewed studies following a structured screening and inclusion process. The research methodology, search strategy, and selection criteria are explicitly documented. Bibliometric techniques, including citation analysis, keyword co-occurrence, and thematic clustering, are applied to identify influential publications, dominant research streams, and emerging trends. The reviewed literature is synthesized into major thematic areas, including multimodal representation learning, graph-based recommendation, multimedia feature extraction, personalization and cold-start mitigation, fairness and bias, emotion-aware recommendation, and explainability. The findings reveal a strong dominance of multimodal and graph-based deep learning models, particularly those integrating visual, audio, textual, and interaction data. However, many existing approaches rely on shallow feature fusion and demonstrate limited capability in capturing fine-grained semantic relationships, user attraction mechanisms, and contextual meaning. Challenges related to cold-start, sparse feedback, fairness, transparency, and user experience remain insufficiently addressed. This study identifies critical research gaps and outlines future research directions, emphasizing the need for semantically rich, explainable, fair, and human-centered multimedia recommender systems capable of supporting the evolving complexity of new media ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_88-Bibliometric_Mapping_and_Systematic_Review_of_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Socio-Technical Analysis of Enterprise Architecture Misalignments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170387</link>
        <id>10.14569/IJACSA.2026.0170387</id>
        <doi>10.14569/IJACSA.2026.0170387</doi>
        <lastModDate>2026-03-31T07:33:42.4700000+00:00</lastModDate>
        
        <creator>Ayed Alwadain</creator>
        
        <subject>Enterprise Architecture; EA; EA issues; EA misalignments; socio-technical systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Modern organizations must adapt to competitive environments by analyzing their current state and planning for a future state through Enterprise Architecture (EA). EA is a management discipline for aligning business and IT strategies. However, many organizations struggle to implement and use EA effectively due to misalignments between EA parts, such as organizational support, documentation, and governance, leading to inefficiencies. This study therefore employs the Punctuated Socio-Technical Information System Change model to examine EA misalignments through four interrelated components: structure, task, actor, and technology, offering a comprehensive analytical lens. The model is used to examine how EA misalignments emerge from disruptions or inconsistencies among EA components that affect EA coherence and efficiency. Given the purpose and nature of this research, and because EA misalignment is a contemporary issue that must be explored in its context, a case study is a suitable research method. The findings reveal EA misalignments grouped into four categories: organizational, governance, capabilities, and management, highlighting how disruptions among these components affect EA coherence and efficiency. The implications of this research are twofold: first, EA components must be aligned for optimal efficiency; second, any misalignment between these components results in EA operational inadequacies and practice failures.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_87-A_Socio_Technical_Analysis_of_Enterprise_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Application in Healthcare: A Case Study Using Ensemble Methods for Hospital Length of Stay Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170386</link>
        <id>10.14569/IJACSA.2026.0170386</id>
        <doi>10.14569/IJACSA.2026.0170386</doi>
        <lastModDate>2026-03-31T07:33:42.4570000+00:00</lastModDate>
        
        <creator>Hakima Reddad</creator>
        
        <creator>Maria Zemzami</creator>
        
        <creator>Norelislam El Hami</creator>
        
        <creator>Nabil Hmina</creator>
        
        <creator>Farouk Yalaoui</creator>
        
        <subject>Machine learning; XGBoost; healthcare operations; hospital resource management; ensemble methods; predictive analytics; SHAP analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Artificial intelligence is driving digital transformation across multiple sectors, including healthcare, pharmaceuticals, industrial production, and the automotive industry. In healthcare specifically, AI-powered predictive analytics offer significant potential for optimizing operational efficiency and resource allocation. To demonstrate this potential, we present a case study focused on hospital length of stay (LOS) prediction using 2,125,280 admission records from the New York SPARCS database. We implemented and compared four machine learning algorithms: Linear Regression, Random Forest, Gradient Boosting, and XGBoost. Following hyperparameter optimization, the XGBoost model achieved superior performance with R&#178;=0.8686, RMSE=3.24 days, and MAE=1.42 days, substantially outperforming Linear Regression (R&#178;=0.5339, RMSE=6.10 days, MAE=2.86 days). Prediction accuracy reached 63.34% within &#177;1 day and 89.44% within &#177;3 days of actual LOS. SHAP analysis identified Total Costs, Total Charges, Hospital Service Area, APR Medical Surgical Description, and APR DRG Code as the most impactful predictors. Performance varied across LOS categories, with MAE ranging from 0.66 days for short stays (1-3 days) to 11.81 days for extended hospitalizations (&gt;30 days). These results demonstrate that ensemble machine learning methods, particularly XGBoost, provide clinically meaningful accuracy for healthcare operational planning, though challenges remain for extended stays and complex cases requiring specialized modeling approaches.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_86-Machine_Learning_Application_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Vector Framework for Injection Attack Detection Using NLP Lexical–Semantic Fusion with Reinforcement Learning DQN–Based Calibration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170385</link>
        <id>10.14569/IJACSA.2026.0170385</id>
        <doi>10.14569/IJACSA.2026.0170385</doi>
        <lastModDate>2026-03-31T07:33:42.4070000+00:00</lastModDate>
        
        <creator>Carlo Jude P. Abuda</creator>
        
        <creator>Cristina E. Dumdumaya</creator>
        
        <subject>Deep Q-network reinforcement learning; injection attack detection; machine learning for cybersecurity; multi-vector attack detection; Natural Language Processing; payload analysis; web application security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Injection attacks persist as dominant threats in modern web systems due to obfuscation, polymorphism, and multi-vector exploitation across SQLi, XSS, LDAP Injection, and Command Injection. Existing defenses often rely on static signatures or single-vector models, which limit generalization under adversarial payload mutation. This study addressed that limitation by designing and evaluating a unified multi-vector detection framework that integrated Natural Language Processing (NLP) and Deep Q-Network (DQN) Reinforcement Learning (RL) within a structured Design–Development–Research methodology. The study consolidated heterogeneous open-source datasets comprising 346,954 benign and 653,046 malicious XSS samples, 107,328 benign and 136,746 malicious SQLi samples, 1,591 benign and 515 malicious Command Injection samples, and 1,100 benign and 900 malicious LDAP Injection samples. The pipeline operationalized canonicalized payloads as inputs, hybrid lexical–semantic feature extraction and supervised classification as processes, and probabilistic attack decisions with calibrated thresholds as outputs. The NLP pipeline fused TF-IDF character n-grams with transformer embeddings to preserve structural and contextual signatures. Logistic Regression and One-vs-Rest Linear SVM achieved strong discrimination under group-aware splits, while the DQN agent optimized decision thresholds using reward-based calibration without modifying classifier parameters. Results demonstrated stable ROC and Precision–Recall performance, coherent embedding separation, and convergence of reinforcement learning rewards and loss. The deployed system was evaluated using ISO/IEC 25010 functional suitability criteria, including functional completeness, correctness, and appropriateness, to verify that the detection pipeline executed all required operations and produced reliable decision outputs and explainable, confidence-supported decisions. The framework strengthened secure digital infrastructure, contributing to resilient innovation ecosystems aligned with Sustainable Development Goals 9 and 16.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_85-A_Multi_Vector_Framework_for_Injection_Attack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Model to Predict Personality Traits of Social Media Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170384</link>
        <id>10.14569/IJACSA.2026.0170384</id>
        <doi>10.14569/IJACSA.2026.0170384</doi>
        <lastModDate>2026-03-31T07:33:42.3930000+00:00</lastModDate>
        
        <creator>Faiza Abid</creator>
        
        <creator>Mazni Binti Omar</creator>
        
        <creator>Mohamad Sabri Bin Sinal</creator>
        
        <subject>Personality trait prediction; deep learning; gated recurrent units; human psychology; social media analytics; digital behavior analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid expansion of social media platforms has created enormous amounts of user-generated content and behavioral data, providing the computational means to study human personality and psychology. This study develops a temporal deep learning model based on Gated Recurrent Units (GRUs) to predict personality traits from behavioral and content-based features obtained from Facebook. The research follows the Big Five Personality Traits paradigm and models temporal relationships in user activity patterns, including posting frequency, linguistic behavior, and social interaction relationships, to identify latent psychological aspects. A GRU-based framework was created to model sequential dependencies and contextual relationships across user activity timelines. To assess model performance and reliability, two comparison baselines, Long Short-Term Memory (LSTM) and Artificial Neural Network (ANN), were run under the same experimental conditions. Model evaluation used regression (Mean Absolute Error, MAE; Coefficient of Determination, R2) and classification (Accuracy, Precision, Recall, F1-score, and AUC-ROC) metrics, validated through a 10-fold cross-validation process to ensure stability and generalizability. The experimental findings indicated that the proposed GRU model consistently outperformed the baseline models on all evaluation metrics. It achieved the lowest MAE (0.00825) and the highest R2 (0.9917), demonstrating outstanding predictive reliability. In classification performance, the GRU attained high accuracy (96.8%), F1-score (0.96), and AUC-ROC (0.98), surpassing LSTM (F1 = 0.95) and ANN (F1 = 0.84). Trait-level analysis showed high predictive accuracy across all personality dimensions, with Agreeableness (R2 = 0.9942, F1 = 0.97) predicted most accurately and Extraversion (R2 = 0.9862) showing high predictive consistency. Cross-validation results further confirmed the robustness and external validity of the GRU framework.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_84-Deep_Learning_Based_Model_to_Predict_Personality_Traits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Leakage-Aware and Reproducible Evaluation Framework for Predictive Maintenance Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170383</link>
        <id>10.14569/IJACSA.2026.0170383</id>
        <doi>10.14569/IJACSA.2026.0170383</doi>
        <lastModDate>2026-03-31T07:33:42.3600000+00:00</lastModDate>
        
        <creator>Abdulrahman M. Qahtani</creator>
        
        <subject>Predictive maintenance; evaluation methodology; information leakage; reproducible machine learning; cross-validation; class imbalance; performance metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Predictive maintenance classification is widely used to support industrial maintenance planning; however, reported model performance is often influenced by evaluation practices that allow unintended information leakage between training and testing data, resulting in optimistic and difficult-to-reproduce estimates. This study examines predictive maintenance classification from the perspective of evaluation design, with a specific focus on quantifying the impact of leakage on performance assessment. A leakage-aware and fully reproducible evaluation protocol is implemented on the AI4I 2020 dataset, which exhibits severe class imbalance representative of practical industrial conditions. A comparative analysis between leakage-prone and leakage-aware evaluation settings shows that leakage-prone configurations can inflate AUC estimates by up to 8–9 percentage points, demonstrating the substantial influence of evaluation design on reported performance. Logistic Regression, Random Forest, and Gradient Boosting models are evaluated using stratified five-fold cross-validation with strictly fold-wise isolated preprocessing. While tree-based models achieved strong discriminative performance (mean AUC = 0.966 and 0.971), recall remained substantially lower than specificity, highlighting the persistent challenge of minority-class detection. The findings demonstrate that evaluation configuration, rather than model architecture alone, can significantly influence performance interpretation and lead to misleading conclusions when leakage is not controlled. This work provides a transparent and reproducible framework for reliable empirical evaluation in predictive maintenance research.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_83-A_Leakage_Aware_and_Reproducible_Evaluation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Grey Wolf Optimization Dimension Learning for Energy-Efficient Task Scheduling in Edge Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170382</link>
        <id>10.14569/IJACSA.2026.0170382</id>
        <doi>10.14569/IJACSA.2026.0170382</doi>
        <lastModDate>2026-03-31T07:33:42.3470000+00:00</lastModDate>
        
        <creator>Jafar Aminu</creator>
        
        <creator>Rohaya Latip</creator>
        
        <creator>Zurina Mohd Hanapi</creator>
        
        <creator>Shafinah Kamarudin</creator>
        
        <creator>Mustapha Abubakar Giro</creator>
        
        <subject>Edge computing; energy consumption; execution time; task Scheduling; grey wolf optimization dimension learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The emergence of edge computing has facilitated the development of numerous applications with diverse characteristics and stringent quality of service (QoS) requirements; these applications demand significant computational power and operate under strict time-sensitive constraints. While cloud computing offers seemingly unlimited computational resources, it often fails to meet the real-time demands of certain applications because of the latency introduced by the distance between edge devices and cloud data centers. Edge computing provides computational services closer to edge devices, better fulfilling these time-sensitive demands. Task scheduling, which aims to distribute tasks among heterogeneous virtual machines optimally with respect to overall system performance metrics such as execution time and energy consumption, is one of the key challenges of this heterogeneous computing environment. Task scheduling is an NP-complete problem; therefore, metaheuristic algorithms are usually applied to obtain near-optimal solutions. The study presents an enhanced grey wolf optimization hybridized with a dimension learning-based strategy, EGWODLB, for optimizing QoS objectives focusing on execution time and energy consumption. The experimental results show that EGWODLB outperforms the benchmark algorithms by achieving significant improvements in execution time, energy consumption, and VM utilization.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_82-Enhanced_Grey_Wolf_Optimization_Dimension_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Computational Framework for Scalable Learning in Complex Data Environments Using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170381</link>
        <id>10.14569/IJACSA.2026.0170381</id>
        <doi>10.14569/IJACSA.2026.0170381</doi>
        <lastModDate>2026-03-31T07:33:42.3130000+00:00</lastModDate>
        
        <creator>Priyanto </creator>
        
        <creator>Heri Nurdiyanto</creator>
        
        <subject>Scalable learning; deep neural networks; computational framework; complex data environments; efficient training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study introduces an efficient computational framework designed to support scalable learning in complex data environments using deep neural networks. In many real-world settings, data are not only large in volume but also diverse in structure, noisy in quality, and constantly evolving. These conditions often make conventional deep learning pipelines difficult to scale and expensive to maintain, especially when computational resources are limited or when rapid model updates are required. To address these challenges, we propose a framework that integrates adaptive data preprocessing, modular neural network architectures, and resource-aware training strategies into a unified learning pipeline. The framework is built to balance learning performance with computational efficiency, allowing models to be trained and updated without excessive overhead. Experiments were conducted on multiple heterogeneous datasets representing different levels of data complexity and scale. The results show that the proposed approach consistently improves training stability and convergence speed while maintaining competitive predictive performance compared to standard deep learning setups. In addition, the framework demonstrates better adaptability when handling data distribution shifts, which are common in dynamic environments. These findings suggest that scalable learning does not necessarily require increasingly complex model designs, but rather thoughtful integration of computational strategies that align model behavior with data characteristics and system constraints. The proposed framework offers a practical pathway for deploying deep learning solutions in large-scale, real-world applications where efficiency, robustness, and scalability are equally important.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_81-An_Efficient_Computational_Framework_for_Scalable_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Smart Contract Blockchain Platform for Secure and Efficient SME Transaction Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170380</link>
        <id>10.14569/IJACSA.2026.0170380</id>
        <doi>10.14569/IJACSA.2026.0170380</doi>
        <lastModDate>2026-03-31T07:33:42.2830000+00:00</lastModDate>
        
        <creator>Sabam Parjuangan</creator>
        
        <creator>Suhardi</creator>
        
        <creator>I Gusti Bagus Baskara Nugraha</creator>
        
        <subject>Smart contract; blockchain; SME digitalization; performance evaluation; transaction systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Small and medium enterprises (SMEs) require secure, efficient, and low-cost digital transaction systems. However, many blockchain-based platforms are designed for large-scale applications and impose significant computational overhead, making them unsuitable for resource-constrained SMEs. This study proposes a lightweight smart contract blockchain platform tailored for SME-scale service environments. The system implements a modular smart contract architecture integrated with a lightweight blockchain and automates key transactional processes, including balance top-ups, service ordering, order confirmation, and payment execution, while ensuring data integrity through a simplified Proof-of-Work mechanism. System performance is evaluated using a Design of Experiment (DOE) framework with a full factorial design and analyzed through Analysis of Variance (ANOVA). The results show that execution time remains below 5 seconds under workloads of up to 20 concurrent transactions, with CPU utilization below 55%. ANOVA results indicate that transaction concurrency and smart contract complexity significantly affect performance, while block size has a limited impact. Security evaluation confirms resistance to unauthorized access, double-spending, and reentrancy attacks.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_80-A_Lightweight_Smart_Contract_Blockchain_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Analysis for Extracting Moroccan Health Provinces Typology According to Breast and Cervical Cancer Early Screening</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170379</link>
        <id>10.14569/IJACSA.2026.0170379</id>
        <doi>10.14569/IJACSA.2026.0170379</doi>
        <lastModDate>2026-03-31T07:33:42.2530000+00:00</lastModDate>
        
        <creator>Meryem Chakkouch</creator>
        
        <creator>Merouane Ertel</creator>
        
        <creator>Aziz Mengad</creator>
        
        <creator>Said Amali</creator>
        
        <creator>Majda Frindy</creator>
        
        <subject>Clustering; PCA; ICA; KPCA; t-SNE; LLE; K-Means; ACH; GMM; breast and cervical cancers early screening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Cancer remains a major global concern, and its screening is a complex public health intervention. In Morocco, breast and cervical cancers are the most frequent malignancies among women, accounting for about half of all diagnosed cases. However, screening participation and coverage still vary across provinces. This study proposes a provincial typology of early screening performance using collected indicators for breast and cervical cancer. Before clustering, we applied several dimensionality reduction (DR) methods to improve cluster separability. We adopt a comparative framework that evaluates combinations of DR techniques (PCA, ICA, kernel PCA, t-SNE, and LLE) and clustering algorithms (ACH, K-Means, and GMM) to identify the optimal model with the help of internal validation measures. Kernel PCA combined with K-Means yields the best model, producing the most coherent province clustering among all tested combinations of DR techniques and clustering algorithms. It demonstrates the best overall separation and compactness according to the evaluation metrics. Three clusters were obtained, describing a gradient of early screening system performance: the first group of provinces shows higher screening coverage and stronger diagnostic and referral capacity, the second group demonstrates intermediate performance and differentiated service delivery, and the third group, with low coverage and restricted access, reflects geographic remoteness and service constraints. These results emphasize marked spatial disparity in preventive service performance. They demonstrate how unsupervised learning can support territorial health analysis. The resultant typology can inform targeted action: maintaining and sustaining quality in high-performing provinces, strengthening operations in intermediate-performing provinces, and giving priority to catch-up interventions in low-performing areas.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_79-Clustering_Analysis_for_Extracting_Moroccan_Health_Provinces_Typology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An FMA-Based Action Research Framework for Blockchain-Driven Scholarship Management: A Diagnostic Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170378</link>
        <id>10.14569/IJACSA.2026.0170378</id>
        <doi>10.14569/IJACSA.2026.0170378</doi>
        <lastModDate>2026-03-31T07:33:42.2370000+00:00</lastModDate>
        
        <creator>Chetna Achar</creator>
        
        <creator>Bharati Wukkadada</creator>
        
        <creator>Harshali Patil</creator>
        
        <subject>Blockchain; scholarship; shikshan shulk scholarship scheme; e-governance; smart contracts; action research; FMA (Framework, Methodology, Applications)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Scholarship schemes play a vital role in ensuring equity in access to higher education. This is particularly the case in developing countries, where economic hardship may become an impediment to academic advancement. In the Indian state of Maharashtra, the Shikshan Shulk Scholarship scheme was designed with the goal of reducing financial obstacles for eligible students. However, the scheme's operation frequently suffers from delays, manual verification issues, and a lack of transparency. Existing scholarship portals are centralized, creating single points of failure; they lack immutable audit trails and depend heavily on manual intervention. This leads to inefficiency, reduced stakeholder trust, and compromised outcomes of financial assistance. To resolve these challenges, this study proposes an FMA (Framework of Ideas, Problem Solving Methodology, Areas of Application) based Action Research framework for automating Shikshan Shulk Scholarship management using blockchain technology. Through an action research approach, the study establishes how blockchain can improve accountability, reduce delays, and enhance operational efficiency in scholarship disbursement. This strategy contributes to streamlining scholarship management as well as to the broader agenda of blockchain-based e-governance.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_78-An_FMA_Based_Action_Research_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deterministic ANN–CA Computational Framework for Spatial Simulation Using Socioeconomic Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170377</link>
        <id>10.14569/IJACSA.2026.0170377</id>
        <doi>10.14569/IJACSA.2026.0170377</doi>
        <lastModDate>2026-03-31T07:33:42.2070000+00:00</lastModDate>
        
        <creator>&#193;lvaro Peraza Garz&#243;n</creator>
        
        <creator>Ren&#233; Rodr&#237;guez Zamora</creator>
        
        <creator>M&#243;nica Avelina Guti&#233;rrez Haros</creator>
        
        <creator>Iliana Amabely Silva Hern&#225;ndez</creator>
        
        <creator>Juan Francisco Peraza Garz&#243;n</creator>
        
        <subject>Artificial neural networks; cellular automata; spatial simulation; deterministic modeling; hybrid computational framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Hybrid approaches combining Cellular Automata (CA) and Artificial Neural Networks (ANN) have been widely applied to spatial simulation; however, most implementations rely on stochastic components that limit reproducibility and interpretability. This study proposes a deterministic ANN–CA computational framework in which the stochastic perturbation term of a constrained CA model is replaced by ANN-derived classification values based on socioeconomic variables. The framework integrates data preprocessing, ANN training, transition coefficient generation, and CA-based simulation into a unified workflow. A multilayer perceptron is trained using spatialized socioeconomic indicators (age, education, sex, and income) to generate deterministic transition potentials at the pixel level. Experimental evaluation using multitemporal land-use data shows that the proposed ANN–CA model achieves a moderate improvement in global spatial association (Cramer’s V: 0.5622 → 0.6016), while pixel-level agreement (Kappa: 0.6589 → 0.6595) remains nearly unchanged. These results indicate that the proposed approach primarily enhances structural coherence and spatial organization—reducing fragmented growth and improving corridor-oriented expansion—rather than significantly increasing pixel-wise predictive accuracy. By replacing stochastic behavior with data-driven deterministic rules, the proposed framework improves reproducibility and provides a more interpretable linkage between urban growth patterns and socioeconomic drivers. This work contributes a transparent hybrid modeling approach suitable for spatial simulation and planning-oriented applications.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_77-A_Deterministic_ANN_CA_Computational_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thread-Sensitive Shared Instruction Cache Analysis for Precise WCET Estimation of Multithreaded Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170376</link>
        <id>10.14569/IJACSA.2026.0170376</id>
        <doi>10.14569/IJACSA.2026.0170376</doi>
        <lastModDate>2026-03-31T07:33:42.1570000+00:00</lastModDate>
        
        <creator>Naveeta Rani</creator>
        
        <creator>P Padma Priya Dharishini</creator>
        
        <creator>PVR Murthy</creator>
        
        <subject>Synchronization; concurrency; multicore systems; multithreaded program; shared instruction cache analysis; worst-case execution time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Real-time systems require strict adherence to task deadlines, making Worst-Case Execution Time (WCET) analysis essential. WCET estimation typically involves static analysis of the instructions in a program, using models of hardware and architectural units, such as shared instruction caches, that are as precise as possible. Shared instruction caches pose a challenge because cache behaviour depends on access history and inter-core interference in multicore systems. Existing approaches do not fully exploit thread lifecycle and synchronization semantics when modeling shared instruction cache behavior. To address this, a highly precise model, termed Threaded Program Worst-Case Interference Placement (TP-WCIP), is proposed for the static analysis of multithreaded programs to estimate WCET. TP-WCIP explicitly incorporates these semantics to eliminate infeasible interference scenarios; by confining interference placement to feasible concurrent execution regions, it achieves more precise WCET estimation. Worst-case latency due to shared instruction cache accesses in the presence of inter-core interference is estimated for each thread in a multithreaded program, with a focus on start, join, and synchronization (wait and notify) primitives. TP-WCIP exploits concurrency and happens-before relationships induced by these primitives to accurately characterize inter-core interferences. The proposed model is evaluated against Cache Block Conflict Number (CCN), Worst-Case Interference Placement (WCIP), and Interference Partitioning (IP), and is validated using benchmark programs. It is established both theoretically and experimentally that TP-WCIP leads to more precise worst-case latency estimates. Result analysis of the Papabench and extended M&#228;lardalen benchmark programs shows that the TP-WCIP model reduces interferences by up to 27% over IP, 53% over WCIP, and 75% over CCN, while preserving up to 16% more shared instruction cache hits than IP, 48% more than WCIP, and 84% more than CCN, thereby delivering more precise static WCET estimates for multithreaded programs on multicore architectures.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_76-Thread_Sensitive_Shared_Instruction_Cache_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter-Driven Evaluation of Zero Trust Security in Blockchain Networks Under Dynamic Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170375</link>
        <id>10.14569/IJACSA.2026.0170375</id>
        <doi>10.14569/IJACSA.2026.0170375</doi>
        <lastModDate>2026-03-31T07:33:42.1270000+00:00</lastModDate>
        
        <creator>Samuthira Pandi V</creator>
        
        <creator>T. Vijayanandh</creator>
        
        <creator>A. Jeyamurugan</creator>
        
        <creator>M. D. Boomija</creator>
        
        <creator>V. Parimala</creator>
        
        <creator>S. Preena Jacinth Shalom</creator>
        
        <creator>Lavanya. M</creator>
        
        <creator>Veena. K</creator>
        
        <subject>Zero trust; blockchain security; trust management; cyber-attack simulation; access control; financial networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>In this study, we investigate a parameterized method to evaluate the Zero Trust (ZT) security model integrated with blockchain networks under dynamic cyberattack conditions. Existing static trust-based security models fail to adapt to evolving adversarial behavior and dynamic attack conditions. To address these deficiencies, a discrete-time simulation model is developed to capture dynamic trust evolution, probabilistic attack patterns, and threshold-based access control within a financial network. The proposed model considers significant parameters such as attack probability, trust decay rate, and access, isolation, and quarantine thresholds to evaluate their impact on network security performance. The results reveal a significant correlation between trust, mitigation, adversarial intensity, and policy parameter tuning. For instance, a strict threshold policy improves attack mitigation but compromises network participation, while a lenient policy improves participation but compromises security. Compared to conventional static approaches, the proposed framework offers a more adaptive, scalable, and robust means of addressing dynamic threats. The results show that optimal parameter tuning is fundamental to balancing security enforcement and participation in blockchain-based zero-trust networks.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_75-Parameter_Driven_Evaluation_of_Zero_Trust_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Machine Learning Algorithm for Pipeline Leak Detection and Localisation in Water Distribution Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170374</link>
        <id>10.14569/IJACSA.2026.0170374</id>
        <doi>10.14569/IJACSA.2026.0170374</doi>
        <lastModDate>2026-03-31T07:33:42.1100000+00:00</lastModDate>
        
        <creator>Giresse M. Komba</creator>
        
        <creator>Topside E. Mathonsi</creator>
        
        <creator>Pius A. Owolawi</creator>
        
        <subject>SVM-ANN-GT; leak detection and localisation; EPANET; WDNs; ML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Water Distribution Networks (WDNs) frequently experience significant water losses due to pipeline leakages. These losses not only create economic challenges for water utilities but also intensify global concerns regarding water scarcity. This study aims to enhance the accuracy and reliability of leak detection and localisation within WDN infrastructures. Traditional leak detection techniques often exhibit limitations such as high operational costs, inefficient detection processes, and susceptibility to false alarms, particularly when sensors are deployed randomly across the network. Furthermore, detecting concealed or low-intensity leaks remains a difficult task. To address these challenges, this study introduces a hybrid supervised machine learning framework that combines Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Graph Theory (GT). The integration of these techniques enables the proposed model to analyse multiple parameters influencing leak behaviour and improve the reliability of detection outcomes. The hybrid model, referred to as the SVM-ANN-GT algorithm, is evaluated using the EPANET hydraulic simulation environment and compared with conventional machine learning approaches. Experimental results indicate that the proposed hybrid model significantly improves leak detection performance. The model achieves an average detection accuracy of approximately 96%, outperforming standalone SVM and ANN models, which achieved accuracies of 85% and 80%, respectively. The improved performance is primarily attributed to the integration of graph-theoretic optimisation for sensor placement, which enhances monitoring coverage and reduces redundancy within the network.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_74-A_Hybrid_Machine_Learning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Implementing ERP Integrating Client-Consultant Agency Management within Moroccan SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170373</link>
        <id>10.14569/IJACSA.2026.0170373</id>
        <doi>10.14569/IJACSA.2026.0170373</doi>
        <lastModDate>2026-03-31T07:33:42.0630000+00:00</lastModDate>
        
        <creator>Yassine Zouhair</creator>
        
        <creator>Younous El Mrini</creator>
        
        <creator>Mustapha Belaissaoui</creator>
        
        <subject>ERP; ERP implementation; client-consultant; Moroccan SMEs; IS success; system benefits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Nowadays, Enterprise Resource Planning (ERP) is a preferred solution for Small and Medium-sized Enterprises (SMEs) wishing to modernize and integrate their Information Systems (IS). However, implementing an ERP remains a complex process due to the organizational constraints specific to SMEs and the difficulties associated with the implementation process. In the context of Moroccan SMEs, the lack of a structured implementation framework and conflicts between the client and the consultant are major factors affecting the success of ERP projects. Furthermore, most of the theoretical frameworks presented in the literature are structured around similar phases, do not take client-consultant agency management into account, and were developed in contexts different from that of Morocco. This research aims to develop an ERP implementation framework that integrates client-consultant agency management, specifically adapted to Moroccan SMEs. To achieve this objective, a mixed-methods approach was adopted. We first used a quantitative research method based on the PLS-SEM statistical technique, with the aid of SmartPLS software, to examine how client-consultant agency management affects the success of ERP implementation within Moroccan SMEs. Next, we used the action research method to develop a framework for ERP implementation that integrates client-consultant agency management within Moroccan SMEs. The proposed framework is based on five phases, each defining the objectives, inputs, processes, outputs, critical success factors (CSF), and associated risks. The integration of client-consultant agency management makes it possible to anticipate and manage organizational, technical, and human conflicts, particularly through the contract and conflict resolution strategies. This study contributes to both academic research and professional practice by offering consultants and Moroccan SMEs a structured framework aimed at improving the success of ERP projects.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_73-Framework_for_Implementing_ERP_Integrating_Client_Consultant_Agency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>D-LexeCan: A Dynamic Lexicon-Based Framework for Sentiment Analysis in Tarifit, a Low-Resource Multiscript Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170372</link>
        <id>10.14569/IJACSA.2026.0170372</id>
        <doi>10.14569/IJACSA.2026.0170372</doi>
        <lastModDate>2026-03-31T07:33:42.0330000+00:00</lastModDate>
        
        <creator>Amar Amakssoum</creator>
        
        <creator>Fadwa Bouhafer</creator>
        
        <creator>Anass El Haddadi</creator>
        
        <creator>Abdelkhalak Bahri</creator>
        
        <subject>Sentiment analysis; low-resource languages; dynamic lexicon-based; static lexicon baseline; deep learning; machine-learning; transformer-based models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Sentiment analysis for low-resource languages remains challenging due to limited annotated data, orthographic instability, informal writing practices, and the lack of dedicated linguistic resources, challenges that are particularly acute for Tarifit (Tamazight of the Rif), an under-resourced Amazigh language characterized by strong dialectal variation, pervasive multi-script usage, and highly noisy user-generated content on social media. This study introduces D-LexeCan, a dynamic lexicon-based sentiment analysis framework that infers polarity directly from annotated corpus evidence without relying on predefined sentiment dictionaries or computationally intensive pretrained deep learning and transformer-based models. The framework combines deterministic multi-script normalization, unifying Arabic script, Tifinagh, and Arabizi into a single Tarifit Latin representation, with automatic induction of sentiment-bearing unigrams and bigrams, while explicitly modeling negation and amplification phenomena through linguistically motivated operators and preserving emojis as meaningful discourse-level sentiment cues. The approach is evaluated on a manually annotated social media corpus collected from multiple online platforms, where it achieves an accuracy of 0.8800 and a Macro-F1 score of 0.8798. The results outperform a static lexicon baseline with an accuracy of 0.5275, a classical machine-learning model based on TF–IDF and SVM with an accuracy of 0.8525, and neural architectures including BiLSTM with an accuracy of 0.7950. Experiments with frozen multilingual transformer encoders show accuracy ranging from 0.6725 to 0.7650. Fine-tuned multilingual transformers such as mBERT achieve competitive performance, reaching an accuracy of 0.8175. Overall, the results demonstrate that adaptive and linguistically grounded dynamic lexicon induction constitutes an effective, interpretable, and computationally efficient alternative for sentiment analysis in low-resource, noisy, and multi-script African language contexts.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_72-D_LexeCan_A_Dynamic_Lexicon_Based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Quantitative Investigation of the Effect of GoF Design Patterns on the Quality of Software Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170371</link>
        <id>10.14569/IJACSA.2026.0170371</id>
        <doi>10.14569/IJACSA.2026.0170371</doi>
        <lastModDate>2026-03-31T07:33:41.9870000+00:00</lastModDate>
        
        <creator>Somia Abufakher</creator>
        
        <subject>Gang-of-four patterns; empirical quantitative investigation; software systems quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Design patterns are universal, reusable fixes for common issues in software design. They are supposed to encourage better design choices by repurposing already-established successful solutions, saving cost and time. The purpose of the current study is to present quantitative evidence about the expected implications for software quality of employing GoF design patterns; 10 commonly applied patterns have been empirically assessed for their impact on software quality. The evaluated patterns are: Factory Method, Prototype, Singleton, Adapter, Composite, Decorator, Observer, State, Template Method, and Proxy. The study considers software quality attributes that match the intents of the subject design patterns; these attributes are: software maintainability, testability, reusability, simple design, sensitivity to change, and error-proneness. The empirical evaluation of patterns is performed by computing 10 software quality metrics for pattern classes that have been detected in 10 real open-source projects implemented in Java. The findings reveal that the evaluated patterns promote software quality, except for the classes of Prototype, Composite, and Singleton, which were often found not cohesive.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_71-Empirical_Quantitive_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Framework Using XLM-R with Optimized TF-IDF and Positional Encoding for Intra-Sentential Code Mixing Malay-English Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170370</link>
        <id>10.14569/IJACSA.2026.0170370</id>
        <doi>10.14569/IJACSA.2026.0170370</doi>
        <lastModDate>2026-03-31T07:33:41.9570000+00:00</lastModDate>
        
        <creator>Surendran Selvaraju</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <creator>Nurulhuda Firdaus Mohd Azmi</creator>
        
        <creator>Wan Noor Hamiza Wan Ali</creator>
        
        <creator>Norshaliza Kamaruddin</creator>
        
        <subject>Sentiment analysis; code mixing; feature extraction; contextual embeddings; XLM-R; TF-IDF; positional encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The increasing use of online platforms, especially social media, has led to a rapid growth of user-generated content that frequently exhibits intra-sentential code mixing between Malay and English. Sentiment analysis remains challenging due to linguistic heterogeneity, frequent language switching, non-standard syntax, and the limited availability of adequate representations for code mixing text. Although multilingual contextual embedding models such as the Cross-lingual Language Model (XLM-R) provide good semantic representations, challenges remain in capturing fine-grained sentiment cues in intra-sentential code mixing text when they are used directly. This study proposes an enhanced feature extraction framework for intra-sentential code mixing Malay-English sentiment analysis. The framework first constructs Term Frequency–Inverse Document Frequency (TF-IDF) weighting based on trigrams, followed by lexicon-guided filtering to select trigrams that contain sentiment-relevant words. Contextual embeddings are then extracted using XLM-R and further refined through TF-IDF weighting and positional encoding to preserve structural information. The dataset is derived from the MESocSentiment corpus, with a total of 4,292 instances. The experimental results show that the proposed framework achieves an accuracy of 0.896 and an F1-score of 0.932, outperforming traditional sparse feature representations and multilingual contextual embedding baselines. Notably, the framework demonstrates a high recall of 0.954, indicating strong sensitivity in identifying sentiment-bearing instances across diverse social media code mixing expressions. Further analysis reveals that the integration of informative trigram filtering, XLM-R based contextual embedding, TF-IDF weighting, positional encoding, and sentiment polarity scoring enhances the representation of sentiment cues in short and informal social media text. Overall, the results suggest that the proposed feature extraction framework enhances the representation quality of sentiment analysis for code mixing Malay–English in social media.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_70-An_Enhanced_Framework_Using_XML_R.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The DAGC-ATS Database for Arabic Grammar Correction for Arabic Summaries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170369</link>
        <id>10.14569/IJACSA.2026.0170369</id>
        <doi>10.14569/IJACSA.2026.0170369</doi>
        <lastModDate>2026-03-31T07:33:41.9230000+00:00</lastModDate>
        
        <creator>Nada Essa</creator>
        
        <creator>Mostafa M. El-Gayar</creator>
        
        <creator>Eman M. El-Daydamony</creator>
        
        <subject>Arabic grammar correction; Arabic natural language processing; open domain database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Arabic Grammar Correction is a comprehensive open-domain task. Modern methods of correcting Arabic language errors rely on databases specific to a particular field, containing specific words and phrases, which leads to the problem of out-of-context words. Due to the growth of recent work on Arabic text summarization and Arabic grammar correction, the problem of out-of-context words, and the complex nature of Arabic grammar, an open-domain Arabic database is a required resource for Arabic language processing techniques. In this study, a new open-domain Database for Arabic Grammar Correction (DAGC-ATS) is presented to solve the out-of-context words problem and the limited domains of existing training databases. The proposed database is based on the description of Arabic grammar using part-of-speech tags and relations between words obtained by a dependency parser. The DAGC-ATS database supports grammar error detection and correction at the simple sentence level, and its entries describe Arabic grammar rules. The database contains two files, one for correcting Arabic simple sentences and the other for correcting grammatically incorrect Arabic basic sentences. It is designed for use only in the training stage. Every entry in the database describes one distinct grammatical problem, such as gender, number, singular, dual, or plural faults. It contains 9,309,888 entries. Using the QALB dataset, the system&#39;s precision, recall, and F-measure scores were 96.90%, 94.80%, and 95.83%, respectively. Additionally, the same system was tested using the EASC database with 785 summaries, and the results for precision, recall, and F-measure were 99.73%, 95.90%, and 97.77%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_69-The_DAGC_ATS_Database_for_Arabic_Grammar.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Shrimp Feeding System Using Passive Acoustic Monitoring and Faster R-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170368</link>
        <id>10.14569/IJACSA.2026.0170368</id>
        <doi>10.14569/IJACSA.2026.0170368</doi>
        <lastModDate>2026-03-31T07:33:41.8930000+00:00</lastModDate>
        
        <creator>Huynh Viet Hung</creator>
        
        <creator>Huynh Vi Khang</creator>
        
        <creator>Luong Vinh Quoc Danh</creator>
        
        <subject>Automated feeding system; faster R-CNN; passive acoustic monitoring; shrimp culture; whiteleg shrimp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Shrimp aquaculture plays a vital role in global seafood production, contributing substantially to food security, economic growth, and export revenue. Feed typically accounts for 40–60% of total production costs, making efficient feed management crucial for improving farm profitability and the sustainability of culture operations. Acoustic-based feeding strategies offer a promising solution by enabling demand-driven feed control through the detection of shrimp feeding sounds. However, reliable recognition in commercial ponds remains difficult due to strong background noise from aerators, pumps, diffusers, and rainfall, which overlaps with the frequency band of the feeding signals. In addition, the dependence on specialized software and high-performance computing resources hinders large-scale adoption. This study proposes a novel shrimp feeding sound recognition approach that converts acoustic signals into spectrogram images and employs a Faster R-CNN–based framework to regulate feed delivery in real time according to shrimp demand. A wavelet-based filtering method is introduced to effectively suppress ambient noise under practical farming conditions. Moreover, the developed open-source Python-based software enhances the feasibility of deploying intelligent acoustic-based feeding systems in commercial shrimp aquaculture. Experimental results demonstrate that the proposed system improves feed utilization efficiency and growth performance compared with traditional feeding practices.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_68-An_Automated_Shrimp_Feeding_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deployment-Oriented Framework for Machine Learning-Based Learning Style Identification: A Systematic Computational Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170367</link>
        <id>10.14569/IJACSA.2026.0170367</id>
        <doi>10.14569/IJACSA.2026.0170367</doi>
        <lastModDate>2026-03-31T07:33:41.8470000+00:00</lastModDate>
        
        <creator>Sarafa Olasunkanmi Adeyemo</creator>
        
        <creator>Mohd Shahizan Othman</creator>
        
        <creator>Chan Weng Howe</creator>
        
        <creator>Muteb Sinhat Almarshadi</creator>
        
        <creator>Siti Zaiton Mohd Hashim</creator>
        
        <creator>Taofik Olasunkanmi Tafa</creator>
        
        <creator>Abdulaziz Saidu Yalwa</creator>
        
        <subject>Machine learning; learning style identification; multimodal data fusion; deep learning; ensemble learning; explainable artificial intelligence; deployment maturity; adaptive learning systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study presents a systematic and deployment-oriented analysis of machine learning (ML) techniques for learning style identification in adaptive digital environments. A total of 57 peer-reviewed studies published between 2020 and 2025 were analysed using a PRISMA-guided methodology. Beyond descriptive synthesis, the review systematically examines algorithmic paradigms, multimodal data integration strategies, evaluation protocols, and deployment readiness characteristics. The findings reveal that classical supervised models remain prevalent in small-scale applications, while deep learning and ensemble methods demonstrate improved performance in high-dimensional behavioural datasets. However, significant heterogeneity exists in validation strategies, fusion architectures, and system scalability. To address these limitations, this study proposes a deployment-oriented architectural framework that integrates: 1) context-aware model selection, 2) structured multimodal fusion design, 3) layered explainability mechanisms, and 4) a four-level deployment maturity evaluation model. The framework provides a unified system-level perspective that shifts emphasis from isolated performance optimization toward scalable, interpretable, and integration-ready ML system design. This work contributes a structured computational blueprint for developing robust and deployment-aware learning style identification systems in intelligent educational platforms.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_67-A_Deployment_Oriented_Framework_for_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Multilevel User Allocation in MEC Using CESO for Resource Efficiency and QoE</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170366</link>
        <id>10.14569/IJACSA.2026.0170366</id>
        <doi>10.14569/IJACSA.2026.0170366</doi>
        <lastModDate>2026-03-31T07:33:41.8130000+00:00</lastModDate>
        
        <creator>V Arun</creator>
        
        <creator>M Azhagiri</creator>
        
        <subject>Mobile Edge Computing; distributed edge server; Quality of Experience; Cognitive Evolutionary Synergy Optimization; game-theoretic equilibrium</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Mobile Edge Computing (MEC) has become one of the key paradigms enabling next-generation networks to support applications that are latency-sensitive and computation-intensive. Nevertheless, the efficient placement of heterogeneous and dynamically arriving user tasks on distributed edge servers remains a challenging problem because of network fluctuation, non-uniform resource availability, and variance in Quality of Experience (QoE) demand. To overcome these constraints, this study proposes the Dynamic Multilevel User Allocation Algorithm (DMUAA), which incorporates a new Cognitive Evolutionary Synergy Optimization (CESO) framework to reach stable, adaptive, and resource-optimizing allocation in real time. DMUAA comprises a hierarchical optimization pipeline consisting of heuristic initialization, stochastic refinement, and strategic game-theoretic equilibrium, assisted by a coordination and feedback mechanism that guarantees constant adaptation to variations in user mobility and load. The system model jointly optimizes latency, energy, resource usage, and QoE under multi-constraint edge-server conditions. Extensive simulations over a wide range of resource capacities, user rates, and mobility patterns indicate that DMUAA significantly outperforms five state-of-the-art baselines: MGGO, GTA, EUA, HAILP, and LGP. Findings indicate that DMUAA decreases average end-to-end latency by 18–34%, increases Resource Utilization Efficiency (RUE) by 12–27%, and increases Service Continuity Rate (SCR) by 15–30% over current practices. The proposed approach also produces 20–35% greater QoE, better load balancing (with up to 25% lower LBI), and up to 22% greater energy-QoE efficiency (EQR). Moreover, CESO allows for more rapid and stable convergence, with DMUAA reaching optimal allocation states 40–55% faster than competing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_66-Dynamic_Multilevel_User_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Application to Improve Advertising Order Management Based on Cloud Computing: Case Study of a Television Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170365</link>
        <id>10.14569/IJACSA.2026.0170365</id>
        <doi>10.14569/IJACSA.2026.0170365</doi>
        <lastModDate>2026-03-31T07:33:41.7670000+00:00</lastModDate>
        
        <creator>Ariadna Gisselle Ledesma-S&#225;nchez</creator>
        
        <creator>Ricardo Alejandro Gamarra-Valle</creator>
        
        <creator>Ernesto Adolfo Carrera-Salas</creator>
        
        <subject>Advertising order management; cloud computing; process automation; television companies; web applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Television companies face significant problems in managing advertising orders due to manual processes that cause transcription errors, delays, and a lack of traceability. This study proposes a web application based on cloud computing to automate this process. The solution implements a hybrid architecture that allows advertising agencies to enter orders directly, eliminating manual transcription and providing complete traceability. The results of the pilot test demonstrated substantial improvements: processing time was reduced by 88.33%, decreasing from 17 minutes 16 seconds to 2 minutes 1 second per order. Transcription errors fell from 60% to 0%, and operating costs were reduced by 54.5%. Automation in reporting eliminated manual management, allowing agencies direct access to campaign information. The implementation successfully transformed manual management into an automated, efficient, and scalable system, improving operational efficiency and customer satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_65-Web_Application_to_Improve_Advertising_Order_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-Enhanced Soft Actor-Critic with EV-Aware Reward Shaping for Maize Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170364</link>
        <id>10.14569/IJACSA.2026.0170364</id>
        <doi>10.14569/IJACSA.2026.0170364</doi>
        <lastModDate>2026-03-31T07:33:41.7370000+00:00</lastModDate>
        
        <creator>Xuan Lim</creator>
        
        <creator>Hock Guan Goh</creator>
        
        <creator>Shen Khang Teoh</creator>
        
        <creator>Peh Chiong Teh</creator>
        
        <creator>Ivan Andonovic</creator>
        
        <subject>Precision agriculture; maize optimization; fertilization and irrigation management; reinforcement learning; Soft Actor-Critic; transformer; reward shaping; explainable artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Optimizing fertilization and irrigation strategies is essential for improving productivity and resource efficiency in precision agriculture. Artificial intelligence (AI), particularly reinforcement learning (RL), has been increasingly explored for adaptive crop management under uncertain environmental conditions. However, many existing approaches rely on single-action formulations that struggle with joint input control, leading to economically unstable outcomes and limited policy interpretability. This study proposes a Transformer-enhanced Soft Actor-Critic (SAC) framework with expected value (EV)-aware reward shaping for maize optimization in a Decision Support System for Agrotechnology Transfer (DSSAT) Gym environment, enabling simultaneous control of fertilization and irrigation under dynamic crop-environment interactions. Unlike standard SAC implementations, the proposed framework incorporates a transformer-based state encoder for richer agronomic state representation and an EV-aware reward shaping mechanism to guide economically stable long-horizon decision-making. The proposed AI-driven approach improves economic profitability and profit stability compared with the prior state-of-the-art (SOTA) large language model (LLM)-enhanced Deep Q-Network (DQN) baseline. Behavioral analysis shows that the learned policy exhibits temporally structured decision patterns characterized by smaller-magnitude, higher-frequency actions and an associated input-efficiency trade-off. Furthermore, Shapley Additive Explanations (SHAP)-based explainable AI (XAI) analysis identifies growth-stage and crop-development variables as dominant drivers of long-horizon control decisions. Overall, the results demonstrate that the Transformer-enhanced SAC with EV-aware reward shaping provides a more profitable, financially stable, and interpretable AI-based decision-making framework for maize optimization in the DSSAT Gym environment.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_64-Transformer_Enhanced_Soft_Actor_Critic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Person Re-Identification Using Image Generation-Based Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170363</link>
        <id>10.14569/IJACSA.2026.0170363</id>
        <doi>10.14569/IJACSA.2026.0170363</doi>
        <lastModDate>2026-03-31T07:33:41.7070000+00:00</lastModDate>
        
        <creator>Yuya Ifuku</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Oda Mariko</creator>
        
        <subject>Person re-identification; generative AI; data augmentation; OSNet; real-time systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Person Re-identification (Re-ID) in single-gallery scenarios—where each individual has only one registration image—suffers from severe viewpoint sensitivity due to insufficient pose diversity. This study introduces ViewSynthReID, a pioneering generative augmentation framework that leverages Wan2.2, the latest diffusion-based video generation model, to synthesize complete 360&#176; viewpoint coverage from a single input. The pipeline innovatively employs MediaPipe for automatic frontal pose selection, Hybrid Attention Transformer (HAT) for texture-preserving super-resolution, and diffusion synthesis to create photorealistic multi-pose variants, all seamlessly integrated into the lightweight OSNet backbone for efficient multi-scale feature extraction. On Market-1501, while overall Rank metrics experienced minor degradation from synthetic artifacts (Rank-1: 92.3% → 91.8%), the method delivered targeted gains in challenging viewpoint transitions: 75/3,368 queries (2.2%) showed Rank-1 improvements averaging +12.4%, with 28 cases exceeding +25%. These gains were most pronounced in &gt;90&#176; viewpoint gaps, proving generative synthesis effectively bridges critical pose gaps unattainable through traditional augmentation. For real-world deployment, a production-grade inference pipeline is engineered, combining YOLO26 pedestrian detection with TensorRT-optimized OSNet, achieving 7.20 FPS and 135ms latency on 4K video streams. This system enables practical smart city applications, including real-time crowd monitoring, lost person recovery, and traffic behavior analysis, demonstrating that strategic generative augmentation can transform single-shot Re-ID from research curiosity to deployable surveillance technology.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_63-Real_Time_Person_Re_Identification_Using_Image_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Successful Microservices Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170362</link>
        <id>10.14569/IJACSA.2026.0170362</id>
        <doi>10.14569/IJACSA.2026.0170362</doi>
        <lastModDate>2026-03-31T07:33:41.6900000+00:00</lastModDate>
        
        <creator>Dinda Ayu Hapsari</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <creator>Anita Nur Fitriani</creator>
        
        <creator>Bob Hardian</creator>
        
        <subject>Microservice; practice; systematic literature review; Kitchenham; PRISMA 2020</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Despite the widespread adoption of microservices across major global companies, a knowledge gap persists between best practices and the real-world implementation challenges faced by practitioners. While existing literature provides extensive coverage of microservices patterns, limited evidence exists regarding how these practices are actually implemented in various organizational contexts. This study addresses that gap through a systematic synthesis of empirically validated microservices practices from recent peer-reviewed literature. We conducted a comprehensive systematic literature review and ultimately included thirty-four high-quality articles published between 2021 and 2025. We extracted 114 microservice practices and classified them into eight domains: Architecture and Design, Communication and Integration, Development and Deployment, Monitoring and Observability, Testing and Quality Assurance, Migration and Legacy Modernization, Security and Access Control, and Team Organization and Development Process. Architecture and Design and Team Organization and Development Process collectively account for nearly half of all identified practices, while Security and Access Control emerged as a significant research gap, with only 5.9% of studies addressing this domain. To the best of our knowledge, this is the first systematic literature review to comprehensively synthesize microservice implementation practices across multiple domains with explicit empirical validation in real-world contexts as an inclusion criterion. This study provides a comprehensive catalogue of empirically validated practices, offering structured guidance for practitioners and a foundation for the future development of microservice implementation guidelines, contributing to more successful microservice projects and mitigated implementation risks.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_62-Enhancing_the_Successful_Microservices_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hate Speech Detection on Multiple Social Networks Using Deep Learning and Optimization Techniques: A Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170361</link>
        <id>10.14569/IJACSA.2026.0170361</id>
        <doi>10.14569/IJACSA.2026.0170361</doi>
        <lastModDate>2026-03-31T07:33:41.6570000+00:00</lastModDate>
        
        <creator>Vishu Tyagi</creator>
        
        <creator>Sourabh Jain</creator>
        
        <subject>Natural language processing; sparrow search algorithm; hate speech; deep neural network; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The spread of hate speech on social media networks is an emerging factor that complicates efforts to maintain an environment that promotes healthy communication. Automating the detection of hate speech across multiple social media networks has proven very difficult, yet identifying and monitoring hate speech is critical to reducing its negative effects on individuals and groups. Existing approaches to hate speech classification still struggle to distinguish hateful messages from normal ones and suffer from low accuracy. Deep learning has greatly benefited many domains, especially speech and NLP tasks. The hyperparameters of Deep Neural Networks (DNN) play a crucial role in their success; however, because these hyperparameters are highly interdependent, they are often difficult to set for machine learning models such as deep neural networks. The work proposed in this study employs the sparrow search algorithm (SSA) to fine-tune the hyperparameters of deep learning models for hate speech detection. During training of the SSA-DNN model, the SSA searches for and selects the best hyperparameters. The experimental outcomes show that the proposed SSA-DNN model outperforms various machine learning and deep learning techniques in the context of hate speech detection.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_61-Hate_Speech_Detection_on_Multiple_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of AI-Based Methods for Cloud Resource Allocation and Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170360</link>
        <id>10.14569/IJACSA.2026.0170360</id>
        <doi>10.14569/IJACSA.2026.0170360</doi>
        <lastModDate>2026-03-31T07:33:41.6270000+00:00</lastModDate>
        
        <creator>Rim Doukha</creator>
        
        <creator>Abderrahmane Ez-Zahout</creator>
        
        <subject>AI techniques; heuristics; metaheuristic; cloud resource management; sustainability; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Cloud computing has become essential for modern digital services, yet efficiently allocating compute, storage, and network resources in large-scale and highly dynamic environments remains a significant challenge. Traditional rule-based approaches often struggle to cope with workload variability, multi-tenancy, and the need for real-time multi-objective optimization. In response, recent research has increasingly explored artificial intelligence techniques to improve prediction, scheduling, and automated resource control in cloud infrastructures. This study presents a comprehensive survey of AI-based methods for cloud resource allocation, including machine learning, deep learning, reinforcement learning, and hybrid approaches. It systematically analyzes selected studies published between 2020 and 2026, examining their learning paradigms, optimization objectives (e.g., performance, cost, energy efficiency), experimental validation strategies, and reported limitations. While classical optimization techniques are briefly discussed to contextualize the evolution of the field, the core analysis is strictly centered on AI-driven approaches. The study concludes by identifying the key challenges that persist in intelligent cloud resource management and outlines promising directions for future research toward more adaptive, reliable, and scalable optimization frameworks.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_60-A_Survey_of_AI_Based_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Based on Convolutional Neural Networks for the Initial Evaluation of Cutaneous Melanoma</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170359</link>
        <id>10.14569/IJACSA.2026.0170359</id>
        <doi>10.14569/IJACSA.2026.0170359</doi>
        <lastModDate>2026-03-31T07:33:41.6100000+00:00</lastModDate>
        
        <creator>Julio Guillermo Farro-Llanos</creator>
        
        <creator>Manases Sabteca Juan De Dios-Arango</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Melanoma; skin disease; convolutional neural networks; mobile app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Cutaneous melanoma is a dermatological disease that affects a large portion of the world&#39;s population and is characterized by its high capacity for dissemination and aggressiveness, especially when not detected early. Given this need, the objective was to develop a mobile application based on convolutional neural networks for the initial assessment of this condition, evaluating the percentage increase in sensitivity, specificity, and accuracy. The research employed a quantitative approach and a pre-experimental design. The study variable was the initial assessment of cutaneous melanoma. The sample consisted of 120 images: 60 from melanoma-positive patients and 60 from melanoma-negative patients. The implementation yielded an increase in sensitivity of 0.729%, in specificity of 3.626%, and in accuracy of 2.631%. In conclusion, adopting the mobile application based on convolutional neural networks strengthens the initial assessment of cutaneous melanoma by optimizing these indicators.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_59-Mobile_Application_Based_on_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web System Based on Convolutional Neural Networks to Support Early Identification of Ocular Pterygium</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170358</link>
        <id>10.14569/IJACSA.2026.0170358</id>
        <doi>10.14569/IJACSA.2026.0170358</doi>
        <lastModDate>2026-03-31T07:33:41.5800000+00:00</lastModDate>
        
        <creator>Justo Oscar Salcedo-Enriquez</creator>
        
        <creator>Keyla Guadalupe Yataco-Argomedo</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Ocular pterygium; web system; convolutional neural networks; early identification; technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This research aligns with SDG No. 9, “Industry, Innovation, and Infrastructure,” as it promotes health and well-being using innovative technologies. The objective was to determine whether a web-based system built on convolutional neural networks improves the early identification of pterygium. The study was applied research with a quantitative approach and an experimental, specifically pre-experimental, design. The study variable was the early identification of ocular pterygium, with a sample of 100 images: 50 from individuals with ocular pterygium and 50 from healthy individuals, selected through non-probabilistic convenience sampling. Sensitivity, specificity, and accuracy metrics were used to measure the results, yielding excellent values of 96%, 98%, and 97%, respectively, with increases in sensitivity of 4.35%, specificity of 2.80%, and accuracy of 3.56%. It is concluded that the proposal positively improves support for the early identification of pterygium, thanks to the high results obtained for the evaluated indicators, making it executable and scalable for future research.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_58-Web_System_Based_on_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Perspectives of AI and Academic Integrity in Higher Education Institutions in Oman</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170357</link>
        <id>10.14569/IJACSA.2026.0170357</id>
        <doi>10.14569/IJACSA.2026.0170357</doi>
        <lastModDate>2026-03-31T07:33:41.5500000+00:00</lastModDate>
        
        <creator>Alaa Edein Qoussini</creator>
        
        <creator>Shaima Al Tabib</creator>
        
        <creator>Akel Freij</creator>
        
        <subject>Artificial intelligence; academic integrity; student perspectives; ethical AI use; policy awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid adoption of generative artificial intelligence (AI) in higher education offers significant learning benefits while raising serious concerns about academic integrity. This study was conducted to examine undergraduate students’ perspectives and their awareness, attitudes, usage behaviors, perceived educational impact, policy awareness, and intentions to misuse AI tools in academic contexts in Oman. Using a cross-sectional survey design, data were collected from 200 undergraduate students across multiple academic levels. The survey measured six constructs using Likert-scale items, and data were analyzed using descriptive statistics, correlation analysis, group comparisons, and hierarchical multiple regression. Results indicated that students demonstrated moderate to high awareness of AI tools and generally positive attitudes toward their use for learning-related tasks such as grammar checking, summarization, and brainstorming. Correlation analysis showed that AI awareness, perceived educational impact, and policy awareness were significantly and negatively associated with intentions to misuse AI. Hierarchical multiple regression revealed that ethics-related variables, specifically perceived impact and policy awareness, explained substantial additional variance in misuse intentions beyond baseline predictors of awareness, attitudes, and usage frequency. Gender differences were observed, with male students reporting higher intentions to misuse AI, while senior students demonstrated higher awareness and policy understanding than early-year students. The findings highlight the critical role of AI literacy, ethical awareness, and clear institutional policies in mitigating unethical AI use. Integrating AI ethics education early in undergraduate curricula and strengthening communication of academic integrity policies may promote responsible AI engagement. These results contribute empirical evidence from the Middle Eastern context and offer practical implications for higher education institutions.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_57-Students_Perspectives_of_AI_and_Academic_Integrity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>One Decade of Artificial Intelligence (AI) Research in Public Health Stunting Prediction and Intervention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170356</link>
        <id>10.14569/IJACSA.2026.0170356</id>
        <doi>10.14569/IJACSA.2026.0170356</doi>
        <lastModDate>2026-03-31T07:33:41.5030000+00:00</lastModDate>
        
        <creator>Nurjoko </creator>
        
        <creator>Admi Syarif</creator>
        
        <creator>Favoriten R. Lumbanraja</creator>
        
        <creator>Khairun Nisa Berawi</creator>
        
        <subject>Artificial intelligence; stunting; public health; machine learning; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Stunting attributable to malnutrition remains a global public health problem impacting the long-term physical and cognitive growth of children. In recent years, artificial intelligence (AI) has been applied in public health research to help diagnose and predict stunting. This study reviews trends in AI research on stunting prediction and intervention and identifies existing challenges and opportunities. Articles were screened using the Systematic Literature Review (SLR) method with the PRISMA protocol across databases such as PubMed, ScienceDirect, Scopus, and Google Scholar. Data analysis was performed using VOSviewer and Microsoft Excel. The results showed that the models most used for predicting stunting were Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (XGBoost, LGBM), and Artificial Neural Network (ANN). Model evaluation is usually conducted through metrics such as AUC-ROC, accuracy, sensitivity, and specificity. Although AI has shown promise in identifying and predicting stunting, several challenges remain, including data access and quality, model interpretability, and integration within healthcare networks. Promising future directions include home-based health data prediction with the Internet of Things (IoT), Explainable AI (XAI), Multimodal AI, and natural language processing (NLP) models.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_56-One_Decade_of_Artificial_Intelligence_(AI)_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Based Secure Data Sharing Framework: Dual Validation Through Content Validity and Thematic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170355</link>
        <id>10.14569/IJACSA.2026.0170355</id>
        <doi>10.14569/IJACSA.2026.0170355</doi>
        <lastModDate>2026-03-31T07:33:41.4570000+00:00</lastModDate>
        
        <creator>Azman Azmi</creator>
        
        <creator>Farashazillah Yahya</creator>
        
        <creator>Nur Afrina Azman</creator>
        
        <subject>Blockchain; content validity; data sharing; e-government; thematic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Blockchain technology applied to digital government service platforms has introduced new possibilities for secure data sharing among public sector agencies. However, the verification and validation of critical security factors remain underexplored, leading to inconsistent security factors across implementations and theoretical gaps. This study addresses this issue by conducting a dual validation analysis of security factors relevant to blockchain-based data sharing in e-government applications, using thematic analysis and the content validity index. Drawing from an extensive literature review, 54 security items from nine factors were evaluated by a panel of six domain experts using methodological triangulation. The results indicate that the key factors of confidentiality, integrity, availability, decentralisation, interoperability, transparency, auditability, and governance exhibit strong content validity and coherent themes in the thematic analysis. The immutability factor falls outside the Universal Agreement (UA) scale and requires further refinement. The validated framework contributes to both academic and practical domains by offering concrete fundamentals for secure system design and policy formulation. Future research directions include operational testing of the validated factors and exploration of user-centric verification.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_55-Blockchain_Based_Secure_Data_Sharing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network for Chili Plant Disease Classification: A Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170354</link>
        <id>10.14569/IJACSA.2026.0170354</id>
        <doi>10.14569/IJACSA.2026.0170354</doi>
        <lastModDate>2026-03-31T07:33:41.4400000+00:00</lastModDate>
        
        <creator>Erna Dwi Astuti</creator>
        
        <creator>Widowati</creator>
        
        <creator>Aris Sugiharto</creator>
        
        <subject>Convolutional neural network; MobileNet-V2; chili plant disease; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Chili peppers are a high-value horticultural crop that is highly susceptible to foliar diseases, which can significantly reduce yield and market quality. This study proposes and evaluates a Convolutional Neural Network (CNN) model based on the MobileNet-V2 architecture for chili leaf disease classification. A combined dataset of 2,690 images collected from two public repositories and one field-acquired source was used in this research. The dataset was divided into training, validation, and testing subsets in an 80:10:10 ratio and underwent preprocessing steps including image resizing, data augmentation, and normalization. The proposed model was implemented in TensorFlow 2.15 and trained on the Google Colab platform. Experimental results demonstrate strong classification performance, achieving 95.6% validation accuracy and 96.8% test accuracy with a low loss value of 0.1011. All evaluated classes (anthracnose, yellow virus, leaf spot, leaf curl, and healthy leaves) achieved precision, recall, and F1-scores exceeding 0.90, accompanied by near-perfect AUC values. These findings indicate that the MobileNet-V2-based CNN exhibits effective discriminative capability and generalization across heterogeneous visual conditions, highlighting its potential applicability for AI-assisted, image-based agricultural disease monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_54-Convolutional_Neural_Network_for_Chili_Plant_Disease_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction and Application Analysis of a Response Model for Improper Customer Behaviors in Service Enterprises Based on Cognitive Evaluation Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170353</link>
        <id>10.14569/IJACSA.2026.0170353</id>
        <doi>10.14569/IJACSA.2026.0170353</doi>
        <lastModDate>2026-03-31T07:33:41.3600000+00:00</lastModDate>
        
        <creator>Enhou Zu</creator>
        
        <creator>Chun-Wei Lu</creator>
        
        <creator>Jui-Chan Huang</creator>
        
        <creator>Tien-Shou Huang</creator>
        
        <creator>Cheng-Ju Liu</creator>
        
        <subject>Cognitive evaluation; customer misbehavior; management research; social psychology; tourism services; game theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study investigates customer misbehavior in tourism services through the lens of Cognitive Evaluation Theory and Game Theory, contributing to both management research and social psychology. A dual-path model, tested via a multi-experiment design optimized with a machine learning algorithm, examines mediation versus defense strategies across violation types (interpersonal/transactional). The results show that for interpersonal norm violations, mediation boosts repurchase intention by 23.5%, mediated by perceptions of organizational and moral justice, whereas for transactional norm violations, the defense strategy achieves higher recovery efficiency (32.1%), primarily mediated by organizational justice. This highlights how corporate responses signal organizational values, shaping onsite customer reactions. Framed by game theory, the analysis posits that service scenarios constitute a dynamic strategic system involving customers, firms, and bystanders. Choosing mediation in interpersonal conflicts fosters cooperative atmospheres, while defending transactional rules maintains authority in non-cooperative games. Ultimately, this algorithm-informed approach seeks a refined Bayesian equilibrium, offering data-driven intervention solutions for service order management.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_53-Construction_and_Application_Analysis_of_a_Response_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Detailed Classification of the Scheduling Algorithms in Fog Computing Environment, Challenges, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170352</link>
        <id>10.14569/IJACSA.2026.0170352</id>
        <doi>10.14569/IJACSA.2026.0170352</doi>
        <lastModDate>2026-03-31T07:33:41.3000000+00:00</lastModDate>
        
        <creator>Hend Gamal El Din Hassan Ali</creator>
        
        <creator>Imane Aly Saroit</creator>
        
        <creator>Amira Mohamed Kotb</creator>
        
        <subject>Cloud computing; fog computing; edge computing; Internet of Things; scheduling algorithms; directed acyclic graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Fog computing provides distributed computing and storage resources placed near users, rather than in the cloud, to reduce latency for time-sensitive Internet of Things applications. Several scheduling algorithms have been proposed for the fog environment to achieve better performance in terms of execution time, cost, latency, and quality of service. This research presents a complete study of the different scheduling approaches in fog computing, together with an operational classification of up-to-date algorithms and a detailed comparison of their performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_52-A_Detailed_Classification_of_the_Scheduling_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interior Windshield Moisture Management Using a Vapor-Assisted Wiping Model: System Architecture and Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170351</link>
        <id>10.14569/IJACSA.2026.0170351</id>
        <doi>10.14569/IJACSA.2026.0170351</doi>
        <lastModDate>2026-03-31T07:33:41.2670000+00:00</lastModDate>
        
        <creator>Abdelrahim Fathy Ismail</creator>
        
        <subject>Interior windshield; moisture management; vapor-assisted wiping; windshield defogging; vehicle visibility enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Fogging and moisture accumulation on the interior side of vehicle windshields continue to affect driving visibility despite the development of various defogging and climate-control approaches. While previous studies have mainly addressed the problem through HVAC optimization, airflow management, or intelligent monitoring systems, direct moisture handling at the interior glass surface remains less explored. In response to this gap, the present study proposes an interior windshield moisture management model based on vapor-assisted wiping. The model integrates a water reservoir and vapor generation unit, a guided vapor delivery pathway, and a regulated wiping interface that includes a porous moisture-distribution structure. These components work together to control vapor transfer before it reaches the windshield, allowing the wiping action to operate with a moderated moisture layer rather than uncontrolled vapor flow. The proposed system architecture explains how moisture generation, routing, and surface interaction are coordinated to support stable visibility inside the vehicle cabin. This model offers an alternative approach that complements existing defogging strategies and may contribute in future work to the development and evaluation of more effective interior windshield visibility enhancement systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_51-Interior_Windshield_Moisture_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Multi-Scale Feature Pyramid YOLO Architecture for Accurate and Deployment-Efficient Road Damage Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170350</link>
        <id>10.14569/IJACSA.2026.0170350</id>
        <doi>10.14569/IJACSA.2026.0170350</doi>
        <lastModDate>2026-03-31T07:33:41.2200000+00:00</lastModDate>
        
        <creator>Olzhas Olzhayev</creator>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <creator>Nurly Sakenkyzy</creator>
        
        <creator>Madina Belisbek</creator>
        
        <subject>Road damage; Multi-Scale Feature Pyramid; YOLO architecture; intelligent transportation systems; small-object detection; real-time deployment; pavement defect analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Automated road damage detection has become a critical component of intelligent transportation systems, enabling timely infrastructure maintenance and enhanced traffic safety. However, detecting pavement defects such as cracks, potholes, and surface degradation remains challenging due to significant scale variation, irregular geometries, illumination changes, and class imbalance. This study proposes a real-time Multi-Scale Feature Pyramid YOLO architecture designed to achieve accurate and deployment-efficient multi-class road damage detection. The framework integrates hierarchical feature extraction with bidirectional multi-scale fusion to enhance sensitivity to both small and large defects. A decoupled detection head is employed to improve classification–localization balance, while focal loss and small-object emphasis mechanisms address class imbalance and fine-grained crack detection challenges. Comprehensive experiments conducted on a multi-class road damage dataset demonstrate that the proposed model achieves a mAP@0.5 of 0.68 and a recall of 0.81, outperforming several representative real-time detection approaches. Precision–recall analysis, confusion matrix evaluation, and ablation studies confirm the effectiveness of multi-scale feature aggregation and targeted optimization strategies. Qualitative results further illustrate robust detection performance under diverse environmental conditions. The proposed framework provides a practical trade-off between accuracy and computational efficiency, making it suitable for real-world deployment in intelligent road condition monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_50-A_Real_Time_Multi_Scale_Feature_Pyramid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Analysis Using Adaptive and Self-Adjusting Boosting and Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170349</link>
        <id>10.14569/IJACSA.2026.0170349</id>
        <doi>10.14569/IJACSA.2026.0170349</doi>
        <lastModDate>2026-03-31T07:33:41.2070000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Employment outcomes; machine learning; adaptive &amp; self-adjusting boosting; gender disparities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study investigates factors influencing the employment outcomes of sports science graduates, specifically their ability to secure decent jobs. Utilizing data from the Graduates Occupational Mobility Survey (GOMS) from 2015 to 2019, the study analyzed a sample of 1,019 sports science graduates aged 19 to 34. Both traditional statistical methods and advanced machine learning techniques, including Adaptive &amp; Self-Adjusting Boosting and logistic regression analysis, were employed to identify significant predictors and assess their impact. Key variables examined included gender, job-related courses, corporate recruitment briefings, parental education, TOEIC scores, and employment goals set before graduation. Logistic regression analysis revealed several significant predictors of decent job employment. Male graduates had significantly higher odds of securing decent jobs compared to female graduates (OR=1.45, 95% CI: 1.10-1.90, p=0.02). The number of job-related courses taken (OR=1.30, 95% CI: 1.05-1.60, p=0.04) and participation in corporate recruitment briefings (OR=1.25, 95% CI: 1.02-1.53, p=0.03) were positively associated with decent job employment. Parental education (OR=1.15, 95% CI: 1.01-1.30, p=0.05) and TOEIC scores (OR=1.10, 95% CI: 1.00-1.22, p=0.06) also showed modest effects. Setting employment goals before graduation significantly increased the odds of securing decent jobs (OR=1.20, 95% CI: 1.05-1.37, p=0.03). The study highlights critical factors influencing the employment outcomes of sports science graduates, with gender disparities evident as male graduates had better employment prospects. Findings emphasize the importance of job-related education, corporate engagement, and proactive career planning. Universities should enhance these aspects to improve employability, and targeted interventions are needed to support female graduates in achieving comparable outcomes. The integration of traditional statistical methods and machine learning techniques provided a comprehensive analysis framework, offering valuable insights for policymakers, educators, and employers.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_49-A_Hybrid_Analysis_Using_Adaptive_and_Self_Adjusting_Boosting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Lung Cancer Risk in Workers Using Dropouts Meet Multiple Additive Regression Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170348</link>
        <id>10.14569/IJACSA.2026.0170348</id>
        <doi>10.14569/IJACSA.2026.0170348</doi>
        <lastModDate>2026-03-31T07:33:41.1900000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Lung cancer; predictive modeling; occupational exposure; gradient boosting; risk stratification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Lung cancer remains a leading cause of cancer mortality, and preventable occupational and environmental exposures may compound risk in working-age populations. This study developed and compared predictive models for lung cancer risk using a publicly available tabular dataset (Kaggle; n = 1,000) containing demographic, lifestyle, symptom, and exposure-related variables. After standard preprocessing and an 80/20 train-test split, a Classification and Regression Tree (CART), a dropout-regularized gradient-boosted tree model (DART), k-nearest neighbors (KNN), and Gaussian Na&#239;ve Bayes were trained and evaluated using accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). CART achieved the highest accuracy (84.5%), while KNN achieved the highest precision (78.7%). DART produced the best F1-score (77.3%) and the highest AUC (0.801), suggesting a favorable balance between sensitivity and specificity when accounting for class imbalance. Feature-importance patterns in the final DART model highlighted occupational hazards, smoking habits, genetic predisposition, and air pollution exposure as leading contributors to model-based risk stratification in occupational settings. These findings suggest that regularized ensemble tree methods can support stable risk stratification and may complement screening by prioritizing individuals who warrant closer evaluation. The analysis is limited by the modest sample size and reliance on a single public dataset; external validation in occupational cohorts with measured exposure histories is required before practical implementation.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_48-Predictive_Modeling_of_Lung_Cancer_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gradient-Guided Data Augmentation with mBERT and MuRIL for Malayalam Offensive Language Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170347</link>
        <id>10.14569/IJACSA.2026.0170347</id>
        <doi>10.14569/IJACSA.2026.0170347</doi>
        <lastModDate>2026-03-31T07:33:41.1730000+00:00</lastModDate>
        
        <creator>Munawwar K V</creator>
        
        <creator>Nandhini K</creator>
        
        <subject>Offensive comment detection; gradient guided augmentation; NLPAUG; back translation; paraphrasing with MultiIndicParaphraseGeneration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The widespread adoption of social media platforms has facilitated the increased spread of offensive content, particularly in native languages, where users express themselves more freely. Automated offensive language detection in low-resource languages such as Malayalam faces significant challenges due to severe class imbalance, where non-offensive samples substantially outnumber offensive instances, resulting in biased model performance and diminished detection accuracy for underrepresented classes. This study addresses the critical challenge of class imbalance in Malayalam offensive language identification through a comprehensive data augmentation framework. We propose a novel gradient-guided augmentation technique specifically designed to mitigate minority class imbalance by selectively enhancing underrepresented class samples through the identification and synthesis of challenging instances that improve model robustness. The effectiveness of various augmentation strategies is systematically evaluated, including back-translation, paraphrasing, and NLPAUG techniques, integrated with mBERT and MuRIL models. Our gradient-guided augmentation approach demonstrates substantial performance improvements, achieving a notable 0.09 increase in recall score over the baseline model&#39;s 0.74 recall, while preserving overall model performance on imbalanced Malayalam offensive language datasets. The proposed methodology offers a promising solution for addressing class imbalance challenges in offensive content detection for low-resource languages. The results highlight that integrating augmentation with explainability not only improves classification performance but also helps overcome limitations associated with previous methods.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_47-Gradient_Guided_Data_Augmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Based Replication Models Using AI Techniques for Enhanced Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170346</link>
        <id>10.14569/IJACSA.2026.0170346</id>
        <doi>10.14569/IJACSA.2026.0170346</doi>
        <lastModDate>2026-03-31T07:33:41.1430000+00:00</lastModDate>
        
        <creator>Moneef M. Jazzar</creator>
        
        <creator>Aws I. Abueid</creator>
        
        <subject>Q-learning; elastic cloud replication; SLA-aware control; latency optimization; adaptive replication; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Elastic cloud infrastructure relies on dynamic replication mechanisms to maintain service availability and performance under fluctuating, non-stationary workloads. However, conventional threshold-based and static replication strategies frequently fail to maintain latency stability and Service Level Agreement (SLA) compliance in highly dynamic environments characterised by bursty, peak-stress traffic. This study introduces a Q-learning–based adaptive replication framework that formulates replication control as a sequential decision-making problem. The system is modelled as a Markov Decision Process (MDP), where replication adjustments are selected to maximise cumulative discounted reward, integrating latency minimisation, SLA violation penalties, and replica cost regularisation within a unified optimisation objective. A controlled cloud simulation environment was developed to emulate phased stochastic workload patterns, including normal, burst, sustained peak, and recovery intervals. The reinforcement learning controller was trained over 5000 episodes and subsequently evaluated under fixed-policy conditions against a reaction-delayed rule-based baseline controller. Experimental results demonstrate substantial improvements in performance stability. The proposed learning-based controller achieves a significant reduction in average latency, strong suppression of 95th percentile tail latency, and complete elimination of SLA violations under dynamic workload conditions. Unlike reactive threshold-based mechanisms, the learned policy anticipates workload transitions and proactively adjusts replication levels through long-term reward optimisation. These findings confirm that learning-driven replication control provides a structurally superior paradigm for latency-sensitive elastic cloud systems. By embedding SLA awareness directly into the reward formulation, replication management is transformed from a static configuration task into an adaptive, intelligent control process.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_46-Cloud_Based_Replication_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Air Quality Monitoring in Indian Metropolitan Cities: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170345</link>
        <id>10.14569/IJACSA.2026.0170345</id>
        <doi>10.14569/IJACSA.2026.0170345</doi>
        <lastModDate>2026-03-31T07:33:41.0970000+00:00</lastModDate>
        
        <creator>Khushbu Chauhan</creator>
        
        <creator>Kruti Sutaria</creator>
        
        <subject>AQI; COVID; SVM; Gradient Boosting Machine (GBM); Extreme Gradient Boosting (XGBoost)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Pure and clean air is essential for a healthy ecosystem. Air pollution is becoming a critical global concern for both the environment and human health. The presence of harmful pollutants such as PM2.5, PM10, CO2, NO2, SO2, and O3 continuously degrades air quality and influences climatic conditions. This study presents a comprehensive air quality monitoring comparison between traditional and advanced ensemble-based machine learning models. To monitor air quality, data were collected from major metropolitan cities of India from 2015 to 2023, spanning three phases: pre-COVID, during COVID-19, and post-COVID. After pre-processing the data, a baseline supervised machine learning method, Support Vector Machine (SVM), was applied for its ease of implementation. Ensemble-based machine learning techniques, Gradient Boosting Machine (GBM) and Extreme Gradient Boosting Machine (XGBM), were then trained and evaluated to obtain better predictive analysis. The systematic analysis is assessed using different performance parameters: R&#178;, Mean Squared Error, Root Mean Squared Error, and Mean Absolute Error. The outcomes indicate that XGBM achieves superior predictive accuracy and robustness across most cities and time periods, and better captures spatial and temporal variability in performance. The key findings highlight the importance of location-specific modelling strategies and demonstrate the potential of ensemble learning models for reliable urban air quality monitoring.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_45-Machine_Learning_Based_Air_Quality_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Experimental Evaluation of Adaptive Load Balancing Strategies in Software-Defined Networks Using Mininet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170344</link>
        <id>10.14569/IJACSA.2026.0170344</id>
        <doi>10.14569/IJACSA.2026.0170344</doi>
        <lastModDate>2026-03-31T07:33:41.0630000+00:00</lastModDate>
        
        <creator>M Shona</creator>
        
        <creator>Rinki Sharma</creator>
        
        <subject>Load balancing; control plane; data plane; OpenFlow; Mininet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Software-Defined Networking (SDN) introduces centralized control and programmability, enabling more flexible and efficient network management compared to traditional architectures. Load balancing is a critical SDN application that improves resource utilization, reduces latency, and enhances service reliability. However, implementing SDN-based load balancers involves several challenges, such as controller overhead, scalability issues, dynamic traffic handling, and protocol integration. This study investigates these challenges and presents practical approaches for implementing SDN load balancers using the Mininet emulation environment. Different load-balancing algorithms are implemented and evaluated, highlighting the trade-offs between static and dynamic techniques. The study also examines traffic generation tools supported by Mininet. Furthermore, the performance of various SDN controllers, including Ryu, POX, OpenDaylight (ODL), ONOS, and Floodlight, is assessed using metrics such as throughput and round-trip time. Key performance evaluation metrics and their computation methods are also discussed. The goal of this research is to examine the challenges of implementing load balancing in Software-Defined Networking, and to explore effective methods for designing and evaluating SDN-based load-balancing solutions using a Mininet test environment.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_44-Design_and_Experimental_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Latent-Representation-Based Algorithm with Dynamic Obstacle Avoidance (LADy) for Autonomous Mobile Robots (AMRs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170343</link>
        <id>10.14569/IJACSA.2026.0170343</id>
        <doi>10.14569/IJACSA.2026.0170343</doi>
        <lastModDate>2026-03-31T07:33:41.0330000+00:00</lastModDate>
        
        <creator>Harishma Prakash</creator>
        
        <creator>Prasina A</creator>
        
        <creator>Samuthira Pandi V</creator>
        
        <creator>Naregalkar Akshaykumar Rangnath</creator>
        
        <creator>Rajalingam A</creator>
        
        <creator>Sundar R</creator>
        
        <subject>Path planning; semantic encoding; Autonomous Mobile Robots (AMRs); warehouse navigation; dynamic environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Conventional path-planning algorithms are often not tailored to industrial and warehouse settings, creating the need to integrate two or more planners, which increases memory and computation demands. To overcome this limitation, this study develops a novel algorithm, semantic cost encoding-based A* with dynamic obstacle avoidance, specifically designed for Autonomous Mobile Robots (AMRs) in warehouses. The proposed algorithm is benchmarked in Matplotlib against A* and RRT* in a static environment, showing 60.33% higher memory efficiency and 60.36% greater efficiency in terms of the number of computed nodes than RRT*, while remaining equivalent to A*. It is further benchmarked in a dynamic environment against D* Lite, as well as against a hybrid algorithm representing a simplified interpretation of commercial AMR path-planning approaches, showing 45.30% higher memory efficiency than the hybrid algorithm and proving far better suited than D* Lite to real-time implementation on AMRs.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_43-A_Novel_Latent_Representation_Based_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Prediction in Performance-Critical Tasks: A Systematic Review of Physiological Signals and Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170342</link>
        <id>10.14569/IJACSA.2026.0170342</id>
        <doi>10.14569/IJACSA.2026.0170342</doi>
        <lastModDate>2026-03-31T07:33:41.0030000+00:00</lastModDate>
        
        <creator>Norhawani Ahmad Teridi</creator>
        
        <creator>Tengku Mohd Tengku Sembok</creator>
        
        <creator>Muhammad Fairuz Abd Rauf</creator>
        
        <creator>Nurhafizah Moziyana Mohd Yusop</creator>
        
        <creator>Zuraini Zainol</creator>
        
        <creator>Shahrulfadly Rustam</creator>
        
        <creator>Azlinda Abdul Aziz</creator>
        
        <creator>Hazri Haidar</creator>
        
        <creator>Mohd Fahmi Mohamad Amran</creator>
        
        <subject>Emotion prediction; physiological signals; deep learning; multimodal fusion; performance-critical tasks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Emotions strongly influence how people think, decide, and perform, making reliable emotion forecasting essential in performance-critical environments. Traditional methods such as facial expressions, speech, and self-reports often lack reliability and continuity. Physiological signals offer a more objective alternative, providing continuous indicators of emotional states, while deep learning models are well-suited to capturing their non-linear temporal characteristics. Unlike prior reviews that primarily focus on general emotion recognition or isolated model performance, this study specifically examines emotion prediction in performance-critical contexts through the combined analysis of physiological signals, deep learning architectures, and task-driven requirements. This systematic review synthesizes recent studies on emotion prediction using physiological data and deep learning models. Following the PRISMA framework, relevant studies published between 2021 and 2025 were identified from the Dimensions AI and Web of Science databases, resulting in 25 eligible articles. The review examines trends in physiological modalities, deep learning architectures, emotion representations, and evaluation practices. Beyond summarizing these trends, the review provides a structured comparative synthesis that organizes existing studies according to physiological signal modality, model architecture, performance-critical task context, emotion representation, and evaluation practices, thereby offering methodological guidance for future emotion prediction system design. Findings show that EEG is the most widely used modality, frequently combined with peripheral signals such as heart rate variability, electrodermal activity, and electrocardiography in multimodal systems. Hybrid architectures, particularly CNN–LSTM models, dominate current approaches, although attention-based and lightweight models are gaining traction. Key challenges remain, including inter-subject variability, limited real-world validity, inconsistent emotion modeling, and non-standardized evaluation. This review highlights current gaps and offers guidance for developing more robust emotion prediction systems in high-performance contexts.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_42-Emotion_Prediction_in_Performance_Critical_Tasks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Neural Networks and Ensemble Learning for Mineral Prospectivity Mapping Using Geochemical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170341</link>
        <id>10.14569/IJACSA.2026.0170341</id>
        <doi>10.14569/IJACSA.2026.0170341</doi>
        <lastModDate>2026-03-31T07:33:40.9570000+00:00</lastModDate>
        
        <creator>Kholod M. Alzubidi</creator>
        
        <creator>Alaa O. Khadidos</creator>
        
        <creator>Adil O. Khadidos</creator>
        
        <creator>Haitham M. Baggazi</creator>
        
        <creator>Fahad M. Alharbi</creator>
        
        <creator>Razan Alamoudi</creator>
        
        <subject>Rare mineral mapping; geochemical data; geospatial analysis; GNN; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Mineral exploration is inherently challenging because geological formations are complex and geochemical relationships are often nonlinear and spatially variable. Although artificial intelligence has recently shown strong potential in improving mineral potential mapping, many existing approaches struggle to fully capture spatial relationships within geochemical data. In this study, an integrated framework that combines Graph Neural Networks (GNNs), ensemble learning classifiers, and unsupervised K-means clustering was developed to analyze geochemical data from Saudi Arabia. The geochemical samples were modeled as a spatial graph, where each node represents a sampling location, and the connections between nodes reflect their geographic proximity. This structure allows the GNN to better capture spatial relationships within the data, while ensemble models serve as baseline methods for performance comparison. K-means clustering was further used to examine spatial patterns and highlight potential mineralization zones. The proposed approach achieved strong predictive results, with classification accuracies reaching 85.08% for lithium and 90.62% for tungsten, alongside comparable performance for other elements. Overall, these results demonstrate the value of incorporating spatially-aware artificial intelligence techniques to support more accurate mineral exploration and more informed resource management.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_41-Graph_Neural_Networks_and_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedagogical Mediation Through Prompt Engineering: An Expert Evaluation of AI-Generated Feedback on Islamic-Integrated EFL Argumentative Writing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170340</link>
        <id>10.14569/IJACSA.2026.0170340</id>
        <doi>10.14569/IJACSA.2026.0170340</doi>
        <lastModDate>2026-03-31T07:33:40.9230000+00:00</lastModDate>
        
        <creator>Sari Dewi Noviyanti</creator>
        
        <creator>Rudi Hartono</creator>
        
        <creator>Hendi Pratama</creator>
        
        <creator>Seful Bahri</creator>
        
        <subject>AI-generated feedback; pedagogical mediation; prompt engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This research tested whether prompt engineering could act as a type of pedagogical mediation to enhance the quality of AI-generated student feedback on EFL students&#39; argumentative essays in an Islamic education system. An initial pool of eight expert raters (four primary raters and four inter-raters) scored the AI-generated feedback from Claude Sonnet 4 using 12 systematically developed prompts to elicit feedback. Raters were asked to score feedback produced by these prompts across four areas of evaluation: pedagogy, linguistics, Islamic content, and AI reliability. The highest rated configuration was Prompt 4 (Feedback-only sequencing, English Lecturer Persona), with a mean rating of 31.00 (out of 35) across all categories. A Friedman test showed there were statistically significant differences among the four evaluative categories, χ&#178;(3) = 30.077, p &lt; .001. Additionally, inter-rater reliabilities were high for each of the possible pairs of raters (r = .89–.96). Overall, this research suggests that prompt engineering is a potentially viable method of pedagogical mediation, allowing educators to develop more culturally responsive and pedagogically relevant AI-generated feedback systems for Islamic EFL higher education settings.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_40-Pedagogical_Mediation_Through_Prompt_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Hybrid CURE–SNE Model for High-Dimensional Data Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170339</link>
        <id>10.14569/IJACSA.2026.0170339</id>
        <doi>10.14569/IJACSA.2026.0170339</doi>
        <lastModDate>2026-03-31T07:33:40.8930000+00:00</lastModDate>
        
        <creator>Dewi Sartika Br Ginting</creator>
        
        <creator>T. H. F. Harumy</creator>
        
        <creator>Ade Sarah Huzaifah</creator>
        
        <creator>Ivanny Putri Marianto</creator>
        
        <subject>Hybrid clustering; CURE-SNE; stunting; Davies-Bouldin index; silhouette score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Stunting remains a critical public health issue in rural communities, largely driven by inadequate nutrition, poor sanitation, and unfavorable socioeconomic conditions. This study proposes a hybrid clustering approach by integrating Clustering Using Representatives (CURE) with t-distributed Stochastic Neighbor Embedding (t-SNE) to analyze stunting prevalence and support the optimization of child nutrition strategies. Secondary data were collected from publicly accessible national health and nutrition repositories, comprising 500 child records with multiple parameters, including anthropometric indicators, nutritional intake, maternal characteristics, environmental sanitation, and socioeconomic factors. The t-SNE algorithm was employed to reduce the high-dimensional data into a two-dimensional space while preserving neighborhood structures, followed by the application of the CURE algorithm to construct clusters that are robust to noise and outliers. Experimental results indicate that the proposed CURE–SNE approach successfully formed four distinct clusters, namely C1 Very High Stunting Risk with 128 data points (25.6%), C2 High Stunting Risk with 142 data points (28.4%), C3 Moderate/Transitional Stunting Risk with 117 data points (23.4%), and C4 Low Stunting Risk with 113 data points (22.6%). Cluster quality evaluation demonstrates that the hybrid CURE–SNE method achieves a higher Silhouette Score and a lower Davies Bouldin Index compared to the CURE only approach, indicating improved cluster separation and compactness. These findings confirm that combining dimensionality reduction with representative-based clustering enhances the interpretability of stunting patterns and provides a reliable analytical foundation for designing targeted and data-driven child nutrition interventions in rural settings.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_39-An_Improved_Hybrid_CURE_SNE_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Efficient Cluster Head Rotation in WSNs Using Bee Colony Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170338</link>
        <id>10.14569/IJACSA.2026.0170338</id>
        <doi>10.14569/IJACSA.2026.0170338</doi>
        <lastModDate>2026-03-31T07:33:40.8770000+00:00</lastModDate>
        
        <creator>Azamuddin Bin Ab Rahman</creator>
        
        <creator>Sakib Iqram Hamim</creator>
        
        <subject>Energy efficiency; load balancing; cluster head rotation; bee colony optimization; metaheuristic clustering; Wireless Sensor Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>In this study, we present a method designed to improve energy efficiency and balance the workload across Wireless Sensor Networks (WSNs). Our approach dynamically selects and rotates cluster heads (CHs) based on factors such as remaining energy, node mobility, distance to the base station, and data processing needs. By focusing on nodes with more energy and lower mobility, we aim to extend the network&#39;s operational life and prevent any single node from being overburdened. At the heart of our method is the Artificial Bee Colony (ABC) optimization algorithm, which mimics the foraging behavior of bees. This algorithm helps to identify the best nodes to act as CHs, balancing the energy load across the network and maintaining strong connectivity within clusters. Our simulations show that this method outperforms existing protocols like FEEC and PSAP-WSN, particularly when it comes to distributing energy more evenly and extending the network&#39;s lifespan. By continuously rotating the CHs, we ensure that energy consumption is spread out, leading to improved network performance and sustainability. The results indicate that this dynamic and adaptive approach is highly effective in maintaining a balanced energy distribution, making it a robust solution for energy management in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_38-Energy_Efficient_Cluster_Head_Rotation_in_WSNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Deep Learning for Automated Skin Cancer Detection Using Advanced CNN Architectures on Dermoscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170337</link>
        <id>10.14569/IJACSA.2026.0170337</id>
        <doi>10.14569/IJACSA.2026.0170337</doi>
        <lastModDate>2026-03-31T07:33:40.8470000+00:00</lastModDate>
        
        <creator>Adel Rajab</creator>
        
        <subject>Deep learning models; skin cancer detection; image processing; Grad-CAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Skin cancer is a considerable health issue worldwide, occurring when pigment cells turn malignant. However, diagnosing skin lesions is difficult for dermatologists because most lesions have similar characteristics. Early detection is essential because it significantly increases treatment success and survival rates. In the past few decades, the rapid development of artificial intelligence has made it possible to build automated diagnostic systems based on large histopathology-validated image datasets. In this study, we introduce a deep learning solution for multi-class skin cancer classification based on state-of-the-art convolutional neural networks (CNNs) on the HAM10000+ISC image dataset. We used pre-trained CNN backbones, InceptionV3, DenseNet121, ResNet50, and VGG16, initialized with weights from ImageNet, for feature extraction, fine-tuning, and evaluation. Among the models, InceptionV3 achieved the highest accuracy of 76% and an ROC score of 0.967. To enhance interpretability, we used explainable AI (XAI) methods, Grad-CAM, Grad-CAM++, and class-wise attention maps, to examine both correctly and incorrectly classified images. The experiments demonstrate that the proposed system is characterized not only by high classification accuracy but also by the ability to explain and visualize its decisions, which is a significant advantage for dermatologists in diagnosing skin cancer early and correctly.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_37-Explainable_Deep_Learning_for_Automated_Skin_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scaled Agile Process Improvement Recommendations with CMMI 2-Based Agile Scaling Model: A Case Study of the Indonesia National Single Window Agency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170336</link>
        <id>10.14569/IJACSA.2026.0170336</id>
        <doi>10.14569/IJACSA.2026.0170336</doi>
        <lastModDate>2026-03-31T07:33:40.8130000+00:00</lastModDate>
        
        <creator>I Made Aditya Pradnyadipa Mustika</creator>
        
        <creator>Betty Purwandari</creator>
        
        <creator>Alex Ferdinansyah</creator>
        
        <subject>Scaling Agile; CMMI; Scrum; KPA rating; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The Indonesia National Single Window System (SINSW) is a platform developed by the Indonesia National Single Window Agency (LNSW) using Agile methodology, specifically the Scrum framework. Several challenges were identified during its development, including deviations from the Scrum Guide and the absence of formal, regular events necessary for effective team coordination and alignment. These issues revealed gaps in Scrum implementation and broader difficulties associated with scaling Agile practices beyond the team level. Therefore, this study applied a scaling Agile model based on Capability Maturity Model Integration (CMMI) 2 to evaluate the agency&#39;s existing Agile process and recommend targeted improvements to Agile practices. The evaluation involved qualitative interviews with key stakeholders, including the Project Management Officer, System Analyst, and developers. The interviews were subsequently quantified using the Key Process Area (KPA) rating framework. The findings led to actionable recommendations to optimize the Agile process, improve team collaboration, and support SINSW’s success.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_36-Scaled_Agile_Process_Improvement_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CEEMDAN–SSA–RWKV–SMA: A Robust Hybrid Model for Long-Term Wind Speed Forecasting in India</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170335</link>
        <id>10.14569/IJACSA.2026.0170335</id>
        <doi>10.14569/IJACSA.2026.0170335</doi>
        <lastModDate>2026-03-31T07:33:40.7830000+00:00</lastModDate>
        
        <creator>S. Vidya</creator>
        
        <subject>Wind speed forecasting; Complete Ensemble Empirical Mode Decomposition with Adaptive Noise; Singular Spectrum Analysis; RWKV neural network; Slime Mould Algorithm; Renewable energy integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Reliable long-term wind speed forecasting is a critical requirement for the strategic deployment and operational stability of wind energy systems, particularly in meteorologically diverse regions like India. This study proposes a novel hybrid framework, CEEMDAN–SSA–RWKV–SMA, which integrates advanced signal decomposition, deep sequence modeling, and metaheuristic optimization. Initially, the raw wind speed time series is decomposed using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) to extract multi-scale Intrinsic Mode Functions (IMFs). To enhance signal clarity and reduce dimensionality, each IMF is further processed using Singular Spectrum Analysis (SSA). The resulting denoised and trend-extracted components are modeled using the Receptance Weighted Key Value (RWKV) neural network, a recent Transformer-RNN hybrid designed to capture long-range temporal dependencies efficiently. To optimize RWKV hyperparameters and SSA windowing parameters, the Slime Mould Algorithm (SMA) is employed as a global metaheuristic optimizer. Empirical evaluations on multi-regional Indian wind datasets demonstrate that the proposed framework consistently outperforms conventional models such as LSTM, Transformer, and CEEMDAN-LSTM in terms of MAE, RMSE, and MAPE. The proposed CEEMDAN–SSA–RWKV–SMA framework is a reliable forecasting strategy for improving wind energy integration in non-stationary and resource-critical environments.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_35-CEEMDAN_SSA_RWKV_SMA_A_Robust_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prickly Pear Disease Classification Using Deep Convolutional Neural Networks: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170334</link>
        <id>10.14569/IJACSA.2026.0170334</id>
        <doi>10.14569/IJACSA.2026.0170334</doi>
        <lastModDate>2026-03-31T07:33:40.7530000+00:00</lastModDate>
        
        <creator>Raghiya Elghawth</creator>
        
        <creator>Wafae Abbaoui</creator>
        
        <creator>Soumia Ziti</creator>
        
        <subject>Plant disease classification; data augmentation; deep learning; prickly pear disease; MobileNetV2; DenseNet121</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Prickly pear (Opuntia ficus-indica) is a member of the Cactaceae family. Because of its anti-inflammatory, antioxidant, antibacterial, hypoglycemic, and neuroprotective properties, the prickly pear is regarded as a remarkable fruit. Both the fruit and its stem are utilized in value-added products. Deep learning (DL) applications are needed for prickly pear disease detection and classification. To the best of our knowledge, no previous study has investigated prickly pear disease classification using convolutional neural networks. In this study, we propose the use of the deep convolutional neural networks MobileNetV2 and DenseNet121 to classify prickly pear disease. A locally collected dataset from Tunisia was divided into two classes: healthy and cochineal. Data augmentation techniques were applied to increase the number of images. The augmented data were then fed as input into the MobileNetV2 and DenseNet121 networks. The experimental results show that MobileNetV2 achieved a precision, recall, and F1-score of 96.55% for healthy plants. For diseased plants, precision, recall, and F1-score reached 97.14%. Overall, the model obtained a classification accuracy of 96.88%. DenseNet121 achieved precision, recall, and F1-score values of 90.62%, 100%, and 95.08%, respectively, for healthy plants. For diseased plants, the precision, recall, and F1-score were 100%, 91.43%, and 95.52%, respectively, resulting in an overall classification accuracy of 95.31%. Both proposed deep learning models, MobileNetV2 and DenseNet121, demonstrate strong performance on the prickly pear dataset.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_34-Prickly_Pear_Disease_Classification_Using_Deep_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weight Trajectory Prediction in Precision Livestock Farming Using Machine Learning: A Comparative Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170333</link>
        <id>10.14569/IJACSA.2026.0170333</id>
        <doi>10.14569/IJACSA.2026.0170333</doi>
        <lastModDate>2026-03-31T07:33:40.7200000+00:00</lastModDate>
        
        <creator>Moad Hakem</creator>
        
        <creator>Zakaria Boulouard</creator>
        
        <creator>Mohamed Kissi</creator>
        
        <subject>Machine learning; data science; precision livestock farming; weight trajectory prediction; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Accurate livestock body weight prediction is a key component of precision livestock farming, as it supports herd monitoring, production management, and planning in response to the increasing global demand for meat. Existing approaches for weight prediction include age-based regression models, growth trajectory modelling, average daily gain estimation, and methods relying on morphometric measurements or image-derived features. However, many of these approaches require frequent measurements or specialized data acquisition systems, which are often costly and difficult to deploy under practical farming conditions. This study presents a comparative evaluation of data-driven models for livestock body weight trajectory prediction under low-measurement conditions. A matrix factorization approach and four ensemble-based machine learning methods, namely XGBoost, LightGBM, CatBoost, and ExtraTrees, were evaluated using a dataset of Holstein cows. Model performance was assessed using standard regression metrics, including root mean squared error, mean absolute error, and mean absolute percentage error, with five-fold cross-validation employed to ensure robustness. The results show that ensemble learning methods consistently outperform matrix factorization techniques when only a limited number of weight measurements per animal are available. More specifically, XGBoost achieves the best predictive performance when only one historical measurement per animal is available, whereas ExtraTrees provides the most accurate predictions when two or three historical measurements are available. These findings demonstrate that accurate and cost-effective livestock weight prediction can be achieved from sparse routine body weight records, without relying on dense longitudinal sampling, image-based systems, or extensive morphometric measurements, thereby supporting the practical deployment of predictive tools in precision livestock farming systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_33-Weight_Trajectory_Prediction_in_Precision_Livestock_Farming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Immersive Educational Application Based on Unity for Learning the Quechua Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170332</link>
        <id>10.14569/IJACSA.2026.0170332</id>
        <doi>10.14569/IJACSA.2026.0170332</doi>
        <lastModDate>2026-03-31T07:33:40.6900000+00:00</lastModDate>
        
        <creator>Giancarlo Eliseo Arrieta Villarreal</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Quechua; computer application; information technology (software)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Learning indigenous languages such as Quechua helps preserve cultural identity by narrowing educational gaps, in line with Sustainable Development Goal 4, which promotes inclusive and quality education. The objective was to develop an immersive educational application based on Unity to improve the learning of the Quechua language. This was an applied research study with a quantitative approach and a pre-experimental design, in which a pre- and post-test was administered to a group of young people. The results demonstrated that the Unity-based immersive educational app significantly improved recognition of Quechua vocabulary (Z = -4.149, p &lt; 0.001), increased performance on interactive activities (mean Level 2 = 15.58 vs. Level 1 = 13.17; error rate reduced from 34.17% to 22.08%), and decreased the overall error rate in language use (Z = -4.149, p &lt; 0.001), demonstrating its effectiveness in language learning and accuracy. In conclusion, virtual reality proved to be an effective and motivating tool for learning Quechua, promoting quality education and an appreciation for Peruvian cultural heritage.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_32-Immersive_Educational_Application_Based_on_Unity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Convective Initiation Based on Spatio-Temporal Feature Joint Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170331</link>
        <id>10.14569/IJACSA.2026.0170331</id>
        <doi>10.14569/IJACSA.2026.0170331</doi>
        <lastModDate>2026-03-31T07:33:40.6430000+00:00</lastModDate>
        
        <creator>Runzhe Tao</creator>
        
        <creator>Rui Chen</creator>
        
        <creator>Peibei Zheng</creator>
        
        <creator>Zibo Hong</creator>
        
        <subject>Semantic segmentation; remote sensing imagery; convective initiation; spatiotemporal feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>As a key indicator of the occurrence of severe convection, convective initiation (CI) exhibits characteristics such as fragmentation, scale heterogeneity, and susceptibility to confusion with other cloud systems in single-temporal remote sensing imagery, posing significant challenges for accurate CI detection. Traditional threshold-based methods inadequately capture spatial representations and have limited generalization capabilities, while existing deep learning approaches fail to fully utilize the temporal correlation features of the same target cloud cluster, resulting in a high false alarm rate. To address these challenges, based on the physical laws of convective development, we propose a spatiotemporal feature fusion-based CI detection model, namely Ti-UHRNet. The model integrates three core designs: integrating digital elevation model geographic information at the input layer to quantify the topographic modulation on convective development and enhance the physical consistency of features; adopting U-HRNet embedded with attention-gated feature fusion as the backbone to extract multi-scale features efficiently, filter critical information dynamically, and retain high-resolution spatial details of convective clouds; and designing a multi-head self-attention-based TransTrack module with multi-temporal inputs to capture the dynamic evolution information of convective clouds within a 15-minute window, thereby distinguishing them from other cloud systems. Experimental results show that compared with several advanced 2D and 3D convolutional segmentation methods, Ti-UHRNet achieves the best performance in extracting the spatiotemporal features of rapidly developing convective cloud clusters. On the test set, it attains a probability of detection of 0.954, a false alarm rate of 0.082, and a critical success index of 0.879. Verified against ground-based radar echoes, the model enables effective early warning of severe convective weather 15–30 minutes in advance.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_31-Segmentation_of_Convective_Initiation_Based_on_Spatio_Temporal_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Web System for Predicting and Classifying Financial Incentives in the Automotive Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170330</link>
        <id>10.14569/IJACSA.2026.0170330</id>
        <doi>10.14569/IJACSA.2026.0170330</doi>
        <lastModDate>2026-03-31T07:33:40.6100000+00:00</lastModDate>
        
        <creator>Antony Jesus Ramirez Rivas</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Artificial intelligence; fiscal policy; forecasting; sustainable development; automobile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This research presents the development of a web-based system using machine learning to predict and classify financial incentives in the automotive sector, contributing to Sustainable Development Goal 9 (Industry, Innovation and Infrastructure) and SDG 12 (Responsible Consumption and Production). The main objective was to design and implement an intelligent system that enhances decision-making regarding incentives such as exemptions (EXEM), natural gas subsidies (GNT), and tax benefits (TAX). The study employed a quantitative approach, applied type, and pre-experimental design, assessing model performance through accuracy, error rate, and response time metrics. Results showed an accuracy of 93.44%, a 45.12% reduction in error rate, and an average response time of 0.13 seconds. It is concluded that the proposed system significantly improves efficiency in predicting financial incentives, positioning itself as a viable technological tool for the automotive sector and economic sustainability.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_30-Machine_Learning_Based_Web_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning-Based Adaptive Penetration Testing Framework for Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170329</link>
        <id>10.14569/IJACSA.2026.0170329</id>
        <doi>10.14569/IJACSA.2026.0170329</doi>
        <lastModDate>2026-03-31T07:33:40.5630000+00:00</lastModDate>
        
        <creator>Saken Tleuberdin</creator>
        
        <creator>Konstantin Malakhov</creator>
        
        <creator>Nurlan Tashatov</creator>
        
        <creator>Dina Satybaldina</creator>
        
        <creator>Didar Yedilkhan</creator>
        
        <subject>Internet of things; security; penetration testing; reinforcement learning; wireless communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Wireless Fidelity (Wi-Fi) technology is widely used in Internet of Things (IoT) environments, and the importance of security assessment has increased accordingly. Currently, Wi-Fi security assessment relies on manually operated security tools, and such methods face issues due to the lack of automation. In this study, we propose a method for adaptive Wi-Fi penetration testing in which a reinforcement learning (RL) agent interacts with the environment by choosing actions based on the current state to maximize the total reward received. We model a tabular Q-learning algorithm as an agent interacting with the Wi-Fi environment. The action space consists of denial-of-service attacks, while the environment state vector includes network parameters and indicators of attack success, which all contribute to the reward function. The experiments show that the RL agent successfully finds vulnerabilities in the Wi-Fi Protected Access 2 (WPA2) and Wi-Fi Protected Access 3 (WPA3) protocols.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_29-Reinforcement_Learning_Based_Adaptive_Penetration_Testing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating ChatGPT for Grading Programming Assignments: Effectiveness, Fairness, and Student Perceptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170328</link>
        <id>10.14569/IJACSA.2026.0170328</id>
        <doi>10.14569/IJACSA.2026.0170328</doi>
        <lastModDate>2026-03-31T07:33:40.5330000+00:00</lastModDate>
        
        <creator>Abedallah Zaid Abualkishik</creator>
        
        <creator>Sherzod Turaev</creator>
        
        <creator>Ali A. Alwan</creator>
        
        <creator>Mohamed Elhoseny</creator>
        
        <creator>Mohsin Murtaza</creator>
        
        <subject>AI-assisted grading; ChatGPT; automated grading; programming assignments; higher education; grading reliability; rubric-based evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study investigates ChatGPT as an automated grading tool for programming assignments in higher education. Three datasets comprising Python, C++, and Java assignments were graded three times by ChatGPT and compared with faculty evaluations. Results show that ChatGPT achieves high grading accuracy, closely aligning with faculty scores and demonstrating statistically significant correlations. Statistical analyses using the Kolmogorov–Smirnov test, paired t-test, and Wilcoxon signed-rank test confirm overall agreement, although ChatGPT tends to apply stricter grading criteria. High intraclass correlation coefficients further indicate strong reliability and consistency across repeated grading attempts. The study highlights the critical role of well-defined rubrics in improving grading alignment and proposes an Instructor–AI Collaborative Rubric Development framework to support effective AI integration in assessment. A survey of 158 students indicates increased satisfaction and trust following disclosure of AI-assisted grading, although some still prefer human evaluation. Overall, the findings provide strong evidence that ChatGPT is a reliable and consistent grading tool, demonstrating close alignment with faculty evaluations and high reproducibility across attempts. However, its effectiveness is critically dependent on well-defined rubrics and requires human oversight to mitigate strictness, ensure fairness, and account for contextual nuances. These results strongly support a hybrid AI–human grading approach, grounded in transparent rubric design and reinforced by appropriate ethical safeguards.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_28-Evaluating_ChatGPT_for_Grading_Programming_Assignments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Sign Classification Under Varying Lighting Conditions in the Philippines Using Transfer Learning with ResNet50 and Zero-DCE</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170327</link>
        <id>10.14569/IJACSA.2026.0170327</id>
        <doi>10.14569/IJACSA.2026.0170327</doi>
        <lastModDate>2026-03-31T07:33:40.5030000+00:00</lastModDate>
        
        <creator>John Paul Q. Tomas</creator>
        
        <creator>Carlo Miguel P. Legaspi</creator>
        
        <creator>Karl Anthony S. Dalangin</creator>
        
        <creator>Gabriel Paul Q. Lim</creator>
        
        <subject>Traffic Sign Recognition (TSR); Traffic Sign Classification (TSC); Advanced Driver-Assistance System (ADAS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study presents a multi-stage transfer learning approach for improving traffic sign recognition performance under both normal and low-light conditions, addressing the gap between existing datasets and the real-world road environments of the Philippines, where poor lighting, faded signs, and unstructured roads are common. A curated local dataset of 7 commonly encountered traffic sign classes comprising approximately 5,000 manually localized images was constructed and split into training, validation, and test sets (70–10–20 ratio). Five model configurations were developed and compared: a VGG-inspired baseline trained from scratch, a standard ResNet50 transfer learning model, a multiphase ResNet50 model pretrained on the GTSRB dataset, and two corresponding variants enhanced using Zero-DCE low-light preprocessing. The baseline achieved 92.17% accuracy, while the standard ResNet50 models performed similarly with and without Zero-DCE (92.10–92.45%). The multiphase ResNet50 significantly improved accuracy to 96.43% by leveraging domain-aligned pretraining, and the highest performance was achieved by its Zero-DCE-enhanced counterpart at 98.21%, showing more balanced metrics and improved recognition stability. These results indicate that low-light enhancement alone does not guarantee better performance, but becomes highly effective when paired with a feature extractor already specialized in traffic sign features. Overall, the proposed multiphase, Zero-DCE–assisted pipeline provides a strong and scalable solution for traffic sign recognition in low-visibility Philippine conditions, with potential applications in ADAS and autonomous driving systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_27-Traffic_Sign_Classification_Under_Varying_Lighting_Conditions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Study on the Proposed Multi-Agent Backdoor Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170326</link>
        <id>10.14569/IJACSA.2026.0170326</id>
        <doi>10.14569/IJACSA.2026.0170326</doi>
        <lastModDate>2026-03-31T07:33:40.4870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Multi-layered backdoor detection system; keyword-triggered attack; semantic backdoor; distributed multi-agent attack; multi-agent LLM; Byzantine fault tolerance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The proposed multi-layered backdoor detection system was evaluated across 10 diverse scenarios, including benign tasks, keyword-triggered attacks, semantic backdoors, and distributed multi-agent attacks. The simulation experiments comprised 10 scenarios in total (5 attack, 5 benign), evaluated against 5 detection mechanisms using a 3-agent pipeline architecture with a dedicated auditor. All experiments executed successfully with comprehensive logging and tracing enabled. The system achieved perfect detection with zero false positives. The simulation experiments validate the effectiveness of the multi-layered defense architecture for detecting distributed backdoors in multi-agent LLM systems. These results demonstrate that architectural security approaches—treating multi-agent systems as distributed computing environments with Byzantine fault tolerance—can provide robust protection against sophisticated backdoor attacks without requiring model-level guarantees or training data access.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_26-Simulation_Study_on_the_Proposed_Multi_Agent_Backdoor_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time LiDAR SLAM-Driven Navigation and Collision Avoidance for Mobile Robots in Unstructured Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170325</link>
        <id>10.14569/IJACSA.2026.0170325</id>
        <doi>10.14569/IJACSA.2026.0170325</doi>
        <lastModDate>2026-03-31T07:33:40.4700000+00:00</lastModDate>
        
        <creator>Amandyk Tuleshov</creator>
        
        <creator>Anar Adilkhan</creator>
        
        <creator>Moldir Kuatova</creator>
        
        <creator>Gaukhar Seidaliyeva</creator>
        
        <subject>LiDAR SLAM; autonomous navigation; obstacle avoidance; path planning; mobile robots; real-time mapping; 3D point cloud processing; loop closure optimization; sensor fusion; robotic perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Autonomous navigation in unknown environments requires accurate simultaneous localization and mapping, reliable obstacle detection, and efficient path planning within a unified framework. This study proposes a real-time LiDAR-based SLAM-driven navigation system for mobile robots operating in structured indoor environments. The developed architecture integrates three-dimensional LiDAR sensing, ego-motion estimation, scan registration, loop closure optimization, and collision-aware trajectory planning to achieve robust environmental reconstruction and safe autonomous mobility. A probabilistic measurement model is employed to relate sensor observations to robot pose and map states, while back-end optimization mitigates cumulative drift and enhances global consistency. The navigation module incorporates obstacle segmentation and goal-directed path generation, ensuring smooth and collision-free trajectories under kinematic constraints. Experimental validation is conducted in both incremental and full-environment exploration scenarios using a physical robotic platform equipped with LiDAR and auxiliary sensors. Results demonstrate consistent mapping accuracy, stable trajectory estimation, and effective obstacle avoidance in cluttered indoor settings. The system maintains real-time computational performance while preserving the structural coherence of reconstructed environments. The findings confirm the reliability and scalability of the proposed framework, providing a practical foundation for autonomous robotic navigation in semi-structured and unstructured operational domains.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_25-Real_Time_LiDAR_SLAM_Driven_Navigation_and_Collision_Avoidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule-Based Myanmar Herbal Recommendation System Using Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170324</link>
        <id>10.14569/IJACSA.2026.0170324</id>
        <doi>10.14569/IJACSA.2026.0170324</doi>
        <lastModDate>2026-03-31T07:33:40.4400000+00:00</lastModDate>
        
        <creator>Nang Saing Horm</creator>
        
        <creator>Nikom Suvonvorn</creator>
        
        <subject>Myanmar herbal medicine; ontology; recommendation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Myanmar herbal medicine is recognized as a vital component of traditional healthcare; however, its documentation remains disorganized and primarily available in the local language. Identifying appropriate herbs for individual users from existing records is inefficient and may result in medication errors. This study presents a formalized, digitized representation of Myanmar herbal knowledge using an ontology-based framework that enables precise and efficient herb identification and recommendation. The ontology and rule-based recommendation system were developed through literature review, expert consultation, and analysis of volumes 1 and 2 of Medicinal Plants of Myanmar. The system’s performance was evaluated by three experts from the University of Traditional Medicine in Mandalay. The constructed ontology models 119 herbs, 17 plant parts, 12 distribution regions, 256 disease symptoms, and 23 adverse effects. Seven inference rules were defined to generate recommendations based on seven benchmark questions. The system achieved an average accuracy of 95% and a recall of 96% in recommending herbs based on symptoms, plant parts used, location, plant family, adverse effects, combinations of users’ symptoms and location, and combinations of symptoms and adverse effects through rule-based evaluations. The proposed system provides a formalized structure for preserving Myanmar herbal knowledge and offers reliable recommendations within the scope of a limited dataset and a rigid ontology structure.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_24-Rule_Based_Myanmar_Herbal_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Intelligent Control of Bi-Directional V2X Charging Using NSGA-II in an Integrated Energy Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170323</link>
        <id>10.14569/IJACSA.2026.0170323</id>
        <doi>10.14569/IJACSA.2026.0170323</doi>
        <lastModDate>2026-03-31T07:33:40.4070000+00:00</lastModDate>
        
        <creator>Muhammad Aqmal Bin Abu Hassan</creator>
        
        <creator>Ezmin Abdullah</creator>
        
        <creator>Muhammad Umair</creator>
        
        <creator>Nik Hakimi Nik Ali</creator>
        
        <creator>Roslina Mohamad</creator>
        
        <creator>Nabil M. Hidayat</creator>
        
        <subject>Genetic algorithm; NSGA-II; optimization; Non-dominated Sorting Genetic Algorithm; EV charging; bi-directional EV charger; V2X; energy management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study presents the development and evaluation of an intelligent control system for a real-time bi-directional Electric Vehicle (EV) charging infrastructure integrated with solar Photovoltaic (PV), Energy Storage Systems (ESS), and the power grid. The proposed system aims to optimize energy flow decisions toward objectives such as cost minimization, energy efficiency maximization, and prioritization of renewable sources. Two evolutionary optimization techniques are implemented and compared: a traditional single-objective Genetic Algorithm (GA) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The GA approach focuses solely on minimizing operational cost, while NSGA-II considers multiple objectives simultaneously, offering a set of optimal trade-off solutions. Real-time switching decisions are formulated based on binary control variables corresponding to relay states in the V2X energy system. Simulation results demonstrate that NSGA-II provides superior flexibility in handling multi-objective trade-offs, achieving improved solar utilization and reduced grid dependency without compromising cost efficiency. The hybrid integration of NSGA-II with rule-based override logic further enhances the system&#39;s adaptability to dynamic operating conditions, making it suitable for deployment in smart energy management applications.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_23-Multi_Objective_Intelligent_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontological Design Model for Integrating Notification, Appointment, and Queue in Healthcare Queue Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170322</link>
        <id>10.14569/IJACSA.2026.0170322</id>
        <doi>10.14569/IJACSA.2026.0170322</doi>
        <lastModDate>2026-03-31T07:33:40.3770000+00:00</lastModDate>
        
        <creator>Nik Mohd Habibullah Nik Mohd Nizam</creator>
        
        <creator>Shafrida Sahrani</creator>
        
        <creator>Mohd Nazri Kama</creator>
        
        <creator>Abdul Ghafar Jaafar</creator>
        
        <creator>Mohd Yazid Bajuri</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <subject>Ontology-based design; queue management system; appointment scheduling; notification system; healthcare information systems; design science research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Healthcare queue systems frequently suffer from prolonged waiting times, overcrowding, and inefficient patient flow management. Although various Queue Management Systems (QMS) have been developed, most existing solutions treat notification, appointment scheduling, and queue management as independent components. This fragmented design limits semantic clarity, adaptability, and reusability. This study proposes an ontology-based design model, termed OntoNAQ, which integrates Notification, Appointment, and Queue (NAQ) into a unified conceptual framework for healthcare queue systems. The study adopts the Design Science Research Methodology (DSRM) to identify conceptual gaps, design the ontological model, and demonstrate its applicability through prototype mapping and qualitative evaluation. The findings indicate that OntoNAQ provides explicit semantic relationships among NAQ components and serves as a reusable and theoretically grounded conceptual foundation for healthcare queue system design.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_22-An_Ontological_Design_Model_for_Integrating_Notification_Appointment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Heterogeneous Data for Stock Market Prediction: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170321</link>
        <id>10.14569/IJACSA.2026.0170321</id>
        <doi>10.14569/IJACSA.2026.0170321</doi>
        <lastModDate>2026-03-31T07:33:40.3300000+00:00</lastModDate>
        
        <creator>Abdullah Almusned</creator>
        
        <creator>Mohammad Mehedi Hassan</creator>
        
        <creator>Bader Alkhamees</creator>
        
        <creator>Muhammad Al-Qurishi</creator>
        
        <subject>Stock prediction; heterogeneous data; machine learning; quantitative and qualitative data; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This systematic literature review examines recent developments in stock market prediction using heterogeneous data sources that combine technical indicators, fundamental attributes, and sentiment-driven signals. Despite the growing adoption of machine learning in financial forecasting, existing research remains fragmented across data modalities, fusion strategies, and evaluation protocols, limiting comparability and practical applicability. Studies published between 2018 and 2024 were retrieved from five major scholarly databases and screened based on predefined eligibility criteria, resulting in 44 peer-reviewed articles included in the final analysis. The review synthesizes the quantitative and qualitative data modalities employed, the machine learning and deep learning methodologies adopted, the evaluation metrics used to assess predictive performance, and the principal challenges associated with multi-source stock market prediction. Findings reveal a clear shift toward deep learning architectures, hybrid fusion techniques, and the integration of external information such as news, corporate disclosures, and social media sentiment. Despite this progress, the literature exhibits inconsistent evaluation practices, limited attention to temporal data leakage, and insufficient coverage of non-English and emerging markets. This review consolidates current knowledge, presents a structured taxonomy of heterogeneous data sources and fusion strategies, and identifies open research challenges to guide future work in multimodal stock market prediction.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_21-Integrating_Heterogeneous_Data_for_Stock_Market_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Decision-Making Processes in Retail Through Artificial Intelligence for Advanced Management Information Systems: A Study on Consumer Behavior in Qassim</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170320</link>
        <id>10.14569/IJACSA.2026.0170320</id>
        <doi>10.14569/IJACSA.2026.0170320</doi>
        <lastModDate>2026-03-31T07:33:40.3000000+00:00</lastModDate>
        
        <creator>Hussain Mohammad Abu-Dalbouh</creator>
        
        <creator>Mushira Mustafa Freihat</creator>
        
        <creator>Rayah Ismaeel Jawarneh</creator>
        
        <creator>Osman Abdalla Mohamed Elhadi</creator>
        
        <creator>Mortada Ibrahim Elimam</creator>
        
        <creator>Leenah Sulaiman Almuhanna Abalkhail</creator>
        
        <creator>Ghadi Mohammed Al Nafesah</creator>
        
        <creator>Soliman Aljarboa</creator>
        
        <creator>Sulaiman Abdullah Alateyah</creator>
        
        <subject>Analytics; machine learning; data-driven observations; predictive modeling; strategic marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>In today’s rapidly evolving retail environment, the sheer volume of consumer data presents both opportunities and challenges for businesses striving to maintain a competitive edge. This study explores the pivotal role of artificial intelligence and sophisticated data mining techniques within management information systems. The study aims to transform decision-making processes and deepen the understanding of consumer behavior in the Qassim region of Saudi Arabia, while also exploring implications for broader regional markets. By employing a dataset of 712 customers that encompasses demographic variables, lifestyle choices, and purchasing patterns, we implement leading machine learning algorithms, including Decision Trees, Random Forests, and Support Vector Machines. This allows us to uncover actionable findings that drive strategic initiatives. Additionally, we analyze the impact of artificial intelligence on retailers by comparing outcomes before and after implementing AI-enhanced analytics. The investigation reveals that retailers applying AI-enhanced analytics experience a remarkable 32% improvement in their responsiveness to market changes, a 28% increase in customer retention rates, and a 34.7% improvement in repeat customers. These results highlight the substantial impact of these technologies on operational efficacy and demonstrate how AI can enhance customer loyalty, satisfaction, and overall business performance. The Random Forest model achieved the highest accuracy at 96.91%. Furthermore, this research emphasizes the effectiveness of predictive analytics in identifying distinct consumer segments and tailoring marketing strategies to meet their specific needs. By enabling retailers to respond proactively to consumer trends, AI emerges as a crucial tool for enhancing customer engagement and satisfaction. 
The findings illustrate how data analysis empowers businesses to detect emerging trends, optimize inventory management practices, and boost profitability. This research underscores the transformative potential of integrating advanced algorithms into retail operations, fostering data-informed decision-making that cultivates sustainable growth and elevates customer satisfaction in an increasingly competitive marketplace. The observations gained from this study serve as a valuable resource for retailers eager to utilize the power of AI and data mining to navigate the complexities of modern consumer behavior.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_20-Improving_Decision_Making_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Employability Factors: A Machine Learning Approach Using Association Rules in Business and Economics Graduates at Qassim University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170319</link>
        <id>10.14569/IJACSA.2026.0170319</id>
        <doi>10.14569/IJACSA.2026.0170319</doi>
        <lastModDate>2026-03-31T07:33:40.2670000+00:00</lastModDate>
        
        <creator>Hussain Mohammad Abu-Dalbouh</creator>
        
        <creator>Osman Abdalla Mohamed Elhadi</creator>
        
        <creator>Ajlan Suliman Al-Ajlan</creator>
        
        <creator>Leenah Sulaiman Almuhanna Abalkhail</creator>
        
        <creator>Abdullah Suliman Almutlaq</creator>
        
        <creator>Wejdan Aamer Alasqah</creator>
        
        <creator>Mayadah Shikh Othman</creator>
        
        <creator>Sulaiman Abdullah Alateyah</creator>
        
        <subject>Machine learning; prediction; hidden patterns; employment rates; academic performance; data analysis; modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The growing number of business and economics graduates raises concerns about employability in a competitive job market. Furthermore, scrutiny from the Saudi Education and Training Evaluation Commission on educational outcomes highlights the relevance of this research for university administrations. Current literature often overlooks the factors affecting employment outcomes for recent graduates. Understanding these factors is essential for addressing concerns. This study aims to fill these gaps by focusing on graduates from the College of Business and Economics at Qassim University, using association rule mining to uncover patterns and relationships among academic performance, skills, and employment status. This analysis uses a dataset of 407 graduates to examine factors such as gender, major, cumulative GPA, and employment status. As the job market evolves, the findings offer valuable observations for universities on aligning educational programs with employer needs. The association rules model was utilized to predict graduates&#39; likelihood of securing employment based on these attributes, showing that factors such as GPA and skills significantly impact employment outcomes. The proposed model demonstrated high accuracy in predicting employability and generated 147 association rules, indicating its effectiveness in identifying the factors that influence employment outcomes. It also reveals actionable knowledge for curriculum development. The effectiveness of the association rules in identifying the most impactful attributes related to employment outcomes reinforces the importance of addressing the skills and competencies sought by employers. The proposed model demonstrates its reliability for practical use. By aligning educational offerings with market demands, universities can enhance the employability of graduates, ensuring they are prepared for a dynamic environment. 
This research highlights the critical role of data mining in informing educational strategies and connecting academia with industry.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_19-Exploring_Employability_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AFLBCRS: Blockchain-Enabled Federated Learning with Ring Signatures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170318</link>
        <id>10.14569/IJACSA.2026.0170318</id>
        <doi>10.14569/IJACSA.2026.0170318</doi>
        <lastModDate>2026-03-31T07:33:40.2370000+00:00</lastModDate>
        
        <creator>Menna Mamdouh Orabi</creator>
        
        <creator>Osama Emam</creator>
        
        <creator>Hanan Fahmy</creator>
        
        <subject>Machine learning; federated learning; blockchain; security; ring signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>With the explosive development of machine learning and increasing concern about data privacy, federated learning (FL) has emerged as a major area of study. Despite its benefits, FL faces several obstacles, including the risk of indirect data leakage via reverse engineering, the compromise of model-architecture privacy, and connection and communication costs. To address these, the proposed framework AFLBCRS (Adaptive Federated Learning with Blockchain and Ring Signatures) combines federated learning, blockchain technology, and ring signatures to enable collaborative and secure model training across decentralized networks while preserving data privacy. In AFLBCRS, participants train local models using their private data and contribute updates to a shared model without disclosing raw data. Blockchain technology ensures the integrity and transparency of the process by securely recording and validating model updates, while ring signatures authenticate contributions and preserve participant anonymity. Key benefits of AFLBCRS include privacy preservation, security, collaborative learning, and transparency. This framework is promising for applications in healthcare, finance, and other sensitive domains where data privacy and security are paramount. AFLBCRS demonstrates competitive model accuracy compared to centralized approaches while effectively preserving data privacy and ensuring security through blockchain integration and ring signatures. The case study for AFLBCRS is a healthcare IoT setting using an ICU dataset, where multiple sites collaboratively trained a model to predict patient risk within 24 hours without sharing raw patient data. The results suggest that AFLBCRS is well-suited for compliance-focused environments because it keeps data local, protects participant identity, maintains an auditable (tamper-resistant) record of contributions, and ensures that only verified updates are accepted. When evaluated with a scoring method that prioritizes regulatory requirements alongside model usefulness and operational cost, AFLBCRS clearly outperformed a traditional centralized setup (0.898 vs. 0.343). The evaluation matrix for AFLBCRS indicates promising results across key metrics such as model accuracy, privacy preservation, security, scalability, and usability.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_18-AFLBCRS_Blockchain_Enabled_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Kolmogorov-Arnold Networks (KANs) for Mixed-Domain Satellite Imagery Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170317</link>
        <id>10.14569/IJACSA.2026.0170317</id>
        <doi>10.14569/IJACSA.2026.0170317</doi>
        <lastModDate>2026-03-31T07:33:40.1900000+00:00</lastModDate>
        
        <creator>Abdul Hadi Mazbah</creator>
        
        <creator>Safiza Suhana Binti Kamal Baharin</creator>
        
        <creator>Md. Shadman Zoha</creator>
        
        <subject>Satellite imagery; Kolmogorov-Arnold Network; semantic segmentation; attention; mixed-domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Semantic segmentation of satellite imagery requires models that capture global context while preserving sharp object boundaries. Convolutional Neural Networks (CNNs) excel at local feature extraction, but often struggle with long-range dependencies. Transformers provide global context but may blur edges and rely on opaque classifier heads. This study aims to develop an interpretable hybrid segmentation model that improves boundary accuracy and generalization across mixed-domain satellite imagery. This study presents SwinKANet, a hybrid segmentation model that combines a transformer encoder with boundary-aware decoding and an interpretable prediction head. SwinKANet employs a Swin Transformer (SwinV2-Tiny) encoder to extract multi-scale features, while a Convolutional Block Attention Module (CBAM) at the bottleneck refines channel and spatial responses. Skip connections equipped with SharpBlock units enhance edge detail, and an FPN-like lateral fusion module aligns and merges decoder features. The conventional multilayer perceptron head is replaced with a Kolmogorov–Arnold Network (KAN) head, enabling flexible function approximation and class-wise interpretability. We evaluate SwinKANet on a mixed-domain LoveDA dataset (urban + rural) for diverse spatial learning and on the urban-only ISPRS Vaihingen dataset for city-scale benchmarking. SwinKANet achieves 0.5269 mIoU on LoveDA and 0.7645 mIoU on Vaihingen, delivering sharper boundaries and more consistent class regions than CNN, Mamba, and transformer baselines. The KAN head further enhances explainability by revealing feature contributions for each class, supporting interpretable remote sensing applications.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_17-Leveraging_Kolmogorov_Arnold_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capacitated Location-Allocation Model for Emergency Supply Chain: The Case of Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170316</link>
        <id>10.14569/IJACSA.2026.0170316</id>
        <doi>10.14569/IJACSA.2026.0170316</doi>
        <lastModDate>2026-03-31T07:33:40.1430000+00:00</lastModDate>
        
        <creator>Imane Sassaoui</creator>
        
        <creator>Aziz Ait Bassou</creator>
        
        <creator>Mustapha Hlyal</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>Emergency logistics; disaster response; location–allocation; stochastic demand; supply chain resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Recently, Morocco has experienced a series of disasters, including the El Haouz earthquake in 2023, which have brought renewed attention to the country’s emergency preparedness and the efficiency of its national emergency supply chain. In addition, this study considers a prospective scenario based on potential flood events in northern Morocco to evaluate future resilience requirements. In this context, improving the strategic planning of Emergency Supply Facilities (ESFs) is essential for strengthening disaster response capabilities. This study develops a capacitated location–allocation optimization model for emergency supply chain planning that incorporates demand uncertainty, flexible allocation of ESFs, and donor contributions. The proposed framework is evaluated through computational experiments using problem instances consisting of multiple candidate ESF locations, demand points, and disruption scenarios, allowing the analysis of different emergency response configurations. The results indicate that the proposed optimization framework can significantly improve the efficiency and responsiveness of Morocco’s emergency supply chain. The model provides a practical decision-support tool for policymakers and planners to enhance disaster preparedness and resource allocation in national emergency logistics systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_16-Capacitated_Location_Allocation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment and Emotion Analysis in Textual Data: A Recent Systematic Literature Review Method, Model and Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170315</link>
        <id>10.14569/IJACSA.2026.0170315</id>
        <doi>10.14569/IJACSA.2026.0170315</doi>
        <lastModDate>2026-03-31T07:33:40.0970000+00:00</lastModDate>
        
        <creator>Wan Azzura Wan Ramli</creator>
        
        <creator>Rabiah Abdul Kadir</creator>
        
        <creator>Amalia Amalia</creator>
        
        <creator>Ang Mei Choo</creator>
        
        <subject>Sentiment; emotion analysis; textual data; transformer; large language models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The analysis of sentiment and emotion has become an important research topic in Natural Language Processing (NLP) due to the rapid growth of textual data generated on digital platforms. However, despite significant progress, the existing literature remains fragmented across methods, modalities, and application domains, making it difficult to obtain a comprehensive understanding of current research trends. This study presents a structured literature review that synthesizes recent advances in sentiment and emotion analysis of textual data. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol and systematically examines studies retrieved from the Web of Science (WoS) and Scopus databases. After screening, eligibility evaluation, and Quality Assessment (QA), 50 primary studies published between 2023 and 2025 were selected for analysis. The findings reveal a clear methodological transition from traditional Machine Learning (ML) techniques toward transformer-based architectures and Large Language Models (LLMs). In addition, recent studies increasingly explore multimodal approaches and context-aware emotion modeling to improve sentiment and emotion detection. Despite these advancements, several challenges remain, including the detection of implicit emotions, dataset imbalance, and domain adaptability. Overall, this review provides a structured synthesis of recent developments in textual sentiment and emotion analysis, identifies key research challenges, and outlines potential directions for future studies.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_15-Sentiment_and_Emotion_Analysis_in_Textual_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Machine Learning Approaches for Solid Waste Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170314</link>
        <id>10.14569/IJACSA.2026.0170314</id>
        <doi>10.14569/IJACSA.2026.0170314</doi>
        <lastModDate>2026-03-31T07:33:40.0500000+00:00</lastModDate>
        
        <creator>S. Vidya</creator>
        
        <subject>Municipal solid waste management; machine learning; modeling; optimization; solid waste generation; disposal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid increase in population and the ongoing expansion of urban regions have resulted in a substantial growth in municipal solid waste generation, creating serious challenges for environmental protection and urban management. In response to these problems, recent research has increasingly focused on technological solutions, among which machine learning has gained considerable attention. Machine learning can capture complex nonlinear patterns and is therefore widely applied across various stages of municipal solid waste management to enhance sustainable and efficient waste handling. This review examines over one hundred research studies published between 2000 and 2022, with the objective of analyzing how machine learning techniques have been employed throughout the waste management process, including waste generation prediction, collection scheduling, transportation optimization, and disposal planning. The study systematically explores prevailing research trends, identifies methodological limitations, and highlights promising future research directions, offering conceptual understanding and practical guidance for subsequent investigations. In contrast to previous review studies, this research specifically focuses on the waste generation and disposal stages, highlighting how individuals, households, and municipal authorities employ advanced computational techniques to minimize waste volume and improve management efficiency. The findings indicate that most existing studies focus on waste classification, regional estimation of waste quantities, and prediction of bin fill levels. Nevertheless, several important challenges remain, such as the lack of real-time time-series datasets, limited model robustness and generalization capability, the absence of unified benchmarking standards, and the difficulty of achieving reliable long-term forecasting of waste generation.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_14-A_Review_on_Machine_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Rules to Transformers: A Deep Learning Approach for Arabic Natural Language Interfaces to Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170313</link>
        <id>10.14569/IJACSA.2026.0170313</id>
        <doi>10.14569/IJACSA.2026.0170313</doi>
        <lastModDate>2026-03-31T07:33:39.9870000+00:00</lastModDate>
        
        <creator>Dahr Laila</creator>
        
        <creator>Sahib Mohamed Rida</creator>
        
        <creator>Er-Raha Brahim</creator>
        
        <subject>Sequence-to-sequence (SeqToSeq); natural language to SQL (NL2SQL); semantic parsing; arabic NLP; Text-to-SQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Natural language interfaces to databases (NLIDBs) enable users to communicate with databases using natural everyday language rather than complex query languages. This study presents a new approach using deep learning techniques to improve the robustness and accessibility of Arabic NLIDB systems through a new end-to-end framework. A Transformer-based architecture is proposed, in which AraT5 is utilized to translate Arabic Natural Language Queries (ANLQs) into structured JSON Logical Query (JLQ) representations, subsequently converting these into executable SQL statements. This approach surpasses traditional rule-based systems by leveraging semantic understanding instead of grammatical pattern matching. Consequently, the morphological complexity and dialectal variations of Arabic are more effectively handled. This neural semantic parsing approach demonstrates a deep understanding of query intent, moving beyond surface-level pattern matching. Experimental evaluation on a large-scale, multi-domain curated dataset of 50,000 query pairs demonstrates superior performance, with 85.2% exact match accuracy for JLQ generation and 89.8% SQL execution accuracy. The findings indicate that Transformer-based approaches offer substantial improvements in translation accuracy compared to conventional rule-induction methods.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_13-From_Rules_to_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Different Energy Management Strategies for the Operation of Hybrid Hot-Water Installations in Hotels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170312</link>
        <id>10.14569/IJACSA.2026.0170312</id>
        <doi>10.14569/IJACSA.2026.0170312</doi>
        <lastModDate>2026-03-31T07:33:39.9400000+00:00</lastModDate>
        
        <creator>Boris I. Evstatiev</creator>
        
        <creator>Nadezhda L. Evstatieva</creator>
        
        <subject>Energy management; energy market; evacuated tube collectors; hot water consumption; strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>With the adoption of the energy market in Bulgaria, large fluctuations in the price of electrical energy have been occurring, which is a challenge for businesses across the different sectors of the economy. This study evaluates the energy and financial performance of three energy management strategies for operating hybrid hot-water installations in hotels: the first assumes the water is heated only by an evacuated solar tube system; the second assumes electrical energy is used whenever the water temperature falls below a certain threshold; and the third pre-heats the water during off-peak hours when electrical energy is cheaper. A simulation model has been developed based on well-known physical and empirical dependencies, allowing for the necessary evaluations. The operation of a hot-water installation has been investigated for a hotel with a capacity of 80 guests on a sunny summer day. The results showed that the first strategy does not allow for maintaining the temperature of the water in the tank above the required threshold. The second strategy ensured that the water temperature requirements are met with minimal use of electrical energy, leading to daily expenses between 3.4 EUR and 62 EUR. The third strategy increased grid energy usage, but the daily expenses were limited to 18.5 EUR. The obtained results indicate that hotel owners could significantly reduce their hot-water expenses with the help of a hybrid hot-water installation and an appropriate energy management strategy.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_12-Assessment_of_Different_Energy_Management_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CdbNorm: An Efficient Library for Automatic Database Normalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170311</link>
        <id>10.14569/IJACSA.2026.0170311</id>
        <doi>10.14569/IJACSA.2026.0170311</doi>
        <lastModDate>2026-03-31T07:33:39.9230000+00:00</lastModDate>
        
        <creator>Ivan Piza-Davila</creator>
        
        <creator>Fernando Gutierrez-Preciado</creator>
        
        <creator>Victor Ortega-Guzman</creator>
        
        <creator>Mildreth Alcaraz-Mejia</creator>
        
        <subject>Database normalization; functional dependency; normal form; 1NF; 2NF; 3NF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study introduces CdbNorm, a library that provides efficient implementations of the first three normal forms of relational database normalization. CdbNorm makes it quick and straightforward for a data analyst to divide a large dataset into smaller tables free from database anomalies (insert, update, and delete) and duplicate data. This study describes each of the steps of our normalization algorithm, which includes the discovery of functional dependencies and the population of output normalized datasets. We evaluate the accuracy and efficiency of our algorithm with databases introduced in prior papers and with large datasets available online.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_11-CdbNorm_An_Efficient_Library_for_Automatic_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Behaviour Analysis for Insider Threat Detection Using Machine Learning: A Case Study in Enterprise Web Application Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170310</link>
        <id>10.14569/IJACSA.2026.0170310</id>
        <doi>10.14569/IJACSA.2026.0170310</doi>
        <lastModDate>2026-03-31T07:33:39.8770000+00:00</lastModDate>
        
        <creator>Yosep</creator>
        
        <creator>Aditya Kurniawan</creator>
        
        <subject>Insider threat; user behaviour analytics; machine learning; anomaly detection; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>This study presents a user behaviour analysis approach for detecting insider threats in an enterprise web application environment. The approach applies machine learning techniques to analyze patterns of user activity. Using a primary dataset collected from a leading ICT distributor company in Indonesia with nationwide channel operations over January–June 2025, we identify patterns of normal and anomalous user activities indicative of insider threats. Three machine learning models were implemented: Random Forest, Support Vector Machine (SVM) with RBF kernel, and 1D CNN, which are widely used in insider-threat and anomaly-detection research. Severe class imbalance was mitigated via undersampling followed by SMOTE. Random Forest delivered the best performance on the test set (Accuracy 97.38%, F1-Score 97.77%, ROC-AUC 99.82%), with CNN and SVM also showing strong anomaly sensitivity. The findings demonstrate a practical, high-accuracy insider-threat detector trained on real enterprise logs, not simulated datasets, suitable for deployment in Indonesian enterprise settings.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_10-User_Behaviour_Analysis_for_Insider_Threat_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing Random Forest and Gradient Boosting for Monkeypox Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170309</link>
        <id>10.14569/IJACSA.2026.0170309</id>
        <doi>10.14569/IJACSA.2026.0170309</doi>
        <lastModDate>2026-03-31T07:33:39.8470000+00:00</lastModDate>
        
        <creator>Fahlul Rizki</creator>
        
        <creator>Widowati</creator>
        
        <creator>Catur Edi Widodo</creator>
        
        <subject>Comparative analysis; Random Forest; Gradient Boosting; clinical symptoms; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Early and accurate diagnosis of Monkeypox is essential to limit transmission and support effective treatment. This study aims to compare the performance of Random Forest and Gradient Boosting models for classifying Monkeypox cases using clinical symptom data. A synthetic dataset from Kaggle containing 25,000 records with 11 symptom-based features was used to evaluate both models under imbalanced and SMOTE-balanced conditions using stratified 5-fold cross-validation. Model performance was assessed using accuracy, precision, recall, F1-score, receiver operating characteristic (ROC) curves, and area under the curve (AUC). The experimental results indicate that both models achieve high recall values on imbalanced data, with Gradient Boosting slightly outperforming Random Forest in discriminative performance (AUC 0.6869 vs. 0.6839). While the application of SMOTE improves precision, it reduces recall and provides only marginal improvements in AUC, indicating a trade-off between sensitivity and precision in symptom-based classification. These findings demonstrate the potential of ensemble learning models for symptom-based Monkeypox classification in synthetic tabular datasets. However, further validation using real-world clinical data is necessary before practical diagnostic deployment.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_9-Comparing_Random_Forest_and_Gradient_Boosting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum-Resilient Machine Learning and Q-Learning–Driven Priority Time-Slot AODV for Secure MANET Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170308</link>
        <id>10.14569/IJACSA.2026.0170308</id>
        <doi>10.14569/IJACSA.2026.0170308</doi>
        <lastModDate>2026-03-31T07:33:39.8000000+00:00</lastModDate>
        
        <creator>Singireddy Sateesh Reddy</creator>
        
        <creator>E. Aravind</creator>
        
        <subject>Mobile Ad Hoc Networks; secure routing; AODV; trust management; machine learning; reinforcement learning; MAC layer scheduling; black hole attack; post-quantum security; quantum-resilient routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Mobile Ad Hoc Networks (MANETs) are decentralized and lack centralized control, which makes them highly susceptible to routing attacks such as black hole and gray hole attacks, both of which disrupt data delivery by causing severe packet loss. To address these issues, the current study offers the Quantum-resilient Machine Learning and Q-Learning-driven Priority Time-Slot AODV (QR-MLQ-PTS-AODV) routing model. This framework combines a multi-metric trust query, an entropy-based behavioral stability query, a temporal trust adjustment, and a supervised machine learning method to attain accurate malicious node prediction. Reinforcement learning, through Q-learning, is employed to dynamically assign MAC-layer priority time slots, enabling cross-layer optimization as well as adaptive routing decisions. In contrast to existing solutions, the proposed framework avoids quantum-vulnerable cryptographic primitives in favor of hash-based trust authentication and learning-based mitigation measures, ensuring that it can withstand novel quantum-assisted routing attacks. The trust model's parameters are determined through mathematical analysis, and extensive NS-3 simulations show that the model significantly improves packet delivery ratio, end-to-end delay, routing overhead, and attack detection accuracy in comparison with traditional AODV and state-of-the-art trust-, ML-, and RL-based protocols. These results support the effectiveness of embedding quantum-resilient security mechanisms and intelligent cross-layer routing in MANETs.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_8-Quantum_Resilient_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Photoplethysmogram-Based Diabetes Screening via Supervised Machine Learning: A Demographic Study on a Southeast Asian Cohort</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170307</link>
        <id>10.14569/IJACSA.2026.0170307</id>
        <doi>10.14569/IJACSA.2026.0170307</doi>
        <lastModDate>2026-03-31T07:33:39.7670000+00:00</lastModDate>
        
        <creator>Nazrul Anuar Nayan</creator>
        
        <creator>Mohd Taufik Rezza Mohd Foudzi</creator>
        
        <creator>Mohd Zubir Suboh</creator>
        
        <creator>Syaza Norfilsha Ishak</creator>
        
        <creator>Zazilah May</creator>
        
        <subject>Photoplethysmography (PPG); diabetes prediction; supervised machine learning; signal morphology features; non-invasive screening; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Diabetes mellitus is a major chronic metabolic disorder that often leads to serious long-term vascular complications. Traditional monitoring methods focus mainly on metabolic indicators and often miss early vascular changes. This study developed and validated a non-invasive framework for classifying diabetic status based on photoplethysmogram (PPG) pulse morphology. The approach offers a scalable and affordable alternative to invasive blood tests. A dataset from 78 Malaysian participants was analyzed in five phases, including signal pre-processing, feature extraction, and statistical ranking. Raw signals were filtered with a 4th-order Chebyshev Type II band-pass filter for accurate waveform analysis. From a wide set of temporal and amplitude features, key biomarkers linked to arterial stiffness and vascular compliance were identified and ranked. Six supervised machine learning models were evaluated: Logistic Regression, Decision Tree (DT), KNN, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Na&#239;ve Bayes (NB). ANN and SVM models achieved the highest classification accuracy and AUC. This demonstrates effective distinction between diabetic and non-diabetic status using interpretable waveform features. Validation with a Southeast Asian cohort addresses a demographic gap in the literature. The framework shows that ranked PPG biomarkers can be used for accessible, community-level diabetes screening, especially in healthcare settings with limited resources.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_7-Photoplethysmogram_Based_Diabetes_Screening.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast E-Learning Recommendation: Enhancing Model Efficiency with Q-Matrix Complexity Reduction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170306</link>
        <id>10.14569/IJACSA.2026.0170306</id>
        <doi>10.14569/IJACSA.2026.0170306</doi>
        <lastModDate>2026-03-31T07:33:39.7370000+00:00</lastModDate>
        
        <creator>Ismail Menyani</creator>
        
        <creator>Ahmed Oussous</creator>
        
        <creator>Ayoub Ait Lahcen</creator>
        
        <subject>Learner performance prediction; adaptive learning; complexity; knowledge components; Q-matrix; machine learning; DAS3H; PFA; AFM; IRT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Intelligent tutoring systems generate a large volume of data, which becomes particularly valuable when effectively leveraged for learner performance prediction in adaptive learning environments. In this context, the speed and predictive accuracy of machine learning models are crucial, as they determine the system’s ability to deliver timely and relevant insights and support responsive, personalized instruction. Enhancing model speed not only increases tutoring efficiency but also improves the adaptability of educational systems to learners’ needs. This study introduces an approach aimed at improving the execution time of three logistic regression-based models widely used for learner performance prediction: DAS3H (Item Difficulty, Student Ability, Skill, and Student Skill Practice History), AFM (Additive Factor Model), and PFA (Performance Factor Analysis). The proposed optimization reduces the complexity of the Q-matrix that links each item to its required knowledge components by simplifying its structure while preserving pedagogical relevance. An empirical evaluation was conducted on four real-world datasets collected from online tutoring platforms. The results demonstrate that the proposed approach, called Fast E-learning Recommendation (FER), significantly improves the execution speed of the three models while maintaining comparable predictive performance across datasets.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_6-Fast_E_Learning_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Gaussian Process Regression with Orthogonal Feature Encryption and Key-Based Access Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170305</link>
        <id>10.14569/IJACSA.2026.0170305</id>
        <doi>10.14569/IJACSA.2026.0170305</doi>
        <lastModDate>2026-03-31T07:33:39.6900000+00:00</lastModDate>
        
        <creator>Md. Rashedul Islam</creator>
        
        <creator>Jannatul Ferdous Akhi</creator>
        
        <creator>Takayuki Nakachi</creator>
        
        <subject>Gaussian process; differential privacy; Random Unitary Transformation; membership inference attack; machine learning; federated learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Federated learning (FL) makes it possible to train models across distributed data sources without collecting raw data in one place. However, even in federated settings, trained models may still leak sensitive information at inference time. This problem is particularly evident for Gaussian Process regression (GPR), where predictive uncertainty is explicitly returned and can differ between training and non-training samples. Such differences can be exploited for membership inference. In this work, we examine inference-time privacy and robustness in federated GPR by focusing on the behavior of predictive variance. To enable scalable training, we employ a Random Fourier Feature approximation together with an Alternating Direction Method of Multipliers (ADMM) based distributed optimization scheme. On top of this learning framework, we apply key-dependent orthogonal feature transformations that enable multi-key inference-time access control. When inference is performed using the correct key, prediction accuracy and uncertainty behavior remain close to those of plaintext federated GPR. When incorrect or mismatched keys are used, prediction errors increase sharply and predictive variance becomes uniformly large. Experimental results show that this variance inflation removes the usual gap between training and unseen samples, reducing the effectiveness of variance-based membership inference. Importantly, this effect arises without adding noise or relying on cryptographic operations. These findings suggest that predictive uncertainty can play a practical role in enforcing inference-time access control and improving privacy robustness in federated Gaussian Process models.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_5-Federated_Gaussian_Process_Regression_with_Orthogonal_Feature_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybridizing Collaborative Filtering and Knowledge: How do they Work Together? A Scoping Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170304</link>
        <id>10.14569/IJACSA.2026.0170304</id>
        <doi>10.14569/IJACSA.2026.0170304</doi>
        <lastModDate>2026-03-31T07:33:39.6730000+00:00</lastModDate>
        
        <creator>Alex Mart&#237;nez-Mart&#237;nez</creator>
        
        <creator>Raul Montoliu</creator>
        
        <creator>Inmaculada Remolar</creator>
        
        <subject>Hybrid Recommender Systems; collaborative filtering; knowledge-based recommenders; personalized recommendations; scoping review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>The rapid expansion of digital platforms and the increasing complexity of user preferences have driven the need for more sophisticated recommendation systems. While Collaborative Filtering and Knowledge-Based Filtering have been widely adopted as core techniques for personalized recommendations, their individual limitations have led to the rise of hybrid approaches. Despite significant advancements, a comprehensive understanding of hybridization methodologies, their technical implementations, and emerging challenges is still lacking. The purpose of this research is to systematically examine and synthesize the domain of Hybrid Recommender Systems to address this gap. This study presents a scoping review, following the PRISMA-ScR guidelines, to systematically examine the domain of hybridizing Collaborative Filtering and Knowledge-Based Filtering. A total of 62 hybrid recommenders across various application domains were analyzed and categorized into three primary hybridization strategies: Model Fusion, Transfer Learning, and Hierarchical Models. The review explores technical characteristics, hybridization techniques, data sources, evaluation methodologies, and domain-specific applications. Key findings indicate that most hybrid approaches focus on leveraging graph-based models, deep learning architectures, and causal inference techniques to enhance recommendation outcomes. However, despite these advancements, critical gaps remain. The review identifies key challenges, including computational complexity, lack of explainability, bias in recommendations, and reliance on offline evaluation metrics. Additionally, scalability issues in knowledge graph maintenance and the need for user-centered evaluation frameworks highlight important directions for future research. Addressing these gaps will be crucial in making hybrid recommendation systems more efficient, interpretable, and adaptable across diverse domains. This study contributes to the field by providing a structured synthesis of existing hybridization techniques, pinpointing success factors, and proposing future research avenues to advance hybrid recommendation systems.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_4-Hybridizing_Collaborative_Filtering_and_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking Lightweight Machine Learning Models for Epileptic Seizure Recognition: Accuracy, Calibration, and Robustness Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170303</link>
        <id>10.14569/IJACSA.2026.0170303</id>
        <doi>10.14569/IJACSA.2026.0170303</doi>
        <lastModDate>2026-03-31T07:33:39.6430000+00:00</lastModDate>
        
        <creator>Sairam Tabibu</creator>
        
        <creator>Mugdha Abhyankar</creator>
        
        <subject>Epileptic seizure recognition; EEG classification; lightweight machine learning; LightGBM; calibration; robustness; biomedical AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Epileptic seizure recognition is a critical task in clinical decision support systems, where both accuracy and reliability of predictions directly affect patient outcomes. While deep learning architectures such as CNNs and LSTMs are widely applied to EEG-based seizure detection, many publicly available seizure datasets consist of precomputed EEG-derived features, making the problem fundamentally tabular rather than raw-signal based. In such settings, the necessity and added value of complex deep learning pipelines remain unclear, and prior studies have largely emphasized classification accuracy while giving more limited attention to calibration, robustness, and deployment efficiency. In this work, we present a systematic benchmark of lightweight machine learning models—Logistic Regression, Random Forest, XGBoost, LightGBM, and CatBoost—on the Epileptic Seizure Recognition dataset. We evaluate performance across multiple dimensions: discriminative ability (accuracy, macro-F1, ROC-AUC, PR-AUC), confidence calibration (Brier score, calibration and reliability diagrams), and robustness under Gaussian feature perturbations. Our results show that LightGBM achieves 98.04% accuracy, a ROC-AUC of 0.9971, and a Brier score of 0.0166, while maintaining stable performance under the tested noise levels. Notably, all gradient boosting methods substantially outperform Logistic Regression, indicating that nonlinear feature interactions are critical for this task. Compared with prior deep learning approaches on the same dataset, these lightweight models achieve competitive performance at a fraction of the computational cost. These findings show that tabular machine learning methods deserve serious consideration for EEG-derived feature classification tasks, particularly in resource-constrained clinical settings where efficiency, calibration, and robustness are as important as raw accuracy.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_3-Benchmarking_Lightweight_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing GANomaly-Based Anomaly Detection for X-Ray Cargo Inspection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170302</link>
        <id>10.14569/IJACSA.2026.0170302</id>
        <doi>10.14569/IJACSA.2026.0170302</doi>
        <lastModDate>2026-03-31T07:33:39.5800000+00:00</lastModDate>
        
        <creator>Kholoud Alotaibi</creator>
        
        <creator>Nasser Nasrabadi</creator>
        
        <subject>Anomaly detection; cargo X-ray imaging; GANomaly; perceptual loss; feature-level reconstruction; semi-supervised learning; generative adversarial networks; structural anomaly detection; security screening; reconstruction-based detection; deep learning for X-ray inspection; ResNet50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Anomaly detection in X-ray cargo imagery is challenging due to complex scene structures, object overlap, and limited labeled abnormal data. Reconstruction-based methods address this problem by learning normal cargo patterns and identifying deviations during testing. This study investigates how feature-level reconstruction objective functions influence detection performance within the GANomaly framework. Five objective configurations are evaluated on the CargoX dataset: a pixel-based baseline and three perceptual loss variants using Visual Geometry Group 16-layer network (VGG16) feature supervision at different depths (i.e., Rectified Linear Unit layers ReLU2_2, ReLU3_3, ReLU4_3, and multi-scale), and an encoder replacement using a ResNet50 with and without perceptual supervision. Performance is assessed using Receiver Operating Characteristic Area Under Curve (ROC-AUC), precision, recall, and F1-score, supported by qualitative analysis of reconstructions and residual maps. Results show that mid-level perceptual supervision (ReLU3_3) achieves the best performance. It improves ROC-AUC from 0.7182 to 0.7548 and demonstrates enhanced sensitivity to structural anomalies. Replacing the original GANomaly encoder with ResNet50 increases ROC-AUC to 0.7312 and improves precision. Combining ResNet50 with perceptual supervision achieves a ROC-AUC of 0.7517. However, it does not surpass the original ReLU3_3 configuration in recall or F1-score. Shallow features (ReLU2_2) and multi-scale aggregation do not improve detection. Failure analysis highlights challenges with low-contrast anomalies and structurally complex normal cargo scenes. These findings show that anomaly detection performance depends on both reconstruction supervision and encoder design. Therefore, loss selection and feature extraction should be analyzed together in reconstruction-based models.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_2-Enhancing_GANomaly_Based_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Validation of the ASER Framework for Long-Term Knowledge Retention in Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170301</link>
        <id>10.14569/IJACSA.2026.0170301</id>
        <doi>10.14569/IJACSA.2026.0170301</doi>
        <lastModDate>2026-03-31T07:33:39.5030000+00:00</lastModDate>
        
        <creator>Samer Alhebaishi</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Ulrike Genschel</creator>
        
        <creator>Kris De Brabanter</creator>
        
        <creator>Mani Mina</creator>
        
        <creator>Anthony M. Townsend</creator>
        
        <creator>Mohammed Ameen</creator>
        
        <subject>Augmented reality (AR); ASER Framework; long-term knowledge retention; emotional memory; interactive storytelling; gamification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(3), 2026</description>
        <description>Long-term knowledge retention remains a critical challenge in augmented reality (AR) learning environments, which often prioritize novelty and short-term engagement over durable learning outcomes. This study empirically validates the Augmented Sensory Experience and Retention (ASER) Framework, an instructional model integrating emotional memory cues, interactive storytelling, and gamification within AR to promote sustained learning. A between-subjects experimental design was conducted with 30 adult participants randomly assigned to either an ASER-based AR condition or a traditional non-AR instructional condition. Baseline equivalence was established using equivalence testing. Learning outcomes were assessed using immediate post-test and three-week delayed recall measures. Individual gain scores were analyzed using Mann–Whitney U tests, and a one-way MANOVA examined multivariate effects across emotional engagement, motivation, learning engagement, and cognitive load. Results revealed significantly greater long-term retention gains in the ASER condition, with a large effect size, alongside stronger short-term improvement. Multivariate analysis demonstrated a significant overall effect of instructional condition, with the ASER group reporting higher engagement, motivation, and emotional involvement, as well as more favorable cognitive load. These findings provide empirical support for the ASER Framework and demonstrate that emotionally enriched, narrative-driven, and gamified AR instruction can foster deeper cognitive processing and more durable knowledge retention than conventional instructional approaches. The study offers evidence-based design guidance for developing pedagogically grounded AR learning systems aimed at sustained educational impact.</description>
        <description>http://thesai.org/Downloads/Volume17No3/Paper_1-Empirical_Validation_of_the_ASER_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalability of Predictive Models on Multi-Core CPUs and GPUs: An Empirical Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702108</link>
        <id>10.14569/IJACSA.2026.01702108</id>
        <doi>10.14569/IJACSA.2026.01702108</doi>
        <lastModDate>2026-02-28T10:50:50.7030000+00:00</lastModDate>
        
        <creator>Atif Mahmood</creator>
        
        <creator>Wan Joe Dean</creator>
        
        <creator>P. Ganesh Kumar</creator>
        
        <creator>Adnan N. Qureshi</creator>
        
        <subject>Scalability; XGBoost; Neural Networks; Elastic Net; resource efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The scalability of predictive models has become a critical factor in modern machine learning, as data volumes grow and computational resources diversify. This study presents an empirical benchmark of three widely used regression paradigms: Elastic Net, XGBoost, and Multi-Layer Perceptrons (MLPs). The Obesity Estimation dataset is used to evaluate both predictive performance and computational scalability across multi-core CPUs and GPUs. Unlike prior studies that primarily emphasize accuracy, we explicitly examine the trade-offs between accuracy, training time, and hardware efficiency. Models are evaluated under staged training loads (10–100% of data) with grid-searched hyperparameters (for Elastic Net and XGBoost) and regularized deep architectures (for MLP). Results demonstrate that while XGBoost achieves the highest predictive accuracy (R² = 0.91), it incurs significant computational overhead on CPUs, whereas GPU acceleration substantially improves its scalability. MLPs provide competitive accuracy (R² = 0.87) with an order-of-magnitude lower training time on GPUs, making them attractive for rapid or repeated retraining. Elastic Net offers interpretability and linear scalability on CPUs, but lags in predictive power. These findings provide practitioners with a decision framework: XGBoost for maximum accuracy, MLPs for efficient retraining, and Elastic Net for interpretability and small-scale tasks. More broadly, this work highlights that hardware selection is as important as algorithm choice, with GPUs serving as enablers of state-of-the-art performance on structured data.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_108-Scalability_of_Predictive_Models_on_Multi_Core_CPUs_and_GPUs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Guided Bidirectional Temporal Modelling with Graph-Based Regional Spatial Context for Bajra Crop Yield Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702107</link>
        <id>10.14569/IJACSA.2026.01702107</id>
        <doi>10.14569/IJACSA.2026.01702107</doi>
        <lastModDate>2026-02-28T10:50:50.6900000+00:00</lastModDate>
        
        <creator>Mamta Kumari</creator>
        
        <creator>Suman</creator>
        
        <creator>Devendra Prasad</creator>
        
        <subject>Bajra crop yield prediction; regional crop yield forecasting; attention-guided Bidirectional LSTM; saline and alkaline soil composition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Bajra (pearl millet) is a vital crop in Rajasthan, India, being drought-resistant, nutritious, and culturally significant. However, its productivity is increasingly vulnerable to climatic changes such as erratic rainfall and temperature fluctuations, making accurate yield estimation essential. Crop Yield Prediction (CYP) indicators such as soil decomposition, rainfall, and meteorological patterns evolve slowly, exhibiting long-term temporal dependencies that propagate over time. Conventional artificial-intelligence-based crop prediction algorithms process historical data and these indicators in a unidirectional manner; while mapping temporal dependencies, they treat each year independently and fail to capture delayed effects such as salt degradation. To address this issue, the study proposes a region-based spatiotemporal model with an attention-guided Bidirectional LSTM (Long Short-Term Memory) framework for CYP, termed G-BiLSTM. The proposed model captures spatial relationships between districts via GCN (Graph Convolution Network)-based immediate-neighbour extraction. A Bidirectional LSTM is then used to model multi-year CYP temporal features, allowing each annual observation to be encoded using both past and future temporal context. A variance-reduced and comprehensible representation is produced by integrating an attention mechanism to adaptively highlight the most informative years within a temporal window. Using 15 agro-environmental characteristics, including understudied elements such as saline and alkaline soil composition, the framework is assessed on a dataset covering 32 districts of Rajasthan over 13 years (2007–2019). Experimental analysis using a three-year sliding temporal window shows that the proposed attention-enhanced BiLSTM consistently outperforms traditional temporal models, achieving lower prediction error and better generalisation. The method thus offers a scalable solution for regional crop yield forecasting.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_107-Attention_Guided_Bidirectional_Temporal_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stability-Aware QUBO Feature Selection for Tabular Classification Under Repeated Nested Cross-Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702106</link>
        <id>10.14569/IJACSA.2026.01702106</id>
        <doi>10.14569/IJACSA.2026.01702106</doi>
        <lastModDate>2026-02-28T10:50:50.6730000+00:00</lastModDate>
        
        <creator>Marco Fidel Mayta Quispe</creator>
        
        <creator>Leonid Alem&#225;n Gonzales</creator>
        
        <creator>Charles Ignacio Mendoza Mollocondo</creator>
        
        <creator>Nayer Tumi Figueroa</creator>
        
        <creator>Juan Carlos Juarez Vargas</creator>
        
        <creator>Godofredo Quispe Mamani</creator>
        
        <subject>Feature selection; QUBO; simulated annealing; nested cross-validation; selection stability; Jaccard similarity; probability calibration; tabular classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Quadratic Unconstrained Binary Optimization (QUBO) provides a principled framework for feature selection by encoding relevance–redundancy trade-offs and explicit constraints directly in a combinatorial objective. This study presents a stability-aware QUBO pipeline for tabular binary classification, evaluated on two standard benchmarks, namely Breast Cancer Wisconsin Diagnostic (569 samples, 30 features) and Pima Indians Diabetes (768 samples, 8 features; clinically invalid zeros treated as missing and imputed within folds). We study four QUBO variants spanning a base relevance–redundancy formulation, an exact-cardinality formulation enforcing a fixed budget k, a stability-regularized formulation that incorporates bootstrap uncertainty estimates of relevance and redundancy directly into the QUBO objective, and a performance-weighted relevance variant based on inner-CV univariate utility. All methods are assessed under repeated nested stratified cross-validation (5 outer folds &#215; 3 repeats, n = 15 outer test evaluations), reporting AUC-ROC, AUC-PR, MCC, and Brier score with 95% confidence intervals, alongside selection stability via mean Jaccard similarity across outer-fold selected subsets. Results show that QUBO-based selection is competitive with strong classical baselines (RFECV, L1-logistic, permutation-importance ranking, and mutual information) while enabling strict budget control and transparent stability diagnostics. On the near-ceiling Breast Cancer benchmark, predictive differences are marginal and the main differentiators become subset-size control and stability; on Pima, QUBO-k remains competitive while enforcing strict cardinality constraints. These findings support QUBO as a practical framework when budgeted, interpretable, and reproducible feature selection is required, though evaluation is limited to low-dimensional tabular settings.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_106-Stability_Aware_QUBO_Feature_Selection_for_Tabular_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Malware Detection Using Machine Learning Models on Static Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702105</link>
        <id>10.14569/IJACSA.2026.01702105</id>
        <doi>10.14569/IJACSA.2026.01702105</doi>
        <lastModDate>2026-02-28T10:50:50.6270000+00:00</lastModDate>
        
        <creator>Ashwag Alotaibi</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Malware detection; machine learning (ML); static features; stacking ensemble; CPU optimization; resource constraints; memory efficiency; computational efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This research introduces a CPU-optimized static malware-detection framework for resource-constrained environments, such as endpoints and IoT devices. We address the significant challenge of high memory and computational demands by proposing a robust, memory-safe data ingestion pipeline. This pipeline exclusively extracts histogram-based static features, employs type compression, and utilizes batch-wise loading with global sample limits to prevent memory overflows on systems with only 16 GB of RAM and no GPU support. Our core contribution is a compact stacking ensemble composed of three high-efficiency gradient-boosting models: LightGBM, CatBoost, and XGBoost, with a LightGBM meta-learner. This novel ensemble structure enables efficient, CPU-only training and inference while ensuring strong detection performance. Evaluated on the EMBER 2024 dataset, the framework achieves 86.99% accuracy, 0.87 F1-score, and 0.9473 AUC. This work fills a critical gap by demonstrating that carefully optimized gradient-boosting ensembles can serve as a highly deployable alternative to resource-intensive Deep Learning methods in limited security situations.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_105-Enhancing_Malware_Detection_Using_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an Attack-Vector-Based Taxonomy for IoT Malware</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702104</link>
        <id>10.14569/IJACSA.2026.01702104</id>
        <doi>10.14569/IJACSA.2026.01702104</doi>
        <lastModDate>2026-02-28T10:50:50.6100000+00:00</lastModDate>
        
        <creator>Huda Aldawghan</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>IoT security; malware taxonomy; attack vectors; cyber threat intelligence; network defense</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study presents a literature-derived, attack-vector-based taxonomy for IoT malware and complements it with an empirical validation using supervised machine learning. Building on prior surveys and taxonomies of IoT security and malware behavior, we synthesize how existing studies implicitly or explicitly describe infection vectors such as credential abuse, exposed services, firmware exploitation, internal lateral movement, and supply-chain compromise. The resulting taxonomy organises IoT malware according to initial entry mechanisms rather than post-compromise capabilities, providing a vector-centric perspective that aligns more naturally with risk assessment and defensive planning. To demonstrate the practical relevance of this taxonomy, we implement a supervised malware detection model operating on Windows Portable Executable (PE) files. Using malware samples collected from public repositories (e.g., VirusShare and MalwareBazaar) and benign executables from open-source projects, we extract structural, statistical, and metadata-based PE features and train an Extreme Gradient Boosting (XGBoost) classifier with Synthetic Minority Over-sampling Technique (SMOTE) for class balancing. The model achieves an accuracy of 98.13% with balanced F1-scores for both malware and benign classes, illustrating that feature-engineered supervised models can effectively support taxonomy-informed detection strategies. The combined conceptual and empirical view highlights how attack-vector taxonomies, IoT threat modeling, and machine learning-based detection can be jointly leveraged to strengthen IoT cyber defense.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_104-Designing_an_Attack_Vector_Based_Taxonomy_for_IoT_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Ant Colony Optimization for Capacitated Vehicle Routing Problem with Time Windows in Franchise Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702103</link>
        <id>10.14569/IJACSA.2026.01702103</id>
        <doi>10.14569/IJACSA.2026.01702103</doi>
        <lastModDate>2026-02-28T10:50:50.5800000+00:00</lastModDate>
        
        <creator>Dian Rachmawati</creator>
        
        <creator>Tommy Lohil</creator>
        
        <creator>Jos Timanta Tarigan</creator>
        
        <subject>Ant Colony Optimization; CVRPTW; heuristics; distribution routing; logistics optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Efficient routing for distributing goods to multiple franchisee locations requires optimization techniques capable of handling vehicle capacity limits, heterogeneous time windows, and operational constraints, making conventional brute-force or map-based approaches infeasible due to the NP-hard nature of the problem. This study presents an enhanced Ant Colony Optimization (ACO) algorithm for solving the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW) in a franchisor–franchisee logistics setting. The proposed enhancement incorporates feasibility filtering to enforce capacity and time-window constraints during route construction and adaptive pheromone updating to improve convergence stability. Using real franchisee coordinates, demand values, and operational time windows, the experiments configured with α = 2, β = 1, ρ = 0.05, and a 150-iteration limit demonstrate that the enhanced ACO achieves a minimum total route distance of 46.90 km with zero variance across 10 simulations, indicating highly stable convergence. Comparative evaluation shows that the enhanced ACO improves route efficiency by 11.4% compared to standard ACO and 15.2% relative to a representative Genetic Algorithm baseline. Implemented in a web-based environment using JavaScript for visualization and Java for computation, the approach provides a practical decision-support tool for Indonesian franchise logistics. The algorithm exhibits an observed computational complexity of Θ(n⁴), making it suitable for small to medium-scale distribution networks involving strict delivery time windows.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_103-Enhanced_Ant_Colony_Optimization_for_Capacitated_Vehicle_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Resource Allocation for Crisis-Resilient Healthcare Robotics: An Integrated MLR, MDP, and Petri Net Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702102</link>
        <id>10.14569/IJACSA.2026.01702102</id>
        <doi>10.14569/IJACSA.2026.01702102</doi>
        <lastModDate>2026-02-28T10:50:50.5470000+00:00</lastModDate>
        
        <creator>Ikram Dahamou</creator>
        
        <creator>Ayoub Elbazzazi</creator>
        
        <creator>Cherki Daoui</creator>
        
        <subject>Healthcare robotics; Markov Decision Processes; Petri Nets; Multiple Linear Regression; crisis management; ICU resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Global crises, such as pandemics and climate-related disasters, place unprecedented strain on healthcare systems, exposing weaknesses in resource management and patient care. This study aims to address these challenges by developing an integrated computational framework for crisis-resilient healthcare robotics. We propose a unified approach that combines Multiple Linear Regression (MLR), Markov Decision Processes (MDPs), and Petri Nets. MLR is applied to predict the Average Length of Stay (ALOS) using patient and hospital data. These forecasts inform MDPs, which guide admission and triage decisions under uncertainty. Petri Nets are employed to model and validate patient flow and hospital workflows, ensuring feasibility and efficiency. Case studies, including ICU bed prioritization and disaster logistics, demonstrate that the proposed framework improves adaptability and resource utilization while supporting structured decision guidance. Simulation results highlight enhanced system efficiency, better patient prioritization, and reduced congestion during surge conditions. The integration of predictive analytics, probabilistic optimization, and workflow modeling provides a robust decision-support system for healthcare robotics in crisis scenarios. This interdisciplinary framework offers practical solutions for improving resilience, scalability, and patient outcomes, providing a structured foundation for enhancing resilience and coordination in healthcare systems facing future emergencies.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_102-Optimizing_Resource_Allocation_for_Crisis_Resilient_Healthcare_Robotics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Guided Fusion of EfficientNet-B0 and Swin Transformer for Cervical Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702101</link>
        <id>10.14569/IJACSA.2026.01702101</id>
        <doi>10.14569/IJACSA.2026.01702101</doi>
        <lastModDate>2026-02-28T10:50:50.5170000+00:00</lastModDate>
        
        <creator>Twisibile Mwalughali</creator>
        
        <creator>Emmanuel C. OGU</creator>
        
        <creator>Evason Karanja</creator>
        
        <subject>Cervical cancer classification; deep learning model; colposcopy; cross-attention fusion; EfficientNet; Swin Transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The interpretation of colposcopy images is a critical yet subjective component of cervical cancer screening. To enhance this process, we propose a novel hybrid deep learning framework for the classification of cervical lesions. Our model integrates EfficientNet-B0, adept at extracting localized hierarchical features, with a Swin-Tiny Transformer, which excels at modeling long-range dependencies and global context. Moving beyond basic fusion techniques, we introduce a novel cross-attention fusion mechanism, augmented with channel and spatial attention modules. This design selectively highlights the most discriminative inter-feature relationships while maintaining computational efficiency. Evaluated on the International Agency for Research on Cancer (IARC) colposcopy image dataset, our framework achieves an accuracy of 94.76%, significantly outperforming a concatenation-based fusion model (83.99%). This represents an absolute improvement of 10.77 percentage points and captures 67.3% of the residual performance margin toward perfect accuracy. The model also demonstrates robust performance across other metrics, including a precision of 94.68%, recall of 94.82%, F1-score of 94.74%, and a Cohen’s Kappa of 89.48%. These results indicate that our approach can enhance both the accuracy and reliability of cervical cancer screening, offering valuable support for clinical decision-making.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_101-Attention_Guided_Fusion_of_EfficientNet_B0_and_Swin_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Based Multi-Chain Data Supervision Mechanism for Traditional Chinese Medicine Traceability System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.01702100</link>
        <id>10.14569/IJACSA.2026.01702100</id>
        <doi>10.14569/IJACSA.2026.01702100</doi>
        <lastModDate>2026-02-28T10:50:50.4870000+00:00</lastModDate>
        
        <creator>Rongjun Chen</creator>
        
        <creator>Yun Sun</creator>
        
        <creator>Feng Xue</creator>
        
        <creator>Yongzhi Ma</creator>
        
        <creator>Xinyu Wu</creator>
        
        <creator>Xianxian Zeng</creator>
        
        <creator>Jiawen Li</creator>
        
        <creator>Jinchang Ren</creator>
        
        <subject>Blockchain; traceability; multi-chain architecture; Hyperledger Fabric; Traditional Chinese Medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Addressing the challenges of Traditional Chinese Medicine (TCM) traceability systems, including heavy data storage burdens, poor privacy protection, and susceptibility to tampering, this study establishes a highly secure and trustworthy traceability supervision system for the entire Chinese medicine supply chain, which enhances product quality and safety assurance. Centred on the Hyperledger Fabric consortium blockchain as its core architecture, a multi-chain integration framework comprising one regulatory main chain plus five organisational sub-chains is proposed to achieve permission control, data isolation, and privacy. A multi-mode encrypted data storage mechanism is designed, integrating China’s national cryptographic algorithms SM4 and SM3 with CP-ABE attribute-based encryption to enable tiered management of private and non-private data. Zero-knowledge proof technology safeguards identity privacy during cross-chain data transmission, while QR codes and environmental data collection mechanisms enhance data entry efficiency and authenticity. The system achieves end-to-end traceability from cultivation and processing through transportation, warehousing, and sales. Comparative performance analysis shows that the proposed framework effectively alleviates data storage pressure, ensures data validity, enhances data security, and improves collaborative efficiency among organizations across the TCM supply chain. The proposed multi-chain integrated Chinese medicine traceability and supervision system enables efficient collaboration and trustworthy traceability across the entire Chinese medicine industry chain, while safeguarding data security and privacy, and has significant application and promotion value. Future integration with artificial intelligence and big data technologies could further enhance the system’s intelligent analysis and decision-support capabilities.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_100-Blockchain_Based_Multi_Chain_Data_Supervision_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Series Anomaly Detection Based on Entropy-Sparsified Time-Frequency Fusion and MsRwGWO Meta-Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170299</link>
        <id>10.14569/IJACSA.2026.0170299</id>
        <doi>10.14569/IJACSA.2026.0170299</doi>
        <lastModDate>2026-02-28T10:50:50.4530000+00:00</lastModDate>
        
        <creator>Xiaogang Yuan</creator>
        
        <creator>Jiaxi Chen</creator>
        
        <creator>Dezhi An</creator>
        
        <creator>Jianxin Wan</creator>
        
        <subject>Time series anomaly detection; entropy sparsification; time-frequency fusion; Mixture of Experts (MoE); meta-heuristic optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Addressing the core challenges in multivariate time series anomaly detection within complex industrial environments, such as redundant time-frequency feature fusion, significant noise interference, and difficulties in model hyperparameter tuning, this study proposes a detection framework (TFUL) based on entropy-sparsified time-frequency fusion and a Multi-strategy Random Weighted Grey Wolf Optimizer (MsRwGWO). The main contributions of this work include: 1) A dual-domain entropy sparsification fusion mechanism is designed, which dynamically evaluates and filters crucial temporal segments and frequency components via information entropy, enabling adaptive and redundancy-resistant feature fusion. 2) A heterogeneously collaborative feature extraction network is constructed. The temporal branch, SoftShapeNet, integrates multi-scale convolutions and a Mixture of Experts (MoE) to capture local polymorphic shapes, while the frequency branch, FrequencyDomainProcessor, employs a learnable Mahalanobis distance to model nonlinear spectral dependencies among channels, surpassing the limitations of fixed transformations. 3) The MsRwGWO meta-optimization strategy is proposed, which incorporates dynamic weighting and multi-strategy perturbation mechanisms, significantly enhancing the efficiency and quality of hyperparameter search. Experiments conducted on several public datasets demonstrate that the proposed method outperforms mainstream comparative models in terms of detection accuracy and robustness, providing an effective solution for industrial time series anomaly detection.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_99-Time_Series_Anomaly_Detection_Based_on_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tomato Maturity Analysis: A Comparative Study of Detection and Instance Segmentation Using YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170298</link>
        <id>10.14569/IJACSA.2026.0170298</id>
        <doi>10.14569/IJACSA.2026.0170298</doi>
        <lastModDate>2026-02-28T10:50:50.4230000+00:00</lastModDate>
        
        <creator>Salma Ait Oussous</creator>
        
        <creator>Rachid El Bouayadi</creator>
        
        <creator>Driss Zejli</creator>
        
        <creator>Aouatif Amine</creator>
        
        <subject>Tomato maturity detection; computer vision; object detection; instance segmentation; image analysis; Deep Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The accurate visual analysis of fruit maturity in complex agricultural scenes remains a fundamental challenge due to gradual appearance changes, object overlap, and partial occlusion. This study addresses tomato maturity analysis, formally defined as instance-level binary classification and spatial localization under varying degrees of visual density. While bounding-box-based object detection is widely used, it often lacks precision in dense clusters. We present a controlled experimental comparison between object detection and instance segmentation using a common YOLOv8-medium (YOLOv8m) backbone to isolate the effect of spatial representation. Experimental results demonstrate that instance segmentation achieves superior localization accuracy and boundary consistency, reaching a mask-based mAP@0.5:0.95 of 0.817. These findings suggest that pixel-level supervision effectively reduces localization ambiguity, providing a robust foundation for automated agricultural monitoring.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_98-Tomato_Maturity_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormal State Detection of Industrial Tools Based on the MGC-YOLOv8 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170297</link>
        <id>10.14569/IJACSA.2026.0170297</id>
        <doi>10.14569/IJACSA.2026.0170297</doi>
        <lastModDate>2026-02-28T10:50:50.3930000+00:00</lastModDate>
        
        <creator>Guan Yang</creator>
        
        <creator>Xiang Cheng</creator>
        
        <creator>Miao Wang</creator>
        
        <creator>Ziyue Huang</creator>
        
        <creator>Hao Tang</creator>
        
        <creator>Yujun Chen</creator>
        
        <subject>Object detection; surface defect detection; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>As intelligent manufacturing advances toward precision and automation, cutting tool condition critically impacts product quality, equipment safety, and production efficiency. Anomalies like wear, chipping, or fracture cause workpiece scrapping and machine failure, demanding efficient online monitoring. Traditional manual or image-based methods suffer from low accuracy in complex environments. Although deep learning excels in industrial defect detection, existing end-to-end detectors exhibit insufficient recall and localization precision for millimeter-scale cracks and blurred tool boundaries. To address these challenges, we propose MGC-YOLOv8, an enhanced framework built upon the YOLOv8 backbone. A Multi-Scale Edge-Dual Fusion (MSEDF) module is introduced to integrate feature maps across different scales, thereby strengthening the detection of minor defects. Furthermore, a Global-to-Local Spatial Aggregation (GLSA) module enriches feature representations by simultaneously capturing global context and local details. A Convolutional Block Attention Module (CBAM) is embedded upstream of the prediction head to adaptively highlight critical features in both channel and spatial dimensions. Although the integration of MSEDF, GLSA, and CBAM introduces a marginal runtime overhead and a slight increase in parameter count, the optimized architecture preserves real-time inference speeds that fully satisfy the requirements of industrial inspection systems. Experimental results demonstrate that MGC-YOLOv8 substantially outperforms the baseline YOLOv8n, achieving 88.1% precision, 87.9% recall, 92.5% mAP@0.5 and 69.6% mAP@0.5:0.95 on our test set.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_97-Abnormal_State_Detection_of_Industrial_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Post-Quantum Module Learning with Rounding-Based Public Key Encryption Using Incomplete Number Theoretic Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170296</link>
        <id>10.14569/IJACSA.2026.0170296</id>
        <doi>10.14569/IJACSA.2026.0170296</doi>
        <lastModDate>2026-02-28T10:50:50.3600000+00:00</lastModDate>
        
        <creator>Anupama Arjun Pandit</creator>
        
        <creator>Arun Mishra</creator>
        
        <subject>Post-quantum cryptography; public key encryption; lattice-based cryptography; Incomplete Number Theoretic Transform; Module Learning with Rounding; Learning with Errors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Post-Quantum Cryptographic (PQC) techniques are widely used in encryption standards and digital signatures. Lattice-based post-quantum cryptographic techniques have been widely studied in recent decades. The present work proposes an optimized quantum-safe lattice public key encryption (PKE) scheme based on the Module Learning with Rounding (MLWR) problem, enhanced by the use of Incomplete Number Theoretic Transform (NTT). The objective of the proposed scheme is to achieve efficient encryption and decryption while maintaining robust security in accordance with the National Institute of Standards and Technology (NIST) recommendations. The incomplete NTT relaxes the modulus q requirement, enables a smaller modulus for efficient arithmetic, and reduces computational complexity. This approach results in significant improvements in the speed of key generation, encryption, and decryption with a marked reduction in rejection probability, compared to schemes utilizing complete NTT. The proposed scheme demonstrates competitive performance against other lattice-based encryption schemes such as Kyber and Frodo. It shows lower encryption and decryption times while offering comparable security levels. The proposed scheme is at least a hundred times faster than the Frodo lattice-based public-key encryption scheme. At the NIST-recommended security level, each encryption in the proposed scheme needs an average of 300K CPU cycles, and each decryption needs 120K CPU cycles. Additionally, modulus 7937 enables a reduction in key and ciphertext sizes, optimizing the scheme for practical deployment in resource-constrained environments. Performance evaluations confirm the practicality of the scheme with substantial reductions in computational overhead, making it a highly efficient and secure candidate for post-quantum encryption.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_96-Post_Quantum_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SecureDML: An Intelligent Framework for Preventing Poisoning Attacks in Distributed Machine Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170295</link>
        <id>10.14569/IJACSA.2026.0170295</id>
        <doi>10.14569/IJACSA.2026.0170295</doi>
        <lastModDate>2026-02-28T10:50:50.3300000+00:00</lastModDate>
        
        <creator>Archa A. T</creator>
        
        <creator>Kartheeban K</creator>
        
        <subject>Poisoning attacks; SHapley Additive exPlanations; anomaly detection; federated learning; autoencoders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The security and protection of models in distributed machine learning (ML) systems require strong attention to adversarial threats, including poisoning attacks. This study presents a complete framework that integrates several advanced techniques to detect and prevent poisoning attacks, ensuring the effective functioning of machine learning systems. The proposed system integrates hybrid encryption for security and a subsequent anomaly detection method using autoencoders. A SHapley Additive exPlanations-based interpretability method is used to enhance model transparency. Hybrid encryption combines the RSA and AES methods to keep data and model parameters secret, and autoencoders provide effective identification of poisoning attack patterns through abnormal data observations. The method is implemented using multimodal datasets such as the CIFAR-100 and AG News datasets. Finally, the effectiveness of the method is evaluated using confusion matrices and comparison graphs. It works as a comprehensive solution that benefits various ML applications, such as healthcare, autonomous vehicles, and Large Language Models, enhancing security along with integrity protection.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_95-SecureDML_An_Intelligent_Framework_for_Preventing_Poisoning_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid DMD–TCN Framework for Interpretable Short-Horizon Prediction of 6-DOF Ship Motions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170294</link>
        <id>10.14569/IJACSA.2026.0170294</id>
        <doi>10.14569/IJACSA.2026.0170294</doi>
        <lastModDate>2026-02-28T10:50:50.3130000+00:00</lastModDate>
        
        <creator>Enock Tafadzwa Chekure</creator>
        
        <creator>Kumeshan Reddy</creator>
        
        <creator>John Fernandes</creator>
        
        <subject>Dynamic Mode Decomposition; Temporal Convolutional Network; hybrid learning; seakeeping and vessel response; data-driven modelling; Koopman operator methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate short-horizon prediction of six degrees of freedom (6-DOF) vessel motions is essential for autonomous navigation, motion compensation, and operational decision making. Traditional seakeeping models rely on hydrodynamic coefficients that are seldom available for full-scale vessels, while purely data-driven approaches may struggle to maintain physical consistency. This study introduces a hybrid physics–machine learning framework that combines Dynamic Mode Decomposition (DMD), which approximates the vessel’s dominant linear drift dynamics, with a causal Temporal Convolutional Network (TCN) that learns nonlinear residual corrections from a 12-hour historical window of environmental, geometric, and motion features. DMD provides an interpretable surrogate of the vessel dynamics through its eigenvalues, growth rates, and mode shapes, serving as a data-derived linear transfer operator. The TCN predicts only the residual departure from this structured baseline, ensuring a stable and causal forecasting architecture. Evaluation on full-scale field data shows that the hybrid model improves prediction accuracy for heave and achieves performance comparable to DMD for surge, while underperforming in sway, roll, pitch, and yaw due to the limited observability of key physical drivers at hourly resolution. These results highlight both the strengths and limitations of residual learning when important nonlinear forcing mechanisms and control inputs are unmeasured. Overall, the study demonstrates that hybrid physics–machine learning approaches provide valuable interpretability and diagnostic insight, even when data availability is constrained. The framework offers a principled foundation for incorporating additional physical inputs, higher-frequency measurements, and physics-informed architectures in future work on operational ship-motion forecasting.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_94-A_Hybrid_DMD_TCN_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forward Selection for Time Series-Based Qubit Generation via Parameterized Quantum Gates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170293</link>
        <id>10.14569/IJACSA.2026.0170293</id>
        <doi>10.14569/IJACSA.2026.0170293</doi>
        <lastModDate>2026-02-28T10:50:50.2830000+00:00</lastModDate>
        
        <creator>Singaraju Srinivasulu</creator>
        
        <creator>Nagarajan G</creator>
        
        <subject>Quantum bits; Quantum Machine Learning; quantum algorithms; quantum measurements; Parameterized Quantum Gates; feature extraction; time-series data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Quantum data processing requires classical data to be encoded into quantum states. Current noisy intermediate-scale quantum devices have a limited number of qubits that are stable only briefly. Encoding classical data into qubits is the initial step in Quantum Machine Learning (QML), and effective encoding is crucial for quantum processing. Quantum algorithms for data processing are still emerging, and compact data representations are essential for their success. This research proposes a novel data encoding technique using uniformly controlled rotation gates, achieving high storage density by encoding real-valued time series data as qubit rotations. The model uses a binary representation for computations on time series data, reducing the number of quantum measurements needed. The research explores quantum forward propagation in simulations to improve prediction accuracy for time series signals using parameterized quantum circuits, handling trends, noise, and sinusoidal components. The efficiency of the encoding process depends on data volume and the chosen encoding, with potentially unbounded loading time in the worst case. This study presents a Forward Selection Time Series Data Processing and Feature Extraction Model for Qubit Generation with Parameterized Quantum Gates (FSDPFEM-PQG), demonstrating superior performance in quantum representations compared to existing models.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_93-Forward_Selection_for_Time_Series_Based_Qubit_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Density-Guided Adaptive Patch Learning for Robust Crowd Counting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170292</link>
        <id>10.14569/IJACSA.2026.0170292</id>
        <doi>10.14569/IJACSA.2026.0170292</doi>
        <lastModDate>2026-02-28T10:50:50.2500000+00:00</lastModDate>
        
        <creator>Abdullah N Alhawsawi</creator>
        
        <subject>Computer vision; deep learning; crowd counting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate crowd counting in real-world scenes remains challenging due to severe occlusions, perspective distortion, and large intra-scene density variation. Recent deep learning based approaches typically address these challenges using patch-level learning, where images are divided into fixed grids or randomly cropped patches. These approaches then estimate the count in each patch. However, such fixed partitioning strategies often fail to align with the irregular spatial distribution of crowds. This leads to heterogeneous density patterns within patches, where the models fail to produce an accurate count. In this study, we propose a simple yet effective Density-Guided Adaptive Patch Learning framework for crowd counting. Instead of relying on fixed-size patches, we first obtain a coarse density estimation to capture the global density structure of a scene. Based on this estimate, the image is dynamically partitioned into density-homogeneous regions, where dense areas are represented using smaller patches and sparse regions using larger patches. Each adaptive patch is then processed independently for density estimation, and the resulting predictions are fused to produce the final crowd density map. The proposed framework is model-agnostic and can be seamlessly integrated with existing crowd counting networks without architectural modification. Extensive experiments on benchmark datasets demonstrate that the proposed adaptive partitioning consistently achieves lower Mean Absolute Error (MAE) in counting accuracy and localization compared to fixed patch-based baselines, particularly in scenes with strong density variation.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_92-Density_Guided_Adaptive_Patch_Learning_for_Robust_Crowd_Counting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HQ-RTVF: High-Quality Real-Time Virtual Try-On Fitting for Diverse Clothing and Body Morphologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170291</link>
        <id>10.14569/IJACSA.2026.0170291</id>
        <doi>10.14569/IJACSA.2026.0170291</doi>
        <lastModDate>2026-02-28T10:50:50.2200000+00:00</lastModDate>
        
        <creator>Ilham KACHBAL</creator>
        
        <creator>Khadija Arhid</creator>
        
        <creator>Said El Abdellaoui</creator>
        
        <subject>Virtual try-on; diffusion models; real-time processing; deep learning; garment synthesis; pose estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The ability to virtually try on clothing items has become an increasingly important feature for e-commerce and online shopping experiences. Real-time virtual try-on remains challenging because existing methods force a trade-off between speed and quality: GAN-based approaches achieve high visual fidelity but at low frame rates, while faster methods sacrifice realism. HQ-RTVF is a diffusion-based framework that resolves this trade-off through three architectural innovations: running the diffusion U-Net entirely in the VAE’s compressed latent space (64&#215;64&#215;4 instead of 512&#215;512&#215;3), limiting denoising to 20 steps with FP16 mixed-precision computation, and parallelizing pose estimation and garment encoding to eliminate sequential bottlenecks. The system uses DensePose and DeepLabv3+ for body pose and segmentation, a CLIP-based garment encoder for fine-grained fabric representation, and an attention-guided fusion decoder that maintains temporal coherence across video frames, distinguishing it from static image methods like VITON-HD and HR-VITON. An adaptive masking mechanism handles diverse garment types from cropped tops to full-length dresses. Evaluated on VITON-HD and DressCode datasets, HQ-RTVF achieves SSIM of 0.950 and LPIPS of 0.067, while operating in real-time with only 4.2 GB GPU memory.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_91-HQ_RTVF_High_Quality_Real_Time_Virtual_Try_On_Fitting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Transformer-Based Approach for Multimodal Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170290</link>
        <id>10.14569/IJACSA.2026.0170290</id>
        <doi>10.14569/IJACSA.2026.0170290</doi>
        <lastModDate>2026-02-28T10:50:50.1900000+00:00</lastModDate>
        
        <creator>Ayoub BEN CHEIKHI</creator>
        
        <creator>EL Habib NFAOUI</creator>
        
        <subject>Arabic sentiment analysis; multimodal learning; feature fusion; machine learning; early fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The Multimodal Sentiment Analysis (MSA) landscape for Arabic content is strikingly underexplored, mainly due to limited datasets and a lack of robust integration methods across text, audio, and image. While transformer-based models like MarBERT and ArBERT achieve strong results on Arabic text, most research remains unimodal and does not fully exploit multimodal synergy. In this work, we propose a three-fold approach for Arabic MSA. First, we finetune robust transformers for each modality, namely ViT, MarBERT, and HuBERT for image, text, and audio, respectively. Second, we perform an early feature fusion. Third, we use classifiers for sentiment prediction. On the recent Ar-MuSA benchmark released in 2025, our tri-modal fusion system achieves state-of-the-art performance (F1=0.7756, Accuracy=0.7759), significantly exceeding the multimodal models benchmarked on the Ar-MuSA dataset, as well as the unimodal and bimodal methods. This demonstrates that comprehensive tri-modal fusion and thoughtful classifier selection are essential for accurate, human-centric Arabic sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_90-A_Transformer_Based_Approach_for_Multimodal_Arabic_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Statistical Invariants to Fortify CNN-Based Medical Diagnostics Against Adversarial Perturbations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170289</link>
        <id>10.14569/IJACSA.2026.0170289</id>
        <doi>10.14569/IJACSA.2026.0170289</doi>
        <lastModDate>2026-02-28T10:50:50.1430000+00:00</lastModDate>
        
        <creator>Yassine Chahid</creator>
        
        <creator>Anas Chahid</creator>
        
        <creator>Ismail Chahid</creator>
        
        <creator>Aissa Kerkour Elmiad</creator>
        
        <subject>Adversarial defense; medical X-ray; synergistic ensemble; deep learning security; diagnostic robustness; PGD attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The integration of artificial intelligence (AI) in medical diagnostics is increasingly jeopardized by adversarial attacks—imperceptible perturbations designed to induce misclassification in Deep Learning models. While Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in medical image analysis, their susceptibility to gradient-based attacks poses a severe risk to patient safety and diagnostic integrity. This study addresses the critical need for robust defense mechanisms in X-ray diagnostics by proposing a Hybrid Ensemble model based on Stacked Generalization. Unlike single-paradigm approaches, our method fuses the spatial feature extraction capabilities of a CNN with the statistical anomaly detection power of a Random Forest (RF). We evaluated this architecture on a curated dataset of X-ray images subjected to Projected Gradient Descent (PGD) attacks with varying perturbation magnitudes (ϵ). The results demonstrate that the Hybrid Ensemble consistently outperforms individual models and standard adversarial training baselines. Under strong attack conditions (ϵ = 0.006), the proposed model achieved an Area Under the Curve (AUC) of 0.919, significantly surpassing the adversarial training baseline (AUC 0.700). Furthermore, the ensemble reduced false positives to 108 compared to 138 for the CNN alone, enhancing clinical reliability. Theoretical motivation for the feature extraction process and extensive experimental validation suggest that leveraging statistical irregularities offers a computationally efficient and robust defense strategy suitable for real-time clinical deployment.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_89-Leveraging_Statistical_Invariants_to_Fortify_CNN_Based_Medical_Diagnostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dual-Chain and Differential Privacy-Based Solution for Medical Data Privacy Protection and Access Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170288</link>
        <id>10.14569/IJACSA.2026.0170288</id>
        <doi>10.14569/IJACSA.2026.0170288</doi>
        <lastModDate>2026-02-28T10:50:50.1270000+00:00</lastModDate>
        
        <creator>Cen Gu</creator>
        
        <creator>Luping Wang</creator>
        
        <creator>Hongjie Wu</creator>
        
        <subject>Blockchain; differential privacy; IPFS; medical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>As living standards rise, people are paying increasing attention to health. Vast quantities of medical data are generated daily, yet each piece contains sensitive information such as patients’ names, mobile numbers, email addresses, and places of employment. Should this information be compromised, the consequences would be irreversible, causing severe damage. Traditional solutions merely implement access control policies, permitting data access only to authorised personnel. While this approach offers some protection, even compliant users cannot be entirely trusted and may engage in malicious activities. Once data is accessed, patients’ sensitive information becomes fully exposed to the user, posing a significant data security risk. Therefore, this study proposes a medical data sharing scheme based on Dual-Chain and differential privacy. It employs a hybrid approach combining private chains, consortium chains, and IPFS. Internal hospital personnel can access data after de-identification, while external parties can only access data that has been de-identified and subsequently augmented with noise. This significantly enhances security. The experimental section of this study also demonstrates that the proposed scheme effectively protects data, while the data shared with external users enables them to successfully complete downstream tasks.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_88-A_Dual_Chain_and_Differential_Privacy_Based_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feasibility Study of Explainable Machine Learning on Small-Scale Postoperative Voice Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170287</link>
        <id>10.14569/IJACSA.2026.0170287</id>
        <doi>10.14569/IJACSA.2026.0170287</doi>
        <lastModDate>2026-02-28T10:50:50.0800000+00:00</lastModDate>
        
        <creator>Noura Haddou</creator>
        
        <creator>Najlae Idrissi</creator>
        
        <creator>Sofia Ben Jebara</creator>
        
        <subject>XAI; explainable AI; SHAP; glottal features; SVM; thyroidectomy; voice recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Voice dysfunction is a common complication following thyroid surgery. However, the application of explainable machine learning for predicting postoperative voice recovery remains largely unexplored. Therefore, an investigation was conducted to examine voice recovery based on acoustic, objective, and glottal features. Voice recordings were collected from female patients before surgery and one month after surgery. Acoustic and glottal parameters, including Quasi Open Quotient, Speed Quotient, age, and others, were automatically extracted from the recordings. Random Forest, Support Vector Machines, and Logistic Regression with Sequential Feature Selection were applied to examine model behavior and identify feature importance. Model stability and interpretability were evaluated across cross-validation folds. Performance metrics varied over folds, highlighting the exploratory and statistically fragile nature of predictions in small datasets. SHAP (SHapley Additive exPlanations) analysis revealed variability in feature contributions, emphasizing the need for cautious interpretation and detailed methodological reporting. Our findings provide preliminary guidance for applying explainable machine learning to small biomedical datasets. They demonstrate the importance of careful methodological design.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_87-A_Feasibility_Study_of_Explainable_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Processing-In-Memory into HW/SW Co-Design for Automotive Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170286</link>
        <id>10.14569/IJACSA.2026.0170286</id>
        <doi>10.14569/IJACSA.2026.0170286</doi>
        <lastModDate>2026-02-28T10:50:50.0470000+00:00</lastModDate>
        
        <creator>Zineb El Kacimi</creator>
        
        <creator>Safae Dahmani</creator>
        
        <creator>Oussama Elissati</creator>
        
        <creator>Mouhcine Chami</creator>
        
        <subject>Electronic Control Units; energy consumption; hardware/software co-design; Processing-In-Memory; real-time constraints</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The continuous expansion of intelligent learning workloads in modern vehicles leads Electronic Control Units (ECUs) to manage massive volumes of data while dealing with hard real-time constraints. That said, ECUs must operate under strict power budgets due to limited battery capacity and other safety and functional requirements. In this research, we study the feasibility of integrating the Processing-In-Memory (PIM) approach into the hardware/software co-design process for emerging ECU architectures. The idea is to allocate data-centric tasks to in-memory compute units so that computation occurs directly where data is stored. The proposed approach avoids expensive data traffic, improving processing performance and reducing energy consumption. Our work introduces a conceptual framework that reconciles PIM strengths with automotive requirements and introduces techniques ranging from dynamic voltage scaling to smarter memory management; it is limited to a conceptual architectural proposal without experimental validation or quantitative evaluation. We conclude by discussing crucial opportunities and open challenges arising from the implementation of PIM in next-generation automotive systems.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_86-Integrating_Processing_In_Memory_into_HWSW_Co_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coronary Heart Disease Prediction Using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170285</link>
        <id>10.14569/IJACSA.2026.0170285</id>
        <doi>10.14569/IJACSA.2026.0170285</doi>
        <lastModDate>2026-02-28T10:50:50.0000000+00:00</lastModDate>
        
        <creator>Inooc Rubio Paucar</creator>
        
        <creator>Cesar Yactayo-Arias</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Cardiovascular disease; machine learning; prediction; random forest; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Cardiopathy is one of the most serious diseases worldwide, with high morbidity and mortality rates posing a latent risk over time. This research focuses on evaluating Machine Learning (ML) models such as Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Logistic Regression (LR) for the prediction of coronary heart disease (CHD), with the aim of identifying the most efficient model for this prediction. The model construction followed the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology, which comprises five stages: business understanding, data understanding, data preparation, modeling, and evaluation. The modeling results revealed the superior predictive capability of the XGBoost algorithm for detecting coronary heart disease, compared to Random Forest and Logistic Regression. The assessment of performance metrics (Accuracy, Precision, Sensitivity, and F1 Score) established XGBoost as the reference model, highlighting an F1 Score of approximately 90.8%. This superiority is attributed to its robustness in capturing nonlinear interactions among clinical variables. Consequently, the XGBoost model is selected as the optimal tool for integration into future medical decision support systems. In summary, this ML-based approach provides a highly predictive tool capable of identifying subtle risk patterns from real clinical data. The XGBoost model is a promising candidate for integration into decision support systems and for the optimization of primary prevention protocols for coronary heart disease.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_85-Coronary_Heart_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Telecom Churn Prediction Using Emotion-Driven and Behavioral Engagement Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170284</link>
        <id>10.14569/IJACSA.2026.0170284</id>
        <doi>10.14569/IJACSA.2026.0170284</doi>
        <lastModDate>2026-02-28T10:50:49.9870000+00:00</lastModDate>
        
        <creator>Huthaifa Aljawazneh</creator>
        
        <subject>Customer churn prediction; emotion-driven features; feature engineering; imbalanced data; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate churn prediction enables service providers to develop effective retention strategies and promotes revenue stability in the telecommunication industry. This study enhances churn prediction performance by extracting five emotion-driven and behavioral engagement features from a telecom churn dataset. The new features represent derived, experience-oriented indicators constructed from operational usage data rather than direct psychological or survey-based measurements. To assess the effect of these engineered features on predictions, three powerful classifiers (i.e., CatBoost, Random Forest, and XGBoost) were trained and tested in a structured three-stage experimental design. In the first stage, the classifiers were trained and tested using the original dataset (original features only). In the second stage, the original dataset was enriched with five newly derived features (i.e., frustration index, trust score, satisfaction index, service usage score, and international experience index). Finally, in the third stage, only the engineered features were used in the classification process to evaluate their standalone predictive capability. Because the dataset is imbalanced, SMOTE and SMOTE-Tomek were applied to address this issue. The results demonstrate that incorporating these engineered features improves churn prediction performance across the reported evaluation metrics (accuracy, precision, recall, and specificity) for the presented combinations of classifiers and balancing techniques. The enriched dataset (original + engineered features) achieves the strongest overall performance compared to using either original features only or engineered features only. Compared to the original features only, the enriched dataset achieved improvements of up to 3.6% in accuracy and 5.8% in recall. These findings indicate that emotion-driven and behavioral engagement features provide meaningful complementary information that enhances churn prediction effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_84-Enhancing_Telecom_Churn_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Refactoring: Semantic Reconstruction of Domain Models Using LLM Reasoning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170283</link>
        <id>10.14569/IJACSA.2026.0170283</id>
        <doi>10.14569/IJACSA.2026.0170283</doi>
        <lastModDate>2026-02-28T10:50:49.9530000+00:00</lastModDate>
        
        <creator>Mohamed El BOUKHARI</creator>
        
        <creator>Nassim KHARMOUM</creator>
        
        <creator>Soumia ZITI</creator>
        
        <subject>Domain-Driven Design; large language models; AI-driven software refactoring legacy systems modernization; semantic code analysis; architecture reconstruction; GPT; LLM; domain layer reconstruction; AI-assisted software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study examines the application of large language models (LLMs) for automating domain layer reconstruction in legacy systems, with a specific focus on a case study involving water consumption management. The process begins with a deliberately disordered JSON representation that conflates domain, application, and infrastructure issues. An LLM, specifically GPT-5.2, was employed to identify misplaced methods, inconsistent naming, DTO misuse, incoherent aggregates, and unrelated modules, and subsequently reorganize the model into a structure aligned with Domain-Driven Design (DDD). The structure includes entities, value objects, aggregates, domain services, domain events, and repositories. The methodology involves encoding the legacy model as JSON, applying an LLM-based diagnosis and reconstruction pipeline, and producing both a refined domain model and a categorized catalogue of corrections. A comparative analysis of candidate LLMs, informed by recent code-centric benchmarks, such as SWE-bench and LiveCodeBench, supports the selection of GPT-5.2 as the primary model for this study. The findings indicate that the LLM can swiftly recover key domain concepts and achieve semantically consistent refactoring, a task that typically requires extensive manual effort. This suggests that LLM-assisted domain reconstruction is a promising adjunct to traditional refactoring practices and can facilitate continuous architectural improvements in organizations.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_83-AI_Driven_Refactoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIFChain: A Decentralized Framework for Secure Storage Sharing and Dynamic Access Control in Virtual Power Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170282</link>
        <id>10.14569/IJACSA.2026.0170282</id>
        <doi>10.14569/IJACSA.2026.0170282</doi>
        <lastModDate>2026-02-28T10:50:49.9400000+00:00</lastModDate>
        
        <creator>Xiaochuan Xu</creator>
        
        <creator>Xiao Xin</creator>
        
        <creator>Jie Liu</creator>
        
        <creator>Ruiqi Fang</creator>
        
        <creator>Dekai Liu</creator>
        
        <creator>Zhixin Li</creator>
        
        <subject>Virtual Power Plant; access control; CP-ABE; IPFS; blockchain; distributed energy; privacy protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Virtual Power Plants (VPPs) face significant challenges in secure data management and sharing, including risks of centralized control, single points of failure, and dynamic access requirements among multiple stakeholders. To address these issues, this study proposes SIFChain, a decentralized framework that integrates Hyperledger Fabric, the InterPlanetary File System (IPFS), and a revocable Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme with collaborative key generation. Unlike existing solutions such as Filecoin or Storj, SIFChain introduces a dual-channel blockchain architecture that separates public operational data from sensitive attribute information, mitigating privacy leakage and access policy exposure. The framework achieves fine-grained, dynamic access control with forward and backward security through an enhanced CP-ABE mechanism. Experimental evaluation demonstrates that SIFChain provides scalable performance: data upload/download times increase linearly from 1 MB to 1 GB, blockchain transaction latency remains under 5 ms for typical operations (registration, access requests, policy updates), and attribute-based encryption/decryption overhead scales linearly with policy complexity. These results confirm the practicality of SIFChain for secure, cross-organizational data sharing in Virtual Power Plant ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_82-SIFChain_A_Decentralized_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Attention Mechanism and Class Weighting for Legal Event Detection in Chinese Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170281</link>
        <id>10.14569/IJACSA.2026.0170281</id>
        <doi>10.14569/IJACSA.2026.0170281</doi>
        <lastModDate>2026-02-28T10:50:49.9070000+00:00</lastModDate>
        
        <creator>Jinhong Hu</creator>
        
        <creator>Shaidah Jusoh</creator>
        
        <subject>Event detection; information extraction; Chinese corpus; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Event detection is an information extraction task that involves extracting specified event types from textual sequences. Currently, most event detection studies focus on English corpora; there is a lack of exploration in other linguistic contexts. Thus, a study on event detection in the Chinese corpus is essential. Sequence-based event detection has been extensively studied in the past, and many studies have utilized high-performance neural network models, such as traditional recurrent neural networks. This study aims to enhance the performance of sequence models by altering the base model, Bidirectional Long Short-Term Memory (BiLSTM), to a Bidirectional Gated Recurrent Unit (BiGRU) and incorporating multi-head attention mechanisms, Conditional Random Fields (CRF), and class weights. These modifications not only improve the model’s accuracy but also enhance computational efficiency by reducing the number of parameters relative to large pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT). The experimental findings demonstrate that the proposed model’s modification achieves an F1 Score of 83.55 for the micro standard and 78.07 for the macro standard. This presents a substantial improvement over the baseline, delivering performance nearly on par with state-of-the-art BERT-based models on the same dataset, while requiring significantly fewer parameters.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_81-Leveraging_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient CNN-Based Time-Domain Denoising of Impulsive Noise in NB-PLC Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170280</link>
        <id>10.14569/IJACSA.2026.0170280</id>
        <doi>10.14569/IJACSA.2026.0170280</doi>
        <lastModDate>2026-02-28T10:50:49.8770000+00:00</lastModDate>
        
        <creator>Wided Belhaj Sghaier</creator>
        
        <creator>Fatma Rouissi</creator>
        
        <creator>H&#233;la Gassara</creator>
        
        <creator>Fethi Tlili</creator>
        
        <subject>NB-PLC; Middleton Class-A; OFDM; impulsive noise; deep learning; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In this study, a convolutional neural network (CNN)-based time-domain denoising approach is proposed to suppress impulsive noise, which is among the most severe impairments in narrowband powerline communications (NB-PLC). Unlike conventional techniques, such as clipping and blanking, the proposed method does not require prior knowledge of noise statistics. The introduced CNN is trained using synthetically generated OFDM signals corrupted by Middleton Class-A impulsive noise, calibrated from real NB-PLC measurement data. Extensive G3-PLC-compliant simulations demonstrate that the proposed method significantly outperforms classical blanking and clipping schemes. At an SNR of 10 dB, the proposed CNN achieves a mean squared error (MSE) of 1.2&#215;10−4, compared to 2.3&#215;10−4 and 2.5&#215;10−4 for blanking and clipping, respectively, under time-varying impulsive noise conditions. Moreover, the receiver incorporating the denoising method closely approaches the ideal AWGN reference under low impulsive noise density and for SNR values above 12 dB.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_80-Efficient_CNN_Based_Time_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Context-Aware Hybrid Recommendation Framework for E-Learning Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170279</link>
        <id>10.14569/IJACSA.2026.0170279</id>
        <doi>10.14569/IJACSA.2026.0170279</doi>
        <lastModDate>2026-02-28T10:50:49.8300000+00:00</lastModDate>
        
        <creator>Kaoutar Errakha</creator>
        
        <creator>Amina Samih</creator>
        
        <creator>Sanaa Dfouf</creator>
        
        <creator>Abderrahim Marzouk</creator>
        
        <subject>E-learning; recommender systems; hybrid approach; personalized learning; course</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>E-learning platforms provide learners with extensive digital resources that enable self-paced and location-independent study. However, the overwhelming volume of learning materials offered by a wide range of institutions and content providers makes personalized guidance increasingly essential for effective knowledge acquisition. As a result, recommender systems have become fundamental components of modern e-learning environments, helping to reduce information overload and support individualized learning experiences. In general, the richer and more diverse the available data, the more accurate and relevant the resulting recommendations. Despite these advantages, conventional recommendation approaches often fail to fully exploit the contextual and relational information inherent in e-learning ecosystems, which limits their adaptability and predictive precision. This study proposes a hybrid recommendation framework that integrates collaborative filtering, content-based filtering, and context-aware modeling to generate more accurate and adaptive course recommendations. The proposed system infers learner preferences by combining historical interaction data, contextual attributes, and course characteristics, while also incorporating temporal and environmental factors that influence learning behavior. Experimental evaluations based on SVD, TF-IDF, and RNN models applied to a well-established benchmark dataset demonstrate that the proposed hybrid framework significantly improves recommendation accuracy, coverage, and adaptability compared with baseline methods. Furthermore, the integration of contextual information effectively alleviates the cold-start problem and better captures learners’ evolving goals and learning trajectories. Overall, the results confirm that combining multiple recommendation paradigms within e-learning platforms enables more adaptive, personalized, and scalable learning pathways, making the proposed system suitable for diverse educational contexts and learner profiles.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_79-A_Context_Aware_Hybrid_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Deep Reinforcement Learning for Initialization and Adaptive Pheromone Updates in Ant Colony Optimization for UAV Pathing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170278</link>
        <id>10.14569/IJACSA.2026.0170278</id>
        <doi>10.14569/IJACSA.2026.0170278</doi>
        <lastModDate>2026-02-28T10:50:49.7970000+00:00</lastModDate>
        
        <creator>Mohamed A. Damos</creator>
        
        <creator>Wenbo Xu</creator>
        
        <creator>Abdolraheem Khader</creator>
        
        <creator>Ali Ahmed</creator>
        
        <creator>Mohammed Al-Mahbashi</creator>
        
        <creator>Almuhannad S.Alorfi</creator>
        
        <subject>Deep Reinforcement Learning; Ant Colony Optimization; adaptive pheromone update; UAV pathing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Unmanned Aerial Vehicles (UAVs) are indispensable assets for missions in dynamic and complex environments, requiring highly efficient path planning that simultaneously optimizes the often-conflicting objectives of minimizing flight distance, energy consumption, and mission time. While Ant Colony Optimization (ACO) is a recognized and effective metaheuristic for this domain, its performance is significantly constrained by a static, empirically derived pheromone update mechanism, which prevents the algorithm from adaptively learning or optimally managing the search process. To overcome this critical limitation, this study introduces a novel DRL-Assisted ACO framework in which a Deep Reinforcement Learning (DRL) agent is seamlessly integrated with the ACO to strategically determine the optimal paths under multi-objective constraints. This intelligent agent is tasked with learning the optimal, mission-specific pheromone update strategy. It achieves this by observing the performance of generated paths and receiving a reward signal derived from the Analytic Hierarchy Process (AHP), which systematically weights the mission objectives. Validated through a simulated case study conducted in Khartoum State, Sudan, the DRL-Assisted ACO approach achieves superior performance, exhibiting marked gains in convergence speed and generating paths with a significantly higher overall multi-objective utility score, thereby delivering a robust and adaptive solution essential for high-stakes autonomous UAV operations.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_78-Integrating_Deep_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Lightweight Residual Convolutional Neural Network for Efficient Facial Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170277</link>
        <id>10.14569/IJACSA.2026.0170277</id>
        <doi>10.14569/IJACSA.2026.0170277</doi>
        <lastModDate>2026-02-28T10:50:49.7670000+00:00</lastModDate>
        
        <creator>Yelnur Mutaliyev</creator>
        
        <creator>Zhuldyz Kalpeyeva</creator>
        
        <subject>Facial Emotion Recognition; residual neural networks; lightweight convolutional neural networks; affective computing; facial expression recognition 2013 dataset; CPU-optimized architecture; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Facial Emotion Recognition (FER) is essential for successful human-computer interaction; however, deploying robust systems on edge devices remains difficult. Recent techniques, such as Vision Transformers (ViTs) and deep ensemble networks, have achieved high accuracy, but suffer from extreme computational overhead and high latency, making them unsuitable for real-time use on limited hardware. The primary challenge lies in maintaining high discriminative power while operating under strict memory and power constraints. To address this, the objective of this research is to develop an efficient Residual Convolutional Neural Network (CNN) optimized for CPU-based inference. The proposed architecture utilizes a hierarchical structure, integrating three consecutive residual blocks with progressively increasing filter depths of 32, 64, and 128. These are engineered to enhance gradient flow and refine feature representation from low-resolution (48 &#215; 48) grayscale images. Comprising only 552,455 parameters and achieving a 12.4 ms latency on standard CPUs, the model balances efficiency and performance. Experimental results on the FER2013 dataset reveal a classification accuracy of approximately 71.4%, outperforming several existing lightweight frameworks. A comprehensive assessment using confusion matrices and ROC curves validates the architecture as a practical solution for real-time affective computing on resource-constrained devices.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_77-Development_of_Lightweight_Residual_Convolutional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generative AI as a Catalyst for Interoperability and Data-Driven Decision Support in Healthcare Systems of Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170276</link>
        <id>10.14569/IJACSA.2026.0170276</id>
        <doi>10.14569/IJACSA.2026.0170276</doi>
        <lastModDate>2026-02-28T10:50:49.7370000+00:00</lastModDate>
        
        <creator>YNSUFU Ali</creator>
        
        <creator>MOSKOLAI NGOSSAHA Justin</creator>
        
        <creator>AYISSI ETEME Adolphe</creator>
        
        <creator>BOWONG TSAKOU Samuel</creator>
        
        <subject>Decision support systems; interoperability; generative Artificial Intelligence; Large Language Model (LLM); heterogeneous data sources; information system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Interoperability across heterogeneous information systems remains a persistent challenge, particularly in resource-constrained contexts where infrastructures are fragmented and data formats remain incompatible. This study introduces a novel methodology that integrates generative Artificial Intelligence (AI) with the urbanization of information systems to enable scalable and seamless interoperability. The approach employs AutoGen AI, an open-source orchestration framework powered by Large Language Models (LLMs), specifically GPT-4o, to coordinate task-specific intelligent agents for data extraction, transformation, and harmonization. By converting disparate data into a standardized JSON representation, the architecture resolves both syntactic and semantic inconsistencies while simultaneously emphasizing multi-agent concurrency, distributed orchestration, and computational scalability, resulting in improved throughput, reduced latency, and enhanced robustness. A real-world healthcare case study is presented to illustrate the framework’s effectiveness: heterogeneous clinical datasets were unified into a coherent JSON structure, enabling accurate health indicator generation and reliable decision support. Experimental results demonstrate substantial improvements in system connectivity, processing efficiency, and integration reliability, with potential to generalize far beyond the medical sector. Moreover, the methodology incorporates advanced prompt engineering and context-aware dialogue design, minimizing model hallucinations and ensuring trustworthy outputs in LLM-driven processes. Overall, the study positions generative AI not only as a promising solution for interoperability in health informatics, but also as a transformative paradigm for intelligent system integration across diverse domains characterized by distributed, heterogeneous environments.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_76-Generative_AI_as_a_Catalyst.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Criteria Methodology for Selecting Communication Protocols in M2M Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170275</link>
        <id>10.14569/IJACSA.2026.0170275</id>
        <doi>10.14569/IJACSA.2026.0170275</doi>
        <lastModDate>2026-02-28T10:50:49.7030000+00:00</lastModDate>
        
        <creator>Oleg Iliev</creator>
        
        <subject>Machine-to-Machine communication; Internet of Things; multi-criteria decision making; AHP; TOPSIS; communication protocol selection; non-functional requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The continuous expansion of Machine-to-Machine (M2M) communication and Internet of Things (IoT) ecosystems has significantly increased the complexity of selecting appropriate communication protocols and data flow management systems. Contemporary M2M deployments operate across heterogeneous functional domains, including sensor networks, transactional systems, and real-time streaming environments, each imposing distinct and often conflicting non-functional requirements such as latency, reliability, scalability, and resource efficiency. This study proposes a domain-oriented multi-criteria decision-making methodology for structured protocol selection in M2M environments. The framework integrates the Analytic Hierarchy Process (AHP) for context-dependent weighting of evaluation criteria with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for quantitative ranking of alternative technologies. A structured domain taxonomy is introduced to dynamically align evaluation priorities with functional deployment characteristics, and 99th percentile latency (Lp99) is incorporated as a primary performance indicator to capture tail-behavior effects critical for M2M reliability. Beyond ranking computation, the methodology formalizes a reproducible analytical workflow linking empirical measurements to domain-specific decision outcomes and incorporates a sensitivity-analysis perspective to assess ranking robustness under variations in criterion weights. The proposed framework establishes a transparent and adaptable decision-theoretic foundation for context-aware communication protocol selection in heterogeneous M2M scenarios.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_75-Multi_Criteria_Methodology_for_Selecting_Communication_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>K-Nearest Neighbors Algorithm for Short-to-Medium Term Directional Stock Price Forecasting: An Analysis of Thailand’s Banking Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170274</link>
        <id>10.14569/IJACSA.2026.0170274</id>
        <doi>10.14569/IJACSA.2026.0170274</doi>
        <lastModDate>2026-02-28T10:50:49.6730000+00:00</lastModDate>
        
        <creator>Passawan Noppakaew</creator>
        
        <creator>Parit Wanitchatchawan</creator>
        
        <creator>Kanchana Phuhoy</creator>
        
        <creator>Natthasorn Seubwong</creator>
        
        <subject>Stock trend prediction; K-Nearest Neighbors; technical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This research explores the efficacy of a parsimonious K-Nearest Neighbors (KNN) framework for short-to-medium term stock direction forecasting, focusing specifically on the banking sector within Thailand’s SET50 index. Prior preliminary analysis, aimed at determining the optimal prediction horizon, indicated that a 60-day forecast yielded the most effective results, establishing the scope of this study as medium-term prediction. The objective of this analysis is to determine if a 60-day directional movement can be effectively captured using a minimalist feature set limited to the current day’s Opening Price and the previous day’s 14-day Simple and Exponential Moving Averages. Employing a rolling-window validation methodology on seven key banking stocks during H1 2025, the KNN model demonstrated significant predictive capability. The average accuracy across the selected banking stocks reached 82.0%, with standout performance for TISCO and SCB. While results varied across stocks, our findings substantiate the theoretical and practical sufficiency of a simplicity-first approach. The research demonstrates that in high-noise emerging markets, feature sparsity and instance-based logic serve as an essential defense against overfitting, providing institutional practitioners with a transparent and robust alternative to complex methodologies.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_74-K_Nearest_Neighbors_Algorithm_for_Short_to_Medium_Term.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Acquiring Optimal Models of Random Forest and Support Vector Machine Through Tuning Hyperparameters in Classifying the Imbalanced Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170273</link>
        <id>10.14569/IJACSA.2026.0170273</id>
        <doi>10.14569/IJACSA.2026.0170273</doi>
        <lastModDate>2026-02-28T10:50:49.6430000+00:00</lastModDate>
        
        <creator>Dwija Wisnu Brata</creator>
        
        <creator>Arif Djunaidy</creator>
        
        <creator>Daniel Oranova Siahaan</creator>
        
        <creator>Samingun Handoyo</creator>
        
        <subject>Area under the curve; cross-validation folds; Matthew&#39;s correlation coefficient; optimal hyperparameters; oversampling technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Machine learning models most often misclassify the positive class in datasets with class imbalance. Moreover, sophisticated models involve hyperparameters that must be tuned to their optimal values. This study aims to tune the hyperparameters of random forest (RF) and support vector machine (SVM) models using 5-fold cross-validation, to build the best RF and SVM for two data scenarios (the original and the oversampled training data), and to compare the models&#39; performance on both the training and testing data. The RF hyperparameters, the number of instances in a leaf node and the tree depth, were found to be 500 and 10, respectively, whereas the SVM hyperparameters, gamma and the constant, were found to be 0.001 and 500, respectively. The benchmark models achieved around 98% across the accuracy, precision, recall, and F1-score metrics. However, they performed worse on Matthew&#39;s Correlation Coefficient (MCC) and the Area Under the Curve (AUC): 0.0000 and 0.5000, respectively. The models trained on the class-imbalanced dataset failed to predict the positive class. Although the best RF and SVM models trained on the oversampled dataset perform worse than the benchmark models across the four standard metrics, the best RF model shows improvements of approximately 7% (from 0.000 to 0.067) in MCC and 11% (from 0.500 to 0.612) in AUC, while the best SVM model shows slightly smaller improvements of approximately 6% (from 0.000 to 0.056) and 11% (from 0.500 to 0.611), respectively. Both the RF and SVM models improve in predicting the positive class, and the best RF model performs slightly better.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_73-The_Acquiring_Optimal_Models_of_Random_Forest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feasibility Study on Synthetic RGB-NIR Image Generation for Oil Palm Fresh Fruit Bunch Grading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170272</link>
        <id>10.14569/IJACSA.2026.0170272</id>
        <doi>10.14569/IJACSA.2026.0170272</doi>
        <lastModDate>2026-02-28T10:50:49.6100000+00:00</lastModDate>
        
        <creator>Nor Surayahani Suriani</creator>
        
        <creator>Norzali Hj Mohd</creator>
        
        <creator>Shaharil Mohd Shah</creator>
        
        <creator>Siti Zarina Muji</creator>
        
        <creator>Fadilla Atyka Nor Rashid</creator>
        
        <subject>Generative AI; deep learning; U-Net image translation; EfficientNet-B0; MobileNetV3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate ripeness grading of oil palm fresh fruit bunches (FFBs) is essential for optimizing oil quality and harvesting decisions. While near-infrared (NIR) imaging provides useful spectral cues for ripeness assessment, its adoption in field conditions is limited by sensor cost and system complexity. This study presents a low-cost alternative by generating synthetic NIR images from RGB inputs using a U-Net-based image translation model and integrating the generated NIR with RGB channels for ripeness classification. Five deep learning models, including a custom CNN, ResNet-50, EfficientNet-B0, DenseNet-201 and MobileNetV3, were evaluated under RGB-only and RGB + synthetic NIR configurations using identical training protocols. Experimental results demonstrate consistent performance improvements when synthetic NIR was incorporated. EfficientNet-B0 achieved the highest overall accuracy of 90.3%, while MobileNetV3 obtained the highest macro-averaged F1-score of 85.4%, indicating strong and balanced classification across ripeness classes. Confusion matrix analysis further revealed complementary strengths between the models, where EfficientNet-B0 showed stronger robustness in late-stage maturity detection, and MobileNetV3 provided improved discrimination of early-stage ripeness. The results demonstrate that synthetic NIR augmentation enhances classification performance and training stability without requiring specialized imaging hardware.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_72-A_Feasibility_Study_on_Synthetic_RGB_NIR_Image_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Driven Sensor Selection for Smart Water and Electricity Systems: A TOPSIS and VIKOR Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170271</link>
        <id>10.14569/IJACSA.2026.0170271</id>
        <doi>10.14569/IJACSA.2026.0170271</doi>
        <lastModDate>2026-02-28T10:50:49.5970000+00:00</lastModDate>
        
        <creator>Oumaima Rhallab</creator>
        
        <creator>Rachid Dehbi</creator>
        
        <creator>Zouhair Ibn Batouta</creator>
        
        <creator>Amine Dehbi</creator>
        
        <subject>Water and electricity system; water metering; electricity metering; MCDM techniques; TOPSIS; VIKOR; IoT; smart metering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In the era of smart cities and intelligent resource management, the need for precise and efficient sensing devices is increasingly critical. Among essential infrastructures are water and electricity systems, which necessitate continuous monitoring to optimize consumption, detect anomalies, and reduce operational costs. Several commercial sensors for water metering and electricity metering are available on the market, each differing in precision, power consumption, communication protocols, and durability. This diversity creates a decisional challenge when choosing the best sensor for a certain application. We report herein a comparative study of various water and electricity sensors by means of a multi-criteria decision-making approach. We adopt TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje), two robust quantitative MCDM techniques. A carefully selected set of evaluation criteria was applied to a wide range of commonly used sensors. The resulting rankings varied with context and application, thereby providing valuable insights for engineers, system designers, and decision-makers in the IoT domain. The present study is expected to offer practical guidance for choosing the best sensors in smart metering applications and to demonstrate the effectiveness of integrated decision models in technical evaluations.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_71-IoT_Driven_Sensor_Selection_for_Smart_Water.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Aware Customer Segmentation Using a Distributed Graph-Based Attribute Projection Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170270</link>
        <id>10.14569/IJACSA.2026.0170270</id>
        <doi>10.14569/IJACSA.2026.0170270</doi>
        <lastModDate>2026-02-28T10:50:49.5470000+00:00</lastModDate>
        
        <creator>Pentareddy Ashalatha</creator>
        
        <creator>G. Krishna Mohan</creator>
        
        <subject>Business intelligence; privacy-aware customer segmentation; graph-based learning; federated analytics; graph neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Customer segmentation plays a vital role in Business Intelligence (BI) by enabling organizations to understand customer behavior, enhance personalization, and support informed decision-making. Conventional segmentation approaches, including K-Means clustering, hierarchical methods, and hybrid deep learning models, often face limitations when handling high-dimensional customer data and typically lack built-in mechanisms to address privacy concerns. As customer analytics increasingly relies on sensitive personal information, these limitations pose significant challenges for responsible data-driven applications. To overcome these issues, this study introduces a Distributed Graph-Based Attribute Projection Framework (GAPF) for privacy-aware customer segmentation. The key novelty of the proposed framework lies in its ability to minimize sensitive attribute exposure while preserving meaningful relational patterns among customers through graph-based representations. GAPF employs a distributed processing pipeline that integrates attribute projection to reduce identifiability, heuristic-driven customer similarity graph construction, graph convolutional network (GCN)–based feature learning, and community detection for final segmentation. The framework is implemented using Python, NetworkX, and PyTorch Geometric and evaluated on the Mall Customers dataset and large-scale anonymized synthetic data to assess scalability. Experimental results demonstrate that GAPF achieves superior segmentation performance, with an accuracy of 98%, precision of 92.5%, recall of 94.0%, and an F1-score of 93.2%, while also exhibiting efficient execution and reduced privacy risk. These findings confirm GAPF as a robust and practical solution for privacy-aware BI applications.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_70-Privacy_Aware_Customer_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Blue Carbon Accounting: A Cryptographic Framework for Coastal Ecosystem Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170269</link>
        <id>10.14569/IJACSA.2026.0170269</id>
        <doi>10.14569/IJACSA.2026.0170269</doi>
        <lastModDate>2026-02-28T10:50:49.5170000+00:00</lastModDate>
        
        <creator>Heider A. M. Wahsheh</creator>
        
        <subject>Blue carbon accounting; environmental cybersecurity; blockchain security; Public Key Infrastructure (PKI); elliptic curve cryptography; secure multiparty computation; IoT security; data integrity; carbon credit verification; MRV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Blue carbon ecosystems have significant long-term carbon sequestration capacity, making them an important nature-based solution for climate change mitigation. However, monitoring and accounting processes are increasingly dependent on distributed IoT sensors, satellite remote sensing, and cloud-based analytics platforms. This growing digitalization exposes the blue carbon data lifecycle to risks such as tampering, unauthorized access, and loss of data provenance. A compliance-aware cryptographic framework is presented to secure blue carbon accounting throughout the end-to-end process, from in situ measurement to carbon credit verification. In contrast with generic IoT–blockchain architectures, the framework binds sensing devices to national Public Key Infrastructure (PKI) identities and produces audit-ready cryptographic evidence aligned with Monitoring, Reporting, and Verification (MRV) workflows. The design employs the Elliptic Curve Digital Signature Algorithm (ECDSA) to ensure authenticity and non-repudiation, Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) encryption for confidentiality, a hash-chained log for ordered integrity, and Secure Multiparty Computation (SMC) for privacy-preserving validation. Experimental results under simulated attacks (n = 50) demonstrate a 100% detection rate across the evaluated tampering scenarios, while maintaining an average IoT-layer cryptographic latency below 10 ms and a blockchain throughput of 145 transactions per second, exceeding the requirements for continuous ecosystem monitoring. These findings indicate that strong lifecycle-wide cryptographic guarantees can be achieved without imposing prohibitive computational overhead.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_69-Securing_Blue_Carbon_Accounting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable AI with DRL for Smart Home Energy Management and Residential Cost Savings in IoT-Based Autonomic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170268</link>
        <id>10.14569/IJACSA.2026.0170268</id>
        <doi>10.14569/IJACSA.2026.0170268</doi>
        <lastModDate>2026-02-28T10:50:49.4870000+00:00</lastModDate>
        
        <creator>Prabagaran A</creator>
        
        <creator>Srinivasan Sriramulu</creator>
        
        <creator>S. Premkumar</creator>
        
        <creator>Sheelesh Kumar Sharma</creator>
        
        <creator>Karthi Govindharaju</creator>
        
        <creator>Angelin Blessy J</creator>
        
        <subject>Explainable artificial intelligence; smart home energy management; deep reinforcement learning; Internet of Things; energy optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>As more people seek ways to improve their homes and workplaces while reducing energy consumption, smart home systems are becoming increasingly prevalent. Unfortunately, the complexity and &quot;black-box&quot; nature of these systems makes it difficult to deploy AI-enabled decision-making simulations, raising issues with explainability, confidence, transparency, responsiveness, and fairness. Explainable Artificial Intelligence (XAI), a rapidly developing discipline, addresses these problems by offering justifications for the decisions and behaviors of such systems. This research describes a novel method for IoT-based autonomic devices to control energy that combines XAI with Deep Reinforcement Learning (DRL) in a Home Energy Management System (HEMS) to achieve significant household cost reductions. The proposed approach leverages XAI&#39;s features to improve the accessibility and transparency of DRL agents, helping consumers understand and trust autonomous power management decisions. By optimizing energy usage patterns and adapting to changing environmental conditions, the proposed solution ensures effective energy use while maintaining user comfort. In-depth modeling and real-world applications demonstrate the solution&#39;s efficacy, highlighting its potential to reduce energy consumption costs and promote sustainable living. This study sets a new standard for clarity and flexibility in AI-driven smart home systems, paving the way for more reliable and user-friendly IoT software. It is important to note that developing a thermal dynamics model and understanding unidentified variables are not prerequisites for the proposed technique. Results from simulations based on real-world data show the resilience and effectiveness of the recommended strategy.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_68-Explainable_AI_with_DRL_for_Smart_Home_Energy_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporal Attention Networks for Real-Time Multimodal Emotion Recognition from EEG and fNIRS Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170267</link>
        <id>10.14569/IJACSA.2026.0170267</id>
        <doi>10.14569/IJACSA.2026.0170267</doi>
        <lastModDate>2026-02-28T10:50:49.4530000+00:00</lastModDate>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Anne Marie D. Pahiwon</creator>
        
        <creator>Ankush Mehta</creator>
        
        <creator>Gadug Sudhamsu</creator>
        
        <creator>Pavithra M</creator>
        
        <creator>K. Kiran Kumar</creator>
        
        <creator>B Kiran Bala</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Emotion recognition; Temporal Convolutional Network; attention mechanism; mental healthcare; EEG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Emotion recognition is critical in the development of real-time mental health care and individualized cognitive behavior. Current strategies to recognize cognitive emotions frequently fail to capture complex time dependencies and multimodal physiological reactions, leading to sub-optimal performance and inaccurate generalization. To overcome such shortcomings, the proposed study suggests TCADNet, a new deep learning model that integrates Temporal Convolutional Networks (TCN), attention-based feature weighting, and GAN-based data augmentation to achieve a high recognition rate of emotional states through EEG and fNIRS recordings. The model utilizes the TCNs to extract both short-term and long-term temporal trends, and the attention mechanism emphasizes salient parts that bring about emotions, which improves interpretability. Moreover, a Deep Convolutional GAN creates artificial signals for underrepresented emotion classes, eliminating data imbalance and enhancing generalization. The TCADNet model is implemented in Python with TensorFlow/Keras, and its key components are preprocessing, temporal modeling, attention weighting, data augmentation, and final classification by a SoftMax layer. Experimental outcomes indicate that TCADNet achieves high recognition performance, with overall accuracy, precision, recall, and F1-scores above 98%, exceeding conventional CNN, LSTM, and standalone TCN models. The suggested methodology can be useful to researchers, clinicians, and mental health professionals, as it allows them to monitor cognitive and emotional conditions in real-time with a reliable, interpretable, and scalable instrument, and provides an opportunity to detect and respond to issues promptly and implement tailored intervention plans in educational or health-related settings.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_67-Temporal_Attention_Networks_for_Real_Time_Multimodal_Emotion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster Domain-Aware Client Selection for Federated Learning in The Healthcare Field (CDCSF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170266</link>
        <id>10.14569/IJACSA.2026.0170266</id>
        <doi>10.14569/IJACSA.2026.0170266</doi>
        <lastModDate>2026-02-28T10:50:49.4400000+00:00</lastModDate>
        
        <creator>Sanaa Lakrouni</creator>
        
        <creator>Marouane Sebgui</creator>
        
        <creator>Slimane Bah</creator>
        
        <subject>Federated learning; healthcare; distributed learning; data heterogeneity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Client selection remains a critical challenge in Federated Learning (FL). Resource-aware strategies aim to reduce training delays and mitigate stragglers by selecting an appropriate subset of clients in each round. However, these methods prioritize computationally strong clients and exclude resource-constrained clients. In healthcare settings, this approach is impractical because it removes entire domains from training, which harms generalization. To address these challenges, we propose CDCSF, a domain-aware client selection framework that re-partitions clients into domain-homogeneous groups in each iteration. CDCSF is a dynamic clustering framework based on the Expectation-Maximization (EM) algorithm that clusters clients based on local feature prototypes to enhance domain diversity. The framework incorporates a reliability score derived from an exponential moving average of training time to favor efficient clients. Simultaneously, a fairness score is introduced to ensure that underrepresented clients can still contribute to the training. This approach preserves sufficient representation across all domains to improve model generalization and accelerate convergence. We conduct extensive experiments on a healthcare benchmark dataset to validate the effectiveness of CDCSF. The proposed method improves accuracy by 2% over FedAvg under domain shift and outperforms PoC by 8%. With the proposed adaptive client selection strategy, we further demonstrate that CDCSF converges significantly faster than baseline methods under heterogeneous resource and data conditions.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_66-Cluster_Domain_Aware_Client_Selection_for_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A System-Oriented Machine Learning Approach for Planning and Execution of Decisions in Software Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170265</link>
        <id>10.14569/IJACSA.2026.0170265</id>
        <doi>10.14569/IJACSA.2026.0170265</doi>
        <lastModDate>2026-02-28T10:50:49.4070000+00:00</lastModDate>
        
        <creator>Foziah Gazzawe</creator>
        
        <subject>Software project management; decision support systems; machine learning; CatBoost; effort estimation; cost estimation; risk prediction; issue-tracking; explainable AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Software project management must make high-stakes decisions under uncertainty in effort estimation, cost control, and execution risk. Although machine learning has enhanced predictive accuracy, several studies employ it only in isolated planning or execution tasks, thereby limiting its usefulness as an end-to-end decision support system. This study describes DeciBoost PM, a single framework that facilitates both planning-layer estimation and execution-layer delay-risk management using a single CatBoost backbone with inherent interpretability. We assess the framework on three heterogeneous public datasets, namely Desharnais for effort estimation, PROMISE for cost estimation, and an Apache JIRA issue-tracking dataset for delay and risk classification. The same pipeline is applied across the datasets: preprocessing, feature engineering, leakage-aware splitting, and identical validation. Standard regression and classification measures are used to measure performance and compare the results with baseline learners. Findings indicate that DeciBoost PM delivers good and consistent predictive performance with low variance across tasks, thus enhancing estimation accuracy and delay-risk discrimination. The framework provides transparency through SHAP-based explanations and threshold-controlled decision rules that can be directly translated into actionable managerial indicators. In general, DeciBoost PM establishes machine learning as a system-level, practical decision support methodology throughout the software project life cycle.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_65-A _System_Oriented_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Hybrid Learning for Sustainable Industrial Forecasting: Integrating CNN–LSTM Models to Enhance Economic Efficiency and Carbon Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170264</link>
        <id>10.14569/IJACSA.2026.0170264</id>
        <doi>10.14569/IJACSA.2026.0170264</doi>
        <lastModDate>2026-02-28T10:50:49.3770000+00:00</lastModDate>
        
        <creator>Mohamed Amine Frikha</creator>
        
        <creator>Mariem Mrad</creator>
        
        <creator>Younes Boujelben</creator>
        
        <creator>Soufiene Ben Othman</creator>
        
        <subject>Neural networks; deep learning; CNN-LSTM; AI-enabled demand forecasting; sustainable supply chain; economic performance; digital transformation; data quality; emerging economies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This paper explores the contribution of neural network-based forecasting models to enhancing the environmental resilience and economic efficiency of industrial supply chains. The methodology combines a review of existing literature with a quasi-experimental study conducted from the perspective of a manufacturer. Using this approach, the study analyzes the transition from traditional statistical forecasting practices to modern neural predictive frameworks, accounts for the amount of data available, and assesses their impact on decision-making and overall chain performance. The results from a Tunisian organization indicate that deep hybrid learning architectures, particularly CNN-LSTM models, significantly improve the accuracy of demand forecasting, resulting in concurrent gains in operational efficiency and environmental performance. The organization also achieved a reduction in its annual costs of 2.25 million Tunisian dinars, along with a decrease in carbon emissions. The study also identifies key obstacles, such as fragmented data infrastructure, a lack of digital skills, and overall development costs, which hinder the effective adoption of deep learning. Based on these findings, the paper proposes a dual-performance neural network framework to help managers and policymakers align technological innovation with the realities of emerging economies.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_64-Deep_Hybrid_Learning_for_Sustainable_Industrial_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Network Security Framework for Distributed Quantum-Assisted Cloud Continuum Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170263</link>
        <id>10.14569/IJACSA.2026.0170263</id>
        <doi>10.14569/IJACSA.2026.0170263</doi>
        <lastModDate>2026-02-28T10:50:49.3600000+00:00</lastModDate>
        
        <creator>P. Suseendhar</creator>
        
        <creator>K. P. Sridhar</creator>
        
        <subject>Adaptive; network; security; distributed; cloud; continuum; threat; aware; orchestration; intelligent; engine; policy; enforcement module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Distributed Quantum-assisted Cloud Continuum architectures have brought revolutionary changes to the current computing environment through increased data proximity, reduced latency, and real-time responsiveness. These systems integrate cloud, fog, and edge computing layers. However, this architectural evolution introduces many complicated security concerns, such as heterogeneous devices, dynamic workloads, splintered attack surfaces, and inconsistent policy application across the spectrum. Such dynamic infrastructures need a new security paradigm that is resilient, scalable, and responsive, since legacy centralised safety measures can no longer protect them. This study presents the Adaptive Threat-Aware Security Orchestration (ATASO) framework, a smart, context-aware, and scalable network security solution designed to overcome these challenges. ATASO comprises three units: an Intelligent Security Monitoring Layer (ISML) that operates in real time, a Context-Aware Threat Analysis Engine (CTAE) that detects distributed anomalies using federated deep learning, and an Adaptive Policy Enforcement Module (APEM) that enforces mitigation policies through context-aware rules and blockchain smart contracts. This multi-layer design delivers robust policy enforcement, low latency overhead, and end-to-end threat monitoring. The ATASO model is particularly applicable where security responsiveness and low-latency response are of utmost importance, including healthcare monitoring networks, autonomous vehicle networks, smart city networks, and industrial IoT networks. Extensive simulation studies show that the approach outperforms existing approaches along several important dimensions, including detection accuracy (more than 96%), response latency (up to 40% lower), and resource consumption in large-scale deployments. These findings confirm that ATASO has the potential to serve as a sophisticated adaptive security system protecting future cloud continuum designs against evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_63-Adaptive_Network_Security_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Continuum-Based Deep Learning Optimization Framework for Next-Generation Healthcare Data Performance on IoT Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170262</link>
        <id>10.14569/IJACSA.2026.0170262</id>
        <doi>10.14569/IJACSA.2026.0170262</doi>
        <lastModDate>2026-02-28T10:50:49.3300000+00:00</lastModDate>
        
        <creator>G. Aravindh</creator>
        
        <creator>K. P. Sridhar</creator>
        
        <subject>Cloud; continuum; next-generation; healthcare data; performance; platforms; fog; edge; orchestration; device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The development of healthcare data performance analysis is increasingly driven by the incorporation of intelligent computing paradigms that guarantee real-time, scalable, and personalized feedback for coaches and athletes. However, existing healthcare data analytics systems face severe issues such as decision-making latency, limited processing capacity at the edge, data fragmentation, and the inability to integrate seamlessly across heterogeneous computing environments. Athletic data in this scenario refers to a combination of biomechanical factors (motion capture, joint angles, gait patterns), biometric signals (heart rate, oxygen saturation, muscle activity), and sport-specific performance indicators (workload, speed, and acceleration). This paper introduces the Cloud-Continuum-based Deep Learning Optimization Framework (CC-DLOF), a novel architecture that leverages the synergistic potential of edge, fog, and cloud computing to provide dynamic and smart healthcare data performance on an IoT platform. CC-DLOF is a hierarchical continuum architecture: real-time data gathering and lightweight analytics are performed in the edge layer, contextual processing and federated learning in the fog layer, and global intelligence, deep model training, and long-term data storage in the cloud layer. A new Cloud-Fog-Edge Orchestration Device (CFEOD) dynamically allocates computational tasks according to latency sensitivity and device capability, while blockchain-supported access control maintains data security and privacy. Simulation analysis, conducted in a simulated training environment combined with real-world datasets, shows that the framework reduces latency by 35%, increases model accuracy by 22%, and boosts system scalability and reliability. CC-DLOF represents a transformative approach to healthcare data technology, enabling smart, responsive, and safe next-generation healthcare data performance on an IoT platform.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_62-Cloud_Continuum_Based_Deep_Learning_Optimization_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Articulatory-Aware CNN-BiGRU-Attention Framework for Explainable Phoneme-Level Pronunciation Assessment in ESL Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170261</link>
        <id>10.14569/IJACSA.2026.0170261</id>
        <doi>10.14569/IJACSA.2026.0170261</doi>
        <lastModDate>2026-02-28T10:50:49.2970000+00:00</lastModDate>
        
        <creator>P. Bindhu</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>M. Durairaj</creator>
        
        <creator>Megha Sawangikar</creator>
        
        <creator>N. Neelima</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>G. Sanjiv Ra</creator>
        
        <creator>Loay F. Hussien</creator>
        
        <subject>Articulatory error analysis; attention mechanism; ESL pronunciation assessment; phoneme recognition model; speech processing framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate pronunciation at the phoneme level remains one of the most enduring problems for learners of English as a Second Language (ESL), since slight pronunciation variations in the learned language can greatly affect its communicative power and intelligibility. Existing pronunciation evaluation methods, mostly built on automatic speech recognition (ASR), report results at the word or sentence level and offer generic numerical scores with little linguistic meaning, which makes them ineffective for assessing accented speech and guiding subsequent correction. To overcome these shortcomings, this paper introduces an articulatory-aware phoneme recognition model that provides fine-grained and interpretable feedback to enhance ESL pronunciation. The novelty of the work lies in the combination of a hybrid CNN-BiGRU-Attention architecture with an Articulatory Error Mapping Engine, which symbolically maps phoneme-level errors to articulatory deviations in place of articulation, manner, voicing, and vowel quality. Experimental analysis on non-native English speech yielded a phoneme recognition accuracy of 91.4%, much higher than commercial ASR-based systems (78.3%) and traditional HMM-GMM baselines (70.5%). The system was highly sensitive to ESL pronunciation errors, detecting substitutions with 84% accuracy, deletions with 82%, and insertions with 79%, while articulatory mapping was over 87% accurate in all categories. The framework was implemented in Python with deep learning packages and speech processing toolkits, providing a scalable, explainable, and learner-focused system that can support intelligent ESL pronunciation training and deliver pedagogically significant feedback at the phoneme level.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_61-An_Articulatory_Aware_CNN_BiGRU_Attention_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Uncertainty-Aware Volumetric Transformer with Dual Spatial-Channel Attention for Lung Nodule Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170260</link>
        <id>10.14569/IJACSA.2026.0170260</id>
        <doi>10.14569/IJACSA.2026.0170260</doi>
        <lastModDate>2026-02-28T10:50:49.2670000+00:00</lastModDate>
        
        <creator>B. N. Patil</creator>
        
        <creator>TK Rama Krishna Rao</creator>
        
        <creator>Nurilla Mahamatov</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Arun Prasad.VK</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Aaquil Bunglowala</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>3D Swin Transformer; dual-level attention mechanism; computed tomography; lung cancer diagnosis; uncertainty-aware deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Lung cancer is among the most common causes of cancer-related deaths in the world, and the earliest possible detection through computed tomography (CT) is important for enhancing patient survival. Nevertheless, accurate diagnosis remains challenging because nodules are small and indistinct, inter-rater consistency among radiologists is limited, and traditional deep learning systems have limited capacity to model volumetric interactions or provide interpretable, confidence-aware predictions. This research proposes an uncertainty-aware Transformer-Enhanced Dual-Level Attention Network (TDA-Net) to classify lung nodules in CT images and address these issues. The architecture combines a 3D Swin Transformer backbone with sequential spatial and channel attention fusion to model both localized structural and global volumetric context. Moreover, Monte Carlo dropout is used at inference to measure predictive uncertainty, allowing low-confidence cases to be identified and referred to a radiologist. Tested on a publicly available lung CT dataset, the model achieves an accuracy of 98.3% with high sensitivity to small nodules, clear class separation in the feature space, and an uncertainty rate of 5.1%. The experimental findings indicate that TDA-Net can serve as a supportive decision-making tool for computer-aided lung cancer diagnosis, offering better discriminative performance and uncertainty awareness than the baseline models. Moreover, the framework distinguishes predictive uncertainty from model uncertainty: predictive uncertainty is measured through the variance of softmax probability distributions across stochastic forward passes, which relates to data ambiguity, while model uncertainty is estimated via Monte Carlo dropout as a Bayesian approximation, representing parameter-level uncertainty due to limited training data.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_60-Uncertainty_Aware_Volumetric_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RiskMIS: A Web-Based Risk Management Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170259</link>
        <id>10.14569/IJACSA.2026.0170259</id>
        <doi>10.14569/IJACSA.2026.0170259</doi>
        <lastModDate>2026-02-28T10:50:49.2370000+00:00</lastModDate>
        
        <creator>Alya Thiab Almutairi</creator>
        
        <creator>Kassem Saleh</creator>
        
        <subject>Risk management; risk management information system; risk register; project risk; decision support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Risk management is the systematic process of identifying, assessing, monitoring, and responding to risks in projects and ongoing operations. Effective execution of risk management activities is essential for the successful completion of projects and the achievement of key performance indicators in operational environments. In recent years, organizations have increasingly emphasized the proactive identification and mitigation of risks before they materialize. Consequently, risk assessment is addressed early in the lifecycle and continuously revisited to ensure the proper functioning of ongoing operations and the successful delivery of projects. A central tool in the risk management process is the risk register, which consolidates all critical and relevant information related to identified risks. The risk register serves as a focal point around which risk management activities are organized. It is inherently dynamic and must be continuously updated to reflect changes in risk exposure and response strategies. When properly managed, the risk register supports informed decision-making and enables managers to handle risks in a systematic and timely manner. The primary objective of this study is to bridge theory and practice by developing a risk management information system centered on a comprehensive risk register. The study presents the design of the proposed system using established modeling techniques and diagrams, followed by the development of a functional prototype based on modern web technologies. Finally, a demonstration of the prototype is provided to illustrate its capabilities and practical applicability.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_59-RiskMIS_A_Web_Based_Risk_Management_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving Federated Learning for Multi-Institutional Lung Cancer Severity Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170258</link>
        <id>10.14569/IJACSA.2026.0170258</id>
        <doi>10.14569/IJACSA.2026.0170258</doi>
        <lastModDate>2026-02-28T10:50:49.2030000+00:00</lastModDate>
        
        <creator>Ch. Srividya</creator>
        
        <creator>K. Ramasubramanian</creator>
        
        <creator>Myneni.Madhu Bala</creator>
        
        <subject>Federated learning; privacy-preserving AI; lung cancer detection; severity classification; multi-institutional data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Lung cancer continues to be the most common cause of cancer-related mortality globally, and the timely detection of lung cancer and classification of its severity levels are critical to improving survival. Nonetheless, data privacy regulations and institutional data silos often create barriers to developing robust AI models across clinical centers. This paper presents a framework for privacy-preserving multi-institutional lung cancer severity classification utilizing federated learning (FL) with secure aggregation, where only encrypted model updates are exchanged while raw patient data remain locally stored. The framework encompasses four new components: a privacy-preserving federated neural ensemble model (PP-FNE), a gradient boosting (GB)-based FL strategy (MIF-GBF), a hybrid convolutional-transformer network (CF-CTN), and a semi-adaptive federated attention-aggregated model (SAFAM). Each component offers a way to connect sites in a multi-institutional effort while addressing data heterogeneity, model interpretability, and cross-site collaboration under strong privacy protection for sensitive health data. The framework is evaluated using a synthetic dataset developed to mimic the clinical heterogeneity of real-world multi-site networks. The best-performing model, SAFAM, achieved an overall classification accuracy of 93.4%, demonstrated robustness to intelligently crafted noise (1.3% accuracy degradation), and preserved predictive performance under encrypted aggregation with minimal communication overhead per federation round. CF-CTN's strengths lie in multimodal integration for lung cancer severity classification and model interpretability, MIF-GBF provides notable interpretability for GB models, and PP-FNE exhibited stability as an ensemble model under cross-site variability. All four FL methods restricted cross-site communication to encrypted model update exchanges via a secure aggregation protocol, aligning with data minimization principles under HIPAA and GDPR. These results provide evidence that, when an FL approach is combined with task-specific algorithmic innovations, accurate and privacy-protected lung cancer severity detection can be achieved in a distributed clinical setting.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_58-Privacy_Preserving_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Autism Spectrum Disorder Classification Using an Enhanced Convolutional Neural Network Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170257</link>
        <id>10.14569/IJACSA.2026.0170257</id>
        <doi>10.14569/IJACSA.2026.0170257</doi>
        <lastModDate>2026-02-28T10:50:49.1730000+00:00</lastModDate>
        
        <creator>P. Yugander</creator>
        
        <creator>M. Jagannath</creator>
        
        <subject>Autism; enhanced CNN; random forest; MR images; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disability that appears during early childhood. Conventional ASD diagnostic techniques rely on behavioural observations, characteristics, and clinical interviews. To assist physicians beyond these manual assessments, numerous machine learning (ML) and deep learning (DL) techniques have been applied. For the past three decades, biomedical images have been employed to diagnose neurodevelopmental disorders; functional Magnetic Resonance (MR) images are used in this study. This paper proposes a novel machine learning framework to distinguish ASD participants from healthy controls. The framework consists of two stages: in the first, an enhanced Convolutional Neural Network (CNN) extracts features; in the second, the extracted features are fed to machine learning classifiers. The method is tested on 1112 fMRI images, comprising 539 ASD participants and 573 healthy controls, drawn from 17 datasets on the ABIDE website that were collected at various international medical laboratories. The proposed framework outperforms existing methods, achieving 92.45% accuracy across the entire ABIDE dataset and 98.61% on an individual dataset.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_57-Machine_Learning_Based_Autism_Spectrum_Disorder_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet-Based Dual-Domain Phase Alignment for Predicting Stock Index Trends from Investor Sentiment Cycles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170256</link>
        <id>10.14569/IJACSA.2026.0170256</id>
        <doi>10.14569/IJACSA.2026.0170256</doi>
        <lastModDate>2026-02-28T10:50:49.1430000+00:00</lastModDate>
        
        <creator>Monika Gorkhe</creator>
        
        <creator>Diksha Tripathi</creator>
        
        <creator>Pravin D Sawant</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>Pratik Gite</creator>
        
        <creator>Raman Kumar</creator>
        
        <subject>Stock trend prediction; investor sentiment analysis; wavelet time–frequency analysis; phase synchronization modeling; financial market forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The dynamics of financial markets are complicated and non-stationary, strongly influenced by waves of investor sentiment, with non-periodic changes in sentiment cycles driving multi-scale market behavior. Recent work on sentiment-based stock prediction has shown promising results, but current research relies to a large extent on single-domain analysis, constant correlation, or traditional machine learning frameworks, which constrains its ability to elucidate multi-scale temporal dynamics and phase-based lead-lag relationships. To overcome these weaknesses, a new Cross-wavelet Sentiment-driven Dual-domain Phase Alignment model, abbreviated as CS-D&#178;PA, is introduced for stock index trend prediction. The framework combines continuous wavelet feature extraction with cross-wavelet phase difference estimation and structured alignment across the time and frequency domains. By explicitly modeling sentiment-price phase synchronization, the approach identifies lead-lag interactions early and increases the reliability and interpretability of forecasts in volatile market conditions. The proposed model achieves 91.8% accuracy, 90.6% precision, 92.1% recall, 91.3% F1-score, and a 93.5% trend consistency rate, demonstrating better predictive stability and classification performance in sentiment-driven stock trend forecasting.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_56-Wavelet_Based_Dual_Domain_Phase_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AMCS: Adaptive Multi-Controller SDN Security with Stateful Traffic Intelligence for Fast and Accurate Multi-Vector Attack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170255</link>
        <id>10.14569/IJACSA.2026.0170255</id>
        <doi>10.14569/IJACSA.2026.0170255</doi>
        <lastModDate>2026-02-28T10:50:49.1100000+00:00</lastModDate>
        
        <creator>Ameer El-Sayed</creator>
        
        <creator>Mohamed Nosseir Hemdan</creator>
        
        <creator>Noha Abdelkarim</creator>
        
        <creator>Ehab Rushdy</creator>
        
        <creator>Hanaa M. Hamza</creator>
        
        <subject>SDN; IoT; multi-controller security; stateful traffic analysis; adaptive anomaly detection; distributed mitigation; entropy-based indicators; cooperative defense; P4 programmable networks; cyber-physical systems security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The rapid proliferation of Software-Defined Networking (SDN) in large-scale Internet of Things (IoT) ecosystems has amplified exposure to sophisticated, multi-vector cyberattacks that simultaneously exploit control- and data-plane asymmetries. Existing single-controller statistical detectors, while effective under high-volume anomalies, fail to sustain precision and responsiveness under dynamic, distributed, or low-rate attack conditions. Addressing this critical gap, we propose AMCS—an Adaptive Multi-Controller SDN Security framework that fuses stateful traffic intelligence with cooperative inter-controller decision-making to enable resilient, context-aware detection across complex IoT traffic. AMCS embeds lightweight, P4-based stateful processing directly in the data plane and augments it with adaptive entropy-driven anomaly evaluation and consensus-based coordination among distributed controllers. Extensive experiments on an SDN–IoT testbed demonstrate that AMCS achieves up to 99.7% detection accuracy for high-volume floods, 96.8% for low-rate anomalies, and 94.1% under mixed traffic, while maintaining false-positive rates below 5% and detection latencies as low as 1.24 s. The cooperative consensus protocol enhances cross-controller reliability to 98.4% with only 0.83 s synchronization delay, while reducing control overhead by 34.7% compared to the single-controller baseline. Moreover, the distributed mitigation layer reacts within 1.6 s on average, neutralizing over 97% of attack flows with negligible collateral impact. Collectively, these results confirm that integrating stateful in-switch analytics, adaptive thresholding, and multi-controller cooperation establishes a scalable, self-adaptive SDN security fabric—achieving both fast detection and stable defense against evolving multi-vector threats in IoT-driven networks.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_55-AMCS_Adaptive_Multi_Controller_SDN_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning and Deep Learning for Detecting Fake News in a Low-Resource Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170254</link>
        <id>10.14569/IJACSA.2026.0170254</id>
        <doi>10.14569/IJACSA.2026.0170254</doi>
        <lastModDate>2026-02-28T10:50:49.0800000+00:00</lastModDate>
        
        <creator>Elton Tata</creator>
        
        <creator>Jaumin Ajdarim</creator>
        
        <creator>Nuhi Besimi</creator>
        
        <subject>Fake news; Albanian language; machine learning; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Fake news detection has become a major problem in the digital age. This study presents an improved machine learning technique that achieves 91.99% accuracy in detecting fake news within Albanian textual datasets, demonstrating an improvement over existing baseline methodology. The implemented learning model uses 54 features specific to the Albanian language, such as red flags, credibility signals, punctuation patterns, and linguistic features. The model is tested on a balanced dataset of 3,994 Albanian news articles aggregated from various sources. We compare it to several baselines, such as LSTM networks (80.35% accuracy) and BERT-augmented Naive Bayes classifiers (88.36% accuracy). In our Albanian dataset and experimental setting, XGBoost achieved 91.99% accuracy, indicating strong performance under the evaluated scenario.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_54-Machine_Learning_and_Deep_Learning_for_Detecting_Fake_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Machine Learning Based Algorithms for Predicting Injury Severity in Road Accidents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170253</link>
        <id>10.14569/IJACSA.2026.0170253</id>
        <doi>10.14569/IJACSA.2026.0170253</doi>
        <lastModDate>2026-02-28T10:50:49.0470000+00:00</lastModDate>
        
        <creator>Soumaya AMRI</creator>
        
        <creator>Mohammed AL ACHHAB</creator>
        
        <creator>Mohamed LAZAAR</creator>
        
        <subject>Machine learning; imbalanced data; road accident; multiclass classification; binary classification; injury severity prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Road crash injury severity prediction is essential for intelligent transportation systems, yet challenged by severe class imbalance, rigid 4-class severity schemes (unhurt/slight/hospitalized/fatal), and the difficulty of selecting an optimal methodology. This study proposes a structured framework systematically evaluating four machine learning models—CatBoost, HistGradientBoosting, Random Forest, and SVM—across multiclass (native 4-class and ordinal wrapper), binary reduction (non-severe vs. severe), and oversampling techniques using crash data. Multiclass approaches reveal the dominance of tree ensembles but persistent difficulty in predicting the rare severe class. Binary class reduction substantially improves severe injury detection performance on this dataset by simplifying decision boundaries, while SMOTE oversampling provides algorithm-specific imbalance mitigation. Random Forest demonstrates the most stable binary performance across evaluation metrics, independent of oversampling strategies. This performance gain comes at the cost of reduced severity granularity compared to the original multiclass formulation. Overall, under imbalance-sensitive evaluation metrics, binary class reduction provides a pragmatic and operationally effective alternative to complex multiclass strategies for severe injury detection.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_53-Comparative_Analysis_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Acoustic and Image Data Features for Melon Ripeness Classification Using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170252</link>
        <id>10.14569/IJACSA.2026.0170252</id>
        <doi>10.14569/IJACSA.2026.0170252</doi>
        <lastModDate>2026-02-28T10:50:49.0330000+00:00</lastModDate>
        
        <creator>Endang Purnama Giri</creator>
        
        <creator>Agus Buono</creator>
        
        <creator>Karlisa Priandana</creator>
        
        <creator>Dwi Guntoro</creator>
        
        <subject>CNN; multimodal learning; melon ripeness classification; image classification; acoustic classification; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study evaluates three classification scenarios: image-based only, acoustic-based using Mel Frequency Cepstral Coefficients (MFCC), and a combined multimodal CNN architecture integrating both modalities. The experiments are conducted on a relatively small dataset comprising only 230 samples. To mitigate the risk of overfitting arising from the limited dataset size, data augmentation is applied to both image and audio data, with audio augmentation performed before the construction of the MFCC spectrogram. Experimental results demonstrate that the multimodal CNN with data augmentation achieves the best performance, with precision, recall, and F1-score of 0.95, 0.94, and 0.94, respectively. These results indicate that augmenting both image and audio data effectively enhances data diversity and model robustness, significantly improving classification performance. The findings confirm that combining complementary feature representations from multiple modalities with proper augmentation strategies substantially improves audio-visual classification tasks.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_52-Integrating_Acoustic_and_Image_Data_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Vision Transformer-Based Architecture for Automatic Facial Emotion Monitoring in Workplace Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170251</link>
        <id>10.14569/IJACSA.2026.0170251</id>
        <doi>10.14569/IJACSA.2026.0170251</doi>
        <lastModDate>2026-02-28T10:50:49.0000000+00:00</lastModDate>
        
        <creator>Renzo Sebastian Gonzalez Caceres</creator>
        
        <creator>Jeramel Melissa Avila Salda&#241;a</creator>
        
        <creator>Patricia Gissela Pereyra Salvador</creator>
        
        <subject>Facial emotion recognition; vision transformer; affective computing; system architecture design; workplace emotion monitoring; privacy-by-design; deep learning inference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Facial emotion recognition is increasingly considered in affective computing as a mechanism for unobtrusive emotional awareness in organizational environments. This study proposes the design of a Vision Transformer (ViT)-based system architecture for automatic facial emotion monitoring, focusing on deployability, modular integration, and data-governance considerations rather than model benchmarking. The architecture defines a complete pipeline comprising a visual data acquisition layer, a processing backend for transformer-based inference, and a web-based visualization interface intended for aggregated emotional analytics. Publicly available datasets such as FER2013 and AffectNet are identified as reference sources for model adaptation within the proposed framework. The work details system components, data flow, scalability strategies, and privacy-by-design mechanisms, including transient image handling and non-persistent processing. Rather than presenting experimental performance, this study provides a technical blueprint and feasibility analysis intended to guide future implementation and validation of transformer-driven emotion monitoring systems in workplace contexts. The proposed framework aims to bridge the gap between advances in deep learning models and their practical integration into real-world organizational infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_51-Design_of_a_Vision_Transformer_Based_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Approach for Workmen’s Compensation Insurance Fraud Detection Based on Fuzzy Rule-Based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170250</link>
        <id>10.14569/IJACSA.2026.0170250</id>
        <doi>10.14569/IJACSA.2026.0170250</doi>
        <lastModDate>2026-02-28T10:50:48.9700000+00:00</lastModDate>
        
        <creator>Reham M. Essa</creator>
        
        <subject>Workmen’s compensation; fuzzy logic; fraud detection; rule-based system; insurance fraud; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In Workmen’s Compensation insurance, fraud detection (FD) remains a significant challenge due to claims&#39; inherent uncertainty and complexity. To address this, we propose an enhanced approach based on a fuzzy rule system (FRS) for FD. The FRS is designed to handle ambiguous and imprecise data, making it effective for identifying fraudulent patterns in insurance claims. Unlike traditional methods, the fuzzy system utilizes human-like reasoning by applying flexible rules to assess the likelihood of fraud under uncertain conditions. By modeling the decision-making process with fuzzy logic, the system allows for a detailed evaluation of claims, accommodating the gray areas that often exist in FD. This approach enables accurate and adaptive FD, reducing false positives and enhancing the precision of fraud identification. In imbalanced FD scenarios, the system achieves strong performance, such as an F1-score of 0.82 and MCC of 0.75, demonstrating its capability to correctly identify rare fraudulent cases despite class imbalance.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_50-An_Enhanced_Approach_for_Workmens_Compensation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Farm and Learn: An Offline Mobile Learning System Integrating AR, AI, and Game-Based Learning for Agricultural Education Among Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170249</link>
        <id>10.14569/IJACSA.2026.0170249</id>
        <doi>10.14569/IJACSA.2026.0170249</doi>
        <lastModDate>2026-02-28T10:50:48.9400000+00:00</lastModDate>
        
        <creator>Pubuditha De Silva</creator>
        
        <creator>Chamodya Prabodhani</creator>
        
        <creator>Daminda Herath</creator>
        
        <subject>Agricultural education; augmented reality; artificial intelligence; child-centered learning; game-based learning; offline mobile learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study presents Farm and Learn, an offline-first mobile learning system that integrates Augmented Reality (AR), Artificial Intelligence (AI), and Game-Based Learning (GBL) to enhance agricultural education among children in low-connectivity environments. Existing agricultural learning applications often provide isolated functionalities such as visualization or plant recognition, with limited pedagogical integration and insufficient support for rural deployment. To address these limitations, the proposed system combines immersive AR-based exploration, interactive gamified learning activities, and AI-assisted paddy plant growth-stage identification within a unified child-centered educational framework. The architecture adopts a modular offline-first design that enables core learning functionalities to operate without continuous internet access while allowing optional synchronization when connectivity is available. The AI component employs a lightweight YOLOv11n deep learning model validated through prototype inference to assess feasibility for future on-device deployment. The system was developed using Unity and ARCore and evaluated through user acceptance testing involving students, educators, and domain experts. Results demonstrate high usability, strong learner engagement, and improved learning performance, confirming the effectiveness of integrating immersive visualization, intelligent interaction, and gamified reinforcement in educational contexts. The findings highlight the practical potential of offline-first mobile learning platforms to support inclusive agricultural education and provide a scalable foundation for future intelligent educational systems in resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_49-Farm_and_Learn_An_Offline_Mobile_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Learning for Academic Achievement Prediction Using Spatio-Temporal and Behavioral Data in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170248</link>
        <id>10.14569/IJACSA.2026.0170248</id>
        <doi>10.14569/IJACSA.2026.0170248</doi>
        <lastModDate>2026-02-28T10:50:48.9070000+00:00</lastModDate>
        
        <creator>Asim Seedahmed Ali Osman</creator>
        
        <subject>Hybrid deep learning; CNN; LSTM; student performance; FOX optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate prediction of student academic performance is essential for enabling timely and effective educational interventions. Many existing prediction approaches focus either on academic outcomes or behavioral trends, without fully capturing the interaction between spatial performance indicators and their temporal evolution. To address this limitation, this study proposes a hybrid deep learning model that integrates spatio-temporal information for forecasting student achievement in higher education. The proposed framework combines a Convolutional Neural Network (CNN) to extract spatial features from normalized academic performance data with a Long Short-Term Memory (LSTM) network to model temporal patterns in student behavioral attributes, such as attendance and participation. In addition, FOX optimization is applied to adaptively tune the learning rate, improving training stability and predictive performance. The model is evaluated using student academic and behavioral datasets, and its performance is compared with commonly used baseline models. Experimental results show that the proposed CNN–LSTM approach achieves an accuracy of 97.18%, outperforming standalone LSTM and Support Vector Machine (SVM) models. Furthermore, the model effectively classifies students into low, medium, and high academic risk categories, supporting early identification of at-risk students and facilitating timely intervention in higher education environments.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_48-Hybrid_Deep_Learning_for_Academic_Achievement_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Ultra-Low Latency Data Paths: A Hybrid Architecture for High Speed Networking under Adversarial Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170247</link>
        <id>10.14569/IJACSA.2026.0170247</id>
        <doi>10.14569/IJACSA.2026.0170247</doi>
        <lastModDate>2026-02-28T10:50:48.8930000+00:00</lastModDate>
        
        <creator>Abdulbasid Banga</creator>
        
        <subject>High Speed Secure Networking Architecture (HSSNA); hardware-accelerated security; ai-based anomaly detection; 5g/6g network security; low-latency encryption pipelines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Satisfying the line-rate and deterministic-latency requirements of next-generation industrial networks is imperative for 400G/800G Ethernet backbones, optical transport networks, and emerging 5G/6G infrastructures. Existing solutions address detection accuracy or throughput in isolation, without providing a systems-level architecture that delivers bounded latency, scalable security, and hardware-efficient adaptivity in adversarial environments. We present a latency-constrained hardware-pipelined co-design framework, dubbed High Speed Secure Networking Architecture (HSSNA), capable of integrating probabilistic pre-filtering, adaptive lightweight cryptography, neural anomaly inference, and SDN-based routing into a single deterministic processing graph. In contrast to compositional security stacks, HSSNA formulates the security-performance coupling as a constrained optimization problem that minimizes total processing delay without sacrificing the robustness of intrusion detection. Our contributions include (1) cross-layer security orchestration embedded within the data path, (2) provable adversarial resilience guarantees derived from formally defined security properties, and (3) a parallel FPGA–GPU execution pipeline that removes sequential security bottlenecks. Through experiments conducted on a hybrid Mininet–NS3–FPGA/GPU testbed, we observe 60–77% lower latency, &gt; 190 Gbps higher throughput, and improved real-time detection robustness compared to conventional CPU-centric deployments. These results show that HSSNA is a systems-level re-architecture for high-speed secure networking, not a composition of prior-art tools.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_47-Secure_Ultra_Low_Latency_Data_Paths.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Supervised and Explainable Transformer-Based Architectures for Robust End-to-End Speech and Language Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170246</link>
        <id>10.14569/IJACSA.2026.0170246</id>
        <doi>10.14569/IJACSA.2026.0170246</doi>
        <lastModDate>2026-02-28T10:50:48.8600000+00:00</lastModDate>
        
        <creator>Mahfuzul Huda</creator>
        
        <subject>Transformer models; self-supervised learning; explainable AI; speech recognition; natural language understanding; end-to-end systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The primary aim of this study is to combine self-supervised learning techniques with transparent transformer-based frameworks to enable resilient, end-to-end speech and language understanding, pretraining deep transformer models on unannotated speech and text corpora. This research proposes an explainable, systematic transformer-based framework for speech and language understanding that integrates self-supervised learning with built-in explainability. The proposed model achieves a low word error rate, high accuracy, and interpretable predictions across multiple datasets. Alongside these strengths, the framework faces challenges, which are highlighted in the work: the deep transformer architecture demands substantial computing power, and feature-relevance estimation relies on indirect ground truth derived from approximate benchmarks. Planned future improvements include extending the framework to multilingual and cross-domain datasets, developing more efficient transformer variants for real-time use, and adding human-centered assessment methods to further validate interpretability.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_46-Self_Supervised_and_Explainable_Transformer_Based_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Effort Prediction and Early Risk Detection in Software Development Projects: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170245</link>
        <id>10.14569/IJACSA.2026.0170245</id>
        <doi>10.14569/IJACSA.2026.0170245</doi>
        <lastModDate>2026-02-28T10:50:48.8300000+00:00</lastModDate>
        
        <creator>Andreea-Elena Catana</creator>
        
        <creator>Adriana Florescu</creator>
        
        <subject>Software effort estimation; risk detection; machine learning; python; open data; gradient boosting; effort estimation; risk thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate effort estimation and early risk detection are critical for the success of software projects, as inaccurate forecasts can lead to schedule overruns, inefficient resource allocation, and unmet requirements. This study investigates the use of machine learning techniques to support task-level effort prediction and proactive risk identification in software project management. An applied case study was conducted on a simulated dataset of 500 software development tasks, described by planning, technical, and team-related features. Two ensemble-based regression models, Gradient Boosting and Random Forest, are evaluated for predicting actual task duration. Model performance is assessed using standard metrics, including Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and the coefficient of determination (R&#178;). To enable early risk detection, prediction errors are transformed into deviation-based indicators, and threshold-based classifiers are employed to identify tasks with moderate (&gt;20%) and severe (&gt;30%) schedule overruns. Confusion matrices and classification metrics are used to evaluate the effectiveness of the proposed alerting mechanism, and the distribution of high-risk tasks across sprint quantiles is analyzed to support managerial decision-making.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_45-Machine_Learning_Based_Effort_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Japanese Festival Traditions into Computational Thinking: A Culturally Responsive Approach to Micro:Bit-based Physical Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170244</link>
        <id>10.14569/IJACSA.2026.0170244</id>
        <doi>10.14569/IJACSA.2026.0170244</doi>
        <lastModDate>2026-02-28T10:50:48.7830000+00:00</lastModDate>
        
        <creator>Daiki Sugiyama</creator>
        
        <creator>M. Fahim Ferdous Khan</creator>
        
        <creator>Ken Sakamura</creator>
        
        <subject>Culturally responsive teaching (CRT); Culturally Responsive Computing (CRC); computational thinking; micro:bit; K-12 education; physical computing; STEM education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The current pre-tertiary educational landscape in Japan is defined by a significant paradox. While technological infrastructure has reached unprecedented levels of saturation through the Global and Innovation Gateway for All (GIGA) School initiative, student engagement and qualitative learning outcomes in computational disciplines have shown signs of stagnation. This research addresses this disparity by proposing a framework grounded in Culturally Responsive Teaching (CRT) and Culturally Responsive Computing (CRC), utilizing the traditional Japanese festival game of Shateki (target shooting) as a primary pedagogical vehicle. By leveraging the BBC micro:bit and a modular physical computing architecture, this study demonstrates how situating abstract programming concepts, such as sequential logic, conditional branching, and signal modulation, within familiar “vernacular” and “heritage” cultural contexts can foster intrinsic motivation and improve academic achievement among K-12 students. Through a pilot evaluation conducted during a community festival, this study provides quantitative evidence that CRT-based interventions not only enhance technical proficiency but also bolster a sense of student agency and cultural belonging. The results suggest that the success of modern educational technology initiatives depends less on hardware distribution and more on the pedagogical translation of technical content into culturally meaningful experiences.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_44-Integrating_Japanese_Festival_Traditions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Sketch Recognition: Ethnic Groups Classification and Recognition Via a VGG16 Model Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170243</link>
        <id>10.14569/IJACSA.2026.0170243</id>
        <doi>10.14569/IJACSA.2026.0170243</doi>
        <lastModDate>2026-02-28T10:50:48.7500000+00:00</lastModDate>
        
        <creator>Khalid OUNACHAD</creator>
        
        <creator>Mohamed EL GHMARY</creator>
        
        <subject>Forensic face sketch; ethnic classification; deep learning; transfer learning; optimized VGG16 model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In law enforcement investigations, police use sketching techniques to identify suspects from an eyewitness&#39;s memory. Many automatic face sketch recognition systems that determine a perpetrator’s appearance from face image datasets have been proposed, with the aim of arresting the right offender. This work proposes a search based on an ethnicity criterion to speed up automatic identification and to help authorities respond quickly by launching the retrieval process over only a part of the face image dataset. The goal of this study is to enhance the accuracy of ethnic face sketch classification using a convolutional neural network built on the VGG16 architecture. The FairFace dataset, which includes face images of seven ethnicities: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino|Hispanic, was employed in the study. We convert the face image dataset to face sketch images and optimize the VGG16 model for seven classification outputs. This work shows that the VGG16 deep learning model offers a reliable, automated approach to ethnic face sketch classification and recognition. The model achieved an accuracy above 94% and produced a low false negative rate, which is crucial for minimizing undetected cases.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_43-Face_Sketch_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rubric-Relational Discourse Modeling with Counterfactual Explainability for Multi-Trait Automated Essay Scoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170242</link>
        <id>10.14569/IJACSA.2026.0170242</id>
        <doi>10.14569/IJACSA.2026.0170242</doi>
        <lastModDate>2026-02-28T10:50:48.7200000+00:00</lastModDate>
        
        <creator>N. Sreedevi</creator>
        
        <creator>M. Madhusudhan Rao</creator>
        
        <creator>Sridevi Dasam</creator>
        
        <creator>Roopa Traisa</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>V. Saranya</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Automated Essay Scoring; rubric-aware modeling; discourse representation; counterfactual explainability; multi-task learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Automated Essay Scoring (AES) systems often rely on holistic prediction and show weak alignment with rubric-based human evaluation. Existing deep learning approaches achieve moderate agreement but struggle to model discourse coherence and provide trait-faithful explanations. This study proposes a rubric-aware and discourse-faithful essay scoring framework that integrates contextual embeddings with sentence-level discourse modeling and rubric-specific attention. The framework generates both holistic and trait-level scores, while enabling counterfactual explanation of scoring decisions. Experiments conducted on the Learning Agency Lab – Automated Essay Scoring 2.0 dataset show that the proposed model achieves a Quadratic Weighted Kappa (QWK) of 0.86, Root Mean Square Error (RMSE) of 1.41, and Mean Absolute Error (MAE) of 1.12, outperforming CNN-LSTM, BERT-LSTM, and DeBERTa baselines. QWK evaluates ordinal agreement, while RMSE and MAE measure numerical prediction error. Trait-level performance reaches F1-scores of 0.89 for Content and 0.87 for Grammar, indicating strong rubric alignment. The proposed framework improves scoring reliability, interpretability, and consistency with human grading practices. It is suitable for large-scale educational assessment, formative feedback systems, and intelligent tutoring applications, offering a scalable and explainable solution for multi-trait essay evaluation.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_42-Rubric_Relational_Discourse_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ziraai SSI: A Blockchain-Based Self-Sovereign Identity Model for Agricultural Supply Chains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170241</link>
        <id>10.14569/IJACSA.2026.0170241</id>
        <doi>10.14569/IJACSA.2026.0170241</doi>
        <lastModDate>2026-02-28T10:50:48.7030000+00:00</lastModDate>
        
        <creator>Tariq Alar</creator>
        
        <creator>Gnana Bharathy</creator>
        
        <creator>Usha Batra</creator>
        
        <creator>Mukesh Prasad</creator>
        
        <subject>Blockchain; self-sovereign identity (ssi); agricultural supply chain; verifiable credentials; decentralized identity; privacy-preserving authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The agricultural supply chain plays a critical role in ensuring food security and sustainability; however, it continues to face challenges related to data fragmentation, limited transparency, and insufficient trust among participating stakeholders. Existing supply chain systems are primarily based on centralized identity and data management models, which introduce single points of failure, restrict auditability, and raise privacy concerns. More critically, the absence of decentralized and stakeholder-controlled identity mechanisms limits accountability and verifiable governance across agricultural ecosystems. In this work, we present Ziraai SSI, a blockchain-based self-sovereign identity (SSI) prototype designed to support identity-centric governance and trust establishment in agricultural supply chains. The proposed approach integrates blockchain-based trust anchoring with self-sovereign identity principles using decentralized identifiers (DIDs) and verifiable credentials (VCs). Through this integration, stakeholders retain direct control over their digital identities while enabling cryptographically verifiable and privacy-preserving interactions across organizational boundaries. The system architecture follows a multi-layered design that addresses identity management, credential lifecycle handling, authentication, access control, and governance. To move beyond conceptual analysis, the framework is realized as a functional research prototype using standardized SSI technologies and an agent-based architecture. The implemented system supports end-to-end credential workflows, including secure connection establishment, credential issuance, selective disclosure, and proof-based verification, without reliance on centralized identity providers or authentication authorities. Experimental validation conducted in a controlled environment confirms correct execution of identity and credential lifecycles, decentralized authentication, and privacy-preserving verification. The evaluation focuses on functional validation within a controlled prototype environment and does not include large-scale scalability benchmarking. These results demonstrate the feasibility of identity-centric governance mechanisms in agricultural supply chains using standardized SSI technologies.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_41-Ziraai_SSI_A_Blockchain_Based_Self_Sovereign_Identity_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine-Learning–Assisted Probabilistic Wind Assessment at Sechura, Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170240</link>
        <id>10.14569/IJACSA.2026.0170240</id>
        <doi>10.14569/IJACSA.2026.0170240</doi>
        <lastModDate>2026-02-28T10:50:48.6570000+00:00</lastModDate>
        
        <creator>Ubaldo Yancachajlla Tito</creator>
        
        <creator>Celso Antonio Sanga Quiroz</creator>
        
        <creator>Edilberto Velarde Coaquira</creator>
        
        <creator>Germ&#225;n Belizario Quispe</creator>
        
        <subject>Wind resource assessment; probabilistic modeling; machine learning; kernel density estimation; Gaussian mixture model; annual energy production; capacity factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Accurate characterization of wind resources is essential for reliable energy yield estimation and wind farm planning, particularly in regions with limited long-term measurements. This study presents a machine-learning–assisted probabilistic wind assessment in Sechura, Peru, based on multi-year hourly wind data obtained from the NASA POWER database. A representative Typical Meteorological Year (TMY) was constructed to preserve seasonal and diurnal variability while enabling standardized annual energy production (AEP) calculations. Wind speed distributions were modeled using empirical distributions, kernel density estimation (KDE), the Weibull distribution, and Gaussian mixture models (GMM). Statistical evaluation indicates that KDE and GMM reduce the annual RMSE by more than 50% compared to the Weibull model, achieving coefficients of determination above 0.98. Annual energy production is estimated at approximately 1.88 GWh, with differences below 0.3% among probabilistic models. The corresponding capacity factor is approximately 0.25 for a utility-scale wind turbine. The results demonstrate that advanced probabilistic models substantially improve wind speed representation while having a limited impact on integrated annual energy estimates, highlighting the importance of model selection for variability and seasonal analysis rather than for annual yield estimation.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_40-Machine_Learning_Assisted_Probabilistic_Wind_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectiveness of Capsule Networks in Detecting Deepfakes Instead of Traditional CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170239</link>
        <id>10.14569/IJACSA.2026.0170239</id>
        <doi>10.14569/IJACSA.2026.0170239</doi>
        <lastModDate>2026-02-28T10:50:48.6270000+00:00</lastModDate>
        
        <creator>M. C. Weerawardana</creator>
        
        <creator>T. G. I. Fernando</creator>
        
        <subject>Deepfake; deep learning; capsule network; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>As artificial intelligence has advanced, computer-generated fake content has become increasingly prevalent. A deepfake is fake content generated using deep learning-based technologies, and deepfake images, videos, and voices have spread rapidly. Distinguishing real from fake content is very hard for the naked eye, making reliable detection essential. Existing deepfake detection methods have achieved success, but still face limitations in keeping pace with the rapid evolution of deepfake generation techniques. In particular, CNN-based approaches may require a large number of parameters and may not fully capture spatial hierarchies. In this research, we investigate whether Capsule Networks can provide an effective and parameter-efficient alternative for deepfake detection. We propose four Capsule Network architectures that differ in size, complexity, and configuration. A comparative analysis is conducted against state-of-the-art Capsule and CNN models across various datasets, using AUC and the number of parameters as evaluation criteria. For transparency, we note that some baseline CNN and Capsule models follow the training protocols and datasets reported in their original studies, which may differ across implementations. Our experimental results show that the proposed Capsule Network models achieved over 98% AUC on the evaluated datasets while using fewer parameters than several CNN-based models. These findings suggest that Capsule Networks exhibit greater efficacy in detecting deepfakes compared to traditional CNN-based methods and represent a promising direction for future research.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_39-Effectiveness_of_Capsule_Networks_in_Detecting_Deepfakes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Based Capsule Network with Vision Transformer for Underwater Hyperspectral Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170238</link>
        <id>10.14569/IJACSA.2026.0170238</id>
        <doi>10.14569/IJACSA.2026.0170238</doi>
        <lastModDate>2026-02-28T10:50:48.5970000+00:00</lastModDate>
        
        <creator>Thiyagarajan B</creator>
        
        <creator>Thenmozhi M</creator>
        
        <subject>Underwater imaging; hyperspectral; U-Net; Vision Transformer; CapsNet; classification; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Underwater investigation and research remain challenging due to various underwater distortion factors, scattering, and wavelength-dependent absorption. Hyperspectral imaging captures detailed information about each underwater object through its spectral reflectance across 100 to 300 bands. In underwater applications, the resulting 3D hyperspectral cube demands intensive processing to achieve high-accuracy classification. Hence, this study proposes a hybrid deep learning framework covering segmentation, feature extraction, and classification. A Channel Attention Module (CAM)-based U-Net architecture performs segmentation to obtain the spectral-spatial characteristics of the Region of Interest (ROI). CapsNet-based feature extraction captures the features of the various bands, supporting class-wise object classification through pose-based relationships. A Vision Transformer (ViT) classifier operating on capsule vector tokens carries out multi-class classification by computing global attention among the feature vectors and relationship-based long-range ROI feature data. The proposed model attains 95.3% classification accuracy with a maximum IoU of 0.88 and 95.2% segmentation accuracy, achieving an AUC of 0.99 across the 8 substrate classes of the underwater HSI dataset.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_38-Attention_Based_Capsule_Network_with_Vision_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Neural Network Architectures for Classifying Depressive Content in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170237</link>
        <id>10.14569/IJACSA.2026.0170237</id>
        <doi>10.14569/IJACSA.2026.0170237</doi>
        <lastModDate>2026-02-28T10:50:48.5630000+00:00</lastModDate>
        
        <creator>Yntymak Abdrazakh</creator>
        
        <creator>Rita Ismailova</creator>
        
        <creator>Nurseit Zhunissov</creator>
        
        <creator>Arypzhan Aben</creator>
        
        <creator>Anuarbek Amanov</creator>
        
        <creator>Aigerim Baimakhanova</creator>
        
        <subject>Depression detection; social media text; natural language processing; cross-platform evaluation; robustness; statistical significance testing; transformer-based models; CNN; LSTM; BERT; MentalBERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Depression-related language on social media provides measurable signals for population-level mental-health research, yet model selection remains sensitive to evaluation protocol, domain shift, class imbalance, and computational constraints. This study benchmarks CNN, LSTM, and transformer encoders (BERT, RoBERTa, DistilBERT, and MentalBERT) for binary depression-indicative versus control classification on a unified corpus of 19,800 English posts/comments aggregated from three platforms (Reddit, Twitter, and Facebook) under a consistent preprocessing pipeline. We report two complementary evaluation protocols: (1) a fixed-split single-run baseline for a comparable snapshot, and (2) a five-seed repeated-run protocol with statistical testing (effect sizes and multiple-comparison correction) to quantify variability and reduce sensitivity to initialization effects. Under repeated-run reporting, MentalBERT achieves the best overall performance (F1 = 0.918 &#177; 0.005; AUC = 0.962 &#177; 0.002), while CNN/LSTM baselines show lower robustness under cross-platform transfer. Cross-domain experiments reveal a consistent performance drop relative to in-domain evaluation, confirming non-trivial platform shift and motivating robustness-aware reporting for deployment-oriented settings. In addition to predictive metrics, we report training time, inference latency, and derived throughput to support practical model selection for use cases such as moderation pipelines and screening/triage dashboards.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_37-Comparative_Analysis_of_Neural_Network_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Assistance for Prosopagnosia Patients Using Landmark-Based Identity and Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170236</link>
        <id>10.14569/IJACSA.2026.0170236</id>
        <doi>10.14569/IJACSA.2026.0170236</doi>
        <lastModDate>2026-02-28T10:50:48.5330000+00:00</lastModDate>
        
        <creator>Bhavana Nagaraj</creator>
        
        <creator>Rajanna Muniswamy</creator>
        
        <subject>Cognitive assistance system; EfficientNet-B3; emotion recognition; face identity recognition; prosopagnosia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Prosopagnosia is a neurological condition that significantly affects the social interaction and quality of life of individuals. Existing assistive systems mainly focus on either face identity recognition or face emotion recognition, limiting their effectiveness in cognitive assistive scenarios. To overcome these limitations, this study presents an integrated framework that jointly addresses identity and emotion recognition to efficiently support prosopagnosia patients. The proposed system includes two separate modules: a face identity recognition module and a face emotion recognition module. The system detects and aligns faces using Multi-Task Cascaded Convolutional Networks (MTCNN) with five-point landmark alignment. Face identity recognition is performed using an EfficientNet-B3 backbone to extract 512-dimensional facial embeddings, which are matched against a Structured Query Language (SQL) database using cosine similarity. For emotion recognition, facial landmarks are detected using the Dlib library and structured as a graph for High-order Graph Attention Network (HoGAN)-based relational interaction detection. The proposed system is trained with a joint loss function to efficiently provide real-time assistive feedback. The system achieves high recognition performance, with AUC values of 99.8% and 99.5% for the face identity recognition and face emotion recognition modules, respectively.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_36-Cognitive_Assistance_for_Prosopagnosia_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radiomics Feature Profiling of Brodmann Regions in Structural MRI: A Machine Learning Study of Intensive Verbal Memorisation (Huffaz vs Controls)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170235</link>
        <id>10.14569/IJACSA.2026.0170235</id>
        <doi>10.14569/IJACSA.2026.0170235</doi>
        <lastModDate>2026-02-28T10:50:48.5000000+00:00</lastModDate>
        
        <creator>Mohd Zulfaezal Che Azemin</creator>
        
        <creator>Iqbal Jamaludin</creator>
        
        <creator>Abdul Halim Sapuan</creator>
        
        <creator>Mohd Izzuddin Mohd Tamrin</creator>
        
        <subject>Radiomics; PyRadiomics; Magnetic Resonance Imaging (MRI); voxel-based morphometry (VBM); neuroplasticity; Huffaz; Brodmann-areas; Volume of Interest (VOI); texture analysis; machine learning; nested cross-validation; ROC-AUC; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Memorisation-based cognitive training has been hypothesized to relate to experience-dependent brain plasticity; however, quantitative evidence at the regional level remains limited. We hypothesized that radiomics descriptors extracted from Brodmann-area volume-of-interest (VOI) regions in pre-processed structural MRI would contain sufficient information to discriminate Quran memorizers (Huffaz) from non-memorizers (controls), and we evaluated this hypothesis using a fully nested validation framework. T1-weighted MRI volumes were pre-processed using a voxel-based morphometry pipeline, and VOIs were defined using Brodmann-area masks. Using PyRadiomics, first-order and texture features were extracted per VOI and combined into a feature matrix for classification. Models were evaluated using repeated nested cross-validation (outer 5-fold &#215; 10 repeats; inner 5-fold for tuning), with ROC-AUC as the primary metric. Random Forest achieved the strongest discrimination (AUC = 0.6704 &#177; 0.1792), followed by Logistic Regression (AUC = 0.5948 &#177; 0.2153), while SVM with an RBF kernel underperformed (AUC = 0.4356 &#177; 0.1927). One-sided testing against chance (AUC = 0.5) indicated above-chance performance for Random Forest and Logistic Regression, but not for SVM-RBF. These results should be interpreted as exploratory because the cohort is small (n = 47) and no independent external validation cohort was available. Practically, the observed effect sizes suggest that VOI-based radiomics may capture detectable group-associated imaging signatures under the current preprocessing and VOI assumptions, motivating validation on larger cohorts, sensitivity analysis (e.g., discretization/normalization settings), and assessment of probability calibration.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_35-Radiomics_Feature_Profiling_of_Brodmann_Regions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Offline Estimation Method for Hip and Knee Joint Angles of Lower Limbs Based on Quaternion and DTW Time Alignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170234</link>
        <id>10.14569/IJACSA.2026.0170234</id>
        <doi>10.14569/IJACSA.2026.0170234</doi>
        <lastModDate>2026-02-28T10:50:48.4870000+00:00</lastModDate>
        
        <creator>Yuling Zhang</creator>
        
        <creator>Yuejian Hua</creator>
        
        <creator>Shengli Luo</creator>
        
        <creator>Xiaolong Shu</creator>
        
        <creator>Hongyan Tang</creator>
        
        <creator>Hongliu Yu</creator>
        
        <subject>Inertial measurement unit; dynamic time warping; lower-limb joint angle; gait analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Gait analysis is crucial for disease diagnosis and rehabilitation assessment; however, traditional optical motion capture systems are costly and limited to fixed setups. This study presents an offline estimation method for hip and knee joint angles of the lower limbs based on quaternion fusion and DTW time alignment. The method uses quaternion fusion of inertial measurement unit (IMU) orientations and employs Sakoe-Chiba constrained Dynamic Time Warping (DTW) to eliminate a 42 ms initial offset and a 150 ms cumulative drift. Combining N-pose calibration with heel velocity event detection allows for the offline calculation of hip and knee joint angles. Data were collected from eight healthy participants during flat walking and stair ascent/descent scenarios, with the Noraxon Ultium Motion system serving as the reference. Results show that DTW reduces the average root mean square error (RMSE) by 29.1%; specifically, the RMSE for hip flexion reaches 4.1&#176;, while the overall knee joint RMSE is 10.2&#176;, with correlation coefficients ≥0.87. Hip joint measurements consistently met the clinically acceptable threshold of &lt;10&#176; across all scenarios; knee joint measurements satisfied this threshold during flat walking (RMSE = 7.8&#176;) but exceeded it during stair negotiation (RMSE = 11.4&#176;), reflecting the increased biomechanical complexity of multi-planar knee motion during stair activities. This study provides a low-cost, high-precision solution for the post-hoc offline estimation of hip and knee joint angles. The proposed method is specifically designed for retrospective gait data analysis rather than real-time feedback, offering a scalable strategy for early screening of gait abnormalities and clinical assessment in home and community rehabilitation settings.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_34-An_Offline_Estimation_Method_for_Hip_and_Knee_Joint_Angles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid BERT–BiLSTM Architecture for Enhanced Cyber Threat Intelligence Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170233</link>
        <id>10.14569/IJACSA.2026.0170233</id>
        <doi>10.14569/IJACSA.2026.0170233</doi>
        <lastModDate>2026-02-28T10:50:48.4530000+00:00</lastModDate>
        
        <creator>Syarif Hidayatulloh</creator>
        
        <creator>Salman Topiq</creator>
        
        <creator>Ifani Hariyanti</creator>
        
        <creator>Dwi Sandini</creator>
        
        <subject>Cyber Threat Intelligence; hybrid deep learning; BERT–BiLSTM architecture; text classification; sequential modelling; cybersecurity natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Cyber Threat Intelligence (CTI) plays a crucial role in supporting proactive cybersecurity defence by offering insights into adversarial behaviours and attack tactics. However, CTI data are mainly presented in unstructured natural language, characterised by dense technical terminology, implicit attack semantics, and sequential descriptions of multi-stage threat activities. While transformer-based language models such as BERT have shown strong contextual representation abilities, they are naturally limited in explicitly modelling long-range sequential dependencies that often occur in CTI narratives. On the other hand, recurrent neural networks like BiLSTM effectively capture temporal dependencies, but lack deep contextual understanding. This study proposes a hybrid BERT–BiLSTM architecture that combines the contextual semantic strengths of transformers with the sequential learning abilities of bidirectional recurrent networks for improved CTI text classification. In the proposed framework, BERT acts as a feature extractor to produce contextualised token representations, which are then processed by a BiLSTM layer to model the progression of threats before final classification. A unified experimental setup is used, employing a publicly available CTI dataset, with consistent preprocessing, training strategies, and evaluation metrics to ensure fair assessment. Experimental results show that the proposed hybrid model consistently surpasses standalone BERT and BiLSTM baselines across multiple performance metrics, including accuracy and macro F1-score, with significant improvements especially in minority and semantically ambiguous threat categories. Further analysis indicates that the hybrid architecture effectively reduces common misclassification patterns caused by overlapping attack stages and implicit indicators. These findings demonstrate the effectiveness of combining contextual and sequential modelling approaches for CTI analysis. The proposed BERT–BiLSTM framework provides a robust and interpretable solution for automated CTI classification and offers practical insights for deploying hybrid deep learning architectures in real-world cybersecurity intelligence systems.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_33-Hybrid_BERT_BiLSTM_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analytical Study of Data Augmentation Across Audio Representations for Infant Cry Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170232</link>
        <id>10.14569/IJACSA.2026.0170232</id>
        <doi>10.14569/IJACSA.2026.0170232</doi>
        <lastModDate>2026-02-28T10:50:48.4230000+00:00</lastModDate>
        
        <creator>Meriyem Ghanjaoui</creator>
        
        <creator>Abdelaziz Daaif</creator>
        
        <creator>Abdelmajid Bousselham</creator>
        
        <creator>Sajid Rahim</creator>
        
        <creator>Ahmed Bouatmane</creator>
        
        <creator>Mohamed Elyoussfi</creator>
        
        <subject>CNN; waveform; spectrogram; deep learning; baby cries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Several multidisciplinary studies consider an infant’s cry a valuable source of information, particularly for parents, caregivers, and medical professionals. From a signal processing viewpoint, infant cries can be represented either in the time domain (one-dimensional or 1D raw waveform) or in the time–frequency domain (two-dimensional or 2D spectrogram-based representations). However, the impact of these representations on classification performance, particularly under constrained and imbalanced dataset conditions, remains insufficiently explored. This study presents a comparative analysis of 1D and 2D convolutional neural networks applied to waveform and spectrogram representations of infant cries. Due to the significant class imbalance of the dataset, we employed data augmentation techniques. Experimental results show that the 1D CNN achieved 95% training accuracy and 91% validation accuracy, indicating a relatively small generalization gap. In contrast, the 2D CNN reached 98% training accuracy but remained below 91% on the validation set, revealing a larger gap and suggesting potential overfitting to the augmented data.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_32-An_Analytical_Study_of_Data_Augmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity Awareness Among Undergraduate Students in Saudi Arabia: A Quantitative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170231</link>
        <id>10.14569/IJACSA.2026.0170231</id>
        <doi>10.14569/IJACSA.2026.0170231</doi>
        <lastModDate>2026-02-28T10:50:48.3930000+00:00</lastModDate>
        
        <creator>Turky A. Saderaldin</creator>
        
        <creator>Ali Abuabid</creator>
        
        <subject>Cybersecurity awareness; undergraduate students; smartphone security; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Cybersecurity awareness has become increasingly important as digital services expand rapidly in Saudi Arabia. This study presents a quantitative assessment of cybersecurity awareness among undergraduate students in Riyadh and Jeddah. A structured questionnaire was administered to 177 students to evaluate awareness across three dimensions: internet usage, information security practices, and social media and smartphone security. Statistical analyses, including descriptive statistics, principal component analysis, regression, and cluster analysis, were applied to identify patterns and influencing factors. The results indicate a moderate overall level of cybersecurity awareness, with stronger adoption of passive security measures such as antivirus software and firewalls, and weaker engagement with proactive practices, including password management, VPN usage, and two-factor authentication. A key finding is the identification of a smartphone-related security gap: students who rely exclusively on smartphones show significantly lower awareness. The study highlights the influence of device usage and institutional context on cybersecurity behavior and provides recommendations to enhance cybersecurity education and awareness initiatives in higher education.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_31-Cybersecurity_Awareness_Among_Undergraduate_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Medical Image Reconstruction Using a Self-Evolving Encoder–Decoder and Adaptive Convolutional Power Scaling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170230</link>
        <id>10.14569/IJACSA.2026.0170230</id>
        <doi>10.14569/IJACSA.2026.0170230</doi>
        <lastModDate>2026-02-28T10:50:48.3600000+00:00</lastModDate>
        
        <creator>Dhanusha P B</creator>
        
        <creator>J. Bennilo Fernandes</creator>
        
        <creator>A. Muthukumar</creator>
        
        <creator>A. Lakshmi</creator>
        
        <subject>Dynamic encoder and decoder; power flex model layer; high resolution images; weight initialization; adaptive convolutional power scaling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Robust medical image reconstruction is a critical requirement for accurate diagnosis and clinical decision-making, particularly when images are affected by degradation, noise, or low resolution. Conventional encoder–decoder-based reconstruction methods compress input images into low-dimensional representations and subsequently decode them into high-resolution outputs; however, such approaches often suffer from artifacts and loss of fine anatomical details under severe degradation. To address these limitations, this work proposes a robust medical image reconstruction framework using a self-evolving encoder–decoder and adaptive convolutional power scaling. The proposed super-resolution model incorporates a dynamic encoder and decoder that adaptively evolve during training to capture color contrast, structural similarity, and high-frequency details from medical images. An MLP enhanced with an adaptive power flex layer is embedded within the reconstruction pipeline, enabling learnable power-based feature scaling through weight-wise modulation and initialization. This mechanism improves feature discrimination and stabilizes the reconstruction of subtle anatomical structures. The DRIVE and CHASE_DB1 retinal image datasets are employed for experimental validation, with appropriate preprocessing applied before training and testing. The selected images are processed through the proposed super-resolution model, and performance is quantitatively evaluated using PSNR, SSIM, sensitivity, and specificity metrics. Experimental results demonstrate that the proposed method achieves significant improvements in reconstruction quality and robustness compared to existing approaches, yielding enhanced perceptual quality and structural fidelity in reconstructed medical images. These findings indicate that the proposed self-evolving encoder–decoder with adaptive convolutional power scaling is well-suited for reliable medical image reconstruction applications.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_30-Robust_Medical_Image_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Composite Approach Extracting Software Quality Attributes Assessing Operational Profiles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170229</link>
        <id>10.14569/IJACSA.2026.0170229</id>
        <doi>10.14569/IJACSA.2026.0170229</doi>
        <lastModDate>2026-02-28T10:50:48.3300000+00:00</lastModDate>
        
        <creator>Sudhakar Kambhampati</creator>
        
        <creator>G Vamsi Krishna</creator>
        
        <subject>Software reliability; operational profile evaluation; composite reliability modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In modern software systems, software reliability is a defining quality characteristic, particularly in on-demand and critical settings where usage patterns evolve over time. Traditional black-box reliability models do not sufficiently capture these dynamics, especially in component-based and reusable software architectures. To overcome these limitations, the present study introduces the Level-Wise Composite Software Reliability method based on Operational Profile Evaluation (LCSR-OPE). The proposed model combines package-level software measurements, operational profile modelling, and probabilistic analysis to estimate and refine software dependability in real usage contexts. User profiles are constructed by clustering structural and complexity metrics, while operational behavior is modeled with probability density functions to focus testing and fault detection. In addition, a machine-learning-based fault classification model narrows down defect identification and reliability testing. Experiments on NASA software fault datasets establish that the proposed method achieves higher fault-detection and reliability-estimation accuracy than traditional reliability models. The findings confirm the effectiveness of operational profiles and level-wise composite analysis in software reliability engineering.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_29-A_Composite_Approach_Extracting_Software_Quality_Attributes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Based Detection of Prostate Cancer in MRI Using Biopsy-Confirmed Ground Truth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170228</link>
        <id>10.14569/IJACSA.2026.0170228</id>
        <doi>10.14569/IJACSA.2026.0170228</doi>
        <lastModDate>2026-02-28T10:50:48.2970000+00:00</lastModDate>
        
        <creator>Samana Jafri</creator>
        
        <creator>Gajanan Birajdar</creator>
        
        <subject>Prostate lesion; 3D U-Net; MRI; biopsy confirmed lesion masks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Prostate cancer is one of the most common malignancies in men, and accurate lesion segmentation in magnetic resonance imaging (MRI) is essential for diagnosis, treatment planning, and disease monitoring. Manual delineation by radiologists is time-consuming and subject to interobserver variability. This study presents an automated, deep learning-based framework for 3D prostate lesion detection using modified U-Net architectures, guided by pathology-informed ground truth. The proposed approach leverages biopsy-verified lesion masks derived from the PROSTATEx and PROSTATEx2 datasets, ensuring biologically validated reference labels. Method 1 uses dice loss optimization to train a simplified 3D U-Net on full volume MRI data, while Method 2 uses a patch-based 3D U-Net with advanced preprocessing, extensive data augmentation, and a dice focal loss to reduce class imbalance and improve lesion localization. The quantitative results show that the patch-based network achieves superior segmentation, with a Dice similarity coefficient (DSC) of 92.3% and an intersection over union (IoU) of 87.8%. Compared with models trained only on radiologist annotations, pathology-informed learning improves lesion delineation accuracy, highlighting its potential for strong clinical translation in MRI-guided prostate cancer detection.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_28-Deep_Learning_Based_Detection_of_Prostate_Cancer_in_MRI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Unified Benchmarking Framework for Offline Handwritten Signature Verification Using Deep Learning Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170227</link>
        <id>10.14569/IJACSA.2026.0170227</id>
        <doi>10.14569/IJACSA.2026.0170227</doi>
        <lastModDate>2026-02-28T10:50:48.2500000+00:00</lastModDate>
        
        <creator>Eissa Alreshidi</creator>
        
        <subject>Offline handwritten signature verification; Siamese networks; contrastive-learning; deep learning architectures; ResNet; MobileNetV2; EfficientNet; vision transformers; hybrid CNN-transformer models; biometric authentication; forgery detection; embedding separability; writer-independent verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Offline handwritten signature verification (OSV) remains a challenging biometric task owing to the subtle variability of genuine signatures and the sophistication of skilled forgeries. This study introduces a unified benchmarking framework for evaluating eight deep learning architectures—CNN Shallow, CNN Deep, ResNet 18, ResNet 34, MobileNetV2, EfficientNet B0, ViT Tiny, and a CNN–Transformer Hybrid—within a writer-independent Siamese contrastive-learning paradigm. The framework standardizes preprocessing, balanced pair generation, NVIDIA A100 GPU training, and a comprehensive evaluation suite that includes ROC and Precision-Recall curves, Equal Error Rate (EER), calibration analysis, threshold-sensitivity metrics, and embedding visualizations using PCA, t-SNE, and UMAP. The experiments reveal a clear performance stratification: six architectures achieve perfect verification performance (Accuracy = 1.0, ROC AUC = 1.0, PR AUC = 1.0, EER = 0.0), supported by consistently well-separated embedding manifolds and highly stable calibration behavior. In contrast, MobileNetV2 and EfficientNet B0 exhibit elevated EER values and overlapping embeddings, underscoring the limitations of lightweight and compound-scaled models in capturing fine-grained stroke morphology. The proposed framework establishes a transparent and extensible foundation for future research, enabling fair cross-model comparisons and guiding the development of robust and deployment-ready biometric verification systems. In addition, this study provides the first fully controlled, architecture-agnostic comparison of CNNs, residual networks, lightweight mobile models, and transformer-based architectures under identical, writer-independent conditions. By eliminating variability in preprocessing, pair generation, and training configuration, the framework isolates the true effect of architectural design on verification performance. The findings highlight the importance of embedding separability, calibration stability, and threshold robustness—factors often overlooked in prior OSV research but essential for real-world deployment.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_27-A_Unified_Benchmarking_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Robotic Waste Sorting for Techno-Economic Assessment in Urban Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170226</link>
        <id>10.14569/IJACSA.2026.0170226</id>
        <doi>10.14569/IJACSA.2026.0170226</doi>
        <lastModDate>2026-02-28T10:50:48.2370000+00:00</lastModDate>
        
        <creator>Ida Nurhaida</creator>
        
        <creator>Mohammad Nasucha</creator>
        
        <creator>Hari Nugraha</creator>
        
        <subject>Smart waste sorting; AI vision; YOLOv12; computer vision; material recovery facility; techno-economic assessment; urban Indonesia; circular economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Urban centers in Indonesia are facing increasing pressure in managing municipal solid waste as a result of rapid population growth, rising labor costs, and stricter demands for high-purity recyclable materials. Manual sorting at Material Recovery Facilities has become progressively less efficient and economically burdensome under these conditions. This study presents an artificial intelligence-driven robotic waste sorting system designed and evaluated under real operational conditions in Jakarta and South Tangerang. The system integrates YOLO-based object detection, vision-guided robotic manipulation, real-time processing hardware, a multi-axis gantry system with stepper motors, and a custom conveyor mechanism to deliver waste items to the sorting cell. Unlike previous studies that mainly focus on algorithmic accuracy or laboratory-scale validation, this work combines real-world technical performance assessment with a localized techno-economic analysis. Experimental results show an average sorting accuracy of 90%, a material purity of 95.1%, and a throughput of 50 items per minute, outperforming typical manual sorting performance. An economic evaluation based on local wage levels, electricity tariffs, and recyclable market prices indicates a payback period of 4.3 to 4.9 years. The main contributions of this study lie in integrating AI vision and robotic sorting into unstructured urban waste environments, in empirical validation under Indonesian operating conditions, and in demonstrating economic feasibility for emerging economies. Although the case study focuses on Jakarta and South Tangerang, the findings are relevant for metropolitan areas across the Global South seeking more efficient and sustainable waste management solutions.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_26-AI_Driven_Robotic_Waste_Sorting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping Research on Artificial Intelligence in Customer Experience: A Bibliometric Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170225</link>
        <id>10.14569/IJACSA.2026.0170225</id>
        <doi>10.14569/IJACSA.2026.0170225</doi>
        <lastModDate>2026-02-28T10:50:48.2030000+00:00</lastModDate>
        
        <creator>Firdaws Hayoun</creator>
        
        <creator>Brahim Ouabouch</creator>
        
        <creator>Youssef Aatif</creator>
        
        <creator>Taoufiq Yahyaoui</creator>
        
        <creator>Fatima Zahra El Arbaoui</creator>
        
        <subject>Artificial intelligence; customer experience; marketing; personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The purpose of this article is to conduct a comprehensive bibliometric analysis of research on artificial intelligence and customer experience. The study data was extracted from Scopus and Web of Science, focusing on articles published between 2010 and 2025. VOSviewer and Biblioshiny software were used to map the intellectual landscape of the interaction between artificial intelligence and customer experience, identifying growth in scientific output, geographical distribution and collaboration, influential publications, leading authors, word co-occurrence, leading journals, and thematic trends. The study reveals that research in this area only began in 2017 and has grown steadily since, with most publications dating from 2025. Furthermore, the findings reveal that research is concentrated in a small number of leading countries with more advanced technological infrastructure. This field of research is gradually evolving towards greater specialization and increased use of AI to improve customer experience in terms of personalization, decision-making, and service automation.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_25-Mapping_Research_on_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating Computation Independent Model Elicitation in MDA using Task-Oriented Dialogue with In-Context Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170224</link>
        <id>10.14569/IJACSA.2026.0170224</id>
        <doi>10.14569/IJACSA.2026.0170224</doi>
        <lastModDate>2026-02-28T10:50:48.1730000+00:00</lastModDate>
        
        <creator>Mohamed EL Ayadi</creator>
        
        <creator>Yassine Rhazali</creator>
        
        <creator>Mohammed Lahmer</creator>
        
        <subject>MDA; CIM; Task-Oriented Dialogue (TOD); In-Context Learning (ICL); Large Language Models (LLMs); requirements elicitation; Domain-Specific Language (DSL); Artificial Intelligence (AI); Natural Language Understanding (NLU); BPMN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The Computation Independent Model (CIM) is a cornerstone of the Object Management Group&#39;s (OMG) Model-Driven Architecture (MDA), capturing business requirements and domain knowledge independent of specific technologies. However, the elicitation of CIM requirements is often a manual, time-consuming, and error-prone process, susceptible to ambiguities inherent in natural language. Traditional Natural Language Understanding (NLU) approaches, particularly intent-based systems, exhibit limitations in scalability, contextual understanding, and handling the nuanced, evolving nature of complex requirements. This study proposes a novel approach that integrates Task-Oriented Dialogue (TOD) systems with the In-Context Learning (ICL) capabilities of Large Language Models (LLMs) to automate and enhance CIM requirements elicitation. The proposed framework features a conversational agent that guides stakeholders through structured dialogue flows, translating their natural language inputs into a formal CIM-Domain Specific Language (CIM-DSL). These DSL commands are then transformed into CIM artifacts, such as Business Process Model and Notation (BPMN) diagrams and Unified Modeling Language (UML) use cases. The approach emphasizes quality assurance through interactive validation, consistency checks, and strategies to mitigate LLM limitations. We anticipate this method will significantly improve the accuracy, completeness, and efficiency of CIM construction, thereby strengthening the foundation of the MDA lifecycle.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_24-Automating_Computation_Independent_Model_Elicitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Validation of Learnability Factors in Web-Based AR: Insights from the LEMARK–Hafsa Model Grounded in Kolb’s Experiential Learning Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170223</link>
        <id>10.14569/IJACSA.2026.0170223</id>
        <doi>10.14569/IJACSA.2026.0170223</doi>
        <lastModDate>2026-02-28T10:50:48.1430000+00:00</lastModDate>
        
        <creator>Sayera Hafsa</creator>
        
        <creator>Mazlina Abdul Majid</creator>
        
        <creator>Shafiq Ur Rehman</creator>
        
        <subject>Experiential learning theory; educational technology; predictive validity; structural validity; AR-based learning; factor validation; LEMARK–Hafsa model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Augmented Reality in higher education is transforming learning by providing immersive environments that enhance cognitive and motivational engagement. Despite growing interest, few empirically validated learnability factors exist to support future instructional models, such as the LEMARK-Hafsa model. This research bridges the identified gap by statistically validating seven key factors—Motivation, Confidence, Enhanced Focus, Visualization of Invisible Concepts, Satisfaction, Better Lab Experience, and Better Learning—within the LEMARK-Hafsa model grounded in Kolb’s Experiential Learning Theory. Data collected from 291 participants underwent expert validation, data cleaning, exploratory factor analysis, and regression analysis. The exploratory factor analysis confirmed structural validity, with factor loadings ranging from 0.430 to 0.822. The Kaiser-Meyer-Olkin value was 0.769, and Bartlett’s test was significant (p &lt; 0.001), indicating that the data were suitable for factor analysis and supported multiple distinct factors. The regression results showed that Visualization of Invisible Concepts had a statistically significant positive effect on learning outcomes (normalized regression weight = 0.155, p = 0.031), while Enhanced Focus (p = 0.091) and Satisfaction (p = 0.089) were close to significance. Motivation, Confidence, and Better Lab Experience also showed positive, though not statistically significant, effects that were consistent with theoretical expectations. These findings provide empirical support for the statistical adequacy of the proposed LEMARK–Hafsa factors, establishing a validated measurement basis for subsequent theoretical integration and model-level investigation in research on web-based Augmented Reality learning environments in higher education.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_23-Empirical_Validation_of_Learnability_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Lightweight Explainable Multilayer Adaptive RNN-Based Intrusion Detection Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170222</link>
        <id>10.14569/IJACSA.2026.0170222</id>
        <doi>10.14569/IJACSA.2026.0170222</doi>
        <lastModDate>2026-02-28T10:50:48.0970000+00:00</lastModDate>
        
        <creator>Nidhi Srivastav</creator>
        
        <creator>Rajiv Singh</creator>
        
        <subject>IDS; network security; adaptive techniques; RNN; cybersecurity; explainable AI; UNR-IDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>A rapid increase in cyberattacks has accompanied expanding digitization, creating an urgent and critical need for robust intrusion detection systems (IDS) that can identify malicious activity within network traffic. The present work proposes a novel, explainable, multilayer, lightweight adaptive IDS based on a Recurrent Neural Network (RNN). The purpose of the proposed IDS is to improve threat detection capabilities, especially for low-frequency, high-severity attacks. Its performance is evaluated on the UNR-IDD dataset, with network traffic classified into normal and attack categories. Two separate IDS models are developed: Model A detects attacks on the basis of attack frequency, and Model B detects threats based on attack severity. Through the layered approach, an overall detection accuracy of 95.7% is achieved by Model A and 97.5% by Model B. The present work highlights that the proposed IDS shows a remarkable improvement in detecting less frequent but severe attacks compared with existing IDS. A comparative analysis against Machine Learning models such as LR, Na&#239;ve Bayes, CatBoost, Random Forest, and Multilayer Perceptron shows that the RNN-based IDS outperforms them all. For explainability (XAI), the SHAP method is used to better interpret the model’s decisions: it helps identify the network traffic features that influence predictions, detect potential biases, and enable researchers and practitioners to validate model behaviour and establish trust in the system’s outputs.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_22-A_Novel_Lightweight_Explainable_Multilayer_Adaptive_RNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating ABM and GIS for Flood Evacuation Planning: A Systematic Review and Future Direction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170221</link>
        <id>10.14569/IJACSA.2026.0170221</id>
        <doi>10.14569/IJACSA.2026.0170221</doi>
        <lastModDate>2026-02-28T10:50:48.0630000+00:00</lastModDate>
        
        <creator>Kabir Musa Ibrahim</creator>
        
        <creator>Abubakar Ahmad</creator>
        
        <creator>Noor Akma Abu Bakar</creator>
        
        <creator>Mazlina Abdul Majid</creator>
        
        <creator>Azamuddin Rahman</creator>
        
        <subject>Agent-based modeling; GIS integration; flood simulation; spatial modeling; evacuation dynamics; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This systematic review examines the integration of agent-based modeling (ABM) and Geographic Information Systems (GIS) in flood evacuation planning from 2015 through early 2025. The review aims to systematically evaluate how ABM and GIS have been integrated in flood evacuation research, identify methodological gaps, and propose a structured framework to guide future model development. Using the PRISMA 2020 guidelines, 67 studies were selected and analyzed to uncover methodological trends, empirical gaps, and policy relevance in this growing research domain. The analysis reveals a dominant reliance on mesoscopic modeling (43%), limited real-time data integration (17.9%), weak empirical validation practices (16.4%), and minimal machine learning adoption (4.5%). To structure the evolving landscape, a conceptual integration framework is proposed to classify studies by modeling scale, data fidelity, and validation strategy. This framework highlights a gradual shift toward behaviorally realistic, spatially precise, and policy-relevant evacuation models. Persistent challenges include limited validation practices, weak real-time responsiveness, and insufficient policy integration. Five research priorities are identified: AI integration, real-time enhancement, multi-hazard modeling, empirical grounding, and participatory policy co-design. This review offers actionable insights for advancing robust, scalable, and operational ABM-GIS systems in disaster risk reduction.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_21-Integrating_ABM_and_GIS_for_Flood_Evacuation_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Evaluation of Multivariate Temporal Convolutional Networks with Global Market Indicators for Forecasting the Indonesian LQ45 Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170220</link>
        <id>10.14569/IJACSA.2026.0170220</id>
        <doi>10.14569/IJACSA.2026.0170220</doi>
        <lastModDate>2026-02-28T10:50:48.0330000+00:00</lastModDate>
        
        <creator>Yohanes Marakub</creator>
        
        <creator>Muhammad Zarlis</creator>
        
        <subject>Deep learning; financial forecasting; multivariate forecasting; TCN; time series analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study develops a multivariate Temporal Convolutional Network (TCN) framework to forecast the LQ45 stock index using daily time-series data from January 2015 to January 2025. The objective is to examine whether incorporating global market indicators, namely the Volatility Index (VIX), Brent crude oil price, and the Effective Federal Funds Rate (EFFR), alongside lagged LQ45 values, improves forecasting performance in Indonesia’s equity market. Two comparison models are considered: an Autoregressive Integrated Moving Average with Exogenous Variables (ARIMAX) model as a statistical baseline and a univariate TCN as a deep learning benchmark. Data preprocessing includes normalization and a seven-day sliding-window framing. Forecasting accuracy is evaluated using Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE), complemented by validation and interpretability analyses. The results show that while the multivariate TCN captures interactions among multiple temporal features, it does not provide measurable performance advantages over ARIMAX or the univariate TCN. The lagged LQ45 series exhibits the strongest predictive contribution, followed by EFFR with a stable secondary effect, whereas Brent oil prices and VIX display weak and unstable influences. These findings suggest that, in short-horizon forecasting of relatively stable emerging markets, integrating exogenous variables yields no performance improvement and can increase forecast error when their predictive structure is unstable, highlighting the trade-off between feature complexity and model robustness.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_20-An_Empirical_Evaluation_of_Multivariate_Temporal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lossless Medical Image Compression Framework Using Logic Minimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170219</link>
        <id>10.14569/IJACSA.2026.0170219</id>
        <doi>10.14569/IJACSA.2026.0170219</doi>
        <lastModDate>2026-02-28T10:50:48.0000000+00:00</lastModDate>
        
        <creator>Swathi Pai M</creator>
        
        <creator>Jacob Augustine</creator>
        
        <creator>Pamela Vinitha Eric</creator>
        
        <subject>Medical image compression; lossless compression; Quadtree; logic minimization; bit plane encoding; XOR operation; gray coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Medical image compression is an active research area owing to the growing volume of medical image data in digital form. In previous related work, a method for lossless compression of medical images using logic minimization was proposed: the grayscale image is split into bit planes, and each bit plane is divided into blocks of fixed size, e.g., 8 &#215; 4. The binary bit stream resulting from each of these blocks is treated as the output of a Boolean function, and logic minimization is attempted to obtain a compact form; if this step fails to give a compact representation, the bits are stored as such. In this study, an extendable framework for lossless compression of medical images is presented. The bit plane is adaptively divided using a Quadtree to capture large uniform areas as leaf nodes. The non-uniform blocks at the leaf nodes are subjected to the same logic minimization approach as the fixed-size blocks in the previous work. To further improve the result, the original image is gray-coded as a pre-processing step, and on each bit plane an XOR operation is applied between the current block and a neighboring block to remove redundancy. This framework allows further exploration by incorporating other Boolean function representation techniques to enhance compression.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_19-A_Lossless_Medical_Image_Compression_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Learning Framework with Metaheuristic Optimization for Credit Card Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170218</link>
        <id>10.14569/IJACSA.2026.0170218</id>
        <doi>10.14569/IJACSA.2026.0170218</doi>
        <lastModDate>2026-02-28T10:50:47.9530000+00:00</lastModDate>
        
        <creator>Agung Nugroho</creator>
        
        <creator>Muhtajuddin Danny</creator>
        
        <creator>Ismasari Nawangsih</creator>
        
        <subject>Fraud detection; ensemble learning; Genetic Algorithm; Random Forest; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Credit card fraud detection is a major challenge in the financial system due to the highly unbalanced nature of transaction data. This study proposes an ensemble learning approach combined with hyperparameter optimization using a Genetic Algorithm to improve the performance of fraudulent transaction detection. The experimental results show that Random Forest achieved the best performance, with a perfect Recall of 1.00 and an F1-Score of 0.903, outperforming the Stacking and Bagging models. Although the optimization significantly increases training time, the method reduces inference time to 0.0290 seconds, making it highly feasible for real-time banking security systems that require instant validation. This study confirms the effectiveness of integrating ensemble learning and metaheuristic optimization in dealing with the problem of unbalanced data.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_18-An_Ensemble_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exposure-Based Media Mix Modeling Using Machine Learning and Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170217</link>
        <id>10.14569/IJACSA.2026.0170217</id>
        <doi>10.14569/IJACSA.2026.0170217</doi>
        <lastModDate>2026-02-28T10:50:47.9230000+00:00</lastModDate>
        
        <creator>Thejan Dulara</creator>
        
        <creator>Indra Mahakalanda</creator>
        
        <creator>Prasanga Jayathunga</creator>
        
        <subject>Media mix determination; saturation points; audience reach; audience exposure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Media budget allocation remains a persistent challenge in the advertising industry. Inefficient spending and biased planning decisions often reduce campaign effectiveness. Advertisers struggle to balance investments across television, radio, press, and digital platforms while managing diminishing returns. This study proposes a data-driven media mix determination model that integrates supervised machine learning with genetic algorithm-based optimization. The objective is to maximize audience reach while maintaining cost efficiency. Unlike traditional media mix models that rely on aggregated medium-level performance, this study adopts an audience exposure-based modelling approach. Facebook and YouTube are used as digital media platforms in this study. Television and digital models are trained using exposure-based reach measures, such as 1 plus and 2 plus reach. Machine learning models, including decision trees, random forests, XGBoost, and LightGBM, are evaluated to capture complex and nonlinear relationships between spend and exposure-based reach. Smoothed reach response curves are used to identify efficiency levels and saturation points for each medium. A genetic algorithm is then applied to derive the optimal budget allocation across media under efficiency, reach, and cost constraints. The model is trained using real advertising data from the Sri Lankan market, ensuring practical relevance and applicability. Although the analysis is based on a country-specific dataset, the model is transferable to markets of similar scale. This study contributes to the literature by introducing an exposure-driven media mix modelling approach that improves media budget planning accuracy and supports more effective advertising decision-making.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_17-Exposure_Based_Media_Mix_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer Vision–Based Method for Determining the Vaccine Injection Position in Pangasius Fingerlings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170216</link>
        <id>10.14569/IJACSA.2026.0170216</id>
        <doi>10.14569/IJACSA.2026.0170216</doi>
        <lastModDate>2026-02-28T10:50:47.8930000+00:00</lastModDate>
        
        <creator>Nguyen Phuc Truong</creator>
        
        <creator>Luong Vinh Quoc Danh</creator>
        
        <creator>Nguyen Chanh Nghiem</creator>
        
        <subject>Computer vision; fish length measurement; fish vaccination; injection position; OpenCV; Pangasius</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Pangasius farming in the Mekong Delta is a major component of Vietnam’s aquaculture industry, characterized by large-scale production, intensive farming practices, and significant contributions to export revenue. However, vaccination of Pangasius fingerlings is still predominantly performed manually, resulting in low productivity, high labor demand, and inconsistent injection accuracy, which limit large-scale deployment in commercial hatcheries. Therefore, there is an urgent need to develop an automated and accurate vaccination method for Pangasius fingerlings. This study proposes a novel computer vision–based approach for non-contact measurement of Pangasius fingerlings and accurate determination of the vaccine injection position. The proposed method leverages the image processing capabilities of the OpenCV library in combination with statistical morphological characteristics of Pangasius fingerlings to localize the injection position. The Python-implemented algorithm is lightweight and can run on an embedded Raspberry Pi platform, supporting practical in-field deployment. Experimental results demonstrate an average positioning accuracy of 97.65%, confirming the effectiveness of the proposed approach and its potential to serve as a technological foundation for automated vaccination systems in Pangasius aquaculture.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_16-A_Computer_Vision_Based_Method_for_Determining_the_Vaccine_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Framework Integrating GNN-LSTM-CNN to Map the Impact of MSME User Behavior on Digital Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170215</link>
        <id>10.14569/IJACSA.2026.0170215</id>
        <doi>10.14569/IJACSA.2026.0170215</doi>
        <lastModDate>2026-02-28T10:50:47.8600000+00:00</lastModDate>
        
        <creator>Jani Kusanti</creator>
        
        <creator>Erni Widiastuti</creator>
        
        <creator>Bintara Sura Priambada</creator>
        
        <creator>Ramadhian Agus Triono Sudalyo</creator>
        
        <creator>Masdava Aviv Masyayissa</creator>
        
        <creator>Rizki Adhi Pratama</creator>
        
        <subject>CNN; e-commerce; hybrid model; LSTM; MSME; user behavior prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>MSMEs need a recommendation system that simultaneously captures the evolution of user intent over time and the relational structure between entities (users, products, sessions, categories, and security events). The problem addressed by this research is that LSTMs excel at sequences but their performance drops on sparse timelines, a common situation in MSME logs, while GNNs are strong at cross-entity relationships but do not explicitly model temporal dynamics. The gap arises because many pipelines still treat relational signals, temporal behavior, and session security separately, reducing explainability and long-term reliability. Our contribution is a calibrated and security-aware hybrid that integrates a CNN, a heterogeneous GNN with reverse edges, and an LSTM for behavioral sequences. The models are trained in a multitask setting (BCE for purchase links and λ&#183;BCE for session risk) with L2 regularization and post-training calibration, and chronological data splitting prevents leakage. The goal is to design and evaluate CNN-enhanced GNN-LSTM hybrids that improve recommendation accuracy and reduce risk. Results on partner MSME data: ROC-AUC 0.965 (val)/0.946 (test), PR-AP 0.943/0.910; risk ROC-AUC 0.984, PR-AP 0.982, surpassing a CNN-BiLSTM baseline (0.93/0.91). Brier scores of 0.161 (links) and 0.176 (risk) enable safer personalization. Going forward, we focus on per-segment calibration with ECE/MCE reporting, compute efficiency, multimodal expansion, ablation, and explainability (GNNExplainer, CNN saliency), as well as online retraining and drift monitoring to maintain production performance.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_15-A_Hybrid_Framework_Integrating_GNN_LSTM_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Landslide Hazard Zones Using Deep Learning Based on Diverse Geospatial Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170214</link>
        <id>10.14569/IJACSA.2026.0170214</id>
        <doi>10.14569/IJACSA.2026.0170214</doi>
        <lastModDate>2026-02-28T10:50:47.8300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kengo Oiwane</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>SAR; Optical; ResUNet++; land cover classification; Digital Elevation Model; landslide hazard zones; The Heavy Rain Event of July 2018</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Traditional landslide hazard mapping in Japan relies on labor-intensive field surveys, which are slow, costly, and fail to update dynamically amid rising climate-driven disasters like the 2018 Heavy Rain Event, leaving gaps in timely evacuations. This study addresses these challenges by proposing a semantic segmentation framework using ResUNet to fuse Sentinel-2 optical, Sentinel-1 SAR amplitude, DEM-derived Terrain Ruggedness Index (TRI), and JAXA land cover data, tackling class imbalance with BCE + Dice loss and providing probability/uncertainty maps via 4-TTA for robust hazard delineation under adverse weather. The principal aim is to enable operational, weather-robust hazard zone extraction with AUC up to 0.89 (best multimodal configuration), outperforming single-modality baselines (e.g., optical-only AUC 0.74; SAR-only 0.69) through synergistic feature fusion, while highlighting multimodal SAR&#39;s edge in cloud-obscured scenarios. Validated on data from Hiroshima Prefecture, Japan&#39;s highest-risk region with ~32,000 hazard spots, this approach demonstrates pre-/post-disaster change detection but reveals limitations in spatial generalization due to region-specific training.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_14-Estimation_of_Landslide_Hazard_Zones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention and Representation Learning in Byte-Level Digital Forensics: A Survey of Methods, Challenges, and Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170213</link>
        <id>10.14569/IJACSA.2026.0170213</id>
        <doi>10.14569/IJACSA.2026.0170213</doi>
        <lastModDate>2026-02-28T10:50:47.8130000+00:00</lastModDate>
        
        <creator>Teena Mary</creator>
        
        <creator>Sreeja CS</creator>
        
        <subject>Byte-level digital forensics; representation learning; attention mechanisms; file fragment classification; deep learning; forensic robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Byte-level analysis has become an essential capability in digital forensics, enabling content-based investigation when file system metadata, headers, or structural information are unavailable or unreliable. Recent advances in deep learning allow forensic systems to learn discriminative features directly from raw byte streams; however, the growing diversity of representation strategies, architectural designs, and attention mechanisms makes it difficult to assess their relative effectiveness and practical suitability. This study presents a structured survey of representation learning and attention-based approaches for byte-level digital forensic analysis. We examine statistical, embedding-based, image-based, sequential, and hybrid representations, and analyze how architectural choices and attention mechanisms influence performance, robustness, and scalability. Across the literature, hybrid representations combined with lightweight convolutional backbones and selective attention mechanisms consistently provide a favorable balance between accuracy and computational efficiency. The survey also reviews key forensic applications, including file fragment classification, malware and binary analysis, network payload forensics, and encrypted or compressed data triage. In addition, we critically discuss challenges related to distribution shift, dataset bias, adversarial vulnerability, interpretability, and reproducibility, along with practical considerations for deployment in large-scale forensic pipelines. By synthesizing architectural trends, operational constraints, and reliability concerns, this work identifies critical research gaps and provides a structured foundation for the development of robust and trustworthy byte-level forensic learning systems.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_13-Attention_and_Representation_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of an International Trade Financial Risk Assessment and Prediction Model Based on Big Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170212</link>
        <id>10.14569/IJACSA.2026.0170212</id>
        <doi>10.14569/IJACSA.2026.0170212</doi>
        <lastModDate>2026-02-28T10:50:47.7670000+00:00</lastModDate>
        
        <creator>Zeyu Liu</creator>
        
        <subject>International trade; financial risk; big data; predictive modeling; China’s economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Background: International trade promotes economic growth across nations while imposing financial risks from currency fluctuations, credit defaults, and market volatility. Although conventional methods of risk evaluation have served well in the past, they are unable to capture international trade risk under present dynamic conditions. Objective: This study aims to develop data-driven models for assessing and predicting financial risk in international trade, with an emphasis on China&#8217;s finance-dominated trade regime, in order to maximize prediction accuracy and provide pragmatic risk management solutions. Methods: The study proposes a hybrid method that characterizes complex nonlinear correlations with a DNN and then refines the prediction outputs with an LR model for enhanced interpretability. Models are trained on the International Trade and Finance Dataset augmented with macroeconomic indicators; preprocessing is performed via statistical imputation, feature normalization, and one-hot encoding. Results: With R&#178;, RMSE, MAE, and MSE values of 0.9670, 0.0408, 0.0322, and 0.0017, respectively, the model proves the most capable and accurate at measuring financial risk. The hybrid design marries complex feature learning with interpretability, yielding a practical instrument for risk assessment. Conclusion &amp; Implications: This study provides a solid framework for predicting financial risks in international trade that can aid financial institutions in decision-making and policy development. The findings may be applied to ongoing financial stability assessments for trade risk management.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_12-Construction_of_an_International_Trade_Financial_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Authentication and Authorization: A Comparative Analysis of Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Relationship-Based Access Control (ReBAC) Authorization Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170211</link>
        <id>10.14569/IJACSA.2026.0170211</id>
        <doi>10.14569/IJACSA.2026.0170211</doi>
        <lastModDate>2026-02-28T10:50:47.7370000+00:00</lastModDate>
        
        <creator>Madhuri Margam</creator>
        
        <subject>Authentication; authorization; access control; Role-Based Access Control (RBAC); Attribute-Based Access Control (ABAC); Relationship-Based Access Control (ReBAC); information security; least privilege; Zero Trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study elucidates the distinctions between authentication and authorization within information security, two fundamental yet frequently conflated concepts. While authentication serves to confirm an entity’s identity, authorization determines the permissible actions that entity may execute. A thorough understanding of these mechanisms is critical for architecting secure, scalable systems and reducing vulnerabilities. The study further explores three widely adopted authorization paradigms using a gymnasium analogy: Role-Based Access Control (RBAC), which assigns privileges based on predefined roles; Attribute-Based Access Control (ABAC), which leverages a dynamic evaluation of user and contextual attributes; and Relationship-Based Access Control (ReBAC), which determines access based on defined relationships among entities. The concluding discussion emphasizes that optimal security is realized when authentication and authorization function cohesively.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_11-Understanding_Authentication_and_Authorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Interference in Human–Robot Collaboration: A Distance-Based Policy for Disassembly Collaborative Task</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170210</link>
        <id>10.14569/IJACSA.2026.0170210</id>
        <doi>10.14569/IJACSA.2026.0170210</doi>
        <lastModDate>2026-02-28T10:50:47.7030000+00:00</lastModDate>
        
        <creator>Wendy Cahya Kurniawan</creator>
        
        <creator>Wen Liang Yeoh</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <subject>Human-robot collaboration; disassembly collaborative task; UI feedback; distance policy; vision-based sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study investigates human-robot collaborative (HRC) disassembly tasks, focusing on the effects of constrained and unconstrained safety distances on human behavior and movement interference. Separate and shared target configurations were compared using a screw-picking task as a prototype for electronic waste (e-waste) disassembly. The system employs a vision-based approach to track screws, the human hand, and the robot end effector. Performance was evaluated using objective metrics, including completion time, warning distance, and movement heatmaps, as well as subjective workload assessed via NASA-TLX. Results show that the safety algorithm significantly improved movement efficiency by minimizing the spatial distribution of movement compared to the unconstrained trials. Furthermore, significant results from the factorial design showed that the separate-target task enhanced human awareness, enabling participants to anticipate warning distances more effectively than in the shared-target task. Synchronizing task assignments with safety algorithms reduces the need for human intervention.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_10-Reducing_Interference_in_Human_Robot_Collaboration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LayCoder: UI Layout Completion with an Encoder-Only Transformer and Layout Tokenizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170209</link>
        <id>10.14569/IJACSA.2026.0170209</id>
        <doi>10.14569/IJACSA.2026.0170209</doi>
        <lastModDate>2026-02-28T10:50:47.6570000+00:00</lastModDate>
        
        <creator>Iskandar Salama</creator>
        
        <creator>Luiz Henrique Mormille</creator>
        
        <creator>Masayasu Atsumi</creator>
        
        <subject>Layout completion; deep learning; encoder-only transformers; masked language modeling; tokenization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The growing complexity of user interface (UI) design calls for effective methods to understand, complete, and refine layout structures. While prior work has focused predominantly on generating UI layouts from scratch, completing partially designed interfaces is equally critical, particularly in iterative design workflows and scenarios involving incomplete prototypes. In this study, we address the UI layout completion task for mobile app screens using an encoder-only transformer architecture with masked modeling and a layout tokenizer. By representing UI elements as discrete tokens, we formulate layout completion as a sequence prediction problem that leverages global context to infer missing components. We evaluate our approach on subsets of the RICO dataset designed with varying constraints on UI element types and spatial overlap, and report results using standard layout metrics: Coverage, Intersection over Union (IoU), Max IoU, and Alignment. The experiments demonstrate that the proposed method achieves substantial improvements over LayoutFormer++, a widely adopted baseline in UI layout generation, particularly in Coverage, and in several cases, IoU, Max IoU, and Alignment. Additional experiments on noise-reduced subsets reveal that dataset curation can enhance spatial consistency but may also reduce Coverage, reflecting an inherent trade-off between completeness and structural precision. These findings highlight both the effectiveness and the limitations of encoder-only masked modeling for layout completion, and underscore the importance of balancing model design with dataset construction when tackling complex UI design tasks.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_9-LayCoder_UI_Layout_Completion_with_an_Encoder_Only_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measurement of Dataset Quality and Computational Quality of Graph Neural Networks on Analog Integrated Circuit Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170208</link>
        <id>10.14569/IJACSA.2026.0170208</id>
        <doi>10.14569/IJACSA.2026.0170208</doi>
        <lastModDate>2026-02-28T10:50:47.6270000+00:00</lastModDate>
        
        <creator>Arif Abdul Mannan</creator>
        
        <creator>Koichi Tanno</creator>
        
        <subject>Big data; Graph Neural Network; Artificial Intelligence; calculation quality measurement; analog circuit design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>To meet the needs of AI-assisted automation in analog IC design, and especially in big data applications, sensitive and precise Graph Neural Network (GNN) recognition will become increasingly necessary. This sensitivity is needed to accommodate the highly stringent trade-offs in analog IC design requirements, combined with ever larger learning data requirements. Furthermore, more precise GNN recognition is a fundamental requirement for a system sensitive enough to accommodate noise in floating-point (FP) calculations, namely the inaccuracies and imprecision of IEEE 754 FP arithmetic. In this study, by refining a previously proposed method that uses the output vector representation (OVR) of an untrained GNN, and by exploiting numerical reproducibility errors in FP calculations, a method for measuring the sensitivity and precision of GNNs for use in analog IC design recognition is proposed. The measurement results show a complex combination of effects between dataset quality (the presence or absence of data duplication), the sensitivity of the GNN in distinguishing each feature in the dataset, the complexity of the calculations that occur in the GNN, and the quality of the non-ideal FP calculations performed by the processing unit. With certain GNN configurations, the proposed method also succeeded in measuring the difference in FP calculation quality between a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU): in the tests carried out for big data applications, the maximum amount of data that the CPU can distinguish is 25 to 100 times greater than with the GPU. Because it uses only the untrained GNN OVR and involves no training process, the proposed method cannot yet be correlated with final GNN performance after training; a measurement method using GNN OVR that involves the learning process is future work.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_8-Measurement_of_Dataset_Quality_and_Computational_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Constraint-Driven Conversational Architecture for Staged Task Interaction Using Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170207</link>
        <id>10.14569/IJACSA.2026.0170207</id>
        <doi>10.14569/IJACSA.2026.0170207</doi>
        <lastModDate>2026-02-28T10:50:47.6100000+00:00</lastModDate>
        
        <creator>Joseph Benjamin Ilagan</creator>
        
        <creator>Jose Ramon Ilagan</creator>
        
        <subject>Large language models; conversational systems; task-oriented dialogue; deterministic control; constraint enforcement; staged interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>Large language models (LLMs) enable flexible conversational interfaces, but remain difficult to deploy in structured, staged tasks that require controlled progression, bounded information disclosure, and task validity. Prompt-based control is inherently probabilistic and has been shown to degrade under multi-turn interaction, leading to premature solution disclosure, stage skipping, loss of state coherence, and constraint violations. This study presents a constraint-driven conversational architecture that separates probabilistic language generation from deterministic task governance. An external control layer manages dialogue state, stage transitions, and constraint enforcement, while a simulation layer represents task logic independently of the LLM. We instantiate the architecture in the context of customer discovery tasks to illustrate how staged processes and bounded disclosure can be operationalized without embedding task logic directly into prompts. This work focuses on architectural design and control mechanisms rather than outcome evaluation, offering a reusable architectural pattern for LLM-driven conversational systems that must preserve staged progression, enforce constraints, and prevent premature disclosure during multi-turn interaction.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_7-A_Constraint_Driven_Conversational_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Best Practices to Train Accurate Deep Learning Models: A General Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170206</link>
        <id>10.14569/IJACSA.2026.0170206</id>
        <doi>10.14569/IJACSA.2026.0170206</doi>
        <lastModDate>2026-02-28T10:50:47.5630000+00:00</lastModDate>
        
        <creator>Alberto Nogales</creator>
        
        <creator>Ana M. Mait&#237;n</creator>
        
        <creator>&#193;lvaro J. Garc&#237;a-Tejedor</creator>
        
        <subject>Methodology; artificial intelligence; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>In recent years, the field of computer science has experienced great changes due to the remarkable advances in artificial intelligence. Deep Learning models are responsible for most of them; the biggest milestone occurred in 2012, when AlexNet won the ImageNet image classification challenge. These models have demonstrated great performance on different types of complex tasks like image restoration, medical diagnosis, or object recognition. Their main disadvantage is their high data dependency, which forces experts in the field to follow a precise methodology to obtain accurate models. In this study, we describe a complete workflow that begins with the management of the raw data and ends with an in-depth interpretation of model performance. It should be taken as a high-level reference document describing good practices to be applied. Apart from the step-by-step methodology, we present different use cases that correspond to the two main problems of the field: classification and regression.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_6-Deep_Learning_Models_A_General_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond the Interface: AI Integration Through Input Stream Mediation and Intelligent Output Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170205</link>
        <id>10.14569/IJACSA.2026.0170205</id>
        <doi>10.14569/IJACSA.2026.0170205</doi>
        <lastModDate>2026-02-28T10:50:47.5330000+00:00</lastModDate>
        
        <creator>Divij H. Patel</creator>
        
        <subject>Keystroke dynamics; keystroke simulation; AI-Assisted Interfaces; open-source AI; key-chord recognition; API integration; real-time interaction systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>As Natural Language Processing (NLP) technologies continue to advance, their integration within everyday software tools remains fragmented and often constrained by environment-specific plugins or proprietary AI interfaces. This study introduces a lightweight, platform-independent framework that transforms any text-input surface, ranging from simple editors to browsers and full-featured integrated development environments (IDEs), into an intelligent, AI-assisted interface. Leveraging keystroke dynamics and key-chord recognition, the system operates unobtrusively in the background to interpret user intent and enable real-time interaction within active applications. It provides context-aware suggestions, completions, and insights directly within the active window, effectively turning ordinary typing environments into responsive, intelligent companions. Beyond unifying AI support across diverse applications, the framework enables users to seamlessly switch among multiple open-source AI and NLP models via simple key combination triggers, supported through flexible API integration, thereby providing dynamic access to different linguistic capabilities on demand. The architecture also incorporates a protective intelligence layer that detects suspicious behavior patterns and safeguards sensitive information, offering an additional shield against unauthorized data exposure. By employing a universal mediation layer for input and output, supported by controlled keystroke simulation, the approach eliminates the need for custom extensions or application-specific integrations, presenting a scalable model for embedding real-time artificial intelligence into everyday computing environments.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_5-Beyond_the_Interface_AI_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Efficiency of LLM-Generated Software in Resisting Malicious Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170204</link>
        <id>10.14569/IJACSA.2026.0170204</id>
        <doi>10.14569/IJACSA.2026.0170204</doi>
        <lastModDate>2026-02-28T10:50:47.5000000+00:00</lastModDate>
        
        <creator>Dominic Niceforo</creator>
        
        <creator>Haydar Cukurtepe</creator>
        
        <subject>Artificial intelligence; cyber security; large language models (LLMs); software security evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>This study introduces a structured framework for evaluating the security of Java applications generated by large language models (LLMs) and presents the results from its implementation across three models: DeepSeek, GPT-4, and Llama 4. The framework integrates Open Web Application Security Project (OWASP)-supported tools, such as SpotBugs with FindSecBugs, OWASP Dependency Check, and OWASP Zed Attack Proxy (ZAP), alongside the NIST Risk Management Framework. These tools and standards were selected for being publicly available, allowing this process to be replicated and extended without proprietary licensing, and for their alignment with widely adopted industry benchmarks. The testing methodology for generated Java applications includes static code analysis, third-party dependency checking, and dynamic attack simulation. Each tool in this study is used to identify a specific category of critical vulnerabilities. Identified vulnerabilities are then evaluated against NIST risk analysis standards to characterize their threat sources, likelihoods, and impacts, as well as their implications for the overall security risk profile of each application. The effect of prompt design is also explored by comparing a neutral prompt against a security-emphasized prompt incorporating OWASP best practices. Results varied considerably across models: GPT-4 showed noticeable improvements across critical and high-severity vulnerabilities, with 33.3% and 53.8% reductions, respectively. However, Llama 4 and DeepSeek saw an increase in vulnerabilities from the neutral to the secure prompt. Llama 4 had a general increase of 10-15% across critical, high, and medium-severity vulnerabilities, while DeepSeek saw no change in high-severity vulnerabilities and a 40% increase in low-severity vulnerabilities. The framework presented provides a structured process for evaluating LLM-generated code against established software development and security standards, while identifying present limitations and possible directions for future work.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_4-Evaluating_the_Efficiency_of_LLM_Generated_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Data-Driven Decision Support in Retail: A Hybrid GraphSAGE+XGBoost Model for Predicting Reorder Behavior and Unraveling Consumer Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170203</link>
        <id>10.14569/IJACSA.2026.0170203</id>
        <doi>10.14569/IJACSA.2026.0170203</doi>
        <lastModDate>2026-02-28T10:50:47.4530000+00:00</lastModDate>
        
        <creator>Balayet Hossain</creator>
        
        <creator>Md Deluar Hossen</creator>
        
        <creator>Md Nuruzzaman Pranto</creator>
        
        <creator>Belal Hossain</creator>
        
        <creator>Sabrina Shamim Moushi</creator>
        
        <creator>Nusrat Ameri</creator>
        
        <creator>Khandakar Rabbi Ahmed</creator>
        
        <subject>Real-time business intelligence; graph analytics; machine learning; GraphSAGE; XGBoost; retail analytics; recommendation systems; decision support; instacart dataset; hybrid model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The rising demand for real-time, data-driven decision support in retail platforms has underscored the need for intelligent systems capable of modeling both behavioral sequences and product relationships. This study introduces a hybrid architecture for real-time decision support in retailing by coupling graph-based learning with conventional machine learning methods. Based on Instacart 2017 data, it constructs a heterogeneous user-product graph and utilizes GraphSAGE to obtain relational embeddings. This combination of embeddings and domain-specific features is then fed into an XGBoost classifier to predict reorder behavior. Empirical findings show that the proposed GraphSAGE+XGBoost model outperforms conventional baselines, including standalone XGBoost, Multilayer Perceptron (MLP), and Long Short-Term Memory (LSTM) models, across all metrics, achieving a precision of 0.82, a recall of 0.78, an F1-score of 0.76, and a mean Average Precision (mAP) of 0.75. Furthermore, within the co-purchase network, product-level community identification revealed significant clusters (such as breakfast staples, health-conscious products, and impulsive snacking) that provided insights into customer demographics and marketing potential. The system is optimized for real-time inference and can operate in a dynamic commercial landscape, unraveling complex co-purchase behavior and hidden consumer communities.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_3-Real_Time_Data_Driven_Decision_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Confidence-Aware Multi-Layer Framework for Drone Forensic Inference Using Heterogeneous Digital Evidence Sources and Flight Logs Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170202</link>
        <id>10.14569/IJACSA.2026.0170202</id>
        <doi>10.14569/IJACSA.2026.0170202</doi>
        <lastModDate>2026-02-28T10:50:47.4230000+00:00</lastModDate>
        
        <creator>Nidhiba Parmar</creator>
        
        <creator>Naveen Kumar Chaudhary</creator>
        
        <subject>UAV forensics; drone investigations; forensic inference; confidence quantification; flight log analysis; digital forensic framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The increasing use of unmanned aerial vehicles (UAVs) in criminal and adversarial contexts has created new challenges for digital forensic investigations. Current UAV forensic research primarily emphasizes artefact extraction and platform-dependent analysis, while insufficient attention has been given to uncertainty modelling and confidence quantification in forensic inference. This study addresses this methodological gap by proposing a confidence-aware multi-layer UAV forensic framework designed to support legally defensible forensic conclusions. The framework integrates chip-off memory acquisition, logical flight log analysis, companion mobile device artefact examination, and wireless trace correlation within a unified analytical architecture. Physics-based flight trajectory reconstruction and cross-device temporal alignment algorithms enhance reproducibility and platform independence. To reflect varying levels of evidentiary reliability, a structured evidence-weighting approach is introduced alongside a novel Forensic Confidence Index (FCI) that quantifies evidentiary support without implying absolute certainty. Validation using a Yuneec Typhoon Q500 4K dataset demonstrates feasible trajectory reconstruction, temporal correlation, and confidence-constrained attribution under realistic investigative conditions. By explicitly incorporating uncertainty modeling and confidence articulation into UAV forensic workflows, the proposed framework improves scientific rigor, transparency, and legal defensibility while providing a scalable foundation for future cyber-physical forensic investigations.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_2-A_Confidence_Aware_Multi_Layer_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Metaheuristic Methods for the Vehicle Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170201</link>
        <id>10.14569/IJACSA.2026.0170201</id>
        <doi>10.14569/IJACSA.2026.0170201</doi>
        <lastModDate>2026-02-28T10:50:47.3770000+00:00</lastModDate>
        
        <creator>Manal El Jaouhari</creator>
        
        <creator>Ghita Bencheikh</creator>
        
        <creator>Ghizlane Bencheikh</creator>
        
        <subject>Metaheuristics; Ant Colony Optimization; Hill Climbing; Genetic Algorithm; Particle Swarm Optimization; exact algorithm; Capacitated Vehicle Routing Problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(2), 2026</description>
        <description>The Capacitated Vehicle Routing Problem (CVRP) is a fundamental NP-hard combinatorial optimization problem with important applications in logistics and distribution systems. Although numerous advanced approaches have been proposed in recent years, systematic benchmarking of classical metaheuristic algorithms under a unified experimental framework remains limited. This study evaluates the performance and trade-offs of four well-known metaheuristics: Hill Climbing (HC), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Genetic Algorithms (GA). All methods are implemented within the same computational environment and assessed on benchmark CVRP instances, using the CPLEX exact solver as a reference for global optimality. The results indicate that ACO achieves the smallest optimality gaps and often approaches optimal solutions, at the cost of higher computational effort. PSO strikes a favorable balance between solution quality and runtime across the tested instances, whereas HC delivers very fast solutions but degrades as problem complexity increases. GA exhibits higher variability and less competitive performance under the selected parameter settings. Overall, this comparative analysis highlights the strengths and limitations of classical metaheuristics and establishes a reproducible baseline for future research, including hybrid and learning-assisted approaches for scalable vehicle routing optimization.</description>
        <description>http://thesai.org/Downloads/Volume17No2/Paper_1-A_Comparison_of_Metaheuristic_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Data Mining Analysis of Ownership Structures and their Influence on Corporate Tax Avoidance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170197</link>
        <id>10.14569/IJACSA.2026.0170197</id>
        <doi>10.14569/IJACSA.2026.0170197</doi>
        <lastModDate>2026-01-30T11:01:09.8900000+00:00</lastModDate>
        
        <creator>Tuti Herawati</creator>
        
        <creator>Helmi Yazid</creator>
        
        <creator>Nurhayati Soleha</creator>
        
        <creator>Munawar Muchlish</creator>
        
        <subject>Data mining; big data; tax avoidance; tax burden; corporate ownership structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study aims to examine the impact of corporate ownership structures on tax avoidance using a predictive data mining approach. The main challenge addressed is understanding how variations in ownership influence a firm’s strategic financial decisions, particularly its tendency to engage in tax minimization practices. By applying advanced predictive data mining techniques, the research uncovers significant patterns, identifies key ownership features, and models their relationship with tax avoidance outcomes. The dataset, derived from corporate financial statements and ownership records, is systematically preprocessed, feature-selected, and validated to ensure reliable predictive performance. Results demonstrate that differences in ownership structures significantly affect tax avoidance behavior, with certain ownership characteristics consistently emerging as strong predictors. These findings offer computational insights for both academic understanding and practical applications, helping regulators anticipate risky ownership configurations and improve policy oversight. The study highlights the importance of integrating ownership theory with predictive modeling to enhance the transparency, interpretability, and robustness of corporate tax strategy analyses.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_97-Predictive_Data_Mining_Analysis_of_Ownership_Structures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DEEP: A Distributed Energy Efficient Routing Protocol for Internet of Nano-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170196</link>
        <id>10.14569/IJACSA.2026.0170196</id>
        <doi>10.14569/IJACSA.2026.0170196</doi>
        <lastModDate>2026-01-30T11:01:09.8570000+00:00</lastModDate>
        
        <creator>Saoucene Mahfoudh</creator>
        
        <creator>Areej Omar Balghusoon</creator>
        
        <subject>Routing protocol; Internet of Nano Things; nano-sensor; energy efficiency; energy harvesting; terahertz communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Nanotechnology offers transformative capabilities across healthcare, environmental monitoring, and industrial automation. When integrated with modern communication technologies, Wireless Nano Sensor Networks (WNSNs) form the Internet of Nano Things (IoNT), interconnecting nanoscale devices with conventional networks. Despite its potential, efficient routing in IoNT remains challenging due to severe energy constraints, limited processing, and high propagation losses in the terahertz (THz) band. This paper proposes the Distributed Energy-Efficient Protocol (DEEP), a lightweight routing scheme designed for IoNT-based WNSNs. DEEP balances simplicity, connectivity, and sustainability through adaptive retransmission control and a hybrid energy model combining environmental energy harvesting with wireless power transfer. Performance evaluation using the Nano-Sim module of the NS-3 simulator demonstrates that DEEP significantly extends network lifetime, reduces overall energy consumption, and maintains scalability and robust delivery performance with minimal communication overhead.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_96-DEEP_A_Distributed_Energy_Efficient_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Sentiment Analysis Pipeline for Evaluating Hajj Food Service Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170195</link>
        <id>10.14569/IJACSA.2026.0170195</id>
        <doi>10.14569/IJACSA.2026.0170195</doi>
        <lastModDate>2026-01-30T11:01:09.8270000+00:00</lastModDate>
        
        <creator>Amjad Enad Almutairi</creator>
        
        <creator>Aisha Yaquob Alsobhi</creator>
        
        <creator>Abdulrhman M Alshareef</creator>
        
        <subject>Sentiment analysis; service quality; machine learning; Hajj; food service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Pilgrimage, also known as Hajj, brings together millions of people each year, creating significant challenges in managing, organizing, and maintaining the quality of various services. Among these essential services, food provision plays a vital role in shaping pilgrims’ overall experience and satisfaction. Despite its importance, research focusing on food services using sentiment analysis during Hajj remains limited. Existing studies often rely on social media data, which may not accurately capture the genuine opinions of pilgrims. This study addresses this gap by analyzing food service text reviews collected from Google Maps within the Hajj context. It contributes a new dataset collected for evaluating food services provided to pilgrims after the Hajj season, along with an empirical benchmark for Arabic Hajj food reviews. The dataset consists of 4,018 Google Maps reviews from 160 Hajj campaigns conducted between 2022 and 2025. After data preprocessing, the reviews were classified using several classical machine learning algorithms as empirical baselines, including support vector machine (SVM), logistic regression (LR), Na&#239;ve Bayes (NB), decision tree (DT), and random forest (RF). The experimental results demonstrate that LR achieved the highest accuracy of 93.6% among the evaluated models, followed by SVM and RF with accuracies of 92.9% and 92.2%, respectively. The analysis also shows that positive sentiment dominated across all studied years, indicating an overall improvement in pilgrims’ satisfaction with food services. However, the persistence of food-related issues highlights the need for continued attention and improvement in service quality.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_95-Machine_Learning_Based_Sentiment_Analysis_Pipeline.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Accuracy to Insight: Explainability in Review Rating Prediction with Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170194</link>
        <id>10.14569/IJACSA.2026.0170194</id>
        <doi>10.14569/IJACSA.2026.0170194</doi>
        <lastModDate>2026-01-30T11:01:09.7970000+00:00</lastModDate>
        
        <creator>Dhefaf T. Radain</creator>
        
        <creator>Dimah Alahmadi</creator>
        
        <creator>Arwa M. Wali</creator>
        
        <subject>Explainability; LIME; review rating prediction; SHAP; transformer-based models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Mobile application (app) reviews provide valuable information that facilitates understanding of users’ needs, leading to better design of developed products. They contain abundant data that can be utilized by different models to explain prediction results to stakeholders. This will lead mobile app developers to trust and rely on the models that are used to develop their apps and satisfy the users’ needs. To leverage this information, outstanding improvements in complex learning algorithms have led to the development of transformer-based models that are used for natural language processing (NLP) and to exploit rating predictions. However, such models are complex and lack explainability, especially for Arabic reviews. Most studies have applied explainability models for transformer-based models to the English language and various other languages but not the Arabic language. This study presents a rating prediction explainability (RPE) framework that combines transformer-based and explainability models for review rating predictions from mobile government (m-government) apps. The transformer-based models predict the ratings for reviews written in English or Arabic. Then, local explainability models, such as SHapley Additive exPlanation (SHAP) and local interpretable model-agnostic explanations (LIME), explain and visualize the results. In RPE, not only was high prediction accuracy achieved for both English and Arabic reviews, but the resulting predictions were also justified with consistency between the different explainability models. The transformer-based model ELECTRA yielded the highest accuracy and F1 score of 96% for the rating prediction of English reviews, whereas the transformer-based model AraBERTv2 had 95% accuracy and F1 score for the rating prediction of Arabic reviews. The results of both explainability models provided equivalent explanations and emphasized the same words that affected the predicted ratings.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_94-From_Accuracy_to_Insight_Explainability_in_Review_Rating_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Evaluation of Deep Learning Architectures and Hybrid Heuristics for Automated Gambling Content Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170193</link>
        <id>10.14569/IJACSA.2026.0170193</id>
        <doi>10.14569/IJACSA.2026.0170193</doi>
        <lastModDate>2026-01-30T11:01:09.7630000+00:00</lastModDate>
        
        <creator>Eros Anaya S&#225;nchez</creator>
        
        <creator>Chesney Taichi Marchena Tejada</creator>
        
        <creator>Jose Alfredo Herrera Quispe</creator>
        
        <subject>Deep learning; image classification; gambling detection; ResNet50; hybrid systems; transfer learning; Convolutional Neural Networks; platform governance; content moderation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The exponential proliferation of online gambling content represents a multifaceted challenge for contemporary automated content moderation systems, primarily driven by the sophisticated visual obfuscation and semantic complexity characteristic of modern digital advertising. This study conducts a rigorous comparative evaluation of the efficacy of Deep Learning (DL) architectures against classical Machine Learning (ML) paradigms for the deterministic identification of gambling-related imagery. Specifically, we propose and implement GADIA (Gambling Ad Detector with Intelligent Analysis), a novel hybrid funnel-based architecture that integrates structural heuristic filtering with an asymmetrically fine-tuned ResNet50 classifier. To address the systemic scarcity of high-quality public repositories, the models were trained and validated on a proprietary, strictly balanced dataset of 2,312 images, meticulously curated to encapsulate real-world adversarial marketing techniques. Performance benchmarks were established through Accuracy, Precision, Recall, F1-score, and AUC metrics. Experimental evidence demonstrates that the ResNet50 architecture attained a superior robustness profile, achieving 85.01% accuracy and 90.42% recall, significantly outperforming traditional baselines that failed to capture high-dimensional visual hierarchies. These findings validate that deep residual learning, when integrated into a hybrid heuristic-visual pipeline, provides a computationally efficient and scalable foundation for real-time platform governance and digital safety monitoring.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_93-Comparative_Evaluation_of_Deep_Learning_Architectures_and_Hybrid_Heuristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TRI-GATE: A Tri-Modal Anti-Spoofing System for Gate Access Using Vehicle, License Plate, and Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170192</link>
        <id>10.14569/IJACSA.2026.0170192</id>
        <doi>10.14569/IJACSA.2026.0170192</doi>
        <lastModDate>2026-01-30T11:01:09.7330000+00:00</lastModDate>
        
        <creator>Muhannad Alsultan</creator>
        
        <creator>Thamer Alghonaim</creator>
        
        <creator>Abdulaziz Alorf</creator>
        
        <creator>Bandar Alwazzan</creator>
        
        <creator>Faisal Alsakakir</creator>
        
        <creator>Abdullah Alhassan</creator>
        
        <creator>Yousif Hussain</creator>
        
        <subject>Tri-modal anti-spoofing; vehicle recognition; license plate recognition; face recognition; real-time gate access control; multimodal biometrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Vehicle gate access, in general, still relies heavily on manual inspection of identification cards and visual verification by security guards, which is slow, tedious, and susceptible to spoofing. Single-modality, computerized systems that utilize license plates, vehicle appearance, and facial recognition can partially alleviate this difficulty. Still, they are prone to spoofing and generally perform poorly in real-world scenarios (e.g., glare, occlusion, and tinted glass). This study presents TRI-GATE, a tri-modal anti-spoofing framework that unifies vehicle, license plate, and face recognition within a single, real-time decision pipeline. The system employs YOLOv4-tiny for vehicle detection and a MobileNetV2-based classifier for make–model recognition, a retrained MTCNN and LPRNet pair for license plate detection and recognition on Saudi-specific datasets (17,000 images for detection and 35,000 for recognition), and RetinaFace with InsightFace embeddings, along with a linear SVM, for driver identification. An IoU-based best-frame selection scheme reduces latency by forwarding only the most informative frame to the recognition modules. Score-level fusion is then performed by a linear SVM that learns the relative importance of each modality for the final access decision. Evaluated on a dedicated tri-modal dataset, TRI-GATE achieves 97% gate-level accuracy with an end-to-end latency of 66 ms per frame (≈ 15.15 FPS), and demonstrates robust performance in a real-world gate-like deployment, substantially improving both security and operational efficiency over existing single- and bi-modal solutions.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_92-TRI_GATE_A_Tri_Modal_Anti_Spoofing_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable AI for Enhancing Awareness of Academic Stress Among International University Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170191</link>
        <id>10.14569/IJACSA.2026.0170191</id>
        <doi>10.14569/IJACSA.2026.0170191</doi>
        <lastModDate>2026-01-30T11:01:09.7000000+00:00</lastModDate>
        
        <creator>Ahmed Almathami</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Explainable artificial intelligence; academic stress; student awareness; international university students; learning analytics; Human–Computer Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Academic stress is a common challenge in higher education, especially for international university students who must adapt to new academic systems, expectations, and learning environments. In recent years, artificial intelligence has been increasingly used to analyze academic data and estimate student stress. However, most AI-based systems prioritize prediction accuracy over providing valuable support for student understanding. As a result, students may receive stress-related indicators without a clear explanation of how these results relate to their academic tasks or activities. This state-of-the-art review discusses current research on explainable artificial intelligence in the field of academic stress and student awareness. Based on literature published between 2020 and 2025, this review synthesizes work from educational technology, learning analytics, and explainable AI from a Human–Computer Interaction perspective. The analysis focuses on the representation of academic stress, the design of explanatory frameworks, and the extent to which existing systems facilitate students’ ability to interpret and reflect on their work. The review finds that awareness is rarely treated as an explicit outcome in existing research. Although explainable models are increasingly used, the explanations they produce are often technical and not student-oriented. International students are an underrepresented group in the literature, despite the apparent differences in their academic preparation, linguistic ability, and expectations. Consequently, these shortcomings limit the effectiveness of artificial intelligence systems as tools for enhancing student awareness. This review highlights the need to shift from prediction-oriented approaches toward awareness-oriented explainable AI systems that prioritize student understanding. By emphasizing human-centered explanation design and inclusive evaluation, future research can better support students in making sense of academic stress within diverse higher education environments.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_91-Explainable_AI_for_Enhancing_Awareness_of_Academic_Stress.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relative Position Estimation for Multi-Robots Based on Vertex Distance Between Regular Tetrahedral Units</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170190</link>
        <id>10.14569/IJACSA.2026.0170190</id>
        <doi>10.14569/IJACSA.2026.0170190</doi>
        <lastModDate>2026-01-30T11:01:09.6700000+00:00</lastModDate>
        
        <creator>Airi Kojima</creator>
        
        <creator>Kohei Yamagishi</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Multi-robot system; distributed cooperative control; relative position estimation; self-contained positioning system; inter-vertex distance; geometric structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Optimal decentralized cooperative control in multi-robot systems requires simultaneous local sensing and inter-agent communication. Ultra-wide band (UWB) wireless communication has been investigated as a self-contained positioning system capable of supporting both functions within a single device. Conventional UWB-based positioning methods estimate absolute positions using distance measurements relative to fixed anchors in the environment. Moreover, while relative position estimation methods based on antenna configurations have been studied, mutual relative position estimation with respect to the reference coordinate frames of the agents themselves has not yet been investigated. To address this gap, this paper proposes a three-dimensional relative position estimation method for distributed cooperative control based on the sharing of inter-vertex distances. In the proposed system, inter-vertex distances between units are measured, where each unit is equipped with four UWB devices arranged in a regular tetrahedral geometric structure. An optimization-based estimation process is applied and enhanced with a k-means clustering method to mitigate convergence to local minima. The estimation accuracy was evaluated through both simulations and real-world experiments. The results demonstrate that the proposed method can accurately estimate relative positions between units and is effective for multi-robot systems operating on planar surfaces.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_90-Relative_Position_Estimation_for_Multi_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Machine Learning for Real-Time Gear Change Prediction in Autonomous Parking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170189</link>
        <id>10.14569/IJACSA.2026.0170189</id>
        <doi>10.14569/IJACSA.2026.0170189</doi>
        <lastModDate>2026-01-30T11:01:09.6530000+00:00</lastModDate>
        
        <creator>Ahmed A. Kamel</creator>
        
        <creator>Reda Alkhoribi</creator>
        
        <creator>M. Shoman</creator>
        
        <creator>Mohammed A. A. Refaey</creator>
        
        <subject>Autonomous parking; direction change detection; embedded systems; machine learning; motion planning; random forest; rapidly-exploring random trees; rapidly-exploring random tree star</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Real-time motion planning for autonomous parking on embedded advanced driver-assistance system (ADAS) platforms faces a fundamental computational bottleneck: transformer-based approaches (e.g., Motion Planning Transformer, Diffusion-based planners) achieve strong performance but incur prohibitive computational costs unsuitable for resource-constrained automotive systems. This work proposes a lightweight alternative machine learning approach using Random Forest classifiers and regressors to predict parking trajectory regions and vehicle orientations, enabling accelerated Rapidly-exploring Random Trees (RRT) planning without sacrificing robustness. The approach is trained on a dataset of 10,725 synthetic perpendicular backward parking scenarios generated via Rapidly-exploring Random Tree Star (RRT*) in the Reeds-Shepp configuration space. Using Random Forests with 20 trees and maximum depth 8, the method achieves a 98.3–100% success rate in multi-direction-change scenarios with planning times of 0.15–0.25 seconds, compared to 2.81 seconds for unconstrained RRT. In scenarios with insufficient prediction guidance, the constrained planner can maintain a fallback mechanism that preserves RRT’s probabilistic completeness guarantees. This work demonstrates that simpler machine learning models can match transformer-based approaches while remaining practical for embedded deployment.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_89-Lightweight_Machine_Learning_for_Real_Time_Gear_Change_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Dual-YOLOv8 Instance-Aware Semantic Segmentation for Real-Time Autonomous Driving on Edge ARM/GPU Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170188</link>
        <id>10.14569/IJACSA.2026.0170188</id>
        <doi>10.14569/IJACSA.2026.0170188</doi>
        <lastModDate>2026-01-30T11:01:09.6230000+00:00</lastModDate>
        
        <creator>Safa Teboulbi</creator>
        
        <creator>Seifeddine Messaoud</creator>
        
        <creator>Mohamed Ali Hajjaji</creator>
        
        <creator>Mohamed Atri</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <subject>Autonomous driving; instance-aware semantic segmentation; real-time instance segmentation; YOLOv8; dual-model fusion; edge deployment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Semantic segmentation is a fundamental component of autonomous driving systems, enabling accurate scene understanding and object-level perception. However, achieving precise instance-level delineation while maintaining real-time performance on resource-constrained platforms remains a significant challenge, particularly for edge deployment scenarios. This paper proposes a lightweight dual-YOLOv8 fusion framework for instance-aware semantic segmentation in autonomous driving applications. The proposed approach integrates YOLOv8n-seg and YOLOv8s-seg through a multi-scale fusion strategy that exploits their complementary feature representations to improve the segmentation of road-relevant objects, including cars, buses, trucks, and motorcycles. The framework is evaluated on the Reetiquetado de Vehiculos dataset using standard instance-level segmentation metrics. Experimental results demonstrate strong performance, achieving an overall mAP@0.5 of 92.9% and mAP@0.5:0.95 of 80.8%, while maintaining real-time inference with an average processing time of 7.9 ms per image (126 FPS) on an NVIDIA RTX 3050 GPU. Class-wise and confidence-based analyses confirm consistent segmentation accuracy across vehicle categories, highlighting the robustness of the proposed fusion strategy in handling scale variation, occlusions, and object diversity. In addition, an embedded deployment analysis provides insight into the feasibility and practical constraints of deploying the proposed framework on representative edge platforms. Overall, the proposed dual-YOLOv8 fusion framework achieves an effective balance between segmentation accuracy and computational efficiency, making it suitable for real-time autonomous driving perception on edge ARM/GPU platforms and Advanced Driver Assistance Systems (ADAS).</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_88-Lightweight_Dual_YOLOv8_Instance_Aware_Semantic_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing SCADA Security in Critical Infrastructure: A Multi-Layered Architecture Using IoT-Based Monitoring and AI-Driven Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170187</link>
        <id>10.14569/IJACSA.2026.0170187</id>
        <doi>10.14569/IJACSA.2026.0170187</doi>
        <lastModDate>2026-01-30T11:01:09.5770000+00:00</lastModDate>
        
        <creator>Mohammad Alqahtani</creator>
        
        <creator>Abdulkarim Amin</creator>
        
        <creator>Kyounggon Kim</creator>
        
        <creator>Seokhee Lee</creator>
        
        <subject>SCADA security; industrial IoT; anomaly detection; machine learning; digital forensics; wind turbine telemetry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Supervisory Control and Data Acquisition (SCADA) systems are central to the efficient operation of critical infrastructure such as energy, water, and industrial networks. However, the increased digital integration of SCADA components, especially through Internet of Things (IoT) technologies, has simultaneously broadened their exposure to cyber threats. This project presents a simulated SCADA system architecture designed to model, monitor, and secure real-time industrial telemetry using the open-source platforms Node-RED and ThingsBoard. Leveraging real-world data collected from the Aventa AV-7 wind turbine in Switzerland, the project implements a multilayered architecture comprising edge, fog, and cloud layers, equipped with synchronized databases for integrity comparison and threat forensics. Artificial intelligence (AI) models are integrated into the system to perform anomaly detection using supervised, unsupervised, and deep learning (LSTM) algorithms. Cyberattacks including Distributed Denial of Service (DDoS), false data injection, and replay attacks are simulated to evaluate the system’s resilience. This report details each stage of the project, from data preprocessing and system design to implementation and evaluation, culminating in a set of strategic recommendations for enhancing SCADA security through AI-driven frameworks.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_87-Enhancing_SCADA_Security_in_Critical_Infrastructure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interpreting Multimodal Fake News Detection Models: An Experimental Study of Performance Factors and Modality Contributions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170186</link>
        <id>10.14569/IJACSA.2026.0170186</id>
        <doi>10.14569/IJACSA.2026.0170186</doi>
        <lastModDate>2026-01-30T11:01:09.5470000+00:00</lastModDate>
        
        <creator>Noha A. Saad Eldien</creator>
        
        <creator>Wael H. Gomaa</creator>
        
        <creator>Khaled T. Wassif</creator>
        
        <creator>Hanaa Bayomi</creator>
        
        <subject>Multimodal fake news detection; modality reliability modeling; adaptive fusion; interpretable fusion; lightweight multi-modal models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The widespread dissemination of multimodal misinformation requires models that can reason across textual and visual content while remaining interpretable. However, many existing multimodal fusion approaches implicitly assume uniform modality reliability, providing limited transparency into modality contributions. This study introduces TweFuse-W, a lightweight multimodal framework for fine-grained fake-news detection that reframes multimodal fusion as a modality reliability estimation problem, rather than merely merging modalities or explicitly modeling their interactions. TweFuse-W integrates BERTweet-based textual representations with Swin Transformer visual features using a sample-conditioned, learnable weighted-sum gate operating at the modality level, producing global reliability weights without cross-attention overhead. By explicitly parameterizing modality contributions during inference, the proposed approach provides intrinsic interpretability. Experiments on the six-class Fakeddit dataset show that TweFuse-W achieves a macro-F1 score of 0.838, outperforming simple concatenation (macro-F1 = 0.820). Analysis of the learned modality weights confirms meaningful interpretability, with textual representations dominating in Satire, Misleading, False Connection, and Imposter Content (αT = 0.57–0.62), while visual cues exert greater influence in Manipulated Content (αV = 0.51). Overall, these findings demonstrate that adaptive modality weighting enhances both predictive performance and model transparency, serving as a lightweight and interpretable complementary fusion strategy for multimodal fake-news detection.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_86-Interpreting_Multimodal_Fake_News_Detection_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interpretable Structural Stability Analysis for Long-Term Cognitive IoT Time-Series Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170185</link>
        <id>10.14569/IJACSA.2026.0170185</id>
        <doi>10.14569/IJACSA.2026.0170185</doi>
        <lastModDate>2026-01-30T11:01:09.5130000+00:00</lastModDate>
        
        <creator>Basab Nath</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Structural stability; training-free framework; total variation regularization; rolling statistics; stability index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Long-term heterogeneous time-series data generated by large-scale sensing and environmental monitoring systems exhibit complex temporal behavior that is not fully captured by prediction-driven learning models. While most existing approaches emphasize short-term forecasting accuracy, comparatively little attention has been given to the analysis of long-term structural stability inherent in such data. In this work, we propose a lightweight, training-free analytical framework for quantifying structural stability in long-duration time-series using stability-preserving preprocessing and interpretable temporal statistics. The proposed method combines total variation regularization with rolling statistical analysis to assess the consistency of local temporal behavior relative to global characteristics over extended time horizons. Structural stability is quantified using a simple yet effective stability index that captures deviations between local and global temporal trends. The framework is evaluated using more than two decades of daily environmental observations, including temperature, relative humidity, and precipitation, obtained from the NASA POWER repository for a representative location in Assam, India. Experimental results demonstrate consistent and systematic reductions in the stability index following preprocessing across all variables, indicating improved temporal consistency without structural distortion. Additional robustness analysis across multiple temporal scales confirms that the proposed framework is insensitive to window size selection and preserves long-term structural behavior. These findings suggest that meaningful insights into temporal stability can be obtained without reliance on model training or predictive learning, making the proposed approach suitable for interpretable, resource-efficient analysis of long-term heterogeneous time-series data. Unlike conventional stability descriptors such as variance-based measures or correlation-based consistency metrics, the proposed stability index directly quantifies local-to-global deviation of temporal descriptors across multiple window scales, enabling interpretable and comparable stability assessment without requiring model training or forecasting error baselines.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_85-Interpretable_Structural_Stability_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolution of Image Captioning Models: A Systematic PRISMA Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170184</link>
        <id>10.14569/IJACSA.2026.0170184</id>
        <doi>10.14569/IJACSA.2026.0170184</doi>
        <lastModDate>2026-01-30T11:01:09.4970000+00:00</lastModDate>
        
        <creator>Abdelkrim SAOUABE</creator>
        
        <creator>Khalid TIZRA</creator>
        
        <creator>Doha BANOUI</creator>
        
        <subject>Image captioning; vision-language models; semantic-based models; transformer models; attention mechanism; pre-trained models; GPT-based models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This article presents a systematic review of image captioning approaches conducted according to the PRISMA methodology, ensuring a rigorous, transparent, and reproducible analysis of the literature. The study traces the evolution of image captioning methods, beginning with early machine learning–based techniques that rely on handcrafted visual features, object detection, and template-based or statistical language models. While these approaches established foundational concepts, they are constrained by limited scalability and semantic expressiveness. Specific challenges include difficulty in capturing complex object relationships and inability to generate diverse descriptions for the same image. Image captioning represents a key research problem at the intersection of computer vision and natural language processing, aiming to automatically generate coherent and semantically accurate textual descriptions of visual content. Due to its multimodal nature and practical relevance, it has attracted increasing attention in artificial intelligence research. The review then examines the transition toward deep learning–based models, which have become dominant due to their improved performance. Encoder–decoder architectures are analyzed, highlighting the use of convolutional neural networks for visual representation and recurrent neural networks for caption generation. Attention-based models are discussed for their ability to focus on salient image regions, followed by reinforcement learning–based methods that directly optimize evaluation metrics and semantic-driven architectures that enhance caption relevance. Finally, recent advances based on Transformer architectures and large-scale multimodal pretraining are reviewed, along with key application domains and open challenges for future research in image captioning.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_84-Evolution_of_Image_Captioning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Agriculture in Morocco: An Intelligent Deep Learning Framework for Crop Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170183</link>
        <id>10.14569/IJACSA.2026.0170183</id>
        <doi>10.14569/IJACSA.2026.0170183</doi>
        <lastModDate>2026-01-30T11:01:09.4830000+00:00</lastModDate>
        
        <creator>Hajar Krim</creator>
        
        <creator>Abdelhadi Assir</creator>
        
        <subject>Smart agriculture; deep learning; framework; Morocco; generation green; crop disease; PSO-CNN; precision farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The Moroccan agricultural sector is currently navigating a pivotal transformation driven by the “Generation Green 2020–2030” national strategy, which places a high priority on the digitalization of farming practices to bolster resilience against climate volatility and phytopathological risks. This study proposes a robust Smart Agriculture Framework engineered to automate crop disease diagnosis within mobile environments with limited resources. Unlike generic standard Deep Learning models often unsuited for local specificities, the methodology presented here is specifically tailored to Morocco’s agroecological context, targeting three strategic crops: Tomato (Souss-Massa region), Potato (Gharb plains), and Wheat (Chaouia region). A hybrid intelligent architecture is introduced that integrates a lightweight Convolutional Neural Network (CNN) with Particle Swarm Optimization (PSO-CNN) for autonomous hyperparameter tuning. The proposed framework was validated using a curated dataset of 15,000 images, rigorously augmented to reflect local field conditions, yielding a classification accuracy of 94.7%. This work effectively bridges the gap between theoretical AI architectures and practical Precision Farming, providing a rapid decision support system to minimize yield losses and align with the national objective of establishing a digitally empowered agricultural ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_83-Smart_Agriculture_in_Morocco.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Histogram Equalization and Multi-Scale Retinex Methods for Near-Infrared Image Enhancement in Drowsiness Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170182</link>
        <id>10.14569/IJACSA.2026.0170182</id>
        <doi>10.14569/IJACSA.2026.0170182</doi>
        <lastModDate>2026-01-30T11:01:09.4500000+00:00</lastModDate>
        
        <creator>Moh Hadi Subowo</creator>
        
        <creator>Pulung Nurtantio Andono</creator>
        
        <creator>Guruh Fajar Shidik</creator>
        
        <creator>Heru Agus Santoso</creator>
        
        <subject>Image enhancement; near-infrared; drowsiness detection; histogram equalization; multi-scale retinex; CLAHE; no-reference quality metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Computer vision-based drowsiness detection faces significant challenges in low-light conditions, particularly when using near-infrared (NIR) sensors for driver monitoring systems. Appropriate image enhancement methods are crucial to improve detection accuracy. This study systematically evaluates five enhancement methods: Histogram Equalization (HE), Adaptive Histogram Equalization (AHE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Brightness Preserving Dynamic Histogram Equalization (BPDHE), and Multi-Scale Retinex with Color Restoration (MSRCR). The evaluation was conducted on 4,272 frames from the University of Li&#232;ge (ULg) Multimodality Drowsiness Database (DROZY) using four no-reference metrics: Natural Image Quality Evaluator (NIQE), Perception-based Image Quality Evaluator (PIQE), Shannon Entropy, and Lightness Order Error (LOE). Additional validation was performed by measuring the face detection rate using MediaPipe. The results show that CLAHE achieves an optimal balance with an NIQE of 4.61 (best natural quality), a detection rate of 97.9%, and an LOE of 0.058 (superior structural preservation). MSRCR produces the highest entropy (6.58) but the lowest detection rate (75.6%), indicating structural distortion in the NIR context. Statistical validation using the Wilcoxon signed-rank test and the Friedman test confirmed the significance of the findings (p &lt; 0.05). CLAHE is recommended for NIR surveillance-based drowsiness detection systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_82-Comparison_of_Histogram_Equalization_and_Multi_Scale_Retinex_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DAMCSeg: Dynamic Adaptive Multi-Modal Collaborative Semantic Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170181</link>
        <id>10.14569/IJACSA.2026.0170181</id>
        <doi>10.14569/IJACSA.2026.0170181</doi>
        <lastModDate>2026-01-30T11:01:09.4030000+00:00</lastModDate>
        
        <creator>Qirui Liao</creator>
        
        <creator>Zuohua Ding</creator>
        
        <creator>Hongyun Huang</creator>
        
        <subject>Dynamic Adaptive Multi-Modal Collaborative; Dual-Stage Attention Fusion (DSAF); End-to-End Detection-Segmentation Joint Framework; Lightweight Multi-Modal Fusion (LMMF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>While current semantic segmentation models excel in controlled environments, they often struggle with key challenges such as dynamic multi-modal data, small target recognition, and computational efficiency for edge deployment. Motivated by these limitations, this study explores targeted solutions and presents DAMCSeg (Dynamic Adaptive Multi-modal Collaborative Semantic Segmentation), an innovative framework that introduces advancements across feature fusion, training paradigms, and model efficiency. The core contributions of DAMCSeg include: 1) a Dual-Stage Attention Fusion (DSAF) module that dynamically adjusts multi-branch fusion weights based on scene complexity; 2) an end-to-end joint training framework for object detection and semantic segmentation designed to minimize inter-stage error propagation; and 3) a Lightweight Multi-Modal Fusion (LMMF) module that efficiently integrates multi-source data with low computational overhead. To rigorously evaluate the proposed method’s effectiveness against these specific challenges, extensive experiments are conducted on mainstream benchmark datasets. The results demonstrate that DAMCSeg achieves high accuracy and operational efficiency, effectively addressing critical issues in dynamic scene adaptation, complex target segmentation, and edge device deployment. This provides a practical and viable solution for semantic segmentation in demanding applications such as autonomous driving and medical image analysis.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_81-DAMCSeg_Dynamic_Adaptive_Multi_Modal_Collaborative_Semantic_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inter-Robots Position Estimation Using UWB Positioning Devices for Distributed Cooperative Control of Multiple Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170180</link>
        <id>10.14569/IJACSA.2026.0170180</id>
        <doi>10.14569/IJACSA.2026.0170180</doi>
        <lastModDate>2026-01-30T11:01:09.3730000+00:00</lastModDate>
        
        <creator>Airi Kojima</creator>
        
        <creator>Kohei Yamagishi</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Mobile robots; distributed cooperative control; mutual positioning system; relative position estimation; UWB positioning devices; geometric structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Ultra-wide band (UWB)-based positioning methods for static environments have been continuously improved; however, many existing approaches rely on fixed reference nodes, and methods for directly computing relative positions among mobile units have not been sufficiently investigated. This paper presents a relative positioning approach for multi-robot systems using UWB wireless communication within a distributed cooperative control framework. In the proposed approach, multiple UWB positioning devices are arranged in regular polyhedral configurations to improve the uniformity of ranging accuracy. Robot coordinates are estimated using a nonlinear least-squares optimization method formulated from a system of simultaneous distance equations, enabling mutual relative position estimation among robots. Simulation experiments were conducted to evaluate estimation accuracy and error characteristics under different geometric configurations. Four configurations (square, tetrahedron, regular tetrahedron, and regular octahedron) were considered, and their error magnitudes and axis-wise distributions were compared. The simulation results indicate that the proposed configuration achieves lower estimation errors than the other configurations evaluated. Based on these findings, experimental verification was performed, and the observed trends were consistent with the simulation results. This work provides a systematic investigation of a mutual positioning system that enables robots to estimate their positions with respect to one another without relying on fixed landmarks. Unlike existing methods, our approach enables the determination of relative positions between robots based on distances measured by each robot. The proposed approach is expected to be applicable to autonomous decentralized control in multi-robot systems operating in static environments.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_80-Inter_Robots_Position_Estimation_Using_UWB_Positioning_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Students’ Cognitive Profiles Using Explainable Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170179</link>
        <id>10.14569/IJACSA.2026.0170179</id>
        <doi>10.14569/IJACSA.2026.0170179</doi>
        <lastModDate>2026-01-30T11:01:09.3430000+00:00</lastModDate>
        
        <creator>Sonia Corraya</creator>
        
        <creator>Fahmid Al Farid</creator>
        
        <creator>M Shamim Kaiser</creator>
        
        <creator>Shamim Al Mamun</creator>
        
        <creator>Jia Uddin</creator>
        
        <creator>Hezerul Abdul Karim</creator>
        
        <subject>Explainable AI; SHAP feature selection; machine learning; innate intelligence prediction; cognitive profiles; student diversity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Conventional educational strategies fail to comprehend and leverage the diversity of learners’ cognitive strengths and overlook their innate intelligence, a fundamental driver of learning. To address this gap, this study proposes a machine learning (ML) framework to predict students’ overall innate intelligence scores, independent of subject domain or exam structure, using the Learning Meta-Learning dataset, which includes data from 1,021 university students. Seven regression models, including Decision Tree, Random Forest, Extra Trees, Gradient Boosting, Extreme Gradient Boosting, LightGBM, and CatBoost, along with their ensembles, have been trained and evaluated. The Explainable Artificial Intelligence (XAI) technique SHAP is used to select important features from among the 54 available, combined with recursive feature elimination, to further enhance model accuracy and interpretability. Compared with the conventional method, the proposed SHAP-based ML approach is lightweight, trained with selected features, and shows improved accuracy. The accuracy of CatBoost without XAI is 98.32%, whereas with XAI it reaches 98.53% using only 35 of the 54 features. These findings suggest that integrating a learners’ cognitive profile prediction model can aid the design of personalized educational strategies, moving beyond one-size-fits-all approaches.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_79-Predicting_Students_Cognitive_Profiles_Using_Explainable_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust RT-DETR-Based Method for Complex Self-Service Buffet Scene Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170178</link>
        <id>10.14569/IJACSA.2026.0170178</id>
        <doi>10.14569/IJACSA.2026.0170178</doi>
        <lastModDate>2026-01-30T11:01:09.3100000+00:00</lastModDate>
        
        <creator>Zhengwang Xu</creator>
        
        <creator>Hongyang Xiao</creator>
        
        <creator>Zhou Huang</creator>
        
        <subject>RT-DETR; lightweight object detection; multi-scale feature fusion; attention enhancement; buffet-scene perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Object detection in buffet-style environments is highly challenging due to densely stacked tableware, frequent occlusions, strong illumination reflections, and substantial visual similarity across categories, all of which undermine the robustness of existing detectors. To address these issues, this paper proposes an improved real-time detection transformer–based model with a lightweight design while significantly enhancing multi-scale feature representation. First, a re-parameterized stem module is introduced to strengthen shallow texture extraction with negligible computational overhead. Second, a dynamic multi-kernel refinement module is developed to enrich directional texture modeling and cross-scale semantic aggregation. Furthermore, a heterogeneous-kernel feature pyramid network is constructed by integrating adaptive multi-scale fusion, multi-kernel fusion nodes, and a lightweight upsampling strategy to improve cross-level feature consistency and mitigate aliasing caused by conventional upsampling. Experimental results on a self-constructed buffet-scene dataset demonstrate that the proposed method improves mAP50 and mAP50:95 by 2.6% and 1.9%, respectively, while reducing parameters and GFLOPs by 42.6% and 42.3%, and increasing inference speed to 103.1 FPS. On the DOTA v1.0 and SkyFusion datasets, small-target detection performance is also improved. The substantial reductions in computation and model size further confirm the effectiveness and practical value of the proposed approach for complex catering scenarios.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_78-A_Robust_RT_DETR_Based_Method_for_Complex_Self_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Calibrated Residual Intelligence for Intra-Procedural CBCT–Based Collateral Grading in Ischemic Stroke</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170177</link>
        <id>10.14569/IJACSA.2026.0170177</id>
        <doi>10.14569/IJACSA.2026.0170177</doi>
        <lastModDate>2026-01-30T11:01:09.2800000+00:00</lastModDate>
        
        <creator>Kazi Ashikur Rahman</creator>
        
        <creator>Nur Hasanah Ali</creator>
        
        <creator>Ahmad Sobri Muda</creator>
        
        <creator>Nur Asyiqin Amir Hamzah</creator>
        
        <creator>Noradzilah Ismail</creator>
        
        <subject>Collateral circulation; brain stroke; ischemic stroke; deep learning; ResNet-18</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Brain stroke occurs when the brain’s blood supply is disrupted, leading to oxygen deprivation and rapid neuronal death. Ischemic stroke, the focus of this study, accounts for most cases and is strongly influenced by collateral circulation, a network of alternative vessels that stabilize perfusion when a primary artery is obstructed. Collateral status determines the extent of salvageable tissue and is typically graded manually using modalities such as magnetic resonance angiography (MRA), computed tomography (CT), and cone-beam computed tomography (CBCT), a process prone to subjectivity and inter-observer variability. This study proposes a ResNet-18–based deep learning framework for automated three-class classification of collateral circulation (Good, Moderate, Poor) from intra-procedural CBCT scans. A curated dataset of 45 patient cases (22,861 DICOM slices), annotated by an expert neuroradiologist, was preprocessed with patient-wise partitioning, normalization, and augmentation. The model achieved a validation accuracy of 88.8%, a micro-averaged precision–recall score of 0.947, and a macro-averaged ROC AUC of 0.958. Calibration analysis confirmed well-aligned probability estimates, while most misclassifications occurred in the Moderate class, reflecting inherent clinical ambiguity. Compared with prior CBCT studies using shallower architectures, the proposed framework demonstrates substantially higher accuracy, improved calibration, and enhanced robustness. These findings highlight the feasibility of ResNet-18 applied to CBCT imaging as a reliable and efficient tool to support neuroradiologists in collateral grading during hyperacute stroke management.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_77-Calibrated_Residual_Intelligence_for_Intra_Procedural_CBCT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Artificial Intelligence for Real-Time Fresh Produce Identification in Retail Weighing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170176</link>
        <id>10.14569/IJACSA.2026.0170176</id>
        <doi>10.14569/IJACSA.2026.0170176</doi>
        <lastModDate>2026-01-30T11:01:09.2630000+00:00</lastModDate>
        
        <creator>Shi Han Teo</creator>
        
        <creator>Jun Kit Chaw</creator>
        
        <subject>Real-Time object detection; Edge Artificial Intelligence (Edge AI); transfer learning; YOLO algorithm; nvidia jetson; fresh produce recognition; TensorRT optimization; model quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Real-time recognition of loose fresh produce is a key requirement for intelligent retail weighing systems, enabling automated replacement of or assistance to manual PLU-based item selection. However, the deployment performance of recent YOLO architectures on embedded edge platforms such as the NVIDIA Jetson Xavier NX remains insufficiently studied in practical retail scenarios. This study aims to benchmark recent YOLO architectures for real-time fresh produce recognition on embedded edge devices. This work presents an Edge–AI retail weighing system that recognizes Malaysian fresh produce using YOLOv9, YOLOv10, and YOLOv11 models on the Jetson Xavier NX. A domain-specific dataset of 8 450 images across 26 classes was created by merging ImageNet and Roboflow sources and applying quality filtering and unified preprocessing. Each model was fine-tuned and optimized with TensorRT at FP16 and INT8 precision. Transfer learning improved accuracy across all models; YOLOv11-Large achieved the highest mAP@0.5 of ≈ 0.897 but at a reduced frame rate, while the mid-sized YOLOv10-M delivered an mAP@0.5 of ≈ 0.890 with near-real-time inference performance. Inference analysis shows that pre- and post-processing add only a few milliseconds per frame yet become proportionally significant as inference speeds increase; YOLOv11’s Non-Maximum Suppression (NMS) head introduces notable latency relative to YOLOv10’s NMS-free design. Quantized YOLOv10-M and YOLOv10-N sustain ≈ 14–19 FPS, offering the best balance between accuracy and speed. Qualitative tests on market footage confirm robust detection, indicating that these optimized models enable accurate, low-latency produce identification for intelligent retail weighing.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_76-Edge_Artificial_Intelligence_for_Real_Time_Fresh_Produce_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Validation of Contextual Parameters and Comparative Analysis with State-of-the-Art in CARS Recommendation Systems in Ubiquitous Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170175</link>
        <id>10.14569/IJACSA.2026.0170175</id>
        <doi>10.14569/IJACSA.2026.0170175</doi>
        <lastModDate>2026-01-30T11:01:09.2470000+00:00</lastModDate>
        
        <creator>Pranali Gajanan Chavhan</creator>
        
        <creator>Ritesh Vamanrao Patil</creator>
        
        <subject>Context-aware recommendation systems; multi-modal recommendation; transformer-based models; experimental validation; contextual parameter modeling; user behavior modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Consumer behavior prediction plays a vital role in e-commerce, marketing, and Context-Aware Recommendation Systems (CARS), allowing businesses to get closer to customers and better understand their needs. Using an Amazon consumer dataset, this work performs a comparative analysis of machine learning techniques, including Logistic Regression, Decision Tree, Random Forest, SVM, and KNN, to determine which is most effective in predicting consumer behavior. In addition, a new algorithm that combines feature selection and optimization is proposed and implemented to enhance prediction accuracy. The project aims to create data-driven decision systems powered by an optimized machine learning framework for customer analytics.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_75-Experimental_Validation_of_Contextual_Parameters_and_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review and Taxonomy of Privacy-Preserving Blockchain Consensus Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170174</link>
        <id>10.14569/IJACSA.2026.0170174</id>
        <doi>10.14569/IJACSA.2026.0170174</doi>
        <lastModDate>2026-01-30T11:01:09.2170000+00:00</lastModDate>
        
        <creator>Sahnius Usman</creator>
        
        <creator>Sharifah Khairun Nisa Habib Elias</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <creator>Ahmad Akmaluddin Bin Mazlan</creator>
        
        <subject>Blockchain; Byzantine Fault Tolerance; consensus mechanisms; privacy-aware consensus; privacy preserving; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Blockchain systems rely on consensus mechanisms to validate transactions and coordinate distributed participants, making consensus a critical layer that shapes security, trust, and privacy. Although blockchain is increasingly applied in privacy-sensitive domains such as healthcare, smart cities, and the Internet of Things, existing review studies primarily examine security or performance and rarely analyse how consensus-level design properties influence privacy risks. As a result, privacy is often treated as a peripheral enhancement rather than a core consensus concern. This study presents a systematic literature review that examines blockchain consensus mechanisms from a privacy-focused perspective. The review aims to identify which consensus classes are most commonly used in privacy-preserving blockchain systems, what privacy limitations are reported across different consensus designs, and how privacy-preserving techniques are integrated into consensus mechanisms. The review follows PRISMA and Kitchenham-guided procedures, using structured search and screening of peer-reviewed journal articles from major academic databases, followed by relevance and quality assessment. A total of 72 peer-reviewed journal articles were synthesised using taxonomy-based and thematic analysis. The proposed taxonomy explicitly classifies studies by consensus mechanism class, privacy limitation, and integration level, enabling structured comparison beyond existing surveys. The findings show that Byzantine Fault Tolerant (BFT)-based consensus mechanisms are most frequently adopted in privacy-preserving blockchain applications. However, privacy challenges such as identity exposure and communication pattern leakage remain common and are closely linked to consensus design properties. In addition, most studies rely on external privacy mechanisms rather than embedding privacy directly into the consensus layer. This review contributes a structured taxonomy, clear analytical insights, and practical guidance that support the development and evaluation of privacy-aware blockchain consensus mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_74-A_Systematic_Review_and_Taxonomy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Based Parkinson’s Disease Diagnosis and Prediction with Therapeutic Game Design for Engagement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170173</link>
        <id>10.14569/IJACSA.2026.0170173</id>
        <doi>10.14569/IJACSA.2026.0170173</doi>
        <lastModDate>2026-01-30T11:01:09.1870000+00:00</lastModDate>
        
        <creator>Marwah Muwafaq Almozani</creator>
        
        <creator>H&#252;seyin Demirel</creator>
        
        <subject>Parkinson’s disease; Artificial Intelligence (AI); mobile application; early diagnosis; machine learning algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Parkinson’s disease (PD) is a progressive neurodegenerative disease that impacts motor and cognitive functions; early diagnosis and management are essential to enhance patient outcomes. This study proposes the implementation of Artificial Intelligence (AI)-based diagnostic and predictive algorithms, along with therapeutic game design, to assist patients in improving the management and treatment of PD. Existing approaches to PD diagnosis rely heavily on clinical observation of symptoms and on traditional imaging methods, which may be subjective, time-consuming, and prone to human error. Moreover, conventional interventions are not consistently engaging or tailored to the patient, and hence treatment adherence is not optimal. To overcome these difficulties, we present an AI framework for PD (PD-AI) that leverages machine learning algorithms to enhance early diagnosis and predict disease progression. The system is implemented as a mobile app that integrates AI with therapeutic gaming, providing real-time symptom tracking based on sensor readings (e.g., tremors, motor skills) and interactive therapeutic games that keep patients engaged. The suggested approach enhances early diagnosis rates, provides a tailored intervention, supports continuous monitoring of symptoms, and encourages patients to follow their treatment actively. An active, efficient, and convenient management strategy is facilitated by data analysis based on frequent examinations and feedback via the app. Preliminary results indicate that the PD-AI model improves diagnostic accuracy and patient compliance with treatment regimens, demonstrating its effectiveness for both medical experts and patients with PD.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_73-AI_Based_Parkinsons_Disease_Diagnosis_and_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Carbon Footprint Calculation System Model for Net Zero Emission in the Manufacturing Industry Based on GHG Protocol and DEFRA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170172</link>
        <id>10.14569/IJACSA.2026.0170172</id>
        <doi>10.14569/IJACSA.2026.0170172</doi>
        <lastModDate>2026-01-30T11:01:09.1530000+00:00</lastModDate>
        
        <creator>Dinar Rahayu</creator>
        
        <subject>Carbon footprint; net zero emission; GHG protocol; DEFRA; decision support system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Manufacturing industries play a critical role in achieving Net Zero emission targets due to their significant contribution to greenhouse gas emissions. However, existing carbon footprint calculation practices often apply the GHG Protocol and emission factor standards independently, resulting in fragmented methodologies and limited decision-support capabilities. This study develops a carbon footprint calculation system model that integrates GHG Protocol emission scope classification with DEFRA emission conversion factors, supported by a decision-support framework for Net Zero emission planning. Using a Design Science Research (DSR) approach, the study produces a conceptual system model that structures activity data, emission scope classification, and standardized carbon calculation logic into a unified framework. The proposed model enables transparent aggregation of emissions across Scope 1, Scope 2, and Scope 3, while the decision-support framework translates calculation results into decision variables, scenario-based analysis, and Net Zero roadmap formulation. The system functions as a decision-support system that assists manufacturing organizations in interpreting carbon footprint results and supports Net Zero emission planning. The findings demonstrate that integrating standardized carbon accounting methodologies within a system-oriented design enhances methodological coherence, traceability, and strategic relevance for sustainability decision-making in the manufacturing sector.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_72-An_Integrated_Carbon_Footprint_Calculation_System_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Blood Supply Chain Prediction Based on a Novel Hybrid Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170171</link>
        <id>10.14569/IJACSA.2026.0170171</id>
        <doi>10.14569/IJACSA.2026.0170171</doi>
        <lastModDate>2026-01-30T11:01:09.1230000+00:00</lastModDate>
        
        <creator>Chaimae Mouncif</creator>
        
        <creator>Mohamed Amine Ben RABIA</creator>
        
        <creator>Adil Bellabdaoui</creator>
        
        <subject>Blood supply chain; mobile blood collection units; spatio-temporal prediction; hybrid machine learning; decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Blood supply chains constitute a critical yet often overlooked component of modern public health systems, as they coordinate donors, collection centers, hospitals, and patients. One of the major operational challenges lies in planning the deployment of mobile blood collection units under highly variable and uncertain spatio-temporal demand. In this context, this study proposes a novel hybrid machine learning framework for predicting donor return potential and supporting location and time selection decisions for mobile blood drives. The proposed approach combines Support Vector Regression (SVR) and Light Gradient Boosting Machine (LGBM) through a dynamic, context-aware weighting function designed to capture both temporal regularities and nonlinear spatial heterogeneity in donor behavior. The model is evaluated using real-world data collected from a blood collection center operating multiple mobile units. Experimental results demonstrate that the proposed hybrid framework consistently outperforms its individual components, achieving R&#178; values of up to 83% for certain locations, together with low Mean Absolute Error (MAE) and Mean Squared Error (MSE). These results confirm the robustness and stability of the proposed approach. Beyond predictive performance, the model is intended to be integrated into a decision-support system to help managers optimize logistical resources and improve the strategic planning of mobile blood collection campaigns. This work contributes to the emerging field of data-driven blood supply chain optimization by introducing a spatio-temporal, hybrid predictive core specifically designed for operational decision support.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_71-Advancing_Blood_Supply_Chain_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MQTT Broker Congestion Mitigation Using Huffman Deep Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170170</link>
        <id>10.14569/IJACSA.2026.0170170</id>
        <doi>10.14569/IJACSA.2026.0170170</doi>
        <lastModDate>2026-01-30T11:01:09.0930000+00:00</lastModDate>
        
        <creator>Ammar Nasif</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Yousra Abudaqqa</creator>
        
        <subject>Compression; network congestion; connection overflow; deep learning; IoT; broker congestion; IoT network; sensor; latency reduction; publishers; broker; MQTT; power consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study presents an improved MQTT protocol designed to address broker congestion and connection overflow in large-scale IoT networks. The proposed method integrates Huffman Deep Compression (HDC) at the publisher side to mitigate network traffic and latency. Unlike standard MQTT, which suffers from broker overload, our approach applies efficient data compression on resource-constrained sensor devices prior to publishing. The proposed approach was validated on a real-world air pollution dataset collected from the Tanjung Malim monitoring station in Malaysia, using ESP8266-based IoT nodes. Experimental results demonstrated that broker congestion was reduced by 84.26% for QoS 0 and 79.6% for QoS 1, significantly outperforming both standard MQTT and the state-of-the-art MRT-MQTT (58% and 45%, respectively). The method attained a high compression ratio of 2.62, which directly led to a dramatic reduction in current consumption from 2,664,864 to 63,216 mA (QoS 0) and from 3,155,760 to 49,168 mA (QoS 1). This substantial saving in current consumption contributes to extended device lifetime and enhanced energy efficiency. The findings highlight the potential of this enhanced protocol to support massive IoT deployments by minimizing network overhead at the broker.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_70-MQTT_Broker_Congestion_Mitigation_Using_Huffman_Deep_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Misinformation Detection on Twitter with a Content-Based Multi-Lingual Bert Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170169</link>
        <id>10.14569/IJACSA.2026.0170169</id>
        <doi>10.14569/IJACSA.2026.0170169</doi>
        <lastModDate>2026-01-30T11:01:09.0600000+00:00</lastModDate>
        
        <creator>Krishna Kumar</creator>
        
        <creator>Akila Venkatesan</creator>
        
        <subject>Misinformation detection; multi-lingual BERT; content-based attention mechanism; syntactic-semantic similarity; explainable AI; LIME interpretability; COVID-19 misinformation; cross-lingual generalization; twitter; adversarial robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid spread of misinformation during global crises like COVID-19 has severely impacted public health, governance, and social trust. Social media platforms such as Twitter have amplified this issue, underscoring the urgent need for multilingual, real-time misinformation detection. The proposed Content-based Attention Multi-lingual BERT (CA-BERT) model addresses this challenge by enhancing the standard BERT framework with a content-based attention mechanism that assigns adaptive weights to semantically important tokens often linked to false or misleading content. This attention enables deeper contextual understanding of misinformation cues across diverse linguistic contexts. Using the LIME interpretability method, CA-BERT provides transparent explanations of its predictions, supporting accountable decision-making for policymakers and content moderators. Leveraging multilingual BERT (mBERT) allows the model to handle multiple languages simultaneously, ensuring robust cross-lingual applicability. Evaluations using a balanced multilingual tweet dataset on COVID-19 topics demonstrate that CA-BERT outperforms baseline models such as RoBERTa, DANN, and HANN, achieving 96% recall for true information and 95% for misinformation in English, with F1 Scores of 93% and 92%, respectively. The model maintains strong cross-lingual generalization, especially for Dutch (75% F1) and Spanish (72% F1), with slightly lower performance for Arabic due to tokenization and dialectal complexity. These results highlight CA-BERT’s adaptability while underscoring the need for improved handling of low-resource, morphologically rich languages. Future work involves region-specific preprocessing, cross-lingual transfer learning, and multimodal misinformation detection, aiming to transform CA-BERT into a core component of multilingual real-time disinformation monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_69-Enhancing_Misinformation_Detection_on_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Impact of Gamified Artificial Intelligence–Driven English Vocabulary Learning Systems on Learner Retention and Motivation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170168</link>
        <id>10.14569/IJACSA.2026.0170168</id>
        <doi>10.14569/IJACSA.2026.0170168</doi>
        <lastModDate>2026-01-30T11:01:09.0300000+00:00</lastModDate>
        
        <creator>Gogineni Aswini</creator>
        
        <creator>Madhu Munagala</creator>
        
        <creator>V. Saranya</creator>
        
        <creator>Keerthana R</creator>
        
        <creator>Aseel Smerat</creator>
        
        <creator>Vinisha Sumra</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>AI-driven gamified learning; adaptive educational systems; English vocabulary acquisition; learner motivation and engagement; vocabulary retention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The growing development of digital learning platforms has increased demand for gamified and artificial intelligence-based methods of enhancing English vocabulary learning. However, existing studies often treat gamification and AI as loosely paired components, relying on static game mechanics or post-hoc analytics that limit personalization, adaptability, and long-term learning impact. To address these limitations, this study proposes the Gamified AI-Driven Vocabulary Retention and Motivation Enhancer (GAI-VRME), an adaptive learning framework that integrates machine-learning–based learner modeling, real-time difficulty calibration, and adaptive gamification strategies. In contrast to previous systems, GAI-VRME dynamically regulates task complexity, feedback frequency, and reward sequencing according to each learner's performance and motivational state, and is thus continually personalized as learning progresses. The framework was implemented and empirically assessed using Python, TensorFlow, and Jupyter Notebook with the Teaching-Learning Gamification Dataset from Mendeley Data. A mixed-methods analysis was conducted, combining paired t-tests for vocabulary retention with sentiment-analysis-based motivation modelling. The experimental outcomes show that GAI-VRME achieves substantially higher predictive accuracy, vocabulary retention, and learner motivation than traditional gamified systems. These findings provide empirical evidence that deeply integrated AI-driven adaptive gamification, jointly optimizing cognitive retention and affective engagement, offers a scalable and pedagogically robust solution for modern digital vocabulary learning environments.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_68-Exploring_the_Impact_of_Gamified_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Skin Cancer Stage Diagnostic Approach Using Customized Inception V3 Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170167</link>
        <id>10.14569/IJACSA.2026.0170167</id>
        <doi>10.14569/IJACSA.2026.0170167</doi>
        <lastModDate>2026-01-30T11:01:08.9970000+00:00</lastModDate>
        
        <creator>Adnan Afroz</creator>
        
        <creator>Shaheena Noor</creator>
        
        <creator>Muhammad Umar Khan</creator>
        
        <creator>Shakil Ahmed Bashir</creator>
        
        <subject>Melanoma; Basal Cell Carcinoma (BCC); Squamous Cell Carcinoma (SCC); Inceptionv3; Bayesian tuning; skin cancer classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Among skin cancer types, Melanoma, Basal Cell Carcinoma (BCC), and Squamous Cell Carcinoma (SCC) have a significant impact on world health. Although deep learning offers promising potential for dermatological categorization, only limited disease groups have benefited, since most studies focus on particular illnesses rather than covering the full range of human skin conditions. Computerized analysis has been used in the past to identify cancer in skin lesion images, but challenges persist, mainly because the multiple forms, textures, and sizes of lesions complicate skin cancer classification. This research paper presents a Convolutional Neural Network (CNN) model customized to our requirements, built on a pre-trained InceptionV3 model with Bayesian hyperparameter tuning. Using the ISIC 2024 and HAM10000 datasets, the main objective is to classify skin lesions and differentiate between malignant Melanoma, BCC, and SCC. The customized model effectively addresses variations in lesion appearance, leading to more accurate predictions, while Bayesian hyperparameter tuning improves identification accuracy and decreases computational cost. The proposed model performed strongly on the combined datasets, achieving a combined average accuracy of 95.1%, a precision of 94.42%, a sensitivity of 97.3%, a specificity of 98.8%, and an F1-score of 95.7%. These results demonstrate that the model significantly outperformed existing techniques and provided more accurate and consistent diagnosis of pigmented skin lesions than current standards.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_67-An_Efficient_Skin_Cancer_Stage_Diagnostic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Business Cybersecurity Through Integrated Defense and Incident Response: A Comparative Decision Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170166</link>
        <id>10.14569/IJACSA.2026.0170166</id>
        <doi>10.14569/IJACSA.2026.0170166</doi>
        <lastModDate>2026-01-30T11:01:08.9830000+00:00</lastModDate>
        
        <creator>Jurgen Mecaj</creator>
        
        <subject>Cyber defense; incident response; business continuity; NIST CSF 2.0; SP 800-61r3; ISO/IEC 27001:2022; CIS Controls v8.1; MITRE ATT&amp;CK; Zero Trust; EDR; SOAR; MCDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Business operations increasingly depend on digital workflows, hybrid infrastructures, and third-party ecosystems, making cybersecurity incidents a direct business continuity and governance problem rather than solely a technical concern. This paper proposes an integrated cyber defense and defense-to-response decision framework for organizations seeking to reduce exposure to external attacks and unauthorized access while improving incident detection, containment, and recovery. The framework aligns governance and control selection with NIST Cybersecurity Framework (CSF) 2.0, operational incident response considerations with NIST SP 800-61 Revision 3, control requirements with ISO/IEC 27001:2022, prioritized safeguards with CIS Controls v8.1, and adversary-behavior mapping with the MITRE ATT&amp;CK Enterprise Matrix. We define an evaluation model that combines 1) coverage mapping across prevent-detect-respond-recover functions, 2) multi-criteria decision analysis (MCDA) for cost, complexity, and risk reduction trade-offs, and 3) a playbook-oriented response design for high-frequency attack paths relevant to business environments. A worked comparative example demonstrates how three strategy bundles (traditional perimeter controls, defense-in-depth with SIEM, and a Zero Trust + EDR + SOAR approach) can be ranked using weighted criteria and incident lifecycle metrics. The paper concludes with an implementation roadmap and measurement plan to convert the framework into an evidence-based program that supports executive decision-making and continuous improvement.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_66-Enhancing_Business_Cybersecurity_Through_Integrated_Defense_and_Incident_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Community-Aware Influence Maximization for Suppressing Cryptocurrency Scam Misinformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170165</link>
        <id>10.14569/IJACSA.2026.0170165</id>
        <doi>10.14569/IJACSA.2026.0170165</doi>
        <lastModDate>2026-01-30T11:01:08.9500000+00:00</lastModDate>
        
        <creator>Naglaa Mostafa</creator>
        
        <creator>Hatem Abdelkader</creator>
        
        <creator>Asmaa H.Ali</creator>
        
        <subject>Influence maximization; misinformation suppression; cryptocurrency scams; OneCoin; diffusion models; Leiden; community-aware seeding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Cryptocurrency fraud campaigns often rely on large-scale social-media diffusion to recruit victims, normalize false claims, and coordinate multi-level marketing behavior. This study examines the dynamics of the OneCoin scam and proposes an influence-maximization (IM)-driven workflow for identifying high-impact accounts whose intervention can reduce future misinformation diffusion. A directed Twitter engagement network is constructed from retweet/reply interactions and studied, and the accounts that should be prioritized for intervention to reduce the reach of future scam-promoting misinformation are identified. We evaluate six seed-selection strategies: Degree, Betweenness, PageRank, k-core, CELF (lazy greedy), and Reverse Influence Sampling (RIS) under the classical Independent Cascade (IC) and Linear Threshold (LT) diffusion models, using a weighted-cascade parameterization when ground-truth transmission probabilities are unavailable. Across the tested seed budgets, CELF achieves the highest expected spread, but at the highest computational cost. At the largest seed budget, Degree is effectively tied with CELF (within 0.09% under LT and 1.4% under IC), indicating a hub-dominated engagement structure in which simple reach-based heuristics can be highly competitive. RIS provides a strong quality–efficiency trade-off, remaining within approximately 9.7% (LT) and 9.5% (IC) of CELF while requiring substantially less computation. We further introduce a community-aware variant using Leiden partitions and proportional seed allocation to improve cross-community coverage; at larger budgets, this improves methods sensitive to seed over-concentration, increasing LT spread by about 9.8% for k-core and 8.6% for RIS. Overall, the results quantify practical trade-offs between spread and runtime for deployable suppression workflows and show when community-aware planning better aligns with the heterogeneous structure of scam recruitment ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_65-Community_Aware_Influence_Maximization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Human-Machine Interaction Using Static Hand Gestures for a Gestural Calculator System with DNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170164</link>
        <id>10.14569/IJACSA.2026.0170164</id>
        <doi>10.14569/IJACSA.2026.0170164</doi>
        <lastModDate>2026-01-30T11:01:08.9200000+00:00</lastModDate>
        
        <creator>H. Abdelmoumene</creator>
        
        <creator>L. Meddeber</creator>
        
        <creator>O. Ghali</creator>
        
        <creator>Y. Amellal</creator>
        
        <subject>Hand gesture recognition; gestural calculator interface; DNN; MediaPipe; natural human-computer interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Hand gesture recognition (HGR) represents a real challenge for natural human-computer interaction, which aims to transform the naturalness of traditional interfaces by allowing intuitive control of various devices without a keyboard or mouse. Despite the availability of frameworks such as MediaPipe, which enable robust detection and tracking, the major challenge remains interpreting gestures made with both hands in a natural operational setting. In this regard, this study presents a real-time gestural calculator that combines gestures made with one or both hands and addresses the problem of gesture interpretation in arithmetic operations. By leveraging MediaPipe to extract the 21 hand landmarks, an optimized dense neural network (DNN) was developed that recognizes 13 distinct static gestures: six gestures per hand (representing counts 0 to 5) that combine to encode all digits from 0 to 9, five mathematical symbols, and two specialized commands designed explicitly for control management. Even with a standard webcam, the model achieved 91% accuracy on a reduced dataset of one- and two-hand gestures. Beyond gesture recognition, this work demonstrates how these gestures can be integrated into a fluid sequence for performing arithmetic operations.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_64-Natural_Human_Machine_Interaction_Using_Static_Hand_Gestures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Accuracy of Alzheimer&#39;s Detection Using Machine Learning and Intelligent Feature Selection Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170163</link>
        <id>10.14569/IJACSA.2026.0170163</id>
        <doi>10.14569/IJACSA.2026.0170163</doi>
        <lastModDate>2026-01-30T11:01:08.8900000+00:00</lastModDate>
        
        <creator>Suci Mutiara</creator>
        
        <creator>Siti Nur Laila</creator>
        
        <creator>Deppi Linda</creator>
        
        <creator>Sri Lestari</creator>
        
        <creator>Jean Antoni</creator>
        
        <creator>Christian Petrus Silalahi</creator>
        
        <subject>Machine learning; Alzheimer; Random Forest (RF); logistic regression; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Alzheimer’s disease is a progressive neurodegenerative disorder for which early detection remains a significant challenge due to the complexity of clinical features and the high dimensionality of medical data. This study aims to improve the accuracy and reliability of Alzheimer’s disease detection by evaluating the performance of multiple machine learning algorithms integrated with intelligent feature selection strategies. Five classification models (Decision Tree, Na&#239;ve Bayes, Random Forest, Logistic Regression, and Deep Learning) were investigated under two experimental scenarios: without feature selection and with feature selection using Recursive Feature Elimination, Binary Particle Swarm Optimization, and Variance Threshold. Model performance was evaluated using K-fold cross-validation based on accuracy, precision, recall, and F1-score metrics. The results demonstrate that feature selection consistently enhances classification performance, particularly for conventional machine learning models such as Random Forest and Logistic Regression. Although the Deep Learning model achieves competitive accuracy, its reduced precision and F1-score indicate limitations when applied to reduced feature spaces. These findings highlight the importance of incorporating appropriate feature selection techniques to address data complexity and improve the effectiveness of early Alzheimer’s disease detection.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_63-Optimizing_the_Accuracy_of_Alzheimers_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Based Audit Trails: Improving Transparency and Fraud Detection in Digital Accounting Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170162</link>
        <id>10.14569/IJACSA.2026.0170162</id>
        <doi>10.14569/IJACSA.2026.0170162</doi>
        <lastModDate>2026-01-30T11:01:08.8570000+00:00</lastModDate>
        
        <creator>Neni Maryani</creator>
        
        <creator>Munawar Muchlish</creator>
        
        <creator>Roza Mulyadi</creator>
        
        <creator>Nurhayati Solehah</creator>
        
        <subject>Blockchain; audit trails; fraud detection; digital accounting; transparency; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Blockchain technology has emerged as a transformative innovation in digital accounting, offering robust mechanisms to enhance auditability, data integrity, and fraud prevention. This study examines how blockchain-based audit trails can improve transparency and strengthen fraud detection within modern accounting information systems. Adopting a conceptual–analytical research design supported by secondary empirical evidence, the study analyzes data drawn from recent peer-reviewed case studies, industry reports, and documented implementations of permissioned blockchain systems in auditing and financial reporting contexts. The analysis focuses on core blockchain characteristics—immutability, decentralization, cryptographic security, real-time verification, and transaction mining—and evaluates their implications for audit processes and governance mechanisms. The results highlight that blockchain-enabled audit trails allow continuous access to verified transactional data, significantly improving early detection of anomalies, reducing opportunities for data manipulation, and enhancing the reliability of financial reporting. The study further demonstrates that permissioned blockchain architectures, combined with smart contract automation, can operationally support real-time audit logging and procedural compliance while minimizing human error. However, empirical insights also reveal critical implementation challenges, including interoperability constraints, scalability issues, regulatory uncertainty, and organizational resistance. This research makes a conceptual and methodological contribution by developing an integrated blockchain-based auditing framework that systematically links technological features with audit objectives and fraud prevention mechanisms. Unlike prior descriptive reviews, this study explicitly positions its framework against existing auditing and blockchain literature, clarifying how blockchain-based audit trails extend current auditing theory and provide practical design implications for enterprise accounting systems. Overall, the findings advance scholarly understanding of blockchain-enabled auditing and provide actionable insights for auditors, system designers, and regulators seeking to implement next-generation digital audit infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_62-Blockchain_Based_Audit_Trails.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Image Processing Filters for Improving Visibility of Fine Dentoalveolar Structures in Dental Cone-Beam Computed Tomography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170161</link>
        <id>10.14569/IJACSA.2026.0170161</id>
        <doi>10.14569/IJACSA.2026.0170161</doi>
        <lastModDate>2026-01-30T11:01:08.8100000+00:00</lastModDate>
        
        <creator>Muhannad Almutiry</creator>
        
        <creator>Asma’a Al-Ekrish</creator>
        
        <creator>Saleh Alshebeili</creator>
        
        <subject>Cone-beam computed tomography; CBCT; image processing; Wiener filter; LLMMSE filter; dental imaging; noise reduction; low-dose imaging; endodontics; pulp canal visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Cone-beam computed tomography (CBCT) imaging in dentistry requires post-reconstruction image processing to enhance diagnostic quality while minimizing radiation exposure. Visualization of fine dentomaxillofacial structures, particularly the inferior alveolar canal (IAC) and dental pulp canals, presents significant diagnostic challenges in low-dose CBCT imaging. This study investigates the application of Wiener and adaptive Wiener (LLMMSE) filters in the reconstruction domain to improve the visibility of these critical anatomical structures in low-dose dental CBCT images. Two CBCT examinations of a dry mandible were acquired using reference and low-dose protocols. The low-dose post-reconstruction data was processed using six different filters: geometric mean, LLMMSE with additive noise 15, LLMMSE with additive noise 5, moving average, Wiener, and local contrast filters. These computationally efficient filters offer practical advantages over existing complex and costly noise reduction schemes. Subjective evaluation by an experienced oral and maxillofacial radiologist demonstrated that the IAC was clearly identifiable in all low-dose datasets regardless of filter application. However, the highest visibility of narrow pulp canals was achieved with the Wiener and LLMMSE_5 filters. This proof-of-concept study demonstrates the potential of Wiener and LLMMSE_5 techniques for improving visibility of narrow dental pulp canals in low-dose CBCT images, which has important implications for endodontic diagnosis and treatment planning while supporting radiation dose reduction strategies.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_61-Development_of_Image_Processing_Filters_for_Improving_Visibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Dual-Framework Defense: CTI and LLM-Based Enhanced Smishing Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170160</link>
        <id>10.14569/IJACSA.2026.0170160</id>
        <doi>10.14569/IJACSA.2026.0170160</doi>
        <lastModDate>2026-01-30T11:01:08.7630000+00:00</lastModDate>
        
        <creator>Li Guangliang</creator>
        
        <creator>Kalaivani Selvaraj</creator>
        
        <creator>Mahinderjit Singh</creator>
        
        <subject>Smishing detection; cyber threat intelligence; XGBoost; semantic verification; large language model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Smishing has become a severe cybersecurity threat, as attackers now use AI and social engineering to craft more sophisticated campaigns. To address this challenge, this study proposes a dual-layer detection framework that combines cyber threat intelligence (CTI), machine learning, and a large language model (LLM). The framework uses 22 features built from 2,811 real SMS messages, categorized as content-based, context-based, and Indicators of Compromise (IOC)-based features. Five machine learning models were evaluated; XGBoost, trained with a 70% training, 10% validation, and 20% test split, achieved the best performance, with a recall of 92.08% and an F1-score of 94.66%. For borderline cases, the study experimented with four LLMs (including GPT-4o and LLaMA 3) serving as a semantic verification layer. All models achieved a recall rate above 98.5% and produced human-readable explanations. The study demonstrated that these four models are complementary verifiers rather than main classifiers. The results show that structured threat intelligence used during feature engineering improves machine learning model performance. With semantic reasoning, the framework also generates accessible reports for non-specialists, lowering the barrier to effective smishing detection.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_60-Collaborative_Dual_Framework_Defense.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding IT Product Purchasing Behavior of MSMEs Using Sequential Pattern Mining Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170159</link>
        <id>10.14569/IJACSA.2026.0170159</id>
        <doi>10.14569/IJACSA.2026.0170159</doi>
        <lastModDate>2026-01-30T11:01:08.7330000+00:00</lastModDate>
        
        <creator>Rendro Kasworo</creator>
        
        <creator>R. Rizal Isnanto</creator>
        
        <creator>Budi Warsito</creator>
        
        <subject>Sequential pattern mining; Apriori; PrefixSpan; CloSpan; SMEs; digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Sequential pattern mining is a crucial analytical method for understanding purchasing behavior and uncovering hidden patterns in transactional data. Unlike most prior studies that apply sequential pattern mining primarily in consumer-oriented retail settings or evaluate algorithms in isolation, this study investigates IT product purchasing behavior among Small and Medium Enterprises (SMEs) within a B2B digital transformation context through a direct comparative evaluation of three widely used algorithms: Apriori, PrefixSpan, and CloSpan. A series of controlled experiments was conducted on the same transactional datasets to assess algorithm performance in terms of accuracy, computational efficiency, and redundancy reduction. The results show that Apriori discovers exhaustive patterns at the cost of higher computational complexity, PrefixSpan achieves faster sequence extraction with balanced accuracy, and CloSpan effectively reduces redundancy by generating closed sequential patterns. Beyond pattern discovery, this study translates support, confidence, and lift metrics into actionable decision-support insights, highlighting how different algorithmic characteristics can be aligned with retention strategies, service bundling, and targeted interventions. These findings provide distinct methodological and practical contributions by positioning sequential pattern mining as a data-driven decision-support tool to accelerate digital transformation initiatives among SMEs in the IT product ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_59-Understanding_IT_Product_Purchasing_Behavior_of_MSMEs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking Large Language Models for Dental Clinical Decision Support: A BERT Score Analysis of Claude Opus 4.5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170158</link>
        <id>10.14569/IJACSA.2026.0170158</id>
        <doi>10.14569/IJACSA.2026.0170158</doi>
        <lastModDate>2026-01-30T11:01:08.7000000+00:00</lastModDate>
        
        <creator>Achmad Zam Zam Aghasy</creator>
        
        <creator>Muhammad Lutfan Lazuardi</creator>
        
        <creator>Hari Kusnanto Josef</creator>
        
        <subject>BERT Score; Large Language Models; clinical decision support system; semantic similarity; Claude Opus 4.5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The integration of Large Language Models (LLMs) into clinical decision support systems represents a significant advancement in healthcare informatics. This study presents a comprehensive evaluation framework for benchmarking LLM-generated dental treatment recommendations using BERT Score as the primary semantic similarity metric. We evaluated Claude Opus 4.5 as a Clinical Decision Support System (CDSS) across 116 dental case reports extracted from the Case Reports in Dentistry journal (2024-2025), spanning nine dental specialties. The BERT Score was calculated using the RoBERTa-large model to measure semantic alignment between AI-generated treatment plans and gold-standard published treatments. Results demonstrated strong semantic alignment with a mean BERT Score F1 of 0.8199 with a standard deviation of 0.0144 (95% confidence interval: 0.8172-0.8225), significantly exceeding the 0.80 threshold (t = 14.90, p &lt; 0.001, d = 1.38). Cross-specialty analysis revealed consistent performance across all nine dental domains (Kruskal-Wallis H = 3.07, p = 0.879), indicating robust generalizability. A significant negative correlation was observed between BERT Score and response time (ρ = -0.371, p &lt; 0.001), suggesting a speed-accuracy trade-off in LLM reasoning. This study contributes a reproducible benchmarking methodology for evaluating LLM performance in specialized clinical domains and demonstrates the potential of BERT Score as a scalable evaluation metric for AI-generated clinical text.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_58-Benchmarking_Large_Language_Models_for_Dental_Clinical_Decision_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Cyber Trends and Threats Towards V2X Connected Vehicles in Malaysia: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170157</link>
        <id>10.14569/IJACSA.2026.0170157</id>
        <doi>10.14569/IJACSA.2026.0170157</doi>
        <lastModDate>2026-01-30T11:01:08.6700000+00:00</lastModDate>
        
        <creator>A’in Hazwani Ahmad Rizal</creator>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Sakinah Ali Pitchay</creator>
        
        <creator>Taqiyuddin Anas</creator>
        
        <subject>5G; V2X; connected vehicles; cybersecurity; intrusion detection systems; anomaly detection; spoofing; DoS; network tier; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid expansion of 5G-enabled Vehicle-to-Everything (V2X) communication has advanced intelligent transportation systems by supporting applications such as autonomous driving, real-time traffic optimization, and road safety management. However, the growing connectivity and diverse communication protocols also create major cybersecurity challenges, especially in the network tier of connected vehicles (CVs). This study conducts a systematic literature review following the PRISMA framework to examine cybersecurity threats and detection models in Malaysia&#39;s V2X ecosystem, analyzing 85 peer-reviewed studies published between 2016 and 2025. The review addresses three research questions: (RQ1) What is the state-of-the-art in CVs in the aspect of network technology in Malaysia, (RQ2) What are the cybersecurity trends and threats towards CVs in the network tier, and (RQ3) What are the existing models in detecting and responding to cyber threats against CVs? The study identifies critical threats, including spoofing, jamming, and denial-of-service attacks, while evaluating intrusion detection systems that use machine learning, deep learning, and hybrid approaches. Existing approaches still face limitations in real-time performance, contextual accuracy, and supply chain resilience under Malaysia&#39;s tropical urban conditions. This study proposes a conceptual model, the SCARF-V2X model, an NGSOC-integrated concept that utilizes SIEM, SOAR, and Malaysian cyber threat intelligence platforms to enable automated detection and first-layer auto-response, specifically targeting supply chain threats in CVs. The proposed model aims to improve Malaysia&#39;s V2X cybersecurity landscape and introduces a proactive and adaptive approach to protecting CVs against evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_57-Exploring_Cyber_Trends_and_Threats_Towards_V2X_Connected_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RoBERTa-Enhanced Actor–Critic Reinforcement Learning for Adaptive and Personalized ESL Instruction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170156</link>
        <id>10.14569/IJACSA.2026.0170156</id>
        <doi>10.14569/IJACSA.2026.0170156</doi>
        <lastModDate>2026-01-30T11:01:08.6400000+00:00</lastModDate>
        
        <creator>Angalakuduru Aravind</creator>
        
        <creator>A. Swathi</creator>
        
        <creator>Jillellamoodi Naga Madhuri</creator>
        
        <creator>R. Aroul Canessane</creator>
        
        <creator>K. Lalitha Vanisree</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Rasha M. Abd El-Aziz</creator>
        
        <subject>English as a Second Language; adaptive learning; reinforcement learning; RoBERTa; intelligent tutoring systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Technology-Assisted Language Learning (TALL) has developed and has greatly transformed the way English as a Second Language (ESL) is taught. Current digital resources and smart solutions have enabled more interactive and accessible learning, providing learners with an opportunity to train their skills at any time and place. Nevertheless, most current systems remain based on strict rules or conventional supervised training approaches. These methods can demand large quantities of labelled data, are inflexible in the learning process, and offer little individualized feedback. Consequently, students may remain inattentive, and the acquisition of all the necessary language skills, such as reading, writing, listening, and speaking, may be unequal. In order to address such shortcomings, this study presents T-RLNN (RoBERTa-based Reinforcement Learning Neural Network), a dynamic model for ESL teaching. T-RLNN combines deep contextual language comprehension and reinforcement learning in order to customize teaching to every learner. The RoBERTa encoder retrieves semantic and syntactic feedback on learner responses, and an actor-critic reinforcement learning agent modifies teaching plans in real time. The agent takes into account learner-specific factors, i.e., proficiency, response time, engagement, and interaction behavior, to give the best guidance. It was trained in Python using PyTorch and tested on a curated dataset of 5,000 learner responses in reading, writing, listening, and speaking tasks. T-RLNN performed better than conventional models, such as Support Vector Machines, random forests, and conventional deep neural networks, with a 94.8% accuracy, 92.7% F1-score, and 71.5% Adaptivity Index. These findings indicate that T-RLNN has the potential to provide insightful, interactive, and learner-oriented ESL training and open the way to smarter and more adaptable language learning systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_56-RoBERTa_Enhanced_Actor_Critic_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable CNN-Based Multiclass Household Waste Classification Using Grad-CAM for Smart Waste Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170155</link>
        <id>10.14569/IJACSA.2026.0170155</id>
        <doi>10.14569/IJACSA.2026.0170155</doi>
        <lastModDate>2026-01-30T11:01:08.6230000+00:00</lastModDate>
        
        <creator>Fuzy Yustika Manik</creator>
        
        <creator>Pauzi Ibrahim Nainggolan</creator>
        
        <creator>T. H. F Harumy</creator>
        
        <creator>Dewi Sartika Br Ginting</creator>
        
        <creator>Aini Maharani</creator>
        
        <creator>Hafizha Ramadayanti</creator>
        
        <creator>Jessica Almalia</creator>
        
        <creator>Muhammad Putra Harifin</creator>
        
        <subject>EfficientNet-B0; explainable AI; Grad-CAM; transfer learning; waste classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Automated waste classification using computer vision has become essential for improving environmental sustainability and reducing manual sorting effort. This study presents an enhanced waste image classification model based on EfficientNet-B0, trained using a two-stage transfer learning strategy that combines feature extraction and fine-tuning. The proposed approach aims to enhance classification accuracy while maintaining computational efficiency. Experimental evaluations conducted on a heterogeneous multi-class waste dataset demonstrate the superiority of the proposed method. The confusion matrix results indicate a high proportion of correct predictions across most categories, with only minor misclassifications among visually similar classes, such as metal and paper. The model&#39;s robustness is further validated through 5-Fold Cross-Validation, which yields an average accuracy of 94.3% with a standard deviation of &#177;0.007, confirming consistent performance across data partitions. Compared with state-of-the-art CNN architectures, including ResNet50 and DenseNet121, the proposed model achieves the highest accuracy while using the fewest parameters (4.38M), making it suitable for deployment in resource-constrained environments. Additionally, qualitative analysis using Grad-CAM confirms that the model’s decisions are explainable and based on relevant object features. These findings demonstrate that the proposed EfficientNet-B0 model constitutes a reliable, efficient, and interpretable solution for automated waste classification. The model is further evaluated using cross-validation and explainable AI (Grad-CAM) to assess both performance stability and interpretability.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_55-Explainable_CNN_Based_Multiclass_Household_Waste_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text-Driven Early Warning of Supply Chain Risks: A Hybrid Machine- and Deep-Learning Framework for the New Energy Vehicle (NEV) Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170154</link>
        <id>10.14569/IJACSA.2026.0170154</id>
        <doi>10.14569/IJACSA.2026.0170154</doi>
        <lastModDate>2026-01-30T11:01:08.6070000+00:00</lastModDate>
        
        <creator>Ma Chaoke</creator>
        
        <creator>S. Sarifah Radiah Shariff</creator>
        
        <creator>Noryanti Nasir</creator>
        
        <creator>Gao Ying</creator>
        
        <subject>New Energy Vehicle (NEV); supply chain risk; natural language processing (NLP); text classification; early warning system; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid expansion of New Energy Vehicles (NEVs) has increased the global NEV supply chains&#39; exposure to diverse and interconnected risks. Distributed production networks frequently face disruptions driven by raw material volatility, evolving environmental regulations, customs clearance uncertainty, and geopolitical instability, underscoring the need for effective early-warning systems. To address limitations in existing studies that lack a consistent and interpretable structure for NEV-specific hazards, this study proposes a hybrid NLP-based pipeline for risk text classification and early-warning sender extraction. A curated dataset of 120 NEV-related risk reports published between 2023 and 2025 was collected from Chinese information sources, pre-processed, and annotated according to a six-category risk taxonomy. Classical machine-learning models, including logistic regression, support vector machines, random forest, and XGBoost, were trained using TF-IDF features, while a multilayer perceptron and a BERT model were employed to capture nonlinear patterns and contextual semantics. Classical models were evaluated using five-fold cross-validation, and deep models were assessed on a held-out test set. XGBoost achieved the best classical performance, with accuracy and F1 scores of 0.826 and 0.766, respectively. BERT outperformed all baselines, reaching an accuracy of 0.864 and an F1 score of 0.808. The proposed framework demonstrates a modular and scalable approach.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_54-Text_Driven_Early_Warning_of_Supply_Chain_Risks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Contextualized Learner-Profiling Transformer Architecture for Adaptive Grammar Error Diagnosis and Instruction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170153</link>
        <id>10.14569/IJACSA.2026.0170153</id>
        <doi>10.14569/IJACSA.2026.0170153</doi>
        <lastModDate>2026-01-30T11:01:08.5770000+00:00</lastModDate>
        
        <creator>Bukka Shobharani</creator>
        
        <creator>M Vijaya Lakshmi</creator>
        
        <creator>Kama Ramudu</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Gulnaz Fatma</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Grammar error correction; adaptive feedback; Multi-task Learning; transformer models; ESL learning; personalized language instruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Grammatical accuracy is a critical component of English as a Second Language (ESL) learning; however, many learners continue to struggle with recurring errors despite the availability of automated grammar correction tools. Although recent transformer-based models such as BERT, GPT, and T5 have demonstrated strong benchmark performance, existing grammar error correction (GEC) systems remain largely correction-oriented and lack pedagogical flexibility, learner awareness, and explanation-based feedback. To address these limitations, this study proposes an Adaptive Multi-Task T5 (AMT-T5) framework that integrates grammatical error correction, error-type classification, and personalized feedback generation within a unified transformer architecture. The proposed method is designed to actively support learner development by maintaining dynamic learner error profiles and adaptively reweighting attention to provide targeted instructional guidance. AMT-T5 is implemented using Python, PyTorch, and the Hugging Face Transformers library, and trained on the Lang-8 Learner Corpus, which contains authentic ESL learner sentences with expert corrections. Experimental results demonstrate that the proposed model significantly outperforms existing transformer-based baselines, achieving 78.9 BLEU, 90.7 GLEU, 82.6% full-sentence accuracy, and an error reduction rate of 91.2%, representing an approximate 18–22% improvement in grammatical accuracy over prior models. The framework further incorporates Direct Preference Optimization to align corrections with pedagogical expectations and Knowledge Distillation to enable efficient real-time deployment. Overall, the proposed AMT-T5 framework transforms grammar correction from a passive editing task into an adaptive, learner-centered educational process, offering a scalable and effective solution for intelligent ESL grammar learning systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_53-A_Contextualized_Learner_Profiling_Transformer_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Immersive Learning Environment Design of Outdoor Education Space Using Artificial Intelligence Augmented Reality Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170152</link>
        <id>10.14569/IJACSA.2026.0170152</id>
        <doi>10.14569/IJACSA.2026.0170152</doi>
        <lastModDate>2026-01-30T11:01:08.5600000+00:00</lastModDate>
        
        <creator>Chenguang Liu</creator>
        
        <subject>Artificial intelligence; augmented reality; outdoor education space; immersive learning environment; system-level evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study presents a technically grounded design and implementation of an AI- and AR-enabled immersive learning environment for outdoor education. Moving beyond conceptual descriptions, the study develops an executable system framework that integrates adaptive navigation and positioning, context-aware virtual tours, task-driven scenario simulation, and real-time feedback mechanisms. Each functional module is explicitly linked to algorithmic implementations, including multi-sensor state estimation, constrained generative scene construction, and reinforcement-based adaptive control, enabling reproducible system behavior in real outdoor settings. A controlled field experiment was conducted using an experimental group and a control group under identical instructional conditions. Quantitative evaluation based on pre–post testing, behavioral logging, and statistical analysis demonstrates that the proposed system achieves statistically significant improvements in learning interest, participation, knowledge mastery, and problem-solving ability. Experimental conditions, data characteristics, and methodological limitations are explicitly reported to support result verification and generalizability. The findings indicate that the proposed immersive learning environment constitutes a validated system-level contribution rather than a purely conceptual framework, offering practical and scientific value for computer science–oriented educational technology research.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_52-Immersive_Learning_Environment_Design_of_Outdoor_Education_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EnGMHE: Enhanced Geometric Mean Histogram Equalization for Low-Light Image Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170151</link>
        <id>10.14569/IJACSA.2026.0170151</id>
        <doi>10.14569/IJACSA.2026.0170151</doi>
        <lastModDate>2026-01-30T11:01:08.5300000+00:00</lastModDate>
        
        <creator>Rawan Zaghloul</creator>
        
        <creator>Hazem Hiary</creator>
        
        <subject>Histogram Equalization; image enhancement; low-light enhancement; denoising; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Low-light image enhancement has been extensively studied, with numerous methods proposed to address this challenge. Among these, Geometric Mean Histogram Equalization (GMHE) emerged as a histogram-based technique specifically designed for enhancing low-light images. Despite its effectiveness, GMHE has notable limitations: it often oversaturates results under specific conditions and amplifies noise, limiting its practical applicability. These shortcomings become particularly pronounced in real-world scenarios where low-light conditions are frequently accompanied by significant noise artifacts. To address these shortcomings, this study introduces EnGMHE, an enhanced version of GMHE. The proposed method consists of three key steps: 1) introducing a novel Gaussian Histogram Equalization (GHE) to improve image contrast and brightness, 2) utilizing GMHE to enhance sharpness and detail clarity, and 3) denoising the enhanced image using a pretrained deep neural network model. Together, these steps offer a more robust solution for low-light image enhancement, balancing contrast improvement, detail preservation, and noise reduction. The experimental results reveal not only the efficiency but also the effectiveness of the proposed model when benchmarked against the state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_51-EnGMHE_Enhanced_Geometric_Mean_Histogram_Equalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170150</link>
        <id>10.14569/IJACSA.2026.0170150</id>
        <doi>10.14569/IJACSA.2026.0170150</doi>
        <lastModDate>2026-01-30T11:01:08.4970000+00:00</lastModDate>
        
        <creator>Dadang Iskandar Mulyana</creator>
        
        <creator>Edi Noersasongko</creator>
        
        <creator>Guruh Fajar Shidik</creator>
        
        <creator>Pujiono</creator>
        
        <subject>Sign language recognition; Hijaiyah sign language; wearable sensors; smart glove; multimodal fusion; polynomial temporal smoothing; real-time recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Sign language recognition is a critical component of assistive technologies for individuals with hearing and speech impairments. While vision-based approaches have shown promising performance, their reliability is often affected by illumination variations, occlusions, and background complexity. Wearable sensor–based solutions, particularly smart gloves integrating flex sensors and inertial measurement units (IMUs), provide a more stable alternative by directly capturing hand articulation and motion patterns. However, existing sensor-based methods frequently suffer from temporal instability, noise sensitivity, and limited discrimination among structurally similar gestures, which is especially challenging in Hijaiyah sign language, where many letters differ only by subtle finger configurations. This study proposes a robust real-time Multimodal Polynomial Fusion (MPF) framework for sensor-based sign language recognition using a flex–IMU smart glove, with a specific focus on Hijaiyah gestures as the application domain. The proposed framework applies nonlinear polynomial temporal smoothing within a sliding window to stabilize raw flex–IMU trajectories, followed by multimodal fusion to enhance gesture separability and temporal consistency. A large-scale multimodal dataset comprising 231,000 samples collected from 33 users performing 28 Hijaiyah gesture classes was constructed to enable rigorous subject-independent evaluation. Experimental results obtained from offline testing, session-aware analysis, and real-time streaming scenarios demonstrate that the proposed MPF framework consistently outperforms a baseline approach based on raw normalized signals. The proposed method improves recognition accuracy from 92.42% to 96.32%, while also achieving higher macro-level precision, recall, and F1-score. Furthermore, MPF significantly reduces misclassification rates and improves temporal stability, particularly for fine-grained Hijaiyah gestures with similar structural patterns. These results confirm that the proposed framework provides a robust and reliable solution for real-time wearable sign language recognition and offers practical benefits for Hijaiyah-based assistive communication systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_50-A_Robust_Real_Time_Multimodal_Polynomial_Fusion_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selfdom Enhanced CatBoost Model for Remote Paddy Growth Monitoring and Fertilizer Recommendation in Precision Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170149</link>
        <id>10.14569/IJACSA.2026.0170149</id>
        <doi>10.14569/IJACSA.2026.0170149</doi>
        <lastModDate>2026-01-30T11:01:08.4670000+00:00</lastModDate>
        
        <creator>Shanmuga Priya S</creator>
        
        <creator>V. Dhilip Kumar</creator>
        
        <subject>Paddy growth monitoring; fertilizer recommendation; Selfdom Enhanced CatBoost Model; Osprey Optimization Algorithm; oppositional function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Precision agriculture enables data-driven crop monitoring and improved resource utilization. Paddy cultivation requires continuous surveillance and timely fertilizer application because it is sensitive to soil nutrient dynamics, water availability, and climatic conditions. Conventional practices such as manual field inspection and heuristic fertilizer advisory methods are often labor-intensive and subjective, which can reduce decision consistency and contribute to yield variability. To address these limitations, this study proposes a Selfdom Enhanced CatBoost (SECB) framework for remote paddy growth-stage monitoring and fertilizer recommendation. Multispectral remote sensing data collected over multiple seasons are used to compute vegetation indices, including NDVI, GNDVI, RVI, GRVI, and NDRE, to characterize crop vigor and chlorophyll-related variation across growth stages. The proposed SECB improves CatBoost by integrating an Improved Osprey Optimization Algorithm (IOOA) to tune key model parameters, aiming to enhance feature interaction learning and reduce overfitting. In addition, oppositional function-based initialization is applied to improve the exploration capability of IOOA and accelerate convergence. Experimental results show that SECB achieves improved performance over baseline classifiers in terms of accuracy, precision, F1-score, specificity, and AUC. The proposed approach provides reliable growth-stage identification and supports fertilizer recommendations to promote efficient nutrient usage and improved productivity. Overall, the framework offers an automated and scalable decision-support strategy for paddy crop management.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_49-Selfdom_Enhanced_CatBoost_Model_for_Remote_Paddy_Growth_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Volumetric Feature Learning for High-Fidelity Two-Dimensional Dental Cast Image Reconstruction Using Generative Adversarial Networks (GANs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170148</link>
        <id>10.14569/IJACSA.2026.0170148</id>
        <doi>10.14569/IJACSA.2026.0170148</doi>
        <lastModDate>2026-01-30T11:01:08.4200000+00:00</lastModDate>
        
        <creator>Eman Ahmed Eldaoushy</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <creator>Nermeen Ahmed Hassan</creator>
        
        <creator>Mai M. El defrawi</creator>
        
        <subject>Dental image reconstruction; generative adversarial networks; latent space representation; two-dimensional to three-dimensional mapping; volumetric deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Dentistry is a medical branch that diagnoses and treats oral diseases, helps maintain oral function, and improves oral aesthetics. Dental casts are three-dimensional models of a patient’s oral tissues that can be used to study oral anatomy, assess occlusal relationships, and determine tooth alignment. Traditionally, casts were made of gypsum, an impression material poured into molds of the patient’s mouth. Meanwhile, digital ones are three-dimensional models generated virtually using modern digital imaging and intraoral scanners. Unlike physical models, which require a lot of manual work and ample storage space, digital models can be produced rapidly, easily modified, and stored for long-term usage. In this study, we present Denta-RecGAN, a novel approach based on Generative Adversarial Networks (GANs) that maps a two-dimensional dental cast image into a volumetric latent space and projects it back into a two-dimensional output. The proposed approach employs a 2D encoder to process dental cast images as input, enabling the extraction of spatial features. The structural depth is modelled, and noise is suppressed using volumetric 3D latent space denoising models; a 2D decoder then reconstructs a high-quality image. The model is trained under an adversarial learning approach using the IO150K dataset. The proposed architecture achieved Mean Absolute Error (MAE) of 0.0128, 0.0127, 0.0128; Structural Similarity Index Measure (SSIM) of 0.9450, 0.9452, 0.9453; and Peak Signal-to-Noise Ratio (PSNR) of 28.84, 28.85, 28.84 decibels across training, validation, and testing sets. These results demonstrate the effectiveness of volumetric feature learning in enhancing the accuracy of 2D image reconstruction and preserving fine structural details.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_48-Volumetric_Feature_Learning_for_High_Fidelity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review on Organizational Readiness for Artificial Intelligence Adoption Based on the TOE Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170147</link>
        <id>10.14569/IJACSA.2026.0170147</id>
        <doi>10.14569/IJACSA.2026.0170147</doi>
        <lastModDate>2026-01-30T11:01:08.4030000+00:00</lastModDate>
        
        <creator>Sulistyo Aris Hirtranusi</creator>
        
        <creator>Benny Ranti</creator>
        
        <creator>Widijanto Satyo Nugroho</creator>
        
        <creator>Wisnu Jatmiko</creator>
        
        <subject>Artificial intelligence readiness; organizational readiness; TOE framework; systematic literature review; AI adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Artificial intelligence (AI) is increasingly being integrated into organizational processes, reshaping how organizations operate, compete, and make decisions. However, despite growing interest, many organizations face challenges in adopting AI effectively due to insufficient readiness. Prior research on organizational AI readiness has produced diverse and sometimes inconsistent conceptualizations, particularly with respect to definitions, readiness factors, and analytical approaches. To consolidate these dispersed insights, this study undertakes a structured review of the literature to synthesize organizational AI readiness factors through the lens of the Technology–Organization–Environment (TOE) framework. The review applies a transparent and replicable screening and selection process, consistent with PRISMA principles, to analyze peer-reviewed journal articles on organizational AI adoption and readiness. Through a multi-stage coding process, 124 readiness-related indicators were identified and subsequently consolidated into 35 factors, which were further synthesized into 12 core readiness themes mapped across the technological, organizational, and environmental dimensions of the TOE framework. The results indicate that organizational AI readiness is not a standalone condition, but a multidimensional and interdependent construct shaped by the alignment of technological capabilities, organizational structures and competencies, and external environmental conditions. By providing a structured synthesis of organizational AI readiness factors, this study clarifies the multidimensional nature of readiness and highlights cross-dimensional interdependencies within the TOE framework. The findings contribute theoretical clarity to the AI readiness literature and offer a consolidated foundation for future empirical studies and practical readiness assessments in organizational settings.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_47-A_Systematic_Literature_Review_on_Organizational_Readiness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecture of an Intelligent Predictive Analytics System for Gas Environment Monitoring Based on Sensor-Series IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170146</link>
        <id>10.14569/IJACSA.2026.0170146</id>
        <doi>10.14569/IJACSA.2026.0170146</doi>
        <lastModDate>2026-01-30T11:01:08.3570000+00:00</lastModDate>
        
        <creator>Anuar Kussainov</creator>
        
        <creator>Gulnaz Zhomartkyzy</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Intelligent system; predictive analytics; IoT; gas analyzer; LoRaWAN; industrial safety; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Industrial facilities operating with toxic and explosive gases require continuous monitoring systems capable not only of detecting threshold exceedances but also of anticipating hazardous trends. Conventional IoT-based gas monitoring solutions are primarily limited to real-time data acquisition and alarm triggering, which restricts their ability to prevent incidents proactively. This study presents the architecture of an intelligent predictive analytics system for gas environment monitoring that integrates sensor-series IoT gas analyzers with advanced data analytics. The proposed system is built on domestically developed SENSOR-Mine gas analyzers supporting LoRaWAN and Wi-Fi communication, centralized data storage in MS SQL Server, machine learning–based analytics implemented in Python, and a web-based visualization platform using ASP.NET MVC. Time-series forecasting models and anomaly detection algorithms are jointly employed to analyze gas concentration dynamics and identify potentially dangerous situations at early stages. Experimental validation using carbon monoxide measurements demonstrates the practical applicability of the proposed architecture for industrial safety monitoring. The presented approach provides a scalable foundation for intelligent gas environment monitoring systems aimed at reducing industrial risks and improving worker protection.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_46-Architecture_of_an_Intelligent_Predictive_Analytics_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advances in Deep Learning for Affective Intelligence: Language Models, Multimodal Trends, and Research Frontiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170145</link>
        <id>10.14569/IJACSA.2026.0170145</id>
        <doi>10.14569/IJACSA.2026.0170145</doi>
        <lastModDate>2026-01-30T11:01:08.3270000+00:00</lastModDate>
        
        <creator>Diego Andres Andrade-Segarra</creator>
        
        <creator>Juan Carlos Santill&#225;n-Lima</creator>
        
        <creator>Miguel Duque-Vaca</creator>
        
        <creator>Fernando Tiverio Molina-Granja</creator>
        
        <subject>Sentiment Analysis; Emotion Recognition; Hate Speech Detection; cyberbullying; deep learning; Transformer-based Models; Multimodal Analysis; multilingual NLP; Explainable AI (XAI); social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The accelerated growth of digital content and the increasing presence of emotional expressions, polarized opinions, and toxic behaviors in social media have driven the development of advanced Affective Analysis techniques. This study presents a broad and up-to-date review of recent studies covering Sentiment Analysis, Emotion Recognition, Hate Speech Detection, cyberbullying, and multimodal approaches grounded in deep learning. The review provides a comparative analysis of the architectures employed—including Transformer-based Models, multimodal frameworks, and variants designed for low-resource languages—along with their metrics, performance outcomes, and emerging patterns. The findings reveal a clear consolidation of Transformer-based Models as the dominant standard, significant progress in multimodality for affective interpretation, and growing attention to multilingual models adapted to diverse cultural contexts. Furthermore, persistent challenges are identified, including limitations related to data availability and quality, Explainable AI (XAI), computational efficiency, and robustness in cross-domain generalization. This review synthesizes current trends, limitations, and opportunities in the field, offering a structured perspective that can serve as a reference for researchers and practitioners involved in the development of more accurate, efficient, and culturally responsible affective systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_45-Advances_in_Deep_Learning_for_Affective_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Endometrium Segmentation in Transvaginal Ultrasound: A Systematic Review Towards Receptivity Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170144</link>
        <id>10.14569/IJACSA.2026.0170144</id>
        <doi>10.14569/IJACSA.2026.0170144</doi>
        <lastModDate>2026-01-30T11:01:08.3100000+00:00</lastModDate>
        
        <creator>Asma Amirah Nazarudin</creator>
        
        <creator>Siti Salasiah Mokri</creator>
        
        <creator>Noraishikin Zulkarnain</creator>
        
        <creator>Aqilah Baseri Huddin</creator>
        
        <creator>Mohd Faizal Ahmad</creator>
        
        <creator>Ashrani Aizzuddin Abd Rani</creator>
        
        <creator>Seri Mastura Mustaza</creator>
        
        <creator>Huiwen Lim</creator>
        
        <subject>Endometrium segmentation; deep learning; image segmentation; image processing; endometrium receptivity assessment; ovarian health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Deep learning (DL) has become a transformative approach in medical image analysis, offering superior accuracy and automation in image segmentation tasks. In reproductive imaging, transvaginal ultrasound (TVUS) serves as a crucial modality for evaluating the endometrial condition, which plays a critical role in assessing ovarian health. Although many studies have applied deep learning to the segmentation of pathological endometrial conditions, research focusing on non-pathological endometrium segmentation remains critically limited. This study presents a comprehensive review of deep learning methods for endometrium segmentation in TVUS, with a focus on non-pathological conditions, including endometrial thickness measurement, morphology analysis, and endometrium receptivity assessment. Following PRISMA guidelines, research articles published between 2015 and 2025 were identified from major scientific databases. The selected studies were analyzed in terms of image processing methods, deep learning architectures, and performance metrics, such as Dice coefficient, Jaccard index, precision, recall, and Hausdorff distance. Although foundational architectures, such as U-Net and its variants, achieve impressive Dice coefficients (up to 0.977), the results often rely on small, single-center datasets, limiting generalizability across imaging settings. Recent advancements demonstrate the efficacy of hybrid architectures, such as the Deep Learned Snake algorithm and Transformer-based models like SAIM, in optimizing segmentation precision within noisy transvaginal ultrasound images. This review highlights the lack of attention to non-pathological endometrium segmentation and guides future research directions in self-supervised learning, transformer-based architectures, and interpretable deep learning to achieve robust and clinically applicable models for enhancing endometrium receptivity assessment and supporting ovarian health in assisted reproduction technology.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_44-Deep_Learning_for_Endometrium_Segmentation_in_Transvaginal_Ultrasound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service and Customer Satisfaction: A Case Study of Call Center Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170143</link>
        <id>10.14569/IJACSA.2026.0170143</id>
        <doi>10.14569/IJACSA.2026.0170143</doi>
        <lastModDate>2026-01-30T11:01:08.2800000+00:00</lastModDate>
        
        <creator>Jhoanna Iveth Santiago Rufasto</creator>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Haslyd Claydiana Ramos Jara</creator>
        
        <creator>Ana Huamani-Huaracca</creator>
        
        <subject>Quality of service; customer satisfaction; call center</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Quality of service and customer satisfaction have become priority aspects of call center services, especially in a context where their use is increasingly frequent. In the district of Los Olivos, Lima, Peru, 60% of users who receive telephone service consider the quality of the service deficient, underscoring the need to examine this issue in depth. The objective of the study was to determine the relationship between service quality and customer satisfaction in call center services in a district of Lima. To this end, a non-experimental, quantitative, correlational and cross-sectional approach was used. A questionnaire was applied to 384 clients to measure both variables, and their relationship was analyzed using Spearman&#39;s correlation. The results show a positive, very strong and significant correlation between service quality and customer satisfaction (r=0.907; p&lt;0.001). Likewise, the dimensions of service quality were significantly related to customer satisfaction: reliability (r=0.850), responsiveness (r=0.618), safety (r=0.473) and empathy (r=0.587). The study concludes by highlighting the importance of strengthening service quality to improve customer satisfaction and generate benefits for the company. Finally, the need to investigate additional factors that may influence this dynamic is raised.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_43-Quality_of_Service_and_Customer_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Guided Lightweight MobileNetV2 for Real-Time Driver Drowsiness Classification on Edge-IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170142</link>
        <id>10.14569/IJACSA.2026.0170142</id>
        <doi>10.14569/IJACSA.2026.0170142</doi>
        <lastModDate>2026-01-30T11:01:08.2470000+00:00</lastModDate>
        
        <creator>Yo Ceng Giap</creator>
        
        <creator>Muljono</creator>
        
        <creator>Affandy</creator>
        
        <creator>Ruri Suko Basuki</creator>
        
        <creator>Harun Al Azies</creator>
        
        <creator>R. Rizal Isnanto</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <subject>Driver drowsiness detection; Edge-IoT deployment; lightweight convolutional neural networks; process innovation; MobileNetV2 optimization; squeeze-and-excitation attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Driver drowsiness is a major cause of traffic accidents, so resource-constrained Edge-IoT platforms must detect drowsy drivers both accurately and quickly. This study examines an attention-guided lightweight CNN design based on MobileNetV2 for real-time driver drowsiness detection. The authors compare an SE-enhanced MobileNetV2 to the baseline model and a structurally optimized version that uses Depthwise Separable Convolution (DSC), Bottleneck blocks, and Expansion layers. Experiments on 500 images demonstrate that channel attention enhances feature discrimination, whereas structural optimization yields the most resilient trade-off between accuracy and latency. Statistical validation employing 95% confidence intervals and two-proportion Z-tests substantiates the significance of these enhancements. The proposed models support real-time inference despite their small size (about 2.6 million parameters and 315 million FLOPs). These findings suggest that structural optimization matters more than attention mechanisms in designing lightweight CNNs for embedded driver monitoring.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_42-Attention_Guided_Lightweight_MobileNetV2.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Few-Shot Semantic Meta-Learning Framework with CRF for Skill Entity Recognition in Open Innovation Ecosystems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170141</link>
        <id>10.14569/IJACSA.2026.0170141</id>
        <doi>10.14569/IJACSA.2026.0170141</doi>
        <lastModDate>2026-01-30T11:01:08.1400000+00:00</lastModDate>
        
        <creator>Nurchim</creator>
        
        <creator>Muljono</creator>
        
        <creator>Edi Noersasongko</creator>
        
        <creator>Ahmad Zainul Fanani</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <subject>Few-shot; Named Entity Recognition; skill intelligence; process innovation; open innovation; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The accelerating pace of digital transformation is reshaping labour-market dynamics, driving the emergence of new competencies, and intensifying the need for scalable skill-intelligence systems within open innovation ecosystems. Yet, research on Indonesian Named Entity Recognition (NER) remains limited for skill-extraction tasks, especially in low-resource contexts where annotated data are scarce and novel skill expressions evolve rapidly. To address this gap, this study contributes to applied Natural Language Processing (NLP) by introducing the Few-Shot Semantic Meta-Learning framework with CRF (FSM-CRF) for Indonesian skill entity recognition, which integrates semantic span representations, episodic meta-learning, and BIO-constrained CRF decoding to enhance prototype stability and entity-boundary precision for complex, multi-token skill expressions. Using the NERSkill.id dataset, the proposed model is evaluated under a 3-way, 10-shot episodic setting and achieves a micro-F1 of 73.84%, outperforming traditional supervised approaches (IndoBERT fine-tuning, BiLSTM-CRF) and existing few-shot baselines. Ablation experiments further demonstrate that semantic span modelling and structured CRF inference play pivotal roles in improving robustness, while meta-learning strengthens adaptability across diverse and evolving skill categories. From an open innovation perspective, this framework offers a data-efficient solution for dynamic competency mapping, reducing dependence on costly annotation pipelines and enabling continuous updates to workforce skill taxonomies. Overall, the findings highlight semantic meta-learning as a promising foundation for next-generation skill-intelligence infrastructures that support AI-enabled innovation management, strategic workforce planning, and evidence-informed policy design.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_41-A_Few_Shot_Semantic_Meta_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Explainable Hybrid Metaheuristic–Deep Learning Framework for Real-Time Financial Fraud Detection with Temporal Convolutional Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170140</link>
        <id>10.14569/IJACSA.2026.0170140</id>
        <doi>10.14569/IJACSA.2026.0170140</doi>
        <lastModDate>2026-01-30T11:01:08.1070000+00:00</lastModDate>
        
        <creator>Madhu Kumar Reddy P</creator>
        
        <creator>M. N. V Kiranbabu</creator>
        
        <subject>Banking; business intelligence; convolutional neural network; fraud detection; Moth Flame Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The increasing digitization of banking and related financial services has spurred a rise in transactions with fraudulent patterns, demanding detection solutions that are not only efficient but also interpretable and reproducible. Earlier machine learning approaches, such as K-Nearest Neighbors, Decision Trees, and Random Forests, struggle with high-dimensional and sequential transaction patterns; in addition, they cannot model temporal patterns and lack interpretability. To address these drawbacks, this work introduces an Interpretable Moth-Flame Optimized Temporal Convolutional Network (MFO-TCN) for efficient and interpretable real-time financial fraud detection. The approach begins with rigorous data preprocessing, including normalization and encoding, performed on the Bank Account Fraud (BAF) dataset. Using the Moth-Flame Optimization (MFO) algorithm, the optimal transaction features with high discriminative power are extracted. This is followed by the application of the Temporal Convolutional Network (TCN), which identifies the sequential patterns of fraud-related activities. For improved transparency and validity, the SHAP explainability technique is adopted, providing clearer explanations of feature importance and decision-making. The proposed MFO-TCN achieves an accuracy of 97.2% with high precision and recall, outperforming classical and ensemble approaches. Moreover, it processes transactions in real time, within milliseconds. These results show that combining metaheuristic feature optimization with temporal deep networks yields an effective technique for financial fraud detection systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_40-Advanced_Explainable_Hybrid_Metaheuristic_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Privacy-Conscious Federated Reinforcement Learning Framework for Affect-Aware English Listening</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170139</link>
        <id>10.14569/IJACSA.2026.0170139</id>
        <doi>10.14569/IJACSA.2026.0170139</doi>
        <lastModDate>2026-01-30T11:01:08.0770000+00:00</lastModDate>
        
        <creator>N. Sreedevi</creator>
        
        <creator>V. Saranya</creator>
        
        <creator>Kama Ramudu</creator>
        
        <creator>M. Madhusudhan Rao</creator>
        
        <creator>Sakshi Malik</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Adaptive listening learning; federated reinforcement learning; affective proxy modeling; privacy-preserving AI; HuBERT speech representation; English language learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid growth of digital English listening platforms has intensified the need for intelligent personalization mechanisms that adapt to learner progression while preserving data privacy. Existing adaptive systems primarily rely on static difficulty scaling or centralized learning architectures, often neglecting learner engagement dynamics and raising concerns about sensitive data exposure. To address these limitations, this study proposes PrivAURAL, a privacy-preserving and affect-aware adaptive English listening framework that models listening instruction as a sequential decision-making problem. The objective is to dynamically personalize listening tasks by jointly considering comprehension performance and engagement trends, without transmitting raw learner data. PrivAURAL integrates HuBERT-based semantic–acoustic representations with affective proxy signals derived from learner behavior and employs a Federated Deep Q-Network to adapt task difficulty, playback speed, and assessment frequency. The model is implemented using PyTorch, HuggingFace speech models, and a simulated federated learning environment with secure aggregation. Experiments conducted on the TED-LIUM dataset demonstrate a 32.7% reduction in Word Error Rate over ten sessions, a 21.9% decrease in task completion time, and an improvement in listening accuracy from 86.1% to 87.3% compared with non–affect-aware baselines. Federated training further ensures stable convergence, while maintaining strict privacy constraints. The results confirm that reinforcement-driven, affect-aware personalization can significantly enhance listening efficiency and engagement, positioning PrivAURAL as a scalable, ethical, and privacy-conscious solution for next-generation digital language learning systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_39-A_Privacy_Conscious_Federated_Reinforcement_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Robotic-Integrated Leagility Adaptation Model Through Green Supply Chain Intelligence and Supply Chain Ambidexterity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170138</link>
        <id>10.14569/IJACSA.2026.0170138</id>
        <doi>10.14569/IJACSA.2026.0170138</doi>
        <lastModDate>2026-01-30T11:01:08.0470000+00:00</lastModDate>
        
        <creator>Miftakul Huda</creator>
        
        <creator>Mohammad Hatta Fahamsyah</creator>
        
        <creator>Agung Nugroho</creator>
        
        <creator>Arie Indra Gunawan</creator>
        
        <creator>Pepen Komarudin</creator>
        
        <creator>Andrean Bagus Saputra</creator>
        
        <subject>Robotic integration; Green Supply Chain Intelligence; Supply Chain Ambidexterity; leagility adaptation; sustainable manufacturing performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study develops a Robotic-Integrated Leagility Adaptation Model by combining Green Supply Chain Intelligence (GSCI) and Supply Chain Ambidexterity (SCA) to enhance sustainable performance in the manufacturing sector. The rapid evolution of robotics, cyber-physical systems, and AI-enabled decision technologies has transformed supply chain dynamics, necessitating an adaptive model that balances efficiency (lean) and responsiveness (agile). Using an integrated quantitative approach, this research examines how robotic automation strengthens leagility capabilities through real-time analytics, predictive intelligence, and environmentally oriented digital operations. The findings demonstrate that GSCI significantly enhances SCA, which in turn improves leagility adaptation and sustainable manufacturing performance. Robotic integration is found to play a catalytic role by enabling autonomous coordination, energy-efficient scheduling, and intelligent material handling as key enablers of green and responsive operations. This study contributes to the literature by proposing a technology-driven leagility model that links robotics, green supply chain intelligence, and ambidexterity within a unified smart manufacturing framework. Implications are provided for policymakers and industry leaders to accelerate sustainable transformation through robotics-enabled digital ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_38-Developing_a_Robotic_Integrated_Leagility_Adaptation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Field Flexibility Approaches in Relational Databases: A Performance Study of JSON and Column-Oriented Models in Library Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170137</link>
        <id>10.14569/IJACSA.2026.0170137</id>
        <doi>10.14569/IJACSA.2026.0170137</doi>
        <lastModDate>2026-01-30T11:01:08.0300000+00:00</lastModDate>
        
        <creator>Rizal Fathoni Aji</creator>
        
        <creator>Nilamsari Putri Utami</creator>
        
        <subject>Field flexibility; RDBMS; column-oriented model; JSON; library systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study examines two approaches for achieving field flexibility in library systems using relational databases: column-oriented tables and JSON data types. To evaluate the performance and practicality of flexible schema strategies, a dataset of 41,000 library records was implemented using both column-oriented and JSONB-based schemas in PostgreSQL. Five representative queries based on typical search operations in library applications were executed repeatedly on each model, and average execution times were measured in a controlled environment. Results show that JSONB consistently outperforms the column-oriented approach across all query scenarios, benefiting from reduced structural overhead and more direct access to semi-structured data. However, the flexibility of JSONB introduces risks of inconsistent data structures and reduced schema enforcement compared to the more rigid but uniform column-oriented method. The findings highlight a trade-off between performance and data consistency, suggesting that JSONB is advantageous for dynamic, metadata-rich systems, while column-oriented storage remains preferable when strict structural integrity is required. Future work should explore hybrid models and schema validation layers to combine flexibility with reliable data governance.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_37-Evaluating_Field_Flexibility_Approaches_in_Relational_Databases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG-Based Imagined-Speech Decoding: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170136</link>
        <id>10.14569/IJACSA.2026.0170136</id>
        <doi>10.14569/IJACSA.2026.0170136</doi>
        <lastModDate>2026-01-30T11:01:08.0130000+00:00</lastModDate>
        
        <creator>Hatem T M Duhair</creator>
        
        <creator>Masrullizam Bin Mat Ibrahim</creator>
        
        <creator>Mazen Farid</creator>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Electroencephalography (EEG); Imagined speech; Brain–Computer Interfaces (BCIs); neural speech decoding; deep learning; transfer learning; time–frequency analysis; evaluation protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Non-invasive neural speech interfaces aim to reconstruct intended words from brain activity, offering critical communication options for individuals with severe dysarthria or locked-in syndrome. Among the available recording modalities, electroencephalography (EEG) remains the most accessible and cost-effective choice for long-term brain–computer interface (BCI) applications. Decoding imagined speech from EEG, however, remains difficult because of low signal-to-noise ratio, pronounced inter-subject variability, and the small, heterogeneous corpora that are currently available. This review adopts a narrative methodology to synthesise peer-reviewed studies on EEG-based imagined-speech decoding. Relevant articles were identified through keyword-based searches in major digital libraries and were included if they used non-invasive EEG, explicitly instructed imagined or covert speech, and reported quantitative decoding performance. The selected studies are organised along the processing pipeline, from experimental paradigms and data acquisition to preprocessing, feature extraction, representation learning, and classification. Across this body of work, binary imagined-speech tasks that rely on carefully designed time–frequency features and shallow classifiers often report accuracies above 80 percent, whereas multi-class word or phoneme recognition exhibits a much wider spread of performance and remains highly sensitive to dataset design and evaluation protocol. Recent trends favour convolutional and recurrent neural networks, temporal convolutional networks, and transfer learning strategies, which improve performance on some datasets but do not yet resolve fundamental issues of restricted vocabularies, inconsistent evaluation practices, and limited cross-subject generalisation. The review distils these observations into practical recommendations for dataset construction, model design, and evaluation protocols and outlines research directions aimed at more robust and clinically meaningful EEG-based imagined-speech BCIs.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_36-EEG_Based_Imagined_Speech_Decoding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-Based Privacy-Preserving Scheme for Integrity Verification and Fair Payment in Cloud Data Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170135</link>
        <id>10.14569/IJACSA.2026.0170135</id>
        <doi>10.14569/IJACSA.2026.0170135</doi>
        <lastModDate>2026-01-30T11:01:07.9970000+00:00</lastModDate>
        
        <creator>Li Zhenxiang</creator>
        
        <creator>Jin Yuanrong</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <subject>Cloud storage; blockchain; integrity verification; smart contract; privacy-preserving audit; fair payment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Ensuring the integrity of outsourced data in cloud storage remains a critical challenge, especially when existing auditing schemes rely on centralized third-party auditors (TPAs), which introduce single points of failure, privacy leakage risks, and a lack of economic fairness. Current blockchain-based approaches improve transparency but still fail to simultaneously achieve privacy-preserving verification and fair payment between data owners and cloud service providers (CSPs). To address this gap, this study proposes a blockchain-based integrity verification scheme that supports decentralized, privacy-preserving, and economically fair audits for encrypted cloud data. The proposed scheme integrates homomorphic linear authenticators (HLA) and multi-party computation (MPC) to verify data integrity without revealing plaintext, while smart contracts are used to enforce automatic payment or penalty based on audit results, ensuring fairness and accountability. A prototype implementation confirms the practicality of the system. Experimental results show that the audit latency is reduced by up to 35 per cent and smart contract gas consumption by approximately 30 per cent compared to existing schemes, while maintaining low computation and communication overhead. Security analysis demonstrates that the scheme provides data integrity, privacy protection, fairness, and resistance to replay and collusion attacks. Overall, this work offers a practical and scalable solution for secure cloud storage auditing.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_35-A_Blockchain_Based_Privacy_Preserving_Scheme_for_Integrity_Verification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Intrusion Detection Models in Internet of Medical Things (IoMT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170134</link>
        <id>10.14569/IJACSA.2026.0170134</id>
        <doi>10.14569/IJACSA.2026.0170134</doi>
        <lastModDate>2026-01-30T11:01:07.9670000+00:00</lastModDate>
        
        <creator>Aljorey Alqahtani</creator>
        
        <creator>Monir Abdullah</creator>
        
        <subject>Internet of Medical Things (IoMT); IDS; Explainable Artificial Intelligence (XAI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The Internet of Medical Things (IoMT) environment is highly sensitive due to the nature of medical data and its direct connection to patient health, making it a prime target for sophisticated cyberattacks. This study explores the key security challenges within IoMT, discusses how Machine Learning (ML) can enhance threat detection capabilities, and shows how Explainable Artificial Intelligence (XAI) contributes to improving transparency and understanding of model decisions, thereby increasing trust in these systems. It reviews recent advancements in Intrusion Detection Systems (IDS) specifically designed for IoMT networks, with a focus on integrating XAI and ML models. Furthermore, the study compares various algorithms and models, identifies research gaps, and discusses the different datasets and feature extraction techniques used to optimize features. The reported performance and efficiency improvements are derived from prior studies using different dataset sizes, data-splitting strategies, and feature-selection methods.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_34-A_Review_on_Intrusion_Detection_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Environmental Assessment of Chemicals: Artificial Intelligence for Predicting Persistence, Bioaccumulation, and Toxicity Properties</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170133</link>
        <id>10.14569/IJACSA.2026.0170133</id>
        <doi>10.14569/IJACSA.2026.0170133</doi>
        <lastModDate>2026-01-30T11:01:07.9200000+00:00</lastModDate>
        
        <creator>Ayoub Belaidi</creator>
        
        <creator>Rachid El Ayachi</creator>
        
        <creator>Mohamed Biniz</creator>
        
        <creator>Mohamed Oubezza</creator>
        
        <creator>Youssef Youssefi</creator>
        
        <subject>PBT prediction; persistence; bioaccumulation; toxicity; QSAR models; cheminformatics; environmental risk assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Early assessment of the persistence, bioaccumulation, and toxicity (PBT) of chemicals is a major challenge for environmental protection and international regulatory frameworks. The objective of this study is to compare the effectiveness of three graph-based deep learning architectures, namely a graph neural network (GNN), a message passing neural network (MPNN), and a graph attention network (GAT), for the binary classification of molecules as PBT or non-PBT. We compiled a regulatory dataset comprising 5,130 molecules annotated from public sources, such as ECHA and international POP lists. Molecular graphs were generated from SMILES using RDKit. The three models were implemented in PyTorch Geometric with homogeneous hyperparameters. The experiments were conducted with a scaffold split ratio of 80/10/10 and 10-fold cross-validation. Performance was evaluated using accuracy, AUC-ROC, and F1-score. Interpretability was examined using GAT attention maps and atomic contribution analysis. The MPNN model achieves the best overall performance (Accuracy = 0.92; ROC-AUC = 0.94; F1 = 0.91), followed by GAT (Accuracy = 0.89; ROC-AUC = 0.93). The basic GNN performs less well (Accuracy = 0.82; ROC-AUC = 0.89). The GAT model provides more detailed atomic explanations thanks to its attention weights, while the MPNN stands out for its predictive accuracy. The dataset includes annotations from heterogeneous experimental sources, which may introduce noise into the labels. The models rely solely on 2D graphs, without 3D conformational information. MPNN models can accelerate PBT pre-screening and help prioritize substances for experimental testing. GATs provide useful interpretations for understanding the substructures associated with PBT properties. This study provides the first reproducible and systematic comparison of GNN, MPNN, and GAT models applied to a large regulatory dataset dedicated to PBT, analyzing both performance and interpretability. These results highlight the potential of graph-based QSAR models for regulatory PBT screening and environmental risk assessment.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_33-Environmental_Assessment_of_Chemicals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Omics Integration Methods for AI-Based Breast Cancer Molecular Subtypes Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170132</link>
        <id>10.14569/IJACSA.2026.0170132</id>
        <doi>10.14569/IJACSA.2026.0170132</doi>
        <lastModDate>2026-01-30T11:01:07.8900000+00:00</lastModDate>
        
        <creator>Sajid Shah</creator>
        
        <creator>Azurah A Samah</creator>
        
        <creator>Siti Zaiton Mohd Hashim</creator>
        
        <creator>Sarahani Harun</creator>
        
        <creator>Zuraini Binti Ali Shah</creator>
        
        <creator>Farkhana Binti Muchtar</creator>
        
        <creator>Syed Hamid Hussain Madni</creator>
        
        <subject>Breast cancer; classification; integration methods; molecular subtypes; multi-omics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Breast cancer is one of the most life-threatening and heterogeneous diseases. It comprises various molecular subtypes, each with different characteristics, treatment outcomes, and prognoses. The proper integration of multi-omics data, including genomics, epigenomics, transcriptomics, and proteomics, is very important for enhancing the accuracy of breast cancer molecular subtype classification. Despite the increase in high-dimensional multi-omics data, selecting a suitable integration method for multi-omics data in breast cancer molecular subtype classification remains a crucial challenge. This study aims to evaluate and compare the effectiveness of multi-omics data integration methods, exploring their advantages and limitations and highlighting their performance in terms of accuracy, interpretability, scalability, and biological relevance. Our findings indicate that transformer-based integration methods are increasingly adopted in recent studies due to their superior ability to handle high-dimensional heterogeneous data and capture intricate cross-omics relationships while providing interpretable insights. Additionally, we provide a comparative overview of existing models, discuss key trends over the years, and offer actionable guidance for method selection based on dataset characteristics and research objectives. Finally, we suggest future research directions, emphasizing hybrid deep learning frameworks, graph-based models, and attention mechanisms to enhance predictive accuracy and biological interpretability.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_32-Multi_Omics_Integration_Methods_for_AI_Based_Breast_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable AI Techniques for Interpretable Breast Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170131</link>
        <id>10.14569/IJACSA.2026.0170131</id>
        <doi>10.14569/IJACSA.2026.0170131</doi>
        <lastModDate>2026-01-30T11:01:07.8570000+00:00</lastModDate>
        
        <creator>Tony K. Hariadi</creator>
        
        <creator>Qodri Aziz</creator>
        
        <creator>Slamet Riyadi</creator>
        
        <creator>Kamarul Hawari Ghazali</creator>
        
        <creator>Khairunnisa Binti Hasikin</creator>
        
        <creator>Tri Andi</creator>
        
        <subject>Breast cancer; DBT; Grad-CAM; ResNet-50; XAI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Breast cancer remains a major health risk for women worldwide, and thus early detection is vital for patient survival. Digital Breast Tomosynthesis (DBT) offers enhanced imaging capabilities relative to conventional mammography; yet, its quasi-3D characteristics pose distinct interpretability issues, often rendering deep learning models black boxes. This work tackles the issue of transparency by testing three Explainable Artificial Intelligence (XAI) methods: Gradient-weighted Class Activation Mapping (Grad-CAM), Score-CAM, and Local Interpretable Model-Agnostic Explanations (LIME). The ResNet-50 architecture was utilized to analyse a dataset of 396 DICOM images that had been specially pre-processed, including colour-mapping and balancing. The study used Insertion and Deletion Area Under the Curve (AUC) to carefully quantify how reliable the visual explanations were, in addition to standard metrics such as accuracy, which reached 94%. LIME and Score-CAM generated attention maps that were dispersed or inconsistent, whereas Grad-CAM consistently highlighted lesion-specific areas with high precision. Grad-CAM was the best method for analysing DBT findings, achieving the highest Insertion AUC of 0.9078. These results provide radiologists with a way to trust and check automated diagnoses, closing the gap between AI that works well and AI that is reliable in the clinic.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_31-Explainable_AI_Techniques_for_Interpretable_Breast_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecast of Guangzhou Port Logistics Demand Based on Back Propagation Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170130</link>
        <id>10.14569/IJACSA.2026.0170130</id>
        <doi>10.14569/IJACSA.2026.0170130</doi>
        <lastModDate>2026-01-30T11:01:07.8270000+00:00</lastModDate>
        
        <creator>Xiu Chen</creator>
        
        <creator>Lianhua Liu</creator>
        
        <creator>Lifen Zheng</creator>
        
        <subject>BP neural network; GM(1,1); combination model; port logistics demand</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>In recent years, China&#39;s port economy has developed rapidly. Guangzhou Port is an important node of the maritime transportation of the Belt and Road, connecting China&#39;s hinterland economy with the countries along the Belt and Road, and it plays a significant role in promoting the economic development of the hinterland. Scientifically and reasonably forecasting the freight development demand of Guangzhou Port is therefore of great significance, as it helps optimize the port&#39;s infrastructure construction and logistics system planning. This study selects port cargo throughput, foreign trade cargo throughput, and container cargo throughput as three index values to measure the freight development of Guangzhou Port. Firstly, the GM(1,1) model and the BP neural network model are constructed to predict the freight demand of Guangzhou Port. Then, the GM(1,1) model and the BP neural network model are combined to predict again. Comparing the three models, the results show that the accuracy of the combined model is better than that of either single model. The combined BP neural network and GM(1,1) model can be effectively applied to the prediction of Guangzhou Port logistics demand. Finally, the combined model is used to forecast the freight development demand of Guangzhou Port in 2022-2024, providing a reference for the port&#39;s development planning. The results further indicate that the BP-GM(1,1) combination model significantly outperforms single forecasting models in terms of prediction accuracy, highlighting its effectiveness and robustness in port logistics demand forecasting.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_30-Forecast_of_Guangzhou_Port_Logistics_Demand.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ghost-Vanilla Feature Maps: A Novel Hybrid Architecture for Efficient Fine-Grained Songket Motif Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170129</link>
        <id>10.14569/IJACSA.2026.0170129</id>
        <doi>10.14569/IJACSA.2026.0170129</doi>
        <lastModDate>2026-01-30T11:01:07.7970000+00:00</lastModDate>
        
        <creator>Yohannes </creator>
        
        <creator>Muhammad Ezar Al Rivan</creator>
        
        <creator>Siska Devella</creator>
        
        <creator>Tinaliah</creator>
        
        <subject>Ghost Module; fine-grained classification; lightweight deep learning; songket motif classification; VanillaNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>South Sumatra songket motifs present a challenging fine-grained classification task due to high inter-class similarity and substantial intra-class variability. This study proposes the Ghost-Vanilla Feature Map, a novel hybrid architecture that integrates low-cost ghost-generated features with the lightweight structural stability of VanillaNet to enhance discriminative feature learning while reducing computational burden. The proposed architecture is designed to address the inefficiency of conventional convolution-heavy networks in capturing subtle motif variations. Experimental evaluation on a dataset comprising 20 songket motif classes demonstrates that a ghost ratio of 2 achieves the best trade-off, attaining an accuracy of 0.98 with more than 75% parameter reduction. Increasing the ghost ratio to 3 preserves high classification performance with an accuracy of 0.97, while ratios of 4 and 5 further reduce model size at the expense of marginal accuracy degradation. Comparative results indicate that the Ghost-Vanilla Feature Map consistently outperforms lightweight CNN baselines, including MobileNetV3-Small, MobileNetV4-Conv-Small, EfficientNetV2-Small, and ShuffleNetV2. The proposed architecture substantially surpasses the Vanilla-only baseline, which achieves an accuracy of only 0.860 despite requiring 30.19 million parameters, highlighting the limitations of conventional convolution-dominant designs in fine-grained textile classification. The hybrid configuration with a ghost ratio of 2 delivers superior accuracy while nearly halving the parameter count and significantly reducing computational overhead. Overall, the Ghost-Vanilla Feature Map provides an efficient and highly discriminative solution for fine-grained songket motif classification, achieving strong performance while substantially reducing model complexity through a balanced hybrid representation.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_29-Ghost_Vanilla_Feature_Maps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Fruit-Picking Robot Using Convolutional Vision and Kinematic Control for Automated Harvesting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170128</link>
        <id>10.14569/IJACSA.2026.0170128</id>
        <doi>10.14569/IJACSA.2026.0170128</doi>
        <lastModDate>2026-01-30T11:01:07.7800000+00:00</lastModDate>
        
        <creator>Nurbibi Sairamkyzy Imanbayeva</creator>
        
        <creator>Bekzat Ondasynuly Amanov</creator>
        
        <creator>Aigerim Bakatkaliyevna Altayeva</creator>
        
        <creator>Dana Kairatovna Ashimova</creator>
        
        <subject>Fruit-picking robot; automated harvesting; computer vision; deep learning; kinematic control; Mixed Fruit Dataset; adaptive gripper; transformer model; agricultural robotics; orchard automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study presents the design, development, and evaluation of an intelligent fruit-picking robot that integrates convolutional vision, adaptive gripping mechanisms, and kinematic control to enable automated harvesting in diverse orchard environments. The proposed system combines a dual-manipulator platform with an extendable scissor-lift mechanism to achieve wide workspace coverage, allowing efficient access to fruits located at varying canopy heights. A deep learning-based recognition module, trained on a Mixed Fruit Dataset, is employed to detect and classify fruits under challenging conditions characterized by occlusions, variable illumination, and dense foliage. Visualization of feature activations confirms that the model effectively focuses on discriminative fruit regions, supporting precise alignment of the end-effector during grasping. The adaptive gripper, designed with compliant materials and multi-configuration geometry, ensures gentle handling across fruits of different shapes and sizes, minimizing mechanical damage. Experimental evaluations demonstrate that the system performs reliably across multiple fruit species, achieving accurate identification, robust segmentation, and stable manipulation in real-field scenarios. The integrated results highlight the robot’s potential to reduce labor dependency, improve harvesting efficiency, and support scalable automation in mixed-crop orchards. Future work will address enhancements in real-time processing, autonomous navigation, and cross-species generalization to advance fully autonomous orchard operations.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_28-Intelligent_Fruit_Picking_Robot_Using_Convolutional_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving Adaptive Biometric Framework with Reinforcement Learning and Blockchain-Enabled Multi-Factor Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170127</link>
        <id>10.14569/IJACSA.2026.0170127</id>
        <doi>10.14569/IJACSA.2026.0170127</doi>
        <lastModDate>2026-01-30T11:01:07.7630000+00:00</lastModDate>
        
        <creator>P. Selvaperumal</creator>
        
        <creator>Sakshi Malik</creator>
        
        <creator>Asfar H Siddiqui</creator>
        
        <creator>Dekhkonov Burkhon</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Garigipati Rama Krishna</creator>
        
        <creator>P N V Syamala Rao M</creator>
        
        <subject>Privacy-preserving authentication; multi-factor authentication; reinforcement learning; biometric verification; blockchain-enabled logging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Ensuring secure and privacy-preserving authentication in web applications remains a critical challenge due to the limitations of conventional single-factor approaches, which are vulnerable to attacks and fail to account for dynamic user behaviors. Existing multi-factor authentication (MFA) methods often rely on static rules, exposing users to unnecessary friction or weak security under evolving threat conditions. To address these gaps, this study proposes PPAB-RL, a Privacy-Preserving Adaptive Biometric framework leveraging Reinforcement Learning for intelligent MFA selection. The proposed method integrates homomorphic encryption for secure fingerprint feature storage, contextual risk scoring based on device, behavioral, and geolocation deviations, and RL-driven adaptive MFA to dynamically select authentication pathways ranging from password-only to multi-step biometric verification. Implementation is carried out in Python, with biometric processing performed on the SOCOFing dataset containing 6,000 fingerprint images, and blockchain-enabled logging for immutable and tamper-proof audit trails. Experimental results demonstrate that PPAB-RL achieves 96.8% authentication accuracy, surpassing traditional password-only (84.2%) and fingerprint-only (93.5%) methods, while maintaining low encrypted matching overhead and minimal user friction. Ablation studies confirm the essential contribution of each module (biometric preprocessing, encryption, risk analysis, and RL-based adaptation) to overall system robustness. The RL policy converges rapidly, allowing real-time adaptation to changing user behaviors and threat contexts. Overall, the proposed PPAB-RL framework establishes a highly secure, intelligent, and scalable authentication paradigm, combining encrypted biometrics, dynamic risk assessment, and blockchain validation, offering an innovative approach that can inspire further research in next-generation privacy-sensitive authentication systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_27-Privacy_Preserving_Adaptive_Biometric_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Health Information Exchange in Malaysia: Leveraging Interoperability on International Standards for Health Data Exchange</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170126</link>
        <id>10.14569/IJACSA.2026.0170126</id>
        <doi>10.14569/IJACSA.2026.0170126</doi>
        <lastModDate>2026-01-30T11:01:07.7470000+00:00</lastModDate>
        
        <creator>Mohd Noor A. M. N. L</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>Balasubramaniam Muniandy</creator>
        
        <creator>Lakshmi D</creator>
        
        <creator>Vinoth Kumar P</creator>
        
        <subject>Health data integration; interoperability; international health data standards; implementation; policy reforms; process innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study investigates the implementation of Health Information Exchange (HIE) in Malaysia, focusing on understanding the technological, policy, and financial challenges and opportunities associated with its adoption and effectiveness. Through a comprehensive survey approach targeting healthcare practitioners, policymakers, IT professionals, and patients, the study aims to elucidate the current state of HIE, assess interoperability with international standards, and identify pathways for enhancement. Key objectives include analyzing technological barriers, evaluating policy and regulatory impacts, and exploring sustainable financial models for HIE. Findings indicate a positive trajectory towards HIE implementation, underscored by a broad recognition of its potential to transform healthcare delivery. However, challenges such as system integration, policy clarity, infrastructural readiness, and privacy concerns remain. Recommendations for future improvement emphasize strengthening infrastructure, clarifying policies, enhancing security measures, providing continuous training, fostering innovation, and increasing patient engagement. Furthermore, this study highlights the alignment of HIE implementation with the Sustainable Development Goals (SDGs), particularly SDG 3, to ensure universal health coverage and enhance the healthcare workforce&#39;s capacity for process innovation. Incorporating international best practices and a validated framework will further strengthen Malaysia’s healthcare system in the digital age.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_26-Health_Information_Exchange_in_Malaysia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Enhanced Hierarchical Transformer for Multimodal Integration of Mammograms and Clinical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170125</link>
        <id>10.14569/IJACSA.2026.0170125</id>
        <doi>10.14569/IJACSA.2026.0170125</doi>
        <lastModDate>2026-01-30T11:01:07.7170000+00:00</lastModDate>
        
        <creator>N. Kannaiya Raja</creator>
        
        <creator>V S Krushnasamy</creator>
        
        <creator>Nurilla Mahamatov</creator>
        
        <creator>Prasad Devarasetty</creator>
        
        <creator>S.T. Gopukumar</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <subject>Breast cancer diagnosis; multimodal deep learning; Graph Attention Network; Bayesian uncertainty estimation; explainable AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Breast cancer is one of the leading causes of death among women worldwide, and current diagnostic techniques, founded on the manual examination of mammograms or individual clinical presentations, are often subjective, inconsistent, and poorly generalizable. Existing computer-aided diagnosis (CAD) systems are likewise characterized by significant weaknesses: poor multimodal integration, a lack of interpretability, and vulnerability to class imbalance. To address these inadequacies, the present study introduces an advanced multimodal deep learning framework named Hybrid Graph-Generative Transformer (HGGT), designed to integrate high-resolution mammographic images with the clinical, demographic, proteomic, and histological data pertinent to the patient. The HGGT network combines hierarchical Swin Transformer and CNN-based feature extraction, a Graph Attention Network (GAT) to model clinical variable interactions, and a contrastive cross-modal generative fusion mechanism to align the different modalities. The diagnostic head employs a Bayesian uncertainty-aware classifier to ensure more reliable prediction of malignancy. The model is trained with 5-fold cross-validation, the AdamW optimizer, and a cosine annealing scheduler, implemented in Python 3.10. On the CBIS-DDSM mammography dataset and a corresponding clinical dataset of over 400 patients, HGGT achieves 98.2% accuracy, 98.7% precision, 98.5% recall, 99.2% F1-score, and 99.1% AUC-ROC, a significant advantage over established models such as ResNet50, EfficientNet-B0, and a GAN-enhanced CNN classifier. Overall, the HGGT framework delivers a scalable, interpretable, and highly accurate diagnostic solution, a substantial improvement over existing unimodal and poorly integrated CAD systems for breast cancer detection.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_25-Attention_Enhanced_Hierarchical_Transformer_for_Multimodal_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Evaluation of CNN Architectures for Corn Leaf Diseases Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170124</link>
        <id>10.14569/IJACSA.2026.0170124</id>
        <doi>10.14569/IJACSA.2026.0170124</doi>
        <lastModDate>2026-01-30T11:01:07.6870000+00:00</lastModDate>
        
        <creator>M. Abdallah</creator>
        
        <creator>M. F. Abu-Elyazeed</creator>
        
        <subject>Artificial intelligence; Convolutional Neural Network; deep learning; image processing; corn diseases classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Corn has particular importance in the global food industry. Many diseases attack corn crops, which affects crop yield. Early classification and detection of these diseases are pivotal to preventing damage and achieving high crop productivity. Although deep learning, especially convolutional neural networks (CNNs), has accomplished remarkable results in image recognition, selecting the optimal architecture and working with limited datasets remain challenges. To address this gap, a transfer learning approach based on ImageNet weights was applied to classify three common corn diseases (gray spot, common rust, and blight), as well as healthy plants. The performance of six CNN architectures (DenseNet201, EfficientNetB0, VGG16, ResNet50, InceptionV3, and InceptionResNetV2) was evaluated for classification on a corn dataset. Based on the evaluation metrics, EfficientNetB0 achieves the highest training accuracy of 97.67% with a fast computational time of 71 seconds, performing more efficiently than the other architectures. These findings support the use of deep learning models, particularly EfficientNet, in the evolution of artificial-intelligence-based image classification applications.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_24-Comparative_Evaluation_of_CNN_Architectures_for_Corn_Leaf_Diseases_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Augmented Reality Learning Objects in Intelligent Tutoring Systems: A Conceptual Model for Engaging Learning Experiences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170123</link>
        <id>10.14569/IJACSA.2026.0170123</id>
        <doi>10.14569/IJACSA.2026.0170123</doi>
        <lastModDate>2026-01-30T11:01:07.6530000+00:00</lastModDate>
        
        <creator>Hind Tahir</creator>
        
        <creator>Najoua Hrich</creator>
        
        <creator>Salma El Boujnani</creator>
        
        <creator>Mohamed Khaldi</creator>
        
        <subject>Augmented reality; intelligent tutoring systems; E-learning; adaptation; engagement; personalized learning; learning objects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Despite the progress achieved by E-learning platforms, several limitations remain in sustaining learner engagement over time. With the rapid evolution of information and communication technologies, augmented reality has emerged as a powerful medium for designing pedagogical objects that are interactive, immersive, and adaptable to diverse learning contexts. The integration of augmented reality-based learning objects into intelligent tutoring systems enhances the educational process by providing learners with contextualized, multisensory experiences that align with their preferences and profiles. In this perspective, our objective is to propose a model for augmented reality-based learning objects within the context of an Intelligent Tutoring System. The proposed framework addresses a critical research gap: the absence of systematic architectural models that enable real-time, bidirectional adaptation between AR content representation and ITS decision-making mechanisms. Our model aims to strengthen learner motivation and reduce the risk of disengagement by dynamically adapting content to individual needs. It provides a structured foundation for the design and development of augmented reality-based learning objects within an intelligent tutoring system, ensuring that immersive resources are not only technologically innovative but also pedagogically aligned and personalized through the system’s diagnostic and feedback capabilities.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_23-Integrating_Augmented_Reality_Learning_Objects_in_Intelligent_Tutoring_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Sparrow Search Algorithm-Based Recurrent Neural Network for Short-Term Generation Load Forecasting of Hydropower Stations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170122</link>
        <id>10.14569/IJACSA.2026.0170122</id>
        <doi>10.14569/IJACSA.2026.0170122</doi>
        <lastModDate>2026-01-30T11:01:07.6230000+00:00</lastModDate>
        
        <creator>Liyuan Sun</creator>
        
        <creator>Yilun Dong</creator>
        
        <creator>Junwei Yang</creator>
        
        <subject>Power plant; load forecasting; Mode Decomposition; Long Short-Term Memory; Sparrow Search Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>To address the challenges of low accuracy and high randomness in short-term hydroelectric load forecasting within Multi-energy Coupled Virtual Power Plants (MC-VPPs), this study proposes a hybrid model integrating Variational Mode Decomposition (VMD), Long Short-Term Memory (LSTM) networks, and an Improved Sparrow Search Algorithm (ISSA). Traditional methods, such as exponential smoothing and multiple linear regression, often fail to capture nonlinear dynamics and external disturbances. The proposed framework first decomposes raw load data into four intrinsic mode functions (IMFs) via VMD to extract multi-scale features, including long-term trends, seasonal cycles, and short-term fluctuations. LSTM networks are then applied to model the temporal dependencies of each IMF. To enhance optimization, ISSA introduces a bidirectional sine-cosine search strategy, balancing global exploration and local exploitation to avoid premature convergence. Validated on 1,247 daily load records from a hydropower station in southwestern China, the ISSA-VMD-LSTM model achieves a 30.2% improvement in R&#178;, with reductions of 47.2% in RMSE, 47.8% in MAE, and 63.3% in MAPE, outperforming benchmarks like PSO-LSTM and SSA-VMD-LSTM. This demonstrates its robustness in handling nonlinearity and stochasticity. The model enhances MC-VPPs’ operational efficiency by enabling intelligent scheduling and renewable energy integration, with future applications extending to real-time forecasting and other renewable energy systems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_22-Improved_Sparrow_Search_Algorithm_Based_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Multimodal AI for Resilient Healthcare: Enhancing Early Risk Assessment in Critical Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170121</link>
        <id>10.14569/IJACSA.2026.0170121</id>
        <doi>10.14569/IJACSA.2026.0170121</doi>
        <lastModDate>2026-01-30T11:01:07.5930000+00:00</lastModDate>
        
        <creator>Shih-Wei Wu</creator>
        
        <creator>Chengcheng Li</creator>
        
        <creator>Te-Nien Chien</creator>
        
        <creator>Yao-Yu Zhang</creator>
        
        <subject>Resilient healthcare; multimodal AI; early risk assessment; critical care; clinical text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study develops an advanced multimodal AI framework to strengthen early risk assessment in critical care and support resilient healthcare delivery. Utilizing the MIMIC-III database, this research extracted structured variables and clinical notes from 26,829 adult patients. A text mining approach based on the BERTopic model was employed to generate topic embeddings from unstructured notes, which were subsequently integrated with 16 quantitative variables. Six machine learning models, namely Adaboost, Gradient Boosting, Support Vector Classification (SVC), Bagging, Logistic Regression, and MLP Classifier, were trained to predict short-term and long-term mortality outcomes. Model performance was evaluated through AUROC, accuracy, recall, precision, and F1-score metrics. The results demonstrate that integrating topic embeddings with structured data significantly improved short-term risk prediction. The SVC model, in particular, achieved an AUROC of 0.9137 for predicting 2-day mortality. Critical predictors identified included the Glasgow Coma Scale, White Blood Cell Count, and text-derived topics related to cardiovascular and neurological conditions. The study is based on a single-center dataset, limiting generalizability. Additionally, only a subset of textual data sources was analyzed, and improvements in long-term risk prediction were relatively modest. These findings demonstrate how multimodal AI can significantly improve early risk assessment and enhance resilience in critical care decision-making. This research pioneers the integration of BERTopic-based text mining with machine learning models for clinical risk prediction, highlighting the value of multimodal data fusion in improving predictive accuracy and enriching medical informatics.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_21-Advanced_Multimodal_AI-for_Resilient_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AI-Driven VR Learning Framework Using RL-Optimized Transformer Models for Personalized English Proficiency Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170120</link>
        <id>10.14569/IJACSA.2026.0170120</id>
        <doi>10.14569/IJACSA.2026.0170120</doi>
        <lastModDate>2026-01-30T11:01:07.5600000+00:00</lastModDate>
        
        <creator>A. Sri Lakshmi</creator>
        
        <creator>E. S. Sharmila Sigamany</creator>
        
        <creator>Revati Ramrao Rautrao</creator>
        
        <creator>K. Ezhilmathi</creator>
        
        <creator>Dr. Bhuvaneswari Pagidipati</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Dr. Adlin Sheeba</creator>
        
        <subject>AI-driven learning; Virtual Reality; English language education; reinforcement learning; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Effective English language learning demands adaptive, interactive, and flexible instructional support, which traditional e-learning systems and existing AI tutors struggle to provide due to limited immersion, static feedback mechanisms, isolated task structures, and the absence of robust reward-driven learning strategies. Although prior studies on VR-based learning environments and Natural Language Processing (NLP) have reported enhanced learner motivation and engagement, most existing solutions suffer from fixed task sequencing, limited real-time linguistic intelligence, and inadequate grammar and pronunciation correction capabilities. To address these challenges, this study proposes a Virtual Reality–based architecture named the Self-Evolving Neural Intelligence Tutor (SENIT), driven by Curriculum Reinforcement Learning and Hierarchical Adaptive Weighting. SENIT integrates a fine-tuned T5 transformer for grammar refinement and prosody-aware feedback, while a reinforcement learning agent dynamically adjusts task difficulty and lesson progression based on learner performance. Developed using Python and TensorFlow and deployed within a Unity3D VR environment, SENIT enables realistic conversational simulations and multimodal learner assessment. Experimental evaluation on a dedicated VR English Learning Dataset demonstrates grammar and pronunciation accuracy improvements of 90% and 81%, respectively, outperforming existing models by approximately 12 percentage points. Additionally, learners achieved notable fluency gains and high engagement scores, highlighting SENIT’s effectiveness in delivering personalized, immersive language learning experiences.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_20-An_AI_Driven_VR-Learning_Framework_Using_RL_Optimized_Transformer_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Decision Model for Tunnel Cross-Passage Layout Based on Multi-Source Sensor Data Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170119</link>
        <id>10.14569/IJACSA.2026.0170119</id>
        <doi>10.14569/IJACSA.2026.0170119</doi>
        <lastModDate>2026-01-30T11:01:07.5300000+00:00</lastModDate>
        
        <creator>Xuejun Di</creator>
        
        <creator>Musha Ruzi</creator>
        
        <creator>Angang Liu</creator>
        
        <subject>Multi-source sensor data fusion; dynamic cross-passage deployment decision-making for tunnels; Kalman filter; reinforcement learning; non-dominated sorting genetic algorithm II</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The layout of tunnel cross-passages is a critical aspect of tunnel construction and operational safety. Traditional methods, primarily based on static design, struggle to adapt to complex and variable geological and construction environments. This study proposes a dynamic decision model for cross-passage layout based on multi-source sensor data fusion to enhance the scientific rigor and adaptability of cross-passage design. A three-dimensional data fusion mechanism integrating “temporal-spatial-statistical” dimensions was developed. A Bayesian network quantifies uncertainty, a Kalman filter processes time-series data, and PCA extracts spatial features. Reinforcement learning and the non-dominated sorting genetic algorithm II (NSGA-II) are used to achieve multi-objective optimization of safety coverage and construction efficiency. The proposed model significantly outperforms traditional methods on multiple indicators, as verified by 100 Monte Carlo simulations and actual tunnel experiments. The dynamic scheme increased the safety coverage rate from 72.4% to 91.7%, shortened the average evacuation distance by 38.7% (from 248 meters to 152 meters), saved resources by 14.2% (about 9.8 million yuan), and shortened the construction period by 3-6 days. The comprehensive utility value is 0.91, which is 19% higher than that of the traditional static method, with enhanced robustness. The model realizes safe, economical, and efficient real-time optimization of the cross-passage layout. It provides a transferable technical path and data support for intelligent tunnel construction under complex geological conditions.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_19-Dynamic_Decision_Model_for_Tunnel_Cross_Passage_Layout.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metaphorical Meaning Integration in Poetry Based on Online Discourse Data: Analysis from a Cognitive Linguistics Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170118</link>
        <id>10.14569/IJACSA.2026.0170118</id>
        <doi>10.14569/IJACSA.2026.0170118</doi>
        <lastModDate>2026-01-30T11:01:07.5000000+00:00</lastModDate>
        
        <creator>Ying LIU</creator>
        
        <creator>Jiting XUE</creator>
        
        <subject>Online discourse data; poetic metaphor; cognitive linguistics; Artificial Intelligence; semantic fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>As an emerging literary form, online poetry has garnered significant attention due to its rapid dissemination, diverse styles, and complex metaphorical expressions. However, the process of metaphorical meaning integration in poetry is difficult to quantify, necessitating support from Artificial Intelligence technologies. This study integrates cognitive linguistics theory with AI algorithms to propose a three-dimensional fusion analysis framework—“cognitive theory + specific AI algorithms + online discourse data”—for dissecting metaphorical meaning integration in online poetry. By constructing a comprehensive methodology encompassing metaphor identification, semantic mapping, and integration analysis, this study offers a novel quantitative pathway for metaphor research in poetry. Experimental validation demonstrates that the integrated approach—leveraging Support Vector Machines (SVM), Convolutional Neural Networks (CNN), BERT pre-trained models, and the DeepSeek-R1 large model—achieves outstanding performance in metaphor recognition accuracy, semantic association quantification, and fusion effectiveness evaluation, fully embodying both theoretical and practical value.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_18-Metaphorical_Meaning_Integration_in_Poetry_Based_on_Online_Discourse_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Competitive Co-Evolutionary Approach for the Nurse Scheduling Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170117</link>
        <id>10.14569/IJACSA.2026.0170117</id>
        <doi>10.14569/IJACSA.2026.0170117</doi>
        <lastModDate>2026-01-30T11:01:07.4830000+00:00</lastModDate>
        
        <creator>Maizatul Farhana Mohamad Nazri</creator>
        
        <creator>Zeratul Izzah Mohd Yusoh</creator>
        
        <creator>Halizah Basiron</creator>
        
        <creator>Azlina Daud</creator>
        
        <subject>Nurse Scheduling Problem; competitive co-evolution; evolutionary algorithms; healthcare scheduling; constraint optimisation; adversarial evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The Nurse Scheduling Problem (NSP) is a constrained combinatorial optimisation problem that plays a critical role in healthcare scheduling and constraint optimisation. Traditional evolutionary approaches often rely on static fitness evaluation, which struggles to balance feasibility and solution quality under complex real-world constraints. This study proposes a competitive co-evolutionary algorithm for the NSP that introduces adaptive adversarial evaluation, where candidate schedules are assessed under dynamic competitive pressure to expose structural weaknesses and guide evolution more effectively. The proposed competitive NSP is evaluated on a 20-nurse, one-week scheduling instance and compared against a classical Genetic Algorithm (GA) under identical conditions for 30 independent runs. Experimental results show that the competitive NSP achieves a mean best penalty of 447.28, compared to 651.30 for the classical GA, corresponding to an average improvement of approximately 31%. The competitive approach further exhibits smoother convergence behaviour across generations, indicating stronger optimisation dynamics and improved robustness. These findings demonstrate that competitive co-evolution provides an effective and practical alternative to static fitness-based evolutionary methods for nurse scheduling, with broader applicability to healthcare scheduling and constraint optimisation problems.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_17-A_Competitive_Co_Evolutionary_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GAN-Based Generation of Pre Disaster SAR for Earthquake Interferometry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170116</link>
        <id>10.14569/IJACSA.2026.0170116</id>
        <doi>10.14569/IJACSA.2026.0170116</doi>
        <lastModDate>2026-01-30T11:01:07.4500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kengo Ohiwane</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>GAN; SAR; earthquake; disaster; DEM; pix2pixHD; CycleGAN; interferometric SAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study proposes an earthquake disaster detection method based on interferometric synthetic aperture radar (InSAR) using synthetic pre‑disaster SAR data generated from optical satellite images. Conventional InSAR analysis requires pre‑ and post‑disaster SAR image pairs acquired under strict orbital and observation constraints, which makes it difficult to obtain suitable pre‑disaster data. In the proposed approach, a digital elevation model (DEM) and land‑cover information are combined with optical imagery, and generative adversarial networks (GANs), specifically pix2pixHD and CycleGAN, are used to generate pseudo‑SAR data that include both amplitude and phase components. Experimental results using Sentinel‑1 SAR and Sentinel‑2 multispectral instrument (MSI) data demonstrate that pix2pixHD achieves higher conversion accuracy than CycleGAN, with a peak signal‑to‑noise ratio (PSNR) of 21.25 dB and a histogram intersection of 65.25%, and that the generated pre‑disaster SAR images can be interferometrically combined with post‑disaster SAR observations to detect earthquake‑induced surface changes in the 2024 Noto Peninsula event. These findings indicate that the proposed method can extend the applicability of InSAR to areas and events where suitable pre‑disaster SAR acquisitions are unavailable, contributing to rapid earthquake disaster assessment.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_16-GAN_Based_Generation_of_Pre_Disaster_SAR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ForenVoice-Secure: Robust and Privacy-Aware Audio Data Mining for Forensic Speaker Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170115</link>
        <id>10.14569/IJACSA.2026.0170115</id>
        <doi>10.14569/IJACSA.2026.0170115</doi>
        <lastModDate>2026-01-30T11:01:07.4370000+00:00</lastModDate>
        
        <creator>Mubarak Albathan</creator>
        
        <subject>Forensic audio data mining; forensic voice analytics; voice biometrics; criminal identification; speaker recognition; anti-spoofing; deepfake and replay detection; convolutional neural networks; long short-term memory; federated learning; privacy-preserving biometrics; law enforcement intelligence systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Speech is now routine evidence in criminal investigations, but forensic audio rarely matches the clean assumptions of standard speaker recognition. Clips are short, noisy, codec-compressed, and channel-mismatched, and they are increasingly exposed to replay and synthetic speech manipulation. We therefore cast criminal voice identification as forensic audio data mining, aiming to extract a stable identity structure from heterogeneous and potentially adversarial evidence while respecting operational and privacy constraints. This study proposes ForenVoice-Secure, a unified pipeline that combines robust representation learning, spoof-aware decisioning, and privacy-preserving training. Audio is mapped to log-Mel spectrograms and encoded with a CNN, while an LSTM aggregates temporal identity cues from irregular utterances. Robustness is improved through multi-task learning (identity + spoof), adversarial training, and spectro-temporal consistency checks for replay/deepfake artifacts. Privacy is addressed using federated learning, keeping raw recordings local and sharing only model updates. Experiments on VoxCeleb2, ASVspoof 2021, and a forensic-style speaker comparison corpus yield 98.43% mean identification accuracy with strong class-balanced performance (macro F1 = 98.10%, precision = 98.22%, recall = 98.01%) and statistically significant gains over strong baselines across repeated folds (F1: p=8.0&#215;10^(-4); precision: p=1.1&#215;10^(-3); recall: p=9.0&#215;10^(-4)). The model remains lightweight (≈4.3M parameters, ≈1.2 GFLOPs per 3 s), enabling near real-time inference with modest overhead from consistency checks (&lt;6%). Overall, ForenVoice-Secure provides a compact and reproducible forensic audio data mining framework for scalable, spoof-resilient, privacy-aware law-enforcement identification.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_15-ForenVoice_Secure_Robust_and_Privacy_Aware_Audio_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correlation Characteristics-Based Channel Estimation Method for GFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170114</link>
        <id>10.14569/IJACSA.2026.0170114</id>
        <doi>10.14569/IJACSA.2026.0170114</doi>
        <lastModDate>2026-01-30T11:01:07.4200000+00:00</lastModDate>
        
        <creator>Xiaotian Li</creator>
        
        <creator>Xiaoqing Yan</creator>
        
        <creator>Zitian Zhao</creator>
        
        <creator>Jiameng Pei</creator>
        
        <subject>GFDM; channel estimation; correlation; pilot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Generalized Frequency Division Multiplexing (GFDM) has broad application prospects due to its flexible subcarrier structure and low out-of-band leakage. Traditional channel estimation methods for GFDM systems rely on inserting a large number of pilot sequences, which reduces the data transmission rate. To address this problem, a channel estimation method for GFDM systems based on subcarrier correlation is proposed. First, according to the time–frequency characteristics of the prototype filter in the GFDM system, a pilot sequence with a two-dimensional time–frequency block structure (CTFP) is designed. This sequence is adjusted based on the parameters of the prototype filter. Then, the correlation among subcarriers is utilized for channel estimation, which effectively reduces the pilot overhead and improves the data transmission rate and interference resistance of the system. Simulation results show that under the same total time slot overhead, the mean square error and bit error rate performance of the proposed correlation-based method are similar to those of existing methods, while the data transmission rate is improved by 14.97% compared with conventional methods.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_14-Correlation_Characteristics_Based_Channel_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Forensic Framework for Unmanned Aerial Vehicle Investigations: Empirical Validation with the DJI Mavic 3 Classic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170113</link>
        <id>10.14569/IJACSA.2026.0170113</id>
        <doi>10.14569/IJACSA.2026.0170113</doi>
        <lastModDate>2026-01-30T11:01:07.3900000+00:00</lastModDate>
        
        <creator>Nidhiba Parmar</creator>
        
        <creator>Naveen Kumar Chaudhary</creator>
        
        <subject>Drone forensics; unmanned aerial vehicle; DJI Mavic 3 Classic; digital forensics; metadata analysis; digital twin; anomaly detection; evidence recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid proliferation of unmanned aerial vehicles has introduced significant challenges for digital forensics, particularly due to their increasing involvement in criminal, surveillance, and security-related incidents. The heterogeneous hardware architectures, proprietary data formats, encryption mechanisms, and volatile storage characteristics of modern drones complicate reliable evidence recovery and analysis. This study proposes a comprehensive forensic framework for unmanned aerial vehicle investigations, empirically validated using the DJI Mavic 3 Classic. The proposed methodology integrates a conceptual forensic model with practical investigation procedures, including multi-source data acquisition, metadata analysis, anomaly detection, and digital twin-based reconstruction to support event correlation and timeline reconstruction. Four representative case studies are conducted to evaluate the framework’s effectiveness: flight log recovery, firmware modification detection, metadata-driven espionage analysis, and reconstruction of deleted media. Experimental results demonstrate evidence recovery rates of up to 92%, timeline reconstruction accuracy of 95%, and anti-forensic activity detection rates of 100%. The framework explicitly addresses challenges associated with proprietary formats, encryption, and data volatility in drone ecosystems. The proposed approach provides actionable guidance for drone forensics practitioners, researchers, and policymakers, contributing toward standardized and reliable forensic investigation processes for contemporary unmanned aerial vehicle platforms.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_13-A_Comprehensive_Forensic_Framework_for_Unmanned_Aerial_Vehicle_Investigations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI Mathematical: Solving Math Challenges Using Artificial Intelligence Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170112</link>
        <id>10.14569/IJACSA.2026.0170112</id>
        <doi>10.14569/IJACSA.2026.0170112</doi>
        <lastModDate>2026-01-30T11:01:07.3570000+00:00</lastModDate>
        
        <creator>Trinh Quang Minh</creator>
        
        <creator>Ngo Thi Lan</creator>
        
        <creator>Bui Xuan Tung</creator>
        
        <creator>Phan Thanh Tuyen</creator>
        
        <subject>AI math solvers; Artificial Intelligence; STEM education; MathGPT.org; Math-GPT.ai; StudyX.ai; Python models; Olympiad problems; automated reasoning; Kaggle AIMO Progress Prize</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Artificial Intelligence (AI) has emerged as a transformative tool for solving mathematical challenges across diverse domains, ranging from algebra and geometry to calculus and number theory. This study investigates the role of AI in mathematics by analyzing three representative platforms—MathGPT.org, Math-GPT.ai, and StudyX.ai—and by proposing ten Python-based problem-solving models tailored to Olympiad-style problems. The methodology integrates rule-based reasoning, brute-force search, and heuristic strategies, while benchmarking is inspired by the AI Math Olympiad (AIMO) Progress Prize competition on Kaggle. A comparative evaluation was conducted to assess accuracy, reasoning depth, and computational efficiency. Results show that AI solvers can provide step-by-step solutions, interactive visualizations, and adaptive learning support, but their performance varies depending on problem type and strategy. This study highlights both the potential and limitations of AI in mathematics education and research, emphasizing the need for automated model selection (AutoML) and formal benchmarking to strengthen credibility. The findings demonstrate that AI can simultaneously promote automated problem-solving and enhance personalized STEM learning.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_12-AI_Mathematical_Solving_Math_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strengthening Indonesia’s Unmanned Aerial Vehicle Manufacturing Industry: A Technology-Focused Strategic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170111</link>
        <id>10.14569/IJACSA.2026.0170111</id>
        <doi>10.14569/IJACSA.2026.0170111</doi>
        <lastModDate>2026-01-30T11:01:07.3270000+00:00</lastModDate>
        
        <creator>Satrio Utomo</creator>
        
        <creator>Gani Soehadi</creator>
        
        <creator>Sarjono Sarjono</creator>
        
        <creator>Yanuar Iman Dwiananto</creator>
        
        <creator>Adi Akhmadi Pamungkas</creator>
        
        <creator>Ellia Kristiningrum</creator>
        
        <creator>Budi Setiadi Sadikin</creator>
        
        <creator>Hardono Hardono</creator>
        
        <creator>Rizki Arizal Purnama</creator>
        
        <creator>Helen Fifianny</creator>
        
        <subject>Unmanned Aerial Vehicle; technology adoption; strategic development; SWOT; AHP; QSPM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The development of national Unmanned Aerial Vehicle (UAV) technology represents a strategic imperative that requires immediate implementation. This study provides strategic recommendations to strengthen Indonesia’s UAV industry by employing SWOT analysis, the Analytical Hierarchy Process (AHP), and the Quantitative Strategic Planning Matrix (QSPM). Sixteen key internal and external factors were identified, with SWOT mapping situating UAVs in Quadrant I (Aggressive Strategy) at coordinates +1.24 and +0.60. AHP prioritization indicates that the strengths–opportunities (S–O) strategy (0.348) is of highest importance, emphasizing infrastructure enhancement and the adoption of advanced technologies. IFAS–EFAS integration confirms Wulung UAV’s aggressive growth position, while internal strengths account for 37.1% of overall strategic influence. QSPM analysis further validates the S–O strategy as optimal, with the highest internal (4.88) and external competitive (4.63) impact scores. Implementation of this strategy necessitates immediate action focused on manufacturing infrastructure enhancement, technological adoption, development of technical human capital, organizational capability strengthening, establishment of a domestic supply chain and supporting industries, and enforcement of robust industrial governance.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_11-Strengthening_Indonesias_Unmanned_Aerial_Vehicle_Manufacturing_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HELM-BRCA: Hybrid Embedding and Learning Model for BRCA Methylation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170110</link>
        <id>10.14569/IJACSA.2026.0170110</id>
        <doi>10.14569/IJACSA.2026.0170110</doi>
        <lastModDate>2026-01-30T11:01:07.2970000+00:00</lastModDate>
        
        <creator>Hemalatha D</creator>
        
        <creator>N Gomathi</creator>
        
        <subject>Breast cancer classification; DNA methylation; TCGA-BRCA; Truncated SVD; autoencoder; ensemble learning; deep learning; epigenomic biomarkers; hybrid model; machine learning pipeline; high-dimensional data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Breast cancer remains a highly heterogeneous disease that demands advanced computational techniques capable of revealing significant biological patterns in high-dimensional epigenomic data. DNA methylation profiles generated by the Illumina HumanMethylation450 platform yield rich, clinically relevant signals but introduce significant analytical challenges due to their high dimensionality, sparsity, and nonlinear structure. This work presents a novel memory-efficient hybrid learning architecture that combines Truncated Singular Value Decomposition (SVD), a deep autoencoder, and a multi-model ensemble classifier to boost subtype classification performance using TCGA-BRCA methylation data. To circumvent memory limits and prevent system crashes, a probe-subset extraction strategy combined with variance-based feature selection was employed to ensure fast and safe data loading from the Xena repository. While the autoencoder extracts compact nonlinear manifold representations, SVD captures the global linear variance structure. The fused latent space is then modelled by an ensemble comprising Random Forest, XGBoost, and a lightweight Keras neural classifier, allowing the system to exploit different decision boundaries and achieve robust generalization. Experimental investigation across several architectures demonstrates high predictive performance, with ROC-AUC scores exceeding 0.99 and accuracies above 0.96 for the basic CNN and MLP models. Furthermore, the proposed hybrid ensemble improves stability and precision, outperforming traditional baselines and confirming the complementary nature of spectral and deep feature extraction. The framework is well suited to large-scale biomedical data analytics scenarios. In conclusion, this work provides an efficient hybrid machine learning framework for breast cancer methylation analysis, offering a strong platform for improved prognostic modelling and the development of epigenetic biomarkers.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_10-HELM_BRCA_Hybrid_Embedding_and_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Readability-Driven Prompting Framework for Accurate Grade-Specific EFL Narrative Creation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170109</link>
        <id>10.14569/IJACSA.2026.0170109</id>
        <doi>10.14569/IJACSA.2026.0170109</doi>
        <lastModDate>2026-01-30T11:01:07.2500000+00:00</lastModDate>
        
        <creator>Ronald William Marbun</creator>
        
        <creator>Makoto Shishido</creator>
        
        <subject>Artificial Intelligence (AI); English as a Foreign Language (EFL); large language models (LLMs); readability metrics; narrative generation; prompt engineering; educational technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The integration of Artificial Intelligence (AI) into English as a Foreign Language (EFL) education offers new opportunities for developing adaptive and engaging learning materials. Narrative-based content is central to improving reading comprehension, vocabulary acquisition, and learner motivation. However, maintaining grade-appropriate readability in AI-generated narratives remains a major challenge. This study presents Readability-Driven Prompting (RDP), a novel technique designed to enhance the accuracy and efficiency of large language models in generating grade-level narratives. Using GPT-4o-mini, three prompting strategies—CEFR Keyword-Constrained Prompting (CKCP), Instruction-Based Prompting (IBP), and the proposed RDP—were applied to produce narratives for 7th-grade (A1–A2 CEFR) and 10th-grade (B1–B2 CEFR) learners. The outputs were evaluated using Flesch Reading Ease (FRE), Dale–Chall (DC) readability metrics, lexical analysis, and human assessments. Experimental results indicate that the RDP approach achieves higher alignment with target readability levels and improved lexical appropriateness compared to baseline methods, demonstrating a scalable and effective strategy for generating educational narratives, particularly for beginner-level learners.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_9-A_Readability_Driven_Prompting_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Technological Solution to Predict the Need for Health Professionals in Health Centers Using Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170108</link>
        <id>10.14569/IJACSA.2026.0170108</id>
        <doi>10.14569/IJACSA.2026.0170108</doi>
        <lastModDate>2026-01-30T11:01:07.2170000+00:00</lastModDate>
        
        <creator>Fiorella Patricia Mirano Surquislla</creator>
        
        <creator>Gianfranco Henry Ore Paredes</creator>
        
        <creator>Aguilar-Alonso Igor</creator>
        
        <subject>Technological solution; Random Forest; healthcare sector; healthcare professional prediction; human resource management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The objective of this research is to develop a technological solution based on the Random Forest algorithm to predict healthcare workforce requirements in public healthcare centers in Peru, addressing staff shortages and unequal workforce distribution. A national dataset from the Peruvian Ministry of Health (MINSA) covering the period 2017–2024, segmented by levels of care (I, II, and III), was used to capture the operational differences within the healthcare system. The model, validated using an 80/20 split, achieved outstanding performance, with coefficients of determination (R&#178;) exceeding 0.99 and minimal mean absolute percentage errors (MAPE) across all levels of care. The main contribution of this work lies in converting estimated healthcare attendances into an operational metric of “required healthcare professionals”, integrated into a web-based architecture built on React, Flask, and PostgreSQL. The findings identify medical specialty and year as the most influential predictive variables. It is concluded that the proposed tool is robust for optimizing strategic healthcare workforce planning, enabling a more equitable and data-driven allocation of medical specialists.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_8-Proposed_Technological_Solution_to_Predict_the_Need_for_Health_Professionals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Cognitive Pathway Architecture for Robust and Dialect-Aware English Reading Comprehension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170107</link>
        <id>10.14569/IJACSA.2026.0170107</id>
        <doi>10.14569/IJACSA.2026.0170107</doi>
        <lastModDate>2026-01-30T11:01:07.1870000+00:00</lastModDate>
        
        <creator>Jillellamoodi Naga Madhuri</creator>
        
        <creator>Tonmoyee Doley</creator>
        
        <creator>Purnachandra Rao Alapati</creator>
        
        <creator>Vijaya Kumar P</creator>
        
        <creator>Linginedi Ushasree</creator>
        
        <creator>Marvin D. Mayormente</creator>
        
        <creator>Aseel Smerat</creator>
        
        <subject>Cognitive Twin; Dialectal Transfer; Adaptive Tasking; English reading comprehension; personalized learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Reading comprehension models frequently struggle to accommodate linguistic diversity, especially dialectal variations within the English language that disrupt semantic alignment and fairness. To overcome these drawbacks, this study proposes the Dual Cognitive Pathway-Based Dialect-Aware Cognitive Twin Framework (NeuroTwin-DialectaLearn), a new framework that combines principles from cognitive science, sociolinguistic expertise, and adaptive learning methods. The framework comprises two parallel comprehension routes: a Lexico-Semantic Pathway that processes standard English, and a Dialectal-Semantic Pathway that normalizes dialect and aligns semantics. These pathways interact through adaptive attention fusion within a Cognitive Twin model, which instantiates key cognitive processes including lexical processing, syntactic parsing, semantic integration, inductive reasoning, and answer generation. The system is implemented in Python and PyTorch and evaluated on the English Classroom QA Dataset augmented with synthetic dialectal variants. Comprehension accuracy reaches 98.1 per cent, the average response time is 14.3 seconds, and the learners’ improvement rate is 18.9 per cent, substantially higher than baseline QA systems and improved BERT models. The framework shows consistent performance across dialects, with fewer vocabulary-related mistakes and higher inference consistency, proving an effective tool for dialect-conscious and cognitively grounded reading comprehension. Overall, this research provides a linguistically inclusive, versatile, and interpretable next-generation smart educational system.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_7-Dual_Cognitive_Pathway_Architecture_for_Robust_and_Dialect_Aware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an AI-Powered Cyber Resilience Model: A Systematic Evaluation of Frameworks Against Emerging Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170106</link>
        <id>10.14569/IJACSA.2026.0170106</id>
        <doi>10.14569/IJACSA.2026.0170106</doi>
        <lastModDate>2026-01-30T11:01:07.1530000+00:00</lastModDate>
        
        <creator>Chhaya Jahajeeah-Suntoo</creator>
        
        <creator>Sheeba Armoogum</creator>
        
        <subject>Cyber resilience; cybersecurity framework; Artificial Intelligence; emerging threats; Zero Trust; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study presents a Systematic Literature Review of cyber resilience frameworks against emerging threats, published between 2010 and 2025. While numerous frameworks exist, their ability to anticipate, withstand, and evolve in the face of sophisticated attacks remains uncertain. The study maps frameworks across nine resilience goals, namely Identify, Protect, Detect, Respond, Recover, Govern, Anticipate, Withstand, and Evolve, creating a goal-wise evidence matrix and quantification. Using the PRISMA methodology, 11,027 publications were identified, of which 55 studies met the inclusion criteria for critical analysis. The results indicate that most frameworks accentuate Protect and Detect functions at 87.72 per cent, whereas Govern at 17.54 per cent, Withstand at 28.07 per cent, and Evolve at 24.56 per cent remain under-represented. Only 45.61 per cent of frameworks explicitly address emerging threats such as Artificial Intelligence-driven or Internet of Things-based attacks. Strengths observed include situational awareness, Artificial Intelligence and Machine Learning integration, dynamic defence mechanisms, Blockchain, and adoption of Zero Trust principles. The key weaknesses lie in the undervalued cyber resilience goals of Govern, Withstand, and Evolve, limited empirical validation, and a narrow scope in addressing emerging threats, gaps that limit resilience against sophisticated attacks. Based on these findings, an evidence-informed Artificial Intelligence-powered cyber resilience model is proposed that privileges adaptability and future-proofing. This review highlights the urgent need for cyber resilience frameworks to expand beyond reactive measures and to embed forward-looking resilience capabilities.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_6-Towards_an_AI_Powered_Cyber_Resilience_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DriveRight: An Embedded AI-Based Multi-Hazard Detection and Alert System for Safe and Sustainable Driving</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170105</link>
        <id>10.14569/IJACSA.2026.0170105</id>
        <doi>10.14569/IJACSA.2026.0170105</doi>
        <lastModDate>2026-01-30T11:01:07.1230000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Rex Bacarra</creator>
        
        <creator>Ahamed Fayeez Bin Tuani Ibrahim</creator>
        
        <creator>Mazen Farid</creator>
        
        <creator>Aqeel Al-Hilali</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Embedded AI; computer vision; intelligent transportation; IoT-based ADAS; deep learning; real-time object detection; Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>Recent advances in Artificial Intelligence (AI) and Computer Vision have significantly enhanced the potential of Advanced Driver Assistance Systems (ADAS). However, existing solutions remain limited by high computational cost, single-function design, and dependence on expensive sensors such as radar and LiDAR. This study presents DriveRight, an embedded AI-based driver-assistance system that integrates multi-scenario hazard detection and real-time object detection and alerting using a single low-cost vision sensor on a Raspberry Pi platform. The system leverages a simulation-to-deployment pipeline, combining CARLA-based synthetic training environments with TensorFlow deep learning models, including SSD Inception v2, MobileNet-SSD, and Faster R-CNN. Experimental results show that Faster R-CNN achieved 92.1% detection accuracy for vehicles and 90.3% for traffic signs, while MobileNet-SSD achieved real-time performance at 14.6 frames per second (FPS) with minimal latency of 2.8 seconds on embedded hardware. Field tests validated the system’s ability to accurately detect and classify stop signs, vehicles, and lane deviations under varying lighting and motion conditions, triggering timely alerts to the driver. The prototype demonstrates a cost-effective and energy-efficient AI solution (&lt; 12 W) for intelligent transportation systems. The findings establish the feasibility of deploying IoT-based ADAS and deep learning–driven driver-assistance technologies in low-cost, sustainable embedded platforms, bridging the gap between research-grade ADAS and practical real-world deployment.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_5-DriveRight_An_Embedded_AI_Based_Multi_Hazard_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Robust Intrusion Detection: Exploring Feature Selection, Balancing Strategies, and Deep Learning for Minority Class Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170104</link>
        <id>10.14569/IJACSA.2026.0170104</id>
        <doi>10.14569/IJACSA.2026.0170104</doi>
        <lastModDate>2026-01-30T11:01:07.0930000+00:00</lastModDate>
        
        <creator>Khalid LABHALLA</creator>
        
        <creator>Amal BATTOU</creator>
        
        <subject>Network intrusion detection system; imbalanced data; minority class detection; deep learning; feature selection; balancing techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The increasing connectivity of systems and the rapid growth of the Internet have intensified cybersecurity threats. Conventional signature-based intrusion detection methods have been shown to be deficient, especially against zero-day attacks. An alternative approach is the deployment of Intrusion Detection Systems (IDS) based on deep learning algorithms. However, these systems face a significant challenge in detecting minority classes of attacks, such as Remote-to-Local (R2L) and User-to-Root (U2R) attacks, which, although rare, are of critical importance, and misclassifying them is costly. False negatives are therefore reduced by coupling feature selection techniques (Chi-square, correlation, Information Gain, Extreme Gradient Boosting (XGBoost), autoencoder), oversampling methods (Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN)), and deep learning models (Deep Neural Network (DNN), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and the hybrid CNN-LSTM model). The present study uses the NSL-KDD dataset, with a particular focus on the minority classes R2L, which represents 2.61% of the dataset, and U2R, which represents 0.08%. The findings indicate that data balancing is paramount: ADASYN facilitates 100% U2R detection, while SMOTE enhances R2L accuracy to above 95%. Correlation-based and autoencoder feature selection proved the most effective. CNN models are well suited to U2R classification, while DNN or CNN-LSTM models yield optimal results for R2L; DNN remains the most stable model overall. For the two minority classes, the most effective pipelines are Correlation + SMOTE + DNN, achieving 93.84% recall for U2R and 99.88% for R2L, and Autoencoder + SMOTE + CNN-LSTM, achieving 89.66% recall for R2L and 99.68% for U2R.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_4-Towards_Robust_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Knowledge-Based Chatbot to Mitigate Travel Anxiety</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170103</link>
        <id>10.14569/IJACSA.2026.0170103</id>
        <doi>10.14569/IJACSA.2026.0170103</doi>
        <lastModDate>2026-01-30T11:01:07.0300000+00:00</lastModDate>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Hungchih Yu</creator>
        
        <creator>Dingfang Kang</creator>
        
        <subject>Tourist-centered design; user-centered evaluation; knowledge base; context-aware chatbot; travel anxiety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>With the emergence of intelligent chatbots, AI-driven conversational agents are increasingly being used to help tourists manage travel challenges and obtain effective solutions. Travel anxiety constitutes a significant impediment to tourism, substantially influencing travelers&#39; future intentions. Given its multifaceted nature, spanning from subjective experiences to complex logistical arrangements, this study developed a fully functional tourism chatbot system using a tourist-centered design method to provide targeted guidance for travel anxiety mitigation. The knowledge-based chatbot implemented user-centered evaluation methods by recruiting seven participants who were randomly assigned to scenarios across six major global travel regions. Results from the participants’ short-answer responses and Likert-scale usability ratings indicated that this knowledge-based system delivers highly informative, context-aware, and expert-level recommendations through multifaceted strategy implementation. The findings suggest that such AI-driven interventions are effective in addressing specific travel challenges, with further implications for user-centered design discussed herein.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_3-An_Intelligent_Knowledge_Based_Chatbot_to_Mitigate_Travel_Anxiety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Anomaly Prediction in Encrypted Network Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170102</link>
        <id>10.14569/IJACSA.2026.0170102</id>
        <doi>10.14569/IJACSA.2026.0170102</doi>
        <lastModDate>2026-01-30T11:01:07.0130000+00:00</lastModDate>
        
        <creator>Sina Ahmadi</creator>
        
        <subject>Machine learning; deep learning; recurrent neural networks; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>The rapid growth of computer networks has increased demand for more sophisticated tools for network traffic analysis and monitoring. The increasing reliance on networks has amplified the need for robust security and intrusion detection mechanisms. Numerous studies have sought to develop efficient methods for fast and accurate intrusion detection, each addressing the challenge from different perspectives. A common limitation among these approaches is their reliance on expert-engineered features extracted from network traffic. This dependency makes them less adaptable to emerging attack techniques and changes in normal traffic patterns, often resulting in suboptimal performance. In this study, we propose a method leveraging recent advancements in artificial neural networks and deep learning, specifically using recurrent neural networks (RNNs), for network traffic analysis and intrusion detection. The key advantage of this approach is its ability to autonomously extract features from network traffic without human intervention. Trained on the ISCX IDS 2012 dataset, the proposed model achieved an accuracy of 0.99 in distinguishing between malicious and normal traffic.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_2-AI_Driven_Anomaly_Prediction_in_Encrypted_Network_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Model Adaptive Q-Learning Framework for Robust Portfolio Management in Stochastic Markets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2026</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2026.0170101</link>
        <id>10.14569/IJACSA.2026.0170101</id>
        <doi>10.14569/IJACSA.2026.0170101</doi>
        <lastModDate>2026-01-30T11:01:06.9030000+00:00</lastModDate>
        
        <creator>Sharmin Sultana</creator>
        
        <creator>Md Borhan Uddin</creator>
        
        <creator>Masuma Akter Semi</creator>
        
        <creator>Shahanaj Akther</creator>
        
        <creator>Urmi Chakraborty</creator>
        
        <creator>Khandakar Rabbi Ahmed</creator>
        
        <subject>Reinforcement learning; Q-Learning; tabular reinforcement learning; portfolio management; dynamic asset allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 17(1), 2026</description>
        <description>This study presents TAQLA, a new Tabular Adaptive Q-Learning Agent for portfolio management in stochastic financial markets. TAQLA rests on a multi-model reinforcement learning (RL) architecture that integrates parameter-adaptive Q-Learning mechanisms with softmax-based exploration to reconcile short-term profit maximization with long-term capital preservation. The method is compared with vanilla Q-Learning, SARSA, and a random trading policy using simulated equity market data. Empirical analysis shows that TAQLA performs better in profitability, risk-adjusted performance, and drawdown minimization, with a final portfolio value of $1687.45 (+68.74% over initial capital), a Sharpe ratio of 1.41, and a maximum drawdown of just 12.8%. Q-Learning and SARSA, on the other hand, yield Sharpe ratios below 1.0 and drawdowns exceeding 18%. Parameter sensitivity analysis across β (softmax temperature), α (learning rate), and γ (discount factor) reveals that aggressive exploration (β ≈ 1.0–1.5) and moderate discounting (γ ≈ 0.4–0.6) generate the most profitable and robust outcomes. These outcomes establish TAQLA as a robust RL-based adaptive portfolio control method under uncertainty, with improved capital appreciation and robustness to adverse market conditions.</description>
        <description>http://thesai.org/Downloads/Volume17No1/Paper_1-A_Multi_Model_Adaptive_Q_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Diagnosis Using Hybrid of Machine Learning and Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612135</link>
        <id>10.14569/IJACSA.2025.01612135</id>
        <doi>10.14569/IJACSA.2025.01612135</doi>
        <lastModDate>2025-12-31T12:27:06.3230000+00:00</lastModDate>
        
        <creator>Raed Alazaidah</creator>
        
        <creator>Moath Alomari</creator>
        
        <creator>Hamza Mashagba</creator>
        
        <creator>Musab Iqtait</creator>
        
        <creator>Azlan B. Abd Aziz</creator>
        
        <creator>Hayel Khafajeh</creator>
        
        <creator>Omar Khair Alla Alidmat</creator>
        
        <creator>Ghassan Samara</creator>
        
        <creator>Haneen Alzoubi</creator>
        
        <creator>Samir Salem Al-Bawri</creator>
        
        <subject>Classification; deep learning; feature selection; hybrid models; machine learning; medical diagnosis; medical image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid development of medical practices and imaging technologies has produced substantial growth in the volume of medical image data each year. This research aims to develop a hybrid approach that integrates Machine Learning (ML) and Deep Learning (DL) techniques to enhance the accuracy and reliability of medical image classification for diagnostic purposes. The complexity and growing volume of medical imaging data motivate an investigation of the limitations of standalone ML or DL and of their combination into a single framework. Medical image processing starts with normalization, followed by noise reduction, grayscale conversion, and histogram equalization. This research uses VGG16 and ResNet50 alongside MobileNet and InceptionV3 for feature extraction, then applies ten different ML algorithms, including SVM, MLP, and Random Forest, for classification. Five public medical image datasets from Kaggle are used: COVID-19 chest X-rays, melanoma skin lesions, pneumonia chest X-rays, acute stroke facial images, and various eye diseases. Hybrid models display superior performance compared with standalone ML or DL models on accuracy, precision, recall, and F1-score. Across multiple datasets, the MobileNet+MLP combination delivers the most accurate results, demonstrating reliable and efficient performance. The developed AI diagnostic tool offers a scalable system combining accuracy and interpretability to enhance clinical decision outcomes.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_135-Medical_Diagnosis_Using_Hybrid_of_Machine_Learning_and_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trajectory Planning of Shipbuilding Welding Manipulator Based on Improved Whale Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612127</link>
        <id>10.14569/IJACSA.2025.01612127</id>
        <doi>10.14569/IJACSA.2025.01612127</doi>
        <lastModDate>2025-12-31T12:27:06.3070000+00:00</lastModDate>
        
        <creator>Caiping Liang</creator>
        
        <creator>Hao Yuan</creator>
        
        <creator>Chen Wang</creator>
        
        <creator>Wenxu Niu</creator>
        
        <creator>Yansong Zhang</creator>
        
        <subject>Shipboard welding robotic arm; quintic polynomial; Improved Whale Optimization Algorithm; time-optimal trajectory planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Time-optimal trajectory planning for shipboard welding robotic arms is a challenging problem due to strong kinematic constraints and the nonlinear coupling between trajectory parameters and execution time. Although various intelligent optimization algorithms have been combined with robotic arm trajectory planning in existing studies, most approaches primarily focus on algorithmic performance improvement and lack a clear formulation of time optimization within polynomial trajectory planning. To address this gap, this study proposes an Improved Whale Optimization Algorithm (IWOA) based on the traditional quintic polynomial trajectory planning method. In the proposed method, the trajectory execution time is explicitly formulated as the optimization objective under kinematic constraints, and the IWOA is designed to stably and efficiently search the time parameter space of the quintic polynomial trajectory. Specifically, chaotic sequence initialization is employed to enhance population distribution, an adaptive weight mechanism is introduced to balance global exploration and local exploitation, and a hybrid co-optimization strategy combining differential evolution and genetic operators is integrated to improve robustness and convergence stability. Simulation experiments are conducted to evaluate the effectiveness of the proposed algorithms. The results demonstrated that, while satisfying robotic arms kinematic constraints, the proposed method achieves an 18.3% reduction in operating time compared with the unoptimized trajectory. These results indicate that the proposed approach provides a systematic and effective solution for time-efficient trajectory planning of shipboard welding robotic arms.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_127-Trajectory_Planning_of_Shipbuilding_Welding_Manipulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relationship Management System: A Data-Driven Framework for Modeling, Monitoring, and Restoring Human–AI Relationships</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612134</link>
        <id>10.14569/IJACSA.2025.01612134</id>
        <doi>10.14569/IJACSA.2025.01612134</doi>
        <lastModDate>2025-12-31T12:27:06.2770000+00:00</lastModDate>
        
        <creator>Ilia Sedoshkin</creator>
        
        <subject>Human-AI interaction; relationship modeling; trust dynamics; conversational systems; affective computing; regression detection; recovery protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>We present the Relationship Management System (RMS), a modular framework for modeling, monitoring, and repairing human–AI relationships. Grounded in Knapp’s Relational Development Model and Social Penetration Theory, RMS operationalizes ten stages of relationship growth and decline, linking depth of disclosure with stage-appropriate behavior. An Airtable-backed schema (Relationship Stages, Conversational Arcs, Session Directives) separates master content from user-specific state. A Trust Evaluator quantifies trust, engagement, and disclosure after each session and drives stage transitions. A weighted Regression Risk Score anticipates degradation by tracking shifts in trust, drops in engagement and frequency, patterns of topic avoidance, and conflict cues. When risk climbs, RMS activates empathy-centered Recovery Arcs that acknowledge strain and guide repair. This two-way, data-informed loop delivers early warning, adjusts pacing to context, and offers gentle off-ramps when needed, improving long-term engagement while preserving interpretability and keeping operational costs low.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_134-Relationship_Management_System_A_Data_Driven_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Sentiment Analysis on the Emergence of Pre-Trained Generative Model-Based Applications in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612111</link>
        <id>10.14569/IJACSA.2025.01612111</id>
        <doi>10.14569/IJACSA.2025.01612111</doi>
        <lastModDate>2025-12-31T12:27:06.2430000+00:00</lastModDate>
        
        <creator>Frans Mikael Sinaga</creator>
        
        <creator>Jefri Junifer Pangaribuan</creator>
        
        <creator>Kelvin</creator>
        
        <creator>Ferawaty</creator>
        
        <creator>Andree Emmanuel Widjaja</creator>
        
        <subject>Dynamic sentiment; fine-grained; IndoBERT; multi-platform big data; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The emergence of pre-trained generative model–based applications has intensified sentiment dynamics within Indonesia’s multi-platform digital ecosystem, where sentiment intensity and temporal fluctuations occur simultaneously. To overcome these challenges, this study extends IndoBERT by incorporating a time-aware tokenization mechanism within a fine-grained dynamic sentiment analysis framework. This mechanism is designed to explicitly capture the evolution of sentiment over time. Instead of relying on external embeddings or implicit timestamps, temporal information is injected directly into the IndoBERT tokenizer through explicit temporal tokens, enabling end-to-end temporal adaptation during fine-tuning. We utilized a large-scale dataset harvested from various platforms—including TikTok, Twitter (X), YouTube, and forums—alongside AI-generated content from Gemini, ChatGPT, and Copilot. The dataset was annotated into five fine-grained sentiment classes: very positive, positive, neutral, negative, and very negative. The experimental evaluation demonstrates that the proposed time-aware IndoBERT model attains an average accuracy of 96.38%, exceeding the performance of the baseline BERT and RoBERTa models. Furthermore, ablation studies validate that the inclusion of time-aware tokenization yields quantifiable performance gains, proving that explicit temporal encoding refines sentiment sensitivity and offers sharper insights into the shifting public opinion in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_111-Dynamic_Sentiment_Analysis_on_the_Emergence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of Security Challenges and Solutions in the Internet of Drones: Recent Trends and Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612133</link>
        <id>10.14569/IJACSA.2025.01612133</id>
        <doi>10.14569/IJACSA.2025.01612133</doi>
        <lastModDate>2025-12-31T12:27:06.2130000+00:00</lastModDate>
        
        <creator>Amine Hedfi</creator>
        
        <creator>Aida Ben Chehida Douss</creator>
        
        <creator>Ryma Abassi</creator>
        
        <creator>Mohamed Aymen Chalouf</creator>
        
        <creator>Om Saad Hamdi</creator>
        
        <subject>Unmanned Aerial Vehicle; Internet of Drones; cybersecurity; attacks; threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The Internet of Drones (IoD) is a decentralized structure that links drones to regulate airspace and offer inter-location navigation services. With the increasing use of drones in both civilian and military applications, the importance of the IoD has grown significantly. It reshapes the current internet landscape, making it more extensive and all-encompassing. IoD establishes a connection between drones and the network, which exposes the IoD network to numerous privacy and security issues often associated with IoT ecosystems. To ensure optimal performance from IoD applications, it is crucial to maintain a secure environment devoid of privacy and security risks. Privacy and security concerns have obstructed the overall effectiveness of the IoD framework. This study conducts an extensive examination of security concerns and solutions related to IoD security. It delves into IoD-specific security requirements and sheds light on the latest developments in IoD security research. Hence, we first provide an overview of the overall context and structure of the IoD. We then identify the security issues linked to it. Afterward, we present the most recent security measures developed specifically for the IoD. Finally, we go through the challenges and potential areas for future research in the realm of IoD security.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_133-A_Comprehensive_Analysis_of_Security_Challenges_and_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Learning for Signals Automatic Modulation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612132</link>
        <id>10.14569/IJACSA.2025.01612132</id>
        <doi>10.14569/IJACSA.2025.01612132</doi>
        <lastModDate>2025-12-31T12:27:06.1670000+00:00</lastModDate>
        
        <creator>Muhammad Moinuddin</creator>
        
        <creator>Hitham K. Alshoubaki</creator>
        
        <creator>Omar Ayad Alani</creator>
        
        <creator>Ubaid M. Al-Saggaf</creator>
        
        <creator>Karim Abed-Meraim</creator>
        
        <subject>Automatic modulation classification; deep learning; machine learning; EfficientNet; Transformer Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Classifying signals, or modulation classification, is a crucial step in developing communication receivers. A common practice is to extract features before categorizing the signal, which requires implementing lengthy preprocessing techniques. Due to breakthroughs in neural network topologies, machine learning (ML) algorithms, and optimization techniques, referred to as &quot;deep learning&quot; (DL), we have witnessed a vast degree of change over the previous five years. Advanced deep learning algorithms can be applied to the same automatic modulation classification problem and generate excellent outcomes without requiring time-consuming, manual, and complex feature extraction methods. In recent years, various DL techniques have been explored for automatic modulation classification (AMC). However, it has been observed that these techniques are effective only for higher Signal-to-Noise Ratio (SNR) values. To overcome this challenge, we propose a hybrid DL-based AMC technique by combining a customized EfficientNet with a customized Transformer Block. The transformer block is used to enhance the DL performance for the lower SNR values. The performance of the proposed hybrid model is tested on a benchmark dataset, RadioML2018.01A, and compared with the state-of-the-art existing DL methods, demonstrating the superiority of the proposed hybrid model.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_132-Hybrid_Deep_Learning_for_Signals_Automatic_Modulation_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable and Ethical AI-Driven Recognition in Robotics: Integrating ESG Analytics and Human–Robot Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612131</link>
        <id>10.14569/IJACSA.2025.01612131</id>
        <doi>10.14569/IJACSA.2025.01612131</doi>
        <lastModDate>2025-12-31T12:27:06.1370000+00:00</lastModDate>
        
        <creator>Fatma Mallouli</creator>
        
        <creator>Lobna Amouri</creator>
        
        <creator>Mejda Dakhlaoui</creator>
        
        <creator>Nada Chaabane</creator>
        
        <creator>Imen Gmach</creator>
        
        <creator>In&#232;s Hammami</creator>
        
        <creator>Hanen Chakroun</creator>
        
        <creator>Ahmed Mellouli</creator>
        
        <creator>Sonda Elloumi</creator>
        
        <creator>Abdelwaheb Trabelsi</creator>
        
        <creator>Heba Elbeh</creator>
        
        <creator>Mohamed Elkawkagy</creator>
        
        <subject>Artificial intelligence; robotic recognition; human–robot interaction; explainable AI; ESG analytics; sustainable robotics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Environmental, Social, and Governance (ESG) information has become an essential component in evaluating corporate responsibility and long-term resilience. However, its incremental value in predicting firm profitability remains insufficiently understood. This study investigates whether integrating ESG analytics with traditional financial ratios enhances the machine-learning classification of firms into high- and low-profitability categories. Using a multi-industry dataset that combines firm-level ESG pillar scores with accounting-based financial indicators, three supervised learning models—Decision Trees, Random Forests, and Support Vector Machines (SVM)—are developed and evaluated. Model validation is conducted through cross-validation, and predictive performance is assessed using Accuracy, F1-score, and the Area Under the ROC Curve (AUROC). To isolate the specific contribution of ESG factors, ablation experiments and feature-importance analyses are performed. The findings reveal that the Random Forest model provides the most consistent and robust predictive performance (Accuracy = 0.89, F1-score = 0.88, AUROC = 0.93), with Environmental and Governance dimensions emerging as the most influential ESG predictors. The novelty of this research lies in establishing a clear mechanism linking ESG analytics to financial performance and in proposing an ESG-aware evaluation framework, rather than introducing a new predictive model or dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_131-Sustainable_and_Ethical_AI_Driven_Recognition_in_Robotics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Achieving Long-Term Autonomy: A Self-Correcting Deep Reinforcement Learning Agent for Edge IoT Using Digital Twin-Based Drift Compensation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612130</link>
        <id>10.14569/IJACSA.2025.01612130</id>
        <doi>10.14569/IJACSA.2025.01612130</doi>
        <lastModDate>2025-12-31T12:27:06.1030000+00:00</lastModDate>
        
        <creator>Jhon Monroy</creator>
        
        <creator>Miguel Paco</creator>
        
        <creator>Miguel Portella</creator>
        
        <creator>Geral Basurco</creator>
        
        <creator>Jeymi Valdivia</creator>
        
        <creator>Fiorela Jara</creator>
        
        <creator>Guido Anco</creator>
        
        <subject>Deep Reinforcement Learning (DRL); Edge AI; Internet of Things (IoT); digital twin; sensor drift; fault tolerance; autonomous systems; self-correcting systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Ensuring long-term autonomy in Edge AI systems remains one of the most persistent challenges in environmental monitoring and biorisk management. Over time, the degradation of low-cost sensors—particularly sensor drift—leads to cumulative measurement errors, distorted state perception, and catastrophic decision failures in Deep Reinforcement Learning (DRL) agents. This paper proposes a novel Self-Correcting Deep Reinforcement Learning (SCDRL) framework that enables robust, long-term autonomy through in-loop drift compensation. The proposed Self-Correcting Agent (SCA) integrates a dual-input architecture combining (i) the local, drifted sensor reading and (ii) a stable reference prediction from a macro-scale Digital Twin (DT). By learning to correlate both signals, the agent implicitly estimates and neutralizes sensor bias in real time, achieving self-calibration without human intervention. To validate this approach, a nine-year simulation of autonomous water management was conducted using real-world hourly climate data from Arequipa, Peru. Results show that a conventional “blind” DRL agent suffers complete performance collapse as drift accumulates, whereas the proposed SCA maintains stable operation indefinitely. Quantitatively, the SCA achieved a 722% higher cumulative reward (415,662 vs. 57,556) and a 53% reduction in plant stress (RMSE 0.2238 vs. 0.4762). These findings establish a validated blueprint for fault-tolerant Edge AI, demonstrating that the fusion of local sensing with digital twin predictions enables self-calibrating agents capable of sustained, reliable autonomy in real-world, resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_130-Achieving_Long_Term_Autonomy_A_Self_Correcting_Deep_Reinforcement_Learning_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Soft and Hard Mixture-of-Experts Approach for Improved ADR Extraction from Patient-Generated Narratives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612129</link>
        <id>10.14569/IJACSA.2025.01612129</id>
        <doi>10.14569/IJACSA.2025.01612129</doi>
        <lastModDate>2025-12-31T12:27:06.0730000+00:00</lastModDate>
        
        <creator>Oumayma Elbiach</creator>
        
        <creator>Hanane Grissette</creator>
        
        <creator>El Habib Nfaoui</creator>
        
        <subject>Adverse Drug Reaction; Mixture-of-Experts; Soft and Hard MoE; sequence-to-sequence; patient narratives; biomedical text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Traditional single-architecture neural models, including monolithic transformer-based and sequence-to-sequence architectures, often struggle to extract Adverse Drug Reactions (ADRs) from patient-generated health narratives due to informal language, high linguistic variability, and complex relationships among drugs, diseases, and adverse events. Although Mixture-of-Experts (MoE) architectures have demonstrated strong performance across various Natural Language Processing (NLP) tasks, their effectiveness for ADR extraction from unstructured patient narratives remains largely unexplored. This study investigates the application of MoE architectures, specifically Soft MoE and Hard MoE, for ADR extraction from patient-generated content. The task is formulated as a sequence-to-sequence generation problem and evaluated on the PsyTAR dataset using both strict and relaxed evaluation metrics. Experimental results demonstrate that Soft MoE consistently outperforms Hard MoE, achieving a relaxed F1-score of 80.40% compared to 79.40%. These findings highlight the critical role of expert-routing strategies in capturing linguistic variability in patient narratives and establish MoE architectures as a competitive and reliable approach for automated ADR extraction in biomedical text mining and pharmacovigilance applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_129-A_Soft_and_Hard_Mixture_of_Experts_Approach_for_Improved_ADR_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An RBAC-Based Access Control and Security Architecture for UAV Networks in Precision Agriculture Using Software-Defined Drone Networking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612128</link>
        <id>10.14569/IJACSA.2025.01612128</id>
        <doi>10.14569/IJACSA.2025.01612128</doi>
        <lastModDate>2025-12-31T12:27:06.0400000+00:00</lastModDate>
        
        <creator>Nadia Kammoun</creator>
        
        <creator>Aida Ben Chehida Douss</creator>
        
        <creator>Ryma Abassi</creator>
        
        <subject>Unmanned Aerial Vehicles; Software-Defined Drone Network; role-based access control; security; attacks; trust management; authentication; access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, are widely employed in applications such as surveillance, delivery, mapping, and precision agriculture. Their flexibility, mobility, and cost effectiveness have accelerated their adoption in both civilian and industrial domains. However, the rapid evolution of UAV technologies introduces significant challenges related to limited resources, data processing constraints, and, most critically, security and privacy. Cyberattacks targeting UAV systems may result in data breaches, mission failures, operational disruptions, and risks to human safety. In our previous work, we proposed a lightweight identity authentication scheme based on Elliptic Curve Cryptography (ECC) and integrated it into a Software-Defined Drone Network (SDDN) architecture to ensure strong security with low computational overhead. Building on this foundation, the present study focuses on the agricultural domain, where UAVs are increasingly used for crop monitoring, precision farming, and environmental data collection. Due to the sensitivity of agricultural data and the involvement of multiple stakeholders, fine-grained access control is essential. The main contribution of this work is the design and evaluation of an SDDN-based security framework that integrates role-based access control (RBAC) with trust management to enable secure, scalable, and controlled UAV operations in agricultural environments. The framework restricts user actions according to predefined roles, improving system security and manageability. Simulation results demonstrate that the proposed approach effectively enforces access policies, enhances trust-aware decision making, and maintains low computational overhead suitable for resource-constrained UAV networks. Validation is conducted using Python and YAML-based configurations on Google Colab, confirming the practicality of the proposed solution.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_128-An_RBAC_Based_Access_Control_and_Security_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Petri Net Approach with Automated ANFIS Rule Learning for Modelling Real-Time Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612126</link>
        <id>10.14569/IJACSA.2025.01612126</id>
        <doi>10.14569/IJACSA.2025.01612126</doi>
        <lastModDate>2025-12-31T12:27:05.9800000+00:00</lastModDate>
        
        <creator>Abdelilah Serji</creator>
        
        <creator>El Bekkaye Mermri</creator>
        
        <creator>Mohammed Blej</creator>
        
        <subject>Fuzzy petri net; adaptive neuro-fuzzy inference system; expert systems; fuzzy logic; real-time system; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In this paper, we propose a modelling approach for real-time intelligent systems using Fuzzy Petri Nets (FPNs), a formalism that generates dynamic fuzzy rules, supports uncertainty, and enables concurrent reasoning. FPNs offer a well-defined tool for dynamically evaluating Fuzzy Production Rules (FPRs), Certainty Factors (CFs), and truth degrees, and for making real-time decisions. To reduce the complexity of manually constructed or probabilistically modelled fuzzy rules, we extend the modelling toolkit with the Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS learns membership functions and Sugeno-type rules from numeric datasets, resulting in a richer and more accurate set of rules. At the novelty level, we propose a rule-integrating scheme that maps Sugeno rules learned by ANFIS into FPN transitions to obtain more clearly explained reasoning and traceable rule execution within a neuro-fuzzy Petri net. Based on these learned rules, the FPN executes them within a two-layer real-time architecture (prediction and decision) while maintaining concurrent inference and real-time execution. The hybrid methodology is verified by applying it to a real-time expert system for solar collector cleaning. Results from the experiments demonstrate that, in terms of predictive performance, ANFIS-induced rules drastically boost accuracy (from 85% to 93%) and reduce Root Mean Square Error (RMSE) from 4.82 to 2.57 relative to those generated by a single probabilistic FPN model. These results indicate that combining neural learning with an FPN-based expert system makes real-time decision-making much more accurate and reliable.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_126-A_Fuzzy_Petri_Net_Approach_with_Automated_ANFIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Multi-Scale Enhanced U-Net for Efficient Land Cover Classification of Remote Sensing Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612125</link>
        <id>10.14569/IJACSA.2025.01612125</id>
        <doi>10.14569/IJACSA.2025.01612125</doi>
        <lastModDate>2025-12-31T12:27:05.9470000+00:00</lastModDate>
        
        <creator>Syed Zaheeruddin</creator>
        
        <creator>K. Suganthi</creator>
        
        <subject>Land cover classification; remote sensing; UNet; satellite images; AMSE-U-Net; multi-scale features; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Accurate land cover classification from remote sensing images is essential for environmental monitoring, urban development, crop assessment, and climate studies. Deep learning has substantially improved semantic segmentation, especially with encoder-decoder designs like U-Net. Still, ordinary U-Net models struggle to capture multi-scale contextual relationships, distinguish narrow borders, and effectively emphasize region-distinctive traits. This work presents an Advanced Multi-Scale Enhanced U-Net (AMSE-U-Net) to address these difficulties. The AMSE-U-Net combines (i) multi-scale feature extraction, (ii) squeeze-and-excitation channel attention, and (iii) attention-gated skip connections. The model improves learning of both local and global features while suppressing irrelevant background noise. Experiments on standard remote sensing datasets show significant improvements in Intersection over Union (IoU), pixel precision, and boundary delineation compared to standard U-Net and similar models. The proposed AMSE-U-Net generalizes better with only a modest increase in computational cost, making it well suited for monitoring land cover and the environment.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_125-Advanced_Multi_Scale_Enhanced_U_Net_for_Efficient_Land_Cover_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Tuning Language Models for Pedagogy-Aligned Lesson Plans in Cybersecurity Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612124</link>
        <id>10.14569/IJACSA.2025.01612124</id>
        <doi>10.14569/IJACSA.2025.01612124</doi>
        <lastModDate>2025-12-31T12:27:05.9170000+00:00</lastModDate>
        
        <creator>Samar Althagafi</creator>
        
        <creator>Miada Almasre</creator>
        
        <creator>Wafaa Alsaggaf</creator>
        
        <creator>Lana Alshawwa</creator>
        
        <subject>Fine-Tuning; large language models; lesson planning; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Lesson planning in cybersecurity is time-consuming and cognitively demanding, especially for less experienced instructors, and manual approaches often lack flexibility across courses and contexts. We present a framework for generating pedagogy-aligned lesson plans using a large language model, integrating measurable objectives (Revised Bloom’s Taxonomy), explicit learning theories, and evidence-based teaching strategies. We constructed a domain-specific knowledge base for cybersecurity topics and organized it with sentence-level embeddings and KMeans clustering. A pretrained large language model (GPT-3.5) was then fine-tuned to produce lesson plans that follow this structure. On a held-out test set, the model achieved BLEU 73.5, ROUGE-1 82.2, ROUGE-L 78.2, and BERTScore F1 97.4, reflecting strong lexical and semantic fidelity to reference plans. Although the study is limited to a single academic program and relies primarily on automated metrics, the framework offers practical support for instructors by reducing preparation time, enhancing consistency, and ensuring alignment with pedagogical standards. Future work will expand the curricular scope and involve expert review and classroom validation to assess educational impact.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_124-Fine_Tuning_Language_Models_for_Pedagogy_Aligned_Lesson_Plans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Based Framework for Automated Cell Cleavage Detection and Timing in Embryo Time-Lapse Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612123</link>
        <id>10.14569/IJACSA.2025.01612123</id>
        <doi>10.14569/IJACSA.2025.01612123</doi>
        <lastModDate>2025-12-31T12:27:05.8870000+00:00</lastModDate>
        
        <creator>Yasmin Alharbi</creator>
        
        <creator>Sultanah Alshammari</creator>
        
        <creator>Aisha Elaimi</creator>
        
        <subject>In vitro fertilization; Time-Lapse Microscopy (TLM) videos; AI-based framework; cleavage stage; cleavage onset timing; optical character recognition; Hours Post-Insemination (HPI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In vitro fertilization (IVF) has become a primary therapeutic intervention for couples worldwide addressing infertility challenges. IVF success depends critically on embryo quality assessment, where cell cleavage timing serves as a key developmental parameter. Traditional morphological evaluation methods suffer from inter-observer variability and labor-intensive manual analysis. This study presents an automated AI-based framework for cleavage stage detection and cleavage onset timing estimation from Time-Lapse Microscopy (TLM) videos to assist embryologists in embryo selection. The proposed YOLO-based approach addresses significant class imbalance through selective data augmentation and random undersampling strategies. To ensure precise temporal data, an OCR (Optical Character Recognition) library was integrated to automatically read and record the Hours Post-Insemination (HPI) timestamps from the video frames. The proposed framework accurately identifies cell division stages up to the seven-cell stage with a 1-2 hour mean timing delay post-insemination. The framework achieves an overall accuracy of 86.61%, F1-score of 86.24%, and precision of 86.24% in cleavage stage classification, demonstrating significant improvements over existing methods, particularly in the intermediate and later stages (4-cell to 8-cell transitions) that previous research has struggled to detect accurately. Automated extraction of morphokinetic parameters enables objective embryo assessment, reducing subjectivity in clinical decision-making. The proposed framework demonstrated significant improvements over previous research, which frequently has trouble accurately classifying beyond early cleavage stages. This has implications for improving the selection of good-quality embryos and thus the success rate of IVF. This work contributes to advancing assisted reproductive technology by providing reliable, automated embryo quality assessment tools.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_123-AI_Based_Framework_for_Automated_Cell_Cleavage_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Confidence-Based Trust Calibration in Human-AI Teams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612122</link>
        <id>10.14569/IJACSA.2025.01612122</id>
        <doi>10.14569/IJACSA.2025.01612122</doi>
        <lastModDate>2025-12-31T12:27:05.8530000+00:00</lastModDate>
        
        <creator>Michael Ibrahim</creator>
        
        <subject>Human-AI collaboration; trust calibration; confidence-based delegation; decision-making strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Effective human-AI collaboration is contingent upon calibrated trust, wherein users depend on AI systems when accuracy is probable and rely on human judgment when errors are likely. In this study, a confidence-based mechanism for trust calibration within human-AI teams is examined. A decision-making strategy is proposed in which task delegation is governed by the AI’s confidence: when the confidence surpasses a specified threshold, the AI’s recommendation is adopted; otherwise, the decision is deferred to the human. Through simulation experiments on a binary classification task, performance outcomes are compared. The AI system achieves an accuracy of 77.7%, whereas the human decision-maker, modeled with a confidence-sensitive accuracy function ph(c) = 0.95 − 0.3c, attains an overall accuracy of 71.9%. Team performance is evaluated across a range of AI confidence thresholds (0.50 to 0.99), revealing that an intermediate threshold yields optimal team accuracy of 84.14%, substantially exceeding the performance of either agent individually. The findings provide a detailed analysis of confidence-based delegation, align with existing research on trust calibration, and underscore critical design implications for the development of human-centric AI systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_122-Confidence_Based_Trust_Calibration_in_Human_AI_Teams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging the Gap Between Text-Based and Visual Programming: A Comparative Study of Efficiency and Student Engagement in Game Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612121</link>
        <id>10.14569/IJACSA.2025.01612121</id>
        <doi>10.14569/IJACSA.2025.01612121</doi>
        <lastModDate>2025-12-31T12:27:05.8230000+00:00</lastModDate>
        
        <creator>&#193;lvaro Villag&#243;mez-Palacios</creator>
        
        <creator>Claudia De la Fuente-Burdiles</creator>
        
        <creator>Cristian Vidal-Silva</creator>
        
        <subject>Visual scripting; higher education; development efficiency; engineering curricula</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The integration of Low-Code and No-Code (LCNC) tools in higher education challenges traditional text-based programming pedagogies. While visual environments are often relegated to K-12 education, their adoption in professional engines like Unity suggests a need to re-evaluate their role in engineering curricula. This study analyzes the effectiveness, development efficiency, and perceived utility of Unity Visual Scripting compared to traditional C# programming (MonoGame) within a “Physics for Videogames” undergraduate course. Employing a quasi-experimental design with a within-subjects approach (N = 22), students first developed a game using C#/MonoGame and subsequently a complex variant using Unity Visual Scripting. Metrics included development time for core mechanics, project grades, and pre/post surveys on self-efficacy. Results demonstrate a statistically significant reduction in development time (30–50% faster for core mechanics) using Visual Scripting. Furthermore, academic performance improved slightly, and students reported higher confidence levels. Crucially, participants identified Visual Scripting not as a replacement, but as a cognitive bridge that facilitates the understanding of algorithmic logic before tackling syntactic complexities. Consequently, Visual Scripting serves as an efficient accelerator for prototyping and conceptual learning in higher education, fostering a “logic-first, syntax-second” approach.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_121-Bridging_the_Gap_Between_Text_Based_and_Visual_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model-Driven Transformation of Business Processes into Blockchain Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612120</link>
        <id>10.14569/IJACSA.2025.01612120</id>
        <doi>10.14569/IJACSA.2025.01612120</doi>
        <lastModDate>2025-12-31T12:27:05.7900000+00:00</lastModDate>
        
        <creator>Imane Bouzaidi Tiali</creator>
        
        <creator>Zineb Aarab</creator>
        
        <creator>Achraf Lyazidi</creator>
        
        <creator>Moulay Driss Rahmani</creator>
        
        <subject>Model-driven engineering; BPMN; smart contracts; blockchain; ATL; automation; solidity; process transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This paper presents a comprehensive Model-Driven Engineering (MDE) methodology for automatically transforming Business Process Model and Notation (BPMN) diagrams into executable blockchain-based smart contracts. The proposed approach defines a set of Atlas Transformation Language (ATL) rules that systematically map BPMN elements to Solidity constructs, ensuring semantic consistency and traceability throughout the transformation process. The framework integrates several stages, including process modeling, model validation, code generation, and deployment, supported by tools such as Camunda, Eclipse ATL, Remix IDE, and MetaMask. Experimental validation on the Ethereum Sepolia test network demonstrates the approach’s ability to enhance automation, reduce manual coding errors, and improve synchronization between business workflows and their on-chain implementations. Compared to existing BPMN-to-blockchain frameworks, the proposed solution offers a unified and reusable transformation pipeline that bridges the gap between business process modeling and blockchain execution. The study concludes that MDE provides a scalable, traceable, and standardized foundation for developing decentralized business process applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_120-Model_Driven_Transformation_of_Business_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Spectral Image Analysis Using Different CNN Models to Detect the Plant Diseases in its Early Stages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612119</link>
        <id>10.14569/IJACSA.2025.01612119</id>
        <doi>10.14569/IJACSA.2025.01612119</doi>
        <lastModDate>2025-12-31T12:27:05.7600000+00:00</lastModDate>
        
        <creator>Dhiraj Bhise</creator>
        
        <creator>Sunil Kumar</creator>
        
        <creator>Hitesh Mohapatra</creator>
        
        <subject>Convolutional Neural Network (CNN); Multi-spectral images; Alexnet; Densenet121; Resnet18; Resnet50; VGG16; VGG19; EfficientnetB0; MobilenetV2; Xception; InceptionV3; InceptionResnetV2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Researchers and academicians are continuously working on minimizing production losses due to various plant diseases. Recent technologies such as artificial intelligence (AI) and machine learning (ML) are therefore playing a crucial role in detecting plant diseases in their early stages. These technologies help classify plant leaves into ‘healthy’ and ‘rusty’ or ‘diseased’ leaves. It is difficult for human beings to detect plant diseases and take remedial action within the stipulated time period. Hence, this research work compares different convolutional neural network (CNN) models such as Alexnet, Resnet18, Resnet50, Xception, VGG16, VGG19, InceptionV3, and InceptionResnetV2, and concludes with the top CNN models and the filters best suited to capturing plant leaf images. The proposed research work uses datasets captured with different filters: K590, K665, K720, K850, BlueIR, and Hotmirror. Plant disease detection requires identifying rust or disease on the leaves immediately and efficiently, and CNN models help classify the plant leaves with higher accuracy and precision. The proposed research work reports accuracies for different filters with different models: for the K850 filter, 72.72% using the balanced EfficientnetB0 CNN model; for the K720 filter, 81.81% using the balanced EfficientnetB0 CNN model; for the K665 filter, 84.09% using the balanced EfficientnetB0 CNN model; for the K590 filter, 90.90% using the balanced MobilenetV2 CNN model; for the Hotmirror filter, 93.18% using the balanced Xception CNN model; and for the BlueIR filter, 81.81% using the balanced Xception CNN model.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_119-Multi_Spectral_Image_Analysis_Using_Different_CNN_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Privacy in Databases by Data-Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612118</link>
        <id>10.14569/IJACSA.2025.01612118</id>
        <doi>10.14569/IJACSA.2025.01612118</doi>
        <lastModDate>2025-12-31T12:27:05.6670000+00:00</lastModDate>
        
        <creator>Sami Alharbi</creator>
        
        <creator>Samer Atawneh</creator>
        
        <creator>Hussein Al Bazar</creator>
        
        <creator>Roxane Elias Mallouhy</creator>
        
        <subject>Database privacy; security model; access control; data protection; privacy enhancing technologies; database systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study addresses the growing challenge of enhancing privacy in enterprise database systems, where excessive privileges and shared service accounts often lead to unauthorized data access and insider threats. The study proposes a data-layer security framework that enforces fine-grained access control based on authenticated user identities, integrating role-based access control (RBAC) and the principle of least privilege (PLP) to protect sensitive information. The model restricts developer and administrative access strictly to authorized data objects, reducing exposure while maintaining operational efficiency. Drawing on established database security mechanisms, including authentication, authorization, and centralized identity management through Active Directory, the proposed framework ensures that all database interactions are executed under verified user credentials. The approach is implemented using Microsoft SQL Server within an enterprise environment and evaluated through controlled experiments conducted before and after deployment. Results demonstrate a significant reduction in unauthorized data retrieval without introducing noticeable performance overhead. The findings confirm that enforcing privacy at the data-layer provides an effective and scalable solution for securing sensitive data in modern database systems, strengthening accountability and mitigating risks associated with privilege misuse.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_118-Enhancing_Privacy_in_Databases_by_Data_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HCC: A Hierarchical Chart Captioning Model for Enhanced Accessibility of Chart Data for Visually Impaired Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612117</link>
        <id>10.14569/IJACSA.2025.01612117</id>
        <doi>10.14569/IJACSA.2025.01612117</doi>
        <lastModDate>2025-12-31T12:27:05.6370000+00:00</lastModDate>
        
        <creator>Yoojeong Song</creator>
        
        <creator>Kanghyeon Seo</creator>
        
        <creator>Svetlana Kim</creator>
        
        <creator>Joo Hyun Park</creator>
        
        <subject>Hierarchical captioning; accessibility for visually impaired; chart interpretation; transformer models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In educational settings, charts and graphs are commonly used to convey complex information in a simple and understandable manner. However, these visual representations often present accessibility challenges for visually impaired users, as they cannot be directly interpreted by screen readers without proper alternative text. This paper proposes a novel hierarchical captioning model (HCC: Hierarchical Chart Captioning) designed to facilitate effective chart interpretation. The model utilizes spatial token features to generate captions at multiple levels, each offering varying degrees of detail and abstraction, mimicking human cognitive processing. Three hierarchical levels are developed: Level 1 offers basic and factual descriptions, Level 2 presents more detailed information, and Level 3 provides intuitive interpretations and inferences. By integrating a fine-tuned Transformer model, this approach ensures efficient caption generation and supports user-selectable caption lengths. The model’s effectiveness is evaluated through user surveys involving 20 instructors, confirming that Level 2 captions provide the most comprehensible descriptions. Experimental results demonstrate that the proposed method outperforms existing captioning approaches, improving both the efficiency and accessibility of educational materials for visually impaired students. These findings highlight the potential of hierarchical learning models to create more inclusive and accessible educational experiences.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_117-HCC_A_Hierarchical_Chart_Captioning_Model_for_Enhanced_Accessibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Step Real-Time Complex Environmental Vehicle Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612116</link>
        <id>10.14569/IJACSA.2025.01612116</id>
        <doi>10.14569/IJACSA.2025.01612116</doi>
        <lastModDate>2025-12-31T12:27:05.6030000+00:00</lastModDate>
        
        <creator>Zhihui Huo</creator>
        
        <creator>Yiqian Liang</creator>
        
        <creator>Xingju Wang</creator>
        
        <subject>Object detection; vehicle detection; image denoising</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In recent years, as a critical pillar supporting the national economy and daily life, the safe and efficient operation of road traffic has relied heavily on precise environmental perception capabilities. To address this, this study proposes a two-stage “denoising-detection” framework: the first stage restores clear images using an improved Uformer algorithm that incorporates a probabilistic sparse self-attention mechanism, while the second stage leverages YOLOv11 for real-time object detection. This framework is introduced in the field for the first time and enhances the accuracy and robustness of vehicle detection in traffic images under complex weather scenarios, providing technical support for intelligent driving systems and traffic monitoring applications. Experimental validation on our own flexible-weather vehicle detection database demonstrated the superior performance of the proposed model: CM-YOLO achieved 0.95 precision and 0.91 mAP50, an improvement of 0.2 over YOLOv11.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_116-A_Two_Step_Real_Time_Complex_Environmental_Vehicle_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Q-Learning Guided Local Search for the Traveling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612115</link>
        <id>10.14569/IJACSA.2025.01612115</id>
        <doi>10.14569/IJACSA.2025.01612115</doi>
        <lastModDate>2025-12-31T12:27:05.5730000+00:00</lastModDate>
        
        <creator>Sanaa El Jaghaoui</creator>
        
        <creator>Aissa Kerkour Elmiad</creator>
        
        <subject>Traveling salesman problem; reinforcement learning; Q-Learning; local search; 2-opt; 3-opt</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The Traveling Salesman Problem (TSP) remains a fundamental challenge in combinatorial optimization with applications in logistics, routing, and network design. Classical local search methods face a trade-off between solution quality and computational efficiency: while 3-opt delivers better solutions than 2-opt, its O(n3) complexity renders it impractical for large instances. This paper presents a reinforcement learning (RL) approach that addresses this challenge through intelligent guidance of local search operators. Our method employs a simple one-dimensional Q-table that learns to identify poorly positioned cities and directs 2-opt and 3-opt operations toward the most promising tour segments. We evaluate the approach on 55 TSPLIB benchmark instances ranging from 51 to 18,512 cities. For instances up to 1,000 cities, RL-guided 3-opt (RL-3opt) achieves optimality gaps of 0.9–2.2% compared to 3.8–4.3% for classical 3-opt, with execution times reduced from hours to under one second and speedups reaching 32,323&#215;. For instances between 1,000–5,000 cities, RL-3opt maintains computational efficiency (100–30,000&#215; speedups) while achieving competitive 6.3% gaps. Both RL-2opt and RL-3opt execute in sub-second to a few seconds even on problems with over 18,000 cities. All experiments run on standard CPU hardware without GPU acceleration, demonstrating that effective TSP optimization remains accessible without specialized resources.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_115-Q_Learning_Guided_Local_Search_for_the_Traveling_Salesman_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework Design and Solutions Taxonomy for Performance Optimization in Internet of Things Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612114</link>
        <id>10.14569/IJACSA.2025.01612114</id>
        <doi>10.14569/IJACSA.2025.01612114</doi>
        <lastModDate>2025-12-31T12:27:05.5400000+00:00</lastModDate>
        
        <creator>Mariam A. Alotaibi</creator>
        
        <creator>Sami S. Alwakeel</creator>
        
        <creator>Aasem N. Alyahya</creator>
        
        <subject>IoT performance; reliability; security; scalability; quality; energy efficiency; technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The Internet of Things (IoT) is an exciting, rapidly expanding technology that’s still in its early stages and faces several complex issues. These challenges primarily arise from the limitations of IoT devices (e.g., restricted energy, memory, and processing power), the diversity of communication protocols, and the heterogeneity of interconnected devices. Collectively, these issues often hinder overall IoT system performance, prompting extensive research into techniques to improve Quality of Service (QoS), particularly in terms of latency, throughput, and energy use. This paper introduces a conceptual framework for multi-dimensional IoT performance optimization. The framework provides a structured approach for evaluating and enhancing performance across all layers of the IoT architecture: device, network, support, and application. It assesses key performance dimensions—reliability, security, scalability, energy efficiency, quality assurance, and enabling technologies—and defines them in terms of overall system performance. To ensure a systematic assessment, these dimensions are supported by concrete performance metrics and precise measurement criteria. Finally, the paper provides a taxonomy of IoT Performance Optimization Components, identifies the essential prerequisites and core attributes that influence the overall efficiency of IoT systems, and thus provides a structured foundation for evaluating and advancing performance across the entire IoT ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_114-A_Framework_Design_and_Solutions_Taxonomy_for_Performance_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable AI Models for Assessing Short-Circuit Propagation in Fire-Exposed Cable Bundles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612113</link>
        <id>10.14569/IJACSA.2025.01612113</id>
        <doi>10.14569/IJACSA.2025.01612113</doi>
        <lastModDate>2025-12-31T12:27:05.5100000+00:00</lastModDate>
        
        <creator>Vijay H. Kalmani</creator>
        
        <creator>Kishor S. Wagh</creator>
        
        <creator>Kavita Tukaram Patil</creator>
        
        <creator>Pallavi Jha</creator>
        
        <creator>Tanuja Satish Dhope</creator>
        
        <creator>Deepak Gupta</creator>
        
        <creator>Chanakya Kumar Jha</creator>
        
        <subject>Explainable AI; short-circuit propagation; fire safety; cable testing; SHAP values; gradient boosting; feature importance; nuclear safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Fire-induced short-circuit propagation in cable bundles poses significant safety risks in electrical installations, nuclear facilities, and transportation systems. Traditional fault detection methods often lack interpretability, hindering root cause analysis and preventive maintenance strategies. This paper presents novel explainable artificial intelligence (XAI) models for predicting and analyzing short-circuit propagation in fire-exposed cable bundles. We develop a hybrid framework combining gradient boosting machines with SHAP (SHapley Additive exPlanations) values to provide interpretable predictions of time-to-short-circuit and failure modes. Our approach integrates thermal imaging data, cable physical properties, and environmental conditions from controlled fire tests conducted on IEEE 383-qualified cables. The proposed XAI models achieve 94.7% accuracy in predicting short-circuit occurrence within 5-second windows while providing human-interpretable feature importance rankings. Experimental validation using the NUREG/CR-6931 dataset demonstrates that insulation temperature gradient, cable bundle density, and oxygen concentration are the three most critical factors influencing short-circuit propagation. The explainable framework enables fire safety engineers to understand model decisions, identify vulnerable cable configurations, and optimize protection strategies. Our results show a 23% improvement in early fault detection compared to conventional black-box deep learning approaches, with significantly enhanced model transparency for safety-critical applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_113-Explainable_AI_Models_for_Assessing_Short_Circuit_Propagation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Driven Insights for Moroccan Airports: PCA and Clustering to Enhance Operational Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612112</link>
        <id>10.14569/IJACSA.2025.01612112</id>
        <doi>10.14569/IJACSA.2025.01612112</doi>
        <lastModDate>2025-12-31T12:27:05.4800000+00:00</lastModDate>
        
        <creator>H. Fatih</creator>
        
        <creator>A. Bentaleb</creator>
        
        <creator>M. Lazaar</creator>
        
        <creator>B. Bentalha</creator>
        
        <subject>Principal Component Analysis (PCA); airport performance; transportation systems; K-means clustering; operational optimization; airport efficiency; airport operations management; air traffic; passenger experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Following the trend of increasing complexity among systems, and in an attempt to meet air passengers’ demands for higher quality service, this paper contributes to this stream of research by studying the operational efficiency of Moroccan airports through a novel multivariate approach. This research examines five performance metrics: baggage handling time, police screening time, customs processing time, passenger traffic, and flight delays. In this context, making use of Principal Component Analysis (PCA) with K-Means clustering, this paper aims to identify the causes of operational variability and their significance for performance management, and to differentiate flights with similar operational profiles. By applying these techniques to data from Moroccan airports, this study reveals hidden patterns within interrelated airport activities that, in most cases, were neglected by traditional measurement systems. The findings offer methodological advances in the multivariate analysis of transport systems as well as practical improvements in the management of airport operations, eventually informing coordinated resource-allocation strategies for systemic benefit and passenger utility. Through the use of PCA and K-means on previously unreleased data from airports in Morocco, this paper is the first to offer a full multivariate study of airports in the whole North African region. In contrast with standard monitoring systems, which treat metrics as isolated entities, the study concurrently analyzes the dependencies among five key measures, discloses latent operational patterns, and promotes the formulation of context-based management policies suitable for an immature aviation market.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_112-Data_Driven_Insights_for_Moroccan_Airports.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VidAvDetect: A Deepfake-Inspired Vision Transformer Approach for Detecting Real Humans vs. AI-Avatars in Video Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612110</link>
        <id>10.14569/IJACSA.2025.01612110</id>
        <doi>10.14569/IJACSA.2025.01612110</doi>
        <lastModDate>2025-12-31T12:27:05.4000000+00:00</lastModDate>
        
        <creator>Btissam Acim</creator>
        
        <creator>Hamid Ouhnni</creator>
        
        <creator>Nassim Kharmoum</creator>
        
        <creator>Soumia Ziti</creator>
        
        <subject>Vision transformer; deepfake; Artificial Intelligence (AI); Generative AI; AI Avatar; video streams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The pace of advancement in Generative AI has made it possible to create highly realistic synthetic identities in the form of avatars of non-existent persons, paving the way for a paradigm beyond state-of-the-art deepfake attacks, which manipulate the identities of real people. This rapidly emerging trend poses a critical challenge to digital media forensics: deciding whether a facial identity observed in a video clip represents a real human or a fully synthetic identity created using advanced Generative AI tools. To address this gap, we introduce VidAvDetect, a deepfake-inspired Vision Transformer approach specifically designed to discriminate real human faces from AI-generated avatars in video streams, addressing a novel identity-existence verification task. The proposed system integrates efficient frame sampling, robust facial preprocessing, patch-based embeddings, and global structural modeling through a transformer encoder, enabling the detection of subtle geometric and textural regularities characteristic of synthetic identities. Experimental results demonstrate strong performance, with training accuracy reaching 97–98%, video-level accuracy of 95.1%, a macro F1-score of 0.944, and a ROC-AUC of 0.991, confirming the model’s robustness across heterogeneous real, manipulated, and fully synthetic datasets. By moving beyond manipulation detection to focus on identity-existence verification, VidAvDetect establishes a new methodological direction for transparency, regulation, and trust in modern digital media environments where AI-generated avatars increasingly resemble real humans.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_110-VidAvDetect_A_Deepfake_Inspired_Vision_Transformer_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Mixed Gas Reactions in Air Pollution: Stoichiometry, Kinetics, and Hazard Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612109</link>
        <id>10.14569/IJACSA.2025.01612109</id>
        <doi>10.14569/IJACSA.2025.01612109</doi>
        <lastModDate>2025-12-31T12:27:05.3870000+00:00</lastModDate>
        
        <creator>T Somasekhar</creator>
        
        <creator>Rekha B. Venkatapur</creator>
        
        <subject>Stoichiometric reaction modeling; mixed-gas kinetics; plug-flow transport correction; Bayesian hazard classification; air pollution risk assessment; environmental process safety; probabilistic uncertainty quantification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study introduces a novel integrated framework for modeling mixed gas reactions relevant to air pollution and industrial safety, demonstrated on the reaction between carbon monoxide and ammonia producing hydrogen cyanide and water. The approach couples closed-form stoichiometric mass balances with a transport-corrected kinetic ordinary differential equation system and a Bayesian logistic hazard classifier that incorporates expert-informed priors. The combined pipeline predicts chemical yields, identifies reaction- and transport-limited regimes, and produces calibrated probabilistic hazard estimates with quantified uncertainty. Validation on synthetic and near-experimental datasets shows reproducible parameter recovery and strong classifier performance, with an area under the curve of approximately 0.93 on held-out data. The framework supports decision making for sensor prioritization, sampling design, and regulatory monitoring, and it can be extended to multi-stage reactions and spatial dispersion models. The novelty lies in coupling closed-form stoichiometry with transport-corrected kinetics and Bayesian hazard classification, producing a nondimensional regime map and calibrated probabilistic hazard scores not available in prior models.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_109-Modeling_Mixed_Gas_Reactions_in_Air_Pollution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Mobile GC Vit Architecture for Efficient Image Classification with Application to Plant Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612108</link>
        <id>10.14569/IJACSA.2025.01612108</id>
        <doi>10.14569/IJACSA.2025.01612108</doi>
        <lastModDate>2025-12-31T12:27:05.3530000+00:00</lastModDate>
        
        <creator>Mohamed Jawher Bahrouni</creator>
        
        <creator>Faouzi Benzarti</creator>
        
        <creator>Mohamed Touati</creator>
        
        <creator>Sadok Ben Yahia</creator>
        
        <subject>Hybrid transformer architecture; convolutional refinement block; gated convolution; edge devices; high-frequency features; tomato leaf disease classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Efficient and accurate automated diagnosis of plant diseases remains a challenge for deployment on resource-constrained edge devices. While hybrid vision transformers like GCViT balance accuracy and efficiency, they often lose critical high-frequency details such as fine lesion textures and leaf margins that are essential for fine-grained disease classification. To address this gap, we propose the Enhanced High-Frequencies Global Context Visual Transformer (EHF-GCViT), a novel hybrid architecture designed to explicitly enhance high-frequency feature retention within a lightweight framework. The core innovations of EHF-GCViT include: first, a customized, lightweight convolutional refinement block based on depthwise separable operations that acts as a learnable pre-processor to preserve discriminative spatial details before tokenization; second, a gated convolutional block that replaces the final transformer stage, reducing the model memory footprint from 46.36 MB to 34.48 MB; and third, an adaptive normalization strategy to stabilize the training of the integrated heterogeneous layers. Extensive experiments on the PlantVillage tomato disease dataset demonstrate that EHF-GCViT achieves superior performance, surpassing the baseline GCViT, standard Vision Transformers (ViT), and CNN benchmarks (e.g., ResNet) in accuracy, precision, recall, and F1-score. These results validate that explicitly modeling high-frequency features within a hybrid transformer design provides a more memory-efficient and accurate backbone for practical plant disease detection systems targeting edge deployment.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_108-Enhanced_Mobile_GC_Vit_Architecture_for_Efficient_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Scale ROI-Aligned Deep Learning Framework for Automated Road Damage Detection and Severity Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612107</link>
        <id>10.14569/IJACSA.2025.01612107</id>
        <doi>10.14569/IJACSA.2025.01612107</doi>
        <lastModDate>2025-12-31T12:27:05.3230000+00:00</lastModDate>
        
        <creator>Bakhytzhan Orazaliyevich Kulambayev</creator>
        
        <creator>Olzhas Muratuly Olzhayev</creator>
        
        <creator>Aigerim Bakatkaliyevna Altayeva</creator>
        
        <creator>Zhanna Zhunisbekova</creator>
        
        <subject>Road damage detection; deep learning; ROI alignment; multi-scale features; severity assessment; RDD2020 dataset; intelligent transportation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study presents a multi-scale ROI-aligned deep learning framework designed to advance automated road damage detection and severity assessment using high-resolution roadway imagery. The proposed architecture integrates hierarchical feature extraction, a road-damage proposal network, and refined ROI-aligned encoding to capture both fine-grained local anomalies and broader contextual patterns across diverse pavement conditions. Leveraging the RDD2020 dataset, the model effectively identifies multiple defect categories, including longitudinal cracks, transverse cracks, alligator cracking, and potholes, achieving strong convergence behavior and stable generalization across training and validation phases. Quantitative evaluations reveal high detection accuracy and smooth loss reduction over 500 learning epochs, while qualitative visualizations demonstrate precise localization and robust classification of damages under varying environmental and structural complexities. The framework consistently maintains performance in challenging scenes featuring shadows, cluttered backgrounds, low contrast, or irregular defect geometries, underscoring the benefits of multi-scale fusion and ROI alignment mechanisms. Although slight fluctuations in validation metrics indicate the presence of inherently difficult samples, the overall results affirm the model’s capability to support large-scale, real-time road monitoring systems. The findings highlight the potential of the proposed approach to significantly enhance intelligent transportation infrastructure, offering an efficient and reliable solution for proactive pavement maintenance and improved roadway safety.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_107-A_Multi_Scale_ROI_Aligned_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Swin Transformer Encoder-Decoder Architecture for Robust Cerebrovascular Abnormality Segmentation in Multimodal MRI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612106</link>
        <id>10.14569/IJACSA.2025.01612106</id>
        <doi>10.14569/IJACSA.2025.01612106</doi>
        <lastModDate>2025-12-31T12:27:05.3070000+00:00</lastModDate>
        
        <creator>Nazbek Katayev</creator>
        
        <creator>Zhanel Bakirova</creator>
        
        <creator>Assel Kaziyeva</creator>
        
        <creator>Aigerim Altayeva</creator>
        
        <creator>Karakat Zhanabaykyzy</creator>
        
        <creator>Daniyar Sultan</creator>
        
        <subject>Cerebrovascular segmentation; Swin Transformer; multimodal MRI; deep learning; vascular imaging; hierarchical attention; encoder–decoder architecture; medical image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study presents a hierarchical Swin Transformer–based framework for automated segmentation of cerebrovascular structures using multimodal magnetic resonance imaging. The proposed architecture integrates patch partitioning, linear embedding, hierarchical windowed self-attention, and a multilevel encoder–decoder design to address the inherent challenges of vascular segmentation, including irregular morphology, small-caliber vessel visibility, and intensity variability across MRI modalities. A multimodal fusion module enhances the ability to capture complementary anatomical and vascular information, while skip-connected decoding ensures the preservation of fine-grained spatial features essential for accurate vessel reconstruction. The model was evaluated using a combination of open-access datasets and demonstrated superior performance across multiple quantitative metrics, achieving higher Dice similarity, precision, sensitivity, and specificity compared to existing state-of-the-art methods. Qualitative analysis further revealed accurate recovery of major arterial pathways, distal branches, and complex vascular topologies, confirming the model’s robustness in both global and localized segmentation tasks. The results highlight the discriminative strength of hierarchical attention mechanisms and emphasize their role in improving cerebrovascular characterization. Overall, the proposed framework offers a reliable and anatomically coherent approach for vascular segmentation, with strong potential for integration into clinical neuroimaging workflows and advanced cerebrovascular research applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_106-Hierarchical_Swin_Transformer_Encoder_Decoder_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning Framework for Missing Data Imputation in IoT Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612105</link>
        <id>10.14569/IJACSA.2025.01612105</id>
        <doi>10.14569/IJACSA.2025.01612105</doi>
        <lastModDate>2025-12-31T12:27:05.2900000+00:00</lastModDate>
        
        <creator>Ahmed M. Salama Salem</creator>
        
        <creator>Sayed AbdelGaber A</creator>
        
        <creator>Ahmed E. Yakoub</creator>
        
        <subject>Data imputation; reinforcement learning; machine learning; deep learning; Internet of Things (IoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Continuous, accurate meteorological sensing underpins many Internet of Things (IoT) applications, from smart irrigation and urban heat-island monitoring to early weather warnings, but data from distributed stations are often disrupted by sensor faults, power loss, or communication noise, causing missing values that degrade analytics and decisions. Existing data imputation methods lose accuracy on small or irregular datasets and adapt poorly to dynamic IoT settings. This study proposes a reinforcement learning (RL)-based framework for missing-data imputation that treats each gap as a sequential decision problem. The authors develop and compare three RL architectures: two Q-table methods and one Deep Q-learning model, to learn temporal dependencies and optimize imputation via experience. A second objective is to assess the feasibility and performance of RL for imputation in domains related to robotics and autonomous systems, where RL remains less explored. A third objective is to validate the methods on real-world datasets and simulations, supported by a user-friendly graphical interface for visualization and performance monitoring. The proposed RL imputers outperform state-of-the-art methods in accuracy and robustness: the best RL configuration cuts MSE/MAE by 8.6%/5.9% vs. the K-Nearest Neighbors (KNN) algorithm, 74.4%/75.6% vs. autoencoder, 79.6%/79.9% vs. clustering, 89.0%/83.7% vs. mean, 89.5%/83.3% vs. median, and 94.2%/89.3% vs. most-frequent, while raising the coefficient of determination (R&#178;) by +0.023, +0.532, +0.123, +0.407, +0.436, and +0.932, respectively. These findings highlight RL as an effective paradigm for intelligent data restoration in IoT-based sensing systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_105-Reinforcement_Learning_Framework_for_Missing_Data_Imputation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid CNN-BiGRU-GAN Framework for Enhanced Automated Analysis of Cervical Cancer in Medical Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612104</link>
        <id>10.14569/IJACSA.2025.01612104</id>
        <doi>10.14569/IJACSA.2025.01612104</doi>
        <lastModDate>2025-12-31T12:27:05.2430000+00:00</lastModDate>
        
        <creator>Donepudi Rohini</creator>
        
        <creator>M Kavitha</creator>
        
        <subject>Cervical cancer detection; DiagnoFusionNet; medical image analysis; Adaptive Triple-Stage Feature Fusion; generative adversarial networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Cervical cancer screening requires reliable automated systems capable of overcoming variability in staining, morphology, and limited annotated data, which often undermine the performance of traditional machine learning and deep learning approaches. Existing techniques commonly rely on single-modality feature extraction or static fusion, resulting in weak generalization, class imbalance sensitivity, and limited interpretability in clinical environments. Addressing these gaps, the research introduces DiagnoFusionNet, a hybrid CNN-BiGRU-GAN framework that integrates spatial features from Convolutional Neural Network (CNN), contextual dependencies from Bidirectional Gated Recurrent Unit (BiGRU), and Generative Adversarial Network (GAN)-generated samples to enhance data diversity and correct minority-class deficiencies. The methodology incorporates an Adaptive Triple-Stage Feature Fusion mechanism that dynamically recalibrates modality contributions using discriminator-informed attention, ensuring discriminative and clinically aligned feature representations. Experiments on the SIPaKMeD dataset demonstrate strong performance with 97.89% accuracy, 97.69% precision, 96.95% recall, 96.89% F1-score, and a 0.99 AUC, supported by GAN evaluation metrics, including an FID of 18.3, IS of 3.91, and SSIM of 0.92. Ablation analysis confirms the dominant contribution of the adaptive fusion module, while t-SNE clustering and confusion-matrix inspection highlight effective separability and reduced misclassification. Model development and experimentation were executed using Python, TensorFlow, Keras, OpenCV, and Scikit-learn on GPU-enabled environments. The framework provides a clinically interpretable, data-efficient, and scalable solution for automated cervical cytology analysis in real-world and resource-limited settings.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_104-A_Hybrid_CNN_BiGRU_GAN_Framework_for_Enhanced_Automated_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interpretable Analytical Intelligence Architecture Delivering Reliable Detection of Software Defect Instances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612103</link>
        <id>10.14569/IJACSA.2025.01612103</id>
        <doi>10.14569/IJACSA.2025.01612103</doi>
        <lastModDate>2025-12-31T12:27:05.2300000+00:00</lastModDate>
        
        <creator>Srinivasa Rao Katragadda</creator>
        
        <creator>Sirisha Potluri</creator>
        
        <subject>Contrastive learning; explainable artificial intelligence; feature optimization; Siamese Neural Network; software defect prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Software defect prediction plays a crucial role in improving software quality, yet existing approaches still suffer from severe class imbalance, redundant feature spaces, weak generalization, and limited interpretability, making their adoption in real development pipelines difficult. Many current models rely on black-box deep learning architectures or conventional classifiers that fail to identify minority defects or explain the reasoning behind their decisions. To overcome these limitations, this study introduces a novel framework named Contrastive Siamese Defect Learning–Integrated Explainable Neural Optimization System (CSDL-SEN-XAI), which integrates contrastive metric learning, enzyme-inspired optimization, and transparent explainability. The method combines SMOTE-based balancing, the Enzyme Action Optimizer for joint feature–hyperparameter optimization, and a Siamese Neural Network trained using contrastive loss to learn discriminative similarity embeddings. The entire workflow is implemented using Python, enabling efficient scalability and reproducibility. Experimental analysis reveals that the proposed model achieves an accuracy of 95.5%, a recall of 96.2%, and an F1-score of 95.5%, outperforming traditional models such as Random Forest, SVM, and CNN by margins ranging from 7% to 15% under identical evaluation settings. SHAP and Integrated Gradients further demonstrate that the model provides clear global and instance-level explanations, highlighting influential software metrics and strengthening the interpretability of predictions. Overall, the results confirm that CSDL-SEN-XAI delivers superior predictive performance, stable optimization, balanced learning, and transparent defect interpretation, offering a reliable and interpretable solution suitable for practical software engineering environments. Future work will explore cross-project defect prediction and the integration of lightweight optimization strategies to further enhance scalability.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_103-An_Interpretable_Analytical_Intelligence_Architecture_Delivering_Reliable_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EvoNorm-GAN for Adaptive and Interpretable Detection of Ransomware in Windows PE Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612102</link>
        <id>10.14569/IJACSA.2025.01612102</id>
        <doi>10.14569/IJACSA.2025.01612102</doi>
        <lastModDate>2025-12-31T12:27:05.1970000+00:00</lastModDate>
        
        <creator>G Badrinath</creator>
        
        <creator>Arpita Gupta</creator>
        
        <subject>Ransomware detection; EvoNorm-GAN; feature-wise dynamic normalization; portable executable files; adversarial learning; Explainable AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Ransomware remains a key cybersecurity issue because of its increasing obfuscation, polymorphism, and constantly changing attack patterns that repeatedly circumvent conventional defenses. Traditional systems and standard deep learning models may fail against such threats, lowering accuracy and increasing false positives. To address these shortcomings, this work proposes EvoNorm-GAN, a dynamic adversarial detection architecture that incorporates Feature-Wise Dynamic Normalization (FDN) and a Generative Adversarial Network to flexibly analyze ransomware in Windows Portable Executable (PE) files. The generator creates ransomware variants, while the discriminator classifies files using the Wasserstein loss. EvoNorm-GAN is implemented in TensorFlow with the Keras back-end and tested on a large-scale Windows PE File Analysis Dataset of 62,200 samples, comprising 31,100 benign and 31,100 malicious examples. The experimental findings indicate that EvoNorm-GAN achieves state-of-the-art results of 98.2% accuracy, 98.4% precision, 98.1% recall, 97.4% F1-score, and 0.99 AUC, about 1 to 3 percentage points higher than traditional CNN, RNN, and ensemble-based models. To enhance transparency and trust, SHAP-based explainable AI is integrated into EvoNorm-GAN, highlighting key PE file features such as Section Entropy and SizeOfCode that drive classification decisions. By combining adaptive learning, adversarial sample generation, and analyst-friendly interpretability into a unified framework, EvoNorm-GAN delivers an efficient, robust, and transparent ransomware detection system. Its scalable and resilient design makes it well-suited for real-world deployment in endpoint protection and cybersecurity environments, providing reliable detection of evolving ransomware threats.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_102-EvoNorm_GAN_for_Adaptive_and_Interpretable_Detection_of_Ransomware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Consensus to Chaos: A Vulnerability Assessment of the RAFT Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612101</link>
        <id>10.14569/IJACSA.2025.01612101</id>
        <doi>10.14569/IJACSA.2025.01612101</doi>
        <lastModDate>2025-12-31T12:27:05.1670000+00:00</lastModDate>
        
        <creator>Tamer Afifi</creator>
        
        <creator>Abdelfatah Hegazy</creator>
        
        <creator>Ehab Abousaif</creator>
        
        <subject>RAFT; consensus protocol; security; distributed systems; message forgery; replay attacks; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In recent decades, the RAFT distributed consensus algorithm has become a main pillar of the distributed systems ecosystem, ensuring data consistency and fault tolerance across multiple nodes. Although RAFT is well known for its simplicity, reliability, and efficiency, its security properties are not fully understood, leaving implementations vulnerable to various attacks and threats that can transform the RAFT harmony of consensus into a chaos of data inconsistency. This paper presents a systematic security analysis of the RAFT protocol, with a specific focus on its susceptibility to security threats such as message replay attacks and message forgery attacks. It examines how a malicious actor can exploit the protocol&#39;s message-passing mechanism to reintroduce old messages, disrupting the consensus process and leading to data inconsistency. The practical feasibility of these attacks is demonstrated through simulated scenarios, and the key weaknesses in RAFT&#39;s design that enable them are identified. To address these vulnerabilities, a novel approach based on cryptography, authenticated message verification, and freshness checks is proposed. The proposed solution provides a framework for enhancing the security of RAFT implementations and guiding the development of more resilient distributed systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_101-From_Consensus_to_Chaos_A_Vulnerability_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CleanCity IoT: A Vehicle-Mounted Platform for Real-Time Urban Air-Quality Monitoring and Forecasting in Resource-Constrained African Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01612100</link>
        <id>10.14569/IJACSA.2025.01612100</id>
        <doi>10.14569/IJACSA.2025.01612100</doi>
        <lastModDate>2025-12-31T12:27:05.1370000+00:00</lastModDate>
        
        <creator>Eric Nizeyimana</creator>
        
        <creator>Damien Hanyurwimfura</creator>
        
        <creator>Gabriel Uwanyirigira</creator>
        
        <creator>Bonaventure Karikumutima</creator>
        
        <creator>Jimmy Nsenga</creator>
        
        <creator>Irene Niyonambaza Mihigo</creator>
        
        <subject>CleanCity IoT; air quality; mobile sensing; multivariate forecasting; spike detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Urban air pollution is a growing public-health challenge in African cities, yet traditional monitoring stations are sparse and expensive. The paper presents CleanCity IoT, a deployed, low-cost, vehicle-mounted air-quality platform that combines IoT sensors, GSM connectivity, cloud aggregation, and machine learning to produce near-real-time exposure maps and 2-hour forecasts for multiple pollutants. Each device integrates low-cost sensors for PM2.5, PM10, NO₂, O₃, SO₂, and CO₂, alongside temperature and humidity. Measurements are geotagged and transmitted over mobile networks from vehicles to a cloud backend, where data are validated, stored, and visualized through a user-friendly dashboard that also issues automated alerts and periodic reports. Using a dataset collected in Kigali and secondary cities via routine vehicular routes, the paper trains a multivariate time-series model to forecast short-horizon pollutant levels, supporting proactive health guidance and regulatory action. System performance is reported in terms of latency, uptime, coverage, and data quality, and forecast accuracy is evaluated using MAE/RMSE/MAPE and event-oriented metrics for spike prediction. Results indicate that CleanCity IoT provides reliable, scalable, and cost-effective urban air-quality intelligence, closing key gaps in spatiotemporal coverage while enabling citizen access, policy support, and social impact. The platform demonstrates a practical blueprint for African cities to operationalize air-quality intelligence using existing mobile infrastructure and locally developed technology.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_100-CleanCity_IoT_A_Vehicle_Mounted_Platform_for_Real_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Classification of Fertilizer Requirements Using Fuzzy C-Means on Shallot Agricultural Land</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161299</link>
        <id>10.14569/IJACSA.2025.0161299</id>
        <doi>10.14569/IJACSA.2025.0161299</doi>
        <lastModDate>2025-12-31T12:27:05.1030000+00:00</lastModDate>
        
        <creator>Roghib Muhammad Hujja</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Danang Lelono</creator>
        
        <creator>Agus Prasekti</creator>
        
        <subject>Fuzzy C-Means (FCM); soil fertility zoning; NPK (Nitrogen, Phosphorus, Potassium); fertilizer recommendation; precision agriculture; Site-Specific Nutrient Management (SSNM); IoT (Internet of Things); UAV (Unmanned Aerial Vehicle)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Spatial variability in soil fertility constrains productivity in intensive shallot farming, yet fertilizer is frequently applied uniformly across fields. This practice results in nutrient inefficiencies, increased costs, and heightened environmental risks. This study introduces a fertilizer requirement mapping framework utilizing Fuzzy C-Means (FCM) clustering, a machine learning technique for data grouping, applied to in-situ measurements of soil Nitrogen (N), Phosphorus (P), and Potassium (K). The framework was evaluated in a 500 &#215; 500 m shallot field in Srikayangan, Kulon Progo, Indonesia, subdivided into 10 &#215; 10 m management blocks suitable for smallholder operations. Soil NPK levels were measured using IoT sensor nodes and georeferenced with GNSS, while high-resolution RGB imagery from a UAV provided spatial context. Normalized NPK data were clustered with FCM to delineate fertility zones exhibiting nutrient differences. To operationalize clustering results, a nutrient-priority decision logic identified the most limiting nutrient (N, P, or K) for each block. Fertilizer recommendation points were visualized on a UAV-derived orthomosaic map to facilitate interpretation and field application. The results indicate that this approach effectively captures gradual fertility transitions and produces actionable fertilizer zones for site-specific nutrient management (SSNM) in smallholder systems. The study demonstrates the practical integration of fuzzy clustering, IoT-based soil sensing, and UAV mapping to inform precision agriculture decisions.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_99-Spatial_Classification_of_Fertilizer_Requirements_Using_Fuzzy_C_Means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Platform for Employee Retention Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161298</link>
        <id>10.14569/IJACSA.2025.0161298</id>
        <doi>10.14569/IJACSA.2025.0161298</doi>
        <lastModDate>2025-12-31T12:27:05.0730000+00:00</lastModDate>
        
        <creator>Medha Wyawahare</creator>
        
        <creator>Milind Rane</creator>
        
        <creator>Ashish Rodi</creator>
        
        <creator>Samarth Arole</creator>
        
        <creator>Aryan Mundra</creator>
        
        <subject>Employee retention; feedforward neural network; Large Language Model; HR analytics; intelligent platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Employee retention is a critical challenge for organizations, since poor retention raises recruitment costs, erodes domain knowledge, and destabilizes the workforce. Presented here is a platform-based intelligent employee retention prediction system that serves as a real-time HR decision support tool. As part of the research, a Feedforward Neural Network was initially trained and tested on structured employee data to confirm feature relevance and predictive viability, achieving an accuracy of 88.7%. The final implementation integrates an AI-based Chat Widget with a modular pipeline system that uses an LLM to perform analytical reasoning on employee attributes and provide human-understandable explanations that aid HR decisions. The architecture separates the user interaction layer (Agent) from the prediction and reasoning logic (Pipeline), which makes the system scalable, interpretable, and easily integrable with organizational workflows. The proposed platform shows how validated predictive models and LLM-provided reasoning can be integrated to deliver actionable and explainable employee retention insights.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_98-Intelligent_Platform_for_Employee_Retention_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Epidemic Modeling with a Hybrid RF-LSTM Method for Healthcare Demand Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161297</link>
        <id>10.14569/IJACSA.2025.0161297</id>
        <doi>10.14569/IJACSA.2025.0161297</doi>
        <lastModDate>2025-12-31T12:27:05.0400000+00:00</lastModDate>
        
        <creator>Budor Alshammari</creator>
        
        <creator>Bassam Zafar</creator>
        
        <subject>Predictive analytics; Hybrid modeling; digital health; Saudi Arabia; COVID-19; decision support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Accurate resource demand forecasts are necessary for sustainable healthcare systems to preserve flexibility and efficiency as well as to provide services in a professional manner. In this work, we propose an integrated Random Forest/Long Short-Term Memory (RF-LSTM) model for predicting Saudi Arabia&#39;s national healthcare resource demand. It combines non-linear feature extraction and temporal sequence learning. The integrated model employs governmental epidemiological and operational data from 2020 to 2024 to capture both short-term and long-term volatility and sustainability trends. The results demonstrate significant improvements in predictive accuracy compared with single-model baselines, such as Autoregressive Integrated Moving Average (ARIMA), Random Forest (RF), and Long Short-Term Memory (LSTM), with reductions in Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) of up to 22% and 18% compared with ARIMA, and of 12% and 9% relative to the best single model (LSTM), respectively. A statistical analysis using one-way ANOVA confirmed the robustness of the hybrid method. Furthermore, residual plots were examined to verify model assumptions and visually assess the uniformity of prediction errors, thereby validating the results. These findings suggest that integrated AI-based prediction models can effectively facilitate capacity planning, enhance resource allocation, and contribute to achieving the objectives of Saudi Vision 2030 for a resilient, data-driven healthcare system.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_97-Epidemic_Modeling_with_a_Hybrid_RF_LSTM_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating CTGAN-Generated Synthetic Data for Heart Disease Prediction: Fidelity, Predictive Utility, and Feature Preservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161296</link>
        <id>10.14569/IJACSA.2025.0161296</id>
        <doi>10.14569/IJACSA.2025.0161296</doi>
        <lastModDate>2025-12-31T12:27:05.0100000+00:00</lastModDate>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Nur Laila Najwa Josdi</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Evizal Abdul Kadir</creator>
        
        <subject>Conditional Tabular GAN (CTGAN); correlation analysis; dimensionality reduction; feature importance; heart disease prediction; predictive utility; synthetic data; tabular data fidelity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The increasing scarcity and sensitivity of clinical data necessitate the development of high-quality synthetic datasets. This study evaluated the ability of Conditional Tabular GAN (CTGAN) to generate synthetic heart disease data that preserves the statistical properties and predictive patterns of the Cleveland Heart Disease dataset. It assessed the fidelity of numerical and categorical features, preservation of pairwise correlations, and predictive utility using Logistic Regression and Random Forest classifiers. Dimensionality reduction analysis using PCA and t-SNE further measured the global similarity between the real and synthetic datasets. The results obtained show that CTGAN successfully reproduces the general distribution and correlations, especially for key features such as age, thalach, and oldpeak. However, some discrepancies remain in categorical attributes. Predictive modeling shows moderate transferability, indicating that synthetic data captures important patterns without completely replicating the original labels. These findings highlight the potential of CTGAN-generated synthetic data as a privacy-preserving alternative for benchmarking and early algorithm development, while emphasizing the importance of feature-level and prediction-level validation in synthetic data research.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_96-Evaluating_CTGAN_Generated_Synthetic_Data_for_Heart_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Diagnostic Approaches Integrating Fuzzy Logic and Neural Networks for Parkinson’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161295</link>
        <id>10.14569/IJACSA.2025.0161295</id>
        <doi>10.14569/IJACSA.2025.0161295</doi>
        <lastModDate>2025-12-31T12:27:04.9930000+00:00</lastModDate>
        
        <creator>Marwah Muwafaq Almozani</creator>
        
        <creator>H&#252;seyin Demirel</creator>
        
        <subject>Convolutional neural network; disease hybrid diagnostic; Parkinson&#39;s disease; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Parkinson’s Disease (PD) is a neurological condition with both motor and non-motor symptoms that requires early diagnosis and treatment. Existing diagnostic approaches for PD are insensitive to early-stage disease, rely on subjective symptom assessment, and lack standardization; such problems restrict treatment choices and prevent favorable patient outcomes. Hybrid diagnostics that integrate fuzzy logic and neural networks offer greater accuracy and reliability. In the PD Hybrid Diagnostic Approach (PD-HDA), fuzzy logic is utilized to address uncertainties in clinical data, and neural networks are employed to identify complex patterns in multimodal data. The PD-HDA design features structured selection and data fusion, which enhance diagnostic accuracy and constrain method variability. Images of hand tremors, gait analyses, and speech patterns are classified using a CNN to reveal their complex properties. Together, fuzzy logic and CNNs enhance the classification of PD stages and patient responses to symptoms. The PD-HDA model increases accuracy, sensitivity, and specificity during testing. These hybrid methods can be useful for early identification of PD and for providing individualized care, leading to improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_95-Hybrid_Diagnostic_Approaches_Integrating_Fuzzy_Logic_and_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Functions Inverse Using Neural Networks via Branch-Wise Decomposition and Newton Refinement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161294</link>
        <id>10.14569/IJACSA.2025.0161294</id>
        <doi>10.14569/IJACSA.2025.0161294</doi>
        <lastModDate>2025-12-31T12:27:04.9630000+00:00</lastModDate>
        
        <creator>Abdullah Balamash</creator>
        
        <subject>Neural networks; function inverse; Newton method; branch-wise decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In this work, a unified framework (using Neural Networks) is proposed to find the inverse of mathematical functions, spanning both simple one-to-one mapping and complex multivalued relations. The approach uses standard multilayer Neural Networks (NN) to approximate the functions’ inverse and introduces a deterministic branch-wise decomposition to handle multi-valued inverses. For single-valued (one-to-one) functions, a NN is directly trained on input-output pairs to learn the inverse mapping. For multi-valued functions, the function domain is decomposed into one-to-one branches, and a dedicated NN is trained for each branch. A refinement step using Newton’s method is applied to the NN output to further improve inversion accuracy. Across a broad set of benchmark functions, the proposed approach achieved low mean absolute error (MAE) and mean squared error (MSE) in recovering the true inverse, with high round-trip consistency. Newton refinement further reduces inversion error by rapidly converging to higher precision solutions. Notably, even for multi-valued inverse functions, each branch-specific NN can accurately recover the true inverse. Accordingly, standard NN, when combined with branch-wise decomposition and Newton refinement, can serve as an effective universal approximator for the inverse of functions across a spectrum of complexities.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_94-Functions_Inverse_Using_Neural_Networks_via_Branch_Wise_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context-Aware Requirements Prioritization Using Integrated Regression Learning with Ordinal Neural Modeling and Roberta</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161293</link>
        <id>10.14569/IJACSA.2025.0161293</id>
        <doi>10.14569/IJACSA.2025.0161293</doi>
        <lastModDate>2025-12-31T12:27:04.9330000+00:00</lastModDate>
        
        <creator>Prasis Poudel</creator>
        
        <creator>Noraini Che Pa</creator>
        
        <creator>Abdikadir Yusuf Mohamed</creator>
        
        <subject>Requirements prioritization; context-aware prioritization; machine learning; natural language processing; ordinal regression; dependency analysis; Explainable AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Effective prioritization of software requirements is essential for reducing project risks, optimizing resource allocation, and ensuring timely delivery. Conventional approaches such as Analytic Hierarchy Process (AHP) and MoSCoW often suffer from subjectivity, inefficiency, and poor scalability, making them unsuitable for large-scale projects. Although machine learning (ML) based methods improve scalability, they frequently overlook critical contextual factors such as risk, urgency, implementation effort, and inter-requirement dependencies. To address this gap, this study proposes a new machine learning based, context-aware software requirements prioritization system. In the proposed system, a pre-trained RoBERTa model and an ordinal neural regression model are employed to infer contextual features including technical risk, complexity, urgency, business value, implementation effort, requirement stability, stakeholder criticality, security sensitivity, and inter-requirement dependencies directly from requirement statements. These inferred features are then used as inputs to a supervised multiple regression model (XGBoost), which generates continuous priority scores for each requirement, with higher scores reflecting higher implementation priority. To ensure transparency, SHAP-based feature attribution is applied for feature importance analysis, and a feedback integration mechanism allows stakeholders to iteratively refine prioritization outcomes, in turn retraining the core prioritization model. Empirical validation against three domain experts across five projects from different application domains demonstrates strong alignment, with Spearman rank correlations between 0.6 and 0.75, Mean Absolute Error (MAE) around 0.10, and Top 5 Match Rates up to 0.80. The results confirm that the proposed system provides a scalable, explainable, and context-aware requirements prioritization mechanism suitable for real-world software engineering projects.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_93-Context_Aware_Requirements_Prioritization_Using_Integrated_Regression_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>6G Wireless Networks in the Generative AI Age: Overview, Techniques, and Future Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161292</link>
        <id>10.14569/IJACSA.2025.0161292</id>
        <doi>10.14569/IJACSA.2025.0161292</doi>
        <lastModDate>2025-12-31T12:27:04.8870000+00:00</lastModDate>
        
        <creator>Sallar S. Murad</creator>
        
        <creator>Rozin Badeel</creator>
        
        <creator>Harth Ghassan Hamid</creator>
        
        <creator>Reham A. Ahmed</creator>
        
        <subject>GenAI; 6G; generative models; intelligent systems; wireless communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>As the world moves beyond the 5G era, the emergence of 6G promises a significant integration with innovative communication paradigms and burgeoning technology trends, actualizing previously utopian concepts alongside increased technical complexities. Analytical models offer basic frameworks, but ML and AI now outperform them in solving complex problems, either by augmenting or supplanting model-based methodologies. The predominant focus of data-driven wireless research is on discriminative AI (DAI), which necessitates extensive real-world datasets. In contrast to DAI, Generative AI (GenAI) refers to generative models (GMs) that can capture the underlying data distribution, patterns, and characteristics of the incoming data. Given these attractive characteristics, GenAI can either substitute or augment DAI methodologies in multiple contexts. This comprehensive tutorial-survey article begins with an overview of 6G and wireless intelligence by delineating potential 6G applications and services. The aspects presented in this paper support the integration of the Internet of Things with 6G networks, with AI serving as the foundation of intelligent systems. This review concentrates on fundamental wireless research domains, encompassing network optimization, organization, and management. It examines the foundational learning principles of DAI and its methodologies, the application of DAI in wireless networks, and the utilization of GMs in 6G networks. Due to its comprehensive nature, this paper will act as a crucial reference for researchers and professionals exploring this dynamic and promising field.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_92-6G_Wireless_Networks_in_the_Generative_AI_Age.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Fetal Health Prediction Using Machine Learning on Biocompatible Sensor Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161291</link>
        <id>10.14569/IJACSA.2025.0161291</id>
        <doi>10.14569/IJACSA.2025.0161291</doi>
        <lastModDate>2025-12-31T12:27:04.8700000+00:00</lastModDate>
        
        <creator>Yuli Wahyuni</creator>
        
        <creator>Hadiyanto</creator>
        
        <creator>Ridwan Sanjaya</creator>
        
        <creator>Nendar Herdianto</creator>
        
        <subject>Fetal health prediction; biocompatible sensors; machine learning; Random Forest; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Automatic Fetal Health Prediction plays a vital role in supporting early prenatal intervention through continuous and non-invasive monitoring. Recent advances in biocompatible sensors enable the safe long-term acquisition of physiological signals, which can be effectively analyzed using machine learning techniques. This study proposes a comprehensive machine learning pipeline for Fetal Health Prediction through fetal health classification using the fetal_health.csv dataset from Kaggle, consisting of 2,126 samples and 22 cardiotocography-derived features related to fetal heart rate and uterine contractions. To address class imbalance and the presence of outliers, RobustScaler normalization was applied during the preprocessing stage. Feature selection was performed using Random Forest feature importance to identify the most relevant predictors. Two classification models, namely Random Forest (RF) and Support Vector Machine (SVM), were trained and evaluated using an 80:20 stratified train–test split. Experimental results indicate that the Random Forest model outperformed SVM, achieving an accuracy of 92.7% and a macro F1-score of 85.9%, compared with 88.97% accuracy and a macro F1-score of 79.85% for SVM. Moreover, Random Forest demonstrated superior performance in detecting minority classes (Suspect and Pathological), which are of high clinical significance. These findings suggest that the proposed pipeline is robust, interpretable, and suitable for integration with biocompatible sensor-based systems for real-time fetal health monitoring and clinical decision support.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_91-Optimizing_Fetal_Health_Prediction_Using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RoadSCNet: Road Surface Condition Detection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161290</link>
        <id>10.14569/IJACSA.2025.0161290</id>
        <doi>10.14569/IJACSA.2025.0161290</doi>
        <lastModDate>2025-12-31T12:27:04.8370000+00:00</lastModDate>
        
        <creator>Sujittra Sa-ngiem</creator>
        
        <creator>Kwankamon Dittakan</creator>
        
        <creator>Saroch Boonsiripant</creator>
        
        <subject>RoadSCNet; road surface; detect road; road condition; deep learning; crack; pothole; manhole cover; image analysis; convolutional neural network; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Road surface quality is an important safety issue: poor surfaces contribute to accidents, resulting in the loss of time, resources, and lives. Manually surveying road conditions is slow and costly, whereas automatic detection of road conditions enables surveys far more efficiently than human inspection. This research detects three object classes: cracks, potholes, and manhole covers. Comparative experiments showed the highest efficiency with YOLOv6 relative to YOLOv5, YOLOv7, and YOLOv8. This paper therefore proposes RoadSCNet, a road surface condition detection network built on YOLOv6. A key component is the customized Horizon block, which enhances horizontal contextual feature extraction and mitigates the limitations of traditional YOLO architectures in identifying elongated, low-contrast road surface defects such as cracks and potholes under low-light variation.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_90-RoadSCNet_Road_Surface_Condition_Detection_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI Readiness as a Pathway to Sustainable Competitiveness in Tourism Transport: Evidence from an Integrative SEM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161289</link>
        <id>10.14569/IJACSA.2025.0161289</id>
        <doi>10.14569/IJACSA.2025.0161289</doi>
        <lastModDate>2025-12-31T12:27:04.8070000+00:00</lastModDate>
        
        <creator>Mohamed Amine Frikha</creator>
        
        <subject>Artificial intelligence (AI); demand forecasting; tourism transport; Intelligent Transportation Systems (ITS); digital infrastructure; Resource-Based View (RBV); Technology- Organization-Environment (TOE) Framework; Structural Equation Modeling (SEM); mediation analysis; sustainable mobility; data-driven decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Artificial intelligence (AI) is transforming demand forecasting in the tourism transportation sector, delivering unprecedented accuracy in volatile, seasonal, and customer-sensitive environments. Yet, many firms struggle to translate AI&#39;s potential into performance due to gaps in technological and organizational readiness. Drawing on the resource-based view (RBV) and the Technology-Organization-Environment (TOE) framework, this study develops and tests an integrative model linking digital infrastructure, employee skills, and managerial support to AI adoption and, consequently, business performance. We use Structural Equation Modeling (SEM) with bootstrapping to test direct and indirect effects. The results confirm that all three dimensions of readiness significantly boost AI adoption, which improves operational efficiency, occupancy rates, customer satisfaction, and profitability. Crucially, AI adoption fully mediates the effects of employee skills and managerial support, while partially mediating those of digital infrastructure, reflecting the latter&#39;s dual role in enabling analytics both through AI and by other means. The study contributes theoretically by clarifying the mechanism by which readiness translates into value and offers practitioners a roadmap for successful AI assimilation in high-uncertainty service contexts.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_89-AI_Readiness_as_a_Pathway_to_Sustainable_Competitiveness_in_Tourism_Transport.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Intelligence in Retail Space Optimization: Modeling the Coffee Shop Dilemma with Q-Learning Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161288</link>
        <id>10.14569/IJACSA.2025.0161288</id>
        <doi>10.14569/IJACSA.2025.0161288</doi>
        <lastModDate>2025-12-31T12:27:04.7900000+00:00</lastModDate>
        
        <creator>Siranee Nuchitprasitchai</creator>
        
        <creator>Kanchana Viriyapant</creator>
        
        <creator>Kanjanee Satitrangseewong</creator>
        
        <creator>May Myo Naing</creator>
        
        <subject>El Farol Bar problem; agent-based modeling; Q-learning; reinforcement learning; customer behavior; congestion paradox; decision-making; coffee shop operations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study models the &quot;coffee shop dilemma&quot;, where customer attendance is discouraged by both overcrowding and emptiness. Using an agent-based model with Q-learning reinforcement learning, this study simulates the daily decisions of 100 agents over a one-year period. The results reveal a self-organizing attendance cycle around a 60% capacity threshold. This study demonstrates that customer satisfaction is not driven by visit frequency, but by adaptive decision-making strategies shaped by learned congestion values. Clustering analysis identifies distinct behavioral patron groups (e.g., Ultra-Frequent, Optimized) that emerge from these subtle value differences. The study provides a data-driven framework for optimizing shop space and customer flow, offering conceptual insights into balancing the needs of quick-service and long-stay customers by dynamically managing perceived occupancy.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_88-Adaptive_Intelligence_in_Retail_Space_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Hybrid Channel-Aware Prioritization (HCAP) Scheduler for a Multi-User MIMO System in 5G Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161287</link>
        <id>10.14569/IJACSA.2025.0161287</id>
        <doi>10.14569/IJACSA.2025.0161287</doi>
        <lastModDate>2025-12-31T12:27:04.7600000+00:00</lastModDate>
        
        <creator>Krishna Deshpande</creator>
        
        <creator>Virupaxi B. Dalal</creator>
        
        <creator>Yedukondalu Udara</creator>
        
        <subject>Multiple input and multiple output; HCAP; CQI; throughput; 5G; QoS; k-means clustering; resource scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The evolution of 5G networks demands highly efficient resource allocation strategies to accommodate burgeoning mobile data traffic, latency-sensitive applications, and diverse user requirements. Multi-User Multiple-Input Multiple-Output (MU-MIMO) technology is a cornerstone of 5G, enabling simultaneous service to multiple users and significantly improving spectral efficiency. However, its performance is critically dependent on dynamic scheduling algorithms that must balance high system throughput with equitable user access amidst rapidly changing channel conditions and interference. Traditional schedulers like Round Robin, Proportional Fair, and Max-CQI often exhibit a pronounced trade-off between these objectives, struggling to adapt effectively in heterogeneous and dynamic network environments. To address this gap, this study proposes a Hybrid Channel-Aware Prioritization (HCAP) scheduler. The HCAP framework intelligently integrates real-time Channel Quality Indicator (CQI) and interference measurements into a unified user priority score, utilizing tunable α–β weights to flexibly emphasize throughput or fairness. Furthermore, it employs k-means clustering based on long-term channel statistics to group users, thereby reducing scheduling bias and promoting fairness within clusters. Evaluated through comprehensive MATLAB simulations within a realistic MU-MIMO system model employing Regularized Zero-Forcing precoding, HCAP demonstrates a superior performance balance. The results indicate that HCAP achieves up to 2.6 times higher aggregate throughput compared to conventional Proportional Fair and Max-CQI schedulers, while consistently maintaining Jain&#39;s Fairness Index above 0.90 across varied network scenarios. These findings validate HCAP as a robust, scalable, and QoS-aware scheduling solution, offering significant potential for enhancing resource allocation in next-generation wireless communication systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_87-Implementation_of_Hybrid_Channel_Aware_Prioritization_HCAP_Scheduler.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polarimetric Imaging and Computational Techniques for Identification of Malignant Lesions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161286</link>
        <id>10.14569/IJACSA.2025.0161286</id>
        <doi>10.14569/IJACSA.2025.0161286</doi>
        <lastModDate>2025-12-31T12:27:04.7130000+00:00</lastModDate>
        
        <creator>Mohammed Hachem MEZOUAR</creator>
        
        <creator>Abdessamad ACHNAOUI</creator>
        
        <creator>Mohammed TBOUDA</creator>
        
        <creator>Said CHOUHAM</creator>
        
        <creator>Said BELKACIM</creator>
        
        <creator>Mohamed NEJMEDDINE</creator>
        
        <creator>Driss MGHARAZ</creator>
        
        <subject>Polarized light; digital image correlation; Gray-Level Co-occurrence Matrix; fractal; cervix; cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>According to the International Agency for Research on Cancer, cervical cancer is a major cause of death among Moroccan women, with high incidence and mortality rates. Early detection remains essential to increasing patients’ chances of recovery. Our study combines polarized light imaging, digital image correlation (DIC), Gray-Level Co-occurrence Matrix (GLCM) texture analysis, and fractal-based local standard deviation mapping to identify microstructural alterations in cervical tissue. Smear and biopsy samples were collected and anonymized in hospitals in Agadir, Morocco. Our goal is to develop an optical system based on the interaction between polarized light and tissue, as well as a complementary computational framework to distinguish between different types of healthy, precancerous, and cancerous tissue. DIC revealed heterogeneous deformation patterns in cancerous regions, fractal analysis highlighted increased structural complexity, and GLCM features showed higher contrast and entropy in malignant samples. This pilot study introduces a novel approach combining polarimetric imaging and computational analysis, applied to cervical tissue samples from Moroccan women in Africa. Despite the small size of the ex vivo dataset, the results obtained encourage the conduct of larger-scale prospective and in vivo studies.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_86-Polarimetric_Imaging_and_Computational_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modelling of Flood Dynamics in Malaysia’s East Coast Using an NARX Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161285</link>
        <id>10.14569/IJACSA.2025.0161285</id>
        <doi>10.14569/IJACSA.2025.0161285</doi>
        <lastModDate>2025-12-31T12:27:04.6830000+00:00</lastModDate>
        
        <creator>Nur Nabilah Zakaria</creator>
        
        <creator>Azlee Zabidi</creator>
        
        <creator>Mahmood Alsaadi</creator>
        
        <creator>Mohd Izham Mohd Jaya</creator>
        
        <subject>Flood prediction; NARX model; hydrological modelling; Pekan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Flood forecasting is critical for improving early warning systems in Malaysia’s East Coast region, particularly in flood-prone Pekan. This study develops a Nonlinear Autoregressive with Exogenous Inputs (NARX) model to predict river water levels using data from four stations: Sungai Pahang, Sungai Pahang Tua, Sungai Paloh Hinai, and Sungai Mentiga (2020–2024). The dataset was preprocessed through short-gap interpolation, removal of long missing segments, and segmentation into continuous sequences to ensure high-quality inputs for modeling. A total of 75 NARX configurations were evaluated using different lag values, hidden neuron counts, and training epochs. Model performance was assessed using Mean Squared Error (MSE) and residual diagnostics. The best model—lag = 6 and 300 hidden units—achieved a validation loss of 0.102, demonstrating stable convergence and strong generalization. Prediction results showed close alignment with actual river levels. The findings confirm that the NARX approach effectively captures nonlinear hydrological dynamics and provides reliable short-term water level forecasts for Pekan, addressing an existing gap in localized flood prediction studies.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_85-Predictive_Modelling_of_Flood_Dynamics_in_Malaysias_East_Coast.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SWAP Optimization for Qubit Mapping Based on the Centric-Shortest Quantum Gate Set in NISQ Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161284</link>
        <id>10.14569/IJACSA.2025.0161284</id>
        <doi>10.14569/IJACSA.2025.0161284</doi>
        <lastModDate>2025-12-31T12:27:04.6670000+00:00</lastModDate>
        
        <creator>Shujuan Liu</creator>
        
        <creator>Hui Li</creator>
        
        <creator>Yingsong Ji</creator>
        
        <creator>Jiepeng Wang</creator>
        
        <subject>Quantum computing; qubit mapping; Centric-Shortest Quantum Gate Set (C-SQGS); executable SWAP gate; multi-factor cost function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In the quantum computing era of Noisy Intermediate-Scale Quantum (NISQ) devices, conventional qubit mapping strategies typically rely on specific heuristic rules to solve the mapping problem, overlooking the impact of other factors on the mapping, which leads to increased overhead from extra SWAP gates. To address this issue, we propose a SWAP optimization strategy based on the Centric-Shortest Quantum Gate Set (C-SQGS) and apply it to qubit mapping. In this approach, the centric qubit is determined by analyzing the maximum-flexibility qubit set and the physical distances between the associated CNOT gates, leading to the identification of the Centric-Shortest Quantum Gate Set. To overcome the limitations of traditional cost functions that consider only a single factor, a multi-factor cost function is introduced to evaluate the overall overhead of candidate SWAP operations and determine the pending SWAP gate set. Based on qubit flexibility analysis, an executable SWAP gate is identified and inserted into the circuit. Experimental results demonstrate that the C-SQGS strategy effectively reduces both SWAP gate and two-qubit gate overhead. Specifically, it achieves an average SWAP gate reduction of 36.9% and 47.7%, and a two-qubit gate reduction of 13.8% and 13.5%, on the t|ket⟩ and Qiskit compilers, respectively. These results highlight the potential of the C-SQGS strategy in enhancing the efficiency of qubit mapping for NISQ devices.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_84-SWAP_Optimization_for_Qubit_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Class Object Detection Using Quantized YOLOv11 for Real-Time Inference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161283</link>
        <id>10.14569/IJACSA.2025.0161283</id>
        <doi>10.14569/IJACSA.2025.0161283</doi>
        <lastModDate>2025-12-31T12:27:04.6370000+00:00</lastModDate>
        
        <creator>Yehia A. Soliman</creator>
        
        <creator>Amr Ghoneim</creator>
        
        <creator>Mahmoud Elkhouly</creator>
        
        <subject>Quantized neural networks; YOLOv11; object detection; embedded systems; real-time inference; model optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Real-time multi-class object detection on embedded devices poses significant challenges due to limited computational power, memory capacity, and energy efficiency requirements. Conventional high-precision object detectors, such as YOLOv11, deliver outstanding accuracy but are computationally intensive, making them unsuitable for deployment on resource-constrained hardware. This study presents a quantized implementation of the YOLOv11 model designed to enable efficient real-time inference on embedded platforms. The proposed approach applies post-training integer quantization and mixed-precision optimization to minimize computation and memory usage while maintaining detection accuracy across multiple object categories. Experimental evaluations were conducted on the COCO and Pascal VOC datasets. The results indicate that the quantized YOLOv11 achieves a 3.2&#215; increase in inference speed, a 2.7&#215; reduction in memory footprint, and a 35% improvement in energy efficiency, with less than 2% loss in mean Average Precision (mAP) compared to the full-precision baseline. The optimized model sustains real-time performance exceeding 45 frames per second (FPS), demonstrating that quantization is a viable and effective approach for deploying high-performance object detection models on embedded systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_83-Multi_Class_Object_Detection_Using_Quantized_YOLOv11_for_Real_Time_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Point of Interest in Location-Based Augmented Reality Tourism Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161282</link>
        <id>10.14569/IJACSA.2025.0161282</id>
        <doi>10.14569/IJACSA.2025.0161282</doi>
        <lastModDate>2025-12-31T12:27:04.6030000+00:00</lastModDate>
        
        <creator>Rimaniza Zainal Abidin</creator>
        
        <creator>Ma Boqi</creator>
        
        <creator>Rosilah Hassan</creator>
        
        <creator>Nor Shahriza Abdul Karim</creator>
        
        <creator>Mohamad Hidir Mhd Salim</creator>
        
        <subject>Augmented Reality; LBAR; PutrajayAR; AR Discovery; AR Recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In recent years, the rapid growth of the tourism industry and increasing demand for efficient and meaningful travel experiences have highlighted the need for smarter travel assistance tools. Many tourists, particularly first-time visitors, often face challenges navigating unfamiliar destinations and identifying relevant points of interest, leading to delays, inconvenience, and reduced satisfaction. During the peak tourist season, most local hotels and restaurants are overcrowded, and tourists have to find accommodation and food in unfamiliar places, which reduces their travel efficiency and experience. To address these challenges, this study proposes PutrajayAR, a personalized Location-Based Augmented Reality (LBAR) tourism application designed to enhance tourists’ efficiency and overall travel experience. The application provides AR Discovery and AR Recommendation features that dynamically present personalized points of interest based on user preferences, spatial proximity, and contextual constraints. The system was developed using the waterfall model and evaluated through black box testing, usability testing, and persuasive design assessment. The results demonstrate that PutrajayAR significantly improves user experience and satisfaction compared to non-personalized approaches, thereby validating the effectiveness of personalized LBAR systems in helping users navigate effectively and discover attractions that match their interests.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_82-Personalized_Point_of_Interest_in_Location_Based_Augmented_Reality_Tourism_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DrugCellGNN: Graph Convolutional Networks for Integrating Omics and Drug Similarities in Cancer Therapy Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161281</link>
        <id>10.14569/IJACSA.2025.0161281</id>
        <doi>10.14569/IJACSA.2025.0161281</doi>
        <lastModDate>2025-12-31T12:27:04.5730000+00:00</lastModDate>
        
        <creator>Gehad Awad Aly</creator>
        
        <creator>Rania Ahmed Abdel Azeem Abul Seoud</creator>
        
        <creator>Dina Ahmed Salem</creator>
        
        <subject>Precision oncology; drug sensitivity prediction; graph neural networks (GNNs); multi-omics integration; focal loss; PCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Predicting drug response in cancer cell lines is a critical step toward precision oncology, enabling more efficient therapeutic discovery and personalized treatment strategies. However, the complexity of drug–cell interactions, driven by diverse omics profiles and structural variability among drugs, poses significant challenges for conventional machine learning approaches. In this study, we propose an end-to-end pipeline that integrates multi-omics data (gene expression, copy number variation, and mutations) with chemical structure representations of drugs to predict binary drug response. Our method employs principal component analysis (PCA) for dimensionality reduction of high-dimensional omics data, followed by the computation of drug–drug and cell–cell similarity matrices. These are used to construct a heterogeneous graph combining intra-class similarities with drug–cell interactions. A customized graph neural network model, DrugCellGNN, is then applied to learn context-aware embeddings of drugs and cells. The fused representations are passed to a downstream multi-layer perceptron for classification. To address class imbalance, we introduce a dynamic focal loss function that adaptively emphasizes hard-to-classify examples. Evaluation on the GDSC dataset with an 80/20 train–test split demonstrates strong performance: Accuracy = 0.8935, F1 = 0.9201, AUC = 0.9510. This work highlights the utility of graph-based integration of multi-omics and drug features for drug sensitivity prediction. By leveraging both molecular and relational information, the proposed framework offers a robust and extensible foundation for advancing computational approaches in precision oncology.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_81-DrugCellGNN_Graph_Convolutional_Networks_for_Integrating_Omics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AI-Driven Framework for Network Intrusion Detection Using ANOVA-Based Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161280</link>
        <id>10.14569/IJACSA.2025.0161280</id>
        <doi>10.14569/IJACSA.2025.0161280</doi>
        <lastModDate>2025-12-31T12:27:04.5400000+00:00</lastModDate>
        
        <creator>Salam Allawi Hussein</creator>
        
        <creator>S&#225;ndor R&#233;p&#225;s</creator>
        
        <subject>Network security; intrusion detection; machine learning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In the last few years, cyberattacks have become more complex, and it is becoming increasingly necessary to establish secure networks. This study examines enhancements to intrusion detection systems (IDSs) through the implementation of machine learning for the categorization of network traffic attacks. For the current study, we utilize four publicly available datasets: CIC-IDS2017, CIC-DoS2017, CSE-CIC-IDS2018, and CIC-DDoS2019. We examined three machine learning techniques: LightGBM, Random Forest, and XGBoost. Experimental results showed that Random Forest and XGBoost achieved the highest accuracy of 0.99 in both binary and multi-class intrusion detection tasks, maintaining balanced performance with macro F1-scores around 0.86. LightGBM exhibited slightly lower overall performance, but benefited from ANOVA-based feature selection, which improved its recall and model stability. Feature selection also enhanced computational efficiency by reducing feature redundancy while preserving accuracy across models. These results highlight how AI tools could help network security deal with emerging threats and improve the performance of IDSs. The study underscores the critical role of feature selection in enhancing model efficiency, hence promoting advancements in automated network security systems that can adapt to evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_80-An_AI_Driven_Framework_for_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Detection of Acute Lymphocytic Leukemia Using Deep Learning and Hybrid Classifiers on Microscopic Blood Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161279</link>
        <id>10.14569/IJACSA.2025.0161279</id>
        <doi>10.14569/IJACSA.2025.0161279</doi>
        <lastModDate>2025-12-31T12:27:04.5100000+00:00</lastModDate>
        
        <creator>H. A. El Shenbary</creator>
        
        <creator>Amr T. A. Elsayed</creator>
        
        <creator>Khaled A. A. Khalaf Allah</creator>
        
        <creator>Belal Z. Hassan</creator>
        
        <subject>Deep learning; transfer learning; leukemia; Alexnet; VGG19; SVM; K-NN; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>There is no doubt that a significant number of individuals worldwide suffer from blood cancer. Many people are unaware of the dangers associated with this disease, which can be fatal. When diagnosed, patients may feel intense fear and a sense of powerlessness. In addition, due to the rarity of these diseases, patients often struggle to find the necessary help and information. A specific type of blood cancer called acute lymphocytic leukemia (ALL) mainly affects white blood cells and is particularly prevalent in children. Early detection of this disease improves the chances of recovery. Therefore, it is crucial to have an accurate and dependable method for identifying blood cancers. Deep learning (DL) architectures have garnered significant interest within the computer vision realm. Recently, there has been a strong focus on the accomplishments of pretrained architectures in accurately describing or classifying data from various real-world image datasets. The classification performance of the proposed models is investigated by applying Softmax, Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN) classifiers separately on deep learning neural networks (Alexnet and VGG19) to differentiate between the types of ALL using a microscopic image dataset. The experimental results demonstrate that the combination of Alexnet with SVM achieves outstanding classification performance on the leukemia dataset, particularly on the original (unsegmented) data, achieving 97.03% on the Benign class, 96.14% on the Early class, 99.49% on the Pre class, and 99.9% on the Pro class. This approach achieves higher accuracy levels than practicing physicians.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_79-Enhanced_Detection_of_Acute_Lymphocytic_Leukemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adversarial Robustness of Deep Learning in Medical Imaging: A Comprehensive Survey and Benchmark of State-of-the-Art Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161278</link>
        <id>10.14569/IJACSA.2025.0161278</id>
        <doi>10.14569/IJACSA.2025.0161278</doi>
        <lastModDate>2025-12-31T12:27:04.4930000+00:00</lastModDate>
        
        <creator>Neethunath M R</creator>
        
        <creator>Gladston Raj S</creator>
        
        <creator>Pradeepan P</creator>
        
        <subject>Adversarial attacks; dermatoscopy; deep learning; robustness benchmark; security in medical AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The integration of artificial intelligence into medical diagnostics promises to revolutionize healthcare. However, the reliability of these systems is critically undermined by adversarial examples, which are imperceptible perturbations that can lead to misdiagnosis. Ensuring the robustness of AI-driven clinical decisions is paramount for ensuring patient safety and institutional trust. This study addresses this challenge in two ways. First, we provide a structured survey of the state-of-the-art adversarial threats, including adversarial attacks and detection strategies. Second, we present a rigorous empirical benchmark of five prominent CNN architectures for dermatoscopic skin cancer classification using the gold standard Auto Attack suite. The results revealed significant disparities in robustness based on the architectural design. Although all standard-trained models are highly vulnerable, their defensibility through adversarial training varies significantly. We found that modern transformer-inspired architectures, such as ConvNeXt, achieved the state-of-the-art robust accuracy while maintaining high performance with minimal trade-offs. Conversely, architectures optimized for mobile efficiency, such as MobileNetV2 and EfficientNet-B2, are exceptionally difficult to defend. To the best of our knowledge, this is the first study to establish an architectural hierarchy of robustness for dermatoscopic tasks, demonstrating that hybrid designs outperform mobile-optimized models by over 25% under adversarial conditions. These findings advocate a shift in clinical AI validation from accuracy-centric to robustness-centric metrics.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_78-Adversarial_Robustness_of_Deep_Learning_in_Medical_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Question Answering System for FAQ COVID-19 Using Word Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161277</link>
        <id>10.14569/IJACSA.2025.0161277</id>
        <doi>10.14569/IJACSA.2025.0161277</doi>
        <lastModDate>2025-12-31T12:27:04.4630000+00:00</lastModDate>
        
        <creator>Nazar Elfadil</creator>
        
        <creator>Sarah Saad Alanazi</creator>
        
        <subject>Word embedding; Bag of Words; BERT; Word2Vec; Qaviar; Question Answering System (QAS); COVID-19; natural language processing; public health informatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study covers the development and evaluation of a Question Answering System (QAS) for COVID-19. Unlike previous work in biomedical QAS, which primarily targets technical users, this work develops a COVID-19 QAS customized for the general public, especially those with limited knowledge of the clinical field. The methodology was based on developing the system and conducting experiments to check its accuracy. The QAS processes the user query using three different feature extraction approaches and outputs the related FAQ and its associated answer from a set of 561 FAQs sourced from the Ministry of Health, the Virginia Department of Health, and the World Health Organization. The accuracy of the resulting responses was tested with Qaviar. The experimental results indicated that BERT consistently achieved the highest accuracy across all datasets, with 96.25%–98%; Word2Vec scored 86.25%–95.2%, while BoW scored between 86.24% and 88%. While most models performed stably, Word2Vec was comparatively unstable across datasets. All models achieved their lowest accuracy on the smallest dataset, and increasing dataset size did not necessarily yield higher accuracy. Overall, BERT outperformed the other embedding approaches.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_77-Automated_Question_Answering_System_for_FAQ_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Blockchain-Enabled Security Framework for Smart Grids Using Adaptive AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161276</link>
        <id>10.14569/IJACSA.2025.0161276</id>
        <doi>10.14569/IJACSA.2025.0161276</doi>
        <lastModDate>2025-12-31T12:27:04.4170000+00:00</lastModDate>
        
        <creator>Brinal Colaco</creator>
        
        <creator>Nazneen Ansari</creator>
        
        <subject>Smart Grid Security; intrusion detection system (IDS); adaptive AI; deep learning; false data injection (FDI) attacks; cyber-physical systems (CPS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The increasing interconnectivity of smart grids exposes critical energy infrastructure to more sophisticated cyber threats, necessitating adaptable and auditable security measures. This study presents a blockchain-enabled, self-improving intrusion detection system (IDS) that integrates a permissioned blockchain, autonomous governance loops, and a hybrid CNN–LSTM detector. The platform retrains models across federated nodes using blockchain-anchored data, facilitates automatic containment through smart contracts, and permanently stores validated alarms. Following multiple self-improvement cycles, the system enhances its performance from an initial 94.5% accuracy and 4.2% false positive rate (FPR) to 98.1% accuracy, a 97.6% detection rate (recall), and a 2.1% FPR in simulated tests. In comparison to baselines, a blockchain-only IDS recorded 94.1% accuracy with a 4.8% FPR, while a conventional machine learning-based IDS achieved 92.7% accuracy with a 5.4% FPR. Operationally, blockchain anchoring provided a throughput of approximately 1,200 transactions per second with an average transaction latency of about 1.5 seconds. The combined detect-to-contain latency for high-severity events was approximately 3.2 seconds. These findings demonstrate that a scalable, low-FPR, and rapid-response security paradigm for modern smart grids can be achieved by integrating adaptive artificial intelligence with decentralized, robust governance.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_76-Autonomous_Blockchain_Enabled_Security_Framework_for_Smart_Grids.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling Dimensions and Indicators of Readiness for Lean 4.0 Implementation in Indonesian Industries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161275</link>
        <id>10.14569/IJACSA.2025.0161275</id>
        <doi>10.14569/IJACSA.2025.0161275</doi>
        <lastModDate>2025-12-31T12:27:04.3870000+00:00</lastModDate>
        
        <creator>Sarjono Sarjono</creator>
        
        <creator>Pudji Hastuti</creator>
        
        <creator>Satrio Utomo</creator>
        
        <creator>Gani Soehadi</creator>
        
        <creator>Budi Setiadi Sadikin</creator>
        
        <creator>Manifas Zubair</creator>
        
        <creator>Jaizuluddin Mahmud</creator>
        
        <creator>Rizki Arizal Purnama</creator>
        
        <creator>Hardono Hardono</creator>
        
        <creator>Helen Fifianny</creator>
        
        <subject>Lean 4.0; Industry 4.0; readiness assessment; organizational readiness; Indonesian industries; digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The integration of Lean Manufacturing and Industry 4.0, known as Lean 4.0, has emerged as a strategic approach to enhancing operational efficiency, digital transformation, and competitiveness in modern industries. The rapid development of Industry 4.0 has driven a massive transformation in the global manufacturing sector, including Indonesia, which continues to face competitiveness challenges due to limitations in technological capabilities, human resources, and organizational culture. However, the successful implementation of Lean 4.0 requires a structured and measurable level of organizational readiness. This study aims to model the key dimensions and indicators that define the readiness of Indonesian companies for Lean 4.0 implementation. Using a mixed-methods approach of qualitative and quantitative analysis, this study begins with a meta-synthesis of existing literature and expert interviews to identify initial dimensions and indicators, followed by a structured survey to validate the model. The analytical framework integrates the theoretical foundations of Lean and Industry 4.0 principles, existing readiness assessment models, as well as models from key references. The key finding is a proposed model for Lean 4.0 Readiness for Indonesian industries, consisting of 6 key dimensions and 41 indicators. The 6 key dimensions are: Leadership and Strategy, People and Culture, Technology and Digital Infrastructure, Operation and Process, Product and Service, and External Collaboration and Integration. Expert validation and pilot testing confirmed the consistency and contextual relevance of this model for industries in Indonesia. The findings contribute to theory and practice by providing a comprehensive and diagnostic framework for evaluating Lean 4.0 implementation readiness, as well as supporting the development of a readiness model that can be adapted to other industrial parks in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_75-Modelling_Dimensions_and_Indicators_of_Readiness_for_Lean_4_0_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>H∞ Control Design for Nonlinear Systems via Multimodel Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161274</link>
        <id>10.14569/IJACSA.2025.0161274</id>
        <doi>10.14569/IJACSA.2025.0161274</doi>
        <lastModDate>2025-12-31T12:27:04.3530000+00:00</lastModDate>
        
        <creator>Rihab ABDELKRIM</creator>
        
        <subject>Nonlinear systems; H∞ loop shaping control; multimodel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Nonlinear systems are integral to contemporary engineering applications, yet their regulation remains a significant challenge due to complex and highly dynamic behaviors. Robust control frameworks, particularly H∞ methods, provide systematic tools to ensure stability and performance in the presence of disturbances and modeling uncertainties. This study proposes an integrated design methodology that combines H∞ loop-shaping techniques with multimodel approaches to achieve resilient control of nonlinear systems. The control law is structured around the H∞ loop-shaping scheme, which shapes the open-loop dynamics to meet desired robustness and performance specifications. The multimodel strategy further enhances adaptability by accommodating diverse operating conditions and capturing variations in system behavior. Several control architectures are presented that unify H∞ loop-shaping with multimodel representations, offering a flexible framework for nonlinear system control. The design methodology also ensures desirable transient responses, thereby improving practical applicability for complex systems. A study is conducted to validate the proposed approaches. Simulation results confirm the effectiveness of multimodel H∞ control systems, underscoring their potential as a robust solution for complex nonlinear applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_74-H_Control_Design_for_Nonlinear_Systems_via_Multimodel_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Scheduling in Cloud Computing Environment Based on Dwarf Mongoose Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161273</link>
        <id>10.14569/IJACSA.2025.0161273</id>
        <doi>10.14569/IJACSA.2025.0161273</doi>
        <lastModDate>2025-12-31T12:27:04.3230000+00:00</lastModDate>
        
        <creator>Olanrewaju Lawrence Abraham</creator>
        
        <creator>Md Asri Ngadi</creator>
        
        <creator>Johan Bin Mohamad Sharif</creator>
        
        <creator>Mohd Kufaisal Mohd Sidik</creator>
        
        <creator>Ogunyinka Taiwo Kolawole</creator>
        
        <subject>Task scheduling; cloud computing; virtual machines; dwarf mongoose optimization algorithm; Cloudsim; makespan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid advancement of the Internet and Internet of Things (IoT) technologies has significantly increased the demand for scalable and efficient cloud computing solutions. Task scheduling, a critical aspect of cloud computing, directly impacts system performance by influencing resource utilization, execution time, and operational costs. However, scheduling tasks in large-scale, dynamic cloud environments remains an NP-hard problem, with existing metaheuristic methods often struggling with scalability, convergence, and adaptability. This study proposes a novel task scheduling approach based on the dwarf mongoose optimization (DMO) algorithm. To assess its effectiveness, we conduct experiments under two scenarios. The results demonstrate that, compared with existing algorithms, the proposed DMO algorithm offers faster convergence and higher accuracy in identifying optimal task scheduling solutions, particularly under large-scale task loads. We evaluated the method using the Google Cloud Jobs (GoCJ) dataset, and the findings confirm that DMO outperforms prior state-of-the-art techniques in terms of reducing makespan.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_73-Task_Scheduling_in_Cloud_Computing_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Engineering Prompt-Orchestrated LLM Workflows for Automated Test Case Generation in Agile Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161272</link>
        <id>10.14569/IJACSA.2025.0161272</id>
        <doi>10.14569/IJACSA.2025.0161272</doi>
        <lastModDate>2025-12-31T12:27:04.2900000+00:00</lastModDate>
        
        <creator>Almeyda Alania Fredy Antonio</creator>
        
        <creator>Barrientos Padilla Alfredo</creator>
        
        <creator>Siancas Garay Ronald Gustavo</creator>
        
        <subject>Software testing; test case generation; Large Language Models; Generative AI; prompt engineering; LLM orchestration; Behavior-Driven Development (BDD); agile methodology; acceptance testing; schema-aware prompting; Human-in-the-Loop; quality assurance automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Manual test case generation for agile software development is a critical bottleneck that is costly, inconsistent, and error-prone. This study introduces a prompt-engineering and multi-level orchestration framework to automate this process. The proposed approach explicitly targets the automated generation of high-level acceptance test cases, addressing a gap in existing research that predominantly focuses on unit-level or reactive testing. The proposed tool, AI-Based Desktop Test Generator (AIDTG), employs a dual-LLM engine (Gemini 1.5 and GPT-4) to transform high-level functional descriptions from the Product Backlog into structured validation scenarios. Unlike prior LLM-based testing approaches, the framework integrates schema-aware prompt engineering and dual-model orchestration to ground the generation process in both functional intent and technical data constraints. The methodology is distinguished by its context-aware prompt engineering, which injects a frozen database schema to ground the models, and its ability to format outputs for the TestRigor BDD 2.0 platform. This schema-grounded and orchestrated workflow enables the systematic translation of informal User Stories into executable Behavior-Driven Development (BDD) acceptance tests, reducing ambiguity and improving semantic correctness. Experimental results on a real-world dataset of fifty User Stories show the framework reduces manual test design effort by 80%, achieves an average quality rating of 4.75 out of 5 from human experts, and produces BDD scripts with a 91.9% functional correctness pass rate. These results demonstrate that orchestrated, schema-aware Generative AI can operate as a reliable co-assistant for QA teams, improving efficiency while maintaining high standards of quality and executability.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_72-Engineering_Prompt_Orchestrated_LLM_Workflows.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning-Driven Adaptive Aggregation for Blockchain-Enabled Federated Learning in Secure EHR Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161271</link>
        <id>10.14569/IJACSA.2025.0161271</id>
        <doi>10.14569/IJACSA.2025.0161271</doi>
        <lastModDate>2025-12-31T12:27:04.2430000+00:00</lastModDate>
        
        <creator>Cai Yanmin</creator>
        
        <creator>Wang Lei</creator>
        
        <creator>Zainura Idrus</creator>
        
        <creator>Jasni Mohamad Zain</creator>
        
        <creator>Marina Yusoff</creator>
        
        <subject>Federated learning; blockchain; reinforcement learning; electronic health records; privacy preservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>With the rapid digitization of healthcare, blockchain-integrated federated learning (FL) for EHR management faces challenges of heterogeneous data, high latency, and adversarial vulnerabilities. This study proposes a novel Reinforcement Learning-Driven Adaptive Aggregation (RL-DAA) in an enhanced blockchain-FL framework, using Q-learning to dynamically optimize model weights based on trust, data quality, and node reliability. RL-DAA reduces computational overhead by 40% via state-action-reward optimization (mitigating non-IID bias) and boosts robustness against Byzantine faults by 35% with fault-tolerant rewards. Validated on adapted CIFAR-10 and real-world healthcare simulations, compared to EPP-BCFL and baseline models, RL-DAA achieves 96.5% accuracy, 45% lower latency, and 38% reduced energy consumption. By dynamically balancing efficiency, privacy, and robustness via RL-driven optimization, this work advances secure, scalable EHR management, with broader potential in privacy-sensitive domains.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_71-Reinforcement_Learning_Driven_Adaptive_Aggregation_for_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Duration of Judicial Cases Using Hybrid Systems Based on Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161270</link>
        <id>10.14569/IJACSA.2025.0161270</id>
        <doi>10.14569/IJACSA.2025.0161270</doi>
        <lastModDate>2025-12-31T12:27:04.2130000+00:00</lastModDate>
        
        <creator>Amina BOUHOUCHE</creator>
        
        <creator>Saliha YASSINE</creator>
        
        <creator>Mustapha ESGHIR</creator>
        
        <creator>Mohammed ERRACHID</creator>
        
        <subject>Language model; judicial case durations; legal domain; Arabic legal corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Recent technological developments in the field of Natural Language Processing (NLP), notably due to Transformer architectures and language models, have made it possible to tackle aspects that were previously inaccessible with traditional tools. The present study addresses the issue of predicting legal case durations using Arabic judicial data. For this task, hybrid systems based on language models were implemented. The Arabic_LegalBERT model, derived from AraBERT and specialized through additional pre-training on an Arabic legal corpus, was proposed to generate representations that were integrated into the downstream steps of the approach. Two methods were adopted for predicting the processing time of a new case: the first followed a framework combining automatic classification with statistical correspondence, while the second relied on cosine similarity combined with empirical statistics. The results obtained with the classification approach are particularly promising, with a small improvement for the system based on the specialized model. For the similarity-based approach, the results are also promising, with a clear distinction observed when evaluating each type individually, indicating that types with a higher number of cases generally perform better than those with fewer cases.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_70-Predicting_the_Duration_of_Judicial_Cases_Using_Hybrid_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Intelligence for Sustainable Banking: A Novel Fermatean Fuzzy LOPCOW–EDAS Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161269</link>
        <id>10.14569/IJACSA.2025.0161269</id>
        <doi>10.14569/IJACSA.2025.0161269</doi>
        <lastModDate>2025-12-31T12:27:04.1970000+00:00</lastModDate>
        
        <creator>Majidah Majidah</creator>
        
        <creator>Dadan Rahadian</creator>
        
        <creator>Anisah Firli</creator>
        
        <creator>Suhal Kusairi</creator>
        
        <creator>Serkan Eti</creator>
        
        <creator>Serhat Y&#252;ksel</creator>
        
        <creator>Hasan Din&#231;er</creator>
        
        <subject>Fermatean fuzzy sets; multi-criteria decision-making; sustainable banking; digital transformation; decision support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The primary objective of this study is to identify the priority strategies required for banks to achieve their sustainable growth targets and to develop a new fuzzy multi-criteria decision-making model sensitive to uncertainty conditions. The model proposed in this study is designed based on the integration of Fermatean Fuzzy LOPCOW–EDAS. In the first stage, the criteria and strategic alternatives affecting sustainable growth were identified through a literature review. The LOPCOW method was then used to objectively calculate the criteria&#39;s importance weights. The prioritization of strategic alternatives was then performed using the EDAS method. To more accurately model the uncertainties in expert judgments, the opinions of ten experts were converted to the Fermatean fuzzy numbers and analyzed. The use of Fermatean fuzzy sets offers greater expressive power and increases decision reliability compared to traditional fuzzy and Pythagorean approaches. The LOPCOW method objectively evaluates the information density of the criteria by using logarithmic percentage change, while the EDAS method reduces the impact of outliers by considering the distance of the alternatives from the mean solution, producing a more stable ranking. The findings indicate that the &quot;digital green banking practices&quot; criterion is the most critical element for sustainable growth. Furthermore, the &quot;Digitalization and innovation capability&quot; strategy was determined to be the most important alternative. This result demonstrates that sustainable growth in the banking sector can be achieved through the integration of digital technologies and environmentally friendly practices.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_69-Computational_Intelligence_for_Sustainable_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Engineering for Machine Learning-Based Trading Systems Using Decision Tree, Random Forest, and Gradient Boosting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161268</link>
        <id>10.14569/IJACSA.2025.0161268</id>
        <doi>10.14569/IJACSA.2025.0161268</doi>
        <lastModDate>2025-12-31T12:27:04.1670000+00:00</lastModDate>
        
        <creator>Nugroho Agus Haryono</creator>
        
        <creator>Yuan Lukito</creator>
        
        <creator>Aditya Wikan Mahastama</creator>
        
        <subject>Feature engineering; machine learning; trading system; decision tree; Random Forest; Gradient Boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Machine learning-based trading systems require the selection and creation of features that crucially determine the performance level of the trading system. This study introduces an asset-specific, correlation-based feature selection approach for machine learning–based stock trading models. The research conducts a systematic evaluation of the influence of lookup period, the number of features from technical analysis, and feature selection on the performance of trading systems using tree-based algorithms: Decision Tree, Random Forest, and Gradient Boosting. The performance of the trading system was measured using the backtesting method, with metrics such as total return, win rate ratio, and profit factor. The research steps included selecting stocks with the largest market capitalization in the financial sector, which are included in the banking index. Historical data on the prices of these stocks was obtained from Yahoo! Finance for the years 2014-2025. The historical data was then divided into two parts, namely the in-sample dataset (2014-2024 time period) and the out-of-sample dataset (2025 time period). Each part of the data was supplemented with features from technical analysis and several other additional features. Trading signals are determined based on a profit target of +4% and a loss limit of –2% in a lookup period of 2 to 10 days. The results show that the ML strategy consistently outperforms the buy-and-hold strategy, with Gradient Boosting generating the highest return (37.443%). Spearman correlation-based feature selection per stock improves the performance of the strategy compared to uniform features.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_68-Feature_Engineering_for_Machine_Learning_Based_Trading_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Experience Evaluation in Government Applications: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161267</link>
        <id>10.14569/IJACSA.2025.0161267</id>
        <doi>10.14569/IJACSA.2025.0161267</doi>
        <lastModDate>2025-12-31T12:27:04.1200000+00:00</lastModDate>
        
        <creator>Emmy Hossain</creator>
        
        <creator>Noris Mohd Norowi</creator>
        
        <creator>Azrina Kamaruddin</creator>
        
        <creator>Hazura Zulzalil</creator>
        
        <subject>User Experience Evaluation; UX evaluation; government applications; e-government; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Evaluating the User Experience (UX) of government applications is becoming increasingly crucial as governments deploy public services online. Nevertheless, research in this area remains fragmented. Correspondingly, this study presents a systematic review of UX evaluation in government applications to address the following Research Questions (RQs): What UX evaluation approaches and UX dimensions have been employed in the UX evaluation of government applications, and how do domains, contextual, and cultural considerations influence the UX evaluation of government applications? Kitchenham and Charters’ guidelines, as well as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), are employed to guide this review. Moreover, recent studies from Scopus and Web of Science (WoS) databases between the years 2023 and 2025 were retrieved using a predefined review protocol. After applying the inclusion and exclusion criteria and subjecting the studies to quality assessment, the final number of retained studies for this review is 19. The analysis reveals four key themes: diversity in UX evaluation approaches, the range of UX dimensions evaluated, the range of domains evaluated, and the contextual and cultural considerations in UX evaluations. The findings reveal that UX evaluations of government applications are predominantly usability-focused, while hedonic, emotional, and cultural dimensions receive limited and inconsistent attention. In addition, the review highlights that UX evaluations for government applications should encompass both technical and pragmatic aspects, as well as domain-specific, cultural, and contextual dimensions. Accordingly, strengthening these evaluations can lead to more inclusive and meaningful assessments, resulting in government applications that offer better UX. Overall, the findings of this review may serve as a reference for future work and advance the field of UX evaluation, especially in the context of government applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_67-User_Experience_Evaluation_in_Government_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Systems, Machine Learning, and Deep Learning Algorithms for Detecting Banking Fraud: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161266</link>
        <id>10.14569/IJACSA.2025.0161266</id>
        <doi>10.14569/IJACSA.2025.0161266</doi>
        <lastModDate>2025-12-31T12:27:04.0730000+00:00</lastModDate>
        
        <creator>Jessica Vazallo-Bautista</creator>
        
        <creator>Allison Villalobos-Pe&#241;a</creator>
        
        <creator>Juan Soria-Quijaite</creator>
        
        <subject>Deep learning; algorithms; machine learning; fraud detection; real-time methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The increase in unauthorized remote banking fraud has intensified with the expansion of digital channels, creating new risks and highlighting the inadequacy of traditional methods based on fixed rules and manual audits. This review aims to synthesize recent scientific evidence on the use of machine learning and deep learning techniques for the early detection of fraudulent banking transactions, considering supervised and unsupervised models and deep architectures that allow the analysis of complex patterns present in financial transactions. A total of 357 original articles were identified in the Scopus and Web of Science databases, in addition to a manual search, published up to 2025. Of these, 35 studies met the inclusion criteria established using the PICOT approach and the PRISMA protocol. The most widely implemented models in the selected studies were Random Forest, XGBoost, SVM, LSTM networks, and graph-based approaches. The combination of different algorithms improves fraud detection by integrating temporal, relational, and behavioral patterns. Advanced models show better metrics in accuracy, recall, and F1-score compared to traditional methods, expanding the possibilities for continuous monitoring and reducing false positives. There are consistent associations between the application of advanced models, the availability of quality data, and the ability to adapt to different transactional scenarios, which favor timely fraud detection if challenges such as class imbalance, the need for real-time decisions, and the heterogeneity of financial contexts are addressed. The integration of multiple approaches and the optimization of preprocessing and evaluation processes allow us to move toward more robust, scalable anti-fraud systems that are better suited to the current demands of the digital environment.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_66-Intelligent_Systems_Machine_Learning_and_Deep_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balancing Privacy and Acceptance: The Role of Anthropomorphism and Information Sensitivity in Autonomous Taxis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161265</link>
        <id>10.14569/IJACSA.2025.0161265</id>
        <doi>10.14569/IJACSA.2025.0161265</doi>
        <lastModDate>2025-12-31T12:27:04.0400000+00:00</lastModDate>
        
        <creator>Jia Fu</creator>
        
        <creator>Kyoung-jae Kim</creator>
        
        <subject>Anthropomorphism; information sensitivity; privacy concern; technology acceptance; individual cultural value; technical familiarity; autonomous taxis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study investigates how anthropomorphic interface design and information sensitivity influence users’ acceptance of autonomous vehicles (AVs), and examines the underlying role of privacy concern and its boundary conditions in a commercial autonomous taxi context. Addressing prior research that has predominantly examined anthropomorphism or privacy concerns in isolation, this study employs a 2 &#215; 2 experimental design to test the main and interaction effects of anthropomorphism and information sensitivity on technology acceptance. The results demonstrate that both anthropomorphism and information sensitivity significantly affect users’ acceptance of AV technology, with a significant interaction effect between the two. Specifically, when information sensitivity is high, lower levels of anthropomorphism lead to higher acceptance, whereas under low information sensitivity, anthropomorphic design enhances acceptance. Further analysis reveals that privacy concern mediates the relationship between anthropomorphism, information sensitivity, and technology acceptance. Moreover, cultural value orientation and technical familiarity moderate the effect of privacy concern on technology acceptance, such that the negative impact of privacy concern is attenuated among users with stronger collectivist orientations and higher levels of technical familiarity. By clarifying the sequential roles of design cues, privacy concern, and individual differences, this study reveals a dynamic balance mechanism between emotional engagement and perceived privacy risk in data-intensive mobility services. These findings advance understanding of privacy–acceptance dynamics and provide practical implications for the design and deployment of autonomous taxi interfaces.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_65-Balancing_Privacy_and_Acceptance_The_Role_of_Anthropomorphism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Spherical Fuzzy–Machine Learning Model for Multi-Criteria Decision-Making in Sustainable Water Resource Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161264</link>
        <id>10.14569/IJACSA.2025.0161264</id>
        <doi>10.14569/IJACSA.2025.0161264</doi>
        <lastModDate>2025-12-31T12:27:04.0100000+00:00</lastModDate>
        
        <creator>Edanur Erg&#252;n</creator>
        
        <creator>Serkan Eti</creator>
        
        <creator>Serhat Y&#252;ksel</creator>
        
        <creator>Hasan Din&#231;er</creator>
        
        <subject>Irrigation activities; water use; decision-making model; machine learning; MEREC; WASPAS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The aim of this study is to develop an innovative, multi-dimensional, and uncertain decision-making model that can identify the most appropriate alternative irrigation method for the efficient use of water resources in agriculture. In this context, the proposed model is based on the integrated use of spherical fuzzy sets, machine learning, MEREC, and WASPAS methods. The evaluations obtained from ten experts were converted into spherical fuzzy numbers, and the experts&#39; importance weights were objectively calculated using machine learning. Criteria weights were determined using the MEREC method, and alternatives were ranked using the WASPAS method. This hybrid approach both reduces expert subjectivity and objectively reflects the relationships between criteria. According to the findings, feasibility/technological suitability (0.152) emerged as the most important criterion, followed by environmental impacts (0.144). Among the alternatives, drip irrigation (2.226) was identified as the most suitable option for efficient use of water resources. This result demonstrates that modern, technology-based irrigation systems should be a priority in sustainable agricultural policies. This study&#39;s contribution to the literature is its ability to bring objectivity, transparency, and the ability to manage high uncertainty to decision-making processes in agricultural water management. The model offers both methodological innovation and a practical decision-support tool at the application level.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_64-A_Hybrid_Spherical_Fuzzy_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Solar Radiation Forecasting in a Tropical Region Using LSTM Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161263</link>
        <id>10.14569/IJACSA.2025.0161263</id>
        <doi>10.14569/IJACSA.2025.0161263</doi>
        <lastModDate>2025-12-31T12:27:03.9800000+00:00</lastModDate>
        
        <creator>Manuel Ospina</creator>
        
        <creator>Gabriel Chanch&#237;</creator>
        
        <creator>&#193;lvaro Realpe</creator>
        
        <subject>Deep learning; LSTM networks; renewable energy; solar radiation forecasting; time series prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Solar radiation forecasting is a key task for energy planning, grid management, and photovoltaic deployment, especially in tropical regions where weather variability reduces operational reliability. This work applies deep learning techniques to forecast hourly solar radiation in Mompox, Colombia, using Long Short-Term Memory (LSTM) neural networks. Three temporal windows were studied (5, 24, and 720 hours) to examine how sequence length affects prediction accuracy and model behavior. Hourly radiation data from 2021 to 2022 were used for training, and independent datasets from 2023 to 2024 were used for external validation to ensure long-term assessment and reproducibility. Most existing studies use short input windows designed for mid-latitude environments (5–24 hours), which do not capture multi-day tropical cloud persistence or sub-seasonal radiation variability. This gap limits forecasting accuracy and restricts practical use in tropical energy planning. To address this issue, this study introduces a long temporal input design that allows the model to learn month-scale variability more effectively. The three network configurations were trained under the same settings, allowing a direct comparison between short, daily, and long input memories. The LSTM-720 model performed best, achieving the lowest RMSE and the most stable predictions across all validation years, showing its ability to reconstruct both diurnal cycles and broader seasonal dynamics. Unlike most solar forecasting work, which treats window size as a tuning parameter, this study introduces a long-context LSTM design based on a 720-hour sequence. This allowed the model to learn intra-month atmospheric persistence—an essential tropical feature that short windows cannot represent—positioning the approach as a methodological contribution that expands the temporal learning paradigm rather than a configuration adjustment. Time-series comparisons revealed close agreement between measured and predicted radiation, particularly during stable climate periods. The proposed framework can support practical applications in solar plant design, renewable energy scheduling, and operational grid strategies in tropical regions. Future work will integrate satellite information and hybrid deep learning architectures to enhance spatial transferability and long-term forecasting accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_63-Deep_Learning_Approach_for_Solar_Radiation_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Choosing the Arena: A Systematic Review of Simulators for Deep Reinforcement Learning in Mobile Robot Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161262</link>
        <id>10.14569/IJACSA.2025.0161262</id>
        <doi>10.14569/IJACSA.2025.0161262</doi>
        <lastModDate>2025-12-31T12:27:03.9630000+00:00</lastModDate>
        
        <creator>Zakaria Haja</creator>
        
        <creator>Leila Kelmoua</creator>
        
        <creator>Ihababdelbasset Annaki</creator>
        
        <creator>Jamal Berrich</creator>
        
        <creator>Toumi Bouchentouf</creator>
        
        <subject>Simulator; mobile robot; Deep Reinforcement Learning; navigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study presents a formal Systematic Literature Review (SLR) to address a critical methodological question in robotics research: &quot;Which simulator is most suitable for a given Deep Reinforcement Learning (DRL) algorithm and mobile robot navigation task?&quot; The choice of a simulation environment profoundly impacts policy robustness, data efficiency, and sim-to-real transfer, yet the community has lacked an evidence-based guide for this decision. Following PRISMA guidelines, we methodically searched and analyzed 87 peer-reviewed studies published between January 2020 and June 2025 to map the contemporary research landscape. Our synthesis introduces a novel, theory-informed taxonomy that classifies simulators into three archetypes based on their empirical use. Archetype I, ROS-centric standards (e.g., Gazebo), are chosen for algorithmic novelty with low-dimensional sensor inputs. Archetype II, versatile platforms (e.g., CoppeliaSim), are favored for rapid prototyping. Archetype III, GPU-native engines (e.g., NVIDIA Isaac Sim), have emerged for large-scale, perception-heavy challenges, leveraging photorealism and parallelization to mitigate the perception gap and enable zero-shot transfer. This review reveals a paradigm shift towards data-driven methodologies and culminates in a prescriptive decision-making framework, transforming simulator selection from an incidental detail into a strategic choice.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_62-Choosing_the_Arena_A_Systematic_Review_of_Simulators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contact-Free Cardiovascular Monitoring Using AI-Driven Radar and Sensor Fusion on a Hybrid Edge-Cloud Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161261</link>
        <id>10.14569/IJACSA.2025.0161261</id>
        <doi>10.14569/IJACSA.2025.0161261</doi>
        <lastModDate>2025-12-31T12:27:03.9170000+00:00</lastModDate>
        
        <creator>K Ravindra Shetty</creator>
        
        <creator>Shanthala K V</creator>
        
        <creator>Nishanth A R</creator>
        
        <creator>Himani Jain</creator>
        
        <subject>Wireless sensing; radar signal processing; sensor fusion; contact-free monitoring; heart rate; heart rate variability; blood pressure; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Access to essential cardiovascular parameters such as heart rate (HR), heart rate variability (HRV), and blood pressure (BP) remains limited in low-income and remote populations, particularly among older adults in developing regions. Continuous, simultaneous, and contact-free monitoring of these parameters beyond close proximity can enhance early detection, screening, and management of cardiovascular and related conditions. This study presents a real-time, contact-free health monitoring system based on millimeter-wave (mmWave) FMCW radar, phase demodulation, and digital signal processing (DSP), integrated with multimodal sensor fusion and artificial intelligence (AI)-driven inference. Sub-millimeter chest wall displacements are captured using radar in-phase and quadrature (I/Q) signals to extract beat-to-beat physiological features, including ECG-correlated waveform components, HR, and HRV, while non-invasive blood pressure is indirectly estimated using a physics-informed adaptive learning framework. A custom Long Short-Term Memory (LSTM) neural network is employed for temporal smoothing and stabilization of HRV signals, improving robustness under real-world conditions. The system is implemented within a hybrid edge–cloud architecture, enabling on-device inference for real-time monitoring and cloud-based analytics for long-term analysis and integration. Clinical-like validation conducted on over 100 adult participants demonstrates measurement accuracy comparable to clinically accepted reference devices, and statistical analysis confirms the robustness and reliability of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_61-Contact_Free_Cardiovascular_Monitoring_Using_AI_Driven_Radar.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Low-Quality Deepfake Videos Using 3D Residual Vision Transformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161260</link>
        <id>10.14569/IJACSA.2025.0161260</id>
        <doi>10.14569/IJACSA.2025.0161260</doi>
        <lastModDate>2025-12-31T12:27:03.8870000+00:00</lastModDate>
        
        <creator>Amna Saga</creator>
        
        <creator>Lili N. A</creator>
        
        <creator>Fatimah Khalid</creator>
        
        <creator>Nor Fazlida Mohd Sani</creator>
        
        <creator>Hussna E. M. Abdalla</creator>
        
        <creator>Zulfahmi Syahputra</creator>
        
        <creator>Rian Farta Wijaya</creator>
        
        <subject>Deepfake detection; compressed deepfake videos; low-quality deepfakes; 3D convolutional neural networks; Video Vision Transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid evolution of deep generative models has facilitated the creation of &quot;Deepfakes&quot;, enabling the synthesis of hyper-realistic facial manipulations that threaten the trustworthiness of digital media. While forensic countermeasures have been developed to identify these forgeries, deepfake detection in real-world scenarios is severely hampered by video compression artifacts, which often obscure the subtle pixel-level traces exploited by conventional Convolutional Neural Networks (CNNs). This study introduces a robust detection framework designed specifically to withstand the aggressive compression inherent to social media dissemination. We present a hybrid 3D architecture that integrates the local spatiotemporal feature extraction capabilities of a 3D-ResNet-50 backbone with the global context modeling of a temporal Video Vision Transformer. Unlike frame-based or joint spatiotemporal attention approaches, the proposed model performs fully video-level reasoning and utilizes a factorized self-attention mechanism to decouple spatial and temporal modeling, thereby preserving stable temporal cues under compression while minimizing computational costs. Experimental results on the compressed protocols of the FaceForensics++ dataset as well as Celeb-DF-v2 and DFDC datasets, including cross-dataset generalization evaluation, validate the efficacy of this design, demonstrating that our method achieves superior detection accuracy and generalization compared to existing baselines, particularly on low-quality inputs.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_60-Detecting_Low_Quality_Deepfake_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Verification Unified Modeling Language Statechart Using Enhancement Common Modeling Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161259</link>
        <id>10.14569/IJACSA.2025.0161259</id>
        <doi>10.14569/IJACSA.2025.0161259</doi>
        <lastModDate>2025-12-31T12:27:03.8530000+00:00</lastModDate>
        
        <creator>Muhammad Amsyar Azwarrudin</creator>
        
        <creator>Pathiah Abdul Samat</creator>
        
        <creator>Norhayati Mohd Ali</creator>
        
        <creator>Novia Indriaty Admodisastro</creator>
        
        <subject>CML; E-CML; formal verification; model checkers; UML Statechart</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Modern systems are rapidly evolving and increasing in complexity to satisfy growing requirements. Such systems often incorporate multiple hierarchical statecharts within their behavior modeling diagram, which significantly complicates the verification process. To address this challenge, the Common Modeling Language (CML) was introduced as an intermediate modeling language for formal verification, serving as a bridge between Unified Modeling Language (UML) Statechart and the model checkers. However, CML supports modeling only a single hierarchical statechart, which limits its applicability to complex systems. This study introduces the Enhancement Common Modeling Language (E-CML), an extension of the CML, to support the verification of systems that incorporate multiple hierarchical statecharts. We introduce the group component in E-CML, comprising an initial state, a set of states, transitions, triggers, and a region, to formally differentiate the group components from superstates. We also propose new translation rules to map E-CML into Symbolic Model Verifier (SMV) syntax. E-CML operates through two main processes: transformation and translation. The transformation process transforms an XML Metadata Interchange (XMI) file into E-CML, while the translation process translates E-CML to an Input Symbolic Model Verifier (I-SMV) file. The system is verified using the SMV model checker, with formal properties specified in Computational Tree Logic (CTL) and represented in the I-SMV file. The results demonstrate that the behavior modeling diagram satisfies all formal properties, indicating that E-CML provides an effective framework for the verification of complex systems comprising multiple hierarchical statecharts.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_59-Formal_Verification_Unified_Modeling_Language_Statechart.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving YOLO11 Architecture for Reckless Driving Detection on the Road</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161258</link>
        <id>10.14569/IJACSA.2025.0161258</id>
        <doi>10.14569/IJACSA.2025.0161258</doi>
        <lastModDate>2025-12-31T12:27:03.8230000+00:00</lastModDate>
        
        <creator>Sutikno</creator>
        
        <creator>Aris Sugiharto</creator>
        
        <creator>Retno Kusumaningrum</creator>
        
        <subject>Reckless driving detection; improved YOLO11n-cls; added convolution blocks; added C3k2 blocks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Reckless driving behavior on the road can increase the risk of traffic accidents for drivers and other road users. Currently, supervision remains weak, particularly in direct supervision, due to the limited number of officers. This study developed an automated system to detect reckless drivers based on their road trajectories. This system comprised three subsystems: car detection, car tracking, and driving trajectory detection. In the driving trajectory detection subsystem, we proposed an improved YOLO11n-cls method developed from YOLO11n-cls by adding convolution and C3k2 blocks. The test results showed that the proposed model achieved an accuracy increase of 4.4% over YOLO11n-cls. The proposed model achieved an accuracy of 0.935 and an inference time of 0.5 ms for car trajectory classification. In addition, the proposed model achieved higher accuracy than all YOLO11 models (YOLO11n-cls, YOLO11s-cls, YOLO11m-cls, YOLO11l-cls, and YOLO11x-cls) and all YOLO12 models (YOLO12n-cls, YOLO12s-cls, YOLO12m-cls, YOLO12l-cls, and YOLO12x-cls). Therefore, the proposed model is better suited to support traffic law enforcement, especially the real-time detection of reckless drivers on highways.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_58-Improving_YOLO11_Architecture_for_Reckless_Driving_Detection_on_the_Road.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Denoising of Partial Discharge Using Absolute Difference Optimization Versus Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161257</link>
        <id>10.14569/IJACSA.2025.0161257</id>
        <doi>10.14569/IJACSA.2025.0161257</doi>
        <lastModDate>2025-12-31T12:27:03.7900000+00:00</lastModDate>
        
        <creator>Kui-Fern Chin</creator>
        
        <creator>Chang-Yii Chai</creator>
        
        <creator>Ismail Saad</creator>
        
        <creator>Yee-Ann Lee</creator>
        
        <subject>Partial discharge localization; adaptive denoising optimization; discrete wavelet transform; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Accurate partial discharge (PD) localization in medium-voltage (MV) power cables is essential for condition-based maintenance, yet it remains unreliable when PD pulses are masked by broadband noise and narrowband interference. The novelty of this work is a controlled denoiser-to-localization benchmarking framework that isolates the denoising front end, while keeping the downstream PD detection and localization backend fixed, allowing localization differences to be attributed solely to denoising decisions. Within this fixed-backend paradigm, an optimization-driven Adaptive Denoising Optimization (ADO) method is introduced as an adaptive discrete wavelet transform (DWT) front end that systematically selects the mother wavelet, decomposition level, and threshold parameters to preserve time-of-arrival (ToA) critical wavefront features rather than only maximizing noise suppression. ADO is evaluated against two learning-based denoisers, a multilayer artificial neural network (ANN) and a lightweight feedforward neural network (FNN), using MATLAB simulations of synthetic PD pulses corrupted by white Gaussian noise (WGN) and discrete spectral interference (DSI) over SNRs from 9.78 dB to -10.34 dB. Performance is quantified using execution time, percentage localization error (PE), median absolute localization error (MedAE), and F1 score. Results show that ADO delivers the most robust localization fidelity, maintaining near-zero PE above -6 dB, keeping PE below 0.3% at -10.34 dB, achieving sub-metre MedAE, and sustaining F1 close to 1.0 across noise levels. In contrast, FNN is the fastest option, reducing runtime by approximately 15% versus ANN and 27% versus ADO, highlighting a practical robustness-efficiency trade-off for real-time MV cable monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_57-Adaptive_Denoising_of_Partial_Discharge_Using_Absolute_Difference_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Arabic Biomedical Named Entity Recognition Using Transformer-Based Representations and CRF Sequence Labeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161256</link>
        <id>10.14569/IJACSA.2025.0161256</id>
        <doi>10.14569/IJACSA.2025.0161256</doi>
        <lastModDate>2025-12-31T12:27:03.7600000+00:00</lastModDate>
        
        <creator>Nassima Gannoune</creator>
        
        <creator>Abdellah Madani</creator>
        
        <creator>Mohamed Kissi</creator>
        
        <subject>Named entity recognition; electronic health records; natural language processing; CAMeLBERT; CRF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Electronic health records have witnessed tremendous growth in recent years. To make these documents useful for decision-making, high-performance natural language processing (NLP) systems are essential. Named entity recognition (NER) is a critical task for many biomedical NLP applications that contribute to improving patient care, drug discovery, and disease surveillance. However, despite its status as an official language in more than 22 countries, Arabic is largely neglected in this field. Only limited work has been done, and there are few well-annotated public datasets. This work tackles these issues by proposing an NER model capable of recognizing entities such as diseases, symptoms, and organs from biomedical Arabic text. To achieve this, an annotated dataset was first developed, followed by fine-tuning the CAMeLBERT model, a BERT-based model, in conjunction with a conditional random field (CRF) layer. The evaluation results indicate that the CAMeLBERT+CRF model achieves the best overall F1-score of 90%, surpassing other base models such as CAMeLBERT and AraBERT. This study demonstrates the effectiveness of the hybrid approach and underscores the importance of transfer learning techniques for low-resourced and morphologically rich languages like Arabic.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_56-Enhancing_Arabic_Biomedical_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Game-Based Learning Model for Basic Life Support Using First-Person Interactive Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161255</link>
        <id>10.14569/IJACSA.2025.0161255</id>
        <doi>10.14569/IJACSA.2025.0161255</doi>
        <lastModDate>2025-12-31T12:27:03.7300000+00:00</lastModDate>
        
        <creator>Nur Raidah Rahim</creator>
        
        <creator>Siti Aisyah Mohd Nasron</creator>
        
        <creator>Sazilah Salam</creator>
        
        <creator>Che Ku Nuraini Che Ku Mohd</creator>
        
        <creator>Wan Mohd Ya’akob Wan Bejuri</creator>
        
        <creator>Richki Hardi</creator>
        
        <creator>Nur Sri Syazana Rahim</creator>
        
        <subject>Basic life support; emergency; simulation; game-based learning; serious games</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Previous BLS and first-aid learning studies largely rely on traditional face-to-face training or low-fidelity digital approaches, which are often costly, time-consuming, and inaccessible to many learners, especially laypersons. Many serious games focus primarily on awareness and conceptual knowledge, rather than procedural mastery and real-time decision-making. In addition, most existing games lack high-fidelity first-person immersion, provide limited real-time feedback, and are not aligned with localized national medical protocols, reducing their realism and contextual relevance. To address these gaps, this study proposes the development of a 3D game-based learning model using first-person interactive simulation, designed to educate users on Basic Life Support (BLS) procedures in cardiac arrest scenarios. Unreal Engine 5.4 was utilized to create an immersive and realistic environment where players engage in critical emergency steps, real-time visual prompts and audio feedback, and decision-making under pressure, rather than passive content delivery. Importantly, it strictly follows the Ministry of Health Malaysia’s BLS guidelines, ensuring procedural accuracy and local relevance. This approach bridges the gap between theoretical knowledge and practical application, while providing a scalable, accessible, and engaging alternative to conventional BLS training. Through this educational serious game, players are empowered to gain confidence and practical understanding of life-saving procedures, ultimately contributing to greater public preparedness in real-world emergencies.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_55-A_Game_Based_Learning_Model_for_Basic_Life_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Dissolved Oxygen Classification Using Low-Cost IoT Sensors for Smart Aquaponic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161254</link>
        <id>10.14569/IJACSA.2025.0161254</id>
        <doi>10.14569/IJACSA.2025.0161254</doi>
        <lastModDate>2025-12-31T12:27:03.6970000+00:00</lastModDate>
        
        <creator>Supria</creator>
        
        <creator>Afis Julianto</creator>
        
        <creator>Wahyat</creator>
        
        <creator>Marzuarman</creator>
        
        <creator>M Nur Faizi</creator>
        
        <creator>Hardiyanto</creator>
        
        <subject>Aquaponic; dissolved oxygen; IoT; machine learning; XGBoost; low-cost sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Dissolved oxygen (DO) plays a vital role in maintaining balanced aquaponic ecosystems, yet conventional optical and galvanic DO sensors remain costly and impractical for low-budget deployments. However, most existing dissolved oxygen monitoring studies rely on costly sensing infrastructures, regression-oriented prediction approaches, or centralized processing schemes, which limit their applicability in small-scale and resource-constrained aquaculture settings. Furthermore, many previous works focus primarily on numerical prediction accuracy without explicitly addressing data imbalance issues or providing actionable classification outputs that can directly support real-time operational decisions at the pond level. This study proposes a machine learning–based approach for estimating DO levels using low-cost pH, temperature, and nitrogen sensors integrated with an IoT data acquisition system. A dataset comprising approximately 1,048,536 records was processed using feature engineering and class balancing techniques, followed by training an XGBoost classifier optimized through grid search. The model classified DO into three categories—Low (&lt;5 mg/L), Medium (5–7 mg/L), and Good (&gt;7 mg/L)—achieving 96.6% accuracy, outperforming baseline regression models including Linear Regression, Random Forest, and XGBoost Regressor. Feature importance analysis revealed temperature and the pH–temperature interaction as dominant predictors. The model was successfully deployed on a Raspberry Pi for real-time monitoring, offering a scalable and cost-effective alternative to high-end probes. The proposed framework demonstrates practical potential for smart aquaponic systems, enabling affordable, automated, and data-driven oxygen management.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_54-Machine_Learning_Based_Dissolved_Oxygen_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elaboration Context Graph: A System to Support Understanding the Contexts in Elaboration Processes of Research Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161253</link>
        <id>10.14569/IJACSA.2025.0161253</id>
        <doi>10.14569/IJACSA.2025.0161253</doi>
        <lastModDate>2025-12-31T12:27:03.6670000+00:00</lastModDate>
        
        <creator>Sho Onami</creator>
        
        <creator>Ryo Onuma</creator>
        
        <creator>Hiroki Nakayama</creator>
        
        <creator>Hiroaki Kaminaga</creator>
        
        <creator>Youzou Miyadera</creator>
        
        <creator>Shoichi Nakamura</creator>
        
        <subject>Elaboration contexts graph; elaboration work of research documents; elaboration contexts; understanding work circumstances and histories; screenshots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The elaboration of research documents involves repeatedly creating and editing documents while simultaneously performing tasks such as surveys, presentation of results, and discussion of research directions. Although indispensable for advancing research, such work is often challenging because it requires handling diverse documents. Effective execution therefore demands an accurate understanding of the elaboration contexts of research documents, including related artifacts, referenced documents, and the circumstances and history of past tasks, so that these can be applied in subsequent work. However, these contexts grow increasingly large and complex as research progresses, making them difficult to grasp and reducing task efficiency. This paper describes a method for generating an elaboration context graph by organizing documents involved in the elaboration process using work history data recorded on a PC. The graph visually represents the documents, screenshots capturing work scenes, and the relationships among them, thereby supporting the understanding of elaboration contexts. We further describe a system developed on the basis of this method. Finally, we report an experiment conducted with the prototype and discuss the system’s effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_53-Elaboration_Context_Graph_A_System_to_Support_Understanding_the_Contexts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Dermatological Image Classification Using Efficient Convolutional Neural Network Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161252</link>
        <id>10.14569/IJACSA.2025.0161252</id>
        <doi>10.14569/IJACSA.2025.0161252</doi>
        <lastModDate>2025-12-31T12:27:03.6370000+00:00</lastModDate>
        
        <creator>Khalil Ladrham</creator>
        
        <creator>Hicham Gueddah</creator>
        
        <subject>Convolutional neural networks; skin diseases; medical image; classification; Xception; clinical</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Skin diseases represent a global healthcare challenge because of their frequent occurrence and complex diagnosis. Despite clinical advances, accurately identifying dermatological lesions remains difficult due to significant intra-class variability, overlapping visual patterns, and reliance on clinician expertise. This study presents a complete overview of several state-of-the-art CNN architectures as applied to multiclass classification of skin diseases. It introduces the common skin diseases and discusses the fundamentals of deep learning for medical image analysis, then describes the dataset used in this work and briefly characterizes the two diagnostic groups identified for evaluation. A range of CNN models comprising GoogLeNet, Inception-V3, Inception-V4, ResNet-50, Xception, MobileNet, ResNeXt-50, AlexNet, VGG-16, and VGG-19 were trained and evaluated in terms of accuracy, loss, FLOPs, and epoch runtime. The experimental findings suggest that Xception consistently performs at the highest level, with an accuracy of more than 98% and low validation loss, whereas lightweight models such as MobileNet-V3 deliver competitive performance at minimal computational cost. These findings demonstrate the potential of modern CNN architectures to enable efficient and accurate dermatological diagnosis and offer guidance for selecting appropriate architectures for clinical and real-time deployment.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_52-Optimizing_Dermatological_Image_Classification_Using_Efficient_Convolutional_Neural_Network_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Architecture Refactoring: From Legacy Systems to Modern Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161251</link>
        <id>10.14569/IJACSA.2025.0161251</id>
        <doi>10.14569/IJACSA.2025.0161251</doi>
        <lastModDate>2025-12-31T12:27:03.6030000+00:00</lastModDate>
        
        <creator>Mohamed El BOUKHARI</creator>
        
        <creator>Nassim KHARMOUM</creator>
        
        <creator>Soumia ZITI</creator>
        
        <subject>Artificial intelligence; LLM; AI-driven refactoring; code-level refactoring; legacy systems; command and query responsibility segregation; CQRS; software architecture refactoring; software engineering; CodeSearchNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study explores the integration of artificial intelligence (AI), especially large language models (LLMs), into software engineering, particularly the architecture refactoring process, focusing on automated command-query classification for legacy systems transitioning to the Command Query Responsibility Segregation (CQRS) pattern. We present Airchitect, a modular .NET-based tool that orchestrates legacy code analysis, LLM-driven classification, CQRS artifact generation, and automated test creation. Based on the CodeLlama model, Airchitect achieved a 16x–40x reduction in classification time compared to expert manual methods while maintaining over 85% classification accuracy. A test case involving N-tier legacy classes demonstrated the model’s ability to decompose and modularize the methods into CQRS-aligned components. Despite these gains, the study highlights key limitations: the need for human validation in complex or ambiguous cases, dependence on high-quality labeled datasets, and variability of legacy patterns that challenge rule-based automation. The results suggest that LLMs, when embedded in structured tools like Airchitect, can significantly accelerate modernization workflows—provided they are used in tandem with expert oversight.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_51-AI_Powered_Architecture_Refactoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Design Optimization of Ventilation Duct Systems: A Graph-Informed Hybrid Evolutionary Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161250</link>
        <id>10.14569/IJACSA.2025.0161250</id>
        <doi>10.14569/IJACSA.2025.0161250</doi>
        <lastModDate>2025-12-31T12:27:03.5570000+00:00</lastModDate>
        
        <creator>Xiangming Liu</creator>
        
        <creator>Bin Liu</creator>
        
        <creator>Kunze Du</creator>
        
        <creator>Da Gao</creator>
        
        <creator>Nan Li</creator>
        
        <subject>Multi-objective optimization; NSGA-III; graph-informed optimization; HVAC design; heuristic search; domain knowledge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Optimizing silencer placement in Heating, Ventilation, and Air Conditioning (HVAC) systems is a complex multi-objective problem due to conflicting objectives (noise, energy, cost) and intricate topological constraints. Conventional Multi-Objective Evolutionary Algorithms (MOEAs) often exhibit inefficient convergence on such problems due to their reliance on random search strategies. Addressing this challenging HVAC design problem requires a more informed approach. This paper proposes the G-HNSGA-III (Graph-Informed Hybrid NSGA-III), a novel framework that enhances the NSGA-III algorithm by embedding domain-specific knowledge from the system&#39;s Directed Acyclic Graph (DAG) topology. This is achieved through two core components that leverage heuristic search: a Graph-Informed Initialization (GINI) strategy to provide a high-quality starting population and a Graph-Informed Local Search (GILS) module for post-processing refinement. The performance of G-HNSGA-III was comprehensively benchmarked against the baseline NSGA-III and six other established MOEAs on a complex data center test instance. The results demonstrate a marked superiority, with G-HNSGA-III achieving a 38.4% higher mean Hypervolume (HV) than the baseline NSGA-III and a 99.3% Set Coverage (SC) dominance over MOEA/D. The framework consistently converged to the best-known Pareto front, achieving a final mean Inverted Generational Distance (IGD) of 0.0030. These findings validate that the proposed graph-informed strategies effectively accelerate convergence and enable the discovery of a higher-quality Pareto front, providing superior and practically applicable solutions for complex engineering design problems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_50-Multi_Objective_Design_Optimization_of_Ventilation_Duct_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EfficientNet-Based Melanoma Classification with CBAM Attention and Monte Carlo Dropout for Robust Uncertainty Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161249</link>
        <id>10.14569/IJACSA.2025.0161249</id>
        <doi>10.14569/IJACSA.2025.0161249</doi>
        <lastModDate>2025-12-31T12:27:03.5270000+00:00</lastModDate>
        
        <creator>Soujenya Voggu</creator>
        
        <creator>Shadab Siddiqui</creator>
        
        <creator>Shahin Fatima</creator>
        
        <subject>Deep learning; CNN; accuracy; CBAM; EfficientNetB4</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Recent developments in deep learning have demonstrated tremendous potential for enhancing medical image classification tasks, particularly for the detection of skin malignancies like melanoma. However, guaranteeing high accuracy, reliability, and interpretability in real clinical settings remains a major challenge. This study addresses these issues by proposing a novel approach to melanoma detection that combines the Convolutional Block Attention Module (CBAM), binary focal loss, and Monte Carlo Dropout (MC Dropout) for uncertainty estimation. The CBAM attention module was inserted to help the network focus on important image features, and focal loss was applied to counter class imbalance and encourage learning from hard samples. MC Dropout was used to obtain uncertainty estimates at test time, yielding more reliable and interpretable predictions. The approach was implemented with a pre-trained deep CNN, EfficientNetB4, as the backbone and trained on a large melanoma dataset, which was split into training, validation, and test sets to assess performance. Model evaluation was performed using accuracy, precision, recall, F1-score, and AUC, yielding an accuracy of 0.95 and an AUC of 0.98. Furthermore, the uncertainty estimates supported clearer decision-making, and such interpretability is crucial for clinical use. These results highlight the necessity of combining attention mechanisms, task-specific loss terms, and uncertainty quantification for building accurate and interpretable AI in medical domains. The prototype has the potential to improve the detection of early-stage melanoma and offers useful guidance for future AI-based healthcare services.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_49-EfficientNet_Based_Melanoma_Classification_with_CBAM_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Rule-Based Detection Approach for ARP Flooding Malware in Office Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161248</link>
        <id>10.14569/IJACSA.2025.0161248</id>
        <doi>10.14569/IJACSA.2025.0161248</doi>
        <lastModDate>2025-12-31T12:27:03.4930000+00:00</lastModDate>
        
        <creator>Rizal Fathoni Aji</creator>
        
        <creator>Heri Kurniawan</creator>
        
        <creator>Nilamsari Putri Utami</creator>
        
        <subject>ARP flooding; cybersecurity detection; rule-based detection; lightweight intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Address Resolution Protocol (ARP) is a standard protocol used to map an IP address to its MAC address so that the network can deliver packets to their destination. Office networks, which typically have limited network resources, are vulnerable to ARP flooding attacks launched by malware. ARP flooding can be used by malware to create disruption and jam the network. This study presents a rule-based detection method, Time Density ARP Thresholding with Binding Consistency Monitoring (TDCM), to identify ARP flooding using a simple mechanism, making it suitable for use in networks with limited hardware. To detect flooding anomalies, the TDCM algorithm monitors the flow of ARP packets and the consistency of MAC-IP bindings in ARP packets. In this study, a series of experiments was conducted and repeated multiple times. On average, the experiments show that the system performs well under high-volume ARP attack conditions. This proposed method offers an alternative to machine learning techniques, making it more suitable for deployment in resource-constrained office networks. Future work will focus on improving detection in low-volume attack scenarios, validating performance in real-world environments, and implementing the method on devices with limited computing resources.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_48-A_Lightweight_Rule_Based_Detection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Related Multi-Task Allocation Scheme Based on Greedy Algorithm in Mobile Crowdsensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161247</link>
        <id>10.14569/IJACSA.2025.0161247</id>
        <doi>10.14569/IJACSA.2025.0161247</doi>
        <lastModDate>2025-12-31T12:27:03.4630000+00:00</lastModDate>
        
        <creator>Xia Zhuoyue</creator>
        
        <creator>Raja Kumar Murugesan</creator>
        
        <subject>Mobile crowdsensing; task allocation; fuzzy logic; greedy algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>With the popularity of mobile intelligent devices, mobile crowdsensing (MCS) networks based on wireless sensor networks and crowdsourcing technology have emerged. Research on MCS continues to grow, and it has been applied in many scenarios. As the data volume on MCS platforms increases, the number of tasks grows exponentially. Among these are tasks that belong to the same category, that is, tasks with correlation. If related tasks can be allocated to the same person for execution, the overhead is greatly reduced and the success probability of task allocation is improved. In this study, the spatio-temporal distribution of tasks and users is first predicted using fuzzy logic to divide spatio-temporal scenarios, and a more suitable multi-task allocation algorithm is selected accordingly. Then, taking task correlation into account, a greedy algorithm is used to allocate multiple tasks according to the different scenarios. The experimental results show that, compared with the benchmark scheme, the proposed related multi-task allocation scheme based on the greedy algorithm improves the task allocation completion rate by 25.2% and significantly improves the task allocation success rate in MCS.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_47-Related_Multi_Task_Allocation_Scheme_Based_on_Greedy_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bibliometric Analysis of Blockchain Applications in E-Commerce: Trends and Research Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161246</link>
        <id>10.14569/IJACSA.2025.0161246</id>
        <doi>10.14569/IJACSA.2025.0161246</doi>
        <lastModDate>2025-12-31T12:27:03.4330000+00:00</lastModDate>
        
        <creator>Nguyen Thi Phuong Giang</creator>
        
        <creator>Le Ngoc Son</creator>
        
        <creator>Thai Dong Tan</creator>
        
        <subject>Blockchain; e-commerce; bibliometric analysis; smart contracts; digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Blockchain technology has emerged as a transformative force within the e-commerce industry, offering significant potential to address longstanding issues such as data security, transaction transparency, and customer trust. Despite its growing relevance, the academic exploration of blockchain applications in e-commerce remains fragmented and lacks a cohesive research agenda. This study conducts a comprehensive bibliometric analysis to map the intellectual landscape of blockchain applications in e-commerce, identifying influential publications, key authors, prominent journals, and major thematic trends. Using data extracted from the Scopus database between 2014 and 2024, the study employs bibliometric tools such as VOSviewer and Biblioshiny for performance analysis and science mapping. The analysis reveals a steady increase in research interest, with dominant themes including trust, smart contracts, supply chain management, and secure payments. Furthermore, the findings indicate that most research is concentrated in technologically advanced countries, and collaborations among scholars remain limited. By interpreting these patterns, the study uncovers critical gaps in the literature and proposes future research directions focusing on consumer behavior, regulatory frameworks, cross-border challenges, and integration with emerging technologies like AI and IoT. The results contribute to a clearer understanding of the evolution of blockchain research in e-commerce and provide a foundation for academics and practitioners to develop more secure, efficient, and user-centric digital commerce systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_46-A_Bibliometric_Analysis_of_Blockchain_Applications_in_E_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Review of AI, IoT, and Big Data in Healthcare: Towards a Data-Centric Approach for Enhanced Data Quality and Contextual Adaptability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161245</link>
        <id>10.14569/IJACSA.2025.0161245</id>
        <doi>10.14569/IJACSA.2025.0161245</doi>
        <lastModDate>2025-12-31T12:27:03.4170000+00:00</lastModDate>
        
        <creator>Imane RAFIQ</creator>
        
        <creator>Zahi JARIR</creator>
        
        <creator>Hiba ASRI</creator>
        
        <subject>Data-Centric AI; IoT; Big Data Analytics; healthcare informatics; data quality; bias mitigation; privacy; predictive analytics; machine learning; disease prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The convergence of Artificial Intelligence (AI), the Internet of Things (IoT), and Big Data is revolutionizing healthcare by enabling predictive diagnostics, real-time monitoring, and personalized treatment through data-driven analytics and intelligent decision-making. Despite these advancements, the effectiveness of such systems is significantly hindered by poor data quality, including issues such as missing values, noise, bias, and inconsistencies. This study presents a systematic and comparative review of recent research at the intersection of AI, IoT, and Big Data in healthcare, highlighting critical gaps in data quality that undermine model performance and real-world reliability. In response, we introduce the Data-Centric AI (DCAI) paradigm as a promising approach focused on systematic data improvement rather than model complexity. We examine the application of the METRIC framework for assessing data quality dimensions such as completeness, consistency, fairness, and timeliness. Furthermore, we propose future research directions to improve scalability and trustworthiness in AI-driven healthcare, integrating advanced AI techniques such as generative AI and multimodal frameworks with DCAI principles for more ethical AI applications. This work serves as both a comparative synthesis of existing literature and a conceptual foundation for future experimental validation through a case study integrating context-aware data modeling and real-time decision support.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_45-A_Comparative_Review_of_AI_IoT_and_Big_Data_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Safety Helmet Wear Detection Algorithm Based on ASG-YOLOv8s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161244</link>
        <id>10.14569/IJACSA.2025.0161244</id>
        <doi>10.14569/IJACSA.2025.0161244</doi>
        <lastModDate>2025-12-31T12:27:03.3700000+00:00</lastModDate>
        
        <creator>Li-Zhen He</creator>
        
        <creator>Zhi-Sheng Wang</creator>
        
        <creator>Yi-Wei Duan</creator>
        
        <creator>Jin-Hai Sa</creator>
        
        <subject>YOLOv8; safety helmet wearing detection; slim-neck; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In the field of industrial safety, the standardised wearing of safety helmets by workers constitutes a core protective measure against head injuries. However, in industrial settings, multi-scale background interference arising from variations in monitoring distance renders traditional detection models ineffective at capturing the contour features of small-sized helmets. This study, therefore, proposes the ASG-YOLOv8s safety helmet detection network, based on YOLOv8s, to address the challenge of complex scene background interference. First, the AKC-SCAM unit is introduced within the YOLOv8 backbone network to replace certain standard convolutions. This module dynamically adjusts the sampling shape of convolutional kernels, enhancing the extraction of multi-scale defect features. Secondly, a cross-scale interaction architecture (Slim-neck) is constructed in the Neck section, employing GSConv instead of conventional convolutions. This combines with a cross-level feature pyramid to achieve cross-scale interaction between deep semantic features and shallow details. Finally, GAM attention is embedded before the multi-scale output for head detection, establishing a dual-stream attention mechanism that synergistically optimises feature response intensity for low-quality candidate boxes, while suppressing background noise interference. Experimental results demonstrate that the enhanced ASG-YOLOv8s achieves improvements of 2.54%, 2.94%, and 3.16% over the original model in Precision (P), Recall (R), and mean average precision (mAP), respectively, on the SHWD dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_44-Safety_Helmet_Wear_Detection_Algorithm_Based_on_ASG_YOLOv8s.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MTML 1.0: A Novel Interlingua Knowledge Representation Model for Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161243</link>
        <id>10.14569/IJACSA.2025.0161243</id>
        <doi>10.14569/IJACSA.2025.0161243</doi>
        <lastModDate>2025-12-31T12:27:03.3400000+00:00</lastModDate>
        
        <creator>M. A. S. T Goonatilleke</creator>
        
        <creator>B Hettige</creator>
        
        <creator>A. M. R. R Bandara</creator>
        
        <subject>Machine translation; knowledge representation; LLMs; rule-based approach; hybrid approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Machine translation is one of the major areas of both computational linguistics and artificial intelligence that employs computer algorithms to automatically translate text between different natural languages. At present, the advent of Large Language Models (LLMs) has revolutionized this field, marking a significant turning point in its evolution. Despite their impressive capabilities, LLMs still fall short of achieving human-like translation due to key limitations, namely lack of transparency, explainability, and interpretability, the production of non-deterministic outputs, and insufficient support for low-resource languages. To address these challenges, incorporating human-aided translation mechanisms that reflect how the human brain performs translation is effective. Therefore, from a computer science perspective, this motivates the development of a novel hybrid machine translation approach that integrates a rule-based approach with LLM-based methods. This study presents a novel rule-based interlingual knowledge representation model named MTML 1.0 that has been designed and implemented to accurately analyze source language input and systematically structure the resulting linguistic information to facilitate applications, including target language generation and question-answering systems. The MTML 1.0 system consists of four key modules, namely the preprocessing module, morphological analyzer module, syntax analyzer module, and semantic analyzer module. Furthermore, the system has been fully implemented as a web-based application using the Python programming language, with spaCy serving as the foundation for natural language processing tasks. Finally, the functionality of the system has been demonstrated through the development of a prototype question-answering system.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_43-MTML_1_0_A_Novel_Interlingua_Knowledge_Representation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Resilient Framework for Industry 5.0 WSNs: Enhancing Network Lifetime via a Lightweight Reputation Ledger and Hybrid AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161242</link>
        <id>10.14569/IJACSA.2025.0161242</id>
        <doi>10.14569/IJACSA.2025.0161242</doi>
        <lastModDate>2025-12-31T12:27:03.3070000+00:00</lastModDate>
        
        <creator>Padma Sree N</creator>
        
        <creator>Malini M Patil</creator>
        
        <subject>Wireless Sensor Networks (WSNs); Industry 5.0; anomaly detection; lightweight blockchain; trust management; network lifetime; Digital Twin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Wireless Sensor Networks (WSNs) play an increasingly important role in Industry 5.0 cyber–physical systems, where resilience, trust, and energy efficiency are essential under dynamic operating conditions. However, their limited resources, scattered deployment, and continuous operation make these networks highly susceptible to unusual behavior and cyberattacks. Such issues can compromise data quality, disrupt network reliability, and shorten the overall lifespan of the system. To address these challenges, this study examines WSN resilience as a combined problem of anomaly detection accuracy, fault isolation latency, and network lifetime under realistic fault and energy constraints. At the core of the framework is a Model Context Protocol (MCP), which combines a supervised LightGBM classifier with an unsupervised LSTM autoencoder to capture both event-driven and temporal anomalies in sensor data. Complementing this is a compact “Micro-Ledger” system that updates trust values for each node by monitoring behavior and using streamlined consensus rules. Together, they create a continuous feedback mechanism that isolates suspicious nodes while keeping energy consumption in check. The framework is evaluated using a set of resilience-oriented metrics, including fault detection latency, Mean Time To Failure (MTTF), reputation convergence behavior, and overall network lifetime. Experiments conducted in a Digital Twin simulation environment report an F1-score of 0.997, an 18.7% improvement in network lifetime, and a Micro-Ledger storage overhead of approximately 98 KB. While the current validation is simulation-based, the proposed design can be extended to physical deployments through adaptive trust weighting, cluster-head redundancy, and probation-based node reintegration.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_42-A_Resilient_Framework_for_Industry_5_0_WSNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bio-Inspired Behavior-Based Hybrid Framework for Ransomware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161241</link>
        <id>10.14569/IJACSA.2025.0161241</id>
        <doi>10.14569/IJACSA.2025.0161241</doi>
        <lastModDate>2025-12-31T12:27:03.2770000+00:00</lastModDate>
        
        <creator>Mohammed A. F. Salah</creator>
        
        <creator>Mohd Fadzli Marhusin</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <subject>Ransomware; Artificial Immune Systems (AIS); anomaly detection; Negative Selection Algorithm; Markov chain; Random Forest; hybrid framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Ransomware remains a critical and evolving cybersecurity threat, increasingly rendering traditional signature-based detection techniques ineffective. While modern machine learning models achieve high detection accuracy, they often operate as opaque “black boxes”, introducing a significant explainability gap that undermines analyst trust. In addition, behavior-based anomaly detection systems frequently suffer from high false-positive rates, limiting their operational viability. To address these challenges, this study adopts a Design Science Research Methodology to develop a novel, interpretable, multi-stage ransomware detection framework. The proposed architecture integrates three complementary components: a bio-inspired Negative Selection Algorithm from Artificial Immune Systems to filter benign behavioral patterns, a first-order Markov chain model to capture probabilistic deviations in execution sequences, and a Random Forest ensemble classifier to synthesize these signals for final decision-making. The framework is evaluated using a dual-pipeline experimental design on real-world ransomware and benign software samples, enabling controlled comparison between probabilistic and pattern-based behavioral modeling. Experimental results demonstrate that the proposed approach achieves high detection performance while maintaining a low false-positive rate and providing interpretable behavioral evidence. Overall, the framework offers a principled balance between detection effectiveness and interpretability, addressing key limitations of existing ransomware detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_41-A_Bio_Inspired_Behavior_Based_Hybrid_Framework_for_Ransomware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MetaEdge: A Meta-Learning-Based Auto-Selective Tool for Hardware-Aware Anomaly Detection on Edge Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161240</link>
        <id>10.14569/IJACSA.2025.0161240</id>
        <doi>10.14569/IJACSA.2025.0161240</doi>
        <lastModDate>2025-12-31T12:27:03.2300000+00:00</lastModDate>
        
        <creator>Nadia Rashid</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <creator>Fahad Alqurashi</creator>
        
        <creator>Turki Alghamdi</creator>
        
        <subject>Anomaly detection; edge computing; hardware-aware optimization; machine learning; meta-learning; model selection; ONNX</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The deployment of anomaly detection systems across heterogeneous edge computing environments faces significant challenges due to varying computational constraints and resource limitations. Existing approaches typically employ static model selection strategies that fail to adapt to diverse hardware capabilities, resulting in suboptimal detection performance and inefficient resource utilization. To address this, we propose MetaEdge, a novel hardware-aware framework that intelligently selects and deploys anomaly detection models based on specific device characteristics and hardware constraints. The MetaEdge framework introduces a systematic methodology that leverages meta-learning in the first stage to train a machine learning model to predict the top-k anomaly detectors by considering dataset characteristics. These candidates are then put through hardware-aware optimization that incorporates the hardware constraints of edge devices to ensure deployment feasibility. The framework evaluates 11 candidate anomaly detection algorithms spanning traditional machine learning and deep learning methods across four representative computing architectures ranging from ultra-constrained edge devices to GPU-accelerated cloud instances. Model conversion through ONNX standardization enables cross-platform deployment while maintaining detection capabilities. Experimental evaluation demonstrates the framework&#39;s effectiveness in achieving superior anomaly detection performance across diverse hardware configurations. The hardware-aware stage successfully identifies optimal model-hardware pairings, with the deployed models achieving up to 96.6% accuracy and 90.4% precision on edge devices. The framework demonstrates high accuracy in model selection decisions, with confidence scores providing meaningful hardware compatibility assessments that guide deployment. MetaEdge introduces a novel paradigm for hardware-aware anomaly detection in edge computing, demonstrating that meta-learning–driven model selection can deliver superior detection performance while adhering to stringent hardware constraints. By integrating automatic model selection with hardware-aware optimization, the proposed approach enables anomaly detection systems to intelligently adapt to diverse computing environments and maximize performance under resource constraints.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_40-MetaEdge_A_Meta_Learning_Based_Auto_Selective_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Quantum-Accelerated Urban Systems: Integrating Quantum Computing into Saudi Smart City Megaprojects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161239</link>
        <id>10.14569/IJACSA.2025.0161239</id>
        <doi>10.14569/IJACSA.2025.0161239</doi>
        <lastModDate>2025-12-31T12:27:03.1970000+00:00</lastModDate>
        
        <creator>Eissa Alreshidi</creator>
        
        <subject>Quantum Computing (QC); Hybrid Quantum-Classical Architecture (HQCA); quantum security; smart cities; NEOM; Saudi Vision 2030; combinatorial optimization; Urban Digital Twin (UDT); Quantum Machine Learning (QML); roadmap</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Quantum Computing (QC), rooted in the principles of superposition and entanglement, enables transformative computational capabilities that surpass classical systems, particularly in solving NP-hard combinatorial optimization, simulation, and machine learning problems. These capabilities are increasingly vital for smart cities, which depend on real-time data from Internet of Things (IoT) devices, Artificial Intelligence (AI), and Urban Digital Twins (UDTs) to orchestrate complex urban systems such as traffic, energy, logistics, and public safety. As global urbanization accelerates, the demand for hyper-efficient, secure, and adaptive infrastructure exceeds the limits of classical computation. This study employs a multi-pronged methodology that combines literature synthesis, algorithmic mapping, and strategic roadmap design. It investigates the strategic alignment between QC and the computational demands of next-generation urban environments, with a specific focus on Saudi Arabia’s greenfield megaprojects, including NEOM, The Line, and the Red Sea Project, within the Saudi Vision 2030 framework. The analysis systematically maps urban computational challenges to applicable quantum algorithm families, namely the Quantum Approximate Optimization Algorithm (QAOA), Variational Quantum Eigensolver (VQE), and Quantum Machine Learning (QML), and synthesizes the technical, organizational, financial, ethical, and regulatory prerequisites for national deployment. The core contribution is the development of a conceptual Hybrid Quantum-Classical Architecture (HQCA) and a methodologically grounded three-phase deployment roadmap, tailored to the Saudi context, that maps quantum technical readiness to policy and infrastructure milestones in Saudi Arabia. This framework positions Saudi Arabia to pioneer quantum-accelerated urban systems, enabling resilient infrastructure, sovereign digital capabilities, and global leadership in the emerging Quantum City paradigm.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_39-Towards_Quantum_Accelerated_Urban_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Ethical Acquisition of User-Data to Improve Recommendation Models’ Accuracy in Digital Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161238</link>
        <id>10.14569/IJACSA.2025.0161238</id>
        <doi>10.14569/IJACSA.2025.0161238</doi>
        <lastModDate>2025-12-31T12:27:03.1830000+00:00</lastModDate>
        
        <creator>Shaheer Hussain Qazi</creator>
        
        <creator>M.Batumalay</creator>
        
        <creator>Asheer Hussain</creator>
        
        <creator>Ali Abbas</creator>
        
        <subject>Data handling; data privacy; online tracking; data collection; user profiling; model accuracy; ethical AdTech; open standards; user autonomy; SDG 9; SDG 16; process innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The modern digital ecosystem has evolved into a pervasive, opaque system where platforms collect and infer personal data from nearly every online action (search queries, email content, browsing history, and app usage) without transparency. Justified as a means to deliver “relevant” content and ads, this approach undermines user privacy, introduces bias, and normalizes surveillance. Through a comprehensive literature review, this study sought to critically analyze the current landscape of user tracking, profiling, and privacy violations on online platforms, and to evaluate the impact of existing legal, technical, and platform-driven mechanisms such as GDPR, CCPA, ATT, and Privacy Sandbox in protecting user autonomy. The review found that current frameworks fall short because they are mostly policy-based and offer hard-to-access user controls. A further major flaw in existing systems is the assumption that all digital behavior reflects actual user preference, overlooking shared devices, accidental clicks, and non-user actions. To validate these insights, a survey of 572 privacy-aware participants was conducted, with nearly 71% preferring a proactive solution over passive regulatory frameworks and hard-to-navigate privacy menus/dashboards. Building on these findings, this study proposes a framework: a digital platform where individuals actively create and manage modular preference profiles, categorized by app type or content domain, which can be selectively and consensually shared with platforms in a standardized format. This concept facilitates high-quality, context-rich datasets for algorithms, enhancing personalization and the accuracy and performance of recommendation models. By shifting from forced surveillance to invited participation, this approach advances ethical data-sourcing, enhances algorithmic accuracy, and aligns with SDG 9 and SDG 16 by prioritizing responsible digital solutions, process innovation, and safeguarding user autonomy.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_38-Framework_for_Ethical_Acquisition_of_User_Data_to_Improve_Recommendation_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Business Process Outsourcing and Digitalization in Albania: Challenges, Opportunities, and Strategic Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161237</link>
        <id>10.14569/IJACSA.2025.0161237</id>
        <doi>10.14569/IJACSA.2025.0161237</doi>
        <lastModDate>2025-12-31T12:27:03.1500000+00:00</lastModDate>
        
        <creator>Nertila &#199;ika</creator>
        
        <subject>Business Process Outsourcing (BPO); digitalisation; artificial intelligence (AI); automation; PEST analysis; Western Balkans; Albania; strategic development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid expansion of Business Process Outsourcing (BPO) has transformed the global services economy, and Albania is emerging as a competitive nearshoring destination in the Western Balkans. This study examines the intersection of BPO and digitalisation in Albania, exploring how technological innovation, artificial intelligence (AI), and cloud-based automation are reshaping service delivery, labour productivity, and competitiveness. The study is explicitly framed as a desk-based policy and analytical study relying exclusively on secondary data from OECD, World Bank, IBM, and European Commission reports (2023–2025), without primary data collection. Findings indicate that Albania’s BPO sector benefits from low labour costs, multilingual human capital, and favourable fiscal policies, yet faces challenges related to technological capability, digital infrastructure, and talent retention. Through a PEST analytical framework, the study identifies the macro-environmental factors influencing BPO development and proposes strategic directions for enhancing digital readiness and regional integration. It further expands its comparative analysis to include other Western Balkan economies (Kosovo, North Macedonia, and Montenegro), providing a broader perspective on Albania’s position within the regional outsourcing ecosystem. This research contributes to the academic and policy discourse on digital transformation by presenting an integrated model aligning BPO growth with sustainable innovation and regional competitiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_37-Business_Process_Outsourcing_and_Digitalization_in_Albania.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Organizational Information Systems Through Explainable Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161236</link>
        <id>10.14569/IJACSA.2025.0161236</id>
        <doi>10.14569/IJACSA.2025.0161236</doi>
        <lastModDate>2025-12-31T12:27:03.1200000+00:00</lastModDate>
        
        <creator>Kian Jazayeri</creator>
        
        <subject>Artificial intelligence; Human Resource Analytics; Explainable Artificial Intelligence; Decision-Support Systems; workplace perceptions; Clustering Analysis; Decent Work and Economic Growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study examines Workplace Perceptions among Finnish employees through the application of Artificial Intelligence within the domain of Human Resource Analytics. An integrated analytical framework combining Clustering Analysis, supervised classification, and Explainable Artificial Intelligence is proposed to uncover and interpret latent employee perception profiles. Using 23 perception-related indicators from the Finnish Working Life Barometer 2022, K-means clustering identified two distinct employee groups: one characterized by consistently positive evaluations of fairness, leadership, well-being, and motivation, and another reflecting systematically negative workplace perceptions. A LightGBM model was subsequently employed to predict cluster membership based on demographic and occupational variables, and SHapley Additive exPlanations (SHAP) were used to provide transparent global and local interpretations of the predictive outcomes. The results show that employment duration, age, industry affiliation, gender, and socioeconomic status are the most influential determinants of cluster membership. By embedding Explainable Artificial Intelligence into Human Resource Analytics, the study demonstrates how employee perception data can be transformed into interpretable knowledge that supports organizational Decision-Support Systems. The proposed framework advances data-driven and transparent HR decision-making and contributes to the United Nations Sustainable Development Goal 8, Decent Work and Economic Growth, by identifying structural disparities in employee experience and enabling more equitable and inclusive workplace interventions.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_36-Enhancing_Organizational_Information_Systems_Through_Explainable_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RFM–K-OPT Based Machine Learning Framework for Customer Segmentation and Behavioral Profiling in Direct Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161235</link>
        <id>10.14569/IJACSA.2025.0161235</id>
        <doi>10.14569/IJACSA.2025.0161235</doi>
        <lastModDate>2025-12-31T12:27:03.0900000+00:00</lastModDate>
        
        <creator>Khadija Mehrez</creator>
        
        <subject>Customer segmentation; behavioral profiling; clustering optimization; predictive marketing; data-driven decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Customer segmentation is an essential element of modern marketing analytics, helping companies recognize, understand, and target customers based on their behavioral and transactional attributes. Conventional methods based on Recency, Frequency, and Monetary (RFM) analysis or on simple unsupervised clustering algorithms such as K-Means are widely used, but they are typically limited by sensitivity to centroid initialization, low cluster separability, and poor interpretability. These problems produce unstable segmentation results and limit the reliability of data-driven marketing decisions. To address these concerns, this study proposes a hybrid model, the RFM K-Means Optimization Technique (RFM–K-OPT), which combines RFM analytics, K-Means clustering, and an iterative centroid optimization unit. The proposed framework improves cluster compactness, stability, and interpretability through statistical computation and refinement of centroid positioning. The model is implemented in Python and tested on publicly available customer transaction data. Experimental results show improved clustering quality, with a Silhouette Coefficient of 0.83, a Davies-Bouldin Index of 0.31, a Calinski-Harabasz Index of 563, a clustering purity of 94.2%, and an execution time of 5.4 seconds. The results suggest that RFM–K-OPT is a useful tool that produces credible and explainable customer segments, supporting effective behavioral profiling and sound decision-making in direct marketing.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_35-RFM_K_OPT_Based_Machine_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Approaches to Green Strategy Formulation with a Novel Hybrid AI-Spherical Fuzzy Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161234</link>
        <id>10.14569/IJACSA.2025.0161234</id>
        <doi>10.14569/IJACSA.2025.0161234</doi>
        <lastModDate>2025-12-31T12:27:03.0570000+00:00</lastModDate>
        
        <creator>Yasar G&#246;kalp</creator>
        
        <creator>Serkan Eti</creator>
        
        <creator>Halil Yorulmaz</creator>
        
        <creator>Serhat Y&#252;ksel</creator>
        
        <creator>Hasan Din&#231;er</creator>
        
        <subject>Artificial intelligence; fuzzy decision-making; spherical fuzzy sets; ARAS; Entropy; green strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study aims to establish prioritized strategies for businesses to adopt green strategies. In this framework, literature-based criteria are analyzed through a three-stage model. In the first stage of the analysis, an artificial intelligence (AI)-based decision matrix is created. In the second stage, factors affecting green business strategies are weighted by the Spherical Fuzzy (SF) Entropy method. In the last stage, the strategies are ranked using the SF ARAS method. The novelty of this study is the integration of AI with SF numbers. Expert opinions can be evaluated by AI with different coefficients according to the knowledge level and experience of the experts. The AI-based decision matrix enables expert weights to differ according to factors such as experience. The findings show that the most important criterion is cost efficiency (weight: 0.2219). According to the analysis results, investments in clean energy projects have a positive impact on this process (Ki: 0.9799).</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_34-Innovative_Approaches_to_Green_Strategy_Formulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporal-Cross-Modal Intelligence for Detecting Fraudulent Crowdfunding Campaigns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161233</link>
        <id>10.14569/IJACSA.2025.0161233</id>
        <doi>10.14569/IJACSA.2025.0161233</doi>
        <lastModDate>2025-12-31T12:27:03.0270000+00:00</lastModDate>
        
        <creator>Lakshmi B S</creator>
        
        <creator>Rekha K S</creator>
        
        <subject>Crowdfunding fraud detection; multimodal learning; temporal behavior modeling; cross-modal consistency analysis; blockchain-based verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Fraud on reward-based crowdfunding platforms has become a multimodal and temporally dynamic threat, and conventional text-only or snapshot-based detection methods are ineffective against more sophisticated deceptive campaigns. This study proposes a Temporal Dynamics Aware Multi-Model Fraud Detection Framework (TDMM-FDF) that simultaneously models linguistic indicators, visual discrepancies, and temporal behavioral changes. The framework introduces three key innovations: 1) HM4, a Hidden Method-of-Moments Markov model for capturing long-range latent transitions across campaign updates; 2) Polynomial Expansion Canonical Correlation Analysis (PECCA) for quantifying nonlinear semantic discrepancies between textual narratives and associated images; and 3) a Frequency-Gated GRU (FG-GRU), which separates recurrent activations into low-frequency (trend) and high-frequency (anomaly) components to achieve higher sensitivity to abrupt fraudulent behaviors. Extensive experiments on a real Kickstarter dataset show that the proposed framework significantly outperforms classical machine learning models, sequence encoders, and transformer baselines, achieving 96.4% accuracy, good calibration (ECE = 0.06), and a high ROC-AUC. Ablation studies confirm the complementary roles of all modules, and qualitative analyses reveal precise semantic-visual discrepancies and temporal anomalies in fraudulent campaigns.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_33-Temporal_Cross_Modal_Intelligence_for_Detecting_Fraudulent_Crowdfunding_Campaigns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linking Leadership Styles to Corporate ESG Performance: A Novel Sierpinski Triangle Fuzzy Decision-Making Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161232</link>
        <id>10.14569/IJACSA.2025.0161232</id>
        <doi>10.14569/IJACSA.2025.0161232</doi>
        <lastModDate>2025-12-31T12:27:02.9930000+00:00</lastModDate>
        
        <creator>Serkan Eti</creator>
        
        <creator>&#199;agla &#214;zgen Safak</creator>
        
        <creator>Serhat Y&#252;ksel</creator>
        
        <creator>Hasan Din&#231;er</creator>
        
        <subject>Leadership approaches; ESG performance; z-NIDM; CIMAS; RAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Existing research generally addresses the factors affecting ESG performance at a general level, but fails to examine the relative impact of leadership approaches on this performance with a holistic decision-making model. This deficiency makes it difficult for businesses to align their sustainability strategies with the right leadership styles and creates uncertainty in achieving their ESG goals. In this context, the aim of this study is to determine the most appropriate leadership style for improving ESG performance and to prioritize the criteria influencing this selection with an integrated approach. To address this gap in the literature, the study proposes a new multi-criteria decision-making model. The model utilizes a combination of the Z-score-based normalized ideal distance method (z-NIDM), CIMAS, RAM, and the innovative Sierpinski triangle fuzzy sets. According to the analysis results, the most important criterion for improving ESG performance is promoting green innovation, with a weight of 0.108, followed by resource efficiency, with a weight of 0.105. The most appropriate leadership style was determined to be ethical leadership, with a weight of 1.4841. These findings suggest that, to achieve their sustainability goals, businesses must adopt ethical management approaches, increase investments in green innovation, and make resource efficiency a strategic priority. This study offers a unique contribution by introducing a new fuzzy set approach to the literature, analyzing the relationship between ESG and leadership with an integrated decision-making model, and proposing a methodologically robust framework.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_32-Linking_Leadership_Styles_to_Corporate_ESG_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Climate Change on Animal Diseases Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161231</link>
        <id>10.14569/IJACSA.2025.0161231</id>
        <doi>10.14569/IJACSA.2025.0161231</doi>
        <lastModDate>2025-12-31T12:27:02.9470000+00:00</lastModDate>
        
        <creator>Gehad K. Hussien</creator>
        
        <creator>Mohamed H. Khafagy</creator>
        
        <creator>Hussam M. Elbehiery</creator>
        
        <subject>Climate change; environmental health; convolutional neural network (CNN); animal diseases; graphical user interface (GUI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid pace of climate change has altered the distribution of animal diseases, increased their frequency, and dispersed them over a larger geographic area. Rising temperatures, fluctuating humidity, and erratic rainfall patterns have increased the risk of illness in cows, and these changes have facilitated the spread of diseases and their vectors. As a result, timely and accurate identification of these illnesses has become crucial for both food security and sustainable animal health management. To detect and classify animal diseases from visual data, this study proposes a diagnostic framework that utilizes machine learning approaches, focusing on convolutional neural networks (CNNs) in conjunction with classification models including ResNet, YOLOv5, and AltCLIP. By learning to distinguish between healthy and sick animals, the model enables prompt identification and treatment of sick animals. By merging disease detection data with climate parameters, we compare models to identify the best-performing one and use it to build advanced disease detection tools that flag potential risks. The results show that machine learning-based diagnosis can improve the accuracy and efficiency of disease detection while also providing valuable new insights for climate adaptation strategies in cattle management. The optimal model is deployed behind a graphical user interface (GUI) that displays environmental risk scores, diagnostic data, and recommended actions, such as monitoring the situation, seeking immediate veterinary care, or verifying the animal&#39;s health.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_31-Impact_of_Climate_Change_on_Animal_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vegetation Identification in Hyperspectral Images of Cartagena City Using the Haar Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161230</link>
        <id>10.14569/IJACSA.2025.0161230</id>
        <doi>10.14569/IJACSA.2025.0161230</doi>
        <lastModDate>2025-12-31T12:27:02.9330000+00:00</lastModDate>
        
        <creator>Gabriel El&#237;as Chanch&#237; Golondrino</creator>
        
        <creator>Manuel Alejandro Ospina Alarc&#243;n</creator>
        
        <creator>Manuel Saba</creator>
        
        <subject>Vegetation detection; hyperspectral images; remote sensing; wavelet transform; earth observation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Hyperspectral imaging is one of the most widespread remote sensing techniques in earth observation, corresponding to images with high spectral and spatial resolution that enable material detection through the identification of their spectral signature. A key challenge in hyperspectral imaging is the definition of novel and efficient computational methods that contribute to reducing computational cost while maintaining the efficacy and precision in material detection provided by methods such as correlation or machine learning. This study aims to propose a new efficient method for vegetation detection in hyperspectral images based on the similarity between the approximate and detailed components of the Haar wavelet transform of the vegetation spectral signature, with respect to the components of the pixel to be classified in the image. For the development of the present investigation, five methodological phases were defined: P1. Selection of sample pixels for vegetation and other materials; P2. Determination of the characteristic vegetation pixel; P3. Implementation and evaluation of the method with vegetation and non-vegetation pixels; P4. Deployment of the method on the reference hyperspectral image; P5. Comparative evaluation of the proposed method against the correlation method. As a result of this research, a novel computational method for vegetation identification in hyperspectral images was proposed, leveraging the similarity of wavelet transform components. This method demonstrated comparable detection efficacy to the correlation method and proved to be approximately 5% more efficient in the detection process. The proposed method can be suitably integrated into hyperspectral image-based environmental monitoring systems, particularly where images are of considerable size and more efficient methods are required.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_30-Vegetation_Identification_in_Hyperspectral_Images_of_Cartagena_City.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accessible Application Prototype for Improving Digital Reading in People with Dyslexia: User-Centered Design and Usability Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161229</link>
        <id>10.14569/IJACSA.2025.0161229</id>
        <doi>10.14569/IJACSA.2025.0161229</doi>
        <lastModDate>2025-12-31T12:27:02.9170000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Brian Andre&#233; Meneses-Claudio</creator>
        
        <creator>Carlos Fidel Ponce S&#225;nchez</creator>
        
        <creator>Jehovanni F. Velarde-Molina</creator>
        
        <subject>Dyslexia; digital accessibility; digital reading; usability; user-centered design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Digital reading remains a challenge for individuals with dyslexia due to the limited availability of accessible tools tailored to their cognitive and perceptual needs. Although many digital reading applications offer basic personalization options, they often lack integrated mechanisms to support reading comprehension and user autonomy. This study presents the design and usability validation of an accessible mobile application prototype aimed at improving the digital reading experience for people with dyslexia using a user-centered design approach. The research followed a design thinking methodology that included a systematic literature review, analysis of documented user needs from peer-reviewed studies and dyslexia support communities, personas development based on published user profiles, and a competitive analysis of existing accessible reading applications. The design process focused on identifying key pain points and developing interactive prototypes that integrate advanced text personalization, visual and auditory supports, text difficulty analysis, and intelligent accessibility suggestions. The proposed prototype enables users to customize font type and size, color, and spacing; activate text-to-speech functionality; highlight words or lines; upload and edit texts in multiple formats; and receive automated recommendations to enhance accessibility. Usability and accessibility were evaluated through expert heuristic assessment using established usability principles and WCAG 2.1 guidelines. Results indicate strong adherence to usability standards, with experts highlighting effective feature integration and identifying the need for additional instructional support for advanced functions. Overall, the proposed application addresses key gaps in digital reading accessibility and provides a foundation for future empirical validation with end users.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_29-Accessible_Application_Prototype_for_Improving_Digital_Reading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-View Classification Method for Distribution Network Towers Based on Improved EfficientNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161228</link>
        <id>10.14569/IJACSA.2025.0161228</id>
        <doi>10.14569/IJACSA.2025.0161228</doi>
        <lastModDate>2025-12-31T12:27:02.9000000+00:00</lastModDate>
        
        <creator>Gao Liu</creator>
        
        <creator>Changyu Li</creator>
        
        <creator>Junsheng Lin</creator>
        
        <creator>Xinzhe Weng</creator>
        
        <creator>Qianming Wang</creator>
        
        <creator>Zhenbing Zhao</creator>
        
        <subject>Multi-view classification of power towers; mask-guided feature fusion; BiRefNet; multi-scale feature fusion; convolutional block attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>View recognition of distribution network towers is a key technology in UAV intelligent inspection. To address the low accuracy of existing deep learning methods under complex background interference, this paper proposes a tower view classification method based on EfficientNet that integrates foreground perception, multi-scale feature fusion, and dual-dimensional attention. First, a Mask-Guided Fusion Module (MGFM) is designed to extract tower foreground masks using the BiRefNet network, enhancing foreground representation and suppressing background interference through a two-stage fusion strategy. Second, a Multi-Scale Attention Aggregation Module (MSAA) is constructed to achieve efficient cross-layer feature fusion through parallel multi-scale convolution, fully integrating shallow details and deep semantic information. Finally, the Convolutional Block Attention Module (CBAM) is introduced to adaptively strengthen view-discriminative features through channel and spatial dual-attention mechanisms, significantly improving the recognition capability for small-sample categories such as top views. Ablation experiments on a self-built multi-view tower dataset show that the proposed method can effectively distinguish different views such as top view, front view, and side view, with significantly improved accuracy compared to other deep learning models, providing technical support for intelligent inspection of transmission lines.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_28-A_Multi_View_Classification_Method_for_Distribution_Network_Towers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Quality Evaluation of Panoramic Dental Radiographs Using a Domain-Adapted Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161227</link>
        <id>10.14569/IJACSA.2025.0161227</id>
        <doi>10.14569/IJACSA.2025.0161227</doi>
        <lastModDate>2025-12-31T12:27:02.8700000+00:00</lastModDate>
        
        <creator>Nur Nafiiyah</creator>
        
        <creator>Rifky Aisyatul Faroh</creator>
        
        <creator>Eha Renwi Astuti</creator>
        
        <creator>Rini Widyaningrum</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Kang-Hyun Jo</creator>
        
        <creator>Alhidayati Asymal</creator>
        
        <creator>Youan Nhareswary Dwike Prasetya</creator>
        
        <subject>Batch Normalization; image quality; panoramic radiograph; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Assessing the quality of panoramic dental radiographs is essential to ensure diagnostic accuracy and patient safety. However, existing CNN-based approaches for radiograph quality assessment often emphasize architectural comparisons, while providing limited discussion on training stability and generalization, particularly when applied to relatively small and heterogeneous datasets. To address this gap, this study proposes a transfer learning-based framework that integrates Global Average Pooling (GAP) and Batch Normalization (BN) to enhance feature robustness and reduce overfitting in panoramic dental radiograph quality classification. Three pretrained CNN architectures (ResNet50, VGG16, and VGG19) were evaluated using panoramic radiographs collected from two tertiary hospitals in Indonesia. Experimental results using k-fold cross-validation indicate that the proposed GAP+BN refinement improves classification consistency across models, with VGG16 demonstrating the most stable and reliable performance. These findings suggest that domain-adapted transfer learning with appropriate feature aggregation and normalization can support the development of automated and clinically reliable quality assurance systems for panoramic dental imaging.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_27-Automated_Quality_Evaluation_of_Panoramic_Dental_Radiographs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Retrieval-Augmented Pedagogical Assistant (RAPA): A Methodology for Enhancing Critical Thinking and Equity in AI-Augmented Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161226</link>
        <id>10.14569/IJACSA.2025.0161226</id>
        <doi>10.14569/IJACSA.2025.0161226</doi>
        <lastModDate>2025-12-31T12:27:02.8400000+00:00</lastModDate>
        
        <creator>Shohel Pramanik</creator>
        
        <creator>Mohd Heikal Bin Husin</creator>
        
        <subject>RAG; AI literacy; critical thinking; equitable education; professional development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study presents the Retrieval-Augmented Pedagogical Assistant (RAPA) methodology, an integrated framework designed to overcome the core limitations of general Large Language Models (LLMs)—specifically factual instability (hallucination) and static knowledge bases—by deploying a specialized, institutional Retrieval-Augmented Generation (RAG) architecture. The methodology addresses three critical challenges to the responsible integration of AI in higher education. Firstly, the framework ensures data sovereignty and sustainable deployment by mandating a comprehensive Total Cost of Ownership (TCO) analysis. This analysis validates the strategic necessity of local RAG hosting and of leveraging computational efficiencies, such as Parameter-Efficient Fine-Tuning (PEFT) and PROXIMITY caching, to ensure a cost-effective solution that strictly complies with FERPA and GDPR data protection mandates and mitigates security risks associated with data leakage. Secondly, the framework ensures the equitable integration of AI literacy across disciplines with varying technological resources, particularly in the Humanities and Vocational Education and Training (VET). This is achieved by minimizing technical prerequisites and institutionalizing continuous Professional Development (PD) through the Dialogic Video Cycle (DVC), which trains faculty in Prompt Engineering to embed individualized pedagogical rules and ethical constraints into the RAPA’s architecture. Finally, specific measures are implemented to evaluate the development of Critical Thinking (CT). RAPA outputs are architecturally constrained to include transparent Chain-of-Thought (CoT) reasoning and verifiable source citations. Student Critical AI Analysis Assignments require students to critique the AI&#39;s synthesis, identifying inaccuracies, biases, or limitations. The effectiveness of this assessment is quantified using a quasi-experimental design and technical RAGAS metrics, such as Faithfulness and Context Precision, ensuring a verifiable shift from passive knowledge consumption to active, informed critique. Key findings from the preliminary architectural validation indicate that integrating Proximity-LSH caching reduced database retrieval calls by 77.2% and retrieval latency by approximately 72.5%, while maintaining high retrieval recall, addressing the scalability bottleneck inherent in high-volume educational deployments. Furthermore, the application of Robust Fine-Tuning (RbFT) demonstrated a marked improvement in the system&#39;s resilience to noisy educational data, preventing performance degradation where standard RAG models typically fail when exposed to irrelevant or counterfactual document chunks. These technical optimizations directly support the pedagogical objective by ensuring that the AI assistant remains responsive and factually grounded.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_26-The_Retrieval_Augmented_Pedagogical_Assistant_RAPA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ubiquitous Computing Framework for Reducing Ambiguity in the Lanna Thai Dialect Using Transformer Models and Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161225</link>
        <id>10.14569/IJACSA.2025.0161225</id>
        <doi>10.14569/IJACSA.2025.0161225</doi>
        <lastModDate>2025-12-31T12:27:02.8230000+00:00</lastModDate>
        
        <creator>Wongpanya S. Nuankaew</creator>
        
        <creator>Pathapol Jomsawan</creator>
        
        <creator>Pratya Nuankaew</creator>
        
        <subject>Ambiguities in Lanna Thai vocabulary; Lanna Thai vocabulary; pervasive computing; Speech Transformer; Thai speech recognition; ubiquitous computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This research focuses on developing a speech-recognition model that can better handle the unique sounds and vocabulary of the Lanna Thai dialect while supporting translation between Lanna and Standard Thai. A dataset of spoken Lanna Thai was collected from native and fluent speakers between 2023 and 2025, refined from 200 selected words to 100 terms that were consistently difficult to interpret. This dataset was used to train several supervised models, including HuBERT, Wav2Vec2 (baseTH), Wav2Vec2, and WavLM, with HuBERT showing the strongest overall performance and Wav2Vec2 (baseTH) offering a balanced vocabulary response. A companion web application was also created to convert Lanna speech to text and provide two-way translation, improving access to local language resources despite some limitations with rare terms and diverse accents. Early user feedback indicates that the system is practical and helpful, supporting the broader goal of preserving Lanna Thai in a modern digital environment.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_25-Ubiquitous_Computing_Framework_for_Reducing_Ambiguity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Fuzzy Logic System for Real-Time Text Difficulty Assessment in Mobile Reading Apps for Dyslexia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161224</link>
        <id>10.14569/IJACSA.2025.0161224</id>
        <doi>10.14569/IJACSA.2025.0161224</doi>
        <lastModDate>2025-12-31T12:27:02.7900000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Brian Andre&#233; Meneses-Claudio</creator>
        
        <creator>Carlos Fidel Ponce S&#225;nchez</creator>
        
        <creator>Jehovanni F. Velarde-Molina</creator>
        
        <subject>Fuzzy logic systems; mobile web applications; dyslexia support technology; automated text analysis; accessibility engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Automated text difficulty assessment in mobile reading applications remains an underexplored challenge for dyslexia support systems. This study presents the development and validation of an intelligent fuzzy logic system engineered for real-time text complexity analysis in mobile web environments. Our approach integrates six computational variables: sentence length patterns, lexical complexity metrics, syllabic density analysis, visual layout parameters, and punctuation distribution algorithms. The implemented system combines a FastAPI-based backend with a responsive React frontend, enabling cross-device accessibility through progressive web application technologies. Technical validation demonstrates 94.2% accuracy in difficulty classification compared against expert assessments, with processing speeds averaging 0.3 seconds per text analysis. Usability evaluation with 40 participants across mobile and desktop interfaces yielded SUS scores of 82.6, while 85% expressed frequent usage intention. The mobile web architecture achieves 98% device coverage with WCAG 2.1 AA compliance standards. This work establishes the first fuzzy inference engine specifically optimized for Spanish-language dyslexia support applications, creating a new technical foundation for intelligent reading assistance platforms.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_24-A_Novel_Fuzzy_Logic_System_for_Real_Time_Text_Difficulty_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agentic AI as the Orchestrator of Mobile Ecosystems: A Review of the Trade-off Between Performance and Drawbacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161223</link>
        <id>10.14569/IJACSA.2025.0161223</id>
        <doi>10.14569/IJACSA.2025.0161223</doi>
        <lastModDate>2025-12-31T12:27:02.7770000+00:00</lastModDate>
        
        <creator>Ayat Aljarrah</creator>
        
        <creator>Mustafa Ababneh</creator>
        
        <subject>Agentic AI; orchestrator; mobile ecosystems; on-device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This systematic review explores the transformational role of agentic artificial intelligence (AI) as an orchestrator in mobile ecosystems. Agentic AI systems proactively plan, execute, and adapt across applications, devices, and services, unlike traditional and generative AI. These systems offer autonomous, context-aware coordination by integrating reasoning engines, tool orchestration, memory, retrieval-augmented generation (RAG), and safety layers. The review examines architectural requirements for mobile deployment, including on-device processing, resource-aware execution, and cross-platform synchronization. It stresses implementation targets and achievements through 2025, automation levels across key capabilities, and the impact of agentic orchestration on mobile ecosystem challenges. The findings highlight agentic AI’s potential to optimize performance, privacy, and user experience simultaneously. Future directions include edge-native architectures, human-in-the-loop frameworks, and multi-agent interoperability standards. This study provides a comprehensive roadmap for advancing agentic AI as a foundational layer in next-generation mobile computing.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_23-Agentic_AI_as_the_Orchestrator_of_Mobile_Ecosystems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human-Centered Behavioral Analysis of Window Operation Using AI-Based Skeletal Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161222</link>
        <id>10.14569/IJACSA.2025.0161222</id>
        <doi>10.14569/IJACSA.2025.0161222</doi>
        <lastModDate>2025-12-31T12:27:02.7430000+00:00</lastModDate>
        
        <creator>Jewon Oh</creator>
        
        <creator>Daisuke Sumiyoshi</creator>
        
        <creator>Takahiro Yamamoto</creator>
        
        <creator>Takahiro Ueno</creator>
        
        <creator>Tatsuto Kihara</creator>
        
        <subject>Image processing; skeletal recognition; behavioral analysis; OpenPose</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study presents a quantitative approach to analyzing window opening and closing behaviors using skeletal recognition technology. Video data of five participants performing these actions were captured and processed using the OpenPose model, which detects 25 human joints. Focusing on the shoulder, elbow, and wrist, the study analyzed time-series joint coordinates to identify motion patterns and behavioral characteristics. The results revealed consistent relationships among joint movements and enabled accurate distinction between left- and right-hand operations. In addition, behavioral distribution characteristics were examined by visualizing horizontal and vertical skeletal displacements. The results showed that stationary postures are concentrated near a reference origin, whereas window operation actions produce distinct spatial shifts in the coordinate space, indicating that occupant behavior can be interpreted as a sequence of state transitions composed of distinct behavioral phases. The findings confirm that skeletal data can effectively represent occupant behavior without intrusive sensors, providing a non-contact and privacy-preserving monitoring method. This approach contributes to the development of human-centered intelligent building systems that can adapt indoor environments in real time based on occupant actions, thereby improving both thermal comfort and energy efficiency. Future research will expand behavioral categories and explore real-time implementation in smart building applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_22-Human_Centered_Behavioral_Analysis_of_Window_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Evaluation of a Mobile-Based Local Food Information System for Elderly Nutrition Support</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161221</link>
        <id>10.14569/IJACSA.2025.0161221</id>
        <doi>10.14569/IJACSA.2025.0161221</doi>
        <lastModDate>2025-12-31T12:27:02.7300000+00:00</lastModDate>
        
        <creator>Renuka Khunchamnan</creator>
        
        <creator>Kewalin Angkananon</creator>
        
        <subject>Information system; LINE OA; elderly care system; local food; elderly nutrition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This research aimed to: 1) study information needs regarding local foods and information systems for the elderly, 2) develop a local food information system for the elderly, and 3) evaluate system effectiveness. The quantitative study included 235 senior caregivers selected via purposive sampling. The research tools were an interview form (IOC = 0.98) and a questionnaire (Cronbach&#39;s alpha = 0.953). The results were as follows: 1) Key information needs included comprehensive and reliable content covering dietary data, food types, and disease categorization; daily meal search features; disease-specific recommendations; and presentation with large-font images and succinct language. The system was developed on the LINE Official Account platform as a web application. 2) User evaluation from 235 participants showed strong agreement with overall system efficiency. Usefulness received the highest rating, followed by LINE OA usability and efficiency. A gender comparison showed statistically significant differences (p &lt; .01) between females and males in terms of content and efficiency. 3) Linear regression analysis identified efficiency as the primary factor influencing usefulness, followed by system usability and functionality. A one-way ANOVA showed that users accessing LINE once per week had significantly higher usefulness scores than those using it every 2 to 3 days (p &lt; .05).</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_21-Development_and_Evaluation_of_a_Mobile_Based_Local_Food.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Assessment and Goal Optimization of Corporate ESG Performance Based on DEA-CCR-GML and Inverse DEA Integration Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161220</link>
        <id>10.14569/IJACSA.2025.0161220</id>
        <doi>10.14569/IJACSA.2025.0161220</doi>
        <lastModDate>2025-12-31T12:27:02.6970000+00:00</lastModDate>
        
        <creator>Hui Liu</creator>
        
        <creator>Tsung-Xian Lin</creator>
        
        <creator>Yaqing Hu</creator>
        
        <creator>Yingxi Xiao</creator>
        
        <creator>Chengze Ou</creator>
        
        <creator>Yayi Lao</creator>
        
        <creator>Wenchao Pan</creator>
        
        <subject>Corporate ESG performance; DEA; financing constraints; green innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Since the Ministry of Ecology and Environment issued the &quot;Reform Plan for the System of Environmental Information Disclosure in Accordance with the Law&quot; in 2021, the nation has set forth new requirements for sustainable development. Against this backdrop, how enterprises enhance their value across all dimensions through ESG in compliance with national mandates holds significant implications for refining ESG management systems. This study employs DEA-CCR-GML and inverse DEA models to empirically examine how investments in specific dimensions of ESG&#39;s three pillars influence corporate value realization, using data from Shanghai and Shenzhen A-share listed companies from 2019 to 2024. Findings reveal that higher levels of scale efficiency and technical efficiency correlate with greater improvements in ESG scores, while mandatory enforcement of national policies exerts a highly effective driving force on enterprises. Mechanism analysis indicates that firms can enhance ESG score improvement efficiency by elevating scale and technical efficiency, thereby more effectively realizing their intrinsic value.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_20-Dynamic_Assessment_and_Goal_Optimization_of_Corporate_ESG_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Dimensionality Reduction Using Metaheuristic and Class Separability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161219</link>
        <id>10.14569/IJACSA.2025.0161219</id>
        <doi>10.14569/IJACSA.2025.0161219</doi>
        <lastModDate>2025-12-31T12:27:02.6830000+00:00</lastModDate>
        
        <creator>Eman Abdulazeem Ahmed</creator>
        
        <creator>Malek Alzaqebah</creator>
        
        <creator>Sana Jawarneh</creator>
        
        <subject>Dimensionality reduction; Particle Swarm Optimization; metaheuristics; K-Nearest Neighbors; class separability; high-dimensional data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The high dimensionality of modern datasets presents significant challenges for machine learning, including increased computational cost, model complexity, and risk of overfitting. This study introduces a metaheuristic framework for optimized dimensionality reduction to identify highly discriminative feature subsets. The proposed method (KDR-PSO) combines a Particle Swarm Optimization (PSO) algorithm with the K-Nearest Neighbors Distance Ratio (KDR) as a filter-based objective function. This metric quantitatively assesses class separability within a feature subspace by computing the ratio of the average distance from a sample to neighbors in other classes versus those in its own class. By maximizing this ratio with a penalty on subset size, KDR-PSO automates the discovery of parsimonious feature sets that maximize inter-class discrimination. The method is computationally efficient, naturally lending itself to multi-class classification and avoiding the prohibitive cost associated with classifier-in-the-loop wrappers. Experimental results on benchmark gene expression and image datasets show that KDR-PSO achieves better dimensionality reduction than baseline and competing algorithms, yielding models with better or at least comparable performance while using fewer features. This approach offers a robust and pragmatic technique for improving model interpretability and generalizability in high-dimensional settings.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_19-Optimized_Dimensionality_Reduction_Using_Metaheuristic_and_Class_Separability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human–Technology Interaction in Generative AI: A Theoretical Review of Technology Acceptance and Cognitive Response</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161218</link>
        <id>10.14569/IJACSA.2025.0161218</id>
        <doi>10.14569/IJACSA.2025.0161218</doi>
        <lastModDate>2025-12-31T12:27:02.6370000+00:00</lastModDate>
        
        <creator>Ugur Dagtekin</creator>
        
        <creator>Ahmet Kamil Kabakus</creator>
        
        <subject>Generative Artificial Intelligence; Technology Acceptance Model; Cognitive Response Theory; Human-AI Interaction; cognitive trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid rise of Generative Artificial Intelligence (GenAI) has transformed the way humans interact with technology and has revealed cognitive mechanisms that extend beyond the explanatory scope of traditional technology acceptance models, such as the Technology Acceptance Model (TAM), Technology Acceptance Model 2 (TAM2), and the Unified Theory of Acceptance and Use of Technology (UTAUT). This theoretical review examines the combined role of the Technology Acceptance Model (TAM) and Cognitive Response Theory (CRT) in explaining GenAI-related user behaviors. The increasing involvement of GenAI in knowledge production triggers complex cognitive reactions, including cognitive trust, curiosity, ambivalence, epistemic suspicion, and resistance, which fundamentally shape technology acceptance processes. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines, a systematic literature search was conducted in the Web of Science and Scopus databases. From 3,842 records published between 2014 and 2025, duplicates were removed, and the remaining studies underwent title–abstract and full-text screening. In the final stage, 69 publications were included in the review corpus. The findings indicate that, while perceived usefulness and perceived ease of use remain core determinants of GenAI adoption within the TAM framework, integrating CRT highlights the importance of deeper internal mechanisms, such as cognitive reappraisal, epistemic trust, algorithmic scepticism, cognitive load, and curiosity. Post-ChatGPT literature further emphasizes the influence of anthropomorphic cues and cognitive tension on user attitudes, trust calibration, and engagement. Overall, the combined application of TAM and CRT provides a more comprehensive theoretical lens for understanding GenAI interactions by concurrently capturing cognitive, emotional, and behavioural processes. 
</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_18-Human_Technology_Interaction_in_Generative_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Histogram Gradient Boosting Classifier-Based UWSN Cyber Attack Detection Incorporating Environmental Factors (HGBoostUCAD)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161217</link>
        <id>10.14569/IJACSA.2025.0161217</id>
        <doi>10.14569/IJACSA.2025.0161217</doi>
        <lastModDate>2025-12-31T12:27:02.6030000+00:00</lastModDate>
        
        <creator>Hamid OUIDIR</creator>
        
        <creator>Amine BERQIA</creator>
        
        <creator>Siham AOUAD</creator>
        
        <subject>UWSN; security; intrusion detection system; cyber-attack detection; cybersecurity; machine learning; histogram gradient boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Underwater Wireless Sensor Networks (UWSNs) are commonly employed for exploring and exploiting aquatic areas, and their role is particularly valuable in hostile and constrained marine environments. However, their security is more critical than that of terrestrial wireless sensor networks (TWSNs) due to the environment in which they are deployed, the wireless communication medium, and the cost of damage repair; their protection is therefore an ongoing problem that must be continuously addressed. Consequently, it is highly recommended, indeed required, to take measures to protect UWSNs against attacks and intrusions and to maintain service quality. In general, existing works on machine learning-based intrusion detection systems (IDS) and cyber-attack detection approaches for UWSNs utilize dedicated datasets designed for terrestrial WSNs without adapting them to the aquatic environment. Furthermore, these studies analyze the enhancement of UWSN performance based on network metrics separately from machine learning model metrics, and vice versa. Accordingly, this paper proposes a novel cyber-attack detection model based on the Histogram Gradient Boosting (HGB) classifier, called HGBoostUCAD. It classifies four types of DoS attacks (Blackhole, Grayhole, Flooding, and Scheduling), employing an adjusted version of the WSN-DS intrusion detection dataset that incorporates simulated realistic environmental factors into the training data: salinity, temperature, and depth through Mackenzie’s equation, as well as node movement. Simulation results show that our method reaches 97% accuracy and 96% precision, outperforming both a Deep Neural Network (DNN) and the recent Hyper_RNN_SVM study referenced in this research in terms of machine learning model metrics. In addition to these metrics, our approach provides network measurements by DoS attack type.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_17-Histogram_Gradient_Boosting_Classifier_Based_UWSN_Cyber_Attack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Bits to Qubits: Comparative Insights into Classical and Quantum Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161216</link>
        <id>10.14569/IJACSA.2025.0161216</id>
        <doi>10.14569/IJACSA.2025.0161216</doi>
        <lastModDate>2025-12-31T12:27:02.5730000+00:00</lastModDate>
        
        <creator>Tariq Jamil</creator>
        
        <subject>Classical computing; high-performance computing; quantum computing; qubit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid development of computing hardware has been driven by an ever-growing need for high throughput, scalable performance, and the computational capability to address increasingly complex problems. The paradigm of classical computing, centered on deterministic binary logic and the von Neumann architecture, has long underpinned modern information processing and still supports a wide range of applications. However, physical limits in transistor scaling, power dissipation, and the slowing of Moore&#39;s Law have stimulated the consideration of alternative computing paradigms. Quantum computing has emerged as a paradigm that exploits the basic principles of quantum mechanics, such as superposition, entanglement, and quantum interference, to enable new forms of computation. This review compares classical and quantum computing systems in terms of operational paradigms, architectural structures, performance characteristics, and application domains. This work is supported by a systematic review of established theories, currently realized hardware implementations, and representative algorithms. The analysis underlines that classical systems remain highly reliable, scalable, and efficient for general-purpose and deterministic workloads, while quantum systems offer significant advantages in specific problem classes, such as cryptography, quantum chemistry, combinatorial optimization, and selected machine learning tasks. The study concludes that classical and quantum computing are best viewed as complementary technologies. Future high-performance computing platforms will most likely be based on hybrid classical–quantum architectures in which quantum processors serve as specialized accelerators that help classical systems solve new computational challenges.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_16-From_Bits_to_Qubits_Comparative_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speckle Denoising in Breast Ultrasound Images Using Multi-Filter Pseudo-Clean Targets and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161215</link>
        <id>10.14569/IJACSA.2025.0161215</id>
        <doi>10.14569/IJACSA.2025.0161215</doi>
        <lastModDate>2025-12-31T12:27:02.5570000+00:00</lastModDate>
        
        <creator>Omar Ayad Alani</creator>
        
        <creator>Muhammad Moinuddin</creator>
        
        <subject>Speckle noise; breast ultrasound; denoising; U-Net++; multi-filter pseudo-clean targets; deep supervision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Ultrasound imaging is widely used in breast cancer diagnosis, but suffers from speckle noise, which reduces contrast and obscures fine structures. Supervised deep learning methods for speckle reduction/denoising typically require clean ground truth, which is unattainable in vivo. To address this, this study proposes a multi-filter pseudo-ground-truth strategy combined with a UNet++ denoiser. Each image in the BUSI dataset is processed using three classical despeckling filters (Gaussian, median, and total variation) to generate diverse pseudo-clean targets. The network is trained with deep supervision to minimize a robust loss with respect to these targets, enabling it to learn a consensus representation beyond any single filter. On the BUSI test set, the proposed method achieves PSNR = 34.11 dB and SSIM = 0.8901, outperforming recent CNN baselines under the same evaluation protocol. Qualitative results show improved edge preservation and lesion visibility. This approach eliminates the need for unattainable clean ultrasound images and provides a practical path toward clinically useful ultrasound despeckling. Code, data splits, pretrained weights, and the full evaluation protocol will be released for reproducibility.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_15-Speckle_Denoising_in_Breast_Ultrasound_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NeuroFusionNet Adaptive Deep Learning for Intelligent Real-Time Industrial IoT Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161214</link>
        <id>10.14569/IJACSA.2025.0161214</id>
        <doi>10.14569/IJACSA.2025.0161214</doi>
        <lastModDate>2025-12-31T12:27:02.5270000+00:00</lastModDate>
        
        <creator>Ghayth AlMahadin</creator>
        
        <subject>Deep learning; hybrid CNN-BiGRU; OptiSenseNet; sensor data synthesis; smart manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid development of the Industrial IoT (IIoT) has facilitated real-time observation and decision-making in smart factories, although current methods suffer from constraints such as processing noisy, high-dimensional sensor data and effectively modeling both spatial and temporal relationships. Classical models such as CNN, LSTM, and GRU tend to fail in handling sequential patterns and context-aware anomaly detection, which restricts predictive maintenance and operational efficiency. To address these limitations, this research introduces NeuroFusionNet, a CNN–BiGRU–Attention hybrid framework, developed using Python and TensorFlow, that extracts localized spatial features using CNN, captures bidirectional temporal relationships using BiGRU, and highlights key time steps using Attention for improved anomaly detection and predictive maintenance. The framework is tested on the Environmental Sensor Telemetry dataset, with multivariate industrial signals such as gas levels, temperature, and equipment vibrations. Experimental results demonstrate that NeuroFusionNet achieves 95.2% accuracy, 94.8% precision, 94.1% recall, and a 94.4% F1-score, representing an improvement of approximately 2 to 7% over baseline models (CNN, RNN, LSTM) across multiple performance metrics. The method provides faster convergence and robust real-time inference, supporting scalable deployment in smart manufacturing environments. These results highlight that NeuroFusionNet not only outperforms conventional hybrid models such as CNN–LSTM and CNN–GRU but also offers actionable insights for predictive maintenance, safety, and efficiency, establishing a foundation for adaptive AI-driven monitoring in Industry 4.0 applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_14-NeuroFusionNet_Adaptive_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creative Guidance of Intelligent Emotion Recognition in Video Art</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161213</link>
        <id>10.14569/IJACSA.2025.0161213</id>
        <doi>10.14569/IJACSA.2025.0161213</doi>
        <lastModDate>2025-12-31T12:27:02.4930000+00:00</lastModDate>
        
        <creator>Weixing Chen</creator>
        
        <creator>Yubo Zhou</creator>
        
        <subject>Emotional recognition; image art; artistic expressiveness; creative feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This study optimizes the existing emotion recognition model to improve the application effect of emotion recognition technology in the guidance of video art creation. It also compares the performances of Multimodal Sentiment Analysis (MMSA), Multimodal Sequence Encoder (MuSE), and the optimized model through a series of simulation experiments. In the fine-grained accuracy of emotion recognition, the optimized model performs well in micro-expression recognition accuracy and multimodal fusion effect, and scores 4.17 and 4.96, respectively. In the robustness test under a complex background, the background interference resistance score of the optimized model is 4.30, and the modal mismatch processing score is 4.38. In the experiment on the effectiveness of emotion recognition in creative guidance, the optimized model scores 4.40 and 3.66 in terms of artistic expression enhancement and creative feedback accuracy, respectively. The experimental results show that the optimized model is superior to the existing models in several key dimensions of emotion recognition, especially in enhancing the emotional expression of artistic works and providing creative guidance. Therefore, this study provides some reference for the application and development of emotion recognition technology in video art creation.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_13-Creative_Guidance_of_Intelligent_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Improved YOLO-LSTM with Combined MQTT-LoRaWAN for AI Surveillance in Tea Plantations to Prevent Elephant Intrusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161212</link>
        <id>10.14569/IJACSA.2025.0161212</id>
        <doi>10.14569/IJACSA.2025.0161212</doi>
        <lastModDate>2025-12-31T12:27:02.4630000+00:00</lastModDate>
        
        <creator>Rabin Kumar Mullick</creator>
        
        <creator>Rakesh Kumar Mandal</creator>
        
        <subject>Tea garden; machine learning; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Elephant-human conflict is a growing problem in the tea garden areas of the Dooars in North Bengal, resulting in massive losses of crops and infrastructure, and sometimes human life. Each year, these mild-mannered giants destroy crops and fences and even threaten locals, raising repair costs and endangering lives. Conventional deterrence methods, such as fences, firecrackers, and patrols, are mostly ineffective, unsustainable, or cruel to the animals. To address this predicament, a non-invasive, intelligent surveillance system named HIS (Hexagonal Intelligent Surveillance) has been designed. HIS integrates state-of-the-art machine learning with artificial intelligence by combining an improved YOLO-LSTM with MQTT-LoRaWAN, pairing distributed agents with predictive analytics and a hex-grid mapping scheme. HIS is an effective solution for detecting and deterring elephant intrusions while maintaining ecological balance. When an intrusion is detected, the system sends targeted warnings before the elephants can cause havoc. The hex-grid mapping gives operators accurate spatial knowledge, and the predictive analytics forecasts when and where elephants are likely to roam. A virtual simulation of the proposed work shows 98% accuracy on a custom-designed elephant dataset. The paper offers a background of the architecture, theoretical framework, algorithm models, and expected benefits of the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_12-Application_of_Improved_YOLO_LSTM_with_Combined_MQTT_LoRaWAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CAT-TODNet: A Contextual Transformer-Based Optimized Deformable Convolution Framework for Efficient ECG-Based Heart Failure Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161211</link>
        <id>10.14569/IJACSA.2025.0161211</id>
        <doi>10.14569/IJACSA.2025.0161211</doi>
        <lastModDate>2025-12-31T12:27:02.4330000+00:00</lastModDate>
        
        <creator>Vinitha V</creator>
        
        <creator>V. Parthasarathy</creator>
        
        <creator>R. Santhosh</creator>
        
        <subject>Heart Failure (HF); Electrocardiogram (ECG); Artificial Intelligence (AI); Contextual Auxiliary Transformer (CAT); deformable convolution; optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Heart Failure detection using Electrocardiogram (ECG) signals is a critical clinical task, as continuous analysis of cardiac waveforms supports early diagnosis and effective intervention. Despite advancements in machine learning and deep learning techniques, existing approaches often suffer from limited contextual representation, sensitivity to noise, and inadequate handling of non-stationary temporal deformations in ECG signals, which restrict diagnostic reliability. To address these challenges, this study introduces a novel deep learning framework termed Contextual Auxiliary Transformer with Triple Stacked Optimized Deformable Convolution Network (CAT-TODNet) for accurate heart failure detection from ECG signals. ECG recordings acquired from the MIT-BIH Arrhythmia Database are initially subjected to three-stage preprocessing, including denoising, signal smoothing, and Power Line Interference (PLI) removal, to enhance signal quality. The Contextual Auxiliary Transformer (CAT) module explicitly captures both static and dynamic contextual dependencies, enabling robust contextual feature extraction. These context-aware features are subsequently processed through triple stacked deformable convolution layers with adaptive receptive fields. To ensure stable offset estimation under non-stationary ECG conditions, the Al-Biruni Earth Radius (ABER) optimization algorithm is employed to optimize deformable convolution offsets, overcoming the limitations of gradient-based learning. Experimental results demonstrate that CAT-TODNet achieves an accuracy of 98.88%.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_11-CAT_TODNet_A_Contextual_Transformer_Based_Optimized_Deformable_Convolution_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Functional Requirements, Modelling Practices, and Validation Strategies in IoT Application Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161210</link>
        <id>10.14569/IJACSA.2025.0161210</id>
        <doi>10.14569/IJACSA.2025.0161210</doi>
        <lastModDate>2025-12-31T12:27:02.3870000+00:00</lastModDate>
        
        <creator>Nor Haniza Ramli</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Nur Ida Aniza Rusli</creator>
        
        <subject>Functional requirements; Unified Modeling Language; validation; Internet of Things; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid development of the Internet of Things (IoT) requires systematic development methods that address complex functional, architectural, and validation concerns. This review synthesized research published between 2016 and 2023 to characterize common functional requirements (FRs), current modelling techniques, and validation practices. From an initial corpus of 1,598 articles, 425 publications were selected for in-depth analysis. The results demonstrated a consistent emphasis on data handling, inter-device communication, analytics, and system management. Nevertheless, critical gaps persisted in end-to-end security and context-aware computing. Sequence, class, and use case diagrams dominated modelling practices, indicating attention to system behavior and interactions. Conversely, state machine and deployment diagrams were underutilized despite their potential to better capture runtime states and architectural configurations. Validation approaches in IoT development were primarily empirical, with experiments and case studies predominating. Expert reviews, though valuable for early-stage assessment, were rarely applied, indicating missed opportunities to improve design quality early in the lifecycle. Overall, the results reflected the maturity and limitations of current practices in IoT engineering. These limitations can be addressed by diversifying modelling techniques, enhancing security integration, and using hybrid validation frameworks. This review presents a foundational reference to guide the systematic development of scalable, secure, and context-aware IoT systems, and contributes to the evolving body of IoT software engineering knowledge.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_10-A_Systematic_Review_of_Functional_Requirements_Modelling_Practices_and_Validation_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Trust Modulation and Human Oversight in AI-Driven AML Systems: A Conceptual Framework for Compliance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161209</link>
        <id>10.14569/IJACSA.2025.0161209</id>
        <doi>10.14569/IJACSA.2025.0161209</doi>
        <lastModDate>2025-12-31T12:27:02.3070000+00:00</lastModDate>
        
        <creator>Julian Diaz</creator>
        
        <creator>Abeer Alsadoon</creator>
        
        <creator>Oday D. Jerew</creator>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <creator>Albaraa Abuobieda</creator>
        
        <creator>Abubakar Elsafi</creator>
        
        <subject>Artificial intelligence; anti-money laundering (AML); Trust Calibration; Explainability; decision fatigue; human oversight; AUSTRAC Compliance; transaction monitoring; false positives; Analyst–System Interaction; Regulatory Technology (RegTech)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>This literature review investigates how human trust, decision fatigue, explainability (XAI), and human oversight interrelate to influence analyst decision-making in AI-driven anti-money laundering (AML) systems. While prior research has predominantly emphasized algorithmic performance, detection accuracy, or regulatory compliance in isolation, a critical gap remains in understanding the human-centered dynamics that shape real-world operational outcomes. Addressing this gap, the review examines how financial institutions navigate compliance demands and operational constraints, drawing on the Australian regulatory environment as an illustrative governance reference, including expectations articulated by AUSTRAC. Building on this synthesis, the study identifies structural gaps in Trust Calibration and oversight practices. It introduces a Dynamic Trust Modulation (DTM) framework to conceptualize how trust evolves across AML workflows. The framework models trust as a fluid, context-dependent construct shaped by system behavior, analyst workload, explainability mechanisms, and regulatory pressure. By framing trust, explainability, and decision fatigue as interdependent components of human–AI collaboration, this review advances a more holistic perspective on socio-technical system design in financial crime detection. The proposed framework contributes theoretically by extending human–AI trust research into the AML domain and practically by offering actionable design principles to enhance system accountability, decision defensibility, and adaptive compliance in operational AML environments.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_9-Dynamic_Trust_Modulation_and_Human_Oversight_in_AI_Driven_AML_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Latent-Topology Graph State-Space Model (LT-GSSM) for Robust Traffic Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161208</link>
        <id>10.14569/IJACSA.2025.0161208</id>
        <doi>10.14569/IJACSA.2025.0161208</doi>
        <lastModDate>2025-12-31T12:27:02.2600000+00:00</lastModDate>
        
        <creator>Selma Kerdous</creator>
        
        <subject>Traffic forecasting; graph neural networks; state-space models; latent topology; dynamic adjacency learning; spatio-temporal modeling; noise and missing data robustness; probabilistic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Accurate traffic forecasting remains challenging when sensor data are noisy, incomplete, or non-stationary. Recent advances in spatio-temporal learning have combined Graph Neural Networks (GNNs) with recurrent, convolutional, or attention mechanisms to capture spatio-temporal dependencies. However, most existing approaches remain largely deterministic and rely on fixed or pre-learned adjacency matrices, limiting their adaptability when network structures evolve or sensor reliability varies. Some methods further stack multiple adjacency matrices to represent complex spatial relations, yet still lack explicit mechanisms to model uncertainty, resulting in reduced robustness under degraded data conditions. This work introduces the Latent Topology Graph State-Space Model (LT-GSSM), a probabilistic framework designed to enhance robustness and adaptability in traffic forecasting. LT-GSSM represents the road network as a latent dynamic graph whose structure evolves over time through dynamic adjacency learning based on past hidden states and observations, enabling the model to capture evolving spatial correlations such as congestion propagation. Temporal dependencies are modelled by a nonlinear state-space function implemented with a Temporal Convolutional Network (TCN), which captures long-range temporal patterns without recurrence. The probabilistic state-space formulation explicitly represents sensor noise and handles missing data through probabilistic estimation inspired by Kalman filtering. By jointly integrating dynamic graph learning, explicit noise modelling, and nonlinear temporal transitions, LT-GSSM achieves greater stability and resilience to data uncertainty. Experiments on SUMO simulations and real-world PeMS datasets show that LT-GSSM consistently outperforms static and adaptive-graph models, providing a strong foundation for robust spatio-temporal forecasting under uncertain conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_8-Latent_Topology_Graph_State_Space_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Diagnostic Model for Early Malaria Symptoms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161207</link>
        <id>10.14569/IJACSA.2025.0161207</id>
        <doi>10.14569/IJACSA.2025.0161207</doi>
        <lastModDate>2025-12-31T12:27:02.2130000+00:00</lastModDate>
        
        <creator>Phoebe A Barraclough</creator>
        
        <creator>Charles M Were</creator>
        
        <creator>Hilda Mwangakala</creator>
        
        <creator>Philip Anderson</creator>
        
        <creator>Dornald O Ohanya</creator>
        
        <creator>Harison Agola</creator>
        
        <creator>Philip Nandi</creator>
        
        <subject>Malaria diagnosis system; malaria symptoms; classifier; ANFIS; fuzzy rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>One of the most significant worldwide health concerns in low- and middle-income nations over the past few decades is malaria, especially in Kenya, where seventy per cent of people reside in areas where malaria is widespread, and most of them face obstacles in accessing medical care because of social culture, distance, and lack of money. Malaria transmission is high, particularly in Kenya’s remote areas, despite a plethora of scientific efforts to combat the disease. This study aims to design and develop an intelligent malaria diagnosis model for early symptom detection using an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a dataset of 2,000 records extracted from six types of patient data inputs to optimize model performance. The model achieved 98.3% accuracy, which was contrasted with pertinent state-of-the-art findings to illustrate the benefits of the proposed approach. The main contribution of this study is the combination of six types of patient data inputs, including demographics, symptoms, blood pressure, heartbeat, height, and weight, using fuzzy system techniques to detect early malaria symptoms accurately. The results demonstrate that the combined patient data input used for evaluation can identify different forms of malaria and yields the best outcome when compared with relevant findings from existing studies.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_7-Intelligent_Diagnostic_Model_for_Early_Malaria_Symptoms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Optimization and CNN-Transformer Framework for Hot Topic Detection in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161206</link>
        <id>10.14569/IJACSA.2025.0161206</id>
        <doi>10.14569/IJACSA.2025.0161206</doi>
        <lastModDate>2025-12-31T12:27:02.1500000+00:00</lastModDate>
        
        <creator>Hemasundara Reddy Lanka</creator>
        
        <creator>Vinodkumar Reddy Surasani</creator>
        
        <creator>Nagaraju Devarakonda</creator>
        
        <creator>Sarvani Anandarao</creator>
        
        <subject>Hot topic detection; Twitter trend analysis; CNN-Transformer; Modified Bald Eagle Search (MBES); Particle Swarm Optimization (PSO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>The rapid growth of Twitter as a real-time communication platform has created an urgent need for effective hot topic detection. Traditional statistical and machine learning models often fail to capture contextual semantics and long-range dependencies, while deep learning approaches such as CNNs and LSTMs improve representation but face challenges in scalability, optimization, and convergence. This study proposes a novel deep learning framework that integrates Multi-Scale Conv1D for diverse n-gram feature extraction, an attention-enhanced BiLSTM for contextual learning, and a hybrid Modified Bald Eagle Optimization–Particle Swarm Optimization (MBES-PSO) strategy for robust parameter tuning. Unlike conventional models limited by fixed kernel sizes or shallow architectures, the proposed design dynamically captures both local and global semantic patterns in tweets. The hybrid optimizer balances global exploration with local exploitation, achieving faster convergence and improved stability. The framework is evaluated on a large-scale Twitter dataset from Kaggle. Experimental results show that the proposed model achieved the highest accuracy of 90.12%, significantly outperforming 13 state-of-the-art baselines across precision, recall, and F1-score. This study contributes: 1) a Multi-Scale Conv1D architecture for enriched feature extraction; 2) an attention-based BiLSTM module for improved interpretability; 3) a hybrid MBES-PSO optimizer that enhances convergence and avoids local minima; and 4) extensive comparative evaluation validating robustness on real-world Twitter data. The proposed framework offers a scalable, interpretable, and high-performing solution for real-time hot topic detection in social media analytics.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_6-Hybrid_Optimization_and_CNN_Transformer_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TrustGraph: A Heterogeneous GNN for Dynamic Zero-Trust Policy Enforcement in Microservices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161205</link>
        <id>10.14569/IJACSA.2025.0161205</id>
        <doi>10.14569/IJACSA.2025.0161205</doi>
        <lastModDate>2025-12-31T12:27:02.1030000+00:00</lastModDate>
        
        <creator>Nurmyrat Amanmadov</creator>
        
        <creator>Jemshit Iskanderov</creator>
        
        <creator>Tarlan Abdullayev</creator>
        
        <subject>Graph neural networks; zero-trust security; microservices; anomaly detection; heterogeneous graphs; multi-modal telemetry; dynamic policy enforcement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Securing cloud microservices requires a unified understanding of how services behave, authenticate, and interact in real time. Unlike existing methods that analyze telemetry signals in isolation, this work presents a heterogeneous graph-based Zero-Trust framework that represents microservices using multi-modal telemetry—logs, metrics, traces, and authentication flows—embedded directly into graph nodes and edges. A Graph Neural Network architecture with attention captures risk propagation across service dependencies, while a joint anomaly detection and trust computation mechanism generates dynamic trust scores with temporal decay to support continuous verification. These trust signals drive real-time dynamic policy enforcement capable of denying or restricting suspicious interactions with minimal operational overhead. Experiments on the TrainTicket, Sock Shop, and DeathStarBench benchmarks show strong performance, achieving 97.2% accuracy, 98.1% recall, and 0.987 AUC on TrainTicket, with consistent results across the other datasets and latency overhead below 3.2 ms. Robustness tests demonstrate accuracy above 95.8% under noisy logs, delayed traces, and authentication failures. Ablation and SHAP analyses confirm that leveraging multiple telemetry modalities—especially authentication data—is critical for accurate detection and trust scoring. These findings show that multi-modal heterogeneous graph modeling, coupled with integrated anomaly-to-policy decision pipelines, provides an effective foundation for Zero-Trust security in cloud-native microservices.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_5-TrustGraph_A_Heterogeneous_GNN_for_Dynamic_Zero_Trust_Policy_Enforcement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Emergency Preparedness with a Mobile Application for Respiratory Therapy Resource Coordination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161204</link>
        <id>10.14569/IJACSA.2025.0161204</id>
        <doi>10.14569/IJACSA.2025.0161204</doi>
        <lastModDate>2025-12-31T12:27:02.0730000+00:00</lastModDate>
        
        <creator>Rahaf Katib</creator>
        
        <subject>Respiratory therapy; mass gatherings; Hajj and Umrah; emergency preparedness; health information systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Mass gatherings such as Hajj and Umrah, along with pandemic outbreaks, place a significant strain on healthcare systems, particularly respiratory therapy (RT) services, where shortages of respiratory therapists, ventilators, and specialized equipment can compromise emergency response due to increased patient volume, environmental exposure, and heightened risk of respiratory diseases. This study presents a conceptual model of a mobile-based resource coordination system designed to enhance emergency preparedness and response for respiratory therapy services during pandemics and mass-gathering events, such as Hajj and Umrah. The proposed system integrates a real-time database with mobile clinical decision support to provide RT managers and supervisors with centralized visibility of ventilators, staffing levels, and critical respiratory equipment across healthcare facilities. By enabling real-time monitoring and predictive, alert-driven decision support, the system aims to support proactive resource allocation, staff deployment, and equipment distribution. Although the work is conceptual and does not include empirical evaluation, it provides a well-defined architectural framework suitable for future implementation that aligns respiratory care workflows with real-time monitoring and forecast-driven decision support in settings such as Hajj, Umrah, and large-scale public health emergencies.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_4-Improving_Emergency_Preparedness_with_a_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TransAneu-Net: A Hybrid Radiomics and Contrastive Deep Learning Framework for Automated Brain Aneurysm Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161203</link>
        <id>10.14569/IJACSA.2025.0161203</id>
        <doi>10.14569/IJACSA.2025.0161203</doi>
        <lastModDate>2025-12-31T12:27:02.0100000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Shirin Amanzholova</creator>
        
        <creator>Bella Tussupova</creator>
        
        <creator>Yelena Satimova</creator>
        
        <creator>Mukhamedali Uzakbayev</creator>
        
        <creator>Kenzhekhan Kaden</creator>
        
        <creator>Dastan Kambarov</creator>
        
        <subject>Aneurysm; deep learning; radiomics; transformer networks; contrastive learning; MR imaging; MRA; medical image analysis; aneurysm detection; neurovascular diagnostics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Accurate and early detection of intracranial aneurysms is critical for preventing life-threatening subarachnoid hemorrhage and improving clinical outcomes. This study proposes a hybrid diagnostic framework that integrates radiomics-based feature engineering with a transformer-driven deep learning architecture enhanced by teacher–student contrastive representation learning. The workflow incorporates region-of-interest segmentation, handcrafted radiomic feature extraction, multimodal representation fusion, and probabilistic aneurysm localization using high-resolution MR and MRA imaging. Comprehensive experiments conducted on benchmark neuroimaging datasets demonstrate that the proposed model achieves high classification accuracy, stable convergence, and robust generalization across diverse anatomical and imaging conditions. Qualitative evaluations further reveal that heatmap-based confidence overlays reliably identify aneurysmal regions and closely align with ground-truth annotations. The contrastive learning module strengthens spatial and frequency-domain feature alignment, enabling effective training under limited supervision and reducing performance degradation associated with data heterogeneity. While limitations remain regarding dataset breadth and segmentation dependencies, the results indicate that this hybrid radiomics–AI framework offers a promising pathway toward automated aneurysm screening and clinical decision support. The proposed system has the potential to enhance diagnostic precision, mitigate inter-observer variability, and contribute to earlier intervention in neurovascular care.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_3-TransAneu_Net_A_Hybrid_Radiomics_and_Contrastive_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Multi-Class Analysis of Consumer Complaints Using Frozen MiniLM Embeddings and Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161202</link>
        <id>10.14569/IJACSA.2025.0161202</id>
        <doi>10.14569/IJACSA.2025.0161202</doi>
        <lastModDate>2025-12-31T12:27:01.9800000+00:00</lastModDate>
        
        <creator>Sri Vishnu Gopinathan</creator>
        
        <creator>Muhammad Faraz Manzoor</creator>
        
        <subject>Consumer complaints; text classification; sentence embeddings; MiniLM; class imbalance; sentiment analysis; domain adaptation; contextual embeddings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>Text classification is a critical task in domains generating large volumes of unstructured text, such as finance, healthcare, and consumer services. However, accurately classifying such data remains challenging due to its noisy, imbalanced, and context-dependent nature. While pre-trained language models have improved general text classification, their direct application often overlooks domain-specific cues and sentiment patterns that are important for nuanced understanding. In this study, we propose a novel framework that extends the MiniLM language model by integrating domain-relevant cues and sentiment features with textual embeddings. This integration allows the model to capture both semantic richness and domain-specific patterns, enhancing reliability and interpretability. Comparative experiments against baselines including TF-IDF + Logistic Regression, Word2Vec + Logistic Regression, TF-IDF + Na&#239;ve Bayes, and Word2Vec + Na&#239;ve Bayes show that the proposed approach consistently outperforms traditional methods, achieving an accuracy of 0.8653, precision of 0.8697, recall of 0.8653, F1-score of 0.8668, Cohen’s Kappa of 0.7862, and MCC of 0.7870. Ablation studies further demonstrate the critical role of cues and sentiment features in improving performance. These findings indicate that combining pre-trained embeddings with carefully selected domain features offers a more robust and context-aware solution for text classification, establishing a foundation for future work integrating transformer-based models with explainable AI techniques in domain-specific applications.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_2-Efficient_Multi_Class_Analysis_of_Consumer_Complaints.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ML to Predict Effectiveness of the MCP Authorization Model for LLM-Powered Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161201</link>
        <id>10.14569/IJACSA.2025.0161201</id>
        <doi>10.14569/IJACSA.2025.0161201</doi>
        <lastModDate>2025-12-31T12:27:01.8870000+00:00</lastModDate>
        
        <creator>Upakar Bhatta</creator>
        
        <subject>Model Context Protocol; artificial intelligence; machine learning; large language model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(12), 2025</description>
        <description>In today’s AI-driven world, unlocking AI potential and enabling AI models to communicate with external data sources is vital for enhancing the efficiency and security of AI-driven applications. The Model Context Protocol (MCP) serves as a standard for maximizing AI potential. This study leverages a machine learning approach to predict the effectiveness of the MCP Authorization Model for an LLM-powered agent. It utilizes logs from Azure services such as Azure Monitor, Azure Sentinel, and Azure Active Directory, which are used to monitor MCP server activity, to create a sample dataset. This dataset includes features such as source_ip, destination_ip, event_type, alert_severity, and target_variable. These features are used to train the ML model to assess the effectiveness of the MCP Authorization model for LLM-powered agents, enabling organizations to better understand the importance of a secure connection between AI models. This approach contributes to unlocking AI’s full potential while improving application security and operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No12/Paper_1-ML_to_Predict_Effectiveness_of_the_MCP_Authorization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Open Cyber Intelligence for SOAR: Reduced False Positives in Low-Resource Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611105</link>
        <id>10.14569/IJACSA.2025.01611105</id>
        <doi>10.14569/IJACSA.2025.01611105</doi>
        <lastModDate>2025-12-02T06:04:44.2130000+00:00</lastModDate>
        
        <creator>Shunmugam U</creator>
        
        <creator>Rajesh D</creator>
        
        <subject>Open Cyber Intelligence Framework; SOAR; SIEM; Cyber Threat Intelligence; false positive reduction; threat mitigation; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The increasing use of resource-constrained cyber-physical devices emphasizes the need for effective and flexible methods in the deployment of threat intelligence. The Open Cyber Intelligence Framework (OCIF), an architecture that applies Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) capabilities to resource-constrained environments, is presented in this study. The OCIF adaptively uses lightweight machine learning models to process cyber threat intelligence (CTI) with greater precision and effectiveness. By using Wazuh to monitor the behavior of machines and OpenSearch to model the results of the analysis, the OCIF can reduce false positives by up to 6% in real-world implementations. The model ensures sufficient threat mitigation without taxing the system by striking a balance between anomaly detection, context, and decreased communication overhead. Because it is open source and modular, OCIF promotes innovation and makes it possible for CTI to be built and used in resource-constrained settings with optimal detection and operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_105-Adaptive_Open_Cyber_Intelligence_for_SOAR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Reinforcement Learning-Based Hyper-Parameter Optimization with Yolov8 Indoor Fire Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611104</link>
        <id>10.14569/IJACSA.2025.01611104</id>
        <doi>10.14569/IJACSA.2025.01611104</doi>
        <lastModDate>2025-11-29T11:00:04.7670000+00:00</lastModDate>
        
        <creator>Patrick D. Cerna</creator>
        
        <creator>Harvey C. Quijada</creator>
        
        <creator>Michael Joseph E. Ortaliz</creator>
        
        <creator>Kenneth Charles H. Saluna</creator>
        
        <subject>Fire detection; YOLOv8; reinforcement learning; hyperparameter optimization; convolutional neural network; indoor surveillance; real-time alert system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study presents an indoor vision-based fire detection system that integrates a YOLOv8n object detection model with a Reinforcement Learning-based Optimization Algorithm (ROA) for hyperparameter tuning. The research investigates three key aspects: 1) the effectiveness of ROA in improving model performance, 2) the optimal smart camera resolution and placement for indoor fire detection, and 3) the implementation of a real-time dual-channel user notification system. The BantaySunog model iteratively adjusted a few hyperparameters using the Reinforcement Learning-based Optimization Algorithm (ROA) [Talaat &amp; Gamel, 2023]. An episodic framework was used for training, with 15 episodes of 20 epochs each, for a maximum of 300 epochs. Each episode&#39;s top weights were carried over to the following one. To balance exploration and exploitation, ROA employed an epsilon-greedy policy with an epsilon value that decreased from 0.9 to 0.2. Experimental results show that while ROA reduced training time and yielded a more conservative prediction strategy, it did not consistently outperform the baseline YOLOv8n in terms of detection metrics such as recall and mAP50-95. Camera deployment tests identified that positioning cameras away from direct light sources significantly improved detection success, with both elevation and resolution contributing to overall system performance. Finally, a dual-channel alert mechanism combining Firebase Cloud Messaging (FCM) and Telerivet SMS API enabled the timely delivery of fire alerts, aligning with real-world standards. The findings contribute to the development of reliable and accessible fire detection systems, especially for densely populated residential areas with limited infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_104-Hybrid_Reinforcement_Learning_Based_Hyper_Parameter_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Cryptography for Energy-Conscious Authentication in IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611103</link>
        <id>10.14569/IJACSA.2025.01611103</id>
        <doi>10.14569/IJACSA.2025.01611103</doi>
        <lastModDate>2025-11-29T11:00:04.7530000+00:00</lastModDate>
        
        <creator>Ashwag Alotaibi</creator>
        
        <creator>Huda Aldawghan</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Lightweight cryptography; IoT security; energy efficiency; lightweight authentication; SPECK; PRESENT; AES- 128; cryptographic algorithms; resource-constrained devices; Multi-Metric Suitability Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The proliferation of IoT devices has introduced new challenges in ensuring secure communication while maintaining computational and energy efficiency. Traditional cryptographic mechanisms such as AES-128, although robust, often fail to meet the stringent constraints of resource-limited IoT environments. This research investigates lightweight cryptographic algorithms and energy-conscious authentication protocols as viable alternatives for enhancing security in IoT systems. A comprehensive performance evaluation is conducted through the simulated implementation of three widely used algorithms—AES-128, SPECK, and PRESENT—measuring encryption time, decryption time, memory consumption, and ciphertext size. The results reveal critical trade-offs between security strength and resource usage, highlighting SPECK’s balanced performance and PRESENT’s ultra-low resource footprint. A Multi-Metric Suitability Analysis framework is introduced to assess overall applicability across different IoT use cases. The study provides evidence-based insights to guide algorithm selection for secure yet efficient IoT deployments. Future directions include hardware-based testing, integration into IoT protocols, and exploration of post-quantum lightweight cryptographic approaches. The findings contribute to the development of practical, scalable, and energy-efficient security architectures tailored for the next generation of IoT systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_103-Lightweight_Cryptography_for_Energy_Conscious_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Urban Traffic Congestion Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611102</link>
        <id>10.14569/IJACSA.2025.01611102</id>
        <doi>10.14569/IJACSA.2025.01611102</doi>
        <lastModDate>2025-11-29T11:00:04.7070000+00:00</lastModDate>
        
        <creator>Chaimae Kanzouai</creator>
        
        <creator>Abderrahim Zannou</creator>
        
        <creator>Soukaina BOUAROUROU</creator>
        
        <creator>El Habib Nfaoui</creator>
        
        <creator>Abdelhak Boulaalam</creator>
        
        <subject>Traffic congestion; traffic management; traffic factors; congestion level; DBSCAN; GCN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Traffic congestion is a global problem in urban areas that creates longer travel times, increased fuel consumption, and elevated levels of pollution. Traffic congestion occurs because of the exponential growth of vehicles along with a finite number of roadways and the inability to manage traffic effectively. This paper studies the question: How well can traffic type factors be used as a predictor for determining the severity of traffic congestion? To answer this question, we present a new methodology to perform clustering and classification based on various types of traffic indicators. In addition, traffic indicators (such as size of roadway, speed of vehicles, number of vehicles, and level of traffic flow) are categorized by using two distinct classifications: homogeneous and heterogeneous. Using these categories, we then apply a modified version of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to cluster the traffic indicators. The resultant label from the clustering process is then used to develop a prediction model that provides information regarding the level of traffic congestion along a selected roadway. Our experiments were conducted using a real-world dataset and demonstrate that the proposed method produced an accuracy rate of 93% with 92% precision and recall, thereby outperforming other current methodologies used for predicting traffic congestion. Overall, these findings indicate that incorporating an analysis of traffic type factors into the clustering and classification methodology can result in more accurate predictions of traffic congestion.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_102-A_Novel_Approach_for_Urban_Traffic_Congestion_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Access Control Model with the Skew Tent Map for Decision Making (STM-ABAC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611101</link>
        <id>10.14569/IJACSA.2025.01611101</id>
        <doi>10.14569/IJACSA.2025.01611101</doi>
        <lastModDate>2025-11-29T11:00:04.6730000+00:00</lastModDate>
        
        <creator>Omessead BenMbarak</creator>
        
        <creator>Anis Naanaa</creator>
        
        <creator>Sadok ElAsmi</creator>
        
        <subject>Cloud computing; skew tent map; ABAC; ABE; attribute token</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The proliferation of cloud computing exposes sensitive data to the risk of unauthorized access, as traditional access control mechanisms are often inadequate for this dynamic environment. To address these shortcomings, this article proposes a novel access control scheme, named STM-ABAC, which is based on the Skew Tent Map (STM). This scheme is specifically designed to overcome the inherent limitations of traditional Attribute-Based Access Control (ABAC) and Attribute-Based Encryption (ABE) schemes when deployed in dynamic cloud environments. The methodology involves constructing a multi-authority ABAC model, generating verifiable attribute tokens using chaotic sequences, applying LSSS-based policy encryption, and evaluating performance through rigorous formal analysis and experimental benchmarking. The results demonstrate that STM-ABAC reduces the computational overhead during decryption by up to 60% and maintains lower initialization and key-generation costs compared to existing CP-ABE and MA-ABE schemes. Furthermore, security proofs confirm strong resistance to chosen-attribute and chosen-nonce attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_101-A_Novel_Access_Control_Model_with_the_Skew_Tent_Map.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A String-Similarity-Oriented Item Swapping Algorithm for Protecting Sensitive Frequent Itemsets in Transaction Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01611100</link>
        <id>10.14569/IJACSA.2025.01611100</id>
        <doi>10.14569/IJACSA.2025.01611100</doi>
        <lastModDate>2025-11-29T11:00:04.6430000+00:00</lastModDate>
        
        <creator>Fatah Yasin Al Irsyadi</creator>
        
        <creator>Dedi Gunawan</creator>
        
        <creator>Diah Priyawati</creator>
        
        <creator>Wardah Yuspin</creator>
        
        <subject>Data hiding; sensitive frequent itemset; frequent itemset; data mining; item swapping technique; D-LSwap</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Frequent itemset mining is a widely adopted data mining technique, with applications in transaction database analysis such as exploring sets of purchased items. Growing societal concern about privacy protection and security has led businesses to handle their databases more carefully, since various information, including sensitive information, can be extracted from them. Database owners should therefore take measures to minimize sensitive information leakage during the data mining process, and a sensitive frequent itemset hiding algorithm can be adopted to achieve this. However, it remains a challenge to design a data hiding algorithm that not only successfully hides sensitive frequent itemsets but also minimizes side effects such as item loss, the appearance of artificial frequent itemsets, and misses cost. In this paper, a method named D-LSwap, based on the item swapping technique, is proposed to address this issue while minimizing those side effects. Initially, D-LSwap inspects each transaction in the database to determine whether it is sensitive. It then selects a sensitive transaction identified in the previous step and forms a pair of transactions from it, where the pair is formed by incorporating Damerau-Levenshtein string similarity. The next step is selecting items from this pair for the swapping process. Experimental results indicate that the proposed method outperforms several existing algorithms, increasing data utility by up to 10% while keeping the number of lost items more than 10 times lower than that of the baseline methods.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_100-A_String_Similarity_Oriented_Item_Swapping_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Based on Clean Architecture for Sustainable Tourism Recommendation Using BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161199</link>
        <id>10.14569/IJACSA.2025.0161199</id>
        <doi>10.14569/IJACSA.2025.0161199</doi>
        <lastModDate>2025-11-29T11:00:04.6130000+00:00</lastModDate>
        
        <creator>Cristhian Paolo Atuncar Yataco</creator>
        
        <creator>Marco Antonio Lopez Salinas</creator>
        
        <creator>José Alfredo Herrera Quispe</creator>
        
        <subject>Software engineering; clean architecture; MVC; BERT; recommendation systems; mobile applications; FastAPI; Kotlin; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The development of intelligent systems requires robust architectures and design patterns that ensure scalability, maintainability, and separation of concerns. This paper presents the design and implementation of a mobile application for recommending sustainable tourist destinations in Peru, employing a Clean Architecture approach in the backend and the MVC pattern in the frontend. The solution integrates a content-based recommendation engine using semantic embeddings generated through BERT, connected to open data sources from MINCETUR, SafeTravels, and OpenStreetMap. The proposed system was evaluated using three simulated user profiles, achieving an average similarity score of 0.83 and a more balanced distribution between traditional and emerging destinations. From a software engineering perspective, the system demonstrates decoupling between layers, modularity, interoperability with external APIs, and scalability potential. This architecture serves as a practical case study of applying modern design patterns to intelligent systems with an impact on sustainable tourism.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_99-Mobile_Application_Based_on_Clean_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>STROKECT-BENCH: Evaluating Convolutional and Transformer-Based Deep Models for Automated Stroke Diagnosis Using Brain CT Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161198</link>
        <id>10.14569/IJACSA.2025.0161198</id>
        <doi>10.14569/IJACSA.2025.0161198</doi>
        <lastModDate>2025-11-29T11:00:04.5670000+00:00</lastModDate>
        
        <creator>Raghda Essam Ali</creator>
        
        <creator>Reda Abdel-Wahab El-Khoribi</creator>
        
        <creator>Ehab Ezzat Hassanein</creator>
        
        <creator>Farid Ali Moussa</creator>
        
        <subject>Stroke detection; brain CT image; Convolutional neural networks; vision transformers; exploratory benchmark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Stroke detection from computed tomography (CT) images is an important research direction in computer vision. However, prior studies often use different preprocessing steps, model configurations, and evaluation protocols, making it difficult to compare results or assess architectural reliability. This paper presents an exploratory benchmark that evaluates representative convolutional neural networks (CNNs) and vision transformer (ViT) models under a unified experimental setting for binary stroke classification. STROKECT-BENCH is introduced as a standardized framework in which five CNNs and four transformer-based models are trained on the Brain Stroke CT Image dataset (1,551 normal and 950 stroke images) using identical preprocessing, augmentation, optimization parameters, and performance metrics. The results show that transformer models, particularly PVT-Small and Swin Transformer, achieve the highest accuracy and AUC, while EfficientNetB0 provides a strong balance between accuracy and computational efficiency. As an exploratory study, the findings aim to establish reliable baselines rather than clinical validation. STROKECT-BENCH offers a consistent evaluation reference for future work involving patient-level datasets, external validation, and multimodal stroke-analysis approaches.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_98-STROKECT_BENCH_Evaluating_Convolutional_and_Transformer_Based_Deep_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Mutual-Information and Heatmap Driven Under-Sampling Algorithm for Imbalanced Binary Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161197</link>
        <id>10.14569/IJACSA.2025.0161197</id>
        <doi>10.14569/IJACSA.2025.0161197</doi>
        <lastModDate>2025-11-29T11:00:04.5500000+00:00</lastModDate>
        
        <creator>Sehar Gul</creator>
        
        <creator>Syahid Anuar</creator>
        
        <creator>Hazlifah Mohd Rusli</creator>
        
        <creator>Azri Azmi</creator>
        
        <creator>Sadaquat Ali Ruk</creator>
        
        <subject>Mutual information; heat-map visualization; clustering; under-sampling; data-level; hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Class imbalance is a common problem in classification where one class has many more instances than the other. It is especially challenging in high-stakes fields such as medical diagnosis, fraud detection, and predictive maintenance. Under an imbalanced class distribution, models perform well when predicting the majority class but fail to predict the minority class, which is usually more important. This paper introduces MI-Heat (Mutual-Information and Heatmap Driven Under-sampling), a hybrid algorithm that targets the binary classification problem. The algorithm is a data-level method that combines Mutual Information (MI) for identifying important features with K-Means clustering for identifying the most important majority-class samples. In addition, a distance heatmap is used to project proximities among samples and cluster centers, guiding which majority instances to retain and which to discard. Together, Mutual Information, clustering, and the heatmap preserve diversity and suppress noise in order to increase the model's ability to represent both classes equally and clearly. The performance of the MI-Heat algorithm is tested on 23 benchmark datasets, and the results show improvements in classification accuracy, minority-class recall, and model generalization. Compared to traditional under-sampling approaches, MI-Heat performs consistently better, clearly demonstrating its effectiveness in dealing with the class imbalance issue.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_97-A_Hybrid_Mutual_Information_and_Heatmap_Driven_Under_Sampling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring End-to-End Traceability and Sustainability in the FSC: A Modular Web3 Architecture Integrating Blockchain, IoT, and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161196</link>
        <id>10.14569/IJACSA.2025.0161196</id>
        <doi>10.14569/IJACSA.2025.0161196</doi>
        <lastModDate>2025-11-29T11:00:04.5030000+00:00</lastModDate>
        
        <creator>Addou Kamal</creator>
        
        <creator>Mohammed Yassine El Ghoumari</creator>
        
        <subject>Food supply chain; traceability; blockchain; web3; smart contracts; IoT; machine learning; data integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Traceability in food supply chains is crucial for ensuring safety, enabling effective quality control, and maintaining consumer trust. However, traditional paper-based or digital tracking systems often prove too slow and opaque during food safety incidents or investigations into fraud. To address these limitations, this paper presents a modular Web3 architecture that integrates Ethereum blockchain smart contracts, Internet of Things (IoT) sensors, and machine learning (ML) to achieve end-to-end traceability and sustainability in agrifood supply chains, and to support auditable, partially automated decision-making. The system design separates concerns into layers: an on-chain layer of Ethereum smart contracts for tamper-proof event logging and automated business logic, and an off-chain layer for secure storage of detailed sensor data and documents, linked by cryptographic hashes to ensure data provenance. Low-cost IoT sensors are deployed from farm to distributor, continuously monitoring environmental conditions (temperature, humidity, geolocation) and uploading signed, time-stamped summaries to the blockchain. In addition, ML models perform predictive quality control by estimating expected conditions, detecting anomalies, and scoring the conformity of product batches, which enables smart contracts to automatically trigger state transitions (acceptance or dispute escrow of shipments) based on real-time data. Using Ethereum smart contracts, a prototype that manages the life cycle of a specific food product was implemented, and two cases (conformant vs non-conformant shipments) were studied to demonstrate how cryptographically verifiable data and events make decisions transparent and trustworthy.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_96-Ensuring_End_to_End_Traceability_and_Sustainability_in_the_FSC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>THMI-FS-Stack: A Hybrid Imputation and Feature Selection with Stacking Ensemble for Avian Influenza Outbreak Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161195</link>
        <id>10.14569/IJACSA.2025.0161195</id>
        <doi>10.14569/IJACSA.2025.0161195</doi>
        <lastModDate>2025-11-29T11:00:04.4570000+00:00</lastModDate>
        
        <creator>V. S. V. S. Murthy</creator>
        
        <creator>J. N. V. R. Swarup Kumar</creator>
        
        <creator>Srinivas Gorla</creator>
        
        <subject>Zoonotic disease outbreaks; avian influenza; pandemic prediction; hybrid imputation; feature selection; stacking ensemble classifier; wild bird HPAI dataset; early-warning systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Timely prediction of zoonotic disease outbreaks, particularly Highly Pathogenic Avian Influenza (HPAI), is critical for real-time epidemiological surveillance and pandemic preparedness. However, real-world avian surveillance datasets often suffer from missing values, high dimensionality, and inconsistent feature distributions, leading to unreliable predictions. This study proposes THMI-FS-Stack, a modular and interpretable machine learning pipeline that integrates hybrid data imputation, scalable feature selection, and ensemble classification for outbreak forecasting. The first stage, THMI-CB, employs a two-layer imputation framework combining statistical techniques (Mode, Hot Deck, KNN) and machine learning models (Bayesian Networks and CatBoost), achieving an F1-score of 0.91. The second stage, Hybrid-FS-ML, combines filter-based ranking (Mutual Information, Chi-Square, mRMR) with wrapper-based optimization using a Genetic Algorithm, achieving a 72% dimensionality reduction and an F1-score of 0.96. The final component is a stacking ensemble classifier that uses Random Forest and XGBoost as base learners and Logistic Regression as the meta-learner, yielding an F1-score of 0.92, accuracy of 0.93, and AUC-PR of 0.89. Evaluated on the Wild Bird HPAI dataset with 5-fold stratified cross-validation, THMI-FS-Stack consistently outperforms baseline models. Its robust architecture, low computational cost (runtime of 28–36s), and strong generalization ability make it highly suitable for noisy, incomplete epidemiological data in wildlife surveillance dashboards and early-warning systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_95-THMI_FS_Stack_A_Hybrid_Imputation_and_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Assisted Schema Translation and Verified Migration: From Object-Relational Models to NoSQL Document-Oriented Stores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161194</link>
        <id>10.14569/IJACSA.2025.0161194</id>
        <doi>10.14569/IJACSA.2025.0161194</doi>
        <lastModDate>2025-11-29T11:00:04.4230000+00:00</lastModDate>
        
        <creator>Fouad Toufik</creator>
        
        <creator>Abd Allah Aouragh</creator>
        
        <creator>Abdelhak Khalil</creator>
        
        <subject>Graph Neural Networks (GNN); transformer models; neuro-symbolic verification; hybrid AI architectures; schema translation; data migration; object-relational databases; document-oriented databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The transition from object-relational databases (ORDBs) to document-oriented NoSQL stores offers increased flexibility and scalability in modern data management. However, existing migration processes remain largely manual and heuristic, hindering automation, formal verification, and adaptability to evolving workloads. This paper introduces a principled AI-assisted framework that unifies schema translation and data migration for end-to-end ORDB-to-NoSQL transformation. The framework operates through a four-stage pipeline comprising: (i) database metadata extraction, (ii) prediction of mapping strategies using a dual encoder that integrates a Graph Neural Network (GNN) for structural reasoning and a Transformer for semantic interpretation, (iii) automated data migration guided by the predicted mappings, and (iv) symbolic verification ensuring semantic and structural correctness. The verification stage enforces coverage, key fidelity, referential realizability, and type soundness constraints to guarantee loss-free transformation, while an optional workload-aware component refines verified mappings for query efficiency. Experimental evaluation on the RetailDB benchmark demonstrates that the proposed framework establishes a safe, explainable, and adaptable foundation for AI-assisted schema and data migration between object-relational and document-oriented models.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_94-AI_Assisted_Schema_Translation_and_Verified_Migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Quantized Deep Learning Model for Efficient Plant Leaf Disease Detection on Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161193</link>
        <id>10.14569/IJACSA.2025.0161193</id>
        <doi>10.14569/IJACSA.2025.0161193</doi>
        <lastModDate>2025-11-29T11:00:04.3930000+00:00</lastModDate>
        
        <creator>Balkis Tej</creator>
        
        <creator>Soulef Bouaafia</creator>
        
        <creator>Mohamed Ali Hajjaji</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <creator>Mohamed Atri</creator>
        
        <subject>Object detection; plant disease; quantization technique; YOLOv5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The You Only Look Once (YOLO) object detection network has gained significant adoption in the field of plant leaf disease detection due to its strong detection capabilities. However, deploying YOLO models on resource-constrained devices remains challenging, as they require substantial computational power. The complexity and size of these models pose significant obstacles for edge platforms, which are often limited in processing and memory resources. To address these limitations and accelerate inference, we propose a quantized version of YOLOv5x, called Quant-YOLOv5x. This quantization reduces the size and complexity of the model, making it more suitable for edge deployment while maintaining competitive detection accuracy. The experiments were carried out using a self-generated dataset focused on detecting tomato and pepper leaf diseases. Our quantization method reduces the bitwidth of the entire YOLO network to 8 bits, resulting in only a 2.8% decrease in mean Average Precision (mAP), a 50% reduction in model size, and an increase of 4.7 FPS compared to the standard YOLOv5 model.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_93-A_Quantized_Deep_Learning_Model_for_Efficient_Plant_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bit Stability and Hash-Length Trade-Offs in Binary Face Templates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161192</link>
        <id>10.14569/IJACSA.2025.0161192</id>
        <doi>10.14569/IJACSA.2025.0161192</doi>
        <lastModDate>2025-11-29T11:00:04.3470000+00:00</lastModDate>
        
        <creator>Abdelilah Ganmati</creator>
        
        <creator>Karim Afdel</creator>
        
        <creator>Lahcen Koutti</creator>
        
        <subject>Binary face templates; face verification; hamming distance; PCA-ITQ; bit stability; eyeglasses attribute; resource-constrained verification; MORPH; CelebA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Binary face templates are an appealing alternative to floating-point embeddings for face verification in resource-constrained environments because they enable constant-time Hamming matching with minimal storage and input/output (I/O). This paper studies the bit-level behavior of hashes obtained by principal component analysis followed by iterative quantization (PCA–ITQ) at L∈{32, 64, 128} derived from a frozen lightweight face encoder. Using subject-disjoint splits on the MORPH longitudinal dataset and an eyeglasses stress protocol on CelebA, the analysis quantifies 1) bit balance and entropy, 2) within-identity bit stability via per-bit flip rates, and 3) verification performance at low false-accept rates in Hamming space. On MORPH, 64-bit PCA–ITQ codes achieve an area under the receiver operating characteristic curve (AUC) of 0.9978 and a true positive rate (TPR) of 96.5% at a false positive rate (FPR) of 1%, compared to 99.1% at 128 bits, while halving the template length; 32-bit codes remain feasible but drop to 85.7% at the same operating point and are more sensitive to nuisance variation. Across both datasets, codes are near-balanced and mostly stable, yet a small minority of bit positions accounts for most flips under the eyeglasses attribute. In this regime, 64-bit hashes offer a favorable size–accuracy trade-off, whereas 128-bit hashes approximate float-embedding behavior and 32-bit hashes require redundancy or additional robustness mechanisms. All evaluations use fixed seeds and subject-disjoint splits; thresholds are selected on validation and held fixed on test to reflect deployment conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_92-Bit_Stability_and_Hash_Length_Trade_Offs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Tailored Cybersecurity Education Framework for Malaysia: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161191</link>
        <id>10.14569/IJACSA.2025.0161191</id>
        <doi>10.14569/IJACSA.2025.0161191</doi>
        <lastModDate>2025-11-29T11:00:04.3170000+00:00</lastModDate>
        
        <creator>Muhammad Asfand Yar</creator>
        
        <creator>Hock Guan Goh</creator>
        
        <creator>Kiran Adnan</creator>
        
        <creator>Ming Lee Gan</creator>
        
        <creator>Vasaki Ponnusamy</creator>
        
        <subject>Cybersecurity education; systematic literature review; Malaysia; industry competencies; higher education; curriculum model; hands-on learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Developing countries, including Malaysia, face urgent challenges in cybersecurity education: preparing graduates who meet industry demands while addressing national cultural and regulatory contexts. Despite global advancements, no localized education framework currently aligns Malaysian higher education curricula with industry-required competencies. This systematic literature review (SLR) analyzed 65 academic and gray literature sources selected from an initial pool of 706 studies. The review employed thematic synthesis to examine how Malaysian cybersecurity programs incorporate technical competencies, policy literacy, and contextual relevance. Findings reveal four recurring gaps: limited integration of industry-aligned technical skills, insufficient adoption of hands-on pedagogies such as labs and gamification, underrecognition of professional certifications, and minimal incorporation of local policy and cultural considerations. These insights emphasize the necessity of a context-aware cybersecurity education framework tailored for Malaysia. The study provides a conceptual foundation for designing an industry-driven curriculum model, supporting future research on cybersecurity competency development in higher education.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_91-Towards_a_Tailored_Cybersecurity_Education_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI in Web Development: A Comparative Study of Traditional Coding and LLM-Based Low-Code Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161190</link>
        <id>10.14569/IJACSA.2025.0161190</id>
        <doi>10.14569/IJACSA.2025.0161190</doi>
        <lastModDate>2025-11-29T11:00:04.2670000+00:00</lastModDate>
        
        <creator>Abiha Babar</creator>
        
        <creator>Nosheen Sabahat</creator>
        
        <creator>Arisha Babar</creator>
        
        <creator>Nosheen Qamar</creator>
        
        <creator>Marwan Abu-Zanona</creator>
        
        <creator>Asef Mohammad Ali Al Khateeb</creator>
        
        <creator>Bassam ElZaghmouri</creator>
        
        <creator>Saad Mamoun AbdelRahman Ahmed</creator>
        
        <creator>Lamia Hassan Rahamatalla</creator>
        
        <subject>Web development; artificial intelligence; large language models; low-code platforms; no-code platforms; conversational agents; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Web development supports business, education, and public services online, so speed and reliability are important. Low-code and no-code (LCNC) platforms aim to save time by using visual tools instead of writing all code. The impact of these platforms when combined with large language models (LLMs) has not been well studied. This paper compares a chatbot built in three coding stacks (Node.js, Python, Ruby) and one LCNC workflow in n8n that uses LLMs (Grok, Gemini, ChatGPT). The same tasks and prompts were used to test development time, speed, user ratings, and answer quality (precision, recall, F1). The study shows that LCNC with LLMs reduced build time by about 60 percent while keeping response speed close to hand-coded systems and reaching high answer quality (F1 up to 90 percent) with strong user approval. To clarify the main objective, the paper aims to evaluate whether LCNC+LLM integration offers a practical alternative to traditional coding approaches for intelligent web applications, particularly in terms of efficiency and maintainability. The challenge addressed is the limited empirical evidence comparing these two paradigms under identical conditions and using consistent performance metrics. Results are also interpreted relative to competing approaches in conventional development workflows, highlighting where LCNC tools match, exceed, or fall behind manual coding. Some areas, such as security and error handling, still require extra care and represent limitations of the present study. Overall, results show that LCNC with LLMs can be a useful way to build fast and reliable tools while lowering the development barrier for both developers and non-developers.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_90-AI_in_Web_Development_A_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping Artificial Intelligence Research in Automotive Manufacturing: A Bibliometric Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161189</link>
        <id>10.14569/IJACSA.2025.0161189</id>
        <doi>10.14569/IJACSA.2025.0161189</doi>
        <lastModDate>2025-11-29T11:00:04.2370000+00:00</lastModDate>
        
        <creator>Sara OULED LAGHZAL</creator>
        
        <creator>EL OUADI Abdelmajid</creator>
        
        <subject>Artificial Intelligence; automotive industry; Industry 4.0; autonomous vehicles; supply chain optimization; computer vision; predictive maintenance; digital twins; smart manufacturing; bibliometric analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Artificial Intelligence (AI) is transforming the automotive industry by enabling smart manufacturing, optimizing supply chains, enhancing vehicle safety, and accelerating the shift toward autonomous mobility. This bibliometric study provides a systematic overview of the intellectual landscape of AI adoption in the automotive sector. Using data from Scopus, Web of Science, and OpenAlex, we analyze publication trends, influential authors, key institutions, thematic clusters, and international collaboration networks. Findings show a sharp rise in research output during the last decade, with major themes including predictive maintenance, computer vision for quality control, autonomous driving systems, supply chain optimization, and sustainable manufacturing. Emerging areas such as explainable AI, digital twins, and AI-enabled Industry 4.0 architectures are gaining increasing visibility. Collaboration analysis highlights strong contributions from Asia, Europe, and North America, with growing interdisciplinary networks bridging engineering, computer science, and management. This work not only maps the state of research but also identifies gaps and future directions for advancing AI adoption in the automotive industry. The study offers practical insights for researchers, industry practitioners, and policymakers aiming to harness AI for operational efficiency, competitiveness, and sustainable growth.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_89-Mapping_Artificial_Intelligence_Research_in_Automotive_Manufacturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managerial Drivers and Performance Outcomes of AI Adoption in Automotive Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161188</link>
        <id>10.14569/IJACSA.2025.0161188</id>
        <doi>10.14569/IJACSA.2025.0161188</doi>
        <lastModDate>2025-11-29T11:00:04.2200000+00:00</lastModDate>
        
        <creator>Sara OULED LAGHZAL</creator>
        
        <creator>EL OUADI Abdelmajid</creator>
        
        <subject>Industry 4.0; automotive manufacturing; artificial intelligence; machine learning; quality 4.0; predictive maintenance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This article examines how artificial intelligence and machine learning reshape automotive manufacturing within Industry 4.0. Reported impacts include up to a 200 percent reduction in costs and a 400 percent gain in production efficiency, with controlled studies showing about a 15 percent improvement from process optimization. The largest early wins appear in quality management through computer vision and continuous inspection, followed by predictive maintenance that cuts unplanned downtime and stabilizes throughput. Supply chain and planning benefit from demand forecasting and inventory optimization that reduce bullwhip and working capital. Adoption barriers remain meaningful, including high initial investment, integration complexity, skills gaps, and trust and explainability requirements in regulated contexts. Effective programs use a common data and MLOps backbone, prioritize short cycle use cases, link model outputs to machine and recipe actions, and track value through OEE, ppm, MTBF, lead time, and service level. The discussion outlines practical steps to scale while noting evidence limitations and the need for standardized reporting on cost of ownership and time to value.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_88-Managerial_Drivers_and_Performance_Outcomes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Artificial Intelligence into Continuous Improvement for Automotive Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161187</link>
        <id>10.14569/IJACSA.2025.0161187</id>
        <doi>10.14569/IJACSA.2025.0161187</doi>
        <lastModDate>2025-11-29T11:00:04.1900000+00:00</lastModDate>
        
        <creator>Sara OULED LAGHZAL</creator>
        
        <creator>EL OUADI Abdelmajid</creator>
        
        <subject>Industry 4.0; automotive manufacturing; artificial intelligence; machine learning; CNN; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The integration of Artificial Intelligence (AI) and Machine Learning (ML) into Continuous Improvement (CI) frameworks is redefining the foundations of automotive manufacturing under the Industry 4.0 paradigm. Traditional methodologies such as Kaizen, Lean Six Sigma, and Total Quality Management (TQM) have long provided structured approaches for quality enhancement, waste reduction, and process stability. However, the emergence of AI introduces new capabilities—advanced analytics, predictive modeling, and intelligent automation—that transform these static frameworks into dynamic, data-driven ecosystems. This study conducts a systematic literature review following the PRISMA protocol, covering publications from 2010 to 2024 across Scopus, Web of Science, and OpenAlex. After filtering and de-duplication, 13,080 documents were analyzed. Data were categorized by AI methodologies (computer vision, neural networks, deep learning), industrial use cases (quality inspection, predictive maintenance, process optimization, scheduling, and supply chain planning), and key performance metrics such as Overall Equipment Effectiveness (OEE), Mean Time Between Failures (MTBF), parts per million (ppm), lead time, and service level. The analysis reveals substantial and measurable performance improvements. AI-driven systems achieve an average 15% gain in production efficiency, while computer vision enables automated defect detection, improving first-pass yield and reducing scrap. Predictive maintenance reduces unplanned downtime, increasing equipment availability and reliability. These benefits depend strongly on digital maturity and integration within enterprise systems—particularly Manufacturing Execution Systems (MES), Enterprise Resource Planning (ERP), and Product Lifecycle Management (PLM), which together ensure real-time data flow, process synchronization, and traceability across production operations.
The primary barriers to adoption include data quality and governance issues, lack of workforce expertise, model explainability in safety-critical environments, and the complexity of integrating AI solutions into legacy systems. These factors hinder large-scale deployment despite proven technical advantages. This study proposes an applied framework for integrating AI within CI initiatives, aligned with the DMAIC (Define–Measure–Analyze–Improve–Control) cycle and the emerging Quality 4.0 architecture. It highlights managerial enablers such as data readiness, digital governance, and cross-functional collaboration, while identifying research gaps related to implementation costs, time-to-value, and long-term performance measurement. The findings demonstrate how AI transforms CI from reactive optimization to proactive, self-improving systems capable of sustaining excellence in modern automotive manufacturing.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_87-Integrating_Artificial_Intelligence_into_Continuous_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Informative Inputs Over Complex Features: Long Short-Term Memory Forecasting in the Saudi Stock Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161186</link>
        <id>10.14569/IJACSA.2025.0161186</id>
        <doi>10.14569/IJACSA.2025.0161186</doi>
        <lastModDate>2025-11-29T11:00:04.1600000+00:00</lastModDate>
        
        <creator>Munira AlBalla</creator>
        
        <creator>Arwa Alawajy</creator>
        
        <creator>Seetah Alsalamah</creator>
        
        <creator>Zahida Almuallem</creator>
        
        <subject>Deep learning; feature engineering; stock price forecasting; macroeconomic indicators; Brent crude oil; Saudi stock market; Long Short-Term Memory; Tadawul; Tadawul All Share Index (TASI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study investigates the effectiveness of deep learning, specifically Long Short-Term Memory (LSTM) networks, for forecasting stock closing prices in the Saudi Arabian market. Unlike prior research that focuses on narrow stock subsets or individual technical indicators, we present the first comprehensive evaluation across the top thirty companies by market capitalization on the Tadawul Exchange. We compare eight LSTM variants trained on different combinations of feature families, including technical indicators, calendar effects, and macroeconomic variables such as Brent oil prices and the Tadawul All Share Index (TASI). Despite the popularity of complex feature engineering, our results show that models using simpler macroeconomic inputs consistently outperform those based on technical indicators. In particular, combining the TASI with closing prices yielded the best results for 44.8% of stocks, improving median accuracy by 6.19% over the closing price-only baseline. Conversely, models incorporating extensive technical indicators or applying advanced feature selection techniques underperformed the baseline by 8 to 12%. These findings challenge the assumption that greater complexity leads to better performance in financial forecasting and highlight the value of focused, economically interpretable features in the Saudi market context.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_86-Informative_Inputs_Over_Complex_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MOON Framework: An Emotionally Adaptive Voice Interaction Model for Older Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161185</link>
        <id>10.14569/IJACSA.2025.0161185</id>
        <doi>10.14569/IJACSA.2025.0161185</doi>
        <lastModDate>2025-11-29T11:00:04.1270000+00:00</lastModDate>
        
        <creator>Hasan Sagga</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Voice User Interfaces; emotional adaptivity; affective computing; older adults; human-computer interaction; empathy; accessibility; adaptive systems; MOON Framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>As the world experiences a rapidly aging population, it has become a pressing design challenge to ensure that emerging digital technologies remain understandable, supportive, and meaningful for older adults. One of the most promising modalities for promoting inclusive interaction is the Voice User Interface (VUI), supported by artificial intelligence (AI) and speech recognition. However, most existing VUIs emphasize functional accuracy and overlook users’ emotional conditions, such as anxiety, confidence, or frustration, which significantly affect long-term engagement and adoption. To address this gap, this study introduces a structured and adaptive model for developing emotionally intelligent voice interfaces, namely the MOON Framework (Model-Observation-Optimization-Nurture). This framework integrates demographic and linguistic profiling (Model), real-time emotional perception (Observation), adaptive vocal modulation (Optimization), and feedback-driven learning (Nurture) to create a closed-loop system capable of adjusting dynamically to user experience. Grounded in Affective Computing Theory, Socioemotional Selectivity Theory (SST), and Adaptive User Modeling, the MOON Framework conceptualizes empathy as a measurable and computable dimension of VUIs. Unlike existing affective systems that rely primarily on visual or physical expressiveness, MOON demonstrates that vocal empathy, achieved through modulation of tone, cadence, and speech rate, can foster comparable emotional attunement. Its cyclical design transforms emotion from a passive observation into an active driver of system adaptation. Focusing on familiarity and confidence specifically in older adults, MOON provides both a theoretical foundation and a practical framework for creating emotionally inclusive AI technologies that promote trust, engagement, and long-term well-being.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_85-MOON_Framework_An_Emotionally_Adaptive_Voice_Interaction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Large Language Models with Deep Reinforcement Learning for Portfolio Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161184</link>
        <id>10.14569/IJACSA.2025.0161184</id>
        <doi>10.14569/IJACSA.2025.0161184</doi>
        <lastModDate>2025-11-29T11:00:04.1130000+00:00</lastModDate>
        
        <creator>Renad Alsweed</creator>
        
        <creator>Mohammed Alsuhaibani</creator>
        
        <subject>Portfolio optimization; Reinforcement Learning (RL); algorithmic trading; Markov Decision Process (MDP); Large Language Models (LLM); deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This paper explores the application of Deep Reinforcement Learning (DRL) and Large Language Models (LLMs) to portfolio optimization, a critical financial task requiring strategies to balance risk and return in volatile markets. Traditional models often struggle with the complexity of financial markets, whereas Reinforcement Learning (RL) provides end-to-end frameworks for learning optimal, dynamic trading policies through sequential decision-making and trial-and-error interactions. The study examines key DRL algorithms, including Q-learning, Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), and Twin-Delayed Deep Deterministic Policy Gradient (TD3), emphasizing their strengths in dynamic asset allocation. Crucial components of financial RL systems are discussed, including state representations, reward function designs, and the main algorithms and approaches. Furthermore, the survey investigates how LLMs enhance decision-making by analyzing unstructured data (like news and social media) for sentiment and risk assessment, often integrating these insights to augment state representations or guide reward shaping within DRL frameworks.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_84-Integrating_Large_Language_Models_with_Deep_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Tracking System to Prevent Counterfeit Drugs in the Pharmaceutical Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161183</link>
        <id>10.14569/IJACSA.2025.0161183</id>
        <doi>10.14569/IJACSA.2025.0161183</doi>
        <lastModDate>2025-11-29T11:00:04.0670000+00:00</lastModDate>
        
        <creator>Huda Alrehaili</creator>
        
        <creator>Suhair Alshehri</creator>
        
        <creator>Rania M. Alhazmi</creator>
        
        <subject>Pharmaceutical supply chain; blockchain; access control; InterPlanetary File System (IPFS); Non-Fungible Token (NFT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Counterfeit drugs represent a serious global risk, accounting for about 10% of the medicines worldwide. This percentage increases significantly in developing countries, reaching up to 30%, and has become a major cause of child mortality in these regions. Blockchain technology provides a good solution to address this critical issue through improving transparency and traceability of pharmaceuticals from manufacturers to end users. However, traditional blockchain applications face major challenges, particularly in terms of high costs and scalability especially when handling large volumes of data and transactions such as those found in the pharmaceutical supply chain. To address these limitations, researchers have introduced Layer 2 blockchain technologies, which operate on top of standard blockchain networks to improve scalability and reduce costs. This paper aims to design an efficient, secure, and scalable framework that can support traceability in pharmaceutical supply chains and contribute to reducing counterfeit drugs. This framework leverages Layer 2 blockchain solutions and combines Non-Fungible Tokens (NFTs) linked to Quick Response (QR) codes to provide unique identification for each drug. Furthermore, it incorporates the InterPlanetary File System (IPFS) for off-chain storage to reduce the amount of data stored directly on the blockchain. Additionally, an access control mechanism is incorporated to ensure the protection of sensitive data by limiting access based on predefined roles. This enhances privacy and reduces the risk of data leakage or misuse. In conclusion, this work provides the global scientific community with a secure and scalable framework that can be applied internationally to strengthen pharmaceutical supply chains and reduce the risks of counterfeit drugs.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_83-A_Secure_Tracking_System_to_Prevent_Counterfeit_Drugs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Partition Optimization: A Novel Approach for Feature Selection in NIR Spectroscopy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161182</link>
        <id>10.14569/IJACSA.2025.0161182</id>
        <doi>10.14569/IJACSA.2025.0161182</doi>
        <lastModDate>2025-11-29T11:00:04.0330000+00:00</lastModDate>
        
        <creator>Phuong Nguyen Thi Hoang</creator>
        
        <creator>Thinh Ngo Hung</creator>
        
        <creator>Tuong Nguyen Huy</creator>
        
        <creator>Hieu Nguyen Van</creator>
        
        <subject>Machine learning; feature extraction; near infrared spectroscopy; iterative partition optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Machine learning for near-infrared (NIR) spectroscopy requires effective feature selection to address high dimensionality and multicollinearity. This study proposes Iterative Partition Optimization (IPO), a framework integrating Model Population Analysis with Weighted Binary Matrix Sampling through segment-wise optimization: partitioning spectra into segments, isolating one active segment while freezing others, and using adaptive weighted sampling that learns from best-performing sub-models. Validation across four diverse NIR datasets (n=54-523 samples, 100-700 wavelengths) demonstrates IPO’s consistent performance improvement over conventional methods. For agricultural products (soy flour, wheat kernels), IPO achieved lower RMSECV while reducing wavelengths. In chemical analysis (diesel fuels, manure), the method maintained high prediction accuracy (RPD&gt;3.0) using less than half the original variables. Notably in multi-component manure analysis, IPO improved predictions across seven chemical properties (N, NH4, P2O5, CaO, MgO, K2O, DM) while reducing spectral variables, consistently outperforming MCUVE in both accuracy and wavelength selection efficiency. These results establish IPO as an effective wavelength selection method for NIR spectroscopy, addressing multicollinearity while preserving spectral interpretation through optimized interval selection.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_82-Iterative_Partition_Optimization_A_Novel_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Kalman-Enhanced Deep Reinforcement Learning for Noise-Resilient Algorithmic Trading in Volatile Gold Markets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161181</link>
        <id>10.14569/IJACSA.2025.0161181</id>
        <doi>10.14569/IJACSA.2025.0161181</doi>
        <lastModDate>2025-11-29T11:00:04.0030000+00:00</lastModDate>
        
        <creator>Amine Kili</creator>
        
        <creator>Brahim Raouyane</creator>
        
        <creator>Mohamed Rachdi</creator>
        
        <creator>Mostafa Bellafkih</creator>
        
        <subject>Deep reinforcement learning; Kalman filtering; algorithmic trading; gold markets; XAU/USD; microstructure noise; signal processing; financial time series; noise reduction; institutional trading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Precious metals markets, such as gold (XAU/USD), exhibit high volatility and significant microstructure noise in their financial time series, which degrade the reliability of algorithmic trading models. While deep reinforcement learning (DRL) has shown strong results in equities and cryptocurrencies, its application to precious metals remains limited by unstable signals and rapid market fluctuations. This study proposes a Kalman-enhanced DRL framework that integrates classical noise filtering with modern neural architectures to improve signal quality and trading performance in highly volatile environments. The methodology applies Kalman filtering to recursively denoise OHLCV price data, which then serves as an input alongside 22 technical indicators to train three state-of-the-art DRL agents: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Recurrent PPO (RPPO). Eight years of hourly XAU/USD data (January 2017 to January 2025, N = 47,304) were used for training and evaluation. Models were evaluated on cumulative return, CAGR, Sharpe ratio, maximum drawdown, and volatility. Results demonstrate substantial gains from noise attenuation: PPO with Kalman filtering achieved 80.21% cumulative return (27.1% CAGR, Sharpe 12.10, drawdown -0.48%) compared with raw PPO’s 8.70% (3.46% CAGR, Sharpe 0.45, drawdown -12.52%). DQN and RPPO achieved comparable improvements, with 244 to 822% return increases, 88 to 96% drawdown reduction, and up to 29&#215; Sharpe ratio enhancement. Statistical significance was confirmed (p &lt; 0.001 for PPO/RPPO; p &lt; 0.05 for DQN). These findings highlight Kalman-enhanced reinforcement learning as a scalable and robust framework for institutional algorithmic trading, bridging signal processing and artificial intelligence for next-generation adaptive trading systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_81-Kalman_Enhanced_Deep_Reinforcement_Learning_for_Noise_Resilient_Algorithmic_Trading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TOPSIS-YOLO Decision Fusion with Mel-Spectrogram Analysis for Engine Fault Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161180</link>
        <id>10.14569/IJACSA.2025.0161180</id>
        <doi>10.14569/IJACSA.2025.0161180</doi>
        <lastModDate>2025-11-29T11:00:03.9700000+00:00</lastModDate>
        
        <creator>Rommel F. Canencia</creator>
        
        <creator>Ken D. Gorro</creator>
        
        <creator>Deolinda E. Caparroso</creator>
        
        <creator>Marlito V. Patunob</creator>
        
        <creator>Jonathan C. Maglasang</creator>
        
        <creator>Joecyn N. Archival</creator>
        
        <subject>Industrial fault detection; audio signal processing; spectrogram analysis; deep learning; multi-criteria decision making (MCDM); TOPSIS; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Industrial machinery fault detection systems require both high diagnostic accuracy and computational efficiency for real-time deployment. This study presents a novel hybrid approach that integrates the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with You Only Look Once (YOLO) deep learning for efficient audio-based fault detection in industrial machinery. The proposed methodology employs a two-tiered decision fusion strategy: TOPSIS serves as a rapid mathematical pre-filter analyzing seven acoustic features (RMS, ZCR, Spectral Centroid, Spectral Bandwidth, Peak Frequencies, Kurtosis, and Skewness) extracted from preprocessed 1-2 second audio segments, while YOLO performs detailed spectrogram-based visual analysis on flagged segments. The TOPSIS algorithm normalizes feature vectors, calculates closeness coefficients to ideal and negative-ideal solutions, and classifies segments using a threshold of τ = 0.65. Segments identified as normal terminate processing immediately, while potentially abnormal segments proceed to spectrogram generation and YOLO-based detection. Experimental results on 150 industrial audio segments demonstrate that the hybrid system achieves 93.8% detection accuracy while reducing computational overhead by 85.3% compared to full-dataset YOLO analysis. The TOPSIS pre-filter successfully identifies 128 normal segments (85.3%) with a mean closeness coefficient Ci = 0.847 &#177; 0.025, while 22 abnormal segments (14.7%) with Ci = 0.084 &#177; 0.033 are forwarded to YOLO for confirmation. The decision fusion logic enables YOLO to override false positives and flag low-confidence cases for expert review, combining the speed of mathematical analysis with the robustness of deep learning. 
This approach reduces processing time by approximately 6.8&#215;, decreases GPU utilization by 85%, and minimizes storage requirements for spectrogram images, making it suitable for real-time industrial monitoring systems where computational resources are constrained.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_80-TOPSIS_YOLO_Decision_Fusion_with_Mel_Spectrogram_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Privacy Protection Method for IoT Data Based on Edge Computing and Federated Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161179</link>
        <id>10.14569/IJACSA.2025.0161179</id>
        <doi>10.14569/IJACSA.2025.0161179</doi>
        <lastModDate>2025-11-29T11:00:03.9400000+00:00</lastModDate>
        
        <creator>Ying Wu</creator>
        
        <subject>Edge computing; federated learning; Internet of Things; privacy protection; homomorphic encryption; dropout supplementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This paper proposes a privacy protection method for IoT data integrating edge computing and federated learning. To address challenges including edge node heterogeneity, central server bottlenecks in traditional federated learning, and high overhead of homomorphic encryption, we design a hierarchical architecture comprising requesters, participants, edge nodes, a sensing platform, and a key generation center. Participants train models locally using SGD, encrypt parameters with an optimized verifiable dual-key ElGamal homomorphic encryption scheme, and transmit them to edge nodes. Edge nodes employ the MPSDGS algorithm for participant similarity discovery and dropout supplementation, and the MP-Update method for dynamic weighted averaging to ensure continuity and accuracy. Edge-side ciphertext aggregation reduces data volume to the platform. The sensing platform performs global secure aggregation in ciphertext. Experiments demonstrate that the method maintains data privacy above 0.8, with training and aggregation delays within acceptable ranges for typical IoT scales, balancing privacy and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_79-A_Privacy_Protection_Method_for_IoT_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Enabled Assessed Learning from an Interdisciplinary Educational Perspective: An Empirical Case Study Using ChatGPT LLM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161178</link>
        <id>10.14569/IJACSA.2025.0161178</id>
        <doi>10.14569/IJACSA.2025.0161178</doi>
        <lastModDate>2025-11-29T11:00:03.8930000+00:00</lastModDate>
        
        <creator>Ali Hassan</creator>
        
        <creator>Omar Tayan</creator>
        
        <creator>Abdurazzag Almiladi</creator>
        
        <creator>Adnan Ahmed Abi Sen</creator>
        
        <subject>Educational assessment; interdisciplinary learning; ChatGPT; Large Language Models (LLMs); Artificial Intelligence (AI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The recent developments in artificial intelligence (AI) have introduced transformative tools into the educational landscape, with early signs of its impact on advancing student learning methods and possible outcomes. This paper explores the possible integration of large language models (LLMs) in higher education academic courses, using the popular ChatGPT LLM as a case study in interdisciplinary academic courses. In particular, this paper investigates the potential to advance learning and creativity in interdisciplinary courses and their assessment processes. First, we provide a qualitative review of the state-of-the-art in LLMs and chatbot uses in higher education, highlighting some notable gaps in the literature that motivated this work. Next, the paper explores the application of ChatGPT in interdisciplinary courses, examining how its capabilities can be leveraged to support multi-disciplinary learning and collaboration through several assessment types in interdisciplinary courses. The analysis is used to investigate opportunities for fostering deeper integration of knowledge across fields, with LLMs serving as a versatile tool for enhancing student engagement and understanding. Moreover, the paper investigates the role of LLMs in advancing engagement in creative disciplines, exploring how they can be adapted to stimulate creativity and innovation in educational settings. Unique challenges and opportunities presented by incorporating AI tools into creative processes are considered, and their impact on both students and educators is assessed. An empirical design used in this study included testing ChatGPT with 50 test questions across multiple interdisciplinary courses, with 5 evaluators’ perspectives provided using the Delphi scoring method. Key results obtained showed an overall accuracy rate of 93% and 100% for open-ended and multiple-choice questions, respectively. 
Such promising results demonstrate ChatGPT’s potential as a constructive tool for fostering interdisciplinary learning by bridging knowledge gaps and promoting the integration of ideas across different courses and domains. Finally, this paper aims to contribute to ongoing discussions about the responsible and effective use of AI in academic environments, while highlighting its potential to reform learning and teaching across disciplines.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_78-AI_Enabled_Assessed_Learning_from_an_Interdisciplinary_Educational_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Causality with Spatio-Temporal Attention for Accurate Airline Delay Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161177</link>
        <id>10.14569/IJACSA.2025.0161177</id>
        <doi>10.14569/IJACSA.2025.0161177</doi>
        <lastModDate>2025-11-29T11:00:03.8770000+00:00</lastModDate>
        
        <creator>Akash Daulatrao Gedam</creator>
        
        <creator>Pavaimalar S</creator>
        
        <creator>Mercy Toni</creator>
        
        <creator>Y. Rajesh Babu</creator>
        
        <creator>P. Satish</creator>
        
        <creator>Bobonazarov Abdurasul</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Flight delay prediction; spatio-temporal modeling; causal reasoning; attention network; uncertainty estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Flight delays can cause serious problems for airlines, passengers, and the economy in general. Current prediction methods based on Random Forests, deep neural networks, and recurrent architectures such as GRU capture either temporal or quantitative patterns, but not both, and lack causal reasoning and uncertainty assessment, which limits each model&#39;s ability to interpret results, generalize to unseen conditions, and ultimately assess the reliability of predicted delays in an operational setting. The Causal-Aware Spatio-Temporal Attention Network (CASTAN) is designed as a unified approach that addresses these challenges of spatio-temporal and causal modeling together. GraphSAGE-based spatial encoding captures inter-airport dependencies, while a self-attention temporal encoder learns long-range sequential patterns of historical delays together with traffic and weather factors. A cross-attention fusion mechanism accounts for the dynamic spatio-temporal contributions to delay. A causal counterfactual module adds interpretable counterfactual results, helping analysts assess the factors contributing to delay. Finally, Bayesian dropout is incorporated to estimate the uncertainty of each prediction, generating uncertainty-aware outputs so that analysts can assess reliability through confidence levels or other chosen metrics. Evaluation on a large-scale U.S. flight dataset against traditional baselines demonstrates the predictive power of the model, which achieves 96.4% accuracy, an RMSE of 4.2, and an MAE of 2.9. CASTAN has thus positioned itself as an interpretable, reliable, and operationally informative modeling approach for proactive management of airline delays.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_77-Integrating_Causality_with_Spatio_Temporal_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Deep Learning Framework for Diabetic Retinopathy Classification Using Multiple Convolutional Neural Network Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161176</link>
        <id>10.14569/IJACSA.2025.0161176</id>
        <doi>10.14569/IJACSA.2025.0161176</doi>
        <lastModDate>2025-11-29T11:00:03.8470000+00:00</lastModDate>
        
        <creator>Zaid Romegar Mair</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Rendra Gustriansyah</creator>
        
        <creator>Septa Cahyani</creator>
        
        <creator>Rudi Heriansyah</creator>
        
        <creator>Indah Permatasari</creator>
        
        <creator>Muhammad Haviz Irfani</creator>
        
        <subject>Diabetic retinopathy; diabetic retinopathy classification; deep learning; Convolutional Neural Network (CNN); VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Diabetic retinopathy (DR) is a leading cause of blindness, requiring early and accurate diagnosis. Although deep learning, particularly Convolutional Neural Networks (CNNs), has shown promising results in automating DR classification, selecting the optimal architecture and extracting effective features for specific clinical datasets remains a challenge. This study aims to conduct a comprehensive performance evaluation of six CNN architectures—DenseNet121, MobileNet, NasNet_Mobile, ResNet50, VGG16, and VGG19—for DR classification on a dataset from the Community Eye Hospital of South Sumatra Province. The main novelty of our approach lies in a specific preprocessing workflow that integrates grayscale conversion and Canny edge detection to enhance the visibility of critical retinal features, such as blood vessels and lesions, before classification. Using a dataset of 3000 fundus images across five classes (No_DR, Mild, Moderate, Severe, and Proliferative DR), the model was trained with data augmentation and the Adam optimizer. Experimental results indicate that the VGG16 architecture achieves a peak accuracy of 73%, outperforming baseline implementations from previous studies. This study highlights the potential of combining classical CNN models with tailored preprocessing for improved DR detection, thus providing a benchmark for model selection on similar clinical datasets. These findings highlight the robustness and stability of VGG16, demonstrating its suitability as an early DR screening tool.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_76-An_Enhanced_Deep_Learning_Framework_for_Diabetic_Retinopathy_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Intelligent Speech Training to Elevate Phonetic Accuracy and Prosodic Fluency in English Learners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161175</link>
        <id>10.14569/IJACSA.2025.0161175</id>
        <doi>10.14569/IJACSA.2025.0161175</doi>
        <lastModDate>2025-11-29T11:00:03.8170000+00:00</lastModDate>
        
        <creator>Amit Khapekar</creator>
        
        <creator>Nidhi Mishra</creator>
        
        <creator>Vijaya Lakshmi Mandava</creator>
        
        <creator>T K Rama Krishna Rao</creator>
        
        <creator>Bhuvaneswari Pagidipati</creator>
        
        <creator>Prasad Devarasetty</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Automatic speech recognition; pronunciation and prosody; transformer-based phoneme identification; prosody assessment; adaptive learning algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The effective teaching of pronunciation and prosody remains a significant challenge for English as a Foreign Language (EFL) students. Traditional pedagogical approaches tend to focus on segmental phoneme accuracy while ignoring the suprasegmental components (stress, rhythm, and intonation) that make speech natural and intelligible. Currently available computer-assisted pronunciation training (CAPT) systems are useful but constrained by limited acoustic models and incomplete coverage of prosodic characteristics, leading to suboptimal accuracy and limited pedagogical suitability. To overcome these shortcomings, this paper proposes Attention-Guided Cross-Lingual Self-Supervised Learning (AG-CLSSL), a new model that combines phoneme-level representations from XLS-R (wav2vec2-large-xlsr-53) with prosodic representations of pitch, energy, and duration through a Phoneme-Prosody Cross-Attention Fusion (PP-CAF) process. This combination enables a joint, context-specific representation of speech that is further refined by a multi-task Transformer-based scoring model to jointly assess pronunciation accuracy, prosodic consistency, and overall intelligibility. The framework is implemented in Python with PyTorch and Hugging Face Transformers, and is trained on an evaluated corpus of EFL learner speech (n=100) covering a variety of L1 backgrounds, including Mandarin, Hindi, and Spanish. Experimental assessments indicate significant performance improvements: a 55.4% decrease in Phoneme Error Rate, a 52.0% decrease in Word Error Rate, a 43.3% increase in Stress Placement Accuracy, and a 34.9% increase in Pitch Alignment Score. Overall acoustic similarity to native speech increased by 36.1, demonstrating the ability of AG-CLSSL to improve articulatory accuracy as well as prosodic naturalness, and to provide interpretable, attention-guided feedback for scalable AI-based pronunciation and prosody training.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_75-Leveraging_Intelligent_Speech_Training_to_Elevate_Phonetic_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer Driven Multi-Agent Reinforcement Learning Framework for Integrated Waste Classification Forecasting and Adaptive Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161174</link>
        <id>10.14569/IJACSA.2025.0161174</id>
        <doi>10.14569/IJACSA.2025.0161174</doi>
        <lastModDate>2025-11-29T11:00:03.7670000+00:00</lastModDate>
        
        <creator>Ritesh Patel</creator>
        
        <creator>Igamberdiyev Asqar Kimsanovich</creator>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Swarna Mahesh Naidu</creator>
        
        <creator>Nurilla Mahamatov</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Smart waste management; temporal fusion transformer; vision transformer; predictive analytics; route optimization; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The rapid expansion of urban populations has intensified the challenges associated with municipal solid waste management, particularly where conventional static or ad-hoc routing strategies create operational inefficiencies, excessive fuel usage, and repeated bin overflow. Many existing systems still treat waste classification, fill-level forecasting, and routing as separate processes, which restricts coordinated optimization and limits broader sustainability outcomes. To address these shortcomings, TMORL is introduced as a Transformer-enhanced Multi-Agent Reinforcement Learning framework that unifies perception, prediction, and decision-making for intelligent waste management. The framework integrates IoT-enabled sensor measurements with deep learning and MARL-driven optimization to manage waste collection adaptively under real-time uncertainty. A Vision Transformer supports precise waste image classification through global spatial feature extraction, while a Temporal Fusion Transformer generates accurate, uncertainty-aware multi-horizon fill-level forecasts. These model outputs collectively shape the state representation for a multi-objective MARL module that optimizes fuel consumption, travel duration, emission reduction, and overflow mitigation, enabling simultaneous operational and sustainability improvements. TMORL is implemented in PyTorch and evaluated using the Smart Waste Management Dataset containing heterogeneous IoT bin measurements and annotated waste images. The model achieves strong perception accuracy, reporting 97.3% precision, 96.6% recall, and 98.4% mAP@0.5, while the TFT forecasts align closely with real bin-fill patterns to support proactive routing adjustments. When compared with static scheduling and Ant Colony Optimization routing, TMORL reduces fuel usage by 22%, collection time by 25%, and overflow incidents by 95%. 
Overall, the findings confirm that a transformer-driven, IoT-integrated MARL framework significantly strengthens efficiency, decision responsiveness, and environmental sustainability in next-generation smart waste management systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_74-Transformer_Driven_Multi_Agent_Reinforcement_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clinically Informed Adaptive Multimodal Graph Learning Paradigm for Transparent Temporal and Generalizable Alzheimer’s Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161173</link>
        <id>10.14569/IJACSA.2025.0161173</id>
        <doi>10.14569/IJACSA.2025.0161173</doi>
        <lastModDate>2025-11-29T11:00:03.7370000+00:00</lastModDate>
        
        <creator>Padmavati Shrivastava</creator>
        
        <creator>V S Krushnasamy</creator>
        
        <creator>Guru Basava Aradhya S</creator>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Peddireddy Veera Venkateswara Rao</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Khaled Bedair</creator>
        
        <subject>Alzheimer’s detection; graph neural network; multimodal fusion; explainable AI; temporal transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study presents a clinically reliable and explainable diagnostic framework for the early detection of Alzheimer&#39;s disease from multimodal data. Current computational methods face challenges in dealing with fragmented clinical information, poor cross-modal integration, limited temporal modelling, and low interpretability, rendering them unsuitable for real-world medical deployment. To overcome these limitations, we propose the Clinically Guided Adaptive Multimodal Graph Transformer (CAM-GT), a novel architecture that fuses clinical priors with graph-based learning and transformer-driven temporal reasoning within a unified model. The proposed framework uniquely integrates clinically guided graph attention, cross-modal fusion, and contrastive alignment, enabling the system to capture hidden relationships among imaging, cognitive scores, and clinical biomarkers with high robustness against missing or imbalanced modalities. Implemented in Python with advanced deep-learning libraries, CAM-GT carries out multimodal encoding, temporal progression modeling, and explainability mapping in order to identify the most significant biomarkers that influence disease status. Experimental evaluation demonstrates that the model performs well, achieving an accuracy of 97% and an AUC of 97.2%, outperforming existing models while maintaining strong generalization in heterogeneous clinical environments. Furthermore, high interpretability ensures that clinicians can trace how predictions are made, instilling greater trust and ethical reliability and increasing the adoption potential in hospitals and research centers. Finally, CAM-GT benefits neurologists, radiologists, healthcare institutions, and researchers by providing a stable, transparent, high-performing AI system capable of supporting early diagnosis and guiding real-world clinical decision-making in neurodegenerative disease care.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_73-Clinically_Informed_Adaptive_Multimodal_Graph_Learning_Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interpretable Deep Learning Framework for Measuring Organizational Digital Transformation Readiness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161172</link>
        <id>10.14569/IJACSA.2025.0161172</id>
        <doi>10.14569/IJACSA.2025.0161172</doi>
        <lastModDate>2025-11-29T11:00:03.6900000+00:00</lastModDate>
        
        <creator>Pravin D. Sawant</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>B. Arunsundar</creator>
        
        <creator>A. Vini Infanta</creator>
        
        <creator>Dekhkonov Burkhon</creator>
        
        <creator>N. Roopalatha</creator>
        
        <subject>Digital transformation; FT-Transformer; TabNet; maturity intelligence; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The accelerating pace of digital transformation (DT) across industries demands accurate, transparent, and adaptable maturity evaluation frameworks capable of capturing complex organizational behaviors. Conventional fuzzy logic and decision tree-based maturity models cannot effectively represent the nonlinear dependencies among DT indicators and often produce inconsistent, opaque assessments. To overcome these limitations, this study proposes the TUMI (Transformer TabNet Unified Maturity Intelligence) framework, a novel hybrid deep learning architecture specifically designed for DT maturity assessment. The framework uniquely integrates FT-Transformer and TabNet, enabling simultaneous modeling of global feature dependencies through attention mechanisms and localized sparse feature selection aligned with DT maturity metrics. This domain-tailored hybridization goes beyond existing hybrid or ensemble approaches by supporting real-time readiness estimation, accommodating heterogeneous organizational indicators, and offering structured interpretability based on complementary attention weights and feature selection masks. The proposed model was trained using a multi-dimensional DT maturity dataset implemented in Python (PyTorch). Experimental results demonstrate strong predictive performance, with 97.0% accuracy, 96.0% precision, 95.0% recall, and an AUC of 98.2%, representing an 8.5% improvement over traditional fuzzy and decision tree models. The interpretability provided by the combined mechanisms offers clearer insight into the organizational determinants influencing maturity progression. Overall, TUMI enhances transparency, diagnostic capability, and scalability, providing an evidence-based, explainable, and cross-industry applicable solution for supporting organizations in evaluating and improving their digital transformation maturity.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_72-An_Interpretable_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meta Learning Enhanced Graph Transformer for Robust Smart Grid Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161171</link>
        <id>10.14569/IJACSA.2025.0161171</id>
        <doi>10.14569/IJACSA.2025.0161171</doi>
        <lastModDate>2025-11-29T11:00:03.6430000+00:00</lastModDate>
        
        <creator>Layth Almahadeen</creator>
        
        <creator>Aseel Smerat</creator>
        
        <creator>Sandeep Kumar Mathariya</creator>
        
        <creator>G. Indra Navaroj</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Kamila Ibragimova</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Adaptive detection; anomaly detection; contrastive learning; graph transformer networks; smart grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The increasing complexity of modern smart grids and the heterogeneity of multi-sensor data make anomaly detection extremely challenging, as existing techniques struggle to capture long-range spatial dependencies, cross-sensor interactions, and unseen anomaly patterns. Conventional models such as Isolation Forest, Random Forest, GCAD, AT-GTL, CVTGAD, and hybrid CNN-Transformer approaches often suffer from limited generalization, weak multimodal fusion, and strong dependence on labeled anomalies. To address these limitations, this study introduces a novel Multimodal Graph Transformer with Contrastive Self-Supervised Learning and Model-Agnostic Meta-Learning (MGT-CGSSML), a uniquely integrated framework designed to learn structural, attribute, and cross-modal relationships simultaneously. The proposed method stands out by combining multimodal graph encoding, dual-view contrastive learning, and fast meta-adaptation, enabling the model to rapidly identify new anomaly types with minimal labeled data. Implemented in Python using PyTorch, the model is evaluated on a multimodal smart grid dataset containing time-stamped voltage, current, power factor, frequency, temperature, and humidity measurements recorded at 15-minute intervals. Experimental results demonstrate 96.5% accuracy, 95% precision, 95.5% recall, and 95.2% F1-score, reflecting a 3–5% performance improvement over advanced baseline models due to enhanced multimodal fusion and meta-learning optimization. The study concludes that MGT-CGSSML delivers a scalable, interpretable, and real-time anomaly detection solution capable of supporting resilient and adaptive smart-grid operations, offering substantial advancements over existing methods.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_71-Meta_Learning_Enhanced_Graph_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interpretable Dual-Level Feedback Approach for Improving Graded Language Simplification and Readability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161170</link>
        <id>10.14569/IJACSA.2025.0161170</id>
        <doi>10.14569/IJACSA.2025.0161170</doi>
        <lastModDate>2025-11-29T11:00:03.6130000+00:00</lastModDate>
        
        <creator>Pavani G</creator>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>W. Grace Shanthi</creator>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Aseel Smerat</creator>
        
        <creator>Bhuvaneswari Pagidipati</creator>
        
        <creator>Bansode G. S</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Text simplification; CEFR-level alignment; reinforcement learning; adaptive feedback loop; T5 transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Text simplification plays a vital role in adaptive language learning, especially when aligned with the Common European Framework of Reference (CEFR) proficiency levels. The purpose of this study is to develop an interpretable and CEFR-aligned text simplification framework that produces pedagogically appropriate simplified texts for learners at different proficiency levels. Existing neural simplification approaches, such as ACCESS, MUSS, and EditNTS, primarily rely on single-level feedback or surface-level readability measures, limiting their ability to ensure both sentence-level linguistic simplicity and document-level coherence. To address these gaps, this study proposes CEFR-RefineNet, a hybrid framework integrating T5 for generative simplification and BERT for contextual CEFR-level classification, enhanced through a novel Dual-Level Explainable Feedback Loop (DL-EFL). The DL-EFL simultaneously evaluates sentence-level linguistic difficulty and document-level readability while providing token-level error attribution for interpretability. The model was implemented using Python and the Hugging Face Transformers library, trained and tested on the CEFR Levelled English Texts corpus comprising 1,500 texts spanning levels A1 to C2. Experimental results show that CEFR-RefineNet achieved a SARI score of 0.78, accuracy of 91%, and F1-score of 0.85, outperforming the strongest baseline (MUSS, 81% accuracy) by approximately 12%. The adaptive feedback mechanism accelerated reward convergence and improved CEFR compliance, ensuring more pedagogically suitable simplifications. In summary, the proposed CEFR-RefineNet establishes a transparent, interpretable, and performance-driven text simplification model capable of generating fluent, meaning-preserving, and CEFR-aligned texts, paving the way for intelligent and adaptive language-learning systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_70-An_Interpretable_Dual_Level_Feedback_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linguistically Informed Essay Assessment Framework to Analyze Writing Style Vocabulary Usage and Coherence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161169</link>
        <id>10.14569/IJACSA.2025.0161169</id>
        <doi>10.14569/IJACSA.2025.0161169</doi>
        <lastModDate>2025-11-29T11:00:03.5670000+00:00</lastModDate>
        
        <creator>Sreela B</creator>
        
        <creator>B. Neelambaram</creator>
        
        <creator>Manasa Adusumilli</creator>
        
        <creator>Revati Ramrao Rautrao</creator>
        
        <creator>Aseel Smerat</creator>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>A. Swathi</creator>
        
        <subject>COSMET-Net; contrastive learning; explainable AI; meta-learning; essay scoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Automated essay scoring (AES) has become an essential tool in educational technology, yet many existing approaches rely on black-box models that lack interpretability and adaptability across diverse prompts and writing styles. Conventional transformer-based AES systems demonstrate strong accuracy, but often fail to provide pedagogically meaningful feedback or generalize effectively in low-resource settings, limiting their practical applicability. The proposed COSMET-Net (Contrastive and Explainable Semantic Meta-Evaluation Network) addresses these limitations by integrating contrastive learning, meta-learning, and explainable AI to produce an adaptive and interpretable evaluation of academic essays. Essays are processed through text cleaning, tokenization, and lemmatization, and embeddings are generated using pretrained transformers such as BERT and RoBERTa. Contrastive learning distinguishes high- and low-quality essays, while a Contrastive Linguistic Regularization (CLR) layer aligns embeddings with linguistic properties, enhancing interpretability. Meta-learning enables rapid adaptation to novel prompts with minimal additional data. The explainable output module, employing attention visualization and SHAP values, provides detailed feedback on grammar, coherence, vocabulary richness, and readability. The framework was implemented in Python with PyTorch and Hugging Face Transformers and evaluated on the IELTS Writing Scored Essays Dataset. COSMET-Net achieved an accuracy of 92%, a recall of 93%, and an F1-score of 92%, surpassing existing models such as hybrid RoBERTa + linguistic features (F1-score 84%) and discourse + lexical regression (F1-score 88%). These results demonstrate that COSMET-Net delivers highly accurate, flexible, and linguistically interpretable assessments, providing a scalable solution for automated and pedagogically meaningful essay evaluation.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_69-Linguistically_Informed_Essay_Assessment_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-Supervised Learning vs. Few-Shot Learning: Which is Better for Sentiment Analysis on Hotel Reviews Towards a Small Labeled Training Data?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161168</link>
        <id>10.14569/IJACSA.2025.0161168</id>
        <doi>10.14569/IJACSA.2025.0161168</doi>
        <lastModDate>2025-11-29T11:00:03.4870000+00:00</lastModDate>
        
        <creator>Retno Kusumaningrum</creator>
        
        <creator>Ahmad Ainun Herlambang</creator>
        
        <creator>Wafiq Afifah</creator>
        
        <creator>Adi Wibowo</creator>
        
        <creator>Sutikno</creator>
        
        <creator>Priyo Sidik Sasongko</creator>
        
        <subject>Sentiment analysis; hotel reviews; semi-supervised learning; few-shot learning; low-resource language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The massive volume of user reviews on online travel agency (OTA) websites can be automatically processed using sentiment analysis to understand consumer satisfaction and feedback. Sentiment analysis is commonly implemented as a sentiment classification task by applying classical machine learning and deep learning algorithms. However, implementing both strategies poses a significant challenge in providing a reliable labeled dataset, since labeling is time-consuming and highly resource-intensive. Therefore, this study aims to compare the performance of two learning methods, semi-supervised learning (SSL) and few-shot learning (FSL), since there is still no direct, controlled comparison between the two. SSL is a learning method that builds a generalized classification model as a refined model based on automatically generated additional labeled data. In contrast, FSL is a learning method that enables a generalized pre-trained model to predict unlabeled data using only a few labeled samples per class. This study evaluates the self-training method for SSL, and the implemented FSL algorithm is Sentence Transformer Fine-Tuning (SETFIT). The results show that FSL (employing only 16 labeled training samples) outperforms SSL with an accuracy improvement of 9.5%. The implementation of SETFIT is very promising as a solution to the limited amount of labeled data in classification tasks. Moreover, SETFIT is more adaptable to various low-resource language domains than other, more data-intensive learning approaches.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_68-Semi_Supervised_Learning_vs_Few_Shot_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond Ensembles: Architecture-Level Fusion for Enhanced Monument Heritage Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161167</link>
        <id>10.14569/IJACSA.2025.0161167</id>
        <doi>10.14569/IJACSA.2025.0161167</doi>
        <lastModDate>2025-11-29T11:00:03.4230000+00:00</lastModDate>
        
        <creator>Mennat Allah Hassan</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <creator>Alaa Mahmoud Hamdy</creator>
        
        <subject>Cultural monument; heritage landmarks; monument classification; monument recognition; transformers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Heritage is seen as a key part of nations, encompassing a broad variety of traditions, cultures, monuments, plants and animals, foods, music, and more. A nation&#39;s heritage is defined by the preservation, excavation, and restoration of historical assets that are important and reflect its history. It comprises a wide range of physical objects and materials found in cultural institutions, which are movable heritage, as well as the heritage found in built environments, which is immovable, and natural landscapes. Previous studies on monument classification frequently used single small datasets, limiting accuracy and generalizability. This work introduces a proposed model and a thorough experimental comparison of widely used deep learning architectures, specifically Convolutional Neural Networks and Transformers, alongside our proposed model, for monument recognition in the cultural monument domain. It conducts a comparative experiment, selecting representatives from these two methodologies and assessing their capacity to transfer knowledge from a general dataset, such as ImageNet, to heritage landmark datasets of varying sizes. When we tested samples of the topologies ResNet, DenseNet, and Swin Transformer (Swin-T), we found that the proposed model achieved the best results, while ResNet-50 achieved accuracy comparable to Swin-T.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_67-Beyond_Ensembles_Architecture_Level_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Causality Aware Multimodal Reasoning Network in Human Emotion Identification and Sentiment Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161166</link>
        <id>10.14569/IJACSA.2025.0161166</id>
        <doi>10.14569/IJACSA.2025.0161166</doi>
        <lastModDate>2025-11-29T11:00:03.3930000+00:00</lastModDate>
        
        <creator>N. K. Thakre</creator>
        
        <creator>Yazan Shaker Almahammed</creator>
        
        <creator>G. Indra Navaroj</creator>
        
        <creator>Mohammed Fahad Almohazie</creator>
        
        <creator>Abdullah Albalawi</creator>
        
        <creator>Marran Al Qwaid</creator>
        
        <creator>G. Sanjiv Rao</creator>
        
        <subject>Multimodal sentiment analysis; knowledge-driven transformer; explainable AI; dynamic multimodal fusion; CMU-MOSEI dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Sentiment and emotion recognition in dynamic English communication require intelligent systems capable of reasoning beyond surface correlations among linguistic, acoustic, and visual cues. Traditional multimodal approaches exhibit limited interpretability, weak contextual adaptability, and lack causal understanding of emotional expressions, resulting in inconsistent predictions under ambiguous conditions. To address these challenges, a Context-Adaptive Knowledge-Guided Causal Reasoning Network (CKCR-Net) is introduced, integrating external semantic and affective knowledge with multimodal fusion to ensure transparency and contextual sensitivity. The proposed framework employs a Dynamic Multimodal Knowledge Graph (DMKG), hierarchical cross-modal attention, and a dual-stage causal reasoning module to infer cause–effect dependencies among modalities. The model was implemented in Python (PyTorch) using the CMU-MOSEI benchmark dataset and optimized through Adam optimizer and consistency-based loss regularization. CKCR-Net achieved an accuracy of 97.5%, precision of 96.4%, recall of 97.2%, and F1-score of 97.3%, significantly outperforming models such as CM-BERT (89.4%), RoBERTa (71%), and TFIDF-based fusion (96.9%). The causal reasoning mechanism improved recognition of subtle emotions like sarcasm and empathy, enhancing interpretability through attention heatmaps and counterfactual analysis. Overall, CKCR-Net provides an explainable, context-sensitive, and high-performing framework for multimodal sentiment analysis, offering a reliable pathway toward transparent affective computing and human–machine communication.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_66-Causality_Aware_Multimodal_Reasoning_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unveiling Gender in Malay-English Short Text: A Comparative Study of ML, DL and Sequential Models with XAI Misclassification Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161165</link>
        <id>10.14569/IJACSA.2025.0161165</id>
        <doi>10.14569/IJACSA.2025.0161165</doi>
        <lastModDate>2025-11-29T11:00:03.3630000+00:00</lastModDate>
        
        <creator>Norazlina Khamis</creator>
        
        <creator>Nur Shaheera Shastera Nulizairos</creator>
        
        <creator>Haslizatul Mohamed Hanum</creator>
        
        <creator>Amirah Ahmad</creator>
        
        <creator>Nor Hapiza Mohd Ariffin</creator>
        
        <creator>Ruhaila Maskat</creator>
        
        <subject>Gender identification; Manglish; machine learning; shallow deep learning; deep sequential model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Gender identification through written text analysis leverages writer-specific characteristics, including linguistic patterns and stylistic behaviors. Yet research on gender identification in Malay-English (Manglish) using Traditional Machine Learning (ML), Shallow Deep Learning (DL), and Deep Sequential techniques remains limited compared to English-focused studies. This study addresses this gap by investigating gender identification in Manglish across traditional ML, Shallow DL, and Deep Sequential model approaches using a self-collected dataset of Manglish tweets from 50 anonymized Malaysian public figures. Following preprocessing, feature extraction employed Word2Vec embeddings and TF-IDF methods. Word2Vec embeddings delivered superior performance across Shallow DL and Deep Sequential models, with Bi-CNN achieving the best results: accuracy (0.722), precision (0.727), recall (0.722), and F1-score (0.720). TF-IDF vectorization yielded substandard performance except for Logistic Regression, which achieved consistent metrics of 0.728 across all evaluation criteria. To enhance model interpretability, eXplainable Artificial Intelligence (XAI) tools, including SHAP and LIME, were applied to analyze misclassifications, identifying key issues such as frequent short-form usage and word misassignment affecting prediction accuracy. Incorporating these XAI insights through iterative refinements yielded modest improvements from 72.4% to 72.8%, demonstrating XAI&#39;s value in model optimization despite limitations in capturing dataset biases and complex linguistic patterns. This study contributes the first gender classification dataset for Malay short text and demonstrates that Shallow DL and Deep Sequential models, enhanced by XAI-driven analysis, show significant promise for mixed-language contexts. It highlights the unique challenges of code-switched languages in NLP tasks and suggests that future research should explore large language models to advance classification performance in multilingual social media environments.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_65-Unveiling_Gender_in_Malay_English_Short_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Cognitive Mapping Framework for Context-Aware Figurative Language Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161164</link>
        <id>10.14569/IJACSA.2025.0161164</id>
        <doi>10.14569/IJACSA.2025.0161164</doi>
        <lastModDate>2025-11-29T11:00:03.3170000+00:00</lastModDate>
        
        <creator>R. Swathi Gudipati</creator>
        
        <creator>Neena PC</creator>
        
        <creator>K. Ezhilmathi</creator>
        
        <creator>M. Durairaj</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Padmashree V</creator>
        
        <subject>Bi-LSTM; cognitive mapping; cross-lingual understanding; idiom acquisition; multimodal learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Learning figurative language, including idioms, metaphors, and similes, remains challenging due to subtle cultural, contextual, and multimodal cues that cannot be inferred from literal meanings alone. Traditional unimodal and text-only approaches, such as CLS-BERT, LaBSE, and mUSE, often fail to capture these deeper semantic patterns, resulting in reduced accuracy and limited cultural generalization. This study introduces a context-aware multimodal learning framework that integrates textual embeddings from a Graph-Enhanced Transformer (HCGT) with visual embeddings from CLIP, fused through a graph-based cross-modal attention mechanism, and refined using a cognitive mapping layer. This architecture models human-like semantic reasoning by aligning literal and figurative senses across modalities while maintaining conceptual structure through graph-driven representation learning. Experiments conducted on idiom, metaphor, simile, and multimodal meme datasets include preprocessing steps such as text cleaning, tokenization, image normalization, and label standardization. The framework achieves an accuracy of 90%, surpassing state-of-the-art text-only transformer baselines by 3–4%. Explainable AI tools, including attention heatmaps and SHAP values, validate the interpretability of the model by highlighting influential textual tokens and visual regions. The results confirm that integrating multimodal embeddings with cognitive mapping substantially enhances performance, interpretability, and cultural sensitivity in figurative language understanding.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_64-Multimodal_Cognitive_Mapping_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Enhanced Multi-View Graph Convolutional Network for Early Prediction of Chronic Kidney Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161163</link>
        <id>10.14569/IJACSA.2025.0161163</id>
        <doi>10.14569/IJACSA.2025.0161163</doi>
        <lastModDate>2025-11-29T11:00:03.2670000+00:00</lastModDate>
        
        <creator>Roshan D Suvaris</creator>
        
        <creator>K Nagaiah</creator>
        
        <creator>P. Satish</creator>
        
        <creator>Hussana Johar R B</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Manasa Adusumilli</creator>
        
        <creator>Khaled Bedair</creator>
        
        <subject>Chronic kidney disease progression; multi-view graph convolutional network; temporal fusion transformer; uncertainty-aware AI models; personalized medicine in healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The prediction of chronic kidney disease (CKD) requires models capable of processing heterogeneous clinical data while remaining transparent enough to assist clinical decision making. Current CKD research usually relies on single-view data, integrated graph representations, or deep learning systems that neither reflect view-specific clinical connections nor lend themselves to effective interpretation. This is the first study to combine individual multi-view similarity graphs with an attention-based fusion approach for predicting CKD risk, overcoming the shortcomings of earlier machine learning, deep learning, and graph-based models. The proposed Attentive Multi-View Graph Convolutional Network (MV-GCN-Attn) uses Graph Convolutional Networks to learn view-specific embeddings and combines them adaptively through attention mechanisms that highlight clinically influential features. The model achieves an accuracy of 91.0%, a precision of 89.0%, a recall of 92.0%, and an F1-score of 90.0% in experiments on 400 patient records with 24 attributes from the publicly available UCI CKD dataset, outperforming conventional baselines. The framework also offers feature- and view-level interpretability, identifying serum creatinine and haemoglobin as key indicators. These results indicate that multi-view graph learning with attention-based interpretability can deliver effective, clinically significant predictions, supporting CKD screening and decision-support across a variety of healthcare facilities and serving as a valuable aid to early clinical intervention.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_63-Attention_Enhanced_Multi_View_Graph_Convolutional_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph-Enhanced Transformer Framework for Context-Sensitive English Skill Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161162</link>
        <id>10.14569/IJACSA.2025.0161162</id>
        <doi>10.14569/IJACSA.2025.0161162</doi>
        <lastModDate>2025-11-29T11:00:03.2200000+00:00</lastModDate>
        
        <creator>Anna Shalini</creator>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>W. Grace Shanthi</creator>
        
        <creator>Prema S</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>A. Chrispin Antonieta Dhivya</creator>
        
        <subject>Memory-augmented networks; conversational AI; English Language Teaching (ELT); adaptive feedback; personalized language learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The integration of Artificial Intelligence (AI) into English Language Teaching (ELT) has enabled personalized and interactive learning, yet most existing systems rely on static, rule-based feedback models, which fail to capture learner history or adapt interventions based on skill interdependencies. These limitations result in generic feedback, reduced learner engagement, and fragmented skill development. To overcome these challenges, this study proposes a hybrid DeBERTa–GAT–PPO framework that combines transformer-based contextual embeddings, graph attention-based inter-skill modeling, and reinforcement learning for adaptive, history-aware feedback. The model is implemented in Python 3.10 using PyTorch 2.0 and processes the Kaggle Feedback Prize – English Language Learning dataset, containing over 6,600 annotated essays across cohesion, syntax, vocabulary, phraseology, grammar, and conventions. Learner essays are preprocessed, embedded via DeBERTa, and represented as a knowledge graph to capture skill interdependencies through GAT. The PPO agent then generates context-sensitive feedback optimized via policy gradients. Experimental results demonstrate that the proposed framework achieves an accuracy of 89.8% and an AUC of 0.96, representing an approximate 6 to 8% improvement over baseline models such as BERT and RoBERTa. Visualizations and ablation studies confirm effective learning of inter-skill dependencies and reinforcement-based feedback adaptation. Overall, the proposed model provides scalable, interpretable, and pedagogically effective feedback, bridging the gap between conventional AI tutors and fully adaptive, learner-centered systems, thus advancing the state-of-the-art in intelligent English language tutoring.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_62-Graph_Enhanced_Transformer_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Grammar Refinement Using Meta-Reinforcement Learning and Transformer-Based Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161161</link>
        <id>10.14569/IJACSA.2025.0161161</id>
        <doi>10.14569/IJACSA.2025.0161161</doi>
        <lastModDate>2025-11-29T11:00:03.1730000+00:00</lastModDate>
        
        <creator>Bukka Shobharani</creator>
        
        <creator>Melito D. Mayormente</creator>
        
        <creator>Edgardo B. Sario</creator>
        
        <creator>Bernadette R. Gumpal</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Grammar correction; Transformer models; Meta-Reinforcement Learning; curriculum learning; personalization; ESL writing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Writing competence is an essential academic and professional proficiency, yet grammatical precision and reliability remain a long-standing issue, especially among ESL students. Conventional rule-based and statistical grammar correction models are constrained in their handling of context, whereas contemporary Transformer-based sequence-to-sequence models like BERT, T5, and GPT offer strong performance but cannot be customized or adapted to specific writer styles. To fill these gaps, this study introduces Meta-ACGR, a meta-reinforcement learning grammar refinement system that augments Transformer-based seq2seq models with Proximal Policy Optimization (PPO), Model-Agnostic Meta-Learning (MAML), and curriculum learning. The model supports individualized grammar correction, allowing quick adaptation to new ESL learners through meta-learning and guided error progression. Meta-ACGR is implemented in Python with PyTorch and trained on large ESL corpora, such as NUCLE and Lang-8, enabling refinement based on context and individual learners. Empirical evidence indicates that Meta-ACGR improves grammatical accuracy (from 86.2 to 94.0 per cent), decreases inference latency by 12 per cent relative to baseline Transformer models, and improves personalization by 15 per cent over the same baselines. Altogether, Meta-ACGR provides a scalable, adaptable, and customized grammar correction system with strong potential for real-world deployment to improve ESL writing.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_61-Personalized_Grammar_Refinement_Using_Meta_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HeritageLM: Culturally-Aware Multimodal Language Modeling with Memory-Enhanced Cross-Dialect Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161160</link>
        <id>10.14569/IJACSA.2025.0161160</id>
        <doi>10.14569/IJACSA.2025.0161160</doi>
        <lastModDate>2025-11-29T11:00:03.1430000+00:00</lastModDate>
        
        <creator>Pasupuleti Venkata Ramana</creator>
        
        <creator>K. K. Sunalini</creator>
        
        <creator>A. Swathi</creator>
        
        <creator>R. Lakshmi</creator>
        
        <creator>Raman Kumar</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Khaled Bedair</creator>
        
        <subject>Cultural embeddings; Generative Memory; dialect revitalization; contrastive learning; multimodal NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The HeritageLM system addresses the acute problem of language loss by proposing a multimodal language model that brings cultural context into Generative Memory. Manual documentation and linguistic archiving are the traditional ways of preserving dialects, but they may fail to capture the phonetic variety and other cultural peculiarities of an endangered dialect. Current NLP models, such as BERT and GPT, are not effective at producing dialectal content because they lack exposure to under-resourced and historically rich language varieties. These shortcomings are mitigated by training Cultural Contextual Embeddings (CCE), Generative Memory Augmentation (GMA), and Cross-Dialect Contrastive Transfer Learning (CDCP) using reinforcement learning with Cultural Rewards (RLCR). The step-by-step process builds a Multimodal Cultural Knowledge Graph (MCKG), aligns dialect embeddings through contrastive learning, and retrieves culturally relevant information during generation. The model was trained on the Indian Languages Audio Dataset from Kaggle, which includes phonetic variations of ten languages, with preprocessing steps of text-to-speech analysis, phonetic annotation, and semantic tagging. HeritageLM, implemented in Python, scored above 98 on BLEU, ROUGE-L, phonetic accuracy, and cultural embedding metrics, showing that it can generate linguistically accurate, phonetically faithful, and culturally authentic output. These outcomes represent a major step towards revitalizing endangered dialects and preserving their distinct cultural heritage.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_60-HeritageLM_Culturally_Aware_Multimodal_Language_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shadow IT Transformation in the Post-Pandemic Digital Workplace: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161159</link>
        <id>10.14569/IJACSA.2025.0161159</id>
        <doi>10.14569/IJACSA.2025.0161159</doi>
        <lastModDate>2025-11-29T11:00:03.0670000+00:00</lastModDate>
        
        <creator>Ginanjar Nugraha</creator>
        
        <creator>Munir</creator>
        
        <creator>Puspo Dewi Dirgantari</creator>
        
        <subject>Shadow IT; COVID-19; remote work; digital transformation; systematic literature review; IT governance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The COVID-19 pandemic has significantly altered organizational work patterns, accelerating digital transformation and the adoption of remote and hybrid work models. These changes have affected the practice of shadow IT, the use of IT by employees without formal IT approval. This systematic literature review aims to explore how the pandemic and the shift to remote work have impacted shadow IT adoption, motivations, and management strategies in the context of digital transformation. We followed the PRISMA 2020 guidelines to conduct a search of peer-reviewed articles published between 2018 and 2025 across multiple databases (Scopus, Web of Science, IEEE Xplore, ACM Digital Library, AIS eLibrary). A total of 67 studies were included based on predefined criteria. The review identified key themes related to the evolving nature of shadow IT adoption, its associated risks, and adaptive management practices. Shadow IT adoption increased from 30–40% before the pandemic to 41% in 2022, with projections suggesting it could reach 75% by 2027. The findings show a shift in motivation for adopting shadow IT, from convenience-driven use to a necessity for business continuity, and finally, to a strategy for optimizing organizational processes. This review highlights the need for organizations to rethink IT governance in the post-pandemic digital workplace, as shadow IT has moved from an issue to be eliminated to a phenomenon that can be managed and leveraged.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_59-Shadow_IT_Transformation_in_the_Post_Pandemic_Digital_Workplace.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizer Algorithms Analysis for Intrusion Detection System on Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161158</link>
        <id>10.14569/IJACSA.2025.0161158</id>
        <doi>10.14569/IJACSA.2025.0161158</doi>
        <lastModDate>2025-11-29T11:00:02.9870000+00:00</lastModDate>
        
        <creator>H. A. Danang Rimbawa</creator>
        
        <creator>Agung Nugroho</creator>
        
        <creator>Muhammad Abditya Arghanie</creator>
        
        <subject>Deep Neural Networks (DNN); Intrusion Detection System (IDS); optimization algorithms; UNSW-NB15 dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Intrusion Detection Systems (IDS) play a critical role in identifying potential threats and intrusions in real-time within information technology infrastructures. The development of IDS using Deep Neural Networks (DNN) with the UNSW-NB15 dataset has shown significant potential in improving attack classification accuracy. However, the performance of the DNN-based IDS models is highly dependent on the choice of optimization algorithm. This study compares the performance of several commonly used optimizers in DNN training, including SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Adafactor, and Nadam. The quantitative analysis demonstrates that Adam achieves the highest accuracy among all optimizers tested, while Adadelta performs the worst. RMSprop shows instability in both validation accuracy and loss convergence, indicating challenges in adapting the learning rate for consistent learning. The ANOVA analysis yields an F-statistic of 34.687, which is greater than the F-critical value of 2.140 at a significance level of α = 0.05. This result confirms a statistically significant difference in performance among the tested optimization algorithms. These findings provide valuable insights for selecting the most appropriate optimizer to enhance the performance of DNN-based intrusion detection systems. Furthermore, this research contributes to the existing literature by offering a comprehensive comparative evaluation of optimizers, supporting future studies in improving IDS optimization strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_58-Optimizer_Algorithms_Analysis_for_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Two-Level Hierarchical Adaptive Dynamic Fusion for CNN–LSTM Integration in Fatigue Level Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161157</link>
        <id>10.14569/IJACSA.2025.0161157</id>
        <doi>10.14569/IJACSA.2025.0161157</doi>
        <lastModDate>2025-11-29T11:00:02.9570000+00:00</lastModDate>
        
        <creator>Marlince NK Nababan</creator>
        
        <creator>Poltak Sihombing</creator>
        
        <creator>Erna Nababan</creator>
        
        <creator>T Henny Febriana Harum</creator>
        
        <subject>Multimodal fusion; adaptive dynamic fusion; CNN-LSTM; fatigue level prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Driver fatigue is a major contributor to traffic accidents, yet most existing detection systems rely on unimodal inputs or static fusion mechanisms that lack robustness under poor lighting, partially obscured faces, and missing sensor data. This study aims to overcome these limitations by proposing a Hierarchical Adaptive Dynamic Fusion (HADF) model. HADF integrates a two-level adaptive fusion mechanism combining a CNN (ResNet-18) for facial micro-expressions and an LSTM for physiological signals (heart rate, temperature, and accelerometer). The first stage computes adaptive intra-modality weights (α), while the second stage assigns inter-modality weights (γ), enabling context-aware and resilient multimodal integration even under missing-modality conditions. Experiments on a multimodal fatigue dataset show that HADF achieves a validation accuracy of 96.5%, a macro F1-score of 0.96, and ROC-AUC values of 1.00 (Normal), 0.99 (Eye-Closed), and 0.93 (Yawn). Compared with unimodal and static-fusion baselines, HADF improves accuracy by approximately 4.5% and macro F1-score by 6–9%, while maintaining stable performance under incomplete data. These results confirm the novelty of HADF as a two-stage adaptive fusion strategy that enhances accuracy and system robustness, making it suitable for real-time fatigue monitoring in transportation, occupational safety, and healthcare applications.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_57-Two_Level_Hierarchical_Adaptive_Dynamic_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>InfoCore: AI Driven Named Entity Deduplication and Event Categorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161156</link>
        <id>10.14569/IJACSA.2025.0161156</id>
        <doi>10.14569/IJACSA.2025.0161156</doi>
        <lastModDate>2025-11-29T11:00:02.8930000+00:00</lastModDate>
        
        <creator>Rohail Qamar</creator>
        
        <creator>Raheela Asif</creator>
        
        <creator>Abdul Karim Kazi</creator>
        
        <creator>Muhammad Ali</creator>
        
        <creator>Muhammad Mustafa</creator>
        
        <subject>Data deduplication; context-aware; event categorization; NLP; large language model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The exponential growth of digital information presents critical challenges for efficient data management, as conventional manual curation methods remain slow, error-prone, and unable to adapt to evolving data streams. This paper presents InfoCore: AI-Driven Entity Deduplication and Event Categorization, an automated framework that leverages artificial intelligence to identify and remove redundant news articles while classifying them by event. Focusing on the political news domain, the system integrates Natural Language Processing, machine learning, and clustering techniques to enhance information retrieval and reduce redundancy. News content is collected via the Newspaper3k library and processed through tokenization, normalization, and entity extraction. Transformer-based models enable named entity recognition, while LLaMA-based large language models, TensorFlow, and PyTorch support text classification and event categorization. Empirical evaluation demonstrates InfoCore’s capacity to detect duplicates and achieve precise event classification with high scalability. The paper contributes a domain-independent architecture for automated data curation and a replicable workflow that improves efficiency and accuracy in large-scale information systems. The results highlight InfoCore’s potential to advance data management practices and inform the design of intelligent, scalable frameworks for handling unstructured digital content.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_56-InfoCore_AI_Driven_Named_Entity_Deduplication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generalizing In-Field Plant Disease Diagnosis: A Deep Transfer Learning Approach for Multi-Crop and Heterogeneous Imaging Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161155</link>
        <id>10.14569/IJACSA.2025.0161155</id>
        <doi>10.14569/IJACSA.2025.0161155</doi>
        <lastModDate>2025-11-29T11:00:02.8170000+00:00</lastModDate>
        
        <creator>Khoerul Anwar</creator>
        
        <creator>Tubagus Mohammad Akhriza</creator>
        
        <creator>Mahmud Yunus</creator>
        
        <subject>Plant diseases; deep learning; transfer learning; multi-food crops; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Plant diseases pose a serious threat to agricultural productivity and can cause significant crop losses if not addressed quickly and appropriately. Computer vision and artificial intelligence offer significant opportunities for digital image-based diagnosis. The main challenges in image-based plant disease recognition are developing a single model capable of diagnosing diseases in various types of plants and ensuring that the model remains reliable even when images are taken under varying lighting conditions, backgrounds, and camera quality. This study therefore aims to present a model capable of recognizing leaf diseases of multiple food crops, especially rice and corn. It proposes deep learning and transfer learning for diagnosing plant leaf diseases across various types of plants and unstructured imaging environments. To address these challenges, VGGNet, ResNet50, InceptionV3, and EfficientNetB0 were evaluated on laboratory datasets, and EfficientNetB0 performed the best. The selected model was then fine-tuned for feature extraction, and a new dataset was collected in a real-world domain with varying lighting, changing viewpoints and scales, complex backgrounds, similar symptoms between diseases, and occlusion. The results showed that the proposed model performed very well and robustly, with 98% accuracy and a weighted average F1-score of 98% in identifying food crop diseases: blight, rust, blast, tungro, and healthy leaves. This performance indicates that the developed model is highly reliable in classifying leaf diseases in rice and corn. The model is expected to be applied in precision agriculture technology so that farmers can take timely treatment action without further delay.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_55-Generalizing_In_Field_Plant_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SPA-DCN-NET: A Gated Multi-Scale Local Contrast Normalization Network for Ultrasound Image Segmentation of Liver</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161154</link>
        <id>10.14569/IJACSA.2025.0161154</id>
        <doi>10.14569/IJACSA.2025.0161154</doi>
        <lastModDate>2025-11-29T11:00:02.7370000+00:00</lastModDate>
        
        <creator>Su Ming Jian</creator>
        
        <creator>Afizan Bin Azman</creator>
        
        <subject>Spatial pyramid attention; deformable convolutional network; ultrasound image; medical image analysis; soft-gate mechanism; local contrast normalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Segmentation of the liver in ultrasound images is a critical task in medical image analysis, yet it remains challenging due to acoustic speckle noise, brightness instability, and deformations caused by probe pressure. To address these problems, this study presents SPA-DCN-NET, a lightweight framework that integrates three synergized components: first, a learnable gated local contrast normalization (Gated LCN) module utilizes a sigmoid soft-gate mechanism to dynamically fuse LCN-enhanced features with original features, effectively stabilizing the features for training. Second, a spatial pyramid attention (SPA) module applies multi-scale context aggregation to the stabilized features provided by Gated LCN. Third, these features guide deformable convolutional networks to adaptively adjust their sampling grids, ensuring precise delineation of irregular liver boundaries in ultrasound images. Experimental results demonstrate that SPA-DCN-NET achieved mean IoU scores of 83.52%, 75.57%, 73.85%, and 85.94% across the four datasets, respectively, all higher than those obtained by UNet, nnUNet, and ResUNet. These metrics indicate that SPA-DCN-NET is more adaptable to the ultrasonic medical environment than other existing medical segmentation networks, making it a recommended network for image analysis of ultrasound abdominal scans.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_54-SPA_DCN_NET_A_Gated_Multi_Scale_Local_Contrast.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm Intelligence-Based Optimization of FACTS Devices: A Review of Operation, Control and Emerging Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161153</link>
        <id>10.14569/IJACSA.2025.0161153</id>
        <doi>10.14569/IJACSA.2025.0161153</doi>
        <lastModDate>2025-11-29T11:00:02.6600000+00:00</lastModDate>
        
        <creator>Patricia Khwambala</creator>
        
        <creator>Kumeshan Reddy</creator>
        
        <creator>Senthil Krishnamurthy</creator>
        
        <subject>Swarm intelligence; FACTS devices; power transfer system; voltage stability; power losses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The integration of Flexible AC Transmission System (FACTS) devices into modern power networks plays a pivotal role in enhancing voltage stability, reducing transmission losses, and improving overall power transfer capability. Determining the optimal location and sizing of these devices is a critical task that significantly influences system performance. In recent years, swarm intelligence (SI) algorithms have emerged as powerful optimization tools for addressing such complex, nonlinear, and multi-objective problems in power systems. This study presents a comprehensive review of the application of swarm intelligence techniques, including Artificial Bee Colony (ABC), Bacterial Foraging Optimization (BFO), the Dragonfly Algorithm (DA), the Salp Swarm Algorithm (SSA), and Particle Swarm Optimization (PSO). These algorithms are used to optimize the placement and sizing of FACTS devices, such as Static Var Compensators (SVCs), Thyristor-Controlled Series Capacitors (TCSCs), and Static Synchronous Compensators (STATCOMs). The review highlights the underlying mechanisms, strengths, and limitations by comparing the performance of each algorithm in terms of convergence and the optimal location and sizing of a particular FACTS device in a power transfer system, with the goals of enhancing voltage stability, minimizing real power losses, and improving system loadability. The review provides a comprehensive resource for researchers and practitioners interested in applying swarm intelligence-based optimization techniques to FACTS devices in power transmission systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_53-Swarm_Intelligence_Based_Optimization_of_FACTS_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Model for an Ontology-Based Dietary Recommendation Plan in Crohn’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161152</link>
        <id>10.14569/IJACSA.2025.0161152</id>
        <doi>10.14569/IJACSA.2025.0161152</doi>
        <lastModDate>2025-11-29T11:00:02.5670000+00:00</lastModDate>
        
        <creator>Rezan Almehmadi</creator>
        
        <creator>Aisha Alsobhi</creator>
        
        <creator>Omaima Almatrafi</creator>
        
        <subject>Conceptual model; ontology; dietary recommendations; Crohn’s Disease; domain knowledge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Crohn&#39;s Disease (CD) is a long-term inflammatory bowel disorder that affects the digestive system. It is influenced by geography, diet, genetics, and immune response. Patients often experience difficulties managing CD due to the complexity and heterogeneity of the condition. Despite increasing scientific efforts, knowledge within the domain remains scattered and fragmented across different concepts. The purpose of this study is to develop a conceptual model of CD using knowledge engineering to organize domain knowledge and clarify the main aspects and their relationships. The conceptual model is created following the Ontology Development 101 methodology to define the creation of classes, properties, and restrictions. The proposed model is designed to organize and integrate multiple aspects of the condition, such as symptoms, treatments, risk factors, and patient profiles, with a particular emphasis on dietary recommendations. The resulting model consists of eight primary classes and fifteen key relationships, clarifying the connections between patients, symptoms, treatments, and diagnosis. The CD conceptual model does not comprehensively address the genetic, environmental, cultural, or neurobiological factors related to CD. To address these limitations, future work should focus on integrating real-world clinical data and considering broader demographic contexts. This study examines nutritional treatments for CD, such as Exclusive Enteral Nutrition (EEN), the low-FODMAP diet, and the Crohn’s Disease Exclusion Diet (CDED), which emphasize the role of diet in personalized healthcare. The developed CD conceptual model forms the basis for a comprehensive ontology-driven system that will provide Crohn&#39;s patients with personalized dietary advice and support future decision support systems to improve their clinical care.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_52-A_Conceptual_Model_for_an_Ontology_Based_Dietary_Recommendation_Plan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sleep Quality and Burnout Syndrome in Students at a University in Lima, Peru: A Cross-Sectional Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161151</link>
        <id>10.14569/IJACSA.2025.0161151</id>
        <doi>10.14569/IJACSA.2025.0161151</doi>
        <lastModDate>2025-11-29T11:00:02.4870000+00:00</lastModDate>
        
        <creator>Yajaira Garay-Castro</creator>
        
        <creator>Luis Acosta-Avila</creator>
        
        <creator>Ana Flores-Hiyo</creator>
        
        <creator>Ana Huamani-Huaracca</creator>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Gina Le&#243;n-Untiveros</creator>
        
        <creator>Alicia Alva-Mantari</creator>
        
        <subject>Burnout Syndrome; sleep quality; university students; mental health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The World Health Organization (WHO) reports that 15% of mental health problems develop in people with demanding work and academic conditions. Both sleep quality problems and Burnout Syndrome (BS) are recognized as significant problems in university settings. In Lima, Peru, the situation is critical, as BS affects up to 60% of university students. Therefore, quantifying this problem through a nursing intervention is crucial. The objective of this study was to determine the relationship between sleep quality and BS in university students at a university in Lima. Using a quantitative, cross-sectional, and correlational approach, the Pittsburgh Sleep Quality Index (PSQI) and the Student Burnout Scale (EUBE) were applied to a sample of 216 nursing and systems engineering students, using Spearman&#39;s Rho test and the Multinomial Logistic Regression Model. The findings revealed a moderate negative correlation between sleep quality and BS (Rho=-0.508; p &lt; 0.001) and a relationship between sleep quality problems and mild BS (RRR=7.84565; p = 0.005). Furthermore, 85.19% of participants experienced sleep problems that warranted medical attention and treatment, and 86.57% had mild BS. Sleep quality problems and the development of BS are prevalent in this population; therefore, it is essential to continue studying them and to integrate specific intervention strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_51-Sleep_Quality_and_Burnout_Syndrome_in_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aligning Confidence and Localization: An Enhanced DINO Model for Small Object Detection in Robotic Nursing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161150</link>
        <id>10.14569/IJACSA.2025.0161150</id>
        <doi>10.14569/IJACSA.2025.0161150</doi>
        <lastModDate>2025-11-29T11:00:02.4100000+00:00</lastModDate>
        
        <creator>Yanchen Du</creator>
        
        <creator>Qingzhuo Yuan</creator>
        
        <creator>Shengli Luo</creator>
        
        <creator>XiaoLong Shu</creator>
        
        <creator>Xu Wang</creator>
        
        <creator>Yuheng Jiang</creator>
        
        <creator>Hongliu Yu</creator>
        
        <subject>Small object detection; matching alignment; robotic object detection; nursing feeding; DINO; GhostConv; edge deployment; transformers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>In care environments such as nursing homes, robots performing tasks (e.g., feeding assistance) must accurately identify and locate target objects to ensure safe and efficient execution. However, real-world applications face challenges such as numerous small objects, complex backgrounds, and severe occlusions, all of which compromise detection performance. To address these challenges, this study proposes an end-to-end object detection algorithm based on an enhanced DINO framework. Transformer-based DINO is adopted as the baseline, leveraging its global modeling capabilities and avoiding the complex pre- and post-processing required by traditional CNN detectors. In addition, an improved Align-Loss is introduced to enhance small-object detection and address misalignment issues within DINO. Furthermore, a GhostConv module is integrated into DINO’s ResNet50 backbone to reduce the computational load of feature extraction and accelerate detection. Finally, multi-scale data augmentation and transfer learning are applied during training to improve detection accuracy and accelerate convergence. To validate the proposed method, experiments were conducted on the augmented MYNursingHome dataset and the COCO dataset. On the MYNursingHome dataset, the proposed approach improved mAP by 3.1% and APs by 2.6% over the DINO baseline, while reducing parameters from 47M to 39.6M and FLOPs from 279G to 243G. On the NVIDIA Jetson Orin Nano Super, inference speed increased from 16.8 FPS to 18.9 FPS (+12.5%). The experimental results demonstrate that the improved DINO detector proposed in this study exhibits a significant advantage in small object detection for nursing scenarios, providing algorithmic support for intelligent and efficient robotic care.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_50-Aligning_Confidence_and_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SBERT-Based Stacking Ensemble Model for Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161149</link>
        <id>10.14569/IJACSA.2025.0161149</id>
        <doi>10.14569/IJACSA.2025.0161149</doi>
        <lastModDate>2025-11-29T11:00:02.3170000+00:00</lastModDate>
        
        <creator>Abdulaziz A Alzubaidi</creator>
        
        <creator>Amin A Alawady</creator>
        
        <subject>Fake news detection; machine learning; SBERT embeddings; stacking ensemble; Random Forest; Logistic Regression; MLP; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Fake news has become a significant global challenge, affecting public opinion, social dynamics, and decision-making processes. Detecting fabricated news accurately and efficiently remains a challenging task due to the diversity of content, writing styles, and subtle semantic nuances. In this study, we propose a stacking ensemble model that uses SBERT-based semantic embeddings to improve the detection of fake news. The model integrates several machine-learning classifiers with a meta-learner to enhance robustness and predictive reliability. Experiments on the WELFake dataset show that the proposed model achieves 92.74% accuracy, a 93.01% F1-score, and a 97.93% ROC-AUC in classifying fake and real news. These results demonstrate the model’s effectiveness and suggest its potential for broader application across different languages and news domains.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_49-SBERT_Based_Stacking_Ensemble_Model_for_Fake_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Evolution of Hackathons as an Innovation Tool: A Systematic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161148</link>
        <id>10.14569/IJACSA.2025.0161148</id>
        <doi>10.14569/IJACSA.2025.0161148</doi>
        <lastModDate>2025-11-29T11:00:02.2530000+00:00</lastModDate>
        
        <creator>Claudia Marrujo-Ingunza</creator>
        
        <creator>Meyluz Paico-Campos</creator>
        
        <subject>Hackathon; evolution; tool; innovation; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Hackathons have established themselves as dynamic open innovation spaces that promote interdisciplinary collaboration and creative problem-solving. This systematic review, which follows the PRISMA methodology, synthesizes the findings of 73 articles from Scopus, Web of Science, and IEEE Xplore, with the aim of analyzing the evolution, impacts, and knowledge gaps surrounding hackathons as an innovation tool. The study identifies a growing trend in their global implementation, with a particular emphasis on skill development, driving innovation, and strengthening entrepreneurial competencies. However, limitations are evident, such as the scarcity of longitudinal studies, the poor assessment of their long-term sustainability, and the geographical concentration of research in technologically advanced countries. Future research should focus on comparing organizational models, measuring long-term results, and including diverse contexts. Our findings underscore the potential of hackathons to boost creativity and entrepreneurship, as well as foster sustainable and collaborative innovation processes.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_48-The_Evolution_of_Hackathons_as_an_Innovation_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable IoT Smart Home Perceptions Across Demographic and Vulnerable User Segments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161147</link>
        <id>10.14569/IJACSA.2025.0161147</id>
        <doi>10.14569/IJACSA.2025.0161147</doi>
        <lastModDate>2025-11-29T11:00:02.1430000+00:00</lastModDate>
        
        <creator>Burhan Mahmoud Hamadneh</creator>
        
        <creator>Zeyad Alshboul</creator>
        
        <creator>Nabhan Mahmoud Hamadneh</creator>
        
        <creator>Turki Mahdi Alqarni</creator>
        
        <creator>Malek Turki Jdaitawi</creator>
        
        <subject>Internet of Things; smart home; vulnerable people; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The rapid expansion of Internet of Things (IoT) technologies has transformed smart home systems into essential tools for enhancing safety, independence, and quality of life, particularly for older adults and individuals with disabilities. However, the extent to which these groups understand and adopt IoT-enabled smart homes remains limited. This study addresses insufficient knowledge and uneven adoption of smart home IoT technologies among vulnerable demographic groups, examining how demographic factors shape levels of awareness and readiness for use. Two parallel approaches were employed: a validated questionnaire (of 15 sections) distributed to 249 participants, and semi-structured interviews with 25 selected individuals in Najran, Saudi Arabia, during the summer of 2023. Quantitative data were analyzed using descriptive statistics and multivariate analyses of variance, while qualitative data were subjected to content analysis. Results revealed low levels of knowledge regarding IoT-enabled smart home systems among the respondents and across the different groups. Significant differences were found across employment status, age (15 to 30 and 30 to 45 years), economic status (above average), and disability status, whereas no significant differences were found for gender or marital status. Qualitative insights indicated major concerns related to affordability, user passivity, lack of technical support, poor internet connectivity, device overload, and issues of privacy, security, and system reliability. Targeted measures to improve awareness, accessibility, inclusive design, and infrastructure are strongly recommended. Future work may address spatial and temporal variations in IoT adoption, develop tailored training models for older adults and individuals with disabilities, and assess the effectiveness of policy initiatives aimed at increasing smart home readiness. These efforts can further improve the safe and effective integration of IoT technologies and enhance indoor life quality and sustainability for vulnerable people.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_47-Sustainable_IoT_Smart_Home_Perceptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Multimodal Frameworks for Cardiovascular Diagnostics: Integrating Sensors, Imaging, and Robotic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161146</link>
        <id>10.14569/IJACSA.2025.0161146</id>
        <doi>10.14569/IJACSA.2025.0161146</doi>
        <lastModDate>2025-11-29T11:00:01.9100000+00:00</lastModDate>
        
        <creator>Chandrasekhara Reddy T</creator>
        
        <creator>Ramesh Babu P</creator>
        
        <subject>Cardiovascular disease (CVD); machine learning (ML); artificial intelligence (AI); wearable sensors; deep learning; Biomedical Signal Processing; microfluidics; nano sensors; robotic intervention; medical imaging; predictive modelling; causal inference; domain adaptation; federated learning; clinical validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Cardiovascular disease remains the leading cause of death, and a definitive cure has not yet been found, making prevention and early diagnosis all the more important. Integrating artificial intelligence, machine learning, wearable sensors, and biomedical imaging is transforming cardiovascular care. Recent breakthroughs have reported that smart immune sensors combined with artificial intelligence help diagnose disease by testing blood and urine samples. Advances from various emerging technologies are expected to greatly enhance the overall accuracy and personalization of diagnostic processes within the medical field. This review explores difficulties relating to domain adaptation, variability in data, and interpretability, including the need for rigorous validation tests and ethical considerations. The proposed system is made up of several programs that help the user make decisions more efficiently in situations where rapid action is needed, while considering privacy preservation, clinical quality improvement, and energy efficiency. This review of more than sixty recent studies aims to broaden the field of cardiovascular care by introducing a roadmap for further research. The creation of a fully responsive cardiovascular diagnostic system is not yet complete and requires contributions from several entirely different fields of science.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_46-AI_Driven_Multimodal_Frameworks_for_Cardiovascular_Diagnostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advances in Natural Language Processing for Radiology: State-of-the-Art Techniques, Applications, and Open Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161145</link>
        <id>10.14569/IJACSA.2025.0161145</id>
        <doi>10.14569/IJACSA.2025.0161145</doi>
        <lastModDate>2025-11-29T11:00:01.4230000+00:00</lastModDate>
        
        <creator>Kotha Chandrakala</creator>
        
        <creator>Shahin Fatima</creator>
        
        <subject>Natural language processing; radiology reports; deep learning; transformers; medical imaging; report generation; clinical NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Radiology reports encode critical clinical observations from medical imaging in an unstructured textual form that is central to modern clinical diagnosis and decision support. In this context, natural language processing (NLP) has emerged as a key clinical NLP technology for automatically extracting, classifying, and interpreting information from radiology reports. This study presents a structured review of more than sixty recent contributions on NLP for radiology, covering approaches that range from traditional rule-based pipelines to contemporary deep learning and transformer-based models. We examine how deep learning architectures, including BERT, GPT-4, multimodal transformers, and vision–language alignment networks, are applied to core tasks such as disease classification, tumor response assessment, cancer phenotype extraction, radiology report generation, cohort identification, quality assurance, and longitudinal patient follow-up. Particular attention is given to knowledge graph integration, multimodal cross-attention, and zero-shot learning strategies that adapt large language models to radiology-specific workflows. We also analyze key barriers to clinical adoption, including limited annotated data, domain generalization gaps across institutions, ethical and fairness concerns, and the need for transparent model explainability. Based on this synthesis, the review outlines future research directions for building interpretable, multimodal, and clinically robust NLP solutions that integrate technological, clinical, and operational perspectives to advance radiology report analysis and medical imaging–driven care.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_45-Advances_in_Natural_Language_Processing_for_Radiology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Lingual Sentiment Analysis in Low-Resource Languages: A Recent Review on Tasks, Methods and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161144</link>
        <id>10.14569/IJACSA.2025.0161144</id>
        <doi>10.14569/IJACSA.2025.0161144</doi>
        <lastModDate>2025-11-29T11:00:01.2700000+00:00</lastModDate>
        
        <creator>Nor Zakiah Lamin</creator>
        
        <creator>Azwa Abdul Aziz</creator>
        
        <subject>Cross-lingual sentiment analysis; low-resource language; natural language processing; pre-trained language models; transfer learning; few-shot learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Cross-lingual sentiment analysis (CLSA) has become increasingly important in natural language processing and machine learning, enabling the understanding of opinions across diverse linguistic communities, particularly in low-resource languages (LRLs). Despite growing attention, persistent challenges such as limited annotated data, semantic misalignment, and cultural variation in sentiment expression continue to hinder progress. This systematic literature review (SLR) examines recent developments by analyzing the tasks, methods, and challenges reported in CLSA studies focused on LRLs. Following the PRISMA 2020 framework, a comprehensive search was conducted across major databases, including Scopus, IEEE Xplore, SpringerLink, Elsevier, and Google Scholar, covering studies published between 2021 and 2025. After applying inclusion and exclusion criteria, 27 studies were selected for analysis. The findings reveal that while polarity detection remains the dominant sentiment analysis task, emerging directions such as aspect-based sentiment analysis (ABSA), emotion detection, and hate speech recognition are gaining traction. Methodologically, most studies rely on multilingual pre-trained language models (PLMs), supplemented by machine translation, transfer learning, few-shot learning, and hybrid approaches. However, key challenges remain, including the scarcity of high-quality datasets, instability of few-shot performance, difficulties in handling dialectal variation, bias in PLMs, and the lack of standardized evaluation benchmarks. This review concludes by emphasizing the need for more culturally grounded tasks, adaptive hybrid frameworks, and fairness-aware evaluation practices to build robust cross-lingual frameworks and richer linguistic resources for underrepresented languages.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_44-Cross_Lingual_Sentiment_Analysis_in_Low_Resource_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Project Effort Estimation Using Formal Method and Model Checker</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161143</link>
        <id>10.14569/IJACSA.2025.0161143</id>
        <doi>10.14569/IJACSA.2025.0161143</doi>
        <lastModDate>2025-11-29T11:00:01.2200000+00:00</lastModDate>
        
        <creator>Abdulaziz Alhumam</creator>
        
        <subject>Software effort estimation; cost estimation; formal methods; SMT Solver; automated verification; Z-specifications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Software project effort estimation is a critical component of software development, as it determines the time and financial resources required to complete a project. Existing estimation techniques—ranging from empirical models and algorithmic methods to heuristic and expert-based approaches—struggle with inconsistent accuracy due to the inherent complexity, subjectivity, and contextual variability across software projects. Although earlier formal methods aimed to reduce ambiguity through precision, they typically do not support automated logical analysis or verify that the assumptions underlying an estimate are mutually consistent. To address these limitations, this study introduces a novel formal modeling framework that integrates Z-Specification with the Z3 SMT solver to both formalize and computationally verify effort estimation models. The use of Z notation guarantees precise, unambiguous semantics. Furthermore, SMT (Satisfiability Modulo Theories) reasoning adds powerful capabilities that older methods lacked, including automatically finding constraint violations, confirming how parameters depend on one another, and determining feasible estimation ranges under clearly defined conditions. This integration not only reduces ambiguity but also provides a verifiable, machine-checkable basis for evaluating, refining, and comparing diverse effort estimation methods, thereby offering a more robust foundation than traditional or solely formalized models.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_43-Software_Project_Effort_Estimation_Using_Formal_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How did the Intelligent Search Engine Become Popular Among Chinese Residents Since the Emergence of Deepseek?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161142</link>
        <id>10.14569/IJACSA.2025.0161142</id>
        <doi>10.14569/IJACSA.2025.0161142</doi>
        <lastModDate>2025-11-29T11:00:01.2070000+00:00</lastModDate>
        
        <creator>Zhixuan Wang</creator>
        
        <creator>Yuying Song</creator>
        
        <creator>Hanyu Guo</creator>
        
        <subject>AI search engine; Deepseek; SOR; Technology Acceptance Model; Theory of Planned Behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study, based on the Technology Acceptance Model (TAM) and the Theory of Planned Behavior (TPB), combined with the Stimulus-Organism-Response (SOR) theoretical framework, explores the influencing factors and promotion mechanisms for the popularization of “Deepseek” intelligent search engines in China. Through a questionnaire survey, 343 valid samples were collected, and structural equation modeling (SEM) was used for data analysis, verifying 8 out of 10 research hypotheses. Perceived usefulness (PU) and perceived ease of use (PEOU) significantly and positively influence the intention to use, indicating that users&#39; cognition of the functional value and operational convenience of AI search engines is a key driving factor. Subjective norms (SN) directly promote the intention to use but do not indirectly affect it through perceived usefulness, suggesting that the influence of the social environment stems more from non-rational paths (such as social contagion within the group). Perceived behavioral control indirectly enhances the intention to use by improving perceived ease of use and usefulness, highlighting the importance of users&#39; own abilities and device support. Perceived playfulness only indirectly affects the intention to use through perceived usefulness and has no significant effect on ease of use, possibly because the entertainment function increases operational complexity. Technological facilitation has a significant positive impact on both perceived usefulness and ease of use, indicating that the optimization of technical performance (such as interaction efficiency and powerful functions) is central to enhancing user experience.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_42-How_did_the_Intelligent_Search_Engine_Become_Popular.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robotic Process Automation (RPA) Scripting Model Using Machine Learning (ML) for Enterprise Data Validation and Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161141</link>
        <id>10.14569/IJACSA.2025.0161141</id>
        <doi>10.14569/IJACSA.2025.0161141</doi>
        <lastModDate>2025-11-29T11:00:01.1130000+00:00</lastModDate>
        
        <creator>Luis &#193;ngel Bendez&#250; Jim&#233;nez</creator>
        
        <creator>Jorge Luis Juan de Dios Apaza</creator>
        
        <creator>Ruben Oscar Cerda Garc&#237;a</creator>
        
        <subject>RPA; scripting; data automation; Machine Learning; data validation; data integration; intelligent workflows</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This research presents an automated data processing model based on RPA Scripting, designed to enhance efficiency in extracting, validating, and integrating information from various web platforms. The automated workflow begins with the use of a tool that simulates human interaction on web platforms to obtain data automatically and reliably. The data is then organized and cleaned using processing techniques that prepare it for analysis. As a key component of the model, Machine Learning algorithms have been incorporated to detect errors, identify unusual patterns, and classify records, thereby improving data quality before storage. Finally, the processed data is loaded into a database and visualized through a dynamic dashboard that supports decision-making via reports and indicators. In conclusion, integrating Machine Learning algorithms within an RPA Scripting model not only optimizes the execution of automated tasks but also equips the model with intelligence to anticipate errors and adapt to changes in the data. This enables the development of a more robust, reliable, and adaptive automated process, aligned with current requirements for real-time analysis and decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_41-Robotic_Process_Automation_RPA_Scripting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Professional Profile Categorization and Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161140</link>
        <id>10.14569/IJACSA.2025.0161140</id>
        <doi>10.14569/IJACSA.2025.0161140</doi>
        <lastModDate>2025-11-29T11:00:01.0200000+00:00</lastModDate>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Hicham BOUSSATTA</creator>
        
        <creator>Mohamed CHINY</creator>
        
        <creator>Nabil Mabrouk</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <creator>Moulay Youssef HADI</creator>
        
        <subject>Professional profile classification; profile recommendation; natural language processing (NLP); supervised learning; Logistic Regression; Random Forest; Support Vector Machine (SVM); k-Nearest Neighbors (KNN); Gradient Boosting; Na&#239;ve Bayes; AI-based recruitment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The exponential growth of applications in digital and information system domains has made the identification of qualified candidates increasingly complex, resulting in longer and less efficient recruitment processes. Recruiters frequently deal with heterogeneous and unstructured r&#233;sum&#233;s, which complicates skill assessment and increases the risk of mismatches between candidates and job requirements. To address these challenges, this research proposes an AI-based framework for the automatic classification and recommendation of professional profiles using natural language processing (NLP), text mining, and supervised machine learning techniques. The methodology includes the comparative evaluation of several classification algorithms—Logistic Regression, Random Forests, Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Gradient Boosting (GB), and Na&#239;ve Bayes—to identify the most accurate and robust model. The framework also incorporates a similarity-based matching mechanism to align candidate profiles with job postings. Experimental results show a classification accuracy of 96.38%, demonstrating the model’s effectiveness in enabling faster, more reliable, and objective recruitment decisions while providing candidates with insights into their compatibility with labor market expectations.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_40-AI_Driven_Professional_Profile_Categorization_and_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Segmentation Algorithm of Animal Husbandry Image Based on an Improved U-Net Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161139</link>
        <id>10.14569/IJACSA.2025.0161139</id>
        <doi>10.14569/IJACSA.2025.0161139</doi>
        <lastModDate>2025-11-29T11:00:01.0030000+00:00</lastModDate>
        
        <creator>Jia Li</creator>
        
        <creator>Jinjing Zhang</creator>
        
        <creator>Fengjiao Jiang</creator>
        
        <subject>Machine vision; semantic segmentation; feature fusion; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>In response to the drawbacks of fuzzy features and unclear edges in image segmentation tasks, this study proposes an enhanced U-Net semantic segmentation network utilizing a local and global fusion attention module. Firstly, a feature extraction module combining convolution and Transformer is introduced in the bottleneck layer, so that the network can simultaneously capture local and global features and effectively promote their fusion. Secondly, the CBAM attention module is added to the skip connections between the encoder and decoder. Finally, the output feature map is processed using the ASPP module to enhance focus on target features and improve segmentation performance. Experiments conducted on four animal husbandry segmentation datasets show that the LCA_Net model proposed in this study achieves an IoU score of 90.19% and a Dice score of 94.83%, outperforming U-Net and other mainstream segmentation networks. This study offers effective technical support for advancing aquaculture status monitoring and lays a foundation for further development in this field.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_39-Semantic_Segmentation_Algorithm_of_Animal_Husbandry_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Detection of Partially Occluded Faces in Low-Light Scenarios Using YOLOv7 and YOLOv6</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161138</link>
        <id>10.14569/IJACSA.2025.0161138</id>
        <doi>10.14569/IJACSA.2025.0161138</doi>
        <lastModDate>2025-11-29T11:00:00.9570000+00:00</lastModDate>
        
        <creator>Nayef Alqahtani</creator>
        
        <creator>Amina Shaikh</creator>
        
        <creator>Imran Khan Keerio</creator>
        
        <subject>Deep learning; partial occlusion; YOLOv6; YOLOv7; real-time detection; low-light face recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Partial occlusion and low light are significant challenges for face detection, limiting its effectiveness in critical applications such as security, surveillance, and user identification within computer vision. This study evaluates the effectiveness of two influential deep-learning models, YOLOv6 and YOLOv7, in identifying partially occluded faces in uncontrolled, real-world conditions. Both models were trained and assessed on a carefully selected set of partially blocked and hidden face images captured under low-light exposure, using comprehensive data-augmentation schemes to promote generalization. Findings indicate that YOLOv7 consistently outperforms YOLOv6 across all key performance measures, including precision (0.92 vs. 0.90), recall (0.89 vs. 0.79), and mean Average Precision (mAP), demonstrating its superior ability to recognize occluded faces in adverse environments. YOLOv7 takes longer to train, but its enhanced design, particularly the Extended Efficient Layer Aggregation Network (E-ELAN), makes feature extraction and real-time detection considerably more efficient. These results show a clear improvement, indicating that YOLOv7 is well suited to real-world deployment where robust face detection is required even under occlusion and low visibility. This study contributes to the advancement of face detection technologies, addressing emerging privacy and security needs in increasingly masked and low-visibility spaces.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_38-Robust_Detection_of_Partially_Occluded_Faces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effectiveness and Usability of Microsoft Threat Modelling Tool in Undergraduate Cybersecurity Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161137</link>
        <id>10.14569/IJACSA.2025.0161137</id>
        <doi>10.14569/IJACSA.2025.0161137</doi>
        <lastModDate>2025-11-29T11:00:00.9230000+00:00</lastModDate>
        
        <creator>Nor Laily Hashim</creator>
        
        <creator>Ahmad Zuhairi Bin Mohd Yusri</creator>
        
        <subject>Cybersecurity education; threat modelling; stride; usability testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>As cyber threats evolve, equipping students with hands-on experience in identifying and mitigating system vulnerabilities is critical for developing a cybersecurity-aware workforce. A variety of threat modelling tools are available on the market, and it is challenging for educators to select the best tool for their students to learn to identify possible threats that may exploit system vulnerabilities. This study investigates the effectiveness and usability of the Microsoft Threat Modelling Tool (MTMT) among undergraduate students, addressing the need for a practical tool in cybersecurity education. The study was conducted in four phases. The first phase involved a comprehensive literature review to understand the features and capabilities of the threat modelling tools being compared, specifically the MTMT. Phase two defined the evaluation criteria for assessing the tool&#39;s effectiveness and usability; criteria for error frequency, ease of use, and user-friendliness were developed, with particular focus on their relevance to educational environments, especially for undergraduate students. Phase three involved data collection, during which participants were recruited and had hands-on sessions with the tool; training sessions using case studies familiarised participants with the tool&#39;s features and functionalities. The last phase involved developing assessments to evaluate participants’ knowledge and the effectiveness and usability of the tool. The evaluation includes structured usability testing and post-assessment of students’ knowledge and skill acquisition. Findings reveal that MTMT enhances students’ comprehension of threat modelling concepts, bridging the gap between theoretical knowledge and real-world cybersecurity practices. However, the study also highlights areas for improvement in the tool’s interface and documentation to better support student learning. These insights enhance educational strategies, foster active learning, and equip students for real-world cybersecurity challenges. The results emphasise the tool’s potential to strengthen the integration of threat modelling into the cybersecurity field, thereby fostering essential skills for safeguarding organisational and digital infrastructures. The novelty of this study lies in the methodology used to measure the effectiveness and usability of the threat modelling tool: effectiveness was measured using the formulas from ISO/IEC 25022:2016(E), while usability was measured using the System Usability Scale (SUS).</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_37-Evaluating_the_Effectiveness_and_Usability_of_Microsoft_Threat_Modelling_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recursive Gated Convolution-Based YOLOv11 Framework for Operator Safety Management in Live-Line Work</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161136</link>
        <id>10.14569/IJACSA.2025.0161136</id>
        <doi>10.14569/IJACSA.2025.0161136</doi>
        <lastModDate>2025-11-29T11:00:00.8630000+00:00</lastModDate>
        
        <creator>Dapeng Ma</creator>
        
        <creator>Liang Yang</creator>
        
        <creator>Kang Chen</creator>
        
        <creator>Feng Yang</creator>
        
        <creator>Ao Cui</creator>
        
        <creator>Rundong Yang</creator>
        
        <creator>Zhilin Wen</creator>
        
        <creator>Donghua Zhao</creator>
        
        <subject>Detection transformer; recursive gated convolution; YOLOv11; personal protective equipment; live-line work scenarios</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>In live-line work scenarios, it is essential for workers to wear electric field shielding clothing to prevent fatal accidents caused by electric shock. Accordingly, this study developed an electric field shielding clothing detection system for live-line working environments based on the YOLOv11 framework. Previous research has explored intelligent wearable detection systems for personal protective equipment such as safety helmets. However, compared to safety helmets, electric field shielding clothing comes in more varieties and is more challenging to identify. To address these challenges, this study constructed a dual-layer detection model for operator detection and electric field shielding clothing detection in live-line work scenarios. The first layer employs an improved detection transformer (IDETR) to locate operators within the environment. The second layer, based on the YOLOv11 framework integrated with recursive gated convolution (GnConv), is designed to classify three types of personal protective equipment: electric field shielding clothing, electric field shielding masks, and electric field shielding gloves. Experimental results showed that, compared with the DETR, the accuracy of the IDETR-based worker localization model improved by 2.29%. The accuracy of the GnConv-based YOLOv11 framework in the electric field shielding clothing detection task reaches 90.40%.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_36-Recursive_Gated_Convolution_Based_YOLOv11_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CICA Framework: Harnessing CSR, AI, and Blockchain for Sustainable Digital Culture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161135</link>
        <id>10.14569/IJACSA.2025.0161135</id>
        <doi>10.14569/IJACSA.2025.0161135</doi>
        <lastModDate>2025-11-29T11:00:00.8470000+00:00</lastModDate>
        
        <creator>Danang Danang</creator>
        
        <creator>Agustinus Budi Santoso</creator>
        
        <creator>Maya Utami Dewi</creator>
        
        <subject>Cybersecurity awareness; corporate social responsibility; artificial intelligence; blockchain; sustainable digital culture; emerging economies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Digital transformation has created new opportunities for organizations, but it has also intensified cybersecurity risk. In emerging economies, where institutional support and digital literacy remain limited, cybersecurity awareness plays a crucial role in strengthening digital resilience and fostering a sustainable digital culture. This study introduces the CSR-Integrated Cybersecurity Awareness (CICA) Framework, which conceptualizes Corporate Social Responsibility (CSR) as a key driver of cybersecurity awareness, reinforced by the adoption of artificial intelligence (AI) and blockchain technologies. Data were collected from companies in Central Java, Indonesia, that implement CSR-based digital initiatives, with responses gathered from managers, CSR officers, and IT staff. Using Structural Equation Modeling (SEM), the findings show that CSR significantly enhances cybersecurity awareness, AI adoption strengthens proactive security measures, and blockchain increases trust and transparency. The results also reveal that CSR mediates the relationship between digital technology adoption and sustainable digital culture. This study contributes by integrating CSR and cybersecurity through emerging technologies, offering theoretical insights and practical implications for organizations in developing regions.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_35-CICA_Framework_Harnessing_CSR_AI_and_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Software Multi-Threshold Decoders for Self-Orthogonal Codes in Modern Broadband Wireless Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161134</link>
        <id>10.14569/IJACSA.2025.0161134</id>
        <doi>10.14569/IJACSA.2025.0161134</doi>
        <lastModDate>2025-11-29T11:00:00.8000000+00:00</lastModDate>
        
        <creator>Nurlan Tashatov</creator>
        
        <creator>Gennady Ovechkin</creator>
        
        <creator>Zhuldyz Sailaukyzy</creator>
        
        <creator>Eldor Egamberdiyev</creator>
        
        <creator>Dina Satybaldina</creator>
        
        <creator>Gulmira Danenova</creator>
        
        <creator>Zarina Khassenova</creator>
        
        <subject>Forward Error Correction; Multi-Threshold Decoder; Self-Orthogonal Codes; Orthogonal Frequency-Division Multiplexing; Multiple Input Multiple Output; Bit Error Rate; signal-to-noise ratio; LDPC; turbo codes; space–time coding; fading channels; GPU acceleration; OpenCL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>5G and Internet of Things (IoT) wireless systems face challenges to reliable data transmission due to multipath fading, intersymbol interference, and the need for low-complexity Forward Error Correction (FEC). Conventional FEC techniques, such as Low-Density Parity-Check (LDPC) and turbo codes, provide high reliability but are unsuitable for resource-constrained IoT devices due to high decoding complexity. The aim of this study is to evaluate Multi-Threshold Decoders (MTDs) applied to Self-Orthogonal Codes (SOCs) as a low-complexity FEC solution in Orthogonal Frequency-Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO) systems with Space–Time Coding (STC). The systems are modeled under ITU-R (Outdoor A, TU6, RA6) and 3GPP Spatial Channel Model (Urban Macro/Micro) fading environments and compared with LDPC (WiMAX, DVB-S2) and turbo codes in terms of Bit Error Rate (BER), Signal-to-Noise Ratio (SNR), decoder complexity, antenna diversity, modulation order, and throughput. Results indicate that SOC+MTD outperforms short LDPC and turbo codes under deep fading while achieving reliability comparable to long LDPC codes at significantly lower decoding complexity. Min-sum refinement and approximate Maximum-Likelihood (ML) detection provide up to 2 dB additional SNR gain, and 1&#215;3 antenna diversity reduces the required Eb/N₀ by ~7 dB at BER = 10⁻⁵. Higher-order modulations such as 8PSK and 16APSK achieve 1.5–2&#215; higher bit rates with moderate SNR penalties, while Open Computing Language (OpenCL) based Graphics Processing Unit (GPU) acceleration enables a 32-fold increase in simulation speed. These findings demonstrate that SOCs decoded with MTD represent a promising low-complexity, high-reliability FEC approach for 5G and IoT physical layers.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_34-Performance_Evaluation_of_Software_Multi_Threshold_Decoders.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging AI and ML for Enhanced Business Intelligence Systems: A Research Landscape of Trends, Influences, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161133</link>
        <id>10.14569/IJACSA.2025.0161133</id>
        <doi>10.14569/IJACSA.2025.0161133</doi>
        <lastModDate>2025-11-29T11:00:00.7070000+00:00</lastModDate>
        
        <creator>Sunil Mandaliya</creator>
        
        <creator>Priti Kulkarni</creator>
        
        <subject>Artificial intelligence; machine learning; business intelligence; digital transformation; data analytics; bibliometric analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This research provides a systematic review of AI and ML applied to the BI context from 2014 to 2024. By characterizing the article and citation distribution and by tracing publication topics over time, this study outlines major developments, landmark contributions, and the future outlook of the research discipline at hand. However, despite rapid advancements, existing research remains fragmented. It lacks a consolidated understanding of how AI and ML are shaping modern BI systems, creating an urgent need for an integrated review. Prior studies often examine isolated techniques or industry-specific implementations, but very few provide a comprehensive synthesis that maps long-term trends, methodological patterns, and unresolved challenges in AI/ML-enabled BI. The study reviews more than 200 research articles obtained from top academic databases and finds a rise in the integration of AI/ML with conventional BI systems. Our results show the emergence of real-time, predictive, and automated analytics as much-needed and valuable to consumers. Artificial intelligence strengthens data-visualization capabilities, making complex information easier to access for those who need it. The research literature shows an increase in publications on AI and ML applications in BI systems because these technologies have gained substantial practical importance. The study investigates how AI and ML improve BI systems, paying special attention to predictive analytics as well as decision-making processes. The main aspects of interest unite advanced AI implementations with user-centric tools that serve multiple industries. Future directions for researchers include AI ethics in BI, creating simple AI tools for non-programmers, and investigating the long-run influence of AI-based BI systems on different industries. This study also presents future research suggestions.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_33-Leveraging_AI_and_ML_for_Enhanced_Business_Intelligence_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Configurable Storytelling System for Emotion Recognition in Children: Design and Pilot Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161132</link>
        <id>10.14569/IJACSA.2025.0161132</id>
        <doi>10.14569/IJACSA.2025.0161132</doi>
        <lastModDate>2025-11-29T11:00:00.6600000+00:00</lastModDate>
        
        <creator>Chadi Fouad Riman</creator>
        
        <creator>Wael Hosny Fouad Aly</creator>
        
        <creator>Carmen Mariana Pasca</creator>
        
        <subject>Computer gaming; serious games; education; emotions learning; peer-assisted learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Computer games are very popular among children, with serious games being widely used for teaching specific skills and rehabilitation across various age groups. This work presents a configurable, story-based serious game designed to teach emotion recognition to young children (ages 6 to 11) through a peer-assisted learning approach. The system emphasizes pedagogical adaptability, allowing educators to customize content without programming knowledge. Its core innovation lies in structuring collaboration where older children co-create emotion-focused questions and facts for younger peers, operationalizing Vygotsky&#39;s social development theory. A pilot study (N=4) demonstrated the system&#39;s effectiveness in improving emotion identification, with 75% of participants showing increased engagement with the story&#39;s emotional cues. Results indicated successful peer-assisted interaction, where younger children answered most questions posed by older peers and showed critical engagement through fact verification. The findings suggest that this configurable, peer-assisted storytelling approach offers a promising, accessible method for fostering social-emotional learning in educational settings.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_32-A_Configurable_Storytelling_System_for_Emotion_Recognition_in_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Kinematic Modeling and Design of a Robotic Manipulator System for Automated Fruit Harvesting in Intensive Orchards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161131</link>
        <id>10.14569/IJACSA.2025.0161131</id>
        <doi>10.14569/IJACSA.2025.0161131</doi>
        <lastModDate>2025-11-29T11:00:00.6130000+00:00</lastModDate>
        
        <creator>Nurbibi Imanbayeva</creator>
        
        <creator>Bekzat Amanov</creator>
        
        <creator>Aiman Nurmaganbetova</creator>
        
        <creator>Arman Moldashev</creator>
        
        <creator>Akbayan Aliyeva</creator>
        
        <creator>Sulukul Dauletbekova</creator>
        
        <subject>Robotic manipulator; kinematic modeling; fruit harvesting; intensive orchards; machine vision; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The development of robotic systems for automated fruit harvesting in intensive orchards has emerged as a critical response to labor shortages, high production costs, and the need for efficiency in modern agriculture. This study presents the kinematic modeling and design of a robotic manipulator system integrated into a mobile platform with an articulated lift mechanism, dual manipulators, and compliant gripping devices equipped with vision-based perception. The proposed system was modeled and validated through simulation in SolidWorks, enabling analysis of workspace coverage, kinematic stability, and motion optimization. Results indicate that the dual-manipulator configuration achieved a harvesting rate of up to 12 trees per hour, reducing the average fruit cycle time to less than seven seconds while lowering fruit loss to 14.5%, compared to over 30% in manual harvesting. The gripping device demonstrated a success rate of 94% with safe detachment forces between 2.5 and 3.5 N, ensuring minimal fruit damage and consistent quality. The lift mechanism provided stable vertical translation with minimal lateral deflection, supporting precise manipulator operation. Overall, the study highlights the potential of robotic manipulators to enhance productivity, safety, and sustainability in orchard management, while outlining future directions for field implementation, adaptive vision algorithms, and autonomous navigation.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_31-Kinematic_Modeling_and_Design_of_a_Robotic_Manipulator_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel YOLO-Like Multi-Branch Architecture for Accurate Apple Detection and Segmentation Under Orchard Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161130</link>
        <id>10.14569/IJACSA.2025.0161130</id>
        <doi>10.14569/IJACSA.2025.0161130</doi>
        <lastModDate>2025-11-29T11:00:00.5670000+00:00</lastModDate>
        
        <creator>Olzhas Olzhayev</creator>
        
        <creator>Nurbibi Imanbayeva</creator>
        
        <creator>Satmyrza Mamikov</creator>
        
        <creator>Bibigul Baibek</creator>
        
        <subject>Precision agriculture; detection; segmentation; YOLO-like architecture; multi-branch network; feature pyramid network; real-time inference; orchard monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study introduces a novel YOLO-like multi-branch deep learning architecture designed for accurate apple detection and segmentation in orchard environments, addressing the persistent challenges of occlusion, illumination variability, and fruit clustering. The proposed model integrates an enhanced backbone with C2f modules and a Spatial Pyramid Pooling Fast (SPPF) block to capture multi-scale receptive fields, while a Feature Pyramid Network (FPN) combined with a Path Aggregation Network (PAN) ensures effective top-down and bottom-up feature fusion. To extend beyond bounding box localization, a prototype-based segmentation head is incorporated, enabling precise instance mask generation with reduced computational overhead. The model was comprehensively evaluated on the MinneApple dataset, consisting of high-resolution orchard images with polygonal annotations, and compared against state-of-the-art detection and segmentation frameworks, including Faster R-CNN, Mask R-CNN, SSD, YOLO variants, YOLACT, and SOLOv2. Quantitative results demonstrated that the proposed approach achieved superior mean Average Precision (mAP@0.5 = 0.76), precision (0.83), and F1-score (0.76), while maintaining a competitive inference speed of 40 FPS, confirming its suitability for real-time agricultural applications. Qualitative analysis further highlighted robustness in complex orchard conditions, reinforcing the model’s applicability for automated harvesting, yield estimation, and orchard monitoring. These findings advance the state of agricultural computer vision by unifying detection and segmentation in a lightweight, high-performance framework.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_30-A_Novel_YOLO_Like_Multi_Branch_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Driven Model for Optimizing Active Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161129</link>
        <id>10.14569/IJACSA.2025.0161129</id>
        <doi>10.14569/IJACSA.2025.0161129</doi>
        <lastModDate>2025-11-29T11:00:00.4700000+00:00</lastModDate>
        
        <creator>Abang Asyraaf Aiman Abang Azahari</creator>
        
        <creator>Marshima Mohd Rosli</creator>
        
        <creator>Nor Shahida Mohamad Yusop</creator>
        
        <subject>Data-driven models; selection model; data quality characteristics; active learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Data-driven models depend on extensive datasets for precise predictions; yet, acquiring adequate labeled data for training these models is a challenge, especially with medical datasets that are constrained by privacy considerations, resulting in a deficiency of labeled data. Active Learning (AL) has developed as a cost-effective strategy that minimizes the quantity of labeled data required for training by selecting the most informative samples. The performance of active learning methods is significantly influenced by data quality characteristics, yet there is little guidance on selecting the most suitable active learning approach. This study presents a data-driven selection approach that suggests appropriate active learning methods based on dataset characteristics. The study examines the characteristics of the dataset and their impact on active learning performance, revealing significant correlations between data quality issues and the efficacy of active learning approaches. A rule-based selection model is subsequently constructed and verified through experiments and case studies across various datasets. The findings demonstrated consistent alignment between suggested and practically effective techniques. Statistical analysis verifies that the data-driven selection model exhibits reliability exceeding chance agreement, indicating its robustness and practical applicability in recommending AL techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_29-Data_Driven_Model_for_Optimizing_Active_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Experience Deficiencies in Mobile Tourism Applications: A Preliminary Study from Generation Z and Tourism Practitioners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161128</link>
        <id>10.14569/IJACSA.2025.0161128</id>
        <doi>10.14569/IJACSA.2025.0161128</doi>
        <lastModDate>2025-11-29T10:59:59.9870000+00:00</lastModDate>
        
        <creator>Huang Huihui</creator>
        
        <creator>Azliza Othman</creator>
        
        <creator>Nadia Diyana Mohd Muhaiyuddin</creator>
        
        <subject>Mobile tourism applications; Generation Z; tourism practitioners; user experience; preliminary study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>While previous research has broadly explored the user experience of mobile tourism applications (MTAs), few studies have examined user experience deficiencies from the dual perspectives of both tourism practitioners and Generation Z adults. This preliminary study sought to investigate user experience deficiencies present in MTAs, with an emphasis on Generation Z adults. This study analysed the viewpoints of tourism practitioners and Generation Z users on MTAs and revealed commonalities in their perspectives. Data was collected through semi-structured interviews with five tourism practitioners and eight Generation Z adults. Thematic analysis was used to identify key themes. The results revealed that the two groups of respondents held complementary perspectives on the user experience deficiencies faced by MTA users. The findings reveal that numerous MTAs are plagued by perplexing information architecture, unappealing interfaces, and inadequate emotional resonance. This study transcends the systemic limitations and usability challenges identified in previous studies, which concentrated on functionality issues, and offers initial practical suggestions for developers and designers to create user-centric interface designs.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_28-User_Experience_Deficiencies_in_Mobile_Tourism_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Deep Learning Architectures for Robust Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161127</link>
        <id>10.14569/IJACSA.2025.0161127</id>
        <doi>10.14569/IJACSA.2025.0161127</doi>
        <lastModDate>2025-11-29T10:59:59.9570000+00:00</lastModDate>
        
        <creator>Hamad Ali Abosaq</creator>
        
        <subject>Deep learning models; computer vision; emotion recognition; image processing; Grad-CAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Owing to insufficient labeled data and high class-level variation, emotion recognition remains a challenging task in computer vision. Deep learning (DL) makes it possible to automatically learn meaningful patterns from facial expressions: it captures simple details such as edges and textures at low layers, and gradually builds up to more complex information, including facial components and the overall meaning of the expression. Despite progress made via end-to-end learning, partial occlusions, inconsistent lighting, and biases within datasets are a few challenges that still remain. In this work, a DL-based model is presented to classify two emotional states of human expression. The pipeline comprises several components, including data preparation, preprocessing and analysis, and the use of pretrained networks, dimensionality-reduction techniques, and region-based explanation via Grad-CAM. More than 2,000 images of happy and sad faces were obtained from Kaggle. These images were used to test a custom-designed CNN and two widely adopted architectures, VGG16 and MobileNetV. The custom model attained 66% accuracy and a 67% F1-score, while VGG16 performed notably better with 78% accuracy and 77% F1, and the MobileNetV architecture achieved 77% accuracy and 73% F1. Statistical comparisons using paired t-tests and Wilcoxon signed-rank tests further confirmed these findings, showing that the pretrained models outperformed the custom CNN with a meaningful effect size. Although deeper networks are more susceptible to overfitting and the hand-crafted CNN exhibited underfitting, the results indicate that pretrained architectures provide a clear advantage for facial emotion recognition. This study makes a major contribution to existing computer vision research by addressing the trade-off between accuracy and generalization, and opens doors to the application of lightweight yet interpretable models in practical affective computing systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_27-AI_Driven_Deep_Learning_Architectures_for_Robust_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lexicon-Based Sentiment Analysis of Social Media Reviews for Floating Market Popularity in South Kalimantan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161126</link>
        <id>10.14569/IJACSA.2025.0161126</id>
        <doi>10.14569/IJACSA.2025.0161126</doi>
        <lastModDate>2025-11-29T10:59:59.9230000+00:00</lastModDate>
        
        <creator>Evi Lestari Pratiwi</creator>
        
        <creator>Ramadhani Noor Pratama</creator>
        
        <creator>Inayatul Ulya Ahyati</creator>
        
        <creator>Paula Dewanti</creator>
        
        <subject>Sentiment analysis; lexicon-based methods; floating markets; social media analytics; cultural tourism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Floating markets in South Kalimantan are culturally significant heritage destinations whose contemporary reputation is increasingly shaped by user-generated content on digital platforms. This study analyzes public perceptions of these markets by applying a lexicon-based sentiment analysis framework to 300 reviews collected from TripAdvisor and Google Maps between 2023 and 2024. The analytical workflow included text normalization, tokenization, stop-word removal, and stemming, with feature representation generated through term frequency–inverse document frequency (TF-IDF). Sentiment polarity was determined using bilingual lexicon-based scoring and categorized into positive, neutral, or negative sentiments. The results indicate that 45% of reviews expressed positive sentiment, highlighting cultural distinctiveness and riverfront experiences; 35% were neutral and provided descriptive logistical information; and 20% were negative, emphasizing waste issues, overcrowding, pricing inconsistencies, and perceived reductions in authenticity. TripAdvisor reviews exhibited greater emotional polarization than those on Google Maps. The findings demonstrate that lexicon-based sentiment analysis offers a transparent and effective approach for multilingual tourism contexts, providing insights into how digital narratives contribute to destination image formation. The study offers practical implications for improving environmental management, regulating visitor flows, and enhancing communication transparency within heritage tourism settings. It also contributes theoretically by underscoring the informational role of neutral reviews within electronic word-of-mouth dynamics. Future work may integrate machine learning-based sentiment classifiers or multimodal data to enhance analytical precision and extend the applicability of sentiment analysis in digital tourism research.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_26-Lexicon_Based_Sentiment_Analysis_of_Social_Media_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From External Stakeholder Pressure to Sustainable Practice in HEIs: Mechanisms and Internal Mediating Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161125</link>
        <id>10.14569/IJACSA.2025.0161125</id>
        <doi>10.14569/IJACSA.2025.0161125</doi>
        <lastModDate>2025-11-29T10:59:59.8930000+00:00</lastModDate>
        
        <creator>Xue Jin</creator>
        
        <creator>S. M. Ferdous Azam</creator>
        
        <creator>Jacquline Tham</creator>
        
        <subject>Sustainable procurement; HEIs; external stakeholder pressure; affective commitment; knowledge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Sustainable procurement is an important part of sustainable development in HEIs, playing a pivotal role in optimizing resource allocation, fulfilling social responsibilities, and promoting green development. However, existing research has paid insufficient attention to the impact of external stakeholder pressure on HEIs’ sustainable procurement and its intrinsic action mechanism. To address this gap, this study aims to explore the influence path of external stakeholder pressure on HEIs’ sustainable procurement and identify key mediating factors. This study collected 260 valid data points from Chinese higher education institutions with more than one year of purchasing experience through snowball sampling. PLS-SEM analysis results show that external stakeholder pressure not only directly promotes sustainable procurement but also exerts an indirect effect through two mediating paths: affective commitment and professional knowledge. The mediating role of affective commitment is stronger than that of knowledge, and affective commitment itself has the strongest direct impact on sustainable procurement among all variables. Theoretically, this study enriches the application scenarios of stakeholder theory and institutional theory in the field of higher education sustainable management. Practically, it provides actionable references for HEIs to enhance sustainable procurement performance by strengthening external stakeholder collaboration, optimizing knowledge management systems, and fostering employees’ affective commitment.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_25-From_External_Stakeholder_Pressure_to_Sustainable_Practice_in_HEIs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From User Experience Evaluation to Design Guidelines for E-Commerce Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161124</link>
        <id>10.14569/IJACSA.2025.0161124</id>
        <doi>10.14569/IJACSA.2025.0161124</doi>
        <lastModDate>2025-11-29T10:59:59.8630000+00:00</lastModDate>
        
        <creator>Layla Hasan</creator>
        
        <creator>Beatrice Lim Pei Ying</creator>
        
        <subject>User experience; UX guidelines; e-commerce websites; Malaysian e-commerce; hedonic features; pragmatic features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The evolution of the Internet and advances in information technology have significantly transformed business practices, leading to widespread e-commerce website development. To succeed in a competitive environment, website owners and designers must ensure that their e-commerce websites provide positive user experiences (UX). This research proposes UX guidelines for e-commerce websites, comprising 27 categories: 20 pragmatic features and 7 hedonic features. The guidelines were developed by evaluating the UX of two local e-commerce websites (Shopee and Lazada) and two international websites (Amazon and Alibaba), using a questionnaire completed by 200 participants to collect both quantitative and qualitative data. Results revealed common UX problems across all websites, including hedonic issues such as unattractive design and pragmatic issues such as poor usability, unclear layout, lack of user-friendliness, non-intuitive design, poor performance, and disorganization. Unique UX problems were also observed: some hedonic and pragmatic issues were specific to local websites, while certain pragmatic issues were unique to international websites. For Amazon, quantitative analysis showed the highest average rating for hedonic and pragmatic metrics at 3.87 out of 5, while qualitative analysis identified the least number of UX issues (13 in total: 1 hedonic and 12 pragmatic).</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_24-From_User_Experience_Evaluation_to_Design_Guidelines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Performance of TFS with Ensemble Learning for Cross-Project Software Defect Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161123</link>
        <id>10.14569/IJACSA.2025.0161123</id>
        <doi>10.14569/IJACSA.2025.0161123</doi>
        <lastModDate>2025-11-29T10:59:59.8170000+00:00</lastModDate>
        
        <creator>Pathiah Abdul Samat</creator>
        
        <creator>Yahaya Zakariyau Bala</creator>
        
        <creator>Nur Hamizah Hamidi</creator>
        
        <subject>Software; defect prediction; cross-project; ensemble learning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Software defect prediction (SDP) plays a key role in improving software quality by identifying defect-prone modules early in the development cycle. While within-project prediction has been widely studied, cross-project defect prediction (CPDP) remains challenging due to differences in datasets, high feature dimensionality, and poor model generalization. To address these challenges, this study enhances the Transformation and Feature Selection (TFS) approach by integrating ensemble learning techniques. Three methods, Gradient Boosting Machine (GBM), stacking, and hybridization, were explored to evaluate their effectiveness in improving CPDP performance. Experiments were conducted using the AEEEM datasets, with preprocessing steps including normalization, feature reduction, and the Synthetic Minority Oversampling Technique (SMOTE) to handle data imbalance. The models were trained on source projects and tested on separate target projects, with the F1 score used as the main evaluation metric. Results show that the TFS &#215; Stacking model achieved the highest overall performance, with a mean F1 score of 0.963, outperforming both TFS &#215; GBM (0.958) and TFS &#215; Hybridization (0.920). Compared to the original TFS &#215; Random Forest method, the stacking approach consistently provided significant improvements across all project pairs. These findings highlight the potential of combining TFS with ensemble learning to enhance defect prediction in projects with limited or no historical data. This work not only advances CPDP research but also offers practical value to software teams by enabling more accurate identification of defect-prone modules and better allocation of testing resources.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_23-Improving_the_Performance_of_TFS_with_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NDN-Based ICN Architecture Design to Improve Data Communication QoS in Kertajati Aerocity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161122</link>
        <id>10.14569/IJACSA.2025.0161122</id>
        <doi>10.14569/IJACSA.2025.0161122</doi>
        <lastModDate>2025-11-29T10:59:59.8000000+00:00</lastModDate>
        
        <creator>Enang Rusnandi</creator>
        
        <creator>Tri Kuntoro Priambodo</creator>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <subject>Aerocity; Information-Centric Networking (ICN); Named Data Networking (NDN); Quality of Service (QoS); ndnSIM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The development plan for Kertajati Aerocity requires adequate support from the data communication infrastructure. The primary challenge lies in the dense data flows, which involve very large volumes and diverse data types. The commonly used host-centric architecture suffers from latency, bandwidth utilization, and scalability issues, making it unsuitable for the dynamic Aerocity scenario. To address these limitations, this study proposes an Information-Centric Networking (ICN) architecture based on Named Data Networking (NDN), where communication relies on content names rather than IP addresses. The ndnSIM simulator was employed in the experimental evaluation of this architectural model, using operational data requirements derived from the Taoyuan Aerotropolis case. Performance was assessed through Quality of Service (QoS) metrics, including throughput, latency, and cache hit ratio. The simulation results indicate that throughput stabilized at ~10.1 Mbps (from 7.2 Mbps initially) with balanced node distribution, while latency averaged ~3 ms, with p95 &lt; 10 ms and over 95% of requests completed within low-delay bounds despite initial spikes. Cache statistics were unavailable (CacheHits/Misses = 0) due to tracer settings and traffic patterns, so cache analysis is left for future work. These findings highlight the novelty of integrating NDN-based ICN with Aerocity-specific traffic requirements. The proposed model is presented as a scalable solution for the evolving data communication infrastructure of Kertajati Aerocity, emphasizing an NDN-based architecture specifically designed for the multi-zone Aerocity ecosystem, including hierarchical and cross-zone naming schemes that model the operational flow of the Aerocity.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_22-NDN_Based_ICN_Architecture_Design_to_Improve_Data_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Advanced Multimodal Sentiment Classification Through Combined Attention Mechanisms: A-MSDA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161121</link>
        <id>10.14569/IJACSA.2025.0161121</id>
        <doi>10.14569/IJACSA.2025.0161121</doi>
        <lastModDate>2025-11-29T10:59:59.7530000+00:00</lastModDate>
        
        <creator>Soukaina FATIMI</creator>
        
        <creator>WAFAE SABBAR</creator>
        
        <creator>Abdelkrim BEKKHOUCHA</creator>
        
        <subject>Dual attention; self-attention; cross-modal attention; multimodal sentiment analysis; multimodal fusion; MVSA dataset; image–text interaction; deep multimodality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Interest in multimodal sentiment analysis has grown significantly due to the widespread sharing of text and images on social platforms. Existing approaches often emphasize either sentiment features within textual–visual data or the correlation between modalities, leaving gaps in effectively capturing both aspects simultaneously. To address these limitations, we propose the Advanced Multimodal Sentiment Analysis with Dual Attention (A-MSDA) model, which integrates self-attention and cross-modal attention mechanisms in a unified dual-attention framework. This design enables robust multimodal fusion by extracting salient textual and visual features while modeling their image–text interaction comprehensively. Experimental evaluation on MVSA-Single and MVSA-Multiple datasets demonstrates that A-MSDA achieves notable improvements in accuracy and F1-score, outperforming existing techniques by up to 3.4% in F1-score on MVSA-Multiple, while maintaining competitive performance on MVSA-Single. These results highlight the potential of A-MSDA to advance research in deep multimodality and sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_21-A_New_Advanced_Multimodal_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Newton-Raphson-Based Optimizer-Driven Temporal Convolutional Networks for Birth Rate Prediction in a Small Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161120</link>
        <id>10.14569/IJACSA.2025.0161120</id>
        <doi>10.14569/IJACSA.2025.0161120</doi>
        <lastModDate>2025-11-29T10:59:59.7370000+00:00</lastModDate>
        
        <creator>Shengyi Zhou</creator>
        
        <creator>Liang Chen</creator>
        
        <creator>Wei Han</creator>
        
        <creator>Bin Liu</creator>
        
        <subject>Temporal Convolutional Network; Bi-directional Long Short-Term Memory; prediction model; birth rate; meta-heuristic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>For economically developed small geographic regions, population forecasting serves as a vital tool for refined regional management. However, because they rely on the subjective experience of experts, traditional birth rate prediction methods suffer from limited accuracy and unreliable results. To address this limitation, this study introduces deep learning (DL) models into the domain of birth rate prediction. Specifically, a hybrid TCN-Bi-LSTM model is proposed, integrating a Temporal Convolutional Network (TCN) with a Bi-directional Long Short-Term Memory (Bi-LSTM) network to predict birth populations in small regions. The hybrid model leverages the strengths of the TCN and Bi-LSTM to capture both local temporal patterns and long-term hidden dependencies within birth rate time series data. The prediction model not only incorporates historical data on regional birth rates but also accounts for the influence of factors such as divorce rates, consumption levels, and population size. Furthermore, an enhanced meta-heuristic algorithm is designed to optimize the hyperparameters of the hybrid TCN-Bi-LSTM model to increase its prediction accuracy: the hippopotamus position update strategy was introduced into the Newton-Raphson-Based Optimizer (NRBO), yielding an improved NRBO (INRBO) algorithm. Finally, the performance of the proposed birth rate prediction model was validated using datasets from three regions or countries. The prediction results demonstrate that, compared to the other four models, the proposed INRBO-TCN-Bi-LSTM model achieves the best performance, with an average reduction of 95% in training loss.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_20-A_Newton_Raphson_Based_Optimizer_Driven_Temporal_Convolutional_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Spam Detection with Feature Engineering and Adaptive Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161119</link>
        <id>10.14569/IJACSA.2025.0161119</id>
        <doi>10.14569/IJACSA.2025.0161119</doi>
        <lastModDate>2025-11-29T10:59:59.7070000+00:00</lastModDate>
        
        <creator>Sadeem H. AlHomidan</creator>
        
        <creator>Marwah M. Almasri</creator>
        
        <creator>Shimaa A. Nagro</creator>
        
        <subject>Spam detection; machine learning; Multinomial Na&#239;ve Bayes; logistic regression; ensemble learning; text preprocessing; feature engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Spam email detection is a critical component of securing and maintaining reliable digital communication systems. This study explores the effectiveness of various machine learning algorithms in classifying spam, with an emphasis on enhancing accuracy and precision through systematic preprocessing, advanced feature engineering, and text preprocessing. Six models were evaluated: Logistic Regression, Support Vector Classifier, Multinomial Na&#239;ve Bayes, K-Nearest Neighbors, AdaBoost, and Bagging Classifier, using a comprehensive preprocessing pipeline that included Term Frequency–Inverse Document Frequency vectorization, feature scaling, and the incorporation of engineered features such as character counts. Experimental results reveal that Multinomial Na&#239;ve Bayes consistently achieved the highest precision (1.00) and strong accuracy (0.979) when paired with feature scaling, while Logistic Regression delivered robust and stable performance across multiple configurations, with precision exceeding 0.96, making it a reliable choice for real-world deployment. Although Support Vector Classifier and AdaBoost exhibited competitive baseline performance, Support Vector Classifier showed limitations when handling numeric features, whereas AdaBoost maintained consistent results across scenarios. These findings underscore the critical role of tailored preprocessing and ensemble learning in improving classification outcomes and highlight the comparative strengths of different algorithms in real-world spam detection. In particular, Multinomial Na&#239;ve Bayes proved highly effective for precision-critical tasks, while Logistic Regression emerged as a dependable solution for environments requiring consistent reliability. Overall, this work advances machine learning-based spam filtering by identifying models that successfully balance precision, adaptability, and computational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_19-Improving_Spam_Detection_with_Feature_Engineering_and_Adaptive_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adult-Size Bio-Mechatronic Exoskeleton with Robotic Underactuated Mechanism for Continuous Passive Motion Therapy on Hemiplegic Lower Limbs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161118</link>
        <id>10.14569/IJACSA.2025.0161118</id>
        <doi>10.14569/IJACSA.2025.0161118</doi>
        <lastModDate>2025-11-29T10:59:59.6730000+00:00</lastModDate>
        
        <creator>Adrian Nacarino</creator>
        
        <creator>Carol Sandoval</creator>
        
        <creator>Cesar Martel</creator>
        
        <creator>Jose Cornejo</creator>
        
        <creator>Margarita Murillo</creator>
        
        <creator>Jeanette Borja</creator>
        
        <creator>Josue Alata Rey</creator>
        
        <creator>Ricardo Palomares</creator>
        
        <subject>Bio-mechatronics; engineering design; exoskeleton; stroke rehabilitation; rehabilitation robotics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Cerebrovascular accidents (CVA) cause hemiplegia in adult patients, limiting their daily activities and reducing their autonomy, which calls for effective rehabilitation systems for the lower limbs. This study aims to design and simulate an underactuated robotic continuous passive motion system for lower-limb therapy in adult patients with post-stroke hemiplegia. Physical rehabilitation protocols were analyzed, and consultations with specialists were conducted to define criteria related to safety, stability, and adaptability. The mechanical subsystem was developed using SolidWorks with an adjustable structure; the electronic subsystem was designed using Autodesk Eagle and KiCad 7 to enable precise control; and the software subsystem was implemented in Wokwi using an ESP-32 microcontroller for parameter configuration and data transmission. Simulations were conducted in CoppeliaSim, modeling a virtual patient undergoing three rehabilitation modes: knee flexion-extension, ankle flexion-extension, and a combined full knee extension cycle. The simulations validated the correct execution of the planned movements, the stability of the power supply, precision in displacement control, and the ability to record and transmit therapeutic data. The strength of the materials and the effectiveness of the mechanism were also confirmed. The designed underactuated robotic system represents a safe and effective tool for physical rehabilitation, with the potential to improve mobility and quality of life in patients with lower-limb hemiplegia following a CVA.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_18-Adult_Size_Bio_Mechatronic_Exoskeleton_with_Robotic_Underactuated_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Multi-Feature Fusion GAN with Deformable Attention for HMD-Occluded Face Reconstruction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161117</link>
        <id>10.14569/IJACSA.2025.0161117</id>
        <doi>10.14569/IJACSA.2025.0161117</doi>
        <lastModDate>2025-11-29T10:59:59.6430000+00:00</lastModDate>
        
        <creator>Yingying Li</creator>
        
        <creator>Ajune Wanis Ismail</creator>
        
        <creator>Muhammad Anwar Ahmad</creator>
        
        <creator>Norhaida Mohd Suaib</creator>
        
        <creator>Fazliaty Edora Fadzli</creator>
        
        <subject>Generative adversarial network; Lie group feature learning; deformable attention; face reconstruction; virtual reality; head-mounted displays</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Head-mounted displays (HMDs) enhance virtual reality (VR) experiences, but occlude the upper face, hindering realistic user representation. To address this, some studies employ sensors to capture facial expressions under occlusion, while deep learning methods typically rely on image inpainting to restore missing regions. However, these approaches often suffer from limitations such as insufficient shallow feature representation, high computational complexity, and redundant model structures. This study proposes a lightweight generative adversarial network (GAN) that utilizes multi-feature fusion and deformable attention for face reconstruction under HMD occlusion. Specifically, a Lie group feature learning module is used to enhance shallow geometric representations, while reference-guided deformable attention dynamically focuses on occluded regions, improving both structural fidelity and efficiency. Experiments across multiple face datasets show that the proposed method outperforms existing mainstream approaches regarding structural fidelity, detail restoration capability, and model efficiency. The proposed framework offers a promising solution for integration with HMDs equipped with facial tracking, enabling more realistic and expressive avatars in VR applications.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_17-Lightweight_Multi_Feature_Fusion_GAN_with_Deformable_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Method Based on YOLOv7 for Detecting the Safety Helmets of Two-Wheeled Bicycle Riders</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161116</link>
        <id>10.14569/IJACSA.2025.0161116</id>
        <doi>10.14569/IJACSA.2025.0161116</doi>
        <lastModDate>2025-11-29T10:59:59.6130000+00:00</lastModDate>
        
        <creator>Xufei Wang</creator>
        
        <creator>Penghui Wang</creator>
        
        <creator>Zishuo Wang</creator>
        
        <creator>Jeonyoung Song</creator>
        
        <creator>Jinde Song</creator>
        
        <subject>Object detection; YOLOv7; MobileOne; CA module; TCHD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Convolutional neural networks (CNNs) are widely used in object detection tasks. However, CNNs with strong detection performance are often difficult to deploy on small, mobile embedded systems with limited computational resources because of their large number of parameters. To address this problem, a lightweight improvement method based on YOLOv7 for the safety helmet detection task is studied. The first step was the lightweight redesign of the network: taking YOLOv7 and YOLOv7-Tiny as the base networks, the backbone was improved using the MobileOne network, yielding YOLOv7-MobileOne (YOLOv7-MO) and YOLOv7-Tiny-MobileOne (YOLOv7-TMO). Compared with the original networks, the number of parameters decreased by 36.8% and 37.9%, respectively. Verified on the Pascal VOC dataset, YOLOv7-MO showed a 3.7% decrease in mAP@.5 compared to YOLOv7 and a 9.8% increase compared to YOLOv7-TMO. The second step was to improve detection accuracy: the Coordinate Attention (CA) module was integrated at different positions of YOLOv7-MO and YOLOv7-TMO to obtain YOLOv7-MO-Coordinate Attention (YOLOv7-MOC) and YOLOv7-TMO-Coordinate Attention (YOLOv7-TMOC). On the Pascal VOC dataset, YOLOv7-MOC improved mAP@.5 by 1.44% over YOLOv7-MO while reducing FPS by 5.4 Hz. On the self-constructed two-wheeled cyclists helmet dataset (TCHD), YOLOv7-MOC increased mAP@.5 by 0.8% over YOLOv7-MO while reducing FPS by 0.3 Hz, and improved mAP@.5 by 1.0% over YOLOv7 to 77.1%, with a corresponding FPS 28.7 Hz higher, reaching 89.3 Hz. Finally, experiments were conducted with the YOLOv7-TMOC network model on a Raspberry Pi 4B embedded development board running Linux and the PyTorch framework. The results show that the improved network model can be applied to object detection on small embedded systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_16-An_Improved_Method_Based_on_YOLOv7.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Emotion Recognition via Parallel Dual-Branch Fusion Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161115</link>
        <id>10.14569/IJACSA.2025.0161115</id>
        <doi>10.14569/IJACSA.2025.0161115</doi>
        <lastModDate>2025-11-29T10:59:59.5800000+00:00</lastModDate>
        
        <creator>Zhongliang Wei</creator>
        
        <creator>Chang Ge</creator>
        
        <creator>Lijun Zhu</creator>
        
        <creator>Jinmin Ye</creator>
        
        <subject>RAVDESS; Speech Emotion Recognition; spectrogram modeling; probability-space fusion; wav2vec 2.0 fine-tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Speech Emotion Recognition (SER) has become a pivotal topic within affective computing and human–computer interaction, where the core challenge lies in jointly capturing both the time–frequency structure and the semantic context of speech. To overcome the shortcomings of current approaches—including single-view feature representation, the lack of emotional discriminability in self-supervised models, and suboptimal complementarity among fusion strategies—this study proposes a parallel dual-branch fusion architecture for SER. The framework consists of a wav2vec 2.0 branch and a CNN–Transformer spectrogram branch, which respectively extract contextual semantic representations from raw waveforms and explicit time–frequency features from spectrograms. A logistic regression fusion mechanism is further introduced at the decision level to achieve adaptive weighting in the probability space, thereby fully leveraging the complementary strengths of the two feature types. Experiments carried out on the RAVDESS audio subset showed that the proposed model surpassed several mainstream baselines (e.g., CNN-n-GRU and RELUEM), achieving 92.7% accuracy and 92.2% Macro-F1, with an average improvement of about 3.2 percentage points. The layer unfreezing studies confirmed the effectiveness of partial fine-tuning for transferring pretrained features, while the comparative experiments on fusion strategies validated the superiority of probability-space fusion in both performance and stability. Overall, the proposed framework achieves simultaneous gains in accuracy and robustness through feature complementarity, branch decoupling, and lightweight fusion. Future work will explore cross-lingual generalization, multimodal extensions, lightweight deployment, and dynamic emotion modeling, contributing to more efficient affective computing and intelligent interaction systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_15-Speech_Emotion_Recognition_via_Parallel_Dual_Branch_Fusion_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting of Saudi Stock Prices Using Statistical and Machine Learning Models: A Multi-Model Comparative Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161114</link>
        <id>10.14569/IJACSA.2025.0161114</id>
        <doi>10.14569/IJACSA.2025.0161114</doi>
        <lastModDate>2025-11-29T10:59:59.5670000+00:00</lastModDate>
        
        <creator>Eissa Alreshidi</creator>
        
        <subject>Financial time-series forecasting; Saudi Stock Market; ARIMA; XGBoost; deep learning; ensemble models; LSTM; stock price prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Accurate forecasting of financial time-series data is a critical necessity for investors in emerging markets. This study evaluates the predictive power of seven statistical and machine learning models: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), Random Forest, eXtreme Gradient Boosting (XGBoost), Support Vector Regression (SVR), K-Nearest Neighbors (KNN), and Decision Tree, across eight major stocks on the Saudi Stock Exchange (Tadawul). Employing a robust, lag-based forecasting framework, we assessed model performance using RMSE, R&#178;, directional accuracy, and computational efficiency. We introduce a hybrid evaluation framework that integrates magnitude accuracy, directional precision, and runtime profiling to guide model selection at the individual stock level, an approach that has not previously been applied to the Saudi market. The empirical evidence shows that model selection is heavily stock-specific. The classical ARIMA model consistently outperformed the others, delivering the lowest error and highest goodness-of-fit for stable, high-capitalization stocks, underscoring the continued relevance of linear autoregressive components. Conversely, the ensemble method XGBoost offered the best balance of computational efficiency and predictive performance for more volatile series, with an optimal operational profile (runtime of ~1.5 s). While deep learning (LSTM) and SVR models fell short on magnitude metrics owing to the low signal-to-noise ratio in daily close price data, these findings offer practical guidance for investors, analysts, and policymakers seeking scalable, stock-specific forecasting strategies. Considering Saudi Arabia’s Vision 2030 and the increasing demand for real-time financial intelligence, this research addresses the urgent need for scalable, stock-specific forecasting frameworks that support investor decision-making and policy formulation.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_14-Forecasting_of_Saudi_Stock_Prices_Using_Statistical_and_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed IoT System for Monitoring and Controlling Movable Zamzam Tanks in the Holy Mosque</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161113</link>
        <id>10.14569/IJACSA.2025.0161113</id>
        <doi>10.14569/IJACSA.2025.0161113</doi>
        <lastModDate>2025-11-29T10:59:59.5500000+00:00</lastModDate>
        
        <creator>Taha M. Mohamed</creator>
        
        <subject>Zamzam tanks monitoring; IoT smart systems; automatic monitoring and control; Logistics 5.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Zamzam water is very important for all Muslims worldwide, especially for pilgrims and Umrah performers visiting the two holy mosques in Saudi Arabia. Many movable Zamzam tanks are dispersed throughout the two holy mosques and are available to pilgrims and visitors for drinking. When the two holy mosques are crowded, visitors want to know the locations of the nearest filled Zamzam tanks, and administrators and operators need to monitor tank statuses to perform the necessary logistics operations. In this study, we propose a novel IoT system for automating, monitoring, and controlling water levels in Zamzam tanks in the two holy mosques. The proposed system modifies the existing Zamzam tank architecture to create smart tanks. The system is analyzed, modeled, simulated, and discussed. Simulation results show that the system is highly useful for administrators, operators, and holy mosque visitors, as it saves time, reduces effort, automates logistics, and increases productivity; it is especially important for stakeholders in crowded seasons. The performance metrics indicate that the proposed system is scalable, with minimal delay and high throughput. The proposed system can also contribute to the digitalization of the two holy mosques, supporting the Saudi Vision 2030.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_13-A_Proposed_IoT_System_for_Monitoring_and_Controlling_Movable_Zamzam_Tanks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Vulnerability Analysis and Enhancement of a Lightweight Sensor Node Authentication Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161112</link>
        <id>10.14569/IJACSA.2025.0161112</id>
        <doi>10.14569/IJACSA.2025.0161112</doi>
        <lastModDate>2025-11-29T10:59:59.5200000+00:00</lastModDate>
        
        <creator>Kim Kyoung Yee</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Lightweight authentication; IoMT security; ECC; timestamp–nonce validation; replay resistance; formal verification; smart healthcare systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study presents a comprehensive structural and mathematical security analysis of LightAuth, a lightweight authentication framework, specifically designed for smart health sensor networks. We delve into its core components and identify several critical vulnerabilities that could compromise the integrity and security of the system. Our analysis reveals that the framework suffers from insufficient freshness verification, a flawed and biased key agreement process, and the persistent exposure of fixed identifiers, which makes it susceptible to various attacks. To address these significant security weaknesses, we propose a suite of practical and effective countermeasures. These enhancements include the implementation of a robust timestamp+nonce validation mechanism to ensure message freshness and the introduction of mutual signature verification to prevent man-in-the-middle attacks. Furthermore, we advocate for the use of dynamic pseudonyms to obfuscate user identities and enhance privacy. To bolster long-term security, we also integrate perfect forward secrecy (PFS), which ensures that a compromise of a long-term key does not compromise past session keys. We conducted extensive simulations to evaluate the effectiveness of these proposed enhancements. The results demonstrate that our improvements achieve a remarkable 100% replay detection rate, while the performance degradation remains within acceptable limits, proving the practicality of our solution.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_12-Security_Vulnerability_Analysis_and_Enhancement_of_a_Lightweight_Sensor_Node.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximizing Influence and Mitigating Harmful Viral Content in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161111</link>
        <id>10.14569/IJACSA.2025.0161111</id>
        <doi>10.14569/IJACSA.2025.0161111</doi>
        <lastModDate>2025-11-29T10:59:59.4700000+00:00</lastModDate>
        
        <creator>Muhammad Mohsin</creator>
        
        <creator>Muhammad Yaseen</creator>
        
        <creator>Umar Farooq Khattak</creator>
        
        <creator>Gohar Rahman</creator>
        
        <subject>Sentiment analysis; harmful content; counter-narratives; network analysis; PageRank; Vander influence maximization; digital ethics; social media moderation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>This study presents an integrated framework combining sentiment and network analysis for social media moderation to mitigate the spread of harmful viral posts. The research focuses on detecting harmful content, identifying key influencers, and generating counter-narratives to promote constructive engagement. A Twitter dataset was preprocessed to remove URLs, special characters, and numbers, and the VADER tool was used to classify tweets into harmful, positive, and neutral categories. Network analysis was conducted by constructing directed retweet graphs to visualize information flow and identify influential users using the PageRank algorithm and Vander centrality metrics. Counter-narratives were generated for harmful tweets to neutralize negativity and encourage positive discourse. Results show that integrating sentiment and network analysis reduces harmful content propagation by approximately 60% through effective targeting of influential users. The proposed approach offers a scalable, data-driven model for social media moderation, contributing to safer and more ethical online communication environments by balancing freedom of expression with responsible content regulation.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_11-Maximizing_Influence_and_Mitigating_Harmful_Viral_Content_in_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Planning the Dissemination Path of Traditional Chinese Medicine Culture Based on the Optimized Ant Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161110</link>
        <id>10.14569/IJACSA.2025.0161110</id>
        <doi>10.14569/IJACSA.2025.0161110</doi>
        <lastModDate>2025-11-29T10:59:59.4400000+00:00</lastModDate>
        
        <creator>Qian Guo</creator>
        
        <creator>Ying Ma</creator>
        
        <subject>TCM dissemination; Ant Colony Optimization (ACO); intelligent path planning; cultural communication networks; knowledge graph optimization; algorithmic dissemination strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Strategic planning improves the efficacy, reliability, and impact of TCM cultural transmission. Many existing systems use heuristic or rule-based approaches, which suffer from path redundancy, low adaptability, and limited scalability in non-static networks. To address these constraints, we propose RACO-TCM (Reinforced Ant Colony Optimization for TCM Dissemination), a novel algorithmic dissemination technique that combines Ant Colony Optimization with reinforcement learning to create adaptive, reward-driven cultural routes. The framework outperforms standard ant colony optimization by using dynamic pheromone updates, reinforcement-based exploration, and redundancy-aware heuristics to improve global search, convergence time, and robustness against local optima. We quantitatively assessed RACO-TCM against other methods and found that it increased cultural diffusion efficiency by 18.6% and reduced repeated routes by 12.3%. This was achieved by constructing a vast and instructive TCM knowledge graph with over 46,000 prescriptions, 8,000 herbs, and 25,000 chemical compounds. Overall, the resulting TCM transmission strategy is adaptive, scalable, and culturally consistent, with applications in business and TCM tourism management and in promoting healthcare, digital education, and cultural services in smart cities.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_10-A_Method_for_Planning_the_Dissemination_Path.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual-Attention ResUNet-GAN for Secure Image Steganography: Optimizing the Trade-off Between Imperceptibility and Payload Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161109</link>
        <id>10.14569/IJACSA.2025.0161109</id>
        <doi>10.14569/IJACSA.2025.0161109</doi>
        <lastModDate>2025-11-29T10:59:59.4100000+00:00</lastModDate>
        
        <creator>Zobia Shabeer</creator>
        
        <creator>Muhammad Naeem</creator>
        
        <creator>Gohar Rahman</creator>
        
        <creator>Mehmood Ahmed</creator>
        
        <creator>Muhammad Zeeshan</creator>
        
        <creator>Asim Shahzad</creator>
        
        <creator>Salamah binti Fattah</creator>
        
        <subject>Image steganography; Generative Adversarial Networks (GANs); payload capacity; steganalysis robustness; artificial intelligence; BOSSbase; ALASKA#2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Secure and high-capacity data concealment has become a requirement of modern multimedia communication, particularly as protection and privacy concerns grow. The framework introduced in this study, an improved Dual-Attention ResUNet-GAN, optimizes the trade-off among imperceptibility, robustness, and payload capacity in image steganography. The model uses two PatchGAN discriminators: a visual realism discriminator and a learned steganalyzer. Two ResNet-34-based encoders with CBAM dual attention are employed. Before the data is embedded, AES-256 encryption in CBC mode provides cryptographic confidentiality. Experiments on the COCO, BOSSbase, and ALASKA2 datasets evaluate the proposed method&#39;s performance, yielding PSNR=42.5 dB, SSIM=0.98, BER=0.02, and high resistance to steganalysis (PE=91.2% vs. SRNet). The framework also directs embedding to high-entropy regions, allowing both conservative payloads (0.0156 bpp) and capacity-driven configurations (0.4 bpp) without affecting image quality. The findings validate that the proposed system is well suited to secure communication and intelligent data-hiding applications in real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_9-Dual_Attention_ResUNet_GAN_for_Secure_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Android Malware Detection Using Deep Learning and Ensemble Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161108</link>
        <id>10.14569/IJACSA.2025.0161108</id>
        <doi>10.14569/IJACSA.2025.0161108</doi>
        <lastModDate>2025-11-29T10:59:59.3770000+00:00</lastModDate>
        
        <creator>Abdul Museeb</creator>
        
        <creator>Yaman Hamed</creator>
        
        <creator>Rajalingam Sokkalingam</creator>
        
        <creator>Anis Amazigh Hamza</creator>
        
        <creator>Atta Ullah</creator>
        
        <creator>Iliyas Karim Khan</creator>
        
        <subject>Android malware detection; machine learning; API calls; permissions; android security; malware classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Android malware continues to pose significant security threats, with evolving tactics that often bypass traditional detection systems. Existing detection mechanisms remain ineffective against obfuscated or novel malware variants, necessitating the development of more robust detection techniques. This study introduces a comprehensive machine learning framework for Android malware detection that leverages a systematic comparison between a deep neural network and diverse ensemble methods, including Voting Ensemble, Stacking Ensemble, XGBoost, and Random Forest. Unlike prior studies that often focus on individual approaches, this work provides an empirical benchmark that demonstrates how practical ensemble configurations can achieve superior performance while maintaining computational efficiency. The model is trained using the CIC-AndMal2017 dataset, incorporating a comprehensive set of static features, including API calls, permissions, services, receivers, and activities. Feature selection was performed to optimize model performance, reducing redundancy and improving detection accuracy. The models were evaluated on multiple classification metrics, including accuracy, F1-score, and confusion matrices, with the Voting Ensemble model achieving an accuracy of 94.14%, outperforming all other approaches, including the deep neural network. This study contributes to the field by demonstrating that a carefully constructed ensemble of diverse classifiers can not only improve detection accuracy but also offer a more scalable, lightweight solution compared to complex deep learning models. The research provides a significant advancement in practical Android malware detection by identifying optimal strategies that balance performance with computational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_8-Enhanced_Android_Malware_Detection_Using_Deep_Learning_and_Ensemble_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anamel: Children’s Psychological and Mental Health Detection Application by Drawing Analysis Based on AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161107</link>
        <id>10.14569/IJACSA.2025.0161107</id>
        <doi>10.14569/IJACSA.2025.0161107</doi>
        <lastModDate>2025-11-29T10:59:59.3470000+00:00</lastModDate>
        
        <creator>Amal Alshahrani</creator>
        
        <creator>Manar Mohammed Almatrafi</creator>
        
        <creator>Jenan Ibrahim Mustafa</creator>
        
        <creator>Layan Saad Albaqami</creator>
        
        <creator>Raneem Abdulrahman Aljabri</creator>
        
        <subject>Application; mental health; drawings; artificial intelligence; YOLO; deep learning; computer vision; flutter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Psychological and mental health issues affect many people worldwide, but their impact is strongest on children, from early childhood through the teenage years. Analyzing drawings is a common way for specialists to help children express and detect such feelings. This study therefore employs artificial intelligence to improve and ease this process by developing an application that gives parents and specialists a first-look analysis of their children&#39;s drawings. The Anamel application lets users go through an experience that simulates clinics and psychological centers by detecting several feelings: happiness, sadness, anger, and aggression. Users start by answering pre-questions to collect initial information about the child. They can then upload an image of the drawing, which is processed using computer vision techniques; an AI model based on the YOLO deep learning architecture provides the analysis results with an accuracy of 94%. Finally, users answer post-questions to confirm the final result. Specialists can also register in the application so that parents can contact them for further help.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_7-Anamel_Childrens_Psychological_and_Mental_Health_Detection_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Generalist Conversational AI Against Foundational Models of Instructional Design: A Comparative Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161106</link>
        <id>10.14569/IJACSA.2025.0161106</id>
        <doi>10.14569/IJACSA.2025.0161106</doi>
        <lastModDate>2025-11-29T10:59:59.3170000+00:00</lastModDate>
        
        <creator>Abdelmounaim AZINDA</creator>
        
        <creator>Mohamed Khaldi</creator>
        
        <subject>Generative AI; large language models (LLMs); instructional design; constructive alignment; pedagogical evaluation; AI ethics in education; Human-AI collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>The rapid integration of Generative AI into instructional engineering presents a critical challenge: verifying the capacity of these tools to strictly adhere to systemic theoretical models of learning, despite the risk of generating &quot;pedagogically hallucinated&quot; content that possesses surface plausibility but lacks structural validity. This study addresses this gap by systematically evaluating the performance of generalist conversational AIs against foundational principles of Instructional Design (ID). Adopting a qualitative comparative analysis of four state-of-the-art models available in October 2025—GPT-5 (OpenAI), Gemini 2.5 Pro (Google), Claude Sonnet 4.5 (Anthropic), and DeepSeek V3.2 (DeepSeek AI)—we assessed their outputs for complex design scenarios against a multi-dimensional framework grounded in authoritative theories, including Biggs’s Constructive Alignment, Merrill’s First Principles of Instruction, and Universal Design for Learning (UDL). Results reveal a &quot;paradox of competence without comprehension,&quot; where models demonstrate high factual reliability and linguistic fluency but exhibit significant shortcomings in maintaining logical pedagogical consistency, particularly regarding assessment alignment and accessibility standards, with only Claude Sonnet 4.5 demonstrating a notable proactive partnership posture. Consequently, we conclude that current generalist LLMs cannot function as autonomous expert designers and argue for a shift in professional practice toward Critical AI Literacy, where the human designer leverages AI for ideation but remains the essential guarantor of the pedagogical architecture.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_6-Evaluating_Generalist_Conversational_AI_Against_Foundational_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Large Language Models in the Software Development Lifecycle: Opportunities and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161105</link>
        <id>10.14569/IJACSA.2025.0161105</id>
        <doi>10.14569/IJACSA.2025.0161105</doi>
        <lastModDate>2025-11-29T10:59:59.2830000+00:00</lastModDate>
        
        <creator>Jasdeep Singh Bhalla</creator>
        
        <creator>Mansimar Kaur Jodhka</creator>
        
        <subject>Large Language Models (LLMs); Software Development Lifecycle (SDLC); AI-assisted software engineering; automated code generation; software testing; software architecture; DevOps automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet existing studies provide fragmented or domain-specific examinations of their impact. This survey aims to systematically analyze how LLMs influence the Software Development Lifecycle (SDLC) end-to-end, identifying capabilities, limitations, risks, and emerging opportunities. We review 147 publications from 2017–2025 across ACM Digital Library, IEEE Xplore, ACL Anthology, and arXiv using predefined inclusion and exclusion criteria. Unlike prior surveys that focus narrowly on code generation or testing, this work provides an SDLC-wide synthesis supported by empirical benchmarks, industrial evidence, and a unified taxonomy mapping LLM capabilities to each phase of development. We further examine technical risks including hallucinations, dataset governance, robustness, security vulnerabilities, and auditability. The goal of this survey is to consolidate fragmented knowledge, highlight practical adoption challenges, and outline future research directions essential for building trustworthy, scalable, and effective LLM-enabled software engineering systems.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_5-Leveraging_Large_Language_Models_in_the_Software_Development_Lifecycle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SUNDUS: A Human-Centered Framework for Fostering Human-AI Collaboration Through Transparency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161104</link>
        <id>10.14569/IJACSA.2025.0161104</id>
        <doi>10.14569/IJACSA.2025.0161104</doi>
        <lastModDate>2025-11-29T10:59:59.2530000+00:00</lastModDate>
        
        <creator>Abduljaleel Hosawi</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>SUNDUS; human-AI collaboration; Human-Computer Interaction (HCI); transparency; multisensory display; cognitive load; decision-making; trust; training; OMAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Imagine a future where a security operator can understand a complex threat in fractions of a second without specialized training. Not only that, but the operator can instantly understand the logic behind a particular AI warning. This vision of AI-augmented cognition is the focus of the proposed SUNDUS framework. However, today, this promise faces a critical barrier: the user. The immense potential of AI is often rendered useless when its operator lacks the professional training necessary to interpret and implement its outputs in the real world. This context reveals a critical research gap: while collaborative human-AI systems significantly enhance performance, their efficacy remains fundamentally dependent on extensive operator training, as evidenced by the OMAR (Operator Machine Augmentation Resource) system. The current paper proposes the SUNDUS (System for Understanding, Navigating, and Decision-Making Under Uncertainty and Support) framework, a theoretical model designed through a Human-Computer Interaction (HCI) lens to enhance human-AI collaboration by making system design a substitute for formal training. Leveraging principles from AMID (Augmented Multisensory Interface Design) and Visual Representations of Meta-Information, SUNDUS employs enhanced transparency—via Natural Language Explanations (NLEs), confidence scores, and multisensory cues—to offload cognitive burden and increase intuitive understanding. We propose a comparative experimental methodology to validate SUNDUS against OMAR, hypothesizing that SUNDUS will yield significantly higher decision-making accuracy and appropriately calibrated trust alongside a lower cognitive load in untrained users. The key implication is a scalable, human-centric design blueprint that shifts the burden of adaptation from the operator to the AI system, unlocking the full potential of augmented cognition.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_4-SUNDUS_A_Human_Centered_Framework_for_Fostering_Human_AI_Collaboration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Test Access Mechanisms for Improving Scan Compression and Test Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161103</link>
        <id>10.14569/IJACSA.2025.0161103</id>
        <doi>10.14569/IJACSA.2025.0161103</doi>
        <lastModDate>2025-11-29T10:59:59.2370000+00:00</lastModDate>
        
        <creator>Vijay Sontakke</creator>
        
        <subject>Compression ratio; scan bandwidth; TAM; test coverage; test scheduling; test time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>System-on-Chip (SoC) devices now integrate dozens, and sometimes hundreds, of heterogeneous embedded IP cores, each of which must be verified after fabrication. Industry therefore relies on modular testing so that every core can be exercised and validated without revealing its internal implementation, and so that designers can reuse test patterns efficiently. A persistent challenge is the mismatch between the limited scan-in/scan-out bandwidth at the chip boundary and the much larger channel capacity that would be required if all cores were tested simultaneously. Widely used scan-compression schemes, such as Embedded Deterministic Test (EDT), offer several options for channel-count selection, which must also be considered when allocating bandwidth across cores. These requirements are met using a Test Access Mechanism (TAM), and over the past two decades researchers have proposed many TAM architectures that move well beyond simple pin multiplexing, each balancing wiring overhead, concurrency, pattern compression, and scheduling complexity in different ways. However, a combined study of their effectiveness across these multiple aspects has not been available. This study reviews the principles, algorithms, and architectures of TAM and test-scheduling techniques. A classification of the techniques is provided, based on the method used and the area of application. The goal of the study is to create a platform for the future development of test access mechanisms, and it is expected to be helpful to both industry and academia.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_3-Analysis_of_Test_Access_Mechanisms_for_Improving_Scan_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agentic AI in Commodity Trading: A Comparative Simulation Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161102</link>
        <id>10.14569/IJACSA.2025.0161102</id>
        <doi>10.14569/IJACSA.2025.0161102</doi>
        <lastModDate>2025-11-29T10:59:59.2070000+00:00</lastModDate>
        
        <creator>TarakRam Nunna</creator>
        
        <creator>Ananya Samala</creator>
        
        <subject>Agentic AI; Agent Based Modeling; commodity trading; reinforcement learning; cognitive agents; market simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>Agent Based Modeling (ABM) has long been used to study emergent market behavior, but most prior financial ABM frameworks rely on reactive rule-based or reinforcement learning agents with limited cognitive capability. This study introduces a novel integration of agentic artificial intelligence (AI) featuring autonomous goal setting, persistent memory, and multi-step planning into commodity trading simulations. We develop a hybrid ABM-Agentic AI framework and comparatively evaluate 20 traditional agents and 20 Agentic AI agents across Natural Gas and WTI Crude Oil markets over multiple horizons (1M–3Y). To address external validity concerns, synthetic price series are calibrated to historical volatility regimes. Results show consistent performance improvements for Agentic AI, with large practical effect sizes, although statistical significance is limited due to small sample sizes. We also identify sources of potential bias, such as higher initial skill ranges and frictionless execution, and present controlled adjustments to mitigate them. The study makes four contributions: 1) a novel simulation architecture for integrating cognitive AI into ABM; 2) explicit operationalization of agentic capabilities; 3) a controlled comparative evaluation across commodities; and 4) robustness checks examining sensitivity to volatility and parameter shifts. Limitations and recommendations for real data validation and realistic microstructure modeling are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_2-Agentic_AI_in_Commodity_Trading_A_Comparative_Simulation_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing LXC Containers on Raspberry Pi 4B/5 Boards in a Proxmox Virtual Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161101</link>
        <id>10.14569/IJACSA.2025.0161101</id>
        <doi>10.14569/IJACSA.2025.0161101</doi>
        <lastModDate>2025-11-29T10:59:59.0670000+00:00</lastModDate>
        
        <creator>Eric Gamess</creator>
        
        <creator>Deuntae Winston</creator>
        
        <subject>Container; virtualization; Proxmox VE; LXC; single-board computers; Raspberry Pi; performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(11), 2025</description>
        <description>On one hand, containerization is gaining acceptance as a lightweight virtualization alternative to Virtual Machines (VMs). On the other hand, Single-Board Computers (SBCs) are increasingly used due to their affordability, versatility, low energy consumption, and growing computational power. In this work, extensive experiments were conducted to assess the capabilities and limitations of Linux Containers (LXC) when deployed on a Proxmox Virtual Environment (Proxmox VE) cluster, built with Raspberry Pi (RPi) computers. The clusters consisted of either two Raspberry Pi 4 Model B (RPi 4B) or two Raspberry Pi 5 (RPi 5) with identical characteristics, connected through an Ethernet switch. The experiments aimed to determine: 1) the maximum number of containers that can be run simultaneously when varying their operating system, 2) the maximum number of containers that can be executed in parallel when varying their allocated RAM, 3) the maximum number of containers that can be run concurrently under different SBC memory configurations, 4) the time for container migration, and 5) the network performance between two containers. For storage, SATA SSDs were connected to the RPi 4B boards through their USB ports, while the RPi 5 boards used NVMe SSDs connected via their PCIe interfaces. The cluster formed with RPi 5 boards outperformed the one built with RPi 4B boards, showing significant improvements in the migration experiments. In terms of network performance, the results were similar between containers running on different nodes. However, much larger differences were observed between containers running on the same node. With this study, the authors aim to assist users and researchers in identifying and selecting the technologies and configurations that best meet the performance requirements of their specific use cases.</description>
        <description>http://thesai.org/Downloads/Volume16No11/Paper_1-Assessing_LXC_Containers_on_Raspberry_Pi_4B5_Boards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison Between FAHP-TOPSIS and FAHP-FTOPSIS Methods for Selecting the Best Products for Home-Based Sellers: A Performance Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610108</link>
        <id>10.14569/IJACSA.2025.01610108</id>
        <doi>10.14569/IJACSA.2025.01610108</doi>
        <lastModDate>2025-11-03T08:06:32.2300000+00:00</lastModDate>
        
        <creator>Selvia Lorena Br Ginting</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <subject>Home-based sellers; product selection; FAHP-TOPSIS; FAHP-FTOPSIS; sensitivity analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study addresses a common problem faced by home-based sellers: determining the right product ideas to sell. To overcome this, a decision support system using a multi-criteria decision-making technique with a hybrid approach was applied, integrating the FAHP-TOPSIS and FAHP-FTOPSIS methods in the product selection process. The analysis results show that the FAHP-TOPSIS method is more effective in producing product rankings, with alternative A5948 ranking first with a score of 0.946. The FAHP-FTOPSIS method also placed the same alternative first, with a score of 0.679. The ranking analysis showed that the addition of fuzzy logic did not affect the rankings but did affect the score values of the alternatives. A sensitivity analysis using Mean Absolute Deviation (MAD), Mean Square Error (MSE), and Spearman Correlation (SC) was conducted. FAHP-TOPSIS performed best at Weight 1 (MAD 89, MSE 18.486, SC 0.972) and excelled at Weight 3 (MAD 144, MSE 51.791, SC 0.997), though it was more volatile at other weights. Overall, at the base weight (Weight 1), TOPSIS shows the best ranking stability (low MAD/MSE, high SC), while with shifted weights (especially Weight 3), FTOPSIS better maintains ordering (SC ≈ 1) despite higher error at Weight 2. Practically, TOPSIS suits baseline scenarios; FTOPSIS is more robust under weight variations, with error variance control still necessary. These findings provide a practical guideline: use FAHP-FTOPSIS when preferences are uncertain, and FAHP-TOPSIS when preferences are clear. The resulting rankings can be directly adopted by sellers to prioritize and select products with confidence.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_108-A_Comparison_Between_FAHP_TOPSIS_and_FAHP_FTOPSIS_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Document Similarity Detection for Project Development Using Fused Interactive Attention Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610107</link>
        <id>10.14569/IJACSA.2025.01610107</id>
        <doi>10.14569/IJACSA.2025.01610107</doi>
        <lastModDate>2025-10-30T08:26:18.9170000+00:00</lastModDate>
        
        <creator>Chao Zhang</creator>
        
        <creator>Ying Zhang</creator>
        
        <creator>Gang Yang</creator>
        
        <creator>Fan Hu</creator>
        
        <subject>Text similarity; multi-feature fusion model; word2vec; cw2vec; MP-CNN; fusion attention mechanism; semantic extraction; project evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study introduces a novel multi-feature fusion model aimed at improving text similarity calculation in scientific and technological projects. The primary objective is to enhance the accuracy and efficiency of assessing text similarities, particularly in evaluating originality and identifying duplications in project submissions. To overcome the limitations of traditional text similarity methods (e.g., Vector Space Models, Latent Dirichlet Allocation, and TF-IDF) in capturing complex semantic and structural features, a hybrid model is proposed. The model combines word embeddings (word2vec and cw2vec), a Bi-LSTM network, and a multi-perspective convolutional neural network (MP-CNN) for effective feature extraction. Additionally, a fusion attention mechanism and interactive attention are incorporated to improve the extraction of semantic, contextual, and structural information. Experimental evaluation on two benchmark datasets demonstrates that the proposed model achieves an average precision of 0.75, a recall of 0.71, and an F1-score of 0.73, outperforming traditional methods (LDA, TF-IDF, Word2vec+Cosine) and deep learning baselines (Siamese-LSTM, MP-CNN) by more than 10% on average. These results confirm that the proposed architecture effectively balances semantic relevance and structural integrity, yielding superior similarity detection performance. The integration of advanced deep learning components—Bi-LSTM, MP-CNN, and attention mechanisms—substantially improves both the accuracy and efficiency of similarity evaluation, providing a more reliable and objective approach for scientific project assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_107-Document_Similarity_Detection_for_Project_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Information Data Mining Method in Natural Language Processing Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610106</link>
        <id>10.14569/IJACSA.2025.01610106</id>
        <doi>10.14569/IJACSA.2025.01610106</doi>
        <lastModDate>2025-10-30T08:26:18.8870000+00:00</lastModDate>
        
        <creator>Shengguo Guo</creator>
        
        <creator>Dandan Xing</creator>
        
        <subject>Natural language processing; text information; data mining; VSM; TF-IDF; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Text mining methods often rely on a single data source or simple word frequency statistics, making it difficult to capture multi-source semantic associations and local contextual dependencies, resulting in poor mining accuracy. Therefore, a method for text information data mining in natural language processing tasks is proposed. Python web crawlers are used to obtain multi-source text data; after preprocessing such as cleaning, segmentation, and stop-word removal, a Vector Space Model (VSM) is used for text representation, and a TF-IDF (Term Frequency-Inverse Document Frequency) weight optimization mechanism is introduced to enhance feature semantic representation. On this basis, a semantic enhancement system is constructed based on the BERT (Bidirectional Encoder Representations from Transformers) classification model in the field of natural language processing. Through the self-attention mechanism of multi-layer Transformer encoders, semantics are aggregated to effectively capture local contextual dependencies, and context-sensitive word vectors are generated by the output layer. Finally, by fine-tuning the parameters of the BERT model and combining it with the Softmax function, precise mining of text information data categories is achieved. The experimental results show that in the embedding experiment of sports news headlines, this method can form a semantic aggregation structure with clear domain logic for word vectors; in the cross-domain short text classification experiment, the overall accuracy of this method on the dataset reached 95.7%, which was 19.5% and 18.7% higher than the comparative methods, effectively addressing the cross-domain ambiguity problem in natural language processing.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_106-Text_Information_Data_Mining_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Assistant Based on Recurrent Neural Networks: Scope and Coverage of Health Insurance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610105</link>
        <id>10.14569/IJACSA.2025.01610105</id>
        <doi>10.14569/IJACSA.2025.01610105</doi>
        <lastModDate>2025-10-30T08:26:18.8530000+00:00</lastModDate>
        
        <creator>Diego Alberto Paz-Medina</creator>
        
        <creator>David Eduardo Rojas-Cavassa</creator>
        
        <creator>Ernesto Adolfo Carrera-Salas</creator>
        
        <subject>Insurance; insurance coverage; web application; chatbot; Recurrent Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In Peru, the use of health services provided by state institutions has decreased, largely due to perceived deficiencies in care quality, such as delays in medical attention and administrative barriers that hinder timely access to information on insurance scope and coverage. This study develops a web-based application with an integrated chatbot that leverages Artificial Intelligence (AI) and a Recurrent Neural Network (RNN) to provide accurate and accessible information about institutional insurance, including the characteristics of different insurance types and specific coverage cases. The application integrates a chatbot powered by an AI model based on Natural Language Processing (NLP) from OpenAI, with intent recognition handled by a dedicated classifier. In comparison to existing chatbots in the healthcare field, the model proposes a hybrid approach based on RNN and GPT-3.5, optimized for health insurance queries in public sector contexts. The system was evaluated with 50 representative questions scored on four criteria: clarity, relevance, coherence, and accuracy. The chatbot achieved an overall mean response accuracy of 82% and a mean user satisfaction score of 4.59/5, indicating strong acceptance and usability. These results suggest that the combination of these technologies constitutes an effective alternative for addressing queries relevant to users of state health systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_105-Virtual_Assistant_Based_on_Recurrent_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review of Reactive Jamming Attacks Mitigation Techniques in Internet of Things Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610104</link>
        <id>10.14569/IJACSA.2025.01610104</id>
        <doi>10.14569/IJACSA.2025.01610104</doi>
        <lastModDate>2025-10-30T08:26:18.8230000+00:00</lastModDate>
        
        <creator>Enos Letsoalo</creator>
        
        <creator>Topside Mathonsi</creator>
        
        <creator>Tshimangadzo Tshilongamulenzhe</creator>
        
        <creator>Daniel du Duplesis</creator>
        
        <subject>IoT networks; reactive jamming attacks; mitigation methods; systematic literature review; electronic digital libraries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Internet of Things (IoT) networks have become a widely explored research area in academia and industry. IoT networks support a variety of applications, including smart cities, smart homes, intelligent transportation, smart agriculture, monitoring, surveillance, etc. The security challenges associated with IoT networks have been broadly studied in the literature. This systematic literature review (SLR) is aimed at reviewing the existing research studies on IoT networks’ reactive jamming attacks, their challenges, and their mitigation. This SLR examined research studies published between 2019 and 2024 within the popular electronic digital libraries. We selected 45 papers after a rigorous screening of published works to answer the proposed research questions. The outcomes of this SLR reported three major IoT network performance issues. The results showed that the existing mitigation methods are categorized as machine learning-based, deception-based, statistical-based, radio frequency-based, game theory-based, and encryption-based. Most of these methods can detect reactive jamming attacks accurately. However, they still require additional infrastructure and encryption systems, and they incur prolonged training delays due to large datasets, resulting in computational overhead and transmission delays. Furthermore, the methods are unable to provide an effective defense response to reactive jamming attacks, because they cannot adequately deal with the increased power consumption of IoT devices, cannot minimize transmission delays, and cannot improve the packet delivery ratio. As a result, reactive jamming attacks continue to be prevalent in IoT networks.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_104-Systematic_Literature_Review_of_Reactive_Jamming_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auditable Real-Time Cold-Chain Monitoring with IoT and Blockchain Anchoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610103</link>
        <id>10.14569/IJACSA.2025.01610103</id>
        <doi>10.14569/IJACSA.2025.01610103</doi>
        <lastModDate>2025-10-30T08:26:18.7930000+00:00</lastModDate>
        
        <creator>Mohamed DOUBIZ</creator>
        
        <creator>Mouad BANANE</creator>
        
        <creator>Abdelali ZAKRANI</creator>
        
        <creator>Allae ERRAISSI</creator>
        
        <subject>Internet of Things; blockchain; cold chain; vaccine storage; real-time monitoring; tamper-evident; data provenance; traceability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Safe vaccine storage hinges on continuous, trustworthy temperature supervision and evidence that records have not been altered. Yet many cold rooms still rely on fragmented logging tools that lack real-time alerts, end-to-end traceability, and audit-ready data. This paper presents a practical, low-cost architecture that integrates Internet of Things (IoT) sensing with blockchain anchoring to deliver real-time monitoring and visibility, reliable anomaly detection, and tamper-evident provenance for cold-chain storage. The design couples minute-level telemetry and dashboarding with alert debouncing/hysteresis to reduce false alarms, while anchoring hourly summaries and event alerts on-chain to create a verifiable trail without exposing raw data or incurring recurring fees during experimentation. A prototype in a vaccine cold-room scenario demonstrates that the approach is simple to deploy on commodity hardware, scales by adding rooms/sensors, and produces operator-friendly notifications alongside independently verifiable records. This combination of edge retention and cryptographic anchoring provides a pragmatic path for pharmacies, clinics, and warehouses to upgrade from basic loggers to transparent, audit-ready monitoring, bridging operational needs (alerts) and compliance needs (provenance) in one system.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_103-Auditable_Real_Time_Cold_Chain_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Transparency in the Development of Artificial Intelligence Systems: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610102</link>
        <id>10.14569/IJACSA.2025.01610102</id>
        <doi>10.14569/IJACSA.2025.01610102</doi>
        <lastModDate>2025-10-30T08:26:18.7600000+00:00</lastModDate>
        
        <creator>Giulia Karanxha</creator>
        
        <creator>Paulinus Ofem</creator>
        
        <subject>Artificial intelligence; transparency evaluation; trustworthy AI; transparency metrics; EU AI Act; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Transparency is increasingly recognised as a cornerstone of trustworthy artificial intelligence (AI), yet its operationalisation remains fragmented and underdeveloped. Existing methods often rely on qualitative checklists or domain-specific case studies, limiting comparability, reproducibility, and regulatory alignment. This paper presents a Systematic Literature Review (SLR) of 28 peer-reviewed studies that explicitly propose or apply methods for evaluating transparency in AI systems (2019-July 2025). The review identifies recurring themes such as traceability, explainability, and communication, and classifies evaluation approaches by metric type and calculation type. Empirically, checklist-based instruments are the most frequent evaluation form (9/28, 32%), followed by scenario-based qualitative assessments (5/28, 18%). Most (9/28, 32%) research on AI applications occurs in healthcare; references to legal or ethical frameworks appear in 19/28 studies (67%), although traceable mappings to specific obligations are rare. The results of the quality assessment highlight strengths in methodological clarity, but reveal persistent gaps in benchmarking, stakeholder inclusion, and lifecycle integration. Based on these findings, this study informs the adaptation of the Z-Inspection&#174; process within the context of AI development projects and motivates a Transparency Artefact Registry (TAR), a structured, metadata-based mechanism for capturing and reusing transparency artefacts across system lifecycles. By embedding transparency evaluation into AI development workflows, the proposed approach seeks to provide verifiable, repeatable, and regulation-aligned practices for assessing transparency in complex AI systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_102-Evaluating_Transparency_in_the_Development_of_Artificial_Intelligence_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature-Optimized Machine Learning for High-Accuracy Ammunition Detection in X-Ray Security Screening</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610101</link>
        <id>10.14569/IJACSA.2025.01610101</id>
        <doi>10.14569/IJACSA.2025.01610101</doi>
        <lastModDate>2025-10-30T08:26:18.7130000+00:00</lastModDate>
        
        <creator>Osama Dorgham</creator>
        
        <creator>Nijad Al-Najdawi</creator>
        
        <creator>Mohammad H. Ryalat</creator>
        
        <creator>Sara Tedmori</creator>
        
        <creator>Sanad Aburass</creator>
        
        <subject>Feature optimization; ammunition detection; X-Ray images; machine learning; security imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This paper introduces a machine learning system that is feature-optimized to enhance the detection of concealed ammunition in X-Ray security imaging. The system integrates advanced image analysis techniques with a cascade-AdaBoost classifier and Multi-scale Block Local Binary Pattern (MB-LBP) features, which are particularly effective for object recognition and classification in complex, high-dimensional data. The combination of these algorithms ensures robust performance in identifying ammunition types even under challenging conditions, such as variations in image quality or object orientation. The system is specifically designed for the accurate identification of various types of ammunition, including 9 mm bullets for handguns, AK-47 machine gun bullets, and 12-gauge shotgun cartridges. To support the development and testing of this system, a new dataset comprising 1,732 X-Ray images of passenger luggage was collected. This dataset is made publicly available to facilitate further research and improvement in this critical area of security technology. Experimental results demonstrate that the system achieves a high level of detection accuracy, with the ability to identify 12-gauge shotgun shells concealed in baggage with a 92% success rate. Beyond its technical achievements, this system significantly enhances the efficiency and reliability of security checks, improving the overall effectiveness of ammunition detection in real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_101-Feature_Optimized_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert Systems in Tuberculosis Prevention Established in Certainty Factor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01610100</link>
        <id>10.14569/IJACSA.2025.01610100</id>
        <doi>10.14569/IJACSA.2025.01610100</doi>
        <lastModDate>2025-10-30T08:26:18.6830000+00:00</lastModDate>
        
        <creator>Inooc Rubio Paucar</creator>
        
        <creator>Cesar Yactayo-Arias</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Buchanan’s methodology; certainty factor; expert system; public health; tuberculosis; web application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Tuberculosis remains a highly relevant public health concern, especially in contexts with limited access to medical services, highlighting the need for tools that support early diagnosis. In this study, a web-based expert system was developed to assist in tuberculosis detection, using Buchanan’s methodology, which consists of five phases: identification, conceptualization, formalization, implementation, and validation. The system was designed with a knowledge rule-based approach and incorporated the Certainty Factor to quantify confidence in diagnostic conclusions. Validation was carried out through expert judgment using a 15-question survey. The results showed a high overall positive consensus, with question 13 standing out as it obtained the highest mean score (4.80) and the lowest dispersion (SD = 0.61), reflecting the most favorable perception and greatest agreement among the experts. Conversely, question 4 recorded the lowest mean score (4.00) and the highest dispersion (SD = 1.12), indicating aspects of the system that generated more divided opinions. Overall, these findings confirm that the system is effective, reliable, and usable, making it a relevant tool to support clinical decision-making in resource-limited settings. As an additional contribution, the integration of complementary technologies is suggested, such as machine learning (ML) algorithms, radiological image analysis, and mobile applications for symptom tracking, in order to optimize early detection and strengthen clinical care for tuberculosis.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_100-Expert_Systems_in_Tuberculosis_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stochastic Policies, Deterministic Minds: A Calibrated Evaluation Protocol and Diagnostics for Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161099</link>
        <id>10.14569/IJACSA.2025.0161099</id>
        <doi>10.14569/IJACSA.2025.0161099</doi>
        <lastModDate>2025-10-30T08:26:18.6670000+00:00</lastModDate>
        
        <creator>Sooyoung Jang</creator>
        
        <creator>Seungho Yang</creator>
        
        <creator>Changbeom Choi</creator>
        
        <subject>Deep reinforcement learning; policy evaluation; stochastic policy; temporal difference error; Atari; PPO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Deep reinforcement learning (DRL) typically involves training agents with stochastic exploration policies while evaluating them deterministically. This discrepancy between stochastic training and deterministic evaluation introduces a potential objective mismatch, raising questions about the validity of current evaluation practices. Our study involved training 40 Proximal Policy Optimization agents across eight Atari environments and examined eleven evaluation policies ranging from deterministic to high-entropy strategies. We analyzed mean episode rewards and their coefficient of variation while assessing one-step temporal-difference errors related to low-confidence actions for value-function calibration. Our findings indicate that the optimal evaluation policy is highly dependent on the environment: deterministic evaluation performed best in three games, while low-to-moderate-entropy policies yielded higher returns in five, with a significant improvement of over 57% in Breakout. However, increased policy entropy generally degraded stability—evidenced by a rise in the coefficient of variation in Pong from 0.00 to 2.90. Additionally, low-confidence actions often revealed an over-optimistic value function, exemplified by negative TD errors, including -10.67 in KungFuMaster. We recommend treating evaluation-time entropy as a tunable hyperparameter, starting with deterministic or low-temperature softmax settings to optimize both return and stability on held-out seeds. These insights provide actionable strategies for practitioners aiming to enhance their DRL-based agents.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_99-Stochastic_Policies_Deterministic_Minds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Asset Transfer Process in ERP Using Business Process Management Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161098</link>
        <id>10.14569/IJACSA.2025.0161098</id>
        <doi>10.14569/IJACSA.2025.0161098</doi>
        <lastModDate>2025-10-30T08:26:18.6370000+00:00</lastModDate>
        
        <creator>Ravindu Yasarathne</creator>
        
        <creator>Naduni Ranatunga</creator>
        
        <creator>Vikasitha Herath</creator>
        
        <creator>Lakshan Chalinda</creator>
        
        <creator>Chathurangika Kahandawaarachchi</creator>
        
        <creator>Sanjeeva Perera</creator>
        
        <creator>Chamath Randula</creator>
        
        <subject>Asset management; bulk asset transfer; Business Process Reengineering (BPR); Enterprise Resource Planning (ERP); workflow optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Enterprise Resource Planning (ERP) systems are critical for managing enterprise-wide business processes, including asset management. Yet, many ERP platforms lack efficient mechanisms for bulk asset transfers, leading to high manual effort, increased costs, and data inconsistencies. This study applies Business Process Reengineering (BPR) techniques as the methodology to optimize ERP asset management, focusing on workflow optimization and automation, contributing both practical and methodological insights. A mixed-method approach was adopted, analyzing a financial organization with 256 branches and over 450 Oracle ERP users. Data from 51 representative branches identified inefficiencies such as manual transfer delays, approval bottlenecks, and synchronization issues. The proposed solution introduces automated bulk asset transfers, optimized approval workflows, and real-time data synchronization, along with new metrics for evaluating efficiency, compliance, risk, and asset utilization. Compared to the As-Is system, the reengineered framework achieved a 100% reduction in operational costs per user ($7,500 annual saving), an 80% reduction in compliance incidents, a 67% reduction in asset transaction errors, and a 20% improvement in asset utilization. These results demonstrate a scalable, adaptable, and effective framework that enhances ERP operational efficiency, strengthens data integrity, and advances both academic understanding and industrial practice of asset management process reengineering.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_98-Optimizing_Asset_Transfer_Process_in_ERP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Employee Attrition in the Saudi Private Sector Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161097</link>
        <id>10.14569/IJACSA.2025.0161097</id>
        <doi>10.14569/IJACSA.2025.0161097</doi>
        <lastModDate>2025-10-30T08:26:18.5900000+00:00</lastModDate>
        
        <creator>Haya Alqahtani</creator>
        
        <creator>Hana Almagrabi</creator>
        
        <creator>Amal Alharbi</creator>
        
        <subject>Employee attrition; attrition prediction; predictive models; machine learning; voting classifier; ensemble methods; Saudi private sector; employee turnover; employee retention; feature importance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Employee attrition represents a prominent issue facing organizations, as human capital is one of their most valuable resources. Attrition refers to the voluntary or involuntary reduction in the number of employees, which can negatively affect profitability, reputation, and overall organizational performance. Therefore, a comprehensive understanding of this phenomenon, its causal factors, and the development of retention strategies is crucial for mitigating employee turnover. The purpose of this work is to predict employee attrition in the Saudi private sector and identify the key factors contributing to employee turnover using machine learning approaches. In addition, the research systematically evaluates the performance of multiple Machine Learning (ML) algorithms within the proposed framework to determine the most effective predictive model for employee attrition. This study utilized a training dataset obtained from an online survey targeting employees in the Saudi private sector in order to investigate employee attrition and identify its most prominent causes within this context. Various ML algorithms, including Logistic Regression (LR), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Bagging ensemble, and Voting Classifier (VC), were evaluated. The results demonstrate that the Voting Classifier yielded the highest accuracy at 90%. Moreover, the analysis identified job opportunities and job titles as among the most influential factors driving employee turnover.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_97-Predicting_Employee_Attrition_in_the_Saudi_Private_Sector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Stage Framework for Abnormalities Detection in WCE Images by Combining Semantic Segmentation and Deformable Agent-Based Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161096</link>
        <id>10.14569/IJACSA.2025.0161096</id>
        <doi>10.14569/IJACSA.2025.0161096</doi>
        <lastModDate>2025-10-30T08:26:18.5730000+00:00</lastModDate>
        
        <creator>Brahim Alibouch</creator>
        
        <creator>Yasmina El Khalfaoui</creator>
        
        <subject>Wireless capsule endoscopy; deep learning; classification; gastrointestinal abnormalities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Wireless capsule endoscopy (WCE) has revolutionized gastrointestinal (GI) diagnostics by offering a patient-friendly imaging and diagnostic tool compared to traditional endoscopic techniques. However, the manual assessment of these images is a time-consuming task and is prone to inaccuracies, which necessitates the implementation of automated approaches. In this paper, we introduce a two-stage deep learning framework to identify the most common GI abnormalities in WCE images. The first stage of the proposed method segments suspicious regions from the WCE images, which act as potential markers for GI abnormalities. In the second stage, we perform frame-level classification to identify and categorize different pathologies in the GI tract. Extensive experiments conducted on four image datasets demonstrate that our approach achieves the highest values in terms of accuracy, precision, recall, and specificity in comparison with four common deep learning methods: ResNet50, VGG16, ViT-S16, and InceptionV3.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_96-A_Two_Stage_Framework_for_Abnormalities_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification Using Ensemble Voting: A Feature Selection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161095</link>
        <id>10.14569/IJACSA.2025.0161095</id>
        <doi>10.14569/IJACSA.2025.0161095</doi>
        <lastModDate>2025-10-30T08:26:18.5430000+00:00</lastModDate>
        
        <creator>Antu Kumar Guha</creator>
        
        <creator>Jun-Jiat Tiang</creator>
        
        <creator>Abdullah-Al Nahid</creator>
        
        <subject>Breast cancer; machine learning; feature selection; ensemble learning; AdaBoost; biomedical data classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Breast cancer is one of the most common and deadly diseases affecting women around the world, particularly in regions with limited access to advanced diagnostic tools. Recent studies have shown that blood-based biomarkers can offer a cost-effective alternative for early detection. This paper presents a machine learning-based approach for classifying breast cancer using clinical and biomedical data, drawing on the Breast Cancer Coimbra dataset. We employed four filter-based feature selection methods—Mutual Information, Chi-Square, ANOVA F-test, and Pearson Correlation Coefficient—to identify the most relevant features for classification, and applied two classifiers (AdaBoost and an Ensemble Voting Classifier) to enhance predictive accuracy. The ensemble model achieved an accuracy of 82.86%. Key features such as glucose, HOMA, insulin, resistin, and age consistently contributed across all selection methods, highlighting that a small subset of features carries great significance in breast cancer prediction. This study also investigates the reasons behind the misclassification cases. Our results show that combining statistical feature selection with ensemble learning helps boost the accuracy of breast cancer prediction by enabling the model to focus on the most important features.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_95-Breast_Cancer_Classification_Using_Ensemble_Voting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RT-DETR Edge Deployment: Real-Time Detection Transformer for Distracted Driving Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161094</link>
        <id>10.14569/IJACSA.2025.0161094</id>
        <doi>10.14569/IJACSA.2025.0161094</doi>
        <lastModDate>2025-10-30T08:26:18.5100000+00:00</lastModDate>
        
        <creator>Fares Hamad Aljahani</creator>
        
        <subject>RT-DETR; real-time inference; autonomous vehicles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Distracted driving is one of the primary contributors to road accidents worldwide, highlighting the urgent need for reliable in-cabin driver monitoring systems. Existing approaches often face trade-offs: CNN-based classifiers achieve high recognition accuracy but lack spatial localization, while lightweight real-time detectors sacrifice contextual reasoning for efficiency. To bridge this gap, we propose a customized fine-tuned transformer-based object detection framework, RT-DETR-L, specifically adapted for distracted driving detection. In contrast to prior applications of RT-DETR, our adaptation integrates distraction-specific data augmentation, loss-balancing strategies, and deployment-oriented optimizations, enabling precise classification and spatial localization of distractions such as texting, drinking, yawning, and eye closure. Trained and validated on a large-scale annotated in-cabin dataset, RT-DETR-L achieves state-of-the-art performance with a mAP50 of 0.995 and mAP50–95 of 0.774. In addition, the proposed model demonstrates deployment feasibility on resource-constrained embedded platforms (ARM-based edge AI devices), where it sustains real-time performance at 17.5 FPS with minimal latency. These results establish RT-DETR-L as a hybrid solution combining the semantic depth of transformers with the efficiency required for Advanced Driver Assistance Systems (ADAS). By addressing both accuracy and deployability, this study makes concrete contributions toward advancing robust, real-time driver monitoring for enhanced road safety.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_94-RT_DETR_Edge_Deployment_Real_Time_Detection_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving Education Data Sharing Scheme Based on Consortium Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161093</link>
        <id>10.14569/IJACSA.2025.0161093</id>
        <doi>10.14569/IJACSA.2025.0161093</doi>
        <lastModDate>2025-10-30T08:26:18.4800000+00:00</lastModDate>
        
        <creator>Jiaqi Guo</creator>
        
        <creator>Zhuoran Wang</creator>
        
        <creator>Ningning Liu</creator>
        
        <subject>Consortium blockchain; access control; secure search; life-long education; data sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>With the growing emphasis on lifelong education and the rapid expansion of open education platforms, the secure and efficient management and sharing of lifelong learning data have become critical challenges. To address these issues, this paper proposes a Privacy-Preserving Educational Data Sharing (PPEDS) scheme based on blockchain technology. The PPEDS scheme employs attribute-based encryption with hidden attributes to achieve privacy-preserving and fine-grained access control. In addition, it incorporates multi-keyword searchable encryption to enable efficient encrypted data retrieval and combines private and consortium blockchains to ensure data authenticity and integrity across multiple educational institutions. The security analysis demonstrates that the scheme resists potential attacks and ensures confidentiality, access control, and search privacy under a semi-trusted model. Furthermore, performance evaluations conducted on real-world educational datasets show that the proposed scheme achieves efficient encryption, search, and decryption operations, with low computational overhead even in large-scale deployments. Overall, the PPEDS scheme provides a secure, scalable, and practical solution for privacy-preserving data sharing in lifelong education systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_93-Privacy_Preserving_Education_Data_Sharing_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Performance-based Averaging (FedPA): A Robust and Selective Learning Framework for Chest X-Ray Classification in Heterogeneous Data Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161092</link>
        <id>10.14569/IJACSA.2025.0161092</id>
        <doi>10.14569/IJACSA.2025.0161092</doi>
        <lastModDate>2025-10-30T08:26:18.4500000+00:00</lastModDate>
        
        <creator>Atif Mahmood</creator>
        
        <creator>Tashin Khan Sadique</creator>
        
        <creator>Saaidal Razalli Azzuhri</creator>
        
        <creator>Roziana Ramli</creator>
        
        <creator>Leila Ismail</creator>
        
        <subject>Public health; industrial growth; federated learning; FedAvg; FedPA; FedSGD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Chest X-ray imaging remains a cornerstone in the diagnosis of thoracic conditions such as COVID-19, pneumonia, and lung opacity. Despite advancements in deep learning, the development of robust and generalizable models is limited by data privacy constraints, as patient data cannot be centralized across institutions. Federated Learning (FL) has emerged as a promising solution by enabling collaborative model training without sharing raw data. However, standard FL algorithms like FedAvg, FedProx, and FedSGD aggregate all client updates without considering their individual quality, making them vulnerable to performance degradation in the presence of data heterogeneity, label noise, or underperforming clients. To address these challenges, this study proposes Federated Performance-Based Averaging (FedPA), a novel selective aggregation strategy that incorporates only those client models that meet a pre-defined performance threshold during training. By leveraging an accuracy-based filtering mechanism, FedPA ensures that only sufficiently trained and reliable local models contribute to global updates. The method was evaluated on a multi-class, non-IID chest X-ray dataset containing four classes: Normal, COVID-19, Pneumonia, and Lung Opacity. Using DenseNet as the backbone model, experiments were conducted across four federated clients, each biased toward a specific class to simulate real-world data distributions. Results demonstrate that FedPA significantly outperforms baseline federated algorithms across key metrics, achieving a global accuracy of 91.82%, F1-score of 92.48%, and recall of 92.08%. The method also achieved faster convergence, higher stability, and reduced round-to-round accuracy fluctuations. System-level evaluations further show that FedPA offers competitive efficiency in terms of inference time, throughput, CPU usage, and memory footprint, making it suitable for deployment in resource-constrained clinical environments. 
Overall, FedPA offers a practical and effective advancement in federated learning for medical imaging. By filtering unreliable client contributions, it preserves model quality and privacy, presenting a viable path for clinical deployment in scenarios where data centralization is infeasible due to ethical, legal, or logistical constraints.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_92-Federated_Performance_based_Averaging_FedPA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Glioma Classification Using Harris Hawks-Driven Optimized Gradient Boosting Classifier Along with SHAP-Based Interpretability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161091</link>
        <id>10.14569/IJACSA.2025.0161091</id>
        <doi>10.14569/IJACSA.2025.0161091</doi>
        <lastModDate>2025-10-30T08:26:18.4330000+00:00</lastModDate>
        
        <creator>SM Naim</creator>
        
        <creator>Jun-Jiat Tiang</creator>
        
        <creator>Abdullah-Al Nahid</creator>
        
        <subject>Glioma; gradient boosting; Harris Hawks Optimization (HHO); SHAP; feature selection; interpretability; TCGA; IDH1; EGFR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Gliomas are considered one of the most lethal and aggressive types of brain cancer, responsible for countless deaths worldwide. This study seeks to improve glioma classification using cutting-edge machine learning (ML) techniques to differentiate between glioma subtypes based on clinical and genomic data. The goal is to identify important biomarkers and features influencing glioma classification, with an emphasis on improving feature selection and model interpretability. For glioma classification, the Gradient Boosting Classifier (GBC) was employed. The Harris Hawks Optimization (HHO) algorithm was used for feature selection and hyperparameter fine-tuning to enhance the model’s performance. Additionally, SHapley Additive exPlanations (SHAP) were applied to improve model interpretability and to better understand feature contributions. The Gradient Boosting (GB) method yielded the best performance among the selected models, achieving an accuracy of 88.40%, precision of 87.3%, recall of 88.48%, and an F1 score of 88.29%, with feature selection and hyperparameter tuning using the Harris Hawks Optimization. These results highlight the significance of hyperparameter tuning and feature selection in enhancing classification performance. Key features such as IDH1, Age at Diagnosis, and EGFR were identified as the most influential in distinguishing glioma subtypes. SHAP analysis further confirmed the importance of these features in the model. This study shows that the Gradient Boosting Classifier (GBC), optimized with Harris Hawks Optimization (HHO), significantly improves glioma classification, achieving a high F1 score. Key features like IDH1, Age at Diagnosis, and EGFR were identified, showcasing its potential for enhanced glioma diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_91-Glioma_Classification_Using_Harris_Hawks_Driven_Optimized_Gradient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Artificial Intelligence in Inventory Management: Methods, Applications and Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161090</link>
        <id>10.14569/IJACSA.2025.0161090</id>
        <doi>10.14569/IJACSA.2025.0161090</doi>
        <lastModDate>2025-10-30T08:26:18.4030000+00:00</lastModDate>
        
        <creator>Jinjin Li</creator>
        
        <creator>Huijun Huang</creator>
        
        <creator>Yuping Gong</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Xiangui Yin</creator>
        
        <creator>Yichang Liu</creator>
        
        <subject>Inventory management; artificial intelligence; demand forecasting; inventory control; inventory classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Effective inventory management is fundamental to supply chain resilience and efficiency. Artificial intelligence (AI) has emerged as a transformative solution that enables more dynamic and data-driven inventory strategies. To map the latest advancements in this rapidly evolving field, this study presents a systematic literature review (SLR) of AI techniques in inventory management. The review was conducted following the PRISMA 2020 guidelines, through which 87 high-quality articles published between 2021 and 2025 were systematically analyzed. Our review identifies machine learning (ML), deep learning (DL), reinforcement learning (RL), and hybrid methods as the predominant AI technologies. These techniques primarily address three foundational tasks. In demand forecasting, they improve prediction accuracy and mitigate stockout and overstock risks. For inventory control, they balance costs with service levels and optimize replenishment strategies. In inventory classification, they facilitate targeted resource allocation. Despite these advancements, AI research confronts significant challenges, particularly in data dependency, model interpretability, and implementation overhead. To address these gaps, we suggest future research focused on data-efficient learning, explainable AI, and lightweight, integrated frameworks to lower adoption barriers. This review provides a timely and holistic overview of the current research landscape, which serves as a reference for academics to identify research directions.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_90-A_Review_of_Artificial_Intelligence_in_Inventory_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Out-of-Distribution Detection for Retail Time-Series Data Using Entropic Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161089</link>
        <id>10.14569/IJACSA.2025.0161089</id>
        <doi>10.14569/IJACSA.2025.0161089</doi>
        <lastModDate>2025-10-30T08:26:18.3700000+00:00</lastModDate>
        
        <creator>Nga Nguyen Thi</creator>
        
        <creator>Tuan Vu Minh</creator>
        
        <creator>Khanh Nguyen-Trong</creator>
        
        <subject>Out-of-Distribution Detection; entropic learning; IsoMax+ loss; time-series classification; retail forecasting; deep learning; spectrogram transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Machine learning models are typically developed under the “closed-world” assumption, where training and testing data originate from a consistent distribution. However, in real-world scenarios, especially in the retail domain, this assumption can become problematic due to the frequent introduction of new products, seasonal promotions, and irregular sales events. When models encounter out-of-distribution data inputs, predictions can become overly confident or entirely incorrect. While existing out-of-distribution detection methods primarily focus on image-based datasets, challenges associated with numerical, high-dimensional, and heterogeneous retail time-series data remain largely unexplored. To address this gap, this study proposes an enhanced Entropic Out-of-Distribution Detection framework tailored specifically for dynamic retail environments. By transforming time-series sales data into spectrogram representations and leveraging the IsoMax+ loss function, our approach improves uncertainty calibration and robustness without requiring labeled out-of-distribution data or additional post-hoc calibration techniques. Experimental results, conducted on a large-scale retail dataset from Vietnam, demonstrate that the proposed Entropic Out-of-distribution detection framework significantly outperforms traditional out-of-distribution detection methods in terms of detection accuracy and inference efficiency, providing a scalable and practical solution for real-time retail applications. Our approach achieves strong performance with an F1-score of 88% and an AUC of 91%, highlighting its promising applicability across diverse business scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_89-Enhancing_Out_of_Distribution_Detection_for_Retail_Time_Series_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Validation of an Adaptive Controller for a Mecanum-Wheel Robot with Unknown Center-of-Gravity Offset and Slope Inclination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161088</link>
        <id>10.14569/IJACSA.2025.0161088</id>
        <doi>10.14569/IJACSA.2025.0161088</doi>
        <lastModDate>2025-10-30T08:26:18.3400000+00:00</lastModDate>
        
        <creator>Chawannat Chaichumporn</creator>
        
        <creator>Supaluk Prapan</creator>
        
        <creator>Nghia Thi Mai</creator>
        
        <creator>Md Abdus Samad Kamal</creator>
        
        <creator>Iwanori Murakami</creator>
        
        <creator>Kou Yamada</creator>
        
        <subject>Model reference adaptive control; Mecanum Wheel Robot; center of gravity offset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>High-precision path tracking for a Four-Mecanum-Wheel Mobile Robot (FMWMR) is challenged by real-world factors such as payload-induced shifts in the center-of-gravity (CoG) and operation on inclined surfaces. These uncertainties introduce complex, coupled dynamic forces that degrade the performance of conventional controllers. This study addresses this problem with a Model Reference Adaptive Controller (MRAC), which learns and compensates for these unpredictable dynamic effects in real time. To ensure effective operation on physical hardware, the controller incorporates practical solutions for motor friction and control signal stability. The proposed approach is validated through implementation of the MRAC on a Rosmaster X3 robot. A performance comparison against a well-tuned Proportional-Integral-Derivative (PID) controller is conducted across twelve distinct scenarios. The results show that the adaptive controller reduced the position Root Mean Square Error (RMSE) by an average of 52.7% and the Integral of Time-weighted Absolute Error (ITAE) by 61.5%. This work validates the MRAC as a powerful and robust solution for robots operating in unpredictable environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_88-Experimental_Validation_of_an_Adaptive_Controller_for_a_Mecanum_Wheel_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Spatiotemporal Forex Trading System Based on a Hybrid Model GAT-LSTM: Forecasting Forex Price Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161087</link>
        <id>10.14569/IJACSA.2025.0161087</id>
        <doi>10.14569/IJACSA.2025.0161087</doi>
        <lastModDate>2025-10-30T08:26:18.3070000+00:00</lastModDate>
        
        <creator>Nabil MABROUK</creator>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <subject>Forex trading; hybrid deep learning model; Graph Attention Network (GAT); Long Short-Term Memory (LSTM); spatiotemporal forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Due to the high volatility and complex interdependencies within financial markets, predicting Forex prices is a difficult challenge for investors, and traditional trading models struggle to capture these relationships. To address this issue, we introduce a GAT-LSTM-based spatiotemporal Forex trading system, a hybrid approach that combines a Graph Attention Network (GAT) with a Long Short-Term Memory (LSTM) network. The GAT component captures spatial dependencies between currencies by constructing a directed graph containing 28 currency pairs alongside commodity and US stocks. The strength of the GAT component lies in its ability to dynamically adjust and recalculate edge weights over time, which helps the proposed system adapt to macroeconomic changes, news events, and financial factors that can affect the state of the Forex market. The LSTM component handles the time-series nature of the data, learning temporal interdependencies and allowing the system to detect recurring long-term patterns. Experimental results show that the proposed hybrid model, GAT-LSTM, outperforms both LSTM and GAT individually. By combining both components and simultaneously leveraging the strengths of dynamically modelling spatial dependencies and of learning long-term temporal patterns, the proposed system forecasts Forex price directions more accurately, showing promising results and high accuracy during the validation phase.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_87-A_Spatiotemporal_Forex_Trading_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Voting-Based Ensemble Method for Deep Learning Performance Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161086</link>
        <id>10.14569/IJACSA.2025.0161086</id>
        <doi>10.14569/IJACSA.2025.0161086</doi>
        <lastModDate>2025-10-30T08:26:18.2930000+00:00</lastModDate>
        
        <creator>Mohammed Abdel Razek</creator>
        
        <creator>Rania Salah El-Sayed</creator>
        
        <creator>Arwa Mashat</creator>
        
        <creator>Shereen A. El-aal</creator>
        
        <subject>Divided Ensemble Voting (DEV); deep learning (DL); CNN; binary classification; performance metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Overfitting and limited generalization remain significant challenges for deep learning models, often leading to suboptimal performance on unseen data. To address this, we introduce the Divided Ensemble Voting (DEV) method, a novel approach that strategically partitions a dataset into distinct subsets and trains an independent model on each partition. This division encourages each model to specialize in unique features and patterns, thereby increasing ensemble diversity. Predictions from all models are aggregated through a majority voting mechanism to determine the final output, which mitigates overfitting and improves generalization. The proposed method was rigorously evaluated on four binary image classification tasks: Deepfake &amp; Real, Waste Classification, Concrete &amp; Pavement Crack, and Non &amp; Biodegradable Material. Experimental results demonstrate that DEV consistently surpasses the performance of conventional single models. Accuracy rates improved from 85.55% to 93.1%, 85.12% to 89.6%, 95.42% to 99.0%, and 89.00% to 93.0%, respectively, across the datasets. These findings underscore the efficacy of strategic data partitioning and ensemble consensus in advancing deep learning performance.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_86-A_Voting_Based_Ensemble_Method_for_Deep_Learning_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatiotemporal Graph Networks for Relational Reasoning in Campus Infrastructure Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161085</link>
        <id>10.14569/IJACSA.2025.0161085</id>
        <doi>10.14569/IJACSA.2025.0161085</doi>
        <lastModDate>2025-10-30T08:26:18.2600000+00:00</lastModDate>
        
        <creator>Sanjay Agal</creator>
        
        <creator>Krishna Raulji</creator>
        
        <creator>Nikunj Bhavsar</creator>
        
        <creator>Pooja Bhatt</creator>
        
        <subject>Spatiotemporal Graph Neural Networks; relational reasoning; smart campus management; infrastructure utilization forecasting; graph attention networks; temporal convolutional networks; dynamic graph construction; energy optimization; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The efficient management of campus infrastructure presents a complex spatiotemporal forecasting challenge characterized by dynamic interdependencies between physical assets. Traditional models fail to capture these intricate relationships as they treat buildings as independent entities or rely on static correlation structures. This paper introduces a novel Spatiotemporal Graph Neural Network (ST-GNN) framework that reframes infrastructure forecasting as a relational reasoning task, enabling dynamic inference of campus wide interdependencies. Our approach integrates Graph Attention Networks (GAT) to learn time-varying spatial dependencies and Gated Temporal Convolutional Networks (TCNs) to capture multi-scale temporal patterns. A key innovation is our context-sensitive graph construction method that incorporates physical proximity, functional similarity, and human mobility data to create a holistic representation of campus dynamics. Evaluated on a real-world multimodal dataset comprising 24 months of energy and occupancy data from 50 campus buildings, the proposed model demonstrates superior performance, achieving a 16.3% reduction in mean absolute error compared to the strongest baseline. Comprehensive ablation studies confirm the critical contribution of each architectural component, while qualitative analysis reveals the model’s capacity to provide interpretable insights into campus operational patterns. This work provides a powerful framework for intelligent campus management, enabling precise resource allocation, energy optimization, and sustainable operational planning through advanced relational reasoning capabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_85-Spatiotemporal_Graph_Networks_for_Relational_Reasoning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coordination, Communication and Robustness in Multi-Agents: An Industrial Network Scenario Using Trust Region Policy Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161084</link>
        <id>10.14569/IJACSA.2025.0161084</id>
        <doi>10.14569/IJACSA.2025.0161084</doi>
        <lastModDate>2025-10-30T08:26:18.2130000+00:00</lastModDate>
        
        <creator>Munam Ali Shah</creator>
        
        <subject>Safety robustness; reinforcement learning; multi-agents; safe state; collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Numerous practical applications require multi-agent systems, including traffic management, task assignment, ant colony regulation, and the operation of self-driving cars and drones. These systems involve multiple agents working together, communicating and engaging with their surroundings to achieve the highest possible total numerical reward. Deep Reinforcement Learning (DRL) approaches are used to address these multi-agent applications, but in many circumstances the use of agents raises challenges to safety and robustness. To address these issues, we develop a DRL-based system in which multiple agents in an industrial network scenario interact with the real-world environment and act collaboratively and cooperatively. In the proposed model, several agents collaborate with one another to complete tasks and maintain a safe state. To enable agents to act cooperatively and collaboratively in accordance with the safety robustness of policies, we apply DRL algorithms such as Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO), together with Curriculum Learning (CL) for better training and performance. This study also proposes a reward structure that helps agents maintain a safe state. Mean reward, policy loss, value loss, value estimate, and safety robustness are analyzed as performance metrics. The results show that the policy adopted in the proposed model performs better than the other policies.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_84-Coordination_Communication_and_Robustness_in_Multi_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Identity Confirmation Property Management System Based on State Secret Algorithm and Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161083</link>
        <id>10.14569/IJACSA.2025.0161083</id>
        <doi>10.14569/IJACSA.2025.0161083</doi>
        <lastModDate>2025-10-30T08:26:18.1830000+00:00</lastModDate>
        
        <creator>Xiao Tian</creator>
        
        <creator>Xing Chen</creator>
        
        <subject>User identification; state secret algorithm; blockchain technology; property management system; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The existing user identity confirmation methods in property management systems are vulnerable to attacks and forgery, posing serious threats to system security and reliability. To address these issues, this study proposes a novel user identity confirmation method that combines the state secret SM9 algorithm with blockchain technology. The system utilizes blockchain for managing and verifying user identity information, while employing the SM9 algorithm for double encryption of user data. This approach ensures robust protection against identity theft and fraud, enhancing security and privacy. The proposed method was tested experimentally, and the results show that the model achieves an average communication connection and verification initiation time of approximately 11.07 ms, with a key negotiation success rate of 88.73%. Moreover, the model achieved a user identity confirmation accuracy of 90.41%, which is significantly higher than traditional methods. These findings highlight that the integration of the SM9 algorithm and blockchain technology offers high accuracy, low latency, and improved scalability, making it an ideal solution for enhancing the security and efficiency of property management systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_83-User_Identity_Confirmation_Property_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Mangrove Ecosystem Health Using Sentinel-2 Images with Genetic Algorithm Optimization in Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161082</link>
        <id>10.14569/IJACSA.2025.0161082</id>
        <doi>10.14569/IJACSA.2025.0161082</doi>
        <lastModDate>2025-10-30T08:26:18.1530000+00:00</lastModDate>
        
        <creator>Putri Yuli Utami</creator>
        
        <creator>Murni Ramadhani</creator>
        
        <creator>Rudi Alfian</creator>
        
        <creator>Barry Ceasar Octariadi</creator>
        
        <creator>Dimas Kurniawan</creator>
        
        <subject>Classification; genetic algorithm; machine learning; mangrove ecosystem; Sentinel-2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Mangrove ecosystems play an important role in maintaining coastal ecological balance, including as carbon sinks and natural protection from abrasion, but mangrove areas in Mempawah Regency have experienced significant degradation due to anthropogenic pressures. Therefore, this study aims to classify the health condition of mangroves using multi-temporal Sentinel-2 imagery with a hybrid machine learning (ML) approach and Genetic Algorithm (GA) optimization. We implemented GA optimization comparatively on four main ML models—Multilayer Perceptron (MLP), Decision Tree (DT), XGBoost, and Na&#239;ve Bayes (NB)—to tune hyperparameters, improving accuracy and reducing overfitting. The results show that GA optimization effectively improves classification performance: the MLP-GA model provides the highest accuracy, with an increase of up to 3.8% over the non-optimized baseline, a best ROC AUC of 0.9730, and a reduction in computation time of up to 60%. These findings indicate that the GA-MLP framework is highly reliable and efficient, providing a precise tool for strategic decision-making in the management of healthy mangrove ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_82-Classification_of_Mangrove_Ecosystem_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Truth Under Pressure: A Deep Learning-Based Lie Detection System for Online Lending Using Voice Stress and Response Latency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161081</link>
        <id>10.14569/IJACSA.2025.0161081</id>
        <doi>10.14569/IJACSA.2025.0161081</doi>
        <lastModDate>2025-10-30T08:26:18.1200000+00:00</lastModDate>
        
        <creator>Ahmad Ihsan Farhani</creator>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Rinaldi Anwar</creator>
        
        <creator>Titin Siswantining</creator>
        
        <subject>Online lending; lie detection; large language model; deep learning; voice acoustics; response latency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The rapid increase in defaults in the online lending industry highlights significant flaws in current debtor verification, which largely relies on static, preparable interviews, leading to high non-performing loans. Existing research is fragmented: while Large Language Models (LLMs) show promise in question generation, their application is confined to non-financial domains like education, and lie detection studies often analyze modalities in isolation. This study addresses this critical gap by proposing the first integrated AI-driven system for this context. We solve the problem in two parts: 1) A Llama 3 LLM is fine-tuned to generate dynamic, biodata-tailored questions, preventing the rehearsed answers that plague static interviews. 2) A novel multimodal deep learning model is developed to analyze the response, uniquely fusing vocal acoustic features and response latency—two key deception indicators that prior work has failed to combine. The Llama 3 model produced a low perplexity score (2-3), and the lie detection model achieved 70% testing accuracy with a 70.9% F1-Score. Despite signs of overfitting, this framework provides a novel, intelligent decision-support tool to reduce fraud and manage default risks more effectively.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_81-Truth_Under_Pressure_A_Deep_Learning_Based_Lie_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Vision Transformer and MLP-Mixer for Epileptic Seizure Detection in Intracranial EEG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161080</link>
        <id>10.14569/IJACSA.2025.0161080</id>
        <doi>10.14569/IJACSA.2025.0161080</doi>
        <lastModDate>2025-10-30T08:26:18.0900000+00:00</lastModDate>
        
        <creator>Thouraya Guesmi</creator>
        
        <creator>Abir Hadriche</creator>
        
        <creator>Nawel Jmail</creator>
        
        <subject>Vision transformer; MLP-Mixer; iEEG; HFOs; ResNet; GoogleNet; EfficientNetB0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Accurate and timely seizure detection is essential for effective epilepsy management, and automated systems can play a valuable role in supporting clinical practice. In this study, we introduce a hybrid approach that uses time-frequency representations of intracranial electroencephalography (iEEG) signals filtered in High-Frequency Oscillation (HFO) bands as input to different convolutional neural network (CNN) backbones for feature extraction, followed by classification with either a Vision Transformer (ViT) or MLP-Mixer. This work establishes a systematic, comparative framework for benchmarking hybrid CNN-ViT against CNN-MLP-Mixer models, providing a critical new reference for automated epileptic seizure detection in HFO-filtered iEEG signals. Extensive evaluation demonstrates that the ViT consistently achieves superior performance, with an EfficientNetB0-ViT model attaining remarkable accuracy (97.85%) and specificity (98.92%). Crucially, the MLP-Mixer emerges as a highly competitive alternative, exhibiting strong recall capabilities that make it suitable for applications where missing a seizure is not an option. Overall, our findings suggest that self-attention mechanisms in ViTs provide a distinct advantage for capturing complex seizure dynamics, yet MLP-based models present a powerful, efficient option.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_80-Hybrid_Vision_Transformer_and_MLP_Mixer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Aware Federated Graph Neural Networks for Adaptive and Explainable Cancer Drug Personalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161079</link>
        <id>10.14569/IJACSA.2025.0161079</id>
        <doi>10.14569/IJACSA.2025.0161079</doi>
        <lastModDate>2025-10-30T08:26:18.0430000+00:00</lastModDate>
        
        <creator>Tripti Sharma</creator>
        
        <creator>Lakshmi K</creator>
        
        <creator>M. Misba</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>R. Aroul Canessane</creator>
        
        <creator>Komatigunta Nagaraju</creator>
        
        <creator>Adlin Sheeba</creator>
        
        <subject>Graph Neural Networks; cancer drug dosage; privacy preservation; genomic profiling; precision oncology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Personalized cancer treatment remains challenging due to the complexity of genomic data and variability in drug responses. Previous federated learning (FL) approaches handled distributed patient data to preserve privacy but treated genomic and pharmacological features as flat, tabular inputs, limiting the ability to capture gene–drug interactions. In this study, we propose a Graph Neural Network (GNN)-based framework, FedGraphOnco, which models patient-specific gene–drug interactions as structured graphs, enabling the network to learn complex relational patterns that are difficult or impractical for FL-only models. Attention mechanisms and SHapley Additive exPlanations (SHAP) are incorporated to provide interpretable insights into important genes, pathways, and drug interactions, increasing clinical trust. Using the GDSC dataset with gene expression, mutation status, copy number variation, and IC50 drug responses, the model demonstrates high predictive accuracy (Pearson correlation = 0.85, RMSE = 2.6, MAE = 1.9, dosage deviation = 2.8%), robustness to noise and non-IID data, and adaptive, personalized dosage recommendations. The approach highlights the advantages of combining privacy-preserving FL, GNNs, multi-omics data integration, explainability, and adaptive dosing, offering a scalable and interpretable solution for precision oncology.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_79-Privacy_Aware_Federated_Graph_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aerial Draft Surveyor (ADS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161078</link>
        <id>10.14569/IJACSA.2025.0161078</id>
        <doi>10.14569/IJACSA.2025.0161078</doi>
        <lastModDate>2025-10-30T08:26:18.0270000+00:00</lastModDate>
        
        <creator>John Matthew H. Escarro</creator>
        
        <creator>Fharjan M. Taguinopon</creator>
        
        <creator>Gyrielle Kysha M. Demegillo</creator>
        
        <creator>Dan Kevin T. Amper</creator>
        
        <creator>Rosanna C. Ucat</creator>
        
        <creator>Mark John S. Pag-Alaman</creator>
        
        <subject>Draft survey; UAV; machine learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Draft surveying is an essential procedure in determining the displacement and loaded cargo weight of bulk carriers. Currently, the most acceptable method is through manual visual observation by trained draft surveyors. However, this process is subjective, error-prone, and unsafe under poor visibility or during rough sea conditions. This study presents an automated computer vision-powered UAV draft surveying system integrating TensorRT Optimized YOLO11n object detection and YOLO11n-seg image segmentation models deployed on an NVIDIA Jetson Orin Nano. The system performs real-time draft estimation by detecting draft marks, segmenting the waterline, and computing draft values using convergence and line-fitting algorithms. Comparative evaluation with licensed human surveyors on 40 paired readings yielded an MAE of 0.1068 m, RMSE of 0.2740 m, and an R&#178; of 0.948, demonstrating human-comparable accuracy. Agreement analysis indicates high reliability (two-way random effects ICC(2,1) = 0.974) and a small mean bias (system − manual = +0.0628 m, 95% limits of agreement: −0.467 m to +0.592 m). Moreover, a paired t-test (t = 1.469, df = 39) found no statistically significant difference between methods (p ≈ 0.150). The results validate that the proposed UAV-driven computer vision system can perform reliable, real-time draft surveying with accuracy comparable to human experts.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_78-Aerial_Draft_Surveyor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Multi-Scale Object Detection in Surveillance Using Hybrid Transformer Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161077</link>
        <id>10.14569/IJACSA.2025.0161077</id>
        <doi>10.14569/IJACSA.2025.0161077</doi>
        <lastModDate>2025-10-30T08:26:17.9970000+00:00</lastModDate>
        
        <creator>Roshan D Suvaris</creator>
        
        <creator>Rahul Suryodai</creator>
        
        <creator>S. Narayanasamy</creator>
        
        <creator>Aanandha Saravanan</creator>
        
        <creator>Raman Kumar</creator>
        
        <creator>P N V Syamala Rao M</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Real-time object detection; hybrid transformer–YOLOv8; Context-Aware Feed Forward Network (CA-FFN); Cross-Scale Attention Skip Connections (CSASC); surveillance video analytics; multi-scale feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Real-time surveillance systems require accurate and efficient object detection to ensure safety and situational awareness. Existing methods, such as YOLOv5 and Vision Transformer-based detectors, often struggle to reliably identify small, distant, or occluded objects while maintaining real-time inference, limiting their applicability in complex surveillance environments. To address these challenges, this study proposes PRISM, a hybrid Transformer–YOLOv8 framework that integrates fast local feature extraction with global contextual refinement. The method introduces two novel components: i) a Context-Aware Feed Forward Network (CA-FFN) within the Vision Transformer (ViT), which dynamically weights channel features to reduce redundancy and enhance global context modeling, and ii) Cross-Scale Attention Skip Connections (CSASC) for selective fusion of multi-scale YOLOv8 and ViT features, improving detection of small or occluded objects. The model is implemented in PyTorch and trained on a comprehensive surveillance dataset consisting of pedestrians, vehicles, bicycles, bags, and miscellaneous objects. Experimental evaluation demonstrates that PRISM achieves 96% accuracy, a significant improvement of ~4–5% over baseline methods, with robust performance across all object categories. Key performance indicators verify the model&#8217;s reliability for real-time use, and its lightweight design makes it deployable on edge devices. These findings imply that PRISM offers a more efficient speed-accuracy balance in complex, dynamic settings than current methods. The study also notes possible extensions, such as incorporating multiple sensors and continuous video streams for temporal modeling, which provide a solid foundation for next-generation intelligent surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_77-Real_Time_Multi_Scale_Object_Detection_in_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Scalable Reinforcement Learning-Driven Intelligent Resource Management and Secure Framework for LoRaWAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161076</link>
        <id>10.14569/IJACSA.2025.0161076</id>
        <doi>10.14569/IJACSA.2025.0161076</doi>
        <lastModDate>2025-10-30T08:26:17.9630000+00:00</lastModDate>
        
        <creator>Shaista Tarannum</creator>
        
        <creator>Usha S. M</creator>
        
        <subject>LoRa; LoRaWAN; Q-learning; adaptive duty cycle; channel scheduling; energy efficiency; intrusion detection; trust score; resource management; IoT security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study proposes a Q-learning-based adaptive duty-cycle scheduling algorithm for LoRaWAN in a smart-city ecosystem to enhance energy efficiency, reduce transmission delay, and handle dynamic traffic conditions. It also incorporates an intelligent and efficient channel utilization scheme for LoRaWAN-enabled IoT networks and integrates a lightweight security strategy at the edge (gateways), making it suitable for low-power, low-computation LoRaWAN environments. In this adaptive and intelligent LoRaWAN framework, a Q-learning agent dynamically selects transmission actions based on contextual states, including buffer size, energy levels, and channel conditions, which optimizes energy efficiency and enhances the reliability of data transmission in LoRaWAN. A lightweight intrusion detection mechanism filters suspicious packets using trust scores and payload analysis to ensure secure data delivery and adaptive, scalable, proactive protection against several prevalent threats in LoRaWAN-driven IoT. The framework also incorporates channel-aware scheduling to avoid congestion and improve overall transmission performance. Experimental outcomes confirm improvements in throughput, delay, bandwidth utilization, energy conservation, and resilience against malicious or faulty transmissions, demonstrating the framework’s ability to optimize resource allocation while balancing these metrics adaptively.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_76-An_Efficient_and_Scalable_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Visualization and Knowledge Graph Analysis for Trend Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161075</link>
        <id>10.14569/IJACSA.2025.0161075</id>
        <doi>10.14569/IJACSA.2025.0161075</doi>
        <lastModDate>2025-10-30T08:26:17.9330000+00:00</lastModDate>
        
        <creator>Sunan Lv</creator>
        
        <subject>Reviewer; industry-education integration; hotspot visualization and analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This research employs scientometric examination and visual analytics techniques anchored in the Web of Science (WoS) repository to methodically delineate predominant research themes, foundational academic works, and emerging scholarly directions within industry-education integration studies. The investigation seeks to elucidate the discipline&#39;s epistemological framework and longitudinal transformation patterns while offering innovative analytical lenses and methodological paradigms to advance theoretical conceptualization and operational innovation in industry-education convergence initiatives. This investigation employs scientometric techniques to systematically map and examine 500 scholarly works on industry-education integration from the Web of Science (WoS) database (2010–2023) using VOSviewer. Through co-occurrence mapping, thematic clustering, and temporal trend analysis, the study identifies dominant research foci, influential contributors, and collaborative networks. This quantitative approach is further supplemented by case study investigations to delineate operational strategies and innovative frameworks for industry-academia synergy. Analysis reveals that research concentration spans five domains: higher education reform, Industry 4.0 alignment, engineering pedagogy enhancement, innovation ecosystems, and sustainability integration. Temporal evolution tracking demonstrates a paradigm shift from foundational theoretical debates to applied technological and implementation studies in recent cycles. Cluster analytics highlight the interdisciplinary nature of industry-education convergence, emphasizing tripartite collaboration among academic institutions, corporate entities, and governmental bodies as pivotal to systemic advancement. By synthesizing research trajectories and thematic priorities, this work establishes a structured knowledge foundation for both theoretical refinement and practical implementation in industry-education integration.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_75-Intelligent_Visualization_and_Knowledge_Graph_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Roadmap for Emerging Cyberbullying Mitigation: Integrating AI-Based Solutions, Ethics, and Policy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161074</link>
        <id>10.14569/IJACSA.2025.0161074</id>
        <doi>10.14569/IJACSA.2025.0161074</doi>
        <lastModDate>2025-10-30T08:26:17.9030000+00:00</lastModDate>
        
        <creator>Atif Mahmood</creator>
        
        <creator>Shaik Shabana Anjum</creator>
        
        <creator>Umm E Mariya Shah</creator>
        
        <creator>Pavani Cherukuru</creator>
        
        <creator>Javid Iqbal</creator>
        
        <creator>Sarah Bukhari</creator>
        
        <subject>Cyberbullying; human computer interaction; artificial intelligence; natural language processing; machine learning; content moderation; predictive analytics; online safety; youth protection; mental health; mental illness; cybercrime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Cyberbullying is one of the most prevalent challenges among younger social media users and affects their mental health. Artificial Intelligence (AI) is rapidly developing and has enormous potential to mitigate cyberbullying; this article therefore discusses the role AI has begun to play in strengthening efforts to combat it. Cyberbullying encompasses all deliberate aggressive behaviour intended to inflict social, psychological, or physical harm in a digital space, and AI detection technologies have considerable potential to detect, predict, and prevent cyberbullying in real time. The article also examines how advances in Natural Language Processing (NLP), machine learning, image and video analysis, and behavioural analytics make AI an emerging innovation for preventing cyberbullying and providing better services in a timely manner. Positive trends make clear how safer AI can improve the safety of future digital environments: more advanced NLP models will be able to identify nuanced forms of cyberbullying involving indirect attacks and sarcasm. The article further discusses the hazards associated with AI-based solutions, such as privacy, the trade-off between AI morality and AI effectiveness, and the importance of explaining and assigning responsibility for every AI decision. It shows how AI is changing our approach to online safety and helps identify cyberbullying across a variety of media, including text, video, and images. The article concludes with an overview of a roadmap for cyberbullying mitigation assisted by AI and ethical practices.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_74-Roadmap_for_Emerging_Cyberbullying_Mitigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conversational AI-Powered VR Development Model for Tourism Promotion in Thailand: Expert Assessment and Stakeholder Acceptance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161073</link>
        <id>10.14569/IJACSA.2025.0161073</id>
        <doi>10.14569/IJACSA.2025.0161073</doi>
        <lastModDate>2025-10-30T08:26:17.8700000+00:00</lastModDate>
        
        <creator>Jenasama Srihirun</creator>
        
        <creator>Kridsanapong Lertbumroongchai</creator>
        
        <creator>Vitsanu Nittayathammakul</creator>
        
        <creator>Pimon Kaewdang</creator>
        
        <subject>Virtual reality; conversational AI; Model Development; digital tourism; technology adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Thailand’s tourism sector increasingly requires immersive digital innovations that preserve local identity while enhancing visitor engagement. However, there remains a lack of a comprehensive model to guide such developments. This study aims to propose the Conversational AI-powered Virtual Reality Development Model for Tourism Promotion in Thailand, providing an integrated and context-specific framework suitable for practical implementation. A Design and Development Research (DDR) methodology (Type II) was employed in three stages: 1) synthesizing essential components through a scoping review, 2) constructing and validating the model via expert panels using the Content Validity Index (CVI) analysis, and 3) assessing suitability and acceptance through expert evaluation and stakeholder surveys. The model developed in this study, referred to as the 4Ds Model, contributes new knowledge by integrating conversational AI and virtual reality within a four-phase structure—Discover, Design, Develop, and Deploy—supported by five enabling capitals: human, cultural, technological, informational, and financial. The Deploy phase modifies the AISAS communication framework into AICAS (Attention, Interest, Chat, Action, Share) to illustrate the function of conversational AI in improving user interaction and engagement within the context of tourism in Thailand. Results indicated high expert ratings of suitability and strong stakeholder intention to adopt. Multiple regression analysis revealed that technological self-efficacy, perceived interactivity, and perceived tourism benefits were significant predictors, explaining 73.3% of the variance in behavioral intention. The findings demonstrate both the theoretical advancement in AI–VR integration and the practical readiness of the 4Ds Model as a culturally aligned roadmap for digital tourism transformation in Thailand.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_73-Conversational_AI_Powered_VR_Development_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Review of Object Detection Techniques for Traffic Light Detection in Intelligent Transportation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161072</link>
        <id>10.14569/IJACSA.2025.0161072</id>
        <doi>10.14569/IJACSA.2025.0161072</doi>
        <lastModDate>2025-10-30T08:26:17.8230000+00:00</lastModDate>
        
        <creator>Adhwa Salemi</creator>
        
        <creator>Muhammad Arif Mohamad</creator>
        
        <subject>Object detection; traffic light detection; optimization; intelligent transportation systems; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Object detection and tracking play a critical role in intelligent transportation systems (ITS), particularly in recognizing and monitoring traffic lights to ensure safety and improve traffic efficiency. Despite progress in deep learning and optimization algorithms, traffic light detection still faces persistent challenges under varying conditions such as illumination changes, occlusions, and visual clutter. This study provides a critical review of object detection techniques specifically for traffic light detection, evaluating the evolution of machine learning frameworks, deep learning architectures, and hybrid optimization models. The review identifies research gaps in the robustness, real-time adaptability, and generalizability of existing methods. Furthermore, it highlights emerging trends such as multi-camera systems, anchor-free detection, and hybrid optimization techniques that bridge performance trade-offs between accuracy and efficiency. The findings offer a new perspective on integrating multiple approaches to achieve scalable, high-accuracy traffic light detection for future ITS applications.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_72-Critical_Review_of_Object_Detection_Techniques_for_Traffic_Light_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Echo Chambers in Recommender Systems: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161071</link>
        <id>10.14569/IJACSA.2025.0161071</id>
        <doi>10.14569/IJACSA.2025.0161071</doi>
        <lastModDate>2025-10-30T08:26:17.7930000+00:00</lastModDate>
        
        <creator>Meriem HASSANI SAISSI</creator>
        
        <creator>Nouhaila IDRISSI</creator>
        
        <creator>Ahmed ZELLOU</creator>
        
        <subject>Echo chamber; recommender systems; filter bubbles; collaborative filtering; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Echo chambers refer to the phenomenon in which individuals are consistently exposed to content that aligns with their existing viewpoints. Over time, this can narrow a user’s perspective and make it harder to encounter different opinions. In this systematic literature review, we examine studies published between 2019 and early 2025 and how they have approached this issue, from understanding its causes to examining existing detection and mitigation strategies. We organized the main findings, noting patterns in the algorithms used, the role of user behavior, and the influence of the data itself. Several works also suggest ways to introduce more variety into recommendations, aiming to break repetitive exposure. Our review confirms that echo chambers and filter bubbles do exist in recommender systems and that they raise concerns for diversity and fairness. We end by pointing to open questions and possible directions for future work, for both researchers and practitioners.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_71-Understanding_Echo_Chambers_in_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Impact of Cybersecurity Knowledge on the Prevention of Social Cybercrime Among University Students in Mexico, Colombia, and Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161070</link>
        <id>10.14569/IJACSA.2025.0161070</id>
        <doi>10.14569/IJACSA.2025.0161070</doi>
        <lastModDate>2025-10-30T08:26:17.7600000+00:00</lastModDate>
        
        <creator>Yasmina Riega-Viru</creator>
        
        <creator>Lainiver Mendoza Munar</creator>
        
        <creator>Mario Ninaquispe-Soto</creator>
        
        <creator>Kiara Nilupu-Moreno</creator>
        
        <creator>Juan Luis Salas-Riega</creator>
        
        <creator>Alfonso Renato Vargas-Murillo</creator>
        
        <creator>Yolanda Pinto Bouroncle</creator>
        
        <subject>Cybersecurity; social cybercrime; university students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Objectives: This study aims to evaluate the degree of cybersecurity knowledge and awareness among university students in Peru, Mexico, and Colombia, and to determine how these factors contribute to protection against social cybercrime. This cross-regional analysis represents a novel contribution by comparing cybersecurity preparedness across three Latin American countries, an underrepresented region in cybersecurity education research. Methods: A cross-sectional study was conducted using a 97-question survey that assessed both cybersecurity knowledge and practices. The study involved 809 university students from Peru, Mexico, and Colombia. Correlation analysis was performed to examine the relationship between cybersecurity knowledge and cybercrime prevention practices. Results: The analysis revealed a positive but low correlation (r=0.252) between cybersecurity knowledge and cybercrime prevention practices. Only 10.71% of preventive practices could be explained by acquired knowledge. Greater efficacy was observed in cyberstalking prevention compared to other forms of cybercrime. A significant gap was found between theoretical knowledge and practical application of cybersecurity, with only 44.6% of students receiving occasional information on the subject. Conclusions: This study highlights the urgent need to improve cybersecurity education in Latin American universities. The findings underscore the importance of integrating applied practices into cybersecurity curricula to strengthen students&#39; ability to effectively counter cyber threats. Future educational initiatives should focus on bridging the gap between theoretical knowledge and practical application to enhance students&#39; resilience against social cybercrime.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_70-Evaluation_of_the_Impact_of_Cybersecurity_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Taxonomy for Human Activity Recognition Based on a Systematic Analysis of Public UAV Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161069</link>
        <id>10.14569/IJACSA.2025.0161069</id>
        <doi>10.14569/IJACSA.2025.0161069</doi>
        <lastModDate>2025-10-30T08:26:17.7300000+00:00</lastModDate>
        
        <creator>Sumaya Abdulrahman Altuwairqi</creator>
        
        <creator>Salma Kammoun Jarraya</creator>
        
        <subject>Human action recognition; UAV videos; surveillance systems; categorization framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In recent decades, unmanned aerial vehicles (UAVs) have become widely utilized for many real-world applications, including surveillance, crowd management, and threat detection, providing a new perspective to recognize human behaviors. However, current UAV-based video datasets adopt categorization schemes that rely on broad and inconsistent categories relative to real-world aerial contexts. To address this knowledge gap, this study proposes a novel human activity categorization framework derived from a comprehensive systematic analysis of ten publicly available UAV-based human action recognition (HAR) datasets, incorporating a variety of environmental situations and human behaviors. By reconciling inconsistent categories and finer activities, this taxonomy serves as a standard framework for UAV-based HAR research. The proposed categorization framework is validated by comparing it with other existing frameworks on the publicly benchmarked Drone-Action dataset, outperforming them by 97% across four metrics. Our contribution aims to develop the foundation for further experimental validation and provide a guide for researchers interested in developing accurate and context-aware surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_69-A_Novel_Taxonomy_for_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Head Pose Estimation for Assessing Visual Attention in Children with Special Needs During Robot-Assisted Therapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161068</link>
        <id>10.14569/IJACSA.2025.0161068</id>
        <doi>10.14569/IJACSA.2025.0161068</doi>
        <lastModDate>2025-10-30T08:26:17.7000000+00:00</lastModDate>
        
        <creator>Rusnani Yahya</creator>
        
        <creator>Rozita Jailani</creator>
        
        <creator>Nur Khalidah Zakaria</creator>
        
        <creator>Fazah Akhtar Hanapiah</creator>
        
        <subject>Head pose estimation; visual attention; robot-assisted therapy; children with special needs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study investigates the application of head pose estimation (HPE) to assess visual attention in children with special needs (CwSN) during robot-assisted therapy sessions, focusing on its effectiveness and the attention patterns exhibited by these children. CwSN often face unique challenges, such as sensory processing difficulties or delayed cognitive processing. Age and therapy duration also influenced attention levels, with younger children generally exhibiting shorter attention spans than older participants. Additionally, familiarity with technology, such as prior screen time at home, positively impacted engagement during robot-assisted therapy. An experimental study was conducted with 30 children aged 2 to 7 years, including those with autism spectrum disorder (ASD), speech delay (SD), and attention-deficit/hyperactivity disorder (ADHD). Using an integrated camera, head movements were tracked to analyse forward-facing head direction as an indicator of attention. The system achieved an overall accuracy of 82% and an average attention percentage of 65%, highlighting that visual attention varies significantly based on the type of disability, age, and therapy duration. The integration of the robot enhanced visual engagement across all groups, fostering improved interaction and attention. These findings emphasise the importance of tailoring robot-assisted therapy (RAT) to the specific needs and attention patterns of children with different disabilities, ages, and therapy histories, underscoring the potential of assistive robotics to optimise therapeutic outcomes in special education settings. This research highlights the potential of personalised RAT to improve social, cognitive, and motor skills. It offers evidence-based strategies for integrating assistive robotics into special education and therapeutic settings for CwSN.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_68-Evaluating_Head_Pose_Estimation_for_Assessing_Visual_Attention_in_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Integrating Machine Learning and Malay-Arabic Lexical Mapping for Halal Food Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161067</link>
        <id>10.14569/IJACSA.2025.0161067</id>
        <doi>10.14569/IJACSA.2025.0161067</doi>
        <lastModDate>2025-10-30T08:26:17.6670000+00:00</lastModDate>
        
        <creator>Noorrezam Yusop</creator>
        
        <creator>Massila Kamalrudin</creator>
        
        <creator>Nuridawati Mustafa</creator>
        
        <creator>Tao Hai</creator>
        
        <creator>Mohd Nazrien Zaraini</creator>
        
        <creator>Halimaton Hakimi</creator>
        
        <creator>Siti Fairuz Nurr Sardikan</creator>
        
        <subject>Malay-Arabic lexical mapping; natural language processing; machine learning; halal food classification; halal food e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The rapid growth of e-commerce has changed the way people engage with businesses, notably in the food industry. For the Muslim community, guaranteeing Halal conformity in digital transactions is critical. This study provides a comprehensive framework for improving Halal E-Commerce systems that includes machine learning, pattern libraries, and multilingual support, specifically in Malay and Arabic. The study examines the role of pattern libraries in designing user-friendly interfaces, as well as lexical mapping strategies for enhancing Malay-Arabic translation accuracy. Natural language processing (NLP) and machine learning are combined to create an application that classifies food items into two categories: Halal or Haram. With an accuracy of 85%, a Random Forest classifier is trained on labeled datasets. The classification process comprises text preprocessing, feature extraction using TF-IDF, and evaluation of the results using precision, recall, and F1-score. To increase classification accuracy, a rule-based approach using conditional logic and keyword matching is also applied. By adjusting the parameters, the model is further improved, leading to strong performance. By taking into account the cultural and linguistic requirements of the Muslim community, multilingual support enhances accessibility and user confidence. The suggested method increases translation accuracy by employing lexical mapping at the word, phrase, and context levels. The paper also assesses several machine learning models, demonstrating that Random Forest outperforms the other methods examined. The findings contribute to the growth of Halal E-Commerce by outlining a systematic strategy to ensure compliance and usability. The proposed system can serve as a platform for future research into AI-driven Halal certification, digital marketplace optimization, and blockchain integration within an e-Commerce framework.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_67-Recent_Integrating_Machine_Learning_and_Malay_Arabic_Lexical_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Scanability of Damaged QR Codes Through Image Restoration Using GANs Combined with the Spectral Normalization Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161066</link>
        <id>10.14569/IJACSA.2025.0161066</id>
        <doi>10.14569/IJACSA.2025.0161066</doi>
        <lastModDate>2025-10-30T08:26:17.6530000+00:00</lastModDate>
        
        <creator>Puwadol Sirikongtham</creator>
        
        <creator>Apichaya Nimkoompai</creator>
        
        <subject>QR Code restoration; Generative Adversarial Networks; spectral normalization; image inpainting; deep learning; damage reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>QR Codes are widely used in the digital era for storing and sharing information in various applications. However, they are often susceptible to physical damage such as scratches, tears, or fading, which can result in scanning failures and limit their usability. To overcome this issue, this research introduces a Generative Adversarial Network (GAN) model integrated with Spectral Normalization to restore damaged QR Code images. The model was trained and evaluated using a dataset of QR Codes with simulated damage ranging from 1% to 60%. Experimental results demonstrate that the proposed approach effectively reconstructs missing parts of QR Codes while preserving structural details and module sharpness. The model achieved an average PSNR of 28.5 dB, SSIM of 0.91, and a scanning success rate of 88%, outperforming U-Net (68%) and a baseline GAN (75%). Although the processing time is slightly longer, the model offers superior accuracy and robustness, particularly for severely damaged QR Codes (40% to 60% damage). These findings confirm that GANs enhanced with Spectral Normalization offer a promising solution for QR Code restoration, with potential uses in digital marketing, payment systems, and inventory management.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_66-Enhancing_the_Scanability_of_Damaged_QR_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Color QR-Code Technology in Biometric Data Encoding and Facial Identity Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161065</link>
        <id>10.14569/IJACSA.2025.0161065</id>
        <doi>10.14569/IJACSA.2025.0161065</doi>
        <lastModDate>2025-10-30T08:26:17.6370000+00:00</lastModDate>
        
        <creator>Nazym Kaziyeva</creator>
        
        <creator>Kalybek Maulenov</creator>
        
        <creator>Ruslan Ospanov</creator>
        
        <creator>Abzhan Khamza Mukhtaruly</creator>
        
        <subject>Color biometric QR code; facial image encoding; RGB channel decomposition; biometric data integration; secure identification; facial recognition; QR animation; identity encoding; privacy protection; data capacity; OpenCV; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This paper presents an enhanced algorithm for the generation of color biometric QR codes capable of encoding facial image data, anthropometric parameters, and personal identity information simultaneously within a single RGB-based QR structure. The proposed approach extends existing monochrome QR models by integrating optimized image decomposition, modular QR block generation, and multi-channel RGB encoding to achieve higher data density, improved privacy protection, and better readability under various lighting and compression conditions. The algorithm was implemented in Python using the OpenCV library, ensuring compatibility with contemporary biometric systems, embedded devices, and mobile platforms. Experimental evaluations conducted on standard face databases demonstrate the method’s robustness in terms of decoding accuracy, distortion resilience, and information integrity. Furthermore, the study explores new applications such as animated QR codes and photo–sketch hybrid datasets for training and validation purposes. The results highlight the potential of color biometric QR technology for secure identification, access control, and digital identity verification, offering a novel bridge between computer vision and information security.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_65-Integration_of_Color_QR_Code_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Performance-Based Time Series Forecast Combination Method and Applications with Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161064</link>
        <id>10.14569/IJACSA.2025.0161064</id>
        <doi>10.14569/IJACSA.2025.0161064</doi>
        <lastModDate>2025-10-30T08:26:17.6030000+00:00</lastModDate>
        
        <creator>M. Burak Erturan</creator>
        
        <subject>Combination forecast; performance-based combination; neural networks; multi-layer perceptron; extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Performance-based forecast combination approaches determine the weights of the individual forecasts based on the inverse average error for a past time interval. However, although the performances are calculated for a time span, the aim is mostly a one-step-ahead time-point forecast. In these classical methods, a relatively higher prediction error at a single past time-point spreads and decreases the performance value of the model, even though the model is highly successful at other time-points in the interval. In this study, a novel approach is presented where the performance of each past time-point prediction is calculated separately. Instead of taking the inverse average error for a pre-determined past time interval, prediction performance is calculated for each past data point separately using the normalized inverse absolute error; the average performances over the past time interval then yield the combination weights. To measure the performance of the presented methodology, it is applied to three well-known time series datasets. Seven different neural network models, based on multi-layer perceptrons and extreme learning machines, are used to model, forecast, and form the combination forecasts. Moreover, four different performance-based combination techniques, two central tendency-based benchmark combination methods, and the na&#239;ve model are employed for comparison. The obtained results show that the proposed methodology is a powerful and robust technique, superior to all of the compared performance-based combination techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_64-A_Novel_Performance_Based_Time_Series_Forecast_Combination_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrative Hybrid Metaheuristic Algorithm for Hyperparameter Optimisation in Pre-Trained Convolutional Neural Network Models (I-HAHO)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161063</link>
        <id>10.14569/IJACSA.2025.0161063</id>
        <doi>10.14569/IJACSA.2025.0161063</doi>
        <lastModDate>2025-10-30T08:26:17.5900000+00:00</lastModDate>
        
        <creator>Nazleeni Samiha Haron</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <creator>Izzatdin Abdul Aziz</creator>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <creator>Muhammad Hamza Azam</creator>
        
        <subject>Hyperparameter Optimisation (HPO); Convolutional Neural Networks (CNNs); Artificial Bee Colony (ABC); Harris Hawks Optimisation (HHO); Hybrid Metaheuristic Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Hyperparameter optimisation (HPO) remains a fundamental challenge in deep learning, especially for pre-trained convolutional neural networks (CNNs). While pre-trained models reduce the computational burden of training from scratch, their effectiveness depends heavily on tuning parameters such as learning rate, batch size, dropout, weight decay, and optimizer type. The search space of hyperparameters is large, nonlinear, and highly dataset-dependent, making traditional techniques like grid search, random search, and Bayesian optimisation insufficient. This paper introduces I-HAHO, an Integrative Hybrid Metaheuristic Algorithm that combines Artificial Bee Colony (ABC) for global exploration and Harris Hawks Optimisation (HHO) for local exploitation. A diversity-based phase-switching mechanism dynamically regulates exploration and exploitation, allowing the optimiser to adapt its search behaviour to varying landscape conditions. Experiments on CIFAR-10, CIFAR-100, SVHN, and TinyImageNet with three CNN architectures (VGG16, ResNet50, EfficientNet-B0) demonstrate up to 6.9% accuracy improvements. I-HAHO enhances adaptability, scalability, and robustness for hyperparameter tuning.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_63-Integrative_Hybrid_Metaheuristic_Algorithm_for_Hyperparameter_Optimisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unveiling the Drivers of Consumer Purchase Intention in Short-Form Video Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161062</link>
        <id>10.14569/IJACSA.2025.0161062</id>
        <doi>10.14569/IJACSA.2025.0161062</doi>
        <lastModDate>2025-10-30T08:26:17.5570000+00:00</lastModDate>
        
        <creator>Merisa Syafrina</creator>
        
        <creator>Viany Utami Tjhin</creator>
        
        <subject>E-commerce; purchase intention; Shopee Video; social commerce; SEM-PLS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Short-video features on e-commerce platforms have become a key driver of social commerce, enhancing user engagement and purchase intention. However, user reviews of Shopee Video reveal issues such as disruptive autoplay, limited content control, and unintuitive navigation. While prior studies have examined engagement and satisfaction in general e-commerce, limited research has explored how short-video features within social commerce influence purchase intention through user engagement. This study fills that gap by analysing the factors affecting consumer behaviour toward Shopee Video. Sentiment analysis of user reviews identified common dissatisfaction themes, followed by a quantitative survey of 300 Shopee Video users in Indonesia. Using Structural Equation Modeling with the Partial Least Squares (SEM-PLS) approach, the results show that user engagement significantly mediates the relationship between system quality, information quality, and technology adoption and usefulness on purchase intention. The study extends existing models to the social commerce context, providing insights for optimizing short-video features to strengthen engagement and conversion.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_62-Unveiling_the_Drivers_of_Consumer_Purchase_Intention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technique for Automated Parallel Optimization of Function Calls in C++ Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161061</link>
        <id>10.14569/IJACSA.2025.0161061</id>
        <doi>10.14569/IJACSA.2025.0161061</doi>
        <lastModDate>2025-10-30T08:26:17.5100000+00:00</lastModDate>
        
        <creator>Shuruq Abed Alsaedi</creator>
        
        <creator>Fathy Elbouraey Eassa</creator>
        
        <creator>Amal Abdullah AlMansour</creator>
        
        <creator>Lama Abdulaziz Al Khuzayem</creator>
        
        <creator>Rsha Talal Mirza</creator>
        
        <subject>Automatic parallelization; function-level parallelization; C++ code optimization; parallel computing; control flow graph; dependency analysis; performance optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In modern software development, achieving high performance increasingly relies on effective parallelization. While much of the existing research has focused on loop-level parallelism, function-level parallelization remains relatively underutilized. Yet, in many real-world applications, function calls serve as natural units of computation that could greatly benefit from concurrent execution. To address this gap, we present an automated tool that analyzes sequential C++ code, identifies independent function calls, and evaluates their suitability for parallel execution. The tool performs three key analyses: dependency analysis to detect function calls, context analysis to understand execution conditions, and workload assessment to determine whether parallelization would result in significant performance benefits. Based on the analysis results, the tool transforms eligible function calls into parallel equivalents without altering the original program logic. Additionally, the tool generates detailed Control Flow Graphs (CFG) for each function in three formats, facilitating further structural analysis. Three benchmark programs were used in experimental testing. The evaluation measured both sequential and parallel execution times, along with the computed performance gain expressed as a percentage reduction in runtime. Results demonstrated the tool’s ability to improve execution efficiency and reduce processing time. These outcomes emphasize the tool’s role in advancing function-level automatic parallelization. The tool showed notable performance improvements across the three benchmark applications, with the Employee Performance System achieving the highest improvement of 54.6%, followed by the Genomic Sequence System at 48.3%, and the Book Reviews System achieving an improvement of 36.1%, demonstrating the tool’s ability to improve efficiency via automated function-level parallelization.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_61-A_Technique_for_Automated_Parallel_Optimization_of_Function_Calls.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fourier Transform and Attention Guided Deep Neural Network for Face Anti-Spoofing in Medical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161060</link>
        <id>10.14569/IJACSA.2025.0161060</id>
        <doi>10.14569/IJACSA.2025.0161060</doi>
        <lastModDate>2025-10-30T08:26:17.4800000+00:00</lastModDate>
        
        <creator>Zhanseri Ikram</creator>
        
        <subject>Liveness detection; face anti-spoofing; deep learning; CNN; frequency domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Face recognition systems have become prevalent in mobile devices and security applications, increasing the demand for robust face presentation attack detection. Early efforts based on handcrafted features struggled to cope with variations in illumination, pose, and attack modalities, prompting a transition toward deep learning solutions capable of extracting subtle discriminative cues. A novel architecture built upon an EfficientNet-V2 backbone, combined with a Shuffle Attention module and Fourier heads, was developed to capture both spatial and frequency domain characteristics. A dual-path approach processes each input face image through conventional convolutional blocks and a 2D Discrete Fourier Transform path, with dedicated Fourier heads reconstructing frequency maps that reveal minute discrepancies between genuine and spoofed presentations. Experimental evaluation on the Oulu-NPU dataset demonstrates strong performance across four protocols, including robust detection under varying environmental conditions, low error rates with novel attack types, and consistent results across different sensor inputs. Metrics such as APCER, BPCER, and ACER validate the method’s ability to distinguish between live and fake faces reliably. The outcomes suggest that combining spatial and frequency cues addresses limitations observed in earlier approaches, offering valuable insights for deployment in security-sensitive applications and setting a strong foundation for future research in face anti-spoofing.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_60-Fourier_Transform_and_Attention_Guided_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality Classification of Harumanis Mango Based on External Multi-Parameter and Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161059</link>
        <id>10.14569/IJACSA.2025.0161059</id>
        <doi>10.14569/IJACSA.2025.0161059</doi>
        <lastModDate>2025-10-30T08:26:17.4630000+00:00</lastModDate>
        
        <creator>Mohd Nazri Abu Bakar</creator>
        
        <creator>Abu Hassan Abdullah</creator>
        
        <creator>Muhamad Imran Ahmad</creator>
        
        <creator>Norasmadi Abdul Rahim</creator>
        
        <creator>Haniza Yazid</creator>
        
        <creator>Wan Mohd Faizal Wan Nik</creator>
        
        <creator>Shafie Omar</creator>
        
        <creator>Shahrul Fazly Man@Sulaiman</creator>
        
        <creator>Tan Shie Chow</creator>
        
        <creator>Fahmy Rinanda Saputri</creator>
        
        <subject>Machine learning; image processing; quality assessment; Harumanis mango; appearance attributes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Grading Harumanis mangoes is traditionally done through manual visual inspection, which is subjective, inconsistent, and labor-intensive. Industry practices report only 70–80% consistency among human graders, with accuracy further declining under fatigue or high volumes. These limitations hinder uniform quality assurance, especially for export markets. To address this, an image-based, non-destructive grading system was developed, focusing on external features such as surface defect severity, ripeness index, shape uniformity, and size. A dataset of 1,018 mango samples was collected and analyzed using a machine vision system. Features were extracted through image segmentation and color–shape analysis, then classified using a Fuzzy Inference System (FIS) and Machine Learning (ML) models including SVM, MLPNN, and ANFIS. Enhanced SVM variants were also implemented to assess performance gains. Results showed strong performance across all parameters: ripeness index accuracy reached 93.5%, shape uniformity 91.6%, and size classification over 96%. The enhanced SVM+ achieved the best overall accuracy at 95.1% with the lowest error rates. The proposed system demonstrated clear improvements over manual grading and effectively classified mangoes into PREMIUM, GRADE 1, GRADE 2, and REJECT categories, supporting its potential for reliable real-world deployment.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_59-Quality_Classification_of_Harumanis_Mango_Based_on_External_Multi_Parameter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meta-Learning Prediction Framework for Asphalt Mixtures Fatigue Life Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161058</link>
        <id>10.14569/IJACSA.2025.0161058</id>
        <doi>10.14569/IJACSA.2025.0161058</doi>
        <lastModDate>2025-10-30T08:26:17.4330000+00:00</lastModDate>
        
        <creator>Longmeng Tan</creator>
        
        <creator>Krzysztof Kowalski</creator>
        
        <subject>Asphalt mixtures; fatigue life; meta-learning prediction; mechanism analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In order to improve the accuracy and generalization ability of asphalt mixture fatigue life prediction, this study introduces the meta-learning method, which aims to solve the problems of poor adaptability and strong data dependence of the traditional prediction model under complex working conditions. In this study, a prediction framework based on the Model-Agnostic Meta-Learning (MAML) algorithm is constructed, which realizes the fast and accurate prediction of asphalt mixture fatigue life under multi-task conditions through feature extraction, meta-knowledge learning, and a fast adaptive mechanism. The experiments were conducted using multi-class mixture data and compared with linear regression and BP neural network methods under the MATLAB platform. The results show that the meta-learning model achieves a prediction accuracy of 0.98 within 500 iterations, which is significantly better than that of the BP neural network (0.89) and linear regression (0.84), and the prediction error is controlled to be between 40 and 60 under typical working conditions, while the traditional method has an error of up to 150. Further analysis shows that the meta-learning method converges faster, reaching a convergence index of 0.9 within 100 iterations, and exhibits higher robustness. In conclusion, the meta-learning-based prediction method shows excellent performance in fatigue life modeling, which is suitable for rapid application in real-world engineering with diverse materials and loading environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_58-Meta_Learning_Prediction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Driven Emotional Feedback Analysis and Adaptive Content Generation for VR Movie and TV Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161057</link>
        <id>10.14569/IJACSA.2025.0161057</id>
        <doi>10.14569/IJACSA.2025.0161057</doi>
        <lastModDate>2025-10-30T08:26:17.4030000+00:00</lastModDate>
        
        <creator>Yun TANG</creator>
        
        <subject>Machine learning; VR movie and television; user sentiment feedback analysis; adaptive content generation; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>With the growing demand for immersive audiovisual experiences, user sentiment feedback analysis has become a pivotal factor in improving personalization and interactivity in virtual reality (VR) movie and television. This study proposes a machine learning–driven framework that integrates sentiment feedback recognition and adaptive content generation to optimize user experience. First, a Long Short-Term Memory (LSTM) model is developed to analyze multimodal sentiment feedback data, including physiological signals, behavioral responses, and interactive actions. The model achieves an average recognition accuracy of 75.75% across four basic emotions—happiness, sadness, anger, and fear—demonstrating its ability to capture dynamic and continuous emotional patterns. Based on real-time sentiment feedback, a Deep Q-Network (DQN) reinforcement learning algorithm is employed to generate adaptive VR content that aligns with users’ current emotional states. Experimental validation with 100 participants shows that adaptive content generation increases overall satisfaction scores from 6.2 to 7.8, and the matching degree between user emotions and content improves by more than 20%. The integration of sentiment feedback analysis and reinforcement learning establishes a closed feedback loop—emotion detection → adaptive adjustment → feedback optimization—that enhances immersion, empathy, and user engagement. This research provides a data-driven reference for the intelligent evolution of VR movie and television, and future work will expand to fine-grained emotional dimensions and multimodal fusion to improve recognition precision and real-time adaptive generation performance.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_57-Machine_Learning_Driven_Emotional_Feedback_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedding Models: A Comprehensive Review with Task-Oriented Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161056</link>
        <id>10.14569/IJACSA.2025.0161056</id>
        <doi>10.14569/IJACSA.2025.0161056</doi>
        <lastModDate>2025-10-30T08:26:17.3700000+00:00</lastModDate>
        
        <creator>Lahbib Ajallouda</creator>
        
        <creator>Meriem Hassani Saissi</creator>
        
        <creator>Ahmed Zellou</creator>
        
        <subject>Natural language processing; sentence embedding models; transformer models; embedding models challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Sentence embedding is a very important technique in most natural language processing (NLP) tasks, such as answer generation, semantic similarity detection, text classification and information retrieval. This technique aims to transform the semantic meaning of a sentence into a fixed-dimensional vector, allowing machines to understand human language. Sentence embedding has moved in recent years from simple word vector averaging methods to the development of more sophisticated models, particularly those based on transformer structures such as the BERT model and its variants. However, systematic reviews that critically analyze and compare the performance of these models are still limited, particularly regarding the selection of the appropriate embedding model for a specific NLP task. This study aims to address this gap through a comprehensive review of sentence embedding models and a systematic evaluation of their performance on NLP tasks, such as semantic similarity, clustering, and retrieval. The study enabled us to identify the appropriate embedding model for each task, identify the main challenges faced by embedding models, and propose effective solutions to improve the performance and efficiency of sentence embedding.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_56-Embedding_Models_A_Comprehensive_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Criteria Using Dijkstra’s Algorithm to Determine Optimal Time Paths in Vehicle Route Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161055</link>
        <id>10.14569/IJACSA.2025.0161055</id>
        <doi>10.14569/IJACSA.2025.0161055</doi>
        <lastModDate>2025-10-30T08:26:17.3530000+00:00</lastModDate>
        
        <creator>Basorudin</creator>
        
        <creator>Handaru Jati</creator>
        
        <creator>Nurkhamid</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <subject>Dijkstra’s algorithm; multi-criteria; road traffic network; Weighted Sum Method (WSM); Weighted Product Method (WPM); mathematics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This research develops a road traffic network model using mathematical methods, namely Dijkstra&#39;s Algorithm, the Weighted Sum Method (WSM), and the Weighted Product Method (WPM). Meanwhile, the parameters used are route, volume, capacity, DOS, distance, and travel time. The objective of this research is to find the fastest route alternative from one place to another. The recommended results using Dijkstra&#39;s algorithm combine values of distance, travel time, and congestion degree with the weighted-sum and weighted-product methods, each calculated accordingly. The shortest route is 1→2→4→7→10→14, while the route with the least congestion and shortest travel time is 1→2→5→6→9→13→14. This research combines these three parameters to obtain a route that balances congestion level, short travel distance, and short travel time for the driver. By combining these parameters, the best route from this study is route 1: 1→2→5→8→12→11→14 with a total distance of 28.77 km, a saturation degree value of 5.421, and a travel time of 28 minutes. Thus, the research results indicate that the best route combines multiple criteria, such as short distance, short travel time, and less congestion, simultaneously. The Weighted-Sum Method (WSM) and Weighted-Product Method (WPM) can produce different outputs, with WPM being superior to WSM in terms of computational steps.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_55-Multi_Criteria_Using_Dijkstras_Algorithm_to_Determine_Optimal_Time_Paths.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SEARCHX: An Integrated Framework of Distributed Intelligent Search Services Based on Web Browser</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161054</link>
        <id>10.14569/IJACSA.2025.0161054</id>
        <doi>10.14569/IJACSA.2025.0161054</doi>
        <lastModDate>2025-10-30T08:26:17.3070000+00:00</lastModDate>
        
        <creator>Zehui Zhang</creator>
        
        <creator>Lin Zhou</creator>
        
        <creator>Jie Peng</creator>
        
        <creator>Liwei Wang</creator>
        
        <creator>Bo Cheng</creator>
        
        <subject>Web; TF-IDF; distributed network; intelligent search; SEARCHX</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>To address the growing demand for web search and improve the performance and accuracy of search systems, this study proposes a distributed intelligent search service integration framework based on SEARCHX. This framework leverages the local computational power of the browser, integrating inverted indexing, data sharding, and replication mechanisms, as well as the Term Frequency-Inverse Document Frequency (TF-IDF) intelligent ranking algorithm. These components enable front-end distributed processing of search tasks and multi-source result fusion. Experiments are conducted on six major browser platforms (including Chrome, Firefox, Edge, and Safari) using the open-source Text REtrieval Conference (TREC) dataset. The system’s response performance and accuracy are evaluated under varying search loads. The experimental results show that, compared to the unoptimized version, the optimized SEARCHX reduces the average response time by approximately 27 per cent under medium-to-high load conditions. Precision improves by an average of 0.05, and the F1 score increases by more than 0.04 on all platforms. The system also demonstrates good stability and consistency across multiple platforms. SEARCHX provides a viable approach to building decentralized, high-efficiency, and easily deployable intelligent search services, with strong practical value and expansion potential. This study aims to construct a decentralized, cross-platform, and high-performance intelligent search service framework, offering a more efficient, stable, and accurate technical support solution for users in complex search environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_54-SEARCHX_An_Integrated_Framework_of_Distributed_Intelligent_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging AI and Hybrid Intelligence for Robust Geospatial Data Fusion in Autonomous Terrestrial Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161053</link>
        <id>10.14569/IJACSA.2025.0161053</id>
        <doi>10.14569/IJACSA.2025.0161053</doi>
        <lastModDate>2025-10-30T08:26:17.2770000+00:00</lastModDate>
        
        <creator>Manel Salhi</creator>
        
        <creator>Mounir Bouzguenda</creator>
        
        <creator>Faouzi Benzarti</creator>
        
        <creator>Fawaz Alanazi</creator>
        
        <creator>Ezzeddine Touti</creator>
        
        <subject>Autonomous navigation; geospatial data fusion; graph neural networks; transformer-based models; sensor fusion; AI-driven mobility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Rapid advancement of artificial intelligence (AI) and geospatial data fusion has enabled the development of highly autonomous terrestrial navigation systems with improved accuracy, adaptability, and robustness. This paper proposes a novel framework integrating multi-source geospatial data fusion with deep learning-based decision-making for autonomous terrestrial navigation. Unlike conventional approaches that rely solely on Global Navigation Satellite Systems (GNSS) or inertial sensors, our system leverages a hybrid fusion model combining GNSS, LiDAR, camera vision, and high-resolution geospatial databases. A deep reinforcement learning (DRL) paradigm is introduced to enhance the system’s adaptability in dynamic environments, optimizing route planning and obstacle avoidance in real-time. Additionally, a hybrid AI model incorporating Graph Neural Networks (GNN) and Transformer-based architectures processes spatial and temporal dependencies in navigation data, improving localization precision and resilience against sensor failures. The proposed system is evaluated through extensive simulations and real-world tests, demonstrating superior performance in complex urban and off-road scenarios compared to traditional Kalman filter-based methods. Our findings highlight the potential of AI-driven geospatial data fusion in redefining autonomous navigation, paving the way for next-generation intelligent mobility solutions.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_53-Leveraging_AI_and_Hybrid_Intelligence_for_Robust_Geospatial_Data_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Pyramid Network with Dual-Decoder Supervision for Accurate Stroke Lesion Localization in Multi-Modal Brain MRI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161052</link>
        <id>10.14569/IJACSA.2025.0161052</id>
        <doi>10.14569/IJACSA.2025.0161052</doi>
        <lastModDate>2025-10-30T08:26:17.2470000+00:00</lastModDate>
        
        <creator>Satmyrza Mamikov</creator>
        
        <creator>Zhansaya Yakhiya</creator>
        
        <creator>Bauyrzhan Omarov</creator>
        
        <creator>Yernar Mamashov</creator>
        
        <creator>Akbayan Aliyeva</creator>
        
        <creator>Balzhan Tursynbek</creator>
        
        <subject>Stroke lesion localization; multi-modal MRI; feature pyramid network; segmentation; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study presents a novel Feature Pyramid Network with Dual-Decoder Supervision for accurate stroke lesion localization in multi-modal brain MRI. The proposed architecture integrates a Swin Transformer backbone with multi-scale feature aggregation, enabling effective fusion of hierarchical representations from DWI, ADC, and FLAIR sequences. A dual-decoder structure is employed, where the auxiliary decoder provides coarse lesion guidance through pseudo masks, and the primary decoder refines boundaries for precise voxel-level segmentation. Auxiliary supervision improves convergence stability and feature discrimination, while modality dropout enhances robustness to incomplete imaging protocols. Experiments conducted on the ATLAS v2.0 dataset demonstrate superior performance over baseline encoder–decoder models, achieving higher Dice scores, improved boundary accuracy, and strong lesion-wise detection rates. The model consistently localizes lesions of varying size, shape, and intensity, with minimal overfitting, as evidenced by small training–testing performance gaps. Qualitative results confirm the framework’s ability to transform coarse localization into anatomically accurate predictions. The combination of multi-modal integration, dual-decoder specialization, and self-training mechanisms positions the proposed method as a promising candidate for clinical deployment in rapid stroke diagnosis workflows. Future directions include expanding validation to multi-center datasets, incorporating explainable AI techniques, and enabling real-time 3D processing for deployment in acute care environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_52-Feature_Pyramid_Network_with_Dual_Decoder_Supervision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Workflow Allocation in Cloud Computing Using Improved Grey Wolf Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161051</link>
        <id>10.14569/IJACSA.2025.0161051</id>
        <doi>10.14569/IJACSA.2025.0161051</doi>
        <lastModDate>2025-10-30T08:26:17.2130000+00:00</lastModDate>
        
        <creator>Md. Mazhar Nezami</creator>
        
        <creator>Anoop Kumar</creator>
        
        <subject>Cloud computing; energy efficient; workflow; Heterogeneous Earliest Finish Time (HEFT); Grey Wolf Optimization (GWO); makespan; cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Cloud computing has emerged as a dominant platform for hosting complex applications, offering scalable and flexible resources on demand. However, the dynamic and heterogeneous nature of cloud environments poses significant challenges for efficient workflow scheduling, particularly when aiming to minimize total execution time, energy consumption, and operational cost. In this research, we propose a novel hybrid approach that integrates the Heterogeneous Earliest Finish Time (HEFT) algorithm with an Improved Grey Wolf Optimizer (IGWO) enhanced by differential evolution strategies and survival-of-the-fittest mechanisms. These enhancements strengthen exploration and exploitation by adaptively mutating and refining task allocations while eliminating weaker solutions. The use of HEFT-based initialization provides a strong starting population, and the DE-driven IGWO refinement accelerates convergence and avoids premature stagnation. Together, this two-level optimization strategy ensures faster convergence and more energy-efficient workflow scheduling compared to earlier HEFT metaheuristic approaches. To evaluate the effectiveness of the proposed hybrid method, extensive experiments were conducted on randomly generated workflows with varying task and dependency complexities. The performance analysis demonstrates that the hybrid HEFT-IGWO approach consistently outperforms standard HEFT, traditional GWO, and standalone metaheuristic techniques in terms of minimizing makespan, reducing energy consumption, and lowering cloud infrastructure costs. This study highlights the potential of combining heuristic initialization with evolutionary optimization to achieve energy-efficient, cost-effective workflow scheduling in cloud computing environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_51-Energy_Efficient_Workflow_Allocation_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging Machine-Readable Code of Regulations and its Application on Generative AI: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161050</link>
        <id>10.14569/IJACSA.2025.0161050</id>
        <doi>10.14569/IJACSA.2025.0161050</doi>
        <lastModDate>2025-10-30T08:26:17.1830000+00:00</lastModDate>
        
        <creator>Samira Yeasmin</creator>
        
        <creator>Bader Alshemaimri</creator>
        
        <subject>Regulatory compliance; natural language processing; machine learning; machine-readable code; Machine-Readable Regulations; generative AI; large language models; RegTech; conflicting regulations; regulation issuance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Machine-Readable Code (MRC) and Machine-Readable Regulations (MRR) enable the conversion of complex regulations into structured formats such as JSON, XML, and X2RL, allowing machines to parse and interpret regulatory texts efficiently. Currently, organizations face challenges in regulatory compliance due to the complexity of regulations, frequent updates, and difficulty in identifying changes that impact policies and procedures. Existing literature provides guidance to a certain extent on how to anticipate regulatory modifications or ensure timely compliance. This review examines current literature on applying machine learning (ML) and Generative AI (GenAI) to extract, structure, and interpret regulatory content. It surveys techniques for converting regulations into machine-readable formats, predicting regulatory changes, and assessing alignment with real-world modifications issued by regulatory bodies. The findings indicate that using MRC, MRR, and AI enables automated compliance checks, faster detection of violations or errors, standardized compliance processes, real-time monitoring, and automatic report generation. These approaches can significantly enhance regulatory adherence across industries, particularly in sectors such as finance, where compliance is critical.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_50-Bridging_Machine_Readable_Code_of_Regulations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformative Integration of Machine Learning in Software Applications in Light of Current Software Engineering Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161049</link>
        <id>10.14569/IJACSA.2025.0161049</id>
        <doi>10.14569/IJACSA.2025.0161049</doi>
        <lastModDate>2025-10-30T08:26:17.1670000+00:00</lastModDate>
        
        <creator>Fawzi Abdulaziz Albalooshi</creator>
        
        <subject>Machine learning (ML); software engineering; DevOps; MLOps; ML integration challenges; integrated software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study critically reviews the transformative integration of machine learning (ML) into software engineering, detailing its evolution from traditional DevOps to MLOps, which has significantly enhanced software development by enabling adaptive and intelligent systems, improving processes, and boosting software quality. Despite these benefits, the integration introduces unique challenges across technical (e.g., model deployment, data quality, scalability), organizational (e.g., collaboration, tool management), and cultural (e.g., resistance to change, skill gaps) domains throughout the software development lifecycle. The review highlights emerging solutions, including robust MLOps practices, microservices architecture, and frameworks like CRISP-DM, DataOps, and Agile ML, which aim to streamline the ML lifecycle and ensure reliability and scalability. Furthermore, it emphasizes the crucial role of security and governance frameworks in protecting against adversarial attacks, maintaining data privacy, and ensuring accountability and compliance, which are essential for building trust and ethical application of ML systems. Ultimately, successful ML integration requires a holistic approach that addresses these multifaceted challenges to optimize ML&#39;s impact and drive technological progress and business value.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_49-Transformative_Integration_of_Machine_Learning_in_Software_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Hybrid Deep Learning with Recursive Feature Elimination for Physical Violence Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161048</link>
        <id>10.14569/IJACSA.2025.0161048</id>
        <doi>10.14569/IJACSA.2025.0161048</doi>
        <lastModDate>2025-10-30T08:26:17.1530000+00:00</lastModDate>
        
        <creator>Sukmawati Anggraeni Putri</creator>
        
        <creator>Duwi Cahya Putri Buani</creator>
        
        <creator>Achmad Rifa’i</creator>
        
        <creator>Imam Nawawi</creator>
        
        <subject>Violence detection; deep learning; VGG19; BiLSTM; RFE; Educational AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Physical violence among students remains a persistent issue that often goes undetected, especially in school environments without intelligent real-time monitoring systems. Such incidents pose serious risks to student safety and hinder the creation of a secure learning atmosphere. This study aims to develop an adaptive visual-based system for detecting physical violence in educational settings using a deep learning approach. A hybrid architecture was designed by integrating VGG19 for spatial feature extraction and Bidirectional Long Short-Term Memory (BiLSTM) for temporal sequence analysis. To enhance model interpretability and reduce redundancy, Recursive Feature Elimination (RFE) was employed to eliminate irrelevant features and improve overall learning efficiency. The proposed system effectively captures both spatial and temporal cues from classroom surveillance videos, enabling more accurate classification of violent and non-violent behaviors. The model was trained and tested on benchmark datasets containing diverse video samples and achieved an accuracy of 92.4%, outperforming standalone CNN and LSTM models. The integration of RFE contributed to a more compact and computationally efficient framework. This study demonstrates the potential of hybrid deep learning and feature optimization for real-time violence detection, contributing to the advancement of visual intelligence and Educational AI for safer, data-driven learning environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_48-Adaptive_Hybrid_Deep_Learning_with_Recursive_Feature_Elimination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Designing a Blockchain-Based Model for E-Book Publishing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161047</link>
        <id>10.14569/IJACSA.2025.0161047</id>
        <doi>10.14569/IJACSA.2025.0161047</doi>
        <lastModDate>2025-10-30T08:26:17.1200000+00:00</lastModDate>
        
        <creator>Maznun Arifa Mohammadan Makhtar</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <creator>Suleymenova Laura Askarbekkyzy</creator>
        
        <subject>e-book publishing; Ethereum; blockchain technology; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This paper examines the application of blockchain technology in e-book publishing by analyzing previous research and identifying current limitations. The study investigates how smart contracts and cryptographic algorithms can facilitate agreements between publishers and authors. While blockchain has been widely adopted in digital publishing domains, such as image, video, music, and scientific journals, research on its application to e-books remains limited. Existing solutions typically address individual challenges such as transaction transparency, authenticity, or copyright protection, but rarely integrate them into a single framework. To provide a systematic synthesis of prior works, this paper develops a taxonomy of blockchain-based e-book publishing models across six dimensions: platform, storage, smart contract usage, cryptographic algorithm, tokenization, and actors. This paper reviews seven (7) blockchain-based models in e-book publishing and identifies their limitations. Based on these insights, a conceptual blockchain-based smart contract model for e-book publishing was proposed using the Ethereum platform, incorporating InterPlanetary File System (IPFS) storage and cryptographic algorithms. The proposed model has the potential to significantly enhance security and rights protection for authors and publishers, thereby fostering a more secure and equitable e-book publishing landscape.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_47-Towards_Designing_a_Blockchain_Based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking Deep Learning Models for Visual Classification and Segmentation of Horticultural Commodities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161046</link>
        <id>10.14569/IJACSA.2025.0161046</id>
        <doi>10.14569/IJACSA.2025.0161046</doi>
        <lastModDate>2025-10-30T08:26:17.0900000+00:00</lastModDate>
        
        <creator>Fuzy Yustika Manik</creator>
        
        <creator>Syahril Efendi</creator>
        
        <creator>Jos Timanta Tarigan</creator>
        
        <creator>Maya Silvi Lydia</creator>
        
        <subject>Fruit quality assessment; classification; segmentation; EfficientNet-B0; DeepLabV3+; AISAM-CSNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Recent advances in computer vision have enabled new approaches for automated quality assessment of tropical fruits, where accurate classification and segmentation are essential for postharvest inspection. A major challenge lies in identifying deep learning architectures that achieve high accuracy while remaining computationally efficient for potential edge-based deployment. This study benchmarks three Convolutional Neural Network (CNN) models for classification (VGG16, ResNet50, and EfficientNet-B0) and two encoder–decoder models for segmentation (U-Net and DeepLabV3+) using annotated pineapple and strawberry image datasets. A 5-fold cross-validation strategy was applied to ensure statistical robustness, with evaluation metrics including accuracy, precision, recall, F1-score, Intersection over Union (IoU), and Dice coefficient. Statistical significance was verified using the Friedman and Wilcoxon signed-rank tests (α = 0.05 and 0.01). EfficientNet-B0 achieved the best classification results with average accuracies of 91.4% (strawberry) and 90.7% (pineapple), significantly outperforming ResNet50 and VGG16 (p &lt; 0.01). For segmentation, DeepLabV3+ obtained the highest performance with mean IoU values of 91.7% and 90.8% and Dice coefficients above 92%, indicating precise boundary delineation of ripe and defective regions. Computational efficiency analysis further showed that EfficientNet-B0 had the lowest inference time (0.026 s) and smallest model size (20.4 MB), making it ideal for real-time or embedded applications. Visual analysis confirmed that DeepLabV3+ maintained robustness at fruit boundaries, though minor misclassifications were observed. This benchmarking highlights the combination of EfficientNet-B0 and DeepLabV3+ as a reliable baseline for deep learning-based fruit quality assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_46-Benchmarking_Deep_Learning_Models_for_Visual_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid AI Framework for DDoS Detection and Mitigation in SDN Environments Using CNN, GAN, and Semi-Supervised Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161045</link>
        <id>10.14569/IJACSA.2025.0161045</id>
        <doi>10.14569/IJACSA.2025.0161045</doi>
        <lastModDate>2025-10-30T08:26:17.0570000+00:00</lastModDate>
        
        <creator>Abdelhakim HADJI</creator>
        
        <creator>Brahim RAOUYANE</creator>
        
        <subject>SDN; CNN; GAN; DDOS; OpenDaylight; Mininet; semi-supervised learning; hybrid AI framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The fast technological evolution seen in recent years has enhanced the performance and scalability of cloud computing infrastructure and Software-Defined Networking (SDN) architectures. SDN provides programmability, centralized orchestration, and dynamic resource provisioning, separating the control and data planes to offer a promising architectural paradigm for cloud computing environments. However, this openness and flexibility expose SDN-based networks to new security concerns, such as large-scale Distributed Denial of Service (DDoS) attacks. This paper introduces a hybrid artificial intelligence (AI) framework for detecting and mitigating DDoS attacks in SDN environments. The framework leverages three complementary approaches: Convolutional Neural Networks (CNN) to capture temporal traffic patterns, Generative Adversarial Networks (GAN) to generate synthetic traffic for dataset augmentation and to enhance anomaly detection, and semi-supervised learning techniques to exploit large amounts of unlabeled traffic data. The proposed system is deployed on a testbed combining OpenDaylight as the SDN controller and Mininet for network emulation, while the AI models are trained and run in an Anaconda environment. Network traffic flows are collected, processed into statistical features (i.e., packet rates, entropy values, protocol distribution ratios), and analyzed through the hybrid AI pipeline. Mitigation actions are configured through the ODL RESTCONF interface, converting detections into OpenFlow rules that drop or rate-limit malicious packets. Experimental evaluation demonstrates that the proposed approach achieves high detection accuracy and robustness to unseen attack patterns, demonstrating the value of the hybrid CNN, GAN, and semi-supervised learning approach.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_45-A_Hybrid_AI_Framework_for_DDoS_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Dermatological Diagnostics: An Enhanced Approach for Skin Cancer Classification Using pix2pix GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161044</link>
        <id>10.14569/IJACSA.2025.0161044</id>
        <doi>10.14569/IJACSA.2025.0161044</doi>
        <lastModDate>2025-10-30T08:26:17.0270000+00:00</lastModDate>
        
        <creator>Adnan Afroz</creator>
        
        <creator>Shaheena Noor</creator>
        
        <creator>Shakil Ahmed Bashir</creator>
        
        <creator>Umair Jilani</creator>
        
        <subject>Deep learning; skin cancer; generative adversarial network; pix2pixHD; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Skin cancer is among the predominant forms of the disease and includes malignant squamous cell carcinoma, basal cell carcinoma, and melanoma, which is characterized by aberrant melanocyte development. Frequent screenings and examinations enhance the prognosis for people with skin cancer. Sadly, many patients with skin cancer are not diagnosed until the condition has progressed past the point at which treatment is effective. Deep learning techniques in computer vision have made impressive strides, but issues like class imbalance and a lack of data still hinder the autonomous identification of skin conditions. A solution to these problems is the implementation of a Generative Adversarial Network (GAN), which is capable of synthesizing realistic data. In this paper, a deep learning GAN model for image synthesis utilizing pix2pixHD integrated with a Convolutional Neural Network (CNN) classifier is used to perform skin cancer classification, categorizing three forms of skin cancer as benign or malignant. The proposed pix2pixHD GAN is a novel method for utilizing pertinent skin lesion information to generate high-quality synthesized dermoscopic images and improve skin lesion classification accuracy. Realistic images were created using a U-Net-based generator and a PatchGAN discriminator, with a custom CNN architecture to classify the three forms of cancer. Remarkable accuracies of 87.65% (MEL), 91% (BCC), and 89.85% (SCC), along with other performance parameters, indicate that the GAN pix2pixHD classifier model achieves promising classification results. These findings demonstrate the classifier&#39;s ability to produce and correctly identify high-quality skin lesion images, indicating its potential as a deep learning-based medical image analysis tool.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_44-Enhancing_Dermatological_Diagnostics_An_Enhanced_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correcting Blue-Shift in Single-Image Dehazing via Haze-Compensated Von Kries Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161043</link>
        <id>10.14569/IJACSA.2025.0161043</id>
        <doi>10.14569/IJACSA.2025.0161043</doi>
        <lastModDate>2025-10-30T08:26:16.9970000+00:00</lastModDate>
        
        <creator>Asniyani Nur Haidar Abdullah</creator>
        
        <creator>Mohd Shafry Mohd Rahim</creator>
        
        <creator>Sim Hiew Moi</creator>
        
        <creator>Azah Kamilah Draman</creator>
        
        <creator>Ahmad Hoirul Basori</creator>
        
        <creator>Novanto Yudistira</creator>
        
        <subject>Image dehazing; blue-shift correction; color compensation; Von Kries adaptation; preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Haze severely degrades image quality by reducing contrast, obscuring details, and introducing a blue-shift color cast caused by atmospheric scattering. Traditional dehazing methods, including prior-based approaches (e.g., DCP, CAP, LPMinVP) and preprocessing techniques (e.g., ICAP WB, Dynamic Gamma), improve visibility but fail to correct haze-induced color imbalance, resulting in unstable RGB distributions and unnatural tone reproduction. This study proposes the Haze-Compensated Color Von Kries (HCCVK) method, a lightweight and training-free preprocessing strategy that performs color compensation before transmission estimation in single-image dehazing. HCCVK integrates a novel red-channel compensation mechanism with Von Kries chromatic adaptation to mitigate wavelength-dependent haze suppression and stabilize chromatic consistency under varying illumination. Unlike learning-based color correction approaches, HCCVK does not require training data, is computationally efficient, and maintains algorithmic interpretability, making it suitable for practical deployment. The method was evaluated on six benchmark datasets: CHIC, Dense-Haze, I-Haze, O-Haze, SOT, and NH-Haze, covering indoor, outdoor, dense, and non-homogeneous haze scenarios. Experimental results based on the RGB color balance metric (σRGB) show that HCCVK reduces color deviation by approximately 75–92% on CHIC, 80–90% on Dense-Haze, and 82–90% on NH-Haze compared to the widely used DCP, and also outperforms CAP, ICAP WB, Dynamic Gamma, and LPMinVP by producing more compact and stable RGB distributions. These findings demonstrate that HCCVK effectively corrects blue-shift imbalance, preserves luminance consistency, and enhances the color stability of dehazing pipelines.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_43-Correcting_Blue_Shift_in_Single_Image_Dehazing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Review of Confidence and Other Evaluation Metrics in Predictive Modeling for Procurement Fraud Coalition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161042</link>
        <id>10.14569/IJACSA.2025.0161042</id>
        <doi>10.14569/IJACSA.2025.0161042</doi>
        <lastModDate>2025-10-30T08:26:16.9630000+00:00</lastModDate>
        
        <creator>Saifuddin Mohd</creator>
        
        <creator>Mohamad Taha Ijab</creator>
        
        <subject>Procurement fraud; predictive modeling; confidence; evaluation metrics; association rule mining; coalition detection; public sector analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Procurement fraud, particularly when bidders act together through collusion or coalition schemes, remains a major threat to fair competition in public procurement. Predictive modeling has emerged as a key analytical tool for detecting such behaviors, yet choosing appropriate evaluation metrics continues to be a challenge, especially with imbalanced or correlated data. This study applies a structured narrative review supported by a comparative analysis to examine commonly used evaluation metrics—Accuracy, Precision, Recall, F1-score, and AUC-ROC—in relation to the rule-based Confidence metric derived from association rule mining. The findings reveal that while traditional classification metrics are effective for general predictive tasks, they often fail to capture the relational and co-occurrence patterns that characterize coalition fraud. In contrast, Confidence demonstrates higher interpretability and contextual relevance for detecting collusive behaviors among suppliers. The study highlights the potential of hybrid evaluation frameworks that combine classification and rule-based measures to improve fraud detection accuracy and explainability. This approach contributes to advancing predictive modeling, procurement analytics, and coalition detection by emphasizing metrics that balance performance, interpretability, and real-world applicability.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_42-Comparative_Review_of_Confidence_and_Other_Evaluation_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Uneven But Accelerating: AI Adoption in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161041</link>
        <id>10.14569/IJACSA.2025.0161041</id>
        <doi>10.14569/IJACSA.2025.0161041</doi>
        <lastModDate>2025-10-30T08:26:16.9330000+00:00</lastModDate>
        
        <creator>Mahendra Adhi Nugroho</creator>
        
        <creator>Umar Yeni Suyanto</creator>
        
        <creator>Didik Hariyanto</creator>
        
        <creator>Septiningdyah Arianisari</creator>
        
        <subject>Artificial intelligence adoption; higher education; sustainable education; developing country</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Artificial Intelligence (AI) is increasingly recognized as a transformative force in higher education, yet adoption remains patchy and often confined to partial implementations. Using the PRISMA protocol, this study systematically reviews 74 Scopus-indexed articles published between 2015 and 2025. Publication activity rose sharply after 2020, led by contributions from China, the United States, and Saudi Arabia. Across the corpus, Perceived Usefulness and the Technology Acceptance Model (TAM) are the most frequently applied constructs, while ethical and policy dimensions remain underexamined. Thematic analysis delineates five clusters: adaptive learning and personalization; ethics and trust; digital literacy and readiness; AI in assessment and evaluation; and organizational transformation. Despite growing attention, regional gaps persist—especially in developing countries, where constrained infrastructure, funding, and digital literacy impede adoption. To address these challenges, the study proposes a multi-level conceptual framework integrating TAM, UTAUT, TPACK, and TOE to connect individual, institutional, and external factors for sustainable AI-driven education. Overall, the review underscores that AI adoption is not merely an efficiency tool but a strategic lever to advance the Sustainable Development Goals (SDGs), particularly by fostering inclusive, equitable, and innovative higher education systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_41-Uneven_But_Accelerating_AI_Adoption_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Survey of Visual SLAM Technology: Methods, Challenges, and Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161040</link>
        <id>10.14569/IJACSA.2025.0161040</id>
        <doi>10.14569/IJACSA.2025.0161040</doi>
        <lastModDate>2025-10-30T08:26:16.9030000+00:00</lastModDate>
        
        <creator>Aidos Ibrayev</creator>
        
        <creator>Amanzhol Bektemessov</creator>
        
        <subject>Visual SLAM; monocular SLAM; Stereo SLAM; RGB-D SLAM; 3D mapping; pose estimation; loop closure; semantic SLAM; deep learning; sensor fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Visual Simultaneous Localization and Mapping (Visual SLAM) has become a cornerstone of autonomous navigation and spatial understanding in robotics, augmented reality, and computer vision. This review presents a comprehensive examination of algorithmic progress in Visual SLAM, focusing on the three principal paradigms: monocular, stereo, and RGB-D SLAM. Monocular SLAM, known for its minimal hardware requirements, has evolved from feature-based methods to deep learning-enhanced systems, addressing challenges like scale ambiguity and drift. Stereo SLAM leverages depth through triangulation, improving scale accuracy and robustness, particularly in dynamic and low-texture environments. RGB-D SLAM, utilizing depth-sensing technology, has enabled dense and semantically enriched mapping, finding significant application in indoor and real-time scenarios. Through a chronological and technical exploration of representative methods, including RatSLAM, ORB-SLAM, DSO, ProSLAM, ElasticFusion, DynaSLAM, and recent hybrid and learning-based frameworks, this review identifies major milestones and architectural innovations across paradigms. A cross-paradigm analysis highlights the trade-offs in accuracy, computational efficiency, and adaptability, while also discussing emerging trends such as semantic integration, multimodal fusion, and neural implicit representations. Furthermore, the paper outlines future directions that include lifelong learning, real-time deployment on edge devices, dynamic environment adaptation, and the convergence of geometry and learning-based pipelines. Supported by a detailed taxonomy and historical evolution illustrated in visual summaries, this review serves as a foundational reference for researchers and developers aiming to understand and contribute to the advancement of Visual SLAM technologies in both academic and real-world contexts.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_40-A_Comprehensive_Survey_of_Visual_SLAM_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Ransomware Detection Models for Cybersecurity Driven IIoT in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161039</link>
        <id>10.14569/IJACSA.2025.0161039</id>
        <doi>10.14569/IJACSA.2025.0161039</doi>
        <lastModDate>2025-10-30T08:26:16.8700000+00:00</lastModDate>
        
        <creator>Abrar Ali</creator>
        
        <creator>Norah Hamed</creator>
        
        <creator>Monir Abdullah</creator>
        
        <subject>Ransomware; Industrial Internet of Things; cloud computing; machine learning; deep learning; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Ransomware is currently one of the most severe cybersecurity threats, attacking not only legacy systems but cloud and Industrial Internet of Things (IIoT) systems as well. Security and privacy threats are heightened as these systems integrate more closely and are thus exposed to sophisticated and long-lasting attacks. This paper provides a comprehensive review of ransomware prevention and detection measures in cloud and IIoT environments, with an emphasis on the usage of Machine Learning (ML) and Deep Learning (DL) models. Research studies published across IEEE, Elsevier, and Springer databases between 2020 and 2024 were analyzed. Our analysis reveals that Ensemble methods and Random Forest (RF) are the two ML methods most in use, each at 18.00%, followed by Neural Networks (NNs) at 12.00%, with older models such as Support Vector Machines (SVMs) at 10.00%, Na&#239;ve Bayes (NB) at 7.00%, and Decision Trees (DTs) still in use at 9.00%. Additionally, DL approaches (including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Recurrent Neural Networks (RNNs)) account for 20.00% of the techniques deployed, highlighting their growing prominence in IIoT security and ransomware research. Indicative of their integration into hybrid ML pipelines, Light Gradient Boosting Machine (LightGBM) and other ensemble boosting frameworks comprise 16.00%. Last but not least, other novel and specialized models, including Extreme Gradient Boosting (XGBoost), Self-Organizing Maps (SOM), Gain Ratio, and Digital DNA, account for 8.00% of the overall utilization observed across the studies. Among DL methods, RNNs are at the forefront with 40%, followed by CNNs at 30%, CNN–RNN hybrid models at 20%, and Autoencoders at 10%. Integration of cryptographic schemes, federated learning, blockchain-based audit mechanisms, and adaptive runtime mechanisms has further boosted anomaly detection, with detection rates of over 99% for polymorphic and zero-day ransomware.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_39-A_Review_of_Ransomware_Detection_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Platform for Behavior Modification and Office Syndrome Risk Reduction Using MediaPipe and Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161038</link>
        <id>10.14569/IJACSA.2025.0161038</id>
        <doi>10.14569/IJACSA.2025.0161038</doi>
        <lastModDate>2025-10-30T08:26:16.8230000+00:00</lastModDate>
        
        <creator>Sumran Chaikhamwang</creator>
        
        <creator>Wijitra Montri</creator>
        
        <creator>Chalida Janthajirakowit</creator>
        
        <creator>Srinuan fongmanee</creator>
        
        <subject>Office syndrome; computer vision; MediaPipe; behavior modification; ergonomics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Office Syndrome, a musculoskeletal disorder prevalent among office workers, poses significant risks to health, productivity, and quality of life. Traditional preventive approaches, such as ergonomic guidelines and reminder-based systems, often fail due to limited user adherence and practicality. To address this gap, this study developed an intelligent platform that integrates MediaPipe and computer vision to monitor sitting posture, eye-to-screen distance, and sitting duration in real time. The system provides automated notifications and stretching recommendations, combining detection, feedback, and behavioral intervention into a sensor-free and cost-effective solution. The platform was evaluated in terms of technical performance and user behavioral impact. Results demonstrated high system accuracy, with the eye-distance detection module achieving 95.2% accuracy, followed by long sitting alerts (92.5%) and proximity alerts (90.1%). User evaluations confirmed that real-time notifications increased awareness and encouraged healthier working behaviors. These findings highlight the potential of computer vision-based approaches for ergonomic health promotion. The proposed platform contributes not only to preventive strategies for Office Syndrome but also to advancing user-centered, technology-driven health solutions adaptable to both office and remote work environments. This study not only demonstrates technical performance but also introduces a novel integration of MediaPipe-based posture and facial detection with behavioral modification features, which previous ergonomic systems have not addressed. The proposed framework contributes a new perspective for integrating real-time feedback with computer vision in promoting sustainable ergonomic behavior.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_38-An_Intelligent_Platform_for_Behavior_Modification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EYE-GDM: Clinically Validated, Explainable Ensemble Learning for Gestational Diabetes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161037</link>
        <id>10.14569/IJACSA.2025.0161037</id>
        <doi>10.14569/IJACSA.2025.0161037</doi>
        <lastModDate>2025-10-30T08:26:16.7930000+00:00</lastModDate>
        
        <creator>Shatha Alghamdi</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <creator>Fahad Alqurashi</creator>
        
        <creator>Turki Alghamdi</creator>
        
        <creator>Sarah Ghazali</creator>
        
        <creator>Asmaa AlAhmadi</creator>
        
        <subject>Explainable Artificial Intelligence (XAI); interpretable machine learning (IML); Gestational diabetes mellitus (GDM); maternal health; healthcare AI; GDM risk prediction; transparency; trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>As artificial intelligence (AI) advances in healthcare, its use in maternal health shows promise but faces challenges of trust due to the black-box nature of many models. Gestational diabetes mellitus (GDM), a transient yet high-risk condition, demands accurate and interpretable prediction tools. However, existing GDM prediction studies often rely on opaque models or post-hoc explanation techniques applied after training, which limits transparency and reduces their clinical applicability. This highlights an urgent need for models that unify high predictive performance with interpretability by design. This study introduces EYE-GDM, a case-specific application of our Enhanced Interpretability Ensemble (EYE) framework, designed to predict GDM risk with clinically meaningful explanations. The pipeline evaluates multiple algorithms and selects Decision Tree (DT), k-Nearest Neighbors (k-NN), and Gradient Boosting (GB) as the best-performing base learners. These are integrated with SHAP and a logistic regression (LR) meta-model to construct EYE-GDM, embedding interpretability by weighting learner outputs with LR coefficients. This yields global (population-level) and local (patient-level) explanations consistent with medical knowledge. Tested on a dataset of 3,525 pregnancies, EYE-GDM achieved strong performance (accuracy = 0.9789, AUC-ROC = 0.9981) and provided insights into risk patterns, thresholds, and feature interactions relevant to GDM. By embedding explainability within the ensemble construction, EYE-GDM achieves transparent and clinically aligned reasoning without compromising predictive performance. Thus, EYE-GDM demonstrates how explainable AI (XAI) can translate from technical innovation to practical value in maternal care, supporting earlier risk identification and more informed clinical decisions.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_37-EYE_GDM_Clinically_Validated_Explainable_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Deep Learning for Tuberculosis Detection Using Cough Audio and Clinical Data with Health Acoustic Representations (HeAR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161036</link>
        <id>10.14569/IJACSA.2025.0161036</id>
        <doi>10.14569/IJACSA.2025.0161036</doi>
        <lastModDate>2025-10-30T08:26:16.7600000+00:00</lastModDate>
        
        <creator>Rinaldi Anwar Buyung</creator>
        
        <creator>Widi Nugroho</creator>
        
        <subject>Tuberculosis; cough detection; Health Acoustic Representation; multimodal; vocal biomarker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Tuberculosis (TB) remains a significant global health challenge, necessitating rapid and accessible screening methods. This study proposes a multimodal deep learning model for non-invasive TB detection by fusing acoustic features from cough sounds with clinical metadata. We utilize the pre-trained Health Acoustic Representations (HeAR) model as a powerful backbone to extract features from mel-spectrograms of cough audio. These acoustic features are combined with clinical data, including sex, age, and key symptoms, through a late-fusion architecture. The model was trained and evaluated on a balanced dataset of 16,000 samples derived from the CODA TB DREAM Challenge dataset. Our proposed multimodal approach achieved a high overall accuracy of 90% on the unseen test set, with balanced precision, recall, specificity, and F1-scores of 0.90 for both TB-positive and non-TB classes. These results demonstrate the effectiveness of using cough sound as a non-invasive vocal biomarker, amplified by combining advanced acoustic representations with clinical context. This highlights the potential of our method as a robust, low-cost, and scalable tool for early TB screening.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_36-Multimodal_Deep_Learning_for_Tuberculosis_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Satisfaction in AI-Driven Islamic Fintech: An Extended Technology Acceptance Model with Task–Technology Fit and Sharia Compliance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161035</link>
        <id>10.14569/IJACSA.2025.0161035</id>
        <doi>10.14569/IJACSA.2025.0161035</doi>
        <lastModDate>2025-10-30T08:26:16.7300000+00:00</lastModDate>
        
        <creator>Mardiana Andarwati</creator>
        
        <creator>Sari Yuniarti</creator>
        
        <creator>Andriyan Rizki Jatmiko</creator>
        
        <creator>Firnanda Al-Islama Achyunda Putra</creator>
        
        <creator>Galandaru Swalaganata</creator>
        
        <creator>Ahmad Taufiq Andriono</creator>
        
        <subject>Task–Technology fit; sharia compliance; technology acceptance model; user satisfaction; AI; Islamic fintech; MSMEs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The rapid development of digital financial services has transformed financial intermediation through improved access, transparency, and efficiency. In the Indonesian context, Islamic financial technology (fintech) offers an alternative aligned with Sharia principles, particularly through e-ijarah contracts that provide MSMEs with productive asset access without interest-bearing debt. This study aims to empirically evaluate the determinants of user satisfaction in adopting AI-based e-ijarah applications by extending the Technology Acceptance Model (TAM) with Task–Technology Fit (TTF), Sharia Compliance (SC), Trust in AI, and Perceived Risk (PR). A survey of 75 food and beverage MSMEs in East Java was analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings indicate that Perceived Ease of Use (PEOU) strongly influences Perceived Usefulness (PU), which in turn significantly affects Behavioral Intention (BI), Actual Use (AU), and User Satisfaction (EUCS). Trust in AI and TTF also play significant roles in enhancing PU and BI. Interestingly, SC shows a significant but negative effect on PU, highlighting a contextual gap between digital automation and perceptions of religious compliance. PR negatively impacts both BI and AU, while Age does not moderate usage behavior. The study contributes conceptually by integrating TAM, TTF, and Sharia compliance in a single framework, and practically by offering insights for fintech developers and regulators to improve system usability, trust, and compliance clarity.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_35-User_Satisfaction_in_AI_Driven_Islamic_Fintech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Stock Market Performance Based on Sentiment Analysis of Online Comments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161034</link>
        <id>10.14569/IJACSA.2025.0161034</id>
        <doi>10.14569/IJACSA.2025.0161034</doi>
        <lastModDate>2025-10-30T08:26:16.7130000+00:00</lastModDate>
        
        <creator>Wenhao Suo</creator>
        
        <creator>Tongjai Yampaka</creator>
        
        <subject>Investor sentiment; non-trading hour sentiment; social media comments; dual-channel LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In China&#39;s retail-focused stock market, the influence of social media sentiment during off-hours on the next day&#39;s opening price has received limited attention. This paper takes Kweichow Moutai—a leading Chinese company with substantial market capitalization—as the research sample. It gathers investor commentary data from financial platforms, and uses natural language processing tools (SnowNLP) to develop a multidimensional sentiment index (including average sentiment score, positive ratio, and sentiment volatility). By integrating this index with stock trading data and macroeconomic indicators, this study designs a dual-channel LSTM model: one channel for market technical features (e.g., price, volume) and the other for sentiment features, aiming to analyze the impact of off-hours sentiment on opening prices. Empirical results indicate that overnight sentiment has significant predictive power for the next day&#39;s opening price; meanwhile, sentiment transmission is asymmetric, making predictions more challenging in declining markets. Additionally, high-frequency sentiment data significantly outperforms low-frequency data in market prediction accuracy. This research expands the understanding of how investor sentiment influences the market over time, providing practical insights for market participants to develop effective strategies and manage risks.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_34-Predicting_Stock_Market_Performance_Based_on_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Analyzing Malware Behavior in Virtual Networks Using EVE-NG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161033</link>
        <id>10.14569/IJACSA.2025.0161033</id>
        <doi>10.14569/IJACSA.2025.0161033</doi>
        <lastModDate>2025-10-30T08:26:16.6830000+00:00</lastModDate>
        
        <creator>Maria-Madalina Andronache</creator>
        
        <creator>Alexandru Vulpe</creator>
        
        <creator>Corneliu Burileanu</creator>
        
        <subject>EVE-NG; network attacks; network defense; SIEM; network architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Malicious attacks have become increasingly common across organizations and systems. The continued evolution of such software aims to extract information from diverse systems. Therefore, the objective of this study is to introduce another approach to analyzing network attacks within a virtual infrastructure, using multi-vendor network emulation software (Emulated Virtual Environment - Next Generation, EVE-NG). Through emulated resources, the aim is to implement a complex network that includes a Security Information and Event Management (SIEM) system capable of detecting attacks both from the network (carried out by malicious attackers) and through malicious files (from public resources) that are accidentally or intentionally downloaded by certain users. Within this environment, various scenarios can be implemented to simulate a real production environment, in order to test network vulnerabilities and to improve methods for learning network attack and defense techniques. In the experiments performed, the SIEM system detected most of the simulated attacks but failed to distinguish between the displayed alarms, so the alerts could not indicate the type of attack. Thus, the potential of EVE-NG for simulating and analyzing the behavior of malware is demonstrated.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_33-Modeling_and_Analyzing_Malware_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Quality Assessment Study of Deep Learning Techniques for Medical Image Diagnosis and Their Applications: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161032</link>
        <id>10.14569/IJACSA.2025.0161032</id>
        <doi>10.14569/IJACSA.2025.0161032</doi>
        <lastModDate>2025-10-30T08:26:16.6530000+00:00</lastModDate>
        
        <creator>Amine Berquedich</creator>
        
        <creator>Ahmed Zellou</creator>
        
        <subject>Deep learning; medical image segmentation; systematic review; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Medical imaging is one of the cornerstones of modern medicine, supporting treatment planning, monitoring patient progress, and aiding clinicians in diagnosing diseases such as tumors, cancer, and many others. With the rise of neural networks, especially deep learning (DL) approaches, significant advancements have been made in this domain. This systematic literature review investigates and identifies the latest implementations of DL algorithms for medical image processing by examining 294 peer-reviewed articles. We also explored DL-based image segmentation methods, highlighting their advantages and limitations and the commonly used datasets in the field. Finally, we analyzed key challenges and outlined future research directions related to image segmentation. Our review reveals that convolutional neural networks, particularly U-Net and its variants, dominate the field, while deep neural networks show promising results enabling end-to-end learning, providing greater flexibility, and facilitating transfer learning. This study was conducted by defining a search process executed according to a set of inclusion and exclusion criteria across major databases including IEEE Xplore, Scopus, and DBLP.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_32-A_Quality_Assessment_Study_of_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Trust and Legacy: Mapping the Intersection of Inheritance Systems and Emerging Technologies (2010–2025)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161031</link>
        <id>10.14569/IJACSA.2025.0161031</id>
        <doi>10.14569/IJACSA.2025.0161031</doi>
        <lastModDate>2025-10-30T08:26:16.6200000+00:00</lastModDate>
        
        <creator>Nor Aimuni Md Rashid</creator>
        
        <creator>Faiqah Hafidzah Halim</creator>
        
        <creator>Hazrati Zaini</creator>
        
        <creator>Norshahidatul Hasana Ishak</creator>
        
        <creator>Nur Farahin Mohd Johari</creator>
        
        <creator>Alya Geogiana Buja</creator>
        
        <subject>Inheritance systems; digitalization; secure data; trust; technologies; digital legacy; blockchain; digital assets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Inheritance systems worldwide are undergoing a paradigm shift evolving from manually administered processes to technologically enabled platforms for managing both tangible and digital assets. Yet, the scholarly understanding of how technologies ranging from information systems to blockchain have transformed inheritance management remains underexplored and fragmented. This study aims to trace the evolution of inheritance systems from 2010 to 2025, with a particular focus on the digitalization of inheritance management, emerging technologies and governance models. Using a bibliometric approach, 229 documents were initially retrieved from the Scopus database. After removing irrelevant records, a refined dataset of 81 publications was analyzed using Excel and VOSviewer. The analysis included performance metrics (e.g., publication growth, citation trends, and country output) and science mapping (keyword co-occurrence and clustering). Findings reveal a significant rise in publications post-2020, coinciding with increased attention to digital assets, data privacy laws (e.g., GDPR) and emerging technologies such as blockchain. The most active contributors were from the United States, China and the United Kingdom. Highly cited articles discuss themes such as digital legacy, legal frameworks, asset authentication and ethical considerations. Thematic clustering revealed four research domains: digital legacy and estate transition, digital transformation and trust, digital asset structuring and fraud prevention in social media inheritance. This study contributes a comprehensive overview of the field’s conceptual landscape by highlighting the uneven yet accelerating integration of digital tools in inheritance systems. It also underscores the urgent need for inclusive, interdisciplinary frameworks that accommodate diverse legal, cultural and technological contexts for future inheritance governance.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_31-Digital_Trust_and_Legacy_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ambulance Detection and Priority Passage at Urban Intersections Using Transfer Learning and Explainable AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161030</link>
        <id>10.14569/IJACSA.2025.0161030</id>
        <doi>10.14569/IJACSA.2025.0161030</doi>
        <lastModDate>2025-10-30T08:26:16.6030000+00:00</lastModDate>
        
        <creator>Murtaza Hanif</creator>
        
        <creator>Taj Muhammad</creator>
        
        <creator>Atif Ikram</creator>
        
        <creator>Shahid Yousaf</creator>
        
        <creator>Marwan Abu-Zanona</creator>
        
        <creator>Asef Mohammad Ali Al Khateeb</creator>
        
        <creator>Bassam Elzaghmouri</creator>
        
        <creator>Saad Mamoun Abdel Rahman Ahmed</creator>
        
        <creator>Lamia Hassan Rahamatalla</creator>
        
        <subject>Ambulance detection; YOLOV8; LIME; transfer learning; NorFair; urban area; traffic control; smart traffic management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Static traffic signal timings often cause severe delays for emergency vehicles, including ambulances, at junctions in urban areas, putting lives at risk. To address this, the present study proposes an intelligent traffic control system that dynamically adjusts traffic signals based on real-time monitoring. The system employs a YOLOv8-based deep learning model fine-tuned through transfer learning for ambulance detection from live video. At an Intersection over Union (IoU) threshold of 0.5, the model achieves a mean Average Precision (mAP) of 0.860. To ensure continuous tracking, NorFair tracking is implemented to maintain consistent detection across frames. Additionally, to improve explainability, the framework incorporates Local Interpretable Model-Agnostic Explanations (LIME), providing visual insights into the model&#39;s decision-making process. Once an ambulance is detected, the system instantly triggers a green-light activation for the ambulance&#39;s lane, enabling quick emergency response. Unlike conventional systems with fixed signal timing, this approach enables smart and adaptive traffic management in urban environments. However, despite its encouraging results, the system&#39;s shortcomings in low-visibility situations, such as at night or in fog, highlight the need to incorporate images taken at night and in foggy weather into the dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_30-Ambulance_Detection_and_Priority_Passage_at_Urban_Intersections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Expression Recognition Under Partial Occlusion Using Part-Based Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161029</link>
        <id>10.14569/IJACSA.2025.0161029</id>
        <doi>10.14569/IJACSA.2025.0161029</doi>
        <lastModDate>2025-10-30T08:26:16.5730000+00:00</lastModDate>
        
        <creator>Evangelions Felix Yehdeya</creator>
        
        <creator>Wahyono</creator>
        
        <subject>Facial expression recognition; partial occlusion; partial part model; support vector machine; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Facial expression recognition (FER) under partial occlusion remains a challenging task, especially when key regions of the face, such as the mouth and nose, are covered by medical masks. Such conditions significantly reduce the discriminative features available for accurate emotion recognition, limiting the effectiveness of conventional full-face approaches. To address this issue, this study proposes a part-based learning framework that partitions the face into multiple regions, allowing the model to exploit unoccluded areas for expression recognition. The proposed method employs Support Vector Machine (SVM) classifiers trained on Histogram of Oriented Gradients (HoG) features extracted from 2, 3, 4, and 6 facial partitions. Each part-based model is trained independently, and their outputs are combined through a weighted soft voting ensemble mechanism to generate the final prediction. The experiments were conducted on the MaskedFER2013 dataset, which contains 31,116 grayscale facial images (48&#215;48 pixels) distributed across seven emotion classes. The results demonstrate that the four-part model achieves the best performance, reaching an accuracy of 45%, outperforming both single-part models and full-face baselines under occlusion scenarios. These findings confirm that the proposed part-based ensemble approach enhances the robustness of FER systems by effectively leveraging complementary regional features, thereby providing a promising solution for real-world applications, where facial occlusion is unavoidable.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_29-Facial_Expression_Recognition_Under_Partial_Occlusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning and Forensic Approach for Robust Deepfake Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161028</link>
        <id>10.14569/IJACSA.2025.0161028</id>
        <doi>10.14569/IJACSA.2025.0161028</doi>
        <lastModDate>2025-10-30T08:26:16.5430000+00:00</lastModDate>
        
        <creator>Sales Aribe Jr</creator>
        
        <subject>Adversarial robustness; deepfake detection; diffusion models; explainable AI; forensic fusion; multimedia forensics; trustworthy AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The rapid evolution of generative adversarial networks (GANs) and diffusion models has made synthetic media increasingly realistic, raising societal concerns around misinformation, identity fraud, and digital trust. Existing deepfake detection methods either rely on deep learning, which suffers from poor generalization and vulnerability to distortions, or forensic analysis, which is interpretable but limited against new manipulation techniques. This study proposes a hybrid framework that fuses forensic features—including noise residuals, JPEG compression traces, and frequency-domain descriptors—with deep learning representations from convolutional neural networks (CNNs) and vision transformers (ViTs). Evaluated on benchmark datasets (FaceForensics++, Celeb-DF v2, DFDC), the proposed model consistently outperformed single-method baselines and demonstrated superior performance compared to existing state-of-the-art hybrid approaches, achieving F1-scores of 0.96, 0.82, and 0.77, respectively. Robustness tests demonstrated stable performance under compression (F1 = 0.87 at QF = 50), adversarial perturbations (AUC = 0.84), and unseen manipulations (F1 = 0.79). Importantly, explainability analysis showed that Grad-CAM and forensic heatmaps overlapped with ground-truth manipulated regions in 82 per cent of cases, enhancing transparency and user trust. These findings confirm that hybrid approaches provide a balanced solution—combining the adaptability of deep models with the interpretability of forensic cues—to develop resilient and trustworthy deepfake detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_28-A_Hybrid_Deep_Learning_and_Forensic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Predictive Maintenance Method Using Machine Learning to Improve IoT-Embedded Machinery Efficiency and Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161027</link>
        <id>10.14569/IJACSA.2025.0161027</id>
        <doi>10.14569/IJACSA.2025.0161027</doi>
        <lastModDate>2025-10-30T08:26:16.4970000+00:00</lastModDate>
        
        <creator>Abiinesh Nadarajan</creator>
        
        <creator>Iskandar Ishak</creator>
        
        <creator>Noridayu Manshor</creator>
        
        <creator>Raihani Mohamed</creator>
        
        <creator>Mohamad Yusnisyahmi Yusof</creator>
        
        <subject>Internet of Things; machine learning; predictive maintenance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Predictive maintenance plays a crucial role in minimizing unplanned downtimes, reducing maintenance costs, and optimizing the operational efficiency of IoT-embedded industrial machinery. Despite its transformative potential, traditional predictive maintenance methods often face challenges such as limited accuracy, high latency, and inefficiencies in processing large and imbalanced datasets. This study proposes an enhanced predictive maintenance method using the Sliding Window Method with XGB model (E.XGB), incorporating advanced data preprocessing, permutation importance, and hyperparameter optimization to address these limitations. The proposed method was evaluated on two datasets: the synthetic AI4I 2020 Predictive Maintenance Dataset and the real-world CNC Milling Dataset. A comparative analysis with a predictive maintenance method using E.AB from prior research as a benchmark, along with several baseline models (DT, RF, and SVM), revealed that the E.XGB model consistently outperformed other methods in accuracy, precision, recall, and F1-scores. On the AI4I 2020 dataset, the E.XGB model achieved an accuracy of 99.05%, while on the CNC Milling dataset, it attained an accuracy of 99.01%. Additionally, the E.XGB model demonstrated reduced training and prediction times, meeting the real-time requirements of industrial applications: training was approximately 94% faster and prediction approximately 99.8% faster than the E.AB model, making it highly suitable for real-time industrial applications. By improving accuracy, training speed, and prediction latency, the predictive maintenance method offers a robust, scalable, and reliable solution for predictive maintenance across diverse industrial contexts.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_27-Enhancing_Predictive_Maintenance_Method_Using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Marine Predators Algorithm-Based UAV Path Planning for 10-kV Distribution Networks Inspection in Live Working Scenarios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161026</link>
        <id>10.14569/IJACSA.2025.0161026</id>
        <doi>10.14569/IJACSA.2025.0161026</doi>
        <lastModDate>2025-10-30T08:26:16.4630000+00:00</lastModDate>
        
        <creator>Dapeng Ma</creator>
        
        <creator>Hongtao Jiang</creator>
        
        <creator>Lichao Jiang</creator>
        
        <creator>Chi Zhang</creator>
        
        <creator>Changwu Li</creator>
        
        <creator>Xin Zheng</creator>
        
        <creator>Mingxian Liu</creator>
        
        <creator>Kai Li</creator>
        
        <subject>Marine predators algorithm; YOLOv11; defect classification; UAV path planning; live power lines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Before conducting maintenance on 10-kV distribution networks, the use of unmanned aerial vehicles (UAVs) for inspecting distribution lines can effectively enhance the operational efficiency of personnel in live working scenarios. For UAV-based inspection of power distribution networks, an optimal flight path ensures both operational safety and comprehensive image acquisition in live working scenarios. Therefore, this study proposes a UAV path planning algorithm and an insulator defect classification model based on YOLOv11, aiming to develop a UAV system for live power line detection. Firstly, a UAV path planning model is established to minimize the flight path length and maximize the image acquisition range, which also considers the safety distance constraints between UAVs and live power lines. On this basis, the optimization strategy of the particle swarm optimization (PSO) algorithm is introduced into the marine predators algorithm (MPA), and a hybrid PSO-MPA algorithm is designed to improve the convergence accuracy of the MPA algorithm and solve the proposed UAV planning model. In addition, an insulator defect detection model has been developed to accurately identify the image information collected by UAVs. In order to improve the accuracy of the YOLOv11 model, the task-separation assignment (TSA) module was introduced into the YOLOv11 model, and a TSA-YOLOv11 model was designed. Experimental results demonstrate that the proposed PSO-MPA algorithm achieves superior convergence accuracy compared to five algorithms, including PSO. When the UAV flight step size is one meter, the PSO-MPA algorithm reduces the objective function value by an average of 49.62% relative to the other algorithms. Additionally, the TSA-YOLOv11 model attained an average accuracy of 96.87% for the insulator defect classification problem.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_26-An_Improved_Marine_Predators_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis of Original AuRa and Improved AuRa Consensus Algorithms in Chain Hammer Digital Certificate Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161025</link>
        <id>10.14569/IJACSA.2025.0161025</id>
        <doi>10.14569/IJACSA.2025.0161025</doi>
        <lastModDate>2025-10-30T08:26:16.4330000+00:00</lastModDate>
        
        <creator>Robiah Arifin</creator>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Evizal Abdul Kadir</creator>
        
        <subject>Blockchain; Ethereum; AuRa_ori; AuRa_v1; TPS; TGS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The blockchain functions as a distributed database, where data is securely stored across multiple servers and network nodes. It exists in various forms, with Bitcoin, Ethereum, and Hyperledger being among the most prominent examples. To ensure the integrity and security of transactions within a blockchain network, a consensus algorithm is employed to establish agreement among participating nodes. Several types of consensus algorithms exist, each offering distinct features and operational mechanisms. One such algorithm is Authority Round (here defined as AuRa_ori), a member of the Proof-of-Authority (PoA) family supported by Parity clients. Previous studies have highlighted several vulnerabilities and performance limitations in AuRa_ori, particularly concerning transaction speed per second (TPS) and transaction throughput per second (TGS). This study specifically investigates the original AuRa algorithm alongside an improved version, termed AuRa_v1. In AuRa_v1, the transaction process is structured into four key phases: 1) leader assignment, 2) block proposal, 3) agreement, and 4) block commitment. However, inconsistencies and inefficiencies have been identified within certain phases of the original AuRa_ori, particularly during the leader assignment and agreement stages. In response, this study proposes an improved approach through AuRa_v1 to address these vulnerabilities. A detailed analysis is conducted to evaluate the impact of these vulnerabilities on TPS, TGS, and epoch time, followed by a performance comparison between AuRa_ori and AuRa_v1. Experimental results demonstrate that AuRa_v1 effectively resolves the identified performance issues, achieving a significant improvement. Specifically, AuRa_v1 records a 21.65% increase in both TPS and TGS compared to AuRa_ori, validating the effectiveness of the proposed enhancements.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_25-Comparative_Performance_Analysis_of_Original_AuRa.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Game-Theoretic Approaches for Robust Stability of DC Motor Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161024</link>
        <id>10.14569/IJACSA.2025.0161024</id>
        <doi>10.14569/IJACSA.2025.0161024</doi>
        <lastModDate>2025-10-30T08:26:16.4170000+00:00</lastModDate>
        
        <creator>Mohamed Ayari</creator>
        
        <creator>Atef Gharbi</creator>
        
        <creator>Yamen El Touati</creator>
        
        <creator>Zeineb Klai</creator>
        
        <creator>Mahmoud Salaheldin Elsayed</creator>
        
        <creator>Elsaid Md. Abdelrahim</creator>
        
        <subject>Game theory; DC motor control; robust stability; differential games; Lyapunov stability; reinforcement learning; evolutionary algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study proposes a game-theoretic framework for achieving robust stability in DC motor systems operating under parametric uncertainty and external disturbances. We model the controller, disturbance, and uncertainty as strategic players in a non-cooperative differential game and synthesize equilibrium policies using a Lyapunov&#8211;game approach. Practically, the method integrates: 1) LMI-based stabilization to certify descent conditions, 2) actor&#8211;critic reinforcement learning to approximate the Hamilton&#8211;Jacobi&#8211;Isaacs (HJI) value function beyond linear regimes, and 3) evolutionary/swarm optimization for controller initialization and distributed observer tuning. We validate the framework on a separately excited DC motor subject to &#177;20% parameter variations and a bounded load-torque disturbance and compare it against PID and H&#8734; baselines. Simulations show consistently faster rise/settling, lower overshoot, stronger disturbance rejection at a step disturbance, and smoother control effort, while attaining the highest qualitative robustness margin among the tested controllers. Beyond single-motor stabilization, we outline extensions to multi-agent coordination, security-aware control, and fractional/fuzzy models, demonstrating adaptability and scalability of the approach. These results indicate that framing stability as the outcome of strategic interactions yields reliable and efficient DC-motor control in uncertain, adversarial environments.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_24-Game_Theoretic_Approaches_for_Robust_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Hallucination in Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161023</link>
        <id>10.14569/IJACSA.2025.0161023</id>
        <doi>10.14569/IJACSA.2025.0161023</doi>
        <lastModDate>2025-10-30T08:26:16.3870000+00:00</lastModDate>
        
        <creator>Nesreen M. Alharbi</creator>
        
        <creator>Thoria Alghamdi</creator>
        
        <creator>Raghda M. Alqurashi</creator>
        
        <creator>Reem Alwashmi</creator>
        
        <creator>Amal Babour</creator>
        
        <creator>Entisar Alkayal</creator>
        
        <subject>ChatGPT; GPT-4o; GPT-4o-mini; hallucination; healthcare; large language models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Large Language Models such as GPT-4o and GPT-4o-mini have shown significant promise in various fields. However, hallucination, in which models generate inaccurate information, remains a critical challenge, especially in domains that require high accuracy, such as the healthcare field. This study investigates hallucinations in two different LLMs, focusing on the healthcare domain. Four different experiments were defined to examine the two models’ memorization and reasoning abilities. For each experiment, a dataset with 193,155 multiple-choice medical questions from postgraduate medical programs was prepared by splitting it into 21 subsets according to medical topics. Each subset has two versions: one with the correct answers included and one without them. Accuracy and compliance were evaluated for each model. Models’ adherence to requirements in prompts was assessed. Also, the correlation between dataset size and accuracy was tested. The experiments were repeated to evaluate the models’ stability. Finally, the models’ reasoning was evaluated by human experts who assessed the models’ explanations for correct answers. The results revealed poor rates of accuracy and compliance for the two models, with rates below 70% and 75%, respectively, in most datasets; yet, both models showed low uncertainty (3%) in their responses. The findings showed that the accuracy was not affected by the size of the dataset provided to the models. Also, the results indicated that GPT-4o-mini demonstrates greater performance stability compared to GPT-4o. Furthermore, the two models provided acceptable justifications for choosing the correct answer in most cases, according to 68.8% of expert questionnaire participants who agreed with both models’ justifications. According to these results, both models cannot be relied upon when accuracy is critical, even though GPT-4o-mini slightly outperformed GPT-4o in providing the correct answers. The findings highlight the importance of improving LLM accuracy and reasoning to ensure reliability in critical fields like healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_23-Exploring_Hallucination_in_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Machine Learning for Monitoring Student Mental Health in Kazakhstan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161022</link>
        <id>10.14569/IJACSA.2025.0161022</id>
        <doi>10.14569/IJACSA.2025.0161022</doi>
        <lastModDate>2025-10-30T08:26:16.3400000+00:00</lastModDate>
        
        <creator>Bakirova Gulnaz</creator>
        
        <creator>Bektemyssova Gulnara</creator>
        
        <creator>Nor&#39;ashikin Binti Ali</creator>
        
        <subject>Federated Learning; data privacy; FedOpt; FedAvg; FedProx; mental health; non-IID data; educational data mining; psychological analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Federated Learning (FL) offers a privacy-preserving and decentralized paradigm for machine learning, making it particularly suitable for analyzing sensitive psychological and physiological data. This study aims to develop and evaluate a federated learning framework for assessing the psycho-emotional well-being of students in Kazakhstani educational institutions, where data privacy and infrastructural constraints pose significant challenges. We benchmark three FL algorithms, namely FedAvg, FedOpt, and FedProx, on heterogeneous, institution-level datasets that combine sleep, dietary, activity, and self-reported emotional measures. Experiments simulate cross-device, non-IID deployments and evaluate convergence, accuracy, and stability across ten communication rounds. Results show that FedProx attains the best trade-off between accuracy and stability under non-IID conditions (peak accuracy of 99.9%), while FedOpt provides faster early convergence, and FedAvg performs well for more homogeneous partitions. The methodological contribution comprises optimized aggregation and adaptive client weighting to mitigate non-IID effects in resource-constrained educational settings. These findings validate FL as a scalable, privacy-preserving approach for mental health monitoring in education and support its use for early intervention and resilience tracking. The proposed framework contributes to data-driven mental health policy design in educational systems, addressing both ethical and infrastructural considerations. The study discusses limitations of the simulated setup and outlines directions for broader deployment and cross-silo validation.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_22-Federated_Machine_Learning_for_Monitoring_Student_Mental_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning and IoT Framework for Predictive Maintenance of Wind Turbines: Enhancing Reliability and Reducing Downtime</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161021</link>
        <id>10.14569/IJACSA.2025.0161021</id>
        <doi>10.14569/IJACSA.2025.0161021</doi>
        <lastModDate>2025-10-30T08:26:16.3070000+00:00</lastModDate>
        
        <creator>Amina Eljyidi</creator>
        
        <creator>Hakim Jebari</creator>
        
        <creator>Siham Rekiek</creator>
        
        <creator>Kamal Reklaoui</creator>
        
        <subject>Predictive maintenance; wind turbine; artificial intelligence; deep learning; Convolutional Neural Network; Long Short-Term Memory; Internet of Things; Remaining Useful Life; condition monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The global shift towards renewable energy has positioned wind power as a cornerstone of sustainable development. However, the operational efficiency of wind farms is significantly hampered by unexpected component failures, leading to substantial downtime and maintenance costs. Traditional scheduled maintenance protocols are inefficient, often leading to unnecessary interventions or catastrophic failures. This study proposes a novel, robust framework for the predictive maintenance (PdM) of wind turbines, integrating Internet of Things (IoT) sensory data with a hybrid deep learning architecture. The proposed model leverages Convolutional Neural Networks (CNN) for feature extraction from vibrational and acoustic emission data, combined with Long Short-Term Memory (LSTM) networks to model the temporal dependencies inherent in time-series operational data. Drawing inspiration from successful applications of similar hybrid AI models in precision agriculture and smart farming, our approach is designed to accurately forecast the Remaining Useful Life (RUL) of critical components like gearboxes and bearings. We validate our framework on a benchmark dataset from NASA&#39;s Pronostia platform, demonstrating a 30% improvement in prediction accuracy over traditional single-model approaches and a 50% reduction in false alarms. The results underscore the potential of integrating hybrid AI and IoT, a paradigm successfully demonstrated in other complex systems, to create more reliable, efficient, and cost-effective maintenance strategies for the wind energy sector, thereby enhancing grid stability and accelerating the renewable energy transition.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_21-A_Hybrid_Deep_Learning_and_IoT_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hippopotamus Optimization Algorithm-Based Convolutional Neural Network Model for Mental Health Assessment Among College Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161020</link>
        <id>10.14569/IJACSA.2025.0161020</id>
        <doi>10.14569/IJACSA.2025.0161020</doi>
        <lastModDate>2025-10-30T08:26:16.2770000+00:00</lastModDate>
        
        <creator>Gai Hang</creator>
        
        <creator>Lin Yang</creator>
        
        <subject>Convolutional Neural Network; Long Short-Term Memory; hippopotamus optimization algorithm; mental health assessment; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>The mental health of adult students is crucial not only for enhancing their learning experience and overall quality of life, but also for alleviating academic and employment-related anxiety. A significant challenge in developing effective online mental health support systems is the accurate assessment of students&#39; mental health status. Current evaluation methods often lack precision and fail to integrate multifaceted data perspectives. To address these challenges, this study developed a psychological assessment system based on deep learning technology. The system aims to assess adult students&#39; psychological states and provide appropriate support. Specifically, it utilizes a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) algorithm framework to evaluate students&#39; psychological states by synthesizing image data, academic performance, and textual inputs. Furthermore, to enhance the accuracy of deep learning-based mental health assessment models, an improved hippopotamus optimization (IHO) algorithm was designed to optimize the hyperparameters of deep learning frameworks. By using the proposed multi-input single-output hybrid IHO-based LSTM-CNN framework (IHO-LSTM-CNN), the online mental health assessment module can accurately describe the psychological status of college students and provide personalized support to meet their specific needs. The final results indicate that the IHO-LSTM-CNN framework provides more accurate assessments than existing mental health assessment models, with an accuracy of 90.28%. This enhanced accuracy enables online community psychological support systems to deliver precise and effective psychological support to college students.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_20-A_Hippopotamus_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Transformer-Based Pretrained Models for Classical Arabic Named Entity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161019</link>
        <id>10.14569/IJACSA.2025.0161019</id>
        <doi>10.14569/IJACSA.2025.0161019</doi>
        <lastModDate>2025-10-30T08:26:16.2470000+00:00</lastModDate>
        
        <creator>Mariam Muhammed</creator>
        
        <creator>Shahira Azab</creator>
        
        <subject>Classical Arabic; Named Entity Recognition; transformer models; pretrained models; CANERCorpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study presents a comprehensive comparative evaluation of transformer-based pretrained language models for Named Entity Recognition (NER) in Classical Arabic, an underexplored linguistic variety characterized by rich morphology, orthographic ambiguity, and the absence of diacritics. The main objective of this work is to identify the most effective transformer model for Classical Arabic NER and to analyze the linguistic factors influencing model performance. Using the CANERCorpus, which contains Hadith texts annotated with twenty fine-grained entity types, ten transformer-based models were fine-tuned and evaluated under consistent experimental settings. The study benchmarks models such as AraBERT, ArBERT, and multiple CAMeLBERT variants, comparing their precision, recall, and F1-scores. The results demonstrate that all models achieve strong performance (F1 &gt; 96%), while CAMeL-CA-NER attains the highest score (F1 = 97.78%), confirming the advantage of domain-specific pretraining on Classical Arabic data. Error analysis further reveals that domain-adapted models better handle ambiguous entities and religious terminology. A comparative analysis with traditional and non-transformer approaches, including rule-based and BERT-CRF models from previous studies, shows that CAMeL-CA-NER surpasses earlier methods by more than 3% in F1-score, highlighting its superior capability in handling Classical Arabic text. However, this study is limited to the CANERCorpus, which primarily consists of Hadith texts; results may vary for other Classical Arabic genres or domains. These findings provide a valuable benchmark for future research and demonstrate the adaptability of modern NLP architectures to linguistically complex, low-resource domains.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_19-Evaluating_Transformer_Based_Pretrained_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Combined Weighting and BP Neural Networks for Relative Poverty Measurement and its Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161018</link>
        <id>10.14569/IJACSA.2025.0161018</id>
        <doi>10.14569/IJACSA.2025.0161018</doi>
        <lastModDate>2025-10-30T08:26:16.2130000+00:00</lastModDate>
        
        <creator>Xiaohua Cai</creator>
        
        <creator>Ya Zhao</creator>
        
        <creator>Lijia Chen</creator>
        
        <creator>Juan Huang</creator>
        
        <creator>Yang Xu</creator>
        
        <subject>Analytic hierarchy process (AHP); entropy method; BP neural network model; relative poverty measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This study addresses the challenges of measuring and evaluating relative poverty by introducing a comprehensive evaluation model based on the Analytic Hierarchy Process (AHP)-entropy method and BP neural networks. A multidimensional evaluation index system was constructed through expert consultation and literature review. The AHP-entropy method was then employed to determine the weights of the evaluation indicators, ensuring objectivity and scientific validity. Additionally, the BP neural network model was integrated to leverage self-learning and adaptive mechanisms for efficient and accurate poverty assessment. Empirical analysis shows that the model maintains a calculation error within 3.9%, demonstrating high precision and wide applicability. This research provides a novel approach that combines qualitative analysis with quantitative evaluation, offering a practical tool for governmental agencies to design effective poverty alleviation strategies. Moreover, the model opens new pathways for future research in regional poverty assessment, especially in enhancing cross-cultural adaptability and advancing intelligent evaluation models.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_18-Using_Combined_Weighting_and_BP_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Lightweight Detection and Classification Method for Field-Grown Horticultural Crops</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161017</link>
        <id>10.14569/IJACSA.2025.0161017</id>
        <doi>10.14569/IJACSA.2025.0161017</doi>
        <lastModDate>2025-10-30T08:26:16.1830000+00:00</lastModDate>
        
        <creator>Yaru Huang</creator>
        
        <creator>Hua Zhou</creator>
        
        <creator>Zhongyi Shu</creator>
        
        <subject>Computer vision; neural network; object detection and classification; lightweight; horticultural crops</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Crops are the core carrier of the human food supply and the agricultural economy, yet manual management in large-scale crop cultivation faces bottlenecks such as low efficiency, high cost, and difficulty in standardization, creating an urgent need for computer vision technology to realize automated detection and growth stage classification. However, most existing algorithms rely on high-performance GPUs for operation, resulting in high hardware costs, which makes it difficult to popularize them in low-end agricultural edge devices (e.g., embedded controllers, low-cost industrial computers). This study proposes a lightweight crop detection and classification model, Lite-CropNet. It builds a neural network architecture based on the CSPDarknet backbone network, designs a concise decoder, and adopts four-scale detection heads to adapt to crop targets of different sizes, balancing high accuracy and lightweight characteristics. Using tomatoes as the experimental object, tests on the TomatOD dataset (simulating real greenhouse environments) show that Lite-CropNet outperforms advanced methods, with a mean Average Precision (mAP)@0.5 of 85.7%. Under the conditions of the GTX 1650 GPU and 640&#215;640 resolution, the Frames Per Second (FPS) reaches 76.9, and the model size is only 4.4M. This neural network model can efficiently complete tomato detection and maturity classification, and its architecture and design can also be transferred to crops such as potatoes and strawberries, providing a cost-effective and highly universal automated solution for agricultural production.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_17-Efficient_Lightweight_Detection_and_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SD-CNN: A Novel Lightweight Convolutional Neural Network Model for Fall Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161016</link>
        <id>10.14569/IJACSA.2025.0161016</id>
        <doi>10.14569/IJACSA.2025.0161016</doi>
        <lastModDate>2025-10-30T08:26:16.1530000+00:00</lastModDate>
        
        <creator>Han-lin Shen</creator>
        
        <creator>Tian-hu Wang</creator>
        
        <creator>Hong Mu</creator>
        
        <subject>Fall detection; lightweight; SMA attention; depth-separable convolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>To address the high computational complexity and large parameter counts of traditional deep learning fall detection models, this study proposes a lightweight convolutional neural network model, SD-CNN (SMA-Enhanced Depthwise Convolutional Neural Network), for fall detection. The model is first designed with an SMA attention module to enhance feature representation. Then, depthwise separable convolution is used to significantly reduce the model complexity. Finally, batch normalisation and Dropout regularisation techniques are combined to efficiently extract spatial-temporal features from temporal signals for accurate classification of fall and non-fall behaviours. The experiments use a sliding window to extract discrete features, three-axis acceleration, and synthetic acceleration as feature inputs. SD-CNN achieves 99.11% accuracy, 98.78% specificity, and 99.39% sensitivity on the self-built dataset Act, which are improved by 7.14%, 6.42%, and 9.38%, respectively, compared to CNN, while the number of parameters is reduced significantly. The effectiveness of the model is also verified by generalisation experiments on the public datasets SisFall and WEDAFall. The SD-CNN algorithm can efficiently complete the fall detection task, and its lightweight design makes it particularly suitable for wearable devices, providing a highly efficient and reliable solution for real-time fall detection with important value for practical applications.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_16-SD_CNN_A_Novel_Lightweight_Convolutional_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated CNN, YOLOv5 and Faster R-CNN Framework for Real-Time Water Pipe Defect Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161015</link>
        <id>10.14569/IJACSA.2025.0161015</id>
        <doi>10.14569/IJACSA.2025.0161015</doi>
        <lastModDate>2025-10-30T08:26:16.1200000+00:00</lastModDate>
        
        <creator>Chu Fu</creator>
        
        <creator>Mideth Abisado</creator>
        
        <subject>Deep learning; Convolutional Neural Network; YOLOv5; Faster R-CNN; machine vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>In the context of rapidly expanding urban water supply networks and the prevalence of pipe defects (for example, corrosion, cracks, leaks, and blockages) that undermine efficiency and pose safety risks, this study presents an intelligent detection system aimed at improving maintenance accuracy and operational stability. We propose a fusion-based detection architecture combining Convolutional Neural Networks for stable multi-level feature extraction, YOLOv5 for high-speed real-time detection, and Faster R-CNN for enhanced recall of small or occluded defects. Individually, the models achieve 85.0% accuracy for the CNN extractor, 90.0% detection accuracy with 50 FPS for YOLOv5, and 86.8% recall for Faster R-CNN. Ablation experiments confirm that the fully integrated system attains superior performance (92.1% accuracy, 85.0% recall, an F1 score of 81.0, and an mAP of 85.1 at 45 FPS), demonstrating that ensemble methods harness complementary strengths to optimize detection precision and speed. Overall, our findings highlight the promise of deep learning-based ensembles for large-scale, real-time pipeline inspection, offering a foundation for future intelligent infrastructure management.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_15-An_Integrated_CNN_YOLOv5_and_Faster_R_CNN_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NetDAIL: An Optimized Deep Learning-Based Hybrid Model for Anomaly Detection in Network Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161014</link>
        <id>10.14569/IJACSA.2025.0161014</id>
        <doi>10.14569/IJACSA.2025.0161014</doi>
        <lastModDate>2025-10-30T08:26:16.0900000+00:00</lastModDate>
        
        <creator>Saad Khalifa</creator>
        
        <creator>Mohamed Marie</creator>
        
        <creator>Wael Mohamed</creator>
        
        <subject>Anomaly detection; deep learning; autoencoders; NetDAIL; unsupervised learning; intrusion detection; NSL-KDD; KDD Cup 1999</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Detecting rare and subtle anomalies is critical for ensuring cybersecurity, financial integrity, and operational safety. High-dimensional features, severe class imbalance, and large data volumes often challenge conventional intrusion detection methods. This study presents NetDAIL, a hybrid framework that integrates deep feature learning using a denoising autoencoder, anomaly scoring through Isolation Forest, and classification via LightGBM to address these challenges. To evaluate its effectiveness, the proposed framework was tested on two widely used benchmark datasets: NSL-KDD for controlled-scale experimentation and KDD Cup 1999 for large-scale evaluation. NetDAIL achieved an AUC of 0.998 on the NSL-KDD dataset and 0.990 on the KDD Cup 1999 dataset, demonstrating strong discriminative capability across different traffic volumes and attack patterns. Experimental results confirm the model’s high detection accuracy, scalability, and generalization across diverse network intrusion scenarios. These findings highlight NetDAIL as a practical and reliable solution for real-world anomaly detection, capable of efficiently handling both small- and large-scale environments while maintaining robust and effective performance in operational settings.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_14-NetDAIL_An_Optimized_Deep_Learning_Based_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid Algorithm for Vision-Based Sleep Posture Analysis Integrating CNN, LSTM and MediaPipe</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161013</link>
        <id>10.14569/IJACSA.2025.0161013</id>
        <doi>10.14569/IJACSA.2025.0161013</doi>
        <lastModDate>2025-10-30T08:26:16.0570000+00:00</lastModDate>
        
        <creator>Apichaya Nimkoompai</creator>
        
        <creator>Puwadol Sirikongtham</creator>
        
        <subject>Sleep posture detection; MediaPipe; CNN; LSTM; real-time monitoring; pose estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Sleep posture is a critical factor affecting sleep quality and long-term health, particularly for the elderly and patients with chronic conditions. This research proposes a novel hybrid algorithm for real-time, vision-based sleep posture analysis by integrating Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, and MediaPipe pose estimation. The primary objective is to accurately classify the four main sleep postures—supine, left lateral, right lateral, and prone—while incorporating an automated alert system for risky behaviors, such as maintaining a prone position for over 15 minutes or remaining in any static posture for more than 2 hours. The system processes video input through a streamlined pipeline: MediaPipe first extracts 3D body keypoints, which are then fed into a CNN for spatial feature extraction, followed by an LSTM network to model temporal dependencies across frames. Evaluated on a dataset of 280 video samples from 20 participants under both daytime and nighttime conditions, the model achieved an accuracy of 96.4% in daylight and 92.8% in low-light environments, demonstrating robust performance across varying illumination. Comparative analysis confirmed its superiority over existing methods, such as depth-based CNN or pressure-sensor models. The study concludes that the proposed hybrid system offers a practical, non-invasive, and highly accurate solution for continuous sleep monitoring, with significant potential for deployment in smart healthcare and remote elderly care applications.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_13-A_New_Hybrid_Algorithm_for_Vision_Based_Sleep_Posture_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwriting Detectives Using Wavelet Siamese Technology to Verify Signature Fraud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161012</link>
        <id>10.14569/IJACSA.2025.0161012</id>
        <doi>10.14569/IJACSA.2025.0161012</doi>
        <lastModDate>2025-10-30T08:26:16.0270000+00:00</lastModDate>
        
        <creator>Mohamed Nazir</creator>
        
        <creator>Ali Maher</creator>
        
        <creator>Mostafa Eltokhy</creator>
        
        <creator>Ali M. El-Rifaie</creator>
        
        <creator>Tarek Hosny</creator>
        
        <creator>Hani M. K. Mahdi</creator>
        
        <subject>Biometric authentication; Siamese neural networks; scattering wavelets; common anchor selection; neutrosophic logic; signature verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>This paper addresses the escalating challenge of signature forgery detection through an innovative hybrid verification system. We integrate Siamese Neural Networks with wavelet scattering transformations to precisely capture signature characteristics while accommodating inherent variations. Our principal contribution, the &quot;common anchor methodology,&quot; identifies a singular representative signature per individual, substantially reducing computational demands on the CEDAR Dataset while maintaining verification integrity. Through meticulous optimization of wavelet scattering parameters, our system demonstrates markedly superior performance on the CEDAR benchmark while requiring considerably fewer model parameters than traditional CNN architectures. This research establishes noteworthy advancements in both accuracy and efficiency for practical signature verification implementations. The study evaluates the performance of a wavelet-Siamese network architecture for offline signature verification through a series of five experiments with varying parameter configurations. Key variables include the use of a common anchor, the J Factor, and the θ value. Results reveal that incorporating a common anchor consistently improves performance. Among all configurations, Experiment 4, with a J Factor of 2 and a θ value of 16, yielded the most favorable results, achieving the lowest error rate of 20.823% and the highest ROC-AUC score of 0.8699, along with efficient convergence within 55 iterations. In contrast, the absence of a common anchor in Experiment 1 led to a notably higher error rate of 24.44% and lower model performance. These findings demonstrate the critical role of parameter tuning in enhancing the robustness and accuracy of signature verification systems based on Siamese networks. Despite the substantial computational savings, the system’s best achieved error rate (20.82%) remains higher than several state-of-the-art and commercial signature verification solutions, many of which report error rates below 10%. This indicates an existing trade-off between efficiency and the highest attainable accuracy, which future work will aim to mitigate.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_12-Handwriting_Detectives_Using_Wavelet_Siamese_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Virtual Machine Consolidation Based on Autoformer and Enhanced Double Q-Network for Energy-Efficient Cloud Data Center</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161011</link>
        <id>10.14569/IJACSA.2025.0161011</id>
        <doi>10.14569/IJACSA.2025.0161011</doi>
        <lastModDate>2025-10-30T08:26:15.9970000+00:00</lastModDate>
        
        <creator>Kaiqi Zhang</creator>
        
        <creator>Youbo Lyu</creator>
        
        <creator>Dequan Zheng</creator>
        
        <creator>Yanping Chen</creator>
        
        <creator>Jianshan Xu</creator>
        
        <subject>Cloud computing; virtual machine consolidation; load prediction; energy efficiency; deep reinforcement learning; Autoformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>As the scale of cloud data centers continues to expand, energy consumption has become a critical issue. Virtual machine (VM) consolidation is a key technology for improving resource utilization and reducing energy consumption, yet it remains challenging to effectively balance energy efficiency with service level agreement violations (SLAV) in dynamic cloud environments. This paper proposes an adaptive VM consolidation strategy based on Autoformer and an enhanced double Q-network, referred to as AEDQN-VMC. The approach consists of three integrated components: 1) Autoformer-based load detection, which leverages an autocorrelation mechanism to decompose time-series data into multi-scale trend and periodic components; 2) a VM selection method that integrates the Pearson correlation coefficient and migration time to optimize the selection of VMs for migration; and 3) an enhanced double Q-network for VM placement, incorporating the upper confidence bound (UCB) and adaptive learning rate (ALR) to improve the exploration-exploitation trade-off. Extensive experiments on real-world cloud workload traces (PlanetLab, Google Cluster, and Alibaba datasets) demonstrate that the proposed method significantly outperforms state-of-the-art benchmarks such as PABFD, AD-VMC, and AMO-VMC. Specifically, it achieves maximum reductions of 46.5% in energy consumption and 74.2% in SLAV rate. Ablation studies further validate the contribution of each component and confirm the synergistic effect of the overall architecture. The results highlight the potential of AEDQN-VMC as an efficient and reliable solution for sustainable cloud data center operations.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_11-Adaptive_Virtual_Machine_Consolidation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction and Characteristics of an Engineering Economic Risk Management Platform Based on the BO-GBM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161010</link>
        <id>10.14569/IJACSA.2025.0161010</id>
        <doi>10.14569/IJACSA.2025.0161010</doi>
        <lastModDate>2025-10-30T08:26:15.9630000+00:00</lastModDate>
        
        <creator>Chaojian Wang</creator>
        
        <creator>Die Liu</creator>
        
        <subject>Engineering economic risk management platform; BO-GBM model; Bayesian Optimization; gradient boosters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Economic risk control is pivotal to the success of engineering projects. Traditional risk assessment methods often fall short in handling the high-dimensional, nonlinear, and strongly correlated risk factors prevalent in modern large-scale projects. To address these limitations, this study constructs an engineering economic risk management platform based on the BO-GBM model, which integrates Bayesian Optimization (BO) with a Gradient Boosting Machine (GBM). The platform employs a systematically constructed four-dimensional feature system encompassing 28 indicators across project ontology, market environment, execution process, and risk association dimensions. A rolling time window strategy is adopted for dynamic model training. Experimental validation on a dataset of 327 projects demonstrates the superior performance of the BO-GBM model: for classification tasks, it achieves an AUC of 0.927 and a recall rate of 91.3%, outperforming the standard GBM by 17.5 percentage points in recall; for regression tasks (cost deviation prediction), it attains an RMSE of 83,200 RMB and reduces the MAPE to 9.7%, surpassing mainstream baseline models. The platform&#39;s layered architecture (data, model, service, application layers) enables efficient risk identification and early warning: the time required for risk identification in large projects is drastically reduced from 42.6 hours to 0.52 hours, representing an 81.9-fold efficiency gain; the average single prediction response time is below 127 milliseconds, with a P95 response time of 427 milliseconds under 500 concurrent users; the early warning accuracy reaches 72.5%, with high-risk warnings issued up to 28 days in advance for cost risks and 42 days for schedule risks.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_10-Construction_and_Characteristics_of_an_Engineering_Economic_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Fault Detection in Software Using an Adaptive Neural Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161009</link>
        <id>10.14569/IJACSA.2025.0161009</id>
        <doi>10.14569/IJACSA.2025.0161009</doi>
        <lastModDate>2025-10-30T08:26:15.9170000+00:00</lastModDate>
        
        <creator>Jasem Alostad</creator>
        
        <subject>Software fault detection; adaptive neural algorithm; software reliability; neural networks; fault classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Software fault detection is crucial for ensuring reliable and high-quality software systems. However, traditional fault detection methods often rely on manual inspection or rule-based techniques, which are time-consuming and prone to human error. In this research, the researchers propose an enhanced fault detection approach using an adaptive neural transfer learning algorithm. The goal is to leverage the power of neural networks and adaptability to improve fault detection accuracy and classification performance. The problem addressed in this research is the need for more effective fault detection methods that can handle the complexities of modern software systems. Existing fault detection techniques lack adaptability and struggle to cope with diverse software scenarios. Neural networks have shown promise in pattern recognition and classification tasks, making them suitable for fault detection. However, fixed architectures and training strategies limit their performance in different software contexts. To address this problem, the research proposes an adaptive neural transfer learning algorithm for fault detection. The algorithm dynamically adjusts its neural network architecture and training process based on the characteristics of the software under test. It incorporates adaptive mechanisms, such as adjusting learning rates and regularization techniques, to optimize performance. Real-time feedback and performance evaluation during the training process drive the adaptive mechanisms. To evaluate the proposed approach, the researchers conducted a series of experiments using diverse software systems and fault scenarios. The research compared the performance of the adaptive algorithm with traditional fault detection methods, including rule-based techniques and fixed neural network architectures. Evaluation metrics such as accuracy, precision, recall, and F1 score were used. The results consistently show that the adaptive neural transfer learning algorithm outperforms existing methods, achieving higher fault detection accuracy and improved classification performance.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_9-Enhanced_Fault_Detection_in_Software_Using_an_Adaptive_Neural_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TabNet–XGBoost Hybrid Model for Student Performance Prediction and Customized Feedback</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161008</link>
        <id>10.14569/IJACSA.2025.0161008</id>
        <doi>10.14569/IJACSA.2025.0161008</doi>
        <lastModDate>2025-10-30T08:26:15.9030000+00:00</lastModDate>
        
        <creator>Anupama Prasanth</creator>
        
        <subject>Virtual learning environments; student performance prediction; TabNet; XGBoost; SHAP; feedback generation; quality education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Virtual Learning Environments (VLEs) have emerged as a cornerstone of modern education, enabling large-scale delivery of learning materials, assessments, and interactions in fully or partially online formats. The dynamic and self-paced nature of VLEs makes the early prediction of learner scores crucial for timely intervention and support. The existing frameworks either underperform in capturing complex, non-linear relationships in heterogeneous educational data or lack interpretability mechanisms necessary for actionable interventions. This study proposes a TabNet–XGBoost hybrid model with SHAP-based interpretability for score range classification in VLE contexts, using the Open University Learning Analytics Dataset (OULAD). Data preprocessing involved cleaning, encoding, normalization, feature engineering, and score band derivation, producing an enriched feature matrix integrating demographic, assessment, and engagement indicators. TabNet’s sequential attentive feature selection extracted a latent representation of the most informative variables, which was subsequently refined by XGBoost to produce sharper decision boundaries for four distinct score ranges. SHAP values were computed post-prediction to identify domain-specific performance drivers, enabling alignment with a structured feedback module across seven predefined learning domains. Experimental results demonstrated a classification accuracy of 98.8% on the test set, outperforming the baseline frameworks. The SHAP-driven feedback mechanism provided interpretable, domain-targeted insights, enhancing the model’s practical applicability for educators and academic support teams. By integrating high predictive accuracy with transparent reasoning and actionable feedback, the proposed framework addresses both the technical and pedagogical requirements of early performance prediction in online learning environments, offering a scalable solution for real-time academic monitoring and intervention.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_8-TabNet_XGBoost_Hybrid_Model_for_Student_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Evaluation of CNN Architectures for Skin Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161007</link>
        <id>10.14569/IJACSA.2025.0161007</id>
        <doi>10.14569/IJACSA.2025.0161007</doi>
        <lastModDate>2025-10-30T08:26:15.8870000+00:00</lastModDate>
        
        <creator>Taopik Hidayat</creator>
        
        <creator>Nurul Khasanah</creator>
        
        <creator>Elly Firasari</creator>
        
        <creator>Laela Kurniawati</creator>
        
        <creator>Eni Heni Hermaliani</creator>
        
        <subject>Artificial intelligence; convolutional neural network; deep learning; dermoscopic images; skin cancer classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Skin cancer is one of the fastest-growing health problems worldwide. Early and accurate diagnosis is essential for improving treatment success and patient survival. However, many previous studies have focused on single CNN architectures or limited datasets, resulting in models with restricted generalizability. To address this gap, this study presents a comparative evaluation of three deep learning architectures (DenseNet169, MobileNetV2, and VGG19) for automatic classification of benign and malignant skin cancers using dermoscopic digital images. A total of 10,000 images were compiled from three public Kaggle datasets, preprocessed through resizing and data augmentation, and trained using transfer learning based on ImageNet weights. Two data split schemes (60:20:20 and 80:10:10) were applied to assess model robustness. Experimental results show that DenseNet169 achieved the highest test accuracy of 90.7 per cent, while MobileNetV2 was the fastest with an inference time of 16 seconds. These findings highlight the tradeoff between accuracy and computational efficiency and support the use of deep learning models, particularly DenseNet169 and MobileNetV2, in the development of real-time AI-assisted skin cancer diagnostic systems.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_7-Comparative_Evaluation_of_CNN_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedestrian Navigation System with 3D Map and Charging Server Based on Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161006</link>
        <id>10.14569/IJACSA.2025.0161006</id>
        <doi>10.14569/IJACSA.2025.0161006</doi>
        <lastModDate>2025-10-30T08:26:15.8530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Pedestrian navigation; steganography; client-server system; geographic information system; GIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>A pedestrian navigation system with a 3D map and a steganography-based billing server is proposed. The proposed system includes a server system that provides topographical maps and navigation information to both pedestrians and vehicles. When using the proposed system, necessary images of cross sections, intersections, or points of interest can be automatically obtained, similar to the Street View feature in Google Maps. Users can post photos taken with their camera phones and earn points if the photos are marked as posted. While the proposed system incurs a usage fee, these points can be used to reduce the subscription fee. If the quality of an image is superior to that of a previously archived image, the new image overwrites the previous one. The billing system for the system usage fee incorporates digital steganography for security reasons. This prevents the leakage of user information and other data. Through experiments, it is confirmed that the proposed system works well.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_6-Pedestrian_Navigation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifying Career Preferences and Perceptions of Software Testing Among Filipino IT Students: A Mixed-Method Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161005</link>
        <id>10.14569/IJACSA.2025.0161005</id>
        <doi>10.14569/IJACSA.2025.0161005</doi>
        <lastModDate>2025-10-30T08:26:15.8230000+00:00</lastModDate>
        
        <creator>Chrisza Joy M. Carrido</creator>
        
        <creator>Abeer Alsadoon</creator>
        
        <creator>Thair Al-Dala’in</creator>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Abubakar Elsafi</creator>
        
        <creator>Azhari Qismallah</creator>
        
        <creator>Albaraa Abuobieda</creator>
        
        <subject>Software testing; career preferences; Filipino IT students; mixed methods; education reform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Software testing (ST) careers have consistently demonstrated low appeal among IT students globally, creating significant workforce gaps in this essential field of information technology. This study investigates the extent to which Filipino IT students share this disinterest in software testing careers as observed in previous international studies, while examining the unique cultural, economic, and curricular realities that influence their career decision-making processes. Utilizing a mixed-methods approach, the research analyzes quantitative survey responses and qualitative focus group discussions to determine student perceptions and attitudes toward the software testing profession. The study employs a multidimensional framework to explore local factors that shape career preferences among Filipino IT students. Findings confirm that software testing is not the first career choice for most respondents, paralleling previous international research findings. However, qualitative data reveal that students demonstrate significantly greater interest when opportunities offer competitive salaries, clear career growth trajectories, meaningful professional development opportunities, and comprehensive academic training in software testing methodologies. The research identifies unique local factors, including economic pressures, cultural perceptions of professional prestige, and significant curriculum gaps that systematically influence students&#39; career decisions. These results highlight critical needs for effective reforms within current IT curricula and enhanced career guidance programs to address the systematic undervaluation of the software testing profession. The study&#39;s implications suggest that targeted educational interventions and improved industry-academia collaboration could better prepare students for the fast-evolving demands of the IT industry while addressing the persistent shortage of qualified software testing professionals in both local and global markets.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_5-Quantifying_Career_Preferences_and_Perceptions_of_Software_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Enabled Data-Driven Optimization of Dynamic Thermal Loads for Low-Energy Buildings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161004</link>
        <id>10.14569/IJACSA.2025.0161004</id>
        <doi>10.14569/IJACSA.2025.0161004</doi>
        <lastModDate>2025-10-30T08:26:15.7930000+00:00</lastModDate>
        
        <creator>Zhaojiang Lyu</creator>
        
        <subject>IoT-enabled optimization; dynamic thermal load; attention-enhanced forecasting; multi-agent reinforcement learning; energy-efficient buildings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Energy-efficient building operation requires accurate prediction and optimization of dynamic thermal loads under noisy IoT data streams. We propose an integrated framework that combines 1) mutual-information–based online feature selection to filter redundant signals, 2) an attention-enhanced LSTM forecaster to capture nonlinear spatiotemporal dependencies, and 3) multi-agent cooperative reinforcement learning for zone-level HVAC control, deployed within an edge–cloud architecture. Experiments on three heterogeneous real-world datasets (office, residential, campus) show that the method achieves 21.7% median energy savings (IQR 18.9–23.1%), improving over MADDPG by +5.8 percentage points (p=0.004, Wilcoxon). Forecasting accuracy is also improved, with MAE reduced by 16.7% (95% CI 12.4–20.1%) compared with Seq2Seq+Attention. Comfort deviations are maintained within &#177;1&#176;C (median absolute deviation 0.32&#176;C). Robustness tests indicate graceful degradation under σ≤0.2 Gaussian noise and ≤20% missing data, while ablation confirms the contribution of each module. Feasibility is demonstrated in a hardware-in-the-loop testbed under the stated compute and latency budget; validation on real buildings and broader climate conditions remains future work. This study contributes to smart building energy management, IoT-based HVAC control, and sustainable operation optimization.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_4-IoT_Enabled_Data_Driven_Optimization_of_Dynamic_Thermal_Loads.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Logs to Knowledge: LLM-Powered Dynamic Knowledge Graphs for Real-Time Cloud Observability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161003</link>
        <id>10.14569/IJACSA.2025.0161003</id>
        <doi>10.14569/IJACSA.2025.0161003</doi>
        <lastModDate>2025-10-30T08:26:15.7600000+00:00</lastModDate>
        
        <creator>Nurmyrat Amanmadov</creator>
        
        <creator>Tarlan Abdullayev</creator>
        
        <subject>Large Language Models (LLMs); AI for cloud computing; knowledge graphs; logs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Cloud platforms continuously generate vast amounts of logs, metrics, and traces that are vital for monitoring and debugging distributed systems. However, current observability solutions are often siloed, dashboard-centric, and limited to surface-level correlations, making it difficult to derive actionable insights in real time. In this work, we present Log2Graph, a novel framework that leverages large language models (LLMs) to transform heterogeneous telemetry into dynamic knowledge graphs that evolve alongside system state. Unlike traditional log analytics, Log2Graph unifies unstructured messages, distributed traces, and configuration data into a living graph representation, enabling real-time dependency mapping, causal chain analysis, and compliance monitoring. Furthermore, the framework supports natural language queries over the evolving graph, allowing operators to ask questions such as “what services will be impacted if this database fails?” and receive precise, graph-backed explanations. Our evaluation on multi-cloud testbeds shows that Log2Graph reduces incident resolution time, improves accuracy in dependency detection, and enhances operator productivity. This work introduces a new paradigm of LLM-augmented observability, bridging the gap between raw logs and actionable cloud intelligence.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_3-From_Logs_to_Knowledge_LLM_Powered_Dynamic_Knowledge_Graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Legacy to Cloud: Migration Strategies for Traditional Financial Institutions Using AWS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161002</link>
        <id>10.14569/IJACSA.2025.0161002</id>
        <doi>10.14569/IJACSA.2025.0161002</doi>
        <lastModDate>2025-10-30T08:26:15.7130000+00:00</lastModDate>
        
        <creator>Uday Kiran Chilakalapalli</creator>
        
        <creator>Brij Mohan</creator>
        
        <creator>Vinodkumar Reddy Surasani</creator>
        
        <subject>Cloud migration; financial services; AWS; compliance; risk management; governance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Traditional financial institutions face unprecedented pressure to modernize their technological infrastructure while maintaining regulatory compliance and operational stability. This research examines the strategic approaches, implementation challenges, and outcomes of migrating legacy banking systems to Amazon Web Services (AWS) cloud infrastructure through a mixed-methods analysis of twelve financial institutions that completed migrations between 2019 and 2024. Through structured interviews with technology leaders and quantitative analysis of migration outcomes, including regulatory considerations and real-world implementation cases, this study identifies key success factors and potential pitfalls in large-scale financial services cloud adoption. The research reveals that institutions adopting phased migration strategies with robust risk management frameworks achieve 92% success rates with 30-45% cost reductions and 40-60% performance improvements, compared to 58% success rates for rapid, wholesale transitions. Furthermore, the study demonstrates that AWS-specific services such as AWS Control Tower and AWS Config provide essential governance capabilities that traditional financial institutions require for regulatory compliance during cloud transformation initiatives.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_2-From_Legacy_to_Cloud_Migration_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Assisted Workflow Optimization and Automation in the Compliance Technology Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0161001</link>
        <id>10.14569/IJACSA.2025.0161001</id>
        <doi>10.14569/IJACSA.2025.0161001</doi>
        <lastModDate>2025-10-30T08:26:15.6200000+00:00</lastModDate>
        
        <creator>Zhen Zhong</creator>
        
        <subject>Compliance technology; auxiliary process; process optimization; automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(10), 2025</description>
        <description>Against the backdrop of digital transformation and increasingly strict regulation, enterprise compliance work demands higher efficiency and accuracy. Auxiliary compliance processes, being highly transactional and repetitive, have become an important entry point for optimizing the compliance system. This study examines the process characteristics of auxiliary compliance work, clarifies its structural composition and organizational mechanisms, proposes an optimization path centered on process reengineering, system modeling, and technology integration, and explores the collaborative application of key technologies such as RPA, rule engines, and semantic recognition in process automation. The research suggests that systematic optimization and intelligent upgrading of auxiliary processes will help build a modern compliance operation system that is responsive, efficient, clearly structured, and risk-controllable.</description>
        <description>http://thesai.org/Downloads/Volume16No10/Paper_1-AI_Assisted_Workflow_Optimization_and_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved BFT Algorithm in Traceability Data for Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160995</link>
        <id>10.14569/IJACSA.2025.0160995</id>
        <doi>10.14569/IJACSA.2025.0160995</doi>
        <lastModDate>2025-09-30T11:06:49.6900000+00:00</lastModDate>
        
        <creator>Zhiyong Liang</creator>
        
        <creator>Rongwang Jiang</creator>
        
        <creator>Ming Yang</creator>
        
        <creator>Boxiong Yang</creator>
        
        <subject>Tendermint; BFT; consensus; consortium blockchain; traceability data for supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Byzantine Fault Tolerance (BFT) is a class of fault-tolerance techniques in the field of distributed computing. To address the risks of error-prone and tampered data introduced by the centralized database in the traditional supply chain traceability process, the BFT consensus algorithm combined with a consortium blockchain can solve security problems such as data deletion, data misuse, application attacks, and efficiency reduction in the storage of supply chain traceability data. This will be the future trend of safe and orderly storage and management of supply chain traceability data.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_95-An_Improved_BFT_Algorithm_in_Traceability_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Socio-Technical Factors Influencing Business Intelligence Adoption in SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160994</link>
        <id>10.14569/IJACSA.2025.0160994</id>
        <doi>10.14569/IJACSA.2025.0160994</doi>
        <lastModDate>2025-09-30T11:06:49.6600000+00:00</lastModDate>
        
        <creator>Ibrahim Abdusalam Abubaker Alsibhawi</creator>
        
        <creator>Hazura Binti Mohamed</creator>
        
        <creator>Jamaiah Binti Yahaya</creator>
        
        <subject>Information quality; social influences; perceived usefulness of Business Intelligence Adoption; perceived ease of adoption of Business Intelligence System; Business Intelligence System adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study explores the major challenges that Small and Medium-sized Enterprises (SMEs) encounter when adopting Business Intelligence Systems (BIS), particularly in complex socio-political environments, such as Libya. It aims to understand how internal constraints, like limited financial capacity, resistance to change among management, and weak knowledge-sharing practices, combined with external socio-political factors, influence BIS adoption in developing economies. A cross-sectional survey approach was employed, targeting 297 SME owners and managers in Libya. Data was collected using a structured questionnaire and analyzed with SmartPLS to examine the relationships among key variables: facilitating conditions, information quality, perceived ease of adoption, perceived usefulness, and social influence. The findings highlight that social influence, especially from peers and industry experts, plays a crucial role in shaping SMEs’ adoption behavior. Moreover, the quality of information emerged as a significant determinant in the successful adoption of BIS. The study offers both practical and policy-level insights, suggesting that with the right support, BIS adoption can significantly enhance SMEs’ competitiveness, decision-making capabilities, and operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_94-Socio_Technical_Factors_Influencing_Business_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Hybrid Approach for Predicting Stroke Risk: A Feature Augmented Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160993</link>
        <id>10.14569/IJACSA.2025.0160993</id>
        <doi>10.14569/IJACSA.2025.0160993</doi>
        <lastModDate>2025-09-30T11:06:49.6270000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Wong Jia Qian</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <creator>Ayodeji Olalekan Salau</creator>
        
        <creator>Omolayo M. Ikumapayi</creator>
        
        <creator>Sunday A. Afolalu</creator>
        
        <subject>Public health; Random Forest; Support Vector Machine; hybrid model; stroke prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This project addresses the critical challenge of stroke prediction by developing a hybrid model that integrates the strengths of the Random Forest (RF) and Support Vector Machine (SVM) algorithms. Stroke risk is highly influenced by lifestyle-related factors such as smoking, hypertension, heart disease, and elevated body mass index (BMI). Although existing models, such as standalone Random Forest classifiers, offer moderate predictive performance, achieving an accuracy of approximately 74.53%, they often fall short in clinical reliability. The proposed hybrid model improves prediction accuracy by leveraging Random Forest to capture complex, nonlinear relationships and determine feature importance, while SVM enhances performance in high-dimensional spaces by establishing precise decision boundaries. This study also includes a comprehensive literature review that evaluates existing algorithms, their implementation in current systems, and cross-domain insights, ultimately informing the development of a novel conceptual framework. The anticipated outcome is a robust, data-driven predictive tool that enhances clinical decision-making and supports early intervention strategies. By combining complementary machine learning techniques, this hybrid approach aims to set a new benchmark in stroke risk assessment and contribute meaningfully to patient care in modern healthcare environments towards sustainable public health.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_93-Towards_the_Hybrid_Approach_for_Predicting_Stroke_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Statistical, Machine Learning, and Deep Learning Approaches for Frost Prediction in the Peruvian Altiplano</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160992</link>
        <id>10.14569/IJACSA.2025.0160992</id>
        <doi>10.14569/IJACSA.2025.0160992</doi>
        <lastModDate>2025-09-30T11:06:49.6130000+00:00</lastModDate>
        
        <creator>Fred Torres-Cruz</creator>
        
        <creator>Dina Maribel Yana-Yucra</creator>
        
        <creator>Richar Andre Vilca-Solorzano</creator>
        
        <subject>Frost prediction; machine learning; deep learning; ensemble methods; Altiplano; agricultural early warning systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Frost events represent a critical climatic hazard for agricultural systems in the Peruvian highlands, impacting approximately 74% of rural communities in the Puno region. This research addresses the question of whether machine learning (ML) and deep learning (DL) approaches can significantly outperform traditional statistical methods for frost prediction in extreme high-altitude tropical conditions, achieving sufficient accuracy for operational early warning systems. We present a comprehensive evaluation of twelve forecasting models for predicting daily minimum temperatures, utilizing NASA POWER satellite data (2000-2025) from thirteen meteorological stations across the Altiplano plateau (121,056 observations). The study implements and compares traditional statistical approaches (SARIMAX, Holt-Winters, Prophet, STL+ARIMA), machine learning algorithms (Random Forest, Support Vector Machines, XGBoost), deep neural network architectures (Multilayer Perceptron, LSTM, 1D-CNN), a hybrid SARIMA+ANN model, and an optimized ensemble approach. The ensemble model, integrating XGBoost, LSTM, and Random Forest through weighted averaging, demonstrated superior performance with RMSE=1.65&#176;C and TSS=0.87, representing a 35% improvement over the best-performing statistical method. Individual analysis revealed XGBoost achieved RMSE=1.78&#176;C with exceptional feature interaction modeling, while LSTM networks exhibited remarkable temporal pattern recognition with recall=0.88 for frost event detection. These findings validate the effectiveness of nonlinear approaches for operational forecasting under extreme climatic conditions and offer a robust framework for early warning systems that could substantially mitigate agricultural losses in vulnerable high-altitude communities.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_92-Comparative_Analysis_of_Statistical_Machine_Learning_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TomDetLeaf: A Realistic Multi-Source Dataset for Real-Time Tomato Leaf Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160991</link>
        <id>10.14569/IJACSA.2025.0160991</id>
        <doi>10.14569/IJACSA.2025.0160991</doi>
        <lastModDate>2025-09-30T11:06:49.5800000+00:00</lastModDate>
        
        <creator>Yassmine Ben Dhiab</creator>
        
        <creator>Mohamed Ould-Elhassen Aoueileyine</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ridha Bouallegue</creator>
        
        <subject>Tomato leaf detection; smart agriculture; dataset; tomato leaf dataset; real-time inference; Edge AI; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Plant diseases remain a major threat to crop productivity, especially where timely diagnosis is difficult. This paper introduces TomDetLeaf, a new annotated dataset designed for tomato leaf detection in diverse agricultural environments, supporting the development of generalizable deep learning models for edge AI deployment. Unlike existing datasets such as PlantVillage, which consist mainly of single-leaf images captured under controlled conditions, TomDetLeaf integrates heterogeneous sources including the Taiwan dataset, climate-controlled greenhouses, hydroponic systems and farm environments. The dataset combines single-leaf and multi-leaf images, realistic backgrounds and varying illumination, addressing a key gap that limits the real-world robustness of current models. To demonstrate its utility, we trained and evaluated YOLOv8 on both the original Taiwan dataset and our proposed TomDetLeaf. Results show that YOLOv8 trained on TomDetLeaf achieved 88.3% mAP@0.5, 81.8% precision, and 82.7% recall, exceeding the Taiwan-subset baseline of 77.4% mAP@0.5, 81.6% precision, and 67.6% recall. This validates the contribution of TomDetLeaf in improving detection accuracy and generalization under realistic conditions. By providing a diverse, deployment-ready dataset, this work bridges the gap between theoretical benchmarks and practical real-time applications.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_91-TomDetLeaf_A_Realistic_Multi_Source_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Models in Mental Health Based on Unsupervised Data Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160990</link>
        <id>10.14569/IJACSA.2025.0160990</id>
        <doi>10.14569/IJACSA.2025.0160990</doi>
        <lastModDate>2025-09-30T11:06:49.5500000+00:00</lastModDate>
        
        <creator>Inoc Rubio Paucar</creator>
        
        <creator>Cesar Yactayo-Arias</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Behavioral patterns; clustering; machine learning; mental health; university students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In the university context, students’ mental health has been progressively affected over time. The objective of this research was to develop a machine learning predictive model based on the K-Means algorithm, with the purpose of identifying and classifying mental health profiles among university students. For the construction of this model, the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was applied, which encompasses five stages: business understanding, data understanding, data preparation, modeling, and evaluation. The results obtained suggest that the generated clusters produce consistent groupings in key variables such as screen time, hours of sleep, and level of physical activity, allowing the characterization of different student profiles. This approach provides valuable information for designing academic support strategies and programs aimed at students’ well-being and mental health. The early identification of behavioral patterns and lifestyle habits enables educational institutions to implement preventive and personalized measures, fostering improved academic performance and university adaptation.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_90-Predictive_Models_in_Mental_Health_Based_on_Unsupervised_Data_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chaotic Compressed Sensing for Secure Image Transmission in LoRa IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160989</link>
        <id>10.14569/IJACSA.2025.0160989</id>
        <doi>10.14569/IJACSA.2025.0160989</doi>
        <lastModDate>2025-09-30T11:06:49.5330000+00:00</lastModDate>
        
        <creator>Chatchai Wannaboon</creator>
        
        <creator>Shamsul Ammry Bin Shamsul Ridzwan</creator>
        
        <creator>Sorawit Fong-In</creator>
        
        <subject>Secure image transmission; compressed sensing; chaotic maps; long-range radio signals; LoRa</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Transmitting image data reliably over long distances with low cost and minimal storage consumption is critical for LoRa-enabled IoT devices. Conventional methods often rely on high-power consumption or computationally intensive hardware, rendering them unsuitable for cost-sensitive and resource-limited IoT deployments. This paper presents a hybrid compressed sensing approach designed for efficient image transmission in LoRa-based IoT systems. The proposed method utilizes a chaotic map-based sensing matrix to enhance randomness and incoherence in the sampling process, which also serves as an encryption key to secure the transmitted data. On the reconstruction side, the wavelet transform is combined with Total Variation (TV) minimization to accurately recover high-quality images from the sparse measurements. The system is implemented on low-power development boards, with the ESP32-CAM used for image capture and initial compression, and the CubeCell-AB01 handling LoRa-based wireless transmission. Experimental results demonstrate significant reductions in data size and transmission cost, while preserving image fidelity and enhancing data security, making the proposed method well-suited for resource-constrained IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_89-Chaotic_Compressed_Sensing_for_Secure_Image_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Energy Efficiency and Increasing Scalability in 6G-IoT Networks Through SDN, Duty Cycling, and AI-Driven Slicing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160988</link>
        <id>10.14569/IJACSA.2025.0160988</id>
        <doi>10.14569/IJACSA.2025.0160988</doi>
        <lastModDate>2025-09-30T11:06:49.5030000+00:00</lastModDate>
        
        <creator>Marwah Albeladi</creator>
        
        <creator>Kamal Jambi</creator>
        
        <creator>Fathy E. Eassa</creator>
        
        <creator>Maher Khemakhem</creator>
        
        <subject>6G-IoT; energy efficiency; scalability; SDN; duty cycling; network slicing; CNN; BiLSTM; AI-driven optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>As sixth-generation (6G) and Internet of Things (IoT) networks expand rapidly, concerns are growing about their energy consumption and scalability, primarily because more devices are being connected, resulting in increased energy consumption. This study examines three primary strategies for optimizing energy efficiency and improving scalability in 6G-IoT networks, evaluated in three experimental setups: 1) using software-defined networking (SDN) with dynamic slicing to organize devices based on when they are most and least used, 2) duty cycling, which turns devices on and off to save energy, and 3) AI-optimized network slicing that uses both convolutional neural networks (CNN) and bidirectional long short-term memory (BiLSTM) models. In the first setup, SDN with dynamic slicing helped reduce unnecessary power consumption by matching device activity to peak times. As more devices were added, this method kept energy use low and improved the network’s ability to handle growth without requiring significantly more power. This resulted in a 66.28 percent decrease in power usage. In the second setup, duty cycling allowed only some devices to be active at a time, which reduced power use by over 60 percent during slow periods. In the third setup, the CNN-BiLSTM model effectively classified service types and reduced power use by 60.14 percent. While these methods were not combined into a single solution, each utilized slicing techniques to more effectively allocate resources and manage power.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_88-Optimizing_Energy_Efficiency_and_Increasing_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Adaptive Gap-Run TID Compression for Large-Scale Frequent Itemset Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160987</link>
        <id>10.14569/IJACSA.2025.0160987</id>
        <doi>10.14569/IJACSA.2025.0160987</doi>
        <lastModDate>2025-09-30T11:06:49.4700000+00:00</lastModDate>
        
        <creator>Xin Dai</creator>
        
        <creator>Chenjiao Liu</creator>
        
        <creator>Xue Hao</creator>
        
        <creator>Qichen Su</creator>
        
        <subject>Frequent itemset mining; pure Eclat; Hierarchical Adaptive Gap-Run List (HAGL-TID); large-scale transaction data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Frequent itemset mining faces the prominent problems of high storage space requirements and low efficiency in a large-scale transaction data environment. The traditional Eclat algorithm usually uses bitmaps or sparse arrays to represent transaction identifier (TID) lists, which makes it difficult to adapt to both dense and sparse transaction data at the same time. Although existing hybrid representation schemes can partly alleviate this problem, the additional computational overhead caused by frequent data structure switching and the inherent space waste of the bitmap structure have not been fundamentally resolved. Therefore, this article proposes the HiAGL-FIM algorithm based on a Hierarchical Adaptive Gap-Run Transaction Identifier List (HAGL-TID). This algorithm adaptively selects a Gap List or Run List for transaction identifier encoding according to the continuity ratio, and designs an efficient TID intersection operation, completely eliminating dependence on the bitmap structure and effectively reducing memory consumption and intersection computation overhead. The experimental results show that HiAGL-FIM has significant advantages in terms of running time, memory usage, and data scalability compared to classical algorithms such as Eclat, FP-Growth, and dEclat. Especially when the transaction data scale reaches millions, it shows a more significant performance improvement, demonstrating the effectiveness and practical value of our method.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_87-Hierarchical_Adaptive_Gap_Run_TID_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Weakly Supervised MIL Approach to Fake News Detection via Propagation Tree Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160986</link>
        <id>10.14569/IJACSA.2025.0160986</id>
        <doi>10.14569/IJACSA.2025.0160986</doi>
        <lastModDate>2025-09-30T11:06:49.4230000+00:00</lastModDate>
        
        <creator>Shariq Bashir</creator>
        
        <subject>Identifying fake news; social network analysis; post stance detection; deep learning; information retrieval; multiple instance learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents a weakly supervised Multiple Instance Learning (MIL) framework for fake news detection in social media, leveraging propagation tree analysis to model the spread of misinformation across online networks. Unlike traditional text-based or graph-based methods, our approach captures fine-grained post-level stances (support, denial, question, comment) and aggregates them to infer news veracity using a novel hierarchical attention mechanism. The framework incorporates social network dynamics of information diffusion, offering deeper insights into how user interactions amplify or suppress misinformation. We evaluate our model on benchmark datasets, including PolitiFact and GossipCop from FakeNewsNet, comprising over 23,000 news articles and hundreds of thousands of user engagements, as well as on the SemEval-8 dataset for binary classification of true vs. fake news. Our method achieves up to 94.3% accuracy and 91.7% F1-score, outperforming state-of-the-art machine learning and deep learning baselines. Ablation studies further validate the contribution of stance aggregation and attention-based propagation modeling. These results highlight the effectiveness of integrating stance detection, propagation structures, and weakly supervised learning for scalable and interpretable fake news verification in online environments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_86-A_Weakly_Supervised_MIL_Approach_to_Fake_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mental Health Monitoring in Neurodivergent Children Using NeuroSky TGAM1: Real-Time EEG Signal Processing for Cognitive and Emotional Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160985</link>
        <id>10.14569/IJACSA.2025.0160985</id>
        <doi>10.14569/IJACSA.2025.0160985</doi>
        <lastModDate>2025-09-30T11:06:49.3930000+00:00</lastModDate>
        
        <creator>Erika Yolanda Aguilar Del Villar</creator>
        
        <creator>Jesús Jaime Moreno Escobar</creator>
        
        <creator>Claudia Hernández Aguilar</creator>
        
        <subject>EEG; neurodivergent children; wearable; spectral power density analysis; therapy sessions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study presents a real-time electroencephalography (EEG) monitoring system tailored for neurodivergent children, leveraging the affordable, single-channel NeuroSky TGAM1 sensor. We introduce a robust signal processing pipeline based on spectral power density analysis (from Delta to Gamma bands) to identify discrete cognitive-emotional states during therapy sessions. The system demonstrates 82.3% accuracy in classifying focused attention, emotional distress, and calm engagement. Crucially, our wearable implementation provides objective biomarkers for personalizing mental health interventions, effectively bridging biomedical engineering and child psychiatry. We illustrate the system’s adaptability across various therapeutic contexts; notably, our findings reveal compelling neural response patterns during dolphin-assisted therapy for children with Autism Spectrum Disorder (ASD). This low-cost, scalable solution shows significant potential for objectively evaluating therapeutic efficacy in populations with ADHD and ASD, moving beyond subjective assessments towards data-driven care.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_85-Mental_Health_Monitoring_in_Neurodivergent_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying a Lightweight Graphics Library to Visually Corroborate Learning in Programming Introduction Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160984</link>
        <id>10.14569/IJACSA.2025.0160984</id>
        <doi>10.14569/IJACSA.2025.0160984</doi>
        <lastModDate>2025-09-30T11:06:49.3770000+00:00</lastModDate>
        
        <creator>Claudia De La Fuente</creator>
        
        <creator>Cristian Vidal-Silva</creator>
        
        <creator>Liza Jegó-Mendoza</creator>
        
        <creator>Patricia Pedrero-Valenzuela</creator>
        
        <subject>Structured programming; visual corroboration; self-regulated learning; metacognition; program visualization; project-based learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Learning to program in first-year courses is challenging because the link between source code and program behaviour is not immediately visible to novices. This paper reports on the deployment of UFramework, a lightweight graphics library developed in C++/Visual Studio, designed to help students visually corroborate their learning by observing the on-screen effects of their own algorithms. The experience was implemented in a Structured Programming module that combines lectures, labs, and project-based assignments. We 1) describe the design principles and architecture of the library, 2) present a portfolio of progressively scaffolded assignments (shooter prototype, grid map parsing, spatial quadrants), 3) outline the assessment rubric and its alignment with intended learning outcomes, and 4) report multi-year descriptive evidence that includes pass rates and qualitative reflections. Results show improved student engagement and higher pass rates in the most recent cohorts, together with qualitative evidence of increased motivation and clearer problem decomposition. While the findings are limited to a single institution and remain descriptive, they suggest that lightweight, visual-first workflows can lower barriers to learning programming and foster computational thinking competencies such as decomposition, abstraction, algorithmic design, and debugging. Future work should include controlled comparisons and broader validation to strengthen internal validity and explore the applicability of this approach to non-visual domains.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_84-Applying_a_Lightweight_Graphics_Library_to_Visually_Corroborate_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Explainable Overlapping Co-Clustering for Recommender Systems: Capturing Multifaceted Preferences with Enhanced Interpretability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160983</link>
        <id>10.14569/IJACSA.2025.0160983</id>
        <doi>10.14569/IJACSA.2025.0160983</doi>
        <lastModDate>2025-09-30T11:06:49.3470000+00:00</lastModDate>
        
        <creator>Chiheb Eddine Ben Ncir</creator>
        
        <creator>Mohammed Ibrahim Alattas</creator>
        
        <subject>Clustering-based recommender systems; modularity maximization; overlapping co-clustering; multiple-user-preferences; recommendation interpretability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Recommender systems have become critical tools in reducing information overload by providing personalized recommendations across several application domains, including commerce, industry, education, and academic research. Clustering-based recommender systems, which group similar users or items to generate suggestions, have shown high accuracy and efficiency. However, conventional clustering methods often fail to address several challenges: they ignore the possibility that a user may have different item preferences, offer limited interpretability of the generated suggestions, and cannot tailor recommendation list sizes to individual user needs. To address these issues, we propose in this work a new recommender system based on Overlapping Co-clustering and Modularity Maximisation (OCCMM). The proposed method accounts for users having several item preferences by building overlapping clusters rather than the conventional non-overlapping model. It also clusters items and users simultaneously, facilitating the generation and interpretation of suggestions through the co-clustering technique. Furthermore, OCCMM enables adjustment of recommendation list sizes through an easily tuned parameter δ. Experiments conducted on three real-world datasets demonstrated the effectiveness of OCCMM in achieving better performance in terms of accuracy and interpretability compared to conventional existing methods.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_83-New_Explainable_Overlapping_Co_Clustering_for_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Formal Verification of Modular Concurrent Systems: A Survey of Techniques, Tools and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160982</link>
        <id>10.14569/IJACSA.2025.0160982</id>
        <doi>10.14569/IJACSA.2025.0160982</doi>
        <lastModDate>2025-09-30T11:06:49.3170000+00:00</lastModDate>
        
        <creator>Sawsen Khlifa</creator>
        
        <creator>Chiheb Ameur Abid</creator>
        
        <creator>Asma ben Letaifa</creator>
        
        <creator>Belhassen Zouari</creator>
        
        <subject>Distributed systems; state space; Modular Petri Net; formal verification; state explosion problem; model checking; temporal logic; reduction techniques; RDSS; ROS2; scalability; modularity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The increasing complexity of distributed and concurrent systems raises pressing challenges for ensuring correctness and reliability. Formal verification, and in particular model checking, offers a rigorous foundation to validate system properties, yet suffers from the well-known state space explosion problem. This difficulty is especially acute in modular architectures, where local behaviors intertwine with synchronization across components. This paper provides a structured survey of the main techniques designed to overcome these challenges, including state space reduction, abstraction, compositional reasoning, symbolic approaches, and distributed verification. We also review representative tools such as SPIN, NuSMV, LTSmin, DiVinE, and STORM, assessing their capabilities and limitations in handling modular and concurrent models. Building on this landscape, we position the Reduced Distributed State Space (RDSS) as a novel framework that addresses key scalability limits. RDSS reduces global complexity into module-specific meta-graphs, ensures stuttering equivalence, and enables local model checking without exploring the full global state space. Comparative experiments demonstrate significant gains over existing approaches, particularly for systems where modules are not all synchronized on the same transitions. We conclude by identifying open challenges and future research directions, including distributed implementations, AI-driven heuristics, and hybrid reductions. Our survey underscores the importance of structural awareness in modern verification workflows and establishes RDSS as a promising foundation for scalable verification of modular concurrent systems.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_82-Scalable_Formal_Verification_of_Modular_Concurrent_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A FOREX Trading System Based on Semi-Supervised News Classification, Market Sentiment Analysis, and GRU-CNN Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160981</link>
        <id>10.14569/IJACSA.2025.0160981</id>
        <doi>10.14569/IJACSA.2025.0160981</doi>
        <lastModDate>2025-09-30T11:06:49.2830000+00:00</lastModDate>
        
        <creator>Nabil MABROUK</creator>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <subject>FOREX; trading; semi-supervised classification; sentiment analysis; machine learning; deep learning; RNN; CNN; GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Investors access the foreign exchange market (FOREX) not only to preserve their wealth but also to generate profits and achieve specific financial goals. It is one of the largest financial markets that investors rely on, and it is based on fluctuations in currency exchange rates to make a profit over different time cycles: short-term, medium-term, and long-term. In this article, we propose an automated FOREX trading system that combines two artificial intelligence algorithms: the first classifies news by relevance using semi-supervised learning and then analyzes market sentiment. This algorithm plays a crucial role in replacing traditional fundamental analysis, which is based on macroeconomic factors, political events, and news headlines. The use of GAN-BERT helped improve performance in classification tasks with limited labeled data and reduced execution time. This algorithm demonstrates impressive results, achieving a high accuracy of 97.5%, which makes its output data more reliable for use in the second algorithm, a combination of two deep learning models: the Gated Recurrent Unit (GRU) and the Convolutional Neural Network (CNN). We enrich the dataset used in this phase with additional technical indicators and features that may help explain market fluctuations. We evaluated our final algorithm over multiple time frames and several windows; the results were impressive, and back-testing confirmed the system's potential profitability and risk profile.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_81-A_FOREX_Trading_System_Based_on_Semi_Supervised_News_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Distance-Optimized Transformers for High-Performance Arabic Short Answers Grading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160980</link>
        <id>10.14569/IJACSA.2025.0160980</id>
        <doi>10.14569/IJACSA.2025.0160980</doi>
        <lastModDate>2025-09-30T11:06:49.2670000+00:00</lastModDate>
        
        <creator>Hatem M. Noaman</creator>
        
        <creator>Mohsen Rashwan</creator>
        
        <creator>Hazem Raafat</creator>
        
        <subject>Automatic Arabic Short Answers Grading; Arabic language processing; educational technology; pre-trained language models; semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study presents a comprehensive distance-optimized transformer architecture for Automated Arabic Short Answers Grading (AASAG) that systematically evaluates multiple semantic similarity measures. Short answer grading—assessment of responses typically 1-3 sentences long requiring conceptual understanding rather than factual recall—poses significant challenges in Arabic due to morphological complexity and limited computational resources. Our approach integrates pre-trained Arabic transformer models (AraBERT v02) with four distinct distance algorithms: cosine similarity, Manhattan distance, Euclidean distance, and dot-product calculations within a Siamese network architecture. Through systematic evaluation across three progressively enhanced datasets (original AR-ASAG, SemEval-augmented, and reference-integrated versions), our distance-optimized approach achieves state-of-the-art performance with a correlation coefficient of 0.7998, representing a 5.5% improvement over existing methods. This advancement significantly outperforms traditional vector space models (0.7037 correlation), BERT-based approaches (0.7616), and hybrid semantic analysis methods (0.745), establishing new benchmarks for Arabic educational assessment technology.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_80-Leveraging_Distance_Optimized_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task-Oriented Evaluation of Assamese Tokenizers Using Sentiment Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160979</link>
        <id>10.14569/IJACSA.2025.0160979</id>
        <doi>10.14569/IJACSA.2025.0160979</doi>
        <lastModDate>2025-09-30T11:06:49.2200000+00:00</lastModDate>
        
        <creator>Basab Nath</creator>
        
        <creator>Sagar Tamang</creator>
        
        <creator>Osman Elwasila</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Assamese NLP; tokenization; subword tokenization; sentiment analysis; low-resource languages; BERT; class imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Tokenization is a foundational step in the NLP pipeline, and its design strongly influences the performance of transformer-based models, particularly for morphologically rich and low-resource languages such as Assamese. While most tokenizers are traditionally assessed using intrinsic metrics, their practical impact on downstream tasks has remained underexplored. This study systematically evaluates nine subword tokenizer configurations—spanning Byte-Pair Encoding (BPE), WordPiece, and Unigram algorithms with vocabulary sizes of 8K, 16K, and 32K—on sentiment classification in Assamese. Each tokenizer was integrated into a BERT-base-multilingual-cased model by replacing the default tokenizer and reinitializing the embedding layer. On a manually curated dataset, naïve fine-tuning proved unstable under class imbalance, but a class-weighted loss restored effective training and exposed clear performance differences across tokenizers. WordPiece consistently outperformed BPE and Unigram, with the wordpiece 16k configuration achieving a weighted F1-score of 0.4897 across 10 random seeds. This score was statistically comparable to mBERT (0.4919) and competitive with larger multilingual baselines such as XLM-R (0.4978), despite relying on a far smaller, Assamese-specific vocabulary. These findings underscore that tokenizer choice is not a neutral preprocessing step but a critical design decision, highlighting the importance of downstream evaluation when developing practical NLP pipelines for low-resource languages.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_79-Task_Oriented_Evaluation_of_Assamese_Tokenizers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking Large Language Models for Hate Speech Detection in Arabic Dialects: Focus on the Saudi Dialects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160978</link>
        <id>10.14569/IJACSA.2025.0160978</id>
        <doi>10.14569/IJACSA.2025.0160978</doi>
        <lastModDate>2025-09-30T11:06:49.1900000+00:00</lastModDate>
        
        <creator>Omaima Fallatah</creator>
        
        <subject>Arabic hate speech detection; large language models (LLMs); in-context learning; Arabic NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study investigates the effectiveness of large language models (LLMs) in detecting Arabic hate speech, with a particular focus on prompt-based learning and the sociolinguistic challenges of Saudi dialects. We evaluate four LLMs, GPT-4o, LLaMA3, Gemma2, and ALLaM, using zero-shot, one-shot, and three-shot prompting strategies. The results show that all models benefit from in-context examples, with GPT-4o achieving the highest overall performance across all prompting settings. A detailed error analysis reveals persistent challenges, particularly in detecting implicit hate, handling dialectal variation, and interpreting culturally embedded expressions. We also highlight limitations related to topic bias and annotation ambiguity, which further complicate model evaluation. Overall, the findings offer key insights for evaluating LLMs in low-resource settings and addressing the unique linguistic complexities of Arabic dialects.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_78-Benchmarking_Large_Language_Models_for_Hate_Speech_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Neural Networks with Shapley-Value Explanations for Hierarchical Recommendation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160977</link>
        <id>10.14569/IJACSA.2025.0160977</id>
        <doi>10.14569/IJACSA.2025.0160977</doi>
        <lastModDate>2025-09-30T11:06:49.1600000+00:00</lastModDate>
        
        <creator>Redwane Nesmaoui</creator>
        
        <creator>Mouad Louhichi</creator>
        
        <creator>Mohamed Lazaar</creator>
        
        <subject>Hyperbolic graph neural networks; Shapley value; explainable recommendation; hierarchical recommendation systems; interpretability; Explainable AI (XAI); Poincaré ball embeddings; graph neural networks; feature attribution; hyperbolic geometry; user-item graph embeddings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Hierarchical structures are prevalent in real-world recommendation systems; however, existing graph neural networks (GNNs) struggle to capture them effectively because of their reliance on Euclidean geometry and a lack of interpretability. This paper presents a novel architecture, Hyperbolic Graph Neural Networks with Shapley-Value Explanations (HGNN-SV), which simultaneously addresses both challenges in hierarchical recommendation tasks. Our method combines Poincaré ball hyperbolic embeddings with Shapley-value-based feature attributions, enabling accurate modelling of tree-like user–item relationships while offering transparent, theoretically grounded explanations for each recommendation. Experiments on the Amazon Product Reviews and MovieLens 1M datasets demonstrated strong performance across multiple evaluation metrics. On MovieLens-1M, HGNN-SV achieved a Precision@10 of 0.822, Recall@10 of 0.785, and F1-Score@10 of 0.803. For Amazon Product Reviews, the method attained a Precision@10 of 0.785, Recall@10 of 0.730, and F1-Score@10 of 0.756. A comparative evaluation against leading baselines, including LightGCN, Hyperbolic GCN, GNNShap, and MAGE, shows that our unified approach consistently outperforms existing methods across all metrics. Moreover, the generated Shapley attributions closely aligned with semantic item hierarchies, as validated through systematic evaluation. By bridging the gap between geometric expressiveness and interpretability, our approach establishes a new benchmark for trustworthy, high-fidelity hierarchical recommendation systems.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_77-Graph_Neural_Networks_with_Shapley_Value_Explanations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer-Aided Diagnosis System for Ulcerative Colitis Classification Using Vision Transformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160976</link>
        <id>10.14569/IJACSA.2025.0160976</id>
        <doi>10.14569/IJACSA.2025.0160976</doi>
        <lastModDate>2025-09-30T11:06:49.1270000+00:00</lastModDate>
        
        <creator>Dharmendra Gupta</creator>
        
        <creator>Jayesh Gangrade</creator>
        
        <creator>Yadvendra Pratap Singh</creator>
        
        <creator>Shweta Gangrade</creator>
        
        <subject>Ulcerative Colitis (UC); colonoscopy videos; deep learning; vision transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Ulcerative colitis (UC) is an inflammatory condition of the colon whose severity can significantly impact a patient’s quality of life. Assessing disease severity from colonoscopy data is a laborious process that concentrates on the most severe anomalies. Current diagnostic methods, primarily colonoscopy, are subjective and prone to inter-observer variability and physician variance, hindering accurate staging and personalized treatment. As such, automated and precise technology is required to deliver optimal outcomes. The current study introduces UC-visionNet, an automated approach that classifies ulcerative colitis severity based on colonoscopy image analysis using vision transformer techniques. UC-visionNet makes use of vision transformers, pre-trained deep learning models that have proven highly successful in image analysis applications. To classify ulcerative colitis severity, these models are fine-tuned on the LIMUC (Labeled Images for Ulcerative Colitis) dataset. Compared to conventional colonoscopy procedures, using UC-visionNet for image analysis may be faster, enhancing patient satisfaction and increasing healthcare effectiveness. In contrast to state-of-the-art techniques, the proposed model performs quantitatively better on the LIMUC dataset, attaining a 96% training accuracy with the Vision Transformer (ViT). UC-visionNet offers a promising automated solution for accurate and efficient UC severity classification.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_76-A_Computer_Aided_Diagnosis_System_for_Ulcerative_Colitis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid Approach Based on Discrete Wavelet Transform and Deep Learning for Traffic Sign Recognition in Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160975</link>
        <id>10.14569/IJACSA.2025.0160975</id>
        <doi>10.14569/IJACSA.2025.0160975</doi>
        <lastModDate>2025-09-30T11:06:49.1130000+00:00</lastModDate>
        
        <creator>Rim Trabelsi</creator>
        
        <creator>Khaled Nouri</creator>
        
        <subject>Safety; discrete wavelet transform; traffic sign recognition; autonomous vehicles; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The rapid advancement of autonomous vehicles has led to the widespread integration of advanced driver assistance systems, significantly improving vehicle control, safety, and compliance with traffic regulations. A crucial aspect of these systems is the reliable detection and recognition of traffic signs, which play a key role in managing urban traffic flow and ensuring road safety. However, traffic sign recognition remains a challenging task due to varying lighting conditions, occlusions, and diverse sign appearances. This paper presents a novel hybrid approach for efficient traffic sign recognition tailored to the needs of autonomous driving. The proposed method combines the Discrete Wavelet Transform for robust feature extraction with the powerful classification capabilities of Convolutional Neural Networks within a Deep Learning framework. The DWT effectively captures essential image characteristics while reducing noise and irrelevant details, providing a compact yet informative feature set for the CNN classifier. Extensive experiments were conducted to evaluate the performance of the system in real-world conditions. The proposed approach achieved an impressive recognition precision of 98%, demonstrating its ability to interpret and respond to traffic signs with high reliability. The results confirm the method’s robustness, real-time efficiency, and suitability for deployment in intelligent transportation systems and autonomous vehicles. Overall, this study highlights the complementary strengths of DWT and CNN within the broader context of Deep Learning, offering a significant improvement over conventional traffic sign recognition techniques. The proposed system represents a promising step toward enhancing the perception capabilities of autonomous vehicles, contributing to safer and more reliable navigation in complex traffic environments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_75-A_New_Hybrid_Approach_Based_on_Discrete_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision-Based Autonomous Localization of Fall Protection Anchor Points on Transmission Towers Using Multi-View Geometric Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160974</link>
        <id>10.14569/IJACSA.2025.0160974</id>
        <doi>10.14569/IJACSA.2025.0160974</doi>
        <lastModDate>2025-09-30T11:06:49.0800000+00:00</lastModDate>
        
        <creator>Chunqing Yang</creator>
        
        <creator>Yu Peng</creator>
        
        <creator>Jian Yu</creator>
        
        <creator>Dongfeng Yu</creator>
        
        <creator>Rui Liu</creator>
        
        <creator>Jiahui Chen</creator>
        
        <subject>Fall protection lanyard; transmission tower inspection; anchor point localization; multi-view geometry; spatial edge distance perception; homography transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents the first systematic investigation into autonomous UAV-mounted fall protection lanyard (FPL) deployment for high-voltage transmission tower inspections, addressing a critical safety gap in the power industry where falls account for 34% of occupational fatalities. We propose a novel geometry-based solution to overcome three fundamental limitations of existing approaches: the isolated processing of UAV imagery without sensor fusion, unreliable 2D-to-3D spatial correspondence in anchor point detection, and the high annotation costs of supervised learning methods. Our technical contribution establishes a multi-view geometric perception framework that decomposes the FPL anchoring task into ridge line identification and optimal mounting point selection. The method first develops a spatial edge distance perception algorithm specifically for power inspection drones, which computes structural depth through plane-induced homography transformations of temporally matched line features. Subsequently, a mounting position planning algorithm integrates multi-view geometric constraints with practical operational requirements including ladder proximity, diagonal steel avoidance, and temporal stability. Experimental validation on real-world power infrastructure data demonstrates superior performance compared to learning-based alternatives, achieving 10.98 MAE in positioning accuracy while maintaining 80 ms processing efficiency for real-time operation. The proposed approach eliminates dependency on manual climbing and expert annotations, offering both theoretical advancements in stereo-environment perception for complex structures and immediate field applicability for safer power grid maintenance. This work represents the first formal proposal and comprehensive solution for autonomous FPL deployment in transmission tower inspection scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_74-Vision_Based_Autonomous_Localization_of_Fall_Protection_Anchor_Points.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Fuzzy Clustering Approach for Overlapping Community Detection via Structural Neighborhood Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160973</link>
        <id>10.14569/IJACSA.2025.0160973</id>
        <doi>10.14569/IJACSA.2025.0160973</doi>
        <lastModDate>2025-09-30T11:06:49.0500000+00:00</lastModDate>
        
        <creator>Faiza Riaz Khawaja</creator>
        
        <creator>Zuping Zhang</creator>
        
        <creator>Abdul Hadi Riaz</creator>
        
        <creator>Abdolraheem Khader</creator>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <creator>Ali Ahmed</creator>
        
        <subject>Fuzzy clustering; neighborhood similarity; extended modularity; overlapping community; complex networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Complex networks arise in many real-world contexts, such as social, biological, and neurological networks. A critical analytical challenge in such networks is community detection, which entails detecting groupings of nodes with dense internal connectivity. Numerous studies have been conducted on overlapping communities, wherein nodes may concurrently belong to multiple communities. In this paper, we propose an enhanced fuzzy clustering method for overlapping community detection based on neighborhood similarity. The core idea is to treat community membership as a continuous feature, so that nodes can belong to more than one community with different levels of affiliation. Our method consists of four stages: first, we extract local structural features; then, we construct a neighborhood similarity matrix based on common neighbors; next, we assign initial fuzzy memberships using an Enhanced Fuzzy C-Means approach; and last, we refine the memberships using a local optimization strategy. We evaluated our method on various real-world datasets of differing sizes and determined that it outperforms multiple state-of-the-art techniques, as indicated by overlapping modularity, F-score, and statistical significance assessments. The proposed method is a useful and scalable solution that is both more interpretable and more accurate.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_73-Enhanced_Fuzzy_Clustering_Approach_for_Overlapping_Community_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DAE-IDS: A Domain-Aware Ensemble Intrusion Detection System with Explainable AI for Industrial IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160972</link>
        <id>10.14569/IJACSA.2025.0160972</id>
        <doi>10.14569/IJACSA.2025.0160972</doi>
        <lastModDate>2025-09-30T11:06:49.0170000+00:00</lastModDate>
        
        <creator>Saifur Rahman</creator>
        
        <subject>Intrusion detection systems; IoT security; Explainable AI (XAI); class imbalance; frequency-aware ensemble; SHAP interpretability; domain-aware routing; confidence-based ensemble; Edge-IIoTset dataset; optimized random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The widespread deployment of Industrial Internet of Things (IIoT) devices creates an urgent need for effective intrusion detection systems (IDS). However, two critical challenges limit current approaches: severe class imbalance in network traffic data that hampers detection of rare attacks, and the “black-box” nature of machine learning models that undermines trust in security-critical applications. This study presents a Domain-Aware Ensemble Intrusion Detection System (DAE-IDS) equipped with explainable AI, addressing both challenges through frequency-aware ensemble learning and computationally efficient interpretability mechanisms. Using the Edge-IIoTset dataset, which contains 80 features across 12 classes, attacks were categorized into three frequency groups: majority attacks (5 classes), middle-frequency attacks (4 classes), and minority attacks (3 classes). Specialized Random Forest models (50 trees each, with class weighting) were tailored to each frequency group, and a domain-aware ensemble was then developed that routes traffic to the most appropriate specialized model based on attack frequency patterns. To enhance interpretability, SHAP explanations were added using an optimized approach that combines interventional TreeExplainer with instance subsampling (300 samples per model) and top-k feature prioritization. This optimization reduced SHAP computation time by 60% while maintaining full interpretability. The domain-aware ensemble achieved superior performance with a macro-F1 score of 1.00, demonstrating significant improvements in rare-attack detection compared to traditional approaches. SHAP analysis revealed attack-specific discriminative features, providing actionable insights for security analysts. This framework successfully bridges the accuracy-interpretability trade-off in IIoT security applications, enabling trustworthy intrusion detection suitable for resource-constrained edge environments. The attack-frequency specialization approach offers a practical solution for handling class imbalance while maintaining model transparency through efficient explainability mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_72-DAE_IDS_A_Domain_Aware_Ensemble_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced IoT Security Using Machine Learning Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160971</link>
        <id>10.14569/IJACSA.2025.0160971</id>
        <doi>10.14569/IJACSA.2025.0160971</doi>
        <lastModDate>2025-09-30T11:06:48.9700000+00:00</lastModDate>
        
        <creator>Rawan Yousef Bukhowah</creator>
        
        <creator>Alanoud Khaled Bu Dookhi</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Internet of Things; Artificial Intelligence; machine learning; deep learning; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper examines the enhancement of security measures for Internet of Things (IoT) systems through the application of Machine Learning (ML) techniques. As the number of IoT devices continues to rise, ensuring their security has become increasingly critical, given that conventional methods frequently struggle to identify advanced threats. This study explores the implementation of several ML algorithms, including Random Forest (RF), Decision Trees (DT), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN), to identify anomalies and intrusions within IoT networks. By conducting a comprehensive review of existing research and experiments, it highlights the effectiveness of ML in enhancing IoT security, with high detection rates for various threats, including botnet attacks, Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) incidents, and intrusion attempts. DoS/DDoS attacks and many types of botnets are among the most devastating attacks; they have been spreading for a long time and are still branching out in new ways against IoT networks. They can damage IoT services and prevent these services from being used by legitimate users. Therefore, securing IoT networks becomes a significant concern. The proposed model is used to continuously monitor network traffic for any deviations from standard patterns in IoT networks. This paper also stresses the necessity of utilising suitable datasets and feature selection techniques to enhance the efficacy of ML models. To train our model, we have utilized the IoT23 dataset, one of the most recent datasets covering many IoT scenarios and anomalous activities. Furthermore, we utilised two feature selection algorithms, the Correlation-based Feature Selection (CFS) algorithm and the Genetic Algorithm (GA), and then compared the results of these algorithms when training our model. The best performances were obtained with the DT and RF classifiers when trained with features selected by CFS. However, for training and testing time metrics, DT performance was superior across both feature selection methods.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_71-Enhanced_IoT_Security_Using_Machine_Learning_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control System of Ocean Wave Simulator Using PID-Salp Swarm Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160970</link>
        <id>10.14569/IJACSA.2025.0160970</id>
        <doi>10.14569/IJACSA.2025.0160970</doi>
        <lastModDate>2025-09-30T11:06:48.9400000+00:00</lastModDate>
        
        <creator>Affiani Machmudah</creator>
        
        <creator>Juchen Li</creator>
        
        <creator>Mahmud Iwan Solihin</creator>
        
        <creator>Chiong Meng Choung</creator>
        
        <creator>Wibowo Harso Nugroho</creator>
        
        <creator>Ahmad Syafiul Mujahid</creator>
        
        <creator>Sahlan</creator>
        
        <creator>Abdul Ghofur</creator>
        
        <subject>Marine simulation technologies; Stewart platform; ocean wave; control system; meta-heuristic optimization; Salp Swarm Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents a control system optimization of an ocean wave simulator using a meta-heuristic optimization. The proposed control system involves finding leg length trajectories by Inverse Kinematics (IK) to be used as references for a Proportional-Integral-Derivative (PID) controller. PID gains are tuned using a Salp Swarm Algorithm (SSA) with the Root Mean Square Error (RMSE) of leg position errors as a performance index. The Stewart platform dynamics are modeled in Simscape Multibody and integrated with a trajectory generator, an IK module, and a control system block diagram in Simulink models. The Simulink model of the Stewart platform dynamics is then invoked from the optimization procedure implemented in MATLAB code. Results show that the SSA outperforms other meta-heuristic methods, namely a Genetic Algorithm (GA) and a Particle Swarm Optimization (PSO), achieving the lowest fitness value, 16.8% and 8.7% lower than GA and PSO, respectively. Moreover, the SSA avoids the boundary-trapping issue encountered by the PSO, which becomes stuck at its upper bound. The SSA has successfully enhanced the simplified version of the PID control system, where the simplified PID-SSA scenario achieves better tracking error performance than the full PID-SSA configuration. The proposed approach contributes to the advancement of marine simulation technologies, supporting innovation in ocean engineering and sustainable maritime applications.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_70-Control_System_of_Ocean_Wave_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Convolutional Neural Network Algorithm for Indonesian Sign Language Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160969</link>
        <id>10.14569/IJACSA.2025.0160969</id>
        <doi>10.14569/IJACSA.2025.0160969</doi>
        <lastModDate>2025-09-30T11:06:48.9230000+00:00</lastModDate>
        
        <creator>Alvin Bintang Rebrastya</creator>
        
        <creator>Sumarni Adi</creator>
        
        <creator>Hanif Al Fatta</creator>
        
        <creator>Windha Mega Pradnya Dhuhita</creator>
        
        <creator>Ika Nur Fajri</creator>
        
        <creator>Muhammad Hanafi</creator>
        
        <subject>Indonesian Sign Language; hand sign recognition; image classification; Convolutional Neural Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Sign language serves as a primary mode of communication for individuals who are deaf or speech impaired, using hand gestures to convey meaning visually. While it facilitates communication among the deaf community, it presents challenges for interaction with those who rely on spoken language. This study aims to recognize hand signs representing the letters A to Y (excluding J and Z) in the Indonesian Sign Language (SIBI) using image-based input. A custom dataset was collected through personal photo shoots and used to train a Convolutional Neural Network (CNN) implemented in Python using the TensorFlow library. The study also focuses on optimizing the CNN architecture to achieve high classification accuracy. Evaluation using a confusion matrix on the test data resulted in an overall accuracy of 87.1%, while real-time testing achieved an accuracy of 90.25%. The number of convolutional filters and dropout rates was adjusted to prevent underfitting and overfitting during model training.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_69-Optimization_of_Convolutional_Neural_Network_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Weighted Scoring Model of Heuristic-Based Workload Scheduling Approaches in Edge-Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160968</link>
        <id>10.14569/IJACSA.2025.0160968</id>
        <doi>10.14569/IJACSA.2025.0160968</doi>
        <lastModDate>2025-09-30T11:06:48.8930000+00:00</lastModDate>
        
        <creator>Hasnae NOUHAS</creator>
        
        <creator>Abdessamad BELANGOUR</creator>
        
        <creator>Mahmoud NASSAR</creator>
        
        <subject>Edge computing; cloud computing; workload scheduling; heuristic algorithms; metaheuristics; scheduling optimization; ACO; PSO; HEFT; Tabu Search; GA; Greedy Resource-Aware Heuristics; Min-Min; Max-Min; WSM; Weighted Scoring Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Hybrid edge–cloud computing has emerged as a promising paradigm to meet the demands of latency-sensitive and resource-aware applications by combining the low-latency benefits of edge nodes with the scalability of cloud infrastructure. Efficient workload scheduling in such environments remains a critical challenge due to the heterogeneity of resources, dynamic network conditions, and diverse application requirements. This paper presents a comprehensive survey and comparative analysis of heuristic and metaheuristic scheduling algorithms tailored for edge–cloud systems. Seven representative algorithms, including Greedy Resource-Aware Heuristics (GRAH), Heterogeneous Earliest Finish Time (HEFT), Min-Min/Max-Min, Genetic Algorithm, Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Tabu Search, are evaluated against seven key criteria: latency awareness, energy efficiency, scalability, scheduling accuracy, implementation complexity, resource utilization, and adaptability. The evaluation is literature-driven and structured through a Weighted Scoring Model (WSM), which synthesizes findings from prior simulation-based and experiment-based studies into a comparative framework. Results indicate that Greedy Resource-Aware Heuristics offer the best trade-off for real-time, dynamic scenarios, while optimization-based methods, like GA and Tabu Search, provide superior accuracy and resource balance at the cost of increased complexity. The findings highlight critical trade-offs and offer guidance on selecting appropriate scheduling strategies based on application-specific goals and system constraints.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_68-A_Weighted_Scoring_Model_of_Heuristic_Based_Workload.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Review to Practice: A Comparative Study and Decision-Support Framework for Sentiment Classification Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160967</link>
        <id>10.14569/IJACSA.2025.0160967</id>
        <doi>10.14569/IJACSA.2025.0160967</doi>
        <lastModDate>2025-09-30T11:06:48.8630000+00:00</lastModDate>
        
        <creator>Kamal Walji</creator>
        
        <creator>Allae Erraissi</creator>
        
        <creator>Abdelali ZAKRANI</creator>
        
        <creator>Mouad Banane</creator>
        
        <subject>Sentiment analysis; text classification; machine learning; deep learning; transformer models; BERT; LSTM; random forest; hybrid approaches; model evaluation; interpretability; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Sentiment classification is a core task in natural language processing (NLP), enabling automated interpretation of opinionated text across domains, such as social media, e-commerce, and healthcare. While numerous models have been proposed—from classical machine learning algorithms to deep neural networks and transformer architectures—their adoption is often hindered by trade-offs in performance, interpretability, and computational cost. This paper presents a threefold contribution: 1) a structured review of over 30 peer-reviewed studies that compare sentiment classifiers across five analytical dimensions—accuracy, robustness, interpretability, efficiency, and context adaptability; 2) a lightweight empirical benchmark on the IMDb dataset, evaluating Na&#239;ve Bayes, linear SVM, and LSTM; and 3) a practitioner-oriented decision-support framework comprising a model selection flowchart and recommendation matrix. The experimental results show that SVM achieved the highest F1-score (0.8329), while Na&#239;ve Bayes provided strong performance with minimal training time, and LSTM underperformed under constrained conditions. We further highlight persistent challenges in benchmarking consistency, model explainability, and cross-lingual adaptability. The paper concludes with actionable future directions, including hybrid architectures, low-resource deployment strategies, and inclusive NLP systems for diverse user populations. To our knowledge, this is the first study that unifies systematic review, empirical validation, and practical decision tools in the field of sentiment classification.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_67-From_Review_to_Practice_A_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Crow Search Algorithm with Cooperative Island Strategy for Energy-Aware Routing in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160966</link>
        <id>10.14569/IJACSA.2025.0160966</id>
        <doi>10.14569/IJACSA.2025.0160966</doi>
        <lastModDate>2025-09-30T11:06:48.8470000+00:00</lastModDate>
        
        <creator>Xiangqian LI</creator>
        
        <creator>Xuemei ZHOU</creator>
        
        <subject>Wireless sensor networks; energy efficiency; cluster head selection; Crow Search; island model; routing; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Energy efficiency is a fundamental problem experienced by Wireless Sensor Networks (WSNs), as limited battery power affects network lifespan and reliability. This paper develops a novel energy-efficient routing protocol based on an Enhanced Crow Search Algorithm (ECSA) optimization approach to optimize cluster head selection. The proposed ECSA combines a cooperative island model and an adaptive tournament selection procedure to overcome traditional Crow Search Algorithm (CSA) disadvantages caused by low population diversity, a slow convergence rate, and undesirable exploration-exploitation tradeoffs. A multi-objective fitness function is constructed by analyzing residual energy and remaining battery life, distance to the base station, packet delivery rate, throughput, and path loss to achieve overall network design optimality. Sensor nodes are organized optimally to reduce power consumption and prolong the system&#39;s lifespan. The experimental results demonstrate that, for a network of 100 nodes, the proposed ECSA-based routing protocol significantly outperforms recent metaheuristic approaches. Specifically, ECSA achieved 22% lower optimization cost than CSA, 28.2% than Black Widow Optimization (BWO), 26.3% than Grey Wolf Optimizer (GWO), and 30% than Whale Optimization Algorithm (WOA). It further attained 4.8–10.8% higher throughput, 24.4–40.3% lower path loss, 4.5–13.7% higher packet delivery ratio, and 40.1–109.1% more alive nodes compared to these benchmarks. These results confirm that ECSA provides superior energy efficiency, reliability, and robustness for large-scale WSN deployments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_66-Enhanced_Crow_Search_Algorithm_with_Cooperative_Island_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis Revisited: A Multi-Metric Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160965</link>
        <id>10.14569/IJACSA.2025.0160965</id>
        <doi>10.14569/IJACSA.2025.0160965</doi>
        <lastModDate>2025-09-30T11:06:48.8170000+00:00</lastModDate>
        
        <creator>Kamal Walji</creator>
        
        <creator>Allae Erraissi</creator>
        
        <creator>Abdelali ZAKRANI</creator>
        
        <creator>Mouad Banane</creator>
        
        <subject>Sentiment analysis; natural language processing; machine learning; deep learning logistic regression; random forest; Naive Bayes; LSTM; CNN; Efficiency Score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Sentiment analysis is a fundamental task in natural language processing with wide-ranging applications, from customer feedback monitoring to healthcare and social media analytics. While recent research has mainly emphasized predictive accuracy, computational efficiency has remained largely overlooked, despite its importance for large-scale and real-time deployment. This study addresses this gap by conducting a comparative evaluation of classical machine learning algorithms (Logistic Regression, Na&#239;ve Bayes, Random Forest) and deep learning architectures [Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM)]. Experiments were carried out on two benchmark datasets, IMDB and Yelp Polarity, with evaluation based on accuracy, precision, recall, F1-score, training time, and a novel Efficiency Score. Results on IMDB show that Logistic Regression and LSTM both achieved 88% accuracy, but with radically different costs: Logistic Regression trained in 0.25 seconds, whereas LSTM required more than 2600 seconds. On Yelp Polarity, Logistic Regression improved to 91.6% accuracy, outperforming LSTM (86.2%) while remaining over 300 times faster. By integrating both predictive metrics and efficiency measures, the Efficiency Score highlighted the practical advantages of Logistic Regression and Na&#239;ve Bayes in resource-constrained environments. This dual evaluation framework demonstrates that classical models remain highly competitive when both accuracy and efficiency are considered, providing a practical alternative to computationally expensive neural architectures and offering practitioners clear guidelines for model selection under real-world constraints.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_65-Sentiment_Analysis_Revisited_A_Multi_Metric_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Control of Cyber-Physical Teleoperation Systems for Synchronized Healthcare Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160964</link>
        <id>10.14569/IJACSA.2025.0160964</id>
        <doi>10.14569/IJACSA.2025.0160964</doi>
        <lastModDate>2025-09-30T11:06:48.7830000+00:00</lastModDate>
        
        <creator>Mariem Mrad</creator>
        
        <creator>Mohamed Amine Frikha</creator>
        
        <subject>Sliding mode control; master-slave teleoperation system; cyber-physical system; healthcare supply chain management; inventory forecasting; synchronization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents a delay-dependent sliding mode control (SMC) framework for synchronization in a three-degree-of-freedom cyber-physical master–slave teleoperation system, with emphasis on healthcare supply chain management. Communication delays pose a critical challenge, often leading to instability, desynchronization, and inaccurate inventory records. Such discrepancies compromise patient safety and hinder reliable forecasting of high-value medical supplies. The proposed approach integrates a decentralized synchronization scheme with a delay-dependent SMC method to ensure robustness against uncertainties and network-induced disruptions. System constraints, including variable communication delays up to 0.4 s and measurement errors of 20%, are explicitly addressed. A graph-theoretic coupling structure is employed to mitigate these challenges and improve multi-agent coordination. Simulation results demonstrate a 15–20% reduction in synchronization error relative to baseline controllers, while eliminating mismatches between physical supply usage and digital inventory records. The findings confirm the controller’s practical utility in enhancing both clinical precision and healthcare supply chain efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_64-Robust_Control_of_Cyber_Physical_Teleoperation_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Random Forest for High-Accuracy Autism Spectrum Disorder Detection via Phenotypic Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160963</link>
        <id>10.14569/IJACSA.2025.0160963</id>
        <doi>10.14569/IJACSA.2025.0160963</doi>
        <lastModDate>2025-09-30T11:06:48.7530000+00:00</lastModDate>
        
        <creator>Mohamed Gawish</creator>
        
        <creator>Nada S. El-Askary</creator>
        
        <creator>Mohamed Mabrouk Morsey</creator>
        
        <creator>Abeer M. Mahmoud</creator>
        
        <creator>Mostafa Aref</creator>
        
        <creator>Taha Ibrahim El-Arif</creator>
        
        <subject>Mental healthcare; ASD; phenotypic data; ABIDE- II; random forest; hyperparameter optimizations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by persistent deficits in social communication and interaction, sometimes accompanied by repetitive motor behaviors or activities. Early diagnosis of this disorder is crucial for improving patients’ cognitive, emotional, and social development. Numerous studies on detecting autism exist; however, data limitations and imbalance affect their model generalization. This study proposes a new intelligent computational model for mental healthcare in individuals with ASD, utilizing machine learning (ML) to address these shortcomings. The proposed model enhances the random forest (RF) algorithm by setting optimal parameters and encompasses two key pipelines: 1) the data pipeline and 2) the learning pipeline. We first gathered a multi-source dataset and implemented integration and preprocessing via ML algorithms. The phenotypic data used were collected from 19 different sites and merged to ensure the diversity of the data used. The resulting dataset is subsequently fed into the learning pipeline, where a supervised ML algorithm is employed to create a trained computational model for detecting ASD. The model is based on tuning the RF algorithm by finding the optimal values for five key hyperparameters. After tuning the model, the accuracy of detecting ASD from phenotypic data reached 96.86%, with a sensitivity of 97.14% and a false positive rate of 3.39%. Comparing the tuned RF model with different ML models verified that tuning and optimizing RF achieves a preeminent classification accuracy for ASD detection using phenotypic data, as the accuracy of RF without tuning is 95.06%. In addition, to validate the tuned RF model’s real-world applicability, a separate qualitative study was conducted on five independent, narrative-based case studies, where the model accurately classified four of them by translating descriptive language into quantitative features.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_63-Optimized_Random_Forest_for_High_Accuracy_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Re-engineering Grid-Based Quorum Replication into Binary Vote Assignment on Cloud: A Scalable Approach for Strong Consistency in Cloud Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160962</link>
        <id>10.14569/IJACSA.2025.0160962</id>
        <doi>10.14569/IJACSA.2025.0160962</doi>
        <lastModDate>2025-09-30T11:06:48.7200000+00:00</lastModDate>
        
        <creator>Ainul Azila Che Fauzi</creator>
        
        <creator>Noor Ashafiqa</creator>
        
        <creator>Asiah Mat</creator>
        
        <creator>Syerina Azlin Md Nasir</creator>
        
        <creator>A. Noraziah</creator>
        
        <subject>Binary Vote Assignment in Cloud (BVAC); cloud database replication; fault tolerance; high availability; quorum-based replication; strong consistency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The growth of cloud computing has heightened the demand for replication strategies that ensure strong consistency, high availability, and low communication cost across distributed infrastructures. Existing systems such as DynamoDB, FoundationDB, and GeoGauss illustrate different design trade-offs but face limitations in balancing latency, correctness, and resilience under dynamic workloads. This study proposes the Binary Vote Assignment in Cloud (BVAC), a cloud-native replication algorithm re-engineered from the Binary Vote Assignment on Grid Quorum (BVAGQ). BVAC organizes replicas in a logical grid structure and employs binary voting weights with a Commit Coordination (BCC) mechanism to enforce quorum-validated commits, representing a form of quorum-based replication. This design maintains serializable consistency, minimizes replication conflicts, and achieves low communication cost through fixed-size quorums of three to five replicas. Experimental results demonstrate that BVAC maintains fault tolerance, achieves efficient cloud database replication, and sustains high data availability via multiple valid quorum paths. By avoiding the heavy coordination cost and infrastructure footprint of current systems, BVAC provides a scalable and cost-efficient replication strategy tailored for modern cloud workloads. The study establishes BVAC as an advancement in distributed data management and a foundation for future adaptive and multi-cloud replication frameworks.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_62-Re_engineering_Grid_Based_Quorum_Replication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>D.M.A.I.H.: Deepfake-Inspired Few-Shot Learning Approach with Stable Diffusion for Digital Mourning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160961</link>
        <id>10.14569/IJACSA.2025.0160961</id>
        <doi>10.14569/IJACSA.2025.0160961</doi>
        <lastModDate>2025-09-30T11:06:48.6900000+00:00</lastModDate>
        
        <creator>Btissam Acim</creator>
        
        <creator>Hamid Ouhnni</creator>
        
        <creator>Nassim Kharmoum</creator>
        
        <creator>Soumia Ziti</creator>
        
        <subject>Stable diffusion; few-shot learning; deepfake; Artificial Intelligence (AI); generative AI; digital mourning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Digital mourning (deuil num&#233;rique) is the use of digital and AI-based technologies to preserve, recontextualize, and extend the memory of deceased loved ones through personalized and meaningful virtual representations. The digital mourning process requires innovative technologies capable of preserving the memory of deceased loved ones in meaningful and humanized ways. This paper proposes a novel generative approach, D.M.A.I.H. (Digital Mourning with Artificial Intelligence for Healing), for digital grief, with a focus on moral support and the mental health of bereaved relatives, using Stable Diffusion with a few-shot learning adaptation mechanism. The system takes as input a small set of personal references (e.g., a portrait, contextual images such as the person’s home, and a short descriptive script) and outputs high-quality, photorealistic images of the deceased in different meaningful contexts, a process closely related to deepfake generation but redirected here toward therapeutic and commemorative purposes. Unlike traditional generative models requiring large datasets, few-shot personalization is leveraged to adapt Stable Diffusion to each individual with minimal data, enabling the generation of personalized digital albums. Experimental results show that the model consistently preserves identity in the images it produces, and contextual control ensures emotional resonance. In particular, identity similarity scores for the generated images ranged from 0.88 to 0.93, with an average score of 0.91, testifying to strong identity preservation across all outputs. This study lays a foundation for AI-based memorialization, balancing technological innovation with psychological comfort and concerns over privacy, authenticity, and cultural sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_61-DMAIH_Deepfake_Inspired_Few_Shot_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating YOLOv8 and IoT in a Computer Vision System for Child Detection in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160960</link>
        <id>10.14569/IJACSA.2025.0160960</id>
        <doi>10.14569/IJACSA.2025.0160960</doi>
        <lastModDate>2025-09-30T11:06:48.6600000+00:00</lastModDate>
        
        <creator>Modhawi Alotaibi</creator>
        
        <creator>Atheer Alruwaythi</creator>
        
        <creator>Sara Alenazi</creator>
        
        <creator>Maisaa Alsaedi</creator>
        
        <subject>Computer vision; Internet of Things; deep learning; YOLOv8; DeepSORT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In an era marked by technological advancements aimed at establishing smart cities, technology increasingly focuses on enhancing aspects related to crowd management. The widespread deployment of CCTV systems, combined with the integration of computer vision, has enabled accurate insights into crowd density estimation. Our research highlights the potential benefits of child detection across various domains that serve governments and business decision-making. Leveraging Internet of Things (IoT) devices to collect real-time data and employing artificial intelligence (AI) based on deep learning through computer vision is powerful in such domains. In this paper, we propose an IoT architecture that facilitates intelligence and decision-making in two phases: 1) a deep learning model with object detection and image segmentation capabilities using YOLOv8, and 2) a tracking/counting algorithm for estimating child density based on DeepSORT. Our implementation efficiently identified and classified children in extracted images with an accuracy rate of up to 98%. Also, our model outperformed the other two solutions proposed by previous studies in terms of mAP@50, Precision, and Recall metrics. The results provide valuable insights for businesses aiming to refine site selection and guide governments in improving urban planning and safety, thereby fostering sustainable and intelligent urban development.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_60-Integrating_YOLOv8_and_IoT_in_a_Computer_Vision_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Real Time Facial Emotions Recognition on Autistic Individuals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160959</link>
        <id>10.14569/IJACSA.2025.0160959</id>
        <doi>10.14569/IJACSA.2025.0160959</doi>
        <lastModDate>2025-09-30T11:06:48.6130000+00:00</lastModDate>
        
        <creator>Fatima Ezzahrae El Rhatassi</creator>
        
        <creator>Btihal El Ghali</creator>
        
        <creator>Najima Daoudi</creator>
        
        <subject>Facial emotion recognition; video; frames; spatial and temporal features; pretrained models; LSTM; autistic individuals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Communication and social interaction issues are frequently linked to autism, which can have an impact on quality of life, work, and education. Assistive technologies present opportunities to lessen these difficulties, especially those that facilitate individualized and encouraging engagement. Facial expression recognition (FER) is essential to these systems, but current methods are still inadequate for autism-specific situations even though they achieve high accuracy on benchmark datasets like CK+. This limitation arises because autistic people usually exhibit atypical, subtle, or ambiguous facial expressions, which deviate from the common patterns used to train traditional FER models. In this work, we propose a hybrid model that combines an LSTM network for temporal modeling of video sequences with three pretrained convolutional neural networks (EfficientNetB0, ResNet50, and MobileNetV2) for spatial feature extraction. Although the model performs well on CK+, its applicability to autism is still limited by the lack of relevant datasets and the use of artificial intelligence (AI)-generated videos rather than authentic recordings. These findings highlight the critical need for more comprehensive data and adaptive model designs tailored to autistic populations.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_59-Hybrid_Real_Time_Facial_Emotions_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Visualization Techniques for Duplicate Detection in Cancer Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160958</link>
        <id>10.14569/IJACSA.2025.0160958</id>
        <doi>10.14569/IJACSA.2025.0160958</doi>
        <lastModDate>2025-09-30T11:06:48.5970000+00:00</lastModDate>
        
        <creator>Nurul A. Emran</creator>
        
        <creator>Ruhaila Maskat</creator>
        
        <subject>Duplicate detection; data duplication; visualization; deduplication; TCGA; TCIA; NAACCR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>As clinical cancer research increasingly depends on large, diverse datasets, concerns about data duplication have grown. Duplicates can undermine data integrity, skew analytical results, and reduce the reproducibility of studies. This review explores how visualization can play a critical role in identifying and managing duplicates in non-image clinical cancer data. Drawing from literature in biomedical informatics, data quality, and visual analytics, it synthesizes current approaches and highlights key challenges. Using a scoping review methodology, we analyzed studies published over the past two decades, focusing on non-image clinical datasets. Studies were selected based on relevance to duplicate detection and visualization, excluding those centered on image or video data. Major datasets like The Cancer Genome Atlas (TCGA), The Cancer Imaging Archive (TCIA), and the North American Association of Central Cancer Registries (NAACCR) are examined to show how duplication occurs across genomic, clinical, and registry data. The review assesses existing visualization techniques based on their scalability, interactivity, integration with deduplication algorithms, and how well they address core data quality dimensions. While some tools offer scalable and interactive features, few provide clear visual representations of duplicates, especially those involving complex temporal and multidimensional patterns. Several methodological gaps are identified, including limited integration of data quality metrics, inadequate support for tracking changes over time, and a lack of standardized evaluation frameworks. To address these issues, the review advocates for the development of practical, user-friendly visualization tools that combine duplicate detection with key indicators of data quality. By offering a more complete and intuitive view of clinical datasets, such tools can help researchers and clinicians make better-informed decisions, ultimately improving the reliability and impact of cancer research. Bridging the gap between technical detection and visual understanding is essential for advancing data-driven healthcare and ensuring high-quality, reproducible outcomes.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_58-A_Review_of_Visualization_Techniques_for_Duplicate_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancements in Texture Analysis and Classification: A Bibliometric Review of Entropy-Based Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160957</link>
        <id>10.14569/IJACSA.2025.0160957</id>
        <doi>10.14569/IJACSA.2025.0160957</doi>
        <lastModDate>2025-09-30T11:06:48.5670000+00:00</lastModDate>
        
        <creator>Muqaddas Abid</creator>
        
        <creator>Muhammad Suzuri Hitam</creator>
        
        <creator>Rozniza Ali</creator>
        
        <creator>Muhammad Hammad</creator>
        
        <subject>Artificial intelligence; bibliometric review; entropy; research trends; texture analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Entropy-based texture analysis has gained significant attention in medical imaging, computer vision, and material science. The purpose of this paper is to provide a bibliometric review that maps the evolution, key contributors, research trends, and emerging themes of entropy-based texture analysis from 1980 to 2025. Using the Scopus database, 1,482 articles were initially retrieved and refined to 1,226 documents for analysis. VOSviewer was employed for bibliometric mapping, examining publication trends, authorship networks, keyword co-occurrence, and citation patterns. Results indicate a notable increase in research activity between 2004 and 2021, followed by a decline in recent years. The analysis highlights leading contributors, with significant work focusing on medical imaging applications such as radiomics and tumor heterogeneity assessment. While Shannon entropy remains widely used, newer measures like sample entropy, permutation entropy, and dispersion entropy are gaining attention. The study also identifies major research clusters, demonstrating the interdisciplinary nature of entropy-based texture analysis across medicine, engineering, and artificial intelligence. Despite database and language limitations, this review provides valuable insights into the field’s evolution and future directions, encouraging further interdisciplinary collaborations and advancements.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_57-Advancements_in_Texture_Analysis_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Decision Support in Financial Management Using Deep Learning-Based Stock Price Prediction Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160956</link>
        <id>10.14569/IJACSA.2025.0160956</id>
        <doi>10.14569/IJACSA.2025.0160956</doi>
        <lastModDate>2025-09-30T11:06:48.5330000+00:00</lastModDate>
        
        <creator>Layth Almahadeen</creator>
        
        <creator>Chinnapareddy Venkata Krishna Reddy</creator>
        
        <creator>Roopa Traisa</creator>
        
        <creator>Mukhamadiev Sanjar Isoevich</creator>
        
        <creator>Lavanya Kongala</creator>
        
        <creator>Janvi Anand Rathi</creator>
        
        <creator>Revati Ramrao Rautrao</creator>
        
        <subject>Stock price forecasting; deep learning; temporal fusion transformer; financial decision support; BiLSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Strategic decision support in financial management depends on a robust and intelligent system for accurately forecasting stock prices, which deep learning makes possible. Statistical models and shallow machine learning techniques tend to be ineffective at modeling the nonlinear relationships, sequential interdependencies, and time-dependent volatility typical of financial data; consequently, they produce poor predictions and untrustworthy investment choices. To overcome these constraints, we introduce a new hybrid deep learning architecture based on the Temporal Fusion Transformer (TFT) combined with Bidirectional Long Short-Term Memory (BiLSTM) networks. The proposed hybrid model enhances time-series forecasting by simultaneously utilizing attention mechanisms for explainability and sequence memory for richer temporal understanding. The model is trained and tested on the publicly available Stock Market Dataset on Kaggle, which includes stock histories from various companies. The whole process is carried out on the Python platform using TensorFlow along with relevant libraries for data preprocessing, feature scaling, and model training. The TFT-BiLSTM model surpasses conventional models with an accuracy of 93.4% and an F1-score of 94.2%, demonstrating its precision and generalization power. The system provides strategic benefits in financial planning and risk management. Financial analysts, investors, fintech firms, and portfolio managers may take advantage of our prediction system to make rational buy/sell judgments, minimize risks, and maximize asset allocations. By combining state-of-the-art deep learning models and public financial data, our framework illustrates that accurate stock price prediction can be an effective mechanism for supporting decision-making in financial markets.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_56-Strategic_Decision_Support_in_Financial_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping Elderly Residential Research During the Onset of Baby Boomer Aging: A Bibliometric Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160955</link>
        <id>10.14569/IJACSA.2025.0160955</id>
        <doi>10.14569/IJACSA.2025.0160955</doi>
        <lastModDate>2025-09-30T11:06:48.5170000+00:00</lastModDate>
        
        <creator>Keyi Xiao</creator>
        
        <creator>Han Wang</creator>
        
        <creator>Yan Ma</creator>
        
        <subject>Aging; housing for the elderly; long-term care; environment design; bibliometrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>As global population aging accelerates, housing for older adults has emerged as a critical interdisciplinary research topic. Understanding how academic attention has evolved in this field is essential for informing policy and guiding future research. This study conducted a bibliometric analysis of 2,141 English-language publications related to elderly residential indexed in the Web of Science Core Collection from 2002 to 2021. It systematically examined publication trends, leading countries and institutions, key subject areas, collaboration networks, and journal co-citation patterns. The results show that the United States, Australia, and China are the top contributors in terms of publication volume, while countries like Sweden and New Zealand demonstrate high research intensity on a per capita basis. Over time, the research focus has shifted from clinical geriatrics and nursing toward environmental sciences, urban planning, and public health, indicating an increasing interdisciplinary integration. Collaboration network analysis highlights the central roles of institutions in Australia and Hong Kong in facilitating international research partnerships. This study maps the global knowledge landscape of elderly residential research and provides a foundation for future policy development, interdisciplinary collaboration, and scholarly inquiry.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_55-Mapping_Elderly_Residential_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Recommender Systems Under Implicit Feedback and Class Imbalance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160954</link>
        <id>10.14569/IJACSA.2025.0160954</id>
        <doi>10.14569/IJACSA.2025.0160954</doi>
        <lastModDate>2025-09-30T11:06:48.4870000+00:00</lastModDate>
        
        <creator>Younes KOULOU</creator>
        
        <creator>Norelislam EL HAMI</creator>
        
        <subject>Recommender systems; XGBoost; implicit feedback; class imbalance; health insurance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Recommender systems (RS) in domains with implicit feedback and significant class imbalance, such as health insurance, face unique challenges in accurately predicting user preferences. This study proposes a machine learning framework leveraging tree-based ensemble methods to address these limitations. We conducted a comprehensive comparative analysis of algorithms, including Decision Trees, Random Forest, Gradient Boosting Machines, CatBoost, Extra Trees, HistGradient Boosting, and XGBoost, to identify the most effective approach for handling data skew and complex feature interactions. The model was trained on a real-world dataset from an international insurance broker, containing demographic profiles and purchase histories. After extensive preprocessing and class rebalancing, the models were optimized and evaluated on a separate test set. Among these, XGBoost demonstrated superior performance, achieving remarkable results with a precision of 97.23% and an accuracy of 97.51%. The model presented robust generalization capabilities and convergence stability, with no signs of overfitting. Concretely, these performances translate into an increased ability for insurers to reliably identify customer needs from limited behavioral data, thus improving the relevance of personalized offers. These findings highlight the efficacy of XGBoost on datasets with imbalanced implicit feedback and its potential as an effective solution for complex recommendation problems. This work contributes a practical and scalable framework for improving personalized recommendations in data-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_54-Machine_Learning_for_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel CNN-Based Feature Fusion Framework for Breast Cancer Ultrasound Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160953</link>
        <id>10.14569/IJACSA.2025.0160953</id>
        <doi>10.14569/IJACSA.2025.0160953</doi>
        <lastModDate>2025-09-30T11:06:48.4400000+00:00</lastModDate>
        
        <creator>Mobarak Zourhri</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Mohamed El Khaili</creator>
        
        <subject>Breast cancer classification; ultrasound imaging; Convolutional Neural Networks (CNN); transfer learning; model fusion; Grad-CAM; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Breast cancer remains a major global health concern and is among the leading causes of cancer-related deaths in women. Timely and precise diagnosis significantly improves treatment outcomes and patient survival rates. This paper presents a novel deep learning-based framework for breast cancer classification using ultrasound imagery, built upon the concatenation of two pre-trained Convolutional Neural Network (CNN) models: VGG19 and EfficientNetB0. By leveraging transfer learning and combining heterogeneous feature representations, the proposed method enhances the discriminative power of the extracted features. The model is evaluated on a publicly available benchmark ultrasound dataset and assessed through standard performance indicators, including accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to generate interpretability heatmaps, visually highlighting regions that contribute most to classification outcomes. The experimental findings reveal that the integrated architecture outperforms several existing approaches as well as individual CNN baselines. This study contributes to the growing field of AI-assisted medical diagnostics and demonstrates the effectiveness of model fusion in ultrasound-based breast cancer detection.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_53-A_Novel_CNN_Based_Feature_Fusion_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Biomechanical Squat and Deadlift Posture Analysis Using Google Machine Learning Kit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160952</link>
        <id>10.14569/IJACSA.2025.0160952</id>
        <doi>10.14569/IJACSA.2025.0160952</doi>
        <lastModDate>2025-09-30T11:06:48.4100000+00:00</lastModDate>
        
        <creator>Liew Yee Jie</creator>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Chaw Jun Kit</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <creator>Ayodeji Olalekan Salau</creator>
        
        <creator>Omolayo M. Ikumapayi</creator>
        
        <creator>Lim Siew Mooi</creator>
        
        <subject>Pose detection; squat and deadlift; Google ML Kit; fitness; posture analysis; emergency; public health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This project presents the development of a mobile application for real-time posture analysis during squat and deadlift exercises, using Google Machine Learning (ML) Kit pose detection. Proper exercise form is critical in preventing injuries, underscoring the need for systems that provide immediate feedback, an aspect often missing in existing fitness applications. This study addresses that gap by designing an app that not only guides users through motion analysis but also incorporates a safety mechanism to detect sudden falls. The system employs algorithms to process landmarks, calculate joint angles, count repetitions, and trigger emergency alerts. Two groups of bodybuilders confirmed the usability and accuracy in real-time biomechanical squat and deadlift posture analysis. These findings contribute to the field of AI-driven fitness by introducing a non-wearable, mobile-based solution for guided strength training. In addition, it offers societal benefits as an AI-powered fitness coach that aims to promote public health.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_52-Real_Time_Biomechanical_Squat_and_Deadlift_Posture_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Code Quality Through Automated Refactoring Using Transformer-Based Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160951</link>
        <id>10.14569/IJACSA.2025.0160951</id>
        <doi>10.14569/IJACSA.2025.0160951</doi>
        <lastModDate>2025-09-30T11:06:48.3770000+00:00</lastModDate>
        
        <creator>A. Sri Lakshmi</creator>
        
        <creator>E. S. Sharmila Sigamany</creator>
        
        <creator>Roopa Traisa</creator>
        
        <creator>Raman Kumar</creator>
        
        <creator>Karaka Ramakrishna Reddy</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>Aseel Smerat</creator>
        
        <subject>Automation; code refactoring; maintainability; transformer models; unit testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Maintaining high-quality source code is crucial for software reliability, scalability, and maintainability. Traditional refactoring methods, which involve manual code improvement or rule-based automation, often fall short due to their inability to understand the contextual semantics of code. These approaches are rigid, language-specific, and prone to inconsistencies, especially in large and complex codebases. As a result, developers spend significant time and effort identifying code smells, restructuring poorly written segments, and ensuring behavior preservation. To address these limitations, this study proposes an automated code refactoring framework powered by Transformer-based language models. Leveraging models such as CodeT5, which are pre-trained on massive code corpora, this approach captures both syntactic and semantic patterns to suggest intelligent, context-aware code transformations, achieving an accuracy of 97%. The model is fine-tuned using a curated dataset of original and refactored code pairs to learn efficient refactoring strategies. The methodology involves preprocessing raw source code, tokenizing it for model input, and generating improved versions of the code using the trained Transformer model. Output suggestions are validated using Abstract Syntax Tree (AST) analysis and unit testing to ensure behavioral equivalence. Code quality improvements are quantified using metrics such as maintainability index, cyclomatic complexity, and duplication rate. Experimental results demonstrate that the proposed method significantly enhances code readability and maintainability while reducing developer effort, outperforming traditional rule-based refactoring tools.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_51-Enhancing_Code_Quality_Through_Automated_Refactoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid RoBERTa-BiGRU-Attention Model for Accurate and Context-Aware Figurative Language Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160950</link>
        <id>10.14569/IJACSA.2025.0160950</id>
        <doi>10.14569/IJACSA.2025.0160950</doi>
        <lastModDate>2025-09-30T11:06:48.3300000+00:00</lastModDate>
        
        <creator>Sreeja Balakrishnan</creator>
        
        <creator>Rahul Suryodai</creator>
        
        <creator>S. Manochitra</creator>
        
        <creator>Jasgurpreet Singh Chohan</creator>
        
        <creator>Karaka Ramakrishna Reddy</creator>
        
        <creator>A. Smitha Kranthi</creator>
        
        <creator>Ritu Sharma</creator>
        
        <subject>Figurative language detection; sarcasm classification; RoBERTa-BiGRU-Attention model; contextual embeddings; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Figurative language, especially sarcasm, poses significant challenges for Natural Language Processing (NLP) models because of its implicit, context-sensitive nature. Both traditional and transformer models tend to find it difficult to identify these subtle forms, particularly when dealing with imbalanced datasets or without mechanisms for targeted interpretability. To overcome these shortcomings, this study proposes a hybrid deep learning architecture that integrates RoBERTa for rich contextual embeddings, Bidirectional Gated Recurrent Units (BiGRU) to capture bidirectional sequential relations, and an attention mechanism allowing the model to focus on the most informative parts of the input text. This integration enhances semantic understanding and classification accuracy compared to current solutions. The model is trained and tested on the benchmark News Headlines Dataset for Sarcasm Detection using binary cross-entropy loss minimized with Adam, along with dropout and learning rate scheduling to avoid overfitting. Experimental results show strong performance, attaining an accuracy of 92.4%, a precision of 91.1%, a recall of 93.2%, and an F1-score of 92.1%. These results outperform baseline techniques such as BiLSTM with attention and fine-tuned BERT variants. Implementation uses PyTorch and Hugging Face Transformers, ensuring reproducibility and extensibility. While effective, the model faces challenges with figurative expressions requiring external world knowledge or cultural context beyond pretrained embeddings. Future work aims to integrate external knowledge graphs and extend the model to multilingual and cross-domain scenarios. This hybrid framework advances figurative language detection, contributing to the broader goal of enhancing AI’s nuanced understanding and interpretability of human language.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_50-A_Hybrid_RoBERTa_BiGRU_Attention_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ProjectNavigator: A Software Project Management Approach Selection Assistant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160949</link>
        <id>10.14569/IJACSA.2025.0160949</id>
        <doi>10.14569/IJACSA.2025.0160949</doi>
        <lastModDate>2025-09-30T11:06:48.2830000+00:00</lastModDate>
        
        <creator>Lin Dongzhi</creator>
        
        <creator>Salfarina Abdullah</creator>
        
        <subject>Software projects; project management; recommendation tool; expert evaluation; usability testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In software projects, the choice of project management approach is crucial: selecting a suitable approach based on the specific project characteristics is key to the project's success. However, software projects are becoming increasingly complex, and project managers tend to rely on subjective judgment when selecting a project management approach. At present, project managers lack a systematic method or tool that can help them quickly and accurately select the most suitable project management approach to reduce project risks and improve the success rate. The objective of this research is to propose a tool that assists project managers in selecting the most suitable project management approach based on the specific project characteristics. This research collects and analyzes existing project management approaches and their applicable scenarios to extract relevant influencing factors. A recommendation tool is then developed to compare and recommend the most suitable project management approach. Finally, the usability and effectiveness of the tool are validated through expert evaluation and usability testing. With this tool, project managers can quickly analyze and compare the suitability of different management approaches and obtain specific guidance and suggestions, significantly improving the success rate of projects.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_49-ProjectNavigator_A_Software_Project_Management_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Spectrogram-Based Versus Raw Waveform-Based Deep Learning Models for Smoker Detection from Cough Audio</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160948</link>
        <id>10.14569/IJACSA.2025.0160948</id>
        <doi>10.14569/IJACSA.2025.0160948</doi>
        <lastModDate>2025-09-30T11:06:48.2530000+00:00</lastModDate>
        
        <creator>Widi Nugroho</creator>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Rinaldi Anwar Buyung</creator>
        
        <subject>Smoker detection; cough audio classification; deep learning; Audio Spectrogram Transformer; Wav2Vec2; vocal biomarker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The classification of cough sounds for smoker detection is a challenging audio processing task and a useful setting for comparing different data representation methods. This study presents a performance analysis of two prominent deep learning approaches: a spectrogram-based model, the Audio Spectrogram Transformer (AST), and a raw waveform-based model, Wav2Vec2. We used 7,561 smoker and 7,561 non-smoker samples from the CODA TB DREAM Challenge dataset. Both models were trained with five-fold cross-validation and data augmentation (SpecAugment for AST; noise, pitch, and time shifts for Wav2Vec2). The raw waveform-based Wav2Vec2 model achieved the best performance, with an average accuracy of 86.5%, an F1-score of 0.862, and an Area Under the Curve (AUC) of 0.945, completing training in approximately 49 minutes per fold. In contrast, the spectrogram-based AST model reached around 76-77% accuracy and an AUC of 0.85 in approximately 78 minutes per fold. These findings indicate that the raw waveform-based approach is significantly more effective and computationally efficient than the spectrogram-based approach for this task, offering a robust method for non-invasive smoker classification through the analysis of vocal biomarkers.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_48-Performance_Analysis_of_Spectrogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Aware ML Framework for Dynamic Query Formation in Multi-Dimensional Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160947</link>
        <id>10.14569/IJACSA.2025.0160947</id>
        <doi>10.14569/IJACSA.2025.0160947</doi>
        <lastModDate>2025-09-30T11:06:48.2200000+00:00</lastModDate>
        
        <creator>B Bhavani</creator>
        
        <creator>Haritha Donavalli</creator>
        
        <subject>Dynamic query formation; Approximate Query Processing (AQP); local differential privacy; contextual bandits; reinforcement learning; constrained randomization; multi-dimensional data exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Interactive data exploration at scale remains constrained by 1) weak adaptability to shifting query workloads, 2) limited and post hoc error guarantees, 3) poor scalability under dynamic, high-dimensional data, 4) sparse user guidance during query formulation, and 5) non-trivial system overheads from learned or probabilistic components. We propose an end-to-end, privacy-aware framework that dynamically forms SQL queries for multi-dimensional data using randomized signals derived from personal web usage. The method integrates: 1) on-device user modeling that converts browsing interactions into preference embeddings under local differential privacy; 2) a constrained-randomization layer that enforces coverage and diversity to avoid filter bubbles while remaining responsive to user intent; 3) a contextual bandit policy (with optional deep reinforcement learning extension) that selects or completes query templates using signals from user profiles, session context, and data synopses; and 4) an error-aware AQP executor combining stratified/pilot sampling, synopsis reuse, and confidence-interval gating with automatic sample escalation. This design directly addresses the above limitations: the bandit adapts online to workload shifts; the AQP layer provides pre-execution feasibility checks and per-query error control; synopsis reuse and AB-tree–style random sampling maintain low latency under updates; and a guidance module (predictive autocompletion with information-gain scoring) reduces user effort while preserving exploration diversity. To evaluate effectiveness, we introduce a privacy-preserving training regimen (federated updates over DP-noised profiles) and a novel benchmark protocol measuring time-to-insight, error compliance under differential privacy, session diversity, and latency against strong baselines. The result is an ML-driven exploration loop that achieves error-bounded interactivity, robust personalization, and scalable performance on evolving, high-dimensional datasets, while providing evaluation metrics that capture both user experience and privacy-preserving guarantees.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_47-Privacy_Aware_ML_Framework_for_Dynamic_Query_Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge-Integrated IoT and Computer Vision Framework for Real-Time Urban Flood Monitoring and Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160946</link>
        <id>10.14569/IJACSA.2025.0160946</id>
        <doi>10.14569/IJACSA.2025.0160946</doi>
        <lastModDate>2025-09-30T11:06:48.1600000+00:00</lastModDate>
        
        <creator>Rupesh Mandal</creator>
        
        <creator>Bobby Sharma</creator>
        
        <creator>Dibyajyoti Chutia</creator>
        
        <subject>Urban flood prediction; IoT sensor networks; edge computing; fuzzy logic fusion; hydraulic blockage detection; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Urban flash floods pose a critical threat to rapidly growing cities in India, where unplanned development, climate variability, and inadequate drainage amplify risks. Guwahati, in Northeast India, experiences recurrent inundation during monsoons, disrupting livelihoods and damaging infrastructure. This study presents an integrated IoT and AI-enabled framework for urban flood monitoring and prediction. A LoRa-based IoT sensor network was deployed to capture localized hydrological and meteorological parameters, overcoming the limitations of coarse weather APIs. Rainfall forecasting was implemented at the edge layer using Random Forest, XGBoost, CatBoost, and K-Nearest Neighbors, fused through a fuzzy logic model that achieved 92.4% accuracy, surpassing individual classifiers. In parallel, a computer vision pipeline detected drainage blockages from geotagged user images, with EfficientNetB0-U-Net achieving ~91% accuracy, outperforming ResNet50, InceptionV3, and MobileNetV2. By combining rainfall prediction, IoT sensing, and blockage detection, the proposed framework delivers a holistic, low-cost, and scalable early warning system, marking a novel contribution toward resilient urban flood management in resource-constrained settings.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_46-Edge_Integrated_IoT_and_Computer_Vision_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Assessment and Optimization Strategy for Brand Tourism Competitiveness in the Yangtze River Delta City Cluster Based on Entropy Weight-TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160945</link>
        <id>10.14569/IJACSA.2025.0160945</id>
        <doi>10.14569/IJACSA.2025.0160945</doi>
        <lastModDate>2025-09-30T11:06:48.1270000+00:00</lastModDate>
        
        <creator>Dongmei Wang</creator>
        
        <creator>Daoyi Wu</creator>
        
        <subject>Yangtze River Delta City Cluster; brand tourism competitiveness; entropy weight-TOPSIS; dynamic assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In the context of the integrated, high-quality development of the Yangtze River Delta City Cluster (YRDCC), brand tourism competitiveness is a key indicator of cities’ attractiveness and regional synergy. However, most existing studies focus on static comparisons and fail to dynamically assess competitiveness trends among cities. This study uses 27 cities in the YRDCC from 2019 to 2023 as a sample and applies the entropy weight-TOPSIS method for dynamic analysis of brand tourism competitiveness. This method integrates objective weights and relative performance across multiple indicators, enabling a comprehensive identification of city differences in resource allocation, brand communication, and service capacity. The findings reveal that Shanghai and Hangzhou lead in brand tourism competitiveness due to their strong economic foundations, rich tourism resources, and continuous brand development, playing a regional demonstration role. Suzhou and Nanjing have solid foundations but require improvements in brand internationalization and tourism experience. In contrast, Chuzhou and Chizhou lag behind due to insufficient industrial support, weak infrastructure, and low brand recognition. The study recommends enhancing brand tourism competitiveness by strengthening regional cooperation, promoting differentiated development, cultivating local brand identities, and advocating for green tourism, thereby providing a sustainable development model and empirical support for tourism development in China’s city clusters.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_45-Dynamic_Assessment_and_Optimization_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Fuzzy–PPO Control for Precision UAV Spraying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160944</link>
        <id>10.14569/IJACSA.2025.0160944</id>
        <doi>10.14569/IJACSA.2025.0160944</doi>
        <lastModDate>2025-09-30T11:06:48.0970000+00:00</lastModDate>
        
        <creator>Ahmad B. Alkhodre</creator>
        
        <creator>Adnan Ahmed Abi Sen</creator>
        
        <creator>Yazed Alsaawy</creator>
        
        <creator>Nour Mahmoud Bahbouh</creator>
        
        <creator>Mohamed Benaida</creator>
        
        <subject>UAVs; precision agriculture; UAV spraying; fuzzy logic control; reinforcement learning; Proximal Policy Optimization (PPO); hybrid control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Precision agriculture increasingly relies on autonomous UAVs for tasks, such as crop monitoring and targeted pesticide spraying. However, maintaining stable flight and precise spray delivery under varying payloads and wind disturbances remains challenging. This paper proposes a hybrid control architecture that combines interpretable Mamdani fuzzy logic controllers with a deep reinforcement learning (DRL) agent (Proximal Policy Optimization, PPO). The fuzzy controllers encode expert-crafted rules for baseline altitude and attitude stabilization, while the PPO agent adaptively adjusts setpoints to optimize spray coverage and energy efficiency. We train the agent in a realistic PyBullet simulator with dynamic payload and wind conditions. In simulated precision-spraying trials, our hybrid controller outperformed both a conventional PID-based controller and a pure PPO controller. Specifically, it achieved roughly 2–3&#215; faster disturbance rejection, near-zero overshoot, and ~30% faster settling than the baselines, resulting in more uniform coverage and reduced pesticide use. These results demonstrate that fusing fuzzy logic with deep PPO yields a UAV spray controller that is both high-performance and robust for precision agriculture applications.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_44-Hybrid_Fuzzy_PPO_Control_for_Precision_UAV_Spraying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Cybersecurity Programs in Small and Medium Enterprises (SMEs): A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160943</link>
        <id>10.14569/IJACSA.2025.0160943</id>
        <doi>10.14569/IJACSA.2025.0160943</doi>
        <lastModDate>2025-09-30T11:06:48.0670000+00:00</lastModDate>
        
        <creator>Eliana Ludin</creator>
        
        <creator>Masnizah Mohd</creator>
        
        <creator>Fariza Fauzi</creator>
        
        <subject>Cybersecurity program; Security Education; Training; Awareness (SETA); systematic evaluation; Malaysian SMEs; NVivo; PRISMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Small and Medium Enterprises (SMEs) in Malaysia face increasing cybersecurity risks, yet their adoption of Security Education, Training, and Awareness (SETA) programs remains limited. Unlike prior reviews that focus broadly on SMEs, this study contributes novelty by systematically synthesizing empirical evidence within the Malaysian context. Guided by the PRISMA framework and supported by NVivo analysis, 57 studies published between 2019 and 2025 were examined to classify both the importance of SETA and the barriers to its implementation. The thematic analysis revealed six recurring domains of challenges: financial constraints, human resource limitations, management support, cultural resistance, technical infrastructure, and legal/data protection. Beyond consolidating fragmented insights, the study provides a taxonomy of challenges and practical recommendations such as modular training, role-specific awareness, and leveraging national initiatives. While this review offers structured guidance for policymakers and practitioners, its descriptive nature without empirical SME validation is a limitation, highlighting the need for future applied studies.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_43-Enhancing_Cybersecurity_Programs_in_Small_and_Medium_Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Verification of a Blockchain-Based Security Model for Personal Data Sharing Using the Dolev-Yao Model and ProVerif</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160942</link>
        <id>10.14569/IJACSA.2025.0160942</id>
        <doi>10.14569/IJACSA.2025.0160942</doi>
        <lastModDate>2025-09-30T11:06:48.0330000+00:00</lastModDate>
        
        <creator>Godwin Mandinyenya</creator>
        
        <creator>Vusumuzi Malele</creator>
        
        <subject>Blockchain; security model; Chaincode-as-a-Service; InterPlanetary File System; Intel Software Guard Extensions; Zero-Knowledge Proofs ProVerif; formal verification; Dolev-Yao</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Secure personal data sharing remains a critical challenge in decentralized systems due to concerns over privacy, compliance, and trust. This paper presents the formal verification of a Blockchain-Based Security Model (BSM) designed to address these challenges through a multi-layered architecture. The proposed model integrates Chaincode-as-a-Service (CCaaS) on Hyperledger Fabric to ensure modular, maintainable, and scalable execution of smart contracts. A Flask-based API serves as the secure gateway for data operations and identity management. Sensitive data is stored off-chain using InterPlanetary File System (IPFS), preserving decentralization while minimizing on-chain bloat. Access control is enforced using efficient cryptographic techniques, while Intel SGX (or simulated enclaves) safeguards secure data processing and decryption within trusted execution environments. To further enhance privacy guarantees, Zero-Knowledge Proofs (ZKPs) are optionally integrated to enable verifiable claims without disclosing raw data. For assurance of correctness and security, the BSM is formally modeled using the Dolev-Yao attacker model and verified through ProVerif, focusing on key security properties such as confidentiality, integrity, authentication, and accountability. The findings confirm that the proposed model satisfies stringent security goals and is robust against symbolic adversaries. This work contributes a verifiable and extensible framework for privacy-preserving data sharing in sectors such as healthcare, finance, and government. To the best of our knowledge, this is among the first works to formally verify a blockchain-based security model that simultaneously integrates modular chaincode execution (CCaaS), trusted hardware enclaves (Intel SGX), decentralized off-chain storage (IPFS), and optional Zero-Knowledge Proofs (ZKPs) within a unified framework for personal data sharing.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_42-Formal_Verification_of_a_Blockchain_Based_Security_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentence-Level Indonesian Sign Language (BISINDO) Recognition Using 3D CNN-LSTM and 3D CNN-BiLSTM Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160941</link>
        <id>10.14569/IJACSA.2025.0160941</id>
        <doi>10.14569/IJACSA.2025.0160941</doi>
        <lastModDate>2025-09-30T11:06:48.0030000+00:00</lastModDate>
        
        <creator>Katriel Larissa Wiguna</creator>
        
        <creator>Rojali</creator>
        
        <subject>Sign Language Recognition; BISINDO (Indonesian Sign Language); 3D Convolutional Neural Network (3D CNN); Long Short-Term Memory (LSTM) Network; Bidirectional Long Short-Term Memory (BiLSTM); Connectionist Temporal Classification (CTC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Sign Language Recognition (SLR) has been an active area of research, but sentence-level SLR remains relatively underexplored. While most studies focus on recognizing individual signs, understanding full sentences presents greater challenges. This research proposes a sentence-level SLR approach that combines 3D Convolutional Neural Networks (3D CNN) for spatio-temporal feature extraction with sequential modeling using Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM). Connectionist Temporal Classification (CTC) is also used to enable training without word-level annotations. In this study, we used the Indonesian Sign Language (BISINDO) dataset, specifically the DKI Jakarta version, consisting of 900 videos representing 30 sentences, which was expanded to 3600 videos through data augmentation techniques such as speed variation and brightness adjustments. All videos underwent preprocessing to ensure data quality, and Bayesian Optimization was applied for hyperparameter tuning to obtain optimal configurations for each model. Both models were trained with CTC loss and evaluated using Word Error Rate (WER). The 3D CNN-LSTM model achieved a WER of 59.21%, while the 3D CNN-BiLSTM performed significantly better, with a WER of 2.77%. Despite these promising results, the models’ ability to generalize across different signers may require further research, as the dataset used in this research involved only a single signer.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_41-Sentence_Level_Indonesian_Sign_Language_BISINDO_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of YOLOv8 + DeepSORT for Vehicle Tracking: HOTA and CLEAR-Based Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160940</link>
        <id>10.14569/IJACSA.2025.0160940</id>
        <doi>10.14569/IJACSA.2025.0160940</doi>
        <lastModDate>2025-09-30T11:06:47.9700000+00:00</lastModDate>
        
        <creator>I Nyoman Eddy Indrayana</creator>
        
        <creator>Made Sudarma</creator>
        
        <creator>I Ketut Gede Darma Putra</creator>
        
        <creator>Anak Agung Kompiang Oka Sudana</creator>
        
        <subject>Multi-object tracking; higher order tracking accuracy metric; CLEAR Metric; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper offers a thorough comparative investigation of the performance of a vehicle multi-object tracking system, incorporating various versions of the YOLOv8 detector (from ‘n’ to ‘x’) alongside the DeepSORT tracking algorithm. This study systematically assesses the impact of the trade-off between detector speed and accuracy on tracking metrics, utilising a real-world traffic video dataset from Bali. The assessment is performed utilising two fundamentally distinct metric frameworks: the traditional CLEAR metric (which includes MOTA) and the contemporary Higher Order Tracking Accuracy (HOTA) metric. The findings indicate that although the larger YOLOv8 models markedly enhance detection recall, particularly for smaller and more difficult objects such as motorcycles, tracking issues persist. The dual-metric study provides significant insights: the HOTA metric demonstrates that car tracking exhibits greater associative stability (higher AssA scores) than motorbike tracking, which frequently experiences track fragmentation. In contrast, the detection-biased MOTA metric produces somewhat paradoxical outcomes, as motorbikes receive elevated scores due to enhanced detection accuracy (fewer false positives), thereby obscuring deficiencies in tracking consistency. This study concludes that HOTA offers a more comprehensive evaluation by differentiating between detection and association performance, thus demonstrating that detection-biased metrics like MOTA can yield an imperfect representation of actual tracking ability. These findings underscore the necessity of matching detector architecture and evaluation criteria with specific application requirements, particularly in safety-critical systems where identity consistency is essential.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_40-Comprehensive_Analysis_of_YOLOv8_DeepSORT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Incremental LSTM Ensemble for Online Intrusion Detection in Software-Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160939</link>
        <id>10.14569/IJACSA.2025.0160939</id>
        <doi>10.14569/IJACSA.2025.0160939</doi>
        <lastModDate>2025-09-30T11:06:47.9400000+00:00</lastModDate>
        
        <creator>Raed Basfar</creator>
        
        <creator>Mohamed Y. Dahab</creator>
        
        <creator>Abdullah Marish Ali</creator>
        
        <creator>Fathy Eassa</creator>
        
        <creator>Kholoud Bajunaied</creator>
        
        <subject>Software-defined networking; intrusion detection; incremental learning; LSTM ensemble; concept drift; weighted voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Software-Defined Networking (SDN) promises flexible control of network flows but also exposes controllers to rapidly shifting attack surfaces. Conventional intrusion-detection engines, trained once and deployed statically, falter when traffic patterns drift. We introduce an adaptive intrusion detection system that couples a mini-batch incremental learning scheme with a five-member ensemble of Long Short-Term Memory (LSTM) classifiers. Each model trains on successive data partitions drawn from the InSDN dataset, while a lightweight tracker monitors accuracy and “age.” A weighted-voting rule penalizing stale models in proportion to their lifetime lets the ensemble down-rank obsolete learners without full retraining. When the tracker flags slippage, only the most dated models are refreshed, limiting computational load and preserving service continuity. Across four streaming iterations, the system sustains a mean detection accuracy of 95.8% and a 3.2% false-positive rate, recovering quickly from concept drift that drives individual models to baseline performance. Comparative analysis against three recent SDN IDS baselines shows improvements of up to 14 percentage points in accuracy and 0.48 in F-score, without sacrificing latency (≈50 ms). These results indicate that modest, well-timed retraining rather than continual online updates can keep an SDN IDS both nimble and efficient. The approach offers a practical roadmap for securing programmable networks that evolve by the hour.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_39-An_Incremental_LSTM_Ensemble_for_Online_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Attention-Enhanced GRU Models with STL Decomposition for Food Loss Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160938</link>
        <id>10.14569/IJACSA.2025.0160938</id>
        <doi>10.14569/IJACSA.2025.0160938</doi>
        <lastModDate>2025-09-30T11:06:47.9100000+00:00</lastModDate>
        
        <creator>Ru Poh Tan</creator>
        
        <creator>Siew Mooi Lim</creator>
        
        <creator>Kuan Yew Leong</creator>
        
        <creator>Shee Chia Lee</creator>
        
        <creator>Siaw Hong Liew</creator>
        
        <creator>Jun Kit Chaw</creator>
        
        <subject>GRU; food loss forecasting; attention mechanism; seasonal decomposition; STL; loess; time series; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Forecasting food loss with high accuracy is crucial for improving global food security, optimising supply chains, and supporting sustainability goals. However, conventional time series models and standard deep learning techniques, including recurrent neural networks (RNNs), often fall short in handling the irregularity, seasonality, and complexity inherent in food loss data. While Gated Recurrent Units (GRUs) offer advantages over traditional RNNs, such as mitigating vanishing gradients, they still face limitations in modelling long-range dependencies and noisy sequences. This paper reviews recent advancements aimed at overcoming these challenges by enhancing GRU-based models with attention mechanisms and seasonal-trend decomposition using Loess (STL). Evidence from related domains shows that attention mechanisms improve the capture of long-term dependencies and interpretability, while STL decomposition strengthens stability and accuracy by isolating seasonal and trend components. Hybrid GRU models that combine both approaches consistently outperform standalone methods, highlighting their promise for robust and interpretable forecasting. Though underexplored in the context of food loss, this paper identifies the research gap and advocates for domain-specific GRU–attention–STL architectures, offering a foundation for future empirical work to enable timely interventions and foster resilient, data-driven food systems.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_38-A_Review_of_Attention_Enhanced_GRU_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing the Effectiveness of MCR-KSM for Waiting Waste Reduction: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160937</link>
        <id>10.14569/IJACSA.2025.0160937</id>
        <doi>10.14569/IJACSA.2025.0160937</doi>
        <lastModDate>2025-09-30T11:06:47.8770000+00:00</lastModDate>
        
        <creator>Nargis Fatima</creator>
        
        <creator>Sumaira Nazir</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Modern code review; wastes; waiting waste; software quality; automated code review; sustainable software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Modern Code Review (MCR) is a well-known and widely adopted quality assurance activity for developing quality software. Although it is a core activity for improving code quality, it generates various types of waste, including waiting waste, defect waste, and composite solution waste. Among these, waiting waste is the most critical, leading to mental distress, delayed code merges, and project delays. Researchers have made efforts to reduce the production of waiting waste by providing various automated code review tools, techniques, and models; one of these is the MCR Knowledge Sharing Model (MCR-KSM). The model claims to support sustainable software engineering by minimizing waiting waste during MCR activities. This study aims to evaluate the effectiveness of MCR-KSM with respect to the reduction of waiting waste produced during MCR activities. An experimental methodology is employed for this purpose. This paper presents the experimental investigation approach along with the results. The experiment was conducted in dual sessions with 28 graduate students having similar educational and industrial experience. Tools and techniques such as the SPSS paired t-test and value stream mapping are used for experimental data management and analysis. The results reveal that the model significantly reduces the production of waiting waste. The study has implications for investigators seeking to extend the research with different parameters and settings.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_37-Assessing_the_Effectiveness_of_MCR_KSM_for_Waiting_Waste_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analytical Review of Environmental and Machine Learning Approaches in Dengue Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160936</link>
        <id>10.14569/IJACSA.2025.0160936</id>
        <doi>10.14569/IJACSA.2025.0160936</doi>
        <lastModDate>2025-09-30T11:06:47.8630000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Juan Chavez-Perez</creator>
        
        <creator>Eddier Flores-Idrugo</creator>
        
        <creator>Luis Chauca-Huete</creator>
        
        <subject>Public health analytics; machine learning models; disease prediction; environmental risk factors; dengue surveillance; health data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In recent years, dengue has gained prominence as a priority public health challenge due to its increasing incidence and spread. The main objective of this systematic literature review (SLR) is to explore the use of environmental factors and machine learning (ML) techniques to combat dengue, based on studies published between 2020 and 2024. For this purpose, 56 studies were selected from a balanced distribution of PubMed, Web of Science, Scopus, and SpringerLink, under the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. The results show that climatological variables such as temperature difference, humidity concentration, and rainfall volume are conditioning factors in the spread of the dengue virus. As for ML models, Random Forest and Support Vector Machines proved to be more accurate than traditional methods in detecting risk areas. The highest scientific production corresponded to the year 2024, with 25% of the studies, while India, with 14.29%, and the United States, with 12.50%, stood out as the countries with the highest contribution. In conclusion, ML techniques have enormous potential for strengthening early detection systems and optimizing resources in high-risk areas, but further research is needed in this field given the limited data availability and replicability of models.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_36-An_Analytical_Review_of_Environmental_and_Machine_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Perceived Usefulness and Perceived Ease of Use as Predictors of Attitude Toward IoT Adoption Among Rice Farmers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160935</link>
        <id>10.14569/IJACSA.2025.0160935</id>
        <doi>10.14569/IJACSA.2025.0160935</doi>
        <lastModDate>2025-09-30T11:06:47.8470000+00:00</lastModDate>
        
        <creator>Hermin Arrang</creator>
        
        <creator>Sek Yong Wee</creator>
        
        <creator>Nazrulazar Bin Bahaman</creator>
        
        <creator>Jack Febrian Rusdi</creator>
        
        <subject>IoT; attitude towards IoT; Perceived Usefulness; Perceived Ease of Use; technology adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study investigates key drivers influencing rice farmers’ attitudes toward Internet of Things (IoT) adoption in Indonesia, using the Technology Acceptance Model (TAM) as an analytical lens. Specifically, it evaluates the predictive roles of Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), both of which are posited to shape users’ Attitude Toward Usage (ATT). Survey data were obtained from 62 smallholder farmers in Bandung Regency and examined through Partial Least Squares Structural Equation Modeling (PLS-SEM). The study confirms that both PU and PEOU significantly contribute to the formation of favorable farmer attitudes, with the model showing high explanatory strength (R&#178; = 0.723). PU captures the perceived benefit of IoT for improving productivity and efficiency, while PEOU reflects user-friendly design and ease of integration into existing agricultural routines. These findings extend TAM’s validity in rural, low-tech farming contexts and offer actionable insights for technology developers, government agencies, and agricultural organizations seeking to foster digital transformation. By confirming the relevance of usability and perceived value, this study supports targeted design and communication strategies that align with farmers’ expectations. It also lays the groundwork for broader ASEAN agricultural resilience efforts by emphasizing inclusive technology pathways. Future research may incorporate sociocultural dimensions and systemic barriers to expand the model&#39;s applicability in diverse farming environments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_35-Perceived_Usefulness_and_Perceived_Ease_of_Use_as_Predictors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritizing Non-Functional Requirements and Influencing Factors for API Quality Framework: An Industry Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160934</link>
        <id>10.14569/IJACSA.2025.0160934</id>
        <doi>10.14569/IJACSA.2025.0160934</doi>
        <lastModDate>2025-09-30T11:06:47.8170000+00:00</lastModDate>
        
        <creator>Aumir Shabbir</creator>
        
        <creator>Aziz Deraman</creator>
        
        <creator>Mohamad Nor Bin Hassan</creator>
        
        <creator>Kamal Uddin Sarker</creator>
        
        <creator>Shahid Kamal</creator>
        
        <subject>Non-Functional Requirements (NFRs); Application Programming Interface (API); software development practices; API quality; Non-Functional Requirement Quality Framework for APIs (NFRQF-API); ISO/IEC 25010</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Application Programming Interface (API) management is currently a trending research area; however, APIs require careful attention to Non-Functional Requirements (NFRs) to ensure system performance, maintainability, security, and resiliency. The software industry struggles to maintain API quality, especially NFRs, due to a focus on functional aspects in standards like the OpenAPI Specification (OAS). Similarly, standards such as ISO/IEC 25010:2023 evaluate the quality of general software but offer limited guidance on addressing API challenges. Based on the industry perspective, this paper prioritizes the most critical quality attributes and their influencing factors for APIs, supporting the development of a Non-Functional Requirement Quality Framework for APIs (NFRQF-API). We adopted ISO/IEC 25010 as our reference standard and surveyed industry experts. Eleven NFRs are included in the survey: nine from ISO/IEC 25010 and two additional attributes, Observability and Resiliency, identified through the literature review. A structured survey tool was validated, pilot-tested, and distributed to 38 API practitioners, with data analyzed through IBM Statistical Package for the Social Sciences (IBM SPSS). The analysis demonstrates strong internal consistency (α &gt; 0.7) across items within each group. Additionally, Maintainability (4.29) and Resiliency (4.20) were identified as core NFRs, while Interaction Capability (3.18), Flexibility (3.18), and Safety (2.93) scored lower based on their mean values. The remaining six NFRs are moderately significant, highlighting their ongoing importance. These findings, based on NFR classification, establish a solid foundation for developing a Quality Framework for APIs aligned with modern software engineering requirements. The article helps researchers and practitioners build a strong understanding of NFR prioritization, a crucial step for API quality management.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_34-Prioritizing_Non_Functional_Requirements_and_Influencing_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Smart City Safety: Deep Learning Approaches for Automatic Vehicle Accident Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160933</link>
        <id>10.14569/IJACSA.2025.0160933</id>
        <doi>10.14569/IJACSA.2025.0160933</doi>
        <lastModDate>2025-09-30T11:06:47.7830000+00:00</lastModDate>
        
        <creator>Ahad AlNemari</creator>
        
        <creator>Shahad AlOtaibi</creator>
        
        <creator>Majd Jada</creator>
        
        <creator>Aeshah AlHarthi</creator>
        
        <creator>Sara AlThuwaybi</creator>
        
        <creator>Wojoud AlNemari</creator>
        
        <creator>Nadan Marran</creator>
        
        <creator>Abdulmajeed Alsufyani</creator>
        
        <subject>Accident detection; deep learning algorithms; ResNet-101; traffic safety; YOLOv5; YOLOv9</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Traffic accidents have significant societal impacts due to the substantial human and material losses they cause. Recently, numerous AI-based traffic surveillance technologies, such as Saher, have been implemented to improve traffic safety in Saudi Arabia. The prompt detection of vehicle accidents is crucial for enhancing the response time of accident management systems, thereby reducing the number of injuries resulting from collisions. This study evaluates various deep learning algorithms to determine the most effective method for detecting and classifying car accidents. Multiple deep-learning models were trained and tested using an extensive dataset of car accident images, allowing for the accurate identification and classification of different types of accidents. Among the six pre-trained models analyzed, ResNet-101 achieved the highest accuracy, with a classification rate of 93%. For accident detection, YOLOv5 attained a mean Average Precision (mAP) of 97.8%, indicating superior performance compared to YOLOv8 and YOLOv9, and highlighting its capability to effectively detect accidents in video footage. The research’s primary goal is to enhance urban safety by enabling rapid accident detection, which supports timely emergency responses, minimizes fatalities, and contributes to the development of safer and more resilient smart cities.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_33-Enhancing_Smart_City_Safety_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Speech Enhancement with Generative Adversarial Network-Autoencoder: A Robust Adversarial Autoencoder Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160932</link>
        <id>10.14569/IJACSA.2025.0160932</id>
        <doi>10.14569/IJACSA.2025.0160932</doi>
        <lastModDate>2025-09-30T11:06:47.7530000+00:00</lastModDate>
        
        <creator>Mandar Diwakar</creator>
        
        <creator>Brijendra Gupta</creator>
        
        <subject>Speech enhancement; Generative Adversarial Network (GAN); Autoencoder (AE); MFCC; noise robustness; adversarial training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In day-to-day life, speech signals are often noisy and distorted by background noise, making them unsuitable for direct use in audio-operated applications. Using these noisy voice signals can degrade the performance of speech communication systems, and a huge number of applications today take voice as input. Our study focuses on speech enhancement through a combination of Generative Adversarial Networks (GAN) and Autoencoders (AE). The required features are extracted using the MFCC algorithm from the MUSAN dataset, which contains noisy speech; the extracted MFCC features form paired samples of clean and noisy speech. The main architecture combines a GAN with an AE: the generator is trained to reconstruct clean speech features from noisy speech inputs, while the discriminator is trained to distinguish real clean samples from samples produced by the generator. This adversarial training approach continuously improves the generator, producing higher-quality and more intelligible speech. Our results show that the model performs very well and is robust across multiple types of noise samples. The AE is used for feature reconstruction and the GAN for adversarial sample generation, and this combination of GAN and AE proves to be an effective solution for speech enhancement in noisy and distorted environments.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_32-Advancing_Speech_Enhancement_with_Generative_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating a Trading Strategy Using Candlestick Patterns with Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160931</link>
        <id>10.14569/IJACSA.2025.0160931</id>
        <doi>10.14569/IJACSA.2025.0160931</doi>
        <lastModDate>2025-09-30T11:06:47.7070000+00:00</lastModDate>
        
        <creator>Hussaina Bala Malami</creator>
        
        <creator>Badamasi Imam Ya’u</creator>
        
        <creator>Fatima Umar Zambuk</creator>
        
        <creator>Mohannad Alkanan</creator>
        
        <creator>Osman Elwasila</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Mohammed Nasir Danmalam Bawa</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Nigerian stock trading; machine learning; pattern recognition; candlestick</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study examines the application of machine learning (ML) algorithms for multi-day stock price prediction on the Nigerian Stock Exchange (NSE) from 2013 to 2023, to inform trading strategies. Utilizing candlestick patterns and technical indicators, including Simple Moving Average (SMA), Exponential Moving Average (EMA), and Volume Rate of Change (VROC), as input features, the models were trained to capture historical price dynamics. Among the evaluated algorithms, Ridge Regression demonstrated superior performance, achieving a Mean Absolute Error (MAE) of 0.0366 over a three-day forecasting horizon, while effectively mitigating overfitting and handling market volatility. In contrast, Decision Tree, Lasso, Support Vector Regressor (SVR), and K-Nearest Neighbors (KNN) models exhibited limitations due to sensitivity to data noise and overfitting. A recursive multi-step forecasting approach further enhanced prediction accuracy by incorporating temporal dependencies. However, backtesting revealed that predictive accuracy alone did not guarantee profitable trading outcomes, emphasizing the need to integrate market conditions, risk management, and strategy design. The findings underscore the importance of robust feature engineering and data preprocessing in financial ML applications. While Ridge Regression shows promise for stock price forecasting, successful trading strategies require a holistic framework that accounts for broader market factors. Future research should explore hybrid modeling techniques and additional exogenous variables to improve robustness.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_31-Generating_a_Trading_Strategy_Using_Candlestick_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Factors Affecting Continuance Intention in Indonesian Digital Banks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160930</link>
        <id>10.14569/IJACSA.2025.0160930</id>
        <doi>10.14569/IJACSA.2025.0160930</doi>
        <lastModDate>2025-09-30T11:06:47.6730000+00:00</lastModDate>
        
        <creator>Soros Lie</creator>
        
        <creator>Viany Utami Tjhin</creator>
        
        <subject>Digital banking; DeLone and McLean; structural equation model; continuance intention; user satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Indonesian digital banks are currently competing to attract more customers with their own mobile applications, digital finance ecosystems, and promotion methods. This research aims to identify the factors that influence customer satisfaction when using digital bank applications. The variables used in this study are System Quality, Service Quality, Information Quality, Perceived Advantage, Effort Expectancy, Digital Ecosystem, User Satisfaction, and Continuance Intention in the mobile application. The results of the study identify the factors that influence satisfaction, helping Indonesian digital banks develop better mobile applications. The research model and questionnaire adapt the DeLone and McLean information systems success model, and the questionnaire results were analyzed using descriptive statistics and Structural Equation Model (SEM) analysis in SmartPLS v4. The analysis found 11 significant direct effects on User Satisfaction or Continuance Intention, and 6 direct effects that were not significant. There were also 8 significant indirect effects, mainly where User Satisfaction mediated the impact of other variables on Continuance Intention. The results of this study are expected to provide a useful reference for Indonesian digital banks to improve their mobile app services and maintain customer loyalty.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_30-Analysis_of_Factors_Affecting_Continuance_Intention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NEBULA Framework: An Adaptive Framework for Unstructured Description to Solve Cold Start Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160929</link>
        <id>10.14569/IJACSA.2025.0160929</id>
        <doi>10.14569/IJACSA.2025.0160929</doi>
        <lastModDate>2025-09-30T11:06:47.6430000+00:00</lastModDate>
        
        <creator>I Gusti Agung Gede Arya Kadyanan</creator>
        
        <creator>Ni Made Ary Esta Dewi Wirastuti</creator>
        
        <creator>Gede Sukadarmika</creator>
        
        <creator>Ngurah Agus Sanjaya ER</creator>
        
        <subject>Cold start; adaptive framework; recommender system; NER; unstructured description</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The cold start problem is one of the main challenges in recommendation systems, especially when the system has to provide recommendations for new items that do not yet have a history of interaction. Although various approaches have been developed, most still use conventional interaction-based methods, which are not optimal in providing accurate recommendations for new items that only have minimal and unstructured descriptive information. This research aims to provide recommendations for new items that lack interaction history and have unstructured descriptive information by addressing the cold start problem more adaptively. The proposed model is based on Named Entity Recognition (NER) and metadata representation as an adaptive framework capable of adjusting recommendation methods based on the availability of initial information. For new items, the system utilizes basic attributes such as product type, materials, and origin, and employs an adaptive approach for rating prediction. Testing results demonstrate system performance with an Accuracy of 0.967, Precision of 0.838, Recall of 0.846, F1-score of 0.842, and an average Mean Absolute Error (MAE) of 0.159. This adaptive framework proved to be superior to conventional approaches, with improvements in Precision of 15.59%, Recall of 17.50%, F1-score of 16.54%, and a significant reduction in MAE. Additionally, the Kappa value of 0.69 indicates a high level of agreement (substantial agreement) among validators. These findings demonstrate that the system is not only more accurate in recommending new items but also more reliable under minimal data conditions, thereby enhancing user confidence. Overall, this NER and metadata-based framework can serve as an effective solution for addressing the cold start problem and improving recommendation quality during the initial stages.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_29-NEBULA_Framework_An_Adaptive_Framework_for_Unstructured_Description.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating User Experience in a Public Sector Digital System Through Nielsen’s Heuristic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160928</link>
        <id>10.14569/IJACSA.2025.0160928</id>
        <doi>10.14569/IJACSA.2025.0160928</doi>
        <lastModDate>2025-09-30T11:06:47.6130000+00:00</lastModDate>
        
        <creator>Nagyan Yosse Wibisono</creator>
        
        <creator>Viany Utami Tjhin</creator>
        
        <subject>Usability evaluation; heuristic evaluation; user experience; digital government; PLS-SEM; public sector technology; UX mining; usability modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The digital transformation in public administration has encouraged the Indonesian National Police (Polri) to adopt a digital government application for managing official documents electronically. Despite its functional benefits, users have reported several usability issues such as non-intuitive navigation, inconsistent interface design, inadequate system feedback, and insufficient documentation. To systematically address these problems, this study combines Jakob Nielsen’s Heuristic Evaluation (HE) with Partial Least Squares Structural Equation Modeling (PLS-SEM), offering a hybrid methodological approach that is rarely applied in public sector UX evaluation. Data were collected through a structured questionnaire distributed to 156 active users of the application. The instrument measured ten heuristic principles and user experience dimensions using a four-point Likert scale. The results reveal that all heuristic dimensions scored within the “good” range with mean values between 3.19 and 3.38, classified as cosmetic issues under Nielsen’s severity scale. More importantly, the analysis shows that only three heuristics—Match Between System and the Real World, Help Users Recognize Diagnose and Recover from Errors, and Help and Documentation—have a significant positive impact on user experience perceptions. Together, these heuristics explain 90.9 per cent of the variance in user experience, highlighting their critical role in shaping user-centered digital government systems. This study advances existing evaluation models by demonstrating the effectiveness of integrating heuristic evaluation with quantitative SEM-based analysis, bridging diagnostic insights with statistical rigor. The findings provide a prioritized roadmap for improving the application’s interface and emphasize the importance of user-centered design for enhancing the adoption and effectiveness of public sector digital systems.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_28-Evaluating_User_Experience_in_a_Public_Sector_Digital_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Networks for Pest Diagnosis in Agriculture: A Global Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160927</link>
        <id>10.14569/IJACSA.2025.0160927</id>
        <doi>10.14569/IJACSA.2025.0160927</doi>
        <lastModDate>2025-09-30T11:06:47.5800000+00:00</lastModDate>
        
        <creator>Heling Kristtel Masgo Ventura</creator>
        
        <creator>Italo Maldonado Ram&#237;rez</creator>
        
        <creator>Roberto Carlos Santa Cruz Acosta</creator>
        
        <creator>Wilfredo Ruiz Camacho</creator>
        
        <creator>Juan Eduardo Suarez Rivadeneira</creator>
        
        <creator>Jos&#233; Celso Paredes Carranza</creator>
        
        <creator>Mayra Pamela Musay&#243;n D&#237;az</creator>
        
        <creator>Cesar R. Balcazar Zumaeta</creator>
        
        <creator>Carlos Luis Lobat&#243;n Arenas</creator>
        
        <creator>Juan Alberto Rojas Castillo</creator>
        
        <creator>Eli Morales-Rojas</creator>
        
        <subject>Neural networks; pests; agriculture; developing countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Agricultural pests severely reduce global crop yields. To mitigate these losses, pest identification systems based on artificial intelligence have gained importance. This review analyzes worldwide advances in the use of neural networks for agricultural pest diagnosis, covering studies from 2007 to February 2024 retrieved from the Scopus database. Data were processed in Minitab 19 and spreadsheets, and keywords were mapped with VOSviewer. Results show that India and China lead scientific output, with research focused on corn, tomato, rice, and wheat. The most common architectures are ResNet, YOLO, and VGG-16/19, achieving performance metrics of up to 99%. The review highlights the strong relationship between economic development and the adoption of neural networks. These findings provide researchers, agricultural engineers, and policymakers with a global perspective to guide future AI-based pest management strategies and support automation, especially in developing countries.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_27-Neural_Networks_for_Pest_Diagnosis_in_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Evaluation of Centrality Measures for Detecting Significant Nodes in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160926</link>
        <id>10.14569/IJACSA.2025.0160926</id>
        <doi>10.14569/IJACSA.2025.0160926</doi>
        <lastModDate>2025-09-30T11:06:47.5500000+00:00</lastModDate>
        
        <creator>Hardeep Singh</creator>
        
        <creator>Supreet Kaur</creator>
        
        <creator>Karman Singh Sethi</creator>
        
        <creator>Jai Sharma</creator>
        
        <subject>Social networks; centrality measures; significant nodes; validation metrics; extended gravity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Social networks are a crucial platform for bringing people together globally. Detecting the significant nodes inside a social network remains an open problem because of the broad variety of network sizes. To solve this problem, different centrality measures have been introduced. Detecting significant nodes is essential for speeding up or slowing down the spread of information, managing diseases and rumors, and more. This paper presents a comparative evaluation of 12 centrality measures to determine the most effective measure on the basis of accuracy, differentiation capability, and runtime. To validate performance, a series of experiments is conducted on four social networks using validation metrics such as monotonicity, the SIR model, and Kendall tau. The experimental outcomes indicate that gravity-based measures have superior accuracy and differentiation capability compared to other measures. Finally, this paper outlines future research directions for enhancements based on centrality measures.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_26-Comparative_Evaluation_of_Centrality_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Aggregated Dataset of Agile User Stories and Use Case Taxonomy for AI-Driven Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160925</link>
        <id>10.14569/IJACSA.2025.0160925</id>
        <doi>10.14569/IJACSA.2025.0160925</doi>
        <lastModDate>2025-09-30T11:06:47.5170000+00:00</lastModDate>
        
        <creator>Abdulrahim Alhaizaey</creator>
        
        <creator>Majed Al-Mashari</creator>
        
        <subject>Agile software development; requirements engineering; user stories; natural language processing; datasets; large language models; generative language models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Agile methodologies are considered revolutionary approaches in the development of systems and software. With the rapid advancement of artificial intelligence, natural language processing, and large language models, there is an increasing demand for high-quality datasets to support the design and development of intelligent, practical, and effective automation tools. However, researchers in Agile Requirements Engineering face significant challenges due to the limited availability of datasets, particularly those involving user stories. This paper presents a dataset of over 10K user stories collected from academic sources and publicly accessible online repositories. These stories represent requirements formulated in accordance with Agile principles. The process of collecting and classifying data, as well as its use in a prior research project focused on identifying non-functional requirements, is described. The dataset was validated with substantial inter-annotator agreement and has been successfully employed in prior experiments, where a fine-tuned pre-trained language model achieved F1 scores above 93% in classifying non-functional requirements. Additionally, a structured taxonomy of potential research and practical use cases for this dataset is proposed, aiming to support researchers and practitioners in areas such as requirements analysis, automated generative tasks using generative language models, model development, and educational purposes.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_25-An_Aggregated_Dataset_of_Agile_User_Stories.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Image Retrieval: A Two-Step Content-Based Image Retrieval System Using Bag of Visual Words and Color Coherence Vectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160924</link>
        <id>10.14569/IJACSA.2025.0160924</id>
        <doi>10.14569/IJACSA.2025.0160924</doi>
        <lastModDate>2025-09-30T11:06:47.4700000+00:00</lastModDate>
        
        <creator>Muhammad Sauood</creator>
        
        <creator>Muhammad Suzuri Hitam</creator>
        
        <creator>Wan Nural Jawahir Hj Wan Yussof</creator>
        
        <subject>CBIR system; Bag of Visual Words; color coherence vectors; bi-layer CBIR; two-step CBIR feature fusion; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Content-Based Image Retrieval (CBIR) systems play a crucial role in efficiently managing and retrieving images from large datasets based on visual content. This paper presents a novel bi-layer CBIR system that integrates Bag of Visual Words (BoVW) and Color Coherence Vector (CCV) methods to enhance image retrieval accuracy and performance by leveraging the strengths of both feature extraction techniques. In the first layer, the BoVW approach extracts local features and represents images as histograms of visual word occurrences, facilitating efficient initial filtering. In the second layer, CCV features are extracted from the top retrieved images to capture the spatial coherence of color regions, providing a detailed color signature. By combining the merits of both layers, the proposed system achieves higher retrieval precision and recall compared to traditional single-layer approaches. Experimental results demonstrate the effectiveness of the bi-layer CBIR system in retrieving relevant images with improved accuracy, making it a valuable tool for applications in image databases, digital libraries, and multimedia content management.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_24-Optimizing_Image_Retrieval_A_Two_Step_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Is Metaverse Technology Ready to Welcome Online Banking Users?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160923</link>
        <id>10.14569/IJACSA.2025.0160923</id>
        <doi>10.14569/IJACSA.2025.0160923</doi>
        <lastModDate>2025-09-30T11:06:47.4400000+00:00</lastModDate>
        
        <creator>Yantu Ma</creator>
        
        <creator>Dalbir Singh</creator>
        
        <creator>Siok Yee Tan</creator>
        
        <creator>Meng Chun Lam</creator>
        
        <creator>Ab Ghani Nur Laili</creator>
        
        <creator>Haodong Guan</creator>
        
        <creator>Fengjin Lei</creator>
        
        <creator>Ahmad Sufril Azlan Mohamed</creator>
        
        <subject>Banking; human-computer interaction; information systems; Metaverse; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The advent of the Web 3.0 era has precipitated the gradual emergence of Metaverse technology as the frontier of next-generation interactive technology transformation in banking. Nevertheless, a considerable disparity exists between the user experience of Metaverse banking in virtual interactive environments and traditional online banks. Consequently, in the nascent stages of Metaverse banking development, examining how existing banking users adopt Metaverse technology and exploring related controversies from an information systems implementation perspective is imperative. The present study employs a systematic literature review methodology, screening 19 relevant articles published between 2020 and 2024 from two leading academic databases, Web of Science (WOS) and Scopus. The review identifies factors hindering current bank users&#39; adoption of Metaverse banking, such as privacy concerns and lack of social norms, while elucidating motivational drivers in Metaverse contexts, including usability and perceived enjoyment. Moreover, from a banking application perspective, this study proposes implementation recommendations for the initial deployment of Metaverse banking, including lowering user adoption barriers and enhancing immersive experiences. The study draws upon contemporary customer needs to analyse key decision-making considerations when using Metaverse banking services. It highlights key adoption barriers and drivers for sustained usage, emphasising potential challenges in the initial development phase, such as privacy leaks and user behaviour monitoring in virtual environments. Furthermore, it identifies unexplored research gaps in the early implementation stage of Metaverse banking. This review synthesises current knowledge on Metaverse technology in banking and outlines practical considerations and strategic directions for its integration within the industry.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_23-Is_Metaverse_Technology_Ready_to_Welcome_Online_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CrypTen-FL: A Secure Federated Learning Framework for Multi-Disease Prediction from MIMIC-IV Using Encrypted EHRs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160922</link>
        <id>10.14569/IJACSA.2025.0160922</id>
        <doi>10.14569/IJACSA.2025.0160922</doi>
        <lastModDate>2025-09-30T11:06:47.4100000+00:00</lastModDate>
        
        <creator>Himanshu</creator>
        
        <creator>Pushpendra Singh</creator>
        
        <subject>Federated learning; secure multi-party computation; electronic health records; disease prediction; MIMIC-IV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The increasing demand for privacy-preserving machine learning in healthcare has driven the need for federated approaches that ensure data confidentiality across institutions. In this work, we present CrypTen-FL, a secure federated learning framework for disease prediction using the MIMIC-IV electronic health record (EHR) dataset. CrypTen-FL enables collaborative model training across multiple hospitals without sharing raw patient data, thereby addressing critical privacy concerns through the integration of Secure Multi-Party Computation (SMPC) using CrypTen and differential privacy mechanisms. We adopt a Transformer-based neural architecture to effectively capture the temporal and high-dimensional nature of EHR data, enabling accurate prediction of multiple clinically significant conditions. The framework incorporates decentralized key generation, secure aggregation, and cross-institutional evaluation to assess generalization performance and robustness. Experimental results demonstrate that CrypTen-FL achieves competitive predictive performance while offering strong privacy guarantees, paving the way for secure and scalable AI applications in real-world healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_22-CrypTen_FL_A_Secure_Federated_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Evaluation Using Enhanced Panel Factor Model and Machine Learning: Assessing the Level and Structure of Regional Coordinated Development in the Guangdong-Hong Kong-Macao Greater Bay Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160921</link>
        <id>10.14569/IJACSA.2025.0160921</id>
        <doi>10.14569/IJACSA.2025.0160921</doi>
        <lastModDate>2025-09-30T11:06:47.3770000+00:00</lastModDate>
        
        <creator>Li Shi</creator>
        
        <creator>Ting Nie</creator>
        
        <subject>Comprehensive evaluation; machine learning; regional coordinated development; comparative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Regional sustainable and coordinated development has become a central issue in the backdrop of a reshaped global economic landscape. Therefore, it is particularly important to evaluate the level of regional coordinated development effectively. This study aimed to validate and assess the effectiveness of machine learning algorithms and the Enhanced Panel Factor Model for evaluating regional coordinated development. To this end, based on panel data from 11 cities in the Guangdong–Hong Kong–Macao Greater Bay Area for 2005–2023, we constructed a four-dimensional composite indicator system covering economic growth, structural optimization, innovation-driven development, and social development. First, we employ a factor model to achieve dimensionality reduction and extract latent factors. SPSS and the JiekeLi platform are used for visualization, and finally, we combine LASSO regression with linear regression to build predictive models to verify the explanatory power of key factors for regional coordination. The findings indicate that the traditional factor model performs robustly in structural identification, whereas machine learning methods have advantages in variable selection and fitting accuracy. The empirical results show that the overall level of coordination in the Greater Bay Area has steadily improved; however, substantial disparities among cities remain. This study demonstrates a new pathway that integrates econometrics and machine learning for the comprehensive evaluation of regional development levels. It also conducts a comparative analysis of the applicability and effectiveness of these two methods, thereby offering significant theoretical and practical value.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_21-An_Integrated_Evaluation_Using_Enhanced_Panel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards More Effective Automatic Question Generation: A Hybrid Approach for Extracting Informative Sentences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160920</link>
        <id>10.14569/IJACSA.2025.0160920</id>
        <doi>10.14569/IJACSA.2025.0160920</doi>
        <lastModDate>2025-09-30T11:06:47.3470000+00:00</lastModDate>
        
        <creator>Engy Yehia</creator>
        
        <creator>Neama Hassan</creator>
        
        <creator>Sayed AbdelGaber</creator>
        
        <subject>Automatic Question Generation (AQG); informative sentence extraction; NER; SBERT; question answering; information gain; fusion strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Informative Sentence Extraction (ISE) is one of the crucial components in Automatic Question Generation (AQG) and directly influences the quality and relevance of the generated questions. Instructional texts often contain not only informative but also irrelevant sentences, and using irrelevant, non-informative sentences as input results in poor-quality or distorted questions. Therefore, the basic problem discussed in this paper is how to provide a systematic method for filtering out such sentences and retaining those that are pedagogically valuable. The purpose of ISE is to filter out irrelevant, low-quality information and retain only sentences that are factually dense, express key concepts, and are contextually significant. This paper proposes a hybrid approach for extracting informative sentences that combines lexical, statistical, and semantic criteria to identify sentences suitable for generating educational questions. The proposed approach consists of two modules: the first module employs four techniques to evaluate the informativeness of sentences, namely keyword-based scoring, Named Entity Recognition (NER), information gain (IG), and Sentence-BERT (SBERT). The second module utilizes multiple fusion strategies to integrate the results derived from these techniques. The preprocessed sentences extracted from educational materials were ranked and filtered based on their informativeness coverage. The evaluation results indicate that the hybrid approach improves the extraction of informative sentences compared to individual methods. Such a contribution is important for enhancing the performance of downstream tasks in AQG systems, such as distractor generation and question formulation.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_20-Towards_More_Effective_Automatic_Question_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Requirements of Adaptive Learning Through Digital Game-Based Learning: User-Centered Design Approach to Enhance the Language Literacy Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160919</link>
        <id>10.14569/IJACSA.2025.0160919</id>
        <doi>10.14569/IJACSA.2025.0160919</doi>
        <lastModDate>2025-09-30T11:06:47.3170000+00:00</lastModDate>
        
        <creator>Nur Atiqah Zaini</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <creator>Mohd Nor Akmal Khalid</creator>
        
        <creator>Shahrul Azman Mohd Noah</creator>
        
        <subject>Digital game-based learning; adaptive learning; artificial intelligence; language literacy; optimization; primary students; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study aims to elicit the user requirements of digital game-based learning among primary students through adaptive digital game-based learning, with a focus on enhancing language literacy. Acquiring the user requirements through a user-centered design approach is emphasized to identify the specifications and provide practical insights for language learning and digital literacy skills. The requirement specifications are specifically aligned to promote the quality of education when designing digital game-based learning for language literacy, considering game elements such as the feedback mechanism, player’s profile, game rules, game genre, game environment, rewards, adaptive language learning contents, integration of a virtual tutor through artificial intelligence, activities, and challenges. This paper presents a qualitative analysis of the results from a controlled study that investigates the potential of digital game-based learning through adaptive learning for the enhancement of language proficiency. It thereby contributes to the broader field of digital game-based learning by expanding the understanding of adaptive learning for language literacy as an optimal strategy, as primary school students increasingly experience the influence of artificial intelligence technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_19-User_Requirements_of_Adaptive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Starfish Optimization Algorithm-Based Federated Learning Approach for Financial Risk Prediction in Manufacturing Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160918</link>
        <id>10.14569/IJACSA.2025.0160918</id>
        <doi>10.14569/IJACSA.2025.0160918</doi>
        <lastModDate>2025-09-30T11:06:47.2830000+00:00</lastModDate>
        
        <creator>Bin Liu</creator>
        
        <creator>Liang Chen</creator>
        
        <creator>Haitong Jiang</creator>
        
        <creator>Rui Ma</creator>
        
        <subject>Deep learning; neural Turing machine; prediction; starfish optimization algorithm; federated learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>During digital transformation, manufacturing enterprises encounter challenges such as the high cost of smart devices, operational interruptions, and increased technology expenses, which raise their financial risks. Addressing these challenges necessitates developing an intelligent financial risk prediction system leveraging AI technologies such as big data and deep learning, enabling enterprises to mitigate financial exposure. In addition, some manufacturing enterprises cannot disclose or share certain data because it involves trade secrets and shareholder interests. To address these challenges, this study proposes a federated learning (FL)-based framework for predicting financial risk in manufacturing enterprises. Without sharing data, each client (manufacturing enterprise) in the FL framework uses deep learning models to train financial risk prediction models, coordinated through a central server. The proposed FL framework employs a deep learning model based on a neural Turing machine (NTM) with a long short-term memory (LSTM) controller. Furthermore, to improve the prediction accuracy of the hybrid NTM-FL model, an improved starfish optimization algorithm (ISFOA) is used to optimize the structure of the NTM model. Finally, the experimental results showed that the ISFOA-based NTM-FL (ISFOA-NTM-FL) model improved prediction accuracy by 26.32% compared to the other three financial risk prediction models.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_18-A_Starfish_Optimization_Algorithm_Based_Federated_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Enabled Demand Forecasting, Technological Capability, and Supply Chain Performance: Empirical Evidence from the Global Logistics Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160917</link>
        <id>10.14569/IJACSA.2025.0160917</id>
        <doi>10.14569/IJACSA.2025.0160917</doi>
        <lastModDate>2025-09-30T11:06:47.2670000+00:00</lastModDate>
        
        <creator>Mohamed Amine Frikha</creator>
        
        <creator>Mariem Mrad</creator>
        
        <subject>AI-enabled demand forecasting; technological capability; data infrastructure; workforce skills; management support; supply chain performance; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This study advances understanding of artificial intelligence (AI) integration within supply chain management, with a particular emphasis on AI-enabled demand forecasting. The research examines 1) the extent of adoption of AI-driven forecasting practices, 2) the role of technological and organizational readiness, captured through data infrastructure, workforce skills, and management support, as antecedents, and 3) the mediating effect of AI adoption on the relationship between readiness and supply chain performance. Grounded in the resource-based view and technology adoption theory, a conceptual model was developed and empirically validated using data from global logistics firms, with structural equation modeling applied as the primary analytical technique. The findings confirm that readiness factors significantly foster AI adoption, which in turn exerts both a direct effect on supply chain performance and a mediating effect linking readiness to performance. By focusing on the global logistics sector and empirically validating this mediating mechanism, the study provides novel insights into how firms can translate technological readiness into superior operational outcomes, offering theoretical contributions to AI assimilation literature and practical guidance for managers.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_17-AI_Enabled_Demand_Forecasting_Technological_Capability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mathematical Representation of Netflow Analysis Decision Making Based on Production Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160916</link>
        <id>10.14569/IJACSA.2025.0160916</id>
        <doi>10.14569/IJACSA.2025.0160916</doi>
        <lastModDate>2025-09-30T11:06:47.2370000+00:00</lastModDate>
        
        <creator>Alimdzhan Babadzhanov</creator>
        
        <creator>Inomjon Yarashov</creator>
        
        <creator>Maruf Juraev</creator>
        
        <creator>Alisher Otakhonov</creator>
        
        <creator>Adilbay Kudaybergenov</creator>
        
        <creator>Rustam Utemuratov</creator>
        
        <subject>Production rules; packet features; PCAP; network events; netflow; production logic; confusion matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In the sphere of NetFlow traffic analysis, accurate real-time detection of anomalous behavior remains a critical challenge. This study presents a mathematical representation for decision making in NetFlow analysis, using production logic to implement automated expert knowledge. A specialized software system is developed for collecting and processing NetFlow traffic events in real time. NetFlow data are accumulated in PCAP (Packet Capture) format and converted to .csv using a three-step algorithmic sequence: packet reading, key feature extraction, and output formatting. A total of 89 distinct packet features are identified and grouped into 10 categories, including flow statistics, payload features, inter-arrival time, and TCP flags. Based on these features, 110 frequently occurring network processes are synthesized and organized into seven sets, such as classic cyber threats, application-level cyber threats, anomalies, and normal netflows. Each event is formally expressed using Boolean production rules in the IF...THEN format, linking subsets of feature vectors (F) to specific network events (A). These production rules form the knowledge base of the expert system, which allows for the efficient identification of cyber threats such as DDoS attacks, port scanning, spoofing, and covert channels. The architecture ensures systematic analysis for the early detection and identification of netflow anomalies, contributing to the robustness and protection of complex information systems. The proposed production-logic-based decision-making representation ensures scalable and explainable synthesis and analysis, opening the way to the creation of intelligent structures for identifying cyber threats in cyberspace. The accuracy of the results obtained was checked using a confusion matrix.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_16-Mathematical_Representation_of_Netflow_Analysis_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Modular Architecture Based on AI and Blockchain for Personalized Microcredits Using Open Finance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160915</link>
        <id>10.14569/IJACSA.2025.0160915</id>
        <doi>10.14569/IJACSA.2025.0160915</doi>
        <lastModDate>2025-09-30T11:06:47.1900000+00:00</lastModDate>
        
        <creator>Pedro Hidalgo</creator>
        
        <creator>Ciro Rodr&#237;guez</creator>
        
        <creator>Luis Bravo</creator>
        
        <creator>Cesar Angulo</creator>
        
        <subject>Smart microcredits; artificial intelligence; open finance; blockchain; smart contracts; financial inclusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents the design and validation of a modular architecture for smart microcredits, aimed at expanding credit access for populations excluded from the traditional financial system. The solution integrates three key technological components: data acquisition through Open Finance, automated risk assessment using Artificial Intelligence (AI) models, and the execution of smart contracts on blockchain. A functional prototype was developed to process applications manually submitted by users without prior financial history, utilizing a LightGBM model trained on real, anonymized data. The model was integrated into the system workflow to generate automatic credit conditions and register decisions on the blockchain without direct human intervention. During the validation phase, the model achieved an Area Under the Curve (AUC) of 0.94, supporting its discriminative power within the automated flow. The overall technical validation demonstrates the feasibility of offering personalized, traceable, and secure credit services through open and decentralized technologies. The use of alternative unstructured data, as well as the expansion into production environments, is proposed as a future line of development. In our system, Open Finance provides consented financial data off-chain; the ML model estimates default probability and outputs an eligibility decision; a rule engine maps the score to personalized loan terms; and blockchain smart contracts only record loan terms and execution events on-chain (no personal data). This separation ensures auditability (on-chain) and privacy (off-chain).</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_15-Design_of_a_Modular_Architecture_Based_on_AI_and_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Person Re-Identification with 2D-to-3D Image (Image-to-Video) Conversion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160914</link>
        <id>10.14569/IJACSA.2025.0160914</id>
        <doi>10.14569/IJACSA.2025.0160914</doi>
        <lastModDate>2025-09-30T11:06:47.1600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Person re-identification; identification performance; 2D-to-3D image conversion method; TRIPO; CSM; KLING</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>A method for person re-identification using image-to-video conversion tools is proposed. The proposed method matches two images taken from different viewpoints: a reference image captured in advance and a current image captured in real time, in order to identify the person of interest in the current image. The 2D current image is converted into a 3D representation, from which synthetic images are generated at multiple viewpoints. Person re-identification is then performed by comparing the generated images with the reference image. Experiments have demonstrated that the proposed method significantly improves identification accuracy. By accounting for changes in appearance due to different viewpoints and utilizing advanced image conversion, one of the main challenges in person re-identification is addressed. This approach offers a promising solution for applications requiring high accuracy in identifying individuals across varying perspectives.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_14-Method_for_Person_Re_Identification_with_2D_to_3D_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Trust-Based Fault Tolerance for Multi-Drone Systems: Theory and Application in Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160913</link>
        <id>10.14569/IJACSA.2025.0160913</id>
        <doi>10.14569/IJACSA.2025.0160913</doi>
        <lastModDate>2025-09-30T11:06:47.1430000+00:00</lastModDate>
        
        <creator>Atef GHARBI</creator>
        
        <creator>Faheed A. F. Alrslani</creator>
        
        <subject>Adaptive trust model; trust-aware robotics; multi-drone coordination; fault-tolerant systems; precision agriculture applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>This paper presents RobotTrust, an adaptive trust framework for fault-tolerant coordination in multi-drone systems for precision agriculture. The study aims to improve mission reliability under sensor/actuator faults and uncertain interactions by combining a structured fault taxonomy (behavioral, actuator, sensor) with team-based recovery and an adaptive trust model that integrates direct experience with filtered indirect recommendations. We formalize trust computation (direct, recommended, and global trust) and introduce safeguards such as a minimum-trust threshold and weighted fusion to curb misinformation propagation. The framework is evaluated in simulation using the AgriFleet drone team and is compared against the TReconf baseline across three metrics: (i) time-step efficiency for task completion, (ii) RMSD between predicted and true trustworthiness, and (iii) interaction quality (preference for reliable peers). Results show 20–40% faster task completion, lower RMSD (more accurate trust estimation), and selective interaction patterns that prioritize dependable agents while limiting exposure to unreliable ones. These findings indicate that RobotTrust enhances responsiveness and robustness in decentralized, fault-prone environments typical of agricultural deployments. The work contributes a practical, generalizable approach to trust-aware coordination in multi-robot systems and outlines directions for context-aware weighting, explainable trust signals, heterogeneous teams, adversarial robustness, and large-scale field trials.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_13-Adaptive_Trust_Based_Fault_Tolerance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Scoliosis Diagnosis in Spinal Imaging: Laboratory Validation, Clinical Limitations, and Systematic Implementation Challenge Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160912</link>
        <id>10.14569/IJACSA.2025.0160912</id>
        <doi>10.14569/IJACSA.2025.0160912</doi>
        <lastModDate>2025-09-30T11:06:47.1270000+00:00</lastModDate>
        
        <creator>Ervin Gubin Moung</creator>
        
        <creator>Xie Aishu</creator>
        
        <creator>Ali Farzamnia</creator>
        
        <subject>Automated diagnosis; medical imaging; scoliosis; Cobb angle; clinical implementation; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Technological advances in automated medical imaging diagnosis have created translation gaps between laboratory achievements and clinical implementation, with traditional manual Cobb angle measurement requiring considerable time and incurring inevitable measurement errors. This review analyzes translation challenges in automated diagnosis systems using scoliosis assessment as a case study, examining 55 articles from 1948-2025 across three domains: Cobb angle measurement, classification, and segmentation. Despite research investment, fully automated approaches have not surpassed semi-automated performance in comparable validation studies. Within the 23 Cobb angle measurement studies, traditional methods outperform sophisticated deep learning systems with average error rates of 1.8&#176; &#177; 0.4&#176; MAD versus 4.2&#176; &#177; 1.8&#176; MAE, while validation degradation occurs with performance dropping from 95.28% to 85.9% when transitioning to real-world datasets. Non-standard classification achieves high accuracy but lacks clinical utility, while standard systems struggle with automation, revealing a translation paradox in which technical sophistication does not correlate with clinical adoptability. Key barriers include gaps in validation methodology, performance degradation on real-world data, divergent automation approaches, and cost constraints. This review recommends standardized validation protocols and phased clinical implementation to support the adoption of these systems in routine clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_12-Automated_Scoliosis_Diagnosis_in_Spinal_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WiTS: A Wi-Fi-Based Human Action Recognition via Spatio-Temporal Hybrid Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160911</link>
        <id>10.14569/IJACSA.2025.0160911</id>
        <doi>10.14569/IJACSA.2025.0160911</doi>
        <lastModDate>2025-09-30T11:06:47.0970000+00:00</lastModDate>
        
        <creator>Pengcheng Gao</creator>
        
        <subject>Wi-Fi CSI; human action recognition; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Human action recognition has many applications in different scenarios. With the advancement of wireless sensing and the widespread deployment of Wi-Fi devices, the perception technology of Wi-Fi channel state information (CSI) has shown great potential. Related studies identify actions by capturing specific attenuation and distortion features caused by human posture on CSI. These methods are less susceptible to the effects of lighting and object occlusion. However, they have yet to adequately extract the information within CSI. Enhancing model performance through the comprehensive utilization of feature information across different dimensions remains an open challenge. To address this, a spatio-temporal hybrid neural network model named WiTS is proposed. It integrates the advantages of different neural networks, using CNN to extract spatial features, combining TCN and Bi-LSTM for dual temporal dimension modeling, and incorporating Transformer&#39;s global attention mechanism to achieve comprehensive extraction and multi-level fusion of spatio-temporal features. Additionally, this study further optimizes the original WiTS model from three aspects. Experiments on the WiAR and CSIAR datasets show that the model achieves average accuracy rates of 95.75% and 96.71%, respectively, with F1-scores exceeding 96%. The model has only 2.19 million parameters and fewer than 560 million FLOPs, offering significant advantages in terms of lightweight design, making it suitable for deployment on limited-computing edge terminals while meeting real-time requirements.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_11-WiTS_A_Wi_Fi_Based_Human_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliability Risk Assessment Approaches in Software Engineering: A Review Structured by Software Development LifeCycle (SDLC) Phases and Reliable Sub-Characteristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160910</link>
        <id>10.14569/IJACSA.2025.0160910</id>
        <doi>10.14569/IJACSA.2025.0160910</doi>
        <lastModDate>2025-09-30T11:06:47.0800000+00:00</lastModDate>
        
        <creator>Lehka Subramanium</creator>
        
        <creator>Saadah Hassan</creator>
        
        <creator>Mohd. Hafeez Osman</creator>
        
        <creator>Hazura Zulzalil</creator>
        
        <subject>Reliability; risk assessment; SDLC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Reliability risk is a critical concern in software development, as failures can result in system downtime, degraded performance, data integrity issues, financial losses, and loss of user trust. The increasing complexity of modern systems, driven by dynamic workloads, distributed architectures, and unpredictable interactions, amplifies these risks. In regulated industries like healthcare, finance, and transportation, software reliability directly affects safety, compliance, and operational continuity, making robust risk assessment essential. Despite recent development and improvement of numerous reliability risk assessment techniques, system failures continue to occur, raising concerns about the scope, applicability, and limitations of existing approaches. This paper evaluates recent methods, examining their advantages and disadvantages in practice, while critically assessing the research gaps. Here, the techniques are categorized across the software development lifecycle (SDLC) to map methods to phase-specific reliability needs. Consequently, the paper presents a methodological synthesis of recent practices, identifies segments where existing techniques fall short of expectations, and summarizes future research directions for achieving more robust and adaptive reliability risk assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_10-Reliability_Risk_Assessment_Approaches_in_Software_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond Words: An Advanced Ensemble Framework for Unmasking AI-Generated Content Through Linguistic Fingerprinting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160909</link>
        <id>10.14569/IJACSA.2025.0160909</id>
        <doi>10.14569/IJACSA.2025.0160909</doi>
        <lastModDate>2025-09-30T11:06:47.0330000+00:00</lastModDate>
        
        <creator>Ghada Y. Elwan</creator>
        
        <creator>Doaa R. Fathy</creator>
        
        <creator>Nahed M. El Desouky</creator>
        
        <creator>Abeer S. Desuky</creator>
        
        <subject>AI detection; machine learning; text classification; ensemble methods; content verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>AI-generated content detection is vital because it helps to uphold digital integrity in most fields of application, such as academic publishing and content verification. Identifying text authenticity and tracing its source depend on reliable detection methods. The approach introduced in this paper is a novel ensemble method that combines machine learning and linguistic analysis for AI content detection. The ensemble approach uses a set of classification algorithms to identify the most important differences between human-authored and AI-generated text. To validate the proposed method, this study utilized an extensive collection of text samples (20,000) obtained from SQuAD 2.0, CNN/Daily Mail, GPT-3.5, and ChatGPT datasets. The proposed ensemble model achieved precision, accuracy, recall, and F1-score of 97.2%, 97.5%, 96.4%, and 97.3%, respectively, demonstrating superior performance compared to individual classifiers. The experimental results demonstrate that the ensemble approach offers efficient detection performance, which can be applied to various text types and lengths, and thus can be implemented in practical systems for content verification and academic integrity assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_9-Beyond_Words_An_Advanced_Ensemble_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Reinforcement Learning-Based Target Detection and Autonomous Obstacle Avoidance Control for UAV</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160908</link>
        <id>10.14569/IJACSA.2025.0160908</id>
        <doi>10.14569/IJACSA.2025.0160908</doi>
        <lastModDate>2025-09-30T11:06:47.0030000+00:00</lastModDate>
        
        <creator>Like Zhao</creator>
        
        <creator>Hao Liu</creator>
        
        <creator>Guangmin Gu</creator>
        
        <creator>Fei Wan</creator>
        
        <creator>Yanyang Feng</creator>
        
        <subject>Target detection; multi-scale; lightweight; YOLOv8; autonomous obstacle avoidance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>To address the challenges faced by distribution network monitoring systems—such as significant variations in anomaly scale, frequent missed and false detections of small-scale faults, and the need for real-time operational control—this paper proposes a lightweight multi-scale feature fusion detection network combined with a deep reinforcement learning-based autonomous control strategy, forming an end-to-end intelligent perception and decision-making system for distribution networks. To enhance detection accuracy and computational efficiency, a lightweight feature fusion network (Grid_RepGFPN) is designed, and a novel feature fusion module (DBB_GELAN) is proposed, which significantly reduces model parameters and computational cost while improving detection performance. Additionally, a feature extraction module (FTA_C2f) is constructed using partial convolution (PConv) and triplet attention mechanisms, combined with the ADown downsampling structure to improve the model’s capability to capture spatial and electrical measurement details. The programmable gradient information (PGI) strategy of YOLOv9 is further optimized by introducing a context-guided reversible architecture and a Grid_PGI method with additional detection heads, thereby enhancing deep supervision stability and reducing semantic information loss. Based on the detection model, a real-time operational control strategy is developed using deep reinforcement learning, enabling autonomous fault response, load adjustment, and network optimization through a state–action–feedback optimization loop. 
Experimental results on multiple distribution network simulation platforms demonstrate that the proposed LMGrid-YOLOv8 model outperforms YOLOv8s, with improvements of 4.2%, 3.9%, 5.1%, and 3.0% in precision, recall, mAP@0.5, and mAP@0.5:0.95, respectively, while reducing parameters by 63.9% and increasing computation by only 0.4 GFLOPs, achieving a favorable balance between performance and resource consumption. Inference experiments on edge computing platforms confirm that the proposed model maintains high detection accuracy under real-time constraints, demonstrating strong applicability to real-time distribution network monitoring. Furthermore, class activation map-based visual analysis reveals the model’s superior capabilities in detecting small-scale faults and processing high-resolution network measurement regions.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_8-Deep_Reinforcement_Learning_Based_Target_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded System for ECG Signal Monitoring and Fatigue Detection in Elderly Individuals Using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160907</link>
        <id>10.14569/IJACSA.2025.0160907</id>
        <doi>10.14569/IJACSA.2025.0160907</doi>
        <lastModDate>2025-09-30T11:06:46.9700000+00:00</lastModDate>
        
        <creator>Chokri Baccouch</creator>
        
        <creator>Chaima Bahar</creator>
        
        <subject>Fatigue; ECG; AI; classification; GRU; LSTM; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Ascertaining fatigue in elderly people is crucial both for preventing future health complications and for enhancing their quality of life. In this paper, we present an embedded system for real-time fatigue detection and monitoring based on electrocardiogram (ECG) signals, leveraging cost-effective sensors and advanced deep learning architectures. The proposed framework integrates an AD8232 ECG sensor with an ESP32/Raspberry Pi platform for continuous signal acquisition, followed by preprocessing through a 4th-order Butterworth bandpass filter, feature extraction, dimensionality reduction with PCA, and classification using recurrent neural network models. Unlike previous studies relying on multi-sensor or image-based approaches, our solution demonstrates high efficiency, scalability, and affordability by employing a single low-cost ECG sensor. Three neural architectures were evaluated: standard Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Among them, the GRU model achieved the highest accuracy (98.86%), followed by LSTM (97.73%), whereas standard RNNs lagged behind (82.76%). Experimental results confirm the robustness of GRU in capturing temporal dependencies in ECG data, outperforming other models in both accuracy and computational efficiency. This study highlights the feasibility of deploying lightweight yet powerful AI models in embedded healthcare systems for elderly individuals. By enabling early detection of fatigue, a critical risk factor for falls, cardiovascular incidents, and reduced autonomy, our approach offers significant societal benefits, including preventive care, reduced hospitalization costs, and improved independence. Future work will extend the dataset and validate system robustness in real-world environments to enhance clinical applicability.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_7-Embedded_System_for_ECG_Signal_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Graph-Based Deep Reinforcement Learning and Econometric Framework for Interpretable and Uncertainty-Aware Stablecoin Stability Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160906</link>
        <id>10.14569/IJACSA.2025.0160906</id>
        <doi>10.14569/IJACSA.2025.0160906</doi>
        <lastModDate>2025-09-30T11:06:46.9570000+00:00</lastModDate>
        
        <creator>Yaozhong Zhang</creator>
        
        <creator>Quanrong Fang</creator>
        
        <subject>Stablecoin stability; deep reinforcement learning; graph neural networks; uncertainty quantification; macro prudential policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The instability of algorithmic and hybrid stablecoins has become a systemic concern in decentralized finance. This paper proposes a unified, interpretable, and uncertainty-aware framework that integrates graph-based deep reinforcement learning, GARCH econometric modeling, and Bayesian inference. Multi-stage reinforcement learning agents simulate interactions between arbitrageurs and protocol mechanisms. GARCH models capture volatility dynamics, while Bayesian methods provide confidence intervals for peg deviation forecasts, enabling adaptive prediction and transparent risk interpretation. The framework is validated using over eight million on-chain and off-chain records across 120 scenarios involving USDT, USDC, and TerraUSD. It achieves 89% crisis prediction accuracy and 83% reflexivity modeling performance, significantly outperforming six benchmark models. Notably, the system issued early warnings up to 72 hours before the TerraUSD collapse. Ablation studies confirm the unique contribution of each module. In addition to technical improvements, the framework outputs a stability index and dynamic reserve recommendations to support policy response and supervisory planning. Compared to existing approaches, this is the first framework to combine dynamic simulation, interpretability, and probabilistic forecasting in a single architecture. It offers practical value for stablecoin monitoring and establishes a methodological foundation for future research in digital asset risk assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_6-A_Graph_Based_Deep_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Scalable Microservices Architecture for Real-Time Data Processing in Cloud-Based Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160905</link>
        <id>10.14569/IJACSA.2025.0160905</id>
        <doi>10.14569/IJACSA.2025.0160905</doi>
        <lastModDate>2025-09-30T11:06:46.9230000+00:00</lastModDate>
        
        <creator>Desidi Narsimha Reddy</creator>
        
        <creator>Rahul Suryodai</creator>
        
        <creator>Vinay Kumar S. B</creator>
        
        <creator>M. Ambika</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>V. Rama Krishna</creator>
        
        <creator>Bobonazarov Abdurasul</creator>
        
        <subject>Microservices architecture; real-time data processing; cloud-native systems; Kubernetes orchestration; API gateway</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>In today’s data-intensive landscape, the exponential growth of digital applications and IoT devices has heightened the demand for real-time data processing within cloud-native environments. Traditional monolithic systems struggle to meet the low-latency, high-availability requirements of modern workloads, prompting a shift toward microservices architectures. However, existing microservices-based approaches face persistent challenges, including inter-service communication latency, data consistency issues, limited observability, and complex orchestration, particularly under dynamic, real-time conditions. Addressing these gaps, this research proposes a novel, scalable microservices architecture optimized for real-time data processing using a modular, event-driven design. The goal is to develop a robust and flexible system that consumes real-time weather data provided by the OpenWeatherMap API with minimal latency and maximum scalability. The architecture incorporates Apache Kafka, Apache Flink, Redis, Kubernetes, and adaptive autoscaling via KEDA and HPA. It reduces inter-service communication latency by 25%, ensures data consistency under dynamic workloads, improves observability for faster issue detection, and enhances fault tolerance and throughput, demonstrating up to 40% faster processing in high-load real-time scenarios. The major building blocks are microservices built on Docker, orchestration on Kubernetes, an API gateway to route and secure traffic, a CI/CD pipeline for rapid deployments, and an observability stack comprising Prometheus, ELK, and Jaeger for distributed tracing. Detailed analysis revealed that under high load the system was significantly more responsive, more fault-tolerant, and achieved higher throughput. The proposed framework supports dynamic workload management, automatic fault healing, and intelligent scaling, thereby minimizing downtime risk and maintaining steady performance. In summary, this study offers a viable microservices design that addresses present limitations in real-time data processing while providing a scalable, secure, and observable architecture for future cloud-native applications.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_5-A_Scalable_Microservices_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Dynamic Pricing Using Machine Learning: Integrating Customer Sentiment and Predictive Models for E-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160904</link>
        <id>10.14569/IJACSA.2025.0160904</id>
        <doi>10.14569/IJACSA.2025.0160904</doi>
        <lastModDate>2025-09-30T11:06:46.8770000+00:00</lastModDate>
        
        <creator>Areyfin Mohammed Yoshi</creator>
        
        <creator>Arafat Rohan</creator>
        
        <creator>Sohana Afrin Mitu</creator>
        
        <creator>Md Masud Karim Rabbi</creator>
        
        <creator>Shahanaj Akther</creator>
        
        <creator>Khandakar Rabbi Ahmed</creator>
        
        <subject>Dynamic pricing; machine learning; XGBoost; e-commerce analytics; revenue optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Dynamic pricing has emerged as a crucial strategy for e-commerce platforms to maximize profitability while remaining competitive in rapidly changing digital markets. Traditional pricing methods often fail to capture the complexity of customer behavior and the rapid evolution of market trends. To address these limitations, this study introduces a machine learning based framework that integrates transactional, behavioral, and contextual data with multilingual sentiment analysis from customer reviews. The framework employs multiple algorithms, including Random Forest, Gradient Boosting, Neural Networks, and XGBoost, with extensive feature engineering and model evaluation. Experimental results on a large-scale retail and e-commerce dataset show that the proposed XGBoost-based approach achieved superior performance, with a Mean Absolute Error (MAE) of 1.29, Root Mean Squared Error (RMSE) of 1.65, and an R&#178; of 0.97, significantly outperforming baseline models. These findings underscore the framework&#39;s capacity to facilitate real-time, adaptive, and customer-centric pricing mechanisms. The study contributes by presenting 1) an end-to-end ML pipeline for dynamic pricing, 2) the novel incorporation of sentiment-based features into predictive models, and 3) a comparative evaluation that establishes XGBoost as the most effective model. The results demonstrate both practical and theoretical value, offering insights for e-commerce platforms seeking to optimize revenue and ensure pricing fairness in real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_4-Real_Time_Dynamic_Pricing_Using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LegalSummNet: A Transformer-Based Model for Effective Legal Case Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160903</link>
        <id>10.14569/IJACSA.2025.0160903</id>
        <doi>10.14569/IJACSA.2025.0160903</doi>
        <lastModDate>2025-09-30T11:06:46.8470000+00:00</lastModDate>
        
        <creator>Md Farhad Kabir</creator>
        
        <creator>Sohana Afrin Mitu</creator>
        
        <creator>Sharmin Sultana</creator>
        
        <creator>Belal Hossain</creator>
        
        <creator>Rakibul Islam</creator>
        
        <creator>Khandakar Rabbi Ahmed</creator>
        
        <subject>Legal document summarization; NLP; extractive and abstractive summarization; transformer; LegalSummNet; BERT; LegalT5; ROUGE-L</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>The growing volume and complexity of legal documents present an increasing challenge to legal professionals in extracting relevant information efficiently. In this paper, a new two-stage hybrid summarization system, called LegalSummNet, is introduced. It excels in handling the peculiarities of legal texts, such as their extremely long length, complex syntax, and specialized vocabulary. LegalSummNet combines an extractive filtering model with an attention-weighted filtering module and a transformer-based abstractive generation model, enabling it to identify significant elements and produce compact, coherent, and semantically faithful summaries. The proposed model is tested on a large-scale dataset of legal cases and shows significant improvements over robust baselines, such as BERTSumExt and LegalT5, in performance measured by ROUGE-1, ROUGE-2, ROUGE-L, and BERT Score. The model also achieves greater compression efficiency, making it well suited to real-world applications such as generating case briefs and contract summaries. The findings demonstrate that LegalSummNet is effective in enhancing the accessibility of legal documents and supporting informed decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_3-LegalSummNet_A_Transformer_Based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multispectral Image Analysis Using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160902</link>
        <id>10.14569/IJACSA.2025.0160902</id>
        <doi>10.14569/IJACSA.2025.0160902</doi>
        <lastModDate>2025-09-30T11:06:46.8170000+00:00</lastModDate>
        
        <creator>Arun D. Kulkarni</creator>
        
        <subject>Remote sensing; classification; deep neural networks; Landsat scene</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Multispectral image classification plays a crucial role in remote sensing applications such as land cover mapping, agricultural monitoring, and environmental surveillance. Traditional classification techniques, including the Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM), Decision Tree (DT), and Multi-Layer Perceptron (MLP), often struggle with the complexity and high dimensionality of multispectral data. Recent advances in deep learning have revolutionized the field of remote sensing by enabling the extraction of high-level, abstract features from raw input data. In this paper, we explore the application of Deep Neural Networks (DNNs) for pixel-wise classification in multispectral imagery. DNNs are capable of learning informative and hierarchical representations, which have demonstrated significant success in a wide range of computer vision tasks. We propose and implement a simple DNN architecture consisting of six layers: an input layer (representing reflectance values across spectral bands), a fully connected layer, a batch normalization layer, a ReLU activation layer, another fully connected layer, and a final SoftMax output layer for classification. Each pixel is represented by a vector of spectral reflectance values. We evaluated our model using two Landsat scenes, one from the New Orleans area and the other from the Mississippi River bottomland area. The proposed DNN achieved classification accuracy of 97.44% and 95.74%, respectively, on these datasets, demonstrating the effectiveness of deep learning for multispectral image classification.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_2-Multispectral_Image_Analysis_Using_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Biases: Understanding and Designing Fair AI Systems for Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160901</link>
        <id>10.14569/IJACSA.2025.0160901</id>
        <doi>10.14569/IJACSA.2025.0160901</doi>
        <lastModDate>2025-09-30T11:06:46.7670000+00:00</lastModDate>
        
        <creator>Sheriff Adepoju</creator>
        
        <creator>Mildred Adepoju</creator>
        
        <subject>Cognitive biases; fair AI systems; algorithmic bias; software development; bias mitigation; fairness; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(9), 2025</description>
        <description>Artificial Intelligence (AI) systems increasingly influence decisions that affect people&#39;s lives, making fairness a core requirement. However, cognitive biases, systematic deviations in human judgment, can enter AI through data, modeling choices, and oversight, amplifying social inequities. This paper examines how three bias channels, data, algorithmic, and human, manifest across the software development lifecycle and synthesizes practical strategies for mitigation. Using a qualitative review of recent scholarship and real‑world case studies, we distill a lightweight diagnostic framework that helps practitioners identify bias sources, evaluate mitigation options against effectiveness, feasibility, transparency, and scalability, and institutionalize routine audits. We illustrate the framework with representative vignettes and summarize trade‑offs between fairness goals and model performance. Our analysis recommends diverse and well‑documented datasets, fairness‑aware learning and evaluation, third‑party audits, and cross‑functional collaboration as mutually reinforcing levers. The paper contributes a developer‑oriented map of cognitive bias risks across data, model, and human processes, a four‑criterion rubric for comparing mitigation techniques, and an actionable checklist that teams can embed in their pipelines. The results aim to support software and product teams in building AI systems that are both accurate and equitable.</description>
        <description>http://thesai.org/Downloads/Volume16No9/Paper_1-Cognitive_Biases_Understanding_and_Designing_Fair_AI_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS-Aware Deployment and Synchronization of Digital Twins Over Federated Cloud Platforms for Smart Infrastructure Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608105</link>
        <id>10.14569/IJACSA.2025.01608105</id>
        <doi>10.14569/IJACSA.2025.01608105</doi>
        <lastModDate>2025-08-29T12:48:03.4570000+00:00</lastModDate>
        
        <creator>M V Narayana</creator>
        
        <creator>Naveen Reddy N</creator>
        
        <creator>Madhu S</creator>
        
        <creator>Madhu T</creator>
        
        <creator>Niladri Sekhar Dey</creator>
        
        <creator>Sanjeev Shrivastava</creator>
        
        <subject>QoS-aware digital twins; federated cloud synchronization; smart infrastructure monitoring; latency-constrained orchestration; edge-assisted deployment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Increasingly, Digital Twin (DT) systems are being leveraged in smart infrastructure settings (e.g., structural health monitoring, intelligent traffic controls, and distributed utility networks). Yet, available solutions face hurdles that can prevent real-time synchronization of DT instances across federated cloud platforms, primarily due to latency variation, quality of service (QoS) assurance, and stale data, all consequences of heterogeneous computing environments. Most solutions depend on static cloud-only deployment models, with no option for dynamic resource negotiation; these yield long update times (typically greater than 200 ms), low accuracy rates, and poor real-time responsiveness. Additionally, traditional DT models were not developed with multi-regional deployment or QoS workloads in mind. In this work, a QoS-Aware Federated Digital Twin Orchestration Framework (Q-FDTO) is designed to enable latency-critical monitoring of infrastructure across different federated cloud regions, through the integration of a hybrid edge-cloud control plane, adaptive synchronization, jitter-aware observation intervals, and dynamic resource allocation via reinforcement learning for defined QoS Service Level Objectives (SLOs). The system was evaluated on a smart city testbed of 1200 sensor nodes, monitoring structural strain, vibration, and traffic density across twelve locations. The digital twin pipeline comprises (i) ingestion via Wi-Fi MQTT, (ii) stream fusion of all sensor readings via Kalman filtering, and (iii) predictive twin modeling through a temporal graph convolutional network (T-GCN). To assess performance, sync policies were evaluated on average update latency (ms), sync drift (ms), and data consistency rate (%). The results demonstrate that Q-FDTO achieved an average update latency of 87.3 ms, reduced from 194.6 ms, and a 96.2% consistency rate across federated nodes with less than 2.5% sync drift over 10-minute intervals, showing the Q-FDTO architecture’s ability to synchronize across network boundaries and its compatibility with AWS Outposts and Azure Arc hybrid cloud environments. It establishes a scalable and practical approach to latency-sensitive DT deployments in smart infrastructure systems.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_105-QoS_Aware_Deployment_and_Synchronization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Authentication Protocol for IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608104</link>
        <id>10.14569/IJACSA.2025.01608104</id>
        <doi>10.14569/IJACSA.2025.01608104</doi>
        <lastModDate>2025-08-29T12:48:03.3930000+00:00</lastModDate>
        
        <creator>Mohamed Ech-Chebaby</creator>
        
        <creator>Hicham Zougagh</creator>
        
        <creator>Hamid Garmani</creator>
        
        <creator>Zouhair Elhadari</creator>
        
        <subject>IoT; Internet of Things; security; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The rapid evolution of the Internet of Things (IoT) offers vast opportunities in automation and connectivity, yet simultaneously introduces critical security challenges. One of the most pressing concerns lies in the heterogeneity and limited computational capabilities of IoT devices, which complicate the deployment of robust security mechanisms. In this work, we present a lightweight and secure authentication protocol designed to establish mutual authentication between a server and smart objects. Our protocol enhances the scheme proposed by Fatma et al., addressing its identified vulnerabilities. Formal security analysis using AVISPA and ProVerif confirms the protocol’s resilience against a wide range of threats. Furthermore, a practical simulation was conducted using a Raspberry Pi as the IoT device and a Core i5-based server to evaluate real-world performance. Results show that the protocol executes efficiently in real-time with a reduced authentication delay, demonstrating its feasibility for resource-constrained environments. This research contributes to the development of effective, scalable, and secure authentication solutions tailored for the IoT landscape.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_104-A_Secure_Authentication_Protocol_for_IoT_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Review to Refinement: An Expert-Informed Environmental Diagnostic Model for Stingless Bee Colony Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608103</link>
        <id>10.14569/IJACSA.2025.01608103</id>
        <doi>10.14569/IJACSA.2025.01608103</doi>
        <lastModDate>2025-08-29T12:48:03.2830000+00:00</lastModDate>
        
        <creator>Yang Lejing</creator>
        
        <creator>Rozi Nor Haizan Nor</creator>
        
        <creator>Yusmadi Yah Jusoh</creator>
        
        <creator>Nur Ilyana Ismarau Tajuddin</creator>
        
        <creator>Khairi Azhar Aziz</creator>
        
        <subject>Stingless bees; environmental monitoring; behavioral analysis; diagnostic model; expert-informed refinement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The resilience of stingless bee colonies has become increasingly challenged by erratic climate conditions and intensified environmental stressors. While previous studies have introduced diagnostic models for monitoring colony health, most remain constrained by a narrow reliance on either environmental or behavioral parameters alone. This study proposes a refined diagnostic model that builds on existing frameworks and is further shaped by expert insights from the field. The model integrates environmental inputs, specifically temperature and humidity, with behavioral activity detected via video analysis to deliver a multi-dimensional assessment of colony status. Through a structured review of the literature and interviews with apiculture experts, we identify critical gaps in conventional systems and translate those findings into a more responsive and field-deployable architecture. The result is an improved model capable of categorizing colony health with greater sensitivity and clarity, designed to support early intervention and long-term monitoring. The model is visualized through comparative schematic diagrams, showing the evolution from a basic environmental-only logic to a more holistic decision-making system.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_103-From_Review_to_Refinement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation-Driven Improvement of King Khalid University Non-Monthly Entitlement Workflows in AnyLogic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608102</link>
        <id>10.14569/IJACSA.2025.01608102</id>
        <doi>10.14569/IJACSA.2025.01608102</doi>
        <lastModDate>2025-08-29T12:48:03.2230000+00:00</lastModDate>
        
        <creator>Elaf Ali Alsisi</creator>
        
        <creator>Osman A. Nasr</creator>
        
        <creator>Badriah Mousa Alqahtani</creator>
        
        <creator>Rodoon Shawan Alnajei</creator>
        
        <subject>Non-monthly financial entitlements; operational efficiency; AnyLogic simulation; process optimization; university staff</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Aiming to offer useful recommendations for enhancing process accuracy and operational performance, this study investigates the elements affecting the efficiency and timeliness of disbursing non-monthly financial entitlements to university staff. The study developed a process workflow model using AnyLogic simulation tools and structured interviews, and then conducted two main tests to enhance effectiveness. The first implemented complete automation through an electronic platform that centralized all departmental tasks involved in financial disbursements, while the second concentrated on dynamic workload distribution to balance duties and maximize performance. By lowering service times, minimizing manual errors, and thereby simplifying task allocation, the results show that combining automated workflows with real-time workload distribution significantly increased operational efficiency. The faster, more accurate, and fairer processing of financial entitlements produced by this change highlights the need for technology-driven solutions in achieving lasting organizational excellence.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_102-Simulation_Driven_Improvement_of_King_Khalid_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grey Clustering Algorithm for Urban Air Quality Classification: A Case Study in Lima, Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608101</link>
        <id>10.14569/IJACSA.2025.01608101</id>
        <doi>10.14569/IJACSA.2025.01608101</doi>
        <lastModDate>2025-08-29T12:48:03.1430000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Katherine Paredes Guerrero</creator>
        
        <creator>Anderson Carrillo</creator>
        
        <subject>Grey clustering algorithm; air quality classification; grey systems theory; urban air pollution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study introduces a grey clustering algorithm based on the Central Triangular Whitenization Weight Function (CTWF), designed to classify urban air quality under conditions of limited or uncertain data. Based on Grey Systems Theory (GST), the proposed algorithm facilitates structured multi-criteria assessments using sparse or irregular datasets, a condition frequently encountered in urban environmental monitoring. The algorithm stands out for its low computational complexity, interpretability, and ability to integrate multiple pollutants into a single qualitative classification, making it particularly suitable for smart city applications and real-time decision support systems. To evaluate its performance, the grey clustering algorithm (CTWF) was applied to a case study in Northern Lima, Peru, covering eight semesters between 2011 and 2019 and including four key pollutants: PM10, SO2, NO2, and CO. Although all periods were classified as “Good” under national standards, the disaggregated analysis revealed PM10 as the most persistent concern, while CO levels remained consistently low, and SO2 and NO2 showed moderate fluctuations. These findings validate the algorithm’s capacity to extract pollutant-specific insights and spatiotemporal trends even in data-scarce environments. Future enhancements may include meteorological integration, broader pollutant sets (e.g., PM2.5, ozone), and satellite data to extend forecasting capabilities and spatial resolution.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_101-Grey_Clustering_Algorithm_for_Urban_Air_Quality_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Phishing Website Detection Using Optimized Ensemble Stacking Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01608100</link>
        <id>10.14569/IJACSA.2025.01608100</id>
        <doi>10.14569/IJACSA.2025.01608100</doi>
        <lastModDate>2025-08-29T12:48:03.1130000+00:00</lastModDate>
        
        <creator>Zainab Alamri</creator>
        
        <creator>Abeer Alhuzali</creator>
        
        <creator>Bassma Alsulami</creator>
        
        <creator>Daniyal Alghazzawi</creator>
        
        <subject>Phishing detection; machine learning; ensemble stacking; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Phishing attacks remain a persistent and evolving cybersecurity threat, necessitating the development of highly accurate and efficient detection mechanisms. This research introduces an optimized ensemble stacking framework for phishing website detection, leveraging advanced machine learning techniques, hybrid feature preprocessing, and meta-learning strategies. The proposed approach systematically evaluates nine diverse base classifiers: XGBoost, CatBoost, LightGBM, Random Forest, Gradient Boosting, Extra Trees, Support Vector Classifier, AdaBoost, and Bagging. We compare baseline classifiers, a standard ensemble stacking model, and four optimized stacking configurations across four balanced and imbalanced datasets. Our optimized ensemble stacking achieves perfect accuracy (one hundred percent) on the first two datasets, and over ninety-nine percent accuracy on the two more challenging imbalanced datasets. A direct comparison with related studies demonstrates that our optimized stacking approach delivers superior detection accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_100-Enhanced_Phishing_Website_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging Tradition and Technology: Leveraging ERP Systems for Streamlined Supply Chains and Modernized Keropok Lekor Production Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160899</link>
        <id>10.14569/IJACSA.2025.0160899</id>
        <doi>10.14569/IJACSA.2025.0160899</doi>
        <lastModDate>2025-08-29T12:48:03.0200000+00:00</lastModDate>
        
        <creator>Faizah Aplop</creator>
        
        <creator>Wan Muhammad Ikhwan Wan Mohammad</creator>
        
        <creator>Muhammad Nasyrul Adly Mohd Afendy</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Fakhrul Adli Mohd Zaki</creator>
        
        <creator>Rosaida Rosly</creator>
        
        <creator>Ismail Abu Bakar</creator>
        
        <subject>Overall equipment effectiveness; real-time information; centralization; data-driven workflows; business digitalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Online marketplaces and social media offer substantial opportunities for business growth, and they have contributed greatly to the increased demand for keropok lekor (fish cracker) from Terengganu throughout Malaysia as market reach expands. The positive effects of these online platforms as significant digital marketing tools encourage keropok lekor producers in Terengganu to innovate and diversify their products, with the goal of marketing and increasing sales of keropok lekor at a larger scale. Some of these innovations include selling keropok lekor in pre-packaged form and introducing different versions of keropok lekor with more flavors, textures, and shapes to meet a broader range of customer preferences. This positive development promotes the commercialization of keropok lekor, which in turn requires producers such as ROMA Food Industry Sdn. Bhd. (RFI) to handle higher market demand without significant disruptions. An automated approach is crucial for streamlining keropok lekor business operations, enabling producers to handle not only the increased market demand but also growing work volumes and expansion without compromising quality or efficiency. ROMAns is an Enterprise Resource Planning (ERP) system built to optimize keropok lekor business processes by facilitating the flow of information across different functions, improving efficiency, and gaining a competitive edge through integrated data and streamlined operations.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_99-Bridging_Tradition_and_Technology_Leveraging_ERP_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Next-Generation Network Security: An Analysis of Threats, Challenges and Emerging Intelligent Defenses Within SDN and NFV Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160898</link>
        <id>10.14569/IJACSA.2025.0160898</id>
        <doi>10.14569/IJACSA.2025.0160898</doi>
        <lastModDate>2025-08-29T12:48:02.9730000+00:00</lastModDate>
        
        <creator>Amina SAHBI</creator>
        
        <creator>Faouzi JAIDI</creator>
        
        <creator>Adel BOUHOULA</creator>
        
        <subject>Next generation network security; software defined networking; network function virtualization; network security; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The integration of Software Defined Networking (SDN) and Network Function Virtualization (NFV) offers considerable advantages in terms of scalability, interoperability, and cost-efficiency. They redefine network architecture, replacing rigid hardware-based control with a more flexible, software-driven approach. However, this convergence also introduces significant security threats and challenges due to architectural vulnerabilities and an expanded attack surface. This study presents a comprehensive overview of the key security risks associated with SDN/NFV networks. It analyzes existing countermeasures, highlighting their effectiveness in addressing specific threats while identifying limitations in achieving comprehensive security due to inherent architectural vulnerabilities. The study concludes with a discussion on open challenges and future research directions toward more secure and resilient network infrastructures. This study highlights the importance of an integrated security approach and identifies areas where further research is required to enhance SDN/NFV security.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_98-Next_Generation_Network_Security_An_Analysis_of_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YOLOv8s-Swin: Enhanced Tomato Ripeness Detection for Smart Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160897</link>
        <id>10.14569/IJACSA.2025.0160897</id>
        <doi>10.14569/IJACSA.2025.0160897</doi>
        <lastModDate>2025-08-29T12:48:02.9100000+00:00</lastModDate>
        
        <creator>Jalal Uddin Md Akbar</creator>
        
        <creator>Syafiq Fauzi Kamarulzaman</creator>
        
        <subject>Agricultural automation; attention mechanism; computer vision; smart agriculture; object detection; YOLO; swin transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Accurate object detection and classification are paramount in precision agriculture for assessing ripeness stages and optimizing yield, particularly for high-value crops like tomatoes. Traditional manual inspection methods are laborious, time-consuming, and error-prone. Furthermore, existing deep learning models often struggle with real-world agricultural challenges such as varying lighting, occlusions from foliage or other fruits, and dense clustering of small objects. To address these limitations and enhance tomato production efficiency and quality in diverse agricultural conditions, this study introduces YOLOv8s-Swin, an advanced object detection model. YOLOv8s-Swin integrates the powerful YOLOv8s architecture with a Swin Transformer module (C3STR) to capture global and local contextual information, crucial for robust small object detection. It also incorporates Focus, Depthwise Convolution (DWconv), Spatial Pyramid Pooling with Contextual Spatial Pyramid Convolution (SPPCSPC), and C2 modules for preserving fine details, reducing computational overhead, enhancing multi-scale feature fusion, and improving high-level semantic feature extraction, respectively. The Wise Intersection over Union (WIoU) loss function is adopted to enhance localization and address convergence issues. Evaluated on a comprehensive tomato image dataset, YOLOv8s-Swin demonstrated superior performance with a mean Average Precision (mAP@0.5) of 88.3%, precision of 84.4%, recall of 79.9%, and an F1-Score of 0.821. This significantly surpasses the base YOLOv8s (84.7% mAP@0.5, 0.795 F1-Score) and other models like Faster R-CNN, SSD, YOLOv4, YOLOv5s, and YOLOv7, all under identical conditions. Maintaining a competitive inference speed of 166.67 FPS, YOLOv8s-Swin offers a robust and efficient solution for AI-driven crop management and sustainable food production.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_97-YOLOv8s_Swin_Enhanced_Tomato_Ripeness_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Strategies for Big Data Resource and Storage Optimization: An AI Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160896</link>
        <id>10.14569/IJACSA.2025.0160896</id>
        <doi>10.14569/IJACSA.2025.0160896</doi>
        <lastModDate>2025-08-29T12:48:02.8300000+00:00</lastModDate>
        
        <creator>Ayoub Sghir</creator>
        
        <creator>Ayoub Allali</creator>
        
        <creator>Najat Rafalia</creator>
        
        <creator>Jaafar Abouchabaka</creator>
        
        <subject>Artificial intelligence; big data; optimization; resource; storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The increasing use of advanced technologies with artificial intelligence in daily life has become a necessity for simplifying tasks, and it leads to the generation of huge amounts of data. This data comes from various sources: media, social networks, connected objects, online transactions, and smart devices, among others. It is generally organized into three categories: structured, unstructured, and semi-structured, and is collectively referred to as Big Data, characterized by its enormous size, fast flow, and the diversity of its sources. The importance of this data lies in its ability to provide future perspectives and improve the decision-making process. To get the most out of it, the data must be stored and processed, but current technologies face many challenges and are often insufficient to cope with the volumes generated. Advanced and highly efficient technologies are needed, capable of storing the data in its entirety and processing it faster. Artificial intelligence can also help improve the use of storage and processing resources by compressing data or deleting redundant data, thus saving storage space. This study discusses various approaches for optimizing Big Data processing, such as the use of AI compression techniques, the PSNR-SSIM method, and many others. The compression ratio for these algorithms is around 90%. With these technologies, it is possible to optimize the use of storage space, ensuring efficient and optimized management.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_96-Advanced_Strategies_for_Big_Data_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Scalable and Privacy-Preserving Hybrid Blockchain Architecture for Secure Healthcare Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160895</link>
        <id>10.14569/IJACSA.2025.0160895</id>
        <doi>10.14569/IJACSA.2025.0160895</doi>
        <lastModDate>2025-08-29T12:48:02.8170000+00:00</lastModDate>
        
        <creator>Sanjida Sharmin</creator>
        
        <creator>Mohammad Shamsul Arefin</creator>
        
        <creator>Pranab Kumar Dhar</creator>
        
        <creator>Zinnia Sultana</creator>
        
        <creator>Sultana Akter</creator>
        
        <subject>Blockchain; healthcare; AES-256-GCM; merkle tree; IPFS; data security; scalability; gas optimization; ethereum; hybrid architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The protection of sensitive medical information has become a critical concern in modern digital healthcare. This study introduces a Hybrid Architecture that ensures secure and reliable healthcare data management through the integration of blockchain technology with off-chain and on-chain mechanisms. Patient records are encrypted using AES-256-GCM, stored in the InterPlanetary File System (IPFS), and verified using Merkle Tree structures, with only the root values anchored on Ethereum smart contracts. This design guarantees data security and integrity while achieving significant gas optimization by reducing on-chain storage costs. Experimental evaluation demonstrates that the proposed system achieves high scalability, efficient transaction processing, and strong resistance to tampering, ensuring confidentiality and auditability. By combining blockchain, cryptographic techniques, and distributed storage, the framework addresses pressing challenges of security, privacy, and trust in healthcare ecosystems. The results highlight the potential of Hybrid Architecture models to deliver a cost-effective, privacy-preserving, and scalable solution for next-generation Healthcare Data Security.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_95-A_Scalable_and_Privacy_Preserving_Hybrid_Blockchain_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing the Healthcare Supply Chain Using Blockchain-Enabled Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160894</link>
        <id>10.14569/IJACSA.2025.0160894</id>
        <doi>10.14569/IJACSA.2025.0160894</doi>
        <lastModDate>2025-08-29T12:48:02.7530000+00:00</lastModDate>
        
        <creator>Muhammad Saad</creator>
        
        <creator>Hamail Ashraf</creator>
        
        <creator>Muhammad Saffi Ullah Khan</creator>
        
        <creator>Muhammad Awais Javed</creator>
        
        <creator>Ahmad Naseem Alvi</creator>
        
        <creator>Ahmed Alfakeeh</creator>
        
        <subject>Blockchain; smart contracts; supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Blockchain is an innovative technology that has shown its effectiveness in many sectors, such as healthcare, and many more sectors are likely to experience a revolutionary transformation. In healthcare, the blockchain functions as a distributed network that is constantly updated with records, ensuring that they cannot be removed or altered without consensus. Blockchain-based smart contracts thus enable clients to carry out transactions without relying on intermediaries, making them more reliable. Smart contracts govern numerous activities and dealings among stakeholders, automating processes, increasing visibility, and maximizing productivity. Healthcare supply chains can leverage blockchain technology to address pain points such as connectivity, traceability, and the fight against counterfeit medicines. The purpose of our analysis was to improve traceability among healthcare supply chain elements, such as the transportation of pharmaceuticals or medical equipment. For a more practical implementation, we also worked on minimizing the cost of deploying smart contracts on the blockchain for the healthcare logistics system.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_94-Securing_the_Healthcare_Supply_Chain_Using_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Factors Influencing School Dropout: A Logit Model Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160893</link>
        <id>10.14569/IJACSA.2025.0160893</id>
        <doi>10.14569/IJACSA.2025.0160893</doi>
        <lastModDate>2025-08-29T12:48:02.7370000+00:00</lastModDate>
        
        <creator>Noaman LAKCHOUCH</creator>
        
        <creator>Lamarti Sefian MOHAMMED</creator>
        
        <creator>Mustapha KHALFOUNI</creator>
        
        <subject>School dropout; development index; Morocco; cross-sectional study; logistic regression; academic performance; exam grades; age; gender; school transportation; academic success</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>School dropout negatively impacts a country’s development index, with numerous factors contributing to this complex phenomenon. To investigate the factors associated with school dropout in a specific region of Morocco, a cross-sectional study was conducted, encompassing a weighted sample of 274 junior secondary education students. Data collection was facilitated through a questionnaire administered to school directors. The data processing involved two main stages: preparation and modeling. The modeling phase employed a binary logistic regression model, focusing on the student’s dropout status as the dependent variable. The study’s findings highlighted several significant factors associated with school dropout: academic performance (as indicated by exam grades), the student’s age and gender, and the availability of school transportation services that encourage students to continue their studies. Additionally, while class size also played a significant role, its impact was deemed less critical compared to the other factors identified. These results underscore that school dropout is influenced by a multitude of factors, suggesting the need for targeted interventions to prevent dropout and foster academic success, particularly among female students.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_93-Exploring_the_Factors_Influencing_School_Dropout.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Self-Adaptation in the Cloud: ML-Heal’s Framework for Proactive Fault Detection and Recovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160892</link>
        <id>10.14569/IJACSA.2025.0160892</id>
        <doi>10.14569/IJACSA.2025.0160892</doi>
        <lastModDate>2025-08-29T12:48:02.6770000+00:00</lastModDate>
        
        <creator>Qais Al-Na’amneh</creator>
        
        <creator>Mahmoud Aljawarneh</creator>
        
        <creator>Rahaf Hazaymih</creator>
        
        <creator>Ayoub Alsarhan</creator>
        
        <creator>Khalid Hamad Alnafisah</creator>
        
        <creator>Nayef H. Alshammari</creator>
        
        <creator>Sami Aziz Alshammari</creator>
        
        <subject>Cloud computing; service composition; self-healing systems; autonomic computing; machine learning; anomaly detection; automated recovery; fault tolerance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Cloud computing environments increasingly host applications constructed from orchestrated service compositions, which deliver enhanced functionality through distributed workflows. This paradigm, however, introduces vulnerabilities where component failures can cascade, disrupting entire applications. Conventional fault tolerance often falls short in these dynamic settings. This paper introduces ML-Heal, an autonomous self-healing framework architected to bolster the resilience of such service compositions. ML-Heal leverages machine learning for proactive failure detection, precise diagnosis, and intelligent recovery strategy selection. The framework integrates real-time monitoring data, applies ML-based anomaly detection and classification to identify faults, and plans corrective actions via a learned policy or predictive models. Implemented using Python with scikit-learn models and a custom orchestration layer, its efficacy is demonstrated through simulated fault injection scenarios. Illustrative system architecture and evaluation results show that this ML-driven methodology significantly curtails recovery time and augments availability when confronted with faults, showcasing AI’s potential in creating more robust, self-adaptive cloud service compositions with minimal human oversight.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_92-Autonomous_Self_Adaptation_in_the_Cloud_ML_Heals_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Recommender System for Precision Chemical Application in Banana Cultivation Using Matrix Factorization and Content-Based Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160891</link>
        <id>10.14569/IJACSA.2025.0160891</id>
        <doi>10.14569/IJACSA.2025.0160891</doi>
        <lastModDate>2025-08-29T12:48:02.6130000+00:00</lastModDate>
        
        <creator>Ravi Kumar Tirandasu</creator>
        
        <creator>Prasanth Yalla</creator>
        
        <creator>Pachipala Yellamma</creator>
        
        <subject>Hybrid recommendation system; content-based filtering; matrix factorization; banana disease management; agricultural data heterogeneity; precision agriculture; chemical application optimization; black sigatoka</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Proper management of pesticides and fertilizers is critical for effective control of banana diseases, but integrating heterogeneous agricultural data has been a persistent challenge. The novelty of this study is a hybrid recommendation system that combines Content-Based Filtering (CBF) with Matrix Factorization (MF) to recommend chemical treatments during banana cultivation. The system exploits heterogeneous data, such as soil nutrient profiles (NPK, pH), climatic variables, and disease signatures, to create customized chemical recommendations for disease management. A real-world agricultural dataset was used to evaluate the hybrid approach, and the system’s precision, recall, F1-score, and accuracy were measured. The findings indicate that the proposed model outperformed traditional single-method and user-based recommendation systems and predicted disease outbreaks with high accuracy (F1-score of up to 98 percent for Black Sigatoka); these results were highly consistent across other disease classes and different chemical interventions. Notably, the hybrid system helps not only to optimize chemical costs and crop yields but also to promote environmental sustainability by reducing superfluous chemical use. The methodology, the characteristics of the dataset, and the measures employed are described, explaining how the integration of CBF and MF addresses the complexity and variability of agricultural data. The solution provided in this work is a high-performance, scalable tool for precision agriculture that supports informed decision-making by farmers and agricultural planners.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_91-Hybrid_Recommender_System_for_Precision_Chemical_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Robust DNA Storage System with a Multilayer Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160890</link>
        <id>10.14569/IJACSA.2025.0160890</id>
        <doi>10.14569/IJACSA.2025.0160890</doi>
        <lastModDate>2025-08-29T12:48:02.5330000+00:00</lastModDate>
        
        <creator>Ayoub Sghir</creator>
        
        <creator>Manar Sais</creator>
        
        <creator>Douha Bourached</creator>
        
        <creator>Jaafar Abouchabaka</creator>
        
        <creator>Najat Rafalia</creator>
        
        <subject>DNA storage; data compression; error correction; huffman encoding; run-length encoding; LZW; LZ77</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The growing demand for data storage requires innovative, resilient solutions to the challenges of cost, space, and energy consumption posed by current methods. DNA stands out as a promising next-generation data storage medium, offering a remarkable storage density of 10^19 bits per cubic centimeter, some eight orders of magnitude denser than conventional media. This study explores the potential of DNA storage by proposing an intelligent multi-layer solution to overcome current technological challenges. The system combines the storage capabilities of DNA with sophisticated solutions such as data compression, error correction, and cryptography, transforming the concept of DNA storage into a tangible reality. This study also focused on the first layer dedicated to data compression. The results obtained represent a significant advance in the evaluation of the potential of different compression algorithms, through a comparative study of techniques such as Huffman coding, run-length coding, LZW, and LZ77. This analysis enabled us to define the essential components of the first layer of the proposed approach. Finally, the interface in the digital domain to visually present the overall results of the project was introduced, while providing insight into the system’s efficiency, data integrity and ease of use.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_90-Towards_a_Robust_DNA_Storage_System_with_a_Multilayer_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach Based on Named Entity Recognition and Semantic Analysis for Recruitment Efficiency and Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160889</link>
        <id>10.14569/IJACSA.2025.0160889</id>
        <doi>10.14569/IJACSA.2025.0160889</doi>
        <lastModDate>2025-08-29T12:48:02.4100000+00:00</lastModDate>
        
        <creator>Ismail Ifakir</creator>
        
        <creator>Noureddine Mohtaram</creator>
        
        <creator>El Habib Nfaoui</creator>
        
        <creator>Abderrahim Zannou</creator>
        
        <creator>Mohammed El Hassouni</creator>
        
        <subject>Named entity recognition; large language models; feature extraction; generate question; matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Modern recruitment requires smarter, faster, and more inclusive methods to manage the growing volume and diversity of job applications and candidate resumes. Manual screening is often ineffective and unreliable, especially in low-resource or multilingual contexts. To address this challenge, we propose an approach that automates and optimizes key stages of the recruitment process. This three-stage approach includes: 1) extracting structured data from resumes using a robust Named Entity Recognition (NER) system, which comprises a NER annotator, a feature extractor, and a transition-based parser; 2) employing a fine-tuned transformer model to perform semantic matching between candidates and job descriptions; and 3) leveraging a large language model to generate interview questions tailored to specific job requirements, thereby improving the relevance and personalization of candidate assessments. The recruitment system was tested on a large-scale resume and job posting dataset across multiple domains. Our NER model reported an F1-score of 85.11% in entity extraction, and the matching component reported accuracy levels as high as 92% when using hierarchical job classes. The results prove the efficacy of combining deep learning techniques with semantic reasoning in enhancing automation, accuracy, and fairness in hiring.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_89-An_Approach_Based_on_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Enabled Healthcare Supply Chain: Review, Case Study and Future Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160888</link>
        <id>10.14569/IJACSA.2025.0160888</id>
        <doi>10.14569/IJACSA.2025.0160888</doi>
        <lastModDate>2025-08-29T12:48:02.3170000+00:00</lastModDate>
        
        <creator>Muhammad Saad</creator>
        
        <creator>Kamran Ali</creator>
        
        <creator>Muhammad Awais Javed</creator>
        
        <creator>Ahmad Naseem Alvi</creator>
        
        <creator>Ahmed Alfakeeh</creator>
        
        <subject>Blockchain; smart contracts; healthcare; supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Blockchain is a major component of future smart healthcare that can improve the security, reliability, trust and automation of healthcare supply chain processes. Blockchain has several applications in areas such as medicine procurement, supply chain tracking, and drug traceability. In this study, we present a review of recent works in the area of Blockchain-enabled healthcare supply chains with a particular focus on pharmaceutical supply chains. We categorize the literature into three major areas, namely procurement, asset management, and system efficiency improvement. We also present a case study of efficient smart contracts for the pharmaceutical supply chain, in which we identify the different stakeholders of the pharmaceutical supply chain and develop the functions and tasks to be performed by each stakeholder and their interactions with each other. We implement the proposed smart contract in the Remix Integrated Development Environment (IDE) using the Solidity language and evaluate the transaction cost of each function used in the smart contract. Lastly, we also present future opportunities for using Blockchain-enabled healthcare supply chains.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_88-Blockchain_Enabled_Healthcare_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter-Free Negative Extreme Anomalous Undersampling Techniques on Class Imbalance Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160887</link>
        <id>10.14569/IJACSA.2025.0160887</id>
        <doi>10.14569/IJACSA.2025.0160887</doi>
        <lastModDate>2025-08-29T12:48:02.2070000+00:00</lastModDate>
        
        <creator>Benjawan Jantamat</creator>
        
        <creator>Krung Sinapiromsaran</creator>
        
        <subject>Classification; class imbalance; imbalanced datasets; undersampling; parameter-free method; negative extreme anomalous score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This research addressed the critical challenge of class imbalance in classification, which is a prevalent issue in real-world applications. Standard classifiers often struggled with imbalanced datasets and frequently misclassified the minority class (positive instances) due to the overwhelming presence of the majority class (negative instances). The proposed Negative Extreme Anomalous Undersampling Technique (NEXUT) was introduced as a parameter-free approach. It leveraged the negative extreme anomalous score to strategically eliminate negative instances located in overlapping regions. This targeted removal was designed to improve the classifier’s ability to effectively distinguish between the two classes. To evaluate the effectiveness of the proposed method, we conducted a comprehensive comparison with established undersampling techniques. The evaluation utilized both synthetic datasets and twelve datasets from the UCI repository. Six different classifiers were employed to ensure a diverse and unbiased performance assessment. Results from the Wilcoxon signed-rank test confirmed that the proposed method achieved significantly higher performance compared to existing techniques. These findings demonstrated the potential of NEXUT as a robust and valuable tool for addressing class imbalance problems.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_87-Parameter_Free_Negative_Extreme_Anomalous_Undersampling_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DALG: A Dual Attention-Based LSTM-GRU Model for Exchange Rate Volatility Forecasting in China’s Forex Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160886</link>
        <id>10.14569/IJACSA.2025.0160886</id>
        <doi>10.14569/IJACSA.2025.0160886</doi>
        <lastModDate>2025-08-29T12:48:02.1130000+00:00</lastModDate>
        
        <creator>Shamaila Butt</creator>
        
        <creator>Mohammad Abrar</creator>
        
        <creator>Muhammad Ali Chohan</creator>
        
        <creator>Muhammad Farrukh Shahzad</creator>
        
        <subject>Exchange rate forecasting; deep learning; LSTM-GRU hybrid; attention mechanism; financial time series; USD/RMB volatility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Exchange rate volatility forecasting plays a vital role in guiding financial decisions and economic planning, particularly in China’s dynamic foreign exchange market. This study proposes a novel deep learning framework, termed DALG (Dual Attention-based LSTM-GRU), designed to capture complex temporal patterns and feature dependencies in high-frequency USD/RMB exchange rate data. By integrating LSTM and GRU architectures with a dual-stage attention mechanism, comprising input and temporal attention, the proposed DALG model enhances the interpretability and accuracy of exchange rate volatility forecasts. The model is empirically evaluated against benchmark models such as LSTM, GRU, and a hybrid LSTM-DA using standard performance metrics, including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Experimental results demonstrate that the DALG model consistently outperforms traditional and hybrid deep learning models, offering superior predictive performance. The findings suggest that attention-enhanced deep learning architectures hold significant promise for robust financial time series modeling and forecasting in volatile forex markets.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_86-DALG_A_Dual_Attention_Based_LSTM_GRU_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Ulcerative Colitis Detection via Integrated Convolutional Feature Encoding, Bidirectional Temporal Context, and Data Augmentation for Class Imbalance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160885</link>
        <id>10.14569/IJACSA.2025.0160885</id>
        <doi>10.14569/IJACSA.2025.0160885</doi>
        <lastModDate>2025-08-29T12:48:01.9270000+00:00</lastModDate>
        
        <creator>Dharmendra Gupta</creator>
        
        <creator>Jayesh Gangrade</creator>
        
        <creator>Yadvendra Pratap Singh</creator>
        
        <creator>Shweta Gangrade</creator>
        
        <subject>Ulcerative Colitis Detection (UCD); CNNs; Bi-GRU; Bi-LSTM; medical image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Ulcerative Colitis (UC), a chronic inflammatory bowel disease, presents significant diagnostic challenges due to its overlapping symptoms with other gastrointestinal disorders and the complex visual patterns in endoscopic imagery. Accurate and early detection is essential to guide effective treatment and improve patient outcomes. This research introduces a robust hybrid framework that combines convolutional feature extraction with bidirectional temporal modelling for the precise identification of UC from medical imagery. The proposed approach integrates CNNs—including MobileNetV3Large, Inception v3, InceptionResNetV2, and Xception—with Bi-GRU and Bi-LSTM networks. The CNNs are responsible for capturing high-level spatial features, while the Bi-GRU and Bi-LSTM modules enhance temporal context understanding, enabling the model to effectively interpret subtle patterns and transitions characteristic of UC. Each hybrid model was designed and thoroughly tested on a curated set of experimental data. Among the combinations, the highest accuracy, 93.10%, was obtained with the Xception + Bi-GRU + Bi-LSTM model. Inception v3 + Bi-GRU + Bi-LSTM followed closely, attaining an accuracy of 92.62%. Different data augmentation techniques were deployed to handle the class imbalance in the LIMUC dataset. Notably, the bidirectional temporal modelling component significantly improved the recognition of sequential dependencies in medical image frames, enhancing the model’s diagnostic robustness. The findings demonstrate that integrating CNNs with bidirectional temporal encoders offers a promising solution for UC detection, providing a valuable tool for clinicians in automated diagnostic systems. This study not only contributes to the advancement of intelligent medical imaging but also paves the way for deploying real-time UC detection models in clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_85-Robust_Ulcerative_Colitis_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection and Classification of Microarray Datasets Based on an Improved Binary Harris Hawks Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160884</link>
        <id>10.14569/IJACSA.2025.0160884</id>
        <doi>10.14569/IJACSA.2025.0160884</doi>
        <lastModDate>2025-08-29T12:48:01.7830000+00:00</lastModDate>
        
        <creator>Guoxia LI</creator>
        
        <creator>Wen SHI</creator>
        
        <creator>Jingyu ZHANG</creator>
        
        <creator>Zhixia GU</creator>
        
        <creator>Jixiang XU</creator>
        
        <creator>Yueyue LI</creator>
        
        <creator>Yaxing SUN</creator>
        
        <subject>Microarray dataset; feature selection; parameter optimization; ReliefF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>High-dimensional microarray datasets are prone to the “curse of dimensionality” due to feature redundancy, which impairs the performance of machine learning models, and feature selection is the key to addressing this issue. This study proposes an Improved Binary Harris Hawks Optimization algorithm (IBHHO) for feature selection in high-dimensional microarray data. Core innovations comprise: i) a hybrid filter-wrapper framework integrating a filter method (ReliefF), a wrapper method (HHO) and a classifier (SVM) to simultaneously optimize ReliefF parameters, SVM hyperparameters, and feature subsets; ii) a differentiated exploration–exploitation strategy leveraging HHO’s two-stage behavior (global parameter optimization during exploration; feature refinement and local parameter tuning during exploitation); and iii) an elite feature guidance strategy that reduces redundant exploration and accelerates convergence via fixed key-feature anchor points. Experiments conducted on eight public microarray datasets demonstrate that IBHHO reduces feature counts while improving classification accuracy, achieving comprehensive performance superior to benchmark algorithms. Consequently, IBHHO offers an efficient feature selection framework for high-dimensional biomedical data analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_84-Feature_Selection_and_Classification_of_Microarray_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulysis: A Method to Change Impact Analysis in Simulink Projects Based on WAVE-CIA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160883</link>
        <id>10.14569/IJACSA.2025.0160883</id>
        <doi>10.14569/IJACSA.2025.0160883</doi>
        <lastModDate>2025-08-29T12:48:01.7070000+00:00</lastModDate>
        
        <creator>Hoang-Viet Tran</creator>
        
        <creator>Ta Van Thang</creator>
        
        <creator>Cao Xuan Son</creator>
        
        <creator>Do Trong Thu</creator>
        
        <creator>Pham Ngoc Hung</creator>
        
        <subject>Change impact analysis; WAVE-CIA; MATLAB/Simulink projects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>MATLAB and Simulink, which have more than 5 million users and are installed at more than 100,000 businesses, universities, and government organizations, are widely used in numerous large-scale projects across various industries. These projects continually evolve in response to changes in business logic. However, managing the impact of these changes on Simulink projects presents several challenges to guaranteeing the quality of these projects. To address this, we propose a WAVE-CIA-based method named Simulysis for change impact analysis (CIA) in Simulink projects. The core idea behind Simulysis is to directly analyze Simulink project files and construct the project’s corresponding call graph. By comparing the call graphs from the old and new project versions, Simulysis computes the change set. Subsequently, Simulysis applies the WAVE-CIA method to this change set and the call graph to identify the impact set. Additionally, Simulysis provides a signal tracing method that helps system engineers follow, check, and debug signals through the system. We have implemented Simulysis as a tool with the same name and conducted experiments using several open-source Simulink projects. The experiments demonstrate that Simulysis effectively performs the CIA process and retrieves the impact set, producing promising results and demonstrating the practical applicability of Simulysis for real-world projects. Further discussions about Simulysis are provided in the paper.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_83-Simulysis_A_Method_to_Change_Impact_Analysi_in_Simulink_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Versus AI: A Comparative Study of Zero-Shot LLMs and Transformer Models Against Human Annotations for Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160882</link>
        <id>10.14569/IJACSA.2025.0160882</id>
        <doi>10.14569/IJACSA.2025.0160882</doi>
        <lastModDate>2025-08-29T12:48:01.6600000+00:00</lastModDate>
        
        <creator>Dimah Alahmadi</creator>
        
        <subject>Large Language Models (LLMs); transformers; NLP; annotation; inter-rater agreement; sentiment analysis; Saudi dialects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Accurate sentiment analysis in Arabic natural language processing (NLP) remains a complex task due to the language’s rich morphology, syntactic variability, and diverse dialects. Traditional annotation approaches, which require human experts, face significant challenges related to inter-annotator agreement and dialectal understanding. Recent advances in transformer-based models and large language models (LLMs) offer new techniques for generating annotations. This paper presents a comparative evaluation of three sentiment annotation strategies applied to Saudi dialect tweets: human expert labeling, fine-tuned transformer models (specifically CAMeLBERT-DA), and zero-shot inference using GPT-4o. CAMeLBERT-DA, which is trained specifically for Arabic sentiment tasks and dialects, demonstrates robust performance with fast, scalable predictions. GPT-4o, on the other hand, shows competitive zero-shot accuracy without fine-tuning, making it a practical solution for real-time applications. We investigate how each approach performs on two datasets, each comprising more than 4,000 Saudi tweets covering a wide spectrum of dialects and sentiment expressions. Our methodology involves analyzing consistency across annotations using inter-rater agreement metrics such as Cohen’s Kappa, Pearson correlation, and class-specific agreement rates. The results reveal that while human annotations capture cultural and contextual subtleties, they suffer from inconsistency, particularly in ambiguous or dialect-specific cases. This study contributes to the growing body of work on annotation methodologies by highlighting the strengths and limitations of both human and AI-based annotators in Arabic NLP. Our findings suggest that the zero-shot use of domain-specific transformers like CAMeLBERT-DA alongside general-purpose LLMs such as GPT-4o shows a moderate correlation with actual human annotators. The paper concludes with recommendations for building reliable ground truth datasets and integrating AI-assisted labeling into Arabic NLP tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_82-Human_Versus_AI_A_Comparative_Study_of_Zero_Shot_LLMs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art in Software Security Visualization: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160881</link>
        <id>10.14569/IJACSA.2025.0160881</id>
        <doi>10.14569/IJACSA.2025.0160881</doi>
        <lastModDate>2025-08-29T12:48:01.5670000+00:00</lastModDate>
        
        <creator>Ishara Devendra</creator>
        
        <creator>Chaman Wijesiriwardana</creator>
        
        <creator>Prasad Wimalaratne</creator>
        
        <subject>Security visualization; vulnerability analysis; threat intelligence; compliance monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Software security visualization is an interdisciplinary field that combines the technical complexity of cybersecurity, including threat intelligence and compliance monitoring, with visual analytics, transforming complex security data into easily digestible visual formats. As software systems grow more complex and the threat landscape evolves, traditional text-based and numerical methods for analyzing and interpreting security concerns become increasingly ineffective. The purpose of this paper is to systematically review existing research and create a comprehensive taxonomy of software security visualization techniques through the literature, categorizing these techniques into four types: graph-based, notation-based, matrix-based, and metaphor-based visualization. This systematic review explores over 60 recent key research papers in software security visualization, highlighting key issues, recent advancements, and prospective future research directions. From the comprehensive analysis, two main areas were distinctly highlighted within the broader scope of software development visualization, which focuses on advanced methods for depicting software architecture: operational security visualization and cybersecurity visualization. The findings highlight the necessity for innovative visualization techniques that adapt to the evolving security landscape, with practical implications for enhancing threat detection, improving security response strategies, and guiding future research.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_81-State_of_the_Art_in_Software_Security_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI and 5G Integration for Smart City Energy Systems: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160880</link>
        <id>10.14569/IJACSA.2025.0160880</id>
        <doi>10.14569/IJACSA.2025.0160880</doi>
        <lastModDate>2025-08-29T12:48:01.3930000+00:00</lastModDate>
        
        <creator>TALBI Chaymae</creator>
        
        <creator>Rahmouni M’hamed</creator>
        
        <creator>OUAHBI Younesse</creator>
        
        <creator>ZITI Soumia</creator>
        
        <subject>Smart cities; artificial intelligence; 5G; energy management; smart grids; renewable energy integration; IoT; machine learning; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Smart cities increasingly rely on Artificial Intelligence (AI), 5G, and Internet of Things (IoT) technologies to enhance energy management and support real-time decision-making in smart grids. This study presents a systematic literature review of recent research on the integration of AI and 5G in urban energy systems, with a focus on sustainability goals. It examines how these technologies are used for renewable energy integration, demand-side control, and predictive maintenance across smart environments. Using data from OpenAlex, Scopus, and Web of Science covering the period 2018 to 2025, the review was filtered by language, domain, and scientific relevance. Key findings reveal the use of machine learning models for forecasting, anomaly detection, and system optimization. The review also identifies technical, ethical, and infrastructural challenges, including data heterogeneity, limited interoperability, and regional inequalities in deployment. While AI and 5G offer promising capabilities for real-time monitoring and system automation, the literature shows persistent gaps in algorithm robustness and standardized integration frameworks. The paper emphasizes the need for validated, scalable solutions to achieve long-term energy sustainability. This review provides a clear overview of current trends and future directions in smart energy systems, contributing to a better understanding of how digital technologies shape the future of sustainable urban infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_80-AI_and_5G_Integration_for_Smart_City_Energy_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prompt-Driven Framework for Reflective Evaluation of Course Alignment Using Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160879</link>
        <id>10.14569/IJACSA.2025.0160879</id>
        <doi>10.14569/IJACSA.2025.0160879</doi>
        <lastModDate>2025-08-29T12:48:01.2830000+00:00</lastModDate>
        
        <creator>Mashael M. Alsulami</creator>
        
        <subject>Prompt engineering; Course Learning Outcomes (CLOs); Large Language Models (LLMs); curriculum alignment; educational quality assurance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Large language models (LLMs) such as ChatGPT are gaining attention in educational settings, yet their potential role in supporting course design and academic quality assurance remains underexplored. This study introduces a structured, prompt-driven framework that uses ChatGPT as a reflective tool to help faculty and curriculum designers evaluate the alignment between course learning outcomes (CLOs), assessment methods, and teaching strategies. Grounded in the standards of the National Center for Academic Accreditation and Evaluation (NCAAA) and Bloom’s Revised Taxonomy, the system generates targeted, context-aware feedback using structured prompts modeled after official NCAAA forms. To enhance reliability and reduce hallucinations, the framework employs template-based prompt engineering and rule-based cognitive classification. A large-scale analysis was conducted across 56 CLOs to assess internal consistency, followed by expert validation from two academic reviewers who evaluated a sample of AI-generated feedback for accuracy, usefulness, and cognitive alignment. The findings highlight the tool’s ability to surface alignment issues and offer constructive recommendations, while also demonstrating its potential as a scalable support system for curriculum review and accreditation readiness, complementing rather than replacing human expertise.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_79-A_Prompt_Driven_Framework_for_Reflective_Evaluation_of_Course_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph-Based Clustering of Short Texts Using Word Embedding Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160878</link>
        <id>10.14569/IJACSA.2025.0160878</id>
        <doi>10.14569/IJACSA.2025.0160878</doi>
        <lastModDate>2025-08-29T12:48:01.1900000+00:00</lastModDate>
        
        <creator>Supakpong Jinarat</creator>
        
        <creator>Ratchakoon Pruengkarn</creator>
        
        <subject>Clustering; graph-based clustering; semantic similarity; short text; word embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The exponential growth of short textual content on the Internet, such as social media posts and search snippets, necessitates effective text mining techniques. Short text clustering, a critical tool for organizing this data, contends with two primary challenges: data sparsity, which undermines the quality of traditional clustering methods, and the poor interpretability of machine-generated cluster labels. This study introduces the Semantic Word Graph (SWG) algorithm, a novel graph-based approach designed to address both of these issues simultaneously. Our methodology begins by constructing a global word graph where nodes represent unique terms from the corpus, and edges are weighted by the semantic similarity of word pairs, calculated using a pre-trained Word2Vec model. Cohesive communities of words are then identified using the Louvain method, and documents are assigned to clusters based on these communities. Meaningful cluster labels are generated by ranking representative nouns within each community. To validate our approach, the SWG algorithm was evaluated on three benchmark datasets (AG News, Tweet, and SearchSnippets) and compared against established methods, including Lingo, Suffix Tree Clustering (STC), and K-means. Quantitative results, measured by the F-score, show that SWG achieved up to 0.89 F-score on AG News, 0.85 on Tweet, and 0.82 on SearchSnippets, consistently outperforming baseline algorithms in clustering quality. Furthermore, a qualitative analysis confirms that SWG produces more coherent and topically comprehensive cluster labels, improving interpretability. This study concludes that the SWG algorithm is a robust and effective framework for enhancing both the accuracy and interpretability of short text clustering. Future research could explore integrating contextual embeddings such as BERT to capture deeper semantic relationships, optimizing the similarity threshold dynamically for different datasets, and scaling the algorithm to handle larger, real-time streaming text data. These directions would further improve the applicability of SWG in diverse domains such as social media analytics, news aggregation, and real-time topic detection.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_78-Graph_Based_Clustering_of_Short_Texts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Climate Prediction in Indonesia: A Baseline Experiment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160877</link>
        <id>10.14569/IJACSA.2025.0160877</id>
        <doi>10.14569/IJACSA.2025.0160877</doi>
        <lastModDate>2025-08-29T12:48:01.1270000+00:00</lastModDate>
        
        <creator>Faisal Rahutomo</creator>
        
        <creator>Bambang Harjito</creator>
        
        <subject>Indonesia; climate data; experiment baseline; machine learning; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study presents the results of a series of machine learning experiments conducted on Indonesian climate data collected between 2010 and 2020. The findings offer a comparative foundation for future research. Weather prediction remains a significant challenge due to the complex interplay of various climatic factors. Weather stations typically record data at hourly or daily intervals, resulting in large volumes of historical weather information. When appropriately processed, this extensive dataset offers valuable opportunities for predictive modeling. The study explores two primary approaches to leveraging big data for weather forecasting. The first employs a machine learning classification technique to predict categorical weather conditions based on existing feature values. The second utilizes time series forecasting to predict continuous weather parameters using historical data. Multiple classification and forecasting algorithms were evaluated and compared. Notably, the year-on-year forecasting approach outperformed several modern techniques, including deep learning, in terms of predictive accuracy. Despite the application of deep learning, classification models achieved a maximum accuracy of only 0.811. Forecasting methods generally produced a mean absolute percentage error (MAPE) of 3–4%. However, year-on-year forecasting—identified through exploratory data visualization—reduced the prediction error to below 1.6%. Another key contribution of this research is the emphasis on the critical role of data visualization prior to algorithmic modeling. The findings highlight the importance of human intervention in the early stages of data analysis, particularly for visual exploration and feature assessment. Classification models were found to underperform due to overly generalized feature representations. In contrast, forecasting techniques, supported by informed human-guided preprocessing, yielded more reliable and accurate results.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_77-Machine_Learning_Based_Climate_Prediction_in_Indonesia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Public Opinion Stage Segmentation in Disaster Events: A Study Based on Multimodal Sentiment Prediction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160876</link>
        <id>10.14569/IJACSA.2025.0160876</id>
        <doi>10.14569/IJACSA.2025.0160876</doi>
        <lastModDate>2025-08-29T12:48:01.0670000+00:00</lastModDate>
        
        <creator>Xiaogang Yuan</creator>
        
        <creator>Jiaxi Chen</creator>
        
        <creator>Dezhi An</creator>
        
        <creator>Xiang Gong</creator>
        
        <subject>Multimodal sentiment prediction; e-divisive with medians; transformer; encoder; decoder; cross-attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Existing approaches for predicting public sentiment and analyzing opinion evolution during disaster events using multimodal data (text, video, audio) suffer from several limitations: an inadequate dynamic fusion of heterogeneous multi-source data, and imprecise division of public opinion stages. To address these issues, this paper proposes an Enhanced Disentangled Cross Fusion (EDCF) model-based framework for analyzing the evolution of public opinion in disaster events. This framework integrates the E-Divisive with Medians (EDM) change point detection method with spatiotemporal sequence modeling techniques to achieve fine-grained stage segmentation. The EDCF model employs Transformers and positional encoding to process time-series signals (audio/video), effectively capturing long-range dependencies. It enhances modality-specific representation capabilities by introducing dedicated encoders for each modality, a shared encoder, and a reconstruction decoder for disentangled representation learning. Furthermore, the model utilizes a cross-modal language-guided attention mechanism for efficient and effective feature fusion. Experimental validation on the publicly available multimodal sentiment dataset CMU-MOSI demonstrates that the proposed EDCF framework significantly outperforms baseline methods on key sentiment prediction metrics.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_76-Public_Opinion_Stage_Segmentation_in_Disaster_Events.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Levy Flight Chicken Swarm Optimization with Differential Evolution for Function Optimization Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160875</link>
        <id>10.14569/IJACSA.2025.0160875</id>
        <doi>10.14569/IJACSA.2025.0160875</doi>
        <lastModDate>2025-08-29T12:48:01.0200000+00:00</lastModDate>
        
        <creator>Wen-Jun Liu</creator>
        
        <creator>Azlan Mohd Zain</creator>
        
        <creator>Mohamad Shukor Bin Talib</creator>
        
        <creator>Sheng-Jun Ma</creator>
        
        <subject>Chicken swarm optimization; levy flight; differential evolution algorithm; adaptive adjustment strategy; function optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study proposes an improved swarm algorithm, Adaptive Levy Flight Chicken Swarm Optimization with Differential Evolution (ALCSODE), to overcome the low convergence accuracy and imbalance between exploration and exploitation in the original CSO algorithm. The method incorporates adaptive perturbation based on individual differences and a differential evolution mechanism into the rooster update process. An elitism preservation strategy is also applied to enhance population stability and information sharing. The algorithm is evaluated on 24 benchmark functions, including unimodal, high-dimensional multimodal, and CEC2022 functions. Performance metrics such as search trajectories and convergence curves are used to assess its effectiveness. Experimental results show that ALCSODE achieves a better exploration–exploitation trade-off and shows statistically superior performance over seven classical algorithms, confirming its potential as an effective tool for solving complex optimization problems.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_75-An_Adaptive_Levy_Flight_Chicken_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Meets Bibliometrics: A Survey of Transfer Learning Techniques for Breast Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160874</link>
        <id>10.14569/IJACSA.2025.0160874</id>
        <doi>10.14569/IJACSA.2025.0160874</doi>
        <lastModDate>2025-08-29T12:48:00.9570000+00:00</lastModDate>
        
        <creator>Amna Wajid</creator>
        
        <creator>Natasha Nigar</creator>
        
        <creator>Hafiz Muhammad Faisal</creator>
        
        <creator>Olukayode Oki</creator>
        
        <creator>Jose Lukose</creator>
        
        <subject>Transfer learning; breast cancer; medical imaging analysis; predictive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study aims to provide a comprehensive bibliometric analysis of research on transfer learning in breast cancer detection from 2016 to 2024. It highlights publication trends, influential contributors, collaborations, and keyword patterns. Bibliometric methods are employed to analyze data extracted from the Scopus database. It includes co-occurrence and citation analyses to identify prevalent keywords, highly cited documents, journals, authors, organizations, and countries contributing to this field. The analysis reveals a significant upward trend in publications over the last decade. Key insights include the identification of dominant keywords, influential contributors, and notable collaborations. The results highlight the growing impact of transfer learning techniques in breast cancer detection research, particularly within the domains of medical imaging analysis and predictive analysis. This study offers a systematic overview of the current state of transfer learning in breast cancer detection research, providing valuable insights and guiding future research efforts in this rapidly evolving domain.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_74-Deep_Learning_Meets_Bibliometrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AI-Driven Approach for Real-Time Noise Level Monitoring and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160873</link>
        <id>10.14569/IJACSA.2025.0160873</id>
        <doi>10.14569/IJACSA.2025.0160873</doi>
        <lastModDate>2025-08-29T12:48:00.9270000+00:00</lastModDate>
        
        <creator>Yellamma Pachipala</creator>
        
        <creator>L K SureshKumar</creator>
        
        <creator>Veeranki Venkata Rama Maheswara Rao</creator>
        
        <creator>Vijaya Chandra Jadala</creator>
        
        <creator>T. Srinivasarao</creator>
        
        <creator>D. Srinivasa Rao</creator>
        
        <subject>Noise pollution; IoT; ESP8266 Wi-Fi; smart automation; AI-enabled noise pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Nowadays, noise pollution poses a public health risk, especially in residences and indoor environments such as workplaces and schools. The proposed work presents a comprehensive analysis of hourly equivalent noise levels measured at 100 locations in indoor environments. It is an intelligent automated noise pollution monitoring system for the real-time tracking and adaptive management of noise in indoor environments such as offices, homes, and educational institutions. Unlike other systems that merely record noise levels, the proposed solution provides real-time alerts with web-based visualization and AI-enabled noise pattern recognition for enhanced noise classification. The integration of an ESP8266 Wi-Fi module and a cloud-based architecture enables instant email notifications and also allows historical trend analysis and predictive insights. In addition, the framework is scoped for integration with smart home automation systems and mobile-based alerting, allowing for better accessibility. The IoT-powered innovations within this framework will revolutionize noise management by proactively monitoring, analyzing, and optimizing indoor sound environments. Through real-time adjustments and intelligent automation, these solutions will create a more serene, comfortable, and productivity-enhancing atmosphere. Whether in offices, homes, or public spaces, this advanced noise control system will contribute to overall well-being, concentration, and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_73-An_AI_Driven_Approach_for_Real_Time_Noise_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Functional vs Ethical Drivers in Generative AI Adoption: A PLS-SEM Study in Business Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160872</link>
        <id>10.14569/IJACSA.2025.0160872</id>
        <doi>10.14569/IJACSA.2025.0160872</doi>
        <lastModDate>2025-08-29T12:48:00.8770000+00:00</lastModDate>
        
        <creator>Jorge Serrano-Malebr&#225;n</creator>
        
        <creator>Cristian Vidal-Silva</creator>
        
        <creator>Paola von-Bichoffshausen</creator>
        
        <creator>Romina G&#243;mez-L&#243;pez</creator>
        
        <creator>Franco Campos-N&#250;&#241;ez</creator>
        
        <subject>Generative artificial intelligence; student perceptions; ChatGPT; PLS-SEM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study examined the factors influencing the use of ChatGPT by university students enrolled in business and management programs, considering the simultaneous effect of their functional perceptions and ethical or academic concerns. Using a structural equation modeling approach (PLS-SEM) applied to a sample of 118 students in Chile, the study found that functional perceptions, such as efficiency, clarity, and cognitive support, exert a positive and significant effect on the use of the tool. By contrast, concerns related to technological dependency, reliability of responses, and academic authorship showed no significant effect on either perception or usage. These findings reveal a functionalist adoption logic in which ethical judgment and pedagogical risks do not act as meaningful barriers. This study contributes to the literature by simultaneously integrating enabling and inhibiting factors into a single explanatory model and providing empirical evidence from a Latin American context. It concludes that there is a pressing need to develop pedagogical and institutional frameworks that foster critical literacy in the use of generative artificial intelligence, particularly in disciplines in which strategic judgment and ethical responsibility are core competencies. These findings should be interpreted within the context of a single Chilean institution and are not intended for statistical generalization.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_72-Functional_vs_Ethical_Drivers_in_Generative_AI_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JellyNovaNet-JSO: A Hybrid TabNet–BiLSTM Model for IoT-Based Crop Yield Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160871</link>
        <id>10.14569/IJACSA.2025.0160871</id>
        <doi>10.14569/IJACSA.2025.0160871</doi>
        <lastModDate>2025-08-29T12:48:00.8300000+00:00</lastModDate>
        
        <creator>Huang Zhicheng</creator>
        
        <creator>Zhang Yinjun</creator>
        
        <subject>IoT agriculture; crop yield prediction; BiLSTM; TabNet; jellyfish search optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Precise prediction of crop yield is essential for sustainable agriculture, resource maximization, and food security. As the use of IoT and Wireless Sensor Networks (WSNs) gains momentum, huge amounts of heterogeneous and time-series environmental data have become readily available from intelligent greenhouses. Despite this, it is still difficult to obtain meaningful insights from these data due to their high dimensionality, noise, and nonlinear temporal behavior. Traditional machine learning and statistical approaches usually fail to effectively capture static as well as sequential relationships; moreover, most current models are difficult to tune, struggle with data heterogeneity, and do not generalize across dynamic environments. To overcome these shortcomings, this paper introduces JellyNovaNet-JSO, a new hybrid deep learning architecture that integrates TabNet and BiLSTM architectures, designed using the Jellyfish Search Optimization (JSO) algorithm. The model exploits TabNet sparse attention for static feature modeling and the temporal memory of BiLSTM for time-series sensor data. The innovation lies in combining attention-guided tabular learning with bidirectional temporal modeling, with a metaheuristic optimization layer to perform automatic hyperparameter tuning. Experimental outcomes based on real-world IoT greenhouse data demonstrate that JellyNovaNet-JSO attains an MAE of 0.012, RMSE of 0.017, R&#178; of 0.991, and MAPE of 1.89%, substantially outperforming state-of-the-art CNN-LSTM, Random Forest, and SVM models. In comparison with prior approaches, JellyNovaNet-JSO enhances prediction accuracy by as much as 25% while ensuring scalability and robustness. This innovation provides a viable, interpretable, and deployable solution for precision agriculture, enabling smarter irrigation, climate control, and yield management.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_71-JellyNovaNet_JSO_A_Hybrid_TabNet_BiLSTM_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data-Driven Approach to Achieve Low-Carbon Building Energy Optimization by Using BIM Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160870</link>
        <id>10.14569/IJACSA.2025.0160870</id>
        <doi>10.14569/IJACSA.2025.0160870</doi>
        <lastModDate>2025-08-29T12:48:00.7830000+00:00</lastModDate>
        
        <creator>Xin Yu</creator>
        
        <creator>Guoliang Ren</creator>
        
        <creator>Jie Niu</creator>
        
        <subject>Low-carbon buildings; building information modelling; carbon emissions; operational energy optimization; SqueezeNet; energy simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Low-carbon building energy optimization addresses the environmental impact of the construction sector. The integration of Building Information Modelling (BIM) with Artificial Intelligence (AI) techniques enables the design and operation of buildings with a reduced carbon footprint. However, such systems usually lack the flexibility and precision to dynamically optimize energy usage. This work proposes a novel data-driven framework that merges AI and BIM to optimize the building energy system for low-emission design using the Carbon Majors emission datasets. It aids material and energy source selection by identifying highly emitting commodities to reduce operational carbon footprints. Initially, data acquisition and emission analysis are performed on the Carbon Majors database to identify high-emission materials. Subsequently, emission factors are linked with the BIM elements using plug-ins such as One Click LCA, which allow the annotation of embodied carbon values. Further, operational energy is optimized by a Multi-Agent Assisted NSGA-II, which optimizes parameters and material selection. Additionally, AI-assisted energy prediction supported by the SqueezeNet model and energy simulation techniques was used to minimize building energy consumption. The results reveal high energy-prediction accuracy, with an MAE of 0.0212, an MSE of 0.0376, and an R&#178; score of 0.9814. The framework further helps to reduce carbon emissions by 1155 tons and improves cost efficiency by 570.25 million, promoting low-carbon building solutions from the earliest stages of design.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_70-A_Data_Driven_Approach_to_Achieve_Low_Carbon_Building.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Multimodal Sentiment Analysis Using Hierarchical Attention-Based Adaptive Transformer Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160869</link>
        <id>10.14569/IJACSA.2025.0160869</id>
        <doi>10.14569/IJACSA.2025.0160869</doi>
        <lastModDate>2025-08-29T12:48:00.7530000+00:00</lastModDate>
        
        <creator>Anna Shalini</creator>
        
        <creator>B. Manikyala Rao</creator>
        
        <creator>Ranjitha. P. K</creator>
        
        <creator>Guru Basava Aradhya S</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Multimodal sentiment analysis; RoBERTa; Wav2Vec 2.0; vision transformer; CMU-MOSEI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Multimodal Sentiment Analysis (MSA) has emerged as a critical task in Natural Language Processing (NLP), driven by the growth of user-generated content containing textual, visual, and auditory cues. While transformer-based approaches achieve strong predictive performance, their lack of interpretability and limited adaptability restrict their use in sensitive applications such as healthcare, education, and human–computer interaction. To address these challenges, this study proposes an explainable and adaptive MSA framework based on a hierarchical attention-based transformer architecture. The model leverages RoBERTa for text, Wav2Vec2.0 for speech, and Vision Transformer (ViT) for visual cues, with features fused using a three-tier attention mechanism encompassing token/frame-level, modality-level, and semantic-level attention. This design enables fine-grained representation learning, dynamic cross-modal alignment, and intrinsic explainability through attention heatmaps. Additionally, contrastive alignment loss is incorporated to align heterogeneous modality embeddings, while label smoothing mitigates overconfidence, improving generalizability. Experimental evaluation on the CMU-MOSEI benchmark demonstrates state-of-the-art performance, achieving 93.2% accuracy, 93.5% precision, 92.8% recall, and 94.1% F1-score, surpassing prior multimodal transformer-based methods. Unlike earlier models that rely on shallow fusion or post-hoc interpretability, the proposed approach integrates explainability into its architecture, balancing accuracy and transparency. These results confirm the efficacy of the adaptive hierarchical attention-based framework in delivering a robust, interpretable, and scalable solution for English-language multimodal sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_69-Explainable_Multimodal_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Driven Scalable and High-Precision Malaria Detection from Microscopic Blood Smear Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160868</link>
        <id>10.14569/IJACSA.2025.0160868</id>
        <doi>10.14569/IJACSA.2025.0160868</doi>
        <lastModDate>2025-08-29T12:48:00.7070000+00:00</lastModDate>
        
        <creator>N. Kannaiya Raja</creator>
        
        <creator>Divya Rohatgi</creator>
        
        <creator>Venkata Lalitha Narla</creator>
        
        <creator>Ganesh Kumar Anbazhagan</creator>
        
        <creator>R. Aroul Canessane</creator>
        
        <creator>Drakshayani Sriramsetti</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Automated diagnosis; blood smear images; contrastive learning; deep learning; malaria detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Malaria continues to be a life-threatening disease, especially in tropical and low-resource regions, where timely and accurate diagnosis remains a major challenge. Traditional diagnostic approaches like manual microscopy are not only time-consuming and expertise-dependent but also prone to subjective errors. Existing deep learning methods, such as Convolutional Neural Networks (CNNs), ResNet, and Vision Transformers (ViT), struggle to generalize across variations in staining, resolution, and morphology, leading to misclassification and reduced diagnostic reliability. To overcome these limitations, this study proposes a novel hybrid architecture, Swin-Siamese, which integrates the hierarchical self-attention mechanism of the Swin Transformer with the contrastive similarity learning capability of the Siamese Neural Network. This unique combination enables the model to capture both global and local spatial patterns while accurately distinguishing infected from uninfected blood smear images. The model is implemented using TensorFlow and PyTorch, and trained on a publicly available malaria dataset comprising 13,152 training, 626 validation, and 1,253 test images. Experimental results demonstrate a 3.1% improvement in accuracy over traditional CNNs, achieving 95.3% accuracy, 95.1% precision, 95.4% recall, 95.2% F1-score, and an AUC-ROC of 0.97. This significant performance gain highlights the model&#39;s scalability, interpretability, and real-time applicability in clinical and field-deployable diagnostic systems, offering a powerful solution for malaria screening in underserved regions.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_68-Deep_Learning_Driven_Scalable_and_High_Precision_Malaria_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Scalable Machine Learning Framework for Predictive Analytics and Employee Performance Enhancement in Large Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160867</link>
        <id>10.14569/IJACSA.2025.0160867</id>
        <doi>10.14569/IJACSA.2025.0160867</doi>
        <lastModDate>2025-08-29T12:48:00.6770000+00:00</lastModDate>
        
        <creator>Jyoti Singh Kanwar</creator>
        
        <creator>Ranju S Kartha</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Behara Venkata Nandakishore</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Employee performance prediction; workforce optimization; performance forecasting; hybrid deep dense attention network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Employee performance prediction and workforce optimization are critical for sustainable growth in large enterprises, yet traditional performance forecasting techniques often rely on regression analysis and conventional machine learning models that fail to capture the dynamic, nonlinear nature of human resource data. These approaches lack the flexibility, explainability, and actionable optimization guidance needed for effective intelligent decision support systems. To overcome these limitations, this study presents a novel Hybrid Deep Dense Attention Network (HD-DAN) model combined with reinforcement learning (RL) to predict employee performance and optimally manage the workforce. The HD-DAN combines self-attention with dense layers to dynamically emphasize performance-critical aspects, such as engagement, skills, and behavioral attributes. The RL agent learns to map the predictions into optimized interventions, so that continuous performance improvement is achieved. The HD-DAN achieves a Mean Absolute Error (MAE) of 0.076, a Root Mean Square Error (RMSE) of 0.129, and an R&#178; of 0.421, corresponding to an 11.5% RMSE reduction and a 15.6% R&#178; increase over the best available baselines. In addition to higher predictive accuracy, the framework delivers interpretability through attention weight visualization and decision reliability through RL-driven optimization, providing a scalable, adaptive, and explainable platform for intelligent decision support in employee performance forecasting and workforce management.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_67-A_Scalable_Machine_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AIoT-Based Waste Classification for Solid Waste Management to Accomplish the SDGs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160866</link>
        <id>10.14569/IJACSA.2025.0160866</id>
        <doi>10.14569/IJACSA.2025.0160866</doi>
        <lastModDate>2025-08-29T12:48:00.6270000+00:00</lastModDate>
        
        <creator>T. M. Shien</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>Balasubramaniam Muniandy</creator>
        
        <creator>Pavan Kumar Pagadala</creator>
        
        <creator>Vinoth Kumar. P</creator>
        
        <subject>Solid waste management; waste classification; artificial intelligence of things (AIoT); well-being; process innovation; sustainable development goals (SDG); recycling; circular economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The Fourth Industrial Revolution (IR4.0) and its technologies have enhanced global economic capabilities and productivity, but industrialisation and urbanisation bring negative environmental and health impacts, such as greenhouse gas emissions and global warming. One effective method of reducing environmental impact is to conduct a waste classification program that incorporates the 3R principles. The proposed work includes educating individuals and businesses on the importance of waste reduction, promoting reusable products and packaging, and implementing effective recycling systems. Governments could also incentivise sustainable practices through tax breaks or invest in renewable energy sources to reduce greenhouse gas emissions associated with industrial processes. The proposed study aims to develop automated waste classification technology that can help reach SDGs 11, 12, and 13 by making waste management more efficient, increasing recycling and resource recovery rates, and cutting down on greenhouse gas emissions. The proposed system is developed using a deep learning algorithm, with a microprocessor and microcontroller managing sensors and actuators to perform waste sorting based on the classification result. This distinguishes the proposed system from existing manual and RFID-based approaches by integrating AIoT with a user incentive mechanism, improving both accuracy and public adoption. This technology enhances overall sustainability and promotes a more circular economy by enabling the reuse and recycling of materials, supporting well-being through process innovation.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_66-AIoT_Based_Waste_Classification_for_Solid_Waste_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DMME-Driven Product Quality Prediction for Semiconductor Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160865</link>
        <id>10.14569/IJACSA.2025.0160865</id>
        <doi>10.14569/IJACSA.2025.0160865</doi>
        <lastModDate>2025-08-29T12:48:00.5800000+00:00</lastModDate>
        
        <creator>Alif Ulfa Afifah</creator>
        
        <creator>Angga Prastiyan</creator>
        
        <creator>Fahmi Arif</creator>
        
        <creator>Fadillah Ramadhan</creator>
        
        <subject>Data mining; quality prediction; DMME; semiconductor manufacturing; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Defective products in manufacturing can be reduced by accurately predicting quality outcomes based on process parameters. This study proposes a quality prediction framework for semiconductor manufacturing using the Data Mining Methodology for Engineering Applications (DMME). This study extends DMME with domain-specific preprocessing and demonstrates its superiority on the SECOM dataset compared to other classifiers. Experimental results show that the Random Forest algorithm achieved the highest performance, with 92.99% accuracy and an F-measure of 0.9637, confirming the effectiveness of the proposed approach. These findings highlight the potential of structured, engineering-oriented data mining to improve product quality and support informed decision-making in complex manufacturing environments.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_65-DMME_Driven_Product_Quality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Mobile Apps for Responsible Child Management: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160864</link>
        <id>10.14569/IJACSA.2025.0160864</id>
        <doi>10.14569/IJACSA.2025.0160864</doi>
        <lastModDate>2025-08-29T12:48:00.5330000+00:00</lastModDate>
        
        <creator>Daniel Celestino</creator>
        
        <creator>Harol Medina</creator>
        
        <creator>Cristian Lara</creator>
        
        <creator>Nemias Saboya</creator>
        
        <subject>Smart mobile apps; parental control; artificial intelligence; screen time regulation; responsible child management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Children are increasingly using mobile devices, which raises challenges such as restricting access to inappropriate content, reducing excessive screen exposure, and ensuring safe digital habits. Although various parental control applications exist, most studies focus on isolated aspects such as content filtering or screen time management, with limited integration of artificial intelligence (AI) or consideration of children’s cognitive and emotional development. This highlights a research gap that requires a systematic review to consolidate existing evidence and identify best practices. Using the PRISMA methodology, a systematic search was conducted in four databases (Web of Science, ScienceDirect, Scopus, and Semantic Scholar). After applying inclusion and exclusion criteria, 29 studies were selected for detailed analysis. Results show that AI-based applications can enhance personalization, improve detection of harmful content, and support parents in establishing healthier digital routines. However, limitations persist, including scarce training datasets, lack of algorithm transparency, and limited assessment of practical effectiveness. This review contributes by mapping current solutions, highlighting strengths and weaknesses, and providing evidence-based insights for researchers, parents, educators, and developers to design safer and more effective child-centered mobile applications.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_64-Smart_Mobile_Apps_for_Responsible_Child_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Applications that Incorporate AI for Information Search and Recommendation: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160863</link>
        <id>10.14569/IJACSA.2025.0160863</id>
        <doi>10.14569/IJACSA.2025.0160863</doi>
        <lastModDate>2025-08-29T12:48:00.4870000+00:00</lastModDate>
        
        <creator>Mijael R. Aliaga</creator>
        
        <creator>Jhosep S. Llacctahuaman</creator>
        
        <creator>Carla N. Esquivel</creator>
        
        <creator>Nemias Saboya</creator>
        
        <subject>AI algorithms; recommendation algorithms; mobile applications; search algorithms; AI for information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The inclusion of artificial intelligence (AI) has become essential for mobile application development, allowing improved personalization and optimization of the user experience. Over the past decade, smart mobile devices have been observed to enhance user experience across a variety of needs. The main objective of this study is to evaluate AI-powered mobile applications that utilize intelligent search mechanisms and more accurate recommendations, and to analyze their impact on addressing these user needs. The databases used were Scopus, ScienceDirect, Web of Science, and EBSCO. A filtering process using PRISMA and a document quality assessment were performed to select the most relevant articles. The study posed four questions related to the topic. The results showed that mobile apps for search and recommendation are mainly based on hybrid approaches (collaborative and content-based filtering) and deep learning (autoencoders, LSTMs/transformers, and BERT-type semantic retrieval embeddings), complemented by classic techniques (matrix factorization, SVM, K-NN, and trees/boosting) and contextual personalization (location, time, activity). It was concluded that these AI additions benefited users and met their search and recommendation needs. Furthermore, these mechanisms are advancing rapidly, with more precise search and recommendation now extending to voice, images, and even video.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_63-Mobile_Applications_that_Incorporate_AI_for_Information_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>In-Depth Comparison of Supervised Classification Models - Performance and Adaptability to Practical Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160862</link>
        <id>10.14569/IJACSA.2025.0160862</id>
        <doi>10.14569/IJACSA.2025.0160862</doi>
        <lastModDate>2025-08-29T12:48:00.4400000+00:00</lastModDate>
        
        <creator>Mouataz IDRISSI KHALDI</creator>
        
        <creator>Allae ERRAISSI</creator>
        
        <creator>Mustapha HAIN</creator>
        
        <creator>Mouad BANANE</creator>
        
        <subject>Supervised classification; Na&#239;ve Bayes; decision tree; Random Forest; k-nearest neighbor; Support Vector Machine; algorithm performance; interpretability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>In this paper, we carried out an in-depth comparative analysis of five major supervised classification algorithms: Na&#239;ve Bayes, Decision Tree, Random Forest, KNN and SVM. These models were evaluated through a rigorous literature review, based on 20 criteria grouped into five key dimensions: algorithm performance, computational efficiency, practicality and ease of use, data compatibility and practical applicability. The results show that each algorithm has specific strengths and limitations: SVM and Random Forest stand out for their robustness and accuracy in complex environments, while Na&#239;ve Bayes and Decision Tree are appreciated for their speed, simplicity and interpretability. KNN, despite its intuitive approach, suffers from high complexity in the prediction phase, limiting its effectiveness on large datasets. This study aims to provide a structured framework for researchers and practitioners in various fields, such as healthcare, finance, industry and education, where supervised classification algorithms play a central role in decision-making. In addition, the results highlight the importance of selecting algorithms according to specific needs, and open up promising prospects, including the development of hybrid models and improved real-time data processing.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_62-In_Depth_Comparison_of_Supervised_Classification_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of an RGB-D Simultaneous Localization and Mapping Algorithm for Unmanned Aerial Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160861</link>
        <id>10.14569/IJACSA.2025.0160861</id>
        <doi>10.14569/IJACSA.2025.0160861</doi>
        <lastModDate>2025-08-29T12:48:00.3930000+00:00</lastModDate>
        
        <creator>Muhammad Zamir Fathi Mohammad Effendi</creator>
        
        <creator>Norhidayah Mohamad Yatim</creator>
        
        <creator>Zarina Mohd Noh</creator>
        
        <creator>Nur Aqilah Othman</creator>
        
        <subject>Unmanned aerial vehicle; UAV; simultaneous localization and mapping; SLAM; RGB-D; real-time appearance based map; RTAB-map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study investigates the implementation of an RGB-D Simultaneous Localization and Mapping (SLAM) algorithm on an unmanned aerial vehicle (UAV) equipped with an Intel RealSense D435i camera. The study focuses on Real-Time Appearance-Based Mapping (RTAB-Map), a well-established RGB-D SLAM method capable of building 3D maps while simultaneously localizing a robot within its environment. Despite its advanced capabilities, deploying RTAB-Map on UAVs introduces specific challenges due to the dynamics of aerial navigation. This research evaluates the performance of RTAB-Map in terms of robustness, precision, and accuracy to optimize its application in UAV-based RGB-D SLAM. The findings reveal that the sequential frame-matching approach, combined with a minimum inliers threshold of 10, provides the most robust performance. In contrast, the global matching approach with a minimum inliers threshold of 20 offers better precision and accuracy. The results show that this implementation, utilizing off-the-shelf hardware and software, has significant potential for advanced applications such as monitoring and surveillance in environments where dense 3D mapping is critical.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_61-Analysis_of_an_RGB_D_Simultaneous_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VGG-19 and Vision Transformer Enabled Shelf-Life Prediction Model for Intelligent Monitoring and Minimization of Food Waste in Culinary Inventories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160860</link>
        <id>10.14569/IJACSA.2025.0160860</id>
        <doi>10.14569/IJACSA.2025.0160860</doi>
        <lastModDate>2025-08-29T12:48:00.3300000+00:00</lastModDate>
        
        <creator>Bindhya Thomas</creator>
        
        <creator>Priyanka Surendran</creator>
        
        <subject>Food waste reduction; shelf-life prediction; VGG-19; vision transformer; image-based freshness classification; sustainable food management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Food waste, particularly in the prepared food industry, presents a pressing worldwide concern with serious ethical, environmental, and socioeconomic implications. In restaurant and catering contexts, traditional inventory and waste management systems frequently lack the versatility and granularity to mitigate spoilage in real time. The study proposes a sophisticated deep learning framework that predicts the remaining shelf-life of prepared food items using visual input, enabling timely interventions to reduce food waste. The proposed hybrid architecture integrates VGG-19 (Visual Geometry Group 19-layer network) for fine-grained feature extraction with a Vision Transformer (ViT) that models contextual degradation patterns and temporal cues. The model operates by analyzing food images at regular intervals and predicting the remaining time before spoilage, enabling proactive decision-making for consumption prioritization. Food images are categorized into four freshness states: Fresh, Fit for Consumption, About to Expire, and Expired, enabling the model to monitor real-time conditions. An elaborate dataset with 34 distinct food categories was utilized in the study, achieving outstanding performance of 98% accuracy, 97.5% precision, 97.9% recall, and an F1-score of 97.75%, and yielding an estimated 84% reduction in food waste. The model stands out for its non-invasive, image-based decision-making and its potential scalability across various food service settings. By offering predictive insights into food degradation using only visual data, the study advances the integration of artificial intelligence into sustainable food management.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_60-VGG_19_and_Vision_Transformer_Enabled_Shelf_Life_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecting a Privacy-Focused Bitcoin Framework Through a Hybrid Wallet System Integrating Multiple Privacy Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160859</link>
        <id>10.14569/IJACSA.2025.0160859</id>
        <doi>10.14569/IJACSA.2025.0160859</doi>
        <lastModDate>2025-08-29T12:48:00.2830000+00:00</lastModDate>
        
        <creator>Lamiaa Said</creator>
        
        <creator>Hatem Mohamed</creator>
        
        <creator>Diaa Salama</creator>
        
        <creator>Nesma Mahmoud</creator>
        
        <subject>Bitcoin; privacy; anonymity; wallet; blockchain; Coinjoin; Payjoin; stealth address</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Although Bitcoin enables pseudonymous peer-to-peer digital transactions, its transparent public ledger architecture allows for blockchain analysis that can compromise user anonymity. Despite the presence of wallets with privacy-enhancing features, no single solution currently offers comprehensive anonymity independently. Existing privacy-preserving techniques such as CoinJoin, PayJoin, and Stealth Addresses offer differing degrees of anonymity, yet each exhibits intrinsic limitations. This study proposes a hybrid privacy architecture that integrates multiple privacy-enhancing techniques into a unified and coherent transaction workflow. By integrating decentralized CoinJoin mixing, PayJoin for input ownership obfuscation, and Stealth Addresses for unlinkable payments, the proposed model establishes a robust, privacy-oriented framework for Bitcoin transactions. The framework is implemented and evaluated through pre-funded Sparrow and JoinMarket wallets, interconnected via a fully synchronized Bitcoin Core node deployed on the testnet environment. All communications are routed via the Tor network to maintain anonymity at the network layer. Using testnet-based simulations, we evaluate the effectiveness of the architecture. The results show that combining these techniques substantially strengthens resistance to common deanonymization heuristics, enhances transaction unlinkability, and achieves higher overall anonymity than relying on individual methods alone. This demonstrates the synergistic effect of the hybrid model in providing more resilient protection against transaction tracing and blockchain surveillance.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_59-Architecting_a_Privacy_Focused_Bitcoin_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Image Messages Using Secure Hash Algorithm 3, Chaos Scheme, and DNA Encoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160858</link>
        <id>10.14569/IJACSA.2025.0160858</id>
        <doi>10.14569/IJACSA.2025.0160858</doi>
        <lastModDate>2025-08-29T12:48:00.2230000+00:00</lastModDate>
        
        <creator>Amer Sharif</creator>
        
        <creator>Dian Rachmawati</creator>
        
        <creator>Wilbert</creator>
        
        <subject>Image encryption; image decryption; chaos scheme; DNA encoding; secure hash algorithm 3 Keccak</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Security is an essential aspect to consider during data transmission, especially for images. Threats that may occur during image transmission include images being stolen by third parties. One way to secure images is through encryption-decryption processes using cryptographic algorithms. One of the algorithms developed for image security involves combining a chaos scheme, DNA encoding, and hashing. A chaos scheme refers to a system sensitive to initial conditions, resulting in behavior that is difficult to predict or appears random. DNA encoding is the process of converting bits into a DNA sequence. Hashing is a mathematical function which takes variable-length input and converts it into a fixed-length binary sequence. In this research, security enhancement is achieved by replacing the hashing algorithm with Secure Hash Algorithm (SHA) 3 Keccak. This study successfully implemented the cryptographic algorithms into a website that can simulate image encryption-decryption processes in about 15 seconds per process. The effectiveness of the algorithm has also been evaluated on the processed images through Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) measurements. The obtained MSE values of 0 and PSNR values of infinity indicate that the original images and decrypted images are identical.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_58-Securing_Image_Messages_Using_Secure_Hash_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Game Theory-Optimized Attention-Based Temporal Graph Convolutional Network for Spatiotemporal Forecasting of Sea Level Rise</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160857</link>
        <id>10.14569/IJACSA.2025.0160857</id>
        <doi>10.14569/IJACSA.2025.0160857</doi>
        <lastModDate>2025-08-29T12:48:00.1600000+00:00</lastModDate>
        
        <creator>T M Swathy</creator>
        
        <creator>K.Ruth Isabels</creator>
        
        <creator>A. Sindhiya Rebecca</creator>
        
        <creator>Venubabu Rachapudi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Shobana Gorintla</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Temporal graph convolutional networks; attention mechanisms; game theory optimization; sea level rise prediction; climate change adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Accurately predicting sea level rise is crucial for formulating effective adaptation plans to counteract the effects of climate change on vulnerable coastal areas, infrastructure, and populations. Conventional forecasting models tend to fail in capturing the intricate spatiotemporal relationships affecting sea level variations. To overcome these challenges, this research introduces a hybrid predictive model combining a Temporal Graph Convolutional Network (T-GCN) with attention and a game theory-based optimization strategy. The T-GCN structure is specially tailored to capture spatial dependencies as well as temporal dynamics in sea level change, providing a deeper understanding of the changing dynamics of sea levels. The attention mechanism strengthens the model by dynamically weighing important variables, whereas the game-theoretic optimization efficiently balances multiple objectives, e.g., prediction accuracy and robustness. Experimental results, measured in terms of common performance indicators, demonstrate the superior effectiveness of the proposed model, with a correlation coefficient of 0.996512 and an overall error of 0.032154. Through the inclusion of both climatic and socio-economic variables, this methodology provides accurate, data-based insights to inform climate policy and adaptive planning. The results highlight the capabilities of state-of-the-art machine learning methods for solving real-world sea level rise challenges.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_57-Game_Theory_Optimized_Attention_Based_Temporal_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-Enabled Smartphone System for Intelligent Physical Activity Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160856</link>
        <id>10.14569/IJACSA.2025.0160856</id>
        <doi>10.14569/IJACSA.2025.0160856</doi>
        <lastModDate>2025-08-29T12:48:00.0800000+00:00</lastModDate>
        
        <creator>Leping Zhang</creator>
        
        <creator>Fengjiao Jiang</creator>
        
        <creator>Guopeng Jia</creator>
        
        <creator>Yue Wang</creator>
        
        <subject>Activity recognition; smartphone; transformer architecture; inertial measurement units</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study addresses the prevalent decline in physical activity among university students in the contemporary information society, proposing an innovative deep learning-based framework for intelligent physical activity recognition. Central to this framework is the comprehensive utilization of high-precision Inertial Measurement Units (IMUs) integrated within smartphones, encompassing triaxial accelerometers, gyroscopes, and magnetometers, enabling multi-dimensional, real-time capture of students&#39; daily activity postures. For algorithmic design, this research transcends traditional limitations by adopting the more advanced Transformer architecture as its core classifier. Through the distinct self-attention mechanism inherent to this architecture, the proposed method efficiently and precisely extracts critical spatiotemporal features from vast sensor data, thereby achieving accurate identification and classification of various physical activities, such as walking, running, and climbing stairs. Rigorous evaluation results demonstrate significant advantages in key performance metrics, including recognition accuracy, when compared to conventional recurrent neural networks (e.g., Long Short-Term Memory networks, Recurrent Neural Networks) and classic machine learning algorithms (e.g., Random Forest), with a validation accuracy reaching 93.97%. This forward-looking research outcome not only provides a reliable and efficient technological means for monitoring the physical activity status of university students but also establishes a robust data foundation for the future development and implementation of targeted health intervention measures.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_56-Transformer_Enabled_Smartphone_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Diagnostic and Therapeutic Interventions: A Systematic Review of Randomized Controlled Trials</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160855</link>
        <id>10.14569/IJACSA.2025.0160855</id>
        <doi>10.14569/IJACSA.2025.0160855</doi>
        <lastModDate>2025-08-29T12:47:59.9730000+00:00</lastModDate>
        
        <creator>Oscar Jimenez-Flores</creator>
        
        <creator>Sandra Pajares-Centeno</creator>
        
        <creator>Oscar Mejia-Sanchez</creator>
        
        <creator>Rodrigo Flores-Palacios</creator>
        
        <subject>AI applications; diagnostic interventions; therapeutic interventions; randomised controlled trials; health technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Artificial intelligence (AI) is increasingly being integrated into diagnostic and therapeutic interventions, offering potential advantages in accuracy, efficiency, and clinical decision-making compared to conventional methods. This systematic review aimed to identify and characterise AI applications assessed in randomised controlled trials (RCTs), and to synthesise the reported clinical outcomes in comparison with standard approaches. A comprehensive search was conducted in PubMed, Scopus, and Web of Science for articles published between 2015 and June 2024. Eligible studies included randomised controlled trials involving patients with various medical conditions who received diagnostic or therapeutic interventions supported by AI technologies. Comparators included conventional diagnostic or treatment methods, placebo, or standard care. Two reviewers independently screened the studies, extracted data, and assessed risk of bias using the Cochrane RoB 1 tool. A total of 13 trials involving 10,566 participants met the inclusion criteria, spanning a range of medical specialties including gastroenterology, dermatology, radiology, oncology, neurology, and ophthalmology. While several trials reported improvements in diagnostic accuracy, treatment planning, or procedural efficiency, other studies showed inconsistent or limited benefits, highlighting the variability in outcomes depending on the clinical context and type of AI application. This review offers an updated synthesis of AI-based clinical interventions evaluated through randomised controlled trials and emphasises the need for further research to validate these tools, standardise their implementation, and assess their broader impact as health technology in modern healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_55-Artificial_Intelligence_in_Diagnostic_and_Therapeutic_Interventions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling for Metro Performance Using MetroPT3 Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160854</link>
        <id>10.14569/IJACSA.2025.0160854</id>
        <doi>10.14569/IJACSA.2025.0160854</doi>
        <lastModDate>2025-08-29T12:47:59.9400000+00:00</lastModDate>
        
        <creator>Akshitha Mary A C</creator>
        
        <creator>R Rakshinee</creator>
        
        <creator>Stefani Jeyaseelan</creator>
        
        <creator>Sakthivel V</creator>
        
        <creator>Prakash P</creator>
        
        <subject>Long short-term memory autoencoder; time-series anomaly detection; sequence modeling; reconstruction error; predictive maintenance; unsupervised learning; encoder-decoder architecture; anomaly threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The study titled &quot;Predictive Modeling for Metro Performance Using the MetroPT3 Dataset&quot; aims to create a predictive maintenance system for metro systems in order to reduce unanticipated breakdowns. The MetroPT3 dataset is primarily used to provide data useful for monitoring the operation of certain features of the APU and includes several types of time-series data, such as air pressure, the current drawn by a motor, and oil temperatures. Basic data quality enhancement procedures, such as cleaning, interpolation of missing entries, and normalization, were performed. The analysis develops a Long Short-Term Memory (LSTM) Autoencoder based on an encoder-decoder architecture to perform sequence modeling and identify anomalies. The model learns normal operational patterns and detects deviations using reconstruction error against an anomaly threshold, enabling timely intervention. The results obtained are encouraging, since the model performed excellently in reconstructing clean operating values using the Autoencoder structure.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_54-Predictive_Modeling_for_Metro_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Image-Based Methods for Plant Disease Identification in Diverse Data Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160853</link>
        <id>10.14569/IJACSA.2025.0160853</id>
        <doi>10.14569/IJACSA.2025.0160853</doi>
        <lastModDate>2025-08-29T12:47:59.9100000+00:00</lastModDate>
        
        <creator>Feilong Tang</creator>
        
        <creator>Rosalyn R Porle</creator>
        
        <creator>Hoe Tung Yew</creator>
        
        <creator>Farrah Wong</creator>
        
        <subject>Few-shot learning; transfer learning; deep learning; crop protection; early detection; data scarcity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Image-based plant disease identification methods have demonstrated potential in enhancing crop protection through early detection. However, the development of this field faces several challenges, such as the scarcity of high-quality annotated data, significant intra-class variation and high inter-class similarity among plant diseases, and the limited generalization ability of current models under diverse domain conditions. We extensively investigated 110+ latest papers on plant disease identification, aiming to present a timely and comprehensive overview of the most recent advances in the field, along with impartial comparisons of strengths and weaknesses of the existing works. Specifically, we begin by reviewing traditional machine learning and deep learning methods, which form the foundation for many current models. We then introduce a taxonomy of transfer learning methods, including instance-based, mapping-based, and network-based methods, and analyze their effectiveness in enhancing classification performance by leveraging prior knowledge under data-constrained scenarios. Subsequently, we examine recent advances in few-shot learning methods for plant disease identification, categorizing them into model-based, metric-based, and optimization-based methods, and evaluate their capabilities in addressing data scarcity and improving identification accuracy. Finally, we summarize the current limitations and outline promising future research directions, with the aim of guiding continued development in this area.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_53-A_Review_on_Image_Based_Methods_for_Plant_Disease_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Mining-Induced Subsidence in Saudi Arabia Phosphate Mines Using ANN Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160852</link>
        <id>10.14569/IJACSA.2025.0160852</id>
        <doi>10.14569/IJACSA.2025.0160852</doi>
        <lastModDate>2025-08-29T12:47:59.8630000+00:00</lastModDate>
        
        <creator>Atef GHARBI</creator>
        
        <creator>Mohamed AYARI</creator>
        
        <creator>Yamen El Touati</creator>
        
        <creator>Zeineb Klai</creator>
        
        <creator>Mahmoud Salaheldin Elsayed</creator>
        
        <creator>Elsaid Md. Abdelrahim</creator>
        
        <subject>Subsidence prediction; phosphate mine; artificial neural network; multilayer perceptron; hyperparameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study develops and validates an artificial neural network (ANN) model to predict mining-induced land subsidence in Saudi Arabia’s Al-Jalamid and Umm Wu’al phosphate mines. A multilayer perceptron is used with optimized hyperparameters based on four inputs (ground point position, distance from extraction center, accumulated exploitation volume, and time). The optimal configuration (5 hidden layers, 64 nodes, 240 epochs) achieves RMSE = 22 mm and MAE = 13 mm, outperforming traditional numerical/statistical baselines. Case-study validation at both mines confirms robustness (e.g., RMSE ≈ 20 mm, MAE ≈ 12 mm), enabling practical mitigation such as ground reinforcement and extraction-rate control. The results demonstrate that a tuned ANN provides accurate, operationally useful subsidence forecasts, supporting safer and more sustainable mine planning.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_52-Prediction_of_Mining_Induced_Subsidence_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modeling Approach for Strategic Fleet Sizing Under Maritime Sovereignty: Application to the Moroccan National Fleet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160851</link>
        <id>10.14569/IJACSA.2025.0160851</id>
        <doi>10.14569/IJACSA.2025.0160851</doi>
        <lastModDate>2025-08-29T12:47:59.8470000+00:00</lastModDate>
        
        <creator>Mohamed Anas KHALFI</creator>
        
        <creator>Aziz AIT BASSOU</creator>
        
        <creator>Mustapha HLYAL</creator>
        
        <creator>Jamila EL ALAMI</creator>
        
        <subject>Maritime fleet sizing; sovereignty and national ownership; strategic transport planning; resilience in maritime logistics; green maritime transport</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The disruptions experienced by global supply chains in recent years have reignited the importance of maritime sovereignty, particularly through the creation or reinforcement of national shipping fleets. In this context, the present study explores strategic approaches to national fleet sizing, drawing from recent policy directions and maritime planning models. The study is motivated by the need to design resilient and sovereign fleets that reduce dependency on foreign operators and strengthen autonomy in trade logistics. To complement this analysis, a mathematical model is developed in the form of a Mixed-Integer Nonlinear Programming (MINLP) formulation, where sovereignty is captured through the share of vessel operations under national control. In addition to sovereignty, the model integrates criteria of economic viability, environmental impact, and resilience, positioning Maritime Fleet Sizing within the broader scope of Strategic Transport Planning and Green Maritime Transport. Numerical experiments are carried out on a representative dataset of vessels and strategic routes, illustrating how sovereignty thresholds affect fleet composition and deployment. The results highlight a fundamental trade-off between sovereignty and profitability, emphasizing the need for strategic decision-making that carefully balances autonomy objectives with resilience and environmental considerations. Findings also show that moderate sovereignty thresholds support cost-efficient and diversified fleets, while maximalist sovereignty requirements lead to reduced coverage, higher unmet demand, and lower profitability. These insights underline the importance of calibrated strategies, where Sovereignty, Resilience in Maritime Logistics, and sustainability are treated as interconnected pillars of long-term fleet development.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_51-A_Modeling_Approach_for_Strategic_Fleet_Sizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepIndel: A ResNet-Based Method for Accurate Insertion and Deletion Detection from Long-Read Sequencing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160850</link>
        <id>10.14569/IJACSA.2025.0160850</id>
        <doi>10.14569/IJACSA.2025.0160850</doi>
        <lastModDate>2025-08-29T12:47:59.8170000+00:00</lastModDate>
        
        <creator>Md. Shadmim Hasan Sifat</creator>
        
        <creator>Khandokar Md. Rahat Hossain</creator>
        
        <subject>Structural variations (SVs); indels; long-read sequencing; breakpoints; genomic features; diseases; deep learning; ResNet; HG002 dataset; precision medicine; gene expression; phenotypic diversity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Structural variations (SVs) play a pivotal role in human genetics, influencing gene expression, disease mechanisms, and phenotypic diversity. Despite the advancements in short-read sequencing technologies, long-read sequencing offers superior resolution for detecting SVs, particularly in complex genomic regions. In this study, DeepIndel, a novel computational framework, is presented that leverages long-read sequencing data combined with a deep learning model to identify SV breakpoints accurately. This approach captures complex breakpoint patterns by aligning long reads to a reference genome and extracting 23 key features at each genomic location, including read support, candidate length, and strand-specific information. DeepIndel has been evaluated on the HG002 dataset, achieving high precision and reliability in detecting insertions and deletions, with F1 scores of 94.27% for insertions and 91.09% for deletions, thereby demonstrating significant improvements over existing state-of-the-art tools and offering a more precise and robust approach to SV detection. This work advances structural variant analysis, with promising implications for genomic research, disease understanding, and personalized medicine.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_50-DeepIndel_A_ResNet_Based_Method_for_Accurate_Insertion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Model of Tourism on Educational Engineering: Transformation Learning from Experiential to Interactive</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160849</link>
        <id>10.14569/IJACSA.2025.0160849</id>
        <doi>10.14569/IJACSA.2025.0160849</doi>
        <lastModDate>2025-08-29T12:47:59.7700000+00:00</lastModDate>
        
        <creator>Yurao Yan</creator>
        
        <creator>Tara Ahmed Mohammed</creator>
        
        <creator>Hailan Liang</creator>
        
        <creator>Mingxi Guan</creator>
        
        <subject>Tourism education; educational engineering; coupling coordination model; industry–education integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study explores the interactive relationship between tourism education and industry development by applying a coupled coordination model and proposing an innovative framework that shifts learning from traditional experiential approaches to interactive teaching. The research establishes comprehensive evaluation indicators for both tourism industry performance and educational engineering, and quantitatively analyzes their coupling degree. Results reveal that although tourism education and industrial development are closely linked, mismatches in resource allocation and talent demand reduce coordination effectiveness. The innovative model, based on educational engineering, demonstrates significant advantages by integrating digital technologies such as VR, AR, and big data analytics into teaching. These tools enhance student engagement, improve knowledge construction, and provide real-time feedback, thereby optimizing both educational outcomes and industrial benefits. The findings indicate that interactive teaching strengthens students’ practical competencies, increases efficiency in resource distribution, and contributes to the sustainable growth of the tourism sector. Furthermore, the degree of coupling coordination has gradually shifted from an initial to a moderate level, suggesting that interactive teaching promotes a more resilient and adaptive education–industry system. However, the transformation requires stronger institutional support, improved teacher training in technological applications, and regional balance in resource allocation. The study concludes that fostering an interactive mechanism between education and industry is essential for achieving synergy, cultivating high-quality professionals, and advancing the long-term competitiveness of tourism. Future research should refine indicator systems, integrate diverse modeling methods, and address regional disparities to strengthen the innovation pathway for tourism education.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_49-Innovative_Model_of_Tourism_on_Educational_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving Content-Based Medical Image Retrieval Using Integrated CNN Fusion and Quantization Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160848</link>
        <id>10.14569/IJACSA.2025.0160848</id>
        <doi>10.14569/IJACSA.2025.0160848</doi>
        <lastModDate>2025-08-29T12:47:59.7370000+00:00</lastModDate>
        
        <creator>Mohamed Jafar sadik</creator>
        
        <creator>Muhammed E Abd Alkhalec Tharwat</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Ezak Fadzrin Bin Ahmad</creator>
        
        <subject>Content-Based Image Retrieval (CBIR); medical image analysis; privacy preservation; deep learning; convolutional neural networks (CNNs); feature fusion; model quantization; healthcare security; encrypted image processing; resource-constrained computing; computed tomography (CT); magnetic resonance imaging (MRI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Content-Based Image Retrieval (CBIR) systems have become increasingly crucial in healthcare as the volume of medical imaging data continues to grow exponentially. However, existing systems struggle to balance privacy preservation, computational efficiency and retrieval accuracy, particularly in resource-constrained healthcare environments. This research proposes a novel multi-level privacy-preserving CBIR architecture that integrates multiple convolutional neural network (CNN) architectures with fusion strategies and quantization optimization specifically designed for encrypted medical images. The proposed framework addresses three key challenges: privacy preservation through advanced encryption techniques, feature extraction using optimized CNN fusion strategies and computational efficiency through model quantization. By implementing multiple pre-trained CNN models—including VGG-16, ResNet50, DenseNet121 and EfficientNet-B0—along with various fusion strategies, the system achieves improved feature extraction from encrypted medical images. The framework incorporates quantization techniques to optimize computational efficiency without compromising retrieval accuracy. Experimental results across multiple medical imaging modalities, including X-ray, magnetic resonance imaging (MRI) and computed tomography (CT) scans, demonstrate the effectiveness of the proposed approach in terms of retrieval accuracy, computational efficiency and security robustness. This research contributes to advancing privacy-preserving medical image analysis by providing a comprehensive solution that effectively balances security requirements with practical implementation constraints in healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_48-Privacy_Preserving_Content_Based_Medical_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RA-ACS_net Network: A Quantum Optical Reconstruction Method for Ultra-high Resolution Bioimaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160847</link>
        <id>10.14569/IJACSA.2025.0160847</id>
        <doi>10.14569/IJACSA.2025.0160847</doi>
        <lastModDate>2025-08-29T12:47:59.7230000+00:00</lastModDate>
        
        <creator>Lin SHANG</creator>
        
        <subject>Ultra-high resolution bioimaging; quantum optics; computer vision; ripple algorithm; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Ultra-high resolution bioimaging based on quantum optics offers high sensitivity at relatively low cost, yet conventional reconstruction algorithms face challenges of excessive sampling time, long computation, and artifacts that limit imaging quality. To overcome these issues, this study proposes a novel quantum optical bioimaging reconstruction method termed RA-ACS_net, which integrates a ripple algorithm with a hybrid attention mechanism network. The ripple algorithm provides global optimization for network parameter adjustment, while the attention mechanism enhances feature extraction and information fusion. Furthermore, a differentiated loss function (ALoss) is designed to preserve fine structural details and improve visual fidelity compared with conventional MSE loss. A large-scale dataset of quantum optics-based bioimages is employed for training and validation. Experimental results demonstrate that RA-ACS_net achieves superior reconstruction performance, with significantly higher PSNR and SSIM across both low and high sampling ratios, when compared to iterative algorithms (TVAL3) and existing deep learning models (DR2-Net, DPA-Net). The proposed approach exhibits robustness under sparse data conditions, reduces blocking artifacts, and accelerates convergence, thereby addressing critical limitations of current methods. This study highlights the potential of combining quantum optics with advanced deep learning optimization strategies to establish a practical and efficient framework for ultra-high resolution bioimaging.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_47-RA_ACS_net_Network_A_Quantum_Optical_Reconstruction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Consumer Adoption of Autonomous Vehicles in China: A Bibliometric Review of Intention Drivers and Perceptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160846</link>
        <id>10.14569/IJACSA.2025.0160846</id>
        <doi>10.14569/IJACSA.2025.0160846</doi>
        <lastModDate>2025-08-29T12:47:59.6770000+00:00</lastModDate>
        
        <creator>Yunluo Zou</creator>
        
        <creator>Syuhaily Osman</creator>
        
        <creator>Sharifah Azizah Haron</creator>
        
        <subject>Autonomous vehicles; diffusion of AVs; consumer adoption; consumer perception; Chinese market; bibliometric analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Autonomous vehicles (AVs) are playing an increasing role in digitally enabled transportation systems with the dramatic emergence of related technologies. Consumer adoption is arguably a key factor in the deployment of AVs in China. In this study, bibliometric analysis was used to explore intention drivers regarding consumer adoption of AVs and the role of consumer perception in the decision-making process from the perspective of Chinese consumers. The results revealed that consumer perception is a highly critical factor influencing the adoption of AVs. Moreover, most Chinese consumers were more sensitive to perceived losses than to gains. In addition, the main public focus was on highly intelligent shared AVs rather than family-use vehicles. These findings could help governments and enterprises gain a deeper understanding of consumer behavior in the Chinese market, which could be used as a reference for implementing measures to better accelerate the diffusion of AVs.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_46-Consumer_Adoption_of_Autonomous_Vehicles_in_China.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Machine and Deep Learning Models for Stock Market Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160845</link>
        <id>10.14569/IJACSA.2025.0160845</id>
        <doi>10.14569/IJACSA.2025.0160845</doi>
        <lastModDate>2025-08-29T12:47:59.6430000+00:00</lastModDate>
        
        <creator>Hadi S. AlQahtani</creator>
        
        <creator>Mohammed J. Alhaddad</creator>
        
        <creator>Mutasem Jarrah</creator>
        
        <subject>Deep learning; machine learning; prediction methods; stock market; regression; taxonomy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Stock market prediction is a core task in financial engineering that requires sophisticated methods to extract subtle market and volatility trends. The increasing complexity of the stock market has led to the integration of advanced machine learning (ML) and deep learning (DL) techniques to improve accuracy beyond traditional statistical methods. This research provides a taxonomy of stock market prediction methods and reviews key regression-based models, including linear regression and advanced neural networks such as recurrent neural networks (RNNs), long short-term memory (LSTM), and hybrid (CNN-LSTM) models. The study deploys and evaluates three specific models: Linear Regression, RNNs, and LSTMs. The models were trained and tested using modern data preprocessing procedures, including Z-score normalization and temporal sequencing. The findings show that the Linear Regression (LR) model performed best, with a Root Mean Square Error (RMSE) of 0.334 during training and 0.304 during testing, and a Mean Absolute Error (MAE) of 0.203 and 0.207, respectively. This contrasted with the deep learning models, which had higher error rates: the LSTM achieved a training RMSE of 0.355, while the RNN model had a training RMSE of 0.383. These results provide empirical evidence that increased model complexity does not necessarily translate into better forecasting accuracy in financial applications, and that model selection is both context-sensitive and data-driven. The findings also highlight the challenge of nonstationarity in stock market data and the need to periodically retrain models on recent data.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_45-Comprehensive_Analysis_of_Machine_and_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection Using Machine Learning and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160844</link>
        <id>10.14569/IJACSA.2025.0160844</id>
        <doi>10.14569/IJACSA.2025.0160844</doi>
        <lastModDate>2025-08-29T12:47:59.6270000+00:00</lastModDate>
        
        <creator>Fatima Jobran ALzaher</creator>
        
        <creator>Asma AlJarullah</creator>
        
        <subject>Cybersecurity; cyber-attack; intrusion detection system; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>As cyberattacks grow in prevalence, Intrusion Detection Systems (IDS) have become critical for securing network infrastructures. This study proposes an efficient IDS framework utilizing both machine learning (ML) and deep learning (DL) algorithms. The framework is evaluated on the “NF-UNSW-NB15-v2” dataset, which comprises a blend of normal and malicious traffic. A diverse set of advanced models—including Deep Neural Networks (DNN), Long Short-Term Memory (LSTM) networks, eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and K-Nearest Neighbors (KNN)—is deployed for intrusion detection. The approach encompasses both binary classification (normal vs. malicious) and multi-class classification (specific attack categories). Preprocessing steps include feature standardization using StandardScaler, class imbalance correction via SMOTE, and dimensionality reduction through Principal Component Analysis (PCA). Results show that Random Forest and XGBoost models achieve high accuracy in binary classification with F1-scores approaching 0.97, while XGBoost attains the best macro F1-score (0.71) in multi-class tasks. Additionally, RF and XGBoost demonstrate the fastest inference times, underscoring their suitability for real-time deployment. This work contributes a scalable and optimized IDS pipeline for enhancing cybersecurity resilience.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_44-Intrusion_Detection_Using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shared API Call Insights for Optimized Malware Detection in Portable Executable Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160843</link>
        <id>10.14569/IJACSA.2025.0160843</id>
        <doi>10.14569/IJACSA.2025.0160843</doi>
        <lastModDate>2025-08-29T12:47:59.5800000+00:00</lastModDate>
        
        <creator>Mehdi Kmiti</creator>
        
        <creator>Jallal Eddine Moussaoui</creator>
        
        <creator>Khalid El Gholami</creator>
        
        <creator>Yassine Maleh</creator>
        
        <subject>Malware detection; static analysis; portable executable (PE) files; API calls; extra trees classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Malware analysis is essential for understanding malicious software and developing effective detection strategies. Traditional detection methods, such as signature-based and heuristic-based approaches, often fail against evolving threats. To address this challenge, this study proposes a static analysis–based malware detection system that employs thirteen classifiers, including Logistic Regression, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naive Bayes, Decision Tree, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Random Forest, Extra Trees, Gradient Boosting, AdaBoost, and LightGBM. The framework is built on a balanced dataset of 1,318 Windows Portable Executable (PE) files (674 malware, 644 benign), where the features are derived from shared API calls between benign and malicious files to ensure relevance and reduce redundancy. Experimental results show that the Extra Trees classifier achieved the highest accuracy of 98.14%, highlighting its effectiveness in detecting malware. Overall, this study provides a robust, data-driven approach that enhances static malware detection and contributes to strengthening cybersecurity against emerging threats.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_43-Shared_API_Call_Insights_for_Optimized_Malware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Proposed Scalable Reversible Randomization Algorithm (SRRA) in Privacy Preserving Big Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160842</link>
        <id>10.14569/IJACSA.2025.0160842</id>
        <doi>10.14569/IJACSA.2025.0160842</doi>
        <lastModDate>2025-08-29T12:47:59.5500000+00:00</lastModDate>
        
        <creator>Mohana Chelvan P</creator>
        
        <creator>Rajavarman V N</creator>
        
        <creator>Dahlia Sam</creator>
        
        <subject>Big data; data analytics; high dimensionality; feature selection; selection stability; privacy preservation; information loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Today’s economy is a data-driven knowledge economy: electronic devices mediate most day-to-day activities, and through them organizations collect data actively or passively. Advances in digital devices and communication technology have increased both the volume of data and the dimensionality of datasets. Feature selection therefore becomes a crucial preprocessing step in big data analytics, serving as a dimensionality reduction technique that eliminates redundant and noisy features. The stability of feature selection results is an active area of research because it is positively related to data utility; unstable selection results undermine analysts’ confidence in their research outcomes. Privacy preservation is a major concern in big data analytics to protect sensitive individual data. Applying privacy preservation techniques that modify the dataset will affect the stability of feature selection, which has recently been shown to depend largely on the dataset’s physical characteristics. This study analyses the performance of the proposed Scalable Reversible Randomization Algorithm (SRRA) in terms of privacy preservation, change in dataset characteristics, information loss, stability of feature selection, and data utility in big data scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_42-Performance_Analysis_of_Proposed_Scalable_Reversible_Randomization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Cyber Attack Detection in IoT Healthcare Environments Using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160841</link>
        <id>10.14569/IJACSA.2025.0160841</id>
        <doi>10.14569/IJACSA.2025.0160841</doi>
        <lastModDate>2025-08-29T12:47:59.5330000+00:00</lastModDate>
        
        <creator>Rawan Marzooq Alharbi</creator>
        
        <creator>Muhammad Asif Khan</creator>
        
        <subject>IoT healthcare security; cyber-attack detection; healthcare security; AI in healthcare; smart medical systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The rapid growth of the Internet of Things (IoT) has deepened its integration into daily life, and in recent years IoT technologies have significantly enhanced patient care and operational efficiency in healthcare. One of the most promising developments is the interconnection of medical devices, known as the Internet of Medical Things (IoMT). IoMT supports various healthcare services, e.g., remote patient monitoring. However, there are serious cyber-security concerns, as various attacks have targeted IoMT devices in recent years. This research presents an analytical approach to understanding how Artificial Intelligence (AI) can improve the detection of cyber-attacks within IoT healthcare environments. The main goal of this research is to provide an AI-based model to detect cyber-attacks in IoMT healthcare environments. Many researchers have developed frameworks in this field to address critical cybersecurity threats; however, these efforts often fall short of covering other important aspects such as data privacy and interoperability. In this study, a model and framework are proposed to monitor IoT networks and detect potential security breaches in real time, helping to mitigate risks while maintaining healthcare services. The key findings contribute to strengthening cybersecurity protocols in healthcare IoT environments in order to ensure the protection of sensitive information against emerging cybersecurity vulnerabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_41-Analyzing_Cyber_Attack_Detection_in_IoT_Healthcare_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Learning for Multi-Class Android Malware Detection: A Robust Framework for Family Level Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160840</link>
        <id>10.14569/IJACSA.2025.0160840</id>
        <doi>10.14569/IJACSA.2025.0160840</doi>
        <lastModDate>2025-08-29T12:47:59.4870000+00:00</lastModDate>
        
        <creator>Mana Saleh Al Reshan</creator>
        
        <subject>Malware detection; cyber threat; ML models; feature selection; ensemble methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The widespread popularity of Android devices has made them a prime target for sophisticated and evolving malware threats. Traditional malware detection techniques rely on binary classification (malicious vs. benign), which fails to capture the nuanced behavioral differences between malware families, critical for threat intelligence and incident response. To address this limitation, we propose a robust multi-class classification approach for Android malware family detection, leveraging ensemble learning and advanced feature selection methods. Our system uses a hybrid feature extraction strategy that combines Chi-Squared and Mutual Information techniques to eliminate low-utility features and retain the most discriminative attributes. These include flow-based metrics, inter-arrival time (IAT), and session duration, key indicators of malicious behavior. We evaluated five baseline classifiers (Random Forest, Gradient Boosting, XGBoost, Extra Trees, and Decision Trees) across three ensemble strategies (bagging, voting, and stacking). Among these, the Stacking ensemble achieved the highest overall performance, reaching 83% across all evaluation metrics (accuracy, precision, recall, and F1-score) and a True Negative Rate (TNR) of 93.34%. The framework also improves the detection of minority malware families in imbalanced datasets. These findings highlight the advantages of ensemble learning for building scalable and reliable Android malware detection systems suitable for real-world deployment.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_40-Ensemble_Learning_for_Multi_Class_Android_Malware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mid-Upper Arm Circumference Measurement Using Digital Images: A Top-Down Approach with Panoptic Segmentation Using Mask R-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160839</link>
        <id>10.14569/IJACSA.2025.0160839</id>
        <doi>10.14569/IJACSA.2025.0160839</doi>
        <lastModDate>2025-08-29T12:47:59.4570000+00:00</lastModDate>
        
        <creator>Maya Silvi Lydia</creator>
        
        <creator>Pauzi Ibrahim Nainggolan</creator>
        
        <creator>Desilia Selvida</creator>
        
        <creator>Doli Aulia Hamdalah</creator>
        
        <creator>Dhani Syahputra Bukit</creator>
        
        <creator>Amalia</creator>
        
        <creator>Rahmita Wirza Binti O. K. Rahmat</creator>
        
        <subject>Mask R-CNN; mid-upper arm circumference; digital images; segmentation; mean absolute error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Assessing nutritional status, particularly among children and pregnant women, necessitates accurate measurement of Mid-Upper Arm Circumference (MUAC). This research introduces a novel system for MUAC estimation from digital images using the Mask R-CNN algorithm, employing a top-down panoptic segmentation strategy. The proposed model was designed to identify the upper arm region within human body images and compute MUAC values autonomously. Mask R-CNN was selected due to its capacity to perform precise segmentation of objects within visually complex scenes, especially in the mid-upper arm area. Model training was conducted using a dataset of annotated images, with subsequent evaluation confirming its ability to reliably detect and measure MUAC. The system was validated using 72 image samples, yielding a mean absolute error (MAE) of 2.31 cm when compared to manual measurements. Among these samples, 29.2% (21 individuals) exhibited a measurement discrepancy of 0 to 1 cm, 27.8% (20 individuals) showed a 1 to 2 cm difference, and 43.1% (31 individuals) demonstrated deviations exceeding 2 cm. Despite some variations in measurement accuracy, the system presents a promising tool for enhancing the automation and efficiency of nutritional assessments.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_39-Mid_Upper_Arm_Circumference_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Multilevel Framework for DoS Detection in SDN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160838</link>
        <id>10.14569/IJACSA.2025.0160838</id>
        <doi>10.14569/IJACSA.2025.0160838</doi>
        <lastModDate>2025-08-29T12:47:59.4100000+00:00</lastModDate>
        
        <creator>Rejo Rajan Mathew</creator>
        
        <creator>Amarsinh Vidhate</creator>
        
        <subject>Software defined network; distributed denial of service; openflow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>DoS attacks remain the most common type of attack on SDNs, and advanced persistent threats have widened the threat landscape. Recent studies have focused on a single level of defence and on conventional detection methods, which have become obsolete. This study proposes and implements a novel multilevel DoS attack detection framework with a three-pronged approach to counter modern DoS attacks. The first level enforces a Zero Trust mechanism, validating clients using SHA-256 hashes. The second level uses hybrid deep learning models to detect DoS attacks; trained and tested across three recent datasets, namely NSLKDD, CIC DOS 2019, and IOT2023, they consistently achieve 95% accuracy. The third level is a lightweight adaptive DoS detector that can identify both fast and low-rate DoS attacks, securing the SDN within a few milliseconds and ruling out any possibility of congestion. The results clearly indicate how a three-level approach can thwart most advanced persistent threats.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_38-A_Novel_Multilevel_Framework_for_DoS_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Explainable and Balanced Federated Learning: A Neural Network Approach for Multi-Client Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160837</link>
        <id>10.14569/IJACSA.2025.0160837</id>
        <doi>10.14569/IJACSA.2025.0160837</doi>
        <lastModDate>2025-08-29T12:47:59.3630000+00:00</lastModDate>
        
        <creator>Nurafni Damanik</creator>
        
        <creator>Chuan-Ming Liu</creator>
        
        <subject>Component federated learning; K-Means SMOTEENN; credit card fraud detection; LIME</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The growing demand for secure and privacy-preserving machine learning frameworks has resulted in the implementation of federated learning (FL), especially in critical areas such as credit card fraud detection. This study presents a comprehensive federated learning architecture that incorporates Neural Networks as local models, in conjunction with KMeans-SMOTEENN to address class imbalance in distributed datasets. The system utilises the Flower framework, employing the FedAvg algorithm across ten decentralised clients to collectively train the global model while preserving raw data confidentiality. To improve model transparency and cultivate stakeholder trust, Local Interpretable Model-Agnostic Explanations (LIME) is utilised, offering localised, comprehensible insights into model decisions. The experimental results indicate that the suggested method effectively achieves high predictive accuracy and explainability, rendering it appropriate for real-world fraud detection contexts that necessitate data confidentiality and model accountability.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_37-Towards_Explainable_and_Balanced_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Multilingual Plagiarism Detection: Approaches and Research Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160836</link>
        <id>10.14569/IJACSA.2025.0160836</id>
        <doi>10.14569/IJACSA.2025.0160836</doi>
        <lastModDate>2025-08-29T12:47:59.3300000+00:00</lastModDate>
        
        <creator>Chaimaa BOUAINE</creator>
        
        <creator>Faouzia BENABBOU</creator>
        
        <creator>Zineb Ellaky</creator>
        
        <creator>Amine BOUAINE</creator>
        
        <creator>Chaimae ZAOUI</creator>
        
        <subject>Multilingual plagiarism; systematic literature review; multilingual text representation; translation approaches; natural language processing; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The vast volume of multilingual sources on the web across different fields creates numerous issues, including violations of intellectual property rights. Consequently, multilingual or cross-language plagiarism detection (CLPD), which targets content copied from a source text in one language into a target text in another without proper attribution, has become a major challenge. This study presents a systematic literature review (SLR) of methodologies used in CLPD covering works published between 2014 and 2025. The review summarizes and diagrams the different approaches used for CLPD. We propose a classification of multilingual text representations into four types: traditional approaches, multilingual semantic networks, fingerprinting methods, and deep learning models. In addition, we carried out an in-depth analysis of ten language pairs, focusing on the approaches employed, including translation strategies, feature extraction approaches, classification techniques, similarity methods, dataset types, data granularity, and evaluation metrics. Among the findings, English appears in 98% of language pairs, and the English-Arabic pair stands out as the most studied. Over 60% of studies involve a translation phase, with Google Translate as the most frequently used tool. The mBART model achieves over 95% accuracy for English-Spanish, English-French, and English-German, while BERT reached 96% for English-Russian. For the assisted-translation study based on the Expert translation tool, strong results are obtained for English-Persian, with an accuracy of 98.82%. On the whole, transformers offer better results across several language pairs without the need for translation.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_36-A_Systematic_Review_of_Multilingual_Plagiarism_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stock Market Prediction of the Saudi Telecommunication Sector Using Univariate Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160835</link>
        <id>10.14569/IJACSA.2025.0160835</id>
        <doi>10.14569/IJACSA.2025.0160835</doi>
        <lastModDate>2025-08-29T12:47:59.3000000+00:00</lastModDate>
        
        <creator>Hadi S. AlQahtani</creator>
        
        <creator>Mohammed J. Alhaddad</creator>
        
        <creator>Mutasem Jarrah</creator>
        
        <subject>Deep learning; stock market; prediction; models; regression; time series</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Stock market volatility, randomness, and complexity make accurate stock price prediction very elusive, even though it is essential for rational investment and risk management. This study compares four Deep Learning (DL) models, Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and a hybrid CNN-LSTM model, for predicting the Saudi Telecommunication sector by focusing on the closing price time series. The daily historical closing prices of the STC, Mobily, and Zain companies are gathered and preprocessed, involving duplicate removal, feature selection, and Min-Max scaling. Models were trained with MSE loss, whereas validation was done with RMSE and MAE. The study points toward the ability of deep learning to capture complex nonlinear regression patterns in the ebbs and flows of volatile financial markets. A comparative analysis reveals that the LSTM model yielded the lowest Test RMSE in all cases (Mobily: 1.169705, STC: 0.708495, Zain: 0.27147), thus presenting the best overall predictive accuracy. By contrast, the RNN almost always had the highest Test RMSE values (Mobily: 1.688603, STC: 1.143664, Zain: 0.666184), highlighting its limitations. The CNN and CNN-LSTM models showed intermediate performance, with implications for enhanced financial forecasting and decision-making within this specific market segment.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_35-Stock_Market_Prediction_of_the_Saudi_Telecommunication_Sector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segment-Based Vehicular Congestion Detection Methods Using Vehicle ID and Loss of Expected Time of Arrival</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160834</link>
        <id>10.14569/IJACSA.2025.0160834</id>
        <doi>10.14569/IJACSA.2025.0160834</doi>
        <lastModDate>2025-08-29T12:47:59.2530000+00:00</lastModDate>
        
        <creator>Mustapha Abubakar Ahmed</creator>
        
        <creator>Azizul Rahman Mohd Shariff</creator>
        
        <subject>Vehicle ID; traffic congestion; congestion detection; vehicle trajectories; vehicle speed; vehicle density; loss of expected time of arrival</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The increasing number of vehicles and rapid urbanization are significant causes of road traffic congestion, which is a main issue facing world cities today. Congestion control and mitigation are necessary to reduce its negative impacts, such as delays and increased fuel consumption, among others. Many congestion detection methods have been published in the literature; some of these methods, such as the speed threshold, use a single congestion detection metric. Using a single parameter for traffic congestion detection might produce false and inaccurate results. Furthermore, many congestion detection techniques fall short in describing traffic congestion from the user&#39;s perspective. To address this, this study develops segment-based congestion detection methods that use vehicle ID and loss of expected time of arrival. The ID-based method considers both vehicle speed and density, whereas the loss of expected time of arrival focuses on time loss. These methods are segment-based, where roads are divided into segments using vehicle trajectories. Using a speed threshold of 8.33 m/s, the road is divided into segments of 8.33 m, 16.66 m, and 24.99 m in length. Vehicle speed and density are monitored using vehicle identification numbers (VINs). Experimental results reveal that the speed threshold and the Microscopic Congestion Detection Protocol recorded false congestion detections. The proposed ID-based congestion detection method is capable of identifying false congestion and accurately detecting real congestion. Moreover, the loss of expected time of arrival shows promising results in identifying congestion as perceived by motorists.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_34-Segment_Based_Vehicular_Congestion_Detection_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Possibilities of Using LLM Chatbots for Solving Course and Exam Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160833</link>
        <id>10.14569/IJACSA.2025.0160833</id>
        <doi>10.14569/IJACSA.2025.0160833</doi>
        <lastModDate>2025-08-29T12:47:59.2230000+00:00</lastModDate>
        
        <creator>Svetlana Stefanova</creator>
        
        <creator>Yordan Kalmukov</creator>
        
        <subject>Large language models (LLM); artificial intelligence (AI); ChatGPT; Claude AI; DeepSeek; AI in education; AI for solving exams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>With the widespread introduction of new technologies and, in particular, AI into various areas of life, students are increasingly using large language models (LLMs) such as ChatGPT and other similar tools to help them with their academic tasks. These tools can improve students’ productivity, deepen their understanding of complex topics, and support their academic work. LLMs are used for research, information gathering, and preparation for exams and tests, as well as for generating ideas, writing code, and more. This study explores the possibility of using ChatGPT, Claude, and DeepSeek for solving course and exam tasks. The results of the analysis could serve as a warning signal and as motivation for a future transformation of student testing and assessment methods. The ability of AI systems to search, analyze, and summarize large volumes of information should shift the focus of assessment from classical fact-finding and the practical performance of elementary tasks to creativity, combinability, and skills for adapting and applying already gained knowledge.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_33-Analysis_of_the_Possibilities_of_Using_LLM_Chatbots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Boosting Deepfake Detection Accuracy with Unsharp Masking and EfficientNet Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160832</link>
        <id>10.14569/IJACSA.2025.0160832</id>
        <doi>10.14569/IJACSA.2025.0160832</doi>
        <lastModDate>2025-08-29T12:47:59.1900000+00:00</lastModDate>
        
        <creator>Radwa Khaled</creator>
        
        <creator>Hossam M. Moftah</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <creator>Adel Saad Assiri</creator>
        
        <creator>Kamel Hussein Rahouma</creator>
        
        <creator>Mohammed Kayed</creator>
        
        <subject>Deepfake detection; efficientnet; unsharp masking; convolutional neural networks (CNNs); facial manipulation detection; computer vision; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The rapid progress of deepfake technology, fueled by generative adversarial networks (GANs), has increased the challenge of verifying the authenticity of digital media. This study proposes a more powerful deepfake detection framework based on the EfficientNet convolutional neural network family, coupled with an unsharp masking preprocessing method to highlight manipulation artifacts. The model was trained and tested on several EfficientNet variants (B0–B4) using a large, diverse dataset of over 5000 video samples. The results indicate that the integration of unsharp masking significantly improves the model&#39;s ability to detect minor irregularities in facial regions, achieving its best validation accuracy of 97.77% with EfficientNetB4. The method strikes a balance between computational cost and detection accuracy, rendering it applicable to real-world use cases such as forensic examination and digital content authentication. The stability of the framework across different datasets and manipulation methods highlights its value as a scalable solution for curbing disinformation and protecting media integrity.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_32-Boosting_Deepfake_Detection_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tamil Handwritten Character Recognition: A Comprehensive Review of Recent Innovations and Progress</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160831</link>
        <id>10.14569/IJACSA.2025.0160831</id>
        <doi>10.14569/IJACSA.2025.0160831</doi>
        <lastModDate>2025-08-29T12:47:59.1430000+00:00</lastModDate>
        
        <creator>Manoj K</creator>
        
        <creator>Iyapparaja M</creator>
        
        <subject>Convolutional Neural Network (CNN); handwritten recognition; Tamil characters; offline recognition; feature extraction techniques; neural network architecture; Support Vector Machine (SVM); groupwise classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Recognizing handwritten characters is a complex task, particularly when dealing with Tamil, a writing system known for its intricate and stylized nature. Several challenges arise in recognizing Tamil handwritten characters, including the complexity of the writing style, similar character shapes, irregular handwriting, slanting characters, varying curves, inconsistent font sizes, and limited datasets. Additionally, the diversity of writing styles and the absence of a standard solution for accurately recognizing all Tamil characters further complicate the process. To address these issues, researchers have explored various techniques, including neural networks, support vector machines, clustering, and groupwise classification. However, Tamil handwritten character recognition remains an evolving field with ample opportunities for exploration and advancement. This review study aims to provide a thorough analysis of the current state of the field, identify key challenges, and highlight areas for improvement. Furthermore, it presents a detailed examination of the proposed techniques and suggests potential directions for future research in this domain.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_31-Tamil_Handwritten_Character_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Leaf Fall Disease in Sembawa Rubber Plantation Through Feature Extraction Model and Clustering Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160830</link>
        <id>10.14569/IJACSA.2025.0160830</id>
        <doi>10.14569/IJACSA.2025.0160830</doi>
        <lastModDate>2025-08-29T12:47:59.1130000+00:00</lastModDate>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Devvi Sarwinda</creator>
        
        <creator>Retno Lestari</creator>
        
        <creator>Ahmad Ihsan Farhani</creator>
        
        <creator>Harum Ananda Setyawan</creator>
        
        <creator>Masita Dwi Mandini Manessa</creator>
        
        <creator>Tri Rappani Febbiyanti</creator>
        
        <creator>Minami Matsui</creator>
        
        <subject>Convolutional autoencoder; gray level co-occurrence matrix; k-means clustering; rubber plant plantation; Pestalotiopsis sp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Natural rubber is one of Indonesia&#39;s most important export commodities, making the country the second-largest exporter globally with a 28.65% share of the world market. However, recent production has declined, partly due to leaf fall disease caused by the Pestalotiopsis sp. fungus. This disease leads to premature leaf drop, which forces rubber trees to redirect energy from latex production to leaf regeneration, potentially reducing yields by up to 30%. Traditional detection methods that rely on manual visual inspection of leaf morphology are impractical over large plantation areas. To address this, the present study proposes a remote sensing-based detection approach using aerial drone imagery and unsupervised machine learning. Two feature extraction methods, Convolutional Autoencoder (CAE) and Gray Level Co-occurrence Matrix (GLCM), were used prior to clustering with k-means. Despite a small dataset, the GLCM-based approach significantly outperforms the CAE-based method. These results demonstrate that GLCM combined with clustering can reliably distinguish between healthy and diseased plantation areas. The proposed method offers a cost-effective, scalable, and non-invasive alternative to ground surveys, and has strong potential for real-world deployment in disease monitoring and early warning systems across large agricultural regions.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_30-Detection_of_Leaf_Fall_Disease_in_Sembawa_Rubber_Plantation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Ensemble Models for Robust Intrusion Detection in Cloud Environment on Imbalanced Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160829</link>
        <id>10.14569/IJACSA.2025.0160829</id>
        <doi>10.14569/IJACSA.2025.0160829</doi>
        <lastModDate>2025-08-29T12:47:59.0800000+00:00</lastModDate>
        
        <creator>Swarnalatha K</creator>
        
        <creator>Nirmalajyothi Narisetty</creator>
        
        <creator>Gangadhara Rao Kancherla</creator>
        
        <creator>Neelima Guntupalli</creator>
        
        <creator>Simhadri Mallikarjuna Rao</creator>
        
        <creator>Archana Kalidindi</creator>
        
        <subject>Resampling methods; cloud computing; feature selection; ensemble model; intrusion detection system; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The rapid development of information storage and sharing technologies brings new challenges in protecting against network security attacks. In this study, ensemble learning models are evaluated to enhance the performance of a network intrusion detection system (NIDS) in three phases through machine learning approaches. In the first phase, the imbalanced dataset is processed through four re-sampling techniques, namely SMOTE, RUS, RUS+ROS, and RUS+SMOTE, for balancing. In the second phase, Random Forest feature selection is applied to these four balanced datasets. Finally, three ensemble models, named EM1, EM2, and EM3, are designed using six base classifiers and evaluated. In earlier studies, the first and second phases were evaluated through an SVM binary classifier for four feature subsets. The four feature subsets are obtained through Random Forest feature selection with four different thresholds of Cumulative Feature Importance Scores (CFIS): 85%, 90%, 95%, and 99%. From the evaluated results, three challenges were identified: i) The highest accuracy obtained through the re-sampling method required maximum computational time. ii) Different thresholds of CFIS exhibit instability in performance metrics as well as computational times, even when the number of features is small. iii) Compared to earlier works, the adopted multi-class SVM classifier’s efficiency in detecting attacks within minimal computational time and without compromising accuracy is yet to be ascertained. In this study, an attempt has been made to address these challenges with ensemble learning. Three ensemble models are chosen for the evaluation process conducted on the adopted CICIDS-2017 dataset. Finally, the comparative results are presented, and discussions are provided to guide security professionals in implementing prevention and mitigation algorithms.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_29-Adaptive_Ensemble_Models_for_Robust_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Currency Exchange Direction with an Advanced Immune-Inspired Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160828</link>
        <id>10.14569/IJACSA.2025.0160828</id>
        <doi>10.14569/IJACSA.2025.0160828</doi>
        <lastModDate>2025-08-29T12:47:59.0330000+00:00</lastModDate>
        
        <creator>EL BADAOUI Mohamed</creator>
        
        <creator>RAOUYANE Brahim</creator>
        
        <creator>EL MOUMEN Samira</creator>
        
        <creator>BELLAFKIH Mostafa</creator>
        
        <subject>Artificial immune recognition system; financial market prediction; machine learning; predictive analytics; time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Accurately forecasting currency exchange rates is a persistent and significant challenge in computational finance. This study addresses the challenge by introducing an advanced model based on the Artificial Immune Recognition System (AIRS), an algorithm inspired by the adaptive learning of biological immune systems, to predict the directional movement of the EUR/USD pair. While conventional machine learning models are widely used, immune-inspired approaches have been largely unexplored in this domain. Using historical data from May 2002 to July 2024, the proposed model was rigorously optimized through time-series cross-validation and an Evolutionary Algorithm search. On the out-of-sample test set, the optimized model demonstrates strong predictive power, achieving an F1-Score of 0.66 and an ROC AUC of 0.74, results that are competitive with standard machine learning benchmarks. These findings validate AIRS as a robust and scientifically defensible tool for financial forecasting, offering a viable alternative to conventional methods in a highly volatile market.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_28-Forecasting_Currency_Exchange_Direction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach Combining Deep CNN Features with Classical Machine Learning for Diabetic Retinopathy Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160827</link>
        <id>10.14569/IJACSA.2025.0160827</id>
        <doi>10.14569/IJACSA.2025.0160827</doi>
        <lastModDate>2025-08-29T12:47:58.9870000+00:00</lastModDate>
        
        <creator>Amandeep Kaur</creator>
        
        <creator>Simranjit Singh</creator>
        
        <creator>Hardeep Singh</creator>
        
        <creator>Sarveshwar Bharti</creator>
        
        <creator>Jai Sharma</creator>
        
        <creator>Himanshi Sharma</creator>
        
        <subject>Deep learning; convolutional neural networks; hybrid model; diabetic retinopathy; machine learning; medical image analysis; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>One of the main causes of vision impairment is diabetic retinopathy (DR), a common and dangerous complication of diabetes that damages the retinal blood vessels. Preventing irreversible vision loss requires early detection of DR. Recent developments demonstrate how artificial intelligence (AI), and in particular deep learning (DL), can automate the classification of retinal images for the diagnosis of DR. In this study, a hybrid model is proposed that combines deep learning-based feature extraction with classical machine learning classifiers for robust medical image analysis. After applying preprocessing methods to reduce background noise, this study investigates the use of Convolutional Neural Networks (CNNs) for extracting discriminative features from DR images. To improve image contrast and highlight vascular features, the preprocessing pipeline uses morphological top-hat filtering and green channel extraction. Furthermore, transfer learning was applied to enhance feature representation. Among the machine learning (ML) classifiers evaluated, Random Forest (RF), Gradient Boosting (GB), and the Radial Basis Function Support Vector Machine (RBF-SVM), the tuned RBF-SVM achieved the highest classification accuracy of 85%. These findings demonstrate the potential of hybrid AI-driven approaches and domain-specific medical image analysis in providing reliable and efficient automated DR detection.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_27-A_Hybrid_Approach_Combining_Deep_CNN_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Chatbots into E-Learning Platforms: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160826</link>
        <id>10.14569/IJACSA.2025.0160826</id>
        <doi>10.14569/IJACSA.2025.0160826</doi>
        <lastModDate>2025-08-29T12:47:58.9400000+00:00</lastModDate>
        
        <creator>Victor Sevillano-Vega</creator>
        
        <creator>Juan Chavez-Perez</creator>
        
        <creator>Carmen Torres-Cecl&#233;n</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <subject>Chatbots; educational platforms; e-learning; education; challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The application of chatbots in e-learning has experienced rapid growth in recent years, but debate remains about their pedagogical contribution in practice. For this reason, the aim of this systematic literature review was to analyze the implementation of chatbots in e-learning platforms, evaluating their benefits, academic impact, and challenges. The methodology used was PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), based on a structured search in databases such as Scopus, Web of Science, Springer, and ScienceDirect. The selection included 55 studies published between 2020 and 2024, after applying rigorous inclusion and exclusion controls. The research results show that personalization of learning, self-regulation, student engagement, and educational efficiency benefit most when chatbots are integrated with active methodologies. Geographically, scientific output was dominated by the UK, Malaysia, and Spain, with 38.18% of publications in 2024. It was also found that the majority of methodological approaches were quantitative, followed less frequently by mixed and qualitative studies. Among the barriers that emerged in the pedagogical dimension were teacher resistance and limited training in artificial intelligence tools. Educational issues, privacy concerns, and biases in generated responses also emerged. Keyword co-occurrence analysis using VOSviewer revealed the prominence of terms such as chatbot, intelligent tutoring, and technology-enhanced learning in recent scientific output. Thus, it is concluded that chatbots are a determinant of autonomy, motivation, and effectiveness in online learning, leading to a change in future educational environments, where students will adopt emerging technology. Among the limitations of this review were the scarcity of longitudinal studies and restricted access to certain articles.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_26-Integrating_Chatbots_into_E_Learning_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The ECTLC-Horcrux Protocol for Decentralized Biometric-Based Self-Sovereign Identity with Time-Lapse Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160825</link>
        <id>10.14569/IJACSA.2025.0160825</id>
        <doi>10.14569/IJACSA.2025.0160825</doi>
        <lastModDate>2025-08-29T12:47:58.9100000+00:00</lastModDate>
        
        <creator>N. M. Kaziyeva</creator>
        
        <creator>R. M. Ospanov</creator>
        
        <creator>N. Issayev</creator>
        
        <creator>K. Maulenov</creator>
        
        <creator>Shakhmaran Seilov</creator>
        
        <subject>Self-sovereign identity; horcrux protocol; elliptic curves time-lapse cryptography; biometrics; QR codes; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>In the era of rapid development of digital communication, there is a growing need for technologies that guarantee secure user identification, document authentication, and protection of personal data, including biometrics. Previously used centralized identity management systems are becoming increasingly vulnerable to hacking, falsification, and misuse. This problem is especially relevant when information must remain closed until a specific moment or event occurs, for example, in the fields of forensics, healthcare, or law (medical certificates, legal acts, inheritance agreements, etc.). The main goal is to create a secure, verifiable, and distributed access control system with the ability to defer the disclosure of information. The study proposes a cryptographic protocol that combines Self-Sovereign Identity (SSI), Time-Lapse Cryptography (TLC), and decentralized biometric data management. The protocol is based on the principles of TLC and the Horcrux protocol, which enable time-controlled disclosure of encrypted information associated with a user&#39;s identity. The architecture includes the use of QR codes as a transport for Verifiable Credentials (VC), blockchain for authenticity verification and key management, and biometrics as a second factor of identity binding. The proposed solution is intended for use in scenarios where cryptographic protection against premature access to sensitive data is required, such as in medicine, forensics, notarial acts, or intellectual property. The study presents the protocol structure and application options.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_25-The_ECTLC_Horcrux_Protocol_for_Decentralized_Biometric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge-Guided Multi-Scale YOLOv11n: An Advanced Framework for Accurate Ship Detection in Remote Sensing Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160824</link>
        <id>10.14569/IJACSA.2025.0160824</id>
        <doi>10.14569/IJACSA.2025.0160824</doi>
        <lastModDate>2025-08-29T12:47:58.8770000+00:00</lastModDate>
        
        <creator>Yan Shibo</creator>
        
        <creator>Liu Pan</creator>
        
        <creator>Abudhahir Buhari</creator>
        
        <subject>Optical remote sensing imagery; ship detection; multi-scale deep convolution; edge-aware feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Ship detection in optical remote sensing imagery plays a vital role in maritime surveillance and environmental monitoring. However, existing deep learning models often struggle to generalize effectively in complex marine environments due to challenges such as noise interference, small object sizes, and diverse weather conditions. To address these issues, this study proposes an Edge-Guided Multi-Scale YOLO algorithm (YOLOv11n-EGM). The approach introduces multi-scale deep convolutional branches with varying kernel sizes to perform parallel feature extraction, enhancing the model’s ability to detect objects of different scales. Additionally, the classic Sobel operator is incorporated for edge-aware feature extraction, improving the model’s sensitivity to object boundaries. Finally, 1&#215;1 convolutions are employed for feature fusion, reducing computational complexity. Experimental results on the ShipRSImageNet V1.0 dataset demonstrate that the improved model achieves notable gains in precision, recall, mAP@0.5, and mAP@0.5:0.95 compared to the baseline, highlighting its superior performance in challenging maritime scenarios. Qualitative analysis further shows that YOLOv11n-EGM can accurately detect both large and extremely small ships in cluttered scenes, with precise boundary localization. However, occasional misclassification in fine-grained categories (e.g., motorboat vs. hovercraft) highlights the challenge of small-instance recognition. Overall, the proposed method exhibits strong robustness and practical applicability in real-world maritime scenarios, offering a promising solution for edge-aware, multi-scale ship detection in remote sensing imagery.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_24-Edge_Guided_Multi_Scale_YOLOv11n.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Residual DDPG Control with Error-Aware Reward Rescaling for Active Suspension Under Unseen Road Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160823</link>
        <id>10.14569/IJACSA.2025.0160823</id>
        <doi>10.14569/IJACSA.2025.0160823</doi>
        <lastModDate>2025-08-29T12:47:58.8300000+00:00</lastModDate>
        
        <creator>Zien Zhang</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Noraishikin Zulkarnain</creator>
        
        <subject>Deep deterministic policy gradient; active suspension; reward function; generalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study investigates a hybrid residual control framework combining Deep Deterministic Policy Gradient (DDPG) and a Proportional–Integral–Derivative (PID) based correction module for active suspension (AS) systems, aiming to improve ride performance and generalization under complex road excitations. The DDPG controller is trained on sinewave inputs, while the PID module compensates for residual errors to enhance robustness. To further guide policy optimization, an error-aware reward rescaling strategy is introduced during training, adaptively shaping the reward signal based on acceleration deviation. The controller is tested under five typical road conditions, including sinewave inputs, step inputs, and ISO 8608 Level B random profiles. Simulation results show that the residual DDPG (RDDPG) controller outperforms both the standalone DDPG and the PID controller, reducing vertical acceleration RMS by 50.35% under a 0.05 m sinewave input. This shows that combining reinforcement learning (RL) with residual correction and reward rescaling is an effective and stable way to control AS under varying driving conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_23-Residual_DDPG_Control_with_Error_Aware_Reward_Rescaling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Machine Learning and Deep Learning Models for Handwritten Digit Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160822</link>
        <id>10.14569/IJACSA.2025.0160822</id>
        <doi>10.14569/IJACSA.2025.0160822</doi>
        <lastModDate>2025-08-29T12:47:58.8000000+00:00</lastModDate>
        
        <creator>Soukaina Chekki</creator>
        
        <creator>Boutaina Hdioud</creator>
        
        <creator>Rachid Oulad Haj Thami</creator>
        
        <creator>Sanaa El Fkihi</creator>
        
        <subject>Handwritten digit recognition (HDR); Optical Character Recognition (OCR); Machine Learning (ML); Deep Learning (DL); segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Handwritten digit recognition (HDR) forms a key component of computer vision systems, especially in optical character recognition (OCR). This study presents a comparative analysis of Machine Learning (ML) algorithms and Deep Learning (DL) models for HDR tasks. A contour-based segmentation technique was applied in preprocessing to enhance feature extraction by detecting digit boundaries and reducing noise. ML models, including K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), and DL architectures, such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), were evaluated on the Modified National Institute of Standards and Technology (MNIST) and the National Institute of Standards and Technology (NIST) datasets. The results demonstrate that DL models significantly outperform ML algorithms in terms of accuracy and robustness, while the KNN model achieved acceptable results. The results underline the importance of contour-based preprocessing in boosting deep learning techniques for HDR.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_22-Comparative_Analysis_of_Machine_Learning_and_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proactive Cancer Prediction Using IoT and Deep Learning Before Symptoms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160821</link>
        <id>10.14569/IJACSA.2025.0160821</id>
        <doi>10.14569/IJACSA.2025.0160821</doi>
        <lastModDate>2025-08-29T12:47:58.7530000+00:00</lastModDate>
        
        <creator>Mohamed Amine Meddaoui</creator>
        
        <creator>Imane Karkaba</creator>
        
        <creator>Moulay Amzil</creator>
        
        <creator>Mohammed Erritali</creator>
        
        <subject>Deep learning; internet of things; artificial intelligence; convolutional neural network; recurrent neural network; long short-term memory; autoencoders; cancer prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The ability to predict cancer before the onset of clinical symptoms represents a paradigm shift in oncology and preventive medicine. Existing diagnostic approaches remain reactive, relying on imaging or symptomatic manifestations that frequently detect the disease only at advanced stages, particularly in pancreatic, lung, and ovarian cancers. To address this gap, we propose a novel methodology that integrates the Internet of Things (IoT), Artificial Intelligence (AI), and Deep Learning for proactive cancer prediction. Continuous high-resolution physiological, behavioral, and environmental data are collected through IoT-enabled wearable and implantable devices and analyzed using a hybrid architecture that combines Autoencoders, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), with a specific focus on Long Short-Term Memory (LSTM) models. Unlike previous work, which primarily targeted general IoT-based monitoring or symptom-driven detection, this study explicitly demonstrates how the fusion of multidimensional IoT data and advanced deep learning enables the identification of micro-level deviations from an individual’s baseline as early biomarkers of cancer risk. Experiments conducted on synthetic datasets simulating pancreatic, lung, and ovarian cancer progression show that the proposed framework achieves an accuracy of 89%, a sensitivity of 85%, a specificity of 91%, and an AUC of 0.93, with an average early detection lead time of 7.5 months. These findings highlight the rigor and originality of the proposed approach, which advances the field by offering a validated, proactive methodology for cancer prediction and establishing clear differences from prior studies by the authors that focused on narrower IoT applications. This work paves the way for predictive and preventive oncology, where intervention can occur long before clinical manifestation of the disease.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_21-Proactive_Cancer_Prediction_Using_IoT_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Navigating the Landscape of Automated Information Extraction for Financial Fund Prospectuses: Survey and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160820</link>
        <id>10.14569/IJACSA.2025.0160820</id>
        <doi>10.14569/IJACSA.2025.0160820</doi>
        <lastModDate>2025-08-29T12:47:58.7370000+00:00</lastModDate>
        
        <creator>Yuyao Xu</creator>
        
        <creator>Mohamad Farhan Mohamad Mohsin</creator>
        
        <subject>Machine learning; information automation; financial documentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>In the financial sector, a fund prospectus is a critical document mandated by the Securities and Exchange Commission (SEC) that provides vital information about investments to the public. These documents encompass a range of financial concepts that define the fund&#39;s operations, including its name and disclaimers associated with periodic reports. Traditionally, the identification of these concepts has been a manual, labour-intensive, and costly task for financial regulators, aimed at ensuring the completeness of information. Automating this process is fraught with challenges, including the lengthy nature of prospectuses, the nuances of financial language, and the scarcity of labelled data for effective model training. This study explores state-of-the-art methods for information extraction, specifically within the context of financial documents. It begins with an overview of information extraction, detailing its definition and various types, such as Named Entity Recognition (NER) and event extraction. The discussion highlights the increasing significance of information extraction in the financial domain and reviews typical application areas. Ultimately, this research seeks to highlight the challenges within existing methods through a comprehensive literature review, emphasizing the need for more effective techniques tailored to the extraction of financial concepts in fund prospectuses. By enhancing and streamlining the extraction process, it aspires to improve efficiency and reduce costs for financial regulators, thereby ensuring more accurate and comprehensive information dissemination.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_20-Navigating_the_Landscape_of_Automated_Information_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Augmentation-Based System for Diagnosing COVID-19 Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160819</link>
        <id>10.14569/IJACSA.2025.0160819</id>
        <doi>10.14569/IJACSA.2025.0160819</doi>
        <lastModDate>2025-08-29T12:47:58.7070000+00:00</lastModDate>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Mohammad A. Mezher</creator>
        
        <creator>Osamah A.M. Ghaleb</creator>
        
        <creator>Mohammad Al-Hjouj</creator>
        
        <creator>Raghad Sehly</creator>
        
        <creator>Samir Bataineh</creator>
        
        <subject>COVID-19; medical images; augmentation; vision transformer; training data ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Recently, due to the dangerous spread of COVID-19, there has been strong competition among computer science researchers within the scientific research community to employ deep learning for the development of intelligent medical systems that diagnose this illness. Enhancing accuracy is considered the most important objective, and augmentation techniques are used in this context. This study addresses two main issues related to applying augmentation on X-ray and CT-scan images: losing the positional information of augmented medical images and the integration of extracted features while scanning them. The use of the Vision Transformer Structure, supported by a Position-Aware Embedding (PAE) method, is proposed to deal with these issues. Moreover, in this study, a student–teacher-based approach was adopted to enable considerable resistance against training on a small batch of training images. Due to the sensitivity of medical data, preserving the privacy of patients was taken into account by using a pseudonym-based anonymity approach. After evaluations based on accuracy, precision, recall, and specificity metrics, the results showed that the proposed system has a high-level capability to predict class images (X-ray or CT-scan) as well as considerable resistance against training on small medical images.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_19-An_Augmentation_Based_System_for_Diagnosing_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing ReAdaBalancer for Load Balancing Optimization in Networked Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160818</link>
        <id>10.14569/IJACSA.2025.0160818</id>
        <doi>10.14569/IJACSA.2025.0160818</doi>
        <lastModDate>2025-08-29T12:47:58.6600000+00:00</lastModDate>
        
        <creator>M Diarmansyah Batubara</creator>
        
        <creator>Poltak Sihombing</creator>
        
        <creator>Syahril Efendi</creator>
        
        <creator>Suherman</creator>
        
        <subject>Cloud computing; heuristic optimization; adaptive load balancing; scalability; ReAdaBalancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Traditional load balancing systems frequently have trouble adjusting to abrupt and unexpected changes in traffic. This can cause problems like server overload, longer response times, and more requests being denied. This problem is especially acute in areas like healthcare, finance, cloud computing, and e-commerce, where performance, stability, and fast data delivery are critical. To solve this problem, this study presents ReAdaBalancer, an adaptive load balancing architecture that aims to improve system performance, scalability, stability, and efficiency in contexts with changing traffic. Flask serves as the backend framework for ReAdaBalancer, while Nginx serves as the load balancer. Real-time monitoring and analytics are used to improve traffic distribution based on the resources that are currently available. Leveraging queuing theory (M/M/s/K Network), the system’s performance is tested under diverse load situations, providing insights into its scalability and efficiency. ReAdaBalancer can also learn and adapt continuously through machine learning and heuristic optimization, ensuring consistent behavior even when demand changes. Experimental results demonstrate that, under equivalent settings, ReAdaBalancer decreases response times by over 67% and reduces request denial rates by over 50% in comparison to traditional methods. This work has multiple opportunities for subsequent investigation. Future improvements could involve making ReAdaBalancer work in distributed multi-data center environments, adding reinforcement learning to make decisions more independently, looking into load balancing strategies that use less energy, and making it work in edge computing and IoT ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_18-Developing_ReAdaBalancer_for_Load_Balancing_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Transformer Residual Model for Pneumonia Detection and Lesion Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160817</link>
        <id>10.14569/IJACSA.2025.0160817</id>
        <doi>10.14569/IJACSA.2025.0160817</doi>
        <lastModDate>2025-08-29T12:47:58.6270000+00:00</lastModDate>
        
        <creator>Anupama Prasanth</creator>
        
        <subject>Pneumonia detection; lesion segmentation; chest X-ray; ResNet50; swin transformer; global average pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Pneumonia, a potentially fatal infection and a common cause of disease among children and the elderly, still remains a prevalent threat even after years of research on tackling it. Rapid and proper identification is crucial for timely treatment and improved results. While thoracic radiographs are widely employed in pneumonia diagnosis, real-world clinical assessment is frequently complicated by factors such as subtle radiographic patterns, overlapping symptoms, subjective manual judgement and dependency on expert radiologists. The study proposes a hybrid deep learning model integrating ResNet50 and the Swin Transformer, coupled with an auxiliary segmentation decoder to facilitate both classification and lesion localization in chest X-ray images. ResNet50 acts as the backbone for hierarchical spatial feature extraction, capturing fine-grained local textures indicative of pulmonary abnormalities, and the Swin Transformer serves as the global attention-driven feature aggregator. The shifted window mechanism of the Swin Transformer maintains spatial hierarchy while facilitating effective contextual modelling. Global Average Pooling (GAP) and Multilayer Perceptron (MLP) form the classification head, yielding accurate predictions in classifying the images, while the segmentation decoder utilizes multiscale features to generate pixel-wise masks for pneumonia lesion regions. The model outperformed conventional methods with 98.4% classification accuracy, 98.2% precision, 99.2% recall and an F1-score of 98.7% with a 0.88 Dice Coefficient in segmentation. These results reflect the hybrid architecture’s superior performance and its dual capacity for diagnostic prediction and lesion interpretability. The proposed model demonstrates promising results for deployment in real-world clinical workflows, especially in resource-constrained or high-patient-load environments.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_17-Hierarchical_Transformer_Residual_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AraSpam: A Multitask Deep Neural Network for Spam Detection in Arabic Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160816</link>
        <id>10.14569/IJACSA.2025.0160816</id>
        <doi>10.14569/IJACSA.2025.0160816</doi>
        <lastModDate>2025-08-29T12:47:58.6130000+00:00</lastModDate>
        
        <creator>Lulua Alhamdan</creator>
        
        <creator>Ahmed Alsanad</creator>
        
        <creator>Nora Al-Twairesh</creator>
        
        <subject>Spam detection; Twitter; multitask deep neural network; transformer-based model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Twitter has become widely used for disseminating information across the Arab world. It provides diverse communicative and informational needs while serving as a rich data source for a wide range of research. However, the integrity of such data is frequently undermined by the pervasive issue of spam. Existing research proposed the use of spam detection models at multiple levels—the account, tweet, and campaign levels. Many of these models target Uniform Resource Locator (URL)-based spam messages, whereas a significant portion of spam content operates without embedded URLs. Furthermore, spam detection methodologies tailored to the account level often lack the precision required for tweet-level analysis or, conversely, fail to capture broader account-level behavioral patterns. Moreover, studies focusing on Arabic spam have largely been restricted to specific geographical regions or linguistic varieties, such as Arabic dialect (AD) or Modern Standard Arabic (MSA), thereby neglecting the full spectrum of Arabic’s linguistic diversity in spam messages. This study aims to address these limitations by proposing AraSpam, a multitask deep neural network that detects both spam messages and profiles using a single model. It was trained using a dataset of tweets written in AD and MSA covering different spamming targets. The text features were extracted using transformer-based models: AraBERT for tweet text and mBERT for profile screen name. The experiment demonstrated 96% accuracy in detecting both spam accounts and tweets with seven different spamming targets. Additionally, the experiments revealed that reducing the number of spam classes resulted in an increase in tweet detection performance and a decrease at the account level.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_16-AraSpam_A_Multitask_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Trust Management in Fog Computing: A Comprehensive Review and Future Challenges in Task Offloading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160815</link>
        <id>10.14569/IJACSA.2025.0160815</id>
        <doi>10.14569/IJACSA.2025.0160815</doi>
        <lastModDate>2025-08-29T12:47:58.5800000+00:00</lastModDate>
        
        <creator>Liu Feng</creator>
        
        <creator>Suhaidi Hassan</creator>
        
        <creator>Mohammed Alsamman</creator>
        
        <subject>Trust management; fog computing; cloud computing; task offloading; heterogeneous networks; trust evaluation; task completion time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>With the proliferation of data-driven services and latency-sensitive applications, fog computing has emerged as a pivotal extension of cloud infrastructure, enabling data processing and resource allocation at the network edge. However, the trustworthiness of task offloading in such decentralized and heterogeneous environments remains insufficiently explored, posing significant concerns related to system reliability, security, and performance. This review aims to address this gap by providing a comprehensive and systematic analysis of current research on trust-based task offloading in fog computing. The study investigates various trust evaluation mechanisms, categorizing them into three major paradigms: Direct Trust-based, Recommended Trust-based, and Comprehensive Trust. Through this classification, the study identifies and examines key trust-related metrics that influence offloading decisions, including task execution accuracy, trust evaluation accuracy, and evaluation latency. A critical assessment of the strengths and limitations of existing approaches reveals ongoing challenges such as dynamic trust management, scalability in large-scale networks, interoperability among diverse nodes, and resilience against malicious behaviours. Based on these insights, the study highlights pressing research opportunities and recommends the development of lightweight, adaptive, and context-aware trust frameworks capable of supporting real-time decision-making in dynamic fog environments. By synthesizing fragmented research and offering a forward-looking perspective, this review contributes a foundational reference for scholars and practitioners seeking to enhance the reliability and security of task offloading in fog computing, thereby supporting the evolution of more robust and efficient edge-based computing infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_15-Exploring_Trust_Management_in_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>nodeWSNsec: A Hybrid Metaheuristic Approach for Reliable Security and Node Deployment in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160814</link>
        <id>10.14569/IJACSA.2025.0160814</id>
        <doi>10.14569/IJACSA.2025.0160814</doi>
        <lastModDate>2025-08-29T12:47:58.5330000+00:00</lastModDate>
        
        <creator>Rahul Mishra</creator>
        
        <creator>Sudhanshu Kumar Jha</creator>
        
        <creator>Naresh Kshetri</creator>
        
        <creator>Bishnu Bhusal</creator>
        
        <creator>Mir Mehedi Rahman</creator>
        
        <creator>Md Masud Rana</creator>
        
        <creator>Aimina Ali Eli</creator>
        
        <creator>Khaled Aminul Islam</creator>
        
        <creator>Bishwo Prakash Pokharel</creator>
        
        <subject>Node deployment; wireless sensor networks; genetic algorithm; particle swarm optimization; competitive multi-objective marine predators algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Efficient and reliable node deployment in Wireless Sensor Networks is crucial for optimizing coverage of the area, connectivity among nodes, and energy efficiency. Random deployment of nodes may lead to coverage gaps, connectivity issues and reduced network lifetime. This study proposes a hybrid metaheuristic approach combining a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to address the challenges of energy-efficient and reliable node deployment. The GA-PSO hybrid leverages GA’s strong exploration capabilities and PSO’s rapid convergence, achieving an optimal balance between coverage and energy consumption. The performance of the proposed approach is evaluated against GA and PSO alone and the novel metaheuristic-based Competitive Multi-Objective Marine Predators Algorithm (CMOMPA) across varying sensing ranges. Simulation results demonstrate that GA-PSO requires 15 to 25% fewer sensor nodes and maintains 95% or more area coverage while maintaining connectivity in comparison to the standalone GA or PSO algorithm. The proposed algorithm also dominates CMOMPA when compared for long sensing and communication range in terms of higher coverage, improved connectivity, and reduced deployment time while requiring fewer sensor nodes. This study also explores key trade-offs in WSN deployment and highlights future research directions, including heterogeneous node deployment, mobile WSNs, and enhanced multi-objective optimization techniques. The findings underscore the effectiveness of hybrid metaheuristics in improving WSN performance, offering a promising approach for real-world applications such as environmental monitoring, smart cities, smart agriculture, disaster response, and IIoT.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_14-nodeWSNsec_A_Hybrid_Metaheuristic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Optometry: Potential Benefits and Key Challenges: A Narrative Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160813</link>
        <id>10.14569/IJACSA.2025.0160813</id>
        <doi>10.14569/IJACSA.2025.0160813</doi>
        <lastModDate>2025-08-29T12:47:58.5030000+00:00</lastModDate>
        
        <creator>Noura A. Aldossary</creator>
        
        <subject>Artificial intelligence; automated screening; tele-optometry; eye care services; patient autonomy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The integration of Artificial Intelligence (AI) into healthcare is transforming many medical fields, including optometry. This study provides a narrative review of the current applications and future potential of AI in optometric practice, emphasizing its role in automated screening and diagnosis, personalized treatment planning, and enhanced accessibility through tele-optometry. Alongside these opportunities, this study examines the technical, socioeconomic, ethical, legal, and professional challenges that limit the effective integration of AI in optometry practice. Focus is placed on concerns surrounding data privacy, patient autonomy, regulatory disparities, and practitioner resistance to adoption. Furthermore, this review highlights key research gaps, including the need for diverse training datasets, large-scale validation trials, and collaborative training between clinicians and AI developers. By resolving these challenges, AI has the potential to improve diagnostic accuracy, expand access to care, and enhance the quality of eye care services. By integrating the available evidence, this narrative review provides clinicians, policymakers, and researchers with a comprehensive overview of the benefits, challenges, and future directions of AI in optometry.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_13-Artificial_Intelligence_in_Optometry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Marketing Digital Control System Based on the Integration of Big Data Analysis and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160812</link>
        <id>10.14569/IJACSA.2025.0160812</id>
        <doi>10.14569/IJACSA.2025.0160812</doi>
        <lastModDate>2025-08-29T12:47:58.4730000+00:00</lastModDate>
        
        <creator>Qiming Li</creator>
        
        <creator>Songling Du</creator>
        
        <creator>Shiyuan Zhang</creator>
        
        <subject>Big data; flink; reinforcement learning; rete network; interest level; term frequency-inverse document frequency; rule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The advent of the digital age has made it difficult for traditional marketing methods to meet the rapidly changing needs of the market. To improve the efficiency and effectiveness of enterprise marketing activities, it is particularly important to develop intelligent and precise marketing management systems. Therefore, the study proposes a marketing digital control system based on the integration of big data analysis and machine learning, which utilizes the Apache Flink distributed big data processing framework to design a marketing control system that includes system content and functional requirements. At the same time, machine learning design ideas are introduced into marketing recommendation algorithms, using reinforcement learning to enrich the business logic of the Rete network, dynamically generate and update rules, and calculate user interest while ensuring the fit between marketing recommendation content and user interest information. The study constructed a self-designed dataset (consisting of over 30000 pieces of data) through data simulation and crawling, and compared the research method with other machine learning algorithms on the same dataset. The results show that the maximum matching accuracy of the improved recommendation algorithm reaches 90.12%, and the prediction accuracy of user consumption behavior exceeds 88%, which is better than other comparative algorithms. The mean absolute error value on the product is less than 0.10, and the F1 value is greater than 0.65, indicating significant recommendation effectiveness. The research-designed marketing digital control system effectively integrates big data analysis and machine learning technology, providing support for the digital transformation of enterprises and the intelligent upgrading of related fields.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_12-Design_of_Marketing_Digital_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method for Real-Time Fall Detection Based on MediaPipe Pose Estimation and LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160811</link>
        <id>10.14569/IJACSA.2025.0160811</id>
        <doi>10.14569/IJACSA.2025.0160811</doi>
        <lastModDate>2025-08-29T12:47:58.4270000+00:00</lastModDate>
        
        <creator>Puwadol Sirikongtham</creator>
        
        <creator>Apichaya Nimkoompai</creator>
        
        <subject>Fall detection; older adults; real-time; MediaPipe; LSTM; computer vision; pose estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Falls are a significant health problem among older adults, leading to serious injuries and adversely affecting both quality of life and public health burdens. Although various fall detection systems have been developed using technologies such as wearable sensors and image processing (computer vision), limitations remain in dimensions of convenience, accuracy, and real-time responsiveness. To overcome these limitations, this research aimed to present a real-time fall detection system that integrates MediaPipe pose estimation technology with a Long Short-Term Memory (LSTM) neural network. The proposed method functioned through two main components. MediaPipe pose estimation technology was applied to detect and track keypoints on the human body from real-time video input; meanwhile, a trained LSTM model was utilized to analyze the sequence of movements of the detected keypoints for classifying and differentiating between fall behaviors and normal activities. The system was trained, and its performance was evaluated using the standard UR Fall Detection Dataset. From experimental results, the proposed system achieved high efficiency in fall detection, with an accuracy of 95.2% on the test dataset. The integrated system demonstrated the capability to detect all actual fall events (with a recall of 100%), and its false positive rate was low. Compared to other research, the proposed method provided higher accuracy. These results indicated that the proposed system has the potential for practical application as an effective tool for real-time fall alerts, enabling timely assistance for those injured from falls.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_11-A_New_Method_for_Real_Time_Fall_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Future Research Agenda for Health Applications Adoption: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160810</link>
        <id>10.14569/IJACSA.2025.0160810</id>
        <doi>10.14569/IJACSA.2025.0160810</doi>
        <lastModDate>2025-08-29T12:47:58.3770000+00:00</lastModDate>
        
        <creator>Rahmat Fauzi</creator>
        
        <creator>Adhistya Erna Permanasari</creator>
        
        <creator>Silmi Fauziati</creator>
        
        <subject>Systematic literature review; health application adoption; health technology; user behavior; technology acceptance; TAM; UTAUT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The healthcare sector is experiencing rapid digital transformation, marked by the growing popularity of mobile health (mHealth) and eHealth applications for various health-related purposes. However, despite their potential, the adoption of health applications remains inconsistent due to varying influencing factors. Previous reviews often focused on specific populations or limited frameworks, leaving a gap for a comprehensive synthesis. This study aims to systematically review and consolidate the current understanding of the factors affecting user adoption behavior in health applications. Following the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines, a comprehensive literature search was conducted to identify relevant studies between 2016 and 2025. A total of 79 primary studies were analyzed to explore the theoretical model, variables, and emerging trends in health applications. The Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT) models are the most widely used models by researchers. Beyond these core frameworks, researchers have proposed extended constructs such as psychological factors, health literacy, regulator readiness, security concerns, and infrastructure limitations. This review highlights the need for more inclusive, cross-cultural, and mixed-method research, particularly focusing on underrepresented populations such as rural users, the elderly, and low-literacy groups. These findings offer valuable insight to inform the design of future models and support the development of more effective, context-aware, and user-centered health technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_10-Exploring_the_Future_Research_Agenda.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EcoRouting: Carbon-Aware Path Optimization in Green Internet Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160809</link>
        <id>10.14569/IJACSA.2025.0160809</id>
        <doi>10.14569/IJACSA.2025.0160809</doi>
        <lastModDate>2025-08-29T12:47:58.3470000+00:00</lastModDate>
        
        <creator>Handrizal</creator>
        
        <creator>Herriyance</creator>
        
        <creator>Amer Sharif</creator>
        
        <subject>EcoRouting; carbon-aware routing; internet; green internet; QoS (Quality of Service)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>The exponential growth of Internet traffic has raised increasing concerns over the environmental sustainability of network infrastructures, particularly regarding energy consumption and carbon emissions. While traditional routing algorithms prioritize performance metrics such as speed, reliability, and QoS (Quality of Service), they often overlook the environmental cost associated with data transmission. This study presents EcoRouting, a carbon-aware routing algorithm designed for the Green Internet that integrates emission intensity into the graph-based path optimization process. Implemented in a simulated network environment using Python and NetworkX, EcoRouting leverages real-world carbon intensity data from ElectricityMap to evaluate route selection based on both carbon emissions and latency. Across four experimental scenarios, including static and time-varying emissions, QoS comparison, and multi-city topologies, EcoRouting consistently demonstrated carbon savings of up to 47.1%, with acceptable latency tradeoffs ranging from 2.61% to 95.2% depending on network conditions. The results confirm that EcoRouting provides a viable, scalable, and environmentally conscious approach for reducing the carbon footprint of Internet routing while maintaining QoS.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_9-EcoRouting_Carbon_Aware_Path_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Driving in Adverse Weather: A Multi-Modal Fusion Framework with Uncertainty-Aware Learning for Robust Obstacle Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160808</link>
        <id>10.14569/IJACSA.2025.0160808</id>
        <doi>10.14569/IJACSA.2025.0160808</doi>
        <lastModDate>2025-08-29T12:47:58.3170000+00:00</lastModDate>
        
        <creator>Zhengqing Li</creator>
        
        <creator>Baljit Singh Bhathal Singh</creator>
        
        <subject>Autonomous driving; adverse weather; multimodal sensor fusion; Bayesian neural networks; uncertainty estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Robust obstacle detection in autonomous driving under adverse weather remains a critical challenge due to sensor degradation, visibility reduction, and increased uncertainty. This study proposes an Uncertainty-Aware Multi-Modal Fusion (UAMF) framework that integrates LiDAR, RGB images, and weather priors through a dynamic cross-modal attention mechanism and Bayesian uncertainty modeling. The model adaptively adjusts the fusion weights between sensor modalities according to real-time weather conditions and jointly optimizes detection loss with a KL divergence regularization to quantify predictive uncertainty. Experimental results on the nuScenes, KITTI-Adverse, and CARLA datasets demonstrate that UAMF achieves superior performance across rain, snow, and fog scenarios, with mAP@0.5 reaching 0.78, 0.72, and 0.65, respectively—representing 12–31% gains over existing baselines. Notably, UAMF reduces false positive rates by up to 40% in low-visibility conditions and exhibits a strong correlation (ρ = 0.85) between estimated uncertainty and localization error. Ablation studies confirm the importance of the weather-aware fusion and uncertainty modules, while visibility-level analysis shows improved robustness under &lt;30 m scenarios. The proposed framework offers reliable uncertainty signals for downstream decision-making and is deployable in real-time on embedded platforms. Future work will explore unsupervised weather parameter estimation, uncertainty-aware trajectory forecasting, and cross-domain generalization.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_8-Autonomous_Driving_in_Adverse_Weather.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Privacy-Preserving Gaussian Process Regression Framework Against Membership Inference Attacks Using Random Unitary Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160807</link>
        <id>10.14569/IJACSA.2025.0160807</id>
        <doi>10.14569/IJACSA.2025.0160807</doi>
        <lastModDate>2025-08-29T12:47:58.2830000+00:00</lastModDate>
        
        <creator>Md. Rashedul Islam</creator>
        
        <creator>Jannatul Ferdous Akhi</creator>
        
        <creator>Takayuki Nakachi</creator>
        
        <subject>Gaussian process; differential privacy; random unitary transformation; membership inference attack; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>As artificial intelligence (AI) systems become increasingly embedded in sensitive domains such as healthcare and finance, they face heightened vulnerabilities to privacy threats. A prominent type of attack against AI is the membership inference attack (MIA), which aims to determine whether specific data instances were used in a model’s training set, thereby posing a serious risk of sensitive information disclosure. This study focuses on Gaussian Process (GP) models, which are widely adopted for their probabilistic interpretability and ability to quantify predictive uncertainty, and examines their susceptibility to MIAs. To mitigate this threat, a novel defense mechanism based on Random Unitary Transformation (RUT) is introduced, which encrypts training and testing inputs using orthonormal matrices. Unlike Differential Privacy-based Gaussian Processes (DP-GPR), which rely on noise injection and often degrade model performance, the proposed method preserves both the structural integrity and predictive fidelity of the GP model without injecting noise into the learning process. Two configurations are evaluated: i) encryption applied to both training and test data, and ii) encryption applied only to training data. Experimental results on a medical dataset demonstrate that the framework significantly reduces the effectiveness of MIAs while maintaining high predictive accuracy. Comparative analysis with DP-GPR models further confirms that the proposed method achieves competitive or stronger privacy protection with less impact on model utility. These findings underscore the potential of structure-preserving transformations as a practical and effective alternative to noise-based privacy mechanisms in GP models, particularly in privacy-critical machine learning applications.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_7-A_Privacy_Preserving_Gaussian_Process_Regression_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Fine-Tuned GPT with Agent-Based Economic Modeling for Transparent Wage Policy Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160806</link>
        <id>10.14569/IJACSA.2025.0160806</id>
        <doi>10.14569/IJACSA.2025.0160806</doi>
        <lastModDate>2025-08-29T12:47:58.2370000+00:00</lastModDate>
        
        <creator>Daniel A. Ariaso Sr.</creator>
        
        <creator>Ken D. Gorro</creator>
        
        <creator>Deofel Balijon</creator>
        
        <creator>Meshel Balijon</creator>
        
        <subject>Agent-Based simulation; reinforcement learning; fuzzy AHP; GPT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study presents a decision-support system powered by GPT-enhanced insights to help policymakers explore the economic effects of minimum wage policies in the Philippines. The system integrates agent-based simulation, fuzzy logic, reinforcement learning, and Fuzzy Analytic Hierarchy Process (Fuzzy AHP) to model the complex relationships between wages, inflation, firm behavior, and employment. At its core is a fine-tuned GPT model trained on synthetic simulation outputs, capable of generating human-readable interpretations that explain dynamic trends, trade-offs, and fuzzy economic behaviors that are often difficult to decipher from numbers alone. Two policy scenarios were simulated over 100 months: increasing the minimum wage from ₱500 to ₱600, and from ₱500 to ₱700. While the ₱700 scenario led to short-term boosts in productivity and real wages, it also triggered early inflation, unstable profits, and reduced employment. In contrast, the ₱600 scenario produced more stable results, balancing moderate wage growth with firm sustainability and lower inflationary pressure. Fuzzy AHP was used to evaluate each scenario across four key criteria—real wages, firm profitability, employment, and inflation—favoring ₱600 as the more sustainable policy path. What sets this study apart is the integration of GPT-generated policy narratives that accompany each simulation run. These insights help translate fuzzy, nonlinear model behaviors into clear, accessible language—supporting more inclusive, transparent, and evidence-based wage policy decisions. By combining simulation and generative AI, the framework offers not just predictions, but practical understanding of how economic systems respond to complex changes.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_6-Integrating_Fine_Tuned_GPT_with_Agent_Based_Economic_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach to Automatic Timetabling Using Self-Organizing Maps, Secure Convex Dominating Sets, and Metaheuristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160805</link>
        <id>10.14569/IJACSA.2025.0160805</id>
        <doi>10.14569/IJACSA.2025.0160805</doi>
        <lastModDate>2025-08-29T12:47:58.1900000+00:00</lastModDate>
        
        <creator>Elmo Ranolo</creator>
        
        <creator>Ken Gorro</creator>
        
        <creator>Pierre Anthony Gwen Abella</creator>
        
        <creator>Lawrence Roble</creator>
        
        <creator>Rue Nicole Santillan</creator>
        
        <creator>Anthony Ilano</creator>
        
        <creator>Benjie Ociones</creator>
        
        <creator>Roel Vasquez</creator>
        
        <creator>Deofel Balijon</creator>
        
        <creator>Daniel Ariaso Sr.</creator>
        
        <creator>Rose Ann Campita</creator>
        
        <creator>Robert Jay Angco</creator>
        
        <subject>Timetable optimization; Self-Organizing Maps (SOM); Secure Convex Dominating Set (SCDS); Genetic Algorithm (GA); Academic Scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Creating conflict-free academic timetables that respect teacher availability, subject eligibility, and limited resources remains a persistent challenge in educational institutions. This study introduces a novel hybrid algorithm that combines Self-Organizing Maps (SOM), Secure Convex Dominating Sets (SCDS), and Genetic Algorithms (GA) to address this problem effectively. SOM is employed to cluster subjects based on teaching duration and eligibility, providing structured guidance in initial scheduling. SCDS identifies the most conflict-prone subjects—typically those with limited eligible teachers—and ensures they are prioritized, thereby reducing downstream bottlenecks. GA then iteratively refines the schedule by evaluating room assignments, teacher loads, and constraint satisfaction. Extensive simulation experiments were conducted under varying conditions, including worst-case scenarios with dense scheduling conflicts. The system achieved high success rates, particularly in moderate to complex settings, and demonstrated robustness even in constrained environments. Notably, SOM improved spatial and temporal coherence, while SCDS enhanced conflict resolution and GA enabled adaptive optimization. Runtime and convergence results remained within practical limits, with a time complexity of O(n² + gpn). The proposed hybrid framework balances structural prioritization and evolutionary refinement, offering a scalable and intelligent solution to the timetabling problem. It stands out by gracefully handling worst-case scenarios where traditional heuristics often fail.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_5-A_Hybrid_Approach_to_Automatic_Timetabling_Using_Self_Organizing_Maps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Cyber Security Through Predictive Analytics: Real-Time Threat Detection and Response</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160804</link>
        <id>10.14569/IJACSA.2025.0160804</id>
        <doi>10.14569/IJACSA.2025.0160804</doi>
        <lastModDate>2025-08-29T12:47:58.1430000+00:00</lastModDate>
        
        <creator>Muhammad Danish</creator>
        
        <subject>Predictive analytics; real-time cyber-attack detection; statistical methods; machine learning; threat detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study evaluates the application of predictive analytics for real-time cyber-attack detection and response, focusing on how statistical and machine learning methods can improve decision-making in Security Operations Centers (SOCs). Using a curated network-traffic dataset of 2,000 records, we analyzed key features such as attack type, packet length, anomaly scores, protocol usage, and geo-location patterns to assess their predictive value. Findings indicate that attack type has a measurable influence on response actions, while basic header metrics alone lack the precision needed for accurate classification. These results highlight the importance of incorporating richer contextual features—such as user behavior, asset criticality, and temporal patterns—into predictive models. By integrating such features into operational pipelines, organizations can improve early threat detection, reduce false positives, and optimize resource allocation. This research contributes actionable insights for advancing proactive, data-driven cyber defense strategies and outlines directions for future implementation in live SOC environments.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_4-Enhancing_Cyber_Security_Through_Predictive_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Approximate Conformance Checking Accuracy with Hierarchical Clustering Model Behaviour Sampling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160803</link>
        <id>10.14569/IJACSA.2025.0160803</id>
        <doi>10.14569/IJACSA.2025.0160803</doi>
        <lastModDate>2025-08-29T12:47:58.1130000+00:00</lastModDate>
        
        <creator>Yilin Lyu</creator>
        
        <subject>Approximate conformance checking; model behaviour sampling; hierarchical clustering; process mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>Conformance checking techniques evaluate how well a process model aligns with an actual event log. Existing methods, which are based on optimal trace alignment, are computationally intensive. To improve efficiency, a model sampling method has been proposed to construct a subset of model behaviour that represents the entire model. However, current model sampling techniques often lack sufficient model representativeness, limiting their potential to achieve optimal approximation accuracy. This study proposes new model behaviour sampling approaches using hierarchical clustering to compute an approximation closer to the exact result. This study also refines the existing upper bound algorithm for better approximation. Our experiments on six real-world event logs demonstrate that our method improves approximation accuracy compared to state-of-the-art model sampling methods.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_3-Enhancing_Approximate_Conformance_Checking_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Teachers’ Gestural Culture Influences Japanese Students’ Emotions: A Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160802</link>
        <id>10.14569/IJACSA.2025.0160802</id>
        <doi>10.14569/IJACSA.2025.0160802</doi>
        <lastModDate>2025-08-29T12:47:58.0800000+00:00</lastModDate>
        
        <creator>Yuka Nishi</creator>
        
        <creator>Olivia Kennedy</creator>
        
        <creator>Choi Dongeun</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Nonverbal behavior; affective computing; AI-based gesture recognition; cultural differences; multimodal analysis; cross-cultural education; emotion prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study analyzes differences in teachers’ gestural styles based on their culture and investigates how these differences are perceived to influence Japanese students’ emotional responses by active observers. Classroom videos of Japanese- and English-native instructors were analyzed using MediaPipe for gesture tracking and DeepFace for facial emotion recognition. Ground-truth emotion labels were collected from four Japanese observers. Results show that Japanese and non-Japanese teachers’ gesture dynamics differ in terms of range, rhythm, and symmetry. Japanese student observers perceived each group’s gestures differently, with cultural familiarity playing a role in their shifts in emotion. Machine learning models trained on gesture features, facial emotion scores, and teacher background successfully predicted students’ affective reactions. These findings highlight the importance of culturally sensitive nonverbal communication in education and demonstrate the potential of AI-based approaches for modeling student emotion in cross-cultural contexts. This study contributes a novel multimodal framework that integrates gesture dynamics, facial emotion recognition, and teacher cultural background to predict student affect, thereby highlighting the necessity of culturally adaptive affective computing in education.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_2-How_Teachers_Gestural_Culture_Influences_Japanese_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Web Apps for Users with Special Needs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160801</link>
        <id>10.14569/IJACSA.2025.0160801</id>
        <doi>10.14569/IJACSA.2025.0160801</doi>
        <lastModDate>2025-08-29T12:47:58.0030000+00:00</lastModDate>
        
        <creator>Silviya Varbanova</creator>
        
        <creator>Milena Stefanova</creator>
        
        <creator>Tihomir Stefanov</creator>
        
        <subject>Braille; sign language; deafness; visually impaired; software prototypes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(8), 2025</description>
        <description>This study presents software solutions and prototypes of converters designed to assist individuals with visual impairments or deafness. The prototypes were developed in a laboratory environment using modern programming technologies, with their applicability focused on contexts such as education, employment, and online shopping. The converter prototypes are designed to transform textual information into Braille or sign language, depending on users’ needs. Other solutions facilitate auditory interpretation when working with assessment materials, thereby helping visually impaired users to access information more easily. The research methodology combines synthesis and analysis of existing information from related research works. The choice of programming technologies was carefully considered to ensure the implementation of more accessible functionalities in the developed applications. In the teaching process, the authors of the study motivate their students to develop programming and analytical skills. The results achieved are based on student project work in the design and implementation of software prototypes to assist users with hearing or visual impairments. The prototypes created are aligned with scenarios for use in education and everyday activities, which expands the practical relevance of the study. Importantly, the presented applications also benefit people without disabilities by promoting more effective communication with, and understanding of, visually or hearing impaired individuals.</description>
        <description>http://thesai.org/Downloads/Volume16No8/Paper_1-Development_of_Web_Apps_for_Users_with_Special_Needs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Dendritic Cell Algorithm by Integration with Multi-Layer Perceptron for Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160798</link>
        <id>10.14569/IJACSA.2025.0160798</id>
        <doi>10.14569/IJACSA.2025.0160798</doi>
        <lastModDate>2025-07-30T12:59:47.0770000+00:00</lastModDate>
        
        <creator>Yousra Abudaqqa</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <subject>Dendritic Cell Algorithm (DCA); anomaly threshold; Multi-Layer Perceptron (MLP); anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Anomaly detection is crucial in a variety of domains, and the Dendritic Cell Algorithm (DCA) is one of the most widely used artificial immune systems (AIS), originally introduced for binary classification of data. Both traditional and recent approaches to classification in the DCA have relied primarily on threshold-based methods. Such approaches are limited in important ways, including inflexibility, the need for manual tuning, and a lack of context awareness. Recent improvements in the literature have provided adaptive dynamic threshold mechanisms that allow the system to adjust threshold sensitivity using statistics of real-time observations. Although this is progress, the proposed systems remain rule-based and have traditionally struggled with the complex, high-dimensional, and nonlinear nature of data common in most anomaly detection tasks today. In this study, we propose an improved DCA-MLP framework in which a Multi-Layer Perceptron (MLP) classifier replaces the thresholding phase. The MLP allows the DCA to learn adaptively from data context through a context-sensitive learning mechanism that can also track the data distribution as it evolves, eliminating the need for calibration against static or heuristic thresholds. The framework was tested thoroughly on fourteen benchmark datasets, and performance was evaluated against the standard DCA in terms of accuracy, sensitivity, and specificity. The results revealed considerable enhancements in DCA-MLP’s performance: 12%–50% improvements in accuracy (raising accuracy to 93%–99%), a 46% improvement in sensitivity (reaching 98%), and a 39% improvement in specificity. This shows that DCA-MLP is more adaptable, with greater learning capacity and robustness—a paradigm shift away from threshold-based systems toward an intelligent, self-adjusting anomaly detection classification scheme.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_98-Enhancing_Dendritic_Cell_Algorithm_by_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empowering Accessibility: IoT-Driven Smart Buildings for Elderly and Disabled Individuals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160797</link>
        <id>10.14569/IJACSA.2025.0160797</id>
        <doi>10.14569/IJACSA.2025.0160797</doi>
        <lastModDate>2025-07-30T12:59:47.0300000+00:00</lastModDate>
        
        <creator>Zeyad Alshboul</creator>
        
        <creator>Burhan Mahmoud Hamadneh</creator>
        
        <creator>Turki Mahdi Alqarni</creator>
        
        <creator>Bajes Zeyad Aljunaeidia</creator>
        
        <creator>Methaq Khadum</creator>
        
        <subject>IoT; smart home; sustainability; urban development; quality of life; assistive technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study aims to examine the attitudes of elderly and disabled individuals in Saudi Arabia toward Internet of Things (IoT)-enabled smart home technologies, with specific attention to the influence of demographic variables. The research employed a descriptive survey design, utilizing an online questionnaire distributed to a stratified random sample of 249 participants. Stratification ensured balanced representation across gender, age, educational attainment, employment status, and economic background. Statistical analyses, including Scheff&#233;’s post hoc test, revealed generally positive attitudes toward IoT adoption, primarily driven by perceived benefits related to enhanced quality of life, personal safety, and autonomy. Significant differences were identified across several demographic variables: married individuals, employed participants, those with higher education, higher-income groups, and individuals aged 30 to 45 all reported more favorable attitudes. Similarly, individuals with disabilities expressed stronger acceptance compared to their elderly counterparts. In contrast, gender differences were not statistically significant. These findings highlight the need for targeted, inclusive strategies that promote the adoption of IoT technologies across diverse social groups. The study contributes to a deeper understanding of how demographic characteristics shape technology acceptance and underscores the urgency of designing accessible, user-centered smart home systems. Recommendations emphasize public awareness initiatives, affordability measures, and inclusive design practices contributing to digital equity and aligning with the broader objectives of Saudi Vision 2030 for sustainable urban development.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_97-Empowering_Accessibility_IoT_Driven_Smart_Buildings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendation Engine for Amazon Magazine Subscriptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160796</link>
        <id>10.14569/IJACSA.2025.0160796</id>
        <doi>10.14569/IJACSA.2025.0160796</doi>
        <lastModDate>2025-07-30T12:59:47.0000000+00:00</lastModDate>
        
        <creator>Sushil Khairnar</creator>
        
        <creator>Deep Bodra</creator>
        
        <subject>Sentiment analysis; topic modeling; recommender system; link prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Recommender systems play a crucial role in enhancing user experience and engagement on e-commerce platforms by suggesting relevant products based on user behavior. In the context of Amazon’s extensive catalog of over 8,000 magazines spanning more than twenty-five categories, providing personalized magazine subscription recommendations poses a significant challenge. This study addresses the problem of identifying potential future associations between magazine reviewers and products using a graph-based approach. Specifically, we aim to predict unseen but likely links between users and magazines to improve recommendation quality. To achieve this, we construct an undirected bipartite network connecting reviewers and magazine products based on review data. We perform network analysis using measures such as centrality, modularity, and clustering, and apply sentiment analysis and topic modeling to extract behavioral and thematic insights from user reviews. These insights inform a series of link prediction techniques—including Common Neighbors, Adamic-Adar, Jaccard Coefficient, and Preferential Attachment—evaluated using cross-validation and ROC curves. Our results show that the Preferential Attachment model outperforms other approaches, attributed to the skewed degree distribution inherent in the dataset’s structure.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_96-Recommendation_Engine_for_Amazon_Magazine_Subscriptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating and Interpreting Pooling Techniques in Spectrogram-Based Audio Analysis Using Diverse Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160795</link>
        <id>10.14569/IJACSA.2025.0160795</id>
        <doi>10.14569/IJACSA.2025.0160795</doi>
        <lastModDate>2025-07-30T12:59:46.9830000+00:00</lastModDate>
        
        <creator>Supun Bandara</creator>
        
        <creator>Uthayasanker Thayasivam</creator>
        
        <subject>Audio data analysis; pooling; deep learning; dimensionality reduction; spectrograms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Audio analysis is a rapidly advancing field that spans various domains, including speech, music, and environmental sound data. Using spectrograms with Convolutional Neural Networks (CNNs) enables the visualization and extraction of critical audio features by combining time-frequency representations with deep learning. Pooling plays a crucial role in this process, as it reduces dimensionality while retaining essential information. However, existing evaluations of pooling methods primarily emphasize downstream task performance, such as classification accuracy, often overlooking their effectiveness in preserving critical signal features. To address this gap, we use 17 distinct metrics, categorized into four domains, to comprehensively assess various pooling operations. Furthermore, we explore the underexamined relationship between specific pooling techniques and their impact on feature retention across diverse audio applications. Our analysis encompasses spectrograms from three audio domains (speech, music, and environmental sound), identifying their key characteristics, and grouping them accordingly. Using this setup, we evaluate the performance of 12 pooling methods across these applications. By investigating the features critical to each task and evaluating how well different pooling techniques preserve them, we provide insights into their suitability for specific applications. This work aims to guide researchers in selecting the most appropriate pooling strategies for their applications, enabling more granular evaluations, improving explainability, and thereby advancing the precision and efficiency of audio analysis pipelines.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_95-Evaluating_and_Interpreting_Pooling_Techniques_in_Spectrogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Cross-Patient Epilepsy Detection via EEG Decomposition into Canonical Brain Rhythms with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160794</link>
        <id>10.14569/IJACSA.2025.0160794</id>
        <doi>10.14569/IJACSA.2025.0160794</doi>
        <lastModDate>2025-07-30T12:59:46.9370000+00:00</lastModDate>
        
        <creator>Jose Yauri</creator>
        
        <creator>Elinar Carrillo-Riveros</creator>
        
        <creator>Edith Guevara-Morote</creator>
        
        <creator>Juan Carlos Carre&#241;o-Gamarra</creator>
        
        <creator>Karel Peralta-Sotomayor</creator>
        
        <creator>Pelayo Quispe-Bautista</creator>
        
        <subject>EEG signals; EEG signal decomposition; canonical brain rhythms; deep learning; convolutional neural network; transformer neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Epilepsy affects more than 50 million people worldwide, and almost 80% of them live in low-income countries with limited access to medical and public services. Beyond these challenges, epileptic patients also face other problems, such as stigma and social exclusion due to the misunderstanding of epilepsy. Thus, epilepsy has become a major public health problem with a high social impact. Electroencephalography (EEG) remains the primary tool for diagnosing epilepsy; however, the traditional procedure of reviewing long EEG recordings is time-consuming, error-prone, and highly dependent on the neurologist’s experience. Recent advances in deep learning (DL) have driven the development of new methods for automatic epilepsy detection. Despite these advances, most methods are not generalizable to all patients, limiting their clinical applicability in real-life cases. In this work, we present a cross-patient method capable of improving epilepsy detection by spectral decomposition of EEG signals into canonical brain rhythms. These spectral bands improve the signal significance and the model performance. The proposal was evaluated in a cross-patient validation scheme on the CHB-MIT dataset and demonstrated superior performance using EEG signals from the interictal and ictal epilepsy stages. The model achieved 100% sensitivity and specificity using the theta band, outperforming the state-of-the-art methods and offering a promising step towards real-world clinical implementation.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_94-Improving_Cross_Patient_Epilepsy_Detection_via_EEG_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gender and Age Estimation from Facial Images Based on Multi-Task and Curriculum Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160793</link>
        <id>10.14569/IJACSA.2025.0160793</id>
        <doi>10.14569/IJACSA.2025.0160793</doi>
        <lastModDate>2025-07-30T12:59:46.9230000+00:00</lastModDate>
        
        <creator>Toma Brezovan</creator>
        
        <creator>Claudiu Ionut Pop&#238;rlan</creator>
        
        <subject>Age estimation; gender classification; multi-task learning; curriculum learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study presents a multi-task deep learning approach for predicting age and gender attributes from facial images, with the aim of obtaining a robust dual classifier. The proposed system uses the pre-trained EfficientNet-B4 model as the feature extractor of the main model and incorporates a two-branch architecture, where the output of the gender classification branch informs the age prediction branch. This constitutes conditional feature learning with an explicit injection mechanism that injects gender information into the age branch of the dual-task model, which is one of the novelties of our proposal. A curriculum learning strategy is applied during training to progressively improve the model’s performance using various datasets, such as UTKFace, MORPH-II, and Adience. The proposed multi-phase curriculum learning strategy, which uses both multi-task learning and multi-dataset training, is another novelty of our proposal. Experimental results show that the model achieves high accuracy in both age and gender classification tasks while maintaining low inference latency. Furthermore, the experiments show that the classification accuracy of the proposed method, for both gender and age and across all datasets used, is close to the best state-of-the-art results, which validates the robustness of the proposed classifier.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_93-Gender_and_Age_Estimation_from_Facial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Deception Across Domains: A Comprehensive Survey of Techniques, Challenges, and Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160792</link>
        <id>10.14569/IJACSA.2025.0160792</id>
        <doi>10.14569/IJACSA.2025.0160792</doi>
        <lastModDate>2025-07-30T12:59:46.8730000+00:00</lastModDate>
        
        <creator>Amal Sayari</creator>
        
        <creator>Slim Rekhis</creator>
        
        <subject>Cyber defense; cyber deception; cloud environments; wireless networks; cyber-physical systems; industrial control systems; smart grids; internet of things; internet of vehicles; unmanned aerial vehicles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Cloud environments (CE), wireless networks (WN), cyber-physical systems (CPS), industrial control systems (ICS), smart grids (SG), internet of things (IoT), internet of vehicles (IOV), and unmanned aerial vehicles (UAV) are currently popular targets for cyberattacks due to their inherent limitations and vulnerabilities. Each domain has its own attack surfaces, weaknesses, and areas for implementing defense strategies appropriate to its specific conditions. Among the various defense mechanisms discussed in recent years, cyber deception has emerged as a highly promising method. This approach allows defenders to misdirect attackers, gather threat intelligence, and proactively increase security by engaging adversaries in deception environments. Cyber deception has been a topic of investigation in several studies, where specific frameworks and techniques were proposed to identify, delay, or disrupt adversarial behavior. Nevertheless, earlier works are frequently limited in scope or lack a unified framework, making a thorough comparative study necessary. This survey investigates the cyber deception techniques used in various domains. The first part covers the fundamentals of deception and its background. Next, it presents a summary of the available deception techniques, their modeling by frameworks such as MITRE ATT&amp;CK, D3FEND, and Engage, and intelligent orchestration using reinforcement learning (RL) and game theory (GT). Then, it provides a thorough systematic review of each selected paper, covering the system design, deception techniques used, evaluation metrics, and limitations of each scheme. The results are compiled into a unified summary table to enable a quick and effective comparison across the domains. It concludes by discussing the main challenges, open issues, and areas of research that have not yet been explored, making it a valuable source for future research on cyber deception.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_92-Cyber_Deception_Across_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecast COVID-19 Epidemics by Strengthening Deep Learning Models with Time Series Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160791</link>
        <id>10.14569/IJACSA.2025.0160791</id>
        <doi>10.14569/IJACSA.2025.0160791</doi>
        <lastModDate>2025-07-30T12:59:46.8430000+00:00</lastModDate>
        
        <creator>Warapree Tangseefa</creator>
        
        <creator>Tepanata Pumpaibool</creator>
        
        <creator>Paisit Khanarsa</creator>
        
        <creator>Krung Sinapiromsaran</creator>
        
        <subject>Forecasting models; COVID-19; long short-term memory; gated recurrent unit; autocorrelation function; partial autocorrelation function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The COVID-19 pandemic has profoundly impacted economic and social structures, directly affecting individuals’ lives. Deep learning models offer the potential to forecast future long-term trends and capture the temporal dependencies present in time series data. In this study, we propose leveraging the autocorrelation function (ACF) and the partial autocorrelation function (PACF) series as additional components to enhance the forecasting accuracy of our models. Our proposed method is applied to forecast COVID-19 time series data in twelve countries using the deep learning techniques of Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). When comparing the average rankings of mean absolute error and R-squared, the proposed models demonstrated superior performance in time series forecasting compared to the standard LSTM and GRU models. Specifically, the ACF-PACF-GRU model achieved the best median values for mean absolute percentage error (1.67 per cent for confirmed cases and 2.17 per cent for death cases) and root mean square error (1.92 for confirmed cases and 2.17 for death cases). Therefore, the proposed ACF-PACF-GRU model showed the highest performance in forecasting both confirmed and death cases. This research introduces a novel method for constructing effective time series models aimed at forecasting disease burdens, thereby aiding in epidemic control and the implementation of preventive measures.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_91-Forecast_COVID_19_Epidemics_by_Strengthening_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Older Adults and Technology Design from the HCI Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160790</link>
        <id>10.14569/IJACSA.2025.0160790</id>
        <doi>10.14569/IJACSA.2025.0160790</doi>
        <lastModDate>2025-07-30T12:59:46.8130000+00:00</lastModDate>
        
        <creator>Hasan Ali Sagga</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Older adults; HCI; smartphone applications; human science; technology design; older adults challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Older adults are an important segment of all societies worldwide, and this category of users cannot be ignored, considering technological progress, especially the proliferation of smartphone applications. The expected growth of this age group in the coming years, specifically in some developing countries, will present interaction challenges and opportunities in several areas for both older adult users and smartphone application designers. The main purpose of this review study is to build a better understanding of this group from different angles, to characterize these users from the perspective of the Human-Computer Interaction (HCI) field, and to explore current and future challenges, establishing a solid literature review that emphasizes findings from HCI and the human sciences. The review concludes with current and future trends to help address technology design and older adults’ characteristics and needs.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_90-Older_Adults_and_Technology_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Driven DNA Image Encryption with Optimal Chaotic Map Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160789</link>
        <id>10.14569/IJACSA.2025.0160789</id>
        <doi>10.14569/IJACSA.2025.0160789</doi>
        <lastModDate>2025-07-30T12:59:46.7800000+00:00</lastModDate>
        
        <creator>Sara Bentouila</creator>
        
        <creator>Kamel Mohamed Faraoun</creator>
        
        <subject>Image encryption; DNA encoding; chaotic map selection; Lorenz system; deep learning; convolutional neural network (CNN); security analysis; VGG16; cryptographic robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This research introduces an advanced image encryption framework addressing critical security limitations in existing approaches. The study focuses on developing a robust encryption methodology that overcomes arbitrary chaotic map selection and static key generation vulnerabilities. Our approach integrates three synergistic components: a systematic chaotic map evaluation protocol identifying optimal dynamic systems, a deep learning-based key generation mechanism employing fine-tuned convolutional neural networks for image-sensitive cryptographic keys, and a hybrid encryption pipeline combining DNA encoding with chaotic diffusion. Experimental validation demonstrates that the proposed scheme achieves near-ideal entropy values (cipher images with an average entropy of 7.90 and above), and ensures extremely low correlation coefficients between adjacent pixels (close to zero in horizontal, vertical, and diagonal directions). Differential analysis confirms strong robustness, with NPCR values exceeding 99.6% and UACI about 33.5% across multiple color images. Visual results show that encrypted images display no perceivable patterns or similarities with the original images. Comparative performance assessment also highlights the method’s efficiency, with encryption execution times competitive with or better than recent state-of-the-art methods. Brute-force resistance is guaranteed by an extensive key space determined by the combination of deep learning-generated keys, Lorenz chaotic parameters, and DNA encoding rule permutations. The comprehensive multi-layered security strategy further ensures resilience against brute-force, statistical, differential, and chosen-plaintext attacks, as well as against modern deep learning-based cryptanalysis.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_89-Deep_Learning_Driven_DNA_Image_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Analysis of Smart Lighting System for Room Environments Using Simulation Supporting Diverse Light Bulbs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160788</link>
        <id>10.14569/IJACSA.2025.0160788</id>
        <doi>10.14569/IJACSA.2025.0160788</doi>
        <lastModDate>2025-07-30T12:59:46.7500000+00:00</lastModDate>
        
        <creator>Husnul Ajra</creator>
        
        <creator>Mazlina Abdul Majid</creator>
        
        <creator>Md. Shohidul Islam</creator>
        
        <subject>Bulb; energy; light; model; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Recently, the demand for intelligent, energy-efficient lighting systems has increased due to rising environmental concerns and increasing electricity consumption in smart room environment buildings. Conventional lighting systems often operate inefficiently, using outdated bulb technologies and lacking automation, which results in substantial energy waste, especially in rooms with variable occupancy. Lighting significantly contributes to energy consumption in indoor spaces, which presents vast opportunities for smart lighting model development through automation and adaptive control. This study proposes a smart lighting system model for room environments that dynamically adapts to user presence and supports diverse light bulb types. The study analyzes energy usage while maintaining automatic light control and operational effectiveness through simulation. The system is developed using AnyLogic by integrating agent-based and discrete event simulation to model occupant behavior and manage event-driven lighting logic. It incorporates sensors, smart door mechanisms, and energy-measuring processes, all powered by solar energy and managed through battery storage. The system dynamically adjusts lighting based on occupancy, minimizing idle energy usage for the room. LED bulbs offer more promising energy efficiency, while incandescent bulbs show the highest consumption. The outcome provides a visualized simulation model for designing adaptive lighting systems and reinforces the potential to enhance energy efficiency to support sustainability in smart room applications.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_88-Design_and_Analysis_of_Smart_Lighting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Particle Filter for Accurate WiFi-Based Indoor Positioning in the Presence of Outlier-Corrupted Sensor Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160787</link>
        <id>10.14569/IJACSA.2025.0160787</id>
        <doi>10.14569/IJACSA.2025.0160787</doi>
        <lastModDate>2025-07-30T12:59:46.7030000+00:00</lastModDate>
        
        <creator>Mohamed Aizad Bin Mohamed Ghazali</creator>
        
        <creator>Aroland Kiring</creator>
        
        <creator>Lyudmila Mihaylova</creator>
        
        <creator>Hoe Tung Yew</creator>
        
        <creator>Seng Kheau Chung</creator>
        
        <creator>Farrah Wong</creator>
        
        <subject>Complex environments; indoor positioning; measurement noise and outliers; RMSE reduction; robust particle filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study presents a comprehensive evaluation of an outlier-robust particle filter (RPF) designed to improve indoor positioning accuracy in complex environments with substantial measurement noise and outliers. The RPF’s performance is benchmarked against a standard Particle Filter (PF) using both simulated and real-world datasets. Simulation results indicate that the RPF consistently outperforms the PF in indoor positioning, particularly when sensor measurements contain outliers, achieving significant reductions in root mean square error (RMSE) for position, velocity, and acceleration estimation, with improvements of approximately 40.02%, 38.48%, and 65.80%, respectively. Real-world experiments, applying a calibrated log-normal path loss model to Wi-Fi received signal strength (RSS) data, further corroborate the RPF’s effectiveness, demonstrating a 93.61% improvement in positioning accuracy compared to the PF. These findings highlight the RPF’s robustness in delivering high accuracy, especially in environments with measurement outliers, establishing it as a reliable solution for indoor tracking in noisy sensor environments.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_87-Robust_Particle_Filter_for_Accurate_WiFi_Based_Indoor_Positioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cardio-Edge: Hardware-Software Co-design Implementation of LSTM Based ECG Classification for Continuous Cardiac Monitoring on Wearable Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160786</link>
        <id>10.14569/IJACSA.2025.0160786</id>
        <doi>10.14569/IJACSA.2025.0160786</doi>
        <lastModDate>2025-07-30T12:59:46.6730000+00:00</lastModDate>
        
        <creator>Nousheen Akhtar</creator>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Jiancun Fan</creator>
        
        <creator>Muhammad Umair Khan</creator>
        
        <subject>ECG classification; wearable devices; discrete wavelet transform (DWT); long short-term memory (LSTM); field-programmable gate array (FPGA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Cardiac arrhythmias should be detected at an early stage so that clinical intervention can take place and continuous patient monitoring can be established in a timely manner. In this study, we present Cardio-Edge, a hardware-software co-design implementation of an LSTM-based ECG classification system optimized for real-time use on wearable devices. The proposed architecture comprises a discrete wavelet transform (DWT) and principal component analysis (PCA) for efficient feature extraction, followed by multiple parallel LSTM networks and a multi-layer perceptron (MLP) for classification. Implemented on a Xilinx ZYNQ-7000 SoC, our system leverages FPGA-based hardware acceleration alongside an ARM Cortex-A9 for preprocessing tasks. Compared to a software-only implementation on the same ARM processor, our co-design achieves a 10&#215; improvement in execution speed with 99% classification accuracy, trained and verified on the MIT-BIH arrhythmia dataset. The hardware-efficient implementation employs resource-optimized architectures for the LSTM, activation functions, and fully connected layers, making it appropriate for low-power, patient-specific wearable healthcare devices. This real-time, on-chip solution eliminates dependence on cloud connectivity and ensures data privacy, making it suitable for continuous cardiac monitoring applications.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_86-Cardio_Edge_Hardware_Software_Co_design_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Air Quality Prediction Based on VMD-CNN-BiLSTM-Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160785</link>
        <id>10.14569/IJACSA.2025.0160785</id>
        <doi>10.14569/IJACSA.2025.0160785</doi>
        <lastModDate>2025-07-30T12:59:46.6400000+00:00</lastModDate>
        
        <creator>Huang Xinxin</creator>
        
        <creator>Mohd Suffian Sulaiman</creator>
        
        <creator>Marshima Mohd Rosli</creator>
        
        <subject>Air quality prediction; variational mode decomposition; convolutional neural network; bidirectional long short-term memory; hyperparameter optimization; air quality index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>With the advancement of industrialization, air pollution has emerged as a critical global health and environmental concern. This study presents an air quality prediction model based on variational mode decomposition, a convolutional neural network, bidirectional long short-term memory, and an attention mechanism. The variational mode decomposition method is employed to decompose the Air Quality Index sequence, capturing different local characteristics of the original data. A hybrid model is constructed by integrating the convolutional neural network for feature extraction, the bidirectional long short-term memory for temporal pattern recognition, and the attention mechanism for focusing on significant data features. The model is optimized using the Grey Wolf Optimizer for hyperparameter tuning, thereby enhancing prediction accuracy. The proposed model is evaluated using air quality data from Changsha, China, covering the years 2015 to 2023. The results demonstrate that our model outperforms several other models in terms of mean absolute error, mean squared error, root mean squared error, and R-squared. This study provides a robust approach to air quality prediction, offering valuable insights for residents and policymakers.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_85-Air_Quality_Prediction_Based_on_VMD_CNN_BiLSTM_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Anatomical Analysis of Wood Cross Sections Using Macroscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160784</link>
        <id>10.14569/IJACSA.2025.0160784</id>
        <doi>10.14569/IJACSA.2025.0160784</doi>
        <lastModDate>2025-07-30T12:59:46.6100000+00:00</lastModDate>
        
        <creator>Khanh Nguyen-Trong</creator>
        
        <creator>Thanh Nhan Nguyen-Thi</creator>
        
        <subject>Wood species identification; wood anatomical analysis; segmentation; Mask R-CNN; DenseNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Wood anatomical features are crucial in forestry science, traditionally relying on manual inspection of wood cross-sections. This conventional method is time-consuming, subjective, and dependent on expert experience. Recent advancements in deep learning offer high accuracy but often operate as black-box models, lacking interpretability and struggling with out-of-distribution (OOD) challenges under real-world variations. To address these limitations, we propose a two-stage framework combining deep-learning-based image classification and explicit anatomical feature analysis, directly extracting expert-recognized morphological attributes such as pore size, frequency, and spatial arrangement from macroscopic images. By quantifying these anatomical descriptors, our framework yields transparent, OOD-robust features that can be directly fed into downstream species-identification models, thereby enhancing future classification accuracy while preserving interpretability. An end-to-end implementation integrates data acquisition, automated feature extraction, and interactive visualization, making the methodology practically applicable in both laboratory and field settings.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_84-Automated_Anatomical_Analysis_of_Wood_Cross_Sections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Robust IoT Security: The Impact of Data Quality and Imbalanced Data on AI-Based IDS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160783</link>
        <id>10.14569/IJACSA.2025.0160783</id>
        <doi>10.14569/IJACSA.2025.0160783</doi>
        <lastModDate>2025-07-30T12:59:46.5930000+00:00</lastModDate>
        
        <creator>Hiba El Balbali</creator>
        
        <creator>Anas Abou El Kalam</creator>
        
        <subject>Machine learning; intrusion detection; internet of things; data quality; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The increased number of connected devices and the rise of Big Data have revolutionized industries and triggered a surge in cyberattacks, making security a top priority. Machine learning and Deep Learning algorithms are crucial in intrusion detection and classification, enabling systems to identify and respond to threats with precision. However, the success of these algorithms is directly related to the quality of the data they process, underscoring the critical importance of robust and well-prepared datasets. Furthermore, despite their potential in detecting and classifying attacks, some algorithms are susceptible to imbalanced datasets, struggling to accurately classify minority classes, while others demonstrate resilience to such challenges. Hence, this study presents a comprehensive analysis of the impact of data quality and imbalanced data on different classification problems, particularly binary, 8-class, and 34-class classification in an intrusion detection context. Our work extensively evaluates six ML and DL algorithms using a novel IoT dataset. Unlike existing research, we use a diverse set of metrics, including accuracy, precision, recall, F1-score, AUC-ROC, and other visual tools, to provide a robust and reliable algorithm performance assessment. This analysis underscores the critical importance of addressing data quality and the impact of different balancing techniques across algorithm types and classification tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_83-Towards_Robust_IoT_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CeC-SMOTE: A Clustering and Centroid-Based Adaptive Oversampling Method for Imbalanced Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160782</link>
        <id>10.14569/IJACSA.2025.0160782</id>
        <doi>10.14569/IJACSA.2025.0160782</doi>
        <lastModDate>2025-07-30T12:59:46.5470000+00:00</lastModDate>
        
        <creator>Xiaoling Gao</creator>
        
        <creator>Marshima Mohd Rosli</creator>
        
        <creator>Muhammad Izzad Ramli</creator>
        
        <creator>Nursuriati Jamil</creator>
        
        <subject>Imbalanced data classification; synthetic oversampling; k-means clustering; centroid-based neighbor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Class imbalance is a common challenge in real-world datasets, leading standard classifiers to perform poorly on underrepresented classes. Traditional oversampling techniques, such as SMOTE and its variants, often generate synthetic samples without fully considering the local data structure, resulting in increased noise and class overlap. This study introduces CeC-SMOTE, an adaptive oversampling method that integrates clustering and centroid-based strategies to enhance the quality of synthetic minority samples. By first partitioning minority instances using K-means clustering, CeC-SMOTE identifies safe and boundary regions, selectively generating new samples where they are most needed while filtering out noise. This targeted approach preserves the underlying distribution of the minority class and minimizes the risk of overfitting. Extensive experiments on artificial and benchmark UCI datasets demonstrate that CeC-SMOTE consistently delivers competitive or superior results compared to established oversampling techniques, particularly in cases with complex or ambiguous class boundaries. Sensitivity analysis confirms that the method is robust to parameter settings, enabling strong performance with minimal tuning.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_82-CeC_SMOTE_A_Clustering_and_Centroid_Based_Adaptive_Oversampling_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Portfolio Optimization with Weighted Scoring for Return Prediction Through Machine Learning and Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160781</link>
        <id>10.14569/IJACSA.2025.0160781</id>
        <doi>10.14569/IJACSA.2025.0160781</doi>
        <lastModDate>2025-07-30T12:59:46.5170000+00:00</lastModDate>
        
        <creator>Ruili Sun</creator>
        
        <creator>Qiongchao Xia</creator>
        
        <creator>Shiguo Huang</creator>
        
        <subject>Machine learning; stock return prediction; portfolio optimization; support vector regression; transformer; NASDAQ stock market; a-share stock market; cryptocurrency market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Accurately predicting stock returns can enhance the effectiveness of portfolio optimization models. Many previous studies typically divide machine learning algorithms and portfolio optimization into two separate stages: the first step leverages the powerful modeling capabilities of machine learning algorithms to select stocks, and the second step optimizes weights using traditional portfolio models. This separation means that the modeling strengths of machine learning are only utilized in the stock selection phase and not fully exploited during weight optimization. Therefore, this study proposes a portfolio construction method based on Return Prediction Weighted Scoring (RPWS). RPWS generates a stock ranking by assigning weighted scores to each stock, cleverly maps this ranking to weight biases, and then optimizes actual weights using a traditional covariance matrix. This process successfully integrates the modeling capabilities of machine learning into the weight optimization phase, ensuring its full utilization throughout the portfolio construction process. Backtesting experiments are conducted using the U.S. stock market, A-share market, and major cryptocurrencies as datasets, with Support Vector Regression (SVR), Transformer, and other machine learning algorithms as prediction models. Empirical results from these three markets show that the SVR-RPWS and Transformer-RPWS models significantly outperform mainstream funds and traditional portfolio models in terms of annualized returns, Sharpe ratio, and drawdown control.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_81-Enhancing_Portfolio_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Dried Fish Classification Using MobileNetV2 and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160780</link>
        <id>10.14569/IJACSA.2025.0160780</id>
        <doi>10.14569/IJACSA.2025.0160780</doi>
        <lastModDate>2025-07-30T12:59:46.4830000+00:00</lastModDate>
        
        <creator>Rajmohan Pardeshi</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Sanjay Kharat</creator>
        
        <subject>Dried fish classification; MobileNetV2; transfer learning; edge deployment; fisheries automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>India, the second largest fish producer globally, contributes significantly to food security, nutrition, and economic development. Dried fish is a vital component of the fisheries value chain, especially in South Asia, yet current classification methods are manual, inconsistent, and labor-intensive. This study aims to automate dried fish classification using MobileNetV2 through transfer learning, enabling real-time, lightweight deployment on edge devices. We trained and evaluated the model across four diverse publicly available datasets using single, bulk, head, and tail image modalities. Our experiments demonstrated high accuracy (up to 100%) and strong generalization across datasets. The proposed model offers a practical, scalable, and efficient solution to modernize dried fish processing and enhance productivity and traceability in fisheries.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_80-Automated_Dried_Fish_Classification_Using_MobileNetV2.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Firewall Log Analysis: Enhancing Threat Detection with Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160779</link>
        <id>10.14569/IJACSA.2025.0160779</id>
        <doi>10.14569/IJACSA.2025.0160779</doi>
        <lastModDate>2025-07-30T12:59:46.4530000+00:00</lastModDate>
        
        <creator>Yasmine ABOUDRAR</creator>
        
        <creator>Khalid BOURAGBA</creator>
        
        <creator>Mohamed OUZZIF</creator>
        
        <subject>AI-driven SIEM; deep learning; firewall log analysis; threat detection; false positives; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>As cyber-attacks grow increasingly sophisticated, cybersecurity threats have surged, with 430 million new malware instances identified in 2023, representing a 36% rise compared to 2020 figures in the United States. Traditional firewall defense mechanisms are increasingly limited. Even though firewalls are the frontline defense mechanism, their reliance on preconfigured rules and signature-based detection leaves them behind in the identification of carefully crafted, dynamic attacks. Furthermore, they generate enormous volumes of logs and hence add high false positive rates, making manual threat analysis a tedious and time-consuming process. In order to counter such issues, we propose an AI-fortified SIEM system using deep learning algorithms for intelligent firewall log analysis. This serves to reduce false positives through event pattern extraction and correlation, allowing for more efficient threat detection. By employing deep neural networks such as fully connected, convolutional, and recurrent architectures, our system enhances classification accuracy and optimizes threat detection. We utilize actual firewall logs and benchmarking datasets (UNSW-NB15-training and UNSW-NB15-testing) to assess our system, one for training and the other for testing. Our primary objective is to differentiate between true positive and false positive alarms so that security analysts can respond to cyber threats more effectively. The experimental results demonstrate the effectiveness of our approach in improving threat monitoring and IT security. Moreover, they confirm that our learning-based models outperform classical machine learning methods and are therefore a realistic and efficient solution for real-world firewall security.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_79-AI_Driven_Firewall_Log_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Representation Learning Ability of Self-Supervised Learning in Unlabeled Image Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160778</link>
        <id>10.14569/IJACSA.2025.0160778</id>
        <doi>10.14569/IJACSA.2025.0160778</doi>
        <lastModDate>2025-07-30T12:59:46.4230000+00:00</lastModDate>
        
        <creator>Jinzhu Lin</creator>
        
        <creator>Tianwei Ni</creator>
        
        <subject>Self-supervised learning (SSL); unlabeled image data; representation learning; contrastive learning; convolutional neural network (CNN); image classification; feature embedding; label-efficient learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Many existing systems struggle to strike a balance between global feature discrimination and local semantic understanding, despite the growing popularity of Self-Supervised Learning (SSL) for representation learning with unlabeled image data. This study introduces a novel SSL framework—Contrastive and Contextual Self-Supervised Representation Learning (C2SRL)—which integrates contrastive learning mechanisms with auxiliary context-based pretext tasks, specifically rotation prediction and jigsaw puzzle solving. The proposed C2SRL enhances two leading contrastive models, SimCLR and MoCo, by incorporating contextual modules and a unified multi-task loss function, thereby improving the robustness and generalizability of the learned representations. A lightweight ResNet backbone is employed for encoding, followed by a dual-view augmentation strategy and a projection head that maps features into a contrastive embedding space. The proposed C2SRL outperforms existing SSL approaches in terms of classification accuracy and clustering coherence on two benchmark datasets, STL-10 and CIFAR-10. It demonstrates strong scalability, as evidenced by its 89.6% mAP and 0.81 NMI, achieved using only 10% labeled data for fine-tuning. These results highlight the potential of combining contextual and contrastive learning objectives to generate rich, transferable visual representations for low-label or label-free applications.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_78-The_Representation_Learning_Ability_of_Self_Supervised_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Cybersecurity Frameworks in Educational Institutions: Towards a Tailored Security Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160777</link>
        <id>10.14569/IJACSA.2025.0160777</id>
        <doi>10.14569/IJACSA.2025.0160777</doi>
        <lastModDate>2025-07-30T12:59:46.3900000+00:00</lastModDate>
        
        <creator>Syarif Hidayatulloh</creator>
        
        <creator>Aedah Binti Abd. Rahman</creator>
        
        <subject>Cybersecurity; educational institutions; cybersecurity frameworks; tailored security model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Educational institutions face unique cybersecurity challenges due to their open culture, decentralised structures, and limited resources. While standard frameworks such as NIST, ISO/IEC 27001, and COBIT offer comprehensive guidance, their full implementation in academic settings is often impractical. This study addresses the gap by conducting a document-based comparative analysis of these frameworks, focusing on their applicability in educational institutions. A total of 42 documents—including case studies, cybersecurity guidelines, and academic articles—were analysed using thematic coding. The findings reveal significant misalignments between current frameworks and academic environments, particularly in terms of complexity, adaptability, and resource demand. Based on these insights, a tailored cybersecurity model is proposed. The model emphasises modularity, cultural integration, resource optimisation, and decentralised implementation to suit the educational context. A multi-step validation plan is also outlined to assess the model&#39;s practicality. This research offers both theoretical and practical contributions to cybersecurity governance in the education sector.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_77-Comparative_Analysis_of_Cybersecurity_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dual-Path Gated Attention-Based Deep Learning Model for Automated Essay Scoring Using Linguistic Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160776</link>
        <id>10.14569/IJACSA.2025.0160776</id>
        <doi>10.14569/IJACSA.2025.0160776</doi>
        <lastModDate>2025-07-30T12:59:46.3430000+00:00</lastModDate>
        
        <creator>Qin Jie</creator>
        
        <creator>Congling Huang</creator>
        
        <subject>Attention mechanism; deep learning; essay scoring; gated fusion; linguistic features; semantic encoding; syntactic representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Automated Essay Scoring (AES) has become a critical tool for scaling writing assessment in modern education. However, existing AES models often struggle to effectively evaluate both the syntactic structure and semantic meaning of essays while maintaining interpretability and fairness. This study presents a novel deep learning-based model that integrates syntactic and semantic analysis using an improved LSTM architecture. The model employs a dual-path structure: one path processes semantic representations using BERT-tokenized input, while the other captures syntactic patterns via part-of-speech sequences. These paths are fused using a gated mechanism and enhanced through multi-head attention to emphasize important linguistic cues. Additional student metadata, such as grade level and gender, is also incorporated to improve personalization and fairness. The model jointly predicts both holistic and grammar scores, trained and evaluated on the ASAP 2.0 dataset. Performance is measured using multiple statistical metrics, including MAE, MSE, RMSE, R&#178;, Pearson’s r, and Spearman’s ρ. The proposed model achieves a high prediction accuracy of 92%, significantly outperforming traditional and single-path models. These results demonstrate the model’s ability to capture both surface-level and deep linguistic features, offering a robust, interpretable, and scalable solution for automated writing evaluation.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_76-A_Dual_Path_Gated_Attention_Based_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Level Stacking Ensemble Model Optimized by Soft Set Theory for Customer Churn Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160775</link>
        <id>10.14569/IJACSA.2025.0160775</id>
        <doi>10.14569/IJACSA.2025.0160775</doi>
        <lastModDate>2025-07-30T12:59:46.3130000+00:00</lastModDate>
        
        <creator>Nurul Nadzirah Adnan</creator>
        
        <creator>Mohd Khalid Awang</creator>
        
        <subject>Customer churn prediction; soft set theory; ensemble learning; stacking models; telecommunications; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study proposes a multi-level stacking ensemble model enhanced by Soft Set Theory to improve the accuracy and efficiency of customer churn prediction. The proposed model leverages Soft Set Theory to eliminate redundant classifiers via the analysis of the indiscernibility matrix, increasing classifier diversity and ensemble generalization. Ten base classifiers are considered at Level-1, from which five are selected: Gradient Boosting, Logistic Regression, XGBoost, Support Vector Machine, and CatBoost. Logistic Regression serves as the Level-2 meta-classifier. Experiments using the UCI Telco Churn dataset achieve an accuracy of 94.87% and an F1-score of 95.14%, while reducing computational time by over 50%. Comparative analyses with existing churn prediction models validate the model&#39;s superior performance. This framework demonstrates strong potential for implementation in telecommunications, healthcare, and finance sectors where customer retention is critical.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_75-A_Multi_Level_Stacking_Ensemble_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Grey Wolf Optimizer Algorithm with Combinatorial Testing for Test Suite Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160774</link>
        <id>10.14569/IJACSA.2025.0160774</id>
        <doi>10.14569/IJACSA.2025.0160774</doi>
        <lastModDate>2025-07-30T12:59:46.2970000+00:00</lastModDate>
        
        <creator>Muhamad Asyraf Anuar</creator>
        
        <creator>Rosziati Ibrahim</creator>
        
        <creator>Mazidah Mat Rejab</creator>
        
        <creator>Nurezayana Zainal</creator>
        
        <subject>Grey wolf optimizer algorithm; combinatorial testing; metaheuristics; t-way testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Combinatorial Testing (CT) is a software testing technique designed to detect defects in complex systems by efficiently covering diverse combinations of input parameters within given time and resource constraints. A common strategy in CT is t-way testing, which ensures that all possible interactions among any t parameters are tested at least once. The Grey Wolf Optimization Algorithm (GWOA) is a nature-inspired metaheuristic that has been successfully applied to various optimization problems. In this study, we introduce the Combinatorial Grey Wolf Optimization Algorithm (CGWOA), which integrates GWOA with CT to enhance test suite generation. The effectiveness of CGWOA is evaluated through experiments on a real-world software system, where the number of test cases was reduced by 98%, from 3000 to 40, while still ensuring complete 2-way interaction coverage. Experimental results demonstrate that CGWOA consistently produces smaller test suites than pure computational methods such as Jenny, IPOG, IPOG-D and TConfig, especially in handling both lower and higher interaction strengths. In scenarios with binary parameters, CGWOA delivered the smallest test suites, while in more complex configurations, including MCA settings, it showed impressive scalability, outperforming the other algorithms. Statistical analysis using the Wilcoxon signed-rank test revealed that the proposed approach significantly outperforms existing methods, with all p-values less than 0.02 after applying the Holm correction. The experimental results demonstrate that the proposed CGWOA approach advances software testing by efficiently minimizing the number of test cases required to achieve complete test coverage.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_74-Integration_of_Grey_Wolf_Optimizer_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Bubble Detection in Contact Lenses Using a Hybrid Deep Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160773</link>
        <id>10.14569/IJACSA.2025.0160773</id>
        <doi>10.14569/IJACSA.2025.0160773</doi>
        <lastModDate>2025-07-30T12:59:46.2800000+00:00</lastModDate>
        
        <creator>Chee Chin Lim</creator>
        
        <creator>Yen Fook Chong</creator>
        
        <creator>Vikneswaran Vijean</creator>
        
        <creator>Gei Ki Tang</creator>
        
        <subject>Bubble detection; contact lens quality assurance; deep learning; transfer learning; Support Vector Machine (SVM); AlexNet; image pre-processing; binary classification; defect detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study presents a hybrid deep learning approach for automated detection of bubbles in contact lenses, aiming to enhance quality assurance in the manufacturing process. A hybrid AlexNet+SVM model was developed using transfer learning, where AlexNet’s convolutional features were leveraged for binary classification (bubble vs. normal) via a Support Vector Machine (SVM) classifier. The dataset consisted of 320 images (160 bubbles, 160 normal) pre-processed using median filtering, local histogram equalization, and circular masking to improve image clarity and consistency. Through systematic hyperparameter tuning, the model achieved 100% testing accuracy and 97.92% validation accuracy, with perfect precision (100%) and high recall (96%). Comparative evaluation against ResNet and VGGNet demonstrated that the AlexNet+SVM model offered superior generalization and robustness, particularly for small-scale datasets. While VGGNet also achieved 100% testing accuracy with 95.83% validation accuracy, ResNet underperformed in recall (89%), likely due to its deeper architecture and data limitations. The findings underscore the suitability of hybrid models for binary classification tasks in limited-data scenarios. Identified challenges, including dataset size and risk of overfitting, point to future research directions involving expanded datasets and more advanced pre-processing techniques. This research contributes to the advancement of automated defect detection systems for contact lens manufacturing, offering a reliable and efficient quality control solution.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_73-Automated_Bubble_Detection_in_Contact_Lenses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HGWWO: A Hybrid Grey Wolf–Whale Optimizer for Load Balancing in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160772</link>
        <id>10.14569/IJACSA.2025.0160772</id>
        <doi>10.14569/IJACSA.2025.0160772</doi>
        <lastModDate>2025-07-30T12:59:46.2500000+00:00</lastModDate>
        
        <creator>Yameng BAI</creator>
        
        <creator>Junxia MENG</creator>
        
        <creator>Shuai ZHAO</creator>
        
        <creator>Ruoyu REN</creator>
        
        <subject>Cloud computing; load balancing; hybrid meta-heuristic; grey wolf optimizer; whale optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This paper aims to develop an efficient and adaptive load balancing algorithm for cloud computing environments using a novel hybrid meta-heuristic approach. Effective load balancing is necessary for optimum performance and resource utilization in cloud computing systems. Most conventional meta-heuristic algorithms suffer from premature convergence and poor exploration–exploitation tradeoffs. An innovative hybrid meta-heuristic algorithm, Hybrid Grey Wolf–Whale Optimizer (HGWWO), is proposed for efficiently and dynamically balancing cloud load. HGWWO integrates the leadership hierarchy and adaptive hunting strategy of the Grey Wolf Optimizer (GWO) with the spiral-shaped exploitation mechanism of the Whale Optimization Algorithm (WOA), resulting in high convergence rates. The algorithm is implemented in a multi-objective cloud load balancing model to reduce response time, energy usage, and makespan while optimizing resource utilization among virtual machines. The experimental outcomes prove that HGWWO outperforms existing algorithms regarding throughput, waiting time, and execution efficiency. The suggested model has potential for real-time cloud scheduling of resources and is an efficient solution for scalable and heterogeneous cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_72-HGWWO_A_Hybrid_Grey_Wolf_Whale_Optimizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection and Fault Diagnosis of Power Distribution Line Point Cloud Data Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160771</link>
        <id>10.14569/IJACSA.2025.0160771</id>
        <doi>10.14569/IJACSA.2025.0160771</doi>
        <lastModDate>2025-07-30T12:59:46.2330000+00:00</lastModDate>
        
        <creator>Jiangshun Yu</creator>
        
        <creator>Poyu You</creator>
        
        <creator>Jian Zhao</creator>
        
        <creator>Xianzhe Long</creator>
        
        <creator>Yuran Chen</creator>
        
        <subject>Power distribution; anomaly detection; point cloud; deep learning; fault diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Early and accurate fault diagnosis in power distribution systems is essential to ensure stable electricity delivery and prevent outages. This study presents a deep learning-based anomaly detection framework that analyzes 3D LiDAR point cloud data to identify structural defects in power distribution lines. Leveraging advancements in deep learning and 3D sensing, a hybrid architecture combining PointNet++ and 3D Convolutional Neural Networks (3D CNN) is proposed. The system processes point clouds from the TS40K dataset, comprising high-resolution, annotated scans of power infrastructure, and uses a feature fusion strategy to integrate fine-grained local geometry from PointNet++ with global volumetric features from 3D CNN. Implemented in Python, the method achieves a 94.7% accuracy in fault diagnosis, outperforming standalone models. It robustly detects anomalies such as sagging wires, leaning poles, and broken insulators, maintaining precision, recall, and F1-scores above 90%, even under noisy and sparse conditions. Visualization of detected faults on 3D models confirms its precise localization capability, supporting real-time monitoring and maintenance planning in smart grids. By integrating complementary deep learning techniques, this approach offers a scalable, accurate, and automated solution for anomaly detection and fault diagnosis in power distribution systems. Future work will focus on multi-sensor fusion and semi-supervised learning to reduce dependence on labeled data and broaden applicability to other infrastructure use cases.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_71-Anomaly_Detection_and_Fault_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Game Theory Meets Explainable AI: An Enhanced Approach to Understanding Black Box Models Through Shapley Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160770</link>
        <id>10.14569/IJACSA.2025.0160770</id>
        <doi>10.14569/IJACSA.2025.0160770</doi>
        <lastModDate>2025-07-30T12:59:46.1870000+00:00</lastModDate>
        
        <creator>Mouad Louhichi</creator>
        
        <creator>Redwane Nesmaoui</creator>
        
        <creator>Mohamed Lazaar</creator>
        
        <subject>Cooperative game theory; Explainable Artificial Intelligence (XAI); Shapley values; cluster analysis; interpretability; feature attribution; black-box models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The increasing complexity of machine learning models necessitates robust methods for interpretability, particularly in clustering applications, where understanding group characteristics is critical. To this end, this paper introduces a novel framework that integrates cooperative game theory and explainable artificial intelligence (XAI) to enhance the interpretability of black-box clustering models. Our framework integrates approximated Shapley values with multi-level clustering to reveal hierarchical feature interactions, enabling both local and global interpretability. The framework is validated through extensive empirical evaluations on two datasets, the Portuguese wine quality benchmark and the Beijing Multi-Site Air Quality dataset. It demonstrates improved clustering quality and interpretability, with features such as density and total sulfur dioxide emerging as dominant predictors in the wine analysis, while pollutants like PM2.5 and NO2 significantly influence air quality clustering. Key contributions include a multi-level clustering approach that reveals hierarchical feature attribution, interactive visualizations produced with Altair, and a unified interpretability framework validated against state-of-the-art baselines. As a result, the framework forms a strong basis for interpretable clustering in essential fields like healthcare, finance, and environmental surveillance, which reinforces its generalizability across domains. The results underline the need for interpretability in machine learning, providing actionable insights for stakeholders in a variety of fields.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_70-Game_Theory_Meets_Explainable_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Data Management for Decision Support Systems in Indonesian Government Internal Audit: A DMBOK Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160769</link>
        <id>10.14569/IJACSA.2025.0160769</id>
        <doi>10.14569/IJACSA.2025.0160769</doi>
        <lastModDate>2025-07-30T12:59:46.1570000+00:00</lastModDate>
        
        <creator>Febrian Imanda Effendy</creator>
        
        <creator>Nilo Legowo</creator>
        
        <subject>Data management; data management body of knowledge; Indonesian government internal audit agency; decision support system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Indonesian public institutions, including the Financial and Development Supervisory Agency (BPKP), face challenges such as fragmented standards and poor data quality, which hinder effective Decision Support Systems (DSS). This research aims to evaluate BPKP&#39;s current analytics maturity level using the TDWI Analytics Maturity Model and to formulate a Data Management Body of Knowledge (DMBOK)-based strategy to enhance its data management and analytical capabilities in support of decision-making. This qualitative descriptive case study methodology employed document analysis. The research stages involved assessing maturity using the TDWI model, conducting a gap analysis, formulating a strategy with DMBOK principles, and proposing an implementation roadmap based on Aiken&#39;s Data Management Value Pyramid. The research findings indicate BPKP&#39;s analytics maturity is at the &quot;Early Adoption&quot; stage (overall score 3.41), with the Analytics dimension scoring the lowest (2.60) and exhibiting the largest gap (1.40). Key challenges identified are underdeveloped institutional metadata and limited application of advanced analytics. A comprehensive DMBOK-based strategy and a four-phased implementation roadmap using Aiken&#39;s Pyramid were proposed to address these issues.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_69-Enhancing_Data_Management_for_Decision_Support_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Home Network Attached Storage (HOMENAS) Using Raspberry Pi with Telegram Bot Notification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160768</link>
        <id>10.14569/IJACSA.2025.0160768</id>
        <doi>10.14569/IJACSA.2025.0160768</doi>
        <lastModDate>2025-07-30T12:59:46.1230000+00:00</lastModDate>
        
        <creator>Nurul Najwa Abdul Rahid @ Abdul Rashid</creator>
        
        <creator>Syafnidar Abdul Halim</creator>
        
        <creator>Siti Maisarah Md Zain</creator>
        
        <creator>Nik Aiman Shafiq Nik Shukri</creator>
        
        <subject>Home network; NAS; network attached storage; raspberry pi; telegram bot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This paper presents the development of the Home Network Attached Storage (HOMENAS) using Raspberry Pi with a Telegram Bot Notification. Network Attached Storage (NAS) is an independent storage system connected directly to the network that can be accessed easily. NAS devices are readily available on the market nowadays. However, current devices are expensive, consume more electricity, and lack a notification mechanism. This paper proposes the development of HOMENAS, which costs less and consumes less power than the NAS devices currently available on the market. The proposed HOMENAS is also integrated with a Telegram Bot, which can notify users of the progress of downloading files. Implementing a Raspberry Pi as the Home Network Attached Storage reduces the energy cost by 95%. A network performance test was conducted to evaluate the streaming rate for single and multiple users over wired and wireless connections. The results show that the Raspberry Pi not only matches the performance of a laptop but, in some aspects, achieves better results in torrent-based file downloading tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_68-Home_Network_Attached_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decoding Sales Order Anomalies: Advanced Predictive Modeling and Discrepancy Resolution Utilizing Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160767</link>
        <id>10.14569/IJACSA.2025.0160767</id>
        <doi>10.14569/IJACSA.2025.0160767</doi>
        <lastModDate>2025-07-30T12:59:46.0930000+00:00</lastModDate>
        
        <creator>Amit Kumar Soni</creator>
        
        <creator>Pooja Jain</creator>
        
        <subject>Block predictions; credit; machine learning; sales data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study examines the accuracy of order prediction and determines the grounds for order block predictions. It quantifies order deviation by calculating forecasted variation using R2 scores and mean absolute deviation. The blocks that are checked mainly include the business partner block, credit block, common block, and delivery block. Demand forecasts compare six months’ worth of sales data against mean absolute deviation and coefficient of variation. This study puts forth a proposal for resolving discrepancies in sales order forecasts and confirms credit management’s system credit limits on sales orders. Parameters for evaluating orders are set relying on historical data. Machine Learning (ML) techniques, namely Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) algorithms, are utilized in this study to improve accuracy, achieving 96% and 93%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_67-Decoding_Sales_Order_Anomalies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smartphone-Integrated Sensor-Based DFU Risk Assessment Using CatBoost and Deep Neuro-Fuzzy Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160766</link>
        <id>10.14569/IJACSA.2025.0160766</id>
        <doi>10.14569/IJACSA.2025.0160766</doi>
        <lastModDate>2025-07-30T12:59:46.0630000+00:00</lastModDate>
        
        <creator>Jayashree J</creator>
        
        <creator>Vijayashree J</creator>
        
        <creator>Perepi Rajarajeswari</creator>
        
        <creator>Saravanan S</creator>
        
        <subject>Bayesian Optimization; CatBoost; Deep Neuro-Fuzzy Networks (DN-FN); Diabetic Foot Ulcer (DFU) prediction; sensor-based risk stratification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Diabetic Foot Ulcer (DFU) is a serious and common complication of diabetes mellitus, which can lead to lower limb amputation if not identified and treated in its early stages. This study introduces an integrated and intelligent system designed for the early detection and severity classification of DFUs by combining sensor-driven data collection with machine learning techniques in a mobile application. The research is based on a dataset comprising both clinical features (D-1 to D-16) and key sensor-based readings gathered from 316 participants. After preprocessing and normalization, the clinical data undergoes feature selection using CatBoost, which filters out the five least impactful features while preserving all sensor data due to its diagnostic relevance. The refined dataset is then processed using a Deep Neuro-Fuzzy Network (DN-FN) to deliver real-time DFU severity predictions, categorized into Low, Mid, and High-risk levels. The solution is deployed through an intuitive smartphone interface, enabling users to input clinical data once and conduct periodic sensor-based tests—including vibration, pressure, and temperature readings. The mobile application interfaces with embedded hardware via Bluetooth and performs offline inference using a compact version of the trained model. The system is designed to offer both patients and healthcare professionals a practical and interpretable tool for continuous monitoring of foot health, with the ultimate goal of reducing the risk and impact of DFU complications.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_66-Smartphone_Integrated_Sensor_Based_DFU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Machine Learning Techniques for AE-Based Corrosion Detection with Emphasis on Transformer Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160765</link>
        <id>10.14569/IJACSA.2025.0160765</id>
        <doi>10.14569/IJACSA.2025.0160765</doi>
        <lastModDate>2025-07-30T12:59:46.0300000+00:00</lastModDate>
        
        <creator>Osama Shahid Ali</creator>
        
        <creator>Lukman B A Rahim</creator>
        
        <subject>Acoustic emissions; transformer based models; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Corrosion-induced damage poses a critical threat to the structural integrity of fluid transport pipelines, necessitating advanced detection strategies for early intervention. This study investigates the use of acoustic emission (AE) monitoring in conjunction with machine learning techniques to identify anomalies indicative of corrosion. A comprehensive analysis of supervised, unsupervised, semi-supervised, and self-supervised learning methods is presented, with emphasis on their suitability for AE-based anomaly detection. Building upon this foundation, we implement and evaluate multiple machine learning models—including K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Convolutional Neural Networks (CNN)—and compare them to a Transformer-based model integrated into a hybrid CNN-Transformer architecture. Experimental results demonstrate that the hybrid model outperforms all baselines, achieving R-squared values of 0.7037 for Acoustic Signal Level (ASL) and 0.6836 for Root Mean Square (RMS), thus confirming its superior ability to capture both local and long-range dependencies in acoustic emission data. A systematic review of recent Transformer-based corrosion detection models further contextualizes the results. This research highlights the promise of Transformer-based models in robust, real-time corrosion monitoring and offers a pathway toward more intelligent, machine learning-driven infrastructure maintenance systems.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_65-A_Comparative_Study_of_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DLCA-CapsNet: Dual-Lane CDH Atrous CapsNet for the Detection of Plant Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160764</link>
        <id>10.14569/IJACSA.2025.0160764</id>
        <doi>10.14569/IJACSA.2025.0160764</doi>
        <lastModDate>2025-07-30T12:59:46.0000000+00:00</lastModDate>
        
        <creator>Steve Okyere-Gyamfi</creator>
        
        <creator>Michael Asante</creator>
        
        <creator>Yaw Marfo Missah</creator>
        
        <creator>Kwame Ofosuhene Peasah</creator>
        
        <creator>Vivian Akoto-Adjepong</creator>
        
        <subject>Color Difference Histogram (CDH); Convolutional Neural Network (CNN); atrous Convolution; Capsule Neural Network; plant disease detection; dynamic routing; AI in agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Humanity&#39;s survival, development, and existence are deeply intertwined with agriculture, the source of most of our food. Plant disease detection helps in securing food, but manual plant disease detection is error-prone and labor-intensive. Convolutional Neural Networks (CNNs) are highly effective for automated plant disease classification, but their difficulty in recognizing differently oriented images means they need large datasets with many variations to work best. Capsule Networks (CapsNets) were developed to overcome the shortcomings of CNNs and can function effectively with smaller datasets. However, CapsNets process every part of an input image, so their performance can suffer when dealing with complex visuals. To tackle this challenge, DLCA-CapsNet was introduced. DLCA-CapsNet integrates a Color Difference Histogram (CDH) layer for key feature extraction, atrous convolution layers to enlarge receptive fields while maintaining spatial details, along with max-pooling, standard convolutional layers, and a dropout layer. The proposed DLCA-CapsNet method was evaluated on datasets including apple, banana, grape, maize, mango, pepper, potato, rice, tomato, as well as CIFAR-10 and Fashion-MNIST. The model demonstrated strong performance with high test accuracies in plant disease detection and on CIFAR-10 and Fashion-MNIST. It improved test accuracies by 6.78%, 14.82%, 6.14%, 5.07%, 21.12%, 40.32%, 4.64%, 0.76%, 10.23%, 13.73%, and 2.03%, while also reducing the number of parameters by 6.16M, 6.16M, 6.16M, 6.16M, 7.14M, 5.68M, 5.92M, 7.62M, 7.62M, and 6.54M respectively when compared with the original CapsNet. In terms of sensitivity, F1-score, precision, specificity, Receiver Operating Characteristic and Precision-Recall values, accuracy, disk size, and number of parameters generated, DLCA-CapsNet achieved better performance compared to the original CapsNet and other advanced CapsNets reported in the literature. The findings suggest that this efficient and computationally less demanding method can significantly enhance plant disease classification and contribute incrementally to efforts aligned with the SDG 2 goal by offering a lightweight, scalable solution that can be adapted for field use in resource-constrained settings.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_64-DLCA_CapsNet_Dual_Lane_CDH_Atrous_CapsNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Focused Survey of ECG Datasets for Artificial Intelligence-Based Atrial Fibrillation Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160763</link>
        <id>10.14569/IJACSA.2025.0160763</id>
        <doi>10.14569/IJACSA.2025.0160763</doi>
        <lastModDate>2025-07-30T12:59:45.9530000+00:00</lastModDate>
        
        <creator>ASSALHI Imane</creator>
        
        <creator>Bybi Abdelmajid</creator>
        
        <creator>Oulad Hamdaoui Hanaa</creator>
        
        <creator>Ebobisse Djene Yves Frederic</creator>
        
        <creator>Drissi Lahssini Hilal</creator>
        
        <subject>Atrial fibrillation; ECG datasets; Artificial Intelligence; AI-ECG; dataset survey; AF detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and increases the risk of stroke, heart failure, and mortality. Electrocardiography (ECG) is the most important technology for AF detection because it is inexpensive, non-invasive, and provides clinically useful information. However, the variability of ECG patterns, particularly during paroxysmal AF, creates challenges in detecting AF. Artificial Intelligence (AI) offers a promising opportunity to improve AF recognition. However, AI performance is contingent on obtaining high-quality and diverse ECG datasets. This paper presents a focused survey of 15 publicly available and clinical ECG datasets used in AI-driven AF detection research between 2023 and 2025. We analyze the datasets based on acquisition methods, ECG type, format, lead configurations, annotation richness, and their application in AI models. Our comparative analysis reveals major trends, challenges such as data imbalance and motion artifacts, and gaps in current datasets, including limited demographic diversity and underrepresentation of wearable ECG data. This study aims to guide future research toward more robust, interpretable, and inclusive AF detection models.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_63-A_Focused_Survey_of_ECG_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Reconstruction from JPG Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160762</link>
        <id>10.14569/IJACSA.2025.0160762</id>
        <doi>10.14569/IJACSA.2025.0160762</doi>
        <lastModDate>2025-07-30T12:59:45.9230000+00:00</lastModDate>
        
        <creator>Youssif Mohamed Mostafa</creator>
        
        <creator>Maryam N. Al-Berry</creator>
        
        <creator>Howida A. Shedeed</creator>
        
        <subject>3D reconstruction; photogrammetry; computer vision; image-based modeling; point cloud generation; JPEG images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Three-dimensional (3D) reconstruction from two-dimensional (2D) images is a fundamental challenge in computer vision and photogrammetry, with applications in medical imaging, robotics, and augmented reality. This research introduces an image-based modeling pipeline designed to overcome the inherent limitations of Joint Photographic Experts Group (JPEG) images, such as lossy compression and reduced structural fidelity. The proposed hybrid framework integrates photogrammetric methods, specifically Structure-from-Motion (SFM) and Dense Stereo Matching, with advanced point cloud generation and surface reconstruction techniques. Initially, Marching Cubes was utilized to generate dense point clouds from sequential JPEG slices, followed by Poisson Surface Reconstruction to produce watertight 3D models. Structural details are further enhanced using Structural Similarity Index (SSIM)-guided texture refinement. Evaluated on the Kaggle Chest CT Segmentation dataset, the method achieves an SSIM score of 0.725, outperforming the JPEG-based reconstruction baseline of 0.675 by 7.4%. In addition to improved accuracy, the study explores the balance between computational cost and reconstruction quality, offering insights relevant to real-time and resource-constrained applications. By bridging photogrammetry with computer vision, this work advances practical 3D reconstruction from compressed medical images, enabling efficient digitization in low-bandwidth environments.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_62-3D_Reconstruction_from_JPG_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Approach Using Semantic-Guided Alignment for Radiology Imaging Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160761</link>
        <id>10.14569/IJACSA.2025.0160761</id>
        <doi>10.14569/IJACSA.2025.0160761</doi>
        <lastModDate>2025-07-30T12:59:45.8900000+00:00</lastModDate>
        
        <creator>Fatima Cheddi</creator>
        
        <creator>Ahmed Habbani</creator>
        
        <creator>Hammadi Nait-Charif</creator>
        
        <subject>Automated report generation; explainable AI; cross-modal fusion; contrastive learning; semantic-guided alignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The increased success of deep learning in the radiology imaging domain has significantly advanced automated diagnosis and report generation, aiming to enhance diagnostic precision and clinical decision-making. However, existing methods often struggle to achieve detailed morphological description, resulting in reports that provide only general information without precise clinical specifics and thus fail to meet the stringent interpretability requirements of medical diagnosis. Also, the critical need for transparency in clinical automated systems has catalyzed the emergence of explainable artificial intelligence (XAI) as an essential research frontier. To address these limitations, we propose an explainable system for report generation that leverages semantic-guided alignment and interpretable multimodal deep learning. Our model combines hierarchical semantic feature extraction from medical reports with fine-grained features that guide the model to focus on lesion-relevant visual features, and uses Concept Activation Vectors (CAVs) to explain how radiological concepts affect report generation. A contrastive multimodal fusion module aligns textual and visual modalities through hierarchical attention and contrastive learning. Finally, an integrated concept activation system provides transparent explanations by quantifying how radiological concepts influence generated reports. Validation of our approach in comparison with existing methods indicates a corresponding boost in report quality in terms of clinical accuracy of the description, localization of the lesion, and contextual consistency, positioning our framework as a robust tool for generating more accurate and reliable medical reports.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_61-Explainable_Approach_Using_Semantic_Guided_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning and Optimization Approach for Accurate Channel Estimation in 5G MIMO-OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160760</link>
        <id>10.14569/IJACSA.2025.0160760</id>
        <doi>10.14569/IJACSA.2025.0160760</doi>
        <lastModDate>2025-07-30T12:59:45.8600000+00:00</lastModDate>
        
        <creator>Mohammed Fakhreldin</creator>
        
        <subject>Multiple-input multiple-output; channel estimation; orthogonal frequency division multiplexing; long short-term memory; pilot length</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Channel estimation plays a pivotal role in enhancing the reliability and efficiency of 5G wireless communication systems, particularly in MIMO-OFDM (Multiple Input Multiple Output - Orthogonal Frequency Division Multiplexing) architectures under multipath and Doppler-affected conditions. Conventional methods such as Least Squares (LS) are widely used due to their low computational complexity and lack of requirement for prior channel statistics. However, these approaches often result in poor estimation accuracy, especially in dynamic environments. To overcome these limitations, this study introduces a hybrid deep learning-based channel estimation framework that integrates Harris Hawks Optimization (HHO), Sparrow Search Algorithm (SSA), and Long Short-Term Memory (LSTM) networks—referred to as HHO-SSA-LSTM. The proposed method is designed to optimize the LSTM parameters using HHO and SSA, enhancing learning efficiency and estimation accuracy. Additionally, the model employs hybrid pre-coding aligned with codebook modeling strategies to preserve angle characteristics without disrupting azimuthal distributions. The system is evaluated in a 5G MIMO-OFDM setting under realistic conditions simulated using Doppler frequency and multipath propagation. Performance is assessed using key metrics including Bit Error Rate (BER), Mean Square Error (MSE), Symbol Error Rate (SER), efficiency, and execution time across different Pilot Lengths (PL = 128, 136, and 160). Simulation results demonstrate that the HHO-SSA-LSTM framework outperforms LS, LMMSE (Linear Minimum Mean Square Error), CNN (Convolutional Neural Network), FDNN (Forest Deep Neural Network), and standalone LSTM models. Notably, at PL = 160, BER is reduced by up to 91% and MSE by 86%, with an efficiency improvement exceeding 12% compared to traditional methods. Although the model exhibits a slightly higher execution time due to its hybrid design, the substantial accuracy gains justify the trade-off. The findings validate the effectiveness of the proposed hybrid model for robust and efficient channel estimation in 5G networks.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_60-A_Hybrid_Deep_Learning_and_Optimization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Intent Recognition for Mixed Script Queries Using Roman Transliteration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160759</link>
        <id>10.14569/IJACSA.2025.0160759</id>
        <doi>10.14569/IJACSA.2025.0160759</doi>
        <lastModDate>2025-07-30T12:59:45.8270000+00:00</lastModDate>
        
        <creator>Anu Chaudhary</creator>
        
        <creator>Rahul Pradhan</creator>
        
        <creator>Shashi Shekhar</creator>
        
        <subject>Intent identification; Intent recognition; mixed script inquiries; machine learning; deep learning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Intent identification has become a difficult problem given the rising usage of multilingual and mixed-script inquiries, especially in areas where Roman transliteration is widely employed. Traditional intent detection systems suffer from the discrepancies and differences in transliterated text, which lowers their accuracy. The objective of this paper is to examine the difficulties connected with intent recognition in mixed-script inquiries, to create a transliteration-based method that enhances intent recognition, and to assess the efficacy of the suggested model against current intent detection methods. The suggested approach applies Roman transliteration pre-processing to mixed-script queries, followed by feature extraction and classification using machine learning and deep learning models. The proposed hybrid deep learning architecture, which combines CNN, BiLSTM, and an attention mechanism, achieves an accuracy of 92.4% and an F1-score of 91.0%, and beats baseline models such as SVM, Random Forest, LSTM, and Transformer. Moreover, transliteration preprocessing enhanced accuracy by 7–9% across various models, proving the success of the approach.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_59-Enhancing_Intent_Recognition_for_Mixed_Script_Queries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive AI-Driven Enterprise Resource Planning for Scalable and Real-Time Strategic Decision Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160758</link>
        <id>10.14569/IJACSA.2025.0160758</id>
        <doi>10.14569/IJACSA.2025.0160758</doi>
        <lastModDate>2025-07-30T12:59:45.7970000+00:00</lastModDate>
        
        <creator>Ghayth AlMahadin</creator>
        
        <subject>Enterprise resource planning; adaptive predictive modeling; real-time decision support; AI-Augmented ERP; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Enterprise Resource Planning (ERP) systems play a critical role in managing organizational assets and operations. However, traditional ERP systems rely on static, rule-based decision-making frameworks that lack the agility and intelligence required for real-time strategic support. To address these limitations, this study proposes an Adaptive AI-Driven Enterprise Resource Planning (A2ERP). This AI-augmented ERP framework integrates adaptive predictive models to enhance decision-making capabilities at scale. The A2ERP architecture features a dynamic data ingestion layer, an adaptive predictive engine utilizing online learning and ensemble methods, and a decision support interface empowered with explainable AI (XAI). It is designed for scalability through a containerized microservices architecture. Experimental results demonstrate that A2ERP achieves a 98% accuracy rate in both training and testing phases, effectively identifying errors such as omission, addition, and overstatement. Comparative evaluations show that A2ERP outperforms traditional ERP methods across key performance metrics, including precision, recall, and F1-score. The framework’s ability to process large-scale, complex data in real-time underscores its effectiveness in delivering timely strategic insights. A2ERP represents a significant advancement toward scalable, adaptive, and intelligent ERP systems, bridging the gap between operational execution and strategic decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_58-Adaptive_AI_Driven_Enterprise_Resource_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Automatic Temperature and Humidity Control for Tobacco Storage Using TwinCAT and Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160757</link>
        <id>10.14569/IJACSA.2025.0160757</id>
        <doi>10.14569/IJACSA.2025.0160757</doi>
        <lastModDate>2025-07-30T12:59:45.7670000+00:00</lastModDate>
        
        <creator>Zhen Liu</creator>
        
        <creator>Jili Wang</creator>
        
        <creator>Shihao Song</creator>
        
        <creator>Qiang Hua</creator>
        
        <subject>TwinCAT; deep reinforcement learning; tobacco storage; temperature and humidity control; system optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>With the rapid development of the tobacco industry, precise temperature and humidity control in storage environments has become essential for maintaining tobacco leaf quality. Traditional manual control methods suffer from low efficiency and limited accuracy, failing to meet modern storage demands. This study proposes an optimized automatic control system integrating TwinCAT and deep reinforcement learning (DRL) to enhance climate regulation in tobacco warehouses. Leveraging TwinCAT’s real-time control capabilities and DRL’s adaptive decision-making, the system achieves precise environmental regulation. Experimental results demonstrate that temperature and humidity control errors are reduced to &#177;0.5 &#176;C and &#177;3%, respectively. Compared to conventional methods, the proposed system lowers energy consumption by 20% and reduces the mildew rate of stored tobacco by 15%, significantly improving storage quality. This work offers a novel technical framework for intelligent environmental control in tobacco storage and provides valuable insights for broader applications in similar domains.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_57-Optimized_Automatic_Temperature_and_Humidity_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Delphi Method: A Step-by-Step Guide to Obtaining Expert Consensus on Mobile Tourism Acceptance Culture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160756</link>
        <id>10.14569/IJACSA.2025.0160756</id>
        <doi>10.14569/IJACSA.2025.0160756</doi>
        <lastModDate>2025-07-30T12:59:45.7330000+00:00</lastModDate>
        
        <creator>Syaifullah</creator>
        
        <creator>Shamsul Arrieya Ariffin</creator>
        
        <creator>Norhisham Mohamad Nordin</creator>
        
        <subject>Cultural acceptance; Fuzzy Delphi Method (FDM); Hofstede&#39;s cultural dimensions; mobile tourism; Technology Acceptance Model (TAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Mobile technology has developed rapidly, greatly changing the tourism sector and leading to the emergence of Mobile Tourism (MT). To ensure that MT grows and is widely adopted, it is important to understand how people from different cultures accept it. This study provides a complete description of how to use the Fuzzy Delphi Method (FDM) to obtain expert consensus on the most important factors influencing the cultural acceptance of mobile tourism. The study uses the Technology Acceptance Model (TAM) and Hofstede&#39;s Cultural Dimensions to identify and confirm the relevant variables and indicators and their interrelationships. The approach follows a rigorous nine-stage process for reaching expert consensus. The results revealed that experts largely agreed on the variables related to perceived usefulness, perceived trust, perceived ease of use, and facilitating conditions in the TAM framework, as well as some variables of collectivism, uncertainty avoidance, and long-term orientation in Hofstede&#39;s cultural dimensions. This study also verified and validated the overall relationships between variables in building the Mobile Tourism Cultural Acceptance (MTCA) framework, the general and specific interactions between variables, and the function of cultural dimensions as mediators. This study demonstrates the importance of expert opinion in developing a comprehensive plan for deploying technology in a culturally acceptable mobile tourism environment. The findings have major implications for mobile tourism developers, policy makers, and marketers seeking to increase MT adoption.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_56-Fuzzy_Delphi_Method_A_Step_by_Step_Guide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Logistics Vehicle Scheduling Based on MPHIGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160755</link>
        <id>10.14569/IJACSA.2025.0160755</id>
        <doi>10.14569/IJACSA.2025.0160755</doi>
        <lastModDate>2025-07-30T12:59:45.7030000+00:00</lastModDate>
        
        <creator>Xinxin Gao</creator>
        
        <creator>Qing Wang</creator>
        
        <subject>Multi-population hybrid improved genetic algorithm; domain generation algorithm; logistics vehicles; co-evolution; scheduling management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The current intelligent logistics vehicle scheduling faces challenges, including the difficulty of obtaining real-time location data and the need for manual intervention in emergencies. To address these issues, a multi-population hybrid improved genetic algorithm (MPHIGA) is proposed, along with an intelligent scheduling model constructed through the reconstruction of domain generation strategies. Experimental results show that the model stabilizes the total cost at 7864 yuan within 49 iterations, whereas the dual-population hybrid genetic algorithm requires 51 iterations, making convergence more time-consuming. Moreover, when the scheduling frequency is two, the research model successfully allocates three company vehicles, whereas the comparison algorithm can only allocate two. Overall, the research model offers significant advantages in reducing operating costs and enhancing dynamic response capabilities, providing effective technical support for the digital transformation of logistics companies.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_55-Intelligent_Logistics_Vehicle_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Federated Learning Attacks: Threat Models and Defence Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160754</link>
        <id>10.14569/IJACSA.2025.0160754</id>
        <doi>10.14569/IJACSA.2025.0160754</doi>
        <lastModDate>2025-07-30T12:59:45.6570000+00:00</lastModDate>
        
        <creator>Fizlin Zakaria</creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <subject>Federated learning; threat models; defence strategies; privacy-preserving AI; adversarial attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Federated Learning (FL) has emerged as a critical paradigm in privacy-preserving machine learning, enabling collaborative model training across decentralised devices without sharing raw data. While FL enhances privacy by maintaining data locality, it remains susceptible to sophisticated adversarial attacks. This review systematically analyses the FL threat landscape and introduces a novel taxonomy that classifies attack models based on their objectives, capabilities, and exploited vulnerabilities. Major categories include data poisoning, inference attacks, and Byzantine behaviours, each examined in terms of mechanisms, assumptions, and system impact. In addition, the paper evaluates prominent defence strategies—such as differential privacy, secure aggregation, and anomaly detection—by assessing their strengths, limitations, and real-world applicability. Key gaps include the lack of standardised evaluation metrics and limited exploration of adaptive defence mechanisms. Emerging trends such as homomorphic encryption, secure multi-party computation, and blockchain-based verifiability are also discussed. This review is a comprehensive resource for researchers and practitioners aiming to design resilient, privacy-aware FL systems that withstand evolving threats.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_54-A_Review_of_Federated_Learning_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent Deep Reinforcement Learning Algorithms for Distributed Charging Station Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160753</link>
        <id>10.14569/IJACSA.2025.0160753</id>
        <doi>10.14569/IJACSA.2025.0160753</doi>
        <lastModDate>2025-07-30T12:59:45.6230000+00:00</lastModDate>
        
        <creator>Li Junda</creator>
        
        <creator>Wang Tianan</creator>
        
        <creator>Zhang Dingyi</creator>
        
        <creator>Wu Quancai</creator>
        
        <creator>Liu Jian</creator>
        
        <subject>Charging station scheduling; cross-regional coordination; multi-agent systems; deep reinforcement learning; Markov decision process; resource optimization; uncertainty response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>With the continued growth of the electric vehicle (EV) fleet, the issue of cross-regional coordinated scheduling for charging infrastructure has become increasingly prominent, facing challenges such as uneven resource allocation and delayed responses. Considering the complex coupling between charging stations and the power system in a smart grid environment, this paper proposes a distributed scheduling strategy based on multi-agent deep reinforcement learning (MADRL) to achieve efficient, coordinated management of charging infrastructure and power resources. The proposed approach constructs a hierarchical decision-making architecture to jointly optimize intra-regional resource allocation and cross-regional power support, modeling the scheduling process as a Markov Decision Process (MDP) and treating regional charging stations, power nodes, and material units as independent agents. Through the multi-agent deep reinforcement learning mechanism, each agent autonomously learns optimal scheduling policies in the presence of uncertain demand and supply fluctuations, thus enabling rapid response and enhancing system robustness. Simulation results demonstrate that the proposed method effectively reduces scheduling costs and improves resource utilization and service quality. This study provides both theoretical support and practical pathways for building intelligent, efficient, and sustainable charging infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_53-Multi_Agent_Deep_Reinforcement_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Firefighter PPE Compliance Through Deep Learning and Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160752</link>
        <id>10.14569/IJACSA.2025.0160752</id>
        <doi>10.14569/IJACSA.2025.0160752</doi>
        <lastModDate>2025-07-30T12:59:45.5930000+00:00</lastModDate>
        
        <creator>Asmaa Alayed</creator>
        
        <creator>Razan Talal Alqurashi</creator>
        
        <creator>Samah Hamoud Alhelali</creator>
        
        <creator>Asrar Yousef Khadawurdi</creator>
        
        <creator>Bashayer Fayez Khan</creator>
        
        <subject>Firefighter safety; Personal Protective Equipment (PPE); object detection; YOLOv10; YOLOv11; deep learning; computer vision; real-time detection; PPE compliance; AI in public safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Ensuring firefighter safety in high-risk environments requires strict adherence to Personal Protective Equipment (PPE) protocols. This study presents an automated real-time detection system for PPE using deep learning and computer vision techniques, aiming to improve PPE compliance and overall safety monitoring. The research employs advanced object detection models, specifically YOLOv10 and YOLOv11 (You Only Look Once), to identify critical PPE components such as helmets, gloves, boots, and self-contained breathing apparatus (SCBA) units. A custom-annotated dataset of firefighter images was developed to train and evaluate both models using standard performance metrics such as precision, recall, mAP, F1-score, and Intersection over Union (IoU). The results show that YOLOv11 outperformed YOLOv10, achieving a higher mAP@0.5 score of 0.646 compared to 0.586, with improved detection of small and partially occluded objects and an 11% reduction in training time, while maintaining real-time efficiency. The system generates instant alerts when PPE is missing, minimizing reliance on manual monitoring and improving situational awareness in real time. This research reinforces the role of AI-powered automation in enhancing critical public safety operations. By integrating deep learning and computer vision into PPE monitoring systems, the study contributes to developing intelligent, responsive solutions aligned with modern safety standards.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_52-Enhancing_Firefighter_PPE_Compliance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Data Sharing Using Blockchain Technology: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160751</link>
        <id>10.14569/IJACSA.2025.0160751</id>
        <doi>10.14569/IJACSA.2025.0160751</doi>
        <lastModDate>2025-07-30T12:59:45.5770000+00:00</lastModDate>
        
        <creator>Azman Azmi</creator>
        
        <creator>Farashazillah Yahya</creator>
        
        <creator>Nur Afrina Azman</creator>
        
        <creator>Hazlina Jalil</creator>
        
        <subject>Blockchain; Distributed Ledger Technology (DLT); data sharing security; e-government; Systematic Literature Review (SLR); PRISMA 2020</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Data sharing security is currently one of the crucial parts of e-government systems. Although blockchain, a type of Distributed Ledger Technology (DLT), has been increasingly applied to enhance secure data exchange, there is a significant lack of studies focusing on the specific security factors that underpin its implementation in e-government contexts. Defining and understanding these factors is crucial for the successful integration of blockchain into public data infrastructures. This study addresses this research gap through a Systematic Literature Review (SLR) guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 framework. A total of 511 articles were retrieved from five major databases, and 103 were selected and systematically reviewed. While the majority of studies emphasised privacy, integrity, and transparency, other critical security factors such as scalability, availability, governance, and decentralisation remain comparatively underexplored. A theory of blockchain-based data sharing security factors was developed as a reference. The article concludes by highlighting nine security factors for data sharing using blockchain in e-government for future investigation.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_51-Secure_Data_Sharing_Using_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention Aware Dual-Path Autoencoder with Asymmetric Loss for Recognition in Complex Scenes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160750</link>
        <id>10.14569/IJACSA.2025.0160750</id>
        <doi>10.14569/IJACSA.2025.0160750</doi>
        <lastModDate>2025-07-30T12:59:45.5300000+00:00</lastModDate>
        
        <creator>Hashim Rosli</creator>
        
        <creator>Rozniza Ali</creator>
        
        <creator>Muhamad Suzuri Hitam</creator>
        
        <creator>Ashanira Mat Deris</creator>
        
        <creator>Noor Hafhizah Abd Rahim</creator>
        
        <subject>Autoencoder; attention aware; feature fusion; image enhancement; multi-label classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Object recognition in complex scenes is challenging due to cluttered backgrounds, overlapping objects, and degraded image quality. Another difficulty arises from sparse label presence, as most images contain only one to three active labels despite the dataset being balanced across 20 object classes. This intra-sample sparsity complicates binary classification by exposing models to a high proportion of inactive classes. This work aims to improve recognition accuracy, robustness under sparse multi-label conditions, and interpretability in visually complex environments. The objective is to help models focus on relevant visual features, suppress background noise, and better distinguish objects that are rare or overlapping. To address these challenges, we introduce an attention aware dual-path autoencoder that enhances image features while learning to classify multiple objects. The model uses asymmetric loss to reduce the influence of easy negatives and emphasize rare or difficult labels. It also integrates an attention mechanism in the reconstruction path to improve object clarity. The proposed model achieves 96.72 percent accuracy, 0.0328 Hamming Loss, 0.9809 macro ROC-AUC, and 0.8925 macro mAP, along with 0.9372 SSIM and 7.1012 dB PSNR in reconstruction. These results confirm its effectiveness for robust classification and enhanced visual understanding in complex scenes.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_50-Attention_Aware_Dual_Path_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AFL-BERT : Enhancing Minority Class Detection in Multi-Label Text Classification with Adaptive Focal Loss and BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160749</link>
        <id>10.14569/IJACSA.2025.0160749</id>
        <doi>10.14569/IJACSA.2025.0160749</doi>
        <lastModDate>2025-07-30T12:59:45.4830000+00:00</lastModDate>
        
        <creator>Zakia Labd</creator>
        
        <creator>Said Bahassine</creator>
        
        <creator>Khalid Housni</creator>
        
        <subject>Adaptive focal loss; BERT; imbalanced text classification; multilabel text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Fine-tuning transformer models like Bidirectional Encoder Representations from Transformers (BERT) has enhanced text classification performance. However, class imbalance remains a challenge, causing biased predictions. This study introduces an improved training strategy using a novel Adaptive Focal Loss with a dynamically adjusted γ based on class frequencies. Unlike static γ values, this method emphasizes minority classes automatically. Experiments on the CMU Movie Summary dataset show that Adaptive Focal Loss surpasses standard binary cross-entropy and Focal Loss, achieving an F1-score of 0.5, ROC accuracy of 0.79, and Micro Recall of 0.53. These results demonstrate the effectiveness of adaptive focusing methods in improving the detection of minority classes in imbalanced scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_49-AFL_BERT_Enhancing_Minority_Class_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SpatialSolar-Net: A Multi-Site Collaborative Framework for Solar Power Forecasting with Adaptive Spatial Correlation Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160748</link>
        <id>10.14569/IJACSA.2025.0160748</id>
        <doi>10.14569/IJACSA.2025.0160748</doi>
        <lastModDate>2025-07-30T12:59:45.4530000+00:00</lastModDate>
        
        <creator>Yiming Liu</creator>
        
        <creator>Mugambigai Darajah</creator>
        
        <creator>Christopher Gan</creator>
        
        <subject>Solar power forecasting; spatial correlation; graph neural networks; adaptive fusion; multi-site collaboration; renewable energy integration; extreme weather robustness; grid stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The increasing penetration of solar power generation poses significant challenges for grid integration due to its inherent variability and intermittency. Existing forecasting approaches treat individual solar installations independently, failing to leverage spatial correlations between geographically proximate sites and lacking adaptive mechanisms for varying environmental conditions. This paper presents SpatialSolar-Net, a novel multi-site collaborative solar power generation forecasting framework that addresses these limitations through adaptive spatial correlation evaluation and dynamic knowledge integration mechanisms. The proposed architecture combines a dual-branch design integrating convolutional neural network-based spatial feature extraction with attention mechanism-based temporal modeling, enhanced by graph neural networks for spatial dependency modeling and an adaptive fusion mechanism that intelligently balances local and spatial information based on real-time correlation strength. This framework significantly enhances renewable energy integration by enabling accurate solar power predictions that support grid stability and optimal resource allocation. Extensive experimental validation demonstrates that SpatialSolar-Net achieves superior performance with Mean Absolute Error of 9.98 kW and Root Mean Square Error of 14.79 kW, representing 12.6% and 10.8% improvements over state-of-the-art methods. Most notably, the framework exhibits exceptional robustness during extreme weather events, achieving a remarkable 64% error reduction during dust storm conditions compared to baseline approaches. The adaptive nature enables efficient deployment across diverse geographical regions while maintaining computational efficiency suitable for practical renewable energy integration.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_48-SpatialSolar_Net_A_Multi_Site_Collaborative_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Methods for Detecting Fake News: A Systematic Literature Review of Machine Learning Applications in Key Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160747</link>
        <id>10.14569/IJACSA.2025.0160747</id>
        <doi>10.14569/IJACSA.2025.0160747</doi>
        <lastModDate>2025-07-30T12:59:45.4370000+00:00</lastModDate>
        
        <creator>Nur Ida Aniza Rusli</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Fatin Nabila Abd Razak</creator>
        
        <creator>Nor Haniza Ramli</creator>
        
        <subject>Machine learning; fake news; systematic review; health; politics; economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Rapid digitisation in communication and the growth of online platforms have transformed information dissemination, facilitating rapid access while simultaneously amplifying the spread of fake news. This widespread issue undermines public trust, destabilises political systems, and threatens economic stability. Machine learning techniques have been widely applied to fake news detection, but comparative analyses across specific domains such as health, politics, and economics remain limited. Existing reviews tend to focus on supervised learning methods, frequently excluding unsupervised and hybrid approaches, along with the unique challenges and dataset requirements of each domain. This study conducted a systematic literature review of machine learning applications for detecting fake news across the three domains. The methodologies and metrics used were evaluated, while key challenges and opportunities were explored. The results revealed a strong reliance on supervised learning techniques, particularly in health-related contexts, where misinformation presented significant risks to public health outcomes. Deep learning methods were promising for processing complex data. Nonetheless, hybrid and unsupervised approaches were underexplored, which presented opportunities to address data scarcity and adaptability. Most datasets originated from social media platforms and news outlets. Accuracy was the most common evaluation metric, while more advanced measures were rarely applied, indicating room to strengthen evaluation practice. Persistent challenges, including poor data quality, bias, and ethical concerns, highlight the necessity for bias-mitigating algorithms and improved model interpretability. Notably, economic misinformation has received less attention despite its potential to cause large-scale financial disruptions. This study highlighted that more effective, ethical, and context-specific machine learning solutions are needed to address fake news and enhance digital information credibility.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_47-Machine_Learning_Methods_for_Detecting_Fake_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinant Factors of Success for the Village Information System in Providing Sustainable Services and Governance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160746</link>
        <id>10.14569/IJACSA.2025.0160746</id>
        <doi>10.14569/IJACSA.2025.0160746</doi>
        <lastModDate>2025-07-30T12:59:45.4070000+00:00</lastModDate>
        
        <creator>Sutia Dwi Santika</creator>
        
        <creator>Tuga Mauritsius</creator>
        
        <subject>E-Government; System Quality; Service Quality; Delone and Mclean model; TAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The Indonesian government is working to implement a digital transformation aimed at providing effective services and governance at both central and local levels, similar to other developing countries. A key strategy for achieving this digital government transformation at the local level involves the implementation of village information systems, which are managed by village officials. The Ministry of Villages and Development of Disadvantaged Regions is developing a village information system to support these efforts, village laws, and the Sustainable Development Goals. However, the village information system encounters ongoing challenges, such as slow access speeds and ineffective response services. This study adopts a quantitative approach using cross-sectional questionnaires that collected 426 valid responses. This study identified seven main factors that influence the success or failure of implementing a village information system: system quality, information quality, service quality, perceived usefulness, user satisfaction, trust, and net benefits. This study contributes to the literature by recognizing these factors within the DeLone and McLean Information System Success Model and TAM frameworks, which are still rarely addressed in e-government adoption studies, especially regarding village governments in developing countries. Data analysis revealed significant relationships: system quality, information quality, and service quality significantly impact perceived usefulness; information quality, service quality, trust, and perceived usefulness significantly impact user satisfaction; and perceived usefulness and satisfaction significantly affect net benefits. This research has practical implications for the successful adoption of the village information system as part of the ministry&#39;s efforts to improve services and overall governance.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_46-Determinant_Factors_of_Success_for_the_Village_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling an Adaptive and Collaborative E-Learning System with Artificial Intelligence Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160745</link>
        <id>10.14569/IJACSA.2025.0160745</id>
        <doi>10.14569/IJACSA.2025.0160745</doi>
        <lastModDate>2025-07-30T12:59:45.3730000+00:00</lastModDate>
        
        <creator>Kawtar Zargane</creator>
        
        <creator>Hassane Kemouss</creator>
        
        <creator>Mohamed Khaldi</creator>
        
        <subject>Conceptual modeling of an online learning system; Adaptive system; Online collaborative learning; Artificial intelligence (AI); Educational software architecture; UML modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In an educational environment undergoing digital transformation, the need to create smarter, learner-centred learning environments is becoming increasingly urgent. This article presents a conceptual model of an e-learning system that integrates the adaptive and collaborative dimensions, relying on artificial intelligence (AI) tools, which occupy a central place both as a dynamic adaptation engine and as a facilitator of collaboration, automating certain pedagogical activities. This methodical and structured approach makes it possible to develop a hybrid environment capable of adjusting to individual needs while promoting the co-construction of knowledge between peers. Based on instructional design principles and the 2TUP (Two Tracks Unified Process) process, this approach aims to develop a systematic architecture, illustrated by UML (Unified Modeling Language) class, use case, activity, and sequence diagrams, integrating AI through adaptive learning, conversational agents, and intelligent tutoring systems that make it possible to personalize learning, provide targeted feedback, optimize learner performance, and guide learners more accurately. This combination of standardized modeling and AI improves the synergy between stakeholders and increases the efficiency of online learning environments. Finally, this model paves the way for a new era of more flexible, inclusive, and responsive techno-pedagogical systems capable of facing the contemporary challenges of online training.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_45-Modeling_an_Adaptive_and_Collaborative_E_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Patient Health Through Smart IoT Technologies in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160744</link>
        <id>10.14569/IJACSA.2025.0160744</id>
        <doi>10.14569/IJACSA.2025.0160744</doi>
        <lastModDate>2025-07-30T12:59:45.3430000+00:00</lastModDate>
        
        <creator>Monica Bhutani</creator>
        
        <creator>Osman Elwasila</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>IoT in Healthcare; smart technologies; remote patient monitoring; data security; AI in healthcare; telemedicine; healthcare analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Healthcare has been revolutionized by rapid change in the field of the Internet of Things (IoT), which enables smart connected devices that provide better patient monitoring, diagnosis, and treatment. IoT technologies facilitate the collection of real-time health data, remote patient monitoring, and prediction, thus improving overall healthcare outcomes. Chronic disease management, emergency response, and disease detection are being greatly transformed by wearable sensors, smart hospital infrastructures, and AI-powered analytics. Meanwhile, IoT-driven healthcare systems are more efficient, reduce the number of hospital readmissions, and enable telemedicine services. However, IoT in healthcare faces significant challenges, such as security risks, patient privacy issues, and numerous interoperability problems. This paper provides a comprehensive review of smart IoT technologies in healthcare, their applications, and the benefits they bring to patient care. It also explores the role of data analytics in IoT-based decision making, the ethical implications of data handling, and security threats in IoT healthcare systems. Moreover, it discusses future directions, including the integration of AI, 5G-enabled telemedicine, and blockchain for secure patient data management. This makes IoT an ideal candidate for healthcare transformation, one that addresses existing challenges and capitalizes on emerging innovations to deliver more efficient, more accessible, and more patient-centric healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_44-Enhancing_Patient_Health_Through_Smart_IoT_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An In-Depth Analysis of Security Flaws in Advanced Authentication Protocols for the Internet of Medical Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160743</link>
        <id>10.14569/IJACSA.2025.0160743</id>
        <doi>10.14569/IJACSA.2025.0160743</doi>
        <lastModDate>2025-07-30T12:59:45.3270000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Four-factor authentication; IoT Healthcare security; physical unclonable functions; quantum-resistant cryptography; biometric data protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study evaluates a four-factor authentication protocol designed for IoT healthcare systems, identifying several key vulnerabilities that could compromise its security. The analysis highlights risks associated with node cloning, insider threats, biometric data security, session management, and scalability. To address these vulnerabilities, the study proposes a series of enhancements, including the implementation of Physical Unclonable Functions (PUFs) to prevent node cloning and the use of advanced encryption techniques, such as homomorphic encryption, to protect biometric data. Additionally, the adoption of role-based access control (RBAC) and attribute-based access control (ABAC) systems can mitigate insider threats by limiting user permissions. Optimizing session management through strict expiration and key rotation policies can maintain session integrity, while lightweight cryptographic algorithms and adaptive power management techniques enhance scalability and resource utilization. Future research directions include exploring quantum-resistant cryptographic algorithms and developing adaptive security policies leveraging artificial intelligence. These efforts are essential for maintaining the protocol&#39;s resilience against evolving threats and ensuring the secure operation of IoT-based healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_43-An_In_Depth_Analysis_of_Security_Flaws.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Validation and Enhancement of ADiBA: A Framework for Big Data Analytics Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160742</link>
        <id>10.14569/IJACSA.2025.0160742</id>
        <doi>10.14569/IJACSA.2025.0160742</doi>
        <lastModDate>2025-07-30T12:59:45.2970000+00:00</lastModDate>
        
        <creator>Norhayati Daut</creator>
        
        <creator>Naomie Salim</creator>
        
        <creator>Sharin Hazlin Huspi</creator>
        
        <creator>Anazida Zainal</creator>
        
        <creator>Chan Weng Howe</creator>
        
        <creator>Muhammad Aliif Ahmad</creator>
        
        <creator>Siti Zaiton Mohd Hashim</creator>
        
        <creator>Masitah Ghazali</creator>
        
        <creator>Mohd Adham Isa</creator>
        
        <creator>Rashidah Kadir</creator>
        
        <creator>Nuremira Ibrahim</creator>
        
        <creator>Norazlina Khamis</creator>
        
        <subject>Adoption process; big data; big data analytics; framework; framework validation; expert survey; content validity index; thematic analysis; organizational implementation; digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The implementation of Big Data Analytics (BDA) in organisations requires a structured approach to ensure alignment with strategic goals and infrastructure readiness. This study presents an enhanced version of the previously published ADiBA (Accelerating Digital Transformation Through Big Data Adoption) framework, aimed at guiding organisations through the critical components necessary for successful BDA implementation. The initial framework was developed based on a systematic literature review. To validate and refine the framework, a mixed-methods survey was conducted among domain experts using a five-point Likert scale and open-ended questions to assess the relevance of each framework component. Quantitative responses were analysed using the Content Validity Index (CVI), with a threshold of 0.78 adopted as the minimum acceptable I-CVI score for each item. Complementing the quantitative analysis, qualitative feedback from the open-ended survey responses, Focus Group Discussions (FGDs), and in-depth interviews was examined through thematic analysis, revealing key themes related to the framework’s clarity and operational aspects. Insights from both analyses informed the refinement of several components. The resulting framework is a validated, empirically informed guide designed to support effective BDA implementation in organisational contexts.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_42-Empirical_Validation_and_Enhancement_of_ADiBA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Banking Data Classification Through Hybrid L2 Regularisation and Early Stopping in Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160741</link>
        <id>10.14569/IJACSA.2025.0160741</id>
        <doi>10.14569/IJACSA.2025.0160741</doi>
        <lastModDate>2025-07-30T12:59:45.2670000+00:00</lastModDate>
        
        <creator>Khairul Nizam Abd Halim</creator>
        
        <creator>Abdul Syukor Mohamad Jaya</creator>
        
        <creator>Fauziah Kasmin</creator>
        
        <creator>Azlan Abdul Aziz</creator>
        
        <subject>Artificial neural networks; L2 regularisation; early stopping; banking; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The demand for robust data-driven classification (DDC) techniques remains critical in banking applications, where accurate and efficient decision-making is paramount. Artificial Neural Networks (ANNs), particularly Multi-Layer Perceptrons (MLPs), are widely used due to their strong learning capabilities. However, their performance often depends on effective hyperparameter tuning and regularisation strategies to avoid overfitting. This study aims to enhance the efficiency of the MLP training process by introducing a hybrid approach that integrates L2 regularisation with Early Stopping (ES) into the hyperparameter tuning procedure. The key contribution lies in embedding both techniques within a grid search framework, thereby streamlining the search for optimal hyperparameters. The proposed method was evaluated using three real-world banking datasets: two related to loan subscription (16 and 20 features) and one concerning credit card default payment (23 features). Experimental results demonstrate that the hybrid approach reduces hyperparameter tuning time by over 90% while achieving high classification performance. Notably, Receiver Operating Characteristic - Area Under the Curve (ROC-AUC) scores of 93.89% and 91.21% were achieved on the loan datasets, and 73.28% on the credit card dataset, surpassing previous benchmarks. These findings highlight the potential of the L2ES hybrid method to improve both the accuracy and computational efficiency of DDC in financial applications.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_41-Enhancing_Banking_Data_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Graph Learning with Graph Convolutional Networks and Graph Attention Networks: Addressing Class Imbalance Through Augmentation and Optimized Hyperparameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160740</link>
        <id>10.14569/IJACSA.2025.0160740</id>
        <doi>10.14569/IJACSA.2025.0160740</doi>
        <lastModDate>2025-07-30T12:59:45.2330000+00:00</lastModDate>
        
        <creator>Chaima Ahle Touate</creator>
        
        <creator>Rachid El Ayachi</creator>
        
        <creator>Mohamed Biniz</creator>
        
        <subject>Graph Convolutional Networks (GCN); Graph Attention Networks (GAT); hyperparameter tuning; data augmentation; PEGASUS; synonym replacement; optuna bayesian optimization; node classification; class imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In this study, we propose a graph-based node classification approach to address challenges such as data scarcity, class imbalance, limited access to original textual content in benchmark datasets, semantic preservation, and model generalization. Beyond simple data replication, we enhanced the Cora dataset by extracting content from its original PostScript files using a three-dimensional framework that combines, in one pipeline, NLP-based techniques such as PEGASUS paraphrasing, synthetic model generation, and controlled subject-aware synonym replacement. We substantially expanded the dataset to 17,780 nodes, approximately 6.57x scaling, while maintaining semantic fidelity (WMD scores: 0.27-0.34). Bayesian hyperparameter tuning was conducted using Optuna, along with k-fold cross-validation, as a rigorous, optimized model validation protocol. Our Graph Convolutional Network (GCN) model achieves 95.42% accuracy while the Graph Attention Network (GAT) reaches 93.46%, even when scaled to a significantly larger dataset than the base. Our empirical analysis demonstrates that semantic-preserving augmentation delivered better performance while maintaining model stability across scaled datasets, offering a cost-effective alternative to architectural complexity and making graph learning accessible to resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_40-Scalable_Graph_Learning_with_Graph_Convolutional_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Potential Variables in Pharmaceutical Drug Prediction Research with Machine Learning Approach: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160739</link>
        <id>10.14569/IJACSA.2025.0160739</id>
        <doi>10.14569/IJACSA.2025.0160739</doi>
        <lastModDate>2025-07-30T12:59:45.2030000+00:00</lastModDate>
        
        <creator>Gunadi Emmanuel</creator>
        
        <creator>Yulyani Arifin</creator>
        
        <creator>Ilvico Sonata</creator>
        
        <creator>Muhammad Zarlis</creator>
        
        <subject>Drug demand; machine learning; pharmaceutical installations; prediction; potential variables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>As a downstream component of the drug supply chain, pharmaceutical installations often face uncertainty in drug demand. Predicting pharmaceutical drug demand with a machine learning approach enables the development of new variables that can enhance prediction performance. Amid limited data and a wide choice of prediction algorithms, the accuracy of variable selection is significant for drug prediction performance. This study remaps the scope of variables from previous studies related to drug demand prediction and machine learning performance in order to develop further significant variables. It investigates the research literature on significant variables in drug demand prediction with machine learning models published in 2020-2024. The systematic literature review follows the Kitchenham method. Mapping problems, discussion areas, and data availability results in ten categories of issue areas, each with its respective data needs and algorithm choices. A qualitative exploration of these issue areas identifies potential variables for pharmaceutical drug prediction, including drug consumption, epidemiology, drug management, supply chain-patient domicile, and pharmacotherapy. Mapping potential variables facilitates the availability and integration of data relevant to local or regional characteristics, enabling further research on the characteristics of data and algorithm choices.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_39-Potential_Variables_in_Pharmaceutical_Drug_Prediction_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling Cloud Computing Adoption in the Malaysian Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160738</link>
        <id>10.14569/IJACSA.2025.0160738</id>
        <doi>10.14569/IJACSA.2025.0160738</doi>
        <lastModDate>2025-07-30T12:59:45.1570000+00:00</lastModDate>
        
        <creator>Normilah Mohd Noh</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <creator>Hasimi Sallehudin</creator>
        
        <creator>Ibrahim Hassan Mallam</creator>
        
        <creator>Surya Sumarni Hussein</creator>
        
        <creator>Nur Azaliah Abu Bakar</creator>
        
        <subject>Adoption; cloud computing; Malaysian healthcare; partial least squares-structural equation modelling; resource-based view (RBV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Cloud computing is increasingly reshaping the global IT landscape, offering scalable and efficient solutions across industries, including the healthcare sector. This study investigates the determinants of cloud computing adoption in the Malaysian healthcare industry by integrating the Resource-Based View (RBV) and Technology-Organisation-Environment (TOE) frameworks. Emphasising internal organisational capabilities, the study excludes traditional Information Systems (IS) models to maintain theoretical coherence with RBV’s strategic orientation toward firm-level resource advantages. Data were collected from 265 respondents across 127 healthcare organisations and analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The study also proposes an extended taxonomy of cloud services contextualised for healthcare, strengthening the theoretical underpinnings and practical applicability of cloud adoption strategies in this domain. The findings reveal that among IT capabilities, managerial IT capability exerts the most substantial influence on adoption, followed by relational and technical capabilities. Within the TOE dimensions, regulatory support emerged as the most critical enabler, while business resources, change management, organisational culture, and vendor support also demonstrated significant positive effects. The results offer empirical validation for a comprehensive conceptual model grounded in RBV and TOE, providing both theoretical insights and practical guidance for healthcare organisations aiming to strengthen IT capabilities, optimise organisational readiness, and align with external institutional drivers for successful cloud migration.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_38-Modelling_Cloud_Computing_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Evaluation of a Biometric IoT-Based Smart Lock System with Real-Time Monitoring and Alert Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160737</link>
        <id>10.14569/IJACSA.2025.0160737</id>
        <doi>10.14569/IJACSA.2025.0160737</doi>
        <lastModDate>2025-07-30T12:59:45.1230000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal Yusof</creator>
        
        <creator>Serhij Mamchenko</creator>
        
        <creator>Rostam Affendi Hamzah</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Smart door lock; Internet of Things (IoT); biometric authentication; fingerprint sensor; ESP32 microcontroller; Blynk IoT platform; access control; cybersecurity; real-time monitoring; home automation; OLED display; False Rejection Rate (FRR); False Acceptance Rate (FAR); remote monitoring; smart home automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>An IoT-based smart door lock system is presented that uses fingerprint biometric authentication, an ESP32 microcontroller, and the Blynk IoT platform to provide a secure, user-friendly, and remotely controllable access control solution. The proposed architecture replaces traditional locks with a real-time biometric system that gives instant feedback through an onboard OLED display and buzzer, and enables remote monitoring and control through a mobile app. A new fail-safe mechanism is implemented: after three failed fingerprint attempts, the system locks out for 15 seconds and sends an instant alert to the authorized user’s smartphone. Performance tests of the prototype show a fingerprint recognition time of around 1.0 second and a door unlock time of 5 seconds, ensuring convenient use. The system has a very low False Acceptance Rate (FAR) of 1.32%, indicating strong resistance to unauthorized access. The False Rejection Rate (FRR) is higher (around 26.32%) due to user errors such as improper finger placement, a usability issue to be addressed. The device can store up to three fingerprint profiles and gives visual and audible alerts for all access events. This integration of IoT with biometric security enhances not only physical security but also user convenience, providing a modern smart-lock solution for smart home automation.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_37-Design_and_Evaluation_of_a_Biometric_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Text Mining Model for Lecturer Performance Evaluation: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160736</link>
        <id>10.14569/IJACSA.2025.0160736</id>
        <doi>10.14569/IJACSA.2025.0160736</doi>
        <lastModDate>2025-07-30T12:59:45.1100000+00:00</lastModDate>
        
        <creator>Anita Ratnasari</creator>
        
        <creator>Vina Ayumi</creator>
        
        <creator>Mariana Purba</creator>
        
        <creator>Wachyu Hari Haji</creator>
        
        <creator>Handrie Noprisson</creator>
        
        <creator>Marissa Utami</creator>
        
        <subject>Text mining; CNN; LSTM; RNN; text-to-sequence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>To support the evaluation of the teaching and learning process in higher education institutions, it is necessary to develop a text mining (TM) model. The aim of this research is to compare the performance of Long Short-Term Memory using Word Embedding Text to Sequence (WETS-LSTM), WETS-BiLSTM, WETS-CNN1D, and WETS-RNN on four dataset categories: pedagogic, professional, personality, and social competency. The research has five main steps, including literature study, dataset collection, TM model development, and evaluation. The dataset was collected from Universitas Sjakhyakirti, Institut Teknologi dan Bisnis Palcomtech, Universitas Muhammadiyah Palembang, Universitas Bina Darma, AMIK Bina Sriwijaya, and Politeknik Darusalam. The questionnaire distribution process initially yielded 6,170 responses, 6,164 of them valid, across the four competency categories, giving a total of 24,656 text entries for analysis. The WETS-LSTM model obtained the best overall performance, achieving a training accuracy of 96.65% and the highest test accuracy of 82.92%. The CNN1D with Word Embedding Text to Sequence (WETS-CNN1D) demonstrated good training accuracy at 96.73% but lower test performance at 80.67%. The WETS with Recurrent Neural Network (WETS-RNN) obtained the weakest results, with a training accuracy of 95.88% and a test accuracy of 77.99%.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_36-The_Text_Mining_Model_for_Lecturer_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid HO-CAL Framework for Enhanced Stock Index Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160735</link>
        <id>10.14569/IJACSA.2025.0160735</id>
        <doi>10.14569/IJACSA.2025.0160735</doi>
        <lastModDate>2025-07-30T12:59:45.0770000+00:00</lastModDate>
        
        <creator>Zeren Shi</creator>
        
        <creator>Othman Ibrahim</creator>
        
        <creator>Hanini Ilyana Che Hashim</creator>
        
        <subject>Attention mechanism; CNN; LSTM; stock index; hippopotamus optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The accurate prediction of stock indexes plays a critical role in supporting investment decisions and managing financial risks. This study proposed a novel hybrid deep learning model that integrated the strengths of Convolutional Neural Networks (CNN), the Attention mechanism, and Long Short-Term Memory (LSTM) networks to enhance the modelling of temporal patterns in financial time series. To further improve prediction performance, the Hippopotamus Optimization (HO) algorithm was incorporated to fine-tune the network parameters. This is the first application of the CNN-Attention-LSTM (CAL) architecture to stock index prediction. Ablation experiments revealed that the proposed CAL significantly outperformed traditional CNN, LSTM, and CNN-LSTM models, highlighting the effectiveness of the Attention-based architecture. Comparative analyses also demonstrated that the HO-optimized CAL (HO-CAL) model achieved superior predictive accuracy across multiple markets, confirming the robustness of both the hybrid model and the optimization algorithm. These findings underscore the potential of combining deep learning architectures with metaheuristic optimization to improve prediction accuracy in financial markets, offering valuable insights for real-world investment strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_35-A_Novel_Hybrid_HO_CAL_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Optimization Conception: Less Data, Less Time, More Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160734</link>
        <id>10.14569/IJACSA.2025.0160734</id>
        <doi>10.14569/IJACSA.2025.0160734</doi>
        <lastModDate>2025-07-30T12:59:45.0000000+00:00</lastModDate>
        
        <creator>Mohamed Amine MEDDAOUI</creator>
        
        <creator>Moulay AMZIL</creator>
        
        <creator>Imane KARKABA</creator>
        
        <creator>Mohammed ERRITALI</creator>
        
        <subject>Deep Learning; AI; IoT; optimization; transfer learning; model compression; few-shot learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Although Deep Learning has not produced a breakthrough in core artificial intelligence technology, it achieves the best performance worldwide in areas such as computer vision and natural language processing. However, it depends on large-scale datasets and enormous computational resources. This paper tackles a major question: can we train more efficient deep learning models with less data in less time? We examine numerous strategies designed to reduce the burden of training without letting quality deteriorate. From transfer learning and few-shot learning to lightweight architectures, artificially produced synthetic datasets, and distributed training, we consider how to make advanced AI subsystems fit for running under scarce resources. The aim is to lay down a future for deep learning that is more sustainable and inclusive. This research focuses on the important issue of streamlining deep learning models while balancing model performance against data collection and computation costs. We examine approaches such as transfer learning coupled with few-shot learning, data augmentation, architecture optimization, and parallelization, explaining each process along with its benefits and drawbacks. Our research shows that training a model more efficiently improves the overall training process, making it cheaper and greener. Such a change would help more people use sophisticated AI systems even when limited by constrained resources, broadening the real-world application of AI technology and further stimulating innovation in the area.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_34-Deep_Learning_Optimization_Conception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Skin Disease Detection Using Adaptive Particle Swarm Intelligent Optimization and Hyper-Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160733</link>
        <id>10.14569/IJACSA.2025.0160733</id>
        <doi>10.14569/IJACSA.2025.0160733</doi>
        <lastModDate>2025-07-30T12:59:44.9530000+00:00</lastModDate>
        
        <creator>N Annalakshmi</creator>
        
        <creator>S Umarani</creator>
        
        <subject>Skin cancer; image preprocessing; hyper-convoluted intra-capsuled neural network (HCI-CNN); adaptive Particle Swarm Intelligent Optimization (APSIO); image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In recent medical research, skin cancer has emerged as one of the most prevalent and fatal cancers globally. Previous studies have faced challenges in detecting skin cancer early due to the complexity of identifying specific skin diseases, segmenting affected areas, and selecting relevant features. To address these limitations, this study proposes a novel AI-powered enhanced skin disease detection system that applies an Adaptive Particle Swarm Intelligent Optimization (APSIO) in conjunction with a Hyper-Convoluted Intra-Capsuled Neural Network (HCI-CNN). In image processing, a Gaussian Wavelet Spectral Filter is initially used to preprocess the input dataset of skin-cancer images. This filter is used to standardize the skin layer of the pixel. After preprocessing, the method applies Slice Fragment Window Segmentation (SFWS) to divide the image into several clusters, focusing on the specified area affected by the disease. Next, Adaptive Particle Swarm Intelligent Optimization (APSIO) is applied for feature selection. APSIO is an optimization metaheuristic algorithm that optimizes the selection of relevant features from the segmented image. After removing evaluated and non-effective features, YOLO extracted features are passed through an HCI-CNN classifier to efficiently characterize high-level spatial hierarchies and relations of features in the feature space using hyper-convolutional operations and capsule representations. This paper analyzed the clinical images of individuals along with the dataset images. The output gain improved Accuracy to 97%, precision to 96.52%, recall to 96.55%, and F1-score to 96.93%, while simultaneously minimizing false positives and total time complexity.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_33-AI_Powered_Skin_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Precision Livestock Farming: Integrating Hybrid AI, IoT, Cloud and Edge Computing for Enhanced Welfare and Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160732</link>
        <id>10.14569/IJACSA.2025.0160732</id>
        <doi>10.14569/IJACSA.2025.0160732</doi>
        <lastModDate>2025-07-30T12:59:44.9230000+00:00</lastModDate>
        
        <creator>Hakim Jebari</creator>
        
        <creator>Siham Rekiek</creator>
        
        <creator>Kamal Reklaoui</creator>
        
        <subject>Hybrid artificial intelligence; edge computing; cloud computing; Internet of Things; artificial intelligence; predictive analytics; smart farming; smart poultry farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Poultry farming is pivotal to global food security, yet maintaining optimal environmental and operational conditions remains a challenge. Suboptimal conditions, such as high temperature and humidity, promote bacterial growth and the production of toxic gases like ammonia (NH3), carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), and hydrogen sulfide (H2S), which increase poultry disease and mortality rates. This study introduces an innovative, modular, and scalable system integrating Artificial Intelligence (AI), Internet of Things (IoT), Edge Computing, and Cloud Computing for real-time monitoring, prediction, and automation in poultry barns. The system employs a hybrid AI framework combining Gradient Boosting techniques (XGBoost, LightGBM, CatBoost) and Long Short-Term Memory (LSTM) networks to analyze data from a heterogeneous wireless sensor network. It monitors critical parameters—temperature, humidity, and toxic gas concentrations—while predicting environmental conditions and detecting potential stress to optimize poultry welfare. Leveraging IoT for data collection, Edge Computing for low-latency processing, and cloud analytics for advanced insights, the system enhances decision-making, reduces feed wastage, lowers energy costs, and decreases mortality rates. A case study demonstrates significant improvements in prediction accuracy, operational efficiency, and animal welfare, underscoring the framework’s adaptability across diverse agricultural settings. This work establishes a robust precedent for hybrid AI-driven smart farming solutions, advancing precision livestock farming.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_32-Advancing_Precision_Livestock_Farming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Perceptual Hash Techniques for Audio Copyright Protection in Decentralized Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160731</link>
        <id>10.14569/IJACSA.2025.0160731</id>
        <doi>10.14569/IJACSA.2025.0160731</doi>
        <lastModDate>2025-07-30T12:59:44.9070000+00:00</lastModDate>
        
        <creator>N. Kavitha</creator>
        
        <creator>Rashmika S J</creator>
        
        <creator>Reshika A S</creator>
        
        <subject>Perceptual hashing; blockchain; audio; copyright protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The overlap of perceptual hashing technologies with blockchain is an interesting answer to strengthening copyright protection of audio in decentralized networks. As the music industry continues to endure unauthorized duplication and transformation of online content, traditional security measures fall short. Perceptual hashing bridges the gap by creating unique digital fingerprints that are resistant to small-scale modifications, allowing detection of copyright piracy even in edited audio content. When combined with the immutable nature of blockchain and smart contract functionality, this new framework not only guarantees ownership verification but also automates licensing procedures, thereby doing away with the need for intermediaries. The proposed method addresses the issues of state-of-the-art methods and performs well under various conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_31-Perceptual_Hash_Techniques_for_Audio_Copyright_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Step Cross-Domain Aspect-Based Sentiment Generation with Error Correction Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160730</link>
        <id>10.14569/IJACSA.2025.0160730</id>
        <doi>10.14569/IJACSA.2025.0160730</doi>
        <lastModDate>2025-07-30T12:59:44.8600000+00:00</lastModDate>
        
        <creator>Ningning Mao</creator>
        
        <creator>Xuanliang Zhu</creator>
        
        <creator>Yadi Xu</creator>
        
        <subject>Cross-domain aspect-based sentiment analysis; multi-step generation; correction mechanism; domain-invariant feature learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>With the rapid growth of social media and user-generated content, cross-domain aspect-level sentiment analysis has become an important research direction in sentiment computing. In this study, a cross-domain sentiment analysis method based on the T5 model is proposed. This method integrates a multi-step generative training mechanism with a correction mechanism to improve the model&#39;s generalization ability and sentiment classification accuracy when processing texts from different domains. First, domain-invariant sentiment features are extracted through training on texts and their associated aspect vocabularies from both the source and target domains. This process effectively reduces inter-domain discrepancies. Unlike other methods, the generative task is formulated in the source domain to produce both aspect and sentiment element pairs, which improves the model&#39;s reasoning ability through multi-step generation. Finally, a correction mechanism is used to detect the aspect labels in the generated labels of the target domain and regenerate the sentiment predictions when errors are detected, which improves the model’s robustness. Experimental results show that the proposed method performs well in several cross-domain sentiment analysis tasks and significantly outperforms traditional methods in sentiment classification accuracy. The study provides an innovative solution for cross-domain sentiment analysis with broad application potential.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_30-Multi_Step_Cross_Domain_Aspect_Based_Sentiment_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Maternal Health Risk Assessment with Smartwatch-Based Vital Sign Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160729</link>
        <id>10.14569/IJACSA.2025.0160729</id>
        <doi>10.14569/IJACSA.2025.0160729</doi>
        <lastModDate>2025-07-30T12:59:44.8270000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Diva Kurnianingtyas</creator>
        
        <subject>Artificial intelligence; Kaggle; maternal health risk assessment; IoT technology; classification performance; pregnancy risk level; ANN; RF; MHRL; BP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The risk of maternal health issues remains a particular challenge in regions with scant access to continuous antenatal care. This study proposes a smartwatch-based system for evaluating the possible risks associated with maternal health through monitoring vital signs and machine learning algorithms. Using an open-access dataset from Kaggle, the smartwatch assesses maternal risk levels by monitoring systolic and diastolic blood pressure, heart rate, blood glucose, and body temperature. The combination of Artificial Neural Network (ANN) and Random Forest (RF) classifiers gave the system&#39;s best-obtained results of 95% accuracy, 97% precision, 97% recall, and an F1 score of 0.97 on the testing dataset. Analysis of correlation demonstrated significant relationships between maternal risk and several primary measures, particularly with systolic blood pressure (r = 0.931), diastolic pressure (r = 0.916), and blood glucose (r = 0.887). Two regression models, MHRL1 and MHRL2, were created to estimate risk levels based on these parameters. From the experimental data, three clinical action levels were defined for the management of pregnancy care: 1) hypertension with Blood Pressure: BP ≥140/90 mmHg, 2) elevated fasting glucose ≥95 mg/dL or postprandial ≥140 mg/dL, and 3) tachycardia with sustained heart rate &gt;100 bpm. These results prove the capability of using IoT-based wearables integrated into workflows for maternal monitoring to enable early warning systems and tailored health management, particularly in constrained settings.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_29-Method_for_Maternal_Health_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-Based Dual-Model Framework for Real-Time Malware and Network Anomaly Detection with MITRE ATT&amp;CK Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160728</link>
        <id>10.14569/IJACSA.2025.0160728</id>
        <doi>10.14569/IJACSA.2025.0160728</doi>
        <lastModDate>2025-07-30T12:59:44.8130000+00:00</lastModDate>
        
        <creator>Migara H. M. S</creator>
        
        <creator>Sandakelum M. D. B</creator>
        
        <creator>Maduranga D. B. W. N</creator>
        
        <creator>Kumara D. D. K. C</creator>
        
        <creator>Harinda Fernando</creator>
        
        <creator>Kavinga Abeywardena</creator>
        
        <subject>Cybersecurity; malware detection; generative adversarial net-works; deep learning; MITRE ATT&amp;CK; feedforward neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The contemporary world of high connectivity in the digital realm has presented cybersecurity with more advanced threats, such as advanced malware and network attacks, which in most cases will not be detected using traditional detection tools. Traditional static cybersecurity tools, including signature-based antivirus systems and rule-based intrusion detection, often fail to deal with dynamic and hitherto unseen attacks. To address this issue, we propose a two-part, AI-powered cybersecurity solution that allows real-time threat detection at both the endpoint and the network level. The first element uses a Feed-forward Neural Network (FNN) to categorize Windows Portable Executable (PE) files as benign or malicious using structured static features. The second component improves network anomaly detection with a deep learning model augmented by Generative Adversarial Networks (GAN), which effectively addresses data imbalance and sensitivity to rare cyber-attacks. To enhance its performance further, the system is integrated with the MITRE ATT&amp;CK framework, which correlates real-time detection results with adversarial tactics and techniques, thus offering actionable context to incident response teams. Tests based on open-source datasets achieved accuracies of 98.0% for malware detection and 96.2% for network anomaly detection. Data augmentation using GAN was very effective in improving the detection of less common attacks, including SQL injections and internal reconnaissance. Moreover, the system is horizontally scalable and responsive in real time due to Docker-based deployment. The suggested framework is an effective, explainable and scalable cybersecurity defense system, well suited to Managed Security Service Providers (MSSPs) and Security Operations Centers (SOCs), greatly increasing the precision and contextual insight of threat detection.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_28-A_Deep_Learning_Based_Dual_Model_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Textual Feedback Analysis in E-Training Using Enhanced RoBERTa</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160727</link>
        <id>10.14569/IJACSA.2025.0160727</id>
        <doi>10.14569/IJACSA.2025.0160727</doi>
        <lastModDate>2025-07-30T12:59:44.7670000+00:00</lastModDate>
        
        <creator>Rakan Saad Alotaibi</creator>
        
        <creator>Fahad Mazyed Alotaibi</creator>
        
        <creator>Sameer Abdullah Nooh</creator>
        
        <creator>Abdulaziz A. Alsulami</creator>
        
        <subject>Job performance prediction; transformer models; enhanced RoBERTa; domain-adaptive pretraining (DAPT); dynamic attention scaling (DAS); natural language processing (NLP); explainable AI; textual feedback analysis; workforce analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In corporate e-training environments, traditional metrics like course completion and quiz scores often fail to reflect actual job performance. Rich insights are embedded in unstructured textual feedback, yet they remain underutilized due to limitations in existing analytical models. This study proposes E-RoBERTa, an enhanced transformer-based model designed to predict employee job performance by analyzing open-ended feedback from digital training platforms. The model aims to improve accuracy, domain adaptability, and interpretability. E-RoBERTa integrates Domain-Adaptive Pretraining (DAPT) to fine-tune RoBERTa on corporate-specific language and introduces Dynamic Attention Scaling (DAS) to highlight semantically critical tokens. A real-world, GDPR-compliant dataset containing 16,000 feedback entries from 3,500 employees across multiple departments was used. Preprocessing included tokenization, sentiment tagging, and feature extraction. The model achieved superior performance with a macro F1-score of 0.875, outperforming standard RoBERTa, LSTM, and SVM baselines. Attention visualizations revealed alignment between influential tokens and human-interpretable performance indicators. E-RoBERTa provides a transparent and accurate framework for evaluating job performance through textual feedback. Its use of domain adaptation and dynamic attention mechanisms supports scalable, ethical, and explainable AI in corporate learning analytics, offering actionable insights for personalized interventions and strategic HR decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_27-AI_Driven_Textual_Feedback_Analysis_in_E_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Emotion Recognition from Audio Data Using LSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160726</link>
        <id>10.14569/IJACSA.2025.0160726</id>
        <doi>10.14569/IJACSA.2025.0160726</doi>
        <lastModDate>2025-07-30T12:59:44.7330000+00:00</lastModDate>
        
        <creator>Md. Mahbub-Or-Rashid</creator>
        
        <creator>Akash Kumar Nondi</creator>
        
        <creator>Abdullah Al Sadnun</creator>
        
        <creator>Md. Anwar Hussen Wadud</creator>
        
        <creator>T M Amir Ul Haque Bhuiyan</creator>
        
        <creator>Md. Saddam Hossain</creator>
        
        <subject>Emotion; audio data; Ryerson audio-visual database; Toronto emotional speech set; classification; layers; combine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The capacity to comprehend and interact with others through language is the most valuable human ability. Since emotions are crucial to communication, humans are well trained to recognize and interpret the many emotions they encounter. Contrary to popular assumption, the subjective nature of human mood makes emotion recognition difficult for computers. Prior work has addressed emotion recognition using images, text, and audio; here we work with audio data to enable computers to recognize human emotions accurately. In this work, we utilized a Long Short-Term Memory (LSTM) model to implement Speech Emotion Recognition (SER) from audio data on two different datasets: the Toronto Emotional Speech Set (TESS) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). The accuracy rates of our LSTM-based model were impressive, with 91.25% for the RAVDESS dataset and 98.05% for the TESS dataset; the combined accuracy for both datasets was 87.66%. These results highlight the effectiveness of the LSTM model in identifying and categorizing emotional states from audio files. The study adds significant knowledge to the field of speech emotion recognition by emphasizing the model’s ability to handle a variety of datasets and its potential.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_26-Speech_Emotion_Recognition_from_Audio_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive SVR-Based Framework for Multimodal Corpus Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160725</link>
        <id>10.14569/IJACSA.2025.0160725</id>
        <doi>10.14569/IJACSA.2025.0160725</doi>
        <lastModDate>2025-07-30T12:59:44.7030000+00:00</lastModDate>
        
        <creator>Yuhui Wang</creator>
        
        <subject>SVR; adaptive corpus classification; incremental learning; multimodal corpus; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>To address the challenges associated with the dynamic growth and multimodal complexity of modern corpora, an adaptive classification framework based on Support Vector Regression (SVR) was developed. A structured corpus was first constructed, followed by the extraction of salient textual features using Term Frequency–Inverse Document Frequency (TF-IDF) metrics. To accommodate the continuous expansion of the corpus, an incremental learning strategy was employed, enabling the model to update efficiently without complete retraining. A kernel-based SVR model was trained to perform classification tasks, and an adaptive feedback-driven mechanism was introduced to dynamically adjust both model parameters and feature representations based on classification performance metrics. Evaluation was conducted on multiple multilingual and multimodal corpora, with particular emphasis on Chinese language processing, which often presents unique challenges due to character complexity and sparse feature representations. The proposed method achieved a significant improvement in classification accuracy when compared to conventional classification approaches. Furthermore, the model demonstrated superior adaptability and computational efficiency across various corpus types. The findings confirm the viability of SVR as a core component for adaptive classification tasks in dynamic linguistic environments. This study contributes to the field by establishing a generalizable, efficient, and interpretable framework suitable for real-time corpus management systems, intelligent content filtering, and multilingual information retrieval.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_25-An_Adaptive_SVR_Based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Autism Spectrum Disorder (ASD) Using Lightweight Ensemble CNN Based on Facial Images for Improved Diagnostic Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160724</link>
        <id>10.14569/IJACSA.2025.0160724</id>
        <doi>10.14569/IJACSA.2025.0160724</doi>
        <lastModDate>2025-07-30T12:59:44.6570000+00:00</lastModDate>
        
        <creator>Andi Kurniawan Nugroho</creator>
        
        <creator>Jajang Edi Priyanto</creator>
        
        <creator>D. S. P. Vinski</creator>
        
        <subject>Component; autism spectrum disorder (ASD); early detection; ensemble convolutional neural network (CNN); facial images; classification; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects how people communicate and behave. The increasing prevalence of ASD and the difficulty of diagnosing it mean that early detection is important for improving treatment outcomes. This study&#39;s goal is to use lightweight ensemble Convolutional Neural Networks (CNN) to make it easier to classify ASD from facial photos. The study examines different CNN architectures, such as MobileNetV2 and EfficientNet variants, to find the best model for diagnosing ASD quickly and accurately. The method involves training and testing five lightweight CNN models on a set of facial photos. Pre-processing methods such as scaling and data augmentation are used to help the models learn better. The study tests how well ensemble CNN models work by combining predictions from different architectures using averaging and voting methods. Important performance metrics, including accuracy, precision, recall, and F1-score, are used to evaluate each model. The results show that the best balance between accuracy and computational efficiency is achieved by combining MobileNetV2 and EfficientNetB0, which attains an accuracy of 0.8299, a precision of 0.8514, a recall of 0.8182, and an F1-score of 0.8344. Other models, such as ResNet50 combined with EfficientNetB0, have higher precision but lower recall, making them less useful for finding all ASD cases. The proposed approach was also compared with prior studies and found to achieve greater accuracy. The results show that ensemble CNN models can significantly improve the accuracy of classifying ASD compared to single CNNs, and that lightweight ensemble CNN models are effective at detecting ASD from facial images. The method is fast and can be used on devices with limited processing power, making it a good way to find ASD early in both clinical and real-world settings.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_24-Detection_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bio-Inspired Metaheuristic Framework for DNA Motif Discovery Using Hybrid Cluster Based Walrus Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160723</link>
        <id>10.14569/IJACSA.2025.0160723</id>
        <doi>10.14569/IJACSA.2025.0160723</doi>
        <lastModDate>2025-07-30T12:59:44.6400000+00:00</lastModDate>
        
        <creator>M. Shilpa</creator>
        
        <creator>C. Nandini</creator>
        
        <subject>Motifs; walrus optimization algorithm; meta-heuristic algorithms; k-means clustering; DNA; bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Motifs are short, recurring sequence elements with biological significance within a set of nucleotide sequences. Motif discovery is the problem of finding these motifs. It has become an important problem in the field of Bioinformatics since it finds applications in drug discovery, environmental health research, and early detection of diseases by finding anomalies in gene sequences. Motif discovery is a challenging task in bioinformatics since it is NP-hard and cannot be solved exactly in reasonable time. In this study, we propose a Hybrid Cluster-based Walrus Optimization algorithm (HCWaOA) to solve the motif discovery problem. The accuracy and efficiency of the proposed algorithm are improved using a hybrid approach. The population is initialized using the Random Projection technique to generate a meaningful solution space. Then, k-means clustering is used to group similar solutions. Lastly, a population-based metaheuristic, the Walrus optimization technique, is applied to each of the clusters to find the best motif. The proposed HCWaOA is tested on both simulated and real biological datasets, and its performance is compared with benchmark algorithms such as MEME, AlignCE and other meta-heuristic algorithms. The results of the proposed algorithm are found to be stable, with a precision of 92%, a recall of 93% and an F-score of 93%. HCWaOA is also tested on the cancer-related BARC and CTCF datasets to identify cancer-causing motifs. The results show that incorporating clustering into the initial solution space yields optimal solutions within fewer iterations, and the results of HCWaOA remain stable when compared with other popular motif discovery algorithms.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_23-Bio_Inspired_Metaheuristic_Framework_for_DNA_Motif_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Speed Fiber-Optic Communication Performance Utilizing Fiber Bragg Grating-Based Dispersion Compensation Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160722</link>
        <id>10.14569/IJACSA.2025.0160722</id>
        <doi>10.14569/IJACSA.2025.0160722</doi>
        <lastModDate>2025-07-30T12:59:44.5930000+00:00</lastModDate>
        
        <creator>Kripa Kalkala Balakrishna</creator>
        
        <creator>Karthik Palani</creator>
        
        <subject>Dispersion management; chirped fiber bragg grating; gaussian apodization; quality factor; bit error rate; optical transmission system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Chromatic dispersion is a significant limitation in optical fiber communication, as it causes pulse broadening, which negatively impacts transmission distance and data rates, both of which are critical for meeting the high-speed demands of 5G optical networks. This study focuses on addressing chromatic dispersion in Standard Single-Mode Fiber (SSMF) systems, which are widely deployed in 5G fronthaul and access networks. A comprehensive investigation is conducted using Gaussian-apodized linear chirped Fiber Bragg Gratings (FBGs) for dispersion compensation, implemented across three strategic configurations: pre-compensation, post-compensation, and symmetrical compensation. Each scheme is systematically evaluated to determine the most effective approach for enhancing signal integrity and overall network performance. Simulations are performed using OptiSystem 7.0 on a 10 Gbps SSMF-based optical system, with transmission distances ranging from 10 km to 80 km under controlled simulation parameters. Key performance metrics, including Quality factor (Q-factor), Bit Error Rate (BER), and eye height, are analyzed by varying SSMF length, input power, and bit rate. The results demonstrate that symmetrical compensation using Gaussian-apodized linear chirped FBGs provides the best performance, achieving a Q-factor of 12.3938, an ultra-low BER of 1.12336&#215;10⁻&#179;⁵, and a significantly improved eye height at 80 km. These findings establish the symmetrical compensation scheme employing Apodized Chirped Fiber Bragg Gratings (ACFBGs) as the most effective and scalable solution for high-speed, long-distance optical transmission in 5G networks. This approach enables key 5G applications, including ultra-reliable low-latency communication (URLLC), enhanced mobile broadband (eMBB), and smart infrastructure in smart cities. The proposed technique offers multiple advantages, such as low BER, high Q-factor, reduced signal distortion through sidelobe suppression, energy efficiency via passive operation, and design flexibility for long-haul network integration.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_22-High_Speed_Fiber_Optic_Communication_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SOM-Based Leader Selection Strategies for Cooperative Spectrum Sensing in Multi-Band Multi-User 6G CR IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160721</link>
        <id>10.14569/IJACSA.2025.0160721</id>
        <doi>10.14569/IJACSA.2025.0160721</doi>
        <lastModDate>2025-07-30T12:59:44.5630000+00:00</lastModDate>
        
        <creator>Mayank Kothari</creator>
        
        <creator>Suresh Kurumbanshi</creator>
        
        <subject>Cooperative spectrum sensing; reinforcement learning; k-means leader selection; self-organizing map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In 6G Cognitive Radio Internet of Things (CR-IoT) networks, cooperative multi-band spectrum sensing provides access to extensive spectrum resources. The proposed learning-based multi-band multi-user cooperative spectrum sensing (M2CSS) scheme addresses intelligent spectrum access challenges. A cooperative strategy is introduced into a dueling deep Q-network to facilitate multi-user reinforcement learning. Using the proposed learning-based M2CSS scheme, this study selects the most suitable IoT secondary users (SUs) to sense channels. Under the constraints that each IoT SU can serve as a leader for only a single network and that each frequency has exactly one leader, the proposed work formulates an optimization problem for selecting, via k-means and SOM, leaders that can efficiently interact with other SUs. A further optimization problem then selects matching cooperative SUs for each frequency. Following this phase, a subset of cooperative SUs senses the frequencies and uses the acquired knowledge to determine channel availability in a distributed manner. The simulation findings demonstrate significant improvements in detection performance, prevention of the misuse of specific devices, reliable sensing data over extensive IoT connections, and energy efficiency, all of which are essential for IoT implementations. These advantages make the proposed M2CSS system suitable for the massive machine-type communications anticipated in 6G IoT scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_21-SOM_Based_Leader_Selection_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristics-Based Clustering of Internet of Vehicle Based on Effective Approximate QoS Guideline for Message Dissemination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160720</link>
        <id>10.14569/IJACSA.2025.0160720</id>
        <doi>10.14569/IJACSA.2025.0160720</doi>
        <lastModDate>2025-07-30T12:59:44.5300000+00:00</lastModDate>
        
        <creator>Tanuja Kayarga</creator>
        
        <creator>S Ananda Kumar</creator>
        
        <creator>Lakshmi B S</creator>
        
        <creator>Anil Kumar B H</creator>
        
        <subject>QoS; clustering; meta heuristics; IoV; PSO; steiner minimal tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Internet of Things and connected-world concepts are emerging in all walks of life, and the Internet of Vehicles (IoV) is their natural evolution for a connected vehicular environment. Clustering is needed in an IoV network for effective management of network resources and to avoid congestion. On an effective cluster base, an efficient routing protocol can be realized for message dissemination. In most existing work, clustering based on multiple mobility metrics is the first step, followed by routing over the established clusters. This work proposes a clustering solution that avoids frequent re-clustering and reduces cluster maintenance effort. The proposed solution first establishes a virtual QoS guideline path for message dissemination based on graph theory, and then clusters the network using a heuristic algorithm to further improve QoS along the guideline path. The proposed solution demonstrates notable improvements over existing approaches. Specifically, it achieves an average cluster duration that is at least 6% higher and reduces cluster maintenance overhead by 7%. In terms of Quality of Service (QoS), it attains a 3.6% higher packet delivery ratio, along with a 21% reduction in end-to-end delay and a 28% decrease in routing overhead.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_20-Heuristics_Based_Clustering_of_Internet_of_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Fake News Images Using a Hybrid CNN-LSTM Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160719</link>
        <id>10.14569/IJACSA.2025.0160719</id>
        <doi>10.14569/IJACSA.2025.0160719</doi>
        <lastModDate>2025-07-30T12:59:44.5000000+00:00</lastModDate>
        
        <creator>Dina R. Salem</creator>
        
        <creator>Abdullah A. Abdullah</creator>
        
        <creator>AbdAllah A. AlHabshy</creator>
        
        <creator>Kamal A. ElDahshan</creator>
        
        <subject>Fake news images; machine learning; deep learning; cloud computing; CNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In today&#39;s digital world, images have become a double-edged tool in the dissemination of news; as much as they contribute to enriching honest content and communicating information effectively, they are increasingly being used to mislead the public and spread fake news. The ease of manipulating images and taking them out of their original context, or even creating them entirely with advanced techniques, gives them tremendous power in lending false credibility to false narratives, taking advantage of the human eye&#39;s tendency to believe what it sees and the image&#39;s superior ability to directly evoke emotions. These misleading images, which are often difficult to debunk with the naked eye, spread at lightning speed across digital platforms, allowing fake news to reach and influence large audiences before it can be verified. Existing detection approaches, however, tend to produce inaccurate results. This study proposes a model architecture to detect fake news images. Machine learning and deep learning algorithms were used. The deep learning models are based on convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid model that combines the CNN and LSTM frameworks on Google Cloud. The hybrid model was able to categorize news with better accuracy than either model individually. The model was trained and tested on a dataset for classifying fake news images. Different evaluation metrics (precision, recall, F1-score, etc.) were used to measure the efficiency of the model.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_19-Detecting_Fake_News_Images_Using_a_Hybrid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Space-Time Dynamics in Live Memory Forensics Using Hybrid Transformer Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160718</link>
        <id>10.14569/IJACSA.2025.0160718</id>
        <doi>10.14569/IJACSA.2025.0160718</doi>
        <lastModDate>2025-07-30T12:59:44.4700000+00:00</lastModDate>
        
        <creator>Sarishma Dangi</creator>
        
        <creator>Kamal Ghanshala</creator>
        
        <creator>Sachin Sharma</creator>
        
        <subject>Live memory forensics; swin transformer; longformer transformers; memory acquisition; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Live memory forensics plays a critical role in digital investigations by analyzing volatile memory to detect system anomalies such as malware and unauthorized process activities. Traditional approaches often fall short in modelling the evolving nature of live memory. This study presents a novel Hybrid Space-Time Transformer Architecture combining a Swin Transformer for localized spatial feature extraction and a Longformer for capturing long-term temporal dependencies. By integrating windowed and sliding attention mechanisms, the proposed method enables precise detection of anomalies such as malware injection and process hijacking. Evaluated on benchmark datasets, the model achieved an accuracy of 95% and an F1-score of 0.94, outperforming conventional deep learning and transformer-based approaches. Our work contributes a scalable, interpretable, and highly accurate model for enhancing live memory forensic workflows.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_18-Investigating_Space_Time_Dynamics_in_Live_Memory_Forensics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Qualitative Constructivist Framework for Assessing Knowledge Transfer in Enterprise System Projects: Insights from Expert Interviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160717</link>
        <id>10.14569/IJACSA.2025.0160717</id>
        <doi>10.14569/IJACSA.2025.0160717</doi>
        <lastModDate>2025-07-30T12:59:44.4530000+00:00</lastModDate>
        
        <creator>Jamal M. Hussien</creator>
        
        <creator>Riza bin Sulaiman</creator>
        
        <creator>Ali H Hassan</creator>
        
        <creator>Mansoor Abdulhak</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <creator>Basit Shahzad</creator>
        
        <subject>Knowledge transfer; enterprise system projects; KTSSAF; project management; constructivist methodology; knowledge management; IS success theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Effective Knowledge Transfer (KT) is widely recognized as a cornerstone of success in Enterprise System Projects (ESPs). However, despite its critical role, many ESPs continue to suffer from poor KT practices, resulting in delays, cost overruns, and suboptimal system adoption. This study aims to develop a qualitative constructivist framework for assessing knowledge transfer in ESPs to advance the ESPs&#39; success. This study introduces the Knowledge Transfer Success Self-Assessment Framework (KTSSAF), a theoretically grounded and empirically validated framework designed to systematically evaluate KT effectiveness across ESP phases. Drawing upon the Information Systems (IS) Success Theory, the KTSSAF is built around the Project Management Process Groups (PMPG), enabling organizations to assess KT at granular levels within the pre-, during-, and post-implementation stages of ESPs. The development of KTSSAF was guided by a qualitative constructivist methodology combining insights from semi-structured interviews with domain experts and a comprehensive literature review; the framework comprises an assessment kit and a scoring mechanism tailored to enterprise-specific knowledge clusters and project phases. The framework supports ESP stakeholders in identifying KT gaps, forecasting KT success, and implementing targeted improvements. Empirical validation through expert reviews and pilot studies demonstrates the framework&#39;s practical utility and theoretical contributions. KTSSAF empowers organizations to make informed decisions regarding knowledge management strategies, facilitating improved knowledge retention, enhanced system use, and increased stakeholder engagement. By addressing longstanding gaps in KT evaluation within ESPs, this study contributes a structured, repeatable approach for practitioners and researchers to enhance KT outcomes and overall ESP success.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_17-A_Qualitative_Constructivist_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced AI for Liver Cancer Detection: Vision Transformers, XAI and Contrastive Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160716</link>
        <id>10.14569/IJACSA.2025.0160716</id>
        <doi>10.14569/IJACSA.2025.0160716</doi>
        <lastModDate>2025-07-30T12:59:44.4370000+00:00</lastModDate>
        
        <creator>B C Anil</creator>
        
        <creator>Jayasimha S R</creator>
        
        <creator>Samitha Khaiyum</creator>
        
        <creator>T L Divya</creator>
        
        <creator>Rakshitha Kiran P</creator>
        
        <creator>Vishal C</creator>
        
        <subject>Contrastive learning; explainable AI (XAI); medical imaging AI; vision transformers; liver cancer detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Liver cancer detection has always stood as a significant challenge in medical diagnostics, largely due to the complexity of interpreting imaging data and the critical need for accurate yet explainable results. This study explored how recent advances in artificial intelligence, specifically Vision Transformers (ViTs), Contrastive Learning, and Explainable AI (XAI), can be combined to address this challenge more effectively. Unlike conventional models, Vision Transformers are particularly good at capturing intricate patterns in medical images, which makes them well-suited for tasks like cancer classification. To improve the model&#39;s ability to generalize across different imaging conditions, the study incorporated contrastive learning techniques, essentially teaching the system to recognize subtle distinctions between similar and dissimilar image features. This approach significantly sharpened its performance. Recognizing the importance of transparency in medical AI, the study also integrated explainable AI tools into the model. This helped generate visual and textual cues that explain the system’s predictions, which is crucial for gaining the trust of clinicians who rely on these tools in high-stakes environments. The model was trained on a comprehensive dataset of liver cancer images, including both CT scans and MRIs, sourced from a well-established medical repository. The results were promising: the system reached a classification accuracy of 92 per cent, outperforming standard convolutional neural networks (CNNs) by 8 per cent. Most notably, it showed strong performance in identifying early-stage liver cancer, with 90 per cent sensitivity and 94 per cent specificity, suggesting that it may hold real potential for clinical application.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_16-Advanced_AI_for_Liver_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SE-Pruned ResNet-18: Balancing Accuracy and Efficiency for Object Classification on Resource-Constrained Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160715</link>
        <id>10.14569/IJACSA.2025.0160715</id>
        <doi>10.14569/IJACSA.2025.0160715</doi>
        <lastModDate>2025-07-30T12:59:44.3900000+00:00</lastModDate>
        
        <creator>Zeyad Farisi</creator>
        
        <subject>ResNet-18; squeeze-and-excitation model; model pruning; object classification; resource-constrained devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Deep learning-based image object classification methods often achieve high accuracy, but with the growing demand for real-time performance on resource-constrained edge devices, existing approaches face challenges in balancing accuracy, computational complexity, and model size. To address this challenge, we propose a novel ResNet-18 architecture that integrates the Squeeze-and-Excitation (SE) module and model pruning. The SE module adaptively emphasizes informative feature channels to enhance classification accuracy, while pruning reduces computational costs by removing unimportant connections or parameters without significant accuracy loss. Extensive experiments on benchmark datasets demonstrate that the optimized model outperforms the original ResNet-18 in both accuracy and inference speed. The classification accuracy increases from 93.2% to 94.1%, the number of parameters is reduced by 30%, the floating-point operations (FLOPs) decrease from 1.81G to 1.32G, and the inference time decreases from 15.2 milliseconds to 12.8 milliseconds per batch. Moreover, the proposed model outperforms MobileNetV2, ShuffleNetV2, and EfficientNet-B0 in accuracy while maintaining competitive inference speed and parameter count. The experimental results highlight the model’s potential for deployment on resource-constrained devices, expanding the practical application scenarios of object classification methods in edge computing and real-time detection tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_15-SE_Pruned_ResNet_18_Balancing_Accuracy_and_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved YOLOv8 Model for Enhanced Small-Sized Breast Mass Detection on Magnetic Resonance Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160714</link>
        <id>10.14569/IJACSA.2025.0160714</id>
        <doi>10.14569/IJACSA.2025.0160714</doi>
        <lastModDate>2025-07-30T12:59:44.3730000+00:00</lastModDate>
        
        <creator>Feiyan Wu</creator>
        
        <creator>Chia Yean Lim</creator>
        
        <creator>Sau Loong Ang</creator>
        
        <creator>Jiaxin Zheng</creator>
        
        <subject>Bidirectional feature pyramid network; breast cancer detection; convolutional block attention module; MRI; object detection; small-sized masses; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The early detection of breast cancer is critically important for prompt treatment and saving lives. However, the accuracy of existing algorithms in the early detection of small-sized breast masses remains unsatisfactory, as small-sized masses often exhibit subtle features, have blurry boundaries, and may overlap with other structures in crowded magnetic resonance imaging (MRI) images. This research proposes an improved object detection model based on You Only Look Once (YOLO) v8 to enhance small-sized breast mass detection on MRI. A feature fusion method, the Bidirectional Feature Pyramid Network (BiFPN), and an attention mechanism, the Convolutional Block Attention Module (CBAM), are integrated into the YOLOv8 architecture. The improved YOLOv8 model, equipped with CBAM and BiFPN and hyperparameter tuning, achieved the best performance with a precision of 95.7%, a mAP50 of 91.2%, a recall of 84.3%, and the shortest inference time of 3.4 ms per image. The proposed improved YOLOv8 model outperformed the baseline model with improvements in precision, mAP50, and recall of 6%, 3.9%, and 2.1%, respectively. The inference time per image is reduced by 1.4 ms as well. It is hoped that the proposed model can be applied in the clinical field to increase the early detection rate of breast cancer and the life expectancy of women worldwide.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_14-Improved_YOLOv8_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>niCNN: A Novel Neuromorphic Approach to Energy-Efficient and Lightweight Human Activity Recognition on Edge Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160713</link>
        <id>10.14569/IJACSA.2025.0160713</id>
        <doi>10.14569/IJACSA.2025.0160713</doi>
        <lastModDate>2025-07-30T12:59:44.3270000+00:00</lastModDate>
        
        <creator>Preeti Agarwal</creator>
        
        <subject>Neuromorphic computing; human activity recognition (HAR); edge computing; convolution neural network (CNN); spiking neural network (SNN); sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Recent years have seen a surge in the use of deep learning for human activity recognition (HAR) in various applications. However, running complex deep learning models on edge devices with limited resources, such as processing power, memory, and energy, is challenging. The objective of this study is to design a novel, lightweight, and energy-efficient neuromorphic-inspired CNN (niCNN) architecture for real-time HAR on edge devices. The niCNN architecture consists of four stages: design of a shallow CNN, conversion into an equivalent spiking network using the Clamping and Quantization (CnQ) algorithm to minimize information loss, threshold balancing to calculate the spiking neuron firing rate using the Threshold Firing (TF) algorithm, and edge deployment. The experimental evaluation shows that the niCNN architecture achieves 97.25% and 98.92% accuracy on two publicly accessible HAR datasets, WISDM and mHealth. Furthermore, the niCNN technique retains a low inference latency of 2.25 ms and 2.36 ms, as well as a low memory utilization of 22.11 KB and 31.84 KB, respectively. In addition, energy usage is reduced to 5.2 W and 5.8 W. In comparison to various state-of-the-art and baseline CNN models, the niCNN architecture outperforms them in terms of classification metrics, memory usage, energy consumption, and inference delay. The CnQ algorithm reduces memory usage and inference latency, while the TF algorithm improves classification accuracy. The findings show that neuromorphic computing has considerable potential for resource-constrained edge devices.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_13-niCNN_A_Novel_Neuromorphic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benefits and Challenges of Cloud Computing System in Malaysian Public Healthcare Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160712</link>
        <id>10.14569/IJACSA.2025.0160712</id>
        <doi>10.14569/IJACSA.2025.0160712</doi>
        <lastModDate>2025-07-30T12:59:44.2970000+00:00</lastModDate>
        
        <creator>Nurul Izzatty Ismail</creator>
        
        <creator>Juhari Noor Faezah</creator>
        
        <creator>Muhammad Syukri Abdullah</creator>
        
        <creator>Masrina Nadia Mohd Salleh</creator>
        
        <subject>Cloud computing; technology; Malaysia; healthcare sector; public healthcare; TOE framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Cloud computing has become an emerging technology in information systems (IS) and has attracted worldwide attention in healthcare management, including in Malaysia. The present study therefore aims to identify the benefits and challenges of cloud systems in the healthcare sector. The TOE framework was adopted to explain the benefits and challenges of cloud system implementation in the healthcare sector. The findings show that cost, scalability, data accessibility, and interoperability are factors in the technological context that may enable the successful implementation of cloud systems in healthcare organizations. Apart from that, the size of healthcare organizations and their training were also important factors in the organizational context, while government regulations and policies, as well as cyber threats, are considered crucial factors in the environmental context for cloud implementation in the healthcare sector. A conceptual framework of the cloud system was proposed to provide a comprehensive understanding to optimise future implementation or adoption of the cloud system in the Malaysian healthcare sector.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_12-Benefits_and_Challenges_of_Cloud_Computing_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FoodSharePro: An Integrated Mobile Platform for Sustainable Food Donation and Decentralized Composting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160711</link>
        <id>10.14569/IJACSA.2025.0160711</id>
        <doi>10.14569/IJACSA.2025.0160711</doi>
        <lastModDate>2025-07-30T12:59:44.2670000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Rex Bacarra</creator>
        
        <creator>Shamsul Fakhar Bin Abd Gani</creator>
        
        <creator>Serhij Mamchenko</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Food waste management; food donation platforms; mobile applications; traditional composting; smart waste systems; user engagement; foodsharepro</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Food waste is a global issue: one-third of all food produced is lost every year. This study introduces FoodSharePro, an integrated mobile-based Waste Food Management System that connects food donation and composting. The system allows for the efficient donation of surplus edible food through a mobile app and the management of inedible waste through traditional composting methods. Built using Android Studio and Google Firebase, the app provides secure authentication, location-based rider matching via the Google Maps API, and real-time data synchronization. Donors can log donations, track status, and view delivery confirmations through a user-friendly dashboard, while riders are assigned tasks based on location and transport suitability. To minimize organic waste, composting hardware with temperature sensors and dehydration units supports the aerobic composting process. An evaluation with 20 users showed that FoodSharePro achieved the highest satisfaction rate (75%) compared to 6 other platforms, with a mean user satisfaction of 24.29% and a standard deviation of 24.25%. The results show that mobile technology can be integrated with grassroots waste management to reduce food loss and promote sustainability.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_11-FoodSharePro_An_Integrated_Mobile_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Recognition in Pond Environments Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160710</link>
        <id>10.14569/IJACSA.2025.0160710</id>
        <doi>10.14569/IJACSA.2025.0160710</doi>
        <lastModDate>2025-07-30T12:59:44.2330000+00:00</lastModDate>
        
        <creator>Suhaila Sari</creator>
        
        <creator>Ng Wei Jie</creator>
        
        <creator>Nik Shahidah Afifi Md Taujuddin</creator>
        
        <creator>Hazli Roslan</creator>
        
        <creator>Nabilah Ibrahim</creator>
        
        <creator>Mohd Helmy Abd Wahab</creator>
        
        <subject>Dataset; deep learning; object recognition; pond; underwater image; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Complicated underwater environments, with visibility limitations and challenging illumination conditions, pose significant challenges for underwater imaging and its object recognition performance. These issues are especially critical for applications involving autonomous underwater vehicles (AUVs) or robotic systems performing object recognition tasks during search-and-retrieval operations. Moreover, high-turbidity underwater image datasets, especially for pond environments, remain scarce. Therefore, this study focuses on establishing a pond underwater image dataset and evaluating a deep learning-based object recognition architecture, You Only Look Once version 5 (YOLOv5), in recognizing multiple objects in the corresponding underwater pond images. The dataset contains 1116 self-captured underwater pond images, which are annotated with LabelImg for object recognition and dataset generation. Under varying depths, camera distances, and object angles, YOLOv5 reaches a mean average precision (mAP@50-95) of 87.96%, demonstrating its effectiveness for recognizing multiple objects in pond underwater environments.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_10-Object_Recognition_in_Pond_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Analysis and Interpretation of Music Conducting Works Based on Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160709</link>
        <id>10.14569/IJACSA.2025.0160709</id>
        <doi>10.14569/IJACSA.2025.0160709</doi>
        <lastModDate>2025-07-30T12:59:44.2030000+00:00</lastModDate>
        
        <creator>He Huang</creator>
        
        <creator>Chengcheng Zhang</creator>
        
        <creator>Yun Liu</creator>
        
        <creator>Liyuan Liu</creator>
        
        <subject>Artificial intelligence; music conductor works; emotional analysis; interpretation technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Emotional expression is at the core of the performance of music conducting works. Based on deep learning technology, this study proposes an emotional analysis method for music conducting works and constructs a complete framework covering audio feature extraction, emotion classification, model optimization, and evaluation. Conducting works of different styles are selected, audio features are extracted using the short-time Fourier transform and mel-frequency cepstral coefficients, and emotional categories are classified with a convolutional neural network combined with a bidirectional long short-term memory structure. The experimental results show that the model performs well in recognizing joy, sadness, and tranquility, with an average classification accuracy of 88.5% and an F1-score exceeding 0.87 across core emotional categories. Works of different styles differ in emotional classification: classical works tend toward tranquility and joy, while romantic works account for a higher proportion of the sadness category. Changes in conducting style also affect the classification results, as different conductors' treatment of rhythm, dynamics, and timbre leads to differences in emotion recognition of the same works. These findings provide new methodological support for affective computing in music, with practical applications in music education, intelligent recommendation, and related fields. Future research will optimize the model structure and combine multimodal data to improve the accuracy of music emotion recognition, providing broader research space for combining music analysis, interpretation technology, and artificial intelligence.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_9-Emotional_Analysis_and_Interpretation_of_Music.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer Model Optimization Method for Multi-Modal Data Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160708</link>
        <id>10.14569/IJACSA.2025.0160708</id>
        <doi>10.14569/IJACSA.2025.0160708</doi>
        <lastModDate>2025-07-30T12:59:44.1730000+00:00</lastModDate>
        
        <creator>Shanshan Yang</creator>
        
        <creator>Jie Peng</creator>
        
        <subject>Transformer model; multimodal data fusion; model optimization; attention mechanism; adaptive fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study proposes an optimized Transformer model for multimodal data fusion tasks, designed to address the challenges of fusing data from different modalities such as text, image, and audio. By improving data preprocessing methods and optimizing the model architecture and fusion strategies, the study significantly improves the model's performance on multimodal tasks. The experimental results show that the optimized model is superior to the benchmark model and other comparison models on key indicators such as accuracy, recall, F1 score, and AUC value, and shows stronger performance and higher stability. In particular, the research addresses data heterogeneity and computing resource consumption by introducing a weighted fusion strategy, a multi-head self-attention mechanism, and a lightweight design. At the same time, the handling of missing modal data is optimized to enhance the robustness of the model. Despite the remarkable results, challenges such as data heterogeneity, computational efficiency, and missing modal data remain. Future research can further optimize modal alignment methods and data preprocessing techniques to improve the performance of the model in practical applications. This research provides new ideas and directions for the application and development of multimodal data fusion technology.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_8-Transformer_Model_Optimization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNN-LinATFormer: Enhancing PM2.5 Prediction Through Feature Assessment and Linear Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160707</link>
        <id>10.14569/IJACSA.2025.0160707</id>
        <doi>10.14569/IJACSA.2025.0160707</doi>
        <lastModDate>2025-07-30T12:59:44.1400000+00:00</lastModDate>
        
        <creator>Yuchen Zhang</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>PM2.5 prediction; air quality forecasting; deep learning; convolutional neural network; linear attention mechanism; channel attention; feature assessment; hybrid model architecture; environmental monitoring; spatiotemporal modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Atmospheric fine particulate matter (PM2.5) poses a serious threat to public health, and its accurate prediction is crucial for environmental management and pollution control. However, existing prediction methods have difficulty effectively capturing the complex nonlinear characteristics and multi-scale spatiotemporal dependencies of PM2.5 concentration changes. To address this challenge, this study proposes CNN-LinATFormer, a hybrid deep learning architecture that combines the local feature extraction capabilities of CNNs with the global dependency modeling advantages of the linear attention mechanism. The model innovatively introduces a feature evaluator to dynamically classify environmental features into three categories, and achieves targeted processing through three specially designed branches: CNN feature extraction, channel attention, and linear attention fusion. In an experimental evaluation based on urban monitoring data covering 9 environmental feature dimensions from 2020 to 2023, CNN-LinATFormer outperforms existing methods on all evaluation indicators, with an RMSE of 8.42μg/m&#179;, which is 21.1% lower than the closest-performing CNN-RF model. The ablation experiments confirm the effectiveness of each component, especially the channel attention mechanism. Case analysis reveals that the model performs well in the low concentration range (RMSE of 3.12μg/m&#179;), while performance in the high pollution range (&gt;150μg/m&#179;) still needs improvement. This study provides a new technical path for air quality prediction, which is of great value to environmental monitoring and public health protection.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_7-CNN_LinATFormer_Enhancing_PM2.5_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Intangible Software Quality Metrics for Effective Project Management Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160706</link>
        <id>10.14569/IJACSA.2025.0160706</id>
        <doi>10.14569/IJACSA.2025.0160706</doi>
        <lastModDate>2025-07-30T12:59:44.1230000+00:00</lastModDate>
        
        <creator>Gu Xin</creator>
        
        <creator>Rozi Nor Haizan Nor</creator>
        
        <creator>Nur Ilyana Ismarau Tajuddin</creator>
        
        <creator>Khairi Azhar Aziz</creator>
        
        <subject>Web-based PMIS; software quality model; intangible software quality metrics; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>In modern organizational environments, project management information systems (PMIS) play an important role in ensuring project success by meeting user requirements, keeping overall costs within the planned budget, and delivering projects at the agreed time. Selecting a high-quality PMIS is vital for the success of project management. A software quality model tailored to PMIS, summarizing the intangible software quality metrics (ISQM) that are effective in evaluating a PMIS, can support better PMIS decision-making for project managers. However, there is limited research on PMIS-tailored quality models. To fill the gap, this study evaluates effective ISQM for PMIS quality assessment. There are two types of PMIS: web-based PMIS and PMIS software applications. To narrow the context, we focus on web-based PMIS, since they are widely used across industry, with examples such as Microsoft Project and Jira. Based on PMIS features, we explored only the tailored quality models that have been proven more appropriate for web-based PMIS, rather than the basic models such as ISO/IEC 9126, ISO/IEC 25010, and Bertoa. This research uses a qualitative approach to conduct commonality screening among these models and identify the key evaluation metrics, such as usability and functionality, and the corresponding qualitative attributes suitable for web-based PMIS quality assessment. The selected metrics and attributes form a web-based PMIS-tailored quality model. A scoring mechanism is introduced on top of this model, allowing project managers to make clear comparisons among different web-based PMISs, leading to effective web-based PMIS selection for project management.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_6-Evaluating_Intangible_Software_Quality_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power-Aware Video Transmission in 5G Telemedicine: Challenges, Solutions, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160705</link>
        <id>10.14569/IJACSA.2025.0160705</id>
        <doi>10.14569/IJACSA.2025.0160705</doi>
        <lastModDate>2025-07-30T12:59:44.0930000+00:00</lastModDate>
        
        <creator>Qiuhong SHI</creator>
        
        <creator>Mingjing CAO</creator>
        
        <creator>Yun BAI</creator>
        
        <subject>Telemedicine; Internet of Things; energy efficiency; video transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>The transformation of healthcare delivery through telemedicine has been significantly accelerated by the deployment of 5G networks and the integration of Internet of Things (IoT) technologies. These advancements enable real-time video-based medical services, including remote operations, urgent care, mobile procedures, and virtual consultations. However, the energy limitations of IoT devices raise significant concerns regarding long-term video quality and system efficiency. This survey reviews state-of-the-art solutions for enhancing video transmission in telemedicine environments that are constrained by energy consumption but made possible by 5G connectivity. The study discusses recent advances, including efficient video compression techniques, computation offloading via edge computing, adaptive streaming procedures, and dynamic 5G architecture-aware resource scheduling, such as network slicing. It examines the trade-off between power efficiency and video performance for various telehealth scenarios. Using a scenario-based analysis with a unifying integration framework, this work advances research into energy-efficient e-health systems.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_5-Power_Aware_Video_Transmission_in_5G_Telemedicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brightness-Aware Generative Adversarial Network for Low-Light Image Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160704</link>
        <id>10.14569/IJACSA.2025.0160704</id>
        <doi>10.14569/IJACSA.2025.0160704</doi>
        <lastModDate>2025-07-30T12:59:44.0630000+00:00</lastModDate>
        
        <creator>Huafei Zhao</creator>
        
        <creator>Mideth Abisado</creator>
        
        <subject>Low-light image enhancement; generative adversarial networks; U-Net; PatchGAN; attention mechanism; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Images captured in low-light conditions frequently exhibit poor visibility, excessive noise, and color distortion, which substantially impair both computer vision systems and human visual perception. Although numerous enhancement techniques have been developed, producing visually appealing results with well-maintained structural details and natural color reproduction continues to pose significant challenges. To address these limitations, this paper presents a Brightness-Aware Generative Adversarial Network (BA-GAN) for robust low-light image enhancement (LLIE). Our framework employs a U-Net-based generator that effectively captures multi-scale contextual features while preserving fine image details through skip connections. The key innovation lies in our novel Brightness Attention Mechanism Module, integrated within the decoder, which dynamically directs the network&#39;s focus to regions requiring substantial illumination correction. To ensure local photorealism, this paper adopts a PatchGAN discriminator architecture. The complete model is trained on the LOL dataset using a composite loss function combining: (1) adversarial loss for realistic image generation, (2) brightness attention loss for maintaining brightness accuracy, and (3) perceptual loss to maintain structural and semantic fidelity. Extensive experiments validate that our BA-GAN outperforms current state-of-the-art methods, achieving superior performance on both quantitative metrics (PSNR: 20.7127, SSIM: 0.7963, LPIPS: 0.2271) and qualitative visual assessments. The enhanced images demonstrate significantly improved visibility while effectively suppressing noise and preserving natural color characteristics.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_4-Brightness_Aware_Generative_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Context Evaluation of an Indoor–Outdoor AR Navigation System in a University Campus Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160703</link>
        <id>10.14569/IJACSA.2025.0160703</id>
        <doi>10.14569/IJACSA.2025.0160703</doi>
        <lastModDate>2025-07-30T12:59:44.0170000+00:00</lastModDate>
        
        <creator>Toma Marian-Vladut</creator>
        
        <creator>Pascu Paul</creator>
        
        <creator>Turcu Corneliu Octavian</creator>
        
        <subject>Augmented reality; hybrid navigation; spatial computing; usability testing; indoor-outdoor contexts; ARCore; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This paper presents a comparative evaluation of a mobile augmented reality (AR) navigation application designed for both indoor and outdoor university environments. Building on a previously validated system for indoor guidance, the current study extends the deployment to outdoor campus spaces without relying on GPS or additional infrastructure. Using visual positioning and spatial anchors, the same application provides real-time AR cues and audio instructions to support wayfinding across different spatial contexts. The principal aim of this study is to determine whether a unified AR navigation system can deliver a consistent, infrastructure-free user experience in both indoor and outdoor university environments. A within-subjects study was conducted with 256 university students who completed both indoor and outdoor navigation tasks. Usability and user acceptance were assessed using the System Usability Scale (SUS) and constructs from the Technology Acceptance Model (TAM). Results revealed consistent user experience across both contexts, with no statistically significant differences in perceived intuitiveness, usefulness, engagement, behavioral intention to reuse, or localization accuracy. A significant difference was found only in perceived AR content loading speed, which was rated slightly higher indoors. These findings demonstrate the feasibility of a unified AR navigation system for academic campuses and provide practical insights into its scalability and user-centered design.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_3-Cross_Context_Evaluation_of_an_Indoor_Outdoor_AR_Navigation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an Empathetic Conversational Agent for Student Mental Health: A Pilot Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160702</link>
        <id>10.14569/IJACSA.2025.0160702</id>
        <doi>10.14569/IJACSA.2025.0160702</doi>
        <lastModDate>2025-07-30T12:59:43.9700000+00:00</lastModDate>
        
        <creator>Kaichi Minami</creator>
        
        <creator>Choi Dongeun</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Conversational agent; mental health; large language models; prompt design; empathy; chatbot evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>This study presents the design and evaluation of a conversational agent aimed at supporting university students&#39; mental health. We implemented two variants of a chatbot, referred to as A1 and A2, using large language models (LLMs). A1 employed a baseline prompt reflecting a structured yet neutral counseling style, while A2 was an enhanced version incorporating feedback from psychiatrists and findings from a preliminary study. Emotionally rich expressions, conversational variation, and mild self-disclosure were introduced in A2. A mixed-method user study with 18 participants was conducted to compare A1, A2, and human interactions. Results indicated that A2 significantly improved users’ perception of empathy and engagement compared to A1, though human-level rapport was not fully achieved. These findings highlight the role of prompt design in creating emotionally responsive AI companions for mental health support.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_2-Designing_an_Empathetic_Conversational_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Trust in Human-AI Collaboration: A Conceptual Review of Operator-AI Teamwork</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160701</link>
        <id>10.14569/IJACSA.2025.0160701</id>
        <doi>10.14569/IJACSA.2025.0160701</doi>
        <lastModDate>2025-07-30T12:59:43.9070000+00:00</lastModDate>
        
        <creator>Abduljaleel Hosawi</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Operator-AI collaboration; trust calibration; trust dynamics; explainability; transparency; trust repair mechanisms; cross-cultural trust; Clinical Decision Support Systems (CDSS); AI autonomy and influence; ethical considerations in AI; team performance; AI system characteristics; operator competencies; contextual factors; framework; limitations; robustness; Human-AI teaming; design principles; human competencies; predictability; reliability; understandability; over-reliance; automation bias; under-trust; trust measurement; trust erosion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(7), 2025</description>
        <description>Trust is vital to collaborative work between operators and AI. Yet important elements of its nature remain to be investigated, including the dynamic process of trust formation, growth, decline, and even death between an operator and an AI. This review analyzes how the dynamic development of trust is shaped by team performance and its complex interaction with factors related to AI system characteristics, operator competencies, and contextual factors. It summarizes current concepts, theories, and models to propose a framework for enhancing trust, and analyzes the current understanding of trust in human-AI collaboration, highlighting key gaps and limitations such as a lack of robustness, poor explainability, and ineffective collaboration design. The findings emphasize the importance of key components in this collaborative environment, including operator capabilities and AI system characteristics, underscoring their impact on trust. This study advances understanding of the nature of operator-AI collaboration and the dynamics of trust calibration. Through a multidisciplinary approach, it also emphasizes the impact of explainability, transparency, and trust repair mechanisms, and highlights how operator-AI systems can be improved through design principles and the development of human competencies to enhance collaboration.</description>
        <description>http://thesai.org/Downloads/Volume16No7/Paper_1-Enhancing_Trust_in_Human_AI_Collaboration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Conventional Techniques for House Electricity Consumption Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606108</link>
        <id>10.14569/IJACSA.2025.01606108</id>
        <doi>10.14569/IJACSA.2025.01606108</doi>
        <lastModDate>2025-06-30T13:21:19.9770000+00:00</lastModDate>
        
        <creator>Sandra Pajares Centeno</creator>
        
        <creator>Hugo Alatrista-Salas</creator>
        
        <subject>Electricity consumption; forecasting; recurrent neural networks; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Electricity consumption monitoring is the automated process of recording, processing, and analyzing electricity usage in real time to make informed decisions. This research aims to implement an artificial intelligence- and deep learning-based methodology to forecast monthly electricity consumption in Tacna, Peru, and generate decision-making indicators. To this end, we used electricity consumption records from Electrosur S.A., the company responsible for electricity distribution and marketing in the departments of Tacna and Moquegua, from February 2015 to December 2022 (a total of 95 months). We compared three artificial intelligence models in this context: i) eXtreme Gradient Boosting (XGBoost), ii) Light Gradient Boosting (LGBM), and iii) Prophet. While all models effectively forecasted electricity consumption, the Prophet model demonstrated superior performance, achieving a mean absolute percentage error (MAPE) of 0.7% compared to actual consumption values. Additionally, the study discusses the potential of recurrent neural networks to further enhance predictive accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_108-Comparison_of_Conventional_Techniques_for_House_Electricity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Classification Using Enhanced Binary Wind Driven Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606107</link>
        <id>10.14569/IJACSA.2025.01606107</id>
        <doi>10.14569/IJACSA.2025.01606107</doi>
        <lastModDate>2025-06-30T13:21:19.9430000+00:00</lastModDate>
        
        <creator>Jaffar Atwan</creator>
        
        <creator>Mohammad Wedyan</creator>
        
        <creator>Ahmad Hamadeen</creator>
        
        <creator>Qusay Bsoul</creator>
        
        <creator>Ayat Alrosan</creator>
        
        <creator>Ryan Alturki</creator>
        
        <subject>Text classification; Arabic documents; wind driven optimization algorithm; simulating annealing; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Document classification using supervised machine learning is now widely used on the internet and in digital libraries. Several studies have focused on English-language document classification. However, Arabic text exhibits high morphological variation, which leads to a large number of extracted features and increases the dimensionality of the classification task. To reduce the curse of dimensionality in Arabic text classification, a wrapper feature selection (FS) method is proposed in this study. In more detail, a hybrid metaheuristic model based on Wind Driven Optimization and Simulated Annealing, known as WDFS, is designed to solve the FS task for Arabic text. The Wind Driven (WD) method is initially introduced to optimize the FS task in the exploration phase. Then, WD is hybridized with simulated annealing as a local search in the exploitation phase to enhance the solutions located by WD. Three classifiers are utilized to evaluate the features selected by the proposed WDFS: K-nearest Neighbor, Na&#239;ve Bayesian, and Decision Tree. The proposed WDFS method was assessed on four selected groups of files from the benchmark TREC Arabic newswire dataset. Comparative results showed that the WDFS method outperforms other existing Arabic text classification methods in terms of accuracy. The obtained results reveal the high potential of WDFS in reliably searching the feature space to obtain the optimal combination of features.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_107-Text_Classification_Using_Enhanced_Binary_Wind_Driven_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake News Detection on Kashmir Issue Using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606106</link>
        <id>10.14569/IJACSA.2025.01606106</id>
        <doi>10.14569/IJACSA.2025.01606106</doi>
        <lastModDate>2025-06-30T13:21:19.9270000+00:00</lastModDate>
        
        <creator>Misbah Kazmi</creator>
        
        <creator>Sadia Nauman</creator>
        
        <creator>Sadaf Abdul Rauf</creator>
        
        <creator>S. Ali</creator>
        
        <creator>Ali Daud</creator>
        
        <creator>Bader Alshemaimri</creator>
        
        <subject>Classification algorithm; fake news; Kashmir issue; machine learning techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Focusing events are sudden, impactful occurrences that spark widespread discussions. Analyzing fake news during such events is challenging due to limited and short-lived datasets. Online fact checkers are slow in identifying fake news, and internet communities and forums become the primary source of news, allowing unchecked dissemination. This study proposes a machine learning approach to predict fake news during the revocation of Article 370 in Kashmir as a focusing event. A small dataset from 20th August to 2nd September was collected, and user profile parameters were utilized for effective classification. Five classifiers were employed, with Random Forest and Logistic Regression achieving the highest F1 scores of 74%. The results identify prevalent words in true and false news tweets, aiding fake news detection. This approach mitigates misinformation during events with limited data, contributing to a reliable online environment. The research is valuable for major geopolitical shifts, natural disasters, and social movements.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_106-Fake_News_Detection_on_Kashmir_Issue.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Computational Complexity in CNNs: A Focus on VGG19 Pruning and Quantization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606105</link>
        <id>10.14569/IJACSA.2025.01606105</id>
        <doi>10.14569/IJACSA.2025.01606105</doi>
        <lastModDate>2025-06-30T13:21:19.8800000+00:00</lastModDate>
        
        <creator>Md. Mijanur Rahman</creator>
        
        <creator>Anik Datta</creator>
        
        <creator>Md. Sabiruzzaman</creator>
        
        <creator>Md Samim Ahmed Bin Hossain</creator>
        
        <subject>VGG19 Model optimization; model compression; pruning; quantization; structured pruning; unstructured pruning; memory management; quantization-aware training; 8-bit; 4-bit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Convolutional Neural Network (CNN) models are effective in computer vision and have gained popularity due to their strong performance in visual tasks. Nevertheless, models with architectures such as VGG19 are expensive in terms of computational resources and require large amounts of memory, which limits their usage on low-end devices. This study examines how the efficiency of the VGG19 model can be increased using model compression techniques, namely pruning (structured and unstructured) and quantization (8-bit and 4-bit Quantization-Aware Training, QAT). The efficiency of the individual compression approaches was tested by thoroughly evaluating VGG19 on the MNIST, CIFAR-10, and Oxford-IIIT Pet datasets. Each model was compared against the baseline in terms of accuracy, model size, inference time, model complexity, CPU usage, and memory usage. The applied QAT approach reduced the model size by 75% with a drop in computational cost across all methods. In addition, 8-bit QAT achieved substantial compression alongside faster inference with minimal impact on accuracy. The highest compression and sparsity achieved by 4-bit QAT was 48%, but it proved less effective, as it reduced accuracy on complex datasets and added computational overhead on a T4 GPU. Structured pruning resulted in faster inference, while unstructured pruning also demonstrated good results in retaining accuracy and even improving it. Pruning and quantization mechanisms are therefore suggested to simplify the VGG19 architecture so that the model can be deployed efficiently on edge devices without compromising prediction performance.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_105-Reducing_Computational_Complexity_in_CNNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Simulation as a Proactive Defense: A Customizable Platform for Training and Behavioral Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606104</link>
        <id>10.14569/IJACSA.2025.01606104</id>
        <doi>10.14569/IJACSA.2025.01606104</doi>
        <lastModDate>2025-06-30T13:21:19.8670000+00:00</lastModDate>
        
        <creator>Abdulrahman Alsaqer</creator>
        
        <creator>Hussain Almajed</creator>
        
        <creator>Khalid Alarfaj</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Phishing; simulation; awareness; analytics; cyber-security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Phishing is one of the most persistent threats, yet many awareness programs still rely on generic, static training. This paper addresses the gap identified in existing studies by introducing a phishing simulation platform that provides personalized, role-based simulations with real-time behavioral tracking. The platform supports multi-channel delivery (Email, Short Message Service (SMS), and WhatsApp) and can dynamically generate messages using placeholders to simulate realistic attack scenarios. User interactions are visualized on an integrated dashboard, letting organizations assess individual risk and provide immediate awareness feedback. Due to ethical restrictions, real user testing could not be performed; the system was instead tested with simulated data and shown to work with a cloud-ready front end. The solution shows strong potential for enterprise adoption thanks to its more adaptive and engaging approach to cybersecurity training.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_104-Phishing_Simulation_as_a_Proactive_Defense.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Random Forest Model Based on Machine Learning for Early Detection of Diabetes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606103</link>
        <id>10.14569/IJACSA.2025.01606103</id>
        <doi>10.14569/IJACSA.2025.01606103</doi>
        <lastModDate>2025-06-30T13:21:19.8200000+00:00</lastModDate>
        
        <creator>Inooc Rubio Paucar</creator>
        
        <creator>Cesar Yactayo-Arias</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Data mining; decision tree; diabetes mellitus; machine learning; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Diabetes mellitus presents a growing prevalence at the global level, representing a significant public health challenge. Despite the availability of specific treatments, it is imperative to develop innovative strategies that optimize early detection and management of the disease. This research aims to develop a model for the early detection of diabetes using the Random Forest algorithm, following the Knowledge Discovery in Databases (KDD) methodology, which comprises the phases of selection, preprocessing, transformation, data mining, interpretation, and evaluation. The dataset used includes 520 randomly selected patient records. The model achieved robust performance, with an accuracy of 85%, sensitivity of 75%, and an F1-score of 78%, indicating an adequate balance between precision and sensitivity. Specificity was 78%, while the area under the ROC curve (AUC) reached 86%, demonstrating a high discriminative ability between positive and negative cases. The balanced accuracy was 82%, and the Matthews correlation coefficient (MCC) registered a value of 0.72, confirming the strength and reliability of the model even in the presence of class imbalance. These results demonstrate the effectiveness of the machine learning-based approach for the early detection of diabetes mellitus, with potential application in clinical decision support systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_103-Random_Forest_Model_Based_on_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>XPathia: A Deep Learning Approach for Translating Natural Language into XPath Queries for Non-Technical Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606102</link>
        <id>10.14569/IJACSA.2025.01606102</id>
        <doi>10.14569/IJACSA.2025.01606102</doi>
        <lastModDate>2025-06-30T13:21:19.7870000+00:00</lastModDate>
        
        <creator>Karam Ahkouk</creator>
        
        <creator>Mustapha Machkour</creator>
        
        <subject>Deep learning; XML databases; neural networks; text-to-XPATH; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>XPath is a widely used language for navigating and extracting data from XML documents due to its simple syntax and powerful querying capabilities. However, non-technical users often struggle to retrieve the needed information from XML files, as they lack knowledge of XML structures and query languages like XPath. To address this challenge, we propose XPathia, a novel deep learning-based model that automatically translates natural language questions into corresponding XPath queries. Our approach employs supervised learning on an annotated XML dataset to learn accurate mappings between natural language and structured XPath expressions. We evaluate XPathia using two standard metrics: Component Matching (CM) and Exact Matching (EM). Experimental results demonstrate that XPathia achieves state-of-the-art performance with an accuracy of 25.85% on the test set.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_102-XPathia-A_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Disaster Risk Management: A Scientometric Mapping of Evolution, Collaboration, and Emerging Trends (2003–2025)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606101</link>
        <id>10.14569/IJACSA.2025.01606101</id>
        <doi>10.14569/IJACSA.2025.01606101</doi>
        <lastModDate>2025-06-30T13:21:19.7570000+00:00</lastModDate>
        
        <creator>Hssaine Hamid</creator>
        
        <creator>ELouadi Abedlmajid</creator>
        
        <subject>Artificial Intelligence; disaster risk management; machine learning; deep learning; remote sensing; bibliometric analysis; natural disasters; geospatial AI; early warning systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Recent years have seen a dramatic increase in the number and severity of natural disasters, driven in part by climate change and urbanization. Artificial Intelligence (AI) appears to be a promising technology that can transform disaster risk management (DRM) and provide new opportunities for prediction, monitoring, response, and recovery. The present study performs a bibliometric review of AI applications in DRM, drawing on a collection of 7842 scientific articles extracted from the Scopus, Web of Science, and OpenAlex databases covering 2003 to 2025. By exploring trends in publications, authorship, international collaboration, and research topics, the study reveals the development and current status of AI in disaster management. The results illustrate apparent growth in scientific interest in the field, the dominance of machine learning and deep learning methodologies, and the rise of geospatial AI, remote sensing, and social media analysis in disaster preparedness and response. Other issues, including data quality, ethics, technology, and trust in AI systems, are also considered. This study offers helpful perspectives on the status quo and future development of AI-based DRM.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_101-Artificial_Intelligence_in_Disaster_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Agents in Disaster Risk Management: A Systematic Review of Advances and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01606100</link>
        <id>10.14569/IJACSA.2025.01606100</id>
        <doi>10.14569/IJACSA.2025.01606100</doi>
        <lastModDate>2025-06-30T13:21:19.7400000+00:00</lastModDate>
        
        <creator>Hssaine Hamid</creator>
        
        <creator>ELouadi Abedlmajid</creator>
        
        <subject>Intelligent agents; artificial intelligence; disaster risk management; predictive analytics; resilience; early warning systems; geospatial AI; disaster response; ethical challenges; machine learning; climate change adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Artificial Intelligence (AI) has emerged as a transformative technology in the domain of Disaster Risk Management (DRM), offering new possibilities for forecasting, preparedness, and rapid response in the face of increasingly frequent and complex natural disasters. This systematic literature review synthesizes the state-of-the-art advances in AI-driven intelligent agents applied to DRM, covering domains such as early warning systems, geospatial analysis, damage assessment, evacuation planning, and decision support. It critically examines the technological innovations, implementation methods, and interdisciplinary approaches that have shaped the evolution of intelligent agent-based solutions in disaster scenarios. Through the analysis of over 7,800 scientific publications indexed in Scopus, Web of Science, and OpenAlex between 2010 and 2025, the review identifies key patterns, application domains, and persistent gaps such as data scarcity, lack of model interpretability, and limited operational deployment. The study also addresses ethical concerns related to AI deployment in high-stakes environments and proposes a roadmap for future integration of intelligent agents with IoT, UAVs, and real-time decision infrastructures. The findings contribute to a deeper understanding of how AI and multi-agent systems can reinforce disaster resilience and inform sustainable and adaptive disaster management strategies at both global and local levels.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_100-Intelligent_Agents_in_Disaster_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Bone Age Growth Disease Detection (BAGDD) Using RSNA Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160699</link>
        <id>10.14569/IJACSA.2025.0160699</id>
        <doi>10.14569/IJACSA.2025.0160699</doi>
        <lastModDate>2025-06-30T13:21:19.6930000+00:00</lastModDate>
        
        <creator>Muhammad Ali</creator>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Saima Noreen Khosa</creator>
        
        <creator>Naila Kiran</creator>
        
        <creator>Humaira Arshad</creator>
        
        <creator>Urooj Akram</creator>
        
        <subject>Bone age estimation; pediatric healthcare; convolutional neural networks; transfer learning; YOLOv3; medical imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Radiological bone age assessment is essential for diagnosing pediatric growth and developmental disorders. The conventional Greulich-Pyle Atlas, though widely used, is manual, time-intensive, and prone to inter-observer variability. While deep learning methods such as Convolutional Neural Networks (CNNs) offer automation potential, most existing models rely on transfer learning from natural image datasets and lack specialization for medical radiographs. This study aims to address the gap by developing a domain-specific, custom CNN for pediatric bone age prediction. This research proposes a customized CNN architecture trained on the RSNA pediatric bone age dataset, which includes over 12,000 annotated hand X-ray images labeled with age and gender. The pipeline incorporates pre-processing techniques such as image resizing, normalization, and Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance input quality. A YOLOv3 object detector is utilized to localize the hand region prior to model training, focusing on the most relevant anatomical structures. Unlike traditional transfer learning models such as ResNet50, VGG19, and InceptionV3, the proposed CNN is tailored for radiographic features using optimized convolutional blocks and domain-aware augmentations. This design improves generalization and reduces overfitting on small or imbalanced subsets. The proposed model achieved a Mean Absolute Error (MAE) of 3.27 months on the test set and 3.08 months on the validation set, outperforming state-of-the-art transfer learning approaches. These results demonstrate the model’s potential for accurate and consistent bone age estimation and highlight its suitability for integration into clinical decision-support systems in pediatric radiology.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_99-Deep_Learning_Based_Bone_Age_Growth_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Habitat Intelligence: How Machine Learning Reveals Species Preferences for Ecological Planning and Conservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160698</link>
        <id>10.14569/IJACSA.2025.0160698</id>
        <doi>10.14569/IJACSA.2025.0160698</doi>
        <lastModDate>2025-06-30T13:21:19.6770000+00:00</lastModDate>
        
        <creator>Meryem Ennakri</creator>
        
        <creator>Soumia Ziti</creator>
        
        <creator>Mohamed Dakki</creator>
        
        <subject>Artificial Intelligence; machine learning; deep learning; species preferences; habitat suitability modeling; species distribution models (SDMs); ecological niche modeling; conservation planning; environmental monitoring; explainable AI (xAI); habitat intelligence; biodiversity management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The emerging confluence between artificial intelligence and ecology has generated a new research frontier, which we refer to as habitat intelligence, aiming to unveil species-environment relationships through data-driven approaches. This systematic literature review (SLR) summarises research up to the current year (2025) on the use of ML and DL models to represent species preferences, habitat suitability, and ecological niches. Based on 365 peer-reviewed studies extracted from SCOPUS, Web of Science, and OpenAlex, we identify four main areas of innovation: automated species identification and ecological monitoring; AI-enhanced species distribution models (SDMs); advanced data collection and processing for ecological research; and conservation-oriented decision support systems. Our review shows that AI has the potential to enable more precise and scalable biodiversity investigations in the age of integrated remote sensing, acoustics, citizen science, and environmental data. However, we also point out pressing challenges such as data paucity, model interpretability, and computational limitations. We suggest that future advancements in this field could come from interdisciplinary cooperation using explainable AI (xAI) and the construction of hybrid models bridging prediction and ecological interpretability. In the end, this review offers a conceptual and methodological roadmap for researchers and conservation practitioners who wish to apply AI in the service of global biodiversity aims.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_98-Habitat_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating cGAN-Enhanced Prediction with Hybrid Intervention Recommendations Systems for Student Dropout Prevention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160697</link>
        <id>10.14569/IJACSA.2025.0160697</id>
        <doi>10.14569/IJACSA.2025.0160697</doi>
        <lastModDate>2025-06-30T13:21:19.6470000+00:00</lastModDate>
        
        <creator>Hassan Silkhi</creator>
        
        <creator>Brahim Bakkas</creator>
        
        <creator>Khalid Housni</creator>
        
        <subject>Student dropout prediction; machine learning in education; personalized intervention systems; Conditional Generative Adversarial Networks(cGAN); Large Language Models (LLMs); hybrid recommendation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Early-warning dashboards in higher education typically stop at tagging students as “at-risk,” offering no concrete guidance for remedial action; this limitation contributes to the loss of thousands of learners each year. Approach. We propose an integrated framework that (i) uses a class-balanced Conditional GAN to augment sparse attrition data, and (ii) couples the resulting XGBoost predictor with a four-mode intervention engine—rule-based, few-shot, fine-tuned LLM, and a novel hybrid strategy—to recommend personalised support. Major findings. Training on GAN-augmented records raises prediction accuracy to 92.79% (a 15.46-point gain over non-augmented baselines), while the hybrid intervention generator attains 94% categorical coverage and the highest specificity score (0.63) albeit at a per-student latency of 61s. Impact. By uniting robust risk prediction with high-quality, actionable interventions, the framework closes the long-standing gap between detection and response, furnishing institutions with a scalable path to materially reduce dropout rates across diverse educational settings.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_97-Integrating_cGAN_Enhanced_Prediction_with_Hybrid_Intervention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Foreign Key Constraints to Maintain Referential Integrity in Distributed Database in Microservices Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160696</link>
        <id>10.14569/IJACSA.2025.0160696</id>
        <doi>10.14569/IJACSA.2025.0160696</doi>
        <lastModDate>2025-06-30T13:21:19.6170000+00:00</lastModDate>
        
        <creator>Shamsa Kanwal</creator>
        
        <creator>Nauman Riaz Chaudhry</creator>
        
        <creator>Reema Choudhary</creator>
        
        <creator>Younus Ahamad Shaik</creator>
        
        <creator>Pankaj Yadav</creator>
        
        <creator>Ayesha Rashid</creator>
        
        <subject>Foreign key constraints; relational mapping; referential integrity; saga pattern; event driven architecture; APIs; microservice; distributed database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In the world of modern software development, microservices architecture has become increasingly popular due to its ability to help developers build large, complex applications that are more agile, faster, and more scalable. In large-scale applications (such as e-commerce, healthcare, finance, social media, inventory management, travel booking, content management, and customer relationship management systems) with many interconnected services, it is difficult to keep data accurate and consistent. The concept of referential integrity is applied to validate the data. Referential integrity refers to the preservation of relationships between tables. In a monolithic architecture, where the application and database are closely linked and co-located on the same server, referential integrity via foreign key constraints makes it feasible to preserve consistent and accurate data. In a microservices architecture, however, maintaining referential integrity across distributed databases poses significant challenges due to the decentralized nature of its data management. This study utilizes a hybrid research methodology, combining empirical research and design science research, to discover and address the challenges of maintaining referential integrity in distributed databases in a microservices architecture, and evaluates response time through comparison and analysis against existing models. The results of the evaluation in terms of response time are presented in this work.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_96-Foreign_Key_Constraints_to_Maintain_Referential_Integrity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Domain Evaluation of Large Language Models for Abstractive Text Summarization: An Empirical Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160695</link>
        <id>10.14569/IJACSA.2025.0160695</id>
        <doi>10.14569/IJACSA.2025.0160695</doi>
        <lastModDate>2025-06-30T13:21:19.5830000+00:00</lastModDate>
        
        <creator>Walid Mohamed Aly</creator>
        
        <creator>Taysir Hassan A. Soliman</creator>
        
        <creator>Amr Mohamed AbdelAziz</creator>
        
        <subject>Large language models; natural language processing; automatic text summarization; prompt engineering; summarization evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text; however, their effectiveness in abstractive summarization across diverse domains remains underexplored. This study conducts a comprehensive evaluation of six open-source LLMs across four datasets: CNN/Daily Mail and NewsRoom (news), SAMSum (dialogue), and ArXiv (scientific), using zero-shot and in-context learning techniques. Performance was assessed using ROUGE and BERTScore metrics, and inference time was measured to examine the trade-off between accuracy and efficiency. For long documents, a sentence-based chunking strategy is introduced to overcome context limitations. Results reveal that in-context learning consistently enhances summarization quality, and chunking improves performance on long scientific texts. Model performance varies according to architecture, scale, prompt design, and dataset characteristics. The qualitative analysis further demonstrates that the top-performing models produce summaries that are coherent, informative, and contextually aligned with human-written references, despite occasional lexical divergence or factual omissions. These findings provide practical insights into designing instruction-based summarization systems using open-source LLMs.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_95-Cross_Domain_Evaluation_of_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MITG-CU: Multimodal Interaction Temporal Graphs Approach for Conversational Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160694</link>
        <id>10.14569/IJACSA.2025.0160694</id>
        <doi>10.14569/IJACSA.2025.0160694</doi>
        <lastModDate>2025-06-30T13:21:19.5370000+00:00</lastModDate>
        
        <creator>Qian Xing</creator>
        
        <creator>Yaqin Qiu</creator>
        
        <creator>Minglu Chi</creator>
        
        <creator>Xuewei Li</creator>
        
        <creator>Changyi Gao</creator>
        
        <subject>Emotion recognition; multimodal interaction; relational temporal graph; cross-modal interaction; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In conversational emotion recognition, the complementary relationship between context information and multimodal data cannot be fully exploited, which limits the comprehensiveness and accuracy of emotion recognition. To address these challenges, this paper proposes a Multimodal Interactive Temporal Graph Conversation Understanding model (MITG-CU) composed of textual, audio, and visual modalities. Firstly, pre-extracted textual, audio, and visual features are adopted as the input of a Transformer, and the attention mechanism is utilized to capture cross-modal context correlation information. Furthermore, structural relationships and temporal dependencies between utterances are captured through a local-level relational temporal graph module, while inter-modal interaction weights are dynamically adjusted by a global-level pairwise cross-modal interaction mechanism. By integrating these two complementary hierarchical structures, hierarchical multimodal information fusion is achieved, and the model's adaptability to complex conversation scenarios is enhanced. Finally, feature fusion is carried out using a gating mechanism, and sentiment classification is conducted. Experimental results demonstrate that the proposed model outperforms six common baseline methods across metrics including accuracy, precision, recall, and F1-score. In particular, Weighted-F1 and Accuracy improved by 0.28% and 0.39% respectively, confirming the effectiveness of the model.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_94-MITG_CU_Multimodal_Interaction_Temporal_Graphs_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Traffic Sign Detection with Convolutional Neural Networks: A Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160693</link>
        <id>10.14569/IJACSA.2025.0160693</id>
        <doi>10.14569/IJACSA.2025.0160693</doi>
        <lastModDate>2025-06-30T13:21:19.5070000+00:00</lastModDate>
        
        <creator>OUAHBI Younesse</creator>
        
        <creator>ZITI Soumia</creator>
        
        <subject>Traffic sign detection; convolutional neural networks; deep learning; road safety; intelligent transportation systems; real-time detection; artificial intelligence; transportation efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Traffic sign detection is a key task in intelligent transportation systems, supporting road safety and traffic flow. This study introduces RoadNet, a lightweight Convolutional Neural Network (CNN) designed for real-time detection and classification of traffic signs in Moroccan road environments. The system addresses challenges such as occlusion, illumination variability, and diverse sign structures. Built on deep learning techniques, RoadNet leverages multiscale feature extraction and transfer learning to improve detection accuracy and generalization. The dataset includes four sign categories: speed limit, stop, crosswalk, and traffic light. Extensive image preprocessing and augmentation were applied to increase robustness. Results show that RoadNet outperforms baseline models like VGG16, achieving 96% training accuracy and 88.6% validation accuracy, with superior precision, recall, and F1-score. The model maintains low loss and performs reliably under constrained resources. This research confirms the effectiveness of CNN-based architectures for traffic sign detection in real-world Moroccan settings. It contributes to the deployment of AI-powered solutions for smart mobility and logistics, especially in regions with limited computational resources.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_93-Advancing_Traffic_Sign_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning in Cephalometric Analysis: A Scoping Review of Automated Landmark Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160692</link>
        <id>10.14569/IJACSA.2025.0160692</id>
        <doi>10.14569/IJACSA.2025.0160692</doi>
        <lastModDate>2025-06-30T13:21:19.4900000+00:00</lastModDate>
        
        <creator>Idriss Tafala</creator>
        
        <creator>Fatima-Ezzahraa Ben-Bouazza</creator>
        
        <creator>Aymane Edder</creator>
        
        <creator>Oumaima Manchadi</creator>
        
        <creator>Bassma Jioudi</creator>
        
        <subject>Artificial Intelligence; deep learning; cephalometric analysis; landmark detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Cephalometric landmark identification is fundamental for accurate cephalometric analysis, serving as a cornerstone in orthodontic diagnosis and treatment planning. However, manual tracing is a labor-intensive process prone to inter-observer variability and human error, highlighting the need for automated methods to improve precision and efficiency. Recent advances in Deep Learning have enabled automatic detection of cephalometric landmarks, thereby increasing accuracy and consistency while reducing processing time. This scoping review examines contemporary applications of Deep Learning in cephalometric landmark detection and cephalometric analysis from 2019 to January 2025. We searched IEEEXplore, ScienceDirect, arXiv, Springer, and PubMed databases, identifying 601 articles, of which 76 met inclusion criteria after rigorous screening. Our analysis revealed significant performance improvements, with Deep Learning methods achieving Success Detection Rates (SDR) of 75-90% at 2mm thresholds, substantially outperforming traditional methods. Geographical analysis identified China, South Korea, and the United States as leading research centers, with commercial applications like WebCeph and CephX gaining clinical adoption. Deep Learning improves the accuracy and efficiency of cephalometric analysis; however, challenges persist regarding dataset standardization and clinical validation. These technologies show promising potential to support novice clinicians, streamline radiological examinations, and improve landmark identification reliability in routine orthodontic practice.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_92-Deep_Learning_in_Cephalometric_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Feature Extraction for Accurate Human Action Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160691</link>
        <id>10.14569/IJACSA.2025.0160691</id>
        <doi>10.14569/IJACSA.2025.0160691</doi>
        <lastModDate>2025-06-30T13:21:19.4600000+00:00</lastModDate>
        
        <creator>Tarek Elgaml</creator>
        
        <creator>Ali Saudi</creator>
        
        <creator>Mohamed Taha</creator>
        
        <subject>Human activity recognition; human-computer interaction; spatial features; temporal features; SMART frame selection; hierarchical fusion network; HMDB51 dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This paper tackles the challenge of achieving accurate and computationally efficient human activity recognition (HAR) in videos. Existing methods often fail to effectively balance spatial details (e.g. body poses) with long-term temporal dynamics (e.g. motion patterns), particularly in real-world scenarios characterized by cluttered backgrounds and viewpoint variations. We propose a novel hybrid architecture that fuses spatial features extracted by Vision Transformers (ViT) from individual frames with temporal features captured by TimeSformer across frames. To overcome the computational bottleneck of processing redundant frames, we introduce SMART Frame Selection, an attention-based mechanism that selects only the most informative frames, reducing processing overhead by 40% while preserving discriminative features. Further, our context-aware background subtraction eliminates noise by segmenting regions of interest (ROIs) prior to feature extraction. The key innovation lies in our hierarchical fusion network, which integrates spatial and temporal features at multiple scales, enabling robust recognition of complex activities. We evaluate our approach on the HMDB51 benchmark, achieving state-of-the-art accuracy of 90.08%, outperforming competing methods like CNN-LSTM (85.2%), GeoDeformer (88.3%), and k-ViViT (89.1%) in precision, recall, and F1-score. Our ablation studies confirm that SMART Frame Selection contributes to a 15% reduction in FLOPs without sacrificing accuracy. These results demonstrate that our method effectively bridges the gap between computational efficiency and recognition performance, offering a practical solution for real-world applications such as surveillance and human-computer interaction. Future work will extend this framework to multi-modal inputs (e.g. depth sensors) for enhanced robustness.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_91-Enhanced_Feature_Extraction_for_Accurate_Human_Action.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Blockchain and Smart Card Technologies for Secure Healthcare Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160690</link>
        <id>10.14569/IJACSA.2025.0160690</id>
        <doi>10.14569/IJACSA.2025.0160690</doi>
        <lastModDate>2025-06-30T13:21:19.4270000+00:00</lastModDate>
        
        <creator>Zayneb Gaouzi</creator>
        
        <creator>Imad Bourian</creator>
        
        <creator>Khalid Chougdali</creator>
        
        <subject>Healthcare; security; blockchain; smart contracts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In recent years, the healthcare sector has faced growing challenges in managing patient data securely and efficiently, especially when it comes to data privacy and the way information is shared across healthcare providers. A number of digital solutions have been proposed over time, but more recently, blockchain has started to gain serious interest. Its structure allows data to remain intact and traceable, while also offering a strong layer of security. This paper explores how blockchain-based smart contracts might be used alongside smart cards to offer a more robust system for protecting patient information. Smart cards bring in a physical barrier that helps limit access to only those who are authorized, while blockchain makes it much harder to tamper with information or centralize control. The suggested method demonstrates how the decentralized and immutable nature of blockchain, combined with the physical authentication provided by smart cards and the automation of smart contracts, improves data security and restricts unauthorized access. The proposed framework is evaluated through smart contract deployment and testing on both the Hardhat local network and the Celo public testnet. The results confirm the practicality and efficiency of the solution and support its potential for real-world application in secure healthcare data management.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_90-Integrating_Blockchain_and_Smart_Card_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sign3DNet: An Enhanced 3D CNN Architecture for Bengali Word-Level Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160689</link>
        <id>10.14569/IJACSA.2025.0160689</id>
        <doi>10.14569/IJACSA.2025.0160689</doi>
        <lastModDate>2025-06-30T13:21:19.3970000+00:00</lastModDate>
        
        <creator>Safi Ullah Chowdhury</creator>
        
        <creator>Nasima Begum</creator>
        
        <creator>Tanjina Helaly</creator>
        
        <creator>Rashik Rahman</creator>
        
        <subject>Bengali sign word recognition; computer vision; deep learning; convolutional neural network; spatial-temporal dynamics; video data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Automated recognition of sign languages has been playing an important role in breaking barriers to communication and inclusion for the deaf and mute community. Several studies have been conducted on Bengali Sign Language (BdSL). However, Bengali Word-Level Sign Language (BdWLSL) remains unexplored due to the lack of large annotated datasets and a stable model. Therefore, in this research, we introduce a large-scale Bengali word-level video dataset and propose a modified 3D Convolutional Neural Network (CNN) architecture for word-level BdSL recognition, emphasizing its ability to capture the spatial and temporal dynamics of video data. The proposed strategy demonstrates strong performance in Bengali word-level sign language recognition by utilizing the spatiotemporal patterns captured by the modified 3D CNN architecture. The proposed model demonstrates its potential for practical use by successfully learning complex hand movements directly from raw video data. The proposed CNN model is benchmarked against traditional deep learning techniques, namely the Temporal Shift Module (TSM), Long Short-Term Memory (LSTM), and a default 3D-CNN, providing a comprehensive comparison of their strengths and limitations. Experiments are conducted using a structured video dataset containing 102 Bengali sign-word classes. To ensure privacy, the volunteers’ faces were blurred and only landmark data extracted using MediaPipe, rendered on black backgrounds, were used for training. The experimental result analysis shows that the proposed 3D-CNN model achieves a satisfactory accuracy of 58.25%, demonstrating its potential for word-level sign language recognition tasks. To our knowledge, this is the very first pilot study for BdWLSL recognition. Hence, we consider the 58.25% recognition rate of the proposed modified 3D-CNN architecture to be satisfactory and a promising starting point for future researchers in the same field.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_89-Sign3DNet_An_Enhanced_3D_CNN_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Prenatal and Postnatal Images for Detecting Down Syndrome in Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160688</link>
        <id>10.14569/IJACSA.2025.0160688</id>
        <doi>10.14569/IJACSA.2025.0160688</doi>
        <lastModDate>2025-06-30T13:21:19.3670000+00:00</lastModDate>
        
        <creator>Labanti Singha</creator>
        
        <creator>Iqbal Ahmed</creator>
        
        <subject>Down syndrome; prenatal ultrasound; postnatal facial recognition; CNN; vision transformer; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Down syndrome is a genetic disorder caused by the presence of an extra copy of chromosome 21, affecting both neurological development and physical features. Early and accurate diagnosis is critical for ensuring timely medical intervention and support. This study presents a comparative analysis of prenatal (ultrasound) and postnatal (facial) imaging modalities for the detection of Down syndrome using deep learning techniques. We employed VGG19, ResNet50, DenseNet121, MobileNetV2, and the Vision Transformer for image classification. An ensemble model integrating four CNN architectures achieved superior performance, with 92% test accuracy on prenatal images and 83% on postnatal images. Among the individual models, ResNet50 outperformed the others across both modalities. Evaluation metrics, including accuracy, precision, recall, and F1-score, confirm the effectiveness of the proposed framework. These results highlight the potential of ensemble learning to enhance the early detection of Down syndrome and improve accessibility to healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_88-Comparative_Study_of_Prenatal_and_Postnatal_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Machine Learning Frameworks for Robust Ovarian Cancer Detection Using Feature Selection and Data Balancing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160687</link>
        <id>10.14569/IJACSA.2025.0160687</id>
        <doi>10.14569/IJACSA.2025.0160687</doi>
        <lastModDate>2025-06-30T13:21:19.3330000+00:00</lastModDate>
        
        <creator>DSS LakshmiKumari P</creator>
        
        <creator>Maragathavalli P</creator>
        
        <subject>Ovarian cancer detection; machine learning frame-work; data balancing; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>One of the most serious malignancies that affects women’s health worldwide is ovarian cancer (OC). As a result, prompt, accurate diagnosis and treatment are necessary. This study’s primary objective is to determine whether or not OC is present in a person by using a range of characteristics obtained from health examinations. The article concentrates on twelve ML techniques used for OC diagnosis. The dataset has been altered by applying the borderline SVMSMOTE method to address the class imbalance and the MICE imputation method to impute the missing values in order to enhance the performance of the classifiers. Additionally, the Boruta approach and recursive feature elimination (RFE) have been utilized to identify the most important features, while a hyperparameter tuning strategy has been employed to improve classifier performance and provide ideal solutions. Boruta selected just 50% of the total characteristics and outperformed RFE when considering the most important features. Furthermore, many performance measures are used to determine which classifiers are the best at identifying OC. The voting classifier surpassed state-of-the-art approaches and other machine learning methods with the highest accuracy. The suggested approach obtained the highest averages of 93.06% accuracy, 88.57% precision, 96.88% recall, 92.54% F1-score, and 93.44% AUC-ROC based on experimental results. Experiments show that, in comparison with state-of-the-art techniques, our suggested method can identify OC more accurately.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_87-Comparative_Analysis_of_Machine_Learning_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Polygon-Based Reverse Driving Detection Technique for Enhanced Road Safety</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160686</link>
        <id>10.14569/IJACSA.2025.0160686</id>
        <doi>10.14569/IJACSA.2025.0160686</doi>
        <lastModDate>2025-06-30T13:21:19.3030000+00:00</lastModDate>
        
        <creator>Tara Kit</creator>
        
        <creator>Youngsun Han</creator>
        
        <creator>Anand Nayyar</creator>
        
        <creator>Tae-Kyung Kim</creator>
        
        <subject>Reverse driving detection; lane collapse detection; polygon zones; object detection; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Reverse driving and lane collapse pose serious risks to road safety, especially on complex infrastructures such as multi-lane highways, intersections, and roundabouts. Existing detection systems often depend on rigid lane configurations and struggle to adapt to varied road geometries and environmental conditions. Prior works are typically limited to straight, multi-lane roads and rely on automated boundary extraction, making them unsuitable for irregular traffic layouts. To address this gap, the objective of this research paper is to propose a vision-based detection system that combines the YOLOv8 object detector with a dynamic polygon-based zone management strategy. The system aims to detect reverse driving and lane collapse incidents in real time using CCTV footage, without requiring additional sensors. Its key novelty lies in manually configurable zones and the integration of ByteTrack for robust vehicle tracking across complex scenes. The system was tested under diverse real-world parameters, including different road types (single-lane, multi-lane, roundabouts), lighting conditions (day and night), and traffic behaviors (normal flow, reverse, and collapse). Visual evaluations show consistent and logically coherent results across scenarios, highlighting the system’s practical effectiveness for real-time intelligent traffic monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_86-Dynamic_Polygon_Based_Reverse_Driving_Detection_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Jobs, Shaping Economies: Bibliometric Insights into AI and Big Data in Workforce Demand Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160685</link>
        <id>10.14569/IJACSA.2025.0160685</id>
        <doi>10.14569/IJACSA.2025.0160685</doi>
        <lastModDate>2025-06-30T13:21:19.2730000+00:00</lastModDate>
        
        <creator>EL Massi Fouad</creator>
        
        <creator>ELouadi Abdelmajid</creator>
        
        <subject>Big data; Artificial Intelligence; predictive modeling; bibliometric analysis; natural language processing; labor market analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The integration of Big Data and Artificial Intelligence (AI) is fundamentally transforming how labor markets are analyzed, predicted and managed. Despite significant advances in using these technologies for workforce analytics, the field suffers from several critical limitations: existing approaches predominantly rely on data from online job portals that may not capture informal employment sectors, current predictive models lack robustness in long-term forecasting under rapid economic transformations and cross-border data integration remains insufficiently addressed for comprehensive global analyses. Moreover, the field lacks a structured, quantitative assessment of scientific production that provides a comprehensive overview of research developments, with most existing studies being case-specific or focusing on narrow applications, leaving significant gaps in understanding the intellectual structure, key contributors and thematic evolution of this interdisciplinary domain. To address these research gaps, this study presents the first comprehensive bibliometric analysis of global scientific research examining the intersection of AI, Big Data and labor market prediction. Drawing on a systematic dataset of 276 publications from Scopus, Web of Science and OpenAlex databases spanning 2003 to 2025, this research employs advanced bibliometric techniques to map the intellectual landscape of this rapidly evolving field. Through a structured four-phase methodological framework incorporating performance analysis, science mapping and thematic evolution, the study identifies research trends, intellectual structures, influential contributors and emerging themes. The analysis reveals significant developments in predictive modeling, natural language processing, and hybrid AI approaches for recruitment forecasting and workforce analytics, while highlighting critical challenges posed by algorithmic bias and ethical considerations in AI-driven systems. 
Key contributions include: 1) the first systematic scientific mapping of the AI-Big Data-labor market intersection, 2) identification of research gaps and future directions for long-term labor market prediction, 3) comprehensive analysis of institutional networks and collaborative patterns, and 4) evidence-based recommendations for addressing data integration and model interpretability challenges. The findings offer actionable insights for researchers, policymakers and practitioners seeking to leverage intelligent systems to shape the future of work in the digital economy while addressing current methodological limitations.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_85-Predicting_Jobs_Shaping_Economies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Influence of Familiarity with Traffic Regulations on Road Safety: A Simulated Study on Roundabouts and Intersections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160684</link>
        <id>10.14569/IJACSA.2025.0160684</id>
        <doi>10.14569/IJACSA.2025.0160684</doi>
        <lastModDate>2025-06-30T13:21:19.2230000+00:00</lastModDate>
        
        <creator>Raghda Alqurashi</creator>
        
        <creator>Hasan J. Alyamani</creator>
        
        <creator>Nesreen Alharbi</creator>
        
        <creator>Hasan Sagga</creator>
        
        <subject>Driving behavior; driving performance; familiarity with traffic regulations; road intersections; roundabouts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>International drivers who come from keep-right countries and drive in keep-left countries are frequently involved in road accidents due to unfamiliarity with keep-left traffic regulations. Owing to this unfamiliarity, drivers’ performance and behavior are subject to change. The objective of this study was to explore the effects of familiarity with traffic regulations on driving performance and behavior at roundabouts and intersections. To achieve this in a safe environment, twenty-one male familiar drivers and thirty-four male unfamiliar drivers participated in simulated driving under keep-left traffic regulations. The factors observed were not fastening the seat belt, entering the driving simulator from the wrong side, using an improper approaching lane, not signaling, speeding, driving against the traffic flow, and using an improper exiting lane at each roundabout and intersection. The Mann-Whitney U test was used to compare driving behavior and performance between the familiarity groups. Unfamiliar drivers made significantly more driving mistakes on roundabouts than familiar drivers. Also, some unfamiliar drivers got inside the vehicle from the passenger side instead of the driver side and drove against the traffic flow inside the roundabouts. The implications for familiar and unfamiliar driving can be considered for future research development.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_84-The_Influence_of_Familiarity_with_Traffic_Regulations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LASSO-Based Feature Extraction with Adaptive Windowing via DTW for Fault Diagnosis in Rotating Machinery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160683</link>
        <id>10.14569/IJACSA.2025.0160683</id>
        <doi>10.14569/IJACSA.2025.0160683</doi>
        <lastModDate>2025-06-30T13:21:19.1930000+00:00</lastModDate>
        
        <creator>Jirayu Samkunta</creator>
        
        <creator>Patinya Ketthong</creator>
        
        <creator>Nghia Thi Mai</creator>
        
        <creator>Md Abdus Samad Kamal</creator>
        
        <creator>Iwanori Murakami</creator>
        
        <creator>Kou Yamada</creator>
        
        <creator>Nattagit Jiteurtragool</creator>
        
        <subject>Rotating machinery; fault analysis; feature extraction; LASSO regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In real-world engineering environments, faults in rotating machines typically occur over short periods, which leads to poor stability and low accuracy in fault diagnosis. The traditional fault diagnosis of rotating machinery relies on analyzing time-series data to detect system degradation and faulty components. However, the complexity of rotating machinery and the presence of multiple fault types across different operating conditions pose challenges for conventional classification techniques. This paper proposes a LASSO regression-based feature extraction method with an adaptive window based on Dynamic Time Warping (DTW) for fault diagnosis in rotating machinery. The approach effectively extracts features by modeling the relationship between shaft rotational speeds (25, 50, and 75 rpm) and vibration signals from piezoelectric accelerometers. This research focuses on single and combined fault analysis, covering 11 fault types, enhancing its applicability to real-world fault conditions. To assess its effectiveness, the proposed method is evaluated against Principal Component Analysis (PCA) and Independent Component Analysis (ICA) using the K-Nearest Neighbors (KNN) classifier. The experimental results demonstrate that the LASSO-based approach consistently achieves high classification accuracy across different speeds, outperforming PCA and ICA in both single and double fault scenarios. These findings highlight LASSO regression as a robust feature extraction technique for improving fault detection and predictive maintenance in rotating machinery.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_83-LASSO_Based_Feature_Extraction_with_Adaptive_Windowing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Endometriosis Lesion Classification Using Deep Transfer Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160682</link>
        <id>10.14569/IJACSA.2025.0160682</id>
        <doi>10.14569/IJACSA.2025.0160682</doi>
        <lastModDate>2025-06-30T13:21:19.1630000+00:00</lastModDate>
        
        <creator>Shujaat Ali Zaidi</creator>
        
        <creator>Varin Chouvatut</creator>
        
        <creator>Chailert Phongnarisorn</creator>
        
        <subject>Endometriosis classification; lesion detection; medical image classification; deep learning; transfer learning; DCGAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In resource-limited settings, assisting physicians with disease identification can significantly improve patient outcomes. Early diagnosis is crucial, as many patients could remain healthy with timely intervention. Recent advancements in deep learning models for medical image processing have enabled algorithms to achieve diagnostic accuracy comparable to that of healthcare professionals. This research aims to develop a comprehensive system for the rapid and precise detection of endometriosis lesions. We explore several deep transfer learning architectures, specifically MobileNetV2, VGG19, and InceptionV3, on the Gynecologic Laparoscopy Endometriosis Dataset (GLENDA). Through extensive literature review and parameter optimization, we find that MobileNetV2 outperforms the other models in terms of accuracy. However, challenges remain, as healthcare imaging datasets often suffer from limited sample sizes and uneven class distributions. Collecting additional samples can be costly and time-consuming, which is a prevalent issue in medical imaging. To address this, we employ Deep Convolutional Generative Adversarial Networks (DCGAN) to enhance the dataset by generating synthetic images, thus improving class balance. This image augmentation strategy not only boosts model performance but also reduces the manual effort required for image labeling. We evaluate our proposed model using metrics such as accuracy, precision, recall, and F1-score. Initially, our model achieves an accuracy of 95%. The introduction of synthetic samples results in an increased accuracy of 99%, reflecting a 4% improvement and enhancing the model’s overall efficacy.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_82-Endometriosis_Lesion_Classification_Using_Deep_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Video Captioning on CPU and GPU: A Comparative Study of Classical and Transformer Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160681</link>
        <id>10.14569/IJACSA.2025.0160681</id>
        <doi>10.14569/IJACSA.2025.0160681</doi>
        <lastModDate>2025-06-30T13:21:19.1300000+00:00</lastModDate>
        
        <creator>Othmane Sebban</creator>
        
        <creator>Ahmed Azough</creator>
        
        <creator>Mohamed Lamrini</creator>
        
        <subject>Video captioning; transformer; timesformer; GPT-2; real-time inference; spatiotemporal attention; multimedia accessibility; CPU and GPU deployment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study proposes a scalable and hardware-adaptable approach to automatic video caption generation by comparing two architectures: a traditional encoder–decoder framework combining InceptionResNetV2 with GRU and a transformer-based model integrating TimeSformer with GPT-2. The system supports CPU and GPU deployment through a unified pipeline built on FFmpeg and ImageMagick for keyframe extraction and subtitle embedding. Experimental evaluations on the MSVD and VATEX datasets demonstrate that the TimeSformer–GPT-2 architecture significantly outperforms baseline models, particularly in GPU settings, achieving top results across BLEU, METEOR, ROUGE-L, and CIDEr metrics. This superiority is attributed to its capacity to model spatiotemporal dependencies and generate contextually rich language. Designed for real-time operation, the system is also suitable for low-resource devices, enabling impactful applications such as assistive tools for the visually impaired and intelligent video indexing. Despite high computational demands and sequence-length limitations, the system presents promising directions for future development, including multilingual captioning, multimodal audio–visual integration, and lightweight models like TinyGPT for enhanced portability.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_81-Real_Time_Video_Captioning_on_CPU_and_GPU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technique to Support Incremental Construction and Verification in Component-Based Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160680</link>
        <id>10.14569/IJACSA.2025.0160680</id>
        <doi>10.14569/IJACSA.2025.0160680</doi>
        <lastModDate>2025-06-30T13:21:19.1000000+00:00</lastModDate>
        
        <creator>Faranak Nejati</creator>
        
        <creator>Ng Keng Yap</creator>
        
        <creator>Abdul Azim Abd Ghani</creator>
        
        <subject>Component-based software development; incremental software construction; software verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Technological advancements in recent decades have significantly increased the scale and complexity of software systems, which poses challenges to their development and reliability. Component-based software development (CBSD) offers a promising solution by enabling modular and efficient software construction. However, CBSD alone cannot fully address challenges such as ensuring reliability and avoiding errors like deadlocks. Verification techniques, such as model-checking, are necessary to ensure the correctness of CBSD systems. Despite its effectiveness in verifying system properties, model-checking faces a critical issue known as state-space explosion (SSE), which hinders scalability. This study introduces an incremental verification technique for CBSD to address SSE and ensure deadlock freedom. The proposed technique incrementally constructs and verifies component-based systems, eliminating verified portions of components to minimize state-space size during subsequent verification steps. It utilizes a component model that supports encapsulation of computation and control, making incremental verification feasible. Evaluation of the technique using Coloured Petri Nets with non-trivial case studies demonstrates its ability to detect deadlocks early and manage SSE effectively, thereby improving the efficiency of the verification process.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_80-A_Technique_to_Support_Incremental_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RFID Integration with Internet of Things: Data Processing Algorithm Based on Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160679</link>
        <id>10.14569/IJACSA.2025.0160679</id>
        <doi>10.14569/IJACSA.2025.0160679</doi>
        <lastModDate>2025-06-30T13:21:19.0700000+00:00</lastModDate>
        
        <creator>Liang Wang</creator>
        
        <subject>RFID; chipless; coding; threshold; data transmission; error correction; security authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Radio Frequency Identification (RFID) is a fast and reliable communication technology that performs automatic data capture to identify and track individual objects and people. Frequency-coded tags employ resonant networks to decode their unique code, and a multi-scatterer or multi-resonant method encodes the data. Prior research closely related to the current investigation predicted that the chipless RFID tag resonant network has a high bit-encoding capacity. This study addresses the simulation, optimization, fabrication, testing, and data encoding methods for chipless RFID tags, and provides a framework for the open-ended quarter-wavelength stub multi-resonator method in chipless RFID tags. The proposed design enhances the tag&#39;s data encoding capacity and improves its robustness to environmental variations. This study integrates Error Correction Coding (ECC) and Adaptive Modulation Systems (AMS) employing Convolutional Neural Networks (CNN) to enhance the tag&#39;s performance. The AMS dynamically alters the modulation parameters based on channel states, while ECC improves data reliability. The results indicate more efficient performance than traditional chipless RFID tags, highlighting the design&#39;s potential for practical use in applications that require reliable, high-capacity data transmission.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_79-RFID_Integration_with_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of an Intelligent Laboratory Management System Based on UWB Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160678</link>
        <id>10.14569/IJACSA.2025.0160678</id>
        <doi>10.14569/IJACSA.2025.0160678</doi>
        <lastModDate>2025-06-30T13:21:19.0530000+00:00</lastModDate>
        
        <creator>Heng Sun</creator>
        
        <creator>Qiang Gao</creator>
        
        <subject>UWB; intelligent management; laboratory management system; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In recent years, the rapid development of educational informatization and the widespread adoption of Internet of Things (IoT) technologies have accelerated the transformation of university laboratories toward intelligent management. However, traditional laboratory management systems still suffer from limited automation, insufficient safety mechanisms, and poor real-time responsiveness. To address these issues, this study proposes an intelligent laboratory management system based on Ultra-Wideband (UWB) technology, which offers high-precision positioning and low-latency communication. The proposed system integrates IoT-based functionalities across six core modules, including access control, asset management, environmental monitoring, and user management. By deploying UWB tags and sensors throughout the laboratory environment, the system enables real-time tracking of personnel and equipment, automatic activation of laboratory devices, and intelligent safety alerts. A pilot deployment in three university laboratories demonstrated improvements in access efficiency, energy conservation, equipment security, and user satisfaction. The results validate the system’s effectiveness in enhancing intelligent laboratory management and provide a scalable model for future smart educational infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_78-Design_and_Implementation_of_an_Intelligent_Laboratory_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Low-Cost Hybrid-Controlled Smart Wheelchair Based on PID Control Integrated with Vital Signs Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160677</link>
        <id>10.14569/IJACSA.2025.0160677</id>
        <doi>10.14569/IJACSA.2025.0160677</doi>
        <lastModDate>2025-06-30T13:21:19.0230000+00:00</lastModDate>
        
        <creator>M. Sayed</creator>
        
        <creator>MG Mousa</creator>
        
        <creator>Ali A. S</creator>
        
        <creator>T. Mansour</creator>
        
        <subject>Smart wheelchair; speech recognition; PID; healthcare; mobile robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>According to statistics from international organizations, disabled people make up more than a small fraction of the world&#39;s population, and improving their quality of life through new technologies is one of today&#39;s essential topics. Because the wheelchair is the most common mobility aid for the disabled, this research aims to improve wheelchair use by creating additional control methods, including speech recognition commands. A joystick, wireless remote control, and an additional port are available alongside speech recognition commands to control the wheelchair. Researchers also studied and compared the effects of integrating a Proportional–Integral–Derivative (PID) controller with the standard controller during smart wheelchair operation in various typical usage scenarios. This research provides data based on a real experiment, unlike most research that relies only on mathematical models for comparison. Adding the PID controller eliminated the smart wheelchair&#39;s overshoot, reduced steady-state error, and reduced settling time. Furthermore, the wheelchair contains a healthcare monitoring system to track the user&#39;s vital signs and obstacle avoidance sensors to keep the user safe. A full motor selection calculation for the smart wheelchair has also been provided, which is useful for mobile robot design. Additionally, the smart wheelchair features a power monitoring system. Finally, a voice-controlled wheelchair helps users feel more private and independent, which raises their morale.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_77-Design_and_Implementation_of_Low_Cost_Hybrid_Controlled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring Consistency in Group Decision Making: A Systematic Review of the FWZIC Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160676</link>
        <id>10.14569/IJACSA.2025.0160676</id>
        <doi>10.14569/IJACSA.2025.0160676</doi>
        <lastModDate>2025-06-30T13:21:18.9730000+00:00</lastModDate>
        
        <creator>Ghazala Bilquise</creator>
        
        <creator>Samar Ibrahim</creator>
        
        <subject>FWZIC; MCDM; fuzzy sets; subjective judgment; group decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Subjective opinions in decision-making processes are often vague, ambiguous, and imprecise due to the inherent subjectivity and variability in individual perspectives. This systematic study examines the Fuzzy Weighted Zero Inconsistency (FWZIC) method, which addresses these challenges by achieving consistency in group consensus and effectively managing uncertainties associated with subjective human opinions. The FWZIC method is increasingly popular in the Multi-Criteria Decision Making (MCDM) field for determining criteria weights. This study comprehensively analyzes 71 empirical studies published from 2021 to March 2025, employing the FWZIC method across diverse domains such as healthcare, engineering, and supply chain. By categorizing FWZIC literature based on themes and domains, this study reveals a taxonomy of the latest techniques and methods integrated with FWZIC. It also explores fuzzy extensions and integrated MCDM methods, providing researchers with a summary of suitable techniques for various contexts. By systematically synthesizing findings, this study provides a comprehensive overview of the current state of FWZIC applications in the literature, identifies gaps and suggests potential avenues for future research in the MCDM domain.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_76-Ensuring_Consistency_in_Group_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Analysis and Research of Text Model Based on Improved CNN-LSTM in the Financial Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160675</link>
        <id>10.14569/IJACSA.2025.0160675</id>
        <doi>10.14569/IJACSA.2025.0160675</doi>
        <lastModDate>2025-06-30T13:21:18.9600000+00:00</lastModDate>
        
        <creator>Jing Chen</creator>
        
        <creator>Chensha Li</creator>
        
        <subject>Financial information mining; CNN-LSTM model; stock price prediction; sentiment analysis; BiLSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>With the continuous development of information technology, public opinion analysis based on open-source texts and financial situation awareness has become a research hotspot. This study focuses on financial news and commentary. First, a topic crawler classification model combining the advantages of CNN and LSTM is proposed to improve topic recognition for financial news texts; a CNN-LSTM-AM model for predicting stock price fluctuations is then introduced. This model performs sentiment analysis through BiLSTM, integrates multiple emotional factors with historical market data, and demonstrates superior predictive performance compared with traditional models in multiple experiments.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_75-Application_Analysis_and_Research_of_Text_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation Index System for Environmental Restoration Effectiveness Based on Landscape Pattern and Ecological Low-Carbon Construction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160674</link>
        <id>10.14569/IJACSA.2025.0160674</id>
        <doi>10.14569/IJACSA.2025.0160674</doi>
        <lastModDate>2025-06-30T13:21:18.9270000+00:00</lastModDate>
        
        <creator>Jingyuan Mao</creator>
        
        <subject>Panoramic green perception rate; deep learning; urban green space; vegetation recognition; landscape assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Traditional green view rate (GVR) methods, which rely on two-dimensional planar images, have several limitations. They fail to capture the three-dimensional spatial characteristics of urban greenery, are frequently dependent on subjective parameters such as camera angles and lighting, and require labor-intensive manual analysis. These factors limit the accuracy and scalability of green space assessments. To overcome these challenges, this study introduces the Panoramic Green Perception Rate (PGPR). This novel metric utilizes spherical panoramic imagery and deep learning for the automated recognition of three-dimensional vegetation. A Dilated ResNet-105 network was used, achieving a mean Intersection over Union (mIoU) of 62.53% with only a 9.17% average deviation from manual annotation. PGPR was empirically applied in Ziyang Park, Wuhan, where it effectively quantified green visibility across urban activity spaces. This approach allows for the scalable and objective evaluation of urban greenery, which has practical applications in urban planning, landscape assessment, and ecological low-carbon construction. Urban planners, environmental engineers, and computer vision and smart city development researchers will find it especially useful.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_74-Evaluation_Index_System_for_Environmental_Restoration_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Detection Framework Using Natural Language Processing (NLP) and Reinforcement Learning (RL) for Cross-Site Scripting (XSS) Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160673</link>
        <id>10.14569/IJACSA.2025.0160673</id>
        <doi>10.14569/IJACSA.2025.0160673</doi>
        <lastModDate>2025-06-30T13:21:18.8970000+00:00</lastModDate>
        
        <creator>Carlo Jude P. Abuda</creator>
        
        <subject>Cross-site scripting attacks; deep neural network; reinforcement learning; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Cross-site scripting (XSS) attacks remained among the most persistent threats in web-based systems, often bypassing traditional input validation techniques through obfuscated or embedded scripting payloads. Existing detection models typically relied on static rules or shallow learning techniques, limiting their ability to adapt to evolving attack vectors. This research addressed that gap by developing a hybrid detection framework that integrated natural language processing (NLP) and reinforcement learning (RL) techniques to classify and interpret malicious web inputs. The study aimed to design, develop, and evaluate a system that transformed raw input strings into structured features, trained a deep neural network (DNN) for binary classification, and simulated agent-based learning through policy-driven feedback. The methodology followed the Design Development Research (DDR) framework. Preprocessing involved lowercasing, lemmatization, stopword removal, and TF-IDF vectorization. The trained DNN achieved high accuracy and demonstrated clear boundary separability through PCA and t-SNE visualizations. In the simulation phase, the RL agent optimized its classification policy using cumulative rewards, Q-value heatmaps, and decision contour projections. Results confirmed the system’s capability to generalize across input variations while maintaining interpretability and precision. This framework provided a scalable solution for web application security and demonstrated the effectiveness of semantically guided and policy-aware models for detecting XSS threats.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_73-Hybrid_Detection_Framework_Using_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity and the NIST Framework: A Systematic Review of its Implementation and Effectiveness Against Cyber Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160672</link>
        <id>10.14569/IJACSA.2025.0160672</id>
        <doi>10.14569/IJACSA.2025.0160672</doi>
        <lastModDate>2025-06-30T13:21:18.8670000+00:00</lastModDate>
        
        <creator>Juan Luis Salas-Riega</creator>
        
        <creator>Yasmina Riega-Vir&#250;</creator>
        
        <creator>Mario Ninaquispe-Soto</creator>
        
        <creator>Jos&#233; Miguel Salas-Riega</creator>
        
        <subject>Cyberattacks; small and medium enterprises; risk management; organizational resilience; cyberthreats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This systematic review evaluates the adoption and effectiveness of the NIST Cybersecurity Framework (CSF) in mitigating cyber threats across diverse sectors. Following PRISMA guidelines, we analyzed studies published between 2015 and 2024 from major academic databases, focusing on the framework&#39;s five core functions: Identify, Protect, Detect, Respond, and Recover. Results indicate widespread recognition but uneven adoption—large organizations show strong performance in the Protect and Detect functions, while small and medium-sized enterprises (SMEs) face implementation barriers due to limited resources. The framework&#39;s flexibility and risk-based approach are notable strengths, though its voluntary nature and lack of localized standards pose challenges. Compared to ISO/IEC 27001 and COBIT, NIST CSF is more adaptable but less prescriptive. We identify key gaps in empirical validation and sector-specific applications, and recommend future research integrating AI-driven threat detection and regional adaptations.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_72-Cybersecurity_and_the_NIST_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Accident Detection and Emergency Response Using Drones, Machine Learning and LoRa Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160671</link>
        <id>10.14569/IJACSA.2025.0160671</id>
        <doi>10.14569/IJACSA.2025.0160671</doi>
        <lastModDate>2025-06-30T13:21:18.8470000+00:00</lastModDate>
        
        <creator>Bandara H. M</creator>
        
        <creator>Maduhansa H. K. T. P</creator>
        
        <creator>Jayasinghe S. S</creator>
        
        <creator>Samararathna A. K. S. R</creator>
        
        <creator>Harinda Fernando</creator>
        
        <creator>Shashika Lokuliyana</creator>
        
        <subject>Accident detection; machine learning; IoT; drones; traffic management; LoRa communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Road accidents and delayed emergency responses remain a major concern in urban environments, contributing to over 1.4 million fatalities globally each year. With rapid urbanization and increasing vehicle density, timely detection and efficient traffic management are critical to reducing the impact of such events. This study proposes a real-time Accident Detection and Emergency Response System integrating machine learning (ML), IoT-enabled drones, and LoRa communication. The system combines real-time accident detection using CCTV, drone-assisted fire detection for post-accident scenarios, crime activity monitoring, and automated traffic management to reduce congestion and improve public safety. LoRa ensures long-range, energy-efficient communication, while ML models improve detection accuracy across accidents, fires, crimes, and vehicles. Image and sensor data are analyzed in real time to trigger alerts and assist emergency responders. The system supports scalable integration with existing urban infrastructure, promoting the development of smart city safety frameworks. By minimizing emergency response time, limiting secondary incidents, and improving situational awareness, the proposed solution addresses critical gaps in current urban safety systems. It offers a practical, intelligent, and adaptive approach to accident mitigation and traffic control in smart cities.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_71-Real_Time_Accident_Detection_and_Emergency_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Machine Learning to Identify High-Risk Groups in Sickle Cell Anemia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160670</link>
        <id>10.14569/IJACSA.2025.0160670</id>
        <doi>10.14569/IJACSA.2025.0160670</doi>
        <lastModDate>2025-06-30T13:21:18.8030000+00:00</lastModDate>
        
        <creator>Haneen Banjar</creator>
        
        <creator>Nofe Alganmi</creator>
        
        <creator>Hajar Alharbi</creator>
        
        <creator>Ahmed Barefah</creator>
        
        <creator>Hatem Alahwal</creator>
        
        <creator>Salwa Alnajjar</creator>
        
        <creator>Abdulrahman Alboog</creator>
        
        <creator>Salem Bahashwan</creator>
        
        <creator>Galila Zaher</creator>
        
        <subject>Sickle cells anemia; feature selection; predicting complication; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Sickle Cell Anemia (SCA) is a hereditary condition causing abnormal red blood cells, leading to severe health complications. Traditional treatment approaches for SCA often involve reactive management, which can delay appropriate interventions and worsen patient outcomes. The aim of this study is to leverage machine learning (ML) algorithms, including Logistic Regression (LR), Support Vector Machines (SVM), and Decision Trees (DT), to identify high-risk groups among SCA patients using clinical and pathological data from King Abdulaziz University Hospital. This study employs a comprehensive dataset comprising 200 SCA patients, with data preprocessing to handle missing values and feature selection techniques to enhance model performance. The dataset is divided into training and testing sets, and models are evaluated using ten-fold cross-validation. Performance metrics such as True Positive Rate (TPR), False Negative Rate (FNR), Positive Predictive Value (PPV), and False Discovery Rate (FDR) are used to assess model effectiveness. The results indicate that the SVM model with the top seven correlated features achieved the highest TPR and PPV, along with the lowest FNR and FDR, demonstrating its superior performance in identifying high-risk patients. The study concludes that ML models, particularly SVM, can significantly improve risk assessment and patient management in SCA, offering a proactive tool for healthcare providers. The main message is the potential of ML algorithms to enhance clinical decision-making and improve outcomes for patients with SCA.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_70-Utilizing_Machine_Learning_to_Identify_High_Risk_Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Preliminary Study on Songket: A Preservation of Intangible Cultural Heritage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160669</link>
        <id>10.14569/IJACSA.2025.0160669</id>
        <doi>10.14569/IJACSA.2025.0160669</doi>
        <lastModDate>2025-06-30T13:21:18.7730000+00:00</lastModDate>
        
        <creator>Nik Siti Fatima Nik Mat</creator>
        
        <creator>Syadiah Nor Wan Shamsuddin</creator>
        
        <creator>Syarilla Iryani Ahmad Saany</creator>
        
        <creator>Norkhairani Abdul Rawi</creator>
        
        <creator>Julaily Aida Jusoh</creator>
        
        <creator>Wan Malini Wan Isa</creator>
        
        <creator>Addy Putra Md Zulkifli</creator>
        
        <creator>Shahrul Anuwar Mohamed Yusof</creator>
        
        <subject>Songket; weaving; textiles; aesthetic; intangible cultural heritage; processing songket; cultural; heritage; Terengganu; Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Songket is an opulent traditional Malaysian woven fabric that symbolizes the luxurious classical textiles of Malaysia’s old craft, and it remains part of the country’s intangible cultural heritage to this day. However, preserving cultural heritage has become a critical endeavor, especially for younger generations, as there is growing concern that this heritage may not be passed on to posterity because its value is disregarded. Therefore, this paper aims to provide in-depth insight into the heritage of songket weaving, covering the knowledge of the weaving art, the techniques and materials used, and its legacy and future. The research uses observation methods and in-depth face-to-face interviews for data collection, conducted with experienced experts and workers, revealing the origin of songket and its processing from scratch. The findings were gathered and subsequently analyzed as secondary data. This study deepens knowledge of traditional Malay textile art and heritage and indicates a positive future for traditional weaving arts.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_69-A_Preliminary_Study_on_Songket.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Cross-Lingual Fake News Detection in Indonesia with a Hybrid Model by Enhancing the Embedding Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160668</link>
        <id>10.14569/IJACSA.2025.0160668</id>
        <doi>10.14569/IJACSA.2025.0160668</doi>
        <lastModDate>2025-06-30T13:21:18.7400000+00:00</lastModDate>
        
        <creator>Jihan Nabilah Hakim</creator>
        
        <creator>Yuliant Sibaroni</creator>
        
        <subject>Cross-lingual; fake news detection; hybrid learning; MUSE embeddings; digital misinformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In the digital age, the spread of false information across languages threatens the authenticity and credibility of information. This study aims to develop an efficient hybrid deep learning model for detecting cross-lingual fake news, particularly in resource-constrained environments, by enhancing the embedding process. It proposes a lightweight model that combines MUSE embeddings with CNN, LSTM, and LSTM-CNN architectures to evaluate performance across various language pairs with Indonesian as the source language. Experiments show that linguistic similarity significantly influences classification performance: CNN achieves an F1-score of 82% for the Indonesian–Malay pair (a similar language pair), while LSTM achieves 97% for the Indonesian–German pair (a structurally different one). These findings highlight the effectiveness of hybrid architectures and multilingual embeddings in improving cross-lingual fake news detection, especially when English is not the source language. The proposed method provides a reliable yet computationally efficient solution for multilingual misinformation detection in resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_68-Improving_Cross_Lingual_Fake_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Deep Temporal Modeling for Stroke Risk Assessment Using Attention-Based LSTM Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160667</link>
        <id>10.14569/IJACSA.2025.0160667</id>
        <doi>10.14569/IJACSA.2025.0160667</doi>
        <lastModDate>2025-06-30T13:21:18.7100000+00:00</lastModDate>
        
        <creator>P. Selvaperumal</creator>
        
        <creator>F. Sheeja Mary</creator>
        
        <creator>Pratik Gite</creator>
        
        <creator>T L Deepika Roy</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Gowrisankar Kalakoti</creator>
        
        <creator>Sandeep Kumar Mathariya</creator>
        
        <subject>Attention mechanism; deep learning; imbalanced data; LSTM networks; SMOTE resampling; stroke prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Stroke continues to be a major cause of mortality and disability globally, and precise risk prediction models are needed. Current models do not effectively incorporate temporal patient information, restricting the quality of prediction and clinical interpretability. This research introduces a new LSTM-based deep learning model enriched with an attention mechanism for predicting stroke risk that can prioritize important risk factors like age, hypertension, and heart disease. The model takes advantage of LSTM&#39;s ability to learn sequential dependencies from long-term patient histories, while the attention mechanism dynamically emphasizes clinically important features, promoting interpretability and clinical significance. The model was tested on a dataset of 5,110 patient records containing only 6% stroke cases, an extreme class imbalance. To counteract this, preprocessing involved SMOTE for synthetic oversampling, mean imputation to handle missing values, and Min-Max normalization. Implemented in Python with TensorFlow, the model achieved remarkable performance. The constructed LSTM-Attention model attained a test accuracy of 83.7%, an AUC-ROC value of 85.3%, and an F1-score of 82.2%, higher than conventional models such as Logistic Regression and Random Forest. These results demonstrate the model&#39;s improved ability to identify subtle stroke risk factors that would otherwise go unnoticed. The attention-augmented LSTM architecture not only guarantees accurate predictions but also offers transparent insight into the decision process, making it appropriate for incorporation in real-time clinical decision support systems. This method has the potential to improve personalized stroke risk assessment dramatically and enhance preventive healthcare interventions.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_67-Explainable_Deep_Temporal_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Reinforcement Learning Based Robotic Arm Control Simulation to Execute Object Reaching Task for Industrial Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160666</link>
        <id>10.14569/IJACSA.2025.0160666</id>
        <doi>10.14569/IJACSA.2025.0160666</doi>
        <lastModDate>2025-06-30T13:21:18.6770000+00:00</lastModDate>
        
        <creator>John Mark Correa</creator>
        
        <creator>Rudolph Joshua Candare</creator>
        
        <creator>Junrie B. Matias</creator>
        
        <subject>Reinforcement learning; deep reinforcement learning; reward shaping techniques; robotic arm; robot simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study presents a deep reinforcement learning (DRL) approach to train a robotic arm for object reaching tasks in industrial settings, eliminating the need for traditional task-specific programming. Leveraging the Proximal Policy Optimization (PPO) algorithm for its stability in continuous control, the system learns optimal behaviors through autonomous trial-and-error. Central to this work is reward shaping, where structured feedback based on distance to the target, collision avoidance, motion constraints, and step efficiency guides the agent, akin to incremental coaching. A simulated industrial environment was developed using Webots, integrated with OpenAI Gym and Stable-Baselines3, enabling safe training with sensor data (camera, distance sensor) and randomized target placements. Three models with varying reward schemes were evaluated: simpler rewards prioritized rapid convergence, while complex formulations (e.g., perceptual alignment) enhanced long-term accuracy at the cost of initial instability. Experimental results demonstrated that reward shaping reduced the required steps, highlighting its role in accelerating learning. The study underscores the efficacy of combining DRL, simulation-based training, and adaptive reward design to develop efficient robotic controllers. These findings advance scalable solutions for industrial automation, emphasizing the trade-offs between reward complexity and policy convergence. Future work will refine reward functions to bridge simulation-to-reality gaps, fostering practical adoption in manufacturing and assembly systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_66-Deep_Reinforcement_Learning_Based_Robotic_Arm_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing SVM and KNN Performance Through Preprocessing Pipelines for Interactive mHealth Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160665</link>
        <id>10.14569/IJACSA.2025.0160665</id>
        <doi>10.14569/IJACSA.2025.0160665</doi>
        <lastModDate>2025-06-30T13:21:18.6470000+00:00</lastModDate>
        
        <creator>Btissam Elaziz</creator>
        
        <creator>Charaf Eddine AIT ZAOUIAT</creator>
        
        <creator>Mohamed Eddabbah</creator>
        
        <creator>Yassin LAAZIZ</creator>
        
        <subject>Mobile health; cloud computing; machine learning; SVM; KNN; data preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Mobile health (mHealth) applications increasingly rely on artificial intelligence (AI) to provide accurate and real-time decision support for healthcare delivery. However, achieving the optimal balance between processing time and accuracy remains challenging, especially for interactive applications that rely on cloud computing for scalability and performance. This study investigates the impact of data preprocessing techniques on the performance of two widely used machine learning algorithms, Support Vector Machine (SVM) and k-Nearest Neighbors (KNN), in cloud-based mHealth systems. We evaluate the effects of various scaling methods and dimensionality reduction techniques on processing time and model accuracy. Our results demonstrate that preprocessing significantly improves model performance, with SVM achieving a precision of 0.72 and a processing time of 0.087 ms using StandardScaler, while KNN demonstrates the fastest processing times when paired with robust preprocessing. These findings underscore the importance of optimizing both data preparation and algorithmic efficiency for interactive mHealth applications. By enhancing model accuracy and reducing latency, this research contributes to the development of cost-effective, real-time mobile health systems that improve user experience and decision-making in healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_65-Enhancing_SVM_and_KNN_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metaheuristic-Driven Feature Selection for IoT Intrusion Detection: A Hierarchical Arithmetic Optimization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160664</link>
        <id>10.14569/IJACSA.2025.0160664</id>
        <doi>10.14569/IJACSA.2025.0160664</doi>
        <lastModDate>2025-06-30T13:21:18.6000000+00:00</lastModDate>
        
        <creator>Jing GUO</creator>
        
        <creator>Dejun ZHU</creator>
        
        <creator>Qing XU</creator>
        
        <subject>Intrusion detection; internet of things; feature selection; hierarchical arithmetic optimization; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The increasing sophistication of cyberattacks in Internet of Things (IoT) networks requires strong Intrusion Detection Systems (IDS) with optimal feature selection mechanisms. High-dimensional data, computational complexity, and suboptimal detection accuracy hinder conventional IDS mechanisms. To overcome these limitations, in this study, the Hierarchical Self-Adaptive Arithmetic Optimization Algorithm (HSAOA) is introduced as a new metaheuristic method for IDS feature selection. HSAOA combines a stochastic spiral exploration method, an adaptive hierarchical model of leaders and followers, and a differential mutation mechanism to improve the exploration–exploitation balance and global search capability while mitigating premature convergence. The model is evaluated on the NF-ToN-IoT dataset, wherein HSAOA performs the feature selection and classification accuracy is improved by utilizing Random Forest (RF). The experimental results indicate that the proposed HSAOA outperforms other advanced approaches in accuracy, computational efficiency, and convergence speed. These results validate the proposed algorithm as a scalable and effective solution for enhancing cybersecurity in IoT environments by improving IDS performance and reducing feature selection complexity.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_64-Metaheuristic_Driven_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Neural Networks with Attention Mechanisms for Accurate Dengue Severity Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160663</link>
        <id>10.14569/IJACSA.2025.0160663</id>
        <doi>10.14569/IJACSA.2025.0160663</doi>
        <lastModDate>2025-06-30T13:21:18.5830000+00:00</lastModDate>
        
        <creator>Monali G. Dhote</creator>
        
        <creator>Puneet Thapar</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>G. Indra Navaroj</creator>
        
        <creator>R. Aroul Canessane</creator>
        
        <creator>B. V. Suresh Reddy</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Kapil Joshi</creator>
        
        <subject>Attention mechanism; dengue severity prediction; Graph Neural Network; healthcare analytics; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Dengue fever continues to be a significant public health issue across the globe because it can lead to life-threatening complications. Severity prediction in a timely and precise manner is imperative for proper clinical management and effective resource utilization. Conventional models fail to identify intricate relationships between heterogeneous clinical, demographic, and epidemiological variables. For this purpose, we develop an innovative framework—Graph Neural Network with Attention Mechanism (GNN-AM)—aimed at enhancing dengue severity prediction. In the suggested method, every patient is viewed as a node in a graph with edges indicating clinical similarity in terms of health properties. The incorporation of attention mechanisms enables the model to selectively pay attention to important clinical indicators like fever duration, platelet count, and bleeding tendencies. This selective attentiveness improves prediction quality by giving maximum importance to the most important features while reducing the impact of less significant data. The model was trained and tested on a dataset of laboratory-confirmed dengue cases that contained clinical symptoms, laboratory results, and demographics. Experimental results showed that the attention-augmented GNN performed better than both typical GNNs and traditional machine learning models, recording an accuracy of 90.3%, a recall of 88.9%, and an F1-score of 89.6%. The results highlight the efficacy of the GNN-AM framework in classifying dengue severity accurately and its ability to emphasize crucial clinical indicators using attention mechanisms. In the future, this model can be combined with Electronic Health Records (EHRs) and implemented in real-world healthcare environments using federated learning methods to maintain data privacy across institutions.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_63-Graph_Neural_Networks_with_Attention_Mechanisms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AccuLandNet: Enhancing Land Cover Detection with Deep Integrated Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160662</link>
        <id>10.14569/IJACSA.2025.0160662</id>
        <doi>10.14569/IJACSA.2025.0160662</doi>
        <lastModDate>2025-06-30T13:21:18.5530000+00:00</lastModDate>
        
        <creator>Geetha Guthikonda</creator>
        
        <creator>M. Senthil Kumaran</creator>
        
        <subject>Land Use/Land Cover (LULC); U-Net; Multi-Sensor Data Fusion (MSDF); Maximum Likelihood Classification (MLC); Support Vector Machines (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Population growth is accelerating in all parts of the world, particularly in urban development driven by economic and industrial expansion. This has a massive impact on Land Use/Land Cover (LULC), which may change many times. The most popular use of land cover categorization is to analyze satellite imagery to categorize different land surface types, such as urban areas, agricultural fields, forests, and aquatic bodies. With the help of several land cover images, a unique classification model (UCM) based on satellite image classification is developed in this study. The proposed approach implements the following stages. In the first stage, the pre-trained U-Net model is used to train on the satellite images. In the second stage, preprocessing techniques, including data acquisition and noise reduction methods such as Adaptive Noise Removal (ANR) and Adaptive Histogram Equalization (AHE), are applied to the images. The third stage extracts features using Multi-Sensor Data Fusion (MSDF), including water bodies, roads, urban areas, edges, boundaries, and shapes. The final stage uses Maximum Likelihood Classification (MLC) combined with Support Vector Machines (SVM) to produce the advanced classification results. Experimental results show that the proposed approach outperforms existing models.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_62-AccuLandNet_Enhancing_Land_Cover_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging LSTM-Driven Predictive Analytics for Resource Allocation and Cost Efficiency Optimization in Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160661</link>
        <id>10.14569/IJACSA.2025.0160661</id>
        <doi>10.14569/IJACSA.2025.0160661</doi>
        <lastModDate>2025-06-30T13:21:18.5230000+00:00</lastModDate>
        
        <creator>G. Gokul Kumari</creator>
        
        <creator>Shokhjakhon Abdufattokhov</creator>
        
        <creator>Sanjit Singh</creator>
        
        <creator>Guru Basava Aradhya S</creator>
        
        <creator>T L Deepika Roy</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Resource optimization; project management; long short-term memory; predictive analytics; task scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Resource planning and cost optimization are essential elements of effective project management. Conventional models are weak in changing environments because they cannot keep pace with intricate task interdependencies and changing project constraints. To overcome such weaknesses, this research proposes an LSTM-based predictive analytics model that deploys temporal trends and past project information for precise predictions of task duration, resource allocations, and possible delays. The proposed method combines sequential data modeling with Long Short-Term Memory (LSTM) networks, along with data preprocessing and optimization, to enhance project scheduling and cost control decision-making. With TensorFlow implementation, the proposed LSTM-PRO model achieved a Mean Squared Error (MSE) of 0.0025, a Root Mean Squared Error (RMSE) of 0.05, and an R&#178; score of 0.96, far better than ARIMA and other baseline models. The model yielded a 20% saving on project costs and a 20-percentage-point rise in resource utilization, from 65% to 85%. These outcomes demonstrate the effectiveness and applicability of the model in actual project settings.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_61-Leveraging_LSTM_Driven_Predictive_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interpretable Transformer-Based Approach for Context-Aware and Stylistically Aligned Academic Paraphrasing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160660</link>
        <id>10.14569/IJACSA.2025.0160660</id>
        <doi>10.14569/IJACSA.2025.0160660</doi>
        <lastModDate>2025-06-30T13:21:18.4730000+00:00</lastModDate>
        
        <creator>A. Z. Khan</creator>
        
        <creator>Ritu Sharma</creator>
        
        <creator>K. Kiran Kumar</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Raman Kumar</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Prema S</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Academic writing; attention visualization; context-aware paraphrasing; reinforcement learning; T5-transformer model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Academic paraphrasing, particularly when aiming at contextual competence, coherence, and stylistic consistency, poses a significant challenge to non-native English speakers and novice researchers. This research seeks to create an interpretable transformer model specifically designed for paraphrasing academic texts that guarantees semantic correctness, contextual relevance, and scholarly style. Existing paraphrasing models largely fail to meet the subtle needs of academic work, lagging in semantic preservation, fluency, scholarly style, and interpretability. To address these limitations, we propose T5-XAVRL (T5 with Attention Visualization and Reinforcement Learning for Style Control), an interpretable Transformer model created specifically for paraphrasing academic text. Based on the T5 architecture, T5-XAVRL adds fine-tuning for better domain adaptation, attention visualization for better transparency, and reinforcement learning to steer outputs towards academic writing quality. The model is trained and tested on the ArXiv Academic Papers Dataset and demonstrates high versatility in a variety of academic environments. Developed with Python, TensorFlow, and Hugging Face Transformers, the system is built for scalability as well as performance. Experimental findings indicate that T5-XAVRL obtains a 68.7% BLEU score, greatly surpassing traditional paraphrasing models in both semantic accuracy and linguistic fluency. Far more than a paraphraser, T5-XAVRL is a trustworthy academic writing aide capable of assisting users with producing grammatically and stylistically correct scholarly work. Its interpretable outputs also increase user confidence by vividly displaying how paraphrasing choices are made. Overall, this study is an important step towards creating interpretable, context-sensitive, and style-sensitive paraphrasing systems for scholarly use.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_60-An_Interpretable_Transformer_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content Validity Assessment Using Aiken’s V: Knowledge Integration Model for Blockchain in Higher Learning Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160659</link>
        <id>10.14569/IJACSA.2025.0160659</id>
        <doi>10.14569/IJACSA.2025.0160659</doi>
        <lastModDate>2025-06-30T13:21:18.4600000+00:00</lastModDate>
        
        <creator>Nur Ilyana Ismarau Tajuddin</creator>
        
        <creator>Ummu-Hani Abas</creator>
        
        <creator>Khairi Azhar Aziz</creator>
        
        <creator>Rozi Nor Haizan Nor</creator>
        
        <creator>Nor Aziyatul Izni</creator>
        
        <creator>Muhammad Nuruddin Sudin</creator>
        
        <creator>Nur Aqilah Hazirah Mohd Anim</creator>
        
        <creator>Noorashikin Md Noor</creator>
        
        <subject>Blockchain; content validity; Aiken’s V; higher learning institutions; knowledge integration model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The integration of blockchain technology into higher learning institutions (HLIs) holds the potential to revolutionize data management, enhance transparency, and improve trust in academic systems. However, the effective adoption of blockchain requires a comprehensive and valid model that addresses the specific needs and contexts of HLIs. This study aims to assess the content validity of the Knowledge Integration Model for Blockchain in Higher Learning Institutions using Aiken’s V method. The proposed model was developed through a systematic literature review and refined with expert input. Content validity was evaluated by seven domain experts with backgrounds in education, blockchain technology, and information systems, using Aiken’s V methodology. The instrument, consisting of 50 items across six constructs, was rated for relevance, clarity, and representativeness on a 5-point Likert scale. The results revealed that of the 50 items, 21 required revision or removal due to low Aiken’s V scores (&lt;0.70), 21 were deemed acceptable but required minor revisions, and 8 demonstrated strong content validity (V ≥ 0.80). These findings underscore the importance of expert evaluation in refining research instruments and ensuring construct alignment. The use of Aiken’s V provided a robust quantitative foundation for the validation process. The refined instrument serves as a reliable tool for assessing institutional readiness and knowledge integration capabilities in the context of blockchain adoption. This work contributes to the growing research on educational blockchain implementation by offering a validated framework that can support empirical investigations and strategic decision-making in higher education.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_59-Content_Validity_Assessment_Using_Aiken_s_V.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors for Knowledge Transfer in Enterprise System Projects: A Theoretical and Empirical Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160658</link>
        <id>10.14569/IJACSA.2025.0160658</id>
        <doi>10.14569/IJACSA.2025.0160658</doi>
        <lastModDate>2025-06-30T13:21:18.4130000+00:00</lastModDate>
        
        <creator>Jamal M. Hussien</creator>
        
        <creator>Riza bin Sulaiman</creator>
        
        <creator>Ali H Hassan</creator>
        
        <creator>Mansoor Abdulhak</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <subject>Enterprise system projects (ESPs); knowledge transfer (KT); critical success factors (CSFs); digital transformation; knowledge-sharing culture; management support; information systems implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Enterprise System Projects (ESPs) are fundamental enablers of digital transformation across organizations, yet they consistently suffer from high failure rates, often attributed to ineffective Knowledge Transfer (KT) practices. Despite the critical role of KT in ensuring project sustainability and long-term organizational learning, limited scholarly attention has been given to identifying and systematically categorizing the success factors that influence KT outcomes in ESPs. The aim of this study is to investigate and conceptualize the Critical Success Factors (CSFs) that influence effective knowledge transfer in ESPs. To address this research gap, a mixed-methods approach is used, combining a literature review with empirical insights from semi-structured interviews with industry practitioners involved in large-scale ESP implementations. The analysis reveals a set of interrelated CSFs that significantly impact KT effectiveness. Key factors include a knowledge-sharing culture, consultants with strong technical and social skills, and solid, visible management support. These factors are integrated into a conceptual framework that enhances conceptual understanding while offering practitioners practical guidance. The study contributes to the academic discourse by bridging the gap between the KT concept and ESP implementation, proposing a comprehensive model for successful knowledge transfer during the deployment of ESPs. From a practical standpoint, the findings offer organizations a strategic lens to design and implement KT mechanisms that enhance project outcomes and ensure long-term knowledge retention.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_58-Critical_Success_Factors_for_Knowledge_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AutiSim: A Virtual Reality Simulation Game Based on the Autism Spectrum Disorder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160657</link>
        <id>10.14569/IJACSA.2025.0160657</id>
        <doi>10.14569/IJACSA.2025.0160657</doi>
        <lastModDate>2025-06-30T13:21:18.3800000+00:00</lastModDate>
        
        <creator>Muhammad Aliff Muhd Farid Arfian</creator>
        
        <creator>Ikmal Faiq Albakri Mustafa Albakri</creator>
        
        <creator>Faaizah Shahbodin</creator>
        
        <creator>Mohd Khalid Mokhtar</creator>
        
        <creator>Asniyani Nur Haidar Abdullah</creator>
        
        <creator>Norhaida Mohd Suaib</creator>
        
        <creator>Muhammad Nur Affendy Nor&#39;a</creator>
        
        <creator>Abdul Hasib Jahidin</creator>
        
        <subject>Virtual reality; simulation game; autism; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Reality-altering technologies such as virtual reality (VR) have become increasingly relevant to the public for their capabilities in the entertainment and healthcare fields, as well as more affordable. However, mental health-related simulation is often neglected due to technical complexity and misrepresentation. Therefore, this study leverages the immersive capabilities of VR to create an engaging and educational game experience that simulates the sensory and social challenges faced by individuals with autism spectrum disorder (ASD). The study involves designing and implementing a VR game that places users in various scenarios reflecting the daily experiences of autistic individuals. The VR game aims to educate players about common misconceptions, sensory sensitivities, and social difficulties associated with autism. A literature review on XR technology and ASD was conducted during the pre-production phase to explore past research on autism in video games and shape the overall game vision. The study continues with developing an immersive simulation game using VR with locomotive motion controls and an artificial intelligence non-playable character (AI NPC) with a Speech-To-Text function. Finally, the testing phase used two approaches: quantitative analysis, using the System Usability Scale (SUS) to assess usability and the Simulator Sickness Questionnaire (SSQ) to identify discomfort issues such as headaches and blurriness during gameplay, and qualitative analysis, gathering experts’ feedback on the VR game&#39;s content and teaching effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_57-AutiSim_A_Virtual_Reality_Simulation_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Rank and Roulette Wheel Selection Strategies in Genetic Algorithms for Spatial Layout Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160656</link>
        <id>10.14569/IJACSA.2025.0160656</id>
        <doi>10.14569/IJACSA.2025.0160656</doi>
        <lastModDate>2025-06-30T13:21:18.3500000+00:00</lastModDate>
        
        <creator>Najihah Ibrahim</creator>
        
        <creator>Fadratul Hafinaz Hassan</creator>
        
        <creator>Sharifah Mashita Syed-Mohamad</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Ahmad Shukri Mohd Noor</creator>
        
        <subject>Genetic algorithm; optimization; spatial layout arrangement; space utilization; urban planning; facility layout design; rank selection; roulette wheel selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Autonomous urban planning, facility layout design, and interior design are critical and meticulous tasks that require the optimization of space arrangement. One of the main purposes of space arrangement is to achieve high space utilization with a non-complex arrangement for emergency assistance, particularly to enhance pedestrian safety in panic situations. This study explores the optimization of spatial layouts by employing Genetic Algorithms (GA) due to their robust search capabilities. However, spatial layout size limitations may affect the search capability and significantly impact space arrangement and utilization. Hence, this study presents a comparative study of two GA selection operator methods, Rank Selection (RS) and Roulette Wheel Selection (RWS), to determine their effectiveness in optimizing spatial layout arrangements and space utilization. The results demonstrated significant improvements in crowd flow management, with the RWS method showing the highest fitness value despite slower convergence compared to RS. The study highlighted the impact of different methods on the convergence of the multi-objective fitness value based on space elements such as overlapping and standard walkway distances. While both selection methods proved to be effective in optimizing space utilization, the RWS method demonstrated greater computational efficiency while still adhering to standard layout designs. This efficiency helps to ensure smoother evacuation and ease of movement during emergency situations.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_56-Comparative_Analysis_of_Rank_and_Roulette_Wheel_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision-Based Vehicle Classification Using Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160655</link>
        <id>10.14569/IJACSA.2025.0160655</id>
        <doi>10.14569/IJACSA.2025.0160655</doi>
        <lastModDate>2025-06-30T13:21:18.3200000+00:00</lastModDate>
        
        <creator>Ahsiah Ismail</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <creator>Muhammad Afiq Mohd Ara</creator>
        
        <creator>Asmarani Ahmad Puzi</creator>
        
        <creator>Suryanti Awang</creator>
        
        <subject>YOLO; vehicle classification; deep learning; traffic monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Vehicle classification offers intelligent solutions for road traffic monitoring by enabling future prediction planning and decision making. Predictive analytics can be used to predict traffic congestion based on the types of vehicles on the road. In this research, the reliability of deep learning based models for vision-based vehicle classification is investigated. Four models of You Only Look Once (YOLO) are investigated, namely YOLOv5s, YOLOv5x, YOLOv10n, and YOLOv12n. These models were trained and evaluated on a vehicle dataset comprising five vehicle classes, namely Ambulance, Bus, Car, Motorcycle, and Truck, with a total of 1,103 images. In the experiments conducted, YOLOv10n achieved the highest mAP@0.5 of 0.859 across all vehicle classes, including in per-class evaluation, demonstrating superior detection compared to the other models. The results indicate that the YOLOv10n model is well suited to vision-based vehicle classification.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_55-Vision_Based_Vehicle_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Supervised Method for Risky Situation Detection in Road Traffic Sequences Using Video Masked Autoencoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160654</link>
        <id>10.14569/IJACSA.2025.0160654</id>
        <doi>10.14569/IJACSA.2025.0160654</doi>
        <lastModDate>2025-06-30T13:21:18.2870000+00:00</lastModDate>
        
        <creator>Abdelhafid Berroukham</creator>
        
        <creator>Mohammed Lahraichi</creator>
        
        <creator>Khalid Housni</creator>
        
        <subject>Video processing; risk detection; VideoMAE; vision transformer; deep learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Road traffic accidents are a significant public health issue, particularly in developing nations, where infrastructure and traffic monitoring systems may be limited. Risky situations such as sudden stopping, lane switching, and near-misses can lead to accidents. In this study, we present an original approach for recognizing risky situations in road traffic sequences using Video Masked Autoencoder (VideoMAE), a self-supervised deep learning model built upon the Vision Transformer architecture. By applying a VideoMAE pre-trained on a large video dataset and fine-tuning it on labeled traffic sequences categorized as risky or non-risky, our model learns spatiotemporal features without requiring extensive manual labeling. The method achieves 95% accuracy on test data, demonstrating strong potential for high-risk detection. This study highlights the promise of self-supervised video representation learning for real-world safety applications and paves the way for the development of intelligent traffic monitoring and crash prevention tools.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_54-Self_Supervised_Method_for_Risky_Situations_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Assessment of Resistance to Change in the Context of Digital Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160653</link>
        <id>10.14569/IJACSA.2025.0160653</id>
        <doi>10.14569/IJACSA.2025.0160653</doi>
        <lastModDate>2025-06-30T13:21:18.2570000+00:00</lastModDate>
        
        <creator>Bachira Abou El Karam</creator>
        
        <creator>Tarik Fissaa</creator>
        
        <creator>Rabia Marghoubi</creator>
        
        <subject>Resistance to change; digital transformation; zero-shot LLMs; prompt engineering; allies strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Digital transformation is a key driver of business evolution, but it comes with significant challenges, particularly employee resistance to change. This resistance can manifest in various forms, ranging from explicit opposition to more subtle hesitation toward new practices. Its underlying causes are diverse, including fear of the unknown, loss of control, and dissatisfaction with perceived transformations. Understanding employee perceptions is, therefore, crucial to adapting digital initiatives and ensuring successful adoption. However, existing methods for assessing resistance, which rely on closed-ended questionnaires and binary classifications, have limitations. They restrict the expression of opinions and fail to provide a nuanced segmentation of employees’ stances toward change. In this context, this study proposes an innovative and automated methodology that combines specialized zero-shot LLMs and prompt engineering techniques to analyze resistance to change. It is based on the allies strategy, a concept derived from sociodynamic theory and widely applied in change management, which seeks to more precisely differentiate employee attitudes based on their level of synergy or antagonism toward a new project or transformation initiative. To evaluate the effectiveness of the proposed approach, an experiment was conducted on an annotated dataset comprising a hundred employee responses. Two prompt engineering strategies were explored and applied to six zero-shot models to assess their ability to accurately classify expressed attitudes. The findings underscored, on the one hand, the significance of prompt structuring in enhancing classification efficacy and, on the other, the strength of DeBERTa-v3-large-zeroshot, which proved to be the best-performing model, even exceeding GPT-4, one of the most sophisticated and cutting-edge language models currently accessible.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_53-AI_Powered_Assessment_of_Resistance_to_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Aerodynamic Coefficient Prediction: A Hybrid Model Integrating Deep Learning and Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160652</link>
        <id>10.14569/IJACSA.2025.0160652</id>
        <doi>10.14569/IJACSA.2025.0160652</doi>
        <lastModDate>2025-06-30T13:21:18.2230000+00:00</lastModDate>
        
        <creator>Jad Zerouaoui</creator>
        
        <creator>Rachid Ed-daoudi</creator>
        
        <creator>Badia Ettaki</creator>
        
        <creator>El Mahjoub Chakir</creator>
        
        <subject>Aerodynamic coefficients; computational fluid dynamics; deep learning; convolutional neural networks; optimization techniques; evolutionary algorithms; gradient-based optimization; aerospace design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The aerospace industry increasingly relies on predictive models for aerodynamic coefficients to enhance design, performance, and optimization. While traditional methods like Computational Fluid Dynamics (CFD) and wind tunnel simulations offer accurate predictions, they are computationally intensive and time-consuming. This study explores a novel approach that fuses advanced Deep Learning (DL) architectures with Optimization Techniques to achieve faster and more accurate predictions of aerodynamic coefficients. Building on the foundation of Convolutional Neural Networks (CNNs), we introduce hybrid models that integrate Evolutionary Algorithms and Gradient-Based Optimization to improve the accuracy, generalization, and adaptability of predictions. The proposed framework is validated on datasets derived from CFD simulations and wind tunnel experiments, demonstrating superior accuracy, reduced computational cost, and robust performance across diverse aerodynamic conditions. This study highlights the potential of combining DL and optimization methods as a transformative tool for real-time aerodynamic analysis, paving the way for more efficient Aerospace Design and decision-making. Future research directions include expanding the model to handle complex geometries and dynamic flight conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_52-Advancing_Aerodynamic_Coefficient_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Quality Assessment Based on Feature Fusion and Local Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160651</link>
        <id>10.14569/IJACSA.2025.0160651</id>
        <doi>10.14569/IJACSA.2025.0160651</doi>
        <lastModDate>2025-06-30T13:21:18.1770000+00:00</lastModDate>
        
        <creator>Minjuan GAO</creator>
        
        <creator>Yankang LI</creator>
        
        <creator>Xuande ZHANG</creator>
        
        <subject>No-reference image quality assessment; deep learning; multi-scale; feature fusion; local adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>No-reference image quality assessment (NR-IQA) aims to evaluate the perceptual quality of images without access to corresponding reference images and has broad applications in real-world image processing scenarios. However, existing NR-IQA methods often suffer from limited accuracy and generalization, especially under complex and diverse distortion types. To address this challenge, we propose Inc-LAENet, a novel NR-IQA framework that leverages multi-scale deep residual representations, integrates feature fusion mechanisms, and incorporates a local adaptive perception module to achieve improved assessment accuracy and generalization. Specifically, ResNet50 is employed to extract hierarchical residual features, an enhanced Inception-style module (Inc-s) strengthens sensitivity to various distortion patterns, and a lightweight local adaptive extraction module efficiently captures fine-grained structural information. Extensive experiments demonstrate the effectiveness of the proposed method, achieving SROCC values of 0.967 and 0.935 on the synthetic distortion datasets LIVE and CSIQ, and 0.852 and 0.898 on the authentic distortion datasets LIVEC and KonIQ-10k, respectively. These results confirm that Inc-LAENet provides a robust and efficient solution for NR-IQA tasks across both synthetic and real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_51-Image_Quality_Assessment_Based_on_Feature_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced LSTM Model Based on Feature Attention Mechanism and Emotional Intelligence for Advanced Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160650</link>
        <id>10.14569/IJACSA.2025.0160650</id>
        <doi>10.14569/IJACSA.2025.0160650</doi>
        <lastModDate>2025-06-30T13:21:18.1300000+00:00</lastModDate>
        
        <creator>Muhammad Naeem Aftab</creator>
        
        <creator>Dost Muhammad Khan</creator>
        
        <creator>Muhammad Zulqarnain</creator>
        
        <creator>Muhammad Rizwan Akram</creator>
        
        <subject>Sentiment analysis; emotional intelligence; attention mechanism; two-state LSTM; long-term dependencies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Sentiment analysis, a crucial yet complex task in natural language processing (NLP), is extensively employed to identify sentiment polarity within user-generated content. Traditional deep learning methods for textual sentiment analysis often overlook the influence of emotional modulation on extracting sentiment features, while their attention mechanisms primarily operate at the word or sentence level. Such oversight of higher-level abstractions may hinder the learning of nuanced sentiment patterns, ultimately degrading the accuracy of sentiment analysis. Addressing these gaps, this study proposes a novel framework, the Two-State Enhanced LSTM (TS-ELSTM), which integrates Emotional Intelligence (EI) and a Feature Attention Mechanism (FAM) to enhance the identification of relevant features during selection. Furthermore, this study employs a dual-phase LSTM training strategy to accelerate learning and minimize information loss. A dynamic topic-level attention mechanism is also introduced to optimize hidden text representation weights. By integrating EI with topic-level attention, the proposed framework efficiently extracts valuable features and enhances the feature learning ability of the conventional LSTM model. The framework attains emotion-aware learning through two key components, an emotion modulator and an emotion estimator, which regulate the system’s learning dynamics by incorporating emotional context. The experimental outcomes demonstrated that the proposed approach achieved accuracies of 84.20% and 94.12% on the MR and IMDB datasets, respectively. The proposed approach significantly improves sentiment analysis accuracy, outperforming traditional deep learning models by a notable margin.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_50-An_Enhanced_LSTM_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Enhancing Advanced Encryption Standard Performance and Cryptographic Resilience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160649</link>
        <id>10.14569/IJACSA.2025.0160649</id>
        <doi>10.14569/IJACSA.2025.0160649</doi>
        <lastModDate>2025-06-30T13:21:18.1000000+00:00</lastModDate>
        
        <creator>Muthu Meenakshi Ganesan</creator>
        
        <creator>Sabeen Selvaraj</creator>
        
        <subject>Cryptography; NIST; AES; block cipher; key expansion; symmetric encryption; galois field; statistical techniques; cryptanalysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Advanced Encryption Standard (AES) encrypts data in blocks of sixteen bytes to secure confidential data stored in the cloud. For cloud-based systems, enhancements to existing encryption techniques are necessary as the nature of cyber threats evolves and computational speed becomes increasingly critical. This study presents an enhanced design of AES that substitutes two special operations, Byte Transformation and Bits Permuted Bytes, for the conventional S-Box operation to improve the speed and security of the encryption method. The suggested round structure in the new approach to AES, which preserves the original data block size, consists of the following operations: Byte Transformation, Shift Rows, Bits Permuted Bytes, Add Round Key, and Mix Columns. Analysis of the strict avalanche effect, correlation coefficient, entropy, execution time, and throughput confirms that the developed scheme improves both security and processing speed.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_49-A_Novel_Approach_for_Enhancing_Advanced_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraged Cognitive Data Analytics and Artificial Intelligence to Enhance Sustainable Agricultural Practices: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160648</link>
        <id>10.14569/IJACSA.2025.0160648</id>
        <doi>10.14569/IJACSA.2025.0160648</doi>
        <lastModDate>2025-06-30T13:21:18.0370000+00:00</lastModDate>
        
        <creator>Wongpanya S. Nuankaew</creator>
        
        <creator>Patchara Nasa-Ngium</creator>
        
        <creator>Pratya Nuankaew</creator>
        
        <subject>Cognitive data analytics; sustainable agriculture; harnessing AI for agriculture; precision agriculture; environmental sustainability; food security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This systematic review examines the transformative role of Cognitive Data Analytics (CDA) and Artificial Intelligence (AI) in advancing sustainable agricultural practices, with the primary objective of evaluating their applications in Precision Agriculture (PA), the Internet of Things (IoT), smart irrigation, and Geographic Information Systems (GIS) from 2020 to 2025. Key findings highlight AI predictive modeling, IoT real-time monitoring, and GIS spatial analysis as improving crop yields, water conservation, and environmental management. Challenges such as high costs, technical expertise gaps, and regional disparities hinder adoption. The review underscores the need for supportive policies and farmer training to enhance food security and sustainability by 2030.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_48-Leveraged_Cognitive_Data_Analytics_and_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Intrusion Detection Systems for Securing IoT Healthcare Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160647</link>
        <id>10.14569/IJACSA.2025.0160647</id>
        <doi>10.14569/IJACSA.2025.0160647</doi>
        <lastModDate>2025-06-30T13:21:18.0070000+00:00</lastModDate>
        
        <creator>Muhammad Sajid Nawaz</creator>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>Binish Raza</creator>
        
        <creator>Manal Ahmad</creator>
        
        <creator>Farial Syed</creator>
        
        <subject>IoT; intrusion detection system (IDS); convolutional neural network (CNN); recurrent neural network (RNN); cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The integration of IoT in healthcare has been highly dynamic, bringing substantial improvements in patient health and operational efficiency. However, this integration also introduces new risks and threats, making IoT healthcare networks attractive targets for cyberattacks. This study explores an AI-based solution to defend healthcare IoT networks against intrusions. Using advanced machine learning algorithms and deep learning techniques, a credible IDS is built that can detect and neutralize security threats in a live environment. The proposed IDS is trained and tested on a large, rich dataset of IoT healthcare security incidents using architectures such as CNN and RNN. The system learns to identify numerous types of cyber threats, such as malware, ransomware, unauthorized access, and data breaches, with high accuracy and few false positives. This study shows that an AI-backed IDS is effective in improving the security posture of IoT healthcare networks, strengthening organizational control over critical patient information, and thus maintaining the continuous provision of healthcare services.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_47-AI_Driven_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TL-MC-ShuffleNetV2: A Lightweight and Transferable Framework for Elevator Guideway Fault Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160646</link>
        <id>10.14569/IJACSA.2025.0160646</id>
        <doi>10.14569/IJACSA.2025.0160646</doi>
        <lastModDate>2025-06-30T13:21:17.9600000+00:00</lastModDate>
        
        <creator>Zhiwei Zhou</creator>
        
        <creator>Xianghong Deng</creator>
        
        <creator>Xuwen Zheng</creator>
        
        <creator>Chonlatee Photong</creator>
        
        <subject>Transfer learning; elevator guideway; vibration signal analysis; fault diagnosis; lightweight deep neural network; squeeze-and-excitation attention; smart maintenance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study presents TL-MC-ShuffleNetV2, a lightweight and transferable fault diagnosis framework designed for elevator guideway vibration analysis. To tackle challenges such as limited labeled data and the constraints of real-time deployment, the approach integrates Variational Mode Decomposition (VMD) for multi-scale signal separation and employs a customized 1D ShuffleNetV2 backbone with multi-channel (MC) inputs. Squeeze-and-Excitation (SE) attention modules are embedded throughout the network to enhance channel-wise feature sensitivity. A transfer learning (TL) strategy is adopted, in which the model is initially trained using the Case Western Reserve University (CWRU) bearing dataset and subsequently adapted to the elevator domain by freezing early convolutional layers while fine-tuning higher-level layers. Evaluation results demonstrate that the proposed framework achieves a classification accuracy of 96.4%, alongside significantly reduced inference time and parameter complexity. Comparative and ablation experiments further validate the individual contributions of VMD preprocessing, SE modules, and transfer learning to model performance. Overall, the method exhibits strong adaptability, computational efficiency, and suitability for deployment in smart elevator monitoring systems under Industry 4.0 environments.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_46-TL_MC_ShuffleNetV2_A_Lightweight_and_Transferable_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Rule-Based Framework for Clothing Fit Recommendation from 3D Body Reconstruction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160645</link>
        <id>10.14569/IJACSA.2025.0160645</id>
        <doi>10.14569/IJACSA.2025.0160645</doi>
        <lastModDate>2025-06-30T13:21:17.9270000+00:00</lastModDate>
        
        <creator>Hamid Ouhnni</creator>
        
        <creator>Acim Btissam</creator>
        
        <creator>Belhiah Meryam</creator>
        
        <creator>Benachir Rigalma</creator>
        
        <creator>Soumia Zit</creator>
        
        <subject>Body size estimation; SMPLify-X; OpenPose; 3D body modeling; clothing size prediction; e-commerce sizing; human pose estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This research presents a comprehensive framework for body size estimation that accurately derives anthropometric measurements—specifically, the circumferences of the waist and hips—from a singular image by utilizing OpenPose for joint localization and SMPLify-X for precise 3D body modeling. The proposed methodology involves projecting the generated three-dimensional model onto a horizontal plane and applying a convex hull geometric assessment to extract relevant body measurements. These derived measurements are then classified into standardized clothing size predictions (XS–XL) via a transparent rule-based classification system suitable for e-commerce sizing and virtual fitting applications. Empirical validation conducted on the Agora dataset substantiated the framework&#39;s reliability across diverse body types, demonstrating strong consistency with industry sizing standards. The method is non-intrusive and interpretable, effectively addressing practical challenges in automated human pose estimation for retail contexts. Limitations include constraints related to body posture and potential clothing interference; however, the modular design enables enhancements such as integrating chest circumference measurements and mobile deployment. This scholarly contribution thus provides a robust, accessible solution for automated, image-based clothing size recommendations.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_45-A_Rule_Based_Framework_for_Clothing_Fit_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning and 5G Edge Computing for Intelligent Traffic Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160644</link>
        <id>10.14569/IJACSA.2025.0160644</id>
        <doi>10.14569/IJACSA.2025.0160644</doi>
        <lastModDate>2025-06-30T13:21:17.8970000+00:00</lastModDate>
        
        <creator>Talbi Chaymae</creator>
        
        <creator>Rahmouni M&#39;hamed</creator>
        
        <creator>Ziti Soumia</creator>
        
        <subject>5G Edge computing; traffic management; dynamic routing; smart cities; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The integration of fifth-generation (5G) communication technology and Artificial Intelligence (AI) is reshaping urban mobility by enabling intelligent transportation systems and smarter cities. This synergy allows real-time traffic management, predictive maintenance, and enhanced autonomous driving, supported by high-speed, low-latency networks and advanced data analytics. By leveraging 5G’s strong connectivity, AI systems can process massive datasets to address urban challenges such as traffic congestion, environmental sustainability, and public safety. This study presents a framework that combines 5G and AI to optimize traffic management through dynamic congestion prediction and real-time routing, supported by edge computing. It highlights the benefits of improving traffic flow, reducing emissions, and enhancing overall urban mobility efficiency. In addition, it discusses key challenges including data privacy concerns, cybersecurity risks, and the high cost of infrastructure deployment. By analyzing existing technologies and proposing an AI-driven, 5G-enabled system model, this study aims to bridge the gap between theoretical advancements and practical urban implementations. The findings provide insights into scalable, efficient solutions for the future of smart transportation networks and offer directions for further research in this dynamic and evolving field.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_44-Machine_Learning_and_5G_Edge_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Ontology-Driven and Governance-Integrated Method for Information Dashboard Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160643</link>
        <id>10.14569/IJACSA.2025.0160643</id>
        <doi>10.14569/IJACSA.2025.0160643</doi>
        <lastModDate>2025-06-30T13:21:17.8670000+00:00</lastModDate>
        
        <creator>Ahadi Haji Mohd Nasir</creator>
        
        <creator>Nik Habibullah Nik Mohd Nizam</creator>
        
        <creator>Mohd Khairul Maswan Mohd Redzuan</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <subject>Information dashboard; ontological modelling; information dashboard design ontology (IDDO); information dashboard design method (IDDM) canvas; information governance (IG); unified ontological approach (UOA); design science research methodology (DSRM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Despite the increasing reliance on information dashboards across industries, dashboard design practices remain fragmented, lacking standardized methodologies, ontological formalization, and governance integration. Addressing these gaps, this study develops a method to guide dashboard design by embedding ontological modeling and Information Governance (IG) principles. Two complementary artifacts are proposed: the Information Dashboard Design Ontology (IDDO) and the Information Dashboard Design Method (IDDM) Canvas. Using Design Science Research Methodology (DSRM) and a Unified Ontological Approach (UOA), IDDO formalizes tacit dashboard design knowledge into a structured framework, while the IDDM Canvas operationalizes this ontology into a practical design tool. Validation through the Ontological Unified Modeling Language (OntoUML) Plugin and conceptual assessment based on Unified Foundational Ontology (UFO) principles confirmed internal consistency and ontological soundness. The resulting framework integrates twelve dashboard design building blocks with eight IG principles to ensure rigor and governance alignment. The application of the IDDM Canvas demonstrated its utility in facilitating structured, replicable dashboard development. While the evaluation focused primarily on conceptual validation, future studies are recommended to empirically assess the framework’s practical effectiveness across various domains and real-world projects.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_43-Developing_an_Ontology_Driven_and_Governance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning Improves SVM-Driven Algorithms for Classifying Multi-Sensor Data for Medical Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160642</link>
        <id>10.14569/IJACSA.2025.0160642</id>
        <doi>10.14569/IJACSA.2025.0160642</doi>
        <lastModDate>2025-06-30T13:21:17.8500000+00:00</lastModDate>
        
        <creator>Zhiwei Xuan</creator>
        
        <creator>Yajie Liu</creator>
        
        <subject>Reinforcement learning; improved SVM; medical monitoring; multi-sensor; data; classification processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Multi-sensor data in medical monitoring includes waveform changes in physiological signals and time-series characteristics of disease progression. These features typically exhibit high-dimensional, large-scale, and time-varying characteristics. Nonlinear relationships exist between these features, increasing the difficulty of data processing and feature extraction and thereby reducing the classification capability of related algorithms. This study proposes a multi-sensor data classification method for medical monitoring based on a reinforcement-learning-improved SVM. The algorithm employs the DBSCAN algorithm combined with Euclidean distance for clustering and collection of multi-sensor data. Discrete wavelet transform is used to remove interference noise from the data, followed by convolutional neural networks for signal feature extraction from the denoised data. The Q-learning algorithm in reinforcement learning is used to improve the traditional SVM, with the extracted signal features input into the improved SVM. The classification results for medical monitoring multi-sensor data are output via a regression function. The experimental results show that the method achieves strong denoising of medical monitoring data with a high signal-to-noise ratio, and its Kappa coefficient reaches up to 0.98, demonstrating that the method can accurately classify medical monitoring multi-sensor data.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_42-Reinforcement_Learning_Improves_SVM_Driven_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced AI-Driven Safety Compliance Monitoring in Dynamic Construction Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160641</link>
        <id>10.14569/IJACSA.2025.0160641</id>
        <doi>10.14569/IJACSA.2025.0160641</doi>
        <lastModDate>2025-06-30T13:21:17.8030000+00:00</lastModDate>
        
        <creator>Aisha Hassan</creator>
        
        <creator>Ali H. Hassan</creator>
        
        <creator>Yasmin Christensen</creator>
        
        <creator>Hussain Alsadiq</creator>
        
        <subject>YOLOv11n; personal protection equipment (PPE); construction safety; real-time object detection; deep learning; AI-driven compliance systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Construction safety is a critical global concern due to the high-risk environment faced by workers, with accidents often leading to serious injuries and fatalities. To enhance construction management, this study proposes a scalable deep-learning model for real-time compliance monitoring of safety regulations. The research gap addressed is the lack of real-time, scalable AI solutions for safety compliance monitoring in dynamic construction environments. The YOLOv11n model was trained and evaluated to identify and track the use of safety helmets and vests in highly dynamic environments, ensuring timely detection of non-compliance. It was hypothesized that the YOLOv11n model would outperform baseline models in accuracy and real-time monitoring speed. The YOLOv11n model outperformed the other baseline models, with precision, recall, and mean average precision scores of 89.5%, 85%, and 91.6%, respectively, and a real-time processing speed of 71.68 FPS. Its lightweight size and performance make it suitable for deployment. Integrated with a person-detection framework, the system provides real-time desktop alerts for safety violations, enhancing safety compliance. These findings contribute to construction automation by advancing scalable AI-driven solutions for proactive safety compliance, reducing accidents, and improving operational efficiency on construction sites.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_41-Advanced_AI_Driven_Safety_Compliance_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Framework for Loan Default Prediction Using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160640</link>
        <id>10.14569/IJACSA.2025.0160640</id>
        <doi>10.14569/IJACSA.2025.0160640</doi>
        <lastModDate>2025-06-30T13:21:17.7730000+00:00</lastModDate>
        
        <creator>Mona Aly SharafEldin</creator>
        
        <creator>Amira M. Idrees</creator>
        
        <creator>Shimaa Ouf</creator>
        
        <subject>Random forest; decision trees; gradient boosting machines; feature selection; feature importance; loan default</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The accurate prediction of loan defaults is critical for the risk management strategies of financial institutions. Traditional credit assessment approaches have often relied on subjective judgment, leading to inconsistent decisions and heightened financial risk. This study investigates the application of machine learning techniques—namely Random Forest, Decision Tree, and Gradient Boosting—to predict loan defaults using customer data from the Agricultural Bank of Egypt. The research emphasizes the role of feature selection in enhancing model performance, utilizing both embedded and recursive methods to isolate key predictive attributes. Among the evaluated features, loan balance, due amount, and delinquency history emerged as the most influential, while demographic variables like gender and employment status were found to be less significant. The Decision Tree model demonstrated superior performance with an overall accuracy of 88%, a recall of 53%, and a specificity of 89%, making it the most effective among the tested classifiers. The findings highlight the importance of combining robust feature selection with interpretable models to support informed decision-making in banking.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_40-A_Proposed_Framework_for_Loan_Default_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning for Real-Time Scheduling in Dynamic Reconfigurable Manufacturing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160639</link>
        <id>10.14569/IJACSA.2025.0160639</id>
        <doi>10.14569/IJACSA.2025.0160639</doi>
        <lastModDate>2025-06-30T13:21:17.7400000+00:00</lastModDate>
        
        <creator>Salah Hammedi</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Mohamed Shili</creator>
        
        <subject>Adaptability; deep reinforcement learning (DRL); makespan; manufacturing systems; reinforcement learning (RL); resource utilization; scheduling optimization; shortest processing time (SPT); tardiness; traditional scheduling methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study presents a novel application of Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) for scheduling optimization in Reconfigurable Manufacturing Systems (RMFS). The performance of these approaches is quantitatively evaluated and compared with traditional scheduling methods, specifically Shortest Processing Time (SPT) and Earliest Due Date (EDD), across several key metrics, including makespan, tardiness, resource utilization, and adaptability to disturbances. Our results show a significant reduction in makespan, with RL achieving a 20% improvement and DRL a 28.57% improvement over SPT. Moreover, RL and DRL outperform classical methods in minimizing tardiness and improving resource utilization. DRL also demonstrates superior adaptability under dynamic disruptions such as machine breakdowns, with only a 5% deviation in makespan compared to 16.67% for SPT. These findings confirm the benefits of RL and DRL for real-time decision-making in dynamic manufacturing environments. The study discusses the robustness and scalability of RL and DRL approaches, as well as the challenges related to their computational cost. The novelty lies in integrating RL and DRL into RMFS scheduling to offer a scalable, adaptive solution that improves production efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_39-Reinforcement_Learning_for_Real_Time_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid PSO-ACO Optimization for Rice Leaf Disease Classification Using Random Forest and Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160638</link>
        <id>10.14569/IJACSA.2025.0160638</id>
        <doi>10.14569/IJACSA.2025.0160638</doi>
        <lastModDate>2025-06-30T13:21:17.7270000+00:00</lastModDate>
        
        <creator>Avip Kurniawan</creator>
        
        <creator>Tri Retnaningsih Soeprobowati</creator>
        
        <creator>Budi Warsito</creator>
        
        <subject>Rice leaf disease; particle swarm optimization (PSO); support vector machine (SVM); feature extraction; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study proposes a hybrid machine learning framework for rice leaf disease detection by combining handcrafted feature extraction with metaheuristic optimization and classical classifiers. Using a dataset of 6,000 rice leaf images across seven classes, features including color, texture, shape, and edge were extracted and optimized using Spider Monkey Optimization (SMO), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO). Classification was conducted using Random Forest Classifier (RFC) and Support Vector Classifier (SVC), both with and without hyperparameter tuning. Experimental results revealed that PSO consistently outperformed other optimizers, achieving 91.00% accuracy with RFC and 94.64% with SVC when all features and optimal parameters were used. While SMO also showed strong performance, ACO yielded less consistent results. These findings highlight the importance of combining comprehensive feature engineering with adaptive optimization strategies to improve classification accuracy. Compared to previous SMO-based approaches, the proposed PSO-ACO framework demonstrated improved stability and scalability. The proposed framework is interpretable, efficient, and scalable, making it suitable for practical deployment in precision agriculture. Future research directions include integrating deep learning with handcrafted features, developing adaptive metaheuristics, and implementing real-time mobile detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_38-Hybrid_PSO_ACO_Optimization_for_Rice_Leaf_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Using Convolutional Neural Networks for Preliminary Diagnosis of Rosacea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160637</link>
        <id>10.14569/IJACSA.2025.0160637</id>
        <doi>10.14569/IJACSA.2025.0160637</doi>
        <lastModDate>2025-06-30T13:21:17.7100000+00:00</lastModDate>
        
        <creator>Angie Fiorella Sapaico-Alberto</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Convolutional neural networks; mobile application; rosacea; preliminary diagnosis; sensitivity; specificity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Rosacea is a chronic skin disease affecting millions of people worldwide, characterized by redness and inflammatory lesions on the face. Given the need to improve early detection, this research aims to develop a mobile application using convolutional neural networks to improve the preliminary diagnosis of rosacea. For this purpose, increases in sensitivity, specificity, and accuracy were evaluated. The study was applied research with a quantitative approach and an experimental design, specifically pre-experimental. The study variable was the preliminary diagnosis of rosacea, and the sample consisted of 100 images: 50 from rosacea patients and 50 from healthy people. The technique used for data collection was observation. The results of the implementation of the mobile application showed an increase of 2.7% in sensitivity, 1.97% in specificity, and 0.10% in accuracy. In conclusion, the use of the mobile application with convolutional neural networks improves the preliminary diagnosis of rosacea by optimizing the indicators evaluated.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_37-Mobile_Application_Using_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review on Artificial Intelligence-Driven Personalized Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160636</link>
        <id>10.14569/IJACSA.2025.0160636</id>
        <doi>10.14569/IJACSA.2025.0160636</doi>
        <lastModDate>2025-06-30T13:21:17.6770000+00:00</lastModDate>
        
        <creator>Anas Usman Inuwa</creator>
        
        <creator>Shahida Sulaiman</creator>
        
        <creator>Ruhaidah Samsudin</creator>
        
        <subject>Personalized learning; model; framework; approach; technique; systematic literature review; personalized learning components; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Artificial Intelligence (AI) is widely used in various contexts, including education at different levels, such as K-12 (kindergarten through 12th grade) and higher learning. The impact of AI in education is becoming increasingly significant, making the academic sphere more effective, personalized, global, context-intensive, and asynchronous. Despite the publication of several systematic literature reviews, mapping studies, and reviews on the use of AI in education, there is still a lack of reviews focusing on personalized learning (PL) frameworks, models, and approaches at various levels, especially the pre-university level for Science, Technology, Engineering, and Mathematics (STEM) subjects. To address this gap, our work presents a systematic literature review of AI-driven PL models, frameworks, and approaches published over the past ten years, from 2013 to 2023, extracted from the Scopus database. This review focuses on the AI techniques used; personalized learning elements, components, and attributes; the possibility of replicating these techniques in pre-university studies; and gaps or prospects that will attract further research. The study reviewed 69 articles, downloaded via the Scopus database, and reported the most used AI techniques, PL components or factors, trends, and prospects for future research. The results show that most existing studies focus on higher learning, leaving the pre-university level in need of further research. In addition, machine learning and deep learning are identified as the most suitable and frequently used techniques, while knowledge delivery and learners’ needs, behavior, and interests are the most required components for personalized systems in diverse fields. In terms of publication output by country, the study indicates that Switzerland, the USA, the UK, and China are the leading contributors to PL research. Thus, this study calls for further research on AI-driven personalized learning that thoughtfully integrates educational theories, subject-specific content, and industry needs to enhance outcomes and learner satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_36-Systematic_Literature_Review_on_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Impact of Robotic Process Automation (RPA) on Productivity and Firm Performance in the Service Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160635</link>
        <id>10.14569/IJACSA.2025.0160635</id>
        <doi>10.14569/IJACSA.2025.0160635</doi>
        <lastModDate>2025-06-30T13:21:17.6470000+00:00</lastModDate>
        
        <creator>Miftakul Huda</creator>
        
        <creator>Agus Rahayu</creator>
        
        <creator>Chairul Furqon</creator>
        
        <creator>Mokh Adib Sultan</creator>
        
        <creator>Neng Susi Susilawati Sugiana</creator>
        
        <subject>Robotic process automation; productivity improvement; firm performance; service sector; digital transformation; operational efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Robotic Process Automation (RPA) has emerged as a transformative technology in the service sector, enabling organizations to automate repetitive and rule-based tasks with minimal human intervention. This study investigates the impact of RPA implementation on productivity and overall firm performance within service-oriented businesses. Using a mixed-method approach, quantitative data were collected from 50 service firms that have adopted RPA technologies, complemented by qualitative insights from managerial interviews. The findings reveal that RPA significantly enhances operational efficiency by reducing process cycle times, minimizing errors, and lowering operational costs. These productivity gains directly contribute to improved financial outcomes and customer satisfaction, key indicators of firm performance. Furthermore, the study highlights critical success factors such as employee training, change management, and technology integration that influence the effectiveness of RPA deployment. However, challenges related to workforce adaptation and initial investment costs are also discussed. This research provides valuable empirical evidence for service sector firms considering RPA adoption, emphasizing that strategic implementation can lead to sustainable competitive advantages. The study contributes to the growing body of knowledge on digital transformation by linking RPA technology with measurable improvements in productivity and firm performance, offering practical recommendations for managers and policymakers aiming to optimize automation strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_35-Analyzing_the_Impact_of_Robotic_Process_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of CNN and Transformer Architectures for Proactive Wildfire Detection in Satellite Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160634</link>
        <id>10.14569/IJACSA.2025.0160634</id>
        <doi>10.14569/IJACSA.2025.0160634</doi>
        <lastModDate>2025-06-30T13:21:17.6170000+00:00</lastModDate>
        
        <creator>Shereen Essam Elbohy</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <creator>Farid Ali Mousa</creator>
        
        <subject>Wildfire detection; satellite imagery; convolutional neural networks (CNN); transformers; deep learning; hybrid model; proactive monitoring; remote sensing; disaster prevention; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Wildfires pose a significant threat to ecosystems, human settlements, and air quality, necessitating advanced detection and mitigation strategies. Traditional wildfire detection methods often rely on manual observation and conventional machine learning approaches, which may lack efficiency and accuracy. This study proposes a novel deep learning model based on the ConvNeXt-Small architecture, a hybrid design that fuses the strengths of Convolutional Neural Networks (CNNs) and Transformer-inspired mechanisms, enabling more comprehensive analysis of wildfire patterns in satellite imagery. The model was trained using the Adam optimizer, which provides efficient convergence and adaptive learning. The dataset used consists of real-world satellite images collected from wildfire-affected regions in Canada, covering various geographic and seasonal conditions to reflect real environmental diversity. The results underscore the potential of ConvNeXt-based architecture for real-time, high-precision wildfire detection, offering a powerful tool for early intervention, disaster mitigation, and environmental monitoring efforts.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_34-Fusion_of_CNN_and_Transformer_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Natural Disasters Using Faster R-CNN with ResNet50 Backbone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160633</link>
        <id>10.14569/IJACSA.2025.0160633</id>
        <doi>10.14569/IJACSA.2025.0160633</doi>
        <lastModDate>2025-06-30T13:21:17.6000000+00:00</lastModDate>
        
        <creator>Shereen Essam Elbohy</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <creator>Farid Ali Mousa</creator>
        
        <subject>Natural disasters detection; satellite imagery; convolutional neural networks (CNN); transformers; deep learning; ResNet50; proactive monitoring; faster R-CNN; disaster prevention; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Natural disasters pose significant threats to human life and infrastructure, and their accurate and timely detection and assessment are crucial for minimizing their impact, supporting emergency response efforts, and enabling effective disaster management. This study presents a comparative analysis of deep learning architectures for automatic natural disaster detection using satellite and aerial imagery. Four models were evaluated using standard classification metrics: a baseline CNN, ResNet50, Faster-CNN, and Faster R-CNN with a ResNet50 backbone. The results demonstrate that deeper and more sophisticated models significantly enhance detection performance. While the baseline CNN achieved modest results with 85.3% accuracy, integrating residual learning in ResNet50 improved accuracy to 92.7%. Region-based models further boosted performance, with Faster-CNN and Faster R-CNN attaining 95.1% and 97.1% accuracy, respectively. The superior performance of Faster R-CNN with ResNet50 highlights its robustness and suitability for real-time disaster monitoring, offering a scalable and reliable solution for operational deployment in disaster management systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_33-Automatic_Detection_of_Natural_Disasters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Tuning OpenAI GPT Chatbot in Western Saudi Dialect: A Case Study of Taibah University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160632</link>
        <id>10.14569/IJACSA.2025.0160632</id>
        <doi>10.14569/IJACSA.2025.0160632</doi>
        <lastModDate>2025-06-30T13:21:17.5700000+00:00</lastModDate>
        
        <creator>Maimounah Alhujaili</creator>
        
        <creator>Ruqayya Abdulrahman</creator>
        
        <subject>Artificial intelligence (AI); large language model (LLM); generative pre-trained transformer (GPT); Modern Standard Arabic (MSA); Western Saudi dialect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The current era is characterized by technological advancement and innovation, which affect various sectors. Numerous remarkable and alluring computer programs and applications have surfaced, including ones that aim to replicate human behavior. A chatbot is an example of an Artificial Intelligence (AI) computer program that uses natural language to mimic human conversations in voice or text. Although Arabic chatbots remain scarce, most of those that exist use Modern Standard Arabic rather than Arabic dialects. This research presents the development and evaluation of a chatbot designed to respond to academic inquiries from university students using the Western Saudi dialect. A traditional Support Vector Machine (SVM) baseline model was first implemented to establish a reference point for performance. Subsequently, a fine-tuned version of Generative Pre-trained Transformer (GPT) 3.5-Turbo-0125 was developed using a culturally specific system prompt to enhance the model’s understanding of regional language and academic contexts. Evaluation was conducted through a multi-dimensional framework combining human assessments, BERTScore semantic similarity measurements, and GPT-4-based automatic judging. With human assessors determining that 85% of GPT-3.5&#39;s replies to 132 test messages were appropriate, the transformer-based model clearly outperformed the SVM baseline, which achieved an accuracy of 42.86% on 20 test messages. These findings highlight the importance of cultural and contextual fine-tuning in building effective conversational agents for dialectal Arabic communities. The research contributes to the growing field of localized AI by demonstrating how advanced language models can be adapted to serve specialized linguistic and academic needs.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_32-Fine_Tuning_OpenAI_GPT_Chatbot_in_Western_Saudi_Dialect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deepfake Audio Detection Using Feature-Based and Deep Learning Approaches: ANN vs ResNet50</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160631</link>
        <id>10.14569/IJACSA.2025.0160631</id>
        <doi>10.14569/IJACSA.2025.0160631</doi>
        <lastModDate>2025-06-30T13:21:17.5370000+00:00</lastModDate>
        
        <creator>Reham Mohamed Abdulhamied</creator>
        
        <creator>Sarah Naiem</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <creator>Farid Ali Moussa</creator>
        
        <subject>Audio classification; automatic speech recognition; machine learning; deep learning; DEEP-VOICE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The proliferation of algorithms and commercial tools for generating synthetic audio has sparked a surge in misinformation, especially on social media platforms. Consequently, significant attention has been devoted to detecting such misleading content in recent years. However, effectively addressing this challenge remains elusive, given the increasing naturalness of fake audio. This study introduces a model designed to distinguish between natural and fake audio, employing a two-stage approach: an audio preparation phase involving raw audio manipulation, followed by modeling using two distinct models. The first model employed feature extraction through wavelet transformation, followed by classification using an Artificial Neural Network. The second model utilized the ResNet50 architecture, a deep learning model, which resulted in improved accuracy. These findings underscore the effectiveness of deep learning approaches in audio classification tasks. Training data for the model is sourced from the DEEP-VOICE dataset, which comprises both genuine and synthetic audio generated by various deepfake algorithms. The model’s performance is assessed using diverse metrics such as accuracy, F1 score, precision, and recall. Results indicate successful classification of audio in 86% of cases. This research contributes to the field of Automatic Speech Recognition (ASR) by integrating advanced preprocessing techniques with robust model architectures to identify manipulated speech.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_31-Deepfake_Audio_Detection_Using_Feature_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Study of Computer Networks Based on Weighted Dynamic Network Representation Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160630</link>
        <id>10.14569/IJACSA.2025.0160630</id>
        <doi>10.14569/IJACSA.2025.0160630</doi>
        <lastModDate>2025-06-30T13:21:17.5070000+00:00</lastModDate>
        
        <creator>Xin Wei</creator>
        
        <subject>Network security; attacks; weighted dynamic network; anomaly detection; deep learning; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>One of the most significant challenges in the continuously evolving technological environment is the need to secure the authenticity of data. Network security, one of several forms of data security assurance, is a primary method for protecting the confidentiality of data during communication. To protect networks against further cyberattacks, trustworthy Anomaly Detection (AD) is essential. The drawbacks of conventional AD are gradually increasing as various types of attacks and network changes continually evolve. This study proposes a novel approach that incorporates Weighted Long Short-Term Memory (WLSTM) networks with Dynamic Network Representation Learning (DNRL) to address these problems, referred to as the Weighted Dynamic Network Representation Learning (WDNRL) paradigm. This investigation develops the WLSTM using the Weight of Evidence (WoE), which periodically assigns weights to network features in the resulting network model. The WLSTM design functions as the network&#39;s coordinator, obtaining data from the proposed model, upgrading the representation, and aggregating the features. The findings showed that the proposed model achieved high accuracy rates of 99.85% for Denial of Service (DoS) attacks and 99.55% for Distributed Denial of Service (DDoS) attacks when evaluated on two datasets, NSL-KDD and CICIDS-2017, compared with different models. Additionally, the simulation&#39;s F1-scores, recall rates, and precision are all above average, indicating that it is capable of identifying many network anomalies with minimal false positives (FP).</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_30-Anomaly_Study_of_Computer_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LaObese: A Serious Game Powered by Analytic Hierarchy Process for Culturally Tailored Childhood Obesity Prevention in Oman</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160629</link>
        <id>10.14569/IJACSA.2025.0160629</id>
        <doi>10.14569/IJACSA.2025.0160629</doi>
        <lastModDate>2025-06-30T13:21:17.4770000+00:00</lastModDate>
        
        <creator>Nurul Akhmal Mohd Zulkefli</creator>
        
        <creator>Mukesh Madanan</creator>
        
        <creator>Zainab Mohammed Al-Nahdi</creator>
        
        <creator>Jayasree Radhamaniamma</creator>
        
        <subject>Serious games; childhood obesity; gamification; analytic hierarchy process (AHP); preschool nutrition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Childhood obesity is a growing public health concern in Oman, yet culturally appropriate digital tools for early prevention remain scarce. This study introduces LaObese, a mobile application and serious game designed to prevent obesity in Omani preschool children. The name LaObese derives from the Arabic word “La” (meaning “no”) and an abbreviation of “obesity”, reflecting the game’s mission to say, “no to obesity”. LaObese integrates gamification and behaviour modification strategies to encourage healthy lifestyle practices from an early age. Targeting children aged six, the initiative addresses the urgent issue of early childhood obesity, which is linked to long-term health complications. The development process incorporated a Multi-Criteria Decision-Making (MCDM) approach, specifically the Analytic Hierarchy Process (AHP), to identify and prioritise foods commonly consumed by children, ensuring the game’s nutritional content is both effective and culturally relevant. Food items were categorised (e.g., fruits, vegetables, proteins, beverages, desserts, and traditional Omani dishes) to align with preschool nutrition goals and local dietary habits. Findings from expert assessments (informed by national nutrition guidelines and local data from Salalah, Dhofar) highlight a growing preference for nutrient-rich traditional foods like dates and almonds among young children. LaObese is the first serious game in the Arab region to integrate an AHP-driven educational model for childhood obesity prevention. The platform facilitates collaboration between health professionals and educators in creating culturally tailored digital interventions, aiming to instil healthy eating habits in children through engaging gameplay.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_29-LaObese_A_Serious_Game_Powered_by_Analytic_Hierarchy_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Organizational Threat Profiling by Employing Deep Learning with Physical Security Systems and Human Behavior Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160628</link>
        <id>10.14569/IJACSA.2025.0160628</id>
        <doi>10.14569/IJACSA.2025.0160628</doi>
        <lastModDate>2025-06-30T13:21:17.4600000+00:00</lastModDate>
        
        <creator>D. H. Senevirathna</creator>
        
        <creator>W. M. M. Gunasekara</creator>
        
        <creator>K. P. A. T. Gunawardhana</creator>
        
        <creator>M. F. F. Ashra</creator>
        
        <creator>Harinda Fernando</creator>
        
        <creator>Kavinga Yapa Abeywardena</creator>
        
        <subject>Deep learning; physical security; human behavior analysis; security operation centers; threat profiling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Organizations need a comprehensive threat profiling system that combines cybersecurity and physical security methods, because advanced cyber-threats have become increasingly complex. The objective of this study is to apply deep learning models to improve organizational threat identification through human behavior analysis and continuous surveillance. Our human behavior analysis method detects insider threats by assessing user activities, including logon patterns, device interactions, and psychometric traits. A CNN, together with Random Forest classifiers, is used to identify behavioral patterns that indicate security threats from inside the organization. Our model uses labeled datasets of abnormal user behavior to differentiate between normal and dangerous user activities with high accuracy. The physical security component improves surveillance capabilities through the use of MobileNetV2 for real-time anomaly detection in CCTV video data. The system is trained to detect security breaches, violent incidents, unauthorized entry attempts, and other specific security-related incidents. The combination of transfer learning and fine-tuning enables MobileNetV2 to deliver outstanding security anomaly detection with low power requirements, making it well suited to Security Operations Center environments. Experiments with our framework were conducted on existing benchmark datasets covering both cybersecurity and physical security threats. Experimental testing establishes high precision in detecting insider threats and physical security violations, surpassing conventional rule-based methods. Security Operation Centers thus gain an effective, modern threat profiling solution through the application of deep learning models. The investigation strengthens organizational defenses against cyber-physical threats by combining behavioral analytics with intelligent surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_28-Enhancing_Organizational_Threat_Profiling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Child Healthcare System Using Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160627</link>
        <id>10.14569/IJACSA.2025.0160627</id>
        <doi>10.14569/IJACSA.2025.0160627</doi>
        <lastModDate>2025-06-30T13:21:17.4130000+00:00</lastModDate>
        
        <creator>Mahesh Ashok Mahant</creator>
        
        <creator>P. Vidyullatha</creator>
        
        <subject>Child healthcare system; random forest; machine learning; registration process; medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Proactive and customized approaches are necessary when it comes to the medical care of expectant mothers and children. Early and accurate disease prediction based on readily available symptom information can significantly improve outcomes by enabling timely therapies. Extensive testing and specialist visits are common components of traditional diagnosis techniques, which may be costly and time-consuming, especially in situations with limited resources. This study explores the potential of using Random Forest, a powerful machine learning algorithm, to predict diseases in children and pregnant women based on the symptoms that they exhibit. This offers a promising option for improved healthcare delivery and early risk assessment. Making predictions about childhood diseases, including pneumonia, malaria, and malnutrition, based on reported symptoms can significantly lower morbidity and death. A Random Forest model can identify the probability of certain diseases and provide rapid referrals for additional testing and treatment when symptoms like fever, cough, dyspnoea, and weight loss are entered. Communities that are geographically remote and have limited access to specialized medical care stand to benefit most from this approach. The early diagnosis of conditions including gestational diabetes, preeclampsia, and anemia during pregnancy is crucial for the mother&#39;s and the unborn child&#39;s health. Early detection of the ailment allows for the timely implementation of preventative measures, such as changing one&#39;s lifestyle or taking medication. The accuracy of the proposed child healthcare system is 92%, which is higher than existing methods. This analysis is based on the information provided by parents about the symptoms of their child’s diseases.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_27-Framework_for_Child_Healthcare_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solar-Net: Adaptive Fusion of Spatial-Temporal Features for Resilient Solar Power Generation Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160626</link>
        <id>10.14569/IJACSA.2025.0160626</id>
        <doi>10.14569/IJACSA.2025.0160626</doi>
        <lastModDate>2025-06-30T13:21:17.3800000+00:00</lastModDate>
        
        <creator>Wenqian Su</creator>
        
        <creator>Jason See Toh Seong Kuan</creator>
        
        <creator>Xiangyu Shi</creator>
        
        <creator>Yuchen Zhang</creator>
        
        <subject>Solar power generation forecasting; hybrid deep learning; adaptive attention fusion; CNN+Transformer; extreme weather adaptability; sustainable development goal 7</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Solar power generation forecasting faces significant challenges due to intermittency and volatility, particularly under extreme weather conditions. This study proposes Solar-Net, a novel solar power generation prediction model based on a CNN+Transformer hybrid parallel architecture with an adaptive attention fusion mechanism. The CNN branch extracts spatial features from the power station layout and environmental conditions, while the Transformer branch models temporal dependencies in generation patterns. The core innovation lies in the adaptive attention fusion mechanism that dynamically adjusts branch weights according to real-time meteorological conditions, enabling the model to automatically adapt to varying environmental scenarios. Experiments were conducted on a comprehensive dataset containing over 50,000 observation points from two photovoltaic power stations. Results demonstrate that Solar-Net achieves superior performance compared to existing methods, with Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) improvements of 12.7% and 10.9%, respectively. Under extreme weather conditions such as dust storms, the model maintains prediction errors within 8.5% of peak power generation, representing a 45.7% average reduction compared to baseline methods. The multi-scale convolution design enhances prediction accuracy by 10.5% while reducing computational complexity by 21.3%. The proposed Solar-Net model provides a robust and efficient solution for solar power generation forecasting, demonstrating significant potential for improving grid dispatching efficiency and supporting renewable energy integration in power systems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_26-Solar_Net_Adaptive_Fusion_of_Spatial_Temporal_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reading-Aware Fusion Fact Reasoning Network for Explainable Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160625</link>
        <id>10.14569/IJACSA.2025.0160625</id>
        <doi>10.14569/IJACSA.2025.0160625</doi>
        <lastModDate>2025-06-30T13:21:17.3500000+00:00</lastModDate>
        
        <creator>Bofan Wang</creator>
        
        <creator>Shenwu Zhang</creator>
        
        <subject>Explainable fake news detection; fact reasoning; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The current growth of information exhibits an exponential trend, with fake news becoming a focal issue for both the public and governments. Existing fact-checking-based fake news detection methods face three challenges: a heavy reliance on fact-checking reports, a lack of explanatory evidence related to the original reports, and a shallow level of feature interaction. To address these challenges, this study proposes a Reading-aware Fusion Fact Reasoning Network (RFFR) for explainable fake news detection. For extractive evidence supporting explainability, a Hierarchical Encoding Layer is constructed to capture sentence-level and document-level feature representations, followed by a Fact Reasoning Layer that obtains the report and sentence representations most relevant to the claim, thereby reducing the model&#39;s reliance on fact-checking reports. Inspired by reading behaviors, in which readers often repeatedly read the claim and the corresponding report during information verification, a Reading-aware Fusion Layer is introduced to learn the deep interdependencies among the claim, evidence, and report feature representations, enhancing semantic integration. Extensive experiments were conducted on the publicly available RAWFC and LIAR fake news datasets. The experimental results demonstrate that RFFR outperforms leading advanced baselines on both datasets.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_25-A_Reading_Aware_Fusion_Fact_Reasoning_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>False News Recognition Model Based on Attention Mechanism and Multiple Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160624</link>
        <id>10.14569/IJACSA.2025.0160624</id>
        <doi>10.14569/IJACSA.2025.0160624</doi>
        <lastModDate>2025-06-30T13:21:17.3330000+00:00</lastModDate>
        
        <creator>Qiongyao Suo</creator>
        
        <creator>Hongzhen Chang</creator>
        
        <subject>Fake news; attention mechanism; multiple features; bidirectional gated recurrent unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>As the prevalence of social media continues to grow, the rapid and wide dissemination of false news has become a critical societal challenge, undermining public trust, creating social unrest, and distorting political discourse. Traditional fake news detection methods often rely solely on linguistic cues or shallow semantic analysis, which leads to limited accuracy and poor robustness, particularly when addressing emotionally biased or contextually complex content. To overcome these limitations, this study proposes a novel fake news recognition model based on a bidirectional gated recurrent unit combined with a self-attention mechanism, further enhanced by integrating sentiment polarity, textual metadata, and contextual semantic features. Experimental results show that the proposed model achieves a recognition accuracy of 97% and an F1 score of 97%. In addition, it demonstrates the lowest mean absolute error (0.19) and the shortest recognition time, requiring only 0.8 seconds after 80 iterations. The model also maintains over 93% accuracy across news content with positive, negative, and neutral emotional tones. The model offers a scalable and reliable framework for detecting false news, with strong adaptability to diverse content types and emotional expressions, thereby contributing to the advancement of automated misinformation identification in real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_24-False_News_Recognition_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Multi-Modal Deep Learning Approach for Real-Time Live Event Detection Using Video and Audio Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160623</link>
        <id>10.14569/IJACSA.2025.0160623</id>
        <doi>10.14569/IJACSA.2025.0160623</doi>
        <lastModDate>2025-06-30T13:21:17.3030000+00:00</lastModDate>
        
        <creator>Pavadareni R</creator>
        
        <creator>A. Prasina</creator>
        
        <creator>Samuthira Pandi V</creator>
        
        <creator>Ibrahim Mohammad Khrais</creator>
        
        <creator>Alok Jain</creator>
        
        <creator>Karthikeyan</creator>
        
        <subject>Multi-modality; feature fusion; early fusion; concatenation audio-video signals; convolutional neural network (CNN); Long Short-Term Memory (LSTM); Mel Frequency Cepstral Coefficients (MFCC); ResNet (Residual Network)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Recent developments in live event detection have primarily focused on single-modal systems, where most applications are based on audio signals. Such methods normally rely on classification approaches involving the Mel-spectrogram. Single-modal systems, though effective in some applications, suffer from severe disadvantages in capturing the complexities of a real-world event, which thereby reduces their reliability in dynamically changing environments. This research study presents a novel multi-modal deep learning approach that combines audio and visual signals in order to enhance the accuracy and robustness of live event detection. The innovation lies in the use of two-stream LSTM pipelines, allowing for temporally consistent modeling of both input modalities while keeping a real-time processing pace through feature-level fusion. Unlike many of the recent transformer models, we are utilizing proven techniques (MFCC, 2D CNN, ResNet and LSTM) in a latency-aware and deployment-friendly architecture suitable for embedded and edge-level event detection. The AVE (Audio Video Events) dataset, consisting of 28 categories, has been used. For the visual modality, video frames undergo feature extraction through a 2D CNN ResNet and temporal analysis through an LSTM. Simultaneously, the audio modality employs MFCC (Mel Frequency Cepstral Coefficients) for feature extraction and LSTM to capture temporal dependencies. The features extracted from both audio and video modalities are concatenated for fusion. The proposed integration leverages the complementary nature of audio and visual inputs to create a more comprehensive framework. The outcome yields 85.19% accuracy in audio and video-based events due to the effective fusion of spatial and temporal cues from diverse modalities, outperforming single-modal baselines (audio-only or video-only models).</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_23-A_Novel_Multi_Modal_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio-Visual Multimodal Deepfake Detection Leveraging Emotional Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160622</link>
        <id>10.14569/IJACSA.2025.0160622</id>
        <doi>10.14569/IJACSA.2025.0160622</doi>
        <lastModDate>2025-06-30T13:21:17.2730000+00:00</lastModDate>
        
        <creator>Alaa Alsaeedi</creator>
        
        <creator>Amal AlMansour</creator>
        
        <creator>Amani Jamal</creator>
        
        <subject>Machine learning; deepfake; multimodal; sentiment of speech; emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Recently, there has been a significant reliance on the Internet. This creates a fertile environment for various risks, including fraud, privacy violations, and theft. The most common and dangerous risks at present are known as deepfakes. The development of deepfake technologies relies on advancements in artificial intelligence. Deepfake content can greatly affect privacy and security, posing a significant risk to many fields. Therefore, recent research has focused on mechanisms to distinguish real content from fake content. These mechanisms are classified into two main types: single-modal and multimodal detection. It is worth noting that widely available deepfake technology has recently become more complex, which may hinder traditional single-modal detection methods from detecting manipulated video clips. In this study, we designed an effective multimodal fusion mechanism that integrates pre-trained audio, visual, and textual features. Our framework is based on three considerations: audio features, visual features, and emotion recognition. Emotion recognition itself covers three aspects: audio emotion, facial emotion, and the sentiment of speech. We take advantage of the sentiment of speech to ensure consistency between the audio and visual emotion and the meaning of the spoken words. As our results show, the sentiment of speech makes our model more accurate and robust than when we used the audio-visual emotion inconsistency measures only. In our experiment, we used the FakeAVCeleb dataset and achieved 95.24% accuracy, confirming our assumption about the impact of the sentiment of speech, the emotion of the audio tone, and facial expressions in detecting deepfakes.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_22-Audio_Visual_Multimodal_Deepfake_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of 2D-CNN and LSTM Networks for Enhanced Image Processing and Prediction in Alzheimer’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160621</link>
        <id>10.14569/IJACSA.2025.0160621</id>
        <doi>10.14569/IJACSA.2025.0160621</doi>
        <lastModDate>2025-06-30T13:21:17.2400000+00:00</lastModDate>
        
        <creator>Aya Mohamed Abd El-Hamed</creator>
        
        <creator>Mohamed Aborizka</creator>
        
        <subject>Alzheimer’s disease; magnetic resonance imaging; two-dimensional convolutional neural network; long short-term memory; deep learning; early detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The early diagnosis of Alzheimer’s disease remains a major challenge due to the complexity of magnetic resonance image interpretation and the limitations of existing diagnostic models. Alzheimer&#39;s disease is characterized by slow memory loss and a gradual decline in thinking abilities, with memory loss being its most common element. Effective early diagnosis is therefore essential to treatment; unfortunately, the traditional diagnostic procedure, which involves analyzing magnetic resonance images, is a complex process and prone to mistakes. This study aims to merge these cognitive models with advanced deep learning techniques to enhance the diagnostic capabilities for Alzheimer’s disease using a fusion model of two-dimensional convolutional neural networks and long short-term memory networks. The proposed approach uses two-dimensional convolutional neural networks to extract intricate features from magnetic resonance images, while long short-term memory networks analyze sequential data to identify key temporal patterns that indicate the progression of Alzheimer&#39;s disease. The dataset used in this study is the Alzheimer&#39;s Disease Neuroimaging Initiative dataset, which contains magnetic resonance images labeled into four categories: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. The dataset consists of 6,400 magnetic resonance images in total, split into training (70%), validation (15%), and testing (15%) sets. The experimental outcomes demonstrate that the hybrid model improves predictive accuracy significantly over current benchmarks on this topic. This study highlights the importance of introducing deep learning models into clinical practice, thereby providing an efficient tool for early-stage Alzheimer’s disease diagnosis, ultimately improving patient outcomes through early and accurate intervention.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_21-Integration_of_2D_CNN_and_LSTM_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Landslide Detection Method Based on Lightweight Convolution and Attention Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160620</link>
        <id>10.14569/IJACSA.2025.0160620</id>
        <doi>10.14569/IJACSA.2025.0160620</doi>
        <lastModDate>2025-06-30T13:21:17.2100000+00:00</lastModDate>
        
        <creator>Cong Chen</creator>
        
        <creator>Chengyang Zhang</creator>
        
        <creator>Ran Chen</creator>
        
        <creator>Jiyang YU</creator>
        
        <subject>YOLOv11n; landslide detection; GhostConv module; C3K2-SCConv module; SimAM attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Landslide monitoring is a crucial component of geological disaster early warning systems. Traditional landslide detection methods often suffer from insufficient accuracy or low efficiency. To address these issues, this study proposes an improved landslide detection algorithm based on YOLOv11n, aiming to enhance both detection accuracy and efficiency by optimizing the model structure. First, the GhostConv module is introduced to reduce redundant computations, thereby improving computational efficiency. Additionally, the C3K2-SCConv optimization module is incorporated, which enhances feature extraction capability and improves the recognition of landslides at different scales by integrating multi-scale information and a weighted convolution strategy. Furthermore, the SimAM attention mechanism is implemented to adaptively adjust feature map weights, strengthening key features in landslide regions and improving detection accuracy. Experimental results demonstrate that the improved model achieves a mean average precision (mAP@0.5) of 83.3%, a precision of 85.5%, and a recall of 78.1%, representing increases of 2.0%, 3.2%, and 2.8%, respectively, compared to the baseline model. The proposed improvements provide a more accurate and efficient landslide detection method, contributing to the precision of geological disaster early warnings and enhancing the reliability of disaster prevention and mitigation efforts.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_20-Landslide_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Steel Surface Defect Detection Method Based on Lightweight Convolution Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160619</link>
        <id>10.14569/IJACSA.2025.0160619</id>
        <doi>10.14569/IJACSA.2025.0160619</doi>
        <lastModDate>2025-06-30T13:21:17.1770000+00:00</lastModDate>
        
        <creator>Cong Chen</creator>
        
        <creator>Ming Chen</creator>
        
        <creator>Hoileong Lee</creator>
        
        <creator>Yan Li</creator>
        
        <creator>Jiyang YU</creator>
        
        <subject>YOLOv9s; steel surface defect detection; C3Ghost module; SCConv module; CARAFE upsampling operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Surface defect detection of steel, especially the recognition of multi-scale defects, has always been a major challenge in industrial manufacturing. Steel surfaces exhibit defects of various sizes and shapes, which limit the accuracy of traditional image processing and detection methods in complex environments. Moreover, traditional defect detection methods suffer from insufficient accuracy and high miss-detection rates when dealing with small target defects. To address these issues, this study proposes a detection framework based on deep learning, specifically YOLOv9s, combined with the C3Ghost module, SCConv module, and CARAFE upsampling operator, to improve detection accuracy and model performance. First, the SCConv module is used to reduce feature redundancy and optimize feature representation by reconstructing the spatial and channel dimensions. Second, the C3Ghost module is introduced to enhance the model’s feature extraction ability by reducing redundant computations and parameter volume, thereby improving model efficiency. Finally, the CARAFE upsampling operator, which can more finely reorganize feature maps in a content-aware manner, optimizes the upsampling process and ensures detailed restoration of high-resolution defect regions. Experimental results demonstrate that the proposed model achieves higher accuracy and robustness in steel surface defect detection tasks compared to other methods, effectively addressing defect detection problems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_19-A_Steel_Surface_Defect_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Network Flow Based on Statistical Analysis Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160618</link>
        <id>10.14569/IJACSA.2025.0160618</id>
        <doi>10.14569/IJACSA.2025.0160618</doi>
        <lastModDate>2025-06-30T13:21:17.1470000+00:00</lastModDate>
        
        <creator>Maruf Juraev</creator>
        
        <creator>Inomjon Yarashov</creator>
        
        <creator>Adilbay Kudaybergenov</creator>
        
        <creator>Alimdzhan Babadzhanov</creator>
        
        <creator>Zilolaxon Mamatova</creator>
        
        <subject>Unexpected situation; mean; variance; asymmetry coefficient; kurtosis coefficient; opposition coefficient; entropy coefficient; packet analysis; markov chain; packet features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>To solve the problem of detecting unexpected situations in network traffic, it is proposed to determine the normal and unexpected states and behavior of the system using statistical methods. The statistical analysis methods used are the mean, variance, asymmetry coefficient, kurtosis coefficient, opposition coefficient, and entropy coefficient. These statistical analysis methods allow for a deeper understanding of the uniqueness of the data. Each coefficient provides different measurement values, with which it is possible to determine the general characteristics of the data. The application of statistical analysis methods to network traffic is one of the most common approaches for implementing the technology of detecting unexpected situations. For this, network traffic recorded in real time under laboratory conditions is used. Packet features of the network traffic are extracted from the recorded data, and then statistical analysis is performed using the extracted features. A Markov chain is one of the most effective tools for analyzing situations and events occurring in network traffic. A Markov chain is constructed from the results of the statistical analysis; this approach represents the state of the network flow in a probabilistic model and serves as an effective tool for monitoring the network flow.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_18-Research_of_Network_Flow_Based_on_Statistical_Analysis_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Factors Influencing Internet of Things Adoption in Public Hospitals: A Pilot Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160617</link>
        <id>10.14569/IJACSA.2025.0160617</id>
        <doi>10.14569/IJACSA.2025.0160617</doi>
        <lastModDate>2025-06-30T13:21:17.1170000+00:00</lastModDate>
        
        <creator>Mutasem Zrekat</creator>
        
        <creator>Othman Bin Ibrahim</creator>
        
        <subject>Internet of Things; adoption; Jordan; TOE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Incorporating Internet of Things (IoT) technologies in healthcare represents a significant leap forward, capable of transforming the delivery and management of medical services. As healthcare systems across the globe increasingly seek innovative solutions to improve efficiency, enhance patient outcomes, and reduce operational costs, IoT emerges as a key enabler of this transformation. Despite the widely recognized benefits of IoT, its adoption in the healthcare sector, particularly within public hospitals in developing countries, remains limited and is still in the early stages. Therefore, understanding the factors influencing its adoption is essential for supporting effective adoption and advancing digital healthcare initiatives. This study aims to assess the validity and reliability of the instrument designed to identify the factors influencing the adoption of IoT technology in Jordanian public hospitals. A structured survey instrument collected a preliminary dataset from forty decision-makers in Jordanian public hospitals. The survey items were developed based on constructs derived from the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology fit (HOT-fit) model, supported by relevant literature. Descriptive statistics were performed using SPSS, while the reliability and validity of the instrument were assessed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results demonstrated that the measurement instrument had acceptable levels of reliability and validity, confirming its suitability for use in the main study. This study enriches the existing research and enhances the broader understanding of IoT adoption in healthcare organizations, offering insights that can be useful to both practitioners and researchers in this field.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_17-The_Factors_Influencing_Internet_of_Things_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Tea Leaf Plucking Timing Prediction with High Resolution of Images Based on YOLO11</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160616</link>
        <id>10.14569/IJACSA.2025.0160616</id>
        <doi>10.14569/IJACSA.2025.0160616</doi>
        <lastModDate>2025-06-30T13:21:17.0700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoho Kawaguchi</creator>
        
        <subject>Tealeaf plucking; YOLO; budding detection; spatial resolution; optical image; annotation; germination rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>As a method for estimating the time when tea leaves reach their peak quality (amino acid content), i.e., the optimum picking time, our previous study revealed that the optimum picking time is reached when the accumulated temperature from the detection of germination of new buds reaches 600&#176;C. However, the accuracy of this germination detection was insufficient, so the estimation accuracy of the optimum picking time was also insufficient. Since annotation accuracy is extremely important for germination detection with YOLO11, strict attention is paid to manual annotation and to increasing the number of training datasets. The detection accuracy has been improved compared to germination detection by YOLOv8, which we previously proposed and which used relatively low-resolution images. This study concludes that estimating the optimum picking time (when the amino acid content reaches its peak) as the point at which the accumulated temperature from germination detection reaches 600&#176;C is effective. The effectiveness of this method has been confirmed by comparison with germination detection by experts. For tea farmers, being able to predict the optimum picking time, when the amino acid content in the new buds is at its peak, is important, and we are confident it will have a positive impact on agricultural researchers studying this subject.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_16-Method_for_Tea_Leaf_Plucking_Timing_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Deep Learning Techniques for Passive Underwater Acoustic Target Recognition: Overview, Challenges, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160615</link>
        <id>10.14569/IJACSA.2025.0160615</id>
        <doi>10.14569/IJACSA.2025.0160615</doi>
        <lastModDate>2025-06-30T13:21:17.0370000+00:00</lastModDate>
        
        <creator>Song Yifei</creator>
        
        <creator>Mohamad Farhan Mohamad Mohsin</creator>
        
        <subject>Underwater acoustic target recognition; deep learning; deep network architecture; classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Passive underwater acoustic target recognition (UATR) involves analyzing acoustic waves captured by passive sonar to extract valuable information about submerged targets. The underwater acoustics community has increasingly turned its attention to deep learning techniques, owing to their remarkable success in image recognition tasks. This study presents a comprehensive overview of the evolution of UATR techniques, categorizing them into three distinct groups: early methods, conventional machine learning approaches, and modern deep learning-based techniques. Additionally, it provides an in-depth summary of the recognition process utilizing deep learning, detailing various deep network architectures, classifiers specifically designed for underwater acoustic target recognition, and different data input modalities. Finally, the study synthesizes current research findings and outlines potential future directions for advancements in this field, emphasizing opportunities for innovation across these three categories.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_15-Comparative_Analysis_of_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dataset Development for Classifying Kick Types for Martial Arts Athletes Using IMU Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160614</link>
        <id>10.14569/IJACSA.2025.0160614</id>
        <doi>10.14569/IJACSA.2025.0160614</doi>
        <lastModDate>2025-06-30T13:21:17.0070000+00:00</lastModDate>
        
        <creator>Rudy Gunawan</creator>
        
        <creator>Suhardi</creator>
        
        <creator>Widyawardana Adiprawita</creator>
        
        <creator>Tommy Apriantono</creator>
        
        <subject>Classification; dataset; machine learning; inertial sensors; IMU; martial art; sport; motion capture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study aims to establish a dataset of kicks in Kempo martial arts to categorize athletes&#39; kick types based on their movement patterns. The real problem addressed in this research is the lack of an accurate, efficient, and portable system to automatically recognize and classify kick types in martial arts training, especially outside of controlled laboratory environments. Previous studies often relied on optical motion capture systems, which, while accurate, are expensive and impractical for real-world training settings. To overcome this limitation, this study utilizes wearable motion sensing technology and analyzes the data with an Inertial Measurement Unit (IMU) sensor. During data collection, IMU sensors are attached to athletes to monitor their movements during training or competitions. In this research, an IMU sensor type MPU6050 is used, controlled by an ESP32 microcontroller, and the data is collected via wireless communication to a data collection server. The dataset comprises gyroscope readings for angular velocity and accelerometer measurements for linear acceleration in three axes. The study evaluates three kick types: straight kicks, sidekicks, and roundhouse kicks. It employs machine learning methodologies utilizing three principal classification algorithms: Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Random Forest (RF). These algorithms were selected due to their distinct strengths in processing sensor data derived from the accelerometer and gyroscope embedded within the IMU device. The findings indicate that the SVM algorithm successfully identified and categorized kick types in Shorinji Kempo martial arts athletes with a 96.7 per cent accuracy rate when using a dataset of 70 samples and two sensors. However, when three sensors were used, the accuracy decreased to approximately 92.4 per cent. In contrast, the k-NN algorithm achieved a classification accuracy of 92.4 per cent with a dataset of 70 samples, k = 3, and three sensors. Analyzing the contributions of features to classification provides in-depth insight into the key characteristics of movement patterns for kick type recognition.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_14-Dataset_Development_for_Classifying_Kick_Types.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Recognition Algorithm Based on Multi-Modal Physiological Signal Feature Fusion Using Artificial Intelligence and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160613</link>
        <id>10.14569/IJACSA.2025.0160613</id>
        <doi>10.14569/IJACSA.2025.0160613</doi>
        <lastModDate>2025-06-30T13:21:16.9770000+00:00</lastModDate>
        
        <creator>Yue Pan</creator>
        
        <subject>Emotion recognition; physiological signals; attention-based CNN-BILSTM-transformer; multimodal fusion; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Emotion recognition technology that utilizes physiological signals has become highly important because of its diverse applications in healthcare, human-computer interaction, and affective computing, which require an understanding of emotional states for enhanced user experience and mental health management. Support Vector Machines (SVM) and Random Forest (RF) serve as traditional machine learning approaches for emotion classification, but they struggle to accurately model spatial, temporal, and long-range dependencies within multimodal physiological data, which degrades overall performance. This study creates an Attention-Based CNN-BiLSTM-Transformer model, which unites several neural network structures to extract features and classify information more effectively. The model implements Convolutional Neural Networks to detect spatial patterns at the raw level of numerous physiological signals, including Electroencephalography, Electrocardiography, Galvanic Skin Response, and Electromyography. BiLSTM works as a temporal model which analyzes time-series physiological patterns through dual-directional contextual processing to create improved features from historical data patterns. The Transformer encoder detects extended relationships between sequence items for better comprehension of emotional change over time. Classification accuracy is further improved by an attention-based fusion mechanism that applies dynamic importance weights to different physiological signals, so that the most significant features drive the final decision process. Testing of the proposed model on the publicly accessible DEAP and AMIGOS datasets resulted in 88.2% accuracy on DEAP and 89.5% accuracy on AMIGOS; both outcomes exceeded conventional machine learning methods as well as baseline deep learning approaches such as CNN-LSTM and Transformer-only models. Testing showed that the attention mechanism successfully determined how to weigh multiple features, which resulted in better classification success. The implementation is written in Python using the TensorFlow and PyTorch deep learning frameworks, providing an efficient solution for emotion recognition from physiological signals.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_13-Emotion_Recognition_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SkinDiseaseXAI: XAI-Driven Neural Networks for Skin Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160612</link>
        <id>10.14569/IJACSA.2025.0160612</id>
        <doi>10.14569/IJACSA.2025.0160612</doi>
        <lastModDate>2025-06-30T13:21:16.9430000+00:00</lastModDate>
        
        <creator>Ammar Nasser Alqarni</creator>
        
        <creator>Abdullah Sheikh</creator>
        
        <subject>XAI; skin disease; Grad-CAM++; convolutional neural networks; clinical interpretability; melanoma; eczema; atopic dermatitis; fungal infections</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Accurate classification of skin diseases is an important step toward early diagnosis and therapy. However, deep learning models are frequently used in clinical contexts without transparency, reducing confidence and acceptance. This study introduces SkinDiseaseXAI, a convolutional neural network (CNN) that uses Grad-CAM++ to classify ten different types of skin diseases and provide visual explanations. The proposed model was trained on a publicly available dataset of dermatoscopic images following preprocessing and augmentation. SkinDiseaseXAI achieved 76.12% training accuracy and 66.25% validation accuracy in 20 epochs. We used Grad-CAM++ to generate heatmaps that highlight discriminative regions inside the lesion areas, thereby improving interpretability. The experimental results indicate that the model is able not only to perform multi-class skin disease categorization but also to provide interpretable visual outputs, which improves the transparency and dependability of decision-making processes. This approach has the potential to improve clinical diagnosis by combining performance and explainability.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_12-SkinDiseaseXAI_XAI_Driven_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Anomaly Detection Algorithm Based on Random Matrix Theory and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160611</link>
        <id>10.14569/IJACSA.2025.0160611</id>
        <doi>10.14569/IJACSA.2025.0160611</doi>
        <lastModDate>2025-06-30T13:21:16.9130000+00:00</lastModDate>
        
        <creator>Yongming Lu</creator>
        
        <subject>Random matrix theory; machine learning; anomaly detection; experimental simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This study focuses on anomaly detection algorithms. Aiming at the limitations of traditional methods in complex data processing, an innovative algorithm that integrates random matrix theory and machine learning is proposed. First, different types of data, such as numerical values, texts, and images, are preprocessed, and random matrices are constructed. Hidden abnormal features are mined through specific transformations and then classified by optimized machine learning models. In the experimental stage, multiple data sets, such as KDD Cup 99, are selected for comparison with classic algorithms such as DBSCAN and Isolation Forest. The results show that the proposed algorithm has a detection accuracy of 95%, a recall rate of 93%, and an F1 value of 94% on the KDD Cup 99 data set, which is significantly improved compared with the comparison algorithms. It also performs well on other data sets, with an average accuracy increase of seven percentage points and a recall rate increase of eight percentage points. The results demonstrate that the proposed algorithm can effectively mine data anomaly patterns, achieve efficient and accurate anomaly detection in complex data sets, and provide strong support for applications in related fields.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_11-The_Anomaly_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Auxiliary Information in Generative Artificial Intelligence Models for Cross-Domain Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160610</link>
        <id>10.14569/IJACSA.2025.0160610</id>
        <doi>10.14569/IJACSA.2025.0160610</doi>
        <lastModDate>2025-06-30T13:21:16.8800000+00:00</lastModDate>
        
        <creator>Matthew O. Ayemowa</creator>
        
        <creator>Roliana Ibrahim</creator>
        
        <creator>Noor Hidayah Zakaria</creator>
        
        <creator>Yunusa Adamu Bena</creator>
        
        <subject>Generative adversarial networks; auxiliary information; cross-domain recommender systems; data sparsity; knowledge transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Recommender systems (RSs) are significant in enhancing user experience across different online platforms. One of the major problems faced by conventional RSs is the difficulty of inferring precise preferences for users, especially users who have limited previous interaction data, which ultimately limits the ability of conventional techniques to solve the data sparsity problem. To address this challenge, this study proposes an Auxiliary-Aware Conditional GAN (AUXIGAN) model that integrates heterogeneous auxiliary information into both the generator and discriminator networks to enhance representation learning and thereby improve the performance of cross-domain recommender systems (CDRS). Most researchers consider only the user-item rating matrix and ignore the impact of auxiliary information on the interaction functions, which is very significant to recommendation accuracy in solving data sparsity problems. The proposed technique combines feature concatenation, attention-based fusion networks, contrastive representation learning, knowledge transfer, and multi-modal embedding alignment to enrich the user-item interaction matrix. Our experiments on benchmark datasets show that the proposed model significantly outperforms state-of-the-art RS models on key metrics (RMSE, MAE, Precision, and Recall), demonstrating the influence of incorporating auxiliary information into GAN-based CDRS. In conclusion, the integration of auxiliary information into generative adversarial network models represents a substantial advancement in the field of CDRS, and the results on two real-world datasets show that the proposed model significantly outperforms collaborative filtering and other GAN-based techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_10-Impact_of_Auxiliary_Information_in_Generative_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Deep Learning-Based Image Compression Restoration Technology in Power System Unstructured Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160609</link>
        <id>10.14569/IJACSA.2025.0160609</id>
        <doi>10.14569/IJACSA.2025.0160609</doi>
        <lastModDate>2025-06-30T13:21:16.8500000+00:00</lastModDate>
        
        <creator>Junjie Zha</creator>
        
        <creator>Aiguo Teng</creator>
        
        <creator>Xinwen Shan</creator>
        
        <creator>Hao Tang</creator>
        
        <creator>Zihan Liu</creator>
        
        <subject>Image compression; attention mechanism; multimodal fusion; unstructured data in the power industry; image data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>In power-system unstructured-data management, a large volume of images from inspection drones, substation cameras, and smart meters is heavily compressed due to bandwidth and storage constraints, resulting in lower resolution that hinders defect detection and maintenance decisions. Although deep-learning super-resolution (SR) techniques have made significant advances, real-world deployments still require a balance between reconstruction accuracy and model compactness. To meet this need, we introduce a channel-attention-embedded Transformer SR method (CAET). The approach adaptively injects channel attention into both the Transformer&#39;s global features and the convolutional local features, harnessing their complementary strengths while dynamically enhancing critical information. Tested on five public datasets and compared with six representative algorithms, CAET achieves the best or second-best performance across all upscaling factors; at 4&#215; enlargement, it outperforms the advanced SwinIR method by 0.09 dB in PSNR on Urban100 and by 0.30 dB on Manga109, with noticeably improved visual quality. Experiments demonstrate that CAET delivers high-precision, low-latency restoration of compressed images for the power sector while keeping model complexity low.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_9-Application_of_Deep_Learning_Based_Image_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancements in Deep Learning for Malaria Detection: A Comprehensive Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160608</link>
        <id>10.14569/IJACSA.2025.0160608</id>
        <doi>10.14569/IJACSA.2025.0160608</doi>
        <lastModDate>2025-06-30T13:21:16.8200000+00:00</lastModDate>
        
        <creator>Kiswendsida Kisito Kabore</creator>
        
        <creator>Desire Guel</creator>
        
        <creator>Flavien Herve Somda</creator>
        
        <subject>Malaria detection; deep learning; Convolutional Neural Networks (CNNs); medical imaging; automated diagnostics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Malaria remains a critical global health issue, with millions of cases reported annually, particularly in resource-limited regions. Timely and accurate diagnosis is vital to ensure effective treatment, reduce complications, and control transmission. Conventional diagnostic methods, including microscopy and Rapid Diagnostic Tests (RDTs), face considerable limitations such as dependency on skilled personnel, limited sensitivity at low parasitemia levels, and cost constraints. In response, deep learning technologies—especially Convolutional Neural Networks (CNNs)—have emerged as promising tools to overcome these barriers by enabling automated diagnostics based on medical imaging, significantly enhancing precision and scalability. This paper presents a comprehensive review of recent advances in deep learning for malaria diagnosis, highlighting the role of publicly available datasets in driving innovation. It analyzes leading architectures—such as ResNet, VGG, and YOLO—based on their classification performance, including accuracy, sensitivity, and computational efficiency. Furthermore, the review discusses novel directions such as mobile-integrated diagnostics and multi-modal data fusion, which can enhance diagnostic accessibility in low-resource settings. Despite notable progress, challenges remain in terms of dataset imbalance, lack of generalizability, and barriers to clinical deployment. The paper concludes by outlining future research directions and emphasizing the need for robust, adaptable models that can support global malaria control and eradication strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_8-Advancements_in_Deep_Learning_for_Malaria_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Deepfake Content Detection Through Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160607</link>
        <id>10.14569/IJACSA.2025.0160607</id>
        <doi>10.14569/IJACSA.2025.0160607</doi>
        <lastModDate>2025-06-30T13:21:16.8030000+00:00</lastModDate>
        
        <creator>Qurat-ul-Ain Mastoi</creator>
        
        <creator>Muhammad Faisal Memon</creator>
        
        <creator>Salman Jan</creator>
        
        <creator>Atif Jamil</creator>
        
        <creator>Muhammad Faique</creator>
        
        <creator>Zeeshan Ali</creator>
        
        <creator>Abdullah Lakhan</creator>
        
        <creator>Toqeer Ali Syed</creator>
        
        <subject>Blockchain; deep fake; convolutional neural network (CNN); long short-term memory (LSTM); RNN (recurrent neural network); video and image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Deepfake technology poses a growing threat to the authenticity and trustworthiness of digital media, necessitating the development of advanced detection mechanisms. While AI-based methods have shown promise, they generally face limitations in terms of generalization and scalability. We present a blockchain-enabled watermarking technique, characterized by its immutable, transparent, and decentralized nature, which offers a robust complementary approach for enhancing media authentication through methods such as cryptographic watermarking, decentralized identity, and content provenance tracking. To train and evaluate blockchain-based watermarking and deepfake detection systems, a variety of large-scale datasets are utilized. Video datasets include UADFV (49 real, 49 fake), Deepfake-TIMIT (320 real, 640 fake), DFFD (1000 real, 3000 fake), Celeb-DF v2 (590 real, 5639 fake), DFDC (23,564 real, 104,500 fake), DeeperForensics-1.0 (50,000 real, 10,000 fake), FaceForensics++ (1000 real, 5000 fake), and ForgeryNet (99,630 real, 121,617 fake). Image datasets include DFFD (58,703 real, 240,336 fake), FFHQ (70,000 GAN-generated), iFakeFaceDB (87,000 fake), 100k AI Faces, and over 2.8 million samples in ForgeryNet. Despite integration challenges such as scalability, computational cost, and standardization, blockchain-based solutions show promise in tracking content origin and enhancing verification. Simulation results demonstrate that the proposed blockchain-enabled watermarking achieves a higher accuracy in detecting fake content compared to existing machine learning methods.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_7-Enhancing_Deepfake_Content_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metabolite Screening for Heart Disease Using Support Vector Machine-Based AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160606</link>
        <id>10.14569/IJACSA.2025.0160606</id>
        <doi>10.14569/IJACSA.2025.0160606</doi>
        <lastModDate>2025-06-30T13:21:16.7400000+00:00</lastModDate>
        
        <creator>Edward L. Boone</creator>
        
        <creator>Ryad A. Ghanam</creator>
        
        <creator>Faten S. Alamri</creator>
        
        <creator>Elizabeth B. Amona</creator>
        
        <subject>Machine learning; genetic algorithm; support vector machines; classification; heart disease; metabolites</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Algorithms for feature selection are growing in interest among researchers aiming to connect specific features in a dataset with specific classifications. Recent developments in machine learning, particularly Support Vector Machine-based artificial intelligence algorithms have demonstrated excellent classification performance in highly nonlinear data. However, identifying which features contribute most to classification remains challenging, especially when datasets include hundreds of variables. Initially, features must be screened to narrow down the set for deeper analysis. Metabolomics datasets are one such case, where many features must be examined to determine those associated with heart disease diagnosis. This work applies a Genetic Algorithm, incorporating a penalized likelihood approach with Support Vector Machines for mutation, to stochastically search the feature space. A large-scale simulation study demonstrates that the proposed method achieves a high true feature identification rate while maintaining a reasonable false identification rate. The method is then applied to a Qatar BioBank dataset focused on heart disease, reducing the number of candidate metabolites from 232 to 37.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_6-Metabolite_Screening_for_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location Based Augmented Reality Navigation Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160605</link>
        <id>10.14569/IJACSA.2025.0160605</id>
        <doi>10.14569/IJACSA.2025.0160605</doi>
        <lastModDate>2025-06-30T13:21:16.7100000+00:00</lastModDate>
        
        <creator>Samridhi Sanjay Pramanik</creator>
        
        <creator>Aishwary Pramanik</creator>
        
        <subject>Augmented reality; location based application; markerless AR; GPS based application; real world augmentation; unity; ARCore; navigation system; user interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>This paper presents a novel Augmented Reality (AR) navigation system to overcome limitations of conventional 2D map-based applications in advanced real-world environments. Current AR navigation solutions often lack dynamic adaptation to user behavior and fail to deliver context-aware, personalized guidance. Addressing these gaps, we present a markerless, location-based AR system integrating three innovations: 1) a Dynamic Predictive Navigation module with Long Short-Term Memory (LSTM) networks for anticipating user intention and dynamically optimizing routes in real time; 2) a Smart POI Ranking system with sentiment analysis, live user feedback, and social media trends for presenting personalized and context-aware recommendations; and 3) a 3D AR interface built with Unity and ARCore for enhancing spatial understanding and reducing cognitive burden through visually engaging guidance. Experimental evaluation demonstrates improved navigation responsiveness, reduced rerouting effort, and increased user interaction with recommended POIs. This work contributes a scalable and adaptive solution for real-time AR navigation, with applicability to smart city mobility and context-aware spatial computing.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_5-Location_Based_Augmented_Reality_Navigation_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Blockchain Frameworks for Decentralized Identity and Access Management of IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160604</link>
        <id>10.14569/IJACSA.2025.0160604</id>
        <doi>10.14569/IJACSA.2025.0160604</doi>
        <lastModDate>2025-06-30T13:21:16.6770000+00:00</lastModDate>
        
        <creator>Sushil Khairnar</creator>
        
        <subject>Blockchain; decentralization; identity and access management; ethereum; hyperledger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>The growth in IoT devices means an ongoing risk of data vulnerability. The transition from centralized ecosystems to decentralized ecosystems is of paramount importance due to security, privacy, and data use concerns. Since the majority of IoT devices will be used by consumers in peer-to-peer applications, a centralized approach raises many issues of trust related to privacy, control, and censorship. Identity and access management lies at the heart of any user-facing system. Blockchain technologies can be leveraged to augment user authority, transparency, and decentralization. This study proposes a decentralized identity management framework for IoT environments using Hyperledger Fabric and Decentralized Identifiers (DIDs). The system was simulated using Node-RED to model IoT data streams, and key functionalities including device onboarding, authentication, and secure asset querying were successfully implemented. Results demonstrated improved data integrity, transparency, and user control, with reduced reliance on centralized authorities. These findings validate the practicality of blockchain-based identity management in enhancing the security and trustworthiness of IoT infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_4-Application_of_Blockchain_Frameworks_for_Decentralized_Identity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Education: Integrating Machine Learning and NLP to Transform Child Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160603</link>
        <id>10.14569/IJACSA.2025.0160603</id>
        <doi>10.14569/IJACSA.2025.0160603</doi>
        <lastModDate>2025-06-30T13:21:16.6470000+00:00</lastModDate>
        
        <creator>Masuma Akter Semi</creator>
        
        <creator>Md Borhan Uddin</creator>
        
        <creator>Sharmin Sultana</creator>
        
        <creator>Motmainna Tamanna</creator>
        
        <creator>Azim Uddin</creator>
        
        <creator>Khandakar Rabbi Ahmed</creator>
        
        <subject>Artificial intelligence; machine learning; natural language processing; adaptive learning systems; sentencebert; gradient boosting machine; personalized feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>An Artificial Intelligence-driven child learning system that uses Machine Learning and Natural Language Processing to dynamically personalize educational experiences for children is proposed in this study. A Sentence-BERT model encodes student queries to compute semantic similarity and retrieve the relevant knowledge domains. A T5-based transformer model generates verbose, personalized feedback, and a Gradient Boosting Machine classifier predicts the appropriate learning outcomes. An integrated adaptive learning engine monitors student performance and adjusts content difficulty and the personalization of educational trajectories accordingly. On the General Knowledge QA dataset, classification accuracy reaches 85.2% and the ROC-AUC score is 0.912, demonstrating reliability in real-world cases. User studies also show positive effects on learners&#39; understanding of, and preference for, adaptive systems. As demonstrated in this work, AI technologies have exciting potential to deliver scalable, personalized education for young learners.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_3-AI_Driven_Education_Integrating_Machine_Learning_and_NLP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming the Working Style of Call Center Agents Through Generative AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160602</link>
        <id>10.14569/IJACSA.2025.0160602</id>
        <doi>10.14569/IJACSA.2025.0160602</doi>
        <lastModDate>2025-06-30T13:21:16.6000000+00:00</lastModDate>
        
        <creator>Satya Karteek Gudipati</creator>
        
        <subject>Generative AI; call center transformation; agent augmentation; LLMs; sentiment analysis; hyper-personalization; conversational AI; AI ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>As Generative Artificial Intelligence (Gen AI) evolves rapidly, the contact center industry is undergoing a significant shift in its working style. Historically, customer service agents in contact centers depended heavily on static scripts and fragmented information systems, which resulted in delayed resolutions, cognitive overload, and inconsistent customer experiences. This study explores the paradigm shift occurring in contact centers through the implementation of Gen AI. By adopting Large Language Models (LLMs), Gen AI introduces novel capabilities such as real-time intent recognition, contextual response generation, and personalized engagement across channels. Organizations can reduce Average Handling Time (AHT), improve First Contact Resolution (FCR), and enhance Customer Satisfaction (CSAT) scores by integrating Gen AI into core workflows such as issue summarization, behavioral analytics, sentiment tracking, and knowledge retrieval. To demonstrate quantifiable improvements in agent performance and customer engagement, this study adopted a blended research design combining enterprise case studies, simulation scenarios, and comparative KPI evaluations. Furthermore, it addresses implementation bottlenecks such as onboarding efficiency, multilingual support, emotional intelligence, and real-time guidance. Ethical considerations such as data privacy, algorithmic bias, and explainability are examined with reference to industry standards. Case examples collected from industry leaders are leveraged to validate the study&#39;s conclusions. The study delivers a structured, well-organized roadmap for enterprises aiming to transform contact centers from reactive service units into proactive, intelligence-driven ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_2-Transforming_the_Working_Style_of_Call_Center_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting and Preventing Money Laundering Using Deep Learning and Graph Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160601</link>
        <id>10.14569/IJACSA.2025.0160601</id>
        <doi>10.14569/IJACSA.2025.0160601</doi>
        <lastModDate>2025-06-30T13:21:16.5230000+00:00</lastModDate>
        
        <creator>Mamunur R Raja</creator>
        
        <creator>Md Anwar Hosen</creator>
        
        <creator>Md Farhad Kabir</creator>
        
        <creator>Sharmin Sultana</creator>
        
        <creator>Shah Ahammadullah Ashraf</creator>
        
        <creator>Rakibul Islam</creator>
        
        <subject>Anti-money laundering (AML); deep learning (DL); LSTM; GraphSAGE; graph analysis; transaction monitoring; hybrid fusion model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(6), 2025</description>
        <description>Money laundering is a major worldwide issue facing financial organizations, and its methods are increasingly complex and constantly evolving. Conventional rule-based anti-money laundering (AML) systems can fail to identify advanced fraudulent activity. This study presents a new hybrid model that precisely detects suspicious transaction patterns by efficiently combining GraphSAGE, a graph-based Machine Learning (ML) technique, with Long Short-Term Memory (LSTM) networks. The suggested approach uses GraphSAGE&#39;s relational capabilities for graph-structured anomaly detection and the temporal strengths of LSTM for sequence modeling. Compared with traditional ML and stand-alone Deep Learning (DL) techniques, the Hybrid LSTM-GraphSAGE model achieves an accuracy of 95.4% on a simulated dataset reflecting real-world financial transactions. The findings show how well the combined strategy lowers false positives and improves the identification of advanced money laundering operations. This work opens the path for creating real-time, intelligent, flexible money laundering detection systems appropriate for current financial environments.</description>
        <description>http://thesai.org/Downloads/Volume16No6/Paper_1-Detecting_and_Preventing_Money_Laundering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Design Algorithm of Huizhou Bamboo Weaving Patterns Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160595</link>
        <id>10.14569/IJACSA.2025.0160595</id>
        <doi>10.14569/IJACSA.2025.0160595</doi>
        <lastModDate>2025-06-03T07:49:05.7400000+00:00</lastModDate>
        
        <creator>Jinjin Rong</creator>
        
        <creator>Xin Fang</creator>
        
        <subject>Deep learning; Huizhou bamboo weaving; bamboo weaving pattern; vision transformer; local self-attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>In the field of innovative design of Huizhou bamboo weaving patterns, traditional deep learning algorithms cannot fully capture the fine structure and subtle variations of the patterns, resulting in distorted or blurred outputs, and they require substantial computing resources and time during training. This paper constructs an improved ViT (Vision Transformer) model and collects diverse Huizhou bamboo weaving pattern data covering different styles and forms. In the data augmentation stage, common techniques such as rotation, scaling, flipping, and color perturbation are used to increase the diversity of the training data. Building on the traditional ViT model, a local self-attention mechanism replaces the traditional global self-attention mechanism. Mixed precision training and distributed training strategies effectively accelerate the training process while maintaining high accuracy. The model automatically generates innovative designs by learning the style and structural characteristics of Huizhou bamboo weaving patterns, and a detail repair module is added to the generation process to enhance the detail expression of the patterns. The experimental results show that the improved ViT model converges to a score of 0.95 after 50 training epochs, indicating that it performs well in detail preservation and structural similarity; with a sample size of 5000, the training time of the improved ViT model is 47.4 seconds and the GPU memory usage is 37.1 GB, providing higher computing efficiency. These results demonstrate the effectiveness of the innovative design algorithm for Huizhou bamboo weaving patterns studied in this paper.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_95-Innovative_Design_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Monitoring and Management System for Oil and Gas Facilities with Integrated IoT and Artificial Intelligence Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160594</link>
        <id>10.14569/IJACSA.2025.0160594</id>
        <doi>10.14569/IJACSA.2025.0160594</doi>
        <lastModDate>2025-06-03T07:49:05.6930000+00:00</lastModDate>
        
        <creator>Shu Haowen</creator>
        
        <creator>Zhang Bin</creator>
        
        <creator>Gao Shiyu</creator>
        
        <creator>Gu Li</creator>
        
        <creator>Jia Yanjie</creator>
        
        <subject>Internet of Things; artificial intelligence; data analytics; remote monitoring; management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>An important trend in the development of digitalization is the expansion from digitalization to intelligence. For oil and gas facilities, intelligence means applying multi-intelligent judgment and analysis technology to actual production, so as to realize real-time data acquisition, analysis, and integrated monitoring and control of remote oil and gas facilities. As a flexible monitoring configuration software, force control provides a good development interface and a simple engineering implementation method under this trend of intelligent oil and gas facilities, making it convenient for users to implement the functions of the monitoring layer. Its user-friendly features, such as trend curves, report output, and limit alarms, also allow users to monitor and control on-site production data more intuitively, providing strong technical support for the intelligent transformation of oil and gas facilities. On this basis, this paper studies the design of a remote monitoring and management system for oil and gas stations that integrates the Internet of Things, artificial intelligence, and big data analysis technology.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_94-Remote_Monitoring_and_Management_System_for_Oil_and_Gas_Facilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Impact of Histogram-Based Image Preprocessing on Melon Leaf Abnormality Detection Using YOLOv7</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160593</link>
        <id>10.14569/IJACSA.2025.0160593</id>
        <doi>10.14569/IJACSA.2025.0160593</doi>
        <lastModDate>2025-06-03T07:49:05.6000000+00:00</lastModDate>
        
        <creator>Sahrial Ihsani Ishak</creator>
        
        <creator>Sri Wahjuni</creator>
        
        <creator>Karlisa Priandana</creator>
        
        <subject>Leaf abnormality; melon; image preprocessing; YOLOv7</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study aims to analyze and implement image preprocessing techniques to improve the performance of melon leaf abnormality detection using the YOLOv7 algorithm. A total of 521 abnormal melon leaf images were processed using augmentation and three preprocessing methods: Averaging Histogram Equalization (AVGHEQ), Brightness Preserving Dynamic Histogram Equalization (BPDFHE), and Contrast Limited Adaptive Histogram Equalization (CLAHE), then compared with the original dataset. Modeling was conducted in three stages: initial training with an 80:20 split and default YOLOv7 augmentation; hyperparameter tuning via cross-validation using a 90:10 split without augmentation; and final training using the best parameters with augmentation reactivated. The models were evaluated using ensemble learning. Results showed mAP ranged from 58.6% to 66.3%, accuracy from 80.7% to 84.9%, and detection time from 9.8 to 20 milliseconds. Preprocessing improved mAP and detection time, though it had little effect on accuracy. The best performance was obtained with a kernel size of 3 and a learning rate of 0.001, while changes in activation function, pooling, batch size, and momentum had minimal impact. The top models, trained with maximum epochs and standard augmentation, achieved mAP of 84.12%, accuracy of 91.19%, and detection time of 4.55 milliseconds. Models using early stopping (patience = 300) reached mAP of 81.57%, accuracy of 92.23%, and detection time of 5.03 milliseconds. The best model outperformed previous works, which reported only 48.85% with Faster R-CNN, 33.16% with SSD, and 16.56% with YOLOv3. Although histogram-based preprocessing methods mainly enhanced inference speed, the overall improvements to YOLOv7 significantly boosted detection performance.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_93-Analyzing_the_Impact_of_Histogram_Based_Image_Preprocessing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Event-B Capability-Centric Model for Cloud Service Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160592</link>
        <id>10.14569/IJACSA.2025.0160592</id>
        <doi>10.14569/IJACSA.2025.0160592</doi>
        <lastModDate>2025-05-31T09:35:23.4700000+00:00</lastModDate>
        
        <creator>Aicha Sid’Elmostaphe</creator>
        
        <creator>J Paul Gibson</creator>
        
        <creator>Imen Jerbi</creator>
        
        <creator>Walid Gaaloul</creator>
        
        <creator>Mohamedade Farouk Nanne</creator>
        
        <subject>Formal verification; cloud service discovery; capability modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Cloud computing has become increasingly adopted due to its ability to provide on-demand access to computing resources. However, the proliferation of cloud service offerings has introduced significant challenges in service discovery. Existing cloud service discovery approaches are often evaluated solely through simulation or experimentation and typically rely on unstructured service descriptions, which limits their precision and scalability. In this work, we address these limitations by proposing a formally verified architecture for capability-centric cloud service discovery, grounded in the Event-B method. The architecture is built upon a capability-centric service description model that captures service semantics through property-value representations. A core element of this model is the formally verified variantOf relation, which defines specialization among services. We prove that variantOf satisfies the properties of a partial order, enabling services to be structured as a Directed Acyclic Graph (DAG) and thus supporting hierarchical and scalable discovery. We formally verify the consistency of our model across multiple refinement levels. All proof obligations generated by the Rodin platform were successfully discharged. A scenario-based validation further confirms the correctness of dynamic operations within the system.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_92-An_Event_B_Capability_Centric_Model_for_Cloud_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: The Impact of Federated Learning on Distributed Remote Sensing Archives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160591</link>
        <id>10.14569/IJACSA.2025.0160591</id>
        <doi>10.14569/IJACSA.2025.0160591</doi>
        <lastModDate>2025-05-31T09:35:23.4370000+00:00</lastModDate>
        
        <creator>Pratik Surendrakumar Patel</creator>
        
        <creator>Vijay Govindarajan</creator>
        
        <subject>Machine learning; federated learning; deep learning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2025.0160591.retraction</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_91-The_Impact_of_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion-Aware EEG Analysis for Alzheimer's Disease Detection Using Boosting and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160590</link>
        <id>10.14569/IJACSA.2025.0160590</id>
        <doi>10.14569/IJACSA.2025.0160590</doi>
        <lastModDate>2025-05-31T09:35:23.4070000+00:00</lastModDate>
        
        <creator>Shynara Ayanbek</creator>
        
        <creator>Abzal Issayev</creator>
        
        <creator>Amandyk Kartbayev</creator>
        
        <subject>Alzheimer's disease; feature extraction; machine learning; CNN; boosting algorithms; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Alzheimer's disease (AD) is a leading cause of dementia, yet its diagnosis remains challenging. EEG provides a noninvasive and cost-effective method for monitoring brain activity, which may reflect both cognitive decline and altered emotional states. In this study, an EEG-based pipeline was developed to classify AD using two approaches: an ensemble of boosting classifiers based on extracted features, and a deep convolutional neural network (CNN) applied to raw signals. A publicly available dataset was processed to extract time, frequency, and complexity features, with emotional brain dynamics implicitly reflected in the signals and considered during analysis. Five ensemble models (including CatBoost, LightGBM, and XGBoost) were optimized using Bayesian search. The CNN was trained separately and evaluated under cross-validation schemes. A balanced accuracy of 78.96% was achieved for AD detection using XGBoost, while the CNN reached 70.92% for Frontotemporal dementia. The study demonstrates that combining machine learning with EEG produces generalizable models for dementia detection and suggests that accounting for emotion-related variability may enhance diagnostic results.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_90-Emotion_Aware_EEG_Analysis_for_Alzheimers_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Dimensional Digital Media Sentiment Visualization Intelligent Analysis System Based on Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160589</link>
        <id>10.14569/IJACSA.2025.0160589</id>
        <doi>10.14569/IJACSA.2025.0160589</doi>
        <lastModDate>2025-05-31T09:35:23.3900000+00:00</lastModDate>
        
        <creator>Mengwei Leia</creator>
        
        <creator>Qiong Chen</creator>
        
        <subject>Digital media; sentiment analysis; intelligent systems; multimodal data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study builds a multi-dimensional sentiment analysis system to solve the problem of sentiment prediction for text and image data on the Weibo platform. By combining CNN (Convolutional Neural Network), BiLSTM (Bidirectional Long Short-Term Memory) and the Attention mechanism (AM), the accuracy of sentiment classification is improved, which helps to better understand and analyze user sentiment expressions in social media. This study uses crawler tools to collect text and image data of 1,000 users on the Weibo platform from January to December 2021 to ensure the diversity and representativeness of the data; the text data is segmented, stop words are removed, and the text is converted into vectors; at the same time, the ResNet-50 pre-trained model is used to extract deep features of the images, CNN is used to process the image data, and BiLSTM captures the contextual information in the text data. Finally, the AM is used to enhance the model&#39;s attention to emotional expression. Experimental results show that the proposed Word2Vec (Word to Vector) based model performs outstandingly in sentiment classification accuracy. The accuracies of the CNN-BiLSTM-Attention model in the positive, neutral and negative classification tasks are 97.5 per cent, 95.4 per cent and 91.6 per cent, respectively, significantly better than those of the CNN and BiLSTM models, especially on evaluation indicators such as accuracy and macro F1. This study proposes a multimodal sentiment analysis system based on CNN-BiLSTM-Attention, which significantly improves the accuracy of social media sentiment classification. The system can effectively process complex sentiment categories and multimodal data, and has broad application prospects, especially in the fields of social media sentiment analysis and public opinion monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_89-Multi_Dimensional_Digital_Media_Sentiment_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CodifiedCant: Enhancing Legal Document Accessibility Using NLP and Longformer for Secure and Efficient Compliance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160588</link>
        <id>10.14569/IJACSA.2025.0160588</id>
        <doi>10.14569/IJACSA.2025.0160588</doi>
        <lastModDate>2025-05-31T09:35:23.3600000+00:00</lastModDate>
        
        <creator>Jayapradha J</creator>
        
        <creator>Su-Cheng Haw</creator>
        
        <creator>Naveen Palanichamy</creator>
        
        <creator>Nilanjana Bhattacharya</creator>
        
        <creator>Aayushi Agarwal</creator>
        
        <creator>Senthil Kumar T</creator>
        
        <subject>Natural language processing; transformer-based deep learning system; long former; semantic routing; network encryption; legal document; unstructured data; and data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>CodifiedCant is a new idea that employs Natural Language Processing to simplify company guidelines and legal documents. Legal texts are extensive, complicated and hard for non-experts to understand. To tackle this problem, this research incorporates the Longformer model, a transformer-based deep learning system designed to work effectively with extensive legal documents. Longformer enables the system to handle long documents by keeping better track of context, transforming complex legal text into easily readable formats. To enhance search and retrieval speed, this research investigates the nuances of transforming unstructured data, such as tabular data from PDFs, into vectors. This transformation supports faster, context-aware semantic routing inside the document. Further, it assists in data arrangement and detection across massive sources of legal and business information. Data security is also a major priority for the platform, which utilizes network encryption to protect data and privacy. CodifiedCant is a scalable, secure and intelligent solution that provides better employee access to legal information, greater company transparency and stronger compliance within the organization. Table extraction and document simplification performance of the model are validated on the Cornell LII and Kaggle evaluation datasets, respectively. CodifiedCant bridges the gap between legal terminology and user knowledge.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_88-CodifiedCant_Enhancing_Legal_Document_Accessibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Image Recognition Techniques for Crop Pest Detection Using Modified YOLO-v3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160587</link>
        <id>10.14569/IJACSA.2025.0160587</id>
        <doi>10.14569/IJACSA.2025.0160587</doi>
        <lastModDate>2025-05-31T09:35:23.3130000+00:00</lastModDate>
        
        <creator>Dechao Guo</creator>
        
        <creator>Hao Zhang</creator>
        
        <subject>Feature detection algorithm; YOLO-v3 network; image recognition technology; crop pest detection applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Accurate and efficient detection of agricultural pests is crucial for crop protection and pest control. This study addresses the limitations of traditional pest detection methods, such as weak detection capabilities and high computational demands, by proposing an improved image recognition system based on the YOLO-v3 algorithm. The research focuses on enhancing pest detection accuracy through deep learning techniques, specifically by modifying the YOLO-v3 model with the ISODATA clustering algorithm, DenseBlock enhancements, and the ELU activation function. A dataset of 13,000 images representing six common crop pests was created and expanded using various image augmentation techniques. The modified YOLO-v3 model was trained and evaluated on this dataset, achieving a higher mean Average Precision (mAP) of 89.7% and faster recognition speed compared to Faster-RCNN, SSD-300, and the original YOLO-v3 model. Finally, the improved model demonstrated a recognition speed of 27 frames per second (fps), significantly outperforming other detection models in both accuracy and speed. The proposed method offers a superior solution for real-time pest detection in agricultural settings, combining high accuracy with computational efficiency. Future work will explore the application of optimization algorithms to further enhance the robustness and generalizability of the system across diverse pest detection scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_87-Advanced_Image_Recognition_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Neural Network Output for Dataset Duplication Detection on Analog Integrated Circuit Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160586</link>
        <id>10.14569/IJACSA.2025.0160586</id>
        <doi>10.14569/IJACSA.2025.0160586</doi>
        <lastModDate>2025-05-31T09:35:23.2800000+00:00</lastModDate>
        
        <creator>Arif Abdul Mannan</creator>
        
        <creator>Koichi Tanno</creator>
        
        <subject>Big data; graph neural network; artificial intelligence; analog circuit design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>To support artificial intelligence applications in analog circuit design automation, ever larger datasets containing analog and digital circuit pieces are required for analog circuit recognition systems. Since analog circuits with almost similar designs can produce completely different outputs, in the case of poor netlist-to-graph abstraction, larger netlist input circuits can generate larger graph dataset duplications, leading to poor circuit recognition performance. In this study, a technique to detect graph dataset duplication in big data applications is introduced by utilizing the output vector representation (OVR) of an untrained Graph Neural Network (GNN). By reducing the multi-dimensional OVR output data to a 2-dimensional (2D) representation, even randomly weighted untrained GNN outputs are observed to be capable of distinguishing between graph data inputs, generating different outputs for different graph inputs while producing identical outputs for duplicated graph data, thereby allowing dataset duplication detection. The 2D representation is also capable of visualizing the overall datasets, giving a simple overview of the relations of the data within and across classes. From the simulation results, despite being affected by deficiencies in floating-point calculation accuracy and consistency, the F1 scores using exact floating-point comparison average 96.92% and 93.70% when using CPU and GPU calculations, respectively, with floating-point rounding applied. Duplication detection using floating-point range comparison is left as future work, combined with the study of the 2D GNN output behavior under the ongoing training process.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_86-Graph_Neural_Network_Output_for_Dataset_Duplication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Processing and Intelligent Diagnosis Algorithm for Internet of Things Medical Data Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160585</link>
        <id>10.14569/IJACSA.2025.0160585</id>
        <doi>10.14569/IJACSA.2025.0160585</doi>
        <lastModDate>2025-05-31T09:35:23.2500000+00:00</lastModDate>
        
        <creator>Wang Liyun</creator>
        
        <subject>Intelligent diagnosis; Internet of Things medical; electronic medical records; long short-term memory; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The Electronic Medical Record (EMR) is a commonly used tool in medical diagnosis, but it suffers from static record keeping, difficulty in combining and analyzing different forms of data, and insufficient diagnostic efficiency and accuracy. This article proposes a CNN (Convolutional Neural Network)-LSTM (Long Short-Term Memory) algorithm for efficient processing and intelligent diagnosis of Internet of Things (IoT) medical data. The Word2Vec model is applied to clinical text data, utilizing its ability to capture semantic relationships between words. Features are extracted from medical image data using the CNN, while physiological signal data is dynamically processed using the LSTM to identify trends and anomalies in the data. An attention mechanism is applied to dynamically adjust the model's attention weights for different types of data. By analyzing samples of health, cardiovascular disease, diabetes, chronic obstructive pulmonary disease, hypertension, and chronic kidney disease, the proposed CNN-LSTM can accurately classify a variety of diseases, with the classification accuracy for healthy individuals reaching 97.8%. By combining CNN-LSTM with multimodal data, the accuracy and efficiency of medical diagnosis are effectively improved.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_85-Efficient_Processing_and_Intelligent_Diagnosis_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MICRAST: Micro-Forecasting Approach for Cloud User Consumption Pattern Based on RNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160584</link>
        <id>10.14569/IJACSA.2025.0160584</id>
        <doi>10.14569/IJACSA.2025.0160584</doi>
        <lastModDate>2025-05-31T09:35:23.2200000+00:00</lastModDate>
        
        <creator>Shallaw Mohammed Ali</creator>
        
        <creator>Gabor Kecskemeti</creator>
        
        <subject>Micro-forecasting; cloud workload; data processing; macro-forecasting; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>One vital key to effective management of cloud resources is the ability to predict users' consumption patterns at a granular level. Such prediction can provide more insightful analysis to guide these users towards more resource-effective habits, and it requires pre-processing the users' traces from these cloud resources for granular prediction (micro-prediction). However, the methodology followed by many forecasting-based cloud studies was designed to treat these traces as overall trends (macro-prediction). We propose MICRAST, an approach that generates segments of granular patterns and then carries out parallel pre-processing and training tasks, leading to a separate trained network for each of these segments. To select a model for our approach, we compared methods from two forecasting categories: statistical and artificial neural network (ANN)-based. The results led us to recurrent neural networks (RNN). We evaluated MICRAST through a comparison with related work methodologies (the macro-prediction approach) for both univariate and multivariate forecasting. Then, we measured its confidence for forecasting up to 20% of the training time steps. The results showed that our approach can forecast the preferences of each cloud user with a confidence level between 95% and 98%, surpassing related works by more than 70%.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_84-MICRAST_Micro_Forecasting_Approach_for_Cloud_User.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Observer-Based Sliding Mode Secure Control for Nonlinear Descriptor Systems Against Deception Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160583</link>
        <id>10.14569/IJACSA.2025.0160583</id>
        <doi>10.14569/IJACSA.2025.0160583</doi>
        <lastModDate>2025-05-31T09:35:23.1730000+00:00</lastModDate>
        
        <creator>M. Kchaou</creator>
        
        <creator>L Ladhar</creator>
        
        <creator>M Omri</creator>
        
        <creator>R. Abbassi</creator>
        
        <creator>H. Jerbi</creator>
        
        <subject>Descriptor systems; TS fuzzy models; fuzzy observer; deception attacks; adaptive sliding mode; SBOA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This paper delves into an advanced control scheme that combines the sliding mode control (SMC) strategy with a meta-heuristic method to examine the issue of security control for non-linear systems that are vulnerable to deception attacks on their sensors and actuators. The proposed approach focuses on the development of a secure SMC law for nonlinear descriptor systems described by TS fuzzy models. A fuzzy observer is designed to accurately estimate the states that may be affected by unpredictable sensor attacks, and an adaptive SMC controller is synthesized based on the estimated information to drive the observer's state trajectories towards the sliding surface and maintain the sliding motion thereafter. Afterward, sufficient conditions are established to ensure the admissibility of the closed-loop system. Then, the secretary bird optimization algorithm (SBOA) is explored to tackle an optimization problem with non-convex and nonlinear constraints, defined to enhance the system's performance under threats. Ultimately, a simulation study on a practical example is performed to showcase the effectiveness of the proposed control scheme in maintaining system performance, even in the presence of attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_83-Adaptive_Observer_Based_Sliding_Mode_Secure_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinventing Alzheimer's Disease Diagnosis: A Federated Learning Approach with Cross-Validation on Multi-Datasets via the Flower Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160582</link>
        <id>10.14569/IJACSA.2025.0160582</id>
        <doi>10.14569/IJACSA.2025.0160582</doi>
        <lastModDate>2025-05-31T09:35:23.1400000+00:00</lastModDate>
        
        <creator>Charmarke Moussa Abdi</creator>
        
        <creator>Fatima-Ezzahraa Ben-Bouazza</creator>
        
        <creator>Ali Yahyaouy</creator>
        
        <subject>Federated learning; alzheimer's disease; MRI; flower framework; data confidentiality; artificial intelligence; EfficientNet-B3; Segment Anything Model (SAM); medical image analysis; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Alzheimer's disease (AD) diagnosis using MRI is hindered by data-sharing restrictions. This study investigates whether federated learning (FL) can achieve high diagnostic accuracy while preserving data confidentiality. We propose an FL pipeline, utilizing EfficientNet-B3 and implemented via the Flower framework, incorporating advanced MRI segmentation (the Segment Anything Model, SAM) to isolate brain regions. The model is trained on a large ADNI MRI dataset and cross-validated on an independent OASIS dataset to evaluate generalization. Results show that our approach achieves high accuracy on ADNI (approximately 96%) and maintains strong performance on OASIS (around 85%), demonstrating robust generalization across datasets. The FL model attained high sensitivity and specificity in distinguishing AD, mild cognitive impairment, and healthy controls, validating the effectiveness of FL for AD MRI analysis. Importantly, this approach enables multi-center collaboration without sharing raw patient data. Our findings indicate that FL-trained models can be deployed across clinical sites, increasing the accessibility of advanced diagnostic tools. This work highlights the potential of FL in neuroimaging and paves the way for extension to other imaging modalities and neurodegenerative diseases.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_82-Reinventing_Alzheimers_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MRI Brain Tumor Image Enhancement Using LMMSE and Segmentation via Fast C-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160581</link>
        <id>10.14569/IJACSA.2025.0160581</id>
        <doi>10.14569/IJACSA.2025.0160581</doi>
        <lastModDate>2025-05-31T09:35:23.1100000+00:00</lastModDate>
        
        <creator>Ngan V. T. Nguyen</creator>
        
        <creator>Tuan V. Huynh</creator>
        
        <creator>Liet V. Dang</creator>
        
        <subject>Magnetic Resonance Imaging (MRI); brain tumor segmentation; image denoising; Wavelet Packet Transforms (WPT); Linear Minimum Mean Square Error (LMMSE); fast c-means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Brain MRI imaging revolutionizes tumor diagnosis, yet noise frequently obscures the images, complicating precise tumor identification and segmentation. This paper presents a comprehensive pipeline for brain MRI enhancement and tumor segmentation. The proposed method integrates Wavelet Packet Transform (WPT) and Linear Minimum Mean Square Error (LMMSE) filtering for effective noise reduction, combined with morphological operations for contrast enhancement. For segmentation, Fast C-Means clustering is employed, with the number of clusters automatically determined from histogram peaks. The tumor cluster is selected based on the highest centroid intensity and further refined by morphological operations to accurately delineate tumor borders. The approach is evaluated on the BraTS 2021 dataset, subject to Rician, Gaussian, and salt-and-pepper noise with intensities from 6% to 14%. Results demonstrate superior noise suppression compared to Denoising Convolutional Neural Networks (DnCNN) and Non-Local Means (NLM), maintaining structural integrity with a Structural Similarity Index (SSIM) of 0.43 for Rician noise at σ = 6%. Segmentation performance remains stable, achieving Dice coefficients above 0.70, precision over 90%, and sensitivity between 75% and 81%, despite challenges posed by higher levels of salt-and-pepper noise. Tumor characteristics such as position and size correspond closely to ground truth, validating the effectiveness of the system in automating tumor delineation and providing reliable diagnostic assistance in neuro-oncology.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_81-MRI_Brain_Tumor_Image_Enhancement_Using_LMMSE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic and Fuzzy Integration: A New Approach to Efficient and Flexible Querying of Relational Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160580</link>
        <id>10.14569/IJACSA.2025.0160580</id>
        <doi>10.14569/IJACSA.2025.0160580</doi>
        <lastModDate>2025-05-31T09:35:23.0800000+00:00</lastModDate>
        
        <creator>Rachid Mama</creator>
        
        <creator>Mustapha Machkour</creator>
        
        <subject>Relational databases; fuzzy logic; ontologies; flexible queries; user interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Data are "gold mines" that must be processed and interpreted quickly and efficiently to be useful. Thus, flexible queries continue to attract considerable attention. Several works have been proposed that allow users to perform flexible queries on relational databases. Most are related to fuzzy logic, which has demonstrated its performance in handling fuzziness in scalar values, although non-scalar values remain a more complex task. To address this drawback of fuzzy logic, we propose using ontologies to establish the semantic relationships between the domain elements of a queried attribute. Moreover, we present the architecture of a new system that combines both techniques to allow users to write and execute queries in a flexible way, where the criteria are not only exact but can also be fuzzy or semantic, and may also include accomplishment degrees. Furthermore, the proposed system uses a new fast methodology for handling fuzzy queries, which has shown great efficiency in accelerating the execution of fuzzy queries. Data mining techniques are used to assist users in defining their fuzzy understanding. The developed system has a user-friendly interface to assist users in managing their fuzzy and semantic preferences. Finally, we have proven the performance of our system by conducting a set of experiments in different areas. We have also provided a qualitative and quantitative comparison with flexible query systems documented in the literature.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_80-Semantic_and_Fuzzy_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Hate Speech Targeting Protected Groups in Arabic Using Hypothesis Engineering and Zero-Shot Learning with Ground Validation via ChatGPT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160579</link>
        <id>10.14569/IJACSA.2025.0160579</id>
        <doi>10.14569/IJACSA.2025.0160579</doi>
        <lastModDate>2025-05-31T09:35:23.0470000+00:00</lastModDate>
        
        <creator>Ahmed FathAlalim</creator>
        
        <creator>Yongjian Liu</creator>
        
        <creator>Qing Xie</creator>
        
        <creator>Alhag Alsayed</creator>
        
        <creator>Musa Eldow</creator>
        
        <subject>Hate speech detection; low resource Arabic language; zero-shot learning; natural language processing; ChatGPT; transfer learning; online safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Automatic detection of hate speech in low-resource languages presents a persistent challenge in natural language processing, particularly with the rise of toxic discourse on social media platforms. Arabic, characterized by its rich morphology, dialectal variation, and limited annotated datasets, is underrepresented in hate speech research, especially regarding content targeting marginalized and protected groups. This study proposes a zero-shot learning approach that leverages Natural Language Inference (NLI) models guided by carefully engineered hypotheses in native Arabic to detect hate speech against protected groups, such as women, immigrants, Jews, Black people, transgender individuals, gay people, and people with disabilities. We formulated nine different Arabic hypothesis groups and employed a zero-shot XNLI model with a baseline embedding-based model, incorporating preprocessing techniques on the HateEval Arabic dataset. The results indicate that the XNLI model achieves up to 80% accuracy in detecting targeted hate speech, significantly outperforming baseline models. Furthermore, a real-world validation using GPT-3 via the ChatGPT interface achieved 54% accuracy in zero-shot conversational settings. These findings highlight the importance of hypothesis design and linguistic preprocessing in zero-shot hate speech detection, particularly in low-resource and culturally nuanced languages, offering a scalable and culturally aware solution for moderating harmful content in Arabic online spaces.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_79-Detecting_Hate_Speech_Targeting_Protected_Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Topic Interpretability with ChatGPT: A Dual Evaluation of Keyword and Context-Based Labeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160578</link>
        <id>10.14569/IJACSA.2025.0160578</id>
        <doi>10.14569/IJACSA.2025.0160578</doi>
        <lastModDate>2025-05-31T09:35:23.0000000+00:00</lastModDate>
        
        <creator>Mashael M. Alsulami</creator>
        
        <creator>Maha A. Thafar</creator>
        
        <subject>Automatic label generation; topic modeling; Large Language Models (LLMs); topic labeling; semantic relevance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Accurate topic labeling is essential for structuring and interpreting large-scale textual data across various domains. Traditional topic modeling methods, such as Latent Dirichlet Allocation (LDA), effectively extract topic-related keywords but lack the capability to generate semantically meaningful and contextually appropriate labels. This study investigates the integration of a large language model (LLM), specifically ChatGPT, as an automatic topic label generator. A dual evaluation framework was employed, combining keyword-based and context-based assessments. In the keyword-based evaluation, domain experts reviewed ChatGPT-generated labels for semantic relevance using LDA-derived keywords. In the context-based evaluation, experts rated the alignment between ChatGPT-assigned topic labels and actual content from representative sample posts. The findings demonstrate strong agreement between AI-generated labels and human judgments in both dimensions, with high inter-rater reliability and consistent contextual relevance for several topics. These results underscore the potential of LLMs to enhance both the coherence and interpretability of topic modeling outputs. The study highlights the value of incorporating context in evaluating automated topic labeling and affirms ChatGPT’s viability as a scalable, efficient alternative to manual topic interpretation in research, business intelligence, and content management systems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_78-Enhancing_Topic_Interpretability_with_ChatGPT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Maintenance Based on Deep Learning: Early Identification of Failures in Heavy Machinery Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160577</link>
        <id>10.14569/IJACSA.2025.0160577</id>
        <doi>10.14569/IJACSA.2025.0160577</doi>
        <lastModDate>2025-05-31T09:35:22.9830000+00:00</lastModDate>
        
        <creator>Pablo Cabrera Melgar</creator>
        
        <creator>Luis Hilasaca Chambi</creator>
        
        <creator>Raul Sulla Torres</creator>
        
        <subject>Predictive maintenance; deep learning; fault detection; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Deep learning-based predictive maintenance is a key strategy in industry to prevent unexpected failures, reduce downtime, and improve operational safety. This study presents an advanced approach for early fault detection in heavy machinery components using image analysis, focusing on four critical defect types: hose wear, piston failure, corrosion, and moisture. To this end, three state-of-the-art object detection models were implemented and compared: YOLOv11, RT-DETR, and YOLO-World. The dataset consists of images captured in real-life industrial environments exhibiting variations in lighting, texture, and material degradation. A manual preprocessing and annotation process was applied to improve training quality. Model performance was evaluated using key metrics such as the precision-recall (PR) curve and the confusion matrix to determine the most efficient technique for real-time fault detection. Experimental results show that YOLOv11 achieves the highest overall accuracy, with an mAP@0.5 of 83.8%, followed by YOLO-World at 82.4% and RT-DETR at 80.3%. In terms of efficiency, YOLO-World offers a balance between accuracy and detection speed, while RT-DETR shows stable performance but lower accuracy for certain defect types. These findings confirm that deep learning-based detection models enable the rapid and accurate identification of industrial defects, facilitating the implementation of predictive maintenance strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_77-Predictive_Maintenance_Based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Layered Security Perspective on Internet of Medical Things: Challenges, Risks, and Technological Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160576</link>
        <id>10.14569/IJACSA.2025.0160576</id>
        <doi>10.14569/IJACSA.2025.0160576</doi>
        <lastModDate>2025-05-31T09:35:22.9530000+00:00</lastModDate>
        
        <creator>Ziad Almulla</creator>
        
        <creator>Hussain Almajed</creator>
        
        <creator>M M Hafizur Rahman</creator>
        
        <subject>IoMT; security risks; challenges; healthcare IoT; countermeasures; TrustMed-IoMT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The Internet of Medical Things (IoMT) refers to smart devices that are transforming the healthcare sector through continuous real-time monitoring, remote diagnostics, and real-time data exchange. Nevertheless, such systems face a number of challenges, such as data breaches, unauthorized access, and service interruptions. The study uses the PRISMA 2020 method and analyzes 25 peer-reviewed articles published between 2020 and 2025. Security risks are identified and mapped onto the perception, network, application, and cloud layers of the IoMT architecture. One of the key findings confirms that blockchain-based identity management, lightweight cryptographic protocols, and Artificial Intelligence (AI)-driven intrusion detection systems can potentially address these risks. However, these areas remain limited in terms of interoperability and resource efficiency, and solutions against emerging quantum threats are lacking. A number of countermeasures achieved near-perfect detection accuracy above 98%, leading to increased security for IoMT systems. To address these issues, a framework, TrustMed-IoMT, is introduced that integrates blockchain-based identity management, intelligent intrusion detection, and encryption that is safe against quantum attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_76-A_Layered_Security_Perspective_on_Internet_of_Medical_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Behavioural Analysis of Malware by Selecting Influential API Through TF-IDF API Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160575</link>
        <id>10.14569/IJACSA.2025.0160575</id>
        <doi>10.14569/IJACSA.2025.0160575</doi>
        <lastModDate>2025-05-31T09:35:22.9230000+00:00</lastModDate>
        
        <creator>Binayak Panda</creator>
        
        <creator>Sudhanshu Shekhar Bisoyi</creator>
        
        <creator>Sidhanta Panigrahy</creator>
        
        <subject>Malware analysis; behavioural analysis; API sequence; multiclass malware; TF-IDF; API embeddings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The constant threat of malware makes studying its behavior an ongoing task. Malware identification and classification challenges can be better addressed by analyzing software behaviorally rather than using conventional hashcode-based signatures. An API sequence represents the behavior of a program as collected during its execution. Considering API sequences gathered while malware was being executed under controlled conditions, this report addresses the issue of choosing influential APIs for malware. The proposed feature selection method, SelectAPI, selects key features, i.e., significant APIs, that can better classify malware using TF-IDF API embeddings. Two machine learning models, Random Forest, which implicitly ensembles several estimators, and Support Vector Classifier, a standard non-linear model, are trained and evaluated to validate the importance of the chosen APIs. SelectAPI has shown promising results, achieving accuracy, macro-avg precision, macro-avg recall, and macro-avg F1-score of 0.76, 0.77, 0.76, and 0.76, respectively. Focusing on influential APIs results in significantly improved performance on the open-benchmark multiclass dynamic-API-sequence-based malware dataset, MAL-API-2019. These results surpass the previously best-known accuracy of 0.60 and reported F1-score of 0.61.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_75-Behavioural_Analysis_of_Malware_by_Selecting_Influential_API.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating AI in Ophthalmology: A Deep Learning Approach for Automated Ocular Toxoplasmosis Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160574</link>
        <id>10.14569/IJACSA.2025.0160574</id>
        <doi>10.14569/IJACSA.2025.0160574</doi>
        <lastModDate>2025-05-31T09:35:22.9070000+00:00</lastModDate>
        
        <creator>Bader S. Alawfi</creator>
        
        <subject>Ocular Toxoplasmosis; Posterior Uveitis; deep learning; automated diagnosis; CNNs; transformer models; CoAtNet architecture; retinal image analysis; medical image classification; hybrid deep learning models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Background: Ocular Toxoplasmosis, a leading cause of Posterior Uveitis, demands timely diagnosis to prevent vision loss. Manual retinal image analysis is labor-intensive and variable, while existing Deep Learning models often fail to balance local details and global context in Medical Image Classification. Objective: I propose RetinaCoAt, a Hybrid Deep Learning Model based on the CoAtNet Architecture, for Automated Diagnosis of Ocular Toxoplasmosis, integrating local and global features in Retinal Image Analysis. Methods: RetinaCoAt combines Convolutional Neural Networks (CNNs) for local pathological pattern detection with Transformer Models using multi-head self-attention for global context. Enhanced by residual connections and optimized tokenization, it was trained on 3,659 retinal images (healthy vs. unhealthy) and benchmarked against VGG16, CNNs, and ResNet. Results: RetinaCoAt achieved 98% accuracy in Medical Image Classification, outperforming VGG16 (96.87%), CNNs (95%), and ResNet (93.75%), due to its robust CNN-Transformer synergy. Conclusion: RetinaCoAt advances Automated Diagnosis of Ocular Toxoplasmosis and Posterior Uveitis, with potential for broader retinal pathology detection.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_74-Integrating_AI_in_Ophthalmology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatiotemporal Modeling of Foot-Strike Events Using A0-Mode Lamb Waves and 2D Wave Equations for Biomechanical Gait Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160573</link>
        <id>10.14569/IJACSA.2025.0160573</id>
        <doi>10.14569/IJACSA.2025.0160573</doi>
        <lastModDate>2025-05-31T09:35:22.8770000+00:00</lastModDate>
        
        <creator>Tajim Md. Niamat Ullah Akhund</creator>
        
        <creator>Waleed M. Al-Nuwaiser</creator>
        
        <creator>Md. Sumon Reza</creator>
        
        <creator>Watry Biswas Jyoty</creator>
        
        <subject>Biomechanics; foot-strike modeling; lamb waves; wave equation; gait analysis; Internet of Things (IoT); Human-Computer Interaction (HCI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study introduces a physics-based framework for modeling human running biomechanics by interpreting foot-strike events as point-source excitations generating radially propagating wavefronts, akin to A0-mode Lamb waves, in a cylindrical coordinate system. Using a two-dimensional damped wave equation solved via finite-difference methods, we simulate spatiotemporal displacement fields and compare the outcomes with real-world gait kinematic and kinetic data. Our approach performs a parameter sweep of excitation frequency and amplitude to identify configurations closely replicating biomechanical signals associated with different running profiles and injury states. Unlike traditional machine learning approaches, our model leverages physical wave dynamics for simulation-validation matching, enabling interpretable identification of anomalies and potential injury risks. The results reveal distinctive wave propagation patterns between injured and non-injured runners, supporting the viability of wave-based modeling as a diagnostic and analytic tool in sports biomechanics. This work opens a novel direction for physics-informed, data-driven hybrid methods in gait analysis and injury prevention.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_73-Spatiotemporal_Modeling_of_Foot_Strike_Events.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Support Vector Machine with Rule Extraction to Improve Diabetes Prediction Using Fuzzy AHP-Sugeno and Nearest Neighbor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160572</link>
        <id>10.14569/IJACSA.2025.0160572</id>
        <doi>10.14569/IJACSA.2025.0160572</doi>
        <lastModDate>2025-05-31T09:35:22.8600000+00:00</lastModDate>
        
        <creator>Muhammadun </creator>
        
        <creator>Baity Jannaty</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Taufik Rachman</creator>
        
        <subject>SVM; Fuzzy AHP; rule extraction; diabetes; coefficient of variation; fuzzy Sugeno; SDG 3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Diabetes is one of the most prevalent chronic diseases globally, with significant mortality and morbidity rates. Early and accurate diagnosis plays a critical role in managing and mitigating its impact. However, achieving high diagnostic accuracy while ensuring interpretability remains a key challenge in medical machine learning applications. This paper proposes an interpretable and accurate hybrid framework for diabetes prediction that integrates Support Vector Machine Rule Extraction (SVMRE), Fuzzy Analytic Hierarchy Process (Fuzzy AHP), and Sugeno fuzzy inference. The primary objective of this study is to enhance prediction accuracy while enabling the extraction of meaningful and explainable decision rules derived from SVM models. To address the black-box nature of traditional SVM models, fuzzy rules are extracted and embedded into a Sugeno fuzzy inference system. Attribute importance is quantified through Fuzzy AHP based on expert consultation, ensuring medically relevant decision-making. Furthermore, to overcome rule redundancy and complexity, the coefficient of variation is computed for each rule and optimized using a Nearest Neighbor (NN) approach, which clusters rules with adjacent variation values. The proposed framework is evaluated using a real-world diabetes dataset from Sylhet, Bangladesh. It achieves a prediction accuracy of 84.62%, outperforming several conventional methods. Compared to other competitive approaches found in recent literature, such as fuzzy grey wolf optimization and neuro-fuzzy systems, our method demonstrates a superior balance between interpretability, computational efficiency, and classification performance. This study confirms that integrating rule-based learning, fuzzy expert systems, and statistical optimization provides a robust and interpretable approach for diabetes prediction. The framework aligns with Sustainable Development Goal 3 (SDG 3) by promoting early detection and decision support for non-communicable diseases in healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_72-Support_Vector_Machine_with_Rule_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Innovative Design System of Traditional Embroidery Patterns Based on Computer Linear Classifier Intelligent Algorithm Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160571</link>
        <id>10.14569/IJACSA.2025.0160571</id>
        <doi>10.14569/IJACSA.2025.0160571</doi>
        <lastModDate>2025-05-31T09:35:22.8300000+00:00</lastModDate>
        
        <creator>Xiao Bai</creator>
        
        <subject>Fisher linear discriminant analysis; embroidery pattern; interactive tool; embroidery interactive system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This research introduces an innovative system for designing traditional embroidery patterns utilizing a computer-based linear classifier intelligent algorithm. The system achieves efficient classification and recognition of embroidery pattern features by employing the Fisher linear discriminant analysis technique, thus enabling the intelligent and innovative creation of designs. Additionally, the system encompasses the design of classification algorithms for embroidery patterns and incorporates interactive tools along with embroidery systems, offering designers a user-friendly platform for pattern creation. In the system design, the Fisher linear discriminant analysis algorithm is used to classify the feature vectors of embroidery patterns, ensuring that the features of each type of pattern are accurately extracted and identified. The model simulation verifies the algorithm&#39;s effectiveness through multiple iterations, and the results show that the system significantly improves the classification accuracy of embroidery patterns and the efficiency of innovative design. Detailed data analysis shows that the system&#39;s classification accuracy exceeds 95% across different types of embroidery patterns, and user satisfaction is improved by 20%.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_71-The_Innovative_Design_System_of_Traditional_Embroidery_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximizing Shift Preference for Nurse Rostering Schedule Using Integer Linear Programming and Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160570</link>
        <id>10.14569/IJACSA.2025.0160570</id>
        <doi>10.14569/IJACSA.2025.0160570</doi>
        <lastModDate>2025-05-31T09:35:22.7970000+00:00</lastModDate>
        
        <creator>Siti Noor Asyikin Binti Mohd Razali</creator>
        
        <creator>Thesigan Achari A/L Tamilarasan</creator>
        
        <creator>Batrisyia Binti Basri</creator>
        
        <creator>Norazman bin Arbin</creator>
        
        <subject>Nurse rostering schedule; schedule optimization; metaheuristic techniques; complex scheduling; integer linear programming; genetic algorithm; shift; and off-day preference maximization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study explores how scheduling methods can support work-life balance and overall job satisfaction by considering the preferences of the nursing staff. Creating a nurse rostering schedule that maximizes staff preferences for working shifts, off days, and hospital demands was the main goal. A Google Form distributed to the nursing staff was used to gather preference data. With the help of the LPSolve IDE, an integer linear programming (ILP) technique is applied to the first dataset, and the Flexible Shift Scheduling System is utilized to facilitate a genetic algorithm approach for the second dataset. The first dataset&#39;s result reveals that the proposed schedule&#39;s preference weight is 205.8 (73.35%), indicating an increase of 46.24 (16.48%) over the current schedule&#39;s 159.56 (56.87%) preference weight. According to the results of the second dataset, the preference weight for the current schedule is 589 (62.98%), whereas the preference weight for the proposed schedule is 619.2 (66.21%), indicating a 30.2 (3.23%) increase. This demonstrates that both proposed schedules have higher preference weight values than the current schedule, satisfying the study&#39;s primary goal of optimizing staff preferences. The genetic algorithm is used for the second dataset because the problem has high complexity and the algorithm can produce a near-optimal solution. The Flexible Shift Scheduling System generates schedules more quickly and easily than manual scheduling. This study emphasizes the importance of taking nursing staff preferences into consideration when designing nurse rostering procedures to support a happier and more engaged nursing team.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_70-Maximizing_Shift_Preference_for_Nurse_Rostering_Schedule.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Line Fault Detection Combining Deep Learning and Digital Twin Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160569</link>
        <id>10.14569/IJACSA.2025.0160569</id>
        <doi>10.14569/IJACSA.2025.0160569</doi>
        <lastModDate>2025-05-31T09:35:22.7670000+00:00</lastModDate>
        
        <creator>Siyu Wu</creator>
        
        <creator>Xin Yan</creator>
        
        <subject>YOLOv5; route; fault diagnosis; digital twin; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>To address the issue of inadequate diagnosis of power line faults, an automated power line fault diagnosis technology is put forward. The research leverages the object detection algorithm YOLOv5 to construct a fault diagnosis model and enhances its anchor box loss function. In addition, the study introduces digital twin models for fault point localization and improves the recognition model by introducing GhostNet and an attention mechanism, thereby enhancing the diagnostic performance of the technology in multi-objective scenarios. In the performance test of the loss function, the improved loss function performs best in both regression loss and intersection over union, with an average loss value of 125 and an intersection over union of 0.986. In multi-scenario fault diagnosis, the research model performs best in accuracy and model loss, with values of 0.986 and 0.00125, respectively. In specific scenarios such as abnormal heating detection, when the number of targets is 4, the relative error of the research model is 0.86%, which is better than similar models. Finally, in tests of frame rate recognition and diagnostic time, the research model shows excellent performance, surpassing similar technologies. The proposed technology demonstrates strong applicability and provides technical support for the construction of power informatization and line maintenance.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_69-Power_Line_Fault_Detection_Combining_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-Driven Physical Simulation and Animation Generation in Computer Graphics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160568</link>
        <id>10.14569/IJACSA.2025.0160568</id>
        <doi>10.14569/IJACSA.2025.0160568</doi>
        <lastModDate>2025-05-31T09:35:22.7200000+00:00</lastModDate>
        
        <creator>Fei Wang</creator>
        
        <subject>GAN; computer graphics; expression synthesis; animation generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study explores an expression synthesis algorithm anchored in Generative Adversarial Networks (GAN) with attention mechanisms, achieving enhanced authenticity in facial expression generation. Evaluated on the MUG and Oulu-CASIA datasets, our method synthesizes six expressions with superior clarity (96.63&#177;0.26 confidence for neutral expressions) and smoothness (SSIM &gt;0.92 for video frames), outperforming StarGAN and ExprGAN in detail preservation and temporal stability. The proposed model demonstrates significant advantages in realism and identity retention, validated through quantitative metrics and comparative experiments.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_68-Artificial_Intelligence_Driven_Physical_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Emotion Recognition in Psychological Intervention Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160567</link>
        <id>10.14569/IJACSA.2025.0160567</id>
        <doi>10.14569/IJACSA.2025.0160567</doi>
        <lastModDate>2025-05-31T09:35:22.7030000+00:00</lastModDate>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Daniel Yupanqui-Lorenzo</creator>
        
        <creator>Meyluz Paico-Campos</creator>
        
        <creator>Claudia Marrujo-Ingunza</creator>
        
        <creator>Ana Huaman&#237;-Huaracca</creator>
        
        <creator>Maycol Acu&#241;a-Diaz</creator>
        
        <creator>Enrique Huamani-Uriarte</creator>
        
        <subject>Facial recognition; real-time; methods; psychological interventions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>In the context of mental health, this study aims to develop a real-time emotion-focused facial recognition system based on psychological intervention methods. It uses a convolutional neural network (CNN) base and is trained with the FER2013 dataset, which consists of 35,887 facial images classified into seven basic emotions. Through normalisation, data augmentation, and training in TensorFlow and Keras, the model achieved 92.3% accuracy in a pilot test with 1,000 images, achieving an F1 score of 0.92, precision of 0.93, and recall of 0.91. Subsequently, when scaled to 71,774 images, it maintained robust performance with an overall accuracy of 77.5%. Emotions such as happiness (0.83), surprise (0.80), and neutrality (0.85) were recognised with greater accuracy, while K-means analysis was applied to cluster emotional patterns in a visually interpretable way. Complementing the technical architecture, a user-friendly graphical interface was designed for psychology professionals, allowing clear visualisation of the detected emotions with a latency of just 150 milliseconds per image. Overall, this proposal represents a significant advance toward more interactive, personalised, and efficient therapies, without requiring a complex technological infrastructure. Future studies recommend exploring different multimodal signals and increasing the use of convolutional layers to improve the quality of results and data efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_67-Real_Time_Emotion_Recognition_in_Psychological_Intervention_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Meta-Heuristic Algorithm for Optimal Virtual Machine Migration in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160566</link>
        <id>10.14569/IJACSA.2025.0160566</id>
        <doi>10.14569/IJACSA.2025.0160566</doi>
        <lastModDate>2025-05-31T09:35:22.6870000+00:00</lastModDate>
        
        <creator>Hongkai LIN</creator>
        
        <subject>Cloud computing; virtualization; migration; particle swarm optimization; seahorse optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Virtual Machine (VM) migration is one of the most important features of cloud computing for resource utilization optimization, energy minimization, and quality of service enhancement. Existing migration solutions, however, suffer from excessive migration overhead, energy inefficiency, and ineffective allocation of resources. This study proposes a novel hybrid meta-heuristic algorithm through the integration of Particle Swarm Optimization (PSO) and Seahorse Optimization (SHO) to address these drawbacks. The proposed PSOSHO algorithm takes advantage of the global exploration capability of PSO and the adaptive exploitation feature of SHO and provides a sound solution for VM migration in dynamic cloud computing environments. Extensive simulation experiments were conducted for varying numbers of cloud tasks, and the results demonstrated that PSOSHO significantly outperforms existing algorithms. Specifically, it achieves improvements of up to 54% in load factor, 60% in migration count, 48% in migration cost, 7% in energy consumption, 27% in resource availability, and 37% in computation time. These results confirm the effectiveness and robustness of the proposed methodology for optimal VM migration and resource management in virtualized cloud computing infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_66-Hybrid_Meta_Heuristic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSOMCD: Particle Swarm Optimization Algorithm Enhanced with Modified Crowding Distance for Load Balancing in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160565</link>
        <id>10.14569/IJACSA.2025.0160565</id>
        <doi>10.14569/IJACSA.2025.0160565</doi>
        <lastModDate>2025-05-31T09:35:22.6570000+00:00</lastModDate>
        
        <creator>Bolin ZHOU</creator>
        
        <creator>Jiao GE</creator>
        
        <creator>RuiRui ZHANG</creator>
        
        <subject>Cloud computing; load balancing; particle swarm optimization; crowding distance; task allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Effective load balancing in cloud computing architectures is crucial for enhancing resource utilization, response times, and system stability. The present study proposes a new strategy, a Particle Swarm Optimization algorithm enhanced with Modified Crowding Distance (PSOMCD), to tackle task scheduling among Virtual Machines (VMs) in dynamic scenarios. PSOMCD supplements the traditional PSO algorithm with a modified crowding distance mechanism to improve diversity in the decision space and convergence to optimal solutions. The multi-objective fitness function addresses principal challenges in cloud computing, including load distribution, energy consumption, and throughput optimization. The performance of the algorithm is demonstrated in simulations comparing it with other optimization techniques available in the literature. Results show that PSOMCD provides better task allocation, improved load balancing, and decreased energy usage, thus effectively managing resources in dynamic and heterogeneous cloud ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_65-PSOMCD_Particle_Swarm_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECOA: An Enhanced Chimp Optimization Algorithm for Cloud Task Scheduling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160564</link>
        <id>10.14569/IJACSA.2025.0160564</id>
        <doi>10.14569/IJACSA.2025.0160564</doi>
        <lastModDate>2025-05-31T09:35:22.6100000+00:00</lastModDate>
        
        <creator>Yue WANG</creator>
        
        <subject>Cloud computing; task scheduling; resource utilization; chimp optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Effective scheduling of tasks is a key concern in cloud computing because it considerably affects system functionality, resource usage, and execution efficiency. The present study proposes an Enhanced Chimp Optimization Algorithm (ECOA) to address such problems by overcoming the disadvantages of traditional scheduling methods. The proposed ECOA combines three innovative components: 1) highly disruptive polynomial mutation enhances population diversity, 2) the Spearman rank correlation coefficient promotes the refinement of inferior solutions, and 3) the beetle antennae operator facilitates more efficient local exploitation. These changes significantly enhance the equilibrium between exploration and exploitation, decrease the chance of premature convergence, and yield better solutions. Extensive experiments on benchmark datasets show that ECOA outperforms traditional algorithms in terms of makespan, imbalance degree, and resource utilization. The obtained results confirm that the proposed ECOA has excellent potential for improving task scheduling performance in dynamic and large-scale cloud environments, representing a promising optimization solution for complex problems in cloud computing.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_64-ECOA_An_Enhanced_Chimp_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things-Driven Safety and Efficiency in High-Risk Environments: Challenges, Applications, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160563</link>
        <id>10.14569/IJACSA.2025.0160563</id>
        <doi>10.14569/IJACSA.2025.0160563</doi>
        <lastModDate>2025-05-31T09:35:22.5800000+00:00</lastModDate>
        
        <creator>Hua SUN</creator>
        
        <subject>Internet of things; high-risk environments; safety; operational efficiency; data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The Internet of Things (IoT) is a technology that can bring about significant change in several areas, especially in high-risk situations such as industrial environments and health and safety contexts. This study examines IoT applications across these domains and identifies their importance in improving risk management and operational efficiency strategies. IoT enables sensor networks, wearable devices, and remote monitoring systems with edge computing capabilities. Thus, it allows real-time monitoring, early threat detection, and predictive maintenance. Data analytics technologies make it easier to capture valuable information that stakeholders can use to make informed decisions and optimize workflows to improve performance. Despite the transformational promises of IoT, some problems remain, including security vulnerabilities, interoperability concerns, and the need for extensive training programs. Addressing these challenges offers the opportunity to create innovative, resourceful collaboration in developing robust IoT solutions to accommodate the requirements of hazardous environments. In the coming years, further growth of IoT and integration with the latest technologies like 5G and robotics promise new ways to ensure safety and efficiency in operations. Within this study, we emphasize the role of IoT as an enabling factor in transforming dangerous areas into safe and efficient zones, underscoring the safety benefits of IoT. It also provides a general perspective on potential future research and development directions.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_63-Internet_of_Things_Driven_Safety_and_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Identification of Pile Defects Based on Improved LSTM Model and Wavelet Packet Local Peaking Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160562</link>
        <id>10.14569/IJACSA.2025.0160562</id>
        <doi>10.14569/IJACSA.2025.0160562</doi>
        <lastModDate>2025-05-31T09:35:22.5630000+00:00</lastModDate>
        
        <creator>Xiaolin Li</creator>
        
        <creator>Xinyi Chen</creator>
        
        <subject>Foundation pile; defect identification; LSTM; WOA; WPT; LPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>With the continuous expansion of building scale, the structural safety of foundation piles, as key load-bearing components, has received increasing attention. To improve defect recognition under complex working conditions, this study first uses the whale optimization algorithm to perform hyperparameter optimization on the long short-term memory network model, achieving efficient classification of defect and non-defect samples. Subsequently, the signals identified as having defects are subjected to wavelet packet decomposition to extract multi-scale energy features, combined with the local peak finding method to accurately locate key reflection peaks, achieving further identification of defect types. The results showed that the classification accuracy, recognition precision, recall rate, and F1 value of the new method were the highest at 96.7%, 95.16%, 93.87%, and 94.51%, respectively, and the average recognition time was the shortest at 0.97 seconds. For drilled cast-in-place piles and prefabricated piles, defect identification errors were as low as 0.19 and 0.23, respectively, and complexity could be reduced to 65.28%, demonstrating high precision and stability in defect identification. This model has strong robustness and accuracy across various types of defect scenarios, and has good generalization ability and engineering application potential, which can provide technical references for the construction monitoring of road and bridge engineering in the future.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_62-Intelligent_Identification_of_Pile_Defects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CT Imaging-Based Deep Learning System for Non-Small Cell Lung Cancer Detection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160561</link>
        <id>10.14569/IJACSA.2025.0160561</id>
        <doi>10.14569/IJACSA.2025.0160561</doi>
        <lastModDate>2025-05-31T09:35:22.4370000+00:00</lastModDate>
        
        <creator>Devyani Rawat</creator>
        
        <creator>Sachin Sharma</creator>
        
        <creator>Shuchi Bhadula</creator>
        
        <subject>Artificial intelligence; NSCLC; ML-CNN; ADASYN; tomek link</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>About 85% of all occurrences of lung cancer are classified as Non-Small Cell Lung Cancer (NSCLC), making it a serious worldwide health concern. For better treatment results and patient survival, NSCLC must be detected early and accurately. This research presents an advanced Deep Learning-enabled Lung Cancer Detection and Classification System (LCDCS) aimed at significantly improving diagnostic precision and operational efficiency. Emerging technologies such as artificial intelligence and multi-level convolutional neural networks (ML-CNN) are increasingly being leveraged in CT imaging-based deep learning systems for accurate detection. The outlined framework leverages a multi-layer convolutional neural network to effectively analyse CT scan images and accurately classify lung nodules. Tomek link and Adaptive Synthetic Sampling (ADASYN) are used in a novel way to balance data, address class imbalance, and guarantee strong model performance. Deep learning with a CNN model is utilized to derive features, and the SoftMax function is applied for multi-class classification. Thorough evaluation on datasets like the LUNA16 dataset demonstrates that the system surpasses earlier models and data balancing techniques in accuracy, yielding a training accuracy of 95.8% and a validation accuracy of 96.9%. The findings demonstrate the potential of the suggested method as a trustworthy diagnostic instrument for the prompt identification of lung cancer. The study emphasizes how crucial it is to combine deep learning architectures with sophisticated data balancing techniques to overcome medical imaging difficulties and raise diagnostic accuracy. Future research will investigate real-time deployment in clinical settings and expand the system&#39;s capability to encompass more cancer types.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_61-CT_Imaging_Based_Deep_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EJAIoV: Enhanced Jaya Algorithm-Based Clustering for Internet of Vehicles Using Q-Learning and Adaptive Search Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160560</link>
        <id>10.14569/IJACSA.2025.0160560</id>
        <doi>10.14569/IJACSA.2025.0160560</doi>
        <lastModDate>2025-05-31T09:35:22.4070000+00:00</lastModDate>
        
        <creator>Jinchuan LU</creator>
        
        <subject>Internet of vehicles; clustering; Jaya algorithm; Q-learning; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The Internet of Vehicles (IoV) is an indispensable part of contemporary Intelligent Transportation Systems (ITS), providing efficient vehicle-to-everything (V2X) communication. Nevertheless, high mobility and consequent topological changes in IoV networks create overwhelming difficulties in establishing and maintaining stable and effective communication. In this work, we introduce the Enhanced Jaya Algorithm for IoV (EJAIoV), an optimized clustering algorithm designed to form stable, long-lived clusters in IoV scenarios. EJAIoV uses efficient random initialization with three scrambling strategies to produce diverse, high-quality solutions. Q-learning selection among three neighborhood operators enhances local search effectiveness by incorporating a segmented operator. In addition, an adaptive search balance strategy adjusts solution updating dynamically to avoid premature convergence and optimize the exploration procedure. Simulation experiments show that EJAIoV outperforms existing clustering algorithms, achieving up to 31.5% improvement in cluster lifetime and 28.2% reduction in the number of clusters across various node densities and grid sizes.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_60-EJAIoV_Enhanced_Jaya_Algorithm_Based_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum-Assisted Variational Deep Learning for Efficient Anomaly Detection in Secure Cyber-Physical System Infrastructures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160559</link>
        <id>10.14569/IJACSA.2025.0160559</id>
        <doi>10.14569/IJACSA.2025.0160559</doi>
        <lastModDate>2025-05-31T09:35:22.3770000+00:00</lastModDate>
        
        <creator>Nilesh Bhosale</creator>
        
        <creator>Bukya Mohan Babu</creator>
        
        <creator>M. Karthick Raja</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Manasa Adusumilli</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>David Neels Ponkumar Devadhas</creator>
        
        <subject>Quantum variational circuits; cyber-physical system security; hybrid quantum-classical algorithms; anomaly detection framework; quantum machine learning optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The aim of the current study is to propose a Quantum-Assisted Variational Autoencoder (QAVAE) model capable of efficiently identifying anomalies in high-dimensional, time-series data produced by cyber-physical systems. Existing machine learning approaches have limitations when capturing temporal interactions and take substantial time to run with many attributes. To meet these challenges, this study proposes a quantum-assisted approach to anomaly detection using the potential of a Quantum-Assisted Variational Autoencoder (QAVAE). The general goal of this research is to optimize anomaly detection systems using combined deep learning and quantum computing models. In the QAVAE framework, variational inference is employed to learn latent representations of time-series data, while quantum circuits are utilized to enhance the capacity of the model and its generalization capability. This work was accomplished using the Python programming language, and the analysis was carried out using TensorFlow Quantum. The QAVAE model demonstrates the highest accuracy of 95.2%, indicating its strong capability in correctly identifying both anomalous and normal instances. It learns well from the data and remains stable during evaluation, making it suitable for real-time anomaly detection in dynamic environments. In conclusion, the QAVAE model offers an anomaly detection solution that is both accurate and scalable. Utilizing the HAI dataset, the model achieved a high detection accuracy of 95.2%. Further research should address its application to quantum computing architectures as well as modifications that allow its use on multi-variable real-life data.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_59-Quantum_Assisted_Variational_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neuro-Symbolic Reinforcement Learning for Context-Aware Decision Making in Safe Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160558</link>
        <id>10.14569/IJACSA.2025.0160558</id>
        <doi>10.14569/IJACSA.2025.0160558</doi>
        <lastModDate>2025-05-31T09:35:22.3430000+00:00</lastModDate>
        
        <creator>Huma Khan</creator>
        
        <creator>Tarunika D Chaudhari</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>A. Smitha Kranthi</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>David Neels Ponkumar Devadhas</creator>
        
        <subject>Autonomous vehicles; neuro-symbolic learning; Deep Q-Network (DQN); CNN-LSTM architecture; context aware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Autonomous vehicles need to be equipped with smart, understandable, and context-aware decision-making frameworks to drive safely within crowded environments. Current deep learning approaches tend to generalize poorly, lack transparency, and perform inadequately in dealing with uncertainty within dynamic city environments. To overcome these deficiencies, this study suggests a new hybrid approach that combines Neuro-Symbolic reasoning with a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) architecture, together with a Deep Q-Network (DQN) for reinforcement learning. The model employs symbolic logic to enforce traffic regulations and infer context while relying on the CNN for extracting spatial features and the LSTM for extracting temporal dependencies in vehicle motion. The system is trained and tested using the Lyft Level 5 Motion Prediction dataset, which emulates varied and realistic driving scenarios in urban environments. Implemented on the Python platform, the new framework allows autonomous cars to generate rule-adherent, robust, and explainable choices under diverse driving scenarios. The neuro-symbolic combination improves learning robustness as well as explainability, while reinforcement learning improves long-term rewards for safety and efficiency. The experiment shows that the model provides a high accuracy of 98% on scenario-based decision-making problems in contrast to classical deep learning models used in safety-critical routing. This work benefits autonomous vehicle manufacturers, smart mobility system developers, and urban planners by providing a scalable, explainable, and reliable AI-based solution for future transportation systems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_58-Neuro_Symbolic_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capsule Network-Based Multi-Modal Neuroimaging Approach for Early Alzheimer’s Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160557</link>
        <id>10.14569/IJACSA.2025.0160557</id>
        <doi>10.14569/IJACSA.2025.0160557</doi>
        <lastModDate>2025-05-31T09:35:22.3300000+00:00</lastModDate>
        
        <creator>Kabilan Annadurai</creator>
        
        <creator>A Suresh Kumar</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Sachin Upadhye</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>K. Lalitha Vanisree</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Alzheimer’s detection; 3d-capsule networks; multi-modal neuroimaging; deep learning in healthcare; early diagnosis and classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Alzheimer’s Disease (AD) is a terminal illness affecting the human brain that leads to deterioration of cognitive function and should therefore be diagnosed as early as possible. The goal of this work is to come up with a precise and interpretable diagnostic model for the early diagnosis of Alzheimer&#39;s Disease (AD) based on multi-modal neuroimaging data. Current deep learning models such as Convolutional Neural Networks (CNNs) are limited in that they lose spatial hierarchies in 3D medical images, which inhibits classification performance and interpretability. To overcome this, in this work, we introduce a new 3D Capsule Network (3D-CapsNet) framework that captures spatial relations more effectively with dynamic routing and pose encoding to improve volumetric neuroimaging data analysis. Our approach has three principal phases: extensive pre-processing of MRI and PET scans such as skull stripping, intensity normalization, and motion correction; feature extraction through the 3D-CapsNet model; and multi-modal classification based on fusion. We used the Alzheimer&#39;s Classification dataset from Kaggle for training and testing. The model is implemented in the Python platform with TensorFlow and Keras libraries incorporating 3D CNN operations along with capsule layers to extract fine-grained features of AD-affected brain areas such as the hippocampus and entorhinal cortex. Experimental results show that our model reaches a very high classification accuracy of 92%, which is higher than the conventional architectures VGG-16, ResNet-50, and DenseNet-121 in accuracy, precision, recall, F1-score, and AUC-ROC. This strategy is helpful to clinicians and medical researchers because it gives them a non-invasive, interpretable, and trustworthy tool for diagnosing and monitoring various stages of AD (Non-Demented, Very Mild, Mild, and Moderate). It sets the stage for real-time clinical integration and future studies in monitoring disease progression over time.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_57-Capsule_Network_Based_Multi_Modal_Neuroimaging_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Convolutional Neural Network-Temporal Attention Mechanism Approach for Real-Time Prediction of Soil Moisture and Temperature in Precision Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160556</link>
        <id>10.14569/IJACSA.2025.0160556</id>
        <doi>10.14569/IJACSA.2025.0160556</doi>
        <lastModDate>2025-05-31T09:35:22.3130000+00:00</lastModDate>
        
        <creator>M. L. Suresh</creator>
        
        <creator>Swaroopa Rani B</creator>
        
        <creator>T K Rama Krishna Rao</creator>
        
        <creator>S. Gokilamani</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Prajakta Waghe</creator>
        
        <creator>Jihane Ben Slimane</creator>
        
        <subject>Precision agriculture; edge AI; convolutional neural network; temporal attention mechanism; smart irrigation optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Precision Agriculture is a combination of Artificial Intelligence (AI) and the Internet of Things (IoT) to improve farming efficiency, sustainability, and overall productivity. This work presents a hybrid CNN-TAM (Convolutional Neural Network–Temporal Attention Mechanism) model running on Edge AI devices for real-time prediction of crop soil temperature and soil moisture. IoT sensors gather long-term environmental data, which is preprocessed to remove noise and extract meaningful spatial and temporal features. The CNN captures spatial patterns, while TAM assigns dynamic attention weights to important time steps, enhancing prediction accuracy. The proposed hybrid model surpasses conventional methods like Linear Regression, Random Forest, LSTM, and a standalone CNN, with the lowest RMSE (1.7). Different from cloud-based deployments, the Edge AI deployment offers reduced latency, consumes lower bandwidth, and is better suited for scalability, enabling large-scale, real-time precision farming. Experimental outcomes confirm enhanced real-time prediction capability, allowing farmers to optimize irrigation schedules, reduce resource waste, and improve crop resilience against extreme weather conditions. This ensures sustainable resource management, conserves water and fertilizers, and enhances decision-making in agriculture. The results demonstrate the capability of AI-driven decision-support tools in present-day agriculture and present a scalable, cost-effective, and deployable solution for both small- and large-scale farms. By emphasizing data privacy, real-time processing, and low-latency inference, this research contributes to the area of AI-based precision agriculture, addressing key challenges such as real-time analytics, unreliable connectivity, and the need for immediate on-site decision-making. The study develops an AI-powered system for intelligent farm management, employing Smart Irrigation Optimization to support sustainable and efficient agricultural practices.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_56-A_Hybrid_Convolutional_Neural_Network_Temporal_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Task Allocation in Internet of Things Using L&#233;vy Flight-Driven Walrus Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160555</link>
        <id>10.14569/IJACSA.2025.0160555</id>
        <doi>10.14569/IJACSA.2025.0160555</doi>
        <lastModDate>2025-05-31T09:35:22.2800000+00:00</lastModDate>
        
        <creator>Yaozhi CHEN</creator>
        
        <subject>Internet of things; energy efficiency; task scheduling; walrus; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The rapid growth of the Internet of Things (IoT) has presented a significant challenge in efficiently managing energy-aware task distribution over heterogeneous devices. Optimizing the efficient use of resources in terms of energy consumption is critical when considering IoT device resource-constrained environments. This study proposes a new IoT task distribution mechanism using an Enhanced Walrus Optimization Algorithm (EWOA). EWOA incorporates sophisticated techniques, such as L&#233;vy flight processes and augmented exploration-exploitation, and is thus well suited to complex and dynamic IoT environments. EWOA assigns tasks effectively, considering device capability compatibility and reduced energy consumption. Simulations over benchmark IoT scenarios validate that EWOA outperforms current approaches in terms of energy efficiency, convergence, and robustness. In conclusion, significant improvements in minimizing energy consumption, enhancing task execution performance, and efficiently using resources in IoT networks were demonstrated. In this work, EWOA was shown to be an effective tool for solving NP-hard IoT optimization problems, opening doors for future work on applying sophisticated metaheuristic algorithms in energy-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_55-Efficient_Task_Allocation_in_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GOA-WO-ML: Enhancing Internet of Things Security with Gannet Optimization and Walrus Optimizer-Based Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160554</link>
        <id>10.14569/IJACSA.2025.0160554</id>
        <doi>10.14569/IJACSA.2025.0160554</doi>
        <lastModDate>2025-05-31T09:35:22.2330000+00:00</lastModDate>
        
        <creator>Jing GUO</creator>
        
        <creator>Wen CHEN</creator>
        
        <creator>Xu ZHANG</creator>
        
        <subject>Internet of things; intrusion detection; machine learning; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The rapid development of Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) has fueled security challenges, necessitating efficient intrusion detection approaches. The computationally intensive nature of the task and the high-dimensional data preclude the direct employment of machine learning-based Intrusion Detection Systems (IDSs). This study introduces GOA-WO-ML, a robust IDS that integrates the Gannet Optimization Algorithm (GOA) and the Walrus Optimizer (WO) for feature selection and parameter tuning in machine learning algorithms. The system is tested on the NSL-KDD dataset, demonstrating better cyberattack detection performance. The experimental findings suggest that GOA-WO-ML improves intrusion detection accuracy, decreases false positives, and incurs low computational overhead compared to traditional methods. By adopting bio-inspired methods, the proposed system successfully counteracts security issues in IoT-WSNs through efficient surveillance. Future research directions include deep learning improvements and real-time deployment methods in dynamic environments to further improve intrusion detection performance.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_54-GOA_WO_ML_Enhancing_Internet_of_Things_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBSCAN Algorithm in Creation of Media and Entertainment: Drawing Inspiration from TCM Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160553</link>
        <id>10.14569/IJACSA.2025.0160553</id>
        <doi>10.14569/IJACSA.2025.0160553</doi>
        <lastModDate>2025-05-31T09:35:22.2200000+00:00</lastModDate>
        
        <creator>Xiaoxiao Li</creator>
        
        <creator>Libo Wan</creator>
        
        <creator>Xin Gao</creator>
        
        <subject>DBSCAN algorithm; TCM cultural communication; picture character product design; sand cat swarm optimization methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study proposes a clustering, division, and classification method for Traditional Chinese Medicine (TCM) culture communication data based on an enhanced DBSCAN clustering algorithm and an ELM model, with the objective of addressing TCM culture communication image role product design. First, to extract feature vectors for TCM cultural communication, we analyse the communication role product design path, design the product design scheme of the TCM cultural communication image role, and extract the feature vectors of TCM cultural communication. Second, to cluster and classify TCM health data, we propose the SCSO-DBSCAN clustering method, which combines the DBSCAN clustering algorithm with the sand cat swarm optimization algorithm. Finally, the clustering and classification methods are tested and analyzed using TCM cultural dissemination data. The problem of classifying TCM health data clusters is addressed by incorporating the ELM network, and a classification method for TCM cultural data dissemination based on the ELM model is proposed. The experimental results demonstrate that, compared with other algorithms for clustering, dividing, and classifying TCM cultural communication data, the proposed method enhances the accuracy of both TCM health data clustering division and TCM cultural data communication classification.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_53-DBSCAN_Algorithm_in_Creation_of_Media_and_Entertainment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Graph Convolutional Networks (GCN)-Collaborative Filtering Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160552</link>
        <id>10.14569/IJACSA.2025.0160552</id>
        <doi>10.14569/IJACSA.2025.0160552</doi>
        <lastModDate>2025-05-31T09:35:22.1870000+00:00</lastModDate>
        
        <creator>Qingfeng Zhang</creator>
        
        <subject>Graph convolutional networks; collaborative filtering; hybrid recommender systems; university library performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study proposes a hybrid recommendation system that integrates Graph Convolutional Networks (GCN) and collaborative filtering to improve the accuracy and performance of university library book recommendation systems. The goal is to develop a comprehensive evaluation method for assessing the effectiveness of recommendation algorithms in university libraries. A combination of GCN and collaborative filtering algorithms was employed to enhance recommendation accuracy: GCN was used to capture complex relationships in user data, while collaborative filtering focused on user preferences. Performance evaluation was conducted using a set of functional indicators, and the system was tested on real library data. The evaluation metrics included Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and evaluation time. The GCN-based evaluation model significantly outperformed traditional methods, achieving a MAPE of 0.7597 and an RMSE of 0.3775, both superior to the BP, CNN, and DBN algorithms. In terms of evaluation time, the GCN algorithm was slower than BP (0.44s versus 0.32s) but faster than DBN (0.87s) and CNN (0.67s). These results demonstrate the robustness and efficiency of the GCN model in predicting library recommendations. The proposed hybrid system effectively improves the accuracy and evaluation of university library recommendation systems; the GCN-based model outperformed the other methods in error rates while remaining competitive in evaluation time, making it a valuable tool for enhancing personalized recommendations in library systems. Future research will focus on optimizing the computational efficiency of the GCN model.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_52-A_Hybrid_Graph_Convolutional_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Malaria Infections Using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160551</link>
        <id>10.14569/IJACSA.2025.0160551</id>
        <doi>10.14569/IJACSA.2025.0160551</doi>
        <lastModDate>2025-05-31T09:35:22.1570000+00:00</lastModDate>
        
        <creator>Luis Edison Nahui Vargas</creator>
        
        <creator>Mario Aquino Cruz</creator>
        
        <subject>Malaria diagnosis; CNN architectures; deep learning; artificial intelligence; plasmodium; clinical decision support; medical imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Malaria persists as a serious global public health threat, particularly in resource-limited regions where timely and accurate diagnosis is a challenge due to poor medical infrastructure. This study presents a comparative evaluation of three pre-trained convolutional neural network (CNN) architectures—EfficientNetB0, InceptionV3, and ResNet50—for automated detection of Plasmodium-infected blood cells using the Malaria Cell Images Dataset. The models were implemented in Python with TensorFlow and trained in Google Colab Pro with GPU A100 acceleration. Among the models evaluated, ResNet50 proved to be the most balanced, achieving 97% accuracy, a low false positive rate (1.8%), and the shortest training time (2.9 hours), making it a suitable choice for implementation in real-time clinical settings. InceptionV3 obtained the highest sensitivity (98% recall), although with a higher false positive rate (4.0%) and a higher computational demand (6.5 hours). EfficientNetB0 trained in 3.2 hours but showed weaker validation performance and a higher false negative rate (6.2%). Standard metrics—accuracy, loss, recall, F1-score and confusion matrix—were applied under a non-experimental cross-sectional design, along with regularization and data augmentation techniques to improve generalization and mitigate overfitting. As a main contribution, this research provides reproducible empirical evidence to guide the selection of CNN architectures for malaria diagnosis, especially in resource-limited settings. This systematic comparison of state-of-the-art models under a single protocol and homogeneous metrics represents a significant novelty in the literature, guiding the selection of the most appropriate architecture. In addition, a lightweight graphical user interface (GUI) was developed that allows real-time visual testing, reinforcing its application in clinical and educational settings. The findings also suggest that these models, in particular ResNet50, could be adapted for the diagnosis of other parasitic diseases with similar cell morphology, such as leishmaniasis or babesiosis.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_51-Detection_of_Malaria_Infections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Sequence Augmentation and Optimized Contrastive Loss Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160550</link>
        <id>10.14569/IJACSA.2025.0160550</id>
        <doi>10.14569/IJACSA.2025.0160550</doi>
        <lastModDate>2025-05-31T09:35:22.1270000+00:00</lastModDate>
        
        <creator>Minghui Li</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <subject>Recommendation algorithm; data sparsity; loss function; sequence augmentation; timestamp optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>To address the issues of relevance and diversity imbalance in the augmented data and the shortcomings of existing loss functions, this study proposes a recommendation algorithm based on hybrid sequence augmentation and optimized contrastive loss. First, two new data augmentation operators are designed and combined with the existing operators to form a more diversified augmentation strategy. This approach better balances the relevance and diversity of the augmented data, ensuring that the model can make more accurate recommendations when facing various scenarios. Additionally, to optimize the training process of the model, this study also introduces an improved loss function. Unlike the traditional cross-entropy loss, this loss function introduces a temporal accumulation term before calculating the cross-entropy loss, integrating the advantages of binary cross-entropy loss. This overcomes the limitation of traditional methods, which apply cross-entropy loss only at the last timestamp of the sequence, thereby improving the model&#39;s accuracy and stability. Experiments on the Beauty, Sports, Yelp, and Home datasets show significant improvements in the Hit@10 and NDCG@10 metrics, demonstrating the effectiveness of the recommendation model based on hybrid sequence augmentation and optimized contrastive loss. Specifically, the Hit metric, which reflects model accuracy, improves by 8.64%, 13.07%, 5.92%, and 19.28% respectively on these four datasets. The NDCG metric, which measures ranking quality, increases by 15.60%, 19.01%, 9.66%, and 20.31% respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_50-Hybrid_Sequence_Augmentation_and_Optimized_Contrastive_Loss.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FB-PNet: A Semantic Segmentation Model for Automated Plant Leaf and Disease Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160549</link>
        <id>10.14569/IJACSA.2025.0160549</id>
        <doi>10.14569/IJACSA.2025.0160549</doi>
        <lastModDate>2025-05-31T09:35:22.0930000+00:00</lastModDate>
        
        <creator>P Dinesh</creator>
        
        <creator>Ramanathan Lakshmanan</creator>
        
        <subject>Semantic segmentation; forward-backward propagated percept net; intersection over union; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Semantic segmentation is an important operation in computer vision, but it is generally hampered by high computational cost and the labor-intensive, time-consuming process of pixel-wise labeling. As a solution to this issue, the present study introduces a state-of-the-art segmentation system based on the Forward-Backward Propagated Percept Net (FB-PNet) architecture, augmented with Perception Convolution layers designed specifically for this purpose. The suggested method improves segmentation precision and processing efficiency by capturing fine visual features and discarding unnecessary data. The performance of the model is tested using key evaluation metrics, including Intersection over Union (IoU), Dice coefficient, Loss, Recall, and Precision. Experimental results indicate that the model effectively segments leaf and disease regions in plant images without requiring full pixel-by-pixel labeling. Data augmentation techniques also greatly improve the model&#39;s ability to handle new situations. A robust dataset partitioning strategy allows thorough performance testing, demonstrating the strength and flexibility of the model on new data from the PlantVillage dataset, even without the use of annotation masks. The contribution of this research is an efficient and scalable approach to large-scale plant leaf and disease detection capable of sustaining precision agriculture use cases.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_49-FB_PNet_A_Semantic_Segmentation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Parkinson’s Disease Progression Using Deep Learning: A Hybrid Auto Encoder and Bi-LSTM Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160548</link>
        <id>10.14569/IJACSA.2025.0160548</id>
        <doi>10.14569/IJACSA.2025.0160548</doi>
        <lastModDate>2025-05-31T09:35:22.0800000+00:00</lastModDate>
        
        <creator>Sri Lavanya Sajja</creator>
        
        <creator>Kabilan Annadurai</creator>
        
        <creator>S. Kirubakaran</creator>
        
        <creator>TK Rama Krishna Rao</creator>
        
        <creator>P. Satish</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yahia Said</creator>
        
        <subject>Auto encoders; DL; Parkinson’s disease; Bi-LSTM; tele monitoring dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Parkinson&#39;s disease (PD) is a progressive, chronic neurodegenerative disorder characterized by motor impairment, speech deficits, and cognitive decline. Monitoring disease progression accurately and continuously is imperative for early treatment planning and personalized intervention. Conventional diagnostic methods—clinical examination and traditional machine learning (ML) algorithms—tend to be insufficient for identifying the intricate temporal patterns of PD progression and require frequent clinic visits. Although there is no cure for the disease, treatments exist to manage it. To tackle these issues, we introduce a deep learning (DL)-based approach that integrates auto encoders for feature learning with Bi-Directional Long Short-Term Memory (Bi-LSTM) networks for temporal sequence modeling. The hybrid model successfully monitors PD severity over time by learning complex patterns in the data. We evaluate our method with the Parkinson&#39;s Telemonitoring Dataset from the UCI Machine Learning Repository, which contains longitudinal voice recordings together with Unified Parkinson&#39;s Disease Rating Scale (UPDRS) scores—rendering it particularly well-suited for time-series analysis. Implemented in Python with TensorFlow, the model applies sophisticated training methods to achieve maximum performance. Experimental results confirm a dramatic improvement over traditional ML methods, producing an accuracy rate of 95.2%. Such high predictive power facilitates timely adjustment of treatment and improves patient management. The suggested model presents a non-invasive, scalable, real-time PD monitoring solution. It aids neurologists, clinicians, and researchers by offering an AI-based platform for pre-emptive intervention, and it helps patients by facilitating continuous remote monitoring, minimizing frequent clinic visits, and enhancing their quality of life.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_48-Tracking_Parkinsons_Disease_Progression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging the Gap: The Role of Education and Digital Technologies in Revolutionizing Livestock Farming for Sustainability and Resilience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160547</link>
        <id>10.14569/IJACSA.2025.0160547</id>
        <doi>10.14569/IJACSA.2025.0160547</doi>
        <lastModDate>2025-05-31T09:35:22.0300000+00:00</lastModDate>
        
        <creator>Nur Amlya Abd Majid</creator>
        
        <creator>Mohd Fahmi Mohamad Amran</creator>
        
        <creator>Muhammad Fairuz Abd Rauf</creator>
        
        <creator>Lim Seong Pek</creator>
        
        <creator>Suziyanti Marjudi</creator>
        
        <creator>Puteri Nor Ellyza Nohuddin</creator>
        
        <creator>Kemal Farouq Mauladi</creator>
        
        <subject>Livestock farming; sustainable agriculture; digital technologies; farmer education; climate resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Livestock farming remains a cornerstone of global agricultural systems, contributing significantly to food security, economic development, and rural livelihoods. However, the sector is increasingly challenged by environmental degradation, inefficient practices, and socio-economic barriers. Education serves as a pivotal solution, empowering farmers with the knowledge and skills required for sustainable livestock management. This bibliometric analysis explores the intersection of livestock farming and education, analyzing research trends, thematic clusters, and collaboration patterns from 2015 to 2024 using data from the Web of Science database and VOSviewer software. The analysis identifies critical themes, such as sustainable practices, climate resilience, zoonotic disease management, and socio-economic empowerment, underscoring the transformative role of education in addressing these issues. Additionally, the integration of digital technologies, such as mobile learning platforms, precision farming tools, and blockchain-based traceability systems, enhances the accessibility and effectiveness of educational initiatives in livestock management. The findings reveal a steady growth in research on this topic, with significant academic and practical implications. Targeted educational interventions, including Farmer Field Schools and tailored training programs, are recommended to enhance productivity, promote sustainability, and foster inclusivity in the livestock sector. By integrating education with livestock farming, the study contributes to achieving Sustainable Development Goals, particularly Goals 2 (Zero Hunger) and 4 (Quality Education). This research provides a comprehensive foundation for policymakers, researchers, and practitioners to advance the integration of education in livestock farming, fostering resilience and sustainability within the sector.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_47-Bridging_the_Gap_The_Role_of_Education_and_Digital_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Assisted Serverless Framework for AI-Driven Healthcare Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160546</link>
        <id>10.14569/IJACSA.2025.0160546</id>
        <doi>10.14569/IJACSA.2025.0160546</doi>
        <lastModDate>2025-05-31T09:35:22.0170000+00:00</lastModDate>
        
        <creator>Akash Ghosh</creator>
        
        <creator>Abhraneel Dalui</creator>
        
        <creator>Lalbihari Barik</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Sunil Kumar Sharma</creator>
        
        <creator>Bibhuti Bhusan Dash</creator>
        
        <creator>Satyendr Singh</creator>
        
        <creator>Namita Dash</creator>
        
        <creator>Susmita Patra</creator>
        
        <creator>Sudhansu Shekhar Patra</creator>
        
        <subject>AIBLOCK; blockchain; healthfaas; latency optimization; serverless computing; Transport Layer Security (TLS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>With the advent of new sensor device designs, IoT-based medical applications are increasingly being employed. This study introduces BlockFaaS: a blockchain-assisted serverless framework that incorporates advanced AI models into latency-sensitive healthcare applications with confidentiality, energy efficiency, and real-time decision-making. The framework combines the AIBLOCK architecture with dynamic sharding and zero-knowledge proofs, making it strongly scalable while assuring data inviolability, and integrates HealthFaaS, a serverless platform for cardiovascular risk detection. Explainable AI and federated learning models are introduced into the system to maintain an equilibrium between data privacy and interpretability. All communication layers use the Transport Layer Security protocol to ensure security. The proposed system is validated with performance metrics such as real-time response rates and energy consumption, proving superior to the existing HealthFaaS and AIBLOCK technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_46-Blockchain_Assisted_Serverless_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Driven Hierarchical Federated Learning for Privacy-Preserving Edge AI in Heterogeneous IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160545</link>
        <id>10.14569/IJACSA.2025.0160545</id>
        <doi>10.14569/IJACSA.2025.0160545</doi>
        <lastModDate>2025-05-31T09:35:21.9830000+00:00</lastModDate>
        
        <creator>Pournima Pande</creator>
        
        <creator>Bukya Mohan Babu</creator>
        
        <creator>Poonam Bhargav</creator>
        
        <creator>T L Deepika Roy</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>V Diana Earshia</creator>
        
        <subject>Edge AI; federated learning; wearable health monitoring; arrhythmia; privacy-preserving; IoT device; 1DCNN-LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>ECG arrhythmia detection is very important in the identification and management of patients with cardiac disorders. Centralized machine learning models are privacy-invasive, while distributed ones deal poorly with data heterogeneity across devices. To address these challenges, this study presents an edge-AI, attention-driven hierarchical federated learning framework that uses a 1-Dimensional Convolutional Neural Network (1D-CNN)-Long Short-Term Memory (LSTM)-Attention architecture to classify arrhythmia in ECG recordings. The model captures the spatial characteristics of ECG signals and their temporal dynamics, with attention maps identifying the significant regions of the input and providing high interpretability and accuracy. Federated learning is applied to train the model in a decentralized, privacy-preserving manner while the raw data remains on the edge devices. For assessment, this study utilized the St. Petersburg INCART 12-lead Arrhythmia Database for wearable health monitoring, achieving an overall classification accuracy of 96.5% and an average AUC-ROC of 0.98 across five classes: Normal (N), Supraventricular (S), Ventricular (V), Fusion (F), and Unclassified (Q). The proposed model was built in Python with the TensorFlow deep learning framework and tested on Raspberry Pi devices to mimic edge settings. Overall, this study proves that ECG arrhythmias can be classified reliably and securely on resource-constrained IoT devices, enabling real-time cardiac monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_45-Attention_Driven_Hierarchical_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Tuning Arabic and Multilingual BERT Models for Crime Classification to Support Law Enforcement and Crime Prevention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160544</link>
        <id>10.14569/IJACSA.2025.0160544</id>
        <doi>10.14569/IJACSA.2025.0160544</doi>
        <lastModDate>2025-05-31T09:35:21.9530000+00:00</lastModDate>
        
        <creator>Njood K. Al-harbi</creator>
        
        <creator>Manal Alghieth</creator>
        
        <subject>Artificial intelligence; deep learning; natural language processing; bidirectional encoder representation from transformer; crime classification; crime prevention; tweets; text classification; transformer; Arabic; X</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Safety and security are essential to social stability since their absence disrupts economic, social, and political structures and weakens basic human needs. A secure environment promotes development, social cohesion, and well-being, making it crucial to national resilience and advancement. Law enforcement struggles with rising crime, population density, and technology, and analyzing and utilizing the resulting data demands considerable time and effort. This study employs AI to classify Arabic text in order to detect criminal activity. Recent transformer methods, such as Bidirectional Encoder Representations from Transformers (BERT) models, have shown promise in NLP applications, including text classification, and applying them to crime prevention can yield significant insights. They are effective because of their unique architecture, especially their capacity to handle text in both left and right contexts after pre-training on massive data. The primary concerns with previous studies are the limited number of crime-domain studies that employ the BERT transformer and the limited availability of Arabic crime datasets. This study therefore creates its own X (formerly Twitter) dataset. The tweets are then pre-processed, data imbalance is addressed, and BERT-based models are fine-tuned using six Arabic BERT models and three multilingual models to classify criminal tweets and identify the optimal variant. Findings demonstrate that Arabic models are more effective than multilingual models. MARBERT, the best Arabic model, surpasses the outcomes of previous studies by achieving an accuracy and F1-score of 93%, while mBERT is the best multilingual model with an F1-score and accuracy of 89%. This emphasizes the efficacy of MARBERT in classifying Arabic criminal text and illustrates its potential to assist in crime prevention and the defense of national security.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_44-Fine_Tuning_Arabic_and_Multilingual_BERT_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear Correction Model for Statistical Inference Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160543</link>
        <id>10.14569/IJACSA.2025.0160543</id>
        <doi>10.14569/IJACSA.2025.0160543</doi>
        <lastModDate>2025-05-31T09:35:21.9230000+00:00</lastModDate>
        
        <creator>Jing Zhao</creator>
        
        <creator>Zhijiang Zhang</creator>
        
        <subject>Linear correction model; statistical analysis; fiducial inference; numerical simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>A linear correction model based on joint independent information is proposed to optimize statistical inference performance in high-dimensional data and small-sample scenarios by integrating fiducial inference and Bayesian posterior prediction methods. The model utilizes multi-source data features to construct a joint independent information framework, combined with an information-domain dynamic correction mechanism, significantly improving parameter estimation efficiency and confidence interval coverage. Numerical simulation shows that when the sample size is 30, the posterior prediction method has a coverage rate of 0.927, approaching the theoretical 95% level, and the coverage probability approaches the ideal level with increasing sample size. Compared with traditional methods, the model exhibits stronger adaptability and stability under high-dimensional noise covariance and dynamic data streams, providing an efficient and robust theoretical tool for statistical inference in complex data environments.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_43-Linear_Correction_Model_for_Statistical_Inference_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Twin-Based Predictive Analytics for Urban Traffic Optimization and Smart Infrastructure Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160542</link>
        <id>10.14569/IJACSA.2025.0160542</id>
        <doi>10.14569/IJACSA.2025.0160542</doi>
        <lastModDate>2025-05-31T09:35:21.8900000+00:00</lastModDate>
        
        <creator>A. B. Pawar</creator>
        
        <creator>Shamim Ahmad Khan</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Vijay Kumar Burugari</creator>
        
        <creator>Shokhjakhon Abdufattokhov</creator>
        
        <creator>Aanandha Saravanan</creator>
        
        <creator>Refka Ghodhbani</creator>
        
        <subject>Digital twin technology; traffic flow optimization; predictive analytics; smart city infrastructure; GRU-CNN hybrid model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>In modern cities, urban traffic congestion remains a persistent issue that causes longer journey times, excessive fuel consumption, and environmental pollution. Traditional traffic management systems often employ static models that are insensitive to real-time changes in urban mobility patterns, resulting in inefficient congestion relief. This study proposes a predictive analytics system based on digital twins to enhance smart city infrastructure management and optimize traffic flow, transcending these limitations. A Convolutional Neural Network–Gated Recurrent Unit (CNN-GRU) model is embedded at the core of the proposed system to effectively capture and learn spatial and temporal traffic patterns, enhancing prediction accuracy and real-time decision-making. The model is trained on actual urban traffic data to ensure scalability and robustness. The system is developed and verified with Python, TensorFlow, and simulation-based digital twin platforms. The experimental results evidence the ability of the model to predict traffic conditions and relieve congestion, with a high prediction accuracy of 94.5%. Enhanced route planning, anticipatory congestion avoidance, and smart traffic signal control are among the primary benefits. As a result, urban mobility is enhanced and traffic congestion is reduced substantially. This research contributes to the evolution of intelligent transportation systems by being the first to integrate deep learning-based predictive analytics with digital twin technology. Ultimately, the proposed framework encourages the emergence of future-oriented smart city infrastructure and sustainable city transport.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_42-Digital_Twin_Based_Predictive_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disease Prediction from Symptom Descriptions Using Deep Learning and NLP Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160541</link>
        <id>10.14569/IJACSA.2025.0160541</id>
        <doi>10.14569/IJACSA.2025.0160541</doi>
        <lastModDate>2025-05-31T09:35:21.8770000+00:00</lastModDate>
        
        <creator>Salmah Saad Al-qarni</creator>
        
        <creator>Abdulmohsen Algarni</creator>
        
        <subject>Natural language processing; disease prediction; machine learning; deep learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Accurate disease prediction from symptom descriptions is vital for improving early detection and enabling remote healthcare services, especially in the evolving landscape of digital health. Traditional diagnosis methods face significant limitations due to their reliance on structured datasets and subjective assessments, leading to delays and inefficiencies in the diagnosis process. Our strategy is to employ advanced NLP techniques such as tokenization and TF-IDF, along with DL techniques like LSTM, CNN-LSTM, and GRU, to analyze unstructured symptom data and more accurately predict diseases. The study also compares two text transformation techniques (TF-IDF vectorization and tokenization) with traditional Machine Learning (ML) methods like Decision Trees to determine the best technique. Through intensive experiments on two datasets (one with 24 diseases and one with 41 diseases), the efficiency of the proposed methods is verified and the importance of using NLP and deep learning in revolutionizing healthcare is illustrated, particularly in upgrading remote diagnosis and enabling early medical intervention. The best-performing model, CNN-LSTM using tokenized text, achieved 99.90% accuracy on the 41-disease dataset, and LSTM with TF-IDF achieved 98.8% accuracy on the 24-disease dataset, outperforming or matching results from more complex models in prior studies. The findings show that combining NLP and deep learning enables accurate, efficient disease prediction, advancing remote care and early intervention in digital healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_41-Disease_Prediction_from_Symptom_Descriptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA-Based Implementation of Enhanced DGHV Homomorphic Encryption: A Power-Efficient Approach to Secure Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160540</link>
        <id>10.14569/IJACSA.2025.0160540</id>
        <doi>10.14569/IJACSA.2025.0160540</doi>
        <lastModDate>2025-05-31T09:35:21.8430000+00:00</lastModDate>
        
        <creator>Gurdeep Singh</creator>
        
        <creator>Sonam Mittal</creator>
        
        <creator>Hani Moaiteq Aljahdali</creator>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Ala Eldin A Awouda</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Salil Bharany</creator>
        
        <subject>Homomorphic encryption; cybersecurity; cryptography; DGHV; FPGA; Xilinx Vivado tool; Genesys Kintex</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Homomorphic encryption (HE) is an emerging area of secure computing and privacy. This study develops an FPGA-based implementation of an HE algorithm, Enhanced DGHV, which enables real-time computation on encrypted text without disclosing the original data. The research focuses on implementing the Enhanced DGHV fully homomorphic encryption algorithm on FPGA hardware to achieve a scheme that is more efficient in terms of performance and security. The design is implemented with the Xilinx Vivado tool on a Genesys 2 Kintex-7 FPGA board. Software simulation confirms a total power consumption of 3.12 W with 3.2% I/O usage, highlighting successful synthesis with few resources, while the hardware implementation consumes a comparable 9.105 W. Furthermore, the effective FPGA-based implementation confirms a method for balancing power consumption and performance when implementing the DGHV algorithm. The results show that the overall computational complexity can be reduced, and that hardware and software integration helps achieve an increased level of data security for homomorphic encryption algorithms with improved efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_40-FPGA_Based_Implementation_of_Enhanced_DGHV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Missing Data in Wireless Sensor Network Through Spatial-Temporal Correlation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160539</link>
        <id>10.14569/IJACSA.2025.0160539</id>
        <doi>10.14569/IJACSA.2025.0160539</doi>
        <lastModDate>2025-05-31T09:35:21.8130000+00:00</lastModDate>
        
        <creator>Walid Atwa</creator>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Eman A. Aldhahr</creator>
        
        <creator>Nourah Fahad Janbi</creator>
        
        <subject>Wireless sensor networks; missing data estimation; spatial correlation; temporal correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Wireless sensor networks consist of a set of smart sensors with limited memory and wireless communication capabilities. These sensors collect data from the environment and send them to an application center. However, data loss can occur due to the characteristics of the sensors, which negatively affects the accuracy of applications. To solve this problem, the missing data must be estimated for applications that depend on accurate data collection. In this study, we present an algorithm that uses the most significant historical data to estimate the missing data based on spatial and temporal correlations. In the proposed algorithm, we combine spatial correlation, by using data from the closest sensor based on the missing pattern, with temporal correlation, by referring to the closest data prior to the missing instance. The experimental results demonstrate that the proposed algorithm lowers estimation errors compared to current algorithms for a variety of missing data patterns.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_39-Estimating_Missing_Data_in_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Customer Churn Analysis by Using Real-Time Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160538</link>
        <id>10.14569/IJACSA.2025.0160538</id>
        <doi>10.14569/IJACSA.2025.0160538</doi>
        <lastModDate>2025-05-31T09:35:21.7800000+00:00</lastModDate>
        
        <creator>Haitham Ghallab</creator>
        
        <creator>Mona Nasr</creator>
        
        <creator>Hanan Fahmy</creator>
        
        <subject>Customer churn; real-time analysis; continual learning; machine learning; event-driven development; stacked ensemble learning; replay-based approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Customer churn, the loss of customers to competitors, poses a significant challenge for businesses, particularly in competitive industries such as banking and telecommunications. As a result, several customer churn analysis models have been proposed to identify at-risk customers and enable top managers to implement strategic decisions to mitigate churn and improve customer retention. Although the existing models provide top managers with promising insights for churn prediction, they rely on a batch-based training approach using fixed datasets collected at periodic intervals. While this training approach enables existing models to perform well in relatively stable environments, they struggle to adapt to dynamic settings, where customer preferences shift rapidly, especially in industries with volatile market conditions such as banking and telecom. In dynamic environments, data distribution can change significantly over short periods, preventing existing models from maintaining efficiency and leading to poor predictive performance, increased misclassification rates, and suboptimal decision-making by top executives, ultimately exacerbating customer churn. To address these limitations, this research proposes RCE, a real-time, continual learning-based ensemble learning model. RCE integrates an event-driven development approach for real-time churn analysis with a replay-based continual learning mechanism to adapt to evolving customer behaviors without catastrophic forgetting, and implements stacked ensemble learning for customer churn classification. Unlike existing models, RCE continuously processes streaming data, ensuring adaptability and generalization in fast-changing environments and providing instantaneous insights that enable decision-makers to respond swiftly to emerging risks, market fluctuations, and customer behavior changes. RCE is evaluated using the Churn Modelling benchmark dataset for European banks, achieving 95.65% accuracy; in dynamic environments, RCE accomplishes an average accuracy (ACC) of 86.75% and an average forgetting rate (FR) of 13.25% across tasks T_i. The results demonstrate that RCE outperforms existing models in predictive accuracy, adaptability, and robustness across multiple tasks, especially in dynamic environments. Finally, this research discusses the proposed model’s limitations and outlines directions for future improvements in real-time customer churn analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_38-Enhancing_Customer_Churn_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Topology Planning and Optimization of DC Distribution Network Based on Mixed Integer Programming and Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160537</link>
        <id>10.14569/IJACSA.2025.0160537</id>
        <doi>10.14569/IJACSA.2025.0160537</doi>
        <lastModDate>2025-05-31T09:35:21.7670000+00:00</lastModDate>
        
        <creator>Ran Cheng</creator>
        
        <creator>Chong Gao</creator>
        
        <creator>Hao Li</creator>
        
        <creator>Junxiao Zhang</creator>
        
        <creator>Ye Huang</creator>
        
        <subject>DC distribution network; topology planning; mixed integer programming; genetic algorithm; optimization effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>In the current situation of rapid development of the power industry, DC distribution network topology planning and optimization are of vital importance. This research studies the shortcomings of existing methods in terms of computational efficiency and optimization effect. Based on the real data of a medium-sized DC distribution network in a large city with 200 nodes and 350 lines, an innovative method combining mixed integer programming (MIP) and genetic algorithm (GA) is adopted. MIP is used to accurately describe physical constraints and optimization objectives, and GA efficiently searches for the best solution in the solution space with its global search capability. Experimental results show that the MIP-GA model has the lowest power transmission loss at different load levels. For example, at high load, it is 32% lower than the baseline, 16% lower than the MIP model, and 12.5% lower than the ACO model. It also performs best in terms of node voltage deviation, reliability, power quality and other indicators. Cost-benefit analysis shows that although the MIP-GA model has a relatively high investment cost for topology adjustment, it has the lowest annual power loss and maintenance cost, a reasonable total annual cost, a benefit-cost ratio of 1.5, and a payback period of only 3 years. Research has shown that this hybrid model has significant advantages in DC distribution network topology planning and optimization, and can effectively improve system performance and economic benefits.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_37-Topology_Planning_and_Optimization_of_DC_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survival Analysis and Machine Learning Models for Predicting Heart Failure Outcomes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160536</link>
        <id>10.14569/IJACSA.2025.0160536</id>
        <doi>10.14569/IJACSA.2025.0160536</doi>
        <lastModDate>2025-05-31T09:35:21.7330000+00:00</lastModDate>
        
        <creator>Naseem Mohammed ALQahtani</creator>
        
        <creator>Abdulmohsen Algarni</creator>
        
        <subject>Heart failure prediction; machine learning; cox proportional hazards model; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Heart failure is still one of the prominent causes of morbidity and mortality globally, and thus determining the principal factors influencing patient survival is crucial. Being able to predict survival is critical for optimizing patient treatment and management. The multifactorial nature of heart failure and the involvement of numerous clinical variables complicate the prediction of patient survival rates. This study utilizes the &quot;Heart Failure Clinical Records&quot; dataset to analyze and predict patient survival using two separate approaches: survival analysis and machine learning (ML) classification. Specifically, we employ the Cox Proportional Hazards Model to assess the influence of clinical variables like “age”, “serum creatinine”, and “ejection fraction” on survival durations. Additionally, machine learning classification models like K-Nearest Neighbors (KNN), Decision Trees (DT), and Random Forests (RF) are implemented to predict the binary response variable of survival (DEATH_EVENT). Data preprocessing is carried out using methods like feature scaling, imputation of missing values, and class balancing to improve model performance. Among the evaluated models, the Random Forest classifier, when integrated with feature selection derived from the Cox model, reached the best performance with 96.2% accuracy and an AUC-ROC of 0.987, outperforming all other approaches. The results indicate that integrating survival analysis with machine learning techniques is effective in predicting heart failure outcomes, providing valuable support for patient management and clinical decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_36-Survival_Analysis_and_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Detection and Forecasting of Influenza Epidemics Using a Hybrid ARIMA-GRU Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160535</link>
        <id>10.14569/IJACSA.2025.0160535</id>
        <doi>10.14569/IJACSA.2025.0160535</doi>
        <lastModDate>2025-05-31T09:35:21.7030000+00:00</lastModDate>
        
        <creator>Kabilan Annadurai</creator>
        
        <creator>Aanandha Saravanan</creator>
        
        <creator>S. Kayalvili</creator>
        
        <creator>Madhura K</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Inakollu Aswani</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Time-series analysis; gated recurrent unit; temporal patterns; influenza epidemic; auto regressive integrated moving average; early detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Early diagnosis and accurate epidemic prediction are essential in limiting the public health impact of influenza epidemics, because timely intervention can effectively curb both the spread of the disease and the strain on health services. Standard ARIMA models have proven their usefulness in short-term forecasting, particularly in stable contexts, but their inability to keep up with the complex, non-linear dynamics of disease spread makes them less capable of dealing with rapidly evolving outbreaks. This is especially the case when outbreaks are characterized by complicated seasonal trends and irregular peaks, which are challenging for ARIMA to predict by itself. To fill this deficit, this study presents a hybrid model that marries ARIMA&#39;s statistical strength in dealing with short-term trends with the deep learning strengths of Gated Recurrent Units (GRU), which specialize in detecting long-term dependencies and non-linear relationships in data. The WHO FluNet dataset, a trusted source of influenza surveillance, forms the foundation for training the model, with careful preprocessing conducted to normalize the data and eliminate missing values, providing high-quality input for precise predictions. By combining ARIMA&#39;s linear prediction strengths with GRU&#39;s sophisticated pattern detection, the hybrid model delivers a powerful solution that outperforms both standard ARIMA and other machine learning models, as evidenced by lower error rates on test metrics such as MAE, RMSE, and MAPE. The experimental findings validate that the ARIMA-GRU model not only enhances predictive performance but also increases the model&#39;s sensitivity to subtle trends, making it a valuable asset for early detection systems in public health. 
In the future, the incorporation of real-time environmental information such as temperature, humidity, and mobility patterns may further enhance the model&#39;s accuracy and responsiveness, providing more robust forecasting. Integrating healthcare infrastructure data, such as hospital capacity and the availability of medical resources, would also aid in developing a more complete epidemic management process. In total, the ARIMA-GRU hybridization is an effective and novel strategy for enhancing influenza surveillance, early outbreak detection, and epidemic control operations.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_35-Early_Detection_and_Forecasting_of_Influenza_Epidemics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified MobileNet-V2 Convolution Neural Network (CNN) for Character Identification of Surakarta Shadow Puppets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160534</link>
        <id>10.14569/IJACSA.2025.0160534</id>
        <doi>10.14569/IJACSA.2025.0160534</doi>
        <lastModDate>2025-05-31T09:35:21.6730000+00:00</lastModDate>
        
        <creator>Achmad Solichin</creator>
        
        <creator>Dwi Pebrianti</creator>
        
        <creator>Painem</creator>
        
        <creator>Sanding Riyanto</creator>
        
        <subject>Wayang kulit; characters identification; Convolution Neural Network (CNN); machine learning; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Shadow puppetry, known in Indonesian as “wayang kulit”, is one of Indonesia&#39;s native traditional arts that still exists to this day, and has been recognised by UNESCO since 2003. Wayang kulit is not just ordinary entertainment: it carries profound moral values, but is gradually being forgotten by the younger generation. To help the public recognize wayang kulit characters, a desktop-based application was developed using Canny edge detection for image extraction and a modified MobileNet-V2 CNN algorithm for character identification. The dataset used in this research was sourced from Google and Instagram, with 22 wayang kulit character names serving as classes. The identification results for 1,312 wayang kulit images (test data) using the classic CNN model yielded an accuracy of 50%, precision of 53%, and recall of 47%. Meanwhile, the modified MobileNet-V2 CNN model, called custom CNN, gives an accuracy of 92%, precision of 93%, and recall of 92%. These results show that the custom CNN has high performance, with few false positive predictions in detecting wayang kulit characters, and that the model is robust and reliable for the task of identifying them. Accordingly, the model can be applied to preserving and promoting traditional wayang kulit art by helping to catalog and identify characters, making it more accessible to a wider audience, including the younger generation.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_34-Modified_MobileNet_V2_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Effect Evaluation of a Reception System on Sales, Number of Customers, Hourly Productivity and Churn Based on Intervention Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160533</link>
        <id>10.14569/IJACSA.2025.0160533</id>
        <doi>10.14569/IJACSA.2025.0160533</doi>
        <lastModDate>2025-05-31T09:35:21.6400000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>Intervention time series analysis; causalimpact package; counterfactual prediction value; general machine learning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>We propose an AI-based method for evaluating sales, number of customers, and churn before and after the introduction of a reception system at a hair salon, based on intervention time series analysis. We used the CausalImpact software package for the intervention time series analysis. The problem with this method is that the prediction accuracy is insufficient, and the estimated intervention effects are not very valid. We considered it necessary to verify the accuracy of the counterfactual prediction for the period after the system was introduced by using data from before the introduction, where ground-truth data exist, and devised a method to accurately predict the outcome variable for the pre-introduction period. Specifically, we introduce two learning models, as in the development workflow of a general machine learning model: one for learning and the other for accuracy verification. However, since CausalImpact does not include a function to verify prediction accuracy, separate code was prepared for that purpose to improve the prediction accuracy. As a result, we were able to confirm that the prediction accuracy was almost acceptable.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_33-Method_for_Effect_Evaluation_of_a_Reception_System_on_Sales.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Evaluation of a Forensic-Ready Framework for Smart Classrooms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160532</link>
        <id>10.14569/IJACSA.2025.0160532</id>
        <doi>10.14569/IJACSA.2025.0160532</doi>
        <lastModDate>2025-05-31T09:35:21.6270000+00:00</lastModDate>
        
        <creator>Henry Rossi Andrian</creator>
        
        <creator>Suhardi</creator>
        
        <creator>I Gusti Bagus Baskara Nugraha</creator>
        
        <subject>Forensic-ready system; smart classroom; threat estimation; risk profile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The rise of cyber threats in educational environments underscores the need for forensic-ready systems tailored to digital learning platforms like smart classrooms. This study proposes a proactive forensic-ready framework that integrates threat estimation, risk profiling, data identification, and collection management into a continuous readiness cycle. Blockchain technology ensures log immutability, while LMS APIs enable systematic evidence capture with minimal disruption to learning processes. Monte Carlo Simulation validates the framework’s performance across key metrics. Results show a log capture success rate of 77.27%, with high accuracy for structured attacks such as SQL Injection. The system maintains operational efficiency, adding only 15% average CPU overhead. Forensic logs are securely stored in JSON format on a blockchain ledger, ensuring both integrity and accessibility. However, reduced effectiveness for complex attacks like Remote Code Execution and occasional retrieval delays under heavy loads highlight areas for improvement. Future enhancements will focus on expanding threat coverage and optimizing log retrieval. By addressing vulnerabilities unique to smart classrooms, such as unauthorized access and data manipulation, this study introduces a scalable, domain-specific solution for enhancing forensic readiness and cybersecurity in educational ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_32-Design_and_Evaluation_of_a_Forensic_Ready_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Model for Speech Emotion Recognition on RAVDESS Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160531</link>
        <id>10.14569/IJACSA.2025.0160531</id>
        <doi>10.14569/IJACSA.2025.0160531</doi>
        <lastModDate>2025-05-31T09:35:21.5800000+00:00</lastModDate>
        
        <creator>Zhongliang Wei</creator>
        
        <creator>Chang Ge</creator>
        
        <creator>Chang Su</creator>
        
        <creator>Ruofan Chen</creator>
        
        <creator>Jing Sun</creator>
        
        <subject>Speech emotion recognition; deep learning; RAVDESS dataset; multi-feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Speech Emotion Recognition (SER), a pivotal area in artificial intelligence, is dedicated to analyzing and interpreting emotional information in human speech. To address the challenges of capturing both local acoustic features and long-range dependencies in emotional speech, this study proposes a novel parallel neural network architecture that integrates Convolutional Neural Networks (CNNs) and Transformer encoders. To integrate the distinct feature representations captured by the two branches, a cross-attention mechanism is employed for feature-level fusion, enabling deep-level semantic interaction and enhancing the model’s emotion discrimination capacity. To improve model generalization and robustness, a systematic preprocessing pipeline is constructed, including signal normalization, data segmentation, additive white Gaussian noise (AWGN) augmentation with varying SNR levels, and Mel spectrogram feature extraction. A grid search strategy is adopted to optimize key hyperparameters such as learning rate, dropout rate, and batch size. Extensive experiments conducted on the RAVDESS dataset, consisting of eight emotional categories, demonstrate that our model achieves an overall accuracy of 80.00%, surpassing existing methods such as CNN-based (71.61%), multilingual CNN (77.60%), bimodal LSTM-attention (65.42%), and unsupervised feature learning (69.06%) models. Further analyses reveal its robustness across different gender groups and emotional intensities. Such outcomes highlight the architectural soundness of our model and underscore its potential to inform subsequent developments in affective speech processing.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_31-A_Deep_Learning_Model_for_Speech_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review on Generative AI: Ethical Challenges and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160530</link>
        <id>10.14569/IJACSA.2025.0160530</id>
        <doi>10.14569/IJACSA.2025.0160530</doi>
        <lastModDate>2025-05-31T09:35:21.5470000+00:00</lastModDate>
        
        <creator>Feliks Prasepta Sejahtera Surbakti</creator>
        
        <subject>Generative Artificial Intelligence (GAI); AI ethics; systematic literature review; bias; misinformation; data privacy; accountability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Generative Artificial Intelligence (GAI) has rapidly emerged as a transformative technology capable of autonomously creating human-like content across domains such as text, images, code, and media. While GAI offers significant benefits in fields like education, healthcare, and creative industries, it also introduces complex ethical challenges. This study aims to systematically review and synthesize the ethical landscape of GAI by analyzing 112 peer-reviewed journal articles published between 2021 and 2025. Using a Systematic Literature Review (SLR) methodology, the study identifies five primary ethical challenges—bias and discrimination, misinformation and deepfakes, data privacy violations, intellectual property issues, and accountability and explainability. In addition, it highlights emerging opportunities for ethical innovation, such as responsible design, inclusive governance, and interdisciplinary collaboration. The findings reveal a fragmented research landscape with limited empirical validation and inconsistent ethical frameworks. This review contributes to the field by mapping cross-sectoral patterns, identifying critical research gaps, and offering practical directions for researchers, developers, and policymakers to promote the responsible development of generative AI.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_30-Systematic_Literature_Review_on_Generative_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Cyber Violence and Fostering Empathy Through VRN4RCV Model: Expert Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160529</link>
        <id>10.14569/IJACSA.2025.0160529</id>
        <doi>10.14569/IJACSA.2025.0160529</doi>
        <lastModDate>2025-05-31T09:35:21.5000000+00:00</lastModDate>
        
        <creator>Wu Qiong</creator>
        
        <creator>Nadia Diyana Binti Mohd Muhaiyuddin</creator>
        
        <creator>Azliza Binti Othman</creator>
        
        <subject>Cyber violence; VR news; empathy; conceptual model; expert review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Cyber violence has become increasingly prevalent, necessitating innovative intervention strategies. VR technology, with its immersive and empathetic capabilities, provides a unique opportunity for influencing behavioral change among perpetrators of cyber violence. This study proposes a conceptual design model for VR news, aimed at fostering empathy through immersive experiences to reduce cyber violence. The model was validated through three cycles of expert review. Expert feedback highlighted the model’s relevance and applicability while offering constructive suggestions for refinement. The findings indicate that this conceptual model provides a practical guide for designing VR news that effectively addresses the issue of cyber violence. Future research will include prototype testing and empirical evaluation to assess the model’s impact on behavioral change and empathy enhancement.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_29-Reducing_Cyber_Violence_and_Fostering_Empathy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instance Segmentation Method Based on DPA-SOLOV2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160528</link>
        <id>10.14569/IJACSA.2025.0160528</id>
        <doi>10.14569/IJACSA.2025.0160528</doi>
        <lastModDate>2025-05-31T09:35:21.4530000+00:00</lastModDate>
        
        <creator>Yuyue Feng</creator>
        
        <creator>Liqun Ma</creator>
        
        <creator>Yinbao Xie</creator>
        
        <creator>Zhijian Qu</creator>
        
        <subject>Instance segmentation; segmenting objects by locations V2; deformable convolutional networks; path aggregation feature pyramid network; insulator dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>To solve the problems of missed detections and segmentation errors in instance segmentation models, we propose an instance segmentation approach, DPA-SOLOV2, based on the improved segmenting objects by locations V2 (SOLO V2). Firstly, DPA-SOLOV2 introduces deformable convolutional networks (DCN) into the feature extraction network ResNet50. By freely sampling points to convolve features of any shape, the network can extract feature information more effectively. Secondly, DPA-SOLOV2 uses the path aggregation feature pyramid network (PAFPN) feature fusion method to replace the feature pyramid. By adding a bottom-up path, it can better transmit the location information of features and also enhance the information interaction between features. To prove the effectiveness of the improved model, we conduct experiments on two public datasets, COCO and CVPPP. The experimental results show that the accuracy of the improved model on the COCO dataset is 1.3% higher than that of the original model, and the accuracy on the CVPPP dataset is 1.5% higher than that before the improvement. Finally, the improved model is applied to the insulator dataset, where it accurately segments the umbrella skirt of insulators and outperforms other mainstream instance segmentation algorithms such as Yolact++.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_28-Instance_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing UAV Flight Data Using Lightweight Cryptography and Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160527</link>
        <id>10.14569/IJACSA.2025.0160527</id>
        <doi>10.14569/IJACSA.2025.0160527</doi>
        <lastModDate>2025-05-31T09:35:21.4230000+00:00</lastModDate>
        
        <creator>Orkhan Valikhanli</creator>
        
        <creator>Fargana Abdullayeva</creator>
        
        <subject>UAV; GCS; cyberattack; cryptography; steganography; flight data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The popularity of Unmanned Aerial Vehicles (UAVs) in various fields has been rising recently. UAV technology is being invested in by numerous industries in order to cut expenses and increase efficiency. Therefore, UAVs are predicted to become much more important in the future. As UAVs become more popular, the risk of cyberattacks on them is also growing. One type of cyberattack involves the exposure of important flight data. This, in turn, can lead to serious problems. To address this problem, a new method based on lightweight cryptography and steganography is proposed in this work. The proposed method ensures multilayer protection of important UAV flight data. This is achieved by two layers of encryption using a polyalphabetic substitution cipher and ChaCha20-Poly1305 authenticated encryption, as well as randomized least significant bit (LSB) steganography. Most importantly, through this work, a balance is kept between security and performance. Additionally, all experiments are carried out on real devices, making the proposed method more practical. The proposed method is evaluated using MSE, PSNR, and SSIM metrics. Even with a capacity of 8000 bytes, it achieves an MSE of 0.04, a PSNR of 62, and an SSIM of 0.9998. It is then compared to existing methods. The results show better practical use, stronger security, and higher overall performance.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_27-Securing_UAV_Flight_Data_Using_Lightweight_Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Linguistic Approach for Holistic User Behaviors Modeling Through Opinionated Data of Virtual Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160526</link>
        <id>10.14569/IJACSA.2025.0160526</id>
        <doi>10.14569/IJACSA.2025.0160526</doi>
        <lastModDate>2025-05-31T09:35:21.3770000+00:00</lastModDate>
        
        <creator>Kashif Asrar</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <subject>Roman Urdu; positive and negative statements detection; sentiment analysis; BERT Model; Long Short-Term Memory (LSTM) networks; Pakistani social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This research aims to establish a computational linguistic model for the detection of positive and negative statements, synthesized for the Pakistani microblogging site Twitter, particularly in the Roman Urdu language. With increased freedom of speech, people express their sentiments towards an event or a person in positive, negative, neutral, and sometimes sarcastic tones, especially on social media platforms. Pakistani social media users, like those in other multilingual countries, express their opinions through code switching and code mixing. Their language lacks correct grammar, and its informal and nonstandard writing, inconsistent spelling, and alternative analogies make it difficult for computational linguists to mine the data for computational research. To overcome this challenge, the study employed web scraping tools to retrieve a large number of Roman Urdu tweets. To establish a new corpus of positive and negative statements, the text data was annotated through sentiment analysis carried out using TextBlob and Bidirectional Encoder Representations from Transformers (BERT). Addressing this issue makes it possible to close the gap evident in models that do not recognize Roman Urdu as a form of language. The findings are useful for regulatory bodies and researchers since they offer a culturally and linguistically appropriate database and model targeting resource constraints and key performance metrics. This helps in content moderation and in making policies regarding technological advancement within Pakistan.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_26-Computational_Linguistic_Approach_for_Holistic_User.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Binary–Source Code Matching Based on Decompilation Techniques and Graph Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160525</link>
        <id>10.14569/IJACSA.2025.0160525</id>
        <doi>10.14569/IJACSA.2025.0160525</doi>
        <lastModDate>2025-05-31T09:35:21.3430000+00:00</lastModDate>
        
        <creator>Ghader Aljebreen</creator>
        
        <creator>Reem Alnanih</creator>
        
        <creator>Fathy Eassa</creator>
        
        <creator>Maher Khemakhem</creator>
        
        <creator>Kamal Jambi</creator>
        
        <creator>Muhammed Usman Ashraf</creator>
        
        <subject>Binary–source code matching; call graphs; code clone detection; control flow graphs; decompiler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Recent approaches to binary–source code matching often operate at the intermediate representation (IR) level, with some applying the matching process at the binary level by compiling the source code to binary and then matching it directly with the binary code. Others, though less common, perform matching at the decompiler-generated pseudo-code level by first decompiling the binary code into pseudo-code and then comparing it with the source code. However, all these approaches are limited by the loss of semantic information in the original source code and the introduction of noise during compilation and decompilation, making accurate matching challenging and often requiring specialized expertise. To address these limitations, this study introduces a system for binary–source code matching based on decompilation techniques and Graph analysis (BSMDG) that matches binary code with source code at the source code level. Our method utilizes the Ghidra decompiler in conjunction with a custom-built transpiler to reconstruct high-level C++ source code from binary executables. Subsequently, call graphs (CGs) and control flow graphs (CFGs) are generated for both the original and translated code to evaluate their structural and semantic similarities. To evaluate our system, we used a curated dataset of C++ source code and corresponding binary files collected from the AtCoder website for training and testing. Additionally, a case study was conducted using the widely recognized POJ-104 benchmark dataset to assess the system&#39;s generalizability. The results demonstrate the effectiveness of combining decompilation with graph-based analysis, with our system achieving 90% accuracy on POJ-104, highlighting its potential in code clone detection, vulnerability identification, and reverse engineering tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_25-Binary_Source_Code_Matching_Based_on_Decompilation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating ISA Optimised Random Forest Methods for Building Applications in Digital Accounting Talent Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160524</link>
        <id>10.14569/IJACSA.2025.0160524</id>
        <doi>10.14569/IJACSA.2025.0160524</doi>
        <lastModDate>2025-05-31T09:35:21.2970000+00:00</lastModDate>
        
        <creator>Yu ZHOU</creator>
        
        <subject>Integrated learning; internal renovation algorithms; random forests; digitalisation of applied undergraduate institutions; accounting talent assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Digital accounting talent assessment in applied undergraduate colleges and universities is an urgent problem in the construction of talent assessment systems. To solve this problem, a digital accounting talent assessment method based on an improved machine learning algorithm is proposed. Firstly, the digital accounting talent assessment problem in applied undergraduate colleges is analysed, digital accounting talent assessment indicators are extracted, and the index system is constructed; secondly, a digital accounting talent assessment model for applied undergraduate colleges, based on the integrated ISA-optimized random forest algorithm, is constructed by combining integrated learning technology, an intelligent optimization algorithm, and random forests; lastly, digital accounting talent data from applied undergraduate colleges are used to analyse the model. The results show that, compared with other algorithms, Ada-ISA-RF improves the accuracy of digital accounting talent assessment in applied undergraduate colleges and universities by 3.06 per cent and 7.04 per cent, respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_24-Integrating_ISA_Optimised_Random_Forest_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Path Planning Model Based on Improved A* Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160523</link>
        <id>10.14569/IJACSA.2025.0160523</id>
        <doi>10.14569/IJACSA.2025.0160523</doi>
        <lastModDate>2025-05-31T09:35:21.2500000+00:00</lastModDate>
        
        <creator>Jing Xie</creator>
        
        <creator>Chunyuan Xu</creator>
        
        <creator>Qianxi Yang</creator>
        
        <subject>Robot; path; planning; A* algorithm; artificial potential field method; SA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Robot path planning is a key technology for achieving autonomous navigation and efficient operation of robots. In order to improve the autonomous navigation capability of mobile robots, a global path planning model based on an improved A* algorithm and a local path planning model based on an improved artificial potential field method were designed. The results showed that the numbers of turns in the optimal paths under the improved A* algorithm were 8, 5, 9, and 5, respectively. The improved artificial potential field method achieved a maximum planning time of 0.17s and a minimum planning time of 0.11s. The designed global and local path planning models for mobile robots perform well and can provide technical support for improving the autonomous navigation capability of mobile robots in industrial manufacturing.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_23-Robot_Path_Planning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Structure Query Language Injection (SQLi) Detection Using Deep Q-Networks: A Reinforcement Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160522</link>
        <id>10.14569/IJACSA.2025.0160522</id>
        <doi>10.14569/IJACSA.2025.0160522</doi>
        <lastModDate>2025-05-31T09:35:21.2030000+00:00</lastModDate>
        
        <creator>Carlo Jude P. Abuda</creator>
        
        <creator>Cristina E. Dumdumaya</creator>
        
        <subject>Adaptive systems; cybersecurity; deep q-network; intrusion detection; query classification; reinforcement learning; SQL injection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Structured Query Language injection (SQLi) remains one of the most pervasive and dangerous threats to web-based systems, capable of compromising databases and bypassing authentication protocols. Despite advancements in machine learning for cybersecurity, many models rely on static detection rules or require extensive labeled datasets, making them less adaptable to evolving threats. Addressing this limitation, the present study aimed to design, implement, and evaluate a Deep Q-Network (DQN) model capable of detecting SQLi attacks using reinforcement learning. The research employed a Design and Development Research (DDR) methodology, supported by an evolutionary prototyping framework, and utilized a dataset of 30,919 labeled SQL queries, balanced between malicious and safe inputs. Preprocessing involved query normalization and vector encoding into fixed-length ASCII representations. The DQN model was trained over 2,000 episodes, using experience replay and an epsilon-greedy strategy. Key evaluation metrics—accuracy, cumulative reward, and epsilon decay—showed performance improvements, with accuracy increasing from 52% to 82% and stabilizing between 65% and 73% in later episodes. The agent demonstrated consistent adaptability by successfully generalizing across various injection patterns. This outcome suggests that reinforcement learning, particularly using DQN, provides a viable alternative to traditional models, with superior resilience and dynamic learning capabilities. The model&#39;s convergence trend highlights its practical application in real-time SQLi detection systems, contributing significantly to cybersecurity measures for database-driven applications.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_22-Hybrid_Structure_Query_Language_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DamageNet: A Dilated Convolution Feature Pyramid Network Mask R‑CNN for Automated Car Damage Detection and Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160521</link>
        <id>10.14569/IJACSA.2025.0160521</id>
        <doi>10.14569/IJACSA.2025.0160521</doi>
        <lastModDate>2025-05-31T09:35:21.1730000+00:00</lastModDate>
        
        <creator>Nazbek Katayev</creator>
        
        <creator>Zhanna Yessengaliyeva</creator>
        
        <creator>Zhazira Kozhamkulova</creator>
        
        <creator>Zhanel Bakirova</creator>
        
        <creator>Assylzat Abuova</creator>
        
        <creator>Gulbagila Kuandikova</creator>
        
        <subject>Car damage detection; instance segmentation; dilated convolution; feature pyramid network; Mask R‑CNN; deep learning; vehicle damage assessment; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Automated and precise assessment of vehicle damage is critical for modern insurance processing, accident analysis, and autonomous maintenance systems. In this work, we introduce DamageNet, a unified deep instance segmentation framework that embeds a multi‑rate dilated‑convolution context module within a Feature Pyramid Network (FPN) backbone and couples it with a Region Proposal Network (RPN), RoI‑Align, and parallel heads for classification, bounding‑box regression, and pixel‑level mask prediction. Evaluated on the large‑scale VehiDE dataset comprising 5,200 high‑resolution images annotated for dents, scratches, and broken glass, DamageNet achieves a mean Average Precision (mAP) of 85.7% for damage localization and a mean Intersection over Union (mIoU) of 82.3% for segmentation, outperforming baseline Mask R‑CNN by 6.2 and 7.8 percentage points, respectively. Ablation studies confirm that the dilated‑convolution module, multi‑scale fusion in the FPN, and post‑processing refinements each contribute substantially to segmentation fidelity. Qualitative results demonstrate robust delineation of both subtle scratch lines and extensive panel deformations under diverse lighting and occlusion conditions. Although the integration of atrous convolutions introduces a modest inference overhead, DamageNet offers a significant advancement in end‑to‑end vehicle damage analysis. Future extensions will investigate lightweight dilation approximations, dynamic rate selection, and semi‑supervised learning strategies to further enhance processing speed and generalization to additional damage modalities.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_21-DamageNet_A_Dilated_Convolution_Feature_Pyramid_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification and Segmentation Using Deep Learning on Ultrasound Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160520</link>
        <id>10.14569/IJACSA.2025.0160520</id>
        <doi>10.14569/IJACSA.2025.0160520</doi>
        <lastModDate>2025-05-31T09:35:21.1570000+00:00</lastModDate>
        
        <creator>Doha Saad Dajam</creator>
        
        <creator>Ayman Qahmash</creator>
        
        <subject>Breast cancer; Convolutional Neural Networks (CNNs); tumor segmentation; MobileNet; dice coefficient; BUSI Dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Breast cancer continues to pose a major health challenge for women worldwide, highlighting the critical role of accurate and early detection methods in improving patient outcomes. Ultrasound imaging, a commonly used and non-invasive method, is especially useful for identifying tissue irregularities in younger women or individuals with dense breast tissue. However, accurate interpretation of ultrasound images is challenging due to variability in human analysis and limitations in existing deep learning models, which often struggle with small, imbalanced datasets and lack generalizability compared to models trained on natural images. To tackle these challenges, we introduce a dual deep learning framework that combines image classification and tumor segmentation using breast ultrasound images. The classification component evaluates four models (Custom CNN, VGG16, InceptionV3, and MobileNet) while the segmentation module employs a MobileNet-optimized U-Net architecture for precise boundary localization. We validate our approach using the publicly available BUSI dataset, achieving a 98% classification accuracy with MobileNet and a Dice coefficient of 0.8959 for segmentation, indicating high model reliability and spatial agreement. Our method demonstrates a robust, efficient solution to automate breast cancer detection and localization, with potential to support radiologists in early and accurate diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_20-Breast_Cancer_Classification_and_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Consensus for Wireless Sensor Networks: Enhancing Convergence in Neighbor-Influenced Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160519</link>
        <id>10.14569/IJACSA.2025.0160519</id>
        <doi>10.14569/IJACSA.2025.0160519</doi>
        <lastModDate>2025-05-31T09:35:21.1270000+00:00</lastModDate>
        
        <creator>Rawad Abdulghafor</creator>
        
        <creator>Yousuf Al Husaini</creator>
        
        <creator>Abdullah Said AL-Aamri</creator>
        
        <creator>Mohammad Abrar</creator>
        
        <creator>Alaa A. K. Ismaeel</creator>
        
        <creator>Mohammed Abdulla Salim Al Husaini</creator>
        
        <subject>Fractional power; consensus; WSNs; NIAM; NIFFAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Wireless sensor networks (WSNs) are a modern technology that has revolutionized many industries thanks to the ability of a group of independent sensors to cooperate in collecting and analyzing information from surrounding environments and improving the performance of complex systems. Sensor clustering and agreement have wide applications in daily life, ranging from environmental monitoring and industrial control to healthcare and smart cities. However, WSN systems face many challenges, one of the most prominent being the achievement of agreement between different sensors on a common state. Meeting this challenge is essential to enable successful cooperation between sensors in complex systems. Much previous research and many models have been developed to address the sensor agreement problem, such as the Neighbor-Influenced Timestep Consensus Model (NITCM), which was presented as a framework for achieving agreement effectively. In this paper, we propose a new technique to improve this model by using fractional powers in the updating process, leading to the development of the Neighbor-Influenced Fractional Timestep Consensus Model (NIFTCM). This technique achieves faster convergence between sensors, improving the efficiency of reaching agreement over previous techniques. This development aims to enhance the speed and stability of consensus processes in wireless sensor networks and make them more suitable for time-sensitive applications.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_19-Nonlinear_Consensus_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Research Trends in Distributed Acoustic Sensing with Machine Learning and Deep Learning: A Bibliometric Analysis of Themes and Emerging Topics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160518</link>
        <id>10.14569/IJACSA.2025.0160518</id>
        <doi>10.14569/IJACSA.2025.0160518</doi>
        <lastModDate>2025-05-31T09:35:21.0800000+00:00</lastModDate>
        
        <creator>Nor Farisha Muhamad Krishnan</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <subject>Machine learning; deep learning; distributed acoustic sensing; bibliometric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This paper explores the emerging research trends in Distributed Acoustic Sensing (DAS) with the integration of Machine Learning and Deep Learning technologies. DAS has diverse applications, including subsurface seismic monitoring, pipeline surveillance, and natural disaster detection. Using the Scopus database, 323 documents published between 2011 and 2023 were analysed. Through a comprehensive bibliometric analysis using the “bibliometrix” R package, the study aims to document the advancement in DAS techniques over the last decade, highlighting the publication patterns, key contributors, and frequently explored themes. The analysis reveals a steady increase in research output, with significant contributions from China and the United States. Core research areas identified include seismic monitoring, pipeline security, and infrastructure health monitoring. Additionally, the paper examines the impact of key publications, influential authors, and prolific research institutions. The findings provide valuable insights for both academic and industrial stakeholders, underscoring the potential for future innovations in DAS applications and helping to identify potential research gaps.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_18-Exploring_Research_Trends_in_Distributed_Acoustic_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Bidirectional LSTM for Sentiment Analysis of Learners’ Posts in MOOCs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160517</link>
        <id>10.14569/IJACSA.2025.0160517</id>
        <doi>10.14569/IJACSA.2025.0160517</doi>
        <lastModDate>2025-05-31T09:35:21.0470000+00:00</lastModDate>
        
        <creator>Chakir Fri</creator>
        
        <creator>Rachid Elouahbi</creator>
        
        <creator>Youssef Taki</creator>
        
        <creator>Ahmed Remaida</creator>
        
        <subject>MOOCs; Sentiment analysis; deep learning; Bidirectional LSTM; data augmentation; regularization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Massive Open Online Courses (MOOCs) have transformed digital learning, leading to vast amounts of learner-generated content that reflect user experience and engagement. Accurately classifying sentiment from this content is essential for improving course quality, but remains challenging due to subtle linguistic variation and contextual ambiguity. This study proposes a sentiment analysis approach based on an enhanced Bidirectional Long Short-Term Memory (LSTM) model. The enhancements include the integration of data augmentation and regularization techniques to address overfitting and improve generalization. The model was trained and evaluated on a dataset of 29,604 learner discussion posts from Stanford University MOOCs. Experimental results show that the proposed model achieves an accuracy of 88.54% in classifying sentiments into positive, negative, and neutral classes. These results suggest that the enhanced LSTM model offers a reliable solution for large-scale sentiment classification in online education, with potential applications in learner support, curriculum design, and personalized feedback.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_17-Enhanced_Bidirectional_LSTM_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pet Cat Home Design Evaluation System: Based On Grounded Theory-CRITIC-TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160516</link>
        <id>10.14569/IJACSA.2025.0160516</id>
        <doi>10.14569/IJACSA.2025.0160516</doi>
        <lastModDate>2025-05-31T09:35:21.0170000+00:00</lastModDate>
        
        <creator>Yuzhe Qi</creator>
        
        <creator>Hengwang Zhang</creator>
        
        <creator>Yaping Liu</creator>
        
        <subject>Grounded theory; CRITIC; TOPSIS; design evaluation; pet home</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>As pet cats assume an increasingly significant role in households, the variety of pet-cat home products on the market has proliferated. However, existing studies primarily focus on qualitative assessments of individual product functions or user experiences, and lack a systematic evaluation framework that combines in-depth exploration of user needs with quantitative analysis. To address this research gap and with the objectives of enhancing user satisfaction and guiding product development, this study constructs a user-needs–based evaluation framework for pet-cat home design. Semi-structured interviews with 12 pet-cat owners were conducted and analyzed via Grounded Theory to elicit four core requirements—Enhancing Pet Life Quality (A1), Ease of Cleaning and Maintenance (A2), Aesthetic Appeal (A3), and Safety and Reliability (A4)—and thirteen primary requirement elements. The CRITIC method was then applied to determine the weights of these dimensions (A1 = 0.30, A2 = 0.28, A3 = 0.27, A4 = 0.16). Four representative market products were selected and ranked using the TOPSIS method based on their proximity to the ideal and negative-ideal solutions, quantitatively evaluating their relative merits. Results indicate that pet owners prioritize Enhancing Pet Life Quality and Ease of Cleaning and Maintenance (combined weight = 0.58), providing focused guidance for designers on spatial layout and material selection. Aesthetic Appeal and Safety and Reliability also remain critical, pointing to specific optimization directions for product appearance and structural integrity. This study not only fills a methodological gap in pet-cat home design evaluation but also offers a practical model for weighting user needs and selecting optimal design solutions, thereby contributing to the standardization and refinement of pet home products.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_16-Pet_Cat_Home_Design_Evaluation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Algorithm of Expressway Maintenance Scheme Based on Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160515</link>
        <id>10.14569/IJACSA.2025.0160515</id>
        <doi>10.14569/IJACSA.2025.0160515</doi>
        <lastModDate>2025-05-31T09:35:20.9530000+00:00</lastModDate>
        
        <creator>Yushu Zhu</creator>
        
        <creator>Xingwang Liu</creator>
        
        <creator>Fengshuang Zhang</creator>
        
        <creator>Kashan Khan</creator>
        
        <creator>Yang Chen</creator>
        
        <creator>Runqi Liu</creator>
        
        <creator>Qiang He</creator>
        
        <subject>Genetic algorithm (GA); expressway; scheme optimization; MATLAB; program development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The genetic algorithm (GA), characterized by parallelism and global optimization capabilities, is well-suited for solving optimization problems related to expressway maintenance schemes. In this study, we improved GA operators and algorithm parameters within the existing maintenance scheme optimization model, thereby enhancing the operational efficiency of the GA. Building on this foundation, an optimization algorithm for expressway maintenance schemes was developed. Subsequently, MATLAB was employed to program the algorithm and solve the expressway maintenance scheme problem. When compared with the solution results in the reference, the proposed approach achieved a reduction of approximately 3.6% in maintenance costs and an improvement of about 47% in operation speed, verifying the algorithm&#39;s reliability and effectiveness. Finally, visualization of the algorithm program was enabled using MATLAB App Designer and MATLAB Compiler. This method can be popularized and applied in aspects such as expressway maintenance decision-making and optimization of building maintenance schemes.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_15-Optimal_Algorithm_of_Expressway_Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Detection and Tracking with YOLO and SORT Tracking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160514</link>
        <id>10.14569/IJACSA.2025.0160514</id>
        <doi>10.14569/IJACSA.2025.0160514</doi>
        <lastModDate>2025-05-31T09:35:20.9230000+00:00</lastModDate>
        
        <creator>Tanveer Kader</creator>
        
        <creator>Ahmad Fakhri Ab. Nasir</creator>
        
        <creator>M. Zulfahmi Toh</creator>
        
        <creator>Muhammad Nur Aiman Shapiee</creator>
        
        <creator>Amir Fakarullsroq Abdul Razak</creator>
        
        <subject>Human tracking; multiple object tracking; tracking-by-detection; you only look once (YOLO); simple online and realtime tracking (SORT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Human tracking is typically performed on publicly available, well-annotated datasets, since developing a new dataset is a laborious process that is often avoided. Such datasets are ideal for training because they yield higher tracking accuracy. This study performs human tracking on manually recorded videos using optimized detectors under the tracking-by-detection framework. The recorded videos were used to develop a dataset comprising more than 8k image sequences. Both indoor and outdoor scenarios were chosen to capture the varied lighting conditions that make tracking difficult. All image frames are labelled with bounding boxes for humans, and the dataset follows the MOT15 dataset structure. A unique annotation process combining manual annotation with predictions from pretrained models reduced the annotation labor by almost 80%. Different sizes of the You Only Look Once (YOLO) detection model (n/s/m) were trained on the training set, focusing on humans, and coupled with two of the most popular tracking algorithms: Simple Online and Realtime Tracking (SORT) and DeepSORT. The YOLOv8 and YOLO11 models were optimized with suitable hyperparameter values before tracking with SORT and DeepSORT, and results were observed at different confidence and Intersection over Union (IoU) threshold values. This study finds a proportional relationship between detection-model optimization and tracking accuracy. YOLO11m with the DeepSORT tracker performed best on the test data with 74% Multiple Object Tracking Accuracy (MOTA), and the other optimized YOLO models also tend to perform better with the trackers than the unoptimized ones.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_14-Human_Detection_and_Tracking_with_YOLO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Landscape of 6G Wireless Communication Technology: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160513</link>
        <id>10.14569/IJACSA.2025.0160513</id>
        <doi>10.14569/IJACSA.2025.0160513</doi>
        <lastModDate>2025-05-31T09:35:20.8770000+00:00</lastModDate>
        
        <creator>Nur Arzilawati Md Yunus</creator>
        
        <creator>Zurina Mohd Hanapi</creator>
        
        <creator>Shafinah Kamarudin</creator>
        
        <creator>Aindurar Rania Balqis Mohd Sufian</creator>
        
        <creator>Fazlina Mohd Ali</creator>
        
        <creator>Nabilah Ripin</creator>
        
        <creator>Hazrina Sofian</creator>
        
        <subject>6G; wireless communication technology; artificial intelligence; connectivity; edge computing; Internet of Things (IoT); quantum computing; terahertz communication; Ultra-Reliable Low Latency Communication (URLLC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The advent of 6G technology promises to revolutionize the landscape of connectivity, ushering in an era of unprecedented speed, reliability, and integration of emerging technologies. This comprehensive review delves into the evolving domain of 6G wireless communication technology, synthesizing current research, trends, and projections to provide a holistic understanding of its potential impact and challenges. Beginning with an overview of the evolution from previous generations, the review examines the foundational principles, key features, and technological advancements envisioned for 6G networks. It explores concepts such as terahertz communication, ultra-reliable low latency communication (URLLC), intelligent surfaces, and holographic beamforming, elucidating their potential to redefine communication paradigms. The integration of artificial intelligence (AI) and edge computing is highlighted as pivotal in enabling intelligent, adaptive, and efficient network operations. Furthermore, the review investigates how 6G is expected to support massive-scale Internet of Things (IoT) deployments and considers the future role of quantum computing in enhancing security and processing capabilities. Regulatory and standardization frameworks essential for the development and deployment of 6G networks are scrutinized, alongside addressing issues concerning security, privacy, and sustainability. By synthesizing insights from academia, industry, and standardization bodies, this review provides a roadmap for researchers, policymakers, and industry stakeholders to navigate the evolving landscape of 6G and realize its transformative potential in shaping the future of global connectivity.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_13-Exploring_the_Landscape_of_6G_Wireless_Communication_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Large Language Model Versus Human Performance in Islamophobia Dataset Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160512</link>
        <id>10.14569/IJACSA.2025.0160512</id>
        <doi>10.14569/IJACSA.2025.0160512</doi>
        <lastModDate>2025-05-31T09:35:20.8430000+00:00</lastModDate>
        
        <creator>Rafizah Daud</creator>
        
        <creator>Nurlida Basir</creator>
        
        <creator>Nur Fatin Nabila Mohd Rafei Heng</creator>
        
        <creator>Meor Mohd Shahrulnizam Meor Sepli</creator>
        
        <creator>Melinda Melinda</creator>
        
        <subject>Large Language Model; generative AI; human intelligence; automatic data annotation; sentiment analysis; islamophobia; ChatGPT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Manual annotation of large datasets is a time-consuming and resource-intensive process. Hiring annotators or outsourcing to specialized platforms can be costly, particularly for datasets requiring domain-specific expertise. Additionally, human annotation may introduce inconsistencies, especially when dealing with complex or ambiguous data, as interpretations can vary among annotators. Large Language Models (LLMs) offer a promising alternative by automating data annotation, potentially improving scalability and consistency. This study evaluates the performance of ChatGPT compared to human annotators in annotating an Islamophobia dataset. The dataset consists of fifty tweets from the X platform using the keywords Islam, Muslim, hijab, stopislam, jihadist, extremist, and terrorism. Human annotators, including experts in Islamic studies, linguistics, and clinical psychology, serve as a benchmark for accuracy. Cohen’s Kappa was used to measure agreement between LLM and human annotators. The results show substantial agreement between LLM and language experts (0.653) and clinical psychologists (0.638), while agreement with Islamic studies experts was fair (0.353). Overall, LLM demonstrated a substantial agreement (0.632) with all human annotators. ChatGPT achieved an overall accuracy of 82%, a recall of 69.5%, an F1-score of 77.2%, and a precision of 88%, indicating strong effectiveness in identifying Islamophobia-related content. The findings suggest that LLMs can effectively detect Islamophobic content and serve as valuable tools for preliminary screenings or as complementary aids to human annotation. Through this analysis, the study seeks to understand the strengths and limitations of LLMs in handling nuanced and culturally sensitive data, contributing to the broader discussion on the integration of generative AI in annotation tasks. While LLMs show great potential in sentiment analysis, challenges remain in interpreting context-specific nuances. This study underscores the role of generative AI in enhancing human annotation efforts while highlighting the need for continuous improvements to optimize performance.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_12-Evaluating_Large_Language_Model_Versus_Human_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Classification of Parasitic Worm Eggs Based on Transfer Learning and Fine-Tuned CNN Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160511</link>
        <id>10.14569/IJACSA.2025.0160511</id>
        <doi>10.14569/IJACSA.2025.0160511</doi>
        <lastModDate>2025-05-31T09:35:20.8130000+00:00</lastModDate>
        
        <creator>Ira Puspita Sari</creator>
        
        <creator>Budi Warsito</creator>
        
        <creator>Oky Dwi Nurhayati</creator>
        
        <subject>Classification; Convolutional Neural Network; EfficientNetB0; MobileNetV3; ResNet50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Classification of worm eggs is important for diagnosing worm diseases, but the manual process is time-consuming. This study designs an image classification system using Convolutional Neural Networks (CNNs) with transfer learning and fine-tuning. The main goal is to build a CNN model that classifies parasitic worm eggs by comparing three architectures: EfficientNetB0, MobileNetV3, and ResNet50, thereby providing classification technology for diagnosing worm infections. We applied transfer learning with pre-trained models and fine-tuned them on the IEEE parasitic egg dataset. The results reveal that EfficientNetB0 is superior, with an accuracy of 95.36%, precision of 95.80%, recall of 95.38%, and F1-score of 95.48%, performing better and more efficiently than the other two architectures. Applying transfer learning and fine-tuning improves model performance, with EfficientNetB0 consistently outperforming the alternatives. Visual similarities between classes in the dataset likely cause the remaining prediction errors. Overall, this system can support the diagnosis of worm diseases with high efficiency and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_11-Automated_Classification_of_Parasitic_Worm_Eggs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Cyber-Resilient Universities: A Tailored Maturity Model for Strengthening Cybersecurity in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160510</link>
        <id>10.14569/IJACSA.2025.0160510</id>
        <doi>10.14569/IJACSA.2025.0160510</doi>
        <lastModDate>2025-05-31T09:35:20.7670000+00:00</lastModDate>
        
        <creator>Maznifah Salam</creator>
        
        <creator>Khairul Azmi Abu Bakar</creator>
        
        <creator>Azana Hafizah Mohd Aman</creator>
        
        <subject>Cybersecurity; HEIs; cybersecurity maturity model; mixed-method; governance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study explores the cybersecurity maturity and preparedness of Higher Education Institutions (HEIs), developing a Cybersecurity Maturity Model (CSMM) tailored to the needs of these institutions. HEIs face increasing cyber threats such as ransomware attacks, phishing attempts, and data breaches, given their growing dependence on digital methods for administration, teaching, and research. Although cybersecurity is of paramount importance today, many institutions lack proper structures with which to evaluate and enhance their security practices. The study uses a mixed-method approach, integrating qualitative case studies and quantitative surveys to address this gap and to identify, validate, and assess the key domains and criteria of a comprehensive cybersecurity framework. The research proceeded through investigation, design, data collection, analysis, and reporting as its major phases. Data was collected through interviews, documentation reviews, and surveys involving cybersecurity experts and ICT management teams in various HEIs. The results revealed eleven important assessment domains, twenty-four criteria, and sixty-seven elements necessary for developing the CSMM, including Governance, Risk Management, Infrastructure Security, Human Factors, Compliance, and Monitoring. Validation confirmed the model to be practical, reliable, and valuable overall, giving institutions a structured avenue for assessing and improving their cybersecurity maturity.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_10-Building_Cyber_Resilient_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence Based System for Sorting and Detection of Organic and Inorganic Waste</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160509</link>
        <id>10.14569/IJACSA.2025.0160509</id>
        <doi>10.14569/IJACSA.2025.0160509</doi>
        <lastModDate>2025-05-31T09:35:20.7330000+00:00</lastModDate>
        
        <creator>Angel Jair Casta&#241;eda Meza</creator>
        
        <creator>Nicol&#225;s Alexander Lopez Haro</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Artificial intelligence (AI); environmental sustainability; waste classification; organic waste; inorganic waste</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Solid waste management has become a global challenge due to the constant increase in waste and its inadequate classification, which leads to serious environmental problems. The objective of this research is to develop a system based on artificial intelligence (AI) for the classification and detection of organic and inorganic waste. The approach is quantitative, with a pre-experimental and applied design, and the dataset comprised 1,298 images collected through observation. The implementation of this system has shown significant improvements in its key indicators across the tests carried out: an increase of 11.52% in precision, 23.61% in detection speed, and a 24.13% reduction in error rate. This research highlights the importance of AI in environmental sustainability by promoting more efficient waste management, fostering ecological awareness in educational environments, and helping students value the importance of recycling and sustainability. Finally, it concludes that AI-based systems are a viable and scalable solution to address the challenges associated with waste management.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_9-Artificial_Intelligence_Based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-Based Automatic Generation of Learning Materials for Python Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160508</link>
        <id>10.14569/IJACSA.2025.0160508</id>
        <doi>10.14569/IJACSA.2025.0160508</doi>
        <lastModDate>2025-05-31T09:35:20.6870000+00:00</lastModDate>
        
        <creator>Jawad Alshboul</creator>
        
        <creator>Erika Baksa-Varga</creator>
        
        <subject>Ontology; knowledge graph; learning material generation; domain knowledge; python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Learning materials in programming education are essential for effective instruction. This study introduces an ontology-based approach for automatically generating learning materials for Python programming. The method harnesses ontologies to capture domain knowledge and semantic relationships, enabling the creation of personalized, adaptive content. The ontology serves as a knowledge base to identify key concepts and resources and map them to learning objectives aligned with user preferences. The study outlines the design of a dual-module ontology comprising a general concepts module and a domain-specific concepts module. This design supports tailored learning experiences, enhancing Python education by meeting individual needs and learning styles. The approach also increases the quality and uniformity of the generated content, which can be reused for educational purposes. The system ensures alignment with reference materials by using BERT embeddings for semantic similarity measurement, achieving a quality accuracy of 98.5%. It can be applied to improve Python education by providing personalized recommendations, hints, and problem-solution generation. Future developments could extend this functionality to strengthen teaching and learning outcomes in programming education, including automated problem generation.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_8-Ontology_Based_Automatic_Generation_of_Learning_Materials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Stage Detection of Diabetic Retinopathy in Fundus Images Using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160507</link>
        <id>10.14569/IJACSA.2025.0160507</id>
        <doi>10.14569/IJACSA.2025.0160507</doi>
        <lastModDate>2025-05-31T09:35:20.6730000+00:00</lastModDate>
        
        <creator>Puneet Kumar</creator>
        
        <creator>Salil Bharany</creator>
        
        <creator>Ateeq Ur Rehman</creator>
        
        <creator>Arjumand Bono Soomro</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Diabetic retinopathy; convolutional neural network; EfficientNetB3; fundus images; deep learning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Diabetic Retinopathy (DRY) is a microvascular complication caused by diabetes mellitus and one of the leading causes of blindness, especially in adults. As the prevalence of this disease grows exponentially, millions of people need to be screened at a proliferating rate to diagnose the disease in its early stages. Advances in technology, especially in artificial intelligence and its allied techniques, have been applied to screening for DRY in retinal photography to enhance quality of life, generating large volumes of data that travel at high speed and reduce many manual tasks. However, the techniques employed so far are quite expensive and time-consuming, and their prediction rates are insufficient for real-time application. This study presents a fully automated deep learning-based system that reduces manual diagnostic work and achieves disease detection at a very early stage using an EfficientNetB3 (ENB3) Convolutional Neural Network (CNN) on DRY fundus images. In the proposed CNN, architectural variations and pre-processing techniques such as dimensionality reduction, global average pooling, and circular cropping are introduced alongside the Leaky ReLU (LR) activation function, transfer learning, and the ReduceLROnPlateau technique. The proposed CNN classifier achieved 94.2% accuracy on training data with a kappa score of 0.874, and a high accuracy of 96.7% on the testing data for DRY grading. The evaluation results further show that the proposed model efficiently classifies the DRY stages for early disease detection.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_7-A_Multi_Stage_Detection_of_Diabetic_Retinopathy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HCAT: Advancing Unstructured Healthcare Data Analysis Through Hierarchical and Context-Aware Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160506</link>
        <id>10.14569/IJACSA.2025.0160506</id>
        <doi>10.14569/IJACSA.2025.0160506</doi>
        <lastModDate>2025-05-31T09:35:20.6400000+00:00</lastModDate>
        
        <creator>Monica Bhutani</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Choo Wou Onn</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Machine learning; data analysis; natural language processing; hierarchical transformer; context-aware computing; medical text mining; clinical decision support; healthcare; unstructured data processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>To that end, this study presents the Hierarchical Context-Aware Transformer (HCAT), a new model for analyzing unstructured healthcare data that resolves significant problems in medical text. In the proposed model, a hierarchical structure is integrated with context-sensitive mechanisms to process healthcare documents at both the sentence and document levels. HCAT incorporates domain knowledge through a dedicated attention module and uses a detailed loss function that targets classification accuracy while encouraging domain adaptation. Quantitative experiments show that HCAT is a better choice than Bi-LSTM and BERT for sentence representation. The model attains 92.30% test accuracy on medical text classification while maintaining high computational efficiency; batch processing time is about 150 ms, and memory consumption is 320 MB. The proposed architecture for clinical text representation facilitates the incorporation of long-range dependencies into clinical narrative representation, while the context-sensitive layer supports a better understanding of medical language. Precision and recall are significant given the healthcare application of the model: it achieves a precision of 91.8% and a recall of 93.2%. From these results, it can be concluded that HCAT represents significant progress in computing over healthcare data. It provides a highly practical application for real-world extraction of medical data from unformatted text.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_6-HCAT_Advancing_Unstructured_Healthcare_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Industrial Cybersecurity with Virtual Lab Simulations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160505</link>
        <id>10.14569/IJACSA.2025.0160505</id>
        <doi>10.14569/IJACSA.2025.0160505</doi>
        <lastModDate>2025-05-31T09:35:20.6100000+00:00</lastModDate>
        
        <creator>Hamza Hmiddouch</creator>
        
        <creator>Antonio Villafranca</creator>
        
        <creator>Raul Castro</creator>
        
        <creator>Volodymyr Dubetskyy</creator>
        
        <creator>Maria-Dolores Cano</creator>
        
        <subject>Cybersecurity; industrial control system; ransomware; virtual lab</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>The increasing integration of Industrial Control Systems (ICS) within production environments underscores the urgent need for robust cybersecurity measures. However, securing these devices without disrupting ongoing operations presents a significant challenge. This study introduces a virtual laboratory environment that simulates real-world ICS networks, including a misconfigured Active Directory (AD) domain and a Supervisory Control and Data Acquisition (SCADA) node, to train cybersecurity professionals in recognizing and mitigating vulnerabilities. We propose a comprehensive setup of virtual machines—Windows Server, Windows Workstations, and Kali Linux—and follow the Purdue model for network segmentation, effectively bridging theory with hands-on practice. Demonstrating various penetration testing tools (e.g., Impacket, Kerbrute, Chisel, Socat, and TeslaCrypt ransomware), this work reveals how a single misconfiguration, such as disabling Kerberos pre-authentication, can cascade into severe breaches, including ransomware attacks on critical devices. Our preliminary results show that the virtual laboratory approach strengthens business continuity and resilience by enabling real-time testing of countermeasures without risking production downtime. This ongoing research aims to provide a practical, adaptable, and standards-aligned solution for cybersecurity training and threat response in industrial settings.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_5-Enhancing_Industrial_Cybersecurity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>End-to-End Current Consumption Estimation for a Driving System of a Mobile Robot Considering Geology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160504</link>
        <id>10.14569/IJACSA.2025.0160504</id>
        <doi>10.14569/IJACSA.2025.0160504</doi>
        <lastModDate>2025-05-31T09:35:20.5800000+00:00</lastModDate>
        
        <creator>Shota Chikushi</creator>
        
        <creator>Yonghoon Ji</creator>
        
        <creator>Hanwool Woo</creator>
        
        <creator>Hitoshi Kono</creator>
        
        <subject>Current consumption estimation; mobile robot; neural network; snow environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Mobile robots are often tasked with environmental surveys and disaster response operations. Accurately estimating the energy consumption of these robots during such tasks is essential. Among the various components, the drive system consumes the most energy and exhibits the greatest fluctuations. Since these energy fluctuations stem from variations in current consumption, it is crucial to estimate the drive system’s current consumption with high accuracy. However, existing research faces challenges in accurately estimating current consumption, particularly when the ground geology changes or when internal states cannot be measured. Moreover, there is no clearly defined methodology for estimating the current consumption of a mobile robot’s drive system under unknown geological conditions or internal states. To address this gap, the present study aims to develop an end-to-end method for estimating the current consumption of a mobile robot’s drive system, taking ground geology into consideration. To achieve this, we propose a novel approach for collecting interaction data and generating a current consumption model. For data collection, we introduce a method that effectively captures the internal and external factors influencing the drive system’s current consumption, as well as their interactions. This is accomplished by treating the physical phenomena resulting from the interaction between the driving mechanism and the ground as vibrations. Additionally, we propose a method for generating a current consumption model using a neural network, accounting for measurement errors, outliers, noise, and global current fluctuations. The effectiveness of the proposed method is demonstrated through experiments conducted on three different ground types using a skid-steering mobile robot.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_4-End_to_End_Current_Consumption_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantized Object Detection for Real-Time Inference on Embedded GPU Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160503</link>
        <id>10.14569/IJACSA.2025.0160503</id>
        <doi>10.14569/IJACSA.2025.0160503</doi>
        <lastModDate>2025-05-31T09:35:20.5300000+00:00</lastModDate>
        
        <creator>Fatima Zahra Guerrouj</creator>
        
        <creator>Sergio Rodríguez Florez</creator>
        
        <creator>Abdelhafid El Ouardi</creator>
        
        <creator>Mohamed Abouzahir</creator>
        
        <creator>Mustapha Ramzi</creator>
        
        <subject>Object detection model; quantization; embedded architectures; real-time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Deploying deep learning-based object detection models like YOLOv4 on resource-constrained embedded architectures presents several challenges, particularly regarding computing performance, memory usage, and energy consumption. This study examines the quantization of the YOLOv4 model to facilitate real-time inference on lightweight edge devices, focusing on NVIDIA’s Jetson Nano and AGX. We utilize post-training quantization techniques to reduce both model size and computational complexity, all while striving to maintain acceptable detection accuracy. Experimental results indicate that an 8-bit quantized YOLOv4 model can achieve near real-time performance with minimal accuracy loss. This makes it well-suited for embedded applications such as autonomous navigation. Additionally, this research highlights the trade-offs between model compression and detection performance, proposing an optimization method tailored to the hardware constraints of embedded architectures.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_3-Quantized_Object_Detection_for_Real_Time_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Analysis of Glucose Response Patterns in Type 1 Diabetes Using Machine Learning and Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160502</link>
        <id>10.14569/IJACSA.2025.0160502</id>
        <doi>10.14569/IJACSA.2025.0160502</doi>
        <lastModDate>2025-05-31T09:35:20.5000000+00:00</lastModDate>
        
        <creator>Arjun Jaggi</creator>
        
        <creator>Aditya Karnam Gururaj Rao</creator>
        
        <creator>Sonam Naidu</creator>
        
        <creator>Vijay Mane</creator>
        
        <creator>Siddharth Bhorge</creator>
        
        <creator>Medha Wyawahare</creator>
        
        <subject>Continuous glucose monitoring; glucose response; Type 1 diabetes; food image analysis; dietary pattern recognition; time-series analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>This study presents an automated and data-driven framework for analysing glucose response patterns in individuals with Type 1 diabetes by integrating machine learning and computer vision methodologies. The system leverages multimodal data inputs, including food images, continuous glucose monitoring (CGM) data, and time-series meal logs to model glycaemic variability and infer personalized dietary effects. Using a dataset comprising over eighty annotated meals from eight subjects, the framework extracts nutritional features from food images via convolutional neural networks (CNNs) with attention mechanisms and correlates them with postprandial glucose trajectories. The analysis reveals substantial inter-individual variability and identifies critical temporal and nutritional factors influencing glucose dynamics. Results demonstrate the system’s capability to detect patterns predictive of glycaemic responses, enabling the development of tailored dietary recommendations. This approach offers a scalable tool for personalized diabetes management and paves the way for future integration into real-time decision support systems.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_2-Automated_Analysis_of_Glucose_Response_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Federated Learning Security with a Defense Framework Against Adversarial Attacks in Privacy-Sensitive Healthcare Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160501</link>
        <id>10.14569/IJACSA.2025.0160501</id>
        <doi>10.14569/IJACSA.2025.0160501</doi>
        <lastModDate>2025-05-31T09:35:20.4230000+00:00</lastModDate>
        
        <creator>Frederick Ayensu</creator>
        
        <creator>Claude Turner</creator>
        
        <creator>Isaac Osunmakinde</creator>
        
        <subject>Federated learning; machine learning; privacy; adversarial attacks; defense framework; global model; healthcare; disease prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(5), 2025</description>
        <description>Federated learning (FL) is a cutting-edge method of collaborative machine learning that lets organizations train models without exchanging personal information. Adversarial attacks such as data poisoning, model poisoning, backdoor attacks, and man-in-the-middle attacks can compromise its accuracy and reliability. Ensuring resistance against such risks is crucial as FL gains ground in fields like healthcare, where disease prediction and data privacy are essential. Federated systems lack strong defenses, even though centralized machine learning security has been extensively researched. To secure clients and servers, this research creates a framework for identifying and thwarting adversarial attacks in FL. Using PyTorch, the study evaluates the framework’s effectiveness. The baseline FL system achieved an average accuracy of 90.07%, with precision, recall, and F1-scores around 0.9007 to 0.9008, and AUC values of 0.95 to 0.96 under benign conditions. With AUC values of 0.93 to 0.94, the defense-enhanced FL system showed remarkable resilience and maintained dependable classification (precision, recall, F1-scores ~0.8590–0.8598), despite a 4.1% accuracy decline to 85.97% owing to security overhead. With an 84.33% attack detection rate, 99.32% precision, 96.62% accuracy, and a low false positive rate of 0.15%, the defense architecture performed exceptionally well under adversarial attacks. Trade-offs were identified via latency analysis: the defense-enhanced system stabilized at 54 to 56 seconds per round, while the baseline system averaged 13-second rounds. With practical implications for safe, robust machine learning partnerships, these findings demonstrate a balance between accuracy, efficiency, and security, establishing the defense-enhanced FL system as a reliable option for privacy-sensitive healthcare applications.</description>
        <description>http://thesai.org/Downloads/Volume16No5/Paper_1-Enhancing_Federated_Learning_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Optimization of RPL Routing in IoT Networks: Analysis of Metaheuristic Algorithms in the Face of Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604113</link>
        <id>10.14569/IJACSA.2025.01604113</id>
        <doi>10.14569/IJACSA.2025.01604113</doi>
        <lastModDate>2025-05-01T08:03:35.3170000+00:00</lastModDate>
        
        <creator>Mansour Lmkaiti</creator>
        
        <creator>Maryem Lachgar</creator>
        
        <creator>Ibtissam Larhlimi</creator>
        
        <creator>Houda Moudni</creator>
        
        <creator>Hicham Mouncif</creator>
        
        <subject>IoT Security; PSO; MILP; ARS2A; simulated annealing; RPL protocol; metaheuristic techniques; routing efficiency; ETX; latency; energy consumption; attack mitigation; blackhole; wormhole; grayhole; cyberattack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The security and efficiency of Internet of Things (IoT) networks depend on optimizing the routing protocol for low-power and lossy networks (LLNs) to manage various challenges, including expected number of transmissions (ETX), latency, and energy consumption. This study proposes an advanced meta-heuristic optimization framework integrating several algorithms, including Particle Swarm Optimization (PSO), Mixed Integer Linear Programming (MILP), Adaptive Random Search with two-step Adjustment (ARS2A), and Simulated Annealing (SA), to improve the performance of RPL-based IoT networks under attack scenarios. Our methodology focuses on secure routing by integrating dynamic anomaly detection and adaptive optimization mechanisms to mitigate network threats such as Blackhole, Sinkhole, and Wormhole attacks. Simulations were carried out on large-scale IoT networks with 100 and 150 nodes to evaluate the performance of the proposed algorithms. Experimental results indicate that ARS2A and MILP offer the best compromise between security and performance, achieving minimal ETX (1.28), reduced latency (0.12 ms), and optimized energy consumption (0.85 J) in dense networks. Furthermore, simulated annealing demonstrates high adaptability in mitigating routing attacks while guaranteeing stable energy efficiency. The comparative analysis highlights the strengths and weaknesses of each algorithm, underscoring the need for hybrid optimization strategies that balance computational cost and real-time adaptability. This work establishes a secure and scalable optimization framework for IoT networks, contributing to the development of intelligent, resilient, and energy-efficient routing solutions.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_113-Secure_Optimization_of_RPL_Routing_in_IoT_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multitask Model with an Attention Mechanism for Sequentially Dependent Online User Behaviors to Enhance Audience Targeting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604112</link>
        <id>10.14569/IJACSA.2025.01604112</id>
        <doi>10.14569/IJACSA.2025.01604112</doi>
        <lastModDate>2025-05-01T08:03:35.3030000+00:00</lastModDate>
        
        <creator>Marwa Hamdi El-Sherief</creator>
        
        <creator>Mohamed Helmy Khafagy</creator>
        
        <creator>Asmaa Hashem Sweidan</creator>
        
        <subject>Multitask learning; 1D convolution neural networks; attention mechanism; click through rate; conversion rate; audience behavioral targeting; audience behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper proposes a multitask learning approach with an attention mechanism to predict audience behavior as sequential actions. The goal is to improve click-through and conversion rates by effectively targeting audience behavior. The proposed model introduces specific task sets designed to address the challenges specific to each prediction task. In particular, the first task, click prediction, suffers from data sparsity and a lack of prior knowledge, limiting its predictive power. To address this, a one-dimensional convolutional network (1D CNN) tower is used in the first task to learn local dependencies and temporal patterns of user activity. This design choice allows the model to better detect potential clicks, even without rich historical data. The task of conversion prediction is tackled by a fully connected convolution tower that selectively combines the corresponding features extracted from the first task using an Attention Mechanism, as well as the original shared embedding input data, enabling richer context for performing more accurate prediction. Experimental results show that the proposed multitask architecture significantly outperforms existing state-of-the-art models that do not consider tower architecture design to predict sequential online audience behavior.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_112-Multitask_Model_with_an_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Medical Image Analysis: A Performance Evaluation of YOLO-Based Segmentation Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604111</link>
        <id>10.14569/IJACSA.2025.01604111</id>
        <doi>10.14569/IJACSA.2025.01604111</doi>
        <lastModDate>2025-05-01T08:03:35.2700000+00:00</lastModDate>
        
        <creator>Haifa Alanazi</creator>
        
        <subject>Medical image; instance segmentation; one-stage object detection models; transfer learning; nuclei detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Instance segmentation is a critical component of medical image analysis, enabling tasks such as tissue and organ delineation, and disease detection. This paper provides a detailed comparative analysis of two fine-tuned one-stage object detection models, YOLOv11-seg and YOLOv9-seg, tailored for instance segmentation in medical imaging. Leveraging transfer learning, both models were initialized with pretrained weights and subsequently fine-tuned on the NuInsSeg dataset, which comprises over 30,000 manually segmented nuclei across 665 image patches from various human and mouse organs. This approach facilitated faster convergence and improved generalization, particularly given the limited size and high complexity of the medical dataset. The models were evaluated against key performance metrics. The experimental results reveal that YOLOv11n-seg outperforms YOLOv9c-seg with a precision of 0.87, recall of 0.84, and mAP50 of 0.89, indicating superior segmentation quality and more accurate delineation of nuclei contours. This study highlights the robust performance and efficiency of YOLOv11n-seg, demonstrating its superiority in medical image segmentation tasks, with notable advantages in both accuracy and real-time processing capabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_111-Optimizing_Medical_Image_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Precision Agriculture with YOLOv8: A Deep Learning Approach to Potato Disease Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604110</link>
        <id>10.14569/IJACSA.2025.01604110</id>
        <doi>10.14569/IJACSA.2025.01604110</doi>
        <lastModDate>2025-05-01T08:03:35.2570000+00:00</lastModDate>
        
        <creator>Mohammed Aleinzi</creator>
        
        <subject>Potato disease detection; YOLOv8; Agriculture 4.0; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Timely and precise identification of potato leaf diseases plays a critical role in improving crop productivity and reducing the impact of plant pathogens. Conventional detection techniques are often labor-intensive, dependent on expert analysis, and may not be practical for widespread agricultural use. This paper introduces an automated detection system based on YOLOv8, a cutting-edge deep learning framework specialized in object detection, to accurately recognize multiple potato leaf diseases. The proposed model is trained on a carefully prepared dataset that includes both healthy and infected leaves, utilizing robust feature learning to distinguish between different disease types. Our experimental evaluation reveals that the YOLOv8-based method achieves superior performance in terms of accuracy and processing speed when compared to traditional approaches. This work contributes to the ongoing transformation of agriculture through smart technologies by offering an AI-powered tool that facilitates real-time crop monitoring. Future research may focus on deploying this solution on edge devices, such as smartphones or drones, to enable scalable, on-field disease diagnostics. Ultimately, this study supports the vision of sustainable agriculture by integrating intelligent systems into everyday farming operations.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_110-Enhancing_Precision_Agriculture_with_YOLOv8.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Data Transmission and Energy Efficiency in Wireless Networks: A Comparative Study of GA, PSO, and Hybrid Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604109</link>
        <id>10.14569/IJACSA.2025.01604109</id>
        <doi>10.14569/IJACSA.2025.01604109</doi>
        <lastModDate>2025-05-01T08:03:35.2100000+00:00</lastModDate>
        
        <creator>Suhare Solaiman</creator>
        
        <subject>Resource allocation; optimization; genetic algorithms; particle swarm optimization; hybrid algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>As wireless communication technology evolves, efficient resource allocation in Orthogonal Frequency Division Multiple Access (OFDMA) networks is becoming more important. This study looks at three resource allocation algorithms: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and a hybrid approach that combines both. The hybrid algorithm takes advantage of the strengths of both methods to improve data transmission and energy efficiency. Using simulations in MATLAB, the study assesses algorithms based on key metrics such as data rate, energy consumption, and computational complexity. The findings show that the hybrid approach generally performs better than both GA and PSO, especially in maximizing data rates. This research offers useful information for network operators looking to implement effective resource management strategies in practical wireless communication settings.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_109-Optimizing_Data_Transmission_and_Energy_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Hybrid Meta-Heuristic Analysis for the Optimization of Fundamental Performance in Robotic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604108</link>
        <id>10.14569/IJACSA.2025.01604108</id>
        <doi>10.14569/IJACSA.2025.01604108</doi>
        <lastModDate>2025-05-01T08:03:35.1770000+00:00</lastModDate>
        
        <creator>Boudour Dabbaghi</creator>
        
        <creator>Faical Hamidi</creator>
        
        <creator>Mohamed Aoun</creator>
        
        <creator>Houssem Jerbi</creator>
        
        <subject>Domain of Attraction (DA); Differential Algebraic Representation (DAR); meta-heuristic approach; actuators saturation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper examines a hybrid optimization approach that combines analytical and meta-heuristic methods to improve the performance of practical engineering systems. Designed in support of an artificial intelligence strategy, the proposed approach ensures high stability and efficiency under the actuator saturation constraint, a well-known and sensitive problem in robotics and control. Specifically, this paper deals with the problem of computing the stability region for controlled systems. While addressing this issue, the approach takes into consideration the fact that actuator saturation may occur. It is imperative to maintain this property and ensure the reliability of designed control systems, particularly those developed to control robot actuators. Models of the studied systems are based on differential algebraic representations and polytopic regions in state space. The developed technique combines LMI conditions with an improved meta-heuristic optimization approach that rapidly searches for and enlarges domains of attraction for robot actuators. Direct Lyapunov theory is used to analyze and validate key stability performance. A numerical example study has been conducted to validate the proposed approach’s efficacy and efficiency. A comparative benchmarking study has been carried out to highlight the main concepts and results of this study.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_108-Towards_Hybrid_Meta_Heuristic_Analysis_for_the_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Performance of Tree-Based Model in Predicting Haze Events in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604107</link>
        <id>10.14569/IJACSA.2025.01604107</id>
        <doi>10.14569/IJACSA.2025.01604107</doi>
        <lastModDate>2025-05-01T08:03:35.1470000+00:00</lastModDate>
        
        <creator>Mahiran Muhammad</creator>
        
        <creator>Ahmad Zia Ul-Saufie</creator>
        
        <creator>Fadhilah Ahmad Radi</creator>
        
        <subject>Extreme Gradient Boosting (XGBoost); Gradient Boosting Regression (GBR); Decision Tree (DT); extreme values; Particulate Matter (PM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Predicting haze is crucial in controlling air pollution to reduce its impact, especially on human health. Accurate prediction of extreme values is vital to raising public awareness of this issue and to a better understanding of air quality management. Extreme values in air pollution refer to unusually high measurements of pollutants that diverge significantly from the normal range of observed values. Extreme values are normally caused by haze from various factors. Neglecting extreme values can cause unreasonable predictions. Therefore, this study aims to evaluate the performance of tree-based algorithms in predicting haze events. Predictive analytics were based on hourly air pollution data from 2013 to 2022 in Shah Alam, Malaysia. The ten chosen parameters are Relative Humidity (RH), Temperature (T), Wind Direction (WD), Wind Speed (WS), PM10, NOx, NO2, SO2, O3, and CO. Decision Tree (DT), Gradient Boosting Regression (GBR), and Extreme Gradient Boosting (XGBoost) are compared to determine the best approach for modeling PM10 concentrations for the next 24 hours (PM10,t+24h) for overall air quality data and three air quality blocks: Good air quality (Block 1), Moderate air quality (Block 2), and Extreme air quality (Block 3). The RMSE, MAE, and MAPE results indicate that XGBoost outperforms GBR and DT, with RMSE (21.5921), MAE (14.2396), and MAPE (0.4816). When evaluating performance across the three air quality blocks, XGBoost remains the top-performing model. However, XGBoost faces challenges in accurately predicting extreme values.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_107-Evaluating_the_Performance_of_Tree_Based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Defense Mechanism Against Adversarial Attacks in Maritime Autonomous Ship Using GMVAE+RL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604106</link>
        <id>10.14569/IJACSA.2025.01604106</id>
        <doi>10.14569/IJACSA.2025.01604106</doi>
        <lastModDate>2025-05-01T08:03:35.1170000+00:00</lastModDate>
        
        <creator>Ganesh Ingle</creator>
        
        <creator>Kailas Patil</creator>
        
        <creator>Sanjesh Pawale</creator>
        
        <subject>Maritime autonomous systems; reinforcement learning; defense mechanisms; Gaussian Mixture Variational Auto encoder; Singapore maritime database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>In this paper, we propose a robust defense framework combining Gaussian Mixture Variational Autoencoders (GMVAE) with Reinforcement Learning (RL) to counter adversarial attacks in Maritime Autonomous Systems, specifically targeting the Singapore Maritime Database. By modeling complex maritime data distributions through GMVAE and dynamically adapting decision boundaries via RL, our approach establishes a resilient latent representation space that effectively identifies and mitigates adversarial perturbations. Experimental evaluations using adversarial methods such as FGSM, IFGSM, DeepFool, and Carlini-Wagner attacks demonstrate that the proposed GMVAE+RL model outperforms traditional defenses in both accuracy and robustness. Specifically, it achieves a peak accuracy of 87% and robustness of 20.5%, compared to 85.8% and 19.2% for FGSM, and significantly lower values for other methods. These results underscore the superiority of our method in ensuring data integrity and operational reliability within complex maritime environments facing evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_106-A_Robust_Defense_Mechanism_Against_Adversarial_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NW Logistics: System Architecture and Design for Sustainable Road Logistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604105</link>
        <id>10.14569/IJACSA.2025.01604105</id>
        <doi>10.14569/IJACSA.2025.01604105</doi>
        <lastModDate>2025-05-01T08:03:35.1000000+00:00</lastModDate>
        
        <creator>OUAHBI Younesse</creator>
        
        <creator>ZITI Soumia</creator>
        
        <subject>Artificial Intelligence; logistics; supply chain; supply chain management; applications; Internet of Things; road safety; environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The logistics industry is under increasing pressure to reduce carbon emissions and enhance efficiency in response to environmental and regulatory demands. However, optimizing road logistics to achieve these goals requires innovative solutions that balance operational efficiency with sustainability. This study addresses this need by introducing NW Logistics, an AI-powered platform that optimizes road logistics to lower CO2 emissions and improve fleet performance. In order to achieve these objectives, real-time CO2 tracking, route optimization, and driver behavior monitoring were integrated into NW Logistics. The system enables precise, real-time tracking of deliveries and vehicle locations, allowing logistics managers to monitor fleet performance with enhanced accuracy. Additionally, onboard cameras and sensors generate individualized driver reports, tracking infractions and fostering safer driving behaviors. Initial simulations of NW Logistics indicate a significant reduction in carbon emissions, along with improvements in route efficiency, delivery tracking accuracy, and driver safety. These results demonstrate the transformative potential of AI to advance sustainable and efficient logistics management.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_105-NW_Logistics_System_Architecture_and_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adversarial Attack on Autonomous Ships Navigation Using K-Means Clustering and CAM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604104</link>
        <id>10.14569/IJACSA.2025.01604104</id>
        <doi>10.14569/IJACSA.2025.01604104</doi>
        <lastModDate>2025-05-01T08:03:35.0830000+00:00</lastModDate>
        
        <creator>Ganesh Ingle</creator>
        
        <creator>Kailas Patil</creator>
        
        <creator>Sanjesh Pawale</creator>
        
        <subject>Maritime autonomous surface ships; object detection; clean-label poisoning attacks; adversarial attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>As Maritime Autonomous Surface Ships (MASSs) increasingly become part of global maritime operations, the reliability and security of their object detection systems have become a major concern. These systems, which play a crucial role in identifying small yet critical maritime objects such as buoys, vessels, and kayaks, are particularly susceptible to adversarial attacks, especially clean-label poisoning attacks. These attacks introduce subtle manipulations into training data without altering their true labels, thereby inducing misclassification during model inference and threatening navigational safety. The objective of this study is to evaluate the vulnerability of maritime object detection models to such attacks and to propose an integrated adversarial framework to expose and analyze these weaknesses. A novel attack method is developed using K-means clustering to segment similar object regions and Class Activation Mapping (CAM) to identify high-importance zones in image data. Adversarial perturbations are then applied within these zones to craft poisoned inputs that target the YOLOv5 object detection model. Experimental validation is performed using the Singapore Marine Dataset (SMD and SMD-Plus), and performance is measured under different perturbation intensities. The results reveal a considerable decline in detection accuracy—especially for small and mid-sized vessels—demonstrating the effectiveness of the attack and its capacity to remain imperceptible to human observers. This research highlights a critical gap in the security posture of AI-based navigation systems and emphasizes the urgent need to develop maritime-specific adversarial defense strategies for ensuring robust and resilient MASS deployment.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_104-Adversarial_Attack_on_Autonomous_Ships_Navigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based UI Design Analysis: Object Detection and Image Retrieval Using YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604103</link>
        <id>10.14569/IJACSA.2025.01604103</id>
        <doi>10.14569/IJACSA.2025.01604103</doi>
        <lastModDate>2025-05-01T08:03:35.0670000+00:00</lastModDate>
        
        <creator>Roba Alghamdi</creator>
        
        <creator>Adel Ahmad</creator>
        
        <creator>Fawaz alsaadi</creator>
        
        <subject>Data-driven design; YOLOv8; design search; deep learning; user interface design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Data-driven design models support various types of mobile application design, such as design search, promoting a better understanding of best practices and trends. A well-designed User Interface (UI) makes an application practical and easy to use and contributes significantly to the application’s success. Therefore, searching for UI design examples helps designers gain inspiration and compare design alternatives. However, retrieving relevant design examples from large-scale UI datasets is challenging and not easily achieved. Current search approaches rely on various input types, and most have limitations that affect their accuracy and performance. This research proposes a model that provides a fine-grained search for relevant UI design examples based on UI screen input. The proposed model contains two phases. Object detection was implemented using the deep learning model ‘YOLOv8’, achieving 95% precision and 97% average precision. Image retrieval leverages the cosine similarity technique to retrieve the top three images most similar to the input. These results highlight the system’s effectiveness in accurately detecting and retrieving relevant UI elements, providing a valuable tool for UI designers.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_103-Deep_Learning_Based_UI_Design_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Multiclass Java Code Readability: A Comparative Study of Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604102</link>
        <id>10.14569/IJACSA.2025.01604102</id>
        <doi>10.14569/IJACSA.2025.01604102</doi>
        <lastModDate>2025-05-01T08:03:35.0200000+00:00</lastModDate>
        
        <creator>Budi Susanto</creator>
        
        <creator>Ridi Ferdiana</creator>
        
        <creator>Teguh Bharata Adji</creator>
        
        <subject>Code readability; machine learning; multiclass classification; hyperparameter tuning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The classification of program code readability has traditionally focused on two target classes: readable and unreadable. Recently, it has evolved into a multiclass classification task with three categories: readable, neutral, and unreadable. Most existing approaches rely on deep learning. This study investigated the multiclass classification of Java code readability using four feature metric datasets and 14 supervised machine learning algorithms. The dataset comprises 200 labeled Java function declarations. Readability features were extracted using Scalabrino’s tool, generating three datasets: Scalabrino, Buse-Weimer, and a combined set (Dall); a fourth (Dcorr) was derived via feature selection based on inter-feature correlation. Each model underwent hyperparameter tuning via a Randomized Search and was evaluated through 30 iterations of five-fold cross-validation. Scaling techniques (MinMax, Standard, Robust, and None) were also compared. The best performance, with an average accuracy of 61.1% and minimal overfitting, was achieved by Random Forest with MinMax scaling on Dcorr. Feature importance analysis using permutation methods identified 22 key metrics related to comments, code complexity, syntax, naming, token usage, and density. Despite its moderate accuracy, the findings offer valuable insights and highlight essential features for advancing code readability research.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_102-Predicting_Multiclass_Java_Code_Readability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Resource Allocation in Edge-Fog Computing: Leveraging Digital Twins for Efficient Healthcare Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604101</link>
        <id>10.14569/IJACSA.2025.01604101</id>
        <doi>10.14569/IJACSA.2025.01604101</doi>
        <lastModDate>2025-05-01T08:03:34.9900000+00:00</lastModDate>
        
        <creator>Brahim Ould Cheikh Mohamed Nouh</creator>
        
        <creator>Rafika Brahmi</creator>
        
        <creator>Sidi Cheikh</creator>
        
        <creator>Ridha Ejbali</creator>
        
        <creator>Mohamedade Farouk Nanne</creator>
        
        <subject>Edge computing; fog computing; digital twin; deep learning; CNN-BiLSTM; Deep Q-Network (DQN); resource allocation; cardiac event prediction; healthcare; Artificial Intelligence (AI); Internet of Things (IoT); real-time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The evolution of healthcare, driven by remote monitoring and connected devices, is transforming medical service delivery. Digital twins, virtual replicas of patients, enable continuous monitoring and predictive analysis. However, the rapid growth of real-time health data presents major challenges in resource allocation and processing, especially in cardiac event prediction scenarios. This paper proposes an artificial intelligence-based approach to optimize resource allocation in a fog-edge computing environment, with a focus on Mauritania. The system integrates a deep learning model (CNN-BiLSTM), which achieves 98% accuracy in predicting cardiovascular risks from physiological signals, combined with a Deep Q-Network (DQN) to dynamically decide whether tasks should run at the edge or in the fog. Using IoT sensors, real-time health data is collected and processed intelligently, ensuring low latency and rapid response. Digital twins provide a synchronized virtual representation of the physical system for real-time supervision. This architecture improves resource utilization, reduces processing delays, and enhances responsiveness to critical medical conditions, supporting more accurate cardiac event prediction and timely intervention, especially in resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_101-AI_Driven_Resource_Allocation_in_Edge_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Neural Paradigm: GRU-LSTM Hybrid for Precision Exchange Rate Predictions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01604100</link>
        <id>10.14569/IJACSA.2025.01604100</id>
        <doi>10.14569/IJACSA.2025.01604100</doi>
        <lastModDate>2025-05-01T08:03:34.9600000+00:00</lastModDate>
        
        <creator>Shamaila Butt</creator>
        
        <subject>Prediction; LSTM; GRU; USD/RMB exchange rate; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The USD/RMB exchange rate is significant when examining the structure of the Chinese financial system. Predicting the accurate USD/RMB exchange rate enables individuals to analyze the condition of the economy and prevent losses. We propose a novel hybrid GRU-LSTM approach to improve forecasts of the future USD/RMB exchange rate. Deep learning techniques have become the cornerstone of numerous computer vision and natural language processing fields. This paper discusses various aspects and aims to show that they can help predict the exchange rate. We investigate how the newly developed hybrid GRU-LSTM model performs in terms of success rate and profitability compared with the LSTM and GRU models. The model is evaluated on the USD/RMB currency pair, with forecasts made from September 13, 2023, to December 11, 2023. To assess the accuracy of the model, metrics such as mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used. The study found that the novel hybrid GRU-LSTM model performed well compared to the LSTM and GRU models deployed in the study for exchange rate prediction. This improvement can significantly benefit analysts and traders in making the right risk management decisions. The study further opens new possibilities for using the hybrid GRU-LSTM model by demonstrating the enhanced potential of this method, which can be more effective in the financial environment. Subsequent studies might improve the forecast by expanding the set of hybrid models and including more economic variables.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_100-Dual_Neural_Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data-Driven Charging Network Optimization: Forecasting Electric Vehicle Distribution in Malaysia to Enhance Infrastructure Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160499</link>
        <id>10.14569/IJACSA.2025.0160499</id>
        <doi>10.14569/IJACSA.2025.0160499</doi>
        <lastModDate>2025-05-01T08:03:34.9270000+00:00</lastModDate>
        
        <creator>Ouyang Mutian</creator>
        
        <creator>Guo Maobo</creator>
        
        <creator>Yu Tianzhou</creator>
        
        <creator>Liu Haotian</creator>
        
        <creator>Yang Hanlin</creator>
        
        <subject>Electric vehicles; charging infrastructure; CEEMDAN; XGBoost; spatial optimization; data-driven planning; Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid growth of electric vehicles (EVs) globally and in Malaysia has raised significant concerns regarding the adequacy and spatial imbalance of charging infrastructure. Despite government incentives and policy support, Malaysia’s charging network remains insufficient and unevenly distributed, with major urban centers having better access than rural and highway regions. This paper proposes a data-driven approach to optimize EV infrastructure planning by employing a hybrid CEEMDAN-XGBoost model for accurate EV ownership forecasting and GIS-based spatial optimization for strategic charger deployment. The model achieved superior performance compared to baseline models, with the lowest prediction errors (RMSE: 120; MAE: 38; MAPE: 5.6%). Spatial analysis revealed significant infrastructure gaps in underserved regions, guiding equitable and demand-aligned station placement. The results provide valuable insights into future EV distribution and inform policy recommendations for scalable, data-driven planning across Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_99-Big_Data_Driven_Charging_Network_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Road Safety and Optimization with AI: Insights from Enterprise Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160498</link>
        <id>10.14569/IJACSA.2025.0160498</id>
        <doi>10.14569/IJACSA.2025.0160498</doi>
        <lastModDate>2025-05-01T08:03:34.8970000+00:00</lastModDate>
        
        <creator>OUAHBI Younesse</creator>
        
        <creator>ZITI Soumia</creator>
        
        <subject>AI adoption; road logistics; logistics management; digital transformation; CO2 emissions; parcel tracking management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study explores the key factors influencing the adoption of artificial intelligence (AI) in the logistics sector, with a particular emphasis on road logistics management. It examines the technological, organizational, and environmental contexts that shape AI integration, as well as the challenges faced by logistics managers, including the need for digital transformation, carbon emissions reduction, and advanced parcel tracking management. The objective is to identify technological and human-related barriers to AI adoption and to assess the level of interest and readiness among logistics companies, especially in the Moroccan context. A quantitative research approach was adopted, based on an online survey targeting logistics professionals and decision-makers, mainly from European and Moroccan small and medium-sized enterprises (SMEs). The collected data were analyzed using statistical methods, including linear regression and ANOVA, to evaluate the relationships between company characteristics, perceived complexity of AI tools, and the availability of qualified human resources. The findings indicate that perceived complexity and limited access to specialized skills significantly hinder AI adoption. Moreover, the perception of tangible performance benefits, such as increased operational efficiency and reduced CO2 emissions, emerges as a major driver for acceptance. These insights offer practical implications for logistics companies seeking to leverage AI technologies to optimize operations, reduce environmental impact, and enhance parcel tracking systems. A strategic roadmap is proposed to overcome the identified barriers and promote effective AI integration.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_98-Revolutionizing_Road_Safety_and_Optimization_with_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rib Bone Extraction Towards Liver Isolating in CT Scans Using Active Contour Segmentation Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160497</link>
        <id>10.14569/IJACSA.2025.0160497</id>
        <doi>10.14569/IJACSA.2025.0160497</doi>
        <lastModDate>2025-05-01T08:03:34.8500000+00:00</lastModDate>
        
        <creator>Mahmoud S. Jawarneh</creator>
        
        <creator>Shahid Munir Shah</creator>
        
        <creator>Mahmoud M. Aljawarneh</creator>
        
        <creator>Ra’ed M. Al-Khatib</creator>
        
        <creator>Mahmood G. Al-Bashayreh</creator>
        
        <subject>Active contour; computed tomography; segmentation; medical diagnostics; medical imaging segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Image segmentation is an important aspect of image processing and analysis. Medical imaging segmentation is critical for providing noninvasive information about human body structure that helps physicians analyze body anatomies efficiently. Until recently, various medical imaging segmentation approaches have been presented; however, these approaches are deficient in segmenting abdominal organs due to the significant similarity in their intensity levels. The purpose of this research is to propose a method that facilitates the segmentation of abdominal organs and improves segmentation performance. The core functionality of this research is based on the extraction of rib bone from muscle tissues prior to the application of segmentation. This way, efficient segmentation of abdominal organs can be achieved by isolating the rib bone from the muscle tissues located between the ribs. The proposed rib bone extraction mechanism is applied to four slices of the MICCAI2007 liver data set to isolate muscle tissues, which have significant intensity similarity to liver tissues, from the liver. The results indicate that the proposed rib bone extraction efficiently isolated muscle tissues from linked liver tissues and improved segmentation performance.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_97-Rib_Bone_Extraction_Towards_Liver_Isolating_in_CT_Scans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Discovery of the Internet of Things (IoT) Using Large Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160496</link>
        <id>10.14569/IJACSA.2025.0160496</id>
        <doi>10.14569/IJACSA.2025.0160496</doi>
        <lastModDate>2025-05-01T08:03:34.8170000+00:00</lastModDate>
        
        <creator>Bassma Saleh Alsulami</creator>
        
        <subject>Internet of Things; large language model; BERT; knowledge discovery; data mining; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Internet of Things (IoT) technology has rapidly transformed traditional management and engagement techniques in several sectors. This work explores the trends and applications of the Internet of Things in industries, including agriculture, education, transportation, water management, air quality monitoring, underground mining, smart retail, smart home systems, and weather forecasting. The methodology involves a comprehensive review of the literature, followed by data extraction and analysis using BERT to identify key insights and patterns in IoT applications. The findings show that IoT significantly impacts the improvement of real-time monitoring, increasing efficiency, and encouraging innovative solutions in various sectors. Despite its transformative potential, cybersecurity threats, data privacy concerns, and the need for strong policy frameworks persist. The study emphasizes the necessity of multidisciplinary approaches to address these difficulties and optimize IoT implementation. Future research should focus on establishing secure IoT systems, maintaining data integrity, and encouraging collaboration between disciplines to realise the benefits of IoT technology.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_96-Knowledge_Discovery_of_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare 4.0: A Large Language Model-Based Blockchain Framework for Medical Device Fault Detection and Diagnostics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160495</link>
        <id>10.14569/IJACSA.2025.0160495</id>
        <doi>10.14569/IJACSA.2025.0160495</doi>
        <lastModDate>2025-05-01T08:03:34.7870000+00:00</lastModDate>
        
        <creator>Khalid Alsaif</creator>
        
        <creator>Aiiad Albeshri</creator>
        
        <creator>Maher Khemakhem</creator>
        
        <creator>Fathy Eassa</creator>
        
        <subject>Healthcare 4.0; Large Language Models; blockchain technology; medical device diagnostics; fault detection; smart healthcare; IoT healthcare security; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper introduces a novel framework integrating Large Language Models (LLMs) with blockchain technology for medical device fault detection and diagnostics in Healthcare 4.0 environments. The proposed framework addresses key challenges, including real-time fault detection, data security, and automated diagnostics through a multi-layered architecture incorporating Internet of Things (IoT) integration, blockchain-based security, and LLM-driven diagnostics. Experimental evaluations demonstrate substantial improvements in diagnostic accuracy and response time while maintaining stringent security standards and regulatory compliance. The system provides enhanced fault detection with real-time monitoring capabilities and secure maintenance record management for smart healthcare. Comparative analysis of different LLMs and traditional Machine Learning (ML) methods shows that Deepseek-R1:7b achieved 97.6% classification accuracy, while O3-mini reached 90.4% and 91.2% in diagnosis accuracy and problem identification, respectively. Claude demonstrated the highest technical accuracy (98.4%), while Traditional ML excelled in processing time (11.7) and processing rate (10.68). Deepseek-R1:7b’s offline capabilities ensure stringent security, privacy, and confidentiality with restricted connectivity, making it particularly suitable for sensitive healthcare applications where data protection is paramount.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_95-Healthcare_4_0_A_Large_Language_Model_Based_Blockchain_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardware-Accelerated Detection of Unauthorized Mining Activities Using YOLOv11 and FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160494</link>
        <id>10.14569/IJACSA.2025.0160494</id>
        <doi>10.14569/IJACSA.2025.0160494</doi>
        <lastModDate>2025-05-01T08:03:34.7570000+00:00</lastModDate>
        
        <creator>Refka Ghodhbani</creator>
        
        <creator>Taoufik Saidani</creator>
        
        <creator>Amani Kachoukh</creator>
        
        <creator>Mahmoud Salaheldin Elsayed</creator>
        
        <creator>Yahia Said</creator>
        
        <creator>Rabie Ahmed</creator>
        
        <subject>YOLOv11; object detection; mining industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Illegal mining activities present significant environmental, economic, and safety challenges, particularly in remote and under-monitored regions. Traditional surveillance methods are often inefficient, labor-intensive, and unable to provide real-time insights. To address this issue, this study proposes a computer vision-based solution leveraging the state-of-the-art YOLOv11 Nano and Small models, fine-tuned for the detection of illegal mining activities. A specific dataset comprising aerial and ground-level images of mining sites was curated and annotated to train the models for identifying unauthorized excavation, equipment usage, and human presence in restricted zones. The proposed system integrates the hardware-software design of YOLOv11 on the PynqZ1 FPGA, offering a high-performance, low-latency, and energy-efficient solution suitable for real-time monitoring in resource-constrained environments. This hardware-accelerated approach combines the FPGA’s parallel processing capabilities with lightweight deep learning models, enabling efficient deployment for automated illegal mining detection. By providing a scalable, real-time monitoring tool, this work contributes to the development of automated enforcement tools for the mining industry, ensuring better control and surveillance of mining activities. To validate the efficiency of deep learning deployment on edge devices, YOLOv11n was implemented on an FPGA, utilizing 70% of available LUTs, 50% of FFs, and 80% of DSPs, with 8.3 Mbits of on-chip memory. The design achieved 100.33 GOP/s throughput and 18 FPS at 55 ms latency, consuming 4.8 W and delivering an energy efficiency of 20.90 GOP/s/W.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_94-Hardware_Accelerated_Detection_of_Unauthorized_Mining_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Deep Learning and Modern Machine Learning Methods for Predicting Australia’s Precipitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160493</link>
        <id>10.14569/IJACSA.2025.0160493</id>
        <doi>10.14569/IJACSA.2025.0160493</doi>
        <lastModDate>2025-05-01T08:03:34.7400000+00:00</lastModDate>
        
        <creator>Hira Farman</creator>
        
        <creator>Qurat-ul-ain Mastoi</creator>
        
        <creator>Qaiser Abbas</creator>
        
        <creator>Saad Ahmad</creator>
        
        <creator>Abdulaziz Alshahrani</creator>
        
        <creator>Salman Jan</creator>
        
        <creator>Toqeer Ali Syed</creator>
        
        <subject>Machine learning; rainfall prediction; neural network; Random Forest; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Floods are chaotic weather events that cause irreversible and devastating harm to people’s lives, crops, and the socioeconomic system. They cause extensive property damage, animal mortality, and even human fatalities. To mitigate the risk of flooding, it is imperative to create an early warning system that can accurately forecast the amount of rain that will fall tomorrow. Rainfall forecasting is essential to people’s lives and is important everywhere in the world. A rainfall prediction model reduces risk and helps to prevent further human deaths. Statistical methods cannot reliably forecast rainfall since the atmosphere is dynamic. For these reasons, this study uses machine learning and deep learning techniques to estimate precipitation. The purpose of this study is to develop and evaluate a prediction model for forecasting rainfall in five cities of Australia (Darwin, Sydney, Perth Airport, Melbourne, Brisbane). The dataset was gathered from Australia’s national meteorological organization, the Australian Government Bureau of Meteorology (BOM). The Bureau of Meteorology is essential for monitoring and forecasting meteorological conditions, climatic trends, and natural calamities such as cyclones, storms, and floods. The dataset comprises 145,460 records with 23 features, detailing city-specific monthly averages for Australia from 2008 to 2017 (10 years). An effective rainfall forecast was produced by integrating a number of machine learning and deep learning techniques, including Random Forest (RF), Decision Tree (DT), Gradient Boosting Classifier (GBC), Artificial Neural Network (ANN), and Recurrent Neural Network (RNN). The models were trained to forecast rainfall, reducing the potential impact of floods. Results indicate that combining neural networks and Random Forests provides the most accurate predictions.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_93-A_Comparative_Study_of_Deep_Learning_and_Modern_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Classification of Intestinal Parasites With Bayesian-Optimized Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160492</link>
        <id>10.14569/IJACSA.2025.0160492</id>
        <doi>10.14569/IJACSA.2025.0160492</doi>
        <lastModDate>2025-05-01T08:03:34.7100000+00:00</lastModDate>
        
        <creator>Haifa Hamza</creator>
        
        <creator>Kamarul Hawari Ghazali</creator>
        
        <creator>Abubakar Ahmad</creator>
        
        <subject>Intestinal parasites; Faster Region-based Convolutional Neural Network; You Only Look Once (YOLOv8); Bayesian Optimization; medical imaging; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Automated detection of intestinal parasites in medical imaging enhances diagnostic efficiency and reduces human error. This study evaluates object detection techniques, using Faster R-CNN with different backbone architectures (ResNet, RetinaNet, and ResNeXt) and the YOLOv8 series, for detecting Ascaris lumbricoides and Trichuris trichiura in microscopic images. A dataset of 2000 images was split into training (1500), validation (300), and testing (200). Results show Faster R-CNN with RetinaNet achieves the highest Average Precision (AP) across varying Intersection over Union (IoU) thresholds, making it robust in feature extraction. However, YOLOv8 excels in real-time detection, with YOLOv8n (nano) providing the best trade-off between accuracy and computational efficiency. Bayesian Optimization further improves YOLOv8n, achieving an AP of 99.6% and an Average Recall (AR) of 99.7%, surpassing two-stage architectures. This study highlights the potential of deep learning for automated parasite detection, reducing reliance on manual microscopy. Future research should explore transformer-based models, self-supervised learning, and mobile deployment for real-world clinical applications.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_92-Detection_and_Classification_of_Intestinal_Parasites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantitative Assessment and Forecasting of Control Risks in the Ore-Stream Quality Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160491</link>
        <id>10.14569/IJACSA.2025.0160491</id>
        <doi>10.14569/IJACSA.2025.0160491</doi>
        <lastModDate>2025-05-01T08:03:34.6770000+00:00</lastModDate>
        
        <creator>Almas Mukhtarkhanuly Soltan</creator>
        
        <creator>Bakytzhan Turmyshevich Kobzhassarov</creator>
        
        <subject>Ore-stream; system; model; technology; control; risks; probability; unmanned vehicles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The paper aims at the organizational and technological optimization of a remote ore-stream quality control system according to technical and economic criteria. In the context of the digital transformation of the mining industry, the ore-stream is seen as a system in which control is one of the main management functions. The control function becomes key in ore-stream quality management during ore quality assessment at the stage of technological preparation of ore material, where the homogeneity of the ore massif in terms of the useful-component content from heterogeneous deposits is formed. In this paper, that component is iron. The technological novelty presented in the paper consists in realizing constant remote control of ore material quality in the form of monitoring. Remote control is technically realized using unmanned vehicles, with subsequent digital processing of the information by on-board microprocessors and specialized mathematical and software support. The iron content of the ore is estimated from the vertical vector of the magnetic field of the ore material. Implementing this concept required solving the following tasks: developing a structural and functional model of ore-stream quality control; developing mathematical support for the digital processing of ore material magnetic field measurement data; and optimizing the metrological indicators of the control system’s measuring complex. It is proposed to use control risks as criteria for the quantitative assessment of the functional quality of the ore-stream quality management system. An empirical function relating the cost of magnetometric remote control of iron content to the probable control risks is found. A 3D model of the cost of magnetometric control of iron content as a function of accuracy and the iron-content standards in ore was built.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_91-Quantitative_Assessment_and_Forecasting_of_Control_Risks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using EPP Theory and BMO-Inspired Approach to Design a Virtual Reality Dashboard Design Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160490</link>
        <id>10.14569/IJACSA.2025.0160490</id>
        <doi>10.14569/IJACSA.2025.0160490</doi>
        <lastModDate>2025-05-01T08:03:34.6300000+00:00</lastModDate>
        
        <creator>Liew Kok Leong</creator>
        
        <creator>Fazita Irma Tajul Urus</creator>
        
        <creator>Muhammad Arif Riza</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <creator>Ummul Hanan Mohamad</creator>
        
        <subject>Design Science Research (DSR); Ontology Development Methodology (ODM); Ecological Psychological Perspective (EPP); Unified Foundational Ontology (UFO); Virtual Reality Dashboard Design Method (VRDDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper introduces the Virtual Reality Dashboard Design Ontology (VRDDO), an ontological framework developed to address the absence of standardized methodologies in designing Virtual Reality (VR) dashboards for complex data visualization, particularly in smart farm monitoring. The VRDDO is built upon the Design Science Research (DSR) approach and anchored in Kernel Theory, specifically the Ecological Psychological Perspective (EPP) theory and Business Model Ontology (BMO). During the design and development phase of DSR, the Unified Ontological Approach (UoA) is applied as the ontology development methodology, to design and construct VRDDO as a design artifact. By offering a structured framework for VR dashboard design, VRDDO aims to enhance data interpretation and decision-making in immersive environments. Additionally, this ontology forms the basis for a Virtual Reality Dashboard Design Method, establishing a systematic and user-centric approach to developing efficient VR dashboards. This research is significant for its potential to improve VR dashboard development across diverse domains, facilitate knowledge sharing, and eliminate fragmented, ad-hoc practices in immersive data visualization.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_90-Using_EPP_Theory_and_BMO_Inspired_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Convolutional Neural Network Model for Vehicle Classification in Smart City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160489</link>
        <id>10.14569/IJACSA.2025.0160489</id>
        <doi>10.14569/IJACSA.2025.0160489</doi>
        <lastModDate>2025-05-01T08:03:34.6000000+00:00</lastModDate>
        
        <creator>Ahsiah Ismail</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <creator>Nur Azri Shaharuddin</creator>
        
        <creator>Asmarani Ahmad Puzi</creator>
        
        <creator>Suryanti Awang</creator>
        
        <subject>Vehicle classification; Convolutional Neural Network; SSD; YOLO; MobileNets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Smart cities optimize efficiency by integrating advanced digital technologies, real-time data analytics, and intelligent automation. With the evolution of big data, smart cities enhance infrastructure and provide intelligent solutions for transportation through the integration of highly adaptable computer technologies, including artificial intelligence (AI). This optimization can be achieved through predictive analytics that provide intelligent solutions for transportation. However, this requires reliable and accurate informative data as input for predictive analytics. Therefore, in this paper, five Convolutional Neural Network (CNN) deep learning models are investigated to determine the most accurate model for classification, namely Single Shot Detector (SSD) ResNet50, SSD ResNet152, SSD MobileNet, You Only Look Once (YOLO) YOLOv5, and YOLOv8. A total of 1324 vehicle images are collected to test these CNN models. The images consist of five different categories of vehicles: ambulance, car, motorcycle, bus, and truck. The performances of all the models are compared. From the evaluation, the YOLOv8 model attained 0.956 precision, 0.968 recall, and 0.968 F1 score, outperforming the others. In terms of computational time, YOLOv5 is the fastest. However, a minimal computational time difference is observed between YOLOv5 and YOLOv8, which were separated by only 20 minutes.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_89-Investigation_of_Convolutional_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-Based Business Processes Gap Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160488</link>
        <id>10.14569/IJACSA.2025.0160488</id>
        <doi>10.14569/IJACSA.2025.0160488</doi>
        <lastModDate>2025-05-01T08:03:34.5700000+00:00</lastModDate>
        
        <creator>Abdelgaffar Hamed Ahmed Ali</creator>
        
        <subject>Business process; gap analysis; ontology for business processes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Business processes are subject to change for quality reasons (i.e., efficiency). However, the gap analysis process is a preliminary and essential step in discovering the gap between the to-be and as-is business processes. It usually resorts to a nonstandard and manual analysis process, making it unpredictable and complex. This paper proposes a standard method based on ontology principles and the business process design methodology (DEMO). The ontology unifies the shared vocabulary between the source and target business processes to enable this sort of interoperability. Building an essential model is a core concept behind DEMO that provides an ontological view independent of realization and implementation issues and enables understanding of the enterprises&#39; behavior. Moreover, this paper provides heuristics for detecting gaps, based on the premise that producing similar institutional facts reflects similar behavior between the to-be and as-is business processes. Since the domains of the source and target are the same, it is also possible to compare the inputs of corresponding actions. The paper proposes a UML activity model for modeling business processes, enriched with DEMO concepts, to provide a foundational and informative ontology for reasoning about gaps. The expected outcome is a contribution to the broader community of business process management, ERP, and strategic planning, enabling more informed decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_88-Ontology_Based_Business_Processes_Gap_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Defect Detection in Manufacturing Using Enhanced VGG16 Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160487</link>
        <id>10.14569/IJACSA.2025.0160487</id>
        <doi>10.14569/IJACSA.2025.0160487</doi>
        <lastModDate>2025-05-01T08:03:34.5370000+00:00</lastModDate>
        
        <creator>Altynzer Baiganova</creator>
        
        <creator>Zhanar Ubayeva</creator>
        
        <creator>Zhanar Taskalyeva</creator>
        
        <creator>Lezzat Kaparova</creator>
        
        <creator>Roza Nurzhaubaeva</creator>
        
        <creator>Banu Umirzakova</creator>
        
        <subject>Automated defect detection; deep learning; convolutional neural networks; VGG16; quality control; manufacturing inspection; machine vision; Industry 4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Automated defect detection in manufacturing is a critical component of modern quality control, ensuring high production efficiency and minimizing defective outputs. This study presents an enhanced VGG16-based convolutional neural network (CNN) model for defect classification and localization, improving upon traditional vision-based inspection methods. The proposed model integrates advanced deep learning techniques, including batch normalization and dropout regularization, to enhance generalization and prevent overfitting. Extensive experiments were conducted on benchmark manufacturing defect datasets, evaluating performance based on accuracy, loss evolution, precision, recall, and mean average precision (mAP). The results demonstrate that the enhanced VGG16 model outperforms conventional CNN architectures and the standard VGG16, achieving higher defect classification accuracy and superior feature extraction capabilities. The model successfully detects multiple defect types, including surface irregularities, scratches, and deformations, with improved robustness in complex industrial environments. Additionally, the receiver operating characteristic (ROC) analysis confirms the model’s high sensitivity and specificity in distinguishing between defective and non-defective components. Despite its strong performance, challenges such as dataset scarcity, computational costs, and model interpretability remain areas for further research. Future directions include the integration of lightweight architectures for real-time deployment, generative adversarial networks (GANs) for data augmentation, and explainable AI techniques for improved transparency. The findings of this study highlight the transformative potential of deep learning in manufacturing defect detection, paving the way for intelligent, automated quality control systems that enhance production efficiency and reliability. The proposed approach contributes to the advancement of Industry 4.0 by enabling scalable, data-driven decision-making in manufacturing processes.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_87-Automated_Defect_Detection_in_Manufacturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Emotion Recognition Using a Hybrid Autoencoder-LSTM Model Optimized with a Hybrid ACO-WOA Algorithm for Hyperparameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160486</link>
        <id>10.14569/IJACSA.2025.0160486</id>
        <doi>10.14569/IJACSA.2025.0160486</doi>
        <lastModDate>2025-05-01T08:03:34.5070000+00:00</lastModDate>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Kiran Bala</creator>
        
        <creator>V. V. Jaya Rama Krishnaiah</creator>
        
        <creator>T. Jackulin</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <subject>Emotion recognition; autoencoder; long short-term memory; Ant Colony Optimization (ACO); Whale Optimization Algorithm (WOA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Emotion recognition is vital in human-computer interaction because it improves the quality of interaction. This paper therefore proposes an improved emotion recognition method based on a hybrid Autoencoder-Long Short-Term Memory (LSTM) model and a newly developed hybrid of the Ant Colony Optimization (ACO) and Whale Optimization Algorithm (WOA) for hyperparameter tuning. The autoencoder reduces the dimensionality of the input data and finds the features relevant to the model’s work, while the LSTM handles the temporal structure of sequential inputs such as speech and video. The contribution of this research lies in the novel ACO-WOA combination, which tunes the hyperparameters of the Autoencoder-LSTM model. The global search behavior of ACO and WOA improves the search efficiency, accuracy, and generalization capacity of the proposed emotion recognition system. Experiments on benchmark emotion recognition datasets establish the efficiency of the proposed model compared with conventional methods. Recall rates in recognizing various emotions across different modalities were also higher for the hybrid Autoencoder-LSTM model, and the ACO-WOA optimization helped reduce the computational cost arising from hyperparameter tuning. The approach is implemented in Python and achieves high accuracies of 94.12% and 95.94% on audio and image datasets, respectively, compared with other deep learning models such as ConvLSTM and VGG16. The research therefore shows that the presented hybrid approach can be a useful solution for successfully employing emotion recognition to support the creation of empathetic AI systems and to improve user interactions in various fields, including healthcare, entertainment, and customer support.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_86-Enhanced_Emotion_Recognition_Using_a_Hybrid_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Decoding from EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160485</link>
        <id>10.14569/IJACSA.2025.0160485</id>
        <doi>10.14569/IJACSA.2025.0160485</doi>
        <lastModDate>2025-05-01T08:03:34.4900000+00:00</lastModDate>
        
        <creator>Salma Fahad Altharmani</creator>
        
        <creator>Maha M. Althobaiti</creator>
        
        <subject>Speech decoding; EEG; deep learning; CNN; RNN; hybrid models; Brain-Computer Interfaces (BCI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The field of speech decoding is rapidly evolving, presenting new challenges and opportunities for people with disabilities such as amyotrophic lateral sclerosis (ALS), stroke, or paralysis, and for those who support them. However, speech decoding is complex: it requires analysing brain waves across spatial and temporal dimensions before translating them into speech. Recent work attempts to recreate speech that is never physically spoken by analysing the brain. Artificial-intelligence methods offer a breakthrough because they can analyse complex data, including EEG signals. This paper aims to decode imagined speech by training CNN, RNN, and XGBoost models on a suitable dataset of recorded EEG signals. EEG from 23 individuals is acquired from a public online dataset. The data are preprocessed to ensure their readability for the proposed models, after which five different feature extraction methods are applied and evaluated. Training and testing of the proposed models follow preprocessing and feature extraction to produce classification results. The proposed model involves CNN, LSTM, and XGBoost as classifiers to achieve an effective and robust speech decoding process. The ultimate result reflects the accuracy with which the algorithms can regenerate speech from EEG signal analysis. The findings will advance speech-decoding research by showing the potential of hybrid deep-learning architectures for precise decoding of imagined speech from EEG signals. These advances have promising potential for creating non-invasive communication systems to assist people with severe speech and motor disorders, thereby improving their quality of life and increasing the application scope of brain-computer interfaces.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_85-Speech_Decoding_from_EEG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach for Early Road Defect Detection: Integrating Edge Detection with Attention-Enhanced MobileNetV3 for Superior Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160484</link>
        <id>10.14569/IJACSA.2025.0160484</id>
        <doi>10.14569/IJACSA.2025.0160484</doi>
        <lastModDate>2025-05-01T08:03:34.4600000+00:00</lastModDate>
        
        <creator>Ayoub Oulahyane</creator>
        
        <creator>Mohcine Kodad</creator>
        
        <creator>El Houcine Addou</creator>
        
        <creator>Sofia Ourarhi</creator>
        
        <creator>Hajar Chafik</creator>
        
        <subject>Road defect detection; edge detection; attention mechanism; MobileNetV3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The early detection of road defects is critical for maintaining infrastructure quality and ensuring public safety. This research presents a hybrid approach that combines edge detection techniques with an enhanced deep learning model for efficient and accurate road defect classification. The process begins with edge detection to highlight structural irregularities, such as cracks and potholes, by emphasizing critical features in road surface images. These pre-processed images are then fed into a classification model based on MobileNetV3, augmented with an attention mechanism to improve feature weighting and model focus on defect-prone regions. The proposed system was evaluated on the Crack500 dataset of road surface images, achieving a classification accuracy of 96.2%. This demonstrates significant improvement compared to baseline models without edge detection or attention enhancements. The edge detection stage efficiently reduces noise, while the attention-augmented MobileNetV3 ensures robust feature discrimination, making the approach suitable for real-time and resource-constrained deployment scenarios. This study highlights the effectiveness of combining classical image processing with advanced neural network techniques. The proposed system has the potential to optimize road maintenance workflows, reduce operational costs, and improve road safety by enabling early and precise defect identification.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_84-Hybrid_Approach_for_Early_Road_Defect_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extracting Facial Features to Detect Deepfake Videos Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160483</link>
        <id>10.14569/IJACSA.2025.0160483</id>
        <doi>10.14569/IJACSA.2025.0160483</doi>
        <lastModDate>2025-05-01T08:03:34.4430000+00:00</lastModDate>
        
        <creator>Ayesha Aslam</creator>
        
        <creator>Jamaluddin Mir</creator>
        
        <creator>Gohar Zaman</creator>
        
        <creator>Atta Rahman</creator>
        
        <creator>Asiya Abdus Salam</creator>
        
        <creator>Farhan Ali</creator>
        
        <creator>Jamal Alhiyafi</creator>
        
        <creator>Aghiad Bakry</creator>
        
        <creator>Mustafa Jamal Gul</creator>
        
        <creator>Mohammed Gollapalli</creator>
        
        <creator>Maqsood Mahmud</creator>
        
        <subject>Deepfake; fake videos; facial features; GAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Generative adversarial networks (GANs) have gained popularity for their ability to synthesize images from random inputs in deep learning models. One of the notable applications of this technology is the creation of realistic videos known as deepfakes, which have been misused on social media platforms. The difficulty lies in distinguishing these fake videos from real ones with the naked eye, leading to significant concerns. This study proposes a supervised machine learning approach to effectively differentiate between real and counterfeit videos by detecting visual artifacts. To achieve this, two facial features are extracted, eye blinking and nose position, utilizing landmark detection techniques. Both features were used to train supervised machine learning classifiers, which were evaluated on the publicly available UADFV and Celeb-DF deepfake datasets. The experiments successfully demonstrate that the proposed method achieves promising and superior performance, with an area under the curve (AUC) of 97% for deepfake detection, in contrast to state-of-the-art methods investigating the same datasets.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_83-Extracting_Facial_Features_to_Detect_Deepfake_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Document Classification Using Modified Relative Discrimination Criterion and RSS-ELM Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160482</link>
        <id>10.14569/IJACSA.2025.0160482</id>
        <doi>10.14569/IJACSA.2025.0160482</doi>
        <lastModDate>2025-05-01T08:03:34.4270000+00:00</lastModDate>
        
        <creator>Muhammad Anwaar</creator>
        
        <creator>Ghulam Gilanie</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Wareesa Sharif</creator>
        
        <subject>Feature selection; relative discrimination criterion; ring seal search; extreme learning machine; metaheuristic algorithms; document classification; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Internet content is increasing daily, and more data are being digitized due to technological advancements. Ever-increasing textual data in words, phrases, terms, sentences, and paragraphs pose significant challenges in classifying them effectively and require sophisticated techniques to arrange them automatically. The vast amount of textual data presents an opportunity to organize and extract valuable insights by identifying crucial pieces of information using feature selection techniques. Our article proposes &quot;a Modified Relative Discrimination Criterion (MRDC) Technique and Ringed Seal Search-Extreme Learning Machine (RSS-ELM) to improve document classification&quot;, which prioritizes key data and fits corresponding documents into appropriate classes. The proposed MRDC and RSS-ELM techniques are compared with several existing techniques, such as the Relative Discrimination Criterion (RDC), the Improved Relative Discrimination Criterion (IRDC), GA-ELM, and CS-ELM. The MRDC technique produced superior classification results with 91.60% accuracy compared to the existing RDC and IRDC for feature selection. Moreover, the RSS-ELM optimization technique improved predictions significantly, with 98.9% accuracy compared to CS-ELM and GA-ELM on the Reuters-21578 dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_82-Optimizing_Document_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Comprehensive NLP Framework for Indigenous Dialect Documentation and Revitalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160481</link>
        <id>10.14569/IJACSA.2025.0160481</id>
        <doi>10.14569/IJACSA.2025.0160481</doi>
        <lastModDate>2025-05-01T08:03:34.3800000+00:00</lastModDate>
        
        <creator>Mohammed Fakhreldin</creator>
        
        <subject>Indigenous language preservation; natural language processing; meta-learning; contrastive learning; low-resource languages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The disappearance of Indigenous languages results in a decrease in cultural diversity, making their preservation extremely important. Conventional methods of documentation are time-consuming, and present AI solutions fall short due to data scarcity, dialectal variation, and poor adaptability to low-resource languages. A novel NLP framework is proposed to solve these problems by combining Meta-Learning and Contrastive Learning. Adaptation to low-resource languages becomes rapid via meta-learning (MAML), while dialect differentiation is enhanced through contrastive learning. The model is trained on the Tatoeba (text) and Mozilla Common Voice (speech) datasets to ensure robust performance in both text and phonetic tasks. The results indicate a 15% reduction in Word Error Rate (WER), an 18% improvement in BLEU score for translation, and a 12% improvement in F1-score for dialect classification. Testing was also conducted with native speakers to assess practical viability. The resulting real-time translation, transcription, and language documentation system is deployed via a cloud-based platform, thereby reaching Indigenous communities globally. This dual-learning framework represents a scalable, adaptive, and cost-efficient solution for language revitalization. The proposed models are a game changer for language preservation, setting new standards for low-resource NLP and making tangible contributions towards the digital sustainability of endangered dialects.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_81-Developing_a_Comprehensive_NLP_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Cryptocurrencies and Their Technological Infrastructure on Global Financial Regulation: Challenges for Regulators and New Regulations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160480</link>
        <id>10.14569/IJACSA.2025.0160480</id>
        <doi>10.14569/IJACSA.2025.0160480</doi>
        <lastModDate>2025-05-01T08:03:34.3500000+00:00</lastModDate>
        
        <creator>Juan Chavez-Perez</creator>
        
        <creator>Raquel Melgarejo-Espinoza</creator>
        
        <creator>Victor Sevillano-Vega</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <subject>Cryptocurrencies; financial regulation; blockchain; regulatory challenges; cryptocurrency laws</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rise of cryptocurrencies is transforming the landscape of global finance, but their very decentralized nature is triggering unprecedented challenges for regulatory systems. This systematic literature review (SLR) aimed to gather and synthesize information to understand the functioning of cryptocurrencies in relation to their regulatory challenges. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology supports the rigor of the research, whereby 50 studies published between 2022 and 2025 were selected from databases such as Scopus, Web of Science, IEEE Xplore, and Science Direct. Among the results, it was observed that the continents with the greatest contributions were Europe and Asia, representing 60% and 25% of the studies analyzed, respectively. Likewise, the period with the highest scientific production was the year 2024, with 50% of the manuscripts published. Regarding the analysis of keyword co-occurrence using VOSviewer, it was found that &quot;blockchain&quot; and &quot;cryptocurrency&quot; were the most predominant terms, with 18 and 16 mentions respectively, highlighting their centrality in the academic discussion. Ultimately, the research highlights that cryptocurrencies bring with them major regulatory challenges, such as money laundering and lack of legal clarity, while blockchain emerges as an essential tool to improve the transparency and operability of financial regulation.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_80-Impact_of_Cryptocurrencies_and_Their_Technological_Infrastructure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Interactive Oral English Translation System Leveraging Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160479</link>
        <id>10.14569/IJACSA.2025.0160479</id>
        <doi>10.14569/IJACSA.2025.0160479</doi>
        <lastModDate>2025-05-01T08:03:34.3170000+00:00</lastModDate>
        
        <creator>Dan Zhao</creator>
        
        <creator>HeXu Yang</creator>
        
        <subject>Deep learning; interactive English; spoken English; automatic translation; translation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>An advanced interactive English oral automatic translation system has been developed using cutting-edge deep learning techniques to address key challenges such as low success rates, lengthy processing times, and limited accuracy in current systems. The core of this innovation lies in a sophisticated deep learning translation model that leverages neural network architectures, combining logarithmic and linear models to efficiently map and decompose the activation functions of target neurons. The system dynamically calculates neuron weight ratios and compares vector levels, enabling precise and responsive interactive translations. A robust system framework is established around a central text conversion module, integrating hardware components such as the I/O bus, I/O bridge, recorder, interactive information collector, and an initial language correction unit. Key hardware includes the WT588F02 recording and playback chip (with external flash) for audio recording and NAND flash memory for efficient data storage. Noise reduction is achieved using the POROSVOC-PNC201 audio processor, while the aml100 chip enhances audio detection capabilities. Extensive neural network testing on a dataset of 1.8 million translation samples demonstrates the system&#39;s superior performance: a success rate exceeding 80% (a 10% improvement over existing methods), a translation time of under 50 ms (a 30% reduction), and a translation accuracy of over 95% (a 5% improvement), achieved by combining deep learning advancements with high-performance computing and optimized hardware integration. This state-of-the-art system sets a new benchmark in interactive English oral translation.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_79-Development_of_an_Interactive_Oral_English_Translation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming Internal Auditing: Harnessing Retrieval-Augmented Generation Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160478</link>
        <id>10.14569/IJACSA.2025.0160478</id>
        <doi>10.14569/IJACSA.2025.0160478</doi>
        <lastModDate>2025-05-01T08:03:34.2870000+00:00</lastModDate>
        
        <creator>Olive Stumke</creator>
        
        <creator>Fanie Ndlovu</creator>
        
        <subject>Adaptive learning; Anthropic Haiku; benefits; challenges; Generative AI; Google Gemini API Pro; higher education; internal auditing; OpenAI GPT-Turbo; personalized learning; RAG (Retrieval-Augmented Generation); South Africa</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The advent of cloud-based Generative AI models, such as ChatGPT, Google Gemini, and Claude, has created new opportunities for improving education through real-time, adaptive learning experiences. Despite their widespread use globally, their application in South African higher education remains limited and underexplored, resulting in an application gap. This paper, as Phase 1 of a larger project, addresses this gap by focusing on the development of a Retrieval-Augmented Generation (RAG) web application designed to enhance Internal Auditing education at the Durban University of Technology. This is achieved by integrating three powerful Generative AI models—OpenAI GPT-4o-mini, Google Gemini-1.5-flash, and Anthropic Claude-3-haiku—into a single educational platform that will enable lecturers to manage and augment lecture materials while allowing students to access personalized, AI-generated content. This paper presents the design considerations, architecture, and integration techniques employed in the development of the RAG web application, offering insights into the potential of adaptive learning, personalized learning, and AI-driven tutoring in South Africa’s educational landscape. This paper demonstrates how a RAG web application can provide the building blocks for future generative AI applications that could enhance teaching and learning with minimal effort from lecturers and learners in the South African context.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_78-Transforming_Internal_Auditing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Match Detection Process Using Chi-Square Equation for Improving Type-3 and Type-4 Clones in Java Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160477</link>
        <id>10.14569/IJACSA.2025.0160477</id>
        <doi>10.14569/IJACSA.2025.0160477</doi>
        <lastModDate>2025-05-01T08:03:34.2570000+00:00</lastModDate>
        
        <creator>Noormaizzattul Akmaliza Abdullah</creator>
        
        <creator>Al-Fahim Mubarak-Ali</creator>
        
        <creator>Mohd Azwan Mohamad Hamza</creator>
        
        <creator>Siti Salwani Yaacob</creator>
        
        <subject>Code clone detection; distance measure; Java language; Chi-square; computational intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Generic Code Clone Detection (GCCD) is a code clone detection model that uses a distance measure equation, enabling detection of all types of code clones, namely Type-1, Type-2, Type-3, and Type-4, in Java applications. However, the detection process of GCCD did not focus on detecting Type-3 and Type-4 clones. Hence, this paper proposes two experiments that incorporate enhancements into GCCD in order to improve the detection rate of Type-3 and Type-4 clones. Implementing Chi-square distance in the match detection process produced a significant increase in results, specifically on Type-3 and Type-4 clones, in comparison with the Euclidean distance used in GCCD; the dissimilarity between the two distance measures allows the detection rate to increase. Based on the results, the suggested enhancement using Chi-square distance in the match detection process outperforms GCCD in improving code clone detection results for Type-3 and Type-4 clones, and thus contributes to research on improving code clone detection.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_77-Enhancing_Match_Detection_Process_Using_Chi_Square_Equation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Portable and Lightweight Signal Processing Approach for sEMG-Based Human–Machine Interaction in Robotic Hands</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160476</link>
        <id>10.14569/IJACSA.2025.0160476</id>
        <doi>10.14569/IJACSA.2025.0160476</doi>
        <lastModDate>2025-05-01T08:03:34.2230000+00:00</lastModDate>
        
        <creator>Ngoc-Khoat Nguyen</creator>
        
        <subject>sEMG; myo-prosthesis; myosignals; human–prosthesis interface; signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Surface electromyography (sEMG) presents a viable biosignal for the control of robotic prosthetic hands, as it directly correlates with underlying muscle activity. This study introduces an efficient, computationally lightweight signal processing methodology designed for real-time embedded systems. The proposed methodology comprises a preprocessing pipeline, incorporating bandpass and notch filtering, followed by segmentation via overlapping sliding windows. Time-domain features, specifically Mean Absolute Value (MAV), Zero Crossing (ZC), Waveform Length (WL), Slope Sign Change (SSC), and Variance (VAR), are extracted to characterize relevant muscular activation patterns. The experimental findings clearly indicate that the extracted features effectively differentiate between various hand gestures, allowing for accurate, real-time control of the wearable robotic hand. The system&#39;s high responsiveness, low latency, and resilience to noise underscore its suitability for assistive and rehabilitative applications. By prioritizing computational efficiency and feasibility for embedded implementation, the proposed method establishes a practical framework for user intent recognition in human-machine interaction systems.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_76-Portable_and_Lightweight_Signal_Processing_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Guitar Chord Recognition Using Spectrogram-Based Feature Extraction and AlexNet Architecture for Categorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160475</link>
        <id>10.14569/IJACSA.2025.0160475</id>
        <doi>10.14569/IJACSA.2025.0160475</doi>
        <lastModDate>2025-05-01T08:03:34.2100000+00:00</lastModDate>
        
        <creator>Nilesh B. Korade</creator>
        
        <creator>Mahendra B. Salunke</creator>
        
        <creator>Amol A. Bhosle</creator>
        
        <creator>Sunil M. Sangve</creator>
        
        <creator>Dhanashri M. Joshi</creator>
        
        <creator>Gayatri G. Asalkar</creator>
        
        <creator>Sujata R. Kadu</creator>
        
        <creator>Jayesh M. Sarwade</creator>
        
        <subject>Chords; prediction; spectrogram; chromagram; Mel Frequency Cepstral Coefficients; AlexNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Chord prediction plays a key role in the advancement of musical technological innovations, such as automatic music transcription, real-time music tutoring, and intelligent composition tools. Accurate chord prediction can assist musicians, educators, and developers in constructing tools that help in learning, playing, and composing music. Background noise and audio distortions may have an impact on chord prediction accuracy, particularly in real-world situations. Chords can have distinct voicings or finger positions on the guitar, resulting in slight variations in audio representation. This study focuses on the classification of guitar chords using deep learning techniques. The dataset comprises eight major and minor guitar chords, whose recordings were converted into spectrograms, chromagrams, and Mel Frequency Cepstral Coefficients (MFCCs) for feature extraction. Various deep learning architectures, including CNN, ResNet50, AlexNet, and VGG, were employed to classify the chords. Experimental results demonstrated that the spectrogram-based AlexNet model outperforms the others, achieving good accuracy and robustness in chord classification. The proposed study demonstrates the efficiency of spectrograms and advanced deep learning models for audio signal processing in music applications. By automating chord detection, this study provides beneficial resources for music learners as well as educators, enabling more efficient learning and real-time feedback during practice sessions.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_75-Intelligent_Guitar_Chord_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthy and Unhealthy Oil Palm Tree Detection Using Deep Learning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160474</link>
        <id>10.14569/IJACSA.2025.0160474</id>
        <doi>10.14569/IJACSA.2025.0160474</doi>
        <lastModDate>2025-05-01T08:03:34.1770000+00:00</lastModDate>
        
        <creator>Kang Hean Heng</creator>
        
        <creator>Azman Ab Malik</creator>
        
        <creator>Mohd Azam Bin Osman</creator>
        
        <creator>Yusri Yusop</creator>
        
        <creator>Irni Hamiza Hamzah</creator>
        
        <subject>Component oil palm detection; deep learning models; object detection; Faster R-CNN; drone imagery analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Oil palm is the world&#39;s most efficient and economically productive oil-bearing crop. It can be processed into components needed in various products, such as beauty products and biofuel. In Malaysia, the oil palm industry contributes around 2.2% annually to the nation&#39;s GDP. The continuous surge in worldwide demand for palm oil has created an awareness among local plantation owners to apply more monitoring standards to the trees to increase their yield. However, Malaysia&#39;s cultivation and monitoring process still depends mainly on the labor force, making it inefficient and expensive. This scenario served as a motivation for owners to innovate the tree monitoring process through the use of computer vision techniques. This paper aims to develop an object detection model to differentiate healthy and unhealthy oil palm trees using aerial images collected by a drone over an oil palm plantation. Different pre-trained models, such as Faster R-CNN (Region-Based Convolutional Neural Network) and SSD (Single-Shot MultiBox Detector), with different backbone modules, such as ResNet, Inception, and Hourglass, are applied to the images of palm leaves. A comparison is then made to select the best model, based on the AP and AR at various scales and the total loss, for differentiating healthy and unhealthy oil palms. Ultimately, the Faster R-CNN ResNet101 FPN model performed the best among the models, with an AP (area=all) of 0.355, an AR (area=all) of 0.44, and a total loss of 0.2296.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_74-Healthy_and_Unhealthy_Oil_Palm_Tree_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling the Moderating Role of Government Policy in Cryptocurrency Investment Acceptance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160473</link>
        <id>10.14569/IJACSA.2025.0160473</id>
        <doi>10.14569/IJACSA.2025.0160473</doi>
        <lastModDate>2025-05-01T08:03:34.1470000+00:00</lastModDate>
        
        <creator>Maslinda Mohd Nadzir</creator>
        
        <creator>Rabea Abdulrahman Raweh</creator>
        
        <creator>Hapini Awang</creator>
        
        <creator>Huda Ibrahim</creator>
        
        <subject>Cryptocurrency; acceptance; investment; UTAUT; government policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Without the requirement for third-party approval, cryptocurrency enables anonymous, secure, quick, and inexpensive financial transactions. Although cryptocurrency is gaining global popularity, its applications are still limited. This research aims to investigate the factors influencing the acceptance of cryptocurrency as an investment tool, focusing on the moderating role of government policy. Using the Unified Theory of Acceptance and Use of Technology (UTAUT) extended with awareness, security, and trust, a survey was conducted with 220 respondents. Structural Equation Modelling (SEM) was employed to analyse the data. The findings revealed that the usage of cryptocurrencies is significantly affected by performance expectancy, facilitating conditions, social influence, awareness, and security in investment. However, trust does not affect the acceptance of cryptocurrency as an investment. The outcomes generate vital insights and strategies for cryptocurrency users, offering a crucial examination for stakeholders and professionals keen on understanding the underlying dynamics of cryptocurrency acceptance in investment.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_73-Modelling_the_Moderating_Role_of_Government_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Electric Vehicle Security with Face Recognition: Implementation Using Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160472</link>
        <id>10.14569/IJACSA.2025.0160472</id>
        <doi>10.14569/IJACSA.2025.0160472</doi>
        <lastModDate>2025-05-01T08:03:34.1170000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Chin Wei Yi</creator>
        
        <creator>Rex Bacarra</creator>
        
        <creator>Fatimah Abdulridha Rashid</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Face recognition; face detection; Principal Component Analysis (PCA); Support Vector Machine (SVM); Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Facial identification has emerged as a key research area due to its potential to enhance biometric security. This research proposes an advanced security system for electric vehicles (EVs) based on facial identification, implemented using Raspberry Pi. The system comprises two main modules: Face Detection and Face Recognition. For face detection, the researchers propose using the Viola-Jones algorithm, which leverages Haar-like features to detect and extract unique facial features, such as the eyes, nose, and mouth. MATLAB will be used as the development tool for this module. For face recognition, the proposed approach integrates Principal Component Analysis (PCA) with Support Vector Machine (SVM). PCA is used to extract the most relevant facial information and construct a computational model, while SVM enhances classification accuracy. The system&#39;s performance is evaluated using accuracy and the Receiver Operating Characteristic (ROC) curve, with results demonstrating a face recognition accuracy of 95% and an average execution time of 2.32 seconds, meeting real-time operational requirements. These findings confirm the proposed method’s reliability in offering advanced and efficient biometric protection for modern electric vehicles.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_72-Enhancing_Electric_Vehicle_Security_with_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Optimization Model for Household Waste Bins Location Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160471</link>
        <id>10.14569/IJACSA.2025.0160471</id>
        <doi>10.14569/IJACSA.2025.0160471</doi>
        <lastModDate>2025-05-01T08:03:34.1000000+00:00</lastModDate>
        
        <creator>Moulay Lakbir Tahiri Alaoui</creator>
        
        <creator>Meryam Belhiah</creator>
        
        <creator>Soumia Ziti</creator>
        
        <subject>Smart City; IoT; household waste; LoRaWan; bin location; outlier detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Smart cities require effective, adaptive household waste management systems due to rapid urbanization. Traditional bin placement strategies based on placing bins equidistant among residents fail to account for actual human behavior, leading to overflowing or underused bins. This paper addresses optimizing bin location and capacity through Internet of Things (IoT) technologies and data-driven decision-making: LoRaWAN sensors were deployed in Tangier City as a case study, and real-time usage information was then collected and analyzed. Using statistical analysis and outlier detection techniques, the proposed approach identifies non-optimized bin placements. It also evaluates data quality and classifies bins by their usage level. Results show that several bins were consistently overused or underused, indicating that dynamic placement and capacity adjustment would improve waste collection efficiency, reduce operational costs, and enhance citizen satisfaction within a Smart City framework.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_71-Towards_an_Optimization_Model_for_Household_Waste_Bins.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Digital Insurance Solutions: A Systematic Literature Review and Future Research Agenda</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160470</link>
        <id>10.14569/IJACSA.2025.0160470</id>
        <doi>10.14569/IJACSA.2025.0160470</doi>
        <lastModDate>2025-05-01T08:03:34.0830000+00:00</lastModDate>
        
        <creator>Anni Wei</creator>
        
        <creator>Yurita Yakimin Abdul Talib</creator>
        
        <creator>Zakiyah Sharif</creator>
        
        <subject>Digital insurance; Technology Acceptance Model; antecedents of adoption; systematic literature review; future research agenda</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The purpose of this study is to explore the antecedents for the adoption of digital insurance solutions and to present current research trends and future research agendas based on a systematic literature review. The findings revealed key motivators for the adoption of digital insurance solutions, such as trust, perceived usefulness, ease of use, performance and effort expectancy, social influence, subjective norms, self-efficacy, system quality, and attitudes. Meanwhile, the key inhibitors include perceived risk, privacy concerns, complexity, and technology anxiety. The study shows that current research themes primarily focus on the online insurance sector while paying little attention to emerging technologies. Although the Technology Acceptance Model (TAM) is the most widely applied theory in digital insurance adoption studies, its explanatory power needs to be enhanced by introducing new theories. Moreover, most research samples consist of insurance consumers, with less attention paid to user groups excluded from financial services. Questionnaires and Structural Equation Modeling (SEM) are commonly used methods, but they still have limitations when dealing with large samples and complex behavioral changes. This study provides guidance for governments in promoting the implementation of digital insurance solutions, alongside strategic support for insurers to optimise user experience and enhance industry competitiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_70-Exploring_Digital_Insurance_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Analysis of Physicians&#39; Performance Evaluation: A Comparison of Feature Selection Strategies to Support Medical Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160469</link>
        <id>10.14569/IJACSA.2025.0160469</id>
        <doi>10.14569/IJACSA.2025.0160469</doi>
        <lastModDate>2025-05-01T08:03:34.0370000+00:00</lastModDate>
        
        <creator>Amani Mustafa Ghazzawi</creator>
        
        <creator>Alaa Omran Almagrabi</creator>
        
        <creator>Hanaa Mohammed Namankani</creator>
        
        <subject>Physicians; performance; evaluation; clustering; k-means; features; decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Evaluating physicians&#39; performance is one of the fundamental pillars of improving the quality of healthcare in medical institutions, as it contributes to measuring their ability to provide appropriate treatment, interact effectively with patients, and work within healthcare teams. This study aims to explore the impact of attribute selection on the accuracy of physician clustering using the K-Means algorithm, to improve physician performance assessment. Three datasets containing professional, medical, and administrative attributes were analyzed, such as age, nationality, job title, years of experience, number of operations, and evaluations from various entities. The optimal number of clusters was determined using the Elbow and Silhouette Score methods. The results showed that the original feature set and Lasso features performed best at k = 3, with a clear distinction between clusters. The &quot;three-star&quot; cluster performed well at k = 2 but lost some fine details. It was also shown that attribute selection directly affects the number and accuracy of clusters resulting from clustering, allowing for a clearer classification of physician categories. The study recommends using either original features or Lasso features to achieve more effective clustering, which supports improved recruitment, training, and management decision-making processes in healthcare organizations.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_69-Clustering_Analysis_of_Physicians_Performance_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DenseRSE-ASPPNet: An Enhanced DenseNet169 with Residual Dense Blocks and CE-HSOA-Based Optimization for IoT Botnet Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160468</link>
        <id>10.14569/IJACSA.2025.0160468</id>
        <doi>10.14569/IJACSA.2025.0160468</doi>
        <lastModDate>2025-05-01T08:03:34.0070000+00:00</lastModDate>
        
        <creator>Mohd Abdul Rahim Khan</creator>
        
        <subject>Internet of Things; botnet detection; DenseRSE-ASPPNet; residual squeeze-and-excitation blocks; Cyclone-Enhanced Humboldt Squid Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The growing prevalence of Internet of Things (IoT) devices has heightened vulnerabilities to botnet-based cyberattacks, necessitating robust detection mechanisms. This paper proposes DenseRSE-ASPPNet, an advanced deep learning framework for botnet detection, incorporating comprehensive preprocessing, feature extraction, and optimization. The preprocessing pipeline includes data cleaning and Min-Max normalization to ensure high-quality input data. The DenseNet169 backbone is enhanced with Residual Squeeze-and-Excitation (RSE) blocks for channel-wise attention recalibration and Atrous Spatial Pyramid Pooling (ASPP) for capturing multi-scale spatial patterns, enabling effective feature extraction. Hyperparameter optimization is performed using the Cyclone-Enhanced Humboldt Squid Optimization Algorithm (CE-HSOA), which balances global exploration and local exploitation, ensuring faster convergence and enhanced robustness. Experimental results demonstrate the superior performance of the proposed framework, achieving 99.00% accuracy, 96.40% sensitivity, and 99.95% specificity, significantly minimizing false positives and false negatives. The proposed DenseRSE-ASPPNet provides an efficient, scalable, and effective solution for mitigating botnet threats in IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_68-DenseRSE_ASPPNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive AI-Based Personalized Learning for Accelerated Vocabulary and Syntax Mastery in Young English Learners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160467</link>
        <id>10.14569/IJACSA.2025.0160467</id>
        <doi>10.14569/IJACSA.2025.0160467</doi>
        <lastModDate>2025-05-01T08:03:33.9730000+00:00</lastModDate>
        
        <creator>Angalakuduru Aravind</creator>
        
        <creator>M. Durairaj</creator>
        
        <creator>Preeti Chitkara</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Linginedi Ushasree</creator>
        
        <creator>Mohamed Ben Ammar</creator>
        
        <subject>AI-based learning; gamification; language acquisition; personalized feedback; vocabulary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Language acquisition is an integral part of early schooling, but young English language learners struggle to learn vocabulary and syntax when they are not provided with specialized instruction. Conventional teaching cannot easily accommodate different learning speeds, which leads to unbalanced levels of proficiency among students and possible disengagement among slow learners. Present computer-assisted learning aids provide interactive practice but lack real-time adaptation and personalized feedback, limiting their capacity to address learners&#39; unique problems. To overcome these constraints, this study proposes an Artificial Intelligence-based personalized learning system that supports vocabulary and syntax learning via adaptive learning models, NLP-based chatbots, and gamified interactive lessons. The system dynamically adapts content according to students&#39; most recent performance in real time to enable a personalized and efficient learning experience. The research follows an experimental study design with two groups: an AI-supported learning group and a traditional learning group. A pre-test and post-test design measures the effects of the system on vocabulary recall and syntax correctness. Other engagement measures, such as survey results and qualitative feedback, further inform learner experience and learning efficacy. Initial results indicate that learners working with the AI-powered learning system gained 25% in vocabulary recall and 30% in syntax accuracy over the control group. Furthermore, learner engagement rates are elevated because of the real-time feedback and gamification components. These results emphasize the promise of AI-based personalized learning to boost language acquisition and lay the basis for further effective innovations in adaptive education technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_67-Adaptive_AI_Based_Personalized_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Rainfall Estimation Accuracy Using a Convolutional Neural Network with Convolutional Block Attention Model on Surveillance Camera</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160466</link>
        <id>10.14569/IJACSA.2025.0160466</id>
        <doi>10.14569/IJACSA.2025.0160466</doi>
        <lastModDate>2025-05-01T08:03:33.9270000+00:00</lastModDate>
        
        <creator>Iqbal</creator>
        
        <creator>Adhi Harmoko Saputro</creator>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Ardasena Sopaheluwakan</creator>
        
        <subject>Rainfall; surveillance camera; hybrid deep learning; CBAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Accurate rainfall estimation is essential for various applications, including transportation management, agriculture, and climate modeling. Traditional measurement methods, such as rain gauges and radar systems, often face challenges due to limited spatial resolution and susceptibility to environmental interferences. These constraints limit their ability to deliver high-resolution, real-time rainfall data, making it challenging to capture localized variations effectively. Therefore, this study aimed to introduce a hybrid deep learning architecture that combined a Convolutional Neural Network (CNN) with a Convolutional Block Attention Module (CBAM) to improve rainfall intensity estimation using images captured by surveillance cameras. The proposed model was evaluated using standard datasets and previously unseen images collected at different times of the day, including morning, noon, afternoon, and night, to assess its robustness against temporal variations. The experimental results showed that the VGG-CBAM architecture performed better than the ResNet (Residual Network)-CBAM across all evaluation metrics, achieving a coefficient of determination (R&#178;) of 0.93 compared to 0.89. Furthermore, when tested on unseen images captured at different periods, the model showed strong generalization capability, with correlation values (R) ranging from 0.77 to 0.98. These results signified the effectiveness of the proposed method in improving the accuracy and adaptability of image-based rainfall estimation, offering a scalable and high-resolution alternative to conventional measurement methods.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_66-Improvement_of_Rainfall_Estimation_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WOAAEO: A Hybrid Whale Optimization and Artificial Ecosystem Optimization Algorithm for Energy-Efficient Clustering in Internet of Things-Enabled Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160465</link>
        <id>10.14569/IJACSA.2025.0160465</id>
        <doi>10.14569/IJACSA.2025.0160465</doi>
        <lastModDate>2025-05-01T08:03:33.8970000+00:00</lastModDate>
        
        <creator>Shengnan BAI</creator>
        
        <creator>Ningning LIU</creator>
        
        <creator>Yongbing JI</creator>
        
        <creator>Kecheng WANG</creator>
        
        <subject>Clustering; Internet of Things; energy efficiency; wireless sensor network; network lifespan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>In the Internet of Things (IoT) era, energy efficiency in Wireless Sensor Networks (WSNs) is of utmost importance given the finite power resources of sensor nodes. Efficient Cluster Head (CH) selection greatly influences network performance and lifetime. This paper suggests a novel energy-efficient clustering protocol, called WOAAEO, that hybridizes the Whale Optimization Algorithm (WOA) and Artificial Ecosystem Optimization (AEO). It utilizes the exploration capabilities of AEO and the exploitation strengths of WOA to optimize CH selection and balance energy consumption and network efficiency. The proposed method is structured into two phases: CH selection using the WOAAEO algorithm and cluster formation based on Euclidean distance. The new method was modeled in MATLAB and compared with current algorithms. Results show that WOAAEO increases the network lifetime by up to 24%, enhances the packet delivery rate by up to 21%, and reduces energy consumption by up to 35% compared to related algorithms. These results indicate that WOAAEO is a suitable solution for resolving energy-saving issues in WSNs and can thus be readily applied in IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_65-WOAAEO_A_Hybrid_Whale_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Efficient Cloud Computing Through Reinforcement Learning-Based Workload Scheduling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160464</link>
        <id>10.14569/IJACSA.2025.0160464</id>
        <doi>10.14569/IJACSA.2025.0160464</doi>
        <lastModDate>2025-05-01T08:03:33.8670000+00:00</lastModDate>
        
        <creator>Ashwini R Malipatil</creator>
        
        <creator>M E Paramasivam</creator>
        
        <creator>Dilfuza Gulyamova</creator>
        
        <creator>Aanandha Saravanan</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Refka Ghodhbani</creator>
        
        <subject>Cloud computing; energy efficiency; reinforcement learning; virtual machine; workload scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Cloud computing forms the basis of current digital infrastructure, allowing scalable, on-demand access to computational resources. However, data center power consumption has skyrocketed with increasing demand, raising operating costs and the environmental footprint. Traditional workload scheduling algorithms often prioritize performance and cost over energy efficiency. This paper proposes a workload scheduling method utilizing deep reinforcement learning (DRL) that adjusts dynamically to present cloud conditions to ensure optimal energy efficiency without compromising performance. The proposed method utilizes Deep Q-Networks (DQN) with feature engineering to identify key workload parameters, such as execution time and CPU and memory consumption, and subsequently schedules tasks intelligently based on these results. In the evaluation, the model reduces latency to 15 ms and raises throughput to 500 tasks/sec, with 92% load-balancing efficiency, 95% resource usage, and 97% QoS. The proposed approach yields improved performance on key parameters compared to conventional approaches such as Round Robin, FCFS, and heuristic methods. These findings show how reinforcement learning can significantly enhance the scalability, reliability, and sustainability of cloud environments. Future work will focus on enhancing fault tolerance, incorporating federated learning for decentralized optimization, and testing the model on real-world multi-cloud infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_64-Energy_Efficient_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pose Estimation of Spacecraft Using Dual Transformers and Efficient Bayesian Hyperparameter Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160463</link>
        <id>10.14569/IJACSA.2025.0160463</id>
        <doi>10.14569/IJACSA.2025.0160463</doi>
        <lastModDate>2025-05-01T08:03:33.8330000+00:00</lastModDate>
        
        <creator>N. Kannaiya Raja</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>N. Konda Reddy</creator>
        
        <creator>Vanipenta Ravi Kumar</creator>
        
        <creator>Prasad Devarasetty</creator>
        
        <subject>Dual-channel transformer model; Bayesian optimization; EfficientNet; pose estimation; SLAB dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Spacecraft pose estimation is an essential enabler of central space mission activities such as autonomous navigation, rendezvous, docking, and on-orbit servicing. Nonetheless, methods like Convolutional Neural Networks (CNNs), Simultaneous Localization and Mapping (SLAM), and Particle Filtering suffer significant drawbacks when deployed in space. Such techniques tend to have high computational complexity, limited generalization to varied or unknown conditions (the domain generalization problem), and accuracy degradation under noise from space-environment causes such as fluctuating lighting, sensor limitations, and background interference. To overcome these challenges, this study suggests a new solution that combines a Dual-Channel Transformer Network with Bayesian Optimization. At the core of the innovation is the use of EfficientNet, augmented with squeeze-and-excitation attention modules, to extract feature-rich representations without sacrificing computational efficiency. The dual-channel architecture decomposes satellite pose estimation into two dedicated streams: translational data prediction and orientation estimation via quaternion-based activation functions for rotational precision. Activation maps are transformed into transformer-compatible sequences via 1&#215;1 convolutions, enabling effective learning in the transformer&#39;s encoder-decoder system. To maximize model performance, Bayesian Optimization with Gaussian Process Regression and the Upper Confidence Bound (UCB) acquisition function selects optimal hyperparameters with fewer queries, conserving time and resources. The entire framework, implemented in Python and validated on the SLAB Satellite Pose Estimation Challenge dataset, achieved an outstanding Mean IOU of 0.9610, reflecting higher accuracy than standard models. In total, this research sets a new standard for spacecraft pose estimation by marrying the versatility of deep learning with probabilistic optimization to underpin the next generation of intelligent, autonomous space systems.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_63-Pose_Estimation_of_Spacecraft_Using_Dual_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Quantum-Resilient Blockchain Frameworks: Enhancing Transactional Security with Quantum Algorithms in Decentralized Ledgers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160462</link>
        <id>10.14569/IJACSA.2025.0160462</id>
        <doi>10.14569/IJACSA.2025.0160462</doi>
        <lastModDate>2025-05-01T08:03:33.8030000+00:00</lastModDate>
        
        <creator>Meenal R Kale</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>L. Sathiya</creator>
        
        <creator>Vijay Kumar Burugari</creator>
        
        <creator>Erkiniy Yulduz</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Rakan Alanazi</creator>
        
        <subject>Quantum resilience; blockchain security; Quantum Key Distribution (QKD); Post-Quantum Cryptography (PQC); Quantum Random Number Generation (QRNG); decentralized ledger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Quantum computing is progressing at a fast rate, and there is a real threat that classical cryptographic methods could be compromised, thereby undermining the security of blockchain networks. The techniques traditionally used to secure blockchains, such as Rivest–Shamir–Adleman (RSA), Elliptic Curve Cryptography (ECC), and Secure Hash Algorithm 256-bit (SHA-256), are classical cryptographic methods vulnerable to quantum algorithms: Shor’s algorithm can efficiently break asymmetric encryption, while Grover’s algorithm can speed up brute-force attacks. Because of this vulnerability, there is a need to develop an advanced quantum-resilient blockchain framework to protect decentralized ledgers from future quantum threats. This research proposes an architectural integration of Post-Quantum Cryptography (PQC), Quantum Key Distribution (QKD), and Quantum Random Number Generation (QRNG) to fortify blockchain security. PQC replaces classical encryption, QKD secures key exchange by detecting eavesdropping, and QRNG improves cryptographic randomness to remove predictable-key vulnerabilities. With only a small loss of transaction efficiency, the framework increases transaction encryption accuracy, key exchange security, and resistance to quantum attacks. This quantum-enhanced blockchain design preserves decentralization, transparency, and security while overcoming future quantum threats. Through rigorous analysis and comparative evaluation, we demonstrate that the approach protects blockchain networks from emerging quantum risks, safeguarding decentralized finance, smart contracts, and cross-chain transactions.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_62-Designing_Quantum_Resilient_Blockchain_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Code Analysis to Fault Localization: A Survey of Graph Neural Network Applications in Software Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160461</link>
        <id>10.14569/IJACSA.2025.0160461</id>
        <doi>10.14569/IJACSA.2025.0160461</doi>
        <lastModDate>2025-05-01T08:03:33.7870000+00:00</lastModDate>
        
        <creator>Maojie PAN</creator>
        
        <creator>Shengxu LIN</creator>
        
        <creator>Zhenghong XIAO</creator>
        
        <subject>Graph neural networks; fault localization; code analysis; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Graph Neural Networks (GNNs) represent a class of deep machine learning algorithms for analyzing and processing graph-structured data. Most software development activities, such as fault localization, code analysis, and measures of software quality, are inherently graph-like. This survey assesses GNN applications in different subfields of software engineering, with special attention to defect identification and other quality assurance processes. A summary of the current state-of-the-art is presented, highlighting important advances in GNN methodologies and their application in software engineering. Further, the factors that limit current solutions for a wider range of tasks are also considered, including scalability, interpretability, and compatibility with other tools. Suggestions for future work are presented, including the development of new GNN architectures, improvements to GNN interpretability, and the design of large-scale datasets for GNNs. The survey therefore provides detailed insight into how the application of GNNs offers the possibility of enhancing software development processes and the quality of the final product.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_61-From_Code_Analysis_to_Fault_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Modeling of a Dynamic Adaptive Hypermedia System Based on Learners&#39; Needs and Profile</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160460</link>
        <id>10.14569/IJACSA.2025.0160460</id>
        <doi>10.14569/IJACSA.2025.0160460</doi>
        <lastModDate>2025-05-01T08:03:33.7400000+00:00</lastModDate>
        
        <creator>Mohamed Benfarha</creator>
        
        <creator>Mohammed Sefian Lamarti</creator>
        
        <creator>Mohamed Khaldi</creator>
        
        <subject>Design; adaptive hypermedia; learning styles; user modeling; UML models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study presents the design and modeling of an adaptive hypermedia system, capable of dynamically adjusting to the needs and characteristics of each learner according to their profile. In the digital age, where digital content must respond to varied profiles and adapt to learners&#39; preferences and skills, this system offers a personalized approach that improves the learning and interaction experience with learning environments. This work analyzes the different types of learner profiles in order to identify the key criteria for effective personalization. Based on this, the authors developed a model of an adaptive and dynamic hypermedia system capable of adapting in real time. To ensure a clear and coherent structure, UML (Unified Modeling Language) modeling is used extensively. Preliminary results show that this system offers a relevant and targeted experience, increasing learner engagement and satisfaction and making learning both more relevant and more enjoyable. This work paves the way for future research on the optimization of hypermedia systems by further integrating the individual behaviors of learners into a truly adaptive learning environment that values the potential of each learner.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_60-Design_and_Modeling_of_a_Dynamic_Adaptive_Hypermedia_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smoke Detection Model with Adaptive Feature Alignment and Two-Channel Feature Refinement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160459</link>
        <id>10.14569/IJACSA.2025.0160459</id>
        <doi>10.14569/IJACSA.2025.0160459</doi>
        <lastModDate>2025-05-01T08:03:33.7100000+00:00</lastModDate>
        
        <creator>Yuanpan Zheng</creator>
        
        <creator>Binbin Chen</creator>
        
        <creator>Zeyuan Huang</creator>
        
        <creator>Yu Zhang</creator>
        
        <creator>Chao Wang</creator>
        
        <creator>Xuhang Liu</creator>
        
        <subject>Smoke detection model; adaptive feature alignment; two-channel feature refinement; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>To address missed detections and low accuracy in existing smoke detection algorithms when dealing with variable smoke patterns in small-scale objects and complex environments, FAR-YOLO is proposed as an enhanced smoke detection model based on YOLOv8. The model adopts a Fast-C2f structure to reduce the number of parameters. An Adaptive Feature Alignment Module (AFAM) is introduced to enhance semantic information retrieval for small targets by merging and aligning features across different layers during point sampling. In addition, FAR-YOLO employs an Attention-Guided Head (AG-Head) in which a feature guiding branch integrates critical information from both the localization and classification tasks. FAR-YOLO refines key features using a Dual-Feature Refinement Attention Module (DFRAM) to provide complementary guidance for both tasks. Experimental results demonstrate that FAR-YOLO improves detection accuracy compared to existing methods, with a 3.5% increase in Precision and a 4.0% increase in AP50 over YOLOv8. Meanwhile, the model reduces the number of parameters by 0.46M and achieves an FPS of 135, making it suitable for real-time smoke detection in challenging conditions and ensuring reliable performance in various scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_59-Smoke_Detection_Model_with_Adaptive_Feature_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Cybersecurity Through Artificial Intelligence: A Novel Approach to Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160458</link>
        <id>10.14569/IJACSA.2025.0160458</id>
        <doi>10.14569/IJACSA.2025.0160458</doi>
        <lastModDate>2025-05-01T08:03:33.6930000+00:00</lastModDate>
        
        <creator>Mohammed K. Alzaylaee</creator>
        
        <subject>Intrusion detection; machine learning; deep learning; zero-day attacks; anomaly detection; feature selection; reinforcement learning; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Modern cyber threats have evolved to sophisticated levels, necessitating advanced intrusion detection systems (IDS) to protect critical network infrastructure. Traditional signature-based and rule-based IDS face challenges in identifying new and evolving attacks, leading organizations to adopt AI-driven detection solutions. This study introduces an AI-powered intrusion detection system that integrates machine learning (ML) and deep learning (DL) techniques—specifically Support Vector Machines (SVM), Random Forests, Autoencoders, and Convolutional Neural Networks (CNNs)—to enhance detection accuracy while reducing false positive alerts. Feature selection techniques such as SHAP-based analysis are employed to identify the most critical attributes in network traffic, improving model interpretability and efficiency. The system also incorporates reinforcement learning (RL) to enable adaptive intrusion response mechanisms, further enhancing its resilience against evolving threats. The proposed hybrid framework is evaluated using the SDN_Intrusion dataset, achieving an accuracy of 92.8%, a false positive rate of 5.4%, and an F1-score of 91.8%, outperforming conventional IDS solutions. Comparative analysis with prior studies demonstrates its superior capability in detecting both known and unknown threats, particularly zero-day attacks and anomalies. While the system significantly enhances security coverage, challenges in real-time implementation and computational overhead remain. This paper explores potential solutions, including federated learning and explainable AI techniques, to optimize IDS functionality and adaptive capabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_58-Enhancing_Cybersecurity_Through_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Load Optimization in Digital (ESL) Learning: A Hybrid BERT and FNN Approach for Adaptive Content Personalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160457</link>
        <id>10.14569/IJACSA.2025.0160457</id>
        <doi>10.14569/IJACSA.2025.0160457</doi>
        <lastModDate>2025-05-01T08:03:33.6630000+00:00</lastModDate>
        
        <creator>Komminni Ramesh</creator>
        
        <creator>Christine Ann Thomas</creator>
        
        <creator>Joel Osei-Asiamah</creator>
        
        <creator>Bhuvaneswari Pagidipati</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>B. V. Suresh Reddy</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Cognitive load management; artificial intelligence-based English as a secondary language learning; adaptive content personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Traditional English as a Secondary Language (ESL) learning platforms rely on static content delivery, often failing to adapt to individual learners’ cognitive capacities, leading to inefficient comprehension and increased cognitive load. A novel hybrid Feedforward Neural Network and Bidirectional Encoder Representations from Transformers (FNN-BERT) framework is proposed to perform dynamic content personalization through real-time predictions of cognitive load. The approach combines Feedforward Neural Networks (FNN) with Bidirectional Encoder Representations from Transformers (BERT) to process behavioral analytics for optimized content complexity adjustment and adaptive, scalable learning delivery. Limitations in real-time adaptability, scalability, and the high computational needs of current models reduce their effectiveness in personalized learning environments. Using Test of English for International Communication (TOEIC), International English Language Testing System (IELTS), and Test of English as a Foreign Language (TOEFL) datasets, the methodology uses the FNN to forecast cognitive load based on student engagement behaviors and application errors, while BERT adjusts content difficulty automatically. The proposed model delivers a 95.3% accuracy rate, 96.22% precision, 96.1% recall, and 97.2% F1-score, surpassing conventional Artificial Intelligence-based ESL learning systems. The system is implemented in Python to improve comprehension, student focus, and mental processing speed. Personalized content presentation lowers cognitive strain while advancing student achievement. The research adds value to smart educational frameworks through its introduction of a scalable framework for adaptable ESL learning systems. Future work includes simplifying system complexity, adding multimodal learning signals such as eye monitoring and speech recognition, and extending the model across various educational subject areas. The research serves as a promising foundation for AI-driven real-time adaptive education systems for students from various backgrounds.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_57-Cognitive_Load_Optimization_in_Digital_ESL_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Predictive Analytics for CRM to Enhance Retention Personalization and Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160456</link>
        <id>10.14569/IJACSA.2025.0160456</id>
        <doi>10.14569/IJACSA.2025.0160456</doi>
        <lastModDate>2025-05-01T08:03:33.6170000+00:00</lastModDate>
        
        <creator>Yashika Gaidhani</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Sanjit Singh</creator>
        
        <creator>Reetika Dagar</creator>
        
        <creator>T Subha Mastan Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Artificial Intelligence; predictive analytics; customer relationship management; natural language processing; churn prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The advent of Artificial Intelligence (AI) has dramatically altered Customer Relationship Management (CRM) by allowing organizations to anticipate customer behavior, customize interactions, and automate service delivery. This research introduces an extensive AI-based predictive analytics framework aimed at improving customer engagement, retention, and satisfaction using advanced Machine Learning (ML) and Natural Language Processing (NLP) methodologies. By using XGBoost for churn prediction and BERT-based models for sentiment analysis, the system efficiently handles both structured and unstructured customer data. The methodology involves sophisticated feature engineering, customer segmentation via K-Means clustering, and Customer Lifetime Value (CLV) prediction to aid data-driven business strategies. An NLP-driven chatbot offers real-time, personalized support, reducing response time and improving user experience. Evaluation metrics such as accuracy, precision, recall, and F1-score demonstrate the superior performance of the proposed system compared to conventional CRM approaches. This work also addresses important issues such as data privacy compliance, algorithmic bias, and explainability of AI decision-making. Ethical deployment and transparency of AI are emphasized for building confidence in automated CRM systems. Future work will tackle the use of reinforcement learning to facilitate learning-based interaction schemes and federated learning for trusted, decentralized management of data. This architecture not only provides better CRM functionality but also builds a platform for intelligent, responsible, and scalable customer-relations solutions across industries.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_56-AI_Driven_Predictive_Analytics_for_CRM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Brain Network Stimulation for Emotion Analyzing Connectivity Feature Map from Electroencephalography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160455</link>
        <id>10.14569/IJACSA.2025.0160455</id>
        <doi>10.14569/IJACSA.2025.0160455</doi>
        <lastModDate>2025-05-01T08:03:33.5830000+00:00</lastModDate>
        
        <creator>Mahfuza Akter Maria</creator>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Md Abdus Samad Kamal</creator>
        
        <subject>Brain connectivity; connectivity feature map; electroencephalography; emotion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>In understanding brain functioning through Electroencephalography (EEG), it is essential not only to identify the more active brain areas but also to understand connectivity among different areas. This study examines the functional and effective connectivity networks of the brain by constructing a connectivity feature map (CFM) with four widely used connectivity methods from the Database for Emotion Analysis Using Physiological Signals (DEAP) emotional EEG data, investigating how connectivity patterns are influenced by emotion. According to the results, emotions are mainly related to the parietal, central, and frontal regions, with the parietal region most responsible for emotion alteration. Positive emotions are associated with more direct correlations and dependencies than negative ones. When experiencing negative emotions, brain regions function more synchronously and there is less information flow. Whether direct or inverse, there is less correlation between brain regions in the higher frequency band than in the lower frequency band. Higher frequencies are associated with increased dependence and directed information transfer between brain regions. Generally, electrodes in the same lobe show stronger connectivity than those in different lobes. Overall, the present study is a comprehensive analysis of brain network stimulation for emotion from EEG, and it differs significantly from existing emotion recognition studies, which typically focus on recognition proficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_55-Understanding_Brain_Network_Stimulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Crow Search Algorithm for Hierarchical Clustering in Internet of Things-Enabled Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160454</link>
        <id>10.14569/IJACSA.2025.0160454</id>
        <doi>10.14569/IJACSA.2025.0160454</doi>
        <lastModDate>2025-05-01T08:03:33.5070000+00:00</lastModDate>
        
        <creator>Lingwei WANG</creator>
        
        <creator>Hua WANG</creator>
        
        <subject>Internet of things; wireless sensor networks; clustering; energy efficiency; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The Internet of Things (IoT) relies on efficient Wireless Sensor Networks (WSNs) for data collection and transmission in various applications, including smart cities, industrial automation, and environmental monitoring. Clustering is a fundamental technique for structuring WSNs hierarchically, enabling load balancing, reducing energy consumption, and extending network lifespan. However, clustering optimization in WSNs is an NP-hard problem, necessitating heuristic and metaheuristic approaches. This study introduces an Adaptive Crow Search Algorithm (A-CSA) for clustering in IoT-enabled WSNs, addressing the inherent limitations of the standard CSA, such as premature convergence and local optima entrapment. The proposed A-CSA incorporates three key enhancements: (1) a dynamic awareness probability to improve global search efficiency during initial population selection, (2) a systematic leader selection mechanism to enhance exploitation and avoid random selection bias, and (3) an adaptive local search strategy to refine cluster formation. Performance evaluations conducted under varying network configurations, including node density, network size, and base station positioning, demonstrate that A-CSA outperforms existing clustering approaches in terms of energy efficiency, network longevity, and data transmission reliability. The results highlight the potential of A-CSA as a robust optimization technique for clustering in IoT-driven WSN environments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_54-Adaptive_Crow_Search_Algorithm_for_Hierarchical_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Impact of Hyper Parameters on Intrusion Detection System Using Deep Learning Based Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160453</link>
        <id>10.14569/IJACSA.2025.0160453</id>
        <doi>10.14569/IJACSA.2025.0160453</doi>
        <lastModDate>2025-05-01T08:03:33.4600000+00:00</lastModDate>
        
        <creator>Umar Iftikhar</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <subject>Artificial intelligence; learning rate; cyber threat; network intrusion detection; deep learning; data augmentation; generative adversarial networks; epochs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The effects of varying learning rates, data augmentation percentages, and numbers of epochs on the performance of Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) are evaluated in this study. The purpose of this research is to determine how these hyperparameters affect data augmentation and enhance stability during training. System performance is measured using the Classification Model Utility approach. The study thus aims to determine the interaction between learning rate, augmentation percentage, and epoch count when using WGAN-GP to generate synthetic data. The results indicate how these hyperparameters can be adjusted, with positive or negative consequences for the generation process, informing further research and use of WGAN-GP. The study also provides insights into how the generative model is trained and how that affects the stability and quality of results in settings such as image synthesis and other generative tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_53-Investigating_the_Impact_of_Hyper_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Lightweight Sign Language Recognition on Hybrid Deep CNN-BiLSTM Neural Network with Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160452</link>
        <id>10.14569/IJACSA.2025.0160452</id>
        <doi>10.14569/IJACSA.2025.0160452</doi>
        <lastModDate>2025-05-01T08:03:33.4430000+00:00</lastModDate>
        
        <creator>Gulnur Kazbekova</creator>
        
        <creator>Zhuldyz Ismagulova</creator>
        
        <creator>Gulmira Ibrayeva</creator>
        
        <creator>Almagul Sundetova</creator>
        
        <creator>Yntymak Abdrazakh</creator>
        
        <creator>Boranbek Baimurzayev</creator>
        
        <subject>Sign language recognition; CNN-BiLSTM; attention mechanism; deep learning; gesture classification; real-time processing; assistive technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Sign language recognition (SLR) plays a crucial role in bridging communication gaps for individuals with hearing and speech impairments. This study proposes a hybrid deep CNN-BiLSTM neural network with an attention mechanism for real-time and lightweight sign language recognition. The CNN module extracts spatial features from individual gesture frames, while the BiLSTM module captures temporal dependencies, enhancing classification accuracy. The attention mechanism further refines feature selection by focusing on the most relevant time steps in a sign sequence. The proposed model was evaluated on the Sign Language MNIST dataset, achieving state-of-the-art performance with high accuracy, precision, recall, and F1-score. Experimental results indicate that the model converges rapidly, maintains low misclassification rates, and effectively distinguishes between visually similar signs. Confusion matrix analysis and feature map visualizations provide deeper insights into the hierarchical feature extraction process. The results demonstrate that integrating spatial, temporal, and attention-based learning significantly improves recognition performance while maintaining computational efficiency. Despite its effectiveness, challenges such as misclassification in ambiguous gestures and real-time computational constraints remain, suggesting future improvements in multi-modal fusion, transformer-based architectures, and lightweight model optimizations. The proposed approach offers a scalable and efficient solution for real-time sign language recognition, contributing to the development of assistive technologies for individuals with communication disabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_52-Real_Time_Lightweight_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Vulnerability Analysis of Three-Factor Authentication Protocols in Internet of Things-Enabled Healthcare Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160451</link>
        <id>10.14569/IJACSA.2025.0160451</id>
        <doi>10.14569/IJACSA.2025.0160451</doi>
        <lastModDate>2025-05-01T08:03:33.4270000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Three-factor authentication; IoT healthcare security; multi-factor authentication; side-channel attack mitigation; replay attack prevention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study evaluates a three-factor authentication protocol designed for IoT healthcare systems, identifying several key vulnerabilities that could compromise its security. The analysis reveals weaknesses in single-factor authentication, time synchronization, side-channel attacks, and replay attacks. To address these vulnerabilities, the study proposes a series of enhancements, including the implementation of multi-factor authentication (MFA) to strengthen user verification processes and the inclusion of timestamps or nonces in messages to prevent replay attacks. Additionally, the adoption of advanced cryptographic techniques, such as masking and shuffling, can mitigate side-channel attacks by minimizing information leakage during encryption. The use of message authentication codes (MACs) ensures communication integrity by verifying message authenticity. These improvements aim to fortify the protocol&#39;s security framework, ensuring the protection of sensitive medical data. Future research directions include exploring adaptive security policies leveraging artificial intelligence and optimizing cryptographic operations to enhance efficiency. These efforts are essential for maintaining the protocol&#39;s resilience against evolving threats and ensuring the secure operation of IoT-based healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_51-Comprehensive_Vulnerability_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cross-Chain Mechanism Based on Hierarchically Managed Notary Group</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160450</link>
        <id>10.14569/IJACSA.2025.0160450</id>
        <doi>10.14569/IJACSA.2025.0160450</doi>
        <lastModDate>2025-05-01T08:03:33.4130000+00:00</lastModDate>
        
        <creator>Hongliang Tian</creator>
        
        <creator>Zhiyang Ruan</creator>
        
        <creator>Zhong Fan</creator>
        
        <subject>Blockchain; cross-chain; notary group; hierarchical management; reputation evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Blockchain technology, characterized by decentralization, immutability, traceability, and transparency, provides innovative solutions for data management. However, the limited cross-chain interoperability between blockchains hampers their broader application and development. To address this challenge, this paper proposes a Cross-Chain Mechanism Based on a Hierarchically Managed Notary Group, abbreviated as HMNG-CCM, which enables secure and efficient cross-chain transactions between blockchains. To mitigate the centralization issue inherent in traditional notary-based cross-chain mechanisms, an innovative notary group management approach is introduced. This approach implements hierarchical management by categorizing notaries into three levels (junior, intermediate, and senior), thereby effectively mitigating the centralization problem. Additionally, a functional division mechanism for notaries is designed, wherein the roles of transaction processing and verification within the cross-chain transaction process are separated to enhance system reliability. Furthermore, to tackle the complexity of notary reputation evaluation, a reputation assessment scheme based on an improved PageRank algorithm is proposed, with differentiated reputation evaluation strategies for junior and intermediate notaries to ensure fairness and rationality in the assessment process. The effectiveness of this scheme is validated through experiments conducted on the Hyperledger Fabric platform. The experimental results demonstrate that the proposed mechanism exhibits strong robustness against malicious notaries while significantly improving transaction speed and success rate. This study offers new theoretical and practical foundations for the optimization and advancement of blockchain cross-chain technology.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_50-A_Cross_Chain_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Database-Based Cooperative Scheduling Optimization of Multiple Robots for Smart Warehousing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160449</link>
        <id>10.14569/IJACSA.2025.0160449</id>
        <doi>10.14569/IJACSA.2025.0160449</doi>
        <lastModDate>2025-05-01T08:03:33.3970000+00:00</lastModDate>
        
        <creator>Zhenglu Zhi</creator>
        
        <subject>Database; intelligent warehousing; robotics; cooperative scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study investigates the current state and future directions of cooperative scheduling optimization for multiple robots in smart warehousing environments. With the rapid growth of logistics automation, optimizing the collaboration between intelligent robots has become essential for improving warehouse efficiency and adaptability. The research employs a bibliometric analysis based on the Web of Science (WoS) database, using VOSviewer for keyword co-occurrence, clustering, and density visualization to identify key research hotspots, knowledge structures, and technological trends. The analysis categorizes the field into four major research clusters: robot path planning and navigation, warehouse system optimization and order picking, algorithm design and performance evaluation, and the application of emerging technologies such as edge computing and cloud robotics. Results show a growing emphasis on dynamic scheduling, real-time data integration, and multi-objective optimization, with increasing use of technologies like deep reinforcement learning and digital twins. The study also incorporates real-world case comparisons from leading domestic and international enterprises, revealing implementation challenges and performance benchmarks. Although promising advancements are evident, issues such as fragmented data systems, limited real-time responsiveness, and insufficient cross-disciplinary integration persist. The study concludes that future research should focus on improving environmental adaptability through edge computing, standardizing robot collaboration protocols, and enhancing system robustness via real-time database architectures. By bridging theoretical insights with practical needs, this research offers a comprehensive foundation for developing next-generation intelligent warehousing systems based on coordinated multi-robot scheduling.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_49-Database_Based_Cooperative_Scheduling_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experiential Landscape Design Using the Integration of Three-Dimensional Animation Elements and Overlay Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160448</link>
        <id>10.14569/IJACSA.2025.0160448</id>
        <doi>10.14569/IJACSA.2025.0160448</doi>
        <lastModDate>2025-05-01T08:03:33.3500000+00:00</lastModDate>
        
        <creator>Mingjing Sun</creator>
        
        <creator>Ming Wei</creator>
        
        <subject>3D animation integration; overlay method; experiential landscape design; user immersive experience; evaluation system design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This work aims to optimize users&#39; immersive experiences, enhance design effectiveness, and construct a scientific evaluation system for landscape design. The work begins with the collection and analysis of spatial data from the landscape design area, using 3D animation technology to generate visual models and virtually reconstruct key landscape elements. Next, the overlay method is applied to visually stratify elements within the space, progressively building a multi-layered, logical spatial structure to enhance realism and information communication efficiency in landscape design. To evaluate design effectiveness, a user experience questionnaire and behavior tracking experiments are designed. The questionnaire covers three dimensions: immersion, satisfaction, and interactivity, while the behavioral tracking experiment collects data on user dwell time and gaze movement in virtual scenes. Results indicate that the design scheme based on 3D animation and layering significantly outperforms traditional designs in terms of immersive experience, clarity of structure, and user engagement. In the questionnaire, the average satisfaction rating for the design scheme is 4.7 (out of 5), with an immersion rating average of 4.8. The behavioral tracking experiment shows a 40% increase in dwell time compared to traditional designs, and users&#39; willingness to revisit improves by 26% compared to the control group. This work innovatively applies 3D animation and overlay methods to experiential landscape design, confirming the practical value of this method in optimizing user experience and design effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_48-Experiential_Landscape_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stochastic Nonlinear Analysis of Internet of Things Network Performance and Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160447</link>
        <id>10.14569/IJACSA.2025.0160447</id>
        <doi>10.14569/IJACSA.2025.0160447</doi>
        <lastModDate>2025-05-01T08:03:33.3170000+00:00</lastModDate>
        
        <creator>Junzhou Li</creator>
        
        <creator>Feixian Sun</creator>
        
        <subject>Internet of Things; security; stochastic nonlinearity; support vector machines; grey wolf optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>To address the poor performance of traditional methods for Internet of Things (IoT) network performance and security analysis, this research uses a support vector machine for IoT network security situation assessment. It also introduces a grey wolf optimization algorithm improved by a genetic algorithm, and designs a stochastic nonlinear integrated algorithm for IoT network performance. In the performance test, the mean absolute error, root mean square error, and mean absolute percentage error of the integrated algorithm were 0.0064, 0.041, and 0.0013, respectively, significantly lower than those of the other four algorithms, demonstrating its higher prediction accuracy. The recall of the integrated algorithm was 93.7% and its F1 value was 0.94, significantly higher than those of the comparative algorithms, demonstrating better overall performance. In the analysis of practical application, when access control was performed by the integrated algorithm, the predicted curve largely overlapped with the actual curve, indicating a better fit. The communication overhead of the integrated algorithm was 81.3 KB, significantly lower than that of the other two algorithms, and its average communication time was 3.59 s, also lower than the other two algorithms, showing that it can effectively reduce communication cost and delay. The integrated algorithm effectively improves IoT network security situation assessment, providing reliable technical support for IoT network security protection and offering substantial practical value.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_47-Stochastic_Nonlinear_Analysis_of_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Control System of Water Source Heat Pump Based on Fuzzy PID Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160446</link>
        <id>10.14569/IJACSA.2025.0160446</id>
        <doi>10.14569/IJACSA.2025.0160446</doi>
        <lastModDate>2025-05-01T08:03:33.3030000+00:00</lastModDate>
        
        <creator>Min Dong</creator>
        
        <creator>Xue Li</creator>
        
        <creator>Yixuan Yang</creator>
        
        <creator>Zheng Li</creator>
        
        <creator>Hui He</creator>
        
        <subject>Central air conditioning system; frequency converter; fuzzy PID control; intelligent control; energy saving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study aims to enhance the control and energy efficiency of the central air conditioning system by integrating frequency conversion fuzzy control and advanced control strategies. The focus is on optimizing the motor operation of the central air conditioning system with the help of a frequency converter and improving the system&#39;s performance through adaptive control mechanisms, which is an important part of intelligent control. The research adopts frequency conversion fuzzy control for high-power motors in the central air conditioning system, using a pure proportional controller. The system’s response is analyzed, including the rise time (tr = 339.3 s) and peak interval (Ts = 633.19 s) based on unit step response data. The study also addresses the integration of cooling water heat exchange systems, such as heat pumps and plate heat exchangers, to facilitate energy recycling, achieving the goal of energy saving. System identification is performed using MATLAB’s toolbox for deep well water pump frequency conversion data, forming a basis for further simulation and optimization. The study incorporates a hybrid PID, fuzzy, and neural-network-based control strategy to handle the system’s time-varying, nonlinear characteristics. The results indicate that the hybrid control strategy significantly improves the system’s dynamic response. With a rise time of tr = 611 s, peak time of tp = 830 s, adjustment time (&#177;5%) of ts = 1140 s, and an overshoot (Mp) of 16.08%, the system exhibits better performance than conventional PID controllers, particularly in handling large lag and nonlinear behaviors. This work presents an innovative approach by combining frequency conversion fuzzy control with adaptive PID and neural networks for a more efficient air conditioning control system. The integration of cooling water heat recycling and advanced control mechanisms provides a novel solution for enhancing energy efficiency and operational performance in central air conditioning systems, which is highly relevant to energy saving and intelligent control.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_46-Design_of_Control_System_of_Water_Source_Heat_Pump.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LIFT: Lightweight Incremental and Federated Techniques for Live Memory Forensics and Proactive Malware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160445</link>
        <id>10.14569/IJACSA.2025.0160445</id>
        <doi>10.14569/IJACSA.2025.0160445</doi>
        <lastModDate>2025-05-01T08:03:33.2700000+00:00</lastModDate>
        
        <creator>Sarishma Dangi</creator>
        
        <creator>Kamal Ghanshala</creator>
        
        <creator>Sachin Sharma</creator>
        
        <subject>Live memory forensics; malware detection; federated learning; fileless malware; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Live Memory Forensics deals with acquiring and analyzing volatile memory artefacts to uncover traces of in-memory or fileless malware. Traditional forensics methods operate in a centralized manner, leading to a multitude of challenges and severely limiting the possibility of accurate and timely analysis. In this work, we propose a decentralized approach for conducting live memory forensics across different devices. The proposed federated learning-based live memory forensics model uses the FedAvg algorithm to enable a lightweight, incremental approach to live memory forensics. The study demonstrates the performance of federated learning algorithms in anomaly detection, achieving a maximum accuracy of 92.5% with Clustered Federated Learning (CFL) while maintaining a convergence time of approximately 35 communication rounds. Key features such as CPU usage and network activity contributed over 85% to the detection accuracy, emphasizing their importance in the predictive process.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_45-LIFT_Lightweight_Incremental_and_Federated_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning-Driven Cluster Head Selection for Reliable Data Transmission in Dense Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160444</link>
        <id>10.14569/IJACSA.2025.0160444</id>
        <doi>10.14569/IJACSA.2025.0160444</doi>
        <lastModDate>2025-05-01T08:03:33.2400000+00:00</lastModDate>
        
        <creator>Longyang Du</creator>
        
        <creator>Qingxuan Wang</creator>
        
        <creator>Zhigang ZHANG</creator>
        
        <subject>Energy efficiency; wireless sensor networks; clustering; reinforcement learning; fuzzy inference system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Wireless Sensor Networks (WSNs) have made significant advances towards practical applications. Data gathering in WSNs has been carried out using various techniques, such as multi-path routing, tree topologies, and clustering. Conventional systems lack a reliable and effective mechanism for dealing with end-to-end connection, traffic, and mobility problems. These deficiencies often lead to poor network performance. We propose an Internet of Things (IoT)-integrated densely distributed WSN system. The system utilizes a tree-based clustering approach dependent on the installed sensors&#39; density. The cluster head nodes are structured in a tree-based cluster to optimize the process of gathering data. Each cluster&#39;s most efficient aggregation node is selected using a fuzzy inference-based reinforcement learning technique. The decision is based on three crucial factors: algebraic connectedness, bipartivity index, and neighborhood overlap. The proposed method significantly enhances energy efficiency and outperforms existing methods in bit error rate, throughput, packet delivery ratio, and delay.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_44-Reinforcement_Learning_Driven_Cluster_Head_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Levy Arithmetic and Machine Learning-Based Intrusion Detection System for Software-Defined Internet of Things Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160443</link>
        <id>10.14569/IJACSA.2025.0160443</id>
        <doi>10.14569/IJACSA.2025.0160443</doi>
        <lastModDate>2025-05-01T08:03:33.2230000+00:00</lastModDate>
        
        <creator>Wenpan SHI</creator>
        
        <creator>Ning ZHANG</creator>
        
        <subject>Intrusion detection; internet of things; software-defined; feature selection; levy arithmetic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The convergence of Software-Defined Networking (SDN) and the Internet of Things (IoT) has enabled a more adaptable framework for managing SDN-enabled IoT (SD-IoT) applications, but it also introduces significant cyber security risks. This study proposes a lightweight and explainable intrusion detection system (IDS) based on a hybrid Levy Arithmetic Algorithm (LAA) for SD-IoT environments. By integrating Levy randomization with the Arithmetic Optimization Algorithm (AOA), the LAA enhances feature selection efficiency while minimizing computational overhead. The model was evaluated using the NSL-KDD and UNSW-NB15 datasets. Experimental results demonstrate that the LAA outperformed baseline models, achieving up to 89.2% F1-score and 95.4% precision, while maintaining 100% detection of normal behaviors. These outcomes highlight the proposed system&#39;s potential for accurate and efficient detection of cyber-attacks in resource-constrained SD-IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_43-A_Hybrid_Levy_Arithmetic_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Sparrow Search Algorithm for Flexible Job-Shop Scheduling Problem with Setup and Transportation Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160442</link>
        <id>10.14569/IJACSA.2025.0160442</id>
        <doi>10.14569/IJACSA.2025.0160442</doi>
        <lastModDate>2025-05-01T08:03:33.1930000+00:00</lastModDate>
        
        <creator>Yi Li</creator>
        
        <creator>Song Han</creator>
        
        <creator>Zhaohui Li</creator>
        
        <creator>Fan Yang</creator>
        
        <creator>Zhengyi Sun</creator>
        
        <subject>Flexible job shop scheduling; machine setup; transportation; sparrow search algorithm; earliest completion time priority</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study addresses the low production efficiency in manufacturing enterprises caused by the diversification of order products, small batches, and frequent production changeovers. Focusing on minimizing the makespan, this study establishes a Flexible Job-Shop Scheduling Problem (FJSP) model incorporating machine setup and workpiece transportation times, and proposes an improved sparrow search algorithm to effectively solve the problem. Based on the sparrow search algorithm, this study proposes a novel location update strategy that expands the search direction in each dimension and strengthens each individual’s local search capability. In addition, a critical-path-based neighborhood search strategy is introduced to enhance individual search efficiency, and an earliest completion time priority rule is employed during population initialization to further improve solution quality. Several experiments are conducted to validate the effectiveness of the improved strategy, and the results are compared with those obtained using the particle swarm optimization and gray wolf optimization algorithms to demonstrate the efficiency of the proposed model and algorithm. The improved sparrow search algorithm can effectively generate feasible solutions for large-scale problems, provide practical manufacturing scheduling schemes, and enhance the production efficiency of manufacturing enterprises.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_42-An_Improved_Sparrow_Search_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industry 4.0 for SMEs: Exploring Operationalization Barriers and Smart Manufacturing with UKSSL and APO Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160441</link>
        <id>10.14569/IJACSA.2025.0160441</id>
        <doi>10.14569/IJACSA.2025.0160441</doi>
        <lastModDate>2025-05-01T08:03:33.1470000+00:00</lastModDate>
        
        <creator>Meeravali Shaik</creator>
        
        <creator>Piyush Kumar Pareek</creator>
        
        <subject>European small and medium-sized enterprises; artificial protozoa optimizer; knowledge-based semi-supervised framework; contrastive learning algorithm; smart manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The research aimed to find out why SMEs have a hard time adopting smart manufacturing, what makes smart manufacturing operational, and whether only large companies can afford to take advantage of technological opportunities. It used a knowledge-based semi-supervised framework named Unsupervised Knowledge-based Multi-Layer Perceptron (UKMLP), which has two parts: a contrastive learning algorithm that takes the unlabeled dataset and uses it to extract feature representations, and a UKMLP that uses that representation to classify the input data using the limited labelled dataset. Next, an artificial protozoa optimizer (APO) makes the necessary adjustments. This research is based on the hypothesis that large companies may be able to exploit Small and Medium-sized Enterprises (SMEs) to their detriment in cyber-physical production systems, thus cutting them out of the market. Secondary data analysis, which involved evaluating and analyzing data that had already been collected, was crucial in accomplishing the research purpose. Since big companies are usually the center of attention in these discussions, the necessity to delve into this subject stems from the reality that SMEs have a higher research need. The results confirmed the importance of Industry 4.0 in industrial production, particularly with regard to the smart process planning offered by algorithms for virtual simulation and deep learning. The report also covered the various connection choices available to SMEs in order to improve business productivity through the use of autonomous robotic technology and machine intelligence. This research suggests that a substantial value-added opportunity may lie in the way Industry 4.0 interacts with the economic organization of companies.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_41-Industry_4_0_for_SMEs_Exploring_Operationalization_Barriers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Obesity Risk Level (ORL) Based on Combination of K-Means and XGboost Algorithms to Predict Childhood Obesity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160440</link>
        <id>10.14569/IJACSA.2025.0160440</id>
        <doi>10.14569/IJACSA.2025.0160440</doi>
        <lastModDate>2025-05-01T08:03:33.1170000+00:00</lastModDate>
        
        <creator>Ghaidaa Hamed Alharbi</creator>
        
        <creator>Mohammed Abdulaziz Ikram</creator>
        
        <subject>Prediction system; Childhood obesity; K-Means; XGBoost; Machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Childhood obesity is a common and serious public health problem that requires early prevention measures. Identifying children at risk of obesity is crucial for timely interventions that aim to mitigate these adverse health outcomes. Machine learning (ML) offers powerful tools to predict obesity and related complications using large and diverse data sources. The article uses ML techniques to analyze children&#39;s data, focusing on a newly developed variable, the Obesity Risk Level (ORL), which categorizes participants into high, medium, and low risk levels. Two primary models were utilized: the K-Means algorithm for clustering participants based on shared characteristics and XGBoost for predicting the risk level and obesity likelihood. The results showed an overall prediction precision of 88.04%, with high precision, recall, and F1 scores, demonstrating the robustness of the model in identifying obesity risks. This approach provides a data-driven framework to improve health interventions and prevent childhood obesity, providing information that could shape future preventive strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_40-An_Obesity_Risk_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Prediction of Cannabis Addiction Using Cognitive Performance and Sleep Quality Evaluations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160439</link>
        <id>10.14569/IJACSA.2025.0160439</id>
        <doi>10.14569/IJACSA.2025.0160439</doi>
        <lastModDate>2025-05-01T08:03:33.0830000+00:00</lastModDate>
        
        <creator>Abdelilah Elhachimi</creator>
        
        <creator>Mohamed Eddabbah</creator>
        
        <creator>Abdelhafid Benksim</creator>
        
        <creator>Hamid Ibanni</creator>
        
        <creator>Mohamed Cherkaoui</creator>
        
        <subject>Cannabis addiction; machine learning; cognitive assessment; sleep quality; predictive modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Cannabis addiction remains a growing public health concern, particularly due to its impact on cognition and sleep quality. Conventional screening tools, such as structured interviews and self-assessments, often lack objectivity and sensitivity. This study aims to develop and compare machine learning (ML) models for the prediction of cannabis addiction using cognitive performance (Montreal Cognitive Assessment – MoCA) and sleep quality (Pittsburgh Sleep Quality Index – PSQI) features. A total of 200 participants aged 13 to 24 were assessed, including 103 diagnosed addicts and 97 controls. Principal Component Analysis (PCA) was used to reduce data dimensionality and enhance model robustness. The study evaluated six supervised machine learning algorithms, namely Logistic Regression (LR), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP). Results showed that LR and MLP models achieved high sensitivity (85.71%) and specificity (100%) on the test set, outperforming the DSM-5-based CUD reference test (sensitivity = 71.43%). Although the RF and XGBoost models achieved perfect classification on the training set, their reduced performance on the test set indicates a potential overfitting issue. Integrating machine learning with validated psychometric assessments enables a more accurate and objective identification of cannabis addiction at early stages, thus supporting timely interventions and more effective prevention strategies.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_39-Machine_Learning_Based_Prediction_of_Cannabis_Addiction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Management Controller for Bi-Directional EV Charging System Using Prioritized Energy Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160438</link>
        <id>10.14569/IJACSA.2025.0160438</id>
        <doi>10.14569/IJACSA.2025.0160438</doi>
        <lastModDate>2025-05-01T08:03:33.0530000+00:00</lastModDate>
        
        <creator>Ezmin Abdullah</creator>
        
        <creator>Muhammad Wafiy Firdaus Jalil</creator>
        
        <creator>Nabil M. Hidayat</creator>
        
        <subject>Energy management controller; Bi-directional EV charging system; safety features; control algorithms; energy flow optimization; EV battery protection; testing and validation; thingsboard platform; InfluxDB database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The growing adoption of electric vehicles (EVs) has intensified the need for efficient, intelligent, and grid-independent bi-directional charging systems. Conventional EV charging solutions heavily rely on grid electricity, leading to high energy costs, grid instability, and low renewable energy utilization. Existing bi-directional charging systems often lack real-time prioritization of energy sources, fail to optimize solar and energy storage system (ESS) usage, and do not incorporate adaptive control mechanisms for varying grid conditions. To address these gaps, this study proposes an Energy Management Controller (EMC) for bi-directional EV charging, integrating a prioritized solar-to-ESS-to-grid energy distribution strategy to maximize renewable energy usage while ensuring system stability and cost efficiency. The proposed EMC is implemented on an ESP32 microcontroller and manages energy flow via a 6-channel relay module. A temperature-based safety mechanism is embedded to prevent overheating, shutting down relays if the system temperature exceeds 50&#176;C. The control logic dynamically adjusts power flow based on grid stress levels, solar irradiance, ESS state of charge (SOC), and EV battery SOC. The system is monitored using ThingsBoard for real-time visualization and InfluxDB for historical data analysis. Experimental validation across 12 predefined operational scenarios demonstrated that the EMC effectively reduces grid dependency to 15%, achieves renewable energy utilization of up to 90%, and maintains a fast relay switching response time of 50 ms. The safety mechanism successfully prevents overheating, ensuring reliable operation under all test conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_38-Energy_Management_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HSI Fusion Method Based on TV-CNMF and SCT-NMF Under the Background of Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160437</link>
        <id>10.14569/IJACSA.2025.0160437</id>
        <doi>10.14569/IJACSA.2025.0160437</doi>
        <lastModDate>2025-05-01T08:03:33.0200000+00:00</lastModDate>
        
        <creator>Dapeng Zhao</creator>
        
        <creator>Yapeng Zhao</creator>
        
        <creator>Xuexia Dou</creator>
        
        <subject>HSI; NMF; sparse regularization; SCT; augmented Lagrangian method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The fusion of hyper-spectral images has important application value in fields such as remote sensing, environmental monitoring, and agricultural analysis. To improve the quality of reconstructed images, an HSI fusion method based on total variation coupled non-negative matrix factorization (TV-CNMF) and sparse constrained tensor factorization (SCT-NMF) techniques is proposed. Spectral sparsity description is enhanced through sparse regularization, image spatial characteristics are captured using differential operators, and convergence is improved by combining proximal optimization with augmented Lagrangian methods. The experiment outcomes on the AVIRIS and HYDICE datasets indicate that the proposed method achieves peak signal-to-noise ratios of 38.12 dB and 37.56 dB, respectively, and reduces spectral angle errors to 3.98&#176; and 4.12&#176;, respectively, significantly better than the other two comparative methods. The contribution of each module is further verified through ablation experiments. The complete algorithm performs the best in all indicators, verifying the synergistic effect of sparse regularization, total variation regularization, and coupled factorization strategies. In HSI fusion tasks under various complex lighting and noise conditions, the performance of the proposed algorithm is particularly excellent, fully demonstrating its robustness and applicability in complex scenes. The method proposed by the research effectively improves the fusion quality of HSI, providing an efficient and robust solution for the analysis and application of HSI.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_37-HSI_Fusion_Method_Based_on_TV_CNMF_and_SCT_NMF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bibliometric and Content Analysis of Large Language Models Research in Software Engineering: The Potential and Limitation in Software Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160436</link>
        <id>10.14569/IJACSA.2025.0160436</id>
        <doi>10.14569/IJACSA.2025.0160436</doi>
        <lastModDate>2025-05-01T08:03:32.9900000+00:00</lastModDate>
        
        <creator>Annisa Dwi Damayanti</creator>
        
        <creator>Hamdan Gani</creator>
        
        <creator>Feng Zhipeng</creator>
        
        <creator>Helmy Gani</creator>
        
        <creator>Sitti Zuhriyah</creator>
        
        <creator>Nurani</creator>
        
        <creator>Nurhayati Djabir</creator>
        
        <creator>Nur Ilmiyanti Wardani</creator>
        
        <subject>Large Language Models; LLM; software engineering; bibliometric; content analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Large Language Models (LLMs) are artificial neural networks that excel at language-related tasks. The advantages and disadvantages of using LLMs in software engineering are still being debated, but they are tools that can be utilized in the field. This study aimed to analyze LLM studies in software engineering using bibliometric and content analysis. The study data were retrieved from Web of Science and Scopus and analyzed using two popular approaches: bibliometric analysis and content analysis. VOSviewer and Bibliometrix software were used to conduct the bibliometric analysis, which applied science mapping and performance analysis approaches. Various bibliometric data, including the most frequently referenced publications, journals, and nations, were evaluated and presented. The synthetic knowledge method was then utilized for content analysis. This study examined 235 papers with 836 contributing authors, published in 123 different journals. The average number of citations per publication is 1.44. Most publications appeared in Proceedings International Conference on Software Engineering and the ACM International Conference Proceeding Series, with China and the United States emerging as the leading countries. International collaboration on the issue was found to be inadequate. The most frequently used keywords in the publications were &quot;software design,&quot; &quot;code (symbols),&quot; and &quot;code generation.&quot; The content analysis revealed three themes: 1) integration of LLMs into software engineering education, 2) application of LLMs in software engineering, and 3) potential and limitations of LLMs in software engineering. The results of this study are expected to provide researchers and academics with insights into the current state of LLM research in software engineering, allowing them to draw conclusions for future work.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_36-Bibliometric_and_Content_Analysis_of_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of SVM, Na&#239;ve Bayes, and Logistic Regression in Detecting IoT Botnet Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160435</link>
        <id>10.14569/IJACSA.2025.0160435</id>
        <doi>10.14569/IJACSA.2025.0160435</doi>
        <lastModDate>2025-05-01T08:03:32.9600000+00:00</lastModDate>
        
        <creator>Apri Siswanto</creator>
        
        <creator>Luhur Bayu Aji</creator>
        
        <creator>Akmar Efendi</creator>
        
        <creator>Dhafin Alfaruqi</creator>
        
        <creator>M. Rafli Azriansyah</creator>
        
        <creator>Yefrianda Raihan</creator>
        
        <subject>IoT security; botnet detection; machine learning; intrusion detection system; comparative analysis; SVM; na&#239;ve bayes; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid proliferation of Internet of Things (IoT) devices has significantly increased the risk of cyberattacks, particularly botnet intrusions, which pose serious security threats to IoT networks. Machine learning-based Intrusion Detection Systems (IDS) have emerged as effective solutions for detecting such attacks. This study presents a comparative analysis of three widely used machine learning classifiers—Support Vector Machine (SVM), Na&#239;ve Bayes (NB), and Logistic Regression (LR)—to assess their performance in detecting IoT botnet attacks. The experiment uses the BoTNeTIoT-L01 dataset, applying preprocessing techniques such as data cleaning, normalization, and feature selection to enhance model accuracy. The models are trained and evaluated based on standard performance metrics, including accuracy, precision, recall, F1-score, and AUC-ROC. The results indicate that SVM outperforms the other classifiers in terms of detection accuracy and robustness, particularly in detecting malware based on PE files. These findings offer valuable insights into selecting suitable machine learning models for securing IoT environments. Future work will further explore integrating advanced feature selection techniques and deep learning models to improve detection performance.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_35-Comparative_Analysis_of_SVM_Na&#239;ve_Bayes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Recommendation for Online News Based on UBCF and IBCF Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160434</link>
        <id>10.14569/IJACSA.2025.0160434</id>
        <doi>10.14569/IJACSA.2025.0160434</doi>
        <lastModDate>2025-05-01T08:03:32.9130000+00:00</lastModDate>
        
        <creator>Wei Shi</creator>
        
        <creator>Yitian Zhang</creator>
        
        <subject>IBCF algorithm; UBCF; collaborative filtering; news recommendations; label promotion network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>With the popularization of the Internet and the widespread use of mobile devices, online news has become one of the main ways for people to obtain information and understand the world. However, the growing number and variety of news items often make it difficult for users to find content of interest. To solve this problem, a personalized recommendation model for online news is designed by combining item-based collaborative filtering (IBCF) and user-based collaborative filtering (UBCF). The experimental results showed that volunteers' average scores for the model's performance, coverage, and satisfaction indicators were 85, 93, and 86, respectively. The system has high accuracy, low resource consumption, and high user satisfaction, providing a new algorithmic approach for the field of recommendation models. The contribution of this research lies not only in improving the accuracy of recommendations but also in increasing their diversity, effectively addressing the problems of data sparsity and real-time news. By introducing a tag propagation network for clustering analysis of users and items, the recommendation results are further optimized and user satisfaction is improved. In addition, the research achieves efficient data processing and storage through real-time user data collection and distributed data processing technology, which significantly improves the performance and response speed of the system.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_34-Personalized_Recommendation_for_Online_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Human Essential Genes Using Deep Learning: MLP with Adaptive Data Balancing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160433</link>
        <id>10.14569/IJACSA.2025.0160433</id>
        <doi>10.14569/IJACSA.2025.0160433</doi>
        <lastModDate>2025-05-01T08:03:32.8800000+00:00</lastModDate>
        
        <creator>Ahmed AbdElsalam</creator>
        
        <creator>Mohamed Abdallah</creator>
        
        <creator>Hossam Refaat</creator>
        
        <subject>Artificial intelligence; bioinformatics; deep learning; Multi-Layer Perceptron (MLP); imbalanced-handling techniques; essential gene prediction; sequence characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Artificial intelligence (AI) has transformed many scientific disciplines, including bioinformatics. Essential gene prediction is one important use of AI in bioinformatics, since it is necessary for understanding the biological pathways required for cellular survival and for disease diagnosis. Essential genes are fundamental for maintaining cellular life as well as for the survival and reproduction of organisms. Understanding these genes can help identify the basic needs of organisms, point out genes connected to diseases, and enable the development of new drugs. Traditional methods for identifying these genes are time-consuming and costly, so computational approaches are used as alternatives. Moreover, using deep learning techniques to overcome the restrictions of traditional machine learning and raise prediction accuracy has attracted considerable interest. In this study, a Multi-Layer Perceptron (MLP) model combined with ADASYN (adaptive synthetic sampling) is proposed to handle data imbalance. The model utilizes features from protein-protein interaction networks and from DNA and protein sequences. It achieved high performance, with a sensitivity of 0.98, overall accuracy of 0.94, and specificity of 0.96, demonstrating its effectiveness in data classification.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_33-Predicting_Human_Essential_Genes_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Artificial Intelligence in Brand Experience: Shaping Consumer Behavior and Driving Repurchase Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160432</link>
        <id>10.14569/IJACSA.2025.0160432</id>
        <doi>10.14569/IJACSA.2025.0160432</doi>
        <lastModDate>2025-05-01T08:03:32.8500000+00:00</lastModDate>
        
        <creator>Ati Mustikasari</creator>
        
        <creator>Ratih Hurriyati</creator>
        
        <creator>Puspo Dewi Dirgantari</creator>
        
        <creator>Mokh Adieb Sultan</creator>
        
        <creator>Neng Susi Susilawati Sugiana</creator>
        
        <subject>Digital marketing; artificial intelligence; brand experience; consumer behavior; repurchase intentions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid advancement of Artificial Intelligence (AI) has transformed brand experiences, influencing consumer behavior and repurchase decisions in digital marketplaces. This study aims to examine the role of AI in enhancing brand experience and its impact on consumer purchasing behavior, particularly in driving repurchase intentions. A quantitative research approach was employed, involving a sample of 340 online shoppers who have previously engaged with AI-driven brand interactions. Data were collected through a structured questionnaire and analyzed using Structural Equation Modeling (SEM) with AMOS. The findings reveal that AI-powered brand experience significantly affects consumer trust, satisfaction, and emotional engagement, which in turn positively influence repurchase decisions. The study also highlights that personalized AI-driven interactions, such as chatbots, recommendation systems, and predictive analytics, enhance consumer perception of brand value, fostering long-term loyalty. The implications of this research suggest that businesses should leverage AI technologies to create immersive and personalized brand experiences that strengthen customer retention and maximize sales performance. This study contributes to the literature by integrating AI and brand experience within the consumer decision-making framework, offering a novel perspective on AI’s role in shaping repurchase behavior. Future research could explore industry-specific AI applications and their impact on different demographic segments.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_32-The_Role_of_Artificial_Intelligence_in_Brand_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Precision Urban Air Quality Prediction Using a LSTM-Transformer Hybrid Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160431</link>
        <id>10.14569/IJACSA.2025.0160431</id>
        <doi>10.14569/IJACSA.2025.0160431</doi>
        <lastModDate>2025-05-01T08:03:32.8330000+00:00</lastModDate>
        
        <creator>Yiming Liu</creator>
        
        <creator>Mcxin Tee</creator>
        
        <creator>Liangyan Lu</creator>
        
        <creator>Fei Zhou</creator>
        
        <creator>Binggui Lu</creator>
        
        <subject>Air quality; deep learning; LSTM; transformer; multi-head attention mechanism; temporal prediction; health risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>With the acceleration of urbanization, accurate air quality prediction is crucial for environmental governance and public health risk management. Existing prediction methods still face challenges in handling complex time-series dependencies and multi-scale features. In this paper, a hybrid deep learning architecture (LT-Hybrid) based on LSTM and Transformer is proposed for high-precision air quality prediction. The model captures the long-term dependencies of time-series data through a two-layer LSTM structure, models the complex interactions among different environmental factors using a multi-head self-attention mechanism, and improves the training stability through a combination of residual connections and layer normalization. Experiments on an urban air quality dataset, containing nine dimensions of environmental characteristics such as temperature, humidity, PM2.5, etc., show that the LT-Hybrid model achieves an RMSE of 0.1021 and an R&#178; of 0.9382, reducing prediction errors by 13.0% and 5.1% compared to benchmark models of traditional LSTM and XGBoost, respectively. Accurate prediction of air quality indicators provides timely risk assessment for respiratory diseases and cardiovascular conditions, enabling proactive public health interventions. Through systematic ablation experiments and hyperparameter analysis, the validity of each core component of the model is verified, providing a high-precision prediction scheme for environmental monitoring and health risk assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_31-High_Precision_Urban_Air_Quality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Remote Sensing Image Quality and its Application Due to Off-Nadir Imaging Acquisition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160430</link>
        <id>10.14569/IJACSA.2025.0160430</id>
        <doi>10.14569/IJACSA.2025.0160430</doi>
        <lastModDate>2025-05-01T08:03:32.8030000+00:00</lastModDate>
        
        <creator>Agus Herawan</creator>
        
        <creator>Patria Rachman Hakim</creator>
        
        <creator>Ega Asti Anggari</creator>
        
        <creator>Agung Wahyudiono</creator>
        
        <creator>Mohammad Mukhayadi</creator>
        
        <creator>M. Arif Saifudin</creator>
        
        <creator>Chusnul Tri Judianto</creator>
        
        <creator>Elvira Rachim</creator>
        
        <creator>Ahmad Maryanto</creator>
        
        <creator>Satriya Utama</creator>
        
        <creator>Rommy Hartono</creator>
        
        <creator>Atriyon Julzarika</creator>
        
        <creator>Rizatus Shofiyati</creator>
        
        <subject>Land cover; land use; LAPAN-A3; microsatellite; off-nadir; revisit time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>One advantage of using microsatellites for remote sensing is their maneuverability, which allows a target area to be captured from any viewing angle based on specific needs. However, images captured under off-nadir acquisition have reduced quality in both geometric and radiometric aspects. This research aims to determine the effect of off-nadir acquisition on remote sensing image quality in general, and on the accuracy of land use land cover (LULC) applications in particular, based on LAPAN-A3 microsatellite image data. Nadir and off-nadir images of the same target, captured several days or weeks apart, are compared with the nearest Landsat-8 image data. Based on the several target images used in this research, the imaging viewing angle indeed affects the quality of the remote sensing images, both in general image quality and in LULC application accuracy. However, the degradation of LULC accuracy can be considered acceptable: in general, it can be modeled as -0.5 percent per degree, i.e., an image taken at 20 degrees off-nadir will lose about 10 percent accuracy. This result shows that the off-nadir microsatellite imaging technique can be used for specific remote sensing needs without unduly compromising quality.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_30-Assessment_of_Remote_Sensing_Image_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Multitasking Framework for Feature Selection in Road Accident Severity Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160429</link>
        <id>10.14569/IJACSA.2025.0160429</id>
        <doi>10.14569/IJACSA.2025.0160429</doi>
        <lastModDate>2025-05-01T08:03:32.7700000+00:00</lastModDate>
        
        <creator>Soumaya AMRI</creator>
        
        <creator>Mohammed AL ACHHAB</creator>
        
        <creator>Mohamed LAZAAR</creator>
        
        <subject>Feature selection; road accident; injury severity; Grey Wolf Optimizer; multitasking; knowledge transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>In machine learning studies, feature selection is a crucial step, especially when handling complex and imbalanced datasets such as those used in road traffic injury analysis. This study proposes a novel multitasking feature selection methodology that integrates the Grey Wolf Optimizer, knowledge transfer, and the CatBoost ensemble algorithm to enhance the performance and interpretability of road accident severity prediction. The main objective of this study is to identify critical features impacting the prediction of severe injury cases in road accidents. The proposed framework integrates several steps to handle the complexities related to feature selection. The fitness function of the Grey Wolf Optimizer model is designed to prioritize the classification accuracy of the severe injury class. To mitigate early convergence of the model, a knowledge transfer mechanism that generates new wolf instances based on a historical record of previously used wolves is integrated within a multitasking process. To evaluate the prediction performance of the generated feature subsets, the CatBoost algorithm is employed in the evaluation step to assess the effectiveness of the proposed approach. By integrating this three-step methodology, which combines a metaheuristic feature selection technique with knowledge transfer through a multitasking process, the proposed framework enhances generalization, reduces prediction model complexity, and handles imbalanced distributions, yielding a feature selection model that overcomes key limitations of traditional methods. Applied to real-world road crash data, the methodology significantly improves the identification of factors impacting injury severity. Experimental results demonstrate enhanced model performance, reduced complexity, and deeper insights into the factors contributing to traffic injuries. These findings highlight the potential of advanced machine learning techniques in improving road safety analysis and supporting data-driven decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_29-A_Novel_Multitasking_Framework_for_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Air Quality Assessment Based on CNN-Transformer Hybrid Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160428</link>
        <id>10.14569/IJACSA.2025.0160428</id>
        <doi>10.14569/IJACSA.2025.0160428</doi>
        <lastModDate>2025-05-01T08:03:32.7230000+00:00</lastModDate>
        
        <creator>Yuchen Zhang</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Air quality assessment; deep learning; CNN-Transformer hybrid architecture; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Air quality assessment plays a crucial role in environmental governance and public health decision-making. Traditional assessment methods have limitations in handling multi-source heterogeneous data and complex nonlinear relationships. This paper proposes an air quality assessment model based on a CNN-Transformer hybrid architecture, which achieves end-to-end prediction by integrating CNN&#39;s local feature extraction capability with Transformer&#39;s advantage in modeling global dependencies. The model employs a three-layer CNN for local feature learning, combined with Transformer&#39;s multi-head self-attention mechanism to capture long-range dependencies, and uses multilayer perceptrons for final prediction. Experiments on public datasets demonstrate that, compared to traditional machine learning methods and single deep learning models, the proposed hybrid architecture achieves a 10.2 per cent improvement in Root Mean Square Error (RMSE) and a 0.57 percentage point improvement in the coefficient of determination (R&#178;). Through systematic ablation experiments, we verify the necessity of each model component, particularly the importance of the CNN-Transformer hybrid architecture, the positional encoding mechanism, and the multi-layer network structure in enhancing prediction performance. The research results provide an effective deep learning solution for air quality assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_28-Air_Quality_Assessment_Based_on_CNN_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AHP and Fuzzy Evaluation Methods for Improving Cangzhou Honey Date Supplier Performance Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160427</link>
        <id>10.14569/IJACSA.2025.0160427</id>
        <doi>10.14569/IJACSA.2025.0160427</doi>
        <lastModDate>2025-05-01T08:03:32.6930000+00:00</lastModDate>
        
        <creator>Zhixin Wei</creator>
        
        <subject>AHP; fuzzy evaluation method; supplier performance; Cangzhou honey date; supply chain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study focuses on improving supplier performance management within the Cangzhou honey date industry by integrating the Analytic Hierarchy Process (AHP) and fuzzy evaluation methods. Recognizing the limitations of traditional evaluation systems—such as subjectivity and insufficient quantitative analysis—the research aims to build a comprehensive, data-driven evaluation framework. The methodology involves constructing a supplier performance index system based on five key dimensions: quality, cost, delivery, service, and social responsibility. Using the AHP method, expert opinions are quantified to determine the weight of each indicator. Subsequently, fuzzy evaluation is employed to transform qualitative judgments into numerical scores, enabling more objective assessment. Five major suppliers are evaluated empirically, and statistical methods such as ANOVA and cluster analysis are used to identify performance differences and classify suppliers into performance tiers. The results indicate that Supplier A excels in quality and service, Supplier B leads in delivery performance, while Suppliers C and E require significant improvements. Correlation analysis reveals strong links between supplier performance and key operational metrics such as product defect rates, procurement costs, and customer satisfaction. Based on these findings, the study proposes targeted improvement strategies including the adoption of Six Sigma practices, implementation of VMI and JIT models, and enhanced performance-based incentive mechanisms. The research confirms the effectiveness of combining AHP and fuzzy methods in supplier evaluation and provides actionable insights for improving supply chain efficiency, resilience, and competitiveness. It also suggests that future studies should incorporate larger datasets and intelligent algorithms to refine evaluation accuracy and operational decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_27-AHP_and_Fuzzy_Evaluation_Methods_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Planning Technology for Unmanned Aerial Vehicle Swarm Based on Improved Jump Point Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160426</link>
        <id>10.14569/IJACSA.2025.0160426</id>
        <doi>10.14569/IJACSA.2025.0160426</doi>
        <lastModDate>2025-05-01T08:03:32.6630000+00:00</lastModDate>
        
        <creator>Haizhou Zhang</creator>
        
        <creator>Shengnan Xu</creator>
        
        <subject>Unmanned aerial vehicle swarm; path planning; jump point search algorithm; geometric collision detection; dynamic window method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Multi-unmanned aerial vehicle path planning encounters challenges in effective obstacle avoidance and collaborative operation. The study proposes a swarm path planning technique for unmanned aerial vehicles based on an improved jump point search algorithm. It introduces a geometric collision detection strategy to optimize the path search and employs the dynamic window method to constrain the flight range. Additionally, the study presents conflict avoidance strategies for multi-unmanned aerial vehicle path planning and establishes collision fields for unmanned aerial vehicles to achieve collaborative path planning. In single unmanned aerial vehicle path planning, the research model exhibits the lowest control errors on the X, Y, and Z axes, with a Y-axis error of 0.05 m. In static planning, the model achieves the shortest planning time and path length in multi-obstacle planning, at 1002 ms and 17.85 m, respectively. In multi-unmanned aerial vehicle path planning, the research model effectively avoids local optima in local conflict scenarios and re-plans the route. During testing on a 29 m&#215;29 m grid map, the research technology successfully avoids obstacles and re-plans routes. However, similar obstacles can cause interference and trap the model in local convergence, preventing re-planning. The research technology demonstrates good application effects in the path planning of unmanned aerial vehicle swarms and will provide technical support for multi-machine collaborative path planning.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_26-Path_Planning_Technology_for_Unmanned_Aerial_Vehicle_Swarm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Photovoltaic Fault Detection in Remote Areas Using Fuzzy-Based Multiple Linear Regression (FMLR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160425</link>
        <id>10.14569/IJACSA.2025.0160425</id>
        <doi>10.14569/IJACSA.2025.0160425</doi>
        <lastModDate>2025-05-01T08:03:32.6300000+00:00</lastModDate>
        
        <creator>Feby Ardianto</creator>
        
        <creator>Ermatita Ermatita</creator>
        
        <creator>Armin Sofijan</creator>
        
        <subject>Photovoltaic; multiple linear regression; fuzzy; fault detection; remote areas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This research focused on developing and implementing a fault detection model for photovoltaic (PV) systems in remote areas, utilizing a Fuzzy-Based Multiple Linear Regression (FMLR) approach. The study aimed to address the challenges of monitoring PV systems in locations with limited access to conventional power grids and technical resources. The fault detection system integrated environmental parameters such as solar radiation, temperature, wind speed, and rainfall, alongside PV system parameters such as panel voltage, current, battery voltage, and inverter performance. Data collection and preprocessing were conducted over a specified period to identify operational patterns under both normal and faulty conditions, ensuring data accuracy through cleaning, normalization, and categorization. The research was conducted in Pandan Arang Village, Kandis District, Ogan Ilir Regency, South Sumatera, Indonesia, contributing to the improvement of the reliability and sustainability of renewable energy sources in isolated communities. The dataset comprised 276 rows with 6 attributes each, for a total of 1656 records. The MLR model was developed to predict the output power of the PV system, while fuzzy logic was employed to handle uncertainties in the data, offering a more flexible and adaptive decision-making process. The system applied fuzzy rules to determine the charging status (P3), categorizing it as Optimal Charging, Adjusted Charging, Charging Delay, or Fault Alert. The model was tested with real-time data, and its performance was validated through comparison with manual inspections. The results showed that the FMLR-based fault detection system effectively identified faults and optimized the performance of the PV system, making it suitable for remote areas in South Sumatera.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_25-Photovoltaic_Fault_Detection_in_Remote_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intellectual Property Protection in the Age of AI: From Perspective of Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160424</link>
        <id>10.14569/IJACSA.2025.0160424</id>
        <doi>10.14569/IJACSA.2025.0160424</doi>
        <lastModDate>2025-05-01T08:03:32.6000000+00:00</lastModDate>
        
        <creator>Jing Li</creator>
        
        <creator>Quanwei Huang</creator>
        
        <subject>Intellectual property; Artificial Intelligence; Deep Learning; Natural Language Processing; neural network; legal applicability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid development of Artificial Intelligence (AI), especially Deep Learning (DL) technologies, has brought unprecedented challenges and opportunities for Intellectual Property (IP) protection and management. In this paper, we employ Bibliometrix and Biblioshiny to conduct a bibliometric analysis of global research at the intersection of AI-driven innovation and IP frameworks over the past decade. The findings reveal a significant annual growth rate of 15.34 per cent in publications, with an average of 5.82 citations per study, reflecting increasing academic interest. China, the United States, and India dominate the research output, but the cross-country collaboration rate is only 10.74 per cent, indicating that there is still room for improvement in global collaborative research. The current major research groups in the field, as well as different research themes, are identified through collaborative network and thematic analyses, respectively. Although the field has achieved remarkable results in technological innovation, the deep integration of legal, economic and ethical dimensions is still at an early stage. The study highlights the urgent need for interdisciplinary collaboration and enhanced international cooperation to address pressing issues such as AI-generated content (AIGC) attribution, legal applicability, and the societal impact of DL technologies in IP protection. These findings aim to support academia and industry in clarifying ownership and promoting synergistic innovation in the AI era.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_24-Intellectual_Property_Protection_in_the_Age_of_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Design of Robot Grasping Based on Lightweight YOLOv6 and Multidimensional Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160423</link>
        <id>10.14569/IJACSA.2025.0160423</id>
        <doi>10.14569/IJACSA.2025.0160423</doi>
        <lastModDate>2025-05-01T08:03:32.5830000+00:00</lastModDate>
        
        <creator>Junyan Niu</creator>
        
        <creator>Guanfang Liu</creator>
        
        <subject>Capture detection; YOLOv6; multidimensional attention; MobileViT; industrial robot; lightweight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>To address the computational redundancy and robustness limitations of industrial grasping models in complex environments, this study proposes a lightweight capture detection framework integrating Mobile Vision Transformer (MobileViT) and You Only Look Once version 6 (YOLOv6). Three innovations are developed: 1) A cascaded architecture fusing convolution and Transformer to compress parameters; 2) A multidimensional attention mechanism combining channel-pixel dual enhancement; 3) A Pixel Shuffle-Receptive Field Block (PixShuffle-RFB) decoder enabling sub-pixel localization. Experiments demonstrate that the model achieves 0.88 detection accuracy with 66 Frames Per Second (FPS) in simulations and 90.04% grasping success rate in physical tests. The lightweight design reduces computational costs by 37% versus conventional models while maintaining 93.54% segmentation efficiency (2.85 milliseconds inference). This multidimensional attention-driven approach effectively improves industrial robot adaptability, advancing capture detection applications in high-noise manufacturing scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_23-Optimization_Design_of_Robot_Grasping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meter-YOLOv8n: A Lightweight and Efficient Algorithm for Word-Wheel Water Meter Reading Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160422</link>
        <id>10.14569/IJACSA.2025.0160422</id>
        <doi>10.14569/IJACSA.2025.0160422</doi>
        <lastModDate>2025-05-01T08:03:32.5530000+00:00</lastModDate>
        
        <creator>Shichao Qiao</creator>
        
        <creator>Yuying Yuan</creator>
        
        <creator>Ruijie Qi</creator>
        
        <subject>Word-wheel water meter; YOLOv8n; global features; slim-neck; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>To address the issues of low efficiency and large parameters in the current word-wheel water meter reading recognition algorithms, this paper proposes a Meter-YOLOv8n algorithm based on YOLOv8n. Firstly, the C2f component of YOLOv8n is improved by introducing an enhanced inverted residual mobile block (iRMB). It enables the model to efficiently capture global features and fully extract the key information of the water meter characters. Secondly, the Slim-Neck feature fusion structure is employed in the neck network. By replacing the original convolutional kernels with GSConv, the model&#39;s ability to express the features of small object characters is enhanced, and the number of parameters in the model is reduced. Finally, Inner-EIoU is used to optimize the bounding box loss function. This simplifies the calculation process of the loss function and improves the model&#39;s ability to locate dense bounding boxes. The experimental results show that, compared with the original model, the precision, recall, mAP@0.5, and mAP@0.5:0.95 of the improved model have increased by 1.7%, 1.2%, 3.4%, and 3.3% respectively. Meanwhile, the parameters, FLOPs, and model size have decreased by 0.56M, 2.6G, and 0.7MB respectively. The improved model can better balance the relationship between detection performance and computational complexity. It is suitable for the task of recognizing word-wheel water meter readings and has practical application value.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_22-Meter_YOLOv8n_A_Lightweight_and_Efficient_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Netizens as Readers, Producers, and Publishers: Communication Ethics and Challenges in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160421</link>
        <id>10.14569/IJACSA.2025.0160421</id>
        <doi>10.14569/IJACSA.2025.0160421</doi>
        <lastModDate>2025-05-01T08:03:32.5070000+00:00</lastModDate>
        
        <creator>Burhanuddin Arafah</creator>
        
        <creator>Muhammad Hasyim</creator>
        
        <creator>Herawati Abbas</creator>
        
        <subject>Netizen; communication ethic; challenge; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Social media has fundamentally transformed how people communicate and interact, creating a dynamic landscape where today&#39;s internet users assume multifaceted roles as readers, producers of text (messages), and publishers of their own content. This evolution empowers individuals to consume information and generate it, offer commentary, and share it widely across platforms. However, this shift brings forth significant ethical considerations that warrant critical examination. This research analyzes the complex issues and challenges surrounding the ethics of social media communication. It emphasizes the urgent need for individuals and society to address these challenges ethically and responsibly in an era where misinformation can spread rapidly, influencing public opinion and societal norms. The research employs a descriptive qualitative method that includes observation of netizen comments on YouTube cases related to corruption and immorality alongside an online questionnaire distributed among social media users. The study draws from two primary data sources: first, netizen comments on various YouTube videos addressing corruption; second, responses from 1,061 participants who completed the online questionnaire. Findings reveal that active participation by netizens enables them to engage in diverse forms of communication—expressing critical views, sharing recommendations for positive change, or even disseminating hate speech in reaction to contentious issues like corruption or moral failings. While some netizens utilize respectful language and promote constructive dialogue through engaging content creation, others contribute to a more toxic environment characterized by negativity. This diversity highlights the potential for positive discourse and the risks associated with unchecked expression on social media platforms. Ultimately, this research underscores that netizens possess substantial opportunities—and responsibilities—to shape public discourse through their actions as readers, producers, and publishers within this evolving digital ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_21-Netizens_as_Readers_Producers_and_Publishers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Advances in Technology Applications: Cultural Heritage Tourism Trends in Experience Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160420</link>
        <id>10.14569/IJACSA.2025.0160420</id>
        <doi>10.14569/IJACSA.2025.0160420</doi>
        <lastModDate>2025-05-01T08:03:32.4730000+00:00</lastModDate>
        
        <creator>Meihua Deng</creator>
        
        <subject>Heritage tourism; tourism experience; machine learning; VOSviewer; bibliometric data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study investigates the evolving trends in cultural heritage tourism experience design and examines how machine learning technologies are being applied to enhance visitor engagement and heritage preservation. Using bibliometric data from the Web of Science (WoS) and visualization tools such as VOSviewer, the research identifies key themes, author collaborations, and keyword clusters from 2016 to 2025. The analysis reveals a shift in focus from traditional conservation and display methods to user-centered experiences supported by advanced technologies. Machine learning techniques—such as deep learning, natural language processing, and multimodal data fusion—are increasingly used to personalize tours, analyze tourist behavior, restore damaged artifacts, and improve decision-making in resource management. Tools like CNNs and BERT models enable smart guiding systems and interactive Q&amp;A features, while sentiment analysis enhances feedback mechanisms. The study also highlights several ongoing challenges, including data privacy issues, algorithmic bias, and unequal access to technological infrastructure, especially in developing regions. Ethical considerations and the need for human-centered design principles are emphasized to ensure that technological innovation aligns with cultural values and sustainability goals. In conclusion, this research provides a comprehensive overview of academic progress in cultural heritage tourism and illustrates the growing importance of AI and machine learning in creating immersive, efficient, and culturally respectful tourism experiences. The findings offer practical insights for scholars, heritage site managers, and policymakers seeking to leverage digital tools for both preservation and enhanced visitor satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_20-Machine_Learning_Advances_in_Technology_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing RGB and HSV Color Spaces for Non-Invasive Blood Glucose Level Estimation Using Fingertip Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160419</link>
        <id>10.14569/IJACSA.2025.0160419</id>
        <doi>10.14569/IJACSA.2025.0160419</doi>
        <lastModDate>2025-05-01T08:03:32.4430000+00:00</lastModDate>
        
        <creator>Asawari Kedar Chinchanikar</creator>
        
        <creator>Manisha P. Dale</creator>
        
        <subject>Blood glucose; Photoplethysmography; non-invasive; genetic algorithm; XGBoost; RGB; HSV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Traditional blood glucose measurement methods, including finger-prick tests and intravenous sampling, are invasive and can cause discomfort, leading to reduced adherence and stress. Non-invasive BGL estimation addresses these issues effectively. The proposed study focuses on estimating blood glucose levels (BGL) using “Red-Green-Blue (RGB)” and “Hue-Saturation-Value (HSV) color spaces” by analyzing fingertip videos captured with a smartphone camera. The goal is to enhance BGL prediction accuracy through accessible, portable devices, using a novel fingertip video database from 234 subjects. Videos recorded in the “RGB color space” using a smartphone camera were converted into the “HSV color space”. The “R channel” from “RGB” and the “Hue channel” from “HSV” were used to generate photoplethysmography (PPG) waves, and additional features like age, gender, and BMI were included to improve predictive accuracy. To enhance the precision of blood glucose estimation, the Genetic Algorithm (GA) was used to identify the most significant and optimal features from the large set of features. The “XGBoost”, “CatBoost”, “Random Forest Regression (RFR)”, and “Gradient Boosting Regression (GBR)” algorithms were applied for blood glucose level (BGL) prediction. Among them, “XGBoost” yielded the best results, with an R2 value of 0.89 in the “RGB color space” and 0.84 in the “HSV color space”, showcasing its superior predictive ability. The experimental outcomes were assessed using “Clarke error grid analysis” and a “Bland-Altman plot”. The Bland-Altman analysis showed that only 7.04% of the BGL values fell outside the limits of agreement (&#177;1.96 SD), demonstrating strong agreement with reference values.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_19-Analyzing_RGB_and_HSV_Color_Spaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Usability and Cognitive Engagement in Elderly Products Through Brain-Computer Interface Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160418</link>
        <id>10.14569/IJACSA.2025.0160418</id>
        <doi>10.14569/IJACSA.2025.0160418</doi>
        <lastModDate>2025-05-01T08:03:32.4130000+00:00</lastModDate>
        
        <creator>Daijiao Shi</creator>
        
        <creator>Chao Jiang</creator>
        
        <creator>Chenhan Huang</creator>
        
        <subject>Big data; human-computer interaction; the elderly; product design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study addresses the limitations of traditional elderly care products in terms of intelligence and user experience by integrating human-computer interaction (HCI) principles into a product design framework for the elderly. This study explores the importance of feature extraction in human-computer interaction systems, emphasizes its key role in enhancing user adaptability and interaction efficiency, and deeply analyzes its impact on brain-computer interface (BCI) technology. At the same time, the study conducts simulation experiments to evaluate the effectiveness of various algorithms in processing two types of motor imagery tasks. Finally, the obtained results provide a comparative evaluation of the algorithms and highlight their respective strengths and limitations.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_18-Enhancing_Usability_and_Cognitive_Engagement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid-Optimized Model for Deepfake Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160417</link>
        <id>10.14569/IJACSA.2025.0160417</id>
        <doi>10.14569/IJACSA.2025.0160417</doi>
        <lastModDate>2025-05-01T08:03:32.3800000+00:00</lastModDate>
        
        <creator>H. Mancy</creator>
        
        <creator>Marwa Elpeltagy</creator>
        
        <creator>Kamal Eldahshan</creator>
        
        <creator>Aya Ismail</creator>
        
        <subject>Bayesian optimization; deepfake detection; deepfake videos; Mask R-CNN; Xception network; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The advancement of deep learning models has led to the creation of novel techniques for image and video synthesis. One such technique is the deepfake, which swaps faces among persons and then produces hyper-realistic videos of individuals saying or doing things that they never said or did. These deepfake videos pose a serious risk to individuals and countries alike if they are exploited for extortion, scamming, political disinformation, or identity theft. This work presents a new methodology based on a hybrid-optimized model for detecting deepfake videos. A Mask Region-based Convolutional Neural Network (Mask R-CNN) is employed to detect human faces from video frames. Then, the optimal bounding box representing the face region per frame is selected, which could help to discover many artifacts. An improved Xception-Network is proposed to extract informative and deep hierarchical representations of the produced face frames. The Bayesian optimization (BO) algorithm is employed to search for the optimal hyperparameters&#39; values in the extreme gradient boosting (XGBoost) classifier model to properly discriminate the deepfake videos from the genuine ones. The proposed method is trained and validated on two different datasets, CelebDF-FaceForensics++ (c23) and FakeAVCeleb, and also tested on various datasets: CelebDF, DeepfakeTIMIT, and FakeAVCeleb. The experimental study proves the superiority of the proposed method over the state-of-the-art methods. The proposed method yielded 97.88% accuracy and 97.65% AUROC when trained on CelebDF-FaceForensics++ (c23) and tested on CelebDF. Additionally, it achieved 98.44% accuracy and 98.44% AUROC when trained on CelebDF-FaceForensics++ (c23) and tested on DeepfakeTIMIT. Moreover, it yielded 99.50% accuracy and 99.21% AUROC on the FakeAVCeleb visual dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_17-Hybrid_Optimized_Model_for_Deepfake_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Internet of Things System Optimization Based on Clustering Algorithm in Big Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160416</link>
        <id>10.14569/IJACSA.2025.0160416</id>
        <doi>10.14569/IJACSA.2025.0160416</doi>
        <lastModDate>2025-05-01T08:03:32.3500000+00:00</lastModDate>
        
        <creator>Jing Guo</creator>
        
        <subject>Wireless sensor; network routing protocol; clustering algorithm; two-layer clustering; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid development of the Internet of Things (IoT) has highlighted the importance of Wi-Fi sensor networks in efficiently collecting data anytime and anywhere. This paper aims to propose an optimized routing protocol that significantly reduces power consumption in IoT systems based on clustering algorithms. The paper begins by introducing the architecture of Wi-Fi sensor networks, sensor nodes, and the key technologies needed for implementation. It distinguishes between cluster-based and planar protocols, noting the advantages of each. The proposed protocol, DKBDCERP (Dual-layer K-means and Density-based Clustering Energy-efficient Routing Protocol), utilizes a two-layer clustering approach. In the first layer, nodes are clustered based on density, while in the second layer, first-level cluster heads are further grouped using the K-Means algorithm. This dual-layer structure balances the responsibilities of cluster heads, ensuring a more efficient distribution of data reception, fusion, and forwarding tasks across different levels. Simulation results demonstrate that the DKBDCERP protocol achieves optimal performance, with the smallest curve value and the most stable amplitude. It significantly reduces energy consumption, with the total cluster-head power consumption recorded at 0.1J and a variance of 0.1&#215;10⁻⁴. The introduction of two election modes during the clustering stage and the adoption of a centralized control mechanism further contribute to reduced broadcast energy loss. This research introduces an innovative two-layer clustering scheme that enhances the energy efficiency of wireless sensor networks in IoT environments. By leveraging clustering algorithms and a network routing protocol optimized through big data mining techniques, the proposed DKBDCERP significantly reduces energy consumption while maintaining communication stability in large-scale wireless IoT systems. The optimized routing protocol provides a novel solution for reducing power consumption while maintaining network stability, offering valuable insights for future IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_16-Wireless_Internet_of_Things_System_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Enabled Waste Management in Smart Cities: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160415</link>
        <id>10.14569/IJACSA.2025.0160415</id>
        <doi>10.14569/IJACSA.2025.0160415</doi>
        <lastModDate>2025-05-01T08:03:32.3030000+00:00</lastModDate>
        
        <creator>Moulay Lakbir Tahiri Alaoui</creator>
        
        <creator>Meryam Belhiah</creator>
        
        <creator>Soumia Ziti</creator>
        
        <subject>Waste management; smart cities; Internet of Things (IoT); smart bins; urban planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The growing population of cities has increased the pressure on waste management systems, and therefore new and better approaches are needed. This paper aims to present the theoretical underpinning of the application of Internet of Things (IoT) technologies in the improvement of waste collection in smart cities. In this regard, this paper reviews the latest trends, methodologies, and technologies from a vast collection of peer-reviewed papers published between 2018 and 2024. The areas of focus include real-time monitoring systems, predictive analytics, and optimization algorithms that have created new norms in traditional waste management. The review discusses the novel concept of IoT-based smart bins, dynamic waste collection routing, and data-based decision-making frameworks which yield significant environmental and economic benefits. According to established studies, reported outcomes include reduced overflow and manual labor costs, improved routing efficiency, enhanced recycling processes, optimized bin placement, and increased energy savings. Across a variety of cities, reports comparing pre-IoT operations with IoT-enhanced ones have found remarkable decreases in operating costs, improvements in resource allocation, and gains in overall sustainability performance. However, challenges in data security, interoperability, and scalability still exist, highlighting the need for standardized frameworks and policies. This review contributes to the existing body of knowledge by identifying research gaps and proposing directions for future work. It emphasizes the importance of hybrid approaches combining IoT with emerging technologies such as artificial intelligence and blockchain to address the limitations of current systems. The findings offer valuable insights for policymakers, urban planners, and researchers aiming to foster sustainable and smart urban ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_15-IoT_Enabled_Waste_Management_in_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Catastrophic Forgetting in Continual Learning Using the Gradient-Based Approach: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160414</link>
        <id>10.14569/IJACSA.2025.0160414</id>
        <doi>10.14569/IJACSA.2025.0160414</doi>
        <lastModDate>2025-05-01T08:03:32.2700000+00:00</lastModDate>
        
        <creator>Haitham Ghallab</creator>
        
        <creator>Mona Nasr</creator>
        
        <creator>Hanan Fahmy</creator>
        
        <subject>Deep learning; continual learning; model adaptation and generalization; catastrophic forgetting; gradient-based approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Continual learning, also referred to as lifelong learning, has emerged as a significant advancement for model adaptation and generalization in deep learning with the capability to train models sequentially from a continuous stream of data across multiple tasks while retaining previously acquired knowledge. Continual learning is used to build powerful deep learning models that can efficiently adapt to dynamic environments and fast-shifting preferences by utilizing computational and memory resources, and it can ensure scalability by acquiring new skills over time. Continual learning enables models to train incrementally from an ongoing stream of data by learning new data as it arrives while preserving old experiences, which eliminates the need to combine new data with old data and retrain from scratch, saving time, resources, and effort. However, despite its advantages, continual learning still faces a significant challenge known as catastrophic forgetting. Catastrophic forgetting is a phenomenon in continual learning where a model forgets previously learned knowledge when trained on new tasks, making it challenging to preserve performance on earlier tasks while learning new ones. Catastrophic forgetting is a central challenge in advancing the field of continual learning as it undermines the main goal of continual learning, which is to maintain long-term performance across all encountered tasks. Therefore, several research efforts have recently been proposed to address and mitigate the catastrophic forgetting dilemma and unlock the full potential of continual learning. As a result, this research provides a detailed and comprehensive review of one of the state-of-the-art approaches to mitigating catastrophic forgetting in continual learning, known as the gradient-based approach. Furthermore, a performance evaluation is conducted for the recent gradient-based models, including their limitations and promising directions for future research.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_14-Mitigating_Catastrophic_Forgetting_in_Continual_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid SEM-ANN Method for Developing an Information Technology Acceptance and Utilization Model in River Tourism Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160413</link>
        <id>10.14569/IJACSA.2025.0160413</id>
        <doi>10.14569/IJACSA.2025.0160413</doi>
        <lastModDate>2025-05-01T08:03:32.2400000+00:00</lastModDate>
        
        <creator>Mutia Maulida</creator>
        
        <creator>Iphan Fitrian Radam</creator>
        
        <creator>Nurul Fathanah Mustamin</creator>
        
        <creator>Yuslena Sari</creator>
        
        <creator>Andreyan Rizky Baskara</creator>
        
        <creator>Eka Setya Wijaya</creator>
        
        <creator>Muhammad Alkaff</creator>
        
        <creator>M. Renald Abdi</creator>
        
        <subject>River tourism; technology acceptance; TWAM; E-TAM; hybrid SEM-ANN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Tourism is a vital sector that contributes significantly to Indonesia&#39;s economic growth. However, despite its great potential, the sector faces challenges in the application of information technology, as seen in the Go-Klotok application in Banjarmasin City, which has not been well received by tourists. Therefore, it is important to understand the factors that influence the acceptance of information technology in river tourism to improve the tourist experience and support the growth of the sector. This study aims to develop a model of technology acceptance and utilization in river tourism in South Kalimantan. To that end, this study modifies four main models, namely the Tourism Web Acceptance Model (T-WAM), the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), the E-Tourism Technology Acceptance Model (ETAM), and the DeLone and McLean Model. This research identifies and analyzes various factors that influence technology acceptance in the context of river tourism. The research method uses a hybrid SEM-ANN approach, where Partial Least Squares Structural Equation Modeling (PLS-SEM) is used to analyze the relationship between variables, while an Artificial Neural Network (ANN) captures more complex data patterns. Data analysis in this study used the Hybrid SEM-ANN method with the SmartPLS application and the IBM SPSS Statistics 27 application. This study tested 14 hypotheses, of which 9 were accepted. The results of the analysis of 471 respondents show that Social Influence, Perceived Benefits, and Information Quality significantly influence user intention to use information technology services, with Social Influence as the most dominant factor.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_13-A_Hybrid_SEM_ANN_Method_for_Developing_an_Information_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fear of Missing Out (FoMO) and Recommendation Algorithms: Analyzing Their Impact on Repurchase Intentions in Online Marketplaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160412</link>
        <id>10.14569/IJACSA.2025.0160412</id>
        <doi>10.14569/IJACSA.2025.0160412</doi>
        <lastModDate>2025-05-01T08:03:32.2100000+00:00</lastModDate>
        
        <creator>Ati Mustikasari</creator>
        
        <creator>Ratih Hurriyati</creator>
        
        <creator>Puspo Dewi Dirgantari</creator>
        
        <creator>Mokh Adieb Sultan</creator>
        
        <creator>Neng Susi Susilawati Sugiana</creator>
        
        <subject>FoMO; repurchase intentions; online marketplace; SEM; consumer behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The rapid growth of e-commerce has intensified consumers&#39; Fear of Missing Out (FoMO), influencing their repurchase intentions. This study aims to examine the impact of online FoMO on repurchase intentions in marketplaces, emphasizing the role of personalized recommendations and promotional strategies. A quantitative approach was employed, collecting data from 300 respondents who actively shop on online marketplaces. The study utilized Structural Equation Modelling (SEM) to analyze the relationships between FoMO, trust, perceived value, and repurchase intentions. The findings reveal that FoMO significantly influences repurchase intentions, both directly and indirectly, through trust and perceived value. Additionally, personalized recommendations and time-limited promotions amplify FoMO, further strengthening consumers&#39; intention to repurchase. These results highlight the necessity for e-commerce platforms to strategically implement AI-driven personalization and gamification elements to optimize customer retention. The study contributes theoretical insights by integrating psychological and technological perspectives in understanding consumer behavior in digital marketplaces. The originality of this research lies in its empirical validation of the FoMO-repurchase intention relationship using SEM, offering novel insights into how marketplace features shape consumer decision-making. Practically, the findings provide actionable strategies for businesses to enhance customer engagement and retention through behavioral-driven marketing approaches.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_12-Fear_of_Missing_Out_FoMO_and_Recommendation_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Providing Exercise Instruction That Allows Immediate Feedback to Trainees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160411</link>
        <id>10.14569/IJACSA.2025.0160411</id>
        <doi>10.14569/IJACSA.2025.0160411</doi>
        <lastModDate>2025-05-01T08:03:32.1770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kosuke Eto</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Motion training; immediate feedback; DTW (Dynamic Time Warping); children with disabilities; skeletal detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Method for providing exercise instruction that allows immediate feedback to trainees is proposed. The purpose of this research is to combine artificial intelligence technology and motion analysis methods to build an effective vocational training support program aimed at supporting the employment of children with disabilities. Specifically, we develop a system that uses DTW (Dynamic Time Warping) to calculate the similarity between the trainee&#39;s motion and the model motion, and scores the motion based on the calculated similarity. This system will enable optimal instruction for each disabled child, and is expected to improve motion skills and promote learning motivation. Furthermore, by providing scored feedback, we aim to improve the traditional evaluation that relies on the subjectivity of the instructor and provide an intuitive and easy-to-understand means of confirming results for trainees. In this research, we use skeletal detection technology to record the trainee&#39;s three-dimensional coordinate data and perform quantitative evaluation. In addition, we will design a program that allows trainees to visually check their own progress through a motion evaluation function and maximize the learning effect. Through experiments, it is found that the proposed method works well for motion training aimed at supporting the employment of children with disabilities. Also, it is found that immediate feedback is better than conventional delayed feedback.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_11-Method_for_Providing_Exercise_Instruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flexible Software Architecture for Genetic Data Processing in Alpaca Breeding Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160410</link>
        <id>10.14569/IJACSA.2025.0160410</id>
        <doi>10.14569/IJACSA.2025.0160410</doi>
        <lastModDate>2025-05-01T08:03:32.1630000+00:00</lastModDate>
        
        <creator>Alfredo Gama-Zapata</creator>
        
        <creator>Fernando Barra-Quipse</creator>
        
        <creator>Elizabeth Vidal</creator>
        
        <subject>Architecture; genomic selection; adaptability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Improving alpaca fiber quality is an important objective in the textile industry. There are different kinds of techniques aimed at enhancing breeding outcomes. This study proposes and validates a flexible software architecture for managing genetic information in alpaca breeding, integrating genomic selection methods. The proposed architecture consists of three components: 1) Input—capturing data from individual records, pedigree, phenotypic traits, fiber characteristics, genomic, and non-genomic information; 2) Processing—implementing statistical methods such as BLUP, GBLUP, and SSGBLUP, alongside inbreeding coefficient calculation and machine learning techniques; and 3) Output—generating reports for mating list proposals, estimated breeding values, and genetic evaluations. Designing a software architecture for genetic improvement in alpaca breeding programs could help software developers with maintainability, extensibility, and adaptability, considering different kinds of data sources for future advancements in alpaca breeding. This work shows the implementation and validation of software for an alpaca breeding program based on the proposed architecture.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_10-Flexible_Software_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approach Detection and Warning Using BLE and Image Recognition at Construction Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160409</link>
        <id>10.14569/IJACSA.2025.0160409</id>
        <doi>10.14569/IJACSA.2025.0160409</doi>
        <lastModDate>2025-05-01T08:03:32.1170000+00:00</lastModDate>
        
        <creator>Yuya Ifuku</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Construction site; safety management; intrusion detection; object recognition; trajectory tracking; YOLOv8; ByteTrack; BLE Beacon</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Ensuring the safety of workers in dangerous areas is an important issue at construction sites. In particular, fatal accidents at construction sites often involve falls or traffic accidents, and tend to occur around hazardous areas. In this paper, to prevent such accidents, a proximity detection and warning system based on image recognition and Bluetooth Low Energy (BLE) technology is proposed. This system mainly uses image recognition to detect workers approaching dangerous areas, and uses BLE beacons as an auxiliary to achieve continuous detection even under occlusion conditions. A master-slave operation model is adopted, with image recognition serving as the main detection method and BLE beacons as an auxiliary. When a worker approaches a dangerous area, a real-time warning is issued via a wireless earphone connected to a smartphone, allowing immediate recognition and response. This has made it possible to reach the stage of detecting intrusion into dangerous areas. However, there are still some challenges remaining for this system. The first challenge is individual re-identification. In order to issue a warning to the relevant worker when an intrusion into a dangerous area is detected, the worker needs to be recognized individually. The second challenge is adapting to changes in the structure of the construction site. Since the environment of a construction site changes over time, it is necessary to consider the appropriate placement of cameras. Experiments show that the proposed method works well to locate workers approaching and entering dangerous areas. The proposed system also detects intrusion into dangerous areas and issues a warning to the corresponding workers through bone-conduction wireless earphones from a distance of 115 meters.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_9-Approach_Detection_and_Warning_Using_BLE_and_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pothole Detection: A Study of Ensemble Learning and Decision Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160408</link>
        <id>10.14569/IJACSA.2025.0160408</id>
        <doi>10.14569/IJACSA.2025.0160408</doi>
        <lastModDate>2025-05-01T08:03:32.0830000+00:00</lastModDate>
        
        <creator>Ken D. Gorro</creator>
        
        <creator>Elmo B. Ranolo</creator>
        
        <creator>Anthony S. Ilano</creator>
        
        <creator>Deofel P. Balijon</creator>
        
        <subject>YOLO; Mask R-CNN; ensemble learning; MCDM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This study investigates the potential use of ensemble learning (YOLOv9 and Mask R-CNN) and Multi-Criteria Decision Making (MCDM) for a pothole detection system. A series of experiments were conducted, including variations in confidence thresholds, IoU thresholds, dynamic weight configurations, camera angles, and MCDM criteria, to assess their effects on detection performance. The YOLOv9 model achieved a mean Average Precision (mAP) of 0.908 at 0.5 IoU and an F1 score of 0.58 at a confidence threshold of 0.282, indicating a strong balance between precision and recall. However, adjusting IoU thresholds showed that lower thresholds improved recall but resulted in false positives, while higher thresholds improved precision but reduced recall. Dynamic weight configurations were explored, with balanced weights (wY = 0.5, wM = 0.5) yielding the best overall performance, while uneven weights allowed trade-offs between precision and recall based on specific application needs. The MCDM framework refined detection outputs by evaluating pothole features such as size, position, depth, and shape. The proposed algorithm has the potential to be widely used in practical applications. Overfitting is the main drawback of the proposed algorithm, but its severity depends on the use case in which the pothole detection will be deployed.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_8-Pothole_Detection_A_Study_of_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Length-Based Pattern Matching Algorithm for Text Searching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160407</link>
        <id>10.14569/IJACSA.2025.0160407</id>
        <doi>10.14569/IJACSA.2025.0160407</doi>
        <lastModDate>2025-05-01T08:03:32.0670000+00:00</lastModDate>
        
        <creator>Victor Cornejo-Aparicio</creator>
        
        <creator>Cesar Cuarite-Silva</creator>
        
        <creator>Antoni Benavente-Mayta</creator>
        
        <creator>Karim Guevara</creator>
        
        <subject>Knuth-Morris-Pratt; Boyer-Moore; text search; hybrid algorithm; preprocessing; word-length patterns; test text for experiments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper presents a hybrid algorithm for pattern matching in text, which combines word length preprocessing with the Knuth-Morris-Pratt (KMP) algorithm. Its performance was evaluated against KMP and Boyer-Moore (BM) in two scenarios: synthetic texts and real-world texts. In the former, classical algorithms proved more efficient due to the uniform structure of the data. However, in real-world texts, the hybrid algorithm significantly reduced search times, thanks to its ability to filter matches by length patterns before performing character-by-character comparisons. The algorithm also demonstrated flexibility in recognizing patterns with different delimiters. Among its limitations is the difficulty in detecting substrings within longer words. As future work, the incorporation of partial matching techniques and the adaptation of the approach to multilingual environments and machine learning systems are proposed. The dataset used is provided to encourage reproducibility.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_7-A_Hybrid_Length_Based_Pattern_Matching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating User Acceptance and Usability of AR-Based Indoor Navigation in a University Setting: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160406</link>
        <id>10.14569/IJACSA.2025.0160406</id>
        <doi>10.14569/IJACSA.2025.0160406</doi>
        <lastModDate>2025-05-01T08:03:32.0200000+00:00</lastModDate>
        
        <creator>Toma Marian-Vladut</creator>
        
        <creator>Turcu Corneliu Octavian</creator>
        
        <creator>Pascu Paul</creator>
        
        <subject>Augmented reality; indoor navigation; mobile application; usability evaluation; ARCore; higher education; spatial computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This paper presents the development and usability evaluation of a mobile augmented reality (AR) application designed to support indoor navigation within a higher education setting. The system offers real-time visual and audio guidance without requiring additional infrastructure, leveraging spatial anchors, QR code initialization, and compatibility with both ARCore and ARKit platforms. Users can select destinations such as classrooms, offices, and restrooms, and follow augmented reality overlays to reach them efficiently. A review of existing AR navigation systems highlights current technological approaches and gaps in user-centered research, particularly within academic institutions. Building on these findings, the proposed application was tested in a large-scale empirical study involving 256 students, situated in the context of spatial computing within a university environment. Data collection was based on the System Usability Scale and the Technology Acceptance Model, with four research hypotheses examining ease of use, usefulness, system responsiveness, and continued usage intention. Results revealed significant correlations between intuitive design and usability scores, as well as between perceived usefulness and behavioral intention to reuse the application. These findings reinforce the value of user-centered design in developing infrastructure-free mobile AR systems and demonstrate their potential to improve spatial orientation in complex educational buildings.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_6-Evaluating_User_Acceptance_and_Usability_of_AR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Economic Growth and Fiscal Policy in Peru: Prediction Using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160405</link>
        <id>10.14569/IJACSA.2025.0160405</id>
        <doi>10.14569/IJACSA.2025.0160405</doi>
        <lastModDate>2025-05-01T08:03:31.9730000+00:00</lastModDate>
        
        <creator>Fidel Huanco Ramos</creator>
        
        <creator>Yesenia Valentin Ccori</creator>
        
        <creator>Henry Shuta Lloclla</creator>
        
        <creator>Martha Yucra Sotomayor</creator>
        
        <creator>Ilda Mamani Uchasara</creator>
        
        <subject>Machine learning; predictive models; fiscal policy; economic growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The empirical literature presents several indicators related to fiscal policy and economic growth. The paper aims to predict Peru&#39;s economic growth using fiscal policy variables. For this purpose, open data from the Central Reserve Bank of Peru were used; after data preprocessing, the study used Python programming through Google Colab to evaluate eight machine learning models. Metrics such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Square Error (MSE), and Coefficient of Determination (R&#178;) were used to measure their performance. In addition, SHapley Additive exPlanations (SHAP) was applied to interpret the importance of macroeconomic variables. The results show that the K-Nearest Neighbors (KNN) model obtained the best performance, with an R&#178; of 0.972 and low prediction errors. In the same way, important fiscal policy variables such as Net Debt, Liabilities, and Interest on External Debt were identified. In conclusion, the study shows that KNN and Ensemble Bagging are highly effective models for predicting Peru&#39;s economic growth.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_5-Economic_Growth_and_Fiscal_Policy_in_Peru.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level of Anxiety and Knowledge About Breastfeeding in First-Time Mothers with Children Under Six Months</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160404</link>
        <id>10.14569/IJACSA.2025.0160404</id>
        <doi>10.14569/IJACSA.2025.0160404</doi>
        <lastModDate>2025-05-01T08:03:31.9430000+00:00</lastModDate>
        
        <creator>Frank Valverde-De La Cruz</creator>
        
        <creator>Maria Valverde-Ccerhuayo</creator>
        
        <creator>Ana Huamani-Huaracca</creator>
        
        <creator>Gina Le&#243;n-Untiveros</creator>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Alicia Alva-Mantari</creator>
        
        <subject>Anxiety; knowledge; breastfeeding; first-time mothers; children</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>The World Health Organization notes that one in five women of reproductive age faces episodes of anxiety. In Latin America, more than 50% of women experience postnatal anxiety, and in Peru, in Hu&#225;nuco, 40% of first-time mothers have moderate anxiety. The aim of this study is to analyze the relationship between the level of anxiety and knowledge about breastfeeding in first-time mothers with children under six months of age. The study has a correlational quantitative approach, in which STAI questionnaires and the Breastfeeding Knowledge Instrument were applied to a total of 166 mothers, using SPSS and the multinomial logistic regression model. The results indicate that 57.23% of the mothers are young, 53.01% have completed secondary school, 22.89% study, and 63.25% had a normal delivery, with 41.57% experiencing complications. In addition, 56.16% of the children were between 4 and 5 months old. Also, 24.10% of mothers had moderate state anxiety and medium knowledge about breastfeeding, and 22.29% had moderate trait anxiety. It was found that complications during childbirth (p=0.026, OR=1.025753) and the mother&#39;s occupation (p=0.013, OR=1.149548) are significantly related to anxiety. It is concluded that, although anxiety does not directly affect knowledge about breastfeeding, it is crucial to offer specific psychological and educational support for new mothers, particularly addressing sociodemographic factors.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_4-Level_of_Anxiety_and_Knowledge_About_Breastfeeding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Qhali-Bot in Psychological Assistance and Promotion of Well-being: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160403</link>
        <id>10.14569/IJACSA.2025.0160403</id>
        <doi>10.14569/IJACSA.2025.0160403</doi>
        <lastModDate>2025-05-01T08:03:31.8970000+00:00</lastModDate>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Daniel Yupanqui-Lorenzo</creator>
        
        <creator>Enrique Huamani-Uriarte</creator>
        
        <creator>Meyluz Paico-Campos</creator>
        
        <creator>Victor Romero-Alva</creator>
        
        <creator>Claudia Marrujo-Ingunza</creator>
        
        <creator>Alicia Alva-Mantari</creator>
        
        <creator>Linett Velasquez-Jimenez</creator>
        
        <subject>Qhalibot; robot; psychological assistance; well-being; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>Social robots have emerged as efficient tools in the field of psychological assistance and well-being promotion, notably the Qhalibot, in prominent areas such as mental health, education, and work environments. The aim of this study is to provide a comprehensive overview of their application in these contexts, through a systematic review based on the PRISMA methodology and a bibliometric analysis. To this end, 41 articles obtained from databases such as Scopus, IEEE Xplore, Web of Science, and JSTOR were evaluated. The findings reveal that social robots offer significant benefits, such as improved adherence to therapeutic treatments, real-time emotional support, and reduced stress levels in various groups of people. These benefits have shown a positive impact on users, especially those facing mental health conditions or high-stress situations, improving their overall well-being. However, significant challenges were encountered, including user acceptance of these technologies, personalization of interactions to meet individual needs, and integration of these systems into pre-existing environments. Furthermore, it is identified that most of the studies have been carried out in controlled environments, which limits the transferability of the findings to real-world situations. As future lines of research, it is suggested to explore new methodologies for the implementation of these systems in uncontrolled environments, the development of innovative tools that facilitate human-robot interaction, and the evaluation of the long-term impact of these systems in diverse populations. These investigations are crucial to better understand the effectiveness and applicability of social robots in broader and less controlled contexts, which could lead to a more effective integration into daily life.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_3-Applications_of_Qhali_Bot_in_Psychological_Assistance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing Vision-Instruct LLMs, Vision-Based Deep Learning, and Numeric Models for Stock Movement Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160402</link>
        <id>10.14569/IJACSA.2025.0160402</id>
        <doi>10.14569/IJACSA.2025.0160402</doi>
        <lastModDate>2025-05-01T08:03:31.8500000+00:00</lastModDate>
        
        <creator>Qizhao Chen</creator>
        
        <subject>Convolutional Neural Network (CNN); Large Language Model (LLM); MobileNetV2; stock price prediction; time series forecasting; vision transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>This research conducts a comparative study of several stock movement prediction approaches, evaluating large language models (LLMs) and vision-based deep learning models with stock images as input, as well as models that utilize numerical data. Specifically, the study investigates a prompt-based LLM framework that processes candlestick charts, comparing its performance with image-based models such as MobileNetV2, Vision Transformer, and Convolutional Neural Network (CNN), as well as models with numerical inputs including Support Vector Machine (SVM), Random Forest, LSTM, and CNN-LSTM. Although LLMs have demonstrated promising results in stock prediction, directly applying them to stock images poses challenges compared to numerical approaches. To address this, this study further improves LLM performance with post-hoc calibration, reducing prediction biases. Experimental results demonstrate that post-hoc calibrated LLMs with visual input achieve competitive performance compared to other models, highlighting their potential as a viable alternative to traditional stock prediction methods while simplifying the prediction process.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_2-Comparing_Vision_Instruct_LLMs_Vision_Based_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Sensory Experience and Retention: ASER Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160401</link>
        <id>10.14569/IJACSA.2025.0160401</id>
        <doi>10.14569/IJACSA.2025.0160401</doi>
        <lastModDate>2025-05-01T08:03:31.6770000+00:00</lastModDate>
        
        <creator>Samer Alhebaishi</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Augmented Reality (AR); emotional memory; inter-active storytelling; gamification; Augmented Sensory Experience and Retention (ASER) Framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(4), 2025</description>
        <description>In the process of shifting from traditional teacher-centred systems to more student-centred ones, Augmented Reality (AR) is coming into its own as a way of improving how information is delivered and received. However, while the use of AR is commonly credited with increasing engagement, the potential of this technology to support deep, long-term learning is not fully explored. The ASER Framework (Augmented Sensory Experience and Retention) offers a new approach to this gap by integrating emotional memory, interactive storytelling, and gamification within AR environments. After analyzing the current state of AR education research, this study found a lack of frameworks that combine these elements systematically, thus offering a chance to improve cognitive retention and meaningful learning. ASER proposes a multi-sensory model for emotional connection, participation, and knowledge consolidation. The theoretical foundation is strong; however, further empirical validation is required to determine its real-world effectiveness across diverse educational settings. These recommendations provide a starting point for future research and implementation strategies that seek to change the rules of instructional design for engaging and enduring learning experiences.</description>
        <description>http://thesai.org/Downloads/Volume16No4/Paper_1-Augmented_Sensory_Experience_and_Retention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Cybersecurity Awareness Model Based on Protection Motivation Theory (PMT) for Digital IR 4.0 in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603117</link>
        <id>10.14569/IJACSA.2025.01603117</id>
        <doi>10.14569/IJACSA.2025.01603117</doi>
        <lastModDate>2025-03-31T12:15:11.4330000+00:00</lastModDate>
        
        <creator>Siti Fatiha Abd Latif</creator>
        
        <creator>Noor Suhana Sulaiman</creator>
        
        <creator>Nur Sukinah Abd Aziz</creator>
        
        <creator>Azliza Yacob</creator>
        
        <creator>Akhyari Nasir</creator>
        
        <subject>Cyber security; information security; intrusion detection; IR 4.0; PLS SEM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study aims to examine the complex interplay among perceived threat severity, perceived threat vulnerability, fear, perceived response efficacy, perceived self-efficacy, and response cost using Partial Least Squares Structural Equation Modelling (PLS-SEM) via SmartPLS 4.0, grounded in the Protection Motivation Theory (PMT). The analysis is situated within the context of cyber security and information security in Industrial Revolution 4.0 (IR 4.0) environments, where interconnected systems are increasingly exposed to cyber threats. Both measurement and structural model assessments were performed, revealing strong indicator loadings, high Cronbach’s alpha, composite reliability (CR), and adequate average variance extracted (AVE), confirming the model’s reliability and validity. The Fornell-Larcker criterion and heterotrait-monotrait (HTMT) ratio confirmed discriminant validity, while variance inflation factor (VIF) values under 5 and an R&#178; value of 0.554 indicated no collinearity issues and moderate explanatory power in the structural model. Findings demonstrate that perceived threat severity and vulnerability significantly increased fear, which mediated the threat perception-protection motivation relationship, emphasising the role of emotional responses in decision-making. Coping appraisal components, namely perceived response efficacy and self-efficacy, were strong positive predictors of protection motivation, while response cost negatively influenced protective behaviour intentions. Although intrusion detection systems are essential in mitigating cyber risks, this study highlights the equally critical behavioural component of cyber defence. The outcomes underscore the value of PMT in modelling security behaviour, offering theoretical and practical implications for behavioural interventions, public health strategies, and policy design in IR 4.0 domains. These insights contribute to strengthening cybersecurity and information security culture across digitally-driven industries.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_117-Development_of_Cybersecurity_Awareness_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Rural Tourism Satisfaction Monitoring System Based on the Improved INFO Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603116</link>
        <id>10.14569/IJACSA.2025.01603116</id>
        <doi>10.14569/IJACSA.2025.01603116</doi>
        <lastModDate>2025-03-31T12:15:11.3870000+00:00</lastModDate>
        
        <creator>Meihua Qiao</creator>
        
        <subject>Enhanced INFO algorithm; rural tourism satisfaction; tourist monitoring system design; collaborative positioning methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The increasing influx of tourists to scenic areas has raised significant security concerns, often surpassing the management capacity of these locations. Despite the growing need for effective solutions, many regions have not yet developed strategies to address these issues. This study aims to enhance rural tourist satisfaction monitoring systems to better manage tourist flows and improve security. The research explores rural tourist satisfaction, which has significant potential for large-scale monitoring due to its self-expanding nature. The paper discusses the critical role of tourist satisfaction within scenic areas, particularly focusing on tourist tracking systems. It also introduces key features and positioning algorithms used for monitoring satisfaction. A new collaborative positioning approach, based on subnetwork fusion, is proposed to address the limitations of traditional non-line-of-sight INFO positioning algorithms. The proposed subnetwork fusion method outperforms the traditional INFO algorithm, with a 39.7% reduction in localization error when more than 130 nodes are used. Furthermore, when anchor nodes exceed 10%, the DPeNet algorithm achieves an average precision value of 0.768, surpassing the 0.75 threshold due to its enhanced multi-channel convolution and downsampling structure, which optimally utilizes the deep features of small-sized targets. This paper introduces an innovative collaborative positioning strategy for rural tourist satisfaction monitoring, overcoming existing algorithm limitations and enhancing localization accuracy in real-time tourist management systems. The findings contribute to improving both tourist experience and safety in rural scenic areas, offering a scalable solution for broader applications in tourist destinations.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_116-Design_of_a_Rural_Tourism_Satisfaction_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Evaluation of Ontology Learning Techniques in the Context of the Qur’an</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603115</link>
        <id>10.14569/IJACSA.2025.01603115</id>
        <doi>10.14569/IJACSA.2025.01603115</doi>
        <lastModDate>2025-03-31T12:15:11.3400000+00:00</lastModDate>
        
        <creator>Rohana Ismail</creator>
        
        <creator>Mokhairi Makhtar</creator>
        
        <creator>Hasni Hasan</creator>
        
        <creator>Nurnadiah Zamri</creator>
        
        <creator>Azilawati Azizan</creator>
        
        <subject>Ontology learning; Qur’an; NER; statistical; pattern-Based; hajj</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Ontology Learning refers to the automatic or semi-automatic process of creating ontologies by extracting terms, concepts, and relationships from text written in natural languages. This process is essential, as manually building ontologies is time-consuming and labour-intensive. The Qur&#39;an, a vast source of knowledge for Muslims, presents linguistic and cultural complexities, with many words carrying multiple meanings depending on context. Ontologies offer a structured way to represent this knowledge, linking concepts systematically. Although various ontologies have been developed from the Qur&#39;an for purposes such as advanced querying and analysis, most rely on manual creation methods. Few studies have examined the use of Ontology Learning for Qur’anic ontologies. Thus, this study evaluates three Ontology Learning techniques: Named Entity Recognition (NER), statistical methods, and Qur’anic patterns. NER aims to identify names represented by entities, statistical techniques aim to find frequently occurring words, and pattern-based techniques aim to identify complex relationships and multi-word expressions. The Ontology Learning techniques were evaluated based on precision, recall, and F-measure to assess extraction accuracy. The NER technique achieved an average precision of 0.62, statistical methods 0.45, and pattern-based techniques 0.58, indicating the strengths and weaknesses of each approach for extracting relevant terms as concepts, instances, or relations. This indicates that improvements or enhancements to the existing techniques are necessary for more accurate results. Future work will focus on refining or adapting patterns based on the structure of the Qur&#39;an translation using LLMs.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_115-A_Comparative_Evaluation_of_Ontology_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Consumer Decision-Making in Digital Environments Using Random Forest Algorithm and Statistical Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603114</link>
        <id>10.14569/IJACSA.2025.01603114</id>
        <doi>10.14569/IJACSA.2025.01603114</doi>
        <lastModDate>2025-03-31T12:15:11.2930000+00:00</lastModDate>
        
        <creator>Hussain Mohammad Abu-Dalbouh</creator>
        
        <creator>Mushira Mustafa Freihat</creator>
        
        <creator>Rayah Ismaeel Jawarneh</creator>
        
        <creator>Mohammed Abdalwahab Mohammed Salim</creator>
        
        <creator>Sulaiman Abdullah Alateyah</creator>
        
        <subject>Consumer behavior; demographics marketing strategies; data analysis; digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In an era characterized by the rapid digital transformation of the marketplace, understanding consumer behavior is essential for effective decision-making and the development of marketing strategies. This study investigates the impact of demographic attributes such as age, income, education, and lifestyle preferences, alongside social media engagement, on the consumer decision-making process in the Al-Qassim region of Saudi Arabia. A survey was distributed, gathering responses from 684 participants. The study specifically tests the hypotheses that demographic factors significantly influence each stage of the decision-making journey: problem recognition, information search, evaluation of alternatives, purchase decision, and post-purchase behavior, with social media engagement acting as a mediating factor in these stages. By utilizing management information systems to analyze this comprehensive dataset, a Random Forest Classifier was employed, achieving an overall accuracy of 88% and revealing significant correlations between demographic characteristics and consumer behavior. The model demonstrated particularly strong performance in the Evaluation of Alternatives stage, with a precision of 0.90 and a recall of 0.95. Additionally, the findings underscore the critical role of social media engagement in enhancing consumer awareness and influencing purchasing decisions. This study provides actionable insights for marketers in the Al-Qassim region, equipping them with the necessary tools to optimize their strategies in the rapidly evolving digital landscape, ultimately improving consumer satisfaction and fostering long-term loyalty.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_114-Analyzing_Consumer_Decision_Making_in_Digital_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image-Based Air Quality Estimation Using Convolutional Neural Network Optimized by Genetic Algorithms: A Multi-Dataset Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603113</link>
        <id>10.14569/IJACSA.2025.01603113</id>
        <doi>10.14569/IJACSA.2025.01603113</doi>
        <lastModDate>2025-03-31T12:15:11.2800000+00:00</lastModDate>
        
        <creator>Arshad Ali Khan</creator>
        
        <creator>Mazlina Abdul Majid</creator>
        
        <creator>Abdulhalim Dandoush</creator>
        
        <subject>Convolutional neural network; Genetic Algorithm; air quality estimation; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Air pollution poses significant threats to human health and the environment, making effective monitoring increasingly essential. Traditional methods using fixed monitoring stations have challenges related to high costs and limited coverage. This paper proposes a new approach using convolutional neural networks with genetic algorithms for estimating air quality directly from images. The convolutional neural network is optimized using genetic algorithms, which dynamically tune hyper-parameters such as learning rate, batch size, and momentum to improve performance and generalizability across diverse environmental conditions. Our approach improves performance and reduces the risk of overfitting, thus ensuring balanced and robust results. To mitigate overfitting, we implemented dropout layers, batch normalization, and early stopping, significantly enhancing the model’s generalization capability. Specifically, three different open-access datasets were combined into a single training dataset, capturing extensive temporal, spatial, and environmental variability. Extensive testing of the model performance was conducted with a broad set of metrics, including precision, recall, and F1 score. The results demonstrate that our model not only achieves high accuracy but also maintains well-balanced performance across all metrics, ensuring robust classification of different air quality levels. For instance, the model achieved a precision of 0.97, a recall of 0.97, and an overall accuracy of 0.9544, significantly outperforming baseline methods in all metrics. These improvements underscore the effectiveness of Genetic Algorithms in optimizing the model.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_113-Image_Based_Air_Quality_Estimation_Using_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SSFed: Statistical Significance Aggregation Algorithm in Federated Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603112</link>
        <id>10.14569/IJACSA.2025.01603112</id>
        <doi>10.14569/IJACSA.2025.01603112</doi>
        <lastModDate>2025-03-31T12:15:11.2330000+00:00</lastModDate>
        
        <creator>Yousef Alsenani</creator>
        
        <subject>Federated learning; non-i.i.d data; model aggregation; privacy-preserving AI; federated optimization; decentralized learning; data heterogeneity; distributed machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Federated learning enables collaborative model training across multiple clients without sharing raw data, where the global server aggregates local models. One of the primary challenges in this setting is dealing with non-i.i.d data, which can lead to biased aggregations, as well as the overhead of frequent communication between clients and the server. Our approach improves state-of-the-art aggregation by adding statistical significance testing. This step assigns greater weight to client updates with higher statistical impact. Only statistically significant updates are included in the global model. The process begins with each client training a local model on its dataset. Clients then send these trained parameters to the server. At the global server, statistical significance testing is applied by calculating z-scores for each parameter. Updates with z-scores below a set threshold are excluded, and each included update is weighted based on its significance. SSFed achieves a final accuracy of 88.71% in just 20 rounds, outperforming baseline algorithms and resulting in an average improvement of 25% over traditional federated learning methods. This demonstrates faster convergence and stronger performance, especially under highly non-i.i.d client data distributions. Our SSFed implementation is available on GitHub.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_112-SSFed_Statistical_Significance_Aggregation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Sine-Cosine Optimization Technique for Stability and Domain of Attraction Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603111</link>
        <id>10.14569/IJACSA.2025.01603111</id>
        <doi>10.14569/IJACSA.2025.01603111</doi>
        <lastModDate>2025-03-31T12:15:11.1830000+00:00</lastModDate>
        
        <creator>Messaoud Aloui</creator>
        
        <creator>Faical Hamidi</creator>
        
        <creator>Mohammed Aoun</creator>
        
        <creator>Houssem Jerbi</creator>
        
        <subject>Domain of Attraction; nonlinear autonomous systems; Lyapunov function; Lyapunov’s theory; stability; optimization; Adaptive Sine-Cosine Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In the last few years, researchers have concentrated on estimating and maximizing the Domain of Attraction of autonomous nonlinear systems. Based on Lyapunov theory, the approach proposed in this paper aims to give an accurate estimation of the Domain of Attraction with high performance compared to existing conventional methods. The Adaptive Sine-Cosine Algorithm is considered one of the most advanced algorithms: it combines broad exploration with a strong local search and provides high-quality convergence conditions. This paper uses the benefits of the Adaptive Sine-Cosine Algorithm to develop a flexible method for estimating the Domain of Attraction through oriented sampling, so as to guarantee the largest sublevel set of the given Lyapunov function. The approach is applied to several benchmark examples, which validate its efficiency and its ability to provide accurate results.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_111-Adaptive_Sine_Cosine_Optimization_Technique_for_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vulnerability Testing of RESTful APIs Against Application Layer DDoS Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603110</link>
        <id>10.14569/IJACSA.2025.01603110</id>
        <doi>10.14569/IJACSA.2025.01603110</doi>
        <lastModDate>2025-03-31T12:15:11.1530000+00:00</lastModDate>
        
        <creator>Sivakumar K</creator>
        
        <creator>Santhi Thilagam P</creator>
        
        <subject>DDoS; rate-limiting; HTTP/1.1; HTTP/2; API; micro service; multiplexing; security; DoS; security testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In recent years, modern mobile and web applications have been shifting from monolithic to microservice-based architectures because of issues such as scalability and ease of maintenance. These services are exposed to clients through Application Programming Interfaces (APIs). APIs are built, integrated, and deployed quickly. Because APIs interact directly with backend servers, security is of paramount importance for CAP. Denial-of-service (DoS) attacks are among the most serious attacks, denying service to legitimate requests. Rate-limiting policies are used to stop API DoS attacks, but bypassing rate limits or launching flooding attacks can overload the backend server. Even a sophisticated attack using HTTP/2 multiplexing with multiple clients can lead to severe disruption of service. This research shows how a sophisticated multi-client attack on a high-workload endpoint leads to a DoS attack.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_110-Vulnerability_Testing_of_RESTful_APIs_Against_Application_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Optimization Design of the Pattern Matrix Based on EXIT Chart for PDMA Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603109</link>
        <id>10.14569/IJACSA.2025.01603109</id>
        <doi>10.14569/IJACSA.2025.01603109</doi>
        <lastModDate>2025-03-31T12:15:11.1070000+00:00</lastModDate>
        
        <creator>Hanqing Ding</creator>
        
        <creator>Jiaxue Li</creator>
        
        <creator>Jin Xu</creator>
        
        <subject>PM optimization; EXIT chart; PDMA system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The maximum function node degree of the pattern matrix (PM) dominates the detection complexity of the belief propagation algorithm for pattern division multiple access (PDMA) systems. This work proposes a method to search for the optimal PM ensemble for a PDMA system under constrained detection complexity. The problem is converted into finding the optimal variable node (VN) degree distribution (DD) of the PM with the function node DD concentrated. Utilizing extrinsic information transfer (EXIT) chart techniques, the DD of a PM with an overload rate of 150% is obtained, and the PM is constructed by the progressive edge growth (PEG) algorithm. The performance of this PDMA system is evaluated and compared with systems of the same overload rate in the literature to verify the effectiveness of the proposed method. Furthermore, for iterative detection and decoding (IDD), the concatenated LDPC code is optimized to enhance the overall performance. EXIT analysis and Monte Carlo simulations confirm that the designed pattern matrix outperforms other pattern matrices by about 2.3 dB in bit error rate performance when both schemes employ the same LDPC code, and by 0.2 dB when using the optimized codes.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_109-The_Optimization_Design_of_the_Pattern_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Paradigm for Parameter Optimization of Hydraulic Fracturing Using Machine Learning and Large Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603108</link>
        <id>10.14569/IJACSA.2025.01603108</id>
        <doi>10.14569/IJACSA.2025.01603108</doi>
        <lastModDate>2025-03-31T12:15:11.0770000+00:00</lastModDate>
        
        <creator>Chunxi Yang</creator>
        
        <creator>Chuanyou Xu</creator>
        
        <creator>Yue Ma</creator>
        
        <creator>Bang Qu</creator>
        
        <creator>Yiquan Liang</creator>
        
        <creator>Yajun Xu</creator>
        
        <creator>Lei Xiao</creator>
        
        <creator>Zhimin Sheng</creator>
        
        <creator>Zhenghao Fan</creator>
        
        <creator>Xin Zhang</creator>
        
        <subject>Hydraulic fracturing; parameter optimization; large language model; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Hydraulic fracturing is a common practice in the oil and gas industry meant to increase the production of oil and natural gas. In this process, appropriate fracturing design parameters are important to maximize the efficiency of fracture propagation. However, conventional fracturing parameter design methods often rely on expert experience or fail to take into account complex geological conditions, resulting in suboptimal parameter design schemes. Therefore, this paper presents PPOHyFrac, a novel paradigm for optimizing hydraulic fracturing parameters with a large language model and machine learning, which aims to automatically extract, assess, and optimize fracturing parameters. PPOHyFrac uses an advanced large language model to extract key parameters from hundreds of fracturing design documents, and then refines the extracted data using statistical methods such as missing value imputation and feature normalization. In addition, correlation analysis techniques are utilized to identify key influencing factors, and finally machine learning methods are implemented to optimize and predict these factors. This paper also presents a comparative study of five machine learning methods. Experiments show that random forest is the best choice for parameter optimization and can improve the prediction and optimization accuracy of key parameters.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_108-A_Novel_Paradigm_for_Parameter_Optimization_of_Hydraulic_Fracturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cross-Layer Framework for Optimizing Energy Efficiency in Wireless Sensor Networks: Design, Implementation, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603107</link>
        <id>10.14569/IJACSA.2025.01603107</id>
        <doi>10.14569/IJACSA.2025.01603107</doi>
        <lastModDate>2025-03-31T12:15:11.0130000+00:00</lastModDate>
        
        <creator>Sami Mohammed Alenezi</creator>
        
        <subject>Wireless sensor network; cross-layer; energy efficient; performance; OPNET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Environmental monitoring, healthcare, and industrial automation are among the numerous modern applications in which Wireless Sensor Networks (WSNs) are becoming increasingly indispensable. Despite this, the scalability and endurance of these networks are still significantly impeded by the energy constraints of sensor nodes. This study proposes a novel cross-layer framework that dynamically optimizes energy consumption across the entire communication hierarchy by integrating the Application, Network, Data Link, and Physical layers to address this issue. The framework introduces significant innovations, including an adaptive Low-Traffic Aware Hybrid Medium Access Control (LTH-MAC) protocol that is intended to adjust transmission schedules in response to real-time traffic conditions, and energy-aware routing algorithms that consider both node energy levels and network topology when determining the most energy-efficient communication paths. The framework exhibits substantial enhancements in energy efficiency, reaching a reduction in energy consumption of up to 43%, as evidenced by extensive simulations conducted with OPNET. Furthermore, the network lifetime is extended by 8%, and transmission is improved by 10% compared to conventional statically defined layered architectures. These findings underscore the potential of the proposed cross-layer framework to not only improve overall network performance but also reduce energy consumption, thereby guaranteeing sustainable and efficient operation in resource-constrained environments. Additionally, the solution’s scalability renders it suitable for a diverse array of WSN applications, providing a promising solution for overcoming the constraints of energy and establishing the foundation for more durable and efficient sensor networks. This study establishes the foundation for future research on adaptive, cross-layer protocols that can further enhance energy-efficient communication in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_107-A_Cross_Layer_Framework_for_Optimizing_Energy_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm-Driven Cover Set Scheduling for Longevity in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603106</link>
        <id>10.14569/IJACSA.2025.01603106</id>
        <doi>10.14569/IJACSA.2025.01603106</doi>
        <lastModDate>2025-03-31T12:15:10.9500000+00:00</lastModDate>
        
        <creator>Ibtissam Larhlimi</creator>
        
        <creator>Mansour Lmkaiti</creator>
        
        <creator>Maryem Lachgar</creator>
        
        <creator>Hicham Ouchitachen</creator>
        
        <creator>Anouar Darif</creator>
        
        <creator>Hicham Mouncif</creator>
        
        <subject>Maximum network lifetime; wireless sensor network; coverage; sets scheduling; genetic algorithm; pattern search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper aims to develop an efficient scheduling approach based on Genetic Algorithms to optimize energy consumption and maximize the operational lifetime of Wireless Sensor Networks (WSNs). Effective energy management is crucial for prolonging the operational lifespan of wireless sensor networks (WSNs) that include a substantial number of sensors. Simultaneously activating all sensors results in a fast depletion of energy, thus diminishing the overall lifespan of the network. To address this issue, it is necessary to schedule sensor activity in an effective manner. This task, known as the maximum coverage set scheduling (MCSS) problem, is highly complex and has been demonstrated to be NP-hard. This article presents a customized genetic algorithm designed to tackle the MCSS problem, aiming to improve the longevity of Wireless Sensor Networks (WSNs). Our methodology effectively detects and enhances combinations of coverage sets and their corresponding schedules. The program incorporates key criteria such as the detection ranges of individual sensors, their energy levels, and activity durations to optimize the overall energy efficiency and operational sustainability of the network. The performance of the suggested algorithm is assessed through simulations and compared to that of the Greedy algorithm and the Pattern search algorithm. The results indicate that our genetic algorithm not only maximizes network lifetime but also enhances the efficiency and efficacy of solving the MCSS problem. This represents a significant improvement in managing the energy consumption in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_106-Genetic_Algorithm_Driven_Cover_Set_Scheduling_for_Longevity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MEXT: A Parameter-Free Oversampling Approach for Multi-Class Imbalanced Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603105</link>
        <id>10.14569/IJACSA.2025.01603105</id>
        <doi>10.14569/IJACSA.2025.01603105</doi>
        <lastModDate>2025-03-31T12:15:10.9200000+00:00</lastModDate>
        
        <creator>Chittima Chiamanusorn</creator>
        
        <creator>Krung Sinapiromsaran</creator>
        
        <subject>Class imbalance; classification; extreme anomalous; multiclass; oversampling; parameter-free</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Machine learning classifiers face significant challenges when confronted with class-imbalanced datasets, particularly in multi-class scenarios. The inherent skewness in class distributions often leads to biased model predictions, with classifiers struggling to accurately identify instances from underrepresented classes. This paper introduces MEXT, a novel parameter-free oversampling technique specifically designed for multi-class imbalanced datasets. Unlike conventional approaches that often rely on the one-against-all strategy and require manual parameter tuning for each class, MEXT addresses these limitations by simultaneously balancing all classes. By leveraging anomalous score analysis, MEXT automatically determines optimal locations for synthesizing new instances of minority classes, eliminating the need for manual parameter selection. The technique aims to achieve a balanced class distribution where each class has an equal number of instances. To evaluate MEXT’s effectiveness, extensive experiments were conducted on a collection of multi-class datasets from the UCI repository. The proposed MEXT algorithm was evaluated against a suite of state-of-the-art SMOTE-based oversampling techniques, including SMOTE, ADASYN, Safe-Level SMOTE, MDO, and DSRBF. All comparative algorithms were implemented within the one-against-all framework. Hyperparameter optimization for each algorithm was performed using grid search. An automated machine learning pipeline was employed to identify the optimal classifier-hyperparameter combination for each dataset and oversampling technique. The Wilcoxon signed-rank test was subsequently utilized to statistically assess the performance of MEXT relative to the other oversampling techniques. The results demonstrate that MEXT consistently outperforms the other methods in terms of average ranking of key evaluation metrics, including macro-precision, macro-recall, F1-measure, and G-mean, indicating its superior ability to address multi-class imbalanced learning problems.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_105-MEXT_A_Parameter_Free_Oversampling_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Wheat Pest and Disease in Complex Backgrounds Based on Improved YOLOv8 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603104</link>
        <id>10.14569/IJACSA.2025.01603104</id>
        <doi>10.14569/IJACSA.2025.01603104</doi>
        <lastModDate>2025-03-31T12:15:10.8730000+00:00</lastModDate>
        
        <creator>Dandan Zhong</creator>
        
        <creator>Penglin Wang</creator>
        
        <creator>Jie Shen</creator>
        
        <creator>Dongxu Zhang</creator>
        
        <subject>Wheat disease and pest; YOLOv8; edge amplification; visual remote dependency; global context; vision transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Detecting wheat diseases and pests, particularly those characterized by small targets amidst complex background interference, presents a significant challenge in agricultural research. To address this issue and achieve precise and efficient detection, we propose an enhanced version of YOLOv8, termed MGT-YOLO, which incorporates multi-scale edge enhancement and visual remote dependency mechanisms. Our methodology begins with the creation of a comprehensive dataset, WheatData, comprising 2393 high-resolution images capturing various wheat diseases and pests across different growth stages in diverse agricultural settings. To improve the detection of small targets, we implemented a multi-scale edge amplification technique within the backbone network of YOLOv8, enhancing its ability to capture minute details of wheat diseases and pests. Furthermore, we introduced the C2f GlobalContext module in the neck network, which integrates global contextual relationships and facilitates the fusion of features from small-sized objects by leveraging remote dependencies in visual imagery. Additionally, we incorporated a Vision Transformer module into the neck network to enhance the processing efficiency of small-scale disease and pest features. The proposed MGT-YOLO network was rigorously evaluated on the WheatData dataset. The results demonstrated significant improvements, with mAP@0.5 values of 90.0% for powdery mildew and 65.5% for smut disease, surpassing the baseline YOLOv8 by 5.3% and 6.8%, respectively. The overall mAP@0.5 reached 89.5%, representing a 2.0% improvement over YOLOv8 and outperforming other state-of-the-art detection methods. These findings suggest that MGT-YOLO is a promising solution for real-time detection of agricultural diseases and pests, offering enhanced accuracy and efficiency in complex agricultural environments.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_104-Detection_of_Wheat_Pest_and_Disease_in_Complex_Backgrounds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Agile Requirements Change Management: Integrating LLMs with Fuzzy Best-Worst Method for Decision Support</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603103</link>
        <id>10.14569/IJACSA.2025.01603103</id>
        <doi>10.14569/IJACSA.2025.01603103</doi>
        <lastModDate>2025-03-31T12:15:10.8400000+00:00</lastModDate>
        
        <creator>Bushra Aljohani</creator>
        
        <creator>Abdulmajeed Aljuhani</creator>
        
        <creator>Tawfeeq Alsanoosy</creator>
        
        <subject>Fuzzy Best-Worst Method; Large Language Models; Agile Requirements Change Management; Global Software Development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Agile Requirements Change Management (ARCM) in Global Software Development (GSD) poses significant challenges due to the dynamic nature of project requirements and the complexities of distributed team coordination. One approach used to mitigate these challenges and ensure efficient collaboration is the identification and prioritization of success factors. Traditional Multi-Criteria Decision-Making methods, such as the Best-Worst Method (BWM), have been employed successfully to prioritize success factors. However, these methods often fail to capture the inherent uncertainties of decision-making in a GSD context. To address this limitation, this study integrates Large Language Models (LLMs) with the Fuzzy Best-Worst Method (FBWM) to enhance prioritization accuracy and decision support. We propose a model for comparing the prioritization outcomes of human expert assessments and LLM-generated decisions to evaluate the consistency and effectiveness of machine-generated decisions relative to those made by human experts. The findings indicate that the LLM-driven FBWM exhibits high reliability in mirroring expert judgments, demonstrating the potential of LLMs to support strategic decision-making in ARCM. This study contributes to the evolving landscape of AI-driven project management by providing empirical evidence of LLMs’ utility in improving ARCM for GSD.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_103-Enhancing_Agile_Requirements_Change_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Behavior Analysis in Basketball Video: A Spatiotemporal Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603102</link>
        <id>10.14569/IJACSA.2025.01603102</id>
        <doi>10.14569/IJACSA.2025.01603102</doi>
        <lastModDate>2025-03-31T12:15:10.7930000+00:00</lastModDate>
        
        <creator>Jingyi Wang</creator>
        
        <subject>Basketball; player movement analysis; player technique analysis; deep learning; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Video-based sports movement analysis technologies have significant practical applications. Digital video footage, human-computer interaction, and related technologies can greatly improve the effectiveness of sports training. This research examines players’ technical proficiency in basketball game footage and proposes a behaviour assessment technique based on deep learning and attention mechanisms. First, we develop an approach for automatically extracting the marking lines of the basketball court and stadium. Then, the most significant frames of the footage are selected using a spatial and temporal ranking technique. Next, we design a behaviour comprehension and prediction technique built on an autoencoder design. The results of the analysis can be delivered to instructors and data scientists instantly to support their strategic and professional decisions. An extensive dataset of basketball videos is used to evaluate the proposed method. The outcomes demonstrate that the recommended attention mechanism-based strategy competently recognises the movements of individuals in the video while attaining substantial behavioural assessment efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_102-Deep_Learning_Based_Behavior_Analysis_in_Basketball_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Satellite Flood Image Classification Using Attention-Based CNN and Transformer Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603101</link>
        <id>10.14569/IJACSA.2025.01603101</id>
        <doi>10.14569/IJACSA.2025.01603101</doi>
        <lastModDate>2025-03-31T12:15:10.7630000+00:00</lastModDate>
        
        <creator>Sanket S Kulkarni</creator>
        
        <creator>Ansuman Mahapatra</creator>
        
        <subject>CNN; DenseNet; ResNet101v2; VGG16; hybrid CNN model; CBAM; vision transformer; xView2 Building Damage (xBD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Floods are among the most frequent and devastating natural disasters, significantly impacting infrastructure, ecosystems, and human communities. Accurate satellite-based flood image classification is crucial for assessing flood-affected regions and supporting emergency response efforts. This study uses Convolutional Neural Networks (CNNs) and transformer-based architectures to enhance flood classification, integrating the Convolutional Block Attention Module (CBAM) to improve feature extraction. Using the xView2 xBD dataset, we classify houses as completely or partially surrounded by floodwater. Experimental evaluations demonstrate that ResNet101v2 achieved an accuracy of 86.87%, while a hybrid CNN model (MobileNetV2-DenseNet201) attained 85.83%, further improving to 89.54% with CBAM. The Vision Transformer (ViT) with CBAM achieved the highest accuracy of 90.75%, showcasing the effectiveness of attention-based hybrid models for flood image classification. These results highlight the potential of integrating CBAM with deep learning architectures to enhance classification accuracy and improve flood impact assessment.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_101-Improving_Satellite_Flood_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Monte Carlo Localization for Agricultural Mobile Robots with the Normal Distributions Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01603100</link>
        <id>10.14569/IJACSA.2025.01603100</id>
        <doi>10.14569/IJACSA.2025.01603100</doi>
        <lastModDate>2025-03-31T12:15:10.6830000+00:00</lastModDate>
        
        <creator>Brian Lai Lap Hong</creator>
        
        <creator>Mohd Azri Bin Mohd Izhar</creator>
        
        <creator>Norulhusna Binti Ahmad</creator>
        
        <subject>Adaptive monte carlo localization; normal distributions transform; pose estimation; precision agriculture; agricultural robotics; outdoor localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Localization is crucial for robots to navigate autonomously in agricultural environments. This paper introduces an improved Adaptive Monte Carlo Localization (AMCL) algorithm integrated with the Normal Distributions Transform (NDT) to address the challenges of navigation in agricultural fields. 2D Light Detection and Ranging (LiDAR) measures distances to surrounding objects using laser light, and captures distance data in a single horizontal plane, making it ideal for detecting obstacles and field features such as trees and crop rows. While conventional AMCL has been studied for indoor environments, there is a lack of research on its application in outdoor agricultural settings, particularly when using 2D LiDAR. The proposed method enhances localization accuracy by applying the NDT after the conventional AMCL estimation, refining the pose estimate through a more detailed alignment of the 2D LiDAR data with the map. Simulations conducted in a palm oil plantation environment demonstrate a 53% reduction in absolute pose error and a 50% reduction in relative position error compared to conventional AMCL. This highlights the potential of the AMCL-NDT approach with 2D LiDAR for cost-effective and scalable deployment in precision agriculture.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_100-Improved_Monte_Carlo_Localization_for_Agricultural_Mobile_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Synergy Between Digital Twin Technology and Artificial Intelligence: A Comprehensive Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160399</link>
        <id>10.14569/IJACSA.2025.0160399</id>
        <doi>10.14569/IJACSA.2025.0160399</doi>
        <lastModDate>2025-03-31T12:15:10.6370000+00:00</lastModDate>
        
        <creator>Wael Y. Alghamdi</creator>
        
        <creator>Rayan M. Alshamrani</creator>
        
        <creator>Ruba K. Aloufi</creator>
        
        <creator>Shaikhah O. Ba Lhamar</creator>
        
        <creator>Retaj A. Altwirqi</creator>
        
        <creator>Fatimah S. Alotaibi</creator>
        
        <creator>Shahad M. Althobaiti</creator>
        
        <creator>Hadeel M. Altalhi</creator>
        
        <creator>Shatha A. Alshamrani</creator>
        
        <creator>Atouf S Alazwari</creator>
        
        <subject>Digital twin; artificial intelligence; internet of things; big data; predictive analytics; real-time monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The integration of Digital Twin Technology with Artificial Intelligence (AI) represents a transformative advancement across multiple domains. Digital twins are dynamic, real-time virtual representations of physical systems, leveraging technologies such as Internet of Things (IoT), augmented and virtual reality (AR/VR), big data analytics, 3D modeling, and cloud computing. Initially conceptualized by Michael Grieves in 2003 and further developed by organizations such as NASA, digital twins have been widely adopted in manufacturing, healthcare, smart cities, and energy systems. This paper provides a comprehensive analysis of how real-time data streams, continuous feedback loops, and predictive analytics within digital twins enhance AI capabilities, enabling anomaly detection, predictive maintenance, and data-driven decision-making. Additionally, the study examines technical and operational challenges, including data integration, sensor accuracy, cybersecurity, and computational overhead. By evaluating current methodologies and identifying future research directions, this survey underscores the potential of digital twins to drive adaptive, intelligent, and resilient systems in an increasingly data-driven world.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_99-Exploring_the_Synergy_Between_Digital_Twin_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Cybersecurity Through Knowledge Sharing Practices: Limitations, Analysis of Current Trends and Future Research Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160398</link>
        <id>10.14569/IJACSA.2025.0160398</id>
        <doi>10.14569/IJACSA.2025.0160398</doi>
        <lastModDate>2025-03-31T12:15:10.5900000+00:00</lastModDate>
        
        <creator>Moneer Alshaikh</creator>
        
        <creator>Sajid Mehmood</creator>
        
        <creator>Rashid Amin</creator>
        
        <creator>Faisal S. Alsubaei</creator>
        
        <subject>Cybersecurity; knowledge sharing; Saudi Arabia; Vision 2030; digital transformation; cybersecurity education; cyber threats; cybersecurity framework; cultural barriers; National Cybersecurity Authority (NCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This research examines Saudi Arabia’s cybersecurity knowledge-sharing programs during its digital transformation under Vision 2030, combining literature reviews with insights from expert specialists to analyze the information-transfer systems currently used by cybersecurity professionals. The analysis shows how technological developments, together with organizational and cultural factors, shape these practices. The researchers found that successful knowledge sharing is limited by cultural obstacles such as resistance to openness, lack of trust, hierarchical structures, divisions within organizations, insufficient workflow systems, and outdated technological capabilities. Through analysis of knowledge-sharing programs established by the National Cybersecurity Authority (NCA), Saudi Aramco, and the King Abdulaziz City for Science and Technology (KACST), the researchers show that strategic programs improve national cybersecurity readiness. The research provides actionable advice that combines the design of a national security plan and secure technology funding with cross-sector mentorship initiatives, integrated incident reporting, educational programs, and performance-driven reward systems for motivation. The research offers combined theory- and practice-oriented guidance that helps Saudi Arabia’s policymakers, organizations, and cybersecurity practitioners build effective strategies as they establish a leadership position in collaborative cybersecurity practices internationally.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_98-The_Impact_of_Cybersecurity_Through_Knowledge_Sharing_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating BDI Cognitive Intelligence in IIoT: A Framework for Advanced Decision-Making in Manufacturing and Policy Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160397</link>
        <id>10.14569/IJACSA.2025.0160397</id>
        <doi>10.14569/IJACSA.2025.0160397</doi>
        <lastModDate>2025-03-31T12:15:10.5430000+00:00</lastModDate>
        
        <creator>Ammar Ahmed E. Elhadi</creator>
        
        <subject>BDI cognitive intelligence; IIoT; smart manufacturing; decision-making; adaptive systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper presents an innovative system framework that integrates multiple domains—Smart Cities, Underwater Environments, and Healthcare—using advanced Data Analytics Platforms enhanced by BDI (Belief-Desire-Intention) cognitive intelligence. Current data analytics systems, while capable of collecting and processing large amounts of data, exhibit significant gaps in intelligent decision-making, particularly in dynamic and context-sensitive environments. By leveraging the BDI model, which mimics human cognitive processes through beliefs, desires, and intentions, the proposed system offers a context-aware, adaptive approach to decision-making that outperforms traditional AI-based analytics by enabling dynamic, goal-driven responses to real-time data in IIoT environments. The system is designed to dynamically respond to real-time data collected from IoT-enabled devices and actuators, improving efficiency, safety, and adaptability. The proposed framework addresses the limitations of existing platforms by incorporating the latest technology and techniques for proactive, intelligent decision-making. The qualitative analysis of the proposed model shows promising results, particularly in its ability to respond to rapid environmental changes, highlighting its potential for transformative applications in urban management, marine conservation, and healthcare delivery.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_97-Integrating_BDI_Cognitive_Intelligence_in_IIoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capacity Analysis of MIMO Channels Under High SNR Using Nakagami-q Fading Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160396</link>
        <id>10.14569/IJACSA.2025.0160396</id>
        <doi>10.14569/IJACSA.2025.0160396</doi>
        <lastModDate>2025-03-31T12:15:10.4830000+00:00</lastModDate>
        
        <creator>Syeda Anika Tasnim</creator>
        
        <creator>Md. Mazid-Ul-Haque</creator>
        
        <creator>Md. Sajid Bin Faisal</creator>
        
        <creator>Rakin Sad Aftab</creator>
        
        <subject>MIMO systems; Nakagami-q; high-SNR capacity; antenna configurations; wireless channel modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study explores the capacity of multiple-input multiple-output (MIMO) wireless channels under high signal-to-noise ratio (SNR) conditions, incorporating Nakagami-q fading distribution alongside Rayleigh and Rician fading models. The main objective is to develop an analytical framework that accurately models MIMO channel capacity under high-SNR conditions using Nakagami-q fading and compares its performance with conventional fading models. By employing a robust wireless channel modeling approach, the study examines the impact of various antenna configurations on system performance. The derived framework assesses how different fading conditions affect capacity, showing that MIMO systems effectively mitigate multipath effects. The results reveal that channel capacity improves with an increasing number of antennas and favorable fading parameters, emphasizing the significance of antenna configurations in enhancing performance. The comparative analysis highlights substantial differences in capacity across fading models, offering critical insights to optimize next-generation wireless channel modeling in diverse environments.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_96-Capacity_Analysis_of_MIMO_Channels_Under_High_SNR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated DoS Penetration Testing Using Deep Q Learning Network-Quantile Regression Deep Q Learning Network Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160395</link>
        <id>10.14569/IJACSA.2025.0160395</id>
        <doi>10.14569/IJACSA.2025.0160395</doi>
        <lastModDate>2025-03-31T12:15:10.4500000+00:00</lastModDate>
        
        <creator>Mariam Alhamed</creator>
        
        <creator>M M Hafizur Rahman</creator>
        
        <subject>DQN; QR-DQN; MulVAL; DFS; penetration testing; DoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Penetration testing is essential to determine the security level of a network. A penetration test simulates an attack along an attack path to identify vulnerabilities, reduce likely losses, and continuously enhance security. It facilitates the simulation of different attack scenarios, supports the development of robust security measures, and enables proactive risk assessment. We combine MulVAL with the DQN and QR-DQN algorithms to solve the problems of incorrect route prediction and difficult convergence associated with attack path planning training. The algorithm generates an attack tree, searches for paths within the attack graph, and uses a depth-first search method to create a transfer matrix. In addition, the QR-DQN and DQN algorithms determine the optimal attack path for the target system. The results of this study show that although the QR-DQN algorithm requires more resources and takes longer to train than the traditional DQN algorithm, it is effective in identifying vulnerabilities and optimizing attack paths.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_95-Automated_DoS_Penetration_Testing_Using_Quantile_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Vision-Based Religious Tourism Systems in Makkah Using Fine-Tuned YOLOv11 for Landmark Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160394</link>
        <id>10.14569/IJACSA.2025.0160394</id>
        <doi>10.14569/IJACSA.2025.0160394</doi>
        <lastModDate>2025-03-31T12:15:10.4200000+00:00</lastModDate>
        
        <creator>Kaznah Alshammari</creator>
        
        <subject>YOLOv11; object detection; Makkah landmark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Makkah, one of the most significant cities in the Islamic world, possesses a rich architectural and cultural heritage that requires precise detection and identification of its landmarks. Accurate landmark detection plays a vital role in urban planning, cultural preservation, and enhancing tourism experiences. In this study, fine-tuned versions of the YOLOv11 network, specifically the nano and small variants, are proposed for efficient and precise detection of Makkah’s landmarks. The YOLOv11 framework, renowned for its real-time object detection capabilities, was carefully adapted to address the unique challenges posed by the diverse visual characteristics of Makkah’s landmarks, including varying scales, intricate textures, and challenging environmental conditions. To further enhance the models for deployment in embedded systems with low-latency requirements, a quantization technique is applied. This process significantly reduces model size and increases inference speed, optimizing the network for resource-constrained environments while maintaining high detection accuracy. Beyond technical improvements, this approach supports real-world applications such as interactive tourism via mobile and AR systems, automated heritage documentation, and continuous monitoring of historic sites for conservation efforts. Additionally, integration into smart city infrastructures can enhance security and management of cultural landmarks. Experimental results show that the fine-tuned YOLOv11 models, particularly the small version, achieve high accuracy, with notable improvements in precision and recall compared to baseline models. This research demonstrates the potential of deep learning techniques for cultural heritage detection and lays the foundation for future applications in urban analytics, geospatial mapping, and real-time vision-based systems for tourism and heritage preservation.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_94-Enhancing_Vision_Based_Religious_Tourism_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Small Object Detection in Complex Images: Evaluation of Faster R-CNN and Slicing Aided Hyper Inference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160393</link>
        <id>10.14569/IJACSA.2025.0160393</id>
        <doi>10.14569/IJACSA.2025.0160393</doi>
        <lastModDate>2025-03-31T12:15:10.3730000+00:00</lastModDate>
        
        <creator>Fatma Mazen Ali Mazen</creator>
        
        <creator>Yomna Shaker</creator>
        
        <subject>Faster R-CNN; Cascaded R-CNN; SAHI; ATSS; artistic head detection; small object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Small object detection has many applications, including maritime surveillance, underwater computer vision, agriculture, traffic flow analysis, drone surveying, etc. Object detection has made notable improvements in recent years. Despite these advancements, there is a notable disparity in performance between detecting small and large objects. This gap arises because small objects carry less information and their features are harder to express. This paper investigates the performance of Faster Region-Based Convolutional Neural Networks (R-CNN), one of the most popular and user-friendly object detection models, for head detection and counting in artworks rather than images of real humans. The impact of Slicing Aided Hyper Inference (SAHI) on the model’s capability to detect small heads in large images is also analyzed. The Kaggle-hosted Artistic Head Detection dataset was used to train and evaluate the proposed model. The effectiveness of the proposed methodology was demonstrated by integrating SAHI into two other object detection models, Cascaded R-CNN and Adaptive Training Sample Selection (ATSS). The experimental results reveal that applying SAHI on top of any object detector enhances its ability to recognize and detect tiny and variously scaled heads in large-scale images, which is a significant challenge in numerous applications. At a confidence level of 0.8, the SAHI-enhanced Faster R-CNN achieved the best private Root Mean Square Error (RMSE) score of 5.31337, while the SAHI-enhanced Cascaded R-CNN obtained the highest public RMSE score of 3.47005.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_93-Small_Object_Detection_in_Complex_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Driven Preventive Maintenance for Fibreboard Production in Industry 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160392</link>
        <id>10.14569/IJACSA.2025.0160392</id>
        <doi>10.14569/IJACSA.2025.0160392</doi>
        <lastModDate>2025-03-31T12:15:10.3570000+00:00</lastModDate>
        
        <creator>Sirirat Suwatcharachaitiwong</creator>
        
        <creator>Nikorn Sirivongpaisal</creator>
        
        <creator>Thattapon Surasak</creator>
        
        <creator>Nattagit Jiteurtragool</creator>
        
        <creator>Laksiri Treeranurat</creator>
        
        <creator>Aree Teeraparbseree</creator>
        
        <creator>Phattara Khumprom</creator>
        
        <creator>Sirirat Pungchompoo</creator>
        
        <creator>Dollaya Buakum</creator>
        
        <subject>Predictive maintenance; machine learning; fibreboard production; operational efficiency; Industry 4.0; smart manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The transition to Industry 4.0 has necessitated the adoption of intelligent maintenance strategies to enhance manufacturing efficiency and reduce operational disruptions. In fibreboard production, conventional preventive maintenance, reliant on fixed schedules, often leads to inefficient resource allocation and unexpected failures. This study proposes a machine learning-driven predictive maintenance (PdM) framework that utilises real-time sensor data and predictive analytics to optimise maintenance scheduling and improve system reliability. The proposed approach is validated using real-world industrial data, where Random Forest and Gradient Boosting regression models are applied to predict machine wear progression and estimate the remaining useful life (RUL) of critical components. Performance evaluation shows that Random Forest outperforms Gradient Boosting, achieving a lower Mean Squared Error (MSE) of 0.630, a lower Mean Absolute Error (MAE) of 0.613, and a higher R-squared score of 0.857. Feature importance analysis further identifies surface grade as a key determinant of equipment wear, suggesting that redistributing production across lower-impact grades can significantly reduce long-term wear and extend machine lifespan. These findings underscore the potential of artificial intelligence in predictive maintenance applications, contributing to the advancement of smart manufacturing in Industry 4.0. This research lays the foundation for further investigations into adaptive, real-time maintenance frameworks, supporting sustainable and efficient industrial operations.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_92-Machine_Learning_Driven_Preventive_Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MAHYA: Facial Recognition-Based Pilgrim Identification System for Enhanced Health Monitoring and Assistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160391</link>
        <id>10.14569/IJACSA.2025.0160391</id>
        <doi>10.14569/IJACSA.2025.0160391</doi>
        <lastModDate>2025-03-31T12:15:10.3100000+00:00</lastModDate>
        
        <creator>Shahad Albalawi</creator>
        
        <creator>Lujin Alamri</creator>
        
        <creator>Jumanah Atut</creator>
        
        <creator>Shatha Albalawi</creator>
        
        <creator>Reem Haddaddi</creator>
        
        <creator>A’aeshah Alhakamy</creator>
        
        <subject>Facial recognition; emergency medical care; ResNet inception; siamese network; mobile health technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>During the Hajj season, Saudi Arabia experiences the arrival of millions of pilgrims from diverse linguistic and geographical backgrounds. This influx poses significant challenges for emergency medical care services. The primary objective of this study is to explore the technological shortcomings and difficulties encountered by healthcare teams during such large-scale gatherings and to propose improvements for more effective emergency medical response systems. This study introduces MAHYA, a mobile health technology application designed to enhance emergency medical responses. MAHYA integrates advanced facial recognition technology, utilizing Inception ResNet V1 and Siamese network algorithms, to quickly and accurately identify individuals and retrieve their medical histories. This quick access to vital medical information is crucial for timely and efficient emergency medical care. The app incorporates a few-shot learning approach to bolster its facial recognition capabilities, which is vital to manage the large number of pilgrims. Further technical aspects of MAHYA include its use of Flask for back-end operations, Python for data processing, and NGROK to ensure secure external connectivity. These features collectively empower the application to offer a highly effective, secure, and adaptive facial recognition service, tailored for the dynamic and densely populated environment of the Hajj. The findings of the deployment of this application indicate a substantial improvement in the operational efficiency of healthcare professionals on the ground, leading to faster response times and improved overall quality of emergency medical services.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_91-MAHYA_Facial_Recognition_Based_Pilgrim_Identification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Ensemble Selection for Personalized Cardiovascular Disease Prediction Using Clustering and Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160390</link>
        <id>10.14569/IJACSA.2025.0160390</id>
        <doi>10.14569/IJACSA.2025.0160390</doi>
        <lastModDate>2025-03-31T12:15:10.2800000+00:00</lastModDate>
        
        <creator>Mutaz A. B. Al-Tarawneh</creator>
        
        <creator>Khaled S. Al-Maaitah</creator>
        
        <creator>Ashraf Alkhresheh</creator>
        
        <subject>Cardiovascular disease prediction; adaptive ensemble selection; clustering techniques; feature selection; personalized healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Cardiovascular disease (CVD) remains one of the leading causes of mortality worldwide, highlighting the need for early and precise prediction to support timely intervention. This study introduces an ensemble-based adaptive approach that personalizes CVD prediction by dynamically adjusting model configurations based on patient subgroups. To achieve this, various clustering techniques, including KMeans, DBSCAN, and MeanShift, are employed alongside feature selection methods such as chi-square, Mutual Information, and a baseline that incorporates all features. By tailoring classifier selection to each cluster, the proposed approach optimizes predictive performance, with ensemble models configured using Multi-Layer Perceptron (MLP) or Decision Tree classifiers. Through extensive experiments utilizing 10-fold cross-validation, results indicate that the adaptive ensemble consistently surpasses the static ensemble in key performance metrics, including accuracy, precision, recall, F1-score, and AUC. In particular, the highest accuracy of 95.57% was achieved using MeanShift clustering with the entire set of features, demonstrating the effectiveness of density-based clustering in improving classification performance. Notably, this accuracy exceeds the best-reported results in previous studies, establishing a new benchmark for CVD prediction. These findings highlight the potential of adaptive ensemble selection to significantly improve diagnostic precision, providing valuable insights for personalized CVD prediction and broader applications in medical decision making.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_90-Adaptive_Ensemble_Selection_for_Personalized_Cardiovascular_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Machine Learning-Based Cyber Attack Detection in Electric Vehicles Charging Stations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160389</link>
        <id>10.14569/IJACSA.2025.0160389</id>
        <doi>10.14569/IJACSA.2025.0160389</doi>
        <lastModDate>2025-03-31T12:15:10.2630000+00:00</lastModDate>
        
        <creator>Mutaz A. B. Al-Tarawneh</creator>
        
        <creator>Omar Alirr</creator>
        
        <creator>Hassan Kanj</creator>
        
        <subject>Machine learning; cyber attack detection; cyber threats; distributed denial of service attack; charging stations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Electric vehicle (EV) chargers rely on resource-constrained embedded hardware to execute critical charging operations. However, conventional security solutions may not adequately meet the needs of these devices. Increasingly, machine learning techniques are being leveraged to detect cyber attacks during electric vehicle charging. This study aims to evaluate various base machine learning methods and conduct binary and multi-class classification experiments to enhance security and operational efficiency in EV charging stations. The experiments utilize the CICEVSE2024 dataset, curated by the Canadian Institute for Cybersecurity at the University of New Brunswick, designed specifically for anomaly detection and establishing behavioral patterns in EV charging stations. The analysis highlights nuances in performance across different machine learning classifiers. For instance, Random Forest achieved 95.07% accuracy in binary classification by constructing robust decision trees. Ensemble methods such as CatBoost and LightGBM further improved binary classification to 95.37% and 95.41%, respectively, through gradient boosting techniques. In multi-class attack classification, ensemble methods demonstrated superior performance, with the Stacking Ensemble achieving 91.1% accuracy by combining multiple models, and the Voting Ensemble achieving 90.7%. Notably, among homogeneous base classifiers, Extra Trees and HistGradient Boosting were particularly effective, achieving 90.2% and 89.8% accuracy, respectively, in multi-class classification tasks. These findings underscore the efficacy of machine learning in enhancing cybersecurity measures for EV charging infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_89-Performance_Evaluation_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Application and Potential of Renewable Energy in Landscape Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160388</link>
        <id>10.14569/IJACSA.2025.0160388</id>
        <doi>10.14569/IJACSA.2025.0160388</doi>
        <lastModDate>2025-03-31T12:15:10.2330000+00:00</lastModDate>
        
        <creator>YaWei Wu</creator>
        
        <creator>Xiang Meng</creator>
        
        <subject>Landscape architecture; sustainability; renewable energy; decision-making; deep learning; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The field of landscape architecture is constantly evolving to address sustainability and climate change. As renewable energy sources become more prevalent, there is a growing opportunity to integrate these technologies into landscape design. However, an effective technique for evaluating the potential of incorporating renewable energy management into landscape architecture is still lacking; as a result, decision-making procedures remain manual and subjective, and greater precision and consistency are required. Deep learning algorithms can be used to examine the potential for renewable energy management in landscape architecture, helping to address this problem. Deep learning is a branch of artificial intelligence that automatically extracts complicated relationships and patterns from data using multi-layer neural networks. With inputs such as topography, solar radiation, and climate, the algorithm can determine where in a particular landscape renewable energy installations would be most effective.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_88-Analysis_of_the_Application_and_Potential_of_Renewable_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy-Neural Network Approach to Market Supervision and Product Recall Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160387</link>
        <id>10.14569/IJACSA.2025.0160387</id>
        <doi>10.14569/IJACSA.2025.0160387</doi>
        <lastModDate>2025-03-31T12:15:10.1830000+00:00</lastModDate>
        
        <creator>Wei Chen</creator>
        
        <subject>Fuzzy-neural network; customer complaint rate; product quality rating; market trend index; market supervision; accuracy; precision; recall; F1-Score and MSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper proposes a fuzzy-neural network method for market monitoring and product recall prediction. The method combines fuzzy logic and neural networks to handle complex and ambiguous input. The fuzzy logic component fuzzifies the input variables: product quality rating, customer complaint rate, and market trend index. The neural network component learns patterns in the fuzzified data to predict product recalls. Online information on product recalls is used; the dataset comprises customer complaint rate, product quality rating, and market trend index. Fuzzification of the input variables is performed using fuzzy sets and membership functions, and a neural network trained on the fuzzified data predicts product recalls. The proposed method is assessed in terms of accuracy, precision, recall, and F1-score. In testing, it achieved an accuracy of 0.863, precision of 0.854, recall of 0.872, F1-score of 0.863, and MSE of 0.123. The fuzzy-neural network approach improves market monitoring and product recall prediction: fuzzy logic and neural networks together analyze complicated and uncertain data, improving prediction accuracy. This strategy may help market supervisors and manufacturers decide on product recalls.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_87-A_Fuzzy_Neural_Network_Approach_to_Market_Supervision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Financial Forecasting Accuracy Through Swarm Optimization-Enhanced Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160386</link>
        <id>10.14569/IJACSA.2025.0160386</id>
        <doi>10.14569/IJACSA.2025.0160386</doi>
        <lastModDate>2025-03-31T12:15:10.1530000+00:00</lastModDate>
        
        <creator>Balakrishnan S</creator>
        
        <creator>Y. Srinivasa Rao</creator>
        
        <creator>Karaka Ramakrishna Reddy</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>M. V. A. L. Narasimha Rao</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Financial forecasting; deep learning; swarm optimization; predictive modeling; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Financial forecasting is a crucial factor in decision-making across numerous fields and demands highly accurate predictive models. Traditional methods, like Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Gradient Boosting Machines (GBM), display suitable performance but have proven inefficient on complex, high-dimensional financial data. This paper introduces a new approach combining swarm-based optimization algorithms with deep learning architectures to improve predictive accuracy in financial forecasting. The proposed method relies on data preprocessing algorithms to optimize the learning process and prevent overfitting. In experiments on a large variety of datasets, the optimized model achieved an accuracy of 98%, outperforming traditional models such as CNN (80%), RNN (83%), and GBM (95.6%). Furthermore, the model exhibited a good precision-recall trade-off, strengthening its applicability to real-world predictive tasks such as stock price prediction and market trend analysis. By optimizing essential hyperparameters by means of swarm intelligence, the framework handles the non-linear dependencies and volatility of financial data. The study shows the high robustness and adaptability of the proposed approach, which addresses the shortcomings of conventional financial forecasting tools. This work advances intelligent financial analytics by proposing a framework for further studies combining deep learning and optimization technologies. The results support the application of swarm-optimized models for overcoming the limited predictive reliability of financial forecasting systems and inform future research in machine learning-driven economic modelling and risk analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_86-Improving_Financial_Forecasting_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Homes, Family Bonds, and Societal Resilience: A Comparative Analysis of AraBERT, MarBERT, and DistilBERT on Arabic Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160385</link>
        <id>10.14569/IJACSA.2025.0160385</id>
        <doi>10.14569/IJACSA.2025.0160385</doi>
        <lastModDate>2025-03-31T12:15:10.1230000+00:00</lastModDate>
        
        <creator>Eman Alqahtani</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <creator>Sanaa Sharaf</creator>
        
        <creator>Saad Alqahtany</creator>
        
        <subject>Smart homes; smart families; sustainability; Bidirectional Encoder Representations from Transformers (BERT); AraBERT; MarBERT; DistilBERT; coherence metrics; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study explores the concept of Smart Homes &amp; Families by analyzing 1,174,912 Arabic tweets from Saudi Arabia to understand societal perceptions, challenges, and expectations. Recognizing that homes play a vital role in nurturing relationships, values, morals, and societal cohesion, the research emphasizes that the &quot;smartness&quot; of homes lies not only in technological advancements but also in supporting core family functions and contributing to sustainability. A machine learning tool was developed, integrating data collection, preprocessing, embedding generation, dimensionality reduction, clustering, visualization, and validation. The study conducts a comparative analysis of AraBERT, MarBERT, and DistilBERT (models based on Bidirectional Encoder Representations from Transformers, or BERT), identifying AraBERT as the optimal model for Arabic X (formerly Twitter) analysis. Coherence metrics and thematic evaluation were used to assess model performance. Thematic analysis revealed 22 key parameters grouped into three macro-parameters, offering a structured understanding of public discourse. The study provides policy recommendations and outlines future research directions, delivering actionable insights for stakeholders to support family well-being, societal resilience, and sustainable development through smart home technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_85-Smart_Homes_Family_Bonds_and_Societal_Resilience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Large Language Models for Low-Resource Languages: A Case Study on Saudi Dialects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160384</link>
        <id>10.14569/IJACSA.2025.0160384</id>
        <doi>10.14569/IJACSA.2025.0160384</doi>
        <lastModDate>2025-03-31T12:15:10.0900000+00:00</lastModDate>
        
        <creator>Bayan M. Alsharbi</creator>
        
        <subject>LLM; Saudi Dialect; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Large Language Models (LLMs) have revolutionized natural language processing (NLP); however, their effectiveness remains limited for low-resource languages and dialects due to data scarcity. One such underrepresented variety is the Saudi dialect, a widely spoken yet linguistically distinct variant of Arabic. NLP models trained on Modern Standard Arabic (MSA) often struggle with dialectal variations, leading to suboptimal performance in real-world applications. This study aims to enhance LLM performance for the Saudi dialect by leveraging the MADAR dataset, applying data augmentation techniques, and fine-tuning a state-of-the-art LLM. Experimental results demonstrate the model’s effectiveness in Saudi dialect classification, achieving 91% accuracy, with precision, recall, and F1-scores all exceeding 0.90 across different dialectal variations. These findings underscore the potential of LLMs in handling dialectal Arabic and their applicability in tasks such as social media monitoring and automatic translation. Future research can further improve performance by refining fine-tuning strategies, integrating additional linguistic features, and expanding training datasets. Ultimately, this work contributes to democratizing NLP technologies for low-resource languages and dialects, bridging the gap in linguistic inclusivity within AI applications.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_84-Optimizing_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Named Entity Recognition for Enhanced Electronic Health Record Maintenance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160383</link>
        <id>10.14569/IJACSA.2025.0160383</id>
        <doi>10.14569/IJACSA.2025.0160383</doi>
        <lastModDate>2025-03-31T12:15:10.0430000+00:00</lastModDate>
        
        <creator>Muralikrishna S. N</creator>
        
        <creator>Raghavendra Ganiga</creator>
        
        <creator>Raghurama Holla</creator>
        
        <creator>Ruppikha Sree Shankar</creator>
        
        <subject>Electronic health records; named entity recognition; natural language processing; part-of-speech</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The increasing use of electronic health records (EHRs) has led to a surge in unstructured data, making it challenging to extract valuable insights. This study proposes Natural Language Processing (NLP) based techniques to standardize Electronic Health Record (EHR) data. Conducted in a healthcare setting, the research focuses on transforming unstructured EHR text into structured data using Part-of-Speech tagging and Named Entity Recognition (NER). NER techniques are applied to extract and categorize medical terms, enhancing data accuracy and consistency. The framework’s performance is evaluated using precision and recall rates. Experimental results demonstrate that NER effectively identifies and organizes medical entities, facilitating improved data analysis and decision-making in healthcare. This approach promises to enhance interoperability and the overall utility of EHR systems.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_83-Medical_Named_Entity_Recognition_for_Enhanced_Electronic_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Music Emotion Recognition and Analysis Based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160382</link>
        <id>10.14569/IJACSA.2025.0160382</id>
        <doi>10.14569/IJACSA.2025.0160382</doi>
        <lastModDate>2025-03-31T12:15:10.0130000+00:00</lastModDate>
        
        <creator>Zhao Hanbing</creator>
        
        <creator>Jin Xin</creator>
        
        <creator>Guo Jinfeng</creator>
        
        <subject>Music emotion recognition; multimodal fusion; audio signal processing; neural network; sentiment analysis; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The close connection between music and human emotions has always been an important topic of research in psychology and musicology. Scientists have proven that music can affect a person&#39;s emotional state, thereby possessing the potential for therapy and stress relief. With the development of information technology, automatic music emotion recognition has become an important research direction. The MultiSpec-DNN model proposed in this article is a multi-spectral deep neural network that integrates multiple features and modalities of music, including but not limited to melody, rhythm, harmony, and lyrical content, thus achieving efficient and accurate recognition of music emotions. The core of the MultiSpec-DNN model lies in its ability to process and analyze various types of data inputs. By combining audio signal processing and natural language processing technologies, the MultiSpec-DNN model can extract and analyze the comprehensive emotional characteristics in music files, thereby achieving more accurate emotion classification. In the experimental section, the MultiSpec-DNN model was tested on two standard emotional speech databases: EmoDB and IEMOCAP. The experimental results show that the MultiSpec-DNN model has a significant improvement in accuracy compared to traditional single-modal recognition methods, which proves the effectiveness of integrated features in emotion recognition.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_82-Music_Emotion_Recognition_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Arabic Calligraphy Generation: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160381</link>
        <id>10.14569/IJACSA.2025.0160381</id>
        <doi>10.14569/IJACSA.2025.0160381</doi>
        <lastModDate>2025-03-31T12:15:09.9500000+00:00</lastModDate>
        
        <creator>Afnan Sumayli</creator>
        
        <creator>Mohamed Alkaoud</creator>
        
        <subject>Arabic calligraphy; deep learning; generative models; handwritten dataset; Generative Adversarial Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Arabic calligraphy is famous for its distinct artistic style. It is written by skilled calligraphers to highlight the beauty of Arabic letters and represent its rich artistry. Due to the complexity of Arabic text compared to other languages&#39; scripts, Arabic calligraphy writing demands a significant investment of time and effort, as well as the acquisition of high skills from calligraphers to correctly form the curves of Arabic script and accurately represent its various styles. This Systematic Literature Review (SLR) aims to provide a comprehensive analysis of the current state of research in Arabic calligraphy generation using deep learning and generative models. The review follows the PRISMA guidelines and examines 19 primary studies selected from a systematic search of academic databases, with publications spanning from January 2009 to December 2024. The findings indicate that Generative Adversarial Networks (GANs) and their variants are the most commonly used models for generating Arabic calligraphy. Additionally, the review highlights a significant gap in the availability of large, standardized handwritten datasets for model training and evaluation, as most existing datasets are small, custom-made, or privately held. In conclusion, the review offers valuable insights that can help researchers and practitioners advance the field, enabling the generation of high-quality Arabic calligraphy that satisfies both artistic and functional needs.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_81-Handwritten_Arabic_Calligraphy_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Utilization Prediction Model for Cloud Datacentre: Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160380</link>
        <id>10.14569/IJACSA.2025.0160380</id>
        <doi>10.14569/IJACSA.2025.0160380</doi>
        <lastModDate>2025-03-31T12:15:09.8870000+00:00</lastModDate>
        
        <creator>Doaa Bliedy</creator>
        
        <creator>Mohamed H. Khafagy</creator>
        
        <creator>Rasha M. Badry</creator>
        
        <subject>Cloud computing; resource utilization; prediction; cloud datacenter; machine learning models; resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This survey aims to analyze resource prediction models in cloud environments to improve resource allocation strategies. It can be difficult for cloud service providers to maintain the required Quality of Service (QoS) without violating a Service Level Agreement (SLA). Improving cloud performance requires accurate workload prediction. To enhance customer QoS, cloud computing provides virtualisation, scalability, and on-demand services. Resource provisioning is a major challenge in the cloud environment due to its dynamic nature and the rapid increase in resource demand. Over-provisioning of resources leads to energy waste and increased expenses, while under-provisioning can result in SLA breaches and reduced QoS. It is crucial to allocate resources as closely as possible to current demands. Cloud elasticity plays a key role in adapting to workload changes and maintaining performance levels. Predicting future resource demand is essential for effective resource allocation, which is the focus of this survey. Our survey uniquely focuses on comparing univariate and multivariate input cases for cloud resource prediction, a perspective that has not been deeply explored in similar surveys. Unlike existing works that primarily categorize models by methodologies or application characteristics, our study offers a novel analysis of how different input scenarios impact prediction accuracy, resource efficiency, and scalability. By addressing this overlooked aspect, our survey provides unique insights and practical guidance for researchers and practitioners aiming to optimize resource utilization in cloud environments. A thorough analysis of resource prediction models in cloud systems is presented in this research, including a comparison of predicted resources, prediction algorithms, datasets, performance metrics, a prediction summary, and a taxonomy of prediction methods. This survey not only synthesizes current knowledge but also identifies key gaps and future directions for the development of more robust and efficient resource prediction models.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_80-Resource_Utilization_Prediction_Model_for_Cloud_Datacentre.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Obstacle Avoidance and Path Planning for Mobile Robots Integrating Improved Rapidly-Exploring Random Tree-Star and Improved Dynamic Window Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160379</link>
        <id>10.14569/IJACSA.2025.0160379</id>
        <doi>10.14569/IJACSA.2025.0160379</doi>
        <lastModDate>2025-03-31T12:15:09.8400000+00:00</lastModDate>
        
        <creator>Xianyong Wei</creator>
        
        <creator>Hongying Si</creator>
        
        <subject>Rapidly-exploring random tree-star; dynamic window approach; A-star algorithm; dynamic obstacle avoidance; path planning; mobile robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>With the application and popularization of artificial intelligence and intelligent robots in daily life, the autonomous navigation and flexible operation capabilities of mobile robots have become particularly critical. Mobile robots perform well in regular environments, but face problems such as low accuracy in dynamic obstacle avoidance and weak adaptability to complex terrains. This study proposes to enhance the adaptability of the Rapidly-exploring Random Tree Star algorithm and integrate it with the A-Star algorithm, the Dynamic Window Approach, and visual sensors to construct an obstacle avoidance model. The objective is to enable the improved model to recognize various terrain features and enhance the accuracy of the path planning algorithm. The proposed model performed well in obstacle avoidance, with a success rate of 95.78% after ten training epochs and no more than four collisions within 4 minutes. In the experiment, as the number of obstacles increased every minute, the response time of the proposed model remained below 25 seconds. The above results indicate that the quality of the planned path is higher than that of the other three models. The path optimization combined with the A* algorithm is effective and offers high real-time performance and accuracy, enabling mobile robots to be widely used in industries such as services, navigation, and logistics.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_79-Dynamic_Obstacle_Avoidance_and_Path_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection System-Based Network Behavior Analysis: A Systemic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160378</link>
        <id>10.14569/IJACSA.2025.0160378</id>
        <doi>10.14569/IJACSA.2025.0160378</doi>
        <lastModDate>2025-03-31T12:15:09.8100000+00:00</lastModDate>
        
        <creator>Mohammed Janati</creator>
        
        <creator>Fay&#231;al Messaoudi</creator>
        
        <subject>Artificial Intelligence (AI); deep learning; machine learning; cybersecurity; Intrusion Detection System; Network Behavior Analysis (NBA); Systematic Literature Review (SLR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>An Intrusion Detection System (IDS) currently serves primarily as a means of detecting illegal access and activity in a network. Due to rapidly evolving cyber threats, traditional signature-based IDS have started losing their effectiveness, leading to the emergence of advanced alternatives to these traditional technologies, such as Network Behavior Analysis (NBA). Unlike conventional signature-based systems, NBA monitors behavioral patterns for deviations and potential threats, which is a far more flexible and powerful way of detecting intrusion. While NBA-based IDS is a growing field of interest, the existing research in this area is fragmented, mostly concentrating on individual aspects such as machine learning, deep learning algorithms, specific detection processes, or particular environments such as IoT and cloud systems. This systematic literature review (SLR) follows the guidelines proposed by Kitchenham to collect various studies, highlight research gaps, and provide an overview of the existing evidence. Spanning literature from January 2014 to April 2024, it comprehensively highlights the methods, datasets, types of detectable cyber-attacks, performance metrics, and the challenges that confront existing NBA-based IDS. This underscores the urgent need for more flexible and robust solutions, such as those based on advanced Artificial Intelligence (AI) techniques, in response to increasing cyberspace complexity. Therefore, this review provides fundamental perspectives for researchers and practitioners and makes an important contribution towards stimulating future research efforts to design more effective and robust IDS solutions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_78-Intrusion_Detection_System_Based_Network_Behavior_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Structural Vulnerabilities in Multi-Cavity Steel Plate Shear Walls Using Improved Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160377</link>
        <id>10.14569/IJACSA.2025.0160377</id>
        <doi>10.14569/IJACSA.2025.0160377</doi>
        <lastModDate>2025-03-31T12:15:09.7630000+00:00</lastModDate>
        
        <creator>Zhang Bo</creator>
        
        <creator>Xu Dabin</creator>
        
        <subject>Structural vulnerabilities; deep neural networks; steel plate shear walls; seismic design; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Steel Plate Shear Walls (SPSWs) are a significant structural system because they can dissipate energy and have a very high lateral stiffness. However, the discovery and elimination of vital structural vulnerabilities, mainly in multi-cavity configurations, is still a major challenge. This study utilizes recent developments in deep learning to improve the identification and representation of such vulnerabilities. An improved Deep Neural Network (DNN) architecture was employed to analyze the effectiveness of multi-cavity SPSWs under different loading conditions. The proposed method combines hybrid information extraction techniques with various geometries and materials to ensure reliable prediction of structural element failures. The tests have shown highly positive results, with the enhanced DNN outperforming conventional procedures by achieving higher accuracy, lower false-positive rates, and superior generalization across various test cases. This work demonstrates a new way to detect weaknesses in a structure, thereby providing an effective tool for engineers to preserve the sustainability and safety of SPSWs in critical infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_77-Detection_of_Structural_Vulnerabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Evaluation of Accounting Information System and Shopee Open Application Programming Interface for a Small Business, Thailand</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160376</link>
        <id>10.14569/IJACSA.2025.0160376</id>
        <doi>10.14569/IJACSA.2025.0160376</doi>
        <lastModDate>2025-03-31T12:15:09.7170000+00:00</lastModDate>
        
        <creator>Kewalin Angkananon</creator>
        
        <creator>Piyabud Ploadaksorn</creator>
        
        <subject>Accounting information system; e-commerce integration; agricultural community enterprise; shopee open API</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This research aimed to develop and evaluate an integrated Accounting Information System (AIS) with Shopee Open API for the Ban Huai Luek Agricultural Community Enterprise in Thailand, designed to enhance financial data management efficiency and optimize online marketing operations. The research employed a mixed-method approach, combining qualitative interviews with 30 stakeholders in three groups and quantitative assessments of system effectiveness with 388 consumers and 30 farmers. Interview findings revealed diverse stakeholder needs: Enterprise members prioritized financial management and operational costs, farmers emphasized security and technology access, while customers focused on e-commerce capabilities and market positioning. The developed AIS features 41 database tables and nine core functions, incorporating Shopee&#39;s e-commerce platform through Application Programming Interface (API) integration, enabling automated product listing, inventory management, and financial calculations. System evaluation demonstrated high user satisfaction across all groups. Consumer analysis showed an overall strong approval, with security and perceived benefits ranking highest, while performance efficiency scored lowest. Farmer assessments indicated high satisfaction, with ease of use and system accuracy rated highest, though security concerns emerged during initial technology adoption. Demographic factors, particularly age and income, significantly influenced user perceptions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_76-Development_and_Evaluation_of_Accounting_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AEDGAN: A Semi-Supervised Deep Learning Model for Zero-Day Malware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160375</link>
        <id>10.14569/IJACSA.2025.0160375</id>
        <doi>10.14569/IJACSA.2025.0160375</doi>
        <lastModDate>2025-03-31T12:15:09.6530000+00:00</lastModDate>
        
        <creator>Abdullah Marish Ali</creator>
        
        <creator>Fuad A. Ghaleb</creator>
        
        <creator>Faisal Saeed</creator>
        
        <subject>Malware detection; zero-day; anomaly detection; generative adversarial network; autoencoder; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Malware presents an increasing threat to cyberspace, drawing significant attention from researchers and industry professionals. Many solutions have been proposed for malware detection; however, zero-day malware detection remains challenging due to the evasive techniques used by malware authors and the limitations of existing solutions. Traditional supervised learning methods assume a fixed relationship between malware and their class labels over time, but this assumption does not hold in the ever-changing landscape of evasive malware and its variants. That is, malware developers intentionally design malicious software to share features with benign programs, making zero-day malware difficult to detect. This study introduces the AEDGAN model, a zero-day malware detection framework based on a semi-supervised learning approach. The model leverages a generative adversarial network (GAN), an autoencoder, and a convolutional neural network (CNN) classifier to build an anomaly-based detection system. The GAN is used to learn representations of benign applications, while the autoencoder extracts latent features that effectively characterize benign samples. The CNN classifier is trained on an integrated feature vector that combines the latent features from the autoencoder with hidden features extracted by the GAN’s discriminator. Extensive experiments were conducted to evaluate the model’s effectiveness. Results from two benchmark datasets show that the AEDGAN model outperforms existing solutions, achieving a 5% improvement in overall accuracy and an 11% reduction in false alarms compared to the best-performing related model.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_75-AEDGAN_A_Semi_Supervised_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges and Solutions in Agile Software Development: A Managerial Perspective on Implementation Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160374</link>
        <id>10.14569/IJACSA.2025.0160374</id>
        <doi>10.14569/IJACSA.2025.0160374</doi>
        <lastModDate>2025-03-31T12:15:09.6070000+00:00</lastModDate>
        
        <creator>Geetha L S</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Bandla Srinivasa Rao</creator>
        
        <creator>Revati Ramrao Rautrao</creator>
        
        <creator>T Subha Mastan Rao</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Omaia Al-Omari</creator>
        
        <subject>Agile software development; implementation challenges; managerial interventions; agile frameworks; performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Agile software development is widely used for its flexibility and customer-centric style, but its implementation remains challenging, especially when transitioning organizations from traditional project management frameworks. This research elaborates on the challenges of Agile implementation and the methods managers use to overcome these challenges, thus providing a managerial perspective toward Agile adoption. The main challenges derived from the reviewed literature and case studies are resistance to change, lack of Agile expertise, poor team coordination, and inconsistent stakeholder buy-in. These usually lead to performance degradation because teams cannot maintain productivity and meet deadlines in delivering quality work. This paper outlines a number of managerial interventions that help mitigate such challenges, such as Agile training, leadership support, incremental transition plans, and effective communication strategies, among others. These interventions are assessed using performance indicators such as team productivity, stakeholder satisfaction, and time-to-market to establish the role such interventions play in making transitions to Agile frameworks smoother. It also compares how the Agile frameworks Scrum, Kanban, and SAFe perform against traditional project management practices in regard to risk management, team integration, and return on investment. Data from industry reports and surveys show that Agile methodologies are generally faster, more flexible, and better at engaging stakeholders than traditional methods, although success with Agile depends significantly on the maturity level of the organization and the managerial support provided. While Agile offers great advantages, implementing it successfully remains highly challenging. This research emphasizes managerial involvement in overcoming these barriers through continuous improvement, adaptive practices, and the creation of a collaborative environment for sustainable success in Agile adoption.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_74-Challenges_and_Solutions_in_Agile_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven NAS-GBM Model for Precision Agriculture: Enhancing Crop Yield Prediction Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160373</link>
        <id>10.14569/IJACSA.2025.0160373</id>
        <doi>10.14569/IJACSA.2025.0160373</doi>
        <lastModDate>2025-03-31T12:15:09.5600000+00:00</lastModDate>
        
        <creator>Sudhir Anakal</creator>
        
        <creator>Poornima N</creator>
        
        <creator>Abdurasul Bobonazarov</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Mandava Manjusha</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Network sensor; crop yield prediction; neural architecture search; Gradient Boosting Machine (GBM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Precision agriculture has emerged as a vital approach for optimizing crop yield prediction, enabling data-driven decision-making to improve agricultural productivity. Traditional forecasting methods encounter difficulties due to the extreme complexity of environmental factors under dynamic farming conditions. An AI framework combining Neural Architecture Search (NAS) and a Gradient Boosting Machine (GBM) addresses these issues by enhancing predictive capabilities. This study aims to produce an automated system that selects optimal models through optimization processes for more accurate crop yield forecasts. The NAS component identifies the optimal neural network architecture, whereas the GBM component effectively analyzes non-linear dependencies in the data, leading to superior predictive capability. Data processing precedes model development, using Recursive Feature Elimination (RFE) for feature selection before training NAS-optimized deep learning architectures together with the GBM. The researchers applied the model to real agricultural datasets that included essential agricultural variables such as soil conditions, weather elements, and crop health measurements. The experimental results show that the developed NAS-GBM framework achieves superior performance compared to standard models across three major aspects: predictive accuracy, computational efficiency, and generalization capability. The research project uses TensorFlow and Scikit-learn alongside Optuna for model optimization, relying on cloud-based computational resources for extensive processing requirements. The research demonstrates the capability of AI-driven hybrid models to improve decision-making for farmers and agronomists.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_73-AI_Driven_NAS_GBM_Model_for_Precision_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Face Recognition Model Based on MLBP-HOG-G Algorithm in Smart Classroom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160372</link>
        <id>10.14569/IJACSA.2025.0160372</id>
        <doi>10.14569/IJACSA.2025.0160372</doi>
        <lastModDate>2025-03-31T12:15:09.5430000+00:00</lastModDate>
        
        <creator>Xiaoxia Li</creator>
        
        <subject>Multi feature local binary pattern; directional gradient histogram; Gabor filter; face recognition; smart classroom</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The development of Internet and Internet of Things technologies has accelerated the informatization of smart education. However, the traditional face recognition algorithms used in smart classrooms suffer from problems such as heavy computation, significant resource and memory consumption, and poor recognition accuracy. To advance the informatization of colleges and universities and improve the accuracy of face recognition, a face recognition model based on the multi-feature Local Binary Pattern, Directional Gradient Histogram, and Gabor Filter (MLBP-HOG-G) algorithm is proposed. The model first extracts the binary texture image, and then carries out secondary feature extraction, dimensionality reduction, and serial fusion with weighted gray-level co-occurrence matrix features to improve recognition accuracy. The results show that the recognition rate of the proposed method on the ORL, CMU_PIE, and Yale databases reaches 95%, 94.12%, and 93.33%, respectively, which is better than other algorithms. On the comprehensive dataset, the training and validation recognition accuracies of the proposed method are approximately 98% and 97.23%, respectively, showing good generalization and stability, and its cumulative error in face key point detection is lower than that of the other comparison methods. The proposed method offers new opportunities and possibilities for face recognition applications, smart classroom construction, and teaching development.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_72-The_Application_of_Face_Recognition_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Adaptation of Enhanced Ant Colony System for Water Quality Rules Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160371</link>
        <id>10.14569/IJACSA.2025.0160371</id>
        <doi>10.14569/IJACSA.2025.0160371</doi>
        <lastModDate>2025-03-31T12:15:09.5130000+00:00</lastModDate>
        
        <creator>Husna Jamal Abdul Nasir</creator>
        
        <creator>Mohd Mizan Munif</creator>
        
        <creator>Muhammad Imran Ahmad</creator>
        
        <creator>Tan Shie Chow</creator>
        
        <creator>Ku Ruhana Ku-Mahamud</creator>
        
        <creator>Abu Hassan Abdullah</creator>
        
        <subject>Parameter adaptation; rules classification; water quality monitoring; ant colony system; pheromone update techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Water quality monitoring in aquaculture involves classifying and analyzing the collected data to assess the water quality that is appropriate for breeding, rearing and harvesting aquatic organisms. Systematic data classification is essential when it comes to managing large amounts of data that are continuously sensed in real time and have various attributes in each instance of a sequence. Ant Colony System (ACS) has been employed in optimizing data classification in smart aquaculture, where the majority of the research focuses on enhancing the classification procedure using predetermined parameters within a specified range. Nevertheless, this approach does not guarantee ideal performance. This paper enhances the ACS algorithm by introducing the Enhanced Ant Colony System-Rule Classification (EACS-RC) algorithm, which improves rule construction by integrating pheromone and heuristic values while incorporating advanced pheromone update techniques. The optimal parameter values for the proposed algorithm are obtained from parameter adaptation experiments in which different values within the defined range were tested to determine the best value for each parameter. Experiments were performed on the Kiribati water quality dataset, and the results of the EACS-RC algorithm were evaluated against the AntMiner and AGI-AntMiner algorithms. Based on the results, the proposed algorithm outperforms the benchmark algorithms in classification accuracy and processing time. The output of this study can be adopted by other ACS variants to achieve optimal performance for data classification in smart aquaculture.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_71-Parameter_Adaptation_of_Enhanced_Ant_Colony_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Reconstruction of Occluded Images Using GAN and VGG-Net Preprocessing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160370</link>
        <id>10.14569/IJACSA.2025.0160370</id>
        <doi>10.14569/IJACSA.2025.0160370</doi>
        <lastModDate>2025-03-31T12:15:09.4830000+00:00</lastModDate>
        
        <creator>Salamun </creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <creator>Ezak Fadzrin Ahmad Shaubari</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Luluk Elvitaria</creator>
        
        <subject>Face recognition; occlusion; image reconstruction; generative adversarial networks; VGG-Net; occluded images; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Facial recognition is widely used in security and identification systems, but occlusions like masks or glasses remain a major challenge. Recent approaches, such as GANs and partial feature extraction methods, attempt to reconstruct or identify occluded facial images. However, these approaches still have limitations in handling severe occlusions, computational efficiency, and dependency on large labeled datasets. In this paper, a GAN-based framework for synthetic reconstruction of occluded facial images is proposed, incorporating multiple specialized modules including a VGG-Net-based perceptual loss component to enhance visual quality. Our architecture improves the fidelity and robustness of reconstructed faces under varied occlusion types. Experimental evaluation on different occlusion scenarios demonstrated high reconstruction quality, with PSNR up to 33.106 and SSIM up to 0.983. The model also maintained strong recognition performance across diverse occlusion combinations. These findings support the framework&#39;s potential to enhance face recognition systems in real-world, unconstrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_70-Enhanced_Reconstruction_of_Occluded_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic with Kalman Filter Model Framework for Children’s Personal Health Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160369</link>
        <id>10.14569/IJACSA.2025.0160369</id>
        <doi>10.14569/IJACSA.2025.0160369</doi>
        <lastModDate>2025-03-31T12:15:09.4330000+00:00</lastModDate>
        
        <creator>Noorrezam Yusop</creator>
        
        <creator>Massila Kamalrudin</creator>
        
        <creator>Nuridawati Mustafa</creator>
        
        <creator>Nor Aiza Moketar</creator>
        
        <creator>Tao Hai</creator>
        
        <creator>Siti Fairuz Nurr Sardikan</creator>
        
        <subject>Fuzzy logic; Kalman filter; food nutrition; personal health; food recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The increasing prevalence of obesity among children under five has led to a growing demand for improved food nutrition advisory systems. Current food nutrition recommendation models struggle with parameter estimation, contextual adaptation, and real-time accuracy, often relying on traditional fuzzy logic models that lack responsiveness to evolving dietary needs. This study proposes an Adaptive Extended Kalman Filter Fuzzy Logic (AEKFFL) model to enhance the accuracy and reliability of food nutrition recommendations. The AEKFFL model integrates the Extended Kalman Filter (EKF) for dynamic estimation of nutritional values and Fuzzy Logic for adaptive decision-making, effectively addressing parametric uncertainties in nutrition estimation. The research employs a Design Science Research Methodology (DSRM), incorporating stakeholder interviews, literature review, and data from food composition databases, user reviews, and ingredient information. The proposed hybrid model is tested against baseline methods, including standalone Fuzzy Logic, Support Vector Machine (SVM), Neural Networks (NN), and a hybrid Fuzzy-NN approach. Experimental results demonstrate that the AEKFFL model achieves the highest accuracy (94.8%) with the lowest error rates (MAE = 0.031, RMSE = 0.045), outperforming alternative models. Additionally, AEKFFL exhibits superior classification performance (F1-score = 94.4%) and usability (SUS score = 92.1%), indicating its effectiveness in real-time nutritional guidance. These findings suggest that AEKFFL provides an innovative and computationally efficient framework for personal health and food recommendations, contributing to enhanced dietary management and obesity prevention among children. Future work will focus on refining model adaptability and integrating real-time IoT data for further improvements in precision and responsiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_69-Fuzzy_Logic_with_Kalman_Filte_Model_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chronic Kidney Disease Classification Using Bagging and Particle Swarm Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160368</link>
        <id>10.14569/IJACSA.2025.0160368</id>
        <doi>10.14569/IJACSA.2025.0160368</doi>
        <lastModDate>2025-03-31T12:15:09.4030000+00:00</lastModDate>
        
        <creator>Suhendro Y. Irianto</creator>
        
        <creator>Dephi Linda</creator>
        
        <creator>Immaniar I. M. Rizki</creator>
        
        <creator>Sri Karnila</creator>
        
        <creator>Dona Yuliawati</creator>
        
        <subject>Kidney disease; PSO; bagging; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Chronic kidney disease (CKD) is a serious chronic illness without a definitive cure. According to the WHO in 2015, 10% of the population suffers from CKD, with 1.5 million patients undergoing haemodialysis globally. The incidence of CKD is increasing by 8% annually, ranking it as the 20th highest cause of global mortality. This study employs the Random Forest (RF) technique, which uses decision trees as an ensemble model: class predictions are derived from the combination of results from each tree, and the final decision is based on the highest-scoring class prediction across the decision trees. In testing, Random Forest with PSO-based Bagging achieved the highest performance, with a precision of 98.12%, a recall of 100.00%, and an AUC of 0.999. The Random Forest with PSO-based Bagging model demonstrates high performance in CKD detection, but metrics like precision, recall, and AUC alone do not guarantee clinical applicability; balancing false positives and false negatives is crucial, and real-world integration should be evaluated to assess its impact on patient outcomes and clinical workflows. This research on predicting chronic kidney disease using the Random Forest algorithm with Bagging based on Particle Swarm Optimization (PSO) indicates that Bagging with PSO feature selection can enhance accuracy and kappa values. These findings contribute to understanding the roles of the Bagging and PSO methods in improving the performance of several algorithms, including Random Forest.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_68-Chronic_Kidney_Disease_Classification_Using_Bagging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Applications in Workforce Management: Strategies for Enhancing Productivity and Employee Engagement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160367</link>
        <id>10.14569/IJACSA.2025.0160367</id>
        <doi>10.14569/IJACSA.2025.0160367</doi>
        <lastModDate>2025-03-31T12:15:09.3400000+00:00</lastModDate>
        
        <creator>Mano Ashish Tripathi</creator>
        
        <creator>Joel Osei-Asiamah</creator>
        
        <creator>Avanti Chinmulgund</creator>
        
        <creator>Aanandha Saravanan</creator>
        
        <creator>T Subha Mastan Rao</creator>
        
        <creator>Ramya H P</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Machine learning; workforce management; employee engagement; task allocation; productivity optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Workforce management is a critical component of organizational success, encompassing employee scheduling, task allocation, and engagement strategies. Traditional methods rely heavily on rule-based systems and manual supervision, leading to inefficiencies and suboptimal workforce utilization. Existing machine learning (ML) approaches, such as supervised learning and statistical models, have improved certain aspects but often fail to dynamically adapt to evolving workforce demands. Additionally, these models struggle with real-time decision-making, requiring constant retraining and manual intervention. This study introduces a reinforcement learning (RL)-based workforce management framework to optimize productivity and employee engagement. Unlike conventional ML models, RL enables adaptive decision-making by continuously learning from interactions within the workforce environment. The proposed method employs deep Q-networks (DQN) and policy gradient techniques to enhance scheduling, task distribution, and incentive structures, leading to a more efficient and responsive workforce management system. The methodology involves collecting real-time workforce data, pre-processing it for feature extraction, and training the RL model using simulated and historical workforce scenarios. The model’s performance is evaluated based on efficiency gains, employee satisfaction, and task completion rates compared to traditional workforce management techniques. Experimental results demonstrate that the RL-based approach significantly improves task allocation accuracy by 18%, reduces scheduling conflicts by 22%, and enhances employee satisfaction scores by 15%. These findings underscore the potential of reinforcement learning in revolutionizing workforce management by fostering data-driven, real-time optimization, ultimately leading to enhanced organizational productivity and employee well-being.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_67-Machine_Learning_Applications_in_Workforce_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Estimation Methods for Submarine Towing Resistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160366</link>
        <id>10.14569/IJACSA.2025.0160366</id>
        <doi>10.14569/IJACSA.2025.0160366</doi>
        <lastModDate>2025-03-31T12:15:09.2930000+00:00</lastModDate>
        
        <creator>Shancheng Li</creator>
        
        <creator>Guanghui Zeng</creator>
        
        <creator>Guangda Wang</creator>
        
        <subject>Submarine; towing resistance; CFD simulation; empirical formulas; maritime rescue</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In order to estimate the drag of submarine towing effectively, based on an analysis of the drag components of submarine towing, the friction resistance and residual resistance of submarine towing are estimated according to the empirical formulas for towing surface ships. Subsequently, CFD is used to simulate the towing resistance of a submarine on the water surface, and the CFD simulation results are compared with those estimated by the empirical formulas. It is shown that the friction resistance of submarine towing on the surface can be calculated by the “Towing Guide at Sea” and “Towing” empirical formulas, and the residual resistance can be estimated by the “Towing” formula or Shen Pugen’s formula. However, a head shape coefficient of approximately 1.5 is found to be more suitable in the residual resistance estimation formula for a towed submarine.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_66-Analysis_of_Estimation_Methods_for_Submarine_Towing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Based Smart Accident Detection and Early Warning System for Emergency Response and Risk Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160365</link>
        <id>10.14569/IJACSA.2025.0160365</id>
        <doi>10.14569/IJACSA.2025.0160365</doi>
        <lastModDate>2025-03-31T12:15:09.2630000+00:00</lastModDate>
        
        <creator>Jinsong Tao</creator>
        
        <creator>Rahat Ali</creator>
        
        <creator>Shakeel Ahmad</creator>
        
        <creator>Fasahat Ali</creator>
        
        <subject>IoT; Blynk application; smart transportation; accident detecting and early warning system; risk management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Driving in dense fog creates significant challenges, particularly in Asian countries like Pakistan, where increasing traffic and air pollution contribute to reduced visibility, elevating the risk of accidents, property damage, and fatalities. Accidents in such conditions are worsened by vehicle congestion and poor weather, such as dense fog. To address these issues, this study proposes an IoT-based intelligent accident detection and early warning system that uses integrated smartphone sensors to detect and monitor vehicular collisions. The system enhances risk management by autonomously detecting accidents and instantly transmitting essential information, including precise location, to emergency response networks for timely intervention and decision-making. Additionally, the system alerts drivers to possible near-collisions or hazardous conditions through real-time warning alerts displayed via the Blynk application. Utilizing a smartphone&#39;s built-in sensors to detect vehicular collisions and notify the nearest first responders, along with providing real-time location tracking for paramedics and emergency victims, can significantly enhance recovery chances for victims while reducing both time and costs. The operational reliability and accuracy of the IoT-based framework for smart transportation are evaluated through numerical and simulation-based experiments, validating its efficacy in harsh environmental conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_65-IoT_Based_Smart_Accident_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Impact of Various Combinations of Preprocessing Steps on Customer Churn Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160364</link>
        <id>10.14569/IJACSA.2025.0160364</id>
        <doi>10.14569/IJACSA.2025.0160364</doi>
        <lastModDate>2025-03-31T12:15:09.2170000+00:00</lastModDate>
        
        <creator>Mohamed Ezzeldin Saleh</creator>
        
        <creator>Nadia Abd-Alsabour</creator>
        
        <subject>Attribute selection; churn prediction; decision trees; imputation methods; machine learning; normalization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper investigates various combinations of preprocessing methods (attribute selection, normalization, resampling, and imputation) and evaluates their impact on the performance of decision tree models for predicting customer churn. The experiments were performed on the benchmark Cell2Cell dataset due to its ability to address diverse aspects of customer behavior, including value-added services, usage patterns, demographic information, customer service interactions, personal data, and billing data. This comprehensive view of client activities makes it ideal for studying customer churn. The aim of this work is to identify the most effective preprocessing method that can be applied to a real-world telecommunications dataset to improve the effectiveness of customer churn prediction methods. The study systematically examines the effects of imputation methods (K-Nearest Neighbors and statistical imputation), normalization techniques (Median and Median Absolute Deviation Normalization, Min-Max Scaling, and Z-Score Standardization), feature selection using Lasso regression, and resampling using SMOTE Tomek. This results in 16 distinct preprocessed datasets, each reflecting a unique combination of preprocessing steps. An analysis of these datasets was conducted, evaluating the performance metrics of the Decision Tree model on each dataset, including accuracy, precision, recall, F1 score, and ROC-AUC. Key findings highlight that Statistical Imputation, Median and Median Absolute Deviation Normalization, and Lasso feature selection achieved the highest performance, with 0.78 in precision, 0.77 in accuracy, recall, and F1 Score, and 0.74 in ROC-AUC.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_64-The_Impact_of_Various_Combinations_of_Preprocessing_Steps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-Based Generative Adversarial Network for Digital Art Style Migration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160363</link>
        <id>10.14569/IJACSA.2025.0160363</id>
        <doi>10.14569/IJACSA.2025.0160363</doi>
        <lastModDate>2025-03-31T12:15:09.2000000+00:00</lastModDate>
        
        <creator>Wenting Ou</creator>
        
        <subject>Generative Adversarial Networks (GANs); deep learning; style transfer; unsupervised learning; neural style transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study introduces the ConvNeXt-CycleGAN, a novel deep learning-based Generative Adversarial Network (GAN) designed for digital art style migration. The model addresses the time-consuming and expertise-driven nature of traditional artistic creation, aiming to automate and accelerate the style transfer process using artificial intelligence. The ConvNeXt-CycleGAN integrates ConvNeXt blocks within the CycleGAN framework, enhancing convolution capabilities and leveraging self-attention mechanisms for precise and nuanced artistic style capture. The model undergoes rigorous evaluation using multiple performance metrics, including Inception Score (IS), Peak Signal-to-Noise Ratio (PSNR), and Fr&#233;chet Inception Distance (FID), ensuring its effectiveness in generating high-quality, diverse images while retaining fidelity during style transfer. The ConvNeXt-CycleGAN surpasses traditional GAN models across key metrics: it achieves an IS of 12.7004 (higher image diversity), a PSNR of 14.0211 (better preservation of original artwork integrity), and an FID of 234.1679 (closer resemblance to real artistic distributions). Additionally, its ability to efficiently train on unpaired images via unsupervised learning enhances its real-world applicability. This research presents an architectural innovation by combining ConvNeXt blocks with the CycleGAN framework, offering robust performance across diverse datasets and artistic styles. The ConvNeXt-CycleGAN represents a significant advancement in the integration of AI with creative processes, providing a powerful tool for rapid prototyping in digital art creation and innovation.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_63-A_Deep_Learning_Based_Generative_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Warning Model Construction for Deformation Monitoring and Management of Deep Foundation Pit Project Combined with Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160362</link>
        <id>10.14569/IJACSA.2025.0160362</id>
        <doi>10.14569/IJACSA.2025.0160362</doi>
        <lastModDate>2025-03-31T12:15:09.1530000+00:00</lastModDate>
        
        <creator>Xiaoyuan Zhang</creator>
        
        <creator>Xin Wang</creator>
        
        <subject>Deep foundation pit; deformation; Gaussian regression analysis; management warning; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In various engineering construction projects, construction safety problems caused by pit deformation remain unresolved. The existing early warning models for pit deformation management cannot effectively meet the needs of actual construction in complex pit projects. Artificial intelligence technology has clear advantages in foundation pit deformation detection owing to its wide applicability, flexibility, and other characteristics. This study uses a Gaussian regression analysis model to construct a corresponding deep foundation pit deformation monitoring and management warning model, with the purpose of better monitoring and managing the deformation of deep foundation pits and ensuring the smooth and stable development of the entire construction project. In the experimental analysis, different performance indicators were used to verify the effectiveness of the research method, including error indicators, precision, recall, and F1 score. MAE evaluates the deviation between predicted and actual values, indicating how close the model is to the true values; precision, recall, and F1 score evaluate the proportion of correctly classified samples and demonstrate the model&#39;s discriminative ability. Together, these indicators measure the performance of the model from different perspectives. In specific construction projects, the results showed that the proposed method had an RMSE of 0.012 and an MAE of 0.015, both significantly lower than the comparative methods, indicating better performance. The precision, recall, and F1 score of GRGA were 92.37%, 47.52%, and 0.17, respectively. For the existing foundation pit deformation monitoring methods BPNN, CNN, and GM, the precision was 90.52%, 90.03%, and 89.95%, the recall was 34.20%, 32.01%, and 29.67%, and the F1 score was 0.10, 0.13, and 0.14, respectively. The research method thus has clear advantages. The results demonstrate that the early warning model is an effective method for analyzing and predicting the deformation of deep foundation pits. The combination of Gaussian regression and a genetic algorithm for deep excavation management can model and predict nonlinear deformation data, optimize the parameters of the Gaussian regression process, and improve prediction accuracy. Compared with existing warning methods, the method proposed in this study uses the Gaussian regression process to better model and analyze the deformation process of foundation pits, thereby accurately capturing detailed changes in the pits.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_62-Early_Warning_Model_Construction_for_Deformation_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Optimized JPEG-LS Algorithm in Efficient Transmission of Multi-Spectral Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160361</link>
        <id>10.14569/IJACSA.2025.0160361</id>
        <doi>10.14569/IJACSA.2025.0160361</doi>
        <lastModDate>2025-03-31T12:15:09.1070000+00:00</lastModDate>
        
        <creator>Huanping Hu</creator>
        
        <creator>Xing Wang</creator>
        
        <subject>Multi-spectral; image transmission; JPEG-LS algorithm; compression ratio; signal-to-noise ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Currently, multi-spectral image transmission faces challenges such as high storage costs and low transmission efficiency. Although various technologies have recently been attempted to solve these problems, such as improved encoding methods in some algorithms, issues such as insufficient compression ratio and slow processing speed remain. Therefore, this research focuses on optimizing the Joint Photographic Experts Group Lossless Standard (JPEG-LS) algorithm and constructing a multi-spectral image processing system. In the JPEG-LS algorithm pipeline, the conventional encoding method is improved by adopting a sub-block compression strategy and a block compression algorithm based on dynamic image bit width. The results show that the optimized JPEG-LS algorithm achieves an average compression ratio of 5.81, which is higher than that of the comparison algorithm. The average compression time is 0.35 seconds, the average peak signal-to-noise ratio (PSNR) is 43.6, and the average structural similarity (SSIM) is 0.97, all of which are better than the comparison algorithm. In terms of system performance, stability testing of each module shows that the overall system is stable, and the resource utilization of the image compression module is low, leaving a large resource margin that can meet practical application needs.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_61-The_Application_of_Optimized_JPEG_LS_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Climate Change on Animal Diseases by Using Image Processing and Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160360</link>
        <id>10.14569/IJACSA.2025.0160360</id>
        <doi>10.14569/IJACSA.2025.0160360</doi>
        <lastModDate>2025-03-31T12:15:09.0770000+00:00</lastModDate>
        
        <creator>Gehad K. Hussien</creator>
        
        <creator>Mohamed H. Khafagy</creator>
        
        <creator>Hossam M. Elbehiery</creator>
        
        <subject>Climate change; sustainability; smallholder; animal disease; image processing; deep learning; animal skin diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Climate change is one of the most talked-about topics of this decade, affecting all economic output sectors, including the economy of cattle farming. In many scenarios, exceptionally severe climate change is predicted for the Mediterranean region. Practical measures must therefore be taken to strengthen the sector&#39;s resilience, particularly for smallholders involved in the cattle production industry, and technology is required to stop animal disease outbreaks. There are benefits to using automatic methods for detecting animal diseases such as cellulitis. Climate change seriously threatens animal health: it is changing ecosystems, altering weather patterns, and posing new difficulties for animal survival. But this crisis also offers a chance for imagination and cooperation; in a changing climate, a comprehensive strategy that includes adaptation and mitigation measures can boost resilience and safeguard animal populations. In conclusion, knowledge of climate change and adaptation measures are the main factors driving responses to the rising demand for animal products. Furthermore, a variety of adaptation strategies are at our disposal to mitigate the effects of climate change, and they must be used to limit its further expansion.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_60-The_Effect_of_Climate_Change_on_Animal_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tree Seed Algorithm-Based Optimized Deep Features Selection for Glaucoma Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160359</link>
        <id>10.14569/IJACSA.2025.0160359</id>
        <doi>10.14569/IJACSA.2025.0160359</doi>
        <lastModDate>2025-03-31T12:15:09.0300000+00:00</lastModDate>
        
        <creator>Sherif Tawfik Amin</creator>
        
        <subject>Deep learning; tree seed algorithm; feature extraction; MobileNetV2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Glaucoma is a common eye condition that can cause irreversible blindness if left untreated. It is an optic nerve disorder, a perilous condition whose damage can lead to blindness. Therefore, early glaucoma detection is critical for optimizing treatment outcomes and preserving vision. The majority of afflicted people typically do not exhibit any overt symptoms, and many consequently go untreated, so early detection is essential for successful therapy. A great deal of research has produced systems for detecting glaucoma, but these manual, time-consuming, and frequently erroneous traditional diagnostic methods are not suitable for glaucoma diagnosis; thus, automated methods are required. This research study proposes a novel glaucoma diagnosis model that addresses the difficulty of determining the complex cup-to-disc ratio. For accurate feature extraction, a publicly available Kaggle dataset with two classes (glaucoma positive and negative) is utilized. The dataset is augmented using the flip technique and resized. A two-step approach using the MobileNetV2 model is used to extract features from the positive and negative classes. Accurate features are selected with the help of the Tree Seed Algorithm (TSA). The enriched features are then classified using three different classifiers: Cubic SVM, Ensemble Subspace KNN, and Fine KNN. The experimental evaluation comprises 7 and 8 cross-validation folds: with 7 folds, Ensemble Subspace KNN provides an accuracy of 97.33%, and with 8 folds, Fine KNN provides the best accuracy of 97.92%.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_59-Tree_Seed_Algorithm_Based_Optimized_Deep_Features_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Road Safety in Indonesia: A Clustering Analysis of Traffic Accidents Using K-Medoids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160358</link>
        <id>10.14569/IJACSA.2025.0160358</id>
        <doi>10.14569/IJACSA.2025.0160358</doi>
        <lastModDate>2025-03-31T12:15:08.9670000+00:00</lastModDate>
        
        <creator>Handrizal</creator>
        
        <creator>Hayatunnufus</creator>
        
        <creator>Maryo Christopher Davinci Nababan</creator>
        
        <subject>Traffic accidents; K-Medoids; clustering; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Traffic accidents pose a significant public health and safety challenge in Indonesia, ranking fifth globally in terms of traffic fatality rates. This study aims to identify patterns in traffic accident data to inform effective mitigation strategies. Utilizing the K-Medoids algorithm, we clustered traffic accident data from the Indonesian Central Bureau of Statistics for the period 1992–2022. Prior to clustering, rigorous data preprocessing was conducted to ensure accuracy. The K-Medoids algorithm successfully partitioned the data into distinct clusters, revealing variations in accident patterns across different regions of Indonesia, including disparities in accident frequency and severity. This research provides valuable insights for policymakers and transportation authorities to develop targeted interventions and improve road safety in Indonesia. Additionally, this study successfully applied the K-Medoids algorithm to cluster traffic accident data in Indonesia using data from 2018 to 2022.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_58-Improving_Road_Safety_in_Indonesia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Human Hazardous Behavior Recognition and Monitoring System in Slide Facilities Based on Improved HRNet Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160357</link>
        <id>10.14569/IJACSA.2025.0160357</id>
        <doi>10.14569/IJACSA.2025.0160357</doi>
        <lastModDate>2025-03-31T12:15:08.9500000+00:00</lastModDate>
        
        <creator>Chen Chen</creator>
        
        <creator>Huiyu Xiang</creator>
        
        <creator>Song Huang</creator>
        
        <creator>Yanpei Zhang</creator>
        
        <subject>Playground equipment; object detection; skeleton sequence; flow alignment module; human behavior recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In recent years, accidents involving slide playground equipment have frequently occurred due to various reasons, attracting significant attention. Reducing or even eliminating these accidental injuries has become an urgent technical issue to address. Currently, the safety management of slide playground facilities still relies on manual monitoring, and the level of technology for detecting and intelligently recognizing hazardous behaviors on slides needs improvement. This paper proposes a behavior detection system based on human skeleton sequence information to address the issue of recognizing hazardous behaviors on slides. To resolve the feature fusion loss problem that arises when HRNet extracts feature information from images of different resolutions, this paper introduces a Flow Alignment Module (FAM) and an Attention-aware Feature Fusion (AFF) module to improve the network structure. Experimental results show that the improved skeleton sequence extraction model exhibits good computational efficiency and accuracy on the dataset, achieving an accuracy rate of over 90%. The human behavior recognition system proposed in this paper effectively meets detection requirements, providing new technical assurance for the safe use of slide playground equipment.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_57-Study_on_Human_Hazardous_Behavior_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of LED Luminaire Life Prediction Algorithm by Integrating Feature Engineering and Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160356</link>
        <id>10.14569/IJACSA.2025.0160356</id>
        <doi>10.14569/IJACSA.2025.0160356</doi>
        <lastModDate>2025-03-31T12:15:08.9330000+00:00</lastModDate>
        
        <creator>Xiongbo Huang</creator>
        
        <subject>Feature engineering; deep learning; LED lamps; life prediction; algorithm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>With the wide application of LED luminaires in various fields, accurately predicting their lifetime has become particularly important. The lifetime of an LED luminaire is affected by a variety of factors, including temperature, current, voltage, light intensity, and operating time, and there are complex interactions among these factors. Traditional prediction methods often struggle to capture these nonlinear relationships, so a more powerful prediction model is needed. In this study, we aim to develop an efficient life prediction model for LED luminaires and propose a hybrid neural network structure that incorporates a convolutional neural network (CNN), a long short-term memory network (LSTM), and an attention mechanism, combining feature engineering and deep learning techniques. In the research process, we first collected operation record data provided by a well-known LED lighting manufacturer and performed detailed data preprocessing, including missing value handling, outlier detection, normalization/standardization, data smoothing, and time series segmentation. We then designed and implemented several benchmark models (e.g., linear regression, support vector machine regression, random forest regression, and a deep learning model using only LSTM) as well as the proposed hybrid neural network model. Through a detailed experimental design covering parameter setting, training, and testing, we evaluate the performance of these models and analyze the results. The experimental results show that the proposed hybrid neural network model significantly outperforms the conventional models on key performance metrics such as root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R&#178;). In particular, the hybrid model also excels in Mean Absolute Percentage Error (MAPE) and Maximum Absolute Error (Max AE). In addition, through cross-validation and testing on different datasets, the model shows stable performance under various environments and conditions, verifying its good generalization ability and robustness.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_56-Optimization_of_LED_Luminaire_Life_Prediction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Minimum Data Set and Data Model for Electronic Health Record Systems in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160355</link>
        <id>10.14569/IJACSA.2025.0160355</id>
        <doi>10.14569/IJACSA.2025.0160355</doi>
        <lastModDate>2025-03-31T12:15:08.8870000+00:00</lastModDate>
        
        <creator>Teddie Darmizal</creator>
        
        <creator>Nor Hasbiah Ubaidullah</creator>
        
        <creator>Aslina Saad</creator>
        
        <subject>Minimum data set; data element; data model; electronic health record; electronic health record system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study aimed to design a minimum data set (MDS) and data model for electronic health record systems (EHRS) in Indonesia. The content of the MDS in this study differs from the MDSs developed in studies from other advanced countries. Its technical preparation follows the medical service process provided to patients from the time they first enter a hospital until they complete their services there, so that the designed MDS is aligned with real-world hospital workflows. The initial stage of this research identified data elements through literature reviews of medical record documents from general and psychiatric hospitals in Indonesia, papers on minimum data sets in other advanced countries, websites, and clinical guidelines. The Delphi technique was employed to validate the identified data elements through a survey of medical experts. A questionnaire was designed to determine data elements in both administrative and clinical departments. Experts agreed upon 5 data classes with 28 data elements in the administrative section and 21 data classes with 858 data elements in the clinical section. This MDS could serve as a reliable tool for data standardization in EHRS, improving the quality of data and medical services in hospitals. The designed data model consists of conceptual, logical, and physical components. Together, the MDS and data model can help system developers build a physical EHRS database and a health surveillance center for more efficient health data management.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_55-Designing_Minimum_Data_Set_and_Data_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review on the Sand Cat Swarm Algorithm: Enhancements, Applications, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160354</link>
        <id>10.14569/IJACSA.2025.0160354</id>
        <doi>10.14569/IJACSA.2025.0160354</doi>
        <lastModDate>2025-03-31T12:15:08.8400000+00:00</lastModDate>
        
        <creator>Wirawati Dewi Ahmad</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Mohd Nor Akmal Khalid</creator>
        
        <subject>Sand cat swarm algorithm; sand cat optimization; optimization; metaheuristic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The Sand Cat Swarm Algorithm (SCSA) has emerged as a promising metaheuristic optimization technique inspired by the behavior of sand cats in their natural habitat. This paper presents a systematic literature review that synthesizes SCSA enhancements, performance comparisons with other algorithms, applications across various domains, and future directions for SCSA improvement. The study provides a comprehensive analysis of the evolution, enhancements, applications, performance evaluation, and limitations of SCSA, along with future research opportunities for solving optimization problems. The SLR methodology was applied, and a total of 77 scientific articles were analyzed. The analysis reveals that SCSA demonstrates competitive performance across a wide range of benchmark problems and real-world applications in engineering, computer science, and other fields, including engineering design optimization, feature selection, energy systems optimization, flexible job shop scheduling, and medical diagnosis. This review also identifies several key strengths of SCSA, including its ability to balance exploration and exploitation effectively, its adaptability to various problem domains, and its potential for hybridization with other algorithms. Lastly, this paper outlines potential improvements and future research directions, such as the development of multi-objective SCSA variants, integration with machine learning techniques, and exploration of parallel and distributed implementations. Overall, this paper provides researchers and practitioners with valuable insights into the current state of SCSA, its practical applications, and promising avenues for future research in the field of metaheuristic optimization.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_54-A_Systematic_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Power of Digitalization: How Information Disclosure Shapes Company Value</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160353</link>
        <id>10.14569/IJACSA.2025.0160353</id>
        <doi>10.14569/IJACSA.2025.0160353</doi>
        <lastModDate>2025-03-31T12:15:08.8270000+00:00</lastModDate>
        
        <creator>Lina Nur Hidayati</creator>
        
        <creator>Muniya Alteza</creator>
        
        <creator>Mahendra Ryansa Gallen Gagah Pratama</creator>
        
        <subject>Information; digitalization; business; firm value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study aims to explore how business digitalization influences firm value within the Indonesia Stock Exchange (IDX). It seeks to offer a thorough examination of the effects of digital transformation on corporate valuation. The findings highlight a strong positive correlation between digitalization and firm valuation, supporting signaling theory, which asserts that a company&#39;s transparency in disclosing its digital transformation efforts serves as a strategic indicator for investors and consumers. Greater transparency and specificity in disclosing digitalization information improve perceptions of corporate stability and future growth prospects, ultimately increasing firm value. As Indonesia undergoes rapid digital transformation, this research gains heightened relevance by offering critical insights into how companies that proactively communicate their digitalization strategies can strengthen their market positioning and secure a competitive edge in the financial landscape. This study makes a significant contribution by providing empirical evidence on the role of business digitalization in shaping firm value, particularly in an emerging market context where digital adoption is accelerating. This investigation highlights the strategic importance of digitalization disclosure in the Indonesian market, offering novel insights into how transparency in digital initiatives can serve as a competitive advantage.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_53-The_Power_of_Digitalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection Optimization of Brute-Force Cyberattack Using Modified Caesar Cipher Algorithm Based on Binary Codes (MCBC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160352</link>
        <id>10.14569/IJACSA.2025.0160352</id>
        <doi>10.14569/IJACSA.2025.0160352</doi>
        <lastModDate>2025-03-31T12:15:08.7930000+00:00</lastModDate>
        
        <creator>Muhannad Tahboush</creator>
        
        <creator>Adel Hamdan</creator>
        
        <creator>Mohammad Klaib</creator>
        
        <creator>Mohammad Adawy</creator>
        
        <creator>Firas Alzobi</creator>
        
        <subject>Brute-force attack; encryption; Caesar cipher; binary code; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Information security is a vital aspect of protecting user credentials and digital information from cybersecurity threats. The Caesar cipher is an ancient cryptographic algorithm that is easily broken and vulnerable to brute-force attack. A brute-force attack is a cyberattack that uses trial and error to crack passwords, login credentials, and encryption keys in order to gain unauthorized and illegal access to systems and individual accounts. Several studies have attempted to address the existing vulnerabilities of the Caesar cipher, but they still suffer from limitations and fail to provide a high level of attack detection and encryption strength. Therefore, a Modified Caesar Cipher Algorithm Based on Binary Codes (MCBC) is proposed to mitigate brute-force attacks based on two scenarios: in the first, the message is converted to the binary numbering system; in the second, a binary shifting technique is applied and the result is converted to hexadecimal code. The performance metrics considered to evaluate the proposed MCBC algorithm are detection rate, strength rate, true positive rate, and time required for decryption. The experimental results show that the proposed MCBC approach outperforms other algorithms against brute-force attack while ensuring the confidentiality of information.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_52-Detection_Optimization_of_Brute_Force_Cyberattack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis: An Insightful Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160351</link>
        <id>10.14569/IJACSA.2025.0160351</id>
        <doi>10.14569/IJACSA.2025.0160351</doi>
        <lastModDate>2025-03-31T12:15:08.7630000+00:00</lastModDate>
        
        <creator>Indrajani Sutedja</creator>
        
        <creator>Hendry</creator>
        
        <subject>Sentiment analysis; sentiment analysis approach; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Understanding the consumer is becoming crucial in today&#39;s customer-focused company culture. Sentiment analysis is one of many methods that can be used to evaluate the public’s sentiment toward a specific entity in order to generate actionable knowledge. In the commercial sector, sentiment analysis is critical in enabling businesses to establish strategy and obtain insight into user feedback on their products. Unfortunately, many companies still do not listen to customer feedback and run the business as usual, even though sentiment analysis can reflect how their services and products are perceived. This problem can be overcome by implementing sentiment analysis: a company can more easily discover what consumers want, what they disapprove of, and what measures can be taken in response, which helps the company improve the performance of its products and services. The purpose of this paper is to identify the uses of sentiment analysis in companies and the methodologies companies use to implement it. The research was conducted by reviewing 22 papers that discuss sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_51-Sentiment_Analysis_An_Insightful_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality (VR) Technology in Civics Practice Teaching Evaluating the Effect of Immersive Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160350</link>
        <id>10.14569/IJACSA.2025.0160350</id>
        <doi>10.14569/IJACSA.2025.0160350</doi>
        <lastModDate>2025-03-31T12:15:08.7470000+00:00</lastModDate>
        
        <creator>Hao Qin</creator>
        
        <creator>Yangqing Zhang</creator>
        
        <creator>Jiali Wei</creator>
        
        <subject>Virtual reality technology; civics practice teaching; immersive experience effect assessment; enterprise development optimisation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>To improve the low precision of current immersive experience effect assessment methods, this paper proposes a method for assessing the immersive experience effect of virtual reality (VR) Civics practice teaching based on the enterprise development optimisation algorithm and a mixed kernel extreme learning machine. Firstly, we analyse the current state of research on VR Civics practice teaching, design an approach for assessing the application of VR technology in Civics practice teaching, extract the relevant assessment features, and construct the effect assessment system. Secondly, we use the enterprise development optimisation algorithm to optimise the parameters of the mixed kernel extreme learning machine and construct the immersive experience effect assessment model. Finally, we use data from VR-based Civics practice teaching to verify and analyse the proposed model. The results show that the proposed model effectively improves the accuracy of immersive experience effect assessment and achieves higher precision in assessing the effect of Civics practice teaching.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_50-Virtual_Reality_VR_Technology_in_Civics_Practice_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defect Detection of Photovoltaic Cells Based on an Improved YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160349</link>
        <id>10.14569/IJACSA.2025.0160349</id>
        <doi>10.14569/IJACSA.2025.0160349</doi>
        <lastModDate>2025-03-31T12:15:08.7000000+00:00</lastModDate>
        
        <creator>Zhihui LI</creator>
        
        <creator>Liqiang WANG</creator>
        
        <subject>Photovoltaic cells; defect detection; YOLOv8; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Currently, defect detection in photovoltaic (PV) cells faces challenges such as limited training data, data imbalance, and high background complexity, which can result in both false positives and false negatives during the detection process. To address these challenges, a defect detection network based on an improved YOLOv8 model is proposed. Firstly, to tackle the data imbalance problem, five data augmentation techniques—Mosaic, Mixup, HSV transformation, scale transformation, and flip—are applied to improve the model’s generalization ability and reduce the risk of overfitting. Secondly, SPD-Conv is used instead of Conv in the backbone network, enabling the model to better detect small objects and defects in low-resolution images, thereby enhancing its performance and robustness in complex backgrounds. Next, the GAM attention mechanism is applied in the detection head to strengthen global channel interactions, reduce information dispersion, and enhance global dependencies, thereby improving network performance. Lastly, the CIoU loss function in YOLOv8 is replaced with the Focal-EIoU loss function, which accelerates model convergence and improves bounding box regression accuracy. Experimental results show that the optimized model achieves a mAP of 86.6% on the augmented EL2021 dataset, representing a 5.1% improvement over the original YOLOv8 model, which has 11.24 &#215; 10^6 parameters. The improved algorithm outperforms other widely used methods in photovoltaic cell defect detection.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_49-Defect_Detection_of_Photovoltaic_Cells.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malicious Domain Name Detection Using ML Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160348</link>
        <id>10.14569/IJACSA.2025.0160348</id>
        <doi>10.14569/IJACSA.2025.0160348</doi>
        <lastModDate>2025-03-31T12:15:08.6530000+00:00</lastModDate>
        
        <creator>Lamis Alshehri</creator>
        
        <creator>Samah Alajmani</creator>
        
        <subject>DNS Security; machine learning; malicious domain detection; XGBoost; LightGBM; CatBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>With the ever-increasing rate of cyber threats, especially those delivered through malicious domain names, effective detection and prevention have become urgent. This study investigates the classification of domain names as either benign or malicious based on DNS logs using machine learning. We evaluated five strong ML models: XGBoost, LightGBM, CatBoost, a Stacking ensemble, and a Voting Classifier, aiming for high accuracy, F1 score, AUC, recall, and precision. The challenge is to achieve a strong solution without deep learning techniques, keeping computational cost low. Moreover, this project seeks to strengthen the cybersecurity landscape by embedding the best-performing model into a DNS firewall to enable real-time protection against harmful domains. Our dataset was collected and curated to include 90,000 domain names, split equally between safe and harmful, with 34 features extracted from DNS logs and further enriched using publicly available data.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_48-Malicious_Domain_Name_Detection_Using_ML_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classroom Behavior Recognition and Analysis Technology Based on CNN Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160347</link>
        <id>10.14569/IJACSA.2025.0160347</id>
        <doi>10.14569/IJACSA.2025.0160347</doi>
        <lastModDate>2025-03-31T12:15:08.6070000+00:00</lastModDate>
        
        <creator>Weihua Qiao</creator>
        
        <subject>Convolution neural network; multi-task learning; face recognition; classroom; student behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Students’ classroom behavior can effectively reflect learning efficiency and the teaching quality of teachers, but the accuracy of current methods for identifying students&#39; classroom behavior is not high. To address this research gap, an improved algorithm based on a multi-task learning cascaded convolutional neural network architecture is proposed. Using the improved algorithm, a face recognition model is constructed to identify students&#39; classroom behavior more accurately. In the performance comparison experiment for the improved convolutional network algorithm, the recall rate of the improved algorithm was 88.8%, higher than that of the three comparison models, demonstrating that the improved algorithm performs better than the baselines. In the empirical analysis of the face recognition model based on the improved algorithm, the accuracy of the proposed face recognition model was 90.2%, higher than that of the traditional face recognition model. The findings indicate that the model developed in this study can accurately reflect students&#39; state in the classroom, thereby facilitating the formulation of targeted teaching strategies to enhance classroom efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_47-Classroom_Behavior_Recognition_and_Analysis_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bibliometric Analysis of the Evolution and Impact of Short Videos in E-Commerce (2015-2024): New Research Trends in AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160346</link>
        <id>10.14569/IJACSA.2025.0160346</id>
        <doi>10.14569/IJACSA.2025.0160346</doi>
        <lastModDate>2025-03-31T12:15:08.5770000+00:00</lastModDate>
        
        <creator>Duy Nguyen Binh Phuong</creator>
        
        <creator>Tien Ngo Thi My</creator>
        
        <creator>Thuy Nguyen Binh Phuong</creator>
        
        <creator>Thi Pham Nguyen Anh</creator>
        
        <creator>Hung Le Huu</creator>
        
        <subject>Short video; AI; co-citation analysis; keyword co-occurrence analysis; bibliographic coupling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Over a decade of rapid growth in short video content has opened increasingly in-depth perspectives on this topic, with increasingly diverse scientific publications exploring different aspects of this phenomenon. Short videos have rapidly transformed the e-commerce landscape, influencing consumer behavior, marketing strategies, and technological advancements. This study used bibliometric analysis to evaluate existing research on short videos in e-commerce and identify key trends, research clusters, and influential publications. Using Scopus (2015-2024) data, co-citation, keyword co-occurrence, and bibliographic coupling analyses were conducted. Publication analysis revealed three stages: initial (2015-2018) with limited research, growth (2019-2020) with increased interest, and explosive growth (2021-2024). Keyword co-occurrence analysis highlights interconnected research topics, with &quot;video platforms,&quot; &quot;short video,&quot; and &quot;social media&quot; forming a central cluster. The cluster indicates a recent focus on the &quot;social context&quot; of short videos in e-commerce. Co-citation analysis identifies key research clusters covering e-commerce and user behavior, user experience, advertising effectiveness of short videos, methodology, and underlying theories. These findings are helpful for researchers seeking to understand short-form video utilization in e-commerce, and the insights can inform effective marketing strategies, improved user experiences, and efforts to capitalize on technological innovation in this rapidly evolving space.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_46-Bibliometric_Analysis_of_the_Evolution_and_Impact_of_Short_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Automated Financial Statement Information Disclosure System Based on AI Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160345</link>
        <id>10.14569/IJACSA.2025.0160345</id>
        <doi>10.14569/IJACSA.2025.0160345</doi>
        <lastModDate>2025-03-31T12:15:08.5430000+00:00</lastModDate>
        
        <creator>Yonghui Xiao</creator>
        
        <creator>Haikuan Zhang</creator>
        
        <subject>Information disclosure of financial statements; artificial intelligence; automated processing; system optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In the context of the digital transformation of the global economy and the rapid advancement of enterprise informatization, ensuring accurate and timely financial statement disclosure has become a critical priority for businesses and regulatory bodies. This study aims to address the inefficiencies, high error rates, and slow response times inherent in traditional financial information disclosure processes, which fail to meet the real-time data accuracy demands of modern enterprises. The study introduces an AI-driven optimization scheme for an automated processing network system for financial statement information disclosure. By leveraging advanced machine learning techniques and large language models, the proposed system enhances the accuracy, speed, and cost-effectiveness of disclosure processes. The system was tested and compared against traditional manual methods, focusing on processing time, accuracy rates, and operational cost savings. The optimized system significantly reduces the average processing time from three hours to 20 minutes, achieving a 90% efficiency improvement. Accuracy is enhanced from 92% to over 97%, while the response speed increases by 40%. Additionally, the system reduces operational costs by 15%, resulting in annual labor cost savings of approximately 12 million yuan. These findings demonstrate the transformative potential of AI technologies in addressing the limitations of traditional financial disclosure processes. This study highlights an innovative application of AI in the realm of intelligent finance, offering a scalable solution that aligns with the evolving demands for real-time, accurate financial information. The research contributes to the growing field of AI-driven automation by showcasing its practical implications and substantial benefits in financial statement disclosure.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_45-Optimization_of_Automated_Financial_Statement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Visual Communication Design and Customization Through the CLIP Contrastive Language-Image Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160344</link>
        <id>10.14569/IJACSA.2025.0160344</id>
        <doi>10.14569/IJACSA.2025.0160344</doi>
        <lastModDate>2025-03-31T12:15:08.4970000+00:00</lastModDate>
        
        <creator>Xiujie Wang</creator>
        
        <subject>CLIP; language image model; visual communication design; element customization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study explores the impact of the CLIP (Contrastive Language-Image Pretraining) model on visual communication design, particularly focusing on its application in design innovation, personalized element creation, and cross-modal understanding. The research addresses how CLIP can meet the increasing demand for personalized and diverse design solutions in the context of digital information overload. Through a comprehensive analysis of the CLIP model’s capabilities in image-text pairing and large-scale learning, this study examines its ability to enhance design efficiency, customization, and creative expression. Quantitative data is presented, showcasing improvements in design processes and outcomes. The use of the CLIP model has resulted in a 30% increase in design efficiency, with a 20% improvement in originality and a 15% boost in market relevance of creative solutions. Personalized design solutions have seen a 40% increase in accuracy and user satisfaction. Additionally, the model’s cross-modal understanding has enhanced the coherence and immersion of visual experiences, improving user satisfaction by 25%. This research highlights the transformative potential of AI-driven models like CLIP in revolutionizing visual communication design, offering insights into how AI can foster design innovation, optimize user experience, and respond to the growing demands for personalized visual solutions in the digital age.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_44-Enhancing_Visual_Communication_Design_and_Customization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-Based Framework for Real-Time Detection of Cybersecurity Threats in IoT Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160343</link>
        <id>10.14569/IJACSA.2025.0160343</id>
        <doi>10.14569/IJACSA.2025.0160343</doi>
        <lastModDate>2025-03-31T12:15:08.4670000+00:00</lastModDate>
        
        <creator>Sultan Saaed Almalki</creator>
        
        <subject>IoT security; intrusion detection system; cybersecurity threats; deep learning; real-time detection; adversarial robustness; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The rapid adoption of Internet of Things (IoT) devices has led to an exponential increase in cybersecurity threats, necessitating efficient and real-time intrusion detection systems (IDS). Traditional IDS and machine learning models struggle with evolving attack patterns, high false positive rates, and computational inefficiencies in IoT environments. This study proposes a deep learning-based framework for real-time detection of cybersecurity threats in IoT networks, leveraging Transformers, Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) architectures. The proposed framework integrates hybrid feature extraction techniques, enabling accurate anomaly detection while ensuring low latency and high scalability for IoT devices. Experimental evaluations on benchmark IoT security datasets (CICIDS2017, NSL-KDD, and TON_IoT) demonstrate that the Transformer-based model outperforms conventional IDS solutions, achieving 98.3% accuracy with a false positive rate as low as 1.9%. The framework also incorporates adversarial defense mechanisms to enhance resilience against evasion attacks. The results validate the efficacy, adaptability, and real-time applicability of the proposed deep learning approach in securing IoT networks against cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_43-A_Deep_Learning_Based_Framework_for_Real_Time_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of YOLO and Faster R-CNN Models for Detecting Traffic Object</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160342</link>
        <id>10.14569/IJACSA.2025.0160342</id>
        <doi>10.14569/IJACSA.2025.0160342</doi>
        <lastModDate>2025-03-31T12:15:08.4500000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Roky Das</creator>
        
        <subject>Faster R-CNN; YOLOv3; YOLOv5; traffic object detection; image detection; autonomous driving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The identification of traffic objects is a basic aspect of autonomous vehicle systems. It allows vehicles to detect different traffic entities such as cars, pedestrians, cyclists, and trucks in real-time. The accuracy and efficiency of object detection are crucial in ensuring the safety and reliability of autonomous vehicles. The focus of this work is a comparative analysis of two object detection models: YOLO (You Only Look Once) and Faster R-CNN (Region-based Convolutional Neural Networks) using the KITTI dataset. The KITTI dataset is a widely accepted reference dataset for work in autonomous vehicles. The evaluation included the performance of YOLOv3, YOLOv5, and Faster R-CNN on three established levels of difficulty. The three levels of difficulty range from Easy, Moderate, to Hard based on object exposure, lighting, and the existence of obstacles. The results of the work show that Faster R-CNN achieves maximum precision in the detection of pedestrians and cyclists, while YOLOv5 offers a good balance of speed and precision. As a result, YOLOv5 is found to be highly suitable for real-time applications. YOLOv3 shows computational efficiency but displayed poor performance in more demanding scenarios. The work presents useful insights into the strengths and limitations of these models. The results help in developing more resilient and efficient traffic object detection systems, thereby advancing the construction of safer and more reliable self-driving cars. Moreover, this study provides a comparative analysis of YOLO and Faster R-CNN models, highlighting key trade-offs and identifying YOLOv5 as a strong real-time candidate while emphasizing Faster R-CNN’s precision in challenging conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_42-Comparative_Analysis_of_YOLO_and_Faster_R_CNN_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Strategy Improved Rapid Random Expansion Tree (RRT) Algorithm for Robotic Arm Path Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160341</link>
        <id>10.14569/IJACSA.2025.0160341</id>
        <doi>10.14569/IJACSA.2025.0160341</doi>
        <lastModDate>2025-03-31T12:15:08.4030000+00:00</lastModDate>
        
        <creator>Yuan Sun</creator>
        
        <creator>Shoujun Zhang</creator>
        
        <subject>Robotic arm; RRT algorithm; path planning; target-biased sampling; Gaussian sampling; bidirectional tree extension; adaptive step-size</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The purpose of this paper is to propose an improved RRT algorithm that incorporates multiple improvement strategies to solve the problems of low efficiency, long and unsmooth paths in the traditional rapid random expansion tree (RRT) algorithm for path planning of robotic arms. The algorithm first uses a bidirectional tree extension strategy to generate trees from both the starting point and the target position simultaneously, improving search efficiency and reducing redundant paths. Secondly, the algorithm introduces target bias sampling in combination with local Gaussian sampling, which renders the sampling points more focused on the target area, and dynamically adjusts the distribution to improve sampling efficiency and path connection speed. Concurrently, the algorithm is equipped with an adaptive step size strategy, which dynamically adjusts the expansion step size according to the target distance, thereby achieving a balance between rapid expansion over long distances and precise search at close range. Finally, a collision-free operation is ensured by a path verification mechanism, and the path is smoothed using cubic B-splines and minimum curvature optimization techniques, significantly improving the smoothness of the path and the feasibility of the robot arm movement. As demonstrated by simulation experiments, the improved RRT algorithm exhibits a reduction in the average path length by 18.15%, planning time by 96.29%, the number of nodes by 92.13%, and the number of iterations by 91.60%, in comparison with the conventional RRT algorithm, when operating in complex map mode. These findings substantiate the efficacy and practicality of the improved RRT algorithm in the domain of robotic arm path planning.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_41-Multi_Strategy_Improved_Rapid_Random_Expansion_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Privacy Protection Technology Integrating CNN and Differential Privacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160340</link>
        <id>10.14569/IJACSA.2025.0160340</id>
        <doi>10.14569/IJACSA.2025.0160340</doi>
        <lastModDate>2025-03-31T12:15:08.3570000+00:00</lastModDate>
        
        <creator>Yanfeng Liu</creator>
        
        <creator>Ping Li</creator>
        
        <creator>Min Zhang</creator>
        
        <creator>Qinggang Liu</creator>
        
        <subject>Convolutional neural network; differential privacy; adaptive noise addition; big data; privacy protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>To address the difficulty of balancing privacy and availability in big data privacy protection technology, this study integrates the powerful feature extraction ability of convolutional neural network models with the efficiency of differential privacy in data protection. An innovative privacy protection method combining gradient-adaptive noise and adaptive step-size control is proposed. The experimental findings indicate that the proposed method outperforms existing advanced privacy protection technologies, with an average accuracy of 97.68% and a performance improvement of about 20% to 30%. In addition, for larger privacy budgets, appropriately increasing the threshold can further improve the method's effectiveness. This indicates that through refined noise control and step-size adjustment, the privacy protection process can be optimized while maintaining high efficiency and accuracy in data processing. In summary, while ensuring data utility, the proposed method not only significantly reduces the risk of privacy breaches but also optimizes the privacy protection mechanism, achieving an ideal balance between protecting personal privacy and maximizing data utility. This innovative approach provides an efficient solution for the field of privacy protection, with the potential to promote further development of related technologies and applications.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_40-Big_Data_Privacy_Protection_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Carbon Pollution Removal in Activated Sludge Process of Wastewater Treatment Systems Using Grey Wolf Optimization-Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160339</link>
        <id>10.14569/IJACSA.2025.0160339</id>
        <doi>10.14569/IJACSA.2025.0160339</doi>
        <lastModDate>2025-03-31T12:15:08.3100000+00:00</lastModDate>
        
        <creator>Sa&#239;da Dhouibi</creator>
        
        <creator>Raja Jarray</creator>
        
        <creator>Soufiene Bouall&#232;gue</creator>
        
        <subject>Wastewater treatment systems; carbon pollution removal; fuzzy predictive control; metaheuristics optimization; Grey Wolf Optimizer; ANOVA tests</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Managing wastewater to effectively remove water pollution is inherently difficult. Ensuring that the treated water meets stringent standards is a main priority for several countries. Advances in control and optimization strategies can significantly improve the elimination of harmful substances, particularly in the case of carbon pollution. This paper presents a novel optimization-based approach for carbon removal in the Activated Sludge Process (ASP) of Wastewater Treatment Plants (WWTPs). The developed pollution removal algorithm combines the concepts of Takagi-Sugeno (TS) fuzzy modeling, Model Predictive Control (MPC) and Grey Wolf Optimization (GWO), as a parameters-free metaheuristics algorithm, to boost carbon elimination in terms of standard metrics, namely Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD5) and Total Suspended Solids (TSS). To enhance such pollution removal, the proposed fuzzy predictive control for all wastewater variables, i.e. effluent volume, concentrations of heterotrophic biomass, biodegradable substrate and dissolved oxygen, is formulated as a constrained optimization problem. The MPC parameters’ tuning process is therefore performed to select appropriate values for the weighting coefficients, prediction and control horizons of local TS sub-models. To demonstrate the effectiveness of the proposed parameters-free GWO algorithm, comparisons with homologous state-of-the-art solvers such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), as well as the standard commonly used Parallel Distributed Compensation (PDC) technique, are carried out in terms of the key purification indices COD, BOD5, and TSS. Additionally, an ANOVA study is conducted to evaluate the reported competing metaheuristics using Friedman ranking and post-hoc tests. The main findings highlight the superiority of the proposed GWO-based carbon pollution removal in WWTPs, with elimination efficiencies of 93.9% for COD, 93.4% for BOD5, and 94.1% for TSS, in comparison with lower percentages for the PSO, GA and PDC techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_39-Carbon_Pollution_Removal_in_Activated_Sludge_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Related Applications of Deep Learning Algorithms in Medical Image Fusion Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160338</link>
        <id>10.14569/IJACSA.2025.0160338</id>
        <doi>10.14569/IJACSA.2025.0160338</doi>
        <lastModDate>2025-03-31T12:15:08.2800000+00:00</lastModDate>
        
        <creator>Hua Sun</creator>
        
        <creator>Li Zhao</creator>
        
        <subject>Image fusion; image recognition; residual network; medical image; speeded up robust features; medical diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>With the continuous advancement of medical technology, image fusion technology has been increasingly applied in medicine. However, current medical image fusion systems still have drawbacks such as low image clarity, low accuracy, and slow computing speed. To address these drawbacks, this study utilized the speeded-up robust features image recognition algorithm to optimize a deep residual network and proposed an optimization algorithm based on residual-network deep learning. Based on this optimization algorithm, a medical image fusion system was constructed. Comparative experiments were conducted on the improved algorithm, and the experimental outcomes indicated that the accuracy of image feature extraction was 0.98, the average time for feature extraction was 0.12 seconds, and the extraction capability was significantly better than that of the comparative algorithms HPF-CNN, PSO, and PCA-CNN. Subsequently, experiments were conducted on the image fusion system, and the outcomes indicated that the accuracy and clarity of the fused images were 0.98 and 0.97, respectively, both superior to other systems. These outcomes indicate that the proposed medical image fusion system based on the optimized deep learning algorithm can not only improve the speed of image fusion but also enhance the clarity and accuracy of fused images. This study not only improves the accuracy of medical diagnosis but also provides a theoretical basis for the field of image fusion.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_38-Related_Applications_of_Deep_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Identification of Cellulose Particle Pre-Bridging and Bridging Stages in Transformer Oil</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160337</link>
        <id>10.14569/IJACSA.2025.0160337</id>
        <doi>10.14569/IJACSA.2025.0160337</doi>
        <lastModDate>2025-03-31T12:15:08.2470000+00:00</lastModDate>
        
        <creator>Nur Badariah Ahmad Mustafa</creator>
        
        <creator>Marizuana Mat Daud</creator>
        
        <creator>Hidayat Zainuddin</creator>
        
        <creator>Nik Hakimi Nik Ali</creator>
        
        <creator>Fadilla Atyka Nor Rashid</creator>
        
        <subject>Cellulose bridging; feature classification; feature extraction; oil deterioration; support vector machine; synthetic transformer oil</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The deterioration of transformer oil quality is influenced by factors including the presence of acids, water, and other contaminants such as cellulose particles and metal dust. The dielectric strength of the oil decreases over time, depending on the service conditions. This study introduces an efficient machine learning method to classify the pre-bridging and bridging stages by analyzing the formation of cellulose particle bridges in synthetic ester transformer oil. It is important to note that the pre-bridging and bridging stages indicate a pre-breakdown condition. The machine learning approach implements a combination of a digital image processing (DIP) technique and a support vector machine (SVM). The DIP technique, specifically the feature extraction method, captures the feature descriptors from the cellulose particle bridging images, including area, MajorAxisLength, MinorAxisLength, orientation, contrast, correlation, homogeneity and energy. These descriptors are used in the SVM to assess the pre-bridging and bridging stages in transformer oil without human intervention. Various SVM models were implemented, including linear, quadratic, cubic, fine Gaussian, medium Gaussian, and coarse Gaussian. The results achieved 96.5% accuracy using the quadratic and cubic SVM models with the eight feature descriptors. This research has significant implications, allowing early detection of transformer breakdown, prolonging transformer lifespan, ensuring uninterrupted power plant operations, and potentially reducing replacement costs and electricity disruptions due to late breakdown detection.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_37-Machine_Learning_Based_Identification_of_Cellulose_Particle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Micro Laboratory Safety Hazard Detection Based on YOLOv4: A Lightweight Image Analysis Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160336</link>
        <id>10.14569/IJACSA.2025.0160336</id>
        <doi>10.14569/IJACSA.2025.0160336</doi>
        <lastModDate>2025-03-31T12:15:08.2000000+00:00</lastModDate>
        
        <creator>Yuan Lin</creator>
        
        <subject>Hazardous chemical safety; unsafe factors; deep learning; target detection; YOLO-v4-tiny; laboratory safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In hazardous chemical laboratories, identifying and managing safety hazards is critical for effective safety management. This study, grounded in safety engineering principles, focuses on laboratory environments to develop an efficient hazard detection model using deep learning and object detection techniques. The lightweight YOLOv4-Tiny algorithm, with fewer parameters, was selected and optimized for detecting unsafe factors in laboratories. The CIOU loss function was employed to enhance the stability of candidate box regression, while three attention mechanism modules were embedded into the backbone feature extraction network and the feature pyramid&#39;s upsampling layer, forming an improved YOLOv4-Tiny object detection algorithm. To support the detection tasks, a specialized dataset for laboratory hazards was created. The improved YOLOv4-Tiny model was then used to construct two detection models: one for identifying the status of chemical bottles and another for detecting general laboratory safety hazards. The chemical bottle status detection model achieved AP values of 93.06% (normal), 95.31% (disorderly stacking), and 90.72% (label detachment), with an mAP of 93.03% and an FPS of 272, demonstrating both high accuracy and speed. The laboratory hazard detection model achieved AP values of 97.40%, 90.14%, 96.80%, and 68.95% for normal experimenters, individuals not wearing protective equipment, individuals smoking, and open flames, respectively, with a mAP of 88.32% and an FPS of 116. These results confirm the effectiveness of the proposed models in accurately and efficiently identifying laboratory safety hazards.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_36-Micro_Laboratory_Safety_Hazard_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Segmentation and Concatenation for Controlling K-Means Clustering-Based Gamelan Musical Nuance Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160335</link>
        <id>10.14569/IJACSA.2025.0160335</id>
        <doi>10.14569/IJACSA.2025.0160335</doi>
        <lastModDate>2025-03-31T12:15:08.1530000+00:00</lastModDate>
        
        <creator>Heribertus Himawan</creator>
        
        <creator>Arry Maulana Syarif</creator>
        
        <creator>Ika Novita Dewi</creator>
        
        <creator>Abdul Karim</creator>
        
        <subject>Musical emotion clustering; classification; clustering-based classification; K-Means algorithm; symbolic music; gamelan music</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The musical nuance classification model is proposed using a clustering-based classification approach. Gamelan, a traditional Indonesian music ensemble, is used as the subject of this study. The proposed approach employs initial and final data segmentation to analyze symbolic music data, followed by concatenation of the clustering results from both segments to generate a more complex label. Structural-based segmentation divides the composition into an initial segment, representing theme introduction, and a final segment, serving as a closing or resolution. This aims to capture the distinct characteristics of the initial and final segments of the composition. The approach reduces clustering complexity while maintaining the relevance of local patterns. The clustering process, performed using the K-Means algorithm, demonstrates strong performance and promising results. Furthermore, the classification rules derived from data segmentation and concatenation help mitigate clustering complexity, resulting in an effective classification outcome. The model evaluation was conducted by measuring the similarity within the classes formed from data merging using Euclidean distance score, where values below three indicate high similarity, and values greater than ten indicate strong dissimilarity. Three of the 13 formed classes with more than one data point, Class 5, Class 12, and Class 18, demonstrate high similarity with a value below three. Five other classes, Class 7, Class 10, Class 11, Class 15, and Class 20, exhibit near-high similarity, with values ranging from three to four, while the remaining five classes fall within the range of four to five.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_35-Data_Segmentation_and_Concatenation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Motion Templates of Sport Training Using R-GDL Approach for Evaluating Extrinsic Feedback of Penalty Kicks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160334</link>
        <id>10.14569/IJACSA.2025.0160334</id>
        <doi>10.14569/IJACSA.2025.0160334</doi>
        <lastModDate>2025-03-31T12:15:08.1230000+00:00</lastModDate>
        
        <creator>Amir Irfan Mazian</creator>
        
        <creator>Wan Rizhan</creator>
        
        <creator>Normala Rahim</creator>
        
        <creator>Muhammad D. Zakaria</creator>
        
        <creator>Mohd Sufian Mat Deris</creator>
        
        <creator>Fadzli Syed Abdullah</creator>
        
        <creator>Ahmad Rafi</creator>
        
        <subject>Motion templates; motion capture; penalty kick; extrinsic feedback; reverse-gesture description language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The study developed Motion Templates (MTs) using the Reverse-Gesture Description Language (R-GDL) method to evaluate extrinsic feedback in football penalty kick training. Traditional coaching methods often rely on subjective and qualitative assessments. To address this, motion capture (MoCap) technology was employed to collect kinematic data from two university football players (right- and left-footed) performing penalty kicks toward the left (Set 1) and right (Set 2) goalposts, and a Score Rubric Assessment (SRA) form was used by a professional coach to evaluate the performance. From the collected MoCap data, 40 successful penalty kicks were selected, converted into SKL format, and used to generate MTs through the Gesture Description Language (GDL) system using R-GDL, which standardized movement patterns through adaptive machine-learning-derived rules. The MTs incorporated features such as joint angles and limb trajectories, producing five rules per template for comparative analysis. Results demonstrated that MTs effectively differentiated players’ techniques across sets (e.g., Player A required fewer attempts in Set 1 than Player B in Set 2). Cross-validation against coach-evaluated SRA outcomes revealed that extrinsic feedback scores from MTs did not surpass SRA benchmarks, confirming the uniqueness of each player’s motion patterns. This highlights MTs’ reliability in providing objective, granular feedback for skill improvement. The study concludes that R-GDL-based MTs offer a robust tool for enhancing sports training analytics, enabling data-driven coaching strategies. Future work will focus on scalability, cost reduction, and extending this approach to other sports.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_34-Developing_Motion_Templates_of_Sport_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Popularity-Correction Sampling and Improved Contrastive Loss Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160333</link>
        <id>10.14569/IJACSA.2025.0160333</id>
        <doi>10.14569/IJACSA.2025.0160333</doi>
        <lastModDate>2025-03-31T12:15:08.1070000+00:00</lastModDate>
        
        <creator>Wei Lu</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <creator>Minghui Li</creator>
        
        <subject>Recommendation algorithms; contrastive loss; difficult negative samples; popularity bias</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In recommendation systems, negative sampling strategies are crucial for the calculation of contrastive learning loss. Traditional random negative sampling methods may lead to insufficient quality of negative samples during training, thereby affecting the convergence and performance of the model. In addition, the Bayesian Personalized Ranking (BPR) loss function usually converges slowly and is prone to falling into suboptimal local solutions. To address the above problems, this paper proposes a recommendation algorithm based on popularity-corrected sampling and improved contrastive loss. First, a dynamic negative sampling method with popularity correction is proposed, which reduces the impact of item popularity distribution bias on model training and dynamically screens out negative samples to improve the quality of model recommendations. Second, an improved contrastive loss is proposed, which selects the most challenging negative samples and introduces a boundary threshold to control the sensitivity of the loss, enabling the model to focus more on samples that are difficult to distinguish and further optimize the recommendation effect. Experimental results on the Amazon-Book, Yelp2018, and Gowalla datasets show that the proposed model significantly outperforms mainstream state-of-the-art models in recommendation tasks. Specifically, the Recall metric, which reflects model accuracy, improves by 16.8%, 12.9%, and 5.72% respectively on these three datasets. The NDCG metric, which measures ranking quality, increases by 20.7%, 16.4%, and 7.76% respectively. These results confirm the effectiveness and superiority of the recommendation algorithm across different scenarios. Compared with baseline models, it demonstrates stronger adaptability in complex situations, such as the sparse dataset Gowalla and the long-tail distribution dataset Amazon-Book, with the highest improvement in core metrics exceeding 20%.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_33-Popularity_Correction_Sampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis and Emotion Detection Using Transformer Models in Multilingual Social Media Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160332</link>
        <id>10.14569/IJACSA.2025.0160332</id>
        <doi>10.14569/IJACSA.2025.0160332</doi>
        <lastModDate>2025-03-31T12:15:08.0770000+00:00</lastModDate>
        
        <creator>Sultan Saaed Almalki</creator>
        
        <subject>Multilingual sentiment analysis; emotion detection; transformer models; XLM-R; mBERT; T5; code-switching; cross-lingual NLP; social media text processing; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The rapid expansion of multilingual social media platforms has resulted in a surge of user-generated content, introducing challenges in sentiment analysis and emotion detection due to code-switching, informal text, and linguistic diversity. Traditional rule-based and machine learning models struggle to process multilingual complexities effectively, necessitating advanced deep-learning approaches. This study develops a transformer-based sentiment analysis and emotion detection system capable of handling multilingual and code-mixed social media text. The proposed fine-tuned Cross-lingual Language Model – Robust (XLM-R) model is compared against state-of-the-art transformer models (mBERT, T5) and traditional classifiers (support vector machine (SVM), Random Forest) to assess its cross-lingual sentiment classification performance. A multilingual dataset was compiled from Twitter, YouTube, Facebook, and Amazon Reviews, covering English, Spanish, French, Hindi, Arabic, Tamil, and Portuguese. Data preprocessing included tokenization, stopword removal, emoji normalization, and code-switching handling. Transformer models were fine-tuned using cross-lingual embeddings and transfer learning, with accuracy, F1-score, and confusion matrices for performance evaluation. Results show that XLM-R outperformed all baselines, achieving an F1-score of 90.3%, while multilingual Bidirectional Encoder Representations from Transformers (mBERT) and T5 scored 84.5% and 87.2%, respectively. Preprocessing improved performance by 7%, particularly in code-mixed datasets. Handling code-switching increased accuracy by 8.9%, confirming the model’s robustness in multilingual sentiment analysis. The findings demonstrate that XLM-R effectively classifies sentiments and emotions in multilingual social media data, surpassing existing approaches. This study supports integrating transformer-based models for cross-lingual natural language processing (NLP) tasks, paving the way for real-time multilingual sentiment analysis applications.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_32-Sentiment_Analysis_and_Emotion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Real-Time Air Quality Index Classification for Smart Home Digital Twins</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160331</link>
        <id>10.14569/IJACSA.2025.0160331</id>
        <doi>10.14569/IJACSA.2025.0160331</doi>
        <lastModDate>2025-03-31T12:15:08.0130000+00:00</lastModDate>
        
        <creator>Saley Saleh</creator>
        
        <creator>A. S. Abohamama</creator>
        
        <creator>A. S. Tolba</creator>
        
        <subject>Air quality classification; machine learning; deep learning; Convolutional Neural Networks; Recurrent Neural Networks; transformer; Support Vector Machines; Random Forest; Gradient Boosting; k-nearest neighbors; CCS811 sensor data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper investigates the application of machine learning and deep learning models for intelligent real-time Air Quality Index (AQI) classification within a smart home digital twin context. Leveraging sensor data encompassing CO2 and TVOC levels, we perform a comparative analysis of eight models: Transformer Neural Network (TNN), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), Recurrent Neural Networks (RNN), Support Vector Machines (SVM), Random Forest (RF), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). These models aim to accurately classify air quality into six categories corresponding to AQI levels, ranging from Good to Hazardous, which are critical for assessing health risks. The performance of each model is rigorously evaluated using metrics including accuracy, precision, recall, F1-score, and ROC curves. Our findings demonstrate that the implemented models exhibit strong performance. This high-accuracy classification enables the smart home digital twin to move beyond passive monitoring, enabling proactive environmental control. For instance, the digital twin can use this real-time AQI classification to automatically adjust HVAC systems, trigger air purifiers when indoor air quality degrades, and potentially inform occupancy schedules. This integration allows for intelligent, adaptive management of the home&#39;s environment, ensuring optimal indoor air quality and occupant well-being. The paper also discusses the limitations of each model and suitable application scenarios for intelligent AQI management within the digital twin framework, offering valuable insights for the selection of appropriate air quality classification models in smart home environments.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_31-Intelligent_Real_Time_Air_Quality_Index_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Ordinal Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160330</link>
        <id>10.14569/IJACSA.2025.0160330</id>
        <doi>10.14569/IJACSA.2025.0160330</doi>
        <lastModDate>2025-03-31T12:15:07.9830000+00:00</lastModDate>
        
        <creator>Tiphelele Lwazi Nxumalo</creator>
        
        <creator>Richard Maina Rimiru</creator>
        
        <creator>Vusi Mpendulo Magagula</creator>
        
        <subject>Ordinal classification; TabNet; proportional odds model; tabular data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Deep learning models such as TabNet have gained popularity for handling tabular data. However, most existing architectures treat categorical variables as nominal, ignoring the inherent ordering in ordinal data, which can lead to suboptimal classification performance, particularly in tasks where ordinal relationships carry meaningful information, such as quality assessment, disease severity staging, and risk prediction. This study investigates the impact of explicitly modeling ordinal relationships in deep learning by developing an ordinal classification model and comparing it with its nominal counterpart. The proposed approach integrates TabNet, a deep learning framework, with ordinal constraints, leveraging a proportional odds model to better capture the ordinal structure and Beta cross-entropy as the loss function to enforce ordering during training. To evaluate the effectiveness of the proposed ordinal classification approach, experiments were conducted on two publicly available datasets: the White Wine Quality dataset and the Hepatitis C dataset. The results demonstrate that incorporating ordinal constraints leads to improvements across multiple evaluation metrics, including 1-off accuracy, average mean absolute error (AMAE), maximum mean absolute error (MMAE), and quadratic weighted kappa (QWK), compared to a nominal classification model trained under the same conditions. These findings underscore the importance of ordinal modeling in tabular classification and contribute to the advancement of deep learning techniques for structured data.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_30-A_Deep_Learning_Ordinal_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Deep Learning Framework with Unicintus Optimization for Anomaly Detection in Streaming Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160329</link>
        <id>10.14569/IJACSA.2025.0160329</id>
        <doi>10.14569/IJACSA.2025.0160329</doi>
        <lastModDate>2025-03-31T12:15:07.9330000+00:00</lastModDate>
        
        <creator>Srividhya V R</creator>
        
        <creator>Kayarvizhy N</creator>
        
        <subject>Streaming data; sliding window; anomaly detection; reservoir sampling; Unicintus escape energy optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Anomaly detection in streaming data is crucial for identifying unusual patterns or outliers that may indicate significant issues. Traditional methods struggle to efficiently handle high-velocity data, adapt to changing data distributions, and maintain performance over time. Further, conventional methods struggle with scalability, adaptability, and computational efficiency, leading to delays in detection or an increased rate of false positives. To address these limitations, the Unicintus Escape Energy enabled Sampling based Drift Deep Belief Network-Bidirectional Long Short Term Memory (UES2-DTM) is proposed in this research. The model incorporates adaptive reservoir sampling and an adaptive sliding window mechanism into the base model, which improves its efficiency on streaming data. Moreover, the adaptive sliding window mechanism for drift detection integrates the Unicintus Escape Energy Optimization (UE2O) Algorithm to boost efficiency by dynamically adjusting the sliding window size and parameters based on real-time streaming data characteristics. Further, adaptive reservoir sampling helps maintain a representative sample of the data stream for effective detection. Overall, the UES2-DTM model demonstrates superior adaptability and accuracy, evaluated with metrics such as precision, recall, F1-score, and Mean Square Error (MSE), attaining 97.199%, 94.827%, 95.998%, and 3.461, respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_29-Adaptive_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modification of C-Grabcut for Segmentation and Classification of Coffee Leaf Diseases in Complex Backgrounds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160328</link>
        <id>10.14569/IJACSA.2025.0160328</id>
        <doi>10.14569/IJACSA.2025.0160328</doi>
        <lastModDate>2025-03-31T12:15:07.9030000+00:00</lastModDate>
        
        <creator>Anastia Ivanabilla Novanti</creator>
        
        <creator>Agus Harjoko</creator>
        
        <subject>Image segmentation; in-field image; mobilenet-v2; coffee leaf diseases; background complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Visual changes, including spots, discoloration, and deformation, characterize coffee leaf diseases. In real-world image data, complex backgrounds present challenges for classification using deep learning models. Irrelevant objects, such as soil, other leaves, and miscellaneous items, can hinder the model&#39;s ability to accurately recognize disease patterns. Furthermore, the absence of effective segmentation techniques has resulted in low accuracy in previous studies. This work aims to address these limitations by enhancing the performance of the MobileNet-V2 model for coffee leaf disease classification. We applied a modified C-Grabcut segmentation technique to improve the isolation of diseased areas from complex backgrounds. The results demonstrate a significant performance improvement, achieving an Intersection over Union (IoU) of 0.8369 and an accuracy of 94.83%. These findings suggest that the modified MobileNet-V2 model, combined with the improved C-Grabcut segmentation, offers robust performance for in-field coffee leaf disease classification, striking a better balance between effectiveness and accuracy compared to previous studies.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_28-Modification_of_C_Grabcut_for_Segmentation_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Intrusion Detection in IoV Communication: Insights from CICIoV2024 Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160327</link>
        <id>10.14569/IJACSA.2025.0160327</id>
        <doi>10.14569/IJACSA.2025.0160327</doi>
        <lastModDate>2025-03-31T12:15:07.8400000+00:00</lastModDate>
        
        <creator>Nourah Fahad Janbi</creator>
        
        <subject>Intrusion Detection System; controller area network; Internet of Vehicles; CICIoV2024; machine learning; Artificial Intelligence; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The increasing interconnectivity of vehicular networks through the Internet of Vehicles (IoV) introduces significant security challenges, particularly for the Controller Area Network (CAN), a widely adopted protocol vulnerable to cyberattacks such as spoofing and Denial-of-Service (DoS). To address these challenges, this study explores the potential of Intrusion Detection Systems (IDSs) leveraging artificial intelligence (AI) techniques to detect and mitigate malicious activities in CAN communications. Using the CICIoV2024 dataset, which provides a realistic testbed of vehicular traffic under benign and malicious conditions, we evaluate 25 machine learning (ML) models across multiple metrics, including accuracy, balanced accuracy, F1-score, and computational efficiency. A systematic and repeatable approach was proposed to facilitate testing multiple models and classification scenarios, enabling a comprehensive exploration of the dataset&#39;s characteristics and providing insights into various ML algorithms&#39; effectiveness. The findings highlight the strengths and limitations of various algorithms, with ensemble-based and tree-based models demonstrating superior performance in handling imbalanced data and achieving high generalization. This study provides insights into optimizing IDSs for vehicular networks and outlines recommendations for improving the robustness and applicability of security solutions in real-world IoV scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_27-AI_Driven_Intrusion_Detection_in_IoV_Communication_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Super-Twisting Sliding Mode Distributed Consensus for Nonlinear Multi-Agent Systems with Unknown Bounded External Disturbances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160326</link>
        <id>10.14569/IJACSA.2025.0160326</id>
        <doi>10.14569/IJACSA.2025.0160326</doi>
        <lastModDate>2025-03-31T12:15:07.8100000+00:00</lastModDate>
        
        <creator>Belkacem Kada</creator>
        
        <creator>Khalid Munawar</creator>
        
        <subject>Distributed consensus; cooperative control; nonlinear multiagent systems; robustness; super-twisting sliding mode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper addresses the distributed consensus tracking problem for nonlinear multi-agent systems subject to unknown but bounded external disturbances by leveraging a super-twisting sliding mode (STSM) control framework. Two STSM-based consensus algorithms are proposed—one for first-order and another for second-order multi-agent systems—to achieve finite-time convergence despite disturbances. A disturbance observer is integrated into the consensus control protocols to estimate and compensate for these disturbances, ensuring robust tracking without requiring time-derivative sliding variables or smoothing algorithms. The proposed consensus protocols build upon the concepts of finite-time stability, Lipschitz-bounded functions, relative degree analysis of input-output dynamics, and positive-definite matrix properties. Stability and finite-time convergence are rigorously established using Lyapunov-based proofs, Rayleigh’s inequality, and finite-time settling results. Unstructured disturbances are modelled as zero-mean Gaussian noise and structured disturbances are expressed via a regressor formulation. Numerical simulations confirm that the integrated STSM-based consensus approach and disturbance observer ensure high tracking accuracy, robustness, and smooth control performance under diverse disturbance conditions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_26-Super_Twisting_Sliding_Mode_Distributed_Consensus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved CNN Recognition Algorithm for Identifying Bird Hazards in Transmission Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160325</link>
        <id>10.14569/IJACSA.2025.0160325</id>
        <doi>10.14569/IJACSA.2025.0160325</doi>
        <lastModDate>2025-03-31T12:15:07.7930000+00:00</lastModDate>
        
        <creator>Junzhou Li</creator>
        
        <creator>Yao Li</creator>
        
        <creator>Wen Wang</creator>
        
        <subject>CNN; hazard birds; transmission line; distinguish; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>With the expansion of the power grid, bird activities have become the main factor causing transmission line failures. How to accurately identify hazard birds has received widespread attention from all sectors of society. However, current bird identification methods for transmission line hazards suffer from low accuracy due to the small size of bird targets. This study proposes an enhanced Convolutional Neural Network (CNN) with Support Vector Machines (SVM) to improve the accuracy of identifying hazardous birds on transmission lines. At the same time, a dataset of bird species affecting transmission lines is constructed, and data augmentation methods and denoising deep convolutional networks are used to process the data. Combining these three components, a bird identification algorithm for transmission line hazards based on improved CNNs and SVM is constructed. A performance comparison shows that the average recognition speed and accuracy of the algorithm are 9.8 frames per second and 97.4%, respectively, significantly better than the compared algorithms. In addition, an analysis of the application effect of the algorithm finds that it can accurately identify hazard birds. In sample recognition results, the confirmation probabilities for Pica pica, Ciconia boyciana, Egretta garzetta, and Hirundo rustica are 98.73%, 97.68%, 96.54%, and 91.34%, respectively, all above 90%. These findings indicate that the proposed identification algorithm has good performance and practical value, helping to improve the accuracy of identifying hazard birds on transmission lines.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_25-Improved_CNN_Recognition_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flood Prevention System Using IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160324</link>
        <id>10.14569/IJACSA.2025.0160324</id>
        <doi>10.14569/IJACSA.2025.0160324</doi>
        <lastModDate>2025-03-31T12:15:07.7330000+00:00</lastModDate>
        
        <creator>Balasubramaniam Muniandy</creator>
        
        <creator>Siti Sarah Maidin</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>Lakshmi Dhandapani</creator>
        
        <creator>Prakash. S</creator>
        
        <subject>Flood prevention system; Internet of Things (IoT); automated water turbines; river basin management; real-time monitoring; AI-based flood prediction; environmental sustainability; smart infrastructure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Floods are one of the most severe natural disasters in Malaysia, occurring frequently in recent years and causing significant socio-economic and environmental impacts. These recurring disasters lead to huge losses and prolonged recovery periods. Flood management involves four phases: prevention, preparedness, response, and recovery. However, existing flood management systems primarily focus on preparedness, response, and recovery, often neglecting preventive measures, especially in river basins, which serve as the primary channels for water flow. The lack of emphasis on the prevention phase has resulted in frequent flood occurrences, economic losses, loss of lives, and extensive environmental damage. To address this gap, this study proposes an IoT-based Flood Prevention System specifically designed for river basin management to mitigate flood risks. The system effectively regulates and maintains river water flow and quality through the integration of the Internet of Things (IoT) and Automated Water Turbines. By combining real-time data collection from IoT sensors with historical flood data, the system can autonomously take appropriate actions to regulate and maintain the water flow and water level in the river basin. These proactive measures allow for better water discharge to the sea, even during periods of heavy rainfall. The implementation of this system contributes to sustainable flood mitigation strategies, with advanced technologies enhancing disaster management capabilities.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_24-Flood_Prevention_System_Using_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Career Recommendation Based on Feature Selection for Undergraduate Students Using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160323</link>
        <id>10.14569/IJACSA.2025.0160323</id>
        <doi>10.14569/IJACSA.2025.0160323</doi>
        <lastModDate>2025-03-31T12:15:07.6830000+00:00</lastModDate>
        
        <creator>Samar El-Keiey</creator>
        
        <creator>Dina ElMenshawy</creator>
        
        <creator>Ehab Hassanein</creator>
        
        <subject>Career path; feature selection; machine learning techniques; recommendation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Undergraduate students worldwide face difficulties choosing the career paths that will stay with them for at least several years. It is common for graduates to work in jobs, or join career paths, that they are not interested in, or that do not suit their skills and preferences. On the other hand, some jobs require certain criteria and skills that some undergraduates lack. Although undergraduates can study a major they are interested in, this does not guarantee success in their future career path. Undergraduates in various majors therefore need advice on career paths that suit their skills and interests. When graduates feel dissatisfied with their jobs, this dissatisfaction can impact their productivity and performance in their assigned tasks and responsibilities, and the overall performance of the organization employing them can be negatively affected by less talented and less motivated workers. As a result, in this paper, a recommendation system is designed and proposed to guide undergraduates in choosing the optimal career path. Various machine-learning techniques were used in the recommendation system. The proposed system was applied to two datasets related to Information Technology jobs; “Dataset A” consisted of 20,000 records and “Dataset B” consisted of 500 records. Feature selection techniques were applied on “Dataset A” to determine the most important features that enhance the accuracy of the proposed recommendation system. The random forests technique performed the best among the machine learning techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_23-Career_Recommendation_Based_on_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormal Data Detection Model Based on Autoencoder and Random Forest Algorithm: Camera Sensor Data in Autonomous Driving Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160322</link>
        <id>10.14569/IJACSA.2025.0160322</id>
        <doi>10.14569/IJACSA.2025.0160322</doi>
        <lastModDate>2025-03-31T12:15:07.5900000+00:00</lastModDate>
        
        <creator>Geng Shengwen</creator>
        
        <creator>Mohd Hafeez Osman</creator>
        
        <subject>Automatic driving; anomaly data detection; convolutional autoencoder; random forest; CAE-RF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This project develops an AI-based anomaly detection system. In the field of autonomous driving, abnormal data directly affects the safety of autonomous driving systems, especially abnormal camera sensor data. Sensor failure, environmental changes, or bad weather can lead to abnormal data, which can affect the decision-making process and may have disastrous consequences. To address these problems, this study proposes a hybrid anomaly detection model (called CAE-RF) that combines convolutional autoencoders and random forest algorithms to achieve efficient and accurate identification of abnormal data patterns and improve the safety of autonomous driving systems. The proposed method uses a convolutional autoencoder to compute the reconstruction error and combines it with the hidden features extracted by the encoder as input to the random forest, which distinguishes normal data from abnormal data. Key performance indicators such as accuracy, precision, recall, and F1-score are used to evaluate the model, and robustness is ensured by cross-validation. Experimental results show that the CAE-RF model achieves an accuracy of 92% in distinguishing normal and abnormal data. Compared with traditional methods, the CAE-RF model achieves higher accuracy and reliability. Implementing this model makes it possible to identify and handle abnormal data in a timely manner, reduce the risks brought by sensor failure or external environment changes, prevent potential accidents, and improve the safety and reliability of the autonomous driving system.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_22-Abnormal_Data_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmasking AI-Generated Texts Using Linguistic and Stylistic Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160321</link>
        <id>10.14569/IJACSA.2025.0160321</id>
        <doi>10.14569/IJACSA.2025.0160321</doi>
        <lastModDate>2025-03-31T12:15:07.5600000+00:00</lastModDate>
        
        <creator>Muhammad Irfaan Hossen Rujeedawa</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <creator>Vusumuzi Malele</creator>
        
        <subject>AI-generated texts; human-written texts; machine learning; linguistic features; stylistic features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>As Artificial Intelligence (AI) generated texts become increasingly sophisticated, distinguishing between human-written and AI-generated content presents a growing challenge. Reliably detecting AI-generated texts is of primary importance in text-heavy fields such as journalism, education, and law. In this study, several methods for detecting AI-generated texts by analysing a range of linguistic and stylistic features were investigated. Features such as text length, punctuation count, vocabulary richness, readability indices, and sentiment polarity were incorporated to identify patterns in AI-generated content. A dataset of 483,360 essays was used in this study. Of the six machine learning classifiers tested, the Random Forest classifier achieved the highest accuracy of 82.6%. The findings of this study thus provide a framework for the development of more sophisticated detection tools that can be applied to various real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_21-Unmasking_AI_Generated_Texts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Emerging Technologies on Customer Loyalty: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160320</link>
        <id>10.14569/IJACSA.2025.0160320</id>
        <doi>10.14569/IJACSA.2025.0160320</doi>
        <lastModDate>2025-03-31T12:15:07.5300000+00:00</lastModDate>
        
        <creator>Jonattan Andia-Reyna</creator>
        
        <creator>Yorhs Malasquez-Villanueva</creator>
        
        <subject>Emerging technologies; loyalty programs; customer loyalty; business growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The rapid evolution of emerging technologies has generated growing interest in their potential to transform customer loyalty in digital environments. This study conducts a systematic literature review (SLR) to analyze how emerging technologies influence customer loyalty, focusing on how these technologies affect loyalty indicators in markets with developed digital environments. A total of 453 articles from the Scopus database were identified by applying the PRISMA methodology. After removing duplicates and applying filters by language and document type, 103 relevant articles were selected. A detailed review based on inclusion and exclusion criteria was then conducted, and 51 documents were finally included for analysis. Big Data and Data Analytics were the most researched technologies, followed by IoT and Machine Learning. The systematic review demonstrated that emerging technologies significantly impact customer loyalty: artificial intelligence and data analytics are key tools for improving customer experience and retention, which contributes to business growth. It is concluded that adopting these technologies enhances customer experience by offering personalization, behavior prediction, and inventory optimization, resulting in greater customer satisfaction and loyalty.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_20-Impact_of_Emerging_Technologies_on_Customer_Loyalty.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unified Deep Learning for Real-Time Pedestrian Detection, Pose Estimation, and Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160319</link>
        <id>10.14569/IJACSA.2025.0160319</id>
        <doi>10.14569/IJACSA.2025.0160319</doi>
        <lastModDate>2025-03-31T12:15:07.4670000+00:00</lastModDate>
        
        <creator>Joseph De Guia</creator>
        
        <creator>Madhavi Deveraj</creator>
        
        <subject>Pedestrian detection; pose estimation; tracking; YOLOv8; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study introduces a novel unified deep learning framework for real-time pedestrian and Vulnerable Road User (VRU) detection, pose estimation, and tracking using YOLOv8. Unlike traditional approaches that separately handle these tasks, our integrated multi-task model leverages YOLOv8’s advanced multi-scale feature extraction and optimized architecture to efficiently perform simultaneous detection, pose estimation, and tracking. Experimental evaluations demonstrate superior performance compared to baseline YOLOv8 configurations, achieving an mAP@0.5 of 57.2%, OKS of 76.1% (COCO dataset), MOTA of 67.1%, and IDF1 of 64.3%. The framework&#39;s robust performance is validated through comprehensive testing under realistic urban scenarios and challenging conditions. By effectively addressing limitations in current autonomous vehicle (AV) perception systems, such as handling occlusions, varying lighting, and dense pedestrian environments, this integrated approach significantly enhances AV safety and navigation reliability at critical junctions and pedestrian crossings.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_19-Unified_Deep_Learning_for_Real_Time_Pedestrian_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Terahertz Spectroscopy for Starch Concentration Prediction in Biofilms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160318</link>
        <id>10.14569/IJACSA.2025.0160318</id>
        <doi>10.14569/IJACSA.2025.0160318</doi>
        <lastModDate>2025-03-31T12:15:07.4200000+00:00</lastModDate>
        
        <creator>Juan-Jesus Garrido-Arismendis</creator>
        
        <creator>Jimy Oblitas</creator>
        
        <creator>Cesar Nino</creator>
        
        <creator>Himer Avila-George</creator>
        
        <creator>Wilson Castro</creator>
        
        <subject>Terahertz spectroscopy; machine learning; chemo-metrics; starch detection; biofilms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Food preservation and safety require advanced detection methods to ensure transparency in supply chains. Terahertz (THz) spectroscopy has emerged as a powerful, non-invasive tool for material characterization. This study explores the integration of THz spectroscopy and machine learning for accurately quantifying maize starch adulteration in bioplastics derived from potato starch. Bioplastic samples with varying concentrations of maize starch were prepared, molded into three different thicknesses, and subjected to a two-stage drying process, resulting in 81 samples (27 treatments with three replicates each). The spectral profiles at THz (0.5 to 2 THz) were recorded and analyzed using three regression models: support vector regression, partial least squares regression, and multiple linear regression. The models were evaluated using the coefficient of determination (R2), Root Mean Square Error (RMSE), and the Residual Predictive Deviation (RPD). The results showed R2 values ranging from 0.7283 to 0.9495, RMSE between 0.0594 and 0.1393, and RPD values from 1.8753 to 4.4479, demonstrating strong predictive performance. These findings highlight the potential of THz spectroscopy and machine learning in the noninvasive detection of starch adulterants in bioplastics, paving the way for future research to enhance model robustness and applicability.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_18-Machine_Learning_Based_Terahertz_Spectroscopy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of IIR Digital Filters Using Differential Evolution: A Comparative Analysis of FDDE and AMECoDEs Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160317</link>
        <id>10.14569/IJACSA.2025.0160317</id>
        <doi>10.14569/IJACSA.2025.0160317</doi>
        <lastModDate>2025-03-31T12:15:07.4030000+00:00</lastModDate>
        
        <creator>Wildor Ferrel Serruto</creator>
        
        <subject>IIR digital filter; differential evolution; FDDE algorithm; AMECoDEs algorithm; triple-passband IIR filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Infinite impulse response (IIR) digital filters are fundamental components in various digital signal processing applications, particularly those requiring optimized use of computational resources, such as memory and processing power. This study presents the design of classical IIR filters, including low-pass, high-pass, band-pass, and band-stop configurations, as well as multiple-passband filters featuring dual and triple passbands. Two differential evolution algorithms are utilized: FDDE (Differential Evolution Algorithm with Fitness and Diversity Ranking-Based Mutation Operator) and AMECoDEs (Adaptive Multiple-Elites-Guided Composite Differential Evolution Algorithm with a Shift Mechanism). To date, no study has investigated the application of the FDDE algorithm to IIR digital filter design, whereas the AMECoDEs algorithm has seen limited application in this context. Consequently, this work investigates the design of IIR filters using these algorithms and assesses their performance based on the mean squared error (MSE). Comparative analysis reveals that, for classical filters, the FDDE algorithm yields a slightly lower MSE in the magnitude response compared to the AMECoDEs algorithm. Conversely, for multiple-passband filters, the AMECoDEs algorithm outperforms FDDE by achieving a lower MSE. In the proposed model, IIR filters are implemented using a cascade structure of second-order sections (SOS), with their fitness function evaluated based on the MSE, computed using a constant weight function within each frequency band. Additionally, the magnitude response characteristics of the designed filters are compared with those of classical and dual-passband filters designed with the AMECoDEs algorithm in recent studies. The results indicate that the filters designed in this study show significant improvements across most evaluated metrics, particularly in terms of improved stopband attenuation. One of the key contributions of this work is the novel application of differential evolution algorithms to the design of triple-passband IIR filters, demonstrating their effectiveness through successful validation on a development board.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_17-Optimization_of_IIR_Digital_Filters_Using_Differential_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bioplastic Thickness Estimation Using Terahertz Time-Domain Spectroscopy and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160316</link>
        <id>10.14569/IJACSA.2025.0160316</id>
        <doi>10.14569/IJACSA.2025.0160316</doi>
        <lastModDate>2025-03-31T12:15:07.3730000+00:00</lastModDate>
        
        <creator>Juan-Jes&#250;s Garrido-Arismendis</creator>
        
        <creator>Luis Juarez</creator>
        
        <creator>Jorge Mogollon</creator>
        
        <creator>Brenda Acevedo-Ju&#225;rez</creator>
        
        <creator>Himer Avila-George</creator>
        
        <creator>Wilson Castro</creator>
        
        <subject>Terahertz spectroscopy; machine learning; chemo-metrics; thickness; bioplastic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In the sustainable packaging industry, multiple parameters require regulation to achieve a high-quality final product that meets contemporary demands. In bioplastic manufacturing, the control of the film thickness is critical because it influences the mechanical properties and other key characteristics. Terahertz time-domain spectroscopy (THz-TDS) has emerged as a promising technology for the non-invasive characterization of polymeric materials. The present study evaluates the integration of THz-TDS with chemometric techniques and machine learning models to predict the thickness of bioplastic samples fabricated from potato and maize starch. Three distinct thickness levels were produced by solution casting, and a spectral analysis was performed in the range of 0.5 to 1.2 THz. Four regression models were developed, including partial least squares regression, support vector regression, binary regression tree, and a feedforward neural network. The performance of the model was assessed using the coefficient of determination (R2), root mean square error (RMSE) and the ratio of performance to deviation (RPD). R2 values ranged from 0.8379 to 0.9757, the RMSE values ranged from 0.1259 to 0.3368, and the RPD values ranged from 2.4399 to 6.8106. These findings underscore the potential of THz-TDS and machine learning for non-invasive analysis of thin polymeric films and lay the groundwork for future research aimed at enhancing reliability and functionality.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_16-Bioplastic_Thickness_Estimation_Using_Terahertz.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Business Intelligence in Public Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160315</link>
        <id>10.14569/IJACSA.2025.0160315</id>
        <doi>10.14569/IJACSA.2025.0160315</doi>
        <lastModDate>2025-03-31T12:15:07.3400000+00:00</lastModDate>
        
        <creator>Javier Benavides-Redhead</creator>
        
        <creator>Jenny Guti&#233;rrez-Flores</creator>
        
        <subject>Business intelligence; municipality; taxation; decision making; indicators; public management; information presentation; technology tool; modernization; productivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The present research seeks to demonstrate the improvement in the visualization of indicators achieved by applying Business Intelligence in the district municipality of Lince. Among its institutional objectives, the entity aims to strengthen the modernization of its administrative and functional systems of institutional management. The research was designed as an applied study with a pre-experimental, quantitative design. The sample comprised 10 users from the Tax Administration Management, to whom a survey-type questionnaire was applied. The data collected in the Pre-Test and Post-Test made it possible to determine a positive relationship with decision-making for tax collection: in the Pre-Test, 50% of scores fell at the low level, whereas the Post-Test yielded 50% at the general level. The investigation showed a meaningful change in decision-making supported by indicators generated by Business Intelligence, with respondents reporting improvements in time, productivity and the presentation of information. Decision-making in direction, control, evaluation and organization was also positively affected, according to respondents' perception of the changes brought by using a business tool to obtain information capable of responding to the institution's decision-making needs, focused on tax collection. The research is structured in six sections. The first details the problematic situation and the justification of the study in relation to the research objectives. The second explains the background and previous research that supports the problematic situation based on the key constructs of the work. The third describes the methodology used, through the quantitative approach, and the fourth presents the results obtained. The fifth section compares what the study achieved with previous studies, and finally the conclusions summarize the achievement of the objectives and the contribution to future research.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_15-Business_Intelligence_in_Public_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Onion as a Network Auditing Tool at the San Crist&#243;bal de Huamanga National University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160314</link>
        <id>10.14569/IJACSA.2025.0160314</id>
        <doi>10.14569/IJACSA.2025.0160314</doi>
        <lastModDate>2025-03-31T12:15:07.2930000+00:00</lastModDate>
        
        <creator>Kimberlly Nena Barraza Tudela</creator>
        
        <creator>Hubner Janampa Patilla</creator>
        
        <subject>Network security; network auditing; Security Onion; IDS; CIS Controls</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In a context of evolving cyber threats, the San Cristobal de Huamanga National University (UNSCH) faces the need to improve its network security infrastructure. This study implements Security Onion as a network auditing tool at this institution with the objective of evaluating its effectiveness in three key areas: security monitoring, log management, and intrusion detection. The study employs an applied, descriptive, and experimental approach to demonstrate that Security Onion is a robust solution for incident detection. It enables comprehensive analysis of network logs and early identification of suspicious activities, providing a holistic view of the network. Based on the results, the study suggests best practices for protecting institutional information and the network, and contributes to understanding Security Onion&#39;s capabilities in similar network infrastructures. Furthermore, it provides a replicable model for other institutions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_14-Security_Onion_as_a_Network_Auditing_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Business Process Management (BPM) Methodology in the Process of Incorporating Human Talent in the Retail Business Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160313</link>
        <id>10.14569/IJACSA.2025.0160313</id>
        <doi>10.14569/IJACSA.2025.0160313</doi>
        <lastModDate>2025-03-31T12:15:07.2470000+00:00</lastModDate>
        
        <creator>Anyela Alanya-Ramos</creator>
        
        <creator>Argenis Moreno-Rosales</creator>
        
        <creator>Luis Acosta-Medina</creator>
        
        <subject>BPM; human talent; incorporation process; process optimization; methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The lack of a well-defined onboarding process for new talent in a retail company specializing in beauty products and accessories for women has generated the need to undertake this research, the objective of which was to evaluate the positive impact that the implementation of business process management (BPM) could generate in this area, whose deficiencies lay in inadequate communication and the lack of appropriate digital tools. The study focused on three key dimensions to understand how this improvement could transform the process of integrating new talent. As a research method, an applied pre-experimental design was chosen, with a quantitative approach. A survey was applied to collect data, using a questionnaire as the measurement instrument. As a result, it was observed that, by following the characteristics and life cycle of the BPM methodological framework, it was necessary to implement digital actions and tools to optimize the process and generate positive impacts in its three dimensions. In addition, there was a 44% increase in the satisfaction and commitment of the participants in the process, a 47% increase in the positive perception of monitoring and tracking the entry of new talent, and a 38% increase in the perception of the distribution of tasks among the actors in the process. In conclusion, the application of the methodology has generated a notable improvement in the process, which has directly contributed to enriching the experience of new talents in the retail company's onboarding process.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_13-Application_of_the_Business_Process_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Web System to Optimize the Quotation Process in the Company KSF Representaciones EIRL, 2022</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160312</link>
        <id>10.14569/IJACSA.2025.0160312</id>
        <doi>10.14569/IJACSA.2025.0160312</doi>
        <lastModDate>2025-03-31T12:15:07.2170000+00:00</lastModDate>
        
        <creator>Betsy Nataly Llacchuarimay-De La Cruz</creator>
        
        <creator>Segundo Alexandher Gutierrez Argomedo</creator>
        
        <creator>Luis Alberto Torres-Cabanillas</creator>
        
        <subject>Web system; optimization; quotation; customer satisfaction; efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This research seeks to demonstrate whether the implementation of a web-based system influences the optimization of activities related to the quoting process, saving time and money for KSF Representaciones EIRL. Therefore, the following question arises: To what extent does the implementation of a web-based system optimize the quoting process? This is an applied, pre-experimental design with a quantitative approach. The population consists of average daily quote records for 24 business days per month. For the convenience sample, an average of 24 quote records from May were used for the pre-test and an average of 24 quote records from June for the post-test, collected using an observation sheet. The results for the quoting process variable show that the application reduces the time to generate quotes. For the second dimension, the application results in a higher percentage of quote fulfillment. In conclusion, the implementation of the web-based system reduced the time to generate quotes by an average of 28 minutes and increased the fulfillment rate of submitted quotes by an average of 89.8%.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_12-Implementation_of_a_Web_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Algorithm-Based Analysis and Compression Integrated Communication Tracking Management Information System (iCTMIS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160311</link>
        <id>10.14569/IJACSA.2025.0160311</id>
        <doi>10.14569/IJACSA.2025.0160311</doi>
        <lastModDate>2025-03-31T12:15:07.1530000+00:00</lastModDate>
        
        <creator>Carlo Jude P. Abuda</creator>
        
        <creator>Ritchell S. Villafuerte</creator>
        
        <subject>Information system; optical character recognition; Lempel-Ziv-welch lossless compression; zlib compression; communication tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study addresses the challenges of administrative tasks and communication tracking at Visayas State University Alangalang (VSUA), highlighting the inefficiencies in the current manual processes. The objective is to develop an Integrated Communication Tracking Management Information System (iCTMIS) that enhances operational efficiency by integrating Optical Character Recognition (OCR) with Lempel-Ziv-Welch (LZW) lossless and Zlib compression algorithms. Employing a developmental research design and the ADDIE model, the system demonstrates improved data analysis and reduced disk-space usage through efficient compression. Significant findings reveal that OCR achieves up to 90% accuracy in text conversion, while LZW compression substantially reduces data sizes. Evaluated against the ISO 9126 Software Quality Characteristics, the iCTMIS was shown to optimize storage and address VSUA&#39;s operational challenges effectively. This research therefore concludes that the systematic integration of advanced algorithmic frameworks in iCTMIS significantly enhances the efficiency of organizational communication and administrative workflows.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_11-Development_of_an_Algorithm_Based_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Usability and User Experience of a Digital Platform for Mental Health Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160310</link>
        <id>10.14569/IJACSA.2025.0160310</id>
        <doi>10.14569/IJACSA.2025.0160310</doi>
        <lastModDate>2025-03-31T12:15:07.1230000+00:00</lastModDate>
        
        <creator>Jerina Jean M. Ecleo</creator>
        
        <creator>Mia Amor C. Tinam-isan</creator>
        
        <creator>Kristine Mae E. Galera</creator>
        
        <creator>Ric Adrian C. Balaton</creator>
        
        <creator>Imelu G. Mordeno</creator>
        
        <creator>Cenie M. Vilela-Malabanan</creator>
        
        <subject>Mental health; usability testing; user experience; mental health assessment; digital platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This study evaluated the usability and user experience of a mental health digital platform among college students. Usability tests were conducted using quantitative measures, user feedback, and direct observations. The user experience evaluation also aimed to gain insights into what works and what does not in the system. A total of 3,396 second-year students participated in the assessment, with university guidance counselors serving as facilitators. Results from the usability test indicated an above-average score among students, suggesting high satisfaction in terms of ease of use, well-integrated functions, and performance. Strengths of the platform identified from users&#8217; feedback include effectiveness and efficiency, ease of use, innovation, organization and structure, and reliability and performance. Further enhancements in functionality, including loading time, usability, readability, language preference, and lengthy questionnaires, were identified as key concerns among respondents. These findings highlight the usability of the platform while also identifying areas for improvement to ensure continuous engagement and a user-friendly experience.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_10-Evaluation_of_the_Usability_and_User_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Simulation of the Energy Flows in a Heat Pipe Solar Collector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160309</link>
        <id>10.14569/IJACSA.2025.0160309</id>
        <doi>10.14569/IJACSA.2025.0160309</doi>
        <lastModDate>2025-03-31T12:15:07.0770000+00:00</lastModDate>
        
        <creator>Boris Evstatiev</creator>
        
        <creator>Nadezhda Evstatieva</creator>
        
        <subject>Model; simulation; heat pipe solar collector; useful power</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The domestic sector is one of the major energy consumers and hot water is a compulsory service in modern society. Therefore, one of the possibilities for reducing energy expenses is heating water using solar collectors. However, the optimization of such installations requires careful planning and preliminary simulations. This study presents a model for simulating the energy flows in a heat pipe solar collector. Unlike previous studies, it also accounts for the self-shading of the vacuum tubes at certain hours of the day. An experimental setup was organized to collect reference data for model validation, and the data was automatically stored in a database by a microcontroller-based electronic system. The modeled and experimental data were compared and a PME of 1.55%, and a PMAE of 16.33% were obtained. The proposed model could be used for simulating the useful power of hybrid hot-water systems under different application scenarios.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_9-A_Model_for_Simulation_of_the_Energy_Flows.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Management Application for Small and Medium-Sized Service-Oriented Enterprises Based on the SECI Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160308</link>
        <id>10.14569/IJACSA.2025.0160308</id>
        <doi>10.14569/IJACSA.2025.0160308</doi>
        <lastModDate>2025-03-31T12:15:07.0600000+00:00</lastModDate>
        
        <creator>Chen Chang</creator>
        
        <creator>Manabu Sawaguchi</creator>
        
        <creator>Yasuaki Mori</creator>
        
        <subject>Knowledge management; Socialization Externalization Combination Internalization (SECI); nail salon; Small and Medium-Sized Enterprises (SMEs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper analyzes the current situation and development bottlenecks of small and medium-sized service industry enterprises using the T nail salon as an example. It emphasizes the importance of knowledge management and proposes establishing a knowledge system within the company that combines both humanistic and technological aspects. The practice of applying the SECI model in the T nail salon also shows that small and medium-sized service-oriented enterprises can use appropriate means, at low cost, to achieve effective knowledge conversion among individuals, teams, organizations, and customers; establish orderly knowledge management; and ultimately improve the quality of enterprise services and competitiveness.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_8-Knowledge_Management_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Protection in JPEG XS: A Lightweight Spatio-Color Scrambling Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160307</link>
        <id>10.14569/IJACSA.2025.0160307</id>
        <doi>10.14569/IJACSA.2025.0160307</doi>
        <lastModDate>2025-03-31T12:15:07.0130000+00:00</lastModDate>
        
        <creator>Takayuki Nakachi</creator>
        
        <creator>Yasuhisa Kato</creator>
        
        <creator>Mitsuru Maruyama</creator>
        
        <subject>JPEG XS; UHD video; Encryption-then-Compression; privacy protection; perceptual scrambling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>This paper presents a lightweight JPEG XS coding scheme incorporating spatio-color scrambling for privacy protection. The proposed approach follows an Encryption-then-Compression (EtC) framework, maintaining compatibility with the JPEG XS standard. Prior to encoding, input images undergo scrambling operations, including line permutation, line reversal, and color permutation. Security analysis indicates that the scrambling technique provides a large key space, making brute-force attacks computationally challenging. Experimental results demonstrate that the proposed method achieves a rate-distortion (RD) performance nearly equivalent to conventional JPEG XS compression while enhancing visual security. Additionally, a rectangular block-based scrambling technique is explored, which offers a trade-off among low latency, reduced memory usage, and visual concealment performance. While real-time processing is possible with or without block-based scrambling, the block-based approach is particularly beneficial for applications that demand lower latency and reduced memory usage. The effectiveness of the proposed method is validated through simulations on 8K ultra-high-definition (UHD) images.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_7-Privacy_Protection_in_JPEG_XS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Insoles for Multi-User Monitoring: A Case Study on Received Signal Strength Indicator-Based Distance Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160306</link>
        <id>10.14569/IJACSA.2025.0160306</id>
        <doi>10.14569/IJACSA.2025.0160306</doi>
        <lastModDate>2025-03-31T12:15:06.9830000+00:00</lastModDate>
        
        <creator>Victor Huilca Cabay</creator>
        
        <creator>Alexandra Flores</creator>
        
        <creator>Paul Hernan Machado Herrera</creator>
        
        <creator>Byron Paul Huera Paltan</creator>
        
        <subject>Internet of Things; RSSI; smart insole; distance; wearables; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>In the current context of widespread adoption of wearables and Internet of Things (IoT) devices, this work develops a smart insole system that measures the distance between users using the Received Signal Strength Indicator (RSSI). ESP32 WROOM microcontrollers with Bluetooth Low Energy, Wi-Fi, and multiple functionalities were used. The prototype includes sensors to count steps and detect activity (walking/running), as well as a configurable alarm that alerts when the distance falls below a threshold. Collected data are sent directly and in real time to a database using the ThingSpeak web platform, which allows the data acquired from the insole sensors to be visualized. Using the RSSI signal provided by the Bluetooth LE module, the response was modeled with a multilayer perceptron (MLP) neural network, achieving an average distance estimation accuracy of 90.89% on data measured in real time.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_6-Smart_Insoles_for_Multi_User_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid AI-Based Risk Assessment Framework for Sustainable Construction: Integrating ANN, Fuzzy Logic, and IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160305</link>
        <id>10.14569/IJACSA.2025.0160305</id>
        <doi>10.14569/IJACSA.2025.0160305</doi>
        <lastModDate>2025-03-31T12:15:06.9030000+00:00</lastModDate>
        
        <creator>Andr&#233; Lu&#237;s Barbosa Gomes G&#243;es</creator>
        
        <creator>Rafaqat Kazmi</creator>
        
        <creator>Aqsa</creator>
        
        <creator>Siddhartha Nuthakki</creator>
        
        <subject>Risk assessment; sustainable construction; artificial neural networks; fuzzy logic; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The construction industry is central to economic growth worldwide, but it faces persistent risk-management problems, especially in sustainable construction projects. Standard risk-management techniques such as AHP and Monte Carlo simulation do not afford the flexibility and accuracy needed on construction sites. To address these limitations, this study offers a new risk-assessment system that combines Artificial Neural Networks (ANN), Fuzzy Logic, and Internet of Things (IoT) technologies. Real-time IoT sensor data and historical project data are integrated into an adaptive system that can identify, predict, and mitigate potential risks for improved decision making. The ANN component excels at pattern recognition and risk prediction, while Fuzzy Logic brings interpretability and reasoning under uncertainty. Live IoT data are processed and updated continuously as the devices and their environment change. Experimental evaluation supports the framework's effectiveness: it achieves an accuracy of 92.7% while reducing project delays and costs. The results reveal that the presented framework is highly resistant to noise and that its performance degrades only gradually as project requirements change. This integrative approach provides a comprehensive solution for sustainable construction risk management, supporting the development of safer, more efficient, and environmentally friendly construction techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_5-A_Hybrid_AI_Based_Risk_Assessment_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Engagement and Teaching Innovations for Deep Learning and Retention in Education: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160304</link>
        <id>10.14569/IJACSA.2025.0160304</id>
        <doi>10.14569/IJACSA.2025.0160304</doi>
        <lastModDate>2025-03-31T12:15:06.8570000+00:00</lastModDate>
        
        <creator>Samer Alhebaishi</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Mohammed Ameen</creator>
        
        <subject>Emotional engagement; pedagogical recall; long-term knowledge retention; augmented reality in education; blended learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The goal of this examination is to identify key factors that enhance educational settings through innovative teaching methods and the integration of technology, emphasizing the transformative role of digital tools, particularly in mathematics and science education, and their impact on student engagement, problem-solving skills, and conceptual understanding. The increasing digitalization of education necessitates the adoption of pedagogical strategies that enhance both cognitive and emotional engagement, ensuring students develop critical thinking and long-term knowledge retention skills. Various educational theories, including Behaviorism, Cognitivism, Constructivism, and Social Learning Theory, are analyzed to demonstrate their relevance in both traditional and online learning environments. Emotional engagement is explored as a crucial element in learning, focusing on its connection to memory retention and cognitive development. Pedagogical recall is highlighted as essential for optimizing long-term knowledge retention, particularly in online and blended learning environments, while the effectiveness of different teaching strategies in fostering deep learning and sustaining knowledge over time is evaluated. The findings advocate for a holistic educational approach that integrates both cognitive and emotional factors, leveraging technological advancements and innovative pedagogical methods to create inclusive, adaptive, and effective learning environments. Continuous pedagogical evolution is necessary to address emerging educational challenges and enhance student success in an increasingly digitalized academic landscape.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_4-Emotional_Engagement_and_Teaching_Innovations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel System for Managing Encrypted Data Using Searchable Encryption Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160303</link>
        <id>10.14569/IJACSA.2025.0160303</id>
        <doi>10.14569/IJACSA.2025.0160303</doi>
        <lastModDate>2025-03-31T12:15:06.8400000+00:00</lastModDate>
        
        <creator>Vijay Govindarajan</creator>
        
        <subject>Cloud service providers; encrypting; security; web-based platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>The motivation for this study arises from the insufficient security measures provided by cloud service providers, particularly with regard to data integrity and confidentiality. In today’s digital landscape, nearly every international organization stores data in the cloud, whether through in-house servers or third-party providers. While encrypting data prior to storage addresses certain security concerns, it does not fully resolve the issue. Specifically, how can a server effectively process or search the data without decrypting it? This challenge is addressed by the concept of searchable encryption. Therefore, the objective of this study is to implement and evaluate a contemporary set of searchable encryption algorithms within a web-based platform. The study includes a comprehensive performance analysis of the implemented algorithms and an evaluation of the system based on their statistical outcomes. In doing so, this study aims to contribute to the advancement of secure and efficient methods for managing encrypted data in cloud environments. The study evaluates an image search system using the FAST protocol, achieving an average search time of 28.696 ms per image and an average deletion time of 0.557 seconds. While slower than FAST’s benchmarks due to limited computational resources and additional processing steps, the system demonstrated reliable performance within its constraints. These results highlight the trade-offs between security, functionality, and performance, offering valuable insights for future optimizations in resource-constrained environments.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_3-A_Novel_System_for_Managing_Encrypted_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Identity for Zero Trust and Segmented Access Control: A Novel Approach to Securing Network Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160302</link>
        <id>10.14569/IJACSA.2025.0160302</id>
        <doi>10.14569/IJACSA.2025.0160302</doi>
        <lastModDate>2025-03-31T12:15:06.7800000+00:00</lastModDate>
        
        <creator>Sina Ahmadi</creator>
        
        <subject>Distributed identity; ZTA; DID; VC; lateral movement; privacy; credential security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Distributed identity marks the transition from centralized identity to Decentralized Identifiers (DID) and Verifiable Credentials (VC) for secure, privacy-preserving authentication. With distributed identity, identity data is brought back under the control of the user, removing the single point of failure presented by centralized credential stores and thereby preventing credential-based attacks. This study evaluates the security improvements that distributed identity brings to Zero Trust Architecture (ZTA), especially with respect to lateral movement within segmented networks. Furthermore, it discusses the implementation specifics of the framework, the benefits and disadvantages of the method for organizations, and compatibility and generalizability issues. The study also considers privacy and regulatory concerns, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), along with possible solutions. Results show that by integrating distributed identity into ZTA, unauthorized lateral movement is reduced by approximately 65%, authentication security is increased by 78% relative to traditional approaches, and credential compromise through phishing attacks is prevented more than 80% of the time. GDPR and CCPA compliance are also bolstered by the increased user control over identity data.
The findings indicate that incorporating distributed identities can greatly improve the overall security posture by promoting contextual, least-privilege authorization while protecting user privacy. The research suggests that technical standards be refined, that distributed identity be expanded into practice, and that its place in the current digital security landscape be further discussed.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_2-Distributed_Identity_for_Zero_Trust_and_Segmented_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Learning-Driven Privacy-Preserving Framework for Decentralized Data Analysis and Anomaly Detection in Contract Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160301</link>
        <id>10.14569/IJACSA.2025.0160301</id>
        <doi>10.14569/IJACSA.2025.0160301</doi>
        <lastModDate>2025-03-31T12:15:06.7170000+00:00</lastModDate>
        
        <creator>Raj Sonani</creator>
        
        <creator>Vijay Govindarajan</creator>
        
        <creator>Pankaj Verma</creator>
        
        <subject>Federated learning; privacy preservation; clause classification; compliance validation; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(3), 2025</description>
        <description>Contract review is a critical legal task that involves several processes, such as compliance validation, clause classification, and anomaly detection. Traditional centralized models for contract analysis raise significant data privacy and compliance challenges due to the highly sensitive nature of legal documents. This paper proposes a contract review-oriented federated learning framework in which model training is performed in a completely decentralized way while preserving data confidentiality. It leverages privacy-preserving methods such as Differential Privacy (“DP”) and Secure Multi-Party Computation (“SMPC”) that protect sensitive information during collaborative learning. The proposed framework reaches a clause classification accuracy of 94.2% while meeting privacy requirements. A training-efficiency analysis revealed that the federated model required 13.1 hours versus 10.4 hours for a centralized model, while still protecting the security of the system. This research offers a scalable and secure approach to contract review and a path forward for privacy-conscious AI-driven legal solutions.</description>
        <description>http://thesai.org/Downloads/Volume16No3/Paper_1-Federated_Learning_Driven_Privacy_Preserving_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Optimization of RPL-IoT Protocol Using ML Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602135</link>
        <id>10.14569/IJACSA.2025.01602135</id>
        <doi>10.14569/IJACSA.2025.01602135</doi>
        <lastModDate>2025-02-28T14:35:55.5130000+00:00</lastModDate>
        
        <creator>Mansour Lmkaiti</creator>
        
        <creator>Ibtissam Larhlimi</creator>
        
        <creator>Maryem Lachgar</creator>
        
        <creator>Houda Moudni</creator>
        
        <creator>Hicham Mouncif</creator>
        
        <subject>IoT; RPL; machine learning; routing efficiency; energy consumption; expected transmission count; network optimization; Artificial Intelligence (AI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study explores the transformative potential of machine learning (ML) algorithms in optimizing the Routing Protocol for Low-Power and Lossy Networks (RPL), addressing critical challenges in Internet of Things (IoT) networks, such as Expected Transmission Count (ETX), latency, and energy consumption. The research evaluates the performance of Random Forest, Gradient Boosting, Artificial Neural Networks (ANNs), and Q-Learning across IoT network simulations with varying scales (50, 100, and 150 nodes). Results indicate that tree-based models, particularly Random Forest and Gradient Boosting, demonstrate robust predictive capabilities for ETX and latency, achieving consistent results in smaller and medium-sized networks. Specifically, for 50-node networks, Neural Networks achieved the best performance with the lowest latency (2.43862 ms) and the best ETX (5.29557), despite slightly higher energy consumption. For 100-node networks, Q-Learning stood out with the lowest energy consumption (1.62973 J) and competitive ETX (2.70647), though at the cost of increased latency. In 150-node networks, Q-Learning again outperformed other models, achieving the lowest latency (0.68 ms) and energy consumption (2.21 J), though at the cost of higher ETX. Neural Networks excel in capturing non-linear dependencies but face limitations in energy-related metrics, while Q-Learning adapts dynamically to network changes, achieving remarkable latency reductions at the cost of transmission efficiency. The findings highlight key trade-offs between performance metrics and emphasize the need for algorithmic strategies tailored to specific IoT applications. This work not only validates the scalability and adaptability of ML approaches but also lays the foundation for intelligent, efficient IoT network optimization and future advancements in sustainable and scalable IoT networks.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_135-Advanced_Optimization_of_RPL_IoT_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Retrieval-Augmented Generation in Quranic Studies: A Study of 13 Open-Source Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602134</link>
        <id>10.14569/IJACSA.2025.01602134</id>
        <doi>10.14569/IJACSA.2025.01602134</doi>
        <lastModDate>2025-02-28T06:26:04.5630000+00:00</lastModDate>
        
        <creator>Zahra Khalila</creator>
        
        <creator>Arbi Haza Nasution</creator>
        
        <creator>Winda Monika</creator>
        
        <creator>Aytug Onan</creator>
        
        <creator>Yohei Murakami</creator>
        
        <creator>Yasir Bin Ismail Radi</creator>
        
        <creator>Noor Mohammad Osmani</creator>
        
        <subject>Large-language-models; retrieval-augmented generation; question answering; Quranic studies; Islamic teachings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Accurate and contextually faithful responses are critical when applying large language models (LLMs) to sensitive and domain-specific tasks, such as answering queries related to Quranic studies. General-purpose LLMs often struggle with hallucinations, where generated responses deviate from authoritative sources, raising concerns about their reliability in religious contexts. This challenge highlights the need for systems that can integrate domain-specific knowledge while maintaining response accuracy, relevance, and faithfulness. In this study, we investigate 13 open-source LLMs categorized into large (e.g., Llama3:70b, Gemma2:27b, QwQ:32b), medium (e.g., Gemma2:9b, Llama3:8b), and small (e.g., Llama3.2:3b, Phi3:3.8b) models. Retrieval-Augmented Generation (RAG) is used to compensate for the limitations of standalone models. This research utilizes a descriptive dataset of Quranic surahs, including the meanings, historical context, and qualities of the 114 surahs, allowing the model to gather relevant knowledge before responding. The models are evaluated using three key metrics scored by human evaluators: context relevance, answer faithfulness, and answer relevance. The findings reveal that large models consistently outperform smaller models in capturing query semantics and producing accurate, contextually grounded responses. Despite its small size, the Llama3.2:3b model scores highly on faithfulness (4.619) and relevance (4.857), showing the promise of well-optimized smaller architectures. This article examines the trade-offs between model size, computational efficiency, and response quality when using LLMs in domain-specific applications.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_134-Investigating_Retrieval_Augmented_Generation_in_Quranic_Studies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Fuzzy Deep Learning for Plant Disease Detection to Boost the Agricultural Economic Growth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602133</link>
        <id>10.14569/IJACSA.2025.01602133</id>
        <doi>10.14569/IJACSA.2025.01602133</doi>
        <lastModDate>2025-02-28T06:26:04.5170000+00:00</lastModDate>
        
        <creator>Mohammad Abrar</creator>
        
        <subject>Deep learning; plant disease; fuzzy deep learning; agricultural production</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Plant disease detection is a crucial technology for ensuring agricultural productivity and sustainability. However, traditional methods tend to fail because they do not handle imprecise and uncertain data satisfactorily. We propose the Enhanced Fuzzy Deep Neural Network (EFDNN), which integrates fuzzy logic with deep neural networks, and assess its economic impact on agricultural productivity when applied to plant disease detection. Data for the research framework were collected from remote sensing and economic sources. Data preprocessing, namely normalization and feature extraction, ensured high-quality inputs. Deep Belief Networks (DBNs) were used to pretrain the EFDNN model, which was then fine-tuned with supervised learning. The model was evaluated using accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC-ROC), and compared against baseline models: convolutional neural networks (CNNs), traditional DNNs, and fuzzy neural networks (FNNs). The EFDNN model achieved 95.2% accuracy, 94.8% precision, 95.6% recall, and 0.978 AUC-ROC on plant disease detection, exceeding the accuracy of CNNs (92.3%), traditional DNNs (89.7%), and FNNs (90.4%). The economic analysis calculated a 14.3% reduction in pesticide use and an increase in crop yield worth USD 120 per acre, leading to higher farmer revenues. The EFDNN model is an effective enhancement to plant disease detection that offers economic and agricultural benefits, validating the potential of combining fuzzy logic with deep learning to enhance the performance and sustainability of agricultural practices.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_133-Enhanced_Fuzzy_Deep_Learning_for_Plant_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Effective Anomaly Detection: Machine Learning Solutions in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602132</link>
        <id>10.14569/IJACSA.2025.01602132</id>
        <doi>10.14569/IJACSA.2025.01602132</doi>
        <lastModDate>2025-02-28T06:26:04.5030000+00:00</lastModDate>
        
        <creator>Hussain Almajed</creator>
        
        <creator>Abdulrahman Alsaqer</creator>
        
        <creator>Abdullah Albuali</creator>
        
        <subject>Anomaly; cloud; machine learning; detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Cloud computing has transformed modern Information Technology (IT) infrastructures with its scalability and cost-effectiveness, but it introduces significant security risks. Moreover, existing anomaly detection techniques are not well equipped to deal with the complexities of dynamic cloud environments. This systematic literature review surveys the advancements in Machine Learning (ML) solutions for anomaly detection in cloud computing. The study categorizes ML approaches, examines the datasets and evaluation metrics utilized, and discusses their effectiveness and limitations. We analyze supervised, unsupervised, and hybrid ML models, showing their advantages in dealing with particular threat vectors. The review also discusses how advanced feature engineering, ensemble learning, and real-time adaptability can improve detection accuracy and reduce false positives. Key challenges, such as dataset diversity and computational efficiency, are highlighted, along with future research directions to improve ML-based anomaly detection for robust and adaptive cloud security. Hybrid approaches are found to increase accuracy, reaching up to 99.85%, and to reduce the number of false positives. This review provides a comprehensive guide for researchers aiming to enhance anomaly detection in cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_132-Towards_Effective_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temperature Prediction for Photovoltaic Inverters Using Particle Swarm Optimization-Based Symbolic Regression: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602131</link>
        <id>10.14569/IJACSA.2025.01602131</id>
        <doi>10.14569/IJACSA.2025.01602131</doi>
        <lastModDate>2025-02-28T06:26:04.4870000+00:00</lastModDate>
        
        <creator>Fabian Alonso Lara-Vargas</creator>
        
        <creator>Jesus Aguila-Leon</creator>
        
        <creator>Carlos Vargas-Salgado</creator>
        
        <creator>Oscar J. Suarez</creator>
        
        <subject>Particle swarm optimization; photovoltaic inverters; multiple linear regression; symbolic regression; temperature prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Accurate temperature modeling is crucial for maintaining the efficiency and reliability of solar inverters. This paper presents an innovative application of symbolic regression based on particle swarm optimization (PSO) for predicting the temperature of photovoltaic inverters, offering a novel approach that balances accuracy and computational efficiency. The study evaluates the performance of a PSO-based symbolic regression model compared to multiple linear regression (MLR) and a symbolic regression model based on genetic algorithms (GA). The models were developed using a dataset that included inverter temperature, active power, and DC bus voltage, collected over a year in hourly intervals from a rooftop photovoltaic system in a tropical region. The dataset was divided, with 70% used for training and the remaining 30% for testing. The symbolic regression model based on PSO demonstrated superior performance, achieving lower values of the root mean square error (RMSE) and mean absolute error (MAE) of 3.97 and 3.31, respectively. Furthermore, the PSO-based model effectively captured the nonlinear relationships between variables, outperforming the MLR model. It also exhibited greater computational efficiency, requiring fewer iterations than traditional symbolic regression approaches. These findings open new possibilities for real-time monitoring of photovoltaic inverters and suggest future research directions, such as generalizing the PSO model to different environmental conditions and inverter types.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_131-Temperature_Prediction_for_Photovoltaic_Inverters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chinese Relation Extraction with External Knowledge-Enhanced Semantic Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602130</link>
        <id>10.14569/IJACSA.2025.01602130</id>
        <doi>10.14569/IJACSA.2025.01602130</doi>
        <lastModDate>2025-02-28T06:26:04.4570000+00:00</lastModDate>
        
        <creator>Shulin Lv</creator>
        
        <creator>Xiaoyao Ding</creator>
        
        <subject>Chinese relation extraction; knowledge graph; external knowledge; semantic understanding; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Relation extraction is the foundation of constructing knowledge graphs, and Chinese relation extraction is a particularly challenging aspect of this task. Most existing methods for Chinese relation extraction rely either on character-based or word-based features. However, the former struggles to capture contextual information between characters, while the latter is constrained by the quality of word segmentation, resulting in relatively low performance. To address this issue, a Chinese relation extraction model enhanced with external knowledge for semantic understanding is proposed. This model leverages external knowledge to improve semantic understanding in the text, thereby enhancing the performance of relation prediction between entity pairs. The approach consists of three main steps: first, the ERNIE pre-trained language model is used to convert textual information into dynamic word embeddings; second, an attention mechanism is employed to enrich the semantic representation of sentences containing entities, while external knowledge is used to mitigate the ambiguity of Chinese entity words as much as possible; and finally, the semantic representation enhanced with external knowledge is used as input for classification to make predictions. Experimental results demonstrate that the proposed model outperforms existing methods in Chinese relation extraction and offers better interpretability.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_130-Chinese_Relation_Extraction_with_External_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Internet of Medical Things: An Advanced Federated Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602129</link>
        <id>10.14569/IJACSA.2025.01602129</id>
        <doi>10.14569/IJACSA.2025.01602129</doi>
        <lastModDate>2025-02-28T06:26:04.4100000+00:00</lastModDate>
        
        <creator>Anass Misbah</creator>
        
        <creator>Anass Sebbar</creator>
        
        <creator>Imad Hafidi</creator>
        
        <subject>Internet of Medical Things (IoMT); federated learning; machine learning; security; intrusion detection systems; decentralized framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The Internet of Medical Things (IoMT) is transforming healthcare through extensive automation, data collection, and real-time communication among interconnected devices. However, this rapid expansion introduces significant security vulnerabilities that traditional centralized solutions or device-level protections often fail to adequately address due to challenges related to latency, scalability, and resource constraints. This study presents a novel federated learning (FL) framework tailored for IoMT security, incorporating techniques such as stacking, federated dynamic averaging, and active user participation to decentralize and enhance attack classification at the edge. Utilizing the CICIoMT2024 dataset, which encompasses 18 attack classes and 45 features, we deploy Random Forest (RF), AdaBoost, Support Vector Machine (SVM), and Deep Learning (DL) models across 10 simulated edge devices. Our federated approach effectively distributes computational loads, mitigating the strain on central servers and individual devices, thereby enhancing adaptability and resource efficiency within IoMT networks. The RF model achieves the highest accuracy of 99.22%, closely followed by AdaBoost, demonstrating the feasibility of FL for robust and scalable edge security. While this study validates the proposed framework using a single realistic dataset in a controlled environment, future work will explore additional datasets and real-world scenarios to further substantiate the generalization and effectiveness of the approach. This research underscores the potential of federated learning to address the unique security and computational constraints of IoMT, paving the way for practical, decentralized deployments that strengthen device-level defenses across diverse healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_129-Securing_Internet_of_Medical_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning in Heart Murmur Detection: Analyzing the Potential of FCNN vs. Traditional Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602128</link>
        <id>10.14569/IJACSA.2025.01602128</id>
        <doi>10.14569/IJACSA.2025.01602128</doi>
        <lastModDate>2025-02-28T06:26:04.3770000+00:00</lastModDate>
        
        <creator>Hajer Sayed Hussein</creator>
        
        <creator>Hussein AlBazar</creator>
        
        <creator>Roxane Elias Mallouhy</creator>
        
        <creator>Fatima Al-Hebshi</creator>
        
        <subject>Heart murmur detection; machine learning; deep learning; cardiovascular diagnostics; artificial intelligence; PhysioNet dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This research investigates the performance of machine learning and deep learning models in detecting heart murmurs from audio recordings. Using the PhysioNet Challenge 2016 dataset, we compare several traditional machine learning models—Support Vector Machine, Random Forest, AdaBoost, and Decision Tree—with a Fully Convolutional Neural Network (FCNN). The findings indicate that while traditional models achieved accuracies between 0.85 and 0.89, they faced challenges with data complexity and maintaining a balance between precision and recall. Ensemble methods such as Random Forest and AdaBoost demonstrated improved robustness but were still outperformed by deep learning approaches. The FCNN model, leveraging artificial intelligence, significantly outperformed all other models, achieving an accuracy of 0.99 with a precision of 0.94 and a recall of 0.96. These results highlight the potential of AI-driven cardiovascular diagnostics, as deep learning models exhibit superior capability in identifying intricate patterns in heart sound data. Our findings suggest that deep learning models offer substantial advantages in medical diagnostics, particularly for cardiovascular diagnostics, by providing scalable and highly accurate tools for heart murmur detection. Future work should focus on improving model interpretability and expanding dataset diversity to facilitate broader adoption in clinical settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_128-Deep_Learning_in_Heart_Murmur_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Attentive Convolutional Autoencoder (HACA) Framework for Enhanced Epileptic Seizure Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602127</link>
        <id>10.14569/IJACSA.2025.01602127</id>
        <doi>10.14569/IJACSA.2025.01602127</doi>
        <lastModDate>2025-02-28T06:26:04.3470000+00:00</lastModDate>
        
        <creator>Venkata Narayana Vaddi</creator>
        
        <creator>Madhu Babu Sikha</creator>
        
        <creator>Prakash Kodali</creator>
        
        <subject>Epileptic seizure detection; EEG; hybrid attentive convolutional autoencoder; attention mechanism; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Epilepsy, a prevalent neurological disorder, requires accurate and efficient seizure detection for timely intervention. This study presents a Hybrid Attentive Convolutional Autoencoder (HACA) framework designed to address challenges in EEG signal processing for seizure detection. The proposed method integrates signal reconstruction, innovative feature extraction, and attention mechanisms to focus on seizure-critical patterns. Compared to conventional CNN- and RNN-based approaches, HACA demonstrates superior performance by enhancing feature representation and reducing redundant computations. The proposed HACA framework achieved 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity on the CHB-MIT dataset. Moreover, the training time is reduced by 40%, which makes the model more relevant for real-time applications and portable seizure monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_127-A_Novel_Hybrid_Attentive_Convolutional_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Machine-Aided Learning in College English Education: Computational Approaches for Enhancing Student Outcomes and Pedagogical Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602126</link>
        <id>10.14569/IJACSA.2025.01602126</id>
        <doi>10.14569/IJACSA.2025.01602126</doi>
        <lastModDate>2025-02-28T06:26:04.3130000+00:00</lastModDate>
        
        <creator>Danxia Zhu</creator>
        
        <subject>Machine learning; natural language processing; computational intelligence; data analytics; pedagogy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The integration of machine-aided learning into college English education offers transformative potential for enhancing teaching and learning outcomes. This paper investigates the application of computational models, including machine learning algorithms and natural language processing tools, to optimize pedagogical practices and improve student performance. A series of experiments were conducted to evaluate the effectiveness of machine-aided learning in various aspects of English language education. The study focuses on six key parameters: 1) student test scores, 2) learning engagement, 3) learning time efficiency, 4) language proficiency, 5) student retention, and 6) teacher workload. The results demonstrate significant improvements across these parameters: a 25% increase in student test scores, a 30% improvement in overall learning engagement, a 20% reduction in learning time for complex language tasks, a 15% enhancement in language proficiency, a 10% increase in student retention, and a 5% reduction in teacher workload. These findings underscore the potential of machine-aided learning to reshape college English education by promoting personalized, data-driven learning environments. This paper provides valuable insights for educators, researchers, and policymakers aiming to harness the power of computational methods in educational settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_126-Leveraging_Machine_Aided_Learning_in_College_English_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Graph Path-Enhanced RAG for Intelligent Residency Q&amp;A</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602125</link>
        <id>10.14569/IJACSA.2025.01602125</id>
        <doi>10.14569/IJACSA.2025.01602125</doi>
        <lastModDate>2025-02-28T06:26:04.2670000+00:00</lastModDate>
        
        <creator>Jian Zhu</creator>
        
        <creator>Huajun Zhang</creator>
        
        <creator>Jianpeng Da</creator>
        
        <creator>Hanbing Huang</creator>
        
        <creator>Chongxin Luo</creator>
        
        <creator>Xu Peng</creator>
        
        <subject>Retrieval-augmented generation; path search; knowledge graph; household registration policy vertical field</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>As the demand for efficient information retrieval in specialized domains continues to rise, vertical domain question-answering systems play an increasingly important role in addressing domain-specific knowledge needs. This paper proposes a retrieval-augmented generation method that integrates path search in knowledge graphs to enhance intelligent question-answering systems for professional information retrieval. The proposed approach leverages fine-tuned large language models to identify entities and extract relations from user queries, combining a pruned marker method with a shortest-path generation tree algorithm to efficiently retrieve relevant information. The retrieval results are then integrated with user queries using prompt engineering to generate precise and contextually relevant answers. To validate the practicality of the proposed method, this paper develops a knowledge graph encompassing policies, regulations, and social services within the household registration vertical domain. The experimental results within this vertical domain reveal that the proposed method significantly outperforms existing methods in terms of evaluation metrics such as BLEU, ROUGE, and METEOR, achieving improvements exceeding 3%. Furthermore, ablation experiments validate the importance of combining path search algorithms with fine-tuning techniques in enhancing the question-answering performance.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_125-Knowledge_Graph_Path_Enhanced_RAG_for_Intelligent_Residency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach for Nepali Image Captioning and Speech Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602124</link>
        <id>10.14569/IJACSA.2025.01602124</id>
        <doi>10.14569/IJACSA.2025.01602124</doi>
        <lastModDate>2025-02-28T06:26:04.2370000+00:00</lastModDate>
        
        <creator>Sagar Sharma</creator>
        
        <creator>Samikshya Chapagain</creator>
        
        <creator>Sachin Acharya</creator>
        
        <creator>Sanjeeb Prasad Panday</creator>
        
        <subject>Image captioning; speech generation; image-to-speech generation; deep learning; BLEU score; HiFiGaN; TTS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This article introduces a novel approach for image-to-speech generation that aims to convert images into textual captions along with spoken descriptions in the Nepali language using deep learning techniques. By leveraging computer vision and natural language processing, the system analyzes images, extracts features, generates human-readable captions, and produces intelligible speech output. The experimentation utilizes a state-of-the-art transformer architecture for image caption generation, complemented by ResNet and EfficientNet as feature extractors. The BLEU score is used as an evaluation metric for generated captions. The BLEU scores obtained for BLEU-1, BLEU-2, BLEU-3, and BLEU-4 n-grams are 0.4852, 0.2952, 0.181, and 0.113, respectively. A pretrained HiFi-GAN (vocoder) and Tacotron 2 are used for text-to-speech synthesis. The proposed approach contributes to the underexplored domain of Nepali-language AI applications, aiming to improve accessibility and technological inclusivity for the Nepali-speaking population.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_124-A_Deep_Learning_Approach_for_Nepali_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Deep Semantics for Sparse Recommender Systems (LDS-SRS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602123</link>
        <id>10.14569/IJACSA.2025.01602123</id>
        <doi>10.14569/IJACSA.2025.01602123</doi>
        <lastModDate>2025-02-28T06:26:04.2070000+00:00</lastModDate>
        
        <creator>Adel Alkhalil</creator>
        
        <subject>LDA-2-Vec technique; content representation; topic-based modeling; probabilistic matrix decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Recommender Systems (RS) provide personalized suggestions to users by filtering through vast amounts of similar data, including media content, e-commerce platforms, and social networks. Traditional RS methods encounter significant challenges. Collaborative Filtering (CF) is hindered by the lack of sufficient user-product engagement data, while Content-Based Filtering (CBF) depends extensively on feature extraction techniques to describe the items, which requires an understanding of both the contextual and semantic relevance of the content. To address the sparsity issue, various matrix factorization methods have been developed, often incorporating pre-processed auxiliary information. However, existing feature extraction techniques generally fail to capture both the semantic richness and topic-level insights of textual data. This paper introduces a novel hybrid recommendation system called Leveraging Deep Semantics for Sparse Recommender Systems (LDS-SRS). The model leverages the semantic features from item descriptions and incorporates topic-specific data to effectively tackle the challenges posed by data sparsity. By extracting embeddings that capture the deep semantics of textual content—such as reviews, summaries, comments, and narratives—and embedding them into Probabilistic Matrix Factorization (PMF), the framework significantly alleviates data sparsity. The LDS-SRS framework is also computationally efficient, offering low deployment time and complexity. Experimental evaluations conducted on publicly available datasets, such as AIV (Amazon Instant Video) and MovieLens (1 Million &amp; 10 Million), demonstrate the exceptional ability of the method to handle sparse user-to-item ratings, outperforming existing leading methods.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_123-Leveraging_Deep_Semantics_for_Sparse_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Osprey Optimization Algorithm-Based Resource Allocation in Fog-IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602122</link>
        <id>10.14569/IJACSA.2025.01602122</id>
        <doi>10.14569/IJACSA.2025.01602122</doi>
        <lastModDate>2025-02-28T06:26:04.1730000+00:00</lastModDate>
        
        <creator>Nagarjun E</creator>
        
        <creator>Dharamendra Chouhan</creator>
        
        <creator>Dilip Kumar S M</creator>
        
        <subject>Fog computing; IoT; resource allocation and reallocation; task allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Fog Computing (FC) paradigm offers significant potential for hosting diverse delay-sensitive Internet of Things (IoT) applications. However, the limited resources of fog devices pose significant challenges for deploying multiple applications, particularly in heterogeneous and dynamic IoT scenarios, due to the absence of effective mechanisms for resource estimation and discovery. An efficient resource allocation strategy is crucial for meeting the Quality of Service (QoS) requirements of IoT applications while enhancing overall system performance. Identifying the optimal allocation strategy for IoT applications with multiple QoS parameters is a complex and computationally intensive challenge, classified as an NP-complete problem. This paper proposes a Multi-Objective Optimization Algorithm (MOOA) for optimal resource allocation using the Osprey Optimization Algorithm (OOA) to efficiently allocate available resources. The proposed algorithm was evaluated against existing approaches, including the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), under varying task loads ranging from 100 to 500 tasks. The simulation results demonstrate significant performance improvements, including an average reduction in execution time by 12.45% compared to PSO and 22.97% compared to GA, response time by 32.57% compared to GA and 24.45% compared to PSO, and completion time by 44.39% compared to GA and 33.23% compared to PSO. These findings highlight the proposed algorithm’s ability to efficiently handle task allocation in dynamic FC environments and its potential to address complex QoS requirements in real-world IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_122-Multi_Objective_Osprey_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Construction and Application of Gardens: Optimizing Design and Sustainability with Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602121</link>
        <id>10.14569/IJACSA.2025.01602121</id>
        <doi>10.14569/IJACSA.2025.01602121</doi>
        <lastModDate>2025-02-28T06:26:04.1430000+00:00</lastModDate>
        
        <creator>Jingyi Wang</creator>
        
        <creator>Yan Song</creator>
        
        <creator>Haozhong Yang</creator>
        
        <creator>Han Li</creator>
        
        <creator>Minglan Zou</creator>
        
        <subject>Artificial intelligence; machine learning; construction and application of garden design; convolutional neural network; VGG16; InceptionV3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Artificial intelligence (AI) integration into environmental analysis has revolutionized various fields, including the construction and application of gardens, by enabling precise classification and decision-making for sustainable practices. This paper presents a robust AI-driven framework that uses a convolutional neural network (CNN) and pretrained models such as VGG16 and InceptionV3 to classify eight distinct environmental classes. The CNN achieved superior performance among the tested models, reaching an impressive 98% accuracy with optimized batch sizes, demonstrating its effectiveness for precise environmental condition classification. This work highlights the crucial role of AI in advancing the construction and application of gardens and offers insights into optimizing garden design through accurate environmental data analysis. The diverse dataset used ensures the framework’s adaptability to real-world applications, making it a valuable resource for sustainable development and eco-friendly design strategies. This paper not only contributes to the field of AI-driven environmental analysis but also provides a foundation for future innovations in garden management and sustainability, paving the way for intelligent solutions in the evolving landscape of ecological design.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_121-AI_Driven_Construction_and_Application_of_Gardens.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Two-Step Fine-Tuned Abstractive Summarization for Low-Resource Language Using Transformer T5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602120</link>
        <id>10.14569/IJACSA.2025.01602120</id>
        <doi>10.14569/IJACSA.2025.01602120</doi>
        <lastModDate>2025-02-28T06:26:04.1130000+00:00</lastModDate>
        
        <creator>Salhazan Nasution</creator>
        
        <creator>Ridi Ferdiana</creator>
        
        <creator>Rudy Hartanto</creator>
        
        <subject>Abstractive summarization; low-resource language; Transformer T5; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study explores the potential of two-step fine-tuning for abstractive summarization in a low-resource language, focusing on Indonesian. Leveraging the Transformer-T5 model, the research investigates the impact of transfer learning across two tasks: machine translation and text summarization. Four configurations were evaluated, ranging from zero-shot to two-step fine-tuned models. The evaluation, conducted using the ROUGE metric, shows that the two-step fine-tuned model (T5-MT-SUM) achieved the best performance, with ROUGE-1: 0.7126, ROUGE-2: 0.6416, and ROUGE-L: 0.6816, outperforming all baselines. These findings demonstrate the effectiveness of task transferability in improving abstractive summarization performance for low-resource languages like Indonesian. This study provides a pathway for advancing natural language processing (NLP) in low-resource languages through two-step transfer learning.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_120-Towards_Two_Step_Fine_Tuned_Abstractive_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of an Adaptive Decision Support System Framework for Outcome-Based Blended Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602119</link>
        <id>10.14569/IJACSA.2025.01602119</id>
        <doi>10.14569/IJACSA.2025.01602119</doi>
        <lastModDate>2025-02-28T06:26:04.0800000+00:00</lastModDate>
        
        <creator>Rahimah Abd Halim</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Noraida Hj Ali</creator>
        
        <creator>Anuar Abu Bakar</creator>
        
        <creator>Hamimah Ujir</creator>
        
        <subject>Learner needs; adaptive learning; blended learning; fuzzy delphi method; decision support system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The Adaptive Decision Support System Learning Framework (A-DSS-LF) was developed to address diverse learner needs in blended learning environments by integrating learning styles, cognitive levels, practical skills, and value practices. This study validates the framework using the Fuzzy Delphi Method (FDM), a consensus-building tool that synthesizes expert opinions and addresses uncertainties in subjective judgments. A panel of 15 experts evaluated the framework’s constructs: Learning Process, Learning Assessment, Decision Support System, and Adaptive Learning Profile. All constructs met the FDM’s consensus criterion, achieving threshold values between 0.087 and 0.118 (≤0.2), indicating high consistency and low variability. The defuzzification process confirmed values exceeding 0.5, with scores ranging from 0.873 to 0.922 and expert agreement surpassing 75 percent for all elements. These findings confirm the robustness and applicability of the A-DSS-LF, validating its role in enhancing personalized learning outcomes and supporting teachers in tailoring adaptive learning resources. The framework is scalable and can be implemented in secondary school computer science education and online learning platforms to create personalized learning paths, improve engagement, and bridge the gap between online and offline learning. This study reinforces the significance of expert validation in adaptive learning frameworks, ensuring their scalability and adaptability for future applications in diverse educational settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_119-Validation_of_an_Adaptive_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Emotion Prediction in Multimedia Content Through Multi-Task Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602118</link>
        <id>10.14569/IJACSA.2025.01602118</id>
        <doi>10.14569/IJACSA.2025.01602118</doi>
        <lastModDate>2025-02-28T06:26:04.0330000+00:00</lastModDate>
        
        <creator>Wan Fan</creator>
        
        <subject>Multi task learning; multimodal emotion analysis; timing; transformer; attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study presents a robust multimodal emotion analysis model aimed at improving emotion prediction in film and television communication. Addressing challenges in modal fusion and data association, the model integrates a Transformer-based framework with multi-task learning to capture emotional associations and temporal features across various modalities. It overcomes the limitations of single-modal labels by incorporating multi-task learning, and is tested on the CMU-MOSI dataset using both classification and regression tasks. The model achieves strong performance, with a mean absolute error of 0.70, a Pearson correlation coefficient of 0.82, and an accuracy of 47.1% in a seven-class task. In a two-class task, it achieves an accuracy and F1 score of 88.4%. Predictions for specific video segments are highly consistent with actual labels, with predicted scores of 2.15 and 1.4. This research offers a new approach to multimodal emotion analysis, providing valuable insights for film and television content creation and setting the foundation for further advancements in this area.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_118-Enhancing_Emotion_Prediction_in_Multimedia_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Motion Scheme Generation System Design for Motion Software Based on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602117</link>
        <id>10.14569/IJACSA.2025.01602117</id>
        <doi>10.14569/IJACSA.2025.01602117</doi>
        <lastModDate>2025-02-28T06:26:04.0030000+00:00</lastModDate>
        
        <creator>Jinkai Duan</creator>
        
        <subject>Cloud computing; sports; random forest algorithm; personalization; system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Growing national attention has also promoted the growth of the sports health industry. However, ordinary people who lack professional knowledge cannot turn intuitive data into correct sports planning. Therefore, aiming at the problem that it is difficult for ordinary people to make a correct exercise plan from intuitive data, a personalized exercise plan generation system based on cloud computing is proposed. By analyzing the user&#39;s movement and physical data, the system uses cloud computing resources and machine learning algorithms to provide customized exercise recommendations. The key innovation of the research is the combination of an improved random forest algorithm with reinforcement learning, which also improves the algorithm&#39;s performance on unbalanced sample sets. The results indicated that the accuracy of the improved random forest was 0.985, higher than that of the precision-weighted random forest. The research algorithm was 9.04% higher on average than the original random forest algorithm and 2.71% higher than the accuracy-weighted random forest algorithm. In terms of the accuracy of personalized motion scheme generation for motion software, the improved algorithm reached 95.05% at most, and its recall rate reached 83.46% at most. Compared with existing sports software solutions, the research system can generate personalized sports programs more accurately, promote the development of the sports health industry, and improve national physical health. The system can provide users with personalized sports suggestions and utilize the powerful computing capacity of cloud computing to realize real-time processing and analysis of large-scale user data, providing users with timely sports feedback and suggestions.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_117-Personalized_Motion_Scheme_Generation_System_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Wavelet Scattering Network and CNN for ECG Heartbeat Classification from MIT–BIH Arrhythmia Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602116</link>
        <id>10.14569/IJACSA.2025.01602116</id>
        <doi>10.14569/IJACSA.2025.01602116</doi>
        <lastModDate>2025-02-28T06:26:03.9700000+00:00</lastModDate>
        
        <creator>Mohamed Elmehdi AIT BOURKHA</creator>
        
        <creator>Anas HATIM</creator>
        
        <creator>Dounia NASIR</creator>
        
        <creator>Said EL BEID</creator>
        
        <subject>Electrocardiogram (ECG); Convolutional Neural Network (CNN); Arrhythmia Rhythm (ARR); Maximum Relevance Minimum Redundancy (MRMR); Wavelet Scattering Network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Early detection of cardiovascular diseases is vital, especially considering the alarming number of deaths worldwide caused by heart attacks, as highlighted by the World Health Organization. This emphasizes the urgent need to develop automated systems that can ensure timely and accurate identification of cardiovascular conditions, potentially saving countless lives. This paper presents a novel approach for heartbeat classification, aiming to enhance both accuracy and prediction speed. The model is based on two distinct types of features. First, morphological features obtained by applying a wavelet scattering network to each ECG heartbeat, with the maximum relevance minimum redundancy algorithm applied to reduce the computational cost. Second, dynamic features, which capture the durations of the two pre-R–R intervals and one post-R–R interval of the analyzed heartbeat. A feature fusion technique combines the morphological and dynamic features, and a convolutional neural network is employed to classify 15 different ECG heartbeat classes. Our proposed method demonstrates an overall accuracy of 98.50% when tested on the Massachusetts Institute of Technology–Boston&#39;s Beth Israel Hospital (MIT–BIH) arrhythmia database. The results obtained from our approach highlight its superior performance compared to existing automated heartbeat classification models.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_116-Optimized_Wavelet_Scattering_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Dynamic Graph-Based Framework for Skin Lesion Classification in Dermoscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602115</link>
        <id>10.14569/IJACSA.2025.01602115</id>
        <doi>10.14569/IJACSA.2025.01602115</doi>
        <lastModDate>2025-02-28T06:26:03.9570000+00:00</lastModDate>
        
        <creator>J. Deepa</creator>
        
        <creator>P. Madhavan</creator>
        
        <subject>Confidence partitioning sampling filtering; dynamic graph convolutional recurrent imputation network; ISIC-2019 skin disease dataset; red billed blue magpie optimization algorithm; hybrid dual attention-guided efficient transformer and UNet 3+</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Early and accurate classification of skin lesions is critical for effective skin cancer diagnosis and treatment. However, the visual similarity of lesions in their early stages often leads to misdiagnoses and delayed interventions, and the lack of transparency in existing automated methods makes it challenging for dermatologists to interpret and validate their decisions, reducing trust in such systems. To overcome these complications, Skin Lesions Classification in Dermoscopic Images using an Optimized Dynamic Graph Convolutional Recurrent Imputation Network (SLCDI-DGCRIN-RBBMOA) is proposed. The input image is pre-processed using Confidence Partitioning Sampling Filtering (CPSF) to remove noise, resize, and enhance image quality. The Hybrid Dual Attention-guided Efficient Transformer and UNet 3+ (HDAETUNet3+) then segments the region of interest in the preprocessed dermoscopic images. Finally, segmented images are fed to the Dynamic Graph Convolutional Recurrent Imputation Network (DGCRIN) to classify skin lesions as actinic keratosis, dermatofibroma, basal cell carcinoma, squamous cell carcinoma, benign keratosis, vascular lesion, melanocytic nevus, or melanoma. DGCRIN alone does not incorporate an optimization strategy for determining the parameters required for exact skin lesion classification; hence, the Red Billed Blue Magpie Optimization Algorithm (RBBMOA) is proposed to enhance DGCRIN so that it can exactly classify the type of skin lesion. The proposed SLCDI-DGCRIN-RBBMOA technique attains 26.36%, 20.69%, and 30.29% higher accuracy; 19.12%, 28.32%, and 27.84% higher precision; 12.04%, 13.45%, and 22.80% higher recall; and 20.47%, 16.34%, and 20.50% higher specificity compared with existing methods, namely a deep learning method dependent on explainable artificial intelligence for skin lesion classification (DNN-EAI-SLC), multiclass skin lesion classification utilizing deep learning networks and optimal information fusion (MSLC-CNN-OIF), and classification of skin cancer from dermoscopic images utilizing deep neural network architectures (CSC-DI-DCNN), respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_115-Optimized_Dynamic_Graph_Based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fourth Party Logistics Routing Optimization Problem Based on Conditional Value-at-Risk Under Uncertain Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602114</link>
        <id>10.14569/IJACSA.2025.01602114</id>
        <doi>10.14569/IJACSA.2025.01602114</doi>
        <lastModDate>2025-02-28T06:26:03.9100000+00:00</lastModDate>
        
        <creator>Guihua Bo</creator>
        
        <creator>Qiang Liu</creator>
        
        <creator>Huiyuan Shi</creator>
        
        <creator>Xin Liu</creator>
        
        <creator>Chen Yang</creator>
        
        <creator>Liyan Wang</creator>
        
        <subject>Logistics service; routing optimization; tardiness risk; conditional value-at-risk; improved Q-learning algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In order to improve the level of logistics service, and considering the impact of uncertainties such as bad weather and highway collapse on the fourth party logistics routing optimization problem, this paper adopts Conditional Value-at-Risk (CVaR) to measure the tardiness risk caused by these uncertainties, and proposes a nonlinear programming model that minimizes CVaR. Furthermore, the proposed model is compared with the VaR model, and an improved Q-learning algorithm is designed to solve both models at different node sizes. The experimental results indicate that the proposed model can reflect the mean value of tardiness risk caused by time uncertainty in transportation tasks and better compensate for the shortcomings of the VaR model in measuring tardiness risk. Comparative analysis also shows the effectiveness of the proposed improved Q-learning algorithm.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_114-Fourth_Party_Logistics_Routing_Optimization_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accurate AI Assistance in Contract Law Using Retrieval-Augmented Generation to Advance Legal Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602113</link>
        <id>10.14569/IJACSA.2025.01602113</id>
        <doi>10.14569/IJACSA.2025.01602113</doi>
        <lastModDate>2025-02-28T06:26:03.8930000+00:00</lastModDate>
        
        <creator>Youssra Amazou</creator>
        
        <creator>Faouzi Tayalati</creator>
        
        <creator>Houssam Mensouri</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Monir Azmani</creator>
        
        <subject>AI Lawyer; contract law; legal technology; Retrieval-Augmented Generation (RAG); Large Language Models (LLMs); GPT; chatbots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Understanding legal documentation is a complex task due to its inherent subtleties and constant changes. This article explores the use of artificial intelligence-driven chatbots, enhanced by retrieval-augmented generation (RAG) techniques, to address these challenges. RAG integrates external knowledge into generative models, enabling the delivery of accurate and contextually relevant legal responses. Our study focuses on the development of a semantic legal chatbot designed to interact with contract law data through an intuitive interface. This AI Lawyer functions like a professional lawyer, providing expert answers in property law. Users can pose questions in multiple languages, such as English and French, and the chatbot delivers relevant responses based on integrated official documents. The system distinguishes itself by effectively avoiding LLM hallucinations, relying solely on reliable and up-to-date legal data. Additionally, we emphasize the potential of chatbots based on LLMs and RAG to enhance legal understanding, reduce the risk of misinformation, and assist in drafting legally compliant contracts. The system is also adaptable to various countries through the modification of its legal databases, allowing for international application.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_113-Accurate_AI_Assistance_in_Contract_Law.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Social Media Marketing Strategies Through Sentiment Analysis and Firefly Algorithm Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602112</link>
        <id>10.14569/IJACSA.2025.01602112</id>
        <doi>10.14569/IJACSA.2025.01602112</doi>
        <lastModDate>2025-02-28T06:26:03.8630000+00:00</lastModDate>
        
        <creator>Sudhir Anakal</creator>
        
        <creator>P N S Lakshmi</creator>
        
        <creator>Nishant Fofaria</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Shaik Sanjeera</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Ritesh Patel</creator>
        
        <subject>Sentiment analysis; firefly algorithm; social media marketing; optimization; user engagement; marketing strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The dramatic expansion of social media platforms has reshaped business-to-customer interactions, so organizations need to refine their marketing strategies to maximize both user engagement and marketing return on investment (ROI). Present-day social media marketing methods struggle to fully capture user emotions while responding to market variations, demonstrating the necessity for innovative social media marketing tools. This study seeks to boost social media marketing performance by integrating the Firefly Algorithm (FA) with sentiment analysis for content strategy optimization and better user engagement. Combining sentiment analysis with the Firefly Algorithm to optimize marketing strategies in real time is a novel technique that remains underutilized in present research; together, the two fields enable sentiment-driven, data-oriented decision-making in social media marketing applications. The proposed system combines sentiment analysis technology, which measures emotion levels on social media, with the Firefly Algorithm, which optimizes marketing tactics based on current feedback. The framework operates through dynamic adjustment of content strategies to maximize user engagement. The proposed method demonstrated 98.4% precision in forecasting user engagement metrics and adapting content strategies. Results show that these approaches outperform traditional marketing strategies, improving user interaction and campaign effectiveness. The research introduces a new optimization method for social media marketing that integrates sentiment analysis with the Firefly Algorithm. The findings suggest this combined methodology brings substantial precision improvements to marketing strategies, offering companies an effective way to optimize digital marketplace outcomes.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_112-Optimizing_Social_Media_Marketing_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Transformer Frameworks for Real-Time Anomaly Detection in Network Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602111</link>
        <id>10.14569/IJACSA.2025.01602111</id>
        <doi>10.14569/IJACSA.2025.01602111</doi>
        <lastModDate>2025-02-28T06:26:03.8300000+00:00</lastModDate>
        
        <creator>Santosh Reddy P</creator>
        
        <creator>Tarunika Chaudhari</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>A. Smitha Kranthi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Anomaly detection; network security; transformer framework; bidirectional encoder representations from transformers; zero-shot learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The detection of evolving cyber threats proves challenging for traditional anomaly detection because signature-based models do not identify new or zero-day attacks. To address these security challenges, this research develops an AI Transformer-based system that combines Bidirectional Encoder Representations from Transformers (BERT) with Zero-Shot Learning (ZSL) for real-time network anomaly detection. The goal is an effective alerting system that supports incident response and proactive defense by detecting both known and unknown cyber threats while needing minimal human input. The methodology uses BERT to transform textual attack descriptions found in CVEs and MITRE ATT&amp;CK TTPs into multidimensional embedding features. The embeddings generated from these textual documents are compared, via cosine similarity, with current network traffic data containing packet flow statistics and connection logs to reveal potentially suspicious patterns. The Zero-Shot Learning extension improves the system by enabling recognition of new incidents for which training data remains unlabeled, through its analysis of semantic links between familiar and unfamiliar attack types. The implementation utilizes three tools: Python for programming, BERT for embedding generation, and cosine similarity for measuring embedding similarity. Numerical experiment outcomes validate the proposed framework, achieving 99.7% accuracy, 99.4% precision, and 98.8% recall while maintaining a sparse 1.1% false positive rate. The system operates with a detection latency of just 45 ms, making it suitable for dynamic cybersecurity environments. The results indicate that the AI-driven Transformer framework outperforms conventional methods, providing a robust, real-time solution for anomaly detection that can adapt to evolving cyber threats without extensive manual intervention.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_111-AI_Driven_Transformer_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Attention-Based Transformers-CNN Model for Seizure Prediction Through Electronic Health Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602110</link>
        <id>10.14569/IJACSA.2025.01602110</id>
        <doi>10.14569/IJACSA.2025.01602110</doi>
        <lastModDate>2025-02-28T06:26:03.8130000+00:00</lastModDate>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>M. Misba</creator>
        
        <creator>S. Balaji</creator>
        
        <creator>K. Kiran Kumar</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>B Kiran Bala</creator>
        
        <creator>Radwan Abdulhadi .M. Elbasir</creator>
        
        <subject>Epileptic seizure prediction; EEG signal analysis; CNN-Transformer model; deep learning in healthcare; spatiotemporal feature extraction; neural network optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Seizures are a serious neurological disease, and proper prognosis by electroencephalography (EEG) dramatically enhances patient outcomes. Current seizure prediction methods fail to deal with big data and usually need intensive preprocessing. Recent breakthroughs in deep learning can automatically extract features and detect seizures. This work suggests a CNN-Transformer model for epileptic seizure prediction from EEG data with the goal of increasing precision and prediction rates by investigating spatial and temporal relationships within data. The innovation is in employing CNN for spatial feature extraction and a Transformer-based architecture for temporal dependencies over the long term. In contrast to conventional methods that depend on hand-crafted features, this method uses an optimization approach to enhance predictive performance for large-scale EEG datasets. The dataset, which was obtained from Kaggle, consists of EEG signals from 500 subjects with 4097 data points per subject in 23.6 seconds. CNN layers extract spatial characteristics, while the Transformer takes temporal sequences in through a Self-Attention Profiler to process EEG&#39;s temporality. The suggested CNN-Transformer model also performs well with 98.3% accuracy, 97.9% precision, 98.73% F1-score, 98.21% specificity, and 98.5% sensitivity. These outcomes show how the model identifies seizures while being low on false positives. The results indicate how the hybrid CNN-Transformer model is effective at utilizing spatiotemporal EEG features in seizure prediction. Its high sensitivity and accuracy indicate important clinical promise for early intervention, enhancing treatment for epilepsy patients. This method improves seizure prediction, allowing for better management and early therapeutic response in the clinic.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_110-Hybrid_Attention_Based_Transformers_CNN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving English Writing Skills Through NLP-Driven Error Detection and Correction Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602109</link>
        <id>10.14569/IJACSA.2025.01602109</id>
        <doi>10.14569/IJACSA.2025.01602109</doi>
        <lastModDate>2025-02-28T06:26:03.7830000+00:00</lastModDate>
        
        <creator>Purnachandra Rao Alapati</creator>
        
        <creator>A. Swathi</creator>
        
        <creator>Jillellamoodi Naga Madhuri</creator>
        
        <creator>Vijay Kumar Burugari</creator>
        
        <creator>Bhuvaneswari Pagidipati</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Prema S</creator>
        
        <subject>Natural Language Processing (NLP); error detection; writing skills improvement; language models; AI-Driven writing tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Error detection and correction is an important activity that ensures the quality of written communication, especially in education, business, and legal documentation. State-of-the-art NLP approaches have several issues, including overcorrection, poor handling of multilingual texts, and poor adaptability to domain-specific errors. Traditional methods, based on rule-based approaches or single-task models, fail to capture the complexity of real-world applications, especially in code-switched (multilingual) contexts and resource-scarce languages. To overcome these limitations, this research proposes an advanced error detection and correction framework based on transformer models such as Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT). The hybrid approach integrates a Seq2Seq architecture with attention mechanisms and error-specific layers for handling grammatical and spelling errors. Synthetic data augmentation techniques, including back-translation, improve the system&#39;s robustness across diverse languages and domains. The architecture attains a maximum accuracy of 99%, surpassing state-of-the-art models such as GPT-3 fine-tuned for grammatical error correction at 98%. It demonstrates superior performance in various multilingual and domain-specific settings, in addition to complex spelling challenges such as homophones and visually similar words. The system was realized using Python with TensorFlow and PyTorch, and uses the C4-200M dataset for training and evaluation. The precision and recall rates, together with real-time text processing, render the model highly useful for practical applications in education, content development, and communication platforms. This research fills a gap in present systems and thereby contributes to the automated improvement of English writing skills with a sound and scalable solution.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_109-Improving_English_Writing_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Enabled Personalization of Programming Learning Feedback</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602108</link>
        <id>10.14569/IJACSA.2025.01602108</id>
        <doi>10.14569/IJACSA.2025.01602108</doi>
        <lastModDate>2025-02-28T06:26:03.7530000+00:00</lastModDate>
        
        <creator>Mohammad T. Alshammari</creator>
        
        <subject>Machine learning; programming; learning outcome; feedback; personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Acquiring programming skills is daunting for most learners and is even more challenging in heavily attended courses. This complexity also makes it difficult for instructors to offer personalized feedback within their time constraints. This study offers a machine learning approach to predict each learner&#39;s programming weaknesses and provide appropriate learning resources. The models trained, tested, and compared are Random Forest, Logistic Regression, Support Vector Machine, and Decision Trees. In a comparison based on the features of prior knowledge, time spent, and GPA, Logistic Regression was found to be the most accurate. Using this model, each learner&#39;s programming weaknesses are identified so that personalized feedback can be given. The paper further describes a controlled experiment to evaluate the effectiveness of the personalized programming feedback generated by the model. The findings indicate that learners receiving personalized programming feedback achieve superior learning outcomes compared to those receiving traditional feedback. The implications of these findings are explored further, and directions for future research are suggested.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_108-Machine_Learning_Enabled_Personalization_of_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Chinese Sexism Text in Social Media Using Hybrid Deep Learning Model with Sarcasm Masking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602107</link>
        <id>10.14569/IJACSA.2025.01602107</id>
        <doi>10.14569/IJACSA.2025.01602107</doi>
        <lastModDate>2025-02-28T06:26:03.7200000+00:00</lastModDate>
        
        <creator>Lei Wang</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Syaripah Ruzaini Syed Aris</creator>
        
        <subject>Sexism; Chinese; deep learning; sarcasm; masking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Sexist content is prevalent in social media, which seriously affects the online environment and occasionally leads to offline disputes. For this reason, many scholars have researched how to automatically detect sexist content in social media. However, the presence of sarcasm complicates this task; thus, recognizing sarcasm to improve the accuracy of sexism detection has become a crucial research focus. In this study, we adopt a deep learning approach that combines a sexism lexicon and a sarcasm lexicon to detect Chinese sexist content in social media. We innovatively propose a sarcasm-based masking mechanism, which achieves an accuracy of 82.65% and a macro F1 score of 80.49% on the Sina Weibo Sexism Review (SWSR) dataset, significantly outperforming the baseline model by 2.05% and 2.89%, respectively. This study combines the sarcasm masking mechanism with sexism detection, and the experimental results demonstrate the effectiveness of the deep learning method based on this masking mechanism in Chinese sexism detection.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_107-Detecting_Chinese_Sexism_Text_in_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Depression Detection in Social Media Using NLP and Hybrid Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602106</link>
        <id>10.14569/IJACSA.2025.01602106</id>
        <doi>10.14569/IJACSA.2025.01602106</doi>
        <lastModDate>2025-02-28T06:26:03.7070000+00:00</lastModDate>
        
        <creator>S M Padmaja</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Pothumarthi Sridevi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>David Neels Ponkumar Devadhas</creator>
        
        <subject>Depression detection; RoBERTa; BiLSTM; social media analysis; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Depression is a type of emotion that has a detrimental effect on people&#39;s day-to-day lives. Globally, the number of people experiencing such long-term negative feelings rises annually. Many psychiatrists find it difficult to recognize mental illness or unpleasant emotions in patients before it is too late to improve treatment, so detecting depression in individuals as early as possible remains one of the most difficult problems. To create tools for diagnosing depression, researchers are employing NLP to examine written content shared on social media sites. Traditional techniques frequently suffer from poor scalability and low precision. To overcome the drawbacks of prior methods, an improved depression detection system based on RoBERTa (Robustly optimized BERT approach) and BiLSTM (Bidirectional Long Short-Term Memory) is introduced. The proposed work exploits the contextualized word embeddings from RoBERTa and the sequential learning properties of BiLSTM to identify depression in social media text. The technique is innovative in combining RoBERTa, which captures subtle linguistic aspects, with BiLSTM, which accurately models the temporal patterns of text sequences. Stopwords and punctuation are removed from the input data to provide clean text to the model for processing. The system outperforms existing models, achieving 99.4% accuracy, 98.5% precision, 97.1% recall, and a 97.3% F1 score. These results clearly highlight the effectiveness of the proposed technique in identifying depression with higher accuracy and lower variance. The proposed method is implemented in Python.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_106-Depression_Detection_in_Social_Media_Using_NLP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Organizing Neural Networks Integrated with Artificial Fish Swarm Algorithm for Energy-Efficient Cloud Resource Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602105</link>
        <id>10.14569/IJACSA.2025.01602105</id>
        <doi>10.14569/IJACSA.2025.01602105</doi>
        <lastModDate>2025-02-28T06:26:03.6730000+00:00</lastModDate>
        
        <creator>A. Z. Khan</creator>
        
        <creator>B. Manikyala Rao</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Eda Bhagyalakshmi</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>David Neels Ponkumar Devadhas</creator>
        
        <subject>Energy-efficient cloud resource management; Self-Organizing Neural Networks (SONN); Artificial Fish Swarm Algorithm (AFSA); cloud optimization; swarm intelligence; resource utilization; task scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Cloud computing&#39;s exponential expansion requires better resource management methods to resolve the existing tension between system performance, energy efficiency, and scalability. Traditional resource management practices in large-scale cloud environments frequently produce suboptimal results. This research presents a new computational framework that unites Self-Organizing Neural Networks (SONN) with the Artificial Fish Swarm Algorithm (AFSA) to enhance energy efficiency alongside optimized resource allocation and scheduling. The SONN component clusters workload information and automatically adapts its structure to fluctuating demand, while AFSA optimizes resource management through swarm-based intelligent protocols for high performance and scalability. The SONN-AFSA model achieves substantial performance gains when evaluated on real-world CPU usage statistics, memory usage behavior, and scheduling data from the Google Cluster Data. The experimental findings show 20.83% lower energy utilization, a 98.8% prediction rate, 95% SLA maintenance, and an outstanding 98% task execution rate. The proposed model delivers reliability outcomes superior to traditional approaches such as PSO, DRL, and PSO-based neural networks, which achieve accuracy rates between 88% and 92%. The adaptive platform delivers better power management for cloud computations while preserving operational agility by adapting workload distributions. The learning ability of SONN, joined with AFSA optimization, produces superior resource-direction capabilities that yield better service delivery quality. Future research will extend this scope to study real-time feedback structures and evaluate multi-objective enhancement through large-scale dataset validation, boosting cloud computing sustainability across various platforms.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_105-Self_Organizing_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ALE Model: Air Cushion Impact Characteristics of Seaplane Landing Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602104</link>
        <id>10.14569/IJACSA.2025.01602104</id>
        <doi>10.14569/IJACSA.2025.01602104</doi>
        <lastModDate>2025-02-28T06:26:03.6430000+00:00</lastModDate>
        
        <creator>Yunsong Zhang</creator>
        
        <creator>Ruiyou Li Shi</creator>
        
        <creator>Bo Gao</creator>
        
        <creator>Changxun Song</creator>
        
        <creator>Zhengzhou Zhang</creator>
        
        <subject>Seaplane; ALE method; multiphase coupling; air cushion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Seaplane landing is a strongly nonlinear gas-liquid-solid multiphase coupling problem; the coupled impact characteristics of the air cushion are very complicated, and it is difficult to maintain the stability of the airframe. In this paper, the ALE method is used to study seaplane landing at different initial attitude angles and velocities. First, a comparative study of the structure water-entry model and the air cushion effect model of a flat plate impacting the water surface is conducted to verify the reliability of the numerical model, and the influence of the velocity, the water surface shape, and the air cushion is accurately analyzed. Then, a seaplane landing is systematically studied, and the vertical acceleration, attitude angle, aircraft impact force, and flow field distribution are analyzed. The results show that the air cushion has a great influence on seaplane landing. The smaller the initial horizontal velocity, the more obvious the cushioning effect of the air cushion. Cavitation causes a secondary impact on the tail and produces a pressure value exceeding the initial value, which may damage the aircraft structure. The air cushion has a buffering effect on the seaplane: the pitch angle increases at a slower rate, and the pressure value at the monitoring point decreases. The larger the initial attitude angle, the more significant the air cushion effect. By analyzing the landing behavior of the seaplane, the ranges of speed and attitude angle suitable for the seaplane takeoff and landing process are given. These results can provide theoretical guidance for the stability design of the seaplane takeoff and landing process.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_104-ALE_Model_Air_Cushion_Impact_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive and Scalable Cloud Data Sharing Framework with Quantum-Resistant Security, Decentralized Auditing, and Machine Learning-Based Threat Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602103</link>
        <id>10.14569/IJACSA.2025.01602103</id>
        <doi>10.14569/IJACSA.2025.01602103</doi>
        <lastModDate>2025-02-28T06:26:03.6130000+00:00</lastModDate>
        
        <creator>P Raja Sekhar Reddy</creator>
        
        <creator>Pulipati Srilatha</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Sudipta Banerjee</creator>
        
        <creator>Shailaja Salagrama</creator>
        
        <creator>Manjusha Tomar</creator>
        
        <creator>Ashwin Tomar</creator>
        
        <subject>Blockchain audit; data security and privacy; machine learning; proxy re-encryption; quantum-resistant cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The increasing prevalence of cloud environments makes it important to ensure secure and efficient data sharing among dynamic teams, especially with respect to user access and revocation. This work proposes proxy re-encryption and hybrid authentication management schemes aimed at increasing scalability, flexibility, and adaptability, and explores a multi-proxy server architecture that distributes re-encryption tasks to improve fault tolerance and load balancing in large deployments. In addition, to eliminate the need for trusted third-party auditors, blockchain-based audit mechanisms are integrated for immutable, decentralized monitoring of data access and revocation events. To future-proof the system, quantum-resistant cryptographic mechanisms provide long-term security, and machine learning-driven approaches predict and address potential threats in real time. The proposed system also introduces fine-grained, multi-level access controls for data security and privacy, accommodating different user roles and data sensitivity levels. Evaluations indicate substantial improvements in computing performance, security, and scalability, making this enhanced system more effective for secure data sharing in dynamic, large-scale cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_103-Adaptive_and_Scalable_Cloud_Data_Sharing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pneumonia Detection Using Transfer Learning: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602102</link>
        <id>10.14569/IJACSA.2025.01602102</id>
        <doi>10.14569/IJACSA.2025.01602102</doi>
        <lastModDate>2025-02-28T06:26:03.5800000+00:00</lastModDate>
        
        <creator>Mohammed A M Abueed</creator>
        
        <creator>Danial Md Nor</creator>
        
        <creator>Nabilah Ibrahim</creator>
        
        <creator>Jean-Marc Ogier</creator>
        
        <subject>Pneumonia; machine learning; COVID-19; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Deep learning models have significantly improved pneumonia detection via X-ray image analysis in AI-driven healthcare, marking a major advancement in the effectiveness of medical decision systems. In this paper, we conduct a systematic literature review of pneumonia detection techniques that apply transfer learning combined with other methods. The review protocol was developed thoroughly and identifies recent research on pneumonia detection from the past five years. We searched well-known research repositories, including IEEE, Elsevier, Springer, and the ACM Digital Library. After a thorough search process, 35 papers were selected. The review summarizes papers that implemented different pneumonia detection methods, and their results are compared based on the best-performing models. These models are categorized into three approaches to pneumonia detection: deep learning methods, transfer learning techniques, and hybrid methods. A performance comparison of the best-performing models for pneumonia detection is then presented. This study concludes that while transfer learning holds substantial potential for improving pneumonia detection, further research is necessary to optimize these models for clinical application. This review helps researchers identify the research gaps in pneumonia detection techniques and how these gaps can be addressed in the near future.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_102-Pneumonia_Detection_Using_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Artificial Bee Colony and Bat Algorithm for Efficient Resource Allocation in Edge-Cloud Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602101</link>
        <id>10.14569/IJACSA.2025.01602101</id>
        <doi>10.14569/IJACSA.2025.01602101</doi>
        <lastModDate>2025-02-28T06:26:03.5500000+00:00</lastModDate>
        
        <creator>Jiao GE</creator>
        
        <creator>Bolin ZHOU</creator>
        
        <creator>Na LIU</creator>
        
        <subject>Cloud computing; edge computing; resource allocation; optimization; task scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Integrating edge and cloud computing systems builds a powerful framework for real-time data processing and large-scale computation tasks. However, efficient resource allocation and task scheduling remain outstanding challenges in these dynamic, heterogeneous environments. This paper proposes an innovative hybrid algorithm that amalgamates the Bat Algorithm (BA) and the Artificial Bee Colony (ABC) algorithm to meet these challenges. The ABC algorithm&#39;s solid global search capabilities and the BA&#39;s efficient local exploitation are merged for effective task scheduling and resource allocation. The proposed hybrid algorithm adapts dynamically to changing conditions by balancing exploration and exploitation through periodic solution exchanges. Experimental evaluations highlight that the proposed algorithm minimizes execution time and resource costs while guaranteeing proper management of task dependencies using a Directed Acyclic Graph (DAG) model. Compared to available methods, the proposed hybrid technique achieves better performance metrics, including reduced makespan, improved resource utilization, and lower computational delays, for resource optimization in an edge-cloud context.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_101-Hybrid_Artificial_Bee_Colony_and_Bat_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Chronic Kidney Disease Prediction with Deep Separable Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01602100</link>
        <id>10.14569/IJACSA.2025.01602100</id>
        <doi>10.14569/IJACSA.2025.01602100</doi>
        <lastModDate>2025-02-28T06:26:03.5170000+00:00</lastModDate>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>P N S Lakshmi</creator>
        
        <creator>Thalakola Syamsundararao</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Linginedi Ushasree</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>David Neels Ponkumar Devadhas</creator>
        
        <subject>Chronic kidney disease; deep separable convolutional neural networks; learning rate warm-up with cosine annealing; predictive accuracy; optimization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Chronic Kidney Disease (CKD) progressively impairs kidney function, ultimately compromising filtration, electrolyte balance, and blood pressure control. Early and precise prediction is necessary for successful disease management. This research demonstrates a new method involving Deep Separable Convolutional Neural Networks (DS-CNNs) for improving CKD prediction. Using the Chronic Kidney Disease dataset available on Kaggle, the model employs DS-CNNs combined with optimization techniques for better predictive accuracy. DS-CNNs utilize depthwise and pointwise convolutions to facilitate effective feature extraction and classification with efficient computation. To enhance model performance, the Learning Rate Warm-Up with Cosine Annealing technique is used to guarantee stable convergence and a controlled reduction of the learning rate. This approach remedies the inadequacies of traditional CKD detection solutions, which are insensitive to early stages and entail expensive, invasive procedures. At 94.50% accuracy, the proposed DS-CNN model outperforms conventional methods, offering better prediction performance. The results demonstrate the utility of deep learning and optimization in the early detection of CKD and introduce a promising tool for enhanced clinical decision-making.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_100-Enhancing_Chronic_Kidney_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LDA-Based Topic Mining for Unveiling the Outstanding Universal Value of Solo Keroncong Music as an Intangible Cultural Heritage of UNESCO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160299</link>
        <id>10.14569/IJACSA.2025.0160299</id>
        <doi>10.14569/IJACSA.2025.0160299</doi>
        <lastModDate>2025-02-28T06:26:03.4870000+00:00</lastModDate>
        
        <creator>Denik Iswardani Witarti</creator>
        
        <creator>Danis Sugiyanto</creator>
        
        <creator>Atik Ariesta</creator>
        
        <creator>Pipin Farida Ariyani</creator>
        
        <creator>Rusdah</creator>
        
        <subject>LDA; OUV; Solo keroncong; text mining; topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Outstanding Universal Value (OUV) denotes cultural or natural significance so exceptional that it transcends national boundaries and is of common importance for present and future generations of all humanity. A culture with this value needs permanent protection because it is considered a critical heritage for the world community. Solo keroncong music, a form of local wisdom of the Indonesian nation, has yet to be recognized as UNESCO Intangible Cultural Heritage (ICH), even though it has served as an instrument of Indonesia&#39;s soft-power diplomacy in several countries, such as Malaysia, England, and the United States. To be included on the World Heritage List, it must be of OUV and meet at least one of ten selection criteria. This study explored the OUV of Solo keroncong music using Latent Dirichlet Allocation. The primary data were obtained by conducting an FGD with the Indonesian Keroncong Music Artist Community (KAMKI) Surakarta and in-depth interviews with several keroncong figures in Solo. The result showed four topics with a coherence score of 0.51. An expert then mapped those four topics into three OUVs of Solo keroncong music as preliminary findings: keroncong music is a masterpiece of human creativity, a witness to civilization, and a carrier of traditional values. These findings show that Solo keroncong music is worthy of being proposed as UNESCO ICH.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_99-LDA_Based_Topic_Mining_for_Unveiling_the_Outstanding_Universal_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Athlete Workload Monitoring with Supervised Machine Learning for Running Surface Classification Using Inertial Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160298</link>
        <id>10.14569/IJACSA.2025.0160298</id>
        <doi>10.14569/IJACSA.2025.0160298</doi>
        <lastModDate>2025-02-28T06:26:03.4570000+00:00</lastModDate>
        
        <creator>WenBin Zhu</creator>
        
        <creator>QianWei Zhang</creator>
        
        <creator>SongYan Ni</creator>
        
        <subject>Athlete monitoring; machine learning models; running surface classification; Inertial Measurement Units (IMU); neural networks; Support Vector Machines (SVM); Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Monitoring athlete movement is important to improve performance, reduce fatigue, and decrease the likelihood of injury. Advanced technologies, including computer vision and inertial sensors, have been widely explored in classifying sport-specific movements. Combining automated sports action labeling with athlete-monitoring data provides an effective approach to enhance workload analysis. Recent studies on categorizing sport-specific movements show a trend toward training and evaluation methods based on individual athletes, allowing models to capture unique features peculiar to each athlete. This is particularly beneficial for movements that exhibit large variations in technique between athletes. The current study uses supervised machine learning models, including Neural Networks and Support Vector Machines (SVM), to distinguish between running surfaces, namely, athletics track, hard sand, and soft sand, using features extracted from an upper-back inertial measurement unit (IMU) sensor. Principal Component Analysis (PCA) is applied for feature selection and dimensionality reduction, enhancing model efficiency and interpretability. Our results show that athlete-dependent training approaches considerably enhance the classification performance compared to athlete-independent approaches, achieving higher weighted average precision, recall, F1-score, and accuracy (p &lt; 0.05).</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_98-Optimizing_Athlete_Workload_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Swarm Intelligence and Fuzzy Logic: A Framework for Evaluating English Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160297</link>
        <id>10.14569/IJACSA.2025.0160297</id>
        <doi>10.14569/IJACSA.2025.0160297</doi>
        <lastModDate>2025-02-28T06:26:03.4230000+00:00</lastModDate>
        
        <creator>Pei Yang</creator>
        
        <subject>English translation software; quantum swarm intelligence; fuzzy logic; multi-domain evaluation; optimization; linguistic performance analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study introduces the Quantum Swarm-Driven Fuzzy Evaluation Framework (QSI-Fuzzy) for assessing English translation software across multiple domains and criteria. The principal aim is to develop a scalable, adaptive, and interpretable evaluation framework that optimizes dynamic weight assignments while managing linguistic uncertainties. A major challenge in translation software evaluation lies in ensuring accurate and unbiased assessments of semantic accuracy, fluency, efficiency, and user satisfaction, particularly across diverse domains such as Legal, Medical, and Conversational contexts. To address this, QSI-Fuzzy integrates Quantum Swarm Intelligence (QSI) for dynamic weight optimization with fuzzy logic for handling linguistic uncertainties, ensuring robust and adaptive decision-making. Experimental results demonstrate that QSI-Fuzzy outperforms benchmark algorithms including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Simulated Annealing (SA), achieving faster convergence (55 iterations on average vs. 120 for SA) and exhibiting greater robustness under noisy conditions (maintaining a performance score of 0.80 at 20% noise, compared to 0.70, 0.68, and 0.65 for GA, PSO, and SA, respectively). These findings confirm that QSI-Fuzzy provides an efficient, scalable, and high-performance solution for translation software evaluation, with broader implications for real-time systems, complex decision-making, and multi-domain optimization challenges.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_97-Quantum_Swarm_Intelligence_and_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Cardiac Disease Classification Using a Deep Learning Model Embedded with a Bio-Inspired Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160296</link>
        <id>10.14569/IJACSA.2025.0160296</id>
        <doi>10.14569/IJACSA.2025.0160296</doi>
        <lastModDate>2025-02-28T06:26:03.3930000+00:00</lastModDate>
        
        <creator>Nandakumar Pandiyan</creator>
        
        <creator>Subhashini Narayan</creator>
        
        <subject>Cardiac disease; heart disease; bio-inspired; machine learning; deep learning; prediction; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Cardiac disease classification is a crucial task in healthcare aimed at early diagnosis and prevention of cardiovascular complications. Traditional methods such as machine learning models often face challenges in handling high-dimensional and noisy datasets, as well as in optimizing model performance. In this study, we propose and compare a novel approach for heart disease prediction using deep learning models embedded with bio-inspired algorithms. The integration of deep learning techniques allows for automatic feature learning and complex pattern recognition from raw data, while bio-inspired algorithms provide optimization capabilities for enhancing model accuracy and generalization. Specifically, the cuckoo search algorithm and the elephant herding optimization algorithm are employed to optimize the architecture and hyperparameters of the deep learning models, facilitating the exploration of diverse model configurations and parameter settings. This hybrid approach enables the development of highly effective predictive models by efficiently leveraging the complementary strengths of deep learning and bio-inspired optimization. Experimental results on benchmark heart disease datasets demonstrate the superior performance of the proposed method compared to conventional approaches, achieving higher accuracy and robustness in predicting heart disease risk. The proposed framework holds significant promise for advancing the state-of-the-art in heart disease prediction and facilitating personalized healthcare interventions for at-risk individuals.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_96-Comparative_Analysis_of_Cardiac_Disease_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Night-Vision Glasses with AI and Sensor Technology for Night Blindness and Retinitis Pigmentosa</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160295</link>
        <id>10.14569/IJACSA.2025.0160295</id>
        <doi>10.14569/IJACSA.2025.0160295</doi>
        <lastModDate>2025-02-28T06:26:03.3470000+00:00</lastModDate>
        
        <creator>Shaheer Hussain Qazi</creator>
        
        <creator>M. Batumalay</creator>
        
        <subject>Night-vision; glasses; night blindness; Retinitis Pigmentosa (RP); IoT; assistive technology; sensor technology; AI; data processing; low-light navigation; wearable devices; process innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This paper presents the conceptualization of Smart Night-Vision Glasses, an innovative assistive device aimed at individuals with night blindness and Retinitis Pigmentosa (RP). These conditions, characterized by significant difficulty in seeing in low-light or dark environments, currently have no effective medical solution. The proposed glasses utilize advanced sensor technologies such as LiDAR, infrared, and ultrasonic sensors, combined with artificial intelligence (AI), to create a real-time visual representation of the surroundings. Unlike conventional camera-based systems, which require light to function, this device relies on non-visible, non-harmful rays to detect environmental data, making it suitable for use in pitch-dark conditions. The AI processes the sensor data to generate a simplified, user-friendly view of the environment, outlined with clear, cartoon-like visuals for easy identification of objects, obstacles, and surfaces. The glasses are designed to look like regular prescription eyewear, ensuring comfort and discretion, while a button or trigger can switch them to &quot;night mode&quot; for enhanced vision in low-light settings. This concept aims to improve the independence, safety, and quality of life of individuals with night blindness and RP, offering a transformative solution where no medical alternatives currently exist. However, challenges such as sensor miniaturization, power consumption, and AI integration must be addressed for successful implementation. Beyond its direct benefits for users, the device could have broader societal and economic impacts by enhancing accessibility, reducing nighttime accidents, and fostering technological innovation in assistive wearables. The paper also discusses future directions for research and refinement of the technology while supporting process innovation.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_95-Smart_Night_Vision_Glasses_with_AI_and_Sensor_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Personalized Federated Learning Method with Adaptive Differential Privacy and Similarity Model Aggregation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160294</link>
        <id>10.14569/IJACSA.2025.0160294</id>
        <doi>10.14569/IJACSA.2025.0160294</doi>
        <lastModDate>2025-02-28T06:26:03.3130000+00:00</lastModDate>
        
        <creator>Shiqi Mao</creator>
        
        <creator>Fangfang Shan</creator>
        
        <creator>Shuaifeng Li</creator>
        
        <creator>Yanlong Lu</creator>
        
        <creator>Xiaojia Wu</creator>
        
        <subject>Federated learning; differential privacy; gradient clipping; model aggregation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In recent years, personalized federated learning (PFL) has garnered significant attention due to its potential for safeguarding data privacy while addressing data heterogeneity across clients. However, existing PFL approaches remain vulnerable to privacy breaches, particularly under adversarial inference and client-side data reconstruction attacks. To address these concerns, we propose DP-FedSim, a novel PFL framework incorporating adaptive differential privacy mechanisms. First, to mitigate the limitations posed by fixed-layer personalization strategies, we evaluate parameter significance using the Fisher information matrix. By selectively retaining parameters with higher Fisher values, DP-FedSim reduces the noise impact, enabling more efficient dynamic personalization. Second, we introduce a layered adaptive gradient clipping method. By leveraging the mean and standard deviation of the gradients within each layer, this method allows DP-FedSim to automatically adjust clipping thresholds in response to real-time privacy demands and model states, enhancing the adaptability to various model structures. This ensures a more accurate balance between privacy preservation and model performance. Furthermore, we present a model similarity-based aggregation method utilizing cosine similarity. This technique dynamically adjusts each client&#39;s contribution to the global model update, prioritizing clients with models more similar to the global model. This improves the global model&#39;s performance and generalization by allowing DP-FedSim to better handle a variety of data distributions and client model attributes. Experimental results on the SVHN and CIFAR-10 datasets show that DP-FedSim outperforms state-of-the-art PFL algorithms by an average of 5% when data heterogeneity is at its strongest. The efficiency of the suggested modules is validated by ablation tests, and the visualization results shed light on the reasoning behind important hyperparameter settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_94-Efficient_Personalized_Federated_Learning_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SQRCD: Building Sustainable and Customer Centric DFIS for the Industry 5.0 Era</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160293</link>
        <id>10.14569/IJACSA.2025.0160293</id>
        <doi>10.14569/IJACSA.2025.0160293</doi>
        <lastModDate>2025-02-28T06:26:03.3000000+00:00</lastModDate>
        
        <creator>Ruchira Rawat</creator>
        
        <creator>Himanshu Rai Goyal</creator>
        
        <creator>Sachin Sharma</creator>
        
        <creator>Bina Kotiyal</creator>
        
        <subject>Artificial General Intelligence (AGI); Digital Financial Inclusion System (DFIS); industry 5.0; customer-centric; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Artificial Intelligence (AI) is considered a big turning point for the financial industry. Introducing Artificial General Intelligence (AGI) enhances the capability of all the areas where AI shows its power. The development of AGI is driven by the need for more advanced automation, enhancing quick responsiveness, customization/personalization, and refined decision-making capabilities in different industries. The current study aims to discuss the respondent&#39;s views on the adoption of an AGI-enabled Sustainability, Quick responsiveness, Risk management, Customer-centric, and Data privacy (SQRCD) system in the Digital Financial Inclusion System (DFIS). A total of 630 responses were collected from respondents belonging to 90 different finance institutes. The result shows that SQRCD had a significant positive relationship with the attitude to adopt an AGI-enabled SQRCD system. The three cultural dimensions of Hofstede’s theory (power distance index, collectivism-individualism, and uncertainty acceptance) are also taken as moderators. The effect of the moderators is seen in the different relationships. The study develops a direct hypothesis to analyze the adoption of a new financial system which includes the mentioned factors. The result of the study is beneficial in the development of a renewed financial system where the mentioned parameters are essential for Industry 5.0.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_93-SQRCD_Building_Sustainable_and_Customer_Centric_DFIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Whale Optimization Algorithm Based on Fibonacci Search Principle for Service Composition in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160292</link>
        <id>10.14569/IJACSA.2025.0160292</id>
        <doi>10.14569/IJACSA.2025.0160292</doi>
        <lastModDate>2025-02-28T06:26:03.2530000+00:00</lastModDate>
        
        <creator>Yun CUI</creator>
        
        <subject>Service composition; Internet of Things; quality of service; whale optimization; Fibonacci search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Service composition in the Internet of Things (IoT) poses significant challenges owing to the dynamics of IoT ecosystems and the exponential increase in service candidates. This paper proposes an Enhanced Whale Optimization Algorithm (EWOA) by introducing the Fibonacci search principle for service composition optimization to overcome certain shortcomings of conventional approaches, including slow convergence and becoming stuck in local optima, in addition to imbalanced exploration-exploitation trade-offs. The proposed EWOA combines the application of nonlinear crossover weights with a Fibonacci search to optimize the global exploration and local exploitation searches of the basic version, thereby producing a better solution. Several simulations were performed for IoT functions. Among the experiments involving different QoS-based service compositions, the results show that the EWOA achieves superior solutions and faster convergence compared to recent methods.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_92-An_Enhanced_Whale_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Intelligent Speech Processing: Evolution, Applications and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160291</link>
        <id>10.14569/IJACSA.2025.0160291</id>
        <doi>10.14569/IJACSA.2025.0160291</doi>
        <lastModDate>2025-02-28T06:26:03.2200000+00:00</lastModDate>
        
        <creator>Ziqing Zhang</creator>
        
        <subject>Intelligent speech recognition; AI speech synthesis; speech processing; AI technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This paper provides an overview of the historical evolution of speech recognition, synthesis, and processing technologies, highlighting the transition from statistical models to deep learning-based models. Firstly, the paper reviews the early development of speech processing, tracing it from the rule-based and statistical models of the 1960s to the deep learning models, such as deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), which have dramatically reduced error rates in speech recognition and synthesis. It emphasizes how these advancements have led to more natural and accurate speech outputs. Then, the paper examines three key learning paradigms used in speech recognition: supervised, self-supervised, and semi-supervised learning. Supervised learning relies on large amounts of labeled data, while self-supervised and semi-supervised learning leverage unlabeled data to improve generalization and reduce reliance on manually labeled datasets. These paradigms have significantly advanced the field of speech recognition. Furthermore, the paper explores the wide-ranging applications of AI-driven speech processing, including smart homes, intelligent transportation, healthcare, and finance. By integrating AI with technologies like the Internet of Things (IoT) and big data, speech technology is being applied in voice assistants, autonomous vehicles, and speech-controlled devices. The paper also addresses the current challenges facing intelligent speech processing, such as performance issues in noisy environments, the scarcity of data for low-resource languages, and concerns related to data privacy, algorithmic bias, and legal responsibility. Overcoming these challenges will be crucial for the continued progress of the field. Finally, the paper looks to the future, predicting further improvements in speech processing technology through advancements in hardware and algorithms. It anticipates increased focus on personalized services, real-time speech processing, and multilingual support, along with growing integration with other technologies such as augmented reality. Despite the technical and ethical challenges, AI-driven speech processing is expected to continue its transformative impact on society and industry.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_91-AI_Powered_Intelligent_Speech_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an IoT Testing Framework for Autonomous Ground Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160290</link>
        <id>10.14569/IJACSA.2025.0160290</id>
        <doi>10.14569/IJACSA.2025.0160290</doi>
        <lastModDate>2025-02-28T06:26:03.1900000+00:00</lastModDate>
        
        <creator>Murat Tashkyn</creator>
        
        <creator>Amanzhol Temirbolat</creator>
        
        <creator>Nurlybek Kenes</creator>
        
        <creator>Amandyk Kartbayev</creator>
        
        <subject>Autonomous vehicle; testing system; IoT; functional testing; electronic modules; delivery automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Autonomous ground vehicles play a crucial role in the Internet of Things, offering transformative potential for applications such as urban transportation and delivery services. These vehicles can operate autonomously in uncertain environments, making reliable testing essential. This study develops and analyzes a testing framework for autonomous ground vehicles, focusing on their motion control systems and electronic modules. The research reviews testing methods for printed circuit boards (PCBs), highlighting the need for JTAG testing implementation for vehicle modules. Functional testing was conducted on key components such as cameras, LiDARs, and wireless interfaces under various conditions. Results show that JTAG testing successfully detects faults with precise localization, while functional tests confirm stable component performance. Environmental tests revealed that most components perform reliably within optimal conditions, with failures occurring at temperatures beyond &#177;70&#176;C and humidity levels exceeding 90% RH. The developed testing system enhances the reliability of autonomous delivery vehicles.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_90-Developing_an_IoT_Testing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Models for Predicting Global Supply Chain Disruptions in Trade Economics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160289</link>
        <id>10.14569/IJACSA.2025.0160289</id>
        <doi>10.14569/IJACSA.2025.0160289</doi>
        <lastModDate>2025-02-28T06:26:03.1600000+00:00</lastModDate>
        
        <creator>Limei Fu</creator>
        
        <subject>Supply chain disruptions; forecasting models; trade economics; predictive analytics; global resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Global supply chain disruptions have evolved into a critical challenge for trade economics, reaching across industries and economies around the globe. The ability to foresee these disruptions is crucial for policymakers, businesses, and supply chain managers who want to develop actionable strategies for stability. The current document focuses on analyzing the potential application of forecasting models to predict global supply chain disruptions, and their efficacy and limitations. A comparison of statistical, machine learning, and hybrid models is performed, and the best methods for predicting disruptions arising from geopolitical events, pandemics, natural disasters, and other external factors are identified. The study considers real-world datasets and various scenario analyses to provide actionable insights. The key findings were obtained by integrating various sources of information, including trade volume fluctuations, transportation bottlenecks, and economic indicators, into predictive frameworks. The novel contribution of this research is an advanced forecasting model that can boost the resilience and elasticity of global supply chains, ultimately playing a key role in the sustainability of trade economics.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_89-Forecasting_Models_for_Predicting_Global_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Readmission Risk Prediction After Total Hip Arthroplasty Using Machine Learning and Hyperparameter Optimized with Bayesian Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160288</link>
        <id>10.14569/IJACSA.2025.0160288</id>
        <doi>10.14569/IJACSA.2025.0160288</doi>
        <lastModDate>2025-02-28T06:26:03.1130000+00:00</lastModDate>
        
        <creator>Intan Yuniar Purbasari</creator>
        
        <creator>Athanasius Priharyoto Bayuseno</creator>
        
        <creator>R. Rizal Isnanto</creator>
        
        <creator>Tri Indah Winarni</creator>
        
        <subject>Total hip arthroplasty; orthopaedic surgery; Bayesian Optimization; machine learning algorithm; hyperparameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Machine learning techniques are increasingly used in orthopaedic surgery to assess risks such as length of stay, complications, infections, and mortality, offering an alternative to traditional methods. However, model performance varies depending on private institutional data, and optimizing hyperparameters for better predictions remains a challenge. This study incorporates automatic hyperparameter tuning to improve readmission prediction in orthopaedics using a public medical dataset. Bayesian Optimization was applied to optimize hyperparameters for seven machine learning algorithms—Extreme Gradient Boosting, Stochastic Gradient Boosting, Random Forest, Support Vector Machine, Decision Tree, Neural Network, and Elastic-net Penalized Logistic Regression—predicting readmission risk after Total Hip Arthroplasty (THA). Data from the MIMIC-IV database, including 1,153 THA patients, was used. Model performance was evaluated using Precision, Recall, and AUC-ROC, comparing optimized algorithms to those without hyperparameter tuning from previous studies. The optimized Extreme Gradient Boosting algorithm achieved the highest AUC-ROC of 0.996, while other models also showed improved accuracy, precision, and recall. This research successfully developed and validated optimized machine learning models using Bayesian Optimization, enhancing readmission prediction following THA based on patient demographics and preoperative diagnosis. The results demonstrate superior performance compared to prior studies that either lacked hyperparameter optimization or relied on exhaustive search methods.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_88-Readmission_Risk_Prediction_after_Total_Hip_Arthroplasty.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoMT-Enabled Noninvasive Lungs Disease Detection and Classification Using Deep Learning-Based Analysis of Lungs Sounds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160287</link>
        <id>10.14569/IJACSA.2025.0160287</id>
        <doi>10.14569/IJACSA.2025.0160287</doi>
        <lastModDate>2025-02-28T06:26:03.0800000+00:00</lastModDate>
        
        <creator>Muhammad Sajid</creator>
        
        <creator>Wareesa Sharif</creator>
        
        <creator>Ghulam Gilanie</creator>
        
        <creator>Maryam Mazher</creator>
        
        <creator>Khurshid Iqbal</creator>
        
        <creator>Muhammad Afzaal Akhtar</creator>
        
        <creator>Muhammad Muddassar</creator>
        
        <creator>Abdul Rehman</creator>
        
        <subject>Deep learning; respiratory sound; coronahack respiratory sound and coswara sound; IoMT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Noninvasive and accurate methods for diagnosing respiratory diseases are essential to improving healthcare outcomes. The Internet of Medical Things (IoMT) is critical in driving developments in this field. This work presents an IoMT-enabled approach for lung disease detection and classification, using deep learning techniques to analyze lung sounds. The proposed approach uses three datasets: the Respiratory Sound, the Coronahack Respiratory Sound, and the Coswara Sound. Traditional machine learning models, including the Extra Tree Classifier and AdaBoost Classifier, are used to benchmark performance. The Extra Tree Classifier achieved accuracies of 94.12%, 95.23%, and 94.21% across the datasets, while the AdaBoost Classifier showed improvements with 95.42%, 96.33%, and 94.76%. The proposed deep neural network (DNN) achieves accuracies of 98.92%, 99.33%, and 99.36% for the same datasets. This study explores the transformative potential of the IoMT in augmenting diagnostic precision and advancing the field of respiratory healthcare.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_87-IoMT_Enabled_Noninvasive_Lungs_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handling Imbalanced Data in Medical Records Using Entropy with Minkowski Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160286</link>
        <id>10.14569/IJACSA.2025.0160286</id>
        <doi>10.14569/IJACSA.2025.0160286</doi>
        <lastModDate>2025-02-28T06:26:03.0330000+00:00</lastModDate>
        
        <creator>Lastri Widya Astuti</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Dian Palupi Rini</creator>
        
        <subject>Medical record; imbalanced data; classification; distance; entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Medical records are essential for disease detection to help establish a diagnosis. Many issues with imbalanced classification are discovered in cases of early disease detection and diagnosis using machine learning methods, resulting in decreased accuracy values due to imbalanced data distribution, since positive patients with a disease are far fewer than normal individuals. To improve the accuracy of the results, a classification architectural model is proposed through a modified oversampling method (SMOTE) using Minkowski distance and adding entropy as a weight value estimation to determine the number of samples to be generated. The feature selection procedure adopts the hybrid Particle Swarm Optimisation Grey-Wolf Optimisation approach (PSO GWO). Dataset selection evaluated high, medium, and low data dimensions based on the number of features and the total number of dataset samples. The six classification algorithms were compared using datasets involving diabetes, heart, and breast cancer. The final classification results indicated an average accuracy of 74% for diabetes, 83% for heart, and 96% for breast cancer. The proposed approach successfully resolves imbalances in medical record data, outperforming Na&#239;ve Bayes, Logistic Regression, Support Vector Machine (SVM), and Random Forest classification approaches.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_86-Handling_Imbalanced_Data_in_Medical_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Cybersecurity Challenges and Solutions for Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160285</link>
        <id>10.14569/IJACSA.2025.0160285</id>
        <doi>10.14569/IJACSA.2025.0160285</doi>
        <lastModDate>2025-02-28T06:26:03.0030000+00:00</lastModDate>
        
        <creator>Lasseni Coulibaly</creator>
        
        <creator>Damien Hanyurwimfura</creator>
        
        <creator>Evariste Twahirwa</creator>
        
        <creator>Abubakar Diwani</creator>
        
        <subject>Internet of Things; smart transportation; autonomous vehicles; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the continuously increasing demand for new technologies, many concepts have emerged in recent decades, and the Internet of Things is one of the most popular. IoT is revolutionizing several aspects of human life with a large range of applications, including the transportation sector. Based on IoT technologies and Artificial Intelligence, new-generation vehicles are being developed with autonomous or self-driving capabilities to handle transportation in future smart cities. Given human-based errors that cause accidents, traffic congestion, and disruptions, autonomous vehicles are presented as an alternative solution to increase traffic safety, efficiency, and mobility. However, by transferring from a human-based to a computer-based driving style, the transportation area is inheriting existing cybersecurity challenges. Due to their connectivity and data-driven decision-making, the security of autonomous vehicles is a high-level concern, since it involves human safety in addition to economic losses. In this paper, a comprehensive review is conducted to discuss the security threats and existing solutions for autonomous vehicles. In addition, open security challenges are discussed to guide further investigation toward trusted and widespread deployment of autonomous vehicles.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_85-A_Review_of_Cybersecurity_Challenges_and_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Early Detection of Diabetic Nephropathy Using a Hybrid Autoencoder-LSTM Model for Clinical Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160284</link>
        <id>10.14569/IJACSA.2025.0160284</id>
        <doi>10.14569/IJACSA.2025.0160284</doi>
        <lastModDate>2025-02-28T06:26:02.9570000+00:00</lastModDate>
        
        <creator>U. Sudha Rani</creator>
        
        <creator>C. Subhas</creator>
        
        <subject>Autoencoder-LSTM; Diabetic nephropathy; early disease detection; machine learning; clinical data analysis; hybrid models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Early detection and precise prediction are essential in medical diagnosis, particularly for diseases such as diabetic nephropathy (DN), which tends to go undiagnosed at its early stages. Conventional diagnostic techniques may lack sensitivity and timeliness, and hence early intervention might be difficult. This research delves into the application of a hybrid Autoencoder-LSTM model to improve DN detection. The Autoencoder (AE) unit compresses clinical data with preservation of important features and dimensionality reduction. The Long Short-Term Memory (LSTM) network subsequently processes temporal patterns and sequential dependencies, enhancing feature learning for timely diagnosis. Clinical and demographic information from diabetic patients is included in the dataset, evaluating variables such as age, sex, type of diabetes, duration of disease, smoking, and alcohol use. The model is implemented in Python and exhibits better performance compared to conventional methods. The proposed hybrid AE-LSTM model attains an accuracy of 99.2%, which is a 6.68% improvement over Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression. The findings demonstrate the power of deep learning in detecting DN early and accurately and present a novel tool for proactive disease control among diabetic patients.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_84-Enhanced_Early_Detection_of_Diabetic_Nephropathy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing AI Governance: Addressing Bias and Ensuring Accountability Through the Holistic AI Governance Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160283</link>
        <id>10.14569/IJACSA.2025.0160283</id>
        <doi>10.14569/IJACSA.2025.0160283</doi>
        <lastModDate>2025-02-28T06:26:02.9230000+00:00</lastModDate>
        
        <creator>Ibrahim Atoum</creator>
        
        <subject>Artificial intelligence; framework; bias; discrimination; governance; key performance indicators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Artificial intelligence (AI) possesses the capacity to transform numerous facets of our existence; however, it concomitantly engenders considerable risks associated with bias and discrimination. This article explores emerging technologies like Explainable AI (XAI), Fairness Metrics (FMs), and Adversarial Learning (AL) for bias mitigation while emphasizing the critical role of transparency, accountability, and continuous monitoring and evaluation in AI governance. The Holistic AI Governance Framework (HAGF) is introduced, featuring a comprehensive, five-layered structure that integrates top-down and bottom-up strategies. HAGF prioritizes foundational principles and resource allocation, outlining five lifecycle-specific phases. Unlike the OECD AI Principles, which offer a general ethical framework lacking holistic perspective and resource allocation guidance, and the Berkman Klein Center&#39;s Model, which provides a broad framework but omits resource allocation and detailed implementation, HAGF offers actionable mechanisms. Tailored Key Performance Indicators (KPIs) are proposed for each HAGF layer, enabling ongoing refinement and adaptation to the evolving AI landscape. While acknowledging the need for enhancements in data governance and enforcement, the embedded KPIs ensure accountability and transparency, positioning HAGF as a pivotal framework for navigating the complexities of ethical AI.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_83-Revolutionizing_AI_Governance_Addressing_Bias.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Colon Cancer Prediction Using Capsule Networks and Autoencoder-Based Feature Selection in Histopathological Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160282</link>
        <id>10.14569/IJACSA.2025.0160282</id>
        <doi>10.14569/IJACSA.2025.0160282</doi>
        <lastModDate>2025-02-28T06:26:02.8930000+00:00</lastModDate>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>F. Sheeja Mary</creator>
        
        <creator>S. Balaji</creator>
        
        <creator>Divya Nimma</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>A. Smitha Kranthi</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Colon cancer prediction; capsule network; autoencoder; histopathological images; early cancer detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The malignant development of cells in the colon or rectum is known as colon cancer, and because of its high incidence and potential for death, it is a serious health problem. Because the disease frequently advances without symptoms in its early stages, early identification is essential. Improved survival rates and more successful therapy depend on an early and accurate diagnosis. The reliability of early detection can be impacted by problems with traditional diagnostic procedures, such as high false-positive rates, insufficient sensitivity, and inconsistent outcomes. This unique approach to colon cancer diagnosis uses autoencoder-based feature selection, capsule networks (CapsNets), and histopathology images to overcome these problems. CapsNets capture spatial hierarchies in visual input, improving pattern identification and classification accuracy. When employed for feature extraction, autoencoders reduce dimensionality, highlight important features, and eliminate noise, all of which enhance model performance. The suggested approach produced remarkable outcomes, with a 99.2% accuracy rate. The model&#39;s strong capacity to detect cancerous lesions with few mistakes is demonstrated by its high accuracy in differentiating between malignant and non-malignant tissues. This study represents a substantial development in cancer detection technology by merging autoencoders with Capsule Networks, thereby overcoming the shortcomings of existing approaches and offering a more dependable tool for early diagnosis. This method may improve patient outcomes, provide more individualized treatment regimens, and boost diagnostic accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_82-Enhanced_Colon_Cancer_Prediction_Using_Capsule_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color Multi-Focus Image Fusion Method Based on Contourlet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160281</link>
        <id>10.14569/IJACSA.2025.0160281</id>
        <doi>10.14569/IJACSA.2025.0160281</doi>
        <lastModDate>2025-02-28T06:26:02.8630000+00:00</lastModDate>
        
        <creator>Zhifang Cai</creator>
        
        <subject>Contourlet transform; image fusing; NSCT; Laplacian Pyramid; color multi-focus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Color Multi-Focus Image Fusion (MFIF) technology finds extensive use in areas such as microscopy, astronomy, and multi-scene photography where high-quality and detailed images are vital. This paper presents the Contourlet Transform alongside its enhanced version, the Non-Subsampled Contourlet Transform (NSCT), aimed at improving the outcomes of image fusion, with the support of Laplacian Pyramid (LP) decomposition. The NSCT framework overcomes challenges like spectral aliasing and directional sensitivity, leading to images with sharper edges, enriched texture details, and preserved delicate information. Experimental findings highlight the NSCT-based fusion algorithm&#39;s superiority. Subjective assessments indicate that using the NSCT method results in images with sharp and well-defined object boundaries, outstanding contrast, and abundant textures without the creation of artifacts, markedly excelling beyond traditional techniques such as the Contourlet Transform, Non-Subsampled Shearlet Transform (NSST) and Rolling Guidance Filtering (RGF). Objective measures verify its effectiveness: In the first dataset, it attains an average gradient (AG) of 8.36 and an edge intensity (EI) of 3.29E-04, while in the second dataset, it reports an AG of 21.39 and an EI of 4.06E-04, significantly outperforming other methods. Moreover, the NSCT method offers competitive computational speed, balancing runtime with high-quality fusion performance. These results establish the proposed method as a powerful and efficient solution for color MFIF, offering notable performance benefits and practical utility in various imaging fields.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_81-Color_Multi_Focus_Image_Fusion_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Performance with Big Data: Smart Supply Chain and Market Orientation in SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160280</link>
        <id>10.14569/IJACSA.2025.0160280</id>
        <doi>10.14569/IJACSA.2025.0160280</doi>
        <lastModDate>2025-02-28T06:26:02.8130000+00:00</lastModDate>
        
        <creator>Miftakul Huda</creator>
        
        <creator>Agus Rahayu</creator>
        
        <creator>Chairul Furqon</creator>
        
        <creator>Mokh Adib Sultan</creator>
        
        <creator>Nani Hartati</creator>
        
        <creator>Neng Susi Susilawati Sugiana</creator>
        
        <subject>Big data; supply chain management; web analytics; corporate performance; market orientation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study aims to explore the impact of big data-driven supply chain management, web analytics, and market orientation on corporate performance in medium-sized enterprises (MSEs) in Indonesia. By integrating these contemporary elements, the research seeks to provide insights into how digital technologies and strategic market practices can enhance organizational effectiveness. The study adopts a quantitative approach, utilizing survey data collected from 350 MSEs across various sectors in Indonesia. Purposive sampling was employed to ensure that the selected firms actively implement big data analytics and market-oriented strategies. Structural Equation Modeling (SEM) was conducted using SmartPLS to analyze the relationships among the variables. The findings reveal that big data-driven supply chain management and web analytics significantly contribute to improved corporate performance, with market orientation serving as a critical mediating factor. These results emphasize the importance of aligning digital tools with strategic business objectives to achieve competitive advantages. Furthermore, the study highlights the practical implications for MSEs, suggesting that integrating big data and web analytics into supply chain operations can optimize resource allocation, enhance decision-making, and foster market responsiveness. This research contributes to the literature on digital transformation and strategic management in emerging economies, offering a novel perspective on how MSEs can leverage technological advancements to remain competitive. Future studies may explore longitudinal impacts and sector-specific adaptations.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_80-Improving_Performance_with_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Model Based on CEEMDAN and Bayesian Optimized LSTM for Financial Trend Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160279</link>
        <id>10.14569/IJACSA.2025.0160279</id>
        <doi>10.14569/IJACSA.2025.0160279</doi>
        <lastModDate>2025-02-28T06:26:02.7670000+00:00</lastModDate>
        
        <creator>Yu Sun</creator>
        
        <creator>Sofianita Mutalib</creator>
        
        <creator>Liwei Tian</creator>
        
        <subject>LSTM; Bayesian optimization; CEEMDAN; financial time series; time window selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Financial time series prediction is inherently complex due to its nonlinear, nonstationary, and highly volatile nature. This study introduces a novel CEEMDAN-BO-LSTM model within a decomposition-optimization-prediction-integration framework to address these challenges. The Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) algorithm decomposes the original series into high-frequency, medium-frequency, low-frequency, and trend components, enabling precise time window selection. Bayesian Optimization (BO) algorithm optimizes the parameters of a dual-layer Long Short-Term Memory (LSTM) network, enhancing prediction accuracy. By integrating predictions from each component, the model generates a comprehensive and reliable forecast. Experiments on 10 representative global stock indices reveal that the proposed model outperforms benchmark approaches across RMSE, MAE, MAPE, and R&#178; metrics. The CEEMDAN-BO-LSTM model demonstrates robustness and stability, effectively capturing market fluctuations and long-term trends, even under high volatility.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_79-A_Novel_Hybrid_Model_Based_on_CEEMDAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing NLP to Optimize Municipal Services Delivery Using a Novel Municipal Arabic Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160278</link>
        <id>10.14569/IJACSA.2025.0160278</id>
        <doi>10.14569/IJACSA.2025.0160278</doi>
        <lastModDate>2025-02-28T06:26:02.7530000+00:00</lastModDate>
        
        <creator>Homod Hamed Alaloye</creator>
        
        <creator>Ahmad B. Alkhodre</creator>
        
        <creator>Emad Nabil</creator>
        
        <subject>Arabic text classification; machine learning (ML); deep learning (DL); hyperparameter optimization; municipal services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The natural language processing paradigm has emerged as a vital tool for addressing complex business challenges, mainly due to advancements in machine learning (ML), deep learning (DL), and Generative AI. Advanced NLP models have significantly enhanced the efficiency and effectiveness of NLP applications, enabling the seamless integration of various business processes to improve decision-making. In the municipal sector, the Kingdom of Saudi Arabia is trying to harness the power of NLP to promote urban development, city planning, and infrastructure enhancements, ultimately elevating the quality of life for its residents. Approximately 300 municipal services are available through multiple channels, including the Baladi application, unified communication services, WhatsApp, a dedicated beneficiary center (serving citizens and residents), and social media accounts. These channels are supported by a dedicated team that operates 24/7. This paper examines the implementation of ML and DL methods to categorize requests and suggestions submitted by residents for various municipal services in the Kingdom of Saudi Arabia. The primary aim of this work is to enhance service quality and reduce response times to community inquiries. However, a significant challenge arises from the lack of Arabic datasets specifically tailored to the municipal sector for training purposes, which limits meaningful progress. To address this issue, we have created a novel dataset consisting of 3,714 manually classified requests and suggestions collected from the X platform. This dataset is organized into eight classes: tree maintenance, lighting, construction waste, old and neglected assets, road conditions, visual pollution, billboards, and cleanliness. Our findings indicate that ML models, particularly when optimized with hyperparameters and appropriate pre-processing, outperformed DL models, achieving an F1 score of 90% compared to 88%. By releasing this novel Arabic dataset, which will be open sourced for the scientific community, we believe this work provides a foundational reference for further research and significantly contributes to improving the municipal sector&#39;s service delivery.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_78-Utilizing_NLP_to_Optimize_Municipal_Services_Delivery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Driven Detection of Terrorism Threats from Tweets Using DistilBERT and DNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160277</link>
        <id>10.14569/IJACSA.2025.0160277</id>
        <doi>10.14569/IJACSA.2025.0160277</doi>
        <lastModDate>2025-02-28T06:26:02.7070000+00:00</lastModDate>
        
        <creator>Divya S</creator>
        
        <creator>B Ben Sujitha</creator>
        
        <subject>Terrorism; global safety; terrorist attacks; data mining; artificial intelligence; natural language processing; DistilBERT; deep neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>As globalization accelerates, the threat of terrorist attacks poses serious challenges to national security and public safety. Traditional detection methods rely heavily on manual monitoring and rule-based surveillance, which lack scalability, adaptability, and efficiency in handling large volumes of real-time social media data. These approaches often struggle with identifying evolving threats, processing unstructured text, and distinguishing between genuine threats and misleading information, leading to delays in response and potential security lapses. To address these challenges, this study presents an advanced terrorism threat detection model that leverages DistilBERT with a Deep Neural Network (DNN) to classify Twitter data. The proposed approach efficiently extracts contextual and semantic information from textual content, enhancing the identification of potential terrorist threats. DistilBERT, a lightweight variant of BERT, is employed for its ability to process large volumes of text while maintaining high accuracy. The extracted embeddings are further analyzed using a Dense Neural Network, which excels at recognizing complex patterns. The model was trained and evaluated on a labeled dataset of tweets, achieving an impressive 93% accuracy. Experimental results demonstrate the model’s reliability in distinguishing between threatening and non-threatening tweets, making it an effective tool for early detection and real-time surveillance of terrorism-related content on social media. The findings highlight the potential of deep learning and natural language processing (NLP) in automated threat identification, surpassing traditional machine learning approaches. By integrating advanced NLP techniques, this model contributes to enhancing public safety, national security, and counter-terrorism efforts.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_77-Deep_Learning_Driven_Detection_of_Terrorism_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Undersampling, Oversampling, and SMOTE Techniques for Addressing Class Imbalance in Phishing Website Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160276</link>
        <id>10.14569/IJACSA.2025.0160276</id>
        <doi>10.14569/IJACSA.2025.0160276</doi>
        <lastModDate>2025-02-28T06:26:02.6730000+00:00</lastModDate>
        
        <creator>Kamal Omari</creator>
        
        <creator>Chaimae Taoussi</creator>
        
        <creator>Ayoub Oukhatar</creator>
        
        <subject>Phishing website detection; class imbalance; XGBoost; SMOTE-NC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Phishing website detection is one of the most challenging tasks in cyber security, and the performance of machine learning models is often degraded by class imbalance. This paper evaluates several resampling-based strategies, namely ROS, RUS, and SMOTE-based methods, in conjunction with the XGBoost classifier to address such imbalanced datasets. Key performance measures include precision, recall, F1 score, ROC-AUC, and the geometric mean score. Among the methods, SMOTE-NC-XGB performed best, with a precision of 98.0% and a recall of 98.5%, ensuring an effective trade-off between sensitivity and specificity. Although the stand-alone XGB model performs well on its own, adding resampling techniques raises its effectiveness considerably, especially in cases of pronounced class imbalance. These results show that resampling techniques are helpful for enhancing detection performance, with SMOTE-NC-XGB emerging as the best of those evaluated. This work should contribute to future efforts to improve phishing detection systems and to investigate new hybrid resampling methods.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_76-Comparative_Analysis_of_Undersampling_Oversampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging Data and Clinical Insight: Explainable AI for ICU Mortality Risk Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160275</link>
        <id>10.14569/IJACSA.2025.0160275</id>
        <doi>10.14569/IJACSA.2025.0160275</doi>
        <lastModDate>2025-02-28T06:26:02.6430000+00:00</lastModDate>
        
        <creator>Ali H. Hassan</creator>
        
        <creator>Riza bin Sulaiman</creator>
        
        <creator>Mansoor Abdulhak</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <subject>Explainable AI; healthcare; machine learning; predictive model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Despite advancements in machine learning within healthcare, the majority of predictive models for ICU mortality lack interpretability, a crucial factor for clinical application. The complexity inherent in high-dimensional healthcare data and models poses a significant barrier to achieving accurate and transparent results, which are vital in fostering trust and enabling practical applications in clinical settings. This study focuses on developing an interpretable machine learning model for intensive care unit (ICU) mortality prediction using explainable AI (XAI) methods. The research aimed to develop a predictive model that could assess mortality risk utilizing the WiDS Datathon 2020 dataset, which includes clinical and physiological data from over 91,000 ICU admissions. The model&#39;s development involved extensive data preprocessing, including data cleaning and handling missing values, followed by training six different machine learning algorithms. The Random Forest model ranked as the most effective, with its highest accuracy and robustness to overfitting, making it ideal for clinical decision-making. The importance of this work lies in its potential to enhance patient care by providing healthcare professionals with an interpretable tool that can predict mortality risk, thus aiding in critical decision-making processes in high-acuity environments. The results of this study also emphasize the importance of applying explainable AI methods to ensure AI models are transparent and understandable to end-users, which is crucial in healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_75-Bridging_Data_and_Clinical_Insight.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Metaheuristic Algorithms in Human Activity Recognition: Applications, Trends, and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160274</link>
        <id>10.14569/IJACSA.2025.0160274</id>
        <doi>10.14569/IJACSA.2025.0160274</doi>
        <lastModDate>2025-02-28T06:26:02.6270000+00:00</lastModDate>
        
        <creator>John Deutero Kisoi</creator>
        
        <creator>Norfadzlan Yusup</creator>
        
        <creator>Syahrul Nizam Junaini</creator>
        
        <subject>Metaheuristic algorithm; human activity recognition; systematic review; application; trend; challenge; literature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Metaheuristic algorithms have emerged as promising techniques for optimizing human activity recognition (HAR) systems. This systematic review examines the application of these algorithms in HAR by analyzing relevant literature published between 2019 and 2024. A comprehensive search across multiple databases yielded 27 studies that met the inclusion criteria. The analysis revealed that Genetic Algorithms (GA) exhibit classification accuracy rates ranging from 88.25% to 96.00% in activity recognition and up to 90.63% in localization tasks. Notably, Oppositional and Chaos Particle Swarm Optimization (OCPSO) combined with MI-1DCNN significantly improves detection accuracy, demonstrating a 2.82% improvement over standard PSO with a Support Vector Machine (SVM) classifier. Our analysis highlights a growing trend toward hybrid metaheuristic approaches that enhance feature selection and classifier optimization. However, challenges related to computational cost and scalability persist, underscoring key areas for future research. These findings emphasize the potential of metaheuristic algorithms to significantly advance HAR. Future studies should explore the development of more computationally efficient hybrid models and the integration of metaheuristic optimization with deep learning architectures to enhance system robustness and adaptability.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_74-A_Systematic_Review_of_Metaheuristic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Parabola Chaotic Keyed Hash Using SRAM-PUF for IoT Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160273</link>
        <id>10.14569/IJACSA.2025.0160273</id>
        <doi>10.14569/IJACSA.2025.0160273</doi>
        <lastModDate>2025-02-28T06:26:02.5500000+00:00</lastModDate>
        
        <creator>Nattagit Jiteurtragool</creator>
        
        <creator>Jirayu Samkunta</creator>
        
        <creator>Patinya Ketthong</creator>
        
        <subject>SRAM PUF; PUF key generation; chaotic keyed hash; device authentication; discrete-time chaotic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This paper introduces a lightweight and efficient keyed hash function tailored for resource-constrained Internet of Things (IoT) environments, leveraging the chaotic properties of the Parabola Chaotic Map. By combining the inherent unpredictability of chaotic systems with a streamlined cryptographic design, the proposed hash function ensures robust security and low computational overhead. The function is further strengthened by integrating it with a Physical Unclonable Function (PUF) based on SRAM initial values, which serves as a secure and tamper-resistant source of device-specific keys. Experimental validation on an ESP32 microcontroller demonstrates the function&#39;s high sensitivity to input variations, exceptional statistical randomness, and resistance to cryptographic attacks, including collisions and differential analysis. With a mean bit-change probability nearing the ideal 50% and 100% reliability in key generation under varying conditions, the system addresses critical IoT security challenges such as cloning, replay attacks, and tampering. This work contributes a novel solution that combines chaos theory and hardware-based security to advance secure, efficient, and scalable authentication mechanisms for IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_73-Lightweight_Parabola_Chaotic_Keyed_Hash.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the GRU-LSTM Hybrid Model for Air Temperature Prediction in Degraded Wetlands and Climate Change Implications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160272</link>
        <id>10.14569/IJACSA.2025.0160272</id>
        <doi>10.14569/IJACSA.2025.0160272</doi>
        <lastModDate>2025-02-28T06:26:02.4570000+00:00</lastModDate>
        
        <creator>Yuslena Sari</creator>
        
        <creator>Yudi Firmanul Arifin</creator>
        
        <creator>Novitasari Novitasari</creator>
        
        <creator>Samingun Handoyo</creator>
        
        <creator>Andreyan Rizky Baskara</creator>
        
        <creator>Nurul Fathanah Musatamin</creator>
        
        <creator>Muhammad Tommy Maulidyanto</creator>
        
        <creator>Siti Viona Indah Swari</creator>
        
        <creator>Erika Maulidiya</creator>
        
        <subject>Predictions; temperature; Gated Recurrent Unit (GRU); Long Short-Term Memory (LSTM); performance; indicators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Accurate air temperature prediction is critical, particularly for micro air temperatures, which change rapidly. Micro and macro air temperatures differ, particularly in degraded wetlands. By predicting air temperature, climate change in a degraded wetland environment can be anticipated earlier. Furthermore, micro and macro air temperatures are drought index parameters, and knowing the drought index can help prevent disasters such as fires and floods. However, the right indicators for predicting micro or macro temperatures have yet to be identified. LSTM excels at tasks requiring complex long-term memory, whereas GRU excels at tasks requiring rapid processing; both deep learning models are well suited to time series prediction. We propose a deep learning strategy based on a GRU-LSTM hybrid model. The performance of this hybrid model is affected by changes in model indicators: the preprocessing stage, the number of input parameters, and the presence or absence of a dropout layer in the model architecture are among the most influential indicators of model performance. The best macro temperature prediction performance was obtained using 12 monthly average data points to predict the next month’s temperature, yielding an RMSE of 0.056807, an MAE of 0.046592, and an R2 of 0.989371. The model also performed well in predicting daily micro temperature, with an RMSE of 0.227086, an MAE of 0.190801, and an R2 of 0.981802.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_72-Optimizing_the_GRU_LSTM_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of AI and IoT Implementation in a Museum’s Ecosystem: Benefits, Challenges, and a Novel Conceptual Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160271</link>
        <id>10.14569/IJACSA.2025.0160271</id>
        <doi>10.14569/IJACSA.2025.0160271</doi>
        <lastModDate>2025-02-28T06:26:02.4230000+00:00</lastModDate>
        
        <creator>Shinta Puspasari</creator>
        
        <creator>Indah Agustien Siradjuddin</creator>
        
        <creator>Rachmansyah</creator>
        
        <subject>AI; IoT; digital museum; digital ecosystem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Museums need to transform into modern museums by developing a digital ecosystem that integrates all elements of the museum to optimize organizational outcomes and improve people&#39;s welfare in the era of Society 5.0. This paper reviews the museum digital ecosystem based on the implementation of artificial intelligence (AI) and the Internet of Things (IoT). The PRISMA methodology for literature review was adopted to answer the research questions, identify digital technology trends, challenges, and benefits of digital museum ecosystem development, and propose a novel conceptual model of the museum ecosystem based on AI and IoT implementation. The dataset contained metadata from the Scopus, Google Scholar, and IEEE Xplore databases. The stages of the review revealed that AI and IoT technologies have gone hand in hand in the development of digital museums since 2020, yet there has been no research on a digital museum ecosystem model that integrates IoT and AI. Implementing a digital museum ecosystem will improve museum resources and increase museum competitiveness. However, there will be challenges related to cybersecurity, data integration across multimedia formats, and interface designs that address user acceptance of the technology in the digital museum ecosystem. The proposed AI- and IoT-based model also requires evaluation to validate its implementation at museums in future work.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_71-A_Review_of_AI_and_IoT_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Prediction of Polycystic Ovary Syndrome Using Attention-Based CNN-RNN Classification Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160270</link>
        <id>10.14569/IJACSA.2025.0160270</id>
        <doi>10.14569/IJACSA.2025.0160270</doi>
        <lastModDate>2025-02-28T06:26:02.3930000+00:00</lastModDate>
        
        <creator>Siji Jose Pulluparambil</creator>
        
        <creator>Subrahmanya Bhat B</creator>
        
        <subject>Polycystic ovary syndrome; contrast limited adaptive histogram equalization; particle swarm optimization; k- means clustering algorithm; convolutional neural network; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Polycystic Ovary Syndrome (PCOS) poses many challenges in diagnosis and treatment due to its diversity of presentation and potential long-term consequences for health. For this reason, sophisticated data pre-processing and classification methods are implemented to enhance the accuracy and reliability of PCOS diagnosis. To identify ovarian cysts, real-time ultrasound images are first pre-processed with the Contrast-Limited Adaptive Histogram Equalization (CLAHE) model to improve image contrast and sharpness. The ultrasound images are then segmented with the K-means clustering algorithm, Particle Swarm Optimization (PSO), and a fuzzy filter, enabling precise analysis of regions of interest. An attention-based Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model is employed for classification, effectively capturing the temporal and spatial characteristics of the segmented data. The proposed model achieves an accuracy of 96% and performs well on a variety of evaluation metrics, including accuracy, precision, sensitivity, F1-score, and specificity. The results demonstrate the robustness of the model in minimizing false positives and enhancing PCOS diagnostic accuracy. Nevertheless, larger datasets are required to maximize the precision and generalizability of the model. Subsequent research aims to use Explainable AI (XAI) methods to enhance clinical decision-making and establish trust by making the model&#39;s predictions clearer and more understandable for patients and clinicians. Along with enhancing PCOS detection, this comprehensive approach sets a precedent for integrating explainability into AI-based medical diagnostic devices.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_70-Detection_and_Prediction_of_Polycystic_Ovary_Syndrome.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modular Analysis of Complex Products Based on Hybrid Genetic Ant Colony Optimization in the Context of Industry 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160269</link>
        <id>10.14569/IJACSA.2025.0160269</id>
        <doi>10.14569/IJACSA.2025.0160269</doi>
        <lastModDate>2025-02-28T06:26:02.3470000+00:00</lastModDate>
        
        <creator>Yichun Shi</creator>
        
        <creator>Qinhe Shi</creator>
        
        <subject>Industry 4.0; genetic algorithm; ant colony; complex products; modularization; production efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the development of science and technology, industrial construction has entered the era of Industry 4.0 intelligent construction, and various algorithms have been widely applied in the modularization of production products. This study focuses on the modular optimization problem of complex products and establishes a hybrid genetic algorithm based on the ant colony algorithm framework. The new algorithm incorporates the visibility analysis of the genetic algorithm, using the obtained solution as the pheromone source for the new algorithm to quickly reach the optimal solution. The results showed that the algorithm could quickly modularize complex industrial products, adapt to products with a large number of parts and complex compositions, and obtain the optimal solution. The new algorithm reduced the running time of modularizing complex products by 35.06% compared with the particle swarm optimization algorithm. It also optimized the product design process for core components, reducing production costs by 23.46% and increasing production efficiency by 39.20%. Consequently, the novel algorithm modularizes complex products, thereby enhancing production efficiency and providing a novel intelligent method for the design of complex products.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_69-Modular_Analysis_of_Complex_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Safety in Mixed Environments by Predicting Lane Merging and Adaptive Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160268</link>
        <id>10.14569/IJACSA.2025.0160268</id>
        <doi>10.14569/IJACSA.2025.0160268</doi>
        <lastModDate>2025-02-28T06:26:02.3000000+00:00</lastModDate>
        
        <creator>Aigerim Amantay</creator>
        
        <creator>Shyryn Akan</creator>
        
        <creator>Nurlybek Kenes</creator>
        
        <creator>Amandyk Kartbayev</creator>
        
        <subject>Autonomous driving; lane merging; traffic safety; trajectory prediction; deep learning; LiDAR; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Autonomous driving technology is primarily developed to enhance traffic safety through advancements in motion prediction and adaptive control mechanisms. Highway lane merging remains a high-risk scenario, accounting for approximately 7% of highway collisions globally due to misjudged vehicle interactions, according to international statistics. This paper proposes a two-stage deep learning framework for autonomous lane merging in mixed traffic. Using the Argoverse dataset, which contains over 300,000 vehicle trajectories mapped to high-definition road networks, we first predict vehicle trajectories using a Seq2Seq model with LSTM layers, achieving a 21% improvement in prediction accuracy over a baseline Multi-layer Perceptron model. In the second stage, reinforcement learning is employed for maneuver generation, where a Dueling Deep Q-Network outperforms a standard DQN by 8% in collision avoidance. Experimental results indicate that the combined trajectory prediction and RL-based framework significantly reduces merging delays, enhances data-driven decision-making in mixed traffic environments, and provides a scalable solution for safer autonomous highway merging.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_68-Traffic_Safety_in_Mixed_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Virtual Machine Resource Optimization in Cloud Computing Using Real-Time Monitoring and Predictive Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160267</link>
        <id>10.14569/IJACSA.2025.0160267</id>
        <doi>10.14569/IJACSA.2025.0160267</doi>
        <lastModDate>2025-02-28T06:26:02.2670000+00:00</lastModDate>
        
        <creator>Rim Doukha</creator>
        
        <creator>Abderrahmane Ez-zahout</creator>
        
        <subject>Cloud computing; virtual machine optimization; resource allocation; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Effective resource estimation is essential in cloud computing to minimize operational costs, optimize performance, and enhance user satisfaction. This study proposes a comprehensive framework for virtual machine optimization in cloud environments, focusing on predictive resource management to improve resource efficiency and system performance. The framework integrates real-time monitoring, advanced resource management techniques, and machine learning-based predictions. A simulated environment is deployed using PROXMOX, with Prometheus for monitoring and Grafana for visualization and alerting. By leveraging machine learning models, including Random Forest Regression and LSTM, the framework predicts resource usage based on historical data, enabling precise and proactive resource allocation. Results indicate that the Random Forest model achieves superior accuracy with a MAPE of 2.65%, significantly outperforming LSTM&#39;s 17.43%. These findings underscore the reliability of Random Forest for resource estimation. This research demonstrates the potential of predictive analytics in advancing cloud resource management, contributing to more efficient and scalable cloud computing practices.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_67-Enhanced_Virtual_Machine_Resource_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Custom Deep Learning Approach for Traffic Flow Prediction in Port Environments: Integrating RCNN for Spatial and Temporal Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160266</link>
        <id>10.14569/IJACSA.2025.0160266</id>
        <doi>10.14569/IJACSA.2025.0160266</doi>
        <lastModDate>2025-02-28T06:26:02.1900000+00:00</lastModDate>
        
        <creator>Abdul Basit Ali Shah</creator>
        
        <creator>Xinglu Xu</creator>
        
        <creator>Zheng Yongren</creator>
        
        <creator>Zijian Guo</creator>
        
        <subject>Traffic flow prediction; transformer model; port congestion; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Port congestion poses a significant challenge to maritime logistics, especially for industries dealing with perishable goods like seafood. This study presents a custom deep learning model using Transformer architecture to predict real-time traffic flow at the Port of Virginia, with a focus on optimizing the movement of fish trucks. The model integrates multimodal data from 36 sensors, capturing traffic flow, occupancy, and speed at five-minute intervals, and processes high-dimensional, time-series data for accurate predictions. The model utilizes attention mechanisms to capture spatial and temporal dependencies, significantly improving predictive performance. Evaluation results indicate that the Transformer-based model outperforms existing models like RandomForest, GradientBoosting, and Support Vector Regression, with an R-squared value of 0.89, Pearson correlation of 0.91, and a Root Mean Squared Error (RMSE) of 0.0208. These results suggest that the model can effectively manage dynamic port traffic and optimize resource allocation, ensuring the timely delivery of perishable goods.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_66-A_Custom_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Broccoli Grading Based on Improved Convolutional Neural Network Using Ensemble Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160265</link>
        <id>10.14569/IJACSA.2025.0160265</id>
        <doi>10.14569/IJACSA.2025.0160265</doi>
        <lastModDate>2025-02-28T06:26:02.0970000+00:00</lastModDate>
        
        <creator>Zaki Imaduddin</creator>
        
        <creator>Yohanes Aris Purwanto</creator>
        
        <creator>Sony Hartono Wijaya</creator>
        
        <creator>Shelvie Nidya Neyman</creator>
        
        <subject>Grading; convolution neural network; ensemble deep learning; voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The demand for broccoli in Indonesia has been increasing significantly, with annual growth of approximately 15% to 20%. However, supply remains insufficient, and quality is often inconsistent. A grading process is therefore needed to classify broccoli into grades A, B, and C based on color, size, and shape. Currently, grading is carried out solely by market intermediaries, while farmers and the general public have a limited understanding of the process. This research developed an automated grading method using a Convolutional Neural Network (CNN) based on two broccoli images: a top view and a side view. Three main parameters, namely color, size, and shape, were identified from these images and used as grading determinants. An ensemble deep learning technique was applied by training each parameter separately using several CNN models, namely ResNet50, EfficientNetB2, VGG16, and an Improved CNN. These were then combined in the testing phase using a voting technique. The test was conducted 64 times with various model combinations to achieve the best accuracy. A significant contribution of the Improved CNN lies in the shape feature, which achieved a maximum performance of 95%. This study also compared evaluation metrics such as precision, recall, F-score, and accuracy across the different model combinations.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_65-Broccoli_Grading_Based_on_Improved_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Target Detection of Leakage Bubbles in Stainless Steel Welded Pipe Gas Airtightness Experiments Based on YOLOv8-BGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160264</link>
        <id>10.14569/IJACSA.2025.0160264</id>
        <doi>10.14569/IJACSA.2025.0160264</doi>
        <lastModDate>2025-02-28T06:26:02.0330000+00:00</lastModDate>
        
        <creator>Huaishu Hou</creator>
        
        <creator>Zikang Chen</creator>
        
        <creator>Chaofei Jiao</creator>
        
        <subject>Image processing; stainless steel welded pipe; non-destructive testing; YOLOv8; attention mechanism; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The gas-tightness experiment is an effective means of detecting leakage in stainless steel welded pipes, and a vision-based bubble recognition algorithm can effectively improve the efficiency of gas-tightness detection. This study proposed a new detection network, YOLOv8-BGA, using the YOLOv8 model as a baseline, which achieves effective identification of leakage bubbles; bubble images were collected under different lighting conditions in a practical industrial inspection environment to create a bubble dataset. Firstly, a C2f_BoT module was designed to replace the C2f module in the backbone network, improving the feature extraction capability of the model; secondly, the convolutional layers of the neck network were replaced with the GSConv module, making the model lightweight; thirdly, the C2f-BM attention mechanism was added before the detection layer, effectively improving model performance; finally, WIoU was used to improve the loss function, mitigating the detrimental effect on the gradient of small bubbles from low-quality samples in the dataset and significantly improving the convergence speed of the network. The experimental results showed that the average leakage bubble detection accuracy of the YOLOv8-BGA model reached 97.7%, an improvement of 5.3% over the baseline, meeting the needs of practical industrial inspection.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_64-Target_Detection_of_Leakage_Bubbles_in_Stainless_Steel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Weed Development Stages Using Deep Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160263</link>
        <id>10.14569/IJACSA.2025.0160263</id>
        <doi>10.14569/IJACSA.2025.0160263</doi>
        <lastModDate>2025-02-28T06:26:01.9870000+00:00</lastModDate>
        
        <creator>Yasin &#199;I&#199;EK</creator>
        
        <creator>Eyy&#252;p G&#220;LBANDILAR</creator>
        
        <creator>Kadir &#199;IRAY</creator>
        
        <creator>Ahmet ULUDAG</creator>
        
        <subject>Deep learning; weed development stages; classification; DenseNET; Xception; SqueezeNET; GoogleNET; EfficientNET; ROI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The control of harmful weeds holds a significant place in the cultivation of agricultural products. A crucial criterion in this control process is identifying the development stages of the weeds, as the control technique to be used is determined by the weed&#39;s growth stage. This study addresses the application of deep learning methods to classifying growth stages using images of various weed species to predict their development periods. Four weed species, grown from seeds collected in the Sinanpaşa Plain of Afyonkarahisar, Turkey, were used in the study. The images were captured with a Nikon D7000 camera equipped with three different lenses, and ROI extraction was performed using the Lifex software. Using these ROI images, deep learning models including DenseNet, EfficientNet, GoogleNet, Xception, and SqueezeNet were evaluated, employing performance metrics including accuracy, F1 score, precision, and recall. On the 4-class dataset with ROI annotations, DenseNet and Xception achieved an accuracy of 86.57%, while EfficientNet demonstrated the highest performance with an accuracy of 89.55%. Following the initial tests, it was concluded that the extreme similarity between classes 3 and 4 caused most of the prediction errors; merging these classes significantly increased the accuracy and F1 scores across all models. In image classification tests, SqueezeNet and GoogleNet demonstrated the shortest processing times. However, while EfficientNet lagged slightly behind these models in terms of speed, it exhibited superior accuracy. In conclusion, although the use of ROI improved classification performance, class merging strategies resulted in a more significant performance enhancement.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_63-Classifying_Weed_Development_Stages_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Supervised Learning-Based Classification Technique for Precise Identification of Monkeypox Using Skin Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160262</link>
        <id>10.14569/IJACSA.2025.0160262</id>
        <doi>10.14569/IJACSA.2025.0160262</doi>
        <lastModDate>2025-02-28T06:26:01.9400000+00:00</lastModDate>
        
        <creator>Vandana </creator>
        
        <creator>Chetna Sharma</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <subject>Deep learning; monkeypox; medical image processing; image classification; cross validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The monkeypox epidemic has spread to nearly every nation. Governments have implemented several strict policies to stop the virus that causes monkeypox. For effective handling and treatment, early identification and diagnosis of monkeypox from digital skin lesion images is critical, and this work employed deep learning architectures to achieve this goal. This article presents a supervised learning-based classification method designed for the precise identification of monkeypox cases. The analysis was conducted using an open-source dataset from Kaggle consisting of digital images of monkeypox, which were processed using advanced image processing and deep learning techniques. The data were categorized into findings related and unrelated to monkeypox. A deep neural network with 50 layers and up to 35 folds was utilized to identify regions of interest indicative of characteristics relevant to computer-assisted medical diagnosis, enabling the image classification task to be solved with high accuracy. In terms of performance, the proposed method achieved an accuracy of 96% during cross-validation testing. This outcome demonstrates the potential of computer-assisted diagnosis as a supplementary tool for medical professionals. Amid the monkeypox outbreak, this method offers a technical and objective assessment of patients&#39; skin conditions, thereby simplifying the diagnostic process for specialists.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_62-A_Supervised_Learning_Based_Classification_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Urban Mapping in Indonesia with YOLOv11</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160261</link>
        <id>10.14569/IJACSA.2025.0160261</id>
        <doi>10.14569/IJACSA.2025.0160261</doi>
        <lastModDate>2025-02-28T06:26:01.9100000+00:00</lastModDate>
        
        <creator>Muhammad Emir Kusputra</creator>
        
        <creator>Alesandra Zhegita Helga Prabowo</creator>
        
        <creator>Kamel</creator>
        
        <creator>Hady Pranoto</creator>
        
        <subject>YOLOv11; object detection; house detection; house counting; computer vision; deep learning; urban mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Object recognition in urban and residential settings has become more vital for urban planning, real estate evaluation, and geographic mapping applications. This study presents an innovative methodology for house detection with YOLOv11, an advanced deep-learning object detection model. YOLO is based on a Convolutional Neural Network (CNN), a type of deep learning model well suited for image analysis; YOLO itself is designed specifically for real-time object detection in images and videos. The suggested method utilizes sophisticated computer vision algorithms to recognize residential buildings precisely according to their roofing attributes. This study illustrates the potential of color-based roof categorization to improve spatial analysis and automated mapping technologies through meticulous dataset preparation, model training, and rigorous validation. This research enhances the field by introducing a rigorous methodology for accurate house detection relevant to urban development, geographic information systems, and automated remote sensing applications. By leveraging the power of deep learning and computer vision, this approach not only improves the efficiency of urban planning processes but also contributes to the development of more resilient and adaptive urban environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_61-Enhancing_Urban_Mapping_in_Indonesia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Technology of Civil Aircraft Stand Assignment Based on MSCOEA Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160260</link>
        <id>10.14569/IJACSA.2025.0160260</id>
        <doi>10.14569/IJACSA.2025.0160260</doi>
        <lastModDate>2025-02-28T06:26:01.8470000+00:00</lastModDate>
        
        <creator>Qiao Xue</creator>
        
        <creator>Yaqiong Wang</creator>
        
        <creator>Hui Hui</creator>
        
        <subject>Collaborative evolution; quantum algorithm; stand assignment; multi-objective optimization; population</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The Chinese aviation transportation industry is constantly developing towards multiple objectives and constraints. Conventional optimization methods for civil aircraft stand assignment have low efficiency and can no longer meet practical needs. On this basis, the paper first addresses the problems of convergence and uniformity in multi-objective optimization by developing a multi-strategy competitive-cooperative co-evolutionary algorithm (MSCOEA). Then, to address the high time complexity of the traditional chromosome coding mode, the MSCOEA algorithm incorporates characteristics of the quantum evolution algorithm to reduce complexity. From the results, the prediction accuracy of the research method was above 90% on both the training and validation sets. With increasing iterations, the final accuracies were 96.8% and 97.53%, respectively. The algorithm matched the performance of the comparative algorithms on most objectives. The optimal flight allocation rate reached 98.4%. The mean, optimal value, and variance of the number of flights allocated to remote stands were 5.75E+00, 4.00E+00, and 1.04E+00, respectively, which were superior to those of other comparative algorithms. The designed stand assignment optimization method achieves efficient stand assignment and improves the allocation efficiency of large-scale, multi-objective stand assignment.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_60-Optimization_Technology_of_Civil_Aircraft_Stand_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Denoising Techniques for Monte Carlo Rendering: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160259</link>
        <id>10.14569/IJACSA.2025.0160259</id>
        <doi>10.14569/IJACSA.2025.0160259</doi>
        <lastModDate>2025-02-28T06:26:01.8000000+00:00</lastModDate>
        
        <creator>Liew Wen Yen</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>J. Somasekar</creator>
        
        <subject>Convolutional neural network; Monte Carlo rendering; generative adversarial network; deep learning; machine learning; denoising techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Monte Carlo (MC) rendering is a powerful technique for achieving photorealistic images by simulating complex light interactions. However, the inherent noise introduced by MC rendering necessitates effective denoising techniques to enhance image quality. This paper presents a comprehensive review and comparative analysis of various machine learning (ML) methods for denoising MC renderings, focusing on four main categories: radiance prediction using convolutional neural networks (CNNs), kernel prediction networks, temporal rendering with recurrent architectures, and adaptive sampling approaches. Through a systematic analysis of seven peer-reviewed studies from 2019 to 2024, the findings reveal that deep learning models, particularly generative adversarial networks (GANs), achieve superior denoising performance. The study identifies key challenges, including computational demands, with some methods requiring significant GPU resources, and generalization across diverse scenes. Additionally, a trade-off is observed between denoising quality and processing speed, which is particularly crucial for real-time applications. The study concludes with recommendations for future research, emphasizing the need for hybrid approaches combining physics-based models with ML techniques to improve robustness and efficiency in production environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_59-Machine_Learning_Based_Denoising_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flexible Framework for Lung and Colon Cancer Automated Analysis Across Multiple Diagnosis Scenarios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160258</link>
        <id>10.14569/IJACSA.2025.0160258</id>
        <doi>10.14569/IJACSA.2025.0160258</doi>
        <lastModDate>2025-02-28T06:26:01.7530000+00:00</lastModDate>
        
        <creator>Marwen SAKLI</creator>
        
        <creator>Chaker ESSID</creator>
        
        <creator>Bassem BEN SALAH</creator>
        
        <creator>Hedi SAKLI</creator>
        
        <subject>Lung and colon cancers; histopathological images; LC25000 dataset; lightweight convolutional neural networks; multiple diagnosis scenarios</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Among humans, lung and colon cancers are regarded as primary contributors to mortality and morbidity. They may grow simultaneously in both organs, having a harmful influence on the lives of people. If a tumor is not diagnosed early, it is likely to spread to both of those organs. This research presents a flexible framework that employs a lightweight Convolutional Neural Network architecture for automating lung and colon cancer diagnosis in histological images across multiple diagnosis scenarios. The LC25000 dataset is commonly used for this task. It includes 25000 histopathological images belonging to 5 distinct classes: lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and benign colonic tissue. This work includes three diagnosis scenarios: (S1) evaluates lung or colon samples, (S2) distinguishes benign from malignant images, and (S3) classifies images into the five categories of the LC25000 dataset. Across all scenarios, the achieved accuracy, recall, precision, F1-score, and AUC exceeded 0.9947, 0.9947, and 0.9995, respectively. This investigation with a lightweight Convolutional Neural Network containing only 1.612 million parameters is extremely efficient for automated lung and colon cancer diagnosis, outperforming several current methods. This method might help doctors provide more accurate diagnoses and improve patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_58-Flexible_Framework_for_Lung_and_Colon_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sentiment Analysis Using Optuna Hyperparameter Optimization and Metaheuristics Feature Selection to Improve Performance of LightGBM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160257</link>
        <id>10.14569/IJACSA.2025.0160257</id>
        <doi>10.14569/IJACSA.2025.0160257</doi>
        <lastModDate>2025-02-28T06:26:01.7070000+00:00</lastModDate>
        
        <creator>Mostafa Medhat Nazier</creator>
        
        <creator>Mamdouh M. Gomaa</creator>
        
        <creator>Mohamed M. Abdallah</creator>
        
        <creator>Awny Sayed</creator>
        
        <subject>Arabic Sentiment Analysis (ASA); big data; Light Gradient Boosting Machine (LightGBM); Optuna hyperparameter optimization; metaheuristics feature selection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Sentiment Analysis (SA) effectively examines big data such as customer reviews, market research, social media posts, online discussions, and customer feedback. Arabic is a complex and rich language, and the existence of numerous dialects alongside Modern Standard Arabic (MSA) is the main reason Arabic resources need to be enhanced. This study investigates the impact of stemming and lemmatization methods on Arabic sentiment analysis (ASA) using machine learning techniques, specifically the LightGBM classifier. It employs metaheuristic feature selection algorithms, such as particle swarm optimization, dragonfly optimization, grey wolf optimization, Harris hawks optimizer, and a genetic optimization algorithm, to identify the most relevant features, and the Optuna hyperparameter optimization framework to determine the optimal set of hyperparameter values, both aimed at improving LightGBM model performance. The study underscores the importance of preprocessing strategies in ASA and highlights the effectiveness of metaheuristic approaches and Optuna hyperparameter optimization in improving LightGBM performance. Different stemming and lemmatization methods, metaheuristic feature selection algorithms, and Optuna hyperparameter optimization are applied to eleven datasets covering different Arabic dialects. The findings indicate that metaheuristic feature selection with the LightGBM classifier, using suitable stemming and lemmatization or a combination of them, improves LightGBM&#39;s accuracy by between 0 and 8%. However, Optuna hyperparameter optimization with the LightGBM classifier, using suitable stemming and lemmatization or a combination of them depending on data characteristics, improves LightGBM&#39;s accuracy by between 2 and 11%, achieving superior results to metaheuristic feature selection in more than 90% of cases. This study is of significant importance in the field of ASA, providing valuable insights and directions for future research.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_57-Arabic_Sentimen_Analysis_Using_Optuna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Fuzzy Matter-Element Extension Method to Cultural Tourism Resources Data Mining and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160256</link>
        <id>10.14569/IJACSA.2025.0160256</id>
        <doi>10.14569/IJACSA.2025.0160256</doi>
        <lastModDate>2025-02-28T06:26:01.6430000+00:00</lastModDate>
        
        <creator>Fei Liu</creator>
        
        <subject>Cultural and tourism integration; fuzzy object meta-theory; development; organic integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study explores the mining and evaluation of cultural and tourism resources based on fuzzy matter-element extension in the context of cultural and tourism integration. Through fieldwork and analysis of cultural and tourism resources, it is found that fuzzy matter-element extension theory can be effectively applied to the mining and evaluation of cultural and tourism resources in this context. The integration of cultural and tourism resources has a significant driving effect on tourism development, effectively enhancing the tourist experience and improving visibility and attractiveness. Meanwhile, through field research and data analysis, this study also puts forward improvement suggestions tailored to the characteristics and actual situation of the research object, aiming to further optimise the development mode, realise the organic integration of culture and tourism resources, and promote the prosperity and development of the local cultural industry. Overall, this study has theoretical and practical significance for promoting the integrated development of culture and tourism and the sustainable development of tourism.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_56-Using_Fuzzy_Matter_Element_Extension_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Balance-Based Out-of-Distribution Detection of Skin Lesions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160255</link>
        <id>10.14569/IJACSA.2025.0160255</id>
        <doi>10.14569/IJACSA.2025.0160255</doi>
        <lastModDate>2025-02-28T06:26:01.6270000+00:00</lastModDate>
        
        <creator>Jiahui Sun</creator>
        
        <creator>Guan Yang</creator>
        
        <creator>Yishuo Chen</creator>
        
        <creator>Hongyan Wu</creator>
        
        <creator>Xiaoming Liu</creator>
        
        <subject>Balanced energy regularization loss; skin lesions; out-of-distribution detection; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Skin lesion detection plays a crucial role in the diagnosis and treatment of skin diseases. Due to the wide variety of skin lesion types, especially when dealing with unknown or rare lesions, models tend to exhibit overconfidence. Out-of-distribution (OOD) detection techniques are capable of identifying lesion types that were not present in the training data, thereby enhancing the model&#39;s robustness and diagnostic reliability. However, the issue of class imbalance makes it difficult for models to effectively learn the features of minority class lesions. To address this challenge, a Balanced Energy Regularization Loss is proposed in this paper, aimed at mitigating the class imbalance problem in OOD detection. This method applies stronger regularization to majority class samples, promoting the model&#39;s learning of minority class samples, which significantly improves model performance. Experimental results demonstrate that the Balanced Energy Regularization Loss effectively enhances the model&#39;s robustness and accuracy in OOD detection tasks, providing a viable solution to the class imbalance issue in skin lesion detection.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_55-Energy_Balance_Based_Out_of_Distribution_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Attention-Based Adaptive CNN Model for Differentiating Dementia with Lewy Bodies and Alzheimer&#39;s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160254</link>
        <id>10.14569/IJACSA.2025.0160254</id>
        <doi>10.14569/IJACSA.2025.0160254</doi>
        <lastModDate>2025-02-28T06:26:01.5800000+00:00</lastModDate>
        
        <creator>K Sravani</creator>
        
        <creator>V RaviSankar</creator>
        
        <subject>Alzheimer&#39;s disease and dementia with lewy bodies differentiation; spatial attention-based adaptive convolution neural network; cingulate island sign; improved random function-based birds foraging search; dilated residual-long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Differentiating Alzheimer&#39;s Disease (AD) from Dementia with Lewy Bodies (DLB) using brain perfusion Single Photon Emission Computed Tomography (SPECT) is crucial, yet the two illnesses can be difficult to distinguish. The most recently identified characteristic of DLB supporting a possible diagnosis is the Cingulate Island Sign (CIS). This work aims to differentiate DLB and AD using a deep learning model named AD-DLB-DNet. Initially, the required images are collected from the benchmark dataset. The Spatial Attention-Based Adaptive Convolution Neural Network (SA-ACNN) is then used to visualize the CIS features in the images, with its attributes tuned using Improved Random Function-based Birds Foraging Search (IRF-BFS). The CIS features obtained from the SA-ACNN are used to accurately differentiate DLB from AD. A Dilated Residual-Long Short-Term Memory (DR-LSTM) layer is proposed to perform the AD and DLB differentiation and identify the clinical characteristics of DLB. The suggested model supports differentiation between AD and DLB so that effective therapeutic measures can be taken. Finally, validation is performed to confirm the effectiveness of the introduced system.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_54-Spatial_Attention_Based_Adaptive_CNN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Driven Technology Augmented Reality Digitisation in Cultural Communication Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160253</link>
        <id>10.14569/IJACSA.2025.0160253</id>
        <doi>10.14569/IJACSA.2025.0160253</doi>
        <lastModDate>2025-02-28T06:26:01.5330000+00:00</lastModDate>
        
        <creator>Na YIN</creator>
        
        <subject>Intangible cultural heritage AR digitization; low-carbon tourism; design test analysis; dragonfly algorithm; restricted Boltzmann machine model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The digitalisation of intangible cultural heritage and big data technology offer great potential for developing intangible cultural heritage in low-carbon tourism reform: they not only increase the accuracy of AR digital design but also contribute to the tourism-based management and protection of intangible cultural heritage. Addressing the lack of a testing and evaluation process in the current AR digital design of tourism intangible cultural heritage, this paper proposes a data-driven testing algorithm for intangible cultural heritage AR digital design, using Qinhuai lanterns and colours as a case study. Firstly, an AR digitization scheme based on the Qinhuai lanterns intangible cultural heritage is designed; then, around this scheme, the key technical contents of intangible cultural heritage AR digitization design are analysed; secondly, combining the dragonfly algorithm with the restricted Boltzmann machine model, a test method for the AR digitization design of low-carbon-reform tourism intangible cultural heritage is put forward, based on the dragonfly algorithm optimizing the structural parameters of the restricted Boltzmann machine; lastly, relying on the collected data, the AR digital design model of Qinhuai lanterns and colours tourism is built and the effectiveness of the proposed intelligent testing algorithm is analysed. The results show that the proposed digital design method is effective, while the optimised test method converges faster and is more accurate, with a test score prediction accuracy of 93.5%.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_53-Data_Driven_Technology_Augmented_Reality_Digitisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An NLP-Enabled Approach to Semantic Grouping for Improved Requirements Modularity and Traceability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160252</link>
        <id>10.14569/IJACSA.2025.0160252</id>
        <doi>10.14569/IJACSA.2025.0160252</doi>
        <lastModDate>2025-02-28T06:26:01.4870000+00:00</lastModDate>
        
        <creator>Rahat Izhar</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>Sultan A. Alharthi</creator>
        
        <subject>Requirements Engineering (RE); semantic clustering; sentence-BERT; natural language processing (NLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The escalating complexity of modern software systems has rendered the management of requirements increasingly arduous, often plagued by redundancy, inconsistency, and inefficiency. Traditional manual methods prove inadequate for addressing the intricacies of dynamic, large-scale datasets. In response, this research introduces SQUIRE (Semantic Quick Requirements Engineering), a cutting-edge automated framework leveraging advanced Natural Language Processing (NLP) techniques, specifically Sentence-BERT (SBERT) embeddings and hierarchical clustering, to semantically organize requirements into coherent functional clusters. SQUIRE is meticulously designed to enhance modularity, mitigate redundancy, and strengthen traceability within requirements engineering processes. Its efficacy is rigorously validated using real-world datasets from diverse domains, including attendance management, e-commerce systems, and school operations. Empirical evaluations reveal that SQUIRE outperforms conventional clustering methods, demonstrating superior intra-cluster cohesion and inter-cluster separation, while significantly reducing manual intervention. This research establishes SQUIRE as a scalable and domain-agnostic solution, effectively addressing the evolving complexities of contemporary software development. By streamlining requirements management and enabling software teams to focus on strategic initiatives, SQUIRE advances the state of NLP-driven methodologies in Requirements Engineering, offering a robust foundation for future innovations.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_52-An_NLP_Enabled_Approach_to_Semantic_Grouping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>All Element Selection Method in Classroom Social Networks and Analysis of Structural Characteristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160251</link>
        <id>10.14569/IJACSA.2025.0160251</id>
        <doi>10.14569/IJACSA.2025.0160251</doi>
        <lastModDate>2025-02-28T06:26:01.4570000+00:00</lastModDate>
        
        <creator>Zhaoyu Shou</creator>
        
        <creator>Zhe Zhang</creator>
        
        <creator>Jingquan Chen</creator>
        
        <creator>Hua Yuan</creator>
        
        <creator>Jianwen Mo</creator>
        
        <subject>Genetic algorithms; element selection; random forest; partial least squares; classroom network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>To deeply investigate the complex relationship between learners&#39; structural characteristics in classroom social networks and the dynamics of learning emotions in smart teaching environments, an innovatively improved genetic-algorithm-based all-element selection method, RP-GA, is proposed. The method calculates the importance of factors with a random forest model and, together with random numbers, guides population initialization to achieve differentiated and efficient factor selection; it also utilizes a Partial Least Squares regression model in conjunction with cross-validation optimization to enhance the accuracy of fitness evaluation, efficiently tackling the premature convergence and low prediction accuracy inherent in traditional genetic algorithms for factor selection. Based on this method, the elements affecting learning emotions are precisely screened, and the intrinsic links between elemental changes and structural properties are deeply analyzed. Experiments show that RP-GA selects a small and efficient set of key elements on public datasets and significantly improves the prediction performance of classifiers such as SVM, NB, MLP, and RF. The proposed all-element selection method for learning emotion provides effective conditions for classroom network structure characterization and future learning emotion computation.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_51-All_Element_Selection_Method_in_Classroom_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Planning and Design of Elderly Care Space Combining PER and Dueling DQN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160250</link>
        <id>10.14569/IJACSA.2025.0160250</id>
        <doi>10.14569/IJACSA.2025.0160250</doi>
        <lastModDate>2025-02-28T06:26:01.4230000+00:00</lastModDate>
        
        <creator>Di Wang</creator>
        
        <creator>Hui Ma</creator>
        
        <creator>Yu Chen</creator>
        
        <subject>Elderly care space; planning and design; prioritized experience replay; dueling deep q-network algorithm; spatial planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the continuous aging of society, people&#39;s attention to the planning of elderly care spaces is increasing. Many scholars have used various spatial planning models to plan and design elderly care spaces; however, the resource utilization and comfort of the spaces designed by these models are low, and the models still need to be optimized. This study integrates the Prioritized Experience Replay mechanism with the Dueling deep Q-network algorithm and constructs a spatial planning model based on the fused algorithm, in order to plan elderly care spaces reasonably. Comparative experiments on the fusion algorithm indicate that it has the best prediction performance, with a minimum prediction error rate of only 0.9% and a prediction speed of up to 8.7bps; its denoising effect is also the best, and its performance far exceeds that of the comparison algorithms. Further analysis of the spatial planning model based on this algorithm shows that the average time required for elderly care space planning is only 1.3 seconds, the comfort level of the planned space reaches 98.7%, the resource utilization rate reaches 89.7%, and the planned space can raise the living standard of the elderly by 67.7%. These results show that the spatial planning model proposed in the study can effectively enhance the resource utilization and comfort of elderly care spaces and raise the living standard of the elderly.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_50-Planning_and_Design_of_Elderly_Care_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning for Named Entity Recognition in Setswana Language Using CNN-BiLSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160249</link>
        <id>10.14569/IJACSA.2025.0160249</id>
        <doi>10.14569/IJACSA.2025.0160249</doi>
        <lastModDate>2025-02-28T06:26:01.4100000+00:00</lastModDate>
        
        <creator>Shumile Chabalala</creator>
        
        <creator>Sunday O. Ojo</creator>
        
        <creator>Pius A. Owolawi</creator>
        
        <subject>Natural language processing; named entity recognition; convolutional neural network; bidirectional long short-term memory; Setswana</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This research proposes a hybrid approach to Named-Entity Recognition (NER) for Setswana, a low-resource language, that combines a bidirectional long short-term memory (BiLSTM) network with transfer learning and a convolutional neural network (CNN). Among the 11 official languages of South Africa, Setswana is a morphologically rich language that is underrepresented in deep learning for natural language processing (NLP), partly because it has limited resources. To close this gap, this research uses the proposed hybrid transfer learning approach together with an open-source Setswana NER dataset from the South African Centre for Digital Language Resources (SADiLaR), which contains an estimated 230,000 tokens overall. Five NER models are created and contrasted with one another to determine which performs best; the top model is then compared with the baseline models. The first two models are trained at word level and the latter three at sentence level: word-level models represent each word as a character sequence or word embedding, while sentence-level models interpret the entire sentence as a series of word embeddings. The first model is a CNN and the second is word-level CNN-BiLSTM transfer learning; the last three are sentence-level CNN, CNN-BiLSTM transfer learning, and CNN-BiLSTM models. With 99% accuracy, the sentence-level CNN-BiLSTM transfer learning model outperforms all the others. Furthermore, it outperforms the state-of-the-art models for Setswana in the literature that were created using the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_49-Transfer_Learning_for_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Recognition IoT-Based for People with Disabilities: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160248</link>
        <id>10.14569/IJACSA.2025.0160248</id>
        <doi>10.14569/IJACSA.2025.0160248</doi>
        <lastModDate>2025-02-28T06:26:01.3630000+00:00</lastModDate>
        
        <creator>Andriana</creator>
        
        <creator>Elli Ruslina</creator>
        
        <creator>Zulkarnain</creator>
        
        <creator>Fajar Arrazaq</creator>
        
        <creator>Sutisna Abdul Rahman</creator>
        
        <creator>Tjahjo Adiprabowo</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Yudi Yuliyus Maulana</creator>
        
        <subject>Internet of Things; mini smart camera; object recognition; speech recognition; assistive technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This research presents a literature study on the development of a Mini Smart Camera (MSC) system that utilizes Internet of Things (IoT) technology to help people with disabilities interact with their environment. The MSC serves as an assistive device that integrates object recognition and speech recognition technologies along with an internet-based two-way communication system. Utilizing state-of-the-art hardware and software, the system captures images, processes audio, and transmits data via Real Time Streaming Protocol (RTSP) and Message Queuing Telemetry Transport (MQTT); these protocols serve different purposes, managing data transmission and enabling machine-to-machine communication. The MSC is equipped with a 5 MP camera, a 2.5 GHz quad-core processor, and 4G connectivity, and is connected to a high-performance Ubuntu 22.04 Linux cloud server. The use of OpenCV libraries and machine learning algorithms ensures fast and precise image analysis. By integrating machine learning and natural language processing (NLP), the MSC efficiently handles both visual and audio inputs. Key features, including text-to-speech (TTS) and speech-to-text (STT), provide an interactive and adaptive communication interface. The system is designed to improve accessibility and encourage greater independence for people with disabilities in daily activities. The development of multispectral cameras for people with disabilities will provide more detailed analysis for detecting surrounding objects.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_48-Object_Recognition_IoT_Based_for_People_with_Disabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Watermelon Rootstock Seedling Detection Based on Improved YOLOv8 Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160247</link>
        <id>10.14569/IJACSA.2025.0160247</id>
        <doi>10.14569/IJACSA.2025.0160247</doi>
        <lastModDate>2025-02-28T06:26:01.3300000+00:00</lastModDate>
        
        <creator>Qingcang Yu</creator>
        
        <creator>Zihao Xu</creator>
        
        <creator>Yi Zhu</creator>
        
        <subject>Image segmentation; YOLOv8s-seg; lightweight; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Automated grafting is an important means for modern agriculture to improve production efficiency and grafted seedling quality, and using vision systems to quickly segment target rootstock seedlings is the key technology for achieving it. This study aims to solve the problems of inaccurate image segmentation and slow detection speed in traditional rootstock seedling segmentation algorithms. To address these challenges, it proposes a lightweight segmentation method based on an improved YOLOv8s-seg. The improved YOLOv8-seg introduces FasterNet as the backbone network and designs an RCAAM module to enhance feature extraction ability while keeping the model lightweight. The D-C2f module is improved to enhance feature fusion ability, achieving efficient and accurate segmentation of watermelon rootstock seedlings and improving grafting efficiency. A series of comparative experiments compares the improved YOLOv8-seg with classic models such as Unet, SOLO v2, Mask R-CNN, and Deeplabv3+ on a test set containing watermelon rootstock seedlings, evaluating the recognition performance and detection effect of the model. The experimental results show that the improved YOLOv8-seg outperforms the other models on the mAP index and can segment seedlings more accurately. This study provides a reliable deep-learning-based solution for the development of automatic grafting robots, which can effectively reduce labor costs and improve grafting efficiency, meeting the requirements of automated equipment for inference efficiency and hardware resources.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_47-Watermelon_Rootstock_Seedling_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eco-Efficiency Measurement and Regional Optimization Strategy of Green Buildings in China Based on Three-Stage Super-Efficiency SBM-DEA Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160246</link>
        <id>10.14569/IJACSA.2025.0160246</id>
        <doi>10.14569/IJACSA.2025.0160246</doi>
        <lastModDate>2025-02-28T06:26:01.3000000+00:00</lastModDate>
        
        <creator>Xianhong Qin</creator>
        
        <creator>Yaou Lv</creator>
        
        <creator>Yunfang Wang</creator>
        
        <creator>Jian Pi</creator>
        
        <creator>Ze Xu</creator>
        
        <subject>Three stages; data envelopment analysis; super efficiency model; green buildings; ecological efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With society&#39;s increasing attention to sustainable development, green building, as an important sustainable building form, has attracted much attention. However, the comprehensive assessment of the eco-efficiency of green buildings faces many challenges, including insufficient analysis of all stages of the building life cycle and the oversimplification of multidimensional input-output relationships. In addition, existing methods involve subjectivity and uncertainty in data processing and weight allocation, which reduces the reliability of the evaluation. To overcome these difficulties, this study introduces a measurement method based on a three-stage super-efficiency Slacks-Based Measure Data Envelopment Analysis (SBM-DEA) model. By constructing this model, an eco-efficiency measurement model of green buildings is established, taking building resources and energy as inputs and economic and environmental value as outputs. The results show that after removing the interference of external environment variables and random errors, the stage-three measurement results are more reasonable. From 2011 to 2018, the eco-efficiency of green buildings in China showed obvious regional differences, with a decreasing trend of &quot;the highest in the east (0.884), followed by the central region (0.704), and the lowest in the west (0.578)&quot;. The innovation of this study lies in its full consideration of timing and dynamics, providing new theoretical and practical ideas for promoting sustainable development in the field of green building and promising improved assessment accuracy and reliability in the field.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_46-Eco_Efficiency_Measurement_and_Regional_Optimization_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging the Gap Between Industry 4.0 Readiness and Maturity Assessment Models: An Ontology-Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160245</link>
        <id>10.14569/IJACSA.2025.0160245</id>
        <doi>10.14569/IJACSA.2025.0160245</doi>
        <lastModDate>2025-02-28T06:26:01.2530000+00:00</lastModDate>
        
        <creator>ABADI Asmae</creator>
        
        <creator>ABADI Chaimae</creator>
        
        <creator>ABADI Mohammed</creator>
        
        <subject>Industry 4.0; readiness assessment; maturity assessment; digital transformation; ontology development; conceptual model; knowledge engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The rapid evolution of Industry 4.0 technologies has created a complex and interconnected landscape of readiness and maturity assessment models. However, these models often fail to address the full spectrum of organizational readiness across strategic, technological, operational, and cultural dimensions, while also not accounting for emerging paradigms such as Industry 5.0. This paper proposes a conceptual model for an ontology that integrates all relevant domain knowledge into a unified framework, capturing strategic, technological, operational, and cultural readiness and maturity within a single comprehensive model. The ontology provides a systematic approach to understanding the interconnectedness of I4.0 and Industry 5.0 assessment models, facilitating a holistic view of an organization’s preparedness for digital transformation. By bridging the gap between these two stages of industrial evolution, the model enables interoperability across diverse frameworks, promoting more informed decision-making and strategic planning. This research highlights the potential of the proposed ontology to support the ongoing shift from Industry 4.0 to Industry 5.0, offering a valuable tool for researchers, practitioners, and decision-makers navigating the complexities of next-generation industrial ecosystems. The paper further discusses the theoretical underpinnings and practical applications of the model in fostering a smooth transition toward a more human-centric, sustainable, and technologically advanced industrial future.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_45-Bridging_the_Gap_Between_Industry_4_Readiness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid SETO-GBDT Model for Efficient Information Literacy System Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160244</link>
        <id>10.14569/IJACSA.2025.0160244</id>
        <doi>10.14569/IJACSA.2025.0160244</doi>
        <lastModDate>2025-02-28T06:26:01.2200000+00:00</lastModDate>
        
        <creator>Jiali Dai</creator>
        
        <creator>Hanifah Jambari</creator>
        
        <creator>Mohd Hizwan Mohd Hisham</creator>
        
        <subject>Vocational education; talent; information literacy; system building; educational evaluation; gradient boosting; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Information literacy (IL) is essential for vocational education talents to thrive in the modern information age. Traditional assessment methods often lack quantitative precision and systematic evaluation models, making it difficult to accurately measure IL levels. This paper aims to develop a robust, data-driven model to assess information literacy in vocational education talents. The goal is to improve the accuracy and efficiency of IL evaluations by combining machine learning techniques with optimization algorithms. The proposed method integrates the Stock Exchange Trading Optimization (SETO) algorithm with the Gradient Boosting Decision Tree (GBDT) to construct the SETO-GBDT model. This model optimizes parameters such as the number of decision trees and tree depth. A comprehensive evaluation index system for IL is built, focusing on learning attitude, process, effect, and practice. The SETO-GBDT model was trained and tested using real-world data on IL indicators. The SETO-GBDT model outperformed traditional models such as Decision Tree, Random Forest, and GBDT optimized by other algorithms like SCA and SELO. Specifically, it achieved an RMSE of 0.13, an R&#178; of 0.98, and reduced evaluation time to 0.092 s, demonstrating superior accuracy and efficiency. The research concludes that the SETO-GBDT model offers a significant improvement in evaluating IL for vocational education talents. The model’s high accuracy and reduced evaluation time make it an effective tool for assessing and enhancing information literacy, aligning with the educational goals of developing well-rounded, information-savvy professionals.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_44-A_Hybrid_SETO_GBDT_Model_for_Efficient_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TPGR-YOLO: Improving the Traffic Police Gesture Recognition Method of YOLOv11</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160243</link>
        <id>10.14569/IJACSA.2025.0160243</id>
        <doi>10.14569/IJACSA.2025.0160243</doi>
        <lastModDate>2025-02-28T06:26:01.1900000+00:00</lastModDate>
        
        <creator>Xuxing Qi</creator>
        
        <creator>Cheng Xu</creator>
        
        <creator>Yuxuan Liu</creator>
        
        <creator>Nan Ma</creator>
        
        <creator>Hongzhe Liu</creator>
        
        <subject>Traffic police gesture recognition; loss function; YOLO algorithm; multi-scale feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In open traffic scenarios, gesture recognition for traffic police faces significant challenges due to the small scale of traffic police figures and complex backgrounds. To address this, this paper proposes a gesture recognition network based on an improved YOLOv11. The method enhances feature extraction and multi-scale information retention by integrating RFCAConv and C2DA modules into the backbone network. In the Neck part of the network, an edge-enhanced multi-branch fusion strategy is introduced, incorporating target edge information and multi-scale information during the feature fusion phase. Additionally, the combination of WIoU and SlideLoss loss functions optimizes the positioning of bounding boxes and the allocation of sample weights. Experimental validation was conducted on multiple datasets, and the proposed method achieved varying degrees of improvement across all metrics. Experimental results demonstrate that this method can accurately perform the task of recognizing traffic police gestures and exhibits good generalization capabilities for small targets and complex backgrounds.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_43-TPGR_YOLO_Improving_the_Traffic_Police.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Cloud Computing Adoption and its Impact on the Performance of IT Personnel in the Public Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160242</link>
        <id>10.14569/IJACSA.2025.0160242</id>
        <doi>10.14569/IJACSA.2025.0160242</doi>
        <lastModDate>2025-02-28T06:26:01.1430000+00:00</lastModDate>
        
        <creator>Noorbaiti Mahusin</creator>
        
        <creator>Hasimi Sallehudin</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <creator>Azana Hafizah Mohd Aman</creator>
        
        <creator>Farashazillah Yahya</creator>
        
        <subject>Cloud computing; cloud integration model introduction; performance of IT personnel; public sector; system integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study investigates the factors influencing cloud computing adoption in the public sector, emphasizing the performance of IT personnel. Through qualitative interviews with five IT management professionals in the public sector, we identify key challenges in integrating cloud computing systems. The primary issues include technical complexity, skill and knowledge deficits in data governance, and budget constraints. These insights inform the development of the Cloud Computing Capacity and Integration Model for the Public Sector, which proposes a comprehensive strategy to address these barriers. Our findings identified five key challenges to cloud computing adoption in the public sector. First, compatibility issues and system integration challenges resulting from conflicts between cloud platforms and older infrastructure contributed to operational inefficiency. Second, data migration issues due to incompatible formats and structures resulted in data loss and delays. Third, network constraints, such as limited bandwidth and high latency, hampered cloud service performance. Fourth, a lack of staff training hampered successful cloud integration, emphasizing the importance of focused capacity-building initiatives. Finally, budget constraints further impeded adoption, underscoring the need for additional financial support. Thus, the &quot;Cloud Computing Acceptance and Performance Model&quot; (CCAPM) presented in this research paper aims to deliver a comprehensive model that tackles a wide array of technical, operational, and human resource challenges to create an effective cloud computing ecosystem, enhance the adoption of cloud computing within the public sector, elevate the capabilities of public sector IT personnel, and develop a secure, resilient, and sustainable cloud computing environment in the public sector.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_42-Modeling_Cloud_Computing_Adoption_and_its_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Attention Mechanism Algorithm for Blockchain Credit Default Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160241</link>
        <id>10.14569/IJACSA.2025.0160241</id>
        <doi>10.14569/IJACSA.2025.0160241</doi>
        <lastModDate>2025-02-28T06:26:01.1130000+00:00</lastModDate>
        
        <creator>Wangke Lin</creator>
        
        <creator>Yue Liu</creator>
        
        <subject>Deep learning; attention mechanism; blockchain credit default prediction; special forces algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the rise of internet finance and the increasing demand for personal credit risk management, accurate credit default prediction has become essential for financial institutions. Traditional models face limitations in handling complex and large-scale data, especially in the blockchain domain, which has emerged as a crucial technology for securing and processing financial transactions. This paper aims to improve the accuracy and generalization of blockchain-based credit default prediction models by optimizing deep learning algorithms with the Special Forces Algorithm (SFA) and attention mechanism (AM) networks. The study introduces a hybrid approach combining SFA with AM to optimize the hyperparameters of the credit default prediction model. The model preprocesses blockchain credit data, extracts critical features such as user and loan information, and applies the SFA-AM algorithm to improve classification accuracy. Comparative analysis is conducted against other machine learning algorithms such as XGBoost, LightGBM, and LSTM. The SFA-AM model outperforms traditional models in key metrics, achieving higher precision (0.8289), recall (0.8075), F1 score (0.8180), and AUC value (0.9407). The model demonstrated better performance in identifying both default and non-default cases compared to other algorithms, with significant improvements in reducing misclassifications. The proposed SFA-AM model significantly enhances blockchain credit default prediction accuracy and generalization. While effective, the study acknowledges limitations in dataset diversity and model interpretability, suggesting that future research could expand on these areas for more robust applications across different financial sectors.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_41-Deep_Learning_Based_Attention_Mechanism_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Classification Convolution Neural Network Models for Chest Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160240</link>
        <id>10.14569/IJACSA.2025.0160240</id>
        <doi>10.14569/IJACSA.2025.0160240</doi>
        <lastModDate>2025-02-28T06:26:01.0630000+00:00</lastModDate>
        
        <creator>Noha Ayman</creator>
        
        <creator>Mahmoud E. A. Gadallah</creator>
        
        <creator>Mary Monir Saeid</creator>
        
        <subject>Convolution neural network; classification; chest X-ray; image preprocessing; U-Net; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Chest diseases significantly affect public health, causing more than one million hospital admissions and approximately 50,000 deaths annually in the United States. Chest X-ray imaging, a critically important imaging technique, helps in examining, diagnosing, and managing chest conditions by providing essential insights about the presence and severity of disease. This study introduces a novel chest X-ray classification framework leveraging a fine-tuned VGG19 model (16 layers) enhanced with CLAHE for improved contrast, binary mask attention to highlight abnormalities, and advanced data augmentation for better generalization. Key innovations include the use of a Probabilistic U-Net for lung segmentation to isolate critical features and weighted masks to focus on pathological regions, addressing class imbalance with computed class weights for fair learning. By achieving 95% accuracy and superior class-specific metrics, the proposed method outperforms existing deep learning approaches, providing a robust and interpretable solution for real-world healthcare applications; a test accuracy of 94.8% is achieved using customized VGG19-based models without a mask. The experimental results indicate that the proposed method surpasses current deep learning techniques in overall classification accuracy for chest disease detection.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_40-Multi_Classification_Convolution_Neural_Network_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectiveness of Immersive Contextual English Teaching Based on Fuzzy Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160239</link>
        <id>10.14569/IJACSA.2025.0160239</id>
        <doi>10.14569/IJACSA.2025.0160239</doi>
        <lastModDate>2025-02-28T06:26:01.0170000+00:00</lastModDate>
        
        <creator>Mei Niu</creator>
        
        <subject>Fuzzy evaluation; immersion; contextual English teaching; teaching effectiveness; teaching assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study investigates the real-world impact of immersive contextual instruction on English language education, verifies its contribution to the enhancement of linguistic skills and the improvement of learning attitudes, and evaluates the practicality and value of fuzzy evaluation in gauging teaching efficacy. A fuzzy comprehensive assessment model was built utilizing a language competency test and a learning attitude questionnaire, and the teaching effect was quantitatively examined from the experimental data using methods such as membership functions and weight calculation. The study&#39;s findings revealed that students in the experimental group performed much better than students in the control group in terms of language competence and learning attitudes, with an overall fuzzy score of 88.5 compared to 74.8 for the control group. The statistical test indicated a significant difference between the groups (p &lt; 0.001). The study also confirmed the scientific and practical validity of fuzzy evaluation in the assessment of multidimensional educational efficacy. Immersive contextual English teaching provides considerable benefits for improving students&#39; language skills and learning attitudes. The fuzzy assessment method introduces a new instrument for quantitative research on teaching efficacy and has a wide range of potential applications.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_39-Effectiveness_of_Immersive_Contextual_English_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Football Match Winning Probability Prediction: A CNN-BiLSTM_Att Model with Player Compatibility and Dynamic Lineup Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160238</link>
        <id>10.14569/IJACSA.2025.0160238</id>
        <doi>10.14569/IJACSA.2025.0160238</doi>
        <lastModDate>2025-02-28T06:26:01.0030000+00:00</lastModDate>
        
        <creator>Tao Quan</creator>
        
        <creator>Yingling Luo</creator>
        
        <subject>Football big data; match prediction; feature vector; tactical understanding; match analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In recent years, with the continuous expansion of the football market, the prediction of football match-winning probabilities has become increasingly important, attracting numerous professionals and institutions to engage in the field of football big data analysis. Pre-match data analysis is crucial for predicting match outcomes and formulating tactical strategies, and all top-level football events rely on professional data analysis teams to help teams gain an advantage. To improve the accuracy of football match winning probability predictions, this study has taken a series of measures: using the Word2Vec model to construct feature vectors to parse the compatibility between players; developing a winning probability prediction model based on LSTM to capture the dynamic changes in team lineups; designing an improved BILSTM_Att winning probability prediction model, which distinguishes the different impacts of players on match outcomes through an attention mechanism; and proposing a CNN-BILSTM_Att winning probability prediction model that combines the local feature extraction capability of CNN with the time series analysis of BILSTM. These research efforts provide more refined data support for football coaching teams and analysts. For the general audience, these in-depth analyses can help them understand the tactical layouts and match developments on the field more deeply, thereby enhancing their viewing experience and understanding of the matches.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_38-Advanced_Football_Match_Winning_Probability_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Long-Term Recommendation Model for Online Education Systems: A Deep Reinforcement Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160237</link>
        <id>10.14569/IJACSA.2025.0160237</id>
        <doi>10.14569/IJACSA.2025.0160237</doi>
        <lastModDate>2025-02-28T06:26:00.9700000+00:00</lastModDate>
        
        <creator>Wei Wang</creator>
        
        <subject>Deep reinforcement learning; long-term recommendation; intelligent tutoring system; Markov Decision Process; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Intelligent tutoring systems serve as tools capable of providing personalized learning experiences, with their efficacy significantly contingent upon the performance of recommendation models. For long-term instructional plans, these systems necessitate the provision of highly accurate, enduring recommendations. However, numerous existing recommendation models adopt a static perspective, disregarding the sequential decision-making nature of recommendations, rendering them often incapable of adapting to novel contexts. While some recent studies have delved into sequential recommendations, their emphasis predominantly centers on short-term predictions, neglecting the objectives of long-term recommendations. To surmount these challenges, this paper introduces a novel recommendation approach based on deep reinforcement learning. We conceptualize the recommendation process as a Markov Decision Process, employing recurrent neural networks to simulate the interaction between the recommender system and the students. Test results demonstrate that our model not only significantly surpasses traditional Top-N methods in hit rate and NDCG concerning the enhancement of long-term recommendations but also adeptly addresses scenarios involving cold starts. Thus, this model presents a new avenue for enhancing the performance of intelligent tutoring systems.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_37-Long_Term_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Evaluation of Teaching Quality in &quot;Smart Classroom&quot; with Application of Entropy Weight Coupled TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160236</link>
        <id>10.14569/IJACSA.2025.0160236</id>
        <doi>10.14569/IJACSA.2025.0160236</doi>
        <lastModDate>2025-02-28T06:26:00.9400000+00:00</lastModDate>
        
        <creator>Yajuan SONG</creator>
        
        <subject>Smart classroom; entropy weight method; TOPSIS method; teaching quality; optimization and improvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This research aims to investigate a scientific assessment methodology for the teaching quality of smart classrooms and to develop a multi-dimensional evaluation system combining the entropy weight technique and the TOPSIS approach. To comprehensively assess the pedagogical proficiency of educators, this paper selects the dimensions of teaching preparation, teaching process, teaching effect, and teaching reflection, and combines questionnaire surveys and statistical data to collect and analyze the data. The methodology first standardized the raw data to mitigate discrepancies among various scales; subsequently, the entropy weight method was employed to ascertain the weight of each evaluation index, indicating the significance of the indices through information entropy; ultimately, the TOPSIS method was utilized to evaluate teachers&#39; performance across each dimension and rank them based on their proximity to the positive and negative ideal solutions, culminating in a comprehensive assessment of teaching quality. The results show that the entropy weight method can effectively determine the weight of each index, and the TOPSIS method provides a clear ranking of teaching quality by calculating the distance from the ideal solution, helping to identify strengths and weaknesses in teaching. This paper concludes that the evaluation method combining the entropy weight method and the TOPSIS method can provide an objective and comprehensive teaching quality assessment for the smart classroom, but limitations remain, such as the small sample size and inadequate coverage of some teaching dimensions. Future research can further improve the evaluation system by expanding the sample size and adding evaluation dimensions to enhance its applicability and accuracy, providing stronger support for the continuous optimization of the smart classroom.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_36-Fuzzy_Evaluation_of_Teaching_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight CA-YOLOv7-Based Badminton Stroke Recognition: A Real-Time and Accurate Behavior Analysis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160235</link>
        <id>10.14569/IJACSA.2025.0160235</id>
        <doi>10.14569/IJACSA.2025.0160235</doi>
        <lastModDate>2025-02-28T06:26:00.9100000+00:00</lastModDate>
        
        <creator>Yuchuan Lin</creator>
        
        <subject>Badminton shot; pose recognition; YOLO V7; size adaptive input; model pruning; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the rapid development of sports technology, accurate and real-time recognition of badminton stroke postures has become essential for athlete training and match analysis. This study presents an improved YOLOv7-based method for badminton stroke posture recognition, addressing limitations in accuracy, real-time performance, and automation. To optimize the model, pruning techniques were applied to the backbone structure, significantly enhancing processing speed for real-time demands. A parameter-free attention module was integrated to improve feature extraction without increasing model complexity. Furthermore, key stroke action nodes were defined, and a joint point matching module was introduced to enhance recognition accuracy. Experimental results show that the improved model achieved a mAP@0.5 of 0.955 and a processing speed of 44 frames per second, demonstrating its capability to deliver precise and efficient badminton stroke recognition. This research provides valuable technical support for coaches and athletes, enabling better analysis and optimization of stroke techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_35-Lightweight_CA_YOLOv7_Based_Badminton_Stroke_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Air Quality Prediction Models for Banting: A Performance Evaluation of Lasso, mRMR, and ReliefF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160234</link>
        <id>10.14569/IJACSA.2025.0160234</id>
        <doi>10.14569/IJACSA.2025.0160234</doi>
        <lastModDate>2025-02-28T06:26:00.8630000+00:00</lastModDate>
        
        <creator>Siti Khadijah Arafin</creator>
        
        <creator>Suvodeep Mazumdar</creator>
        
        <creator>Nurain Ibrahim</creator>
        
        <subject>PM2.5 concentration; feature selection; Lasso; mRMR; RBFNN; ReliefF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study explores the effectiveness of various feature selection methods in forecasting next-day PM2.5 levels in Banting, Malaysia. Accurate prediction of PM2.5 concentrations is crucial for public health, enabling authorities to take timely actions to mitigate exposure to harmful pollutants. This study compares three feature selection methods (Lasso, mRMR, and ReliefF) using a dataset consisting of 43,824 data points collected from the Banting air quality monitoring station (CA22B). The dataset comprises ten variables: pollutant concentrations such as O3, CO, NO2, SO2, PM10, and PM2.5, along with meteorological parameters such as temperature, humidity, wind direction, and wind speed. The results revealed that Lasso outperformed both mRMR and ReliefF across various performance metrics, including accuracy, sensitivity, precision, F1 score, and AUROC. Lasso demonstrated a superior ability to handle multicollinearity, significantly improving the interpretability of the model by retaining only the most important variables. This suggests that the effectiveness of feature selection methods is highly dependent on the characteristics of the dataset, such as correlations among features. The top eight features selected by the Lasso method to predict PM2.5 levels in Banting are relative humidity, PM2.5, wind direction, ambient temperature, PM10, NO2, wind speed, and O3. The findings from this study contribute to the growing body of knowledge on air quality prediction models, highlighting the importance of selecting an appropriate feature selection method to achieve the best model performance. Future research should explore the application of the Lasso method in other geographical regions, including urban, suburban, and rural areas, to assess the generalizability of the results.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_34-Improving_Air_Quality_Prediction_Models_for_Banting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BlockMed: AI Driven HL7-FHIR Translation with Blockchain-Based Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160233</link>
        <id>10.14569/IJACSA.2025.0160233</id>
        <doi>10.14569/IJACSA.2025.0160233</doi>
        <lastModDate>2025-02-28T06:26:00.8470000+00:00</lastModDate>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Faheem Ahmad Reegu</creator>
        
        <creator>Abdoh Jabbari</creator>
        
        <creator>Rahul Ganpatrao Sonkamble</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <subject>Blockchain; health care; electronic health records (EHRs); interoperability; and healthcare system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Blockchain is a peer-to-peer (P2P) network technology that distributes information and protects data integrity, security, and privacy. Information exchange requires constant simplification. This comprehensive assessment examines the seamless integration of Electronic Health Records (EHRs) with blockchain technology. EHRs are represented using different standards, mainly HL7 and FHIR, and an exchanged EHR must be interpretable by both parties; such interpretation may face interoperability challenges. To overcome EHR interoperability difficulties, 18 blockchain-based alternatives were examined. Despite their promise, these systems have a variety of drawbacks relating to reliability, privacy, data integrity, and collaborative sharing. The systematic review comprises six phases: research, investigation, article curation, keyword abstraction, data distillation, and project trajectory monitoring. In total, 18 seminal articles on EHR interoperability and blockchain integration were identified, proposing many unique interoperability methods for blockchain-integrated EHR systems. Several blockchain applications, standards, and issues associated with EHR interoperability are described and analyzed. Numerous blockchain-based EHR frameworks have been implemented or proposed; their security aspects have been covered, but standards compliance and interoperability requirements are lacking, and further research in this area is needed. This study analyzes different national and international EHR standards and describes the current state of EHRs, including blockchain-based implementations, along with the interoperability issues between existing blockchain-based EHR frameworks. The research proposes a novel BlockMed framework that is interoperable across the HL7 and FHIR EHR standards. BlockMed is evaluated on data accuracy, mapping quality, response time, latency, interoperability coverage, AI model efficiency, consent and security management, cross-chain support, and patient and provider satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_33-BlockMed_AI_Driven_HL7_FHIR_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dialogue-Based Disease Diagnosis Using Hierarchical Reinforcement Learning with Multi-Expert Feedback</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160232</link>
        <id>10.14569/IJACSA.2025.0160232</id>
        <doi>10.14569/IJACSA.2025.0160232</doi>
        <lastModDate>2025-02-28T06:26:00.8130000+00:00</lastModDate>
        
        <creator>Shi Li</creator>
        
        <creator>Xueyao Sun</creator>
        
        <subject>Disease diagnosis; dialogue system; large language model; reinforcement learning; reward model; adversarial network; dialogue agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>We propose the Hierarchical Reinforcement Learning with Multi-expert Feedback framework to reduce the stochasticity of disease-diagnosis agents in dialogue systems, enabling them to interact with users based on the inherent connections between symptoms and diseases while also addressing the issue of limited medical data. The framework constructs a reward model in the lower-level networks of the hierarchical structure: a discriminator, leveraging the concept of adversarial networks, generates rewards by evaluating the authenticity of the symptom query sequences generated by the agent, while a large language model standing in for human experts synthesizes various factors to assess the reasonableness of the agent&#39;s current symptom queries, thereby guiding the learning of the policy network. The algorithm addresses deficiencies in data characteristics and improves the policy&#39;s capability to leverage feature information, making the disease diagnosis process more closely aligned with clinical practice. Experimental results demonstrate that the proposed framework achieves diagnostic success rates of 61.5% on synthetic datasets and 84.4% on real-world datasets, while requiring fewer dialogue turns on average. Both metrics surpass those of conventional approaches, further indicating the framework&#39;s strong generalization ability.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_32-Dialogue_Based_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Rapid Drift Modeling Method Based on Portable LiDAR Scanner</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160231</link>
        <id>10.14569/IJACSA.2025.0160231</id>
        <doi>10.14569/IJACSA.2025.0160231</doi>
        <lastModDate>2025-02-28T06:26:00.7830000+00:00</lastModDate>
        
        <creator>Zhao Huijun</creator>
        
        <creator>Liu Chao</creator>
        
        <creator>Qi Yunpu</creator>
        
        <creator>Song Zhanglun</creator>
        
        <creator>Xia Xu</creator>
        
        <subject>Underground mining; 3D modeling; portable 3D laser scanning; simultaneous localization and mapping (SLAM); mine surveying; inertial measurement unit (IMU)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Traditional measurement methods in underground mining tunnels have faced inefficiencies, limited accuracy, and operational challenges, consuming significant time and labor in complex environments. These limitations severely restrict the efficiency and quality of mine management and engineering design. To enhance the efficiency and accuracy of 3D modeling in underground tunnels, this study combines portable 3D LiDAR scanning technology with simultaneous localization and mapping. This integration enables autonomous positioning and efficient modeling without external positioning signals. The proposed approach effectively acquires high-resolution 3D data in complex environments, ensuring data accuracy and model reliability. High-resolution scanning of multiple critical areas was conducted on-site, with inertial navigation systems correcting the device&#39;s pose information. Automated data processing software was used for filtering, denoising, and modeling the collected data, leading to precise 3D tunnel models. Validation results indicate that portable laser scanning technology offers significant advantages in efficiency, accuracy, and safety, meeting the geological surveying and engineering needs of mining operations. The application of portable 3D laser scanning technology demonstrates considerable benefits in the rapid modeling of underground tunnels, providing effective technical support to improve mine management efficiency and safety. It also reveals broad application prospects.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_31-A_Rapid_Drift_Modeling_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Recurrent Neural Network Efficacy in Online Sales Predictions with Exploratory Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160230</link>
        <id>10.14569/IJACSA.2025.0160230</id>
        <doi>10.14569/IJACSA.2025.0160230</doi>
        <lastModDate>2025-02-28T06:26:00.7530000+00:00</lastModDate>
        
        <creator>Erni Widiastuti</creator>
        
        <creator>Jani Kusanti</creator>
        
        <creator>Herwin Sulistyowati</creator>
        
        <subject>Exploratory data analysis; recurrent neural networks; online sales prediction; sequential data; trend patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Online sales forecasting has become an essential aspect of effective business planning in the digital era. The widespread adoption of digital transformation has enabled companies to collect substantial datasets on consumer behavior, market trends, and sales drivers. This study attempts to uncover patterns and predict sales growth by utilizing product images and their associated filenames as input. To achieve this, we combine Exploratory Data Analysis (EDA) with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, which excel at processing sequential data. However, the performance of these networks is significantly affected by the quality of the data and the preprocessing methods applied. This study highlights the importance of EDA and ensemble methods in enhancing the efficacy of recurrent neural networks (RNNs) for online sales forecasting. EDA plays a crucial role in identifying significant patterns such as trends, seasonality, and autocorrelation while addressing data irregularities such as missing values and outliers. The findings show that integrating EDA substantially improves the RNN&#39;s performance metrics, as indicated by the reduction in loss and mean absolute error (MAE) values across training epochs (e.g., loss: 0.0720, MAE: 0.1918 at epoch 15). These results indicate that EDA improves the accuracy, stability, and efficiency of the model, allowing the RNN to provide more reliable sales predictions while minimizing the risk of overfitting.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_30-Enhancing_Recurrent_Neural_Network_Efficacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Artificial Intelligence to Automate Pattern Making for Personalized Garment Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160229</link>
        <id>10.14569/IJACSA.2025.0160229</id>
        <doi>10.14569/IJACSA.2025.0160229</doi>
        <lastModDate>2025-02-28T06:26:00.7200000+00:00</lastModDate>
        
        <creator>Muyan Han</creator>
        
        <subject>Machine learning models; pattern generation; AI-assisted pattern construction; data augmentation techniques; CAD flattening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This paper introduces an innovative AI-assisted pattern construction tool that leverages machine learning models to revolutionize pattern generation in garment design. The proposed system automatically generates patterns from 3D body scans, which are converted into 3D shell meshes and subsequently flattened into 2D patterns using advanced data augmentation techniques and CAD flattening algorithms. This approach eliminates the need for expertise in traditional pattern-making, enabling seamless transformation of 3D models into realistic garment patterns. The tool accommodates various garment styles, including fitted, standard fit, and relaxed fit, while also enabling high levels of personalization by adapting patterns to individual body dimensions. Through its AI-driven automation and user-friendly interface, this plug-in enhances accessibility, allowing individuals without conventional design skills to create customized apparel efficiently.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_29-Integrating_Artificial_Intelligence_to_Automate_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT CCTV Video Security Optimization Using Selective Encryption and Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160228</link>
        <id>10.14569/IJACSA.2025.0160228</id>
        <doi>10.14569/IJACSA.2025.0160228</doi>
        <lastModDate>2025-02-28T06:26:00.6730000+00:00</lastModDate>
        
        <creator>Kawalpreet Kaur</creator>
        
        <creator>Amanpreet Kaur</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Vidhyotma Gandhi</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <subject>Closed-Circuit Television (CCTV); decryption; encryption; internet of things (IoT); security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Data security and privacy are critical concerns when integrating Closed-Circuit Television (CCTV) cameras with the Internet of Things (IoT). To enhance security, IoT data must be encrypted before transmission and storage. However, to minimize overheads related to storage space, computational time, and transmission energy, data can be compressed prior to encryption. H.264/AVC (Advanced Video Coding) offers a balanced solution for video compression by addressing processing demands, video quality, and compression efficiency. Encryption is vital for safeguarding data security, yet the integrity of IoT data may sometimes be compromised. Ineffective data selection can lead to inefficiencies and potential security risks, highlighting the importance of addressing CCTV video data security carefully. This study proposes an algorithm that integrates compression with selective encryption techniques to reduce computational overhead while ensuring access to critical information for real-time analysis. By employing frame intervals, the algorithm enhances efficiency without compromising security. The execution details and merits of the proposed approach are analyzed, demonstrating its effectiveness in safeguarding the privacy and integrity of IoT CCTV video data. Results reveal superior performance in terms of compression efficiency and encryption/decryption times, with an average encryption time of 0.00171 seconds for a 128-bit key, enabling fast processing suitable for real-time applications. The decryption time matches the encryption time, confirming the method’s viability for practical IoT CCTV implementations. Metrics such as correlation coefficient, bitrate overhead, and histogram analysis further validate the approach’s robustness against statistical attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_28-IoT_CCTV_Video_Security_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Stopwords in Classical Chinese Poetry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160227</link>
        <id>10.14569/IJACSA.2025.0160227</id>
        <doi>10.14569/IJACSA.2025.0160227</doi>
        <lastModDate>2025-02-28T06:26:00.6430000+00:00</lastModDate>
        
        <creator>Lei Peng</creator>
        
        <creator>Xiaodong Ma</creator>
        
        <creator>Zheng Teng</creator>
        
        <subject>TF-IDF; stopwords; Chinese; poetry; frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In this research, we address the problem of stopword detection in Classical Chinese Poetry, an area that has not been explored previously. Stopword detection is crucial in text mining tasks, as identifying and removing stopwords is essential for improving the performance of various natural language processing models. Inspired by the TF-IDF method, we propose a novel approach that utilizes external knowledge to reconstruct the Term Weight matrix. Our key finding is that incorporating external knowledge significantly refines the granularity of the term weight, thereby improving the effectiveness of stopword detection. Based on these findings, we conclude that external knowledge can enhance the ability of text representation, especially for the short texts in Classical Chinese Poetry.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_27-Detection_of_Stopwords_in_Classical_Chinese_Poetry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Long Short-Term Memory-Based Bandwidth Prediction for Adaptive High Efficiency Video Coding Transmission Enhancing Quality of Service Through Intelligent Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160226</link>
        <id>10.14569/IJACSA.2025.0160226</id>
        <doi>10.14569/IJACSA.2025.0160226</doi>
        <lastModDate>2025-02-28T06:26:00.6130000+00:00</lastModDate>
        
        <creator>Hajar Hardi</creator>
        
        <creator>Imade Fahd Eddine Fatani</creator>
        
        <subject>HEVC adaptive streaming; LSTM networks; quality of service; proactive encoding adjustments; High Efficiency Video Coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the growing demand for high-quality video streaming, the necessity for efficient techniques to balance video quality and bandwidth has become increasingly critical to ensure a seamless user experience. Existing traditional adaptive streaming methods only react to network fluctuations, which often leads to delays, quality degradation, and buffering. This paper introduces an AI-powered approach for adaptive High Efficiency Video Coding (HEVC) transmission, using a predictive model based on Long Short-Term Memory (LSTM) networks to predict bandwidth variations and proactively adjust encoding parameters. The proposed approach uses historical and real-time network data to anticipate network changes, offering smoother transitions and reducing buffering. The experimental results demonstrate the system&#39;s effectiveness, achieving an improvement of 15% in Peak Signal-to-Noise Ratio (PSNR) and an increase of 12% in Structural Similarity Index (SSIM) compared to baseline methods. Additionally, the system reduces buffering events by 25% while improving bitrate stability by 20%, guaranteeing consistent video quality with minimal interruptions. This proactive approach significantly enhances Quality of Service (QoS) by providing stable video quality and uninterrupted streaming, representing a significant advancement in adaptive streaming technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_26-Long_Short_Term_Memory_Based_Bandwidth_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YOLOv7-b: An Enhanced Object Detection Model for Multi-Scale and Dense Target Recognition in Remote Sensing Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160225</link>
        <id>10.14569/IJACSA.2025.0160225</id>
        <doi>10.14569/IJACSA.2025.0160225</doi>
        <lastModDate>2025-02-28T06:26:00.5800000+00:00</lastModDate>
        
        <creator>Yulong Song</creator>
        
        <creator>Hao Yang</creator>
        
        <creator>Lijun Huang</creator>
        
        <creator>Song Huang</creator>
        
        <subject>YOLOv7-b; remote sensing images; object detection; deformable convolution; bi-level routing attention; multi-scale</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>To address the challenges of dense object distribution, scale variability, and complex shapes in remote sensing images, this paper proposes an improved YOLOv7-b model to enhance multi-scale target detection accuracy and robustness. First, deformable convolution (DCNv2) is introduced into the YOLOv7 backbone to replace the standard convolutions in the last two ELAN modules, thereby providing more flexible sampling capabilities and improving adaptability to irregularly shaped targets. Next, a Bi-level Routing Attention (BRA) module is integrated after the SPPCSPC module, employing both coarse- and fine-grained routing strategies to focus on densely distributed targets while suppressing irrelevant background. Finally, training and evaluation are conducted on the large-scale DIOR remote sensing dataset under unified hyperparameter settings and evaluation metrics, allowing a systematic assessment of the overall model performance. Experimental results show that, compared with the original YOLOv7, the improved YOLOv7-b achieves significant enhancements in Precision, Recall, mAP@0.5, and mAP@0.5:0.95, with mAP@0.5 and mAP@0.5:0.95 reaching 85.72% and 66.55%, respectively. Visualization further demonstrates that YOLOv7-b provides stronger recognition and localization for densely arranged, small-scale, and morphologically complex targets, effectively reducing missed and false detections. Overall, YOLOv7-b delivers higher detection accuracy and robustness in multi-scale remote sensing target detection. By combining deformable convolution with a dynamic sparse attention mechanism, the model excels in detecting highly deformable objects and dense scenes, offering a more adaptive and accurate solution for small-target detection, dense target recognition, and multi-scale detection in remote sensing imagery.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_25-YOLOv7_b_An_Enhanced_Object_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Analytics for Product Segmentation and Demand Forecasting of a Local Retail Store Using Python</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160224</link>
        <id>10.14569/IJACSA.2025.0160224</id>
        <doi>10.14569/IJACSA.2025.0160224</doi>
        <lastModDate>2025-02-28T06:26:00.5330000+00:00</lastModDate>
        
        <creator>Arun Kumar Mishra</creator>
        
        <creator>Megha Sinha</creator>
        
        <subject>Data analytics; product segmentation; demand forecasting; multicriteria ABC classification; seasonality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In today&#39;s competitive business environment, understanding customers&#39; expectations and choices is a necessity for the successful operation of a retail store. Forecasting demand also plays an important role in maintaining inventory at an optimum level. This work utilises data analytics for product segmentation and demand forecasting in a local retail store, with Python as the programming language. Historical sales data of a local store have been used to categorise products into different segments. Statistical techniques and a k-means clustering algorithm have been used to identify the different product segments, while machine learning algorithms and time series models have been used to forecast future sales trends. The resulting business insights allow the retail store to meet customers&#39; expectations, manage inventory at an optimum level, and enhance supply chain efficiency. The present work seeks to illustrate how data-driven tactics can enhance operational decision-making in retail.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_24-Data_Analytics_for_Product_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Cyber Threat Detection System Leveraging Machine Learning Using Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160223</link>
        <id>10.14569/IJACSA.2025.0160223</id>
        <doi>10.14569/IJACSA.2025.0160223</doi>
        <lastModDate>2025-02-28T06:26:00.5030000+00:00</lastModDate>
        
        <creator>Umar Iftikhar</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <subject>Anomaly detection; cyber threat intelligence; generative adversarial networks; data augmentation; Wasserstein GAN with gradient penalty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>In the modern era of cyber security, cyber-attacks are continuously evolving in complexity and frequency. In this context, organizations need to enhance Network Intrusion Detection Systems (NIDS) for anomaly detection. Although existing machine learning models are in place to handle these situations, new challenges emerge rapidly that affect their performance and efficiency, particularly the unavailability of large datasets and the prevalence of unorganized data. This results in degraded efficiency in identifying complex attacks. In this paper, data augmentation is performed on NSL-KDD, a standard dataset for Intrusion Detection Systems (IDS), specifically for IoT-based devices. The performance and efficiency of the NIDS are improved by training a K-Nearest Neighbor (KNN) machine learning model on the augmented dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_23-Enhanced_Cyber_Threat_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DyGAN: Generative Adversarial Network for Reproducing Handwriting Affected by Dyspraxia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160222</link>
        <id>10.14569/IJACSA.2025.0160222</id>
        <doi>10.14569/IJACSA.2025.0160222</doi>
        <lastModDate>2025-02-28T06:26:00.4870000+00:00</lastModDate>
        
        <creator>Jesús Jaime Moreno Escobar</creator>
        
        <creator>Hugo Quintana Espinosa</creator>
        
        <creator>Erika Yolanda Aguilar del Villar</creator>
        
        <subject>Children with neurodevelopmental disorders; dyspraxia; generative adversarial network; deep learning; deep convolutional neural network; human handwriting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Dyspraxia primarily affects coordination and is categorized into two forms: 1) motor, and 2) verbal or oral. This study focuses on motor dyspraxia, which affects how individuals learn movement-related tasks. The DyGAN initiative employs deep convolutional generative adversarial networks, using deep learning to create characters resembling human handwriting. The methodology is structured into two main stages: 1) the creation of a first-order cybernetic model, and 2) the execution phase. Using four independent variables and three dependent variables, eight outcomes were examined through analysis of variance. DyGAN consists of twin Deep Convolutional Neural Networks and is highly sensitive to the learning rate. The proposal scored 67%, suggesting that the generated characters can pass as written by a human. The project will feature writers from different backgrounds and will help augment data for dyspraxia writing resources, potentially benefiting those struggling with writing difficulties and improving our understanding of education. The model is designed to be widely applicable; future work could customize it, for example with neural networks, to mimic the way a specific child writes.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_22-DyGAN_Generative_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of the Benefits and Challenges of Data Analytics in Organizational Decision Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160221</link>
        <id>10.14569/IJACSA.2025.0160221</id>
        <doi>10.14569/IJACSA.2025.0160221</doi>
        <lastModDate>2025-02-28T06:26:00.4700000+00:00</lastModDate>
        
        <creator>Juan Carlos Morales-Arevalo</creator>
        
        <creator>Ciro Rodr&#237;guez</creator>
        
        <subject>Data analytics; decision-making; data-driven processes; big data analytics; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Organizations have come to rely heavily on data analytics in decision-making, as it enables accurate, timely, and data-driven processes across a wide range of industries. These factors grow more influential as the volume and complexity of data rise, alongside problems such as authentication, integration, and organizational resistance. The current study systematically reviews the benefits and challenges of data analytics for decision-making in different sectors, following the PRISMA guidelines. A total of 32 articles published from 2020 to 2024 were identified from reputable databases, including Scopus, Web of Science, IEEE Xplore, ProQuest, and Emerald Insight. The resulting insights underscore the power of data analytics in driving change, enabling decision-making that is more accurate, faster, and better aligned with organizational objectives. Challenges remain, however, including fragmented data systems, the lack of standardized norms across sectors, and resistance in organizations where data literacy is low or cultures resist data-driven practices. To mitigate these challenges, this review offers organizations practical recommendations for management. Companies that successfully incorporate analytics into their overall business strategies and create organization-wide appreciation for data and insights will be able to leverage analytics more effectively to enhance efficiency, encourage innovative growth, and navigate future disruptions. Tackling these challenges, however, is more than just optimizing performance; it is about future-proofing organizations in a world increasingly defined by data.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_21-A_Systematic_Review_of_the_Benefits_and_Challenges_of_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Optimization with Span&lt;T&gt; and Memory&lt;T&gt; in C# When Handling HTTP Requests: Real-World Examples and Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160220</link>
        <id>10.14569/IJACSA.2025.0160220</id>
        <doi>10.14569/IJACSA.2025.0160220</doi>
        <lastModDate>2025-02-28T06:26:00.4230000+00:00</lastModDate>
        
        <creator>Daniel Damyanov</creator>
        
        <creator>Ivaylo Donchev</creator>
        
        <subject>NET; C#; optimization; memory optimization; span; memory; development; HTTP requests; data structures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Optimization of application performance is a critical aspect of software development, especially when dealing with high-throughput operations such as handling HTTP requests. In modern C#, the Span&lt;T&gt; and Memory&lt;T&gt; structures provide powerful tools for working with memory more efficiently, reducing heap allocations, and improving overall performance. This paper explores the practical applications of Span&lt;T&gt; and Memory&lt;T&gt; in the context of optimizing HTTP request processing. Real-world examples and approaches are presented that demonstrate how these types can minimize memory fragmentation, avoid unnecessary data copying, and enable high-performance parsing and transformation of HTTP request data. By leveraging these advanced memory structures, developers can significantly enhance the throughput and responsiveness of their applications, particularly in resource-constrained environments or systems handling many concurrent requests. This paper aims to provide developers with actionable insights and strategies for integrating these techniques into their .NET applications for improved performance.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_20-Performance_Optimization_with_Span_T_and_Memory_T.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scallop Segmentation Using Aquatic Images with Deep Learning Applied to Aquaculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160219</link>
        <id>10.14569/IJACSA.2025.0160219</id>
        <doi>10.14569/IJACSA.2025.0160219</doi>
        <lastModDate>2025-02-28T06:26:00.4100000+00:00</lastModDate>
        
        <creator>Wilder Nina</creator>
        
        <creator>Nadia L. Quispe</creator>
        
        <creator>Liz S. Bernedo-Flores</creator>
        
        <creator>Marx S. Garcia</creator>
        
        <creator>Cesar Valdivia</creator>
        
        <creator>Eber Huanca</creator>
        
        <subject>Image segmentation; object detection; deep learning; computer vision; aquaculture; scallop segmentation; aquatic images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This study evaluates the performance of deep learning-based segmentation models applied to underwater images for scallop aquaculture in Sechura Bay, Peru. Four models were analyzed: SUIM-Net, YOLOv8, DETECTRON2, and CenterMask2. These models were trained and tested using two custom datasets: SEG SDS GOPRO and SEG SDS SF, which represent diverse underwater scenarios, including clear and turbid waters, varying current intensities, and sandy substrates. The primary aim was to automate scallop identification and segmentation to improve the efficiency and safety of aquaculture monitoring. The evaluation showed that SUIM-Net achieved the highest accuracy of 93% and 94% on the SEG SDS GOPRO and SEG SDS SF datasets, respectively. CenterMask2 performed best on the SEG SDS SF dataset, with an accuracy of 96.5%. Additionally, a combined dataset was used, where YOLOv8 achieved an accuracy of 88%, demonstrating its robustness across varied conditions. Beyond scallop segmentation, the models were extended to detect six additional marine classes, achieving a maximum accuracy of 39.90% with YOLOv8. This research underscores the potential of deep learning techniques to revolutionize aquaculture by reducing operational risks, minimizing costs, and enhancing monitoring accuracy. The findings contribute valuable insights into the challenges and opportunities of applying artificial intelligence in underwater environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_19-Scallop_Segmentation_Using_Aquatic_Images_with_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Deep Learning in Art and Design: Computational Techniques for Enhancing Creative Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160218</link>
        <id>10.14569/IJACSA.2025.0160218</id>
        <doi>10.14569/IJACSA.2025.0160218</doi>
        <lastModDate>2025-02-28T06:26:00.3630000+00:00</lastModDate>
        
        <creator>Yanjie Deng</creator>
        
        <creator>Qibing Zhai</creator>
        
        <subject>Deep learning; art; design; creative expression; computational techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The integration of deep learning into art and design is an innovative process with the potential to reframe how human imagination is expressed. This paper explores this broad field, showcasing how AI, and deep learning in particular, enhances artistic practice. The study comprises an exhaustive analysis of cutting-edge models, including generative adversarial networks (GANs), neural style transfer, and multimodal AI, that assist in the creation, modification, and optimization of artistic work. The research points to implementations of these techniques in the visual arts, graphic design, and interactive media, providing contemporary examples where deep learning has augmented traditional media and created new forms of art. The paper also addresses the challenges and ethical considerations surrounding algorithmic art, including issues of authorship, bias, and intellectual property. By situating computational methods within the realm of artistic expression, the paper provides insights into the change that deep learning can effect for artists, designers, and technologists.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_18-Integrating_Deep_Learning_in_Art_and_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Convolutional Neural Network Architectures for Detecting Drowsiness in Drivers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160217</link>
        <id>10.14569/IJACSA.2025.0160217</id>
        <doi>10.14569/IJACSA.2025.0160217</doi>
        <lastModDate>2025-02-28T06:26:00.3300000+00:00</lastModDate>
        
        <creator>Bryan Hurtado Delgado</creator>
        
        <creator>Marycielo Xiomara Oscco Guillen</creator>
        
        <creator>Mario Aquino Cruz</creator>
        
        <subject>Architectures; detection; drowsiness; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Drowsiness in drivers is a condition that can manifest itself at any time, representing a constant challenge for road safety, especially in a context where artificial intelligence technologies are increasingly present in driver assistance systems. This paper presents a comparative evaluation of convolutional neural network (CNN) architectures for drowsiness detection, focusing on the identification of signals such as eye state and yawning. The research was of an applied type with a descriptive level, comparing the performance of LeNet, DenseNet121, InceptionV3 and MobileNet under challenging conditions, such as lighting and motion variations. A non-experimental design was used, with two datasets: a public dataset from Kaggle that included images classified into two categories (yawn and no yawn) and another created specifically for this study, which included images classified into three main categories (eyes open, eyes closed and undetected). The results indicated that, although all architectures performed well in controlled conditions, MobileNet stood out as the most accurate and consistent in challenging scenarios. DenseNet121 also showed good performance, while LeNet was effective in eye-state detection. This study provided a comprehensive assessment of the capabilities and limitations of CNNs for applications in drowsiness monitoring systems, and suggested future directions for improving accuracy in more challenging environments.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_17-Evaluation_of_Convolutional_Neural_Network_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mart Design to Increase Transactional Flow of Debit and Credit Card in Peruvian Bodegas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160216</link>
        <id>10.14569/IJACSA.2025.0160216</id>
        <doi>10.14569/IJACSA.2025.0160216</doi>
        <lastModDate>2025-02-28T06:26:00.3130000+00:00</lastModDate>
        
        <creator>Juan Carlos Morales-Arevalo</creator>
        
        <creator>Erick Manuel Aquise-Gonzales</creator>
        
        <creator>William Yohani Carpio-Ore</creator>
        
        <creator>Emmanuel Victor Mendoza S&#225;enz</creator>
        
        <creator>Carlos Javier Mazzarri-Rodriguez</creator>
        
        <creator>Erick Enrique Remotti-Becerra</creator>
        
        <creator>Edison Humberto Medina-La Plata</creator>
        
        <creator>Luis F. Luque-Vega</creator>
        
        <subject>Business intelligence; Extract, Transform, Load (ETL); dashboard; data mart; Point of Sale (POS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The objective of this research is to design a Data Mart to identify tactical actions and increase the use of POS (points of sale) in the bodega business sector of Lima, Peru. A quantitative approach based on transaction history data is applied using the Kimball methodology. This involves the ETL (Extract, Transform, Load) process to create a dimensional model and to develop a dashboard to visualize key indicators using Power BI. This solution is expected to improve the detection and analysis of transactional errors, categorized by geographic location and business sector, while enhancing decision-making processes. This research improves the transactional flow and digital payment adoption in small businesses, fostering greater financial inclusion in the Peruvian market. Therefore, the methodology and tools applied in this research offer a framework that can serve as a model for similar contexts, especially in emerging markets, helping to close gaps in digital payment adoption and financial inclusion.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_16-Data_Mart_Design_to_Increase_Transactional_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of the TikTok Algorithm on the Effectiveness of Marketing Strategies: A Study of Consumer Behavior and Content Preferences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160215</link>
        <id>10.14569/IJACSA.2025.0160215</id>
        <doi>10.14569/IJACSA.2025.0160215</doi>
        <lastModDate>2025-02-28T06:26:00.2670000+00:00</lastModDate>
        
        <creator>Raquel Melgarejo-Espinoza</creator>
        
        <creator>Mauricio Gonzales-Cruz</creator>
        
        <creator>Juan Chavez-Perez</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <subject>TikTok; algorithm; consumer behavior; marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>TikTok has become one of the most widely used platforms; its innovative video format has allowed companies and users to increase their visibility, transforming the way brands communicate their strategies. This systematic literature review (SLR) explored how the TikTok algorithm influences marketing strategies during the period 2021 to 2024. For this purpose, research was conducted based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Also, reliable and relevant research databases were consulted, specifically Springer, Science Direct and EBSCO, from which 64 studies aligned with the inclusion and exclusion criteria were extracted, all corresponding to academic articles. After compilation, it was determined that 2024 was the year with the highest number of publications, representing 50% of the total number of articles. Likewise, the country that stood out was China, with 28.13% of the related documents. Regarding the research approach, quantitative research predominated, followed by qualitative and mixed research. Finally, the study helped to understand the positive impact of TikTok on marketing, showing how it improves the visibility of brands, as well as identifying trends in consumer preferences, which allows the creation of more accurate strategies that are closer to the public.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_15-Impact_of_the_TikTok_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Parenchyma Segmentation Using Mask R-CNN in COVID-19 Chest CT Scans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160214</link>
        <id>10.14569/IJACSA.2025.0160214</id>
        <doi>10.14569/IJACSA.2025.0160214</doi>
        <lastModDate>2025-02-28T06:26:00.2070000+00:00</lastModDate>
        
        <creator>Wilmer Alberto Pacheco Llacho</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Luis David Huallpa Tapia</creator>
        
        <subject>Mask R-CNN; ResNet-50; computed tomography; lung parenchyma; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>During the COVID-19 pandemic, the precise evaluation of lung impairments using computed tomography (CT) scans became critical for understanding and managing the disease; however, specialists faced a high workload and the urgent need to deliver fast and accurate results. To address this, deep learning models offered a promising solution by automating lung identification and lesion localization associated with COVID-19. This study employs the semantic segmentation technique Mask R-CNN, integrated with a ResNet-50 backbone, to analyze CT scans of COVID-19 patients. The model was trained using an annotated dataset, enhancing its ability to accurately segment and delineate the lung parenchyma in CT images. The results showed that Mask R-CNN achieved a Dice Similarity Coefficient (DSC) of 93.4%, demonstrating high concordance between the segmented areas and clinically relevant regions. These findings highlight the effectiveness of the proposed approach for precise lung tissue segmentation in CT scans, enabling quantitative assessments of lung impairments and providing valuable insights for diagnosis and patient monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_14-Lung_Parenchyma_Segmentation_Using_Mask_R_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Training and Predicting the Occurrence of Potato Late Blight Based on an Analysis of Future Weather Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160213</link>
        <id>10.14569/IJACSA.2025.0160213</id>
        <doi>10.14569/IJACSA.2025.0160213</doi>
        <lastModDate>2025-02-28T06:26:00.1600000+00:00</lastModDate>
        
        <creator>Daniel Damyanov</creator>
        
        <creator>Ivaylo Donchev</creator>
        
        <subject>Machine learning; potato late blight; data analysis; forecast; prediction models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Plant diseases pose a significant challenge to agriculture, leading to serious economic losses and a risk to food security. Predicting and managing diseases such as potato blight requires an analysis of key environmental factors, including temperature, dew point, and humidity, that influence the development of pathogens. The current study uses machine learning to integrate this data for the purpose of early detection of diseases. The use of local weather data from sensors, combined with forecast data from public weather API servers, is a prerequisite for accurate short-term forecasting of adverse events. The results highlight the potential of predictive models to optimize prevention strategies, reduce losses and support sustainable crop management. Machine learning provides powerful tools for analyzing and predicting data related to plant diseases. Combining different approaches allows the creation of more precise and adaptive models for disease management.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_13-Model_for_Training_and_Predicting_the_Occurrence_of_Potato.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Chatbot for the Legal Sector of Mauritius Using the Retrieval-Augmented Generation AI Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160212</link>
        <id>10.14569/IJACSA.2025.0160212</id>
        <doi>10.14569/IJACSA.2025.0160212</doi>
        <lastModDate>2025-02-28T06:26:00.0030000+00:00</lastModDate>
        
        <creator>Taariq Noor Mohamed</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <creator>Ivan Coste-Mani&#232;re</creator>
        
        <subject>Law; chatbot; retrieval augmented generation; large language model; OpenAI; Mistral AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Mauritius is known to have a hybrid legal system as the logical consequence of being both a former French and English colony. From its independence in 1968 to date, the legal environment has changed to reflect the constant need to provide a framework to address the country’s diverse needs. With over 1200 pieces of legislation available for consultation, including those which are no longer in force, it is very difficult to know all of them. Yet, there is a legal maxim that says, “nemo censetur ignorare legem”. In other words, ignorance of the law is no excuse. This study aims to provide a solution for professionals and non-professionals to have better access to the law through the development of a chatbot. A Retrieval Augmented Generation (RAG) chatbot system has been developed to achieve this objective. A RAG system is one that leverages the use of Large Language Models (LLM) to process a query and generate a response, while ensuring accuracy by performing similarity searches against documents stored in a vector database. A sample of 46 legal documents (acts and regulations) was retrieved from the website of the Supreme Court of Mauritius. They were broken down into chunks and stored as vectors in Chroma, a vector database. The chatbot combines and processes the queries with a text prompt, searches the relevant legal texts, and generates an appropriate response using OpenAI GPT-4o-mini or MistralAI Open-Mixtral-8x22B. Since most legal texts are in English, a translation layer is included for queries in French. Sources for the answers are also displayed for easy cross-validation. This chatbot will undoubtedly be a useful tool for the Mauritian people.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_12-A_Chatbot_for_the_Legal_Sector_of_Mauritius.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation and Selection of Appropriate Congestion Control Algorithms for MPT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160211</link>
        <id>10.14569/IJACSA.2025.0160211</id>
        <doi>10.14569/IJACSA.2025.0160211</doi>
        <lastModDate>2025-02-28T06:25:59.9570000+00:00</lastModDate>
        
        <creator>Naseer Al-Imareen</creator>
        
        <creator>G&#225;bor Lencse</creator>
        
        <subject>Packet loss; congestion control; MPT-GRE; delay; throughput; jitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Recent academic research highlights a growing interest in multipath technologies, which offer promising solutions to networking challenges in complex environments. This interest is reflected in the emergence of protocols such as Multipath TCP (MPTCP) and Multipath UDP-in-GRE (MPT-GRE). The development of network protocols, particularly various iterations of the Transmission Control Protocol (TCP), has been distinguished by congestion detection and control algorithms, such as HighSpeed, CUBIC, Reno, LP, BBR, and Illinois. This paper evaluates the performance and suitability of these algorithms for multipath MPT-GRE networks under varying conditions, including delay, jitter, and data loss at different transmission speeds (both symmetric and asymmetric). Using StarBED resources, we applied delay, jitter, or packet loss to one of two physical paths to simulate congestion. The results demonstrate that some algorithms, HighSpeed and BBR among them, significantly enhance Quality of Service (QoS) metrics and network throughput in multipath MPT-GRE networks. These findings provide valuable insights into their performance and practical applications.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_11-Performance_Evaluation_and_Selection_of_Appropriate_Congestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Software Tool for Learning the Fundamentals of CubeSat Angular Motion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160210</link>
        <id>10.14569/IJACSA.2025.0160210</id>
        <doi>10.14569/IJACSA.2025.0160210</doi>
        <lastModDate>2025-02-28T06:25:59.9230000+00:00</lastModDate>
        
        <creator>Victor Romero-Alva</creator>
        
        <creator>Angelo Espinoza-Valles</creator>
        
        <subject>CubeSat; angular motion; simulation; learning tool; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The development of tools for understanding and simulating CubeSat angular motion is essential for both educational and research purposes in space technology. In this context, this paper presents the development of a MATLAB-based software tool designed to facilitate the comprehension of CubeSat angular motion. This tool allows users to simulate CubeSat dynamics by adjusting parameters, such as initial conditions and physical properties, enabling the observation of different types of motion, including rotatory and oscillatory motion, with both stable and unstable behaviors. The mathematical models selected for simulating the CubeSat dynamics are presented. The interface of the tool, designed for intuitive parameter input and visualization of phase portraits of the system under consideration, is described. The software is demonstrated using a CubeSat 3U configuration, and simulation results, including angle of attack, angular velocity, and altitude decay, are presented. This tool aims to enhance the understanding of CubeSat angular motion, contributing to the design and operation of CubeSat missions in low Earth orbit.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_10-Development_of_a_Software_Tool_for_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Based on Geolocation for the Recruitment of General Services in Trujillo, La Libertad</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160209</link>
        <id>10.14569/IJACSA.2025.0160209</id>
        <doi>10.14569/IJACSA.2025.0160209</doi>
        <lastModDate>2025-02-28T06:25:59.8930000+00:00</lastModDate>
        
        <creator>Melissa Giannina Alvarado Baudat</creator>
        
        <creator>Camila Vertiz Asmat</creator>
        
        <creator>Fernando Sierra-Li&#241;an</creator>
        
        <subject>Mobile application; recruitment; geolocation; general services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Currently, there is no technological solution that efficiently facilitates the offering of general services by independent workers in the city of Trujillo. This limitation reduces job opportunities, as workers secure fewer contracts due to reliance on client recommendations, a method that is often inefficient due to long response times and low accessibility. Leveraging the versatility of mobile applications, this study contributes to computer science by demonstrating how cloud-based data management, real-time communication, and location-based service matching using Google APIs optimize service efficiency and user experience. The study follows an applied research approach with a quantitative methodology, employing a pre-experimental explanatory design and a sample of 22 workers selected through non-probabilistic convenience sampling. The development was carried out using the Flutter framework and the Dart programming language, with an SQL database hosted on Microsoft Azure cloud services. The Mobile-D agile methodology guided the development process. After implementing the application, the results showed an 86.79% reduction in the average hiring process time, a 50% increase in the number of contracts completed, and a 51.27% improvement in workers&#39; average satisfaction. These findings highlight the effectiveness of mobile and cloud computing technologies, along with ranking algorithms and geolocation services, in streamlining labor market interactions and improving user experience.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_9-Mobile_Application_Based_on_Geolocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Subjective Perception of a Driver’s Pain Level Based on Their Facial Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160208</link>
        <id>10.14569/IJACSA.2025.0160208</id>
        <doi>10.14569/IJACSA.2025.0160208</doi>
        <lastModDate>2025-02-28T06:25:59.8630000+00:00</lastModDate>
        
        <creator>F. Hadi</creator>
        
        <creator>O. Fukuda</creator>
        
        <creator>W. LYeoh</creator>
        
        <creator>H. Okumura</creator>
        
        <creator>Y. Rodiah</creator>
        
        <creator>Herlina</creator>
        
        <creator>A. Prasetyo</creator>
        
        <subject>Pain; driver; convolution neural network; facial expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>One factor that has a positive correlation with the risk of traffic accidents is the pain experienced by drivers. This pain is sometimes expressed facially by the driver and can be subjectively perceived by others. By observing the facial expressions of drivers, a system can estimate the pain experienced at that point in time and intervene to prevent some accidents. This study proposes a method to automatically estimate the pain level expressed by a driver from their facial expression. The model is a convolutional neural network trained on a public dataset of facial expressions at various pain levels. This model is then used to automatically classify the perceived pain level using only the facial expressions of drivers. The result of the automated classification is then compared to subjective ratings of the driver’s pain evaluated by a medical doctor. The experimental results showed that the model’s classification of the pain level expressed facially by the drivers matched that of the medical doctor at a rate of 80%.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_8-Automated_Subjective_Perception_of_a_Drivers_Pain_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Network Bandwidth Prediction with Multi-Output Gaussian Process Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160207</link>
        <id>10.14569/IJACSA.2025.0160207</id>
        <doi>10.14569/IJACSA.2025.0160207</doi>
        <lastModDate>2025-02-28T06:25:59.8300000+00:00</lastModDate>
        
        <creator>Shude Chen</creator>
        
        <creator>Takayuki Nakachi</creator>
        
        <subject>Network traffic prediction; Multi-Output Gaussian Process (MOGP); signal processing; time series analysis; predictive modeling; multi-channel data; IoT traffic monitoring; 5G networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Modern network environments, especially in domains like 5G and IoT, exhibit highly dynamic and nonlinear traffic behaviors, posing significant challenges for accurate time series analysis and predictive modeling. Traditional approaches, including stochastic ARIMA and deep learning-based LSTM, frequently encounter difficulties in capturing rapid signal variations and inter-channel dependencies, often due to data sparsity or excessive computational cost. To address these issues, this paper proposes a Multi-Output Gaussian Process (MOGP) framework augmented with a novel signal processing strategy, where additional signals are generated by summing adjacent elements over multiple window sizes. Such multi-scale enrichment effectively leverages cross-channel correlations, enabling the MOGP model to discover complex temporal patterns in multi-channel data. Experimental results on real-world network traces highlight that the proposed method achieves consistently lower RMSE compared to conventional single-output or deep learning methods, thereby underscoring its value for robust bandwidth estimation. Our findings suggest that integrating MOGP with multi-scale augmentation holds promise for a wide range of predictive analytics applications, including resource allocation in 5G networks and traffic monitoring in IoT systems.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_7-Enhanced_Network_Bandwidth_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSCHED: An Effective Heterogeneous Resource Management for Simultaneous Execution of Task-Based Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160206</link>
        <id>10.14569/IJACSA.2025.0160206</id>
        <doi>10.14569/IJACSA.2025.0160206</doi>
        <lastModDate>2025-02-28T06:25:59.8000000+00:00</lastModDate>
        
        <creator>Etienne Ndamlabin</creator>
        
        <creator>Berenger Bramas</creator>
        
        <subject>Heterogeneous resource management; scheduling; task-based applications; gradient descent; StarPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Modern parallel architectures have heterogeneous processors and complex memory hierarchies, offering up to billion-way parallelism at multiple hierarchical levels. Their exploitation by HPC applications greatly boosts scientific discoveries and advances, but they are still not fully utilized, leading to proportionally high energy consumption. The task-based programming model has demonstrated promising potential in developing scientific applications on modern high-performance platforms. This work introduces RSCHED, a new framework for managing the concurrent execution of task-based applications. The framework aims to minimize the overall time spent executing a set of applications and maximize resource utilization. RSCHED is a two-level resource management framework covering resource distribution and task scheduling, with resources sharable and reusable on the fly. Among other strategies for resource distribution, a new Gradient Descent model is proposed, owing to its well-known speedy convergence even in fast-growing systems. We implemented our proposal on StarPU and evaluated it on real applications. RSCHED demonstrated the potential to reduce the overall makespan of executed applications compared to consecutive execution by an average factor of 10x, and the potential to increase resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_6-RSCHED_An_Effective_Heterogeneous_Resource_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Ray Tracing Technology Through OptiX to Compute Particle Interactions with Cutoff in a 3D Environment on GPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160205</link>
        <id>10.14569/IJACSA.2025.0160205</id>
        <doi>10.14569/IJACSA.2025.0160205</doi>
        <lastModDate>2025-02-28T06:25:59.7670000+00:00</lastModDate>
        
        <creator>David Algis</creator>
        
        <creator>Berenger Bramas</creator>
        
        <subject>CUDA; graphics processing unit; high-performance computing; OptiX; particle interactions; ray tracing; scientific computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Particle interaction simulation is a fundamental method of scientific computing that requires high-performance solutions. In this context, computing on graphics processing units (GPUs) has become standard due to the significant performance gains over conventional CPUs. However, since GPUs were originally designed for 3D rendering, they still retain several features that are not fully exploited in scientific computing. One such feature is ray tracing, a powerful technique for rendering 3D scenes. In this paper, we propose exploiting ray tracing technology via OptiX and CUDA to compute particle interactions with a cutoff distance in a 3D environment on GPUs. To this end, we describe algorithmic techniques and geometric patterns for efficiently determining the interaction lists for each particle. Our approach enables the computation of interactions with quasi-linear complexity in the number of particles, eliminating the need to construct a grid of cells or an explicit kd-tree. We compare the performance of our method to a classical grid-based approach and demonstrate that our approach is faster in most cases with non-uniform particle distributions.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_5-Exploiting_Ray_Tracing_Technology_Through_OptiX.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resampling Imbalanced Healthcare Data for Predictive Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160204</link>
        <id>10.14569/IJACSA.2025.0160204</id>
        <doi>10.14569/IJACSA.2025.0160204</doi>
        <lastModDate>2025-02-28T06:25:59.7370000+00:00</lastModDate>
        
        <creator>Manoj Yadav Mamilla</creator>
        
        <creator>Ronak Al-Haddad</creator>
        
        <creator>Stiphen Chowdhury</creator>
        
        <subject>Imbalanced data; resampling; machine learning; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>Imbalanced datasets pose significant challenges in healthcare for developing accurate predictive models in medical diagnostics. In this work, we explore the effectiveness of combining resampling methods with machine learning algorithms to enhance prediction accuracy for imbalanced heart and lung disease datasets. Specifically, we integrate undersampling techniques such as Edited Nearest Neighbours (ENN) and Instance Hardness Threshold (IHT) with oversampling methods like Random Oversampling (RO), Synthetic Minority Oversampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN). These resampling strategies are paired with classifiers including Decision Trees (DT), Random Forests (RF), K-Nearest Neighbours (KNN), and Support Vector Machines (SVM). Model performance is evaluated using accuracy, precision, recall, F1 score, and the Area Under the Curve (AUC). Our results show that tailored resampling significantly boosts machine learning model performance in healthcare settings. Notably, SVM with ENN undersampling markedly improves accuracy for lung cancer predictions, while SVM and RF with IHT achieve higher validation accuracies for both diseases. Random oversampling shows variable effectiveness across datasets, whereas SMOTE and ADASYN consistently enhance accuracy. This study underscores the value of integrating strategic resampling with machine learning to improve predictive reliability for imbalanced healthcare data.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_4-Resampling_Imbalanced_Healthcare_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Mapping Approach of Emergency Events and Locations Based on Object Detection and Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160203</link>
        <id>10.14569/IJACSA.2025.0160203</id>
        <doi>10.14569/IJACSA.2025.0160203</doi>
        <lastModDate>2025-02-28T06:25:59.6900000+00:00</lastModDate>
        
        <creator>Khalid Alfalqi</creator>
        
        <creator>Martine Bellaiche</creator>
        
        <subject>Machine learning; deep learning; big data; social networks; object detection; emergency event detection; Snapchat; hotspot map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>The high prevalence of cellphones and social networking platforms such as Snapchat is dissolving traditional barriers between information providers and end-users. This is particularly relevant in emergency events, as individuals on site produce and exchange real-time information about the event. However, notwithstanding their demonstrated significance, obtaining event-related information from real-time streams of vast numbers of snaps is a significant challenge. To address this gap, this paper proposes an automated approach for mapping emergency events and locations based on object detection and social networks. Employing object detection methods on social networks to detect emergency events yields a reliable, flexible and fast approach, utilizing the Snapchat hotspot map as a trustworthy source to discover the exact location of emergency events. Moreover, the proposed approach aims to yield high accuracy by employing state-of-the-art object detectors. This paper evaluates the performance of four object detection baseline models and the proposed ensemble approach for detecting emergency events. Results show that the proposed approach achieved a very high accuracy of 96% on the flood dataset and 94% on the fire dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_3-An_Automated_Mapping_Approach_of_Emergency_Events.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Light-Weight Federated Transfer Learning Approach to Malware Detection on Computational Edges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160202</link>
        <id>10.14569/IJACSA.2025.0160202</id>
        <doi>10.14569/IJACSA.2025.0160202</doi>
        <lastModDate>2025-02-28T06:25:59.6600000+00:00</lastModDate>
        
        <creator>Sakshi Mittal</creator>
        
        <creator>Prateek Rajvanshi</creator>
        
        <creator>Riaz Ul Amin</creator>
        
        <subject>Malware detection; transfer learning; lightweight transfer learning; federated learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>With the rapid increase in edge computing devices, lightweight methods to identify and stop cyber-attacks have become a topic of interest for the research community. The fast proliferation of smart devices and customers’ concerns regarding data security and privacy have necessitated new methods to counter cyber attacks. This work presents a unique lightweight transfer learning method to leverage malware detection in federated mode. Existing systems seem insufficient in terms of providing cyber security in resource-constrained environments. Fast IoT device deployment raises a serious threat from malware attacks, which calls for more efficient, real-time detection systems. Using a transfer learning model over a federated architecture (with federated learning support), the research proposes to counter these cyber risks and achieve efficiency in the detection of malware in particular. The study assessed the performance of the model on a real-world, publicly accessible IoT network dataset, Aposemat IoT-23. Extensive testing shows that, with training accuracy approaching 98% and validation accuracy reaching 97.6% within 10 epochs, the proposed model achieves great detection accuracy of over 98%. These findings show how well the model detects malware threats while keeping reasonable processing times, which is critical for IoT devices with limited resources.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_2-Light_Weight_Federated_Transfer_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>6G-Enabled Autonomous Vehicle Networks: Theoretical Analysis of Traffic Optimization and Signal Elimination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160201</link>
        <id>10.14569/IJACSA.2025.0160201</id>
        <doi>10.14569/IJACSA.2025.0160201</doi>
        <lastModDate>2025-02-28T06:25:59.5970000+00:00</lastModDate>
        
        <creator>Daniel Benniah John</creator>
        
        <subject>6G Communication systems; autonomous vehicle networks; traffic flow optimization; signal-free traffic management; Vehicle-to-Vehicle Communication (V2V); Vehicle-to-Infrastructure Communication (V2I); multi-agent deep reinforcement learning; real-time traffic management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(2), 2025</description>
        <description>This paper proposes a theoretical framework for optimizing traffic flow in autonomous vehicle (AV) networks using 6G communication systems. We propose a novel technique to eliminate conventional traffic signals through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. The article demonstrates improvements in traffic flow, density, and safety through real-time management and decision-making. The theoretical foundation combines multi-agent deep reinforcement learning with complex analytical models across the partitions managing intersections, forming the basis of the proposed smart-city advancements. The theoretical analysis shows that the proposed approach yields relative improvements of 40-50% in intersection waiting time, 50-70% in accident probability, and 35% in carbon footprint. These improvements are obtained by applying ultra-low-latency 6G communication with sub-millisecond response times while accommodating up to 10,000 vehicles per square kilometre. In addition, an economic evaluation revealed that such a system would generate a return on investment within 6.7 years, making it a technically and financially viable solution for enhancing an intelligent city.</description>
        <description>http://thesai.org/Downloads/Volume16No2/Paper_1-6G_Enabled_Autonomous_Vehicle_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning-Based Analysis of Tourism Recommendation Systems: Holistic Parameter Discovery and Insights</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601130</link>
        <id>10.14569/IJACSA.2025.01601130</id>
        <doi>10.14569/IJACSA.2025.01601130</doi>
        <lastModDate>2025-01-30T12:01:32.3300000+00:00</lastModDate>
        
        <creator>Raniah Alsahafi</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <creator>Saad Alqahtany</creator>
        
        <subject>Recommendation Systems (RS); Tourism Recommendation Systems (TRS); big data analytics; machine learning; unsupervised learning; social; economic and environmental sustainability; Bidirectional Encoder Representations from Transformers (BERT); SDGs; literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Tourism is a cornerstone of the global economy, fostering cultural exchange and economic growth. As travelers increasingly seek personalized experiences, recommendation systems have become vital in guiding decision-making and enhancing satisfaction. These systems leverage advanced technologies such as IoT and machine learning to provide tailored suggestions for destinations, accommodations, and activities. This paper explores the transformative role of tourism recommendation systems (TRS) by analyzing data from 3,013 research articles published between 2000 and 2024 using a BERT-based methodology for semantic text representation and clustering. A robust software framework, integrating tools such as UMAP for dimensionality reduction and HDBSCAN for clustering, facilitated data modeling, cluster analysis, visualization, and the identification of key parameters in TRS. We discover a comprehensive taxonomy of 16 TRS parameters grouped into 4 macro-parameters. These include Personalized Tourism; Sustainability, Health and Resource Awareness; Adaptability &amp; Crisis Management; and Social Impact &amp; Cultural Heritage. These macro-parameters align with all three dimensions of the triple bottom line (TBL) -- social, economic, and environmental sustainability. The findings reveal key trends, highlight underexplored areas, and provide research-informed recommendations for developing more effective TRS. This paper synthesizes existing knowledge, identifies research gaps, and outlines directions for advancing TRS to support sustainable, personalized, and innovative travel solutions.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_130-A_Machine_Learning_Based_Analysis_of_Tourism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Best Machine Learning Models for Breast Cancer Prediction in Wisconsin</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601129</link>
        <id>10.14569/IJACSA.2025.01601129</id>
        <doi>10.14569/IJACSA.2025.01601129</doi>
        <lastModDate>2025-01-30T09:02:16.5770000+00:00</lastModDate>
        
        <creator>Abdullah Al Mamun</creator>
        
        <creator>Touhid Bhuiyan</creator>
        
        <creator>Md Maruf Hassan</creator>
        
        <creator>Shahedul Islam Anik</creator>
        
        <subject>Wisconsin breast cancer disease prediction; ML; SVM; KNN; AUC-ROC; Naive Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This research focuses on predicting Wisconsin Breast Cancer Disease using machine learning algorithms, employing the Wisconsin Breast Cancer Dataset (WBCD) offered by the UCI repository. The dataset underwent substantial preparation, including managing missing values, normalization, and outlier elimination, to increase data quality. The Synthetic Minority Oversampling Technique (SMOTE) is used to alleviate class imbalance and to enable strong model training. Machine learning models, including SVM, kNN, Neural Networks, and Naive Bayes, were built and verified using K-Fold cross-validation. Key performance metrics, including recall, accuracy, F1-score, precision, and AUC-ROC, were employed to analyze the models. Among these, the Neural Network model emerged as the most effective, obtaining a prediction accuracy of 98.13%, precision of 98.21%, recall of 98.00%, an F1-score of 97.96%, and an AUC-ROC score of 0.9992. The study underscores the promise of machine learning in boosting the diagnosis and treatment of breast cancer, providing scalable and accurate ways for early detection and prevention.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_129-Exploring_the_Best_Machine_Learning_Models_for_Breast_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBFN-J: A Lightweight and Efficient Model for Hate Speech Detection on Social Media Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601128</link>
        <id>10.14569/IJACSA.2025.01601128</id>
        <doi>10.14569/IJACSA.2025.01601128</doi>
        <lastModDate>2025-01-30T09:02:16.5470000+00:00</lastModDate>
        
        <creator>Nourah Fahad Janbi</creator>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Nasir Ayub</creator>
        
        <subject>Hate speech detection; social media analysis; deep learning; hybrid models; artificial intelligence; optimization; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Hate speech on social media platforms like YouTube, Facebook, and Twitter threatens online safety and societal harmony. Addressing this global challenge requires innovative and efficient solutions. We propose DBFN-J (DistillBERT-Feedforward Neural Network with Jaya optimization), a lightweight and effective algorithm for detecting hate speech. This method combines DistillBERT, a distilled version of the Bidirectional Encoder Representations from Transformers (BERT), with a Feedforward Neural Network. The Jaya algorithm is employed for parameter optimization, while aspect-based sentiment analysis further enhances model performance and computational efficiency. DBFN-J demonstrates significant improvements over existing methods such as CNN BERT (Convolutional Neural Network BERT), BERT-LSTM (Long Short-Term Memory), and ELMo (Embeddings from Language Models). Extensive experiments reveal exceptional results, including an AUC (Area Under the Curve) of 0.99, a log loss of 0.06, and a balanced F1-score of 0.95. These metrics underscore its robust ability to identify abusive content effectively and efficiently. Statistical analysis further confirms its precision (0.98) and recall, making it a reliable tool for detecting hate speech across diverse social media platforms. By outperforming traditional algorithms in both performance and resource utilization, DBFN-J establishes a new benchmark for hate speech detection. Its lightweight design ensures suitability for large-scale, resource-constrained applications. This research provides a robust framework for protecting online environments, fostering healthier digital spaces, and mitigating the societal harm caused by hate speech.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_128-DBFN_J_A_Lightweight_and_Efficient_Model_for_Hate_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Financial Risk Early Warning Systems: A Bibliometric and Thematic Analysis of Emerging Trends and Insights</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601127</link>
        <id>10.14569/IJACSA.2025.01601127</id>
        <doi>10.14569/IJACSA.2025.01601127</doi>
        <lastModDate>2025-01-30T09:02:16.5170000+00:00</lastModDate>
        
        <creator>Muhammad Ali Chohan</creator>
        
        <creator>Teng Li</creator>
        
        <creator>Suresh Ramakrishnan</creator>
        
        <creator>Muhammad Sheraz</creator>
        
        <subject>Artificial intelligence; deep learning; financial risk management; early warning systems; bibliometrics analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the continuous development of financial markets worldwide, there has been increasing recognition of the importance of financial risk management. To mitigate financial risk, financial risk early warning serves as a risk-uncovering mechanism enabling companies to anticipate and counter potential disruptions. The present review paper applies bibliometric analysis to explore the growth and academic evolution of the concepts of financial risk, financial risk management, and financial risk early warning. Academic literature from the Scopus database is surveyed over the period 2010-2024. Network analysis, conceptual structure analysis, and bibliographic analysis of the selected articles are performed using VOSviewer and the Bibliometrix R package. The biblioshiny technique, based on the Bibliometrix R package, was used to chart journal papers’ performance and scientific contributions, displaying features distinct from the bibliometric methods used in prior studies. In addition, this study comprehensively analyzes the evolution of financial risk early warning systems, highlighting significant trends and future directions. Thematic evaluation across 2010-2015, 2016-2021, and 2022-2024 reveals a shift from traditional statistical methods to advanced machine learning and AI techniques, with neural networks, random forests, and XGBoost being pivotal. Innovations like attention mechanisms and LSTM models improve prediction accuracy. The integration of sustainability factors, such as carbon neutrality and renewable energy, reflects a trend towards incorporating environmental considerations into risk management. The study underscores the need for interdisciplinary collaborations and advanced data analytics for comprehensive financial systems. Policy implications include promoting AI adoption, integrating environmental factors, fostering collaborations, and developing advanced data analytics frameworks.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_127-Artificial_Intelligence_in_Financial_Risk_Early_Warning_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Imbalance Datasets in Malware Detection: A Review of Current Solutions and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601126</link>
        <id>10.14569/IJACSA.2025.01601126</id>
        <doi>10.14569/IJACSA.2025.01601126</doi>
        <lastModDate>2025-01-30T09:02:16.4830000+00:00</lastModDate>
        
        <creator>Hussain Almajed</creator>
        
        <creator>Abdulrahman Alsaqer</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Malware detection; machine learning; imbalance datasets; oversampling; SMOTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Imbalanced datasets are a significant challenge in the field of malware detection. The uneven distribution of malware and benign samples is a challenge for modern machine learning based detection systems, as it creates biased models and poor detection rates for malicious software. This paper provides a systematic review of existing approaches for dealing with imbalanced datasets in malware detection, including data-level, algorithm-level, and ensemble methods. We explore different techniques, including the Synthetic Minority Oversampling Technique, deep learning techniques such as CNN and LSTM hybrids, Genetic Programming for feature selection, and Federated Learning. Furthermore, we assess the strengths, weaknesses, and areas of application of each approach. Computational complexity, scalability, and the practical applicability of these techniques remain challenges. Finally, the paper summarizes promising directions for future research, such as lightweight models and advanced sampling strategies, to further improve the robustness and practicality of malware detection systems in dynamic environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_126-Imbalance_Datasets_in_Malware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eagle Framework: An Automatic Parallelism Tuning Architecture for Semantic Reasoners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601125</link>
        <id>10.14569/IJACSA.2025.01601125</id>
        <doi>10.14569/IJACSA.2025.01601125</doi>
        <lastModDate>2025-01-30T09:02:16.4530000+00:00</lastModDate>
        
        <creator>Haifa Ali Al-Hebshi</creator>
        
        <creator>Muhammad Ahtisham Aslam</creator>
        
        <creator>Kawther Saeedi</creator>
        
        <subject>Automatic tuning; parallel semantic reasoning; performance optimization; ontology; high performance computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Parallel semantic reasoners use parallel architectures to improve the efficiency of reasoning tasks. Studies in semantic reasoning rely on manual tuning to configure the degree of parallelism. However, manual tuning becomes increasingly challenging as ontologies become massive and complex. Studies in related fields have developed automatic tuning frameworks using optimization search methods. Although these methods offer performance gains, reducing search time and space size is still an open problem. This study aims to bridge the gap in semantic reasoning and the problem in existing search methods. To achieve these aims, we propose Eagle Framework (EF), an innovative automatic tuning framework designed to improve the performance of parallel semantic reasoners. EF automatically configures the degree of parallelism and calculates the performance data. It incorporates a novel search space and algorithm, inspired by the AVL tree, that efficiently identifies the optimal degree of parallelism. In a case study, EF completed the tuning processes in seconds to a few minutes, achieving performance gains up to 65 times faster than common search methods. The reliability findings, with ICC scores ranging from 0.90 to 0.99, confirmed the consistency of the performance data calculated by EF. The regression analysis revealed the effectiveness of EF in identifying the factors that affect reasoning scalability, with the conclusion that the size of the ontology is the dominant factor. The study underscores the need for adaptive approaches to tune the degree of parallelism based on the size of the ontology.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_125-Eagle_Framework_An_Automatic_Parallelism_Tuning_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBYOLOv8: Dual-Branch YOLOv8 Network for Small Object Detection on Drone Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601124</link>
        <id>10.14569/IJACSA.2025.01601124</id>
        <doi>10.14569/IJACSA.2025.01601124</doi>
        <lastModDate>2025-01-30T09:02:16.4070000+00:00</lastModDate>
        
        <creator>Yawei Tan</creator>
        
        <creator>Bingxin Xu</creator>
        
        <creator>Jiangsheng Sun</creator>
        
        <creator>Cheng Xu</creator>
        
        <creator>Weiguo Pan</creator>
        
        <creator>Songyin Dai</creator>
        
        <creator>Hongzhe Liu</creator>
        
        <subject>Drone images; dual-branch; small object detection; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Object detection based on drone platforms is a valuable yet challenging research field. Although general object detection networks based on deep learning have achieved breakthroughs in natural scenes, drone images in urban environments often exhibit characteristics such as a high proportion of small objects, dense distribution, and significant scale variations, posing significant challenges for accurate detection. To address these issues, this paper proposes a dual-branch object detection algorithm based on YOLOv8 improvements. Firstly, an auxiliary branch is constructed by extending the YOLOv8 backbone to aggregate high-level semantic information within the network, enhancing the feature extraction capability. Secondly, a Multi-Branch Feature Enhancement (MBFE) module is designed to enrich the feature representation of small objects and enhance the correlation of local features. Thirdly, Spatial-to-Depth Convolution (SPDConv) is utilized to mitigate the loss of small object information during downsampling, preserving more small object feature information. Finally, a dual-branch feature pyramid is designed for feature fusion to accommodate the dual-branch input. Experimental results on the VisDrone benchmark dataset demonstrate that DBYOLOv8 outperforms state-of-the-art object detection methods. Our proposed DBYOLOv8s achieves mAP@0.5 of 49.3% and mAP@0.5:0.95 of 30.4%, which are 2.8% and 1.5% higher than YOLOv9e, respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_124-DBYOLOv8_Dual_Branch_YOLOv8_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LMS-YOLO11n: A Lightweight Multi-Scale Weed Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601123</link>
        <id>10.14569/IJACSA.2025.01601123</id>
        <doi>10.14569/IJACSA.2025.01601123</doi>
        <lastModDate>2025-01-30T09:02:16.3770000+00:00</lastModDate>
        
        <creator>YaJun Zhang</creator>
        
        <creator>Yu Xu</creator>
        
        <creator>Jie Hou</creator>
        
        <creator>YanHai Song</creator>
        
        <subject>You-only-look-once-11; weed; lightweight; group convolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the advancement of precision agriculture, efficient and accurate weed detection has emerged as a pivotal task in modern crop management. Current weed detection methods face dual challenges: inadequate extraction of detailed features and edge information, coupled with the necessity for real-time performance. To address these issues, this paper proposes a lightweight multi-scale weed detection model based on YOLOv11n (You-only-look-once-11). Our approach incorporates three innovative components: (1) A fast-gated lightweight unit combined with C3K2 to enhance local and global interaction capabilities of weed features. (2) An adaptive hierarchical feature fusion network based on HSFPN, which improves the extraction of weed edge information. (3) A lightweight group convolution detection head module that captures multi-scale feature details while maintaining a lightweight structure. Experimental validation on two public datasets, CottonWeedDet3 and CottonWeed2, demonstrates that our model achieves an mAP50 improvement of 2.5% on CottonWeedDet3 and 1.9% on CottonWeed2 compared to YOLOv11n, with a 37% reduction in parameters and a 26% decrease in computational effort.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_123-LMS_YOLO11n_A_Lightweight_Multi_Scale_Weed_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Analyzing Different Agricultural Crop Yields Using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601122</link>
        <id>10.14569/IJACSA.2025.01601122</id>
        <doi>10.14569/IJACSA.2025.01601122</doi>
        <lastModDate>2025-01-30T09:02:16.3430000+00:00</lastModDate>
        
        <creator>Vijaya Bathini</creator>
        
        <creator>K. Usha Rani</creator>
        
        <subject>Agriculture; artificial intelligence; deep learning; crop yields; management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The advancement of Artificial Intelligence (AI), in particular Deep Learning (DL), has made it possible to interpret gathered data more quickly and effectively in this new digital era, drawing attention to development advancements in deep learning across many industries. Agriculture has been one of the most affected areas: in the current globalized world, agriculture plays a vital role and makes significant contributions. Over the years, agriculture has faced several difficulties in meeting the growing demands of the global population, which has increased over the last 50 years. Different forecasts have been made regarding this extraordinary population expansion, which is expected to reach almost 9 billion people worldwide by 2050. More than a century ago, different technologies were brought into agriculture to solve issues related to crop cultivation. Many mechanical technologies are accessible today, and they are evolving at an amazing rate. Innovative techniques are needed to aid farmers, support their demands, and help them optimize their crop yields through data and task automation. This will transform the agricultural industry into a new dimension. Therefore, this study’s primary goal was to present a thorough summary of the most current developments based on research interconnected with the digitization of agriculture for crop yields, including fruit counting, crop management, water management, weed identification, soil management, seed categorization, disease detection, yield forecasting, and harvesting of yields based on Artificial Intelligence techniques.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_122-A_Review_of_Analyzing_Different_Agricultural_Crop_Yields.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SM9 Key Encapsulation Mechanism for Power Monitoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601121</link>
        <id>10.14569/IJACSA.2025.01601121</id>
        <doi>10.14569/IJACSA.2025.01601121</doi>
        <lastModDate>2025-01-30T09:02:16.3130000+00:00</lastModDate>
        
        <creator>Chao Hong</creator>
        
        <creator>Peng Xiao</creator>
        
        <creator>Pandeng Li</creator>
        
        <creator>Zhenhong Zhang</creator>
        
        <creator>Yiwei Yang</creator>
        
        <creator>Biao Bai</creator>
        
        <subject>SM9; Outsourced decryption; cryptographic reverse firewall; power monitoring systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The boundaries of the new power system network are blurred, and data privacy and security are threatened. Although the SM9 algorithm is widely used in power systems to protect data security, its efficiency and security remain the main issues in application. Therefore, an SM9 key encapsulation mechanism (OSM9-KEM-CRF) was proposed to support outsourced decryption and cryptographic reverse firewall. In order to resist the backdoor attacks, we deployed cryptographic reverse firewalls at the terminals and proved that the proposed OSM9-KEM-CRF is ID-IND-CCA2 secure. The cryptographic reverse firewalls maintain functionality, weakly retain security, and weakly resist penetration, thereby enhancing the security of the scheme. In addition, considering the limited computing resources of terminal devices, decryption operations are outsourced to cloud servers in order to reduce the computational burden on the terminals. Compared with other SM9-KEMs, the proposed mechanism not only reduces computational and communication overhead, but also lowers energy consumption. Therefore, the proposed mechanism is more suitable for power monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_121-SM9_Key_Encapsulation_Mechanism_for_Power_Monitoring_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Machine Learning in Malware Analysis: Current Trends and Future Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601120</link>
        <id>10.14569/IJACSA.2025.01601120</id>
        <doi>10.14569/IJACSA.2025.01601120</doi>
        <lastModDate>2025-01-30T09:02:16.2800000+00:00</lastModDate>
        
        <creator>Noura Alyemni</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Machine learning; malware analysis; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Sophisticated cyberattacks are an increasing concern for individuals, businesses, and governments alike. Detecting malware remains a significant challenge, particularly due to the limitations of traditional methods in identifying new or unexpected threats. Machine Learning (ML) has emerged as a powerful solution, capable of analyzing large datasets, recognizing complex patterns, and adapting to rapidly changing attack strategies. This paper reviews the latest advancements in machine learning for malware analysis, shedding light on both its strengths and the challenges it faces. Additionally, it explores the current limitations of these approaches and outlines future research directions. Key recommendations include improving data preprocessing techniques to reduce information loss, utilizing distributed computing for greater efficiency, and maintaining balanced, up-to-date datasets to enhance model reliability. These strategies aim to improve the scalability, accuracy, and resilience of ML-driven malware detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_120-Exploring_Machine_Learning_in_Malware_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Detection from Satellite Imagery Using Morphological Operations and Contour Analysis over Google Maps Roadmap Outlines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601119</link>
        <id>10.14569/IJACSA.2025.01601119</id>
        <doi>10.14569/IJACSA.2025.01601119</doi>
        <lastModDate>2025-01-30T09:02:16.2330000+00:00</lastModDate>
        
        <creator>Arbab Sufyan Wadood</creator>
        
        <creator>Ahthasham Sajid</creator>
        
        <creator>Muhammad Mansoor Alam</creator>
        
        <creator>Mazliham MohD Su’ud</creator>
        
        <creator>Arshad Mehmood</creator>
        
        <creator>Inam Ullah Khan</creator>
        
        <subject>Building detection; satellite imagery; urban planning; disaster response; image processing; machine learning; morphological operations; contour detection; aspect ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>One such research area is building detection, which has a high influence and potential impact on urban planning, disaster management, and construction development. Classifying buildings using satellite images is a difficult task due to building designs, shapes, and complex backgrounds, which lead to occlusion between buildings. The current study introduces a new method for building recognition and classification globally based on Google Maps contour trace detection and an evolved image processing technique, seeking synergies with a systematic methodology. We first extract the building outlines by taking the image from the “Roadmap” view in Google Maps, converting it to grayscale, thresholding it to create binary boundaries, and finally applying morphological operations to facilitate noise removal and gap filling. These binary outlines are overlaid on colorful satellite imagery, which aids in identifying buildings. Machine learning techniques can also be used to improve aspect ratio analysis and improve overall detection accuracy and performance.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_119-Building_Detection_from_Satellite_Imagery_Using_Morphological_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Machine Learning Algorithms for Malware Detection Using EDGE-IIoTSET Dataset in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601118</link>
        <id>10.14569/IJACSA.2025.01601118</id>
        <doi>10.14569/IJACSA.2025.01601118</doi>
        <lastModDate>2025-01-30T09:02:16.2030000+00:00</lastModDate>
        
        <creator>Jawaher Alshehri</creator>
        
        <creator>Almaha Alhamed</creator>
        
        <creator>Mounir Frikha</creator>
        
        <creator>M M Hafizur Rahman</creator>
        
        <subject>IoT malware; machine learning; malware detection; IoT security; EDGE-IIoTSET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The growth of IoT devices has presented great vulnerabilities leading to many malware attacks. Existing IoT malware detection methods face many challenges, including device heterogeneity, device resource restrictions, and the complexity of encrypted malware payloads, thus leading to less effective conventional cybersecurity techniques. This study’s objective is to reduce these gaps by assessing the results obtained from testing five machine learning algorithms that are used to detect IoT malware by applying them on the EDGE-IIoTSET dataset. Key preprocessing steps include cleaning data, extracting features, and encoding network traffic. The algorithms used include Logistic Regression, Decision Tree, Naïve Bayes, KNN, and Random Forest. The Decision Tree model achieved perfect accuracy at 100%, making it the best-performing model for this analysis. In contrast, Random Forest delivered a strong performance with an accuracy of 99.9%, while Logistic Regression performed at 27%, Naïve Bayes at 57%, and KNN with moderate performance. Hence, the results have shown the effectiveness of machine learning techniques in enhancing the security of IoT systems through real-time malware detection with high accuracy. These findings are useful input for policymakers, cybersecurity practitioners, and IoT developers as they develop better mechanisms for handling dynamic IoT malware attack incidents.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_118-Comparison_of_Machine_Learning_Algorithms_for_Malware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Domain Health Misinformation Detection on Indonesian Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601117</link>
        <id>10.14569/IJACSA.2025.01601117</id>
        <doi>10.14569/IJACSA.2025.01601117</doi>
        <lastModDate>2025-01-30T09:02:16.1730000+00:00</lastModDate>
        
        <creator>Divi Galih Prasetyo Putri</creator>
        
        <creator>Savitri Citra Budi</creator>
        
        <creator>Arida Ferti Syafiandini</creator>
        
        <creator>Ikhlasul Amal</creator>
        
        <creator>Revandra Aryo Dwi Krisnandaru</creator>
        
        <subject>Health misinformation; machine learning; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Indonesia is among the world’s most prolific countries in terms of internet and social media usage. Social media serves as a primary platform for disseminating and accessing all types of information, including health-related data. However, much of the content generated on these platforms is unverified and often falls into the category of misinformation, which poses risks to public health. It is essential to ensure the credibility of the information available to social media users, thereby helping them make informed decisions and reducing the risks associated with health misinformation. Previous research on health misinformation detection has predominantly focused on English-language data or has been limited to specific health crises, such as COVID-19. Consequently, there is a need for a more comprehensive approach that does not focus on a single issue or domain. This study proposes the development of a new corpus that encompasses various health topics from Indonesian social media. Each piece of content within this corpus is manually annotated by experts to label a social media post as either misinformation or fact. Additionally, this research involves experimenting with machine learning models, including traditional and deep learning models. Our findings show that the new cross-domain dataset is able to achieve better performance compared to those trained on the COVID dataset, highlighting the importance of diverse and representative training data for building robust health misinformation detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_117-Cross_Domain_Health_Misinformation_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android Malware Detection Through CNN Ensemble Learning on Grayscale Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601116</link>
        <id>10.14569/IJACSA.2025.01601116</id>
        <doi>10.14569/IJACSA.2025.01601116</doi>
        <lastModDate>2025-01-30T09:02:16.1270000+00:00</lastModDate>
        
        <creator>El Youssofi Chaymae</creator>
        
        <creator>Chougdali Khalid</creator>
        
        <subject>Android malware detection; image-based analysis; Convolutional Neural Networks (CNN); grayscale image transformation; weighted voting ensemble; Bayesian optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With Android’s widespread adoption as the leading mobile operating system, it has become a prominent target for malware attacks. Many of these attacks employ advanced obfuscation techniques, rendering traditional detection methods, such as static and dynamic analysis, less effective. Image-based approaches provide an alternative for effective detection that addresses some limitations of conventional methods. This research introduces a novel image-based framework for Android malware detection. Using the CICMalDroid 2020 dataset, Dalvik Executable (DEX) files from Android Package (APK) files are extracted and converted into grayscale images, with dimensions scaled according to file size to preserve structural characteristics. Various Convolutional Neural Network (CNN) models are then employed to classify benign and malicious applications, with performance further enhanced through a weighted voting ensemble optimized by Bayesian Optimization to balance the contribution of each model. An ablation study was conducted to demonstrate the effectiveness of the six-model ensemble, showing consistent improvements in accuracy as models were added incrementally, culminating in the highest accuracy of 99.3%. This result surpasses previous research benchmarks in Android malware detection, validating the robustness and efficiency of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_116-Android_Malware_Detection_Through_CNN_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Q-Learning-Based Optimization of Path Planning and Control in Robotic Arms for High-Precision Computational Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601115</link>
        <id>10.14569/IJACSA.2025.01601115</id>
        <doi>10.14569/IJACSA.2025.01601115</doi>
        <lastModDate>2025-01-30T09:02:16.0930000+00:00</lastModDate>
        
        <creator>Yuan Li</creator>
        
        <creator>Byung-Won Min</creator>
        
        <creator>Haozhi Liu</creator>
        
        <subject>Optimization; deep Q-learning; path planning; robotic arms; precision; computational efficiency; kinematic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Optimizing path planning and control in robotic arms is a critical challenge in achieving high-precision and efficient operations in various industrial and research applications. This study proposes a novel approach leveraging deep Q-learning (DQL) to enhance robotic arm movements’ computational efficiency and precision. The proposed framework effectively addresses key challenges such as collision avoidance, path smoothness, and dynamic control by integrating reinforcement learning techniques with advanced kinematic modelling. To validate the effectiveness of the proposed method, a simulated environment was developed using a 6-degree-of-freedom robotic arm, where the DQL model was trained and tested. Results demonstrated a significant performance improvement, achieving an average path optimization accuracy of 98.76% and reducing computational overhead by 22.4% compared to traditional optimization methods. Additionally, the proposed approach achieved real-time response capabilities, with an average decision-making latency of 0.45 seconds, ensuring its applicability in time-critical scenarios. This research highlights the potential of deep Q-learning in revolutionizing robotic arm control by combining precision and computational efficiency. The findings bridge gaps in robotic path planning and pave the way for future advancements in autonomous robotics and industrial automation. Further studies can explore the scalability of this approach to more complex and real-world environments, solidifying its relevance in emerging technological domains.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_115-Deep_Q_Learning_Based_Optimization_of_Path_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Image Recognition System for Automated Offside and Foul Detection in Football Matches Using Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601114</link>
        <id>10.14569/IJACSA.2025.01601114</id>
        <doi>10.14569/IJACSA.2025.01601114</doi>
        <lastModDate>2025-01-30T09:02:16.0630000+00:00</lastModDate>
        
        <creator>Qianwei Zhang</creator>
        
        <creator>Lirong Yu</creator>
        
        <creator>WenKe Yan</creator>
        
        <subject>Artificial intelligence; image recognition; automation; foul detection; deep learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Integrating artificial intelligence (AI) and computer vision in sports analytics has transformed decision-making processes, enhancing fairness and efficiency. This paper proposes a novel AI-driven image recognition system for automatically detecting offside and foul events in football matches. Unlike conventional methods, which rely heavily on manual intervention or traditional image processing techniques, our approach utilizes a hybrid deep learning model that combines advanced object tracking with motion analysis to deliver real-time, precise event detection. The system employs a robust, self-learning algorithm that leverages spatiotemporal features from match footage to track player movements and ball dynamics. By analyzing the continuous flow of video data, the model detects offside positions and identifies foul types such as tackles, handballs, and dangerous play through a dynamic pattern recognition process. This multi-tiered approach overcomes traditional methods’ limitations by accurately identifying critical events with minimal latency, even in complex, high-speed scenarios. In experiments conducted on diverse datasets of live match footage, the system achieved an overall accuracy of 99.85% for offside detection and 98.56% for foul identification, with precision rates of 98.32% and 97.12%, respectively. The system’s recall rates of 97.45% for offside detection and 96.85% for foul recognition demonstrate its reliability in real-world applications. It’s clear from these results that the proposed framework can automate and greatly enhance the accuracy of match analysis, making it a useful tool for both referees and broadcasters. The system’s low computational overhead and scalability make connecting to existing match broadcasting infrastructure easy. This establishes an immediate feedback loop for use during live games. This work marks a significant step forward in applying AI and computer vision for sports, introducing a powerful method to enhance the objectivity and precision of officiating in football.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_114-AI_Driven_Image_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Precision Multi-Class Object Detection Using Fine-Tuned YOLOv11 Architecture: A Case Study on Airborne Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601113</link>
        <id>10.14569/IJACSA.2025.01601113</id>
        <doi>10.14569/IJACSA.2025.01601113</doi>
        <lastModDate>2025-01-30T09:02:16.0300000+00:00</lastModDate>
        
        <creator>Nasser S. Albalawi</creator>
        
        <subject>Airborne vehicles; YOLOv11; object detection; surveillance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The widespread adoption of airborne vehicles, including drones and UAVs, has brought significant advancements to fields such as surveillance, logistics, and disaster response. Despite these benefits, their increasing use poses substantial challenges for real-time detection and classification, particularly in multi-class scenarios where precision and scalability are essential. This paper proposes a high-performance detection framework based on YOLOv11, specifically tailored for identifying airborne vehicles. YOLOv11 integrates innovative features, such as anchor-free detection and enhanced attention mechanisms, to deliver superior accuracy and speed. The proposed framework is tested on a comprehensive airborne vehicle dataset featuring diverse conditions, including variations in altitude, occlusion, and environmental factors. Experimental results demonstrate that the fine-tuned YOLOv11 model exceeds the performance of existing models. Additionally, its ability to operate in real-time makes it ideal for critical applications like air traffic management and security monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_113-High_Precision_Multi_Class_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Fault Diagnosis for Elevators Using Temporal Adaptive Fault Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601112</link>
        <id>10.14569/IJACSA.2025.01601112</id>
        <doi>10.14569/IJACSA.2025.01601112</doi>
        <lastModDate>2025-01-30T09:02:16.0000000+00:00</lastModDate>
        
        <creator>Zhiyu Chen</creator>
        
        <subject>Elevator fault diagnosis; temporal adaptive fault network; predictive maintenance; multivariate time-series data; feature refinement; fault classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Contemporary cities depend on elevators for vertical mobility in residential, commercial, and industrial buildings. However, elevator system malfunctions may cause operational interruptions, economic losses, and safety dangers, requiring advanced tools for detection. High-dimensional sensor data, temporal interdependence, and fault dataset imbalances are common problems in fault detection algorithms. These restrictions reduce fault diagnostic accuracy and reliability, especially in real-time applications. This paper presents a Temporal Adaptive Fault Network (TAFN) to overcome these issues. The system uses Temporal Convolution Layers to capture sequential dependencies, Adaptive Feature Refinement Layers to dynamically improve feature relevance, and a Fault Decision Head for correct classification. For reliable performance, the Weighted Divergence Analyzer and innovative data processing methods are used for feature selection. Experimental findings show that the TAFN model outperforms state-of-the-art fault classification approaches with an F1-score of 98.5% and an AUC of 99.3%. The model’s capacity to handle unbalanced datasets and complicated temporal patterns makes it useful in real life. The paper also proposes the Fault Temporal Sensitivity Index (FTSI) to assess fault prediction temporal consistency. The results demonstrate that TAFN may revolutionize elevator problem detection, improving reliability and safety while reducing downtime. This technique advances predictive maintenance tactics for critical infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_112-Intelligent_Fault_Diagnosis_for_Elevators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GRACE: Graph-Based Attention for Coherent Explanation in Fake News Detection on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601111</link>
        <id>10.14569/IJACSA.2025.01601111</id>
        <doi>10.14569/IJACSA.2025.01601111</doi>
        <lastModDate>2025-01-30T09:02:15.9530000+00:00</lastModDate>
        
        <creator>Orken Mamyrbayev</creator>
        
        <creator>Zhanibek Turysbek</creator>
        
        <creator>Mariam Afzal</creator>
        
        <creator>Marassulov Ussen Abdurakhimovich</creator>
        
        <creator>Ybytayeva Galiya</creator>
        
        <creator>Muhammad Abdullah</creator>
        
        <creator>Riaz Ul Amin</creator>
        
        <subject>Graph neural network; dual attention; NLP; semantics; social network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Detecting fake news on social media is a critical challenge due to its rapid dissemination and potential societal impact. This paper addresses the problem in a realistic scenario where the original tweet and the sequence of users who retweeted it, excluding any comment section, are available. We propose a Graph-based Attention for Coherent Explanation (GRACE) to perform binary classification by determining if the original tweet is false and provide interpretable explanations by highlighting suspicious users and key evidential words. GRACE integrates user behaviour, tweet content, and retweet propagation dynamics through Graph Convolutional Networks (GCNs) and a dual co-attention mechanism. Extensive experiments conducted on Twitter15 and Twitter16 datasets demonstrate that GRACE outperforms baseline methods, achieving an accuracy improvement of 2.12% on Twitter15 and 1.83% on Twitter16 compared to GCAN. Additionally, GRACE provides meaningful and coherent explanations, making it an effective and interpretable solution for fake news detection on social platforms.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_111-GRACE_Graph_Based_Attention_for_Coherent_Explanation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Anomaly Detection Technique for Future IoT Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601110</link>
        <id>10.14569/IJACSA.2025.01601110</id>
        <doi>10.14569/IJACSA.2025.01601110</doi>
        <lastModDate>2025-01-30T09:02:15.9230000+00:00</lastModDate>
        
        <creator>Ahmad Naseem Alvi</creator>
        
        <creator>Muhammad Awais Javed</creator>
        
        <creator>Bakhtiar Ali</creator>
        
        <creator>Mohammed Alkhathami</creator>
        
        <subject>Anomaly detection; IoT networks; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Internet of Things (IoT) provides smart wireless connectivity and is the basis of many future applications. IoT nodes are equipped with sensors that obtain application-related data and transmit it to servers using IEEE 802.15.4-based wireless communications, thus forming a low-rate wireless personal area network. Security is a major challenge in IoT networks, as malicious users can capture the network and waste the available bandwidth reserved for legitimate users, significantly reducing the Quality of Service (QoS) in terms of transmitted data and transmission delay. In this work, an Anomaly Detection Mechanism for the IEEE 802.15.4 standard (ADM15.4) is proposed to improve the QoS of IoT nodes. ADM15.4 also includes a mechanism to block malicious nodes without affecting the overall performance of the medium. The performance of ADM15.4 is compared with the standard when no such anomaly detection is present. The results are obtained for different values of SO and for different sets of GTS-requesting nodes, and are compared with the standard in the presence and absence of malicious nodes. The simulation results show that ADM15.4 improves data transmission by up to 19.5% over the IEEE 802.15.4 standard without attacks and by up to 52% under malicious attacks. Furthermore, ADM15.4 transmits data in 33% less time and accommodates 56% more GTS-requesting legitimate nodes compared to the standard in the presence of malicious attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_110-Efficient_Anomaly_Detection_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Tumor Detection in Medical Imaging Using Advanced Object Detection Model: A Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601109</link>
        <id>10.14569/IJACSA.2025.01601109</id>
        <doi>10.14569/IJACSA.2025.01601109</doi>
        <lastModDate>2025-01-30T09:02:15.8900000+00:00</lastModDate>
        
        <creator>Taoufik Saidani</creator>
        
        <subject>Tumor detection; medical imaging; YOLOv11; deep learning; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Timely and accurate tumor detection in medical imaging is crucial for improving patient outcomes and reducing mortality rates. Traditional methods often rely on manual image interpretation, which is time-intensive and prone to variability. Deep learning, particularly convolutional neural networks (CNNs), has revolutionized tumor detection by automating the process and achieving remarkable accuracy. The present paper investigates the use of YOLOv11, a powerful object detection model, for tumor detection in several medical imaging modalities, such as CT scans, MRIs, and histopathological images. YOLOv11 incorporates architectural advancements, including enhanced feature pyramids and attention mechanisms, allowing accurate identification of tumors with diverse sizes and complexity. The model’s real-time detection capabilities and lightweight architecture render it appropriate for use in clinical settings and resource-limited contexts. Experimental findings indicate that the fine-tuned YOLOv11 attains exceptional accuracy and efficiency, exhibiting an average precision of 91% and a mAP of 68%. This research highlights YOLOv11’s significance as a transformative tool in the integration of AI in medical imaging, aimed at optimizing diagnostic processes and improving healthcare delivery.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_109-Efficient_Tumor_Detection_in_Medical_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Variations of Matrix Factorization in Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601108</link>
        <id>10.14569/IJACSA.2025.01601108</id>
        <doi>10.14569/IJACSA.2025.01601108</doi>
        <lastModDate>2025-01-30T09:02:15.8770000+00:00</lastModDate>
        
        <creator>Srilatha Tokala</creator>
        
        <creator>Murali Krishna Enduri</creator>
        
        <creator>T. Jaya Lakshmi</creator>
        
        <creator>Koduru Hajarathaiah</creator>
        
        <creator>Hemlata Sharma</creator>
        
        <subject>Recommendations; matrix factorization; content-based; collaborative filtering; RMSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Recommender systems recommend products to users. Almost all businesses utilize recommender systems to suggest their products to customers based on the customer’s previous actions. The primary inputs for recommendation algorithms are user preferences, product descriptions, and user ratings on products. Content-based recommendations and collaborative filtering are examples of traditional recommendation systems. One of the mathematical models frequently used in collaborative filtering is matrix factorization (MF). This work discusses five MF variants, namely Matrix Factorization, Probabilistic MF, Non-negative MF, Singular Value Decomposition (SVD), and SVD++. We empirically evaluate these MF variants on six benchmark datasets from the domains of movies, tourism, jokes, and e-commerce. Among these variants, plain MF performs worst and SVD performs best in terms of Root Mean Square Error (RMSE).</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_108-Empirical_Analysis_of_Variations_of_Matrix_Factorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dolphin Inspired Optimization for Feature Extraction in Augmented Reality Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601107</link>
        <id>10.14569/IJACSA.2025.01601107</id>
        <doi>10.14569/IJACSA.2025.01601107</doi>
        <lastModDate>2025-01-30T09:02:15.8430000+00:00</lastModDate>
        
        <creator>Indhumathi S</creator>
        
        <creator>Christopher Clement J</creator>
        
        <subject>Feature descriptor; dolphin optimization; feature extraction; augmented reality tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Feature extraction plays a prominent role in Augmented Reality (AR) tracking. AR tracking monitors position and orientation to overlay a 3D model on the real-world environment. This motivated us to propose an optimal feature extraction model that embeds the dolphin grouping system. We implemented the dolphin grouping algorithm to extract features effectively without compromising accuracy. In addition, to demonstrate the stability of the proposed model, we included affine-transformed images, such as rotated, blurred, and light-varied images, in the analysis. The dolphin model obtained an average precision of 0.92 and a recall score of 0.84, while its computation time of 2 ms is faster than the other algorithms. The comparative result analysis reveals that the accuracy and efficiency of the proposed model surpass those of existing descriptors.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_107-Dolphin_Inspired_Optimization_for_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multilabel Classification of Bilingual Patents Using OneVsRestClassifier: A Semiautomated Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601106</link>
        <id>10.14569/IJACSA.2025.01601106</id>
        <doi>10.14569/IJACSA.2025.01601106</doi>
        <lastModDate>2025-01-30T09:02:15.8130000+00:00</lastModDate>
        
        <creator>Slamet Widodo</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Deris Stiawan</creator>
        
        <subject>Multilabel patent classification; Natural Language Processing (NLP); OneVsRestClassifier; TF–IDF vectorisation; bilingual patent analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In response to the increasing complexity and volume of patent applications, this research introduces a semiautomated system to streamline the literature review process for Indonesian patent data. The proposed system employs a synthesis of multilabel classification techniques based on natural language processing (NLP) algorithms. This methodology focuses on developing an iterative and modular system, with each step visualised in detailed flowcharts. The system design incorporates data collection and preprocessing, multilabel classification model development, model optimisation, query and prediction, and results presentation modules. Experimental results demonstrate the promising potential of the multilabel classification model, achieving a micro F1 score of 0.6723 and a macro F1 score of 0.6009. The OneVsRestClassifier model with LinearSVC as the base classifier shows reasonably good performance in handling a bilingual dataset comprising 15,097 patent documents. The optimal model configuration uses TfidfVectorizer with 20,000 features, including bigrams, and an optimal C parameter of 0.1 for LinearSVC. Performance analysis reveals variations across IPC classes, indicating areas for further improvement. The discussion highlights the implications of the proposed system for researchers, patent examiners and industry professionals by facilitating efficient searches within patent databases. This study acknowledges the potential of semiautomated systems to enhance the efficiency of patent analysis while emphasising the need for further research to address identified challenges, such as class imbalance and performance variations across patent categories. This research paves the way for further developments in the field of automated patent classification, aiming to improve efficiency and accuracy in international patent systems while recognising the crucial role of human experts in the patent classification process.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_106-Multilabel_Classification_of_Bilingual_Patents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Substitution Using Latent Dirichlet Allocation for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601105</link>
        <id>10.14569/IJACSA.2025.01601105</id>
        <doi>10.14569/IJACSA.2025.01601105</doi>
        <lastModDate>2025-01-30T09:02:15.7800000+00:00</lastModDate>
        
        <creator>Norsyela Muhammad Noor Mathivanan</creator>
        
        <creator>Roziah Mohd Janor</creator>
        
        <creator>Shukor Abd Razak</creator>
        
        <creator>Nor Azura Md. Ghani</creator>
        
        <subject>Feature extraction; feature selection; Latent Dirichlet Allocation; text classification; Hidden Markov Model; dimensionality reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Text classification plays a pivotal role in natural language processing, enabling applications such as product categorization, sentiment analysis, spam detection, and document organization. Traditional methods, including bag-of-words and TF-IDF, often lead to high-dimensional feature spaces, increasing computational complexity and susceptibility to overfitting. This study introduces a novel Feature Substitution technique using Latent Dirichlet Allocation (FS-LDA), which enhances text representation by replacing non-overlapping high-probability topic words. FS-LDA effectively reduces dimensionality while retaining essential semantic features, optimizing classification accuracy and efficiency. Experimental evaluations on five e-commerce datasets and an SMS spam dataset demonstrated that FS-LDA, combined with Hidden Markov Models (HMMs), achieved up to 95% classification accuracy in binary tasks and significant improvements in macro and weighted F1-scores for multiclass tasks. The innovative approach lies in FS-LDA&#39;s ability to seamlessly integrate dimensionality reduction with feature substitution, while its predictive advantage is demonstrated through consistent performance enhancement across diverse datasets. Future work will explore its application to other classification models and domains, such as social media analysis and medical document categorization, to further validate its scalability and robustness.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_105-Feature_Substitution_Using_Latent_Dirichlet_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stacking Regressor Model for PM2.5 Concentration Prediction Based on Spatiotemporal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601104</link>
        <id>10.14569/IJACSA.2025.01601104</id>
        <doi>10.14569/IJACSA.2025.01601104</doi>
        <lastModDate>2025-01-30T09:02:15.7500000+00:00</lastModDate>
        
        <creator>Mitra Unik</creator>
        
        <creator>Imas Sukaesih Sitanggang</creator>
        
        <creator>Lailan Syaufina</creator>
        
        <creator>I Nengah Surati Jaya</creator>
        
        <subject>Ensemble learning; PM2.5 prediction; remote sensing; stacking regressor; spatio-temporal data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study presents the development of a predictive model for PM2.5 concentrations resulting from forest and peatland fires in Riau Province, utilizing the stacking regressor technique within an ensemble learning framework. The model integrates spatiotemporal data from remote sensing and ground-based sensors at a resolution of 1 km x 1 km, demonstrating its effectiveness in capturing the intricate patterns of PM2.5 concentrations. By combining Random Forest, Gradient Boosting Machine (GBM), and XGBoost, with RidgeCV as a meta-learner, the model attained optimal performance, achieving R&#178; = 0.851, MAE = 0.045 &#181;g/m&#179;, and MSE = 0.003 &#181;g/m&#179;. The incorporation of temporal feature engineering techniques, including lag and rolling window methods, significantly enhanced prediction accuracy, enabling the model to effectively capture seasonal variations and temporal dynamics. Key variables, such as air temperature, evapotranspiration, and Aerosol Optical Depth (AOD), were found to exhibit strong correlations with PM2.5 concentrations. The findings from this research contribute to the formulation of data-driven policies for air quality management and pollution mitigation, with the potential for broader application in regions encountering similar environmental challenges.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_104-Stacking_Regressor_Model_for_PM_2.5_Concentration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Route Planning for Autonomous Electric Vehicles Using the D-Star Lite Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601103</link>
        <id>10.14569/IJACSA.2025.01601103</id>
        <doi>10.14569/IJACSA.2025.01601103</doi>
        <lastModDate>2025-01-30T09:02:15.7200000+00:00</lastModDate>
        
        <creator>Bhakti Yudho Suprapto</creator>
        
        <creator>Suci Dwijayanti</creator>
        
        <creator>Desi Windisari</creator>
        
        <creator>Gatot Aria Pratama</creator>
        
        <subject>Autonomous vehicle; D-Star Lite; path planning; realtime; replanning route; optimal route</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Every vehicle, including autonomous vehicles, requires a route to navigate its journey. Route planning is a critical aspect of autonomous vehicle operations, as these vehicles rely on guided paths or sequential steps to move effectively. Ensuring that the route is optimal is a key consideration. This study tests the D-Star Lite algorithm to determine the most efficient route. In simulation tests, the D-Star Lite algorithm was compared with the A-Star algorithm. The results showed that D-Star Lite outperformed A-Star, achieving an average distance reduction of 124 meters. Real-time testing involved finding a route from node 36 to node 0, resulting in a total distance of 803 meters. Additional tests focused on route replanning in real-time scenarios. For instance, the initial route passing through nodes 36 →37→38→39→40→41→42→43→44→45→0 was adjusted to an alternative route: 36→37→38→46→26→11→2→4→1→0. Based on the results, the D-Star Lite algorithm proves effective in identifying the best route for autonomous electric vehicles while also enabling real-time route replanning.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_103-Optimizing_Route_Planning_for_Autonomous_Electric_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Feature Selection in Intrusion Detection Systems Using a Genetic Algorithm with Stochastic Universal Sampling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601102</link>
        <id>10.14569/IJACSA.2025.01601102</id>
        <doi>10.14569/IJACSA.2025.01601102</doi>
        <lastModDate>2025-01-30T09:02:15.6730000+00:00</lastModDate>
        
        <creator>RadhaRani Akula</creator>
        
        <creator>GS Naveen Kumar</creator>
        
        <subject>GA-SUS; anomaly detection; IDS; RFE; DQN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The current study presents a hybrid framework integrating a genetic algorithm with Stochastic Universal Sampling (GA-SUS) for feature selection and Deep Q-Networks (DQN) for fine-tuning an ensemble of classifiers to enhance network intrusion detection. The proposed method combines GA-SUS with recursive feature elimination (RFE). An ensemble of machine learning methods is developed, with gradient boosting and XGBoost as base learners and logistic regression as the meta-learner. A Deep Q-Network (DQN) is used to optimize the base learners. The suggested method attains an accuracy of 97.60% on the popular NSL-KDD dataset and proficiently detects several attack types, such as probe attacks and Denial of Service (DoS), while tackling the issue of class imbalance. The multi-objective optimization approach is evident in anomaly detection and enhances model generalization by diminishing susceptibility to fluctuations in training data. Nonetheless, the model&#39;s efficacy on infrequent attack types, such as User to Root (U2R), remains inadequate due to their sparse representation in the dataset.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_102-Optimizing_Feature_Selection_in_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSR: An Improvement of Lightweight Cryptography Algorithm for Data Security in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601101</link>
        <id>10.14569/IJACSA.2025.01601101</id>
        <doi>10.14569/IJACSA.2025.01601101</doi>
        <lastModDate>2025-01-30T09:02:15.6400000+00:00</lastModDate>
        
        <creator>P. Sri Ram Chandra</creator>
        
        <creator>Syamala Rao</creator>
        
        <creator>Naresh K</creator>
        
        <creator>Ravisankar Malladi</creator>
        
        <subject>Cryptography; cloud security; PSR; encryption; decryption; avalanche effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Data security in cloud storage is a pressing concern as organizations increasingly rely on cloud computing services. Transitioning to cloud-based solutions underscores the need to safeguard sensitive information against data breaches and unauthorized access. Traditional cryptography algorithms are vulnerable to brute-force attacks and mathematical breakthroughs, necessitating large key sizes for security. Moreover, they lack resilience against emerging quantum computing threats, posing a significant risk to encryption. To tackle these issues, this study presents a novel lightweight cryptography algorithm named PSR, aimed at encrypting data to improve its security before storage in cloud systems. The proposed system converts 128-bit plaintext to ciphertext by employing techniques such as substitution, ASCII and hexadecimal conversions, and block-wise transformations including Rail Fence, Gray code, and XOR operations with random prime numbers. Notably, the proposed algorithm demonstrates superior performance with minimal runtime and memory usage, satisfies the avalanche effect criterion in all executions, and is resistant to brute-force attacks.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_101-PSR_An_Improvement_of_Lightweight_Cryptography_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Accuracy Vehicle Detection in Different Traffic Densities Using Improved Gaussian Mixture Model with Cuckoo Search Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.01601100</link>
        <id>10.14569/IJACSA.2025.01601100</id>
        <doi>10.14569/IJACSA.2025.01601100</doi>
        <lastModDate>2025-01-30T09:02:15.6100000+00:00</lastModDate>
        
        <creator>Nor Afiqah Mohd Aris</creator>
        
        <creator>Siti Suhana Jamaian</creator>
        
        <subject>Gaussian mixture model; vehicle detection; adaptive time-varying learning rate; exponential decay; outlier processing; cuckoo search optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Background subtraction plays a critical role in computer vision, particularly in vehicle detection and tracking. Traditional Gaussian Mixture Models (GMM) face limitations in dynamic traffic scenarios, leading to inaccuracies. This study proposes an Improved GMM with adaptive time-varying learning rates, exponential decay, and outlier processing to enhance performance across light, moderate, and heavy traffic densities. The model&#39;s parameters are automatically optimized using the Cuckoo Search algorithm, improving adaptability to varying environmental conditions. Validated on the ChangeDetection.net 2014 dataset, the Improved GMM achieves superior precision, recall, and F-measure compared to existing methods. Its consistent performance across diverse traffic scenarios highlights its effectiveness for real-time traffic flow analysis and vehicle detection applications.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_100-High_Accuracy_Vehicle_Detection_in_Different_Traffic_Densities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Eye Movement Features and Visual Fatigue in Virtual Reality Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160199</link>
        <id>10.14569/IJACSA.2025.0160199</id>
        <doi>10.14569/IJACSA.2025.0160199</doi>
        <lastModDate>2025-01-30T09:02:15.5630000+00:00</lastModDate>
        
        <creator>Yuwei Ji</creator>
        
        <subject>Virtual reality; games; eye movement features; visual fatigue</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>VR games bring people physical and mental enjoyment, but can also lead to eye health problems. At present, existing VR systems lack fatigue detection technology, which makes it difficult to help users use their eyes reasonably. To improve the experience of VR gamers, this paper proposes a visual fatigue detection algorithm based on eye movement features, which uses the relationship between the lateral and longitudinal displacements of the head and the displacement of the center point of the eye to locate the position of the eye. The tracked eye position is then fed into a three-frame difference algorithm to detect eye movement features. In addition, tiny motion interference such as eyebrows is removed using a morphological opening operation (erosion followed by dilation). Experiments show that the adopted eye movement feature detection method greatly improves detection speed with little accuracy loss, meets the sensitivity requirements of eye movement feature capture, improves the real-time performance of the system, and enables effective real-time analysis of player status. Therefore, integrating this algorithm into a VR game system can help players adjust their own state, which has a positive effect on improving the game experience and reducing eye damage.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_99-Evaluation_of_Eye_Movement_Features_and_Visual_Fatigue.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decoding Face Attributes: A Modified AlexNet Model with Emphasis on Correlation-Heterogeneity Relationship Between Facial Attributes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160198</link>
        <id>10.14569/IJACSA.2025.0160198</id>
        <doi>10.14569/IJACSA.2025.0160198</doi>
        <lastModDate>2025-01-30T09:02:15.5300000+00:00</lastModDate>
        
        <creator>Abdelaali Benaiss</creator>
        
        <creator>Otman Maarouf</creator>
        
        <creator>Rachid El Ayachi</creator>
        
        <creator>Mohamed Biniz</creator>
        
        <creator>Mustapha Oujaoura</creator>
        
        <subject>Face attribute estimation; biometrics; Convolutional Neural Network (CNN); face verification; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Face attribute estimation has several applications in computer vision, including biometric systems, face verification/identification, and image retrieval, and its performance has been improved by machine learning algorithms. In recent years, most algorithms have treated this task as multiple binary classification problems. CNN-based approaches in particular can be divided into two classes: shared-features approaches and parts-based approaches. In the shared-features approach, the model uses two types of CNNs: one for feature extraction followed by another for attribute classification. In parts-based approaches, the face image is split into multiple parts according to the geometric position of each attribute, and a CNN model is trained for each part. The shared-features approach can capture attribute correlation and saves training time but ignores attribute heterogeneity; conversely, parts-based approaches handle attribute heterogeneity but ignore attribute correlation and require more training time than a shared-features approach. In this work, we propose a face attribute estimation method that combines the shared-features and parts-based approaches into one model. Our model splits the input face image into five parts: the whole image, the face, the upper face, the lower face, and the nose. In the same manner, the face attributes are subdivided into five groups according to their geometric position in the face image. We train a shared-features model for each part, propose an algorithm for the feature selection task, and apply the AdaBoost algorithm for the attribute classification task. Through a set of experiments on the LFWA and IIITM Face Emotion datasets, we demonstrate that our approach achieves higher face attribute estimation accuracy than state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_98-Decoding_Face_Attributes_A_Modified_AlexNet_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Road Safety: A Multi-Modal Drowsiness Detection System for Drivers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160197</link>
        <id>10.14569/IJACSA.2025.0160197</id>
        <doi>10.14569/IJACSA.2025.0160197</doi>
        <lastModDate>2025-01-30T09:02:15.4830000+00:00</lastModDate>
        
        <creator>Guirrou Hamza</creator>
        
        <creator>Mohamed Zeriab Es-Sadek</creator>
        
        <creator>Youssef Taher</creator>
        
        <subject>Fatigue detection; drowsiness monitoring; ADAS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Driver drowsiness is a major contributing factor in road accidents, emphasizing the need for enhanced detection measures to improve car safety. This paper describes a multi-modal fatigue detection system that uses data from an internal camera, a front camera, and vehicle factors to reliably assess driver alertness. The technology outperforms traditional methods in terms of detection accuracy by utilizing powerful machine learning algorithms. Simulation and real-world tests show considerable improvements in reliability and performance. This integrated strategy offers a promising alternative for reducing the dangers associated with driver weariness and improving overall traffic safety.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_97-Enhancing_Road_Safety_A_Multi_Modal_Drowsiness_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Research of Accounting Automation Management System Based on Swarm Intelligence Algorithm and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160196</link>
        <id>10.14569/IJACSA.2025.0160196</id>
        <doi>10.14569/IJACSA.2025.0160196</doi>
        <lastModDate>2025-01-30T09:02:15.4530000+00:00</lastModDate>
        
        <creator>Dan Gui</creator>
        
        <creator>Wei Ma</creator>
        
        <creator>Wanfei Chen</creator>
        
        <subject>Swarm intelligence algorithm; deep learning; accounting; automation management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In existing research, traditional algorithms have been insufficiently validated in real accounting management, and the data processing capabilities of deep learning have yet to be fully exploited in complex accounting scenarios. Given the efficiency and accuracy challenges the accounting industry faces in the era of big data, this study combines swarm intelligence algorithms with deep learning technology to design and implement an efficient and accurate accounting automation management system. The research investigates the potential of swarm intelligence algorithms and deep learning techniques for developing an automated accounting management system, with a focus on improving efficiency, accuracy, and scalability. Key research questions include the optimal configuration of swarm intelligence algorithms for accounting tasks and the performance of deep learning models in automating various accounting processes. The system was experimentally validated on three consecutive years of financial data from a large enterprise. The results show that the system significantly shortens financial statement generation time by 65%, reduces the error rate to below 0.5%, and increases the accuracy of abnormal data recognition by as much as 90%. These results reflect the significant improvement in the system&#39;s efficiency and accuracy and demonstrate its great potential for early warning of financial risk, providing intelligent, automated solutions for the accounting industry.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_96-Design_and_Research_of_Accounting_Automation_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Joint Detection of Coronary Artery Plaque and Stenosis in Angiography Using Enhanced DCNN-GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160195</link>
        <id>10.14569/IJACSA.2025.0160195</id>
        <doi>10.14569/IJACSA.2025.0160195</doi>
        <lastModDate>2025-01-30T09:02:15.4230000+00:00</lastModDate>
        
        <creator>M. Jayasree</creator>
        
        <creator>L. Koteswara Rao</creator>
        
        <subject>DCNN-GAN; angiography; coronary artery plaque; stenosis; joint conditional detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Timely detection and diagnosis of plaque and stenosis in coronary artery segments from X-ray angiography is of great significance; however, variations in image quality, noise, and artifacts in the original images pose serious difficulties for current algorithms. These problems hinder meaningful analysis with traditional approaches and compromise the effectiveness of detection algorithms. To overcome these drawbacks, this study presents a new integrated deep learning technique that combines a Deep Convolutional Neural Network (DCNN) with a Generative Adversarial Network (GAN) for dual conditional detection. The DCNN performs detailed feature learning on X-ray angiography images, capturing vascular structure and automatically detecting pathological regions. The GAN further enriches the dataset with synthetic images, distortions, and visual noise, making the model more robust to varying imaging conditions. Combined, the two approaches yield better classification of normal and pathological areas and lower sensitivity to the quality of the acquired images. The proposed method thus improves diagnostic accuracy and provides a solid foundation for clinical decision making in cardiovascular care. Its efficacy is demonstrated by the following evaluation metrics: 97.9% F1 score, 98.7% accuracy, 98.2% precision, and 98% recall. The results show higher sensitivity and accuracy in plaque and stenosis identification compared with traditional methods, confirming that the proposed DCNN-GAN method accommodates real-world variability in medical imaging. It represents a decisive advancement in algorithmic cardiovascular assessment, delivering better results in difficult imaging environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_95-Robust_Joint_Detection_of_Coronary_Artery_Plaque.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Traffic Congestion Prediction Using Attention-Based Multi-Layer GRU Model with Feature Embedding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160194</link>
        <id>10.14569/IJACSA.2025.0160194</id>
        <doi>10.14569/IJACSA.2025.0160194</doi>
        <lastModDate>2025-01-30T09:02:15.3900000+00:00</lastModDate>
        
        <creator>Sreelekha M</creator>
        
        <creator>Midhunchakkaravarthy Janarthanan</creator>
        
        <subject>Intelligent transportation system; traffic congestion; urban mobility; deep learning; gated recurrent unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Intelligent Transportation Systems (ITS) are crucial for managing urban mobility and addressing traffic congestion, which poses significant challenges to modern cities. Traffic congestion leads to increased travel times, pollution, and fuel consumption, impacting both the environment and quality of life. Traditional traffic management solutions often fall short in predicting and adapting to dynamic traffic conditions. This study proposes an efficient deep learning (DL) model for predicting traffic congestion, utilizing the strengths of an attention-based multilayer Gated Recurrent Unit (GRU) network. The dataset used for this study includes 48,120 hourly vehicle counts across four junctions along with additional weather data. Temporal and lagged features were engineered to capture daily and historical traffic trends, and categorical data were handled through feature embedding. The attention-based GRU model integrates an attention mechanism to focus on relevant historical data, improving predictive performance by selectively emphasizing crucial time steps. This architecture, consisting of two hidden layers and attention mechanisms, allows for nuanced traffic predictions by handling temporal dependencies and variations effectively. Performance was evaluated using various error metrics. The results demonstrate the model&#39;s ability to predict traffic congestion with an MSE of 0.9678, MAE of 0.4322, R&#178; of 0.8686, and MAPE of 6%, offering valuable insights for traffic management and urban planning.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_94-Enhanced_Traffic_Congestion_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text-to-Image Generation Method Based on Object Enhancement and Attention Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160193</link>
        <id>10.14569/IJACSA.2025.0160193</id>
        <doi>10.14569/IJACSA.2025.0160193</doi>
        <lastModDate>2025-01-30T09:02:15.3600000+00:00</lastModDate>
        
        <creator>Yongsen Huang</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <creator>Yuefan An</creator>
        
        <subject>Multi-object category; text-to-image generation; object enhancement; attention maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the task of text-to-image generation, common issues such as missing objects in the generated images often arise due to the model&#39;s insufficient learning of multi-object category information and the lack of consistency between the text prompts and the generated image contents. To address these challenges, this paper proposes a novel text-to-image generation approach based on object enhancement and attention maps. First, a new object enhancement strategy is introduced to improve the model’s capacity to capture object-level features. The core idea is to generate difficult samples by processing the object mask maps of tokens, followed by dynamic weighting of the attention map using latent image embeddings. Second, to enhance the consistency between the text prompts and the generated image contents, we enforce similarity constraints between the cross-attention maps and the attention-weighted mask feature maps, penalizing inconsistencies through a loss function. Experimental results demonstrate that the Stable Diffusion v1.4 model, optimized using the proposed method, achieves significant improvements on the COCO instance dataset and the ADE20K instance dataset. Specifically, the MG metrics are improved by an average of 12.36% and 6.55%, respectively, compared to state-of-the-art models. Furthermore, the FID metrics show a 0.84% improvement over the state-of-the-art model on the COCO instance validation set.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_93-Text_to_Image_Generation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment and Emotion Analysis with Large Language Models for Political Security Prediction Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160192</link>
        <id>10.14569/IJACSA.2025.0160192</id>
        <doi>10.14569/IJACSA.2025.0160192</doi>
        <lastModDate>2025-01-30T09:02:15.3130000+00:00</lastModDate>
        
        <creator>Liyana Safra Zaabar</creator>
        
        <creator>Adriana Arul Yacob</creator>
        
        <creator>Mohd Rizal Mohd Isa</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Nor Asiakin Abdullah</creator>
        
        <creator>Suzaimah Ramli</creator>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <subject>Political security; large language models; sentiment analysis; emotion analysis; BERT; threat prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The increasing spread of textual content on social media, driven by the rise of Large Language Models (LLMs), has highlighted the importance of sentiment analysis in detecting threats, racial abuse, violence, and implied warnings. The subtlety and ambiguity of language present challenges in developing effective frameworks for threat detection, particularly within the political security domain. While significant research has explored hate speech and offensive content, few studies focus on detecting threats using sentiment analysis in this context. Leveraging advancements in Natural Language Processing (NLP), this study employs the NRC Emotion Lexicon to label emotions in a political-domain social media dataset. TextBlob is used to extract sentiment polarity, identifying potential threats where anger and fear intensities exceed a threshold alongside negative sentiment. The Bidirectional Encoder Representations from Transformers (BERT) model was applied to enhance threat detection accuracy. The proposed framework achieved an Area Under the ROC Curve (AUC) of 87%, with the BERT model achieving 91% accuracy, 90.5% precision, 81.3% recall, and an F1-score of 91%, outperforming baseline models. These findings demonstrate the effectiveness of sentiment- and emotion-based features in improving threat detection accuracy, providing a robust framework for political security applications.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_92-Sentiment_and_Emotion_Analysis_with_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Emotions with Deep Learning Models: Strategies to Optimize the Work Environment and Organizational Productivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160191</link>
        <id>10.14569/IJACSA.2025.0160191</id>
        <doi>10.14569/IJACSA.2025.0160191</doi>
        <lastModDate>2025-01-30T09:02:15.2800000+00:00</lastModDate>
        
        <creator>Cantuarias Valdivia Luis Alberto de Jes&#250;s</creator>
        
        <creator>G&#243;mez Human Javier Junior</creator>
        
        <creator>Sierra-Li&#241;an Fernando</creator>
        
        <subject>Facial recognition; real-time emotions; convolutional neural networks; work environment; artificial intelligence in human resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study proposes the implementation of a facial emotion recognition system based on Convolutional Neural Networks to detect emotions in real time, aiming to optimize the workplace environment and enhance organizational productivity. Six deep learning models were evaluated: Standard CNN, AlexNet, VGG16, InceptionV3, ResNet152 and DenseNet201, with DenseNet201 achieving the best performance, delivering an accuracy of 87.7% and recall of 96.3%. The system demonstrated significant improvements in key performance indicators (KPIs), including a 72.59% reduction in data collection time, a 63.4% reduction in diagnosis time, and a 66.59% increase in job satisfaction. These findings highlight the potential of Deep Learning technologies for workplace emotional management, enabling timely interventions and fostering a healthier, more efficient organizational environment.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_91-Detecting_Emotions_with_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PCE-BP: Polynomial Chaos Expansion-Based Bagging Prediction Model for the Data Modeling of Combine Harvesters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160190</link>
        <id>10.14569/IJACSA.2025.0160190</id>
        <doi>10.14569/IJACSA.2025.0160190</doi>
        <lastModDate>2025-01-30T09:02:15.2330000+00:00</lastModDate>
        
        <creator>Liangyi Zhong</creator>
        
        <creator>Mengnan Deng</creator>
        
        <creator>Maolin Shi</creator>
        
        <creator>Ting Lou</creator>
        
        <creator>Shaoyang Zhu</creator>
        
        <creator>Jingwen Zhan</creator>
        
        <creator>Zishang Li</creator>
        
        <creator>Yi Ding</creator>
        
        <subject>Combine harvester; data modeling; polynomial chaos expansion; decision tree; bagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the rapid development of measurement and monitoring techniques, massive amounts of in-situ data have been recorded and collected from the measurement systems of combine harvesters during their working process and/or field experiments. However, the relationship between the operation parameters and a performance index such as cleaning loss usually changes greatly across different sample subspaces, which makes it difficult for conventional prediction models to model the in-situ data, since most of them assume that the relationship is the same or similar throughout the whole sample space. Therefore, a polynomial chaos expansion-based bagging prediction model (PCE-BP) is proposed in this article. A polynomial chaos expansion-based decision tree is constructed to divide the sample space so that the relationship between the operation parameters and the performance index is more similar within the same partition than across partitions, and bagging is used to ensemble the polynomial chaos expansion-based decision trees to reduce perturbation and provide robust predictions. Experiments on mathematical test functions show that the proposed model outperforms polynomial chaos expansion, the polynomial chaos expansion-based decision tree, and the conventional bagging prediction model. The proposed model is further validated on two monitoring datasets from a combine harvester. The experimental results show that the PCE-BP model provides better cleaning loss and impurity rate predictions than the other models in most experiments, demonstrating the advantages of sample space partitioning and bagging in the data modeling of combine harvesters.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_90-PCE_BP_Polynomial_Chaos_Expansion_Based_Bagging_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Task Scheduling Algorithm Using Harris Hawks Optimization Algorithm for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160189</link>
        <id>10.14569/IJACSA.2025.0160189</id>
        <doi>10.14569/IJACSA.2025.0160189</doi>
        <lastModDate>2025-01-30T09:02:15.2030000+00:00</lastModDate>
        
        <creator>Fang WANG</creator>
        
        <subject>Cloud computing; optimization; task scheduling; Harris Hawks Optimization; resource allocation; quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Amongst the most transformational technologies today, cloud computing provides resources such as CPU, memory, and storage over secure internet connections. Owing to its flexibility and resource availability with guaranteed QoS, cloud computing has seen broad business and research adoption. Despite this rapid development, resource management remains one of the significant challenges, especially efficient task scheduling in this environment. Task scheduling strategically assigns tasks to available resources so that Quality of Service (QoS) metrics, such as response time and throughput, are effectively met. This paper proposes an Enhanced Harris Hawks Optimization (EHHO) algorithm for scheduling cloud tasks that mitigates common limitations of existing algorithms. EHHO integrates a dynamic random walk strategy, enhancing exploration capabilities to avoid premature convergence and significantly improving scalability and resource allocation efficiency. Simulation outcomes reveal that EHHO reduces makespan by up to 75%, memory usage by up to 60%, execution time by up to 39%, and cost by up to 66% compared with state-of-the-art algorithms. These benefits demonstrate that EHHO can optimize resource allocation while remaining highly scalable and reliable. Consistent performance over various stacks such as Kafka, Spark, Flink, and Storm further evidences the superiority of EHHO in handling complex scheduling challenges in dynamic cloud computing environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_89-Enhanced_Task_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Moth-Flame Optimization Algorithm for Service Composition in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160188</link>
        <id>10.14569/IJACSA.2025.0160188</id>
        <doi>10.14569/IJACSA.2025.0160188</doi>
        <lastModDate>2025-01-30T09:02:15.1570000+00:00</lastModDate>
        
        <creator>Yeling YANG</creator>
        
        <creator>Miao SONG</creator>
        
        <subject>Cloud computing; quality of service; service composition; edge cloud; moth-flame optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Cloud computing service composition integrates services, distributed and diverse by nature, into a single entity that can meet a user&#39;s requirements more effectively. However, obstacles such as high latency and suboptimal Quality of Service (QoS) still exist in dynamic multi-cloud environments. This study addresses the limitations of traditional optimization algorithms in service composition, specifically the premature convergence and lack of population diversity of the Moth-Flame Optimization (MFO) algorithm. We propose a modified MFO algorithm with a new mechanism called Stagnation Finding and Replacement (SFR) to enhance population diversity. It identifies stagnant solutions based on a distance metric from globally optimal representative solutions and replaces them. MFO-SFR substantially improves all QoS metrics, such as response time, delay, and service stability. Empirical evaluations show that MFO-SFR outperforms the baseline methods for multi-cloud service composition. It provides a computationally efficient and adaptive solution to cloud service composition problems, ensuring better resource utilization and higher user satisfaction in dynamic multi-cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_88-Modified_Moth_Flame_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Clustering Framework for Scalable and Robust Query Analysis: Integrating Mini-Batch K-Means with DBSCAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160187</link>
        <id>10.14569/IJACSA.2025.0160187</id>
        <doi>10.14569/IJACSA.2025.0160187</doi>
        <lastModDate>2025-01-30T09:02:15.1270000+00:00</lastModDate>
        
        <creator>Sridevi K N</creator>
        
        <creator>Rajanna M</creator>
        
        <subject>Hybrid clustering; information retrieval; mini-batch k-means; query analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Query clustering is a significant task in information retrieval. Research gaps still exist due to high-dimensional datasets, noise detection, and cluster interpretability. Solving these challenges will support large language models with faster and more efficient responses. This study develops a hybrid clustering approach combining Mini-Batch K-means (MBK) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to cluster large-scale query datasets for information retrieval. The proposed method applies a preprocessing pipeline that cleans the data, extracts meaningful features, and scales all features of the query dataset; the hybrid clustering framework then operates on this preprocessed data. MBK provides fast, scalable clustering, while DBSCAN delivers precise, density-based refinement, allowing large-scale datasets to be processed efficiently while sharpening cluster boundaries and handling outliers. The proposed hybrid clustering framework performs query analysis in information retrieval effectively, achieving a Silhouette score of 72.14% and an adjusted Rand index of 78.23%. Thus, the hybrid clustering approach provides a robust and scalable solution for query analysis tasks.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_87-Hybrid_Clustering_Framework_for_Scalable_and_Robust_Query_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Collaborative Filtering Optimization Algorithm Based on Semantic Relationships in Interior Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160186</link>
        <id>10.14569/IJACSA.2025.0160186</id>
        <doi>10.14569/IJACSA.2025.0160186</doi>
        <lastModDate>2025-01-30T09:02:15.0770000+00:00</lastModDate>
        
        <creator>Kai Zhao</creator>
        
        <creator>Lei Wang</creator>
        
        <subject>Semantic relationships; category combination space; random walks; collaborative filtering; temporal recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Due to the diversity of interior design, it is difficult for users to find their target data, making personalized recommendation systems particularly important. Therefore, an optimized collaborative filtering recommendation system is proposed. First, a random walk recommendation model based on category combination space is constructed: the traditional flat relational connection is abandoned, a Hasse diagram is used to achieve a one-to-one mapping between items and types, semantic relationships and distances are defined, and a basic random-walk recommendation framework is established from data such as jump behavior. Next, the potential semantic relationships between entities are explored, and a lightweight knowledge graph is proposed to define the social and explicit relationships between entities. Finally, the short-term features of items are obtained using deep collaborative filtering, and a deep collaborative filtering temporal model based on semantic relationships is constructed. Validation experiments confirmed that, with a vector dimension of 10, the average HR@K and NDCG@K were 6.9% and 12.9% higher than those of the other models. The semantic-relationship-based collaborative filtering recommendation model proposed in this study is therefore reliable.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_86-Application_of_Collaborative_Filtering_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Bibliometric Literature Review of Chatbot Research: Trends, Frameworks, and Emerging Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160185</link>
        <id>10.14569/IJACSA.2025.0160185</id>
        <doi>10.14569/IJACSA.2025.0160185</doi>
        <lastModDate>2025-01-30T09:02:15.0170000+00:00</lastModDate>
        
        <creator>Nazruddin Safaat Harahap</creator>
        
        <creator>Aslina Saad</creator>
        
        <creator>Nor Hasbiah Ubaidullah</creator>
        
        <subject>Chatbot research; bibliometric literature review; retrieval and generative; trend visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study aims to conduct a comprehensive bibliometric literature review of chatbot research by examining key trends, frameworks, and influential applications across various domains. It seeks to map the evolution of chatbot technologies, identify influential works, and analyze how the research focus has shifted over time, particularly towards AI-driven chatbot frameworks. An expanded dataset was compiled from the Scopus database, and bibliometric analyses were conducted using n-gram reference analysis, network mapping, and temporal trend visualization. The analysis was performed using R Studio with Biblioshiny, allowing for the identification of thematic clusters and the progression from rule-based to advanced retrieval and generative language model paradigms in chatbot research. Chatbot research has grown significantly from 2020 to 2024, with rising publication volumes and increased global collaboration, led by contributions from the USA, China, and emerging regions, such as Southeast Asia. Thematic analysis highlights a shift from foundational AI and NLP technologies to specialized applications such as mental health chatbots and e-commerce systems, emphasizing practical and user-centered solutions. Advances in chatbot architectures, including generative AI, have demonstrated the field&#39;s interdisciplinary nature and trajectory towards sophisticated, context-aware conversational systems. The analysis primarily used data from Scopus, which may limit the breadth of the included research. Future studies are encouraged to integrate data from other sources, such as the Web of Science (WoS) and PubMed, for a more comprehensive understanding of the field.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_85-Comprehensive_Bibliometric_Literature_Review_of_Chatbot_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methodological Review of Social Engineering Policy Model for Digital Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160184</link>
        <id>10.14569/IJACSA.2025.0160184</id>
        <doi>10.14569/IJACSA.2025.0160184</doi>
        <lastModDate>2025-01-30T09:02:14.9700000+00:00</lastModDate>
        
        <creator>Wenni Syafitri</creator>
        
        <creator>Zarina Shukur</creator>
        
        <creator>Umi Asma’ Mokhtar</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <subject>Digital marketing; social engineering attack prevention; review study; security policy model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Social engineering attacks are recognized as human-based threats and continue to increase, despite studies focusing on prevention methods that do not rely on the human aspect. The impacts of these attacks are felt across various industries and organizations. To address this issue, a social engineering policy model must be proposed for prevention in industrial settings, particularly emphasizing digital marketing activities, a crucial process in contemporary industries. However, hackers often exploit activities or information in these practices, necessitating an industry-specific policy to prevent these threats in digital marketing. As a result, a comprehensive review was conducted to identify critical methods for developing a social engineering policy model. The review uses Bryman&#39;s method to determine effective approaches for designing a social engineering policy model tailored for digital marketing. Consequently, this review provides a method for crafting an effective social engineering policy, offering valuable insights for enhancing digital marketing security.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_84-Methodological_Review_of_Social_Engineering_Policy_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A System Dynamics Model of Frozen Fish Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160183</link>
        <id>10.14569/IJACSA.2025.0160183</id>
        <doi>10.14569/IJACSA.2025.0160183</doi>
        <lastModDate>2025-01-30T09:02:14.9230000+00:00</lastModDate>
        
        <creator>Leni Herdiani</creator>
        
        <creator>Maun Jamaludin</creator>
        
        <creator>Iman Sudirman</creator>
        
        <creator>Widjajani</creator>
        
        <creator>Ismet Rohimat</creator>
        
        <subject>Supply chain; system dynamics; frozen fish; six sub-models; simulation; policy scenario</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The system dynamics methodology examines the intricate behaviors of complex systems through time, incorporating inventories, transfers, feedback cycles, lookup functions, and temporal delays. In fisheries systems, the interaction between resources and management entities is intricate, with the dynamics of fisheries significantly influencing the formulation of effective policies. Fisheries hold a vital position in Indonesia&#39;s economy, contributing to food security, nutrition, and the welfare of fishermen. Under Law Number 7 of 2016, the fisheries sector covers all activities, from resource management to the marketing of marine products. With its rich fishery resources, Indramayu Regency is a major contributor to West Java&#39;s fish production. TPI Karangsong, the hub of fishing activities in Indramayu, is a key player in the frozen fish supply chain, relying heavily on cold storage facilities to ensure product quality. Consequently, the system dynamics approach proves valuable in understanding the frozen fish supply chain by modeling the interactions between different variables and evaluating the impact of policies to improve fish quality. The system dynamics model in this study consists of six sub-models: fish at TPI, cold storage, refrigerated cabinets, total revenue, cash, and cold trucks. The simulation results provide policy recommendations to improve the quality of frozen fish at TPI Karangsong, namely the baseline scenario, the cold truck scenario, the truck and cold storage integration scenario, and the cold storage and fish catch drop integration scenario.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_83-A_System_Dynamics_Model_of_Frozen_Fish.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Scheduling in Fog Computing-Powered Internet of Things Networks: A Review on Recent Techniques, Classification, and Upcoming Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160182</link>
        <id>10.14569/IJACSA.2025.0160182</id>
        <doi>10.14569/IJACSA.2025.0160182</doi>
        <lastModDate>2025-01-30T09:02:14.9070000+00:00</lastModDate>
        
        <creator>Dongge TIAN</creator>
        
        <subject>Internet of Things; task scheduling; fog computing; quality of service; network congestion; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The Internet of Things (IoT) phenomenon influences daily activities by transforming physical equipment into smart objects. The IoT has enabled a wealth of technological innovations that were previously unimaginable. IoT application areas cover various sectors, including medical care, home automation, smart grids, and industrial operations. The massive growth of IoT applications causes network congestion because of the large volume of IoT tasks pushed to the cloud. Fog computing mitigates these transfers by placing resources near the edge. However, new challenges arise, such as limited computing power, high complexity, and the distributed characteristics of fog devices, negatively affecting the Quality of Service (QoS). Much research has been conducted to address these challenges in designing QoS-aware task scheduling optimization techniques. This paper comprehensively reviews task scheduling techniques in fog computing-powered IoT networks. We classify these techniques into heuristic-based, metaheuristic-based, and machine learning-based algorithms, evaluating their objectives, advantages, weaknesses, and performance metrics. Additionally, we highlight research gaps and propose actionable recommendations to address emerging challenges. Our findings offer a structured framework for researchers and practitioners to develop efficient, QoS-aware task scheduling solutions in fog computing environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_82-Task_Scheduling_in_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Algorithm Based on Butterfly and Flower Pollination Algorithms for Scheduling Independent Tasks on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160181</link>
        <id>10.14569/IJACSA.2025.0160181</id>
        <doi>10.14569/IJACSA.2025.0160181</doi>
        <lastModDate>2025-01-30T09:02:14.8600000+00:00</lastModDate>
        
        <creator>Huiying SHAO</creator>
        
        <subject>Task scheduling; cloud computing; butterfly optimization algorithm; flower pollination algorithm; mutualism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Cloud computing is an Internet-based computing paradigm where virtual servers or workstations are offered as platforms, software, infrastructure, and resources. Task scheduling is considered one of the major NP-hard problems in cloud environments, posing several challenges to efficient resource allocation. Many metaheuristic algorithms have been extensively employed to address these task-scheduling problems as discrete optimization problems and have given rise to some proposals. However, these algorithms have inherent limitations due to local optima and convergence to poor results. This paper suggests a hybrid strategy for organizing independent tasks in heterogeneous cloud resources by incorporating the Butterfly Optimization Algorithm (BOA) and Flower Pollination Algorithm (FPA). Although BOA suffers from local optima and loss of diversity, which may cause an early convergence of the swarm, our hybrid approach overcomes such weaknesses by exploiting a mutualism-based mechanism. Indeed, the proposed hybrid algorithm outperforms existing methods while considering different task quantities with better scalability. Experiments are conducted within the CloudSim simulation framework with many task instances. Statistical analysis is performed to test the significance of the obtained results, which confirms that the suggested algorithm is effective at solving cloud-based task scheduling issues. The study findings indicate that the hybrid metaheuristic algorithm could be a promising approach to improving resource utilization and optimizing cloud task scheduling.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_81-A_Novel_Hybrid_Algorithm_Based_on_Butterfly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Optimization Strategy for CNN Models in Palembang Songket Motif Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160180</link>
        <id>10.14569/IJACSA.2025.0160180</id>
        <doi>10.14569/IJACSA.2025.0160180</doi>
        <lastModDate>2025-01-30T09:02:14.8270000+00:00</lastModDate>
        
        <creator>Yohannes</creator>
        
        <creator>Muhammad Ezar Al Rivan</creator>
        
        <creator>Siska Devella</creator>
        
        <creator>Tinaliah</creator>
        
        <subject>Convolutional neural network; ghost module; palembang songket motif; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Palembang Songket is an essential part of Indonesian cultural heritage, and its introduction and preservation present challenges, particularly in recognizing various motifs. This research introduces a novel strategy to optimize the performance of Convolutional Neural Networks (CNNs) by presenting a hierarchical integration of Ghost Module operations and Max Pooling, referred to as Ghost Feature Maps. While the Ghost Module is effective in reducing parameters and enhancing feature extraction, it has limitations in filtering irrelevant information. To address this shortcoming, we propose a hierarchy in which Max Pooling works in conjunction with the Ghost Module, strengthening its performance by not only extracting dominant features but also eliminating excess, non-essential information. This hierarchical design enables more efficient feature extraction, thus enhancing the model&#39;s recognition accuracy. By combining Ghost Modules and Max Pooling in a structured manner, this approach advances established methodologies and offers a new perspective on feature representation within CNN architectures. Utilizing a dataset of 10 augmented classes of Palembang Songket motifs totaling 1000 images, we conducted experiments using varying ratios of Ghost Feature Maps. The results indicate that a ratio of 2 achieves an impressive accuracy of 0.98 with minimal parameter reduction. Additionally, a ratio of 3 results in a 34% decrease in parameters while maintaining a competitive accuracy of 0.95. Ratios of 4 and 5 continue to demonstrate robust performance, achieving accuracy levels of 0.93 while delivering over 60% reductions in model size and parameters. This research not only contributes to the optimization of CNN architectures but also supports the preservation of cultural heritage by improving the recognition capabilities of Palembang Songket motifs.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_80-A_Novel_Optimization_Strategy_for_CNN_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Approach for Agile IoT Smart Cities Transformation – Intelligent, Fast and Flexible</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160179</link>
        <id>10.14569/IJACSA.2025.0160179</id>
        <doi>10.14569/IJACSA.2025.0160179</doi>
        <lastModDate>2025-01-30T09:02:14.7970000+00:00</lastModDate>
        
        <creator>Othman Asiry</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <creator>Amira M. Idrees</creator>
        
        <subject>Smart city; agility; Internet of Things; cloud computing; intelligent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Smart city architectures vary from one community to another, as each community leader develops their own perspective of smart cities. Some of these communities focus on data management, while others focus on provided services and infrastructure. This research attempts to propose a clear, complete, and efficient perspective of smart cities. The proposed generic architecture clarifies the full capabilities, requirements, and layers’ contribution to a successful smart city development. The proposed architecture utilizes Internet of Things tools as well as agile standards in the description of each layer. The research aims to discuss each layer in detail, the relationships among layers, the applied technology, and every aspect that leads to the success of using the recommended architecture. Although smart cities, IoT, and agile research have previously tackled the relation of each one of them with the other, up to the researchers’ knowledge, the three paradigms have not been previously considered as a unified collaborative approach. In order to reach the research target, the proposed architecture intelligently utilizes these paradigms and presents a robust architecture with high-quality standards.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_79-A_Proposed_Approach_for_Agile_IoT_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LFM Book Recommendation Based on Fusion of Time Information and K-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160178</link>
        <id>10.14569/IJACSA.2025.0160178</id>
        <doi>10.14569/IJACSA.2025.0160178</doi>
        <lastModDate>2025-01-30T09:02:14.7670000+00:00</lastModDate>
        
        <creator>Dawei Ji</creator>
        
        <subject>Book recommendation; K-means; time information; latent factor model; preference matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>To meet the growing demand in the field of book recommendation, the research focuses on meeting the personalized needs, behavioral patterns, and interests of readers. A book recommendation algorithm that combines K-means clustering with time information is proposed to provide more convenient and efficient book recommendation services and enhance readers&#39; reading experience. The algorithm constructs a comprehensive user preference matrix by incorporating readers&#39; borrowing time. Then, K-means clustering is applied to group users with similar preferences, and a latent factor model is leveraged to train and predict user ratings. The methodological integration of clustering and the latent factor model ensures a more precise and dynamic recommendation process. The experimental results demonstrated that the proposed algorithm achieved a high average recommendation accuracy of 98.7%. Additionally, the algorithm maintained an average book popularity score of 8.2 after reaching stability, indicating its ability to suggest widely appreciated books. These outcomes validate the effectiveness of the algorithm in delivering accurate and popular book recommendations tailored to individual readers&#39; needs. This study combines K-means clustering with time-sensitive preference analysis and a latent factor model to introduce an innovative method in the field of book recommendation systems. The findings provide valuable insights and practical applications for libraries seeking to enhance their personalized recommendation services, offering a significant contribution to the field of intelligent information retrieval.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_78-LFM_Book_Recommendation_Based_on_Fusion_of_Time_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevator Abnormal State Detection Based on Vibration Analysis and IF Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160177</link>
        <id>10.14569/IJACSA.2025.0160177</id>
        <doi>10.14569/IJACSA.2025.0160177</doi>
        <lastModDate>2025-01-30T09:02:14.7200000+00:00</lastModDate>
        
        <creator>Zhaoxiu Wang</creator>
        
        <subject>Vibration analysis; IF algorithm; elevator; abnormal; detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Elevators play a crucial role in daily life, and their safety directly impacts the personal and property safety of users. To detect abnormal states of elevators and ensure people&#39;s personal safety, the acceleration signal of the elevator is decomposed and the Weiszfeld algorithm is used to estimate gravitational acceleration. In addition, the study introduces Kalman filtering to reduce error accumulation. To estimate the operating position of the elevator, a method based on information fusion is designed to construct a mapping relationship between elevator vibration energy and position and to locate the height of elevator faults. Finally, an anomaly detection model combining vibration analysis and the Isolation Forest algorithm is developed. The results showed that acceleration values in the horizontal direction were mainly distributed between -0.02 m/s² and 0.02 m/s². The average estimation error and root mean square error of the designed elevator position estimation method were 0.109 m and 0.113 m, respectively, which could solve the problem of accumulated position errors. The abnormal vibration energy and fault height differed across elevator operating conditions. The normal value ratios of the anomaly detection model under different sliding windows were 99.91% and 99.57%, respectively. The proposed anomaly detection model performs well and can provide technical support for monitoring elevator operating status.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_77-Elevator_Abnormal_State_Detection_Based_on_Vibration_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Integrated Platform to Track Real Time Football Statistics for Somali Football Federation (SFF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160176</link>
        <id>10.14569/IJACSA.2025.0160176</id>
        <doi>10.14569/IJACSA.2025.0160176</doi>
        <lastModDate>2025-01-30T09:02:14.6870000+00:00</lastModDate>
        
        <creator>Bashir Abdinur Ahmed</creator>
        
        <creator>Husein Abdirahman Hashi</creator>
        
        <creator>Abdifatah Abdilatif Ahmed</creator>
        
        <creator>Abdikani Mahad Ali</creator>
        
        <subject>Real-time football statistics; Integrated sports platform; Somali Football Federation (SFF); user engagement; sports technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The integration of technology in sports has revolutionized how stakeholders interact with and perceive the game. This thesis presents the development of an integrated platform aimed at tracking real-time football statistics for the Somali Football Federation (SFF). Football, being one of the most popular sports globally, relies heavily on accurate and up-to-date statistical data for player performance analysis, team strategies, and fan engagement. The SFF, like many other federations, faces challenges in collecting, managing, and utilizing football statistics effectively. The advent of digital technologies and the internet has revolutionized data collection and dissemination methods across various fields, including sports. Traditional methods of data collection and analysis, which are often manual and time-consuming, can no longer meet the demands of modern football analytics. The platform encompasses a mobile application for fans, an admin panel for administrators, and a backend system for data management. Leveraging modern technologies such as Flutter for mobile development, Node.js and MySQL for backend services, and React for the admin interface, the system ensures comprehensive coverage of match events, player statistics, and tournament standings. Real-time updates facilitated by Socket.IO enhance user engagement and decision-making capabilities for coaches and administrators.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_76-Developing_an_Integrated_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Feature Selection Based on Metaheuristic Methods for Human Heart Sounds Classification Using PCG Signal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160175</link>
        <id>10.14569/IJACSA.2025.0160175</id>
        <doi>10.14569/IJACSA.2025.0160175</doi>
        <lastModDate>2025-01-30T09:02:14.6400000+00:00</lastModDate>
        
        <creator>Motaz Faroq A Ben Hamza</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <subject>Cardiovascular Diseases (CVDs); Phonocardiogram (PCG) signal processing; Wavelet Scattering Transform (WST); Metaheuristic Methods; Harris Hawks Optimisation (HHO); Dragonfly Algorithm (DA); Grey Wolf Optimiser (GWO); Salp Swarm Algorithm (SSA); Whale Optimisation Algorithm (WOA); Bidirectional Long Short-Term Memory (Bi-LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Cardiovascular disease is a critical threat to human health, as most death cases are due to heart disease. Although several doctors employ stethoscopes to auscultate heart sounds to detect abnormalities, the accuracy of the approach is considerably dependent upon the experience and skills of the physician. Consequently, optimal methods are required to analyse and classify heart sounds using Phonocardiogram (PCG) signal-based machine learning. The current study formulated a binary classification model by subjecting PCG signals to hyper-filtering with low-pass and cosine filters. Subsequently, numerous features are extracted with the Wavelet Scattering Transform (WST) method. During the feature selection stage, several metaheuristic methods, including Harris Hawks Optimisation (HHO), Dragonfly Algorithm (DA), Grey Wolf Optimiser (GWO), Salp Swarm Algorithm (SSA), and Whale Optimisation Algorithm (WOA), are employed to compare the attributes separately and determine the ideal characteristics for improved classification accuracy. Finally, the selected features were applied as input for the Bidirectional Long Short-Term Memory (Bi-LSTM) algorithm, simplifying the classification process for distinguishing normal and abnormal heart sounds. The present study assessed three PCG datasets: PhysioNet 2016, Yaseen Khan 2018, and PhysioNet 2022, documenting 94.85%, 100%, and 66.87% accuracy rates with 127-SSA, 168-HHO, and 163-HHO, respectively. Based on the results of the PhysioNet 2016 and 2022 datasets, the proposed method with hyperparameters demonstrated superior performance to those with default parameters in categorising normal and abnormal heart sounds appropriately.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_75-Comparative_Analysis_of_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Anonymous Identity Authentication Scheme for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160174</link>
        <id>10.14569/IJACSA.2025.0160174</id>
        <doi>10.14569/IJACSA.2025.0160174</doi>
        <lastModDate>2025-01-30T09:02:14.5930000+00:00</lastModDate>
        
        <creator>Zhengdong Deng</creator>
        
        <creator>Xuannian Lei</creator>
        
        <creator>Junyu Liang</creator>
        
        <creator>Hang Xu</creator>
        
        <creator>Zhiyuan Zhu</creator>
        
        <creator>Na Lin</creator>
        
        <creator>Zhongwei Li</creator>
        
        <creator>Jingqi Du</creator>
        
        <subject>Internet of Things; identity authentication; Physical Unclonable Functions; fuzzy extractors; chaotic maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the rapid growth of Internet of Things (IoT) devices, many of which are resource-constrained and vulnerable to attacks, current identity authentication methods are often too resource-intensive to provide adequate security. This paper proposes an efficient identity authentication scheme that integrates Physical Unclonable Functions (PUFs), Chebyshev chaotic maps, and fuzzy extractors. The scheme enables mutual authentication and key agreement without the need for passwords or smart cards, while providing effective defense against various attacks. The security of the proposed scheme is formally analyzed using an improved BAN logic. A comparison with existing related protocols in terms of security features, computational overhead, and communication overhead demonstrates the security and efficiency of the proposed scheme.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_74-A_Lightweight_Anonymous_Identity_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Facial Expressiveness in 3D Cartoon Animation Faces: Leveraging Advanced AI Models for Generative and Predictive Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160173</link>
        <id>10.14569/IJACSA.2025.0160173</id>
        <doi>10.14569/IJACSA.2025.0160173</doi>
        <lastModDate>2025-01-30T09:02:14.5630000+00:00</lastModDate>
        
        <creator>Langdi Liao</creator>
        
        <creator>Lei Kang</creator>
        
        <creator>Tingli Yue</creator>
        
        <creator>Aiting Zhou</creator>
        
        <creator>Ming Yang</creator>
        
        <subject>Facial landmark detection; 3D animation; deep learning; AI-assisted rigging; emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>An advanced system for facial landmark detection and 3D facial animation rigging is proposed, utilizing deep learning algorithms to accurately detect key facial points, such as the eyes, mouth, and eyebrows. These landmarks enable precise rigging of 3D models, facilitating realistic and controlled facial expressions. The system enhances animation efficiency and realism, providing robust solutions for applications in gaming, animation, and virtual reality. This approach integrates cutting-edge detection techniques with efficient rigging mechanisms. The AI-assisted rigging process reduces manual effort and ensures precise, dynamic animations. The study evaluates the system&#39;s accuracy in facial landmark detection, the efficiency of the rigging process, and its performance in generating consistent emotional expressions across animations. Additionally, the system&#39;s computational efficiency, scalability, and overall performance are assessed, demonstrating its practicality for real-time applications. Pilot testing, emotion recognition consistency, and performance metrics reveal the system&#39;s robustness and effectiveness in producing realistic animations while reducing production time. This work contributes to the advancement of animation and virtual environments, offering a scalable solution for realistic facial expression generation and character animation. Future research will focus on refining the system and exploring its potential applications in interactive media and real-time animation.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_73-Enhancing_Facial_Expressiveness_in_3D_Cartoon_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Jaya Algorithm for Quality-of-Service-Aware Service Composition in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160172</link>
        <id>10.14569/IJACSA.2025.0160172</id>
        <doi>10.14569/IJACSA.2025.0160172</doi>
        <lastModDate>2025-01-30T09:02:14.5170000+00:00</lastModDate>
        
        <creator>Yan SHI</creator>
        
        <subject>Service composition; internet of things; quality of service; Jaya algorithm; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The Internet of Things (IoT) has shifted how devices and services interact, resulting in diverse innovations ranging from health and smart cities to industrial automation. Nevertheless, at its core, IoT continues to face the major challenge of Quality of Service-aware Service Composition (QoS-SC), as IoT settings are normally transient and unpredictable. This paper proposes an improved Jaya algorithm for QoS-SC and focuses on optimizing service selection with a balance between the main QoS attributes: execution time, cost, reliability, and scalability. The proposed approach was designed with adaptive mechanisms to avoid local optima stagnation and slow convergence, thus ensuring robust exploration and exploitation of the solution space. Incorporating these enhancements, the proposed algorithm outperforms prior metaheuristic approaches in QoS satisfaction and computational efficiency. Extensive experiments conducted over diverse IoT scenarios show the algorithm&#39;s scalability, demonstrating that it can achieve faster convergence with superior QoS optimization.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_72-Enhanced_Jaya_Algorithm_for_Quality_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Artificial Neural Network and Long Short-Term Memory for Modelling Crude Palm Oil Production in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160171</link>
        <id>10.14569/IJACSA.2025.0160171</id>
        <doi>10.14569/IJACSA.2025.0160171</doi>
        <lastModDate>2025-01-30T09:02:14.5000000+00:00</lastModDate>
        
        <creator>Brodjol Sutijo Suprih Ulama</creator>
        
        <creator>Robi Ardana Putra</creator>
        
        <creator>Fausania Hibatullah</creator>
        
        <creator>Mochammad Reza Habibi</creator>
        
        <creator>Mochammad Abdillah Nafis</creator>
        
        <subject>Artificial Neural Network (ANN); Crude Palm Oil (CPO); Long Short-Term Memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Indonesia is one of the largest producers and exporters of Crude Palm Oil (CPO), making CPO production crucial to the country&#39;s economic stability. Accurate forecasting of CPO production is essential for effective inventory management, export-import strategy, and economic planning. Traditional time series methods like ARIMA have limitations in modeling nonlinear data, leading to the adoption of machine learning approaches such as Artificial Neural Network (ANN) and Long Short-Term Memory (LSTM). This study compares the performance of ANN, a general neural network, and LSTM, a neural network specifically designed for time series data, in predicting CPO production in Indonesia. Data from 2003 to 2022 were used to train and evaluate both models with various hyperparameter tuning configurations. The results indicate that while both models provide excellent forecasting accuracy, with MAPE values below 10%, the LSTM model achieved a lower out-of-sample MAPE of 5.78% compared to ANN’s 6.87%, suggesting superior performance by LSTM in capturing seasonal patterns in CPO production. Consequently, LSTM is recommended as the preferred model for CPO production forecasting due to its enhanced ability to handle temporal dependencies and nonlinear patterns in the data.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_71-Comparison_of_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Internet of Things and Cloud Computing-Driven Deep Learning Framework for Disease Prediction and Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160170</link>
        <id>10.14569/IJACSA.2025.0160170</id>
        <doi>10.14569/IJACSA.2025.0160170</doi>
        <lastModDate>2025-01-30T09:02:14.4700000+00:00</lastModDate>
        
        <creator>Bo GUO</creator>
        
        <creator>Lei NIU</creator>
        
        <subject>IoT-driven healthcare; deep learning; fuzzy entropy; secure data storage; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In smart cities, the e-healthcare systems aided by Internet of Things (IoT) technologies play a significant role in proficient health monitoring services. The sensitivity and number of users in health networks highlight the necessity of countering security attacks. In the era of rapid internet connectivity and cloud computing services, patient medical information is highly sensitive, and its electronic representation poses privacy and security concerns. Moreover, it is challenging for traditional classifiers to process a massive amount of health data and classify patients&#39; health statuses. To address this matter, this paper presents a novel healthcare model, IoT-CDLDPM, to estimate patients’ disease levels using original data and fuzzy entropy extracted from patients&#39; remote locations. IoT-CDLDPM incorporates a deep learning classifier to analyze extensive patient-related data and provides efficient and accurate health status predictions. Furthermore, the proposed model presents a secure storage structure for the individual&#39;s health data in cloud servers. To ensure the authenticity of the health data, two new cryptographic algorithms are presented that encrypt and decrypt the data securely transmitted through the network. A comparison with existing methods reveals that the proposed system significantly reduces computation time, with a recorded time of 0.5 seconds, outperforming DSVS, PP-ESAP, and DRDA by up to 80%. Furthermore, the proposed cryptographic model enhances security levels, achieving a range between 99.4% and 99.8% across multiple experimental setups, surpassing other widely used encryption algorithms such as AES, RSA, and ECC-DH.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_70-A_Novel_Internet_of_Things_and_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determination of Pre Coding Elements and Activities for a Pre Coding Program Model for Kindergarten Children Using the Fuzzy Delphi Method (FDM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160169</link>
        <id>10.14569/IJACSA.2025.0160169</id>
        <doi>10.14569/IJACSA.2025.0160169</doi>
        <lastModDate>2025-01-30T09:02:14.4370000+00:00</lastModDate>
        
        <creator>Siti Naimah Rahman</creator>
        
        <creator>Norly Jamil</creator>
        
        <creator>Intan Farahana Abdul Rani</creator>
        
        <creator>Hafizul Fahri Hanafi</creator>
        
        <subject>Expert consensus; pre coding; element; activity; kindergarten children</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Computational thinking (CT) skills are becoming increasingly crucial in education, particularly in early childhood education. Pre coding, which involves hands-on activities with real objects, has been shown to be quite effective in fostering kindergarten children&#39;s computational skills. Although pre coding is essential for boosting children’s CT skills, teachers frequently lack the knowledge necessary to teach these skills successfully. Its successful adoption is hampered by the early childhood education community&#39;s lack of interest in CT skills and the sparse application of pre coding techniques. In order to help kindergarten instructors incorporate pre coding into their teaching and learning, this study focuses on defining the elements and activities described in a pre coding program model. The study reviewed prior literature and compiled a list of pre coding elements and activities. Subsequently, the Fuzzy Delphi Method (FDM) was utilised to refine and validate these elements and activities. Finally, the data collected from 11 selected experts relevant to this field of study were analysed using FDM to examine consensus. The results showed that the eight identified elements and 24 pre coding activities fulfilled the following required criteria: a threshold value (d) of lower than or equal to 0.2, an agreement percentage over 75%, and a fuzzy score value (A) higher than 0.5. These findings demonstrated the suitability of the identified pre coding elements and activities for integration into a pre coding program model for kindergarten children. In summary, this study provides valuable guidance for kindergarten teachers in implementing practical pre coding activities to enhance CT skills among children.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_69-Determination_of_Pre_Coding_Elements_and_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YOLO-WP: A Lightweight and Efficient Algorithm for Small-Target Detection in Weld Seams of Small-Diameter Stainless Steel Pipes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160168</link>
        <id>10.14569/IJACSA.2025.0160168</id>
        <doi>10.14569/IJACSA.2025.0160168</doi>
        <lastModDate>2025-01-30T09:02:14.3900000+00:00</lastModDate>
        
        <creator>Huaishu Hou</creator>
        
        <creator>Yukun Sun</creator>
        
        <creator>Chaofei Jiao</creator>
        
        <subject>Welded pipe; lightweight model; defect detection; deep learning; feature extraction; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>To address the low detection efficiency and high computational resource demands of current welded pipe defect detection algorithms for small target defects, this paper proposes the YOLO-WP algorithm based on YOLOv5s. The improvements of YOLO-WP are mainly reflected in the following aspects: First, an innovative GhostFusion architecture is introduced in the backbone network. By replacing the C3 modules with C2f modules and integrating the Ghost CBS module inspired by Ghost convolution, cross-stage feature fusion is achieved, significantly enhancing computational efficiency and feature representation for small target defects. Second, the Slim-Neck lightweight design based on GSConv is employed in the neck to further optimize the network structure and reduce the number of parameters. Additionally, the SimAM lightweight attention mechanism is incorporated to improve the network&#39;s ability to extract defect features, and the Focal-EIoU loss is utilized to optimize the CIoU loss, thereby enhancing small object detection and accelerating loss convergence. The experimental results show that the AP(D1) and mAP@0.5 of the YOLO-WP model are improved by 5.3% and 3%, respectively, over the original model. In addition, the number of model parameters and FLOPs are reduced by 40% and 45%, respectively, achieving a good balance between performance and efficiency. We evaluated the performance of YOLO-WP on other datasets and showed that YOLO-WP exhibits excellent applicability. Compared to existing mainstream detection algorithms, YOLO-WP is more advanced. The YOLO-WP model significantly enhances production quality in industrial defect detection, laying the foundation for building compact, high-performance embedded weld pipe surface defect detection systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_68-YOLO_WP_A_Lightweight_and_Efficient_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Facial Expression Recognition Based on ResNet50 with a Convolutional Block Attention Module</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160167</link>
        <id>10.14569/IJACSA.2025.0160167</id>
        <doi>10.14569/IJACSA.2025.0160167</doi>
        <lastModDate>2025-01-30T09:02:14.3430000+00:00</lastModDate>
        
        <creator>Liu Luan Xiang Wei</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <subject>Data science; computer vision; deep learning; facial expression recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Deep learning techniques are becoming increasingly important in the field of facial expression recognition, especially for automatically extracting complex features and capturing spatial layers in images. However, previous studies have encountered challenges such as complex data sets, limited model generalization, and lack of comprehensive comparative analysis of feature extraction methods, especially those involving attention mechanisms and hyperparameter optimization. This study leverages data science methodologies to handle and analyze large, intricate datasets, while employing advanced computer vision algorithms to accurately detect and classify facial expressions, addressing these challenges by comprehensively evaluating FER tasks using three deep learning models (VGG19, ResNet50, and InceptionV3). The convolutional block attention module is introduced to enhance feature extraction, and the performance of the model is further improved by hyperparameter tuning. The experimental results show that the VGG19 model achieves the highest accuracy of 71.7% before the module is integrated, while ResNet50 achieves the highest accuracy of 72.4% after the module is integrated. The performance of all models was significantly improved through the introduction of attention mechanisms and hyperparameter tuning, highlighting the synergistic potential of data science and computer vision in developing robust and efficient facial expression recognition systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_67-Enhanced_Facial_Expression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Large Language Models for Academic Internal Auditing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160166</link>
        <id>10.14569/IJACSA.2025.0160166</id>
        <doi>10.14569/IJACSA.2025.0160166</doi>
        <lastModDate>2025-01-30T09:02:14.3130000+00:00</lastModDate>
        
        <creator>Houda CHAMMAA</creator>
        
        <creator>Rachid ED-DAOUDI</creator>
        
        <creator>Khadija BENAZZI</creator>
        
        <subject>Large language models; internal auditing; information retrieval; embedding models; academic institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This research examines the application of Artificial Intelligence in internal auditing, focusing on document management and information retrieval in academic institutions. The study proposes using Large Language Models to streamline document processing during audit preparation, addressing inefficiencies in traditional document handling methods. Through experimental evaluation of three embedding models (BGE-M3, Nomic-embed-text-v1, and CamemBERT) on a dataset of 300 academic regulatory queries, the research demonstrates BGE-M3&#39;s superior performance with an nDCG3 score of 0.90 and top-1 accuracy of 82.5%. The methodology incorporates query expansion using GPT-4 and Llama 3.1, revealing robust performance across varied query formulations. While highlighting AI&#39;s potential to transform internal auditing practices, particularly in Morocco&#39;s academic sector, the study acknowledges implementation challenges including institutional constraints and resistance to technological change. The conducted experiments and result analysis provide useful criteria that can be applied to similar information retrieval challenges in other fields and real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_66-Large_Language_Models_for_Academic_Internal_Auditing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Big Data Mining System Integrating Spectral Clustering Algorithm and Apache Spark Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160165</link>
        <id>10.14569/IJACSA.2025.0160165</id>
        <doi>10.14569/IJACSA.2025.0160165</doi>
        <lastModDate>2025-01-30T09:02:14.2670000+00:00</lastModDate>
        
        <creator>Yuansheng Guo</creator>
        
        <subject>Spectral clustering algorithm; apache spark; big data; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The spectral clustering algorithm is a highly effective clustering algorithm with broad application prospects in data mining. To improve the efficient data processing capability of big data mining systems, a big data mining system that integrates the spectral clustering algorithm and the Apache Spark framework is proposed. It is applied in the big data mining system by combining Hadoop, the Spark framework, and the spectral clustering algorithm. The research results indicated that after 300 iterations of the spectral clustering algorithm, the error value stabilized and dropped to 0.123. Different datasets displayed different error values, indicating that the spectral clustering algorithm had better performance in discrete data processing and smaller testing errors. The minimum time consumed by the comparative system was 37.83 seconds, the maximum time was 55.26 seconds, and the average time was 51.65 seconds. The minimum time consumed by the research system was 18.93 seconds, the maximum time consumed was 32.22 seconds, and the average time consumed was 28.14 seconds. Compared with the comparative system, the research system consumed less time, trained faster, and was more conducive to shortening the clustering running time. The algorithm framework and system proposed in the research have good operational efficiency and clustering ability in data mining processing, which promotes the reliability and development of big data mining systems.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_65-Application_of_Big_Data_Mining_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Nano-Particles from SEM Images Using Transfer Learning and Modified U-Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160164</link>
        <id>10.14569/IJACSA.2025.0160164</id>
        <doi>10.14569/IJACSA.2025.0160164</doi>
        <lastModDate>2025-01-30T09:02:14.2330000+00:00</lastModDate>
        
        <creator>Sowmya Sanan V</creator>
        
        <creator>Rimal Isaac R S</creator>
        
        <subject>Nanomaterial; segmentation; ResNet 50; modified UNet; transfer learning; SEM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Nanomaterials, owing to their distinctive features, are crucial across numerous scientific domains, especially in materials science and nanotechnology. Precise segmentation of Scanning Electron Microscope (SEM) images is essential for evaluating attributes such as nanoparticle dimensions, morphology, and distribution. Conventional image segmentation techniques frequently prove insufficient for managing the intricate textures of SEM images, resulting in a laborious and imprecise process. In this research, a modified U-Net architecture is presented to tackle this challenge, utilizing a ResNet50 backbone pre-trained on ImageNet. This model utilizes the robust feature extraction abilities of ResNet50 alongside the effective segmentation performance of U-Net, hence improving both accuracy and computational efficiency in TiO2 nanoparticle segmentation. The suggested model was assessed using performance metrics including accuracy, precision, recall, IoU, and Dice Coefficient. The results indicated a high segmentation accuracy, demonstrated by a Dice score of 0.946 and an IoU of 0.897, with little variability reflected in standard deviations of 0.002071 and 0.003696, respectively, over 200 epochs. The comparison with existing methods demonstrates that the proposed model surpasses previous approaches by attaining enhanced segmentation accuracy. The modified U-Net design serves as an excellent technique for accurate nanoparticle segmentation in SEM images, providing substantial enhancements compared to traditional approaches. This progress indicates the model&#39;s potential for wider applications in nanomaterial research and characterization, where precise and efficient segmentation is essential for analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_64-Segmentation_of_Nano_Particles_from_SEM_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Metric-Based Counterfactual Data Augmentation with Self-Imitation Reinforcement Learning (SIL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160163</link>
        <id>10.14569/IJACSA.2025.0160163</id>
        <doi>10.14569/IJACSA.2025.0160163</doi>
        <lastModDate>2025-01-30T09:02:14.2030000+00:00</lastModDate>
        
        <creator>K. C. Sreedhar</creator>
        
        <creator>T. Kavya</creator>
        
        <creator>J. V. S. Rajendra Prasad</creator>
        
        <creator>V. Varshini</creator>
        
        <subject>Natural language processing; fairness; robustness; Word Embedding Association Test (WEAT); SMART testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The inherent biases present in language models often lead to discriminatory predictions based on demographic attributes. Fairness in NLP refers to the goal of ensuring that language models and other NLP systems do not produce biased or discriminatory outputs that could negatively affect individuals or groups. Bias in NLP models often arises from training data that reflects societal stereotypes or imbalances. Robustness in NLP refers to the ability of a model to maintain performance when faced with noisy, adversarial, or out-of-distribution data. A robust NLP model should handle variations in input effectively without failing or producing inaccurate results. The proposed approach employs a novel metric called CFRE (Context-Sensitive Fairness and Robustness Evaluation) designed to measure both fairness and robustness of an NLP model under different contextual shifts. Next, the benefits of this metric are demonstrated in terms of experimental parameters. The work then integrates counterfactual data augmentation with the help of Self-Imitation Reinforcement Learning (SIL) to reinforce successful counterfactual generation by enabling the model to learn from its own high-reward experiences, fostering a more balanced understanding of language. The integration of SIL allows for efficient exploration of the action space, guiding the model to consistently produce unbiased outputs across different contexts. The effectiveness of the method is demonstrated through extensive experimentation; the results of the proposed metric are compared with those of WEAT and SMART testing, showing a significant reduction in bias without compromising the model&#39;s overall performance. This framework not only addresses bias in existing models but also contributes to a more robust methodology for training fairer NLP systems. Both the proposed metric and SIL yielded better results across the experimental parameters.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_63-A_Novel_Metric_Based_Counterfactual_Data_Augmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Automatic Cultural Translation Method for English Tourism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160162</link>
        <id>10.14569/IJACSA.2025.0160162</id>
        <doi>10.14569/IJACSA.2025.0160162</doi>
        <lastModDate>2025-01-30T09:02:14.1570000+00:00</lastModDate>
        
        <creator>Jianguo Liu</creator>
        
        <creator>Ruohan Liu</creator>
        
        <subject>LSTM-based encoder-decoder model; tourism English culture; automatic translation; enhanced attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The general LSTM-based encoder-decoder model struggles to mine sentence semantics and to translate long text sequences. This study presents a neural machine translation model utilizing LSTM with improved attention, incorporating multi-head attention and multi-skipping attention mechanisms into the LSTM baseline model. Multi-head attention computation mines the syntactic information in different subspaces and attends to the semantic information in sentence sequences; computing multiple attentions on each head separately effectively handles the long-distance dependency problem and yields better performance in the translation of long sentences. The proposed model is analysed and compared using the WMT17 Chinese-English datasets, newsdev2017 and newstest2017, and the results show that the proposed model improves the BLEU score of the automatic translation of Tourism English Culture and alleviates the problem of low scores in long sentence translation.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_62-Deep_Learning_Based_Automatic_Cultural_Translation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing COVID-19 Detection in X-Ray Images Through Deep Learning Models with Different Image Preprocessing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160161</link>
        <id>10.14569/IJACSA.2025.0160161</id>
        <doi>10.14569/IJACSA.2025.0160161</doi>
        <lastModDate>2025-01-30T09:02:14.1270000+00:00</lastModDate>
        
        <creator>Ahmad Nuruddin bin Azhar</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Liu Luan Xiang Wei</creator>
        
        <subject>X-ray; COVID-19; image enhancement; Contrast Limited Adaptive Histogram Equalization; Histogram Equalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The identification of COVID-19 using chest X-ray (CXR) images plays a critical role in managing the pandemic by providing a rapid, non-invasive, and accessible diagnostic tool. This study evaluates the impact of different image preprocessing techniques on the performance of deep learning models for COVID-19 classification based on the COVID-19 Radiography Database, which includes 10,192 normal CXR images, 6012 lung opacity (non-COVID lung infection) images, and 1345 viral pneumonia images. Along with the images, corresponding lung masks are also included to aid in the segmentation and analysis of lung regions. Specifically, three convolutional neural network (CNN) models were developed, each using a distinct preprocessing method: Contrast Limited Adaptive Histogram Equalization (CLAHE), traditional histogram equalization, and no preprocessing. The results revealed that while the CLAHE-enhanced model achieved the highest training accuracy (93.26%) and demonstrated superior stability during training, it showed lower performance in the validation phase, with a validation accuracy of 91.31%. In contrast, the model with no preprocessing, which exhibited slightly lower training accuracy (92.98%), outperformed the CLAHE model during validation, achieving the highest validation accuracy of 91.50% and the lowest validation loss. The histogram equalization model demonstrated performance similar to that of CLAHE but with slightly higher validation loss and accuracy compared to the unprocessed model. These findings suggest that while CLAHE excels in enhancing image details during training, it may lead to overfitting and reduced generalization ability. In contrast, the model without preprocessing showed the best generalization and stability, indicating that preprocessing techniques should be chosen carefully to balance feature enhancement with the need for generalization in real-world applications. This study underscores the importance of selecting appropriate image preprocessing techniques to enhance deep learning models&#39; performance in medical image classification, particularly for COVID-19 detection. The results contribute to ongoing efforts to optimize diagnostic tools using AI and image processing.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_61-Enhancing_COVID_19_Detection_in_X_Ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Blockchain and Edge Computing: A Systematic Analysis of Security, Efficiency, and Scalability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160160</link>
        <id>10.14569/IJACSA.2025.0160160</id>
        <doi>10.14569/IJACSA.2025.0160160</doi>
        <lastModDate>2025-01-30T09:02:14.0930000+00:00</lastModDate>
        
        <creator>Youness Bentayeb</creator>
        
        <creator>Kenza Chaoui</creator>
        
        <creator>Hassan Badir</creator>
        
        <subject>Blockchain; edge computing; security; computing efficiency; data privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The integration of blockchain and edge computing presents a transformative potential to enhance security, computing efficiency, and data privacy across diverse industries. This paper begins with an overview of blockchain and edge computing, establishing the foundational technologies for this synergy. It explores the key benefits of their integration, such as improved data security through blockchain’s decentralized nature and reduced latency via edge computing&#39;s localized data processing. Methodologically, the paper employs a systematic analysis of existing technologies and challenges, emphasizing issues such as scalability, managing decentralized networks, and ensuring independence from cloud infrastructure. A detailed Ethereum-based case study demonstrates the feasibility and practical implications of deploying blockchain in edge computing environments, supported by a comparative analysis and an algorithmic approach to integration. The conclusion synthesizes the findings, addressing unresolved challenges and proposing future research directions to optimize performance and ensure the seamless convergence of these technologies.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_60-Blockchain_Meets_Edge_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of MLP-Mixer-Based Image Style Transfer Technology in Graphic Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160159</link>
        <id>10.14569/IJACSA.2025.0160159</id>
        <doi>10.14569/IJACSA.2025.0160159</doi>
        <lastModDate>2025-01-30T09:02:14.0770000+00:00</lastModDate>
        
        <creator>Qibin Wang</creator>
        
        <creator>Xiao Chen</creator>
        
        <creator>Huan Su</creator>
        
        <subject>MLP-Mixer; image style transfer; graphic design; neural networks; artistic rendering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The rapid advancement of the digital creative industry has highlighted the growing importance of image style transfer technology as a bridge between traditional art and modern design, driving innovation in graphic design. However, conventional style transfer methods face significant challenges, including low computational efficiency and unnatural style transformation in complex image scenarios. This study addresses these limitations by introducing a novel approach to image style transfer based on the MLP-Mixer model. Leveraging the MLP-Mixer&#39;s ability to effectively capture both local and global image features, the proposed method achieves precise separation and integration of style and content. Experimental results demonstrate that the MLP-Mixer-based style transfer significantly enhances the naturalness and diversity of style transformation while preserving image clarity and detail. Additionally, the processing speed is improved by 50%, with style conversion accuracy and user satisfaction increasing by 30% and 35%, respectively, compared to traditional methods. These findings underscore the potential of the MLP-Mixer model for advancing efficiency and realism in graphic design applications.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_59-Application_of_MLP_Mixer_Based_Image_Style_Transfer_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Coastline Video Monitoring System Using Structuring Element Morphological Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160158</link>
        <id>10.14569/IJACSA.2025.0160158</id>
        <doi>10.14569/IJACSA.2025.0160158</doi>
        <lastModDate>2025-01-30T09:02:14.0300000+00:00</lastModDate>
        
        <creator>I Gusti Ngurah Agung Pawana</creator>
        
        <creator>I Made Oka Widyantara</creator>
        
        <creator>Made Sudarma</creator>
        
        <creator>Dewa Made Wiharta</creator>
        
        <creator>Made Widya Jayantari</creator>
        
        <subject>Coastline detection; image processing; Video Monitoring System (CoViMoS); morphological operations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Coastal monitoring is vital in environmental management, disaster mitigation, and addressing climate change impacts. Traditional methods are time-consuming and error-prone, prompting the need for innovative systems. This study introduces the Coastal Video Monitoring System (CoViMos), a novel framework for real-time shoreline detection in tropical regions, specifically at Kedonganan Beach, Bali. The CoViMos framework utilizes advanced video monitoring and optimized morphological operations to address challenges such as environmental noise and dynamic shoreline behavior. Key innovations include Kapur’s entropy thresholding enhanced with the Grasshopper Optimization Algorithm (GOA) and structuring elements tailored to the beach’s unique features. Sensitivity analysis reveals that a structuring element size of five pixels offers optimal performance, balancing efficiency and image fidelity. This configuration achieves peak values in quality metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Complex Wavelet SSIM (CWSSIM), and Feature Similarity Index (FSIM) while minimizing Mean Squared Error (MSE) and reducing processing time. The results demonstrate significant improvements in shoreline detection accuracy, with PSNR increasing by 9.3%, SSIM by 1.4%, CWSSIM by 1.7%, and FSIM by 1.6%. Processing time decreased by 1.3%, emphasizing the system’s computational efficiency. These enhancements ensure more precise shoreline mapping, even in noisy and dynamic environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_58-Enhancement_of_Coastline_Video_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Alzheimer’s Disease Detection Through Targeting the Feature Extraction Using CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160157</link>
        <id>10.14569/IJACSA.2025.0160157</id>
        <doi>10.14569/IJACSA.2025.0160157</doi>
        <lastModDate>2025-01-30T09:02:14.0000000+00:00</lastModDate>
        
        <creator>D Prasad</creator>
        
        <creator>K Jayanthi</creator>
        
        <creator>Pradeep Tilakan</creator>
        
        <subject>Alzheimer&#39;s Disease (AD); convolutional neural networks (CNN); Support Vector Machine (SVM); Directed Acyclic Graph (DAG); Late Mild Cognitive Impairment (LMCI); Alzheimer&#39;s Disease Neuroimaging Initiative (ADNI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Alzheimer&#39;s Disease (AD) is a persistent, irreversible, and degenerative neurological disorder of the brain that currently has no effective therapy. This condition is identified by pathological abnormalities in the hippocampal area, which may develop up to 10 years prior to the onset of clinical symptoms. Timely detection of pathogenic abnormalities is essential to impede the worsening of AD. Recent studies on neuroimaging have shown that the use of Deep Learning techniques to analyze multimodal brain scans may effectively and correctly detect AD. The main goal of this work is to design and develop an Artificial Intelligence (AI) based diagnostic framework that can accurately and promptly detect AD by analyzing Structural Magnetic Resonance Imaging (SMRI) data. This study presents a novel approach that combines a Directed Acyclic Graph 3D-CNN with an SVM classifier for timely detection and identification of AD by analyzing the Regions of Interest (RoI) like cerebral spinal fluid, white and gray matter, and the hippocampus in SMRI images. The proposed hybrid model combines Deep Learning for feature extraction and Machine Learning techniques for classification. The obtained results demonstrate its superior performance compared to earlier methods in accurately identifying individuals with early mild cognitive impairment (EMCI) from those with normal cognition (NC) using the Alzheimer&#39;s Disease Neuroimaging Initiative (ADNI) dataset. The model attains a classification accuracy of 97.67%, with precision at 94.12%, and sensitivity at 98.60%.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_57-Early_Alzheimers_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Interface Design of Digital Test Based on Backward Chaining as a Measuring Tool for Students’ Critical Thinking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160156</link>
        <id>10.14569/IJACSA.2025.0160156</id>
        <doi>10.14569/IJACSA.2025.0160156</doi>
        <lastModDate>2025-01-30T09:02:13.9830000+00:00</lastModDate>
        
        <creator>I Putu Wisna Ariawan</creator>
        
        <creator>P. Wayan Arta Suyasa</creator>
        
        <creator>Agus Adiarta</creator>
        
        <creator>I Komang Gede Sukawijana</creator>
        
        <creator>Nyoman Santiyadnya</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>User interface design; digital test; backward chaining; critical thinking; differentiated learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Assessing students&#39; critical thinking skills is challenging due to the limitations of current measurement tools. Therefore, there is a need for a digital testing instrument that can effectively evaluate students&#39; critical thinking abilities. The proposed digital test should be designed to present questions in a tiered manner, using a backward chaining approach that starts with general questions and progresses to more detailed ones. However, developing this measurement instrument requires careful planning. One of the initial steps in this process is to create a user interface design. The purpose of this study was to demonstrate the quality of the user interface design of a digital test based on backward chaining as a measuring tool for students’ critical thinking in a differentiated learning atmosphere. Design development used the Borg and Gall model and focused on only three stages: design planning, initial testing, and revision based on the initial testing results. Data were collected through initial testing of the design using a questionnaire, with 34 respondents involved. The study was conducted at several IT vocational high schools spread across six regencies in Bali. The data analysis technique compared the percentage of user interface design quality against user interface design quality standards, with reference to a five-point scale. The results of the study showed that the design quality of the digital test user interface based on backward chaining was included in the good category, as indicated by a quality percentage of 88.94%. Specifically, the impact of the results on the field of educational evaluation is to make it easier for evaluators to make accurate measurements. In general, the effect of this study on the field of informatics engineering education is the existence of innovations in realizing a test to measure critical thinking in the domain of differentiated learning.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_56-User_Interface_Design_of_Digital_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Current Challenges Review of Deep Learning-Based Nuclei Segmentation of Diffuse Large B-Cell Lymphoma</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160155</link>
        <id>10.14569/IJACSA.2025.0160155</id>
        <doi>10.14569/IJACSA.2025.0160155</doi>
        <lastModDate>2025-01-30T09:02:13.9530000+00:00</lastModDate>
        
        <creator>Gei Ki Tang</creator>
        
        <creator>Chee Chin Lim</creator>
        
        <creator>Faezahtul Arbaeyah Hussain</creator>
        
        <creator>Qi Wei Oung</creator>
        
        <creator>Aidy Irman Yazid</creator>
        
        <creator>Sumayyah Mohammad Azmi</creator>
        
        <creator>Haniza Yazid</creator>
        
        <creator>Yen Fook Chong</creator>
        
        <subject>Deep learning; Diffuse Large B-Cell Lymphoma (DLBCL); lymphoma cancer; HoVerNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Diffuse Large B-Cell Lymphoma (DLBCL) stands as the most prevalent form of non-Hodgkin lymphoma worldwide, constituting approximately 30 percent of cases within this diverse group of blood cancers affecting the lymphatic system. This study addresses the challenges associated with accurate DLBCL segmentation and classification, including difficulties in identifying and diagnosing DLBCL, manpower shortage, and limitations of manual imaging methods. The study highlights the potential of deep learning to effectively segment and classify DLBCL types. The implementation of such technology has the potential to extract and preprocess image patches, identify and segment the nuclei in DLBCL images, and classify DLBCL severity based on segmented nuclei counting.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_55-The_Current_Challenges_Review_of_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marine Predator Algorithm and Related Variants: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160154</link>
        <id>10.14569/IJACSA.2025.0160154</id>
        <doi>10.14569/IJACSA.2025.0160154</doi>
        <lastModDate>2025-01-30T09:02:13.9070000+00:00</lastModDate>
        
        <creator>Emmanuel Philibus</creator>
        
        <creator>Azlan Mohd Zain</creator>
        
        <creator>Didik Dwi Prasetya</creator>
        
        <creator>Mahadi Bahari</creator>
        
        <creator>Norfadzlan bin Yusup</creator>
        
        <creator>Rozita Abdul Jalil</creator>
        
        <creator>Mazlina Abdul Majid</creator>
        
        <creator>Azurah A Samah</creator>
        
        <subject>Exploitation-exploration; marine predator algorithm; metaheuristic algorithms; metaheuristic-hybridization; meta-heuristics; optimization; predator prey systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The Marine Predators Algorithm (MPA) is classified under swarm intelligence methods based on its type of inspiration. It is a population-based metaheuristic optimization algorithm inspired by the general foraging behavior exhibited in the form of Levy and Brownian motion in ocean predators supported by the policy of optimum success rate found in the biological relationship between prey and predators. The algorithm is easy to implement and robust in searching, yielding better solutions to many real-world problems. It is attracting huge and growing interest. This paper provides a systematic review of the research progress and applications of the MPA by analyzing more than 100 articles sourced from Scopus and Web of Science databases using the PRISMA approach. The study expounded the classical MPA’s workflow. It also unveiled a steady upward trend in the use of the algorithm. The research presented different improvements and variants of MPA including parameter-tuning, enhancement of the balance between exploration and exploitation, hybridization of MPA with other techniques to harness the strengths of each of the algorithms towards complementing the weaknesses of the other, and more recently proposed advances. It further underscores the application of MPA in various areas such as Engineering, Computer Science, Mathematics, and Energy. Findings reveal several search strategies implemented to improve the algorithm’s performance. In conclusion, although MPA has been widely accepted, other areas remain yet to be applied, and some improvements are yet to be covered. These have been presented as recommendations for future research direction.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_54-Marine_Predator_Algorithm_and_Related_Variants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Transformer-ARIMA Model for Forecasting Global Supply Chain Disruptions Using Multimodal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160153</link>
        <id>10.14569/IJACSA.2025.0160153</id>
        <doi>10.14569/IJACSA.2025.0160153</doi>
        <lastModDate>2025-01-30T09:02:13.8770000+00:00</lastModDate>
        
        <creator>Qingzi Wang</creator>
        
        <subject>Supply chain disruptions; forecasting models; hybrid model; transformer architecture; ARIMA; multimodal data integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study presents a robust forecasting model for global supply chain disruptions: port delays, natural disasters, geopolitical events, and pandemics. The integrated solution, combining transformer-based models for preprocessing unstructured textual data with ARIMA for structured time-series analysis, is referred to as the Hybrid Model; it merges the insights from both approaches using a feature fusion mechanism. The Hybrid Model was evaluated using accuracy, precision, recall, and F1 score, and was found to perform much better, obtaining an overall accuracy of 94.2% and an overall weighted F1 score of 94.3%. Specifically, class-specific analysis demonstrated high precision in identifying disruptions such as pandemics (95.5%) and natural disasters (94.6%), showing the model&#39;s ability to understand context and time. The proposed approach outperforms classic stand-alone statistical and deep learning models regarding scalability and adaptivity to real-life applications such as risk management and policy making. Future work could include making the weights for each cluster dynamic, optimizing them based on real-time trends to improve accuracy and resilience.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_53-A_Hybrid_Transformer_ARIMA_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Art Deeply: Sentiment Analysis of Facial Expressions of Graphic Arts Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160152</link>
        <id>10.14569/IJACSA.2025.0160152</id>
        <doi>10.14569/IJACSA.2025.0160152</doi>
        <lastModDate>2025-01-30T09:02:13.8430000+00:00</lastModDate>
        
        <creator>Fei Wang</creator>
        
        <subject>Artificial intelligence; deep learning; sentiment analysis; art detection; image processing; convolutional network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Art serves as a profound medium for humans to express and present their thoughts, emotions, and experiences in aesthetic and captivating ways. It is like a universal language, transcending linguistic limitations and enabling communication of complex ideas and feelings. Artificial Intelligence (AI) based data analytics are being applied in research domains such as sentiment analysis, in which text data is usually analyzed for opinion mining. In this research study, we take artwork and apply deep learning (DL) algorithms to classify seven diverse facial expressions in graphic art. For empirical analysis, the state-of-the-art deep learning algorithm Inceptionv3 and a pre-trained ResNet model have been applied on a large dataset. Both models are considered revolutionary deep learning architectures, allowing for the training of much deeper networks and thus enhancing model performance in various computer vision tasks such as image recognition and classification. The comprehensive results analysis reveals that the proposed ResNet and Inceptionv3 methods have achieved accuracy as high as 98% and 99%, respectively, compared to existing approaches in the relevant field. This research contributes to the fields of sentiment analysis, computational visual art, and human-computer interaction by addressing the detection of seven diverse facial expressions in graphic art. Our approach enables enhanced understanding of user sentiments, offering significant implications for improving user engagement, emotional intelligence in AI-driven systems, and personalized experiences in digital platforms. This study bridges the gap between visual aesthetics and sentiment detection, providing novel insights into how graphic art influences and reflects human emotions and highlighting the efficacy of DL frameworks for real-time emotion detection applications in diverse fields such as human psychological assessment and behavior analysis.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_52-Understanding_Art_Deeply_Sentiment_Analysis_of_Facial_Expressions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Supplier Selection in Advanced Automotive Production: Harnessing AHP and CRNN for Optimal Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160151</link>
        <id>10.14569/IJACSA.2025.0160151</id>
        <doi>10.14569/IJACSA.2025.0160151</doi>
        <lastModDate>2025-01-30T09:02:13.7970000+00:00</lastModDate>
        
        <creator>Karim Haricha</creator>
        
        <creator>Azeddine Khiat</creator>
        
        <creator>Yassine Issaoui</creator>
        
        <creator>Ayoub Bahnasse</creator>
        
        <creator>Hassan Ouajji</creator>
        
        <subject>Supplier selection; analytic hierarchy process; convolutional recurrent neural network; sustainability; decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study presents a novel supplier selection methodology that integrates the Analytic Hierarchy Process (AHP) with a Convolutional Recurrent Neural Network (CRNN) to address the complexities of decision-making in dynamic industrial environments. The AHP component provides a systematic and transparent framework for evaluating multiple factors, ensuring consistency and minimizing subjective biases in supplier assessment. AHP effectively combines expert knowledge with individual preferences, thereby embodying the human element of decision-making. The CRNN concurrently leverages its ability to process large sequential data, uncover hidden patterns, and assess supplier performance over time. This expertise enhances decision-making by transcending the limitations of traditional analytical methods in managing intricate, multidimensional data. The integration of AHP and CRNN offers a comprehensive evaluation framework, including both objective and subjective factors to enhance effective supplier selection decisions. This approach enhances the long-term sustainability of manufacturing operations by fostering reliable supplier relationships and ensuring access to high-performing suppliers. Experimental validations affirm the efficacy of the suggested approach in promoting sustainable manufacturing systems, highlighting its practical use. The findings demonstrate that the AHP-CRNN framework improves supplier selection criteria and offers prospects for future development and adaptation to address emerging challenges in complex manufacturing environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_51-Strategic_Supplier_Selection_in_Advanced_Automotive_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Transparent Traffic Solutions: Reinforcement Learning and Explainable AI for Traffic Congestion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160150</link>
        <id>10.14569/IJACSA.2025.0160150</id>
        <doi>10.14569/IJACSA.2025.0160150</doi>
        <lastModDate>2025-01-30T09:02:13.7670000+00:00</lastModDate>
        
        <creator>Shan Khan</creator>
        
        <creator>Taher M. Ghazal</creator>
        
        <creator>Tahir Alyas</creator>
        
        <creator>M. Waqas</creator>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>Oualid Ali</creator>
        
        <creator>Muhammad Adnan Khan</creator>
        
        <creator>Sagheer Abbas</creator>
        
        <subject>Reinforcement learning; Explainable Artificial Intelligence (XAI); Smart City (SC); IoT; Machine Learning (ML)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study introduces a novel approach to traffic congestion detection using Reinforcement Learning (RL) with machine learning classifiers enhanced by Explainable Artificial Intelligence (XAI) techniques in the Smart City (SC). Conventional traffic management systems rely on static rules and heuristics and face challenges in dynamically addressing the complexities of urban traffic problems. This study presents a novel Reinforcement Learning (RL) framework integrated with an Explainable Artificial Intelligence (XAI) approach to deliver more transparent results. The model significantly reduces the missing data rate and improves overall prediction accuracy by incorporating RL for real-time adaptability and XAI for clarity. The proposed method enhances security, privacy, and prediction accuracy for traffic congestion detection by using Machine Learning (ML). Using RL for adaptive learning and XAI for interpretability, the proposed model achieves improved prediction and reduces the missing data rate, with an accuracy of 98.10%, which is better than existing methods.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_50-Towards_Transparent_Traffic_Solutions_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Reinforcement Learning Evolution: Taxonomy, Challenges and Emerging Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160149</link>
        <id>10.14569/IJACSA.2025.0160149</id>
        <doi>10.14569/IJACSA.2025.0160149</doi>
        <lastModDate>2025-01-30T09:02:13.7330000+00:00</lastModDate>
        
        <creator>Ji Loun Tan</creator>
        
        <creator>Bakr Ahmed Taha</creator>
        
        <creator>Norazreen Abd Aziz</creator>
        
        <creator>Mohd Hadri Hafiz Mokhtar</creator>
        
        <creator>Muhammad Mukhlisin</creator>
        
        <creator>Norhana Arsad</creator>
        
        <subject>Artificial intelligence; autonomous systems; decision-making optimization; reinforcement learning; robotics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Reinforcement Learning (RL) has become a rapidly advancing field within Artificial Intelligence (AI) and autonomous systems, revolutionizing the manner in which machines learn and make decisions. Over the past few years, RL has advanced notably with the development of more sophisticated algorithms and methodologies that address increasingly complex real-world problems. This progress has been driven by improvements in computational power, the availability of big datasets, and advances in machine learning techniques, permitting RL to tackle challenges across a wide range of industries, from robotics and autonomous driving to healthcare and finance. The impact of RL is evident in its capacity to optimize decision-making processes in uncertain and dynamic environments. By learning from interactions with the environment, RL agents can make decisions that maximize long-term rewards, adapting to changing situations and improving over time. This adaptability has made RL an invaluable tool in situations where traditional approaches fall short, especially in complex, high-dimensional spaces and under delayed feedback. This review aims to offer a thorough account of the current state of RL, highlighting its interdisciplinary contributions and how it shapes the future of AI and autonomous technologies. It discusses how RL drives advances in robotics, natural language processing, and gaming, while exploring the ethical and practical challenges of its deployment. Additionally, it examines key research from numerous fields that has contributed to RL&#39;s development.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_49-A_Review_of_Reinforcement_Learning_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Semantic Text Representation with Ontology and Query Expansion for Enhanced Indonesian Quranic Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160148</link>
        <id>10.14569/IJACSA.2025.0160148</id>
        <doi>10.14569/IJACSA.2025.0160148</doi>
        <lastModDate>2025-01-30T09:02:13.6870000+00:00</lastModDate>
        
        <creator>Liza Trisnawati</creator>
        
        <creator>Noor Azah Binti Samsudin</creator>
        
        <creator>Shamsul Kamal Bin Ahmad Khalid</creator>
        
        <creator>Ezak Fadzrin Bin Ahmad Shaubari</creator>
        
        <creator>Sukri</creator>
        
        <creator>Zul Indra</creator>
        
        <subject>Ensemble method; query expansion; ontology; Al-Quran; search engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study explores the effectiveness of an ensemble method for Quranic text retrieval, aimed at improving the relevance and accuracy of verses retrieved for specific themes. The ensemble approach integrates three semantic models—Word2Vec, FastText, and GloVe—through a voting mechanism that considers verse frequency and semantic alignment with the query topics. Testing was conducted on themes such as prayer, zakat, fasting, umrah, and eschatology, reflecting fundamental aspects of Quranic teachings. Results demonstrate that the ensemble method significantly outperforms non-ensemble approaches, achieving an average relevance rate of 88%, compared to individual models (Word2Vec: 75%, FastText: 80%, GloVe: 82%). The ensemble method effectively combines the unique strengths of each model. Word2Vec captures general semantic relationships, FastText handles morphological nuances, and GloVe identifies global contextual patterns. By combining these capabilities, the ensemble approach improves both the quantity and quality of retrieved verses, making it a robust tool for semantic analysis in Quranic studies. This research contributes to the field of computational Islamic studies by demonstrating the practical advantages of ensemble methods for religious text retrieval. It lays the foundation for further advancements, including the integration of deep learning techniques, dynamic query handling, and cross-linguistic analysis. The ensemble method offers a promising framework for supporting more accurate and contextually relevant Quranic studies, promoting a deeper understanding of Islamic teachings through data-driven methodologies.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_48-An_Ensemble_Semantic_Text_Representation_with_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Security Based on GCN and Multi-Layer Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160147</link>
        <id>10.14569/IJACSA.2025.0160147</id>
        <doi>10.14569/IJACSA.2025.0160147</doi>
        <lastModDate>2025-01-30T09:02:13.6570000+00:00</lastModDate>
        
        <creator>Wei Yu</creator>
        
        <creator>Huitong Liu</creator>
        
        <creator>Yu Song</creator>
        
        <creator>Jiaming Wang</creator>
        
        <subject>Network security; graph convolutional network; multi-layer perceptron; intrusion detection model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the continuous progress of network technology, network security has become a critical issue. Many network security intrusion detection models already exist, but they still suffer from problems such as low detection accuracy and long interception times for intrusion information. To address these drawbacks, this study utilizes a graph convolutional network to optimize a multi-layer perceptron. An optimization algorithm based on the multi-layer perceptron is innovatively proposed to construct an intrusion detection model. Comparative experiments were conducted on the improved algorithm: its accuracy was 0.98, its F1 value was 0.97, and its detection time was 1.1s, with overall performance much better than the comparison algorithms. Subsequently, the intrusion detection model was applied to network security detection, where the detection time was 0.1s, the accuracy was 0.98, and the overall performance again outperformed the comparison algorithms. The results demonstrate that the intrusion detection method based on the optimized multi-layer perceptron can enhance the detection of illegal intrusion information. This study optimizes the performance of detecting illegal network intrusions, providing a theoretical basis for the further development of network security. However, the types of intrusion information in this study are limited and some uncertainty remains. In the future, data augmentation techniques can be used to oversample minority-class samples, synthesize new minority-class samples, expand the sample size, increase the detection information, and improve the overall detection performance of the model.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_47-Network_Security_Based_on_GCN_and_Multi_Layer_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Reduction and Anomaly Detection in IoT Using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160146</link>
        <id>10.14569/IJACSA.2025.0160146</id>
        <doi>10.14569/IJACSA.2025.0160146</doi>
        <lastModDate>2025-01-30T09:02:13.6270000+00:00</lastModDate>
        
        <creator>Adel Hamdan</creator>
        
        <creator>Muhannad Tahboush</creator>
        
        <creator>Mohammad Adawy</creator>
        
        <creator>Tariq Alwada’n</creator>
        
        <creator>Sameh Ghwanmeh</creator>
        
        <subject>Machine learning; Internet of Things (IoT); anomaly detection; feature reduction; Na&#239;ve Bayesian (NB); Support Vector Machine (SVM); Decision Tree (DT); XGBoost; Random Forest (RF); K-Nearest Neighbor (K-NN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Anomaly detection in IoT is a hot topic in cybersecurity, and the growing adoption of IoT technology undoubtedly increases the challenges it faces. This paper explores several machine learning algorithms for IoT anomaly detection: Na&#239;ve Bayesian (NB), Support Vector Machine (SVM), Decision Tree (DT), XGBoost, Random Forest (RF), and K-Nearest Neighbor (K-NN). In addition, this research uses three techniques for feature reduction (FR): Principal Component Analysis (PCA), Particle Swarm Optimization (PSO), and Gray Wolf Optimizer (GWO). The dataset used in this study is RT-IoT2022, which is considered a new dataset. Several assessment metrics are applied, such as Precision (P), Recall (R), F-measures, and accuracy. The results demonstrate that most machine learning algorithms perform well in IoT anomaly detection, with the best results shown by SVM at approximately 99.99% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_46-Feature_Reduction_and_Anomaly_Detection_in_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Learning Pathways: Personalized Learning and Dynamic Assessments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160145</link>
        <id>10.14569/IJACSA.2025.0160145</id>
        <doi>10.14569/IJACSA.2025.0160145</doi>
        <lastModDate>2025-01-30T09:02:13.5770000+00:00</lastModDate>
        
        <creator>Mohammad Abrar</creator>
        
        <creator>Walid Aboraya</creator>
        
        <creator>Rawad Abdulghafor</creator>
        
        <creator>Kabali P Subramanian</creator>
        
        <creator>Yousuf Al Husaini</creator>
        
        <creator>Mohammed Al Husaini</creator>
        
        <subject>AI-powered learning; adaptive learning; dynamic assessments; education technology; personalized learning pathways; student engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Integrating artificial intelligence (AI) in education has introduced innovative approaches, particularly in personalized learning and dynamic assessment. Conventional teaching models often struggle to address learners&#39; diverse needs and abilities, underscoring the necessity for AI-driven flexible learning frameworks. This study explores how AI-aided smart learning paths and dynamic assessments enhance learning efficiency by improving knowledge acquisition, optimizing task completion time, and increasing student engagement. A six-week quasi-experimental study was conducted with 200 students, divided into an experimental group using an AI-based learning system and a control group following traditional methods. Pre- and post-tests and engagement analyses were used to evaluate outcomes. The experimental group demonstrated a 25% improvement in performance, completed tasks 25% faster, and showed a 15% increase in engagement compared to the control group. These findings highlight the potential of AI to deliver personalized learning experiences and timely feedback, significantly enhancing student outcomes. Future research should involve larger participant groups across higher educational levels and examine the long-term impact of AI-supported education on students’ knowledge retention and skill reinforcement.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_45-AI_Powered_Learning_Pathways_Personalized_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Efficient and Accurate Text Detection and Recognition in Natural Scenes Images Using EAST and OCR Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160144</link>
        <id>10.14569/IJACSA.2025.0160144</id>
        <doi>10.14569/IJACSA.2025.0160144</doi>
        <lastModDate>2025-01-30T09:02:13.5470000+00:00</lastModDate>
        
        <creator>Vishnu Kant Soni</creator>
        
        <creator>Vivek Shukla</creator>
        
        <creator>S. R. Tandan</creator>
        
        <creator>Amit Pimpalkar</creator>
        
        <creator>Neetesh Kumar Nema</creator>
        
        <creator>Muskan Naik</creator>
        
        <subject>Scene text recognition; optical character recognition; deep learning; feature extraction; scene text detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Scene texts refer to arbitrary text found in images captured by cameras in real-world settings. The tasks of text detection and recognition are critical components of computer vision, with applications spanning scene understanding, information retrieval, robotics, and autonomous driving. Despite significant advancements in deep learning methods, achieving accurate text detection and recognition in complex images remains a formidable challenge for robust real-world applications. Several factors contribute to these challenges. First, the diversity of text shapes, fonts, colors, and styles complicates detection efforts. Second, the myriad combinations of characters, often with unstable attributes, make complete detection difficult, especially when background interruptions obscure character strokes and shapes. Finally, effective coordination of multiple sub-tasks in end-to-end learning is essential for success. This research aimed to tackle these challenges by enhancing text discriminative representation. This study focused on two interconnected problems: Scene Text Recognition (STR), which involves recognizing text from scene images, and Scene Text Detection (STD), which entails simultaneously detecting and recognizing multiple texts within those images. This research focuses on implementing and evaluating the Efficient and Accurate Scene Text Detector (EAST) algorithm for text detection and recognition in natural scene images. The study aims to compare the performance of three prominent Optical Character Recognition (OCR) techniques—TesseractOCR, PaddleOCR, and EasyOCR. The EAST model was applied to a series of sample test images, and the results were visually represented with bounding boxes highlighting the detected text regions. The inference times for each image were recorded, highlighting the algorithm&#39;s efficiency, with average times of 0.446, 0.439, and 0.440 seconds for the respective test images. These results indicate that the EAST algorithm is accurate and operates in real-time, making it suitable for applications requiring immediate text recognition.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_44-Performance_Evaluation_of_Efficient_and_Accurate_Text_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Optimization of Construction Project Management Based on NSGA-II Algorithm Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160143</link>
        <id>10.14569/IJACSA.2025.0160143</id>
        <doi>10.14569/IJACSA.2025.0160143</doi>
        <lastModDate>2025-01-30T09:02:13.5170000+00:00</lastModDate>
        
        <creator>Yong Yang</creator>
        
        <creator>Jinrui Men</creator>
        
        <subject>NSGA-II algorithm; improvement strategy; construction engineering; project management; multi-objective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the building industry, one of the key components to ensuring a project&#39;s successful completion is multi-objective project management. However, due to its own limitations, the traditional multi-objective management approach for projects is no longer able to meet the requirements of building construction and urgently needs to be improved, because the construction industry is becoming more competitive and construction standards are rising. Traditional methods for multi-objective optimization typically involve simply summing multiple objectives with weights, overlooking the interdependencies among these objectives. These methods often get trapped in local optimal solutions and rely heavily on predefined models and parameters, limiting their adaptability to sudden changes during the construction process. Therefore, a multi-objective management approach based on a multi-objective genetic algorithm for construction projects is proposed. It enables in-depth analysis and comprehensive optimization of the complex relationships between objectives, leading to more informed decisions. By facilitating rapid iteration and adaptation, it enables timely adjustments and optimizations to ensure that project goals remain consistent in complex and dynamic environments. In the experimental validation, the NSGA-II algorithm achieved a significant accuracy of 0.642 and a success rate of 0.504 on the VOT dataset, improvements of about 1.0% and 0.6% over the comparison algorithm. Experimental results on the TrackingNet dataset revealed that the algorithm achieved an accuracy of 0.791 and a success rate of 0.763, while it still maintained an accuracy of 0.542 and a success rate of 0.763 in the face of occlusion. The enhanced multi-objective genetic algorithm had higher accuracy and success rates. This demonstrates the efficiency and excellence of the multi-objective management optimization approach suggested in this study for building projects. The research results have some application value in the multi-objective optimization of engineering projects.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_43-Multi_Objective_Optimization_of_Construction_Project_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepLabV3+ Based Mask R-CNN for Crack Detection and Segmentation in Concrete Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160142</link>
        <id>10.14569/IJACSA.2025.0160142</id>
        <doi>10.14569/IJACSA.2025.0160142</doi>
        <lastModDate>2025-01-30T09:02:13.4830000+00:00</lastModDate>
        
        <creator>Yuewei Liu</creator>
        
        <subject>DeepLabV3+; Mask R-CNN; concrete structure; crack detection and segmentation; deep learning algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In order to solve the problem of crack detection and segmentation in concrete structures and improve the efficiency of detection and segmentation, this paper proposes a crack detection and segmentation method for concrete structures based on the DeepLabV3+ and Mask R-CNN algorithms. Firstly, a crack detection and segmentation scheme is designed by analysing the crack detection and segmentation problem of concrete structures. Secondly, a crack detection method based on the Mask R-CNN algorithm is proposed for the crack detection problem. Then, a crack segmentation method based on the DeepLabV3+ algorithm is proposed for the crack segmentation problem. Finally, the concrete structure crack detection and segmentation method is validated and analysed using bridge crack image data. The results show that the Mask R-CNN model performs better in the localisation and identification of cracks, and that the DeepLabV3+ model achieves higher accuracy and contour extraction integrity in solving the crack segmentation problem.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_42-DeepLabV3_Based_Mask_R_CNN_for_Crack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adoption of Generative AI-Enhanced Profit Sharing Digital Systems in MSMEs: A Comprehensive Model Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160141</link>
        <id>10.14569/IJACSA.2025.0160141</id>
        <doi>10.14569/IJACSA.2025.0160141</doi>
        <lastModDate>2025-01-30T09:02:13.4530000+00:00</lastModDate>
        
        <creator>Mardiana Andarwati</creator>
        
        <creator>Galandaru Swalaganata</creator>
        
        <creator>Sari Yuniarti</creator>
        
        <creator>Fandi Y. Pamuji</creator>
        
        <creator>Edward R. Sitompul</creator>
        
        <creator>Kukuh Yudhistiro</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <subject>Digital finance; Generative AI; TAM; UTAUT; MSMEs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Adopting digital finance solutions is crucial for enhancing efficiency and competitiveness within the financial services industry, particularly for Micro, Small, and Medium Enterprises (MSMEs). This study examines the factors influencing the use and acceptance of a sharing-based digital system enhanced with a Generative AI website (E-Mudharabah), employing the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). In this article, the Generative AI-enhanced profit-sharing digital system is called E-Mudharabah: a web-based management system facilitating capital management for financiers, consultants, and MSME actors. The research integrates key variables from both models, including Perceived Ease of Use, Perceived Usefulness, Performance Expectancy, Social Influence, Facilitating Conditions, Habit, and Technology Self-Efficacy, to assess their impact on Behavioral Intention and Actual Usage. The study utilizes a quantitative approach, gathering data through surveys and analyzing it using the Partial Least Squares Structural Equation Modeling (PLS-SEM) method. Results indicate significant positive effects of perceived usefulness, performance expectancy, and social influence on the behavioral intention to use E-Mudharabah. The findings underscore the role of user-friendly interfaces and societal acceptance in driving adoption. Perceived Usefulness was the most significant variable influencing Behavioral Intention and Actual Usage (p-value &lt; 0.001). Additionally, Social Influence and Facilitating Conditions were shown to have substantial effects, highlighting the importance of user support and societal acceptance in technology adoption. The research also underscores the role of Technology Self-Efficacy in enhancing user confidence and engagement with the platform. These findings suggest that improving digital finance solutions&#39; perceived benefits and ease of use while fostering a supportive environment can significantly boost their adoption rates.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_41-Adoption_of_Generative_AI_Enhanced_Profit_Sharing_Digital_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Foreground Feature-Guided Camouflage Image Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160140</link>
        <id>10.14569/IJACSA.2025.0160140</id>
        <doi>10.14569/IJACSA.2025.0160140</doi>
        <lastModDate>2025-01-30T09:02:13.4070000+00:00</lastModDate>
        
        <creator>Yuelin Chen</creator>
        
        <creator>Yuefan An</creator>
        
        <creator>Yonsen Huang</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <subject>Camouflage image; foreground features; object enhancement; detail optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the field of visual camouflage, generating a high-quality background image that seamlessly blends with complex foreground objects and diverse background environments is a critical task. When dealing with such complex scenes, existing techniques extract insufficient foreground features, so the generated background image does not fuse well with the foreground objects, making it difficult to achieve the desired camouflage effect. To solve this problem and achieve higher-quality visual camouflage, this paper proposes a new foreground feature-guided camouflage image generation method (Object Enhancement Module-Diffusion Refinement, OEM-DR), which generates camouflage images by enhancing the foreground features that guide the background. The method first designs a new object enhancement module to optimize the attention mechanism of the model, eliminating the attention weights that have little influence on the output through a pruning strategy, so that the model focuses more on the key features of the foreground objects and thus guides the generation of the background more effectively. Second, a novel detail optimization framework based on a diffusion strategy is constructed, which maintains the integrity of the global structure of the image while finely optimizing its local details. In experiments on standard camouflaged image datasets, the proposed method achieves significant improvement in both the FID (Fr&#233;chet Inception Distance) and KID (Kernel Inception Distance) evaluation metrics, which verifies the feasibility of the method. This suggests that by strengthening foreground features and optimizing details, the fusion between background images and foreground objects can be effectively improved to achieve higher-quality visual camouflage.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_40-Foreground_Feature_Guided_Camouflage_Image_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Jordanian Currency Recognition Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160139</link>
        <id>10.14569/IJACSA.2025.0160139</id>
        <doi>10.14569/IJACSA.2025.0160139</doi>
        <lastModDate>2025-01-30T09:02:13.3770000+00:00</lastModDate>
        
        <creator>Salah Alghyaline</creator>
        
        <subject>Automatic currency recognition; deep learning; VGG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Automatic Currency Recognition (ACR) plays a significant role in various domains, such as assisting visually impaired people, banking transactions, counterfeit detection, digital transformation, currency exchange, vending machines, etc. Therefore, developing an accurate ACR system enhances efficiency across several domains. The contribution of this paper is three-fold. First, it proposes a large dataset of 2799 images across seven denominations for Jordanian currency recognition. Second, it proposes an efficient multiscale VGG net to recognize Jordanian currency. Third, popular CNN architectures are evaluated on the proposed dataset, and their results are compared with the proposed architecture. Four metrics were used in the evaluation. The experimental results showed that the proposed Multiscale VGG outperformed VGG16, DenseNet121, ResNet50, and ResNet101, achieving 99.88%, 99.88%, 99.89%, and 99.98% accuracy, precision, sensitivity, and specificity, respectively.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_39-Jordanian_Currency_Recognition_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning for Arabic SMS Phishing Based on URLs Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160138</link>
        <id>10.14569/IJACSA.2025.0160138</id>
        <doi>10.14569/IJACSA.2025.0160138</doi>
        <lastModDate>2025-01-30T09:02:13.3600000+00:00</lastModDate>
        
        <creator>Sadeem Alsufyani</creator>
        
        <creator>Samah Alajmani</creator>
        
        <subject>Phishing; URL phishing; SMS phishing; GRU; BiGRU; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The increasing use of SMS phishing messages in Arab communities has created a major security threat, as attackers exploit these SMS services to steal users&#39; sensitive and financial data. This threat highlights the necessity of designing models that detect SMS messages and distinguish between phishing and non-phishing messages. Given the lack of sufficient previous studies addressing Arabic SMS phishing detection, this paper proposes a model that leverages deep learning to detect Arabic SMS messages based on the URLs they contain. The focus is on the URL aspect because it is one of the common indicators in phishing attempts. The proposed model was applied to three datasets: two originally in English, which were translated into Arabic, and one originally in Arabic. All three datasets contained Arabic SMS messages, most of which included URLs. Three deep learning models—CNN, BiGRU, and GRU—were implemented and compared. Each model was evaluated using metrics such as precision, recall, accuracy, and F1 score. The results showed that the GRU model achieved the highest accuracy of 95.3% compared to the other models, indicating its ability to effectively capture sequential patterns in URLs extracted from Arabic SMS messages. This paper contributes a phishing detection model designed for Arab communities to enhance information security within them.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_38-A_Deep_Learning_for_Arabic_SMS_Phishing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spam Detection Using Dense-Layers Deep Learning Model and Latent Semantic Indexing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160137</link>
        <id>10.14569/IJACSA.2025.0160137</id>
        <doi>10.14569/IJACSA.2025.0160137</doi>
        <lastModDate>2025-01-30T09:02:13.3270000+00:00</lastModDate>
        
        <creator>Yasser D. Al-Otaibi</creator>
        
        <creator>Shakeel Ahmad</creator>
        
        <creator>Sheikh Muhammad Saqib</creator>
        
        <subject>Spam; supervised learning methods; unsupervised learning methods; LSI; dense; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the digital age, online shoppers heavily depend on product feedback and reviews available on the corresponding product pages to guide their purchasing decisions. Feedback is used in sentiment analysis, which is helpful for both customers and company management. Spam feedback can have a negative impact on high-quality products or a positive impact on low-quality products; in both cases, the matter is troublesome. Spam detection can be done with supervised or unsupervised learning methods. We propose two direct methods to detect feedback orientation as &quot;spam&quot; or &quot;not spam&quot; (also called &quot;ham&quot;), using a deep learning model and the LSI (Latent Semantic Indexing) technique. The first proposed model uses only dense layers to detect the orientation of the text. The second proposed model uses the concept of LSI, an effective information retrieval algorithm that finds the text closest to a provided query, i.e., a list containing spam words. Experimental results of both models on publicly available datasets show the best results (89% accuracy and 89% precision) compared to their corresponding benchmarks.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_37-Spam_Detection_Using_Dense_Layers_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network and Bidirectional Long Short-Term Memory for Personalized Treatment Analysis Using Electronic Health Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160136</link>
        <id>10.14569/IJACSA.2025.0160136</id>
        <doi>10.14569/IJACSA.2025.0160136</doi>
        <lastModDate>2025-01-30T09:02:13.2970000+00:00</lastModDate>
        
        <creator>Prasanthi Yavanamandha</creator>
        
        <creator>D. S. Rao</creator>
        
        <subject>Bidirectional-long short-term memory; circle chaotic map; convolutional neural network; electronic medical records; Intensive Care Unit (ICU); ladybug beetle optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Precise techniques have so far not been introduced for modeling mortality risk in Intensive Care Unit (ICU) patients. Traditional mortality risk prediction techniques extract data from longitudinal Electronic Health Records (EHRs) but ignore the complex relationships and interactions among variables and the time dependency in longitudinal records. The proposed work develops a Convolutional Neural Network – Bidirectional Long Short-Term Memory (CNN-Bi-LSTM) method for personalized treatment analysis using EHR data. The CNN extracts significant features, focusing on spatial relationships. The Bi-LSTM layer then captures the sequential dependencies and temporal relationships in patient histories that are essential to understanding treatment results. The Circle Levy flight – Ladybug Beetle Optimization (CL-LBO) integrates the circle chaotic map and the Levy flight process into the traditional LBO to select relevant features for classification. The proposed method reached 99.85% accuracy, 99.60% precision, 99.50% recall, 99.55% F1-score, and 99.95% Area Under Curve (AUC) when compared to LSTM.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_36-Convolutional_Neural_Network_and_Bidirectional_Long_Short_Term_Memory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Fourth Party Logistics Routing Considering Infection Risk and Delay Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160135</link>
        <id>10.14569/IJACSA.2025.0160135</id>
        <doi>10.14569/IJACSA.2025.0160135</doi>
        <lastModDate>2025-01-30T09:02:13.2500000+00:00</lastModDate>
        
        <creator>Guihua Bo</creator>
        
        <creator>Sijia Li</creator>
        
        <creator>Mingqiang Yin</creator>
        
        <creator>Mingkun Chen</creator>
        
        <creator>Xin Liu</creator>
        
        <subject>Logistics services; public health emergencies; logistics routing optimization; improved Q-learning algorithm; CVaR; infection risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the context of the rapid development of e-commerce and the increasing demands for logistics services, particularly in the face of challenges posed by public health emergencies, this paper explores how to integrate supply chain resources and optimize delivery processes. It provides an in-depth analysis of the characteristics of the Fourth Party Logistics Routing Optimization Problem (4PLROP) in complex environments, specifically focusing on the impacts of infection risk and delay risk, and proposes a new risk measurement tool. By constructing a mathematical model aimed at minimizing Conditional Value-at-Risk (CVaR) and an improved Q-learning algorithm, the study addresses the 4PLROP while considering cost and risk constraints. This approach enhances the efficiency and service quality of the logistics industry, offers effective strategies for 4PL companies in the face of uncertainty, and provides customers with safer and more reliable logistics solutions, contributing to sustainable development.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_35-Optimization_of_Fourth_Party_Logistics_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Hybrid Deep Learning for Enhanced Spam Review Detection in E-Commerce Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160134</link>
        <id>10.14569/IJACSA.2025.0160134</id>
        <doi>10.14569/IJACSA.2025.0160134</doi>
        <lastModDate>2025-01-30T09:02:13.2030000+00:00</lastModDate>
        
        <creator>Abdulrahman Alghaligah</creator>
        
        <creator>Ahmed Alotaibi</creator>
        
        <creator>Qaisar Abbas</creator>
        
        <creator>Sarah Alhumoud</creator>
        
        <subject>Spam review detection; CNN-LSTM; CNN-RNN; CNN-GRU; big data; deep learning; amazon product review dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Spam reviews represent a real danger to e-commerce platforms, misleading consumers and damaging the reputations of products. Conventional machine learning (ML) methods are not capable of handling the complexity and scale of modern data. This study proposes the novel use of hybrid deep learning (DL) models for spam review detection and experiments with both CNN-LSTM and CNN-GRU architectures on the Amazon Product Review Dataset comprising 26.7 million reviews. One important finding is that a 200k-word vocabulary with very little preprocessing substantially improves the models. Compared with the other models, the CNN-LSTM model achieves the best performance, with an accuracy of 92%, precision of 92.22%, recall of 91.73%, and F1-score of 91.98%. This outcome emphasizes the effectiveness of using convolutional layers to extract local patterns and LSTM layers to capture long-term dependencies. The results also discuss resource constraints, hyperparameter search, and general-purpose representations such as BERT. Such advancements will help create more robust and reliable spam detection systems to maintain consumer trust on e-commerce platforms.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_34-Optimized_Hybrid_Deep_Learning_for_Enhanced_Spam.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>M-COVIDLex: The Construction of a Domain-Specific Mixed Code Sentiment Lexicon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160133</link>
        <id>10.14569/IJACSA.2025.0160133</id>
        <doi>10.14569/IJACSA.2025.0160133</doi>
        <lastModDate>2025-01-30T09:02:13.1730000+00:00</lastModDate>
        
        <creator>Siti Noor Allia Noor Ariffin</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <creator>Nazlia Omar</creator>
        
        <subject>Malay social media text; mixed-code sentiment lexicon; sentiment analysis; domain-specific; lexicon-based; informal Malay; Malay part-of-speech; public health emergencies; COVID-19 Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Sentiment lexicons serve as essential components in lexicon-based sentiment analysis models. Research on sentiment analysis based on the Malay lexicon indicates that most existing sentiment lexicons for this language are developed from official text corpora, general domain social media text corpora, or domain-specific social media text corpora. Nonetheless, none of the current sentiment lexicons adequately complement the corpus utilized in this study. The rationale is that words in established sentiment lexicons may convey different sentiments compared to those in this paper’s corpus, as the strength and sentiment of words are context-dependent, influenced by varying terminology or jargon across domains, and words may not share the same sentiment across multiple domains. This paper proposes the construction of a domain-specific mixed-code sentiment lexicon, termed M-COVIDLex, through the integration of corpus-based and dictionary-based techniques, utilizing seven Malay part-of-speech tags, and enhancing Malay part-of-speech tagging for social media text by introducing a new tag: FOR-POS. The constructed M-COVIDLex is evaluated using two distinct domains of social media text corpus: the specific domain and the general domain. The performance indicates that M-COVIDLex is more appropriate as a sentiment lexicon for analyzing sentiment in a domain-specific social media text corpus, providing valuable insights to governments in assessing the sentiment level regarding the analyzed topic.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_33-M_COVIDLex_The_Construction_of_a_Domain_Specific_Mixed_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Agile Approach for Collaborative Inquiry-Based Learning in Ubiquitous Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160132</link>
        <id>10.14569/IJACSA.2025.0160132</id>
        <doi>10.14569/IJACSA.2025.0160132</doi>
        <lastModDate>2025-01-30T09:02:13.1570000+00:00</lastModDate>
        
        <creator>Bushra Fazal Khan</creator>
        
        <creator>Sohaib Ahmed</creator>
        
        <subject>K12 education; agile; ubiquitous; collaborative learning; inquiry based learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The use of collaborative inquiry-based learning has been prevalent in educational contexts, particularly in science education. Using such collaborative environments, learners can increase their engagement, knowledge, and critical thinking skills about science. With the advancement of technologies, ubiquitous learning environments have been designed to facilitate learning in real-time contexts. Over the past few years, agile-based approaches have been implemented in higher education for inquiry-based learning activities. However, there is a lack of studies focusing on agile-based approaches for ubiquitous collaborative inquiry learning activities at the K-12 education level. Therefore, this study presents the ScrumBan Ubiquitous Inquiry Framework (SBUIF) for inquiry-based learning activities at the K-12 education level. For this purpose, an application, uASK, has been developed based on the proposed framework, SBUIF. For evaluation purposes, computer-supported collaborative learning (CSCL) affordances along with the micro and meso levels of the M3 evaluation framework have been applied. An experiment was conducted to evaluate the uASK application in comparison with the Trello application, involving 205 to 127 seventh-grade students. Results demonstrated that uASK learners achieved higher scores compared with Trello participants. Further, survey results indicated higher levels of engagement, satisfaction, and enjoyment among uASK users. The study concludes that uASK offers significant advantages over Trello in fostering collaborative inquiry-based learning activities in a ubiquitous environment.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_32-An_Agile_Approach_for_Collaborative_Inquiry_Based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CN-GAIN: Classification and Normalization-Denormalization-Based Generative Adversarial Imputation Network for Missing SMES Data Imputation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160131</link>
        <id>10.14569/IJACSA.2025.0160131</id>
        <doi>10.14569/IJACSA.2025.0160131</doi>
        <lastModDate>2025-01-30T09:02:13.1270000+00:00</lastModDate>
        
        <creator>Antonius Wahyu Sudrajat</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <subject>Missing values; GAIN method; normalization-denormalization; imputation; UMKM data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Quality data is crucial for supporting the management and development of SMEs carried out by the government. However, the inability of SME actors to provide complete data often results in incomplete datasets. Missing values present a significant challenge to producing quality data. To address this, missing data imputation methods are essential for improving the accuracy of data analysis. The Generative Adversarial Imputation Network (GAIN) is a machine learning method used for imputing missing data, where data preprocessing plays an important role. This study proposes a new model for missing data imputation called the Classification and Normalization-Denormalization-based Generative Adversarial Imputation Network (CN-GAIN). The study simulates different patterns of missing values, specifically MAR (Missing at Random), MCAR (Missing Completely at Random), and MNAR (Missing Not at Random). For comparison, each missing value pattern is processed using both the CN-GAIN and the base GAIN methods. The results demonstrate that the CN-GAIN model outperforms GAIN in predicting missing values. The CN-GAIN model achieves an accuracy of 0.0801% for the MCAR category and shows a lower error rate (RMSE) of 48.78% for the MNAR category. The mean error (MSE) for the MAR category is 99.60%, while the deviation (MAE) for the MNAR category is 70%.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_31-CN_GAIN_Classification_and_Normalization_Denormalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Fifth-Generation Network Traffic Prediction Using Federated Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160130</link>
        <id>10.14569/IJACSA.2025.0160130</id>
        <doi>10.14569/IJACSA.2025.0160130</doi>
        <lastModDate>2025-01-30T09:02:13.0930000+00:00</lastModDate>
        
        <creator>Mohamed Abdelkarim Nimir Harir</creator>
        
        <creator>Edwin Ataro</creator>
        
        <creator>Clement Temaneh Nyah</creator>
        
        <subject>5G Mobile network; machine learning; federated learning; parallel hybrid LSTM+GRU; network traffic prediction; centralized learning; dynamic network condition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The rapid development and advancement of 5G technologies and smart devices are associated with faster data transmission rates, reduced latency, greater network capacity, and greater dependability than 4G networks. However, these networks are also more complex due to the diverse range of applications and technologies, massive device connectivity, and dynamic network conditions. The dynamic and complex nature of 5G networks requires advanced and accurate traffic prediction methods to optimize resource allocation, enhance the quality of service, and improve network performance. Hence, there is a growing demand for training methods that generate high-quality predictions capable of generalizing to new data across various parties. Traditional methods typically involve gathering data from multiple base stations, transmitting it to a central server, and performing machine learning operations on the collected data. This work suggests a hybrid model of Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, combined with federated learning, applied to 5G network traffic prediction. The model is assessed on one-step predictions, comparing its performance with standalone LSTM and GRU models within a federated learning environment. In evaluating the predictive performance of the proposed federated learning architecture compared to centralized learning, the federated learning approach yields lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) and a 2.25 percent better Coefficient of Determination (R squared).</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_30-Machine_Learning_Based_Fifth_Generation_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multinode LoRa-MQTT of Design Architecture and Analyze Performance for Dual Protocol Network IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160129</link>
        <id>10.14569/IJACSA.2025.0160129</id>
        <doi>10.14569/IJACSA.2025.0160129</doi>
        <lastModDate>2025-01-30T09:02:13.0630000+00:00</lastModDate>
        
        <creator>Rizky Rahmatullah</creator>
        
        <creator>Hongmin Gao</creator>
        
        <creator>Ryan Prasetya Utama</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Jannat Mubashir</creator>
        
        <creator>Rachmat Muwardi</creator>
        
        <creator>Widar Dwi Gustian</creator>
        
        <creator>Hanifah Dwiyanti</creator>
        
        <creator>Yuliza</creator>
        
        <subject>LoRa; MQTT; multinode; QOS; LoRaWAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Large places without LoRaWAN networks often also lack Wi-Fi support at multiple points. IoT device installation therefore needs an architecture that offers dual networks so devices can switch between supporting networks. The novelty of this research lies in designing a multinode LoRa-MQTT architecture, with a mechanism for testing LoRa data transmission at different delays and Wireshark for testing the Wi-Fi network QoS of MQTT. An hour-long LoRa network experiment shows that the end node can only receive one data packet at a time; if several packets arrive together, only one is received due to conflict. The second experiment showed that data delivery barely reached 70%. Which packet is received from a given node, sent some seconds apart, is decided by the signal strength (RSSI) and by which node sent its data first. The tested QoS was excellent, with 21 ms delay, 50,616 bytes/s throughput, and 0.1426 jitter alongside low packet loss. Data conflicts and loss can be avoided by utilizing fewer nodes or adding end nodes in this setup. The network service is excellent. According to this study, LoRa and MQTT can work well together. This approach could solve Internet of Things communication concerns, especially in large places that are LoRaWAN-inaccessible and where Wi-Fi networks are limited.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_29-Multinode_LoRa_MQTT_of_Design_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Whale Optimization Algorithm with LSTM for Stock Index Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160128</link>
        <id>10.14569/IJACSA.2025.0160128</id>
        <doi>10.14569/IJACSA.2025.0160128</doi>
        <lastModDate>2025-01-30T09:02:13.0470000+00:00</lastModDate>
        
        <creator>Yu Sun</creator>
        
        <creator>Sofianita Mutalib</creator>
        
        <creator>Liwei Tian</creator>
        
        <subject>Long short-term memory network; chaotic mapping; dynamic adjustment mechanism; improved whale optimization algorithm; financial time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>After the COVID-19 pandemic, the global economy began to recover. However, stock market fluctuations continue to affect economic stability, making accurate predictions essential. This study proposes an Improved Whale Optimization Algorithm (IWOA) to optimize the parameters of the Long Short-Term Memory (LSTM) model, thereby enhancing stock index predictions. The IWOA improves upon the traditional Whale Optimization Algorithm (WOA) by integrating logistic chaotic mapping to increase population diversity and prevent premature convergence. Additionally, it incorporates a dynamic adjustment mechanism to balance global exploration and local exploitation, thus boosting optimization performance. Experiments conducted on five representative global stock indices demonstrate that the IWOA-LSTM model achieves higher accuracy and reliability compared to WOA-LSTM, LSTM, and RNN models. This highlights its value in predicting complex time-series data and supporting financial decision-making during economic recovery.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_28-Improved_Whale_Optimization_Algorithm_with_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Stock Market Forecasting Through a Service-Driven Approach: Microservice System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160127</link>
        <id>10.14569/IJACSA.2025.0160127</id>
        <doi>10.14569/IJACSA.2025.0160127</doi>
        <lastModDate>2025-01-30T09:02:13.0000000+00:00</lastModDate>
        
        <creator>Asaad Algarni</creator>
        
        <subject>Stock market; microservice architecture; deep learning; technical indicators; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Predicting the stock market is a difficult task that involves not just knowledge of financial measures but also the ability to assess market patterns, investor sentiment, and macroeconomic factors that can affect the movement of stock prices. Traditional stock recommendation systems are built as monolithic applications, with all components closely coupled within a single codebase. While these systems are functional, they have difficulty integrating several services and aggregating data from diverse sources due to their lack of scalability and extensibility. A service-driven approach is needed to manage the growing complexity, diversity, and speed of financial data processing. Microservice architecture has become a useful solution across multiple sectors, particularly in stock systems. In this paper, we design and build a stock market forecasting system based on a microservice architecture that uses advanced analytical approaches such as machine learning, sentiment analysis, and technical analysis to anticipate stock prices and guide informed investing choices. The results demonstrate that the proposed system successfully integrates multiple financial analysis services while maintaining scalability and adaptability due to its microservice architecture. The system successfully retrieved financial metrics and calculated key technical indicators like RSI and MACD. Sentiment analysis detected positive sentiment in Saudi Aramco&#39;s Q3 2021 report, and the LSTM model achieved strong prediction results with an MAE of 0.26 and an MSE of 0.18.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_27-Enhancing_Stock_Market_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of Multimodal Information for Video Comment Text Sentiment Analysis Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160126</link>
        <id>10.14569/IJACSA.2025.0160126</id>
        <doi>10.14569/IJACSA.2025.0160126</doi>
        <lastModDate>2025-01-30T09:02:12.9530000+00:00</lastModDate>
        
        <creator>Jing Han</creator>
        
        <creator>Jinghua Lv</creator>
        
        <subject>Video commentary text sentiment analysis; multimodal information fusion; M-S multimodal sentiment model; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Sentiment analysis of video comment text has important application value in modern social media and opinion management. By conducting sentiment analysis on video comments, we can better understand the emotional tendencies of users, optimise content recommendation, and effectively manage public opinion, which is of great practical significance to the push of video content. To address the problems of current video comment text sentiment analysis methods, such as ambiguous understanding, complex construction, and low accuracy, this paper proposes a sentiment analysis method based on the M-S multimodal sentiment model. First, it briefly describes the existing methods of video comment text sentiment analysis with their advantages and disadvantages; it then studies the key steps of multimodal sentiment analysis and proposes a multimodal sentiment model based on the M-S multimodal sentiment model; finally, the method is verified through simulation experiments on video comment text data from the Communist Youth League. The results show that the proposed model improves the accuracy and real-time performance of prediction and addresses two shortcomings of existing multimodal sentiment analysis methods for video comment text: time complexity too large for practical application, and failure to consider the interrelationships and mutual influences of multimodal information.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_26-Fusion_of_Multimodal_Information_for_Video_Comment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Decentralized Exam Timetabling with a Discrete Whale Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160125</link>
        <id>10.14569/IJACSA.2025.0160125</id>
        <doi>10.14569/IJACSA.2025.0160125</doi>
        <lastModDate>2025-01-30T09:02:12.9070000+00:00</lastModDate>
        
        <creator>Emily Sing Kiang Siew</creator>
        
        <creator>San Nah Sze</creator>
        
        <creator>Say Leng Goh</creator>
        
        <subject>Examination timetabling; discrete whale optimization algorithm; variable neighborhood descent; capacitated; decentralized</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In recent years, there has been increasing interest in intelligent optimization algorithms, such as the Whale Optimization Algorithm (WOA). Initially proposed for continuous domains, WOA mimics the hunting behavior of humpback whales and has been adapted to discrete domains through modifications. This paper presents a novel discrete Whale Optimization Algorithm approach, integrating the strengths of population-based and local-search algorithms to address the examination timetabling problem, a significant challenge many educational institutions face. This problem remains an active area of research and, to the authors&#39; knowledge, has not been adequately addressed by the WOA. The method was evaluated using real-world data from the first semester of 2023/2024 for faculties at the Universiti of Sarawak, Malaysia. The problem incorporates standard and faculty-specified constraints commonly encountered in real-world scenarios, accommodating online and physical assessments. These constraints include resource utilization, exam spread, splitting exams across shared and non-shared rooms, and period preferences, effectively addressing the diverse requirements of faculties. The proposed method begins by generating an initial solution using a constructive heuristic. Several search methods were then employed for comparison during the improvement phase, including three Variable Neighborhood Descent (VND) variations and two modified WOA algorithms employing five distinct neighborhoods. These methods were rigorously tested and compared against proprietary heuristic-based software and manual methods. Among all approaches, the WOA integrated with the iterative threshold-based VND outperforms the others. Furthermore, a comparative analysis of the current decentralized approach, a decentralized approach with re-optimization, and a centralized approach underscores the advantages of centralized scheduling in enhancing performance and adaptability.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_25-Optimizing_Decentralized_Exam_Timetabling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Multi-Dimensional SCADA Report Generation Using LSO-GAN for Web-Based Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160124</link>
        <id>10.14569/IJACSA.2025.0160124</id>
        <doi>10.14569/IJACSA.2025.0160124</doi>
        <lastModDate>2025-01-30T09:02:12.8600000+00:00</lastModDate>
        
        <creator>Fanxiu Fang</creator>
        
        <creator>Guocheng Qi</creator>
        
        <creator>Haijun Cao</creator>
        
        <creator>He Huang</creator>
        
        <creator>Lingyi Sun</creator>
        
        <creator>Jingli Yang</creator>
        
        <creator>Yan Sui</creator>
        
        <creator>Yun Liu</creator>
        
        <creator>Dongqing You</creator>
        
        <creator>Wenyu Pei</creator>
        
        <subject>Web technologies; SCADA systems; report customisation; spectral optimisation algorithms; adversarial generative networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This paper addresses the challenges of generating customised multi-dimensional SCADA (Supervisory Control And Data Acquisition) reports using web technologies. To improve efficiency, reduce maintenance costs, and enhance scalability, it proposes a custom generation method based on the LSO-GAN (Light Spectrum Optimizer - Generative Adversarial Network) model. The study begins by analyzing the requirements for multi-dimensional SCADA reports and proposes a web-based design scheme. The LSO algorithm is employed to optimize the GAN model, enabling efficient generation of customizable SCADA reports. The proposed LSO-GAN model was validated using relevant SCADA data, with experimental results showing that the method outperformed other models in terms of accuracy and generation efficiency. Specifically, the LSO-GAN model achieved an RMSE of 14.98 and a MAPE of 0.93, surpassing traditional models such as Conv-LSTM and FC-LSTM. The custom report generation method based on LSO-GAN significantly improves the customisation and generation of multi-dimensional SCADA reports, demonstrating superior performance in both accuracy and operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_24-Optimizing_Multi_Dimensional_SCADA_Report_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Analytics of Knowledge and Skill Sets for Web Development Using Latent Dirichlet Allocation and Clustering Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160123</link>
        <id>10.14569/IJACSA.2025.0160123</id>
        <doi>10.14569/IJACSA.2025.0160123</doi>
        <lastModDate>2025-01-30T09:02:12.8270000+00:00</lastModDate>
        
        <creator>Karina Djunaidi</creator>
        
        <creator>Dine Tiara Kusuma</creator>
        
        <creator>Rahma Farah Ningrum</creator>
        
        <creator>Puji Catur Siswipraptini</creator>
        
        <creator>Dina Fitria Murad</creator>
        
        <subject>Big data analytics; hierarchical clustering; Latent Dirichlet Allocation; web development; knowledge; skill</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Web development is a data-centric field and a fundamental component of data science. The advent of big data analytics has significantly transformed the processes, knowledge domains, and competencies associated with web development. Accordingly, educational programs must adjust to contemporary advancements by first determining the abilities big data web developers require to satisfy industry demands and keep pace with current trends. This study aims to identify the knowledge areas and abilities essential for big data analytics and to create a taxonomy by correlating these competencies with currently popular tools in web development. A mixed method combining semi-automatic and clustering approaches is proposed for the semantic analysis of the text content of online job advertisements associated with the development of big data web applications. This methodology uses Latent Dirichlet Allocation (LDA), a probabilistic topic modeling tool, to uncover hidden semantic structures within a precisely specified textual corpus, and average-linkage hierarchical clustering as a clustering analysis technique for web developers. The result of this study is a web development competency map which is expected to help evaluate and improve the knowledge, qualifications, and skills of IT professionals being hired; to identify the roles and competencies of professionals in the company’s personnel recruitment process; and to meet industry skill requirements through web development education programs. The competency map consists of knowledge domains, skills, and essential tools for web development, such as basic knowledge, frameworks, design and user experience, database design, web development, cloud computing, and other soft skills. Furthermore, the proposed model can be extended to several types of jobs in the IT sector.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_23-Big_Data_Analytics_of_Knowledge_and_Skill_Sets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IoT) Driven Logistics Supply Chain Management Coordinated Response Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160122</link>
        <id>10.14569/IJACSA.2025.0160122</id>
        <doi>10.14569/IJACSA.2025.0160122</doi>
        <lastModDate>2025-01-30T09:02:12.7970000+00:00</lastModDate>
        
        <creator>Chong Li</creator>
        
        <subject>IoT; logistics supply chain; management coordination; response mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study explores the development of an IoT-driven logistics supply chain coordination and response mechanism aimed at achieving real-time information sharing, precise forecasting, and rapid decision-making among supply chain nodes. By employing a hierarchical system construction method, SQL database techniques for data management, and an evaluation model combining AHP and entropy methods, the study proposes a robust framework for improving supply chain efficiency and adaptability. The results demonstrate that IoT technology significantly enhances supply chain transparency, resource allocation, and operational efficiency while reducing risks and costs. The proposed mechanism facilitates dynamic adjustments to market changes and unexpected disruptions, fostering a resilient and collaborative supply chain network. This research provides a foundational basis for the integration of IoT in modern supply chains and offers insights into advancing intelligent logistics systems, with implications for improving global competitiveness in the evolving digital economy.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_22-Internet_of_Things_IoT_Driven_Logistics_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hotspots and Insights on Quality Evaluation of Study Tours: Visual Analysis Based on Bibliometric Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160121</link>
        <id>10.14569/IJACSA.2025.0160121</id>
        <doi>10.14569/IJACSA.2025.0160121</doi>
        <lastModDate>2025-01-30T09:02:12.7670000+00:00</lastModDate>
        
        <creator>Meihua Deng</creator>
        
        <subject>Research tourism; tourism quality evaluation; visualization analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In this paper, 474 articles on the quality evaluation of study tours in the Web of Science (WOS) database are taken as the research object and analyzed quantitatively with the help of CiteSpace 6.3.R1 software and Excel statistics. The analysis covers the impact of the literature, authors&#39; cooperation networks, publishing institutions, journal distribution, and keyword co-occurrence, clustering, and burst factors, combined with in-depth analysis and prediction over time intervals, so as to present the research results in the form of a visualized knowledge map. The results show that the quality evaluation of study tourism is an interdisciplinary field involving innovative research with multidisciplinary integration. During the decade 2015-2024, it experienced three stages: starting and exploration (2015-2018), rapid growth and diversification (2019-2021), and adjustment and maturity (2022-2024). From the viewpoint of authors and publishing organizations, most authors conduct independent research and have not yet formed a clustered research network. Research hotspots have gradually shifted from theoretical system construction, model development, and empirical analysis to user behavior analysis and recommendation system research. Future research tends towards intelligent decision-making for study-tour integration, the study-tour industry economy, environmental tourism practice, and risk management.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_21-Hotspots_and_Insights_on_Quality_Evaluation_of_Study_Tours.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Customer Churn Prediction Across Industries: A Comparative Study of Ensemble Stacking and Traditional Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160120</link>
        <id>10.14569/IJACSA.2025.0160120</id>
        <doi>10.14569/IJACSA.2025.0160120</doi>
        <lastModDate>2025-01-30T09:02:12.7330000+00:00</lastModDate>
        
        <creator>Nurul Nadzirah bt Adnan</creator>
        
        <creator>Mohd Khalid Awang</creator>
        
        <subject>Customer churn; single classifier; ensemble classifier; stacking; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Predicting customer churn is essential in sectors such as banking, telecommunications, and retail, where retaining existing customers is more cost-effective than acquiring new ones. This paper proposes an enhanced ensemble stacking methodology to improve the prediction performance of ensemble methods. Classic ensemble classifiers and individual models are enhanced to improve their generalisation across sectors. The proposed ensemble stacking method is compared with well-known ensemble classifiers, including Random Forest, Gradient Boosting Machines (GBMs), AdaBoost, and CatBoost, alongside single classifiers such as Logistic Regression (LR), Decision Trees (DT), Naive Bayes (NB), Support Vector Machines (SVM), and Multi-Layer Perceptron. Performance evaluation employs accuracy, precision, recall, and AUC-ROC metrics, utilising datasets from the telecom, retail, and banking sectors. This study highlights the importance of investigating ensemble stacking within these three business domains, given that each sector presents distinct challenges and data patterns related to customer churn prediction. According to the results, the ensemble stacking method achieves better generalisation and accuracy than other ensemble approaches and single classifiers. The stacking method uses a meta-learner in conjunction with numerous base classifiers to improve model performance and make it adaptable to new domains. This study shows that the ensemble stacking method can accurately anticipate customer churn and can be applied across different industries, offering firms an effective way to retain their clients.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_20-Enhancing_Customer_Churn_Prediction_Across_Industries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employing Data-Driven NOA-LSSVM Algorithm for Indoor Spatial Environment Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160119</link>
        <id>10.14569/IJACSA.2025.0160119</id>
        <doi>10.14569/IJACSA.2025.0160119</doi>
        <lastModDate>2025-01-30T09:02:12.6870000+00:00</lastModDate>
        
        <creator>Di Wang</creator>
        
        <creator>Hui Ma</creator>
        
        <creator>Tingting Lv</creator>
        
        <subject>New media environments; data-driven algorithms; indoor spatial environment design; mariner optimization method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study aims to enhance the precision and efficiency of indoor spatial design for college physical bookstores in the context of the new media environment. To achieve this, a novel intelligent analysis model was developed by integrating the Navigator Optimization Algorithm (NOA) with the Least Squares Support Vector Machine (LSSVM). The research analyzes the relationship between the new media environment and bookstore design, identifies key design principles, and establishes performance metrics. The proposed NOA-LSSVM model optimizes design parameters by utilizing a hybrid convergence-divergence search mechanism, achieving improved accuracy and computational efficiency. A case study of Jilin Jianzhu University&#39;s bookstore was conducted to evaluate the model&#39;s performance. The NOA-LSSVM model was compared with three other optimization algorithms: the Flower Pollination Algorithm (FPA), Whale Optimization Algorithm (WOA), and Sine Cosine Algorithm (SCA). Results showed that the NOA-LSSVM model achieved superior accuracy, with a Mean Absolute Percentage Error (MAPE) of 2.9, significantly lower than FPA (4.6), WOA (3.8), and SCA (4.2). Additionally, the model exhibited faster convergence and enhanced design efficiency, optimizing the bookstore&#39;s functional zones and spatial layout to balance dynamic and quiet areas effectively. In conclusion, the NOA-LSSVM model demonstrates a robust capability to optimize indoor spatial design in the new media environment, outperforming traditional methods in accuracy and practicality. This study provides valuable insights for integrating intelligent algorithms into spatial design processes, with the potential for broader applications in other commercial or educational spaces. Future research should focus on extending the model&#39;s generalizability and incorporating advanced media technologies for enhanced user experiences.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_19-Employing_Data_Driven_NOA_LSSVM_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining MRO-BP Network-Based Evaluation Effectiveness of Music Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160118</link>
        <id>10.14569/IJACSA.2025.0160118</id>
        <doi>10.14569/IJACSA.2025.0160118</doi>
        <lastModDate>2025-01-30T09:02:12.6570000+00:00</lastModDate>
        
        <creator>Yifan Fan</creator>
        
        <subject>Mushroom propagation optimisation algorithm; BP neural network; higher music education teaching outcomes measurement; algorithm evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This study addresses the need for data analysis in evaluating the teaching outcomes of higher music education. It proposes a solution using data-driven algorithms to measure and analyze these outcomes. The study focuses on the issue of measuring and evaluating the outcomes of music education teaching: it analyzes the process of measuring and assessing these outcomes, designs a program for doing so, and introduces key technologies such as music education teaching process analysis, measurement of music teaching outcomes, construction of an assessment model for music teaching outcomes, and application of the assessment model. The study selects teaching content, practical skills, and social practice ability as the three aspects to evaluate. The results demonstrate that this method achieves higher assessment accuracy and requires less time, effectively addressing the challenge of measuring and evaluating the teaching outcomes of higher music education from the viewpoint of big data.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_18-Data_Mining_MRO_BP_Network_Based_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Immersion and Presence in Virtual Reality for Architectural Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160117</link>
        <id>10.14569/IJACSA.2025.0160117</id>
        <doi>10.14569/IJACSA.2025.0160117</doi>
        <lastModDate>2025-01-30T09:02:12.6100000+00:00</lastModDate>
        
        <creator>Athira Azmi</creator>
        
        <creator>Sharifah Mashita Syed Mohamad</creator>
        
        <subject>Virtual environment; virtual reality; human-computer interaction; architectural visualization; sense of presence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The architecture industry increasingly relies on virtual reality (VR) for architectural visualization, yet there is a critical issue of insufficient user involvement in the design process. This study investigates the sense of immersion and presence in the virtual environment among 60 Malaysian participants aged 20 to 40. The study utilized a 1000 sq. ft. apartment with three bedrooms and two bathrooms, replicated in a 3D model based on real-world references. Our findings show that participants were moderately immersed in the virtual environment (M = 4.86), but the lack of a sense of touch, lack of detail, and limited interactivity within the virtual environment affected their sense of immersion in VR for architectural visualization. This study enhances our understanding of human-computer interaction in VR, specifically for architectural visualization, and emphasizes the importance of improving these aspects to create more effective architectural visualization user experiences.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_17-Investigating_Immersion_and_Presence_in_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AI-Driven Approach for Advancing English Learning in Educational Information Systems Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160116</link>
        <id>10.14569/IJACSA.2025.0160116</id>
        <doi>10.14569/IJACSA.2025.0160116</doi>
        <lastModDate>2025-01-30T09:02:12.5770000+00:00</lastModDate>
        
        <creator>Xue Peng</creator>
        
        <creator>Yue Wang</creator>
        
        <subject>Artificial intelligence; information system; machine learning; English language learning; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In the current era of globalization, learning the English language is important, as it has become a global language that helps people from various regions and language backgrounds communicate. For vocational students, whose main aim is to acquire skills and gain employment, learning English for communication is essential. We present a proposed framework for learning the English language, which can become the foundation of a complete Artificial Intelligence (AI) based system to help and guide educators. This study explores the use of diverse Natural Language Processing (NLP) techniques to predict various grammatical aspects of English language content, with a particular focus on tense prediction, which lays the foundation of English content. Textual features based on Bag of Words (BoW), which treats each word as a separate token, and Term Frequency-Inverse Document Frequency (TF-IDF) are explored. For both feature sets, the shallow machine learning models Support Vector Machine (SVM) and Multinomial Na&#239;ve Bayes are applied. Moreover, ensemble models based on bagging and calibrated classifiers are applied. The results reveal that SVM with BoW input and the bagging technique using TF-IDF show optimal results, with high accuracies of 90% and 89%, respectively. This empirical analysis confirms that such models can be integrated into web- or Android-based systems that can be helpful for learners of the English language.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_16-An_AI_Driven_Approach_for_Advancing_English_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Diverse Conventional and Deep Linguistic Features for Sentiment Analysis of Online Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160115</link>
        <id>10.14569/IJACSA.2025.0160115</id>
        <doi>10.14569/IJACSA.2025.0160115</doi>
        <lastModDate>2025-01-30T09:02:12.5470000+00:00</lastModDate>
        
        <creator>Yajun Tang</creator>
        
        <subject>Artificial intelligence; sentiment analysis; machine learning; word embeddings; natural language programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Social media has changed the world by enabling ordinary people to share their views and generate their own content, known as User Generated Content (UGC). Because a huge volume of UGC data is created at great velocity, the latest AI (Artificial Intelligence) techniques and its sub-domain NLP (Natural Language Processing) are used to analyze this big data. Sentiment analysis of online content is an active research area due to its vast applications in business review analysis and in social and political issues. In this research study, we aim to carry out sentiment analysis of online content by exploring conventional features such as Term Frequency-Inverse Document Frequency (TF-IDF) and count vectorization, as well as state-of-the-art word2vec-based word embeddings. Extensive exploratory data analysis has been carried out using the latest data visualization approaches. The main novelty lies in the application of unique and diverse machine learning algorithms to social media datasets, and evaluation using standard performance measures reveals that word2vec features with a Quadratic Discriminant Analysis-based classifier show optimal results.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_15-Exploring_Diverse_Conventional_and_Deep_Linguistic_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Factors Analysis Using Visualizations and SHAP: Comprehensive Case Analysis of Tennis Results Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160114</link>
        <id>10.14569/IJACSA.2025.0160114</id>
        <doi>10.14569/IJACSA.2025.0160114</doi>
        <lastModDate>2025-01-30T09:02:12.5170000+00:00</lastModDate>
        
        <creator>Yuan Zhang</creator>
        
        <subject>Artificial intelligence; data analytics; machine learning; match result prediction; XAI; SHAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Explainable Artificial Intelligence (XAI) enhances interpretability in data-driven models, providing valuable insights into complex decision-making processes. By ensuring transparency, XAI bridges the gap between advanced Artificial Intelligence (AI) techniques and their practical applications, fostering trust and enabling data-informed strategies. In the realm of sports analytics, XAI proves particularly significant, as it unravels the multifaceted nature of factors influencing athletic performance. This work uses a rich data analysis flow that includes descriptive, predictive, and prescriptive analysis of tennis match outcomes. Descriptive analysis uses XAI techniques such as SHAP (SHapley Additive exPlanations) with diverse factors such as physical, geographical, surface-level, and skill disparities. Top players are ranked, and the trend of country-wise winning is presented for the last several decades. Correlation analysis presents the inter-dependence of factors. Predictive analysis makes use of machine learning models, with the K-Nearest Neighbors classifier achieving the highest overall accuracy of 80%. Lastly, prescriptive analysis offers specific recommendations that can be helpful for players and coaches as well as for overall strategy planning and performance enhancement. The research underscores the significance of AI-driven insights in sports analytics, particularly for a fast-paced and strategic sport like tennis. By leveraging advanced data analytics methods, this study offers a nuanced understanding of the interplay between player attributes, match contexts, and historical trends, paving the way for enhanced performance and informed strategic planning in professional tennis.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_14-Multi_Factors_Analysis_Using_Visualizations_and_SHAP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IT Spin-Offs Challenges in Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160113</link>
        <id>10.14569/IJACSA.2025.0160113</id>
        <doi>10.14569/IJACSA.2025.0160113</doi>
        <lastModDate>2025-01-30T09:02:12.5000000+00:00</lastModDate>
        
        <creator>Mahmoud M. Musleh</creator>
        
        <creator>Ibrahim Mohamed</creator>
        
        <creator>Hasimi Sallehudin</creator>
        
        <creator>Hussam F. Abushawish</creator>
        
        <subject>IT spin-off framework; higher learning; IT challenges; spin-off; framework; developing countries; entrepreneurship; innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>IT-enabled spin-off ventures in developing countries’ higher learning institutions have the potential to transform academic research into commercially viable products, thereby fostering economic and technological progress. However, practical implementation faces significant challenges, particularly in conflict areas, such as limited resources, socio-political instability, skill gaps, weak intellectual property laws, and inadequate frameworks for protecting innovation. Objective: This study aims to mitigate these challenges by proposing a strategic framework that leverages universities&#39; available resources to promote IT-enabled spin-offs. This framework addresses barriers and converts challenges into opportunities. Methods: This case study focused on higher learning institutions in developing countries. Specifically, this study examines the unique constraints faced by Palestinian higher learning institutions in conflict zones in order to design a tailored IT-enabled spin-off framework. Results: The proposed framework aligns with the National Development Plan and offers pathways for universities to overcome practical barriers. It emphasizes transforming research output into sustainable IT spin-off ventures that support entrepreneurship and innovation. Conclusions: This study highlights the critical need for a new strategic framework for higher learning institutions that incorporates IT-enabled spin-offs as a guiding principle to promote innovation and entrepreneurship. The proposed framework addresses current gaps and provides actionable solutions for advancing sustainable development in conflict-affected regions.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_13-IT_Spin_Offs_Challenges_in_Developing_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Highly Functional Ensemble of Improved Chaos Sparrow Search Optimization Algorithm and Enhanced Sun Flower Optimization Algorithm for Query Optimization in Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160112</link>
        <id>10.14569/IJACSA.2025.0160112</id>
        <doi>10.14569/IJACSA.2025.0160112</doi>
        <lastModDate>2025-01-30T09:02:12.4700000+00:00</lastModDate>
        
        <creator>Mursubai Sandhya Rani</creator>
        
        <creator>N. Raghavendra Sai</creator>
        
        <subject>Big data (BD); query optimization; Improved Chaos Sparrow Search Optimization Algorithm (ICSSOA); Enhanced Sun Flower Optimization Algorithm (ESOA); ResNet50V2; DBSCAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Numerous systems have to provide the highest level of performance feasible to their users due to the present accessibility of enormous datasets and scalability needs. Efficiency in big data is measurable in terms of the speed at which queries are executed physically. Executing queries on big data quickly enough to satisfy users&#39; needs is highly demanding. The query optimizer, one of the critical parts of big data systems that selects the best query execution plan and subsequently influences the query execution duration, is the primary focus of this research. A well-designed query therefore enables the user to obtain results in the required time and enhances the credibility of the associated application. This research proposes an enhanced query optimization method for big data (BD) utilizing the ICSSOA-ESFOA algorithm (Improved Chaos Sparrow Search Optimization Algorithm and Enhanced Sun Flower Optimization Algorithm) with HDFS MapReduce to avoid the challenges associated with query optimization. The essential features are extracted by employing the ResNet50V2 approach. Effective data arrangement is necessary for making sense of large and complex datasets; for this purpose, we ensemble Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Improved Spectral Clustering (ISC). The experimental results show that the proposed strategy significantly outperforms the current query optimization paradigm, reaching 99.5% accuracy, an execution time of 29.4 seconds, and 450 MB less memory use.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_12-A_Highly_Functional_Ensemble_of_Improved_Chaos_Sparrow_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hawk-Eye Deblurring and Pose Recognition in Tennis Matches Based on Improved GAN and HRNet Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160111</link>
        <id>10.14569/IJACSA.2025.0160111</id>
        <doi>10.14569/IJACSA.2025.0160111</doi>
        <lastModDate>2025-01-30T09:02:12.4230000+00:00</lastModDate>
        
        <creator>Weixin Zhao</creator>
        
        <subject>DeblurGANv2; HRNet; tennis; hawk-eye system; deblurring; pose recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In tennis matches, the Hawk-eye system suffers from blurry trajectory judgment and low accuracy in player posture recognition due to rapid movement and complex backgrounds. Therefore, this research improves the backbone network and the iterative attention feature fusion mechanism of the deblurring generative adversarial network (DeblurGANv2). At the same time, the Ghost and Sandglass modules and a coordinate attention mechanism are used to optimize the high-resolution network (HRNet), and a new model for deblurring and pose recognition of Hawk-eye images in tennis matches is proposed by integrating the improved generative adversarial network and high-resolution network. The new model achieved an information entropy value of 11.2, a peak signal-to-noise ratio of 29.74 decibels, a structural similarity of 0.89, a minimum parameter size of 4.53, and a running time of 0.25 seconds on the tennis tracking dataset and the Max Planck Society human posture dataset, outperforming current advanced models. The highest accuracy of deblurring and pose recognition for the model under different lighting intensities was 92.44%, and the highest improvement rate of video frame quality was 18%. These results show that the model has significant advantages in deblurring effect, posture recognition accuracy, parameter quantity, and running time, and has high practical application potential. It can provide an advanced theoretical reference for tennis match refereeing and technical training.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_11-Hawk_Eye_Deblurring_and_Pose_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marked Object-Following System Using Deep Learning and Metaheuristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160110</link>
        <id>10.14569/IJACSA.2025.0160110</id>
        <doi>10.14569/IJACSA.2025.0160110</doi>
        <lastModDate>2025-01-30T09:02:12.3900000+00:00</lastModDate>
        
        <creator>Ken Gorro</creator>
        
        <creator>Elmo Ranolo</creator>
        
        <creator>Lawrence Roble</creator>
        
        <creator>Rue Nicole Santillan</creator>
        
        <creator>Anthony Ilano</creator>
        
        <creator>Joseph Pepito</creator>
        
        <creator>Emma Sacan</creator>
        
        <creator>Deofel Balijon</creator>
        
        <subject>Object detection; YOLOv8; distance estimation; A-star; tabu search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This paper presents a deep learning methodology for a marked object-following system that incorporates the YOLOv8 (You Only Look Once version 8) object identification model and an inversely proportional distance estimation algorithm. The primary aim of this study is to develop a marked object-following algorithm capable of autonomously tracking a designated marker while maintaining a suitable distance through advanced computer vision techniques. In this study, a marked object is defined as an object that is explicitly labeled, tagged, or physically marked for identification, typically using visible markers such as QR codes, stickers, or distinct added features. Central to the system’s functionality is the YOLOv8 model, which detects objects and generates bounding boxes around identified target classes in real time. The proposed marked object-following algorithm utilizes the distance estimation method, which leverages fluctuations in the bounding box width to determine the relative distance between the observed user and the camera. A pathfinding algorithm was created using tabu search and A-star to avoid obstacles and generate a path to continue following the marked object. Furthermore, the system’s efficacy was assessed using critical performance metrics, including the F1-score and Precision-Recall. The YOLOv8 model attained an F1-score of 0.95 at a confidence threshold of 0.461 and a mean Average Precision (mAP) of 0.961 at an IoU threshold of 0.5 for all target classes. These results indicate a high level of accuracy in object detection and tracking. However, it is important to note that this algorithm was evaluated only in closed-door and controlled environments.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_10-Marked_Object_Following_System_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Twin Model from Freehanded Sketch to Facade Design, 2D-3D Conversion for Volume Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160109</link>
        <id>10.14569/IJACSA.2025.0160109</id>
        <doi>10.14569/IJACSA.2025.0160109</doi>
        <lastModDate>2025-01-30T09:02:12.3770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>BIM; AI; GIS; digital twins; metaverse; generative AI; GauGAN; TriPo; SketchUp; IFC format; GeoTiff</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>The article proposes a method for creating digital twins from freehand sketches for facade design, converting 2D designs to 3D volumes, and integrating these designs into real-world GIS systems. It outlines a process that involves generating 2D exterior images from sketches using generative AI (Gemini 1.5 Pro), converting these 2D images into 3D models with TriPo, and creating design drawings with SketchUp. Additionally, it describes a method for creating 3D exterior images using GauGAN, all for the purpose of construction exterior evaluation. The paper also discusses generating BIM data using generative AI, converting BIM data (in IFC file format) to GeoTiff, and displaying this information in GIS using QGIS software. Moreover, it suggests a method for generating digital twins with SketchUp to facilitate digital design information sharing and simulation within a virtual space. Lastly, it advocates for a cost-effective AI system designed for small and medium-sized construction companies, which often struggle to adopt BIM, to harness the advantages of digital twins.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_9-Digital_Twin_Model_from_Freehanded_Sketch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SEC-MAC: A Secure Wireless Sensor Network Based on Cooperative Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160108</link>
        <id>10.14569/IJACSA.2025.0160108</id>
        <doi>10.14569/IJACSA.2025.0160108</doi>
        <lastModDate>2025-01-30T09:02:12.3600000+00:00</lastModDate>
        
        <creator>Yassmin Khairat</creator>
        
        <creator>Tamer O. Diab</creator>
        
        <creator>Ahmed Fawzy</creator>
        
        <creator>Samah Osama</creator>
        
        <creator>Abd El- Hady Mahmoud</creator>
        
        <subject>Wireless Sensor Networks (WSNs); energy efficiency; Media Access Control (MAC); cooperative communication; handshaking algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Wireless Sensor Networks (WSNs) are essential for a wide range of applications, from environmental monitoring to security systems. However, challenges such as energy efficiency, throughput, and packet delivery delay need to be addressed to enhance network performance. This paper introduces a novel Medium Access Control (MAC) protocol that utilizes cooperative communication strategies to improve these critical metrics. The proposed protocol enables source nodes to leverage intermediate nodes as relays, facilitating efficient data transmission to the access point. By employing a cross-layer approach, the protocol optimizes the selection of relay nodes based on factors like transmission time and residual energy, ensuring optimal end-to-end paths. The protocol&#39;s performance is rigorously evaluated using a simulation environment, demonstrating significant improvements over existing methods. Specifically, the protocol enhances throughput by 12%, boosts energy efficiency by 50%, and reduces average packet delivery delay by approximately 48% compared with IEEE 802.11b. These results indicate that the protocol not only extends the lifespan of sensor nodes by conserving energy but also improves the overall reliability and efficiency of the WSN, making it a robust solution for modern wireless sensor networks. Security in Wireless Sensor Networks (WSNs) is crucial due to vulnerabilities like eavesdropping, data tampering, and denial of service attacks. Our proposed MAC protocol addresses these challenges by incorporating authentication techniques, such as the handshaking protocol. These measures protect data integrity, confidentiality, and availability, ensuring reliable and secure data transmission across the network. This approach enhances the resilience of WSNs, making them more secure and trustworthy for critical applications such as healthcare and security monitoring.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_8-SEC_MAC_A_Secure_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing the Power of Federated Learning: A Systematic Review of Light Weight Deep Learning Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160107</link>
        <id>10.14569/IJACSA.2025.0160107</id>
        <doi>10.14569/IJACSA.2025.0160107</doi>
        <lastModDate>2025-01-30T09:02:12.3270000+00:00</lastModDate>
        
        <creator>Haseeb Khan Shinwari</creator>
        
        <creator>Riaz Ul Amin</creator>
        
        <subject>Light weight protocols; Sentiment analysis; federated learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>With the rapid proliferation of smart devices, real-time efficient sentiment analysis has gained considerable popularity. These devices generate a variety of data. However, it is challenging for resource-constrained devices to perform sentiment analysis over multimodal data using conventional models that are computationally complex and resource hungry. This challenge may be addressed using a lightweight but efficient model specifically focused on sentiment analysis for constrained devices. In the literature, there are several models that claim to be lightweight; however, the criteria for determining whether a model may be termed lightweight still require further research. This paper reviews approaches to federated learning for multimodal sentiment analysis. Federated learning enables decentralized training without sharing data. Considering the need to balance privacy concerns, performance, and resource usage, the review evaluates existing approaches to enhance accuracy in sentiment classification. The review identifies strengths and limitations in handling multimodal data. The search focused on studies in databases like IEEE Xplore and Scopus. Studies published in peer-reviewed journals over the past five years were included. The review covers 45 studies, mostly experimental, with some theoretical models. Key results show that lightweight protocols improve efficiency and privacy in federated learning. They reduce computational demands while handling text, image, and audio data. There is a growing focus on resource-constrained devices in research. Trade-offs between model complexity and speed are commonly explored. The review addresses how these protocols balance accuracy and computational cost.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_7-Harnessing_the_Power_of_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teaching Programming in Higher Education: Analyzing Trends, Technologies, and Pedagogical Approaches Through a Bibliometric Lens</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160106</link>
        <id>10.14569/IJACSA.2025.0160106</id>
        <doi>10.14569/IJACSA.2025.0160106</doi>
        <lastModDate>2025-01-30T09:02:12.2800000+00:00</lastModDate>
        
        <creator>Mariuxi Vinueza-Morales</creator>
        
        <creator>Jorge Rodas-Silva</creator>
        
        <creator>Cristian Vidal-Silva</creator>
        
        <subject>Programming; higher education; teaching strategies; bibliometrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In today’s information society, developing programming competencies is essential in higher education. Numerous studies have been conducted on effective strategies for fostering these skills. This study performs a bibliometric analysis of research on teaching strategies for programming in higher education, using data from the SCOPUS and Web of Science (WOS) databases between 2014 and 2023. The analysis identifies key trends, influential authors, and collaboration networks in this field. The most effective teaching strategies include project-based learning, flipped classrooms, and collaborative programming. Emerging technologies such as augmented reality and virtual reality are gaining prominence in programming education. Despite the growth of research in this area, challenges remain, such as the lack of longitudinal studies exploring the long-term impact of these methodologies and the need for greater geographic diversity in studies. This paper emphasizes the importance of exploring new technologies and interdisciplinary approaches and fostering international collaborations to enhance programming education. The findings guide researchers and educators on how to optimize programming learning in a global context.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_6-Teaching_Programming_in_Higher_Education_Analyzing_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control Interface for Multi-User Video Games with Hand or Head Gestures in Directional Key-Based Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160105</link>
        <id>10.14569/IJACSA.2025.0160105</id>
        <doi>10.14569/IJACSA.2025.0160105</doi>
        <lastModDate>2025-01-30T09:02:12.2500000+00:00</lastModDate>
        
        <creator>Oscar Ramirez-Valdez</creator>
        
        <creator>C&#233;sar Baluarte-Araya</creator>
        
        <creator>Rodrigo Castillo-Lazo</creator>
        
        <creator>Italo Ccoscco-Alvis</creator>
        
        <creator>Alexander Valdiviezo-Tovar</creator>
        
        <creator>Alexander Villafuerte-Quispe</creator>
        
        <creator>Dylan Zu&#241;iga-Huraca</creator>
        
        <subject>Control interface; video games; artificial vision; gesture-based interface; directional commands; human-computer interaction; deep learning algorithms; accessibility; real-time; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>This paper describes the development and implementation of a hand or head gesture-based control interface for video games, enhanced for games that use directional keys. The objective is to develop an adaptive control system for a multiplayer video game that allows users to choose between the use of traditional directional keys or a gesture-based interface. The methodology follows the Cross-Industry Standard Process for Data Mining (CRISP-DM) development model, which allows a structured integration of analysis, design, implementation, and evaluation steps. Technologies such as OpenCV, MediaPipe, and deep learning algorithms are used, translating hand movements into directional commands in real time. In addition, the system integrates a client-server architecture based on Node.js that supports multiple users, enabling an immersive gaming experience on PC and mobile platforms. The results highlight the accuracy of the system and its potential to improve accessibility, especially for users with motor disabilities, who can use hand or head movements to control the directional keys. The study concludes that the control interface for multi-user video games provides the necessary support for gamers to perform in-game tasks, promoting accessibility in the entertainment environment.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_5-Control_Interface_for_Multi_User_Video_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting the Emergence of a Dominant Design by Classifying Product and Process Patents Using Machine Learning and Text Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160104</link>
        <id>10.14569/IJACSA.2025.0160104</id>
        <doi>10.14569/IJACSA.2025.0160104</doi>
        <lastModDate>2025-01-30T09:02:12.2200000+00:00</lastModDate>
        
        <creator>Koji Masuda</creator>
        
        <creator>Yoshinori Hayashi</creator>
        
        <creator>Shigeyuki Haruyama</creator>
        
        <subject>Dominant design; patent analysis; technological innovation; machine learning; text mining; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Forecasting the emergence of a dominant design in advance is important because its emergence can provide useful information about the external environment for a product launch. Although the emergence of a dominant design can only be determined as a result of the introduction of the product into the market, it may be possible to predict it in advance by applying a solution based on patent analysis. A newly proposed technique of separating patents can capture changes in the state of technological innovation and analyze the emergence of a dominant design, but it requires processing large amounts of patent data, and that processing involves subjective judgments by experts. This study focuses on analyzing technological innovation trends using an approach that separates product patents from process patents, investigates whether this approach can be applied to machine learning, and aims to develop a learning model that automatically classifies patents. We applied text mining to patent information to create structured data sets and compared nine different machine learning classification algorithms with and without dimensionality reduction. The approach was effectively applied to machine learning, and the Random Forest, AdaBoost, and Support Vector Machine models achieved high classification performance of over 95%. By developing these learning models, it is possible to objectively forecast the emergence of a dominant design with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_4-Forecasting_the_Emergence_of_a_Dominant_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of DDoS Cyberattack Using a Hybrid Trust-Based Technique for Smart Home Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160103</link>
        <id>10.14569/IJACSA.2025.0160103</id>
        <doi>10.14569/IJACSA.2025.0160103</doi>
        <lastModDate>2025-01-30T09:02:12.1870000+00:00</lastModDate>
        
        <creator>Oghenetejiri Okporokpo</creator>
        
        <creator>Funminiyi Olajide</creator>
        
        <creator>Nemitari Ajienka</creator>
        
        <creator>Xiaoqi Ma</creator>
        
        <subject>Trust; smart home; IoT; DDoS; denial of service; DoS; cyber threats; techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>As Smart Home Internet of Things (SHIoT) networks continue to evolve, improving connectivity and security whilst offering convenience, ease, and efficiency is crucial. SHIoT networks are vulnerable to several cyberattacks, including Distributed Denial of Service (DDoS) attacks. The ever-changing landscape of Smart Home IoT threats presents many problems for current cybersecurity techniques. In response, we propose a hybrid Trust-based approach for DDoS attack detection and mitigation. Our proposed technique incorporates adaptive mechanisms and trust evaluation models to monitor device behaviour and identify malicious nodes dynamically. By leveraging real-time threat detection and secure routing protocols, the proposed trust-based mechanism ensures uninterrupted communication and minimizes the attack surface. Additionally, energy-efficient techniques are employed to safeguard communication without overburdening resource-constrained SHIoT devices. To evaluate the effectiveness of the proposed technique in efficiently detecting and mitigating DDoS attacks, we conducted several simulation experiments and compared the performance of the approach with other existing DDoS detection mechanisms. The results showed notable improvements in terms of energy efficiency, improved system resilience, and enhanced computations. Our solution offers a targeted approach to securing Smart Home IoT environments against evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_3-Detection_of_DDoS_Cyberattack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Predictive Analysis Using Machine Learning Techniques: Performance Evaluation of Manual and AutoML Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160102</link>
        <id>10.14569/IJACSA.2025.0160102</id>
        <doi>10.14569/IJACSA.2025.0160102</doi>
        <lastModDate>2025-01-30T09:02:12.1400000+00:00</lastModDate>
        
        <creator>Karim Mohammed Rezaul</creator>
        
        <creator>Md. Jewel</creator>
        
        <creator>Anjali Sudhan</creator>
        
        <creator>Mifta Uddin Khan</creator>
        
        <creator>Maharage Roshika Sathsarani Fernando</creator>
        
        <creator>Kazy Noor e Alam Siddiquee</creator>
        
        <creator>Tajnuva Jannat</creator>
        
        <creator>Muhammad Azizur Rahman</creator>
        
        <creator>Md Shabiul Islam</creator>
        
        <subject>Machine learning; predictive analytics; sports forecasting; automated machine learning (AutoML); feature engineering; model evaluation; data pre-processing; algorithm comparison; football analytics; sports betting; team performance metrics; exploratory data analysis (EDA); cross-validation techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>In this study, we have compared manual machine learning with automated machine learning (AutoML) to see which performs better in predictive analysis. Using data from past football matches, we tested a range of algorithms to forecast game outcomes. By exploring the data, we discovered patterns and team correlations, then cleaned and prepped the data to ensure the models had the best possible inputs. Our findings show that AutoML, especially when using logistic regression, can outperform manual methods in prediction accuracy. The big advantage of AutoML is that it automates the tricky parts, like data cleaning, feature selection, and tuning model parameters, saving time and effort compared to manual approaches, which require more expertise to achieve similar results. This research highlights how AutoML can make predictive analysis easier and more accurate, providing useful insights for many fields. Future work could explore using different data types and applying these techniques to other areas to show how adaptable and powerful machine learning can be.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_2-A_Comparative_Study_of_Predictive_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Machine Learning Approaches for Accurate Migraine Prediction and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2025</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2025.0160101</link>
        <id>10.14569/IJACSA.2025.0160101</id>
        <doi>10.14569/IJACSA.2025.0160101</doi>
        <lastModDate>2025-01-30T09:02:12.0930000+00:00</lastModDate>
        
        <creator>Chokri Baccouch</creator>
        
        <creator>Chaima Bahar</creator>
        
        <subject>Headache classification; migraine; migraine diagnosis; migraine classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 16(1), 2025</description>
        <description>Migraine is a neurovascular disorder with a prevalence exceeding 1 billion individuals worldwide, yet it has long been recognized to pose unique diagnostic challenges due to its heterogeneous pathophysiology and dependence on subjective assessments. As extensively documented by a number of international bodies, migraine in the workplace has been identified as a significant issue that requires urgent attention. Migraine, defined by episodic, unilateral, and debilitating symptoms including aura and nausea, incurs a high socioeconomic burden through disability. Mechanisms such as altered cortical excitability and trigeminal system activation, although extensively researched, are still inadequately understood. Deep learning and machine learning (ML) hold tremendous potential for transforming the diagnosis and classification of migraine. This study evaluates several ML models, including gradient boosting, decision tree, random forest, k-Nearest Neighbors (KNN), support vector machine (SVM), logistic regression, multi-layer perceptron (MLP), artificial neural network (ANN), and deep neural network (DNN), for multi-class classification of migraine. By employing advanced preprocessing techniques and publicly available datasets, the study addresses the challenge of identifying different types of migraine that may share common variables. The results show that, for multi-class migraine classification, MLP and Gradient Boosting performed well across most settings, attaining high accuracies (96.4% and 97%, respectively), but performed poorly in complex subcategories such as Typical Aura with Migraine. KNN and Logistic Regression, two traditional models, performed well on basic classifications but poorly in more complex situations, while neural networks (ANN and DNN) showed considerable flexibility toward data complexities. These results underscore how important it is to align model selection with data properties and provide avenues for improving performance through regularization and feature engineering. This strategy illustrates how AI-powered solutions can revolutionize the way we manage, treat, and prevent migraines across the globe.</description>
        <description>http://thesai.org/Downloads/Volume16No1/Paper_1-Advanced_Machine_Learning_Approaches_for_Accurate_Migraine_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accuracy Optimization and Wide Limit Constraints of DC Energy Measurement Based on Improved EEMD</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01512100</link>
        <id>10.14569/IJACSA.2024.01512100</id>
        <doi>10.14569/IJACSA.2024.01512100</doi>
        <lastModDate>2024-12-30T11:56:56.1470000+00:00</lastModDate>
        
        <creator>Xiaoyu Wang</creator>
        
        <creator>Xin Yin</creator>
        
        <creator>Xinggang Li</creator>
        
        <creator>Jiangxue Man</creator>
        
        <creator>Yanhe Liang</creator>
        
        <creator>Fan Xu</creator>
        
        <subject>EEMD; direct current energy; measurement; width limit; ACROA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In modern power systems, with the increasing application of renewable energy, direct current transmission technology has placed new requirements on energy metering. To address the accuracy problems of traditional electric energy metering under DC conditions, this research builds on ensemble empirical mode decomposition (EEMD) and introduces the artificial chemical reaction optimization algorithm (ACROA) to enhance the global search capability and decomposition accuracy of the original algorithm, while safeguarding the accuracy of metering equipment under extreme conditions through wide limit constraints, ultimately yielding a new optimization model for the accuracy of DC electric energy metering. The highest measurement accuracy of this model could reach 90%, and it performed better in power signal decomposition and accuracy optimization. In particular, under high-frequency interference and complex signal conditions, the measurement error could be reduced to 6.87%, the highest decomposition stability was 94.02%, and the shortest measurement time was 1.12 seconds. The model constructed in this study thus exhibits excellent decomposition accuracy and robustness in complex energy environments, addressing the shortcomings of traditional energy metering methods and providing new ideas for future optimization of DC energy metering.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_100-Accuracy_Optimization_and_Wide_Limit_Constraints_of_DC_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Object Detection Models for Road Sign Detection Under Different Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151299</link>
        <id>10.14569/IJACSA.2024.0151299</id>
        <doi>10.14569/IJACSA.2024.0151299</doi>
        <lastModDate>2024-12-30T11:56:56.1300000+00:00</lastModDate>
        
        <creator>Zainab Fatima</creator>
        
        <creator>M. Hassan Tanveer</creator>
        
        <creator>Hira Mariam</creator>
        
        <creator>Razvan Cristian Voicu</creator>
        
        <creator>Tanazzah Rehman</creator>
        
        <creator>Rizwan Riaz</creator>
        
        <subject>Artificial intelligence; artificial neural networks; image processing; deep learning; road signs detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>While driving, drivers often overlook the traffic signs along the roads, compromising road safety and increasing the risk of accidents. To address this, artificial intelligence (AI) and deep learning techniques are employed, building on advances in Artificial Neural Networks (ANNs) and image processing for robust road sign detection. In this work, we compare the performance of existing state-of-the-art object detection models for road sign detection, including YOLOv8, YOLOv9, RTMDet, Faster-RCNN and RetinaNet, using a large dataset of road sign images. These models are fine-tuned and their hyperparameters optimized with varied settings such as auto-orientation and augmentation during the preprocessing and training phases. The models are then tested, and key performance indicators such as mean average precision (mAP), number of inferences performed per second [frames per second (fps)], and total loss are evaluated. Our study reaffirms earlier findings that YOLOv9 and YOLOv8 outperform other detectors in real-time detection tasks because they are faster in inference than most detectors, though with a compromise in accuracy, as highlighted by their fast fps rates. In contrast, RTMDet is both fast and reliable, making it a highly effective option for detecting various road signs. The insights presented in this research are useful in identifying the suitability and drawbacks of each model, thereby aiding the selection of the best-suited model for real-world applications such as autonomous vehicles or self-driving cars.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_99-Performance_Comparison_of_Object_Detection_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empowering Home Care: Utilizing IoT and Deep Learning for Intelligent Monitoring and Management of Chronic Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151298</link>
        <id>10.14569/IJACSA.2024.0151298</id>
        <doi>10.14569/IJACSA.2024.0151298</doi>
        <lastModDate>2024-12-30T11:56:56.1000000+00:00</lastModDate>
        
        <creator>Nouf Alabdulqader</creator>
        
        <creator>Khaled Riad</creator>
        
        <creator>Badar Almarri</creator>
        
        <subject>IoT; IoMT; intelligent monitoring; chronic diseases; deep learning; home care; physiological data; mHealth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Integrating the Internet of Things (IoT) with Artificial Intelligence (AI) is one of the catalysts for improving traditional healthcare services. This integration has created many opportunities that have shifted healthcare towards enabling home care, a concept that harnesses the advanced potential of technologies such as the IoT and deep learning to intelligently monitor and manage chronic diseases. As the population grows, the limitations of traditional healthcare services increase. Some conditions, such as chronic diseases, require innovative solutions that go beyond the boundaries of traditional healthcare settings due to their impact on individuals’ health; for example, traditional healthcare systems have little capacity to provide high-quality, real-time services. Empowering home care services using deep learning and IoT technology is promising: it enables continuous monitoring through interconnected devices and deep learning, which provides intelligent insights from massive data sets. This review explores the key components of enabling home care, including continuous patient health monitoring, predictive analytics, medication management, and remote patient support by healthcare providers, along with friendly interfaces for end-users. The conjunction of the IoT and deep learning in home care signals a shift toward precision medicine, enhancing patient outcomes and creating a sustainable model for chronic disease management in the era of decentralized healthcare. This review article aims to discuss the following aspects: presenting the latest technologies in home care systems; showing the merit of combining the Internet of Medical Things (IoMT) and deep learning and its role in monitoring patient conditions and managing chronic disease to improve patient health status accurately, in real time, and cost-effectively; and, lastly, debating future studies and providing recommendations for the ongoing development of home care remote monitoring applications.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_98-Empowering_Home_Care_Utilizing_IoT_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SGCN: Structure and Similarity-Driven Graph Convolutional Network for Semi-Supervised Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151297</link>
        <id>10.14569/IJACSA.2024.0151297</id>
        <doi>10.14569/IJACSA.2024.0151297</doi>
        <lastModDate>2024-12-30T11:56:56.0830000+00:00</lastModDate>
        
        <creator>WenQiang Guo</creator>
        
        <creator>YongLong Hu</creator>
        
        <creator>YongYan Hou</creator>
        
        <creator>BoFeng Xue</creator>
        
        <subject>Graph convolutional networks; semi-supervised node classification; Minkowski distance; similarity information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Traditional Graph Convolutional Networks (GCNs) primarily utilize graph structural information for information aggregation, often neglecting node attribute information. This approach can distort node similarity, resulting in ineffective node feature representations and reduced performance in semi-supervised node classification tasks. To address these issues, this study introduces a similarity measure based on the Minkowski distance to better capture the proximity of node features. Building on this, SGCN, a novel graph convolutional network, is proposed, which integrates this similarity information with conventional graph structural information. To validate the effectiveness of SGCN in learning node feature representations, two classification models based on SGCN are introduced: SGCN-GCN and SGCN-SGCN. The performance of these models is evaluated on semi-supervised node classification tasks using three benchmark datasets: Cora, Citeseer, and Pubmed. Experimental results demonstrate that the proposed models significantly outperform the standard GCN model in terms of classification accuracy, highlighting the superiority of SGCN in node feature representation learning. Additionally, the impact of different distance metrics and fusion factors on the models’ classification capabilities is investigated, offering deeper insights into their performance characteristics. The code and datasets are available at https://github.com/YONGLONGHU/SGCN.git.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_97-SGCN_Structure_and_Similarity_Driven_Graph_Convolutional_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Deep Learning for Enhanced Information Security: A Comprehensive Approach to Threat Detection and Mitigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151296</link>
        <id>10.14569/IJACSA.2024.0151296</id>
        <doi>10.14569/IJACSA.2024.0151296</doi>
        <lastModDate>2024-12-30T11:56:56.0670000+00:00</lastModDate>
        
        <creator>KaiJing Wang</creator>
        
        <subject>Artificial intelligence; deep learning; information security; threat detection; cybersecurity; convolutional neural network; recurrent neural network; mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Rapid developments in cyberspace mean that protecting information resources requires enhanced and more dynamic protection models. Traditional approaches don’t adequately address the numerous, sophisticated, varied, and frequently intersecting emergent security challenges, such as malware, phishing, and DDoS attacks. This paper introduces a novel hybrid deep learning framework leveraging convolutional neural networks (CNN) and recurrent neural networks (RNN) for enhanced threat detection and mitigation within a Zero Trust Architecture (ZTA). The model identifies anomalies indicative of potential security threats by analysing large network traffic datasets. To decrease false positive instances, autoencoders are integrated, significantly improving the system’s ability to differentiate between normal and anomalous behaviour. Extensive experiments were conducted using a benchmark cybersecurity dataset, achieving an accuracy rate of 98.75% and a false positive rate of only 1.43%. Compared to traditional approaches, this dynamic deep learning framework is highly adaptable, requiring little oversight to respond effectively to new and evolving threats. From the study results, it can be concluded that deep learning provides a robust and scalable solution for addressing emerging cyber threats and creating a more secure and reliable information security environment. Future work will focus on extending the framework to improve its accuracy and robustness, further advancing cybersecurity capabilities. This research significantly contributes to information security, establishing a promising direction for applying machine learning to enhance cybersecurity.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_96-Leveraging_Deep_Learning_for_Enhanced_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Near-Optimal Traveling Salesman Solution with Deep Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151295</link>
        <id>10.14569/IJACSA.2024.0151295</id>
        <doi>10.14569/IJACSA.2024.0151295</doi>
        <lastModDate>2024-12-30T11:56:56.0530000+00:00</lastModDate>
        
        <creator>Natdanai Kafakthong</creator>
        
        <creator>Krung Sinapiromsaran</creator>
        
        <subject>Traveling salesman problem; deep learning; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The Traveling Salesman Problem (TSP) is a well-known problem in computer science that requires finding the shortest possible route that visits every city exactly once. TSP has broad applications in logistics, routing, and supply chain management, where finding optimal or near-optimal solutions efficiently can lead to substantial cost and time reductions. However, traditional solvers rely on iterative processes that can be computationally expensive and time-consuming for large-scale instances. This research proposes a novel deep learning architecture designed to predict optimal or near-optimal TSP tours directly from the problem’s distance matrix, eliminating the need for extensive iterations to save total solving time. The proposed model leverages the attention mechanism to effectively focus on the most relevant parts of the network, ensuring accurate and efficient tour predictions. It has been tested on the TSPLIB benchmark dataset and observed significant improvements in both solution quality and computational speed compared to traditional solvers such as Gurobi and Genetic Algorithm. This method presents a scalable and efficient solution for large-scale TSP instances, making it a promising approach for real-world traveling salesman applications.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_95-Near_Optimal_Traveling_Salesman_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Collaborative Intrusion Detection for Enhancing Cloud Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151294</link>
        <id>10.14569/IJACSA.2024.0151294</id>
        <doi>10.14569/IJACSA.2024.0151294</doi>
        <lastModDate>2024-12-30T11:56:56.0370000+00:00</lastModDate>
        
        <creator>Widad Elbakri</creator>
        
        <creator>Maheyzah Md. Siraj</creator>
        
        <creator>Bander Ali Saleh Al-rimy</creator>
        
        <creator>Sultan Ahmed Almalki</creator>
        
        <creator>Tami Alghamdi</creator>
        
        <creator>Azan Hamad Alkhorem</creator>
        
        <creator>Frederick T. Sheldon</creator>
        
        <subject>Cloud security; intrusion detection; collaborative model; feature selection; anomaly detection; Pruned Exact Linear Time (PELT); gradient boosting machine; support vector machine; NSL-KDD; DDoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Intrusion Detection Models (IDM) often suffer from poor accuracy, especially when facing coordinated attacks such as Distributed Denial of Service (DDoS). One significant limitation of existing IDM solutions is the lack of an effective technique to determine the optimal period for sharing attack information among nodes in a distributed IDM environment. This article proposes a novel collaborative IDM model that addresses this issue by leveraging the Pruned Exact Linear Time (PELT) change point detection algorithm. The PELT algorithm dynamically determines the appropriate intervals for disseminating attack information to nodes within the collaborative IDM framework. Additionally, to enhance detection accuracy, the proposed model integrates a Gradient Boosting Machine with a Support Vector Machine (GBM-SVM) for collaborative detection of malicious activities. The proposed model was implemented in Apache Spark using the NSL-KDD benchmark intrusion detection dataset. Experimental results demonstrate that this collaborative approach significantly improves detection accuracy and responsiveness to coordinated attacks, providing a robust solution for enhancing cloud security.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_94-Novel_Collaborative_Intrusion_Detection_for_Enhancing_Cloud_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Steganography Security with Generative AI: A Robust Approach Using Content-Adaptive Techniques and FC DenseNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151293</link>
        <id>10.14569/IJACSA.2024.0151293</id>
        <doi>10.14569/IJACSA.2024.0151293</doi>
        <lastModDate>2024-12-30T11:56:56.0200000+00:00</lastModDate>
        
        <creator>Ayyah Abdulhafidh Mahmoud Fadhl</creator>
        
        <creator>Bander Ali Saleh Al-rimy</creator>
        
        <creator>Sultan Ahmed Almalki</creator>
        
        <creator>Tami Alghamdi</creator>
        
        <creator>Azan Hamad Alkhorem</creator>
        
        <creator>Frederick T. Sheldon</creator>
        
        <subject>Content adaptive; distortion function; GAN; FC DenseNet; steganography; steganalysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Content-adaptive image steganography based on minimizing an additive distortion function and Generative Adversarial Networks (GAN) is a promising trend. This approach can quickly generate an embedding probability map and has higher security performance than hand-crafted methods. However, existing works have ignored the semantic information between neighbouring pixels and NaN-loss scenarios, which leads to improper convergence. Such cases degrade the quality of the generated stego images, decreasing the security of the secret payload. The performance of FT GAN, which incorporates feature reuse in its generator architecture, is investigated herein by proposing an FC DenseNet-based generator. This investigation explores the superior semantic segmentation capabilities of FC DenseNet, including feature reuse, implicit deep supervision, and DenseNet’s alleviation of the vanishing gradient problem, toward enhancing visual results, increasing security performance, and accelerating training. The ability to maintain high-quality visual characteristics and robust security even in resource-constrained environments, such as Internet of Things (IoT) contexts, demonstrates the practical benefits of this approach. The qualitative analysis of the visual results regarding the localization and intensity of texture regions exhibited augmented visual quality. Moreover, an improvement in the security attribute of 0.66% has also been demonstrated regarding average detection errors made by the SRM EC Steganalyzer across all target payloads.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_93-Enhancing_Steganography_Security_with_Generative_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Age Estimation of Fish from Otoliths: Synergy Between RANSAC and Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151292</link>
        <id>10.14569/IJACSA.2024.0151292</id>
        <doi>10.14569/IJACSA.2024.0151292</doi>
        <lastModDate>2024-12-30T11:56:55.9900000+00:00</lastModDate>
        
        <creator>Souleymane KONE</creator>
        
        <creator>Abdoulaye SERE</creator>
        
        <creator>Dekpeltakié Augustin METOUALE SOMDA</creator>
        
        <creator>José Arthur OUEDRAOGO</creator>
        
        <subject>Otoliths; deep learning; pattern recognition; RANSAC; automated counting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This study represents a significant advancement in fish ecology by applying deep learning techniques to automate and improve the counting of growth rings in otoliths, which are essential for determining the age and growth patterns of fish. Traditionally, manual methods have been used to analyze these rings, but these approaches are time-consuming, require significant expertise, and are prone to bias. To address these limitations, we propose a novel methodology that combines convolutional neural networks (CNNs) with the RANSAC algorithm, enhancing the accuracy and reliability of ring detection, even in the presence of noise or natural image variations. Unlike manual techniques, which depend on observer expertise and subjective interpretation, our approach improves performance, often surpassing human experts while reducing analysis time. The results demonstrate the potential of deep learning and RANSAC in otolith research, offering powerful tools for sustainable fish population management and transforming research practices in marine ecology by providing faster, more reliable, and accessible analytical methods, setting new standards for more rigorous research.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_92-A_Framework_for_Age_Estimation_of_Fish_from_Otoliths.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Deep Learning Approaches for Fault Detection and Diagnosis in Inverter-Driven PMSM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151291</link>
        <id>10.14569/IJACSA.2024.0151291</id>
        <doi>10.14569/IJACSA.2024.0151291</doi>
        <lastModDate>2024-12-30T11:56:55.9730000+00:00</lastModDate>
        
        <creator>Abdelkabir BACHA</creator>
        
        <creator>Ramzi El IDRISSI</creator>
        
        <creator>Fatima LMAI</creator>
        
        <creator>Hicham EL HASSANI</creator>
        
        <creator>Khalid Janati Idrissi</creator>
        
        <creator>Jamal BENHRA</creator>
        
        <subject>Fault detection and diagnosis; PMSM; deep learning; transformers; physics-informed neural networks; power electronics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This paper presents a comprehensive approach to fault detection and diagnosis (FDD) in inverter-driven Permanent Magnet Synchronous Motor (PMSM) systems through the innovative integration of transformer-based architectures with physics-informed neural networks (PINNs). The methodology addresses critical challenges in power electronics reliability by incorporating domain-specific physical constraints into the learning process, enabling both high accuracy and physically consistent predictions. The proposed system combines advanced sensor fusion techniques with real-time monitoring capabilities, processing multiple input streams including phase currents, temperatures, and voltage measurements. The architecture’s dual-objective optimization approach balances traditional classification metrics with physics-based constraints, ensuring predictions align with fundamental electromagnetic and thermal principles. Experimental validation using a comprehensive dataset of 10,892 samples across nine distinct fault scenarios demonstrates the system’s exceptional performance, achieving 98.57% classification accuracy while maintaining physical consistency scores above 0.98. The model exhibits robust performance across varying operational conditions, including speed variations (97.45-98.57% accuracy range) and load fluctuations (97.91-98.12% accuracy range). Notable achievements include perfect detection rates for certain critical faults, such as high-side short circuits and thermal anomalies, with area under ROC curve (AUC) scores of 1.0. This research establishes new benchmarks in condition monitoring and fault diagnosis for power electronic systems, offering practical implications for predictive maintenance and system reliability enhancement.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_91-Advanced_Deep_Learning_Approaches_for_Fault_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault-Tolerant Control of Nonlinear Delayed Systems Using Lyapunov Approach: Application to a Hydraulic Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151290</link>
        <id>10.14569/IJACSA.2024.0151290</id>
        <doi>10.14569/IJACSA.2024.0151290</doi>
        <lastModDate>2024-12-30T11:56:55.9600000+00:00</lastModDate>
        
        <creator>Tayssir Abdelkrim</creator>
        
        <creator>Adel Tellili</creator>
        
        <creator>Nouceyba Abdelkrim</creator>
        
        <subject>Delayed nonlinear system; actuator faults; delayed hydraulic process; additive fault tolerant control; redesign Lyapunov approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Designing stabilizing controllers for delayed nonlinear systems with control constraints presents a significant challenge. This paper addresses this issue by proposing a fault-tolerant control approach for a specific class of delayed nonlinear systems with actuator faults, based on the Lyapunov redesign principle. Initially, an assumption is introduced to facilitate the control design for the nominal system. Then, a new control law is developed to resolve the difficulty caused by actuator failures. The proposed nonlinear controller demonstrates the ability to compensate for actuator faults. To validate its effectiveness, the method is applied to a hydraulic system.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_90-Fault_Tolerant_Control_of_Nonlinear_Delayed_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recursive Center Embedding: An Extension of MLCE for Semantic Evaluation of Complex Sentences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151289</link>
        <id>10.14569/IJACSA.2024.0151289</id>
        <doi>10.14569/IJACSA.2024.0151289</doi>
        <lastModDate>2024-12-30T11:56:55.9270000+00:00</lastModDate>
        
        <creator>ShivKishan Dubey</creator>
        
        <creator>Narendra Kohli</creator>
        
        <subject>Recursive Center Embedding (RCE); Multi-Level Center Embedding (MLCE); complex sentences; structural similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>A novel method for representing hierarchical sentences, named Multi-Leveled Center Embedding (MLCE), has recently been introduced. The approach utilizes the concept of center-embedded structures to demonstrate the structural complexity of complex sentences through iterative calculations of differences between the original and modified embeddings of its hierarchy. Through an implementation of Recursive Center-Embedding (RCE), we enhance the concept of MLCE by incorporating additional leveled features from the center-word hierarchy. These features are essential for training the Word2Vec model, enabling it to generate sophisticated vectors that perform well in sentence similarity analysis. RCE produces vectors via a hierarchical arrangement of center components, illustrating sentence structure in a way that exceeds traditional word vectors and the BERT-base contextual model. The aim is to assess the similarity performance of the proposed RCE strategy and to examine its contextual ability, obtained through leveled feature vectors that successfully correlate pairs of complex sentences across multiple benchmark datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_89-Recursive_Center_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Label Decision-Making for Aerobics Platform Selection with Enhanced BERT-Residual Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151288</link>
        <id>10.14569/IJACSA.2024.0151288</id>
        <doi>10.14569/IJACSA.2024.0151288</doi>
        <lastModDate>2024-12-30T11:56:55.9130000+00:00</lastModDate>
        
        <creator>Yan Hu</creator>
        
        <subject>Personalized fitness; aerobics recommendations; artificial intelligence; Enhanced BERT-Residual Network (EBRN); hybrid models; user engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In response to the increased demand for individualized workout routines, online aerobics programs are struggling to fulfil the needs of their various user bases with specialized suggestions. Current systems seldom combine multiple data sources to analyze user preferences, reducing customization accuracy and engagement. The Enhanced BERT-Residual Network (EBRN) evaluates multimodal input using residual processing blocks and BERT-based contextual embeddings to bridge textual and structural user characteristics. EBRN’s deep insights may help in understanding user engagement, fitness goals, and enjoyment. An innovative data balancing and feature selection method, Dynamic Equilibrium Sampling and Feature Transformation (DES-FT), improves data preparation and model accuracy. Two novel metrics, Contextual Scheduling Consistency (CSC) and Complexity-Weighted Accuracy (CWA), quantify EBRN’s stability in multi-attribute classification, particularly for complex data. EBRN outperforms standard AI models on a Toronto fitness platform dataset with 98.7% recall, 98.9% precision, and 99.3% accuracy. The research is limited by its geographically restricted dataset and lack of real-time validation. The results show that individualized aerobics recommendations accounting for instructor quality, platform accessibility, and material variety may boost engagement. Additional datasets and real-time adaptability are needed to make the approach more practical. EBRN’s tailored recommendations substantially improve user engagement and enjoyment on digital fitness platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_88-Multi_Label_Decision_Making_for_Aerobics_Platform_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Deep Transfer Learning Framework for Rice Leaf Disease Diagnosis and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151287</link>
        <id>10.14569/IJACSA.2024.0151287</id>
        <doi>10.14569/IJACSA.2024.0151287</doi>
        <lastModDate>2024-12-30T11:56:55.8970000+00:00</lastModDate>
        
        <creator>Md Mokshedur Rahman</creator>
        
        <creator>Zhang Yan</creator>
        
        <creator>Mohammad Tarek Aziz</creator>
        
        <creator>MD Abu Bakar Siddick</creator>
        
        <creator>Tien Truong</creator>
        
        <creator>Md. Maskat Sharif</creator>
        
        <creator>Nippon Datta</creator>
        
        <creator>Tanjim Mahmud</creator>
        
        <creator>Renzon Daniel Cosme Pecho</creator>
        
        <creator>Sha Md Farid</creator>
        
        <subject>Rice leaf; ensemble-learning; explainable AI; disease diagnosis; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Rice plays a vital role in the food supply, but rice leaves are prone to diseases that reduce yield. Detecting rice leaf disease is therefore necessary to improve rice productivity. Many researchers currently apply deep learning methods to this problem, but their results have been insufficiently accurate. In this paper, we construct transfer learning models to diagnose and categorize illnesses affecting rice leaves. To further improve model performance, we construct three ensemble learning models that combine various architectures. To bring transparency to the disease diagnostic process, we explore the explainable AI (XAI) problem of the visual object detector and integrate Gradient-weighted Class Activation Mapping (Grad-CAM) into the three ensemble models to generate explanations for individual object detections for assessing performance. The ensemble learning results indicate that merging different architectures can be effective in disease diagnosis, as evidenced by a best accuracy of 99.78%, which is better than other state-of-the-art works. This research demonstrates that the integration of deep learning and transfer learning models yields improved prediction interpretability and classification accuracy for rice leaf disease. We thus established a dependable method of deep, transfer, and ensemble learning for the diagnosis of diseases affecting rice leaves.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_87-Explainable_Deep_Transfer_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach of Classification of Monkeypox Disease: Integrating Transfer Learning with ViT and Explainable AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151286</link>
        <id>10.14569/IJACSA.2024.0151286</id>
        <doi>10.14569/IJACSA.2024.0151286</doi>
        <lastModDate>2024-12-30T11:56:55.8800000+00:00</lastModDate>
        
        <creator>MD Abu Bakar Siddick</creator>
        
        <creator>Zhang Yan</creator>
        
        <creator>Mohammad Tarek Aziz</creator>
        
        <creator>Md Mokshedur Rahman</creator>
        
        <creator>Tanjim Mahmud</creator>
        
        <creator>Sha Md Farid</creator>
        
        <creator>Valisher Sapayev Odilbek Uglu</creator>
        
        <creator>Matchanova Barno Irkinovna</creator>
        
        <creator>Atayev Shokir Kuranbaevich</creator>
        
        <creator>Ulugbek Hajiev</creator>
        
        <subject>Monkeypox; vision transformer; hybrid model; transfer learning; explainable artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Human monkeypox is a persistent global health challenge, ranking among the most common illnesses worldwide. Early and accurate diagnosis is critical to developing effective treatments. This study proposes a comprehensive approach to monkeypox diagnosis using deep learning algorithms, including Vision Transformer, MobileNetV2, EfficientNetV2, ResNet-50, and a hybrid model. The hybrid model combines ResNet-50, MobileNetV2, and EfficientNetV2 to reduce error rates and improve classification accuracy. The models were trained, validated, and tested on a specially curated monkeypox dataset. EfficientNetV2 demonstrated the highest training accuracy (99.94%), validation accuracy (97.80%), and testing accuracy (97.67%). ResNet-50 achieved 99.87% training accuracy, 99.85% validation accuracy, and 97.18% testing accuracy. MobileNetV2 reached 95.47% training accuracy, with validation and testing accuracies of 79.51% and 78.18%, respectively. Designed to mitigate overfitting, the Vision Transformer achieved 100% training accuracy, 87.51% validation accuracy, and 99.41% testing accuracy. Our hybrid model yielded 99.33% training accuracy and 99.09% testing accuracy. The Vision Transformer emerged as the most promising model due to its robust performance and high accuracy, followed closely by the hybrid model. Explainable AI (XAI) techniques, such as Grad-CAM, were applied to enhance the interpretability of predictions, providing visual insights into the classification process. The results underscore the potential of Vision Transformer and hybrid deep learning models for accurate and interpretable monkeypox diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_86-Hybrid_Approach_of_Classification_of_Monkeypox_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TLDViT: A Vision Transformer Model for Tomato Leaf Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151285</link>
        <id>10.14569/IJACSA.2024.0151285</id>
        <doi>10.14569/IJACSA.2024.0151285</doi>
        <lastModDate>2024-12-30T11:56:55.8670000+00:00</lastModDate>
        
        <creator>Sami Aziz Alshammari</creator>
        
        <subject>Tomato Leaf Disease; Vision Transformer (ViT); crop health monitoring; plant disease classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Accurate and efficient diagnostic methods are essential for crop health monitoring due to the substantial impact of tomato leaf diseases on crop yield and quality. Traditional machine learning models, such as convolutional neural networks (CNNs), have shown promise in plant disease classification; however, they often require extensive data preprocessing and struggle with complex variations in leaf appearance. This study introduces TLDViT (Tomato Leaf Disease Vision Transformer), a Vision Transformer model specifically designed for the classification of tomato leaf diseases. TLDViT reduces the need for preprocessing by learning disease-specific features directly from raw images, leveraging Vision Transformers’ ability to capture long-range dependencies within images. We evaluated TLDViT on the Plant Village Dataset, which includes healthy and diseased samples across multiple classes. For comparative analysis, two Vision Transformer models, ViT-r50-l32 and ViT-l16-fe, were tested. Among these, ViT-r50-l32 achieved the highest performance, surpassing ViT-l16-fe, with an accuracy of 98%. These findings highlight TLDViT’s potential as an effective tool for crop health monitoring and automated plant disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_85-TLDViT_A_Vision_Transformer_Model_for_Tomato_Leaf_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Context-Aware Anomaly Detection in Vehicular Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151284</link>
        <id>10.14569/IJACSA.2024.0151284</id>
        <doi>10.14569/IJACSA.2024.0151284</doi>
        <lastModDate>2024-12-30T11:56:55.8500000+00:00</lastModDate>
        
        <creator>Mohammed Abdullatif H. Aljaafari</creator>
        
        <subject>Fog computing; load balancing; task offloading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Transportation systems are moving towards autonomous and intelligent vehicles due to advancements in embedded systems, control algorithms, and wireless communications. By enabling connectivity among vehicles, a vehicular network can be developed which offers safe and efficient driving applications. Security is a major challenge for vehicular networks as application reliability depends on it. In this paper, we highlight the security challenges faced by a vehicular network especially related to jamming and data integrity attacks. Such attacks cause major disruptions in the wireless connectivity of users with the centralized servers. We propose a context-aware anomaly detection technique for vehicular networks that considers factors such as signal strength, mobility, and data pattern to find abnormal behaviors and malicious users. We further discuss how an intelligent learning system can be developed using efficient anomaly detection. We implement a vehicular network scenario with malicious users and provide simulation results to highlight the performance gain of the proposed technique. We also highlight several appropriate future opportunities related to the security of vehicular network applications.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_84-On_the_Context_Aware_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enriching Sequential Recommendations with Contextual Auxiliary Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151283</link>
        <id>10.14569/IJACSA.2024.0151283</id>
        <doi>10.14569/IJACSA.2024.0151283</doi>
        <lastModDate>2024-12-30T11:56:55.8330000+00:00</lastModDate>
        
        <creator>Adel Alkhalil</creator>
        
        <subject>Recommender system; sequential recommendation; auxiliary information; sentence transformer; sentence embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Recommender Systems (RS) play a key role in offering suggestions and predicting items for users on e-commerce and social media platforms. Sequential recommendation systems (SRS) leverage the user’s previous interaction history to forecast the next user-item interaction. Although deep learning methods like CNNs and RNNs have enhanced recommendation quality, current models still face challenges in accurately predicting future items based on a user’s past behavior. Transformer-based SRS have shown a significant performance boost in generating accurate recommendations, but they typically rely only on item identifiers, which are not sufficient to generate meaningful and relevant results. These models can be improved by incorporating descriptive features of the items, such as textual descriptions. This paper proposes a transformer-based SRS, ConSRec, Contextual Sequential Recommendations, that incorporates auxiliary information of the items, such as textual features, along with item identifiers to model user behavior sequences for producing more accurate recommendations. ConSRec builds upon the BERT4Rec model by integrating auxiliary information through sentence representations derived from the textual features of items. Extensive experiments conducted on several benchmark datasets demonstrate substantial improvements compared to other advanced models.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_83-Enriching_Sequential_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Surface Roughness Prediction Based on CNN-BiTCN-Attention in End Milling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151282</link>
        <id>10.14569/IJACSA.2024.0151282</id>
        <doi>10.14569/IJACSA.2024.0151282</doi>
        <lastModDate>2024-12-30T11:56:55.8170000+00:00</lastModDate>
        
        <creator>Guanhua Xiao</creator>
        
        <creator>Hanqian Tu</creator>
        
        <creator>Yunzhe Xu</creator>
        
        <creator>Jiahao Shao</creator>
        
        <creator>Dongming Xiang</creator>
        
        <subject>Surface roughness prediction; end milling; CNN-BiTCN-Attention; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Surface roughness is a pivotal indicator of surface quality for machined components. It directly influences the performance and lifespan of manufactured products. Precise prediction of surface roughness is instrumental in refining production processes and curtailing costs. However, even under identical processing parameters, the final surface roughness can differ, which challenges the effectiveness of traditional prediction models based solely on processing parameters. Current prevalent approaches for surface roughness prediction rely on handcrafted features, which require expert knowledge and considerable time investment. To address these challenges, we comprehensively consider the advantages of various deep learning methods and propose a novel end-to-end architecture. It synergistically integrates convolutional neural networks (CNN), bidirectional temporal convolutional networks (BiTCN), and an attention mechanism, termed the CNN-BiTCN-Attention (CBTA) architecture. This architecture leverages CNN for automatic spatial feature extraction from signals, BiTCN to capture temporal dependencies, and the attention mechanism to focus on important features related to surface roughness. Experiments are conducted with popular deep learning methods on the public ACF dataset, which includes vibration, current, and force signals from the end milling process. The results demonstrate that the CBTA model outperforms other compared models. It achieves exceptional prediction performance with a mean absolute percentage error as low as 0.79% and an R2 as high as 99.81%. This validates the effectiveness and superiority of CBTA in end milling surface roughness prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_82-Surface_Roughness_Prediction_Based_on_CNN_BiTCN_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Optimized CLAHE for Contrast and Color Enhancement in Suzhou Garden Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151281</link>
        <id>10.14569/IJACSA.2024.0151281</id>
        <doi>10.14569/IJACSA.2024.0151281</doi>
        <lastModDate>2024-12-30T11:56:55.8030000+00:00</lastModDate>
        
        <creator>Chuanyuan Li</creator>
        
        <creator>Ziyun Jiao</creator>
        
        <subject>Deep Learning-Optimized CLAHE; image contrast enhancement; color extraction; Suzhou gardens; VGG16; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Suzhou gardens are renowned for their unique color palettes and rich cultural significance. This study introduces a deep learning-optimized Contrast Limited Adaptive Histogram Equalization (CLAHE) method to enhance image contrast and improve color extraction accuracy in Suzhou garden images. An initial collection of 18,502 images was refined to 11,526 high-quality images from a single dataset. A pre-trained VGG16 convolutional neural network was used to extract image features, which were then employed to dynamically optimize the CLAHE parameters, thereby preserving the original color tones while enhancing contrast. The optimized CLAHE achieved significant improvements in the Structural Similarity Index (SSIM) by 24.69 percent and in the Peak Signal-to-Noise Ratio (PSNR) by 24.36 percent, and a reduction in Loss of Edge (LOE) by 36.62 percent, compared to the standard CLAHE. Additionally, enhanced structural detail and color complexity were observed. High-Resolution Network (HRNet) was utilized for semantic segmentation, enabling precise color feature extraction. K-means clustering was used to identify key color characteristics and complementary relationships among the primary and secondary colors in Suzhou gardens. A mathematical model capturing these relationships was developed to form the basis of a color palette generator, which can be applied to digital archiving, cultural preservation, aesthetic education, and virtual reality.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_81-Deep_Learning_Optimized_CLAHE_for_Contrast_and_Color_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Interface Design of SEVIMA EdLink Platform for Facilitating Tri Kaya Parisudha-Based Asynchronous Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151280</link>
        <id>10.14569/IJACSA.2024.0151280</id>
        <doi>10.14569/IJACSA.2024.0151280</doi>
        <lastModDate>2024-12-30T11:56:55.7700000+00:00</lastModDate>
        
        <creator>Agus Adiarta</creator>
        
        <creator>I Made Sugiarta</creator>
        
        <creator>Komang Krisna Heryanda</creator>
        
        <creator>I Komang Gede Sukawijana</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Design user interface; SEVIMA EdLink; asynchronous; Tri Kaya Parisudha; independent learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This research aims to show the user interface design of the SEVIMA EdLink platform to facilitate Tri Kaya Parisudha-based asynchronous learning in the nuances of independent learning. This research used the Research and Development method with the Borg &amp; Gall development model, which focused on several stages, including research and field data collection, planning, design development, initial trial, and revision of the initial trial results. The respondents involved in the initial trial of the user interface design were two education experts, two informatics experts, 40 teachers of Tourism Vocational Schools in Bali, and 60 students of Tourism Vocational Schools in Bali. The data collection tool for the initial trial of the user interface design was a questionnaire consisting of ten questions. The analysis was conducted by comparing the effectiveness percentage of the user interface design with the effectiveness categorization standard referring to the five scales. The results showed that the user interface design of the SEVIMA EdLink platform was effective in facilitating Tri Kaya Parisudha-based asynchronous learning. The impact of this research on stakeholders in the field of education is new information about an online learning platform called SEVIMA EdLink, which is integrated with an asynchronous learning strategy, the independent learning policy, and Balinese local wisdom.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_80-User_Interface_Design_of_SEVIMA_EdLink_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Based Financial Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151279</link>
        <id>10.14569/IJACSA.2024.0151279</id>
        <doi>10.14569/IJACSA.2024.0151279</doi>
        <lastModDate>2024-12-30T11:56:55.7400000+00:00</lastModDate>
        
        <creator>Tedan Lu</creator>
        
        <subject>Blockchain technology; financial control system; resource allocation; information exchange; consensus mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>To address the problems of data security and low information-transmission efficiency in traditional financial control systems, this paper discusses in depth the application of blockchain technology in financial control systems. To optimize the performance of the traditional financial control system, blockchain technology is introduced into it, and the structure and function of the financial control system are analyzed. By constructing a blockchain-based financial data collection, information exchange, and security consensus mechanism, a more efficient financial control system is designed, which can significantly improve cost efficiency, shorten the audit cycle, and enhance data security. In the model, resource allocation within the financial control system is optimized, information exchange is more efficient, and a consensus mechanism is established. The experimental results prove that the model simplifies data entry and storage, reduces the workload of financial staff, and improves transparency. The study bridges the gap between blockchain and traditional financial frameworks and advances the development of modern financial control systems.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_79-Blockchain_Based_Financial_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment for Geological Exploration Projects Based on the Fuzzy-DEMATEL Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151278</link>
        <id>10.14569/IJACSA.2024.0151278</id>
        <doi>10.14569/IJACSA.2024.0151278</doi>
        <lastModDate>2024-12-30T11:56:55.7230000+00:00</lastModDate>
        
        <creator>Zhenhua Yang</creator>
        
        <creator>Hua Shi</creator>
        
        <creator>Ning Tian</creator>
        
        <creator>Juan Bai</creator>
        
        <creator>Xiaoyu Han</creator>
        
        <subject>Geological exploration project; analytic hierarchy process; DEMATEL; fuzzy theory; risk assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This paper briefly introduces the analytic hierarchy process (AHP) method and uses the fuzzy decision-making trial and evaluation laboratory (DEMATEL) method to adjust the index weights within it. The geological exploration project of the Qingdao undersea tunnel project in Shandong Province was selected as the subject of the case study. Firstly, the fuzzy-DEMATEL method was used to analyze the degree of influence between different risk factors in the project and the types of risk factors. Then, the AHP method divided the risk factors and calculated their weights. Finally, the influence parameters calculated by the fuzzy-DEMATEL method were employed to adjust the weights of the indicators in the AHP method. The fuzzy-DEMATEL analysis identified the driving, conclusion, and transitional risk factors. The analytic results of the AHP method showed that the construction supervision unit’s qualification risk, management mechanism risk, and awareness risk had the greatest impact on the risk of the project, and the overall risk level of the project was 2.1 points.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_78-Risk_Assessment_for_Geological_Exploration_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Semantic Feature-Based Cross-Domain PII Detection, De-Identification, and Re-Identification Model Using Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151277</link>
        <id>10.14569/IJACSA.2024.0151277</id>
        <doi>10.14569/IJACSA.2024.0151277</doi>
        <lastModDate>2024-12-30T11:56:55.7100000+00:00</lastModDate>
        
        <creator>Poornima Kulkarni</creator>
        
        <creator>Cauvery N K</creator>
        
        <creator>Hemavathy R</creator>
        
        <subject>PII Detection; machine learning; natural language processing; artificial intelligence; de-identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Digital data, being core to any system, requires communication across peers and human-machine interfaces; however, ensuring data security and privacy remains a challenge for industry, especially under the threat of man-in-the-middle attacks, intruders, and even ill-intended unauthorized access at warehouses. Almost all digital communication practices embody personally identifiable information (PII) such as an individual&#39;s address, contact details, and identification credentials. Unauthorized or ill-intended access to these PII attributes can cause major losses to the individual; it is therefore essential to identify and de-identify such PII elements across digital platforms to preserve privacy. Unfortunately, the diversity of PII attributes across disciplines makes it challenging for state-of-the-art methods to perform PII detection using a predefined dictionary. A model developed for a specific PII type is not universally viable across disciplines. Moreover, applying multiple dictionaries for the different disciplines can make a solution more exhaustive. To alleviate these challenges, this paper proposes a robust ensemble-of-ensembles learning-assisted, semantic-feature-driven cross-discipline PII detection and de-identification model (EESD-PII). To this end, a large set of text queries encompassing diverse PII attributes, including personal credentials, healthcare data, and finance attributes, was considered for training-based PII detection and classification. The input texts were processed through different preprocessing tasks, including stop-word removal, punctuation removal, website-link removal, lower-case conversion, lemmatization, and tokenization. The tokenized text was processed for Word2Vec-driven continuous bag-of-words (CBOW) embedding, which not only provided a latent feature space for analytics but also enabled de-identification to preserve security aspects.
To address class-imbalance problems, synthetic minority over-sampling techniques like SMOTE, SMOTE-BL, SMOTE-ENN were applied. Subsequently, the resampled features were processed for the feature selection by using Wilcoxon Rank Sum Test (WRST) method that in sync with 95% confidence interval retained the most significant features. The selected features were processed for Min-Max Normalization to alleviate over-fitting and convergence problems, while the normalized feature vector was classified by using ensemble of ensemble learning model encompassing Bagging, Boosting, AdaBoost, Random Forest and Extra Tree Classifier as base classifier. The proposed model performed a consensus-based majority voting ensemble to annotate each text-query as PII or Non-PII data. The positively annotated query can later be processed for dictionary-based PII attribute masking to achieve de-identification. Though, the use of semantic embedding serves the purpose towards NLP-based PII detection, de identification and re-identification tasks. The simulation results reveal that the proposed EESD-PII model achieves PII annotation accuracy of 99.77%, precision 99.81%, recall 99.63% and F-Measure of 99.71%.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_77-An_Advanced_Semantic_Feature_Based_Cross_Domain_PII_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach Based on Information Relevance Perspective and ANN for Predicting the Helpfulness of Online Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151276</link>
        <id>10.14569/IJACSA.2024.0151276</id>
        <doi>10.14569/IJACSA.2024.0151276</doi>
        <lastModDate>2024-12-30T11:56:55.6930000+00:00</lastModDate>
        
        <creator>Nur Syadhila Bt Che Lah</creator>
        
        <creator>Khursiah Zainal-Mokhtar</creator>
        
        <subject>Review helpfulness; online reviews; information relevance; review novelty; review readability; review specificity; Artificial Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This study presents a novel approach to predicting the helpfulness of online reviews using Artificial Neural Networks (ANNs), focusing on information relevance. As online reviews significantly influence consumer decision-making, it is critical to understand and identify the reviews that provide the most value. This research identifies four key textual features, namely content novelty, content specificity, content readability, and content reliability, that contribute to perceived helpfulness and incorporates them as primary inputs for the ANN model. Datasets of Amazon reviews are analyzed, and various preprocessing steps are employed to ensure data quality. Reviews are classified as helpful or unhelpful based on helpful-vote thresholds, with experiments conducted across multiple thresholds to determine the optimal threshold value. Performance was evaluated using accuracy, precision, recall, and F1 scores, with the best-performing classifier achieving 74.34% accuracy at a helpful-vote threshold of 12 votes. These results highlight the potential of information relevance-based criteria to enhance the accuracy of online review helpfulness prediction models.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_76-An_Novel_Approach_Based_on_Information_Relevance_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unlocking the Potential of Cloud Computing in Healthcare: A Comprehensive SWOT Analysis of Stakeholder Readiness and Implementation Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151275</link>
        <id>10.14569/IJACSA.2024.0151275</id>
        <doi>10.14569/IJACSA.2024.0151275</doi>
        <lastModDate>2024-12-30T11:56:55.6630000+00:00</lastModDate>
        
        <creator>Alaa Abas Mohamed</creator>
        
        <subject>Cloud computing; SWOT; strength; weakness; opportunities; threat</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The adoption of cloud computing in healthcare holds the potential to revolutionize healthcare delivery, particularly in developing regions. Despite its promise of scalability, cost-effectiveness, and improved data management, challenges such as digital literacy gaps, infrastructure deficiencies, and security concerns hinder its implementation. This study evaluates the readiness for adopting cloud computing in Sudan&#39;s healthcare sector through a comprehensive SWOT analysis. Findings reveal that 93.75% of patients are willing to learn electronic health systems (EHS), yet 53.12% prefer paper records, indicating trust issues. Among medical staff, 34.38% report poor digital literacy, and 46.88% cite limited access to technology as a barrier. Ministry of Health employees highlight poor infrastructure (33.33%) and limited resources (30%) as significant obstacles. By identifying strengths, weaknesses, opportunities, and threats, this research provides actionable recommendations for overcoming these barriers. The findings contribute to the ongoing discourse on digital health transformation, offering insights into fostering trust in cloud technologies for enhanced healthcare outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_75-Unlocking_the_Potential_of_Cloud_Computing_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Coronary Artery Stenosis Localization: Comparative Insights from Electrocardiograms (ECG), Photoplethysmograph (PPG) and Their Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151274</link>
        <id>10.14569/IJACSA.2024.0151274</id>
        <doi>10.14569/IJACSA.2024.0151274</doi>
        <lastModDate>2024-12-30T11:56:55.6470000+00:00</lastModDate>
        
        <creator>Mohd Syazwan Md Yid</creator>
        
        <creator>Rosmina Jaafar</creator>
        
        <creator>Noor Hasmiza Harun</creator>
        
        <creator>Mohd Zubir Suboh</creator>
        
        <creator>Mohd Shawal Faizal Mohamad</creator>
        
        <subject>Coronary artery stenosis; deep learning; ECG; PPG; ECG-PPG fusion; CNN; LSTM; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Coronary artery stenosis (CAS) is a critical cardiovascular condition that demands accurate localization for effective treatment and improved patient outcomes. This study addresses the challenge of enhancing CAS localization through a comparative analysis of deep learning techniques applied to electrocardiogram (ECG), photoplethysmograph (PPG), and their combined signals. The primary research question centers on whether the fusion of ECG and PPG signals, analyzed through advanced deep learning architectures, can surpass the accuracy of individual modalities in localizing stenosis in the left anterior descending (LAD), left circumflex (LCX), and right coronary arteries (RCA). Using a dataset of 7,165 recordings from CAS patients, three models—CNN, CNN-LSTM, and CNN-LSTM-ATTN—were evaluated. The CNN-LSTM-ATTN model achieved the highest localization accuracy (98.12%) and perfect AUC scores (1.00) across all arteries, demonstrating the efficacy of multimodal signal integration and attention mechanisms. This research highlights the potential of combining ECG and PPG signals for non-invasive CAS diagnostics, offering a significant advancement in real-time clinical applications. However, limitations include the relatively small dataset size and the focus on single-lead ECG and PPG signals, which may affect the generalizability to broader populations. Future studies should explore larger datasets and multi-lead signal integration to further validate the findings.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_74-Deep_Learning_for_Coronary_Artery_Stenosis_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Web Images by Integrating Machine Learning and Associative Reasoning Ideas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151273</link>
        <id>10.14569/IJACSA.2024.0151273</id>
        <doi>10.14569/IJACSA.2024.0151273</doi>
        <lastModDate>2024-12-30T11:56:55.6300000+00:00</lastModDate>
        
        <creator>Yuan Fang</creator>
        
        <creator>Yi Wang</creator>
        
        <subject>Sentiment analysis; multi-excitation fusion; image emotion prediction; associative reasoning; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>To achieve automatic recognition and understanding of image sentiment, the study proposes an image sentiment prediction network based on multi-excitation fusion. This network simultaneously handles multiple excitations, such as color, object, and face, and is designed to predict the sentiment associated with an image. A visual emotion inference network based on scene-object association is also proposed, using the association reasoning method to describe the emotional associations between different objects. The multi-excitation fusion image sentiment prediction network achieved its highest accuracy of 75.6% when the loss weight was 1.0, and its highest accuracy of 76.5% when the number of object frames was 10. The average accuracy of the visual sentiment inference network based on scene-object association was 91.8%, an improvement of about 3.7% over the image sentiment association analysis model. The outcomes revealed that the multi-excitation fusion method performed better in the image emotion prediction task. The visual emotion inference network based on scene-object association can recognize objects and scenes in images more accurately, and both the scene-based attention mechanism and the masking operation can improve network performance. This research provides a more effective approach to the field of image sentiment analysis and helps to improve the computer&#39;s ability to recognize and understand emotional expressions.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_73-Sentiment_Analysis_of_Web_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Monitoring and Analysis Through Video Surveillance and Alert Generation for Prompt and Immediate Response</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151272</link>
        <id>10.14569/IJACSA.2024.0151272</id>
        <doi>10.14569/IJACSA.2024.0151272</doi>
        <lastModDate>2024-12-30T11:56:55.6170000+00:00</lastModDate>
        
        <creator>Akshat Kumar</creator>
        
        <creator>Renuka Agrawal</creator>
        
        <creator>Akshra Singh</creator>
        
        <creator>Aaftab Noorani</creator>
        
        <creator>Yashika Jaiswal</creator>
        
        <creator>Preeti Hemnani</creator>
        
        <creator>Safa Hamdare</creator>
        
        <subject>Rapid response; anomaly detection; MobileNetV2; VGG16; BiLSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The efficacy of Closed-Circuit Television (CCTV) systems in residential areas is often limited by the lack of real-time alerts and rapid response mechanisms. Enabling immediate notifications upon the identification of irregularities or aggressive conduct can greatly enhance the possibility of averting serious incidents, or at the very least, significantly mitigate their impact. The integration of an automated system for anomaly detection and monitoring, augmented by a real-time alert mechanism, is now a critical necessity. The proposed work presents an advanced methodology for real-time detection of accidents and violent activities, incorporating a sophisticated alarm system that not only triggers instant alerts but also captures and stores video frames for detailed post-event analysis. MobileNetV2 is utilized for spatial analysis due to its computational efficiency compared to other Convolutional Neural Network (CNN) architectures, while Visual Geometry Group 16 (VGG16) enhances model accuracy, especially on large-scale datasets. The integration of Bi-directional Long Short-Term Memory (BiLSTM) strengthens temporal continuity, significantly reducing false alarms. The proposed system aims to improve both safety and security by enabling authorities to respond to incidents in a timely manner. Combining rapid computation with high detection accuracy, the proposed model is ideally suited for real-time deployment across both urban and residential settings.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_72-Real_Time_Monitoring_and_Analysis_Through_Video_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for Stability Assessment of Chest X-Ray Segmentation Using Generative Adversarial Network Model with Wavelet Transforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151271</link>
        <id>10.14569/IJACSA.2024.0151271</id>
        <doi>10.14569/IJACSA.2024.0151271</doi>
        <lastModDate>2024-12-30T11:56:55.5830000+00:00</lastModDate>
        
        <creator>Omar El Mansouri</creator>
        
        <creator>Mohamed Ouriha</creator>
        
        <creator>Wadiai Younes</creator>
        
        <creator>Yousef El Mourabit</creator>
        
        <creator>Youssef El Habouz</creator>
        
        <creator>Boujemaa Nassiri</creator>
        
        <subject>Deep learning; X-rays; segmentation; medical imaging; Generative Adversarial Networks; wavelet transforms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Accurate segmentation of chest X-rays is essential for effective medical image analysis, but challenges arise due to inherent stability issues caused by factors such as poor image quality, anatomical variations, and disease-related abnormalities. While Generative Adversarial Networks (GANs) offer automated segmentation, their stability remains a significant limitation. In this paper, we introduce a novel approach to address segmentation stability by integrating GANs with wavelet transforms. Our proposed model features a two-network architecture (generator and discriminator): the generator is trained to produce a mask from a given image, and the discriminator differentiates between the original mask and the generated mask. The model was implemented and evaluated on two X-ray datasets, utilizing both original images and perturbed images, the latter generated by adding Gaussian noise. A comparative analysis with traditional GANs reveals that our proposed model, which combines GANs with wavelet transforms, outperforms them in terms of stability, accuracy, and efficiency. The results highlight the efficacy of our model in overcoming stability limitations in chest X-ray segmentation, potentially advancing subsequent tasks in medical image analysis. This approach provides a valuable tool for clinicians and researchers in the field of medical image analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_71-Intelligent_System_for_Stability_Assessment_of_Chest_X_Ray.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Cervical Cancer Diagnosis with Correlation-Based Feature Selection: A Comparative Study of Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151270</link>
        <id>10.14569/IJACSA.2024.0151270</id>
        <doi>10.14569/IJACSA.2024.0151270</doi>
        <lastModDate>2024-12-30T11:56:55.5670000+00:00</lastModDate>
        
        <creator>Wiwit Supriyanti</creator>
        
        <creator>Sujalwo</creator>
        
        <creator>Dimas Aryo Anggoro</creator>
        
        <creator>Maryam</creator>
        
        <creator>Nova Tri Romadloni</creator>
        
        <subject>Cervical cancer; feature selection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Cervical cancer remains a significant global health issue, particularly in developing countries where it is a leading cause of mortality among women. The development of machine learning-based approaches has become essential for early detection and diagnosis of cervical cancer. This research explores the optimization of classification algorithms through Correlation-Based Feature Selection (CFS) for early cervical cancer detection. A dataset consisting of 198 samples and 22 attributes from medical records was processed to reduce dimensionality. CFS was used to select the most relevant features, which were then applied to three classification algorithms: Na&#239;ve Bayes, Decision Tree, and k-Nearest Neighbor (k-NN). The results showed that CFS significantly improved classification accuracy, with Decision Tree achieving the highest accuracy of 85.89%, followed by Na&#239;ve Bayes with 83.34%, and k-NN with 82.32%. These findings demonstrate the importance of feature selection in enhancing classification performance and its potential application in the development of cervical cancer detection tools.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_70-Optimizing_Cervical_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Security Risk Assessment Framework for Cloud Customer and Service Provider</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151269</link>
        <id>10.14569/IJACSA.2024.0151269</id>
        <doi>10.14569/IJACSA.2024.0151269</doi>
        <lastModDate>2024-12-30T11:56:55.5370000+00:00</lastModDate>
        
        <creator>N. Sujata Kumari</creator>
        
        <creator>Naresh Vurukonda</creator>
        
        <subject>Deep recurrent neural network; krill herd optimization; artificial bee colony optimization; elliptic curve cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The rapid development of cloud computing demands an effective cybersecurity framework for protecting the sensitive information of the infrastructure. Currently, many organizations depend on cloud services for their operations, increasing cybersecurity risk. Hence, an intelligent risk assessment mechanism is significant for detecting and mitigating the cybersecurity threats associated with cloud environments. Although various risk assessment methods were developed in the past, they lack the efficiency to handle the dynamic and evolving nature of threats. In this study, we propose an innovative framework for cybersecurity risk assessment for cloud customers and service providers. Initially, the historical cloud customer and service provider database was collected and fed into the system. The collected dataset contains historical security risks, network traffic, system behavior, etc., and the accumulated dataset was pre-processed to improve its quality. The data pre-processing steps not only ensure quality but also transform the dataset into an appropriate format for subsequent analysis. Further, a risk assessment module was created using a combination of a deep recurrent neural network and the krill herd optimization (DRNN-KHO) algorithm. In this module, the DRNN was trained on the pre-processed database to learn the patterns and interconnections between normal and abnormal network traffic. Subsequently, the KHO refines the DRNN parameters during the training phase, increasing the efficiency of risk assessment. This integrated module ensures the adaptability of the system, leading to accurate prediction of evolving security threats. Then, a data exchange protocol was created for secure transmission between cloud customers and service providers, designed by integrating artificial bee colony optimization with elliptic curve cryptography (ABC-ECC). Thus, this collaborative framework ensures security for cloud customers and service providers.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_69-Cyber_Security_Risk_Assessment_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Decision Tree, Random Forest, and XGBoost Algorithms for Predicting Client Churn in the Telecommunications Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151268</link>
        <id>10.14569/IJACSA.2024.0151268</id>
        <doi>10.14569/IJACSA.2024.0151268</doi>
        <lastModDate>2024-12-30T11:56:55.5070000+00:00</lastModDate>
        
        <creator>Mohamed Ezzeldin Saleh</creator>
        
        <creator>Nadia Abd-Alsabour</creator>
        
        <subject>Churn prediction; decision trees; grid search; random forest; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Traditional machine learning models, especially decision trees, face great challenges when applied to high-dimensional and imbalanced telecommunication datasets. The research presented in this paper aims to enhance the performance of traditional Decision Tree (DT), Decision Tree with grid search (DT+), random forest (RF), and XGBoost (XGB) models. This is accomplished by augmenting them with robust preprocessing techniques, as well as optimizing them through grid search. We then evaluated how well the enhanced models can accurately predict customer churn and compared their performance metrics in detail. We utilized a dataset derived from the benchmark Cell2Cell dataset by applying combined preprocessing methods including KNN imputation, normalization, and resampling with SMOTE Tomek to address class imbalance. The findings reveal that XGBoost outperformed all other models with an accuracy of 0.82, demonstrating strong precision, recall, and F1 scores. RF also delivered robust results, achieving an accuracy of 0.82, benefiting from its ensemble nature to improve generalization and reduce overfitting.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_68-Improved_Decision_Tree_Random_Forest_and_XGBoost_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YOLO-Driven Lightweight Mobile Real-Time Pest Detection and Web-Based Monitoring for Sustainable Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151267</link>
        <id>10.14569/IJACSA.2024.0151267</id>
        <doi>10.14569/IJACSA.2024.0151267</doi>
        <lastModDate>2024-12-30T11:56:55.4900000+00:00</lastModDate>
        
        <creator>Wong Min On</creator>
        
        <creator>Nirase Fathima Abubacker</creator>
        
        <subject>Pest detection; YOLO; deep learning; real-time monitoring; smartphone application; web-based platform; object detection; pest management; pesticide reduction; sustainable agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Nowadays, pest infestations cause significant reductions in agricultural productivity all over the world. To control pests, farmers often apply excessive volumes of pesticides due to the difficulty of manually detecting pests at an early stage. This overuse of pesticides has led to environmental pollution and health risks. To address these challenges, many novel systems have been developed to identify pests early, allowing farmers to be alerted about the exact location where pests are detected. However, these systems are constrained by their lack of real-time detection capabilities, limited mobile integration, the ability to detect only a small number of pest classes, and the absence of a web-based monitoring system. This paper introduces a pest detection system that leverages the lightweight YOLO deep learning framework and is integrated with a web-based monitoring platform. The YOLO object detection architectures, including YOLOv8n, YOLOv9t, and YOLOv10-N, were studied and optimized for pest detection on smartphones. The models were trained and validated on merged publicly available datasets containing 29 pest classes. Among them, YOLOv9t achieves top performance with a mAP@0.5 value of 89.8%, precision of 87.4%, recall of 84.4%, and an inference time of 250.6 ms. The web-based monitoring system enables dynamic real-time monitoring by providing farmers with instant updates and actionable insights for effective and sustainable pest management. From there, farmers can take necessary actions immediately to mitigate pest damage, reduce pesticide overuse, and promote sustainable agricultural practices.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_67-YOLO_Driven_Lightweight_Mobile_Real_Time_Pest_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Nature-Inspired Intrusion Detection in Virtual Environments: An Artificial Bees Colony Approach Based on Cloud Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151266</link>
        <id>10.14569/IJACSA.2024.0151266</id>
        <doi>10.14569/IJACSA.2024.0151266</doi>
        <lastModDate>2024-12-30T11:56:55.4600000+00:00</lastModDate>
        
        <creator>Ayanseun S. Ayanboye</creator>
        
        <creator>John E. Efiong</creator>
        
        <creator>Temitope O. Ajayi</creator>
        
        <creator>Rotimi A. Gbadebo</creator>
        
        <creator>Bodunde O. Akinyemi</creator>
        
        <creator>Emmanuel A. Olajubu</creator>
        
        <creator>Ganiyu A. Aderounmu</creator>
        
        <subject>Real-time intrusion detection; virtual environments; artificial bee colony algorithm; cloud model algorithms; intrusion detection system; feature selection; classification; swarm intelligence; fuzzy logic; DNN; ABC_DNN; DABCO_CM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Real-time intrusion detection in virtual environments is crucial for maintaining the security and integrity of modern computing infrastructures. This paper proposes a nature-inspired mathematical model designed to detect both known and unknown attacks on virtual machines, focusing on enhancing detection accuracy and minimizing false alarm rates. The proposed model, named Developed Artificial Bee Colony Optimization Based on Cloud Model (DABCO_CM), is inspired by the foraging behavior of bee swarms and integrates principles from the Artificial Bee Colony algorithm and the cloud model rooted in fuzzy logic theory. The model was simulated using the UNSW_NB15 datasets in Google Colab and benchmarked against an existing model. It achieved a detection accuracy of 97.98%, compared to the existing model&#39;s 95.35%. Sensitivity results showed 99.92% for the proposed model, compared to 96.90% for the existing model, while specificity increased to 93.86%, in contrast to the existing model&#39;s 90.71%. These findings demonstrate a 3.02% increase in sensitivity, a 2.63% increase in accuracy, and a 3.15% increase in specificity, highlighting the model&#39;s superior capability in detecting attacks and its potential to learn from unlabeled data, addressing key challenges in virtual machine security.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_66-A_Real_Time_Nature_Inspired_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing IoT Security Through User Categorization and Aberrant Behavior Detection Using RBAC and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151265</link>
        <id>10.14569/IJACSA.2024.0151265</id>
        <doi>10.14569/IJACSA.2024.0151265</doi>
        <lastModDate>2024-12-30T11:56:55.4430000+00:00</lastModDate>
        
        <creator>Alshawwa Izzeddin A O</creator>
        
        <creator>Nor Adnan Bin Yahaya</creator>
        
        <creator>Ahmed Y. mahmoud</creator>
        
        <subject>Machine learning; classification; SVM; LOF; IF classification methods; aberrant user behavior; Role-Based Access Control (RBAC); IoT user dataset and user categorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The proliferation of Internet of Things (IoT) technology in recent years has revolutionized several industries, providing customers with reliable and efficient IoT services. However, as the IoT ecosystem grows, attention has shifted from straightforward user access to the crucial topic of security. Among other needs, there is a need to categorize users according to the actions they conduct as well as according to aberrant user behavior. By utilizing Role-Based Access Control (RBAC) and merging the categorization of access rights with the identification of aberrant behavior, access points to the Internet of Things can be strengthened in terms of security and dependability. A system is proposed to identify security flaws and prompt rapid remediation, incorporating a classification of aberrant user behaviors which, in turn, offers a thorough defense against outside threats. Three classification methods, Support Vector Machine (SVM), Local Outlier Factor (LOF), and Isolation Forest (IF), were utilized in the study, and their accuracies were compared. The results demonstrate the effectiveness of machine learning approaches on a dataset of IoT users, achieving high accuracy in identifying anomalous user behavior and enabling prompt implementation of necessary actions.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_65-Enhancing_IoT_Security_Through_User_Categorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining High Utility Itemset with Hybrid Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151264</link>
        <id>10.14569/IJACSA.2024.0151264</id>
        <doi>10.14569/IJACSA.2024.0151264</doi>
        <lastModDate>2024-12-30T11:56:55.4130000+00:00</lastModDate>
        
        <creator>Keerthi Mohan</creator>
        
        <creator>Anitha J</creator>
        
        <subject>Utility mining; high utility itemset; ant colony optimization; genetic algorithm; evolutionary computation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>A significant area of study within data mining is high-utility itemset mining (HUIM). Traditional HUIM algorithms face an exponentially large search space when the database size or the number of distinct items is huge. Evolutionary computation (EC)-based algorithms have been presented as an alternative and efficient way to address HUIM problems, since they can quickly produce a set of approximately optimal solutions. However, EC-based methods still require considerable time to find the complete set of high-utility itemsets (HUIs) in transactional databases. To deal with this issue, we propose a hybrid Ant Colony Optimization-based HUIM algorithm in which a genetic crossover operator is applied to the solutions generated by the ants. In this study, a single-point crossover is employed, wherein the crossover point is selected randomly, and a mutation operator is applied by flipping one or more random bits in a string. This technique requires less time to mine the same number of HUIs than state-of-the-art EC-based HUIM algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_64-Mining_High_Utility_Itemset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Diabetic Retinopathy Detection and Classification System Using LRKSA-CNN and KM-ANFIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151263</link>
        <id>10.14569/IJACSA.2024.0151263</id>
        <doi>10.14569/IJACSA.2024.0151263</doi>
        <lastModDate>2024-12-30T11:56:55.3970000+00:00</lastModDate>
        
        <creator>Rachna Kumari</creator>
        
        <creator>Sanjeev Kumar</creator>
        
        <creator>Sunila Godara</creator>
        
        <subject>Intervening contour similarity weights based watershed segmentation (ICSW-WS); min-max normalization based green anaconda optimization (MM-GAO); krusinka membership based adaptive neuro fuzzy interference system (KM-ANFIS); linearly regressed kernel and scaled activation based convolution neural network (LRKSA-CNN); deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>If Diabetic Retinopathy (DR) is not diagnosed in the early stages, it leads to impaired vision and often causes blindness, so early diagnosis of DR is essential. Various approaches have been developed for detecting DR and its diverse stages; however, they are limited in considering the microstructural changes of visual pathways associated with the visual impairment of DR. Thus, this work proposes an effective Linearly Regressed Kernel and Scaled Activation-based Convolution Neural Network (LRKSA-CNN) to diagnose DR utilizing multimodal images. Primarily, the input Optical Coherence Tomography (OCT) image is preprocessed for contrast enhancement utilizing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and resolution enhancement utilizing the Gaussian Mixture Model (GMM). Likewise, the Magnetic Resonance Imaging (MRI) image’s contrast is also improved, and edge sharpening is performed utilizing the Unsharp Mask Filter (USF). Then, the preprocessed images are segmented utilizing the Intervening Contour Similarity Weights-based Watershed Segmentation (ICSW-WS) algorithm, and significant features are extracted from the segmented regions. Next, important features are chosen utilizing the Min-max normalization-based Green Anaconda Optimization (MM-GAO) algorithm. Utilizing the LRKSA-CNN technique, the selected features are classified into DR and Non-Diabetic Retinopathy (NDR). Then, utilizing the Krusinka Membership-based Adaptive Neuro Fuzzy Interference System (KM-ANFIS), the various stages of DR are classified based on the presence of intermediate features. Lastly, the proposed system achieves superior outcomes compared to the baseline systems.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_63-An_Efficient_Diabetic_Retinopathy_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Framework for Agricultural Water Management Through Smart Irrigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151262</link>
        <id>10.14569/IJACSA.2024.0151262</id>
        <doi>10.14569/IJACSA.2024.0151262</doi>
        <lastModDate>2024-12-30T11:56:55.3800000+00:00</lastModDate>
        
        <creator>Abdelouahed Tricha</creator>
        
        <creator>Laila Moussaid</creator>
        
        <creator>Najat Abdeljebbar</creator>
        
        <subject>Agriculture; irrigation system; water management; Internet of Things; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The demand for freshwater resources has risen significantly due to population growth and increasing drought conditions in agricultural regions worldwide. Irrigated agriculture consumes a substantial amount of water, often leading to wastage due to inefficient irrigation practices. Recent breakthroughs in emerging technologies, including machine learning, the Internet of Things, wireless communication, and advanced monitoring systems, have facilitated the development of smart irrigation solutions that optimize water usage, enhance efficiency, and reduce operational costs. This paper explores the critical parameters and monitoring strategies for smart irrigation systems, emphasizing soil and water management. It also presents a conceptual framework for implementing sustainable irrigation practices aimed at optimizing water use, improving crop productivity, and ensuring cost-effective management across different agricultural settings.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_62-A_Conceptual_Framework_for_Agricultural_Water_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Residual Graph Attention Networks Algorithm in Credit Evaluation for Financial Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151261</link>
        <id>10.14569/IJACSA.2024.0151261</id>
        <doi>10.14569/IJACSA.2024.0151261</doi>
        <lastModDate>2024-12-30T11:56:55.3170000+00:00</lastModDate>
        
        <creator>Wenxing Zeng</creator>
        
        <subject>Quantum genetic algorithm; residual networks; attention mechanisms; graph neural networks; credit evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In the context of the digital transformation of enterprises, credit evaluation of financial enterprises faces new challenges and opportunities. Digital transformation introduces a large amount of data and advanced analytical tools, providing richer information and methods for credit evaluation. In this paper, we propose a credit evaluation model based on an improved quantum genetic algorithm and a residual graph attention network (DRQGA-ResGAT), which aims to utilize the complex correlation data and multi-dimensional information among enterprises for enterprise credit evaluation. The credit evaluation model based on DRQGA-ResGAT performs well in dealing with large-scale and high-dimensional data and can significantly improve the accuracy of credit evaluation. The experimental results show that the ResGAT model combined with the improved quantum genetic algorithm performs even better; the proposed model attains a high precision rate in the credit evaluation of financial enterprises and thus has considerable application value. Compared with the traditional ResGAT model, it improves precision by about 17.06%.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_61-Application_of_Residual_Graph_Attention_Networks_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radar Spectrum Analysis and Machine Learning-Based Classification for Identity-Based Unmanned Aerial Vehicles Detection and Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151260</link>
        <id>10.14569/IJACSA.2024.0151260</id>
        <doi>10.14569/IJACSA.2024.0151260</doi>
        <lastModDate>2024-12-30T11:56:55.2700000+00:00</lastModDate>
        
        <creator>Aminu Abdulkadir Mahmoud</creator>
        
        <creator>Sofia Najwa Ramli</creator>
        
        <creator>Mohd Aifaa Mohd Ariff</creator>
        
        <creator>Muktar Danlami</creator>
        
        <subject>Authentication; detection; cybersecurity; Micro-Doppler; radar; Unmanned Aerial Vehicle (UAV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The significant use of Unmanned Aerial Vehicles (UAVs) in commercial and civilian applications presents various cybersecurity challenges, particularly in detection and authentication. Unauthorized UAVs can be very harmful to people on the ground, infrastructure, the right to privacy, and other UAVs. Moreover, using the internet for UAV communication may expose authorized ones to attacks, causing a loss of confidentiality, integrity, and information availability. This paper introduces radar-based UAV detection and authentication using Micro-Doppler (MD) signal analysis. The study provides a unique dataset comprising radar signals from three distinct UAV models captured under varying operational conditions. The dataset enables the analysis of specific features and classification through machine learning models, including k-Nearest Neighbor (k-NN), Random Forest, and Support Vector Machine (SVM). The approach leverages radar signal processing to extract MD signatures for accurate UAV identification, enhancing detection and authentication processes. The results indicate that Random Forest achieved the highest accuracy of 100%, with high classification accuracy and zero false alarms, demonstrating its suitability for real-time monitoring. This also highlights the potential of radar-based MD analysis for UAV detection, and it establishes a foundational approach for developing robust UAV monitoring systems, with potential applications in aviation, military surveillance, public safety, and regulatory compliance. Future work will focus on expanding the dataset and integrating the Remote Identification (RID) policy, which mandates that UAVs disclose their identity upon approaching any territory; this will help enhance the security and scalability of the system.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_60-Radar_Spectrum_Analysis_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Heuristic Evaluation of Mobile Learning Applications Based on the Usability Design Model for Adult Learners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151259</link>
        <id>10.14569/IJACSA.2024.0151259</id>
        <doi>10.14569/IJACSA.2024.0151259</doi>
        <lastModDate>2024-12-30T11:56:55.1470000+00:00</lastModDate>
        
        <creator>Amy Ling Mei Yin</creator>
        
        <creator>Ahmad Sobri B Hashim</creator>
        
        <creator>Mazeyanti Bt M Ariffin</creator>
        
        <subject>Usability design model; mobile learning; adult learners; heuristic evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Adult ownership of mobile devices has exploded over the past few years, and smartphones and tablets have become vital for communication, productivity, entertainment, and learning. Common problems adults face include difficulty using new technology-based apps, partly because many devices are small, and tasks on such apps taking longer to complete. Therefore, a usability design model for adult learners has been proposed. The objective of this study is to evaluate the usability design model for adult learners and whether applications containing the model’s components affect the satisfaction of adult learners. The evaluation was based on Nielsen’s heuristic guidelines, modified and mapped to the seven components of the model. Two existing mobile learning (m-learning) applications (Duolingo and Lingualia) from the Play Store were chosen for this evaluation. The results indicate that Duolingo has an overall satisfaction mean score of 4.38, compared with only 2.43 for Lingualia. Duolingo meets most of the model’s criteria and therefore achieves a higher satisfaction mean score. This indicates that the seven components play important roles in contributing to satisfaction among adult learners.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_59-Usability_Heuristic_Evaluation_of_Mobile_Learning_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Malware Analysis Approach for Identifying Threat Actor Correlation Using Similarity Comparison Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151258</link>
        <id>10.14569/IJACSA.2024.0151258</id>
        <doi>10.14569/IJACSA.2024.0151258</doi>
        <lastModDate>2024-12-30T11:56:55.1170000+00:00</lastModDate>
        
        <creator>Ahmad Naim Irfan</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <creator>Mohd Naz&#39;ri Mahrin</creator>
        
        <creator>Aswami Ariffin</creator>
        
        <subject>Malware analysis; APT group; threat actor correlation; CTI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Cybersecurity is essential for organisations to protect critical assets from cyber threats in the increasingly digital and interconnected world. However, cybersecurity incidents are rising each year, leading to increased workloads. Current malware analysis approaches are often case-by-case, based on specific scenarios, and are typically limited to identifying malware. When cybersecurity incidents are not handled effectively due to these analytical limitations, operations are disrupted, and an organisation’s brand and client trust are negatively impacted, often resulting in financial loss. The aim of this research is to enhance the analysis of Advanced Persistent Threat (APT) malware by correlating malware with its associated threat actors, such as APT groups, who are the perpetrators or authors of the malware. APT malware represents a highly dangerous threat, and gaining insight into the adversaries behind such attacks is crucial for preventing cyber incidents. This research proposes an advanced malware analysis approach that correlates APT malware with threat actors using a similarity comparison technique. By extracting features from APT malware and analysing the correlation with the threat actor, cybersecurity professionals can implement effective countermeasures to ensure that organisations are better prepared against these sophisticated cyber threats. The solution aims to assist cybersecurity practitioners and researchers in making informed decisions by providing actionable insights and a broader perspective on cyber-attacks, based on detailed information about malware tied to specific threat actors.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_58-A_Malware_Analysis_Approach_for_Identifying_Threat.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Malware Attacks on the Performance of Various Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151257</link>
        <id>10.14569/IJACSA.2024.0151257</id>
        <doi>10.14569/IJACSA.2024.0151257</doi>
        <lastModDate>2024-12-30T11:56:55.1000000+00:00</lastModDate>
        
        <creator>Maria-Madalina Andronache</creator>
        
        <creator>Alexandru Vulpe</creator>
        
        <creator>Corneliu Burileanu</creator>
        
        <subject>Cybersecurity; network security; network monitoring; incident analysis; incident response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Recent research in the field of cybersecurity concludes that permanent monitoring of the network and its protection, based on various tools or solutions, are key aspects of defending it against vulnerabilities. It is therefore imperative that solutions such as firewalls, antivirus software, Intrusion Detection Systems, Intrusion Prevention Systems, and Security Information and Event Management be implemented for all networks in use. However, if an attack has reached the network, it must be identified and analyzed in order to assess the damage, prevent similar events from happening, and build an incident response adapted to the network used. This work analyzes the impact of malicious and benign files that have reached a network. Various analysis methods (both static and dynamic) of real malicious software are applied in two different operating systems (Windows 10 and Ubuntu 22.04). Thereby, both malware and benign files and their impact on various operating systems are analyzed.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_57-The_Impact_of_Malware_Attacks_on_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Math Role-Play Game Using Lehmer’s RNG Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151256</link>
        <id>10.14569/IJACSA.2024.0151256</id>
        <doi>10.14569/IJACSA.2024.0151256</doi>
        <lastModDate>2024-12-30T11:56:55.0670000+00:00</lastModDate>
        
        <creator>Chong Bin Yong</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Nurul Halimatul Asmak Ismail</creator>
        
        <creator>Samer A. B. Awwad</creator>
        
        <subject>Lehmer’s RNG algorithm; online educational; gamification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Due to the COVID-19 pandemic, schools in Malaysia have been physically closed for more than 40 weeks and the students have to learn online. As Malaysia transitions to endemicity, many younger students struggle to keep up with their education due to significant learning loss caused by school closures and the challenges of virtual classes, including distractions and reduced engagement. This study aims to address these issues by developing an educational application that integrates gaming elements, focusing on arithmetic for Year 6 primary school students. The application engages students through interactive gameplay, requiring them to solve math problems to progress, thereby promoting a fun and effective way to enhance their arithmetic skills and mitigate learning loss.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_56-Math_Role_Play_Game_Using_Lehmers_RNG_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Aquila Optimizer Algorithm for Efficient Stance Classification in Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151255</link>
        <id>10.14569/IJACSA.2024.0151255</id>
        <doi>10.14569/IJACSA.2024.0151255</doi>
        <lastModDate>2024-12-30T11:56:55.0530000+00:00</lastModDate>
        
        <creator>Na LI</creator>
        
        <subject>Stance classification; online social networks; opposition-based learning; chaotic local search; Aquila Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Stance classification in Online Social Networks (OSNs) is essential to comprehend users&#39; standpoints on various issues relating to social, political, and commercial aspects. However, traditional methods applied to large datasets and complex text structures usually face several challenges. This study introduces the Enhanced Aquila Optimizer (EAO), a metaheuristic algorithm designed to improve convergence and precision in stance classification tasks. EAO incorporates three new strategies: Opposition-Based Learning (OBL) to improve the exploration, Chaotic Local Search (CLS) to escape from the local optima, and a Restart Strategy (RS) to rejuvenate the search process. Experimental assessments on benchmark OSN datasets prove the superiority of EAO in terms of accuracy, precision, and computational efficiency compared to state-of-the-art methods. These findings position EAO as a potential revolution for stance classification and other large-scale text analysis tasks by offering a robust solution that can be used in real-time for complex OSN scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_55-Enhanced_Aquila_Optimizer_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Speed Prediction Based on Spatial-Temporal Dynamic and Static Graph Convolutional Recurrent Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151254</link>
        <id>10.14569/IJACSA.2024.0151254</id>
        <doi>10.14569/IJACSA.2024.0151254</doi>
        <lastModDate>2024-12-30T11:56:55.0370000+00:00</lastModDate>
        
        <creator>YANG Wenxi</creator>
        
        <creator>WANG Ziling</creator>
        
        <creator>CUI Tao</creator>
        
        <creator>LU Yudong</creator>
        
        <creator>QU Zhijian</creator>
        
        <subject>Intelligent transportation; traffic speed prediction; spatial-temporal correlation; dynamic graph; graph convolution recurrent network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Traffic speed prediction based on spatial-temporal data plays an important role in intelligent transportation. The time-varying dynamic spatial relationships and complex spatial-temporal dependencies remain important problems to be considered in traffic prediction. In response to these problems, a Dynamic and Static Graph Convolutional Recurrent Network (DASGCRN) model for traffic speed prediction is proposed to capture the spatial-temporal correlation in the road network. DASGCRN consists of a Spatial Correlation Extraction Module (SCEM), a Dynamic Graph Construction Module (DGCM), a Dynamic Graph Convolution Recurrent Module (DGCRM), and residual decomposition. Firstly, an improved traditional static adjacency matrix captures the relationships among nodes at each time step. Secondly, graph convolution captures the overall spatial information across the road network, and a dynamic graph isomorphic network captures the hidden dynamic dependencies between adjacent time series. Thirdly, the spatial-temporal correlation of the traffic data is captured using dynamic graph convolution and a gated recurrent unit. Finally, a residual mechanism and a phased learning strategy are introduced to enhance the performance of DASGCRN. We conducted extensive experiments on two real-world traffic speed datasets, and the experimental results show that the performance of DASGCRN is significantly better than all baselines.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_54-Traffic_Speed_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning as a Tool to Combat Ransomware in Resource-Constrained Business Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151253</link>
        <id>10.14569/IJACSA.2024.0151253</id>
        <doi>10.14569/IJACSA.2024.0151253</doi>
        <lastModDate>2024-12-30T11:56:55.0200000+00:00</lastModDate>
        
        <creator>Luis Jes&#250;s Romero Castro</creator>
        
        <creator>Piero Alexander Cruz Aquino</creator>
        
        <creator>Fidel Eugenio Garcia Rojas</creator>
        
        <subject>Ransomware; cybersecurity; machine learning; microenterprise; threat detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Ransomware has emerged as one of the leading cybersecurity threats to microenterprises, which often lack the technological and financial resources to implement advanced protection systems. This study proposes a cybersecurity model based on machine learning, designed not only for the detection and mitigation of ransomware attacks but also as a scalable and adaptable solution that can be integrated into business infrastructures across various sectors. By leveraging advanced techniques to identify malicious behavior patterns, the system alerts businesses before significant damage occurs. Moreover, this approach provides complementary measures such as automated updates and backups, enhancing resilience against cyber threats in resource-constrained environments. This research aims not only to protect critical data but also to contribute to the development of accessible cybersecurity models, improving operational continuity and promoting sustainability in the digital landscape.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_53-Machine_Learning_as_a_Tool_to_Combat_Ransonware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Smart Water Dispenser Based on Object Recognition with Raspberry Pi 4</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151252</link>
        <id>10.14569/IJACSA.2024.0151252</id>
        <doi>10.14569/IJACSA.2024.0151252</doi>
        <lastModDate>2024-12-30T11:56:54.9900000+00:00</lastModDate>
        
        <creator>Dani Ramdani</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Andriana</creator>
        
        <creator>Tjahjo Adiprabowo</creator>
        
        <creator>Yuyu Wahyu</creator>
        
        <creator>Arief Suryadi Satyawan</creator>
        
        <creator>Sally Octaviana Sari</creator>
        
        <creator>Zulkarnain</creator>
        
        <creator>Noor Rohman</creator>
        
        <subject>Smart water dispenser; object recognition; Raspberry Pi 4; YOLO V8; ultrasonic sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In this project, we develop and apply a Smart Water Dispenser system that combines object recognition with fluid-level control supported by ultrasonic sensors, a Raspberry Pi, and DC motors. The core of the system is a Raspberry Pi 4 Model B integrated with other hardware components and programmed using OpenCV and YOLO V8, so that a cup can be detected and water filling performed precisely and automatically. Cups are detected automatically by the Raspberry Pi, which controls the DC motor and the ultrasonic sensor (HC-SR04) and monitors the volume of water with precision. The dispenser pumps water based on the volume of water in the glass and stops pumping once the glass is about 90% full, so that no water is spilled. In the operating scenario, the system first scans up to three times to search for a cup; if a cup is found, the sensor assembly and the water-release valve stop right at the position of the cup, and the water fills the cup automatically. Otherwise, the system moves backward and shuts down. Initial testing has been successful and shows the effectiveness of the system in finding cups and managing water levels. This innovation promises improved user comfort, especially for people with disabilities, by utilizing advanced object-recognition technology while saving water. In the testing process, 95% to 97% accuracy was obtained in object detection with different types of cups.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_52-Development_of_a_Smart_Water_Dispenser.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Source Consistency Deep Learning for Semi-Supervised Operating Condition Recognition in Sucker-Rod Pumping Wells</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151251</link>
        <id>10.14569/IJACSA.2024.0151251</id>
        <doi>10.14569/IJACSA.2024.0151251</doi>
        <lastModDate>2024-12-30T11:56:54.9730000+00:00</lastModDate>
        
        <creator>Jianguo Yang</creator>
        
        <creator>Bin Zhou</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Min Zhang</creator>
        
        <creator>Xiao Zheng</creator>
        
        <creator>Xinqian Liu</creator>
        
        <subject>Operating condition recognition of sucker-rod pumping wells; multi-source consistency learning; semi-supervised learning; CNN; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Making full use of the multiple measured information sources obtained from sucker-rod pumping wells through deep learning is crucial for precisely recognizing their operating conditions. However, existing deep learning-based operating condition recognition technology suffers from low accuracy and weak practicality owing to the limitations of methods for handling single-source or multi-source data, the high demand for sufficient labeled data, and the inability to exploit massive unknown operating condition data resources. To solve these problems, we design a semi-supervised operating condition recognition method based on multi-source consistency deep learning. Specifically, on the basis of the WideResNet28-2 convolutional neural network (CNN) framework, a multi-head self-attention mechanism and a feedforward neural network are first used to extract deeper features from the measured dynamometer cards and the measured electrical power cards, respectively. Then, a consistency constraint loss based on cosine similarity is introduced to ensure maximum similarity of the final features expressed by the different information sources. Next, the optimal global feature representation of the multi-source fusion is obtained by learning the weights of the feature representations of the different information sources through an adaptive attention mechanism. Finally, the fused multi-source features, combined with multi-source semi-supervised class-aware contrastive learning, are exploited to yield the operating condition recognition model. We test the proposed model on a dataset produced from an oilfield in China with a high-pressure, low-permeability thin oil reservoir block. Experiments show that the proposed method better learns the critical features of the multiple measured information sources of oil wells and further improves operating condition identification performance by making full use of unknown operating condition data alongside a small amount of labeled data.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_51-Multi_Source_Consistency_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Multi-Agent System and Case-Based Reasoning for Flood Early Warning and Response System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151250</link>
        <id>10.14569/IJACSA.2024.0151250</id>
        <doi>10.14569/IJACSA.2024.0151250</doi>
        <lastModDate>2024-12-30T11:56:54.9430000+00:00</lastModDate>
        
        <creator>Nor Aimuni Md Rashid</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <subject>Flood; multi-agent system; flood early warning system; case-based reasoning; quadruple helix; flood risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This research addresses the limitations of current Multi-Agent Systems (MAS) in Flood Early Warning and Response Systems (FEWRS), focusing on gaps in risk knowledge, monitoring, forecasting, warning dissemination, and response capabilities. These shortcomings reduce the system’s reliability and public trust, highlighting the need for better flood preparedness and learning mechanisms. To tackle these issues, this study proposes a new conceptual framework combining Case-Based Reasoning (CBR) with MAS, aimed at enhancing flood prediction, learning, and decision-making. CBR enables the system to learn from past flood events by retrieving and adapting cases to improve future predictions and responses, while MAS allows for decentralized and collaborative decision-making among various agents within the system. This integration fosters a dynamic, real-time system that adapts to changing conditions and improves over time through continuous feedback. The framework’s effectiveness is evaluated using the quadruple helix model, addressing social, economic, environmental, and governance aspects. Socially, the system increases community resilience through improved early warnings. Economically, it reduces flood impacts by enabling faster and more accurate responses. Environmentally, it enhances monitoring and preservation of ecosystems. In governance, the framework improves coordination between agencies and the public. The CBR-MAS framework significantly improves intelligent detection, decision-making speed, and community resilience, offering substantial improvements over traditional FEWRS. This adaptive approach promises to build a more reliable, trustworthy system capable of handling the complexities of flood risks in the future.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_50-Integrating_Multi_Agent_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity Awareness in Schools: A Systematic Review of Practices, Challenges, and Target Audiences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151249</link>
        <id>10.14569/IJACSA.2024.0151249</id>
        <doi>10.14569/IJACSA.2024.0151249</doi>
        <lastModDate>2024-12-30T11:56:54.9130000+00:00</lastModDate>
        
        <creator>Abdulrahman Abdullah Arishi</creator>
        
        <creator>Nazhatul Hafizah Kamarudin</creator>
        
        <creator>Khairul Azmi Abu Bakar</creator>
        
        <creator>Zarina Binti Shukur</creator>
        
        <creator>Mohammad Kamrul Hasan</creator>
        
        <subject>Cybersecurity awareness; threats; awareness programs; education; school security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This systematic literature review examines cybersecurity awareness in schools, focusing on effective practices, challenges, and future directions. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, peer-reviewed publications in English were sourced from ACM Digital Library, IEEE Xplore, ScienceDirect, SpringerLink, and Emerald, covering the period from 2019 to 2024. Studies were included if they focused on cybersecurity awareness in primary and secondary educational settings, excluding those unrelated to educational contexts or published before 2019. A total of 816 records were identified, of which 220 were duplicates and removed. After screening and eligibility assessments, 14 studies met the inclusion criteria. Risk of bias was minimized by adhering to strict inclusion criteria, such as limiting the review to high-quality, peer-reviewed studies, and ensuring consistency in the data extraction process. The review highlights effective practices such as using serious games, mobile apps, and tailored programs to enhance cybersecurity awareness. Challenges include inconsistent curricula, insufficient parental involvement, and resource limitations. These results emphasize integrating cybersecurity education across school curricula and regularly updating content to reflect evolving threats. Limitations include the exclusion of non-English and non-peer-reviewed studies. Future research should consider broader contexts and additional sources.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_49-Cybersecurity_Awareness_in_Schools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Internet of Things Communication Through Trustworthy RPL Routing Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151248</link>
        <id>10.14569/IJACSA.2024.0151248</id>
        <doi>10.14569/IJACSA.2024.0151248</doi>
        <lastModDate>2024-12-30T11:56:54.8970000+00:00</lastModDate>
        
        <creator>Rui LI</creator>
        
        <subject>Internet of Things; routing; trust; data transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The Internet of Things (IoT) refers to a network of connected objects for autonomous data exchange and processing. With the continuing growth of IoT, ensuring the integrity and security of data transmission is essential, as data is subject to many attacks. RPL, the routing protocol for low-power and lossy networks, is currently widely deployed in IoT. It provides a framework for defining characteristics related to low power consumption and resilience to specific routing attacks. Trust-based RPL routing protocols improve RPL security by introducing a Minimum Acceptable Trust threshold, permitting only trusted nodes with a sufficient level of accumulated trust to participate in routing. This mechanism is designed to reduce malicious activities and establish secure communications. This paper provides an overall review of trustworthy RPL routing methods in IoT and discusses the trust metrics of these approaches and their limitations. To the best of our knowledge, this is the first survey focusing on trust-based RPL protocols in IoT, offering valuable insights into protocol performance and possible improvements.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_48-Towards_Secure_Internet_of_Things_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Large Language Models for Automated Bug Fixing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151247</link>
        <id>10.14569/IJACSA.2024.0151247</id>
        <doi>10.14569/IJACSA.2024.0151247</doi>
        <lastModDate>2024-12-30T11:56:54.8800000+00:00</lastModDate>
        
        <creator>Shatha Abed Alsaedi</creator>
        
        <creator>Amin Yousef Noaman</creator>
        
        <creator>Ahmed A. A. Gad-Elrab</creator>
        
        <creator>Fathy Elbouraey Eassa</creator>
        
        <creator>Seif Haridi</creator>
        
        <subject>Bug fixing; automated program repair; large language models; software debugging; software maintenance; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Bug fixing, also known as Automatic Program Repair (APR), is a significant area of research in software engineering. It aims to develop techniques and algorithms that automatically fix bugs and generate repair patches in source code. Researchers have developed many APR algorithms to enhance software reliability and increase developer productivity. In this paper, a novel model for automated bug fixing is developed leveraging large language models. The proposed model accepts the bug type and the buggy method as inputs and outputs the repaired version of the method. The model can localize the buggy lines, debug the source code, generate the correct patches, and insert them in the correct locations. To evaluate the proposed model, a new dataset containing 53 Java source code files from four bug classes (Program Anomaly, GUI, Test-Code, and Performance) is presented. The proposed model successfully fixed 49 of the 53 programs using gpt-3.5-turbo and all 53 using gpt-4-0125-preview. The results are notable, with the model achieving accuracies of 92.45% and 100% with gpt-3.5-turbo and gpt-4-0125-preview, respectively. Additionally, the proposed model outperforms several state-of-the-art APR models, fixing all 40 buggy programs in the QuixBugs benchmark dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_47-Leveraging_Large_Language_Models_for_Automated_Bug_Fixing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Butterfly Optimization Algorithm for Task Scheduling in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151246</link>
        <id>10.14569/IJACSA.2024.0151246</id>
        <doi>10.14569/IJACSA.2024.0151246</doi>
        <lastModDate>2024-12-30T11:56:54.8500000+00:00</lastModDate>
        
        <creator>Yue ZHAO</creator>
        
        <subject>Cloud computing; resource utilization; task scheduling; Butterfly Optimization Algorithm; fuzzy decision strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Cloud computing is transforming the provision of elastic and adaptable capabilities on demand. A scalable infrastructure and a wide range of offerings make cloud computing essential to today&#39;s computing ecosystem. Cloud resources enable users and companies to utilize data maintained in a distant location. Generally, cloud vendors provide services within the limitations of Service Level Agreement (SLA) terms. SLAs consist of various Quality of Service (QoS) requirements the supplier promises. Task scheduling is critical to maintaining high QoS and minimizing SLA violations; in simple terms, it aims to schedule tasks so as to limit wasted time and optimize performance. Considering the NP-hard character of cloud task scheduling, metaheuristic algorithms are widely applied to handle this optimization problem. This study presents a novel approach using the Butterfly Optimization Algorithm (BOA) for scheduling cloud-based tasks across diverse resources. BOA performs well on non-constrained and non-biased mathematical functions; however, its search capacity is limited on shifted, rotated, and/or constrained optimization problems. This deficiency is addressed by incorporating a virtual butterfly and an improved fuzzy decision process into the conventional BOA. The suggested methodology improves throughput and resource utilization while reducing the makespan. Regardless of the number of tasks, better results are consistently produced, indicating greater scalability.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_46-Enhanced_Butterfly_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of a TOPSIS-Based Fuzzy Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151245</link>
        <id>10.14569/IJACSA.2024.0151245</id>
        <doi>10.14569/IJACSA.2024.0151245</doi>
        <lastModDate>2024-12-30T11:56:54.8330000+00:00</lastModDate>
        
        <creator>Fei Liu</creator>
        
        <subject>Cultural and tourism integration; attractiveness; TOPSIS; entropy weight method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The study aims to evaluate the tourism attractiveness of different tourist attractions in the same region through the TOPSIS model from the perspective of culture and tourism integration, providing theoretical and practical support for the region&#39;s tourism development. Based on the concept of culture and tourism integration and its importance in tourism development, an evaluation index system for tourism attractiveness is constructed, covering indicators such as tourism resources and tourism infrastructure. The entropy weight method and the TOPSIS model are then used for a comprehensive evaluation of these indicators, yielding the weight of each indicator and a comprehensive tourism attractiveness score for a given place. The results show that TOPSIS analysis clearly reveals a region&#39;s strengths and weaknesses in tourism resources and cultural characteristics, so that targeted recommendations can be made, including strengthening tourism infrastructure construction and excavating and protecting cultural characteristics. These suggestions can help further improve a place&#39;s tourism attractiveness, attracting more tourists and promoting local economic development. Meanwhile, the methodology and framework of this study also provide a reference for other regions carrying out similar tourism attractiveness evaluations. In the context of culture and tourism integration, this study expands the perspective of tourism evaluation and provides new ideas and methods for local tourism development.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_45-Design_and_Application_of_a_TOPSIS_Based_Fuzzy_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Layer-Based Feature Extraction in an Ensemble Machine Learning Model for Breast Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151244</link>
        <id>10.14569/IJACSA.2024.0151244</id>
        <doi>10.14569/IJACSA.2024.0151244</doi>
        <lastModDate>2024-12-30T11:56:54.8030000+00:00</lastModDate>
        
        <creator>Shofwatul ‘Uyun</creator>
        
        <creator>Lina Choridah</creator>
        
        <creator>Slamet Riyadi</creator>
        
        <creator>Ade Umar Ramadhan</creator>
        
        <subject>Ensemble learning; feature extraction; convolutional layer; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Mammography and ultrasound are the main medical imaging modalities for identifying breast lesions. Computer-assisted diagnosis (CAD) is an important tool for radiologists, helping them differentiate benign and malignant lesions more quickly and objectively. The use of appropriate features in mammography and ultrasound is one of the key factors determining the success of CAD results for breast cancer systems. The diversity of feature forms and extraction techniques is a challenge. Additionally, the use of a single classification algorithm often introduces noise and bias and lacks robustness. We propose a convolutional layer-based feature extraction technique in an ensemble learning model for the classification of breast cancer. This study uses 439 mammography images (203 benign, 236 malignant) and 421 ultrasound images (244 benign, 177 malignant). This research consists of several stages, including data pre-processing, feature extraction, classification, and performance evaluation. We used four convolution layer-based feature extraction techniques: simple convolution (SC), feature fusion convolution (FFC), feature fusion depthwise convolution (FFDC), and feature fusion depthwise separable convolution (FFDSC). The model uses five machine learning algorithms (support vector machine, random forest, k-nearest neighbours, decision tree, and logistic regression) as part of the ensemble. The experimental results show that the FFC convolution layer in ensemble learning has the best performance on both datasets. On the ultrasound dataset, FFC achieved a value of 0.90 in each of the accuracy, precision, recall, specificity, and F1 score metrics. On the mammography dataset, FFC achieved a value of 0.98 on each of the same metrics. These results show the effectiveness of feature fusion in improving classification performance in the soft voting classifier for ensemble learning.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_44-Convolutional_Layer_Based_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Framework for Indoor Product Design Using VR and Intelligent Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151243</link>
        <id>10.14569/IJACSA.2024.0151243</id>
        <doi>10.14569/IJACSA.2024.0151243</doi>
        <lastModDate>2024-12-30T11:56:54.7870000+00:00</lastModDate>
        
        <creator>Yaoben Gong</creator>
        
        <creator>Zhenyu Gao</creator>
        
        <subject>Interior home products; virtual reality technology; digital design algorithms; improved simple cyclic units; intelligent algorithms for design application evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This paper presents an innovative approach to the digital design of indoor home products by integrating virtual reality (VR) technology with intelligent algorithms to enhance design accuracy and efficiency. A model combining the Red Deer Algorithm (RDA) with a Simple Recurrent Unit (SRU) network is proposed to evaluate and optimize the design process. The study develops a digital design framework that incorporates key evaluation factors, optimizing the SRU network through the Red Deer Algorithm to achieve higher precision in design applications. The model’s performance is validated through extensive experiments using metrics such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). Results show that the RDA-SRU model outperforms other methods, with the smallest MAE of 0.133, RMSE of 0.02, and MAPE of 0.015. Additionally, the model achieved an R&#178; value of 0.968 and the shortest evaluation time of 0.028 seconds, demonstrating its superior performance in predicting and evaluating digital design applications for home products. These findings indicate that the integration of VR with intelligent algorithms significantly improves user experience, customizability, and the overall accuracy of digital design processes. This approach offers a robust solution for designers to create more efficient and user-centric home product designs, meeting growing customer demands for immersive and interactive design experiences.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_43-A_Distributed_Framework_for_Indoor_Product.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Networks for Brain Tumor Classification Through Temporal Learning and Hybrid Attention Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151242</link>
        <id>10.14569/IJACSA.2024.0151242</id>
        <doi>10.14569/IJACSA.2024.0151242</doi>
        <lastModDate>2024-12-30T11:56:54.7700000+00:00</lastModDate>
        
        <creator>Sayeedakhanum Pathan</creator>
        
        <creator>Savadam Balaji</creator>
        
        <subject>Brain tumor; magnetic resonance imaging; Gaussian filter; hybrid attention-VNet; distributed convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Brain Tumor (BT), the growth of abnormal cells in the brain, is categorized into different types based on the symptoms and the affected brain regions. Classification of BT using Magnetic Resonance Imaging (MRI) is an important and challenging task for BT diagnosis. Various approaches have been designed to address this problem, yet many inconsistencies remain in detecting tumors at an early stage. Owing to the variability and complexity of lesion size, shape, location, and texture, automatic BT detection remains a challenging task for the medical research community. Hence, a Hybrid Attention Temporal Difference Learning with Distributed Convolutional Neural Network-Bidirectional Long Short-Term Memory (HATDL-DCNN-BiLSTM) model is developed in this research to detect and classify BT at an early stage, which can improve patient survival rates. The proposed model uses a Gaussian filter for input image enhancement and Hybrid Attention-VNet segmentation to generate the region of interest, and it reduces computational cost through attention modules that minimize feature dimensions. The proposed model consumes less memory and increases training speed through a distributed learning mechanism. The features extracted using the Hybrid Attention based Efficient Statistical Triangular ResNet (HA-ESTER) help the classification model train more accurately and efficiently. On the BraTS 2019 dataset, the proposed HATDL-DCNN-BiLSTM attains accuracy, recall, F1-score, and precision of 98.93%, 99.21%, 97.67%, and 96.17% on training data, and 96.34%, 96.51%, 96.33%, and 96.15% with k-fold validation.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_42-Distributed_Networks_for_Brain_Tumor_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing CURE Algorithm with Stochastic Neighbor Embedding (CURE-SNE) for Improved Clustering and Outlier Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151241</link>
        <id>10.14569/IJACSA.2024.0151241</id>
        <doi>10.14569/IJACSA.2024.0151241</doi>
        <lastModDate>2024-12-30T11:56:54.7100000+00:00</lastModDate>
        
        <creator>Dewi Sartika Br Ginting</creator>
        
        <creator>Syahril Efendi</creator>
        
        <creator>Amalia</creator>
        
        <creator>Poltak Sihombing</creator>
        
        <subject>Stunting; clustering algorithm; CURE; CURE-SNE; outliers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This study focuses on analyzing stunting data using the CURE and CURE-SNE algorithms for clustering and outlier detection. The primary challenge is identifying patterns in stunting data, which includes variables such as age, gender, height, weight, and nutritional status. Both algorithms were employed to group the data and detect outliers that may affect the results of the analysis. The evaluation methods included determining the optimal number of clusters using the silhouette score and assessing cluster quality using the Davies-Bouldin Index (DBI). The results showed that both algorithms formed four clusters, with CURE-SNE detecting 6,050 outliers, while CURE detected 5,047 outliers. Silhouette score analysis revealed that both algorithms formed four optimal clusters. However, when validated using DBI, CURE achieved a score of 0.523, while CURE-SNE produced a lower score of 0.388, indicating that CURE-SNE outperformed CURE in terms of cluster quality. This suggests that CURE-SNE not only detects more outliers but also produces clusters with better separation and compactness. The findings highlight that both algorithms are effective for clustering stunting data, but CURE-SNE excels in terms of outlier detection and overall cluster quality. Thus, CURE-SNE is more suitable for handling complex datasets with potential outliers, providing more accurate insights into the structure of the data. In conclusion, CURE-SNE demonstrates superior performance compared to CURE, offering a more reliable and detailed clustering solution for stunting data analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_41-Enhancing_CURE_Algorithm_with_Stochastic_Neighbor_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Machine Learning Approach for Continuous Risk Management in Business Process Reengineering Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151240</link>
        <id>10.14569/IJACSA.2024.0151240</id>
        <doi>10.14569/IJACSA.2024.0151240</doi>
        <lastModDate>2024-12-30T11:56:54.6470000+00:00</lastModDate>
        
        <creator>RAFFAK Hicham</creator>
        
        <creator>LAKHOUILI Abdallah</creator>
        
        <creator>MANSOURI Mohamed</creator>
        
        <subject>BPR; Risk management; PCA; K-means; XGBoost; PSO; GWO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This study proposes a hybrid machine learning approach for continuous risk management in Business Process Reengineering (BPR) projects. This approach combines supervised and unsupervised learning techniques, integrating feature selection and preprocessing through Principal Component Analysis (PCA), clustering with K-means, and visualization with t-SNE. The labeled data are then used as input for predictive modeling with XGBoost, optimized using Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), and Grid Search algorithms. PCA reduces data dimensionality, simplifying analysis and improving model performance. K-means and t-SNE are employed for data clustering and visualization, enabling the identification of risk segments and uncovering hidden patterns. XGBoost, a powerful boosting algorithm, is utilized for predictive modeling due to its efficiency, accuracy, and ability to handle missing values. Optimization techniques further enhance XGBoost&#39;s performance by fine-tuning its hyperparameters. The approach was applied to a risk database from the automotive sector, demonstrating its practical applicability. Results show that PSO achieves the lowest mean squared error (MSE) and root mean squared error (RMSE), followed by GWO and Grid Search. Mahalanobis distance yields more accurate clustering results compared to Euclidean, Manhattan, and Cosine distances. This hybrid machine learning approach significantly enhances risk detection, evaluation, and mitigation in BPR projects, offering a robust framework for proactive decision-making.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_40-A_Hybrid_Machine_Learning_Approach_for_Continuous_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved YOLOv11pose for Posture Estimation of Xinjiang Bactrian Camels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151239</link>
        <id>10.14569/IJACSA.2024.0151239</id>
        <doi>10.14569/IJACSA.2024.0151239</doi>
        <lastModDate>2024-12-30T11:56:54.6300000+00:00</lastModDate>
        
        <creator>Lei Liu</creator>
        
        <creator>Alifu Kurban</creator>
        
        <creator>Yi Liu</creator>
        
        <subject>YOLOv11pose; efficient channel attention; multi-scale pooling structure; DECA-block; Bactrian camel posture estimation; SimSPPF; ECA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Automatic pose estimation of camels is crucial for long-term health monitoring in animal husbandry. Research on camels remains limited, so our study has practical application value for actual camel farms. The high visual similarity among camels poses a considerable challenge for pose estimation. This study proposes YOLOv11pose-Camel, a pose estimation algorithm tailored for Bactrian camels. The algorithm enhances feature extraction with a lightweight channel attention mechanism (ECA) and improves detection accuracy through an efficient multi-scale pooling structure (SimSPPF). Additionally, C3k2 modules in the neck are replaced with dynamic convolution blocks (DECA-blocks) to strengthen global feature extraction. We collected a diverse dataset of Bactrian camel images with farm staff assistance and applied data augmentation. The optimized YOLOv11pose model achieved 94.5% accuracy and 94.1% mAP@0.5 on the Xinjiang Bactrian camel dataset, outperforming the baseline by 2.1% and 2.2%, respectively. The model also maintains a good balance between detection speed and efficiency, demonstrating its potential for practical applications in animal husbandry.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_39-Improved_YOLOv11pose_for_Posture_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Entropy-Driven Optimization of Triangular Fuzzy Neutrosophic MADM for Urban Park Environmental Design Quality Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151238</link>
        <id>10.14569/IJACSA.2024.0151238</id>
        <doi>10.14569/IJACSA.2024.0151238</doi>
        <lastModDate>2024-12-30T11:56:54.6000000+00:00</lastModDate>
        
        <creator>Xing She</creator>
        
        <creator>Xi Xie</creator>
        
        <creator>Peng Xie</creator>
        
        <subject>Multiple-Attribute Decision-Making (MADM) problems; Triangular Fuzzy Neutrosophic Sets (TFNSs); cross-entropy approach; TFNN-CE approach; urban park environmental design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The evaluation of urban park environmental design quality focuses on functionality, aesthetics, ecology, and user experience. Functionality ensures practical facilities, clear zoning, and accessibility. Aesthetics emphasizes visual harmony, cultural integration, and artistic appeal. Ecological quality assesses vegetation, biodiversity, and sustainability, promoting environmental protection. User experience evaluates comfort, safety, inclusivity, and the ability to meet diverse needs. A well-designed park balances these elements, fostering harmony between humans and nature while enhancing public well-being, environmental awareness, and the overall urban living experience. The quality evaluation of urban park environmental design is a multi-attribute decision-making (MADM) problem. In this study, a triangular fuzzy neutrosophic number cross-entropy (TFNN-CE) approach is developed under triangular fuzzy neutrosophic sets (TFNSs). Entropy is employed to determine the attribute weights, and the TFNN-CE approach is applied to MADM under TFNSs. Finally, a numerical example on the quality evaluation of urban park environmental design demonstrates the advantages of the TFNN-CE approach through different comparisons. The major contributions of this study are: (1) entropy is employed to determine the weights under TFNSs; (2) the TFNN-CE approach is constructed under TFNSs; (3) the TFNN-CE approach is put forward for MADM under TFNSs; (4) a numerical example on the quality evaluation of urban park environmental design demonstrates the advantages of the TFNN-CE approach through different comparisons.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_38-Cross_Entropy_Driven_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Number of Video Game Players on the Steam Platform Using Machine Learning and Time Lagged Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151237</link>
        <id>10.14569/IJACSA.2024.0151237</id>
        <doi>10.14569/IJACSA.2024.0151237</doi>
        <lastModDate>2024-12-30T11:56:54.5830000+00:00</lastModDate>
        
        <creator>Gregorius Henry Wirawan</creator>
        
        <creator>Gede Putra Kusuma</creator>
        
        <subject>Video games; regression method; feature selection; time series forecasting; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Predicting player count can provide game developers with valuable insights into players’ behavior and trends in the game population, helping with strategic decision-making. Therefore, it is important for the prediction to be as accurate as possible. Using a game’s metadata can help with prediction accuracy, but metadata stays the same most of the time and lacks temporal context. This study explores the use of machine learning with lagged features on top of metadata and aims to improve accuracy in predicting daily player count, using data from the top 100 games on Steam, one of the biggest game distribution platforms. Several combinations of feature selection methods and machine learning models were tested to find which one has the best performance. Experiments on a dataset from multiple games show that the Random Forest model combined with Pearson’s Correlation Feature Selection gives the best result, with an R2 score of 0.9943 and an average R2 score above 0.9 across all combinations.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_37-Predicting_the_Number_of_Video_Game_Players.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Fault Localization Path of Distribution Network UAVs Based on a Cloud-Pipe-Side-End Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151236</link>
        <id>10.14569/IJACSA.2024.0151236</id>
        <doi>10.14569/IJACSA.2024.0151236</doi>
        <lastModDate>2024-12-30T11:56:54.5670000+00:00</lastModDate>
        
        <creator>Lan Liu</creator>
        
        <creator>Ping Qin</creator>
        
        <creator>Xinqiao Wu</creator>
        
        <creator>Chenrui Zhang</creator>
        
        <subject>Cloud-pipe-edge-end architecture; distribution network UAV; cloud-edge collaboration; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Currently proposed optimization algorithms for cooperative fault inspection by distribution network UAVs struggle to detect fault points quickly and accurately, leading to low inspection efficiency. To address these issues, we investigate a new fault localization path optimization algorithm for distribution network UAVs based on a cloud-pipe-edge-end architecture. This architecture employs multiple drones for coordinated control, allowing for the simultaneous detection of suspected fault areas. Communication links facilitate interaction at both the drone and system levels, enabling the transmission of fault diagnosis information. Fault defects are identified, and the information is analyzed within an edge computing framework to achieve precise fault localization. Experimental results demonstrate that the proposed algorithm significantly enhances detection speed and accuracy, providing robust technical support for UAV operations.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_36-Optimizing_the_Fault_Localization_Path_of_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning-Based Intelligent Employment Management System by Extracting Relevant Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151235</link>
        <id>10.14569/IJACSA.2024.0151235</id>
        <doi>10.14569/IJACSA.2024.0151235</doi>
        <lastModDate>2024-12-30T11:56:54.5370000+00:00</lastModDate>
        
        <creator>Yiming Wang</creator>
        
        <creator>Chi Che</creator>
        
        <subject>Employment management system; recommendation system; feature index; accuracy and employment intention index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In recent years, there has been a significant increase in the number of students seeking to broaden the work opportunities available to college graduates. This study presents an intelligent employment management system that can be used in educational institutions to help students gain a better understanding of their occupations and analyze the sectors in which they will work. In this article, the fundamental concepts of information recommendation are discussed, and a customized recommendation system for entrepreneurship is provided. The fundamental information and personal interest points of college students are represented by feature vectors, which provide solid theoretical support for career planning, employment, and entrepreneurship information recommendations for college students. In conclusion, the performance of the proposed model is analyzed to provide college students with a convenient and fast information recommendation system. This will indirectly improve the employment rate of graduates and provide solutions to the problem of difficult employment.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_35-A_Machine_Learning_Based_Intelligent_Employment_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction and Optimal Control Method of Enterprise Information Flaw Risk Contagion Model Based on the Improved LDA Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151234</link>
        <id>10.14569/IJACSA.2024.0151234</id>
        <doi>10.14569/IJACSA.2024.0151234</doi>
        <lastModDate>2024-12-30T11:56:54.5200000+00:00</lastModDate>
        
        <creator>Jun Wang</creator>
        
        <creator>Zhanhong Zhou</creator>
        
        <subject>Management tone manipulation; enterprise information disclosure; risk contagion; optimal control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In this study, we construct a risk contagion model for corporate information disclosure using complex network methods and incorporate the manipulative perspective of management tone into it. We employ an enhanced LDA model to analyze and refine the relevant data and models presented in this paper. The results of quantitative analysis show that the improved LDA algorithm optimizes the classification decision boundary, making similar samples closer and different samples more dispersed, thus improving the classification accuracy. Additionally, we combine multi-objective evolutionary optimization techniques with an improved particle swarm optimization algorithm to solve the proposed model while incorporating enhancements through the use of a weighted SMOTE algorithm. The quantitative results show that using the weighted SMOTE algorithm to deal with the imbalance in the dataset significantly improves the classification performance. Furthermore, we compare our proposed method with classical algorithms on four real enterprise information disclosure datasets and observe that our approach exhibits higher efficiency and accuracy compared to traditional optimal control methods. Accounting information disclosure reveals moral hazard and adverse selection, alleviating information asymmetry. Transparent information improves the availability of financing, preventing liquidity risk. High-quality information disclosure reduces financing costs, alleviates confidence crises, ensures capital adequacy, and avoids capital outflows. This research constructs a corporate information disclosure risk contagion model, using an improved LDA model and multi-objective evolutionary optimization methods for analysis, showing high efficiency and good accuracy, effectively controlling environmental and related effects.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_34-Construction_and_Optimal_Control_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leiden Coloring Algorithm for Influencer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151233</link>
        <id>10.14569/IJACSA.2024.0151233</id>
        <doi>10.14569/IJACSA.2024.0151233</doi>
        <lastModDate>2024-12-30T11:56:54.5070000+00:00</lastModDate>
        
        <creator>Handrizal</creator>
        
        <creator>Poltak Sihombing</creator>
        
        <creator>Erna Budhiarti Nababan</creator>
        
        <creator>Mohammad Andri Budiman</creator>
        
        <subject>Influencer; Louvain coloring; Leiden; Leiden coloring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In today&#39;s digital age, the role of influencers, especially on social media platforms, has grown significantly. A feature commonly used by business professionals today is follower grouping. However, this feature is limited to identifying influencers based solely on mutual followership, highlighting the need for a more sophisticated approach to influencer detection. This study proposes a novel method for influencer detection that integrates the Leiden coloring algorithm and Degree centrality. This approach leverages network analysis to identify patterns and relationships within large-scale datasets. Initially, the Leiden coloring algorithm is employed to partition the network into various communities, considered potential influencer hubs. Subsequently, Degree centrality is utilized to identify nodes with high connectivity, indicating influential individuals. The proposed method was validated using data crawled from Twitter (X) social media, employing the keyword &quot;GarudaIndonesia.&quot; The data was collected using Tweet-Harvest between January 1, 2020, and October 16, 2024, resulting in a dataset of 22,623 rows. The dataset was subjected to two experimental scenarios: 1,000 and 5,000 rows. Compared to the Louvain coloring method, the proposed approach demonstrated an increase in the modularity value of the Leiden coloring algorithm by 0.0306, a reduction in processing time by 14.4848 seconds, and a decrease in the number of communities by 1,290.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_33-Leiden_Coloring_Algorithm_for_Influencer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Ontology to Represent Domain Knowledge of Attention Deficit Hyperactivity Disorder (ADHD): A Conceptual Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151232</link>
        <id>10.14569/IJACSA.2024.0151232</id>
        <doi>10.14569/IJACSA.2024.0151232</doi>
        <lastModDate>2024-12-30T11:56:54.4730000+00:00</lastModDate>
        
        <creator>Shahad Mansour Alsaedi</creator>
        
        <creator>Aishah Alsobhi</creator>
        
        <creator>Hind Bitar</creator>
        
        <subject>Conceptual model; ontology; ADHD; knowledge engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Attention deficit/hyperactivity disorder (ADHD) represents a highly heterogeneous and complex medical domain with numerous multidisciplinary research areas. Despite the rising number of studies on the pathophysiology of ADHD, the available information in the ADHD domain is still scattered and disconnected. This research study mainly aims to develop a conceptual model of ADHD by applying knowledge engineering processes to structure the domain knowledge, elucidating key concepts and their interrelationships. The methodology for developing the conceptual model is derived from established practices in ontology construction. It adopts a hybrid approach, integrating principles from prominent methodologies such as Ontology Development 101, the Uschold and King methodology, and METHONTOLOGY. The proposed ADHD conceptual model links various aspects of ADHD including subtypes, symptoms, behaviors, diagnostic criteria, treatment, risk factors, comorbidities, and patient profile. Comprising eight top-level classes and highlighting 13 key relationships, it establishes connections between symptoms and recommended treatments, as well as symptoms and their diverse manifestations, risk factors, ADHD subtypes, and potential comorbidities. While the model captures a broad range of ADHD-related concepts, it has certain limitations. It does not extensively address genetic or neurobiological mechanisms, nor does it capture cultural and contextual variations in ADHD manifestations. These limitations highlight opportunities for future expansion, such as incorporating real-world data and diverse demographic contexts. Nevertheless, the model developed in this study is well-suited to serve as a cornerstone for constructing a comprehensive ADHD domain knowledge ontology.
Ontologies play a crucial role as a layer for transferring knowledge and serve as a foundation for developing advanced systems, such as decision-support tools and expert systems, to enhance ADHD research and clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_32-Towards_an_Ontology_to_Represent_Domain_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Model for Crowd Density Classification in Hajj Video Frames</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151231</link>
        <id>10.14569/IJACSA.2024.0151231</id>
        <doi>10.14569/IJACSA.2024.0151231</doi>
        <lastModDate>2024-12-30T11:56:54.4430000+00:00</lastModDate>
        
        <creator>Afnan A. Shah</creator>
        
        <subject>Hajj; moderate crowd; overcrowded; very dense crowd; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Managing the massive annual gatherings of Hajj and Umrah presents significant challenges, particularly as the Saudi government aims to increase the number of pilgrims. Currently, around two million pilgrims attend Hajj and 26 million attend Umrah, making crowd control, especially in critical areas like the Grand Mosque during Tawaf, a major concern. Additional risks arise in managing dense crowds at key sites such as Arafat, where the potential for stampedes, fires, and pandemics poses serious threats to public safety. This research proposes a machine learning model to classify crowd density in video frames recorded during Hajj into three levels: moderate crowd, overcrowded, and very dense crowd, with a flashing red light to alert organizers in real-time when a very dense crowd is detected. While current research efforts in processing Hajj surveillance videos focus solely on using CNNs to detect abnormal behaviors, this research focuses more on high-risk crowds that can lead to disasters. Hazardous crowd conditions require a robust method, as incorrect classification could trigger unnecessary alerts and government intervention, while failure to classify could result in disaster. The proposed model integrates Local Binary Pattern (LBP) texture analysis, which enhances feature extraction for differentiating crowd density levels, along with edge density and area-based features. The model was tested on the KAU-Smart-Crowd &#39;HAJJv2&#39; dataset, which contains 18 videos from various key locations during Hajj including &#39;Massaa&#39;, &#39;Jamarat&#39;, &#39;Arafat&#39; and &#39;Tawaf&#39;. The model achieved an accuracy rate of 87% with a 2.14% error percentage (misclassification rate), demonstrating its ability to detect and classify various crowd conditions effectively. This contributes to enhanced crowd management and safety during large-scale events like Hajj.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_31-A_Machine_Learning_Model_for_Crowd_Density_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing User Comfort in Virtual Environments for Effective Stress Therapy: Design Considerations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151230</link>
        <id>10.14569/IJACSA.2024.0151230</id>
        <doi>10.14569/IJACSA.2024.0151230</doi>
        <lastModDate>2024-12-30T11:56:54.4130000+00:00</lastModDate>
        
        <creator>Farhah Amaliya Zaharuddin</creator>
        
        <creator>Nazrita Ibrahim</creator>
        
        <creator>Azmi Mohd Yusof</creator>
        
        <subject>Virtual environment design; virtual reality; stress therapy; user comfort</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Mental stress has emerged as a widespread concern in modern society, impacting individuals from diverse demographic backgrounds. Therefore, exploring effective methods for therapy, such as virtual environments tailored for stress management, is vital for advancing mental health and improving coping strategies. Prioritising user comfort in the design of virtual environments is essential for enhancing their efficacy in alleviating stress. By considering four design aspects of virtual environments that influence user comfort: (i) visual clarity, (ii) safety features, (iii) cognitive preparedness, and (iv) social support, this study intends to (i) evaluate the effectiveness of these four user-centered design elements in facilitating stress reduction and (ii) explore the underlying rationale behind their stress-reducing properties. This study utilised a mixed-methods approach comprising (i) experiments, (ii) questionnaires, and (iii) interviews. Following evaluation with the Depression Anxiety Stress Scale (DASS), 40 participants (10 men and 30 women) were chosen from the 55 healthy adults aged 20 to 60 who volunteered for the study. The findings validated the efficacy of all four design aspects in enhancing users&#39; comfort during therapeutic sessions in virtual environments. This study not only offers important insights into the importance of user-centered design in creating virtual environments for stress management, where comfort markedly improves therapy outcomes, but also contributes valuable knowledge to the fields of mental health and human-computer interaction, paving the way for further exploration of innovative therapeutic solutions for mental stress.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_30-Enhancing_User_Comfort_in_Virtual_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction and Optimization of Multi-Scenario Autonomous Call Rule Models in Emergency Command Scenarios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151229</link>
        <id>10.14569/IJACSA.2024.0151229</id>
        <doi>10.14569/IJACSA.2024.0151229</doi>
        <lastModDate>2024-12-30T11:56:54.3970000+00:00</lastModDate>
        
        <creator>Weiyan Zheng</creator>
        
        <creator>Chaoyue Zhu</creator>
        
        <creator>Di Huang</creator>
        
        <creator>Bin Zhou</creator>
        
        <creator>Xingping Yan</creator>
        
        <creator>Panxia Chen</creator>
        
        <subject>Digital signal processing algorithm; dual tone multi-frequency signal detection algorithm; fire; autonomous call model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In response to the slow processing speed, weak anti-interference, and low accuracy of autonomous call models in current emergency command scenarios, this research focuses on the fire scenario, aiming to improve emergency response efficiency through technological innovation. The research innovatively integrates a digital signal processing algorithm and a dual-tone multi-frequency (DTMF) signal detection algorithm to develop a hybrid algorithm. Then, a novel autonomous call model based on the hybrid algorithm is constructed. The comparative experimental results indicated that the accuracy of the hybrid algorithm was 0.9 and the error rate was 0.05, better than those of the comparison algorithms. The average accuracy and comprehensive performance score of the model were 0.95 and 97 points, respectively, both better than the comparison models. The results confirm that the autonomous call model proposed in this study can accurately and quickly judge emergency scenarios and handle calls, providing new ideas and a theoretical basis for emergency command and rescue in fires and other disasters, with broad application prospects.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_29-Construction_and_Optimization_of_Multi_Scenario_Autonomous_Call.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Local Channel Attention and Focused Feature Modulation for Wind Turbine Blade Defect Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151228</link>
        <id>10.14569/IJACSA.2024.0151228</id>
        <doi>10.14569/IJACSA.2024.0151228</doi>
        <lastModDate>2024-12-30T11:56:54.3670000+00:00</lastModDate>
        
        <creator>Zheng Cao</creator>
        
        <creator>Rundong He</creator>
        
        <creator>Shaofei Zhang</creator>
        
        <creator>Zhaoyang Qi</creator>
        
        <creator>Sa Li</creator>
        
        <creator>Tong Liu</creator>
        
        <creator>Yue Li</creator>
        
        <subject>Fan blades; YOLO; attention mechanism; defect detection; inner-IoU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In the wind power industry, the health state of wind turbine blades is directly related to power generation efficiency and the safe operation of the equipment. In order to solve the problems of low efficiency and insufficient accuracy in traditional detection methods, this paper proposes a wind turbine blade defect detection algorithm that integrates local channel attention and focal feature modulation. The algorithm first introduces the Mixed Local Channel Attention (MLCA) mechanism into the C2f module of the backbone network in YOLOv8 to enhance the backbone network&#39;s extraction of key features. Then the Focal Feature Modulation (FFM) module replaces the original SPPF module in YOLOv8 to further aggregate global contextual features at different levels of granularity. Finally, in the Neck part, the progressive feature pyramid AFPN structure is used to enhance the multi-scale feature fusion capability of the model, which in turn improves the accuracy of small object detection. The experimental results show that the proposed algorithm achieves an accuracy of 82.5%, a mAP50 of 78.6%, and 8.5 GFLOPS. In wind turbine blade defect detection, it delivers higher detection performance and better real-time performance than traditional methods, effectively identifies common defects such as cracks, corrosion, and abrasion, and exhibits strong robustness and application value.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_28-Integrating_Local_Channel_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable AI-Driven Chatbot System for Heart Disease Prediction Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151227</link>
        <id>10.14569/IJACSA.2024.0151227</id>
        <doi>10.14569/IJACSA.2024.0151227</doi>
        <lastModDate>2024-12-30T11:56:54.3500000+00:00</lastModDate>
        
        <creator>Salman Muneer</creator>
        
        <creator>Taher M. Ghazal</creator>
        
        <creator>Tahir Alyas</creator>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>Sagheer Abbas</creator>
        
        <creator>Omar AlZoubi</creator>
        
        <creator>Oualid Ali</creator>
        
        <subject>Heart disease prediction; machine learning; chatbot system; XAI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Heart disease (HD) continues to rank as the top cause of morbidity and mortality worldwide, underscoring the importance of accurate prediction for effective intervention and prevention strategies. This research develops a novel explainable AI (XAI)-driven chatbot system for HD prediction, combining cutting-edge machine learning (ML) algorithms with advanced XAI techniques. The work evaluates approaches including Random Forest (RF), Decision Tree (DT), and Bagging-Quantum Support Vector Classifier (QSVC). The RF approach achieves the best performance among them, with 92.00% accuracy, 91.97% sensitivity, 56.81% specificity, an 8.00% miss rate, and 99.93% precision. SHAP and LIME supply the XAI methods that explain the chatbot&#39;s predictions, fostering user trust and understanding. This approach demonstrates the potential of seamlessly integrating explanations into a wide range of web or mobile healthcare applications. Future work will extend the model to predict other diseases and improve the explanation of those predictions using more advanced XAI approaches.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_27-Explainable_AI_Driven_Chatbot_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Smart Financial Management Research in Shared Perspective: A CiteSpace-Based Analysis Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151226</link>
        <id>10.14569/IJACSA.2024.0151226</id>
        <doi>10.14569/IJACSA.2024.0151226</doi>
        <lastModDate>2024-12-30T11:56:54.3330000+00:00</lastModDate>
        
        <creator>Rongxiu Zhao</creator>
        
        <creator>Duochang Tang</creator>
        
        <subject>Smart finance; financial management; financial sharing; bibliometrics; CiteSpace</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>At a time when information technology is advancing by leaps and bounds, smart financial management has become a hotspot of common concern in both academic and practical circles. This paper systematically traces the research development of smart financial management under the shared vision using the CiteSpace bibliometric analysis method. We select the relevant literature in the Web of Science database over the 10 years from 2014 to 2023 as the research object, set appropriate parameter values and time slices, and conduct in-depth analyses of keyword co-occurrence, author cooperation networks, keyword clustering, burst keywords, and time intervals to identify the research hotspots and evolutionary paths of smart financial management research under the shared vision. The study finds that research in this field shows clear stage characteristics, shaped by technological progress, industry demand, and social change, and can be divided into five stages along the development curve: construction of the basic framework, development of the model system, change of behavioral patterns, personalized recommendations and risks, and the deepening role of the Internet. As scholars dig deeper into the theoretical logic, deepen interdisciplinary research, and apply emerging technologies, this emerging field gains new impetus and new directions, making smart financial management healthier and more sustainable.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_26-Development_of_Smart_Financial_Management_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Transfer Learning for Diagnosing Teeth Using Panoramic X-rays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151225</link>
        <id>10.14569/IJACSA.2024.0151225</id>
        <doi>10.14569/IJACSA.2024.0151225</doi>
        <lastModDate>2024-12-30T11:56:54.3030000+00:00</lastModDate>
        
        <creator>M. M. EL-GAYAR</creator>
        
        <subject>Machine learning; deep learning; dental diagnosis; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The increasing focus on oral diseases has highlighted the need for automated diagnostic processes. Dental panoramic X-rays, commonly used in diagnosis, benefit from advancements in deep learning for efficient disease detection. The DENTEX Challenge 2023 aimed to enhance the automatic detection of abnormal teeth and their enumeration from these X-rays. We propose a unified technique that combines direct classification with a hybrid approach, integrating deep learning and traditional classifiers. Our method integrates segmentation and detection models to identify abnormal teeth accurately. Among various models, the Vision Transformer (ViT) achieved the highest accuracy of 97% using both approaches. The hybrid framework, combining modified U-Net with a Support Vector Machine, reached 99% accuracy with fewer parameters, demonstrating its suitability for clinical applications where efficiency is crucial. These results underscore the potential of AI in improving dental diagnostics.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_25-Hybrid_Transfer_Learning_for_Diagnosing_Teeth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multimodal Data Scraping Tool for Collecting Authentic Islamic Text Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151224</link>
        <id>10.14569/IJACSA.2024.0151224</id>
        <doi>10.14569/IJACSA.2024.0151224</doi>
        <lastModDate>2024-12-30T11:56:54.2870000+00:00</lastModDate>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Mohammad Ali Humayun</creator>
        
        <creator>Waqas Nawaz</creator>
        
        <subject>Web scraping; Islamic knowledge; machine learning; natural language processing; question and answering; AI chatbots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Decisions based on accurate knowledge are widely agreed to provide ample opportunities in different walks of life. Machine learning and natural language processing (NLP) systems, such as Large Language Models, may draw on unrecognized sources of Islamic content to fuel their predictive models, which can lead to incorrect judgments and rulings. This article presents an automated method with four distinct algorithms for text extraction from static websites, dynamic websites, YouTube videos with transcripts, and, via speech-to-text conversion, videos without transcripts, particularly targeting Islamic knowledge text. The tool is tested by collecting a reliable Islamic knowledge dataset from authentic sources in Saudi Arabia. We scraped Islamic content in Arabic from text websites of prominent scholars and YouTube channels administered by five authorized agencies in Saudi Arabia, including the general authority for the affairs of the grand mosque and the prophet’s mosque and charitable foundations in Saudi Arabia. For websites, text data were scraped using Python tools for static and dynamic web scraping such as Beautiful Soup and Selenium. For YouTube channels, data were scraped from existing transcripts or transcribed using automatic speech recognition tools. The final Islamic content dataset comprises 31,225 records from regulated sources. Our Islamic knowledge dataset can be used to develop accurate Islamic question answering, AI chatbots, and other NLP systems.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_24-A_Multimodal_Data_Scraping_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-Based LSTM for Stock Price Prediction Using Twitter Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151223</link>
        <id>10.14569/IJACSA.2024.0151223</id>
        <doi>10.14569/IJACSA.2024.0151223</doi>
        <lastModDate>2024-12-30T11:56:54.2570000+00:00</lastModDate>
        
        <creator>Shimaa Ouf</creator>
        
        <creator>Mona El Hawary</creator>
        
        <creator>Amal Aboutabl</creator>
        
        <creator>Sherif Adel</creator>
        
        <subject>Sentiment analysis; stocks price prediction; correlation; natural language processing (NLP); machine learning model; LSTM; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Numerous economic, political, and social factors make stock price prediction challenging and unpredictable. This paper develops an artificial intelligence (AI) model for stock price prediction using LSTM and XGBoost techniques on three stocks: Apple, Google, and Tesla. It aims to measure the impact of combining sentiment analysis with historical data, showing how much people&#39;s opinions can move the stock market. The proposed model computes sentiment scores using natural language processing (NLP) techniques and merges them with historical data by date. The RMSE, R&#178;, and MAE metrics are used to evaluate the model&#39;s performance. Integrating sentiment data yields a significant improvement and higher accuracy compared to historical data alone, enhancing the model and providing investors and the financial sector with valuable information and insights. Both XGBoost and LSTM proved effective for stock price prediction, with XGBoost outperforming LSTM.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_23-A_Deep_Learning_Based_LSTM_for_Stock_Price_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FKMU: K-Means Under-Sampling for Data Imbalance in Predicting TF-Target Genes Interactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151222</link>
        <id>10.14569/IJACSA.2024.0151222</id>
        <doi>10.14569/IJACSA.2024.0151222</doi>
        <lastModDate>2024-12-30T11:56:54.2400000+00:00</lastModDate>
        
        <creator>Thanh Tuoi Le</creator>
        
        <creator>Xuan Tho Dang</creator>
        
        <subject>K-means clustering; imbalanced data; TF-target gene interactions; heterogeneous network; meta-path</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Identifying interactions between transcription factors (TFs) and target genes is critical for understanding molecular mechanisms in biology and disease. Traditional experimental approaches are often costly and not scalable. We introduce FKMU, a K-means-based under-sampling method designed to address data imbalance in predicting TF-target interactions. By selecting low-frequency TF samples within each cluster and optimizing the balance ratio to 1:1 between known and unknown samples, FKMU significantly improves prediction accuracy for unobserved interactions. Integrated with a deep learning model that uses random walk sampling and skip-gram embeddings, FKMU achieves an average AUC of 0.9388 &#177; 0.0045 through five-fold cross-validation, outperforming state-of-the-art methods. This approach facilitates accurate and large-scale predictions of TF-target interactions, providing a robust tool for molecular biology research.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_22-FKMU_K_means_Under_Sampling_for_Data_Imbalance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Albument-NAS: An Enhanced Bone Fracture Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151221</link>
        <id>10.14569/IJACSA.2024.0151221</id>
        <doi>10.14569/IJACSA.2024.0151221</doi>
        <lastModDate>2024-12-30T11:56:54.2230000+00:00</lastModDate>
        
        <creator>Evandiaz Fedora</creator>
        
        <creator>Alexander Agung Santoso Gunawan</creator>
        
        <subject>Albumentation; augmentation; bone fracture; deep learning; object detection; YOLO-NAS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Diagnosing fracture locations accurately is challenging, as it heavily depends on the radiologist&#39;s expertise; image quality, especially with minor fractures, can limit precision, highlighting the need for automated methods. Although a large volume of data is available for observation, many datasets lack annotated labels, and manually labeling this data would be highly time-consuming. This research introduces Albument-NAS, a technique that combines the One Shot Detector (OSD) model with the Albumentation image augmentation approach to enhance both speed and accuracy in detecting fracture locations. Albument-NAS achieved a mAP@50 of 83.5%, precision of 87%, and recall of 65.7% on the GRAZPEDWRI dataset, a collection of pediatric wrist injury X-rays, significantly outperforming the previous state-of-the-art model&#39;s mAP@50 of 63.8%. These results establish a new benchmark in fracture detection, illustrating the advantages of combining augmentation techniques with advanced detection models to overcome challenges in medical image analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_21-Albument_NAS_An_Enhanced_Bone_Fracture_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Prediction of Cancer Driver Genes with the Application of Graph Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151220</link>
        <id>10.14569/IJACSA.2024.0151220</id>
        <doi>10.14569/IJACSA.2024.0151220</doi>
        <lastModDate>2024-12-30T11:56:54.1930000+00:00</lastModDate>
        
        <creator>Noor Uddin Qureshi</creator>
        
        <creator>Usman Amjad</creator>
        
        <creator>Saima Hassan</creator>
        
        <creator>Kashif Saleem</creator>
        
        <subject>Graph neural network; cancer driver genes; prediction; personalized medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Graph Neural Networks (GNNs) have emerged as a promising tool in cancer genomics research due to their ability to capture the structural information and interactions between genes in a network, enabling the prediction of cancer driver genes. This systematic literature review assesses the capabilities and challenges of GNNs in predicting cancer driver genes by synthesizing findings from relevant papers and research. It focuses on the effectiveness of GNN-based algorithms for cancer-related tasks such as cancer gene identification, cancer progression analysis, prediction, and driver mutation identification. Moreover, the paper highlights the need to improve omics data integration, formulate personalized medicine models, and strengthen the interpretability of GNNs for clinical purposes. Overall, the utilization of GNNs in clinical practice has significant potential to improve diagnostics and treatment procedures.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_20-Systematic_Review_of_Prediction_of_Cancer_Driver_Genes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Knowledge Management Model: Enhancing Knowledge Creation with Zack Gap, Brand Equity, and Data Mining in the Sports Business</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151219</link>
        <id>10.14569/IJACSA.2024.0151219</id>
        <doi>10.14569/IJACSA.2024.0151219</doi>
        <lastModDate>2024-12-30T11:56:54.1770000+00:00</lastModDate>
        
        <creator>Fransiska Prihatini Sihotang</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Dian Palupi Rini</creator>
        
        <creator>Samsuryadi</creator>
        
        <subject>SECI model; zack model; data mining; brand equity; sport business</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This research improves the Socialization, Externalization, Combination, and Internalization (SECI) knowledge management model by combining it with Zack&#39;s knowledge gap model, the brand equity concept, and data mining. Zack&#39;s model is incorporated into the SECI model to identify the gap between the knowledge the organization has and the knowledge it should possess, and data mining techniques are applied to determine that gap. The uniqueness of this study lies in the externalization and combination phases of the SECI model. In externalization, &quot;what the firm must know&quot; is added; for this, we compile a questionnaire adopting brand equity constructs and distribute it to the athletes. In combination, &quot;what the firm knows&quot; is added, using a database already owned by sports business management. These modifications, combining both models with data mining, yield a new knowledge management model for the sports business sector. The new model gives sports business management valuable knowledge to build strategies and increase their competitiveness in the sports market. In addition, other service businesses besides sports can apply this new model to improve their knowledge management and, in turn, their marketing strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_19-New_Knowledge_Management_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Gap in Microservices: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151218</link>
        <id>10.14569/IJACSA.2024.0151218</id>
        <doi>10.14569/IJACSA.2024.0151218</doi>
        <lastModDate>2024-12-30T11:56:54.1630000+00:00</lastModDate>
        
        <creator>Nurman Rasyid Panusunan Hutasuhut</creator>
        
        <creator>Mochamad Gani Amri</creator>
        
        <creator>Rizal Fathoni Aji</creator>
        
        <subject>Microservice security; cyber-attacks; container; security standards; access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The growing importance of microservices architecture has raised concerns about its security despite a rise in publications addressing various aspects of microservices. Security issues are particularly critical in microservices due to their complex and distributed nature, which makes them vulnerable to various types of cyber-attacks. This study aims to fill the gap in systematic investigations into microservice security by reviewing current state-of-the-art solutions and models. A total of 487 papers were analyzed, with the final selection refined to 87 relevant articles using a snowball method. This approach ensures that the focus remains on security issues, particularly those identified post-2020. However, there is still a significant lack of dedicated security standards or comprehensive models specifically designed for microservices. Key findings highlight the vulnerabilities of container-based applications, the evolving nature of cyber-attacks, and the critical need for effective access control. Moreover, a substantial knowledge gap exists between academia and industry practitioners, which compounds the challenges of securing microservices. This study emphasizes the need for more focused research on security models and guidelines to address the unique vulnerabilities of microservices and facilitate their secure integration into critical applications across various domains.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_18-Security_Gap_in_Microservices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing: Enhancing or Compromising Accounting Data Reliability and Credibility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151217</link>
        <id>10.14569/IJACSA.2024.0151217</id>
        <doi>10.14569/IJACSA.2024.0151217</doi>
        <lastModDate>2024-12-30T11:56:54.1470000+00:00</lastModDate>
        
        <creator>Mohammed Shaban Thaher</creator>
        
        <subject>Cloud computing; information security; infrastructure as a service; platform as a service; software as a service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Business development is intrinsically tied to the evolution of accounting systems, and in today’s digital economy, automation has become indispensable despite increasing setup and maintenance costs. Cloud computing emerges as a promising solution, offering cost reduction and greater flexibility in accounting processes. This paper investigates the influence of cloud technology on accounting practices, emphasizing how IT advancements automate document preparation, streamline data entry, and create new opportunities through cloud services and online platforms. However, cloud adoption is not without its challenges, particularly in the areas of information security and implementation. This study delves into the benefits of cloud-based accounting, with a focus on ensuring data reliability and integrity, while providing practical guidance for secure adoption. By transitioning to cloud systems, organizations can standardize and optimize IT resources. Lastly, the paper outlines strategies to ensure the secure and efficient operation of cloud-based accounting systems within organizations.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_17-Cloud_Computing_Enhancing_or_Compromising_Accounting_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Chronic Obstructive Pulmonary Disease Using ML and DL Approaches and Feature Fusion of X-Ray Image and Patient History</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151216</link>
        <id>10.14569/IJACSA.2024.0151216</id>
        <doi>10.14569/IJACSA.2024.0151216</doi>
        <lastModDate>2024-12-30T11:56:54.1170000+00:00</lastModDate>
        
        <creator>Fatema Kabir</creator>
        
        <creator>Nahida Akter</creator>
        
        <creator>Md. Kamrul Hasan</creator>
        
        <creator>Md. Tofael Ahmed</creator>
        
        <creator>Mariam Akter</creator>
        
        <subject>Chronic obstructive pulmonary disease; COPD; COPD healthcare; advanced monitoring system; COPD early detection; respiratory disease; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>By 2030, chronic obstructive pulmonary disease (COPD) is expected to become one of the top three causes of death and a leading contributor to illness globally. COPD is a debilitating respiratory disease caused by smoking-related airway inflammation, leading to breathing difficulties. Our COPD Healthcare Monitoring System for early detection addresses this critical need by leveraging advanced Machine Learning (ML) and Deep Learning (DL) technologies. Unlike previous studies that rely predominantly on image datasets alone, our monitoring system utilizes both image and text datasets, offering a more comprehensive approach. Importantly, we manually curated our dataset, ensuring its uniqueness and reliability, a feature lacking in the existing literature. Despite the use of popular models like nnUnet, Cx-Net, and V-net in other papers, our model outperformed them: XGBoost led with a score of 0.92, while deep learning models such as VGG16, VGG19, and ResNet50 scored between 0.85 and 0.89, showcasing their efficacy in COPD detection. By combining these techniques, our system offers real-time patient data analysis for early detection and management. This approach, coupled with our meticulously curated dataset, promises improved patient outcomes and quality of life. Overall, our study represents a significant advancement in COPD research, paving the way for more accurate diagnosis and personalized treatment strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_16-Predicting_Chronic_Obstructive_Pulmonary_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Laser Distance Measuring and Image Calibration for Robot Walking Using Mean Shift Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151215</link>
        <id>10.14569/IJACSA.2024.0151215</id>
        <doi>10.14569/IJACSA.2024.0151215</doi>
        <lastModDate>2024-12-30T11:56:54.1000000+00:00</lastModDate>
        
        <creator>Rujipan Kosarat</creator>
        
        <creator>Anan Wongjan</creator>
        
        <subject>Laser distance; image calibration; mean shift algorithm; LabVIEW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In this research, we measure the physical distance between the robot and its surroundings using a laser distance measuring device that we developed, designed controllers for, and tested operationally. The distance is recorded with a USB camera, and an LDMSB board is integrated into the laser distance measuring design; both parts are fastened to the robot&#39;s underside. The experiment is then developed in LabVIEW. The mean shift method enables us to move the robot&#39;s position by relocating the laser-based distance measurement device and capturing a photo at each location. To record that area, we perform a perspective camera calibration, which allows the camera system&#39;s values to be set or adjusted and provides visual assistance to ensure the viewing angle is precisely aligned with the intended view angle. The laser measurement results ranged from one to fifteen meters, and the laser-based device achieved 99.25% accuracy. Each of the 10 calibration locations achieved a precision of 94.03%.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_15-Laser_Distance_Measuring_and_Image_Calibration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Planning for Laser Cutting Based on Thermal Field Ant Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151214</link>
        <id>10.14569/IJACSA.2024.0151214</id>
        <doi>10.14569/IJACSA.2024.0151214</doi>
        <lastModDate>2024-12-30T11:56:54.0830000+00:00</lastModDate>
        
        <creator>Junjie GE</creator>
        
        <creator>Guangfa ZHANG</creator>
        
        <creator>Tian CHEN</creator>
        
        <subject>Laser cutting; path planning; ant colony algorithm; thermal field control method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>In laser cutting technology, path planning is the key to optimizing cutting quality. Traditional ant colony optimization path planning does not prevent excessive heat effects during processing. This paper addresses heat accumulation during drilling by introducing a heat factor and a heat threshold into the traditional ant colony algorithm. The heat factor and threshold dynamically control heating and cooling during path planning, and the heat factor is incorporated into the local pheromone update. An improved 2-opt algorithm that also uses the heat factor then optimizes the path in parallel, yielding the proposed thermal field ant colony algorithm. Simulation experiments and actual cutting results show that the proposed algorithm outperforms the traditional and improved ant colony algorithms in reducing heat accumulation while ensuring fewer empty paths, improving laser cutting efficiency and quality.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_14-Path_Planning_for_Laser_Cutting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Ensemble Method for Healthcare Asset Mapping Using Geographical Information System and Hyperspectral Images of Tirupati Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151213</link>
        <id>10.14569/IJACSA.2024.0151213</id>
        <doi>10.14569/IJACSA.2024.0151213</doi>
        <lastModDate>2024-12-30T11:56:54.0670000+00:00</lastModDate>
        
        <creator>P. Bhargavi</creator>
        
        <creator>T. Sarath</creator>
        
        <creator>Gopichand G</creator>
        
        <creator>G V Ramesh Babu</creator>
        
        <creator>T Haritha</creator>
        
        <creator>A. Vijaya Krishna</creator>
        
        <subject>Geographical information system; hyperspectral image; remote sensing images; big data analytics; deep ensemble methods; healthcare asset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The ever-increasing capabilities of deep learning for image analysis and recognition have encouraged researchers to investigate the potential benefits of merging Hyperspectral Images (HSI) and Geographic Information Systems (GIS) with deep learning in the healthcare industry. Healthcare is an ever-changing sector that constantly adopts new technologies to improve decision-making and patient service. This research examines the role that GIS and Remote Sensing (RS) play in modern healthcare and their significance. By delivering data from remote locations and enabling spatial analysis, the combination of RS and GIS has transformed healthcare. Because GIS and RS produce vast quantities of data, big data analytics is helpful for storing and retrieving it. This analysis opens up new possibilities for better healthcare planning, disease management, and environmental health assessment based on the study area&#39;s population. This paper addresses healthcare asset mapping based on the population and a hyperspectral image of the study area, the Tirupati district, by applying the Deep Ensemble method.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_13-Deep_Ensemble_Method_for_Healthcare_Asset_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of On-Premises Version of RAG with AI Agent for Framework Selection Together with Dify and DSL as Well as Ollama for LLM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151212</link>
        <id>10.14569/IJACSA.2024.0151212</id>
        <doi>10.14569/IJACSA.2024.0151212</doi>
        <lastModDate>2024-12-30T11:56:54.0370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>RAG (Retrieval-Augmented Generation); API (Application Programming Interface); Lambda; EC2 (Amazon Elastic Compute Cloud); AI agent; Dify; DSL (domain specific language); ollama; YAML (YAML Ain&#39;t Markup Language)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Currently, most RAG systems are cloud-based, including those built on Bedrock. However, there is a trend of returning from the cloud to on-premises deployments due to security concerns. In addition, it is common for APIs to call Lambda or EC2 for data access, but it is not easy to select the optimal framework depending on the data attributes. For this reason, the author devised a system for selecting the optimal framework using an AI agent. Furthermore, the author chose Dify, which is based on a DSL, as the user interface for the on-premises version of RAG, and Ollama to run a large-scale language model that can likewise be installed on-premises. The author also considered the hardware specifications required to build this RAG and confirmed the feasibility of implementation.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_12-Design_of_On_Premises_Version_of_RAG_with_AI_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Evaluation of Machine Learning Techniques for Obstructive Sleep Apnea Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151211</link>
        <id>10.14569/IJACSA.2024.0151211</id>
        <doi>10.14569/IJACSA.2024.0151211</doi>
        <lastModDate>2024-12-30T11:56:54.0200000+00:00</lastModDate>
        
        <creator>Alaa Sheta</creator>
        
        <creator>Walaa H. Elashmawi</creator>
        
        <creator>Adel Djellal</creator>
        
        <creator>Malik Braik</creator>
        
        <creator>Salim Surani</creator>
        
        <creator>Sultan Aljahdali</creator>
        
        <creator>Shyam Subramanian</creator>
        
        <creator>Parth S. Patel</creator>
        
        <subject>Machine learning; obstructive sleep apnea; random forest classifier; oversampling; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Obstructive Sleep Apnea (OSA) is a prevalent health issue affecting 10-25% of adults in the United States (US) and is associated with significant economic consequences. Machine learning methods have shown promise in improving the efficiency and accessibility of OSA diagnoses, thus reducing the need for expensive and challenging tests. A comparative analysis of Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting (GB), Gaussian Naive Bayes (GNB), Random Forest (RF), and K-Nearest Neighbors (KNN) algorithms was conducted to predict OSA. To improve the predictive accuracy of these models, Random Oversampling was applied to address the imbalance in the dataset, ensuring a more equitable representation of the minority class. Patient demographics, including age, sex, height, weight, BMI, and neck circumference, were employed as predictive features in the models. The RF classifier provided training and testing accuracies of 87% and 65%, respectively, and a Receiver Operating Characteristic (ROC) score of 87%. The GB and SVM classifiers also demonstrated good performance on the test dataset. The results of this study show that machine learning techniques may be effectively used to diagnose OSA, with the Random Forest classifier demonstrating the best results.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_11-Comprehensive_Evaluation_of_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Mobile Learning App for Financial Literacy in Young People Using Gamification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151210</link>
        <id>10.14569/IJACSA.2024.0151210</id>
        <doi>10.14569/IJACSA.2024.0151210</doi>
        <lastModDate>2024-12-30T11:56:54.0070000+00:00</lastModDate>
        
        <creator>Angie Nayeli Ruiz-Carhuamaca</creator>
        
        <creator>Juliana Alexandra Yauricasa-Seguil</creator>
        
        <creator>Juan Carlos Morales-Arevalo</creator>
        
        <subject>Financial literacy; gamification; financial education; challenge education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>This research paper addresses the issue of insufficient financial literacy among young people, a challenge that affects their ability to make informed financial decisions. A survey was conducted to assess the current state of financial literacy among young people; its results show a significant gap in the understanding of key concepts needed to manage their finances, which limits their economic and social development. Based on these findings, an interactive, gamified design aimed at strengthening financial literacy among young people is proposed. This proposal includes wireframes that structure a mobile application, integrating playful elements and educational challenges to promote user participation in the learning process. The design methodology employed focuses on user experience, ensuring that the tool is accessible and engaging. It is expected that this proposal, based on the survey results, will not only increase the understanding of financial concepts but also motivate young people to apply this knowledge in their daily lives, thus contributing to greater financial independence and a better quality of life.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_10-Design_of_a_Mobile_Learning_App_for_Financial_Literacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Design Aimed at Proper Order Management in SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151209</link>
        <id>10.14569/IJACSA.2024.0151209</id>
        <doi>10.14569/IJACSA.2024.0151209</doi>
        <lastModDate>2024-12-30T11:56:53.9730000+00:00</lastModDate>
        
        <creator>Linett Velasquez Jimenez</creator>
        
        <creator>Herbert Grados Espinoza</creator>
        
        <creator>Santiago Rubi&#241;os Jimenez</creator>
        
        <creator>Juan Grados Gamarra</creator>
        
        <creator>Claudia Marrujo-Ingunza</creator>
        
        <subject>Design thinking; SMEs; order management; software design; usability; user perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The design and evaluation of order management software oriented toward SMEs in Lima are presented. Using Design Thinking, a prototype was developed focusing on usability, design, and user satisfaction. Through a Likert-scale survey of 308 SME employees, perceptions of operational efficiency and user experience were measured. The results show high acceptance and highlight the intuitiveness of the system. However, areas such as loading speed and e-commerce functionality require future improvement. This study establishes a framework for similar technological tools in commercial sectors.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_9-Software_Design_Aimed_at_Proper_Order_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Heart of Artificial Intelligence: A Review of Machine Learning for Heart Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151208</link>
        <id>10.14569/IJACSA.2024.0151208</id>
        <doi>10.14569/IJACSA.2024.0151208</doi>
        <lastModDate>2024-12-30T11:56:53.9600000+00:00</lastModDate>
        
        <creator>Brayan R. Neciosup-Bola&#241;os</creator>
        
        <creator>Segundo E. Cieza-Mostacero</creator>
        
        <subject>Machine learning; heart disease; prediction; systematic review; artificial intelligence; algorithms; literature; heart</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Heart disease is one of the leading causes of death worldwide, affecting the engine of the human body: the heart. Its incidence is greater in underdeveloped countries such as Angola, Bangladesh, Ethiopia, and Haiti, and obtaining accurate results from risk factors manually is a complex task. Therefore, this systematic review analyzed 32 articles by applying the PRISMA methodology, which allowed us to evaluate the suitability of the methods and, consequently, the reliability of their results. The results of the study showed that the algorithm with the greatest accuracy in predicting these heart diseases is Random Forest. The most commonly used metrics to evaluate machine learning algorithms are sensitivity, F1 score, precision, and accuracy, with sensitivity highlighted as the primary metric. The most predominant independent variables for predicting heart disease in machine learning models are age, sex, cholesterol, diabetes, and chest pain. Finally, the most commonly used data split is 70% for training and 30% for testing, which achieves high accuracy in the algorithm prediction process. This study offers a promising path for the prevention and timely treatment of this disease through the use of machine learning algorithms. In the future, these advances could be applied in a system accessible to all people, thus improving access to healthcare and saving lives.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_8-The_Heart_of_Artificial_Intelligence_A_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Enabled Vision Transformer for Automated Weed Detection: Advancing Innovation in Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151207</link>
        <id>10.14569/IJACSA.2024.0151207</id>
        <doi>10.14569/IJACSA.2024.0151207</doi>
        <lastModDate>2024-12-30T11:56:53.9430000+00:00</lastModDate>
        
        <creator>Shafqaat Ahmad</creator>
        
        <creator>Zhaojie Chen</creator>
        
        <creator>Aqsa</creator>
        
        <creator>Sunaia Ikram</creator>
        
        <creator>Amna Ikram</creator>
        
        <subject>Precision agriculture; weed detection; vision transformer; UAV imagery; crop-weed classification; AI-Tractors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Precision agriculture is focusing on automated weed detection in order to improve the use of inputs and minimize the application of herbicides. This paper presents a Vision Transformer (ViT) model for weed detection in crop fields that tackles difficulties stemming from the resemblance between crops and weeds, especially in complex, diversified settings. The model was trained on pixel-level annotations of high-resolution UAV imagery shot over an organic carrot field, with crop, weed, and background classes. Because the self-attention mechanism in ViTs captures long-range spatial dependencies, this approach distinguishes crop rows from inter-row weed clusters very well. To address class imbalance and improve generalization across patches, data preprocessing techniques such as patch extraction and augmentation were used. The effectiveness of the proposed approach is confirmed by a classification accuracy of 89.4%, exceeding baseline models such as U-Net and FCN under practical application conditions. The proposed ViT-based approach marks an improvement in crop management and offers the prospect of selective weed control in support of more sustainable agriculture. The model can also be integrated into AI-based tractors for real-time weed management in the field.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_7-AI_Enabled_Vision_Transformer_for_Automated_Weed_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Wealth Dynamics: A Comprehensive Big Data Analysis of Wealth Accumulation Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151206</link>
        <id>10.14569/IJACSA.2024.0151206</id>
        <doi>10.14569/IJACSA.2024.0151206</doi>
        <lastModDate>2024-12-30T11:56:53.9270000+00:00</lastModDate>
        
        <creator>Karim Mohammed Rezaul</creator>
        
        <creator>Mifta Uddin Khan</creator>
        
        <creator>Nnamdi Williams David</creator>
        
        <creator>Kazy Noor e Alam Siddiquee</creator>
        
        <creator>Tajnuva Jannat</creator>
        
        <creator>Md Shabiul Islam</creator>
        
        <subject>Big data; python; billionaires; net worth; wealth accumulation; wealth inheritance; geographic location; statistical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The study offers a thorough examination of the accumulation and distribution of wealth among billionaires through the application of big data analytics methodologies. This research centres on an extensive dataset, &quot;Billionaires.csv&quot; [19], which encompasses a range of information about billionaires from diverse nations, including their demographic characteristics, company particulars, sources of wealth, and other details. The study aims to gain a deeper understanding of the determinants that shape the net worth of billionaires and to detect trends in the worldwide financial system that can guide entrepreneurial ventures and investment opportunities. The dataset is analysed and visualised using Python tools and libraries, including but not limited to Pandas, NumPy, Matplotlib, and Seaborn. The results of this study offer valuable insights into the distribution of wealth among billionaires, the factors that contribute to industry success, gender disparities, age demographics, and other factors that influence the accumulation of billionaire wealth.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_6-Exploring_Wealth_Dynamics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Unemployment Rate for Multiple Countries Using a New Method for Data Structuring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151205</link>
        <id>10.14569/IJACSA.2024.0151205</id>
        <doi>10.14569/IJACSA.2024.0151205</doi>
        <lastModDate>2024-12-30T11:56:53.8800000+00:00</lastModDate>
        
        <creator>Amjad M. Monir Aljinbaz</creator>
        
        <creator>Mohamad Mahmoud Al Rahhal</creator>
        
        <subject>Unemployment rate; artificial neural network; time series; hybrid model; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Forecasting the Unemployment Rate (UR) plays a key role in shaping economic policies and development strategies. While most research focuses on predicting UR for individual countries, there has been limited progress in creating a unified forecasting model that works across multiple countries. Traditional time series methods are usually designed for single-country data, making it difficult to develop a model that handles data from various regions. This study presents a new data structuring technique that divides time series into smaller segments, enabling the development of a single model applicable to 44 countries using various economic indicators. Four forecasting models were tested: an artificial neural network (ANN), a hybrid ANN with machine learning (ML), a genetic algorithm-optimized ANN (ANN-GA), and a linear regression model. The linear regression model, which used lagged UR values, delivered the best results with an R&#178; of 0.964 and 89.8% accuracy. The ANN-GA model also performed strongly, achieving an R&#178; of 0.945 and 85.1% accuracy. These results highlight the effectiveness of the proposed data structuring method, demonstrating that a single model can accurately forecast multiple time series across different regions.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_5-Forecasting_Unemployment_Rate_for_Multiple_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Real-Time Intrusion Detection Framework Using Federated Transfer Learning in Large-Scale IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151204</link>
        <id>10.14569/IJACSA.2024.0151204</id>
        <doi>10.14569/IJACSA.2024.0151204</doi>
        <lastModDate>2024-12-30T11:56:53.8500000+00:00</lastModDate>
        
        <creator>Khawlah Harahsheh</creator>
        
        <creator>Malek Alzaqebah</creator>
        
        <creator>Chung-Hao Chen</creator>
        
        <subject>Intrusion detection systems; federated learning; transfer learning; cybersecurity; scalability; resource constraints; machine learning; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The exponential growth of Internet of Things (IoT) devices has introduced critical security challenges, particularly in scalability, privacy, and resource constraints. Traditional centralized intrusion detection systems (IDS) struggle to address these issues effectively. To overcome these limitations, this study proposes a novel Federated Transfer Learning (FTL)-based intrusion detection framework tailored for large-scale IoT networks. By integrating Federated Learning (FL) with Transfer Learning (TL), the framework enhances detection capabilities while ensuring data privacy and reducing communication overhead. The hybrid model incorporates convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), attention mechanisms, and ensemble learning. To address the class imbalance, Synthetic Minority Over-sampling Technique (SMOTE) was employed, while optimization techniques such as hyperparameter tuning, regularization, and batch normalization further improved model performance. Experimental evaluations on five diverse IoT datasets, i.e. Bot-IoT, N-BaIoT, TON_IoT, CICIDS 2017, and NSL-KDD, demonstrate that the framework achieves high accuracy (92%-94%) while maintaining scalability, computational efficiency, and data privacy. This approach provides a robust solution to real-time intrusion detection in resource-constrained IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_4-An_Enhanced_Real_Time_Intrusion_Detection_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trustworthiness in Conversational Agents: Patterns in User personality-Based Behavior Towards Chatbots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151203</link>
        <id>10.14569/IJACSA.2024.0151203</id>
        <doi>10.14569/IJACSA.2024.0151203</doi>
        <lastModDate>2024-12-30T11:56:53.8330000+00:00</lastModDate>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Merary Rangel</creator>
        
        <creator>Mark Schmidt</creator>
        
        <creator>Pavel Safonov</creator>
        
        <subject>Trust; personality; human-centered AI design; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>As artificial intelligence conversational agent (CA) usage increases, research has explored how to improve chatbot user experience by focusing on user personality. This work aims to help designers and industry professionals understand user trust related to personality in CAs for better human-centered AI design. To achieve this goal, the study investigates interactions between users with diverse personalities and AI chatbots. We measured participant personalities with a Hogan and Champagne (1980) typology assessment, categorizing personality dimensions into extraversion vs. intuition (EN), extraversion vs. sensing (ES), introversion vs. intuition (IN), and introversion vs. sensing (IS) groups. Twenty-nine participants were assigned two tasks engaging with three different AI chatbots: Cleverbot, Kuki, and Replika. Their conversations with the chatbots were analyzed using the open-coding method, and coding schemes were developed to create frequency tables. Results showed that EN participants perceived the chatbot as highly trustworthy, especially when it was helpful. The ES participants, on the other hand, often engaged in brief conversations regardless of whether the chatbot was helpful, leading to low trust in the chatbot. The IN users experienced mixed outcomes: some perceived conversations as trustworthy despite unhelpful chatbot responses, while others had helpful conversations yet perceived low trustworthiness. The IS participants typically had the longest conversations, often giving the chatbots high trust scores. This study indicates that users with diverse personalities have different perceptions of trust toward AI conversational agents. This research interprets the interaction patterns and trends of users with different personalities, offering designers guidelines that emphasize AI UX design.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_3-Trustworthiness_in_Conversational_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Privacy-Preserving Detection of Sickle Blood Cells Using Deep Learning and Cryptographic Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151202</link>
        <id>10.14569/IJACSA.2024.0151202</id>
        <doi>10.14569/IJACSA.2024.0151202</doi>
        <lastModDate>2024-12-30T11:56:53.8170000+00:00</lastModDate>
        
        <creator>Kholoud Alotaibi</creator>
        
        <creator>Naser El-Bathy</creator>
        
        <subject>Sickle cells; deep learning; transfer learning; encryption; AES; SOA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>Sickle cell anemia is a hereditary disorder where abnormal hemoglobin causes red blood cells to become rigid and crescent-shaped, obstructing blood flow and leading to severe health complications. Early detection of these abnormal cells is essential for timely treatment and reducing disease progression. Traditional screening methods, though effective, are time-intensive and require skilled technicians, making them less suitable for large-scale implementation. This paper presents a conceptual framework that integrates transfer learning, cryptographic algorithms, and service-oriented architecture to provide a secure and efficient solution for sickle cell detection. The framework uses MobileNet, a lightweight deep learning model, enhanced with transfer learning to identify sickle cells from medical images while operating on hardware-constrained environments. Advanced Encryption Standards (AES) ensure sensitive patient data remains secure during transmission and storage, while a service-oriented architecture facilitates seamless interaction between system components. Although not yet implemented, the framework serves as a foundation for future empirical testing, addressing the need for accurate detection, data privacy, and system efficiency in healthcare applications.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_2-A_Framework_for_Privacy_Preserving_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>algoTRIC: Symmetric and Asymmetric Encryption Algorithms for Cryptography – A Comparative Analysis in AI Era</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151201</link>
        <id>10.14569/IJACSA.2024.0151201</id>
        <doi>10.14569/IJACSA.2024.0151201</doi>
        <lastModDate>2024-12-30T11:56:53.7870000+00:00</lastModDate>
        
        <creator>Naresh Kshetri</creator>
        
        <creator>Mir Mehedi Rahman</creator>
        
        <creator>Md Masud Rana</creator>
        
        <creator>Omar Faruq Osama</creator>
        
        <creator>James Hutson</creator>
        
        <subject>Algorithms; analysis; artificial intelligence; asymmetric encryption; cryptography; cybersecurity; symmetric encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(12), 2024</description>
        <description>The increasing integration of artificial intelligence (AI) within cybersecurity has necessitated stronger encryption methods to ensure data security. This paper presents a comparative analysis of symmetric (SE) and asymmetric encryption (AE) algorithms, focusing on their role in securing sensitive information in AI-driven environments. Through an in-depth study of various encryption algorithms such as AES, RSA, and others, this research evaluates the efficiency, complexity, and security of these algorithms within modern cybersecurity frameworks. Utilizing both qualitative and quantitative analysis, this research explores the historical evolution of encryption algorithms and their growing relevance in AI applications. The comparison of SE and AE algorithms focuses on key factors such as processing speed, scalability, and security resilience in the face of evolving threats. Special attention is given to how these algorithms are integrated into AI systems and how they manage the challenges posed by large-scale data processing in multi-agent environments. Our results highlight that while SE algorithms demonstrate high-speed performance and lower computational demands, AE algorithms provide superior security, particularly in scenarios requiring enhanced encryption for AI-based networks. The paper concludes by addressing the security concerns that encryption algorithms must tackle in the age of AI and outlines future research directions aimed at enhancing encryption techniques for cybersecurity.</description>
        <description>http://thesai.org/Downloads/Volume15No12/Paper_1-algoTRIC_Symmetric_and_Asymmetric_Encryption_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing a Machine Learning-Based Library Information Management System: A CATALYST-Based Framework Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151062</link>
        <id>10.14569/IJACSA.2024.0151062</id>
        <doi>10.14569/IJACSA.2024.0151062</doi>
        <lastModDate>2024-12-12T08:05:33.2500000+00:00</lastModDate>
        
        <creator>Chunmei Ma</creator>
        
        <subject>Library management system; book availability prediction; machine learning algorithms; university libraries; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This research proposes using machine learning as a foundational element for enhancing information retrieval procedures in university libraries. This initiative will enhance students&#39; comprehension of the topic and improve the integration of instructional resources. To determine which method is the most effective, the performance of each methodology is compared. The author utilizes two separate methodologies in machine learning. The efficacy of inventory management in university libraries is enhanced by the use of forecasting algorithms. The implementation of these two algorithms was conducted within the framework of the CATALYST technology platform. This strategy enhances the efficacy of information retrieval for diverse book needs.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_62-Implementing_a_Machine_Learning_Based_Library_Information_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Analysis of Intelligent Control System for Excavators in Large Mining Plants Based on Electronic Control Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151176</link>
        <id>10.14569/IJACSA.2024.0151176</id>
        <doi>10.14569/IJACSA.2024.0151176</doi>
        <lastModDate>2024-12-11T06:11:07.3870000+00:00</lastModDate>
        
        <creator>Lei Sun</creator>
        
        <subject>Genetic Algorithm; Particle Swarm Optimization; proportional-integral-differential; mining system; intelligent control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>With the increasing demand for large-scale mine equipment and the complexity of the operating environment, the intelligent trajectory planning and control of mine systems becomes very important. This paper proposes a proportional-integral-differential (PID) feedback controller combined with adaptive improvement. This controller combines Genetic Algorithm and Particle Swarm Optimization technology to enhance the ability of the excavator’s intelligent control system and improve the control accuracy, response speed, and robustness under different working conditions. The results showed that the constructed PID controller improved the average constraint performance by 2.5% through quintic interpolation, and the power consumption was relatively small. The trajectory prediction error of different joints was less than 5% and the displacement and pressure curves of the hydraulic cylinder were stable and symmetrical. The accuracy of the proposed algorithm was 94% and quickly converged to 0.05 after 50 iterations, which was 18.5%, 15.3%, and 17.5% higher than the other three algorithms, respectively. Therefore, the proposed method has high reliability and adaptability in anti-interference ability, trajectory planning progress, and optimization efficiency, and it provides a better solution for intelligent control of the excavator excavation system.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_76-Simulation_Analysis_of_Intelligent_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Threat Intelligence Strategies for Cybersecurity Awareness Using MADM and Hybrid GraphNet-Bipolar Fuzzy Rough Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511136</link>
        <id>10.14569/IJACSA.2024.01511136</id>
        <doi>10.14569/IJACSA.2024.01511136</doi>
        <lastModDate>2024-11-29T12:18:20.5930000+00:00</lastModDate>
        
        <creator>Qian Zhang</creator>
        
        <subject>Cybersecurity awareness; threat intelligence; MADM framework; BFRGD-Net; hybrid model; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Advanced threat detection systems are needed more than ever as cyber-attacks grow increasingly sophisticated. A novel cybersecurity model uses Bipolar Fuzzy Rough Sets, Graph Neural Networks, and dense network (BFRGD-Net) architectures to identify threats with unmatched accuracy and speed. The approach optimizes threat detection using Dynamic Range Realignment, anomaly-driven feature enhancement, and a hybrid feature selection strategy on a comprehensive Texas dataset of 66 months of real-world network activity. With 97.8% accuracy, 97.5% F1-score, and 98.3% AUC, BFRGD-Net sets new standards in the field. Threat Detection Sensitivity shows the model’s capacity to find uncommon, high-severity threats, while Balanced Risk Detection Efficiency provides fast, accurate threat detection. The model has strong correlations and the highest statistical metrics scores compared to other techniques. Extensive simulations demonstrate the model’s capacity to discern threat levels, attack kinds, and response techniques. BFRGD-Net revolutionizes cybersecurity by seamlessly merging cutting-edge machine learning with specific insights. Its advanced threat detection and classification engine reduces false negatives and enables proactive critical infrastructure protection in real-time. The model’s adaptability to various attack situations makes it vital for cybersecurity resilience in a digital environment.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_136-Optimizing_Threat_Intelligence_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Recognition and Localization of Industrial Robots Based on SLAM Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511135</link>
        <id>10.14569/IJACSA.2024.01511135</id>
        <doi>10.14569/IJACSA.2024.01511135</doi>
        <lastModDate>2024-11-29T12:18:20.5600000+00:00</lastModDate>
        
        <creator>Wei Cui</creator>
        
        <creator>Yuefan Zhao</creator>
        
        <creator>Litao Sun</creator>
        
        <subject>SLAM algorithm; industrial robot; visual recognition; location</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The front-end feature matching module of traditional SLAM systems, whether based on sparse or dense feature points, struggles to generate accurate camera trajectories and scene reconstruction results. To address this problem, the author studied a fast reconstruction algorithm for arbitrary paths based on V-SLAM. Improved feature matching algorithms are used to match feature points accurately, improving the accuracy of sparse scene reconstruction and camera trajectory recovery; the back-end optimization thread adopts segmented optimization matching to reduce the computational burden of reconstruction, and the performance of the V-SLAM system is further improved through parallel processing. Comparisons of matching results and camera trajectory errors showed that the improved V-SLAM algorithm can quickly recover the camera trajectory and reconstruct the scene. With the development of multi-sensor collaborative coupling and multi-view fusion technology, the proposed V-SLAM method can add virtual 3D objects to real scenes: the system extracts feature points from the frame in real time, detects planar objects in the scene, and ensures that multiple virtual objects maintain geometric consistency with the actual scene. In the experiment, two objects were added to the virtual scene; users can interactively scale and place objects without being affected by camera movements, ensuring consistency between the objects and the real scene.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_135-Visual_Recognition_and_Localization_of_Industrial_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Digital Virtual Clothing Display System Based on LDA Mathematical Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511133</link>
        <id>10.14569/IJACSA.2024.01511133</id>
        <doi>10.14569/IJACSA.2024.01511133</doi>
        <lastModDate>2024-11-29T12:18:20.5470000+00:00</lastModDate>
        
        <creator>Zhao Wu</creator>
        
        <creator>Qingyuan He</creator>
        
        <subject>Mathematical model; virtual technology; clothing display</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>To investigate intelligent digital virtual clothing display systems based on mathematical models, the author proposes a study of an intelligent digital virtual clothing display system built on the LDA mathematical model. The author first analyzes the realization of the clothing matching function and, based on expert knowledge and historical experience, selects the interplay between human skin color and clothing as the influencing factor for clothing color matching and style matching. Secondly, based on the characteristics of different skin tones and knowledge of clothing color matching, a set of clothing matching recommendation plans is presented to recommend suitable colors for users. Additionally, clothing style recommendations and choices are provided, divided into upper and lower garments, allowing users to choose more independently while the system itself supplies reference matching knowledge. Finally, the clothing matching rules were converted into computer image data. Through analysis of the current market and existing research results, a clothing matching display system based on VR technology was implemented: alongside recommended clothing matching solutions, a three-dimensional space was constructed to display clothing, allowing users to view the effects of clothing combinations according to their own choices and providing a new approach for users in the clothing industry with this demand.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_133-Intelligent_Digital_Virtual_Clothing_Display_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Alzheimer&#39;s Detection: Leveraging ADNI Data and Large Language Models for High-Accuracy Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511134</link>
        <id>10.14569/IJACSA.2024.01511134</id>
        <doi>10.14569/IJACSA.2024.01511134</doi>
        <lastModDate>2024-11-29T12:18:20.5470000+00:00</lastModDate>
        
        <creator>Hassan Almalki</creator>
        
        <creator>Alaa O. Khadidos</creator>
        
        <creator>Nawaf Alhebaishi</creator>
        
        <subject>Alzheimer; dementia; LLMs; ChatGPT; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Alzheimer&#39;s disease (AD), the most common type of dementia, is expected to affect 152 million people by 2050, emphasizing the importance of early diagnosis. This study uses the Alzheimer&#39;s Disease Neuroimaging Initiative (ADNI) dataset, combining cognitive tests, biomarkers, demographic details, and genetic data to build predictive models. Using large language models (LLMs), specifically ChatGPT 3.5, we achieved high classification accuracy, with ROC AUC values of 0.98 for cognitively normal (CN) individuals, 0.99 for dementia, and 0.98 for mild cognitive impairment (MCI). These findings show that LLMs can handle complex data quickly and accurately. By focusing on numerical and text-based data instead of just imaging, this method provides a cost-effective and accessible option for diagnosing AD. Adding genetic information improves the predictions, reflecting the important role of genetics in AD risk. This study highlights the potential of combining different types of data with advanced machine learning and LSTM to improve early AD diagnosis. Future research should explore more ways to combine data and test different machine learning models to further enhance diagnostic tools.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_134-Enhancing_Alzheimers_Detection_Leveraging_ADNI_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Information Hiding Processing Based on Deep Neural Network Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511132</link>
        <id>10.14569/IJACSA.2024.01511132</id>
        <doi>10.14569/IJACSA.2024.01511132</doi>
        <lastModDate>2024-11-29T12:18:20.5300000+00:00</lastModDate>
        
        <creator>Zhe Zhang</creator>
        
        <subject>Image information hiding; neural networks; system design; image acquisition; information processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In order to hide and extract image information more effectively, a deep neural network-based, computer-aided image information hiding method is proposed. The hardware design of the system includes the selection of the main control chip, the design of the parallel processing structure, and the design of the Ethernet communication circuit. Software design includes an image information hiding module, an image information extraction module, and a carrier image processing module. The operation of the image information processing system based on the reversible information hiding algorithm is realized through the system hardware and software design. The experimental results show that the carrier image processing degree of the designed system is much higher than that of the traditional system, reaching a maximum of 91%, indicating that the carrier image processing performance of the designed system is better. The proposed scheme can improve the security of secret information while ensuring the quality of the stego image. Follow-up studies will continue to explore the combination of adversarial learning with various traditional embedding algorithms to further improve the concealment of image-hiding algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_132-Image_Information_Hiding_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unveiling Hidden Variables in Adversarial Attack Transferability on Pre-Trained Models for COVID-19 Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511131</link>
        <id>10.14569/IJACSA.2024.01511131</id>
        <doi>10.14569/IJACSA.2024.01511131</doi>
        <lastModDate>2024-11-29T12:18:20.5130000+00:00</lastModDate>
        
        <creator>Dua’a Akhtom</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Chew XinYing</creator>
        
        <subject>Adversarial attack; advanced persistent threat; pre-trained model; robust DL; transferable attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Adversarial attacks represent a significant threat to the robustness and reliability of deep learning models, particularly in high-stakes domains such as medical diagnostics. Advanced Persistent Threat (APT) attacks, characterized by their stealth, complexity, and persistence, exploit adversarial examples to undermine the integrity of AI-driven healthcare systems, posing severe risks to their operational security. This study examines the transferability of adversarial attacks across pre-trained models deployed for COVID-19 diagnosis. Using two prominent convolutional neural networks (CNNs), ResNet50 and EfficientNet-B0, this study explores critical factors that influence the transferability of adversarial perturbations, a vulnerability that could be strategically exploited by APT attackers. By investigating the roles of model architecture, pre-training dataset characteristics, and adversarial attack mechanisms, this research provides valuable insights into the propagation of adversarial examples in medical imaging. Experimental results demonstrate that specific model architectures exhibit varying levels of susceptibility to adversarial transferability. ResNet50, with its deeper layers and residual connections, displayed enhanced robustness against adversarial perturbations, whereas EfficientNet-B0, due to its distinct feature extraction strategy, was more vulnerable to perturbations crafted using ResNet50’s gradients. These findings underscore the influence of architectural design on a model’s resilience to adversarial attacks. By advancing the understanding of adversarial robustness in medical AI applications, this study offers actionable guidelines for mitigating the risks associated with adversarial examples and emerging threats, such as APT attacks, in real-world healthcare scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_131-Unveiling_Hidden_Variables_in_Adversarial_Attack_Transferability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach to pH Monitoring: Mango Leaf Colorimetry in Aquaculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511130</link>
        <id>10.14569/IJACSA.2024.01511130</id>
        <doi>10.14569/IJACSA.2024.01511130</doi>
        <lastModDate>2024-11-29T12:18:20.5000000+00:00</lastModDate>
        
        <creator>Hajar Rastegari</creator>
        
        <creator>Romi Fadilah Rahmat</creator>
        
        <creator>Farhad Nadi</creator>
        
        <subject>Aquaculture; machine learning; XGboost; water quality; sustainable aquaculture practices; water quality monitoring; mango leaf extract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Maintaining optimal water quality is crucial for successful aquaculture. This necessitates careful management of various water quality parameters, including pH levels within their ideal range. There is growing interest in creating affordable optical pH sensors that provide accurate readings across a wide range of pH values, yet developing sensors that are both accurate and cost-effective remains a challenge. To this end, this study demonstrates the use of machine learning with mango leaf extract as a colorimetric indicator to achieve accurate and cost-effective pH estimation for aquaculture practices. Mango leaf extract was utilized as the pH indicator, covering a range from 1 to 13. RGB color extraction and Exif data were used for image analysis to extract relevant features. The XGBoost algorithm, optimized through stepwise hyperparameter tuning with early stopping, was used to train three different models on this dataset to predict pH values. Three classification models, namely Y3, Y5, and Y13, were trained with 3, 5, and 13 output classes, respectively. The overall precision achieved by each model was 0.94, 0.85, and 0.72, respectively. This demonstrates the potential of this approach for developing a user-friendly yet cost-effective sensor for pH detection applicable in aquaculture practices. The proposed method could provide aquaculture farmers with an affordable and intelligent smartphone-based pH detection tool, enhancing water quality management while reducing the need for expensive instruments and eliminating additional costly and time-consuming experimental work, thereby contributing to the sustainability of aquaculture practices.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_130-A_Machine_Learning_Approach_to_pH_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized SMS Spam Detection Using SVM-DistilBERT and Voting Classifier: A Comparative Study on the Impact of Lemmatization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511129</link>
        <id>10.14569/IJACSA.2024.01511129</id>
        <doi>10.14569/IJACSA.2024.01511129</doi>
        <lastModDate>2024-11-29T12:18:20.4830000+00:00</lastModDate>
        
        <creator>Sinar Nadhif Ilyasa</creator>
        
        <creator>Alaa Omar Khadidos</creator>
        
        <subject>SMS spam detection; Support Vector Machine (SVM); DistilBERT; hyperparameter optimization; LIME</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The rapid growth of digital communication has led to a surge in spam messages, particularly through Short Message Service (SMS). These unsolicited messages pose risks such as phishing and malware, necessitating robust detection mechanisms. This study focuses on a comparative analysis of machine learning models for SMS spam detection, with a particular emphasis on a proposed SVM-DistilBERT model enhanced by a voting classifier. Using the UCI SMS Spam dataset, the models are evaluated based on recall, accuracy, precision, and Receiver Operating Characteristic Area Under the Curve (ROC AUC) scores to assess their effectiveness in correctly identifying spam messages. By leveraging Optuna for hyperparameter optimization, the proposed model achieves superior performance, with an accuracy of 99.6%, surpassing traditional methods like SVM with TF-IDF Bi-gram and AdaBoost, which achieved 98.03%. The study also examines the effects of lemmatization and synonym data augmentation, with lemmatization shown to improve spam detection by reducing feature space redundancy and enhancing semantic understanding. To ensure transparency in decision-making, Local Interpretable Model-Agnostic Explanations (LIME) is applied. The results demonstrate that the optimized SVM-DistilBERT with the voting classifier offers a robust and effective solution for SMS spam filtering.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_129-Optimized_SMS_Spam_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Blockchain Approach for MQTT Security: A Supply Chain Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511128</link>
        <id>10.14569/IJACSA.2024.01511128</id>
        <doi>10.14569/IJACSA.2024.01511128</doi>
        <lastModDate>2024-11-29T12:18:20.4670000+00:00</lastModDate>
        
        <creator>Raouya AKNIN</creator>
        
        <creator>Hind El Makhtoum</creator>
        
        <creator>Youssef Bentaleb</creator>
        
        <subject>IoT; MQTT; blockchain; smart contracts; AI hybrid model; device reputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The use of the MQTT protocol in critical sectors such as healthcare and industry has prompted research into solutions for strengthening its security and protecting it from attacks that are growing exponentially and becoming increasingly sophisticated and difficult to detect. This paper aims to improve the security of the MQTT architecture, ensuring it is resilient to current attacks and adaptable to potential future attacks while considering the constraints of the IoT environment. To achieve this, the proposed architecture is based on the interaction between the AI model, which continuously analyzes device behavior, and smart contracts, which automatically apply appropriate security measures once fraud is detected. A device reputation mechanism is used to prevent malicious devices from rejoining the network. The AI model proposed in this article was initially trained on a set of malicious behaviors using supervised learning. The results show that the detection accuracy achieved 95.97%. This accuracy is expected to improve over time through the integration of unsupervised learning into the architecture, enabling the discovery of new attack patterns and additional parameters for malicious behavior identification. For simulation testing, the architecture was applied to supply chain management as a case study of critical applications, and smart contracts were deployed in the Remix environment. The architecture demonstrated resilience and robustness across various attack scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_128-AI_Blockchain_Approach_for_MQTT_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Hybrid Compact Transformer for COVID-19 Detection from Chest X-Ray</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511127</link>
        <id>10.14569/IJACSA.2024.01511127</id>
        <doi>10.14569/IJACSA.2024.01511127</doi>
        <lastModDate>2024-11-29T12:18:20.4530000+00:00</lastModDate>
        
        <creator>Ghadeer Almoeili</creator>
        
        <creator>Abdenour Bounsiar</creator>
        
        <subject>Deep convolutional neural network; CXR chest X-Ray; COVID-19 pneumonia; vision transformers; compact convolutional transformer; hybrid compact transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>By the end of December 2019, the novel coronavirus 2019 (COVID-19) had become a global pandemic affecting the respiratory system. Scientists began investigating the use of Deep Learning and Convolutional Neural Networks (CNNs) to detect COVID-19 from Chest X-rays (CXRs). One of the main difficulties researchers report in the detection of lung diseases is that radiographic images can indicate that the lungs are abnormal but may fail to specify the exact type of pneumonia; only an expert radiologist can tell the difference based on the shapes and distribution of patches on the affected lungs. CNNs also require large datasets to provide good results. The limited benchmark datasets for COVID-19 in CXR images, especially during the onset of the pandemic, are the main motivation of this research. In this research, we introduce the use of Vision Transformers (ViTs). We consider an updated version of ViT called the Compact Transformer (CT), which was proposed to reduce the expensive computations of the self-attention mechanism in ViT and to escape the big-data paradigm. As a contribution of this study, we propose a Hybrid Compact Transformer (HCT) in which a pretrained CNN is used in place of the convolutional layers in CT. With this hybrid model design, we aim to combine the localization power of CNNs with the generalization power (attention mechanism, or distanced-pixel relations) of ViTs. Based on experimental results using different performance metrics, the Hybrid Compact Transformer is shown to be superior to Compact Transformers and transfer learning models. Our proposed technique enjoys the benefits of both worlds: faster model training due to transfer learning with CNNs, and reduced data requirements due to CT. Combining the localized filters of CNNs with the attention mechanism of CT appears to provide better discrimination between common pneumonia and COVID-19 pneumonia.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_127-Using_Hybrid_Compact_Transformer_for_COVID_19_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AudioMag: A Hybrid Audio Protection Scheme in Multilevel DWT-DCT-SVD Transformations for Data Hiding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511126</link>
        <id>10.14569/IJACSA.2024.01511126</id>
        <doi>10.14569/IJACSA.2024.01511126</doi>
        <lastModDate>2024-11-29T12:18:20.4370000+00:00</lastModDate>
        
        <creator>Jingjin Yu</creator>
        
        <creator>Chai Wen Chuah</creator>
        
        <creator>Rundan Zheng</creator>
        
        <creator>Janaka Alawatugoda</creator>
        
        <subject>Steganography; image steganography; audio steganography; data hiding; stegoentity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Steganography is a technique used to hide data within an image or audio in order to maintain the secrecy of the message being communicated. Several methods are used to achieve this, but commonly the data is hidden within the same type of stegoentity, such as an image within an image or audio within audio. One drawback of hiding data within the same type of entity is that once the security is compromised, the data may be accessible. Therefore, this research proposes hiding audio within an image. The first step is to transform the audio into a hexadecimal value. Next, the hexadecimal value is hidden within shortened uniform resource locators. The uniform resource locators are concatenated, shuffled, and converted into a quick response code. Finally, the quick response code is embedded into an image. The simulation results show the successful hiding of the audio message within the image while maintaining the security and confidentiality of the hidden messages.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_126-AudioMag_A_Hybrid_Audio_Protection_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Multi-Intent Commands of the Virtual Assistant with Low-Resource Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511125</link>
        <id>10.14569/IJACSA.2024.01511125</id>
        <doi>10.14569/IJACSA.2024.01511125</doi>
        <lastModDate>2024-11-29T12:18:20.4200000+00:00</lastModDate>
        
        <creator>Van-Vinh Nguyen</creator>
        
        <creator>Ha Nguyen-Tien</creator>
        
        <creator>Anh-Quan Nguyen-Duc</creator>
        
        <creator>Trung-Kien Vu</creator>
        
        <creator>Cong Pham-Chi</creator>
        
        <creator>Minh-Hieu Pham</creator>
        
        <subject>Vietnamese command corpus; chatbot; virtual assistants; multi-intent command; artificial intelligence; technical drawing; SCADA framework; build semi-automatic data; low-resource languages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Virtual Assistants (VAs) are widely used in many fields. Recently, VAs have been effectively applied in technical drawing tasks, such as in Photoshop and Microsoft Word. Understanding multi-intent commands in VAs poses a significant challenge, especially when the query language is low-resource, like Vietnamese (no training dataset is available for the technical drawing domain), which features complex grammar and a limited domain of usage. In this work, we propose a three-step process to develop a voice assistant capable of understanding multi-intent commands in VAs for low-resource languages, particularly in responding to the SCADA Framework (SF) for performing drawing tasks: (1) for the training dataset, we developed a semi-automatic method for building a labeled command corpus; applying this method to Vietnamese, we built a corpus that includes 3,240 labeled commands; (2) for the multi-intent command processing phase, we introduced a method for splitting multi-intent commands into single-intent commands to enable VAs to perform them more efficiently. By experimenting with the proposed method in Vietnamese, we developed a VA that supports drawing on SF with an accuracy of over 96%. With the results of this study, we can apply them directly to SCADA system products to support the automatic control of technical drawing operations as VAs.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_125-Recognizing_Multi_Intent_Commands_of_the_Virtual_Assistant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced State Monitoring and Fault Diagnosis Method for Intelligent Manufacturing Systems via RXET in Digital Twin Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511124</link>
        <id>10.14569/IJACSA.2024.01511124</id>
        <doi>10.14569/IJACSA.2024.01511124</doi>
        <lastModDate>2024-11-29T12:18:20.4070000+00:00</lastModDate>
        
        <creator>Min Li</creator>
        
        <subject>RXET; fault diagnosis; intelligent manufacturing; transformer-based attention; predictive maintenance; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>To maintain efficiency and continuity in Industry 4.0, intelligent manufacturing systems use enhanced problem detection and condition monitoring. Existing models typically miss uncommon but critical faults, causing expensive downtimes and lost production. ResXEffNet-Transformer (RXET), a hybrid deep learning model, improves defect identification and predictive maintenance by integrating ResNet, Xception, EfficientNet, and Transformer-based attention processes. The algorithm was trained on a five-year Texas industrial dataset using IoT-enabled gear and digital twins. To manage data imbalances and temporal irregularities, a strong preprocessing pipeline included Dynamic Skew Correction, Temporal Outlier Normalization, and Harmonic Temporal Encoding. The Adaptive Statistical Evolutionary Selector (ASES) optimized feature selection using the Stochastic Feature Evaluator (SFE) and Evolutionary Divergence Minimizer (EDM) to increase prediction accuracy. The RXET model beat traditional methods with 98.9% accuracy and 99.2% AUC. Two new performance metrics, the Temporal Fault Detection Index (TFDI) and Fault Detection Variability Coefficient (FDVC), assessed the model’s capacity to identify problems early and consistently across fault types. Simulation findings showed RXET’s superiority in anticipating uncommon but critical faults. Pearson correlation (0.93) and ANOVA (F-statistic: 8.52) validated the model’s robustness. The sensitivity study showed the best performance with moderate learning rates and batch sizes. RXET provides a complete, real-time problem detection solution for intelligent industrial systems, improving predictive maintenance and addressing challenges in Industry 4.0, digital twin technology, IoT, and machine learning. The proposed RXET model enhances operational reliability in intelligent manufacturing and sets a foundation for future advancements in predictive analytics and large-scale industrial automation.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_124-Enhanced_State_Monitoring_and_Fault_Diagnosis_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Adaptive Hybrid Convolutional Transformer Network for Malware Detection in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511123</link>
        <id>10.14569/IJACSA.2024.01511123</id>
        <doi>10.14569/IJACSA.2024.01511123</doi>
        <lastModDate>2024-11-29T12:18:20.3900000+00:00</lastModDate>
        
        <creator>Abdulaleem Ali Almazroi</creator>
        
        <subject>IoT security; malware detection; convolutional transformer network; cybersecurity; machine learning; network anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Many university networks use IoT devices, which increases vulnerability and malware threats. The complex, multi-dimensional structure of IoT network traffic and the imbalance between benign and dangerous data make traditional malware detection techniques ineffective. The Adaptive Hybrid Convolutional Transformer Network (AHCTN) is a novel model that uses CNNs for spatial feature extraction and Transformer networks for global temporal dependencies in IoT data. Unique preprocessing methods like Category Importance Scaling and Logarithmic Skew Compensation handle unbalanced data and severely skewed numerical characteristics. The Unified Feature Selector combines statistical and model-based feature selection methods and guarantees that only the most relevant characteristics are utilized for classification. DWS and LRW handle data imbalance. Our feature engineering approaches, such as Flow Efficiency and Packet Interarrival Consistency, improve prediction accuracy by capturing essential data correlations. The integration of advanced machine learning techniques ensures precise malware classification and enhances cybersecurity by addressing vulnerabilities in IoT-driven academic networks. The AHCTN model was carefully tested using the IoEd-Net dataset, which contains a variety of IoT devices and network activity. The AHCTN outperforms previous models with 98.9% accuracy. It also performs well in Log Loss (0.064), AUC (99.1%), Weighted Temporal Sensitivity (97.1%), and Anomaly Detection Score (96.8%), recognizing uncommon but essential abnormalities in academic network data. These findings demonstrate AHCTN’s robustness and scalability for academic IoT malware detection.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_123-Enhanced_Adaptive_Hybrid_Convolutional_Transformer_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Model for a Healthcare System with Chunk Based RAID Encryption in a Multitenant Blockchain Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511122</link>
        <id>10.14569/IJACSA.2024.01511122</id>
        <doi>10.14569/IJACSA.2024.01511122</doi>
        <lastModDate>2024-11-29T12:18:20.3730000+00:00</lastModDate>
        
        <creator>Bharath Babu S</creator>
        
        <creator>Jothi K R</creator>
        
        <subject>Multi-tenant; chunk-based; RAID; blockchain; healthcare records</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Healthcare informatics has revolutionized data extraction from large datasets. However, applying analytics while protecting sensitive healthcare data is a major challenge. This study addresses this essential issue with a novel methodology for Privacy-Preserving Analytics in Healthcare Records. The multi-tenant blockchain framework uses chunk-based RAID encryption. For the healthcare industry, chunk-based RAID encryption in a multi-tenant blockchain architecture creates a durable, safe, and efficient solution for processing confidential healthcare information. This solution improves data security, integrity, availability, performance, regulatory compliance, and scalability by combining RAID and blockchain technology. Contemporary healthcare systems need these qualities to work well. The approach was implemented in Python, with its supporting libraries, using the VSCode tool. To maintain data security, integrity, and accessibility, a strong healthcare system architecture with chunk-based RAID encryption in a multi-tenant blockchain network requires various advanced technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_122-A_Robust_Model_for_a_Healthcare_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Detection and Tracking Framework for 3D Multi-Object Tracking in Vehicle-Infrastructure Cooperation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511121</link>
        <id>10.14569/IJACSA.2024.01511121</id>
        <doi>10.14569/IJACSA.2024.01511121</doi>
        <lastModDate>2024-11-29T12:18:20.3570000+00:00</lastModDate>
        
        <creator>Tao Hu</creator>
        
        <creator>Ping Wang</creator>
        
        <creator>Xinhong Wang</creator>
        
        <subject>Vehicle-infrastructure cooperative perception; 3D multi-object tracking; XIOU metric; four-stage cascade matching; integrated detection-tracking framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Vehicle-infrastructure cooperative perception has emerged as a promising approach to enhance 3D multi-object tracking by leveraging complementary data from vehicle and infrastructure sensors. However, existing methods face significant challenges, including difficulty in handling occlusions, suboptimal identity association, and inefficiencies in trajectory management, limiting their performance in real-world scenarios. In this paper, we propose a novel vehicle-infrastructure cooperative 3D multi-object tracking framework that addresses these challenges through three key innovations. First, an integrated detection-tracking framework jointly optimizes object detection and tracking, enhancing temporal consistency and reducing errors caused by separately handling the two tasks. Second, the XIOU identity association metric leverages 3D spatial and geometric relationships, ensuring robust object matching even under occlusions. Third, a four-stage cascade matching (FSCM) strategy adaptively manages trajectories by leveraging detection and prediction confidences, enabling accurate tracking in complex environments. Evaluated on the V2X-Seq dataset, our method achieves a MOTA of 57.23 and a MOTP of 74.64, significantly reducing identity switches while ensuring low bandwidth consumption and reliable tracking, highlighting its effectiveness and suitability for real-world deployment.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_121-Integrated_Detection_and_Tracking_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Local Reconstruction Errors for Face Forgery Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511120</link>
        <id>10.14569/IJACSA.2024.01511120</id>
        <doi>10.14569/IJACSA.2024.01511120</doi>
        <lastModDate>2024-11-29T12:18:20.3430000+00:00</lastModDate>
        
        <creator>Haoyu Wu</creator>
        
        <creator>Lingyun Leng</creator>
        
        <creator>Peipeng Yu</creator>
        
        <subject>Face forgery; deepfake detection; local anomalies; generalized detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Although several deepfake detection technologies have achieved high detection accuracy within the training data domain in recent years, there are still limitations in cross-domain generalization. This is due to the model’s ease of fitting the sample distribution of the training data domain and its tendency to detect a specific forgery trace to reach a judgment, rather than capturing generalized forgery traces. In this paper, we propose to learn Local Reconstruction Errors for face forgery detection. The local anomaly traces of a fake face are often mapped using the original real face as a reference; however, the original real face behind a fake face cannot be acquired in a real scenario. Therefore, this solution designs a local reconstruction autoencoder trained with real samples. By masking key areas of the face, the original real face can be reconstructed. Because the autoencoder only learns how to restore the essential parts of a real face using local patches of real samples, it cannot recover the forgery traces or target face information in a fake face. Therefore, the reconstructed image forms a reconstruction difference with the original image. This solution aids the model in detecting local differences in fake faces by producing feature-level local difference attention mappings in the network’s middle layer. A series of experiments demonstrates that this solution has good detection and generalization performance.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_120-Learning_Local_Reconstruction_Errors_for_Face_Forgery_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing EEG Patterns in Functional Food Consumption: The Role of PCA in Decision-Making Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511119</link>
        <id>10.14569/IJACSA.2024.01511119</id>
        <doi>10.14569/IJACSA.2024.01511119</doi>
        <lastModDate>2024-11-29T12:18:20.3270000+00:00</lastModDate>
        
        <creator>Mauro Daniel Castillo Pérez</creator>
        
        <creator>Jesús Jaime Moreno Escobar</creator>
        
        <creator>Verónica de Jesús Pérez Franco</creator>
        
        <creator>Ana Lilia Coria Páez</creator>
        
        <creator>Oswaldo Morales Matamoros</creator>
        
        <subject>EEG analysis; functional foods; decision-making; deep learning; Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Obesity and diabetes are two central reasons for the high rate of cardiovascular disease in this country, largely due to diets rich in ultra-processed foods. In this experiment on functional food taste perception, we trained a supervised model on ordinary consumers' ratings to estimate the taste preferences of a distinct group for which no ratings were available, establishing that decision-making can be modeled through supervised learning. A deep learning neural network architecture is designed to model the decision-making behavior of consumers of functional products. The efficiency of the model can be increased by up to 1.23% by choosing proper values for the remaining hyperparameters, as shown in experiments in which we identify the optimal configuration.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_119-Analyzing_EEG_Patterns_in_Functional_Food_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Predictable are Fitness Landscapes with Machine Learning? A Traveling Salesman Ruggedness Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511118</link>
        <id>10.14569/IJACSA.2024.01511118</id>
        <doi>10.14569/IJACSA.2024.01511118</doi>
        <lastModDate>2024-11-29T12:18:20.3100000+00:00</lastModDate>
        
        <creator>Mohammed El Amrani</creator>
        
        <creator>Khaoula Bouanane</creator>
        
        <creator>Youssef Benadada</creator>
        
        <subject>Fitness landscape analysis; optimization algorithms; machine learning; landscape ruggedness; traveling salesman problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The notion of fitness landscape (FL) has shown promise in terms of optimization. In this paper, we propose a machine learning (ML) prediction approach to quantify FL ruggedness by computing the entropy. The approach aims to build a model that can reveal information about the ruggedness of unseen instances. Its contribution is attractive in many cases, such as black-box optimization and cases where we can rely on the information of small instances to discover the features of larger and more time-consuming ones. The experiment consists of evaluating multiple ML models for predicting the ruggedness of the traveling salesman problem (TSP). The results show that ML can provide acceptable predictions for instances of a similar problem and that, in that case, it can help estimate the ruggedness of large instances. However, the inclusion of several features is necessary to obtain a more predictable landscape, especially when dealing with different TSP instances.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_118-How_Predictable_are_Fitness_Landscapes_with_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Classification of Gait Disorders in Neurodegenerative Diseases Among Older Adults Using ResNet-50</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511117</link>
        <id>10.14569/IJACSA.2024.01511117</id>
        <doi>10.14569/IJACSA.2024.01511117</doi>
        <lastModDate>2024-11-29T12:18:20.2970000+00:00</lastModDate>
        
        <creator>K. A. Rahman</creator>
        
        <creator>E. F. Shair</creator>
        
        <creator>A. R. Abdullah</creator>
        
        <creator>T. H. Lee</creator>
        
        <creator>N. H. Nazmi</creator>
        
        <subject>Gait disorders; neurodegenerative diseases; deep learning; vertical Ground Reaction Force (vGRF); ResNet-50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Gait disorders in older adults, particularly those associated with neurodegenerative diseases (NDDs) such as Parkinson’s Disease (PD), Huntington’s Disease (HD), and Amyotrophic Lateral Sclerosis (ALS), present significant diagnostic challenges. Since these NDDs primarily affect older adults, it is crucial to focus on this population to improve early detection and intervention. This study aimed to classify these gait disorders in individuals aged 50 and above using vertical ground reaction force (vGRF) data. A deep learning model was developed, employing Continuous Wavelet Transform (CWT) for feature extraction, with data augmentation techniques applied to enhance dataset variability and improve model performance. ResNet-50, a deep residual network, was utilized for classification. The model achieved an overall validation accuracy of 95.06%, with class-wise accuracies of 97.14% for ALS vs CO, 92.11% for HD vs CO, and 93.48% for PD vs CO. These findings underscore the potential of combining vGRF data with advanced deep-learning techniques, specifically ResNet-50, to accurately classify gait disorders in older adults, a demographic critically affected by these diseases.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_117-Deep_Learning_Classification_of_Gait_Disorders.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ATG-Net: Improved Feature Pyramid Network for Aerial Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511116</link>
        <id>10.14569/IJACSA.2024.01511116</id>
        <doi>10.14569/IJACSA.2024.01511116</doi>
        <lastModDate>2024-11-29T12:18:20.2800000+00:00</lastModDate>
        
        <creator>Junbao Zheng</creator>
        
        <creator>ChangHui Yang</creator>
        
        <creator>Jiangsheng Gui</creator>
        
        <subject>Object detection; feature pyramid network; adaptive tri-layer weighting; triple feature encoding; global attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Object detection in aerial images is gradually gaining wide attention and application. Given the prevalence of numerous small objects in Unmanned Aerial Vehicle (UAV) aerial images, the extraction of superior fusion features is critical for the detection of small objects. However, feature fusion in many detectors does not fully consider the specific characteristics of the detection task. To obtain suitable features for the detection task, this paper proposes an improved Feature Pyramid Network (FPN) named ATG-Net, which aims to improve the feature fusion capability. Firstly, we propose an Adaptive Tri-Layer Weighting (ATW) module that adaptively assigns weights to each layer of the feature map according to its size and content complexity. Secondly, a Triple Feature Encoding (TFE) module is implemented, which can fuse feature maps from three different scales. Finally, the paper incorporates the Global Attention Mechanism (GAM) into the network, which includes improved channel and spatial attention mechanisms. The experiments are conducted on the VisDrone2020 dataset, and the results show that the network significantly outperforms the baseline detector and a variety of popular object detectors, significantly improving the feature fusion capability of the network and the detection accuracy of small objects.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_116-ATG_Net_Improved_Feature_Pyramid_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enterprise Architecture Framework Selection for Collaborative Freight Transportation Digitalization: A Hybrid FAHP-FTOPSIS Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511115</link>
        <id>10.14569/IJACSA.2024.01511115</id>
        <doi>10.14569/IJACSA.2024.01511115</doi>
        <lastModDate>2024-11-29T12:18:20.2630000+00:00</lastModDate>
        
        <creator>Abdelghani Saoud</creator>
        
        <creator>Adil Bellabdaoui</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <creator>Mohamed Hanine</creator>
        
        <creator>Imran Ashraf</creator>
        
        <subject>Digital transformation; freight transportation; enterprise architecture; multi-criteria decision-making; analytic hierarchy process; fuzzy technique for order of preference by similarity to ideal solution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Collaborative freight transportation plays a crucial role for Logistic Service Providers (LSPs) seeking to enhance profitability and service quality, yet it faces challenges at strategic, operational, and technical levels. Digital transformation creates opportunities to overcome these hurdles by extending collaboration beyond physical logistics to encompass information management and digital transformation. Enterprise Architecture Frameworks (EAFs) offer promising solutions by providing a holistic view of various levels within such ecosystems and ensuring alignment between information systems and strategic objectives. However, selecting the right EAF is a complex and critical step. This study introduces an innovative approach for selecting an Enterprise Architecture (EA) framework to support the development of a collaborative freight transportation platform. It emphasizes the importance of adopting a systematic EA methodology in the digitalization of the freight transportation sector. The decision-making process integrates established techniques such as the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (F-TOPSIS). Applied to a case study involving a Moroccan logistics company, the approach demonstrates effectiveness in framework selection. The study’s findings underscore the method’s significance as a valuable tool for organizations embarking on digital transformation through EA, offering adaptability across diverse industries and contexts.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_115-Enterprise_Architecture_Framework_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LRSA-Hybrid Encryption Method Using Linear Cipher and RSA Algorithm to Conceal the Text Messages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511114</link>
        <id>10.14569/IJACSA.2024.01511114</id>
        <doi>10.14569/IJACSA.2024.01511114</doi>
        <lastModDate>2024-11-29T12:18:20.2500000+00:00</lastModDate>
        
        <creator>Rundan Zheng</creator>
        
        <creator>Chai Wen Chuah</creator>
        
        <creator>Janaka Alawatugoda</creator>
        
        <subject>Confidentiality; data encryption; hybrid encryption; linear cipher; RSA algorithm; Gradatim LRSA; Optimized LRSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Computer science and telecommunications technologies have experienced rapid advancements in recent years to protect sensitive data or information from potential harm, misuse, or destruction. By enhancing data security through various methodologies and algorithms, data can be better protected against attacks that may compromise its confidentiality, particularly in the case of text messages. The linear cipher is one of the earliest forms of cryptographic system; it operates by shifting letters and, while it may not provide the highest level of security, it adds a layer of complexity to the initial encryption process. The Rivest-Shamir-Adleman (RSA) algorithm represents a more advanced and rigorous approach to encryption that resists more sophisticated attacks. The RSA algorithm utilizes the mathematical properties of large prime numbers to establish a secure communication channel. When the two algorithms are combined into a hybrid algorithm for data security, the security of text messages is significantly improved, ensuring their confidentiality during transmission. Hence, this research proposes two hybrid algorithms, namely Gradatim LRSA and Optimized LRSA, which ensure the confidentiality of text messages through encryption and decryption processes. The results also show that Optimized LRSA performs with less computation than Gradatim LRSA.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_114-LRSA_Hybrid_Encryption_Method_Using_Linear_Cipher.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Wearable Technology Selection for Injury Prevention in Ice and Snow Athletes Using Interval-Valued Bipolar Fuzzy Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511113</link>
        <id>10.14569/IJACSA.2024.01511113</id>
        <doi>10.14569/IJACSA.2024.01511113</doi>
        <lastModDate>2024-11-29T12:18:20.2500000+00:00</lastModDate>
        
        <creator>Aichen Li</creator>
        
        <subject>Wearable technology; injury prevention; Interval-Valued Bipolar Fuzzy Programming (IVBFP); Multi-Criteria Decision-Making (MCDM); fuzzy logic; real-time monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The growing importance of wearable technology in ice and snow sports highlights its role in injury prevention, where environmental hazards elevate injury risks. To address this, we propose a decision-making model using interval-valued bipolar fuzzy programming (IVBFP) for the optimal selection of wearable devices focused on athlete safety. The model employs multi-criteria decision-making (MCDM) methods to evaluate critical factors such as comfort, safety, durability, and real-time monitoring. Fuzzy logic enhances the precision and consistency of decision-making. The IVBFP model addresses vital challenges, including the diverse performance metrics of wearable devices and the uncertainty in expert evaluations. In comparison analyses, the model exhibited a 15% enhancement in judgment accuracy and a 12% decrease in uncertainty relative to conventional techniques. The results underscore the model’s proficiency in correctly forecasting devices that mitigate injury risks, providing improved athlete protection. The approach effectively incorporates expert viewpoints and subjective evaluations, diminishing harm risk in simulated and actual datasets. This research is significant both theoretically and practically. It offers a comprehensive framework to guarantee athlete safety in extreme conditions, connecting scholars and practitioners.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_113-Optimizing_Wearable_Technology_Selection_for_Injury_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Semi-Supervised Generative Adversarial Networks to Address Data Scarcity Using Decision Boundary Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511112</link>
        <id>10.14569/IJACSA.2024.01511112</id>
        <doi>10.14569/IJACSA.2024.01511112</doi>
        <lastModDate>2024-11-29T12:18:20.2330000+00:00</lastModDate>
        
        <creator>Mohamed Ouriha</creator>
        
        <creator>Omar El Mansouri</creator>
        
        <creator>Younes Wadiai</creator>
        
        <creator>Boujamaa Nassiri</creator>
        
        <creator>Youssef El Mourabit</creator>
        
        <creator>Youssef El Habouz</creator>
        
        <subject>Decision boundary; convolutional neural network; Generative Adversarial Networks; MNIST; classification; semi-supervised classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Convolutional Neural Networks (CNNs) are widely regarded as one of the most effective solutions for image classification. However, developing high-performing systems with these models typically requires a substantial number of labeled images, which can be difficult to acquire. In image classification tasks, insufficient data often leads to overfitting, a critical issue for deep learning models like CNNs. In this study, we introduce a novel approach to addressing data scarcity by leveraging semi-supervised classification models based on Generative Adversarial Networks (SGAN). Our approach demonstrates significant improvements in both efficiency and performance, as shown by variations in the evolution of decision boundaries and overall accuracy. The analysis of decision boundaries is crucial, as it provides insights into the model’s ability to generalize and effectively classify new data points. Using the MNIST dataset, we show that our approach (SGAN) outperforms CNN methods, even with fewer labeled images. Specifically, we observe that the distance between the images and the decision boundary in our approach is larger than in CNN-based methods, which contributes to greater model stability. Our approach achieves an accuracy of 84%, while the CNN model struggles to exceed 72%.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_112-Leveraging_Semi_Supervised_Generative_Adversarial_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology-Based Intelligent Interactive Knowledge Interface for Groundnut Crop Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511111</link>
        <id>10.14569/IJACSA.2024.01511111</id>
        <doi>10.14569/IJACSA.2024.01511111</doi>
        <lastModDate>2024-11-29T12:18:20.2170000+00:00</lastModDate>
        
        <creator>Purvi H. Bhensdadia</creator>
        
        <creator>C. K. Bhensdadia</creator>
        
        <subject>Agriculture ontology; ontology construction; question answer system; groundnut ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper presents an ontology-based interactive interface designed to provide farmers in Gujarat with information related to groundnut crops. An ontology specific to the groundnut crop was developed and used to create a semantic question-answering (QA) interface. The proposed QA interface converts natural language questions into SPARQL queries and provides answers using the backbone ontology. The overall performance of the system is on par with existing semantic QA systems, with an overall accuracy of 80%.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_111-An_Ontology_Based_Intelligent_Interactive_Knowledge_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Generation of Comparison Charts of Similar GitHub Repositories from Readme Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511110</link>
        <id>10.14569/IJACSA.2024.01511110</id>
        <doi>10.14569/IJACSA.2024.01511110</doi>
        <lastModDate>2024-11-29T12:18:20.2030000+00:00</lastModDate>
        
        <creator>Emad Albassam</creator>
        
        <subject>Multi-class classification; keyword-driven classification; rule-based classification; unsupervised classification; GitHub repositories; comparison charts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>GitHub is a widely used platform for hosting open-source projects, with over 420 million repositories, promoting code sharing and reusability. However, with this tremendous number of repositories, finding a desirable repository based on user needs takes time and effort, especially as the number of candidate repositories increases. A user search can result in thousands of matching results, whereas GitHub shows only basic information about each repository. Therefore, users evaluate repositories’ applicability to their needs by inspecting the documentation of each repository. This paper discusses how comparison charts of similar repositories can be automatically generated to assist users in finding the desirable repository, reducing the time required to inspect their readme files. First, we implement an unsupervised, keyword-driven classifier based on the Lbl2TransformerVec algorithm to classify relevant content of GitHub readme files. The classifier was trained on a dataset of readme files collected from Java, JavaScript, C#, and C++ repositories. The classifier is evaluated against a different dataset of readme files obtained from Python repositories. Evaluation results indicate an F1 score of 0.75. Then, we incorporate rule-based adjustments to enhance classification results by 13%. Finally, the unique features, similarities, and limitations are automatically extracted from readme files to generate comparison charts using Large Language Models (LLMs).</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_110-Automatic_Generation_of_Comparison_Charts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Enhanced Security and Efficiency for Thailand’s Health Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511109</link>
        <id>10.14569/IJACSA.2024.01511109</id>
        <doi>10.14569/IJACSA.2024.01511109</doi>
        <lastModDate>2024-11-29T12:18:20.1870000+00:00</lastModDate>
        
        <creator>Thattapon Surasak</creator>
        
        <subject>Blockchain technology; healthcare information system; Electronic Medical Records (EMRs); user experience (UX) testing; web application; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study seeks to enhance the security, efficiency, and usability of Thailand’s health information system through the integration of blockchain technology and a user-friendly web application. Blockchain’s inherent strengths in secure data storage and sharing make it particularly well-suited for addressing the critical challenges of healthcare data management. Currently, Thai citizens face significant barriers when seeking medical treatment across multiple hospitals, as they must manually request and transfer both their Electronic Medical Records (EMRs) and paper-based medical records. This fragmented system creates delays and inefficiencies, as each hospital operates its own isolated data silo. To overcome these challenges, this study proposes a solution utilising a private blockchain to securely store and manage patient medical histories and prescriptions. This approach ensures data integrity while implementing robust authorisation mechanisms to restrict access to sensitive information exclusively to verified individuals. The system’s security is further strengthened by blockchain’s encryption features and the use of smart contracts. A well-designed web application serves as the interface between the secure blockchain database and end-users, offering a seamless experience for both healthcare providers and patients. In addition, User Experience (UX) testing was conducted with healthcare providers to assess the system’s usability and functionality. The results highlight the system’s user-friendly interface, confirming its potential for widespread adoption. By fostering efficient, secure, and patient-centric health information exchange, this study has the potential to significantly enhance healthcare delivery and outcomes in Thailand.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_109-Blockchain_Enhanced_Security_and_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Data Exchange Model and New Media Technology in Computer Intelligent Auxiliary Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511108</link>
        <id>10.14569/IJACSA.2024.01511108</id>
        <doi>10.14569/IJACSA.2024.01511108</doi>
        <lastModDate>2024-11-29T12:18:20.1700000+00:00</lastModDate>
        
        <creator>Na Li</creator>
        
        <subject>Computer intelligent assisted teaching system; information exchange; stress testing; new media technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>To improve the effectiveness of computer-assisted learning, the author presents a method based on new media technology. The system hardware adopts a three-layer architecture consisting of the user interface layer, the business logic layer, and the data management layer. After teachers, students, and other users log in to the system, the user interface layer collects their personal information and guides them through the available business options. System interaction rests on two main mechanisms: interactive learning and information exchange. Interactive learning supports the online lessons of teachers and students, while information exchange is realized through the data transmission of the data exchange model. According to the test results, after using the system, the number of students with high self-efficacy increased from 11 to 21, and the corresponding percentage rose from 21.4% to 34.8%, which can be interpreted as an increase in the number of well-performing students. These results indicate that the interactive learning supported by the system increases students&#39; self-efficacy and improves learning outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_108-Application_of_Data_Exchange_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Medical Multi-Department Information Attribute Encryption Access Control Method Under Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511107</link>
        <id>10.14569/IJACSA.2024.01511107</id>
        <doi>10.14569/IJACSA.2024.01511107</doi>
        <lastModDate>2024-11-29T12:18:20.1570000+00:00</lastModDate>
        
        <creator>Shubin Liao</creator>
        
        <subject>Cloud computing; smart medical care; multi-sectoral information; attribute encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper studies the encrypted access control method of smart medical multi-department information attributes in the cloud computing environment. Under the current wave of informatization, smart medical care has become an important development direction of medical services. However, the ensuing information security issues have become increasingly prominent. Especially in the context of cloud computing, information sharing and cooperative work among multiple departments make information security access control particularly important. This article first conducts an in-depth analysis of the characteristics of multi-departmental information in smart medical care. By collecting and sorting out a large amount of medical data, it is found that more than 85% of the data involves patient privacy and core information of medical business, which puts forward extremely high requirements for data confidentiality and integrity. Therefore, we propose an attribute-based encrypted access control method, which realizes differentiated access control for different users and different departments through a fine division of data attributes. In the implementation of the method, we design efficient encryption and decryption algorithms by using the distributed characteristics of cloud computing. Experimental results show that, compared with traditional access control methods, the proposed method improves the efficiency of data processing while ensuring information security, and the average access response time is reduced by about 20%. In addition, we also verified the effectiveness of this method in multi-department information security management of smart medical care through actual case analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_107-Intelligent_Medical_Multi_Department_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Carbon Dioxide Dense Phase Injection Model Based on DBN Deep Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511106</link>
        <id>10.14569/IJACSA.2024.01511106</id>
        <doi>10.14569/IJACSA.2024.01511106</doi>
        <lastModDate>2024-11-29T12:18:20.1400000+00:00</lastModDate>
        
        <creator>Juan Zhou</creator>
        
        <creator>Dalong Wang</creator>
        
        <creator>Tieya Jing</creator>
        
        <creator>Zhiwen Liu</creator>
        
        <creator>Yihe Liang</creator>
        
        <creator>Yaowu Nie</creator>
        
        <subject>Supercritical CO2; DBN deep learning algorithm; throttling characteristics; security control; dense phase injection model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Carbon dioxide dense phase injection images have provided new research ideas for difference detection. To address the drawbacks of large data volume, low matching efficiency, and long processing time of high-resolution carbon dioxide dense phase injection models, a registration algorithm based on quadratic matching is proposed. The algorithm first applies downsampling to reduce image dimensions. A difference detection algorithm based on a weakly supervised deep confidence network is then proposed to address the heavy manual labeling workload, low efficiency, and insufficient labeled data of high-resolution carbon dioxide dense phase injection models. The article also explores the throttling behavior of CO2 venting in pipelines through an analysis of CO2 phase equilibrium characteristics. The experiments show that the larger the pressure drop after the valve, the greater the temperature drop. Water content also affects the throttling temperature drop, which is about 1.5 degrees; when the gas-liquid ratio is 2500, the throttling temperature drop is 7.4 degrees. CO2 in the reactor is pressurized to over 8 MPa, reaching supercritical pressure. The constant-temperature water bath operates from 5 to 100 degrees with a temperature control accuracy of &#177;0.1 degrees; the temperature of the water inside the water bath jacket of the kettle is adjusted through circulation. The maximum pressure of the kettle is 25 MPa and its volume is 6 L.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_106-Optimization_of_Carbon_Dioxide_Dense_Phase_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Research of Artwork Interactive Exhibition System Based on Multi-Source Data Analysis and Augmented Reality Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511105</link>
        <id>10.14569/IJACSA.2024.01511105</id>
        <doi>10.14569/IJACSA.2024.01511105</doi>
        <lastModDate>2024-11-29T12:18:20.1230000+00:00</lastModDate>
        
        <creator>Xiao Chen</creator>
        
        <creator>Qibin Wang</creator>
        
        <subject>Multi-source data; feature analysis; augmented reality technology; artwork interactive exhibition system; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Current artwork exhibition systems suffer from low data-processing efficiency, a lack of smooth user experience, and poor integration of display content with interactive technology. There is a pressing need to combine data analysis with augmented reality technology to improve the interactivity and visual appeal of exhibitions. This paper introduces and validates a combined prediction model based on multi-source Internet data. When using speeded-up robust features (SURF)-64 with a threshold of 500, the number of feature matches is 800 and the matching time is 162.85 ms. At a threshold of 1000, the number of matches drops to 510 and the time decreases to 96.54 ms. For SURF-128, the corresponding matches are 763 and 496, with times of 208.63 ms and 134.21 ms. This indicates that increasing the threshold not only reduces the number of matches but also shortens the matching time, likely because fewer feature points simplify the matching process.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_105-Design_and_Research_of_Artwork_Interactive_Exhibition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skywatch: Advanced Machine Learning Techniques for Distinguishing UAVs from Birds in Airspace Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511104</link>
        <id>10.14569/IJACSA.2024.01511104</id>
        <doi>10.14569/IJACSA.2024.01511104</doi>
        <lastModDate>2024-11-29T12:18:20.1070000+00:00</lastModDate>
        
        <creator>Muhyeeddin Alqaraleh</creator>
        
        <creator>Mowafaq Salem Alzboon</creator>
        
        <creator>Mohammad Subhi Al-Batah</creator>
        
        <subject>Unmanned Aerial Vehicles (UAVs); machine learning; image recognition; real-time processing; security; computer vision; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study addresses the critical challenge of distinguishing Unmanned Aerial Vehicles (UAVs) from birds in real-time for airspace security in both military and civilian contexts. As UAVs become increasingly common, advanced systems must accurately identify them in dynamic environments to ensure operational safety. We evaluated several machine learning algorithms, including K-Nearest Neighbors (kNN), AdaBoost, CN2 Rule Induction, and Support Vector Machine (SVM), employing a comprehensive methodology that included data preprocessing steps such as image resizing, normalization, and augmentation to optimize training on the &quot;Birds vs. Drone Dataset.&quot; The performance of each model was assessed using evaluation metrics such as accuracy, precision, recall, F1 score, and Area Under the Curve (AUC) to determine their effectiveness in distinguishing UAVs from birds. Results demonstrate that kNN, AdaBoost, and CN2 Rule Induction are particularly effective, achieving high accuracy while minimizing false positives and false negatives. These models excel in reducing operational risks and enhancing surveillance efficiency, making them suitable for real-time security applications. The integration of these algorithms into existing surveillance systems offers robust classification capabilities and real-time decision-making under challenging conditions. Additionally, the study highlights future directions for research in computational performance optimization, algorithm development, and ethical considerations related to privacy and surveillance. The findings contribute to both the technical domain of machine learning in security and broader societal impacts, such as civil aviation safety and environmental monitoring.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_104-Skywatch_Advanced_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Sensor Data Fusion Analysis for Tai Chi Action Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511103</link>
        <id>10.14569/IJACSA.2024.01511103</id>
        <doi>10.14569/IJACSA.2024.01511103</doi>
        <lastModDate>2024-11-29T12:18:20.0930000+00:00</lastModDate>
        
        <creator>Jingying Ouyang</creator>
        
        <creator>Jisheng Zhang</creator>
        
        <creator>Yuxin Zhao</creator>
        
        <creator>Changhuo Yang</creator>
        
        <subject>Inertial sensor; visual sensors; segmentation clustering; support vector machine; dynamic time warping algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Advances in action recognition technology make it possible to capture decomposed data of Tai Chi movements, providing precise assistance for learners to correct erroneous movements and enhancing their interest in practicing Tai Chi. Inertial sensors and human skeletal models are used to collect motion data. Combined with visual sensors, the motion and trajectory of Tai Chi are processed to obtain the coordinate system of the movement trajectory. The inertial and visual sensor data are then fused to standardize the human skeleton model, remove noise from the collected information, and improve the smoothness of movement trajectories, thereby enabling the segmentation and clustering of Tai Chi movement trajectories. Finally, a support vector machine is combined with the dynamic time warping algorithm to recognize and verify Tai Chi movement trajectories. According to the results, at training sample proportions of 25%, 50%, and 75%, the lowest recognition accuracy of the Qi Shi movements was 90.87%, 93.53%, and 98.08%, respectively. The optimal recognition accuracy and standard deviation for single joint points in binary classification were 98.48% and 0.47%, respectively; for multi-joint points, they were 99.77% and 0.16%, respectively. This demonstrates the recognition advantages of binary classification and the superiority of multi-sensor data fusion analysis, providing a theoretical basis and technical reference for action recognition technology.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_103-Multi_Sensor_Data_Fusion_Analysis_for_Tai_Chi_Action.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Service Robot for Hospital Environments in Rehabilitation Medicine with LiDAR-Based Simultaneous Localization and Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511102</link>
        <id>10.14569/IJACSA.2024.01511102</id>
        <doi>10.14569/IJACSA.2024.01511102</doi>
        <lastModDate>2024-11-29T12:18:20.0770000+00:00</lastModDate>
        
        <creator>Sayat Ibrayev</creator>
        
        <creator>Arman Ibrayeva</creator>
        
        <creator>Bekzat Amanov</creator>
        
        <creator>Serik Tolenov</creator>
        
        <subject>Medical service robots; 3D LiDAR technology; autonomous navigation; hospital environments; robot-assisted healthcare; healthcare robotics; operational reliability; patient care automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper presents the development and evaluation of a medical service robot equipped with 3D LiDAR and advanced localization capabilities tailored for use in hospital environments. The robot employs LiDAR-based Simultaneous Localization and Mapping (SLAM) to navigate autonomously and interact effectively within complex and dynamic healthcare settings. A comparative analysis with the established 3D-SLAM technology in Autoware version 1.14.0, under a Linux ROS framework, provided a benchmark for evaluating our system&#39;s performance. The adaptation of Normal Distribution Transform (NDT) Matching to indoor navigation allowed for precise real-time mapping and enhanced obstacle avoidance capabilities. Empirical validation was conducted through manual maneuvers in various environments, supplemented by ROS simulations to test the system’s response to simulated challenges. The findings demonstrate that the robot&#39;s integration of 3D LiDAR and NDT Matching significantly improves navigation accuracy and operational reliability in a healthcare context. This study not only highlights the robot&#39;s ability to perform essential tasks with high efficiency but also identifies potential areas for further improvement, particularly in sensor performance under diverse environmental conditions. The successful deployment of this technology in a hospital setting illustrates its potential to support medical staff and contribute to patient care, suggesting a promising direction for future research and development in healthcare robotics.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_102-Development_of_a_Service_Robot_for_Hospital_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Image Keypoint Detection Using Cascaded Depth Separable Convolution Modules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511101</link>
        <id>10.14569/IJACSA.2024.01511101</id>
        <doi>10.14569/IJACSA.2024.01511101</doi>
        <lastModDate>2024-11-29T12:18:20.0600000+00:00</lastModDate>
        
        <creator>Rui Deng</creator>
        
        <subject>Depth image; DWCA; key point detection; OpenPose; cascade depth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Depth images have become an important data source for human bone keypoint detection due to their three-dimensional information. To optimize the efficiency of keypoint detection in depth images, a depth image keypoint detection model that combines cascaded depth separable convolution modules is constructed. The model first performs data cleaning and preprocessing on the image, replacing traditional convolutional layers with depthwise separable convolutional modules. The Faster OpenPose network is introduced to replace the traditional convolutional network structure with the lighter MobileNetV1 for detecting keypoints in the image. When the dataset size was 4000, the Faster OpenPose model had an accuracy of 0.97 and a mean square error of 0.03. The recognition rates for four different images were 0.91, 0.87, 0.89, and 0.93, respectively. The processing times were 0.32, 0.31, 0.28, and 0.27, respectively. The method of depth image keypoint detection combined with cascaded depth separable convolution modules has good practicality and excellent detection performance for various images, providing new ideas for future keypoint detection technology research.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_101-Deep_Image_Keypoint_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Percussion Big Data Mining and Modeling Method Based on Deep Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01511100</link>
        <id>10.14569/IJACSA.2024.01511100</id>
        <doi>10.14569/IJACSA.2024.01511100</doi>
        <lastModDate>2024-11-29T12:18:20.0470000+00:00</lastModDate>
        
        <creator>Xi Song</creator>
        
        <subject>Deep neural network; percussion; big data; mining; modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>To improve the analysis of percussion waveforms, this paper studies a percussion big data mining and modeling method based on a deep neural network model. To address the high sampling rate required of the Analog to Digital Converter (ADC) when wideband frequency-hopping Linear Frequency Modulation (LFM) percussion waveforms are sampled at the Nyquist rate, this paper proposes an undersampling method and provides a brief theoretical analysis. When the signal-to-noise ratio is 35 dB, the frequency measurement error is close to 1 MHz, which meets the requirements of frequency measurement accuracy. When the signal-to-noise ratio is higher than 35 dB, the frequency measurement error gradually decreases and eventually stabilizes, with a frequency measurement accuracy of around 30 kHz. Because environmental interference in the sound wave recognition of percussion instruments is low and the hardware equipment is close to the instruments, the recognition results of the proposed model have high accuracy. Compared with existing methods, the proposed approach is more reliable in identifying percussion sound waves, and the data show that it performs better in waveform recognition within percussion big data mining models.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_100-Percussion_Big_Data_Mining_and_Modeling_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNN-BiGRU-Focus: A Hybrid Deep Learning Classifier for Sentiment and Hate Speech Analysis of Ashura-Arabic Content for Policy Makers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151199</link>
        <id>10.14569/IJACSA.2024.0151199</id>
        <doi>10.14569/IJACSA.2024.0151199</doi>
        <lastModDate>2024-11-29T12:18:20.0300000+00:00</lastModDate>
        
        <creator>Sarah Omar Alhumoud</creator>
        
        <subject>Arabic hate speech; sentiment analysis; deep learning; convolutional neural networks; bidirectional gated recurrent unit; attention mechanism; social media analysis; Ashura content; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The rise of hate speech on social media during significant cultural and religious events, such as Ashura, poses serious challenges for content moderation, particularly in languages like Arabic, which present unique linguistic complexities. Most existing hate speech detection models, primarily developed for English text, fail to effectively handle the intricacies of Arabic, including its diverse dialects and rich morphology. This limitation underscores the need for specialized models tailored to the Arabic language. In response, the CNN-BiGRU-Focus model is proposed, a novel hybrid deep learning (DL) approach that combines Convolutional Neural Networks (CNN) to capture local linguistic patterns and Bidirectional Gated Recurrent Units (BiGRU) to manage long-term dependencies in sequential text. An attention mechanism is incorporated to enhance the model&#39;s ability to focus on the most relevant sections of the input, improving both the accuracy and interpretability of its predictions. In this paper, the model is applied to a dataset of social media posts related to Ashura, revealing that 32% of the content comprised hate speech, with Shia users expressing more sentiments than Sunni users. Through extensive experiments, the CNN-BiGRU-Focus model demonstrated superior performance, significantly outperforming baseline models. It achieved an accuracy of 99.89% and an AUC of 99, marking a substantial improvement in Ashura-Arabic hate speech detection. The model effectively addresses the linguistic challenges of Arabic, including dialect variations and nuanced contexts, making it highly suitable for content moderation tasks. To the best of the authors’ knowledge, this study represents the first attempt to compile an Arabic-Ashura hate detection dataset from Twitter and apply the CNN-BiGRU-Focus DL model to detect hate sentiment in Arabic social media posts. Dataset and source code can be downloaded from (https://github.com/imamu-asa).</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_99-CNN_BiGRU_Focus_A_Hybrid_Deep_Learning_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preprocessing and Analysis Method of Unplanned Event Data for Flight Attendants Based on CNN-GRU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151198</link>
        <id>10.14569/IJACSA.2024.0151198</id>
        <doi>10.14569/IJACSA.2024.0151198</doi>
        <lastModDate>2024-11-29T12:18:20.0130000+00:00</lastModDate>
        
        <creator>Dongyang Li</creator>
        
        <subject>Convolutional neural network; gate recurrent units; air crew; unplanned events; data preprocessing; data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Unplanned flight attendant event data are diverse and complex, which poses great challenges to data preprocessing and analysis. This study proposes a preprocessing and analysis method for unplanned flight attendant event data based on Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU). Firstly, an efficient word vector tool is used to preprocess the raw data, improving its quality and consistency. Then, convolutional neural networks are used to extract local features of the data, combined with gated recurrent units to capture dynamic changes in time series, thus achieving effective analysis and prediction of unplanned events among air crew. The results showed that in the 6-class task, the research model exhibited the highest accuracy at 99.24%, the lowest accuracy at 94.24%, and an average accuracy of 98.53%. The highest, lowest, and average accuracies in the 10-class task were 96.63%, 90.17%, and 93.21%, respectively. The performance of the research model was superior to that of support vector machines, the K-nearest neighbor algorithm, and several advanced algorithms. This study provides a more accurate analysis tool for unplanned event data of flight attendants, helping to improve the efficiency of aviation emergency handling and enhance aviation safety.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_98-Preprocessing_and_Analysis_Method_of_Unplanned_Event_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Safety Detection Model for Substation Operations with Fused Contextual Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151197</link>
        <id>10.14569/IJACSA.2024.0151197</id>
        <doi>10.14569/IJACSA.2024.0151197</doi>
        <lastModDate>2024-11-29T12:18:19.9830000+00:00</lastModDate>
        
        <creator>Bo Chen</creator>
        
        <creator>Hongyu Zhang</creator>
        
        <creator>Runxi Yang</creator>
        
        <creator>Lei Zhao</creator>
        
        <creator>Yi Ding</creator>
        
        <subject>Object detection; context information; electricity construction operation; model complexity; lightweight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Detecting and regulating compliance at substation construction sites is critical to ensure the safety of workers. The complex backgrounds and diverse scenes of construction sites, as well as the variations in camera angles and distances, cause object detection models to suffer from low accuracy and missed detections. In addition, the high complexity of existing models creates an urgent need for effective parameter compression techniques to facilitate deployment on edge servers. To cope with these challenges, this study proposes a safety protection detection algorithm that fuses contextual information for substation operation sites, which enhances multi-scale feature learning through a two-path downsampling (TPD) module to effectively cope with changes in target scales. Meanwhile, the Global and Local Context Information extraction (GLCI) module is utilized to optimize the key information learning and reduce the background interference. Furthermore, the C3GhostNetV2 unit is utilized to discern the interconnections of far-off spatial pixels, enhancing the network&#39;s expressive power while reducing the number of parameters and computational costs. The experimental outcomes indicate that the present model improves upon the mAP50 metric by 4.5% compared to the baseline model, and precision and recall have increased by 4.8% and 10.1%, respectively, while the parameter count and floating-point operations have been reduced by 16.5% and 12.6%, respectively, which proves the validity and practicability of the method.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_97-A_Safety_Detection_Model_for_Substation_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Person Collaborative Design Method Driven by Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151196</link>
        <id>10.14569/IJACSA.2024.0151196</id>
        <doi>10.14569/IJACSA.2024.0151196</doi>
        <lastModDate>2024-11-29T12:18:19.9670000+00:00</lastModDate>
        
        <creator>Liqun Gao</creator>
        
        <subject>Digital twin; optimized design; interior space; multi-person collaborative design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The current interior design of commercial buildings is facing innovative challenges, requiring a balance between aesthetics, functionality, and economic benefits. The design industry faces challenges in interdisciplinary integration, lack of standardized processes, and limitations of traditional design methods in complex situations. Although virtual reality technology provides new solutions, its integration difficulty, cost, and operational complexity constrain its widespread application. This study introduces a digital twin-based augmented reality (AR) collaborative indoor design framework, decomposing spatial planning to address the complexity inherent in design processes. Subsequently, contextual data in the indoor design process is structured into an indoor design knowledge graph to elucidate information transmission and iterative mechanisms during collaborative design, thereby enhancing the situational adaptability of AR collaborative design. Utilizing a root anchor-based collaborative approach, multiple designers engage in spatial design collaboration within the AR environment. Real-time knowledge and data facilitated by the design knowledge graph contribute to collaborative decision-making, ensuring the quality and efficiency of collaborative design. Finally, exemplified by a complex interior design project for a commercial space, an AR-based collaborative digital twin (DT) interior design system is established, validating the effectiveness and feasibility of the proposed methodology. Through this approach, designers can preview and modify designs in a virtual environment, ultimately reducing errors, shortening design cycles, lowering costs, and enhancing user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_96-A_Multi_Person_Collaborative_Design_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CIPHomeCare: A Machine Learning-Based System for Monitoring and Alerting Caregivers of Cognitive Insensitivity to Pain (CIP) Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151195</link>
        <id>10.14569/IJACSA.2024.0151195</id>
        <doi>10.14569/IJACSA.2024.0151195</doi>
        <lastModDate>2024-11-29T12:18:19.9530000+00:00</lastModDate>
        
        <creator>Rahaf Alsulami</creator>
        
        <creator>Hind Bitar</creator>
        
        <creator>Abeer Hakeem</creator>
        
        <creator>Reem Alyoubi</creator>
        
        <subject>Cognitive insensitivity to pain patients; CIP; machine learning; motion sensors; quality of life; wearable activity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Congenital Insensitivity to Pain (CIP) patients, particularly infants, are vulnerable to self-injury due to their inability to perceive pain, which can lead to severe harm, such as biting their hands. This research introduces &quot;CIPHomeCare,&quot; a wearable monitoring solution designed to prevent self-injurious behaviors in CIP patients aged 6 to 24 months. The primary focus of this study is developing and applying machine learning algorithms to classify hand-biting behaviors. Using accelerometer data from the STEVAL-BCN002V1 sensor, which is a motion sensor, several machine learning models—K-Nearest Neighbors (KNN), Random Forest (RF), Naive Bayes (NB), Linear Discriminant Analysis (LDA), and Logistic Regression (LR)—were trained to differentiate between normal and harmful behaviors. To address data imbalance due to the infrequency of biting events, oversampling techniques such as SMOTE, Borderline-SMOTE, ADASYN, K-means-SMOTE, and SMOTE-ENN were employed to enhance classification performance. Among the algorithms, KNN achieved the highest accuracy (98%) and a sensitivity of 72%, highlighting its effectiveness in detecting harmful hand motions. The findings suggest that machine learning, in combination with wearable technology, can provide accurate, personalized monitoring and timely intervention for CIP patients, paving the way for broader clinical applications and real-time prevention of self-injury. The real-time processing capability of the system enables immediate alerting of caregivers, allowing for timely intervention to prevent injuries, thus improving their quality of life.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_95-CIPHomeCare_A_Machine_Learning_Based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BackC&amp;P: Augmenting Copy and Paste Operations on Mobile Touch Devices Through Back-of-device Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151193</link>
        <id>10.14569/IJACSA.2024.0151193</id>
        <doi>10.14569/IJACSA.2024.0151193</doi>
        <lastModDate>2024-11-29T12:18:19.9370000+00:00</lastModDate>
        
        <creator>Liang Chen</creator>
        
        <subject>Back-of-device interaction; copy and paste operations; mobile touch devices; touch interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>As increasingly complex applications, e.g., photo editing and slideshow editing software, become available on mobile touch devices, simple operations such as copying and pasting are used more frequently by ordinary mobile users. However, existing touch techniques are far from perfectly supporting these simple operations on mobile devices. In this paper, a new interaction technique, BackC&amp;P, which takes advantage of back-of-device touch input to augment copy and paste operations on mobile devices, is presented. The results of a user study that evaluated the usability of BackC&amp;P are also presented. The findings indicate that BackC&amp;P was about twice as fast as the currently used technique on mobile touch devices when used to complete copy-and-paste tasks, with no significant decrease in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_93-BackCP_Augmenting_Copy_and_Paste_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review: PTSD in Pre-Existing Medical Condition on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151194</link>
        <id>10.14569/IJACSA.2024.0151194</id>
        <doi>10.14569/IJACSA.2024.0151194</doi>
        <lastModDate>2024-11-29T12:18:19.9370000+00:00</lastModDate>
        
        <creator>Zaber Al Hassan Ayon</creator>
        
        <creator>Nur Hafieza Ismail</creator>
        
        <creator>Nur Shazwani Kamarudin</creator>
        
        <subject>PTSD; mental health; social media; natural language processing; health informatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Post-Traumatic Stress Disorder (PTSD) is a multifaceted mental health condition, particularly challenging for individuals with pre-existing medical conditions. This review critically examines the intersection of PTSD and chronic illnesses as expressed on social media platforms. By systematically analyzing literature from 2008 to 2024, the study explores how PTSD manifests and is managed in individuals with chronic conditions such as cancer, heart disease, and autoimmune disorders, with a focus on online expressions on platforms like X (formerly known as Twitter) and Facebook. Findings demonstrate that social media data offers valuable insights into the unique challenges faced by individuals with both PTSD and chronic illnesses. Specifically, natural language processing (NLP) and machine learning (ML) techniques can identify potential PTSD cases among these populations, achieving accuracy rates between 74% and 90%. Furthermore, the role of online support communities in shaping coping strategies and facilitating early interventions is highlighted. This review underscores the necessity of incorporating considerations of pre-existing medical conditions in PTSD research and treatment, emphasizing social media&#39;s potential as a monitoring and support tool for vulnerable groups. Future research directions and clinical implications are also discussed, with an emphasis on developing targeted interventions.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_94-A_Review_PTSD_in_Pre_Existing_Medical_Condition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of DL Technology for Auxiliary Painting System Construction Based on FST Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151192</link>
        <id>10.14569/IJACSA.2024.0151192</id>
        <doi>10.14569/IJACSA.2024.0151192</doi>
        <lastModDate>2024-11-29T12:18:19.9200000+00:00</lastModDate>
        
        <creator>Pengpeng Xu</creator>
        
        <creator>Guo Chen</creator>
        
        <subject>Finite state transducer; deep learning; CNN; auxiliary painting; style transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The continuous development of computers has brought about the emergence of many image processing programs, but this software has relatively limited functionality and cannot learn and create works in a prescribed style. To make it easier for ordinary people to create artistic style paintings, this study proposes the construction of an auxiliary painting system based on finite state transducer algorithm-optimized deep learning technology. The results demonstrated that when there were 12 images, the accuracy of the optimized convolutional neural network model in extracting image features increased by 1.1% compared to before optimization. When the number of images was 1, the optimized model reduced the image feature extraction time by 15.1s compared to before optimization. Compared with other algorithms, the accuracy of extracting image style information based on a convolutional neural network was the highest at 80% under different iteration counts. The proposed algorithm improves both the accuracy and the speed of extracting image style information.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_92-Optimization_of_DL_Technology_for_Auxiliary_Painting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lampung Batik Classification Using AlexNet, EfficientNet, LeNet and MobileNet Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151191</link>
        <id>10.14569/IJACSA.2024.0151191</id>
        <doi>10.14569/IJACSA.2024.0151191</doi>
        <lastModDate>2024-11-29T12:18:19.9070000+00:00</lastModDate>
        
        <creator>Rico Andrian</creator>
        
        <creator>Rahman Taufik</creator>
        
        <creator>Didik Kurniawan</creator>
        
        <creator>Abbie Syeh Nahri</creator>
        
        <creator>Hans Christian Herwanto</creator>
        
        <subject>Lampung Batik; image classification; convolutional neural network; AlexNet; EfficientNet; LeNet; MobileNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study explores the application of image recognition technology based on Convolutional Neural Network (CNN) to classify Lampung batik motifs. Four CNN architectures are employed, namely AlexNet, EfficientNet, LeNet, and MobileNet. The dataset consists of ten motif classes, including Siger Ratu Agung, Sembagi, Jung Agung, Kembang Cengkih, Granitan, Abstract, Sinaran, Tambal, Kambil Sicukil, and Sekar Jagat. It comprises a total of 1000 images of Lampung Batik motifs, which were augmented using preprocessing techniques such as rotation, shifting, brightness adjustment, and zooming. The classification results show that AlexNet achieves an accuracy of 95.33%, EfficientNet achieves 98.00%, LeNet achieves 99.33%, and MobileNet achieves 98.00%. The best accuracy was achieved by the LeNet architecture, attributed to its suitability for small datasets. While some classification errors occurred due to similarities in patterns and variations in image positions, employing more advanced methods to better distinguish between similar motifs could address these challenges. This study highlights the effectiveness of CNN architectures in supporting the recognition of Lampung Batik motifs, contributing to the understanding and preservation of Indonesia&#39;s cultural heritage.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_91-Lampung_Batik_Classification_Using_AlexNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling the Impact of Robotics Learning Experience on Programming Interest Using the Structured Equation Modeling Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151190</link>
        <id>10.14569/IJACSA.2024.0151190</id>
        <doi>10.14569/IJACSA.2024.0151190</doi>
        <lastModDate>2024-11-29T12:18:19.8900000+00:00</lastModDate>
        
        <creator>Nazatul Aini Abd Majid</creator>
        
        <creator>Noor Faridatul Ainun Zainal</creator>
        
        <creator>Zarina Shukur</creator>
        
        <creator>Mohammad Faidzul Nasrudin</creator>
        
        <creator>Nasharuddin Zainal</creator>
        
        <subject>Programming; robotics; Structural Equation Modeling (SEM); experiential learning; student engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Proficiency in programming is crucial for driving the Fourth Industrial Revolution. Therefore, interest in programming needs to be instilled in students starting from the school level. While the use of robotics can attract students&#39; interest in programming, there is still a lack of research modeling the impact of robotics learning experiences on programming interest using a structural equation modeling (SEM) approach. This study aims to analyze the structural relationship between interest in programming and learning experiences using a specially developed robotics module based on Kolb&#39;s experiential learning model and the programming development phases. An experiment involving 76 primary and secondary school students was conducted using the robotics module. Data were collected through a questionnaire containing 12 questions for five constructs: engagement, interaction, challenge, competency, and interest. These constructs, which are latent variables, formed the model using the partial least squares-SEM technique through the SmartPLS 4.0 software. The evaluation of the structural model found that the variables of engagement and competency had a significant impact on interest in programming, while interaction and challenge received low values. The developed model has moderate predictive power, indicating that interest in programming can be moderately predicted based on students&#39; experiences using robots.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_90-Modeling_the_Impact_of_Robotics_Learning_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Privacy-Preserving Randomization-Based Approach for Classification Upon Encrypted Data in Outsourced Semi-Honest Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151189</link>
        <id>10.14569/IJACSA.2024.0151189</id>
        <doi>10.14569/IJACSA.2024.0151189</doi>
        <lastModDate>2024-11-29T12:18:19.8730000+00:00</lastModDate>
        
        <creator>Vijayendra Sanjay Gaikwad</creator>
        
        <creator>Kishor H. Walse</creator>
        
        <creator>Mohammad Atique Mohammad Junaid</creator>
        
        <subject>Partial homomorphic encryption; classification using encrypted data; randomization; k- nearest neighbours</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In cloud environments, organizations often rely on the platform for data storage and on-demand access. Data is typically encrypted either by the cloud service itself or by the data owners before outsourcing it to maintain confidentiality. However, when it comes to processing encrypted data for tasks like kNN classification, existing approaches either prove to be inefficient, delegate a portion of the classification task to end users, or do not satisfy all the privacy requirements. Also, the datasets used to evaluate performance in many existing approaches have relatively few attributes and instances, yet it is observed that as dataset size increases, the efficiency and accuracy of many privacy-preserving approaches reduce significantly. In this work, we propose a set of privacy-preserving protocols that collectively perform kNN classification with encrypted data in an outsourced semi-honest cloud environment and also address the stated challenges. This is accomplished by building an efficient randomization-based approach called PPkC that leverages homomorphic cryptosystem properties. With protocol analysis we prove that the proposed approach satisfies all privacy requirements. Finally, with extensive experimentation using real-world and scaled datasets, we show that the proposed PPkC protocol is computationally efficient and also independent of the number of nearest neighbours considered.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_89-An_Efficient_Privacy_Preserving_Randomization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CQRS and Blockchain with Zero-Knowledge Proofs for Secure Multi-Agent Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151188</link>
        <id>10.14569/IJACSA.2024.0151188</id>
        <doi>10.14569/IJACSA.2024.0151188</doi>
        <lastModDate>2024-11-29T12:18:19.8570000+00:00</lastModDate>
        
        <creator>Ayman NAIT CHERIF</creator>
        
        <creator>Mohamed YOUSSFI</creator>
        
        <creator>Zakariae EN-NAIMANI</creator>
        
        <creator>Ahmed TADLAOUI</creator>
        
        <creator>Maha SOULAMI</creator>
        
        <creator>Omar BOUATTANE</creator>
        
        <subject>Decentralized multi-agent systems; decentralized identifiers; zero-knowledge proofs; hyperledger fabric; OAuth 2.0; CQRS; smart grids; healthcare data management; IoT; supply chain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Autonomous decision-making in decentralized multi-agent systems (MAS) poses significant challenges related to security, scalability, and privacy. This paper introduces an innovative architecture that integrates Decentralized Identifiers (DIDs), Zero-Knowledge Proofs (ZKPs), Hyperledger Fabric blockchain, OAuth 2.0 authorization, and the Command Query Responsibility Segregation (CQRS) pattern to establish a secure, scalable, and privacy-focused framework for MAS. The use of DIDs and ZKPs ensures secure, self-sovereign identities and enables privacy-preserving interactions among autonomous agents. Hyperledger Fabric provides an immutable ledger, ensuring data integrity and facilitating transparent transaction processing through smart contracts. The CQRS pattern, combined with event sourcing, optimizes the system’s ability to handle high volumes of read and write operations, enhancing performance and scalability. Practical applications are showcased in Smart Grids, Healthcare Data Management, Secure Internet of Things (IoT) Networks, and Supply Chain Management, highlighting the architecture’s ability to address industry-specific challenges. This integration offers a robust solution for ensuring trust, verifiability, and scalability in distributed systems while preserving the confidentiality of agents.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_88-CQRS_and_Blockchain_with_Zero_Knowledge_Proofs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Machine Learning Algorithms for Predicting Energy Consumption of Servers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151187</link>
        <id>10.14569/IJACSA.2024.0151187</id>
        <doi>10.14569/IJACSA.2024.0151187</doi>
        <lastModDate>2024-11-29T12:18:19.8430000+00:00</lastModDate>
        
        <creator>Meryeme EL YADARI</creator>
        
        <creator>Saloua EL MOTAKI</creator>
        
        <creator>Ali YAHYAOUY</creator>
        
        <creator>Khalid EL FAZAZY</creator>
        
        <creator>Hamid GUALOUS</creator>
        
        <creator>St&#233;phane LE MASSON</creator>
        
        <subject>Data center; server; machine learning; energy consumption; parametric methods; ensemble methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Energy management in data centers is currently a major challenge and arouses considerable interest. Many data center operators are seeking solutions to reduce energy consumption. In this work, the problem of resource overutilization (defined as the excessive usage of critical server resources such as CPU, RAM, and storage beyond their optimal capacity) in data centers is addressed, with a particular focus on servers. Estimating the energy consumption of servers in data centers allows managers to allocate the necessary resources to ensure adequate quality of service. The research involved generating workloads on various servers, each connected to a wattmeter for energy consumption measurement. Data on resource utilization rates and server energy consumption were stored and analyzed. Machine learning models were then used to forecast server energy consumption. Parametric, non-parametric, and ensemble methods were employed and validated using accuracy measurements, non-parametric tests, and model complexity to assess the quality of the energy consumption prediction models. The results demonstrated that certain models, such as polynomial regression, could provide predictions with a low margin of error and minimal complexity, while other models showed lower performance. A comparative analysis is conducted to evaluate the performance and limitations of each approach.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_87-Application_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of K-MEANS Algorithm-Based Data Mining in Optimizing Marketing Strategies of Tobacco Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151186</link>
        <id>10.14569/IJACSA.2024.0151186</id>
        <doi>10.14569/IJACSA.2024.0151186</doi>
        <lastModDate>2024-11-29T12:18:19.8100000+00:00</lastModDate>
        
        <creator>Mingqian Ma</creator>
        
        <subject>Data mining; homomorphic encryption; k-means; tobacco; marketing strategy; indicator system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>With the continuous development of data mining technology, more and more industries are applying data mining techniques to optimize their marketing strategies. In response to the persistent decline in tobacco sales and the gradual erosion of the customer base in a particular enterprise in recent years, this study employs data mining technology to enhance the current tobacco marketing strategy. Firstly, in response to the company's current shortcomings, a marketing optimization design scheme was proposed and a customer classification evaluation index system was constructed. Subsequently, homomorphic encryption and an enhanced peak-density approach were employed to improve the conventional K-means algorithm. The improved algorithm was then utilized in the customer clustering and partitioning scheme, with the objective of investigating the underlying information present in customer consumption data. The performance of the algorithm was tested, and the results showed that the mean square error of the improved K-means algorithm was about 0.1, with an average absolute error of about 0.05. The highest detection rate in the validation set was 0.95, and the lowest false alarm rate was 0.07. Both experts and customers were highly satisfied with the marketing strategy under the improved K-means algorithm. In summary, the clustering analysis method used in this study can effectively uncover the hidden value behind various types of customer data, thereby helping companies to make better marketing strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_86-The_Application_of_K_MEANS_Algorithm_Based_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Chili Plant Diseases Based on Leaves Using Hyperparameter Optimization Architecture Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151185</link>
        <id>10.14569/IJACSA.2024.0151185</id>
        <doi>10.14569/IJACSA.2024.0151185</doi>
        <lastModDate>2024-11-29T12:18:19.7970000+00:00</lastModDate>
        
        <creator>Murinto </creator>
        
        <creator>Sri Winiarti</creator>
        
        <creator>Ardi Pujiyanta</creator>
        
        <subject>Chili leaf; deep learning; MobileNetV2; transfer learning; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper proposes a method to detect chili plant diseases based on leaves. Studies in recent years have shown that chili production in Indonesia has decreased, and one common contributing factor is the presence of plant diseases that reduce harvest production. Diseases that often appear in chili plants are usually caused by fungi or pests on the leaves, and they can result in significant decreases in both the quantity and quality of chili harvests. Accurate disease diagnosis will help increase farmer profits. This study identified four major leaf diseases, namely leaf curl, leaf spot, yellowish, and white spot. In this research, images were taken using a digital camera. Leaves were classified into five classes (healthy, leaf curl, leaf spot, yellowish, and white spot) using two pre-trained deep learning networks, MobileNetV2 and VGG16, applied to chili leaf data via transfer learning. The experimental results showed that the best-performing model was VGG16, which achieved a validation accuracy of 94% on public and our own datasets. The next best-performing model was MobileNetV2, which achieved an accuracy of 90%, followed by a traditional CNN model, which achieved a validation accuracy of 88%. In future developments, we intend to deploy the model on mobile devices to automatically monitor and identify various types of chili plant diseases based on leaves.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_85-Identification_of_Chili_Plant_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FSFYOLO: A Lightweight Model for Forest Smoke and Fire Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151184</link>
        <id>10.14569/IJACSA.2024.0151184</id>
        <doi>10.14569/IJACSA.2024.0151184</doi>
        <lastModDate>2024-11-29T12:18:19.7800000+00:00</lastModDate>
        
        <creator>Yinglai HUANG</creator>
        
        <creator>Jing LIU</creator>
        
        <creator>Liusong YANG</creator>
        
        <subject>Forest smoke and fire; target detection; lightweight; YOLOv8; EfficientViT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The detection and identification of forest smoke and fire are critical for forest fire prevention efforts. However, current forest smoke and fire target detection algorithms confront obstacles such as high memory usage, computational costs, and deployment difficulty. Regarding these key issues, this paper presents FSFYOLO, a lightweight forest smoke and fire detection model based on the YOLOv8s model. To efficiently extract key features from forest smoke and fire images while reducing computational redundancy, the lightweight network EfficientViT is used as the backbone network. A lightweight detection head, Partial Convolutional Head (PCHead), is designed using the shared-parameters idea to greatly reduce the number of parameters and computations by leveraging shared convolutional layers and branched processing, thus achieving the lightweight design of the model. In the neck network, a lightweight feature extraction module, C2f-FL, is built to more fully extract local features and surrounding contextual information to widen the receptive field. Additionally, a Coordinate Attention (CA) mechanism is integrated into both the backbone and neck networks to capture cross-channel information, directional awareness, and position-sensitive information, improving the model&#39;s capacity to precisely pinpoint fire and smoke in forests. The experimental results on our self-constructed forest smoke and fire dataset demonstrate that FSFYOLO reduces the number of parameters and computation by 47.6% and 60.9%, respectively, compared to the original model, while improving precision, recall, and mAP50 by 1.3%, 1.0%, and 1.0%, respectively. This demonstrates that FSFYOLO strikes a good compromise between model lightweighting and detection accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_84-FSFYOLO_A_Lightweight_Model_for_Forest_Smoke.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing House Renovation Projects Using Industrial Engineering-Based Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151183</link>
        <id>10.14569/IJACSA.2024.0151183</id>
        <doi>10.14569/IJACSA.2024.0151183</doi>
        <lastModDate>2024-11-29T12:18:19.7630000+00:00</lastModDate>
        
        <creator>Lim Rou Yan</creator>
        
        <creator>Siti Noor Asyikin Mohd Razali</creator>
        
        <creator>Muhammad Ammar Shafi</creator>
        
        <creator>Norazman Arbin</creator>
        
        <subject>House renovation; Program Evaluation and Review Technique (PERT); Critical Path Method (CPM); project crashing techniques; construction project management; linear programming optimisation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The persistent challenge of project delays poses significant issues with the escalating demand for house renovations. The company in Kedah, Malaysia, faces frequent project delays due to ineffective project management, leading to substantial liquidated damages. This study aims to minimise project delays and ensure timely completion within budget constraints, focusing on both the entire house renovation project and the kitchen renovation project. This study employed the Program Evaluation and Review Technique (PERT) to illustrate the project network and utilised the Critical Path Method (CPM) to identify the critical path, while implementing project crashing with linear programming to optimise activity duration reduction and minimise costs. The PERT method results in an illustrative network diagram that aids subsequent analysis. The completion time for the entire house renovation project is determined to be 58 days using CPM, with a 96.8% probability of completion within 60 days. In contrast, for the kitchen renovation project, the completion time is identified as 38 days, with a 0% probability of meeting the 30-day deadline. Therefore, linear programming was successfully applied, shortening the kitchen project to 30 days at a total cost of RM 18,517.50, further reduced to 20 days at a cost of RM 20,980. Both scenarios remained below the total penalty cost of RM 21,780. The findings enable the company to make informed decisions on resource allocation to accelerate project duration and avoid delays. Future research should delve into realistic models, considering labour allocation and indirect costs, for a more comprehensive evaluation of project crashing strategies and their financial impacts.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_83-Optimizing_House_Renovation_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Taxonomic Study: Data Placement Strategies in Cloud Replication Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151182</link>
        <id>10.14569/IJACSA.2024.0151182</id>
        <doi>10.14569/IJACSA.2024.0151182</doi>
        <lastModDate>2024-11-29T12:18:19.7630000+00:00</lastModDate>
        
        <creator>Fazlina Mohd Ali</creator>
        
        <creator>Marizuana Mat Daud</creator>
        
        <creator>Fadilla Atyka Nor Rashid</creator>
        
        <creator>Nazhatul Hafizah Kamarudin</creator>
        
        <creator>Syahanim Mohd Salleh</creator>
        
        <creator>Nur Arzilawati Md Yunus</creator>
        
        <subject>Cloud environment; data replication; placement strategies; replication taxonomy; performance metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Over the past decades, the data replication trend has not subsided; it is progressing rapidly from multiple perspectives to enhance cloud replication performance. Researchers are eagerly focusing on improving strategies from various perspectives; unfortunately, vulnerability in every strategy is inevitable. A non-comprehensive replica strategy carries vulnerabilities and drawbacks. The drawbacks that usually reside in the developed strategies include, but are not limited to, high network usage, high processing time, high response time, and high storage consumption, depending on the research area. Many researchers struggle to identify open state-of-the-art issues. This exhaustive taxonomic study analyzes the diverse contributions and limitations of the cloud replication environment, focusing on data placement strategies. It seeks to delve deeply into the fundamental strategies, practical implementations, and the intricate challenges they pose. In view of the imminent cloud-driven future, this structured review paper is a vital resource for researchers, policymakers, and industry professionals grappling with the complexities of this emerging paradigm. By illuminating the intricacies of data replication strategies, this study fosters a deeper appreciation for the transformative potential and the multifaceted challenges ahead of cloud data replication.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_82-A_Taxonomic_Study_Data_Placement_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DSTC-Sum: A Supervised Video Summarization Model Using Depthwise Separable Temporal Convolutional</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151181</link>
        <id>10.14569/IJACSA.2024.0151181</id>
        <doi>10.14569/IJACSA.2024.0151181</doi>
        <lastModDate>2024-11-29T12:18:19.7500000+00:00</lastModDate>
        
        <creator>M. Hamza Eissa</creator>
        
        <creator>Hesham Farouk</creator>
        
        <creator>Kamal Eldahshan</creator>
        
        <creator>Amr Abozeid</creator>
        
        <subject>Video summarization; depthwise separable temporal convolutional; video processing; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The exponential growth in video content has created a critical need for efficient video summarization techniques to enable faster and more accurate information retrieval. Video summarization has excellent potential to simplify the analysis of large video databases in application areas ranging from surveillance and education to entertainment and research. DSTC-Sum, a novel supervised video summarization model, is proposed based on Depthwise Separable Temporal Convolution (DSTC). Leveraging the superior representational efficiency of DSTC, the model addresses computational challenges and training inefficiencies encountered in traditional recurrent architectures such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). Additionally, this approach reduces computational overhead and memory usage. DSTC-Sum achieved state-of-the-art performance on two commonly used benchmark datasets, TVSum and SumMe, outperforming previous methods in F-score by 1.8% and 3.33%, respectively. To validate the model&#39;s generality and robustness, the model was further tested on the YouTube and Open Video Project (OVP) datasets. The proposed model performed slightly better on these datasets than several popular techniques, with F-scores of 60.3 and 58.5, respectively. These findings confirm that the model captures long-term temporal dependencies and produces high-quality video summaries across all types of videos.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_81-DSTC_Sum_A_Supervised_Video_Summarization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Anti-Spoofing Using Chainlets and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151180</link>
        <id>10.14569/IJACSA.2024.0151180</id>
        <doi>10.14569/IJACSA.2024.0151180</doi>
        <lastModDate>2024-11-29T12:18:19.7330000+00:00</lastModDate>
        
        <creator>Sarah Abdulaziz Alrethea</creator>
        
        <creator>Adil Ahmad</creator>
        
        <subject>Presentation attacks; Chainlets; contour; handcrafted features; chain code; CNN; face anti-spoofing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Nowadays, biometric technology is widely employed for many security purposes. Facial recognition is one of the biometric technologies that is increasingly utilized because it is convenient and contactless. However, facial recognition systems have become a prime target for unauthorized users seeking access, and most are vulnerable to face spoofing attacks. With the widespread use of the internet and social media, it has become easy to obtain videos or pictures of people’s faces. An imposter can use these materials to deceive facial authentication systems, which compromises the system’s security and privacy. Face spoofing occurs when an unauthorized user attempts to gain access to a facial recognition system using presentation attack instruments (PAIs) such as photos, videos, or 3D masks of authorized users. Therefore, the need for an effective face anti-spoofing (FAS) system has increased. That motivated us to develop a face anti-spoofing model that accurately detects presentation attacks. In our work, we developed a model that integrates handcrafted features based on Chainlets (as a motion-based descriptor) with a convolutional neural network (CNN) to provide a more robust feature vector and enhance accuracy. Chainlets can be computed from deep contour-based edge detection using Histograms of Freeman Chain Codes, which provides a richer and rotation-invariant description of edge orientation. We used a benchmark dataset, the Replay-Attack database. The results show that the Chainlets-based face anti-spoofing method outperforms state-of-the-art methods and provides higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_80-Face_Anti_Spoofing_Using_ Chainlets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Mobility – An Intelligent Robot for the Visually Impaired</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151179</link>
        <id>10.14569/IJACSA.2024.0151179</id>
        <doi>10.14569/IJACSA.2024.0151179</doi>
        <lastModDate>2024-11-29T12:18:19.7170000+00:00</lastModDate>
        
        <creator>Ahmad M. Bisher</creator>
        
        <creator>Rufaida M. Shamroukh</creator>
        
        <creator>Abed M. Shamroukh</creator>
        
        <subject>Robot; obstacle; avoidance; visually impaired; sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Efficient robot navigation in operational environments requires precise tracking of the path from the starting point to the destination, typically generated using pre-stored map data. However, obstacles in the environment can complicate this process, making reliable obstacle avoidance critical for successful navigation. This paper introduces innovative techniques for robotic navigation and obstacle avoidance, specifically designed to assist visually impaired individuals. To mitigate the limitations and inaccuracies inherent in sensor data, we employ sensor fusion algorithms that integrate inputs from infrared, ultrasonic, vision, and tactile sensors. Additionally, visual landmarks are incorporated as reference points to improve internal odometry correction and enhance mapping accuracy. We believe that our approach not only increases the reliability of navigation but also enhances the robot&#39;s ability to operate effectively in diverse and challenging conditions.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_79-Enhancing_Mobility_An_Intelligent_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synthesizing Realistic Knee MRI Images: A VAE-GAN Approach for Enhanced Medical Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151178</link>
        <id>10.14569/IJACSA.2024.0151178</id>
        <doi>10.14569/IJACSA.2024.0151178</doi>
        <lastModDate>2024-11-29T12:18:19.7030000+00:00</lastModDate>
        
        <creator>Revathi S A</creator>
        
        <creator>B Sathish Babu</creator>
        
        <subject>Custom loss function; decoder; discriminator; GAN; latent space; VAE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study presents a novel approach for synthesizing knee MRI images by combining Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). By leveraging the strengths of VAEs for efficient latent space representation and GANs for their advanced image generation capabilities, we introduce a VAE-GAN hybrid model tailored specifically for medical imaging applications. This technique not only improves the realism of synthesized knee MRI images but also enriches training datasets, ultimately improving the performance of machine learning models. We demonstrate significant improvements in synthetic image quality through a carefully designed architecture, which includes custom loss functions that strike a balance between reconstruction accuracy and generative quality. These improvements are validated using quantitative metrics, achieving a Mean Squared Error (MSE) of 0.0914 and a Fr&#233;chet Inception Distance (FID) of 1.4873. This work lays the groundwork for new research directions in biomedical image analysis, providing a scalable solution that overcomes dataset limitations while maintaining privacy standards and paving the way for reliable diagnostic tools.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_78-Synthesizing_Realistic_Knee_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Graft Failure Within Year After Transplantation Using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151177</link>
        <id>10.14569/IJACSA.2024.0151177</id>
        <doi>10.14569/IJACSA.2024.0151177</doi>
        <lastModDate>2024-11-29T12:18:19.6700000+00:00</lastModDate>
        
        <creator>Meshari Alwazae</creator>
        
        <creator>Saad Alghamdi</creator>
        
        <creator>Lulu Alobaid</creator>
        
        <creator>Bader Aljaber</creator>
        
        <creator>Reem Altwaim</creator>
        
        <subject>Graft failure; liver transplant; data mining; predictive model; classification; association rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The complex factors of liver transplant survival and the potential for post-transplant complications are significant challenges for healthcare professionals. This paper aims to assess the feasibility of using data mining techniques to develop a predictive model for liver transplant failure by identifying the relationship between abnormalities in patients&#39; periodic laboratory results and graft failure. The researchers obtained data from King Faisal Specialist Hospital and Research Centre to address the research problems. First, the classification technique was used to predict cases with a high risk of liver transplant failure. Second, Association Rules were applied to identify associations between abnormalities in patients’ laboratory results and transplant failure. Before applying the data mining algorithms, the patient dataset underwent a cleaning process, which involved removing duplicate entries and uncertain results. The algorithms were applied separately to the data of patients who completed the first year without complications and those who experienced transplant failure. The obtained results were then compared, and we observed that abnormal levels in Aspartate Transferase (AST), Red Blood Cell (RBC), Hemoglobin (Hgb), &#39;Bilirubin Total&#39;, and &#39;Platelet&#39; occurred exclusively in cases that faced liver transplant failure within the first year. Similarly, abnormal levels in &#39;AST&#39;, &#39;RBC&#39;, Alanine Aminotransferase (ALT), and &#39;Bilirubin Total&#39; were also associated with transplant failure.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_77-Predicting_Graft_Failure_Within_Year_After_Transplantation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer-Vision-Based Detection and Monitoring System for Mature Coconut Fruits with a Web Dashboard Visualization Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151175</link>
        <id>10.14569/IJACSA.2024.0151175</id>
        <doi>10.14569/IJACSA.2024.0151175</doi>
        <lastModDate>2024-11-29T12:18:19.6570000+00:00</lastModDate>
        
        <creator>Samfford S. Cabaluna</creator>
        
        <creator>Maria Fe P. Bahinting</creator>
        
        <creator>Leah A. Alindayo</creator>
        
        <subject>Coconut fruit maturity; coconut maturity detection; computer vision; crop monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The Philippines is the second largest producer of coconut products in the world, with 347 million trees planted on 3.6 million hectares of land across the country. Traditionally, harvesting coconuts in the Philippines is a labor-intensive process that involves manually climbing trees and chopping fruits, which carries a high risk of injury or even death. As a result, the number of expert coconut climbers has decreased. In response, current research has concentrated on creating robot harvesters. However, classifying mature coconut fruit is a major problem in the harvesting process that calls for a great deal of experience, patience, and work. Studies employing Convolutional Neural Networks (CNNs) have shown great accuracy in detecting coconut ripeness, although these efforts have been limited to detection without practical integration with harvesting equipment. Moreover, present research lacks a comprehensive solution that allows real-time data display and monitoring, such as the maturation stage of coconuts, via a web-based dashboard. This discrepancy emphasizes the requirement for systems that can not only identify the maturity of coconuts but also work with harvesting technologies and provide intuitive user interfaces for data display and decision-making. To fill these gaps, this study presents a computer-vision-based system that monitors and detects coconut fruit maturity, with an emphasis on mature coconuts, by utilizing the YOLOv8 model. With a Mean Average Precision (mAP50) of 99.5%, mAP50-95 of 89.5%, precision of 99.5%, and recall of 99.9%, the system demonstrated great accuracy. A web-based dashboard is also integrated into the system to provide monitoring and visualization of detected coconut fruits, along with notifications for fully ripe fruits.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_75-Computer_Vision_Based_Detection_and_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Contrast Enhancement Method on Hip X-ray Images as a Media for Detecting Hip Osteoarthritis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151174</link>
        <id>10.14569/IJACSA.2024.0151174</id>
        <doi>10.14569/IJACSA.2024.0151174</doi>
        <lastModDate>2024-11-29T12:18:19.6400000+00:00</lastModDate>
        
        <creator>Faisal Muttaqin</creator>
        
        <creator>Jamari</creator>
        
        <creator>R Rizal Isnanto</creator>
        
        <creator>Tri Indah Winarni</creator>
        
        <creator>Athanasius Priharyoto Bayuseno</creator>
        
        <subject>X-ray; image enhancement; digital image; image processing; grayscale image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Image enhancement is one of the most important areas that is being developed in the field of image processing technology. Image contrast enhancement can significantly improve the perception of the digital image itself. X-ray images are crucial in assisting physicians in the formulation of treatment decisions based on diagnostic information. Contrast enhancement techniques, including Histogram Equalization (HE), Contrast Limited Adaptive Histogram Equalization (CLAHE), and CLAHE with double Gamma Correction (CLAGAMTWO), have been utilized on 30 distinct image datasets. Among the three employed methods, the CLAGAMTWO approach yields the optimal values of SSIM = 0.850 and CNR = 0.773. CLAHE has superior performance with an Entropy value of 7.099. CLAGAMTWO is the superior approach overall, as evidenced by the average metric value, yielding optimal picture quality in visual structure (SSIM), information detail (Entropy), and crisp contrast with little noise (CNR).</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_74-Application_of_Contrast_Enhancement_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Painting Style Based on Image Feature Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151173</link>
        <id>10.14569/IJACSA.2024.0151173</id>
        <doi>10.14569/IJACSA.2024.0151173</doi>
        <lastModDate>2024-11-29T12:18:19.6230000+00:00</lastModDate>
        
        <creator>Yuting Sun</creator>
        
        <subject>Feature extraction; painting; style classification; ResNet50; attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Classifying painting styles helps viewers find the works they want to appreciate more conveniently, so it plays an important role. This paper realizes image feature extraction and classification of paintings based on ResNet50. On the basis of ResNet50, squeeze-and-excitation (SE) and convolutional block attention module (CBAM) attention mechanisms were introduced, and different activation functions were selected for improvement. The effect of this method on painting style classification was then studied using the Pandora dataset. It was found that ResNet50 obtained the best classification accuracy with a learning rate of 0.0001, a batch size of 32, and 50 iterations. After combining the CBAM attention mechanism, the accuracy rate was 65.64%, which was 6.77% higher than the original ResNet50 and 2.52% higher than ResNet50+SE. Under different activation functions, ResNet50+CBAM (CeLU) had the best performance, with an accuracy rate of 67.13%, and was also superior to other classification approaches such as Visual Geometry Group (VGG) 16. The findings prove that the proposed approach is applicable to the style classification of paintings and can be applied in practice.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_73-Classification_of_Painting_Style_Based_on_Image_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Mobile Language Learning App for Students with ADHD Using Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151172</link>
        <id>10.14569/IJACSA.2024.0151172</id>
        <doi>10.14569/IJACSA.2024.0151172</doi>
        <lastModDate>2024-11-29T12:18:19.6070000+00:00</lastModDate>
        
        <creator>Leonardo Paolo Cesias-Diaz</creator>
        
        <creator>Jorge Armando Laban-Hijar</creator>
        
        <creator>Juan Carlos Morales-Arevalo</creator>
        
        <subject>ADHD; augmented reality; language learning; technology in education; mobile application; design; prototype</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Attention Deficit Hyperactivity Disorder (ADHD) impedes learning through traditional teaching because it affects cognitive abilities such as executive function, memorization, and focus. How, then, can children with ADHD learn languages? This paper provides a solution by presenting the capabilities of a mobile language learning tool called AugmentedFocus, which is designed to support children with ADHD through the use of augmented reality. This approach allows personalized instruction, accompanied by salient augmented reality elements, so that learners, teachers, and administrators can use their mobile phones to engage with instructional materials. The objective is to evaluate the application’s design, architecture, prototype, testing results, and the adjustments made during its implementation, while discussing other features in context. More significantly, we aim to demonstrate how the mobile application can enhance engagement and retention in language learning among children with ADHD.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_72-Design_of_a_Mobile_Language_Learning_App_for_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things and Cloud Computing-Based Adaptive Content Delivery in E-Learning Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151171</link>
        <id>10.14569/IJACSA.2024.0151171</id>
        <doi>10.14569/IJACSA.2024.0151171</doi>
        <lastModDate>2024-11-29T12:18:19.5930000+00:00</lastModDate>
        
        <creator>Lili QIU</creator>
        
        <subject>Cloud computing; Internet of Things; adaptive content delivery; personalized learning; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In recent years, cloud computing and Internet of Things (IoT) technologies have reshaped e-learning, leading to adaptive content delivery tailored to learners&#39; needs. These paradigms have changed e-learning platforms by providing a scalable and flexible infrastructure for storing and processing large amounts of data. This enables seamless access to teaching materials and resources from anywhere and anytime, increasing the convenience and efficiency of online learning experiences. The convergence of cloud computing, IoT, and e-learning platforms is the heart of this study regarding how these technologies will work together to enable personalized educational experiences. We examine the principles, challenges, and developments in cloud-based adaptive content delivery and highlight the role of IoT data in understanding and incorporating learner preferences. In addition, we discuss possible future directions and implications for the further development of e-learning methods.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_71-Internet_of_Things_and_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Gradient Technique-Based Adaptive Multi-Agent Cloud-Based Hybrid Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151170</link>
        <id>10.14569/IJACSA.2024.0151170</id>
        <doi>10.14569/IJACSA.2024.0151170</doi>
        <lastModDate>2024-11-29T12:18:19.5770000+00:00</lastModDate>
        
        <creator>Mohammad Nadeem Ahmed</creator>
        
        <creator>Mohammad Rashid Hussain</creator>
        
        <creator>Mohammad Husain</creator>
        
        <creator>Abdulaziz M Alshahrani</creator>
        
        <creator>Imran Mohd Khan</creator>
        
        <creator>Arshad Ali</creator>
        
        <subject>Adaptive multi-agent; cloud-based; hybrid optimization; task scheduling; virtual machine migration; gradient technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Efficient virtual machine (VM) migration and task scheduling are crucial for optimal resource utilization and system performance in cloud computing. This paper introduces AMS-DDPG, a novel approach combining Deep Deterministic Policy Gradient (DDPG) with Adaptive Multi-Agent strategies to enhance resource allocation. To further refine AMS-DDPG&#39;s performance, we propose ICWRS, which integrates WSO (Workload Sensitivity Optimization) and RSO (Resource Sensitivity Optimization) techniques for parameter fine-tuning. Experimental evaluations demonstrate that ICWRS-enabled AMS-DDPG significantly outperforms traditional methods, achieving a 25% improvement in resource utilization and a 30% reduction in task completion time, thereby enhancing overall system efficiency. By merging nature-inspired optimization techniques with deep reinforcement learning, our research offers innovative solutions to the challenges of cloud resource allocation. Future work will explore additional optimization methods to further advance cloud system performance.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_70-A_Gradient_Technique_Based_Adaptive_Multi_Agent_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CCNet: CNN CapsNet-Based Hybrid Deep Learning Model for Diagnosing Plant Diseases Using Thermal Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151169</link>
        <id>10.14569/IJACSA.2024.0151169</id>
        <doi>10.14569/IJACSA.2024.0151169</doi>
        <lastModDate>2024-11-29T12:18:19.5600000+00:00</lastModDate>
        
        <creator>Hassan Al_Sukhni</creator>
        
        <creator>Qusay Bsoul</creator>
        
        <creator>Rami Hasan AL-Taani</creator>
        
        <creator>Fadi yassin Salem Al jawazneh</creator>
        
        <creator>Basma S. Alqadi</creator>
        
        <creator>Misbah Mehmood</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Tariq Ali</creator>
        
        <creator>Diaa Salama AbdElminaam</creator>
        
        <subject>CapsNet; classification; CNN; feature extraction; plant disease; thermal images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Diagnosing plant diseases at an early stage enables farmers, gardeners, and agricultural experts to manage and control the spread of illnesses in a timely and suitable manner. Traditional methods of plant disease diagnosis are expensive and might require significant manpower and advanced machinery. In addition, conventional methods such as visual inspection are prone to subjectivity, time constraints, and error. In comparison, the accuracy of computer-based methods such as machine learning in predicting plant diseases underscores the need for a transformative approach. However, by focusing solely on visual content and thermal images, these methods overlook the potential insights hidden within customer-posted images, which may lead to low accuracy. This study addresses these gaps by proposing an alternative methodology that relies on a hybrid deep learning framework called CCNET. The core of CCNET is the combined use of convolutional and capsule network models to obtain a better architecture for plant disease diagnosis. The proposed CCNET effectively amalgamates the strengths of convolutional layers for spatial feature extraction with the sequential modelling capabilities of CNN and CapsNet for capturing temporal dependencies within image data. The performance of CCNET has been evaluated through rigorous experimentation. The outcomes underscore the remarkable capability of the proposed model, with an accuracy of 94%. Compared to conventional methods, CCNET surpasses all of them in terms of precision, recall, F-score, and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_69-CCNet_CNN_CapsNet_Based_Hybrid_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Label Aspect-Sentiment Classification on Indonesian Cosmetic Product Reviews with IndoBERT Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151168</link>
        <id>10.14569/IJACSA.2024.0151168</id>
        <doi>10.14569/IJACSA.2024.0151168</doi>
        <lastModDate>2024-11-29T12:18:19.5470000+00:00</lastModDate>
        
        <creator>Ng Chin Mei</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <creator>Gita Sastria</creator>
        
        <subject>Aspect-based sentiment analysis; IndoBERT; multi-label classification; IndoBERTweet; problem transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>For an existing cosmetic company to expand, it is crucial to understand customers’ opinions of cosmetic products through product reviews. Aspect-based sentiment classification (ABSC), which consists of text representation and classification stages, is typically employed to automatically extract the insights of interest from reviews. Existing studies of ABSC primarily used single-label classification, which fails to capture relationships between multiple aspects in a review. Additionally, the use of contextual embeddings like IndoBERT for representing Indonesian-language cosmetic product reviews has been underexplored. This study addresses these issues by developing a multi-label classification model that leverages IndoBERT, including IndoBERT[b], IndoBERT[k], and IndoBERTweet, to better represent context and capture relationships across multiple aspects in a review. The model is trained and evaluated using a dataset of Indonesian-language cosmetic product reviews from Female Daily. The multi-label models can be constructed either by using IndoBERT directly as an end-to-end model or by employing IndoBERT solely as a word embedding model. The latter, also known as a conventional multi-label model, must be coupled with a problem transformation approach and a classifier for classification. A single-label classification model with Word2Vec serves as the baseline for assessing the improvement of the multi-label models’ performance on the Female Daily cosmetic product reviews dataset. The empirical results revealed that the multi-label approach was more effective in identifying sentiments for pre-defined aspects in reviews. Among the models, end-to-end IndoBERT[b] achieved the highest accuracy (86.98%), while the conventional multi-label model combining IndoBERT[b], Label Powerset (LP), and Support Vector Machine (SVM) performed best with 69.64%. This study is significant as it provides a more generalized understanding of BERT embeddings within the context of multi-label classification and explores the effect of contextual embedding in the cosmetic domain.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_68-Multi_Label_Aspect_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Yolov5-Based Attention Mechanism for Gesture Recognition in Complex Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151167</link>
        <id>10.14569/IJACSA.2024.0151167</id>
        <doi>10.14569/IJACSA.2024.0151167</doi>
        <lastModDate>2024-11-29T12:18:19.5300000+00:00</lastModDate>
        
        <creator>Deepak Kumar Khare</creator>
        
        <creator>Amit Bhagat</creator>
        
        <creator>R. Vishnu Priya</creator>
        
        <subject>Gesture recognition; Yolov5; object detection; attention mechanism; bidirectional feature pyramid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Object detection is a fundamental task in gesture recognition, involving identifying and localising human hand or body gestures within images or videos amidst varying environmental conditions. To address the inadequate recognition rate of gesture detection algorithms in intricate surroundings caused by issues such as inconsistent illumination, background colors resembling skin tones, and diminutive gesture scales, a gesture recognition approach termed HD-YOLOv5s is presented. An adaptive Gamma image enhancement preprocessing technique grounded in Retinex theory is employed to mitigate the effects of lighting variations on gesture recognition efficacy. A feature extraction network incorporating an adaptive convolutional attention mechanism (SKNet) is developed to augment the network&#39;s feature extraction efficacy and mitigate background interference in intricate situations. A novel bidirectional feature pyramid architecture is implemented in the feature fusion network to fully leverage low-level features, thereby minimizing the loss of shallow semantic information and enhancing the detection accuracy of small-scale gestures. A cross-level connection strategy is employed to enhance the model&#39;s detection efficiency. To assess the efficacy of the suggested technique, experiments were performed on a custom dataset featuring diverse lighting intensity fluctuations and the publicly available NUS-II dataset with intricate backdrops. The recognition rates attained were 99.5% and 98.9%, respectively, with a detection time per frame of about 0.01 to 0.02 seconds.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_67-Yolov5_Based_Attention_Mechanism_for_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Pretrained Deep Learning Models for Landfill Waste Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151166</link>
        <id>10.14569/IJACSA.2024.0151166</id>
        <doi>10.14569/IJACSA.2024.0151166</doi>
        <lastModDate>2024-11-29T12:18:19.5130000+00:00</lastModDate>
        
        <creator>Hussein Younis</creator>
        
        <creator>Mahmoud Obaid</creator>
        
        <subject>Waste management; deep learning; waste classification; real-waste dataset; performance comparison</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The escalating challenge of waste management, particularly in developed nations, necessitates innovative approaches to enhance recycling and sorting efficiency. This study investigates the application of Convolutional Neural Networks (CNNs) for landfill waste classification, addressing the limitations of traditional sorting methods. We conducted a performance comparison of five prevalent CNN models—VGG-16, InceptionResNetV2, DenseNet121, Inception V3, and MobileNetV2—using the newly introduced &quot;RealWaste&quot; dataset, comprising 4,752 labeled images. Our findings reveal that EfficientNet achieved the highest average testing accuracy of 96.31%, significantly outperforming other models. The analysis also highlighted common challenges in accurately distinguishing between metal and plastic waste categories across all models. This research underscores the potential of deep learning techniques in automating waste classification processes, thereby contributing to more effective waste management strategies and promoting environmental sustainability.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_66-Performance_Comparison_of_Pretrained_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Fuzzy Logic CRITIC Coupling Coordination Degree Evaluation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151165</link>
        <id>10.14569/IJACSA.2024.0151165</id>
        <doi>10.14569/IJACSA.2024.0151165</doi>
        <lastModDate>2024-11-29T12:18:19.5000000+00:00</lastModDate>
        
        <creator>Fangfang Hu</creator>
        
        <subject>Cultural and tourism integration; Yangtze River Economic Belt; coupling coordination degree; CRITIC algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The integrated development of culture and tourism in the Yangtze River Economic Belt is a strategic initiative to drive economic growth and regional coordination with culture and tourism at its core. The purpose of this paper is to evaluate the coupling coordination degree of the integrated development of culture and tourism in the Yangtze River Economic Belt. By analysing the integrated development of culture, tourism, and the economy, an evaluation index system is constructed in which 5 normative indicators and 19 basic indicators are defined from the cultural perspective, and 4 normative indicators and 10 basic indicators from the tourism perspective; based on this indicator system, the role and influence of the indicators in regional economic development are explored. The CRITIC algorithm is used to calculate the importance of the stratified indicators and the stratified evaluation results, and the coupling coordination degree of the research object is then calculated through the coupling coordination degree model. The results show that the 13 provinces (and municipalities directly under the central government) along the Yangtze River Economic Belt differ slightly in their degree of coordination, but even the lowest have reached the primary level of coordination. This also demonstrates the feasibility and necessity of the research method, which can provide theoretical and practical guidance for the integrated development of culture and tourism in the Yangtze River Economic Belt, and new ideas and methods for the development of local tourism along the way.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_65-Development_of_Fuzzy_Logic_CRITIC_Coupling_Coordination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Network Security Threat Detection and Defense</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151164</link>
        <id>10.14569/IJACSA.2024.0151164</id>
        <doi>10.14569/IJACSA.2024.0151164</doi>
        <lastModDate>2024-11-29T12:18:19.4830000+00:00</lastModDate>
        
        <creator>Jinjin Chao</creator>
        
        <creator>Tian Xie</creator>
        
        <subject>Network security; threat detection; defense; multilevel feature extraction; dynamic weight adjustment mechanism; interpretability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper introduces DeepNetGuard, an innovative deep learning algorithm designed to efficiently identify potential security threats in large-scale network traffic. DeepNetGuard achieves automated feature learning by fusing basic, statistical, and behavioral features through a multi-level feature extraction strategy, and is capable of identifying both short-time patterns and long-time dependencies. To adapt to the dynamic network environment, the algorithm introduces a dynamic weight adjustment mechanism that allows the model to self-optimize the importance of features based on real-time traffic changes. In addition, DeepNetGuard integrates autoencoder (AE) and generative adversarial network (GAN) technologies to not only detect known threats but also recognize unknown ones. By applying the attention mechanism, DeepNetGuard enhances the interpretability of the model, enabling security experts to track and understand the key factors in the model&#39;s decision-making process. Experimental evaluations show that DeepNetGuard performs well on multiple public datasets, with significant advantages in accuracy, recall, precision, and F1 score over traditional IDS systems and other deep learning models, demonstrating its strong performance in cyber threat detection.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_64-Deep_Learning_Based_Network_Security_Threat_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Replace Your Mouse with Your Hand! HandMouse: A Gesture-Based Virtual Mouse System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151163</link>
        <id>10.14569/IJACSA.2024.0151163</id>
        <doi>10.14569/IJACSA.2024.0151163</doi>
        <lastModDate>2024-11-29T12:18:19.4670000+00:00</lastModDate>
        
        <creator>Qiujiao Wang</creator>
        
        <creator>Zhijie Xie</creator>
        
        <subject>Virtual mouse; ergonomics; gesture; MediaPipe</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Existing gesture-based operating systems can only operate a single piece of software or a specific system, and are not compatible with other applications of mainstream operating systems. In this paper, based on the MediaPipe gesture recognition framework, we design HandMouse, a virtual mouse system operated by hand gestures. It has the following characteristics: 1. The user does not have contact with the computer hardware when using the system; 2. It requires only one hand to operate, and the design of the gestures considers the ergonomics of the hand; 3. It has most of the functions commonly used in a physical mouse; 4. It can locate the operating area with relative precision. We invited 27 participants to use and evaluate the virtual mouse and then conducted an experiment comparing its performance with that of a physical mouse. The results show that the virtual mouse is easy to learn and is a good alternative to the physical mouse in public places. A demonstration video of its operation is available at https://github.com/wanzhuxie/HandMouse-IJACSA.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_63-Replace_Your_Mouse_with_Your_Hand_HandMouse.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malicious Traffic Detection Algorithm for the Internet of Things Based on Temporal Spatial Feature Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151162</link>
        <id>10.14569/IJACSA.2024.0151162</id>
        <doi>10.14569/IJACSA.2024.0151162</doi>
        <lastModDate>2024-11-29T12:18:19.4370000+00:00</lastModDate>
        
        <creator>Linzhong Zhang</creator>
        
        <subject>Internet of Things; network security; temporal-spatial characteristics; traffic detection; fusion algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>With the rapid development of the Internet of Things, the security issues of its network environment have gradually attracted attention. To enable faster and more accurate identification and detection of malicious traffic attacks in the Internet of Things, an optimized malicious traffic detection algorithm based on fusion of temporal and spatial features is proposed. This method improves the feature extraction performance of traffic data and increases the accuracy of traffic detection. The test results showed that the comprehensive performance of the fusion algorithm was superior to the other four algorithms used for comparison. On the KDD99-CUP dataset, the F1 of the feature fusion algorithm reached 93.16%, while the F1 of algorithms 1-4 were 81.36%, 67.89%, 90.56%, and 92.24%, respectively. On the test set, 182 traffic samples were accurately identified, including 139 correctly identified malicious traffic and 43 correctly identified normal traffic, with recognition accuracy of 98.73% and 97.65%, respectively. Experimental results revealed that the use of fused feature extraction in traffic detection systems could improve detection efficiency and accuracy, providing a safer and more reliable guarantee for the interaction process of the Internet of Things network, and safeguarding the rapid development and application of the Internet of Things.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_62-Malicious_Traffic_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Energy Efficient Cloud Architectures for Edge Computing: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151161</link>
        <id>10.14569/IJACSA.2024.0151161</id>
        <doi>10.14569/IJACSA.2024.0151161</doi>
        <lastModDate>2024-11-29T12:18:19.4200000+00:00</lastModDate>
        
        <creator>TA Gamage</creator>
        
        <creator>Indika Perera</creator>
        
        <subject>Cloud computing; edge computing; energy efficiency; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Nowadays, edge computing and cloud computing are increasingly combined to produce computing solutions that are more effective, scalable, and adaptable. The proliferation of cloud infrastructures has drastically increased energy consumption, leading to the need for more research on optimizing energy efficiency for sustainable and efficient systems with reduced operational costs. In addition, the edge computing paradigm has gained wide attention during the last few decades due to the rise of Internet of Things (IoT) devices, the emergence of applications that require low latency, and the widespread demand for environmentally friendly computing. Moreover, lowering cloud-edge systems&#39; energy footprints is essential for fostering sustainability in light of growing concerns about environmental effects. This research presents a comprehensive review of strategies aimed at optimizing energy efficiency in cloud architectures designed for edge computing environments. Various strategies, including workload optimization, resource allocation, virtualization technologies, and adaptive scaling methods, have been identified as techniques widely utilized by contemporary research to reduce energy consumption while maintaining high performance. Furthermore, the paper investigates how advancements in machine learning and AI can be leveraged to dynamically manage resource distribution and energy-efficient enhancements in cloud-edge systems. In addition, challenges to the approaches for energy optimization are discussed in detail to provide further insights for future research. The conducted comprehensive review provides valuable insights for future research in the edge computing paradigm, particularly emphasizing the critical importance of enhancing energy efficiency in these systems.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_61-Optimizing_Energy_Efficient_Cloud_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Method in SEM Analysis Using the Apriori Algorithm to Accelerate the Goodness of Fit Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151160</link>
        <id>10.14569/IJACSA.2024.0151160</id>
        <doi>10.14569/IJACSA.2024.0151160</doi>
        <lastModDate>2024-11-29T12:18:19.4070000+00:00</lastModDate>
        
        <creator>Dien Novita</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <creator>Dian Palupi Rini</creator>
        
        <subject>APR-SEM; method; goodness of fit; traditional retail</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This research aims to develop a new method in Structural Equation Modelling (SEM) analysis using the Apriori algorithm to accelerate the achievement of Goodness of Fit models, focusing on traditional retail purchasing decision models in Indonesia, especially in Palembang. SEM is used to model causal relationships between variables that influence purchasing decisions in traditional retail. However, the Goodness of Fit model testing process takes a long time due to the complexity of the model. Therefore, this research uses the Apriori algorithm to filter variables that have a significant relationship in traditional retail purchasing decision models, reducing model complexity and speeding up Goodness of Fit calculations. There are two stages in the research. First, the Apriori algorithm identifies frequent item sets that appear among the variables influencing traditional retail consumer purchasing decisions, such as product, price, and location; this pattern becomes the basis for SEM modeling, focusing on the selected variables. In the second stage, the Goodness of Fit measures of the SEM model, namely GFI, RMSEA, AGFI, NFI, and CFI, are used to evaluate the suitability of the model explaining the factors that support traditional retail purchasing decisions in Palembang. The practical implications of this research are significant, as it provides a more efficient and effective method for modeling and understanding consumer behavior in the context of traditional retail. Based on other studies, a conventional SEM approach for this research would not meet the Goodness of Fit cut-off values. Meanwhile, the proposed method, which combines Apriori with SEM and is called APR-SEM, obtained a significant Goodness of Fit evaluation. The coefficient of determination of the APR-SEM model is R2 = 0.71, higher than that of the conventional model (R2 = 0.52). This method effectively simplifies the SEM model by identifying the most relevant relationships, thereby providing a clearer understanding of the critical factors influencing purchasing decisions in traditional retail in Palembang City.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_60-New_Method_in_SEM_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Diabetic Retinopathy Classification Using Geometric Augmentation and MobileNetV2 on Retinal Fundus Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151159</link>
        <id>10.14569/IJACSA.2024.0151159</id>
        <doi>10.14569/IJACSA.2024.0151159</doi>
        <lastModDate>2024-11-29T12:18:19.3730000+00:00</lastModDate>
        
        <creator>Helmi Imaduddin</creator>
        
        <creator>Adnan Faris Naufal</creator>
        
        <creator>Fiddin Yusfida A&#39;la</creator>
        
        <creator>Firmansyah</creator>
        
        <subject>Diabetic retinopathy; data augmentation; InceptionV3; MobileNetV2; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Diabetic retinopathy (DR) ranks among the foremost contributors to blindness worldwide, particularly affecting the adult demographic. Detecting DR at an early stage is crucial for preventing vision loss; however, conventional approaches like fundus examinations are often lengthy and reliant on specialized expertise. Recent developments in machine learning, especially the application of deep learning models, provide a highly effective option for classifying diabetic retinopathy through retinal fundus images. This investigation examines the efficacy of geometric data augmentation methods alongside MobileNetV2 for the classification of diabetic retinopathy. Utilizing augmentation techniques like image resizing, zooming, shearing, and flipping enhances the model&#39;s ability to generalize. MobileNetV2 is selected for its impressive inference speed and computational efficiency. This analysis evaluates the effectiveness of MobileNetV2 in relation to InceptionV3, emphasizing metrics such as accuracy, precision, sensitivity, and specificity. The findings show that MobileNetV2 attains exceptional performance, achieving an accuracy of 97%. These findings highlight the promise of employing efficient models and augmentation strategies in clinical settings for the early identification of DR. The findings highlight the critical need to incorporate advanced machine learning methods to enhance healthcare results and avert blindness caused by diabetic retinopathy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_59-Enhancing_Diabetic_Retinopathy_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Data-Driven APO Algorithms for Formative Assessment in English Language Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151158</link>
        <id>10.14569/IJACSA.2024.0151158</id>
        <doi>10.14569/IJACSA.2024.0151158</doi>
        <lastModDate>2024-11-29T12:18:19.3570000+00:00</lastModDate>
        
        <creator>Guojun Zhou</creator>
        
        <subject>Big data technology; APO algorithm; formative assessment in English language teaching; kernel extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study proposes an innovative approach for improving the accuracy and efficiency of formative assessment in English language teaching. The method integrates the Artificial Protozoa Optimization (APO) algorithm with the Kernel Extreme Learning Machine (KELM) to overcome limitations such as local optima in traditional models. The study utilizes data from five university-level English courses, consisting of 327 samples divided into a training set (70%), validation set (15%), and test set (15%). The APO-KELM model is constructed by optimizing the KELM parameters using the APO algorithm. Comparative analysis is conducted against other models, including ELM, KELM, WOA-KELM, PPE-KELM, and AOA-KELM, in terms of accuracy (RMSE), MAPE (Mean Absolute Percentage Error), and convergence speed. The results show that the APO-KELM model demonstrates superior performance with a Root Mean Square Error (RMSE) of 0.6204, compared to KELM (0.7210), WOA-KELM (0.6934), PPE-KELM (0.6762), and AOA-KELM (0.6451). In terms of MAPE, APO-KELM achieves 0.48, outperforming KELM (0.55), WOA-KELM (0.52), PPE-KELM (0.51), and AOA-KELM (0.49). Additionally, the APO-KELM model converged within 300 iterations, showing faster convergence than the other models. The integration of the APO algorithm with the KELM significantly enhances the accuracy and efficiency of formative assessment in English language teaching. The APO-KELM model is more accurate and faster than traditional models, making it a valuable tool for improving assessment systems. Future research should focus on refining the APO algorithm for broader applications in educational assessments.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_58-Applying_Data_Driven_APO_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBN-GRU Fusion and Decomposition-Optimisation-Reconstruction Algorithm in Advertising Traffic Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151157</link>
        <id>10.14569/IJACSA.2024.0151157</id>
        <doi>10.14569/IJACSA.2024.0151157</doi>
        <lastModDate>2024-11-29T12:18:19.3430000+00:00</lastModDate>
        
        <creator>Ronghua Zhang</creator>
        
        <subject>New media advertising traffic prediction; kernel principal component analysis; variational modal decomposition; quilt group algorithm; deep learning; decomposition-optimisation-reconstruction algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>As the premise and foundation of advertisement traffic selling and distribution, effective IPTV advertisement traffic prediction not only reduces operating costs but also raises the level of intelligence in new media advertisement traffic management. To further improve the accuracy of new media advertisement traffic prediction, this paper proposes a prediction method based on a hybrid decomposition-optimisation-integration framework: a hybrid model of a gated recurrent unit neural network and a deep belief network improved by the quilt group optimisation algorithm. Firstly, following the principle of system construction, the paper analyses the influencing factors and constructs a complete new media advertisement traffic prediction index system; secondly, it improves the optimisation of the deep belief network and gated recurrent unit network parameters using the quilt group optimisation algorithm and puts forward a new media advertisement traffic prediction method based on the decomposition-optimisation-integration framework; finally, the proposed method is evaluated on new media advertisement traffic data. The results show that the proposed method improves the accuracy of the prediction model and addresses the large prediction errors of existing new media advertisement traffic prediction methods.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_57-DBN_GRU_Fusion_and_Decomposition_Optimisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Surface Crack Detection Based on Improved YOLOv9 Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151156</link>
        <id>10.14569/IJACSA.2024.0151156</id>
        <doi>10.14569/IJACSA.2024.0151156</doi>
        <lastModDate>2024-11-29T12:18:19.3100000+00:00</lastModDate>
        
        <creator>Quanwu Li</creator>
        
        <creator>Shaopeng Duan</creator>
        
        <subject>Road crack; YOLOv9; deep learning; surveillance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Road surface crack detection is a critical task in road maintenance and safety management. Cracks in road surfaces are often the early indicators of larger structural issues, and if not detected and repaired in time, they can lead to more severe deterioration and increased maintenance costs. Effective and timely crack detection is essential to prolong road lifespan and ensure the safety of road users. This paper introduces CrackNet, an advanced crack detection model built upon the YOLOv9 architecture, which integrates a fusion attention module and task space disentanglement to enhance the accuracy and efficiency of road surface crack detection. Traditional methods often struggle with the complex and irregular nature of road cracks, as well as the challenge of distinguishing cracks from their backgrounds. CrackNet overcomes these challenges by leveraging an attention mechanism that highlights relevant features in both the channel and spatial dimensions while separating the tasks of classification and regression. This approach significantly reduces false negatives and improves localization accuracy. The effectiveness of CrackNet is validated through comparative analysis with other segmentation models, including Unet, SOLO v2, Mask R-CNN, and Deeplab v3+. CrackNet consistently outperforms these models in terms of F1 and Jaccard coefficients. This study highlights the critical role of accurate crack detection in minimizing maintenance costs and enhancing road safety.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_56-Road_Surface_Crack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Computing in Water Management: A KPCA-DeepESN and HOA-Optimized Framework for Urban Resource Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151155</link>
        <id>10.14569/IJACSA.2024.0151155</id>
        <doi>10.14569/IJACSA.2024.0151155</doi>
        <lastModDate>2024-11-29T12:18:19.2970000+00:00</lastModDate>
        
        <creator>Hanchao Liao</creator>
        
        <creator>Miyuan Shan</creator>
        
        <subject>KPCA Method; water supply and demand equilibrium; allocation of resources in urban water environment; optimization strategy for hiking; DeepESN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper presents a novel approach to optimizing urban water resource allocation by integrating Kernel Principal Component Analysis (KPCA) with a Deep Echo State Network (DeepESN), further optimized using the Hiking Optimization Algorithm (HOA). The proposed model addresses the issue of achieving an optimal balance between water supply and demand in urban environments, utilizing advanced machine learning techniques to enhance prediction accuracy and allocation efficiency. KPCA is employed to reduce the dimensionality of key water resource indicators, capturing nonlinear relationships in the dataset. DeepESN, a deep recurrent neural network model, is then applied to predict water consumption trends. HOA, a meta-heuristic algorithm inspired by hiker behavior, is used to fine-tune the DeepESN network parameters, ensuring faster convergence and higher accuracy. The experimental setup includes water resource data from January 2010 to December 2023, divided into training, testing, and validation sets. The model’s performance is compared with other approaches, such as PCA-DeepESN and standalone DeepESN. Results show that the KPCA-HOA-DeepESN model achieves the lowest prediction error and fastest convergence, making it a superior solution for urban water management. Optimized network parameters include a reservoir size of 140, a spectral radius of 0.3, an input scaling factor of 0.22, and a reservoir sparsity degree of 0.72. This study demonstrates the applicability of distributed computing techniques in water resource management by utilizing cloud-based data processing and real-time predictions. The proposed approach not only improves resource allocation but also showcases the potential for edge computing to enhance the responsiveness of water management systems.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_55-Edge_Computing_in_Water_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Optimal Features and Machine Learning Algorithms for Energy Yield Forecasting of a Rural Rooftop PV Installation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151154</link>
        <id>10.14569/IJACSA.2024.0151154</id>
        <doi>10.14569/IJACSA.2024.0151154</doi>
        <lastModDate>2024-11-29T12:18:19.2800000+00:00</lastModDate>
        
        <creator>Boris Evstatiev</creator>
        
        <creator>Katerina Gabrovska-Evstatieva</creator>
        
        <creator>Tsvetelina Kaneva</creator>
        
        <creator>Nikolay Valov</creator>
        
        <creator>Nicolay Mihailov</creator>
        
        <subject>PV yield; forecasting; machine learning; deep learning; features; solar radiation; ambient temperature; wind speed; hour of the day</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The stability and reliability of the electric grid strongly depend on the ability to schedule and forecast the energy output of all sources. Even though the share of photovoltaic installations in the energy mix is continuously increasing, they have one major drawback: their dependence on different environmental parameters, such as solar irradiance, ambient temperature, cloudiness, etc., which have a highly variable nature. Six machine learning algorithms are compared in this study regarding their ability to forecast the power generation of a rural rooftop photovoltaic installation using different combinations of the input data. The features selected for investigation are solar radiation, ambient temperature, and wind speed, obtained from a meteorological station, as well as two additional time-based variables: the hour of the day and the month of the year. During the validation and testing phases, four models performed better, namely the artificial neural network (ANN), k-nearest neighbors (kNN), decision tree (DT), and random forest (RF), with the ANN achieving the best results in all cases. The optimal combination of input data includes solar radiation, ambient temperature, wind speed, and hour of the day, though the difference with the other scenarios was small. The optimal ANN model achieved R2, MAE, and RMSE of 0.995, 6.71 Wh, and 13.7 Wh, respectively. The results obtained in this study indicate that the yield of PV installations located in rural areas can be forecasted with high accuracy using a limited amount of meteorological data.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_54-Evaluation_of_the_Optimal_Features_and_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Resilience Model Based on a Self Supervised Anomaly Detection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151153</link>
        <id>10.14569/IJACSA.2024.0151153</id>
        <doi>10.14569/IJACSA.2024.0151153</doi>
        <lastModDate>2024-11-29T12:18:19.2630000+00:00</lastModDate>
        
        <creator>Eko Budi Cahyono</creator>
        
        <creator>Suriani Binti Mohd Sam</creator>
        
        <creator>Noor Hafizah Binti Hassan</creator>
        
        <creator>Amrul Faruq</creator>
        
        <subject>Cybersecurity; anomaly detection; cyber resilience model; statistical machine learning; data generating process; bias variance alignment; likelihood ratios; self-supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Cyber resilience plays an important role in dealing with cybersecurity and business continuity uncertainty in the post-COVID-19 era. The fundamental problem of cyber resilience is the complexity of real-world problems. Therefore, it is necessary to reduce this complexity so that real-world problems become simple and easy to analyze through a cyber resilience model. The first part is the representational model, which utilizes world models; it exploits the stochastic nature of latent data to generate log-likelihood values through a data-generating process. The second part is the inference model, which draws conclusions from the observed log-likelihoods using a self-supervised anomaly detection approach. This relates to optimizing the decision boundary in anomaly detection, which is achieved by supervising two competing hypotheses based on bias-variance alignment and likelihood ratios. The optimization operates a dynamic threshold supervised by a supervisory signal from the underlying structure of the log-likelihoods. The paper contributes by studying the cyber resilience model from the perspective of statistical machine learning. It enhances the representational modeling of world models with the Gaussian mixture model for multimodal regression (GMMR). Additionally, it examines the issue of misleading log-likelihoods for out-of-distribution inputs caused by generalization error, and optimizes the decision boundary to minimize the generalization error with a new metric named the harmonic likelihood ratio (HLR). Finally, it aims to boost the performance of anomaly detection using self-supervised learning.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_53-Cyber_Resilience_Model_Based_on_a_Self_Supervised_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Data Acquisition in SCADA Systems: A JavaWeb and Swarm Intelligence-Based Optimization Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151152</link>
        <id>10.14569/IJACSA.2024.0151152</id>
        <doi>10.14569/IJACSA.2024.0151152</doi>
        <lastModDate>2024-11-29T12:18:19.2500000+00:00</lastModDate>
        
        <creator>Lingyi Sun</creator>
        
        <creator>Tieliang Sun</creator>
        
        <creator>Ruojia Xin</creator>
        
        <creator>Feng Yan</creator>
        
        <creator>Yue Li</creator>
        
        <creator>Hengyu Wang</creator>
        
        <creator>Yecen Tian</creator>
        
        <creator>Dongqing You</creator>
        
        <creator>Yun Liu</creator>
        
        <creator>Muhao Lv</creator>
        
        <subject>JAVAWeb; oil and gas pipelines; SCADA software; design analysis; bird foraging search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This paper aims to improve the accuracy and efficiency of SCADA software design and testing for oil and gas pipelines. It proposes a JavaWeb-based SCADA software solution optimized by the Bird Foraging Search (BFS) algorithm combined with an Echo State Network (ESN) for enhanced testing and analysis. A multi-tiered distributed SCADA software architecture based on the Java EE framework was designed to provide real-time data acquisition, monitoring, control, and data analysis. The BFS algorithm was used to optimize the hyperparameters of the ESN model to improve testing accuracy and convergence speed. The BFS-ESN model was compared with other optimization algorithms such as PSO and DE. Experimental results show that the BFS-ESN model achieved a testing accuracy of 97.33% and faster convergence within 700 iterations. It outperformed other algorithms in both accuracy and convergence speed. The JavaWeb-based SCADA software design for oil and gas pipelines is feasible, and the BFS-ESN model significantly enhances the accuracy and efficiency of SCADA software testing. This approach demonstrates the potential for application in SCADA systems, with future research needed to simplify the model and extend its applicability for large-scale deployment.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_52-Real_Time_Data_Acquisition_in_SCADA_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selecting the Best Machine Learning Models for Industrial Robotics with Hesitant Bipolar Fuzzy MCDM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151151</link>
        <id>10.14569/IJACSA.2024.0151151</id>
        <doi>10.14569/IJACSA.2024.0151151</doi>
        <lastModDate>2024-11-29T12:18:19.2330000+00:00</lastModDate>
        
        <creator>Chan Gu</creator>
        
        <creator>Bo Tang</creator>
        
        <subject>Machine Learning Model (MLM); Hesitant Bipolar Fuzzy Set (HBFS); Dual Hesitant Bipolar Fuzzy Set (DHBFS); Hesitant Bipolar Fuzzy Aggregation Operators (HBFAO); Dual Hesitant Bipolar Fuzzy Aggregation Operators (DHBFAO); Multi-Criteria Decision-Making (MCDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Machine learning models (MLMs) are used in industry to automate complicated activities, minimize human error, and improve decision-making by evaluating large volumes of data in real time. From managing inventory and quality control in the apparel and automotive industries to predicting equipment breakdowns and maintenance needs and detecting fraud in the finance sector, they provide predictive capabilities whose key advantages include cost reduction, higher productivity, better product quality, and tailored client experiences. MLMs help industries reduce downtime, prevent errors, and gain a competitive edge through data-driven strategies and the real-time processing of massive volumes of data. There is therefore a need to select the best MLMs for industrial robotics, and this paper addresses that problem as multiple-criteria decision-making (MCDM) by exploiting hesitant bipolar fuzzy information, which takes into account both hesitation and bipolarity in decision-maker preferences. The paper introduces new aggregation operators (AOs) based on geometric and arithmetic procedures to aggregate the data efficiently, including the hesitant bipolar fuzzy weighted geometric operator (HBFWGO), which is appropriate for multiplicative relationships, and the hesitant bipolar fuzzy weighted average operator (HBFWAO), which gives weighted importance to attributes. Further, the dual operators, including the dual hesitant bipolar fuzzy weighted geometric operator (DHBFWGO) and the dual hesitant bipolar fuzzy weighted average operator (DHBFWAO), are presented and applied to create novel strategies for resolving MCDM issues, offering a methodical way to assess and combine features. Moreover, an example of selecting the optimal MLMs is presented to show the robustness and efficiency of the suggested methodology, illustrating its applicability and strength in actual decision-making situations.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_51-Selecting_the_Best_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color Matching and Light and Shadow Processing in Intelligent Interior Environment Art Design Analysis and Application Based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151150</link>
        <id>10.14569/IJACSA.2024.0151150</id>
        <doi>10.14569/IJACSA.2024.0151150</doi>
        <lastModDate>2024-11-29T12:18:19.2170000+00:00</lastModDate>
        
        <creator>Ji Yang</creator>
        
        <creator>Meifen Song</creator>
        
        <subject>Interior environment design; color matching; virtual reality; neural network; light and shadow processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In recent years, the application of Virtual Reality (VR) technology in the field of interior environmental design has expanded significantly, offering designers innovative methods to present complex design concepts within virtual spaces. However, current color matching and light and shadow processing are not yet mature, the deep learning algorithms applied in VR remain relatively basic with low running efficiency, and the consistency and authenticity of virtual environments are not yet stable. This paper explores the integration of color matching and light-shadow processing in interior environmental design within VR technology, with a particular emphasis on leveraging neural network models to achieve automated design optimization. By incorporating deep learning algorithms, this study proposes a neural network-based approach to enhance color matching and light-shadow processing, aiming to improve the realism and aesthetic appeal of virtual environments. Experimental results demonstrate that this method offers substantial advantages in terms of color matching accuracy, naturalness of light-shadow effects, and computational efficiency, highlighting its broad potential for application in virtual reality.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_50-Color_Matching_and_Light_and_Shadow_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing CatBoost Model: AI-based Analysis on Rail Transit Figma Platform Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151149</link>
        <id>10.14569/IJACSA.2024.0151149</id>
        <doi>10.14569/IJACSA.2024.0151149</doi>
        <lastModDate>2024-11-29T12:18:19.2030000+00:00</lastModDate>
        
        <creator>Ruobing Li</creator>
        
        <creator>Hong Qian</creator>
        
        <subject>Rail transport; figma platform innovation design; intelligent analysis and evaluation algorithm; frilled lizard optimisation algorithm; CatBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The research introduces a novel approach that utilizes the Frilled Lizard Optimization (FLO) algorithm to enhance the hyperparameters of the CatBoost model. First, the Figma platform is analyzed in terms of its innovative design applications in rail transit. Then, the FLO algorithm is applied to optimize the CatBoost model, improving its accuracy in detecting foreign objects on rail tracks. Experiments were conducted using a dataset of 6,000 images from rail transit scenarios, divided into seven categories such as left-turning track, straight track, train, pedestrians, and others. The results showed that the FLO-CatBoost model demonstrated superior performance in accuracy, achieving a Root Mean Square Error (RMSE) of 0.274, significantly outperforming other algorithms like TSA, MPA, and RSA. Furthermore, FLO-CatBoost reduced the Mean Absolute Percentage Error (MAPE) and showed better efficiency in evaluation time. Finally, the FLO-CatBoost model significantly enhances the design and evaluation processes for intelligent rail transit systems on the Figma platform, providing higher accuracy and efficiency in detecting foreign objects and improving system design performance.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_49-Optimizing_CatBoost_Model_AI_Based _Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection Methods Using RBFNN-Based to Enhance Air Quality Prediction: Insights from Shah Alam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151148</link>
        <id>10.14569/IJACSA.2024.0151148</id>
        <doi>10.14569/IJACSA.2024.0151148</doi>
        <lastModDate>2024-11-29T12:18:19.1870000+00:00</lastModDate>
        
        <creator>Siti Khadijah Arafin</creator>
        
        <creator>Ahmad Zia Ul-Saufie</creator>
        
        <creator>Nor Azura Md Ghani</creator>
        
        <creator>Nurain Ibrahim</creator>
        
        <subject>Lasso; mRMR; PM2.5 concentration; RBFNN; ReliefF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study examines the predictive efficiency of several feature selection approaches in air quality models aimed at predicting next-day PM2.5 concentrations in Shah Alam, Malaysia. Air pollution in urban areas is a significant public health concern, and accurate prediction models are essential for timely interventions. However, determining the most important parameters to include in these models remains difficult, especially in complex urban areas with several pollution sources. To address this, we employed three different feature selection methods and applied them to a dataset comprising 43,824 air quality data points provided by the Department of Environment Malaysia. The dataset contained ten variables, such as gas pollutants and meteorological indicators. Each feature selection approach determined the top eight variables to include in a Radial Basis Function Neural Network (RBFNN) model. The results showed that ReliefF outperformed Lasso and mRMR in terms of accuracy, specificity, precision, F1 Score, and AUROC, making it the most effective feature selection method for this study. This study contributes to the body of knowledge on air quality modelling by emphasising the relevance of using proper feature selection techniques suited to the specific characteristics of the dataset and urban area. Furthermore, it proposes that future studies should look into the use of ReliefF-RBFNN in other settings, such as suburban and rural areas, as well as hybrid feature selection approaches to improve prediction performance across several contexts.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_48-Feature_Selection_Methods_Using_RBFNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Random Forest Algorithm for HR Data Classification and Performance Analysis in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151147</link>
        <id>10.14569/IJACSA.2024.0151147</id>
        <doi>10.14569/IJACSA.2024.0151147</doi>
        <lastModDate>2024-11-29T12:18:19.1700000+00:00</lastModDate>
        
        <creator>Fangfang Dong</creator>
        
        <subject>Random forest algorithm; business; human resources; data classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This study applies the Random Forest algorithm to classify and evaluate the effectiveness of business human resources (HR) data, focusing on its potential in supporting strategic decision-making and enhancing organizational efficiency. The research introduces a model that automates the categorization of HR data, including employee records, performance evaluations, and training activities, using the Random Forest method. By constructing both classification and effectiveness assessment models, the study aims to provide businesses with a robust tool for managing and evaluating employee contributions. Key HR metrics were analyzed and categorized, leading to the creation of an effectiveness evaluation model that offers objective insights into employee performance. The Random Forest algorithm’s accuracy and stability were validated through cross-validation techniques, proving it to be effective in categorizing employee data and identifying different workforce groups. The models developed in this study are designed to support HR managers in optimizing human resource allocation, improving employee satisfaction, and driving overall business performance. The paper also discusses how the model can be optimized further by expanding data sources and applying it to practical business scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_47-Random_Forest_Algorithm_for_HR_Data_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Distributed MobileNetV2 with Auto-CLAHE for Eye Region Drowsiness Detection in Low Light Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151146</link>
        <id>10.14569/IJACSA.2024.0151146</id>
        <doi>10.14569/IJACSA.2024.0151146</doi>
        <lastModDate>2024-11-29T12:18:19.1570000+00:00</lastModDate>
        
        <creator>Farrikh Alzami</creator>
        
        <creator>Muhammad Naufal</creator>
        
        <creator>Harun Al Azies</creator>
        
        <creator>Sri Winarno</creator>
        
        <creator>Moch Arief Soeleman</creator>
        
        <subject>Driver drowsiness detection; Auto-CLAHE; time distributed; MobileNetV2; eye region analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Driver drowsiness is a critical factor in road safety, contributing significantly to traffic accidents. This study proposes an innovative approach integrating Auto-CLAHE with Time Distributed MobileNetV2 to enhance drowsiness detection accuracy. This study leveraged the ULg Multimodality Drowsiness Database (DROZY) for facial expression analysis, focusing on the eye region. The methodology involved segmenting videos into 10-second intervals, extracting 20 images per segment, and applying the Haar Cascade method for eye region detection. The Auto-CLAHE technique was developed to dynamically adjust contrast enhancement parameters based on image characteristics. The analysis yielded promising results. Integrating Auto-CLAHE with Time Distributed MobileNetV2 achieved a classification accuracy of 93.62%, outperforming traditional methods including Greyscale (92.55%), AHE (92.91%), and CLAHE (91.13%). Notably, the model obtained a precision of 93.71% in detecting drowsiness, a recall of 93.62%, and an F1 score of 93.59%. Statistical analysis using ANOVA and Tukey HSD tests confirmed the significance of the present study&#39;s results. The key innovation of this study is the implementation of Auto-CLAHE, which significantly improves image contrast adaptation. This approach surpasses AHE and basic CLAHE in drowsiness detection performance, demonstrating remarkable robustness across diverse lighting conditions and facial expressions.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_46-Time_Distributed_MobileNetV2_with_Auto_CLAHE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting GPS Spoofing Attacks Using Corrected Low-Cost INS Data with an LSTM Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151145</link>
        <id>10.14569/IJACSA.2024.0151145</id>
        <doi>10.14569/IJACSA.2024.0151145</doi>
        <lastModDate>2024-11-29T12:18:19.1400000+00:00</lastModDate>
        
        <creator>Mohammed AFTATAH</creator>
        
        <creator>Khalid ZEBBARA</creator>
        
        <subject>Secure navigation; GPS spoofing; inertial systems; LSTM; M-of-N method; anti-spoofing techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>With the emergence of new technologies ranging from smart cities to the Internet of Things (IoT), many objects rely on satellite-based navigation systems, such as GPS, to accomplish their tasks securely. However, GPS receivers are exposed to various unintentional and intentional attacks, threatening the availability and reliability of the delivered information. GPS spoofing is considered one of the most dangerous attacks, where attackers transmit intense signals on the same frequency to disrupt the GPS receiver, leading to erroneous position calculations. Detection methods for GPS spoofing are crucial to ensure secure navigation. This paper proposes a method for GPS spoofing detection that utilizes artificial intelligence algorithms in combination with raw data from an inertial navigation system (INS). Since INS sensors are prone to accumulating errors over time, these inaccuracies are corrected via a Long Short-Term Memory (LSTM) algorithm. The corrected accelerations and angular rates are then compared to the accelerations and angular rates estimated from the GPS data to detect GPS spoofing signals. This comparison uses the modified M-of-N method, whose effectiveness is demonstrated by a detection rate reaching 80% of the spoofing zones.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_45-Detecting_GPS_Spoofing_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smoke Source Location Method Based on Deep Learning Smoke Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151144</link>
        <id>10.14569/IJACSA.2024.0151144</id>
        <doi>10.14569/IJACSA.2024.0151144</doi>
        <lastModDate>2024-11-29T12:18:19.1230000+00:00</lastModDate>
        
        <creator>Yuanpan ZHENG</creator>
        
        <creator>Zeyuan HUANG</creator>
        
        <creator>Hui WANG</creator>
        
        <creator>Binbin CHEN</creator>
        
        <creator>Chao WANG</creator>
        
        <creator>Yu ZHANG</creator>
        
        <subject>Smoke segmentation; smoke source detection; deep learning; instance segmentation; mathematical modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The generation of smoke is an early warning sign of a fire, and fast, accurate detection of smoke sources is crucial for fire prevention. However, due to the strong diffusivity of smoke, its morphology is easily influenced by environmental factors, and in complex real-world scenarios, smoke sources are often obscured. Current methods lack precision, generalization ability, and robustness in complex environments. With the advancement of deep learning-based smoke segmentation technology, new approaches to smoke source localization have emerged. Smoke segmentation, driven by deep learning models, can accurately capture the morphological characteristics of smoke. This paper proposes a precise and robust smoke source localization method based on deep learning-enabled smoke segmentation. We first conducted experimental evaluations of commonly used deep learning segmentation models and selected the best-performing model as input. Based on the segmentation results, we analyzed the diffusion characteristics and transmittance of smoke, constructed a concentration model, and used it to accurately locate the smoke source. Experimental results demonstrate that, compared with existing methods, this approach maintains high localization accuracy in multi-target smoke scenarios and complex environments, with superior generalization ability and robustness.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_44-A_Smoke_Source_Location_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Multiple-Attribute Decision-Making with Interval-Valued Neutrosophic Sets: Diverse Applications in Evaluating English Teaching Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151143</link>
        <id>10.14569/IJACSA.2024.0151143</id>
        <doi>10.14569/IJACSA.2024.0151143</doi>
        <lastModDate>2024-11-29T12:18:19.1070000+00:00</lastModDate>
        
        <creator>Lijuan Zhao</creator>
        
        <creator>Shuo Du</creator>
        
        <subject>Multiple-attribute decision-making; interval-valued neutrosophic sets (IVNSs); Aczel-Alsina operations; teaching quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The evaluation of college English teaching quality is a key method for systematically analyzing and providing feedback on the teaching process and outcomes. It aims to comprehensively assess the effectiveness of teaching, student learning outcomes, and the appropriateness of the course design. The evaluation typically covers aspects such as teaching methods, classroom atmosphere, student engagement, use of teaching resources, and learning achievements. By collecting data from student feedback, teaching supervision, and exam results, the evaluation helps to improve teaching strategies, enhance students&#39; English proficiency, and ultimately achieve continuous optimization and improvement of teaching quality. The teaching quality evaluation of college English is viewed as multiple-attribute decision-making (MADM). In this paper, some Aczel-Alsina operators are constructed under interval-valued neutrosophic sets (IVNSs). Then, the interval-valued neutrosophic number (IVNN) Aczel-Alsina weighted averaging (IVNNAAWA) operator is employed to cope with the MADM problem. Finally, a numerical decision example for teaching quality evaluation of college English is employed to illustrate the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_43-Enhancing_Multiple_Attribute_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of the AuRa Consensus Algorithm for Digital Certificate Processes on the Ethereum Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151142</link>
        <id>10.14569/IJACSA.2024.0151142</id>
        <doi>10.14569/IJACSA.2024.0151142</doi>
        <lastModDate>2024-11-29T12:18:19.0930000+00:00</lastModDate>
        
        <creator>Robiah Arifin</creator>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Evizal Abdul Kadir</creator>
        
        <subject>Blockchain; Ethereum; consensus algorithm; smart contract; AuRa_ori; AuRa_v1</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The blockchain serves as a distributed database where data is stored across different servers and networks. It encompasses various types, with Bitcoin, Ethereum, and Hyperledger being notable examples. To safeguard the security of data transactions, the blockchain relies on a consensus algorithm, which facilitates agreement among the nodes within the network. There are multiple types of consensus algorithms, each possessing unique specialties and characteristics. This paper delves into the examination of a specific Authority Round consensus algorithm, here termed AuRa_ori. AuRa_ori is a specific type of PoA consensus mechanism used primarily in private or permissioned blockchain networks. It works by having a set of trusted validators take turns in round-robin fashion to produce new blocks, and it is supported by the Parity and Ethereum clients. AuRa_ori assumes that all the authority nodes are synchronised and honest in every transaction process. In AuRa_ori, every transaction executes four phases: assigning a new leader, proposing a block, commencing agreement, and finally committing. However, discrepancies exist in some of these phases. In response, this paper presents a thorough discussion of the vulnerabilities inherent in the AuRa phases of transaction execution, focusing on the first phase (assigning a new leader) and the third phase (agreement). These vulnerabilities risk impacting the performance of Transaction Speed Per Second (TPS), Transaction Throughput (TGS), Percentage Decrease (PD) of TPS, and Percentage Increase (PI) of TGS. A new, improved method named AuRa_v1 is presented in parallel to overcome the vulnerabilities of AuRa_ori in the selected phases. It aims to increase the TPS and to calculate the PD in the transaction process using an Ethereum private blockchain system. The implementation used three sets of scroll-certificate data. The results showed that AuRa_v1 was able to decrease the TPS by almost 30% across the different data set sizes.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_42-Performance_Evaluation_of_the_AuRa_Consensus_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Machine Learning Model for Predictive Maintenance on Water Injection Pumps in the Oil and Gas Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151141</link>
        <id>10.14569/IJACSA.2024.0151141</id>
        <doi>10.14569/IJACSA.2024.0151141</doi>
        <lastModDate>2024-11-29T12:18:19.0770000+00:00</lastModDate>
        
        <creator>Salama Mohamed Almazrouei</creator>
        
        <creator>Fikri Dweiri</creator>
        
        <creator>Ridvan Aydin</creator>
        
        <creator>Abdalla Alnaqbi</creator>
        
        <subject>Ensemble machine learning models; oil and gas industry; predictive maintenance; water injection pumps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The effective operation of water injection pumps is vital for enhancing oil recovery in the oil and gas industry. To ensure optimal pump performance and prevent unplanned downtime, this study focused on implementing predictive maintenance strategies. We began by identifying five critical operational parameters—Seal Pressure 1, Seal Pressure 2, Vibration Data for the Drive End (VIB DE), Vibration Data for the Non-Drive End (VIB NDE), and Ampere. These parameters were monitored and analyzed to evaluate their impact on pump performance and maintenance needs. To achieve this, we applied three machine learning algorithms: Extreme Gradient Boosting (XGBoost), Light Gradient-Boosting Machine (LGBM), and Random Forest. Each algorithm was independently trained and tested on the dataset corresponding to each operational parameter. We assessed their performance using key accuracy metrics, including R squared, Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). Following this, we developed an Ensemble model, combining the predictive outputs of XGBoost, LGBM, and Random Forest. The Ensemble model was then applied to the same parameters to evaluate its ability to address the limitations observed in standalone models. The results demonstrated that the Ensemble model consistently delivered superior performance, achieving lower RMSE and MAE values and higher R squared coefficients across all parameters. This study culminates in the validation of the Ensemble model as a robust and reliable approach for predictive maintenance. By leveraging the strengths of multiple algorithms, the Ensemble model offers significant improvements in accuracy and reliability, contributing to more effective maintenance systems for the oil and gas industry.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_41-An_Ensemble_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Factor Risk Assessment and Route Optimization for Safe Human Travel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151140</link>
        <id>10.14569/IJACSA.2024.0151140</id>
        <doi>10.14569/IJACSA.2024.0151140</doi>
        <lastModDate>2024-11-29T12:18:19.0600000+00:00</lastModDate>
        
        <creator>Thilagavathi T</creator>
        
        <creator>Subashini A</creator>
        
        <subject>Multi-factor risk assessment; route optimization; human travel safety; static and dynamic parameters; risk factor analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In the modern world, frequent travel has become a necessity, with vehicles being the primary mode of transportation. Ensuring human safety while traveling is paramount. To address this, it is essential to adopt a combination of numerous static and dynamic parameters to achieve optimal route design in today’s complex transportation systems. This study introduces a methodology titled &#39;Multi-Factor Risk Assessment and Route Optimization for Safe Human Travel&#39;, which consists of three stages: Route Optimization, Risk Factor Analysis, and Data Collection. To assess the safety of various routes, a combination of dynamic and static factors is considered. These include traffic, weather, and road conditions, as well as vehicle-related factors such as type, age, and the surrounding road environment. By analyzing simulated data, the technique identifies potential risks and optimizes travel paths accordingly. For segmented routes, risk factors are calculated using both static and dynamic parameters, ensuring a comprehensive safety assessment. Prioritizing user safety, the system dynamically adjusts routes to offer the most cost-effective and safest travel options. This study lays a robust foundation for intelligent transportation systems, aimed at ensuring safer travel for users across a range of scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_40-Multi_Factor_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Stock Price Bubbles in China Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151139</link>
        <id>10.14569/IJACSA.2024.0151139</id>
        <doi>10.14569/IJACSA.2024.0151139</doi>
        <lastModDate>2024-11-29T12:18:19.0470000+00:00</lastModDate>
        
        <creator>Yunxi Wang</creator>
        
        <creator>Tongjai Yampaka</creator>
        
        <subject>Stock price bubbles; machine learning; Chinese stock market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Financial bubbles have long been a focus of researchers, particularly due to the severe negative impacts that follow their bursting. Therefore, the ability to effectively predict financial bubbles is of paramount importance. The aim of this study is to measure and predict the stock market price bubble in China from January 2015 to December 2023. To achieve this, we utilized the GSADF test, currently the most effective method available, to identify and measure stock market price bubbles in China. Subsequently, we selected the inflation rate, consumer confidence index, stock yield, and price-earnings ratio as explanatory/predictive variables. Finally, four machine learning methods were employed to forecast the stock market price bubble in China. The results indicate that a price bubble occurred in the Chinese stock market during the first half of 2015, before the outbreak of the COVID-19 pandemic in China in January 2020. Furthermore, the comparison reveals that among the machine learning methods, logistic regression is the most suitable and effective for China, while other methods such as deep learning and decision trees also hold certain value.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_39-Predicting_Stock_Price_Bubbles_in_China.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Real-Time Smoke Detection Model Based on RT-DETR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151138</link>
        <id>10.14569/IJACSA.2024.0151138</id>
        <doi>10.14569/IJACSA.2024.0151138</doi>
        <lastModDate>2024-11-29T12:18:19.0300000+00:00</lastModDate>
        
        <creator>Yuanpan ZHENG</creator>
        
        <creator>Zeyuan HUANG</creator>
        
        <creator>Binbin CHEN</creator>
        
        <creator>Chao WANG</creator>
        
        <creator>Yu ZHANG</creator>
        
        <subject>RT-DETR; smoke detection; deformable convolution; multi-scale feature fusion; EIoU; image enhancement; dark channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Fire remains a major threat to society and economic activities. Given the real-time demands of smoke detection, most research in deep learning has focused on Convolutional Neural Networks. The Real-Time Detection Transformer (RT-DETR) introduces a promising alternative for this task. This paper extends RT-DETR to address challenges such as morphological variations and interference in smoke detection by proposing the Realtime Smoke Detection Transformer (RS-DETR). RS-DETR uses smoke images with concentration data as input and employs a deformable attention module to manage morphological changes, enabling robust feature extraction. Additionally, a Cross-Scale Smoke Feature Fusion Module (CS-SFFM) is integrated to enhance detection accuracy for small and thin smoke targets through multi-scale feature resampling and fusion. To improve convergence speed and stability, Efficient Intersection over Union (EIoU) replaces Generalized Intersection over Union (GIoU) in feature scoring. The improved model achieves an average precision of 93.9% on a custom dataset, representing a 5.7% improvement over the original model, and demonstrates excellent performance across various detection scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_38-Improved_Real_Time_Smoke_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Hydroponic Growth Simulation for Lettuce Using ARIMA and Prophet Models During Rainy Season in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151137</link>
        <id>10.14569/IJACSA.2024.0151137</id>
        <doi>10.14569/IJACSA.2024.0151137</doi>
        <lastModDate>2024-11-29T12:18:19.0130000+00:00</lastModDate>
        
        <creator>Lendy Rahmadi</creator>
        
        <creator>Hadiyanto</creator>
        
        <creator>Ridwan Sanjaya</creator>
        
        <subject>ARIMA; automated; growth; hydroponic; prophet; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Hydroponic farming, particularly lettuce cultivation, is gaining popularity in Indonesia due to its economical use of water and space, as well as its short growing season. This study focuses on the development of an Automated Hydroponic Growth Simulation for Lettuce Using ARIMA and Prophet Models during the Rainy Season in Indonesia. We developed a simulation model for lettuce development in the Nutrient Film Technique (NFT) hydroponic system using data collected over four harvest periods during the rainy season in early 2024. Two machine learning models, ARIMA and Prophet, were tested to determine which is more effective at forecasting lettuce growth. The Prophet model achieved the best results, with a Mean Absolute Error (MAE) of 1.475 and a Root Mean Square Error (RMSE) of 1.808. Based on this, the Prophet model was used to create a web application with Streamlit for real-time growth predictions. Future studies should include more data, particularly from the dry season, to increase model flexibility, as well as investigate the use of other crops and machine learning methods, including hybrid models, to improve forecasts.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_37-Automated_Hydroponic_Growth_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing the Usability of M-Health Applications: A Comparison of Usability Testing, Heuristics Evaluation and Cognitive Walkthrough Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151136</link>
        <id>10.14569/IJACSA.2024.0151136</id>
        <doi>10.14569/IJACSA.2024.0151136</doi>
        <lastModDate>2024-11-29T12:18:19.0000000+00:00</lastModDate>
        
        <creator>Obead Alhadreti</creator>
        
        <subject>Mobile health applications; usability; usability testing; heuristics evaluation; cognitive walkthrough</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Mobile health applications have increasingly become an important channel for providing services in the health sector. However, poor usability can be a major barrier to the rapid adoption of mobile services. The purpose of this study is to compare the relative performance of three usability evaluation methods, namely usability testing, heuristic evaluation, and the cognitive walkthrough, in determining the usability level of mobile health applications. The study also explores the relationship between usability testing metrics and the current level of mobile health applications in Saudi Arabia. An experimental approach was used in this study, gathering qualitative and quantitative data. The methods were used to assess two mobile health interfaces and were compared on the number, severity, and types of usability problems identified. Correlation tests were also carried out to examine areas of overlap between usability testing metrics. The heuristic evaluation found significantly greater numbers of usability problems than the other techniques. The usability testing method, however, detected problems of greater severity. There was also a significant correlation between the number of usability issues found and the time taken to perform tasks in usability tests. Moreover, the level of usability of the Saudi applications tested is below expectations and in need of further improvement. Based on the study results, both usability testing and heuristic evaluation should be employed during the design process of mobile health applications for maximum effectiveness. Additionally, it is recommended that SUS questionnaires should not be the sole method of determining the usability level of mobile health applications.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_36-Assessing_the_Usability_of_M_Health_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Deep Learning for Diabetic Retinopathy Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151135</link>
        <id>10.14569/IJACSA.2024.0151135</id>
        <doi>10.14569/IJACSA.2024.0151135</doi>
        <lastModDate>2024-11-29T12:18:18.9830000+00:00</lastModDate>
        
        <creator>Krit Sriporn</creator>
        
        <creator>Cheng-Fa Tsai</creator>
        
        <creator>Li-Jia Rong</creator>
        
        <creator>Paohsi Wang</creator>
        
        <creator>Tso-Yen Tsai</creator>
        
        <creator>Chih-Wen Chen</creator>
        
        <subject>Diabetic retinopathy; deep learning; image processing technologies; imbalanced image dataset; computer aided diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The detection of diabetic retinopathy traditionally requires the expertise of medical professionals, making manual detection both time- and labor-intensive. To address these challenges, numerous studies in recent years have proposed automatic detection methods for diabetic retinopathy. This research focuses on applying deep learning and image processing techniques to overcome the issue of performance degradation in classification models caused by imbalanced diabetic retinopathy datasets. It presents an efficient deep learning model aimed at assisting clinicians and medical teams in diagnosing diabetic retinopathy more effectively. In this study, image processing techniques, including image enhancement, brightness correction, and contrast adjustment, are employed as preprocessing steps for fundus images of diabetic retinopathy. A fusion technique combining color space conversion, contrast limited adaptive histogram equalization, multi-scale retinex with color restoration, and Gamma correction is applied to highlight retinal pathological features. Deep learning models such as ResNet50-V2, DenseNet121, Inception-V3, Xception, MobileNet-V2, and InceptionResNet-V2 were trained on the preprocessed datasets. For the APTOS-2019 dataset, DenseNet121 achieved the highest accuracy at 99% for detecting diabetic retinopathy. On the Messidor-2 dataset, InceptionResNet-V2 demonstrated the best performance, with an accuracy of 96%. The overall aim of this research is to develop a computer-aided diagnosis system for classifying diabetic retinopathy.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_35-Optimizing_Deep_Learning_for_Diabetic_Retinopathy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Liver Disease Using Conventional Tree-Based Machine Learning Approaches with Feature Prioritization Using a Heuristic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151134</link>
        <id>10.14569/IJACSA.2024.0151134</id>
        <doi>10.14569/IJACSA.2024.0151134</doi>
        <lastModDate>2024-11-29T12:18:18.9670000+00:00</lastModDate>
        
        <creator>Proloy Kumar Mondal</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Liver disease; classification; prediction; CatBoost algorithm; machine learning; optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Liver disease ranks as one of the leading causes of mortality globally, often going undetected until advanced stages. This study aims to enhance early detection of liver disease by employing machine learning models that utilize key health indicators. Utilizing the Indian Liver Patient Dataset (ILPD) from the UCI repository, we developed a predictive model using the CatBoost algorithm, achieving an initial accuracy of 74%. To improve this, feature selection was performed using the Whale Optimization Algorithm (WOA) and Harris Hawk Optimization (HHO), which increased accuracy to 82% and 85% respectively. The methodology involved preprocessing to correct data imbalances and outlier removal through univariate and bivariate analyses. These optimizations highlight the critical features enhancing the model&#39;s predictive capability. The results indicate that integrating metaheuristic algorithms in feature selection significantly improves the accuracy of liver disease prediction models. Future research could explore the integration of additional datasets and machine learning models to further refine predictive capabilities and understand the underlying pathophysiology of liver diseases.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_34-Classification_of_Liver_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Mental Health Content on Social Media and It’s Effect Towards Suicidal Ideation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151133</link>
        <id>10.14569/IJACSA.2024.0151133</id>
        <doi>10.14569/IJACSA.2024.0151133</doi>
        <lastModDate>2024-11-29T12:18:18.9530000+00:00</lastModDate>
        
        <creator>Mohaiminul Islam Bhuiyan</creator>
        
        <creator>Nur Shazwani Kamarudin</creator>
        
        <creator>Nur Hafieza Ismail</creator>
        
        <subject>Suicidal ideation detection; social media analysis; mental health; text analysis; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The study “Understanding Mental Health Content on Social Media and Its Effect Towards Suicidal Ideation” aims to detail the recognition of suicidal intent through social media, with a focus on the role and advancement of machine learning (ML), deep learning (DL), and natural language processing (NLP). This review underscores the critical need for effective strategies to identify and support individuals with suicidal ideation, exploiting technological innovations in ML and DL to further suicide prevention efforts. The study details the application of these technologies in analyzing vast amounts of unstructured social media data to detect linguistic patterns, keywords, phrases, tones, and contextual cues associated with suicidal thoughts. It explores various ML and DL models, such as SVMs, CNNs, LSTMs, and other neural networks, and their effectiveness in interpreting complex data patterns and emotional nuances within text data. The review discusses the potential of these technologies to serve as a life-saving tool by identifying at-risk individuals through their digital traces. Furthermore, it evaluates the real-world effectiveness, limitations, and ethical considerations of employing these technologies for suicide prevention, stressing the importance of responsible development and usage. The study aims to fill critical knowledge gaps by analyzing recent studies, methodologies, tools, and techniques in this field. It highlights the importance of synthesizing current literature to inform practical tools and suicide prevention efforts, guiding innovation in reliable, ethical systems for early intervention. This research synthesis evaluates the intersection of technology and mental health, advocating for the ethical and responsible application of ML, DL, and NLP to offer life-saving potential worldwide while addressing challenges like generalizability, biases, privacy, and the need for further research to ensure these technologies do not exacerbate existing inequities and harms.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_33-Understanding_Mental_Health_Content_on_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SAM-PIE: SAM-Enabled Photovoltaic-Module Image Enhancement for Fault Inspection and Analysis Using ResNet50 and CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151132</link>
        <id>10.14569/IJACSA.2024.0151132</id>
        <doi>10.14569/IJACSA.2024.0151132</doi>
        <lastModDate>2024-11-29T12:18:18.9370000+00:00</lastModDate>
        
        <creator>Rotimi-Williams Bello</creator>
        
        <creator>Pius A. Owolawi</creator>
        
        <creator>Etienne A. van Wyk</creator>
        
        <creator>Chunling Du</creator>
        
        <subject>Anomaly; convolution neural networks; crack; hotspot; photovoltaic; Residual Network-50; shading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Different models have been developed for segmentation tasks, each with its own strengths. Recently, the Segment Anything Model (SAM) was added to this pool of models with the expectation of addressing their weaknesses. Although trained on a huge dataset for segmenting anything, particularly natural images, SAM produces suboptimal results when applied to the segmentation of photovoltaic-module images due to the semantic differences between photovoltaic-module and natural images. Despite this currently suboptimal segmentation performance, SAM can detect and identify thermal anomalies in photovoltaic-module images that are major contributors to power production loss. This implies that the task, the model, and the data corresponding to SAM are applicable to photovoltaic-module image diagnosis. In this paper, we propose SAM-enabled photovoltaic-module image enhancement (SAM-PIE) for fault inspection and analysis using ResNet50 and CNNs. SAM-PIE combines the strengths of SAM to enhance the fault inspection and analysis procedure, yielding optimal performance of the proposed method. Experiments were performed on three thermal anomaly image datasets of photovoltaic modules to validate the performance of SAM-PIE on the classification tasks. The results obtained validate the potential of SAM-PIE for photovoltaic-module image classification. The dataset is publicly and freely available for scientific community use at https://doi.org/10.17632/5ssmfpgrpc.1</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_32-SAM_PIE_SAM_Enabled_Photovoltaic_Module_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Transport System for Prediction of Urban Traffic Congestion Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151131</link>
        <id>10.14569/IJACSA.2024.0151131</id>
        <doi>10.14569/IJACSA.2024.0151131</doi>
        <lastModDate>2024-11-29T12:18:18.9200000+00:00</lastModDate>
        
        <creator>Mohammad Khalid Imam Rahmani</creator>
        
        <creator>Shahnawaz Khan</creator>
        
        <creator>Md Ezaz Ahmed</creator>
        
        <creator>Khurram Jawad</creator>
        
        <subject>Sustainable development goals; traffic congestion; traffic prediction; Gated Recurrent Unit; long short-term memory; intelligent transport system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Developing a resilient infrastructure is crucial for nation-building, supporting innovation and promoting sustainable growth. The Kingdom of Saudi Arabia is striving to achieve the Sustainable Development Goals (SDGs) set by the United Nations. Industry, Innovation, and Infrastructure (I3) are among the strategic objectives of the Kingdom’s Vision 2030, on par with the United Nations’ SDGs. This objective focuses on developing trade and transport networks for international, regional, and local connectivity, with an investment of billions of dollars to establish a robust transport network, improve the existing one, and enhance road safety to reduce the costs of deaths and serious injuries. To this end, a control center could be established for automatic, 24x7 monitoring of traffic violators; this key project has been named the National Center for Transportation Safety, apart from the launch of the “Rental Contracts” facility on the Naql portal. Moreover, the growing urban population is putting more vehicles on the roads, leading to traffic congestion that has become severe during peak hours in major cities and causes several other issues, such as environmental pollution, high greenhouse gas (GHG) emissions including CO2, health risks to citizens and residents, poor air quality, higher road safety risks, greater energy consumption, discomfort to commuters, and wastage of time and other resources. Therefore, in this research, we propose an intelligent transport system (ITS) for predicting traffic congestion levels and assisting commuters in taking alternative routes to avoid congestion. An intelligent model for predicting urban traffic congestion levels using XGBoost, Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) algorithms is developed. A comparative performance analysis of the techniques on the performance metrics Mean Squared Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), error cost, outlier sensitivity, and model complexity demonstrates that the LSTM algorithm outperforms the other two.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_31-An_Intelligent_Transport_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Application of Graph Neural Network Model Design for Residential Building Layout Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151130</link>
        <id>10.14569/IJACSA.2024.0151130</id>
        <doi>10.14569/IJACSA.2024.0151130</doi>
        <lastModDate>2024-11-29T12:18:18.9070000+00:00</lastModDate>
        
        <creator>Shiyu Wang</creator>
        
        <creator>Ningbo Wang</creator>
        
        <subject>Residential building layout plan; deep learning; GNN model; space utilization rate; resident comfort level; quantum particle swarm algorithm; Node2vec algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In the current process of residential building layout design, there are problems such as low design efficiency, excessive manual intervention, and difficulty in meeting personalized needs. To address these issues, a residential building layout design method based on a graph neural network model is proposed to improve the intelligence level of residential building layout design. Firstly, the residential building floor plan layout design data are transformed into graph data suitable for processing by the graph neural network model. Then, deep learning techniques are used to analyse and identify the spatial distribution characteristics of the main functional areas in the space. Finally, the trained graph neural network model is applied to an actual residential building floor plan layout design and compared with the traditional method. The experimental results show that, compared with the traditional computer-aided design method, the proposed layout design and optimisation method improves the completeness of the design scheme by about 2.3%, the rationality by about 3.6%, the readability by about 1.9%, and the effectiveness by about 10.3%. The method improves the efficiency and accuracy of residential building floor plan layout design, helps to shorten the design cycle and reduce the design cost, and promotes technological progress and sustainable development in the field of architectural design.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_30-An_Application_of_Graph_Neural_Network_Model_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Theoretical Framework of Extrinsic Feedback Evaluation in Football Training Based on Motion Templates Using Motion Capture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151129</link>
        <id>10.14569/IJACSA.2024.0151129</id>
        <doi>10.14569/IJACSA.2024.0151129</doi>
        <lastModDate>2024-11-29T12:18:18.8900000+00:00</lastModDate>
        
        <creator>Amir Irfan Mazian</creator>
        
        <creator>Wan Rizhan</creator>
        
        <creator>Normala Rahim</creator>
        
        <creator>Muhammad D. Zakaria</creator>
        
        <creator>Mohd Sufian Mat Deris</creator>
        
        <creator>Fadzli Syed Abdullah</creator>
        
        <creator>Ahmad Rafi</creator>
        
        <subject>Motion capture; motion templates; football; extrinsic feedback; reverse-gesture description language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Motion capture technology (MoCap) has emerged as a pivotal innovation, significantly impacting various sectors, including sports. In football training, MoCap is especially crucial for analyzing player movements with precision. Despite its potential, there remains a notable gap in the utilization of MoCap to create motion templates (MTs) that generate extrinsic feedback, particularly in football. This article proposes a comprehensive theoretical framework for evaluating extrinsic feedback in football training through MTs created using MoCap technology and Reverse-Gesture Description Language (R-GDL). The development of this framework involves several key steps: a literature review, acquaintance meetings, interviews, procedural approvals, experimentation, data conversion, MTs creation, and data evaluation. The framework integrates elements such as football players, MoCap systems, raw and processed data, MTs, evaluation processes, and extrinsic feedback models. The main purpose is to harness the full potential of MoCap technology, enhancing the evaluation and improvement of football training activities. By implementing this framework, we aim to revolutionize how football training is analyzed and optimized, providing coaches and players with invaluable insights and feedback.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_29-A_Theoretical_Framework_of_Extrinsic_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalp Disorder Imaging: How Deep Learning and Explainable Artificial Intelligence are Revolutionizing Diagnosis and Treatment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151128</link>
        <id>10.14569/IJACSA.2024.0151128</id>
        <doi>10.14569/IJACSA.2024.0151128</doi>
        <lastModDate>2024-11-29T12:18:18.8730000+00:00</lastModDate>
        
        <creator>Vinh Quang Tran</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Scalp disorders; artificial intelligence; explainable artificial intelligence; deep learning; interpretability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Scalp disorders, affecting millions worldwide, significantly impact both physical and mental health. Deep learning has emerged as a promising tool for automated diagnosis, but ensuring model transparency and reliability is crucial. This review explores the integration of explainable AI (XAI) techniques to enhance the interpretability of deep learning models in scalp disorder diagnosis. We analyzed recent studies employing deep learning models to classify scalp disorders from image data and used XAI methods to understand the models&#39; decision-making processes and identify potential biases. While deep learning has shown promising results, challenges such as data quality and model interpretability persist. Future research should focus on expanding the capabilities of deep learning models for real-time detection and severity prediction, while addressing limitations in data diversity and ensuring the generalizability of models across different populations. The integration of XAI techniques is essential for fostering trust in AI-powered scalp disease diagnosis and facilitating their widespread adoption in clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_28-Scalp_Disorder_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data-Driven Deep Machine Learning Approach for Tunnel Deformation Risk Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151127</link>
        <id>10.14569/IJACSA.2024.0151127</id>
        <doi>10.14569/IJACSA.2024.0151127</doi>
        <lastModDate>2024-11-29T12:18:18.8570000+00:00</lastModDate>
        
        <creator>Fusheng Liu</creator>
        
        <subject>Pipe jacking up and over operational tunnel construction; tunnel deformation risk assessment; deep limit learning machine; hybrid leader optimisation algorithm; control strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The shallow-overburden pipe jacking up-and-over operational tunnel construction project in a chalky sand stratum carries the risk of deformation of both the soil layer and the existing tunnel, which increases the difficulty of the crossing construction; risk assessment and control therefore become the key technology for safe and successful completion. Current deformation risk assessment and control methods suffer from assessment systems that are insufficiently comprehensive, systematic, and objective, from limited prediction accuracy, and from a lack of quantitative analysis. To address these problems, a deformation risk assessment and control method is proposed that combines a human-behaviour heuristic optimization algorithm with a deep machine learning algorithm for pipe jacking up and over operational tunnels under shallow overburden in a chalky sand stratum. First, by analyzing the pipe jacking tunnel construction process, the deformation risk factors of the construction process and the deformation risk control scheme are identified. Then, a deformation risk assessment and control algorithm based on an improved deep limit learning machine is proposed, combined with the human-behaviour heuristic optimization algorithm. Finally, the proposed model is applied to the deformation risk assessment and control problem of pipe jacking over an operational tunnel under shallow overburden in a chalky sand stratum: a finite element computational model is used to construct the data set, the deformation risk assessment and control model is trained, and monitoring data are used as the test set to validate the proposed algorithm, thereby addressing the poor prediction accuracy of existing deformation risk assessment and control approaches in this setting.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_27-A_Data_Driven_Deep_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Pigeon Swarm Intelligent Optimisation BP Neural Network Algorithm in Railway Tunnel Construction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151126</link>
        <id>10.14569/IJACSA.2024.0151126</id>
        <doi>10.14569/IJACSA.2024.0151126</doi>
        <lastModDate>2024-11-29T12:18:18.8430000+00:00</lastModDate>
        
        <creator>Feng Zhou</creator>
        
        <creator>Hong Ye</creator>
        
        <creator>Jie Song</creator>
        
        <creator>Hui Guo</creator>
        
        <creator>Peng Liu</creator>
        
        <subject>Municipal railway tunnel construction optimization; scenario risk assessment; machine learning; pigeon flock optimisation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The uncertainty and complexity of the risk factors in urban railway tunnel projects increase the difficulty of risk analysis, so traditional risk assessment methods cannot accurately assess the construction risk of such projects. To address the problems of existing risk assessment algorithms, a construction risk assessment method for urban railway tunnel projects based on an intelligent optimisation algorithm and a machine learning algorithm is proposed. Firstly, for the problem of construction risk identification and assessment in municipal railway tunnel projects, a tunnel construction risk identification and assessment scheme combining an intelligent optimization algorithm with a machine learning algorithm is designed, and the principles and functions of each module of the risk assessment system are introduced. Then, for the risk assessment problem, a risk assessment algorithm that improves the BP neural network with a swarm intelligent optimisation algorithm is proposed. Finally, relying on the Xinfeng Road underground passage in Hangzhou, which closely crosses the Line 9 tunnel and passes alongside the Hanghai intercity tunnel project, the effectiveness of the construction risk assessment algorithm is verified using monitoring data and numerical simulation, and a risk control scheme is proposed in turn. The experimental results show that the proposed risk assessment algorithm effectively solves the construction risk assessment problem for urban railway tunnel projects, improves the prediction performance of the risk assessment algorithm, and verifies that the risk control scheme meets construction safety requirements.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_26-Application_Pigeon_Swarm_Intelligent_Optimisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ARO-CapsNet: A Novel Method for Evaluating User Experience in Immersive VR Furniture Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151125</link>
        <id>10.14569/IJACSA.2024.0151125</id>
        <doi>10.14569/IJACSA.2024.0151125</doi>
        <lastModDate>2024-11-29T12:18:18.8270000+00:00</lastModDate>
        
        <creator>Yin Luo</creator>
        
        <creator>Jun Liu</creator>
        
        <creator>Li Zhang</creator>
        
        <subject>Immersive virtual reality; furniture design; application analysis; artificial rabbit optimisation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Immersive virtual reality (VR) technology has become an essential tool in enhancing user experience across industries, particularly in furniture design. With the ability to provide realistic, interactive, and immersive environments, it significantly improves user engagement and decision-making in product design. However, existing analysis methods lack precision in evaluating user experience within VR environments. This study aims to develop a more accurate and efficient model for analyzing the application of immersive VR in future furniture design. By integrating the Artificial Rabbit Optimization (ARO) algorithm with Capsule Networks (CapsNet), this research enhances the evaluation of user experience in immersive VR environments. The proposed method uses the ARO algorithm to optimize the parameters of CapsNet, which maps the relationship between the analysis indicators of furniture design and user experience. This model is tested against traditional methods such as CNN and CapsNet alone. The analysis focuses on key factors such as visual elements, interaction, and system performance, with performance metrics like root mean square error (RMSE) and R&#178; value used for evaluation. Experimental results show that the ARO-CapsNet model achieves an RMSE of 0.17 and an R&#178; value of 0.988, outperforming both CNN and CapsNet in terms of accuracy and efficiency. Additionally, the proposed model improves the immersive VR system&#39;s ability to deliver accurate user experience evaluations, making it a superior method for analyzing future furniture design applications. The integration of the ARO algorithm with CapsNet significantly enhances the precision of immersive VR user experience evaluations in furniture design. The ARO-CapsNet model not only improves evaluation accuracy but also increases system efficiency, providing a robust framework for future applications of VR in product design.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_25-ARO_CapsNet_A_Novel_Method_for_Evaluating_User_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced TODIM-TOPSIS Framework for Interior Design Quality Evaluation in Public Spaces Under Hesitant Fuzzy Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151124</link>
        <id>10.14569/IJACSA.2024.0151124</id>
        <doi>10.14569/IJACSA.2024.0151124</doi>
        <lastModDate>2024-11-29T12:18:18.8100000+00:00</lastModDate>
        
        <creator>Lu Peng</creator>
        
        <subject>Multiple-attribute decision-making (MADM); hesitant fuzzy sets (HFSs); TODIM; TOPSIS; design quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The evaluation of interior landscape design in public spaces involves several aspects, including aesthetics, functionality, sustainability, and user experience. Aesthetic evaluation focuses on the visual appeal and stylistic consistency of the design. Functionality considers the practicality and convenience of the space layout. Sustainability evaluates the environmental friendliness of materials and energy efficiency of the design. Additionally, user experience assessment gathers feedback to gauge comfort and satisfaction. These evaluation criteria help designers optimize spaces to be both attractive and practical while meeting user needs. The interior design quality evaluation in public spaces is a multiple-attribute decision-making (MADM) problem. Recently, the TODIM and TOPSIS methods have been applied to address MADM challenges. Hesitant fuzzy sets (HFSs) are used to represent uncertain information in the evaluation of interior landscape design in public spaces. In this study, we developed a hesitant fuzzy TODIM-TOPSIS (HF-TODIM-TOPSIS) approach to tackle MADM issues within the context of HFSs. A numerical case study focused on the interior design quality evaluation in public spaces demonstrates the validity of this approach. The primary contributions of this paper include: (1) Extending the TODIM and TOPSIS approaches to incorporate HFSs; (2) Utilizing information entropy to determine weight values under HFSs; (3) Establishing the HF-TODIM-TOPSIS method for managing MADM in the presence of HFSs; (4) Conducting algorithmic analysis and comparative studies based on a numerical example to assess the practicality and effectiveness of the HF-TODIM-TOPSIS approach.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_24-Enhanced_TODIM_TOPSIS_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Learners’ Academic Progression Using Subspace Clique Model in Multidimensional Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151123</link>
        <id>10.14569/IJACSA.2024.0151123</id>
        <doi>10.14569/IJACSA.2024.0151123</doi>
        <lastModDate>2024-11-29T12:18:18.7970000+00:00</lastModDate>
        
        <creator>Oyugi Odhiambo James</creator>
        
        <creator>Waweru Mwangi</creator>
        
        <creator>Kennedy Ogada</creator>
        
        <subject>Subspace clustering; clique model; academic progression; multidimensional data; feature engineering; cross validation and principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Subspace clustering re-examines the traditional clustering techniques that have previously been considered the best approaches to clustering data. This study uses a subspace clustering approach to predict learners&#39; academic progress over time. Using the subspace clustering method, a model was developed that improves the classic Clique algorithm by optimizing clustering performance and addressing the clustering challenges posed by inaccuracies arising from larger data sizes and increased dimensionality. The study used an experimental design that included data validation and training to predict students&#39; academic progress. Clustering evaluation metrics including accuracy, precision, and recall measures were identified. The optimized model recorded a better performance index with 98.90% accuracy, 98.50% precision, and 98.50% recall, which directly shows the efficiency of the optimized model in predicting learners&#39; academic progress through clustering. In this regard, conclusions are drawn for an alternative approach to predictive modeling through cluster analysis, so that educational institutions have a better opportunity to manage learners by ensuring adequate preparation in terms of resources, policies, and knowledge. It highlights career guidance for learners based on their academic progress. The result validates the suitability of the model for clustering multidimensional data.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_23-Predicting_Learners_Academic_Progression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Q-FuzzyNet: A Quantum Inspired QIF-RNN and SA-FPA Optimizer for Intrusion Detection and Mitigation in Intelligent Connected Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151122</link>
        <id>10.14569/IJACSA.2024.0151122</id>
        <doi>10.14569/IJACSA.2024.0151122</doi>
        <lastModDate>2024-11-29T12:18:18.7630000+00:00</lastModDate>
        
        <creator>Abdullah Alenizi</creator>
        
        <subject>Cybersecurity; intelligent connected vehicles; artificial intelligence; quantum neural network; recurrent neural network; flower pollination algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In the evolving landscape of Intelligent Connected Vehicles (ICVs), ensuring cybersecurity is crucial due to the increasing number of cyber threats. Besides, challenges like data breaches, unauthorized access, and hacking attempts are prevalent due to the interconnected nature of ICVs. Several methods have been proposed to secure ICVs; however, accurate intrusion detection remains a challenging task yet to be fully achieved. For this reason, this paper proposes a comprehensive intrusion detection scheme denoted Q-FuzzyNet, which is specifically tailored to safeguard ICV networks using Deep Learning (DL) approaches. This Q-FuzzyNet approach consists of five phases: (i) Data Collection, (ii) Data Pre-processing, (iii) Feature Extraction, (iv) Dimensionality Reduction, and (v) Intrusion Detection and Mitigation. Initially, the raw data are gathered from the CICIoV2024 dataset. The collected data are pre-processed via Mean Imputation (MI) for data cleaning. Then, significant features are extracted through higher-order statistical features, the proposed Improved Mutual Information (IMI), correlation, and entropy approaches. Subsequently, the dimensionality is reduced via a new Improved Linear Discriminant Analysis (ILDA). Ultimately, the data are classified (attacker/normal) via the Meta-Heuristic Quantum-Inspired Fuzzy-Recurrent Neural Network (QIF-RNN) model, which combines a Quantum Neural Network (QNN), a Recurrent Neural Network (RNN), and fuzzy logic. The membership function of the fuzzy logic is optimized via the new Self Adaptive-Flower Pollination Algorithm (SA-FPA). The identified attackers are mitigated from the network using the Policy Gradient Method. The outcomes obtained from Q-FuzzyNet are validated in terms of Accuracy, Precision, Sensitivity, and F1-score. The highest accuracy of 98.6% was recorded by the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_22-Q_FuzzyNet_A_Quantum_Inspired_QIF_RNN_and_SA_FPA_Optimizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Security Based on Improved Genetic Algorithm and Weighted Error Back-Propagation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151121</link>
        <id>10.14569/IJACSA.2024.0151121</id>
        <doi>10.14569/IJACSA.2024.0151121</doi>
        <lastModDate>2024-11-29T12:18:18.7630000+00:00</lastModDate>
        
        <creator>Junjuan Liang</creator>
        
        <subject>Genetic algorithm; weighted error back-propagation; multiple strategies; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In order to solve the problem of feature selection and local optimal solution in the field of network security, a network security protection model based on improved genetic algorithm and weighted error back-propagation algorithm is proposed. The model combines the dynamic error weight and adaptive learning rate of the weighted error back-propagation algorithm to improve the learning ability of the model in dealing with classification imbalance and dynamic attack mode. In addition, the global search capability of genetic algorithm is utilized to optimize the feature selection process and automatically adjust the hyperparameter settings. The experimental results show that the proposed model has an average accuracy of 96.7%, a recall rate of 93.3% and an F1 value of 0.91 on the CIC-IDS-2017 dataset, which has significant advantages over traditional detection methods. In many experiments, the accuracy of normal data is up to 99.97%, the accuracy of known abnormal behavior data is 99.31%, and the accuracy of unknown abnormal behavior data is 98.13%. These results show that this method has high efficiency and reliability when dealing with complex network traffic, and provides a new idea and method for network security protection research.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_21-Network_Security_Based_on_Improved_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things User Behavior Analysis Model Based on Improved RNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151120</link>
        <id>10.14569/IJACSA.2024.0151120</id>
        <doi>10.14569/IJACSA.2024.0151120</doi>
        <lastModDate>2024-11-29T12:18:18.7500000+00:00</lastModDate>
        
        <creator>Keling Bi</creator>
        
        <subject>Internet of Things; user behaviors; recurrent neural network; Bayesian optimization; long short-term memory; hyperparameter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Internet of Things resource allocation currently suffers from low efficiency and outdated methods. To study real Internet of Things user behavior data, a Bayesian optimization algorithm is used to automatically select hyperparameter combinations and construct an Internet of Things user behavior analysis model based on long short-term memory. The results showed that the prediction accuracy of the model reached 96.8% and 97.53% on the training and validation sets, while, with the maximum number of iterations set to 50, the model achieved 80.78% on the test set. A performance comparison between the research model and a traditional recurrent network model found that the optimal prediction accuracy of the research model was 80.78%, better than that of the comparison model. The application results of the research model in short-term power load forecasting also indicated that the prediction accuracy of the Internet of Things user behavior analysis model based on the improved recurrent network has reached a good level, far superior to the comparative model. The results have important application value for allocating energy and resources in Internet of Things systems.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_20-Internet_of_Things_User_Behavior_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tennis Action Evaluation Model Based on Weighted Counter Clockwise Rotation Angle Similarity Measurement Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151119</link>
        <id>10.14569/IJACSA.2024.0151119</id>
        <doi>10.14569/IJACSA.2024.0151119</doi>
        <lastModDate>2024-11-29T12:18:18.7330000+00:00</lastModDate>
        
        <creator>Danni Jiang</creator>
        
        <creator>Ge Liu</creator>
        
        <subject>Action evaluation; counter clockwise rotation angle; weighting; dynamic time warping; tennis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In order to intelligently analyze tennis movements and improve evaluation efficiency, a counter clockwise rotation angle of limbs is proposed to solve the direction problem of tennis movements. A dynamic time warping algorithm is optimized by combining global time weighting and adjacent frame weighting. The results indicated that the proposed counter clockwise rotation angle feature of limbs could effectively represent changes in limb direction and clearly distinguish action postures. The average accuracy of this method in action classification on the Tennis Stroke Dataset was 97.60%. In the action evaluation mode, the average frame rate of the client was between 17.35 FPS and 17.49 FPS, and the overall average frame rate was about 17.40 FPS. The server exhibits higher efficiency in action processing and evaluation and can process video frames faster, making it more efficient in its data processing capabilities and utilization of data resources. This indicates that the performance of the system is relatively consistent across different modes and is stable. The optimized method has a higher generalization ability in recognizing non-tennis movements on different datasets. When dealing with fine movements, the optimized method performs excellently and can better capture subtle differences in the movements. Meanwhile, this enhances the real-time performance of the system, making it suitable for evaluating tennis movements in practical application scenarios. This provides a new technical path for analyzing tennis movements and also serves as a reference for evaluating movements in other sports.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_19-Tennis_Action_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Restoration of Landscape Design Based on DCGAN Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151118</link>
        <id>10.14569/IJACSA.2024.0151118</id>
        <doi>10.14569/IJACSA.2024.0151118</doi>
        <lastModDate>2024-11-29T12:18:18.7170000+00:00</lastModDate>
        
        <creator>Wenjun Zhang</creator>
        
        <subject>Deep convolutional generative adversarial network; image; restoration; landscape architecture; squeeze-and-excitation network; dense convolutional network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>To enhance the quality and effectiveness of image restoration in landscape design, this study addresses the low efficiency and incomplete feature extraction of existing methods when processing high-resolution, detail-rich landscape design images. Firstly, building on the traditional generative adversarial network (GAN), a novel deep convolutional generative adversarial network (DCGAN) model is proposed. Subsequently, the model&#39;s ability to extract detailed features is enhanced by integrating densely connected networks (DenseNet) and squeeze-and-excitation networks (SENet) into the network architecture. An improved DCGAN is designed for the restoration of landscape design images. According to the results, the optimized model achieved a restoration precision and recall of 0.97 in benchmark performance testing, significantly better than traditional deep convolutional generative adversarial network models. In practical applications, the model had an average accuracy of over 97% in repairing four different styles of landscape images, with an average repair time as low as 0.06s. From this, it can be seen that the designed model can provide a more efficient technical means for the restoration and digital preservation of landscape design images.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_18-Image_Restoration_of_Landscape_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating Local Texture Adversarial Branch and Hybrid Attention for Image Super-Resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151117</link>
        <id>10.14569/IJACSA.2024.0151117</id>
        <doi>10.14569/IJACSA.2024.0151117</doi>
        <lastModDate>2024-11-29T12:18:18.7030000+00:00</lastModDate>
        
        <creator>Na Zhang</creator>
        
        <creator>Hanhao Yao</creator>
        
        <creator>Qingqi Zhang</creator>
        
        <creator>Xiaoan Bao</creator>
        
        <creator>Biao Wu</creator>
        
        <creator>Xiaomei Tu</creator>
        
        <subject>Super-resolution reconstruction; generative adversarial network; hybrid attention; local texture sampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In the field of image Super-Resolution reconstruction (SR), traditional SR techniques such as regression-based methods and CNN-based models fail to retain texture details in the reconstructed images. In contrast, Generative Adversarial Networks (GANs) have significantly enhanced the visual quality of image reconstruction through their adversarial training architecture. However, existing GANs still exhibit limitations in capturing local details and efficiently utilizing features. To address these challenges, we propose a super-resolution reconstruction method leveraging local texture adversarial and hybrid attention mechanisms. Firstly, a Local Texture Sampling Module (LTSM) is designed to precisely locate small regions with strong texture features within an image, and a local discriminator then performs pixel-by-pixel evaluation on these regions to enhance local texture details. Secondly, a hybrid attention module is integrated into the generator’s residual module to improve feature utilization and representativeness. Finally, we conducted extensive experiments to validate the effectiveness of our method. The results demonstrate that our method surpasses other super-resolution reconstruction methods in terms of PSNR and SSIM on four benchmark datasets. Furthermore, our method visually generates high-resolution images with richer details and more realistic textures.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_17-Incorporating_Local_Texture_Adversarial_Branch_and_Hybrid_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FusionSec-IoT: A Federated Learning-Based Intrusion Detection System for Enhancing Security in IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151116</link>
        <id>10.14569/IJACSA.2024.0151116</id>
        <doi>10.14569/IJACSA.2024.0151116</doi>
        <lastModDate>2024-11-29T12:18:18.6870000+00:00</lastModDate>
        
        <creator>Jatinder Pal Singh</creator>
        
        <creator>Rafaqat Kazmi</creator>
        
        <subject>IoT security; Intrusion Detection System (IDS); federated learning; multi-view learning; cyberattack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The Internet of Things (IoT) has become one of the most significant technological advancements of the modern era, impacting multiple sectors through the communication it provides between connected devices. However, this growth has led to security risks in IoT devices, especially in resource-limited IoT networks that are easily attacked by hackers through methods like DDoS attacks and data theft. Traditional IDS, such as centralized IDS and single-view machine learning-based IDS, are incapable of providing efficient solutions to these issues due to high communication cost, latency, and low attack detection rates. To address these challenges, this paper presents FusionSec-IoT, a decentralized IDS based on multi-view learning and federated learning for better detection performance and scalability in the IoT context. The results show that the proposed technique performs better than existing IDS methods, achieving 98.3% accuracy compared with classic IDS techniques: centralized IDS (91.5%) and single-view federated learning (92.7%). Other performance metrics likewise show better performance than traditional IDS methods. Thus, FusionSec-IoT detects a broad range of cyberattacks with the help of the employed machine learning algorithms, such as Reinforcement Learning, Meta-Learning, and Hybrid Feature Selection using Particle Swarm Optimisation (PSO) and the Genetic Algorithm (GA), while ensuring data privacy is maintained. Moreover, Edge Computing and Graph Neural Networks (GNNs) guarantee fast identification of multi-device coordinated attacks, for instance, botnets. The proposed system improves on traditional IDS approaches in terms of high detection accuracy, better privacy, and scalability, making it a reliable solution for securing complex and large-scale IoT networks.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_16-FusionSec_IoT_A_Federated_Learning_Based_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Shifts, Ethical Dilemmas, and Investment Insights in Nursing Homes: A Pre- and Post-Pandemic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151115</link>
        <id>10.14569/IJACSA.2024.0151115</id>
        <doi>10.14569/IJACSA.2024.0151115</doi>
        <lastModDate>2024-11-29T12:18:18.6700000+00:00</lastModDate>
        
        <creator>Amir El-Ghamry</creator>
        
        <creator>Ameera Ibrahim</creator>
        
        <creator>Noha Elfiky</creator>
        
        <creator>Safwat Hamad</creator>
        
        <subject>COVID-19 impact; nursing home financial performance; post-pandemic investment; ethical standards in nursing homes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The COVID-19 pandemic has significantly transformed the operational, financial, and ethical frameworks of nursing homes in the United States. This study offers a detailed analysis of the nursing home sector from 2015 to 2021, focusing on the financial viability and ethical standards before, during, and after the pandemic. The methodology employed a structured approach, including data federation, pre-processing, and trend analysis, using comprehensive datasets on Nursing Homes. The data was cleaned, standardized, and segmented into pre-pandemic (2015–2019), pandemic (2020), and post-pandemic (2021) periods to assess key trends and outcomes. The findings highlight how the pandemic exacerbated existing financial challenges, such as declining occupancy rates, increased operational costs, and reduced revenue streams, which led to closures and heightened investment activity in the sector. Government aid provided temporary stability, but long-term sustainability remains uncertain. Key factors affecting financial performance, including occupancy rates, net income, fines and penalties, and compliance with ethical standards such as vaccination rates and care quality, were analyzed. The study concludes that nursing home investments should be approached cautiously unless facilities meet specific financial and operational criteria, such as high occupancy rates, robust financial performance, low penalties, and strict adherence to ethical standards. Failure to meet these benchmarks may result in heightened financial and operational risks, making such facilities unsuitable for investment. This research offers a comprehensive framework for investors to evaluate nursing home opportunities in the post-pandemic landscape, providing insights into the intersection of financial performance, operational resilience, and ethical compliance.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_15-Financial_Shifts_Ethical_Dilemmas_and_Investment_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Stroke Risk Prediction Using XGBoost and Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151114</link>
        <id>10.14569/IJACSA.2024.0151114</id>
        <doi>10.14569/IJACSA.2024.0151114</doi>
        <lastModDate>2024-11-29T12:18:18.6570000+00:00</lastModDate>
        
        <creator>Renuka Agrawal</creator>
        
        <creator>Aaditya Ahire</creator>
        
        <creator>Dimple Mehta</creator>
        
        <creator>Preeti Hemnani</creator>
        
        <creator>Safa Hamdare</creator>
        
        <subject>DNN; XGBoost; stress level; stroke prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Predicting brain strokes is inherently complex due to the multifaceted nature of brain health. Recent advancements in machine learning (ML) and deep learning (DL) algorithms have shown promise in forecasting stroke occurrences to a certain extent. This research paper explores the predictive potential of ML and DL models by utilizing a comprehensive dataset encompassing diverse patient characteristics, including demographic factors, work culture, stress levels, lifestyle, and family history. Notably, this study incorporates 14 clinically significant attributes for prediction, surpassing the 10 attributes utilized by earlier researchers. To address existing limitations and enhance predictive accuracy, a novel ensemble model combining Deep Neural Networks (DNN) and Extreme Gradient Boosting (XGBoost) is proposed in this work. A comparative analysis against individual DNN and XGBoost models, as well as against Random Forest and Support Vector Machine (SVM) approaches, is also conducted. The performance of the ensemble model is assessed using various metrics, including accuracy, precision, F1 score, and recall. The findings indicate that the DNN-XGBoost model exhibits superior predictive accuracy compared to standalone DNN and XGBoost models in identifying brain stroke occurrences.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_14-Optimizing_Stroke_Risk_Prediction_Using_XGBoost.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered AOP: Enhancing Runtime Monitoring with Large Language Models and Statistical Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151113</link>
        <id>10.14569/IJACSA.2024.0151113</id>
        <doi>10.14569/IJACSA.2024.0151113</doi>
        <lastModDate>2024-11-29T12:18:18.6400000+00:00</lastModDate>
        
        <creator>Anas AlSobeh</creator>
        
        <creator>Amani Shatnawi</creator>
        
        <creator>Bilal Al-Ahmad</creator>
        
        <creator>Alhan Aljmal</creator>
        
        <creator>Samer Khamaiseh</creator>
        
        <subject>Artificial Intelligence (AI); Aspect-Oriented Programming (AOP); runtime monitoring; Large Language Models (LLMs); Codex AI; software validation; statistical model checking; dynamic program analysis; cross-cutting concerns; joinpoints; pointcut</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Modern software systems must adapt to dynamic artificial intelligence (AI) environments and evolving requirements. Aspect-oriented programming (AOP) effectively isolates crosscutting concerns (CCs) into single modules called aspects, enhancing quality metrics and simplifying testing. However, AOP implementation can lead to unexpected program outputs and behavior changes. This paper proposes an AI-enhanced, adaptive monitoring framework for validating program behaviors during aspect weaving that integrates AOP interfaces (AOPIs) with large language models (LLMs), i.e., GPT-Codex AI, to dynamically generate and optimize monitoring aspects and statistical models in real time. This enables intelligent run-time analysis, adaptive model checking, and natural language (NL) interaction. We tested the framework on ten diverse Java classes from JHotdraw 7.6 by extracting context and numerical data and building a dataset for analysis. By dynamically refining aspects and models based on observed behavior, the framework maintained the integrity of the Java OOP class while providing predictive insights into potential conflicts and optimizations. Results demonstrate the framework’s efficacy in detecting subtle behavioral changes induced by aspect weaving, with a 94% accuracy in identifying potential conflicts and a 37% reduction in false positives compared to traditional static analysis techniques. Furthermore, the integration of explainable AI provides developers with clear, actionable explanations for flagged behaviors through NL interfaces, enhancing interpretability and trust in the system.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_13-AI_Powered_AOP_Enhancing_Runtime_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Lumbar Spine Disc Herniation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151112</link>
        <id>10.14569/IJACSA.2024.0151112</id>
        <doi>10.14569/IJACSA.2024.0151112</doi>
        <lastModDate>2024-11-29T12:18:18.6230000+00:00</lastModDate>
        
        <creator>Mohammed Al Masarweh</creator>
        
        <creator>Olukola Oluseyi</creator>
        
        <creator>Ala Alkafri</creator>
        
        <creator>Hiba Alsmadi</creator>
        
        <creator>Tariq Alwadan</creator>
        
        <subject>Lumbar Disc Herniation; MASK-RCNN; computer vision; artificial intelligence; MR Images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Advanced deep-learning approaches have set new standards for computer vision and pattern recognition. However, the complexity of medical images frequently impedes the creation of high-quality ground truth data. In this article, we offer a method for autonomously generating ground truth data from MRI images using instance segmentation, with a novel confidence and consistency metric to assess data quality. We employ an artificial intelligence-based system to annotate regions of interest in MRI images, leveraging Mask R-CNN—a deep neural network architecture with a mean average precision of 98% for localising and identifying discs. Subsequently, the region of interest is classified with an accuracy of 70%. Our approach facilitates radiologists by automating the detection of regions of interest in MRI images, leading to more efficient and reliable diagnoses with assured quality data. This research made significant advances by developing an automated system for medical image segmentation and implementing cutting-edge neural network technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_12-Automatic_Detection_of_Lumbar_Spine_Disc_Herniation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Unbalanced Optimal Transport in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151110</link>
        <id>10.14569/IJACSA.2024.0151110</id>
        <doi>10.14569/IJACSA.2024.0151110</doi>
        <lastModDate>2024-11-29T12:18:18.6070000+00:00</lastModDate>
        
        <creator>Qui Phu Pham</creator>
        
        <creator>Nghia Thu Truong</creator>
        
        <creator>Hoang-Hiep Nguyen-Mau</creator>
        
        <creator>Cuong Nguyen</creator>
        
        <creator>Mai Ngoc Tran</creator>
        
        <creator>Dung Luong</creator>
        
        <subject>Optimal transport; unbalanced optimal transport; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Optimal Transport (OT) is a powerful tool widely used in healthcare applications, but its high computational cost and sensitivity to data changes make it less practical for resource-constrained settings. These limitations also contribute to increased environmental impact due to higher CO2 emissions from computing. To address these challenges, we explore Unbalanced Optimal Transport (UOT), a variation of OT that is both computationally efficient and more robust to data variability. We apply UOT to two healthcare scenarios: independence testing on breast cancer data and modeling heart rate variability (HRV). Our experiments show that UOT not only reduces computational costs but also delivers reliable results, making it a practical alternative to OT for socially impactful applications.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_10-Application_of_Unbalanced_Optimal_Transport_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic-Driven Machine Learning Algorithms for Improved Early Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151111</link>
        <id>10.14569/IJACSA.2024.0151111</id>
        <doi>10.14569/IJACSA.2024.0151111</doi>
        <lastModDate>2024-11-29T12:18:18.6070000+00:00</lastModDate>
        
        <creator>Leena Arya</creator>
        
        <creator>Narasimha Swamy Lavudiya</creator>
        
        <creator>G Sateesh</creator>
        
        <creator>Harish Padmanaban</creator>
        
        <creator>B. V. Srinivasulu</creator>
        
        <creator>Ravi Rastogi</creator>
        
        <subject>Decision trees; Fuzzy Inference System (FIS); heart disease diagnosis; neural networks; Support Vector Machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Early disease diagnosis is critical in improving patient outcomes, reducing healthcare costs, and enabling timely intervention. Unfortunately, the algorithms used in conventional diagnostic technology have difficulties dealing with uncertain and imprecise medical data, which may result in either delayed diagnosis or misdiagnosis. This paper describes a combined framework of fuzzy logic and machine learning algorithms to improve the accuracy and reliability of early disease diagnosis. Fuzzy logic addresses imprecision in patient symptoms and variability in clinical data, while machine learning algorithms provide data analytical and predictive capabilities. The proposed system complements rule-based reasoning with a predictive model to handle imprecise inputs and deliver accurate disease risk estimation. An experimental analysis of the medical datasets of heart disease, diabetes, and cancer reveals that the proposed method enhances the accuracy, precision, and ultimately robustness of a conventional diagnostic system.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_11-Fuzzy_Logic_Driven_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation-Based Analysis of Evacuation Information Sharing Systems Using Geographical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151109</link>
        <id>10.14569/IJACSA.2024.0151109</id>
        <doi>10.14569/IJACSA.2024.0151109</doi>
        <lastModDate>2024-11-29T12:18:18.5930000+00:00</lastModDate>
        
        <creator>Tatsuki Fukuda</creator>
        
        <subject>Evacuation; flood disaster; evacuation rate; agent-based model; evacuate now button</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>In this study, we developed an agent-based model (ABM) to simulate and improve evacuation rates during flood disasters. Utilizing the “Evacuate Now Button”, a previously proposed system for sharing real-time evacuation rates among residents, our experimental findings demonstrate a significant enhancement in evacuation behavior through this system. Simulations were conducted using geographical data from Nobeoka City, Miyazaki Prefecture, and Toyohashi City, Aichi Prefecture. Results showed that the “Evacuate Now Button” increased evacuation rates from a few percent to approximately 78% in Nobeoka City and 90% in Toyohashi City. We also investigated the effect of varying the range for calculating evacuation rates and the accuracy of the evacuation information shared with residents. It was found that larger calculation ranges led to higher final evacuation rates, while smaller ranges resulted in a quicker initial increase in evacuation behavior. These findings provide valuable insights for enhancing evacuation strategies and disaster preparedness in regions prone to floods.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_9-Simulation_Based_Analysis_of_Evacuation_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI Ethical Framework: A Government-Centric Tool Using Generative AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151108</link>
        <id>10.14569/IJACSA.2024.0151108</id>
        <doi>10.14569/IJACSA.2024.0151108</doi>
        <lastModDate>2024-11-29T12:18:18.5770000+00:00</lastModDate>
        
        <creator>Lalla Aicha Kone</creator>
        
        <creator>Anna Ouskova Leonteva</creator>
        
        <creator>Mamadou Tourad Diallo</creator>
        
        <creator>Ahmedou Haouba</creator>
        
        <creator>Pierre COLLET</creator>
        
        <subject>AI ethics; Gen AI; LLMs; moral relativism; ethical norms; adaptive ethical framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Artificial Intelligence (AI) is transforming industries and societies globally. To fully harness this advancement, it is crucial for countries to integrate AI across different domains. Moral relativism in AI ethics suggests that as ethical norms vary significantly across societies, frameworks guiding AI development should be context-specific, reflecting the values, norms, and beliefs of the cultures where these technologies are deployed. To address this challenge, we introduce an intuitive, generative AI based solution that could help governments establish local ethical principles for AI software and ensure adherence to these standards. We propose two web applications: one for government use and another for software developers. The government-centric application dynamically calibrates ethical weights across domains such as the economy, education, and healthcare according to sociocultural context. By using LLMs, this application enables the creation of a tailored ethical blueprint for each domain or context, helping each country or region better define its core values. For developers, we propose a diagnostic application that actively checks software, assessing its alignment with the ethical principles established by the government. This feedback allows developers to recalibrate their AI applications, ensuring they are both efficient and ethically suitable for the intended area of use. In summary, this paper presents a tool utilizing LLMs to adapt software development to the ethical and cultural principles of a specific society.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_8-AI_Ethical_Framework_A_Government_Centric_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Logistic Regression for Credit Card Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151107</link>
        <id>10.14569/IJACSA.2024.0151107</id>
        <doi>10.14569/IJACSA.2024.0151107</doi>
        <lastModDate>2024-11-29T12:18:18.5600000+00:00</lastModDate>
        
        <creator>Yassine Hmidy</creator>
        
        <creator>Mouna Ben Mabrouk</creator>
        
        <subject>Credit card fraud; fraud detection; computational complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Credit card fraud poses a significant threat to financial institutions and consumers worldwide, necessitating robust and reliable detection methods. Traditional classification models often struggle with the challenges of imbalanced datasets, noise, and outliers inherent in transaction data. This paper introduces a novel fraud detection approach based on a discrete non-additive integral with respect to a non-monotonic set function. This method not only enhances classification performance but also provides an interval-valued output that serves as an index of reliability for each prediction. The width of this interval correlates with the prediction error, offering valuable insights into the confidence of the classification results. Such an index is crucial in high-stakes scenarios where misclassifications can have severe consequences. The model is validated through extensive experiments on credit card transaction datasets, demonstrating its effectiveness in handling imbalanced data and its superiority over traditional models in terms of accuracy and reliability assessment. However, potential challenges such as increased computational complexity and the need for careful parameter tuning may affect scalability and real-time implementation. Addressing these challenges could further enhance the practical applicability of the proposed method in fraud detection systems.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_7-Reliable_Logistic_Regression_for_Credit_Card_Fraud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low-Cost IoT Sensor for Indoor Monitoring with Prediction-Based Data Collection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151106</link>
        <id>10.14569/IJACSA.2024.0151106</id>
        <doi>10.14569/IJACSA.2024.0151106</doi>
        <lastModDate>2024-11-29T12:18:18.5470000+00:00</lastModDate>
        
        <creator>Paolo Capellacci</creator>
        
        <creator>Lorenzo Calisti</creator>
        
        <creator>Emanuele Lattanzi</creator>
        
        <subject>IoT; indoor monitoring; prediction-based data collection; deep-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The proliferation of Internet of Things technologies has revolutionized the landscape of indoor environmental monitoring, offering opportunities to enhance comfort, health, and energy efficiency. This paper presents the development and implementation of a low-cost IoT sensor system designed for indoor monitoring with a Machine Learning-driven prediction-based data collection approach. Leveraging deep learning algorithms, the IoT device predicts significant environmental changes and dynamically adjusts the data collection frequency to optimize energy consumption and data transmission. Experimental results demonstrate the system’s ability to accurately predict environmental variations, resulting in a reduction in data transmission and power usage of up to 96% without compromising the monitoring quality. The findings highlight the potential of prediction-based data collection as a viable solution for sustainable and effective indoor environment monitoring on low-cost IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_6-A_Low_Cost_IoT_Sensor_for_Indoor_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Future of IoT Security in Saudi Arabian Start-Ups: A Position Paper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151105</link>
        <id>10.14569/IJACSA.2024.0151105</id>
        <doi>10.14569/IJACSA.2024.0151105</doi>
        <lastModDate>2024-11-29T12:18:18.5300000+00:00</lastModDate>
        
        <creator>Safar Albaqami</creator>
        
        <creator>Maziar Nekovee</creator>
        
        <creator>Imran Khan</creator>
        
        <subject>IoT; Saudi Arabia; start-ups; computing security challenges; technology; innovation; cyber threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>This research explores the intricacies of implementing and securing Internet of Things (IoT) technology in Saudi Arabian startups. In the midst of Saudi Arabia&#39;s ambitious pursuit of social and economic progress via IoT breakthroughs, entrepreneurs have emerged as critical participants grappling with serious computing security challenges. This study conducts a thorough examination of the cybersecurity risks associated with start-ups&#39; IoT applications, technologies, and innovations in Saudi Arabia by reviewing a wide range of publications. The key objectives are to identify the main cybersecurity risks, analyze the impact of IoT device networking on privacy and security, and propose strategies to mitigate these threats. Furthermore, the study stresses the importance of funding up-to-date security technologies, cooperating with cybersecurity experts, and shifting towards cloud-based security. The study also identifies the importance of cybersecurity education and training to enhance the defensive mechanisms of startups against cyber threats. This study provides novel insights by identifying the distinct cybersecurity obstacles encountered by IoT-enabled businesses in Saudi Arabia and proposing a complete framework to enhance their security architecture. Robust cybersecurity policies are vital both for unleashing the transformative potential of IoT for startups and for guiding Saudi Arabia towards its objective of being a worldwide leader in IoT. This paper advocates for a cooperative strategy that involves policymakers, industry stakeholders, and entrepreneurs to prioritize and allocate resources towards a safe and robust IoT ecosystem. This would help promote economic development and innovation in the country.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_5-The_Future_of_IoT_Security_in_Saudi_Arabian_Start_Ups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Future of Mainframe IDMS: Leveraging Artificial Intelligence for Modernization and Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151104</link>
        <id>10.14569/IJACSA.2024.0151104</id>
        <doi>10.14569/IJACSA.2024.0151104</doi>
        <lastModDate>2024-11-29T12:18:18.5130000+00:00</lastModDate>
        
        <creator>Vasanthi Govindaraj</creator>
        
        <subject>IDMS modernization; artificial intelligence in legacy systems; mainframe database optimization; predictive maintenance; cloud integration; AI-driven query optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>IDMS (Integrated Database Management System) has long been a backbone for mission-critical systems in finance, healthcare, and government sectors. However, the rigid architecture of legacy systems poses challenges in scalability, flexibility, and integration with modern technologies. This paper explores IDMS modernization using Artificial Intelligence (AI), with a focus on predictive maintenance, query optimization, and cloud integration. Through real-world implementations, the integration of AI-driven solutions has shown transformative potential: query response times were reduced by 25%, unscheduled downtime decreased by 30%, and system scalability improved by accommodating a 40% increase in traffic without degradation. By leveraging AI-powered automation and modern cloud infrastructures, IDMS can achieve database optimization and real-time operational efficiency. This work highlights how AI ensures the relevance and competitiveness of IDMS, enabling it to meet the demands of modern legacy systems and ensuring its sustained role in critical business operations.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_4-The_Future_of_Mainframe_IDMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality in Education: Revolutionizing Teaching and Learning Practices – State-of-the-Art</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151103</link>
        <id>10.14569/IJACSA.2024.0151103</id>
        <doi>10.14569/IJACSA.2024.0151103</doi>
        <lastModDate>2024-11-29T12:18:18.5000000+00:00</lastModDate>
        
        <creator>Samer Alhebaishi</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Augmented reality; education; instructional technology; technology integration; student engagement; teacher training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>The evolution and contemporary applications of instructional technology, particularly the transformative impact of Augmented Reality (AR) in education, are comprehensively explored in this study. Tracing the journey from early visual aids to sophisticated AR, the aim is to highlight continuous efforts to enhance educational experiences. The effectiveness of AR in increasing student engagement, comprehension, and personalized learning across various disciplines is critically assessed, revealing its potential to transform abstract concepts into tangible experiences. Additionally, challenges in AR adoption, such as technological constraints, the necessity for comprehensive educator training, and strategic curriculum integration, are discussed. The objective here is to identify research gaps, emphasizing the need for standardized evaluation methods, larger sample sizes, and long-term impact studies to fully understand AR’s potential. This exploration aims to provide a comprehensive understanding of AR’s capability to revolutionize education and to identify pathways for future research and development in this dynamic field.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_3-Augmented_Reality_in_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Machine Learning Models for Forecasting Infectious Disease Spread</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151102</link>
        <id>10.14569/IJACSA.2024.0151102</id>
        <doi>10.14569/IJACSA.2024.0151102</doi>
        <lastModDate>2024-11-29T12:18:18.4670000+00:00</lastModDate>
        
        <creator>Praveen Damacharla</creator>
        
        <creator>Venkata Akhil Kumar Gummadi</creator>
        
        <subject>Machine learning; linear regression; random forest; time series; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Accurate forecasting of infectious disease spread is essential for effective resource planning and strategic decision-making in public health. This study provides a comprehensive evaluation of various machine learning models, from traditional statistical approaches to advanced deep learning techniques, for forecasting disease outbreak dynamics. Focusing on daily positive cases and daily deaths—key indicators despite potential reporting inconsistencies—our analysis aims to identify the most effective models across different algorithm families. By adapting non-time series methods with temporal factors and enriching time series models with exogenous variables, we enhance model suitability for the data’s time-dependent nature. Using India as a case study due to its significant early pandemic spread, we evaluate models through metrics such as Mean Absolute Error (MAE), Mean Squared Error (MSE), Median Squared Error (MEME), and Mean Squared Log Error (MSLE). The models tested include Linear Regression, Elastic Net, Random Forest, XGBoost, and Simple Exponential Smoothing, among others. Results indicate that the Random Forest Regressor outperforms other methods in terms of prediction accuracy across most metrics. Notably, findings suggest that simpler models can sometimes match or even exceed the reliability of more complex approaches. However, limitations include model sensitivity to data quality and the lack of real-time adaptability, which may affect performance in rapidly evolving outbreak situations. These insights have critical implications for public health policy and resource allocation in managing infectious disease outbreaks.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_2-Comparative_Analysis_of_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Cervical Cancer Based on Behavioral Risk Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151101</link>
        <id>10.14569/IJACSA.2024.0151101</id>
        <doi>10.14569/IJACSA.2024.0151101</doi>
        <lastModDate>2024-11-29T12:18:18.4530000+00:00</lastModDate>
        
        <creator>Rakeshkumar Mahto</creator>
        
        <creator>Kanika Sood</creator>
        
        <subject>Cervical cancer; random forest; voting classifier; Adaptive Synthetic Sampling (ADASYN); predictive model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(11), 2024</description>
        <description>Machine learning (ML) based predictive models are increasingly used in various fields due to their ability to find patterns and interpret complex relationships between variables in an extensive dataset. However, obtaining a comprehensive dataset is challenging in the field of medicine for rare or emerging infections. Therefore, developing a robust methodology and selecting ML classifiers that can still make compelling predictions even with smaller and imbalanced datasets is essential to defend against emerging threats or infections. This paper uses behavioral risk factors to predict cervical cancer risk. To create a robust technique, we intentionally selected a smaller imbalanced dataset and applied Adaptive Synthetic (ADASYN) sampling and hyper-parameter tuning to enhance the predictive performance. In this work, hyperparameter tuning, evaluated through 3-fold cross-validation, is employed to optimize the performance of the Random Forest, XGBoost, and Voting Classifier models. The results demonstrated high classification performance, with all models achieving an accuracy of 97.12%. Confusion matrix analysis further revealed the models’ robustness in identifying cervical cancer cases with minimal misclassification. A comparison with previous work confirmed the superiority of our approach, showcasing improved accuracy and precision. This study demonstrates the potential of ML models for early screening and risk assessment, even when working with limited datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No11/Paper_1-Predicting_Cervical_Cancer_Based_on_Behavioral_Risk_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact Analysis of Informatization Means Driven by Artificial Intelligence Technology on Visual Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151055</link>
        <id>10.14569/IJACSA.2024.0151055</id>
        <doi>10.14569/IJACSA.2024.0151055</doi>
        <lastModDate>2024-11-01T14:53:08.1400000+00:00</lastModDate>
        
        <creator>Lei Ni</creator>
        
        <subject>Image segmentation; image detection; density peak clustering algorithm; convolutional neural network; faster region convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the popularization of computer technology, the combination of artificial intelligence and image processing technology has become a research hotspot in visual communication. Image processing technology mostly involves the segmentation and detection of images. Image segmentation often focuses on extracting image contour information while ignoring the color of the image, and image detection requires relatively long computation times and cumbersome calculation steps. In response to these issues, a density peak clustering algorithm is proposed for image segmentation. In the image detection phase, a region recommendation network is introduced to improve the faster region convolutional neural network algorithm. The findings demonstrate that under 15% Gaussian noise and 10% salt-and-pepper noise, the segmentation accuracy of the density peak clustering algorithm is 98.13% and 97.89%, respectively. The accuracy, recall, and F-measure of the improved faster region convolutional neural network algorithm are 98.49%, 97.29%, and 97.77%, respectively. The accuracy and average time consumption in the graphics processor environment are 98.18% and 2.94 ms, respectively. In conclusion, the image segmentation algorithm based on density peak clustering and the improved faster region convolutional neural network algorithm are robust and achieve good segmentation and detection performance.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_55-Impact_Analysis_of_Informatization_Means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Environmental and Economic Benefit Analysis of Urban Construction Projects Based on Data Envelopment Analysis and Simulated Annealing Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151026</link>
        <id>10.14569/IJACSA.2024.0151026</id>
        <doi>10.14569/IJACSA.2024.0151026</doi>
        <lastModDate>2024-11-01T14:53:08.0930000+00:00</lastModDate>
        
        <creator>Jie Gong</creator>
        
        <subject>DEA; simulated annealing algorithm; city building; environment; economics; benefit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the continuous advancement of urbanization and the sustained growth of urban population, city building projects are facing severe challenges, and how to analyze their environmental and economic benefits has become an urgent problem. Therefore, based on the proposed method for calculating the environmental and economic benefits of city building projects, this study uses a cross-efficiency data envelopment analysis model for evaluation. An improved simulated annealing algorithm is then used to optimize environmental and economic benefits. The results showed that the improved simulated annealing algorithm tended to stabilize after 480 iterations, with maximum and minimum values of 0.86 and 0.21, respectively. The maximum F1 value was 0.988, indicating better performance. In the three selected urban construction projects, the cross-efficiency data envelopment analysis model achieved high environmental and economic benefits, demonstrating the effectiveness of the model. After optimization with the improved simulated annealing algorithm, the maximum economic benefit increased by 850,000 yuan, proving the effectiveness of the proposed method in analyzing the environmental and economic benefits of urban construction projects. It can provide more scientific decision support for construction project planning.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_26-Environmental_and_Economic_Benefit_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble of Weighted Code Mixed Feature Engineering and Machine Learning-Based Multiclass Classification for Enhanced Opinion Mining on Unstructured Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510124</link>
        <id>10.14569/IJACSA.2024.01510124</id>
        <doi>10.14569/IJACSA.2024.01510124</doi>
        <lastModDate>2024-10-28T14:12:57.6000000+00:00</lastModDate>
        
        <creator>Ruchi Sharma</creator>
        
        <creator>Pravin Shrinath</creator>
        
        <subject>Opinion mining; Machine learning; weighted ensemble; code mixed; Natural Language Processing; Business Intelligence; Online Social Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>There is an exponential growth of opinions on online platforms, and the rapid rise in communication technologies generates a significant need to analyze opinions in online social networks (OSN). However, these opinions are unstructured, rendering knowledge extraction from them complex and challenging to implement. Although existing opinion mining systems are applied in several applications, limited research is available to handle unstructured code-mixed opinions, where lexicons switch between languages within a single opinion. The challenge lies in interpreting complex opinions in multimedia networks owing to their unstructured nature, volume, and lexical structure. This paper presents a novel ensemble approach using machine learning and natural language processing to interpret code-mixed opinions efficiently. Firstly, the opinions are extracted from the input corpus and preprocessed using the proposed Extended Feature Vectors (EFV). Subsequently, the opinion mining system is implemented using a novel weighted code-mixed opinion mining framework (WCM-OMF) for multiclass classification. The proposed WCM-OMF model achieves accuracies of 79.11% and 72% on the benchmark datasets, a significant improvement over existing Hierarchical LSTM, Random Forest, and SVM models and state-of-the-art methods. The proposed solution can be applied to opinion detection in other business sectors, helping to obtain actionable insights for efficient decision-making in enterprises and Business Intelligence (BI).</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_124-Ensemble_of_Weighted_Code_Mixed_Feature_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Techniques for Optimizing Demand-Side Management in Microgrids Through Load-Based Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510123</link>
        <id>10.14569/IJACSA.2024.01510123</id>
        <doi>10.14569/IJACSA.2024.01510123</doi>
        <lastModDate>2024-10-28T14:12:57.5870000+00:00</lastModDate>
        
        <creator>Ramia Ouederni</creator>
        
        <creator>Bechir Bouaziz</creator>
        
        <creator>Faouzi Bacha</creator>
        
        <subject>Demand side management; microgrid; load shifting; peak clipping; valley fill</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Microgrids are crucial for ensuring reliable electricity in remote areas, but integrating renewable sources like photovoltaic (PV) systems presents challenges due to supply intermittency and demand fluctuations. Demand-side management (DSM) addresses these issues by adjusting consumption patterns. This article explores a DSM strategy combining load shifting (shifting demand to periods of high PV generation), peak clipping (limiting maximum load), and valley filling (redistributing load during low-demand periods). Implemented in MATLAB and tested on a PV-battery microgrid, the strategy significantly reduces peak demand, improves the peak-to-average demand ratio (PAR), and enhances system stability and flexibility, particularly with the inclusion of deferrable loads.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_123-Advanced_Techniques_for_Optimizing_Demand_Side_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Load-Balancing and Container Deployment for Enhancing Latency in an Edge Computing-Based IoT Network Using Kubernetes for Orchestration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510122</link>
        <id>10.14569/IJACSA.2024.01510122</id>
        <doi>10.14569/IJACSA.2024.01510122</doi>
        <lastModDate>2024-10-28T14:12:57.5700000+00:00</lastModDate>
        
        <creator>Garrik Brel Jagho Mdemaya</creator>
        
        <creator>Milliam Maxime Zekeng Ndadji</creator>
        
        <creator>Miguel Landry Foko Sindjoung</creator>
        
        <creator>Mthulisi Velempini</creator>
        
        <subject>Latency; Kubernetes; edge computing; Internet of Things; load-balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Edge Computing (EC) provides computational and storage resources close to data-generating devices and reduces end-to-end latency for communications between end-devices and remote servers. In smart cities (SC), for example, thousands of applications run on edge servers, and it becomes crucial to manage resource allocation and load balancing to improve data transmission throughput and reduce latency. Kubernetes (k8s) is a widely used container orchestration platform commonly employed for the efficient management of containerized applications in SC. However, it does not integrate well with certain EC requirements such as network-related metrics and the heterogeneity of EC clusters. Furthermore, requests are distributed equally across all replicas of an application, which may increase processing time, since nodes in an EC environment are geographically dispersed. Several existing studies have investigated this problem; unfortunately, the proposed solutions consume a large amount of node resources in the cluster. To the best of our knowledge, none of these studies considered cluster heterogeneity when deploying applications that have different resource requirements. To address this issue, this paper proposes a new technique to deploy applications on edge servers by extending the Kubernetes scheduler, and an approach to manage requests among the different nodes. The simulation results show that our solution outperforms some state-of-the-art works in terms of latency.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_122-Efficient_Load_Balancing_and_Container_Deployment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Scheme to Counter the Man in the Middle Attacks in SDN Networks-Based Domain Name System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510121</link>
        <id>10.14569/IJACSA.2024.01510121</id>
        <doi>10.14569/IJACSA.2024.01510121</doi>
        <lastModDate>2024-10-28T14:12:57.5530000+00:00</lastModDate>
        
        <creator>Frank Manuel Vuide Pangop</creator>
        
        <creator>Miguel Landry Foko Sindjoung</creator>
        
        <creator>Mthulisi Velempini</creator>
        
        <subject>Cyber security; domain name system; man in the middle attack; software defined networking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Internet and computer networks are vulnerable to cyber-attacks which compromise the services they provide to facilitate the management of data and users. The domain name system (DNS) is the Internet service that translates domain names to IP addresses and IP addresses to domain names. DNS is sometimes the victim of attacks that are difficult to detect and prevent because they are not only very stealthy but also conceal themselves behind the system's apparent proper functioning. Among the attacks to which DNS is subject are man-in-the-middle (MITM) attacks. Traditional networks, which centralize all network functions in a single device, make detecting these attacks and protecting systems against them challenging. Software-defined networking (SDN) is a technology that is widely used to address many traditional network problems such as security and network architecture. Therefore, in this paper, we propose a scheme designed to detect and block man-in-the-middle attacks based on a newly defined architecture. The effectiveness of our secured solution is evaluated in an SDN architecture where an Address Resolution Protocol spoofing MITM attack is generated for evaluation purposes. The results of our simulations show that we can effectively detect the attack, and the performance evaluation of our approach shows that the proposed solution is effective in terms of security, implementation cost, and resource consumption. We therefore recommend the use of our proposed solution to address MITM attacks in SDN networks-based Domain Name System.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_121-A_Secure_Scheme_to_Counter_the_Man_in_the_Middle_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Historical Document Digitization: LSTM-Enhanced OCR for Arabic Handwritten Manuscripts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510120</link>
        <id>10.14569/IJACSA.2024.01510120</id>
        <doi>10.14569/IJACSA.2024.01510120</doi>
        <lastModDate>2024-10-28T14:12:57.5400000+00:00</lastModDate>
        
        <creator>Safiullah Faizullah</creator>
        
        <creator>Muhammad Sohaib Ayub</creator>
        
        <creator>Turki Alghamdi</creator>
        
        <creator>Toqeer Syed Ali</creator>
        
        <creator>Muhammad Asad Khan</creator>
        
        <creator>Emad Nabil</creator>
        
        <subject>Optical character recognition; transfer learning; Arabic OCR; image processing; classification; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Optical Character Recognition (OCR) holds immense practical value in the realm of hand-written document analysis, given its widespread use in various human transactions. This scientific process enables the conversion of diverse documents or images into analyzable, editable, and searchable data. In this paper, we present a novel approach that combines transfer learning and Arabic OCR technology to digitize ancient handwritten scripts. Our method aims to preserve and enhance accessibility to extensive collections of historically significant materials, including fragile manuscripts and rare books. Through a comprehensive examination of the challenges encountered in digitizing Arabic handwritten texts, we propose a transfer learning-based framework that leverages pre-trained models to overcome the scarcity of labeled data for training OCR systems. The experimental results demonstrate a remarkable improvement in the recognition accuracy of Arabic handwritten texts, thereby offering a highly promising solution for the digitization of historical documents. Our work enables the digitization of large collections of ancient historical materials, including manuscripts and rare books characterized by delicate physical conditions. The proposed approach signifies a significant step towards preserving our cultural heritage and facilitating advanced research in historical document analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_120-Revolutionizing_Historical_Document_Digitization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation into the Risk Factors of Forest Fires and the Efficacy of Machine Learning Techniques for Early Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510119</link>
        <id>10.14569/IJACSA.2024.01510119</id>
        <doi>10.14569/IJACSA.2024.01510119</doi>
        <lastModDate>2024-10-28T14:12:57.5400000+00:00</lastModDate>
        
        <creator>Asma Cherif</creator>
        
        <creator>Sara Chaudhry</creator>
        
        <creator>Sabina Akhtar</creator>
        
        <subject>Machine Learning; Forest Fire; LSTM; ARIMA; SVR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Forest fires are a major environmental hazard that can have significant impacts on human lives. Early detection and swift action are crucial for controlling such situations and minimizing damage. However, the automatic tools based on local sensors in meteorological stations are often insufficient for detecting fires immediately. Machine learning offers a promising solution to forecast forest fires and reduce their rapid spread. In recent state-of-the-art solutions, only one or two techniques have been utilized for prediction. In this research, we investigate several methods for forest fire area prediction, including Long Short Term Memory (LSTM), Auto Regressive Integrated Moving Average (ARIMA), and Support Vector Regression (SVR). Our aim is to identify the most effective and optimal method for predicting forest fires. After comparing our results with other artificial intelligence and machine learning techniques applied to the same dataset, we found that the LSTM approach outperforms the ARIMA and SVR predictors by more than 92%. Our findings also indicate that the LSTM algorithm has a lower estimation error when compared to other predictors, thus providing more accurate forecasts.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_119-An_Investigation_into_the_Risk_Factors_of_Forest_Fires.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skin Diseases Classification with Machine Learning and Deep Learning Techniques: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510118</link>
        <id>10.14569/IJACSA.2024.01510118</id>
        <doi>10.14569/IJACSA.2024.01510118</doi>
        <lastModDate>2024-10-28T14:12:57.5230000+00:00</lastModDate>
        
        <creator>Amina Aboulmira</creator>
        
        <creator>Hamid Hrimech</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <subject>Skin Disease Classification; Artificial Intelligence (AI); Convolutional Neural Networks (CNNs); Transformer-based Models; Generative Adversarial Networks (GANs); ensemble learning; hybrid models; ISIC dataset; dermatology; machine learning; deep learning; skin cancer detection; dermoscopic images; medical imaging; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Skin cancer is one of the most prevalent types of cancer worldwide, and its early detection is crucial for improving patient outcomes. Artificial Intelligence (AI) has shown significant promise in assisting dermatologists with accurate and efficient diagnosis through automated skin disease classification. This systematic review aims to provide a comprehensive overview of the various AI techniques employed for skin disease classification, focusing on their effectiveness across different datasets and methodologies. A total of 220 articles were initially identified from databases such as Scopus and IEEE Xplore. After removing duplicates and conducting a title and abstract screening, 213 studies were assessed for eligibility based on predefined criteria such as study relevance, clarity of results, and innovative AI approaches. Following full-text review, 56 studies were included in the final analysis. These studies were categorized based on the AI techniques used, including Convolutional Neural Networks (CNNs), Transformer-based models, hybrid models combining CNNs with other techniques, Generative Adversarial Networks (GANs), and ensemble learning approaches. The review highlights that the ISIC dataset and its variations are the most commonly used data sources, owing to their extensive and diverse collection of dermoscopic images. The results indicate that CNN-based models remain the most widely adopted and effective approach for skin disease classification, with several hybrid and Transformer-based models also demonstrating high accuracy and specificity. Despite the advancements, challenges such as dataset variability, the need for more diverse training data, and the lack of interpretability in AI models persist. This review provides insights into current trends and identifies future directions for research, emphasizing the importance of integrating AI into clinical practice for improved skin disease management.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_118-Skin_Diseases_Classification_with_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBPF: An Efficient Dynamic Block Propagation Framework for Blockchain Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510117</link>
        <id>10.14569/IJACSA.2024.01510117</id>
        <doi>10.14569/IJACSA.2024.01510117</doi>
        <lastModDate>2024-10-28T14:12:57.5070000+00:00</lastModDate>
        
        <creator>Osama Farouk</creator>
        
        <creator>Mahmoud Bakrey</creator>
        
        <creator>Mohamed Abdallah</creator>
        
        <subject>Blockchain; scalability; minimum spanning tree; compression; broadcasting; optimized neighbor selection; network bandwidth; transmission time optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Scalability poses a significant challenge in blockchain networks, particularly in optimizing the propagation time of new blocks. This paper introduces an approach, termed “DBPF” (Dynamic Block Propagation Framework for Blockchain Networks), aimed at addressing this challenge. The approach focuses on optimizing neighbor selection during block propagation to mitigate redundancy and enhance network efficiency. By employing informed neighbor selection and leveraging the Brotli lossless compression algorithm to reduce block size, the objective is to optimize network bandwidth and minimize transmission time. The DBPF framework calculates the Minimum Spanning Tree (MST) to ensure efficient communication paths between nodes, while the Brotli compression algorithm reduces the block size to optimize network bandwidth. The core objective of DBPF is to streamline the propagation process by selecting optimal neighbors and eliminating unnecessary data redundancy. Through experimentation and simulation of the block propagation process using DBPF, we demonstrate a significant reduction in the propagation time of new blocks compared to traditional methods. Comparisons against approaches such as selecting neighbors with the least Round-Trip Time (RTT), random neighbor selection, and the DONS approach reveal a decrease in propagation time of more than 45%, depending on network type and number of nodes. The effectiveness of DBPF in boosting blockchain network efficiency and decreasing propagation time is emphasized by the experimental findings. Additionally, various compression algorithms such as zstandard and zlib were tested during the research; nevertheless, the results suggest that Brotli produced the best outcomes. Through the integration of optimized neighbor selection and effective data compression, DBPF presents a promising resolution to the scalability issues confronting blockchain networks. These results showcase the capability of DBPF to notably enhance network performance, paving the way toward smoother and more efficient blockchain operations.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_117-DBPF_An_Efficient_Dynamic_Block_Propagation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hiding Encrypted Images in Audios Based on Cellular Automatas and Discrete Fourier Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510116</link>
        <id>10.14569/IJACSA.2024.01510116</id>
        <doi>10.14569/IJACSA.2024.01510116</doi>
        <lastModDate>2024-10-28T14:12:57.4930000+00:00</lastModDate>
        
        <creator>Jose Alva Cornejo</creator>
        
        <creator>Esdras D. Vasquez</creator>
        
        <creator>Jose Calizaya Quispe</creator>
        
        <creator>Roxana Flores-Quispe</creator>
        
        <creator>Yuber Velazco-Paredes</creator>
        
        <subject>Cellular automaton; Fourier Transform; cryptography; synchronization; steganography; embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the increasing need for secure long-distance communication, protecting sensitive information such as images during transmission remains a significant challenge. This paper proposes a new method for hiding encrypted images inside audio files by integrating Cellular Automata (CA) and the Discrete Fourier Transform (DFT). The primary aim is to enable secure transmission of large encrypted images without altering the audio’s perceptual quality. The scheme leverages the cryptographic properties of CA to generate encrypted images, which are then embedded into inaudible frequencies of the audio using the DFT. Results show that this method successfully hides and recovers images of considerable size, maintaining bit-level integrity of the original images while preserving audio quality. However, the scheme lacks resilience to signal processing attacks, such as compression or filtering, and the resulting audio file is larger than the original. Despite these limitations, the method provides a competitive advantage in payload capacity and efficiency, making it suitable for applications where the transmission of large, sensitive data is necessary but not subject to aggressive signal attacks.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_116-Hiding_Encrypted_Images_in_Audios.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Text Summarization with Sentence Clustering and Natural Language Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510115</link>
        <id>10.14569/IJACSA.2024.01510115</id>
        <doi>10.14569/IJACSA.2024.01510115</doi>
        <lastModDate>2024-10-28T14:12:57.4770000+00:00</lastModDate>
        
        <creator>Zahir Edress</creator>
        
        <creator>Yasin Ortakci</creator>
        
        <subject>Abstractive summarization; extractive summarization; sentence clustering; language understanding; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Text summarization is an important task in natural language processing (NLP), with significant implications for information retrieval and content management. Traditional summarization methods often struggle with issues like redundancy, loss of key information, and inability to capture the underlying semantic structure of the text. This paper addresses these challenges by presenting an advanced approach to extractive summarization, which integrates clustering-based sentence selection with the BART model. The proposed method tackles the problem of redundancy by using Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction, followed by K-means clustering to group similar sentences. This clustering step is designed to reduce redundancy by ensuring that each cluster represents a distinct theme or topic. Representative sentences are then selected from these clusters based on their cosine similarity to a user query, which helps in retaining the most relevant information. These selected sentences are then fed into the BART model to generate the final abstractive summary. This combination of extractive and abstractive techniques addresses the common problem of information loss, ensuring that the summary is both comprehensive and coherent. The approach is evaluated using the CNN/DailyMail and XSum datasets, which are widely recognized benchmarks in the summarization domain. Results assessed through ROUGE metrics demonstrate that the proposed model substantially improves summarization quality compared to existing benchmarks.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_115-Optimizing_Text_Summarization_with_Sentence_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ERCO-Net: Enhancing Image Dehazing for Optimized Detail Retention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510114</link>
        <id>10.14569/IJACSA.2024.01510114</id>
        <doi>10.14569/IJACSA.2024.01510114</doi>
        <lastModDate>2024-10-28T14:12:57.4600000+00:00</lastModDate>
        
        <creator>Muhammad Ayub Sabir</creator>
        
        <creator>Fatima Ashraf</creator>
        
        <creator>Ahthasham Sajid</creator>
        
        <creator>Nisreen Innab</creator>
        
        <creator>Reem Alrowili</creator>
        
        <creator>Yazeed Yasin</creator>
        
        <subject>Image dehazing; edge restriction; contextual optimization; transmission map estimation; haze removal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Image dehazing is a crucial preprocessing step in computer vision for enhancing image quality and enabling many downstream applications. However, existing methods often do not accurately restore hazy images while maintaining computational efficiency. To overcome this challenge, we propose ERCO-Net, a new fusion framework that combines edge restriction and contextual optimization methods. By using boundary constraints, ERCO-Net extends image boundaries to help protect the edges and structures of an image. Contextual optimization impacts the final quality of the dehazed image by enhancing smoothness and coherence. We compare ERCO-Net with conventional approaches such as the dark channel prior (DCP), the All-in-One Dehazing Network (AoD), and the Feature Fusion Attention Network (FFA-Net). The comparative evaluation highlights the effectiveness of the proposed fusion method, providing significant improvements in image clarity, contrast, and color. The combination of edge restriction and contextual optimization not only enhances the quality of dehazing but also decreases computational complexity, presenting a promising avenue for advancing image restoration techniques. The source code is available at https://github.com/FatimaAyub12/Image-Dehazing-.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_114-ERCO_Net_Enhancing_Image_Dehazing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Tumor Classification Using Dynamic Ultrasound Sequence Pooling and Deep Transformer Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510112</link>
        <id>10.14569/IJACSA.2024.01510112</id>
        <doi>10.14569/IJACSA.2024.01510112</doi>
        <lastModDate>2024-10-28T14:12:57.4430000+00:00</lastModDate>
        
        <creator>Mohamed A Hassanien</creator>
        
        <creator>Vivek Kumar Singh</creator>
        
        <creator>Mohamed Abdel-Nasser</creator>
        
        <creator>Domenec Puig</creator>
        
        <subject>Breast ultrasound; breast cancer; CAD systems; deep learning; vision transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Breast ultrasound (BUS) imaging is widely utilized for detecting breast cancer, one of the most life-threatening cancers affecting women. Computer-aided diagnosis (CAD) systems can assist radiologists in diagnosing breast cancer; however, the performance of these systems can be degraded by speckle noise, artifacts, and low contrast in BUS images. In this paper, we propose a novel method for breast tumor classification based on the dynamic pooling of BUS sequences. Specifically, we introduce a weighted dynamic pooling approach that models the temporal evolution of breast tissues in BUS sequences, thereby reducing the impact of noise and artifacts. The dynamic pooling weights are determined using image quality metrics such as blurriness and brightness. The pooled BUS sequence is then input into an efficient hybrid vision transformer-CNN network, which is trained to classify breast tumors as benign or malignant. Extensive experiments and comparisons on BUS sequences demonstrate the effectiveness of the proposed method, achieving an accuracy of 93.78% and outperforming existing methods. The proposed method has the potential to enhance breast cancer diagnosis and contribute to lowering the mortality rate.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_112-Breast_Tumor_Classification_Using_Dynamic_Ultrasound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Moroccan Legal and Legislative Texts Using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510113</link>
        <id>10.14569/IJACSA.2024.01510113</id>
        <doi>10.14569/IJACSA.2024.01510113</doi>
        <lastModDate>2024-10-28T14:12:57.4430000+00:00</lastModDate>
        
        <creator>Amina BOUHOUCHE</creator>
        
        <creator>Mustapha ESGHIR</creator>
        
        <creator>Mohammed ERRACHID</creator>
        
        <subject>Classification Arabic text; natural language processing; legal data; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Artificial intelligence tools have revolutionized many fields, bringing significant progress in automating tasks and solving complex problems. In this article, we focus on the legal domain, where the data to be processed are domain-specific and available in large quantities. Our study consists of carrying out an automatic classification of Moroccan legal and legislative texts in Arabic. In addition, we conduct a series of experiments to evaluate the impact of stemming, class imbalance, and data quantity on the performance of the models used. Given the specificity of the Arabic language, we used Natural Language Processing (NLP) tools adapted to this language. For classification, we worked with the following models: Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Naive Bayes (NB). The results obtained are very impressive, and the comparison of model outputs enriches the debate on the specificities of each model.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_113-Classification_of_Moroccan_Legal_and_Legislative_Texts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Interpretable Diabetic Retinopathy Detection: Combining Multi-CNN Models with Grad-CAM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510111</link>
        <id>10.14569/IJACSA.2024.01510111</id>
        <doi>10.14569/IJACSA.2024.01510111</doi>
        <lastModDate>2024-10-28T14:12:57.4300000+00:00</lastModDate>
        
        <creator>Zakaria Said</creator>
        
        <creator>Fatima-Ezzahraa Ben-Bouazza</creator>
        
        <creator>Mounir Mekkour</creator>
        
        <subject>Diabetic retinopathy; retinal images; Grad-CAM; weighted voting; meta-learners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Diabetic retinopathy (DR) is a leading cause of vision impairment and blindness, necessitating accurate and early detection to prevent severe outcomes. This paper discusses the utility of ensemble learning methodologies in enhancing the prediction accuracy of diabetic retinopathy detection from retinal images and the prospective utilization of Gradient-weighted Class Activation Mapping (Grad-CAM) to maximize model interpretability. Using a dataset of 1,437 color fundus images, we explored the potential of different pre-trained convolutional neural networks (CNNs), including Xception, VGG16, InceptionV3, and DenseNet121. Their respective accuracies on the test set were 89.27%, 91.44%, 89.06%, and 93.35%. Our objective was to improve the accuracy of diabetic retinopathy detection. We explored methods to combine predictions from these four models: we began with weighted voting, which achieved an accuracy of 93.95%, and subsequently employed meta-learners, achieving an improved accuracy of 94.63%. These approaches surpassed individual models in distinguishing between non-proliferative and proliferative phases of DR. These findings underscore the potential of these approaches in developing robust diagnostic tools for diabetic retinopathy. Furthermore, techniques like Grad-CAM enhance interpretability, opening the door for further advancements in early-stage detection and automated clinical integration while maximizing accuracy and interpretability.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_111-Towards_Interpretable_Diabetic_Retinopathy_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Credit Card Fraud Detection Using a Stacking Model Approach and Hyperparameter Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510110</link>
        <id>10.14569/IJACSA.2024.01510110</id>
        <doi>10.14569/IJACSA.2024.01510110</doi>
        <lastModDate>2024-10-28T14:12:57.4130000+00:00</lastModDate>
        
        <creator>El Bazi Abdelghafour</creator>
        
        <creator>Chrayah Mohamed</creator>
        
        <creator>Aknin Noura</creator>
        
        <creator>Bouzidi Abdelhamid</creator>
        
        <subject>Credit card fraud detection; stacking models; hyperparameter tuning; logistic regression; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Credit card fraud detection has emerged as a crucial area of study, especially with the rise in online transactions coupled with increased financial losses from fraudulent activities. In this regard, a refined framework for identifying credit card fraud is introduced, utilizing a stacking ensemble model along with hyperparameter optimization. This paper integrates three highly effective algorithms—XGBoost, CatBoost, and LightGBM—into a single strategy to improve predictive performance and address the issue of unbalanced datasets. To enable a more efficient search and adjustment of model parameters, Bayesian Optimization is employed for hyperparameter tuning. The proposed approach has been tested on a publicly accessible dataset. Results indicate notable enhancements over established baseline models in essential performance metrics, including ROC-AUC, precision, and recall. This method, while effective in fraud detection, holds significant promise for other fields focused on identifying rare occurrences.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_110-Enhancing_Credit_Card_Fraud_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Traffic Congestion Using Real-Time Traffic Monitoring with YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510109</link>
        <id>10.14569/IJACSA.2024.01510109</id>
        <doi>10.14569/IJACSA.2024.01510109</doi>
        <lastModDate>2024-10-28T14:12:57.3970000+00:00</lastModDate>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <creator>Irfaan Mohammad Boodhun</creator>
        
        <creator>Choo Wou Onn</creator>
        
        <subject>Computer vision; deep learning; vehicle detection and tracking; traffic accidents; traffic congestion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The voluminous number of vehicles present on principal roads, together with ongoing road expansion projects, is triggering serious roadblocks during peak hours in many places in Mauritius. Consequently, an innovative solution has been proposed using the strength of deep learning neural networks and cutting-edge computer vision methodologies to help reduce this problem. The idea is to create a reliable system that is adequate to measure traffic density and traffic flow on important roads of Mauritius in real time. A dataset of 2800 frames was collected and used to train and test the YOLO models. A setup was designed for detecting, tracking, and counting vehicles such as buses, cars, motorbikes, trucks, and vans. Relevant traffic information from videos can also be retrieved to generate statistics for traffic density. Moreover, the system can estimate the individual speed of vehicles as well as determine traffic flow on bidirectional roads. The overall mean counting accuracy was 96.1% and the overall mean classification accuracy was 94.4%. For traffic flow, the overall mean accuracy was 93.9%, while traffic density was estimated with an overall mean accuracy of 95.3%. In comparison with the manual approaches used in Mauritius to understand the state of traffic, the proposed system is a modern, low-cost, and effective solution that can be adopted to potentially reduce traffic congestion and traffic accidents.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_109-Reducing_Traffic_Congestion_Using_Real_Time_Traffic_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MH-LViT: Multi-path Hybrid Lightweight ViT Models with Enhancement Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510107</link>
        <id>10.14569/IJACSA.2024.01510107</id>
        <doi>10.14569/IJACSA.2024.01510107</doi>
        <lastModDate>2024-10-28T14:12:57.3830000+00:00</lastModDate>
        
        <creator>Yating Li</creator>
        
        <creator>Wenwu He</creator>
        
        <creator>Shuli Xing</creator>
        
        <creator>Hengliang Zhu</creator>
        
        <subject>Multi-path hybrid; lightweight ViT; normalized knowledge distillation; Mixup regularization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Vision Transformers (ViTs) have become increasingly popular in various vision tasks. However, it also becomes challenging to adapt them to applications where computation resources are very limited. To this end, we propose a novel multi-path hybrid architecture and develop a series of lightweight ViT (MH-LViT) models to strike a good balance between performance and complexity. Specifically, a triple-path architecture is exploited to facilitate feature representation learning that divides and shuffles image features in channels following a feature scale balancing strategy. In the first path, ViTs are utilized to extract global features, while in the second path, CNNs are introduced to focus more on local feature extraction. The third path completes the representation learning with a residual connection. Based on the developed lightweight models, a novel knowledge distillation framework, IntPNKD (Normalized Knowledge Distillation with Intermediate Layer Prediction Alignment), is proposed to enhance their representation ability, and in the meanwhile, an additional Mixup regularization term is introduced to further improve their generalization ability. Experimental results on benchmark datasets show that, with the multi-path architecture, the developed lightweight models perform well by utilizing existing CNN and ViT components, and with the proposed model enhancement training methods, the resultant models notably outperform their competitors. For example, on the miniImageNet dataset, our MH-LViT M3 improves the top-1 accuracy by 4.43% and runs 4x faster on GPU compared with EdgeViT-S; on the CIFAR-10 dataset, our MH-LViT M1 improves the top-1 accuracy by 1.24% and the enhanced version MH-LViT M1* by 2.28%, compared to the recent model EfficientViT M1.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_107-MH_LViT_Multi_path_Hybrid_Lightweight_ViT_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Fish Species Detection and Classification Using a Novel Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510108</link>
        <id>10.14569/IJACSA.2024.01510108</id>
        <doi>10.14569/IJACSA.2024.01510108</doi>
        <lastModDate>2024-10-28T14:12:57.3830000+00:00</lastModDate>
        
        <creator>Musab Iqtait</creator>
        
        <creator>Marwan Harb Alqaryouti</creator>
        
        <creator>Ala Eddin Sadeq</creator>
        
        <creator>Ahmad Aburomman</creator>
        
        <creator>Mahmoud Baniata</creator>
        
        <creator>Zaid Mustafa</creator>
        
        <creator>Huah Yong Chan</creator>
        
        <subject>Deep learning; Fish4Knowledge; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This study presents an innovative deep learning approach for accurate fish species detection and classification in underwater environments. We introduce FishNet, a novel convolutional neural network architecture that combines attention mechanisms, transfer learning, and data augmentation techniques to improve fish recognition in challenging aquatic conditions. Our method was evaluated on the Fish4Knowledge dataset, achieving a mean average precision (mAP) of 92.3% for detection and 89.7% accuracy for species classification, outperforming existing state-of-the-art models. The proposed approach demonstrates robust performance across various underwater conditions, including different lighting, turbidity, and occlusion scenarios, making it suitable for real-world applications in marine biology, fisheries management, and ecological monitoring.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_108-Enhanced_Fish_Species_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Most Suitable Delivery Method for Pregnant Women by Using the KGC Ensemble Algorithm in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510106</link>
        <id>10.14569/IJACSA.2024.01510106</id>
        <doi>10.14569/IJACSA.2024.01510106</doi>
        <lastModDate>2024-10-28T14:12:57.3670000+00:00</lastModDate>
        
        <creator>Pusarla Sindhu</creator>
        
        <creator>Parasana Sankara Rao</creator>
        
        <subject>Delivery method; stacking; neonatal mortality; KGC ensemble algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Maternal and neonatal mortality rates pose a significant challenge in healthcare systems worldwide. Predicting the childbirth approach is essential for safeguarding the mother’s and child’s well-being. Currently, it is dependent on the judgment of the attending obstetrician. However, selecting the incorrect delivery method can cause serious health complications in both mother and child, over both the short and long term. This research harnesses machine learning algorithms’ capability to automate the delivery method prediction process. This research studied two different stackings implemented in machine learning, leveraging a dataset of 6157 electronic health records and a minimal feature set. Stack 1 consisted of k-nearest neighbors, decision tree, random forest, and support vector machine methods, yielding an F1-score of 95.67%. Stack 2 consisted of Gradient Boosting, k-nearest neighbors, and CatBoost methods, which yielded an F1-score of 98.84%, highlighting the superior effectiveness of its integrated methodologies. This research enables obstetricians to ascertain the delivery method promptly and initiate essential measures to ensure the mother’s and baby’s safety and well-being.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_106-Predicting_the_Most_Suitable_Delivery_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accurate Head Pose Estimation-Based SO(3) and Orientation Tokens for Driver Distraction Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510105</link>
        <id>10.14569/IJACSA.2024.01510105</id>
        <doi>10.14569/IJACSA.2024.01510105</doi>
        <lastModDate>2024-10-28T14:12:57.3500000+00:00</lastModDate>
        
        <creator>Xiong Zhao</creator>
        
        <creator>Sarina Sulaiman</creator>
        
        <creator>Wong Yee Leng</creator>
        
        <subject>Head pose; driver distraction detection; rotation matrix; token; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Driver distraction is an important cause of traffic accidents. By identifying and analyzing the driver’s head posture in monitoring images, the driver’s mental state can be effectively judged, and early warnings or reminders can be given to reduce traffic accidents. We propose a novel dual-branch network named TokenFOE that combines Convolutional Neural Networks (CNN) and Transformers. The CNN branch uses a Multilayer Perceptron (MLP) to infer image features from the backbone, then generates a rotation matrix based on SO(3) to represent head posture. The Dimension Adaptive Transformer branch uses learnable tokens to represent head orientation across 9 categories. The losses of both branches are integrated for training, ultimately yielding accurate head pose estimation results. The training dataset is 300W-LP, and the quantitative testing datasets are AFLW-2000 and BIWI. The experimental results show that the Mean Absolute Error is improved by 21.2% and 9.4% compared to the original SOTA model on the two datasets, and the Mean Absolute Error of Vectors is improved by 19.2% and 10.2%, respectively. Based on the model output, calibrated through the camera adapter module, we present qualitative results on the largest driver distraction detection dataset currently available, the 100-Driver dataset; robust and accurate detection results were achieved for four different camera perspectives in two modalities, RGB and Near Infrared. Additionally, the ablation study shows that the model inference speed (21 to 75 fps) is sufficient for real-time detection.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_105-Accurate_Head_Pose_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Skin Cancer Detection with Transfer Learning and Vision Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510104</link>
        <id>10.14569/IJACSA.2024.01510104</id>
        <doi>10.14569/IJACSA.2024.01510104</doi>
        <lastModDate>2024-10-28T14:12:57.3370000+00:00</lastModDate>
        
        <creator>Istiak Ahmad</creator>
        
        <creator>Bassma Saleh Alsulami</creator>
        
        <creator>Fahad Alqurashi</creator>
        
        <subject>Medical imaging; skin cancer; multi-class classification; detection; deep learning; transfer learning; vision transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Early and accurate detection of skin cancer is critical for effective treatment. This research aims to enhance skin cancer multi-class classification using transfer learning and Vision Transformers (ViTs), addressing the challenges of imbalanced medical imaging data. We applied data augmentation techniques to the HAM10000 dataset to enhance the diversity of the training data and implemented 13 pre-trained transfer learning models. These included DenseNet (121, 169, and 201), ResNet (50V2, 101V2, and 152V2), VGG (16 and 19), NasNet (mobile and large), InceptionV3, MobileNetV2, and InceptionResNetV2, as well as two Vision Transformer architectures (ViT and deepViT). After fine-tuning these models, DenseNet121 achieved the highest accuracy of 94%, while deepViT reached 92%, highlighting the effectiveness of these approaches in skin cancer detection. Future work will focus on refining these models, exploring hybrid approaches that combine convolutional neural networks and transformers, and expanding the framework to other cancer types to advance automated diagnostic tools in dermatology.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_104-Enhancing_Skin_Cancer_Detection_with_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Core Scheduler Task Duplication for Multicore Multiprocessor System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510103</link>
        <id>10.14569/IJACSA.2024.01510103</id>
        <doi>10.14569/IJACSA.2024.01510103</doi>
        <lastModDate>2024-10-28T14:12:57.3200000+00:00</lastModDate>
        
        <creator>Aya A. Eladgham</creator>
        
        <creator>Nesreen I. Ziedan</creator>
        
        <creator>Ibrahim Ziedan</creator>
        
        <subject>MultiCore; multiprocessor; DAG scheduling; dynamic priority; task duplication; clustering; MCP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The scheduling of tasks across multiple cores remains a significant challenge due to its NP-complete nature, especially with the increasing complexity of multi-core multiprocessor architectures. This paper focuses on Multi-Core Oriented (MCO) scheduling algorithms, which specifically target multi-core multiprocessor systems, and proposes a novel scheduling algorithm, Core Scheduler Task Duplication (CSD), specifically designed for multi-core multiprocessor environments. The CSD algorithm combines static and dynamic task prioritization to enhance processor utilization and performance. The proposed algorithm clusters related tasks onto the same cores to improve efficiency and reduce execution time. By leveraging task duplication, the proposed algorithm improves processor utilization and reduces task waiting times. To evaluate the CSD algorithm’s performance, it was implemented and compared against the Modified Critical Path (MCP) scheduling algorithm. A series of experimental tests were conducted on diverse task sets, varying in size and complexity. Simulation results demonstrate that CSD outperforms the compared approaches in task scheduling and processor utilization, making it a promising solution for multi-core systems.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_103-Core_Scheduler_Task_Duplication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secret Sharing as a Defense Mechanism for Ransomware in Cloud Storage Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510102</link>
        <id>10.14569/IJACSA.2024.01510102</id>
        <doi>10.14569/IJACSA.2024.01510102</doi>
        <lastModDate>2024-10-28T14:12:57.3030000+00:00</lastModDate>
        
        <creator>Shuaib A Wadho</creator>
        
        <creator>Sijjad Ali</creator>
        
        <creator>Asma Ahmed A. Mohammed</creator>
        
        <creator>Aun Yichiet</creator>
        
        <creator>Ming Lee Gan</creator>
        
        <creator>Chen Kang Lee</creator>
        
        <subject>Ransomware; secret sharing; cloud storage; data leakage; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Ransomware is a prevalent and highly destructive type of malware that has increasingly targeted cloud storage systems, leading to significant data loss and financial damage. Conventional security mechanisms, such as firewalls, antivirus software, and backups, have proven inadequate in preventing ransomware attacks, highlighting the need for more robust solutions. This paper proposes the use of Secret Sharing Schemes (SSS) as a defense mechanism to safeguard cloud storage systems from ransomware threats. Secret sharing works by splitting data into several encrypted shares, which are stored across different locations. This ensures that even if some shares are compromised, the original data remains recoverable, providing both security and redundancy. We conducted a comprehensive review of existing secret sharing schemes and evaluated their suitability for cloud storage protection. Building on this analysis, we proposed a novel framework that integrates secret sharing with cloud storage systems to enhance their resilience against ransomware attacks. The framework was tested through simulations and theoretical evaluations, which demonstrated its effectiveness in preventing data loss, even in the event of partial compromise. Our findings show that secret sharing can significantly improve the reliability and security of cloud storage systems, minimizing the impact of ransomware by allowing data to be reconstructed without paying a ransom. The proposed solution also offers scalability and flexibility, making it adaptable to different cloud storage environments. This research provides a valuable contribution to the field of cloud security, offering a new layer of protection against the growing threat of ransomware.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_102-Secret_Sharing_as_a_Defense_Mechanism_for_Ransomware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Creation to Enhance Explainability and Predictability of ML Models Using XAI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510101</link>
        <id>10.14569/IJACSA.2024.01510101</id>
        <doi>10.14569/IJACSA.2024.01510101</doi>
        <lastModDate>2024-10-28T14:12:57.2900000+00:00</lastModDate>
        
        <creator>Waseem Ahmed</creator>
        
        <subject>XAI; ML; AI; Recruitment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Bringing more transparency to the decision-making process is important across the many fields that deploy ML tools. ML tools need to be designed so that they are more understandable and explainable to end users while building trust. The field of XAI, although a mature area of research, is increasingly being seen as a solution to address these missing aspects of ML systems. In this paper, we focus on transparency issues when using ML tools in the decision-making process in general, and specifically while recruiting candidates for high-profile positions. In the field of software development, it is important to correctly identify and differentiate highly skilled developers from developers who are adept at only performing regular and mundane programming jobs. If AI is used in the decision process, HR recruiting agents need to justify to their managers why certain candidates were selected and why others were rejected. Online Judges (OJs) are increasingly being used for developer recruitment across various levels, attracting thousands of candidates. Automating this decision-making process using ML tools can bring speed while mitigating bias in the selection process. However, the large raw dataset available on OJs needs to be well curated and enhanced to make the decision process accurate and explainable. To address this, we built and subsequently enhanced an ML regressor model and its underlying dataset using XAI tools. We evaluated the model to show how XAI can be actively and iteratively used during the pre-deployment stage to improve the quality of the dataset and the prediction accuracy of the regression model. We show how these iterative changes helped improve the r2-score of the GradientRegressor model used in our experiments from 0.3507 to 0.9834 (an improvement of 63.27%). We also show how the explainability of the LIME and SHAP tools was increased using these steps. A unique contribution of this work is the application of XAI to a very niche area in recruitment, i.e., the evaluation of user performance on OJs in software developer recruitment.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_101-Feature_Creation_to_Enhance_Explainability_and_Predictability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pioneering Granularity: Advancing Native Language Identification in Ultra-Short EAP Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01510100</link>
        <id>10.14569/IJACSA.2024.01510100</id>
        <doi>10.14569/IJACSA.2024.01510100</doi>
        <lastModDate>2024-10-28T14:12:57.2730000+00:00</lastModDate>
        
        <creator>Zhendong Du</creator>
        
        <creator>Kenji Hashimoto</creator>
        
        <subject>Native language identification; English for academic purposes; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This study addresses the challenge of Native Language Identification (NLI) in ultra-short English for Academic Purposes (EAP) texts by proposing an innovative two-stage recognition method. Conventional views suggest that ultra-short texts lack sufficient linguistic features for effective NLI. However, we have found that even in such brief texts, subtle linguistic cues—such as syntactic structures, lexical choices, and grammatical errors—can still reveal the author’s native language background. Our approach involves fine-tuning the granularity of first language (L1) labels and refining deep learning models to more accurately capture the subtle differences in second language (L2) English texts written by individuals from similar cultural backgrounds. To validate the effectiveness of this method, we designed and conducted a series of scientific experiments using advanced Natural Language Processing (NLP) techniques. The results demonstrate that models adjusted for granular L1 distinctions exhibit greater sensitivity and accuracy in identifying language variations caused by nuanced cultural differences. Furthermore, this method is not only applicable to ultra-short texts but can also be extended to texts of varying lengths, offering new perspectives and tools for handling diverse language inputs. By integrating in-depth linguistic analysis with advanced computational techniques, our research opens up new possibilities for enhancing the performance and adaptability of NLI models in complex linguistic environments. It also provides fresh insights for future efforts aimed at optimizing the capture of linguistic features.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_100-Pioneering_Granularity_Advancing_Native_Language_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Approach to Identify Promising Mountain Hiking Destinations Using GIS and Remote Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151099</link>
        <id>10.14569/IJACSA.2024.0151099</id>
        <doi>10.14569/IJACSA.2024.0151099</doi>
        <lastModDate>2024-10-28T14:12:57.2570000+00:00</lastModDate>
        
        <creator>Lahbib Naimi</creator>
        
        <creator>Charaf Ouaddi</creator>
        
        <creator>Lamya Benaddi</creator>
        
        <creator>El Mahi Bouziane</creator>
        
        <creator>Abdeslam Jakimi</creator>
        
        <creator>Mohamed Manaouch</creator>
        
        <subject>Machine learning; mountain hiking; AI-based tourism; GIS; remote sensing; tourism; bagging algorithm; decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The objective of this study is to address the complex task of identifying optimal locations for mountain hiking sites in the Eastern High Atlas region of Morocco, considering topographical factors. The study assesses the effectiveness of a commonly used machine learning classifier (MLC) in mapping potential mountain hiking areas, which is crucial for promoting and enhancing tourism in the area. To begin with, an extensive inventory of 120 mountain hiking sites was conducted, and precise measurements of three topographical parameters were collected at each site. Subsequently, a machine learning algorithm called Bagging was employed to develop a predictive model. The model achieved a high performance, with an area under the curve (AUC) value of 0.93. The model effectively identified favorable areas, encompassing around 24% of the study region, which were predominantly located in the western part. These areas were characterized by mountainous terrain, shorter slopes, and higher altitudes. The research findings provide valuable guidance to decision-makers, offering a roadmap to enhance the discovery of mountain hiking sites in the region.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_99-Machine_Learning_Approach_to_Identify_Promising_Mountain_Hiking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Backbone Feature Enhancement and Decoder Improvement in HRNet for Semantic Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151098</link>
        <id>10.14569/IJACSA.2024.0151098</id>
        <doi>10.14569/IJACSA.2024.0151098</doi>
        <lastModDate>2024-10-28T14:12:57.2430000+00:00</lastModDate>
        
        <creator>HanLei Feng</creator>
        
        <creator>TieGang Zhong</creator>
        
        <subject>Semantic segmentation; HRNet; multi-branch deep strided convolution; axial attention mechanism; progressive fusion upsampling; multi-scale object adaptability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Addressing issues such as the tendency for small-scale objects to be lost, incomplete segmentation of large-scale objects, and overall low segmentation accuracy in existing semantic segmentation models, an improved HRNet network model is proposed. Firstly, by introducing multi-branch deep stripe convolutions, features of multi-scale objects are adaptively extracted using convolutional kernels of different sizes, which not only enhances the model’s ability to capture multi-scale objects but also strengthens its perception of the contextual environment. Secondly, to optimize the feature aggregation effect, the axial attention mechanism is adopted to aggregate image features along the x-axis and y-axis directions respectively, effectively capturing long-range dependencies within the global scope, and thus achieving precise positioning of objects of interest in the feature map. Finally, by implementing the progressive fusion-based upsampling strategy, it facilitates the complementary fusion of semantic information and detailed information between adjacent feature maps, thereby enhancing the model’s capability to restore fine-grained details in images. Experimental results demonstrate that on the PASCAL VOC2012+SBD dataset, the mean Intersection over Union (mIoU) of the improved HRNet S model in segmenting lower-resolution images is increased by 1.54% compared to the baseline method. Meanwhile, the improved HRNet L model achieved a 3.05% increase in mIoU compared to the original model when handling higher-resolution image segmentation tasks on the Cityscapes dataset, and attained the highest segmentation accuracy in 15 out of the 19 different scale classification categories on this dataset. These results indicate that the proposed method not only exhibits high segmentation accuracy but also possesses strong adaptability to multi-scale objects.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_98-Backbone_Feature_Enhancement_and_Decoder_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature Map Adversarial Attack Against Vision Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151097</link>
        <id>10.14569/IJACSA.2024.0151097</id>
        <doi>10.14569/IJACSA.2024.0151097</doi>
        <lastModDate>2024-10-28T14:12:57.2270000+00:00</lastModDate>
        
        <creator>Majed Altoub</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <creator>Fahad AlQurashi</creator>
        
        <creator>Saad Alqahtany</creator>
        
        <creator>Bassma Alsulami</creator>
        
        <subject>Vision transformers; adversarial attacks; DNNs; vulnerabilities; feature maps; perturbations; spatial domains; frequency domains</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Image classification is a domain where Deep Neural Networks (DNNs) have demonstrated remarkable achievements. Recently, Vision Transformers (ViTs) have shown potential in handling large-scale image classification challenges by efficiently scaling to higher resolutions and accommodating larger input sizes compared to traditional Convolutional Neural Networks (CNNs). However, in the context of adversarial attacks, ViTs are still considered vulnerable. Feature maps serve as the foundation for representing and extracting meaningful information from images. While CNNs excel at capturing local features and spatial relationships, ViTs are better at understanding global context and long-range dependencies. This paper proposes a feature map ViT-specific adversarial example attack called the Feature Map ViT-specific Attack (FMViTA). The objective of the investigation is to generate adversarial perturbations in the spatial and frequency domains of the image representation that allow a deeper distance measurement between perturbed and targeted images. The experiments focus on a ViT pre-trained model that is fine-tuned on the ImageNet dataset. The proposed attack demonstrates the vulnerability of ViTs to adversarial examples by showing that even allowing only a 0.02 maximum perturbation magnitude to be added to the input samples yields a 100% attack success rate.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_97-A_Feature_Map_Adversarial_Attack_Against_Vision_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart X-Ray Geiger Data Logger: An Integrated System for Detection, Control, and Dose Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151096</link>
        <id>10.14569/IJACSA.2024.0151096</id>
        <doi>10.14569/IJACSA.2024.0151096</doi>
        <lastModDate>2024-10-28T14:12:57.2270000+00:00</lastModDate>
        
        <creator>Lhoucine Ben Youssef</creator>
        
        <creator>Abdelmajid Bybi</creator>
        
        <creator>Hilal Drissi</creator>
        
        <creator>El Ayachi Chater</creator>
        
        <subject>X-rays; radiation dose; radiation safety; exposure risk assessment; Geiger-M&#252;ller tube; medical imaging; real-time monitoring; smart devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>X-ray dosimetry practices are guided by international standards and regulatory agencies to ensure the safety of patients, radiation workers, and the general public. This paper introduces the Smart X-ray Geiger Data Logger, a comprehensive system designed to enhance radiation safety through integrated detection, control, and dose evaluation. This study is based on the M4011 Geiger-M&#252;ller tube, exploiting ionization effects to measure radiation doses accurately. The system features an advanced algorithm for real-time exposure risk assessment, ensuring adherence to safety limits during medical procedures. Equipped with Wi-Fi connectivity, the device facilitates seamless data transmission and integration with centralized databases for comprehensive exposure monitoring and historical data analysis. The MQTT protocol is utilized for secure and efficient data transmission, ensuring the protection of sensitive information. A user-friendly interface provides instant feedback on radiation levels, cumulative doses, and procedural safety, supported by visual indicators and auditory alarms for immediate alerts. Experimental validation demonstrates the system&#39;s reliability in various settings, confirming its utility in optimizing radiation protection strategies and fostering safer environments in the healthcare field.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_96-Smart_X_ray_Geiger_Data_Logger.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Traffic Light and Road Sign Detection and Recognition Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151095</link>
        <id>10.14569/IJACSA.2024.0151095</id>
        <doi>10.14569/IJACSA.2024.0151095</doi>
        <lastModDate>2024-10-28T14:12:57.2100000+00:00</lastModDate>
        
        <creator>Joseph M. De Guia</creator>
        
        <creator>Madhavi Deveraj</creator>
        
        <subject>Artificial intelligence; autonomous vehicle; traffic light recognition; road sign detection; YOLO; real-time object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Traffic light and road sign violations significantly contribute to traffic accidents, particularly at intersections in high-density urban areas. To address these challenges, this research focuses on enhancing the accuracy, robustness, and reliability of Autonomous Vehicle (AV) perception systems using advanced deep learning techniques. The novelty of this study lies in the comprehensive development and evaluation of real-time traffic light and road sign detection systems, comparing state-of-the-art models including YOLOv3, YOLOv5, and YOLOv7. The models were rigorously tested in a controlled offline environment using the Nvidia Titan RTX, followed by extensive field testing on an AV test vehicle equipped with a sensor suite and an Nvidia RTX GPU. The testing was conducted across complex urban driving scenarios at the CETRAN proving test track, JTC Cleantech Park, and the NTU Singapore campus. The traffic light detection and recognition (TLR) results demonstrate that YOLOv7 outperforms YOLOv5 and YOLOv3, achieving a mean Average Precision (mAP@0.5) of 93%, even under challenging conditions like poor lighting and occlusions, while traffic road sign detection (TSD) achieved an mAP@0.5 of 96%. This superior performance highlights the potential of YOLOv7 in enhancing AV safety and reliability. The conclusions underscore the effectiveness of YOLOv7 for real-time detection in AV perception systems, offering crucial insights for future research. Potential implications include the development of more robust and accurate AV systems, capable of safely navigating complex urban environments.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_95-Development_of_Traffic_Light_and_Road_Sign_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Neural Networks and Dominant Set Algorithms for Energy-Efficient Internet of Things Environments: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151094</link>
        <id>10.14569/IJACSA.2024.0151094</id>
        <doi>10.14569/IJACSA.2024.0151094</doi>
        <lastModDate>2024-10-28T14:12:57.1930000+00:00</lastModDate>
        
        <creator>Dezhi Liao</creator>
        
        <creator>Xueming Huang</creator>
        
        <subject>Internet of Things; energy efficiency; dominant set; Graph Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The widespread usage of Internet of Things (IoT) devices opens up new opportunities for automated operations, monitoring, and communications across various industries. However, extending the lifespan of IoT networks remains crucial because IoT devices are energy-limited. This study investigates the convergence of Graph Neural Networks (GNNs) and dominant set algorithms to extend the longevity of IoT networks. GNNs are neural networks that capture complex relationships and node interactions based on graph-structured data. With these capabilities, GNNs are extremely effective at modeling IoT network dynamics, where devices are connected and their interactions have a significant impact on performance. In contrast, dominant set algorithms designate a subset of network nodes to act as agents or leaders, enabling resource-efficient, distributed communication. The review then surveys existing techniques to describe GNNs&#39; role in optimizing dominant set algorithms and discusses how integrating these technologies addresses energy-efficiency challenges in IoT settings.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_94-Graph_Neural_Networks_and_Dominant_Set_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Computer Vision Algorithm Based on Fusion Twin Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151093</link>
        <id>10.14569/IJACSA.2024.0151093</id>
        <doi>10.14569/IJACSA.2024.0151093</doi>
        <lastModDate>2024-10-28T14:12:57.1800000+00:00</lastModDate>
        
        <creator>Xin Wang</creator>
        
        <subject>Visual tracking; twin network; integration; attention mechanism; self-adaption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Deep learning technology has promoted the rapid development of visual object tracking, among which algorithms based on twin networks are a hot research direction. Although this method has broad application prospects, its performance is often greatly reduced when encountering target occlusion or similar objects in the background. In response to this issue, a method is proposed to integrate channel and spatial dimension attention mechanisms into the backbone architecture of twin networks, to optimize the algorithm&#39;s recognition accuracy for tracking targets and its stability in changing environments. Then, a region recommendation network based on adaptive anchor box generation is adopted, combined with twin networks to enhance the network&#39;s modeling ability for complex situations. Finally, a new visual tracking algorithm is designed. In comparative experiments, the success rate of the former improvement increased by 0.6% and 0.9% on the two datasets respectively, and its accuracy also increased by 1.2% and 1.8% accordingly. The success rate of the latter increased by 1.5% and 1.2% on the two datasets respectively, and its accuracy also increased by 1.2% and 0.6% respectively. These results show that the improved algorithm can enhance target tracking performance and has application potential in visual object tracking.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_93-Tracking_Computer_Vision_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balancing Privacy and Performance: Exploring Encryption and Quantization in Content-Based Image Retrieval Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151092</link>
        <id>10.14569/IJACSA.2024.0151092</id>
        <doi>10.14569/IJACSA.2024.0151092</doi>
        <lastModDate>2024-10-28T14:12:57.1630000+00:00</lastModDate>
        
        <creator>Mohamed Jafar Sadik</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Ezak Fadzrin Bin Ahmad</creator>
        
        <subject>Content-Based Image Retrieval (CBIR); Convolutional Neural Networks (CNN); encrypted data; feature extraction; Fully Homomorphic Encryption (FHE); medical imaging; privacy; quantization; retrieval accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This paper presents three significant contributions to the field of privacy-preserving Content-Based Image Retrieval (CBIR) systems for medical imaging. First, we introduce a novel framework that integrates VGG-16 Convolutional Neural Network with a multi-tiered encryption scheme specifically designed for medical image security. Second, we propose an innovative approach to model optimization through three distinct quantization methods (max, 99% percentile, and KL divergence), which significantly reduces computational overhead while maintaining retrieval accuracy. Third, we provide comprehensive empirical evidence demonstrating the framework&#39;s effectiveness across multiple medical imaging modalities, achieving 94.6% accuracy with 99% percentile quantization while maintaining privacy through encryption. Our experimental results, conducted on a dataset of 1,200 medical images across three anatomical categories (lung, brain, and bone), show that our approach successfully balances the competing demands of privacy preservation, computational efficiency, and retrieval accuracy. This work represents a significant advancement in making secure CBIR systems practically deployable in resource-constrained healthcare environments.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_92-Balancing_Privacy_and_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Self-Localization and Mapping for Autonomous Navigation of Mobile Robots in Unknown Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151090</link>
        <id>10.14569/IJACSA.2024.0151090</id>
        <doi>10.14569/IJACSA.2024.0151090</doi>
        <lastModDate>2024-10-28T14:12:57.1470000+00:00</lastModDate>
        
        <creator>Serik Tolenov</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <subject>Robotic platforms; automation technology; LiDAR navigation; system integration; artificial intelligence; Internet of Things; human-robot interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This paper delves into the progressive design and operational capabilities of advanced robotic platforms, highlighting their adaptability, precision, and utility in diverse industrial settings. Anchored by a robust modular design, these platforms integrate sophisticated sensor arrays, including LiDAR for enhanced spatial navigation, and articulated limbs for complex maneuverability, reflecting significant advancements in automation technology. We examine the architectural intricacies and technological integrations that enable these robots to perform a wide range of tasks, from material handling to intricate assembly operations. Through a detailed analysis of system configurations, we assess the implications of such technologies on efficiency and customization in automated processes. Furthermore, the paper discusses the challenges associated with the deployment of advanced robotics, including the complexities of system integration, maintenance, and the steep learning curve for operational proficiency. We also explore future directions in robotic development, emphasizing the potential integration with emerging technologies such as artificial intelligence, the Internet of Things, and augmented reality, which promise to elevate autonomous decision-making and improve human-robot interaction. This comprehensive review aims to provide insights into the current capabilities and future prospects of robotic systems, offering a perspective on how ongoing innovations may reshape industrial practices, enhance operational efficiency, and redefine the landscape of automation technology.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_90-Real_Time_Self_Localization_and_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing LSTM-Based Model with Ant-Lion Algorithm for Improving Thyroid Prognosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151091</link>
        <id>10.14569/IJACSA.2024.0151091</id>
        <doi>10.14569/IJACSA.2024.0151091</doi>
        <lastModDate>2024-10-28T14:12:57.1470000+00:00</lastModDate>
        
        <creator>Maria Yousef</creator>
        
        <subject>Thyroid disease; LSTM; ALO; prediction model; optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In the healthcare sector, early and accurate disease detection is essential for providing appropriate care on time. This is especially crucial in thyroid problems, which can be difficult to diagnose because of their many symptoms. This study aims to propose a new thyroid disease prediction model by utilizing the Ant Lion Optimization (ALO) approach to enhance the hyperparameters of the Long Short-Term Memory (LSTM) deep learning algorithm. To achieve this, after the preprocessing step, we utilize the entropy technique for feature selection, which selects the most important features as an optimal subset of features. The ALO is then employed to optimize the LSTM, identifying the optimal hyperparameters that can influence the model and enhance its efficiency. To assess the suggested methodology, we chose the widely used thyroid disease dataset. This dataset contains 9,172 samples and 31 features. A set of criteria was used to evaluate the model’s performance, including accuracy, precision, recall, and F1 score. The experimental results showed that: 1) the entropy technique in the feature selection step can reduce the total number of features from 31 to 10; 2) the recommended strategy, which selected the optimal hyperparameters for the LSTM using the ALO algorithm, improved the classifier&#39;s overall performance by 7.2% and produced the highest accuracy of 98.6%.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_91-Optimizing_LSTM_Based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing VGG-19’s Bias in Facial Beauty Prediction: Preference for Feminine Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151089</link>
        <id>10.14569/IJACSA.2024.0151089</id>
        <doi>10.14569/IJACSA.2024.0151089</doi>
        <lastModDate>2024-10-28T14:12:57.1330000+00:00</lastModDate>
        
        <creator>Nuno Fernandes</creator>
        
        <creator>Sandra Soares</creator>
        
        <creator>Joana Arantes</creator>
        
        <subject>Deep learning; facial attractiveness; sexual dimorphism; symmetry; VGG-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>From an evolutionary perspective, sexual dimorphism has been linked to perceived attractiveness, with masculine traits preferred in men and feminine traits in women. Moreover, symmetry is a strong predictor of facial attractiveness across both sexes. Recent advancements in the field of artificial intelligence have enabled algorithms to accurately predict facial attractiveness. This study aims to investigate whether these algorithms accurately replicate human judgments of attractiveness. We hypothesized that sexually dimorphic manipulations (masculinized men and feminized women) (H1), as well as symmetrized versions (H2), would elicit higher attractiveness ratings from a facial beauty prediction algorithm. Employing transfer learning, we trained six deep-learning models using four facial databases with attractiveness ratings (n = 6848). The top-performing model, VGG-19, demonstrated a high prediction correlation of .86 on the test set. Surprisingly, our findings revealed an interaction effect between sex and sexual dimorphism. Feminized versions of both men’s and women’s faces obtained higher attractiveness ratings than their masculinized counterparts. For symmetry, our results indicated that symmetrized faces were perceived as more attractive, albeit exclusively among women. These findings offer novel insights into the understanding of facial attractiveness from both algorithmic and human behavioral perspectives.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_89-Analyzing_VGG_19s_Bias_in_Facial_Beauty_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Precision Machining of Hard-to-Cut Materials: Current Status and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151088</link>
        <id>10.14569/IJACSA.2024.0151088</id>
        <doi>10.14569/IJACSA.2024.0151088</doi>
        <lastModDate>2024-10-28T14:12:57.1170000+00:00</lastModDate>
        
        <creator>Tengjiao CUI</creator>
        
        <subject>Precision machining; hard-to-cut materials; cutting tools; machining processes; emerging technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Machining difficult materials such as superalloys, ceramics, and composites is fundamental in industries where performance is paramount, such as the automotive, aerospace, and medical sectors. The high strength, hardness, and high-temperature capability of these materials make them challenging to machine, calling for improved precision machining technologies. This survey presents a detailed review of the current state of the art in precision machining of these difficult materials, covering advances in cutting tools, machining techniques, and emerging technologies. Tool materials ranging from carbides and ceramics to superhard tools are examined, along with tool geometry and tool coatings. The article also discusses traditional and nontraditional machining processes, including turning, milling, electrical discharge machining, and laser machining, as well as their relation to additive and hybrid manufacturing. The role of emerging digital and intelligent manufacturing systems in enhancing machining accuracy and productivity is also illustrated. Future research will aim to minimize tool wear, enhance surface finish and integrity, and advance environmentally conscious machining. The paper concludes on a hopeful note regarding the potential of future research to revolutionize the precision machining industry, offering high performance and reliability in critical applications while maintaining a focus on sustainability.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_88-Precision_Machining_of_Hard_to_Cut_Materials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Analysis of Obstacle Crossing Stability for Transmission Line Inspection Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151087</link>
        <id>10.14569/IJACSA.2024.0151087</id>
        <doi>10.14569/IJACSA.2024.0151087</doi>
        <lastModDate>2024-10-28T14:12:57.1000000+00:00</lastModDate>
        
        <creator>Qianli Wang</creator>
        
        <subject>Transmission line; inspection robot; obstacle crossing path; kinematic analysis; bidirectional fast expanding random tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>As an indispensable energy source in production and daily life, electricity plays an important role in the operation of society and economic development. As the hub of power transmission, the safety of transmission lines is tied to the stability of the power grid, and their regular inspection is an effective measure to ensure the stability of the power system. Inspection robots are used for regular inspection of transmission lines due to advantages such as low cost and long running time. To achieve collision-free obstacle crossing, the study proposes an obstacle path planning algorithm based on an improved bidirectional fast-expanding random tree, built on a kinematic analysis of the robot. According to the experimental results, when crossing the damper, the rotation ranges of the 1st claw/arm and the 2nd bracket/arm/claw were (0&#176;~22&#176;), (-50&#176;, 10&#176;), (0&#176;, 25&#176;), (-50&#176;, 10&#176;), and (0&#176;, 22&#176;), respectively. The corresponding rotational speeds were (-1.5~1.5) deg/s, (-3, 2.5) deg/s, (-3.5~3.5) deg/s, (-2.5, 3) deg/s, and (-27, 2) deg/s, respectively. The expansion and contraction ranges of the upper, middle, lower, and horizontal push rods were (0, 100) mm, (0, 110) mm, (-60, 10) mm, and (0, 20) mm, respectively. These results show that when crossing obstacles, the motion acceleration of the inspection robot is not significant and its speed changes smoothly. The proposed obstacle crossing path planning algorithm can thus achieve stable motion of the inspection robot.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_87-Simulation_Analysis_of_Obstacle_Crossing_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>K-Means and Morphology Based Feature Element Extraction Technique for Clothing Patterns and Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151086</link>
        <id>10.14569/IJACSA.2024.0151086</id>
        <doi>10.14569/IJACSA.2024.0151086</doi>
        <lastModDate>2024-10-28T14:12:57.0870000+00:00</lastModDate>
        
        <creator>Xiaojia Ding</creator>
        
        <subject>K-means; morphological algorithm; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In clothing design and production, traditional manual feature element extraction suffers from low efficiency and insufficient precision, making it difficult to meet the automation and intelligence needs of the modern clothing industry. To address this problem, this paper proposes a technique that combines the K-means clustering algorithm with morphological methods to extract clothing pattern and line feature elements. The technique first preprocesses clothing images with K-means clustering to extract pattern feature elements, then applies morphological methods to extract line feature elements. It not only improves the accuracy and efficiency of feature element extraction but also preserves the details of clothing images, providing strong support for automated and intelligent processing in clothing design and production.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_86-K_Means_and_Morphology_Based_Feature_Element.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Approaches Applied in Smart Agriculture for the Prediction of Agricultural Yields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151085</link>
        <id>10.14569/IJACSA.2024.0151085</id>
        <doi>10.14569/IJACSA.2024.0151085</doi>
        <lastModDate>2024-10-28T14:12:57.0700000+00:00</lastModDate>
        
        <creator>Abourabia. Imade</creator>
        
        <creator>Ounacer. Soumaya</creator>
        
        <creator>Elghoumari. Mohammed yassine</creator>
        
        <creator>Azzouazi. Mohamed</creator>
        
        <subject>Machine learning; IOT; artificial intelligence; agricultural yields; smart agriculture; CNN; ViT-B16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Machine learning techniques in smart agriculture for yield prediction use algorithms to analyze historical and real-time data to forecast crop yields, aiming to optimize agricultural practices, improve resource efficiency, and enhance productivity. This paper reviews the application of such techniques for predicting agricultural yields. With the advent of data-driven technologies, machine learning algorithms have become instrumental in analyzing vast amounts of agricultural data to forecast crop yields accurately. Various models, including regression, classification, and ensemble methods, have been employed to process historical and real-time data on weather patterns, soil conditions, crop types, and farming practices. These models enable farmers and stakeholders to make informed decisions, optimize resource allocation, and mitigate risks associated with agricultural production. Furthermore, the integration of Internet of Things devices and remote sensing technologies has facilitated data collection and improved the precision of yield predictions. The paper discusses the key machine learning approaches, challenges, and future directions in leveraging data analytics for enhancing agricultural productivity and sustainability in smart farming systems. The study found that different machine learning techniques had varying accuracy for predicting agricultural yields: ViT-B16 achieved the highest F1-score (99.40%), followed by ResNet-50 (99.54%) and CNN (97.70%), while RPN algorithms had lower accuracy (91.83%). Correlation analysis showed a strong positive relationship between humidity and soil moisture, favoring crop growth, while production had minimal correlation with temperature and area. The AdaBoost Regressor was the best performer, with the lowest MAE (0.22), MSE (0.1), and RMSE (0.31), and Random Forest showed strong predictive power with an R2 score of 0.89. Seasonal data indicated that autumn had the highest agricultural production, followed by spring, while summer and winter had much lower yields due to weather conditions. Seasonal temperature variations from 1997 to 2014 showed autumn was the warmest (34.43&#176;C), boosting crop production, and winter the coldest (34.31&#176;C), reducing yields. These temperature shifts significantly impacted agricultural productivity, with warm seasons enhancing growth and extreme temperatures in summer and winter limiting it. Despite these advances, challenges persist, including data quality assurance, model complexity, scalability, and interoperability, driving ongoing research and simulations to validate and improve machine learning applications for sustainable and productive smart farming systems.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_85-Machine_Learning_Approaches_Applied.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart IoT System for Enhancing Safety in School Bus Transportation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151084</link>
        <id>10.14569/IJACSA.2024.0151084</id>
        <doi>10.14569/IJACSA.2024.0151084</doi>
        <lastModDate>2024-10-28T14:12:57.0530000+00:00</lastModDate>
        
        <creator>Yousef H. Alfaifi</creator>
        
        <creator>Tareq Alhmiedat</creator>
        
        <creator>Emad Alharbi</creator>
        
        <creator>Ahad Awadh Al Grais</creator>
        
        <creator>Maha Altalk</creator>
        
        <creator>Abdelrahman Osman Elfaki</creator>
        
        <subject>IoT; safety; RFID; school bus; transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>School districts globally implement comprehensive and expensive strategies to offer safe bus transportation to and from school. However, these technologies are unfeasible for schools with limited financial resources, leaving students at risk of serious injury. This study focuses on five major obstacles to safe school transportation: 1) students forgotten unattended on the bus; 2) students&#39; abnormal behavior; 3) overcrowding; 4) abnormal driver behavior; and 5) the risk of a bus running over children after they have disembarked. This paper develops an intelligent Internet of Things system, incorporating rule-based and mathematical solutions, to overcome these five safety issues in student buses and enhance student safety, using a bracelet system, short- and long-range RFID sensors, and a processing unit to monitor the bus and its surrounding area. The proposed solution surpasses previous works in the same field: it is distinguished by its comprehensiveness and reasonable cost, making it affordable and easy to both install and maintain.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_84-A_Smart_IoT_System_for_Enhancing_Safety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A DECOC-Based Classifier for Analyzing Emotional Expressions in Emoji Usage on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151083</link>
        <id>10.14569/IJACSA.2024.0151083</id>
        <doi>10.14569/IJACSA.2024.0151083</doi>
        <lastModDate>2024-10-28T14:12:57.0400000+00:00</lastModDate>
        
        <creator>Shaya A. Alshaya</creator>
        
        <subject>Component; emojis; social media communication; whatsapp; emotional expression; machine learning; DECOC classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In today&#39;s digital era, social media has profoundly transformed communication, enabling new forms of emotional expression through various tools, particularly emojis. Initially created to represent simple emotions, emojis have evolved into a rich and nuanced visual language capable of conveying complex emotional states. While their role in communication is well-documented, there remains a gap in effectively analyzing and interpreting the emotional subtleties conveyed through emojis. This paper presents an innovative approach to sentiment analysis that goes beyond conventional methods by integrating a machine learning model, specifically the DECOC (Error Correcting Output Codes) classifier, tailored for the combined analysis of text and emoji sequences. The proposed model addresses the limitations of existing methods, which often overlook the sequential and contextual nature of emojis in emotional expression. By applying this model to real-world data, including a survey of social media users in Saudi Arabia, we demonstrate its high efficacy, achieving an average accuracy of 94.76%. This result not only outperforms prior models but also validates the significance of treating emojis as fundamental components of digital sentiment analysis. Our findings underscore the critical need for advanced models to decode the emotional layers of emoji usage, offering deeper insights into their role in contemporary digital communication.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_83-A_DECOC_Based_Classifier_for_Analyzing_Emotional_Expressions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Volleyball Motion Analysis Model Based on GCN and Cross-View 3D Posture Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151082</link>
        <id>10.14569/IJACSA.2024.0151082</id>
        <doi>10.14569/IJACSA.2024.0151082</doi>
        <lastModDate>2024-10-28T14:12:57.0230000+00:00</lastModDate>
        
        <creator>Hongsi Han</creator>
        
        <creator>Jinming Chang</creator>
        
        <subject>Graphical Convolutional Neural Network; posture estimation; volleyball; motion analysis model; 3D tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The tracking of motion targets occupies a central position in sports video analysis. To further understand athletes&#39; movements, analyze game strategies, and evaluate sports performance, a 3D posture estimation and tracking model is designed based on a Graphical Convolutional Neural Network and the concept of &quot;cross-vision&quot;. The outcomes revealed that the loss function curve of the proposed 3D tracking model had the fastest convergence, with a minimum convergence value of 0.02. The mean average precision values on four different publicly available datasets were above 0.90, the maximum improvement reached 21.06%, and the minimum mean absolute percentage error was 0.153. The higher-order tracking accuracy of the model reached 0.982, association intersection over union was 0.979, and association accuracy and detection accuracy were 0.970 and 0.965, respectively. During volleyball video analysis, tracking accuracy and tracking precision reached 89.53% and 90.05%, respectively, with a tracking speed of 33.42 fps. Meanwhile, the method&#39;s trajectory tracking completeness was always maintained at a high level, with its posture estimation correctness reaching 0.979. The mostly-tracked and mostly-lost metrics confirmed the method&#39;s tracking ability over long durations and in cross-view conditions, with high model robustness. This study helps to promote the development and application of related technologies, advance the intelligent development of volleyball in training, competition, and analysis, and improve the efficiency and competitive level of the sport.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_82-Volleyball_Motion_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimising Delivery Routes Under Real-World Constraints: A Comparative Study of Ant Colony, Particle Swarm and Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151081</link>
        <id>10.14569/IJACSA.2024.0151081</id>
        <doi>10.14569/IJACSA.2024.0151081</doi>
        <lastModDate>2024-10-28T14:12:57.0070000+00:00</lastModDate>
        
        <creator>Rneem I. Aldoraibi</creator>
        
        <creator>Fatimah Alanazi</creator>
        
        <creator>Haya Alaskar</creator>
        
        <creator>Abed Alanazi</creator>
        
        <subject>Evolutionary algorithms; genetic algorithm; particle swarm optimisation; ant colony optimisation; urban logistics; route optimisation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Effective logistics systems are essential for fast and economical package delivery, especially in urban areas. The intricate and ever-changing nature of urban logistics makes traditional methods insufficient; hence, the need for sophisticated optimisation techniques has increased. To optimise package delivery routes, this study compares the performance of three popular evolutionary algorithms: ant colony optimisation (ACO), particle swarm optimisation (PSO), and genetic algorithms (GA). The goal is to find the best algorithm for minimising delivery time and cost while taking into account real-world constraints such as delivery priority, which guarantees that higher-priority deliveries are served before others and may substantially impact route optimisation. We examine each algorithm&#39;s ability to create the best possible route plans for delivery trucks using actual data. Several factors are employed to assess each algorithm&#39;s performance, including robustness to changes in environmental variables and computational efficiency; the simulation models delivery demands using actual data. Results indicate that ACO performed better in Los Angeles and Chicago, completing the shortest routes with respective distances of 126,254.18 and 59,214.68, indicating a high degree of flexibility in intricate urban layouts. GA, on the other hand, achieved good results with the best distance of 48,403.1 in New York, demonstrating its usefulness in crowded urban settings. These results highlight how incorporating evolutionary algorithms into urban logistics can improve sustainability and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_81-Optimising_Delivery_Routes_Under_Real_World_Constraints.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Segmentation of Magnetic Resonance Imaging (MRI) Images Using Deep Neural Network Driven Unmodified and Modified U-Net Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151079</link>
        <id>10.14569/IJACSA.2024.0151079</id>
        <doi>10.14569/IJACSA.2024.0151079</doi>
        <lastModDate>2024-10-28T14:12:56.9770000+00:00</lastModDate>
        
        <creator>Nunik Destria Arianti</creator>
        
        <creator>Azah Kamilah Muda</creator>
        
        <subject>Accuracy; brain tumor; DNN; U-Net architecture; comparison performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Accurately separating healthy tissue from tumorous regions is crucial for effective diagnosis and treatment planning based on magnetic resonance imaging (MRI) data. Current manual detection methods rely heavily on human expertise, so MRI-based segmentation is essential to improving diagnostic accuracy and treatment outcomes. The purpose of this paper is to compare the performance of brain tumor segmentation from MRI images using an unmodified and a modified deep neural network (DNN)-driven U-Net architecture, where the modification adds batch normalization and dropout to the encoder layers, with and without a freeze layer. The study utilizes a public 2D brain tumor dataset containing 3064 T1-weighted contrast-enhanced images of meningioma, glioma, and pituitary tumors. Model performance was evaluated using intersection over union (IoU) and standard metrics such as precision, recall, F1-score, and accuracy across the training, validation, and testing stages. Statistical analysis, including ANOVA and Duncan&#39;s multiple range test, was conducted to determine the significance of performance differences across the architectures. Results indicate that while the modified architectures show improved stability and convergence, the freeze-layer model demonstrated superior IoU and efficiency, making it a promising approach for more accurate and efficient brain tumor segmentation. The comparison of the three methods revealed that the modified U-Net architecture with a freeze layer significantly reduced training time by 81.72% compared to the unmodified U-Net while maintaining similar performance across the validation and testing stages. All three methods showed comparable accuracy and consistency, with no significant differences in performance during validation and testing.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_79-Brain_Tumor_Segmentation_of_Magnetic_Resonance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Landscape Design and Environmental Adaptability Analysis Based on Improved Fuzzy Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151080</link>
        <id>10.14569/IJACSA.2024.0151080</id>
        <doi>10.14569/IJACSA.2024.0151080</doi>
        <lastModDate>2024-10-28T14:12:56.9770000+00:00</lastModDate>
        
        <creator>Jinming Liu</creator>
        
        <creator>Qian Hu</creator>
        
        <creator>Pichai Sodbhiban</creator>
        
        <subject>Fuzzy control; indoor landscape design; environment; adaptability analysis; robot assisted</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the increasing demand for automation and intelligence in indoor landscape design, exploring efficient and precise control strategies has become particularly important. Robot-assisted technology and the A* algorithm are utilized for indoor environment localization and mapping. Then, type-2 adaptive fuzzy control is applied for automatic indoor landscape design, and an improved genetic algorithm is utilized for environmental analysis to enhance the adaptability of the design to its environment. In the results, the robot adopting this algorithm was significantly better than ordinary robots in path planning optimization, with a fitting accuracy of over 95%. The type-2 fuzzy control model achieved a maximum speed of 0.75 m/s and an overshoot of only 7.1% for balancing robots, yielding a faster recovery speed and smaller overshoot. The proposed method performed best in terms of functionality, aesthetics, technicality, accessibility, and user satisfaction for landscape design effectiveness and environmental adaptability. The research improves the automation of indoor landscape design, while the combination of fuzzy control and genetic algorithms enhances design accuracy and environmental adaptability, providing a new technological path for indoor landscape design.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_80-Indoor_Landscape_Design_and_Environmental_Adaptability_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Science Research: Applying Integrated Fogg Persuasive Frameworks to Validate Rural ICT Design Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151078</link>
        <id>10.14569/IJACSA.2024.0151078</id>
        <doi>10.14569/IJACSA.2024.0151078</doi>
        <lastModDate>2024-10-28T14:12:56.9600000+00:00</lastModDate>
        
        <creator>Noela Jemutai Kipyegen</creator>
        
        <creator>Benard Okelo</creator>
        
        <subject>Digital equality; rural areas; design requirements; rural ICT; artifacts; validate; Fogg frameworks; design science research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Designing for digital equality is critical in the modern world. Digital inequality is more pronounced in rural areas, where the majority are illiterate and poor. As a result, individuals are not sufficiently motivated, enabled, and triggered to access and use Information and Communication Technologies. Additionally, existing rural ICT artifacts and applications are not usable by these demographics. Therefore, this paper sought to understand and validate rural ICT design requirements. It achieved this by developing a community learning system, applying the design science research methodology and integrated Fogg persuasive frameworks. Results show that the use of local language, local content, videos, audio, touch-based input, proper content categorization, accessibility (location), and peer participation and collaboration fosters user engagement with the ICT artifact. These approaches had a significant impact on the achievement of user self-efficacy: 71%, 83%, 70%, and 78% of users accomplished tasks 1, 2, 3, and 4, respectively, within the stipulated time and on their own. Users found the content to be practical and applicable to their day-to-day activities, and appreciated the system’s potential for learning, indicating that it could significantly enhance their knowledge and skills. The significance of design science research and integrated Fogg persuasive frameworks in creating usable and accessible ICT solutions tailored to the needs of the target population cannot be underrated. It was concluded that design solutions targeting vulnerable demographics are key to the success of designs for digital equality: solutions usable by the aged, women, the illiterate, the uneducated, and the poor are also usable by the young, men, the literate, the educated, and the financially stable, thus enhancing inclusivity in the access and use of rural ICTs.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_78-Design_Science_Research_Applying_Integrated_Fogg.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Optimization Management Scheme for Manufacturing Systems Based on BMAPPO: A Deep Reinforcement Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151077</link>
        <id>10.14569/IJACSA.2024.0151077</id>
        <doi>10.14569/IJACSA.2024.0151077</doi>
        <lastModDate>2024-10-28T14:12:56.9430000+00:00</lastModDate>
        
        <creator>Zhe Shao</creator>
        
        <subject>Microgrid; energy optimization management; deep reinforcement learning; multi-agent; Proximal Policy Optimization (PPO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>To address the depletion of traditional energy sources and the increasingly severe environmental pollution, countries around the world have accelerated the deployment of renewable energy generation equipment. Energy optimization management for microgrids can address the randomness of factors such as renewable energy generation and load, ensuring the safe and stable operation of the system while achieving objectives such as cost minimization. Therefore, this paper conducts an in-depth study of energy optimization management schemes for microgrids and designs a multi-microgrid energy optimization management model and algorithm based on deep reinforcement learning. For the joint optimization problem among multiple microgrids with power flow between them, a two-layer energy optimization management scheme based on the multi-agent proximal policy optimization (PPO) algorithm and optimal power flow (BMAPPO) is proposed. This scheme is divided into two layers: first, the lower layer uses the multi-agent proximal policy optimization algorithm to determine the output of various controllable power devices in each microgrid; then, based on the lower layer&#39;s optimization results, the upper layer uses a second-order cone relaxation optimal power flow model to solve the optimal power flow between multiple microgrids, achieving power scheduling among them; finally, the total cost of the upper and lower layers is calculated to update the network parameters. Experimental results show that compared with other schemes, the proposed scheme achieves multi-microgrid energy optimization management at the lowest cost while ensuring online execution speed.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_77-Energy_Optimization_Management_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Lattice Theory into the TLS to Ensure Secure Traffic Transmission in IP Networks Based on IP PBX Asterisk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151076</link>
        <id>10.14569/IJACSA.2024.0151076</id>
        <doi>10.14569/IJACSA.2024.0151076</doi>
        <lastModDate>2024-10-28T14:12:56.9300000+00:00</lastModDate>
        
        <creator>Olga Abramkina</creator>
        
        <creator>Mubarak Yakubova</creator>
        
        <creator>Tansaule Serikov</creator>
        
        <creator>Yenlik Begimbayeva</creator>
        
        <creator>Bakhodyr Yakubov</creator>
        
        <subject>IP; PBX; Asterisk; TLS; MITM; post-quantum cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This paper presents a novel lattice-based cryptography implementation in the Transport Layer Security (TLS) protocol to enhance the security of traffic transmission in IP networks that use the Asterisk IP PBX platform. Given the growing threat of quantum computing, traditional cryptographic methods are becoming increasingly vulnerable. To address this issue, the study leverages post-quantum cryptography by developing a modified TLS protocol using lattice-based cryptographic algorithms. The performance of the system was evaluated in terms of security, computational efficiency, and real-time communication. The study shows that the proposed lattice-based TLS implementation effectively secures traffic transmission in IP PBX networks, offering a robust solution against both current and future quantum threats.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_76-Implementation_of_Lattice_Theory_into_the_TLS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Contract Approach for Efficient Transportation Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151074</link>
        <id>10.14569/IJACSA.2024.0151074</id>
        <doi>10.14569/IJACSA.2024.0151074</doi>
        <lastModDate>2024-10-28T14:12:56.9130000+00:00</lastModDate>
        
        <creator>Abdullah Alshahrani</creator>
        
        <creator>Ayman Khedr</creator>
        
        <creator>Mohamed Belal</creator>
        
        <creator>Mohamed Saleh</creator>
        
        <subject>Blockchain; cryptography; logistics; smart contracts; transportation; security and privacy; supply chains</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Transportation management in Egypt faces challenges such as congestion, inefficiency, and a lack of transparency. This work proposes a smart contract-based transportation framework to address these issues and enhance the efficiency of Egypt&#39;s transportation system. By leveraging blockchain technology, smart contracts can facilitate and enforce decentralized and immutable transportation agreements. This approach also fosters increased trust among stakeholders and improves interactions between service providers. This paper presents a conceptual framework that integrates smart contracts, blockchain technology, GPS data, and sensor technologies to further optimize transportation operations. Empirical analysis and case studies demonstrate the effectiveness of smart contracts in improving the shipping registration system. The survey results show that smart contracts streamline processes, enhance data security, reduce costs, and improve accuracy. The proposed model, developed on the NEAR platform, outperforms traditional methods and Ethereum-based models by offering faster registration, better cost-efficiency, and improved transaction tracking. This demonstrates the potential for modernizing and optimizing Egypt’s transportation sector.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_74-A_Smart_Contract_Approach_for_Efficient_Transportation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Educational Outcomes Through AI Powered Learning Strategy Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151075</link>
        <id>10.14569/IJACSA.2024.0151075</id>
        <doi>10.14569/IJACSA.2024.0151075</doi>
        <lastModDate>2024-10-28T14:12:56.9130000+00:00</lastModDate>
        
        <creator>Daminda Herath</creator>
        
        <creator>Chanuka Dinuwan</creator>
        
        <creator>Charith Ihalagedara</creator>
        
        <creator>Thanuja Ambegoda</creator>
        
        <subject>Artificial intelligence; educational data mining; educational strategies; machine learning; personalized recommendation; student performance prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>To develop intelligent learning recommendation systems, this work examines the application of artificial intelligence (AI) techniques, particularly in the field of educational data mining (EDM). Aggregating such educational data into an efficient analytical system could also serve as an engaging means of education for students and could ultimately advance the direction of education. Sophisticated machine learning methods were employed to analyze various data types, including educational, socioeconomic, and demographic data, to predict student success. In this research, Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), CatBoost, and XGBoost algorithms were considered to build prediction models using a dataset encompassing a wide range of student traits. Robust evaluation metrics, including precision, recall, accuracy, and F1-score, were used to gauge model effectiveness. The results highlighted that RF performed best in terms of accuracy, precision, and recall. A rule engine was then built to enhance the system by finding the most efficient learning tactics for students based on their expected future performance. The proposed AI-based personalized recommendation tool represents a substantial step toward enhancing educational decisions. This solution facilitates educators in creating student academic assistance interventions by offering individualized, data-driven learning strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_75-Enhancing_Educational_Outcomes_Through_AI_Powered.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Influencing Factors of Tourist Attractions Accessibility Based on Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151073</link>
        <id>10.14569/IJACSA.2024.0151073</id>
        <doi>10.14569/IJACSA.2024.0151073</doi>
        <lastModDate>2024-10-28T14:12:56.8970000+00:00</lastModDate>
        
        <creator>Na Liu</creator>
        
        <creator>Hai Zhang</creator>
        
        <subject>Tourist attractions; factors; tourism; remora optimized adaptive XGBoost (RO-AXGBoost)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Tourist attractions, defined by their cultural importance, aesthetic appeal, and recreational possibilities, are critical to the tourism industry. However, precisely evaluating tourism needs remains a difficult task, and research in this field is scarce. This research introduces an innovative remora-optimized adaptive XGBoost (RO-AXGBoost) model for predicting accessibility factors for tourist attractions. Data was obtained from Kaggle, and the suggested method was executed in Python. The RO-AXGBoost model&#39;s effectiveness was assessed utilizing metrics like Mean Absolute Percentage Error (MAPE) of 7.24, Mean Absolute Error (MAE) of 7.321, Root Mean Square Error (RMSE) of 10.241, and R-squared (R&#178;) of 85.7%. The results show that the RO-AXGBoost model surpasses conventional approaches by effectively discovering important determinants that have an important impact on the accessibility of tourist attractions.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_73-Analysis_of_Influencing_Factors_of_Tourist_Attractions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constructing Knowledge Graph in Blockchain Teaching Program Using Formal Concept Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151072</link>
        <id>10.14569/IJACSA.2024.0151072</id>
        <doi>10.14569/IJACSA.2024.0151072</doi>
        <lastModDate>2024-10-28T14:12:56.8830000+00:00</lastModDate>
        
        <creator>Madina Mansurova</creator>
        
        <creator>Assel Ospan</creator>
        
        <creator>Dinara Zhaisanova</creator>
        
        <subject>Knowledge graph; formal concept analysis; blockchain education; curriculum optimization; interactive learning tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The rapid evolution of blockchain technology calls for innovative educational frameworks to effectively convey its complex principles and applications. This paper investigates the use of Formal Concept Analysis (FCA) for constructing knowledge graphs as part of a blockchain teaching program. FCA, grounded in lattice theory, provides a mathematical foundation for analyzing relationships between concepts, making it an ideal tool for organizing and visualizing knowledge structure within blockchain education. This study aims to develop an interactive, context-based graph that captures the intricate interrelations among blockchain topics. The methodology includes mapping key blockchain concepts and their applications into a structured graph, which enhances both the understanding and the systematic delivery of educational content. The research demonstrates that FCA not only facilitates the creation of scalable and adaptable educational materials but also enhances students&#39; conceptual understanding by presenting the interconnected nature of blockchain concepts in an accessible format. The knowledge graph aids in identifying interconnected learning outcomes that cover overlapping subjects. It serves as a valuable resource for educators focusing on cryptocurrencies, making it easier to create a thorough list of key topics related to particular cryptocurrency characteristics.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_72-Constructing_Knowledge_Graph_in_Blockchain_Teaching_Program.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strength Calculation Method of Agricultural Machinery Structure Using Finite Element Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151071</link>
        <id>10.14569/IJACSA.2024.0151071</id>
        <doi>10.14569/IJACSA.2024.0151071</doi>
        <lastModDate>2024-10-28T14:12:56.8670000+00:00</lastModDate>
        
        <creator>Jing Yang</creator>
        
        <subject>Agricultural machinery structure; 3 point cultivator with 7-Tynes; finite element analysis; strength calculation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Analyzing agricultural machinery strength through Finite Element Analysis (FEA) ensures robust design and performance. This method evaluates structural integrity, enhancing reliability and efficiency in agricultural operations. This paper presents a comprehensive finite element method (FEM) analysis focused on assessing the structural strength of a 3-point cultivator outfitted with seven tynes. Cultivators hold pivotal significance in soil preparation, a foundational aspect of agricultural operations. The principal aim of this analysis is to pinpoint potential failure zones within the cultivator tynes under diverse loading conditions, particularly across varying speeds in medium clay and sandy soil. Anecdotal evidence suggests that domestically manufactured cultivators often exhibit structural deficiencies leading to failures at multiple junctures after just one season of operation. To address this challenge, we constructed a detailed CAD model of the tyne using Siemens NX software. Subsequent FEM analysis, conducted via ANSYS software, facilitated the exploration of stress distributions and deformation characteristics. Our investigation unveiled the maximal and minimal principal stresses alongside total deformation experienced by the tynes. Notably, while the maximum stress approached the material&#39;s yield point, it consistently remained within acceptable thresholds, signifying that the resultant deformation did not induce failure. This study underscores the pivotal role of employing FEM analysis in both the design and assessment phases of agricultural machinery development, thereby augmenting durability and operational efficacy. Ultimately, such initiatives aim to furnish manufacturers with invaluable insights to bolster the structural integrity and longevity of cultivators, fostering enhanced reliability and operational efficiency within the agricultural sector.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_71-Strength_Calculation_Method_of_Agricultural_Machinery_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Translation of Auspicious Beliefs in Quanzhou Xi Culture from the Perspective of Man-Machine Collaboration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151070</link>
        <id>10.14569/IJACSA.2024.0151070</id>
        <doi>10.14569/IJACSA.2024.0151070</doi>
        <lastModDate>2024-10-28T14:12:56.8500000+00:00</lastModDate>
        
        <creator>Li Zheng</creator>
        
        <creator>Xu Zhang</creator>
        
        <creator>Huiling Guo</creator>
        
        <subject>Quanzhou Xi culture; symbol visualisation; shape grammar; artificial intelligence; man-machine collaborative design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The “Xi” concept in the inheritance of auspicious culture covers the abundance of spiritual and material life; its symbolism is gorgeous and timeless, having endured for thousands of years. Objective: This study investigates the Quanzhou “happiness” culture, which embodies “reverence for virtue and auspicious beliefs,” exploring its visual symbolization, graphical derivation, redesign, and innovative cultural expressions. Methods: Utilizing literature analysis, field research, and a combination of shape grammar and artificial intelligence, this study dissects and evolves the visual symbols of Quanzhou Xi culture to achieve innovative design through human-machine collaboration. Results: The study deeply refines representative visual symbols of Quanzhou happiness culture, including the “卍” character from Quanzhou embroidery, the Eight Immortals color for wedding happiness, and the longevity turtle cake stamp for longevity happiness. It analyzes and demonstrates the innovative practice of these visual symbols, establishes a folklore perspective, and transitions the happiness culture into a modern fashion context. Conclusion: The research constructs a visual symbol folklore perspective of Quanzhou Xi culture, providing a systematic theoretical foundation and innovative practice paths for promoting and inheriting Xi culture in the modern design field. It promotes Quanzhou Xi culture’s innovative application and fashion transformation in contemporary design.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_70-Visual_Translation_of_Auspicious_Beliefs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining BERT and CNN for Sentiment Analysis A Case Study on COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151069</link>
        <id>10.14569/IJACSA.2024.0151069</id>
        <doi>10.14569/IJACSA.2024.0151069</doi>
        <lastModDate>2024-10-28T14:12:56.8370000+00:00</lastModDate>
        
        <creator>Gunjan Kumar</creator>
        
        <creator>Renuka Agrawal</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Pravin Ramesh Gundalwar</creator>
        
        <creator>Aqsa kazi</creator>
        
        <creator>Pratyush Agrawal</creator>
        
        <creator>Manjusha Tomar</creator>
        
        <creator>Shailaja Salagrama</creator>
        
        <subject>Sentiment analysis; COVID-19; BERT; CNN; ensemble model; NLP; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This research focuses on sentiment analysis to understand public opinion on various topics, with an emphasis on COVID-19 discussions on Twitter. By utilizing state-of-the-art Machine Learning (ML) and Natural Language Processing (NLP) techniques, the study analyzes sentiment data to provide valuable insights. The process begins with data preparation, involving text cleaning and length filtering to optimize the dataset for analysis. Two models are employed: a Bidirectional Encoder Representations from Transformers (BERT)-based Deep Learning (DL) model and a Convolutional Neural Network (CNN). The BERT model leverages transfer learning, demonstrating strong performance in sentiment classification, while the CNN model excels at extracting contextual features from the input text. To further enhance accuracy, an ensemble model integrates predictions from both approaches. The study emphasizes the ensemble technique’s value for more precise sentiment analysis. Evaluation metrics, including accuracy, classification reports, and confusion matrices, validate the effectiveness of the proposed models and the ensemble approach. This research contributes to the growing field of social media sentiment analysis, particularly during global health crises like COVID-19, and underscores its potential to aid informed decision-making based on public sentiment.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_69-Combining_BERT_and_CNN_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Site Cross Calibration on the LAPAN-A3/IPB Satellite Multispectral Camera with One-Dimensional Kalman Filter Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151068</link>
        <id>10.14569/IJACSA.2024.0151068</id>
        <doi>10.14569/IJACSA.2024.0151068</doi>
        <lastModDate>2024-10-28T14:12:56.8200000+00:00</lastModDate>
        
        <creator>Sartika Salaswati</creator>
        
        <creator>Adhi Harmoko Saputro</creator>
        
        <creator>Wahyudi Hasbi</creator>
        
        <creator>Deddy El Amin</creator>
        
        <creator>Patria Rachman Hakim</creator>
        
        <creator>Silmie Vidiya Fani</creator>
        
        <creator>Agung Wahyudiono</creator>
        
        <creator>Ega Asti Anggari</creator>
        
        <subject>Cross-calibration; multi sites; multispectral camera; LAPAN-A3/IPB Satellite; LISA LAPAN-A3 (LA3); OLI LANDSAT-8 (OL8); MSI/SENTINEL-2 (MS2); one-dimensional Kalman filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Multispectral cameras on remote sensing satellites must have good radiometric quality due to their wide range of applications. One type of radiometric calibration that can be performed while the satellite is in orbit is cross-calibration. This research focuses on cross-calibration because it has advantages, including being cost-effective and capable of frequent execution. We proposed a multi-site cross-calibration method with two reference cameras using six calibration sites in 2023. The LISA LAPAN-A3 (LA3) camera serves as the target camera, while the OLI LANDSAT-8 (OL8) and MSI SENTINEL-2 (MS2) cameras act as the reference cameras. The calibration process results in numerous calibration coefficients for each channel, thus requiring optimization to produce a single calibration coefficient. The optimization process uses a one-dimensional Kalman filter to reduce measurement noise. The results show that the one-dimensional Kalman filter can reduce noise in the calibration coefficient data, making LA3 radiance values closer to the reference radiance values. Additionally, this study demonstrates that LA3 calibration results with MS2 as the reference camera are better than those with OL8 as the reference.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_68-Multi_Site_Cross_Calibration_on_the_LAPAN_A3IPB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Lightweight DeepSORT Variant for Vehicle Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151067</link>
        <id>10.14569/IJACSA.2024.0151067</id>
        <doi>10.14569/IJACSA.2024.0151067</doi>
        <lastModDate>2024-10-28T14:12:56.8030000+00:00</lastModDate>
        
        <creator>Ayoub El-alami</creator>
        
        <creator>Younes Nadir</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <subject>Distributed systems; intelligent transportation systems; edge computing; object tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Object tracking plays a pivotal role in Intelligent Transportation Systems (ITS), enabling applications such as traffic monitoring, congestion management, and enhancing road safety in urban environments. However, existing object tracking algorithms like DeepSORT are computationally intensive, which hinders their deployment on resource-constrained edge devices essential for distributed ITS solutions. Urban mobility challenges necessitate efficient and accurate vehicle tracking to ensure smooth traffic flow and reduce accidents. In this paper, we present a modified lightweight variant of the DeepSORT algorithm tailored for vehicle tracking in traffic surveillance systems. By leveraging multi-dimensional features extracted directly from YOLOv5 detections, our approach eliminates the need for an additional convolutional neural network (CNN) descriptor and reduces computational overhead. Experiments on real-world traffic surveillance data demonstrate that our method reduces tracking time to 25.29% of that required by DeepSORT, with only a minimal increase over the simpler SORT algorithm. Additionally, it maintains low error rates between 0.43% and 1.69% in challenging urban scenarios. Our lightweight solution facilitates efficient and accurate vehicle tracking on edge devices, contributing to more effective ITS deployments and improved road safety.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_67-A_Modified_Lightweight_DeepSORT_Variant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Individual Cow Identification Using Non-Fixed Point-of-View Images and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151066</link>
        <id>10.14569/IJACSA.2024.0151066</id>
        <doi>10.14569/IJACSA.2024.0151066</doi>
        <lastModDate>2024-10-28T14:12:56.7730000+00:00</lastModDate>
        
        <creator>Yordan Kalmukov</creator>
        
        <creator>Boris Evstatiev</creator>
        
        <creator>Seher Kadirova</creator>
        
        <subject>Cow identification; convolutional neural network; YOLOv3; non-fixed point-of-view</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Monitoring and traceability are crucial for ensuring efficient and financially beneficial cattle breeding in contemporary animal husbandry. While most farmers rely mainly on ear tags, the development of computer vision and machine learning methods has opened many new noninvasive opportunities for the identification, localization, and behavior recognition of cows. In this paper, a series of experimental analyses are presented aimed at investigating the possibility of identifying cows using non-fixed point-of-view images and deep learning. Fourteen animals were selected and a photo session was conducted for each one, providing training/validation images with different viewing angles of the animals. Next, a darknet-53-based convolutional neural network (CNN) was trained using YOLOv3, capable of identifying the investigated animals. The optimal model achieved 92.2% accuracy when photos of single or grouped non-overlapping animals were used. On the other hand, the trained CNN showed poor performance with group images containing overlapping cows. The obtained results showed that cows could be reliably recognized using non-fixed point-of-view images, which is the main novelty of this study; however, certain limitations exist in the usage scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_66-Individual_Cow_Identification_Using_Non_Fixed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Citespace Visualization Tool to Online Public Opinion Group Label: Generation, Dissemination and Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151065</link>
        <id>10.14569/IJACSA.2024.0151065</id>
        <doi>10.14569/IJACSA.2024.0151065</doi>
        <lastModDate>2024-10-28T14:12:56.7570000+00:00</lastModDate>
        
        <creator>Jingyi Ju</creator>
        
        <subject>Group labeling; online public opinion; labeled communication; communication mechanisms; visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the popularization of mobile Internet technology, the social and cultural environment provides favorable conditions for online news dissemination, making group-labeling communication events highly ubiquitous. This study explores the generation, dissemination, and evolution trends of group labeling in the online public opinion environment. We crawled 20975 initial literature records from the Web of Science core database, obtained 9834 valid records after several rounds of screening, and used CiteSpace 6.3 software to perform bibliometric and word-frequency analysis on these data. Successively applying literature co-citation, author co-citation, journal co-citation, keyword co-citation, and clustering analyses, we disassembled group labeling into its generation environment, dissemination process, and trend evolution. Drawing on the disciplinary perspective of journalism and communication, supported by social media platforms, the study summarizes and reveals the interaction of group labels between cyberspace and the natural world, and seeks to grasp the communication mechanism of this emerging discursive power.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_65-Application_Citespace_Visualization_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors of Microservices Architecture Implementation in the Information System Project</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151064</link>
        <id>10.14569/IJACSA.2024.0151064</id>
        <doi>10.14569/IJACSA.2024.0151064</doi>
        <lastModDate>2024-10-28T14:12:56.7430000+00:00</lastModDate>
        
        <creator>Mochamad Gani Amri</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <creator>Anita Nur Fitriani</creator>
        
        <creator>Nurman Rasyid Panusunan Hutasuhut</creator>
        
        <subject>Microservice architecture; software architecture; critical success factors; analytical hierarchy process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Microservice Architecture (MSA) promises enhancements in information systems, including improved performance, scalability, availability, and maintenance. However, challenges during the design, development, and operations phases can hinder successful deployment. This research presents a case study of one of the leading telecommunications companies in Indonesia, which encountered a three-month delay in implementing its microservices architecture (MSA). The study aims to provide actionable insights for the company to enhance its MSA deployment and contribute to academic knowledge by offering a structured approach to evaluating critical success factors (CSFs) in similar contexts. Through a literature review, twenty-one factors were identified and categorized into four groups: (1) Organization, (2) Process, (3) Systems &amp; Tools, and (4) Knowledge, Skills &amp; Behavior. The Analytical Hierarchy Process (AHP) was used to evaluate the priority of each factor based on survey data from project executors and software development practitioners. The findings indicate that the Organization category is the most crucial, with (1) Top Management Support, (2) Clear Vision, and (3) Adequate Resources being the top three CSFs for MSA implementation.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_64-Critical_Success_Factors_of_Microservices_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of the GQM Framework on Software Engineering Exam Outcomes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151063</link>
        <id>10.14569/IJACSA.2024.0151063</id>
        <doi>10.14569/IJACSA.2024.0151063</doi>
        <lastModDate>2024-10-28T14:12:56.7100000+00:00</lastModDate>
        
        <creator>Reem Abdulaziz Alnanih</creator>
        
        <subject>Goal-question-metric (GQM); software engineering; education; learning process; learning outcomes; continuous improvement; statistical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Assessment is crucial in educational systems, particularly in Software Engineering (SE) programs, where fair and effective evaluations drive continuous improvement. The shift to student-centric methodologies has evolved assessment strategies to focus on aligning educational processes with students&#39; developmental needs rather than merely measuring academic outputs. This paper adapts the Goal-Question-Metric (GQM) framework to enhance learning in software engineering education by linking educational goals, learning activities, and assessment methods. This approach specifies expected learning outcomes and integrates mechanisms for continuous improvement, aligning teaching strategies with student performance metrics. A systematic framework for course assessment using the GQM framework is presented, aligning assessment methods with Intended Learning Outcomes (ILOs) and Student Learning Outcomes (SLOs) to ensure data-driven enhancements. To validate this approach, a template was introduced to assess the impact of a tailored GQM approach on the final exam outcomes of a software engineering course at King Abdulaziz University’s Department of Computer Science. A controlled experiment was conducted over two semesters with students from the CPCS 351 course. The control group, in the first semester, completed their finals without applying GQM, while the experimental group in the following semester employed a customized GQM framework. Statistical analyses, including ANOVA and Mann-Whitney U tests, were utilized to compare exam performance between the groups. Results indicated a significant improvement in the exam scores of the experimental group, thereby validating the effectiveness of the GQM framework in boosting academic performance through structured exam preparation and execution.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_63-The_Impact_of_the_GQM_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach in Complex Sentiment Analysis: A Case Study on Social Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151061</link>
        <id>10.14569/IJACSA.2024.0151061</id>
        <doi>10.14569/IJACSA.2024.0151061</doi>
        <lastModDate>2024-10-28T14:12:56.6630000+00:00</lastModDate>
        
        <creator>Bambang Nurdewanto</creator>
        
        <creator>Kukuh Yudhistiro</creator>
        
        <creator>Dani Yuniawan</creator>
        
        <creator>Himawan Pramaditya</creator>
        
        <creator>Mochammad Daffa Putra Karyudi</creator>
        
        <creator>Yulia Natasya Farah Diba Arifin</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <subject>Sentiment analysis; deep learning; artificial intelligence; social case</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This scholarly investigation examines the utilization of artificial intelligence (AI) technology in the analysis and resolution of intricate societal challenges in many countries. The originality of this study resides in the employment of deep learning algorithms, particularly Convolutional Neural Network (CNN), to execute sentiment analysis with an elevated degree of complexity. The examination encompasses three principal dimensions of sentiment: Sentiment, Tone, and Object, with the intention of offering profound insights into public perceptions regarding various social challenges. The fundamental sentiment is categorized into three classifications: Positive, Neutral, and Negative. Moreover, the Tone analysis introduces an additional layer of comprehension that encompasses Support, Suggestion, Criticism, Complaint, and Others, thereby delineating a more precise communicative context. The Object dimension is employed to ascertain the target of the sentiment, whether it pertains to an Individual, Organization, Policy, or other entity. This inquiry applied the analysis to several clusters of social issues, including Poverty and Economic Disparity, Health and Wellbeing, Education and Literacy, Violence and Security, as well as Environment and Social Life. The findings are anticipated to aid the government in devising policies that are more effective and responsive to the exigencies of society, through an enhanced understanding of public sentiment.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_61-Deep_Learning_Approach_in_Complex_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Operations (MLOps) Monitoring Model Using BI-LSTM and SARSA Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151060</link>
        <id>10.14569/IJACSA.2024.0151060</id>
        <doi>10.14569/IJACSA.2024.0151060</doi>
        <lastModDate>2024-10-28T14:12:56.6470000+00:00</lastModDate>
        
        <creator>Zeinab Shoieb Elgamal</creator>
        
        <creator>Laila Elfangary</creator>
        
        <creator>Hanan Fahmy</creator>
        
        <subject>Machine learning; MLOps; monitoring; container; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Machine learning operations (MLOps) enables faster model development, higher machine learning model quality, and shorter deployment cycles. Unfortunately, MLOps is still an uncertain concept with ambiguous research implications. Professionals and academics have focused mainly on creating machine learning models rather than operating sophisticated machine learning systems in practical situations. Furthermore, a monitoring system must have a comprehensive view of system interactions, and the need for a strong, efficient monitoring system grows when multi-container services are used. Therefore, this research proposes a new model, the Multi Containers Monitoring (MCM) Model, based on multi-container services and two machine learning approaches: bidirectional long short-term memory (BI-LSTM) and state-action-reward-state-action (SARSA). The proposed MCM model enables MLOps systems to be scaled and monitored efficiently, realizes and interprets the interactions between containers, enhances software release performance, and increases the number of software deployments across different types of environments. Moreover, this research proposes four routines for each layer of the proposed MCM model that illustrate how each layer is to be developed. This research also shows that the proposed MCM model achieves improvements of up to 24.55% in software deployment cycles and up to 13% in build duration when using MLOps.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_60-A_Machine_Learning_Operations_MLOps_Monitoring_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Crucial Review of Re-Purposing DNN-Based Systems: Significance, Challenges, and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151059</link>
        <id>10.14569/IJACSA.2024.0151059</id>
        <doi>10.14569/IJACSA.2024.0151059</doi>
        <lastModDate>2024-10-28T14:12:56.6330000+00:00</lastModDate>
        
        <creator>Yaser M Al-Hamzi</creator>
        
        <creator>Shamsul Bin Sahibuddin</creator>
        
        <subject>DNNs; DNN-based systems; significance and challenges; incompatibility; re-purposing; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The fourth industrial revolution is marked by the significance of artificial intelligence (AI), particularly the remarkable progress in deep neural networks (DNNs). These networks have become crucial in various areas of daily life because of their remarkable pattern-learning capabilities on massive datasets. However, the incompatibility of these systems makes reutilizing them for efficient data analysis and computation highly intricate and challenging due to their fragmentation, internal structure, and complexity. Training DNNs, a vital activity in model development, is often time-consuming and computationally expensive. More precisely, reusing the entire model during deployment when only a small portion of its features is required will result in excessive overhead. On the other hand, reengineering the model without efficient code review could also pose security risks, as the new model would inherit the original&#39;s defects and weaknesses. This paper comprehensively reviews DNN-based systems, encompassing cutting-edge frameworks, algorithms, and models for complex data, along with their existing limitations. The study, which results from a thorough examination, analysis, and synthesis of observations from 193 recent scholarly papers, provides a wealth of knowledge on the subject, identifying key issues and future research directions and offering novel guidelines to advance the repurposing and adaptation of DNN models, especially in finance, healthcare, and autonomous applications. The demonstrated findings, specifically those related to failure and risk challenges of DNN converters, including factors (n=12), symptoms (n1=4, n2=3), and root causes (n1=4, n2=3), will enrich the ML-DNNs community and guide them toward desirable model development and deployment improvement, with significant practical implications for intelligent industries.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_59-A_Comprehensive_Crucial_Review_of_Re_Purposing_DNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Batik Automatic Classification System Based on Ensemble Deep Learning and GLCM Feature Extraction Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151058</link>
        <id>10.14569/IJACSA.2024.0151058</id>
        <doi>10.14569/IJACSA.2024.0151058</doi>
        <lastModDate>2024-10-28T14:12:56.6170000+00:00</lastModDate>
        
        <creator>Luluk Elvitaria</creator>
        
        <creator>Ezak Fadzrin Ahmad Shaubari</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <creator>Salamun</creator>
        
        <creator>Zul Indra</creator>
        
        <subject>Batik; GLCM; ResNet; ensemble method; hard voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Classification of batik images is a challenge in the field of digital image processing, considering the complexity of patterns, colors, and textures of various batik motifs. This study proposes an ensemble method that combines texture feature extraction using Gray Level Co-occurrence Matrix (GLCM) with the Residual Neural Network (ResNet) classification model to improve accuracy in batik image classification. Texture features such as contrast, dissimilarity, entropy, homogeneity, mean, and standard deviation are extracted using GLCM and combined with ResNet to produce a more robust classification model. The experimental results show that the proposed method achieves high performance, namely above 90% for each evaluation metric used, such as accuracy, precision, recall and F-1 Score. The best performance in classifying batik images is obtained by the Standard Deviation feature with accuracy, precision, recall, and F1-score of 95%, 93%, 93%, and 93%, respectively. Furthermore, the application of the ensemble method based on the hard voting approach has proven effective in increasing the accuracy of batik image classification by utilizing a combination of texture features and deep learning models. The proposed method makes a significant contribution to the efforts to preserve batik culture through digitalization and can be implemented for various purposes such as an image-based batik search system.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_58-A_Proposed_Batik_Automatic_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Dorsal Hand Vein Segmentation Method Based on GR-UNet Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151057</link>
        <id>10.14569/IJACSA.2024.0151057</id>
        <doi>10.14569/IJACSA.2024.0151057</doi>
        <lastModDate>2024-10-28T14:12:56.6000000+00:00</lastModDate>
        
        <creator>Zhike Zhao</creator>
        
        <creator>Wen Zeng</creator>
        
        <creator>Kunkun Wu</creator>
        
        <creator>Xiaocan Cui</creator>
        
        <subject>Human dorsal hand veins; GR-UNet; near infrared technology; deep residual network-50; global attention mechanism; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>To solve the issue of inaccurate segmentation accuracy of human dorsal hand veins (HDHV), we propose a segmentation method based on the global residual U-Net (GR-Unet) model. Initially, a visual acquisition device for dorsal hand vein imaging was designed utilizing near-infrared technology, resulting in the creation of a dataset comprising 864 images of HDHV. Subsequently, a Bottleneck from the deep residual network-50 (ResNet50) was integrated into the U-Net model to enhance its depth and alleviate the problem of vanishing gradients. Furthermore, a global attention mechanism (GAM) was introduced at the junction to improve the acquisition of global feature information. Additionally, a weighted loss function that combines cross-entropy loss and Dice loss was employed to address the imbalance between positive and negative samples. The experimental results indicate that the GR-Unet model achieved accuracies of 78.82%, 88.03%, 93.92%, and 97.5% in terms of intersection over union, mean intersection over union, mean pixel accuracy, and overall accuracy, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_57-Human_Dorsal_Hand_Vein_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Rice Leaf Disease Detection: Next-Generation SMOREF-SVM Integrating Spider Monkey Optimization and Advanced Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151056</link>
        <id>10.14569/IJACSA.2024.0151056</id>
        <doi>10.14569/IJACSA.2024.0151056</doi>
        <lastModDate>2024-10-28T14:12:56.5870000+00:00</lastModDate>
        
        <creator>Avip Kurniawan</creator>
        
        <creator>Tri Retnaningsih Soeprobowati</creator>
        
        <creator>Budi Warsito</creator>
        
        <subject>SMOREF-SVM; rice leaf disease; classification; Spider Monkey Optimization (SMO); machine learning; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Leaf diseases pose a significant challenge to rice productivity, which is critical as rice is a staple food for over half of the world&#39;s population and a major agricultural commodity. These diseases can lead to severe economic losses and jeopardize food security, particularly in regions heavily reliant on rice farming. Traditional detection methods, such as visual inspection and microscopy, are often inadequate for early disease identification, which is crucial for effective management and minimizing yield loss. This paper introduces SMOREF-SVM, a novel approach that combines Spider Monkey Optimization (SMO) with Random Forest (RF) and Support Vector Machine (SVM) to improve the classification of rice leaf diseases. The innovation of SMOREF-SVM lies in its use of SMO for effective feature optimization, which selects the most relevant features from complex disease patterns, and its dual-classification framework using RF and SVM. Results demonstrate that SMOREF-SVM achieves an average accuracy of 98%, significantly outperforming standard SVM methods, which achieve around 90%. SMOREF-SVM also improves key metrics, including Precision, Recall, and F1 Score, by 5-10% for diseases with fewer samples, reaching Precision of 94%, Recall of 92%, and F1 Score of 93%. Additionally, ROC curve analysis shows an enhanced Area Under the Curve (AUC), approaching 0.98 for more disease classes, compared to 0.85 with traditional methods. This makes SMOREF-SVM a valuable tool for early and accurate disease detection, offering the potential to improve crop productivity and sustainability, addressing the critical challenges of disease management in agriculture.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_56-Revolutionizing_Rice_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Credit Card Fraud Prediction Model Based on GAN-DNN Imbalance Classification Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151054</link>
        <id>10.14569/IJACSA.2024.0151054</id>
        <doi>10.14569/IJACSA.2024.0151054</doi>
        <lastModDate>2024-10-28T14:12:56.5530000+00:00</lastModDate>
        
        <creator>Qin Wang</creator>
        
        <creator>Mary Jane C.Samonte</creator>
        
        <subject>Generative adversarial network; deep neural network; unbalanced data; credit card fraud; classification algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Credit card payment has become an important mode of consumption in modern life, but credit card fraud has also emerged, disrupting the financial order and restricting the development of the industry. To address the class imbalance problem in credit card fraud detection and improve detection accuracy, this paper uses a Generative Adversarial Network (GAN) to generate fraud samples and balance the number of fraudulent and normal transaction samples. Then, a deep neural network (DNN) is used to construct a credit card fraud prediction model. The study compares this model with commonly used classification algorithms and sampling methods in detail and confirms that the designed credit card fraud prediction model performs well, providing a theoretical basis and practical reference for financial institutions to predict credit card fraud.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_54-Research_on_Credit_Card_Fraud_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MSMA: Merged Slime Mould Algorithm for Solving Engineering Design Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151053</link>
        <id>10.14569/IJACSA.2024.0151053</id>
        <doi>10.14569/IJACSA.2024.0151053</doi>
        <lastModDate>2024-10-28T14:12:56.5400000+00:00</lastModDate>
        
        <creator>Khaled Mohammad Alhashash</creator>
        
        <creator>Hussein Samma</creator>
        
        <creator>Shahrel Azmin Suandi</creator>
        
        <subject>Slime mould algorithm; engineering design problems; metaheuristic; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The Slime Mould Algorithm (SMA) has effectively solved various real-world problems such as image segmentation, solar photovoltaic cell parameter estimation, and economic emission dispatch. However, SMA and its variants still face limitations when dealing with low-dimensional optimization problems, including slow convergence and local optima traps. This study aims to develop an optimized algorithm, the Merged Slime Mould Algorithm (MSMA), to overcome these limitations and improve performance in low-dimensional optimization tasks. Additionally, MSMA introduces a novel approach by merging the Adaptive Opposition Slime Mould Algorithm (AOSMA) and the Smart Switching Slime Mould Algorithm (S2SMA), simplifying the hybridization process and enhancing optimization performance. MSMA eliminates the need for multiple initializations, avoids memory-switching requirements, and employs adaptive and smart switching rules to harness the strengths of both algorithms. The performance of MSMA is evaluated using the CEC 2005 benchmark and ten real-world applications. The Wilcoxon rank-sum test verifies the effectiveness of the proposed approach, with results compared to various SMA variations and related optimization methods. Numerical findings demonstrate superior fitness values achieved by the proposed strategy, while statistical results indicate that MSMA outperforms the alternatives with a rapid convergence curve.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_53-MSMA_Merged_Slime_Mould_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Active Semi-Supervised Clustering Algorithm for Multi-Density Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151052</link>
        <id>10.14569/IJACSA.2024.0151052</id>
        <doi>10.14569/IJACSA.2024.0151052</doi>
        <lastModDate>2024-10-28T14:12:56.5230000+00:00</lastModDate>
        
        <creator>Walid Atwa</creator>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Eman A. Aldhahr</creator>
        
        <creator>Nourah Fahad Janbi</creator>
        
        <subject>Semi-supervised clustering; pairwise constraints; multi-density data; active learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Semi-supervised clustering with pairwise constraints has been a hot topic among researchers and experts. However, the problem becomes quite difficult to manage using random constraints when the clusters have different shapes, densities, and sizes. This research proposes an active semi-supervised density-based clustering algorithm, termed &quot;ASS-DBSCAN,&quot; designed specifically for clustering multi-density data. By integrating active learning and semi-supervised techniques, ASS-DBSCAN enhances traditional clustering methods, allowing it to handle complex data distributions with varying densities more effectively. This research provides two major contributions. The first is an analysis of how to select the pairwise constraints (must-link and cannot-link) that will be utilized by the clustering algorithm. The second is the ability to handle multiple density levels within a dataset. We perform experiments on real datasets. The ASS-DBSCAN algorithm was evaluated against existing state-of-the-art systems on various evaluation metrics, on which it performed remarkably well.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_52-Active_Semi_Supervised_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Impact of Occupancy Patterns on Indoor Air Quality in University Classrooms Using a Real-Time Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151051</link>
        <id>10.14569/IJACSA.2024.0151051</id>
        <doi>10.14569/IJACSA.2024.0151051</doi>
        <lastModDate>2024-10-28T14:12:56.5070000+00:00</lastModDate>
        
        <creator>Sri Ratna Sulistiyanti</creator>
        
        <creator>Muhamad Komarudin</creator>
        
        <creator>F. X Arinto Setyawan</creator>
        
        <creator>Hery Dian Septama</creator>
        
        <creator>Titin Yulianti</creator>
        
        <creator>M. Farid Ammar</creator>
        
        <subject>Indoor air quality; monitoring; pollution; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Indoor air quality (IAQ) in universities is of concern because it directly affects students&#39; health and performance. This study presents an IoT-based system for real-time monitoring of IAQ in university classrooms. The system uses MQ-7 and MQ-135 sensors to monitor CO and CO2 pollution parameters. The data is then processed by an ESP32 microcontroller, displayed on an LCD screen, and responded to immediately in a mobile application. The system’s real-time monitoring capabilities, data display, and alert mechanism provide valuable insights to improve the classroom environment. The sensors achieved an accuracy of 97.17% in the five-person scenario and 93.96% in the ten-person scenario. This study investigates how human behavior, classroom activities, and occupancy impact IAQ. The results show a strong positive correlation between occupancy rates and CO2 levels, indicating the importance of ventilation in densely populated classrooms: the correlation coefficient between the number of students and CO2 levels is 0.982, remarkably close to 1, meaning CO2 levels rise significantly as the number of students increases. This IoT-based system facilitates a data-driven approach to improving indoor environmental conditions, supporting healthier and more effective learning environments in educational institutions.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_51-Analyzing_the_Impact_of_Occupancy_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of 3D Coverage Layout for Multi-UAV Collaborative Lighting in Emergency Rescue Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151050</link>
        <id>10.14569/IJACSA.2024.0151050</id>
        <doi>10.14569/IJACSA.2024.0151050</doi>
        <lastModDate>2024-10-28T14:12:56.5070000+00:00</lastModDate>
        
        <creator>Dan Jiang</creator>
        
        <creator>Rui Yan</creator>
        
        <subject>Unmanned Aerial Vehicles; emergency rescue; collaborative lighting; three-dimensional coverage; particle swarm algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In emergency rescue scenarios, Unmanned Aerial Vehicles (UAVs) play a pivotal role in navigating complex terrains and high-risk environments. This paper proposes an optimization model for the three-dimensional coverage layout of a multi-UAV collaborative lighting system, specifically designed to meet the spatial requirements of emergency operations. An enhanced Particle Swarm Optimization (PSO) algorithm is employed to tackle the layout challenges, featuring adaptive inertia weights and asymmetric learning factors to improve both efficiency and global search capabilities. The simulation results demonstrate that the proposed method significantly enhances coverage efficiency, achieving over 90% coverage in critical areas while ensuring precise UAV positioning. Additionally, the algorithm shows faster convergence and stronger global search ability, effectively optimizing UAV deployment and improving operational efficiency during rescue missions. This study offers a practical and reliable layout solution for multi-UAV collaborative lighting systems, which is crucial for reducing rescue times, ensuring operational safety, and improving resource allocation in emergency responses.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_50-Optimization_of_3D_Coverage_Layout_for_Multi_UAV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Causal Model of Post-Millennials&#39; Willingness to Disclose Information to Online Fashion Businesses (Thailand)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151048</link>
        <id>10.14569/IJACSA.2024.0151048</id>
        <doi>10.14569/IJACSA.2024.0151048</doi>
        <lastModDate>2024-10-28T14:12:56.4770000+00:00</lastModDate>
        
        <creator>Apiwat Krommuang</creator>
        
        <creator>Jinnawat Kasisuwan</creator>
        
        <subject>Online fashion business; Post-Millennials; privacy calculus; willingness; disclose information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This research examines the causal factors influencing the willingness of Central Post-Millennials to disclose information to online fashion businesses by using privacy calculus theory as the basic principle for modeling. The study has three primary objectives: (1) to investigate the causal factors influencing willingness to disclose information, (2) to analyze both the direct and indirect effects of perceived risk, perceived benefit, perceived value, perceived control over the use of personalization data, and trust on the willingness to disclose information, and (3) to develop a causal factor model for understanding the determinants of willingness to disclose information among Central Post-Millennials in the context of online fashion businesses. The research sample consists of 385 individuals, and data were collected using a structured questionnaire. Descriptive and inferential statistical methods were employed for data analysis. The relationships between variables were assessed using Pearson&#39;s Correlation Coefficient. The model&#39;s fit to the empirical data was evaluated using goodness-of-fit measures, and the transmission of influence was tested through structural equation modeling (SEM). The findings reveal that demographic factors do not significantly affect the willingness to disclose information. However, the study identifies perceived risk, perceived benefit, perceived value, perceived control over the use of personalization data, and trust as key determinants of willingness to disclose information to online fashion businesses. Among these, perceived control exhibits the strongest influence, closely followed by trust. These results highlight the antecedent processes influencing the willingness to disclose information, as represented by a model developed from a comprehensive literature review and empirically tested for consistency with the data.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_48-Development_of_a_Causal_Model_of_Post_Millennials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Expression Classification System Using Stacked CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151049</link>
        <id>10.14569/IJACSA.2024.0151049</id>
        <doi>10.14569/IJACSA.2024.0151049</doi>
        <lastModDate>2024-10-28T14:12:56.4770000+00:00</lastModDate>
        
        <creator>Aditya Wikan Mahastama</creator>
        
        <creator>Edwin Mahendra</creator>
        
        <creator>Antonius Rachmat Chrismanto</creator>
        
        <creator>Maria Nila Anggia Rini</creator>
        
        <creator>Andhika Galuh Prabawati</creator>
        
        <subject>FER; CNN; deep learning; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Automatic emotion recognition technology through facial expressions has broad potential, ranging from human-computer interaction to stress detection and blood pressure assessment. Facial expressions exhibit patterns and characteristics that can be identified and analyzed by image processing and machine learning methods. These methods provide a basis for the development of emotion recognition systems. This research develops a facial emotion recognition model using Convolutional Neural Network (CNN) architecture, a popular architecture in image classification, segmentation, and object detection. CNNs offer automatic feature extraction and complex pattern recognition advantages on image data. This research uses three types of datasets, FER2013, CK+, and IMED, to optimize the deep learning approach. The developed model achieved an overall accuracy of 71% on the three datasets combined, with an average precision, recall, and F1-Score of 71%. The results show that CNN architecture performed well in facial emotion classification, supporting potential practical applications in various fields.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_49-Facial_Expression_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eating Behavior and Level of Knowledge About Healthy Eating Among Gym Users: A Multinomial Logistic Regression Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151047</link>
        <id>10.14569/IJACSA.2024.0151047</id>
        <doi>10.14569/IJACSA.2024.0151047</doi>
        <lastModDate>2024-10-28T14:12:56.4430000+00:00</lastModDate>
        
        <creator>Ana Huamani-Huaracca</creator>
        
        <creator>Sebasti&#225;n Ramos-Cosi</creator>
        
        <creator>Michael Cieza-Terrones</creator>
        
        <creator>Gina Le&#243;n-Untiveros</creator>
        
        <creator>Alicia Alva Mantari</creator>
        
        <subject>Knowledge; healthy eating; gym; eating behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The World Health Organization indicates that unhealthy diets cause approximately 11 million deaths annually worldwide. In Peru, 57.9% of the population consumes highly processed foods daily. The objective of this study is to analyze the relationship between knowledge about healthy eating and eating behavior among gym users in a district of Lima, Peru. Using an exploratory and quantitative design, information was collected from 156 users through a hybrid questionnaire, analyzed with SPSS and multinomial logistic regression techniques. The results reveal that 57.42% of the participants have an intermediate knowledge of healthy eating, while only 17.42% reach a high level. Likewise, 49.03% exhibit an intermediate eating behavior. In addition, sociodemographic factors, such as the duration of gym attendance and maintenance of a specific diet, were found to influence eating behavior. It is concluded that there is a significant relationship between the level of knowledge and eating behavior, underlining the importance of nutrition education to improve eating habits in this population.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_47-Eating_Behavior_and_Level_of_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Image Tampering Detection and Ownership Authentication Using Zero-Watermarking and Siamese Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151046</link>
        <id>10.14569/IJACSA.2024.0151046</id>
        <doi>10.14569/IJACSA.2024.0151046</doi>
        <lastModDate>2024-10-28T14:12:56.4300000+00:00</lastModDate>
        
        <creator>Rodrigo Eduardo Arevalo-Ancona</creator>
        
        <creator>Manuel Cedillo-Hernandez</creator>
        
        <creator>Francisco Javier Garcia-Ugalde</creator>
        
        <subject>Zero-watermarking; tampering detection; ownership authentication; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The development of advanced image editing tools has significantly increased the manipulation of digital images, creating a pressing need for robust tamper detection and ownership authentication systems. This paper presents a method that combines zero-watermarking with Siamese neural networks to detect image tampering and verify ownership. The approach utilizes features from the Discrete Wavelet Transform (DWT) and employs two halftone images as watermarks: one representing the owner&#39;s portrait and the other corresponding to the protected image. A feature matrix is generated from the owner&#39;s portrait using the Siamese network and securely linked to the image&#39;s halftone watermark through an XOR operation. Additionally, data augmentation enhances the model&#39;s robustness, ensuring effective learning of image features even under geometric and signal processing distortions. Experimental results demonstrate high accuracy in recovering halftone images, enabling precise tamper detection and ownership verification across different datasets and image distortions (geometric and image processing distortions).</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_46-Robust_Image_Tampering_Detection_and_Ownership_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TSO Algorithm and DBN-Based Comprehensive Evaluation System for University Physical Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151045</link>
        <id>10.14569/IJACSA.2024.0151045</id>
        <doi>10.14569/IJACSA.2024.0151045</doi>
        <lastModDate>2024-10-28T14:12:56.4130000+00:00</lastModDate>
        
        <creator>Yonghua Yang</creator>
        
        <subject>Campus fun run app; integrated evaluation of university sports inside and outside the classroom; tuna swarm optimization algorithm; deep belief network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the rise of fitness technologies and the integration of smart applications in education, improving physical education evaluation methods is essential for better assessing student performance inside and outside the classroom. Traditional evaluation methods often lack precision, fairness, and real-time capabilities. This study aims to develop an integrated evaluation method for university physical education using a combination of the Tuna Swarm Optimization (TSO) algorithm and a Deep Belief Network (DBN) to optimize the accuracy and efficiency of evaluating both in-class and extracurricular physical activities. The evaluation system is built using the Campus Running APP, which tracks and analyzes student performance in various physical education aspects, including in-class participation, extracurricular activities, and fitness tests. The TSO algorithm is employed to optimize the DBN, improving its ability to process complex datasets and avoid local optima. The model is trained and tested on a dataset collected from student activity on the Campus Running APP. Experimental results show that the TSO-DBN model outperforms traditional methods, such as DBN, GWO-DBN, and FTTA-DBN, in terms of evaluation accuracy and processing time. The TSO-DBN model achieves a root mean square error (RMSE) of 0.2-0.3, significantly lower than the comparison models. Additionally, it reaches an R&#178; value of 0.98, indicating high prediction accuracy, and demonstrates the fastest evaluation time of 0.0025 seconds. These results underscore the model’s superior ability to provide accurate, real-time assessments. The integration of the TSO algorithm with the DBN significantly improves the precision, efficiency, and fairness of physical education evaluations. The model offers a comprehensive and objective system for assessing student performance, helping universities better monitor and promote student health and physical activity. This approach paves the way for future research and application of AI-based systems in educational environments.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_45-TSO_Algorithm_and_DBN_Based_Comprehensive_Evaluation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Production in Reconfigurable Manufacturing Systems with Artificial Intelligence and Petri Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151044</link>
        <id>10.14569/IJACSA.2024.0151044</id>
        <doi>10.14569/IJACSA.2024.0151044</doi>
        <lastModDate>2024-10-28T14:12:56.4130000+00:00</lastModDate>
        
        <creator>Salah Hammedi</creator>
        
        <creator>Jalloul Elmelliani</creator>
        
        <creator>Lotfi Nabli</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Meshari Huwaytim Alanazi</creator>
        
        <creator>Nasser Aljohani</creator>
        
        <creator>Mohamed Shili</creator>
        
        <creator>Sami Alshmrany</creator>
        
        <subject>Artificial Intelligence (AI); Genetic Algorithms (GAs); optimization; intelligent scheduling; Petri Nets; Reconfigurable Manufacturing Systems (RMFS); scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This article presents an advanced approach to optimize production in Reconfigurable Manufacturing Systems (RMFS) by integrating Petri Nets with artificial intelligence (AI) techniques, particularly a genetic algorithm (GA). The proposed methodology aims to enhance scheduling efficiency and adaptability in dynamic manufacturing environments. Quantitative analysis demonstrates significant improvements, with the approach achieving an 85% success rate in reducing lead times and improving resource utilization, outperforming traditional scheduling methods by a margin of 15%. Furthermore, our AI-driven system exhibits a 90% success rate in providing data-driven insights, leading to more informed decision-making processes compared to existing neural network optimization techniques. The scalability of the proposed method is evidenced by its consistent performance across various RMFS configurations, achieving an 80% success rate in optimizing scheduling decisions. This study not only validates the robustness of the proposed method through extensive benchmarking but also highlights its potential for widespread adoption in real-world manufacturing scenarios. The findings contribute to the advancement of intelligent manufacturing by offering a novel, efficient, and adaptable solution for complex scheduling challenges in RMFS.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_44-Optimizing_Production_in_Reconfigurable_Manufacturing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Booking Trends and Customer Demand in the Tourism and Hospitality Sector Using AI-Based Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151043</link>
        <id>10.14569/IJACSA.2024.0151043</id>
        <doi>10.14569/IJACSA.2024.0151043</doi>
        <lastModDate>2024-10-28T14:12:56.3970000+00:00</lastModDate>
        
        <creator>Siham Rekiek</creator>
        
        <creator>Hakim Jebari</creator>
        
        <creator>Kamal Reklaoui</creator>
        
        <subject>Artificial Intelligence; decision-making; long short-term memory; XGBoost; Random Forest; Prophet; tourism; hospitality; demand forecasting; booking trends; customer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Accurate demand forecasting is critical for optimizing operations in the tourism and hospitality sectors. This paper proposes a robust multi-algorithmic framework leveraging four advanced Artificial Intelligence models (LSTM, Random Forest, XGBoost, and Prophet) to predict booking trends and customer demand. In contrast to traditional approaches, this study incorporates external factors such as competitors&#39; pricing strategies, local events, and weather patterns, offering a more holistic view of demand drivers. Using a comprehensive dataset from a leading hotel chain, we systematically compare the performance of these models, providing detailed evaluations. The findings offer actionable insights for hotel managers, demonstrating how predictive analytics can inform revenue management, improve operational efficiency, and enhance marketing initiatives. These results contribute to the evolving field of demand forecasting, offering practical recommendations for data-driven decision-making in the tourism and hospitality sectors.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_43-Prediction_of_Booking_Trends_and_Customer_Demand.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Service Book Sorting in University Libraries Based on Linear Discriminant Analysis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151042</link>
        <id>10.14569/IJACSA.2024.0151042</id>
        <doi>10.14569/IJACSA.2024.0151042</doi>
        <lastModDate>2024-10-28T14:12:56.3830000+00:00</lastModDate>
        
        <creator>Changjun Wang</creator>
        
        <creator>Fengxia You</creator>
        
        <creator>Yu Wang</creator>
        
        <subject>Linear discrimination; library intelligent services; book sorting; university libraries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The demand for intelligent services in university libraries is constantly increasing, especially for intelligent book sorting. This research explores an intelligent classification method for university library books based on linear discriminant analysis, which is used to reduce the dimensionality of multidimensional feature data. A membership model for different categories of books is established to achieve classification. The results showed that when the training set data was reduced to two dimensions, the feature extraction accuracy of the classification algorithm reached 64.02%, significantly higher than the 52.48% achieved on one-dimensional data. In addition, the membership calculation accuracy of axiomatic fuzzy sets on two-dimensional data was high, reducing the classification difficulties caused by mixed samples. After comparative analysis of different algorithms, the proposed transfer learning linear-discriminant-analysis axiomatic-fuzzy-set algorithm achieved the highest accuracy of 98.67% and completed data classification in about 20 s, outperforming other commonly used classification algorithms. The practical significance of the research lies in providing an efficient and accurate book sorting algorithm, which helps to improve the work efficiency and service quality of libraries.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_42-Intelligent_Service_Boo_Sorting_in_University_Libraries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Personalized Recommender System for Mental Health Interventions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151041</link>
        <id>10.14569/IJACSA.2024.0151041</id>
        <doi>10.14569/IJACSA.2024.0151041</doi>
        <lastModDate>2024-10-28T14:12:56.3670000+00:00</lastModDate>
        
        <creator>Idayati Binti Mazlan</creator>
        
        <creator>Noraswaliza Abdullah</creator>
        
        <creator>Norashikin Ahmad</creator>
        
        <creator>Siti Zaleha Harun</creator>
        
        <subject>Recommender system; collaborative filtering; content-based filtering; hybrid recommender system; mental health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Personalized recommender systems for mental health are becoming indispensable instruments for providing individuals with tailored resources and therapeutic interventions. This study aims to explore the application of recommender systems within the mental health domain through a systematic literature review. The research is guided by three primary questions: 1) What is a recommender system, and what techniques are available within these systems? 2) What techniques and approaches are used explicitly in recommender systems for mental health applications? 3) What are the limitations and challenges in applying recommender systems in the mental health domain? The review first gives a thorough introduction to recommender systems, covering the different methods, including content-based filtering, collaborative filtering, knowledge-based filtering, and hybrid approaches. Next, it examines the specific techniques and approaches employed in the mental health context, highlighting their unique adaptation requirements, benefits, and limitations. Ultimately, the research highlights the key limitations and challenges, including data privacy concerns, the need for tailored recommendations, and the complexities of user engagement in mental health environments. By synthesizing current knowledge, this review provides valuable insights into the potential and constraints of recommender systems in supporting mental health, offering guidance for future research and development in this critical area.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_41-A_Review_of_Personalized_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Based Detection Method for Insulator Defects in High Voltage Transmission Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151040</link>
        <id>10.14569/IJACSA.2024.0151040</id>
        <doi>10.14569/IJACSA.2024.0151040</doi>
        <lastModDate>2024-10-28T14:12:56.3500000+00:00</lastModDate>
        
        <creator>Wang Tingyu</creator>
        
        <creator>Sun Xia</creator>
        
        <creator>Liu Jiaxing</creator>
        
        <creator>Zhang Yue</creator>
        
        <subject>Insulators; insulator defect detection; improved YOLOv5; BiFPN network; PyQt5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The high-voltage transmission system is a key component of the power network, and the reliability of its insulators directly affects the safe operation of the system. Traditional insulator defect detection methods rely on manual inspection, which requires significant human resources and is prone to substantial subjectivity. To address this issue, this paper proposes an insulator defect recognition method based on an improved YOLOv5 algorithm. The method first collects images of insulator defects and then utilizes the YOLOv5 model for recognition training. To enhance multi-scale feature fusion capability, a bidirectional feature pyramid network (BiFPN) is introduced. During training, the SiLU activation function is used, and the SE attention mechanism is integrated into the detection backbone network, enhancing the model&#39;s detection accuracy. Experimental results show that the model achieves a detection precision of 90.27%, a recall of 89.14%, and a mAP of 91.34% on the test set. To further enhance the model&#39;s practicality, a PyQt5-based graphical user interface (GUI) for the inspection system is designed, enabling interactive functions such as image uploading, defect detection, and result display. In summary, the research presented in this paper provides efficient and accurate technical support for intelligent power inspection, offering a wide range of application prospects.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_40-A_Deep_Learning_Based_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Methodology for Production-Education Integration and Quality Evaluation of Rural Vocational Education Under Rural Revitalization with 2-Tuple Linguistic Neutrosophic Numbers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151039</link>
        <id>10.14569/IJACSA.2024.0151039</id>
        <doi>10.14569/IJACSA.2024.0151039</doi>
        <lastModDate>2024-10-28T14:12:56.3370000+00:00</lastModDate>
        
        <creator>Xingli Wang</creator>
        
        <subject>Multiple-attribute group decision-making (MAGDM); 2TLNSs; ExpTODIM approach; GRA approach; PEI quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Rural vocational education (RVE) plays a crucial role in nurturing practical talents for the development of the rural economy and society in the new era, as well as cultivating future generations of agricultural successors. Quality evaluation serves as an essential means of ensuring educational excellence, acting as a key element for the supervision, assurance, and enhancement of educational quality. For production-education integration (PEI) in RVE, establishing a quality evaluation system that aligns with national and rural conditions, caters to the needs of modern agricultural industry development, and reflects the characteristics of RVE is crucial. Such a system plays a vital role in leading and promoting the deep integration of industry and education in rural vocational colleges. The PEI quality evaluation in RVE under rural revitalization involves MAGDM. Currently, the Exponential TODIM (ExpTODIM) approach and the grey relational analysis (GRA) approach have been utilized to address MAGDM challenges. To handle uncertain information in PEI quality evaluation in RVE under rural revitalization, 2-tuple linguistic neutrosophic sets (2TLNSs) are adopted as a valuable tool. This paper introduces the implementation of the 2-tuple linguistic neutrosophic number Exponential TODIM-GRA (2TLNN-ExpTODIM-GRA) approach to effectively manage MAGDM problems using 2TLNSs. Additionally, a numerical study is conducted to validate the application of this approach for PEI quality evaluation in RVE under rural revitalization.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_39-Enhanced_Methodology_for_Production_Education_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CoCoSo Framework for Management Performance Evaluation of Teaching Services in Sports Colleges and Universities with Euclidean Distance and Logarithmic Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151038</link>
        <id>10.14569/IJACSA.2024.0151038</id>
        <doi>10.14569/IJACSA.2024.0151038</doi>
        <lastModDate>2024-10-28T14:12:56.3200000+00:00</lastModDate>
        
        <creator>Feng Li</creator>
        
        <creator>Yuefei Wen</creator>
        
        <subject>Multiple-Attribute Decision-Making (MADM); Double-Valued Neutrosophic Sets (DVNSs); CoCoSo technique; management performance evaluation of teaching services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Sports colleges represent the highest level in China&#39;s higher education system for cultivating sports professionals. They shoulder the arduous task of cultivating sports talents with innovative spirit and practical ability, contributing to the country&#39;s sports and education undertakings. Studying the service performance of teaching management departments in sports colleges helps teaching management workers establish the central position of teaching work more firmly in their thoughts and actions, transform their work style, strengthen their awareness of serving teaching, teachers, and students, organize services closely around teaching work, and make improving service levels and optimizing service quality an important part of raising the teaching management level in sports colleges. The management performance evaluation of teaching services in sports colleges and universities is regarded as a multiple-attribute decision-making (MADM) problem. Recently, the CoCoSo and entropy techniques have been utilized to cope with MADM. Double-valued neutrosophic sets (DVNSs) are utilized to characterize fuzzy information during the management performance evaluation of teaching services in sports colleges and universities. In this study, the double-valued neutrosophic number CoCoSo (DVNN-CoCoSo) technique is developed for MADM in light of the DVNN Euclidean distance (DVNNED) and DVNN Logarithmic distance (DVNNLD). Finally, a numerical example for management performance evaluation of teaching services in sports colleges and universities is put forward to demonstrate the DVNN-CoCoSo technique. The major contributions of this study are: (1) the DVNN-CoCoSo technique is developed for MADM in light of DVNNED and DVNNLD; (2) objective weights are determined through the entropy technique; (3) a numerical example for management performance evaluation of teaching services in sports colleges and universities, together with comparative analyses, is presented to verify the DVNN-CoCoSo technique.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_38-CoCoSo_Framework_for_Management_Performance_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced IVIFN-ExpTODIM-MABAC Technique for Multi-Attribute Group Decision-Making Under Interval-Valued Intuitionistic Fuzzy Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151037</link>
        <id>10.14569/IJACSA.2024.0151037</id>
        <doi>10.14569/IJACSA.2024.0151037</doi>
        <lastModDate>2024-10-28T14:12:56.3030000+00:00</lastModDate>
        
        <creator>Bin Xie</creator>
        
        <subject>Multi-attribute group decision-making (MAGDM); interval-valued intuitionistic fuzzy sets (IVIFSs); ExpTODIM approach; MABAC approach; college English teaching quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The evaluation of English teaching quality is crucial for enhancing teaching effectiveness. It helps teachers understand their teaching methods and students&#39; learning outcomes through systematic assessment, thereby guiding teachers to adjust their teaching strategies. Additionally, the results of the evaluation provide decision-making support for educational management at schools, optimizing curriculum design and resource allocation. Regular evaluations of teaching quality motivate teachers for continuous professional development, improve teaching standards, and ensure that students achieve maximum growth and progress in their English learning journey. The assessment of college English teaching quality employs multi-attribute group decision-making (MAGDM). Techniques like Exponential TODIM (ExpTODIM) and MABAC are utilized to facilitate MAGDM. During the evaluation process, interval-valued intuitionistic fuzzy sets (IVIFSs) are utilized to handle fuzzy data. This research introduces a novel method, the interval-valued intuitionistic fuzzy number ExpTODIM-MABAC (IVIFN-ExpTODIM-MABAC) tailored for MAGDM under the framework of IVIFSs. To demonstrate its efficacy, a numerical example evaluating college English teaching quality is presented. Key contributions of this study include: (1) Extending the ExpTODIM-MABAC method to include IVIFSs with an Entropy model; (2) Utilizing Entropy to ascertain weights within IVIFSs; (3) Proposing the IVIFN-ExpTODIM-MABAC approach for MAGDM under IVIFSs; (4) Validating the approach with a numerical example and various comparative analyses of college English teaching quality.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_37-Enhanced_IVIFN_Exp_TODIM_MABAC_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection of Malevolent Domains in Cyberspace Using Natural Language Processing and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151036</link>
        <id>10.14569/IJACSA.2024.0151036</id>
        <doi>10.14569/IJACSA.2024.0151036</doi>
        <lastModDate>2024-10-28T14:12:56.2900000+00:00</lastModDate>
        
        <creator>Saleem Raja Abdul Samad</creator>
        
        <creator>Pradeepa Ganesan</creator>
        
        <creator>Amna Salim Al-Kaabi</creator>
        
        <creator>Justin Rajasekaran</creator>
        
        <creator>Singaravelan M</creator>
        
        <creator>Peerbasha Shebbeer Basha</creator>
        
        <subject>Machine learning; N-gram; linguistic features; natural language processing (NLP); malicious webpage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Cyberattacks are intentional attacks on computer systems, networks, and devices. Malware, phishing, drive-by downloads, and injection are popular cyberattacks that can harm individuals, businesses, and organizations. Most of these attacks trick internet users by using malicious links or webpages. Malicious webpages can be used to distribute malware, steal personal information, conduct phishing attacks, or perform other malicious activities. Detecting such malicious websites is a tedious task for internet users, so locating them in cyberspace requires an automated detection tool. Currently, machine learning techniques are being used to detect such malicious websites. The majority of recent studies derive a limited number of features from webpages (both benign and malicious) and use machine learning (ML) algorithms to detect fraudulent webpages. However, such constrained feature sets may not exploit the full potential of the dataset. This study addresses this issue by identifying malicious websites using both URL and webpage content features. To maximize detection accuracy, both n-grams and vectorization methods from natural language processing are adopted with a minimal feature set. To exploit the full potential of the dataset, the proposed approach derives 22 common linguistic features of the URL and generates n-grams from the domain name of the URL. The textual content of the webpages was also used. The research employs seven machine learning algorithms with three vectorization methods. The outcome reveals that the proposed method outperformed the results of previous studies.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_36-Automated_Detection_of_Malevolent_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Intelligent Learning Model Based on Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151035</link>
        <id>10.14569/IJACSA.2024.0151035</id>
        <doi>10.14569/IJACSA.2024.0151035</doi>
        <lastModDate>2024-10-28T14:12:56.2730000+00:00</lastModDate>
        
        <creator>Xiaojing Guo</creator>
        
        <creator>Xiaoying Zhu</creator>
        
        <creator>Lei Liu</creator>
        
        <subject>Online courses; ant colony optimization algorithm; intelligent learning model; path planning; local optimum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>As online courses become increasingly popular, learners faced with a large number of course choices are growing dissatisfied with imprecise course recommendation mechanisms. How to better recommend relevant courses to targeted users has therefore become a current research hotspot. An intelligent learning model based on the ant colony optimization algorithm is introduced, which can accurately calculate the similarity between courses and learners. After structured classification, the model recommends courses to learners in the optimal way. The results showed that the accuracy of this method reached 10-20 when tested on the Sphere and Ellipse functions, and the optimal solution for problem Ulysses21 was 27, which was better than the Advanced Sorting Ant System (ASrank), the Maximum Minimum Ant System (MMAS), and the Ant System (AS) based on optimization sorting. The proposed ant colony optimization algorithm had better convergence performance than the ASrank, MMAS, and AS algorithms, with a shortest path of 53.5. After reaching Root Mean Square Error (RMSE) and Relative Deviation (RD) distributions of 6% and 8%, the stability of the proposed method no longer decreased with increasing RMSE. The accuracy did not vary significantly with changes in the dataset, and the reproducibility was better than that of other comparison models. In the path Block and path Naive scenarios, the proposed algorithm had an average computation time of only 1011, which was better than the Ant Colony Optimization (ACO) and Massive Multilingual Speech (MMS) models. Therefore, the proposed algorithm improves the performance of intelligent learning models, solves the problem of local optima while enhancing the convergence efficiency of the model, and provides new solutions and directions for improving the recommendation performance of online learning platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_35-Development_of_Intelligent_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart System for Driver Behavior Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151034</link>
        <id>10.14569/IJACSA.2024.0151034</id>
        <doi>10.14569/IJACSA.2024.0151034</doi>
        <lastModDate>2024-10-28T14:12:56.2570000+00:00</lastModDate>
        
        <creator>Hajar LAZAR</creator>
        
        <creator>Zahi JARIR</creator>
        
        <subject>Driver behavior prediction; OBD II; smartphone sensors; intelligent transport system; traffic safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Driver behavior has recently emerged as a challenging topic in traffic risk studies. Despite advances in this area, challenges remain. The present contribution deals with predicting driver behavior in real time using machine learning techniques applied to sensor data collected from smartphone sensors (accelerometer, gyroscope, GPS) and from OBD II. To ensure real-time prediction, we use a real-time architecture that relies on the MongoDB Atlas service to synchronize data communication. Furthermore, we adopt a Random Forest model, which demonstrates the highest performance compared to other models. This model can predict and prevent incidents by warning drivers when their driving style is aggressive, moderate, or slow. The proposed system aims to provide more information about incidents in order to gain a better understanding of their causes.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_34-Smart_System_for_Driver_Behavior_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Modeling of the Stress-Strain State of Two Kvershlags with a Double Periodic System of Slits Weighty Elastic Transtropic Massif</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151033</link>
        <id>10.14569/IJACSA.2024.0151033</id>
        <doi>10.14569/IJACSA.2024.0151033</doi>
        <lastModDate>2024-10-28T14:12:56.2430000+00:00</lastModDate>
        
        <creator>Tursinbay Turymbetov</creator>
        
        <creator>Gulmira Tugelbaeva</creator>
        
        <creator>Baqlan Kojahmet</creator>
        
        <creator>Bekzat Kuatbekov</creator>
        
        <creator>Serzhan Maulenov</creator>
        
        <creator>Bakhytzhan Turymbetov</creator>
        
        <creator>Mukhamejan Abdibek</creator>
        
        <subject>Transtropic; cavities; stress-strain state; deformation; finite element; slits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This paper presents computer modeling of the stress-strain state of two kvershlags with a doubly periodic system of slits in a weighty elastic transtropic massif. It introduces key concepts such as &#39;kvershlag&#39;, a term describing perpendicular cavities in a layered massif, and &#39;weighty elastic transtropic massif&#39;, which refers to the specialized geological structure considered in the study. These terms are critical for understanding the modeling approach. Because an analytical solution for this class of problems is complex, a numerical method is used: the mixed problem is solved by reducing it to an environment of equivalent stiffness. The finite element method was used to solve the problem, and a software package was created to compute the stress-strain state of the two kvershlags. To ensure the correctness of the software complex, it was checked against test tasks. To study the stress-strain state of kvershlags in a weighty massif, the basic systems of equations are obtained, algorithms are constructed, and the program complex FEM_3D for solving finite element method problems is compiled. Mixed problems of the stress-strain state of cavities are solved approximately. The results of the computer calculations are systematized and analyzed, specific conclusions are drawn, and recommendations for their practical application are proposed. The stress-strain state of the two kvershlags was simulated on a computer, and the numerical solution to the given problem was obtained using the software. Results demonstrate that the numerical approach achieves a deviation of 0.01%.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_33-Computer_Modeling_of_the_Stress_Strain_State.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interactive Attention-Based Approach to Document-Level Relationship Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151032</link>
        <id>10.14569/IJACSA.2024.0151032</id>
        <doi>10.14569/IJACSA.2024.0151032</doi>
        <lastModDate>2024-10-28T14:12:56.2270000+00:00</lastModDate>
        
        <creator>Zhang Mei</creator>
        
        <creator>Zhao Zhongyuan</creator>
        
        <creator>Xu Zhitong</creator>
        
        <subject>Document-level relation extraction; interaction attention-based; the baseline model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Document-level relation extraction entails sifting through extensive document data to pinpoint relationships and pertinent event details among various entities. This process aids intelligence analysts in swiftly grasping the essence of the content while revealing potential connections and emerging trends, thus proving invaluable for research purposes. This paper puts forward a method for document-level relation extraction that leverages an interaction attention mechanism. Initially, building on an evidence-based approach for document-level relation extraction, the interaction attention mechanism is introduced, extracting the final layer of hidden states, rich in semantic information, from the document encoder. These hidden states are then fed into a self-attention layer informed by dependency parsing. The outputs of both components serve as distinct supervisory signals for the interactive input. By pooling these outputs, context embeddings with enhanced representational power are derived. Relation triples are first extracted using the relation classifier. Finally, building on these preliminary results, relation inference is carried out independently using pseudo-documents created from the source material and pertinent evidence. Only those relationships whose cumulative inference score surpasses a certain threshold are regarded as final outcomes. Experimental findings on publicly accessible datasets indicate commendable performance.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_32-An_Interactive_Attention_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Predicting Intradialytic Hypotension: A Survey Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151031</link>
        <id>10.14569/IJACSA.2024.0151031</id>
        <doi>10.14569/IJACSA.2024.0151031</doi>
        <lastModDate>2024-10-28T14:12:56.2100000+00:00</lastModDate>
        
        <creator>Saeed Alqahtani</creator>
        
        <creator>Suhuai Luo</creator>
        
        <creator>Mashhour Alanazi</creator>
        
        <creator>Kamran Shaukat</creator>
        
        <creator>Mohammed G Alsubaie</creator>
        
        <creator>Mohammad Amer</creator>
        
        <subject>Hemodialysis; machine learning; deep learning; artificial intelligence; intradialytic hypotension; electrocardiogram; light gradient boosting machine; deep neural network; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Intradialytic hypotension (IDH) is a common complication in patients undergoing maintenance hemodialysis and is associated with an increased risk of cardiovascular and all-cause mortality. Machine learning (ML) and deep learning (DL) techniques transform healthcare by enabling accurate disease diagnosis, personalised treatment plans, and clinical decision support. However, challenges like data quality, privacy, and interpretability must be addressed for responsible adoption. This survey review aims to summarise and analyse relevant articles on applying machine learning models for predicting IDH. Among these models, deep learning, a subfield of machine learning, stands out because it can improve the overall performance of health care, particularly in diagnostic imaging and pathologic processes and in the synthetic judgment of big data flow. The insights gained from this survey review will assist researchers and practitioners in selecting appropriate machine-learning models and implementing preemptive measures to prevent IDH in dialysis patients.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_31-Machine_Learning_for_Predicting_Intradialytic_Hypotension.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging the Gap: Machine Learning and Vision Neural Networks in Autonomous Vehicles for the Aging Population</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151030</link>
        <id>10.14569/IJACSA.2024.0151030</id>
        <doi>10.14569/IJACSA.2024.0151030</doi>
        <lastModDate>2024-10-28T14:12:56.1930000+00:00</lastModDate>
        
        <creator>Shengsheng Tan</creator>
        
        <subject>Autonomous vehicles; machine learning; vision neural network; human-computer interaction; aging population; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>As autonomous vehicles (AVs) evolve, it is necessary to address the unique needs of the aging population, a group that can benefit significantly from this technology. This scoping review examines the role of machine learning and vision neural networks in autonomous vehicles, with a focus on enhancing safety, usability, and trust for elderly users. It systematically reviews existing literature to identify how these technologies address the cognitive and physical challenges faced by older adults. The review highlights key advancements in AV technology, such as adaptive interfaces and assistive features, that can enhance the driving experience for the elderly. Additionally, it investigates factors influencing trust and acceptance of AVs among older adults, emphasizing the importance of transparent and user-friendly design. Although notable progress has been made, significant gaps remain in understanding how to optimize these technologies to meet the diverse needs of elderly passengers. The review identifies areas for future research, including personalized AV systems and regulatory frameworks that support elderly-friendly designs. By addressing these gaps, the study aims to contribute to the development of autonomous vehicles that are inclusive and accessible, improving mobility and quality of life for the aging population. This review underscores the importance of integrating machine learning and vision neural networks in designing AVs that cater to the unique needs of older adults, offering valuable insights for researchers, policymakers, and industry stakeholders advancing autonomous vehicle technology.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_30-Bridging_the_Gap_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Emotion Regulation Through Virtual Reality Design Framework for Social-Emotional Learning (VRSEL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151029</link>
        <id>10.14569/IJACSA.2024.0151029</id>
        <doi>10.14569/IJACSA.2024.0151029</doi>
        <lastModDate>2024-10-28T14:12:56.1800000+00:00</lastModDate>
        
        <creator>Irna Hamzah</creator>
        
        <creator>Ely Salwana</creator>
        
        <creator>Nilufar Baghaei</creator>
        
        <creator>Mark Billinghurst</creator>
        
        <creator>Azhar Arsad</creator>
        
        <subject>Virtual reality; design; framework; social-emotional learning; emotion regulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Virtual reality (VR) has swiftly progressed, transitioning from a niche technology primarily associated with gaming to a versatile tool with broad applications across entertainment, healthcare, education, and beyond. Social-emotional learning (SEL) is increasingly recognized for its role in enhancing individuals&#39; social skills and emotional regulation. However, despite the growing body of research on VR, the development of specific design frameworks for integrating VR with SEL remains underexplored. This study addresses this gap by employing thematic analysis to identify the critical components necessary for a Virtual Reality Design Framework for Social-Emotional Learning (VRSEL) aimed at improving emotion regulation among Malaysian adolescents. Through qualitative data derived from expert interviews in SEL and VR, this research proposes a framework that leverages immersive VR technology to create realistic, interactive scenarios that facilitate the practice and development of social-emotional skills. The framework emphasizes key design principles, including user interface (UI), presentation layer (PL), and brain activity (BA). Our findings suggest that VRSEL is a powerful tool for SEL, offering significant potential for educational environments. However, challenges such as technical barriers, content development, and educator training must be addressed to fully realize its benefits. This research highlights the promising role of VR in advancing SEL and lays the groundwork for further exploration and refinement of VRSEL in diverse educational settings.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_29-Enhancing_Emotion_Regulation_Through_Virtual_Reality_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Impact of User Experience Elements on Virtual Reality for Emotion Regulation Through mVR-Real App</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151028</link>
        <id>10.14569/IJACSA.2024.0151028</id>
        <doi>10.14569/IJACSA.2024.0151028</doi>
        <lastModDate>2024-10-28T14:12:56.1630000+00:00</lastModDate>
        
        <creator>Irna Hamzah</creator>
        
        <creator>Ely Salwana</creator>
        
        <creator>Nilufar Baghaei</creator>
        
        <subject>Virtual reality; app; user experience; emotion regulation; social-emotional learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Virtual reality, a rapidly advancing technology, has significantly evolved over the past few years, offering immersive experiences through headsets, gloves, or controllers that engage users in dynamic and captivating environments. Its applications span various fields, including video games, healthcare, education, and training simulations. However, there remains a gap in utilizing virtual reality for adolescent emotion regulation. This study explores the mVR-REAL app, designed to enhance social-emotional learning in teenagers by improving emotion regulation. Social-emotional learning is an educational approach aimed at developing essential socio-emotional skills in adolescents. Sixteen participants engaged with the mVR-REAL app through virtual scene episodes using the Meta Quest 2 headset. Emotional responses and user feedback were measured using pre- and post-test questionnaires with a Likert scale (strongly agree, agree, neutral, disagree, strongly disagree), with a numerical value assigned to each option. Analysis conducted with SPSS software revealed statistically significant improvements in user experience following the use of mVR-REAL. The findings suggest that mVR-REAL has the potential to enhance user experience, evoke strong emotional responses, and foster greater engagement compared to traditional applications. These insights will inform future large-scale testing of mVR-REAL, emphasizing the importance of emotional design, as well as psychological and cultural factors, in the development of virtual reality apps for emotion regulation. However, time-related challenges were identified due to the restricted duration of the VR session, highlighting the need for further research and refinement in future virtual reality app development.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_28-Exploring_the_Impact_of_User_Experience_Elements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Accuracy of Chili Leaf Disease Classification with ResNet and Fine-Tuning Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151027</link>
        <id>10.14569/IJACSA.2024.0151027</id>
        <doi>10.14569/IJACSA.2024.0151027</doi>
        <lastModDate>2024-10-28T14:12:56.1470000+00:00</lastModDate>
        
        <creator>Sayuti Rahman</creator>
        
        <creator>Rahmat Arief Setyadi</creator>
        
        <creator>Asmah Indrawati</creator>
        
        <creator>Arnes Sembiring</creator>
        
        <creator>Muhammad Zen</creator>
        
        <subject>Chili leaf classification; convolutional neural network; ResNet10; fine-tuning; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>A lack of disease detection in plants frequently results in the spread of diseases that are difficult and expensive to treat. Rapid disease recognition enables farmers to control diseases with appropriate treatment. This study aims to support chili farmers in identifying chili plant diseases based on leaf images. This work presents a CNN design based on several existing CNN architectures that have been fine-tuned to achieve the highest possible accuracy. The study found that the ResNet101 model with the Tanh activation function, SGD optimizer, and Reduced Learning Rate (ReduceLR) schedule achieved a peak classification accuracy of 99.53%. This significant improvement demonstrates the potential of using advanced CNN techniques and fine-tuning strategies to enhance model accuracy in agricultural applications. The implications of this study extend to the field of precision agriculture, suggesting that the proposed model can be integrated into smart farming systems to improve the timely and efficient control of chili leaf diseases. Such advancements not only enhance crop yields but also contribute to sustainable agricultural practices and the economic stability of chili farmers.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_27-Improving_the_Accuracy_of_Chili_Leaf_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Biomarkers for Colorectal Cancer Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151025</link>
        <id>10.14569/IJACSA.2024.0151025</id>
        <doi>10.14569/IJACSA.2024.0151025</doi>
        <lastModDate>2024-10-28T14:12:56.1170000+00:00</lastModDate>
        
        <creator>Mohamed Ashraf</creator>
        
        <creator>M. M. El-Gayar</creator>
        
        <creator>Eman Eldaydamony</creator>
        
        <subject>Colorectal cancer (CRC); microarray; biomarkers; gene expression omnibus; feature selection; chi-squared test; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Many researchers work on the important problem of identifying biomarkers linked to a given disease, such as cancer, in order to assist in diagnosis and treatment. Several studies have recently proposed methods for identifying disease-linked genes; however, only a handful of these methods were created specifically for CRC gene prediction. This research presents a novel prediction technique to determine new biomarkers related to CRC that can assist in the diagnostic process. First, we preprocessed four microarray datasets (GSE4107, GSE8671, GSE9348 and GSE32323) using the RMA (Robust Multi-Array Average) method to remove local artifacts and normalize the values. Second, we used the chi-squared test for feature selection to identify significant features from the datasets. Finally, the features were fed to XGBoost (eXtreme Gradient Boosting) to diagnose various test scenarios. The proposed model achieves a high mean accuracy rate and a low standard deviation. Compared with other systems, the experimental findings show promise. The predicted biomarkers are validated through a review of the literature.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_25-Novel_Biomarkers_for_Colorectal_Cancer_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multilevel Characteristic Weighted Fusion Algorithm in Domestic Waste Information Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151024</link>
        <id>10.14569/IJACSA.2024.0151024</id>
        <doi>10.14569/IJACSA.2024.0151024</doi>
        <lastModDate>2024-10-28T14:12:56.1000000+00:00</lastModDate>
        
        <creator>Min Li</creator>
        
        <subject>Multi-feature; weighted fusion; image; deep learning; waste classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The study of domestic waste image classification is of considerable significance for fields such as environmental protection and smart city development. To improve the classification efficiency of household waste information, a multi-feature weighted fusion method for household waste image classification is proposed. In this research, deep learning technology was applied to develop a multi-level feature-weighted fusion network model for domestic garbage image classification. The study first analyzed the VGG-16 architecture and created a domestic garbage image dataset according to the current Shenzhen garbage classification standard. On this basis, a multi-level feature-weighted fusion model for garbage image classification was constructed using VGG-16 as the backbone network, combined with the backbone feature extraction network as well as content-aware and boundary-aware feature extraction networks. Performance testing showed that the model’s highest classification accuracy reaches 0.98, and its shortest classification time is only 3 s. The multi-level feature-weighted fusion garbage image classification model constructed in this research not only delivers better classification performance but also offers a new approach to the urban garbage classification problem.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_24-Multilevel_Characteristic_Weighted_Fusion_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung CT Image Classification Algorithm Based on Improved Inception Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151023</link>
        <id>10.14569/IJACSA.2024.0151023</id>
        <doi>10.14569/IJACSA.2024.0151023</doi>
        <lastModDate>2024-10-28T14:12:56.0870000+00:00</lastModDate>
        
        <creator>Qianlan Liu</creator>
        
        <subject>Image classification; inception; lung CT images; CNN; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the continuous development of digital technology, traditional lung computed tomography medical image processing faces problems such as complex images, small sample sizes, and similar symptoms between diseases. Efficiently classifying lung computed tomography images has become a technical challenge. On this basis, the Inception algorithm is fused with an improved U-Net fully convolutional network to construct a lung computed tomography image classification model based on an improved Inception network. The Inception algorithm is then compared with other algorithms for performance analysis. The results show that the proposed algorithm has the highest accuracy of 92.7% and the lowest error rate of 0.013%, outperforming the comparison algorithms. In terms of recall, the algorithm is approximately 0.121 and 0.213 higher than the ResNet and GoogLeNet algorithms, respectively. Compared with other models, the proposed model achieves a classification accuracy of 98.1% for viral pneumonia, with faster convergence and fewer required parameters. These results indicate that the proposed Inception network-based lung computed tomography image classification model can process data efficiently, provide technical support for lung computed tomography image classification, and thereby improve the accuracy of lung disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_23-Lung_CT_Image_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Serious Games Model for Higher-Order Thinking Skills in Science Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151022</link>
        <id>10.14569/IJACSA.2024.0151022</id>
        <doi>10.14569/IJACSA.2024.0151022</doi>
        <lastModDate>2024-10-28T14:12:56.0530000+00:00</lastModDate>
        
        <creator>Siti Norliza Awang Noh</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <subject>Serious game; Higher-Order Thinking (HOT) skills; science education; game element; learning element</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The popularity of digital games has led to the emergence of serious games, which are developed for specific purposes beyond mere entertainment. Serious games in education represent a more innovative and current pedagogical approach. Although existing digital games have been shown to improve critical thinking skills, research in science education remains limited. A preliminary study found that digital games developed for science teaching do not incorporate all aspects of Higher-Order Thinking Skills (HOTS). This study aims to identify and validate game components and design a serious game model for HOTS in science education (PKBATDPS Model), which was validated using the Electric Circuit prototype. The study is divided into four phases: analysis, design, development, and evaluation. During the analysis phase, the components of the PKBATDPS model were identified. The Electric Circuit prototype was evaluated using a quasi-experimental procedure that included pre-tests, post-tests, and learning motivation questionnaires. The experiment involved 32 elementary students: 16 in the experimental group used the serious game application prototype, whereas 16 in the control group received the traditional method. The results show that the PKBATDPS Model can be effectively used to increase students’ HOTS and motivation in science education.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_22-Serious_Games_Model_for_Higher_Order_Thinking_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Muni Platform: Efficient Emergency and Citizen Security Management Based on Geolocation, Technological Integration and Real Time Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151021</link>
        <id>10.14569/IJACSA.2024.0151021</id>
        <doi>10.14569/IJACSA.2024.0151021</doi>
        <lastModDate>2024-10-28T14:12:56.0400000+00:00</lastModDate>
        
        <creator>Gerson Castro-Chucan</creator>
        
        <creator>Hiroshi Chalco-Pe&#241;afiel</creator>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Victor Cornejo-Aparicio</creator>
        
        <subject>Citizen security; geolocation; emergency management; cloud technology; alert platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The global increase in crime across cities has led to the development and implementation of technological solutions, such as the Smart Muni platform, designed to enhance citizen security. This platform integrates geolocation, real-time notifications, and a digital panic button to optimize emergency management and coordination between citizens and authorities. Developed using a structured approach that included requirements analysis, system design, development, testing, validation, deployment, and maintenance, the platform employs advanced technologies such as Firebase, Amazon S3, and Twilio, ensuring scalability, high availability, and seamless communication. Initially implemented in two districts with high crime rates, Smart Muni registered an average of 10 daily alerts, with peaks of up to 50 alerts in a single day. The system has proven effective in managing frequent incidents like alcoholism and domestic violence, significantly reducing response times and improving coordination. Despite its success, Smart Muni faces challenges related to optimizing its resilience against potential system failures and improving its ability to handle increased data loads. In comparison to other international systems, Smart Muni&#39;s flexible and scalable architecture stands out, though future enhancements are required to further strengthen the system’s reliability and expand its features. Overall, Smart Muni has proven to be a valuable tool in improving citizen security, fostering stronger relationships between citizens and authorities, and contributing to a safer community.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_21-Smart_Muni_Platform_Efficient_Emergency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Remote Health Monitoring Using Deep Learning and Parallel Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151020</link>
        <id>10.14569/IJACSA.2024.0151020</id>
        <doi>10.14569/IJACSA.2024.0151020</doi>
        <lastModDate>2024-10-28T14:12:56.0230000+00:00</lastModDate>
        
        <creator>Zakaria El Khadiri</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Amine Saddik</creator>
        
        <subject>Real-time healthcare; embedded systems; heterogeneous computing; deep learning; CPU-GPU architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This study presents a novel approach for non-contact extraction of physiological parameters, such as heart rate and respiratory rate, from facial images captured using RGB cameras, leveraging recent advancements in deep learning and signal processing techniques. The proposed system integrates artificial intelligence-driven algorithms for accurately estimating vital signs, addressing key challenges such as variations in lighting conditions, facial orientation, and noise. The system is implemented on both a naive homogeneous architecture and an optimized heterogeneous CPU-GPU system to enhance real-time performance and computational efficiency. A comparative analysis is performed to evaluate processing speed, accuracy, and resource utilization across both architectures. Results demonstrate that the optimized heterogeneous system significantly outperforms the homogeneous counterpart, achieving faster processing times while maintaining high accuracy levels. This performance boost is achieved through parallel computing frameworks such as OpenMP and OpenCL, which allow for efficient resource allocation and scalability. The research highlights the potential of heterogeneous architectures for real-time healthcare applications, including remote patient monitoring and telemedicine, providing a robust solution for future developments in non-invasive health technology.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_20-Efficient_Remote_Health_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization with Adaptive Learning: A Better Approach for Reducing SSE to Fit Accurate Linear Regression Model for Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151019</link>
        <id>10.14569/IJACSA.2024.0151019</id>
        <doi>10.14569/IJACSA.2024.0151019</doi>
        <lastModDate>2024-10-28T14:12:56.0070000+00:00</lastModDate>
        
        <creator>Vijay Kumar Verma</creator>
        
        <creator>Umesh Banodha</creator>
        
        <creator>Kamlesh Malpani</creator>
        
        <subject>Adaptive learning; regression; optimization; minimum; cost; objective; error; random</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Optimization provides a way through which an optimal solution can be achieved: it is about producing accurate and optimal output for a given problem using the minimum available resources. It refers to the task of minimizing an objective function f(x) parameterized by x, or of minimizing a cost function with respect to the model’s parameters. In machine learning, optimization is slightly different. For most conventional problems, the shape, size, and type of the data are well known, and such information indicates where improvement is needed; in machine learning, however, optimization must work well even with no knowledge of new data. The method proposed in this paper, named Optimization with Adaptive Learning, minimizes the cost, in terms of the number of iterations, for linear regression to fit the correct line for a given dataset and reduce the residual error. In regression analysis, a curve or line is fitted to the data objects such that the differences in distances between the data points and the curve or line are always minimal. The proposed approach initializes random values for the parameters of the linear model and calculates the error (SSE). The objective is to minimize the SSE; if the SSE is large, the selected initial values must be adjusted. The step size used in each iteration determines the movement toward the local minimum for the optimal value. After a certain number of repetitions, the minimum value of the SSE is found and remains stable with no further change. Real-life datasets have been used for the experimental analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_19-Optimization_with_Adaptive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberbullying Detection on Social Networks Using a Hybrid Deep Learning Architecture Based on Convolutional and Recurrent Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151018</link>
        <id>10.14569/IJACSA.2024.0151018</id>
        <doi>10.14569/IJACSA.2024.0151018</doi>
        <lastModDate>2024-10-28T14:12:55.9600000+00:00</lastModDate>
        
        <creator>Aigerim Altayeva</creator>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Aigerim Toktarova</creator>
        
        <creator>Abdimukhan Tolep</creator>
        
        <subject>Cyberbullying detection; deep learning; CNN; LSTM; social media monitoring; sentiment analysis; digital safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This research paper explores the development and efficacy of a hybrid deep learning architecture for cyberbullying detection on social media platforms, integrating Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. By leveraging the strengths of both CNNs and LSTMs, the model aims to enhance the accuracy and sensitivity of detecting cyberbullying incidents. The study systematically evaluates the performance of the proposed model through a series of experiments involving a diverse dataset derived from various social media interactions, categorized by sentiment and type of bullying. Results indicate that while the model achieves high accuracy in identifying cyberbullying, challenges such as overfitting and the need for better generalization to unseen data persist. The paper also discusses ethical considerations and the potential for bias in automated monitoring systems, stressing the importance of ethical AI practices in social media governance. The findings underscore the complexity of automated cyberbullying detection and highlight the necessity for advanced machine learning techniques that are robust, scalable, and aligned with ethical standards. This study contributes to the broader discourse on the application of artificial intelligence in enhancing digital safety and advocates for a multidisciplinary approach to address the socio-technical challenges posed by cyberbullying in the digital age.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_18-Cyberbullying_Detection_on_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Badminton Tracking and Motion Evaluation Model Based on Faster RCNN and Improved VGG19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151017</link>
        <id>10.14569/IJACSA.2024.0151017</id>
        <doi>10.14569/IJACSA.2024.0151017</doi>
        <lastModDate>2024-10-28T14:12:55.9430000+00:00</lastModDate>
        
        <creator>Jun Ou</creator>
        
        <creator>Chao Fu</creator>
        
        <creator>Yanyun Cao</creator>
        
        <subject>Faster RCNN; VGG19; badminton; target tracking; motion evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Badminton, as a popular sport, contains rich information on body motions and motion trajectories. Accurately identifying the swinging motions during badminton play is of great significance for badminton education, promotion, and competition. Therefore, based on the Faster R-CNN multi-object tracking framework, a new badminton tracking and motion evaluation model is proposed by introducing a VGG19 network architecture and a real-time multi-person pose estimation algorithm for performance optimization. The experimental results showed that the new badminton tracking and motion evaluation model achieved an average processing speed of 31.02 frames per second for five bone points in the human head, shoulder, elbow, wrist, and neck. Its highest percentage of correctly detected key points for the head, shoulders, elbows, wrists, and neck reached 98.05%, 98.10%, 97.89%, 97.55%, and 98.26%, respectively. The minimum values of mean square error and mean absolute error were only 0.021 and 0.026. The highest resource consumption rate was only 6.85%, and the highest accuracy of motion evaluation was 97.71%. In addition, indoor and outdoor environments had almost no impact on the performance of the model. In summary, the study improves the fast region convolutional neural network and applies it to badminton tracking and motion evaluation with higher effectiveness and recognition accuracy, demonstrating a more effective approach for the development of badminton as a sport.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_17-Badminton_Tracking_and_Motion_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Review of Malaysia Digital Health Service Mobile Applications’ Usability Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151016</link>
        <id>10.14569/IJACSA.2024.0151016</id>
        <doi>10.14569/IJACSA.2024.0151016</doi>
        <lastModDate>2024-10-28T14:12:55.9300000+00:00</lastModDate>
        
        <creator>Kah Hao Lim</creator>
        
        <creator>Chia Yean Lim</creator>
        
        <creator>Anusha Achuthan</creator>
        
        <creator>Chin Ernst Wong</creator>
        
        <creator>Vina Phei Sean Tan</creator>
        
        <subject>Health system accessible; ISO/IEC9126; Nielsen usability model; older adults; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Digital health services have become a trend and are in increasing demand in Malaysia. However, the adoption of mobile applications supporting digital health services in the country remains low, especially among older adults, owing in part to the applications’ limited usability support. This paper reviews the usability models and design factors that are relevant and applicable to the design of digital health service mobile applications for older adults. Seven usability design factors (efficiency, help and documentation, learnability, memorability, user-friendliness, need-base, and push-base) were found to be most suitable for supporting older adult users. Subsequently, a review was conducted on the fulfilment of these seven usability design factors in key Malaysian digital health service mobile applications. Findings showed that most applications supported high learnability and memorability but lacked support for the other five usability factors. Lastly, a usability design framework supporting Malaysian digital health service mobile applications for older adult users is proposed. A full exploratory study is the next step to validate the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_16-The_Review_of_Malaysia_Digital_Health_Service_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Object Detection for Sustainable Air Conditioner Energy Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151015</link>
        <id>10.14569/IJACSA.2024.0151015</id>
        <doi>10.14569/IJACSA.2024.0151015</doi>
        <lastModDate>2024-10-28T14:12:55.9130000+00:00</lastModDate>
        
        <creator>Chang Shi Ying</creator>
        
        <creator>C. PuiLin</creator>
        
        <subject>Deep learning; energy consumption; energy efficiency; global warming; climate change; real-time object detection; Air conditioner optimization; smart meter; environmental footprint; climate action</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Air conditioning has become indispensable for maintaining human comfort, especially during hot weather, as people rely on it to stay cool indoors. However, the long-term and uncontrolled use of air conditioners has significantly contributed to climate change and environmental degradation. The extensive use of air conditioners releases more carbon dioxide, a greenhouse gas, into the atmosphere, exacerbating global warming and leading to adverse climate impacts. The proposed sustainable air conditioning energy management system aims to address this issue by optimising air conditioner use while minimising its environmental footprint and mitigating climate change. Current air conditioning systems in offices, buildings, and homes typically rely on fixed temperature settings, leading to excessive energy consumption and increased greenhouse gas emissions. Existing solutions, such as fixed timers, manual timer settings, and physical controllers, are ineffective as they cannot dynamically respond to changes in environmental conditions, such as room occupancy and activity levels, resulting in significant inefficiencies and environmental hazards. To overcome these limitations, the proposed system introduces an innovative solution using software engineering technology, specifically real-time object detection, to control air conditioning energy usage. This approach redefines air conditioning management by allowing the system to dynamically adapt to room occupancy, environmental factors, and activity levels, ensuring the right amount of cooling is delivered at the right time. This method represents a concrete and effective response to climate change challenges and demonstrates a commitment to creating a sustainable and environmentally responsible future.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_15-Real_Time_Object_Detection_for_Sustainable_Air_Conditioner.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on NS Beyond 5G: Techniques, Applications, Challenges and Future Research Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151014</link>
        <id>10.14569/IJACSA.2024.0151014</id>
        <doi>10.14569/IJACSA.2024.0151014</doi>
        <lastModDate>2024-10-28T14:12:55.8970000+00:00</lastModDate>
        
        <creator>Cui Zhiyi</creator>
        
        <creator>Azana Hafizah Mohd Aman</creator>
        
        <creator>Faizan Qamar</creator>
        
        <subject>5G; network slicing; resource allocation; dynamic allocation; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>With the advent of the fifth-generation (5G) era, many emerging Internet of Things (IoT) applications have appeared to make life more convenient and intelligent. While the number of connected devices is growing, the network requirements of these devices also differ. Network slicing (NS), as an emerging technology, provides multiple logical networks over shared infrastructure. Each of these logical networks can provide specialized services for the different needs of different applications by defining its logical topology, reliability, and security level. This article provides an overview of the basic architecture, categories, and life cycle of network slicing. It then summarizes two kinds of resource allocation methods and the security problems associated with three kinds of network slicing technologies. A survey of recent studies finds that network slicing is widely used in the Industrial Internet of Things (IIoT), the Internet of Medical Things (IoMT), in-vehicle systems, and other applications. It raises network efficiency, improves service quality, and enhances security and privacy by optimizing indicators such as latency, resource management, and quality of service. Finally, future research directions for network slicing are proposed in light of the challenges faced by the different research methods.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_14-A_Review_on_NS_Beyond_5G_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Migrating from Monolithic to Microservice Architectures: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151013</link>
        <id>10.14569/IJACSA.2024.0151013</id>
        <doi>10.14569/IJACSA.2024.0151013</doi>
        <lastModDate>2024-10-28T14:12:55.8830000+00:00</lastModDate>
        
        <creator>Hossam Hassan</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <creator>Wael Mohamed</creator>
        
        <subject>Software migration; software evolution; monolithic architecture; microservice architecture; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Migration from monolithic software systems to modern microservice architecture is a critical process for enhancing software systems&#39; scalability, maintainability, and performance. This study conducted a systematic literature review to explore the various methodologies, techniques, and algorithms used in the migration of monolithic systems to modern microservice architectures. Furthermore, this study underscored the role of artificial intelligence in enhancing the efficiency and effectiveness of the migration process by examining recent literature to identify significant patterns, challenges, and optimal solutions. In addition, it emphasizes the importance of migrating monolithic systems into microservices, synthesizing various research studies that show how microservices enable greater flexibility, fault tolerance, and independent scalability. The findings offer valuable insights for both researchers and practitioners in the software industry and provide practical guidance on implementing AI-driven methodologies in software architecture evolution. Finally, we highlight future research directions toward automating software architecture migration.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_13-Migrating_from_Monolithic_to_Microservice_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Usability Evaluation of Virtual Reality in Cultural Landscape Promotion Platform Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151012</link>
        <id>10.14569/IJACSA.2024.0151012</id>
        <doi>10.14569/IJACSA.2024.0151012</doi>
        <lastModDate>2024-10-28T14:12:55.8670000+00:00</lastModDate>
        
        <creator>Yufang Huang</creator>
        
        <creator>Yin Luo</creator>
        
        <creator>Jingyu Zheng</creator>
        
        <creator>Xunxiang Li</creator>
        
        <subject>Virtual reality technology; cultural landscape promotion platform; usability assessment algorithm; Mayfly algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>To improve the efficiency of analysing the application of virtual reality technology in cultural landscape promotion platforms and to enhance the accuracy of usability assessment, a feasibility assessment method based on MA-BiGRU is proposed. Firstly, it analyses the application of virtual reality technology in cultural landscape promotion platforms and designs the application feasibility assessment; secondly, it combines the MA algorithm with a BiGRU network and proposes a usability assessment algorithm for virtual reality technology based on the MA-BiGRU model; lastly, it analyses the feasibility and validity of the proposed method using actual cases. The results show that, compared with the other models evaluated, the proposed method achieves higher assessment and prediction accuracy; it also effectively assists in completing the virtual reproduction of the cultural landscape promotion platform of Dongao Deng Village, Dongtou District, Wenzhou City, and improves the design effect of virtual reality technology in the cultural landscape promotion platform.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_12-Analysis_and_Usability_Evaluation_of_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Graph-Based Badminton Tactics Mining and Reasoning for Badminton Player Training Pattern Analysis and Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151011</link>
        <id>10.14569/IJACSA.2024.0151011</id>
        <doi>10.14569/IJACSA.2024.0151011</doi>
        <lastModDate>2024-10-28T14:12:55.8500000+00:00</lastModDate>
        
        <creator>Xingli Hu</creator>
        
        <creator>Jiangtao Li</creator>
        
        <creator>Ren Cai</creator>
        
        <subject>Badminton tactical analysis; graph neural networks; attention mechanisms; training pattern optimization; heterogeneous graph splitting; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>As the global emphasis on sports data analysis and athlete performance optimization continues to grow, traditional badminton training methods are increasingly insufficient to meet the demands of modern high-level competitive sports. The exploration and reasoning of badminton tactics can significantly aid coaches and athletes in better comprehending game strategies, playing a vital role in the analysis and optimization of training methods. By utilizing knowledge graph-based badminton tactics mining, an approach involving heterogeneous graph splitting is employed, coupled with the incorporation of a cross-relational attention mechanism within relational graph neural networks. This mechanism assigns varying weights based on the importance of neighboring nodes across different relations, facilitating information aggregation and dissemination across multiple relationships. Furthermore, to address the challenges posed by the complexity of large-scale knowledge graphs, which feature numerous entity relationships and intricate internal structures, techniques such as training subgraph sampling, positive-negative sampling, and block-diagonal matrix decomposition are introduced. These techniques help to reduce the computational load and complexity of model training, while also enhancing the model&#39;s generalization capabilities. Finally, comparative experiments conducted on a proprietary badminton tactics dataset demonstrated the effectiveness and superiority of the proposed model improvements when reasonable parameters were applied. The case study shows that this approach holds considerable promise for the analysis and optimization of badminton players&#39; training strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_11-Knowledge_Graph_Based_Badminton_Tactics_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention Mechanism-Based CNN-LSTM for Abusive Comments Detection and Classification in Social Media Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151010</link>
        <id>10.14569/IJACSA.2024.0151010</id>
        <doi>10.14569/IJACSA.2024.0151010</doi>
        <lastModDate>2024-10-28T14:12:55.8200000+00:00</lastModDate>
        
        <creator>BalaAnand Muthu</creator>
        
        <creator>Kogilavani Shanmugavadive</creator>
        
        <creator>Veerappampalayam Easwaramoorthy Sathishkumar</creator>
        
        <creator>Muthukumaran Maruthappa</creator>
        
        <creator>Malliga Subramanian</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Attention mechanism; hybrid CNN-LSTM model; machine learning model; deep learning model; abusive comments detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Human communication through social networks, blogs, forums, and online news portals has dramatically increased in recent years. People use these platforms to express their feelings, but sometimes hateful comments are also spread. Cyberbullying begins when abusive language is used in online comments to attack individuals such as celebrities and politicians, products, or groups of people associated with a given country, age, or religion. Due to the ever-growing number of messages, it is challenging to manually recognize these abusive comments on social media platforms. This research work concentrates on a novel attention mechanism-based hybrid Convolutional Neural Network - Long Short Term Memory (CNN-LSTM) model to detect abusive comments by extracting more contextual information from individual sentences. The proposed attention mechanism-based hybrid CNN-LSTM model is compared with various models on the dataset provided by the shared task on Abusive Comment Detection in Tamil – ACL 2022, which contains 9 class labels: Misandry, Counter-speech, Xenophobia, Misogyny, Hope-speech, Homophobia, Transphobic, Not-Tamil and None-of-the-above. We obtained accuracies of 67.14%, 68.92%, 65.35% and 68.75% with Na&#239;ve Bayes, Support Vector Machine, Logistic Regression and Random Forest, respectively. Furthermore, we applied the same dataset to deep learning models, namely Convolutional Neural Networks (CNN), Long Short Term Memory (LSTM), and Bidirectional Long Short Term Memory (Bi-LSTM), and obtained accuracies of 70.28%, 71.67% and 69.45%, respectively. To obtain more contextual information semantically, a novel attention mechanism is applied to the hybrid CNN-LSTM model, achieving an accuracy of 75.98%, an improvement over all the other models developed.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_10-Attention_Mechanism_Based_CNN_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Powered Waste Classification Using Convolutional Neural Networks (CNNs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151009</link>
        <id>10.14569/IJACSA.2024.0151009</id>
        <doi>10.14569/IJACSA.2024.0151009</doi>
        <lastModDate>2024-10-28T14:12:55.8030000+00:00</lastModDate>
        
        <creator>Chan Jia Yi</creator>
        
        <creator>Chong Fong Kim</creator>
        
        <subject>Convolutional neural networks; CNN; deep learning; waste classification; recycling; zero waste; SDGs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In Malaysia, approximately 70%-80% of recyclable materials end up in landfills due to low participation in the Separation at Source Initiative. This is largely attributed to the public perception that waste segregation is a foreign idea, coupled with a lack of public knowledge. Around 72.19% of residents are unsure about waste categorization and proper waste disposal. This confusion leads to apathy toward recycling efforts, exacerbated by deficient environmental awareness. Existing waste classification systems mainly rely on manual entry of waste item names, leading to inaccuracies and a lack of user engagement, prompting a shift towards advanced deep learning models. Moreover, current systems often fail to provide comprehensive disposal guidelines, leaving users uninformed. This project addresses the gap by developing an AI-Powered Waste Classification System incorporating a Convolutional Neural Network (CNN), classifying waste automatically and providing environmentally friendly disposal guidelines. By leveraging primary and secondary waste image data, the project achieves a training accuracy of 80.66% and a validation accuracy of 77.62% in waste classification. The uniqueness of this project lies in its utilization of a CNN within a user-friendly web interface that allows the user to capture or upload a waste image, offering immediate waste classification results and sustainable waste disposal guidelines. It also enables users to locate recycling centers and access the dashboard. This system represents an ongoing effort to educate people and contribute to the field of waste management. It promotes Sustainable Development Goal (SDG) 12 (Responsible Consumption and Production) and SDG 13 (Climate Action), contributes to zero waste, raises environmental awareness, and aligns with Malaysia&#39;s goals to increase recycling rates to 40% and reduce waste sent to landfills by 2025.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_9-AI_Powered_Waste_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Encoding with Generative AI: Towards Improved Machine Learning Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151007</link>
        <id>10.14569/IJACSA.2024.0151007</id>
        <doi>10.14569/IJACSA.2024.0151007</doi>
        <lastModDate>2024-10-28T14:12:55.7730000+00:00</lastModDate>
        
        <creator>Abdelkrim SAOUABE</creator>
        
        <creator>Hicham OUALLA</creator>
        
        <creator>Imad MOURTAJI</creator>
        
        <subject>Data encoding; Generative AI; salary simulator; human resource management; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This article explores the design and implementation of a Generative AI-based data encoding system aimed at enhancing human resource management processes. Addressing the complexity of HR data and the need for informed decision-making, the study introduces a novel approach that leverages Generative AI for data encoding. This approach is applied to an HR database to develop a machine learning model designed to create a salary simulator, capable of generating accurate and personalized salary estimates based on factors such as work experience, skills, geographical location, and market trends. The aim of this approach is to improve the performance of the machine learning model. Experimental results indicate that this encoding approach improves the accuracy and fairness of salary determinations. Overall, the article demonstrates how AI can revolutionize HR management by delivering innovative solutions for more equitable and strategic compensation practices.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_7-Data_Encoding_with_Generative_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Cross-Modal Hash Retrieval Model for Semantic Segmentation Network for Digital Libraries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151008</link>
        <id>10.14569/IJACSA.2024.0151008</id>
        <doi>10.14569/IJACSA.2024.0151008</doi>
        <lastModDate>2024-10-28T14:12:55.7730000+00:00</lastModDate>
        
        <creator>Siyu Tang</creator>
        
        <creator>Jun Yin</creator>
        
        <subject>Digital library; hash retrieval; semantic segmentation; word2vec; fully convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA&#39;s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2024.0151008.retraction</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_8-Cross_Modal_Hash_Retrieval_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Conversational Agents for Student Wellbeing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151006</link>
        <id>10.14569/IJACSA.2024.0151006</id>
        <doi>10.14569/IJACSA.2024.0151006</doi>
        <lastModDate>2024-10-28T14:12:55.7430000+00:00</lastModDate>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Li Zhang</creator>
        
        <creator>Dingfang Kang</creator>
        
        <creator>Katherina G. Pattit</creator>
        
        <subject>Conversational agent/chatbot; wellbeing; UX design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>The innovative development of AI technology provides new possibilities and new solutions to the problems that we are facing in modern society. Student well-being has been a major concern in well-being care, especially in the post-pandemic era. The availability and quality of well-being support have limited the accessibility of well-being resources to students. Using conversational agents (CAs) or chatbots to empower student well-being care is a promising solution for universities, considering the availability and cost of implementation. This research aims to explore how CAs can assist students with possible well-being concerns. We invited 96 participants to fill out surveys with their demographic information and 11 short-answer questions concerning their well-being and their acceptance and expectations of CAs. The results suggested that the participants accepted the use of well-being CAs, albeit with ethical concerns. Upon user acceptance, the participants expressed expectations about design features such as facial expression recognition, translation, images, personalized long-term memory, etc. Based on the results, this work presents a conceptual framework and chat flows for the design of a student well-being chatbot, which provides a user-centered design example for well-being UX designers. Further research will introduce detailed design discoveries and a high-fidelity CA prototype to shed light on student well-being support applications. Implementing the CA will enhance the accessibility and quality of student well-being services, fostering a healthier campus environment.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_6-Designing_Conversational_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Regression-Based Network Model for Continuous Face Recognition and Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151005</link>
        <id>10.14569/IJACSA.2024.0151005</id>
        <doi>10.14569/IJACSA.2024.0151005</doi>
        <lastModDate>2024-10-28T14:12:55.7270000+00:00</lastModDate>
        
        <creator>Bhanu Kiran Devisetty</creator>
        
        <creator>Ayush Goyal</creator>
        
        <creator>Avdesh Mishra</creator>
        
        <creator>Mais W Nijim</creator>
        
        <creator>David Hicks</creator>
        
        <creator>George Toscano</creator>
        
        <subject>Vision-based computing; object detection; face detection; face recognition; feature extraction; feature coefficients; classification; authentication; biometrics; biometric authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This research proposes a continuous remote biometric user authentication system implemented with a face recognition model pre-trained on face images. This work develops an algorithm combining the Hybrid Block Overlapping KT Polynomials (HBKT) and Regression-based Support Vector Machine (RSVM) methods for a face recognition-based remote user authentication system, using a model pre-trained on the ORL, Face94 and GT datasets to recognize authorized users from face images captured through a webcam. HBKT polynomials enhance feature extraction by capturing local and global facial patterns, while RSVM improves classification performance through efficient regression-based decision boundaries. The system can continuously capture user face images from the user’s webcam for user authentication, but it can be affected by lighting variations, occlusion, and computational overhead from continuous image capture. The system has been implemented in Python. The proposed method, when compared to previous state-of-the-art algorithms, was observed to have higher F-measure, accuracy, and speed for most cases. The proposed method achieved accuracies of 98.82% (ORL dataset), 96.73% (GT dataset), and 95.9% (Face94 dataset).</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_5-A_Hybrid_Regression_Based_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI in the Detection and Prevention of Distributed Denial of Service (DDoS) Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151004</link>
        <id>10.14569/IJACSA.2024.0151004</id>
        <doi>10.14569/IJACSA.2024.0151004</doi>
        <lastModDate>2024-10-28T14:12:55.6930000+00:00</lastModDate>
        
        <creator>Sina Ahmadi</creator>
        
        <subject>Artificial intelligence; Distributed Denial of Service (DDoS); machine learning; detection; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>Distributed Denial of Service (DDoS) attacks are malicious attacks that aim to disrupt the normal flow of traffic to a targeted server or network by overwhelming the server’s infrastructure with a flood of internet traffic. This study aims to investigate several artificial intelligence (AI) models and utilise them in a DDoS detection system. The paper examines how AI is being used to detect DDoS attacks in real-time to find the most accurate methods to improve network security. The machine learning models identified and discussed in this research include random forest, decision tree (DT), convolutional neural network (CNN), NGBoosT classifier, and stochastic gradient descent (SGD). The research findings demonstrate the effectiveness of these models in detecting DDoS attacks. The study highlights the potential for future enhancement of these technologies to improve the security and privacy of data servers and networks in real-time. Using the qualitative research method and comparing several AI models, the results reveal that the random forest model offers the best detection accuracy (99.9974%). This finding holds significant implications for the enhancement of future DDoS detection systems.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_4-AI_in_the_Detection_and_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triggered Screen Restriction Framework: Transforming Gamified Physical Interventions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151003</link>
        <id>10.14569/IJACSA.2024.0151003</id>
        <doi>10.14569/IJACSA.2024.0151003</doi>
        <lastModDate>2024-10-28T14:12:55.6800000+00:00</lastModDate>
        
        <creator>Majed Hariri</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Ulrike Genschel</creator>
        
        <subject>Gamification; physical activity; negative reinforcement; triggered screen restriction framework; TSR framework; gamified physical intervention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This study examines the effectiveness of the Triggered Screen Restriction (TSR) framework, a novel technique to promote exercise that combines negative reinforcement with adaptive gamification elements. The study examined the TSR framework’s impact on physical activity levels, addictive nature, health indicators, psychological factors, and app usability compared to a control group. A mixed experimental design was employed, with 30 participants randomly assigned to either an experimental group using a custom iOS app with the TSR framework or a control group using a similar app without TSR features. Results revealed that the TSR group demonstrated significantly higher physical activity levels (p &lt; .05). The TSR framework resulted in significant increases in app usage frequency (p &lt; .001). Health indicators showed a significant improvement in balance and stability through the single-leg stance test (p &lt; .05), while other health metrics, including maximum jumping jacks completed in one minute, post-exercise heart rate, and body composition, exhibited no significant changes. Analysis of psychological factors revealed a significant increase in perceived competence in the TSR group (p &lt; .05), with no significant changes observed in autonomy or relatedness. The TSR intervention demonstrated significantly better usability metrics, including ease of use, system reliability, and perceived usefulness, compared to the control condition (all p &lt; .001). The study contributes to the expanding adoption of gamified physical interventions, showcasing the TSR framework as an effective technique for addressing physical inactivity. Future research should explore long-term effectiveness, diverse populations, and integration with wearable devices to further validate and refine the TSR approach in addressing physical inactivity.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_3-Triggered_Screen_Restriction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Blockchain Based Supply Chain CO2 Footprint Tracking Framework Enabled by IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151002</link>
        <id>10.14569/IJACSA.2024.0151002</id>
        <doi>10.14569/IJACSA.2024.0151002</doi>
        <lastModDate>2024-10-28T14:12:55.6630000+00:00</lastModDate>
        
        <creator>Mohammad Yaser Mofatteh</creator>
        
        <creator>Roshanak Davallou</creator>
        
        <creator>Chaida Ndahiro Ishimwe</creator>
        
        <creator>Swaresh Suresh Divekar</creator>
        
        <creator>Omid Fatahi Valilai</creator>
        
        <subject>Blockchain; IoT; supply chain; carbon footprint; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>In various industries, the convergence of the Internet of Things (IoT) and blockchain technologies has left an indelible mark on the pursuit of decarbonization. These innovations have seamlessly integrated into diverse fields, from manufacturing to logistics, offering sustainable solutions that enhance operational efficiency, transparency, and accountability. The interplay between IoT and blockchain has particularly contributed to the reduction of carbon footprints, fostering environmentally responsible practices. As industries embrace these technologies, the decentralized and transparent nature of blockchain ensures traceability in supply chains, while IoT devices facilitate real-time data monitoring. Together, they create a powerful synergy that not only streamlines processes but also drives a collective commitment to reducing environmental impact, marking a paradigm shift towards greener and more sustainable industries. Within this landscape, this research offers a comprehensive exploration of the transformative potential of blockchain in supply chain management, emphasizing its intricate connection with IoT and carbon footprint reduction. The conceptual model presented delineates the seamless integration of these elements, providing a nuanced understanding of how blockchain can revolutionize transparency and sustainability. Through practical examples and a layered diagram, it showcases the tangible benefits of this integration, highlighting its capacity to enhance data integrity and transparency in real-world supply chain scenarios. The research stands as a testament to the instrumental role that blockchain can play in fostering environmentally responsible practices within supply chains, laying the groundwork for a more sustainable future.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_2-Developing_a_Blockchain_Based_Supply_Chain_CO2_Footprint.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an AI Based Failure Predictor Model to Reduce Filament Waste for a Sustainable 3D Printing Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0151001</link>
        <id>10.14569/IJACSA.2024.0151001</id>
        <doi>10.14569/IJACSA.2024.0151001</doi>
        <lastModDate>2024-10-28T14:12:55.6470000+00:00</lastModDate>
        
        <creator>Noushin Mohammadian</creator>
        
        <creator>Melissa Sof&#237;a Molina Silva</creator>
        
        <creator>Giorgi Basiladze</creator>
        
        <creator>Omid Fatahi Valilai</creator>
        
        <subject>3D printing; Fused Filament Fabrication (FFF); motion tracking; environmental sustainability; printing waste reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(10), 2024</description>
        <description>This paper delves into the integration of motion tracking technology for real-time monitoring in 3D printing, with a focus on the popular fused filament fabrication (FFF) technique. Despite FFF&#39;s cost-efficiency, prevalent printing errors pose significant challenges to its commercial and environmental viability. This study proposes a solution by incorporating motion tracking nodes into the 3D printing process, tracked by cameras, enabling dynamic identification and rectification of printing failures. Addressing key research questions, the paper explores the applicability of motion tracking for failure detection, its impact on printed object quality, and the potential reduction in 3D printing waste. The proposed real-time monitoring system aims to fill a critical gap in existing 3D printing procedures, providing dynamic failure identification. The study integrates machine learning, computer vision, and motion tracking technologies, employing an inductive theoretical development strategy with active learning iterations for continuous improvement. Highlighting the revolutionary potential of 3D printing and acknowledging challenges in continuous monitoring and waste management, the suggested system pioneers real-time monitoring, striving to enhance efficiency, sustainability, and adaptability to diverse production demands. As the study progresses into implementation, it aspires to contribute significantly to the evolution of 3D printing technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No10/Paper_1-Development_of_an_AI_Based_Failure_Predictor_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Sign Language Fingerspelling Recognition System Using 2D Deep CNN with Two-Stream Feature Extraction Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509108</link>
        <id>10.14569/IJACSA.2024.01509108</id>
        <doi>10.14569/IJACSA.2024.01509108</doi>
        <lastModDate>2024-10-03T09:58:01.8270000+00:00</lastModDate>
        
        <creator>Aziza Zhidebayeva</creator>
        
        <creator>Gulira Nurmukhanbetova</creator>
        
        <creator>Sapargali Aldeshov</creator>
        
        <creator>Kamshat Zhamalova</creator>
        
        <creator>Satmyrza Mamikov</creator>
        
        <creator>Nursaule Torebay</creator>
        
        <subject>Deep learning; sign language recognition; convolutional neural networks; real-time processing; gesture recognition; machine learning; accessibility technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This research paper introduces a novel sign language recognition system developed using advanced deep learning (DL) techniques aimed at enhancing communication capabilities between deaf and hearing individuals. The system leverages a convolutional neural network (CNN) architecture, optimized for the real-time interpretation of dynamic hand gestures that constitute sign language. A comprehensive dataset was employed to train and validate the model, encompassing a diverse range of gestures across different environmental settings. Comparative analysis revealed that the deep learning-based model significantly outperforms traditional machine learning techniques in terms of recognition accuracy, particularly with the increase in the volume of training data. This was illustrated through various performance metrics, including a detailed confusion matrix and Levenshtein distance measurements, highlighting the system’s efficacy in accurately identifying complex gestures. Real-time application tests further demonstrated the model&#39;s robustness and adaptability to varying lighting conditions and backgrounds, essential for practical deployment. Key challenges identified include the need for broader linguistic diversity in training datasets and enhanced model sensitivity to subtle gestural distinctions. The paper concludes with suggestions for future research directions, emphasizing algorithm optimization, data diversification, and user-centric design improvements to foster wider adoption and usability. This study underscores the potential of deep learning technologies to revolutionize assistive communication tools, making them more accessible and effective for the deaf community.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_108-Real_Time_Sign_Language_Fingerspelling_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Road Damage Detection System on Deep Learning Based Image Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509107</link>
        <id>10.14569/IJACSA.2024.01509107</id>
        <doi>10.14569/IJACSA.2024.01509107</doi>
        <lastModDate>2024-10-03T09:58:01.7800000+00:00</lastModDate>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <creator>Belik Gleb</creator>
        
        <creator>Nazbek Katayev</creator>
        
        <creator>Islam Menglibay</creator>
        
        <creator>Zeinel Momynkulov</creator>
        
        <subject>Deep learning; road damage detection; Mask R-CNN; image segmentation; convolutional neural networks; infrastructure management; smart cities; real-time analytics; predictive maintenance; urban planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This research paper introduces a sophisticated deep learning-based system for real-time detection and segmentation of road damages, utilizing the Mask R-CNN framework to enhance road maintenance and safety. The primary objective was to develop a robust automated system capable of accurately identifying and classifying various types of road damages under diverse environmental conditions. The system employs advanced convolutional neural networks to process and analyze images captured from road surfaces, enabling precise localization and segmentation of damages such as cracks, potholes, and surface wear. Evaluation of the model&#39;s performance through metrics like accuracy, precision, recall, and F1-score demonstrated high effectiveness in real-world scenarios. The confusion matrix and loss curves presented in the study illustrate the system&#39;s ability to generalize well to unseen data, mitigating overfitting while maintaining high detection sensitivity. Challenges such as variable lighting, shadows, and background noise were addressed, highlighting the system&#39;s resilience and the need for further dataset diversification and integration of multimodal data sources. The potential improvements discussed include refining the convolutional network architecture and incorporating predictive maintenance capabilities. The system&#39;s application extends beyond mere detection, promising transformative impacts on urban planning and infrastructure management by integrating with smart city frameworks to facilitate real-time, predictive road maintenance. This research sets a benchmark for future developments in the field of automated road assessment, pointing towards a future where AI-driven technologies significantly enhance public safety and infrastructure efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_107-Real_Time_Road_Damage_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototype of an Indoor Pathfinding Application with Obstacle Detection for the Visually Impaired</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509106</link>
        <id>10.14569/IJACSA.2024.01509106</id>
        <doi>10.14569/IJACSA.2024.01509106</doi>
        <lastModDate>2024-10-03T09:58:01.7500000+00:00</lastModDate>
        
        <creator>Ken Gorro</creator>
        
        <creator>Lawrence Roble</creator>
        
        <creator>Mike Albert Magana</creator>
        
        <creator>Rey Paolo Buot</creator>
        
        <creator>Louis Severino Romano</creator>
        
        <creator>Herbert Cando</creator>
        
        <creator>Bonifacio Amper</creator>
        
        <creator>Rhyan Jay Signe</creator>
        
        <creator>Elmo Ranolo</creator>
        
        <subject>Yolov8; A-star algorithm; pathfinding; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study presents an initial prototype for a project aimed at assisting visually impaired individuals using deep learning techniques. The proposed system utilizes the You Only Look Once (YOLOv8) algorithm to detect objects tagged as obstacles. Designed for indoor environments, the system employs a CCTV camera and a computer server running the YOLOv8 model. Additionally, the A-star algorithm is used to determine the optimal path to avoid detected obstacles. Video frames are divided into tiles, each considered a node; nodes with detected objects are marked with a value of 0. The YOLOv8 model currently achieves an initial accuracy rate of 70%, with a mean Average Precision (mAP) at an Intersection over Union (IoU) threshold of 0.5 reaching 0.993 across all classes. This high mAP indicates an exceptional balance between precision and recall, signifying the model&#39;s effectiveness in object detection. Furthermore, the model yields an impressive F1-score of 0.99 at a confidence threshold of 0.624, demonstrating a robust balance between precision and recall, which is crucial for minimizing both false positives and false negatives. The prototype under development assumes that a destination can be set by an operator of the system using the server that connects to the CCTV camera. The system was tested in enclosed environments and was able to provide a path that potentially avoids obstacles. The development of audio commands to guide visually impaired users is ongoing. These audio commands depend on identifying the direction an individual is going, requiring an additional deep-learning model to generate accurate instructions.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_106-Prototype_of_an_Indoor_Pathfinding_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive ChatBot for PDF Content Conversation Using an LLM Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509105</link>
        <id>10.14569/IJACSA.2024.01509105</id>
        <doi>10.14569/IJACSA.2024.01509105</doi>
        <lastModDate>2024-10-03T09:58:01.6100000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Seow Yu Xuan</creator>
        
        <creator>Wong Man Ee</creator>
        
        <creator>Lee Kuok Tiung</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <subject>Natural language processing; learning opportunities; ChatGPT; PDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Natural Language Processing (NLP) leverages Artificial Intelligence (AI) to enable computer programs to understand and generate human language. ChatGPT has recently become popular in assignment accomplishment. This project aims to develop and improve an interactive PDF chat application using OpenAI&#39;s large language model (LLM), specifically GPT-3.5, integrated with the Streamlit and LangChain frameworks to assist in the learning process. The application enhances user interaction with documents by providing real-time text extraction, summarization, translation, and user-defined question-answering to increase learning opportunities. Key features include obtaining document summaries, multilingual support for improved accessibility, and a document preview section with features such as zoom, rotation, and download. Although it currently faces limitations in handling image-rich PDFs, future enhancements include better image rendering, conversation history, and query download features. Overall, this interactive chatbot model aims to streamline document interaction, making information retrieval efficient and user-friendly.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_105-Interactive_ChatBot_for_PDF_Content_Conversation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Neural Network Model for Water Quality Prediction in the Amoju Hydrographic Subbasin, Cajamarca-Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509104</link>
        <id>10.14569/IJACSA.2024.01509104</id>
        <doi>10.14569/IJACSA.2024.01509104</doi>
        <lastModDate>2024-09-30T08:10:15.0930000+00:00</lastModDate>
        
        <creator>Alex Alfredo Huaman Llanos</creator>
        
        <creator>Jeimis Royler Yalta Meza</creator>
        
        <creator>Danicza Violeta Sanchez Cordova</creator>
        
        <creator>Juan Carlos Chasquero Martinez</creator>
        
        <creator>Lenin Qui&#241;ones Huatangari</creator>
        
        <creator>Dulcet Lorena Quinto Sanchez</creator>
        
        <creator>Roxana Rojas Segura</creator>
        
        <creator>Alfredo Lazaro Lude&#241;a Gutierrez</creator>
        
        <subject>Artificial neural networks; hydrographic subbasin; machine learning models; water quality index; water resource management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Water quality is crucial for sustaining life, and accurate prediction models are essential for effective management. This study introduces an Artificial Neural Network (ANN) model designed to predict the Water Quality Index (WQI) in the Amoju Hydrographic Subbasin, Cajamarca-Peru. The model was developed using key water quality parameters, including electrical conductivity (EC), total dissolved solids (TDS), calcium carbonate (CaCO3), and phosphate (PO₄³⁻), identified through Pearson correlation analysis. Data from water samples collected over six months were used to train and validate the model. Results revealed that the ANN model achieved high predictive accuracy, with a significant correlation between WQI and the aforementioned parameters. The model&#39;s performance outstrips traditional methods, demonstrating its capability to effectively capture complex interdependencies among water quality indicators. This research emphasizes the potential of AI-driven approaches for enhancing predictive accuracy in environmental monitoring. Future studies should consider incorporating additional variables, such as heavy metals and microbial indicators, and consider the application of real-time AI-driven monitoring systems to further refine water quality management strategies. The ANN model presented here offers a promising tool for decision-makers, providing a reliable method for predicting water quality in similar hydrographic basins and contributing to the broader field of AI in environmental science.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_104-An_Artificial_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Open-Domain Search Quiz Engine Based on Transformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509103</link>
        <id>10.14569/IJACSA.2024.01509103</id>
        <doi>10.14569/IJACSA.2024.01509103</doi>
        <lastModDate>2024-09-30T08:10:15.0770000+00:00</lastModDate>
        
        <creator>Xiaoling Niu</creator>
        
        <creator>Ge Guo</creator>
        
        <subject>Natural language processing; deep learning; transformer; Bi-LSTM; semantic understanding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>As the volume of information on the Internet continues to grow exponentially, efficient retrieval of relevant data has become a significant challenge. Traditional keyword matching techniques, while useful, often fall short in addressing the complex and varied queries users present. This paper introduces a novel approach to automated question and answer systems by integrating deep learning and natural language processing (NLP) technologies. Specifically, it combines the Transformer model with the HowNet knowledge base to enhance semantic understanding and contextual relevance of responses. The proposed system architecture includes layers for word embedding, Transformer encoding, attention mechanisms, and Bi-directional Long Short-Term Memory (Bi-LSTM) processing, enabling sophisticated semantic matching and implication recognition. Using the BQ Corpus dataset in the banking and finance domain, the system demonstrated substantial improvements in accuracy and F1-score over existing models. The primary contributions of this research are threefold: (1) the introduction of a semantic fusion approach using HowNet for enhanced contextual understanding, (2) the optimization of Transformer-based deep learning techniques for Q&amp;A systems, and (3) a comprehensive evaluation using the BQ Corpus dataset, demonstrating significant improvements in accuracy and F1-score over baseline models. These contributions have important implications for improving the handling of complex and synonym-rich queries in automated Q&amp;A systems. The experimental results highlight that the integrated approach significantly enhances the performance of automated Q&amp;A systems, offering a more efficient and accurate means of information retrieval. This advancement is particularly crucial in the era of big data and Web 3.0, where the ability to quickly and accurately access relevant information is essential for both users and organizations.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_103-An_Open_Domain_Search_Quiz_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Control Technology of Electric Pressurization Based on Fuzzy Neural Network PID</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509102</link>
        <id>10.14569/IJACSA.2024.01509102</id>
        <doi>10.14569/IJACSA.2024.01509102</doi>
        <lastModDate>2024-09-30T08:10:15.0630000+00:00</lastModDate>
        
        <creator>Yabing Li</creator>
        
        <creator>Limin Su</creator>
        
        <creator>Huili Guo</creator>
        
        <subject>Frequency conversion; PID control algorithm; electrical pressurization system; intelligent control technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In this study, we delved deeply into the intelligent control technology of electrical pressurization, utilizing a fuzzy neural network-based PID approach. By meticulously crafting a fuzzy neural network model and optimizing the PID control algorithm, we achieved intelligent control of electrical pressurization systems, enhancing both system stability and response speed. This paper presents a comparative analysis of the performance of intelligent electrical pressurization control utilizing a fuzzy neural network PID against conventional control methodologies. Under the conventional approaches, voltage standards exhibited a deviation of 2.5% with a fluctuation span that reached as high as 5%, and the system required 15 seconds to attain a stable condition. With fuzzy neural network PID control, the voltage deviation was narrowed to 1.5%, the fluctuation range was reduced to 3%, the system reaches steady state in 8 seconds, and energy consumption is reduced by 20%. The results show a significant improvement in practical applications: compared with traditional control methods, this technology markedly improves stability, response speed, and energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_102-Intelligent_Control_Technology_of_Electric_Pressurization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Meta-Heuristics-Based Solution for Multi-Objective Workflow Scheduling in Fog Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509101</link>
        <id>10.14569/IJACSA.2024.01509101</id>
        <doi>10.14569/IJACSA.2024.01509101</doi>
        <lastModDate>2024-09-30T08:10:15.0470000+00:00</lastModDate>
        
        <creator>Gyan Singh</creator>
        
        <creator>Vivek Dubey</creator>
        
        <subject>Fog computing; DAG; workflow applications; makespan; energy; PSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In recent years, there has been a significant increase in the volume of data generated by Internet of Things (IoT) applications, mostly driven by the rapid proliferation of IoT devices and advancements in communication technologies. The conventional cloud computing network was not specifically built to handle such a vast volume of data, leading to several issues, including increased processing time, higher costs, larger bandwidth usage, increased power usage, and communication delays. As a solution, conventional cloud servers have been expanded with additional layers of computing, storage, and networking, termed cloud-fog computing. Cloud-fog computing provides storage, processing, networking, and analytics capabilities in close proximity to IoT devices. The problem of scheduling workflow applications in cloud-fog environments to optimize several conflicting objectives is computationally complex. Particle Swarm Optimization (PSO) is a widely recognized evolutionary meta-heuristic, well suited to multi-objective problems because of its simplicity and quick convergence. Despite its wide acceptance, it does have several drawbacks, such as early convergence and solution stagnation. To overcome these limitations, this paper establishes a comprehensive theoretical model for scheduling workflow applications in cloud-fog systems. The proposed model considers several competing objectives, namely power usage, overall cost, and makespan. To achieve this, we introduce a novel algorithm, learning enhanced particle swarm optimization (LE-PSO), which incorporates an inverse tangent inertia weight policy and adaptive learning factor methods. The efficiency of LE-PSO is assessed using an operational data set of scientific workflow applications within a CloudSim-based simulation and validated against the state-of-the-art GAMPSO, EMMOO, PSO, and GA approaches. The suggested workflow scheduling achieves a substantial decrease in makespan and power usage while keeping the total cost at an optimal level, in comparison to existing meta-heuristics.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_101-A_Meta_Heuristics_Based_Solution_for_Multi_Objective_Workflow_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiclass Fruit Detection Using Improved YOLOv3 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01509100</link>
        <id>10.14569/IJACSA.2024.01509100</id>
        <doi>10.14569/IJACSA.2024.01509100</doi>
        <lastModDate>2024-09-30T08:10:15.0300000+00:00</lastModDate>
        
        <creator>Seema C. Shrawne</creator>
        
        <creator>Jay Sawant</creator>
        
        <creator>Omkar Chaubal</creator>
        
        <creator>Karan Suryawanshi</creator>
        
        <creator>Diven Sirwani</creator>
        
        <creator>Vijay K. Sambhe</creator>
        
        <subject>Precision agriculture; yield estimation; fruit detection; YOLOv3; feature concatenation; spatial contexting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Manual interventions continue to be used in fruit-picking and billing at large-scale fruit storage facilities. Recent advances in deep learning approaches, such as one-stage detectors like You Only Look Once (YOLO) and the Single Shot Detector (SSD), as well as two-stage detectors like Faster RCNN and Mask RCNN, aim to streamline the processes involved in fruit detection and enhance efficiency. However, these frameworks still struggle with multi-scale objects, in terms of performance and efficiency, due to large parameter sizes, and these problems worsen when multiple fruit classes are encountered. We propose an improved version of the one-stage detector framework YOLOv3 for multi-class fruit detection. Our proposed model addresses the challenges of multi-scale object detection and of detecting different fruit types in an image by incorporating CNN, bottleneck, and Spatial Pyramid Pooling Fast (SPPF) modules in the Head, Neck, and custom backbone of the YOLOv3 framework. Optimization of learnable parameters for computational efficiency is achieved by concatenating features at different feature map resolutions. The proposed model incorporates fewer layers and parameters than the YOLOv3 and YOLOv5 models. We performed extensive testing on three datasets downloaded from Roboflow and compared the results with the YOLOv3 and YOLOv5 models. Our model achieved mAP50 of 0.747 on Dataset 1, comprising images of apples, bananas, and oranges, and mAP50 of 0.981 on Dataset 2, consisting of images of apples, oranges, lemons, and pears. Testing on the MinneApple dataset, comprising on-tree images of apples of varied sizes, achieved an accuracy of 0.643. We observe that our model outperforms the YOLOv3 and YOLOv5 models.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_100-Multiclass_Fruit_Detection_Using_Improved_YOLOv3_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Relevant Feature Identification Approach to Detect APTs in HTTPS Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150999</link>
        <id>10.14569/IJACSA.2024.0150999</id>
        <doi>10.14569/IJACSA.2024.0150999</doi>
        <lastModDate>2024-09-30T08:10:15.0170000+00:00</lastModDate>
        
        <creator>Abdou Romaric Tapsoba</creator>
        
        <creator>Tounwendyam Frederic Ouedraogo</creator>
        
        <subject>DNS over HTTPS; advanced persistent threats; machine learning; cyber threat intelligence; MITRE ATT&amp;CK; domain generation algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study addresses the significant challenges posed by Advanced Persistent Threats (APTs) in modern computer networks, particularly their use of DNS to establish covert communication via command and control (C&amp;C) servers. The advent of TLS 1.3 encryption further complicates detection efforts, as critical data within DNS over HTTPS (DoH) traffic remains inaccessible, and decryption would compromise user privacy. APTs frequently leverage Domain Generation Algorithms (DGAs), necessitating real-time detection solutions based on immediately accessible features within HTTPS traffic. Current research predominantly focuses on system-level behavioral analysis, often neglecting the proactive potential offered by Cyber Threat Intelligence (CTI), which can reveal malicious patterns through Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IoCs). This study proposes an innovative approach utilizing the MITRE ATT&amp;CK framework to identify relevant features in the face of encryption and the inherent complexity of APT activities. The primary objective is to develop a robust dataset and methodology capable of detecting APT behaviors throughout their lifecycle, emphasizing a lightweight, cost-effective solution through passive monitoring of network traffic to ensure real-time detection. The key contributions of this research include an in-depth analysis of the encryption challenges in detecting DNS-based APTs, a thorough examination of APT attack strategies using DNS, and the integration of CTI to enhance detection capabilities. Moreover, this study introduces the KAPT 2024 dataset, generated by the KExtractor tool, and demonstrates the effectiveness of the detection model through experiments with a variety of machine learning algorithms. The results underscore the potential for this approach to significantly improve APT detection in encrypted network environments.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_99-A_Relevant_Feature_Identification_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of OptDG Algorithm for Solving the Knapsack Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150998</link>
        <id>10.14569/IJACSA.2024.0150998</id>
        <doi>10.14569/IJACSA.2024.0150998</doi>
        <lastModDate>2024-09-30T08:10:14.9830000+00:00</lastModDate>
        
        <creator>Matej Arlovic</creator>
        
        <creator>Tomislav Rudec</creator>
        
        <creator>Josip Miletic</creator>
        
        <creator>Josip Balen</creator>
        
        <subject>Dynamic programming; Dantzig algorithm; greedy algorithm; knapsack problem; linear programming; NP-Problem; optimization; OptDG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In computational complexity theory, problems are divided into classes such as P, NP, NP-complete, and NP-hard, which indicate how challenging particular types of problems are to solve. The Knapsack problem is a fundamental, well-known NP-hard optimization problem in computational complexity theory that has applications in a variety of disciplines. As one of the most well-known NP-hard problems, it has been studied extensively in science and practice from both theoretical and practical perspectives. One solution to the Knapsack problem is Dantzig’s greedy algorithm, which can be expressed as a linear programming algorithm that seeks the optimal solution to the Knapsack problem. In this paper, an optimized Dantzig greedy (OptDG) algorithm that addresses frequent edge cases is suggested. Furthermore, the OptDG algorithm is compared with Dantzig’s greedy and optimal dynamic programming algorithms for solving the Knapsack problem, and a performance evaluation is conducted.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_98-Proposal_of_OptDG_Algorithm_for_Solving_the_Knapsack_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed λ_Mining Model for Hierarchical Multi-Level Predictive Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150997</link>
        <id>10.14569/IJACSA.2024.0150997</id>
        <doi>10.14569/IJACSA.2024.0150997</doi>
        <lastModDate>2024-09-30T08:10:14.9530000+00:00</lastModDate>
        
        <creator>Yousef S. Alsahafi</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <creator>Amira M. Idrees</creator>
        
        <subject>Recommendation systems; data mining; features selection; associations rules mining; classification techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Delivering the most suitable products and services essentially relies on successfully exploring the potential relationship between customers and products. This immense need for intelligent exploration has led to the emergence of recommendation systems. In an environment where an immense variety exists, it is vital for buyers to have an intelligent exploratory map to guide them in finding their choices. Personalization has proven to be a successful contributor to recommenders, providing an accurate guide to exploring users’ preferences. In the field of recommendation systems, performance has continuously been measured by success in delivering accurate, personalized recommendations. There is no argument that personalization is one key success factor; however, this research argues that recommendation systems are not only about personalization. Other success factors should be considered when targeting optimality. The current research explores the hierarchy map representing the strengths and dependencies of the recommendation system pillars, associated with their influence level and relationships. Moreover, the research proposes a novel predictive approach that applies a hybrid of content-based and collaborative filtering recommendation systems to effectively provide the most suitable customer recommendations. The model utilizes a proposed feature selection approach to detect the most significant features and explore the most effective association schemes for the recommendation label feature. The proposed model is validated using a benchmark dataset by extracting direct and transitive associations and following the identified schematic for the required recommendations. The classification techniques are applied, proving the model&#39;s applicability with an accuracy ranging from 96% to 99%.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_97-A_Proposed_Mining_Model_for_Hierarchical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Hierarchical Mechanism for Handling Network Partitioning Over Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150996</link>
        <id>10.14569/IJACSA.2024.0150996</id>
        <doi>10.14569/IJACSA.2024.0150996</doi>
        <lastModDate>2024-09-30T08:10:14.9370000+00:00</lastModDate>
        
        <creator>Ali Tahir</creator>
        
        <creator>Fathe Jeribi</creator>
        
        <subject>Mobile Ad Hoc networks; network partitioning; distributed hash tables; logical cluster member node; logical cluster leader; logical network identifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Mobile ad hoc networks exhibit distinctive challenges, e.g., limited transmission range and dynamic mobility of the participating nodes. These challenges are the reasons for the frequent occurrence of network partitioning in mobile ad hoc networks. Network partitioning happens when a linked network topology is split into two or more independent partitions. Because of this phenomenon, a participating node in one partition maintains no linkage with a node in another partition. Network partitioning results in the inaccessibility of mapping knowledge, logical labeling space, and the logical structure of the participating nodes. As a result, the performance of a distributed hash tables (DHTs)-oriented routing mechanism is severely affected. In DHT-oriented routing methodologies, the logical network identifier of a new participating node is calculated by considering the logical network identifiers of all the physically neighboring nodes. The logical network identifiers are utilized for routing packets from a source participating node to a destination participating node in the network. In the event of network partitioning, logical network identifiers are computed incorrectly with respect to the physical proximity of the participating nodes. This research work suggests an effective routing mechanism to deal with the aforementioned network partitioning-related issues. Simulation results prove the superiority of the suggested scheme over existing mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_96-An_Efficient_Hierarchical_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Stock Price Prediction and Portfolio Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150995</link>
        <id>10.14569/IJACSA.2024.0150995</id>
        <doi>10.14569/IJACSA.2024.0150995</doi>
        <lastModDate>2024-09-30T08:10:14.9230000+00:00</lastModDate>
        
        <creator>Ashy Sebastian</creator>
        
        <creator>Veerta Tantia</creator>
        
        <subject>Deep learning; long short-term memory; stock price prediction; portfolio optimization; emerging markets; Indian stock market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Using deep learning for stock market predictions and portfolio optimizations is a burgeoning field of research. This study focuses on the stock market dynamics in developing countries, which are often considered less stable than their developed counterparts. The study is structured in two stages. In the first stage, the authors introduce a stacked LSTM model for predicting NIFTY stocks and then rank the stocks based on their predicted returns. In the second stage, the high-return stocks are selected to form 30 different portfolios with six different objectives, each comprising the top 7, 8, 9, and 10 NIFTY stocks. These portfolios are then compared based on risk and returns. Experimental results show that portfolios with five stocks offer the best returns and that adding more than nine stocks to the portfolio leads to excessive diversification and complexity. Therefore, the findings suggest that the proposed two-stage portfolio optimization method has the potential to construct a promising investment strategy, offering a balance between historical and future information on assets.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_95-Deep_Learning_for_Stock_Price_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Blockchain-Based Deep Learning Model for Cloud Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150994</link>
        <id>10.14569/IJACSA.2024.0150994</id>
        <doi>10.14569/IJACSA.2024.0150994</doi>
        <lastModDate>2024-09-30T08:10:14.8900000+00:00</lastModDate>
        
        <creator>Sultan Alasmari</creator>
        
        <subject>Intrusion detection system; blockchain; deep learning; hybrid optimization; cloud computing; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Cyberattacks are becoming increasingly complex and subtle. In many different types of networks, intrusion detection systems (IDSs) are frequently employed to help in the prompt detection of intrusions. Blockchain technology has recently gained a lot of attention as a means of sharing data without a trusted third party; specifically, it is impossible to change data stored in one block without changing all the following blocks. This study creates a deep learning (DL) method based on blockchain technology and hybrid optimization to improve the IDS&#39;s prediction accuracy. The UNSW-NB15 dataset is gathered via the Kaggle platform and utilized for training the system in Python. Principal component analysis (PCA) is used in preprocessing to eliminate errors and duplication. Next, association rule learning (ARL) and information gain (IG) approaches are employed to retrieve pertinent features. The best features, those that improve detection performance, are selected through hybrid seahorse and bat optimization (HSHBA). Lastly, an efficient intrusion detection system is created by designing Blockchain-based Ensemble DL (BEDL) models comprising convolutional neural networks (CNNs), restricted Boltzmann machines (RBMs), and generative adversarial networks (GANs). The constructed model&#39;s experimental results are verified against pre-existing classifiers, yielding an improved accuracy of 99.12% and a precision of 99%.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_94-Optimized_Blockchain_Based_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Intelligent Extraction Method for Key Electronic Information Based on Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150993</link>
        <id>10.14569/IJACSA.2024.0150993</id>
        <doi>10.14569/IJACSA.2024.0150993</doi>
        <lastModDate>2024-09-30T08:10:14.8600000+00:00</lastModDate>
        
        <creator>Xiaoqin Chen</creator>
        
        <creator>Xiaojun Cheng</creator>
        
        <subject>Key electronic information; intelligent extraction; TextRank; LSTM; context</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the rapid development of the Internet and other emerging media, finding the needed information accurately and in time from massive electronic documents has become an urgent problem. A key electronic information extraction method based on neural network learning ideas is proposed to solve the problems of time-consuming processing and difficult deep semantic feature mining in traditional text classification methods. Firstly, a weighted graph model is introduced to improve the TextRank keyword extraction algorithm, helping to capture complex data information and implicit semantics. The results indicate that the optimization method has the highest extraction accuracy (96.52%) on the CSL dataset, and its performance in feature extraction of information data is superior to other comparative models. Secondly, LSTM and a self-attention mechanism are combined to extract key features of contextual semantic information. The results indicate that this optimization method has relatively small training and testing errors in data classification and tends to converge in the later stages of iteration. The accuracy of information extraction reached 94.37%, which is better than other comparative models. The keyword extraction integrity of the fusion model on the THUCNews and Sogou News datasets was 86.2 and 84.1, respectively, with consistency of 96.3 and 94.7, and grammatical correctness of 92.1 and 92.2, respectively. The neural network-based extraction method proposed in this research can not only effectively improve the accuracy of information extraction but also adapt to a changing data environment, and it has great potential for application in the field of electronic information processing.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_93-Design_of_Intelligent_Extraction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Image Recognition Technology for Wind Turbine Blade Surface Defects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150992</link>
        <id>10.14569/IJACSA.2024.0150992</id>
        <doi>10.14569/IJACSA.2024.0150992</doi>
        <lastModDate>2024-09-30T08:10:14.8430000+00:00</lastModDate>
        
        <creator>Zheng Cao</creator>
        
        <creator>Qianming Wang</creator>
        
        <subject>Wind turbine blades; image recognition; defect detection; deep learning; WindDefectNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This paper proposes WindDefectNet, an image recognition system for surface defects of wind turbine blades, aiming at solving the key problems in wind turbine blade maintenance. At the beginning of the system design, the functional requirements and performance index requirements are clarified to ensure the realization of the functions of image acquisition and preprocessing, defect detection and classification, defect localization and size measurement, and to emphasize the key performance indexes such as accuracy, recall, processing speed and robustness of the system. The system architecture consists of multiple modules, including image acquisition and preprocessing module, feature extraction module, attention enhancement module, defect detection module, etc., which work together to achieve efficient defect recognition and localization. By adopting advanced deep learning techniques and model design, WindDefectNet is able to maintain high accuracy and stability in complex environments. Experimental results show that WindDefectNet performs well under different lighting conditions, shooting angles, wind speed and weather conditions, and has good environmental adaptability and robustness. The system provides strong technical support for blade maintenance in the wind power industry.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_92-Deep_Learning_Based_Image_Recognition_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moving Beyond Traditional Incident Response: Combating APTs with Warfare-Enabled Continuous Response</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150991</link>
        <id>10.14569/IJACSA.2024.0150991</id>
        <doi>10.14569/IJACSA.2024.0150991</doi>
        <lastModDate>2024-09-30T08:10:14.8270000+00:00</lastModDate>
        
        <creator>Abid Hussain Shah</creator>
        
        <subject>Information operations; information warfare; cyber security; dynamic capabilities; incident response capabilities; warfare enabled capabilities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>A critical examination of cybersecurity management practices shows that the security management used by organizations is mostly control-centered against a wide range of threats to information systems. This control-centered approach has matured to act as a shield against a large variety of attacks. Since threats against information systems are becoming sophisticated, persistent, and evolving, the current approach has not been very effective against the advanced strategies and techniques used by emerging threats such as APTs (Advanced Persistent Threats). The core argument of this paper is that, to match the capabilities of APTs, organizations need a major shift in their strategies. This shift needs to focus more on response-oriented techniques, relegating the erstwhile prevention-centered approach. Traditionally, warfare strategies are more response-oriented, and some non-kinetic strategies (those not involving physical fighting) can be useful in developing the response capability of information systems. Therefore, drawing on the warfare paradigm and making use of DCT (Dynamic Capability Theory), this research examines the applicability of warfare strategies in the entrepreneur domain. The article also contributes a research framework arguing that prevailing information security capabilities, such as incident response capabilities, can be integrated with security capabilities from warfare practices, resulting in warfare-enabled dynamic capabilities. Such capabilities can improve security performance.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_91-Moving_Beyond_Traditional_Incident_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Adoption of Electronic Payments in Online Shopping: The Mediating Role of Customer Trust</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150990</link>
        <id>10.14569/IJACSA.2024.0150990</id>
        <doi>10.14569/IJACSA.2024.0150990</doi>
        <lastModDate>2024-09-30T08:10:14.8130000+00:00</lastModDate>
        
        <creator>Nguyen Thi Phuong Giang</creator>
        
        <creator>Thai Dong Tan</creator>
        
        <creator>Le Huu Hung</creator>
        
        <creator>Nguyen Binh Phuong Duy</creator>
        
        <subject>Electronic payment; intention to use; online shopping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study investigates the factors influencing electronic payment in online shopping behavior among Ho Chi Minh City consumers. With the rapid advancement of technology, e-commerce has become a new trend, and understanding the intention to adopt electronic payment is crucial for online businesses. The research employs quantitative and qualitative methods, utilizing a survey of 437 Ho Chi Minh City consumers. The data collected is processed using SPSS 24 and SmartPls4 software. Eight factors related to consumers&#39; intention to use electronic payment are identified: social influence, security, perceived usefulness, convenience, ease of use, customer trust, perceived risk, and performance expectancy. The study&#39;s findings will contribute to the existing knowledge base for businesses, facilitating the promotion of electronic payment adoption. This support will aid businesses in developing more attractive online sales strategies, encouraging consumers to shop and pay online more frequently and, at the same time, contribute to supporting departments in formulating policies for digital payments, thereby promoting national digital transformation.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_90-The_Adoption_of_Electronic_Payments_in_Online_Shopping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study of BIM for Infrastructural Crack Detection and the Vital Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150989</link>
        <id>10.14569/IJACSA.2024.0150989</id>
        <doi>10.14569/IJACSA.2024.0150989</doi>
        <lastModDate>2024-09-30T08:10:14.7970000+00:00</lastModDate>
        
        <creator>Samuel Adekunle</creator>
        
        <creator>Opeoluwa Akinradewo</creator>
        
        <creator>Babatunde Ogunbayo</creator>
        
        <creator>Andrew Ebekozien</creator>
        
        <creator>Clinton Aigbavboa</creator>
        
        <subject>Developing country; emerging technology; facility management; visualisation; South Africa</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Building information modelling (BIM) is one of the emerging technologies in the construction industry, relevant to its productivity and efficiency, with applications that affect both the products and processes of the industry. An underdeveloped area that has received less attention is its adoption for crack detection and visualisation for infrastructural maintenance. This study provides a thorough perspective on BIM adoption for crack detection and visualisation. It also identifies the different strategies that can aid the adoption and use of BIM for infrastructural monitoring and maintenance in South Africa. The study adopted a quantitative approach, and questionnaires were distributed to industry professionals through an online platform. The collected data was analysed. The results indicate a need to incorporate this aspect into the HEI curriculum and to adopt a teaching approach that is practical and experimental.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_89-A_Comprehensive_Study_of_BIM_for_Infrastructural_Crack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Approaches for Predicting Occupancy Patterns and its Influence on Indoor Air Quality in Office Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150987</link>
        <id>10.14569/IJACSA.2024.0150987</id>
        <doi>10.14569/IJACSA.2024.0150987</doi>
        <lastModDate>2024-09-30T08:10:14.7800000+00:00</lastModDate>
        
        <creator>Amir Hamzah Mohd Shaberi</creator>
        
        <creator>Sumayyah Dzulkifly</creator>
        
        <creator>Wang Shir Li</creator>
        
        <creator>Yona Falinie A. Gaus</creator>
        
        <subject>Indoor air quality; occupancy patterns; machine learning; deep learning; regression models; Mean Squared Error; Mean Absolute Error; IAQ monitoring; IAQ management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>It is normal for the modern population to spend 12 hours or more daily indoors, where the level of comfort can be moderated. Yet indoor occupants are exposed to various air pollutants, just as they are outdoors. Indoor air pollution can be detrimental to occupants&#39; health, as noted by the United Nations Environment Programme (UNEP) in its Pollution Action Note published on 7 September 2021. According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) standards, occupancy patterns can influence indoor air quality. Hence, this paper investigates the utilisation of machine learning algorithms in predicting occupancy patterns against indoor air quality (IAQ) variables such as humidity, temperature, light, and carbon dioxide (CO2). This study compares the performance of selected machine learning approaches, namely deep learning (LSTM, CNN) and regression (ANN, SVR) models. In addition, it examines the evaluation metrics used to assess machine learning performance, specifically Mean Squared Error (MSE) and Mean Absolute Error (MAE). In the training phase, the SVR model achieved the lowest MAE of 0.0826 and MSE of 0.0280 compared to the other algorithms. The ANN model demonstrated slightly better generalization capabilities in the testing phase, while the LSTM model demonstrated robust performance in the test phase. Overall, the results highlight the significant impact of occupancy behaviour on IAQ variables and underscore the importance of advanced modelling techniques in IAQ monitoring and management, emphasizing the need for tailored approaches to address the complex relationship between occupancy patterns and IAQ variables.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_87-Machine_Learning_Approaches_for_Predicting_Occupancy_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Personality and Phobia Type Clustering with GMM and Spectral</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150988</link>
        <id>10.14569/IJACSA.2024.0150988</id>
        <doi>10.14569/IJACSA.2024.0150988</doi>
        <lastModDate>2024-09-30T08:10:14.7800000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Cheok Jia Wei</creator>
        
        <creator>Ong Tzi Min</creator>
        
        <creator>Lim Siew Mooi</creator>
        
        <creator>Lee Kuok Tiung</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <creator>Chaw Jun Kit</creator>
        
        <creator>Ayodeji Olalekan Salau</creator>
        
        <subject>Unsupervised learning; learning opportunities; clustering; personality; machine learning; Gaussian mixture model; spectral clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Personality traits, the unique characteristics defining individuals, have intrigued philosophers and scholars for centuries. With recent advances in machine learning, there is an opportunity to revolutionize how we understand and differentiate personality traits. This study seeks to develop a robust cluster analysis approach (unsupervised learning) to efficiently and accurately classify individuals based on their personality traits, overcoming the limitations of manual classification. The problem at hand is to create a system that can handle the subjective nature of qualitative personality data, providing insights into how people interact, collaborate, and behave in various social contexts and thus increase learning opportunities. To achieve this, various unsupervised clustering techniques, including spectral clustering and Gaussian mixture models, will be employed to identify similarities in unlabeled data collected through interview questions. The clustering approach is crucial in helping policy makers to identify suitable approaches to improve teamwork efficiency in both educational institutions and job industries.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_88-Visualization_of_Personality_and_Phobia_Type_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Accurate Detection of Diabetic Retinopathy Using Image Processing and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150986</link>
        <id>10.14569/IJACSA.2024.0150986</id>
        <doi>10.14569/IJACSA.2024.0150986</doi>
        <lastModDate>2024-09-30T08:10:14.7670000+00:00</lastModDate>
        
        <creator>K. Kalindhu Navanjana De Silva</creator>
        
        <creator>T. Sanduni Kumari Lanka Fernando</creator>
        
        <creator>L. D. Lakshan Sandaruwan Jayasinghe</creator>
        
        <creator>M.H.Dinuka Sandaruwan Jayalath</creator>
        
        <creator>Kasun Karunanayake</creator>
        
        <creator>B.A.P. Madhuwantha</creator>
        
        <subject>Diabetic retinopathy; fundus images; computer-assisted analysis; deep learning; image processing; convolutional neural networks component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Diabetic retinopathy (DR) is a critical complication of diabetes, characterized by pathological changes in retinal blood vessels. This paper presents an innovative software application designed for DR detection and staging using fundus images. The system generates comprehensive reports, facilitating treatment planning and improving patient outcomes. Our study aims to develop an affordable computer-assisted analysis system for accurate DR assessment, leveraging publicly available fundus image datasets. Key objectives include identifying relevant features for DR staging, developing robust image processing algorithms for lesion detection, and implementing machine learning models for accurate diagnosis. The research employs various pre-processing techniques to enhance image quality and optimize feature extraction. Convolutional Neural Networks (CNNs) are utilized for stage classification, achieving an impressive accuracy of 93.45%. Lesion detection algorithms, including optic disk localization, blood vessel segmentation, and exudate identification, demonstrate promising results in accurately identifying DR-related abnormalities. The developed software product integrates these advancements, providing a user-friendly interface for efficient DR diagnosis and management. Evaluation results validate the effectiveness of the CNN model in stage classification and lesion detection, with high sensitivity and specificity. The study discusses the significance of image augmentation and hyperparameter tuning in improving model performance. Future research directions include enhancing the detection of microaneurysms and hemorrhages, incorporating higher resolution images, and standardizing evaluation methods for lesion detection algorithms. In conclusion, this research underscores the potential of technology in revolutionizing DR diagnosis and management. The developed software product offers a cost-effective solution for early DR detection, emphasizing the importance of accessible healthcare solutions. The findings contribute to advancing the field of DR analysis and inspire further innovation for improved patient care.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_86-Towards_Accurate_Detection_of_Diabetic_Retinopathy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-Based Image Retrieval Using Transfer Learning and Vector Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150985</link>
        <id>10.14569/IJACSA.2024.0150985</id>
        <doi>10.14569/IJACSA.2024.0150985</doi>
        <lastModDate>2024-09-30T08:10:14.7500000+00:00</lastModDate>
        
        <creator>Li Shuo</creator>
        
        <creator>Lilly Suriani Affendey</creator>
        
        <creator>Fatimah Sidi</creator>
        
        <subject>Content-based image retrieval (CBIR); image retrieval; transfer learning; convolutional neural networks; VGG-16; vector database; milvus; feature extraction; high-dimensional vectors; real-time image search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Content-based image retrieval (CBIR) systems are essential for efficiently searching large image datasets using image features instead of text annotations. Major challenges include extracting effective feature representations to improve accuracy, as well as indexing them to improve the retrieval speed. The use of pre-trained deep learning models to extract features has elicited interest from researchers. In addition, the emergence of open-source vector databases allows efficient vector indexing which significantly increases the speed of similarity search. This paper introduces a novel CBIR system that combines transfer learning with vector databases to improve retrieval speed and accuracy. Using a pre-trained VGG-16 model, we extract high-dimensional feature vectors from images, which are stored and retrieved using the Milvus vector database. Our approach significantly reduces retrieval time, achieving real-time responses while maintaining high precision and recall. Experiments conducted on ImageClef, ImageNet, and Corel-1k datasets demonstrate the system’s effectiveness in large-scale image retrieval tasks, outperforming traditional methods in both speed and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_85-Content_Based_Image_Retrieval_Using_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Detection and Search Model for Communication Signals Based on Deep-Re-Hash Retrieval Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150984</link>
        <id>10.14569/IJACSA.2024.0150984</id>
        <doi>10.14569/IJACSA.2024.0150984</doi>
        <lastModDate>2024-09-30T08:10:14.7330000+00:00</lastModDate>
        
        <creator>Hui Liu</creator>
        
        <creator>Xupeng Liu</creator>
        
        <subject>Deep-Re-Hash retrieval; communication signals; image data; cauchy function; hadamard matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the explosive growth of image data, traditional image retrieval methods face challenges of low efficiency and insufficient accuracy. In view of this, the study first analyzed traditional Deep-Re-Hash detection technology and constructed a general hash detection model. Secondly, Cauchy functions and Hadamard matrices were introduced to optimize the generation of hash centers, and an improved Deep-Re-Hash detection model was proposed. The experimental results showed that the highest precision of the improved Deep-Re-Hash model was 97%, and the highest MAP value was 90%. In simulation testing, the lowest detection similarity of the improved Deep-Re-Hash detection model was 64.8%, with a corresponding detection time of 7.6 s. The hash codes generated by this model were highly aggregated, with very clear edges. In the indicator ratings, the highest storage occupancy rating was close to 45 points, the highest detection satisfaction rating was close to 50 points, and the highest detection time rating was close to 30 points. Based on these results, the proposed improved Deep-Re-Hash detection model shows great potential for processing large-scale image data. It successfully improves the intelligent detection and search efficiency of communication image signals, providing a useful reference and inspiration for researchers in related fields.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_84-Intelligent_Detection_and_Search_Model_for_Communication_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Prioritization Techniques of Requirements in Agile Methodologies: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150983</link>
        <id>10.14569/IJACSA.2024.0150983</id>
        <doi>10.14569/IJACSA.2024.0150983</doi>
        <lastModDate>2024-09-30T08:10:14.7200000+00:00</lastModDate>
        
        <creator>Aya M. Radwan</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <creator>Wael Mohamed</creator>
        
        <subject>Requirement analysis; requirement prioritization; agile; fuzzy logic; machine learning; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Software requirements are the foundation of a successful software development project, outlining the customer&#39;s expectations for the software&#39;s functionality. Conventional techniques of requirement prioritization present several challenges, such as scalability, customer satisfaction, efficiency, and dependency management. These challenges make the process difficult to manage effectively. Prioritizing requirements by setting criteria in order of importance is essential to addressing these issues and ensuring the efficient use of resources, especially as software becomes more complex. Artificial intelligence (AI) offers promising solutions to these challenges through algorithms like Machine Learning, Fuzzy Logic, Optimization, and Natural Language Processing. Despite the availability of reviews on conventional prioritization techniques, there is a notable gap in comprehensive reviews of AI-based methods. This paper offers a systematic literature review (SLR) of AI-driven requirements prioritization techniques within Agile methodologies, covering 32 papers published between 2010 and 2024. We conducted a parametric analysis of these techniques, identifying key parameters related to both the prioritization process and specific AI methods. Our findings clarify the application domains of various AI-based techniques, offering crucial insights for researchers, requirement analysts, and stakeholders to choose the most effective prioritization methods. These insights consider dependencies and emphasize the importance of collaboration between stakeholders and the development team for optimal results.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_83-AI_Driven_Prioritization_Techniques_of_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Art Image Color Sequence Data Processing Method Based on Artificial Intelligence Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150982</link>
        <id>10.14569/IJACSA.2024.0150982</id>
        <doi>10.14569/IJACSA.2024.0150982</doi>
        <lastModDate>2024-09-30T08:10:14.7030000+00:00</lastModDate>
        
        <creator>Xujing Zhao</creator>
        
        <creator>Xiwen Chen</creator>
        
        <creator>Jianfei Shen</creator>
        
        <subject>Artworks; quality enhancement; image processing technology; color space model; target separation; experimental color target</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Traditional quality enhancement methods cannot control the optimal field density range, which results in an excessively large color difference threshold in artworks. To address this, a quality enhancement method for artworks based on image processing technology is proposed. The CIE L*a*b* color space model is established to divide the color magnitudes, and the color space is then transformed using an RGB space conversion model. On this basis, the quality of artworks is enhanced following the proposed quality enhancement workflow. Because the actual density may fall outside the control range, image processing techniques are used to separate targets and resolve this problem. In the experiments, Adobe Illustrator CS6 was used to produce the experimental color target, and six test samples were selected to test whether the distribution results of the two methods at different levels of color difference perception met the quality enhancement requirements. The experimental results show that the proposed method achieves a better quality enhancement effect and conforms more closely to the design requirements.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_82-Art_Image_Color_Sequence_Data_Processing_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Music Emotion Classification Using Multi-Feature Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150981</link>
        <id>10.14569/IJACSA.2024.0150981</id>
        <doi>10.14569/IJACSA.2024.0150981</doi>
        <lastModDate>2024-09-30T08:10:14.6870000+00:00</lastModDate>
        
        <creator>Affreen Ara</creator>
        
        <creator>Rekha V</creator>
        
        <subject>Emotion classification; music lyrics; feature extraction; lexicon features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Emotions are a fundamental aspect of human expression, and music lyrics are a rich source of emotional content. Understanding the emotions conveyed in lyrics is crucial for a variety of applications, including music recommendation systems, emotion classification, and emotion-driven music composition. While extensive research has been conducted on emotion classification using audio or combined audio-lyrics data, relatively few studies focus exclusively on lyrics. This gap highlights the need for more focused research on lyric-based emotion classification to better understand its unique challenges and potentials. This paper introduces a novel approach for emotion classification in music lyrics, leveraging a combination of natural language processing (NLP) techniques and dimension reduction methods. Our methodology systematically extracts and represents the emotional features embedded within the lyrics, utilizing a diverse set of NLP techniques and integrating new features derived from various emotion lexicons and text analysis. Through extensive experimentation, we demonstrate the effectiveness of our approach, achieving significant improvements in accurately classifying the emotions expressed in music lyrics. This study underscores the potential of lyric-based emotion analysis and provides a robust framework for further research in this area.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_81-Enhancing_Music_Emotion_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research and Implementation of Facial Expression Recognition Algorithm Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150980</link>
        <id>10.14569/IJACSA.2024.0150980</id>
        <doi>10.14569/IJACSA.2024.0150980</doi>
        <lastModDate>2024-09-30T08:10:14.6730000+00:00</lastModDate>
        
        <creator>Xinjiu Xie</creator>
        
        <creator>Jinxue Huang</creator>
        
        <subject>Facial expression; expression recognition; convolutional neural network; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Traditional information security management methods can provide a degree of personal information protection but remain vulnerable to issues such as data breaches and password theft. To bolster information security, facial expression recognition offers a promising alternative. To achieve efficient and accurate facial expression recognition, we propose a lightweight neural network algorithm called T-SNet (Teacher-Student Net). In our approach, the teacher model is an enhanced version of ResNet18, incorporating fine-grained feature extraction modules and pre-trained on the MS-Celeb-1M facial dataset. The student model uses the lightweight convolutional neural network ShuffleNetV2, with the model&#39;s accuracy further improved by optimizing the distillation loss function. This design carefully considers the key features of facial expressions, determines the most effective extraction techniques, and classifies and recognizes these features. To evaluate the performance of our algorithm, we conducted comparative experiments against state-of-the-art facial expression recognition methods. The results show that our approach outperforms existing methods in both recognition accuracy and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_80-Research_and_Implementation_of_Facial_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Furniture Panel Processing Positioning Design Based on 3D Measurement and Depth Image Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150979</link>
        <id>10.14569/IJACSA.2024.0150979</id>
        <doi>10.14569/IJACSA.2024.0150979</doi>
        <lastModDate>2024-09-30T08:10:14.6570000+00:00</lastModDate>
        
        <creator>Binglu Chen</creator>
        
        <creator>Guanyu Chen</creator>
        
        <creator>Qianqian Hu</creator>
        
        <subject>Laser scanning; 3D measurement; deep image processing; positioning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In recent years, furniture panel processing positioning based on computer vision technology has received increasing attention. A 3D measurement imaging technology based on laser scanning is proposed to address the susceptibility of traditional vision technology to environmental interference. Subsequently, deep image processing techniques are introduced to address high image noise. In the panel measurement experiment using 3D measurement technology, 14 measurement lines were taken every 10mm along the measured length; the maximum measured value was 204.62mm, the minimum was 204.37mm, and the manual measurement result was 204.5mm. A further 14 measurement lines were taken every 14mm along the measured length; the maximum measured value was 134.15mm, the minimum was 133.894mm, and the manual measurement was 134.1mm. Another 14 measurement lines were taken every 14mm across the measured thickness; the maximum measured value was 26.646mm, the minimum was 26.242mm, and the manual measurement result was 26.5mm. The 3D imaging technology based on laser scanning is thus relatively accurate in measuring the 3D data of panels and can be applied in panel processing positioning and detection systems. In addition, the experiment compares depth image processing methods, verifying the effectiveness of the designed method. This research also offers a useful reference for exploring the real-time positioning of other objects.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_79-Furniture_Panel_Processing_Positioning_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Early Detection of Oral Squamous Cell Carcinoma via Transfer Learning and Ensemble Deep Learning on Histopathological Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150978</link>
        <id>10.14569/IJACSA.2024.0150978</id>
        <doi>10.14569/IJACSA.2024.0150978</doi>
        <lastModDate>2024-09-30T08:10:14.6400000+00:00</lastModDate>
        
        <creator>Gurjot Kaur</creator>
        
        <creator>Sheifali Gupta</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Salil bharany</creator>
        
        <creator>Marwa Anwar Ibrahim Elghazawy</creator>
        
        <creator>Hadia Abdelgader Osman</creator>
        
        <creator>Ali Ahmed</creator>
        
        <subject>Oral Squamous Cell Carcinoma (OSCC); histopathology images; transfer learning; ensemble learning; EfficientNetB3; ResNet50; deep learning; cancer detection; medical image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Oral Squamous Cell Carcinoma (OSCC) is a major type of oral cancer, and early diagnosis is crucial to increasing patient survival chances. This study investigates the application of advanced deep learning techniques, including transfer learning and ensemble learning, to increase the accuracy of OSCC diagnosis using histopathological image analysis. The proposed method uses two transfer learning models, EfficientNetB3 and ResNet50, to extract suitable features from the histopathological images. Both models are fine-tuned to improve their classification accuracy. After fine-tuning, the EfficientNetB3 model achieved a test accuracy of 96.15%, while ResNet50 achieved a test accuracy of 91.40%. The models were then merged via weighted voting into an ensemble designed to maximize the strengths of each network. With a test accuracy of 98.59% and a training accuracy of 99.34%, the ensemble model showed notably higher performance than the individual models. The collection contains 5,192 high-resolution images divided into OSCC and normal categories, which were split into training, validation, and testing sets to consistently evaluate the model&#39;s performance and reduce overfitting. Furthermore, the ensemble model achieved high recall and F1 scores, confirming its ability to reliably identify OSCC images. ROC curves were produced for both categories, and the area under the curve (AUC) demonstrated excellent model performance. Transfer learning and ensemble learning are used together in this study to show that OSCC can be detected early and consistently in histopathology images. The findings reveal that the recommended strategy could be a reliable tool to assist pathologists in the precise and timely detection of OSCC, thereby improving patient treatment and outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_78-Enhanced_Early_Detection_of_Oral_Squamous_Cell_Carcinoma.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Quantitative Financial Analysis Using CNN-LSTM Cross-Stitch Hybrid Networks for Feature Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150977</link>
        <id>10.14569/IJACSA.2024.0150977</id>
        <doi>10.14569/IJACSA.2024.0150977</doi>
        <lastModDate>2024-09-30T08:10:14.6270000+00:00</lastModDate>
        
        <creator>Taviti Naidu Gongada</creator>
        
        <creator>B. Kumar Babu</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>P. N. V. Syamala Rao M</creator>
        
        <creator>K. Aanandha Saravanan</creator>
        
        <creator>K Swetha</creator>
        
        <creator>Mano Ashish Tripathi</creator>
        
        <subject>Cross-Stitch Hybrid Networks; predictive modelling; LSTM networks; convolutional neural networks; financial analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This paper presents an innovative approach to financial prediction that draws on diverse sources of economic information. The study predicts financial outcomes by integrating multiple financial factors through a cross-stitch hybrid approach. The method uses information from several financial databases, including market data, corporate reports, and macroeconomic indicators, to create a comprehensive dataset. Using MinMax normalization, the features are uniformly scaled to provide consistent input for the algorithm. The combination of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) systems forms the basis of the framework. To capture the time-dependent nature of financial information, LSTM networks are used to model temporal interactions and patterns. Concurrently, spatial features are extracted using CNNs; these components help identify patterns that are difficult to detect with conventional techniques. The enhanced forecasting potential of this method enables better risk management, more effective investment strategies, and more informed decision-making. Future pilot studies will focus on innovative applications in financial decision-making and refinements of the cross-stitch structure. This paper proposes a sophisticated approach that can help stakeholders, such as investors, data analysts, and other financial intermediaries, traverse the complexities of financial markets.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_77-Enhanced_Quantitative_Financial_Analysis_Using_CNN_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Supply Chain Transparency and Efficiency Through Innovative Blockchain Solutions for Optimal Operations Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150976</link>
        <id>10.14569/IJACSA.2024.0150976</id>
        <doi>10.14569/IJACSA.2024.0150976</doi>
        <lastModDate>2024-09-30T08:10:14.6100000+00:00</lastModDate>
        
        <creator>Shamrao Parashram Ghodake</creator>
        
        <creator>Vishal M. Tidake</creator>
        
        <creator>Sanjit Singh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Mohit</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <creator>John T Mesia Dhas</creator>
        
        <subject>Blockchain; supply chain management; advanced encryption standard; Ethereum blockchain; data storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Blockchain technology holds the potential to revolutionize supply chain management by ensuring transparency, efficiency, and security. This paper presents a detailed examination of blockchain&#39;s implementation in supply chain systems, focusing on safeguarding confidential information and preserving supply chain integrity. The method involves extracting ‘sales order’ data from Walmart’s transactional database, which is then encrypted using AES algorithms to protect sensitive details such as client names and geographical information. Utilizing Ethereum&#39;s decentralized architecture, smart contracts are employed to manage transactions, encryption, decryption, and access rights. The Ethereum P2P network also aids in data validation and asset preservation, enhancing the system’s reliability. Comparative analysis shows that the proposed encryption method, with encryption and decryption times of 2.8 and 3.2 seconds, outperforms traditional methods like RSA and ABE. Implemented in Python, this blockchain-based technique offers a robust, nearly infallible solution that can be applied to various supply chain practices, including Asset Management (AM), Enterprise Asset Management (EAM), and Supply Chain Management (SCM), addressing contemporary challenges and enhancing operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_76-Enhancing_Supply_Chain_Transparency_and_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing the Performance of Iceberg Query Through Summary Tables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150975</link>
        <id>10.14569/IJACSA.2024.0150975</id>
        <doi>10.14569/IJACSA.2024.0150975</doi>
        <lastModDate>2024-09-30T08:10:14.5930000+00:00</lastModDate>
        
        <creator>Gohar Rahman</creator>
        
        <creator>Wajid Ali</creator>
        
        <creator>Mehmood Ahmed</creator>
        
        <creator>Hassan Jamil Sayed</creator>
        
        <creator>Mohammad A. Saleh</creator>
        
        <subject>Threshold (TH); bitmap index; aggregate function; Iceberg Query (IB); anti-monotone; non-anti-monotone aggregation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>One of the key challenges in data mining is retrieving data from large repositories: as data volumes grow rapidly, efficient data mining techniques are needed. A number of query types have emerged for efficient mining tasks. The iceberg query is one of them; its output is much smaller than the large input dataset, like the tip of an iceberg, yet such queries take a very long time to process and require a huge amount of main memory. Because processing devices have limited memory, the efficient processing of iceberg queries remains a challenging problem for researchers. In this paper, we present a novel technique, namely the summary table, to address this problem. Specifically, we adopt the summary table technique to obtain the required results at summary levels. The experimental results demonstrate that the summary table technique is highly effective for large datasets. Compared to bitmap indexing and cube-based techniques, the summary table offers faster retrieval. Furthermore, the proposed technique achieved state-of-the-art performance.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_75-Increasing_the_Performance_of_Iceberg_Query.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Driven Localization of Coronary Artery Stenosis Using Combined Electrocardiograms (ECGs) and Photoplethysmograph (PPG) Signal Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150974</link>
        <id>10.14569/IJACSA.2024.0150974</id>
        <doi>10.14569/IJACSA.2024.0150974</doi>
        <lastModDate>2024-09-30T08:10:14.5800000+00:00</lastModDate>
        
        <creator>Mohd Syazwan Md Yid</creator>
        
        <creator>Rosmina Jaafar</creator>
        
        <creator>Noor Hasmiza Harun</creator>
        
        <creator>Mohd Zubir Suboh</creator>
        
        <creator>Mohd Shawal Faizal Mohamad</creator>
        
        <subject>Deep learning; CNN; LSTM; ATTN; simultaneous ECG and PPG; coronary artery disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The application of artificial intelligence (AI) to electrocardiograms (ECGs) and photoplethysmograph (PPG) for diagnosing significant coronary artery disease (CAD) is not well established. This study aimed to determine whether the combination of ECG and PPG signals could accurately identify the location of blocked coronary arteries in CAD patients. Simultaneous measurement of ECG and PPG signal data were collected from a Malaysian university hospital, including patients with confirmed significant CAD based on invasive coronary angiography. ECG and PPG datasets were concatenated to form a single dataset, thereby enhancing the information available for the training process. Experimental results demonstrate that the Convolutional Neural Networks (CNN) + Long Short-Term Memory (LSTM) + Attention (ATTN) mechanisms model significantly outperforms standalone CNN and CNN + LSTM models, achieving an accuracy of 98.12% and perfect Area Under the Curve (AUC) scores of 1.00 for the detection of blockages in the left anterior descending (LAD) artery, left circumflex (LCX) artery, and right coronary artery (RCA). The integration of LSTM layers captures temporal dependencies in the sequential data, while the attention mechanism selectively highlights the most relevant signal features. This study demonstrates that AI-enhanced models can effectively analyze simultaneous measurement of standard single-lead ECGs and PPG to predict the location of coronary artery blockages and could be a valuable screening tool for detecting coronary artery obstructions, potentially enabling their use in routine health checks and in identifying patients at high risk for future coronary events.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_74-Deep_Learning_Driven_Localization_of_Coronary_Artery_Stenosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Intelligent System for IP Traffic Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150973</link>
        <id>10.14569/IJACSA.2024.0150973</id>
        <doi>10.14569/IJACSA.2024.0150973</doi>
        <lastModDate>2024-09-30T08:10:14.5630000+00:00</lastModDate>
        
        <creator>Muhana Magboul Ali Muslam</creator>
        
        <subject>Internet application classification; IP traffic classification; machine learning; machine learning techniques; stacking classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The classification of IP traffic is important for many reasons, including network management and security, quality of service (QoS) monitoring and provisioning, and high hardware utilisation. Recently, many machine learning-based IP traffic classifiers have been developed. Unfortunately, most of them need to be trained on large datasets and thus require a long training time and significant computational power. In this paper, I investigate this problem and, as a solution, present a hybrid system, which I call the ISITC, that combines the random forest (RF) and XGBoost (XGB) machine learning techniques in a stacking classifier with the support vector classifier (SVC) as the final estimator. This design leads to a model that classifies IP traffic and internet applications efficiently and with high accuracy. I evaluate the performance of the ISITC against various IP traffic classifiers, including neural network (NN), RF, decision tree (DT), and XGB classifiers and SVCs. The experimental results show that the ISITC provides the best IP traffic classification, with an accuracy of 96.7%, and outperforms the other IP traffic classifiers: the NN classifier has an accuracy of 59%, the RF classifier 88.5%, the DT classifier 90.5%, the XGB classifier 89.8%, and the SVC 64.8%.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_73-A_Hybrid_Intelligent_System_for_IP_Traffic_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Student Well-Being Prediction with an Innovative Attention-LSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150972</link>
        <id>10.14569/IJACSA.2024.0150972</id>
        <doi>10.14569/IJACSA.2024.0150972</doi>
        <lastModDate>2024-09-30T08:10:14.5470000+00:00</lastModDate>
        
        <creator>Vinod Waiker</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Ajmeera Kiran</creator>
        
        <creator>Pradeep Jangir</creator>
        
        <creator>Ritwik Haldar</creator>
        
        <creator>Padamata Ramesh Babu</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>Student mental health; attention-based LSTM; well-being enhancement; predictive modelling; innovative techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study introduces a novel method for predicting student well-being using a sophisticated attention-based Long Short-Term Memory (LSTM) model. Addressing the growing problem of mental health in academic settings, the research aims to provide new insights and effective techniques for enhancing student mental well-being. The focus is on improving the prediction of mental health issues through the innovative use of attention-based LSTM algorithms, which excel at assigning varying levels of relevance to input data points. The model leverages a unique methodology to process diverse datasets, including academic records, social media activity, and textual survey responses. By emphasizing significant features such as language patterns and shifts in academic performance, the attention-based LSTM model overcomes limitations of conventional predictive techniques and demonstrates superior accuracy in identifying subtle indicators of mental health issues. The training dataset is categorized into behavioral states such as &quot;healthy,&quot; &quot;confused,&quot; &quot;traumatic,&quot; and &quot;depressed,&quot; allowing the model to build a strong learning foundation. This research highlights the transformative potential of advanced attention-based techniques, offering an effective tool for improving our understanding and predictive capabilities concerning adolescent mental health conditions. The study underscores the significance of integrating innovative machine learning approaches in addressing mental health challenges and enhancing overall student well-being. Upon implementation and rigorous testing in Python, the proposed technique achieves a notable accuracy of 98.9% in identifying mental health issues among college students.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_72-Enhancing_Student_Well_Being_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ABC-Optimized CNN-GRU Algorithm for Improved Cervical Cancer Detection and Classification Using Multimodal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150971</link>
        <id>10.14569/IJACSA.2024.0150971</id>
        <doi>10.14569/IJACSA.2024.0150971</doi>
        <lastModDate>2024-09-30T08:10:14.5300000+00:00</lastModDate>
        
        <creator>Donepudi Rohini</creator>
        
        <creator>M Kavitha</creator>
        
        <subject>Cervical cancer; CNN-GRU; Pap smear images; Artificial Bee Colony Optimizer; early detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Cervical cancer is the second most common malignancy among women, making it a major public health problem worldwide. Early detection of cervical cancer is important because it increases the chances of effective treatment and survival. Regular screening and early management can prevent the growth of cervical cancer, thus reducing mortality. Traditional methods of detection, such as Pap smears, have proven useful but are time-consuming and rely on manual interpretation by cytologists. To overcome these issues, the study uses a method combining convolutional neural networks (CNNs) and gated recurrent units (GRUs), tuned with the Artificial Bee Colony (ABC) Optimizer, to detect and classify cervical cancer in Pap smear images. The study used high-resolution images from the SipakMed collection, comprising 4049 images, along with a dataset of patient information. The CNN component of the model, specifically the ResNet-152 network, extracts spatial attributes from these images. After feature extraction, the GRU component analyzes the sequential data to identify temporal combinations and patterns. This hybrid CNN-GRU algorithm combines the strengths of the two networks, the ability of the CNN to learn spatial patterns and the ability of the GRU to model sequential dependencies, while tuning the parameters using ABC. The proposed model outperformed conventional ML methods with a classification accuracy of 94.89% and provided a reliable solution for early detection of cervical cancer. These DL methods not only enable a more accurate diagnosis but also allow a comprehensive examination of abnormal cervical cells, benefiting screening programs and patient outcomes. This work highlights the promise of cutting-edge AI techniques for improving cervical cancer diagnosis and emphasizes the need for faster and more accurate diagnosis in the fight against this common disease.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_71-ABC_Optimized_CNN_GRU_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Natural Language Processing with a Combined Approach: Sentiment Analysis and Transformation Using Graph Convolutional LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150970</link>
        <id>10.14569/IJACSA.2024.0150970</id>
        <doi>10.14569/IJACSA.2024.0150970</doi>
        <lastModDate>2024-09-30T08:10:14.5000000+00:00</lastModDate>
        
        <creator>Kedala Karunasree</creator>
        
        <creator>P. Shailaja</creator>
        
        <creator>T Rajesh</creator>
        
        <creator>U. Sesadri</creator>
        
        <creator>Choudaraju Neelima</creator>
        
        <creator>Divya Nimma</creator>
        
        <creator>Malabika Adak</creator>
        
        <subject>Graph Convolutional Networks (GCN); Long Short-Term Memory (LSTM); Natural Language Processing (NLP); sentiment analysis; emotions; text classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Sentiment analysis is a key component of Natural Language Processing (NLP), enabling the extraction of emotional cues from text. However, traditional strategies often fail to capture subtle emotions embedded in language. To address this, we propose a novel hybrid model that enhances sentiment analysis by combining Graph Convolutional Networks (GCNs) with Long Short-Term Memory (LSTM) networks. This fusion leverages the LSTM’s sequential memory capabilities and the GCN’s ability to model contextual relationships, enabling the detection of nuanced emotions often overlooked by conventional techniques. The hybrid approach demonstrates superior generalization performance and resilience, making it particularly effective in complex sentiment detection tasks that require a deeper understanding of text. These results emphasize the potential of combining sequential memory architectures with graph-based contextual information to revolutionize sentiment analysis in NLP. This study not only introduces an innovative approach to sentiment analysis but also underscores the importance of integrating advanced techniques to push the boundaries of NLP research. The proposed hybrid model surpasses the performance of previous techniques such as CNN, CNN-LSTM, and RNN-LSTM with a notable accuracy of 99.33%, setting a new benchmark in sentiment analysis and enabling more precise and nuanced sentiment evaluation for a wide range of applications, including customer feedback analysis, social media monitoring, and emotional intelligence in AI systems.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_70-Advancing_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Read Theory in Mobile-Assisted Language Learning on Engineering Freshmen&#39;s Reading Comprehension Using BI-LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150969</link>
        <id>10.14569/IJACSA.2024.0150969</id>
        <doi>10.14569/IJACSA.2024.0150969</doi>
        <lastModDate>2024-09-30T08:10:14.4830000+00:00</lastModDate>
        
        <creator>E. Pearlin</creator>
        
        <creator>S. Mercy Gnana Gandhi</creator>
        
        <subject>Read theory; language learning; Bi-LSTM; mobile-assisted; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The effect of Read Theory in Mobile-Assisted Language Learning (MALL) on reading comprehension is critical, especially for engineering freshmen who require excellent language abilities to navigate their complicated academic courses. Read Theory is a customized reading platform that offers adaptive reading activities based on the user&#39;s ability level, which is especially useful in MALL settings where accessibility and flexibility are essential. However, traditional MALL methods have frequently faced constraints, such as the inability to fully adapt to students&#39; diverse and dynamic learning demands. This deficiency usually results in poor improvement in reading skills because the conventional paradigms do not capture the intricate and diverse learning processes that are necessary for the effective learning of languages. To address these issues, a Deep Learning approach involving the implementation of BI-LSTM networks for enhancing reading comprehension outcomes is offered. BI-LSTM is well suited to this task because its forward and backward processing capabilities allow it to better understand and predict the dynamics of language acquisition. The research established measurable improvement and an accuracy of 99.3%. The implementation was done using Python. This high accuracy addresses the common weaknesses of earlier strategies and provides convincing evidence that the proposed approach can significantly enhance the outcomes of MALL projects. The specified technique, which improves on the flaws of previous approaches, not only improves the reading process but has the potential to revolutionize language acquisition for engineering students, making it more effective and better adapted to learners&#39; abilities.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_69-Impact_of_Read_Theory_in_Mobile_Assisted_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Disaster Clustering Using K-Means, DBSCAN, SOM, GMM, and Mean Shift: An Analysis of Fema Disaster Statistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150968</link>
        <id>10.14569/IJACSA.2024.0150968</id>
        <doi>10.14569/IJACSA.2024.0150968</doi>
        <lastModDate>2024-09-30T08:10:14.4700000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Yap Jia Hao</creator>
        
        <creator>Yong Chang Yeou</creator>
        
        <creator>Lim Siew Mooi</creator>
        
        <creator>Goh Ting Yew</creator>
        
        <creator>Temitope Olumide Olugbade</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <subject>Natural disasters; disaster management; unsupervised learning; clustering; disaster frequency; disaster types; mitigation strategies; adaptation strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Natural disasters tend to devastate people’s lives and infrastructure, which requires comprehensive analysis and understanding to inform effective disaster management and response planning. This research addresses the lack of in-depth analysis of federally declared disasters in the United States using a dataset sourced from FEMA. Through the application of unsupervised learning techniques, including K-means clustering, DBSCAN, self-organizing maps (SOM), and the Gaussian mixture model (GMM), similar types of disasters are clustered based on their frequency. The relationship between disaster type and disaster frequency is analyzed to gain insight into patterns and correlations, facilitating targeted mitigation and adaptation strategies. Using clustering techniques, we can accurately group disasters by type, duration, time of occurrence, and location. By implementing these approaches, our study aims to improve the understanding of disaster occurrences and inform decision-making processes in disaster mitigation and adaptation strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_68-Natural_Disaster_Clustering_Using_K_means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimedia Network Data Fusion System Integrating SSA and Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150966</link>
        <id>10.14569/IJACSA.2024.0150966</id>
        <doi>10.14569/IJACSA.2024.0150966</doi>
        <lastModDate>2024-09-30T08:10:14.4530000+00:00</lastModDate>
        
        <creator>Fangrui Li</creator>
        
        <subject>Sparrow search algorithm; reinforcement learning; multimedia network; data fusion; performance improvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>To improve the performance and efficiency of multimedia network data fusion systems, this study proposes an improved sparrow search algorithm based on the reinforcement learning algorithm and the sparrow search algorithm, and improves the multimedia network data fusion model based on this algorithm. A performance comparison experiment was conducted on the improved sparrow search algorithm, and it was found that the algorithm entered a convergence state after 380 iterations on a unimodal function. Its time consumption is lower than that of the other comparison algorithms, and it did not fall into a local optimum after 500 iterations on the multimodal benchmark function. Its performance is significantly superior to the other comparison algorithms. Moreover, the study conducted relevant experiments on the multimedia network data fusion model and found that the F1 value output by the model was 0.37, with an accuracy of 92.4%, which is higher than that of other data fusion models. The mean square error of this model reaches 0.52, and the processing time is 0.1 seconds, both lower than those of other comparative data fusion models. The quality of the output data and the data processing efficiency of this model are better. These outcomes demonstrate that the improved sparrow search algorithm possesses good global search and convergence performance, and that the improved multimedia network data fusion model offers better accuracy and efficiency, with good practical application value. This study can serve as a reference for multimedia network data fusion systems.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_66-Multimedia_Network_Data_Fusion_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of U-Net Network Algorithm in Electronic Information Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150967</link>
        <id>10.14569/IJACSA.2024.0150967</id>
        <doi>10.14569/IJACSA.2024.0150967</doi>
        <lastModDate>2024-09-30T08:10:14.4530000+00:00</lastModDate>
        
        <creator>Liang Wang</creator>
        
        <subject>U-Net; attention calibrated U-Net; convolutional neural network; deep learning; digital data; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The rapidly evolving landscape of medical diagnostics has integrated with the electronic data (E-Data) field to provide precise and efficient treatment for complex medical conditions. The research field has further extended its reach to various data types, including image, video, medical expert diagnostic input, and sensor input, among which image-based diagnostic models have excellent research potential. Convolutional Neural Network (CNN) based models have evolved into capable Deep Learning (DL) models for handling the complex intricacies featured in input images. U-Net is a prominent CNN model developed to handle the features of image data. The U-Net excels at capturing detailed features through its encoder-decoder structure and skip connections, but its uniform weighting across different network layers may not adequately address the subtleties involved in complex medical anomaly detection. This work proposes the Attention Calibrated U-Net (ACU-Net) model, designed to address the challenges of U-Net in detecting Fetal Cardiac Rhabdomyoma (FCR) from echocardiographic images. FCR is a prevalent benign cardiac tumor in fetuses that poses significant diagnostic challenges due to its variable manifestations and the intricate nature of fetal cardiac anatomy. The proposed model enhances the U-Net architecture with attention mechanisms and employs a hybrid Loss Function (LF) that combines Cross-Entropy Loss, Dice Loss, and an attention-driven component for effective FCR detection. The model was compared against others and demonstrated better specificity, accuracy, precision, recall, and F1-score performance across various echocardiographic views (LVOT, RVOT, 3VT, and 4CH).</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_67-Application_of_U_Net_Network_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancements and Challenges in Geospatial Artificial Intelligence, Evaluating Support Vector Machines Models for Dengue Fever Prediction: A Structured Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150965</link>
        <id>10.14569/IJACSA.2024.0150965</id>
        <doi>10.14569/IJACSA.2024.0150965</doi>
        <lastModDate>2024-09-30T08:10:14.4370000+00:00</lastModDate>
        
        <creator>Hetty Meileni</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Abdiansah</creator>
        
        <creator>Nyayu Latifah Husni</creator>
        
        <subject>Support vector machines; geospatial artificial intelligence; kernel methods; dengue fever prediction; real-time data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This review examines recent advancements and ongoing challenges in applying Support Vector Machines within Geospatial Artificial Intelligence, specifically for dengue fever prediction. Recent developments in Support Vector Machines include the introduction of advanced kernel methods, such as Radial Basis Function and polynomial kernels, which enhance the model’s ability to handle complex spatial data and interactions. Integration with high-resolution geospatial data and real-time analytics has significantly improved predictive accuracy, particularly in mapping environmental factors influencing disease spread. However, challenges persist, including issues with data quality, computational demands, and model interpretability. Data scarcity and the high computational cost of Support Vector Machines, especially with non-linear kernels, necessitate optimization techniques and advanced computing resources. Parameter tuning and enhancing model interpretability are critical for effective implementation. Future research should focus on developing new kernels and hybrid models that combine Support Vector Machines with other machine learning approaches to address these challenges. Practical applications in public health can benefit from improved real-time data processing and high-resolution analytics, while ensuring adherence to ethical and regulatory standards. This review underscores the potential of Support Vector Machines in Geospatial Artificial Intelligence for disease prediction and highlights areas where further innovation and research are needed to enhance its practical utility in public health.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_65-Advancements_and_Challenges_in_Geospatial_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Quantum Cryptography Algorithms for Secure Data Storage and Processing in Cloud Computing: Enhancing Robustness Against Emerging Cyber Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150964</link>
        <id>10.14569/IJACSA.2024.0150964</id>
        <doi>10.14569/IJACSA.2024.0150964</doi>
        <lastModDate>2024-09-30T08:10:14.4230000+00:00</lastModDate>
        
        <creator>Devulapally Swetha</creator>
        
        <creator>Shaik Khaja Mohiddin</creator>
        
        <subject>Quantum key distribution; cloud computing; cyber threats; lattice based cryptography; E91; future-proof security paradigm; python; quantum computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The rise of cloud computing has transformed data storage and processing but introduced new vulnerabilities, especially with the impending threat of quantum computing. Traditional cryptographic methods, though currently effective, are at risk of being compromised by quantum attacks. This research aims to develop a quantum-resistant security framework for cloud environments, combining lattice-based cryptography with Quantum Key Distribution (QKD) protocols, particularly the E91 protocol, for secure key management. The framework also incorporates quantum authentication protocols to enhance user identity verification, protecting against unauthorized access and tampering. The proposed solution balances robust security with practical implementation, ensuring scalability and efficiency in real-world cloud environments. Performance evaluations indicate an encryption time of approximately 30 milliseconds, outperforming existing methods such as RSA and DES. This research contributes to the development of future-proof cryptographic standards, addressing both current security challenges and emerging quantum computing threats. By leveraging quantum mechanics, the framework strengthens cloud-based data protection, providing a resilient solution against evolving cyber risks. The results hold significant promise for advancing cloud security, laying the groundwork for next-generation encryption techniques that can withstand the threats posed by quantum computing.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_64-Advancing_Quantum_Cryptography_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of the Energy-Saving Data Storage Algorithm for Differentiated Cloud Computing Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150963</link>
        <id>10.14569/IJACSA.2024.0150963</id>
        <doi>10.14569/IJACSA.2024.0150963</doi>
        <lastModDate>2024-09-30T08:10:14.4070000+00:00</lastModDate>
        
        <creator>Peichen Zhao</creator>
        
        <subject>Energy-saving data storage algorithm; differentiated task recognition; cloud computing; intelligent storage strategy; data classification and distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study presents a novel energy-saving data storage algorithm designed to enhance data storage efficiency and reduce energy consumption in cloud computing environments. By intelligently discerning and categorizing various cloud computing tasks, the algorithm dynamically adapts data storage strategies, resulting in a targeted optimization methodology that is both devised and experimentally validated. The study findings demonstrate that the optimized model surpasses comparative models in accuracy, precision, recall, and F1-score, achieving peak values of 0.863, 0.812, 0.784, and 0.798, respectively, thereby affirming the efficacy of the optimized approach. In simulation experiments involving tasks with varying data volumes, the optimized model consistently exhibits lower latency compared to Attention-based Long Short-Term Memory Encoder-Decoder Network and Deep Reinforcement Learning Task Scheduling models. Furthermore, across tasks with differing data volumes, the optimized model maintains high throughput levels, with only marginal reductions in throughput as data volume increases, indicating sustained and stable performance. Consequently, this study is pertinent to cloud computing data storage and energy-saving optimization, offering valuable insights for future research and practical applications.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_63-Optimization_of_the_Energy_Saving_Data_Storage_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensionality Reduction Evolutionary Framework for Solving High-Dimensional Expensive Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150962</link>
        <id>10.14569/IJACSA.2024.0150962</id>
        <doi>10.14569/IJACSA.2024.0150962</doi>
        <lastModDate>2024-09-30T08:10:14.3770000+00:00</lastModDate>
        
        <creator>SONG Wei</creator>
        
        <creator>ZOU Fucai</creator>
        
        <subject>Dimensionality reduction; high-dimensional expensive optimization; surrogate-assisted model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Most improvement strategies for surrogate-assisted optimization algorithms fail to help the population quickly locate satisfactory solutions. To address this challenge, a novel framework called the dimensionality reduction surrogate-assisted evolutionary (DRSAE) framework is proposed. DRSAE introduces an efficient dimensionality reduction network to create a low-dimensional search space, allowing some individuals in the population to search within the reduced space. This strategy significantly lowers the complexity of the search space and makes it easier to locate promising regions. Meanwhile, a hierarchical search is conducted in the high-dimensional space: lower-level particles indiscriminately learn from higher-level peers, while the highest-level particles undergo self-mutation. A comprehensive comparison between DRSAE and mainstream algorithms for high-dimensional expensive problems (HEPs) was conducted using seven widely used benchmark functions. Comparison experiments on problems with dimensionality increasing from 50 to 200 further substantiate the good scalability of the developed optimizer.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_62-Dimensionality_Reduction_Evolutionary_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bubble Detection in Glass Manufacturing Images Using Generative Adversarial Networks, Filters and Channel Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150961</link>
        <id>10.14569/IJACSA.2024.0150961</id>
        <doi>10.14569/IJACSA.2024.0150961</doi>
        <lastModDate>2024-09-30T08:10:14.3600000+00:00</lastModDate>
        
        <creator>Md Ezaz Ahmed</creator>
        
        <creator>Mohammad Khalid Imam Rahmani</creator>
        
        <creator>Surbhi Bhatia Khan</creator>
        
        <subject>Computer vision; Generative Adversarial Network; Information-Preserving Feature Aggregation; Conflict Information Suppression Feature Fusion Module; Fine-Grained Aggregation Module; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the increasing production of glassware products, the detection of bubble defects has become vitally important. Manual inspection of glass bubble defects is tedious and inefficient due to the increasing volume of images and the high probability of human error. Computer vision-based methods provide a platform for automating the bubble defect detection process, overcoming the disadvantages associated with manual inspection and thereby significantly reducing cost and improving quality. To address these issues, we propose an integrated deep learning (DL) based bubble detection algorithm, in which an image dataset is prepared using a Generative Adversarial Network (GAN). The proposed algorithm exploits the Information-Preserving Feature Aggregation (IPFA) module to achieve semantic feature extraction while maintaining the internal features of small defects. To weed out irrelevant information introduced by fusion, the proposed research introduces the Conflict Information Suppression Feature Fusion Module (CSFM). To further advance the feature combination methodology, the Fine-Grained Aggregation Module (FGAM) is employed to facilitate cooperation among feature maps at various levels, mitigating the generation of conflicting information arising from erroneous features. The algorithm achieves improved performance, with an accuracy of 0.677, a recall of 0.716, and a precision of 0.638.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_61-Bubble_Detection_in_Glass_Manufacturing_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Task Offloading Using Ant Colony Optimization and Reptile Search Algorithms in Edge Computing for Things Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150960</link>
        <id>10.14569/IJACSA.2024.0150960</id>
        <doi>10.14569/IJACSA.2024.0150960</doi>
        <lastModDate>2024-09-30T08:10:14.3430000+00:00</lastModDate>
        
        <creator>Ting Zhang</creator>
        
        <creator>Xiaojie Guo</creator>
        
        <subject>Task offloading; edge computing; ant colony optimization; reptile search algorithm; Internet of Things; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The widespread use of Internet of Things (IoT) technology has triggered unparalleled data creation and processing needs, necessitating effective computation offloading solutions. Conventional edge computing approaches have difficulties in dealing with rising energy usage issues and task allocation delays. This study introduces a novel hybrid metaheuristic algorithm called ACO-RSA, which synergizes Ant Colony Optimization (ACO) and the Reptile Search Algorithm (RSA). The proposed approach addresses the energy and latency issues associated with offloading computations in IoT edge computing environments. A comprehensive system design is developed that effectively encapsulates the uplink transmission communication model and a personalized multi-user computing task load model. The system considers various constraints, such as network latency, task complexity, and available computing resources. Based on this, we formulate an optimization objective suitable for computation outsourcing in the IoT ecosystem. Simulations conducted in a real-world IoT scenario demonstrate that ACO-RSA significantly reduces both time delay and energy consumption compared to benchmark algorithms, achieving up to 27.6% energy savings and a 25.4% reduction in time delay. ACO-RSA exhibits robustness and scalability when optimizing task offloading in IoT edge computing environments.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_60-Efficient_Task_Offloading_Using_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Liver and Tumour Segmentation Using Anchor Free Mechanism-Based Mask Region Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150959</link>
        <id>10.14569/IJACSA.2024.0150959</id>
        <doi>10.14569/IJACSA.2024.0150959</doi>
        <lastModDate>2024-09-30T08:10:14.3300000+00:00</lastModDate>
        
        <creator>Sangi Narasimhulu</creator>
        
        <creator>Ch D V Subba Rao</creator>
        
        <subject>Anchor free; computer-aided diagnosis; deep neural network; EfficientNetB2; liver and tumor segmentation; masked region-based convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Accurate liver tumour segmentation helps acquire measurable biomarkers for decision support systems and Computer-Aided Diagnosis (CAD). However, most existing approaches fail to effectively segment liver tumours due to the overlap of the liver with other organs in the image. To solve this problem, this research proposes an Anchor Free with Masked Region-based Convolutional Neural Network (AFMRCNN) approach for segmenting liver tumours. The AF mechanism attains precise localization of tumours by directly predicting the tumour location without relying on predefined anchor boxes. Standard datasets like LiTS and CHAOS are utilized to evaluate the efficiency of the proposed method. EfficientNetB2 is used to extract the most relevant features from the segmented data. A Deep Neural Network (DNN) is used to classify liver tumours into binary classes by capturing intricate patterns and relationships in the data with the help of a non-linear activation function. The experimental results exhibit the proposed AFMRCNN method’s commendable segmentation performance of 0.998 Dice Similarity Coefficient (DSC), as opposed to the existing methods, UoloNet and UNet++ + pre-activated multiscale Res2Net approach with Channel-wise Attention (PARCA), on the LiTS dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_59-Liver_and_Tumour_Segmentation_Using_Anchor_Free_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Customer Interactions: A BERT and Reinforcement Learning Hybrid Approach to Chatbot Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150958</link>
        <id>10.14569/IJACSA.2024.0150958</id>
        <doi>10.14569/IJACSA.2024.0150958</doi>
        <lastModDate>2024-09-30T08:10:14.3130000+00:00</lastModDate>
        
        <creator>K. R. Praneeth</creator>
        
        <creator>Taranpreet Singh Ruprah</creator>
        
        <creator>J Naga Madhuri</creator>
        
        <creator>A L Sreenivasulu</creator>
        
        <creator>Syed Shareefunnisa</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <subject>Chatbots; BERT (Bidirectional Encoder Representations from Transformers); RL (Reinforcement Learning); customer service; responsiveness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Chatbots have made massive progress, but problems remain in handling sentence complexity and contextual relevance. Traditional models can be insufficient at providing appropriate levels of detail in responses to end-users’ questions, particularly in customer support scenarios. To overcome these limitations, this research proposes a new model which combines BERT with Deep Reinforcement Learning (DRL). Through DRL, the pre-trained BERT model gains the flexibility to correctly perceive contextually delicate matters in its responses. The proposed pipeline tokenizes the data, converts it to lowercase, applies lemmatization, and then passes the result through the fine-tuned BERT model. DRL is utilized to optimize the chatbot’s responses in light of long-term rewards and the conversational history; the interactions are formulated as a Markov Decision Process with reward functions based on the cosine similarity of consecutive responses. This makes it feasible for the chatbot to provide context-based replies while continually learning to enhance performance. Evaluation using BLEU and ROUGE scores showed that the accuracy and relevance of the BERT-DRL hybrid system were higher than those of traditional models. The performance of the chatbot also increases with the length of the conversation, and the transitions from one response to the next are coherent. This research contributes to the field by integrating BERT for language understanding and DRL for iterative learning, addressing the flaws of existing chatbot technologies and establishing a new benchmark for conversational AI in customer service settings.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_58-Optimizing_Customer_Interactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Birth Certificates Delivery, Traceability and Authentication Using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150957</link>
        <id>10.14569/IJACSA.2024.0150957</id>
        <doi>10.14569/IJACSA.2024.0150957</doi>
        <lastModDate>2024-09-30T08:10:14.2970000+00:00</lastModDate>
        
        <creator>Tankou Tsomo Maurice Eddy</creator>
        
        <creator>Bell Bitjoka Georges</creator>
        
        <creator>Ngohe Ekam Paul Salomon</creator>
        
        <creator>Ekani Mebenga Vianney Boniface</creator>
        
        <subject>Birth certificates; blockchain; security; traceability; authentication; counterfeiting; falsification; hyperledger fabric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Nowadays, the vast majority of birth certificate registration systems are paper-based and managed independently by administrative communities. This means that birth information only exists at the place where the birth is registered, which facilitates the counterfeiting or falsification of such identity documents. Therefore, the implementation of a system for the issuance, traceability, and authentication of birth certificates is imperative. Blockchain’s transparency, immutability, protection, privacy, and autonomy make it the ideal technology for implementing a birth certificate registration, traceability, and authentication system. This article presents a decentralized system for the registration, traceability, and authentication of birth certificates based on the Hyperledger Fabric private blockchain deployed in a Virtual Private Network - Multi-Protocol Label Switching (VPN-MPLS) network. The birth certificate is characterized on one hand by the attributes of its owner and on the other hand by a Quick Response (QR) code containing the digital signature of its signer and the unique identifier of the birth certificate. Within the network, the unique identifier of the generated document is hashed and stored using the Secure Hash Algorithm-256 (SHA-256) hash function to optimize storage space and enhance security. Furthermore, the proposed platform includes an application designed using Docker Compose, Apache CouchDB, NodeJS, Go, and Hyperledger Explorer. The designed model is a birth certificate registration platform that ensures enhanced security and transparency.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_57-Birth_Certificates_Delivery_Traceability_and_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure and Efficient Framework for Multi-User Encrypted Cloud Databases Supporting Single and Multiple Keyword Searches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150955</link>
        <id>10.14569/IJACSA.2024.0150955</id>
        <doi>10.14569/IJACSA.2024.0150955</doi>
        <lastModDate>2024-09-30T08:10:14.2670000+00:00</lastModDate>
        
        <creator>J V S Arundathi</creator>
        
        <creator>K V V Satyanarayana</creator>
        
        <subject>Secure keyword search; encrypted search; multi-user framework; encrypted cloud database; single and multiple key users</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Multi-user encrypted cloud databases have become essential for secure data storage and retrieval, especially when supporting both single and multiple keyword searches. Ensuring data confidentiality, integrity, and efficient access within such systems is paramount, particularly when dealing with multiple data owners and users. This paper presents a Secure Encrypted Trie-based Search (SETBS) method that significantly enhances multi-owner authentication, data secrecy, and data integrity in cloud environments. The SETBS framework leverages a sophisticated Merkle hash tree for dynamic maintenance and autonomous user verification, ensuring that the identity of users is reliable and that personal information remains protected across various ownership domains. By optimally utilizing resources, SETBS provides a robust and efficient solution for managing data in cloud environments. The framework addresses the bottleneck issue by distributing the workload among first-level owners, resulting in fair resource distribution and increased system efficiency. A key feature of the SETBS method is its ability to guarantee data integrity without compromising security. Users can be assured that their data remains unaltered and protected from unauthorized access, thanks to the integration of the Merkle hash tree. This mechanism enables clients to confirm the integrity of their data stored in the cloud, providing peace of mind regarding its security. Moreover, SETBS proves to be a flexible and scalable solution for large-scale cloud deployments, efficiently managing multiple data owners and parallelizing the processing load. The framework&#39;s focus on data privacy ensures that personal data remains secure during search operations. With lower encryption and decryption times compared to existing methods such as SPEKS, DSSE, and MKHE, SETBS demonstrates superior performance and is implemented in Python. This comprehensive approach offers an all-encompassing solution for businesses seeking to enhance their cloud security architecture while ensuring efficient data management, from processing to real-time or batch data analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_55-A_Secure_and_Efficient_Framework_for_Multi_User_Encrypted_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Application of Neural Networks in the Learning and Optimization of Sports Skills Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150956</link>
        <id>10.14569/IJACSA.2024.0150956</id>
        <doi>10.14569/IJACSA.2024.0150956</doi>
        <lastModDate>2024-09-30T08:10:14.2670000+00:00</lastModDate>
        
        <creator>Dazheng Liu</creator>
        
        <subject>Deep neural network; action recognition; 2D pose prediction; pose estimation; sports skill training; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Sports skills training is a crucial component of sports education, significantly contributing to the development of athletic abilities and overall physical literacy. It is essential to utilize neural networks to optimize traditional training methods, which are inefficient and rely on subjective assessments. This paper develops methods for sports action recognition and athlete pose estimation and prediction based on deep neural networks. Given the complexity and rapid changes in sports skills, we propose a multi-task framework-based HICNN-PSTA model for jointly recognizing sports actions and estimating human poses. This method leverages the advantages of Convolution and Involution operators in computing channel and spatial information to extract sports skill features and uses a decoupled multi-head attention mechanism to fully capture spatio-temporal information. Furthermore, to accurately predict human poses and avoid potential sports injuries, this paper introduces an MS-GCN prediction model based on a multi-scale graph. This method utilizes the constraints between human body key points and parts, dividing the 2D human pose into different levels and significantly enhancing the modeling capability of human pose sequences. The proposed algorithms have been thoroughly validated on a basketball skills dataset and compared with various advanced algorithms. Experimental results sufficiently demonstrate the effectiveness of the proposed methods in sports action recognition and human pose estimation and prediction. This research advances the application of deep neural networks in the field of sports training, providing significant reference value for related studies.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_56-Exploring_the_Application_of_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Multimedia Movement Through Spatio-Temporal Indexing and Double-Cache Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150954</link>
        <id>10.14569/IJACSA.2024.0150954</id>
        <doi>10.14569/IJACSA.2024.0150954</doi>
        <lastModDate>2024-09-30T08:10:14.2330000+00:00</lastModDate>
        
        <creator>Zhen QIN</creator>
        
        <creator>Lin ZHANG</creator>
        
        <subject>Multimedia digital art; double-cache collaboration; distributed indexing; spatio-temporal data; content center network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Conventional TCP/IP designs encounter several safety and scalability concerns with the growing demand for application services. A novel Internet design, the Content Center Network (CCN), was introduced to address these issues comprehensively. Every hub within a CCN is responsible for data storage, and this collaboration guarantees users quick data retrieval. By collaborating with dual caches, network peers can access data from their own caches and leverage other peers&#39; caches, resulting in improved cache utilization and overall network speed. The present study examines multimodal digital artworks&#39; form, style, and action relationships and views them as holistic creative units. The study examines the complex structure of digital content following current information. We present a distributed index incorporating spatio-temporal information to address the challenges of storing and retrieving large amounts of spatio-temporal data. This distributed index combines internal R-trees with external B+ trees to provide high-concurrency, low-latency indexing services for external applications. With double-buffer technology and the distributed index architecture, we can optimize the cache utility of content center networks and enhance the retrieval speed of multimedia data. Adopting the distributed index, designed to accommodate spatio-temporal data in multimedia digital art design, can enhance large-scale storage and retrieval for future Internet architectures.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_54-Exploring_Multimedia_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-Driven Decision Support Systems for Sustainable Energy Management in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150953</link>
        <id>10.14569/IJACSA.2024.0150953</id>
        <doi>10.14569/IJACSA.2024.0150953</doi>
        <lastModDate>2024-09-30T08:10:14.2330000+00:00</lastModDate>
        
        <creator>Ning MA</creator>
        
        <subject>Smart cities; artificial intelligence; decision support systems; sustainable energy management; urban resilience; interdisciplinary collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Due to the ongoing urbanization trend, smart cities are critical to designing a sustainable future. Urban sustainability involves action-oriented approaches for optimizing resource usage, reducing ecological impact, and enhancing overall efficiency. Energy management is one of the main concerns in urban, residential, and building planning. Artificial Intelligence (AI) uses data analytics and machine learning to drive business automation and handle intelligent tasks across numerous industries. Thus, AI needs to be considered in strategic planning, especially in the long-term strategy of smart city planning. Decision Support Systems (DSS) are integrated with human-machine interaction methods like the Internet of Things (IoT). As they grow in size and complexity, the communications of IoT smart devices, industrial equipment, sensors, and mobile applications present an increasing challenge in meeting Service Level Agreements (SLAs) across diverse cloud data centers and user requests. This challenge would be further compounded if the energy consumption of industrial IoT networks also increased tremendously. Thus, DSS models are necessary for automated decision-making in crucial IoT settings like intelligent industrial systems and smart cities. The present study examines how AI can be integrated into DSS to tackle the intricate difficulties of sustainable energy management in smart cities. It traces the evolution of DSSs and elucidates how AI enhances their functionalities, and it explores several AI methods, such as machine learning algorithms and predictive analytics, that aid in predicting, optimizing, and making real-time decisions inside urban energy systems. Furthermore, real-world instances from different smart cities highlight the practical applications, benefits, and interdisciplinary collaboration necessary to successfully implement AI-driven DSS in sustainable energy management.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_53-Artificial_Intelligence_Driven_Decision_Support_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forensic Facial Reconstruction from Sketch in Crime Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150952</link>
        <id>10.14569/IJACSA.2024.0150952</id>
        <doi>10.14569/IJACSA.2024.0150952</doi>
        <lastModDate>2024-09-30T08:10:14.2030000+00:00</lastModDate>
        
        <creator>Doaa M. Mohammed</creator>
        
        <creator>Mostafa Elgendy</creator>
        
        <creator>Mohamed Taha</creator>
        
        <subject>Sketch-to-Face; facial features; Sketch-to-Face CycleGAN; victim&#39;s identification; criminal offense</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Many crimes are committed every day all over the world, including criminal offenses spanning a wide range of illegal acts such as murder, theft, assault, rape, kidnapping, and fraud. Criminals pose a threat to security, which harms the public interest. In such cases, the police question all eyewitnesses at the crime scene, and sometimes witnesses who were present can remember the face of the criminal. The witness accurately describes the person&#39;s facial features in the report, such as the eyes, nose, etc. Law enforcement authorities use eyewitness information to identify the person. Criminal investigations can be accelerated by converting sketched faces into actual images, but this requires eyewitnesses to confirm the description in the report. Sketches make it very difficult to identify real human faces because they do not contain the details that help to catch criminals. In contrast, color photographs contain many details that help to identify facial features more clearly. This work proposes to generate color images using a modified modulation Sketch-to-Face CycleGAN and then pass them through the Generative Facial Prior-GAN (GFPGAN). The CycleGAN consists of a generator and a discriminator: the generator is used to generate colored images, and the discriminator is used to identify whether the images are real or fake. The outputs are then passed to GFPGAN to improve the quality of the colored images. A structural similarity index measure of 0.8154 is achieved when creating photorealistic images from sketches.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_52-Forensic_Facial_Reconstruction_from_Sketch_in_Crime_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Convolutional Neural Network-Based Predictive Model for Assessing the Learning Effectiveness of Online Courses Among College Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150951</link>
        <id>10.14569/IJACSA.2024.0150951</id>
        <doi>10.14569/IJACSA.2024.0150951</doi>
        <lastModDate>2024-09-30T08:10:14.1870000+00:00</lastModDate>
        
        <creator>Xuehui Zhang</creator>
        
        <creator>Lin Yang</creator>
        
        <subject>Convolutional neural network; nutcracker optimization algorithm; assessment of learning effectiveness; college students; online courses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the development of artificial intelligence (AI) technology, higher education institutions usually consider both online courses and offline classrooms in the course design process. To verify the effectiveness of online courses, this study designed a deep learning model to analyze the learning behavior of online course users (college students) and predict their final grades. Firstly, our method summarizes several learning features that are used in machine learning models for predicting student grades, including the performance of users in online courses and their basic information. Based on the nutcracker optimization algorithm (NOA), we designed a multi-layer convolutional neural network (CNN) and developed an improved NOA (I-NOA) to optimize the internal parameters of the CNN. Prediction mainly includes two steps: firstly, analyzing users&#39; emotions based on their comments in online course forums; secondly, predicting the final grade based on the users&#39; emotions and other quantifiable learning features. To validate the effectiveness of the I-NOA-based CNN (I-NOA-CNN) algorithm, we evaluated it using a dataset consisting of five different online courses and a total of 120 students. The simulation results indicate that, compared with existing methods, the I-NOA-CNN algorithm has higher prediction accuracy, and the proposed model can effectively predict the learning effect of users.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_51-A_Convolutional_Neural_Network_Based_Predictive_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Path Planning for Autonomous Robots in Forest Fire Scenarios Using Hybrid Deep Reinforcement Learning and Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150950</link>
        <id>10.14569/IJACSA.2024.0150950</id>
        <doi>10.14569/IJACSA.2024.0150950</doi>
        <lastModDate>2024-09-30T08:10:14.1730000+00:00</lastModDate>
        
        <creator>N. K. Thakre</creator>
        
        <creator>Divya Nimma</creator>
        
        <creator>Anil V Turukmane</creator>
        
        <creator>Akhilesh Kumar Singh</creator>
        
        <creator>Divya Rohatgi</creator>
        
        <creator>Balakrishna Bangaru</creator>
        
        <subject>Adaptive path planning; deep reinforcement learning; disaster environments; drone rescuing; particle swarm optimization; forest fire</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The growing frequency of forest fires poses critical challenges for emergency response, necessitating innovative solutions for effective navigation and path planning in dynamic environments. This study investigates an adaptive technique to enhance the performance of autonomous robots deployed in forest fire scenarios. The primary objective is to develop a hybrid methodology that integrates advanced learning strategies with optimization techniques to improve path planning under rapidly changing conditions. To achieve this, a simulation-based framework was established in which autonomous robots were tasked with navigating diverse forest fire scenarios. The method involves training a model to dynamically adapt to environmental changes while optimizing path selection in real time. Performance metrics such as path efficiency, adaptability to obstacles, and response time were analyzed to assess the effectiveness of the proposed solution. Results indicate a significant improvement in path planning performance compared to traditional methods, with enhanced adaptability leading to faster response times and more effective navigation. The findings underscore the capability of the proposed method to cope with the complexities of forest fire environments, demonstrating its potential for real-world application in disaster response. In the proposed DRL-PSO framework, execution time is reduced by up to 95% and a success rate of 95% is achieved compared to conventional methods. Python is used to implement the proposed work. With an execution time of 68.3 seconds and the highest success rate among the evaluated strategies, the proposed method offers a powerful solution for autonomous drone navigation in dangerous situations. Overall, this research contributes valuable insights into adaptive path planning for autonomous robots in hazardous situations, providing a robust framework for future advancements in disaster management technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_50-Dynamic_Path_Planning_for_Autonomous_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware of Windows OS Using AI Classification for Image of Extracted Behavior Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150949</link>
        <id>10.14569/IJACSA.2024.0150949</id>
        <doi>10.14569/IJACSA.2024.0150949</doi>
        <lastModDate>2024-09-30T08:10:14.1570000+00:00</lastModDate>
        
        <creator>Kang Dongshik</creator>
        
        <creator>Noor Aldeen Alhamedi</creator>
        
        <subject>Malware analysis; dynamic-based analysis; image classification; malware behavior extraction; text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Malware detection is crucial for protecting digital environments. Traditional methods involve static and dynamic analysis, but recent advancements leverage artificial intelligence (AI) to enhance detection accuracy. This study aims to improve malware detection by integrating dynamic malware analysis with AI-driven techniques. The primary challenge addressed is accurately classifying and detecting malware based on behavior extracted from isolated virtual machines. By analyzing 50 malware samples and 11 benign programs, we extract ten behavioral features such as process ID, CPU usage, and network connections. We employ text-based classification using feedforward neural networks (FNN) and recurrent neural networks (RNN), achieving accuracy rates of 56% and 68%, respectively. Additionally, we convert the extracted features into grayscale images for image-based classification with a convolutional neural network (CNN), resulting in a higher accuracy of 70.1%. This multi-modal approach, combining behavioral analysis with AI, not only enhances detection accuracy but also provides a comprehensive understanding of malware behavior compared to competing methods.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_49-Detecting_Malware_of_Windows_OS_Using_AI_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Translation-Based Language Modeling Enables Multi-Scenario Applications of English Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150948</link>
        <id>10.14569/IJACSA.2024.0150948</id>
        <doi>10.14569/IJACSA.2024.0150948</doi>
        <lastModDate>2024-09-30T08:10:14.1400000+00:00</lastModDate>
        
        <creator>Shengming Liu</creator>
        
        <subject>Machine translation; decoder; English language; multi-scene; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Traditional machine translation models suffer from problems such as long training time and insufficient adaptability when dealing with multiple English language scenarios, and some models often struggle to meet practical translation needs in complex language environments. A translation model that combines a feed-forward neural network decoder with an attention mechanism is suggested as a solution to this problem. Additionally, the model analyzes the similarity of the English language to enhance its translation ability. The resulting machine translation model can be applied to different English scenarios. The study&#39;s findings showed that the model performs better when the convolutional and attention layers have a higher number of layers relative to one another. The highest average bilingual evaluation understudy (BLEU) score for the proposed model was 29.65. The proposed model can machine translate different English language application scenarios and performed better than the traditional model, translating the English language well in a variety of settings. The model used in the study had the maximum parameter data size of 4586, which is 932 higher than the lowest statistical machine translation model of 3654, and its metric value was 3.96 higher than that of the statistical machine translation model. It is evident that the model can enhance the English language scene translation effect, with each scene doing well in translation. This provides new ideas for the future direction of multi-scene applications of machine translation language models.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_48-Machine_Translation_Based_Language_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Basira: An Intelligent Mobile Application for Real-Time Comprehensive Assistance for Visually Impaired Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150947</link>
        <id>10.14569/IJACSA.2024.0150947</id>
        <doi>10.14569/IJACSA.2024.0150947</doi>
        <lastModDate>2024-09-30T08:10:14.1270000+00:00</lastModDate>
        
        <creator>Amal Alshahrani</creator>
        
        <creator>Areej Alqurashi</creator>
        
        <creator>Nuha Imam</creator>
        
        <creator>Amjad Alghamdi</creator>
        
        <creator>Raghad Alzahrani</creator>
        
        <subject>Visual impairment; mobility application; computer vision; object detection; obstacle detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Individuals with visual impairments face numerous challenges in their daily lives, with navigating streets and public spaces being particularly daunting. The inability to identify safe crossing locations and assess the feasibility of crossing significantly restricts their mobility and independence. The profound impact of visual impairments on daily activities underscores the urgent need for solutions to improve mobility and enhance safety. This study aims to address this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. The Basira mobile application was developed using the Flutter platform and integrated with a detection model. The application features voice command functionality to guide users during navigation and assist in identifying daily items. It can recognize a wide range of obstacles and objects in real-time, enabling users to make informed decisions while navigating. Initial testing of the application has shown promising results, with clear improvements in users&#39; ability to navigate safely and confidently in various environments. Basira enhances independence and contributes to improving the quality of life for individuals with visual impairments. This study represents a significant step towards developing innovative technological solutions aimed at enabling all individuals to navigate freely and safely.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_47-Basira_An_Intelligent_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Privacy Preservation Protocol for IOT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150946</link>
        <id>10.14569/IJACSA.2024.0150946</id>
        <doi>10.14569/IJACSA.2024.0150946</doi>
        <lastModDate>2024-09-30T08:10:14.1270000+00:00</lastModDate>
        
        <creator>Ahmed Mahmoud Al-Badawy</creator>
        
        <creator>Mohammed Belal</creator>
        
        <creator>Hala Abbas</creator>
        
        <subject>IOT; data privacy; lightweight protocols; end to end protection; data and metadata protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The rapid evolution of the Internet of Things (IoT) in terms of hardware, software, and communication has led to its widespread expansion across many domains and sectors. This expansion increases the transfer of sensitive data for complex calculations and decision making, which in turn increases data attacks and leakage and results in data privacy violations. Although many current solutions attempt to fulfill data privacy via lightweight mechanisms, they neither provide end-to-end protection nor focus on metadata protection, even though metadata can reveal valuable information about the data it describes. This paper presents a lightweight, complete data privacy protocol that manages the lifecycle of data from object registration until data transfer to the cloud. The proposed protocol is trusted-third-party free (TTP-Free) and adopts anonymization techniques, a lightweight key agreement protocol, end-to-end encryption, and message authentication codes to fulfill identity and data protection, thereby achieving complete data privacy.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_46-A_Lightweight_Privacy_Preservation_Protocol_for_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Subjectivity Analysis of an Enhanced Feature Set for Code-Switching Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150945</link>
        <id>10.14569/IJACSA.2024.0150945</id>
        <doi>10.14569/IJACSA.2024.0150945</doi>
        <lastModDate>2024-09-30T08:10:14.0930000+00:00</lastModDate>
        
        <creator>Emaliana Kasmuri</creator>
        
        <creator>Halizah Basiron</creator>
        
        <subject>Subjectivity analysis; code-switching; enhanced feature sets; Malay-English text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The phenomenon of code-switching has posed a new challenge to the linguistic computing area. Conventionally, the computer will process monolingual text or multilingual text. However, code-switching is different from this kind of text. Two or more languages are used to construct a piece of code-switching text, particularly a code-switching sentence. It is challenging for the computer to process a piece of code-switching text with languages that exist simultaneously. The challenge is more intense for the computer in subjectivity analysis, where the computer should distinguish subjective from objective code-switching text. This paper proposed three feature sets for subjectivity analysis on Malay-English code-switching text: Embedded Code-Switching Feature Sets, Unified Code-Switching Feature Sets, and Stylistic Feature Sets. These feature sets were enhanced from the monolingual feature set of subjectivity analysis. Experiments were conducted using data harvested from Malay-English blogs. These data were labelled as either subjective or objective. Two machine learning classifiers, Support Vector Machine (SVM) and Naive Bayes, were used to evaluate the classification performance of the proposed feature sets. The experiments were carried out on individual feature sets and their combinations. The results show that the classification performance from combining the unified and stylistic feature sets surpassed the other proposed feature sets at 59% accuracy. Therefore, it is concluded that the combination of unified and stylistic feature sets is necessary for the subjectivity analysis of Malay-English code-switching text.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_45-Subjectivity_Analysis_of_an_Enhanced_Feature_Set.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning with IoT-Based Solar Energy System for Future Smart Agriculture System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150944</link>
        <id>10.14569/IJACSA.2024.0150944</id>
        <doi>10.14569/IJACSA.2024.0150944</doi>
        <lastModDate>2024-09-30T08:10:14.0770000+00:00</lastModDate>
        
        <creator>Vidya M S</creator>
        
        <creator>Ravi Kumar B. N</creator>
        
        <creator>Anil G. N</creator>
        
        <creator>Ambika G. N</creator>
        
        <subject>Precision agriculture; smart monitoring; Internet of Things; Radial Basis Function Networks; Elephant Search Algorithm (ESA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Agriculture makes a considerable contribution to the economy, and agricultural automation is a serious issue that is becoming more prevalent around the world. Farmers&#39; traditional practices are insufficient to achieve these objectives. Artificial Intelligence (AI) and the Internet of Things (IoT) are being used in agriculture to improve crop yield and quality. Distributed solar energy resources can now be remotely operated, monitored, and controlled through IoT and deep learning technology. The development of an IoT-based solar energy system for intelligent irrigation is critical for water- and energy-stressed areas around the world. The qualitative design focuses on secondary data collection techniques. The deep learning model Radial Basis Function Networks (RBFN) is used in conjunction with the Elephant Search Algorithm (ESA) in this IoT-based solar energy system for future smart agriculture. Sensor systems help farmers understand their crops better, reduce their environmental impact, and conserve resources. These advanced systems enable effective soil and weather monitoring, as well as water management. To provide the required operating power, the proposed system, RBFN-ESA, employs an IoT-based solar cell forecasting process and collects these data to predict the required parameter values for solar energy systems in future smart agriculture systems. The results of the RBFN-ESA model are effective and efficient. According to the findings, RBFN-ESA outperforms CNN, ANN, SVM, RF, and LSTM in terms of energy consumption (56.764 J for 100 data points from the dataset), accuracy achieved (97.467% for 600 nodes), and soil moisture level (94.41% for 600 data points).</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_44-Deep_Learning_with_IoT_Based_Solar_Energy_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vehicular Traffic Congestion Detection System and Improved Energy-Aware Cost Effective Task Scheduling Approach for Multi-Objective Optimization on Cloud Fog Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150943</link>
        <id>10.14569/IJACSA.2024.0150943</id>
        <doi>10.14569/IJACSA.2024.0150943</doi>
        <lastModDate>2024-09-30T08:10:14.0630000+00:00</lastModDate>
        
        <creator>Praveen Kumar Mishra</creator>
        
        <creator>Amit Kumar Chaturvedi</creator>
        
        <subject>IoT; fog computing; task scheduling; multi objective Model; iFogSim tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>A current research area called fog computing aims to extend the advantages of cloud computing to network edges. Task scheduling is a crucial problem for fog device data processing, since a lot of data from the sensor or Internet of Things layer is generated at the fog layer. This research proposes a vehicular traffic congestion detection model and an energy-aware cost-effective task scheduling (ECTS) method in a cloud fog scenario, allocating jobs to the fog nodes effectively. The proposed scheduling approach minimizes energy consumption and decreases expenses for time-sensitive real-time applications. The ECTS algorithm is implemented, and results are analysed using the iFogSim simulator. The suggested ECTS method is tested with five sets of inputs in this paper. The experiment&#39;s results show that ECTS minimizes energy consumption and execution cost in comparison to alternative algorithms, outperforming both the Round-Robin (RR) and Genetic Algorithm techniques. According to the simulation results, the suggested algorithm reduced overall costs by 13.38% and energy usage by 6.59% compared to the Genetic Algorithm (GA). Compared to RR, the proposed method minimizes energy use by 13.76% and total costs by 18.46%.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_43-Vehicular_Traffic_Congestion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Grape Detection Precision and Efficiency with a Novel Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150942</link>
        <id>10.14569/IJACSA.2024.0150942</id>
        <doi>10.14569/IJACSA.2024.0150942</doi>
        <lastModDate>2024-09-30T08:10:14.0470000+00:00</lastModDate>
        
        <creator>Xiaoli Geng</creator>
        
        <creator>Yaru Huang</creator>
        
        <creator>Yangxu Wang</creator>
        
        <subject>Computer vision; deep learning; Convolutional Neural Networks (CNN); real-time object detection; dual-path detection structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In the domain of modern agricultural automation, precise grape detection in orchards is pivotal for efficient harvesting operations. This study introduces the Grapes Enhanced Feature Detection Network (GEFDNet), leveraging deep learning and convolutional neural networks (CNN) to enhance target detection capabilities specifically for grape detection in orchard environments. GEFDNet integrates an innovative Enhanced Feature Fusion Module (EFFM) into an advanced YOLO architecture, employing a 16x downsampling Backbone for feature extraction. This approach significantly reduces computational complexity while capturing rich spatial hierarchies and accelerating model inference, which is crucial for real-time object detection. Additionally, an optimized dual-path detection structure with an attention mechanism in the Neck enhances the model&#39;s focus on targets and robustness against dense grape detection and complex background interference, a common challenge in computer vision applications. Experimental results demonstrate that GEFDNet achieves at least a 3.5% improvement in mean Average Precision (mAP@0.5), reaching 89.4%. It also has a 9.24% reduction in parameters and a 10.35 FPS increase in frame rate compared to YOLOv9. This advancement maintains high precision while improving operational efficiency, offering a promising solution for the development of automated harvesting technologies. The study is publicly available at: https://github.com/YangxuWangamI/GEFDNet.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_42-Elevating_Grape_Detection_Precision_and_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Artificial Intelligence in Enhancing Business Intelligence Capabilities for E-Commerce Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150941</link>
        <id>10.14569/IJACSA.2024.0150941</id>
        <doi>10.14569/IJACSA.2024.0150941</doi>
        <lastModDate>2024-09-30T08:10:14.0300000+00:00</lastModDate>
        
        <creator>Sinek Mehuli Br Perangin-Angin</creator>
        
        <creator>David Jumpa Malem Sembiring</creator>
        
        <creator>Asprina Br Surbakti</creator>
        
        <creator>Soleh Darmansyah</creator>
        
        <subject>Bidirectional Encoder Representations from Transformers (BERT); Graph Neural Networks (GNNs); business intelligence; e-commerce; product recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This research focuses on the application of BERT (Bidirectional Encoder Representations from Transformers) and Graph Neural Networks (GNNs) to improve business intelligence (BI) capabilities on e-commerce platforms. The main aim of the research is to develop automation methods for the classification of customer interactions and to create a more effective product recommendation system. In this study, BERT was used to analyze and classify customer interaction texts, including questions, complaints, and reviews, with accuracy reaching 97% and sentiment analysis accuracy of 93%. GNNs are applied to model complex relationships between customers and products based on transaction data, then used to provide product recommendations. The evaluation results show that the GNNs model achieved a mean average precision (MAP) of 0.92 and a normalized discounted cumulative gain (NDCG) of 0.88, indicating high relevance and accuracy in product recommendations. This research concludes that the integration of BERT and GNNs not only improves operational efficiency through classification automation but also provides added value in marketing strategies with better personalization of recommendations.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_41-The_Role_of_Artificial_Intelligence_in_Enhancing_Business_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Hybrid Quantum Key Distribution Concept for Multi-User Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150940</link>
        <id>10.14569/IJACSA.2024.0150940</id>
        <doi>10.14569/IJACSA.2024.0150940</doi>
        <lastModDate>2024-09-30T08:10:14.0170000+00:00</lastModDate>
        
        <creator>Begimbayeva Y</creator>
        
        <creator>Zhaxalykov T</creator>
        
        <creator>Makarov M</creator>
        
        <creator>Ussatova O</creator>
        
        <creator>Tynymbayev S</creator>
        
        <creator>Temirbekova Zh</creator>
        
        <subject>Quantum key distribution; quantum communication; multi-user networks; network security; quantum-based attacks; cryptography; point-to-point protocols; resource efficiency; information security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This paper investigates the increasing concerns related to the vulnerability of contemporary security solutions in the face of quantum-based attacks, which pose significant challenges to existing cryptographic methods. Most current Quantum Key Distribution (QKD) protocols are designed with a focus on point-to-point communication, limiting their application in broader network environments where multiple users need to exchange information securely. To address this limitation, a thorough analysis of twin-field-based algorithms is conducted, emphasizing their distinct characteristics and evaluating their performance in practical scenarios in Sections II, III, and IV. By synthesizing insights from these analyses, integrating cutting-edge advancements in Quantum Communication technologies, and drawing on proven methodologies from established point-to-point protocols, this study introduces a novel concept for a Hybrid Twin-Field QKD protocol in Section IV. This network-oriented approach is designed to facilitate secure communication in networks involving multiple users, offering a practical and scalable solution. The proposed protocol aims to reduce resource consumption while maintaining high-security standards, thereby making it a viable option for real-world quantum communication networks. This work contributes to the development of more resilient and efficient quantum networks capable of withstanding future quantum-based threats.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_40-Development_of_a_Hybrid_Quantum_Key_Distribution_Concept.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eavesdropping Interference in Wireless Communication Networks Based on Physical Layer Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150939</link>
        <id>10.14569/IJACSA.2024.0150939</id>
        <doi>10.14569/IJACSA.2024.0150939</doi>
        <lastModDate>2024-09-30T08:10:14.0000000+00:00</lastModDate>
        
        <creator>Mingming Chen</creator>
        
        <creator>Yuzhi Chen</creator>
        
        <subject>Wireless communication; sensors; network security; eavesdropping interference; clustering scenario</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Effective communication security protection can safeguard people&#39;s privacy from being violated. To improve the communication security of wireless communication networks, a collaborative eavesdropping interference scheme with added artificial noise is proposed, combining physical layer security with clustering scenarios to protect the communication security of wireless sensor networks. This scheme adds artificial noise to the transmitted signal to interfere with the eavesdropping signal, making the main channel the dominant channel and achieving eavesdropping interference in wireless communication networks. The results show that after using artificial noise, as the signal-to-noise ratio of the main channel increases from 0 to 20 dB, the confidentiality capacity increases from 0.5 to over 4.0. When the transmission power is 0.4 W, the confidentiality capacity reaches its maximum and does not depend on the signal-to-noise ratio. When the number of interfering nodes increases from 1 to 2, the confidentiality capacity increases from approximately 4.7 to around 5.8. The designed eavesdropping interference scheme can effectively protect the information security of the wireless communication network, making the main channel an advantageous channel and achieving complete confidentiality. This scheme can be applied to wireless communication networks to improve their security level.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_39-Eavesdropping_Interference_in_Wireless_Communication_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human-Computer Interaction Standardization and Systematization Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150938</link>
        <id>10.14569/IJACSA.2024.0150938</id>
        <doi>10.14569/IJACSA.2024.0150938</doi>
        <lastModDate>2024-09-30T08:10:13.9830000+00:00</lastModDate>
        
        <creator>Xiaoling Lyu</creator>
        
        <creator>Hongmiao Yuan</creator>
        
        <creator>Zhao Zhang</creator>
        
        <subject>Human-computer interaction; information technology; English education; standardization; systematization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Amidst the information technology boom, this study harnesses IT and human-computer interaction to revolutionize English education. We introduce an informatization-based English education model, grounded in literature review, fieldwork, and teaching experiments. The teaching experiments enhanced students&#39; English proficiency by 25%, particularly in listening and speaking, while student engagement and interest surged by 30%, underscoring the effectiveness of standardized, systematized English education in the digital era. Our approach addresses the gap between traditional teaching and technological advancements, offering personalized learning experiences that improve student outcomes, ensuring consistent teaching quality, and bridging educational divides. The study advocates for broader adoption of informatization in teaching, emphasizing the pivotal role of teachers in facilitating this educational shift. With significant outcomes, our research paves the way for future enhancements in English education, ensuring quality and equity in learning.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_38-Human_Computer_Interaction_Standardization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Reinforcement Learning-Based Carrier Tuning Algorithm for Mobile Communication Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150937</link>
        <id>10.14569/IJACSA.2024.0150937</id>
        <doi>10.14569/IJACSA.2024.0150937</doi>
        <lastModDate>2024-09-30T08:10:13.9700000+00:00</lastModDate>
        
        <creator>Weimin Zhang</creator>
        
        <creator>Xinying Zhao</creator>
        
        <subject>DRL; mobile network; carrier tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the evolution of mobile communication networks towards 5G and beyond to 6G, managing network resources presents unprecedented challenges, particularly in scenarios demanding high data rates, low latency, and extensive connectivity. Traditional resource allocation methods struggle with network dynamics and complexity, including user mobility, varying network loads, and diverse Quality of Service (QoS) requirements. Deep Reinforcement Learning (DRL), an emerging AI technique, demonstrates significant potential due to its adaptive and learning capabilities. This paper integrates user mobility and network load prediction into a DRL framework and proposes a novel reward function to enhance resource utilization efficiency while meeting real-time QoS demands. We establish a system model involving base stations and receiving terminals to simulate communication services within coverage areas. Comparative experiments analyze the performance of the DRL approach versus traditional methods across metrics such as throughput, delay, and spectral efficiency. Results indicate DRL&#39;s superiority in handling dynamic environments and fulfilling QoS needs, especially under heavy loads. This study introduces innovative approaches and tools for future mobile network resource management, paving the way for practical DRL implementations and enhancing overall network performance.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_37-Deep_Reinforcement_Learning_Based_Carrier_Tuning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Emergency Response: A Smart Ambulance System Using Game-Building Theory and Real-Time Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150936</link>
        <id>10.14569/IJACSA.2024.0150936</id>
        <doi>10.14569/IJACSA.2024.0150936</doi>
        <lastModDate>2024-09-30T08:10:13.9530000+00:00</lastModDate>
        
        <creator>Guneet Singh Bhatia</creator>
        
        <creator>Azhar Hussain Mozumder</creator>
        
        <creator>Saied Pirasteh</creator>
        
        <creator>Satinder Singh</creator>
        
        <creator>Moin Hasan</creator>
        
        <subject>Emergency medical services; ambulance dispatch optimization; advanced game theory; Negamax algorithm; real-time optimization; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Dispatching ambulances early and efficiently is a paramount and difficult task in the field of emergency medical services. To this end, the paper designs a smart ambulance system based on game-building theory. The system employs an advanced Negamax algorithm to optimize the dispatch of ambulances during emergencies. Beyond traditional methods, the system also incorporates real-time traffic data, patient condition severity, and dynamic resource allocation. The integration of predictive analytics and real-time data allows dynamic adaptation to changing urban conditions, optimal resource allocation, and minimized response times. In our simulations across extensive scenarios, the Negamax-based system significantly outperforms traditional methods in average response time, reducing it by more than 50%. The study not only improves the operational efficiency of emergency services but also presents an expandable framework for future developments in critical response systems, supporting their integration with smart city infrastructure and AI-based predictive emergency management.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_36-Enhancing_Emergency_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBRF: Random Forest Optimization Algorithm Based on DBSCAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150935</link>
        <id>10.14569/IJACSA.2024.0150935</id>
        <doi>10.14569/IJACSA.2024.0150935</doi>
        <lastModDate>2024-09-30T08:10:13.9370000+00:00</lastModDate>
        
        <creator>Wang Zhuo</creator>
        
        <creator>Azlin Ahmad</creator>
        
        <subject>Random forest; DBSCAN; feature selection; feature redundancy; classification algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The correlation and redundancy of features directly affect the quality of randomly selected features, weakening the convergence of random forests (RF) and reducing the performance of random forest models. This paper introduces an improved random forest algorithm, the Random Forest Algorithm Based on DBSCAN (DBRF). The algorithm utilizes DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to improve the feature extraction process and extract a more effective feature set. It first uses DBSCAN to group all features by their relevance, then selects features from each group in proportion to construct a feature subset for each decision tree, repeating this process until the random forest is built. The algorithm ensures the diversity of features in the random forest while eliminating, to some extent, the correlation and redundancy among features, thereby improving the quality of random feature selection. In the experimental verification, the classification predictions of three classifiers (CART, RF, and DBRF) were compared through ten-fold cross-validation on six datasets of different sizes, using accuracy, precision, recall, F1, and running time as validation indicators. The experiments showed that DBRF outperformed RF, with improved prediction performance, especially in terms of time complexity. The algorithm is suitable for various fields and can effectively improve classification prediction performance at a lower complexity level.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_35-DBRF_Random_Forest_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mixed Integer Programming Model Based on Data Algorithms in Sustainable Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150934</link>
        <id>10.14569/IJACSA.2024.0150934</id>
        <doi>10.14569/IJACSA.2024.0150934</doi>
        <lastModDate>2024-09-30T08:10:13.9230000+00:00</lastModDate>
        
        <creator>Shaobin Dong</creator>
        
        <creator>Aihua Li</creator>
        
        <subject>Mixed Integer Programming Model; sustainable; supply chain management; heuristic algorithm; adaptive genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the deepening of globalization and increasing demands for environmental sustainability, modern supply chains face increasingly complex management challenges. To reduce management costs and enhance efficiency, an experimental approach based on a Mixed Integer Programming Model is proposed, integrating heuristic algorithms with adaptive genetic algorithms. The objective is to improve both the efficiency and sustainability of supply chain management. Initially, the selection of suppliers within the supply chain is analyzed. Subsequently, heuristic algorithms and genetic algorithms are jointly employed to design, generate, and optimize initial solutions. Results indicate that during initial runs on the training and validation sets, the fitness values of the proposed method reached as high as 99.67 and 96.77 at the 22nd and 68th iterations, respectively. Moreover, on a training set of 112 samples, the accuracy of the proposed method was 98.56%, significantly outperforming other algorithms. Over five system runs, supplier selection and successful order allocation took merely 0.654s and 0.643s, respectively. In practical application analysis, at 99 iterations the proposed method incurred the minimum total cost of 962,700 yuan. These findings demonstrate that the method effectively minimizes supply chain management costs while maximizing efficiency, offering practical strategies for optimizing and sustainably developing supply chain management.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_34-Mixed_Integer_Programming_Model_Based_on_Data_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Wrapper-Based Feature Selection Technique Using Real-Valued Triangulation Topology Aggregation Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150933</link>
        <id>10.14569/IJACSA.2024.0150933</id>
        <doi>10.14569/IJACSA.2024.0150933</doi>
        <lastModDate>2024-09-30T08:10:13.8900000+00:00</lastModDate>
        
        <creator>Li Pan</creator>
        
        <creator>Wy-Liang Cheng</creator>
        
        <creator>Sew Sun Tiang</creator>
        
        <creator>Kim Soon Chong</creator>
        
        <creator>Chin Hong Wong</creator>
        
        <creator>Abhishek Sharma</creator>
        
        <creator>Touseef Sadiq</creator>
        
        <creator>Aasam Karim</creator>
        
        <creator>Wei Hong Lim</creator>
        
        <subject>Classification; exploration; exploitation; feature selection; metaheuristic search algorithm; machine learning; optimization; triangulation topology aggregation optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Feature selection is a critical preprocessing technique used to remove irrelevant and redundant features from datasets while maintaining or improving the accuracy of machine learning models. Recent advancements in this area have primarily focused on wrapper-based feature selection methods, which leverage metaheuristic search algorithms (MSAs) to identify optimal feature subsets. In this paper, we propose a novel wrapper-based feature selection method utilizing the Triangulation Topology Aggregation Optimizer (TTAO), a newly developed algorithm inspired by the geometric properties of triangular topology and similarity. To adapt the TTAO for binary feature selection tasks, we introduce a conversion mechanism that transforms continuous decision variables into binary space, allowing the TTAO—which is inherently designed for real-valued problems—to function efficiently in binary domains. TTAO incorporates two distinct search strategies, generic aggregation and local aggregation, to maintain an effective balance between global exploration and local exploitation. Through extensive experimental evaluations on a wide range of benchmark datasets, TTAO demonstrates superior performance over conventional MSAs in feature selection tasks. The results highlight TTAO&#39;s capability to enhance model accuracy and computational efficiency, positioning it as a promising tool to advance feature selection and support industrial innovation in data-driven tasks.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_33-A_Robust_Wrapper_Based_Feature_Selection_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Based Integrated Heads-Up Display for Motorcycle Helmet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150932</link>
        <id>10.14569/IJACSA.2024.0150932</id>
        <doi>10.14569/IJACSA.2024.0150932</doi>
        <lastModDate>2024-09-30T08:10:13.8900000+00:00</lastModDate>
        
        <creator>L. Raj</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>C. Batumalai</creator>
        
        <creator>Prabadevi B</creator>
        
        <subject>Heads-up display; motorcycle helmet; Internet of Things (IoT); android application; Raspberry Pi; product innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The prevalence of visual impairment among the global population is a growing concern, with rates continuing to rise at an alarming pace. According to statistics from the World Health Organization (WHO), an estimated 2.2 billion people globally live with some form of visual impairment. Several methods exist to aid the blind in everyday navigation, such as walking sticks and guide dogs. However, these aids do not come without their drawbacks. For instance, using traditional guide dogs may not be suitable for some individuals due to allergies, cultural beliefs, or being unable to take care of a living animal due to the level of responsibility required. Innovations such as smart walking sticks and robotic guide dogs are continually being developed to overcome these gaps and cater to the unique requirements of the visually impaired. Hence, this proposed system is equipped with a joystick-controlled robotic guide that mimics the responsibilities of a traditional guide dog. The proposed system features an obstacle avoidance feature that will detect obstacles in its environment to avoid collisions. It will also provide audio feedback through a Bluetooth-connected mobile application when an obstacle has been detected. The proposed system is a product innovation which can be targeted to benefit visually impaired users by providing them with more independence as well as convenience in terms of mobility. Upon performing acceptance testing with the target audience, the system has been found to achieve its target in aiding the guidance of blind individuals.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_32-IoT_Based_Integrated_Heads_Up_Display.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Mission Analysis Using ToT-Based Prompt Technology Utilized Generative AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150931</link>
        <id>10.14569/IJACSA.2024.0150931</id>
        <doi>10.14569/IJACSA.2024.0150931</doi>
        <lastModDate>2024-09-30T08:10:13.8770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>CoT; ToT; AI; Mission analysis; prompt technology; generative AI; SaganSat-0; remote sensing satellite; 720-degree camera; thermal infrared camera; Geiger counter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>A method for mission analysis using ToT (Tree of Thought)-based prompt technology with generative AI is proposed. Mission analysis requires methods for simulating the images expected to be acquired by the imaging mission instruments and the other mission instruments. To create these simulated images, ToT-based prompt technology with generative AI is used. An application of the proposed method is shown for the mission analysis of SaganSat-0, a remote sensing satellite that will carry three mission instruments: a 720-degree camera, a thermal infrared camera, and a Geiger counter. The simulated images and the Geiger counter sounds created by the proposed method are shown here together with the analyzed results.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_31-Method_for_Mission_Analysis_Using_ToT_Based_Prompt_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Image Retrieval Module for Cultural and Creative Products Based on DF-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150930</link>
        <id>10.14569/IJACSA.2024.0150930</id>
        <doi>10.14569/IJACSA.2024.0150930</doi>
        <lastModDate>2024-09-30T08:10:13.8600000+00:00</lastModDate>
        
        <creator>Meng Jiang</creator>
        
        <subject>Image retrieval; class vector; extreme gradient boosting; Chinese creative products; layer-by-layer processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the growth of the cultural and creative product industry, more and more cultural and creative products have been designed and published through different channels. An image-retrieval-based method is proposed to address the search problem of Chinese creative products in online channels. In this method, a cascaded forest performs layer-by-layer processing, with class vectors as the main content transferred through the forest system. An image attribute feature extraction process that introduces extreme gradient boosting is designed, and the aggregation of multi-scale and multi-region features is utilized to improve image retrieval performance. The experimental results showed that, in the similarity test of extracted image feature information, when the image contained three composite cultural and creative objects and the total image size reached 7M pixels, the similarity of image feature information was 97.6%. In the running-time analysis, the method took only 7.4ms to generate search results across seven fields. In the analysis of false search content, the method maintained a false search proportion within 6.0% when searching for a single cultural and creative product object. This indicates that the method achieves higher accuracy and efficiency in image retrieval of cultural and creative products and can provide technical support for the development of the cultural and creative industry.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_30-Construction_of_Image_Retrieval_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSS-LSTM: A Metaheuristic-Driven Optimization Approach for Efficient Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150929</link>
        <id>10.14569/IJACSA.2024.0150929</id>
        <doi>10.14569/IJACSA.2024.0150929</doi>
        <lastModDate>2024-09-30T08:10:13.8430000+00:00</lastModDate>
        
        <creator>Muhammad Nasir</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <creator>Souad Baowidan</creator>
        
        <creator>Humaira Arshad</creator>
        
        <creator>Wareesa Sharif</creator>
        
        <subject>Deep learning; text classification; Long Short-Term Memory; Ringed Seal Search; metaheuristic algorithms; Particle Swarm Optimization; Genetic Algorithm; Firefly Algorithm; hyperparameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The volume of digital data consumed by the average user is huge and increasing daily all over the world, which requires sophisticated methods to automatically process data, such as retrieving, searching, and formatting it, particularly for classifying text data. Long Short-Term Memory (LSTM) is a prominent deep learning model for text classification. Several metaheuristic approaches, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and the Firefly Algorithm (FF), have also been used to optimize Deep Learning (DL) models for classification. This study introduces an improved technique for text classification, called RSS-LSTM. The proposed technique optimizes the hyperparameters and kernel function of LSTM through the Ringed Seal Search (RSS) algorithm to enhance simplification and learning ability. The technique was compared and evaluated against state-of-the-art techniques such as GA-LSTM, PSO-LSTM, and FF-LSTM, achieving significantly better results: an accuracy of 96%, recall of 96%, precision of 96%, and f-measure of 95% on the Reuters-21578 dataset. In addition, it achieved an accuracy of 77%, recall of 77%, precision of 78%, and f-measure of 76% on the 20 Newsgroups dataset, and accuracy, recall, precision, and f-measure of 91%, 91%, 94%, and 90%, respectively, on the AG News dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_29-RSS_LSTM_Metaheuristic_Driven_Optimization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Capacity-Influenced Approach to Find Better Initial Solution in Transportation Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150928</link>
        <id>10.14569/IJACSA.2024.0150928</id>
        <doi>10.14569/IJACSA.2024.0150928</doi>
        <lastModDate>2024-09-30T08:10:13.8270000+00:00</lastModDate>
        
        <creator>Md. Toufiqur Rahman</creator>
        
        <creator>A R M Jalal Uddin Jamali</creator>
        
        <creator>Momta Hena</creator>
        
        <creator>Mohammad Mehedi Hassan</creator>
        
        <creator>Md Rafiul Hassan</creator>
        
        <subject>Transportation problem; least cost method; Vogel’s approximate method; cost matrix; transportation tableau; node; capacity; route; capacity-influenced; weighted opportunity cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Finding an Initial Basic Feasible Solution (IBFS) is the first and essential step in obtaining the optimal solution for any Transportation Problem. Numerous approaches are available in the literature to determine the IBFS; however, many of these methods are modifications of Vogel&#39;s Approximate Method (VAM) and/or the Least Cost Method (LCM). None of the existing methods directly consider the capacity of distributions among the nodes when selecting the allocation steps. While researchers have proposed various approaches and demonstrated improved solutions with numerical instances, they have not thoroughly investigated the underlying causes of these results. In this article, we explore the impact of capacity distributions among the nodes on the VAM and LCM in an experimental domain. The study introduces a novel and unique Capacity-Influenced Distribution Indicator (CI-DI) designed to control the flow of allocation. Ultimately, we propose a novel Capacity-Influenced approach that embeds both LCM and VAM to determine the IBFS for Transportation Problems (TPs). The novelty of the proposed approach lies in its direct consideration of capacity distribution among the nodes in the flow of allocations; this feature is lacking in LCM, VAM, and other established approaches. The proposed method develops a novel distribution indicator and a novel cost-entry-embedded, capacity-based matrix to control the flow of allocations and thereby finds the IBFS for the Transportation Problem. We have conducted extensive numerical experiments to assess the effectiveness of the proposed approach. Experimental analysis demonstrates that the proposed method is more efficient in finding the IBFS than existing approaches. Moreover, as it uses a one-time generated Distribution Indicator (DI) for all steps of allocation, it is computationally cheaper than VAM, which generates a DI for each step of allocation.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_28-A_Capacity_Influenced_Approach_to_Find_Bette_Initial_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Fertilizer Dispensing for Sustainable Agriculture Through Secured IoT-Blockchain Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150927</link>
        <id>10.14569/IJACSA.2024.0150927</id>
        <doi>10.14569/IJACSA.2024.0150927</doi>
        <lastModDate>2024-09-30T08:10:13.7970000+00:00</lastModDate>
        
        <creator>B. C. Preethi</creator>
        
        <creator>G. Sugitha</creator>
        
        <creator>T. B. Sivakumar</creator>
        
        <subject>Fertilizer dispensing; IoT sensors; blockchain; deep learning; convolutional neural network; greenhouse management; and decentralized application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Precision farming is essential for optimizing resource use and improving crop yields to attain sustainable agriculture. However, challenges like data insecurity, fertilizer costs, and inadequate consideration of soil health pose a hindrance to achieving these goals. To overcome these issues, the proposed work presents a novel approach for optimizing fertilizer dispensing by developing a framework connecting IoT and blockchain with a community of greenhouses. The system consists of IoT sensors installed inside the greenhouses to measure soil pH and nutrient values. The collected sensor data is compressed and stored securely off-chain on the IPFS (InterPlanetary File System), with its hash computed using Keccak-256. MetaMask transfers the data for blockchain registration and authentication. The data is then preprocessed using Z-score normalization, Label Encoding, and One-Hot Encoding to obtain a precise analysis. A Deep Learning-based Convolutional Neural Network (DL-CNN) is used to classify soil conditions and determine the appropriate fertilizer requirements. The results of the DL-CNN model are viewed in a dashboard through a Decentralized Application (D-App) that we developed to provide real-time information to consumers, field analysts, and agricultural organizations. Field analysts use the information to establish a control center for precisely applying fertilizers. The proposed method achieves a classification accuracy rate of 98.86%, thus improving soil health and providing a solution for effectively managing fertilizers.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_27-Optimized_Fertilizer_Dispensing_for_Sustainable_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing BLDC Motor Speed Control by Mitigating Bias with a Variation Model Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150925</link>
        <id>10.14569/IJACSA.2024.0150925</id>
        <doi>10.14569/IJACSA.2024.0150925</doi>
        <lastModDate>2024-09-30T08:10:13.7800000+00:00</lastModDate>
        
        <creator>Abdul Rahman Abdul Majid</creator>
        
        <subject>EV’s motors; brushless direct current (BLDC) motor; active disturbances rejection control (ADRC); disturbance rejection; bias estimation; Variation Model Filter (VMF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Brushless DC motors (BLDC) are integral to a wide array of applications, from electric vehicles to industrial machinery, due to their superior efficiency, reliability, and performance. Effective control of BLDC motors is essential to leverage their full potential and ensure optimal operation. Traditional PID controllers often fall short in handling the nonlinear and dynamic characteristics of BLDC systems, while advanced methods like Active Disturbance Rejection Control (ADRC) introduce additional complexity and cost. This research proposes a Variation Model Filter (VMF) based control system that estimates and compensates for the total bias arising from parameter variations and internal uncertainties. This method simplifies the control process, enhances robustness, and boosts performance without requiring extensive parameter tuning or high costs. Additionally, the paper provides a comprehensive mathematical model for the speed dynamics of BLDC motors. Simulation results based on MATLAB/Simulink indicate that the VMF-based PID control system surpasses both linear ADRC and traditional PID controllers in managing speed dynamics and responding to load disturbances. This approach offers an efficient and cost-effective solution for BLDC motor speed control, with significant potential for broader application and further optimization in motor control systems.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_25-Enhancing_BLDC_Motor_Speed_Control_by_Mitigating_Bias.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Monitoring of Bridge Structures via an Integrated Cloud and Edge Computing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150926</link>
        <id>10.14569/IJACSA.2024.0150926</id>
        <doi>10.14569/IJACSA.2024.0150926</doi>
        <lastModDate>2024-09-30T08:10:13.7800000+00:00</lastModDate>
        
        <creator>Guoqi Zhang</creator>
        
        <creator>Pengcheng Zhang</creator>
        
        <creator>Xingwang Li</creator>
        
        <creator>Yizhe Yang</creator>
        
        <subject>Dynamic monitoring of bridge structures; edge computing; cloud computing; data processing; modal analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Traditional bridge monitoring techniques, which predominantly rely on centralized data processing, often exhibit slow and inflexible responses when managing large-scale sensor network data. This study proposes an integrated edge and cloud computing approach to enhance the response time and data processing efficiency of dynamic bridge structure monitoring systems, thereby improving bridge safety and reliability. The proposed monitoring system leverages both edge and cloud computing, incorporating modules such as sensor data management, structural assessment and warning, data processing, monitoring, and data acquisition and transmission. High-performance and cost-effective sensors are utilized to monitor the real-time dynamic responses of the bridge, including displacement, acceleration, tilt, and stress, as well as external loads and environmental effects. The data processing module employs the modal superposition method, frequency response function, and modal analysis for dynamic analysis, while the cloud computing platform facilitates deep learning analysis and long-term data storage. A real case study demonstrates the system&#39;s performance across various settings and operational conditions, highlighting the effectiveness of integrating edge and cloud computing. The results indicate that the integration scheme significantly enhances monitoring accuracy, system stability, real-time response capacity, and data processing efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_26-Dynamic_Monitoring_of_Bridge_Structures_via_an_Integrated_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Priority-Based Round Robin: An Advanced Load Balancing Technique for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150924</link>
        <id>10.14569/IJACSA.2024.0150924</id>
        <doi>10.14569/IJACSA.2024.0150924</doi>
        <lastModDate>2024-09-30T08:10:13.7670000+00:00</lastModDate>
        
        <creator>Parupally Venu</creator>
        
        <creator>Pachipala Yellamma</creator>
        
        <creator>Yama Rupesh</creator>
        
        <creator>Yerrapothu Teja Naga Eswar</creator>
        
        <creator>Maruboina Mahiddar Reddy</creator>
        
        <subject>Load balancer; traffic distribution; cloud computing; resource utilization; scalability; Dynamic Priority Based Round Robin (DPBRR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>An imbalance of load is a fundamental problem in cloud computing, where the division of work between virtual machines is not well-optimized. Performance bottlenecks result from this unequal resource allocation, which keeps the system from operating at its full capability. Managing this load-balancing issue becomes critical to improving overall efficiency, resource utilization, and responsiveness as cloud infrastructures strive to respond to changing workloads and scale dynamically. A survey of the load-balancing landscape, examining how existing algorithms work, motivated a new strategy for effectively improving load-balancing performance. The &quot;Dynamic Priority Based Round Robin&quot; (DPBRR) algorithm is a new approach that combines three different algorithms to improve cloud load balancing. This method improves load balancing by taking the best aspects of previous algorithms and refining them. It works remarkably well and responds quickly to commands, greatly reducing processing time. The DPBRR algorithm also improves cloud load balancing in many respects, including resource consumption, efficiency, scalability, fault tolerance, and cost optimization. As a combination of algorithms it may have drawbacks, but its enhancements to cloud computing are very useful for completing many tasks quickly, and it proves both strong and adaptable.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_24-Dynamic_Priority_Based_Round_Robin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIEM and Threat Intelligence: Protecting Applications with Wazuh and TheHive</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150923</link>
        <id>10.14569/IJACSA.2024.0150923</id>
        <doi>10.14569/IJACSA.2024.0150923</doi>
        <lastModDate>2024-09-30T08:10:13.7500000+00:00</lastModDate>
        
        <creator>Jumiaty</creator>
        
        <creator>Benfano Soewito</creator>
        
        <subject>Application server security; application vulnerability; threat detection; SIEM; Wazuh; TheHive; Telegram and CVSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The consequences of cyberattacks on enterprises are highly varied. DDoS assaults can render an organization&#39;s website inaccessible, SQL attacks can compromise the integrity of data in a database, and Brute Force attacks can lead to unauthorized users gaining control over a server or application. Hence, it is crucial for enterprises to be aware of these potential dangers and employ solutions capable of monitoring networks, apps, and servers. In this study, the author employs Wazuh, TheHive, Telegram, and CVSS. Wazuh functions as a tool for monitoring applications and identifying potential security risks. TheHive classifies threats according to their level of importance. Telegram is utilized for dispatching notifications to the administrator. The findings indicate that Wazuh can promptly identify security risks by verifying that the date and time configurations on each utilized server align with the Indonesian time standard. Several vulnerabilities in the applications were successfully detected. The Wazuh server monitors two specific apps, namely Kompetensi and ESPPD. Surveillance commenced on March 20, 2024, at 17:49 and concluded on June 20, 2024, at 01:10, effectively amassing a total of 16,580 logs. Eleven essential alert categories require follow-up due to their potential to compromise the system&#39;s integrity, confidentiality, and availability. To validate the detection results, the Common Vulnerability Scoring System (CVSS) is used. The assessment of vulnerability levels varies depending on the Wazuh level and CVSS. This arises because CVSS assigns scores based on five exploitability characteristics and incorporates the expertise of specialists to determine the assessment category and evaluate the potential impact of a successful threat. The outcome of this assessment, involving professional expertise, is heavily influenced by the unique attributes of each company. As a result, even when evaluating the same threats, the assessment can yield varying results. Evaluations utilizing Wazuh and CVSS are highly efficient in determining the extent of discovered hazards. By integrating these two technologies, the produced findings become more accurate.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_23-SIEM_Threat_Intelligence_Protecting_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compactness-Weighted KNN Classification Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150922</link>
        <id>10.14569/IJACSA.2024.0150922</id>
        <doi>10.14569/IJACSA.2024.0150922</doi>
        <lastModDate>2024-09-30T08:10:13.7330000+00:00</lastModDate>
        
        <creator>Bengting Wan</creator>
        
        <creator>Zhixiang Sheng</creator>
        
        <creator>Wenqiang Zhu</creator>
        
        <creator>Zhiyi Hu</creator>
        
        <subject>K-nearest neighbors; feature weight; Minkowski distance; compactness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The K-Nearest Neighbor (KNN) algorithm is a widely used classical classification tool, yet enhancing the classification accuracy for multi-feature large datasets remains a challenge. The paper introduces a Compactness-Weighted KNN classification algorithm using a weighted Minkowski distance (CKNN) to address this. Due to the variability in sample distribution, a method for deriving feature weights based on compactness is designed. Subsequently, a formula for calculating the weighted Minkowski distance using compactness weights is proposed, forming the basis for developing the CKNN algorithm. Comparative experimental results on five real-world datasets demonstrate that the CKNN algorithm outperforms eight existing variant KNN algorithms in Accuracy, Precision, Recall, and F1 performance metrics. The test results and sensitivity analysis confirm the CKNN&#39;s efficacy in classifying multi-feature datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_22-Compactness_Weighted_KNN_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining for the Analysis of Student Assessment Results in Engineering by Applying Active Didactic Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150921</link>
        <id>10.14569/IJACSA.2024.0150921</id>
        <doi>10.14569/IJACSA.2024.0150921</doi>
        <lastModDate>2024-09-30T08:10:13.7200000+00:00</lastModDate>
        
        <creator>C&#233;sar Baluarte-Araya</creator>
        
        <creator>Oscar Ramirez-Valdez</creator>
        
        <creator>Ernesto Suarez-Lopez</creator>
        
        <creator>Percy Huertas-Niqu&#233;n</creator>
        
        <subject>Student outcome; problem based learning; assessment; performance indicator; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>To improve the teaching-learning process in educational institutions such as universities, it is necessary to analyse the results obtained and recorded from applying Active Didactic Strategies and, based on this analysis, to propose improvements that will help achieve the Student Outcomes established for the subject in question; this defines the problem to be solved, and the results obtained from the analysis are relevant for improving student performance. The objective is to analyse student assessment results, calculated from the recorded qualification achieved on the performance indicators defined for each criterion of the competencies involved and aligned with the Student Outcomes of the problems proposed to the students, by applying various data mining techniques. Data mining is used to treat large amounts and types of data to obtain hidden information and reveal states, patterns and trends, as well as in Education to study the behaviour of students in terms of their performance. The methodology used for the development of the work is based on the Cross-Industry Standard Process for Data Mining methodological model, which is widely used in data mining projects. The results obtained reveal that the Student&#39;s t-test and Snedecor&#39;s F-test are highly significant, and they identify the lowest performance indicators in order to plan future improvement actions towards better student performance and a high level of learning. The study concludes that if the same teaching and learning process is applied, the results will be very similar, indicating that the students have learned very well.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_21-Data_Mining_for_the_Analysis_of_Student_Assessment_Results.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Capturing Quality Requirements by Integrating the Requirement Engineering Elements in Agile Software Development Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150920</link>
        <id>10.14569/IJACSA.2024.0150920</id>
        <doi>10.14569/IJACSA.2024.0150920</doi>
        <lastModDate>2024-09-30T08:10:13.7030000+00:00</lastModDate>
        
        <creator>Yuli Fitrisia</creator>
        
        <creator>Rosziati Ibrahim</creator>
        
        <subject>Quality requirement; requirement engineering; ASD; framework; ASD practitioners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The early phase of Agile Software Development (ASD) methods is Requirement Engineering (RE). Quality Requirement (QR) is a type of RE that needs to be captured at the initial development phase to reduce rework, time, and maintenance costs. However, QR remains a recognized issue in ASD, namely the limited capability to elicit, analyze, document, and manage QR. Therefore, this research aims to propose a framework for capturing QR to address QR issues in ASD by integrating RE elements, namely the RE phases, Documentation, Roles, and RE techniques. This research was conducted in four phases: 1) undertaking a theoretical study, 2) conducting an exploratory study to identify the current practices and issues in capturing QR in ASD, 3) constructing the framework using the RE elements, and 4) evaluating the framework by gathering ASD practitioners’ views using questionnaires. The questionnaires were then analyzed using descriptive statistics based on the average mean of each element. The result shows the average mean for all elements (4.25), and the average mean of each element: the RE phases (4.36), the documentation (4.11), the roles (4.25), and the RE techniques (4.18). The mean of each element is more than 4 out of 5, indicating that the framework to capture QR is verified. Thus, this framework can be used by ASD practitioners as a guideline to capture QR in ASD methods.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_20-A_Framework_for_Capturing_Quality_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeeplabV3+ Model with CBAM and CSPM Attention Mechanism for Navel Orange Defects Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150919</link>
        <id>10.14569/IJACSA.2024.0150919</id>
        <doi>10.14569/IJACSA.2024.0150919</doi>
        <lastModDate>2024-09-30T08:10:13.6870000+00:00</lastModDate>
        
        <creator>Guo Jinmei</creator>
        
        <creator>Wan Nurshazwani Wan Zakaria</creator>
        
        <creator>Wei Bisheng</creator>
        
        <creator>Muhammad Azmi Bin Ayub</creator>
        
        <subject>Navel oranges; defect detection; DeeplabV3+; HECA-MobileNetV3; CBAM attention mechanism; CSPM mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Accurate defect detection of navel oranges is the key to ensuring the quality of navel oranges and extending their storage life. An improved DeeplabV3+ model integrating attention mechanisms is proposed to address the currently low recognition accuracy and slow detection speed of defect detection in the navel orange grading and sorting process. The improved lightweight backbone network HECA-MobileNetV3 is applied in the DeeplabV3+ model to reduce the amount of computational data and improve the image processing speed. In addition, the Convolutional Block Attention Module (CBAM) and the Channel Space Parallel Mechanism (CSPM) are integrated into the DeeplabV3+ model. The ASPP structure is redesigned and the low-level feature extraction network is optimized to enhance the capture of target edge information and improve the segmentation effect of the model. Experimental results show that the proposed model exhibits better MIoU and MPA of 89.50% and 94.02%, which are 7.27% and 3.51% higher than the basic model, respectively, while reducing parameters by 49.42M and increasing detection speed by 55.6fps. The results are superior to those of the U-Net, SegNet and PSP-Net semantic segmentation networks. As a result, the proposed method provides better real-time performance, which meets the requirements of industrial production for detection accuracy and speed.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_19-DeeplabV3_Model_with_CBAM_and_CSPM_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Feature and Mel-Spectrogram Analysis for Music Sound Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150918</link>
        <id>10.14569/IJACSA.2024.0150918</id>
        <doi>10.14569/IJACSA.2024.0150918</doi>
        <lastModDate>2024-09-30T08:10:13.6730000+00:00</lastModDate>
        
        <creator>M. E. ElAlami</creator>
        
        <creator>S. M. K. Tobar</creator>
        
        <creator>S. M. Khater</creator>
        
        <creator>Eman. A. Esmaeil</creator>
        
        <subject>Mel-spectrograms; ML; texture features; MC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>The categorization of music has received substantial interest in the management of large-scale databases. However, music sound classification (MC) has received comparatively little attention, making it a significant challenge. For this reason, this paper proposes a new robust method combining texture features with Mel-spectrograms to classify Arabic music sounds. A music audio dataset consisting of 404 sound recordings covering four different classes of Arabic music sounds has been collected. The collected data is freely available on the Kaggle website. Firstly, each music sound is transformed into a Mel-spectrogram, and then several texture features are extracted from these Mel-spectrogram images. A two-dimensional Haar wavelet is applied to each Mel-spectrogram image, and Local Binary Patterns (LBP), Gray Level Co-occurrence Matrix (GLCM), and Histogram of Oriented Gradients (HOG) are utilized for feature extraction. K-nearest neighbors (KNN), random forest (RF), decision tree (DT), logistic regression (LR), AdaBoost, extreme gradient boosting (XGB), and support vector machine (SVM) classifiers were utilized in a comparative analysis of Machine Learning (ML) algorithms. Two different datasets were employed to evaluate the effectiveness of the approach: the dataset gathered by the authors and the global GTZAN dataset. The method demonstrates superior performance with five-fold cross-validation. The experimental findings indicated that XGB exhibited high accuracy, with an average performance of 97.80% for accuracy, 97.72% for F1-Score, 97.75% for recall, and 97.81% for precision.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_18-Texture_Feature_and_Mel_Spectrogram_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pre-Encryption Ransomware Detection (PERD) Taxonomy, and Research Directions: Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150917</link>
        <id>10.14569/IJACSA.2024.0150917</id>
        <doi>10.14569/IJACSA.2024.0150917</doi>
        <lastModDate>2024-09-30T08:10:13.6570000+00:00</lastModDate>
        
        <creator>Mujeeb Ur Rehman Shaikh</creator>
        
        <creator>Mohd Fadzil Hassan</creator>
        
        <creator>Rehan Akbar</creator>
        
        <creator>Rafi Ullah</creator>
        
        <creator>K.S. Savita</creator>
        
        <creator>Ubaid Rehman</creator>
        
        <creator>Jameel Shehu Yalli</creator>
        
        <subject>Cybersecurity; ransomware detection; static and dynamic analysis; machine learning; cyber-attacks; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Today’s era is witnessing an alarming surge in ransomware attacks, propelled by the increasingly sophisticated obfuscation tools deployed by cybercriminals to evade conventional antivirus defenses. Therefore, there is a need to better detect such obfuscated ransomware. This analysis embarks on a comprehensive exploration of the intricate landscape of ransomware threats, which will become even more problematic in the upcoming era. Attackers may practice new encryption approaches or obfuscation methods to create ransomware that is more difficult to detect and analyze. The damage caused by ransomware ranges from financial losses, at best the ransom paid, to the loss of human life. We present a Systematic Literature Review and quality analysis of published research papers on the topic. We investigated 30 articles published between 2018 and 2023 (H1). The outline of what has been published thus far is reflected in the 30 papers that were chosen and explained in this article. One of our main conclusions was that machine learning (ML) based detection models performed better than others. Additionally, we discovered that only a small number of papers were able to receive excellent ratings based on the standards for quality assessment. To identify past research practices and provide insight into potential future guidelines in the pre-encryption ransomware detection (PERD) space, we summarized and synthesized the existing machine learning studies for this SLR. Future researchers can use this study as a roadmap to investigate the preexisting literature efficiently and effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_17-Pre_Encryption_Ransomware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart-SecureCloud: A Secure Cloud-Based Hybrid DL System for Diagnosis of Heart Disease Through Transformer-Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150916</link>
        <id>10.14569/IJACSA.2024.0150916</id>
        <doi>10.14569/IJACSA.2024.0150916</doi>
        <lastModDate>2024-09-30T08:10:13.6400000+00:00</lastModDate>
        
        <creator>Talal Saad Albalawi</creator>
        
        <subject>Heart disease diagnosis; deep learning; cloud computing; feature extraction; data security; hyperparameter optimization; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Cardiovascular disease (CVD) has increased rapidly since COVID-19. Several computerized systems have been developed in the past to diagnose CVD. However, the high computing expenses of deep learning (DL) models and the complexity of their architectures are significant issues. Therefore, to resolve these issues, an accurate diagnosis of CVD is required. This paper proposes a hybrid and secure deep learning (DL) system known as Heart-SecureCloud to predict multiclass heart diseases. The Heart-SecureCloud system comprises four major stages: preprocessing and augmentation, feature extraction and transformation, deep learning and hyperparameter optimization, and cloud security. Advanced signal processing and augmentation technologies are applied to ECG data in the preprocessing and augmentation stage to enhance data quality. In the feature extraction and transformation stage, adaptive wavelet transforms and feature scaling are used to extract and convert spectral and temporal data. The DL and hyperparameter optimization stage utilizes a novel hybrid transformer-recurrent neural network model, which is further optimized for accuracy and efficiency using hyperband-GA. Transfer learning refines pre-trained models using domain-specific data. The unique aspect of the Heart-SecureCloud system is its implementation through a secure cloud, which safeguards medical data with encryption and access control mechanisms. The system&#39;s efficacy is demonstrated through testing and evaluation on three publicly available datasets: MIT-BIH Arrhythmia, MIMIC-III Waveform, and PTB-ECG. The Heart-SecureCloud DL architecture achieved impressive results of 98.75% accuracy, 98.80% recall, 98.70% precision, and a 98.75% F1-score. Moreover, Heart-SecureCloud underscores its promise for safe deployment in medical diagnostics.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_16-Heart_SecureCloud_A_Secure_Cloud_Based_Hybrid_DL_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Recognition and Labeling of Knowledge Points in Learning Test Questions Based on Deep-Walk Image Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150915</link>
        <id>10.14569/IJACSA.2024.0150915</id>
        <doi>10.14569/IJACSA.2024.0150915</doi>
        <lastModDate>2024-09-30T08:10:13.6270000+00:00</lastModDate>
        
        <creator>Ying Chang</creator>
        
        <creator>Qinghua Zhu</creator>
        
        <subject>Deep-walk; image data mining; study test questions; knowledge point recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This paper deeply studies and discusses the application of image data mining technology based on the Deep-Walk algorithm in the automatic recognition and annotation of knowledge points in learning test questions. With the rapid development of educational informatization, how to effectively mine and label the knowledge points in learning test questions from image data has become an urgent problem to be solved. In this paper, we introduce a novel approach that integrates graph embedding technology with natural language processing techniques. Initially, we leverage the Deep-Walk algorithm to embed the knowledge points present in the test question images, effectively transforming the high-dimensional image data into a low-dimensional vector representation. This transformation preserves the intricate structural information while capturing the subtle semantic nuances embedded within the image data. Subsequently, we undertake a thorough semantic analysis of these vectors, seamlessly integrating natural language processing techniques, to facilitate automated recognition with high precision. This methodology not only elevates the accuracy of knowledge point recognition but also achieves semantic annotation of these points, thereby furnishing richer, more insightful data support for subsequent intelligent education applications. Through experimental verification, the proposed method has achieved remarkable results on multiple data sets, which proves its feasibility and effectiveness in practical applications. Furthermore, this paper delves into the expansive potential applications of this methodology in the realm of image data mining, encompassing areas such as online education, intelligent tutoring systems, personalized learning frameworks, and numerous other domains. Looking ahead, we aim to refine the algorithm, enhance recognition accuracy, and uncover additional application scenarios, thereby contributing significantly to the intelligent evolution of the education sector.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_15-Automatic_Recognition_and_Labeling_of_Knowledge_Points.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application and Effectiveness of Improving Retrieval Systems Based on User Understanding in Smart Archive Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150914</link>
        <id>10.14569/IJACSA.2024.0150914</id>
        <doi>10.14569/IJACSA.2024.0150914</doi>
        <lastModDate>2024-09-30T08:10:13.6100000+00:00</lastModDate>
        
        <creator>Chao Yan</creator>
        
        <subject>Knowledge graph; user understanding; retrieval system; smart archive management; BiLSTM-CRF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In traditional archive management systems, keyword-based retrieval systems often fail to meet users&#39; personalized and precise retrieval needs. To solve this problem, a knowledge graph is first constructed using bidirectional long short-term memory networks and conditional random fields and combined with user understanding-based semantic retrieval to obtain an improved personalized retrieval system. The research results show that the improved personalized retrieval system has significantly better retrieval accuracy and recall rate than traditional retrieval systems. The improved personalized retrieval system has retrieval accuracy rates of 90.24%, 89.65%, 87.52%, 96.33%, and 95.18% for students, civil servants, demobilized soldiers, law enforcement personnel, and retirees, respectively, and recall rates of 89.35%, 91.57%, 89.34%, 97.54%, and 96.63%, respectively. Applying it to the smart archive management system, the accuracy of archive retrieval, personalized recommendation accuracy, response time, and user satisfaction are significantly better than conventional management systems. The improvement and introduction of personalized retrieval systems based on user understanding and knowledge graphs have achieved significant results.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_14-Application_and_Effectiveness_of_Improving_Retrieval_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Combined Data Encryption and Trusted Network Methods to Improve the Network Security of the Internet of Things Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150913</link>
        <id>10.14569/IJACSA.2024.0150913</id>
        <doi>10.14569/IJACSA.2024.0150913</doi>
        <lastModDate>2024-09-30T08:10:13.5930000+00:00</lastModDate>
        
        <creator>Yudan Zhao</creator>
        
        <subject>Security protocols; Encryption; Internet of Things; Resource constraints; SM9</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the integration of big data and artificial intelligence, the Internet of Things has rapidly developed as the foundation for collecting data. The data collected by Internet of Things devices is mostly sensitive information, but limited resources can easily lead to data leakage. Therefore, this study adopts a combination of data encryption and trusted networks to improve the network security of the Internet of Things. This study proposes an Internet of Things network security system based on an improved SM9 encryption algorithm and iTLS security protocol. The system uses a key generation center to generate and distribute keys to complete information encryption and decryption, and identity authentication is carried out through dynamic keys. The results indicated that the total time for key generation, encryption, and decryption based on the SM9-iTLS network security system was 3.63 seconds. The total time for key generation, signature, and signature verification in the system was 3.65 seconds, which is better than other Internet of Things network security systems, and it also had better network resource occupancy and latency than other systems. The Internet of Things network security system based on improved SM9-iTLS can not only improve the security of information transmission among Internet of Things devices but also optimize the efficiency of information transmission. The research results have a certain promoting effect on developing the Internet of Things information security field.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_13-Using_Combined_Data_Encryption_and_Trusted_Network_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Fuzzy Decision Support System Based on GNN in Anomaly Detection and Incident Response Service of Intelligent Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150912</link>
        <id>10.14569/IJACSA.2024.0150912</id>
        <doi>10.14569/IJACSA.2024.0150912</doi>
        <lastModDate>2024-09-30T08:10:13.5770000+00:00</lastModDate>
        
        <creator>Tao Chen</creator>
        
        <creator>Xiaoqian Wu</creator>
        
        <subject>GNN; fuzzy decision support system; intelligent security; anomaly detection; incident response service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This paper introduces a fuzzy decision support system (FDSS) based on a graph neural network (GNN) for anomaly detection and intelligent security. The primary aim is to develop a robust system capable of accurately identifying anomalies and providing timely incident response services. GNNs are utilized to capture the complex relationships and features between nodes in graph data, learning the embedded representation of each node through information transfer and aggregation mechanisms, which encapsulate the structural information of the graph. The FDSS leverages these features to construct a fuzzy rule base and perform fuzzy inference, generating decision suggestions that enhance the system&#39;s adaptability and robustness in dealing with uncertain data. The challenges addressed include the need for efficient anomaly detection in large-scale surveillance networks, the requirement for fast response times during emergencies, and the necessity for scalable and adaptable systems. Experimental results demonstrate that the GNN-based FDSS surpasses other methods in terms of anomaly detection accuracy, incident response service efficiency, system processing capacity, and model generalization ability. Compared to traditional statistical methods, machine learning models, and deep learning models, the proposed system maintains high precision and recall rates, processes data more efficiently, and adapts well to new datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_12-Application_of_Fuzzy_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Speech Recognition Technology Based on Multimodal Information in Human-Computer Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150911</link>
        <id>10.14569/IJACSA.2024.0150911</id>
        <doi>10.14569/IJACSA.2024.0150911</doi>
        <lastModDate>2024-09-30T08:10:13.5630000+00:00</lastModDate>
        
        <creator>Yuan Zhang</creator>
        
        <subject>Multimodal information; speech recognition; intelligent optimization algorithm; multimodal human-computer interaction; CTC; attention mechanisms; artificial bee colony algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Multimodal human-computer interaction is an important trend in the development of the human-computer interaction field. To accelerate technological change in human-computer interaction systems, this study first fuses the Connectionist Temporal Classification (CTC) algorithm with an attention mechanism to design a speech recognition architecture, and then further optimizes the end-to-end speech recognition architecture using an improved artificial bee colony algorithm, yielding a speech recognition model suited to multimodal human-computer interaction systems. CTC is a machine learning algorithm for sequence-to-sequence problems, while the attention mechanism allows the model to focus on the relevant parts of the input data. The experimental results show that the hypervolume of the improved bee colony algorithm converges to 0.861, compared with 0.676 for the traditional bee colony algorithm, an improvement of 0.099 and 0.059 over the ant colony and differential evolution algorithms; the inverse generational distance of the improved algorithm converges to 0.194, while those of the traditional bee colony, ant colony, and differential evolution algorithms converge to 0.263, 0.342, and 0.246, respectively. Hypervolume and inverse generational distance measure the diversity and convergence of the solution set. The proposed speech recognition model scores higher than other models on accuracy, precision, and recall, with the lowest error rates at the character, word, and sentence levels of 0.037, 0.036, and 0.035, respectively, ensuring high recognition accuracy while balancing the real-time rate. In the multimodal interactive system, the experimental group's mean opinion scores, objective speech quality ratings, short-term objective intelligibility scores, and overall user experience showed a significant advantage over the control groups using other methods, and the application scores were at a high level. The speech processing technology designed in this study is of great significance for improving interaction efficiency and user experience, and provides a reference for research in the fields of human-computer interaction and speech recognition.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_11-Application_of_Speech_Recognition_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Image Retrieval Module for Ethnic Art Design Products Based on DF-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150910</link>
        <id>10.14569/IJACSA.2024.0150910</id>
        <doi>10.14569/IJACSA.2024.0150910</doi>
        <lastModDate>2024-09-30T08:10:13.5470000+00:00</lastModDate>
        
        <creator>Yaru He</creator>
        
        <subject>Image retrieval; main characteristics; local features; deep forest; convolutional neural network; bar attention mechanism; residual module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>With the increasing interest of consumers in ethnic art, more design products with ethnic art characteristics are being displayed. To help users easily retrieve related art products, an image retrieval model that can effectively extract data is proposed. The method deepens data mining through weighted processing of the main characteristics and local features in images based on a multi-window combination, and uses the deep forest algorithm to expand the decision path and select information-gain nodes. By adjusting the weights of the convolutional neural network, the retrieval ability of the model is enhanced. The gradient problem in the propagation process is mitigated using residual modules, and the prominent features are strengthened using a bar attention mechanism to further improve retrieval. The results indicated that the loss function of the model converged within 20 iterations, and the matching degree of the retrieved images in the testing set reached 91.28% after iterative training. The AUC of the model was 0.876, indicating good performance in image retrieval and classification. The retrieval accuracy of the model was higher than that of other methods for image data of different specifications, indicating that the model is universal for multi-scale image retrieval and can provide theoretical support for the development of ethnic art design products.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_10-Construction_of_Image_Retrieval_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNN-Based Salient Target Detection Method of UAV Video Reconnaissance Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150909</link>
        <id>10.14569/IJACSA.2024.0150909</id>
        <doi>10.14569/IJACSA.2024.0150909</doi>
        <lastModDate>2024-09-30T08:10:13.5300000+00:00</lastModDate>
        
        <creator>Li Na</creator>
        
        <subject>Regional convolutional neural network; K-means clustering; UAV reconnaissance image; salient target detection; task loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>In order to address the challenges of image complexity, capturing subtle information, fluctuating lighting, and dynamic background interference in drone video reconnaissance, this paper proposes a salient object detection method based on convolutional neural network (CNN). This method first preprocesses the drone video reconnaissance images to remove haze and improve image quality. Subsequently, the Faster R-CNN framework was utilized for detection, where in the Region Proposal Network (RPN) stage, the K-means clustering algorithm was used to generate optimized preset anchor boxes for specific datasets to enhance the accuracy of target candidate regions. The Fast R-CNN classification loss function is used to distinguish salient target regions in reconnaissance images, while the regression loss function precisely adjusts the target bounding boxes to ensure accurate detection of salient targets. In response to the potential failure of Faster R-CNN in extreme situations, this paper innovatively introduces a saliency screening strategy based on similarity analysis to finely screen superpixels, preliminarily locate target positions, and further optimize saliency object detection results. In addition, the use of saturation component enhancement and brightness component dual frequency coefficient enhancement techniques in the HSI color space significantly improves the visual effect of salient target images, enhancing image clarity while preserving the natural and soft colors, effectively improving the visual quality of detection results. The experimental results show that this method exhibits significant advantages of high accuracy and low false detection rate in salient object detection of unmanned aerial vehicle (UAV) video reconnaissance images. Especially in complex scenes, it can still stably and accurately identify targets, significantly improving detection performance.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_9-CNN_Based_Salient_Target_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Real Time Meteorological Grade Monitoring Stations with AI Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150908</link>
        <id>10.14569/IJACSA.2024.0150908</id>
        <doi>10.14569/IJACSA.2024.0150908</doi>
        <lastModDate>2024-09-30T08:10:13.5170000+00:00</lastModDate>
        
        <creator>Adrian Kok Eng Hock</creator>
        
        <creator>Chan Yee Kit</creator>
        
        <creator>Koo Voon Chet</creator>
        
        <subject>Air pollution; air particles; PM2.5; PM10; real time; light scattering sensor; neural networks; AI; machine learning; R2; IoT; WSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Air pollution comes in many forms, and the basic measure is the concentration of particles in the air. Air quality depends on the quantity of pollutants measured by a particle sensor accurate down to micrometer consistencies. Pollutant particles of this size are ingested by humans, causing respiratory problems and other effects on health. This research studies the measurement of particles using multiple types of light scattering sensors, referencing them against meteorological standards for precision in measurement. The sensors are subjected to extreme conditions to gauge repeatability and behavior, as well as long-term deployment usage. This study is necessary because, when deployed in the field, dust particles degrade the sensors over time; early detection of sensor sensitivity loss and timely maintenance are therefore part of the research. Air particle data is volatile and dynamic over time, so mass deployment of these sensors gives a better measurement of pollution data. However, with ever more data, standard statistics provide only a basic-level indicator, hence the idea of using machine learning algorithms, as part of artificial intelligence (AI) processing, for analyzing and predicting particle data. A foreseeable challenge is that no single machine learning model fits this task alone, so multiple models are considered and gauged for accuracy, with R2 values as low as 0.75 observed during the research. Lastly, with a seamless Internet of Things sensing architecture, the improved spatial data resolution can complement the current pollution measurement data for Malaysia in particular.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_8-Development_of_Real_Time_Meteorological_Grade_Monitoring_Stations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Network and Human-Computer Interaction Technology in the Field of Art Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150907</link>
        <id>10.14569/IJACSA.2024.0150907</id>
        <doi>10.14569/IJACSA.2024.0150907</doi>
        <lastModDate>2024-09-30T08:10:13.5000000+00:00</lastModDate>
        
        <creator>Lan Guo</creator>
        
        <creator>Lisha Luo</creator>
        
        <creator>Weiquan Fan</creator>
        
        <subject>Deep neural network; human-computer interaction; Cycle Generative Adversarial Networks; art design; image generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Traditional art design is usually based on the designer’s intuitive creativity. Limited by individual experience, knowledge, and imagination, designers find it difficult to create richer, higher-quality works, and the huge workload limits the production efficiency of artworks. Through deep neural networks and human-computer interaction technology, the quality of art design can be improved, the workload and cost of designers can be reduced, and more artistic inspiration and tools can be provided to designers. The main contribution of this paper is to propose the use of a Cycle Generative Adversarial Network (Cycle GAN) to realize the automatic conversion of text to image and to provide an immersive art experience through human-computer interaction technologies such as virtual reality. The target audience of this paper is art designers and researchers of human-computer interaction technology, with the aim of helping them break through the traditional creation mode and lead art design toward diversification and the avant-garde. The content loss rate of character image conversion in Cycle GAN was reduced by 74.5% compared with that of human image conversion. The average peak signal-to-noise ratio of figure images generated by Cycle GAN was 57.9% higher than that of figure images generated by the artificial method. The character images generated by Cycle GAN reduce content loss and are more realistic. Deep neural networks and human-computer interaction technology can promote the development and progress of art design, break the traditional creative mode and its constraints, and lead art to be more diversified and avant-garde.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_7-Deep_Neural_Network_and_Human_Computer_Interaction_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Natural Language Processing Model for the Development of an Italian-Language Chatbot for Public Administration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150906</link>
        <id>10.14569/IJACSA.2024.0150906</id>
        <doi>10.14569/IJACSA.2024.0150906</doi>
        <lastModDate>2024-09-30T08:10:13.4830000+00:00</lastModDate>
        
        <creator>Antonio Piizzi</creator>
        
        <creator>Donatello Vavallo</creator>
        
        <creator>Gaetano Lazzo</creator>
        
        <creator>Saverio Dimola</creator>
        
        <creator>Elvira Zazzera</creator>
        
        <subject>Natural Language Processing; chatbot; BERT; transformer; Italian language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Natural Language Processing models (NLP) are used in chatbots to understand user input, interpret its meaning, and generate conversational responses to provide immediate and consistent assistance. This reduces problem-solving time and staff workload and increases user satisfaction. There are both rule-based chatbots, which use decision trees and are programmed to answer specific questions, and self-learning chatbots, which can handle more complex conversations through continuous learning about data and user interactions. However, only a few chatbots have been developed specifically for the Italian language. The development of chatbots for Public Administration (PA) in the Italian language presents unique challenges, particularly in creating models that can accurately understand and respond to user queries based on complex, context-specific documents. This paper proposes a novel natural language processing (NLP) model tailored to the Italian language, designed to support the development of an advanced Question Answering (QA) chatbot for PA. The core of the proposed model is based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, enhanced with an encoder/decoder module and a highway network module to improve the filtering and processing of input text. The principal aim of this research is to address the gap in Italian-language NLP models by providing a robust solution capable of handling the intricacies of the Italian language within the context of PA. The model is trained and evaluated using the Italian version of the Stanford Question Answering Dataset (SQuAD-IT). Experimental results demonstrate that the proposed model outperforms existing models such as BIDAF in terms of F1-score and Exact Match (EM), indicating its superior ability to provide precise and accurate answers. The comparative analysis highlights a significant performance improvement, with the proposed model achieving an F1-score of 59.41% and an EM of 46.24%, compared to 49.35% and 38.43%, respectively, for BIDAF. The findings suggest that the proposed model offers substantial benefits in terms of accuracy and efficiency for PA applications.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_6-A_Natural_Language_Processing_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Assisted Academic Writing: A Comparative Study of Student-Crafted and ChatGPT-Enhanced Critiques in Ubiquitous Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150905</link>
        <id>10.14569/IJACSA.2024.0150905</id>
        <doi>10.14569/IJACSA.2024.0150905</doi>
        <lastModDate>2024-09-30T08:10:13.4700000+00:00</lastModDate>
        
        <creator>Edward R Sykes</creator>
        
        <subject>AI in higher education; AI in academic writing; readability metrics; LLM; ethical considerations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>This study examines the impact of Large Language Models (LLMs), such as ChatGPT, on the development of academic critique skills among fourth-year Computer Science undergraduates enrolled in a Ubiquitous Computing course. The research systematically evaluates the differences between student-authored critiques and those revised with the aid of ChatGPT, utilizing established readability metrics such as the Flesch-Kincaid Grade Level, Flesch Reading Ease, and Gunning Fog Index. The findings highlight the potential of AI to enhance readability and analytical depth, while also revealing challenges related to dependency, academic integrity, and algorithmic bias. These results extend implications across learning sciences, pedagogy, and educational technology, providing actionable insights into leveraging AI to augment traditional learning methods and enhance critical thinking and personalized education.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_5-AI_Assisted_Academic_Writing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control-Driven Media: A Unifying Model for Consistent, Cross-platform Multimedia Experiences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150904</link>
        <id>10.14569/IJACSA.2024.0150904</id>
        <doi>10.14569/IJACSA.2024.0150904</doi>
        <lastModDate>2024-09-30T08:10:13.4530000+00:00</lastModDate>
        
        <creator>Ingar M. Arntzen</creator>
        
        <creator>Njal T. Borch</creator>
        
        <creator>Anders Andersen</creator>
        
        <subject>Multi-platform; media control; continuous media; data-driven media; interactive media; orchestrated media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Many media providers offer complementary products on different platforms to target a diverse consumer base. Online sports coverage, for instance, may include professionally produced audio and video channels, as well as Web pages and native apps offering live statistics, maps, data visualizations, social commentary and more. Many consumers also engage in parallel usage, setting up streaming products and interactive interfaces on available screens, laptops and handheld devices. This ability to combine products holds great promise, yet, with no coordination, cross-platform user experiences often appear inconsistent and disconnected. We present Control-driven Media (CdM), an extension of the current media model that adds support for coordination and consistency across interfaces, devices, products, and platforms while remaining compatible with existing services, technologies, and workflows. CdM promotes online media control as an independent resource type in multimedia systems. With control as a driving force, CdM offers a highly flexible model, opening up for further innovations in automation, personalization, multi-device support, collaboration and time-driven visualization. Furthermore, CdM bridges the gap between continuous media and Web/native apps, allowing the combined powers of these platforms to be seamlessly exploited as parts of a single, consistent user experience. Extensive research in time-dependent, multi-device, data-driven media experiences supports CdM. In particular, CdM requires a generic and flexible concept for online, timeline-consistent media control, for which a candidate solution (State Trajectory) has recently been published. This paper makes the case for CdM, bringing the significant potential of this model to the attention of research and industry.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_4-Control_Driven_Media_A_Unifying_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>STAR, a Universal, Repeatable, Strategic Model of Corporate Innovation for Industry Domination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150903</link>
        <id>10.14569/IJACSA.2024.0150903</id>
        <doi>10.14569/IJACSA.2024.0150903</doi>
        <lastModDate>2024-09-30T08:10:13.4370000+00:00</lastModDate>
        
        <creator>Ronald Berman</creator>
        
        <creator>Nicholas Markette</creator>
        
        <creator>Robert Vera</creator>
        
        <creator>Tim Gehle</creator>
        
        <subject>Corporate entrepreneurship; innovation model; market dominance; competitive advantage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Within an existing organization, internal expertise, staffing, compensation, information systems, and market focus may complicate the introduction of new ideas, while culture and aversion to risk may completely derail the organization’s ability to innovate. The STAR model for corporate innovation provides a theoretical model of how to develop and execute innovative practices to overcome these obstacles and achieve significant market penetration and value. The model is a theoretical framework that empowers organizations of all sizes to construct the structures and advocacy needed to create products, services, and internal processes that enable them to dominate the industry in which they participate. The model also provides a mechanism to support the identification, acceptance, and rapid deployment of relevant new technologies that offer an opportunity to create an unfair advantage, something that is very hard to replicate.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_3-STAR_a_Universal_Repeatable_Strategic_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of IDS/IPS Placement on Big Data Systems in Geo‑Distributed Wide Area Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150902</link>
        <id>10.14569/IJACSA.2024.0150902</id>
        <doi>10.14569/IJACSA.2024.0150902</doi>
        <lastModDate>2024-09-30T08:10:13.4230000+00:00</lastModDate>
        
        <creator>Michael Hart</creator>
        
        <creator>Eric Richardson</creator>
        
        <creator>Rushit Dave</creator>
        
        <subject>Information security; network topology; wide-area big data; wide-area networks; wide-area streaming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Geographically-distributed wide-area networks (WANs) offer expansive distributed and parallel computing capabilities. This includes the ability to advance Wide-Area Big Data (WABD). As data streaming traverses foreign networks, intrusion detection systems (IDSs) and intrusion prevention systems (IPSs) play an important role in securing information. The authors anticipate that securing WAN network topology with IDSs/IPSs can significantly impact wide-area data streaming performance. In this paper, the researchers develop and implement a geographically distributed big data streaming application using the Python programming language to benchmark IDS/IPS placement in hub-and-spoke, custom-mesh, and full-mesh network topologies. The results of the experiments illustrate that custom-mesh WANs allow IDS/IPS placements that maximize data stream packet transfers while reducing overall WAN latency. Hub-and-spoke network topology produces the lowest combined WAN latency over competing network designs but at the cost of single points of failure within the network. IDS/IPS placement in full-mesh designs is less efficient than custom-mesh yet offers the greatest opportunity for highly available data streams. Testing is limited by specific big data systems, WAN topologies, and IDS/IPS technology.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_2-The_Effects_of_IDSIPS_Placement_on_Big_Data_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolving Software Architectures from Monolithic Systems to Resilient Microservices: Best Practices, Challenges and Future Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150901</link>
        <id>10.14569/IJACSA.2024.0150901</id>
        <doi>10.14569/IJACSA.2024.0150901</doi>
        <lastModDate>2024-09-30T08:10:13.4070000+00:00</lastModDate>
        
        <creator>Martin Kaloudis</creator>
        
        <subject>Service-Orientated Architecture; SOA; microservices; monolithic architecture; migration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(9), 2024</description>
        <description>Microservice architecture has emerged as a widely adopted methodology in software development, addressing the inherent limitations of traditional monolithic and Service-Oriented Architectures (SOA). This paper examines the evolution of microservices, emphasising their advantages in enhancing flexibility, scalability, and fault tolerance compared to legacy models. Through detailed case studies, it explores how leading companies, such as Netflix and Amazon, have leveraged microservices to optimise resource utilisation and operational adaptability. The study also addresses significant implementation challenges, including ensuring data consistency and managing APIs. Best practices, such as Domain-Driven Design (DDD) and the Saga Pattern, are evaluated with examples from Uber&#39;s cross-functional teams and Airbnb&#39;s transaction management. This research synthesises these findings into actionable guidelines for organisations transitioning from monolithic architectures, proposing a phased migration approach to mitigate risks and improve operational agility. Furthermore, the paper explores future trends, such as Kubernetes and AIOps, offering insights into the evolving microservices landscape and their potential to improve system scalability and resilience. The scientific contribution of this article lies in the development of practical best practices, providing a structured strategy for organisations seeking to modernise their IT infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume15No9/Paper_1-Evolving_Software_Architectures_from_Monolithic_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of E-Commerce Drivers on the Innovativeness in Organizational Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508131</link>
        <id>10.14569/IJACSA.2024.01508131</id>
        <doi>10.14569/IJACSA.2024.01508131</doi>
        <lastModDate>2024-08-30T08:07:28.0630000+00:00</lastModDate>
        
        <creator>Abdulghader Abu Reemah A Abdullah</creator>
        
        <creator>Ibrahim Mohamed</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <subject>E-commerce innovation; e-commerce drivers; performance management; decision making; management style</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Innovation in e-commerce practices has revolutionized the way goods and services are purchased and sold online. This relatively new tool for online transactions provides access to the wealth of information and knowledge needed to facilitate electronic commerce globally over the internet. The situation differs in developing countries, where e-commerce innovation lacks the key components that drive a developing economy. To clearly understand innovation in e-commerce diffusion, 375 quantitative responses were collected from e-commerce organizations in Libya. Statistical analysis of the key drivers of e-commerce innovation highlighted the need for a shift in organizational attitude and knowledge through decision making committed to meeting customers’ needs. The inter-statistical covariance indicated strong homogeneity among the drivers of e-commerce, with mean values ranging from 4.09 to 4.82 (58.4% to 68.8% of responses), indicating that 219 to 258 of the 375 respondents held the same view. There is a strong positive correlation among the drivers of e-commerce innovation, except for e-commerce management style, which shows a moderate relation; the correlations were statistically significant at the 0.00 level. This study explains the main factors of interest that are versatile in providing timely delivery of goods and efficient services and in keeping pace with e-commerce developmental trends.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_131-The_Impact_of_E_Commerce_Drivers_on_the_Innovativeness_in_Organizational_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Virtual Collaboration Tools on 21st-Century Skills, Scientific Process Skills and Scientific Creativity in STEM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508130</link>
        <id>10.14569/IJACSA.2024.01508130</id>
        <doi>10.14569/IJACSA.2024.01508130</doi>
        <lastModDate>2024-08-30T08:07:28.0630000+00:00</lastModDate>
        
        <creator>Nur Atiqah Jalaludin</creator>
        
        <creator>Mohamad Hidir Mhd Salim</creator>
        
        <creator>Mohamad Sattar Rasul</creator>
        
        <creator>Athirah Farhana Muhammad Amin</creator>
        
        <creator>Mohd Aizuddin Saari</creator>
        
        <subject>Virtual collaboration tools; 21st-century skills; scientific process skills; scientific creativity; STEM education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Virtual collaboration tools have become increasingly important in STEM education, especially after the COVID-19 pandemic. These tools offer many benefits, including developing 21st-century skills and fostering scientific process skills and scientific creativity. However, there are concerns regarding their effectiveness across different genders and regions. This study evaluates the impact of the ExxonMobil Young Engineers (EYE) program, which uses the Zoom application, on enhancing 21st-century skills, scientific process skills, and scientific creativity among secondary school students in Malaysia. The participants primarily consist of 520 secondary school students, with teachers acting as facilitators and professional engineers from ExxonMobil serving as instructors. A pre-test survey was conducted to assess students&#39; initial skill levels. The program consisted of three phases: briefing, breakout room activities, and final reflections. After the program, a post-test survey was conducted to evaluate changes in student skills. Data were analyzed using SPSS software, employing descriptive statistics, MANOVA with Wilks&#39; lambda, one-way ANOVA, and partial eta squared to measure the program&#39;s impact and the influence of gender and regional factors. The results showed significant improvements in all three skill areas post-intervention: 21st-century skills, scientific process skills, and scientific creativity. Gender differences were significant for 21st-century skills, while regional differences significantly affected scientific process skills. The EYE program could enhance students&#39; STEM-related skills using virtual collaboration tools like Zoom. However, regional and gender differences highlight the importance of adapting programs to address specific challenges and ensuring equitable opportunities for all students.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_130-The_Impact_of_Virtual_Collaboration_Tools_on_21st_Century_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware on Windows OS Using AI Classification of Extracted Behavioral Features from Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508129</link>
        <id>10.14569/IJACSA.2024.01508129</id>
        <doi>10.14569/IJACSA.2024.01508129</doi>
        <lastModDate>2024-08-30T08:07:28.0470000+00:00</lastModDate>
        
        <creator>Nooraldeen Alhamedi</creator>
        
        <creator>Kang Dongshik</creator>
        
        <subject>Malware analysis; dynamic analysis; image classification; malware behavior extraction; text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In this research, dynamic analysis was used to extract ten critical features from malware samples operating in isolated virtual machines. These features included process ID, name, user, CPU usage, network connections, memory usage, and other pertinent parameters. The dataset comprised 50 malware samples and 11 benign programs, providing data for training and testing the models. Initially, text-based classification methods were employed, utilizing feedforward neural networks (FNN) and recurrent neural networks (RNN). The FNN model achieved an accuracy rate of 56%, while the RNN model demonstrated better performance with an accuracy rate of 68%. These results highlight the potential of neural networks in analyzing and identifying malware based on behavioral patterns. To further explore AI&#39;s capabilities in malware detection, the extracted features were transformed into grayscale images. This transformation enabled the application of convolutional neural networks (CNN), which excel at capturing spatial patterns. Two CNN models were developed: a simple model and a more complex model. The simple CNN model, applied to the grayscale images, achieved an accuracy rate of 70.1%. The more complex CNN model, with multiple convolutional and fully connected layers, significantly improved performance, achieving an accuracy rate of 88%. The findings from this research underscore the importance of dynamic analysis. By leveraging both text- and image-based classification methods, this study contributes to the development of more robust and accurate malware detection systems. It provides a comprehensive framework for future advancements in cybersecurity, emphasizing the critical role of dynamic analysis in identifying and mitigating threats.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_129-Detecting_Malware_of_Windows_OS_Using_AI_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Configurable Framework for High-Performance Graph Storage and Mutation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508128</link>
        <id>10.14569/IJACSA.2024.01508128</id>
        <doi>10.14569/IJACSA.2024.01508128</doi>
        <lastModDate>2024-08-30T08:07:28.0330000+00:00</lastModDate>
        
        <creator>Soukaina Firmli</creator>
        
        <creator>Dalila Chiadmi</creator>
        
        <creator>Kawtar Younsi Dahbi</creator>
        
        <subject>Data structures; concurrency; graph processing; graph mutations; high-performance computing; traffic management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the realm of graph processing, efficient storage and update mechanisms are crucial due to the large volume of graphs and their dynamic nature. Traditional data structures such as adjacency lists and matrices, while effective in certain scenarios, often suffer from performance trade-offs such as high memory consumption or slow update capabilities. To address these challenges, we introduce CoreGraph, an advanced graph framework designed to optimize both read and update performance. CoreGraph leverages a novel segmentation method and in-place update techniques, along with configurable memory allocators and synchronization mechanisms, to enhance parallel processing and reduce memory consumption. CoreGraph’s update throughput (up to 20x higher) and analytics performance exceed those of several state-of-the-art graph structures such as Teseo, GraphOne and LLAMA, while maintaining low memory consumption when the workload includes updates. This paper details the architecture and benefits of CoreGraph, highlighting its practical application in traffic data management, where it seamlessly integrates with existing systems, providing a scalable and efficient solution for real-world graph data management challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_128-A_Configurable_Framework_for_High_Performance_Graph_Storage_and_Mutation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Emojis Exclusion on the Performance of Arabic Sarcasm Detection Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508127</link>
        <id>10.14569/IJACSA.2024.01508127</id>
        <doi>10.14569/IJACSA.2024.01508127</doi>
        <lastModDate>2024-08-30T08:07:28.0170000+00:00</lastModDate>
        
        <creator>Ghalyah Aleryani</creator>
        
        <creator>Wael Deabes</creator>
        
        <creator>Khaled Albishre</creator>
        
        <creator>Alaa E. Abdel-Hakim</creator>
        
        <subject>Arabic language; AraBERT; sarcasm detecting; data preprocessing; emojis impact; social media content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The complex challenge of detecting sarcasm in Arabic speech on social media is exacerbated by the language’s diversity and the nature of sarcastic expressions. There is a significant gap in the capability of existing models to effectively interpret sarcasm in Arabic, necessitating more sophisticated and precise detection methods. In this paper, we investigate the impact of a fundamental preprocessing component on sarcasm detection. While emojis play a crucial role in mitigating the absence of body language and facial expressions in modern communication, their impact on automated text analysis, particularly in sarcasm detection, remains underexplored. We examine the effect of excluding emojis from datasets on the performance of sarcasm detection models in social media content for Arabic, a language with a super-rich vocabulary. This investigation includes the adaptation and enhancement of AraBERT pre-training models by specifically excluding emojis to improve sarcasm detection capabilities. We use AraBERT pre-training to refine the specified models, demonstrating that the removal of emojis can significantly boost the accuracy of sarcasm detection. This approach facilitates a more refined interpretation of language, eliminating the potential confusion introduced by non-textual elements. The evaluated AraBERT models, through the focused strategy of emojis removal, adeptly navigate the complexities of Arabic sarcasm. This study establishes new benchmarks in Arabic natural language processing and offers valuable insights for social media platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_127-Impact_of_Emojis_Exclusion_on_the_Performance_of_Arabic_Sarcasm_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preprocessing Techniques for Clustering Arabic Text: Challenges and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508126</link>
        <id>10.14569/IJACSA.2024.01508126</id>
        <doi>10.14569/IJACSA.2024.01508126</doi>
        <lastModDate>2024-08-30T08:07:28.0000000+00:00</lastModDate>
        
        <creator>Tahani Almutairi</creator>
        
        <creator>Shireen Saifuddin</creator>
        
        <creator>Reem Alotaibi</creator>
        
        <creator>Shahendah Sarhan</creator>
        
        <creator>Sarah Nassif</creator>
        
        <subject>Arabic preprocessing; Arabic language; survey; clustering; Arabic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Arabic is a complex language for text analysis because of its orthographic features, rich synonyms, and semantic style. Thus, Arabic text must be prepared more carefully in the preprocessing stage for the analyzer to improve the quality of the results. Moreover, many preprocessing steps have been proposed to improve the text analyzer quality by reducing high dimensionality, selecting the proper features to describe the text, and enhancing the process speed. This paper deeply investigates and summarizes the use of Arabic preprocessing techniques in Arabic text in general and focuses in depth on clustering. Moreover, it focuses on seven preprocessing steps now used to prepare Arabic text and provides the available tools for each of them; the seven steps are tokenization, normalization, stopword removal, stemming, vectorization, lemmatization, and feature selection. In addition, this paper investigates work that uses synonym and semantic techniques for preprocessing to prepare the text or reduce the dimensionality of the clustering algorithm. In all, this survey investigated nine techniques for Arabic text preprocessing to identify the challenges in this area. Finally, this study aims to serve as a reference for researchers interested in this area, and ends with potential future research directions.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_126-Preprocessing_Techniques_for_Clustering_Arabic_Text_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Under Sampling Techniques for Handling Unbalanced Data with Various Imbalance Rates: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508124</link>
        <id>10.14569/IJACSA.2024.01508124</id>
        <doi>10.14569/IJACSA.2024.01508124</doi>
        <lastModDate>2024-08-30T08:07:27.9870000+00:00</lastModDate>
        
        <creator>Esraa Abu Elsoud</creator>
        
        <creator>Mohamad Hassan</creator>
        
        <creator>Omar Alidmat</creator>
        
        <creator>Esraa Al Henawi</creator>
        
        <creator>Nawaf Alshdaifat</creator>
        
        <creator>Mosab Igtait</creator>
        
        <creator>Ayman Ghaben</creator>
        
        <creator>Anwar Katrawi</creator>
        
        <creator>Mohmmad Dmour</creator>
        
        <subject>Clusters centroid; decision tree; neighborhood cleaning rule; random under sampling; Tomek Link under sampling; unbalanced datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Unbalanced data sets are data sets that contain an unequal number of examples for different classes. Such datasets pose a problem for machine learning tools: in datasets with high imbalance ratios, false-negative-rate percentages increase because most classifiers are biased toward the majority class. Choosing the most informative evaluation metrics and applying sampling techniques are common ways to handle this problem. In this paper, a comparative analysis of four of the most common under-sampling techniques is conducted over datasets with various imbalance rates (IR), ranging from low to medium to high. A Decision Tree classifier and twelve imbalanced datasets with various IR are used to evaluate the effects of each technique based on recall, F1-measure, g-mean, recall for the minor class, and F1-measure for the minor class. Results demonstrate that Clusters Centroid outperformed the Neighborhood Cleaning Rule (NCL) based on recall for all low-IR datasets. For both medium- and high-IR datasets, NCL and Random Under Sampling (RUS) outperformed the other techniques, while Tomek Link had the worst effect.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_124-Under_Sampling_Techniques_for_Handling_Unbalanced_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiclass Chest Disease Classification Using Deep CNNs with Bayesian Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508125</link>
        <id>10.14569/IJACSA.2024.01508125</id>
        <doi>10.14569/IJACSA.2024.01508125</doi>
        <lastModDate>2024-08-30T08:07:27.9870000+00:00</lastModDate>
        
        <creator>Maneet Kaur Bohmrah</creator>
        
        <creator>Harjot Kaur</creator>
        
        <subject>Deep neural networks; machine learning; Bayesian optimization; image preprocessing; COVID-19; pneumonia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Ever since its outbreak, numerous research studies have been initiated worldwide in an attempt at accurate and efficient diagnosis of COVID-19. In the recent past, patients suffering from various chronic lung diseases passed away due to either COVID-19 or Pneumonia. Both of these pulmonary diseases are strongly correlated, as they share a common set of symptoms, and even for medical professionals it has been difficult to make a discerning diagnosis between them. The dire need of the current scenario is a chest-disease diagnosis framework for accurate, precise, real-time and automatic detection of COVID-19 because of its mass fatality rate. The review of various contemporary and previous research works shows that the currently available computer-aided diagnosis systems are insufficient for real-time implementation of COVID-19 prediction due to their long training time, substantial memory requirements and excessive computations. This work proposes an optimized hybrid DNN-ML framework by combining Deep Neural Network (DNN) models and optimized Machine Learning (ML) classifiers along with an efficacious image preprocessing approach. For feature extraction, Deep Learning (DL) models, namely GoogleNet, EfficientNetB0, and ResNet50, have been deployed, and the extracted features have been further fed to Bayesian-optimized ML classifiers. The two major contributions of this study are edge-based Region of Interest (ROI) extraction and the use of a Bayesian optimization approach for configuring optimal architectures of ML classifiers. With extensive experimentation, it has been observed that the proposed optimized hybrid DNN-ML model with encapsulated image preprocessing techniques performed much better compared to various previously existing ML-DNN models. Based on the promising results obtained from this proposed lightweight hybrid framework, it is concluded that this model can facilitate radiologists, functioning as an accurate disease-diagnosis and support system for early detection of COVID-19 and Pneumonia.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_125-Multiclass_Chest_Disease_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Style-Transfer Operations in a Game Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508123</link>
        <id>10.14569/IJACSA.2024.01508123</id>
        <doi>10.14569/IJACSA.2024.01508123</doi>
        <lastModDate>2024-08-30T08:07:27.9700000+00:00</lastModDate>
        
        <creator>Haechan Park</creator>
        
        <creator>Nakhoon Baek</creator>
        
        <subject>Style transfer; neural network models; game engine; rendering textures; real-time operations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Image style-transfer operations are a kind of high-level image-processing technique in which a target image is transformed to show a given style. These kinds of operations are typically achieved with modern neural network models. In this paper, we aim to achieve image style-transfer operations in real time within running computer games. We can apply style-transfer operations to all or part of the rendering textures in existing games to change the overall feeling and appearance of those games. For a computer game or its underlying game engine, the style-transfer neural network models should execute fast enough to maintain the real-time execution of the original game. Efficient data management is also required to achieve deep-learning operations while maintaining the overall performance of the game as much as possible. This paper compares several aspects of style-transfer neural network models and their execution in game engines. We propose a design and implementation approach for real-time style-transfer operations. The experimental results show a set of technical points to be considered when applying neural network models to a game engine. We finally show that we achieved real-time style-transfer operations with the Barracuda module in the Unity game engine.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_123-Design_and_Implementation_of_Style_Transfer_Operations_in_a_Game_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data Augmentation Approach to Sentiment Analysis of MOOC Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508122</link>
        <id>10.14569/IJACSA.2024.01508122</id>
        <doi>10.14569/IJACSA.2024.01508122</doi>
        <lastModDate>2024-08-30T08:07:27.9530000+00:00</lastModDate>
        
        <creator>Guangmin Li</creator>
        
        <creator>Long Zhou</creator>
        
        <creator>Qiang Tong</creator>
        
        <creator>Yi Ding</creator>
        
        <creator>Xiaolin Qi</creator>
        
        <creator>Hang Liu</creator>
        
        <subject>Data augmentation; sentiment analysis; MOOC; natural language processing; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>To address the lack of Chinese online course review corpora for aspect-based sentiment analysis, we propose Semantic Token Augmentation and Replacement (STAR), a semantic-relative distance-based data augmentation method. STAR leverages natural language processing techniques such as word embedding and semantic similarity to extract high-frequency words near aspect terms, learns their word vectors to obtain synonyms, and replaces these words to enhance sentence diversity while maintaining semantic consistency. Experiments on a Chinese MOOC dataset show STAR improves Macro-F1 scores by 3.39%-8.18% for LCFS-BERT and 1.66%-8.37% for LCF-BERT compared to baselines. These results demonstrate STAR’s effectiveness in improving the generalization ability of deep learning models for Chinese MOOC sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_122-A_Data_Augmentation_Approach_to_Sentiment_Analysis_of_MOOC_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority-Based Service Provision Using Blockchain, Caching, Reputation and Duplication in Edge-Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508121</link>
        <id>10.14569/IJACSA.2024.01508121</id>
        <doi>10.14569/IJACSA.2024.01508121</doi>
        <lastModDate>2024-08-30T08:07:27.9370000+00:00</lastModDate>
        
        <creator>Tarik CHANYOUR</creator>
        
        <creator>Seddiq EL KASMI ALAOUI</creator>
        
        <creator>Mohamed EL GHMARY</creator>
        
        <subject>Multi-access Edge-cloud Computing; container base image chunks; replication; fragmentation; service provision; blockchain; Markov approximation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The integration of Multi-access Edge Computing (MEC) and Dense Small Cell (DSC) infrastructures within 5G and beyond networks marks a substantial leap forward in communication technologies. This convergence is critical for meeting the stringent low-latency demands of services delivered to Smart Devices (SDs) through lightweight containers. This paper introduces a novel split-duplicate-cache technique seamlessly embedded within a secure blockchain-based edge-cloud architecture. Our primary objective is to significantly shorten service initiation durations in high-density conditions of SDs and ENs. This is executed by meticulously gathering, verifying, and combining the most optimal chunk candidates. Concurrently, we ensure that resource allocation for services within targeted ENs is meticulously evaluated for every service request. The system challenges and decisions are modeled and then represented as a mixed-integer nonlinear optimization problem. To tackle this intricate problem, three solutions are developed and evaluated: the Brute-Force Search Algorithm (BFS-CDCA) for small-scale environments, and the Simulated Annealing-Based Heuristic (SA-CDCA) and the Markov Approximation-Based Solution (MA-CDCA) for complex, high-dimensional environments. A comparative analysis of these methods is conducted in terms of solution quality, computational efficiency, and scalability to assess their performance and identify the most suitable approach for different problem instances.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_121-Priority_Based_Service_Provision_Using_Blockchain_Caching_Reputation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simple and Efficient Approach for Extracting Object Hierarchy in Image Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508120</link>
        <id>10.14569/IJACSA.2024.01508120</id>
        <doi>10.14569/IJACSA.2024.01508120</doi>
        <lastModDate>2024-08-30T08:07:27.9230000+00:00</lastModDate>
        
        <creator>Saravit Soeng</creator>
        
        <creator>Vungsovanreach Kong</creator>
        
        <creator>Munirot Thon</creator>
        
        <creator>Wan-Sup Cho</creator>
        
        <creator>Tae-Kyung Kim</creator>
        
        <subject>Object hierarchy; object relationship; object detection; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>An object hierarchy in images refers to the structured relationship between objects, where parent objects have one or more child objects. This hierarchical structure is useful in various computer vision applications, such as detecting motorcycle riders without helmets or identifying individuals carrying illegal items in restricted areas. However, extracting object hierarchies from images is challenging without advanced techniques like machine learning or deep learning. In this paper, a simple and efficient method is proposed for extracting object hierarchies in images based on object detection results. This method is implemented in a standalone package compatible with both Python and C++ programming languages. The package generates object hierarchies from detection results by using bounding box overlap to identify parent-child relationships. Experimental results show that the proposed method accurately extracts object hierarchies from images, providing a practical tool to enhance object detection capabilities. The source code for this approach is available at https://github.com/saravit-soeng/HiExtract.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_120-A_Simple_and_Efficient_Approach_for_Extracting_Object_Hierarchy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TGMoE: A Text Guided Mixture-of-Experts Model for Multimodal Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508119</link>
        <id>10.14569/IJACSA.2024.01508119</id>
        <doi>10.14569/IJACSA.2024.01508119</doi>
        <lastModDate>2024-08-30T08:07:27.9230000+00:00</lastModDate>
        
        <creator>Xueliang Zhao</creator>
        
        <creator>Mingyang Wang</creator>
        
        <creator>Yingchun Tan</creator>
        
        <creator>Xianjie Wang</creator>
        
        <subject>Multimodal fusion; sentiment analysis; cross modal; mixture of experts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Multimodal sentiment analysis seeks to determine the sentiment polarity of targets by integrating diverse data types, including text, visual, and audio modalities. However, during multimodal data fusion, existing methods often fail to adequately analyze the sentiment relationships between modalities and overlook the varying contributions of different modalities to the sentiment analysis result. To address this issue, we propose a Text Guided Mixture-of-Experts (TGMoE) Model for Multimodal Sentiment Analysis. Based on the varying contributions of different modalities, this model introduces a text guided cross-modal attention mechanism that fuses text separately with the visual and audio modalities, leveraging attention to capture interactions between these modalities and effectively enrich the text modality with supplementary information from the visual and audio data. Additionally, by employing sparsely gated mixture-of-experts layers, the TGMoE model constructs multiple expert networks that learn sentiment information simultaneously, enhancing the nonlinear representation capability of the multimodal features. This makes the multimodal features more distinguishable with respect to sentiment, thereby improving the accuracy of sentiment polarity judgments. Experimental results on the publicly available multimodal sentiment analysis datasets CMU-MOSI and CMU-MOSEI show that the TGMoE model outperforms most existing multimodal sentiment analysis models and can effectively improve sentiment analysis performance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_119-TGMoE_A_Text_Guided_Mixture_of_Experts_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated IoT-Driven System with Fuzzy Logic and V2X Communication for Real-Time Speed Monitoring and Accident Prevention in Urban Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508118</link>
        <id>10.14569/IJACSA.2024.01508118</id>
        <doi>10.14569/IJACSA.2024.01508118</doi>
        <lastModDate>2024-08-30T08:07:27.9070000+00:00</lastModDate>
        
        <creator>Khadiza Tul Kubra</creator>
        
        <creator>Tajim Md. Niamat Ullah Akhund</creator>
        
        <creator>Waleed M. Al-Nuwaiser</creator>
        
        <creator>Md Assaduzzaman</creator>
        
        <creator>Md. Suhag Ali</creator>
        
        <creator>M. Mesbahuddin Sarker</creator>
        
        <subject>Internet of Things; high-speed monitoring; alcohol detection; MATLAB simulation; write FIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Road safety is a critical concern globally, with speeding being a leading cause of traffic accidents. Leveraging advanced technologies can significantly enhance the ability to monitor and control vehicle speeds in real time. Traditional methods of speed monitoring are often limited in their ability to provide real-time, adaptive interventions. Existing systems do not adequately integrate sensor data and decision-making processes to prevent speeding-related accidents effectively. This paper aims to address these limitations by proposing a novel system that utilizes Internet of Things (IoT) technology combined with fuzzy logic to monitor vehicle speeds and prevent accidents in real time. The proposed system integrates IoT sensors for continuous vehicle speed monitoring and employs a Fuzzy Inference System (FIS) to make decisions based on variables such as speed, alcohol presence, and driver fitness. The system also facilitates interaction between drivers and law enforcement through Vehicle-to-Everything (V2X) communication. The FIS implementation demonstrated effective speed control capabilities, accurately assessing and responding to various risk levels, thereby reducing the likelihood of speeding-related accidents. This research contributes to the advancement of road safety systems by integrating IoT and fuzzy logic technologies, offering a more adaptive and responsive approach to traffic management and accident prevention. Future enhancements will focus on incorporating machine learning techniques to dynamically adjust FIS rules based on real-time data and improve sensor network reliability to ensure more accurate and comprehensive monitoring.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_118-Integrated_IoT_Driven_System_with_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EiAiMSPS: Edge Inspired Artificial Intelligence-based Multi Stakeholders Personalized Security Mechanism in iCPS for PCS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508117</link>
        <id>10.14569/IJACSA.2024.01508117</id>
        <doi>10.14569/IJACSA.2024.01508117</doi>
        <lastModDate>2024-08-30T08:07:27.8900000+00:00</lastModDate>
        
        <creator>Swati Devliyal</creator>
        
        <creator>Sachin Sharma</creator>
        
        <creator>Himanshu Rai Goyal</creator>
        
        <subject>Internet of things; pharmaceutical care; machine learning; authentication; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Artificial Intelligence (AI) is becoming increasingly prevalent in the healthcare sector, including pharmaceutical care, where it delivers rapid and precise outcomes. Machine learning techniques are critical here because they help ensure both the confidentiality and authenticity of healthcare data. Early disease predictions assist clinicians in making timely decisions that affect their patients’ lives. The Internet of Things (IoT) acts as an accelerator that boosts the efficacy of AI applications in healthcare, and pharmaceutical care in particular can benefit from AI-supported patient care. Sensors gather data from individuals, and the data is then analyzed using machine learning algorithms. The main objective of this work is to develop an automated learning-based user authentication algorithm for secure communication. A further goal is to ensure data privacy for sensitive information that currently lacks protection. Federated Learning (FL), which trains models in a decentralized environment, is employed for this purpose, as it enhances data privacy. In addition to security, this work proposes a differential privacy preservation strategy that introduces random noise into data samples to provide anonymity. The model’s performance and the resulting data quality are assessed, since privacy preservation approaches frequently reduce data quality.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_117-EiAiMSPS_Edge_Inspired_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Retrieval and Secured Cloud Storage for Medical Surgery Videos Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508115</link>
        <id>10.14569/IJACSA.2024.01508115</id>
        <doi>10.14569/IJACSA.2024.01508115</doi>
        <lastModDate>2024-08-30T08:07:27.8770000+00:00</lastModDate>
        
        <creator>Megala G</creator>
        
        <creator>Swarnalatha P</creator>
        
        <subject>Medical video storage; feature selection; Variational Auto Encoder (VAE); weighted-graph-based prefetching algorithm; group lasso</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Efficient secured storage and retrieval of medical surgical videos are essential for modern healthcare systems. Traditional methods often struggle with scalability, accessibility, and data security, necessitating innovative solutions. This study introduces a novel deep learning-based framework that leverages a hybrid algorithm combining a Variational Autoencoder (VAE) and Group Lasso for optimized video feature selection. This approach reduces dimensionality and enhances the retrieval accuracy of video frames. For storage and retrieval, the system employs a weighted graph-based prefetching algorithm to manage encrypted video data on the cloud, ensuring both speed and security. To ensure data security, video frames are encrypted before cloud storage. Experimental results show that this system outperforms current methods in retrieval speed and achieves a retrieval accuracy of 99% while maintaining data security. This framework is a significant advancement in medical data management, offering potential applications across other fields that require secure handling of large data volumes.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_115-Optimized_Retrieval_and_Secured_Cloud_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rolling Bearing Reliability Prediction Based on Signal Noise Reduction and RHA-MKRVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508116</link>
        <id>10.14569/IJACSA.2024.01508116</id>
        <doi>10.14569/IJACSA.2024.01508116</doi>
        <lastModDate>2024-08-30T08:07:27.8770000+00:00</lastModDate>
        
        <creator>Yifan Yu</creator>
        
        <subject>Rolling bearing; reliability evaluation and prediction; complete ensemble empirical mode decomposition with adaptive noise; generalized refined composite multi-scale sample entropy; uniform manifold approximation and projection; red-tailed hawk algorithm; mixed kernel relevance vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In order to solve the problem of reliability assessment and prediction for rolling bearings, a noise reduction method (CEEMDAN-GRCMSE) is proposed that combines complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and generalized refined composite multi-scale sample entropy (GRCMSE) to remove noise from the bearing vibration signals. The feature set of the denoised signals is then reduced in dimensionality using the uniform manifold approximation and projection (UMAP) algorithm, and a reliability assessment model is established with a logistic regression algorithm. The red-tailed hawk algorithm is used to optimize the parameters of the mixed kernel relevance vector machine, which predicts the bearing state, and the predicted state information is finally fed into the assessment model to obtain the final results. The whole life cycle data of rolling bearings from the Xi’an Jiaotong University-Sun Science and Technology Joint Laboratory (XJTU-SY) are used to verify the effectiveness of the proposed method, and its superiority is highlighted by comparing the analysis results with those of other AI methods.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_116-Rolling_Bearing_Reliability_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancements in Deep Learning Architectures for Image Recognition and Semantic Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508114</link>
        <id>10.14569/IJACSA.2024.01508114</id>
        <doi>10.14569/IJACSA.2024.01508114</doi>
        <lastModDate>2024-08-30T08:07:27.8430000+00:00</lastModDate>
        
        <creator>Divya Nimma</creator>
        
        <creator>Arjun Uddagiri</creator>
        
        <subject>Convolutional Neural Networks (CNNs); AlexNet; image classification; transfer learning; MNIST Dataset; Custom CNN Architecture; deep learning; model training and evaluation; neural network optimization; activation functions; feature extraction; machine learning; pattern recognition; data preprocessing; loss functions; model accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper focuses on using Convolutional Neural Networks (CNNs) for tasks such as image classification, covering both pre-trained models and models built from scratch. The paper begins by demonstrating how to utilize the well-known AlexNet model, which is highly effective for image recognition when applied through transfer learning. It then explains how to load and prepare the MNIST dataset, a common choice for testing image classification methods. Additionally, it introduces a custom CNN designed specifically for recognizing MNIST digits, outlining an architecture that includes convolutional layers, activation functions, and fully connected layers for capturing the details of handwritten digits. The paper also provides guidance on instantiating the model, running it on sample data, reviewing outputs, and assessing prediction accuracy. Furthermore, it delves into training the custom CNN and evaluating its performance against established benchmarks, using loss functions and optimization techniques to fine-tune the model and assess its classification accuracy. This work integrates theory with practical application, serving as a comprehensive guide for creating and evaluating CNNs for image classification, with implications for both research and real-world applications in computer vision.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_114-Advancements_in_Deep_Learning_Architectures_for_Image_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synchronous Update and Optimization Method for Large-Scale Image 3D Reconstruction Technology Under Cloud-Edge Fusion Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508112</link>
        <id>10.14569/IJACSA.2024.01508112</id>
        <doi>10.14569/IJACSA.2024.01508112</doi>
        <lastModDate>2024-08-30T08:07:27.8300000+00:00</lastModDate>
        
        <creator>Jian Zhang</creator>
        
        <creator>Jingbin Luo</creator>
        
        <creator>Yilong Chen</creator>
        
        <subject>Cloud-edge fusion; cloud-edge collaboration; 3D reconstruction; synchronized update</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Aiming at the problems of limited bandwidth and network delay that the traditional centralized cloud computing model faces during large-scale image processing of transmission and distribution digital corridors, a synchronous updating and optimization method for large-scale image 3D reconstruction under a cloud-edge fusion architecture is proposed. Based on this architecture, the image data of the transmission and distribution corridor is preprocessed, features are extracted with deep learning, synchronous updating is performed through the cloud-edge collaborative network, and matching and 3D reconstruction are carried out according to the order of the point cloud data. Given the dynamically changing characteristics of image data in a cloud-edge fusion environment, incremental learning is combined with continuous learning and synchronous updating of the model parameters to realize an adaptive updating mechanism. The method exploits the cloud-edge fusion architecture to distribute computational tasks between the cloud and the edge, achieving parallel processing and load balancing and improving the accuracy and efficiency of 3D reconstruction. Experimental results show that the proposed method achieves an image feature point matching rate of 96.72%, lower network latency, and higher real-time performance, providing strong technical support for optimizing 3D reconstruction of transmission and distribution digital corridors.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_112-Synchronous_Update_and_Optimization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Anti-Collision Algorithms in University Records Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508113</link>
        <id>10.14569/IJACSA.2024.01508113</id>
        <doi>10.14569/IJACSA.2024.01508113</doi>
        <lastModDate>2024-08-30T08:07:27.8300000+00:00</lastModDate>
        
        <creator>Ying Wang</creator>
        
        <creator>Ying Mi</creator>
        
        <subject>Anti-collision algorithms; archive management systems; information networks; RFID technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>University records management has grown in importance as a result of the rapid growth of big data, artificial intelligence, and other technologies. However, university archives management is prone to data loss, redundancy, and errors, and the use of scientific management systems and algorithms can effectively mitigate these problems. To create an effective and secure archive management system and run simulation tests, the study proposes an RFID-based archive management system and uses nested random time slot ALOHA (RS0) and binary tree (BT) anti-collision algorithms to solve the collision problem between tags in the created system. In the continuous and uniform scenarios, respectively, the proposed algorithm achieved average query coefficients of 1 and 1.2, recognition efficiencies of 95% and 90%, and communication volumes of 50 bit and 180 bit. System CPU and memory occupation were 0.91% and 24.21% with 10 clients, and 3.92% and 31.14% with 100 clients. The average response time of the system was 0.112 s with 100 users and 1.244 s with 1000 users, and the information extraction accuracy of the system was 94% at 1000 users. This suggests that the approach can significantly improve the operational effectiveness of the records management system and the accuracy of information extraction, as well as provide technical support for improving university records management systems.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_113-The_Application_of_Anti_Collision_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of Intelligent Visual Communication System for User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508111</link>
        <id>10.14569/IJACSA.2024.01508111</id>
        <doi>10.14569/IJACSA.2024.01508111</doi>
        <lastModDate>2024-08-30T08:07:27.8130000+00:00</lastModDate>
        
        <creator>Chao Peng</creator>
        
        <subject>Visual communication system; user experience-oriented; multimodal features; recommendation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The design and application of visual communication systems should be human-oriented, but designers currently often ignore this, resulting in a poor user experience. To improve the experience of visual communication systems, this paper combines existing computer technology and proposes an intelligent visual communication system oriented toward user experience. First, for the problem of extracting users’ multimodal features, and considering the characteristics of data in different modalities, long short-term memory networks are used to extract features with contextual information, and multi-scale convolutional neural networks extract low-level features from video frames for the visual modality. In the cross-modal stage, the low-level features of the source modality are used to enhance the target modality features. Then, for personalized recommendation, a graph information extractor is constructed based on a graph convolutional neural network to fuse the neighborhood information of the user-item bipartite graph nodes and generate dense vector representations of the nodes; incorporating this graph information representation into a deep recommendation model with a Transformer as the sequence feature extractor enhances the recommendation effect. Experimental validation shows that the proposed method shortens response time and improves system performance, thereby increasing the user experience of the visual communication system. The system is user experience oriented and, combined with multimodal features and intelligent recommendation algorithms, effectively meets users’ personalized needs, which has practical significance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_111-Design_and_Application_of_Intelligent_Visual_Communication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperparameter Optimization in Transfer Learning for Improved Pathogen and Abiotic Plant Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508110</link>
        <id>10.14569/IJACSA.2024.01508110</id>
        <doi>10.14569/IJACSA.2024.01508110</doi>
        <lastModDate>2024-08-30T08:07:27.7970000+00:00</lastModDate>
        
        <creator>Asha Rani K P</creator>
        
        <creator>Gowrishankar S</creator>
        
        <subject>Spreadable diseases; non-spreadable diseases; transfer learning; Keras; optimizers; CNN; underfitting and overfitting; retraining the models; base models; finetuning; abiotic; biotic; infectious and non-infectious diseases; custom optimization techniques; hyperparameter tuning in neural networks; hybrid activation functions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The application of machine learning, particularly through image-based analysis using computer vision techniques, has greatly improved the management of crop diseases in agriculture. This study explores the use of transfer learning to classify both spreadable and non-spreadable diseases affecting soybean, lettuce, and banana plants, with a special focus on various parts of the banana plant. In this research, 11 different transfer learning models were evaluated in Keras, with hyperparameters such as optimizers fine-tuned and models retrained to boost disease classification accuracy. Results showed enhanced detection capabilities, especially in models like VGG_19 and Xception, when optimized. The study also proposes a new approach by integrating an EfficientNetV2-style architecture with a custom-designed activation function and optimizer to improve model efficiency and accuracy. The custom activation function combines the advantages of ReLU and Tanh to optimize learning, while the hybrid optimizer merges features of Adam and Stochastic Gradient Descent (SGD) to balance adaptive learning rates and generalization. This innovative approach achieved outstanding results, with an accuracy of 99.96% and an F1 score of 0.99 in distinguishing spreadable and non-spreadable plant diseases. The combination of these advanced methods marks a significant step forward in the use of machine learning for agricultural challenges, demonstrating the potential of customized neural network architectures and optimization strategies for accurate plant disease classification.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_110-Hyperparameter_Optimization_in_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Sanda-Assisted Teaching System Integrating VR Technology from a 5G Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508108</link>
        <id>10.14569/IJACSA.2024.01508108</id>
        <doi>10.14569/IJACSA.2024.01508108</doi>
        <lastModDate>2024-08-30T08:07:27.7830000+00:00</lastModDate>
        
        <creator>Zhaoquan Zhang</creator>
        
        <creator>Yong Ding</creator>
        
        <subject>VR technology; Sanda; teaching system; motion recognition; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Since the focus of Sanda teaching is to allow students to master martial arts techniques through confrontational exercises, Sanda teaching in colleges and universities commonly adopts contextualized teaching methods. However, this approach suffers from deficiencies such as poor teaching effectiveness and difficulty in reflecting the effectiveness of Sanda martial arts. To solve these problems and make up for the shortcomings of offline Sanda teaching, the study adopts virtual reality technology and fifth-generation mobile communication technology to construct a Sanda-assisted teaching system for college students. To ascertain whether students complete the Sanda movement practice, the study designed two models: a self-supervised model based on acceleration and angular velocity contrastive learning and a multi-task semi-supervised model based on time-frequency contrastive learning. These models aim to improve the analytical function of the Sanda-assisted teaching system and address the analytical deficiencies of existing human movement identification algorithms. The results indicated that the maximum accuracy of the self-supervised model was 95.76% and 95.89% on the training and test sets, respectively, and that the multi-task semi-supervised model plateaued after roughly 22 and 24 iterations on the training and test sets, respectively. The average response time of the system was 59 ms, and throughput reached a maximum of 77,651 bit/s. Both the models and the system worked well; they can lower the risk of student injuries while offering technological support for Sanda-assisted teaching and learning in higher education institutions.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_108-Application_of_Sanda_Assisted_Teaching_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Collection Method Based on Data Perception and Positioning Technology in the Context of Artificial Intelligence and the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508109</link>
        <id>10.14569/IJACSA.2024.01508109</id>
        <doi>10.14569/IJACSA.2024.01508109</doi>
        <lastModDate>2024-08-30T08:07:27.7830000+00:00</lastModDate>
        
        <creator>Xinbo Zhao</creator>
        
        <creator>Fei Fei</creator>
        
        <subject>Wireless sensor network; data collection; compression perception technology; Sparse Bayesian Learning; signal reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Wireless sensor networks are an important technical form of the underlying network of the Internet of Things. The energy of each node in the network is finite; when a node runs out of energy, it can cause network interruptions, which affect the reliability of data collection. To reduce the consumption of communication resources and ensure reliable data collection, the study proposes a data collection method based on compression perception and positioning technology. This method first uses a Bayesian compression perception method to select nodes and then adopts an adaptive sparse strategy to collect data. When nodes were selected with the proposed method, the wireless sensor network had the longest lifespan. Under different degrees of redundancy and sparsity, the method had the lowest reconstruction errors, 0.31 and 0.40, respectively. When the balance factor was set to 0.6, the reconstruction error of the method was at its lowest, with a minimum of 0.05. The proposed method has better reconstruction performance, effectively prolongs the lifespan of wireless sensor networks, and reduces the consumption of communication resources.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_109-Data_Collection_Method_Based_on_Data_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Improved CSA Algorithm-Based Fuzzy Logic in Computer Network Control Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508107</link>
        <id>10.14569/IJACSA.2024.01508107</id>
        <doi>10.14569/IJACSA.2024.01508107</doi>
        <lastModDate>2024-08-30T08:07:27.7670000+00:00</lastModDate>
        
        <creator>Jianxi Yu</creator>
        
        <subject>CSA; computer network; fuzzy logic; principal component analysis method; network operation; situation awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the past few years, with the rapid popularization of computers and the widespread use of smartphones and mobile devices, the Internet has gradually become an indispensable part of people&#39;s daily lives. The Internet is constantly driving the digitalization of society and providing people with more convenient and innovative applications. However, the internet industry also faces challenges such as runtime ambiguity, instability, large data volumes, and difficulties in network situational awareness. In response to these issues, this study combines the standard cuckoo search algorithm with a fuzzy neural network to design a computer network situational awareness system. It uses principal component analysis to reduce the dimensionality of the original data and then adds Gaussian noise to introduce appropriate randomness. Testing showed that the improved model had a significant optimization effect on real network data, with an improvement of about 81.2% over the standard cuckoo algorithm. At the 220th iteration on the test set, the loss function value reached 0, and the system could accurately predict the network situation with an accuracy rate of 98%. The designed system achieves higher recognition accuracy with less time consumption and has application potential in computer networks.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_107-Application_of_Improved_CSA_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BlockChain and Deep Learning with Dynamic Pattern Features for Lung Cancer Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508106</link>
        <id>10.14569/IJACSA.2024.01508106</id>
        <doi>10.14569/IJACSA.2024.01508106</doi>
        <lastModDate>2024-08-30T08:07:27.7500000+00:00</lastModDate>
        
        <creator>A. Angel Mary</creator>
        
        <creator>K. K. Thanammal</creator>
        
        <subject>Lung cancer; spiking convolutional neural network; LIDC-IDRI; CLAHE; Honey Badger optimization Algorithm; segmentation; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In lung carcinoma, a deadly disease, cancers in the respiratory tract grow out of control. Because cancers have irregular shapes, it can be challenging to diagnose them and determine their sizes and forms from imaging studies. Furthermore, large data disparity is a serious issue in medical image analysis. Artificial intelligence and blockchain are two cutting-edge advances in the healthcare industry. This paper introduces a blockchain with a deep learning network for the early diagnosis of lung cancer in an effort to address these problems. Images from CT scans and CXRs were included from the LIDC-IDRI and NIH Chest X-ray collections. Initially, these images are pre-processed with Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance image clarity and reduce noise. Then the Honey Badger optimization Algorithm (HBA) is used to segment the lung region from the pre-processed image. Morphological segments of the lung region are used to generate dynamic patterns. Finally, these patterns are aggregated into the Spiking Convolutional Neural Network (SCNN), which is the global model for classifying the images into normal and abnormal cases. The SCNN model achieves 98.64% classification accuracy on the LIDC-IDRI database and 98.9% on the NIH Chest X-ray lung image dataset. The experiments indicate that the proposed approach results in lower energy consumption and faster inference times. Furthermore, the interpretability of the classification findings is improved by the intrinsic explainability of SCNNs, offering a more profound understanding of the decision-making process. With these benefits, SCNNs are positioned as a reliable and effective technique for classifying lung images, providing a significant advancement over current methods.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_106-BlockChain_and_Deep_Learning_with_Dynamic_Pattern_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Automatic Short Answer Scoring Task Through a Hybrid Deep Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508105</link>
        <id>10.14569/IJACSA.2024.01508105</id>
        <doi>10.14569/IJACSA.2024.01508105</doi>
        <lastModDate>2024-08-30T08:07:27.7370000+00:00</lastModDate>
        
        <creator>Soumia Ikiss</creator>
        
        <creator>Najima Daoudi</creator>
        
        <creator>Manar Abourezq</creator>
        
        <creator>Mostafa Bellafkih</creator>
        
        <subject>Student answer; automatic scoring; BERT language model; LSTM neural network; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>An automatic short-answer scoring system involves using computational techniques to automatically evaluate and score student answers based on a given question and desired answer. The increasing reliance on automated systems for assessing student responses has highlighted the need for accurate and reliable short-answer scoring mechanisms. This research aims to improve the understanding and evaluation of student answers by developing an advanced automatic scoring system. While previous studies have explored various methodologies, many fail to capture the full complexity of response text. To address this gap, our study combines the strengths of classical neural networks with the capabilities of large language models. Specifically, we fine-tune the Bidirectional Encoder Representations from Transformers (BERT) model and integrate it with a recurrent neural network to enhance the depth of text comprehension. We evaluate our approach on the widely-used Mohler dataset and benchmark its performance against several baseline models using RMSE (Root Mean Square Error) and Pearson correlation metrics. The experimental results demonstrate that our method outperforms most existing systems, providing a more robust solution for automatic short-answer scoring.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_105-Improving_Automatic_Short_Answer_Scoring_Task.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Reinforcement Learning (DRL) Based Approach to SFC Request Scheduling in Computer Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508104</link>
        <id>10.14569/IJACSA.2024.01508104</id>
        <doi>10.14569/IJACSA.2024.01508104</doi>
        <lastModDate>2024-08-30T08:07:27.7370000+00:00</lastModDate>
        
        <creator>Eesha Nagireddy</creator>
        
        <subject>RL models; SFC chain; Deep-Q-Network; Dijkstra’s algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study investigates the use of Deep Reinforcement Learning (DRL) to minimize the latency between the source and destination of Service Function Chaining (SFC) requests in computer networks. The approach utilizes Deep-Q-Network (DQN) reinforcement learning to determine the shortest path between two nodes using the Greedy-Simulated Annealing (GSA) Dijkstra&#39;s Algorithm, when applied to SFC requests. The containers within the SFC framework help train the RL model based on bandwidth restrictions (fiber networks) to optimize the different pathways in terms of action space. Through rigorous evaluation of varying action spaces in models, we assessed that Dijkstra’s algorithm is, within this sphere, in fact a viable optimized solution to SFC request-based problems. Our findings illustrate how this framework can be applied to early request-based topologies to introduce a more optimized method of resource allocation, quality of service, and network performance, and to generalize the relationship between SFC and RL.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_104-A_Deep_Reinforcement_Learning_DRL_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploration of Deep Semantic Analysis and Application of Video Images in Visual Communication Design Based on Multimodal Feature Fusion Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508103</link>
        <id>10.14569/IJACSA.2024.01508103</id>
        <doi>10.14569/IJACSA.2024.01508103</doi>
        <lastModDate>2024-08-30T08:07:27.7200000+00:00</lastModDate>
        
        <creator>Yanlin Chen</creator>
        
        <creator>Xiwen Chen</creator>
        
        <subject>Video semantic understanding; visual communication design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Fully utilizing image and video semantic processing techniques can play a more effective role in visual communication design. To further explore the application of the multimodal feature fusion (MFF) algorithm to video image feature analysis in visual communication design, with the aim of enhancing the depth and breadth of design creation, this article focuses on the application of video semantic understanding technology, combining image and video semantic processing techniques to achieve a comprehensive, three-dimensional, and open expansion of design thinking. The MFF algorithm was proposed and implemented; it innovatively integrates multimodal information in videos, such as the visual and audio streams, deeply explores action semantics, and shows significant performance improvements over traditional algorithms. Specifically, compared to two other mainstream algorithms, its performance improved by 24.33% and 14.58%, respectively. This finding not only validates the superiority of the MFF algorithm in the field of video semantic understanding, but also reveals the profound impact of video semantic understanding technology on visual communication design practice, providing new perspectives and tools for the design industry and promoting the innovation and development of design thinking. The novelty of this study lies in its interdisciplinary methodology, which applies advanced algorithmic techniques to the field of art and design, and in the significant improvement the proposed MFF algorithm brings to design efficiency and creativity.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_103-Exploration_of_Deep_Semantic_Analysis_and_Application_of_Video_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced IoT-Enabled Indoor Thermal Comfort Prediction Using SVM and Random Forest Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508102</link>
        <id>10.14569/IJACSA.2024.01508102</id>
        <doi>10.14569/IJACSA.2024.01508102</doi>
        <lastModDate>2024-08-30T08:07:27.7030000+00:00</lastModDate>
        
        <creator>Nurtileu Assymkhan</creator>
        
        <creator>Amandyk Kartbayev</creator>
        
        <subject>Heating; building energy management; thermal comfort; IoT; Support Vector Machine; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Predicting thermal comfort within indoor environments is essential for enhancing human health, productivity, and well-being. This study uses interdisciplinary approaches, integrating insights from engineering, psychology, and data science to develop sophisticated machine learning models that predict thermal comfort. Traditional methods often depend on subjective human input and can be inefficient. In contrast, this research applies Support Vector Machines (SVM) and Random Forest algorithms, celebrated for their precision and speed in handling complex datasets. The advent of the Internet of Things (IoT) further revolutionizes building management systems by introducing adaptive control algorithms and enabling smarter, IoT-driven architectures. We focus on the comparative analysis of SVM and Random Forest in predicting indoor thermal comfort, discussing their respective advantages and limitations under various environmental conditions and building designs. The dataset we used included comprehensive thermal comfort data, which underwent rigorous preprocessing to enhance model training and testing—80% of the data was used for training and the remaining 20% for testing. The models were evaluated based on their ability to accurately mirror complex interactions between environmental factors and occupant comfort levels. The results indicated that while both models performed robustly, Random Forest demonstrated greater stability and slightly higher accuracy in most scenarios. The paper proposes potential strategies for incorporating additional predictive features to further refine the accuracy of these models, emphasizing the promise of machine learning in advancing indoor comfort optimization.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_102-Advanced_IoT_Enabled_Indoor_Thermal_Comfort_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Simulation and Forecasting of Spatial Expansion in Small and Medium-Sized Cities Using ANN-CA-Markov Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508101</link>
        <id>10.14569/IJACSA.2024.01508101</id>
        <doi>10.14569/IJACSA.2024.01508101</doi>
        <lastModDate>2024-08-30T08:07:27.6870000+00:00</lastModDate>
        
        <creator>Chengquan Gao</creator>
        
        <subject>ANN-CA; Markov; small and medium-sized cities; spatial; planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study utilizes the ANN-CA-Markov (Artificial Neural Network-Cellular Automata-Markov) model to address spatial planning and expansion challenges in China’s small and medium-sized cities. With China’s urbanization rate reaching 59.58% in 2018 and expected to hit 70% by 2030, the country is entering a mid-stage of urbanization, leading to rapid expansion of megacities and a gradual decline in smaller cities. The study aims to dynamically simulate urban spatiotemporal evolution and predict future land use changes, integrating land use data, DEM elevation, transportation, administrative centers, and ecological information. The model forecasts the ecological spatial layout of Wanzhou District by 2025, with results indicating a slight decrease in ecological space and an increase in construction land. This suggests a need to balance urban development with ecological sustainability amidst rapid urbanization. The study demonstrates the high accuracy of the ANN-CA-Markov model in predicting land use changes and provides valuable insights for urban planners in making informed land use decisions.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_101-Dynamic_Simulation_and_Forecasting_of_Spatial_Expansion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Authentication Taxonomy and Lightweight Considerations in the Internet-of-Medical-Things (IoMT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01508100</link>
        <id>10.14569/IJACSA.2024.01508100</id>
        <doi>10.14569/IJACSA.2024.01508100</doi>
        <lastModDate>2024-08-30T08:07:27.6730000+00:00</lastModDate>
        
        <creator>Azlina Ahmadi Julaihi</creator>
        
        <creator>Md Asri Ngadi</creator>
        
        <creator>Raja Zahilah binti Raja Mohd Radzi</creator>
        
        <subject>Lightweight authentication; Aggregated Authentication; Multi-Factor Authentication (MFA); Internet-of-Medical Things (IoMT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The potential of the Internet-of-Things (IoT) in healthcare is evident in its ability to connect medical equipment, sensors, and healthcare personnel to provide high-quality medical expertise in remote locations. The constraints faced by these devices, such as limited storage, power, and energy resources, necessitate a lightweight authentication mechanism that is both efficient and secure. This study contributes by exploring challenges and lightweight authentication advancements, focusing on their efficiency in the Internet-of-Medical-Things (IoMT). A review of recent literature reveals ongoing issues such as the high complexity of cryptographic operations, scalability challenges, and security vulnerabilities in the proposed authentication systems. These findings point to the need for multi-factor authentication with a simplified cryptographic process and more efficient aggregated management practices tailored to the constraints of IoMT environments. This study also introduces an extended taxonomy, namely Lightweight Aggregated Authentication Solutions (LAAS), a lightweight efficiency approach that includes a streamlined authentication process and aggregated authentication, providing an understanding of lightweight authentication approaches. By identifying critical research gaps and future research directions, this study aims to provide a secure authentication protocol for IoMT and similar resource-constrained domains.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_100-A_Comprehensive_Authentication_Taxonomy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Color Design Based on AR Virtual Implantation Technology Between Users and Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150899</link>
        <id>10.14569/IJACSA.2024.0150899</id>
        <doi>10.14569/IJACSA.2024.0150899</doi>
        <lastModDate>2024-08-30T08:07:27.6730000+00:00</lastModDate>
        
        <creator>Jun Ma</creator>
        
        <creator>Ying Chen</creator>
        
        <subject>Artificial intelligence; AR virtual implantation technology; color; style transfer; furniture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>To achieve user-interactive color design, this study takes furniture color interactive design as an example and introduces artificial intelligence and augmented reality (AR) virtual implantation technology, allowing users to design furniture colors and styles according to their own ideas. Furniture image style transfer is carried out with an improved cycle-consistency generative adversarial network, and indoor feature point classification and virtual model registration are performed through density-based spatial clustering and other methods to design a markerless augmented reality furniture system. The results showed that, compared to other methods, the improved cycle-consistency generative adversarial network had a higher structural similarity value. On the zebra-to-horse image, the structural similarity value was 0.987, which was 0.018 higher than that of the algorithm before improvement. The registration effect of the density-based spatial clustering algorithm was good, with short time consumption in different scenarios and a maximum of 0.308 seconds in occluded composite scenes. The drawing component also performs well, with each tracking-thread process taking less than 20 ms. The research method not only satisfies users in designing furniture colors and styles, but also enhances their experience.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_99-Interactive_Color_Design_Based_on_AR_Virtual_Implantation_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Mechanomyography Signal for Quantitative Muscle Spasticity Assessment of Upper Limb in Neurological Disorders Using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150898</link>
        <id>10.14569/IJACSA.2024.0150898</id>
        <doi>10.14569/IJACSA.2024.0150898</doi>
        <lastModDate>2024-08-30T08:07:27.6570000+00:00</lastModDate>
        
        <creator>Muhamad Aliff Imran Daud</creator>
        
        <creator>Asmarani Ahmad Puzi</creator>
        
        <creator>Shahrul Na’im Sidek</creator>
        
        <creator>Ahmad Anwar Zainuddin</creator>
        
        <creator>Ismail Mohd Khairuddin</creator>
        
        <creator>Mohd Azri Abdul Mutalib</creator>
        
        <subject>Spasticity; mechanomyography; Modified Ashworth Scale; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Upper motor neuron syndrome is characterised by spasticity, a neurological disability found in several disorders such as cerebral palsy, amyotrophic lateral sclerosis, stroke, brain injury, and spinal cord injury. Muscle spasticity is conventionally assessed by therapists through passive movement, assigning spasticity grades to the relevant joints based on the degree of muscle resistance, which leads to inconsistency in assessment and can affect the efficiency of the rehabilitation process. To address this problem, this study developed a muscle spasticity model using Mechanomyography (MMG) signals from the forearm muscles. The muscle spasticity model is based on the Modified Ashworth Scale and focuses on flexion and extension movements of the forearm. Thirty subjects who satisfied the requirements and provided consent were recruited for data collection. The data underwent a pre-processing stage and was subsequently analysed prior to feature extraction. The dataset consists of forty-eight features extracted along the x, y, and z axes (for both the biceps and triceps muscles), corresponding to the longitudinal, lateral, and transverse orientations relative to the muscle fibers. Feature selection was conducted to analyse whether an overall significant difference appeared in the combined set of these features across the different spasticity levels. The test results led to the selection of twenty-five of the forty-eight features, which were used to train an optimal classifier for quantifying the level of muscle spasticity. Linear Discriminant Analysis (LDA), Decision Trees (DTs), Support Vector Machine (SVM), and K-Nearest Neighbour (KNN) algorithms were employed to achieve better accuracy in quantifying the muscle spasticity level. The KNN-based classifier achieved the highest performance, with an accuracy of 91.29% at k=15, surpassing the other classifiers. This leads to consistency in spasticity evaluation, hence offering optimal rehabilitation strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_98-Leveraging_Mechanomyography_Signal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network Model for Cacao Phytophthora Palmivora Disease Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150897</link>
        <id>10.14569/IJACSA.2024.0150897</id>
        <doi>10.14569/IJACSA.2024.0150897</doi>
        <lastModDate>2024-08-30T08:07:27.6400000+00:00</lastModDate>
        
        <creator>Jude B. Rola</creator>
        
        <creator>Jomari Joseph A. Barrera</creator>
        
        <creator>Maricel V. Calhoun</creator>
        
        <creator>Jonah Flor Ora&#241;o-Maaghop</creator>
        
        <creator>Magdalene C. Unajan</creator>
        
        <creator>Joshua Mhel Boncalon</creator>
        
        <creator>Elizabeth T. Sebios</creator>
        
        <creator>Joy S. Espinosa</creator>
        
        <subject>Machine-learning; Convolutional Neural Network; detection of P. palmivora</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Cacao, scientifically known as Theobroma cacao, is a highly nutritious food and is extensively utilized in multiple sectors, including agriculture and health. Nevertheless, the agricultural sector encounters notable obstacles as a result of Cacao diseases such as pod rot, predominantly attributed to the Phytophthora genus. The objective of this work is to conduct a comparative analysis to determine the most effective machine-learning technique for the detection of P. palmivora infection in Cacao pods. Few studies have previously delved into this topic; this study focuses on utilizing a somewhat larger dataset, achieving a better model, and attaining higher accuracy. A total of 2000 images of cacao pods, both healthy and disease-infected, were collected. Subsequently, the images were manually classified by a domain expert based on the discernible presence or absence of the disease. The study examined five machine learning algorithms, specifically Na&#239;ve Bayes, Random Forest, Hoeffding Tree, Multilayer Neural Network, and Convolutional Neural Network (CNN). The CNN model had a 99% level of accuracy, the highest among the five machine learning algorithms in the testing phase. The methodology has the potential to significantly advance sustainable agricultural practices and disease management. To enhance the model&#39;s recognition capabilities, additional datasets encompassing a broader range of Cacao varieties are necessary.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_97-Convolutional_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Micro Traffic Flow Phenomena Based on Vehicle Types and Driver Characteristics Using Cellular Automata and Monte Carlo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150896</link>
        <id>10.14569/IJACSA.2024.0150896</id>
        <doi>10.14569/IJACSA.2024.0150896</doi>
        <lastModDate>2024-08-30T08:07:27.6270000+00:00</lastModDate>
        
        <creator>Tri Harsono</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Micro traffic flow; driver characteristics; cellular automata; Monte Carlo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The modeling of micro traffic flow on a highway has been extensively observed and studied in various aspects, such as driver characteristics in car-following and lane-changing behaviors. Regarding car-following and lane-changing, an interesting aspect is how to model the movement of vehicles on a highway that exhibits unique characteristics in the speeds of vehicles with four or more wheels passing through it. This condition occurs on the Porong Highway in Sidoarjo, East Java, Indonesia. Based on these conditions, this study develops a microscopic traffic flow model incorporating driver characteristics categorized into three types: careful drivers, ordinary drivers, and skilled drivers, each with distinct vehicle speed traits. These driver characteristics are integrated into the Nagel-Schreckenberg Stochastic Traffic Cellular Automata (NaSch STCA) model, which we refer to as the Modified NaSch STCA. Monte Carlo simulation is employed to generate events through random numbers for the Occupied Initial State, Slowdown Probability, and Probability of Lane Changing; these three components are integral parts of the Modified NaSch STCA model. Simulations were conducted on the constructed vehicle movement model, and one outcome is that the travel time obtained from the NaSch STCA model is significantly faster than that obtained from the Modified NaSch STCA model. This condition is attributed to the unique vehicle speed characteristics on the Porong Highway, where the average speed vr = 38 km/h is relatively lower than the average speed typically observed on a highway.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_96-Modeling_Micro_Traffic_Flow_Phenomena.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing RPL Networks with Enhanced Routing Efficiency with Congestion Prediction and Load Balancing Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150894</link>
        <id>10.14569/IJACSA.2024.0150894</id>
        <doi>10.14569/IJACSA.2024.0150894</doi>
        <lastModDate>2024-08-30T08:07:27.6100000+00:00</lastModDate>
        
        <creator>Saumya Raj</creator>
        
        <creator>Rajesh R</creator>
        
        <subject>Low power and Lossy Network (LLN); Routing Protocol for LLN (RPL); load balancing; Internet of Things (IoT); Internet Protocol Version 6 (IPv6)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Low power and Lossy Networks (LLNs) are essential components of the Internet of Things (IoT) environment. In LLNs, the Routing Protocol for LLN (RPL)-based Internet Protocol Version 6 (IPv6) routing protocol is regarded as a standardized solution. However, the existing models did not account for the issues with congestion and security when modeling the RPL. Thus, to resolve these issues, this paper proposes a novel Exponential Poisson Distribution–Fuzzy (EPD-Fuzzy) model and Kullback-Leibler Divergence-based Tunicate Swarm Algorithm (KLD-TSA) for developing a reliable RPL model. The hash codes are first generated for the registered nodes at the network end in order to achieve security; the hash codes are subsequently compared via requests with the immediate nodes. Each node sends a request to its neighbors using the hash value; if the hash value matches, a path is formed. The parent nodes are then chosen and ranked using the Pearson Correlation Coefficient-Spotted Hyena Optimization Algorithm (PCC-SHOA) technique to minimize latency. To avoid congestion, the EPD-Fuzzy is employed to predict congestion; then, a genitor node is introduced in the congested scenarios. Big data and videos are split, compressed, and sent via multiple paths to reduce losses in the RPL. Moreover, to avoid network traffic, a novel KLD-TSA load balancing is introduced at the user end. The experimental outcomes exhibited the proposed technique&#8217;s effectiveness regarding Packet Delivery Ratio (PDR).</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_94-Securing_RPL_Networks_with_Enhanced_Routing_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Hyperparameters in Machine Learning Models for Accurate Fitness Activity Classification in School-Aged Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150895</link>
        <id>10.14569/IJACSA.2024.0150895</id>
        <doi>10.14569/IJACSA.2024.0150895</doi>
        <lastModDate>2024-08-30T08:07:27.6100000+00:00</lastModDate>
        
        <creator>Britsel Calluchi Arocutipa</creator>
        
        <creator>Magaly Villegas Cahuana</creator>
        
        <creator>Vanessa Huanca Hilachoque</creator>
        
        <creator>Marco Cossio Bola&#241;os</creator>
        
        <subject>Machine learning; classification; physical fitness; schoolchildren; hyperparameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Classification using machine learning algorithms in physical fitness tests carried out by students in educational centers can help prevent obesity and other related diseases. This research aims to evaluate physical fitness using percentiles of the tests and machine learning algorithms with hyperparameter optimization. The process followed was knowledge discovery in databases (KDD). Data were collected from 1525 students (784 women, 741 men) aged 6 to 17, selected non-probabilistically from five public schools. For the evaluation, anthropometric parameters such as age, weight, height, sitting height, abdominal circumference, relaxed arm circumference, oxygen saturation, resting heart rate, and maximum expiratory flow were considered. Physical fitness tests included sitting flexibility, kangaroo horizontal jump, and 20-meter fly speed. Within the percentiles observed, we took three cut-off points as a basis for the present research: &gt; P75 (above average), P25 to P75 (average), and &lt; P25 (below average). The following machine learning algorithms were used for classification: Random Forest, Support Vector Machine, Decision Tree, Logistic Regression, Naive Bayes, K-Nearest Neighbor, XGBoost, Neural Network, CatBoost, LGBM, and Gradient Boosting. The algorithms were hyperparameter optimized using GridSearchCV to find the best configurations. In conclusion, the importance of hyperparameter optimization in improving the accuracy of machine learning models is highlighted. Random Forest performs well in classifying the &#8220;High&#8221; and &#8220;Low&#8221; categories in most tests but struggles to correctly classify the &#8220;Normal&#8221; category for both male and female students.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_95-Optimizing_Hyperparameters_in_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Online Gambling Promotions on Indonesian Twitter Using Text Mining Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150893</link>
        <id>10.14569/IJACSA.2024.0150893</id>
        <doi>10.14569/IJACSA.2024.0150893</doi>
        <lastModDate>2024-08-30T08:07:27.5930000+00:00</lastModDate>
        
        <creator>Reza Bayu Perdana</creator>
        
        <creator>Ardin</creator>
        
        <creator>Indra Budi</creator>
        
        <creator>Aris Budi Santoso</creator>
        
        <creator>Amanah Ramadiah</creator>
        
        <creator>Prabu Kresna Putra</creator>
        
        <subject>Social media; analytics; online gambling; intention classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study addresses the pressing challenge of detecting online gambling promotions on Indonesian Twitter using text mining algorithms for text classification and analytics. Amid limited research on this subject, especially in the Indonesian context, we aim to identify common textual features used in gambling promotions and determine the most effective classification models. By analyzing a dataset of 6038 collected tweets using methods such as Random Forest, Logistic Regression, and Convolutional Neural Networks, complemented by a comparative analysis of text representation methods, we identified frequently occurring words such as &#39;link&#39;, &#39;situs&#39;, &#39;prediksi&#39;, &#39;jackpot&#39;, &#39;maxwin&#39;, and &#39;togel&#39;. The results indicate that the combination of TF-IDF and Random Forest is the most effective method for detecting online gambling promotion content on Indonesian Twitter, achieving a recall value of 0.958 and a precision value of 0.966. These findings can contribute to cybersecurity and support law enforcement in mitigating the negative effects of such promotions, particularly on the Twitter platform in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_93-Detecting_Online_Gambling_Promotions_on_Indonesian_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptographic Techniques in Digital Media Security: Current Practices and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150892</link>
        <id>10.14569/IJACSA.2024.0150892</id>
        <doi>10.14569/IJACSA.2024.0150892</doi>
        <lastModDate>2024-08-30T08:07:27.5800000+00:00</lastModDate>
        
        <creator>Gongling ZHANG</creator>
        
        <subject>Digital media; cryptographic; content security; digital rights management; watermarking; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Content privacy and unauthorized access to copyrighted digital media content are common in the dynamic, fast-paced digitalized media marketplace. Cryptographic methods are the foundation of modern digital media security, and they must ensure the security, integrity, and authenticity of digital media data. This article analyses cryptographic methods that are used to protect digital media content. The paper reviews the main cryptographic concepts, such as symmetric cryptography, asymmetric cryptography, hash functions, and digital signatures. The paper also discusses some popular approaches: encryption, Digital Rights Management (DRM), watermarking, and solutions based on blockchain. Finally, we highlight implementation challenges such as key management and scalability and identify emerging trends such as quantum-safe cryptography and privacy-preserving techniques. By presenting the current research results and discussing the directions for the future, the study aims to pave the way for secure, efficient, and robust cryptographic solutions for digital media protection, leading to sustainable development, innovation, and secure communication of digital content among users.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_92-Cryptographic_Techniques_in_Digital_Media_Security_Current_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Based Deep Learning Approach for Pedestrian Detection in Self-Driving Cars</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150891</link>
        <id>10.14569/IJACSA.2024.0150891</id>
        <doi>10.14569/IJACSA.2024.0150891</doi>
        <lastModDate>2024-08-30T08:07:27.5630000+00:00</lastModDate>
        
        <creator>Wael Ahmad AlZoubi</creator>
        
        <creator>Girish Bhagwant Desale</creator>
        
        <creator>Sweety Bakyarani E</creator>
        
        <creator>Uma Kumari C R</creator>
        
        <creator>Divya Nimma</creator>
        
        <creator>K Swetha</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Pedestrian recognition; autonomous vehicle safety; deep learning; attention mechanism; Bidirectional Gated Recurrent Units</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Autonomous vehicle safety relies heavily on the ability to accurately detect pedestrians, as this capability is crucial for preventing accidents and saving lives. Pedestrian recognition is particularly challenging in the dynamic and complex environments of urban areas. Effective pedestrian detection is crucial for ensuring road safety in autonomous vehicles. Current pedestrian identification systems often fall short in capturing the nuances of pedestrian behavior and appearance, potentially leading to dangerous situations. These limitations are mainly due to difficulties in various conditions, such as low-light environments, occlusions, and intricate urban settings. This paper proposes a novel solution to these challenges by integrating an attention-based convolutional bi-GRU model with deep learning techniques for pedestrian recognition. This method leverages deep learning to provide a robust solution for pedestrian detection. Convolutional layers are utilized to extract spatial features, attention mechanisms highlight semantic details, and Bidirectional Gated Recurrent Units (Bi-GRU) capture the temporal context in the proposed model. The process begins with data collection to build a comprehensive pedestrian dataset, followed by preprocessing using min-max normalization. The key components of the model work together to enhance pedestrian detection, ensuring a more accurate and comprehensive understanding of dynamic pedestrian scenarios. The implementation of this unique approach was carried out using Python, employing libraries such as TensorFlow, Keras, and OpenCV. The proposed attention-based convolutional bi-GRU model outperforms previous models by an average of 17.1%, achieving an accuracy rate of 99.4%. The model significantly surpasses Random Forest, Faster R-CNN, and SVM in terms of pedestrian recognition accuracy, which is critical for autonomous vehicle safety.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_91-Attention_Based_Deep_Learning_Approach_for_Pedestrian_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-Based Joint Learning for Intent Detection and Slot Filling Using Bidirectional Long Short-Term Memory and Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150890</link>
        <id>10.14569/IJACSA.2024.0150890</id>
        <doi>10.14569/IJACSA.2024.0150890</doi>
        <lastModDate>2024-08-30T08:07:27.5470000+00:00</lastModDate>
        
        <creator>Yusuf Idris Muhammad</creator>
        
        <creator>Naomie Salim</creator>
        
        <creator>Sharin Hazlin Huspi</creator>
        
        <creator>Anazida Zainal</creator>
        
        <subject>Joint learning; intent detection; slot filling; multichannel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Effective natural language understanding is crucial for dialogue systems, requiring precise intent detection and slot filling to facilitate interactions. Traditionally, these subtasks have been addressed separately, but their interconnection suggests that joint solutions yield better results. Recent neural network-based approaches have shown significant performance in joint intent detection and slot filling tasks. The two primary neural network structures used are recurrent neural networks (RNNs) and convolutional neural networks (CNNs). RNNs capture long-term dependencies and store previous information semantics in a fixed-size vector, but their ability to extract global semantics is limited. CNNs can capture n-gram features using convolutional filters, but their performance is constrained by filter width. To leverage the strengths and mitigate the weaknesses of both networks, this paper proposes an attention-based joint learning classification for intent detection and slot filling using BiLSTM and CNNs (AJLISBC). The BiLSTM encodes input sequences in both forward and backward directions, producing high-dimensional representations. It applies scalar and vectorial attention to obtain multichannel representations, with scalar attention calculating word-level importance and vectorial attention assessing feature-level importance. For classification, AJLISBC employs a CNN structure to capture word relations in the representations generated by the attention mechanism, effectively extracting n-gram features. Experimental results on the benchmark Airline Travel Information System (ATIS) dataset demonstrate that AJLISBC outperforms state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_90-Attention_Based_Joint_Learning_for_Intent_Detection_and_Slot_Filling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiclass Osteoporosis Detection: Enhancing Accuracy with Woodpecker-Optimized CNN-XGBoost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150889</link>
        <id>10.14569/IJACSA.2024.0150889</id>
        <doi>10.14569/IJACSA.2024.0150889</doi>
        <lastModDate>2024-08-30T08:07:27.5330000+00:00</lastModDate>
        
        <creator>Mithun DSouza</creator>
        
        <creator>Divya Nimma</creator>
        
        <creator>Kiran Sree Pokkuluri</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Suresh Babu Kondaveeti</creator>
        
        <creator>Lavanya Kongala</creator>
        
        <subject>Osteoporosis detection; multiclass classification; Woodpecker Optimization Algorithm; Convolutional Neural Network (CNN); XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the realm of medical diagnostics, accurately identifying osteoporosis through multiclass classification poses a significant challenge due to the subtle variations in bone density and structure. This study proposes a novel approach to enhance detection accuracy by integrating the Woodpecker Optimization Algorithm with a hybrid Convolutional Neural Network (CNN) and XGBoost model. The Woodpecker Optimization Algorithm is employed to fine-tune the CNN-XGBoost model parameters, leveraging its ability to efficiently search for optimal configurations amidst complex data landscapes. The proposed framework begins with the CNN component, designed to automatically extract hierarchical features from bone density images. This initial stage is crucial for capturing intricate patterns that signify osteoporotic conditions across multiple classes. Subsequently, the extracted features are fed into an XGBoost classifier, renowned for its robust performance in handling structured data and multiclass classification tasks. By combining these two powerful techniques, the model aims to synergistically utilize the strengths of deep learning in feature extraction and gradient boosting in decision-making. Experimental validation is conducted on a comprehensive dataset comprising diverse bone density scans, ensuring the model&#39;s robustness across various patient demographics and imaging conditions. Performance criteria including recall, precision, reliability, and F1-score are assessed to show that the proposed Woodpecker-optimized CNN-XGBoost framework achieves better diagnostic accuracy than competing approaches. The findings underscore the potential of hybrid models in advancing osteoporosis detection, offering clinicians a reliable tool for early and precise diagnosis, thereby facilitating timely interventions to mitigate the debilitating effects of bone-related diseases. The proposed osteoporosis detection model, implemented in Python, achieves a classification accuracy of 97.1%.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_89-Multiclass_Osteoporosis_Detection_Enhancing_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Fusion for Intracranial Hemorrhage Classification in Brain CT Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150887</link>
        <id>10.14569/IJACSA.2024.0150887</id>
        <doi>10.14569/IJACSA.2024.0150887</doi>
        <lastModDate>2024-08-30T08:07:27.5170000+00:00</lastModDate>
        
        <creator>Padma Priya S. Babu</creator>
        
        <creator>T. Brindha</creator>
        
        <subject>Intracranial hemorrhage; deep learning; DenseNet 121; LSTM; brain CT images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Brain hemorrhages, characterized by the rupture of arteries in the brain due to blood clotting or high blood pressure (BP), present a significant risk of traumatic injury or even death. This bleeding damages brain cells, with common causes including brain tumors, aneurysms, blood vessel abnormalities, amyloid angiopathy, trauma, high BP, and bleeding disorders. When a hemorrhage happens, oxygen can no longer reach the brain tissue, and brain cells begin to die if they are deprived of oxygen and nutrients for longer than three or four minutes. The affected nerve cells and the related functions they control are damaged as well. Early detection of brain hemorrhages is therefore crucial. In this paper, an efficient hybrid deep learning (DL) model is proposed for intracranial hemorrhage (ICH) detection from brain CT images. The proposed method integrates DenseNet 121 and Long Short-Term Memory (LSTM) models for the accurate classification of ICH, with the DenseNet 121 model acting as the feature extractor. The experimental results demonstrated that the model attained 97.50% accuracy, 97.00% precision, 95.99% recall, and 96.33% F1 score, demonstrating its effectiveness in accurately identifying and classifying ICH.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_87-Deep_Learning_Fusion_for_Intracranial_Hemorrhage_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Romanian Sign Language and Mime-Gesture Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150888</link>
        <id>10.14569/IJACSA.2024.0150888</id>
        <doi>10.14569/IJACSA.2024.0150888</doi>
        <lastModDate>2024-08-30T08:07:27.5170000+00:00</lastModDate>
        
        <creator>Enachi Andrei</creator>
        
        <creator>Turcu Cornel</creator>
        
        <creator>George Culea</creator>
        
        <creator>Sghera Bogdan Constantin</creator>
        
        <creator>Ungureanu Andrei Gabriel</creator>
        
        <subject>RSL; sign language; machine learning; model; mime gestures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper presents a comprehensive approach to Romanian Sign Language (RSL) recognition using machine learning techniques. The primary focus is on developing and evaluating a robust model capable of accurately classifying hand and mime gestures representative of RSL and converting them into speech through an application. Utilizing a dataset of hand landmarks captured and stored in CSV format, the study outlines the preprocessing steps, model training, and performance evaluation. Key components of the methodology include data preparation, model training, performance evaluation, and model optimization. The results demonstrate the feasibility of using machine learning for RSL recognition, achieving promising accuracy rates. The study concludes with a discussion on potential applications and future enhancements, including real-time gesture recognition and expanding the dataset for improved generalization. This work contributes to the broader effort of making sign language more accessible through technology, particularly for the Romanian-speaking deaf and hard-of-hearing community.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_88-Romanian_Sign_Language_and_Mime_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a 5G-Optimized MIMO Antenna with Enhanced Isolation Using Neutralization Line and SRRs Metamaterials</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150886</link>
        <id>10.14569/IJACSA.2024.0150886</id>
        <doi>10.14569/IJACSA.2024.0150886</doi>
        <lastModDate>2024-08-30T08:07:27.5000000+00:00</lastModDate>
        
        <creator>Chaker Essid</creator>
        
        <creator>Linda Chouikhi</creator>
        
        <creator>Alsharef Mohammad</creator>
        
        <creator>Bassem Ben Salah</creator>
        
        <creator>Hedi Sakli</creator>
        
        <subject>5G; antenna; MIMO; SRRs metamaterials; isolation; IOT; neutralization line (NL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper presents the design of a Multiple Input Multiple Output (MIMO) antenna intended for 5G wireless applications operating in the 3.5 GHz frequency range. The MIMO system consists of two adjacent antennas, measuring 100 mm &#215; 80 mm &#215; 1.6 mm, with spacing between radiating elements equal to one-eighth of the wavelength (λ/8). The antenna is constructed on an FR4 substrate with a permittivity of 4.3, and a microstrip line is employed for feeding the patch. Several techniques are employed to enhance the isolation between the antennas. Specifically, two decoupling methods are explored: the use of a neutralization line (NL) and the incorporation of metamaterial split-ring resonators (SRRs). Simulation results demonstrate substantial isolation, exceeding 20 dB with SRR implementation and more than 23 dB with the NL approach. Both individual antennas and the MIMO configuration are simulated, analyzed, and then physically fabricated for measurement, exhibiting good agreement between measured and simulated results. The study investigates the impact of each technique on antenna performance to determine the optimal configuration for 5G and IoT applications in different fields such as health (wireless medical telemetry systems (WMTS)). Notably, the introduction of metamaterial (MTM) SRRs achieves a noteworthy reduction of mutual coupling, on the order of 23 dB, comparable to that obtained with NL insertion.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_86-Development_of_a_5G_Optimized_MIMO_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complex Environmental Localization of Scenic Spots by Integrating LANDMARC Localization System and Traditional Location Fingerprint Localization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150885</link>
        <id>10.14569/IJACSA.2024.0150885</id>
        <doi>10.14569/IJACSA.2024.0150885</doi>
        <lastModDate>2024-08-30T08:07:27.4870000+00:00</lastModDate>
        
        <creator>Shasha Song</creator>
        
        <creator>Cong Li</creator>
        
        <subject>LANDMARC; localization system; fingerprint localization; environmental localization; scenic spot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The scenic spot contains complex and changeable indoor and outdoor environments, in some of which positioning may fail to work effectively due to signal occlusion, multipath effects, and other factors. To address this problem, this paper proposes a method that integrates the Location Identification Based on Dynamic Active Radio Frequency IDentification Calibration (LANDMARC) system with fingerprint localization, aiming to improve positioning accuracy and reliability in the complex environment of the scenic spot. Firstly, the LANDMARC system is analyzed and improved. Then the improved positioning algorithm is applied to the complex environment of the scenic spot. Finally, the positioning results of the improved algorithm in this environment are tested. The experimental results show that when the K value is set to 4, the readers are arranged in the four corners and the center of the area, and the label density is set to 6&#215;6, the average error of the proposed system is only 0.32, which is 0.28 less than that of the ultrasonic positioning system. Overall, the positioning scheme combining the LANDMARC system with traditional location fingerprint localization shows clear advantages in positioning accuracy, stability, and real-time performance in the complex environment of the scenic spot.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_85-Complex_Environmental_Localization_of_Scenic_Spots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic Segmentation Method for Road Scene Images Based on Improved DeeplabV3+ Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150883</link>
        <id>10.14569/IJACSA.2024.0150883</id>
        <doi>10.14569/IJACSA.2024.0150883</doi>
        <lastModDate>2024-08-30T08:07:27.4700000+00:00</lastModDate>
        
        <creator>Lihua Bi</creator>
        
        <creator>Xiangfei Zhang</creator>
        
        <creator>Shihao Li</creator>
        
        <creator>Canlin Li</creator>
        
        <subject>Image enhancement; attention mechanism; semantic segmentation; road scene images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Semantic segmentation of road scenes plays a crucial role in many fields such as autonomous driving, intelligent transportation systems, and urban planning. Through the precise identification and segmentation of elements such as roads, pedestrians, vehicles, and traffic signs, the system can better understand the surrounding environment and make safe and effective decisions. However, existing semantic segmentation technology still faces many challenges in complex road scenes, such as lighting changes, weather effects, different viewing angles, and the presence of occlusions. Working from actual road scene images, this paper improves the DeeplabV3+ network, applies it to semantic segmentation of road scene images, and proposes a semantic segmentation method for road scene images based on the improved DeeplabV3+ network. By adding enhancement strategies for road scene images and hyperparameter adjustment, the method improves the training process of the DeeplabV3+ network, and uses the SK attention mechanism to improve the feature fusion module in DeeplabV3+, so as to improve the segmentation of road scene images. After validation on Cityscapes and other datasets, the segmentation accuracy index mIoU of the proposed method reaches 79.8%; the method produces better semantic segmentation predictions, effectively improves the segmentation performance and accuracy of the model, achieves better segmentation indices than the comparison networks, and yields better subjective visual quality.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_83-A_Semantic_Segmentation_Method_for_Road_Scene_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Tuberculosis Diagnosis and Treatment Outcomes: A Stacked Loopy Decision Tree Approach Empowered by Moth Search Algorithm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150884</link>
        <id>10.14569/IJACSA.2024.0150884</id>
        <doi>10.14569/IJACSA.2024.0150884</doi>
        <lastModDate>2024-08-30T08:07:27.4700000+00:00</lastModDate>
        
        <creator>Huma Khan</creator>
        
        <creator>Mithun DSouza</creator>
        
        <creator>K. Suresh Babu</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>K. R. Praneeth</creator>
        
        <creator>Pinapati Lakshmana Rao</creator>
        
        <subject>Tuberculosis (TB); chest x-ray; stacked loopy decision trees (SLDT); moth search algorithm (MSA); medical imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Chest X-ray imaging is the main tool for detecting tuberculosis (TB), providing essential information about pulmonary abnormalities that may indicate the presence of the disease. However, older diagnostic methods commonly rely on manual interpretation, which can be laborious and subjective. The development of sophisticated machine learning methods offers a promising way to improve TB detection through the automation of chest X-ray image interpretation. This study aims to develop a robust framework for TB diagnosis using Stacked Loopy Decision Trees (SLDT) optimized with the Moth Search Algorithm (MSA). The objective is to improve upon current techniques by integrating sophisticated feature extraction and ensemble learning strategies. The novelty lies in the integration of SLDT, a hierarchical ensemble model capable of capturing complex patterns in chest X-ray images, with MSA for optimized parameter tuning and feature selection. This technique addresses the complexity of TB diagnosis by enhancing both interpretability and overall performance metrics. The proposed framework employs the Gray-Level Co-occurrence Matrix (GLCM) for texture feature extraction, followed by SLDT ensemble learning optimized through MSA. This methodology aims to discern TB-specific patterns from chest X-ray images with high accuracy and efficiency. Evaluation on a comprehensive dataset demonstrates superior performance metrics, including accuracy, sensitivity, specificity, and area under the ROC curve (AUC-ROC), compared to traditional machine learning approaches. The outcomes demonstrate how well the SLDT-MSA framework performs in diagnosing TB, with 99% accuracy. The study indicates that the SLDT-MSA framework offers practitioners a trustworthy and easily understandable solution, marking a significant advancement in TB diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_84-Enhancing_Tuberculosis_Diagnosis_and_Treatment_Outcomes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Impact of Yoga Practices to Improve Chronic Venous Insufficiency Symptoms: A Classification by Gaussian Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150882</link>
        <id>10.14569/IJACSA.2024.0150882</id>
        <doi>10.14569/IJACSA.2024.0150882</doi>
        <lastModDate>2024-08-30T08:07:27.4530000+00:00</lastModDate>
        
        <creator>Feng Yun Gou</creator>
        
        <subject>Chronic venous insufficiency; yoga; Gaussian Process Classifier; Crystal Structure Algorithm; Fire Hawk Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Chronic Venous Insufficiency (CVI) is a widespread condition marked by diverse venous system irregularities stemming from occlusion and varicosities. Factors like family history and lifestyle choices amplify CVI&#39;s economic consequences, emphasizing the need for proactive measures. The sedentary lifestyle of many individuals can contribute to various diseases, including CVI. Yoga is now endorsed as a multifaceted exercise to alleviate CVI symptoms, offering a holistic approach and complementary therapy for diverse medical conditions. This study developed a method for evaluating and classifying symptoms associated with varicose veins, utilizing Venous Clinical Severity Score (VCSS) data. A specific emphasis was placed on investigating the impact of yoga on these symptoms, and a comprehensive performance assessment was conducted based on data obtained from a cohort of 100 patients. This paper achieves optimal performance by employing the Gaussian Process Classifier (GPC) along with two optimizers, namely the Crystal Structure Algorithm (CSA) and the Fire Hawk Optimizer (FHO). The results indicate that in predicting VCSS-Pre (reflecting symptoms before engaging in yoga exercises), the GPFH exhibited superior performance with an F1-score of 0.872, surpassing the GPCS, which attained an F1-score of 0.861, by almost 1.26%. Additionally, the prediction for VCSS-1, reflecting symptoms after one month of yoga practice, revealed the GPFH outperforming the GPCS with respective F1-score values of 0.910 and 0.901.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_82-Evaluating_the_Impact_of_Yoga_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid of Extreme Learning Machine and Cellular Neural Network Segmentation in Mangrove Fruit Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150881</link>
        <id>10.14569/IJACSA.2024.0150881</id>
        <doi>10.14569/IJACSA.2024.0150881</doi>
        <lastModDate>2024-08-30T08:07:27.4370000+00:00</lastModDate>
        
        <creator>Romi Fadillah Rahmat</creator>
        
        <creator>Opim Salim Sitompul</creator>
        
        <creator>Maya Silvi Lydia</creator>
        
        <creator>Fahmi</creator>
        
        <creator>Shifani Adriani Ch</creator>
        
        <creator>Pauzi Ibrahim Nainggolan</creator>
        
        <creator>Riza Sulaiman</creator>
        
        <subject>Mangroves; mangrove conservation; image processing; ecological informatics; cellular neural network; extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Mangroves are a collection of plants that inhabit the intertidal zone, the area between the lowest and highest points reached by the tide. Overall, mangroves provide a range of advantages, including the prevention of coastal erosion, the inhibition of seawater intrusion onto land that leads to brackish groundwater, and serving as habitats and food sources for diverse animal species. In addition, many types of mangrove fruit have been used as sustenance for humans and as ingredients in processed food products. Mangrove fruit comprises a considerable variety of species, each characterized by distinct forms. At present, farmers and the general public rely only on visual observation to identify mangrove fruit species, so their ability to accurately identify the correct species is not guaranteed. To address this issue, this study employs digital image processing using the Extreme Learning Machine technique to help the general public and farmers identify the various kinds and varieties of mangrove fruit. The study utilizes gray-scaling and contrast enhancement as image processing methods, while segmentation is performed using the Cellular Neural Network approach. Following extensive testing, it was determined that the methodology effectively identified several species of mangrove fruit. The results yielded an accuracy rate of 94.11% when extracting shape, texture, and color features, and an accuracy rate of 99.63% when extracting texture and color features.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_81-A_Hybrid_of_Extreme_Learning_Machine_and_Cellular_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Robotic Force Control for Automation of Ultrasound Scanning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150879</link>
        <id>10.14569/IJACSA.2024.0150879</id>
        <doi>10.14569/IJACSA.2024.0150879</doi>
        <lastModDate>2024-08-30T08:07:27.4230000+00:00</lastModDate>
        
        <creator>Ungku Muhammad Zuhairi Ungku Zakaria</creator>
        
        <creator>Seri Mastura Mustaza</creator>
        
        <creator>Mohd Hairi Mohd Zaman</creator>
        
        <creator>Ashrani Aizzuddin Abd Rahni</creator>
        
        <subject>Real-time; position control; force control; ROS; collaborative robot; admittance control; kinematics; dynamic; tool center point; damping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Ablation represents a minimally invasive option for liver cancer treatment, commonly guided by imaging techniques such as ultrasound. Recently, there has been a surge in interest in semi-automated or fully automated robotic image acquisition. Specifically, there is a continuing interest in automating medical ultrasound image acquisition due to ultrasound being widely used, having a lower cost, and being more portable than other imaging modalities. This study explores automated robot-assisted ultrasound imaging for liver ablation procedures. The study proposed utilizing a collaborative robot arm from Universal Robots (UR), which has gained popularity across various medical applications. A robotic real-time force control system was designed and demonstrated to regulate the contact force exerted by the robot on the surface of a torso phantom, ensuring optimal contact during ultrasound imaging. The Robot Operating System (ROS) and the UR Real-Time Data Exchange (UR-RTDE) interface were employed to control the robot. The findings indicate that the contact force can be maintained around a set desired value of 9 N. However, deviations occur due to residual forces from acceleration when the probe is not in contact with the phantom. These results provide a foundation for further advancements in the automation of ultrasound scanning.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_79-Real_Time_Robotic_Force_Control_for_Automation_of_Ultrasound_Scanning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Model for Enhancing Automated Recycling Machine with Incentive Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150880</link>
        <id>10.14569/IJACSA.2024.0150880</id>
        <doi>10.14569/IJACSA.2024.0150880</doi>
        <lastModDate>2024-08-30T08:07:27.4230000+00:00</lastModDate>
        
        <creator>Razali Tomari</creator>
        
        <creator>Aeslina Abdul Kadir</creator>
        
        <creator>Wan Nurshazwani Wan Zakaria</creator>
        
        <creator>Dipankar Das</creator>
        
        <creator>Muhamad Bakhtiar Azni</creator>
        
        <subject>Recycling machine; You Only Look Once (YOLO); vision system; interactive recycling; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>An Automated Recycling Machine (ARM) can be defined as an interactive tool to foster a recycling culture in the community by providing incentives to users who deposit recyclable items. To enable this, the machine crucially needs a material validation module to identify the deposited recyclable items. Utilizing a combination of sensors for this purpose is tedious; hence, a vision-based YOLO detection framework is proposed to identify three types of recyclable material: aluminum cans, PET bottles, and tetra-paks. Initially, 14883 training samples and 937 validation samples were fed to various YOLO variants to investigate an optimal model that can yield high accuracy and is suitable for CPU usage during the inference stage. Next, the user interface was constructed to communicate effectively with the user operating the ARM through easy-to-use graphical instructions. Eventually, the ARM body was designed and built with durable material for use in indoor and outdoor conditions. From a series of experiments, it can be concluded that the YOLOv8-m detection model is well suited for ARM material identification, with a 0.949 mAP@0.5:0.95 score and a 0.997 F1 score. Field testing showed that the ARM effectively encourages recycling, evidenced by the significant number of recyclable items collected.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_80-Deep_Learning_Model_for_Enhancing_Automated_Recycling_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Small and Medium-Sized Enterprises Cybersecurity Program Assessment Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150878</link>
        <id>10.14569/IJACSA.2024.0150878</id>
        <doi>10.14569/IJACSA.2024.0150878</doi>
        <lastModDate>2024-08-30T08:07:27.4070000+00:00</lastModDate>
        
        <creator>Wan Nur Eliana Wan Mohd Ludin</creator>
        
        <creator>Masnizah Mohd</creator>
        
        <creator>Wan Fariza Paizi@Fauzi</creator>
        
        <subject>Cybersecurity; SMEs; cybersecurity program assessment models; cybersecurity assessment frameworks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the digital age, Small and Medium-sized Enterprises (SMEs) must review and improve their cybersecurity posture to combat rising risks. This paper thoroughly compares SME cybersecurity program assessment approaches. The National Institute of Standards and Technology&#39;s Cybersecurity Framework (NIST CSF), the CyberSecurity Readiness Model for SMEs (CSRM-SME), the Cybersecurity Evaluation Model (CSEM), and the Adaptable Security Maturity Assessment and Standardisation (ASMAS) framework were examined. The NIST CSF is adaptable and applicable to many sectors, while the CSRM provides a standardized way to assess an organization&#39;s cyber readiness. The CSRM-SME addresses SMEs&#39; particular issues, such as resource limits and operational scale. Organizations may examine and improve their cybersecurity with CSEM, which can be applied to SMEs, higher education institutions, and industrial control systems. The ASMAS architecture is flexible for continual security enhancement due to its scalability and standardization. This comparative analysis shows each framework&#39;s strengths and weaknesses, revealing their suitability for diverse SME scenarios, and helps SMEs choose the best model to strengthen cybersecurity, boost resilience, and meet global standards.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_78-Comparative_Analysis_of_Small_and_Medium_Sized_Enterprises_Cybersecurity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Orchard Cultivation Through Drone Technology and Deep Stream Algorithms in Precision Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150877</link>
        <id>10.14569/IJACSA.2024.0150877</id>
        <doi>10.14569/IJACSA.2024.0150877</doi>
        <lastModDate>2024-08-30T08:07:27.3900000+00:00</lastModDate>
        
        <creator>P. Srinivasa Rao</creator>
        
        <creator>Anantha Raman G R</creator>
        
        <creator>Madira Siva Sankara Rao</creator>
        
        <creator>K. Radha</creator>
        
        <creator>Rabie Ahmed</creator>
        
        <subject>Smart agriculture; crops; cultivation; deep stream algorithms; drone and technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The integration of cutting-edge technology in agriculture has revolutionized traditional farming practices, paving the way for Smart Agriculture. This research presents a novel approach to enhancing the cultivation of orchard crops by combining deep-stream algorithms with drone technology. Focusing on pomegranate farming, the study employs a drone system with four specialized cameras: thermal, optical RGB, multi-spectral, and LiDAR. These cameras facilitate comprehensive data collection and analysis throughout the crop growth cycle. The thermal camera monitors plant health, yield estimation, fertilizer management, and irrigation mapping. The optical RGB camera contributes to crop management by analyzing vegetation indices, assessing fruit quality, and detecting weeds. The multi-spectral and hyperspectral cameras enable early detection of crop diseases and assessment of damaged crops. LiDAR aids in understanding crop growth by measuring plant height, tracking phenology, and analyzing water flow patterns. The data collected is processed in real-time using Deep Stream algorithms on an NVIDIA Jetson GPU, providing predictive insights into key farming characteristics. Our model demonstrated superior performance compared to four established models—MDC, MLP, SVM, and ANFIS—achieving the highest accuracy (95%), sensitivity (94%), specificity (96%), and precision (91%). This integrated method offers a robust solution for precision agriculture, empowering farmers to optimize crop management, enhance productivity, and promote sustainable agriculture practices.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_77-Enhancing_Orchard_Cultivation_Through_Drone_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malicious Website Detection Using Random Forest and Pearson Correlation for Effective Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150876</link>
        <id>10.14569/IJACSA.2024.0150876</id>
        <doi>10.14569/IJACSA.2024.0150876</doi>
        <lastModDate>2024-08-30T08:07:27.3770000+00:00</lastModDate>
        
        <creator>Esha Sangra</creator>
        
        <creator>Renuka Agrawal</creator>
        
        <creator>Pravin Ramesh Gundalwar</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Divyansh Bangri</creator>
        
        <creator>Debadrita Nandi</creator>
        
        <subject>Malicious URL; machine learning; feature selection; Random Forest; cybercrime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In recent years, the internet has expanded rapidly, driving significant advancements in digitalization that have transformed day-to-day life. Its growing influence on consumers and the economy has increased the risk of cyberattacks, as cybercriminals exploit network misconfigurations and security vulnerabilities during these transitions. Among the many forms of cyberattack, phishing remains the most common form of cybercrime. Phishing via malicious Uniform Resource Locators (URLs) threatens potential victims by posing as an imposter and stealing critical and sensitive data. The increase in phishing-based cyberattacks needs immediate attention to find a scalable solution. Earlier techniques such as blacklisting, signature matching, and regular expression matching are insufficient because they require the rule engine or signature database to be updated regularly. Significant research has recently been conducted on using Machine Learning (ML) models to detect malicious URLs. In this study, the authors highlight the importance of selecting significant features for training ML models to detect malicious URLs. Pearson correlation is employed for selecting significant features, and the outcome demonstrates that, in terms of accuracy and other performance indices, the Random Forest classifier outperforms the other classifiers.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_76-Malicious_Website_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Based Vaccination Record Tracking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150874</link>
        <id>10.14569/IJACSA.2024.0150874</id>
        <doi>10.14569/IJACSA.2024.0150874</doi>
        <lastModDate>2024-08-30T08:07:27.3600000+00:00</lastModDate>
        
        <creator>Shwetha G K</creator>
        
        <creator>Jayantkumar A Rathod</creator>
        
        <creator>Naveen G</creator>
        
        <creator>Mounesh Arkachari</creator>
        
        <creator>Pushparani M K</creator>
        
        <subject>Blockchain technology; decentralized; vaccine record tracking; integrity; smart contracts; vaccination record storage; traceability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Blockchain technology is essentially a decentralized database maintained by the relevant parties and has been extensively used in various scenarios such as logistics and finance. It is becoming increasingly important in medical applications because a patient&#39;s symptoms may be related to a certain vaccine, and whether the patient has been vaccinated with that vaccine will lead the doctor to different diagnostic conclusions. This study proposes a traceable blockchain-based vaccination record (VR) storage and sharing system. In the proposed system, the patient receives a vaccination at any licensed clinic, and the VR, accompanied by a signature, is saved into the blockchain center, which ensures traceability. When the patient visits the hospital for treatment, the doctor can obtain the details of the VR from the blockchain center and then make a diagnosis. The security of the proposed system is protected by programmed smart contracts, and proper record storage after encryption ensures data privacy, integrity and security. Blockchain traceability uses blockchain technology to record the movement of a product through the supply chain.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_74-Blockchain_Based_Vaccination_Record_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhance the Security of the Cloud Using a Hybrid Optimization-Based Proxy Re-Encryption Technique Considered Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150875</link>
        <id>10.14569/IJACSA.2024.0150875</id>
        <doi>10.14569/IJACSA.2024.0150875</doi>
        <lastModDate>2024-08-30T08:07:27.3600000+00:00</lastModDate>
        
        <creator>Ahmed I. Alutaibi</creator>
        
        <subject>Cloud security; Internet of Things; proxy re-encryption; blockchain; data sharing; hybrid optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Every day, a vast amount of data with incalculable value is generated by the IoT devices deployed in various types of applications. It is crucial to ensure the reliability and safety of IoT data exchange in a cloud context because this data frequently contains the user&#39;s private information. This study presents a novel encrypted data storage and security system using the blockchain method in conjunction with hybrid optimization-based proxy re-encryption (HO-PREB). Dependency on outside central service providers is eliminated by the HO-PREB-based consensus process. In the blockchain system, several consensus nodes serve as proxy service nodes to restore encrypted data and merge transformed ciphertext with private data. Hybrid owl and bat optimization is employed to select the optimal key for enhancing security. This removes the limitations associated with securely storing and distributing private encrypted data via a distributed network. Moreover, the blockchain&#39;s distributed ledger ensures the permanent storage of data-sharing records and outcomes, ensuring accuracy and dependability. Simulated experiments of the designed model are evaluated against existing cryptographic techniques, achieving a lower latency of 3.2 s and a lower turnaround time of 45 ms. Furthermore, the developed technique enhances cloud system security and possesses the capability to detect and mitigate attacks in the cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_75-Enhance_the_Security_of_the_Cloud_Using_a_Hybrid_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Pretrained VGG19 Model and Image Segmentation for Rice Leaf Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150873</link>
        <id>10.14569/IJACSA.2024.0150873</id>
        <doi>10.14569/IJACSA.2024.0150873</doi>
        <lastModDate>2024-08-30T08:07:27.3430000+00:00</lastModDate>
        
        <creator>Gulbakhram Beissenova</creator>
        
        <creator>Almira Madiyarova</creator>
        
        <creator>Akbayan Aliyeva</creator>
        
        <creator>Gulsara Mambetaliyeva</creator>
        
        <creator>Yerzhan Koshkarov</creator>
        
        <creator>Nagima Sarsenbiyeva</creator>
        
        <creator>Marzhan Chazhabayeva</creator>
        
        <creator>Gulnara Seidaliyeva</creator>
        
        <subject>Rice crop diseases; convolutional neural networks; VGG19 model; image segmentation; disease classification; data augmentation; model generalization; sustainable farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study explores the application of the VGG19 convolutional neural network (CNN) model, pre-trained on ImageNet, for the classification of rice crop diseases using image segmentation techniques. The research aims to enhance disease detection accuracy by integrating a robust deep learning framework tailored to the specific challenges of agricultural pathology. A dataset comprising 200 images categorized into four disease classes was employed to train and validate the model. Techniques such as data augmentation, dropout, and dynamic learning rate adjustments were utilized to improve model training and prevent overfitting. The model&#39;s performance was evaluated using metrics including accuracy, precision, recall, and F1-score, with a particular focus on the ability to generalize to unseen data. Results indicated a high overall accuracy exceeding 90%, showcasing the model’s capability to effectively classify rice crop diseases. Challenges such as class-specific misclassification were addressed through the model’s learning strategy, highlighting areas for further enhancement. The research contributes to the field by demonstrating the potential of using pre-trained CNN models, adapted through fine-tuning and robust training techniques, to significantly improve disease detection in crops, thereby supporting sustainable agricultural practices and enhancing food security. Future work will explore the integration of multimodal data and real-time detection systems to broaden the applicability and effectiveness of the technology in diverse agricultural settings.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_73-Using_Pretrained_VGG19_Model_and_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combined Framework for Type-2 Neutrosophic Number Multiple-Attribute Decision-Making and Applications to Quality Evaluation of Digital Agriculture Park Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150872</link>
        <id>10.14569/IJACSA.2024.0150872</id>
        <doi>10.14569/IJACSA.2024.0150872</doi>
        <lastModDate>2024-08-30T08:07:27.3300000+00:00</lastModDate>
        
        <creator>Wei Ji</creator>
        
        <creator>Ning Sun</creator>
        
        <creator>Botao Cao</creator>
        
        <creator>Xichan Mu</creator>
        
        <subject>Multiple-Attribute Decision-Making (MADM); Type-2 Neutrosophic Numbers (T2NNs); ExpTODIM approach; TOPSIS approach; quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>A digital agriculture park refers to an agricultural production and organizational unit of a certain scale where digital technology is used to optimize the agricultural supply chain. It enhances park management and service levels, achieving a new development model characterized by safe, low-carbon, high-quality, high-yield, precise, and efficient production, management, service, and operation. The quality evaluation of a digital agriculture park information system is a multiple-attribute decision-making (MADM) problem. The Exponential TODIM (ExpTODIM) and TOPSIS approaches have previously been put forward for MADM. Type-2 neutrosophic numbers (T2NNs) are employed to portray fuzzy information during the quality evaluation of digital agriculture park information systems. In this work, the Type-2 neutrosophic number Exponential TODIM-TOPSIS (T2NN-ExpTODIM-TOPSIS) approach is put forward for MADM under T2NNs. Finally, a numerical study on the quality evaluation of a digital agriculture park information system is conducted to demonstrate the T2NN-ExpTODIM-TOPSIS approach. The major contributions are as follows: (1) the ExpTODIM and TOPSIS approaches are enhanced under IFSs; (2) entropy is used to derive attribute weights from score values under T2NNs; (3) the T2NN-ExpTODIM-TOPSIS approach is put forward for MADM under T2NNs; (4) a numerical example on the quality evaluation of a digital agriculture park information system and several comparative analyses are presented to verify the validity of T2NN-ExpTODIM-TOPSIS.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_72-Combined_Framework_for_Type_2_Neutrosophic_Number.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing Optimization Methods into Practice to Enhance the Performance of Solar Power Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150871</link>
        <id>10.14569/IJACSA.2024.0150871</id>
        <doi>10.14569/IJACSA.2024.0150871</doi>
        <lastModDate>2024-08-30T08:07:27.3300000+00:00</lastModDate>
        
        <creator>Lu&#231;iana Toti</creator>
        
        <creator>Alma Stana</creator>
        
        <creator>Alma Golgota</creator>
        
        <creator>Eno Toti</creator>
        
        <subject>Optimization; photovoltaic systems; education; performance; controller; SIMULINK</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The use of contemporary technological tools and the modernization of curricula in the field of electronic and electrical engineering are among the main objectives of university academic staff. Efforts to improve curricula and scientific research infrastructure find strong support from the CBHE programs funded by the EU. The purpose of this work is to apply optimization methods for improving photovoltaic system performance using digital technologies, developing students&#39; theoretical and practical skills for sustainable development in the field of energy. Optimizing the photovoltaic system by adding a booster and an MPPT controller to the existing architecture, supported by SIMULINK, will increase its energy efficiency, increasing the University&#39;s economic benefit and, moreover, the ecological benefits for the population. The implementation of the optimization algorithms will strengthen the simulation skills of academic staff and students, enabling a more in-depth analysis of RES implementation, an analysis which until now has been developed only through software-based data collection from the system. The case study is the utilization of a photovoltaic system in the area of the University of Durres, a sustainable development area in both the public and private sectors after the 2019 earthquake. The study takes a transdisciplinary approach that contributes to educating the new generation towards a green society and economy. It includes knowledge from the field of electrical engineering aimed at increasing the performance of renewable energy systems, as well as the analysis of electronic circuits with the help of optimization algorithms and various ICT tools.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_71-Implementing_Optimization_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning and Computer Vision-Based System for Detecting and Separating Abnormal Bags in Automatic Bagging Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150870</link>
        <id>10.14569/IJACSA.2024.0150870</id>
        <doi>10.14569/IJACSA.2024.0150870</doi>
        <lastModDate>2024-08-30T08:07:27.3130000+00:00</lastModDate>
        
        <creator>Trung Dung Nguyen</creator>
        
        <creator>Thanh Quyen Ngo</creator>
        
        <creator>Chi Kien Ha</creator>
        
        <subject>Automatic bagging machines; deep learning; computer vision; bags classification; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper presents a novel deep learning and computer vision-based system for detecting and separating abnormal bags within automatic bagging machines, addressing a key challenge in industrial quality control. The core of our approach is the development of a data collection system seamlessly integrated into the production line. This system captures a comprehensive variety of bag images, ensuring a dataset representative of real-world variability. To augment the quantity and quality of our training data, we implement both offline and online data augmentation techniques. For classifying normal and abnormal bags, we design a lightweight deep learning model based on the residual network for deployment on computationally constrained devices. Specifically, we improve the initial convolutional layer by utilizing ghost convolution and implement a reduced channel strategy across the network layers. Additionally, knowledge distillation is employed to refine the model&#39;s performance by transferring insights from a fully trained, more complex network. We conduct extensive comparisons with other convolutional neural network models, demonstrating that our proposed model achieves superior performance in classifying bags while maintaining high efficiency. Ablation studies further validate the contribution of each modification to the model&#39;s success. Upon deployment, the model demonstrates robust accuracy and operational efficiency in a live production environment. The system provides significant improvements in automatic bagging processes, combining accuracy with practical applicability in industrial settings.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_70-Deep_Learning_and_Computer_Vision_Based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DMMFnet: A Dual-Branch Multimodal Medical Image Fusion Network Using Super Token and Channel-Spatial Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150869</link>
        <id>10.14569/IJACSA.2024.0150869</id>
        <doi>10.14569/IJACSA.2024.0150869</doi>
        <lastModDate>2024-08-30T08:07:27.2970000+00:00</lastModDate>
        
        <creator>Yukun Zhang</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Zizhen Huang</creator>
        
        <creator>Yaolong Han</creator>
        
        <creator>Shanliang Yang</creator>
        
        <creator>Shilong Liu</creator>
        
        <creator>Muhammad Imran Saeed</creator>
        
        <subject>Medical image fusion; channel-spatial attention; super token sampling; encoder–decoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Multimodal medical image fusion leverages the correlation between different modal images to enhance the information contained within a single medical image. Existing fusion methods often fail to effectively extract multiscale features from medical images and establish long-distance relationships between deep feature blocks. To address these issues, we propose DMMFnet, an encoder-decoder fusion network that utilizes shared and private encoders to extract shared and private features. DMMFnet is based on super token sampling and channel-spatial attention. The shared encoder and decoder use a transformer structure with super token sampling technology to effectively integrate information from different modalities, improving processing efficiency and enhancing the ability to capture key features. The private encoder consists of invertible neural networks and transformer modules, designed to extract local and global features, respectively. A novel transformer module refines attention distribution and feature aggregation to capture superpixel-level global correlations, ensuring that the network effectively captures essential global information, thereby enhancing the quality of the fused image. Experimental results, comparing DMMFnet with nine leading fusion methods, indicate that DMMFnet significantly improves various evaluation metrics and achieves superior visual effects, demonstrating its advanced fusion capability.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_69-DMMFnet_A_Dual_Branch_Multimodal_Medical_Image_Fusion_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise Reduction Techniques in ADAS Sensor Data Management: Methods and Comparative Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150868</link>
        <id>10.14569/IJACSA.2024.0150868</id>
        <doi>10.14569/IJACSA.2024.0150868</doi>
        <lastModDate>2024-08-30T08:07:27.2830000+00:00</lastModDate>
        
        <creator>Ahmed Alami</creator>
        
        <creator>Fouad Belmajdoub</creator>
        
        <subject>ADAS; sensor data management; noise reduction; KalmanNet; Wavelet Denoising; RADAR; SMA; EMA; LPF; HIL bench testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This review examines noise reduction techniques in Advanced Driver Assistance Systems (ADAS) sensor data management, crucial for enhancing vehicle safety and performance. ADAS relies on real-time data from conventional sensors (e.g., wheel speed sensors, LiDAR, radar, cameras) and MEMS sensors (e.g., accelerometers, gyroscopes) to execute critical functions like lane keeping, collision avoidance, and adaptive cruise control. These sensors are susceptible to thermal noise, mechanical vibrations, and environmental interferences, which degrade system performance. We explore filtering techniques including KalmanNet, Simple Moving Average (SMA), Exponential Moving Average (EMA), Wavelet Denoising, and Low Pass Filtering (LPF), assessing their efficacy in noise reduction and data integrity improvement. These methods are compared using key performance metrics such as Signal-to-Noise Ratio (SNR), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). Recent advancements in hybrid filtering approaches and adaptive algorithms are discussed, highlighting their strengths and limitations for different sensor types and ADAS functionalities. Findings demonstrate the superior performance of Wavelet Denoising for non-stationary signals, the effectiveness of SMA and EMA for smoother signal variations, and the excellence of LPF in high-frequency noise attenuation with careful tuning. KalmanNet showed significant improvements in noise reduction and data accuracy, particularly in complex and dynamic environments. The Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) were especially effective for RADAR sensors, handling non-linearities and providing accurate state estimation. Emphasizing Hardware-in-the-Loop (HIL) bench testing to validate these techniques in real-world scenarios, this study underscores the importance of selecting appropriate methods based on specific noise characteristics and system requirements. This research provides valuable insights for the development of ADAS and autonomous driving technologies, emphasizing the critical role of precise signal processing in ensuring accurate sensor data interpretation and decision-making.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_68-Noise_Reduction_Techniques_in_Adas_Sensor_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Expression Real-Time Animation Simulation Technology for General Mobile Platforms Based on OpenGL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150867</link>
        <id>10.14569/IJACSA.2024.0150867</id>
        <doi>10.14569/IJACSA.2024.0150867</doi>
        <lastModDate>2024-08-30T08:07:27.2670000+00:00</lastModDate>
        
        <creator>Mingzhe Cao</creator>
        
        <subject>OpenGL; mobile platform; facial expressions; animation simulation; rendering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>With the popularization and development of mobile devices, the demand for image processing continues to increase. However, the limited hardware resources of mobile devices make traditional CPU computing unable to meet the requirements of real-time image processing. In response to the limited rendering resources of mobile platforms, this study adopts OpenGL for graphical interface design and real-time facial expression animation, controlling changes in facial expressions in real time and developing effective expression fusion methods. By combining rich rendering effects such as particle effects, facial expressions can be expressed more realistically and engagingly in 3D models. The results indicated that the proposed method required less than 50 MB of memory, and the average accuracy of facial expression recognition was significantly improved. The final normalized average error was close to 4%, indicating high accuracy. The processing time for each image was around 19.4 ms, enabling real-time facial expression animation simulation with strong universality and flexibility. This method optimizes the real-time performance, stability, and user experience of facial expression animation simulation, and can meet the needs of different application scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_67-Facial_Expression_Real_Time_Animation_Simulation_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Data-Driven Machine Learning Models for Heating Load Prediction: Single and Optimized Naive Bayes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150866</link>
        <id>10.14569/IJACSA.2024.0150866</id>
        <doi>10.14569/IJACSA.2024.0150866</doi>
        <lastModDate>2024-08-30T08:07:27.2500000+00:00</lastModDate>
        
        <creator>Fangyuan Li</creator>
        
        <subject>Prediction models; heating load demand; building energy consumption; Naive Bayes; metaheuristic optimization algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Numerous approaches can be employed to create models for assessing the heat gains of a building arising from both external and internal sources. This modeling process evaluates effective operational strategies, conducts retrofit audits, and projects energy consumption. These techniques range from simple regression analyses to more intricate models grounded in physical principles. A prevalent assumption underlying all these modeling techniques is the requirement for input variables to be derived from authentic data, as the absence of realistic input data can lead to substantial underestimations or overestimations in energy consumption assessments. In this paper, eight input parameters, including relative compactness, orientation, wall area, roof area, glazing area, overall height, surface area, and glazing area distribution, are employed for training the proposed Naive Bayes (NB)-based machine learning models. Utilizing a novel approach, this research explores the application of Beluga Whale Optimization and the Coot Optimization algorithm for optimizing the Naive Bayes model in heating load prediction. By harnessing the collective intelligence of beluga whales and drawing from the cooperative behavior of coots, the research aims to improve the model&#39;s predictive capabilities, which is of paramount importance in building energy management. The comparative analysis between the developed models (NB, NBCO, and NBBW) shows that NBCO and NBBW, the two optimized models, achieve 2.4% and 1.3% higher R2 values, respectively. Also, the RMSE of NBCO was, on average, 19-33% lower than that of the two other models, confirming the high accuracy of NBCO. This innovative integration of bio-inspired optimization techniques with machine learning demonstrates a promising avenue for optimizing predictive models, offering potential energy efficiency and sustainability advancements in the built environment.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_66-Novel_Data_Driven_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UWB Printed MIMO Antennas for Satellite Sensing System (SRSS) Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150865</link>
        <id>10.14569/IJACSA.2024.0150865</id>
        <doi>10.14569/IJACSA.2024.0150865</doi>
        <lastModDate>2024-08-30T08:07:27.2370000+00:00</lastModDate>
        
        <creator>Wyssem Fathallah</creator>
        
        <creator>Chafai Abdelhamid</creator>
        
        <creator>Chokri Baccouch</creator>
        
        <creator>Alsharef Mohammad</creator>
        
        <creator>Khalil Jouili</creator>
        
        <creator>Hedi Sakli</creator>
        
        <subject>5G antenna; 5G satellite networks; millimeter band; wireless communications; SRR; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The deployment of ultra-wideband (UWB) technology offers enhanced capabilities for various Internet of Things (IoT) applications, including smart cities, smart buildings, smart aggregation, and smart healthcare. UWB technology supports high data rate communication over short distances with very low power densities. This paper presents a UWB printed antenna design with multiple-input multiple-output (MIMO) capabilities, specifically tailored for Routed Satellite Sensor Systems (SRSS) to enhance IoT applications. The proposed UWB printed antenna, designed for the 2–18 GHz frequency band, has overall dimensions of 14.5 mm x 14.5 mm, with an efficiency exceeding 70% and a gain ranging from 2 to 6.5 dB. Both simulated and measured reflection parameters (|S11|) at the antenna input show strong agreement. Furthermore, a compact MIMO system is introduced, featuring four closely spaced antennas with a gap of 0.03λ, housed in a 60 mm x 48 mm module. To minimize coupling effects between the antennas, the design incorporates five Split Ring Resonator (SRR) elements arranged linearly between the radiating elements. This arrangement achieves a mutual coupling reduction to -35 dB at 8 GHz, compared to -20 dB isolation in systems without SRR. The results demonstrate that the proposed MIMO antenna system offers promising performance and meets the requirements for effective space communication within satellite sensor networks.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_65-UWB_Printed_MIMO_Antennas_for_Satellite_Sensing_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Cloud-Enabled Wireless Ad-Hoc Networks: A Novel Cross-Layer Validation Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150864</link>
        <id>10.14569/IJACSA.2024.0150864</id>
        <doi>10.14569/IJACSA.2024.0150864</doi>
        <lastModDate>2024-08-30T08:07:27.2200000+00:00</lastModDate>
        
        <creator>Zhenguo LIU</creator>
        
        <subject>Network security; wireless ad-hoc networks; cloud environments; cross-layer validation; blackhole and wormhole attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Network security tackles a broad spectrum of damaging activities that threaten network infrastructure. Addressing these risks is essential to keep data accurate and networks running. This research aims to detect and prevent blackhole and wormhole attacks in cloud-based wireless ad-hoc networks. A new Cross-Layer Validation Mechanism (CLVM) is introduced to detect and counter these dangerous attacks. CLVM boosts network security and secures data transmission through cross-layer interactions. CLVM is tested using the NS2 simulator by performing several simulations and comparing the results with those of previous methods. The results show that CLVM effectively defends against blackhole and wormhole attacks, making it a valuable additional service for cloud computing. CLVM provides a strong defense against emerging security threats, ensuring the network stays reliable and safe.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_64-Towards_Secure_Cloud_Enabled_Wireless_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Plant Disease Detection System Using Advanced Convolutional Neural Network-Based Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150863</link>
        <id>10.14569/IJACSA.2024.0150863</id>
        <doi>10.14569/IJACSA.2024.0150863</doi>
        <lastModDate>2024-08-30T08:07:27.2030000+00:00</lastModDate>
        
        <creator>Sai Krishna Gudepu</creator>
        
        <creator>Vijay Kumar Burugari</creator>
        
        <subject>Plant disease detection; advanced CNN; Artificial Intelligence (AI); deep learning; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>With technology innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT), unprecedented applications and solutions to real-world problems become possible. Precision agriculture is one such domain, aimed at technology-driven agriculture. So far, research on agriculture and the use of these technologies has largely remained at the government level, reaping their benefits in crop yield prediction and identifying cultivated areas; the fruits of these technologies have not yet reached farmers. Farmers still suffer from many problems, such as natural calamities, reduced crop yields, high expenditure, and lack of technical support. Plant disease is an important problem faced by farmers, as they cannot detect diseases early, so there is a need for early plant disease detection in agriculture. From related work, it is known that deep learning techniques such as the Convolutional Neural Network (CNN) are well suited to processing image data to solve real-world problems. However, as one size does not fit all, a CNN cannot solve every problem without its layers being adapted to the problem at hand. Towards this end, we designed an automatic plant disease detection system by proposing an advanced CNN model and an algorithm known as Advanced CNN for Plant Disease Detection (ACNN-PDD) to realize the proposed system. Our system is evaluated on PlantVillage, a benchmark dataset for crop disease detection, and on a real-time dataset captured from live agricultural fields. The experimental outcomes show the utility of the proposed system: the ACNN-PDD model achieves 96.83% accuracy, which is higher than many existing models. Thus, our system can be integrated with precision agriculture infrastructure to enable farmers to detect plant diseases early.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_63-Automatic_Plant_Disease_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Parallel Algorithm for Extracting Fuzzy-Crisp Formal Concepts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150862</link>
        <id>10.14569/IJACSA.2024.0150862</id>
        <doi>10.14569/IJACSA.2024.0150862</doi>
        <lastModDate>2024-08-30T08:07:27.1900000+00:00</lastModDate>
        
        <creator>Ebtesam Shemis</creator>
        
        <creator>Arabi Keshk</creator>
        
        <creator>Ammar Mohammed</creator>
        
        <creator>Gamal Elhady</creator>
        
        <subject>Fuzzy Formal Concept Analysis; single-sided fuzzy concept; fuzzy-crisp concepts; parallel algorithm; fuzzy concepts extraction; knowledge representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Fuzzy Formal Concept Analysis (FFCA) is a robust mathematical tool for analyzing data, particularly where uncertainty or fuzziness is inherent. FFCA is utilized across various domains, including data mining, information retrieval, and knowledge representation. However, fuzzy concept extraction is a crucial yet computationally intensive task. This paper addresses the challenge of time efficiency in extracting single-sided fuzzy concepts from large datasets. A parallel algorithm is proposed to reduce computational time and optimize resource utilization, thus enabling the scalable analysis of expanding datasets. Fuzzy concepts are computed across multiple threads in parallel: each thread processes an attribute independently to extract fuzzy concepts, which are then merged in the final step. The proposed algorithm extracts fuzzy-crisp concepts, which are more concise than other types of fuzzy concepts. Experiments were conducted to evaluate the performance of the proposed parallel algorithm against existing sequential methods. Experimental results demonstrate significant gains in computational efficiency, with the algorithm achieving an average time reduction of 68% compared to the attribute-based algorithm and up to an 83% time reduction compared to the fuzzy CbO algorithm across various types of datasets, including binary, quantitative, and fuzzy.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_62-Efficient_Parallel_Algorithm_for_Extracting_Fuzzy_Crisp_Formal_Concepts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ResNet50 and GRU: A Synergistic Model for Accurate Facial Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150861</link>
        <id>10.14569/IJACSA.2024.0150861</id>
        <doi>10.14569/IJACSA.2024.0150861</doi>
        <lastModDate>2024-08-30T08:07:27.1900000+00:00</lastModDate>
        
        <creator>Shanimol. A</creator>
        
        <creator>J Charles</creator>
        
        <subject>Deep convolutional neural network; ResNet-50; Facial Emotion Recognition; Gated Recurrent Unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Humans use voice, gestures, and emotions to communicate with one another. This improves the effectiveness of oral communication and facilitates the understanding of concepts. The majority of people are able to identify facial emotions with ease, regardless of gender, nationality, culture, or ethnicity. The recognition of facial expressions is becoming increasingly significant in a variety of newly developed computing applications. Facial expression detection is a hot topic in almost every industry, including marketing, artificial intelligence, gaming, and healthcare. This study proposes a novel hybrid model combining ResNet-50 and a Gated Recurrent Unit (GRU) for enhanced facial emotion recognition (FER) accuracy. The dataset for the study is taken from the Kaggle repository. ResNet-50, a deep convolutional neural network, excels in feature extraction by capturing intricate spatial hierarchies in facial images. The GRU effectively processes sequential data, capturing temporal dependencies crucial for emotion recognition. The integration of ResNet-50 and GRU leverages the strengths of both architectures, enabling robust and accurate emotion detection. Experimental results on the CK+ dataset demonstrate that the proposed hybrid model outperforms current methods, achieving a remarkable accuracy of 95.56%. This superior performance underscores the model&#39;s potential for real-world applications in diverse domains such as security, healthcare, and interactive systems.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_61-ResNet50_and_GRU_A_Synergistic_Model_for_Accurate_Facial_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UTAUT Model for Digital Mental Health Interventions: Factors Influencing User Adoption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150860</link>
        <id>10.14569/IJACSA.2024.0150860</id>
        <doi>10.14569/IJACSA.2024.0150860</doi>
        <lastModDate>2024-08-30T08:07:27.1730000+00:00</lastModDate>
        
        <creator>Mohammed Alojail</creator>
        
        <subject>Digital transformation; mental health interventions; UTAUT model; TAM; SmartPLS; user acceptance; hypothesis testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The impact of the digital revolution on mental health therapies is examined in this research. As the paper explains, the delivery of mental healthcare is being revolutionized by digital transformation, which is providing creative answers to the problems associated with mental health illnesses. However, understanding and managing user acceptance is crucial to the effective integration of digital transformation approaches into mental health therapies. To investigate users’ acceptance of digital transformation in mental health therapies, this study outlines a modeling-based method grounded in the well-established Unified Theory of Acceptance and Use of Technology (UTAUT). This study delves into the base constructs of Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Hedonic Motivation, and Value for Money, utilizing the UTAUT model as a framework. By employing Structural Equation Modelling (SEM) in a thorough study, this research aims to identify statistical correlations that impact user acceptance dynamics. To offer context-specific insights, the article also examines digital mental health solutions, including teletherapy platforms, mood monitoring smartphone applications, and virtual reality-based exposure therapy. By providing a deeper understanding of user acceptance, this study aids the creation and roll-out of digital mental health solutions, enhancing accessibility, engagement, and outcomes for people seeking mental health care.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_60-UTAUT_Model_for_Digital_Mental_Health_Interventions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dose Archiving and Communication System in Moroccan Healthcare: A Unified Approach to X-Ray Dose Management and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150859</link>
        <id>10.14569/IJACSA.2024.0150859</id>
        <doi>10.14569/IJACSA.2024.0150859</doi>
        <lastModDate>2024-08-30T08:07:27.1570000+00:00</lastModDate>
        
        <creator>Lhoucine Ben Youssef</creator>
        
        <creator>Abdelmajid Bybi</creator>
        
        <creator>Hilal Drissi</creator>
        
        <creator>El Ayachi Chater</creator>
        
        <subject>DACS; Real-time monitoring; radiation protection; radiology practice; healthcare professionals; x-ray doses; regulatory compliance; patient safety; Moroccan healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study explores the implementation of a Dose Archiving and Communication System (DACS) in Moroccan healthcare, highlighting the importance of X-ray dose management in modern radiology. It emphasizes patient safety and the ALARA principle to minimize radiation exposure while maintaining diagnostic accuracy. The research discusses advancements in imaging technologies, such as dose-reduction algorithms and real-time monitoring systems. A survey of 1000 healthcare professionals reveals significant challenges in X-ray dose management, including poor dose tracking, regulatory non-compliance, and inadequate radiation protection training. Noteworthy findings reveal that 10% of patients received doses exceeding 5 Gray, underscoring the exigency for robust dose management systems. The article delineates a strategic implementation approach for DACS in Moroccan hospitals, comprising meticulous needs assessment, infrastructure fortification, and stakeholder engagement. By harnessing cloud-based storage, blockchain technology, and industry-standard encryption protocols, the envisioned DACS endeavors to furnish a secure, scalable, and efficient framework for radiation dose management. This holistic approach, underpinned by empirical statistics regarding training in radiation protection, network infrastructure, and DACS implementation strategies, aims to elevate patient outcomes and ensure stringent regulatory compliance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_59-Dose_Archiving_and_Communication_System_in_Moroccan_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Internet of Things-Enabled Healthcare: Integrating Elliptic Curve Digital Signatures and Rivest Cipher Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150858</link>
        <id>10.14569/IJACSA.2024.0150858</id>
        <doi>10.14569/IJACSA.2024.0150858</doi>
        <lastModDate>2024-08-30T08:07:27.1400000+00:00</lastModDate>
        
        <creator>Longyang Du</creator>
        
        <creator>Tian Xie</creator>
        
        <subject>IoT-enabled healthcare; data privacy; security; hybrid security framework; SHA-256; RC4; encryption; data integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The expansion of Internet of Things (IoT) applications, such as wireless sensor networks, intelligent devices, Internet technologies, and machine-to-machine interaction, has transformed information technology in recent decades. The IoT enables the exchange of information and communication between items via an internal network. Nevertheless, this technological advancement raises the urgent issue of ensuring data privacy and security, particularly in critical sectors like healthcare. This study addresses the problem by developing a hybrid security scheme that combines the Secure Hash Algorithm (SHA-256), Rivest Cipher 4 (RC4), and the Elliptic Curve Digital Signature Scheme (ECDSS) to ensure the confidentiality and integrity of medical data transmitted by IoT-enabled healthcare systems. The hybrid model employs ECDSS to perform exclusive OR (XOR) operations inside the RC4 encryption algorithm, enhancing the RC4 encryption process by manipulating the encryption key. Moreover, SHA-256 is used to hash incoming data in order to guarantee its integrity. An empirical investigation validates the superiority of the suggested model: the framework attains a data transfer rate of 11.67 megabytes per millisecond, accompanied by an encryption duration of 846 milliseconds and a decryption duration of 627 milliseconds.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_58-Towards_Secure_Internet_of_Things_Enabled_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved YOLOv8 Method for Measuring the Body Size of Xinjiang Bactrian Camels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150857</link>
        <id>10.14569/IJACSA.2024.0150857</id>
        <doi>10.14569/IJACSA.2024.0150857</doi>
        <lastModDate>2024-08-30T08:07:27.1400000+00:00</lastModDate>
        
        <creator>Yue Peng</creator>
        
        <creator>Alifu Kurban</creator>
        
        <creator>Mengmei Sang</creator>
        
        <subject>YOLOv8; Asymptotic Feature Pyramid Network; SKAttention; Bactrian camel body size measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Camel body size measurement has begun to be applied in livestock production. However, current methods suffer from low measurement accuracy due to detection box localization loss and occlusions. This study proposes an effective algorithm, Camel-YOLOv8, specifically designed for detecting Xinjiang Bactrian camels and calculating their body sizes. By integrating the Selective Kernel Networks (SKAttention) mechanism and an enhanced Asymptotic Feature Pyramid Network structure (AFPN-beta), the algorithm successfully captures the body characteristics of Bactrian camels in natural environments and converts these into precise size data. We have developed a Xinjiang Bactrian camel body size measurement dataset and applied the enhanced YOLOv8 model for accurate classification and detection. By extracting key point pixel values and applying Zhang Zhengyou’s calibration method, the coordinate system data is converted into accurate body size measurements. Camel-YOLOv8 achieves a detection accuracy of 76.4% on the Xinjiang Bactrian camel dataset, marking a 3.7% improvement over the baseline model. In terms of body size calculation, the average relative errors for height and chest circumference are -3.39% and 4.1%, respectively, demonstrating high measurement precision. The algorithm not only maintains high detection accuracy but also achieves a reasonable balance between detection speed and efficiency, providing an effective solution for rapid acquisition of camel body size information.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_57-An_Improved_YOLOv8_Method_for_Measuring_the_Body_Size.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protein-Coding sORFs Prediction Based on U-Net and Coordinate Attention with Hybrid Encoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150856</link>
        <id>10.14569/IJACSA.2024.0150856</id>
        <doi>10.14569/IJACSA.2024.0150856</doi>
        <lastModDate>2024-08-30T08:07:27.1270000+00:00</lastModDate>
        
        <creator>Ziling Wang</creator>
        
        <creator>Wenxi Yang</creator>
        
        <creator>Zhijian Qu</creator>
        
        <subject>Small open reading frames; deep learning; hybrid coding; U-Net; coordinate attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Small proteins encoded by small open reading frames (sORFs) exhibit significant biological activity in crucial biological processes such as embryonic development and metabolism. Accurately predicting whether sORFs encode small proteins is a key challenge in current research. To address this challenge, many methods have been proposed; however, existing methods rely solely on biological features as the sequence encoding scheme, which results in high feature extraction complexity and limited applicability across species. To tackle this issue, we propose a deep learning architecture, UAsORFs, based on hybrid coding of sORF sequences. In contrast to mainstream prediction methods, this framework processes sORF sequences using a mixed encoding approach, including both one-hot and gapped k-mer encodings, which effectively captures global and local sequence information. Additionally, it autonomously learns to extract features of sORFs and captures both long-range and short-range interactions within sequences through U-Net and coordinate attention mechanisms. Our research demonstrates significant progress in predicting encoded peptides from eukaryotic and prokaryotic sORFs, particularly in improving the cross-species predictive MCC index on the eukaryotic dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_56-Protein_Coding_sORFs_Prediction_Based_on_U_Net_and_Coordinate_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Twitter Truth: Advanced Multi-Model Embedding for Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150855</link>
        <id>10.14569/IJACSA.2024.0150855</id>
        <doi>10.14569/IJACSA.2024.0150855</doi>
        <lastModDate>2024-08-30T08:07:27.1100000+00:00</lastModDate>
        
        <creator>Yasmine LAHLOU</creator>
        
        <creator>Sanaa El FKIHI</creator>
        
        <creator>Rdouan FAIZI</creator>
        
        <subject>Fake news detection; transformer-based models; text classification; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The identification of fake news represents a substantial challenge within the context of the accelerated dissemination of digital information, most notably on social media and online platforms. This study introduces a novel approach, entitled &quot;MT-FND: Multi-Model Embedding Approach to Fake News Detection,&quot; which is designed to enhance the detection of fake news. The methodology presented here integrates the strengths of multiple transformer-based models, namely BERT, ELECTRA, and XLNet, with the objective of encoding and extracting contextual information from news articles. In addition to transformer embeddings, a variety of other features are incorporated, including sentiment analysis, tweet length, word count, and graph-based features, to enrich the representation of textual content. The fusion of signals from diverse models and features provides a more comprehensive and nuanced comprehension of news articles, thereby improving the accuracy of discerning misinformation. To evaluate the efficacy of the approach, a benchmark dataset comprising both authentic and fabricated news articles was employed. The proposed framework was tested using three different machine-learning models: Random Forest (RF), Support Vector Machine (SVM), and XGBoost (XGB). The experimental results demonstrate the effectiveness of the multi-model embedding fusion approach in detecting fake news, with XGB achieving the highest performance with an accuracy of 87.28%, a precision of 85.56%, a recall of 89.53%, and an F1-score of 87.50%. These findings signify a notable improvement over traditional machine learning classifiers, underscoring the potential of this fusion approach in advancing methodologies for combating misinformation, promoting information integrity, and enhancing decision-making processes in digital media landscapes.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_55-Twitter_Truth_Advanced_Multi_Model_Embedding_for_Fake_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Laboratory Abnormal Behavior Recognition Method Based on Skeletal Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150854</link>
        <id>10.14569/IJACSA.2024.0150854</id>
        <doi>10.14569/IJACSA.2024.0150854</doi>
        <lastModDate>2024-08-30T08:07:27.0930000+00:00</lastModDate>
        
        <creator>Dawei Zhang</creator>
        
        <subject>Skeletal features; abnormal behavior recognition; OpenPose algorithm; Kinect sensor; Discrete Fourier Transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The identification of abnormal laboratory behavior is of great significance for laboratory safety monitoring and management. Traditional identification methods usually rely on cameras and other equipment, which are costly and prone to privacy leakage. In the process of human body recognition, they are easily affected by factors such as complex backgrounds, clothing, and light intensity, resulting in low recognition rates and poor recognition results. This article investigates a laboratory abnormal behavior recognition method based on skeletal features. First, Kinect sensors are used instead of traditional image sensors to obtain skeletal data of the human body, reducing external limitations such as lighting and increasing effective data collection. Next, the collected data is smoothed, aligned, and enhanced using moving average filtering, the discrete Fourier transform, and contrast enhancement, effectively improving data quality and helping to better identify abnormal behavior. Finally, the OpenPose algorithm is used to construct a laboratory abnormal behavior recognition model. OpenPose connects the entire skeleton through the relationships between the extracted human skeletal points, and is combined with multi-scale pyramid networks to improve the network structure, effectively improving the accuracy and speed of laboratory abnormal behavior recognition. The experiment shows that the accuracy, precision, and recall of the behavior recognition model constructed by the algorithm are 95.33%, 96.68%, and 93.77%, respectively. Compared with traditional anomaly detection methods, it has higher accuracy and robustness, a lower parameter count, and higher operational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_54-Laboratory_Abnormal_Behavior_Recognition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Traffic Flow Prediction Using the MSTA-GNet Model Based on the PeMS Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150853</link>
        <id>10.14569/IJACSA.2024.0150853</id>
        <doi>10.14569/IJACSA.2024.0150853</doi>
        <lastModDate>2024-08-30T08:07:27.0800000+00:00</lastModDate>
        
        <creator>Deng Cong</creator>
        
        <subject>MSTA-GNet; deep learning; PeMS dataset; traffic flow prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study introduces the MSTA-GNet (Multi-Scale Spatiotemporal Attention Graph Network), a novel deep learning model which integrates spatiotemporal self-attention mechanisms to model heterogeneous dependencies in traffic networks. The primary objective of the study is to improve existing traffic flow prediction models to address the inadequacies of traditional models in complex big data environments. Key innovations of the MSTA-GNet model include positional encoding and global and local self-attention mechanisms to capture long-term and short-term dependencies. Using the PeMS (Performance Measurement System) dataset, the study conducted performance comparison experiments among various deep learning models, including LSTM (Long Short-Term Memory), GCN (Graph Convolutional Network), DCRNN (Diffusion Convolutional Recurrent Neural Network), STGCN (Spatiotemporal Graph Convolutional Network), STMetaNet (Spatiotemporal Meta Network), and MSTA-GNet. The results showed that MSTA-GNet significantly outperformed other models with improvements of 13.4%, 11.8%, and 9.7% in Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) metrics, respectively. Ablation studies further validated the significance of attention mechanisms, feature extraction, convolutional layers, and graph networks, confirming the effectiveness and practical application of MSTA-GNet in traffic flow prediction. This research provides important insights for AI-based congestion management, support for low-carbon traffic networks, and optimization of local traffic operations, demonstrating its significant practical value in intelligent transportation systems.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_53-Research_on_Traffic_Flow_Prediction_Using_the_MSTA_GNet_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Indonesian Text Summarization with Latent Dirichlet Allocation and Maximum Marginal Relevance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150852</link>
        <id>10.14569/IJACSA.2024.0150852</id>
        <doi>10.14569/IJACSA.2024.0150852</doi>
        <lastModDate>2024-08-30T08:07:27.0630000+00:00</lastModDate>
        
        <creator>Muhammad Faisal</creator>
        
        <creator>Bima Hamdani Mawaridi</creator>
        
        <creator>Ashri Shabrina Afrah</creator>
        
        <creator>Supriyono</creator>
        
        <creator>Yunifa Miftachul Arif</creator>
        
        <creator>Abdul Aziz</creator>
        
        <creator>Linda Wijayanti</creator>
        
        <creator>Melisa Mulyadi</creator>
        
        <subject>Indonesian summarization; LDA; MMR; ROUGE evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Text summarization is very important for quickly grasping long articles, particularly for busy readers. In this paper, we use Latent Dirichlet Allocation (LDA) to generate topic queries for news articles, which then serve as inputs to the Maximum Marginal Relevance (MMR) method. The summaries produced by the system are evaluated with the ROUGE metric on news articles at 30 percent and 50 percent compression rates. Experimental findings show that the LDA-MMR combination outperforms MMR on its own in all our tests, across all query lengths and numbers of sentences used, giving the highest average ROUGE values of 0.570 at a 50% compression rate and 0.547 at 30%. This implies that our system efficiently produces meaningful summaries using content-based keywords rather than clickbait titles, which helps avoid misleading readers. The summarizer can convey the main points of a piece of news coverage in concise form, offering a useful new tool for quickly digesting information.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_52-Enhancing_Indonesian_Text_Summarization_with_Latent_Dirichlet_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of the DPC-K-Means Clustering Algorithm for Evaluation of English Teaching Proficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150851</link>
        <id>10.14569/IJACSA.2024.0150851</id>
        <doi>10.14569/IJACSA.2024.0150851</doi>
        <lastModDate>2024-08-30T08:07:27.0470000+00:00</lastModDate>
        
        <creator>Mei Niu</creator>
        
        <subject>K-Means; density-peak clustering algorithm; ELT competency assessment; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Effective and precise methodologies for evaluating the proficiency in English language instruction are instrumental in enhancing educators&#39; competencies and the effectiveness of educational administrative processes. The objective of this paper is to refine the neutrality and precision of such assessments by introducing a novel approach that leverages an advanced K-means algorithm in conjunction with convolutional neural networks (CNNs). Initially, a thorough examination of the issue at hand leads to the formulation of an assessment framework that integrates both a clustering algorithm and a CNN, with a comprehensive elucidation of the pivotal technical aspects. Subsequently, the paper introduces a data clustering and categorization technique grounded in the DPC-K-means methodology, specifically tailored for indices that measure English teaching proficiency, and employs CNNs to devise a model for evaluating these competencies. The integration of these two components—data clustering and the assessment model—gives rise to an innovative technique. Ultimately, the proposed method is implemented and its practicality is substantiated through an analysis of empirical data from educators&#39; teaching proficiency indices. A comparative analysis with existing algorithms reveals that the proposed method achieves superior clustering performance and the lowest margin of error in predictive assessments.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_51-Design_and_Application_of_the_DPC_K_Means_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Driven Approaches to Energy Utilization Efficiency Enhancement in Intelligent Logistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150850</link>
        <id>10.14569/IJACSA.2024.0150850</id>
        <doi>10.14569/IJACSA.2024.0150850</doi>
        <lastModDate>2024-08-30T08:07:27.0470000+00:00</lastModDate>
        
        <creator>Xuan Long</creator>
        
        <subject>Intelligent logistics; energy; utilization efficiency; data-driven</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The rapid development of intelligent logistics presents new challenges and opportunities for improving energy utilization efficiency. This study explores the feasibility and effectiveness of using data-driven methods to improve energy utilization efficiency in an intelligent logistics environment and provides theoretical support and practical guidance for achieving the sustainable development of optimized logistics management procedures. First, a dataset was established by collecting relevant data from the optimized logistics management procedure, including transportation information and energy consumption data. Second, data analysis and mining techniques are used to conduct an in-depth analysis of the dataset to reveal the factors influencing energy utilization efficiency and potential optimization directions. Third, strategies and methods for improving energy utilization efficiency are designed by combining intelligent optimization algorithms. Finally, simulation experiments and case studies are used to verify the effectiveness and feasibility of the proposed methods. The results show that data-driven methods can significantly improve the energy utilization efficiency of optimized logistics management procedures, reduce logistics costs, and enhance the sustainability and competitiveness of the system. Through in-depth analysis and empirical research, a series of actionable optimization strategies are proposed, providing new ideas and methods for optimizing energy and logistics management procedures. These results significantly promote the sustainable development of optimized logistics management procedures and enhance competitiveness.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_50-Data_Driven_Approaches_to_Energy_Utilization_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Safety for High Ceiling Emergency Light Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150849</link>
        <id>10.14569/IJACSA.2024.0150849</id>
        <doi>10.14569/IJACSA.2024.0150849</doi>
        <lastModDate>2024-08-30T08:07:27.0330000+00:00</lastModDate>
        
        <creator>G. X. Jun</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>C. Batumalai</creator>
        
        <creator>Prabadevi B</creator>
        
        <subject>Safety management; Internet of Things; high ceiling safety; high building safety; safety; process innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Information technology for safety management has gradually improved and advanced in recent years. Nonetheless, there is no disputing that during power outages, emergency lights continue to be crucial to people&#39;s safety and comfort. Regrettably, businesses do not give emergency lighting in buildings enough thought. The primary causes of the problem are high costs and the long-term management burden of high-ceiling safety. Additionally, the safety of maintenance personnel must be considered and ensured when lights are installed in high-rise locations. Thus, by creating wireless global control and monitoring via Android mobile phones, our effort intends to increase the availability and reliability of emergency lights. The suggested light monitoring system collects information from Internet of Things devices and transmits it to users&#39; mobile phones over the Internet. Moreover, the risk to employees maintaining the emergency lights is significantly reduced because the lights are monitored via the Internet on mobile devices. Additionally, using the information collected by the sensor inside each emergency light, it is possible to estimate its current condition, including its battery life. This improvement will also enhance everyone&#39;s safety within the building by increasing the emergency lights&#39; dependability through good process innovation.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_49-Enhancing_Safety_for_High_Ceiling_Emergency_Light_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Melanoma Diagnosis: Harnessing VI Transformer Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150848</link>
        <id>10.14569/IJACSA.2024.0150848</id>
        <doi>10.14569/IJACSA.2024.0150848</doi>
        <lastModDate>2024-08-30T08:07:27.0170000+00:00</lastModDate>
        
        <creator>Sreelakshmi Jayasankar</creator>
        
        <creator>T. Brindha</creator>
        
        <subject>Vision transformer; melanoma; convolutional neural networks; deep learning model; transformer encoder; dermoscopy image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Melanoma, the most severe type of skin cancer, ranks ninth among the most prevalent cancer types. Prolonged exposure to ultraviolet radiation triggers mutations in melanocytes, the pigment-producing cells responsible for melanin production. This excessive melanin secretion leads to the formation of dark-colored moles, which can evolve into cancerous tumors over time and metastasize rapidly. This research introduces a Vision Transformer, which revolutionizes computer vision architecture by diverging from traditional convolutional neural networks, employing transformer models to handle images as sequences of flattened, spatially structured patches. The dermoscopy images used in this study are sourced from the Kaggle repository, an extensive online database known for its diverse collection of high-quality medical imagery. This novel deep learning model for melanoma classification aims to enhance diagnostic accuracy and reduce reliance on expert interpretation. The model achieves an accuracy of 96.23%, indicating strong overall correctness in classifying both benign and malignant cases. Comparative simulation of the proposed method against other skin cancer diagnosis methods reveals that the suggested approach attains superior accuracy. These findings underscore the efficacy of the system in advancing the field of skin cancer diagnosis, offering promising prospects for enhanced accuracy and efficacy in clinical settings.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_48-Innovative_Melanoma_Diagnosis_Harnessing_VI_Transformer_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Decision Support System for Alzheimer&#39;s Diagnosis Using a Hybrid Machine Learning Approach with Structural MRI Brain Scans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150847</link>
        <id>10.14569/IJACSA.2024.0150847</id>
        <doi>10.14569/IJACSA.2024.0150847</doi>
        <lastModDate>2024-08-30T08:07:27.0000000+00:00</lastModDate>
        
        <creator>Niranjan Kumar Parvatham</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <subject>Alzheimer’s disease; binary classification; Convolutional Neural Network (CNN); horizontal flipping; healthcare decision support system; MRI images; Support Vector Machine(SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Alzheimer&#39;s disease (AD) causes damage to brain cells and their activities. This disease is typically caused by ageing, making people over the age of 65 more susceptible. As the disease progresses, it slowly destroys brain cells, making it harder to think clearly, recall things, and do everyday tasks. The end result of this is dementia. Metabolic disorders, such as diabetes and Alzheimer&#39;s disease, affect a substantial proportion of the world&#39;s population. While there is no permanent cure for AD, early diagnosis can help reduce damage to brain cells and support a faster recovery. Recent research has explored various machine learning approaches for early disease detection. However, traditional ML (Machine Learning) methods and deep learning techniques such as CNNs have not been individually effective in accurately detecting Alzheimer&#39;s disease (AD). In this proposed work, we developed a hybrid model that processes sMRI brain images to classify them as demented or non-demented. The model consists of two parts: the first part involves extracting significant features through a sequence of convolution and pooling operations; the second part uses these features to train an SVM for binary classification. Data augmentation techniques such as horizontal flipping are used to balance the dataset. We calculated key performance metrics essential for the healthcare domain, including sensitivity, specificity, accuracy, and F1-score. Notably, our model achieved an impressive accuracy of 99.60% in detecting AD, with a sensitivity of 99.83%, a specificity of 99.40%, and an F1-score of 99.58%. These results were validated using 15-fold cross-validation, enhancing the model&#39;s robustness for new data. This approach yields a more robust model, offering greater accuracy and precision compared to existing methods. This model can effectively support manual systems for detecting AD with greater accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_47-Improved_Decision_Support_System_for_Alzheimers_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Dance Training Programs Using Deep Learning: Exploring Motion Feedback Mechanisms Based on Pose Recognition and Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150846</link>
        <id>10.14569/IJACSA.2024.0150846</id>
        <doi>10.14569/IJACSA.2024.0150846</doi>
        <lastModDate>2024-08-30T08:07:26.9870000+00:00</lastModDate>
        
        <creator>Yuting Jiao</creator>
        
        <subject>Deep learning; pose recognition; pose prediction; dance training; graph convolutional network; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Dance pose recognition and prediction is an important part of dance training and a challenging task in the field of artificial intelligence. Due to the diverse styles and significant variations in dance movements, conventional methods struggle to capture effective dance pose features for recognition. In this context, we have developed a dance pose recognition and prediction method based on deep learning. Given the characteristics of dance movements, such as complex human postures and dynamic movements, we proposed the MKFF-ST-GCN model, which integrates multi-kinematic feature fusion with ST-GCN. This model fully captures the dynamic information of dance movements by calculating the first and second-order kinematic features of keypoints and fuses the kinematic features using a multi-head attention mechanism. Additionally, to address dance pose prediction issues, we proposed the STGA-Net based on the spatial-temporal graph attention mechanism. This model improves the long-distance information modeling capability by calculating local and global graph attentions of dance poses, effectively solving the problem of dance pose prediction. To comprehensively evaluate the quality of the proposed methods in dance pose recognition and prediction, we conducted extensive experimental validations and comparisons with several common algorithms. The experimental results fully demonstrate the effectiveness of our methods in dance pose recognition and prediction. This study not only advances the technology of dance pose recognition and prediction but also provides valuable experience for the field.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_46-Optimizing_Dance_Training_Programs_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Missing Value Imputation in Data MCAR for Classification of Type 2 Diabetes Mellitus and its Complications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150845</link>
        <id>10.14569/IJACSA.2024.0150845</id>
        <doi>10.14569/IJACSA.2024.0150845</doi>
        <lastModDate>2024-08-30T08:07:26.9700000+00:00</lastModDate>
        
        <creator>Anik Andriani</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Afiahayati</creator>
        
        <creator>Cornelia Wahyu Danawati</creator>
        
        <subject>Missing value; prognosis of diabetes mellitus; missing completely at random; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Type 2 diabetes mellitus (T2DM) is a disease that is at risk for many complications. Previous research on the prognosis of T2DM and its complications is limited to the impact of T2DM on one particular disease. Guidebook for T2DM Management in Indonesia has eight categories of T2DM complications. The purpose of this study is to classify T2DM prognosis into eight categories: one controlled class and seven classes of aggravating disorders. The classification was based on medical record data from T2DM patients at Panti Rapih Hospital in Yogyakarta between 2017 and 2022. The problem is that the medical record data has numerous missing values (MV). The dataset had 29% missing values, classified as Missing Completely at Random (MCAR). This study performed imputation on the dataset prior to categorization. For MV imputation, a variety of imputation methods were used, and their accuracy was measured using Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The best imputation results were utilized to update the dataset. Subsequently, the dataset was used for classification employing several classification methods. The classification results were compared to determine the method with the highest accuracy in this scenario. The Decision Tree method with stratified k-fold cross-validation emerged as the optimal method for this classification. The results revealed an average accuracy value of 0.8529.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_45-Missing_Value_Imputation_in_Data_MCAR_for_Classification_of_Type_2_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning Approach for Real-Time Malicious URL Detection Using SOM-RMO and RBFN with Tabu Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150844</link>
        <id>10.14569/IJACSA.2024.0150844</id>
        <doi>10.14569/IJACSA.2024.0150844</doi>
        <lastModDate>2024-08-30T08:07:26.9530000+00:00</lastModDate>
        
        <creator>Swetha T</creator>
        
        <creator>Seshaiah M</creator>
        
        <creator>Hemalatha K L</creator>
        
        <creator>Murthy S V N</creator>
        
        <creator>Manjunatha Kumar BH</creator>
        
        <subject>Malicious URL detection; self-organizing map; Radial Movement Optimization; ensemble radial basis function network; Tabu Search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The proliferation of malicious URLs has become a significant threat to internet security, encompassing SPAM, phishing, malware, and defacement attacks. Traditional detection methods struggle to keep pace with the evolving nature of these threats. Detecting malicious URLs in real-time requires advanced techniques capable of handling large datasets and identifying novel attack patterns. The challenge lies in developing a robust model that combines efficient feature extraction with accurate classification. We propose a hybrid machine learning approach combining Self-Organizing Map based Radial Movement Optimization (SOM-RMO) for feature extraction and Ensemble Radial Basis Function Network (RBFN) based Tabu Search for classification. SOM-RMO effectively reduces dimensionality and highlights significant features, while RBFN, optimized with Tabu Search, classifies URLs with high precision. The proposed model demonstrates superior performance in detecting various malicious URL attacks. On a benchmark dataset, the proposed approach achieved an accuracy of 96.5%, precision of 95.2%, recall of 94.8%, and an F1-score of 95.0%, outperforming traditional methods significantly.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_44-Hybrid_Machine_Learning_Approach_for_Real_Time_Malicious_URL_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Impact of Fuzzy Logic Controllers on the Efficiency of FCCUs: Simulation-Based Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150843</link>
        <id>10.14569/IJACSA.2024.0150843</id>
        <doi>10.14569/IJACSA.2024.0150843</doi>
        <lastModDate>2024-08-30T08:07:26.9370000+00:00</lastModDate>
        
        <creator>Harsh Pagare</creator>
        
        <creator>Kushagra Mishra</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Sandeep Singh Rawat</creator>
        
        <creator>Shailaja Salagrama</creator>
        
        <subject>Non-Linear modeling; fuzzy logic controller; machine learning; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study investigates the methods for creating nonlinear models and developing Fuzzy logic controllers for the Fluidized Catalytic Cracking Unit (FCCU) at different global refineries. The FCCU plays a crucial role in the petrochemical sector, processing a significant portion of the world&#39;s crude oil - in 2006, FCCUs were responsible for refining a third of the global crude oil supply. These units are essential for converting heavier oils, such as gasoil and crude oil, into lighter, more critical products like gasoline and olefinic gases. Given their efficiency in producing a large volume of products and the volatile nature of petrochemical market prices, optimization of these units is a priority for engineers and investors. Traditional control mechanisms often prove inadequate for managing the FCCU&#39;s complex, dynamic, and nonlinear operations, where creating an accurate mathematical model is challenging or involves significant simplifications. Fuzzy Logic controllers, which mimic human reasoning more closely than conventional methods, offer a promising alternative for such unpredictable and complex systems. The results of this work illustrate the usefulness and possible advantages of utilizing Fuzzy Logic controllers in the management of FCCU plants; the controllers are also compared with the latest machine learning techniques. These findings are corroborated by simulations conducted with the MATLAB Fuzzy Logic Toolbox R2012b.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_43-Evaluating_the_Impact_of_Fuzzy_Logic_Controllers_on_the_Efficiency_of_FCCUs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fitness Equipment Design Based on Web User Text Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150842</link>
        <id>10.14569/IJACSA.2024.0150842</id>
        <doi>10.14569/IJACSA.2024.0150842</doi>
        <lastModDate>2024-08-30T08:07:26.9370000+00:00</lastModDate>
        
        <creator>Jinyang Xu</creator>
        
        <creator>Xuedong Zhang</creator>
        
        <creator>Xinlian Li</creator>
        
        <creator>Shun Yu</creator>
        
        <creator>Yanming Chen</creator>
        
        <subject>Home fitness equipment; crawler data; FAHP; TOPSIS; product design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>To propose home fitness equipment that meets modern users&#39; needs, this study employs web user text mining, combined with the Fuzzy Analytic Hierarchy Process (FAHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to design and evaluate home fitness equipment that aligns with contemporary demands. First, we used crawler data to collect user reviews of home fitness equipment from a well-known Chinese shopping platform. The data were cleaned and processed to extract key user needs and preferences. Next, the FAHP method was used to prioritize these requirements, and TOPSIS was applied for the comprehensive evaluation of design proposals. This process allowed us to identify the solution that best meets user needs, completing the development of the product design. The results indicate that the second design, with its features targeting lumbar health, efficient space utilization, rich interactive experience, integration of smart technology, and minimalist appearance, has significant market potential and social value. Finally, the SUS (System Usability Scale) was used to validate the design, showing excellent user satisfaction and usability for the second scheme. This study establishes a design process incorporating web scraping, FAHP, and TOPSIS, demonstrating the effectiveness of this theoretical integration in the field of home fitness equipment design.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_42-Fitness_Equipment_Design_Based_on_Web_User_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hidden Markov Model-Based Performance Recognition System for Marching Wind Bands</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150841</link>
        <id>10.14569/IJACSA.2024.0150841</id>
        <doi>10.14569/IJACSA.2024.0150841</doi>
        <lastModDate>2024-08-30T08:07:26.9230000+00:00</lastModDate>
        
        <creator>Wei Jiang</creator>
        
        <subject>Music information retrieval; Hidden Markov Model; feature extraction; automatic music recognition; marching band performance; PCP features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper explores the automatic recognition of marching band performances using advanced music information retrieval techniques. Music, a crucial medium for emotional expression and cultural exchange, greatly benefits from the harmonic backing provided by marching wind orchestras. Identifying these performances manually is both time-consuming and labor-intensive, particularly for non-professionals. This study addresses this challenge by leveraging Hidden Markov Models (HMM) and improved Pitch Class Profile (PCP) features to automate the recognition process. The research also explores the system&#39;s performance on real-world audio recordings with background noise and microphone variations. By dividing the audio signal into frames and transforming it to the frequency domain, the PCP feature vectors are extracted and used within the HMM framework. Experimental results demonstrate that the proposed method significantly enhances recognition accuracy compared to traditional PCP features and template matching models. The study identifies challenges in distinguishing similar tonal values, such as F-major and D-minor, which affect recognition rates. Additionally, the research highlights the importance of addressing background noise and microphone variations in real-world applications. Ethical considerations regarding privacy and intellectual property rights are also discussed. This research establishes a comprehensive system for automatic marching band performance recognition, contributing to advancements in music information retrieval and analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_41-A_Hidden_Markov_Model_Based_Performance_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Research of Cross-Border E-Commerce Short Video Recommendation System Based on Multi-Modal Fusion Transformer Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150840</link>
        <id>10.14569/IJACSA.2024.0150840</id>
        <doi>10.14569/IJACSA.2024.0150840</doi>
        <lastModDate>2024-08-30T08:07:26.9070000+00:00</lastModDate>
        
        <creator>Yiran Hu</creator>
        
        <subject>Multimodal fusion; transformer model; cross-border e-commerce; short video recommendation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study designed a cross-border e-commerce short video recommendation system based on Transformer&#39;s multimodal analysis model. When mining associations, the model not only focuses on the relationships between modalities, but also improves semantic context by addressing contextual correlations within and between modalities. At the same time, the model uses a cross-modal multi-head attention mechanism for multi-level association mining, and constructs an association network interwoven like latitude and longitude lines. By exploring the essential correlation between modalities and subjective emotional fluctuations, the model exploits the latent context between modalities, allowing correlations to be mined more fully and the information contained in the original data to be identified more accurately. In addition, this study proposes a self-supervised single-modal label generation method. When multimodal labels are known, it does not require complex deep networks; it relies only on the mapping relationship between multimodal representations and labels to generate single-modal labels. This enables phased automatic labeling of single-modal labels, quantifying the mapping between modal representations and labels in the representation space to generate weak single-modal labels. The study also achieved multimodal collaborative learning in the context of limited differential information acquisition due to incomplete labeling, fully utilizing multimodal information. The experimental results on classic datasets in the field of multimodal analysis show that it outperforms the baseline model in terms of accuracy and F1 score, reaching 98.76% and 97.89%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_40-Design_and_Research_of_Cross_Border_E_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Knitting Path of Flat Knitting Machine Based on Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150839</link>
        <id>10.14569/IJACSA.2024.0150839</id>
        <doi>10.14569/IJACSA.2024.0150839</doi>
        <lastModDate>2024-08-30T08:07:26.8900000+00:00</lastModDate>
        
        <creator>Tianqi Yang</creator>
        
        <subject>Flat knitting machine; reinforcement learning; weaving path optimization; textile industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the textile industry, the flat knitting machine plays a crucial role as a production tool, and the quality of its weaving path is closely related to the overall product quality and production efficiency. Seeking to improve and optimize the knitting path to improve product effectiveness and productivity has become an urgent concern for the textile industry. This article streamlines and enhances the intricate weaving process of fabrics, harnessing the power of reinforcement learning to optimize weaving paths on a flat knitting machine. By integrating reinforcement learning technology into fabric production, we aspire to elevate both the quality and production efficiency of textiles. The core of our approach lies in carefully defining a state space, an action space, and a tailored reward function, each crafted to mirror the intricacies of the knitting process. This model serves as the cornerstone upon which we construct a knitting path optimization algorithm rooted in the principles of reinforcement learning. The algorithm learns from its interactions with the dynamic environment, follows a methodical trial-and-error approach, and continuously refines its decision-making strategy, with the goal of maximizing the long-term cumulative reward so that every stitch contributes to the overall optimization of the weaving process. In essence, this work combines the traditional craft of fabric weaving with reinforcement learning, moving toward more intelligent and efficient textile production. Through this process of iterative optimization, the agent can gradually learn the optimal knitting path. To verify the effectiveness of the algorithm, we performed extensive experimental validation. The experimental results show that reinforcement learning can significantly improve knitting efficiency and the appearance and feel of fabrics. Compared with traditional methods, the method proposed in this article has a higher level of automation and better adaptability, achieving more efficient and intelligent knitting production, with a 10% increase in production efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_39-Optimization_of_Knitting_Path_of_Flat_Knitting_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristic Intelligent Algorithm-Based Approach for In-Depth Development and Application Analysis of Micro- and Nanoembedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150838</link>
        <id>10.14569/IJACSA.2024.0150838</id>
        <doi>10.14569/IJACSA.2024.0150838</doi>
        <lastModDate>2024-08-30T08:07:26.8770000+00:00</lastModDate>
        
        <creator>Buzhong Liu</creator>
        
        <subject>Photon search algorithm; deep development of micro- and nanoembedded systems; application test and analysis methodology; LightGBM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Developing application analysis and testing methods is an important part of the in-depth development of micro-nano embedded systems under complex integrated architectures. Therefore, research on application analysis and testing models is of great significance for the in-depth development and efficient design of embedded systems. In order to fully exploit the effective information in test analysis data from the in-depth development of micro-nano embedded systems under complex integrated architectures and improve the analysis and prediction accuracy of test analysis models, a development test analysis model based on the photon search algorithm and LightGBM is proposed. First, the development process of micro-nano embedded systems under complex integrated architectures is analysed, and a system analysis architecture is designed to propose test analysis factors. Second, a development test analysis model is established by combining the photon search algorithm and LightGBM. Subsequently, the feasibility and efficiency of the model are demonstrated by analysing data from the development process. The analysis of data examples shows that the LightGBM test analysis model has high analysis and prediction accuracy and generalisation performance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_38-Heuristic_Intelligent_Algorithm_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified TOPSIS Method for Neutrosophic Cubic Number Multi-Attribute Decision-Making and Applications to Music Composition Effectiveness Evaluation of Film and Television</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150837</link>
        <id>10.14569/IJACSA.2024.0150837</id>
        <doi>10.14569/IJACSA.2024.0150837</doi>
        <lastModDate>2024-08-30T08:07:26.8770000+00:00</lastModDate>
        
        <creator>Liang Yang</creator>
        
        <creator>Jun Zhao</creator>
        
        <subject>Multi-Attribute Decision-Making (MADM); NCSs; TOPSIS approach; music composition effectiveness evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Contemporary music composition for film and television has exhibited a trend towards diversification, which is reflected in various aspects such as the diversity of musical styles, the integration of music and visuals, as well as the technical means of music creation. With the continuous advancement of music production technology and film/television production technology, the creation of music for film and television has increasingly emphasized the organic integration of music and visuals, as well as the role of music in emotional expression and atmosphere creation. Meanwhile, the fusion and innovation of different musical styles have also brought more possibilities and space to the creation of music for film and television. This trend of diversification not only enriches the artistic expressiveness of film and television works, but also provides audiences with a more diverse audiovisual experience. The music composition effectiveness evaluation of film and television is a multi-attribute decision-making (MADM) problem. In this paper, the TOPSIS method is extended to the framework of neutrosophic cubic sets (NCSs) to address MADM problems. The CRITIC method is employed to obtain the weights of the attributes, ensuring a systematic and objective approach to determining their relative importance. Furthermore, the neutrosophic cubic number TOPSIS (NCN-TOPSIS) approach is established for MADM scenarios. To demonstrate the applicability of the proposed NCN-TOPSIS model, a numerical example focused on the music composition effectiveness evaluation in film and television is presented. Additionally, comparative analyses are conducted to showcase the advantages of the NCN-TOPSIS approach over other decision-making methods. By extending the TOPSIS technique to the NCSs environment and integrating the CRITIC method, this research contributes to the field of MADM by providing a robust and efficient decision-making tool for complex, multi-criteria problems, as exemplified by the music composition evaluation in the film and television industry.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_37-Modified_TOPSIS_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stock Price Forecasting with Optimized Long Short-Term Memory Network with Manta Ray Foraging Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150836</link>
        <id>10.14569/IJACSA.2024.0150836</id>
        <doi>10.14569/IJACSA.2024.0150836</doi>
        <lastModDate>2024-08-30T08:07:26.8600000+00:00</lastModDate>
        
        <creator>Zhongpo Gao</creator>
        
        <creator>Junwen Jing</creator>
        
        <subject>Stock price; hybrid forecasting method; Manta Ray Foraging Optimization; empirical mode decomposition; Nasdaq index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The stock market is a financial marketplace where investors may participate through the acquisition and sale of stocks in publicly traded companies. Predicting stock prices in the securities sector may be challenging due to the intricate nature of the subject, which necessitates a comprehensive grasp of several interconnected factors. Numerous factors, including politics, society, as well as the economy, have an impact on the stock market. The primary objective of financial market investing is to secure larger profits. Financial markets provide many opportunities for market analysts, investors, and researchers in several industries due to significant technology advancements. Conventional approaches encounter difficulties in capturing the complex, non-linear connections that exist in market data, which requires the implementation of sophisticated artificial intelligence models. This paper presents a new approach to tackling these issues by suggesting a unique model. It combines the long short-term memory method and Empirical Mode Decomposition with the Manta Ray Foraging Optimization. When tested in the current study&#39;s dynamic stock market, the EMD-MRFO-LSTM model outperformed other models regarding performance and efficiency. The Nasdaq index data from January 2, 2015, to June 29, 2023, were used in this study. The findings demonstrate how the suggested model is capable of making precise stock price predictions. The suggested model offers a workable approach to studying and predicting stock price time series by obtaining values of 0.9973, 91.99, 71.54, and 0.57, for coefficient of determination (R^2), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), respectively. Compared to other methods currently in use, the proposed model has a higher accuracy in forecasting and is more physically relevant to the dynamic stock market, according to the outcomes of the experiment.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_36-Stock_Price_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DIAUTIS III: A Fuzzy and Affective Platform for Obtaining Autism Mental Models and Learning Aids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150835</link>
        <id>10.14569/IJACSA.2024.0150835</id>
        <doi>10.14569/IJACSA.2024.0150835</doi>
        <lastModDate>2024-08-30T08:07:26.8430000+00:00</lastModDate>
        
        <creator>Mohamed El Alami</creator>
        
        <creator>Sara El khabbazi</creator>
        
        <creator>Fernando de Arriaga</creator>
        
        <subject>Fuzzy computing; affective computing; mental models; learning aids; autism; category theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Autism spectrum disorders (ASD) are conditions characterized by social interaction and communication difficulties, atypical patterns of activities, and unusual reactions to sensations. Characteristics of autism may be detected in early childhood, but diagnosis is often delayed. The diagnosis of autistic children typically aligns with medical and psychological recommendations, but it does not evaluate all the problems, their intensity, or changes in symptoms over time. It also does not identify the affective states associated with these deficiencies, making aid less effective. The mental model of autistic children contains their deficits, tasks, and intensities, going beyond diagnostics. That is why we enhance the DIAUTIS platform to achieve our objectives related to helping children with ASD. DIAUTIS I is a platform that aims to diagnose autism and identify its severity using cognitive, fuzzy, and affective computing. It presents tests, evaluates results, and produces a final model. We then implemented DIAUTIS II by adding the KASP methodology, a new methodology for designing serious games based on knowledge, affect, sensory aspects, and pedagogy; this tool allows DIAUTIS II agents to design over 80 games while considering a child&#39;s background. In this paper, we present a new tool for formalizing the autism mental model based on fuzzy and affective computing. DIAUTIS III is an extension of the DIAUTIS II platform that aims to represent a cognitive fuzzy mental model using the metrics of category theory. So far, no mental model has been developed for autism. Our mental model can be obtained at any time as a fuzzy cognitive map or fuzzy graph, with the use of affective computing. In addition, the mathematical theory of categories represents this mental model of an autistic child as a fuzzy graph and allows operations such as CONS and TRA to evaluate the difference between two mental models. The fuzzy cognitive mental model can be used to develop new techniques for improving the learning and integration of autistic children into social life, which is the focus of our immediate future work.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_35-DIAUTIS_III_A_Fuzzy_and_Affective_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Virtual Commerce Solutions for the Metaverse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150834</link>
        <id>10.14569/IJACSA.2024.0150834</id>
        <doi>10.14569/IJACSA.2024.0150834</doi>
        <lastModDate>2024-08-30T08:07:26.8300000+00:00</lastModDate>
        
        <creator>Ghazala Bilquise</creator>
        
        <creator>Khaled Shaalan</creator>
        
        <creator>Manar Alkhatib</creator>
        
        <subject>Metaverse; v-commerce; immersive technologies; design attributes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The metaverse, a rapidly evolving field, promises to transform online shopping through immersive technologies. This systematic review aims to explore and analyze the key design features of Virtual Commerce (v-commerce) solutions within this digital environment. By examining 24 studies that have developed immersive v-commerce applications, this review seeks to compile a taxonomy of essential design attributes necessary for creating effective and engaging v-commerce experiences. The review classifies these attributes into three primary dimensions: Product, Intelligent Services, and Functionality. The findings indicate that within the Augmented Reality (AR) category, product visualization and natural interaction were the most studied attributes. In the Virtual Reality (VR) category, intuitive affordances emerged as the most frequently investigated features. Meanwhile, Mixed Reality (MR) studies commonly focused on information quality, intuitive affordances, and shopping assistants. The insights from this review provide valuable guidance for researchers, developers, and practitioners aiming to enhance consumer engagement and satisfaction in the metaverse through well-designed v-commerce applications. By synthesizing the results of various studies, this review offers a comprehensive overview of the current state of v-commerce research, identifies existing gaps, and proposes potential directions for future development in the field.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_34-A_Systematic_Review_of_Virtual_Commerce_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature Interaction Based Neural Network Approach: Predicting Job Turnover in Early Career Graduates in South Korea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150833</link>
        <id>10.14569/IJACSA.2024.0150833</id>
        <doi>10.14569/IJACSA.2024.0150833</doi>
        <lastModDate>2024-08-30T08:07:26.8300000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Job turnover prediction; feature interaction based neural network; employment data analysis; predictive modeling; university graduates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Predicting job turnover among early career university graduates is crucial for both employees and employers. This study introduced a Feature Interaction based Neural Network (FINN) model designed to predict job turnover among university graduates in their 20s and 30s in South Korea within the first five years of employment. The FINN model leveraged the Graduates Occupational Mobility Survey dataset, which included detailed information on approximately 26,544 graduates. This rich dataset encompassed a wide range of variables, including personal attributes, employment characteristics, job satisfaction, and job preparation activities. The model combined an embedding layer to convert sparse features into dense vectors with a neural network component to capture high-order feature interactions. We compared the FINN model&#39;s performance against eight baseline models: Logistic Regression, Factorization Machines, Field-aware Factorization Machines, Support Vector Machine, Random Forest, Product-based Neural Networks, Wide &amp; Deep, and DeepFM. Evaluation metrics used were Area Under the ROC Curve (AUC) and Log Loss. The results demonstrated that the FINN model outperformed all baseline models, achieving an AUC of 0.830 and a Log Loss of 0.370. The FINN model represents a significant advancement in predictive modeling for job turnover, providing valuable insights that can inform both individual career planning and organizational human resource practices. This research underscores the potential of advanced neural network architectures in employment data analysis and predictive modeling.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_33-A_Feature_Interaction_Based_Neural_Network_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Arabic Phishing Email Detection: A Hybrid Machine Learning Based on Genetic Algorithm Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150832</link>
        <id>10.14569/IJACSA.2024.0150832</id>
        <doi>10.14569/IJACSA.2024.0150832</doi>
        <lastModDate>2024-08-30T08:07:26.8130000+00:00</lastModDate>
        
        <creator>Amjad A. Alsuwaylimi</creator>
        
        <subject>Machine learning; phishing email; BiLSTM; Arabic content-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Recently, owing to widespread Internet use and technological breakthroughs, cyber-attacks have increased. One of the most common types of attacks is phishing, which is executed through email and is a leading cause of the recent surge in cyber-attacks. These attacks maliciously demand sensitive or private information from individuals and companies. Various methods have been employed to address this issue by classifying emails, such as feature-based classification and manual verification. However, these methods face significant challenges regarding computational efficiency and classification precision. This work presents a novel hybrid approach that combines machine learning and deep learning techniques to improve the identification of phishing emails containing Arabic content. A genetic algorithm is employed to optimize feature selection, thereby enhancing the performance of the model by effectively identifying the most relevant features. The novel dataset comprises 1,173 records categorized into two classes: phishing and legitimate. A number of empirical investigations were carried out to assess and contrast the performance outcomes of the proposed model. The findings reveal that the proposed hybrid model outperforms other machine learning classifiers and standalone deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_32-Enhancing_Arabic_Phishing_Email_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sketch and Size Orient Malicious Activity Monitoring for Efficient Video Surveillance Using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150831</link>
        <id>10.14569/IJACSA.2024.0150831</id>
        <doi>10.14569/IJACSA.2024.0150831</doi>
        <lastModDate>2024-08-30T08:07:26.7970000+00:00</lastModDate>
        
        <creator>K. Lokesh</creator>
        
        <creator>M. Baskar</creator>
        
        <subject>Video surveillance; deep learning; activity monitoring; malicious activity; SSMAM; SSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Several techniques exist for malicious activity monitoring in organizations, but they suffer from poor accuracy. To address this issue, an efficient Sketch and Size oriented Malicious Activity Monitoring (SSMAM) model is presented in this article. The model captures video frames and performs segmentation to extract frame features as shapes and sizes. The quality of the video frames is first enhanced by applying a High Level Intensity Analysis algorithm. The quality-improved image is then segmented with Color Quantization Segmentation. From the segmented image, the features are extracted, and scaling and rotation are applied for different sizes and angles. The extracted features are used to train a convolutional neural network. The CNN model is designed to perform convolution at two levels, along with pooling. In the test phase, the method extracts the same set of features, performs convolution to obtain the same feature lengths, and the neurons compute the value of the Sketch Support Measure (SSM) for the various activity classes. According to the value of SSM, the method classifies the user activity for efficient video surveillance. The proposed approach improves the performance of activity monitoring and video surveillance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_31-Sketch_and_Size_Orient_Malicious_Activity_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Digital Financial Security with LSTM and Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150830</link>
        <id>10.14569/IJACSA.2024.0150830</id>
        <doi>10.14569/IJACSA.2024.0150830</doi>
        <lastModDate>2024-08-30T08:07:26.7830000+00:00</lastModDate>
        
        <creator>Thanyah Aldaham</creator>
        
        <creator>Hedi HAMDI</creator>
        
        <subject>Digital financing; block chain; ETC; security; anomaly detection; machine learning; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The growing dependence on digital financial and banking transactions has brought about a significant focus on implementing strong security protocols. Blockchain technology has proved itself throughout the years to be a reliable solution upon which transactions can safely take place. This study explores the use of blockchain technology, specifically Ethereum Classic (ETC), to enhance the security of digital financial and banking transactions. The aim is to develop a system using an LSTM model to predict and detect anomalies in transaction data. The proposed LSTM model was trained before being tested and the results prove that the proposed model can effectively enhance the security, especially when compared to other studies in the same domain. The proposed model achieved a prediction accuracy of 99.5%, demonstrating its effectiveness in enhancing security by preventing overfitting and identifying potential threats in network activities. The results suggest significant improvements in digital transaction security, enhancing both the traceability and transparency of blockchain transactions while reducing fraud rates. Future work will extend this model&#39;s applicability to larger-scale decentralized finance systems.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_30-Enhancing_Digital_Financial_Security_with_LSTM_and_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing Technology to Achieve the Highest Quality in the Academic Program of University Studies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150829</link>
        <id>10.14569/IJACSA.2024.0150829</id>
        <doi>10.14569/IJACSA.2024.0150829</doi>
        <lastModDate>2024-08-30T08:07:26.7830000+00:00</lastModDate>
        
        <creator>Rania Aboalela</creator>
        
        <subject>ChatGPT; academic accreditations; technology programs; computer science; Kingdom of Saudi Arabia; website development; NCAAA; ABET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This research aims to utilize information technology to improve education quality, particularly in higher education. A key contribution of this research is the application of generative artificial intelligence, specifically ChatGPT, to validate test questions that meet both international (ABET) and local (NCAAA) academic accreditation standards. The study was conducted within the Information Systems Department&#39;s bachelor&#39;s program at King Abdulaziz University in the Kingdom of Saudi Arabia, focusing on a website development course. The custom ChatGPT application, named Question Checker, was developed to validate questions generated by instructors. These validation criteria were aligned with the accreditation requirements for technology and computer science programs, ensuring compliance with both ABET and NCAAA standards. The application was tested by validating nine questions related to Student Outcomes, demonstrating its effectiveness in supporting the educational objectives of the program.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_29-Harnessing_Technology_to_Achieve_the_Highest_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Distribution Routes in Agricultural Product Supply Chain Decision Management Based on Improved ALNS Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150828</link>
        <id>10.14569/IJACSA.2024.0150828</id>
        <doi>10.14569/IJACSA.2024.0150828</doi>
        <lastModDate>2024-08-30T08:07:26.7670000+00:00</lastModDate>
        
        <creator>Liling Liu</creator>
        
        <creator>Yang Chen</creator>
        
        <creator>Ao Li</creator>
        
        <subject>ALNS; agricultural products; path optimization; cold chain transportation; supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The transportation of fresh agricultural products is not conducted along sufficiently precise routes, resulting in extended vehicle transportation times and a consequent deterioration in product freshness. Therefore, this study proposes an agricultural product transportation path optimization model based on an optimized adaptive large neighborhood search algorithm. The Solomon standard test cases are used for the experiment, and the algorithms before and after optimization are compared. The case analysis shows that the optimized method was effective for the C201, R201, and CR201 distribution sets. The total cost of the R201 transportation set was the lowest, while C101 had the highest total cost. The lowest vehicle cost consumption was R201 at 600, and the highest was C101 at 2220. The C101 instance took 145 s to calculate, and R201 took 199 s. All values of CR201 were average, with high fault tolerance. The proposed method was used to obtain the optimal operator solution. The C201 example took 244 s to calculate 2350 objective function values, the R201 example took 239 s to obtain 657 objective function values, and the CR201 example took 233 s to obtain 764 objective function values. This indicates that the designed method has a significant effect on optimizing the distribution paths of agricultural products. Compared with the unimproved algorithm, it has a more accurate search ability and lower transportation costs. This algorithm provides path optimization ideas for the agricultural product transportation industry.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_28-Optimization_of_Distribution_Routes_in_Agricultural_Product.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation of Book Categorisation Based on Network Centric Quality Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150827</link>
        <id>10.14569/IJACSA.2024.0150827</id>
        <doi>10.14569/IJACSA.2024.0150827</doi>
        <lastModDate>2024-08-30T08:07:26.7500000+00:00</lastModDate>
        
        <creator>Tingting Liu</creator>
        
        <creator>Qiyuan Liu</creator>
        
        <creator>Linya Fu</creator>
        
        <subject>Crawler; books; automated; classification; Recurrent Neural Network; Multiple Attention Mechanism; knowledge graphs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In order to improve the efficiency of automatic book classification, this study uses a crawler to collect book data from regular websites and performs data cleaning and fusion to build a structured knowledge graph. Meanwhile, the processed data is applied to improve a pre-trained model, and transfer learning is used to improve the results. Multiple Attention Mechanism, Recurrent Neural Network, and Convolutional Neural Network modules are fused into the classification model, and feature fusion is used to enhance feature extraction. In addition, the study designed a pre-trained model architecture to help automatically categorise and manage book resources. The results show a significant improvement in the classification of Chinese books on the Chinese Book L2 Subject Classification, iFlytek, and THUCNews datasets. The fusion of long short-term memory and convolutional network Transformer-based bi-directional encoding models improved the accuracy by 0.19%, 1.54%, and 0.42% on the three datasets, respectively, while the weighted average F1 scores improved accordingly. Through wireless technology, automatic book classification is realized and the management capability of the library is improved.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_27-Automation_of_Book_Categorisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques for Protecting Intelligent Vehicles in Intelligent Transport Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150826</link>
        <id>10.14569/IJACSA.2024.0150826</id>
        <doi>10.14569/IJACSA.2024.0150826</doi>
        <lastModDate>2024-08-30T08:07:26.7370000+00:00</lastModDate>
        
        <creator>Yuan Chen</creator>
        
        <subject>Intelligent vehicle protection; machine learning techniques; Gaussian Process Regression; convolutional neural networks; long and short-term memory networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Intelligent transport systems (ITS) are the development direction of future transport systems, in which intelligent vehicles are the key components. In order to protect the safety of intelligent vehicles, machine learning techniques are widely used in ITS. For intelligent protection in ITS, this study introduces an improved driving behaviour modelling method based on Bagging Gaussian Process Regression. Meanwhile, to further improve the accuracy of driving behaviour modelling and prediction, a Convolutional Neural Network-Long Short-term Memory Network-Gaussian Process Regression model is used for effective feature extraction. The results show that in the straight overtaking scenario, the mean absolute error, root mean square error, and maximum absolute error of the improved Bagging Gaussian process regression method are 0.5241, 0.9547, and 10.7705, respectively. In the corner obstacle avoidance scenario, the corresponding errors of the improved Bagging Gaussian process regression method are only 0.6527, 0.9436, and 14.7531. Besides, the mean absolute error of the Convolutional Neural Network-Long Short-term Memory Network-Gaussian process regression algorithm is only 0.0387 when the number of input temporal image frames is 5. This indicates that the method put forward in the study can provide more accurate and robust modeling and prediction of driving behaviours in complex traffic environments, and it has high application potential in the field of safety and protection of intelligent vehicles.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_26-Machine_Learning_Techniques_for_Protecting_Intelligent_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Resume Screening for Smart Hiring Using Sentence-Bidirectional Encoder Representations from Transformers (S-BERT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150825</link>
        <id>10.14569/IJACSA.2024.0150825</id>
        <doi>10.14569/IJACSA.2024.0150825</doi>
        <lastModDate>2024-08-30T08:07:26.7370000+00:00</lastModDate>
        
        <creator>Asmita Deshmukh</creator>
        
        <creator>Anjali Raut</creator>
        
        <subject>S-BERT; resume; automated screening; job; CV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In a world inundated with resumes, the hiring process is often challenging, particularly for large organizations. HR professionals face the daunting task of manually sifting through numerous applications. This paper presents ‘Enhanced Resume Screening for Smart Hiring using Sentence-Bidirectional Encoder Representations from Transformers (S-BERT)’ to revolutionize this process. For HR professionals dealing with overwhelming numbers of resumes, the manual screening process is time-consuming and error-prone. To address this, an automated solution leveraging NLP techniques and a cosine distance matrix is proposed. Our approach involves pre-processing, embedding generation using S-BERT, cosine similarity calculation, and ranking based on scores. In our evaluation on a dataset of 223 resumes, our automated screening mechanism demonstrated remarkable efficiency with a screening speed of 0.233 seconds per resume. The system’s accuracy was 90%, showcasing its ability to effectively identify relevant resumes. This work presents a powerful tool for HR professionals, significantly reducing the manual workload and enhancing the accuracy of identifying suitable candidates. The societal impact lies in streamlining hiring processes, making them more efficient and accessible, ultimately contributing to a more productive and equitable job market.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_25-Enhanced_Resume_Screening_for_Smart_Hiring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight and Efficient High-Resolution Network for Human Pose Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150824</link>
        <id>10.14569/IJACSA.2024.0150824</id>
        <doi>10.14569/IJACSA.2024.0150824</doi>
        <lastModDate>2024-08-30T08:07:26.7200000+00:00</lastModDate>
        
        <creator>Jiarui Liu</creator>
        
        <creator>Xiugang Gong</creator>
        
        <creator>Qun Guo</creator>
        
        <subject>Human pose estimation; model lightweighting; Ghost module; attention mechanism; multi-scale feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>To address the challenges of high parameter quantities and elevated computational demands in high-resolution networks, which limit their application on devices with constrained computational resources, we propose a lightweight and efficient high-resolution network, LE-HRNet. Firstly, we design a lightweight module, LEblock, to extract feature information. LEblock leverages the Ghost module to substantially decrease the number of model parameters. Building on this, to effectively recognize human keypoints, we design a Multi-Scale Coordinate Attention Mechanism (MCAM). MCAM enhances the model&#39;s perception of details and contextual information by integrating multi-scale features and coordinate information, improving the detection capability for human keypoints. Additionally, we design a Cross-Resolution Multi-Scale Feature Fusion Module (CMFFM). By optimizing the upsampling and downsampling processes, CMFFM further reduces the number of model parameters while enhancing the extraction of cross-branch channel features and spatial features to ensure the model&#39;s performance. The proposed model&#39;s experimental results demonstrate accuracies of 69.3% on the COCO dataset and 88.7% on the MPII dataset, with a parameter count of only 5.4M, substantially decreasing the number of model parameters while preserving performance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_24-Lightweight_and_Efficient_High_Resolution_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifying the Effects of Homogeneous Interference on Coverage Quality in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150823</link>
        <id>10.14569/IJACSA.2024.0150823</id>
        <doi>10.14569/IJACSA.2024.0150823</doi>
        <lastModDate>2024-08-30T08:07:26.7030000+00:00</lastModDate>
        
        <creator>Qingmiao Liu</creator>
        
        <creator>Qiang Liu</creator>
        
        <creator>Minhuan Wang</creator>
        
        <subject>Wireless sensor networks; homogeneous interference; basic probability assignment; coverage quality; Dempster-Shafer theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This study develops a coverage perception interference model for Wireless Sensor Networks, focusing on the challenges of homogeneous interference within Regions of Interest. Traditional perception models often overlook areas that, while covered, do not meet the required coverage standards for accurate classification. This model addresses both uncovered areas and those inadequately covered, which are susceptible to classification errors. A propositional space for the coverage model is defined to assess the impact of homogeneous interference on sensor nodes, with the aim of quantifying its effects on network coverage quality and stability in complex environments. The study emphasizes the generation of Basic Probability Assignments using Dempster-Shafer theory, a robust framework for managing uncertain information in sensory data. Probability Density Functions derived from historical and real-time data are utilized to facilitate precise BPA calculations by integrating over specific attribute ranges, thereby enhancing the accuracy and reliability of target detection. Algorithms are also developed to calculate the interference effect BPA, which are integrated with perception coverage models to improve the assessment and optimization of coverage quality. The research enhances the methodological understanding of managing interference in WSNs and offers practical strategies for improving sensor network operations in environments affected by significant interference, boosting the reliability and effectiveness of critical surveillance and monitoring applications.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_23-Quantifying_the_Effects_of_Homogeneous_Interference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method by Utilizing Deep Learning to Identify Malware Within Numerous Industrial Sensors on IoTs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150822</link>
        <id>10.14569/IJACSA.2024.0150822</id>
        <doi>10.14569/IJACSA.2024.0150822</doi>
        <lastModDate>2024-08-30T08:07:26.6870000+00:00</lastModDate>
        
        <creator>Ronghua MA</creator>
        
        <subject>Malware; malware detection; industrial sensors; Internet of Things (IoTs); Deep Learning (DL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The Industrial Internet of Things (IIoT) is an emerging paradigm that connects the Internet with industrial physical smart objects. These objects span broad domains such as smart homes, smart cities, industrial and military processes, agriculture, and business. Owing to substantial advances in IIoT technologies, numerous IIoT applications have been developed over the past ten years. Recently, there have been multiple reports of malware-based cyber-attacks targeting IIoT systems. Consequently, this research focuses on creating an effective Artificial Intelligence (AI)-powered system for detecting zero-day malware in IIoT environments. This article proposes a combined deep learning (DL) framework for malware detection that uses the dual-density discrete wavelet transform for feature extraction together with a combination of a convolutional neural network (CNN) and long short-term memory (LSTM). The method is applied to malware detection and classification and has been assessed on the Malimg dataset and the Microsoft BIG 2015 dataset. The results demonstrate that the proposed model can classify malware with remarkable accuracy, surpassing similar methods: when tested on the Microsoft BIG 2015 and Malimg datasets, it achieves accuracies of 95.36% and 98.12%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_22-A_Method_by_Utilizing_Deep_Learning_to_Identify_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Business Intelligence with Hybrid Transformers and Automated Annotation for Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150821</link>
        <id>10.14569/IJACSA.2024.0150821</id>
        <doi>10.14569/IJACSA.2024.0150821</doi>
        <lastModDate>2024-08-30T08:07:26.6870000+00:00</lastModDate>
        
        <creator>Wael M.S. Yafooz</creator>
        
        <subject>Business intelligence; machine learning; sentiment analysis; transformers; BERT; Arabic annotation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Business is a key focus for many individuals, companies, countries and organisations. One effective way to enhance business performance is by analysing customer opinions through sentiment analysis. This technique offers valuable insights, known as business intelligence, which directly benefits business owners by informing their decisions and strategies. Substantial attention has been given to business intelligence through proposed machine learning approaches, deep learning models and approaches utilizing natural language processing methods. However, building a robust model to detect and identify users’ opinions and to automate text annotation, particularly for the Arabic language, still faces many challenges. Thus, this study aims to propose a hybrid transfer learning model that uses transformers to identify positive and negative user comments related to business. This model consists of three pretrained models, namely, AraBERT, ArabicBERT, and XLM-RoBERTa. In addition, this study proposes a hybrid automatic Arabic annotation method based on CAMelBERT, TextBlob and Farasa to automatically classify user comments. A novel dataset, collected from user-generated comments (i.e. reviews on mobile apps), is introduced. This dataset is annotated twice: once using the proposed method and once using human-based annotation. Several experiments are then conducted to evaluate the performance of the proposed model and the proposed annotation method. Experimental results show that the proposed hybrid model outperforms the baseline models, and the proposed annotation method achieves high accuracy, close to that of human-based annotation.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_21-Enhancing_Business_Intelligence_with_Hybrid_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of a Hyperledger-Based Medical Record Data Management Using Amazon Web Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150820</link>
        <id>10.14569/IJACSA.2024.0150820</id>
        <doi>10.14569/IJACSA.2024.0150820</doi>
        <lastModDate>2024-08-30T08:07:26.6730000+00:00</lastModDate>
        
        <creator>Mohammed K Elghoul</creator>
        
        <creator>Sayed F. Bahgat</creator>
        
        <creator>Ashraf S. Hussein</creator>
        
        <creator>Safwat H. Hamad</creator>
        
        <subject>Hyperledger; blockchain; healthcare; data management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Recently, there has been growing excitement around the innovative capabilities of blockchain technology, especially for enhancing security, privacy, and transparency. Its application in various sectors, such as finance and logistics, is intriguing, but its potential in healthcare stands out. Specifically, in the realm of medical data management, blockchain can transform how patient data is protected. Our study presents a new approach to handling digital health records by harnessing the power of Amazon Web Services (AWS). This serverless model is not only cost-effective, with charges only for used resources, but also offers heightened security for blockchain network access. We build a private, permissioned blockchain network with Hyperledger Fabric to control access while ensuring transparency. The new system is validated through rigorous tests of speed, network performance, and multi-user handling, complete with a detailed cost analysis for implementation. The paper further demonstrates the use of the Gatling open-source library to design various experiments for performance measurement.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_20-Performance_Analysis_of_a_Hyperledger_Based_Medical_Record_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Modeling of the Thermally Stressed State of a Partially Insulated Variable Cross-Section Rod</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150819</link>
        <id>10.14569/IJACSA.2024.0150819</id>
        <doi>10.14569/IJACSA.2024.0150819</doi>
        <lastModDate>2024-08-30T08:07:26.6570000+00:00</lastModDate>
        
        <creator>Zhuldyz Tashenova</creator>
        
        <creator>Elmira Nurlybaeva</creator>
        
        <creator>Zhanat Abdugulova</creator>
        
        <creator>Shirin Amanzholova</creator>
        
        <creator>Nazira Zharaskhan</creator>
        
        <creator>Aigerim Sambetova</creator>
        
        <creator>Anarbay Kudaykulov</creator>
        
        <subject>Heat flow; heat transfer; thermal expansion coefficient; thermal conductivity; modulus of elasticity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The formulation of the proposed methods and algorithms facilitates a comprehensive examination of intricate non-stationary thermo-mechanical processes in rods with varying cross-sectional geometries. Furthermore, it advances the theoretical framework for analyzing the thermo-mechanical properties of rod structures utilized in the machinery industry of the Republic of Kazakhstan. The creation of these intellectual products aids in the progression of this sector and fortifies the nation&#39;s sovereignty. This article delineates methods and algorithms for investigating non-stationary thermo-mechanical processes in rods with diverse cross-sectional shapes that influence global manufacturing technologies. The scientific and practical importance of this work lies in the application potential of the developed approach for examining non-stationary thermo-mechanical characteristics of rod-like elements in various installations. The findings also enhance the scientific research direction in mechanical engineering. In conclusion, the article outlines future technological advancements, summarizes the research on non-stationary thermo-mechanical processes in rods with different cross-sectional geometries, and highlights significant economic benefits by facilitating the selection of reliable rods for specified operating conditions. This ensures the continuous and dependable operation of machinery used in mechanical engineering.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_19-Computational_Modeling_of_the_Thermally_Stressed_State.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diabetes Prediction Using Machine Learning with Feature Engineering and Hyperparameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150818</link>
        <id>10.14569/IJACSA.2024.0150818</id>
        <doi>10.14569/IJACSA.2024.0150818</doi>
        <lastModDate>2024-08-30T08:07:26.6400000+00:00</lastModDate>
        
        <creator>Hakim El Massari</creator>
        
        <creator>Noreddine Gherabi</creator>
        
        <creator>Fatima Qanouni</creator>
        
        <creator>Sajida Mhammedi</creator>
        
        <subject>Machine learning; feature engineering; hyperparameter tuning; diabetes prediction; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Diabetes, a chronic illness, has seen an increase in prevalence over the years, posing several health challenges. This study aims to predict diabetes onset using the Pima Indians Diabetes dataset. We implemented several machine learning algorithms, namely Random Forest, Gradient Boosting, XGBoost, LightGBM, and CatBoost. To enhance model performance, we applied a variety of feature engineering techniques, including SelectKBest, Recursive Feature Elimination (RFE), Recursive Feature Elimination with Cross-Validation (RFECV), Forward Feature Selection, and Backward Feature Elimination. RFECV proved to be the most effective method, leading to the selection of the best feature set. In addition, hyperparameter tuning techniques were used to determine the optimal parameters for the models created. Upon training these models with the optimized parameters, XGBoost outperformed the others with an accuracy of 94%, while Random Forest and CatBoost both achieved 92.5%. These results highlight XGBoost&#39;s superior predictive power and the significance of thorough feature engineering and model tuning in diabetes prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_18-Diabetes_Prediction_Using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Data Security in Computer-Assisted Test Applications Through the Advanced Encryption Standard 256-Bit Cipher Block Chaining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150817</link>
        <id>10.14569/IJACSA.2024.0150817</id>
        <doi>10.14569/IJACSA.2024.0150817</doi>
        <lastModDate>2024-08-30T08:07:26.6400000+00:00</lastModDate>
        
        <creator>M. Afridon</creator>
        
        <creator>Agus Tedyyana</creator>
        
        <creator>Fajar Ratnawati</creator>
        
        <creator>Afis Julianto</creator>
        
        <creator>M. Nur Faizi</creator>
        
        <subject>AES256-CBC; data security; computer-assisted test; academic integrity; encryption standards; digital assessment security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the digital education era, the importance of Computer-Assisted Test programs is underscored by their efficiency in conducting assessments. However, the increasing incidence of data breaches and cyberthreats has made the implementation of robust data protection measures imperative. This study explores the adoption of the Advanced Encryption Standard 256-bit Cipher Block Chaining in CAT applications to enhance data security. Known for its strong encryption capabilities, AES-256-CBC is an excellent choice for securing sensitive test data. The research focuses on the application of AES-256-CBC within CAT systems during the independent admission process at Politeknik Negeri Bengkalis, a critical phase where the integrity of exam materials and student data is paramount. We evaluate the effectiveness of AES-256-CBC in encrypting user data and exam materials across different CAT systems, thus preserving data integrity and confidentiality. The implementation of AES-256-CBC helps prevent unauthorized access and manipulation of test results, ensuring a secure online testing environment. This research not only demonstrates the technical implementation of AES-256-CBC but also assesses its impact on enhancing the security posture of CAT applications at Politeknik Negeri Bengkalis. The findings contribute to the broader discussion on data security in educational technology, positioning AES-256-CBC as a potent solution for maintaining academic integrity in digital testing environments.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_17-Optimizing_Data_Security_in_Computer_Assisted_Test_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Systems Documentation: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150816</link>
        <id>10.14569/IJACSA.2024.0150816</id>
        <doi>10.14569/IJACSA.2024.0150816</doi>
        <lastModDate>2024-08-30T08:07:26.6270000+00:00</lastModDate>
        
        <creator>Abdullah A H Alzahrani</creator>
        
        <subject>Software engineering; software systems documentation; software maintenance; software quality; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>In the domain of software engineering, software documentation encompasses the methodical creation and management of artifacts describing software systems. Traditionally linked to software maintenance, its significance extends throughout the entire software development lifecycle. While often regarded as a quintessential indicator of software quality, the perception of documentation as a time-consuming and arduous task frequently leads to its neglect or obsolescence. This research presents a systematic review of the past decade&#39;s literature on software documentation to identify trends and challenges. Employing a rigorous systematic methodology, the study yielded 29 primary studies and a collection of related works. Analysis of these studies revealed two primary themes: issues and best practices, and models and tools. Findings indicate a notable research gap in the area of software documentation. Furthermore, the study underscores several critical challenges, including a dearth of automated tools, immature documentation models, and an insufficient emphasis on forward-looking documentation.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_16-Software_Systems_Documentation_A_Systematic_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Landscape Architecture Design Combining 3D Image Reconstruction Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150815</link>
        <id>10.14569/IJACSA.2024.0150815</id>
        <doi>10.14569/IJACSA.2024.0150815</doi>
        <lastModDate>2024-08-30T08:07:26.6100000+00:00</lastModDate>
        
        <creator>Chen Chen</creator>
        
        <subject>3D image reconstruction; PSO; gardens; RGBD; digital landscape</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>To achieve better digital landscape design and visual presentation effects, this study proposes a digital landscape design method based on improved 3D image reconstruction technology. First, a precise point cloud registration algorithm combining the normal distribution transform and the Trimmed Iterative Closest Point algorithm is proposed. For 3D reconstruction, a color texture method for 3D models is designed, and a visual-scene 3D reconstruction method based on RGBD data is constructed. Second, knowledge networks are introduced to assist in the intelligent generation and planning of plant communities in urban landscape scenes. The knowledge network, established from a plant database, integrates the principles of landscape design and optimizes the layout of landscape plants in urban parks. The proposed algorithm outperformed traditional methods in running speed and accuracy, especially in registration performance: compared with the other two algorithms, its registration time was reduced by 2%, and its errors were reduced by 71.4% and 87.5%, respectively. The panoramic quality of the proposed method fluctuated within a small range at 0.8 or above, while traditional methods exhibited instability and lower quality. The landscape designs generated by the proposed method were more aesthetically pleasing and harmonious with the actual landscape in terms of plant selection and layout. The proposed method follows the principles of eco-friendly design and demonstrates significant potential for application in the field of urban landscape design.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_15-Digital_Landscape_Architecture_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sleep Disorder Diagnosis Through Complex-Morlet-Wavelet Representation Using Bi-GRU and Self-Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150814</link>
        <id>10.14569/IJACSA.2024.0150814</id>
        <doi>10.14569/IJACSA.2024.0150814</doi>
        <lastModDate>2024-08-30T08:07:26.5930000+00:00</lastModDate>
        
        <creator>Mubarak Albathan</creator>
        
        <subject>Deep learning; complex morlet wavelet; bidirectional gated recurrent unit; sleep stage detection; multistage sleep disorder; ensemble-bagged tree classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Sleep disorders pose notable health risks, impacting memory, cognitive performance, and overall well-being. Traditional polysomnography (PSG) used for sleep disorder diagnosis is complex and inconvenient due to the complex multi-class representation of signals. This study introduces an automated sleep-disorder-detection method using electrooculography (EOG) and electroencephalography (EEG) signals to address the gaps in automated, real-time, and noninvasive sleep-disorder diagnosis. Traditional methods rely on complex PSG analysis, whereas the proposed method simplifies the process, reducing reliance on cumbersome equipment and specialized settings. The preprocessed EEG and EOG signals are transformed into a two-dimensional time-frequency image using a complex-Morlet-wavelet (CMW) transform, which captures both the frequency and time characteristics of the signals. Afterwards, features are extracted using a bidirectional gated recurrent unit (Bi-GRU) with a self-attention layer, and an ensemble-bagged tree classifier (EBTC) is used to classify sleep disorders correctly and identify them efficiently. The overall system combines EOG and EEG signal features to accurately classify insomnia, narcolepsy, nocturnal frontal lobe epilepsy (NFLE), periodic leg movement (PLM), rapid-eye-movement sleep behavior disorder (RBD), sleep-disordered breathing (SDB), and healthy subjects, with success rates of 99.7%, 97.6%, 95.4%, 94.5%, 96.5%, 98.3%, and 94.1%, respectively. Using 10-fold cross-validation, the proposed method yields 96.59% accuracy and an AUC of 0.966 for classification of sleep disorders into multistage classes. The proposed system assists medical experts in automated sleep-disorder diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_14-Sleep_Disorder_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Optimization of Support Vector Machine with Adversarial Grasshopper Optimization for Heart Disease Diagnosis and Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150813</link>
        <id>10.14569/IJACSA.2024.0150813</id>
        <doi>10.14569/IJACSA.2024.0150813</doi>
        <lastModDate>2024-08-30T08:07:26.5930000+00:00</lastModDate>
        
        <creator>Nan Tang</creator>
        
        <creator>Lele Wang</creator>
        
        <creator>Kangming Li</creator>
        
        <creator>Zhen Liu</creator>
        
        <creator>Yanan Dai</creator>
        
        <creator>Ji Hao</creator>
        
        <creator>Qingdui Zhang</creator>
        
        <creator>Huamei Sun</creator>
        
        <creator>Chunmei Qi</creator>
        
        <subject>Heart disease predictions; Support Vector Machine; Grasshopper Optimization Algorithm; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The World Health Organization reports that cardiac disorders result in approximately 1.02 million deaths. Over recent years, heart disorders, also known as cardiovascular diseases (CVD), have significantly influenced the medical sector due to their immense global impact and high level of danger. Unfortunately, accurate prognosis of heart problems, as well as continuous monitoring of the patient for 24 hours, is unattainable due to the extensive expertise and time required. The management and identification of cardiac disease pose significant challenges, particularly in impoverished or developing nations, and the absence of adequate medical attention or prompt disease management can result in the patient&#39;s death. This study presents a novel optimization technique for diagnosing cardiac illness utilizing a Support Vector Machine (SVM) and the Grasshopper Optimization Algorithm (GOA). The primary objective of this approach is to identify the most impactful characteristics and enhance the efficiency of the SVM model. The GOA, which draws inspiration from the natural movements of grasshoppers, enhances the search for features in the data and effectively reduces the feature set while maintaining prediction accuracy. The ECG data were first pre-processed and then classified using the combined SVM and GOA approach. The findings demonstrated that the suggested approach markedly enhances the effectiveness and precision of heart disease diagnosis through meticulous feature selection and model optimization. This approach can serve as an efficient tool for early detection of heart disease by simplifying the process and enhancing its speed.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_13-Performance_Optimization_of_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Outpatient No-Show Appointments Using Machine Learning Algorithms for Pediatric Patients in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150812</link>
        <id>10.14569/IJACSA.2024.0150812</id>
        <doi>10.14569/IJACSA.2024.0150812</doi>
        <lastModDate>2024-08-30T08:07:26.5800000+00:00</lastModDate>
        
        <creator>Abdulwahhab Alshammari</creator>
        
        <creator>Fahad Alotaibi</creator>
        
        <creator>Sana Alnafrani</creator>
        
        <subject>No-show; pediatric; machine learning; algorithms; prediction; outpatients</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Patient no-shows are prevalent in pediatric outpatient visits, leading to underutilized medical resources, increased healthcare costs, reduced clinic efficiency, and decreased access to care. The use of machine learning techniques provides insights to mitigate this problem. This study aimed to develop a predictive model for patient no-shows at the Ministry of National Guard Health-Affairs, Saudi Arabia, and evaluate the results of various machine learning algorithms in predicting these events. Four machine learning algorithms - Gradient Boosting, AdaBoost, Random Forest, and Naive Bayes - were used to create predictive models for patient no-shows. Each model underwent extensive parameter tuning and reliability assessment to ensure robust performance, including sensitivity analysis and cross-validation. Gradient Boosting achieved the highest area under the receiver operating characteristic curve (AUC) of 0.902 and Classification Accuracy (CA) of 0.944, while the AdaBoost model achieved an AUC of 0.812 and CA of 0.927. The Naive Bayes and Random Forest models achieved AUCs of 0.677 and 0.889 and CAs of 0.915 and 0.937, respectively. The confusion matrix demonstrated high true-positive rates for no-shows for the Gradient Boosting and Random Forest models, while Naive Bayes had the lowest values. The Gradient Boosting and Random Forest models were most effective in predicting patient no-shows. These models could enhance outpatient clinic efficiency by predicting no-shows. Future research can further refine these models and investigate practical strategies for their implementation.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_12-Prediction_of_Outpatient_No_Show_Appointments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Algorithms to Analyse Smart City Traffic Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150811</link>
        <id>10.14569/IJACSA.2024.0150811</id>
        <doi>10.14569/IJACSA.2024.0150811</doi>
        <lastModDate>2024-08-30T08:07:26.5630000+00:00</lastModDate>
        
        <creator>Praveena Kumari M K</creator>
        
        <creator>Manjaiah D H</creator>
        
        <creator>Ashwini K M</creator>
        
        <subject>Clustering; smart city; traffic; analyze</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Urban transportation systems encounter significant challenges in extracting meaningful traffic patterns from extensive historical datasets, a critical aspect of smart city initiatives. This paper addresses the challenge of analyzing and understanding these patterns by employing various clustering techniques on hourly urban traffic flow data. The principal aim is to develop a model that can effectively analyze temporal patterns in urban traffic, uncovering underlying trends and factors influencing traffic flow, which are essential for optimizing smart city infrastructure. To achieve this, we applied DBSCAN, K-Means, Affinity Propagation, Mean Shift, and Gaussian Mixture clustering techniques to the traffic dataset of Aarhus, Denmark&#39;s second-largest city. The performance of these clustering methods was evaluated using the Silhouette Score and Dunn Index, with DBSCAN emerging as the most effective algorithm in terms of cluster quality and computational efficiency. The study also compares the training times of the algorithms, revealing that DBSCAN, K-Means, and Gaussian Mixture offer faster training times, while Affinity Propagation and Mean Shift are more computationally intensive. The results demonstrate that DBSCAN not only provides superior clustering performance but also operates efficiently, making it an ideal choice for analyzing urban traffic patterns in large datasets. This research emphasizes the importance of selecting appropriate clustering techniques for effective traffic analysis and management within smart city frameworks, thereby contributing to more efficient urban planning and infrastructure development.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_11-Clustering_Algorithms_to_Analyse_Smart_City_Traffic_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantitative Measurement and Preference Research of Urban Landscape Environmental Image Based on Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150810</link>
        <id>10.14569/IJACSA.2024.0150810</id>
        <doi>10.14569/IJACSA.2024.0150810</doi>
        <lastModDate>2024-08-30T08:07:26.5470000+00:00</lastModDate>
        
        <creator>Yan Wang</creator>
        
        <subject>Computer vision; urban landscape; environmental image; quantification; measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>At present, research on landscape preferences mostly relies on traditional questionnaire surveys to obtain public aesthetic attitudes, and the analysis still depends on manual coding with small sample sizes. Research applying network big data and computer vision technology to landscape preference remains rare, and both its scope and its algorithmic applications are limited. To improve the quantitative measurement and preference analysis of urban landscape environmental images, the algorithm proposed in this paper combines a two-dimensional visual domain analysis module with a three-dimensional visual analysis module, making full use of the advantages of both, and analyzes urban digital models of different accuracies from large scale down to medium and micro scales. Through image classification and content recognition, image semantic segmentation, and image color quantification, landscape feature information is mined from pictures, and dimensions of the landscape image are proposed on this basis. Experimental analysis verifies that the proposed method is effective: it is suitable not only for visual analysis of landmark buildings and structures in cities, but also for analyzing the visual characteristics of natural landscapes as urban images. The proposed quantitative method of urban visual landscape analysis can therefore provide reliable data support for subsequent urban design work.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_10-Quantitative_Measurement_and_Preference_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applied to Art and Design Scene Visual Comprehension and Recognition Algorithm Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150809</link>
        <id>10.14569/IJACSA.2024.0150809</id>
        <doi>10.14569/IJACSA.2024.0150809</doi>
        <lastModDate>2024-08-30T08:07:26.5330000+00:00</lastModDate>
        
        <creator>Yuxin Shi</creator>
        
        <subject>Art design; scene visual understanding and recognition; multilayer perceptron; pond goose algorithm; image dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Combining advanced intelligent algorithms to improve scene visual understanding and recognition for art design can provide artists with more inspiration and creative material, improve the efficiency and quality of artistic creation, and offer scientific, accurate references for artworks. Focusing on the problem of scene visual understanding and recognition in art design, this paper proposes a method that uses an intelligent optimization algorithm to optimize the structural parameters of a multilayer perceptron. First, scene visual recognition methods are outlined and analyzed, and an application scheme of the multilayer perceptron for the understanding and recognition problem is designed. Then, to address problems of the multilayer perceptron model such as poor training generalization, its training parameters are optimized with the Pond&#39;s optimization algorithm, and a visual understanding and recognition scheme for art design scenes is designed. Finally, the proposed model is verified on an image dataset, where its scene visual understanding and recognition accuracy reaches 0.98, higher than that of the compared models. This research solves the problem of scene visual understanding and recognition and applies it to the field of art design to improve the efficiency of art design assistance.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_9-Applied_to_Art_and_Design_Scene_Visual_Comprehension.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Generation Using StyleVGG19-NST Generative Adversarial Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150808</link>
        <id>10.14569/IJACSA.2024.0150808</id>
        <doi>10.14569/IJACSA.2024.0150808</doi>
        <lastModDate>2024-08-30T08:07:26.5170000+00:00</lastModDate>
        
        <creator>Dorcas Oladayo Esan</creator>
        
        <creator>Pius Adewale Owolawi</creator>
        
        <creator>Chunling Tu</creator>
        
        <subject>Artworks; VGG19; Neural Style Transfer; Generative Adversarial Network; inception score; StyleGAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Creating new image styles from the content of existing images is challenging for conventional Generative Adversarial Networks (GANs) due to their inability to generate high-resolution, high-quality images. This study aims to create high-quality images that seamlessly blend the style of one image with another without losing key features to artefacts. The research integrates Style Generative Adversarial Networks with the Visual Geometry Group 19 (VGG19) network and Neural Style Transfer (NST) to address this issue: StyleGAN is employed to generate high-quality images, the VGG19 model extracts features from the image, and NST performs the style transfer. Experiments were conducted on curated COCO masks and the publicly available CelebFace art image datasets. Contrasted with alternative techniques, the proposed approach produced an Inception Score (IS) of 16.57, a Fréchet Inception Distance (FID) of 18.33, a Peak Signal-to-Noise Ratio (PSNR) of 28.33, and a Structural Similarity Index Measure (SSIM) of 0.93 on the CelebFace dataset, while the curated dataset yielded an IS of 11.67, an FID of 21.49, a PSNR of 29.98, and an SSIM of 0.98. These results indicate that, with the proposed method, artists can generate a variety of artistic styles with less effort and without losing the key features of the artworks.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_8-Image_Generation_Using_StyleVGG19_NST_Generative_Adversarial_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance and Accuracy Research of the Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150807</link>
        <id>10.14569/IJACSA.2024.0150807</id>
        <doi>10.14569/IJACSA.2024.0150807</doi>
        <lastModDate>2024-08-30T08:07:26.4870000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <subject>Large language models; artificial intelligence; ChatGPT; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Starting at the end of 2022, there has been massive global interest in Artificial Intelligence and, in particular, in the technology of large language models. These models have brought the solution of many daily problems of varying complexity within reach of every individual, whether in an academic, business, or social setting. A multitude of digital products have begun to use large language models to offer new functionality, such as intelligent messaging applications trained to respond efficiently according to the specific parameters of a company, virtual assistants for programmers (GitHub Copilot), video call summarization (Zoom), and the interpretation and rapid extraction of conclusions from massive data (Big Data). These are just a few of the many uses of these technologies. The general objective of this paper is therefore a comparative analysis of three large language models: ChatGPT, Gemini, and Llama3. Each model&#39;s strengths and constraints are analyzed, offering insights into their optimal use cases. This analysis provides a comprehensive understanding of the current state of large language models powered by deep learning, capable of executing various natural language processing (NLP) tasks, guiding future developments and applications in the field of artificial intelligence (AI).</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_7-Performance_and_Accuracy_Research_of_the_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Logistics Transportation Vehicle Monitoring and Scheduling Based on the Internet of Things and Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150806</link>
        <id>10.14569/IJACSA.2024.0150806</id>
        <doi>10.14569/IJACSA.2024.0150806</doi>
        <lastModDate>2024-08-30T08:07:26.4700000+00:00</lastModDate>
        
        <creator>Kang Wang</creator>
        
        <creator>Xin Wang</creator>
        
        <subject>Internet of Things; cloud computing; logistics and transportation; vehicle monitoring; vehicle scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This paper addresses challenges in the logistics industry, particularly information lag, inefficient resource allocation, and poor management, exacerbated by global economic integration and e-commerce growth. An advanced logistics and transportation vehicle monitoring and scheduling system is designed using IoT and cloud computing technologies. This system integrates Yolov5 for real-time vehicle location, DeepSort for continuous tracking, and a space-time convolutional network for vehicle status analysis, forming a comprehensive monitoring model. An improved multi-objective particle swarm optimization algorithm optimizes vehicle scheduling, balancing objectives like minimizing travel distance, time, and carbon emissions. Experimental results demonstrate superior performance in real-time monitoring accuracy, scheduling efficiency, arrival time prediction, road condition forecasting, and failure risk prediction. Notable achievements include 95% vehicle utilization, a 0.25 RMSE for predicted arrival times, and a 0.20 MAE for failure risk prediction. While the system significantly enhances operational efficiency and supports resource optimization, future work will focus on data security, system stability, and practical deployment challenges. This research contributes to transforming the logistics industry into a smarter, greener, and more efficient sector.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_6-Logistics_Transportation_Vehicle_Monitoring_and_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Customer Behavior Characteristics and Optimization of Online Advertising Based on Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150805</link>
        <id>10.14569/IJACSA.2024.0150805</id>
        <doi>10.14569/IJACSA.2024.0150805</doi>
        <lastModDate>2024-08-30T08:07:26.4530000+00:00</lastModDate>
        
        <creator>Zhenyan Shang</creator>
        
        <creator>Bi Ge</creator>
        
        <subject>Real-time online advertising; ARMA-XGBoost model; information asymmetry; deep reinforcement learning decision-making behavior; Transfer Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>With the shift from traditional media to online advertising, real-time strategies have become crucial and continue to evolve to meet contemporary demands. Advertisers strive to succeed in the online advertising evaluations performed by demand-side platforms (DSPs) to secure display opportunities. Discrepancies in information evaluation can impact click-through rates, emphasizing the need for precise prediction models in asymmetric-information contexts. Time dynamics significantly influence online ad click-through rates, with rest hours outperforming working hours. This study introduces the ARMA model to refine click predictions by preprocessing hit data and employing a single XGBoost model. Furthermore, a reinforcement learning model is developed to explore online advertising strategies amid information imbalances. Data is segmented into training (70%), validation (15%), and test (15%) sets, with model parameters optimized using the DQN algorithm over 48 hours. Validation and testing on separate datasets of 15,000 entries each yield a model accuracy of 0.85 and a recall of 0.82. The incorporation of regret minimization algorithms enhances the reward functions in deep reinforcement learning. Leveraging Tencent data, a comparative analysis evaluates whether advertisers’ click rates are overrated, underrated, or accurately predicted by DSPs. Findings indicate that smart customer behavior characteristics outperform DQN, converging swiftly to optimal solutions under complete information. The smart characteristics exhibit stability and flexibility, with human-machine collaboration circumventing the drawbacks of random exploration. Transfer Learning amalgamates experimentation with real-world insights, bolstering algorithm adaptability for intelligent decision-making tools in enterprises.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_5-Analysis_of_Customer_Behavior_Characteristics_and_Optimization_of_Online_Advertising.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Identification and Evaluation of Rural Landscape Features Based on U-net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150804</link>
        <id>10.14569/IJACSA.2024.0150804</id>
        <doi>10.14569/IJACSA.2024.0150804</doi>
        <lastModDate>2024-08-30T08:07:26.4370000+00:00</lastModDate>
        
        <creator>Ling Sun</creator>
        
        <creator>Jun Liu</creator>
        
        <creator>Yi Qu</creator>
        
        <creator>Jiashun Jiang</creator>
        
        <creator>Bin Huang</creator>
        
        <subject>Rural landscape; foreground area extraction; deep learning; phase unwrapping; ResU-net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The study delves into the landscape feature identification method and its application in Xijingyu Village, investigating landscape composition elements. Analyzing rural landscape structure holistically aids in dividing landscape characteristic zoning maps, essential for guiding rural landscape and territorial spatial planning. By utilizing GIS software for superposition analysis based on topography, geology, vegetation cover, and land use, the boundary of Xijingyu Village is further refined. To address the inefficiencies of common foreground extraction algorithms that rely heavily on rural landscape images, a novel approach is introduced. This new algorithm focuses on directly extracting foreground areas from rural landscape interference images by leveraging stripe sinusoidal characteristics. An adaptive gray scale mask is established to capture the sinusoidal changes in interference stripes, facilitating the direct extraction of foreground areas through a calculated blend of masks. In evaluation, the newly proposed algorithm demonstrates significant improvements in operation efficiency while maintaining accuracy. Specific enhancements include classifying pixel gray values into intervals and recalibrating them to enhance analysis metrics. Compared to traditional methods, the algorithm showcases advantageous enhancements across various parameters, such as PRI, GCE, and VOI. Moreover, to address challenges in unwrapping low-quality rural landscape phase areas, a ResU-net convolutional neural network is employed for phase unwrapping; by constructing image datasets of interference stripe wrapping and unwrapping alongside noise simulations for model training, the feasibility of the network structure is verified. The study&#39;s innovative methodologies aim to optimize rural landscape analysis and planning processes by enhancing accuracy and efficiency in landscape feature identification, foreground area extraction, and phase unwrapping of rural landscapes. These advancements offer substantial improvements in quality and precision for territorial spatial planning and rural landscape management practices.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_4-Automatic_Identification_and_Evaluation_of_Rural_Landscape_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Personalized Hybrid Tourist Destination Recommendation System: An Integration of Emotion and Sentiment Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150803</link>
        <id>10.14569/IJACSA.2024.0150803</id>
        <doi>10.14569/IJACSA.2024.0150803</doi>
        <lastModDate>2024-08-30T08:07:26.4370000+00:00</lastModDate>
        
        <creator>Suphitcha Chanrueang</creator>
        
        <creator>Sotarat Thammaboosadee</creator>
        
        <creator>Hongnian Yu</creator>
        
        <subject>Recommendations; hybrid recommendation system; Collaborative Filtering; Content-based Filtering; social media data; travel planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>This research introduces a personalized hybrid tourist destination recommendation system tailored for the growing trend of independent travel, which leverages social media data for trip planning. The system sets itself apart from traditional models by incorporating both emotional and sentiment data from social platforms to create customized travel experiences. The proposed approach utilizes Machine Learning techniques to improve recommendation accuracy, employing Collaborative Filtering for emotional pattern recognition and Content-based Filtering for sentiment-driven destination analysis. This integration results in a sophisticated weighted hybrid model that effectively balances the strengths of both filtering techniques. Empirical evaluations produced RMSE, MAE, and MSE scores of 0.301, 0.317, and 0.311, respectively, indicating the system&#39;s superior performance in predicting user preferences and interpreting emotional data. These findings highlight a significant advancement over previous recommendation systems, demonstrating how the integration of emotional and sentiment analysis can not only improve accuracy but also enhance user satisfaction by providing more personalized and contextually relevant travel suggestions. Furthermore, this study underscores the broader implications of such analysis in various industries, opening new avenues for future research and practical implementation in fields where personalized recommendations are crucial for enhancing user experience and engagement.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_3-A_Personalized_Hybrid_Tourist_Destination_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Low-Resource Zero-Shot Event Argument Classification with Flash-Attention and Global Constraints Enhanced ALBERT Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150802</link>
        <id>10.14569/IJACSA.2024.0150802</id>
        <doi>10.14569/IJACSA.2024.0150802</doi>
        <lastModDate>2024-08-30T08:07:26.4230000+00:00</lastModDate>
        
        <creator>Tongyue Sun</creator>
        
        <creator>Jiayi Xiao</creator>
        
        <subject>Artificial intelligence; natural language processing; event argument classification; zero-shot learning; flash-Attention; global constraints; low-resource</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>Event Argument Classification (EAC) is an essential subtask of event extraction. Most previous supervised models rely on costly annotations, and reducing the demand for computational and data resources in resource-constrained environments is a significant challenge within the field. We propose a Zero-Shot EAC model, ALBERT-F, which leverages the efficiency of the ALBERT architecture combined with the Flash-Attention mechanism. This novel integration aims to address the limitations of traditional EAC methods, which often require extensive manual annotations and significant computational resources. The ALBERT-F model simplifies the design by factorizing embedding parameters, while Flash-Attention enhances computational speed and reduces memory access overhead. With the addition of global constraints and prompting, ALBERT-F improves the generalizability of the model to unseen events. Our experiments on the ACE dataset show that ALBERT-F outperforms the Zero-shot BERT baseline by achieving at least a 3.4% increase in F1 score. Moreover, the model demonstrates a substantial reduction in GPU memory consumption by 75.1% and processing time by 33.3%, underscoring its suitability for environments with constrained resources.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_2-Optimizing_Low_Resource_Zero_Shot_Event_Argument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Learning with Sleep Mode Management to Enhance Anomaly Detection in IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150801</link>
        <id>10.14569/IJACSA.2024.0150801</id>
        <doi>10.14569/IJACSA.2024.0150801</doi>
        <lastModDate>2024-08-30T08:07:26.4070000+00:00</lastModDate>
        
        <creator>Khawlah Harahsheh</creator>
        
        <creator>Rami Al-Naimat</creator>
        
        <creator>Malek Alzaqebah</creator>
        
        <creator>Salam Shreem</creator>
        
        <creator>Esraa Aldreabi</creator>
        
        <creator>Chung-Hao Chen</creator>
        
        <subject>IoT; IDS; machine learning; ensemble technique; sleep-awake cycle; cybersecurity; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(8), 2024</description>
        <description>The rapid proliferation of Internet of Things (IoT) devices has underscored the critical need for energy-efficient cybersecurity measures. This presents the dual challenge of maintaining robust security while minimizing power consumption. Thus, this paper proposes enhancing the machine learning performance through Ensemble Techniques with Sleep Mode Management (ELSM) approach for IoT Intrusion Detection Systems (IDS). The main challenge lies in the high-power consumption attributed to continuous monitoring in traditional IDS setups. ELSM addresses this challenge by introducing a sophisticated sleep-awake mechanism, activating the IDS system only during anomaly detection events, effectively minimizing energy expenditure during periods of normal network operation. By strategically managing the sleep modes of IoT devices, ELSM significantly conserves energy without compromising security vigilance. Moreover, achieving high detection accuracy with limited computational resources poses another problem in IoT security. To overcome this challenge, ELSM employs ensemble learning techniques with a novel voting mechanism. This mechanism integrates the outputs of six different anomaly detection algorithms, using their collective intelligence to enhance prediction accuracy and overall system performance. By combining the strengths of multiple algorithms, ELSM adapts dynamically to evolving threat landscapes and diverse IoT environments. The efficacy of the proposed ELSM model is rigorously evaluated using the IoT Botnets Attack Detection Dataset, a benchmark dataset representing real-world IoT security scenarios, where it achieves an impressive 99.97% accuracy in detecting intrusions while efficiently managing power consumption.</description>
        <description>http://thesai.org/Downloads/Volume15No8/Paper_1-Ensemble_Learning_with_Sleep_Mode_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview of the Complex Landscape and Future Directions of Ethics in Light of Emerging Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507142</link>
        <id>10.14569/IJACSA.2024.01507142</id>
        <doi>10.14569/IJACSA.2024.01507142</doi>
        <lastModDate>2024-07-31T12:06:35.5030000+00:00</lastModDate>
        
        <creator>Marianne A. Azer</creator>
        
        <creator>Rasha Samir</creator>
        
        <subject>Artificial intelligence; cybersecurity; data privacy; digital ethics; ethical considerations; information security; machine learning; technology ethics; transparency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In today’s rapidly evolving technological landscape, the ethical dimensions of information technology (IT) have become increasingly prominent, influencing everything from algorithmic decision-making to data privacy and cybersecurity. This paper offers a thorough examination of the multifaceted ethical considerations inherent in information technology, spanning various domains such as artificial intelligence (AI), big data analytics, cybersecurity practices, quantum computing, human behavior, environmental impact, and more. Through an in-depth analysis of real-world cases and existing research literature, this paper explores the ethical dilemmas and challenges encountered by stakeholders across the IT ecosystem. Central to the discussion are themes of transparency, accountability, fairness, and privacy protection, which are crucial for fostering trust and ethical behavior in the design, deployment, and governance of IT systems. The paper underscores the importance of integrating ethical principles into technological innovation, emphasizing the need for proactive measures to mitigate biases, uphold individual rights, and promote equitable outcomes. It also explores the ethical implications of emerging technologies such as AI, quantum computing, and the Internet of Things (IoT), shedding light on the potential risks and benefits they entail. Furthermore, the paper outlines future directions and strategies for advancing ethical practices in IT, advocating for multidisciplinary collaboration, global regulatory frameworks, corporate social responsibility initiatives, and continuous ethical inquiry. By providing a comprehensive roadmap for navigating ethical considerations in IT, this paper aims to empower policymakers, industry professionals, researchers, and educators to make informed decisions and promote a more ethical and sustainable digital future.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_142-Overview_of_the_Complex_Landscape_and_Future_Directions_of_Ethics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SecureTransfer: A Transfer Learning Based Poison Attack Detection in ML Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507141</link>
        <id>10.14569/IJACSA.2024.01507141</id>
        <doi>10.14569/IJACSA.2024.01507141</doi>
        <lastModDate>2024-07-31T12:06:35.5030000+00:00</lastModDate>
        
        <creator>Archa A T</creator>
        
        <creator>K. Kartheeban</creator>
        
        <subject>Poison attacks; machine learning security; transfer learning; generative adversarial networks; convolutional neural networks; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Critical systems are increasingly being integrated with machine learning (ML) models, which exposes them to a range of adversarial attacks. The vulnerability of machine learning systems to hostile attacks has drawn considerable attention in recent years. When harmful input is added to the training set, it can lead to poison attacks, which can seriously impair model performance and threaten system security. Poison attacks pose a serious risk because adversaries inject malicious data into the training set, influencing the model’s performance during inference. Identifying these poison attacks is necessary to preserve the reliability and security of machine learning systems. A novel method based on transfer learning is proposed to identify poisoning attacks in machine learning systems. The methodology for generating poison data is first created and then implemented using transfer learning techniques; the poisonous data is detected using the pre-trained VGG16 model. The method can also be used in distributed machine learning systems with data and computation scattered across several nodes. Benchmark datasets are used to evaluate this strategy and demonstrate the effectiveness of the proposed method. Real-time applications, advantages, limitations, and future work are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_141-SecureTransfer_A_Transfer_Learning_Based_Poison_Attack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unexpected Trajectory Detection Based on the Geometrical Features of AIS-Generated Ship Tracks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507140</link>
        <id>10.14569/IJACSA.2024.01507140</id>
        <doi>10.14569/IJACSA.2024.01507140</doi>
        <lastModDate>2024-07-31T12:06:35.4730000+00:00</lastModDate>
        
        <creator>Wayan Mahardhika Wijaya</creator>
        
        <creator>Yasuhiro Nakamura</creator>
        
        <subject>Automatic identification system; vessel trajectory classification; unexpected behavior detection; data mining; data-driven decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Due to the efficiency and reliability of delivering goods by ships, maritime transport has been the backbone of global trade. In normal circumstances, a ship’s voyage is expected to assure the safety of life at sea, efficient and safe navigation, and protection of the maritime environment. However, ships may demonstrate unexpected behavior in certain situations, such as machinery malfunction, unexpected bad weather, and other emergencies, as well as involvement in illicit activities. These situations pose threats to the safety and security of maritime transport. As the threats expand, manual surveillance, which involves extensive labor and is prone to oversight, becomes inefficient; thus, automated surveillance systems are required. This paper proposes a method to detect the unexpected behavior of ships based on Automatic Identification System (AIS) data. The method exploits the geometrical features of AIS-generated trajectories to identify unexpected trajectories, which may deviate from the common routes, loiter, or both. It introduces novel formulas for calculating trajectory redundancy and curvature features. DBSCAN clustering is applied to these features to classify trajectories as expected or unexpected. Unlike existing methods, the proposed technique does not require trajectory-to-image conversion or training on labeled datasets. The technique was tested on real-world AIS data from the South China Sea, Western Indonesia, Singapore, and Malaysian waters between July 2021 and February 2022. The experimental results demonstrate the method’s feasibility in detecting deviating and loitering behaviors. Evaluation on a labeled dataset shows superior performance compared to existing loitering detection methods across multiple metrics, with 99% accuracy and 100% precision in identifying loitering trajectories. The proposed method aims to provide maritime authorities and fleet owners with an efficient tool for monitoring ship behaviors in real time regarding safety, security, and economic concerns.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_140-Unexpected_Trajectory_Detection_Based_on_the_Geometrical_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Spatial Data Based on K-means and Vorono&#239; Diagram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507139</link>
        <id>10.14569/IJACSA.2024.01507139</id>
        <doi>10.14569/IJACSA.2024.01507139</doi>
        <lastModDate>2024-07-31T12:06:35.4570000+00:00</lastModDate>
        
        <creator>Moubaric KABORE</creator>
        
        <creator>B&#233;n&#233;-wend&#233; Odilon Isa&#239;e ZOUNGRANA</creator>
        
        <creator>Abdoulaye SERE</creator>
        
        <subject>Classification; K-means; vorono&#239; diagram; GIS; big data; data research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper focuses on the problem of the time taken by different algorithms to search data in a large database. The execution time of these algorithms becomes high when searching non-redundant data distributed across different database sites, where the search consists of reading each site to find the data. The main purpose is to establish adapted models to represent data in order to facilitate data search. This paper describes a classification of spatial data using a combination of the K-means algorithm and the Vorono&#239; diagram to determine different clusters, representing different groups of database sites. The classification benefits from the K-means algorithm, which defines the best number and the centers of the required clusters, and from the Vorono&#239; diagram, which delineates the areas with margins, representing the model for organizing data. A composition of the K-means algorithm followed by the Vorono&#239; diagram has been implemented on simulated data in order to obtain the clusters, so that future parallel searches can be carried out on different clusters to improve the execution time. Applied to e-health in GIS, a better distribution of medical centers and available services will contribute strongly to population well-being.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_139-Classification_of_Spatial_Data_Based_on_K_means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Log-Driven Conformance Checking Approximation Method Based on Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507138</link>
        <id>10.14569/IJACSA.2024.01507138</id>
        <doi>10.14569/IJACSA.2024.01507138</doi>
        <lastModDate>2024-07-31T12:06:35.4270000+00:00</lastModDate>
        
        <creator>Huan Fang</creator>
        
        <creator>Sichen Zhang</creator>
        
        <creator>Zhenhui Mei</creator>
        
        <subject>Conformance checking; fitness; log driven; machine learning; deep learning; probabilities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Conformance checking techniques are usually used to determine to what degree a process model and real execution traces correspond to each other. Most state-of-the-art techniques calculate an exact conformance value under the assumption that the reference model of a business system is known. However, in many real applications, the reference model is unknown or has changed for various reasons, so the initially known reference model is no longer feasible, and only some historical event execution traces with their corresponding conformance values are retained. This paper proposes a log-driven conformance checking method that tackles two issues: first, it presents an approach to calculate an approximate conformance checking value much faster than existing methods by using machine learning; second, it presents an approach to conduct conformance checking in probabilistic circumstances. Both approaches assume that no reference model is known and that only historical event traces and their corresponding fitness values can be used as training data. Specifically, for large event data, the computing time of the proposed methods is shorter than that of alignment-based methods; the baseline methods include k-nearest neighbors, random forest, quadratic discriminant analysis, linear discriminant analysis, gated recurrent units, and long short-term memory. Experimental results show that adding a machine learning classification vector to the training set as preprocessing can yield a higher conformance checking value compared with training samples without the classification vector. Simultaneously, when applied to processes with probabilities, the proposed log-driven conformance checking approach can detect more inconsistent behaviors. The proposed method provides a new approach to improve the efficiency and accuracy of conformance checking. It enhances the management efficiency of business processes, potentially reducing costs and risks, and can be applied to conformance checking of complex processes in the future.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_138-Log_Driven_Conformance_Checking_Approximation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection of Offensive Images and Sarcastic Memes in Social Media Through NLP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507137</link>
        <id>10.14569/IJACSA.2024.01507137</id>
        <doi>10.14569/IJACSA.2024.01507137</doi>
        <lastModDate>2024-07-31T12:06:35.4100000+00:00</lastModDate>
        
        <creator>Tummala Purnima</creator>
        
        <creator>Ch Koteswara Rao</creator>
        
        <subject>Deep learning; natural language processing; offensive images; sarcastic memes; toxic content detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In this digital era, social media is one of the key platforms for collecting customer feedback and reflecting their views on various aspects, including products, services, brands, events, and other topics of interest. However, there is a rise of sarcastic memes on social media, which often convey a meaning contrary to the implied sentiment and challenge traditional machine learning identification techniques. Memes, blending text and visuals on social media, are difficult to discern solely from the captions or images, as their humor often relies on subtle contextual cues requiring a nuanced understanding for accurate interpretation. Our study introduces Offensive Images and Sarcastic Memes Detection to address this problem. Our model employs various techniques to identify sarcastic memes and offensive images. The model uses Optical Character Recognition (OCR) and bidirectional long short-term memory (Bi-LSTM) for sarcastic meme detection. For offensive image detection, the model employs Autoencoder LSTM, deep learning models such as DenseNet and MobileNet, and computer vision techniques like the Feature Fusion Process (FFP) based on Transfer Learning (TL) with image augmentation. The study showcases the effectiveness of the proposed methods in achieving high accuracy in detecting offensive content across different modalities, such as text, memes, and images. Based on tests conducted on real-world datasets, our model has demonstrated an accuracy rate of 92% on the Hateful Memes Challenge dataset. The proposed methodology has also achieved a Testing Accuracy (TA) of 95.7% for DenseNet with transfer learning on the NPDI dataset and 95.12% on the Pornography dataset. Moreover, implementing Transfer Learning with a Feature Fusion Process (FFP) has resulted in a TA of 99.45% for the NPDI dataset and 98.5% for the Pornography dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_137-Automated_Detection_of_Offensive_Images_and_Sarcastic_Memes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Effective Diagnostic and Therapeutic Strategies for Deep Vein Thrombosis in High-Risk Patients: A Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507136</link>
        <id>10.14569/IJACSA.2024.01507136</id>
        <doi>10.14569/IJACSA.2024.01507136</doi>
        <lastModDate>2024-07-31T12:06:35.3770000+00:00</lastModDate>
        
        <creator>Pavihaa Lakshmi B</creator>
        
        <creator>Vidhya S</creator>
        
        <subject>Diagnosis; DVT; game-based therapy; head-mounted display; rehabilitation therapy; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Blood clots formed in blood vessels are termed thrombi, and diagnosing a thrombus at an early stage plays a pivotal role. Most commonly, blood clots occur in the calf muscles of the lower extremities, which leads to Deep Vein Thrombosis (DVT). Vulnerable patients are those on prolonged bed rest post-surgery and those already affected by stroke, acute ischemia, cerebral palsy, etc. According to a report by the World Health Organization (WHO), nearly 900,000 people are affected annually, with approximately 100,000 dying each year. At present, blood clots can be identified using blood tests such as D-dimer tests and cardiac biomarkers, and imaging modalities such as Doppler ultrasound, venography, magnetic resonance imaging (MRI), and computed tomography (CT). We have elaborately discussed the diagnostic yield and incidence of DVT, focusing on the risk factors relevant to DVT diagnostic and therapeutic techniques. The research addresses DVT incidence, diagnostic strategies, and therapeutic interventions; the efficacy of VR rehabilitation and treatment modalities; challenges related to artificial intelligence (AI)-based treatments; and the potential benefits of different game types in DVT management. This study aims to bridge the gap between research and real-time application by providing a wide range of strategies that comprise both basic and state-of-the-art techniques. It is a vital source for researchers and experts, providing insights into the effective development of advanced medical devices. The study concludes with a summary of point-of-care diagnosis, rehabilitation therapy, and an exploration of various game types, providing future insights.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_136-Exploring_Effective_Diagnostic_and_Therapeutic_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning and Web Applications Vulnerabilities Detection: An Approach Based on Large Language Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507135</link>
        <id>10.14569/IJACSA.2024.01507135</id>
        <doi>10.14569/IJACSA.2024.01507135</doi>
        <lastModDate>2024-07-31T12:06:35.3630000+00:00</lastModDate>
        
        <creator>Sidwendluian Romaric Nana</creator>
        
        <creator>Didier Bassole</creator>
        
        <creator>Desire Guel</creator>
        
        <creator>Oumarou Sie</creator>
        
        <subject>Deep learning; web application; vulnerability; detection; large language model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Web applications are part of the daily life of Internet users, who find services in all sectors of activity. Web applications have become the target of malicious users. They exploit web application vulnerabilities to gain access to unauthorized resources and sensitive data, with consequences for users and businesses alike. The growing complexity of web techniques makes traditional web vulnerability detection methods less effective. These methods tend to generate false positives, and their implementation requires cybersecurity expertise. As for Machine Learning/Deep Learning-based web vulnerability detection techniques, they require large datasets for model training. Unfortunately, the lack of data and its obsolescence make these models inoperable. The emergence of large language models and their success in natural language processing offers new prospects for web vulnerability detection. Large language models can be fine-tuned with little data to perform specific tasks. In this paper, we propose an approach based on large language models for web application vulnerability detection.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_135-Deep_Learning_and_Web_Applications_Vulnerabilities_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An FPA-Optimized XGBoost Stacking for Multi-Class Imbalanced Network Attack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507134</link>
        <id>10.14569/IJACSA.2024.01507134</id>
        <doi>10.14569/IJACSA.2024.01507134</doi>
        <lastModDate>2024-07-31T12:06:35.3300000+00:00</lastModDate>
        
        <creator>Hui Fern Soon</creator>
        
        <creator>Amiza Amir</creator>
        
        <creator>Hiromitsu Nishizaki</creator>
        
        <creator>Nik Adilah Hanin Zahri</creator>
        
        <creator>Latifah Munirah Kamarudin</creator>
        
        <subject>Intrusion detection; multi-class imbalanced classification; ensemble learning approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Network anomaly detection systems face challenges with imbalanced datasets, particularly in classifying underrepresented attack types. This study proposes a novel framework for improving F1-scores in multi-class imbalanced network attack detection using the UNSW-NB15 dataset, without resorting to resampling techniques. Our approach integrates Flower Pollination Algorithm-based hyperparameter tuning with an ensemble of XGBoost classifiers in a stacking configuration. Experimental results show that our FPA-XGBoost-Stacking model significantly outperforms individual XGBoost classifiers and existing ensemble models. The model achieved a higher overall weighted F1-score compared to the individual XGBoost classifier and Thockchom et al.’s heterogeneous stacking ensemble. Our approach demonstrated remarkable effectiveness across various levels of class imbalance, for example on Analysis and Backdoor, which are highly underrepresented classes, and DoS, which is a moderately underrepresented class. This research contributes to more effective network security systems by offering a solution for imbalanced classification without the drawbacks of resampling techniques. It demonstrates that homogeneous stacking with XGBoost can outperform heterogeneous approaches for skewed class distributions. Future work will extend this approach to other cybersecurity datasets and explore its applicability in real-time network environments.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_134-An_FPA_Optimized_XGBoost_Stacking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IPD-Net: Detecting AI-Generated Images via Inter-Patch Dependencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507133</link>
        <id>10.14569/IJACSA.2024.01507133</id>
        <doi>10.14569/IJACSA.2024.01507133</doi>
        <lastModDate>2024-07-31T12:06:35.3170000+00:00</lastModDate>
        
        <creator>Jiahan Chen</creator>
        
        <creator>Mengtin Lo</creator>
        
        <creator>Hailiang Liao</creator>
        
        <creator>Tianlin Huang</creator>
        
        <subject>AI-generated image detection; image forensics; self-attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>With the rapid development of generative models, the fidelity of AI-generated images has almost reached a level at which humans find it difficult to distinguish real images from fake ones. The rapid development of this technology may lead to the widespread dissemination of fake content. Therefore, developing effective AI-generated image detectors has become very important. However, current detectors still have limited ability to generalize detection across different generative models. In this paper, we propose an efficient and simple neural network framework based on inter-patch dependencies, called IPD-Net, for detecting AI-generated images produced by various generative models. Previous research has shown that there are inconsistencies in the inter-pixel relations between rich-texture regions and poor-texture regions in AI-generated images. Based on this principle, our IPD-Net uses a self-attention calculation method to model the dependencies between all patches within an image. This enables IPD-Net to learn how to extract appropriate inter-patch dependencies and classify them, further improving detection efficiency. We perform experimental evaluations on the CNNSpot-DS and GenImage datasets. Experimental results show that our IPD-Net outperforms several state-of-the-art baseline models on multiple metrics and has good generalization ability.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_133-IPD_Net_Detecting_AI_Generated_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Degree Based Search: A Novel Graph Traversal Algorithm Using Degree Based Priority Queues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507132</link>
        <id>10.14569/IJACSA.2024.01507132</id>
        <doi>10.14569/IJACSA.2024.01507132</doi>
        <lastModDate>2024-07-31T12:06:35.3000000+00:00</lastModDate>
        
        <creator>Shyma P V</creator>
        
        <creator>Sanil Shanker K P</creator>
        
        <subject>Graph traversal; degree based search algorithm; ascendant node; ascendant node first searching algorithm; descent node; descent node first searching algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper introduces a novel graph traversal algorithm, Degree Based Search, which leverages degree-based ordering and priority queues to efficiently identify shortest paths in complex graph structures. Our method prioritizes nodes based on their degrees, enhancing exploration of related components and offering flexibility in diverse scenarios. Comparative analysis demonstrates the superior performance of Degree Based Search in accelerating path discovery compared to traditional methods like Breadth First Search and Depth First Search. This approach improves exploration by focusing on related components. Using a priority queue ensures optimal node selection; the method iteratively chooses nodes with the highest or lowest degree. Based on this concept, we classify our approach into two distinct algorithms: the Ascendant Node First Search, which prioritizes nodes with the highest degree, and the Descent Node First Search, which prioritizes nodes with the lowest degree. This methodology offers diversity and flexibility in graph exploration, accommodating various scenarios and maximizing efficiency in navigating complex graph structures. The study demonstrates the Degree Based Search algorithm’s efficacy in accelerating path discovery within graphs. Experimental validation illustrates its proficiency in solving intricate tasks like detecting communities in Facebook networks. Moreover, its versatility shines across diverse domains, from autonomous driving to warehouse robotics and biological systems. This algorithm emerges as a potent tool for graph analysis, efficiently traversing graphs and significantly enhancing performance. Its wide applicability unlocks novel possibilities in various scenarios, advancing graph-related applications.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_132-Degree_Based_Search_A_Novel_Graph_Traversal_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Agile Requirements Change Management Success Factors in Global Software Development Based on the Best-Worst Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507131</link>
        <id>10.14569/IJACSA.2024.01507131</id>
        <doi>10.14569/IJACSA.2024.01507131</doi>
        <lastModDate>2024-07-31T12:06:35.2830000+00:00</lastModDate>
        
        <creator>Abdulmajeed Aljuhani</creator>
        
        <subject>Best-Worst Method (BWM); Agile Requirements Change Management (ARCM); success factors; Global Software Development (GSD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>To create products that are both cost-effective and high-quality, a majority of software development companies are following the principles of global software development (GSD). One of the most significant and challenging stages of the agile software development process is requirements change management (RCM); however, the execution of agile software development activities is hindered by the geographical distance between GSD teams, especially when it comes to agile requirements change management (ARCM). The literature claims that, in a particular context, ARCM can profit from applying Multi-Criteria Decision-Making (MCDM) techniques. Within the area of ARCM, an optimal framework can be offered, thus presenting an effective decision-making process that ought to encourage higher consumer satisfaction with software projects created in such a way. A methodology for applying the MCDM method in the ARCM context is presented in this paper. In particular, we propose a model for investigating the prioritization of ARCM success factors in the GSD context based on a decision-making method, namely the Best-Worst Method (BWM). The BWM’s ability to solve intricate decision-making problems with multiple criteria and alternatives is demonstrated by the proposed model’s findings.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_131-Identification_of_Agile_Requirements_Change_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Abstractive Text Summarization: Methods, Dataset, Evaluation, and Emerging Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507130</link>
        <id>10.14569/IJACSA.2024.01507130</id>
        <doi>10.14569/IJACSA.2024.01507130</doi>
        <lastModDate>2024-07-31T12:06:35.2700000+00:00</lastModDate>
        
        <creator>Yusuf Sunusi</creator>
        
        <creator>Nazlia Omar</creator>
        
        <creator>Lailatul Qadri Zakaria</creator>
        
        <subject>Abstractive text summarization; systematic literature review; natural language processing; evaluation metrics; dataset; computation linguistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The latest advanced models for abstractive summarization, which utilize encoder-decoder frameworks, produce exactly one summary for each source text. This systematic literature review (SLR) comprehensively examines the recent advancements in abstractive text summarization (ATS), a pivotal area in natural language processing (NLP) that aims to generate concise and coherent summaries from extensive text sources. We delve into the evolution of ATS, focusing on key aspects such as encoder-decoder architectures, innovative mechanisms like attention and pointer-generator models, training and optimization methods, datasets, and evaluation metrics. Our review analyzes a wide range of studies, highlighting the transition from traditional sequence-to-sequence models to more advanced approaches like Transformer-based architectures. We explore the integration of mechanisms such as attention, which enhances model interpretability and effectiveness, and pointer-generator networks, which adeptly balance between copying and generating text. The review also addresses the challenges in training these models, including issues related to dataset quality and diversity, particularly in low-resource languages. A critical analysis of evaluation metrics reveals a heavy reliance on ROUGE scores, prompting a discussion on the need for more nuanced evaluation methods that align closely with human judgment. Additionally, we identify and discuss emerging research gaps, such as the need for effective summary length control and the handling of model hallucination, which are crucial for the practical application of ATS. This SLR not only synthesizes current research trends and methodologies in ATS, but also provides insights into future directions, underscoring the importance of continuous innovation in model development, dataset enhancement, and evaluation strategies. Our findings aim to guide researchers and practitioners in navigating the evolving landscape of abstractive text summarization and in identifying areas ripe for future exploration and development.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_130-Exploring_Abstractive_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction Cost Estimation in Data-Poor Areas Using Grasshopper Optimization Algorithm-Guided Multi-Layer Perceptron and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507129</link>
        <id>10.14569/IJACSA.2024.01507129</id>
        <doi>10.14569/IJACSA.2024.01507129</doi>
        <lastModDate>2024-07-31T12:06:35.2530000+00:00</lastModDate>
        
        <creator>Xuan Sha</creator>
        
        <creator>Guoqing Dong</creator>
        
        <creator>Xiaolei Li</creator>
        
        <creator>Juan Sheng</creator>
        
        <subject>Construction cost estimation; multi-layer perceptron; grasshopper optimization algorithm; transfer learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Accurate construction cost estimation is crucial for completing projects within the planned timeframe and budget. Using machine learning methods to predict construction costs has become a new trend. However, machine learning methods typically require a large amount of data for model training, which is particularly challenging in data-poor areas. This paper proposes a novel method, Grasshopper Optimization Algorithm-Guided Multi-Layer Perceptron with Transfer Learning (GOA-MLP-TL), specifically designed for construction cost estimation in data-poor areas. GOA-MLP-TL utilizes the global optimal search capability of the GOA to optimize the parameters of the MLP network. Additionally, an adaptation layer is added to the MLP network, using the Maximum Mean Discrepancy (MMD) measure as a regularizer to bridge the gap between the source and target domains. GOA-MLP-TL can effectively leverage a model trained on a data-rich area and transfer its knowledge to adapt the model to data-poor areas. The proposed approach is verified on two datasets from different areas, and the experimental results show that, compared to the traditional machine learning method MLP and to GOA-MLP without transfer learning, the correlation coefficient (R2) of the proposed GOA-MLP-TL is improved by 12.05% and 6.90%, respectively. This demonstrates the effectiveness of GOA-MLP-TL for the construction cost estimation task in data-poor areas.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_129-Construction_Cost_Estimation_in_Data_Poor_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Convolutional Network for Occupational Disease Prediction with Multiple Dimensional Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507128</link>
        <id>10.14569/IJACSA.2024.01507128</id>
        <doi>10.14569/IJACSA.2024.01507128</doi>
        <lastModDate>2024-07-31T12:06:35.2370000+00:00</lastModDate>
        
        <creator>Khanh Nguyen-Trong</creator>
        
        <creator>Tuan Vu-Van</creator>
        
        <creator>Phuong Luong Thi Bich</creator>
        
        <subject>Occupational disease diagnostics; heterogeneous data; imbalanced data; Graph Convolutional Network (GCN); deep graph convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Occupational diseases present a significant global challenge, affecting a vast number of workers. Accurate prediction of occupational disease incidence is crucial for effective prevention and control measures. Although deep learning methods have recently emerged as promising tools for disease forecasting, existing research often focuses solely on patient body parameters and disease symptoms, potentially overlooking vital diagnostic information. Addressing this gap, our study introduces a Deep Graph Convolutional Neural Network (DGCNN) designed to detect occupational diseases by utilizing demographic information, work environment data, and the intricate relationships between these data points. Experimental results demonstrate that our DGCNN method surpasses other state-of-the-art methods, achieving high performance with an Area Under the Curve (AUC) of 96.2%, an accuracy of 98.7%, and an F1-score of 75.2% on the testing set. This study not only highlights the effectiveness of DGCNNs in occupational disease prediction but also underscores the value of integrating diverse data types for comprehensive disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_128-Graph_Convolutional_Network_for_Occupational_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unleashing the Power of Open-Source Transformers in Medical Imaging: Insights from a Brain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507126</link>
        <id>10.14569/IJACSA.2024.01507126</id>
        <doi>10.14569/IJACSA.2024.01507126</doi>
        <lastModDate>2024-07-31T12:06:35.2230000+00:00</lastModDate>
        
        <creator>M. A. Rahman</creator>
        
        <creator>A. Joy</creator>
        
        <creator>A. T. Abir</creator>
        
        <creator>T. Shimamura</creator>
        
        <subject>Open-source transformers; ConvNeXt V2; seg-former; brain tumor classification; medical image segmentation; diagnostic accuracy; neuro-oncology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This research investigates the application of open-source transformers, specifically the ConvNeXt V2 and Segformer models, for brain tumor classification and segmentation in medical imaging. The ConvNeXt V2 model is adapted for classification tasks, while the Segformer model is tailored for segmentation tasks, both undergoing a fine-tuning process involving model initialization, label encoding, hyperparameter adjustment, and training. The ConvNeXt V2 model demonstrates exceptional performance in accurately classifying various types of brain tumors, achieving a remarkable accuracy of 99.60%. In comparison to other state-of-the-art models such as ConvNeXt V1, Swin, and ViT, ConvNeXt V2 consistently outperforms them, attaining superior accuracy rates across all metrics for each tumor type. Notably, when no tumor is present, it predicts with 100% accuracy. In turn, the Segformer model excels at accurately segmenting brain tumors, achieving a Dice score of up to 90% and a Hausdorff distance of 0.87mm. These results underscore the transformative potential of open-source transformers, exemplified by the ConvNeXt V2 and Segformer models, in revolutionizing medical imaging practices. This study paves the way for further exploration of transformer applications in medical imaging and optimization of these models for enhanced performance, heralding a promising future for advanced diagnostic tools.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_126-Unleashing_the_Power_of_Open_Source_Transformers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Criteria Decision-Making Approach for Equipment Evaluation Based on Cloud Model and VIKOR Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507127</link>
        <id>10.14569/IJACSA.2024.01507127</id>
        <doi>10.14569/IJACSA.2024.01507127</doi>
        <lastModDate>2024-07-31T12:06:35.2230000+00:00</lastModDate>
        
        <creator>Jincheng Guan</creator>
        
        <creator>Jiachen Liu</creator>
        
        <creator>Hao Chen</creator>
        
        <creator>Wenhao Bi</creator>
        
        <subject>Multi-criteria decision-making; equipment evaluation; cloud model; VIKOR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Equipment evaluation stands as a critical task in both equipment system development and military operation planning. This task is often recognized as a complex multi-criteria decision-making (MCDM) problem. Adding to the intricacy is the uncertain nature inherent in military operations, which introduces fuzziness and randomness into the equipment evaluation problem and makes it ill-suited to precise information. This paper addresses the uncertainty associated with equipment evaluation by proposing a novel MCDM method that combines the cloud model and the VIKOR method. To address the multifaceted nature of the equipment evaluation problem, a two-level hierarchical evaluation framework is constructed, which comprehensively considers both the capabilities and characteristics of the equipment system during the evaluation process. The cloud model is then employed to represent the uncertain evaluations provided by experts, and a similarity-based expert weight calculation approach is introduced for determining the relative importance of different experts. Subsequently, the VIKOR method is extended by incorporating the cloud model to evaluate and rank various equipment systems, where the criteria weights for this evaluation are established using the analytic hierarchy process (AHP). To demonstrate the efficacy of the proposed method, a practical case study involving the evaluation of unmanned combat aerial vehicles is presented. The results obtained are validated through sensitivity analysis and comparative analysis, affirming the reliability and reasonableness of the proposed method in providing equipment evaluation results. In summary, the proposed method offers a novel and effective approach for addressing equipment evaluation challenges under uncertainty.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_127-A_Multi_Criteria_Decision_Making_Approach_for_Equipment_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Google Play Store Apps Using Predictive User Context: A Comprehensive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507125</link>
        <id>10.14569/IJACSA.2024.01507125</id>
        <doi>10.14569/IJACSA.2024.01507125</doi>
        <lastModDate>2024-07-31T12:06:35.2070000+00:00</lastModDate>
        
        <creator>Anandh A</creator>
        
        <creator>Ramya R</creator>
        
        <creator>Vakaimalar E</creator>
        
        <creator>Santhipriya B</creator>
        
        <subject>Na&#239;ve Bayes; random forest; logistic regression; mining; Google play store; android; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Google Play Store is a digital platform for mobile applications, where users can download and install apps for their Android devices. It is a great source of data for mining and analyzing app performance and user behavior. The increasing volume of mobile applications poses a challenge for users in finding apps that align with their preferences. This work aims to utilize predictive user context to analyze user behavior, thereby enhancing user experience and app development. The work focuses on identifying trends in the app market to recommend suitable applications for users. Play Store app analysis involves gathering data, performing comprehensive evaluations, and making informed decisions to improve app performance and user engagement. By applying Na&#239;ve Bayes, Random Forest, and Logistic Regression algorithms, this work evaluates the relationship between application attributes such as categories and the number of downloads, determining the most effective profiling algorithm for app performance evaluation. This analysis is crucial for recognizing user engagement trends, discovering new opportunities, and optimizing existing applications.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_125-Exploring_Google_Play_Store_Apps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid CNN: An Empirical Analysis of Machine Learning Models for Predicting Legal Judgments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507124</link>
        <id>10.14569/IJACSA.2024.01507124</id>
        <doi>10.14569/IJACSA.2024.01507124</doi>
        <lastModDate>2024-07-31T12:06:35.1900000+00:00</lastModDate>
        
        <creator>G. Sukanya</creator>
        
        <creator>J. Priyadarshini</creator>
        
        <subject>Legal judgment prediction; encoding; SVM; SGD; Doc2vec; CNN; transformers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Artificial Intelligence with NLP has revolutionized the legal industry, which was previously under-digitized and is now eager to adopt digital technologies for increased efficiency. Case backlog issues, exacerbated by population growth, can be alleviated by AI&#39;s potential in decision prediction for laypeople, litigants, and adjudicators. Legal judgment prediction (LJP) is viewed as a text classification cum prediction problem, with encoding models crucial for accurate textual representation and downstream tasks. These models capture syntax, semantics, and context, varying in performance based on the task and dataset. Selecting the right model, whether traditional ML or DL, across different evaluation metrics is complex. This paper addresses this research gap by reviewing 12 cutting-edge ML models and 10 DL models with two embedding methods on real-time Madras High Court criminal cases from Manupatra. The comprehensive comparison of classifier models on real-time case documents provides insights for researchers to innovate despite challenges and limitations. Evaluation metrics like accuracy, F1 score, precision, and recall show that Support Vector Machines (SVM), Logistic Regression, and SGD with Doc2Vec (D2V) encoding and shallow neural networks perform well. Although Transformers process longer input sequences with parallel word analysis and self-attention layers, they have weaknesses on real-time datasets. This article proposes a novel hybrid CNN with a transformer model to predict binary judgments, outperforming traditional ML and DL models in precision, recall, and accuracy. Finally, we summarise the most important ramifications, potential research avenues, and difficulties facing the legal research field.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_124-Hybrid_CNN_An_Empirical_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosing People at Risk of Heart Diseases Using the Arduino Platform Under the IoT Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507123</link>
        <id>10.14569/IJACSA.2024.01507123</id>
        <doi>10.14569/IJACSA.2024.01507123</doi>
        <lastModDate>2024-07-31T12:06:35.1600000+00:00</lastModDate>
        
        <creator>Xiaoxi Fan</creator>
        
        <creator>Qiaoxia Wang</creator>
        
        <creator>Yao Sun</creator>
        
        <subject>Arduino platform; internet of things; heart disease diagnosis; high-quality healthcare; cardiovascular diseases; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study uses the Arduino platform under the Internet of Things (IoT) to diagnose individuals at risk of heart disease. Enormous focus has been placed on delivering high-quality healthcare in response to the increasing prevalence of life-threatening health conditions among patients. Several factors contribute to the health conditions of individuals, and certain diseases can be severe and even fatal. In both industrialised and developing nations, cardiovascular illnesses have surpassed all others as the leading cause of death in the last few decades. Significant decreases in mortality may be achieved by detecting cardiac problems early and ensuring close monitoring by medical experts. Unfortunately, it is not currently possible to accurately detect heart disease in all cases and provide round-the-clock consultation with medical experts, owing to the additional knowledge, time, and expertise required. Aiming to identify possible heart illness using Deep Learning (DL) methods, this research proposes a concept for an IoT-based system that could foresee the occurrence of heart disease. This paper introduces a pre-processing technique, Transfer by Subspace Similarity (TBSS), aimed at enhancing the accuracy of electrocardiogram (ECG) signal classification. The proposed IoT implementation uses the Arduino IoT operating system to store and evaluate data gathered by the pulse sensor. The raw data collected include interference that decreases the precision of the classification, and the novel pre-processing technique is used to remove distorted ECG signals. The study employs a hybrid CNN-LSTM classifier, which detects normal and abnormal heartbeat rates based on temporal and spatial features, and recommends a Deep Learning (DL) model that uses Talos for hyper-parameter optimisation. This approach dramatically improves the accuracy of heart disease predictions. The experimental findings clearly show that Machine Learning (ML) classification methods perform much better after pre-processing. Using the widely recognised MIT-BIH-AR database, we assess the proposed framework in comparison to MCH ResNet. The system leverages a CNN-LSTM model optimized through hyper-parameter tuning with Talos, achieving outstanding metrics: an accuracy of 99.1%, a precision of 98.8%, a recall of 99.5%, an F1-score of 99.1%, and an AUC-ROC of 0.99.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_123-Diagnosing_People_at_Risk_of_Heart_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Feature Selection for Student Performance and Activity-Based Behaviour Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507122</link>
        <id>10.14569/IJACSA.2024.01507122</id>
        <doi>10.14569/IJACSA.2024.01507122</doi>
        <lastModDate>2024-07-31T12:06:35.1430000+00:00</lastModDate>
        
        <creator>Varsha Ganesh</creator>
        
        <creator>S Umarani</creator>
        
        <subject>Behaviour analysis; deep learning; educational data mining; student performance prediction; students activity monitoring; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Analyzing students&#39; behaviour during online classes is vital for teachers to identify the strengths and weaknesses of online classes. This analysis, based on observing academic performance and student activity data, helps teachers to understand the teaching outcomes. Most Educational Data Mining (EDM) processes analyze either students&#39; academic data or their behavioural data alone, in which case accurate prediction of student behaviour cannot be achieved. This study addresses these issues by considering both student activity and academic performance datasets to evaluate teaching and learner outcomes efficiently. It is necessary to utilize a suitable method to handle the high-dimensional data while analyzing Educational Data (ED), because academic data is growing daily and exponentially. This study uses two kinds of data for student behaviour analysis. It is essential to use feature reduction and selection methods to extract only important features to improve the performance of student behaviour analysis. By utilizing a hybrid ensemble method to obtain the most relevant features for predicting students’ performance and activity levels, this approach helps to reduce the complexity of the feature-learning model and improve the prediction performance of the classification model. This study uses Improved Principal Component Analysis (IPCA) to select the most relevant features. The resultant features of the IPCA are given as input to an ensemble method to select the most relevant feature sets to improve the prediction accuracy. Prediction is performed using a Residual Network-50 (ResNet50) combined with a Support Vector Machine (SVM) to classify students&#39; performance and activity during online classes. This performance analysis evaluates the students’ behaviour analysis model. The proposed approach could predict the performance and activity of students with a maximum of 98.03% accuracy for online classes, and 98.06% accuracy for exams.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_122-Ensemble_Feature_Selection_for_Student_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tunisian Lung Cancer Dataset: Collection, Annotation and Validation with Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507121</link>
        <id>10.14569/IJACSA.2024.01507121</id>
        <doi>10.14569/IJACSA.2024.01507121</doi>
        <lastModDate>2024-07-31T12:06:35.1430000+00:00</lastModDate>
        
        <creator>Omar Khouadja</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <creator>Samira Mhamedi</creator>
        
        <creator>Anis Baffoun</creator>
        
        <subject>Lung cancer; Tunisia; dataset; transfer learning; medical imaging; annotations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Globally, lung cancer remains the leading cause of cancer-related deaths, with early detection significantly improving survival rates. Developing robust machine learning models for early detection necessitates access to high-quality, localized datasets. This project establishes the first lung cancer dataset in Tunisia, utilizing DICOM CT scans from 123 Tunisian patients. The dataset, annotated by experienced radiologists, includes diverse forms of lung cancer at various stages. Using transfer learning with pre-trained 3D ResNet models from Tencent’s MedicalNet, our tests showed that models trained on this dataset outperformed previous models in specificity and sensitivity. This demonstrates its effectiveness in capturing the unique clinical characteristics of the Tunisian population and its potential to significantly enhance lung cancer diagnosis and detection.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_121-Tunisian_Lung_Cancer_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Original Strategy for Verbatim Collecting Knowledge from Mostly-Illiterate and Secretive Experts: West Africa Traditional Medicine’s Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507120</link>
        <id>10.14569/IJACSA.2024.01507120</id>
        <doi>10.14569/IJACSA.2024.01507120</doi>
        <lastModDate>2024-07-31T12:06:35.1270000+00:00</lastModDate>
        
        <creator>Kouam&#233; Appoh</creator>
        
        <creator>Lamy Jean-Baptiste</creator>
        
        <creator>Kroa Ehoul&#233;</creator>
        
        <subject>Knowledge elicitation; collection data method; ontology; traditional medicine; West Africa; ontoMEDTRAD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Eighty percent of the population of the least developed countries relies on traditional medicine (TM), and West Africa is no exception. Multilingualism is pervasive there, TM practitioners (TMP) commonly wish to keep their knowledge secret, and illiteracy affects the vast majority of TMP in the region. Exchanges between practitioners for sharing knowledge and experience are therefore severely hindered by multilingualism, illiteracy and secretiveness, which raises the question of the reliability and relevance of the data and knowledge gathered from these practitioners. Conventional data collection methods are not operational in this context. Hence, we designed an original data collection method, which we call back-and-forth, to overcome these difficulties. This method allows us to obtain a stable and verbatim collection from the TMP. Both sequential and recursive, it was applied to data collection during visits to 110 practitioners in West Africa, with two to four visits per practitioner. 79 practitioners were finally included in the study project; the other 31 either did not adhere to the project or provided unstable knowledge. 13 diseases and 12 plants were collected, along with the &quot;plant cures disease&quot; relations between them, as expressed by these 79 practitioners. Our second objective was to extend the domain ontology of West African TM, namely ontoMEDTRAD, owing to the emergence of three new concepts arising from the above. Facing climate change, which may lead to the extinction of some plants, and the need to update the contents of some old TM reference sources, it proved necessary to compare those sources with the opinions and knowledge collected from the TMP.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_120-Original_Strategy_for_Verbatim_Collecting_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecast for Container Retention in IoT Serverless Applications on OpenWhisk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507119</link>
        <id>10.14569/IJACSA.2024.01507119</id>
        <doi>10.14569/IJACSA.2024.01507119</doi>
        <lastModDate>2024-07-31T12:06:35.1130000+00:00</lastModDate>
        
        <creator>Ganeshan Mahalingam</creator>
        
        <creator>Rajesh Appusamy</creator>
        
        <subject>Serverless IoT; AWS Forecast Deep AR+; Prophet; AWS EKS; docker and containers; cold start; OpenWhisk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This research tackles resource management in OpenWhisk-based serverless applications for the Internet of Things (IoT) by introducing a novel approach to container retention optimization. We leverage the capabilities of AWS Forecast, specifically its DeepAR+ and Prophet algorithms, to dynamically forecast workload patterns. This real-time forecast empowers us to make adaptive adjustments to container retention durations. By optimizing retention times, we can effectively mitigate cold start latency, the primary reason behind sluggish response times in IoT serverless environments. Our approach outperforms conventional preloading and chaining techniques by significantly increasing resource utilization efficiency. Since OpenWhisk is an open-source platform, our methodology was able to achieve a cost reduction. By integrating it with Amazon Forecast&#39;s built-in algorithms, we surpassed traditional cache cold start strategies. These findings strongly support the viability of dynamic container retention optimization for IoT serverless deployments. Evaluations conducted on the OpenWhisk platform demonstrate substantial benefits. We observed a remarkable 67% reduction in cold start latency, translating to expedited response times and a demonstrably enhanced end-user application experience. These findings convincingly validate the efficacy of AWS Forecast in optimizing container retention for IoT serverless deployments by capitalizing on its deep learning (DeepAR+) and interpretable forecasting (Prophet) abilities. This research lays a solid foundation for future studies on optimizing container management across various DevOps practices and container orchestration platforms, contributing to the advancement of efficient and responsive serverless architectures.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_119-Forecast_for_Container_Retention_in_IoT_Serverless_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of the Effect of Using Online Loans on User Data Privacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507118</link>
        <id>10.14569/IJACSA.2024.01507118</id>
        <doi>10.14569/IJACSA.2024.01507118</doi>
        <lastModDate>2024-07-31T12:06:35.0970000+00:00</lastModDate>
        
        <creator>Indrajani Sutedja</creator>
        
        <creator>Muhammad Firdaus Adam</creator>
        
        <creator>Fauzan Hafizh</creator>
        
        <creator>Muhammad Farrel Wahyudi</creator>
        
        <subject>Online loans; data privacy; peer-to-peer lending; OJK; fintechs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Online loans deliberately leak user data. The rise of the digital ecosystem at the beginning of the 21st century initiated major changes in the way society controls, communicates and expresses information. Industrial development in Indonesia is growing very rapidly, especially alongside the progress of the digital economy. Changes in the digital economy have changed the way we access economic services, and digitalization has changed the way we work and the way society collaborates with other parties. This digitalization cannot be separated from the role of financial technology, including online loans. Digitalization can simplify the lending and borrowing process and increase accessibility so that it can be done efficiently. However, we must also be aware of the risks of online loans, especially in terms of user privacy and data security. According to the Financial Services Authority (OJK), incidents of data privacy violations and online loan data leaks reached 1,200 cases in 2022. In one case, a loan provider accessed a user&#39;s personal data to intimidate and threaten them, and even escalated the situation by coming to the user&#39;s location with hired thugs to provoke physical confrontation and make unreasonable demands, such as increasing the loan interest. In some cases, fintechs deliberately sell or trade some of their users&#39; personal data for their own profit. This research aims to provide education about the importance of clear regulations from the central government regarding the peer-to-peer lending industry. It uses a systematic literature review method to ensure a structured and objective analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_118-An_Analysis_of_the_Effect_of_Using_Online_Loans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Genetic Algorithm and its Application in Routing Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507117</link>
        <id>10.14569/IJACSA.2024.01507117</id>
        <doi>10.14569/IJACSA.2024.01507117</doi>
        <lastModDate>2024-07-31T12:06:35.0800000+00:00</lastModDate>
        
        <creator>Jianwei Wang</creator>
        
        <creator>Wenjuan Sun</creator>
        
        <subject>Improvement of genetic algorithm; routing optimization; shortest path; crossover operator; mutation operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Traditional routing algorithms cannot adapt to complex and changeable network environments, and the basic genetic algorithm cannot be applied directly to routing optimization problems because it lacks a suitable coding method. An improved genetic algorithm was proposed to find the optimal or near-optimal route. The network model and mathematical expression of the routing optimization problem were defined, and the routing problem was transformed into a problem of finding the optimal solution. To meet the specific needs of network routing optimization, several key improvements to the GA were made, including the design of the coding scheme, the generation of the initial population, the construction of the fitness function, and the improvement of the crossover and mutation operators. Simulation results in two typical network environments show that the improved GA performs excellently in routing optimization. Compared with the Dijkstra and Floyd algorithms, the improved GA in this paper not only has excellent robustness and adaptability in solving routing optimization problems, but can also effectively cope with dynamic changes in the network environment, providing an efficient and reliable routing solution for dynamic network environments.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_117-An_Improved_Genetic_Algorithm_and_its_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Active Player Tracking System in Handball Videos Using Multi-Deep Sort Algorithm with GAN Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507116</link>
        <id>10.14569/IJACSA.2024.01507116</id>
        <doi>10.14569/IJACSA.2024.01507116</doi>
        <lastModDate>2024-07-31T12:06:35.0670000+00:00</lastModDate>
        
        <creator>Poovaraghan R J</creator>
        
        <creator>Prabhavathy P</creator>
        
        <subject>Handball recognition; multi-deep SORT; GAN; deep learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Active player tracking in sports analytics is crucial for understanding team dynamics, player performance, and game strategies. This paper introduces an innovative approach to tracking active players in handball videos using a fusion of the Multi-Deep SORT algorithm and a Generative Adversarial Network (GAN) model. The novel integration aims to enhance player appearance for robust and precise tracking in dynamic gameplay. The system starts with a GAN model trained on annotated handball video data, generating synthetic frames to improve the visual quality and realism of player appearances, thereby refining the input data for tracking. The Multi-Deep SORT algorithm, enhanced with GAN-generated features, improves object association and continuous player tracking. This framework addresses key challenges in active player tracking, handling occlusions, variations in player appearances, and complex interactions. Additionally, GAN-based enhancements improve accuracy in distinguishing active from inactive players, facilitating precise localization and recognition. Performance evaluation demonstrates the system&#39;s efficacy in achieving high tracking accuracy, robustness, and differentiation between player activity levels. Metrics such as Average Precision (AP), Average Recall (AR), accuracy, and F1-score affirm the system&#39;s advancement in active player tracking. This pioneering fusion of Multi-Deep SORT with GAN-based player appearance enhancement sets a new standard for precise, robust, and context-aware active player tracking in handball videos. It offers comprehensive insights for coaches, analysts, and players to optimize team strategies and performance. This paper highlights the novel integration&#39;s advancements and benefits in the domain of sports analytics. Notably, the proposed method achieved enhanced efficiency with an average precision of 94.99%, recall of 93.67%, accuracy of 93.89%, and F-score of 94.33%.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_116-Advanced_Active_Player_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliability in Cloud Computing Applications with Chaotic Particle Swarm Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507115</link>
        <id>10.14569/IJACSA.2024.01507115</id>
        <doi>10.14569/IJACSA.2024.01507115</doi>
        <lastModDate>2024-07-31T12:06:35.0500000+00:00</lastModDate>
        
        <creator>Wenli WANG</creator>
        
        <creator>Yanlin BAI</creator>
        
        <subject>Reliability; cloud computing; chaotic particle swarm optimization algorithm; distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In recent years, IT managers of large enterprises and other stakeholders have turned to cloud computing due to the benefits of reduced maintenance costs and security concerns, as well as access to high-performance hardware and software resources. Two main challenges must be considered: ensuring that everyone has access to services, and finding efficient allocation options. First, especially with software services, it is very difficult to predict every service that may be needed. The second challenge is to select the best independent service among different providers with features related to application reliability. This paper presents a framework that uses the particle swarm optimization technique to optimize reliability parameters in distributed systems applications. The proposed strategy seeks a program with the best service and a high degree of competence. Although this method does not provide an exact solution, the particle swarm optimization algorithm reaches a result close to the best solution and reduces the time required to adjust the parameters of distributed systems applications. The results of the work have been compared with the genetic algorithm, and it has been shown that the PSO algorithm has a shorter response time than the genetic algorithm. The PSO algorithm also shows strong stability and ensures that the solution obtained from the proposed approach will be close to the optimal solution.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_115-Reliability_in_Cloud_Computing_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Liver Disease Detection Based on YOLOv8 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507114</link>
        <id>10.14569/IJACSA.2024.01507114</id>
        <doi>10.14569/IJACSA.2024.01507114</doi>
        <lastModDate>2024-07-31T12:06:35.0330000+00:00</lastModDate>
        
        <creator>Junjie Huang</creator>
        
        <creator>Caihong Li</creator>
        
        <creator>Fengjun Yan</creator>
        
        <creator>Yuanchun Guo</creator>
        
        <subject>Liver disease detection; deep learning; digital pathology; YOLOv8; accuracy enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The identification and diagnosis of liver diseases hold significant importance within the domain of digital pathology research. Various methods have been explored in the literature to address this crucial task, with deep learning techniques emerging as particularly promising due to their ability to yield highly accurate results compared to other traditional approaches. However, despite these advancements, a significant research gap persists in the field. Many deep learning-based liver disease detection methods continue to struggle with achieving consistently high accuracy rates. This issue is highlighted in numerous studies where traditional convolutional neural networks and hybrid models fall short in precision and recall metrics. To bridge this gap, our study proposes a novel approach utilizing the YOLOv8 algorithm, which is designed to significantly enhance the accuracy and effectiveness of liver disease detection. The YOLOv8 algorithm&#39;s architecture is well-suited for real-time object detection and has been optimized for medical imaging applications. Our method involves generating innovative models tailored specifically for liver disease detection by leveraging a comprehensive dataset from the Roboflow repository, consisting of 3,976 annotated liver images. This dataset provides a diverse range of liver disease cases, ensuring robust model training. Our approach includes meticulous model training with rigorous hyperparameter tuning, using 70% of the data for training, 20% for validation, and 10% for testing. This structured training process ensures that the model learns effectively while minimizing overfitting. We evaluate the model using precision, recall, and mean average precision (mAP@0.5) metrics, demonstrating significant improvements over existing methods. Through extensive experimental results and detailed performance evaluations, our study achieves high accuracy rates, thus addressing the existing research gap and providing an effective approach for liver disease detection.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_114-An_Improved_Liver_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modification of the Danzig-Wolf Decomposition Method for Building Hierarchical Intelligent Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507113</link>
        <id>10.14569/IJACSA.2024.01507113</id>
        <doi>10.14569/IJACSA.2024.01507113</doi>
        <lastModDate>2024-07-31T12:06:35.0200000+00:00</lastModDate>
        
        <creator>Turganzhan Velyamov</creator>
        
        <creator>Alexandr Kim</creator>
        
        <creator>Olga Manankova</creator>
        
        <subject>Decomposition method; optimization; parallel processing; linear programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This article examines the Dantzig-Wolfe decomposition method for solving large-scale optimization problems. The standard simplex algorithm struggles to solve such problems efficiently, which makes the Dantzig-Wolfe method a valuable tool. The article describes in detail a new modification of the Dantzig-Wolfe decomposition method. This modification aims to improve the efficiency of the coordination task, the key component that defines the subtasks. By significantly reducing the number of rows in the coordination problem, the proposed method achieves faster computation and reduced memory requirements compared to the original approach. Although the Dantzig-Wolfe method has encountered difficulties due to the complexity of implementing its algorithms for hierarchical systems, this modification opens up new potential.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_113-Modification_of_the_Danzig_Wolf_Decomposition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Approach for Real-Time Gender and Age Classification in Video Inputs Using FaceNet and Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507112</link>
        <id>10.14569/IJACSA.2024.01507112</id>
        <doi>10.14569/IJACSA.2024.01507112</doi>
        <lastModDate>2024-07-31T12:06:35.0030000+00:00</lastModDate>
        
        <creator>Abhishek Nazare</creator>
        
        <creator>Sunita Padmannavar</creator>
        
        <subject>Gender classification; age estimation; face detection; FaceNet; ResNet34; computer vision techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The increasing demand for real-time gender and age classification in video inputs has spurred advancements in computer vision techniques. This research work presents a comprehensive pipeline for addressing this challenge, encompassing three pivotal tasks: face detection, gender classification, and age estimation. FaceNet effectively identifies faces within video streams, serving as the foundation for subsequent analyses. Gender classification is then achieved by utilizing a finely tuned ResNet34 model, trained as a binary classifier for gender identification. The optimization process employs a binary cross-entropy loss function facilitated by the ADAM optimizer with a learning rate of 1e-2. The achieved accuracy of 97% on the test dataset demonstrates the model&#39;s proficiency. For age estimation, a model is trained with the ADAM optimizer at a learning rate of 1e-3 using the Mean Absolute Error (MAE) loss function; the achieved MAE of 6.8 signifies its proficiency in age estimation. The comprehensive pipeline proposed in this research showcases the individual components&#39; efficacy and demonstrates the synergy achieved through their integration. Experimental results substantiate the pipeline&#39;s capacity for real-time gender and age classification within video inputs, thus opening avenues for applications spanning diverse domains.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_112-An_Integrated_Approach_for_Real_Time_Gender.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Aided Classification of Lung Cancer, Ground Glass Lung and Pulmonary Fibrosis Using Machine Learning and KNN Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507111</link>
        <id>10.14569/IJACSA.2024.01507111</id>
        <doi>10.14569/IJACSA.2024.01507111</doi>
        <lastModDate>2024-07-31T12:06:34.9870000+00:00</lastModDate>
        
        <creator>Prathibha T P</creator>
        
        <creator>Punal M Arabi</creator>
        
        <subject>Ground glass; healthy; KNN; LBP; lung cancer; lung diseases classification; ML; pulmonary fibrosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Respiratory diseases are among the most prevalent acute and chronic ailments worldwide. According to a recent survey, there were around 545 million cases of chronic respiratory diseases worldwide. Chronic respiratory diseases (CRDs) such as chronic obstructive pulmonary disease (COPD), pneumoconioses, asthma, interstitial lung disease, and pulmonary sarcoidosis are significant public health problems across the world. The most significant CRD risks that have been identified include smoking, contact with indoor and outdoor pollutants, allergies, occupational exposure, poor nutrition, obesity, inactivity, and other factors. Interstitial lung diseases are diagnosed on high-resolution computed tomography (HRCT) using a variety of interstitial patterns, such as reticular, nodular, reticulonodular, ground-glass, cystic, ground-glass with reticular, and cystic with ground-glass. If lung diseases are identified at an early stage, life span could be increased. Computer aided diagnosis could play a crucial role in identifying lung diseases at an early stage, in disease management, and in treatment planning. In this paper a novel method is proposed to identify and classify HRCT images of cancerous lungs using ML (Machine Learning), and to identify and classify ground glass lung, pulmonary fibrosis lung, and healthy lung HRCT images using LBP (Local Binary Pattern) features and a KNN (K-Nearest Neighbor) classifier. Experimenting the proposed method on 996 images yielded 94% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_111-Computer_Aided_Classification_of_Lung_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>eTNT: Enhanced TextNetTopics with Filtered LDA Topics and Sequential Forward / Backward Topic Scoring Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507110</link>
        <id>10.14569/IJACSA.2024.01507110</id>
        <doi>10.14569/IJACSA.2024.01507110</doi>
        <lastModDate>2024-07-31T12:06:34.9730000+00:00</lastModDate>
        
        <creator>Daniel Voskergian</creator>
        
        <creator>Rashid Jayousi</creator>
        
        <creator>Burcu Bakir-Gungor</creator>
        
        <subject>Topic scoring; topic modeling; text classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>TextNetTopics is a novel text classification-based topic modelling approach that focuses on topic selection rather than individual word selection to train a machine learning algorithm. However, one key limitation of TextNetTopics is its scoring component, which evaluates each topic in isolation and ranks topics accordingly, ignoring the potential relationships between topics. In addition, the chosen topics may contain redundant or irrelevant features, potentially increasing the feature set size and introducing noise that can degrade overall model performance. To address these limitations and improve classification performance, this study introduces eTNT, an enhancement to TextNetTopics. eTNT integrates two novel scoring approaches: Sequential Forward Topic Scoring (SFTS) and Sequential Backward Topic Scoring (SBTS), which account for topic interactions by assessing sets of topics simultaneously. Moreover, it incorporates a filtering component that aims to enhance topics&#39; quality and discriminative power by removing non-informative features from each topic using Random Forest feature importance values. These integrations aim to streamline the topic selection process and enhance classifier efficiency for text classification. The results obtained from the WOS-5736, LitCovid, and MultiLabel datasets provide valuable insights into the superior effectiveness of eTNT compared to its counterpart, TextNetTopics.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_110-eTNT_Enhanced_TextNetTopics_with_Filtered_LDA_Topics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating New Ulos Motif with Generative AI Method in Digital Tenun Nusantara (DiTenun) Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507109</link>
        <id>10.14569/IJACSA.2024.01507109</id>
        <doi>10.14569/IJACSA.2024.01507109</doi>
        <lastModDate>2024-07-31T12:06:34.9570000+00:00</lastModDate>
        
        <creator>Humasak Simanjuntak</creator>
        
        <creator>Evelin Panjaitan</creator>
        
        <creator>Sandraulina Siregar</creator>
        
        <creator>Unedo Manalu</creator>
        
        <creator>Samuel Situmeang</creator>
        
        <creator>Arlinta Barus</creator>
        
        <subject>Generate Ulos motif; StyleGAN; DiTenun; generative AI Ulos motif</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>DiTenun is a startup developing a platform that utilizes artificial intelligence to create innovative digital textile patterns for woven fabrics. One of the woven motifs produced is the Ulos motif, a traditional weaving of the Batak tribe that comes in various types, patterns/motifs, and sizes. Currently, the DiTenun platform applies two methods to generate Ulos motifs: image quilting and SinGAN. The image quilting method uses synthetic textures to form a new texture by combining blocks from the original texture. SinGAN is a Generative Adversarial Network (GAN) method that accepts one motif image as input to generate a new motif resembling the training motif. The new motifs generated by both methods are still repetitive and not diverse (less variation). Therefore, this paper focuses on the StyleGAN method, which utilizes two or more Ulos motif images as input to produce new innovative motifs through mixing regularization. Six experimental scenarios are carried out on the Ulos motif image dataset with different numbers of input motifs and hyperparameter tuning. The experiment results are new images with diverse patterns, colour combinations, and merged motif elements. The StyleGAN performance is measured with Frechet Inception Distance (FID) and Kernel Inception Distance (KID) to find the best-quality generated motif across the six hyperparameter tuning scenarios. The results show that the fourth scenario on the Ulos Batak Karo, Gundur category (min and max resolution: 8 and 256, number of images: 4, training iterations per resolution = 100000, max iterations = 50000000) generated the best motifs, with FID and KID scores of 91.32 and 0.04, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_109-Generating_New_Ulos_Motif_with_Generative_AI_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Green Supply Chain Management Based on Improved MPA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507108</link>
        <id>10.14569/IJACSA.2024.01507108</id>
        <doi>10.14569/IJACSA.2024.01507108</doi>
        <lastModDate>2024-07-31T12:06:34.9400000+00:00</lastModDate>
        
        <creator>Dan Li</creator>
        
        <subject>Green supply chain; supply chain management; marine predator algorithm; optimization problem; fish gathering device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>With the advancement of industrialization and urbanization in the global market, the contradiction between economic development and environmental protection is becoming increasingly prominent. In response to this optimization problem, this study constructs a green supply chain network model with green constraints. In the second half of the iterations of the marine predator algorithm, Gaussian mutation is used to replace the original fish-aggregating-device effect, yielding an improved marine predator algorithm to solve the green supply chain network model. The results demonstrated that the designed algorithm performed better than other algorithms on all four benchmark functions. Except for the mean value of 2.17&#215;10^-202 when solving function 1, the other means and standard deviations were all 0. When solving the multi-modal benchmark test functions, the proposed algorithm still had the fastest convergence speed, and the difference was more obvious. In the small-scale testing sets, the proposed algorithm could find the best solution for each test instance, resulting in lower total costs of 139,832.97 yuan, 148,561.28 yuan, and 147,535.81 yuan, respectively. In three different scale test sets, the proposed algorithm had the fastest convergence speed and successfully converged to feasible solutions. The research results verify the algorithm&#39;s performance and its good application effect in handling green supply chain network problems, which helps optimize green supply chain management.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_108-Optimization_of_Green_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Blockchain Technology in Network Security and Authentication: Issues and Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507107</link>
        <id>10.14569/IJACSA.2024.01507107</id>
        <doi>10.14569/IJACSA.2024.01507107</doi>
        <lastModDate>2024-07-31T12:06:34.9270000+00:00</lastModDate>
        
        <creator>Yanli Lu</creator>
        
        <subject>Blockchain; network security; identity verification; deep regression model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>With the advent of the digital age, the importance of network security and authentication has become increasingly prominent. Blockchain technology, as a distributed, immutable record-keeping technology, brings great potential value to both areas. This study aims to delve into how blockchain technology can ensure network security and how it is applied in authentication. Through extensive questionnaires and data collection, the study successfully built a deep regression model to reveal relevant causal relationships. The findings show that the adoption of blockchain technology can significantly improve the perceived effectiveness of cybersecurity, especially when organizations hold the technology in high regard. This finding provides a valuable reference for organizations seeking to make better use of this technology. However, there are still some limitations in the study, such as the scope of data collection and the complexity of the model. For these problems, this paper also puts forward corresponding solutions.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_107_The_Application_of_Blockchain_Technology_in_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Photo-Based Dialogue Between Elderly Individuals and Generative AI Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507106</link>
        <id>10.14569/IJACSA.2024.01507106</id>
        <doi>10.14569/IJACSA.2024.01507106</doi>
        <lastModDate>2024-07-31T12:06:34.9100000+00:00</lastModDate>
        
        <creator>Kousuke Shimizu</creator>
        
        <creator>Banba Ami</creator>
        
        <creator>Choi Dongeun</creator>
        
        <creator>Miyuki Iwamoto</creator>
        
        <creator>Nahoko Kusaka</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Generative AI; elderly care; conversational agents; photo-based interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Japan&#39;s rapid transition into a super-aged society, with 29% of its population aged 65 and over, underscores the urgent need for innovative elderly care solutions. This study explores the use of generative AI to facilitate meaningful interactions between elderly individuals and AI conversational agents using photos. Utilizing Microsoft Azure&#39;s AI services, including Computer Vision and Speech, the AI agent analyzes photos to generate engaging conversation prompts, leveraging GPT-3.5-turbo for natural language processing. Preliminary experiments with healthy elderly participants provided insights to refine the AI agent&#39;s conversational skills, focusing on timing, speech speed, and emotional engagement. The findings indicate that elderly users respond positively to AI agents that exhibit human-like conversational behaviors, such as attentiveness and expressive communication. By addressing functional and emotional needs, the AI agent aims to enhance the quality of life for the elderly, offering scalable solutions to the challenges of an aging society. Future work will focus on further improving the AI agent&#39;s capabilities and assessing its impact on the mental health and social engagement of elderly users.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_106-Exploring_Photo_Based_Dialogue_Between_Elderly_Individuals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Graph-Based JingFang Granules Efficacy Analysis for Influenza-Like Illness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507105</link>
        <id>10.14569/IJACSA.2024.01507105</id>
        <doi>10.14569/IJACSA.2024.01507105</doi>
        <lastModDate>2024-07-31T12:06:34.8930000+00:00</lastModDate>
        
        <creator>Yuqing Li</creator>
        
        <creator>Zhitao Jiang</creator>
        
        <creator>Zhiyan Huang</creator>
        
        <creator>Wenqiao Gong</creator>
        
        <creator>Yanling Jiang</creator>
        
        <creator>Guoliang Cheng</creator>
        
        <subject>Knowledge graph; clinical trial; influenza-like illness; jingfang; drug efficacy analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study presents a novel approach to evaluate the efficacy of JingFang granules in treating influenza-like illness by integrating knowledge graph technology with clinical trial data. We developed an innovative knowledge graph-based pharmacological analysis method and validated its effectiveness through a randomized controlled clinical trial. A knowledge graph was constructed by extracting drug-disease entities and their relationships from the literature using a machine learning workflow. Deep mining of the knowledge graph was performed using a graph convolutional network and T5 mini-model to analyze the association between JingFang and various diseases. Subsequently, a randomized controlled clinical trial involving 106 patients was conducted. Results showed that the cure rate in the JingFang combined treatment group (92.5%) was significantly higher than in the control group (81.1%), especially among the middle-aged and elderly population. Subgroup analysis revealed that JingFang had a more pronounced therapeutic effect on patients aged 34 and above, consistent with the knowledge graph analysis results. The innovation of this study lies in proposing a novel framework for evaluating therapeutic efficacy by combining knowledge graphs with clinical trial results. This approach not only provides new analytical tools for similar drug development but also improves the efficiency and accuracy of drug development by systematically validating literature efficacy data and integrating it with actual clinical trial results. Furthermore, applying a knowledge graph to evaluate the therapeutic effects of traditional Chinese medicines like JingFang is an innovative and unique approach, bringing new perspectives to this under-explored field. This method holds potential for broad application in drug development and repurposing, particularly in the context of Traditional Chinese Medicine.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_105-Knowledge_Graph_Based_JingFang_Granules_Efficacy_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Approaches to Agricultural Risk with Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507104</link>
        <id>10.14569/IJACSA.2024.01507104</id>
        <doi>10.14569/IJACSA.2024.01507104</doi>
        <lastModDate>2024-07-31T12:06:34.8770000+00:00</lastModDate>
        
        <creator>Sumi. M</creator>
        
        <creator>S. Manju Priya</creator>
        
        <subject>Random forest; ridge classifier; logistic regression; gradient boosting; extreme gradient boost; Variation Inflation Factor; support vector machine; farmer risk prediction; agricultural risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Agriculture is fraught with uncertainties arising from factors like weather volatility, pest outbreaks, market fluctuations, and technological advancements, posing significant challenges to farmers. By gaining insights into these risks, farmers can enhance decision-making, adopt proactive measures, and optimize resource allocation to minimize negative impacts and maximize productivity. The research introduces an innovative approach to risk prediction, highlighting its pivotal role in improving agricultural practices. Through meticulous analysis and optimization of a farmer dataset, employing pre-processing techniques, the study ensures the reliability of predictive models built on high-quality data. Utilizing the Variation Inflation Factor (VIF) for feature selection, the study identifies influential features critical for accurate risk classification. Employing techniques like KNN, Random Forest, Logistic Regression, SVM, Ridge Classifier, Gradient Boosting, and XGBoost, the study achieves promising results. Among them, KNN, Random Forest, Gradient Boosting, and XGBoost achieved the highest accuracy of 88.46%. This underscores the effectiveness of the proposed methodology in providing actionable insights into potential risks faced by farmers, enabling informed decision-making and risk mitigation strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_104-Innovative_Approaches_to_Agricultural_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis Performance of One-Stage and Two Stage Object Detection Method for Car Damage Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507103</link>
        <id>10.14569/IJACSA.2024.01507103</id>
        <doi>10.14569/IJACSA.2024.01507103</doi>
        <lastModDate>2024-07-31T12:06:34.8770000+00:00</lastModDate>
        
        <creator>Harum Ananda Setyawan</creator>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Rinaldi Anwar Buyung</creator>
        
        <subject>Car damage detection; insurance claim; deep learning; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The widespread use of private cars is directly reflected in the number of insurance claims. Therefore, insurance companies need a breakthrough or new approach that is more effective and efficient in order to compete for the trust of their customers. One approach is to use artificial intelligence to detect damage to the car body and thereby speed up the claims process. In this research, several experiments were carried out using various models, namely Mask-R-CNN, ResNet50, MobileNetv2, YOLO-v5, and YOLO-v8, to detect damage to the car body. Among the experiments conducted, the best results were obtained using the YOLO-v8x model, with precision, recall, and F1-score values of 0.963, 0.951, and 0.936, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_103-Analysis_Performance_of_One_Stage_and_Two_Stage_Object.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Natural Language Processing Methods in Teaching Turkish Proverbs and Idioms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507102</link>
        <id>10.14569/IJACSA.2024.01507102</id>
        <doi>10.14569/IJACSA.2024.01507102</doi>
        <lastModDate>2024-07-31T12:06:34.8630000+00:00</lastModDate>
        
        <creator>Ert&#252;rk ERDAGI</creator>
        
        <subject>Idiom; proverb; natural language processing; word frequency; n-gram analysis; contextual analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study proposes a series of methods for the easy learning of proverbs and idioms. In Turkish, proverbs and idioms are structures used both in academic settings and in daily life, especially by 10-year-old students who have entered the abstract thinking stage. Since these structures contain abstract expressions, they seem difficult to learn at first. The study used 2396 proverbs and 11209 idioms from the online dictionary of the Turkish Language Association. A pre-test was conducted to measure the knowledge level of the 20 students selected as the study group. The structure of idioms and proverbs was analyzed using Natural Language Processing methods. Based on this analysis, the items were divided into difficulty groups according to information such as word count, n-gram analysis, and frequency level, and students were asked questions from an online question pool during the tutorial and testing process. Generative artificial intelligence enables semantic analysis of texts containing idioms and proverbs. Following the studies, a post-test was administered to the students to measure the efficiency of the process. As a result, students&#39; idiom knowledge increased by 51.8% and proverb knowledge increased by 59.40%.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_102-Use_of_Natural_Language_Processing_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Predictive Analysis of Vehicle Accident Risk: A Fuzzy-Bayesian Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507101</link>
        <id>10.14569/IJACSA.2024.01507101</id>
        <doi>10.14569/IJACSA.2024.01507101</doi>
        <lastModDate>2024-07-31T12:06:34.8470000+00:00</lastModDate>
        
        <creator>Houssam Mensouri</creator>
        
        <creator>Loubna Bouhsaien</creator>
        
        <creator>Youssra Amazou</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Monir Azmani</creator>
        
        <subject>Road traffic injuries; risk management; predictive analysis; Bayesian network; fuzzy logic; accident</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Although delivery transport activities aim to ensure excellent customer service, risks such as accidents, property damage, and additional costs occur frequently, necessitating risk control and prevention as critical components of transport supply chain quality. This article analyzes the risk of accidents, a fundamental root cause of critical situations that can have significant economic impacts on transport companies and potentially lead to customer loss if recurring. The case study develops a fuzzy Bayesian approach to anticipate accident risks through predictive analysis by combining Bayesian networks and fuzzy logic. Results reveal a strong correlation between fatal injuries in accidents and factors related to driver and vehicle conditions. The predictive model for accident occurrence is validated through three axioms, offering insights for carriers, transport companies, and governments to minimize accidents, injuries, and costs. Moreover, the developed model provides a foundation for various predictive applications in freight transport and other research fields aiming to identify parameters impacting accident occurrence.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_101-Enhancing_Predictive_Analysis_of_Vehicle_Accident_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Driven Citrus Disease Detection: A Novel Approach with DeepOverlay L-UNet and VGG-RefineNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01507100</link>
        <id>10.14569/IJACSA.2024.01507100</id>
        <doi>10.14569/IJACSA.2024.01507100</doi>
        <lastModDate>2024-07-31T12:06:34.8300000+00:00</lastModDate>
        
        <creator>P Dinesh</creator>
        
        <creator>Ramanathan Lakshmanan</creator>
        
        <subject>Citrus disease detection; highlighting affected region; Deep learning; semantic segmentation; DeepOverlay L-UNet; VGG-RefineNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Agriculture is essential to the world&#39;s need to produce food, generate income, and maintain livelihoods. Citrus fruits are produced worldwide and have a significant impact on food production, nutrition, and agriculture. During production, farmers face difficulties due to diseases that affect plant growth. Black spot, canker, and greening are citrus leaf diseases that put citrus production at risk, resulting in economic losses as well as reduced supply stability. Early detection of these diseases through recent technologies like deep learning will help farmers achieve better yields and quality. Current methods fall short in accurately marking the area affected by the disease. This work proposes a novel method for the segmentation and classification of citrus leaf diseases. The method consists of three phases. In the first phase, DeepOverlay L-UNet is used to segment the affected regions. In the second phase, disease detection is carried out using VGG-RefineNet, and in the third phase, the affected region is highlighted in the original image with a severity level. Moreover, the DeepOverlay L-UNet model proves effective in detecting affected areas, enabling clear visualization of the spread of the disease. The results affirm that the proposed method outperforms existing approaches, with a training IOU of 0.9864 and a validation IOU of 0.9334.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_100-Deep_Learning_Driven_Citrus_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-IoT Enabled Surveillance Security: DeepFake Detection and Person Re-Identification Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150799</link>
        <id>10.14569/IJACSA.2024.0150799</id>
        <doi>10.14569/IJACSA.2024.0150799</doi>
        <lastModDate>2024-07-31T12:06:34.8170000+00:00</lastModDate>
        
        <creator>Srikanth Bethu</creator>
        
        <creator>M. Trupthi</creator>
        
        <creator>Suresh Kumar Mandala</creator>
        
        <creator>Syed Karimunnisa</creator>
        
        <creator>Ayesha Banu</creator>
        
        <subject>Artificial intelligence; deep learning; face recognition; IoT; reinforcement learning; Deep Q network; deepfake</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Face Recognition serves as a biometric tool and technological approach for identifying individuals based on distinctive facial features and physiological characteristics such as interocular distance, nasal width, lip contours, and facial structure. Among various identification methods, it stands out for its efficacy. However, the emergence of deepfake technology poses a significant security threat to real-time surveillance networks. In response to this challenge, we propose an AI-IoT enabled surveillance security framework aimed at mitigating deepfake-related risks. This framework is designed for person identification by leveraging facial features and characteristics. Specifically, we employ a Reinforcement Learning-based Deep Q Network framework for person identification and deepfake detection. Through the integration of AI and IoT technologies, our framework offers enhanced surveillance security by accurately identifying individuals while effectively detecting and combating deepfake-generated content. This research contributes to the advancement of surveillance systems, providing a robust solution to address emerging security threats in real-time monitoring environments. The introduced Deep Q Network is useful for building a real-time surveillance framework in which live images are identified by a continuous learning mechanism and security issues are resolved through a feedback mechanism.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_99-AI_IoT_Enabled_Surveillance_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Smart System with Jetson Nano for Remote Insect Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150798</link>
        <id>10.14569/IJACSA.2024.0150798</id>
        <doi>10.14569/IJACSA.2024.0150798</doi>
        <lastModDate>2024-07-31T12:06:34.8000000+00:00</lastModDate>
        
        <creator>Thanh-Nghi Doan</creator>
        
        <creator>Thien-Hue Phan</creator>
        
        <subject>NVIDIA Jetson Nano; insect monitoring; YOLOv7</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Insect monitoring is vital for agricultural management and environmental conservation, but traditional methods are labor-intensive and time-consuming. This paper introduces a novel smart system utilizing NVIDIA&#39;s Jetson Nano technology combined with object detection models for remote insect monitoring. The system automates the processes of detection, identification, and monitoring, thereby significantly improving the efficiency and accuracy of insect population assessments. The implementation of the YOLOv7 model on a dataset containing 10 insect species achieved a mAP@0.5 accuracy of 77.2%. This enables farmers to take timely and appropriate measures to prevent pests and diseases, reducing production costs and protecting the environment.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_98-A_Novel_Smart_System_with_Jetson_Nano.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Low-Cost Transition Towards Smart Grids in Low-Income Countries: The Case Study of Togo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150797</link>
        <id>10.14569/IJACSA.2024.0150797</id>
        <doi>10.14569/IJACSA.2024.0150797</doi>
        <lastModDate>2024-07-31T12:06:34.7830000+00:00</lastModDate>
        
        <creator>Mohamed BARATE</creator>
        
        <creator>Eyoul&#233;ki Tcheyi Gnadi PALANGA</creator>
        
        <creator>Ayit&#233; S&#233;nah Akoda AJAVON</creator>
        
        <creator>Kodjo AGBOSSOU</creator>
        
        <subject>Smart grid; telecommunications network; low cost; low-income countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Power grids must integrate information and communication technologies to become intelligent. This integration will enable power grids to be reliable, resilient, and environmentally friendly. The smart grid would help low-income countries to have a more stable power system to boost their development. However, implementing a smart grid is costly and requires specialized skills. This article aims to outline a low-cost transition from conventional power grids to smart grids in low-income countries. It examines the possibility of telecommunications networks participating in implementing smart grids in these countries, to minimize costs. A combination of quantitative and qualitative methods was used. Using Togo as an example, a conceptual scheme for a low-cost smart grid is proposed, with Togo&#39;s telecom operators as the telecoms network support. A transition plan to the smart grid is proposed, based on feedback from developed countries.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_97-The_Low_Cost_Transition_Towards_Smart_Grids.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advances in Consortium Chain Scalability: A Review of the Practical Byzantine Fault Tolerance Consensus Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150796</link>
        <id>10.14569/IJACSA.2024.0150796</id>
        <doi>10.14569/IJACSA.2024.0150796</doi>
        <lastModDate>2024-07-31T12:06:34.7700000+00:00</lastModDate>
        
        <creator>Nur Haliza Abdul Wahab</creator>
        
        <creator>Zhang Dayong</creator>
        
        <creator>Juniardi Nur Fadila</creator>
        
        <creator>Keng Yinn Wong</creator>
        
        <subject>Blockchain; Practical Byzantine Fault Tolerance (PBFT); consensus algorithm; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Blockchain technology, renowned for its decentralized, immutable, and transparent features, offers a reliable framework for trust in distributed systems. The growing popularity of consortium blockchains, which include public, private, hybrid, and consortium chains, stems from their balance of privacy and collaboration. A significant challenge in these systems is the scalability of consensus mechanisms, particularly when employing the Practical Byzantine Fault Tolerance (PBFT) algorithm. This review focuses on enhancing PBFT&#39;s scalability, a critical factor in the effectiveness of consortium chains. Innovations such as Boneh–Lynn–Shacham (BLS) signatures and Verifiable Random Functions (VRF) are highlighted for their ability to reduce algorithmic complexity and increase transaction throughput. The discussion extends to real-world applications, particularly in platforms like Hyperledger Fabric, showcasing the practical benefits of these advancements. This paper provides a concise overview of the latest methodologies that enhance the performance scalability of PBFT-based consortium chains, serving as a valuable resource for researchers and practitioners aiming to optimize these systems for high-performance demands.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_96-Advances_in_Consortium_Chain_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Framework for Optimized Microservices Placement in Cloud Native Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150795</link>
        <id>10.14569/IJACSA.2024.0150795</id>
        <doi>10.14569/IJACSA.2024.0150795</doi>
        <lastModDate>2024-07-31T12:06:34.7370000+00:00</lastModDate>
        
        <creator>Riane Driss</creator>
        
        <creator>Ettazi Widad</creator>
        
        <creator>Ettalbi Ahmed</creator>
        
        <subject>Cloud native architecture; Service placement; containerization; Cloud resource allocation; microservices architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In recent times, cloud-native technologies have increasingly enabled the design and deployment of applications using a microservice architecture, enhancing modularity, scalability, and management efficiency. These advancements are specifically tailored for the creation and orchestration of containerized applications, marking a significant leap forward in the industry. Emerging cloud-native applications employ container-based virtualization instead of the traditional virtual machine approach. However, adopting this new cloud-native approach requires a shift in vision, particularly in addressing the challenges of microservices placement. Ensuring optimal resource utilization, maintaining service availability, and managing the complexity of distributed deployments are critical considerations that necessitate advanced orchestration and automation strategies. We introduce a new framework for microservices placement that optimizes application performance based on resource requirements. This approach aims to efficiently allocate infrastructural resources while ensuring high service availability and adherence to service level agreements. The implementation and experimental results of our method validate the feasibility of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_95-Towards_a_Framework_for_Optimized_Microservices_Placement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture of Depthwise Separable CNN and Multi-Level Pooling for Detection and Classification of Myopic Maculopathy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150794</link>
        <id>10.14569/IJACSA.2024.0150794</id>
        <doi>10.14569/IJACSA.2024.0150794</doi>
        <lastModDate>2024-07-31T12:06:34.7370000+00:00</lastModDate>
        
        <creator>Alaa E. S. Ahmed</creator>
        
        <subject>Retinograph; ophthalmologists; computer-aided diagnosis; vision loss; deep learning; retinograph images; myopic maculopathy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Myopic maculopathy (MM), also known as myopic macular degeneration, is the most serious, irreversible, vision-threatening complication and the leading cause of visual impairment and blindness. Numerous research studies demonstrate that the convolutional neural network (CNN) outperforms alternatives in many applications. Current CNN designs employ a variety of techniques, such as fixed convolutional kernels, the absolute value layer, data augmentation, and domain knowledge, to enhance performance. However, some aspects of network structure design have not yet received much attention. The intricacy of the MM categorization and definition system makes it challenging to employ deep learning (DL) technology in the diagnosis of pathologic myopia lesions. To increase the detection precision of MM&#39;s spatial domain, the proposed work first concentrates on creating a novel CNN network structure and then improves the convolution kernels in the preprocessing layer. The number of parameters is decreased, and the characteristics of small local regions are modeled using the smaller convolution kernels. Next, the channel correlation of the residuals with separable convolutions is employed to compress the image features. Then, local features are combined using the spatial pyramid pooling (SPP) technique, which improves feature representation through multi-level pooling. The use of data augmentation is the final step in enhancing network performance. The model achieved an accuracy of 95%, an F1-score of 96.5%, and an AUC of 0.92 on the augmented MM-PALM dataset. The paper concludes by conducting a comparative study of various deep-learning architectures. The findings highlight that the hybrid CNN with SPP and XgBoost (Depthwise-XgBoost) architecture is the ideal deep learning classification model for automated detection of the four stages of MM.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_94-A_Novel_Architecture_of_Depthwise_Separable_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Depression Analysis Among College Students Using Multi Modal Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150793</link>
        <id>10.14569/IJACSA.2024.0150793</id>
        <doi>10.14569/IJACSA.2024.0150793</doi>
        <lastModDate>2024-07-31T12:06:34.7070000+00:00</lastModDate>
        
        <creator>Liyan Wang</creator>
        
        <subject>Depression analysis; multimodal techniques; mental health; real-time monitoring; hybrid algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study proposes a novel approach to addressing mental health, particularly depression among college students, called CRADDS: a Comprehensive Real-time Adaptive Depression Detection System. CRADDS combines advanced tensor fusion networks that analyze emotions from audio, text, and video data more accurately, made possible by the strengths of deep learning and multimodal approaches. The system is constructed with a hybrid algorithm framework that combines Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Bidirectional Long Short-Term Memory (BiLSTM) techniques. To address the limitations identified in earlier research, CRADDS expands its feature set and uses effective machine learning algorithms to reduce false positives and negatives. Further, it includes advanced IoT devices to collect real-time data from a wide range of public and private sources. Depression symptoms can be continuously monitored in real time, which helps identify depression at early stages and safeguards students&#39; well-being. Additionally, the model can adapt based on interaction features, providing psychological support through automatic responses derived from verbal and nonverbal cues. Experiments show that the proposed CRADDS achieved impressive accuracy on text, audio, and video features compared with existing models. Overall, CRADDS is a useful tool for mental health professionals and educational institutions because it not only identifies depression but also helps treat it earlier, supporting academic performance and general well-being. The validation accuracy increases from 63.04% to 86.08%, which is higher than the compared existing SVM model.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_93-Deep_Learning_Based_Depression_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Machine Learning for Enhanced Breast Cancer Prediction: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150792</link>
        <id>10.14569/IJACSA.2024.0150792</id>
        <doi>10.14569/IJACSA.2024.0150792</doi>
        <lastModDate>2024-07-31T12:06:34.6900000+00:00</lastModDate>
        
        <creator>Md. Mijanur Rahman</creator>
        
        <creator>Khandoker Humayoun Kobir</creator>
        
        <creator>Sanjana Akther</creator>
        
        <creator>Md. Abul Hasnat Kallol</creator>
        
        <subject>Breast cancer; detection; machine learning; bagging; boosting; stacking; voting; chi square; ensemble; hybrid ensemble; bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Breast cancer poses a significant threat to women’s health, affecting one in every eight women globally and often leading to fatal outcomes due to delayed detection in advanced stages. Recent advancements in machine learning have opened doors to early detection possibilities. This study explores various machine learning algorithms, including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), AdaBoost (AB), Gradient Boosting (GB), and XGBoost (XGB). The employed algorithms, along with nested ensembles of Bagging, Boosting, Stacking, and Voting, predicted whether a cell is benign or malignant using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. Utilizing the Chi-square feature selection technique, this study identified 21 essential features to enhance prediction accuracy. Results of this study indicate that MLP LR achieved the highest accuracy of 98.25%, closely followed by SVM with 97.08% accuracy. Notably, the Voting classifier yielded the highest accuracy of 99.42% among the ensemble methods. These findings suggest that the research model holds promise for accurate breast cancer prediction, thus contributing to increased awareness and early intervention.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_92-Ensemble_Machine_Learning_for_Enhanced_Breast_Cancer_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Protection of Secure Sharing Electronic Health Records Based on Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150791</link>
        <id>10.14569/IJACSA.2024.0150791</id>
        <doi>10.14569/IJACSA.2024.0150791</doi>
        <lastModDate>2024-07-31T12:06:34.6770000+00:00</lastModDate>
        
        <creator>Yuan Wang</creator>
        
        <creator>Lin Sun</creator>
        
        <subject>Blockchain; secure sharing; electronic health records; privacy protection; zero-knowledge proof; attribute encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The secure sharing and privacy protection of medical data have become pain points for medical data management platforms. Therefore, a blockchain-based privacy protection method for securely sharing electronic health records is proposed in this study, aiming to improve data security and privacy and to ensure patients&#39; absolute ownership of their medical data. Attribute encryption and blockchain computing are utilized to construct a secure data sharing model, and zero-knowledge proof and the ElGamal encryption algorithm are introduced to further strengthen the data privacy protection method. Experimental verification showed that the secure data sharing method proposed in the study has advantages in terms of generated key size and time cost. Compared with other consensus mechanisms, zero-knowledge proof reduced the average time cost of generating keys by 54.36%. The proposed data privacy protection method achieved an average increase of 7.73% in protection effectiveness compared to other methods. The results indicate that the secure data sharing and privacy protection methods proposed in the study can improve the overall performance and security of the system while fully ensuring patients&#39; absolute ownership of their data. This method has positive application value in the privacy protection of medical data.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_91-Privacy_Protection_of_Secure_Sharing_Electronic_Health_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing Big Data: Strategic Insights for IT Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150790</link>
        <id>10.14569/IJACSA.2024.0150790</id>
        <doi>10.14569/IJACSA.2024.0150790</doi>
        <lastModDate>2024-07-31T12:06:34.6600000+00:00</lastModDate>
        
        <creator>Asfar H Siddiqui</creator>
        
        <creator>Swetha V P</creator>
        
        <creator>Harish Chowdhary</creator>
        
        <creator>R.V.V. Krishna</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <subject>Autoregressive integrated moving average; big data analytics; strategic planning; IT management; time-series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Big Data analytics has become an essential tool for IT management, enabling data-driven decision-making in areas such as resource allocation and strategic planning. This research examines the use of ARIMA (AutoRegressive Integrated Moving Average) models to improve decision-making in IT management. ARIMA is a popular time-series forecasting method that provides predictive capability, allowing businesses to foresee future patterns and base decisions on historical data analysis. ARIMA models benefit strategic planning by predicting market trends, service demand, and IT resource utilization, which helps firms make proactive resource allocation decisions and maximize operational efficiency. Additionally, ARIMA aids predictive maintenance techniques by forecasting equipment failures and maintenance needs, enabling businesses to reduce downtime and interruptions in critical IT systems. For resource allocation, ARIMA simplifies IT budget optimization by predicting spending needs and identifying potential cost-saving areas. Through accurate forecasts of future budgetary requirements, ARIMA facilitates smart financial resource allocation, investment prioritization, and efficient cost containment, all while optimizing value delivery. Furthermore, ARIMA supports risk management initiatives by evaluating and predicting risks associated with IT projects, operations, and investments. By analyzing historical data and identifying potential risks and vulnerabilities, ARIMA enables firms to mitigate risks, limit adverse effects on business operations, and enhance decision-making processes. Integrating ARIMA into data-driven decision-making processes for strategic planning and resource allocation in IT management has great potential to improve organizational efficiency, agility, competitiveness, and effectiveness. Implemented using Python, the proposed approach achieves an MSE of 1.25, making it more efficient than current techniques such as exponential smoothing and moving averages.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_90-Harnessing_Big_Data_Strategic_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid DBN-GRU Model for Enhanced Sentiment Analysis in Product Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150789</link>
        <id>10.14569/IJACSA.2024.0150789</id>
        <doi>10.14569/IJACSA.2024.0150789</doi>
        <lastModDate>2024-07-31T12:06:34.6430000+00:00</lastModDate>
        
        <creator>Shaista Khan</creator>
        
        <creator>J Chandra Sekhar</creator>
        
        <creator>J. Ramu</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>K.Aanandha Saravanan</creator>
        
        <creator>Kuchipudi Prasanth Kumar</creator>
        
        <creator>Prajakta Uday Waghe</creator>
        
        <subject>Sentimental analysis; product review; deep learning; DBN-GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In an era marked by a proliferation of online reviews across various domains, navigating the extensive and diverse range of opinions can be challenging. Sentiment analysis aims to extract and interpret sentiments from these vast pools of data using computational linguistics and information retrieval techniques. This study employs deep learning methods, namely Deep Belief Networks (DBN) and Gated Recurrent Units (GRU), to classify reviews into positive and negative sentiments, addressing the issue of information overload in product reviews. The primary objective is to develop an efficient sentiment analysis system that reliably categorizes reviews as positive or negative. The study introduces a novel sentiment analysis framework combining Deep Belief Networks and Gated Recurrent Units for online product review classification, enhancing accuracy through advanced feature extraction and classification techniques. The comprehensive preparation pipeline, comprising data splitting, stemming, stop-word removal, and special character separation, refines the dataset for improved classification accuracy. The proposed framework consists of four main phases: pre-processing, feature extraction, classification, and evaluation. During the pre-processing phase, the dataset is meticulously cleaned and refined to reduce noise and enhance signal quality. Significant features are then extracted from the pre-processed data using advanced feature extraction algorithms. The DBN-GRU model leverages these features for sentiment classification, effectively distinguishing between positive and negative attitudes. The framework’s performance is subsequently evaluated to assess its efficacy in accurately classifying reviews. The combination of in-depth pre-processing procedures and the DBN-GRU technique yielded promising results in sentiment categorization. The framework demonstrated a high accuracy of 98.74% in differentiating between positive and negative sentiments, thereby facilitating the effective analysis of online reviews. This study presents a robust framework for sentiment analysis, utilizing the DBN-GRU method to classify online reviews. Through extensive pre-processing and advanced classification techniques, the system addresses the challenges of noise and information overload in online reviews, providing valuable insights for both consumers and businesses.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_89-A_Hybrid_DBN_GRU_Model_for_Enhanced_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysing Code-Mixed Text in Programming Instruction Through Machine Learning for Feature Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150788</link>
        <id>10.14569/IJACSA.2024.0150788</id>
        <doi>10.14569/IJACSA.2024.0150788</doi>
        <lastModDate>2024-07-31T12:06:34.6270000+00:00</lastModDate>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>J Chandra Sekhar</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <creator>Nyamsuren Tsendsuren</creator>
        
        <creator>Adapa Gopi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Prema S</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Code-mixed text; text processing; federated learning; bidirectional long short-term memory; programming education; real-time applications; computer-aided education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In programming education, code-mixed text using multiple languages or dialects simultaneously can significantly hinder learning outcomes due to misinterpretation and inadequate processing by traditional systems. For instance, students with bilingual or multilingual backgrounds may face difficulties with automated code reviews or multilingual coding tutorials if their code-mixed queries are not accurately understood. Motivated by these challenges, this paper proposes a Federated Bi-LSTM Model for feature extraction and classification. This model leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks within a federated learning framework to effectively accommodate various code-switching methodologies and context-dependent linguistic elements while ensuring data security and privacy across distributed sources. The Federated Bi-LSTM Model demonstrates impressive performance, achieving 99.3% accuracy, nearly 19% higher than traditional techniques such as Support Vector Machines (SVM), Multilayer Perceptron (MLP), and Random Forest (RF). This significant improvement underscores the model&#39;s capability to efficiently analyse code-mixed text and enhance programming instruction for multilingual learners. However, the model faces limitations in processing highly specialized code-mixed text and adapting to real-time applications. Future research should focus on optimizing the model for these challenges and exploring its applicability in broader domains of computer-assisted education. This model represents a substantial advancement in language-aware computing, offering a promising solution for the evolving needs of adaptive and inclusive programming education technologies, with the potential to transform language-sensitive computing, provide significant support for multilingual learners, and set a new standard for inclusive programming education.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_88-Analysing_Code_Mixed_Text_in_Programming_Instruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing English Learning Environments Through Real-Time Emotion Detection and Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150787</link>
        <id>10.14569/IJACSA.2024.0150787</id>
        <doi>10.14569/IJACSA.2024.0150787</doi>
        <lastModDate>2024-07-31T12:06:34.6130000+00:00</lastModDate>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>Yaisna Rajkumari</creator>
        
        <creator>Komminni Ramesh</creator>
        
        <creator>Gulnaz Fatma</creator>
        
        <creator>M. Nagabhaskar</creator>
        
        <creator>Adapa Gopi</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <subject>Convolutional neural network; federated learning; LSTM; Natural Language Processing; SMOTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Educational technology is increasingly focusing on real-time language learning. Prior studies have utilized Natural Language Processing (NLP) to assess students&#39; classroom behavior by analyzing their reported feelings and thoughts. However, these studies have not fully enhanced the feedback provided to instructors and peers. This research addresses this issue by combining two innovative technologies, Federated 3D-Convolutional Neural Networks (Fed 3D-CNN) and Long Short-Term Memory (LSTM) networks, and investigates classroom attitudes to enhance students&#39; language competence. These technologies enable the modification of teaching strategies through text analysis and image recognition, providing comprehensive feedback on student interactions. For this study, the Multimodal Emotion Lines Dataset (MELD) and eNTERFACE&#39;05 datasets were selected. eNTERFACE&#39;05 contains 3D images of individuals, while MELD captures spoken patterns. To address overfitting issues, the SMOTE technique is used to balance the dataset through oversampling and undersampling. The study accurately predicts human emotions using Federated 3D-CNN technology, which excels in image processing by predicting personal information from various angles. Federated learning with 3D-CNNs allows simultaneous implementation for multiple clients by leveraging both local and global weight updates. The NLP system identifies emotional language patterns in students, laying the foundation for this analysis. Although not all student feedback has been extensively studied in the literature, the Fed 3D-CNN and LSTM algorithm recommendations are valuable for extracting feedback-related information from audio and video. The proposed framework achieves a prediction accuracy of 97.72%, outperforming existing methods.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_87-Enhancing_English_Learning_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Slicing Aided Hyper Inference (SAHI) in YOLOv8 to Counting Oil Palm Trees Using High-Resolution Aerial Imagery Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150786</link>
        <id>10.14569/IJACSA.2024.0150786</id>
        <doi>10.14569/IJACSA.2024.0150786</doi>
        <lastModDate>2024-07-31T12:06:34.5970000+00:00</lastModDate>
        
        <creator>Naufal Najiv Zhorif</creator>
        
        <creator>Rahmat Kenzie Anandyto</creator>
        
        <creator>Albrizy Ullaya Rusyadi</creator>
        
        <creator>Edy Irwansyah</creator>
        
        <subject>Oil palm tree; YOLOv8; SAHI; aerial imagery; tree counting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Palm oil is a commodity that contributes significantly to Indonesia&#39;s national economic growth, with a total plantation area of 116,000 hectares. By 2023, Indonesia was projected to produce approximately 47 million metric tons of palm oil. One of the major challenges in the manual counting of oil palm trees across a large plantation is the labour-intensive, time-consuming, costly, and dangerous nature of the field work. The use of aerial imagery allows for the mapping of large areas with comprehensive data coverage. This study proposes a method of mapping oil palm plantations for counting oil palm trees using high-resolution aerial images taken with drones. Furthermore, the use of artificial intelligence (AI) and deep learning (DL) methods with the You Only Look Once (YOLO) model for object detection has demonstrated good accuracy in previous studies. This research utilizes the YOLOv8m object detection model together with a slicing method, namely Slicing Aided Hyper Inference (SAHI), which is anticipated to enhance the precision of object detection models on high-resolution aerial imagery. The study concluded that the use of the SAHI slicing method can significantly enhance the accuracy of the model, as evidenced by a Mean Absolute Percentage Error (MAPE) value of 0.01758 on aerial imagery covering 73.2 hectares, with a detection time of 5 minutes and 45 seconds.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_86-Implementation_of_Slicing_Aided_Hyper_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Assessment in Adaptive Learning: Theories, Algorithms and Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150785</link>
        <id>10.14569/IJACSA.2024.0150785</id>
        <doi>10.14569/IJACSA.2024.0150785</doi>
        <lastModDate>2024-07-31T12:06:34.5970000+00:00</lastModDate>
        
        <creator>Adel Ihichr</creator>
        
        <creator>Omar Oustous</creator>
        
        <creator>Younes El Bouzekri El Idrissi</creator>
        
        <creator>Ayoub Ait Lahcen</creator>
        
        <subject>Adaptive assessment; adaptive learning; test; education; techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Computerized knowledge assessments have become increasingly popular, especially since COVID-19 has transformed assessment practices from both technological and pedagogical standpoints. This systematic review of the literature aims to analyze studies concerning the integration of adaptive assessment techniques and algorithms in Learning Management Systems (LMS) to generate a global vision of their potential to enhance the quality and adaptability of learning, and to provide recommendations for their application. A review of international indexed databases, specifically Scopus, was conducted, focusing on studies published between 2000 and 2024. The PICO framework was used to formulate the search query and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to select 66 relevant studies based on inclusion and exclusion criteria such as publishing year, document type, subject area, language, and other factors. The results reveal that integrating adaptive assessments positively impacts the quality of learning by generating short tests dynamically adapted to students’ skills, learning styles, and behaviors. Furthermore, the findings identify various techniques and algorithms used, as well as their main features and benefits. These tools tailor adaptive learning programs to meet students’ specific needs, preferences, and proficiency levels, thereby enhancing student motivation and enabling them to engage with material that matches their knowledge and abilities. In conclusion, the systematic review emphasizes the significance of integrating adaptive assessments in educational environments and offers tailored recommendations for their implementation to provide adaptive learning. These recommendations can be adopted and reused as guidelines to develop new and more sophisticated assessment models.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_85-A_Systematic_Review_on_Assessment_in_Adaptive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain and Heart Rate Variability Patterns Recognition for Depression Classification of Mental Health Disorder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150784</link>
        <id>10.14569/IJACSA.2024.0150784</id>
        <doi>10.14569/IJACSA.2024.0150784</doi>
        <lastModDate>2024-07-31T12:06:34.5800000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <creator>M. Emre Celebi</creator>
        
        <creator>Talal AlBalawi</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Mental health disorder; depression patterns; electroencephalogram; heart rate variability; deep learning; mobilenet; behavioral analysis; internet of medical things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Depression is common and dangerous if untreated. Depression patterns must be detected early and accurately to provide timely interventions and assistance. We present a novel depression prediction method (depressive-deep), which combines preprocessed brain electroencephalogram (EEG) and ECG-based heart rate variability (HRV) signals into a 2D scalogram. We then extracted features from the 2D scalogram images using a fine-tuned MobileNetV2 deep learning (DL) architecture and integrated an AdaBoost ensemble learning algorithm to improve the model’s performance. Our study suggests that ensemble learning can accurately predict asymmetric and symmetric depression patterns from multimodal signals such as EEG and ECG. These patterns include major depressive state (MDS), cognitive and emotional arousal (CEA), mood disorder patterns (MDPs), mood and emotional regulation (MER), and stress and emotional dysregulation (SED). To develop the depressive-deep model, we performed a pre-training strategy on two publicly available datasets, MODMA and SWELL-KW. The sensitivity (SE), specificity (SP), accuracy (ACC), F1-score, precision (P), Matthew’s correlation coefficient (MCC), and area under the curve (AUC) were analyzed to determine the best depression prediction model. Moreover, we used wearable devices over the Internet of Medical Things (IoMT) to extract signals and check the depressive-deep system’s generalizability. To ensure model robustness, we used several assessment criteria, including cross-validation. The depressive-deep and feature extraction strategies outperformed the other methods in depression prediction, obtaining an ACC of 0.96, SE of 0.98, SP of 0.95, P of 0.95, F1-score of 0.96, and MCC of 0.96. The main findings suggest that the 2D scalogram and depressive-deep (fine-tuned MobileNetV2 + AdaBoost) algorithms outperform other methods in detecting early depression, improving mental health diagnosis and treatment.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_84-Brain_and_Heart_Rate_Variability_Patterns_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Precision Farming with AI: An Integrated Deep Learning Solution for Paddy Leaf Disease Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150783</link>
        <id>10.14569/IJACSA.2024.0150783</id>
        <doi>10.14569/IJACSA.2024.0150783</doi>
        <lastModDate>2024-07-31T12:06:34.5670000+00:00</lastModDate>
        
        <creator>Pramod K</creator>
        
        <creator>V. R. Nagarajan</creator>
        
        <subject>Paddy rice; leaf diseases; hybrid deep learning; efficientnetb0; capsule network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Paddy rice, an essential food source for millions, is highly susceptible to various leaf diseases that threaten its yield and quality. This study introduces a cutting-edge hybrid deep learning model designed to address the critical need for accurate and timely identification and classification of paddy leaf diseases. Traditional methods often lack the precision and efficiency required for effective disease detection, necessitating the development of more sophisticated approaches. Our proposed model leverages the feature extraction capabilities of EfficientNetB0 and the hierarchical relationship capturing abilities of the Capsule Network, resulting in superior disease classification performance. The hybrid model demonstrates outstanding accuracy, achieving 97.86%, along with precision, recall, and F1-scores of 97.98%, 98.01%, and 97.99%, respectively. It effectively differentiates between diseases such as Narrow Brown Spot, Bacterial Leaf Blight, Leaf Blast, Leaf Scald, Brown Spot, and healthy leaves, showcasing its robustness in practical applications. This research highlights the importance of advanced technological interventions in agriculture, providing a scalable and efficient solution for disease detection in paddy crops. The hybrid deep learning model offers significant benefits to farmers and agricultural stakeholders, facilitating timely disease management, optimizing resource use, and improving crop management practices. Ultimately, this innovation supports agricultural sustainability and enhances global food security.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_83-Precision_Farming_with_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Sensitivity Preservation-Securing Value Using Varied Differential Privacy Method (SP-SV Method)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150782</link>
        <id>10.14569/IJACSA.2024.0150782</id>
        <doi>10.14569/IJACSA.2024.0150782</doi>
        <lastModDate>2024-07-31T12:06:34.5500000+00:00</lastModDate>
        
        <creator>Supriya G Purohit</creator>
        
        <creator>Veeragangadhara Swamy</creator>
        
        <subject>Big data; privacy preservation; security; data publish; data privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Numerous governmental entities, including hospitals and the Bureau of Statistics, as well as other functional units, have shown great interest in personalized privacy. Numerous models and techniques for data publishing have been put forward, the majority of which concentrate on a single sensitive attribute. A few scholarly articles have highlighted the need to protect the privacy of data that includes many sensitive attributes. With current techniques, the sanctity of privacy in data decreases if many sensitive values are published while maintaining k-anonymity and l-diversity simultaneously. Furthermore, customization has not been investigated in this context. We describe a publishing strategy in this research that handles customization when publishing material that has many sensitive features for analysis. The model makes use of a slicing strategy that is reinforced by fuzzy approaches for numerical sensitive attributes based on variety, generalization of categorical sensitive attributes, and probabilistic anonymization of quasi-identifiers using differential privacy. We limit the confidence that an adversary may draw about a sensitive value in a publicly available data collection to the level of understanding that can be inferred from known information. Artificial datasets based on real-life healthcare data were used in the trials. The outcomes guarantee that the data value is maintained while securing individuals&#39; privacy.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_82-Data_Sensitivity_Preservation_Securing_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pilot Study on Consumer Preference, Intentions and Trust on Purchasing-Pattern for Online Virtual Shops</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150780</link>
        <id>10.14569/IJACSA.2024.0150780</id>
        <doi>10.14569/IJACSA.2024.0150780</doi>
        <lastModDate>2024-07-31T12:06:34.5330000+00:00</lastModDate>
        
        <creator>Sebastina Nkechi Okofu</creator>
        
        <creator>Kizito Eluemunor Anazia</creator>
        
        <creator>Maureen Ifeanyi Akazue</creator>
        
        <creator>Margaret Dumebi Okpor</creator>
        
        <creator>Amanda Enadona Oweimieto</creator>
        
        <creator>Clive Ebomagune Asuai</creator>
        
        <creator>Geoffrey Augustine Nwokolo</creator>
        
        <creator>Arnold Adimabua Ojugo</creator>
        
        <creator>Emmanuel Obiajulu Ojei</creator>
        
        <subject>Consumer preference; consumer trust; purchasing-pattern; purchase intentions; online virtual shops</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>User behaviour toward an item is a choice predicated on the user&#39;s perception of the item, made to satisfy the intent behind the purchase pattern/choice. While virtual stores improve consumer coverage, monetization and ease of product delivery, users&#39; trust is lowered by the non-delivery of advertised products, as items purchased are often replaced with new/similar products. Each transaction reflects a user&#39;s buying behaviour; if harnessed, this will help businesses reshape their inventory to handle various challenges arising from feature evolution, feature drift, product replacement, and concept evolution. Our study seeks to resolve the issues of lowered consumer trust and preference for products purchased via online shops using a Bayesian network with trust, preference and intent as features of the virtual store, investigating their effectiveness in the design and their usefulness in promoting e-commerce in Nigeria. Data consists of 8,693 records collected via the Google Play Scraper Library for Jumia, retrieved from over 586 respondents. Expert evaluation ranked the design choice in the use of these parameters as high.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_80-Pilot_Study_on_Consumer_Preference_Intentions_and_Trust.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Path of Enhancing Employment and Entrepreneurship Ability of Deaf College Students Based on Knowledge Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150781</link>
        <id>10.14569/IJACSA.2024.0150781</id>
        <doi>10.14569/IJACSA.2024.0150781</doi>
        <lastModDate>2024-07-31T12:06:34.5330000+00:00</lastModDate>
        
        <creator>Pengyu Liu</creator>
        
        <subject>Knowledge graph; hearing impaired college students; employment and entrepreneurial ability; interest matching; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Enhancing employment capabilities and selecting suitable career paths are crucial for deaf university students. The advancement of knowledge graph technology has opened up technical possibilities for career decision-making among these students. This paper calculates user preferences and introduces an exponential decay function integrated with a time factor to accurately reflect the dynamic changes in user interest preferences over time. Leveraging knowledge graphs for personalized recommendations, the study proposes recommending necessary skills to enhance employment and entrepreneurial capabilities among students. Additionally, it employs knowledge graphs to suggest more suitable career paths for deaf university students. Finally, through empirical validation, the paper demonstrates the effectiveness of the proposed hybrid clustering and interest-based collaborative filtering recommendation algorithm.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_81-Research_on_the_Path_of_Enhancing_Employment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of a Hybrid Renewable Energy System Based on Meta-Heuristic Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150779</link>
        <id>10.14569/IJACSA.2024.0150779</id>
        <doi>10.14569/IJACSA.2024.0150779</doi>
        <lastModDate>2024-07-31T12:06:34.5200000+00:00</lastModDate>
        
        <creator>Ramia Ouederni</creator>
        
        <creator>Bechir Bouaziz</creator>
        
        <creator>Faouzi Bacha</creator>
        
        <subject>Hybrid renewable energy system; techno-economic optimization; optimal sizing; MOPSO; SSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Islands represent strategic platforms for exploring and exploiting marine resources. This article presents a hybrid renewable electric system (HRES) designed to power the island communities of Djerba in Tunisia. The system integrates photovoltaic panels, wind turbines, tidal turbines, hydraulic systems, biomass, and batteries, taking into account available climatic and land resources. A multi-objective optimization method is proposed for sizing this system to minimize power loss and energy costs. Two optimization algorithms, MOPSO (Multi-Objective Particle Swarm Optimization) and SSO (Social Spider Optimization) have been used to solve this problem. MATLAB simulations show that MOPSO offers better convergence and coverage than SSO. The results confirm the viability of the proposed algorithm and method for optimal sizing. In addition, they enable an in-depth analysis of the electrical production and economic benefits associated with the various system components.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_79-Optimization_of_a_Hybrid_Renewable_Energy_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Harris Hawks Optimization Algorithm for SLA-Aware Task Scheduling in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150778</link>
        <id>10.14569/IJACSA.2024.0150778</id>
        <doi>10.14569/IJACSA.2024.0150778</doi>
        <lastModDate>2024-07-31T12:06:34.5030000+00:00</lastModDate>
        
        <creator>Junhua Liu</creator>
        
        <creator>Chaoyang Lei</creator>
        
        <creator>Gen Yin</creator>
        
        <subject>Cloud computing; scheduling; optimization; SLA; SaaS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Cloud computing has revolutionized how Software as a Service (SaaS) suppliers deliver applications by leasing shareable resources from Infrastructure as a Service (IaaS) suppliers. However, meeting users&#39; Quality of Service (QoS) parameters while maximizing profits from the cloud infrastructure presents a significant challenge. This study addresses this challenge by proposing an Enhanced Harris Hawks Optimization (EHHO) algorithm for cloud task scheduling, specifically designed to satisfy Service Level Agreements (SLAs), meet users&#39; QoS requirements, and enhance resource utilization efficiency. Drawing inspiration from the hunting habits of Harris&#39;s hawks in nature, the basic HHO algorithm has shown promise in finding optimal solutions to specific problems. However, it often suffers from convergence to local optima, impairing solution quality. To mitigate this issue, our study enhances the HHO algorithm by introducing an exploration factor that optimizes parameters and improves its exploration capabilities. The proposed EHHO algorithm is assessed against established optimization algorithms, including Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). The results demonstrate that our method significantly improves the makespan relative to GA, ACO, and PSO by 19.2%, 17.1%, and 20.4%, respectively, while also achieving improvements of 17.1%, 17.3%, and 17.2% for BigDataBench workloads. Furthermore, our EHHO algorithm exhibits a substantial reduction in SLA violations compared to PSO, ACO, and GA, achieving improvements of 55.2%, 41.4%, and 33.6%, respectively, for general workloads, and 61.9%, 23.1%, and 52.7%, respectively, for BigDataBench workloads.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_78-Enhanced_Harris_Hawks_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Artificial Intelligence for Urban Planning: Challenges, Solutions, and Future Trends from a New Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150777</link>
        <id>10.14569/IJACSA.2024.0150777</id>
        <doi>10.14569/IJACSA.2024.0150777</doi>
        <lastModDate>2024-07-31T12:06:34.4870000+00:00</lastModDate>
        
        <creator>Shan TONG</creator>
        
        <creator>Shaokang LI</creator>
        
        <subject>Explainable artificial intelligence; urban planning; rule-based systems; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Integrating Artificial Intelligence (AI) into urban planning transforms resource allocation and sustainable development. Nevertheless, the lack of transparency in some AI models raises questions about accountability and public trust. This paper investigates the role of Explainable AI (XAI) in urban planning, focusing on its ability to improve transparency and build trust between stakeholders. The study comprehensively examines approaches to achieving explainability, encompassing rule-based systems and interpretable machine learning models. Case studies illustrate the effective application of XAI in practical urban planning situations and highlight the critical role of transparency in the decision-making flow. This study examines the barriers that hinder the smooth integration of XAI into urban planning methodologies. These challenges include ethical concerns, the complexity of the models used, and the need for explanations tailored to specific areas.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_77-Explainable_Artificial_Intelligence_for_Urban_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SocialBullyAlert: A Web Application for Cyberbullying Detection on Minors&#39; Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150776</link>
        <id>10.14569/IJACSA.2024.0150776</id>
        <doi>10.14569/IJACSA.2024.0150776</doi>
        <lastModDate>2024-07-31T12:06:34.4730000+00:00</lastModDate>
        
        <creator>Elizabeth Adriana Nina-Guti&#233;rrez</creator>
        
        <creator>Jes&#250;s Emerson Pacheco-Alanya</creator>
        
        <creator>Juan Carlos Morales-Arevalo</creator>
        
        <subject>Cyberbullying; artificial intelligence (AI); neural networks; parental control; social media; offensive content detection; User Experience (UX); User Interface (UI); mental health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The severe problem of cyberbullying towards minors is addressed, which has been shown to have significant impacts on the mental and emotional health of children and adolescents. Subsequently, the effectiveness of existing artificial intelligence models and neural networks in detecting cyberbullying on social media is analyzed. In response, a web platform is developed whose contribution is to identify offensive content, adapt to various slangs and idioms, and offer an intuitive interface with high usability in terms of user experience (UX) and user interface (UI) design. The application was validated with cyberbullying experts (teachers, principals, and psychologists), and the UI/UX design was also validated with users (parents). Limitations and future challenges are discussed, including varying cyberbullying regulations, the need for constant updates, and adapting to multiple languages and cultural contexts. This highlights the importance of ongoing research to enhance parental control tools in digital environments.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_76-SocialBullyAlert_A_Web_Application_for_Cyberbullying_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Business Insights into the Internet of Things: User Experiences and Organizational Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150775</link>
        <id>10.14569/IJACSA.2024.0150775</id>
        <doi>10.14569/IJACSA.2024.0150775</doi>
        <lastModDate>2024-07-31T12:06:34.4570000+00:00</lastModDate>
        
        <creator>Yang WEI</creator>
        
        <subject>Internet of Things; business literature; user perspectives; organizational impact; adoption trends; data-driven strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The Internet of Things (IoT) has revolutionized business operations across industries by integrating physical devices into digital networks. This study discusses the extensive business literature, particularly the impact of IoT from the perspective of users and organizations. This paper provides a comprehensive analysis of the effects, challenges, and opportunities of IoT in the business domain by integrating various perspectives and insights. We analyze trends in IoT adoption and explore the conditions promoting its widespread use in different industries and regions. The research investigates user perspectives, such as acceptance, user experience, and the ethics of the IoT. This paper focuses on how IoT will lead to new business models and the implications for strategy, operations, and client relationships. It critically reviews challenges, such as security vulnerabilities, compatibility challenges, and legal frameworks that currently restrict effortless integration of IoT in the industry from a business standpoint. Finally, we provide recommendations for further research.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_75-Business_Insights_into_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Feature Extraction Using Residual Attention and Local Context Aware Classifier for Crop Yield Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150774</link>
        <id>10.14569/IJACSA.2024.0150774</id>
        <doi>10.14569/IJACSA.2024.0150774</doi>
        <lastModDate>2024-07-31T12:06:34.4400000+00:00</lastModDate>
        
        <creator>Vinaykumar Vajjanakurike Nagaraju</creator>
        
        <creator>Ananda Babu Jayachandra</creator>
        
        <creator>Balaji Prabhu Baluvaneralu Veeranna</creator>
        
        <creator>Ravi Prakash Madenur Lingaraju</creator>
        
        <subject>Convolutional long short term memory; crop yield prediction; residual attention and local context-aware network; root mean squared error; spatial context data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Crop yield forecasting plays a key role in agricultural management and planning, and is essential for food security and production at regional to global scales. However, crop yield prediction is considered a challenging task due to the difficulty of extracting spatial context and local semantic features and of handling spatiotemporal relations. In order to address these issues, a comprehensive feature extraction method is developed along with an effective deep-learning classifier. In this paper, the Residual Attention and Local Context Aware Classifier (RALCAC) is developed for obtaining appropriate features from remote sensing crop yield images. The developed RALCAC helps to obtain the spatial context using a Residual Attention (RA) module and local semantic information that are beneficial in understanding the detailed depiction of the crop. Further, the Convolutional Long Short Term Memory (ConvLSTM) is used to predict crop yield using the comprehensive features from the RALCAC. The RALCAC is analysed by means of Root Mean Squared Error (RMSE) and the coefficient of determination. Existing methods such as DeepYield, SSTNN and 3DCNN are used for comparison with the RALCAC method. The RMSE of RALCAC for the MODIS dataset is 3.257, which is lower than that of DeepYield.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_74-Effective_Feature_Extraction_Using_Residual_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decoding Visual Question Answering Methodologies: Unveiling Applications in Multimodal Learning Frameworks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150773</link>
        <id>10.14569/IJACSA.2024.0150773</id>
        <doi>10.14569/IJACSA.2024.0150773</doi>
        <lastModDate>2024-07-31T12:06:34.4270000+00:00</lastModDate>
        
        <creator>Y Harika Devi</creator>
        
        <creator>G Ramu</creator>
        
        <subject>Visual Question Answering (VQA); Multimodal Learning; Neural Module Networks (NMN); Multimodal Compact Bilinear Pooling (MCB); question types; F1 score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This research investigates the intricacies of Visual Question Answering (VQA) methodologies and their applications within Multimodal Learning Frameworks. Our approach, founded on the synergy of Multimodal Compact Bilinear Pooling (MCB) and Neural Module Networks (NMN), offers a comprehensive understanding of visual and textual elements. Notably, the model excels in responding to Descriptive questions with an accuracy of 88%, showcasing a nuanced grasp of detailed inquiries. Factual questions follow closely with an 86% accuracy, while Inferential questions exhibit commendable performance at 82%. Precision scores reinforce the model&#39;s reliability, registering 85% for Descriptive, 82% for Factual, and 78% for Inferential questions. Robust recall scores further emphasize the model&#39;s ability to retrieve relevant information across question types. The F1 Score, reflecting a harmonious blend of precision and recall, attests to the model&#39;s strong overall performance: 87% for Descriptive, 84% for Factual, and 80% for Inferential questions. Visualizations through boxplots and violin plots affirm the model&#39;s consistency in accuracy and precision across question types. Future directions encompass dataset expansion, integration of transfer learning, attention mechanisms for interpretability, and exploration of broader multimodal applications beyond VQA. This research establishes a resilient framework for advancing VQA methodologies, paving the way for enhanced multimodal learning in diverse contexts.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_73-Decoding_Visual_Question_Answering_Methodologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Research of a Method for Multi-Level Protection of Transmitted Information in IP Networks Based on Asterisk IP PBX Using Various Codecs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150771</link>
        <id>10.14569/IJACSA.2024.0150771</id>
        <doi>10.14569/IJACSA.2024.0150771</doi>
        <lastModDate>2024-07-31T12:06:34.4100000+00:00</lastModDate>
        
        <creator>Mubarak Yakubova</creator>
        
        <creator>Tansaule Serikov</creator>
        
        <creator>Olga Manankova</creator>
        
        <subject>Asterisk PBX; IP telephony systems; codecs; data security; Python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Research indicates that the utilization of existing symmetric and asymmetric cryptosystems, as well as steganography, fails to ensure the requisite security and reliability in IP networks, where IP PBX Asterisk assumes the role of information transmission facilitator through switching processes. Consequently, this publication undertakes the development and investigation of a four-tiered information protection method when employing various voice codecs in IP networks based on IP PBX Asterisk. The adoption of multi-tiered protection significantly prolongs the cryptanalysis duration for malicious actors, thereby serving as a deterrent to information interception. The primary achievement of this research lies in minimizing the latency incurred during information traversal across the four layers of protection to less than 150 milliseconds, a benchmark widely acknowledged as optimal for assessing voice traffic service quality during transmission. It is noteworthy that a delay parameter of 150 milliseconds in telecommunications networks is pivotal; failure to meet this criterion at the receiving end may result in signal distortion such as jitter, audio degradation, unintelligibility, and other impairments. The devised methodology can be employed in networks transmitting highly classified or business-sensitive information. We contend that the developed encryption enhancement methodology, which prolongs the cryptanalysis duration for malicious entities and the conducted analysis, represents a novel scientific contribution.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_71-Development_and_Research_of_a_Method_for_Multi_Level_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmanned Aerial Vehicles Following Photography Path Planning Technology Based on Kinematic and Adaptive Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150772</link>
        <id>10.14569/IJACSA.2024.0150772</id>
        <doi>10.14569/IJACSA.2024.0150772</doi>
        <lastModDate>2024-07-31T12:06:34.4100000+00:00</lastModDate>
        
        <creator>Sa Xiao</creator>
        
        <subject>Kinematic model; adaptive control; unmanned aerial vehicles; path planning; follow photography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>As a representative invention of modern intelligent technology, unmanned aerial vehicles are receiving more and more attention in various fields. However, with conventional path planning, unmanned aerial vehicles cannot autonomously adjust planned paths to dynamic changes. To address this issue, this study proposes a path-planning algorithm for unmanned aerial vehicles following photography based on kinematic and adaptive models. A global coordinate system and an aircraft coordinate system are constructed based on the motion relationship between the unmanned aerial vehicle and the tracking target, and the two are converted into a horizontal projection coordinate system to digitize the observed data. On this basis, an adaptive control model is established based on the circular tracking path planning algorithm, and finally, simulation experiments and practical application tests are conducted in combination with the unmanned aerial vehicle following and shooting planning algorithm. The results showed that the best fitness values of the proposed algorithm and the two comparison algorithms were 97.56, 93.87, and 92.79, respectively, and the path time and average speed of the studied algorithm were 38 s and 3.4 m/s, both better than the other two algorithms. In the real machine experiment, there were six circular paths planned by the research algorithm, and the relative distance between the unmanned aerial vehicle and the target remained within the range of 200 m to 600 m. The actual trajectory had a high degree of overlap with the model-planned trajectory. The research shows that the proposed algorithm not only stabilizes the illumination angle within an effective range during path planning, but also exhibits high convergence and superior path-planning performance in practical applications.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_72-Unmanned_Aerial_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Precision Construction of Salary Prediction System Based on Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150770</link>
        <id>10.14569/IJACSA.2024.0150770</id>
        <doi>10.14569/IJACSA.2024.0150770</doi>
        <lastModDate>2024-07-31T12:06:34.3930000+00:00</lastModDate>
        
        <creator>Yuping Wang</creator>
        
        <creator>MingYan Bai</creator>
        
        <creator>Changjiang Liao</creator>
        
        <subject>Deep neural network; Adam; salary; prediction; system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Currently, most recruitment websites use keyword search or job nature classification to filter the salary information that job seekers are most concerned about. Job seekers need to spend considerable time and effort to understand the salary range of their desired position. In order to help job seekers quickly and accurately understand the salary of their desired position and their market value, the Word2vec model and the latent Dirichlet allocation model are used to obtain topic features, which serve as the basis for the salary prediction model. The study uses deep neural networks and the adaptive moment estimation algorithm to construct the salary prediction model. Based on the constructed salary prediction model, the final salary prediction system is built on a browser/server model. The results showed that on the training set, the maximum accuracy of the salary prediction model was 96.71%, the minimum was 93.75%, and the average was 95.07%. The mean absolute percentage error and mean square error of this model were 5.661% and 0.3462, respectively. The maximum average response time of the salary prediction system was 134.2 s, the minimum was 2.02 s, and the maximum throughput was 1,500,000 bytes/s. The salary prediction model has good performance, which can provide technical support for salary prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_70-Precision_Construction_of_Salary_Prediction_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Enhancement Through Augmented Reality Simulation: A Bibliometric Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150769</link>
        <id>10.14569/IJACSA.2024.0150769</id>
        <doi>10.14569/IJACSA.2024.0150769</doi>
        <lastModDate>2024-07-31T12:06:34.3770000+00:00</lastModDate>
        
        <creator>Zuhaili Mohd Arshad</creator>
        
        <creator>Mohamed Nor Azhari Azman</creator>
        
        <creator>Olzhas Kenzhaliyev</creator>
        
        <creator>Farid R. Kassimov</creator>
        
        <subject>Augmented reality; simulation; learning; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Augmented Reality (AR) has become a key technology in the education sector, offering interactive learning experiences that improve student engagement and understanding. Despite its increasing use, a thorough summary of AR research in educational environments is still required. This study applies bibliometric analysis to identify trends in this research field. Data from the Scopus database and VOSviewer software version 1.6.19 was used to analyze academic publications from 2018 to 2023. The original dataset of 4858 articles was narrowed down to 1109 articles concentrating on &quot;augmented reality&quot; AND &quot;simulation&quot; in student learning. Methods such as advanced data mining, co-citation analysis, and network visualization were utilized to outline the structure and trends in this research area. Key findings include a significant rise in research activity over the past decade, identification of the ten most prolific authors in AR simulation studies, and detailed visualizations of information distribution. Significant challenges include high costs and difficulties in technical integration. The study addresses these issues through interdisciplinary research that combines educational theory with AR technology. Results demonstrate growing interest in AR applications, particularly within STEM education, driven by technological advancements and increased funding. Despite these challenges, the potential of AR to enhance learning outcomes is clear. This research concludes that AR simulations can be a valuable educational tool, with further studies needed to explore the scalability of AR applications in various educational settings and to develop evidence-based guidelines for effective integration.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_69-Educational_Enhancement_Through_Augmented_Reality_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer Vision-Based Pill Recognition Application: Bridging Gaps in Medication Understanding for the Elderly</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150768</link>
        <id>10.14569/IJACSA.2024.0150768</id>
        <doi>10.14569/IJACSA.2024.0150768</doi>
        <lastModDate>2024-07-31T12:06:34.3630000+00:00</lastModDate>
        
        <creator>Taif Alahmadi</creator>
        
        <creator>Rana Alsaedi</creator>
        
        <creator>Ameera Alfadli</creator>
        
        <creator>Ohoud Alzubaidi</creator>
        
        <creator>Afnan Aldhahri</creator>
        
        <subject>Pill detection; seniors; computer vision; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Identifying prescribed medication accurately remains a challenge for many people, particularly older individuals who may experience medication errors due to impaired vision, lack of English proficiency, or other disabilities. This problem is more prevalent in healthcare settings where pills are often distributed in strips rather than in traditional packaging, increasing the risk of dangerous consequences. To address this issue, a mobile application has been developed using Computer Vision and Artificial Intelligence to accurately recognize pills and provide relevant information through text and speech formats. The approach integrates the GPT-4 API for imprint extraction and YOLOv8 for image detection, significantly enhancing the application&#39;s accuracy. The goal is to improve medication management for vulnerable populations facing unique accessibility challenges. The application has achieved an overall accuracy of 90.89%, demonstrating its effectiveness in assisting users to identify and manage their medication.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_68-A_Computer_Vision_Based_Pill_Recognition_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reading Recommendation Technology in Digital Libraries Based on Readers&#39; Social Relationships and Readers&#39; Interests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150767</link>
        <id>10.14569/IJACSA.2024.0150767</id>
        <doi>10.14569/IJACSA.2024.0150767</doi>
        <lastModDate>2024-07-31T12:06:34.3470000+00:00</lastModDate>
        
        <creator>Weiying Zheng</creator>
        
        <subject>Digital library; recommend; behavioral characteristics; interest; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In recent years, the construction of digital libraries has contributed to the advancement of smart lending services. The challenge of suggesting appropriate books for readers from a vast collection remains a primary obstacle in current digital library construction. A fusion method for recommending content to readers with diverse interests is proposed. The method first extracts short-term borrowing behavior characteristics while simultaneously considering the social similarity characteristics of readers, producing content recommendations through target ranking search. To cater to long-term readers, a reading recommendation method that integrates readers&#39; reading behaviors is proposed, modeling readers&#39; interests through the attention mechanism. It constructs readers&#39; preference models using synergistic metrics and finally achieves content recommendation through preference fusion. The proposed model attained the swiftest convergence and the minimum logarithmic loss of 1.85 in recommending readings for multi-interest readers. Additionally, the accuracy of the proposed model in science reading recommendation scenarios was 97.24%, surpassing other models. In the reading recommendation experiments for extended borrowings, the proposed model demonstrated superior performance with regard to recall and precision, which were 0.198 and 0.062, respectively. Lastly, after comparing the recommendation errors of different reading models, the proposed model exhibited a root-mean-square error of 0.731 and an average absolute error of 0.721, the most accurate among the three models compared. The proposed model demonstrates excellent recommendation effectiveness in real-world reading recommendation scenarios. This research offers significant technical references for the advancement of related recommendation technology and the development of digital libraries.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_67-Reading_Recommendation_Technology_in_Digital_Libraries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Dimension Reduction: A Balanced Relative Discrimination Feature Ranking Technique for Efficient Text Classification (BRDC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150766</link>
        <id>10.14569/IJACSA.2024.0150766</id>
        <doi>10.14569/IJACSA.2024.0150766</doi>
        <lastModDate>2024-07-31T12:06:34.3300000+00:00</lastModDate>
        
        <creator>Muhammad Nasir</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Wareesa Sharif</creator>
        
        <creator>Souad Baowidan</creator>
        
        <creator>Humaira Arshad</creator>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <subject>Text classification; balanced relative discrimination criterion; dimension reduction; feature ranking; deep learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The volume and complexity of textual data have significantly increased worldwide, demanding a comprehensive understanding of machine learning techniques for accurate text classification in various applications. In recent years, there has been significant growth in natural language processing (NLP) and neural networks (NNs). Deep learning (DL) models have outperformed classical machine learning approaches in text classification tasks, such as sentiment analysis, news categorization, question answering, and natural language inference. Dimension reduction is crucial for refining classifier performance and decreasing the computational cost of text classification. Existing methodologies, such as the Improved Relative Discrimination Criterion (IRDC) and the Relative Discrimination Criterion (RDC), exhibit deficiencies in proper normalization and are not well balanced in term ranking across distinct classes. This study introduces an improved feature-ranking metric called the Balanced Relative Discrimination Criterion (BRDC). The proposed metric incorporates document frequencies into term-count estimations, facilitating a normalized and balanced classification approach. The proposed methodology demonstrated superior performance compared to existing techniques. Experiments were conducted to evaluate the efficacy of the proposed techniques using Decision Tree (DT), Logistic Regression (LR), Multinomial Na&#239;ve Bayes (MNB), and Long Short-Term Memory (LSTM) models on three benchmark datasets: Reuters-21578, 20newsgroup, and AG News. The findings indicate that LSTM outperformed the other models and can be applied in conjunction with the proposed BRDC approach.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_66-Towards_Dimension_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Predictive Model for Software Cost Estimation Using ARIMA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150764</link>
        <id>10.14569/IJACSA.2024.0150764</id>
        <doi>10.14569/IJACSA.2024.0150764</doi>
        <lastModDate>2024-07-31T12:06:34.3170000+00:00</lastModDate>
        
        <creator>Moatasem M. Draz</creator>
        
        <creator>Osama Emam</creator>
        
        <creator>Safaa M. Azzam</creator>
        
        <subject>Software cost estimation; software effort estimation; promise repository; SCE; ARIMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Technology is a differentiator in business today, playing a decisive role through the software that supports it. To build such software while avoiding risks during implementation and construction, it is necessary to estimate the cost. Software cost estimation is the process of estimating the effort, time, and resources needed to build a software project. It is a crucial process because it enables good planning during construction and implementation and reduces the risks a project may face. Previous studies have sought to build models and methods for this estimation, but they were not accurate enough. This study therefore builds a model using the Autoregressive Integrated Moving Average (ARIMA) algorithm. Five datasets were used: COCOMO81, COCOMONasaV1, COCOMONasaV2, Desharnais, and China. Each dataset was processed to remove noise and missing values, visualized to understand it, and linked using a time series to predict future values; the model was then trained with the ARIMA algorithm. To ensure the model&#39;s effectiveness and efficiency, four well-known evaluation criteria were used: mean magnitude of relative error (MMRE), root mean square error (RMSE), median magnitude of relative error (MdMRE), and prediction accuracy (PRED). The experiment showed impressive software cost estimation results, with MMRE, RMSE, MdMRE, and PRED of 0.07613, 0.04999, 0.03813, and 95% for the COCOMO81 dataset, respectively. The results were high for the COCOMONasaV1 dataset, reaching 0.02227, 0.02899, 0.01113, and 97.1%. The COCOMONasaV2 results were 0.01035, 0.00650, 0.00517, and 99.35%, respectively. The China dataset showed good prediction results of 0.00001, 0.00430, 0.00008, and 99.57%, respectively. The results were impressive and promising for the Desharnais dataset, showing 0.00004, 0.0039, 0.00002, and 99.6%. The results of this study are promising and distinctive compared to recent studies, and they also contribute to good business planning and risk reduction.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_64-A_Predictive_Model_for_Software_Cost_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defense Mechanisms for Vehicular Networks: Deep Learning Approaches for Detecting DDoS Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150765</link>
        <id>10.14569/IJACSA.2024.0150765</id>
        <doi>10.14569/IJACSA.2024.0150765</doi>
        <lastModDate>2024-07-31T12:06:34.3170000+00:00</lastModDate>
        
        <creator>Lekshmi V</creator>
        
        <creator>R. Suji Pramila</creator>
        
        <creator>Tibbie Pon Symon V A</creator>
        
        <subject>Vehicular Ad-hoc Networks; Denial of Service attacks; deep learning; auto encoder; Long Short-Term Memory; self-attention mechanism; cyber threats; network reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Vehicular Ad-hoc Networks (VANETs) are engineered to meet the distinctive demands of vehicular communication, facilitating interactions between vehicles and roadside infrastructure to enhance road safety, traffic efficiency, and diverse applications such as traffic management and infotainment services. However, the looming threat of Distributed Denial of Service (DDoS) attacks in VANETs poses a significant challenge, potentially disrupting critical services and compromising user safety. To address this challenge, this study proposes a novel deep learning (DL)-based model that integrates Long Short-Term Memory (LSTM) architecture with self-attention mechanisms to effectively detect DDoS attacks in VANETs. By incorporating autoencoders for feature extraction, the model leverages the sequential nature of VANET data, prioritizing relevant information within input sequences to accurately identify malicious activities. With an impressive accuracy of 98.39%, precision of 97.79%, recall of 98.00%, and F1-score of 98.20%, the proposed approach demonstrates remarkable efficacy in safeguarding VANETs against cyber threats, thereby contributing to enhanced road safety and network reliability.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_65-Defense_Mechanisms_for_Vehicular_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Underwater Quality Enhancement Based on Mixture Contrast Limited Adaptive Histogram and Multiscale Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150763</link>
        <id>10.14569/IJACSA.2024.0150763</id>
        <doi>10.14569/IJACSA.2024.0150763</doi>
        <lastModDate>2024-07-31T12:06:34.2830000+00:00</lastModDate>
        
        <creator>Septa Cahyani</creator>
        
        <creator>Anny Kartika Sari</creator>
        
        <creator>Agus Harjoko</creator>
        
        <subject>CLAHE; Color space enhancement; luminance; sharpening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper presents a novel approach for enhancing the visual quality of underwater images using various spatial processing techniques. This research addresses the common issues encountered in underwater imaging, such as color distortion, low clarity, low contrast, bluish or greenish tints caused by light scattering and absorption, and the presence of underwater organisms. To solve these problems, we utilize various image processing methods such as white balancing, Contrast Limited Adaptive Histogram Equalization (CLAHE) in Lab and HSV color spaces, sharpening, weight map generation, and multiscale fusion. The effectiveness of the proposed approach is evaluated quantitatively using mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). The results indicate that the optimal CLAHE parameters are a block size of 4x4 and a clip limit of 1.2. These parameters yielded an MSE value of 0.7594, a PSNR value of 20.7121, and an SSIM value of 0.8826, demonstrating superior performance compared to previous research. A qualitative evaluation was also conducted with eight respondents based on overall visual quality, color fidelity, and contrast enhancement. The assessment results demonstrate satisfactory outcomes, with a mean score of 4.3278 and a standard deviation of 0.7238. Overall, this research demonstrates that effective and efficient enhancement of underwater image quality through computational methods can be achieved using simple techniques with appropriate parameters and placement, thereby enabling better scientific research and exploration of the underwater world.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_63-Underwater_Quality_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of a Unified Query Platform as Middleware for NoSQL Data Stores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150762</link>
        <id>10.14569/IJACSA.2024.0150762</id>
        <doi>10.14569/IJACSA.2024.0150762</doi>
        <lastModDate>2024-07-31T12:06:34.2700000+00:00</lastModDate>
        
        <creator>Hadwin Valentine</creator>
        
        <creator>Boniface Kabaso</creator>
        
        <subject>Unified query; polyglot; NoSQL; middleware; query processing; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Advancements in technology such as Web 2.0, Web 3.0, mobile devices, and, more recently, IoT devices have given rise to massive amounts of structured, semi-structured, and unstructured datasets, i.e., big data. The increasing complexity and diversity of data sources pose significant challenges for stakeholders when extracting meaningful insights. This paper demonstrates how we developed a unified query prototype as middleware using a polyglot technique capable of interrogating and manipulating the four categories of NoSQL data models. This study applied established algorithms to different aspects of the prototype to attain its objective. The prototype was subjected to an experiment in which varying query workloads were processed. The performance data comprised the application performance index, memory consumption, execution time, and error rates. The results demonstrated that the prototype had a low error rate, indicating its robustness and reliability. In addition, the results showed that the prototype is responsive and able to query the underlying storage systems effectively and efficiently. The prototype provides a standardized set of operations that abstracts the complexities of each underlying storage system, reducing the need for multiple data retrieval management systems.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_62-Design_and_Development_of_a_Unified_Query_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Cryptology in the Big Data Security Era</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150761</link>
        <id>10.14569/IJACSA.2024.0150761</id>
        <doi>10.14569/IJACSA.2024.0150761</doi>
        <lastModDate>2024-07-31T12:06:34.2530000+00:00</lastModDate>
        
        <creator>Chaymae Majdoubi</creator>
        
        <creator>Saida El Mendili</creator>
        
        <creator>Youssef Gahi</creator>
        
        <subject>Data security; quantum cryptology; big data; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Quantum cryptography, based on the principles of quantum mechanics, has emerged as a cutting-edge domain for cryptographic applications. A prime example is quantum key distribution, offering an information-theoretically secure solution to the key exchange challenge. The inherent strength of quantum cryptography lies in its ability to accomplish cryptographic tasks deemed insurmountable through classical communication alone. This paper explores the landscape of quantum computing in the Big Data era, drawing parallels with classical methodologies. It illuminates the constraints of current approaches and suggests avenues for progress. By unravelling the intricacies of quantum cryptography and highlighting its deviations from classical counterparts, this study enriches the ongoing discourse on secure communication protocols. The findings underscore the significance of quantum cryptographic methods, fueling further exploration and development in this dynamic and promising field and contributing to data security.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_61-Quantum_Cryptology_in_the_Big_Data_Security_Era.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Group Non-Critical Behavior Recognition Based on Joint Attention Mechanism of Sensor Data and Semantic Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150760</link>
        <id>10.14569/IJACSA.2024.0150760</id>
        <doi>10.14569/IJACSA.2024.0150760</doi>
        <lastModDate>2024-07-31T12:06:34.2530000+00:00</lastModDate>
        
        <creator>Chen Li</creator>
        
        <creator>Baoluo Liu</creator>
        
        <subject>Sensor data; attention mechanisms; semantic domains; non-critical; group behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>As science and technology continue to advance, sensor technology is being used in more and more industries. However, traditional methods ignore the semantic information of individual behavior and the correlation between individuals and groups. This study therefore proposes a new method for group behavior recognition. Features are extracted from group behavior by collecting sensor data and applying a semantic-domain joint attention mechanism. This is achieved through a recognition method based on a joint attention mechanism over the data domain and semantic domain, which enables the accurate identification of non-critical behaviors in the group. The findings showed that, when the group members are constant, the hybrid network based on a convolutional neural network and a bidirectional long short-term memory network improved F1 by 0.2% and accuracy by 0.19%. Moreover, the hybrid network combining a graph neural network, a bidirectional long short-term memory network, and a convolutional neural network achieved further improvements. In group behavior recognition, group relationship modeling based on a graph convolutional network improved F1 by 0.17% and accuracy by 0.17% compared to the hybrid network, indicating that group relationship modeling better captures group interaction features and improves recognition. The method is highly effective in the field of group behavior recognition and is expected to provide a new idea for monitoring and managing group behavior in practical scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_60-Group_Non_Critical_Behavior_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Computing for Real-Time Decision Making in Autonomous Driving: Review of Challenges, Solutions, and Future Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150759</link>
        <id>10.14569/IJACSA.2024.0150759</id>
        <doi>10.14569/IJACSA.2024.0150759</doi>
        <lastModDate>2024-07-31T12:06:34.2370000+00:00</lastModDate>
        
        <creator>Jihong XIE</creator>
        
        <creator>Xiang ZHOU</creator>
        
        <creator>Lu CHENG</creator>
        
        <subject>Edge computing; autonomous driving; real-time decision-making; reliability; resource management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In the coming half-century, autonomous vehicles will share the roads alongside manually operated automobiles, leading to ongoing interactions between the two categories of vehicles. The advancement of autonomous driving systems has raised the importance of real-time decision-making abilities. Edge computing plays a crucial role in satisfying this requirement by bringing computation and data processing closer to the source, reducing delay, and enhancing the overall efficiency of autonomous vehicles. This paper explores the core principles of edge computing, emphasizing its capability to handle data close to its origin. The study focuses on the issues of network reliability, safety, scalability, and resource management. It offers insights into strategies and technology that effectively handle these challenges. Case studies demonstrate practical implementations and highlight the real-world benefits of edge computing in enhancing decision-making processes for autonomous vehicles. Furthermore, the study outlines upcoming trends and examines emerging technologies such as artificial intelligence, 5G connectivity, and innovative edge computing architectures.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_59-Edge_Computing_for_Real_Time_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metaheuristic Optimization for Dynamic Task Scheduling in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150758</link>
        <id>10.14569/IJACSA.2024.0150758</id>
        <doi>10.14569/IJACSA.2024.0150758</doi>
        <lastModDate>2024-07-31T12:06:34.2230000+00:00</lastModDate>
        
        <creator>Longyang Du</creator>
        
        <creator>Qingxuan Wang</creator>
        
        <subject>Dynamic task scheduling; cloud computing; metaheuristic optimization; load balancing; task allocation; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Cloud computing enables the sharing of resources across the Internet in a highly adaptable and quantifiable way. This technology allows users to access customizable distributed resources and offers various services for resource allocation, scientific operations, and service computing via virtualization. Effectively allocating tasks to available resources is essential to providing reliable consumer performance. Task scheduling in cloud computing models presents substantial challenges as it necessitates an efficient scheduler to map multiple tasks from numerous sources and dynamically distribute resources to users based on their requirements. This study presents a metaheuristic optimization methodology that integrates load balancing by dynamically distributing tasks across available resources based on current load conditions. This ensures an even distribution of workloads, preventing resource bottlenecks and enhancing overall system performance. The suggested method is suitable for both constant and variable activities. Our technique was compared with established metaheuristic methods, including HDD-PLB, HG-GSA, and CAAH. The proposed method demonstrated superior performance due to its adaptive load balancing mechanism and efficient resource utilization, reducing task completion times and improving overall system throughput.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_58-Metaheuristic_Optimization_for_Dynamic_Task_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Learning Algorithms for Predicting Carbon Emissions of Light-Duty Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150757</link>
        <id>10.14569/IJACSA.2024.0150757</id>
        <doi>10.14569/IJACSA.2024.0150757</doi>
        <lastModDate>2024-07-31T12:06:34.1900000+00:00</lastModDate>
        
        <creator>Rashmi B. Kale</creator>
        
        <creator>Nuzhat Faiz Shaikh</creator>
        
        <subject>Carbon emission; machine learning algorithms; CariQ carbon emission dataset; Air Quality Index (AQI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This research presents a comparative analysis of different learning methods developed for predicting carbon emissions from light-duty vehicles. With growing concern over environmental sustainability, accurate prediction of carbon emissions is vital for developing effective mitigation strategies. The work assesses the performance of various algorithms trained on vehicle-specific data attributes to predict emission patterns for different fuel types across light-duty vehicle models. It uses two real-time petrol and diesel datasets collected via the CariQ app and device, along with a Canadian government dataset obtained from an online repository, for vehicle emission prediction. The evaluation is based on predictive accuracy. The findings reveal insights into the effectiveness of different learning techniques in accurately estimating carbon emissions from vehicles, providing valuable guidance for policymakers and researchers in the fields of environmental sustainability and transportation planning.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_57-Analysis_of_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Na&#239;ve Bayes Classifier, Support Vector Machine and Decision Tree in Rainfall Classification Using Confusion Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150755</link>
        <id>10.14569/IJACSA.2024.0150755</id>
        <doi>10.14569/IJACSA.2024.0150755</doi>
        <lastModDate>2024-07-31T12:06:34.1770000+00:00</lastModDate>
        
        <creator>Elvira Vidya Berliana</creator>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <subject>Na&#239;ve Bayes Classifier (NBC); Support Vector Machine (SVM); decision tree; confusion matrix; classification; rainfall; temperature; humidity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The climate in Indonesia remains unstable to this day, and this instability makes rainfall conditions difficult to predict. An algorithm is therefore needed to help the public predict rainfall conditions using rainfall, temperature, and humidity parameters. The research uses daily climate data from the Indonesia Climatology Agency spanning 2018 to 2023. The classification system using the Na&#239;ve Bayes Classifier (NBC) algorithm is less able to capture complex feature interactions, with an accuracy of 97%-98%. Support Vector Machine (SVM) achieves an accuracy of 92%-94% with fewer prediction errors than NBC, while the Decision Tree suffers from overfitting, especially on testing sets with 50% of the data, with an accuracy of 99%-100%. Although the Decision Tree shows the best performance, the remaining risk of overfitting makes SVM the stable choice in this research.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_55-Comparative_Analysis_of_Na&#239;ve_Bayes_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Calibrating Hand Gesture Recognition for Stroke Rehabilitation Internet-of-Things (RIOT) Using MediaPipe in Smart Healthcare Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150756</link>
        <id>10.14569/IJACSA.2024.0150756</id>
        <doi>10.14569/IJACSA.2024.0150756</doi>
        <lastModDate>2024-07-31T12:06:34.1770000+00:00</lastModDate>
        
        <creator>Ahmad Anwar Zainuddin</creator>
        
        <creator>Nurul Hanis Mohd Dhuzuki</creator>
        
        <creator>Asmarani Ahmad Puzi</creator>
        
        <creator>Mohd Naqiuddin Johar</creator>
        
        <creator>Maslina Yazid</creator>
        
        <subject>Internet-of-Things (IoT); RIOT; stroke rehabilitation; calibration; machine learning; MediaPipe; data security; smart healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Stroke rehabilitation is fraught with challenges, particularly regarding patient mobility, imprecise assessment scoring during the therapy session, and the security of healthcare data shared online. This work aims to address these issues by calibrating hand gesture recognition systems using the Rehabilitation Internet-of-Things (RIOT) framework and examining the effectiveness of machine learning algorithms in conjunction with the MediaPipe framework for gesture recognition calibration. RIOT represents an IoT system developed for the purpose of facilitating remote rehabilitation, with a particular focus on individuals recovering from strokes and residing in geographically distant regions, in addition to healthcare professionals specialising in physical therapy. The Design of Experiment (DoE) methodology allows physiotherapists and researchers to systematically explore the relationship between RIOT and accurate hand gesture recognition using Python&#39;s MediaPipe library, by addressing possible factors that may affect the reliability of patients’ scoring results while emphasising data security consideration. To ensure precise rehabilitation assessments, this initiative seeks to enhance accessible home-based stroke rehabilitation by producing optimal and secure calibrated hand gesture recognition with practical recognition techniques. These solutions will be able to benefit both physiotherapists and patients, especially stroke patients who require themselves to be monitored remotely while prioritising security measures within the smart healthcare context.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_56-Calibrating_Hand_Gesture_Recognition_for_Stroke_Rehabilitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hidden Markov Model for Cardholder Purchasing Pattern Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150754</link>
        <id>10.14569/IJACSA.2024.0150754</id>
        <doi>10.14569/IJACSA.2024.0150754</doi>
        <lastModDate>2024-07-31T12:06:34.1600000+00:00</lastModDate>
        
        <creator>Okoth Jeremiah Otieno</creator>
        
        <creator>Michael Kimwele</creator>
        
        <creator>Kennedy Ogada</creator>
        
        <subject>Hidden Markov Model; cardholder transaction patterns; merchant categories; predictive algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study utilizes the Hidden Markov Model to predict cardholder purchasing patterns by monitoring card transaction trends and profiling cardholders based on dominant transactional motivations across four merchant sectors, i.e., service centers, social joints, restaurants, and health facilities. The research addresses shortfalls of existing studies, which often disregard credit, prepaid, and debit card transactions outside online transaction channels, focusing primarily on credit card fraud detection. This research also addresses the challenges of existing prediction algorithms such as support vector machine, decision tree, and na&#239;ve Bayes classifiers. The research presents a three-phased Hidden Markov Model implementation comprising initialization, decoding, and evaluation, all executed through a Python script and further validated through a 2-fold cross-validation technique. The study uses an experimental design to systematically investigate cardholder transactional patterns, exposing training and validation data to varied initial and transition state probabilities to optimize prediction outcomes. The results are evaluated through three key metrics, i.e., accuracy, precision, and recall, achieving optimal performance of 100% for both accuracy and precision, with a 99% recall rate, thereby outperforming existing predictive algorithms like support vector machine, decision tree, and Na&#239;ve Bayes classifiers. This study demonstrates the Hidden Markov Model&#8217;s effectiveness in dynamically modeling cardholder behaviors within merchant categories, offering a full understanding of the real motivations behind card transactions. The implications of this research include enhancing merchant growth strategies by empowering card acquirers and issuers with a better approach to optimizing their operations and marketing synergies based on a clear understanding of cardholder transactional patterns. Further, the research significantly contributes to consumer behavior analysis and predictive modeling within the card payments ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_54-Hidden_Markov_Model_for_Cardholder_Purchasing_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microarray Gene Expression Dataset Feature Selection and Classification with Swarm Optimization to Diagnosis Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150753</link>
        <id>10.14569/IJACSA.2024.0150753</id>
        <doi>10.14569/IJACSA.2024.0150753</doi>
        <lastModDate>2024-07-31T12:06:34.1430000+00:00</lastModDate>
        
        <creator>Peddarapu Rama Krishna</creator>
        
        <creator>Pothuraju Rajarajeswari</creator>
        
        <subject>Feature Selection; classification; gene expression data; Microarray; RRP_SW; hybrid feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Bioinformatics data comprises vast, data-intensive biological information, and its rapid accumulation brings undesired, redundant information with it. Statistical methods applied to gene expression data support cancer diagnosis and prognosis, while microarray data provides rough approximations for gene expression analysis. Microarray datasets contain a massive number of gene features relative to their small sample sizes, so it is necessary to evaluate and select the features in the microarray dataset to obtain effective outcomes from patterns of gene expression. This paper presents a re-sampling of random probability Swarm Optimization (RRP_SW) model. RRP_SW uses random re-sampling to estimate features, which are then evaluated through a multi-objective optimization model. Re-sampling estimates the features in the microarray datasets, and the features are sampled through the computation of probability values for classification. With the RRP_SW model, extreme learning is utilized for the classification of features in the microarray benchmark datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_53-Microarray_Gene_Expression_Dataset_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Cognitive Assisted Adaptive Frame Selection for Continuous Sign Language Recognition in Videos Using ConvLSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150752</link>
        <id>10.14569/IJACSA.2024.0150752</id>
        <doi>10.14569/IJACSA.2024.0150752</doi>
        <lastModDate>2024-07-31T12:06:34.1270000+00:00</lastModDate>
        
        <creator>Priyanka Ganesan</creator>
        
        <creator>Senthil Kumar Jagatheesaperumal</creator>
        
        <creator>Matheshkumar P</creator>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <subject>ConvLSTM; GRU; keyframes; LSTM; sequential learning; sign language recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>People with a hearing impairment commonly use sign language for communication; however, they find it challenging to communicate with a hearing person who does not know sign language, and they normally require an intermediary to act as a translator in order to express their thoughts conveniently. To address this issue, this work aims to enhance their communication capability by eliminating the need for an intermediary, developing a sign language converter that uses a vision-based dynamic recognition strategy to convert continuous sign language into multimodal output. This work introduces a deep neural network based on convolutional long short-term memory (ConvLSTM) networks for real-time dynamic recognition of the gestures of impaired persons captured through cameras. The continuous sign language recognition (CSLR) investigations were deployed on the Chinese Sign Language dataset CSL-Daily and the Phoenix-2014 and Phoenix-2014T datasets, and performance comparisons were made among conventional LSTM, Gated Recurrent Unit (GRU), and ConvLSTM models. Experimental results show that, by integrating the proposed novel cognitive-assisted adaptive keyframe selection, the ConvLSTM network outperforms the other techniques, detecting sign actions with a better accuracy of 90% and a precision rate of 0.93, which allows the meaning of each sign sequence to be interpreted with ease. The proposed system could be easily implemented in modern learning management systems.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_52-Novel_Cognitive_Assisted_Adaptive_Frame_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compliance Framework for Personal Data Protection Law Standards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150751</link>
        <id>10.14569/IJACSA.2024.0150751</id>
        <doi>10.14569/IJACSA.2024.0150751</doi>
        <lastModDate>2024-07-31T12:06:34.1130000+00:00</lastModDate>
        
        <creator>Norah Nasser Alkhamsi</creator>
        
        <creator>Sultan Saud Alqahtani</creator>
        
        <subject>Personal data protection law (PDPL); framework; data management; data protection; privacy policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Personal data protection laws are crucial for protecting individual privacy in a data-driven world. To this end, the Kingdom of Saudi Arabia has published the Personal Data Protection Law (PDPL), which aims to empower individuals to manage and control their personal information more securely and effectively. However, data management ecosystems that process such data face challenges in applying PDPL directly, owing to the difficulty of translating legal provisions into a technological context. Furthermore, non-compliance with PDPL can result in financial, legal, and reputational risks. To address these challenges, this paper develops an approach for legal compliance with PDPL through a framework that analyses and translates legal terms into measurable data management standards. The framework guides data management ecosystems in implementing and complying with PDPL requirements and covers all integral parts of data management. To demonstrate the practical application of this approach, a case study utilized two advanced deep learning models, MARBERTv2 and AraELECTRA, to enhance the adherence of privacy policies on Saudi Arabian websites to PDPL requirements. The results are highly promising, with MARBERTv2 achieving a micro-average F1-score of 93.32% and AraELECTRA delivering solid performance at 92.46%. This underscores the effectiveness of deep learning models in facilitating PDPL compliance.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_51-Compliance_Framework_for_Personal_Data_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deployment of Secure Data Parameters Between Stock Inverters and Interfaces Using Command-Contamination-Stealth Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150750</link>
        <id>10.14569/IJACSA.2024.0150750</id>
        <doi>10.14569/IJACSA.2024.0150750</doi>
        <lastModDate>2024-07-31T12:06:34.0970000+00:00</lastModDate>
        
        <creator>Santosh Kumar Henge</creator>
        
        <creator>Sanjeev Kumar Mandal</creator>
        
        <creator>Ameya Madhukar Rane</creator>
        
        <creator>Megha Sharma</creator>
        
        <creator>Ravleen Singh</creator>
        
        <creator>S Anka Siva Phani Kumar</creator>
        
        <creator>Anusha Marouthu</creator>
        
        <subject>Robot-network (BOTNET); Module Management (MM); Role Management (RM); Detection Conceal and Prevention (DoCP); LAN-WAN-LAN transmission (LWL-T); Remote Level Command Executions (RLCE); Distributed Denial-of-Service (DDoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Security issues have a major impact on stock data, allowing stockholders (SHs) and stock-inverters (SIs) to predict and invert false assets and stock values. Security flaws and threats that let an attacker take over network devices enable the attacker to use one system to attack another. This study suggests test scenarios that regulate different BOTNETs, layered threshold-influenced data security parameters, and DDoS vulnerabilities for stock data integration and validation. To study the behavioral entry and exit points of SHs and SIs, it integrates three-tiered procedures with threshold-influenced data security criteria and data matrices. The first layer is framed with Role Management (RM), Remote Level of Command Executions (RLCE), LAN-WAN-LAN Transmission (LWL-T), and Detection of Conceal and Prevention (DoCP) environments. RM, RLCE, LWL-T, and DoCP are tuned with the threshold-influenced data security parameters that most strongly influence stock values. The second layer is framed with Module Management (MM), Command Module (ComM), Contamination Module (ConM), and Stealth Module (SM). The third layer is framed with expected scenarios and thresholds of the various vulnerabilities and threats that arise from DoS attacks and BOTNETs. All these layers are interconnected and integrated with behavioral factors of SHs and SIs. The vulnerabilities are tuned with SHs&#8217; and SIs&#8217; input data and then filtered with their behavioral matrices, with alerts generated according to the existing entries of the data. The influenced threshold metrics are tuned through ARIMA and LSTM for future analysis of stock values. The authentication mode synchronizes dual- and multi-authentication modes of execution, tuned to cross-verify investors&#8217; credentials.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_50-Deployment_of_Secure_Data_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Fusion of 3D U-Net-LSTM Models for Accurate Brain Tumor Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150749</link>
        <id>10.14569/IJACSA.2024.0150749</id>
        <doi>10.14569/IJACSA.2024.0150749</doi>
        <lastModDate>2024-07-31T12:06:34.0800000+00:00</lastModDate>
        
        <creator>Ravikumar Sajjanar</creator>
        
        <creator>Umesh D. Dixit</creator>
        
        <subject>Brain tumor segmentation; frost filter pre-processing; UNet architecture; LSTM; kaggle BRATS 2020 dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Accurate detection and segmentation of brain tumors are essential in tomography for effective diagnosis and treatment planning. This study presents advancements in 3D segmentation techniques using data from the Kaggle BRATS 2020 dataset. To enhance the reliability of brain tumor diagnosis, innovative approaches such as Frost filter-based preprocessing, the UNet segmentation architecture, and Long Short-Term Memory (LSTM) segmentation are employed. The methodology starts with data preprocessing using the Frost filter, which effectively reduces noise and enhances image clarity, thus improving segmentation accuracy. Subsequently, the UNet architecture is utilized to precisely segment brain tumor regions. UNet&#39;s ability to capture contextual information and its efficient use of skip connections contribute to accurately delineating tumor boundaries in three-dimensional space. Additionally, the temporal aspect of brain tumor progression is addressed by employing an LSTM network, which further increases segmentation accuracy. The LSTM algorithm integrates temporal patterns in sequential imaging data, enabling reliable segmentation of tumor presence and characteristics over time. By analyzing the ordered sequence of continuous MRI scans, the LSTM framework achieves more precise and adaptable tumor recognition. Evaluation results based on the Kaggle BRATS 2020 dataset demonstrate significant improvements in segmentation performance compared to previous methods. The proposed approach enhances the accuracy of tumor boundary delineation and the ability to classify tumor types and track temporal changes in tumor growth. The &quot;U-Net-LSTM&quot; method achieves an accuracy of 98.9% in segmentation tasks, showcasing its superior performance compared to other techniques. This method is implemented using Python, underscoring its efficacy in achieving high accuracy in segmentation tasks.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_49-Advanced_Fusion_of_3D_U_Net_LSTM_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Enhanced Edge Computing Task Scheduling Method Based on Blockchain and Task Cache</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150748</link>
        <id>10.14569/IJACSA.2024.0150748</id>
        <doi>10.14569/IJACSA.2024.0150748</doi>
        <lastModDate>2024-07-31T12:06:34.0670000+00:00</lastModDate>
        
        <creator>Cong Li</creator>
        
        <subject>Blockchain; task cache; edge computing; task scheduling; industrial internet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Aiming at edge computing nodes&#39; limited computing and storage capacity, a two-layer task scheduling model based on blockchain and task cache was proposed. High-similarity task results were cached in the edge cache pool, and the blockchain-assisted task caching model was combined with it to enhance system security. A genetic evolution algorithm was used to solve for the minimum cost attainable by the optimal scheduling model, and the genetic algorithm&#8217;s initialization and mutation operations were adjusted to improve the convergence rate. Compared with algorithms without cache pooling and without blockchain, the proposed joint blockchain and task caching scheduling model reduced the cost by 9.4% and 14.3%, respectively. As the capacity of the cache pool increased, the system cost gradually decreased: compared with a capacity of 3GB, the system cost at 10Gbit capacity was reduced by 10.6%. The system cost also decreased as the computing power of edge nodes increased: compared with edge nodes at a computing frequency of 8GHz, the cost of nodes at 18GHz was reduced by 36.4%. Therefore, the proposed edge computing task scheduling model ensures the security of task scheduling while reducing delay and control costs, providing a foundation for modern industrial task scheduling.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_48-Security_Enhanced_Edge_Computing_Task_Scheduling_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Q-learning Guided Grey Wolf Optimizer for UAV 3D Path Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150747</link>
        <id>10.14569/IJACSA.2024.0150747</id>
        <doi>10.14569/IJACSA.2024.0150747</doi>
        <lastModDate>2024-07-31T12:06:34.0500000+00:00</lastModDate>
        
        <creator>Binbin Tu</creator>
        
        <creator>Fei Wang</creator>
        
        <creator>Xiaowei Han</creator>
        
        <creator>Xibei Fu</creator>
        
        <subject>Q-learning; grey wolf optimizer; laplace crossover; 3D path planning; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Path planning is a critical component of autonomous unmanned aerial vehicle (UAV) navigation systems, yet traditional and sampling-based methods encounter limitations in three-dimensional (3D) path planning. This paper offers a structured review of applicable algorithms in 3D space, introduces the state-of-the-art techniques, and addresses cutting-edge challenges associated with UAV heuristic decomposition methods. Furthermore, we develop a Q-learning guided grey wolf optimizer (QGWO) to tackle the UAV 3D path planning problem in complex scenarios. QGWO incorporates two exploration strategies from the aquila optimizer into the grey wolf optimizer, enhancing its capacity to escape local optima and utilize the population for broader exploration. Q-learning guides the search process, enabling the algorithm to store iterative information, accelerate convergence, and balance exploration and exploitation. Additionally, Laplace crossover perturbs the positions of the α and β wolves, preventing the algorithm from becoming trapped in local optima. To validate its effectiveness, QGWO and ten advanced heuristic algorithms were tested in 3D path planning simulations across six terrain scenarios of varying complexity. Experimental results demonstrate that QGWO achieves optimal cost metrics, outperforming the original grey wolf optimizer by up to 1.34% and significantly surpassing other algorithms with a 70.92% reduction in standard deviation. This highlights the effectiveness and robustness of QGWO in 3D path planning for UAV. Moreover, the Wilcoxon rank sum test shows that the null hypothesis is rejected in 98.33% of cases, confirming the statistical superiority of the proposed QGWO.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_47-Q_learning_Guided_Grey_Wolf_Optimizer_for_UAV_3D_Path_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Language-Interacted Hyper-Modality Representation for Multimodal Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150746</link>
        <id>10.14569/IJACSA.2024.0150746</id>
        <doi>10.14569/IJACSA.2024.0150746</doi>
        <lastModDate>2024-07-31T12:06:34.0030000+00:00</lastModDate>
        
        <creator>Lei Pan</creator>
        
        <creator>WenLong Liu</creator>
        
        <subject>Multimodal; multimodal fusion; sentiment analysis; adaptive language-interacted</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In an attempt to mitigate the problem of neglecting unimodal information and incorporating emotionally unrelated data during the fusion process of multimodal representation, this study presents an Adaptive Language-interacted Representation (ALR) model. Initially, the unimodal representation module is utilized to obtain a minimal but adequate representation of the unimodal information. Subsequently, we acknowledge that the video and audio modalities may contain sentiment data that is not relevant. To address this issue, a hyper-modality representation is constructed to mute the impact of irrelevant sentimental information, achieved through interaction among text, video, and audio features. Finally, the hyper-modality representation is integrated through a multimodal fusion module, enabling more efficient multimodal sentiment analysis. On the CMU-MOSEI, MELD, and IEMOCAP datasets, the model outperforms the majority of existing sentiment analysis models.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_46-Adaptive_Language_Interacted_Hyper_Modality_Representation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning Driven Self-Adaptation in Hypervisor-Based Cloud Intrusion Detection Systems (RLDAC-IDS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150745</link>
        <id>10.14569/IJACSA.2024.0150745</id>
        <doi>10.14569/IJACSA.2024.0150745</doi>
        <lastModDate>2024-07-31T12:06:33.9870000+00:00</lastModDate>
        
        <creator>Alaa A. Qaffas</creator>
        
        <subject>Cloud security; intrusion detection system; adaptive framework; hypervisor-based IDS; self-adaptation; emerging threat detection; reinforcement learning; behavioral analysis; cloud computing; intelligent intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>With the rise in cloud adoption, securing dynamic virtual environments remains a significant challenge. Traditional Intrusion Detection Systems (IDS) have addressed security concerns in the cloud mostly through static detection rules, without the adaptation capabilities needed to identify new attack vectors. To overcome this limitation, a self-optimizing framework called Reinforcement Learning-Driven Self-Adaptation in Hypervisor-Based Cloud Intrusion Detection Systems (RLDAC-IDS) is suggested. RLDAC-IDS leverages the inherent visibility of hypervisors into virtualized resources to gain valuable insights into cloud operations and threats. Its key components include real-time behavioral analysis, anomaly detection, and identification of known threats. The innovation of RLDAC-IDS lies in the incorporation of reinforcement learning to continuously improve detection rules and responses. RLDAC-IDS exemplifies intelligent intrusion detection through its ability to learn and adapt to new threat patterns autonomously. Through continuous optimization and intelligent intrusion detection techniques, the system progresses to tackle emerging attack vectors while minimizing false alarms. Moreover, RLDAC-IDS is highly adaptive and can easily adjust to the changing conditions of cloud environments. In summary, RLDAC-IDS represents a major advancement in cloud IDS through its adaptive, self-learning approach, overcoming the limitations of existing solutions to provide robust protection amidst the complexities and dynamics of modern virtualized settings.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_45-Reinforcement_Learning_Driven_Self_Adaptation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Children&#39;s Expression Recognition Based on Multi-Scale Asymmetric Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150744</link>
        <id>10.14569/IJACSA.2024.0150744</id>
        <doi>10.14569/IJACSA.2024.0150744</doi>
        <lastModDate>2024-07-31T12:06:33.9730000+00:00</lastModDate>
        
        <creator>Pengfei Wang</creator>
        
        <creator>Xiugang Gong</creator>
        
        <creator>Qun Guo</creator>
        
        <creator>Guangjie Chang</creator>
        
        <creator>Fuxiang Du</creator>
        
        <subject>Children&#39;s expression recognition; convolutional neural network; multi-scale asymmetric convolutional neural network; asymmetric convolutional layers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper proposes a multi-scale asymmetric convolutional neural network (MACNN), specifically designed to tackle the challenges encountered by traditional convolutional neural networks in the realm of children&#39;s facial expression recognition. MACNN addresses problems like low accuracy from facial expression changes, poor generalization across datasets, and inefficiency in traditional convolution operations. The model introduces a multi-scale convolution layer for capturing diverse features, enhancing feature extraction and recognition accuracy. Additionally, an asymmetric convolutional layer is integrated to learn directional features, improving robustness and generalization in facial expression analysis. Post-training, this layer can revert to a standard square convolutional layer, optimizing efficiency for child expression recognition. Experimental results indicate that the proposed algorithm achieves a recognition accuracy of 63.35% on a self-constructed children&#39;s expression dataset, under the configuration of a GPU Tesla P100 with 16GB video memory. This performance exceeds all comparative algorithms and maintains efficient recognition. Furthermore, the algorithm attains a recognition accuracy of 78.26% on the extensive natural environment expression dataset RAF-DB, highlighting its robustness, generalization capability, and potential for practical application.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_44-Childrens_Expression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study on Crude Oil Price Forecasting in Morocco Using Advanced Machine Learning and Ensemble Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150743</link>
        <id>10.14569/IJACSA.2024.0150743</id>
        <doi>10.14569/IJACSA.2024.0150743</doi>
        <lastModDate>2024-07-31T12:06:33.9570000+00:00</lastModDate>
        
        <creator>Hicham BOUSSATTA</creator>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <subject>Crude oil prices; machine learning; ensemble model; economic forecasts; energy sector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study employs a range of machine learning models to forecast crude oil prices in Morocco, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, ARIMA, Prophet and Gradient Boosting. Among these, SVR demonstrated the highest accuracy with an RMSE of 1.414. Additionally, the ARIMA and Prophet models were evaluated, yielding RMSEs of 2.46 and 1.41, respectively. An ensemble model, which combines predictions from all the individual models, achieved an RMSE of 2.144, indicating robust performance. Projections for 2024-2027 show a rising trend in crude oil prices, with the SVR model forecasting 21.91 MAD in 2027, and the ensemble model predicting 14.47 MAD. These findings underscore the effectiveness of ensemble learning and advanced machine learning techniques in producing reliable economic forecasts, offering valuable insights for stakeholders in the energy sector.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_43-A_Comprehensive_Study_on_Crude_Oil_Price_Forecasting_in_Morocco.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Customer Experience Through Arabic Aspect-Based Sentiment Analysis of Saudi Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150742</link>
        <id>10.14569/IJACSA.2024.0150742</id>
        <doi>10.14569/IJACSA.2024.0150742</doi>
        <lastModDate>2024-07-31T12:06:33.9400000+00:00</lastModDate>
        
        <creator>Razan Alrefae</creator>
        
        <creator>Revan Alqahmi</creator>
        
        <creator>Munirah Alduraibi</creator>
        
        <creator>Shatha Almatrafi</creator>
        
        <creator>Asmaa Alayed</creator>
        
        <subject>Customer experience; Arabic natural language processing; sentiment analysis; Arabic Aspect-Based Sentiment Analysis; online reviews; review analytic; e-commerce; business owners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Big brands thrive in today&#39;s competitive marketplace by focusing on customer experience through product reviews. Manual analysis of these reviews is labor-intensive, necessitating automated solutions. This paper conducts aspect-based sentiment analysis on Saudi dialect product reviews using machine learning and NLP techniques. Addressing the lack of datasets, we create a unique dataset for Aspect-Based Sentiment Analysis (ABSA) in Arabic, focusing on the Saudi dialect, comprising two manually annotated datasets of 2000 reviews each. We experiment with feature extraction techniques such as Part-of-Speech tagging (POS), Term Frequency-Inverse Document Frequency (TF-IDF), and n-grams, applying them to machine learning algorithms including Support Vector Machine (SVM), Random Forest (RF), Naive Bayes (NB), and K-Nearest Neighbors (KNN). Our results show that for electronics reviews, RF with TF-IDF, POS tagging, and tri-grams achieves 86.26% accuracy, while for clothes reviews, SVM with TF-IDF, POS tagging, and bi-grams achieves 86.51% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_42-Enhancing_Customer_Experience_Through_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Resnet Models in UNet Classifier for Mapping Oil Palm Plantation Area with Semantic Segmentation Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150741</link>
        <id>10.14569/IJACSA.2024.0150741</id>
        <doi>10.14569/IJACSA.2024.0150741</doi>
        <lastModDate>2024-07-31T12:06:33.9270000+00:00</lastModDate>
        
        <creator>Fepri Putra Panghurian</creator>
        
        <creator>Hady Pranoto</creator>
        
        <creator>Edy Irwansyah</creator>
        
        <creator>Fabian Surya Pramudya</creator>
        
        <subject>Deep learning; UNet; ResNet; oil palm; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In 2023, industrial oil palm plantations in Indonesia grew by 116,000 hectares, an increase of 54% from the previous year. Oil palm is one of the main agricultural commodities in Indonesia, with a significant contribution to the national economy. However, manually mapping and monitoring oil palm land is still a big challenge: the manual process is labor-intensive, time-consuming, and costly, and the accuracy of the resulting data is often inadequate, especially in identifying the actual crop condition and land area. Remote sensing (RS) provides extensive and comprehensive data on oil palm land and crop conditions through satellite and drone imagery. In this research, a method of mapping oil palm plantations is proposed using medium-resolution Sentinel satellite imagery data that is widely available and has adequate spatial resolution. In addition, an artificial intelligence (AI) method with deep learning (DL) is implemented using the UNet classifier, which previous studies have shown to provide sufficient accuracy. The research develops DL models/architectures with ResNet-34 and ResNet-50 backbones that are expected to further improve the accuracy of segmentation results for use in oil palm land mapping. The research concluded that semantic segmentation using the UNet classifier with ResNet-34 and ResNet-50 backbones produced F1 scores of 0.89 and 0.922, respectively. The accuracy obtained at the inference/deployment stage was 88.8% with an inference duration of 10 minutes for the ResNet-34 backbone, and 91.8% with an inference duration of 20 minutes for ResNet-50.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_41-Comparison_of_Resnet_Models_in_UNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Reading Habits Fusion Adversarial Network for Multi-Modal Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150740</link>
        <id>10.14569/IJACSA.2024.0150740</id>
        <doi>10.14569/IJACSA.2024.0150740</doi>
        <lastModDate>2024-07-31T12:06:33.9100000+00:00</lastModDate>
        
        <creator>Bofan Wang</creator>
        
        <creator>Shenwu Zhang</creator>
        
        <subject>Multimodal fake news detection; feature extraction; feature fusion; consistency alignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Existing multimodal fake news detection methods face three challenges: the lack of extraction of implicit shared features, shallow integration of multimodal features, and insufficient attention to the inconsistency of features across different modalities. To address these challenges, a multi-reading habits fusion adversarial network for multimodal fake news detection is proposed. In this model, to mitigate the influence of feature changes due to events and emotions, a dual discriminator based on domain adversarial training is built to extract invariant common features. Inspired by the diverse reading habits of individuals, three fundamental reading habits are identified, and a multi-reading habits fusion layer is introduced to learn the interdependencies among the multimodal feature representations of the news. To investigate the semantic inconsistencies of different modalities in news, a similarity constraint reasoning layer is proposed, which first explores the semantic consistency between image descriptions and unimodal features, and then delves into the semantic discrepancies between unimodal and multimodal features. Extensive experimentation has been carried out on the multimodal datasets of Weibo and Twitter. The outcomes indicate that the proposed model surpasses the performance of mainstream advanced benchmarks on both platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_40-A_Multi_Reading_Habits_Fusion_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fire Evacuation Path Planning Based on Improved MADDPG (Multi-Agent Deep Deterministic Policy Gradient) Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150738</link>
        <id>10.14569/IJACSA.2024.0150738</id>
        <doi>10.14569/IJACSA.2024.0150738</doi>
        <lastModDate>2024-07-31T12:06:33.8770000+00:00</lastModDate>
        
        <creator>Qiong Huang</creator>
        
        <creator>Ying Si</creator>
        
        <creator>Haoyu Wang</creator>
        
        <subject>Fire evacuation path; congestion degree; dangerous grid; multi-agent; Multi-Agent Deep Deterministic Policy Gradient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The lack of a scientific and reasonable optimal evacuation path planning scheme is one of the main causes of casualties in fire accidents. In addition to the high temperature and harmful smoke of the fire environment, the crowding caused by changing crowd positions during evacuation also affects the evacuation effect. Therefore, by improving the Multi-Agent Deep Deterministic Policy Gradient algorithm, an AMADDPG (Adjacency Multi-Agent Deep Deterministic Policy Gradient) model suitable for fire evacuation is proposed. First, the dangerous grid area is defined, and the influence of congestion degree and the nearest exit is considered at the same time. The learning framework of &quot;distributed execution and centralized local learning&quot; is adopted to realize experience sharing among neighboring agents, improving the learning efficiency and evacuation effect of the model. The experimental results show that the model can adapt well to the complex and dynamic fire environment, achieve optimal path planning within 30, and keep the degree of congestion on the evacuation path within 0.5, which achieves the safe evacuation goal. Meanwhile, compared with the MADDPG algorithm, the model has obvious advantages in training efficiency and stability, and it has good application value.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_38-Fire_Evacuation_Path_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Perceptions of Its Usefulness and Ease of Use on Learning Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150739</link>
        <id>10.14569/IJACSA.2024.0150739</id>
        <doi>10.14569/IJACSA.2024.0150739</doi>
        <lastModDate>2024-07-31T12:06:33.8770000+00:00</lastModDate>
        
        <creator>Linda Khoo Mei Sui</creator>
        
        <creator>Nurlisa Loke Abdullah</creator>
        
        <creator>Subatira Balakrishnan</creator>
        
        <creator>Wan Sofiah Meor Osman</creator>
        
        <subject>Learning management system; perceptions; usefulness; ease of use</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The importance of the Learning Management System (LMS) has been discussed in recent years, as it is crucial for students to manage this tool for their learning. The study&#39;s objective was to ascertain whether learners believe the LMS satisfies their learning goals and to bridge the gap between the growing body of research on learner-centered instructional design and LMS design. A survey of 528 students was carried out to collect the data. The results revealed that most of the learners agreed that the LMS is a useful tool to enhance their learning, showing that the LMS can make their learning better and more effective. The study&#39;s conclusions could serve as a guide for the university&#39;s administration as it adopts pertinent digital technologies, with the goal of creating an efficient implementation strategy that would enhance service delivery. Universities and colleges would benefit from this established approach in selecting the best learning management system (LMS) to meet their diverse needs. It will also act as a guide for developers who want to create an assessment system.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_39-Students_Perceptions_of_Its_Usefulness_and_Ease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble IDO Method for Outlier Detection and N2O Emission Prediction in Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150737</link>
        <id>10.14569/IJACSA.2024.0150737</id>
        <doi>10.14569/IJACSA.2024.0150737</doi>
        <lastModDate>2024-07-31T12:06:33.8470000+00:00</lastModDate>
        
        <creator>Ahmad Rofiqul Muslikh</creator>
        
        <creator>Pulung Nurtantio Andono</creator>
        
        <creator>Aris Marjuni</creator>
        
        <creator>Heru Agus Santoso</creator>
        
        <subject>Ensemble framework; outlier; detection; N2O emission; isolation forest; DBSCAN; one-class SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Nitrous oxide (N2O) emissions from agricultural activities significantly contribute to climate change, necessitating accurate predictive models to inform mitigation strategies. This study proposes an ensemble framework combining Isolation Forest, DBSCAN, and One-Class SVM to enhance outlier detection in N2O emission datasets. The dataset, consisting of 2,246 rows and 21 columns, was preprocessed to address missing values and normalize data. Outlier detection was performed using each method individually, followed by integration through hard and soft voting techniques. The results revealed that Isolation Forest identified 113 outliers, DBSCAN detected 1,801, and One-Class SVM found 118. Hard voting identified 165 outliers, while soft voting detected 734, ensuring a refined dataset for subsequent modeling. The ensemble approach improved the accuracy of the XGBoost model for N2O emission prediction. The best results were obtained using Random Search Cross Validation hyperparameter tuning with a test size of 20%, achieving a CV MSE of 0.0215, MSE of 0.0144, RMSE of 0.1200, MAE of 0.0723, and an R&#178; of 0.6750. This study demonstrates the effectiveness of combining multiple outlier detection methods to enhance data quality and model performance, supporting more reliable predictions of N2O emissions.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_37-Ensemble_IDO_Method_for_Outlier_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Different Models for Traffic Signs Under Weather Conditions Using Image Detection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150736</link>
        <id>10.14569/IJACSA.2024.0150736</id>
        <doi>10.14569/IJACSA.2024.0150736</doi>
        <lastModDate>2024-07-31T12:06:33.8300000+00:00</lastModDate>
        
        <creator>Amal Alshahrani</creator>
        
        <creator>Leen Alshrif</creator>
        
        <creator>Fatima Bajawi</creator>
        
        <creator>Razan Alqarni</creator>
        
        <creator>Reem Alharthi</creator>
        
        <creator>Haneen Alkurbi</creator>
        
        <subject>Traffic signs; detection; classification; YOLO; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study focuses on enhancing the accuracy of traffic sign detection systems for self-driving vehicles. With the increasing proliferation of autonomous vehicles, reliable detection and interpretation of traffic signs is crucial for road safety and efficiency. The primary goal of this research was to improve the performance of traffic sign detection, particularly in identifying unfamiliar signs and dealing with adverse weather conditions. We obtained a dataset of 3,480 images from Roboflow and utilized deep learning techniques, including Convolutional Neural Networks (CNNs) and algorithms such as YOLO and the Visual Geometry Group (VGG) network. Unlike previous studies that focused on a single version of YOLO, this study conducted a comparative analysis of different deep-learning models, including YOLOv5, YOLOv8, and VGG-16. The study results show promising outcomes, with YOLOv5 achieving an accuracy of up to 94.2%, YOLOv8 reaching 95.3% accuracy, and VGG-16 outperforming the other techniques with an impressive 98.68% accuracy. These findings highlight the significant potential for future advancements in traffic sign detection systems, contributing to the ongoing efforts to enhance the safety and efficiency of autonomous driving technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_36-Comparison_of_Different_Models_for_Traffic_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Secure Access Authorization Policy for Cloud Storage Resources Based on Fuzzy Searchable Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150735</link>
        <id>10.14569/IJACSA.2024.0150735</id>
        <doi>10.14569/IJACSA.2024.0150735</doi>
        <lastModDate>2024-07-31T12:06:33.8170000+00:00</lastModDate>
        
        <creator>Jun Fu</creator>
        
        <subject>Fuzzy search encryption; cloud storage; security access; CP-ABE (Ciphertext-Policy Attribute-Based Encryption); access control; authorization policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>When fuzzy searchable encryption is applied to cloud storage resources, keywords are allowed a certain range of variation: even with slight differences in spelling, word order, or spacing between words, the correct data can still be matched. However, this alone does not provide fine-grained access control (FGAC). Consequently, to satisfy the security demands of cloud storage assets and the ease of resource retrieval through fuzzy searchable encryption, CP-ABE employs attribute and policy definitions to introduce a novel, effective security access authorization approach for cloud storage assets utilizing fuzzy searchable encryption technology. Cloud storage resources are encrypted after keyword preprocessing through initialization, file encryption and decryption, index generation and encryption, search, and other steps; a wildcard-based method is used to generate indexes; and a Bloom filter is used to generate security traps, achieving Paillier-based asymmetric fuzzy searchable encryption of resources. In combination with the CP-ABE-based access control method, authorized users are assigned private keys by the authorization center to ensure that unauthorized users cannot obtain cloud storage resources, completing the fuzzy searchable encryption access authorization of cloud storage resources. The experiments show that the search index generation of this strategy greatly reduces resource utilization and effectively improves fuzzy search speed. Moreover, the combination of fuzzy searchable encryption and CP-ABE can better secure cloud storage resources.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_35-An_Efficient_and_Secure_Access_Authorization_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Deep Learning on Retinal Images to Classify the Severity of Diabetic Retinopathy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150734</link>
        <id>10.14569/IJACSA.2024.0150734</id>
        <doi>10.14569/IJACSA.2024.0150734</doi>
        <lastModDate>2024-07-31T12:06:33.8000000+00:00</lastModDate>
        
        <creator>Shereen A. El-aal</creator>
        
        <creator>Rania Salah El-Sayed</creator>
        
        <creator>Abdulellah Abdullah Alsulaiman</creator>
        
        <creator>Mohammed Abdel Razek</creator>
        
        <subject>Deep learning; diabetic retinopathy (DR); Gaussian Blur Filter; support vector machine (SVM); color space; performance evaluations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Diabetic retinopathy (DR) is a leading cause of blindness worldwide, particularly among working-age individuals. With the increasing prevalence of diabetes, there is an urgent need to address the public health burden posed by DR. This research paper aims to develop a clinical decision support approach that integrates automated DR detection and classification of the grade of severity of DR. A three-stage deep learning model for DR detection is proposed. First, preprocessing, image enhancement, and augmentation of the DR images are performed using color space transformations and a filtering technique: BGR to RGB, RGB to LAB, and a Gaussian Blur Filter. Secondly, feature extraction and representation learning are based on a CNN with various layers. Thirdly, classification is based on an SVM. The implementation and evaluation of the proposed model on a dataset containing five stages of DR are essential steps towards validating its performance and assessing its potential for clinical applications. Through thorough dataset preprocessing, model training, performance analysis, comparison with baseline methods, and generalization tests, we gain insight into the model&#39;s classification and staging capabilities. This research makes a significant contribution to the field of DR severity detection, ultimately leading to enhanced diagnostic capabilities. The developed models demonstrated an accuracy rate of 94.72%, indicating their efficacy in accurately assessing the severity of the condition.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_34-Using_Deep_Learning_on_Retinal_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-Supervised Clustering Algorithms Through Active Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150733</link>
        <id>10.14569/IJACSA.2024.0150733</id>
        <doi>10.14569/IJACSA.2024.0150733</doi>
        <lastModDate>2024-07-31T12:06:33.7830000+00:00</lastModDate>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Walid Atwa</creator>
        
        <subject>Semi-supervised; pairwise constraints; affinity propagation; active learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Pairwise constraints improve clustering performance in constraint-based clustering problems, especially since they are readily obtainable. However, randomly chosen constraints may be adverse and reduce accuracy. To address this problem, we replace random selection with an active learning strategy that identifies and selects the most informative constraints. We provide a semi-supervised selective affinity propagation clustering approach with active constraints, which combines the affinity propagation (AP) clustering algorithm with prior information to improve semi-supervised clustering performance. Based on the neighborhood concept, we select the most informative constraints, where neighborhoods include labelled examples of various clusters. The experimental results on eight real datasets demonstrate that the proposed method outperforms other baseline methods and can improve clustering performance significantly.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_33-Semi_Supervised_Clustering_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Large-Scale Image Indexing and Retrieval Methods: A PRISMA-Based Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150732</link>
        <id>10.14569/IJACSA.2024.0150732</id>
        <doi>10.14569/IJACSA.2024.0150732</doi>
        <lastModDate>2024-07-31T12:06:33.7700000+00:00</lastModDate>
        
        <creator>Abdelkrim Saouabe</creator>
        
        <creator>Said Tkatek</creator>
        
        <creator>Hicham Oualla</creator>
        
        <creator>Carlos SOSA Henriquez</creator>
        
        <subject>Image indexing; image retrieval; similarity; PRISMA; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Large-scale image indexing and retrieval are pivotal in artificial intelligence, especially within computer vision, for efficiently organizing and accessing extensive image databases. This systematic literature review employs the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to thoroughly analyze and synthesise the current research landscape in this domain. Through meticulous research and a stringent selection process, this study uncovers significant trends, pioneering methodologies, and ongoing challenges in large-scale image indexing and retrieval. Key findings reveal a growing adoption of deep learning techniques, the integration of multimodal data to improve retrieval accuracy, and persistent challenges related to scalability and real-time processing. These insights offer a valuable resource for researchers and practitioners striving to enhance the efficiency and effectiveness of image indexing and retrieval systems.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_32-Large_Scale_Image_Indexing_and_Retrieval_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Oversampling Social Media-Sourced Image Datasets for Better Deep Learning Classification of Natural Disaster Damage Levels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150731</link>
        <id>10.14569/IJACSA.2024.0150731</id>
        <doi>10.14569/IJACSA.2024.0150731</doi>
        <lastModDate>2024-07-31T12:06:33.7370000+00:00</lastModDate>
        
        <creator>Nicholas Lau Kheng Seng</creator>
        
        <creator>Goh Wei Wei</creator>
        
        <creator>Tan Ee Xion</creator>
        
        <subject>Deep learning; image processing; oversampling; image data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>People in areas affected by natural disasters who use social media websites such as Facebook, Twitter (also known as “X”) and Instagram tend to post images of damage to their surroundings. These social media sites have become vital sources of immediate and highly available data for providing situational awareness and organisation for natural disaster response. A few previous attempts at classifying the level of natural disaster damage in these images using image processing techniques noted the challenge of producing robust classification models due to overfitting caused by a lack of observations and data imbalance in annotated datasets. This article presents an attempt to improve a data-level training strategy for deep learning models such as VGG16, ResNetV2 and EfficientNetV2, used to estimate the level of disaster damage in images, by training them with data generated using image data augmentation with data balancing, oversampling up to eight times, and combining the oversampled image data collections. The F1 score achieved for classifying damage on earthquake images and images from the Hurricane Matthew data collection by training EfficientNetV2 on a generated dataset made with a combination of oversampled data surpassed previous benchmark results. These results show that using data balancing and oversampling on the dataset prior to training deep learning models results in increased robustness.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_31-Oversampling_Social_Media_Sourced_Image_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DGA Domain Name Detection and Classification Using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150730</link>
        <id>10.14569/IJACSA.2024.0150730</id>
        <doi>10.14569/IJACSA.2024.0150730</doi>
        <lastModDate>2024-07-31T12:06:33.7070000+00:00</lastModDate>
        
        <creator>Ranjana B Nadagoudar</creator>
        
        <creator>M Ramakrishna</creator>
        
        <subject>Botnet; cyber security; Domain Generation Algorithms (DGAs); gated recurrent unit; Domain Name System (DNS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In today&#39;s cyber environment, modern botnets and malware are increasingly employing domain generation mechanisms to circumvent conventional detection solutions reliant on blacklisting or statistical methods for malicious domains. These outdated methods prove inadequate against algorithmically generated domain names, presenting significant challenges for cyber security. Domain Generation Algorithms (DGAs) have become essential tools for many malware families, allowing them to create numerous DGA domain names to establish communication with C&amp;C servers. Consequently, detecting such malware has become a formidable task in cyber security. Traditional approaches to domain name detection rely heavily on manual feature engineering and statistical analysis, with classifiers designed to differentiate between legitimate and DGA domain names. In this study, we propose a novel approach to classify and detect algorithmically generated domain names. Deep learning architectures, including LSTM, RNN, and GRU, are trained and evaluated for their effectiveness in distinguishing between legitimate and malicious domain names. The performance of each model is evaluated using standard metrics such as precision, recall, and F1-score. The findings of this research have significant implications for cyber security defense strategies. Our experimental results illustrate that the proposed model outperforms current state-of-the-art methods in both DGA domain name classification and detection, achieving 99% accuracy for DGA classification. By integrating additional feature extraction and knowledge-based methods, our proposed model surpasses existing models. The experimental outcomes suggest that our proposed gated recurrent unit model can achieve 99% accuracy, a 94% recall rate, and a 98% F1-score for the detection and classification of DGA-generated domain names.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_30-DGA_Domain_Name_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain Framework for Academic Certificates Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150729</link>
        <id>10.14569/IJACSA.2024.0150729</id>
        <doi>10.14569/IJACSA.2024.0150729</doi>
        <lastModDate>2024-07-31T12:06:33.6900000+00:00</lastModDate>
        
        <creator>Ruqaya Abdelmagid</creator>
        
        <creator>Mohamed Abdelsalam</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <subject>Academic certificates; tampering; security; blockchain; Hyperledger Fabric; Ethereum; channels; nodes; peers; Chaincode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper proposes a framework to combat academic certificate fraud by implementing a blockchain network. A permissioned Hyperledger Fabric network is deployed to store students’ information and to control access, guaranteeing the system&#39;s security. The paper discusses several studies that introduce variant solutions to the academic certificate tampering problem using blockchain technology. It finds Hyperledger Fabric to be secure and performant, with a higher TPS than Bitcoin and Ethereum, although latency increases with the number of participants.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_29-A_Blockchain_Framework_for_Academic_Certificates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Esophageal Cancer Diagnosis: A Deep Learning-Based Method in Endoscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150728</link>
        <id>10.14569/IJACSA.2024.0150728</id>
        <doi>10.14569/IJACSA.2024.0150728</doi>
        <lastModDate>2024-07-31T12:06:33.6770000+00:00</lastModDate>
        
        <creator>Shincy P Kunjumon</creator>
        
        <creator>S Felix Stephen</creator>
        
        <subject>Deep learning; esophagus cancer; transfer learning; endoscopic images; inception ResNet V2; fine tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Esophageal cancer (EC) is a severe and increasingly common disease caused by uncontrolled cell growth in the esophagus. It is the sixth leading cause of cancer-related deaths worldwide. The traditional methods for the diagnosis of EC are not only time-consuming but also suffer from inconsistencies due to human factors such as experience and fatigue. This paper proposes a deep learning (DL) approach for the detection of EC from endoscopic images to improve efficiency and accuracy. The study utilizes an endoscopic image dataset of 2000 images evenly split into cancerous and non-cancerous cases. After image preprocessing and augmentation, these images are fed into the proposed Inception ResNet V2 model. The extracted features are processed by the final classification layers to produce class probabilities. The simulation results revealed that the suggested model attained an accuracy of 98.50%, a precision of 97.50%, a recall of 98.75% and an F1 score of 98.00% after fine-tuning. These results underscore the model&#39;s capability to accurately identify EC, minimizing false positives and enhancing diagnostic reliability. The proposed DL framework enables automated EC detection, promising advancements in clinical workflows and patient care.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_28-Revolutionizing_Esophageal_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Advances in Medical Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150727</link>
        <id>10.14569/IJACSA.2024.0150727</id>
        <doi>10.14569/IJACSA.2024.0150727</doi>
        <lastModDate>2024-07-31T12:06:33.6600000+00:00</lastModDate>
        
        <creator>Loan Dao</creator>
        
        <creator>Ngoc Quoc Ly</creator>
        
        <subject>Medical Image Classification (MIC); Artificial Intelligence (AI); Vision Transformer (ViT); Vision-Language Model (VLM); eXplainable AI (XAI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Medical image classification is crucial for diagnosis and treatment, benefiting significantly from advancements in artificial intelligence. The paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models like Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches with Vision-Language Models. These models tackle the issue of limited labeled data, and enhance and explain predictive results through Explainable Artificial Intelligence.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_27-Recent_Advances_in_Medical_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Optimizing Multifactor Correction in Fatigue Life Prediction and Reliability Evaluation of Structural Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150726</link>
        <id>10.14569/IJACSA.2024.0150726</id>
        <doi>10.14569/IJACSA.2024.0150726</doi>
        <lastModDate>2024-07-31T12:06:33.6430000+00:00</lastModDate>
        
        <creator>Yi Zhang</creator>
        
        <subject>Multi-factor Bayesian theory correction; structural components; fatigue life; reliability; Bayesian theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Multi-factor correction is optimized for fatigue life prediction and reliability evaluation of structural components. Reliability evaluation based on optimized Bayesian theory is carried out to improve the efficiency of fatigue life prediction and reliability evaluation of structural components. The research results indicate that the crack propagation length increases with loading time. The average probability density of the modified method is 3.628, while that of the traditional fracture mechanics model is 1.242. The prediction accuracy of the multi-factor modified crack propagation prediction model exceeds that of the traditional fracture mechanics model and is consistent with the experimental results. The crack propagation prediction model based on multi-factor correction can therefore ensure prediction accuracy. The reliability of the model is evaluated: the average prediction accuracy over multiple sets of data exceeds 90%. This research method helps predict the fatigue life of structural components and evaluate their reliability, ensuring the safe operation of construction machinery.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_26-Application_of_Optimizing_Multifactor_Correction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Facial Expression Recognition Method Based on Improved VGG19 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150725</link>
        <id>10.14569/IJACSA.2024.0150725</id>
        <doi>10.14569/IJACSA.2024.0150725</doi>
        <lastModDate>2024-07-31T12:06:33.6270000+00:00</lastModDate>
        
        <creator>Lihua Bi</creator>
        
        <creator>Shenbo Tang</creator>
        
        <creator>Canlin Li</creator>
        
        <subject>Facial expression recognition; deep learning; VGG19 model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>With the increasing demand for human-computer interaction and the development of emotional computing technology, facial expression recognition has become a major focus in research. In this paper, an improved VGG19 network model is proposed by incorporating enhancement strategies, and the facial expression recognition process using the improved VGG19 model is described. We validated the model on the FER2013 and CK+ datasets and conducted comparative experiments on facial expression recognition accuracy between the improved VGG19 and other classic models, including the original VGG19. Instance tests were also performed, using probability histograms to reflect the effectiveness of expression recognition. These experiments and tests demonstrate the superiority, as well as the applicability and stability, of the improved VGG19 model for facial expression recognition.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_25-A_Facial_Expression_Recognition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel and Refined Contactless User Feedback System for Immediate On-Site Response Collection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150724</link>
        <id>10.14569/IJACSA.2024.0150724</id>
        <doi>10.14569/IJACSA.2024.0150724</doi>
        <lastModDate>2024-07-31T12:06:33.6130000+00:00</lastModDate>
        
        <creator>Harold Harrison</creator>
        
        <creator>Mazlina Mamat</creator>
        
        <creator>Farrah Wong</creator>
        
        <creator>Hoe Tung Yew</creator>
        
        <subject>Contactless; human-computer interaction; Internet of Things; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper introduces a Contactless User Feedback System (CUFS) that provides an innovative solution for capturing user feedback through hand gestures. It comprises a User Feedback Device (UFD), a mobile application, and a cloud database. The CUFS operates through a structured sequence, guiding users through a series of questions displayed on an LCD. Using the Pi Camera V2 for contactless hand shape capture, users can express feedback through recognized hand signs. A live video feed enhances user accuracy, while secure data transmission to a database ensures comprehensive feedback collection, including timestamp, date, location, and a unique identifier. A mobile application offers real-time oversight for administrators, presenting facility status insights, data validation outcomes, and customization options for predefined feedback categories. This study also identifies and strategically addresses challenges in image quality, responsiveness, and data validation to enhance the CUFS&#39;s overall performance. Innovations include optimized lighting for superior image quality, a parallel multi-threading approach for improved responsiveness, and a data validation mechanism on the server side. The refined CUFS demonstrates recognition accuracies consistently surpassing 93%, validating the effectiveness of these improvements. This paper presents a novel and refined CUFS that combines hardware and software components, contributing significantly to the advancement of contactless human-computer interaction and Internet of Things-based systems.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_24-A_Novel_and_Refined_Contactless_User_Feedback_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Student Performance Using RFECV-RF for Feature Selection and Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150723</link>
        <id>10.14569/IJACSA.2024.0150723</id>
        <doi>10.14569/IJACSA.2024.0150723</doi>
        <lastModDate>2024-07-31T12:06:33.5970000+00:00</lastModDate>
        
        <creator>Abdellatif HARIF</creator>
        
        <creator>Moulay Abdellah KASSIMI</creator>
        
        <subject>Student performance prediction; Recursive Feature Elimination (RFE); cross-validation; Random Forest (RF); feature selection; IBN ZOHR University</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Predicting student performance has become a strategic challenge for universities, essential for increasing student success rates, retention, and tackling dropout rates. However, the large volume of educational data complicates this task. Therefore, many research projects have focused on using Machine Learning techniques to predict student success. This study aims to propose a performance prediction model for students at IBN ZOHR University in Morocco. We employ a combination of Random Forest and Recursive Feature Elimination with Cross-Validation (RFECV-RF) for optimal feature selection. Using these features, we build classification models with several Machine Learning algorithms, including AdaBoost, Logistic Regression (LR), k-Nearest Neighbors (k-NN), Naive Bayes (NB), Support Vector Machines (SVM), and Decision Trees (DT). Our results show that the SVM model, using the 8 features selected by RFECV-RF, outperforms the other classifiers with an accuracy of 87%. This demonstrates the effectiveness and efficiency of our feature selection method and the superiority of the SVM model in predicting student performance.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_23-Predictive_Modeling_of_Student_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Privacy Federated Learning: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150722</link>
        <id>10.14569/IJACSA.2024.0150722</id>
        <doi>10.14569/IJACSA.2024.0150722</doi>
        <lastModDate>2024-07-31T12:06:33.5800000+00:00</lastModDate>
        
        <creator>Fangfang Shan</creator>
        
        <creator>Shiqi Mao</creator>
        
        <creator>Yanlong Lu</creator>
        
        <creator>Shuaifeng Li</creator>
        
        <subject>Federated learning; differential privacy; privacy protection; gradient clipping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Federated Learning (FL) has recently received considerable attention for protecting data privacy, especially in industries with sensitive data such as healthcare, banking, and the Internet of Things (IoT). However, although FL protects privacy by not sharing raw data, the information transferred during its model update process can still leak user privacy. Differential Privacy (DP), as an advanced privacy protection technology, introduces random noise during data queries or model updates, further enhancing the privacy protection capability of Federated Learning. This paper delves into the theory, technology, development, and future research recommendations of Differential Privacy Federated Learning (DP-FL). Firstly, the article introduces the basic concepts of Federated Learning, including synchronous and asynchronous optimization algorithms, and explains the fundamentals of Differential Privacy, including centralized and local DP mechanisms. Then, the paper discusses in detail the application of DP in Federated Learning under different gradient clipping strategies, including fixed clipping and adaptive clipping methods, and explores the application of user-level and sample-level DP in Federated Learning. Finally, the paper discusses future research directions for DP-FL, emphasizing advancements in asynchronous DP-FL and personalized DP-FL.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_22-Differential_Privacy_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Urban Infrastructure Safety: Modern Research in Deep Learning for Manhole Situation Supervision Through Drone Imaging and Geographic Information System Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150721</link>
        <id>10.14569/IJACSA.2024.0150721</id>
        <doi>10.14569/IJACSA.2024.0150721</doi>
        <lastModDate>2024-07-31T12:06:33.5500000+00:00</lastModDate>
        
        <creator>Ayoub Oulahyane</creator>
        
        <creator>Mohcine Kodad</creator>
        
        <subject>Urban infrastructure safety; object detection; Deep Learning (DL); UAV (Drones); Computer Vision (CV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper introduces a cutting-edge approach to enhancing urban infrastructure safety through the integration of modern technologies. Leveraging state-of-the-art deep learning techniques, specifically recent object detection models with a focus on YOLOv8, we propose a system for supervising and detecting manhole situations using drone imagery and GPS location data. Our experiments with object detection models demonstrate exceptional results, showcasing high accuracy and efficiency in the detection of manhole covers and potential hazards in real-time drone imagery. The best-trained model is YOLOv8, which achieves a mAP@50 of 89% and a precision of 95%, surpassing existing methods. By combining this visual information with precise GPS location data, our system offers a comprehensive solution for monitoring urban landscapes. The integration of YOLOv8 not only improves the efficiency of manhole detection but also contributes to proactive maintenance and risk mitigation in urban environments. This research also represents a significant step forward in leveraging modern research methodologies, and the outstanding results of our trained models underscore the effectiveness of object detection models in addressing critical infrastructure challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_21-Advancing_Urban_Infrastructure_Safety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Kepler Optimization Algorithm-Based Convolutional Neural Network Model for Risk Management of Internet Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150720</link>
        <id>10.14569/IJACSA.2024.0150720</id>
        <doi>10.14569/IJACSA.2024.0150720</doi>
        <lastModDate>2024-07-31T12:06:33.5330000+00:00</lastModDate>
        
        <creator>Bin Liu</creator>
        
        <creator>Fengjiao Zhou</creator>
        
        <creator>Haitong Jiang</creator>
        
        <creator>Rui Ma</creator>
        
        <subject>Risk management; Kepler optimization algorithm; Convolutional Neural Network; Internet enterprises</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Internet enterprises, as the representative enterprises of technology-based enterprises, contribute more and more to the growth of the world economy. To ensure the sustainable development of enterprises, it is necessary to predict the risks in the operation of Internet enterprises. An accurate risk prediction model can not only safeguard the interests of enterprises but also provide certain references for investors. Therefore, this study designed a Convolutional Neural Network (CNN) model based on the Kepler optimization algorithm (KOA) for risk prediction of Internet enterprises, aiming to maximize the accuracy of the prediction model, and to help Internet enterprises carry out risk management. Firstly, we select the indicators related to the financial risk of Internet enterprises, and predict the risk based on the traditional statistical analysis of Logistic regression model. On this basis, KOA was improved based on evolutionary strategies and fish foraging strategies, and the improved algorithm was applied to optimize CNN. Based on improved KOA and CNN algorithms, an IKOA-CNN risk prediction model is proposed. Finally, by comparing traditional statistical analysis-based models and other learning-based models, the results show that the IKOA-CNN algorithm proposed in this study has the highest prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_20-A_Kepler_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Interplay Between Machine Learning Techniques and Supply Chain Performance: A Structured Content Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150719</link>
        <id>10.14569/IJACSA.2024.0150719</id>
        <doi>10.14569/IJACSA.2024.0150719</doi>
        <lastModDate>2024-07-31T12:06:33.5200000+00:00</lastModDate>
        
        <creator>Asmaa Es-satty</creator>
        
        <creator>Mohamed Naimi</creator>
        
        <creator>Radouane Lemghari</creator>
        
        <creator>Chafik Okar</creator>
        
        <subject>Bibliometric analysis; machine learning; ProKnow-C methodology; supply chain performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Over recent years, disruptive technologies have shown considerable potential to improve supply chain efficiency. In this regard, numerous papers have explored the link between machine learning techniques and supply chain performance. However, this body of work still lacks systematization. To fill this gap, this paper aims to systematize published papers highlighting the impact of advanced technologies, such as machine learning, on supply chain performance. A structured content analysis was conducted on 91 selected journal articles from the Scopus and Web of Science databases. Bibliometric analysis identified nine distinct groupings of research papers that explore the relationship between machine learning and supply chain performance. These clusters cover topics such as big data and supply chain management, knowledge management, decision-making processes, business process management, and the applications of big data analytics within this domain. Each cluster’s content was clarified through a rigorous systematic literature review. The proposed study can be seen as a comprehensive initiative to systematically map and consolidate this rapidly evolving body of literature. By identifying the key research themes and their interrelationships, this analysis seeks to elucidate the current state of the art and to highlight potential directions for future research in this critical field.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_19-The_Interplay_Between_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality Development for Garbage Sortation Education for Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150718</link>
        <id>10.14569/IJACSA.2024.0150718</id>
        <doi>10.14569/IJACSA.2024.0150718</doi>
        <lastModDate>2024-07-31T12:06:33.5030000+00:00</lastModDate>
        
        <creator>Devi Afriyantari Puspa Putri</creator>
        
        <creator>Nisa Dwi Septiyanti</creator>
        
        <creator>Endah Sudarmilah</creator>
        
        <creator>Diah Priyawati</creator>
        
        <subject>Augmented reality; waste sorting; Unity3D; Vuforia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Waste accumulation, which grows higher every day, is one of the main contributing factors to the global climate change crisis. One effective way to reduce the accumulation of waste is to sort and recycle it. However, the waste sorting process in Indonesia remains largely ineffective, as only 1.4% of waste can be processed and sorted; one of the biggest causes is a lack of knowledge about the types of waste that exist. Based on these problems, this research aims to create augmented reality-based waste sorting educational technology, which is expected to increase knowledge of waste types and encourage environmentally conscious behavior. The ADDIE development model is used as the research method. This research successfully built an augmented reality waste sorting mobile application, which received a good rating on the System Usability Scale (SUS) questionnaire and is considered acceptable with an average score of 84.5 out of 100.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_18-Augmented_Reality_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Detecting the Appropriateness of Wearing a Helmet Chin Strap at Construction Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150717</link>
        <id>10.14569/IJACSA.2024.0150717</id>
        <doi>10.14569/IJACSA.2024.0150717</doi>
        <lastModDate>2024-07-31T12:06:33.4870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kodai Beppu</creator>
        
        <creator>Yuya Ifuku</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Detectron2; safety-first construction; helmet chin strap; annotation; roboflow; COCO annotator; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>A novel method for verifying the proper use of helmet chin straps during clothing inspections at construction sites is proposed, prioritizing safety in construction environments. Existing helmet-wearing state detection systems often rely on single-view approaches that might not be optimal. This research aims to address the limitations of single-view detection and proposes a multi-view deep learning approach for improved accuracy. The proposed method leverages transfer learning for object detection using well-known models such as YOLOv8 and Detectron2. The annotation process for detecting helmet chin straps was conducted in the COCO format with the assistance of Roboflow. Through experimental analysis, the following findings were observed: using images captured simultaneously from two different angles of the chin strap condition, Detectron2 demonstrated a remarkable ability to accurately determine the state of helmet usage. It could identify conditions such as the chin strap being removed or loosely fastened with 100% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_17-Method_for_Detecting_the_Appropriateness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Opthom-CAD: IoT-Enabled Classification System of Multiclass Retinal Eye Diseases Using Dynamic Swin Transformers and Explainable Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150716</link>
        <id>10.14569/IJACSA.2024.0150716</id>
        <doi>10.14569/IJACSA.2024.0150716</doi>
        <lastModDate>2024-07-31T12:06:33.4730000+00:00</lastModDate>
        
        <creator>Talal AlBalawi</creator>
        
        <creator>Mutlaq B. Aldajani</creator>
        
        <creator>Qaisar Abbas</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Computer-aided diagnosis; ophthalmology; multiclass classification; tessellation; age-related macular degeneration; Optic Disc Edema (ODE); hypertensive retinopathy; data augmentation; transformers; Swin; explainable AI; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Integrating Internet of Things (IoT)-assisted eye-related recognition incorporates connected devices and sensors for primary analysis and monitoring of eye conditions. Recent advancements in IoT-based retinal fundus recognition utilizing deep learning (DL) have significantly enhanced the early analysis and monitoring of eye-related diseases. Ophthalmologists use retinal images in the diagnosis of different eye diseases, and numerous computer-aided diagnosis (CAD) studies have applied IoT and DL technologies to early diagnosis. The retina is susceptible to microvascular alterations due to numerous retinal disorders. This study presents a new, non-invasive CAD system called IoT-Opthom-CAD. It uses Swin transformers and the gradient boosting (LightGBM) method to detect different eye diseases in colored fundus images after applying data augmentation techniques. We introduce an efficient and powerful Swin transformer (dc-swin) that connects a dynamic cross-attention layer to extract local and global features. In practice, this dynamic attention layer provides a mechanism whereby the model dynamically focuses on different parts of the image at different times, learning to cross-reference and integrate information across these parts. Next, the LightGBM method is used to classify these features into multiple groups, including normal (NML), diabetic retinopathy (DR), tessellation (TSN), age-related macular degeneration (ARMD), Optic Disc Edema (ODE), and hypertensive retinopathy (HR). To identify the causes of eye-related diseases, Grad-CAM is used as an explainable artificial intelligence (xAI) technique. To develop the Opthom-CAD system, preprocessing and data augmentation steps are integrated to strengthen the architecture. Three multi-label retinal disease datasets, MuReD, BRSET, and OIA-ODIR, are utilized to evaluate the system. After ten rounds of cross-validation tests, the proposed Opthom-CAD system shows excellent results: an AUC of 0.95, an f1-score of 95.7, accuracy of up to 96.5%, precision of 95%, and recall of 94%. The results indicate that the performance of the Opthom-CAD system is much better than that of numerous baseline state-of-the-art models. As a result, the Opthom-CAD system can assist ophthalmologists in detecting eye-related diseases. The source code is public and accessible for anyone to view and modify on GitHub (https://github.com/Qaisar256/Opthom-CAD).</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_16-IoT_Opthom_CAD_IoT_enabled_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research of the V2X Technology Organization Model for Self-Managed Technical Equipment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150715</link>
        <id>10.14569/IJACSA.2024.0150715</id>
        <doi>10.14569/IJACSA.2024.0150715</doi>
        <lastModDate>2024-07-31T12:06:33.4570000+00:00</lastModDate>
        
        <creator>Amir Gubaidullin</creator>
        
        <creator>Olga Manankova</creator>
        
        <subject>V2X; V2V; autonomous vehicles; DSRC; scenarios; frequency spectrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The steady progression of information technology today is opening up opportunities for extensive automation across various sectors, including the automotive industry. The active development of IT systems has paved the way for V2X (Vehicle-to-Everything) technology, which enables communication such as &quot;vehicle-to-vehicle&quot; and &quot;vehicle-to-road infrastructure&quot;. This article focuses on exploring the use of V2X technology to create &quot;intelligent transportation&quot;. Currently, V2X technologies are not widely adopted due to the limited coverage of 5G networks. Although the existing 4G network is adequate for streaming HD content and playing online games, it cannot support the safer and smarter operation required for autonomous cars. Nevertheless, within the 4G network framework, it is possible to develop a comprehensive solution for automating car traffic. This would significantly reduce the number of road accidents and optimize traffic flow. This article explores the implementation of V2X technology in road traffic to achieve these goals.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_15-Research_of_the_V2X_Technology_Organization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Prediction of Motion Based on Recursive Least Squares Method with Time Warp Parameter and its Application to Physical Therapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150714</link>
        <id>10.14569/IJACSA.2024.0150714</id>
        <doi>10.14569/IJACSA.2024.0150714</doi>
        <lastModDate>2024-07-31T12:06:33.4400000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kosuke Eto</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Exercise therapy; disabled person; body-building exercise; 3D character; Recursive Least-Squares (RLS) estimation; Dynamic Time Warping (DTW)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>We build an exercise therapy support system for children with disabilities that applies artificial intelligence technology. In this system, a 3DCG character demonstrates a model body-building exercise while providing feedback, such as calling out to the trainee. To make the exercise therapy more effective, the system attempts to correct the trainee&#39;s movement by notifying the trainee with voice or other cues before the movement deviates significantly from that of the 3DCG character. Since there is inevitably a delay between the movements of the trainee and the 3DCG character playing the role of the trainer, it is necessary to predict this delay using time series analysis. The Recursive Least-Squares (RLS) estimation method was used for this prediction. In addition, the similarity of the movements of both parties was evaluated using the Dynamic Time Warping (DTW) method, and the time warp calculated in this process was used as input to the RLS method. The results of the experiment confirmed that the predictions were made with sufficient accuracy and that, when the degree of similarity was low, the 3DCG character playing the trainer&#39;s role spoke to the trainee, leading to improvements in the trainees&#39; movements.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_14-Method_for_Prediction_of_Motion_Based_on_Recursive_Least_Squares_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporal Fusion Transformers for Enhanced Multivariate Time Series Forecasting of Indonesian Stock Prices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150713</link>
        <id>10.14569/IJACSA.2024.0150713</id>
        <doi>10.14569/IJACSA.2024.0150713</doi>
        <lastModDate>2024-07-31T12:06:33.4100000+00:00</lastModDate>
        
        <creator>Standy Hartanto</creator>
        
        <creator>Alexander Agung Santoso Gunawan</creator>
        
        <subject>Time series forecasting; stock price prediction; capital market; technical analysis; TFT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The stock market represents the financial pulse of economies and is an important part of the global financial system. It allows people to buy and sell shares in publicly held corporations and serves as a platform for investors to trade ownership in businesses, enabling companies to raise capital for expansion and operations. However, the stock market can be very risky for any investor because of fluctuating prices and market uncertainties. Integrating deep learning into stock market analysis enables researchers and practitioners to gain a deeper understanding of the trends and variations that can improve investment decisions. Recent advancements in deep learning, most notably transformer-based models, have revolutionized research in stock market prediction. The Temporal Fusion Transformer (TFT) was introduced as a model that uses self-attention mechanisms to capture complex temporal dynamics across multiple time-series sequences. This study investigates feature engineering and technical data integrated into TFT models to improve short-term stock market prediction. The Variance Inflation Factor (VIF) was used to quantify the severity of multicollinearity in the dataset. Standard evaluation metrics were used to assess the TFT models’ effectiveness in improving the accuracy of stock market forecasting compared to other transformer models and traditional statistical Na&#239;ve models used as baselines. The results show that TFT models excel in forecasting by effectively identifying multiple patterns, resulting in better predictive accuracy. Furthermore, considering the unique patterns of individual stocks, TFT obtained a remarkable SMAPE of 0.0022.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_13-Temporal_Fusion_Transformers_for_Enhanced_Multivariate_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Having Deep Investigation on Predicting Unconfined Compressive Strength by Decision Tree in Hybrid and Individual Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150712</link>
        <id>10.14569/IJACSA.2024.0150712</id>
        <doi>10.14569/IJACSA.2024.0150712</doi>
        <lastModDate>2024-07-31T12:06:33.3770000+00:00</lastModDate>
        
        <creator>Qingqing Zhang</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Hongmei Gu</creator>
        
        <subject>Unconfined compressive strength; machine learning; decision tree; population-based vortex search algorithm; arithmetic optimizer algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In the field of geotechnical engineering, rocks&#39; unconfined compressive strength (UCS) is an important variable that plays a significant part in civil engineering projects such as foundation design, mining, and tunneling. The stability and safety of these projects depend on how accurately UCS can be predicted. In this study, machine learning (ML) techniques are applied to forecast UCS for soil-stabilizer combinations. This study aims to build complex and highly accurate predictive models using the robust Decision Tree (DT) as a primary ML tool. These models relate UCS to a variety of intrinsic soil properties, including dispersion, plasticity, linear particle size shrinkage, and the kind and amount of stabilizing additives. Furthermore, this paper integrates two meta-heuristic algorithms, the population-based vortex search algorithm (PVS) and the arithmetic optimizer algorithm (AOA), to enhance the precision of the models. These algorithms work in tandem to bolster the accuracy of the predictive models. This study subjected the models to rigorous validation by analyzing UCS samples from different soil types, drawing from historical stabilization test results. It unveils three noteworthy models: DTAO, DTPB, and an independent DT model. Each model provides invaluable insights that support the meticulous projection of UCS for soil-stabilizer blends. Notably, the DTAO model stands out with exceptional performance metrics: with an R2 value of 0.998 and an impressively low RMSE of 1.242, it showcases precision and reliability. These findings not only underscore the accuracy of the DTAO model but also emphasize its effectiveness in predicting soil stabilization outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_12-Having_Deep_Investigation_on_Predicting_Unconfined_Compressive_Strength.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Hybrid Learning Approaches for COVID-19 Virus Detection Using Chest X-ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150711</link>
        <id>10.14569/IJACSA.2024.0150711</id>
        <doi>10.14569/IJACSA.2024.0150711</doi>
        <lastModDate>2024-07-31T12:06:33.3630000+00:00</lastModDate>
        
        <creator>Mansor Alohali</creator>
        
        <subject>COVID-19 detection; deep learning; deep hybrid learning; chest X-ray analysis; machine learning classifiers; medical image analysis; convolutional networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This paper introduces a novel deep learning framework for highly accurate COVID-19 detection using chest X-ray images. The proposed model tackles this challenge by stacking Convolutional Neural Network models for superior feature extraction, which may also enhance interpretability. The proposed model achieved high accuracy in distinguishing COVID-19 from healthy cases. The study demonstrates the potential of deep hybrid learning for accurate COVID-19 detection, paving the way for its application in real-world settings. Future research could explore methods to further refine the model&#39;s capabilities. Overall, this work contributes significantly to the development of robust deep-learning methods for COVID-19 detection, with the potential for broader use in medical image analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_11-Deep_Hybrid_Learning_Approaches_for_COVID_19_Virus_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Impact of Time Management Skills on Academic Achievement with an XGBC Model and Metaheuristic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150710</link>
        <id>10.14569/IJACSA.2024.0150710</id>
        <doi>10.14569/IJACSA.2024.0150710</doi>
        <lastModDate>2024-07-31T12:06:33.3470000+00:00</lastModDate>
        
        <creator>Songyang Li</creator>
        
        <subject>Student academic performance; time management; machine learning; extreme gradient boosting classification; metaheuristic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Estimating a student&#39;s academic performance is a crucial aspect of learning preparation. To predict students&#39; academic performance, this study uses several Machine Learning (ML) models and time management skills data from the Time Structure Questionnaire (TSQ). While a number of other useful characteristics have been used to forecast academic achievement, TSQ findings, which directly evaluate students&#39; time management skills, have never been included. This oversight is surprising, as time management skills likely play a significant role in academic success. The purpose of this research is to examine the connection between college students&#39; academic success and their ability to manage their time well. The Extreme Gradient Boosting Classification (XGBC) model is utilized in this study to forecast students&#39; academic performance. To enhance the prediction accuracy of the XGBC model, this study employed three optimizers: the Giant Trevally Optimizer (GTO), Bald Eagle Search Optimization (BESO), and the Seagull Optimization Algorithm (SOA). Impartial performance evaluators were employed to assess the models&#39; predictions, minimizing potential biases. The findings showcase the success of this approach in developing an accurate predictive model for student academic performance. Notably, the optimized XGBC surpassed the other models, achieving impressive accuracy and precision values of 0.920 and 0.923 during the training phase.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_10-Exploring_the_Impact_of_Time_Management_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning Models Based on CATBoost Classifier for Assessing Students&#39; Academic Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150709</link>
        <id>10.14569/IJACSA.2024.0150709</id>
        <doi>10.14569/IJACSA.2024.0150709</doi>
        <lastModDate>2024-07-31T12:06:33.3300000+00:00</lastModDate>
        
        <creator>Ding Hao</creator>
        
        <creator>Yang Xiaoqi</creator>
        
        <creator>Qi Taoyu</creator>
        
        <subject>Academic performance; hybridization; CATBoost classifier; meta-heuristic algorithms; educational institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>This study addresses the imperative task of predicting and evaluating students&#39; academic performance by amalgamating qualitative and quantitative factors, crucial in light of the persisting challenges undergraduates encounter in completing their degrees. Educational institutions wield significant influence in prognosticating student outcomes, necessitating the application of data mining (DM) techniques such as classification, clustering, and regression to discern and forecast student study behaviors. This research demonstrates the potential of deriving valuable insights from educational data, empowering educational stakeholders with enhanced decision-making capabilities and facilitating improved student outcomes. Employing a hybrid approach, models are developed within the realm of educational DM, leveraging the CATBoost Classifier (CATC) in conjunction with two cutting-edge optimization algorithms: Victoria Amazonica Optimization (VAO) and Artificial Rabbits Optimization (ARO). Initially, the data are partitioned into training and testing sets for performance evaluation using statistical metrics. After classifying 649 students according to their final scores, VAO outperformed ARO in maximizing CATC&#39;s classification ability, resulting in an approximate 6% enhancement in accuracy and precision. Moreover, the VAO model accurately categorizes 606 out of 649 students. This research furnishes invaluable predictive models for educators, researchers, and policymakers endeavoring to enrich students&#39; educational journeys and foster academic success.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_9-Hybrid_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Internet of Things-Enabled Intelligent Transportation Systems: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150708</link>
        <id>10.14569/IJACSA.2024.0150708</id>
        <doi>10.14569/IJACSA.2024.0150708</doi>
        <lastModDate>2024-07-31T12:06:33.3170000+00:00</lastModDate>
        
        <creator>Changxia Lu</creator>
        
        <creator>Fengyun Wang</creator>
        
        <subject>Internet of Things; intelligent transportation; security; logistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The Internet of Things (IoT) constitutes a technological evolution capable of influencing the establishment of smart cities in a wide range of fields, including transportation. Intelligent Transportation Systems (ITS) represent a prominent IoT-enabled solution designed to enhance the efficiency, safety, and sustainability of transport networks. However, integrating IoT with ITS introduces significant security challenges that need to be addressed to ensure the reliability of these systems. This research aims to critically analyze the current state of IoT-integrated ITS, identify security threats and vulnerabilities, and evaluate existing security measures to propose robust solutions. Utilizing a comprehensive review methodology that includes literature analysis and expert interviews, we identify key achievements and pinpoint critical security gaps. Our findings indicate that while substantial progress has been made in securing ITS, significant challenges remain, particularly regarding scalability, interoperability, and real-time data processing. The study proposes enhanced security protocols and methods to mitigate these risks, contributing to the development of more secure and resilient IoT-enabled ITS.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_8-Towards_Secure_Internet_of_Things_Enabled_Intelligent_Transportation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Security Systems: Human and Automated Surveillance Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150707</link>
        <id>10.14569/IJACSA.2024.0150707</id>
        <doi>10.14569/IJACSA.2024.0150707</doi>
        <lastModDate>2024-07-31T12:06:33.3000000+00:00</lastModDate>
        
        <creator>Mohammed Ameen</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Ulrike Genschel</creator>
        
        <creator>Fatima Mgaedeh</creator>
        
        <subject>Hybrid surveillance systems; human-AI interaction; operator training; predictive modeling; linear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The study investigates the performance of hybrid security systems under different personnel training and artificial intelligence (AI) assistance conditions. The aim is to understand the system’s impact on different scenarios that involve human operators and AI and to develop a predictive model for optimizing system performance. A human security information model was built to predict the performance of hybrid security systems. The system’s performance metrics (response time, hits, misses, mistakes), cognitive load, visual discrimination, trust, and confidence were measured under different training and assistance conditions. Participants were divided into trained and non-trained groups, and each group performed surveillance tasks with and without AI assistance. Predictive modeling was performed using Linear Regression. Training significantly improved performance by reducing misses and mistakes and increasing hits, both with and without AI assistance. In the non-trained group, AI assistance boosted speed and hit accuracy but led to more mistakes. For the trained group, AI assistance reduced response time and misses while increasing hits without affecting the mistake rate. Trust and confidence were higher with AI in the non-trained group, while AI reduced cognitive load in the trained group. The findings highlight the interactions between human operators, AI assistance, and training in hybrid surveillance systems. The predictive model can guide the design and implementation of these systems to optimize performance. Future studies should focus on strategies to enhance operator trust and confidence in AI-assisted systems, further optimizing the collaborative potential of hybrid surveillance frameworks.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_7-Hybrid_Security_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Memory-Based Neural Network Model for English to Telugu Language Translation on Different Types of Sentences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150706</link>
        <id>10.14569/IJACSA.2024.0150706</id>
        <doi>10.14569/IJACSA.2024.0150706</doi>
        <lastModDate>2024-07-31T12:06:33.2830000+00:00</lastModDate>
        
        <creator>Bilal Bataineh</creator>
        
        <creator>Bandi Vamsi</creator>
        
        <creator>Ali Al Bataineh</creator>
        
        <creator>Bhanu Prakash Doppala</creator>
        
        <subject>Machine translation; English-Telugu translation; RNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>In India, regional languages play an important role in government-to-public communication, citizens&#39; rights, weather forecasting, and farming, and the language changes from state to state. In remote areas, however, comprehension becomes difficult because nearly everything today is presented in English. In such conditions, manual translation into regional languages takes too long to deliver services to the common people. Machine Translation automatically translates one language into another while preserving the meaning of the input sentence in the output language. In this work, we propose a Memory Based Neural Network for Translation (MBNNT) model for English to Telugu translation of simple, compound, and complex sentences. We used the BLEU and WER metrics to measure translation quality. Applying these metrics to the different types of sentences, the LSTM showed promising results over Statistical Machine Translation and Recurrent Neural Networks in terms of quality and performance.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_6-A_Memory_Based_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Robots for Transport Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150705</link>
        <id>10.14569/IJACSA.2024.0150705</id>
        <doi>10.14569/IJACSA.2024.0150705</doi>
        <lastModDate>2024-07-31T12:06:33.2700000+00:00</lastModDate>
        
        <creator>Yang Lu</creator>
        
        <subject>Autonomous vehicles; transport choices; sustainability; health; physical activity; active transport; shared autonomous vehicles; private autonomous vehicles; public transport</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Even though automation of travel systems is already happening, it is important to understand how the introduction of self-driving cars might change people&#39;s transportation habits, because changes in these choices could affect health as well as the long-term viability and efficiency of transportation systems. This study addresses this information gap in the Australian context. Respondents provided information about their backgrounds, their current modes of travel, the importance they attached to particular aspects of transportation, and their attitudes toward self-driving cars. They then read a scenario, shaped by expert opinion, describing a future in which cars drive themselves, and selected the types of transportation they would most likely use in that scenario. Descriptive analyses were used to examine how transport choices changed, and regression models identified the factors predicting future transport options. Many respondents said they intended to use active, shared, and public travel more in the future than they do now, while intentions to use private transport roughly halved. In general, better public transportation, a workable system for active transportation, and fairly cheap shared driverless cars were seen as drivers of positive change in how people planned to travel in the imagined situation. If policymakers can take action to achieve these outcomes, the autonomization of transportation is likely to bring positive changes to society.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_5-Autonomous_Robots_for_Transport_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Audio Classification Through MFCC Feature Extraction and Data Augmentation with CNN and RNN Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150704</link>
        <id>10.14569/IJACSA.2024.0150704</id>
        <doi>10.14569/IJACSA.2024.0150704</doi>
        <lastModDate>2024-07-31T12:06:33.2530000+00:00</lastModDate>
        
        <creator>Karim Mohammed Rezaul</creator>
        
        <creator>Md. Jewel</creator>
        
        <creator>Md Shabiul Islam</creator>
        
        <creator>Kazy Noor e Alam Siddiquee</creator>
        
        <creator>Nick Barua</creator>
        
        <creator>Muhammad Azizur Rahman</creator>
        
        <creator>Mohammad Shan-A-Khuda</creator>
        
        <creator>Rejwan Bin Sulaiman</creator>
        
        <creator>Md Sadeque Imam Shaikh</creator>
        
        <creator>Md Abrar Hamim</creator>
        
        <creator>F.M Tanmoy</creator>
        
        <creator>Afraz Ul Haque</creator>
        
        <creator>Musarrat Saberin Nipun</creator>
        
        <creator>Navid Dorudian</creator>
        
        <creator>Amer Kareem</creator>
        
        <creator>Ahmmed Khondokar Farid</creator>
        
        <creator>Asma Mubarak</creator>
        
        <creator>Tajnuva Jannat</creator>
        
        <creator>Umme Fatema Tuj Asha</creator>
        
        <subject>Deep learning (artificial intelligence); data augmentation; audio segmentation; signal processing; frame blocking; fast fourier transform; discrete cosine transform; feature extraction; MFCC; CNN; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Sound classification is a multifaceted task that necessitates the gathering and processing of vast quantities of data, as well as the construction of machine learning models that can accurately distinguish between various sounds. In our project, we implemented a novel methodology for classifying both musical instruments and environmental sounds, utilizing convolutional and recurrent neural networks. We used the Mel Frequency Cepstral Coefficient (MFCC) method to extract features from audio, which emulates the human auditory system and produces highly distinct features. Knowing how important data processing is, we implemented distinctive approaches, including a range of data augmentation and cleaning techniques, to achieve an optimized solution. The outcomes were noteworthy, as both the convolutional and recurrent neural network models achieved a commendable level of accuracy. As machine learning and deep learning continue to revolutionize image classification, it is high time to explore the development of adaptable models for audio classification. Despite the challenges associated with a small dataset, we successfully crafted our models using convolutional and recurrent neural networks. Overall, our strategy for sound classification bears significant implications for diverse domains, encompassing speech recognition, music production, and healthcare. We hold the belief that with further research and progress, our work can pave the way for breakthroughs in audio data classification and analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_4-Enhancing_Audio_Classification_Through_MFCC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Healthcare: Machine Learning for Diabetes Prediction and Retinopathy Risk Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150703</link>
        <id>10.14569/IJACSA.2024.0150703</id>
        <doi>10.14569/IJACSA.2024.0150703</doi>
        <lastModDate>2024-07-31T12:06:33.2370000+00:00</lastModDate>
        
        <creator>Ghinwa Barakat</creator>
        
        <creator>Samer El Hajj Hassan</creator>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Wiam Ramadan</creator>
        
        <subject>Machine learning; diabetes prediction; artificial intelligence in healthcare; XGBoost; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Diabetes mellitus stands as a major public health issue that affects millions globally. Among the various complications associated with diabetes, diabetic retinopathy presents a significant concern, affecting approximately one-third of diabetic patients. Early detection of diabetic retinopathy is paramount, as timely treatment can significantly reduce the risk of severe visual impairment. The study employs advanced machine learning techniques to predict diabetes and assess risk levels for retinopathy, aiming to enhance predictive accuracy and risk stratification in clinical settings. This approach contributes to better management and treatment outcomes. A diverse array of machine learning models, including Logistic Regression, Random Forest, XGBoost, and voting classifiers, was used. These models were applied to a meticulously selected dataset, specifically designed to include comprehensive diabetic indicators along with retinopathy outcomes, enabling a detailed comparative analysis. Among the evaluated models, XGBoost demonstrated superior performance in terms of accuracy, sensitivity, and computational efficiency. This model excelled in identifying risk levels among diabetic patients, providing a reliable tool for early detection of potential retinopathy. The findings suggest that the integration of machine learning models, particularly XGBoost, into the healthcare system could significantly enhance early screening and personalized treatment plans for diabetic retinopathy. This advancement holds the potential to improve patient outcomes through timely and accurate risk assessment, paving the way for targeted interventions.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_3-Enhancing_Healthcare_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Administrative Source Registers for the Development of a Robust Large Language Model: A Novel Methodological Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150702</link>
        <id>10.14569/IJACSA.2024.0150702</id>
        <doi>10.14569/IJACSA.2024.0150702</doi>
        <lastModDate>2024-07-31T12:06:33.2230000+00:00</lastModDate>
        
        <creator>Adham Kahlawi</creator>
        
        <creator>Cristina Martelli</creator>
        
        <subject>Statistical information systems; administrative data reuse; ontology; database; semantic web; knowledge graph; LLM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>Accurate statistical information is critical for understanding, describing, and managing socio-economic systems. While data availability has increased, often it does not meet the quality requirements for effective governance. Administrative registers are crucial for statistical information production, but their potential is hampered by quality issues stemming from administrative inconsistencies. This paper explores the integration of semantic technologies, including ontologies and knowledge graphs, with administrative databases to improve data quality. We discuss the development of large language models (LLMs) that enable a robust, queryable framework, facilitating the integration of disparate data sources. This approach ensures high-quality administrative data, essential for statistical reuse and the development of comprehensive, dynamic knowledge graphs and LLMs tailored for administrative applications.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_2-Enhancing_Administrative_Source_Registers_for_the_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Observability with DevOps Practices in Financial Services Technologies: A Study on Enhancing Software Development and Operational Resilience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150701</link>
        <id>10.14569/IJACSA.2024.0150701</id>
        <doi>10.14569/IJACSA.2024.0150701</doi>
        <lastModDate>2024-07-31T12:06:33.2070000+00:00</lastModDate>
        
        <creator>Ankur Mahida</creator>
        
        <subject>Observability; monitoring; integrated analysis; DevOps; integration; operational resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(7), 2024</description>
        <description>The financial market depends closely on high-quality software solutions when performing crucial transactions, processing important information, and serving customers. Systems' reliability and performance therefore become crucial as these systems grow more complex. This paper focuses on the implementation of the observability concept with the DevOps approach in financial services technologies, discussing its strengths, weaknesses, opportunities, and threats with regard to the future. Observability is intertwined with DevOps since it makes it possible to gain deep insight into a system's internal state, enhance status monitoring, detect problems faster, and optimize performance continuously. When organized and analyzed properly, observability data can therefore play a critical role in increasing software quality in financial institutions, aligning with regulatory standards, and reducing silos between development and operations teams. However, implementing observability within an organization using DevOps best practices in the financial services industry poses some challenges, including data security concerns, data overload, and the difficult task of fostering an organizational culture that supports continuous and consistent observability. The article presents a guide to incorporating observability into DevOps: defining observability needs, choosing the most suitable tools, integrating them into existing DevOps frameworks, configuring alarms, and pursuing continuous improvement. Furthermore, it considers examples of how some financial organizations have applied observability to reduce risks, improve efficacy, and enrich customer interactions. The article also deliberates on the future perspectives of observability; for instance, artificial intelligence and machine learning are quickly emerging as means of automating different observability tasks, and security concerns are growing around the implementation of observability in the financial services industry. By adopting observability and aligning it with DevOps, financial institutions can develop and sustain sound, reliable, and high-quality infrastructure and maintain the industry's leadership.</description>
        <description>http://thesai.org/Downloads/Volume15No7/Paper_1-Integrating_Observability_with_DevOps_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Extraction and Translation Through Lip Reading using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506156</link>
        <id>10.14569/IJACSA.2024.01506156</id>
        <doi>10.14569/IJACSA.2024.01506156</doi>
        <lastModDate>2024-06-29T11:26:07.3030000+00:00</lastModDate>
        
        <creator>Sai Teja Krithik Putcha</creator>
        
        <creator>Yelagandula Sai Venkata Rajam</creator>
        
        <creator>K. Sugamya</creator>
        
        <creator>Sushank Gopala</creator>
        
        <subject>Deep Learning (DL); Convolutional Neural Networks (CNN); Lip Reading; Recurrent Neural Networks (RNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Deep learning has revolutionized industries such as natural language processing and computer vision. This study explores the fusion of these domains by proposing a novel approach for text extraction and translation using lip reading and deep learning. Lip reading, the process of interpreting spoken language by analyzing lip movements, has garnered interest due to its potential applications in noisy environments, silent communication, and accessibility enhancements. This study employs the power of deep learning architectures such as CNNs and RNNs to accurately extract text content from lip movements captured in video sequences. The proposed model consists of multiple stages: lip region detection, feature extraction, text recognition, and translation. Initially, the model identifies and isolates the lip region within video frames using a CNN-based object detection approach. Subsequently, relevant features are extracted from the lip region using CNNs to capture intricate motion patterns and convert these visual features into textual information. The extracted text is further processed and translated into the desired language using machine translation techniques.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_156-Text_Extraction_and_TranslationThrough_Lip_Reading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LBPSCN: Local Binary Pattern Scaled Capsule Network for the Recognition of Ocular Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506155</link>
        <id>10.14569/IJACSA.2024.01506155</id>
        <doi>10.14569/IJACSA.2024.01506155</doi>
        <lastModDate>2024-06-29T11:26:07.2700000+00:00</lastModDate>
        
        <creator>Mavis Serwaa</creator>
        
        <creator>Patrick Kwabena Mensah</creator>
        
        <creator>Adebayo Felix Adekoya</creator>
        
        <creator>Mighty Abra Ayidzoe</creator>
        
        <subject>Glaucoma; cataracts; capsule network; Convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Glaucoma and cataracts are leading causes of blindness worldwide, resulting in significant vision loss and quality of life impairment. Early detection and diagnosis are crucial for effective treatment and prevention of further damage. However, diagnosis is challenging, especially when intraocular pressure is low or cataracts are present. Deep learning algorithms, particularly Convolutional Neural Networks (CNNs), have shown promise in detecting eye diseases but require large training datasets to achieve high performance. To address this limitation, this work proposes a modified Capsule Network algorithm with a novel scaled processing algorithm and local binary pattern layer, enabling robust and accurate diagnosis of glaucoma and cataracts. The proposed model demonstrates performance comparable to state-of-the-art methods, achieving high accuracy on combined, cataract-only, and glaucoma-only datasets (94.32%, 96.87%, and 95.23%, respectively). This work introduces enhanced feature extraction and robustness to illumination variations, addressing critical limitations of existing methods. The proposed model offers a promising tool for ophthalmologists and glaucoma specialists to accurately diagnose glaucoma and cataract-compromised eyes, potentially improving patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_155-LBPSCN_Local_Binary_Pattern_Scaled_Capsule_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bone Quality Classification of Dual Energy X-ray Absorptiometry Images Using Convolutional Neural Network Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506154</link>
        <id>10.14569/IJACSA.2024.01506154</id>
        <doi>10.14569/IJACSA.2024.01506154</doi>
        <lastModDate>2024-06-29T11:26:07.2400000+00:00</lastModDate>
        
        <creator>Mailen Gonzalez</creator>
        
        <creator>Jose M. Fuertes Garcia</creator>
        
        <creator>Manuel J. Lucena Lopez</creator>
        
        <creator>Ruben Abdala</creator>
        
        <creator>Jose M. Massa</creator>
        
        <subject>Osteoporosis; Dual Energy X-ray Absorptiometry (DXA); Trabecular Bone Score (TBS); Classification; Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The assessment of bone trabecular quality degradation is important for the detection of diseases such as osteoporosis. The gold standard for its diagnosis is the Dual Energy X-ray Absorptiometry (DXA) image modality. The analysis of these images is a topic of growing interest, especially with artificial intelligence techniques. This work proposes the detection of a degraded bone structure from DXA images using approaches based on the learning of Trabecular Bone Score (TBS) ranges. The proposed models are supported by intelligent systems based on convolutional neural networks using two kinds of approaches: ad hoc architectures and knowledge transfer systems in deep network architectures, such as AlexNet, ResNet, VGG, SqueezeNet, and DenseNet retrained with DXA images. For both approaches, experimental studies were conducted comparing the proposed models in terms of effectiveness and training time, achieving an F1-Score of approximately 0.75 for classifying the bone structure as degraded or normal according to its TBS range.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_154-Bone_Quality_Classification_of_Dual_Energy_X_ray_Absorptiometry_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing AI Algorithms for Optimizing Elliptic Curve Cryptography Parameters in e-Commerce Integrations: A Pre-Quantum Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506153</link>
        <id>10.14569/IJACSA.2024.01506153</id>
        <doi>10.14569/IJACSA.2024.01506153</doi>
        <lastModDate>2024-06-29T11:26:07.2100000+00:00</lastModDate>
        
        <creator>Felipe Tellez</creator>
        
        <creator>Jorge Ortiz</creator>
        
        <subject>Artificial intelligence; genetic algorithms; particle swarm optimization; elliptic curve cryptography; e-commerce; third-party integrations; pre-quantum computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This paper presents a comparative analysis between the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), two vital artificial intelligence algorithms, focusing on optimizing Elliptic Curve Cryptography (ECC) parameters. These encompass the elliptic curve coefficients, prime number, generator point, group order, and cofactor. The study provides insights into which of the bio-inspired algorithms yields better optimization results for ECC configurations, examining performances under the same fitness function. This function incorporates methods to ensure robust ECC parameters, including assessing for singular or anomalous curves and applying Pollard’s rho attack and Hasse’s theorem for optimization precision. The optimized parameters generated by GA and PSO are tested in a simulated e-commerce environment, contrasting with well-known curves like secp256k1 during the transmission of order messages using Elliptic Curve Diffie-Hellman (ECDH) and Hash-based Message Authentication Code (HMAC). Focusing on traditional computing in the pre-quantum era, this research highlights the efficacy of GA and PSO in ECC optimization, with implications for enhancing cybersecurity in third-party e-commerce integrations. We recommend the immediate consideration of these findings before quantum computing’s widespread adoption.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_153-Comparing_AI_Algorithms_for_Optimizing_Elliptic_Curve_Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based System Towards Data Security Against Smart Contract Vulnerabilities: Electronic Toll Collection Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506152</link>
        <id>10.14569/IJACSA.2024.01506152</id>
        <doi>10.14569/IJACSA.2024.01506152</doi>
        <lastModDate>2024-06-29T11:26:07.1770000+00:00</lastModDate>
        
        <creator>Olfa Ben Rhaiem</creator>
        
        <creator>Marwa Amara</creator>
        
        <creator>Radhia Zaghdoud</creator>
        
        <creator>Lamia Chaari</creator>
        
        <creator>Maha Metab Alshammari</creator>
        
        <subject>Blockchain; Ethereum; smart contracts; Reentrancy Attacks; security; ETC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Electronic Toll Collection (ETC) systems have been proposed as a replacement for traditional toll booths, where vehicles are required to queue to make payments, particularly during holiday periods. Thus, the primary advantage of ETC is improved traffic efficiency. However, existing ETC systems lack the security necessary to protect vehicle information privacy and prevent fund theft. As a result, automatic payments become inefficient and susceptible to attacks, such as the Reentrancy attack. In this paper, we utilize the Ethereum blockchain and smart contracts as the automatic payment method. The biggest challenges are to authenticate vehicle data, automatically deduct fees from the user’s wallet, and protect against smart contract Reentrancy attacks without leaking distance information. To address these challenges, we propose end-to-end verification algorithms at both entry and exit toll points that incorporate measures to protect distance-related information from potential leaks. The proposed system’s performance was evaluated on a private blockchain. Results demonstrate that our approach enhances transaction security and ensures accurate payment processing.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_152-Blockchain_based_System_Towards_Data_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human IoT Interaction Approach for Modeling Human Walking Patterns Using Two-Dimensional Levy Walk Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506151</link>
        <id>10.14569/IJACSA.2024.01506151</id>
        <doi>10.14569/IJACSA.2024.01506151</doi>
        <lastModDate>2024-06-29T11:26:07.1630000+00:00</lastModDate>
        
        <creator>Tajim Md. Niamat Ullah Akhund</creator>
        
        <creator>Waleed M. Al-Nuwaiser</creator>
        
        <subject>Internet of Things (IoT); wearable sensors; Human-Computer Interaction (HCI); 3-axis accelerometer gyroscope; walking pattern; levy walk distribution; abnormal walk prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This work presents a novel approach to modeling and analyzing human walking patterns using a two-dimensional Levy walk distribution and the Internet of Sensing Things. The study proposes the strategic placement of MPU6050 sensors within a garment worn on the human leg to capture motion data during walking activities that can model human walking patterns. Random samples are generated from the Levy distribution through numerical modeling, simulating normal human walking patterns. A real-world experiment involving five male participants wearing sensor-equipped garments during normal walking activities validates the proposed methodology. Statistical analysis, including the Kolmogorov-Smirnov test, confirms the agreement between simulated Levy distributions and observed step distance data, supporting the hypothesis that deviations indicate abnormal walking patterns. The study contributes to advancing sensor-based systems for human activity recognition and health monitoring, offering insights into the feasibility of using Levy walk distributions for gait analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_151-Human_IoT_Interaction_Approach_for_Modeling_Human_Walking_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Differential Evolution-based Pseudotime Estimation Method for Single-cell Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506150</link>
        <id>10.14569/IJACSA.2024.01506150</id>
        <doi>10.14569/IJACSA.2024.01506150</doi>
        <lastModDate>2024-06-29T11:26:07.1300000+00:00</lastModDate>
        
        <creator>Nazifa Tasnim Hia</creator>
        
        <creator>Ishrat Jahan Emu</creator>
        
        <creator>Muhammad Ibrahim</creator>
        
        <creator>Sumon Ahmed</creator>
        
        <subject>Pseudotime estimation; trajectory inference; single-cell; differential evolution; RNA-seq</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The analysis of single-cell genomics data creates an intriguing opportunity for researchers to examine the complex biological system more closely but is challenging due to inherent biological and technical noise. One popular approach involves learning a lower dimensional manifold or pseudotime trajectory through the data that can capture the primary sources of variation in the data. A smooth function of pseudotime can then be used to align gene expression patterns through the lineages in the trajectory, which later facilitates downstream analysis such as heterogeneous cell type identification. Here, we propose a differential evolution-based pseudotime estimation method. The model operates on a continuous search space and allows easy integration of the cell capture time information in the inference process. The suitability of the proposed model is investigated by applying it on benchmarking single-cell data sets collected from different organisms using different assaying techniques. The experimental result shows the model’s capability of producing plausible biological insights about cell ordering, which makes it an appealing choice for pseudotime estimation using single-cell transcriptome data.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_150-A_Differential_Evolution_based_Pseudotime_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of an Efficient Explainable AI Framework for Heart Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506149</link>
        <id>10.14569/IJACSA.2024.01506149</id>
        <doi>10.14569/IJACSA.2024.01506149</doi>
        <lastModDate>2024-06-29T11:26:07.1000000+00:00</lastModDate>
        
        <creator>Deepika Tenepalli</creator>
        
        <creator>Navamani T M</creator>
        
        <subject>Machine learning; heart disease; explainable AI; XGBoost; SHAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Heart disease remains a global health concern, demanding early and accurate prediction for improved patient outcomes. Machine learning offers promising tools, but existing methods face accuracy, class imbalance, and overfitting issues. In this work, we propose an efficient Explainable Recursive Feature Elimination with eXtreme Gradient Boosting (ERFEX) Framework for heart disease prediction. ERFEX leverages Explainable AI techniques to identify crucial features while addressing class imbalance issues. We implemented various machine learning algorithms within the ERFEX framework, utilizing Support Vector Machine-based Synthetic Minority Over-sampling Technique (SVMSMOTE) and SHapley Additive exPlanations (SHAP) for imbalanced class handling and feature selection with explainability. Among these models, Random Forest and XGBoost classifiers within the ERFEX framework achieved 100% training accuracy and 98.23% testing accuracy. Furthermore, SHAP analysis provided interpretable insights into feature importance, improving model trustworthiness. Thus, the findings of this work demonstrate the potential of ERFEX for accurate and explainable heart disease prediction, paving the way for improved clinical decision-making.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_149-Design_and_Development_of_an_Efficient_Explainable_AI_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Constructing a Secure and Fast Key Derivation Function Based on Stream Ciphers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506148</link>
        <id>10.14569/IJACSA.2024.01506148</id>
        <doi>10.14569/IJACSA.2024.01506148</doi>
        <lastModDate>2024-06-29T11:26:07.0830000+00:00</lastModDate>
        
        <creator>Chai Wen Chuah</creator>
        
        <creator>Janaka Alawatugoda</creator>
        
        <creator>Nureize Arbaiy</creator>
        
        <subject>Key derivation functions; extractors; expanders; stream ciphers; hash functions; symmetric-key cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In order to protect electronic data, pseudorandom cryptographic keys generated by a standard function known as a key derivation function play an important role. The inputs to the function are known as initial keying materials, such as passwords, shared secret keys, and non-random strings. Existing standard secure functions for key derivation are based on stream ciphers, block ciphers, and hash functions. The latest secure and fast design is a stream cipher-based key derivation function (SCKDF2). The security levels of key derivation functions based on stream ciphers, block ciphers, and hash functions are equal. However, the execution time of key derivation functions based on stream ciphers is faster compared to the other two. This paper proposes an improved design for a key derivation function based on stream ciphers, namely I−SCKDF2. We simulate instances of the proposed I−SCKDF2 using Trivium. As a result, I−SCKDF2 has a lower execution time than the existing SCKDF2. The results show that the execution time taken by I−SCKDF2 to generate an n-bit cryptographic key is almost 50 percent lower than that of SCKDF2. I−SCKDF2 passed all the security tests in the Dieharder test tool. This shows that the proposed I−SCKDF2 is secure, and its simulation time is faster than that of SCKDF2.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_148-On_Constructing_a_Secure_and_Fast_Key_Derivation_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Flipper Control for Crawler Type Rescue Robot using Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506147</link>
        <id>10.14569/IJACSA.2024.01506147</id>
        <doi>10.14569/IJACSA.2024.01506147</doi>
        <lastModDate>2024-06-29T11:26:07.0670000+00:00</lastModDate>
        
        <creator>Hitoshi Kono</creator>
        
        <creator>Sadaharu Isayama</creator>
        
        <creator>Fukuro Koshiji</creator>
        
        <creator>Kaori Watanabe</creator>
        
        <creator>Hidekazu Suzuki</creator>
        
        <subject>Rescue robot; sub-crawler control; reinforcement learning; physics simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In recent years, many natural disasters have occurred, and rescue robots have been used to gather information at disaster sites. Rescue robots, particularly crawler-type rescue robots, are operated remotely by their operators via wireless or wired communication. However, some robots have failed to return owing to tipping over or disconnection of communication wires caused by operational errors. Therefore, studies have focused on the automatic control of rescue robots. Adapting a rescue robot to uneven terrain or unexpected obstacle shapes while travelling under autonomous control is challenging. This requires not only complete autonomous control but also partial control of the rescue robot to assist teleoperation. This study proposed automatic flipper control of rescue robots using reinforcement learning for stepping over steps. The proposed method involved the design of the learning environment, the reward setting, and the system configuration for reinforcement learning. The input for training was coarse-grained information obtained from a distance sensor, a gyro sensor, and GPS coordinates. Reinforcement learning was performed in a physics simulation within an environment in which the shape of a step changed once every 100 episodes. The reward was designed to reduce the impact on the robot&#39;s body by changing the robot&#39;s attitude angle. The learned knowledge, which contains the action-value function, was reused to verify that the flipper could be controlled automatically while the operator remotely drives the rescue robot along a direction, and that the robot could step over steps.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_147-Automatic_Flipper_Control_for_Crawler_Type_Rescue_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Language Models for Multi-Lingual Tasks - A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506146</link>
        <id>10.14569/IJACSA.2024.01506146</id>
        <doi>10.14569/IJACSA.2024.01506146</doi>
        <lastModDate>2024-06-29T11:26:07.0530000+00:00</lastModDate>
        
        <creator>Amir Reza Jafari</creator>
        
        <creator>Behnam Heidary</creator>
        
        <creator>Reza Farahbakhsh</creator>
        
        <creator>Mostafa Salehi</creator>
        
        <creator>Noel Crespi</creator>
        
        <subject>Language models; transfer learning; BERT; NLP; multilingual task; low resource languages; LLMs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>These days, different online media platforms such as social media give their users the possibility to exchange content and engage in different languages. It is no longer surprising to see comments in different languages on posts published by international celebrities and public figures. In this era, understanding cross-language content and multilingualism in natural language processing (NLP) is crucial, and substantial effort has been dedicated to leveraging existing NLP technologies to tackle this challenging research problem, especially with advances in language analysis and the introduction of large language models. In this survey, we provide a comprehensive overview of the existing literature on the evolution of language models, with a focus on multilingual tasks, and then identify potential opportunities for further research in this domain.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_146-Language_Models_for_Multi_Lingual_Tasks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Emotion Detection with Word Embeddings in a Low Resourced Language: Turkish</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506145</link>
        <id>10.14569/IJACSA.2024.01506145</id>
        <doi>10.14569/IJACSA.2024.01506145</doi>
        <lastModDate>2024-06-29T11:26:07.0200000+00:00</lastModDate>
        
        <creator>Senem Kumova Metin</creator>
        
        <creator>Hatice Ertugrul Giraz</creator>
        
        <subject>Emotion detection; word embedding; vector similarity; Turkish</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Through natural language processing, subjective information can be obtained from written sources such as suggestions, reviews, and social media posts. Understanding the user experience, in other words the feelings/emotions of users regarding any product or situation, directly affects the decisions to be taken on that product or service. In this study, we focus on a hybrid approach to text-based emotion detection. We combine keyword- and lexicon-based approaches through the use of word embeddings. In emotion detection, lexicon words/keywords and text units are simply compared in several different ways, and the comparison results are used in emotion identification experiments. Examination of this identification procedure makes it explicit that performance depends mainly on two factors: the lexicon/keyword list and the representation of the text unit. We propose to employ word vectors/embeddings for both factors. Firstly, we propose a hybrid approach that uses word vector similarities to determine lexicon words, in contrast to traditional approaches that employ all arbitrary words in a given text. Our approach reduces the overall effort in emotion identification by decreasing the number of arbitrary words that do not carry emotive content. Moreover, the hybrid approach decreases the need for crowdsourcing in lexicon word labelling. Secondly, we propose to build the representations of text units by measuring their word vector similarities to a given lexicon. We built two lexicons with our approach and present three different comparison metrics based on embedding similarities. Emotion identification experiments were performed employing both unsupervised and supervised methods on Turkish text. The experimental results showed that the hybrid approach involving word embeddings is promising on Turkish texts, and owing to its flexible and language-independent structure, it can be improved and used in studies on different languages.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_145-Hybrid_Emotion_Detection_with_Word_Embeddings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Decentralised Management of Digital Passports of Health (DPoH) for Vaccination Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506144</link>
        <id>10.14569/IJACSA.2024.01506144</id>
        <doi>10.14569/IJACSA.2024.01506144</doi>
        <lastModDate>2024-06-29T11:26:07.0070000+00:00</lastModDate>
        
        <creator>Abdulrahman Alreshidi</creator>
        
        <subject>Smart healthcare; blockchain; software architecture; digital passport of health; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the recent impact of viral infections and pandemics, such as the global healthcare emergency caused by COVID-19, there is an urgent need for mass-scale testing and vaccination initiatives to tackle the resulting health and economic crises. However, the centralized storage of patient information has given rise to significant concerns regarding privacy, transparency, and the efficient transmission of vaccination records. This paper presents a blockchain-based solution with a novel approach that seamlessly integrates identity verification, encryption protocols, and decentralized storage via IPFS (InterPlanetary File System), giving rise to the concept of a Digital Passport of Health (DPoH). The proposed solution introduces the DPoH, specifically designed for test certification, and leverages the power of smart contracts on Ethereum-based blockchain technology to securely create, manage, and transmit data in the form of a DPoH. The proposed solution is evaluated along three dimensions: (i) gas cost (i.e., energy efficiency), (ii) data storage (i.e., storage efficiency), and (iii) data access (i.e., response time) for the creation and transmission of DPoHs. The developed solution and its criteria-based validation are complemented by algorithmic implementations that can advance existing research and development on the blockchain-based management of health-critical systems.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_144-Blockchain_based_Decentralised_Management_of_Digital_Passports.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Gesture Recognition using a Transformer and Mediapipe</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506143</link>
        <id>10.14569/IJACSA.2024.01506143</id>
        <doi>10.14569/IJACSA.2024.01506143</doi>
        <lastModDate>2024-06-29T11:26:06.9900000+00:00</lastModDate>
        
        <creator>Asma H. Althubiti</creator>
        
        <creator>Haneen Algethami</creator>
        
        <subject>Gesture recognition; self-attention; transformer encoder; skeleton; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>There is a rising interest in dynamic gesture recognition as a research area. This is the result of emerging global pandemics as well as the need to avoid touching different surfaces. Most of the previous research has focused on implementing deep learning algorithms for the RGB modality. However, despite its potential to enhance the algorithm’s performance, gesture recognition has not widely utilised the concept of attention. Most research also used three-dimensional convolutional networks with long short-term memory networks for gesture recognition. However, these networks can be computationally expensive. As a result, this paper employs pre-trained models in conjunction with the skeleton modality to address the challenges posed by background noise. The goal is to present a comparative analysis of various gesture recognition models, divided based on video frames or skeletons. The performance of different models was evaluated using a dataset taken from Kaggle with a size of 2 GB. Each video contains 30 frames (or images) to recognise five gestures. The transformer model for skeleton-based gesture recognition achieves 0.99 accuracy and can be used to capture temporal dependencies in sequential data.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_143-Dynamic_Gesture_Recognition_using_a_Transformer_and_Mediapipe.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FEC-IGE: An Efficient Approach to Classify Fracture Based on Convolutional Neural Networks and Integrated Gradients Explanation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506142</link>
        <id>10.14569/IJACSA.2024.01506142</id>
        <doi>10.14569/IJACSA.2024.01506142</doi>
        <lastModDate>2024-06-29T11:26:06.9600000+00:00</lastModDate>
        
        <creator>Triet Minh Nguyen</creator>
        
        <creator>Thuan Van Tran</creator>
        
        <creator>Quy Thanh Lu</creator>
        
        <subject>Convolutional neural network; transfer learning; fine-tuning; X-ray image classification; EfficientNet; classification break bone; deep learning; integrated gradients explanation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In this paper, we propose the FEC-IGE framework, which includes data preprocessing, data augmentation, transfer learning, and fine-tuning of pre-trained convolutional neural network (CNN) models for the problem of bone fracture classification. Bone fractures are a widespread medical issue globally, with a significant prevalence, imposing substantial burdens on individuals and healthcare systems. The impact of bone fractures extends beyond physical injury, often leading to pain, reduced mobility, and decreased quality of life for affected individuals. Moreover, fractures can incur substantial economic costs due to medical expenses, rehabilitation, and lost productivity. In recent years, progress in machine learning methodologies has exhibited potential in tackling issues pertaining to fracture diagnosis and classification. By harnessing the capabilities of deep learning frameworks, researchers aspire to design precise and effective mechanisms for automatically detecting and classifying bone fractures from medical imaging data. In this study, the FEC-IGE framework demonstrated its potential and strength when applying pre-trained CNN models to the task of classifying X-ray bone fracture images, achieving accuracies of 98.48%, 96.92%, and 97.24% in three experimental scenarios. These outcomes result from the model&#39;s fine-tuning and transfer learning procedures applied to an augmented dataset of 1129 X-ray images classified into ten kinds of fractures: avulsion fracture, comminuted fracture, fracture dislocation, greenstick fracture, hairline fracture, impacted fracture, longitudinal fracture, oblique fracture, pathological fracture, and spiral fracture. To increase the transparency and understanding of the model, Integrated Gradients explanation was also applied in this study. Finally, metrics including precision, recall, F1 score, and the confusion matrix were applied to evaluate performance and support further in-depth analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_142-FEC-IGE_An_Efficient_Approach_to_Classify_Fracture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Short Video Recommendation Method Based on Sentiment Analysis and K-means++</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506141</link>
        <id>10.14569/IJACSA.2024.01506141</id>
        <doi>10.14569/IJACSA.2024.01506141</doi>
        <lastModDate>2024-06-29T11:26:06.9430000+00:00</lastModDate>
        
        <creator>Rong Hu</creator>
        
        <creator>Wei Yue</creator>
        
        <subject>Short videos; barrage; sentiment analysis; K-means++; recommendation; cluster density</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the explosive growth of short video content, effectively recommending videos that interest users has become a major challenge. In this study, a short video recommendation model based on barrage (bullet comment) sentiment analysis and an improved K-means++ algorithm is proposed to address the interest matching problem in short video recommendation systems. The model uses sentiment vectors to represent barrage content, clusters short videos through sentiment similarity calculation, and uses clustering density to eliminate abnormal sample points during the clustering process. The effectiveness of the proposed model was validated through simulation experiments. The outcomes denoted that when the historical data size increased to 7000, the model&#39;s prediction accuracy reached 0.81, the recall rate was 0.822, and the F1 value was 0.832. Compared with four current mainstream recommendation algorithms, this model showed advantages in clustering time and complexity, with clustering time reduced to 8.2 seconds, demonstrating its efficiency in improving recommendation efficiency and accuracy. In summary, the proposed model achieves high recommendation accuracy in short video recommendation systems and meets the real-time demands of short video recommendation, effectively raising the quality of short video recommendations.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_141-Short_Video_Recommendation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Ripeness Classification of Harvested Strawberries using Hue Information of Images Acquired After the Harvest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506140</link>
        <id>10.14569/IJACSA.2024.01506140</id>
        <doi>10.14569/IJACSA.2024.01506140</doi>
        <lastModDate>2024-06-29T11:26:06.9270000+00:00</lastModDate>
        
        <creator>Jin Sawada</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Souichiro Tashi</creator>
        
        <creator>Shigenori Inakazu</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Amaou; Robotsumi; hue; strawberry; automatic harvest; 10 grades classification; questionnaire; image defects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Hakata Amaou is the most popular strawberry in Fukuoka Prefecture. However, Amaou farmers face a significant challenge due to a shortage of labor and successors, primarily caused by an aging workforce. This labor shortage is particularly severe during the harvest season, when work must be completed within a short timeframe. To address this issue, INAK System Co., Ltd. has developed an automatic harvesting system called &quot;Robotsumi,&quot; which utilizes image recognition technology. Despite this advancement, the current image recognition method has not yet been able to classify Amaou strawberries into 10 quality grades. Additionally, the image recognition process is affected by image defects, varying light conditions, and shadows. To overcome these challenges, this study first conducted questionnaires to gather information on the ripeness of harvested strawberries as classified by humans. Based on the questionnaire results, maturity classification using the mode of hue was performed. The discrimination results are verified and reported here.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_140-Method_for_Ripeness_Classification_of_Harvested_Strawberries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Football Video Image Restoration Based on Generalized Equalized Fuzzy C-mean Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506139</link>
        <id>10.14569/IJACSA.2024.01506139</id>
        <doi>10.14569/IJACSA.2024.01506139</doi>
        <lastModDate>2024-06-29T11:26:06.8970000+00:00</lastModDate>
        
        <creator>Shaonan Liu</creator>
        
        <subject>Generalized equilibrium; fuzzy c-mean clustering algorithm; image restoration; local spatial information; adaptive edge protection factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the development of image processing techniques, the quality of visual content has become crucial for acquiring and analyzing information, especially in sports applications such as football match videos. Conventional image restoration techniques are limited in dealing with motion blur and noise interference, especially in maintaining edge information and texture details. To address these challenges, this study presents a generalized equalized fuzzy C-means clustering algorithm that incorporates fuzzy logic and cluster analysis by introducing local spatial information and adaptive edge protection factors. The algorithm optimizes the updating strategies of the membership function and the cluster centers to enhance detail preservation and noise suppression, aiming to improve the restoration quality of football video images. The results demonstrated that the average gradient ratio, edge strength, standard deviation, and information entropy of the designed algorithm were 1.77, 0.92, 0.26, and 1.73, respectively, significantly better than those of other algorithms, proving its superiority in image restoration. With the help of the generalized equalized fuzzy C-means clustering technique, football video images can be made clearer and more detailed, which also advances motion analysis and automatic identification technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_139-Football_Video_Image_Restoration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Hand Sign Recognition in Challenging Lighting Conditions Through Hybrid Edge Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506138</link>
        <id>10.14569/IJACSA.2024.01506138</id>
        <doi>10.14569/IJACSA.2024.01506138</doi>
        <lastModDate>2024-06-29T11:26:06.8800000+00:00</lastModDate>
        
        <creator>Fairuz Husna Binti Rusli</creator>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <creator>Syazmi Zul Arif Hakimi Saadon</creator>
        
        <creator>Muhammad Hamza Azam</creator>
        
        <subject>Critical lighting; edge detection; image recognition; image segmentation; sign language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Edge detection is essential for image processing and recognition. However, single methods struggle under challenging lighting conditions, limiting the effectiveness of applications like sign language recognition. This study aimed to improve edge detection under critical lighting for better sign language interpretation. The experiment compared conventional methods (Prewitt, Canny, Roberts, Sobel) with hybrid ones. Effectiveness was gauged across multiple evaluations using datasets portraying critical lighting conditions, tested on English alphabet hand signs with different threshold values. Evaluation metrics included pixel value improvement, algorithm processing time, and sign language recognition accuracy. The findings of this research demonstrate that combining the Prewitt and Sobel operators, as well as integrating Prewitt with Roberts, yielded superior edge quality and efficient processing times for hand sign recognition. The hybrid method excelled in backlight conditions at a threshold of 100 and in direct light conditions at a threshold of 150. By employing the hybrid method, pixel values improved notably by more than 100%, and hand sign recognition improved by up to 11.5%. Overall, the study highlighted the hybrid method&#39;s efficacy for hand sign recognition, offering a robust solution for lighting challenges. These findings not only advance image processing but also have significant implications for technology reliant on accurate segmentation and recognition, particularly in critical applications like sign language interpretation.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_138-Enhancing_Hand_Sign_Recognition_in_Challenging_Lighting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Appraising the Building Cooling Load via Hybrid Framework of Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506137</link>
        <id>10.14569/IJACSA.2024.01506137</id>
        <doi>10.14569/IJACSA.2024.01506137</doi>
        <lastModDate>2024-06-29T11:26:06.8500000+00:00</lastModDate>
        
        <creator>Longlong Yue</creator>
        
        <creator>Xiangli Liu</creator>
        
        <creator>Shiliang Chang</creator>
        
        <subject>K-nearest-neighbors; machine learning; cooling load prediction; African Vultures Optimization; Sand Cat Swarm Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The overarching objective of this study is a thorough evaluation of the effectiveness of K-nearest neighbors (KNN) models in precisely estimating building cooling load consumption. This assessment holds significant importance for the feasibility and reliability of implementing machine learning techniques, particularly the KNN algorithm, within the domain of building energy management. The evaluation centers on scrutinizing five distinct spatial metrics closely associated with the KNN algorithm. To refine and enhance the algorithm&#39;s predictive capabilities, test samples drawn from an extensive database are utilized. These test samples serve as valuable resources for augmenting the overall predictive accuracy of the model, ultimately leading to more robust and reliable predictions of cooling load consumption within building systems. Ultimately, the research endeavors to contribute substantially to advancing more energy-efficient and automated cooling system control strategies. The developed models encompass a single base model, a model optimized through the application of African Vultures Optimization, and a third model optimized using the Sand Cat Swarm Optimization technique. The training dataset comprises 70% of the data, with eight input variables relating to the geometric and glazing characteristics of the buildings; 15% of the dataset is used for validation and the remaining 15% for testing. An analysis of various evaluation metrics reveals that KNSC (K-Nearest Neighbors optimized with Sand Cat Swarm Optimization) demonstrates remarkable accuracy and stability among the three candidate models. It achieves substantial reductions in prediction Root Mean Square Error (RMSE) of 32.8% and 21.5% in comparison to the other two models (KNN and KNAV) and attains a maximum R2 value of 0.985 for cooling load prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_137-Appraising_the_Building_Cooling_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Optimization of Robot Environment Interaction Based on Asynchronous Advantage Actor-Critic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506136</link>
        <id>10.14569/IJACSA.2024.01506136</id>
        <doi>10.14569/IJACSA.2024.01506136</doi>
        <lastModDate>2024-06-29T11:26:06.8170000+00:00</lastModDate>
        
        <creator>Jitang Xu</creator>
        
        <creator>Qiang Chen</creator>
        
        <subject>A3C; robot environment interaction; intelligent technology; simulation testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA&#39;s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2024.01506136.retraction</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_136-Optimization_of_Robot_Environment_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Optimization Algorithms for Workflow Scheduling Based on Cloud Computing IaaS Environment in Industry Multi-Cloud Scenarios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506135</link>
        <id>10.14569/IJACSA.2024.01506135</id>
        <doi>10.14569/IJACSA.2024.01506135</doi>
        <lastModDate>2024-06-29T11:26:06.7870000+00:00</lastModDate>
        
        <creator>Cunbing Li</creator>
        
        <subject>Cloud computing; IaaS; scheduling model; evolutionary algorithms; heuristic model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The advancement of cloud computing has enabled workflow scheduling to provide users with more network resources. However, there are scheduling issues between resource allocation and user needs in workflows in IaaS environments. Accordingly, this study adopts a heuristic scheduling model based on deadlines and lists and constructs a deadline-based single-objective workflow scheduling model. Based on fuzzy-dominated sorting, traditional non-dominated sorting is improved to construct a time-cost dual-objective workflow scheduling model. By introducing evolutionary algorithms with a reliability index as a scheduling objective, a time-cost-reliability three-objective workflow scheduling model is constructed. The results showed that the total execution time of the single-objective workflow scheduling model in four standard workflows was 92s, 106s, 113s, and 105s, respectively. The throughput was 144b/s, 138b/s, 140b/s, and 142b/s, respectively, all of which were superior to other models. Compared with other comparative models, the dual-objective and three-objective workflow scheduling models had higher HV values, less execution time, and better Pareto frontier solutions. This study solves the three-objective time-cost-reliability scheduling problem in IaaS environments and offers a useful reference for resource scheduling on cloud platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_135-The_Application_of_Optimization_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Window NSGA-II Route Planning Algorithm for Home Care Appointment Scheduling in the Elderly Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506134</link>
        <id>10.14569/IJACSA.2024.01506134</id>
        <doi>10.14569/IJACSA.2024.01506134</doi>
        <lastModDate>2024-06-29T11:26:06.7700000+00:00</lastModDate>
        
        <creator>Guoping Xie</creator>
        
        <subject>FTWNSGA-II; aging in place; path planning; appointment scheduling; fuzzy time windows</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Given the lack of healthcare resources, the home care sector faces a serious challenge in figuring out how to maximize the effectiveness of healthcare employees&#39; services and raise consumer satisfaction. In this study, a model for healthcare worker scheduling and path planning is built. Fuzzy time window theory is used to discuss how to determine service duration and fuzzy service duration sub-situations. A path-planning algorithm based on a non-dominated ranking genetic algorithm is used to optimize the decision-making process. To analyze the aspects that affect the results of the model runs and use them as a foundation for effective planning recommendations, simulation experiments based on real data were conducted. According to the findings, customer demand under a defined service hour reaches a threshold of 343 before additional man-hour expenses start to accrue. Decision-makers must therefore make adequate staffing modifications before this happens. The appointment time window has a greater impact on customer satisfaction and can be suitably extended in the customer appointment interface to raise satisfaction. The -value, which can be calculated based on the carer&#39;s fuzzy service hours, high and low peak demand, and the percentage of urgent tasks, is related to the time cost and satisfaction under fuzzy service hours. The corresponding optimal -values are 0.6, 0.3, 0.6, and 0.6, which can balance the time cost and customer satisfaction in this scenario.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_134-Time_Window_NSGA_II_Route_Planning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data Sharing Privacy Protection Model Based on Federated Learning and Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506133</link>
        <id>10.14569/IJACSA.2024.01506133</id>
        <doi>10.14569/IJACSA.2024.01506133</doi>
        <lastModDate>2024-06-29T11:26:06.7400000+00:00</lastModDate>
        
        <creator>Fei Ren</creator>
        
        <creator>Zhi Liang</creator>
        
        <subject>Federated learning; blockchain; data sharing; privacy; reputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As the main driving force for social development in the new era, data sharing is controversial in terms of privacy and security. Traditional privacy protection methods struggle when faced with complex and massive shared data. Given this, firstly, the Byzantine consensus algorithm in blockchain technology was elaborated. Meanwhile, a decision tree algorithm was introduced for node classification optimization, and a new consensus algorithm was proposed. In addition, local data training and updating were achieved through federated learning, and a new data-sharing privacy protection model was proposed after jointly optimizing the consensus algorithms. The maximum throughput of the optimized consensus algorithm was 1560. The maximum consensus delay was 110 milliseconds. After multiple iterations, the removal rate of Byzantine nodes reached 56.6%. The optimal reputation value of the new data-sharing privacy protection model was 0.75. The lowest reputation value after 10 iterations was 0.32. As a result, the proposed model achieves excellent results in data-sharing privacy protection tasks, demonstrating high feasibility and effectiveness. The research aims to provide a reliable method for data-sharing privacy protection in the field.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_133-A_Data_Sharing_Privacy_Protection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Chaos Image Encryption System using Modification Logistic Map, Gingerbread Man and Arnold Cat Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506132</link>
        <id>10.14569/IJACSA.2024.01506132</id>
        <doi>10.14569/IJACSA.2024.01506132</doi>
        <lastModDate>2024-06-29T11:26:06.7230000+00:00</lastModDate>
        
        <creator>Lina Jamal Ibrahim</creator>
        
        <creator>John Bush Idoko</creator>
        
        <creator>Almuntadher M. Alwhelat</creator>
        
        <subject>Arnold cat map; confusion; diffusion; image encryption; modification logistic map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the field of security, information must be protected from unauthorized use because it contains a great deal of sensitive content, especially in images. Image encryption is now recognized as an outstanding strategy for protecting images from attackers. Despite numerous advancements, an efficient image encryption method remains essential to achieve high image security. Therefore, an accurate encryption algorithm requires a formidable random key generator with regeneration abilities, as well as a new strategy for confusion and diffusion with different processes. To accomplish these objectives, a framework for image encryption with three main phases has been created. Firstly, a new key generator was created with a high level of randomness based on different chaotic maps and the proposed Modification Logistic Map function. Secondly, the confusion phase has been proposed based on sorting the key generator output in ascending order and then permuting the image pixels according to the sorting key. Lastly, the diffusion phase has been presented based on the Gingerbread Man Method (GGM), the Arnold Cat Map (ACM) transform, and XOR between the confused image and the Arnold image. The ACM is used to remove flat areas from the image. Various parameters were used to assess the experimental results. In conclusion, it has been confirmed that the suggested image encryption approach is a solid success in the field of encryption.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_132-Robust_Chaos_Image_Encryption_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Network Security Assessment and Prediction Model Based on Improved K-means Clustering and Intelligent Optimization Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506131</link>
        <id>10.14569/IJACSA.2024.01506131</id>
        <doi>10.14569/IJACSA.2024.01506131</doi>
        <lastModDate>2024-06-29T11:26:06.7100000+00:00</lastModDate>
        
        <creator>Qianqian Wang</creator>
        
        <creator>Xingxue Ren</creator>
        
        <creator>Lei Li</creator>
        
        <creator>Huimin Peng</creator>
        
        <subject>K-means; cybersecurity; situational assessment; situational prediction; self-encoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Aiming at security problems in cyberspace, this study proposes a cyber security assessment and prediction model based on improved K-means and an intelligently optimized recurrent neural network. Firstly, based on the traditional self-encoder and K-means algorithm, a sparse self-encoder and the K-means++ algorithm are proposed to build a cyber security posture assessment model based on improved K-means. Then, a bidirectional gated recurrent unit is used for security posture prediction, a particle swarm optimization algorithm is utilized to enhance it, and prediction is performed jointly with a model based on a convolutional neural network. The results show that the proposed security assessment model can react quickly when a fault occurs and is not prone to misjudgment, with good stability. The accuracy of the security assessment model was 99.8%, the running time was 0.277 s, the recall rate was 96.67%, and the F1 score was 96.49%. The proposed security prediction model has the lowest mean absolute error and root mean square error, which are 0.18 and 0.30, respectively. The running times are relatively long, at 703.23 s and 787.46 s, but still within the acceptable range. The model-predicted posture values fit well with the actual posture values. In summary, the model constructed in this study has a good application effect and helps to ensure the security of cyberspace.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_131-Design_of_Network_Security_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Parking: An Efficient System for Parking and Payment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506130</link>
        <id>10.14569/IJACSA.2024.01506130</id>
        <doi>10.14569/IJACSA.2024.01506130</doi>
        <lastModDate>2024-06-29T11:26:06.6770000+00:00</lastModDate>
        
        <creator>Md Ezaz Ahmed</creator>
        
        <creator>Mohammad Arif</creator>
        
        <creator>Mohammad Khalid Imam Rahmani</creator>
        
        <creator>Md Tabrez Nafis</creator>
        
        <creator>Javed Ali</creator>
        
        <subject>Smart parking; Internet of Things; sensors; simulation; NS3; VANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In addition to being a time-consuming and annoying driving experience, searching for a cheap and empty parking space also wastes fuel and pollutes the air. In densely populated cities, public parking spaces are limited and expensive. On the other hand, private parking spaces are typically underutilized, and their owners are willing to charge higher parking fees to cover the expenses of maintaining their excess parking capacity. In light of these circumstances, it is essential to develop a smart parking system that aggregates private parking spaces to ease the pressure on public parking. An Internet of Things (IoT) enabled parking space recommendation system is proposed in this paper. It makes recommendations by utilizing IoT technology (traffic and parking sensors). The recommended system helps users automatically pick a spot at the lowest charge by accounting for metrics such as distance, slot availability, and charges. To accomplish this, the user parking cost is calculated using performance measures. This system provides the user with a way to request a parking spot when one is available, as well as a way to recommend a new parking lot if the present one is filled. Based on the simulation results, the proposed model reduces user waiting time and increases the likelihood of finding an empty parking slot, besides offering an anonymous payment method. The proposed system also exploits the concept of VANET, as it uses onboard and roadside units. The novelty of the research is that, apart from calculating the cost function, the system maintains a neighbors table at each node, which is shared among all nodes whenever there is a change. The environment was simulated in Network Simulator 3 (NS3).</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_130-Smart_Parking_An_Efficient_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Optimization of Reversible Information Hiding Image Encryption Algorithms in the Context of Electronic Information Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506129</link>
        <id>10.14569/IJACSA.2024.01506129</id>
        <doi>10.14569/IJACSA.2024.01506129</doi>
        <lastModDate>2024-06-29T11:26:06.6630000+00:00</lastModDate>
        
        <creator>Li Zhang</creator>
        
        <creator>Keke Shan</creator>
        
        <subject>Information security; reversible information hiding; key sensitivity; chaos system optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the widespread application of electronic information, in order to meet the growing security needs in the field of electronic information security, a new encryption algorithm based on a novel chaotic map with traversal and chaos characteristics has been proposed. By introducing a hash algorithm and a chaotic map, the randomness and nonlinear characteristics of the system are enhanced, and the confidentiality of data and the security of the system are improved. The encryption process includes generating chaotic sequences, constructing permutation boxes, and DNA encoding operations, ultimately generating cipher-text images with high randomness. Meanwhile, an information-hiding encryption algorithm with a four-dimensional conservative chaotic system is designed, which improves the randomness and initial-value sensitivity of the algorithm by introducing the chaotic system and optimizes reversible information hiding and image encryption. The algorithm includes chaotic system encryption, additional data embedding, a rearrangement strategy, and symmetric-structure data extraction and image restoration. The algorithm was robust to images with a 50% tampering degree, with an average peak signal-to-noise ratio of 31.26dB, demonstrating high key sensitivity. In the light home plot test, the peak signal-to-noise ratio reached 57.2dB. Under the same QF value but different embedding amounts, the signal-to-noise ratio of the algorithm was 46.9dB, which was superior to other algorithms, highlighting its outstanding performance across different challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_129-Design_and_Optimization_of_Reversible_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Enabled Decentralized Trustworthy Framework Envisioned for Patient-Centric Community Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506128</link>
        <id>10.14569/IJACSA.2024.01506128</id>
        <doi>10.14569/IJACSA.2024.01506128</doi>
        <lastModDate>2024-06-29T11:26:06.6470000+00:00</lastModDate>
        
        <creator>Mohammad Khalid Imam Rahmani</creator>
        
        <creator>Javed Ali</creator>
        
        <creator>Surbhi Bhatia Khan</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <subject>Blockchain; smart contract; externally owned accounts; decentralized trustworthy framework; community healthcare; Ethereum; supply chain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Ethereum has gained significant attention from businesses as a blockchain technology since its conception. Beyond the first use of cryptocurrencies, it provides many additional features. In the pharmaceutical sector, where reliable supply chains are necessary for cross-border transactions, Ethereum shows promise. Because of its decentralized structure, it addresses problems of quality, traceability, and transparency in a sector defined by complexity and strict regulation. As a result, this study looks at how Ethereum is used in the pharmaceutical sector, namely the networks that allow smart contracts to communicate with one another on the Ethereum network. The above concepts are formulated via communication networks, inter-contract owner interactions, and simulation analysis, which seeks to identify dubious practices and unjust contracts inside the supply chain. The study suggests effective manufacturing techniques that call for reduction rather than storage to technological obstacles. With this endeavor, we hope to provide insights into Ethereum-based contract ecosystems and assist in anomaly identification for enhanced security and transparency. The main objective is to support patient record methodology and transform the way healthcare data is managed. The suggested model integrates front-end interfaces, back-end optimization, distributed storage, proof-of-work techniques, and training to establish a safe and efficient ecosystem for healthcare data. These elements can be combined through the blockchain-enabled architecture to transform manufacturing-protecting chemicals in handling, distribution, and necessary training.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_128-Blockchain_Enabled_Decentralized_Trustworthy_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study Between Linear and Affine Multi-Model in Predictive Control of a Nonlinear Dynamic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506127</link>
        <id>10.14569/IJACSA.2024.01506127</id>
        <doi>10.14569/IJACSA.2024.01506127</doi>
        <lastModDate>2024-06-29T11:26:06.6170000+00:00</lastModDate>
        
        <creator>Houda Mezrigui</creator>
        
        <creator>Wassila Chagra</creator>
        
        <creator>Maher Ben Hariz</creator>
        
        <subject>Affine models; linear models; static characteristic; linear model predictive control; prediction horizon; tracking performances</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Model Predictive Control (MPC) is one of the most successful control strategies and has been applied in many areas. However, the success of an MPC scheme lies in the accuracy of the adopted prediction model. This paper treats the problem of MPC when a larger domain of set-point values and better tracking performance are needed. It presents a novel modeling structure for representing a nonlinear dynamic system based on its static nonlinear characteristic. Then, the Multiple Affine Model (MAM) structure is compared to Multiple Linear Models (MLM) in a Linear MPC (LMPC) scheme. It is noted that the MAM structure offers more modeling precision with a much smaller number of models. Therefore, it guarantees better tracking performance in terms of stability, speed, and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_127-A_Comparative_Study_Between_Linear_and_Affine_Multi_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Industrial Engineering Performance with Fuzzy CNN Framework for Efficiency and Productivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506126</link>
        <id>10.14569/IJACSA.2024.01506126</id>
        <doi>10.14569/IJACSA.2024.01506126</doi>
        <lastModDate>2024-06-29T11:26:06.5830000+00:00</lastModDate>
        
        <creator>Suraj Bandhekar</creator>
        
        <creator>Abdul Hameed Kalifullah</creator>
        
        <creator>Venkata Krishna Rao Likki</creator>
        
        <creator>Hatem S. A. Hamatta</creator>
        
        <creator>Deepa</creator>
        
        <creator>Tumikipalli Nagaraju Yadav</creator>
        
        <subject>Industrial Engineering performance; manufacturing industry; fuzzy-based convolutional neural network; fault diagnostic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In industrial engineering, efficiency is paramount. Convolutional Neural Networks (CNNs) are commonly used to identify and detect labour activity in industrial environments. Accurate fault detection is crucial for identifying and classifying defects in production. This research proposes a novel approach to enhancing industrial performance by predicting defects in manufacturing processes using a fuzzy-based CNN technique. The framework integrates cutting-edge fuzzy logic with CNNs, improving diagnostic model efficacy through fuzzy logic-based weight adjustments during training. Additionally, a novel fuzzy classification method is used for defect detection, followed by a demand forecast error simulation tailored to specific regions. The framework begins with initial training data, which is then combined with multiple classifiers to form a comprehensive dataset. The CNN, enhanced by fuzzy logic for weight updates, first employs fuzzy classification to diagnose errors, then simulates demand forecast errors regionally. This refined dataset is subsequently used as input for the CNN. Implementation in a manufacturing organization demonstrates the proposed framework&#39;s effectiveness, significantly improving fault diagnostic accuracy compared to traditional methods. By leveraging the latest advances in CNNs and fuzzy logic, the framework offers a robust solution for boosting industrial efficiency. This comprehensive approach to defect detection in industrial processes seamlessly integrates CNNs with fuzzy logic, highlighting the framework&#39;s utility and potential impact on industrial efficiency. The results underscore the viability of this innovative technology in enhancing industrial engineering performance.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_126-Optimizing_Industrial_Engineering_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated LSTM Model for Enhanced Anomaly Detection in Cyber Security: A Novel Approach for Distributed Threat</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506125</link>
        <id>10.14569/IJACSA.2024.01506125</id>
        <doi>10.14569/IJACSA.2024.01506125</doi>
        <lastModDate>2024-06-29T11:26:06.5670000+00:00</lastModDate>
        
        <creator>Aradhana Sahu</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>K. Aanandha Saravanan</creator>
        
        <creator>K. Thilagam</creator>
        
        <creator>Gunnam Rama Devi</creator>
        
        <creator>Adapa Gopi</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Federated learning; LSTM; anomaly detection; cyber security; distributed threats; privacy-preserving model training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Technological improvements have led to a rapid expansion of the digital realm, raising concerns about cyber security. The last ten years have seen an enormous rise in Internet applications, which has greatly raised the requirement for information network security. In the realm of cyber security, detecting anomalies efficiently and effectively is paramount to safeguarding digital assets and infrastructure. Traditional anomaly detection methods often struggle with the evolving landscape of cyber threats, particularly in distributed environments. To address this challenge, the research proposes a novel approach leveraging federated learning and Long Short-Term Memory (LSTM) networks. Federated learning permits training models across decentralised data sources without sacrificing data privacy, and LSTM networks are highly effective in identifying temporal correlations in sequential data, which makes them suitable for analysing cyber security time-series data. In this paper, the study presents a federated LSTM model architecture tailored for anomaly detection in distributed environments. By allowing model updates to be performed locally on individual devices or servers without sharing raw data, federated learning mitigates privacy concerns associated with centralized data aggregation. This decentralized approach not only safeguards sensitive information but also fosters collaboration among diverse stakeholders, empowering them to contribute to model improvement without relinquishing control over their data. Python software is used to implement the method. The research demonstrates its effectiveness through experiments on real-world cyber security datasets, showcasing improved detection rates compared to traditional methods. When compared to RNN, SVM, and CNN, the suggested Fed LSTM method exhibits superior accuracy at 98.9%, which is 2.28% higher. Additionally, the research discusses the practical implications and scalability of the approach, highlighting its potential to enhance cyber security measures in distributed threat scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_125-Federated_LSTM_Model_for_Enhanced_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fiber Tracking Method with Adaptive Selection of Peak Direction Based on CSD Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506124</link>
        <id>10.14569/IJACSA.2024.01506124</id>
        <doi>10.14569/IJACSA.2024.01506124</doi>
        <lastModDate>2024-06-29T11:26:06.5530000+00:00</lastModDate>
        
        <creator>Qian Zheng</creator>
        
        <creator>Kefu Guo</creator>
        
        <creator>Jiaofen Nan</creator>
        
        <creator>Lujuan Deng</creator>
        
        <creator>Junying Cheng</creator>
        
        <subject>Diffusion magnetic resonance imaging; constrained spherical deconvolution; fiber orientation distribution; fiber tractography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As a multi-fiber tracking model, the constrained spherical deconvolution (CSD) model is widely used in the field of fiber reconstruction. The CSD model has shown good reconstruction capabilities for crossing fibers in low-anisotropy regions, which can achieve more accurate results in terms of brain fiber reconstruction. However, current fiber tracking algorithms based on the CSD model have a few drawbacks in the selection of tracking strategies, especially in certain crossing regions, which may lead to isotropic diffusion signals, premature termination of fibers, high computational complexity, and low efficiency. In this study, we propose a fiber tracking method with adaptive selection of the peak direction based on the CSD model, called FTASP_CSD, for fiber reconstruction. The method first filters the fiber orientation distribution (FOD) peak threshold and eliminates peak directions lower than the set threshold. Secondly, a priority strategy is used to implement direction selection, and the tracking direction is adaptively adjusted according to the overall shape and needs of the FOD. Through dynamic selection of the maximum peak direction, the second maximum peak direction, and the nearest peak direction, the tracking direction that best matches the true fiber direction is found. This method not only ensures spatial consistency but also avoids the influence of stray peaks in the FOD that may be introduced by imaging noise on the fiber tracking direction. Experimental results on simulation and in vivo data show that the fiber bundles tracked by the FTASP_CSD method are much smoother in overall visual effect than those of state-of-the-art methods. The fiber bundles tracked in regions of crossing or bifurcating fibers are more complete. This improves the angular resolution of the recognition of fiber crossings and lays a foundation for further in-depth research on fiber tracking technology.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_124-Fiber_Tracking_Method_with_Adaptive_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Student Performance Through Classification with Gaussian Process Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506123</link>
        <id>10.14569/IJACSA.2024.01506123</id>
        <doi>10.14569/IJACSA.2024.01506123</doi>
        <lastModDate>2024-06-29T11:26:06.5200000+00:00</lastModDate>
        
        <creator>Xiaowei ZHANG</creator>
        
        <creator>Junlin YUE</creator>
        
        <subject>Academic performance; language; hybrid algorithms; Gaussian Process Classification; population-based Vortex Search Algorithm; COOT Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the contemporary educational landscape, proactively engaging in predictive assessment has become indispensable for academic institutions. This strategic imperative involves evaluating students based on their innate aptitude, preparing them adequately for impending examinations, and fostering both academic and personal development. Alarming statistics underscore a notable failure rate among students, particularly in language courses. This article employs predictive methodologies to assess and anticipate the academic performance of students in language courses during the G2 and G3 academic exams. The study utilizes the Gaussian Process Classification (GPC) model in conjunction with two optimization algorithms, the Population-based Vortex Search Algorithm (PVS) and the COOT Optimization Algorithm (COA), resulting in the GPPV and GPCO models. The classification of students into distinct performance categories based on their language scores reveals that the GPPV model exhibits the highest concordance between measured and predicted outcomes. In G2, the GPPV model achieved a notable 51.1% correct categorization of students as Poor, followed by 25.57% in the Acceptable category, 14.17% in the Good category, and 7.7% in the Excellent category. This performance surpasses both the optimized GPCO model and the singular GPC model, signifying its efficacy in predictive analysis and educational advancement.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_123-Predictive_Modeling_of_Student_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Diagnosis of Polycystic Ovarian Syndrome using Machine Learning and Multimodal Data Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506122</link>
        <id>10.14569/IJACSA.2024.01506122</id>
        <doi>10.14569/IJACSA.2024.01506122</doi>
        <lastModDate>2024-06-29T11:26:06.5070000+00:00</lastModDate>
        
        <creator>Nethra Sai M</creator>
        
        <creator>Sakthivel V</creator>
        
        <creator>Prakash P</creator>
        
        <creator>Vishnukumar K</creator>
        
        <creator>Dugki Min</creator>
        
        <subject>PCOS; ultrasound images; clinical data; feature extraction; classification; Rotterdam criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Polycystic Ovary Syndrome (PCOS) is a common endocrine disorder that affects women in their reproductive years, characterized by irregular menstrual cycles, hyperandrogenism, and polycystic ovaries. PCOS presents significant challenges in diagnosis due to its heterogeneous nature and varied clinical manifestations. This project aimed to develop a comprehensive system for PCOS detection, integrating ultrasound images and clinical data through advanced machine learning techniques and using the Rotterdam criteria for diagnostic decisions. Feature extraction from ultrasound images was conducted using the ResNet-50 deep learning model, while clinical data underwent correlation-based feature selection. Three classification algorithms - Support Vector Machine (SVM), Random Forest, and Logistic Regression - were used to categorize the extracted features from ultrasound images. The integration of image-based and clinical-based features was explored and evaluated, revealing its potential to enhance PCOS diagnosis accuracy. The developed system holds promise for assisting doctors in PCOS diagnosis, offering a holistic approach that leverages both imaging and clinical information.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_122-Advanced_Diagnosis_of_Polycystic_Ovarian_Syndrome.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sleep Apnea and Rapid Eye Movement Detection using ResNet-50 and Gradient Boost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506121</link>
        <id>10.14569/IJACSA.2024.01506121</id>
        <doi>10.14569/IJACSA.2024.01506121</doi>
        <lastModDate>2024-06-29T11:26:06.4730000+00:00</lastModDate>
        
        <creator>Ganti Venkata Varshini</creator>
        
        <creator>Sakthivel V</creator>
        
        <creator>Prakash P</creator>
        
        <creator>Mansoor Hussain D</creator>
        
        <creator>Jae Woo Lee</creator>
        
        <subject>Sleep Apnea; Rapid Eye movement; ResNet-50; Gradient boost; sleep stage; sleep disorders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Sleep apnea is a prevalent sleep disorder marked by interruptions in breathing or superficial breaths while asleep. This frequently results in disrupted sleep patterns and can pose significant health risks such as cardiovascular issues and daytime exhaustion. The Rapid Eye Movement (REM) sleep stage is easily identifiable due to rapid eye movements, intense dreaming, and muscle immobility. This stage is vital for cognitive processes, the strengthening of memories, and the regulation of emotions. Detection of REM sleep is essential for understanding sleep architecture and diagnosing various sleep disorders. This paper proposes two machine learning models to detect these disorders from physiological signals. The study employs the Apnea-ECG dataset from PhysioNet for sleep apnea detection and the Sleep-EDF dataset for REM detection. For sleep apnea, a ResNet-50 deep learning model is adapted to process ECG signals, treating them as image-like representations. ResNet-50 is trained on the Apnea-ECG dataset, which provides annotated electrocardiogram recordings for supervised learning. For REM detection, Gradient Boosting, an ensemble machine learning technique, is applied to EEG signals from the Sleep-EDF dataset. Relevant features associated with REM sleep phases are extracted from EEG signals and used to train the model. This paper contributes to automated sleep disorder diagnosis by presenting tailored machine learning models for detecting sleep apnea and REM from physiological signals.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_121-Sleep_Apnea_and_Rapid_Eye_Movement_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing Machine Learning and Meta-Heuristic Algorithms for Accurate Cooling Load Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506119</link>
        <id>10.14569/IJACSA.2024.01506119</id>
        <doi>10.14569/IJACSA.2024.01506119</doi>
        <lastModDate>2024-06-29T11:26:06.4430000+00:00</lastModDate>
        
        <creator>Yanfang Zhang</creator>
        
        <subject>Building energy; cooling load; machine learning; Gaussian Process Regression; Improved Manta-Ray Foraging Optimizer; Weevil Damage Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Precisely calculating the cooling load is essential to improving the energy efficiency of cooling systems, as well as maximizing the performance of chillers and air conditioning controls. Machine learning (ML) offers capabilities in this area that conventional techniques and regression analysis lack. ML models are capable of automatically recognizing complex patterns influenced by various factors, including occupancy, building materials, and weather. They enable responsive predictions that enhance energy optimization and efficient building management because they scale well with data and adapt to changing scenarios. This research acknowledges the difficulties presented by the intricacies of energy optimization while exploring the intricate world of cooling load systems; solving these issues requires in-depth research and creative approaches to problem-solving. The Weevil Damage Optimization Algorithm (WDOA) and the Improved Manta-Ray Foraging Optimizer (IMRFO) are two meta-heuristic algorithms that are seamlessly combined with the Gaussian Process Regression (GPR) model in this study to increase accuracy. Previous stability tests have provided extensive validation for the cooling load data used in these algorithms. The research presents three different models, each of which offers important insights for precise cooling load prediction: GPWD, GPIM, and an independent GPR model. With an RMSE value of 1.004 and an impressive R2 value of 0.990, the GPWD model stands out as the best performer among these models. These remarkable outcomes demonstrate the outstanding precision of the GPWD model in forecasting the cooling load, highlighting its applicability to actual building management situations.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_119-Harnessing_Machine_Learning_and_Meta_Heuristic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Complementary Empirical Ensemble Mode Decomposition Method for Respiration Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506120</link>
        <id>10.14569/IJACSA.2024.01506120</id>
        <doi>10.14569/IJACSA.2024.01506120</doi>
        <lastModDate>2024-06-29T11:26:06.4430000+00:00</lastModDate>
        
        <creator>Xiangkui Wan</creator>
        
        <creator>Wenxin Gong</creator>
        
        <creator>Yunfan Chen</creator>
        
        <creator>Yang Liu</creator>
        
        <subject>ECG; white gaussian noise; complementary ensemble empirical mode decomposition; ECG-derived respiration (EDR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Respiration monitoring is essential for diagnosing and managing a variety of diseases, and deriving breathing from ECG signals is a non-invasive, convenient, and effective method. This paper proposes a new complementary ensemble empirical mode decomposition (NCEEMD) method for respiration extraction. By an additional ensemble empirical mode decomposition (EEMD) of the auxiliary white Gaussian noise, the noise residue in the corresponding respiratory band after EEMD decomposition of the original ECG signal is subtracted. The new IMF is selected for correlation analysis with the measured respiratory signal, and the optimal amplitude noise coefficient is determined adaptively by the principle of maximum correlation increment. The IMFs in the respiratory band are then selected to reconstruct the respiratory signal, which is the ECG-derived respiration (EDR). A comparative experiment on respiration extraction was conducted using data from the MIT-BIH Polysomnographic database. The experimental results show that, compared with the complementary ensemble empirical mode decomposition (CEEMD) method, the proposed EDR extraction method reduces the average MSE by 3.95%, RMSE by 2.74%, and MAE by 2.52%, and the physical significance of the IMF components is more explicit. This method has good accuracy, robustness, and adaptability, and provides a new approach to the extraction of respiratory signals.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_120-A_New_Complementary_Empirical_Ensemble_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum-Enhanced Security Advances for Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506118</link>
        <id>10.14569/IJACSA.2024.01506118</id>
        <doi>10.14569/IJACSA.2024.01506118</doi>
        <lastModDate>2024-06-29T11:26:06.4130000+00:00</lastModDate>
        
        <creator>Devulapally Swetha</creator>
        
        <creator>Shaik Khaja Mohiddin</creator>
        
        <subject>Quantum-enhanced security; cloud computing; quantum key distribution; advanced encryption standard; key management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Recent developments in quantum-enhanced security have shown encouraging promise for enhancing the security of cloud computing environments. Utilizing quantum physics, in particular Quantum Key Distribution (QKD), provides a new method for generating cryptographic keys and improves cloud data transport security. The present study offers a thorough investigation of the integration of QKD with conventional encryption techniques, including the Advanced Encryption Standard (AES), in order to tackle the dynamic cyber security scenario in cloud computing. The approach entails combining AES for encryption and decryption procedures and establishing a QKD layer within the cloud architecture to produce true quantum keys utilizing Quantum in Cloud technology. Data transmission security is greatly improved by the smooth integration of AES with QKD-generated keys, guaranteeing confidentiality, integrity, and authenticity. In addition, strong key management practices are put in place to handle cryptographic keys safely at every stage of their lifespan, reducing the possibility of unwanted access or interception. The suggested approach successfully addresses the difficulties presented by cyber threats by offering a robust and flexible means of enhancing security in cloud-based systems. Using both traditional and quantum encryption methods, this strategy provides a strong barrier against cyber-attacks, data leaks, and other security flaws. After 70 simulation rounds, the suggested strategy, implemented as the QKD-AES framework in Python, achieved a data access rate of 820 MB/s, providing an accurate and quantitative assessment of performance under simulated conditions. A key generation time of 15 milliseconds was achieved, guaranteeing the quick creation of secure cryptographic keys in cloud environments. Overall, there is substantial potential in using quantum-enhanced security techniques to protect sensitive data and guarantee the integrity of cloud computing infrastructures.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_118-Quantum_Enhanced_Security_Advances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Machine Learning and Deep Learning Approaches for the Detection of Cyberbullying Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506117</link>
        <id>10.14569/IJACSA.2024.01506117</id>
        <doi>10.14569/IJACSA.2024.01506117</doi>
        <lastModDate>2024-06-29T11:26:06.3800000+00:00</lastModDate>
        
        <creator>Aiymkhan Ostayeva</creator>
        
        <creator>Zhazira Kozhamkulova</creator>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Yerkebulan Aimakhanov</creator>
        
        <creator>Dina Abylkhassenova</creator>
        
        <creator>Aisulu Serik</creator>
        
        <creator>Kuralay Turganbay</creator>
        
        <creator>Yegenberdi Tenizbayev</creator>
        
        <subject>Machine learning; cyberbullying; feature engineering; feature extraction; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This research paper delves into the intricate domain of cyberbullying detection on social media, addressing the pressing issue of online harassment and its implications. The study encompasses a comprehensive exploration of key aspects, including data collection and preprocessing, feature engineering, machine learning model selection and training, and the application of robust evaluation metrics. The paper underscores the pivotal role of feature engineering in enhancing model performance by extracting relevant information from raw data and constructing meaningful features. It highlights the versatility of supervised machine learning techniques such as Support Vector Machines, Na&#239;ve Bayes, Decision Trees, and others in the context of cyberbullying detection, emphasizing their ability to learn patterns and classify instances based on labeled data. Furthermore, it elucidates the significance of evaluation metrics like accuracy, precision, recall, F1-score, and AUC-ROC in quantitatively assessing model effectiveness, providing a comprehensive understanding of the model&#39;s performance across different classification tasks. By providing valuable insights and methodologies, this research contributes to the ongoing efforts to combat cyberbullying, ultimately promoting safer online environments and safeguarding individuals from the pernicious effects of online harassment.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_117-Utilizing_Machine_Learning_and_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Cultural Language Proficiency Scaling using Transformer and Attention Mechanism Hybrid Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506116</link>
        <id>10.14569/IJACSA.2024.01506116</id>
        <doi>10.14569/IJACSA.2024.01506116</doi>
        <lastModDate>2024-06-29T11:26:06.3670000+00:00</lastModDate>
        
        <creator>Anna Gustina Zainal</creator>
        
        <creator>M. Misba</creator>
        
        <creator>Punit Pathak</creator>
        
        <creator>Indrajit Patra</creator>
        
        <creator>Adapa Gopi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Prema S</creator>
        
        <subject>Cross-cultural; language proficiency; transformer; attention mechanism; hybrid model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Assessing language competency in a variety of linguistic and cultural situations requires a cross-cultural language proficiency scale. This study suggests a hybrid model that takes cross-cultural characteristics into account and successfully scales language competency by combining a Transformer design with attention mechanisms. The approach seeks to improve the precision and consistency of language competency evaluation by capturing both cross-cultural subtleties and linguistic context. The suggested hybrid model is made up of several essential parts. To capture semantic information, the incoming text is first tokenized into subword units and then transformed into embeddings using word2vec, a pre-trained word embedding algorithm. The contextual information is then extracted from the input sequence using a Transformer encoder stack, which uses multi-head self-attention to focus on distinct textual elements. In addition to the Transformer encoder, an attention mechanism layer (or layers) particularly tailored to attend to cross-cultural traits is introduced. By learning cross-cultural patterns and links between various languages or cultural settings, this attention mechanism improves the model&#39;s comprehension and incorporation of cross-cultural subtleties. A representation that blends linguistic context and cross-cultural elements is produced by fusing the outputs of the Transformer encoder and the cross-cultural attention mechanism layer(s). This fused representation is subsequently passed to a classifier in order to forecast language competency levels. The hybrid model uses categorical cross-entropy as the objective function and is trained on a variety of datasets that span several languages and cultural situations. The suggested work is implemented in Python and achieves an accuracy of 97.3%, outperforming the T-TC-INT model and BERT + MECT.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_116-Cross_Cultural_Language_Proficiency_Scaling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Navigating XRP Volatility: A Deep Learning Perspective on Technical Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506115</link>
        <id>10.14569/IJACSA.2024.01506115</id>
        <doi>10.14569/IJACSA.2024.01506115</doi>
        <lastModDate>2024-06-29T11:26:06.3500000+00:00</lastModDate>
        
        <creator>Susrita Mahapatro</creator>
        
        <creator>Prabhat Kumar Sahu</creator>
        
        <creator>Asit Subudhi</creator>
        
        <subject>Cryptocurrency; ripple; convolutional neural network; gated recurrent unit; technical indicators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Cryptocurrencies have dramatically reshaped the landscape of financial transactions, enabling seamless cross-border exchanges without centralized oversight. This revolutionary shift, powered by blockchain technology, has democratized currency control, entrusting it to a widespread network of participants rather than a single entity. Originating from Satoshi Nakamoto&#39;s introduction of Bitcoin, this digital currency model operates on a decentralized framework, contrasting starkly with traditional, centrally governed monetary systems. This research delves into forecasting the price of Ripple (XRP) by leveraging advanced deep-learning approaches and various technical indicators. Through meticulous preprocessing of data and the application of neural networks, particularly a convolutional neural network-gated recurrent unit hybrid model, this study achieves remarkable precision in its predictions. Technical indicators further refined these forecasts, highlighting the effective collaboration between machine learning techniques and financial market analysis. Despite the volatile nature of the cryptocurrency market, this work makes a substantial contribution to the field of cryptocurrency prediction strategies, advocating for further investigation into the effects of macroeconomic factors and the utilization of more extensive datasets to deepen our understanding of market dynamics.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_115-Navigating_XRP_Volatility_A_Deep_Learning_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BrainLang DL: A Deep Learning Approach to FMRI for Unveiling Neural Correlates of Language across Cultures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506114</link>
        <id>10.14569/IJACSA.2024.01506114</id>
        <doi>10.14569/IJACSA.2024.01506114</doi>
        <lastModDate>2024-06-29T11:26:06.3170000+00:00</lastModDate>
        
        <creator>A. Greeni</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>G. Venkata Krishna</creator>
        
        <creator>G. Vikram</creator>
        
        <creator>Kuchipudi Prasanth Kumar</creator>
        
        <creator>Ravikiran K</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Long Short-Term Memory; Gated Recurrent Unit; deep learning; functional magnetic resonance imaging; language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Employing deep learning techniques on fMRI data enables the exploration of universal and culturally specific neural correlates underlying language processing across diverse populations. The study presents &quot;BrainLang DL,&quot; a novel deep learning (DL) approach leveraging functional Magnetic Resonance Imaging (fMRI) data to unveil neural correlates of language processing across diverse cultural backgrounds. To bridge the knowledge gap in the universal and culture-specific aspects of language processing, we engaged participants from various cultural groups in a series of linguistic tasks while recording their brain activity using fMRI. Our rigorous data preprocessing pipeline included steps such as motion correction, slice timing correction, and spatial smoothing to enhance data quality for subsequent analysis. For feature extraction, the research utilized the Crocodile Hunting Optimization (CHO) algorithm to pinpoint critical brain regions and connectivity patterns linked to language functions. To capture the temporal dynamics of neural activity related to language processing, we deployed advanced recurrent neural networks, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models. These techniques enabled us to unravel how linguistic information is encoded and processed over time. Our findings reveal both common and unique neural activation patterns in language processing across different cultures. Universally shared neural mechanisms highlight the fundamental aspects of language processing, while distinct variations underscore the influence of cultural context on brain activity. By integrating DL with fMRI analysis, our study provides a nuanced understanding of the neural correlates of language across cultures, revealing both shared neural mechanisms underlying language processing across diverse populations and culturally specific variations in brain activation patterns. These findings contribute to a more comprehensive understanding of the neural basis of language and its modulation by cultural factors. Ultimately, our approach offers insights into the complex interplay between language, cognition, and culture, with implications for fields such as linguistics, neuroscience, and cross-cultural psychology.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_114-BrainLang_DL_A_Deep_Learning_Approach_to_FMRI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Healthcare Anomaly Detection: Integrating GANs with Attention Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506113</link>
        <id>10.14569/IJACSA.2024.01506113</id>
        <doi>10.14569/IJACSA.2024.01506113</doi>
        <lastModDate>2024-06-29T11:26:06.3030000+00:00</lastModDate>
        
        <creator>Thakkalapally Preethi</creator>
        
        <creator>Afsana Anjum</creator>
        
        <creator>Anjum Ara Ahmad</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Generative Adversarial Networks (GANs); Convolutional Block Attention Module (CBAM); anomaly detection; attention mechanism; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Early illness diagnosis, treatment monitoring, and healthcare administration all depend heavily on the identification of abnormalities in medical data. This paper proposes a unique way to improve healthcare anomaly detection through the integration of attention mechanisms and Generative Adversarial Networks (GANs). By integrating GANs, artificial data that closely mimics the distributions of actual healthcare data may be produced, thereby supplementing the dataset and strengthening the resilience of anomaly detection algorithms. Simultaneously, the Convolutional Block Attention Module (CBAM) focuses the model&#39;s attention on useful characteristics present in the data, thereby augmenting its capacity to identify minute deviations from the norm. The suggested method is assessed using a large dataset from healthcare settings that includes both typical and unusual cases. When compared to current techniques, the results show notable gains in anomaly detection performance. The model also shows resilience to noise, small abnormalities, and class imbalance, indicating its potential for practical clinical applications. The suggested strategy has the potential to improve clinical decision-making and patient care by giving doctors faster, more precise insights into anomalous health states. With an accuracy of around 99.12%, the suggested GAN-CBAM model is implemented in Python and outperforms current techniques such as Gaussian Distribution Anomaly detection (GDA), Augmented Time Regularized GAN (ATR-GAN), and Convolutional Long Short-Term Memory (ConvLSTM) by 2.97%. With potential benefits for bettering patient outcomes and the effectiveness of the healthcare system, the suggested strategy is a major step forward in the improvement of anomaly identification in the field of medicine.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_113-Advancing_Healthcare_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Modal CNN-based Approach for COVID-19 Diagnosis using ECG, X-Ray, and CT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506112</link>
        <id>10.14569/IJACSA.2024.01506112</id>
        <doi>10.14569/IJACSA.2024.01506112</doi>
        <lastModDate>2024-06-29T11:26:06.2700000+00:00</lastModDate>
        
        <creator>Kumar Keshamoni</creator>
        
        <creator>L Koteswara Rao</creator>
        
        <creator>D. Subba Rao</creator>
        
        <subject>COVID-19 Diagnosis; Multi-Modality Imaging; Convolutional Neural Networks (CNN); CT imaging; Gaussian filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Controlling the spread of Coronavirus Disease 2019 (COVID-19) and reducing its impact on public health require prompt identification and treatment. To improve diagnostic accuracy, this study develops and assesses a Multi-Modality COVID-19 Diagnosis System that integrates X-ray, Electrocardiogram (ECG), and Computed Tomography (CT) images using Convolutional Neural Network (CNN) algorithms. The suggested system incorporates data from multiple imaging modalities in a novel way, including cardiac symptoms identified from ECG data, an approach that has not been thoroughly studied in the literature to date. The system analyses CT, ECG, and X-ray images using CNN algorithms, including Visual Geometry Group 19 (VGG19) and Deep Convolutional Networks (DCNN). While ECG data helps detect related cardiac symptoms, CT and X-ray images offer precise insights into lung abnormalities indicative of COVID-19 pneumonia. Noise reduction and image smoothing are accomplished through Gaussian filtering. After extracting characteristics suggestive of either bacterial or viral pneumonia, a deep neural network refines them for accurate COVID-19 identification. The system is implemented in Python. A thorough evaluation of the trained CNN model using separate datasets revealed a remarkable 99.12% accuracy rate in COVID-19 detection from chest imaging data. The diagnostic accuracy of the suggested DCNN model was much higher than that of current models, including Random Forest and Linear Ridge. The Multi-Modality COVID-19 Diagnosis System uses cutting-edge CNN algorithms to seamlessly combine ECG, X-ray, and CT imaging data into a highly accurate diagnostic tool. With this approach, medical personnel could potentially diagnose COVID-19 more quickly and accurately, improving the disease&#39;s treatment and control.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_112-A_Multi_Modal_CNN_based_Approach_for_COVID_19_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Framework for Evaluating Financial Market Price: An Analysis of the Hang Seng Index Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506111</link>
        <id>10.14569/IJACSA.2024.01506111</id>
        <doi>10.14569/IJACSA.2024.01506111</doi>
        <lastModDate>2024-06-29T11:26:06.2400000+00:00</lastModDate>
        
        <creator>Runhua Liu</creator>
        
        <creator>Zhengfeng Yang</creator>
        
        <creator>Juan Su</creator>
        
        <creator>Yu Cao</creator>
        
        <subject>Efficient market; Hang Seng Index; stock forecasting; support vector regression; Aquila optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The accurate prediction of financial outcomes presents a considerable challenge as a result of the intricate interaction of economic fundamentals, market dynamics, and investor psychology. Accurately forecasting stock prices in the securities market is particularly difficult owing to the non-stationarity, non-linearity, and significant volatility of stock price time series. Conventional approaches can enhance the precision of predictive modeling, but they also introduce computational complexity, potentially increasing the likelihood of prediction errors. This work introduces a methodology that addresses these issues by integrating support vector regression with the Aquila optimizer. The results of this investigation suggest that, compared to the other models, the hybrid model performed better and was more effective. The proposed model performed at an ideal level and demonstrated a significant level of effectiveness, with few errors. Hang Seng Index data covering the years 2015 through 2023 was analyzed to assess the predictive model&#39;s accuracy in stock price forecasting. The results show that the proposed framework performs well and is reliable when analyzing and predicting the price time series of equities. Empirical data suggests that, in comparison to other methods presently in use, the suggested model forecasts outcomes with a higher degree of accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_111-A_Hybrid_Framework_for_Evaluating_Financial_Market_Price.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Life Insurance Early Claim Detection Modeling by Considering Multiple Features Transformation Strategies for Higher Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506110</link>
        <id>10.14569/IJACSA.2024.01506110</id>
        <doi>10.14569/IJACSA.2024.01506110</doi>
        <lastModDate>2024-06-29T11:26:06.2100000+00:00</lastModDate>
        
        <creator>Tham Hiu Huen</creator>
        
        <creator>Lim Tong Ming</creator>
        
        <subject>Machine learning; feature selection; life insurance; binary classification; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Early claims in the life insurance sector can lead to significant financial losses if not properly managed. This paper experiments with a number of feature-engineering strategies, such as value regrouping, over- and undersampling, and encoding, that aim to enhance early claim detection, considering five (5) different machine learning algorithms. Utilizing the built-in feature importance from Random Forest, along with regrouping and correlation techniques, we identify the top seven (7) most significant features from a total of 800 feature candidates. Our proposed strategy provides a streamlined and effective way to focus on the most relevant features, thereby improving the accuracy and precision of early claim predictive models for the life insurance domain. The results of this study offer practical insights into reducing fraudulent claims and mitigating financial risk. We used Random Forest alongside techniques such as LightGBM, XGBoost, Feed Forward Neural Network, and CatBoost to train our model and achieved a maximum accuracy of 0.92 across three samples, indicating that our approach can effectively identify critical features and produce reliable results.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_110-A_Study_on_Life_Insurance_Early_Claim_Detection_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Path Planning Model Based on Improved Ant Colony Optimization Algorithm on Green Traffic Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506109</link>
        <id>10.14569/IJACSA.2024.01506109</id>
        <doi>10.14569/IJACSA.2024.01506109</doi>
        <lastModDate>2024-06-29T11:26:06.1770000+00:00</lastModDate>
        
        <creator>Huan Yu</creator>
        
        <subject>Ant colony optimization; A*; path planning; obstacle avoidance; traffic control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In response to the demand for green city construction, low-carbon travel standards have been further implemented. This research focuses on intelligent transportation management and designs path planning algorithms. Firstly, the basic model of the proposed ant colony optimization algorithm was constructed. To address the poor convergence of traditional algorithms, a rollback strategy was introduced to optimize the model&#39;s tabu list. Subsequently, to handle the dynamic obstacle avoidance problem in practical applications, an optimized A* algorithm was studied and applied to global path planning. The improved ant colony algorithm was applied to local obstacle avoidance planning, further enhancing the accuracy and practicality of the approach. In simulation analysis, facing more complex simulation environments, this method achieved better obstacle avoidance path planning: the average number of search nodes decreased by 6, the average search time decreased by 4.11%, and the average path length decreased by 22.07%. In summary, the ant colony optimization algorithm designed in this research is better suited to path planning needs in different scenarios, with the best overall performance. It can plan the shortest driving path while ensuring precise obstacle avoidance, helping to achieve green traffic management.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_109-The_Impact_of_Path_Planning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three-Dimensional Animation Capture Driver Technology for Digital Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506108</link>
        <id>10.14569/IJACSA.2024.01506108</id>
        <doi>10.14569/IJACSA.2024.01506108</doi>
        <lastModDate>2024-06-29T11:26:06.1630000+00:00</lastModDate>
        
        <creator>Wanjie Dong</creator>
        
        <subject>3D animation; computer vision; motion matching algorithm; human 3D skeletal model; motion capture technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>For motion-capture-driven three-dimensional animation, this study combines skeleton extraction methods with human motion pose data to construct the skeletons of three-dimensional animated characters. Combining matching algorithms and action recognition techniques, the postures of the human three-dimensional model were tested and analyzed. The experimental results showed that the level-set central clustering method extracted shoulder joint position values of 0.26, 0.24, 0.28, and 0.21 in the four models, respectively. Its error was the smallest among the skeleton extraction algorithms, indicating that this algorithm extracted human skeleton information with high accuracy. In addition, the depth information of human joint points was compared using the parallax ranging method, with a maximum error of 1.57%. This further demonstrated that the three-dimensional joint coordinates were relatively accurate, which also proved the effectiveness of the binocular stereo vision system. The system had an accuracy of over 80% in recognizing joint rotation information and dynamic movements in the human three-dimensional model. Finally, the highest accuracy of inertial sensors in capturing human movements was 97%, indicating the strength of digital media in capturing three-dimensional animation. This also provides a theoretical basis and technical reference for animation production and related fields.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_108-Three_Dimensional_Animation_Capture_Driver_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Deep Neural Network Classifier for EEG Emotional Brain Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506107</link>
        <id>10.14569/IJACSA.2024.01506107</id>
        <doi>10.14569/IJACSA.2024.01506107</doi>
        <lastModDate>2024-06-29T11:26:06.1300000+00:00</lastModDate>
        
        <creator>Mahmoud A. A. Mousa</creator>
        
        <creator>Abdelrahman T. Elgohr</creator>
        
        <creator>Hatem A. Khater</creator>
        
        <subject>BCI; EEG; Brain Signals Classification; SVM; LSTM; CNN; CNN-LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The field of brain computer interface (BCI) is one of the most exciting areas of scientific research, as it can intersect with any field that requires intelligent control, especially the medical industry. There are many ways to collect a dataset of brain signals, the most important of which is the non-invasive EEG method. The collected data must be classified, and the features driving its variation selected, before it can be used in different control applications. Because some BCI applications demand high accuracy and speed to keep pace with the motion sequences of their environment, this paper explores the classification of brain signals for use as control signals in brain computer interface research, with the aim of integrating them into different control systems. The objective of the study is to investigate EEG brain signal classification using different techniques such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), as well as the machine learning approach represented by the Support Vector Machine (SVM). We also present a novel hybrid classification technique called CNN-LSTM which combines CNNs with LSTM networks. This proposed model processes the input data through one or more of the CNN’s convolutional layers to identify spatial patterns, and the output is fed into the LSTM layers to capture temporal dependencies and sequential patterns. This combination uses CNNs’ spatial feature extraction and LSTMs’ temporal modelling to achieve high efficacy across domains. A test was done to determine the most effective approach for classifying emotional brain signals that indicate the user&#39;s emotional state. The dataset used in this research was generated from a widely available MUSE EEG headset with four dry extra-cranial electrodes. The comparison came out in favor of the proposed hybrid model (CNN-LSTM) in first place, with an accuracy of 98.5% and a step speed of 244 milliseconds/step; the CNN model came in second place with an accuracy of 98.03% and a step speed of 58 milliseconds/step; in third place, the LSTM model recorded an accuracy of 97.35% and a step speed of 2 sec/step; finally, in last place, SVM came in with 87.5% accuracy and a running speed of 39 milliseconds/step.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_107-A_Novel_Hybrid_Deep_Neural_Network_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Prediction of Student Performance by Integrating a Random Forest Classifier with Meta-Heuristic Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506106</link>
        <id>10.14569/IJACSA.2024.01506106</id>
        <doi>10.14569/IJACSA.2024.01506106</doi>
        <lastModDate>2024-06-29T11:26:06.1000000+00:00</lastModDate>
        
        <creator>Chao Ma</creator>
        
        <subject>Classification; student performance; machine learning; Random Forest Classifier; Electric Charged Particles Optimization; Artificial Rabbits Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Anticipating student performance in higher education is crucial for informed decision-making and the reduction of dropout rates. This study focuses on the intricate analysis of diverse educational datasets using machine learning, particularly emphasizing dimensionality reduction. The aim is to empower educators with data-driven insights, enabling timely interventions for academic improvement. By categorizing individuals based on their inherent aptitudes, the study seeks to mitigate failure rates and enhance the overall educational experience. The integration of predictive modeling, particularly employing the robust Random Forest Classifier (RFC), allows the academic community to proactively address challenges and foster a supportive learning environment, thereby improving student outcomes. To bolster predictive capabilities, the study adopts the RFC model and enhances its efficacy through advanced optimization algorithms, specifically Electric Charged Particles Optimization (ECPO) and Artificial Rabbits Optimization (ARO). These sophisticated algorithms are strategically integrated to refine decision-making processes and enhance predictive precision. Furthermore, the input variables have been analyzed to assess their individual impact on student performance; this analysis can help institutions identify and address areas for improvement in their management practices. The study&#39;s use of state-of-the-art machine learning and bio-inspired algorithms underscores its dedication to achieving precise and resilient predictions of the performance of 4424 students, ultimately contributing to the advancement of educational outcomes. The research outcomes highlight the superiority of the RFEC model (RFC optimized through ECPO), which aligns closely with actual measured values, affirming its predictive accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_106-Improving_the_Prediction_of_Student_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Application of GAN in the Image Recognition of Wheat Diseases and Insect Pests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506105</link>
        <id>10.14569/IJACSA.2024.01506105</id>
        <doi>10.14569/IJACSA.2024.01506105</doi>
        <lastModDate>2024-06-29T11:26:06.0830000+00:00</lastModDate>
        
        <creator>Bing Li</creator>
        
        <creator>Shaoqing Yang</creator>
        
        <creator>Zeqiang Wang</creator>
        
        <subject>Deep Learning; Identification of diseases and insect pests; Image classification; System development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>“Food is the most important thing for the people.” Food is intricately linked to both the national economy and the livelihood of the people, serving as a vital material for our daily existence. Wheat, standing as one of the three core grain crops, holds paramount importance in safeguarding national food security. However, the wheat planting process remains constantly exposed to a diverse array of environmental factors, ranging from the intensity of light to fluctuations in temperature, soil fertility, fertilizer application methods, and water availability. Occasionally, these variables trigger diseases and insect infestations that can seriously affect wheat yield and quality if not promptly and effectively addressed. Therefore, it is imperative to manage these challenges in a timely and effective manner, ensuring the safety and integrity of wheat production, which in turn guarantees the stability of our national food supply. Traditional methods of manual detection of pests and diseases rely mainly on naked eye observation and manual statistics. Such solutions are highly subjective, offer low timeliness, and make consistent precision difficult to achieve. With the development of computer technology and deep learning, more and more research and applications have been carried out to address the shortcomings of traditional manual detection methods. In this study, deep learning is applied to disease and insect pest recognition. Focusing on wheat powdery mildew, scab, leaf rust, and midge, convolutional and capsule networks are investigated for pest recognition, and an image recognition system for wheat diseases and pests is established.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_105-Multimodal_Application_of_GAN_in_the_Image_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Pneumonia from Chest X-ray images using Support Vector Machine and Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506104</link>
        <id>10.14569/IJACSA.2024.01506104</id>
        <doi>10.14569/IJACSA.2024.01506104</doi>
        <lastModDate>2024-06-29T11:26:06.0530000+00:00</lastModDate>
        
        <creator>M. Fariz Fadillah Mardianto</creator>
        
        <creator>Alfredi Yoani</creator>
        
        <creator>Steven Soewignjo</creator>
        
        <creator>I Kadek Pasek Kusuma Adi Putra</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <subject>Pneumonia; chest X-ray; Support Vector Machine; Convolutional Neural Network; SDGs; Society 5.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Pneumonia presents a global health challenge, especially in distinguishing bacterial and viral types via chest X-ray diagnostics. This study compares a deep learning model, the Convolutional Neural Network (CNN), with the Support Vector Machine (SVM) for pneumonia classification. Our findings highlight the CNN&#39;s superior performance: it achieves 91% accuracy overall, outperforming the SVM&#39;s 79% in differentiating normal lungs from pneumonia-affected lungs. Specifically, the CNN excels in distinguishing between bacterial and viral pneumonia with 92% accuracy, compared to the SVM&#39;s 88%. These results underscore deep learning models&#39; potential to enhance diagnostic precision, improve treatment efficacy, and reduce pneumonia-related mortality. In the context of Society 5.0, which integrates technology for societal well-being, deep learning in healthcare emerges as transformative. By enabling early and accurate pneumonia detection, this research aligns with the United Nations Sustainable Development Goals (SDGs): it supports Goal 3 (Good Health and Well-being) by advancing healthcare outcomes and Goal 9 (Industry, Innovation, and Infrastructure) through innovative medical diagnostics. Therefore, this study emphasizes deep learning&#39;s pivotal role in revolutionizing pneumonia diagnosis, offering efficient healthcare solutions aligned with current global health challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_104-Classification_of_Pneumonia_from_Chest_X_ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based and IoT-based Health Monitoring App: Lowering Risks and Improving Security and Privacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506103</link>
        <id>10.14569/IJACSA.2024.01506103</id>
        <doi>10.14569/IJACSA.2024.01506103</doi>
        <lastModDate>2024-06-29T11:26:06.0370000+00:00</lastModDate>
        
        <creator>Chelsey C. Y. Hang</creator>
        
        <creator>M. Batumalay</creator>
        
        <creator>T D Subash</creator>
        
        <creator>R. Thinakaran</creator>
        
        <creator>B. Chitra</creator>
        
        <subject>IoT health monitoring system; security and privacy; and blockchain technology; health policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Blockchain technology is known for its decentralized and immutable nature, which makes it highly resistant to hacking and unauthorized access. This ensures that patients&#39; private health information remains secure and protected from potential breaches. Moreover, the use of blockchain can also enhance data integrity by creating a transparent and tamper-proof record of all health updates, further increasing trust in such systems. The COVID-19 epidemic has made human health one of the most crucial things to focus on in our day-to-day lives. Social distancing can help contain the COVID-19 pandemic, so people are urged to avoid physical contact with one another where conditions allow. It is suggested that medical professionals use the proposed Internet of Things (IoT)-based Health Monitoring Application to monitor their patients via their mobile devices. With the help of the suggested system, patients can update the system with their daily health status, and medical professionals can use their mobile devices to monitor their patients and inform future health policy. Because the suggested system is an application that users can access from their mobile devices, rather than a website browsed from a laptop or computer, it is more practical than most current systems. Patients do not need to visit the hospital for a check-up because they can update the system with their health information, and if physicians discover unusual symptoms in a patient&#39;s medical record, they can advise the patient to seek medical attention. Furthermore, private health information is regarded as confidential. Consequently, this study examines the risks associated with the backend of the suggested solution as well as its security threats. Additionally, by utilizing blockchain technology, improvements in security and privacy can be achieved.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_103-Blockchain_based_IoT_based_Health_Monitoring_App.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bionic Hand Movements Recognition: A Unified Framework with Attention-Guided ROI Identification and the Bionic Fusion Net Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506102</link>
        <id>10.14569/IJACSA.2024.01506102</id>
        <doi>10.14569/IJACSA.2024.01506102</doi>
        <lastModDate>2024-06-29T11:26:06.0070000+00:00</lastModDate>
        
        <creator>Prakash. S</creator>
        
        <creator>Josephine H. H</creator>
        
        <creator>Priya. S</creator>
        
        <creator>M. Batumalay</creator>
        
        <subject>Bionic Hand; Optimized Red Fox Falcon Algorithm; Xception; Squeeze-Net; Shuffle-Net; Bi-LSTM; Huber Loss; Sustainable Development Goals (SDG); good health; well-being; health policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In prosthetics, bionic hand movement recognition is crucial to developing sophisticated systems that can effectively understand and react to human motions. Recent advances in image processing, feature extraction, and deep learning have improved the accuracy and flexibility of bionic hand movement detection systems. This study proposes a unified framework using attention-guided ROI identification and a novel Bionic Fusion-Net architecture to overcome these difficulties, contributing towards the Sustainable Development Goal (SDG) of Good Health and Well-Being. Pre-processing first applies dataset augmentation and image enhancement. The ROI identification approach uses an attention-guided U-Net with sophisticated convolutional components. Spatial Features, BionicNet-1, and BionicNet-2 learn spatial and temporal features together during feature extraction. The Optimized Red Fox Falcon Algorithm (O-RFF), a hybrid of the Red Fox and Falcon Optimization Algorithms, improves feature selection. The Bionic Fusion-Net architecture combines Xception, Squeeze-Net, Shuffle-Net, an optimized Bi-LSTM, and the Huber loss function. The recommended technique improves the flexibility of bionic hand movement recognition, attaining an accuracy of about 99% and outperforming other approaches in use, in support of well-being and future health policy.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_102-Bionic_Hand_Movements_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of AES-SM2 Hybrid Encryption Algorithm in Big Data Security and Privacy Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506101</link>
        <id>10.14569/IJACSA.2024.01506101</id>
        <doi>10.14569/IJACSA.2024.01506101</doi>
        <lastModDate>2024-06-29T11:26:05.9730000+00:00</lastModDate>
        
        <creator>Pingyun Huang</creator>
        
        <creator>Guizhou Liao</creator>
        
        <creator>Jianhong Ren</creator>
        
        <subject>AES; SM2; privacy protection; encryption algorithm; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the era of big data, information security and privacy protection have become important issues facing society. To address the security and privacy problems of big data, this research designs and implements a hybrid encryption method that combines the Advanced Encryption Standard (AES) algorithm and the Standard Encryption Module 2 (SM2) algorithm for encryption operations. This method uses the AES algorithm to encrypt plaintext data without calling any encryption libraries, and it improves the key expansion method and security analysis of the AES algorithm. The experimental results show that, when one key is changed, the confusion range of the improved AES algorithm is 62 &#177; 6, while that of the traditional AES algorithm is 63 &#177; 7. The encryption time of the RSA algorithm is 16.50 ms longer than that of SM2. The AES scheme improved with SM2 has the fastest decryption speed, followed by the RSA+AES scheme, and finally the SM2+AES scheme. The hybrid encryption algorithm proposed in this research can encrypt sensitive information in big data without leaking plaintext information, effectively protecting sensitive information in big data and providing new ideas for network security and privacy protection in the big data setting.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_101-The_Application_of_AES_SM2_Hybrid_Encryption_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UAV Path Planning Method Considering Safety and Signal Shielding Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01506100</link>
        <id>10.14569/IJACSA.2024.01506100</id>
        <doi>10.14569/IJACSA.2024.01506100</doi>
        <lastModDate>2024-06-29T11:26:05.9430000+00:00</lastModDate>
        
        <creator>Xiaoyong Chen</creator>
        
        <creator>Jiajun Fang</creator>
        
        <creator>Yanjie Zhai</creator>
        
        <subject>Multi-objective particle swarm optimization; path planning; cubic B-splines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>To meet the need for safe operation of unmanned aerial vehicles (UAVs) in cities, this paper proposes a multi-objective path planning method based on a particle swarm optimization algorithm. Firstly, a complex urban environment model is constructed using the grid method. Then, taking the total length of the UAV path and the minimum flight risk as objectives, the multi-objective path optimization problem is formulated while accounting for the obstacle avoidance requirements and performance constraints of the UAV. Finally, the optimization problem is solved by a multi-objective particle swarm optimization algorithm, and the path curve is smoothed by cubic B-splines. The simulation results show that the proposed multi-objective path planning method is more reasonable than methods that consider only the lowest security risk or the shortest path.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_100-UAV_Path_Planning_Method_Considering_Safety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectral Mixture Analysis-based WQI with Convolutional Long Short-Term Memory Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150699</link>
        <id>10.14569/IJACSA.2024.0150699</id>
        <doi>10.14569/IJACSA.2024.0150699</doi>
        <lastModDate>2024-06-29T11:26:05.9130000+00:00</lastModDate>
        
        <creator>Ika Oktavianti</creator>
        
        <creator>Yusuf Hartono</creator>
        
        <creator>Sukemi</creator>
        
        <subject>Water quality index; Spectral Mixture Analysis; remote sensing; deep learning; convolutional long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Surface water, including river water, is an important natural resource for human life. However, river water quality in Indonesia often declines due to various factors, such as excessive water consumption, waste pollution, and natural disasters. This study aims to predict the Water Quality Index (WQI) of rivers using Spectral Mixture Analysis with a deep learning architecture. The methods used in this study are Spectral Mixture Analysis (SMA) and Convolutional Long Short-Term Memory (ConvLSTM). SMA is used to decompose the spectral signatures of water quality components and provide insight into the composition of water bodies. ConvLSTM, a deep learning architecture, is used to capture temporal dependencies and spatial patterns in water quality data. The results show that the WQI prediction accuracy of the 345-band model was better than that of the 234-band model, reaching 34.78%. The visible color spectrum representing the Meets (M) and Light (R) Pollution Index categories is blue (0, 0, 255), with wavelengths ranging from 0.53 μm to 0.88 μm. In tests of the ConvLSTM hybrid model on the 8 mandatory parameters of river WQI measurements at 30 watershed monitoring points in North Musi Rawas Regency from 2021 to 2023, the accuracy reached 96%, which indicates that the performance of this model is acceptable. This research proves that Spectral Mixture Analysis with the hybrid ConvLSTM technique is capable of effectively predicting and monitoring the WQI of rivers, and these results can be used to take appropriate steps in determining policy.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_99-Spectral_Mixture_Analysis_Based_WQI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Transport Systems: Analysis of Applications, Security Challenges, and Robust Countermeasures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150698</link>
        <id>10.14569/IJACSA.2024.0150698</id>
        <doi>10.14569/IJACSA.2024.0150698</doi>
        <lastModDate>2024-06-29T11:26:05.8800000+00:00</lastModDate>
        
        <creator>Mada Alharb</creator>
        
        <creator>Abdulatif Alabdulatif</creator>
        
        <subject>Intelligent Transport Systems (ITS); cybersecurity; urban mobility; anomaly detection systems; privacy concerns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Intelligent Transport Systems (ITS) are instrumental in optimizing transportation networks, enhancing efficiency, and promoting sustainable mobility in smart cities and advanced technological environments. However, the increasing integration of digital technologies into transportation infrastructure introduces cyber-physical risks and privacy concerns. This paper explores the diverse applications of ITS and their impact on traffic management, vehicle communication, and urban mobility. It examines real-world deployments and emerging trends to illustrate ITS&#39;s transformative potential. It then critically assesses the security vulnerabilities inherent in intelligent transport systems, including cyber threats targeting communication protocols, data integrity, and network interconnectedness; privacy issues related to data collection and utilization are also scrutinized. The paper further emphasizes the importance of proactive security measures to mitigate threats and ensure the resilience of ITS. Finally, drawing upon theoretical frameworks and empirical case studies, the research proposes robust security methodologies such as encryption techniques, anomaly detection systems, and secure communication routes. Legislative recommendations and collaborative initiatives are advocated to foster a trustworthy intelligent transport ecosystem and address security challenges comprehensively.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_98-Intelligent_Transport_Systems_Analysis_of_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of the Main Traditional Project Management Methods Through a Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150697</link>
        <id>10.14569/IJACSA.2024.0150697</id>
        <doi>10.14569/IJACSA.2024.0150697</doi>
        <lastModDate>2024-06-29T11:26:05.8670000+00:00</lastModDate>
        
        <creator>Fernanda Souza Valadares</creator>
        
        <creator>Naira Cristina Souza Moura</creator>
        
        <creator>T&#225;bata Nakagomi Fernandes Pereira</creator>
        
        <creator>Milena De Oliveira Arantes</creator>
        
        <subject>Traditional methods; project management; framework; PMBOK&#174;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Traditional project management methods are specific and predictable and seek to keep planning as detailed as possible; even today, companies continue to integrate them into their processes. The present study aims to identify the main traditional Project Management methods and present them in detail through a Systematic Literature Review. In this review, 37 articles were found and analyzed to answer five research questions, which focused on the main traditional project management methods, the most relevant maturity models, trends in the area, and the challenges and future directions for project management. As the main results, PMBOK was identified as the main traditional method, followed by PRINCE2, the ISO 21500 standard, and the CTCR methodology. Among the tools, the Gantt Chart, Earned Value Management, Critical Chain Project Management, and the TOC Method were highlighted as the most relevant. It is therefore possible to obtain a broad and detailed view of the main traditional PM methods, with which researchers in the area will be able to make better decisions when choosing the appropriate method for their type of project. As for challenges and future directions, the article pointed out that project processes are currently complex and therefore often fail to meet their initial deadline, cost, quality, and business goals. Notable difficulties in PM include schedule delays, a lack of clearly defined objectives and of support from leadership, scope changes, insufficient resources, poor risk management and measurement of project performance, and a lack of communication.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_97-Identification_of_the_Main_Traditional_Project_Management_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creativity in the Digital Canvas: A Comprehensive Analysis of Art and Design Education Pedagogy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150696</link>
        <id>10.14569/IJACSA.2024.0150696</id>
        <doi>10.14569/IJACSA.2024.0150696</doi>
        <lastModDate>2024-06-29T11:26:05.8330000+00:00</lastModDate>
        
        <creator>Qian TONG</creator>
        
        <subject>Creativity; art and design education; pedagogical practices; learning outcomes; assessment; grounded theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Promoting creativity in the dynamic field of education has become a critical goal for educators, who aim to equip students with the abilities essential for success in various professional and personal situations. As educational institutions globally strive to promote creative learning outcomes, there is still a notable lack of knowledge regarding effective techniques for teaching creativity. In this paper, we address the pressing need to bridge the knowledge gap associated with teaching creativity in artistic disciplines. The goal is to offer educators and researchers detailed knowledge of the methods used to promote creativity in art and design education by combining research, historical insights, and modern advancements. We explore the complexities of creative ideas, both classic and current educational methods, and the distinct problems and possibilities in art and design education. Finally, the study provides insights into the ongoing debate about creativity with respect to art and design education, offering suggestions for future pedagogical innovation to meet the dynamic challenges and potentials within the artistic and design disciplines.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_96-Creativity_in_the_Digital_Canvas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Task Scheduling in Cloud Manufacturing with Multi-level Scheduling Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150695</link>
        <id>10.14569/IJACSA.2024.0150695</id>
        <doi>10.14569/IJACSA.2024.0150695</doi>
        <lastModDate>2024-06-29T11:26:05.8030000+00:00</lastModDate>
        
        <creator>Xiaoli ZHU</creator>
        
        <subject>Cloud manufacturing; multi-level scheduling model; task scheduling; multi-objective optimization; resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Cloud Manufacturing (CMfg) utilizes the cloud computing paradigm to provide manufacturing services over the Internet flexibly and cost-effectively, where users pay only for what they use and may access services as needed. The scheduling method directly impacts the overall efficiency of CMfg systems. Manufacturing industries supply services aligned with customer-specific needs recorded in CMfg systems, and CMfg managers develop manufacturing strategies based on real-time demand to establish service delivery timing. Many elements influence customer satisfaction, including dependability, timeliness, quality, and pricing. Therefore, CMfg depends on multi-objective and real-time task scheduling. Multi-objective evolutionary algorithms have effectively examined many solutions, such as non-dominated, Pareto-efficient, and Pareto-optimal solutions, using both real and synthetic workflows. This study introduces a new Multi-level Scheduling Model (MSM) and evaluates its effectiveness by comparing it with other multi-objective algorithms, including the weighted genetic algorithm, the Non-dominated Sorting Genetic Algorithm II, and the Strength Pareto Evolutionary Algorithm. The primary emphasis is on assessing the efficacy of the algorithms and their suitability in commercial multi-cloud setups. The MSM&#39;s dynamic nature and adaptive features are emphasized, indicating its ability to effectively handle the complexity and demands of CMfg and resolve the scheduling issue within this environment. Experimental results suggest that MSM outperforms the other algorithms, achieving a 20% improvement in makespan.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_95-Optimized_Task_Scheduling_in_Cloud_Manufacturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Design and Execution of a Multimedia Information Intelligent Processing System Oriented to User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150694</link>
        <id>10.14569/IJACSA.2024.0150694</id>
        <doi>10.14569/IJACSA.2024.0150694</doi>
        <lastModDate>2024-06-29T11:26:05.7700000+00:00</lastModDate>
        
        <creator>Hongmei Liu</creator>
        
        <subject>User experience; multimedia information; intelligent processing; wireless network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the rapid growth of the world economy and the increasing pursuit of culture and entertainment, the integration of multimedia database technology and networks has become crucial. This integration allows for the seamless handling of multimedia information (MI) and accelerates the development of cultural exchange on the internet. This article studies and designs an MI intelligent processing system oriented to user experience (UE). The system integrates multimedia database technology and network technology, aiming to provide seamless integration of multimedia information, accelerate cultural exchange on the network, and enrich the cultural experience of users. In the system design, we propose a UE mode based on context-aware technology and develop an innovative access selection algorithm that can dynamically select the best access path based on network status and user preferences. The experimental results show that the algorithm performs well in terms of throughput, latency, and link load, effectively meeting the Quality of Experience (QoE) requirements of users. In addition, the system is highly scalable and can cope with constantly growing data and computing needs without sacrificing performance. The implementation of this system not only provides users with a richer and more personalized cultural experience, but also provides strong support for building a more interconnected global community.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_94-The_Design_and_Execution_of_a_Multimedia_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Image Stitching Effect using Super-Resolution Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150693</link>
        <id>10.14569/IJACSA.2024.0150693</id>
        <doi>10.14569/IJACSA.2024.0150693</doi>
        <lastModDate>2024-06-29T11:26:05.7400000+00:00</lastModDate>
        
        <creator>Jinjun Liu</creator>
        
        <subject>Image; stitching; super-resolution technology; vision and image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This paper aims to present a novel methodology that merges image stitching with super-resolution techniques, enabling the creation of a high-resolution panoramic image from several low-resolution inputs. The proposed approach comprehensively addresses challenges throughout the process, encompassing image preprocessing, alignment and handling of mismatches, stitching, super-resolution reconstruction, and post-processing. Employing advanced methodologies such as Convolutional Neural Networks (CNNs), Scale-Invariant Feature Transform (SIFT), Random Sample Consensus (RANSAC), GrabCut algorithm, Super-Resolution Convolutional Neural Network (SRCNN), gradient domain optimization, and Structural Similarity Index Measure (SSIM), each step meticulously tackles specific issues inherent to image stitching tasks. A key innovation lies in the synergy of image stitching and super-resolution techniques, yielding a solution that boasts high robustness and efficiency. This versatile method is adaptable to diverse image processing contexts. To validate its effectiveness, experiments were conducted on two established datasets, USIS-D and VGG, where a quartet of quantitative metrics – Peak Signal-to-Noise Ratio (PSNR), SSIM, Entropy (EN), and Quality Assessment of Blurred Faces (QABF) – were employed to gauge the quality of stitched images against alternative methods. The outcomes decisively illustrate the superiority of our proposed method, achieving superior performance across all metrics and producing panoramas devoid of seams and distortions. This work thereby contributes a significant advancement in the realm of high-fidelity panoramic image reconstruction.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_93-Improving_Image_Stitching_Effect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolving Security for 6G: Integrating Software-Defined Networking and Network Function Virtualization into Next-Generation Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150692</link>
        <id>10.14569/IJACSA.2024.0150692</id>
        <doi>10.14569/IJACSA.2024.0150692</doi>
        <lastModDate>2024-06-29T11:26:05.7230000+00:00</lastModDate>
        
        <creator>JAADOUNI Hatim</creator>
        
        <creator>CHAOUI Habiba</creator>
        
        <creator>SAADI Chaimae</creator>
        
        <subject>6G Network; network function virtualization; software defined network; security; architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As technology continues to advance, the emergence of 6G networks is imminent, promising unprecedented levels of connectivity and innovation. A critical aspect of designing the security architecture for 6G networks revolves around the utilization of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) technologies. By harnessing the capabilities of SDN and NFV, the security infrastructure of 6G networks stands to gain significant advantages in terms of flexibility, scalability, and agility. SDN facilitates the decoupling of the network control plane from the data plane, enabling centralized management and control of network resources. This article examines the synergistic relationship between SDN and NFV in enhancing the resilience and adaptability of 6G security architectures, offering insights into key challenges, emerging trends, and future directions in securing the next generation of wireless networks.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_92-Evolving_Security_for_6G_Integrating_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service-Oriented Data Optimization in Networks using Artificial Intelligence Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150691</link>
        <id>10.14569/IJACSA.2024.0150691</id>
        <doi>10.14569/IJACSA.2024.0150691</doi>
        <lastModDate>2024-06-29T11:26:05.6930000+00:00</lastModDate>
        
        <creator>Zhenhua Yang</creator>
        
        <creator>Qiwen Yang</creator>
        
        <creator>Minghong Yang</creator>
        
        <subject>Artificial intelligence; networking; quality of service-oriented; data optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This paper outlines a comprehensive AI-driven Quality of Service (QoS) optimization method, presenting a rigorous examination of its effectiveness through extensive experimentation and analysis. By applying real-world datasets to simulate network environments, the study systematically evaluates the proposed method’s impact across various QoS metrics. Key findings reveal substantial enhancements in reducing average latency, minimizing packet loss, and boosting bandwidth utilization compared to baseline scenarios, with the Deep Deterministic Policy Gradient (DDPG) model showcasing the most notable improvements. The research demonstrates that AI optimization strategies, particularly those leveraging DQN and DDPG algorithms, significantly improve upon conventional methods. Specifically, post-migration optimizations lead to a recovery and even surpassing of pre-migration QoS levels, with delays dropping to levels below initial readings, packet loss nearly eliminated, and bandwidth utilization markedly improved. The study further illustrates that while lower learning rates necessitate longer convergence times, they ultimately facilitate superior model performance and stability. In-depth case studies within a cloud data center setting underscore the system’s proficiency in handling large-scale Virtual Machine (VM) migrations with minimal disruption to network performance. The AI-driven optimization successfully mitigates the typical latency spikes, packet loss increases, and resource utilization dips associated with VM migrations, thereby affirming its practical value in maintaining high network efficiency and stability during such operations. Comparative analyses against traditional traffic engineering methods, rule-based controls, and other machine learning approaches consistently place the AI optimization method ahead, achieving up to an 8% increase in throughput alongside a 2 ms decrease in latency. Furthermore, the technique excels in reducing packet loss by 25% and elevating resource utilization rates, underscoring its prowess in enhancing network efficiency and stability. Robustness and scalability assessments validate the method’s applicability across diverse network scales, traffic patterns, and congestion levels, confirming its adaptability and effectiveness in a wide array of operational contexts. Overall, the research conclusively evidences the AI-driven QoS optimization system’s capacity to tangibly enhance network performance, positioning it as a highly efficacious solution for contemporary networking challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_91-Quality_of_Service_Oriented_Data_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Fuzzy-based Spectrum Allocation (FBSA) Technique for Enhanced Quality of Service (QoS) in 6G Heterogeneous Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150690</link>
        <id>10.14569/IJACSA.2024.0150690</id>
        <doi>10.14569/IJACSA.2024.0150690</doi>
        <lastModDate>2024-06-29T11:26:05.6630000+00:00</lastModDate>
        
        <creator>S. B. Prakalya</creator>
        
        <creator>Samuthira Pandi V</creator>
        
        <creator>S. Sujatha</creator>
        
        <creator>R. Thangam</creator>
        
        <creator>D. Karunkuzhali</creator>
        
        <creator>G. Keerthiga</creator>
        
        <subject>FBSA; D2A; 6G; spectrum allocation; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This research focuses on Device-to-Any-device (D2A) communication for 6G in unpredictable circumstances, where the topology of the D2A network changes over time as a result of the mobility of D2A devices. Extremely sophisticated applications demanding ultra-low latency and ultra-high data rates can be made achievable by cellular D2A communications in 6G. The best way to ensure Quality of Service (QoS) is to make the most of the scarce MAC-layer resources, and spectrum allocation is crucial for sharing information between D2A systems and a variety of devices. In this paper, a novel Fuzzy-Based Spectrum Allocation (FBSA) approach is established to efficiently and rationally distribute resources for D2A. A system model for D2A transmission is established for metropolitan regions, and common secure and non-secure services are implemented in the network to assess its performance under this technique. The proposed FBSA approach is compared with prior works, which could not deliver guaranteed services due to low resource utilization. Riverbed Modeler simulation results show that the proposed approach can significantly enhance resource usage and satisfy the requirements of D2A systems.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_90-A_Novel_Fuzzy_based_Spectrum_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defect Prediction of Finite State Machine Models Based on Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150689</link>
        <id>10.14569/IJACSA.2024.0150689</id>
        <doi>10.14569/IJACSA.2024.0150689</doi>
        <lastModDate>2024-06-29T11:26:05.6300000+00:00</lastModDate>
        
        <creator>Wei Zhang</creator>
        
        <subject>Transfer learning; DFSM; software defects; defect prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As software systems become increasingly intricate, predicting cache defects has emerged as a crucial aspect of maintaining software quality. This article introduces a novel approach for predicting cache defects that applies transfer learning (TL) to a software deterministic finite state machine (DFSM) model. The method combines the advantages of TL and the DFSM, aiming to improve the effectiveness and accuracy of software cache defect prediction. By merging the precision of the DFSM with the versatility of TL, the proposed technique transfers knowledge learned from source projects to target projects through training, thereby addressing data scarcity challenges in new or evolving projects. Experimental findings reveal that as training data grows, the method&#39;s test coverage and fault detection rate steadily increase, and it demonstrates impressive execution efficiency and stability. In comparison to traditional methods, this approach exhibits substantial benefits in elevating software quality and reliability, offering a fresh and efficient tool for ensuring software quality. Thanks to the TL strategy, the method rapidly adapts to the unique environments and requirements of new or evolving projects, enhancing forecasting accuracy and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_89-Defect_Prediction_of_Finite_State_Machine_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Machine Learning Framework for Anomaly Detection in Credit Card Transactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150688</link>
        <id>10.14569/IJACSA.2024.0150688</id>
        <doi>10.14569/IJACSA.2024.0150688</doi>
        <lastModDate>2024-06-29T11:26:05.6170000+00:00</lastModDate>
        
        <creator>Fathe Jeribi</creator>
        
        <subject>Cybersecurity; anomaly detection; machine learning; optimization; nearmiss; SMOTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Cybercrimes take a variety of forms, and a majority of them involve credit cards. Despite the various steps taken to prevent credit card fraud, it remains crucial to alert customers to unusual attempts at fraudulent transactions. Many studies have been published over the years to identify anomalies in credit card transactions, and machine learning (ML) has played a significant role in this. Though various anomaly detection techniques are in place, transaction irregularities remain, especially during banking card transactions. The objective of this proposed work is to develop an efficient machine learning model for identifying anomalies in credit card-based transactions by considering the limitations of existing frameworks. The proposed research employs an ML framework comprising data preprocessing, discovering correlations, outlier removal, feature reduction, and classification with a sampling trade-off. The framework uses classifiers such as logistic regression, kNN, support vector machines, and decision trees. The NearMiss and SMOTE approaches are used to address overfitting and underfitting issues through a sampling trade-off, which is the defining feature of this research. Significant improvement was noticed when the machine learning models were evaluated using fresh data after the sampling trade-off.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_88-A_Comprehensive_Machine_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing the Impact of Digitalization on Internal Auditing Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150687</link>
        <id>10.14569/IJACSA.2024.0150687</id>
        <doi>10.14569/IJACSA.2024.0150687</doi>
        <lastModDate>2024-06-29T11:26:05.6000000+00:00</lastModDate>
        
        <creator>Khawla Karimallah</creator>
        
        <creator>Hicham Drissi</creator>
        
        <subject>Digitalization; data analytics; organization; Internal Audit Function (IAF); agility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Over the past decades, the business environment has become increasingly digitized, and advances in new technologies are driving significant organizational change. Over the years, internal audit, as a governance actor, has adapted to meet the demands of the evolving business environment, and its role in consulting activities has been a significant topic of debate in the literature. This research aims to study the impact of the digitalization of organizations on the internal audit function. The method used to achieve this goal is a survey conducted with 175 internal auditors and managers working for companies in various sectors. The results indicate a positive relationship between the level of digitalization of the organization and the diversification of risks. This requires greater agility on the part of internal audit, achieved by strengthening the digital skills of auditors, particularly in data analysis, to meet the needs of different stakeholders. The results also indicate that the level of digitalization of the organization has an indirect effect on the level of integration of consulting missions in the internal audit plan, a new role that internal audit is developing to support added value.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_87-Assessing_the_Impact_of_Digitalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Sentiment Analysis using Deep Learning Fusion Techniques and Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150686</link>
        <id>10.14569/IJACSA.2024.0150686</id>
        <doi>10.14569/IJACSA.2024.0150686</doi>
        <lastModDate>2024-06-29T11:26:05.5670000+00:00</lastModDate>
        
        <creator>Muhaimin Bin Habib</creator>
        
        <creator>Md. Ferdous Bin Hafiz</creator>
        
        <creator>Niaz Ashraf Khan</creator>
        
        <creator>Sohrab Hossain</creator>
        
        <subject>Multimodal sentiment analysis; deep learning; transfer learning; natural language processing; image processing; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Multimodal sentiment analysis extracts sentiments from multiple modalities such as text, images, audio, and video. Most current sentiment classifiers are based on a single modality, which is less effective because of their simple architectures. This paper studies multimodal sentiment analysis by combining several deep learning text and image processing models. The fusion techniques are RoBERTa with EfficientNet-b3, RoBERTa with ResNet50, and BERT with MobileNetV2. This paper focuses on improving sentiment analysis through the combination of text and image data. The performance of each fusion model is carefully analyzed using accuracy, confusion matrices, and ROC curves. The fusion techniques implemented in this study outperformed the previous benchmark models. Notably, the EfficientNet-b3 and RoBERTa combination achieves the highest accuracy (75%) and F1 score (74.9%). This research contributes to the field of sentiment analysis by showing the potential of combining textual and visual data for more accurate sentiment analysis, laying the groundwork for future research on multimodal sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_86-Multimodal_Sentiment_Analysis_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Technical Indicators to Trading Decisions: A Deep Learning Model Combining CNN and LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150685</link>
        <id>10.14569/IJACSA.2024.0150685</id>
        <doi>10.14569/IJACSA.2024.0150685</doi>
        <lastModDate>2024-06-29T11:26:05.5370000+00:00</lastModDate>
        
        <creator>SAHIB Mohamed Rida</creator>
        
        <creator>ELKINA Hamza</creator>
        
        <creator>ZAKI Taher</creator>
        
        <subject>Stock market prediction; CNN-LSTM hybrid model; financial time series; technical indicators; CNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Stock market prediction is a highly attractive and popular field within finance, driven by the potential for significant profits that come with substantial risks due to data non-linearity and complex economic principles. Extracting features from trading data is crucial in this domain, and numerous strategies have been developed. Among these, deep learning has achieved impressive results in financial applications because of its robust data processing capabilities. In our study, we propose a hybrid deep learning model, the CNN-LSTM, which combines the 2D Convolutional Neural Network (CNN) for image processing with the Long Short-Term Memory (LSTM) network for managing image sequences and classification. We transformed the top 15 of 21 technical indicators from financial time series into 15x15 images for 21 different day periods. Each image is then categorized as Sell, Hold, or Buy based on the trading data. Our model demonstrates superior performance in stock predictions over other deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_85-From_Technical_Indicators_to_Trading_Decisions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Air Quality Monitoring Model using Fuzzy Inference System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150684</link>
        <id>10.14569/IJACSA.2024.0150684</id>
        <doi>10.14569/IJACSA.2024.0150684</doi>
        <lastModDate>2024-06-29T11:26:05.5200000+00:00</lastModDate>
        
        <creator>Muhammad Saleem</creator>
        
        <creator>Nitinkumar Shingari</creator>
        
        <creator>Muhammad Sajid Farooq</creator>
        
        <creator>Beenu Mago</creator>
        
        <creator>Muhammad Adnan Khan</creator>
        
        <subject>IoT; fuzzy inference system; smart city; air quality monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Air pollution is a serious environmental and social issue that affects people&#39;s health as well as ecosystems and the environment. Air quality in cities and metropolitan areas is the most important factor with a direct impact on disease occurrence and people&#39;s quality of life. It is critical to establish real-time air quality monitoring in order to make timely decisions based on measurements and evaluations of environmental factors. Monitoring systems are influential in multiple smart city initiatives for keeping an eye on air quality and reducing pollutant concentrations in metropolitan areas. The Internet of Things (IoT) is becoming increasingly important in a variety of sectors, including air quality monitoring. In this research work, a real-time air quality monitoring model employing fuzzy inference is proposed for monitoring air pollution using multiple parameters such as Sulphur Dioxide (SO2), Nitrogen Dioxide (NO2), Carbon Monoxide (CO), Ozone (O3) and Suspended Particulates (PM10). The proposed fuzzy inference system provides better results in terms of monitoring air quality in a more efficient and effective way.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_84-Real_Time_Air_Quality_Monitoring_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Two-Step Classification for Solving Data Imbalance and Anomalies in an Altman Z-Score-based Bankruptcy Prediction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150683</link>
        <id>10.14569/IJACSA.2024.0150683</id>
        <doi>10.14569/IJACSA.2024.0150683</doi>
        <lastModDate>2024-06-29T11:26:05.4900000+00:00</lastModDate>
        
        <creator>Abdul Syukur</creator>
        
        <creator>Arry Maulana Syarif</creator>
        
        <creator>Ika Novita Dewi</creator>
        
        <creator>Aris Marjuni</creator>
        
        <subject>Bankruptcy prediction; Altman Z-Score; data imbalance and anomaly; data binning; two-step classification; LSTM; rule-based classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Differences in bankruptcy regulations with varying value parameters cause data anomalies when implemented in the Altman Z-Score model. Another common problem in bankruptcy predictions is imbalanced data; the number of companies that fall into the bankruptcy category is much smaller than those that do not. Therefore, a novel method was proposed to address data imbalance and anomalies in an Altman Z-Score-based bankruptcy prediction model. The proposed method employs a two-step classification controlled with data binning. Assumption values were used to set the proportion of distress and non-distress classes. Quartile calculation-based data binning is then used to ordinally rank the non-distress category into three classes. Furthermore, a two-step classification was performed using the Long-Short Term Memory (LSTM) method, followed by a rule-based classification method. The LSTM method predicts output in the form of one class representing the distress zone and three classes representing non-distress zone subcategories. The results are then processed using a rule-based classification to summarize the output into a two-class classification, where all data not in the distress zone class is part of the non-distress zone. The performance evaluation shows promising results, with outcomes closely matching the source bankruptcy data. These findings strengthen the evidence that the Altman Z-Score is a powerful tool for bankruptcy prediction and demonstrate that the proposed method can improve the Altman Z-Score model in handling differences in data value parameters.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_83-Two_Step_Classification_for_Solving_Data_Imbalance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Taxonomy of IDS in IoTs: ML Classifiers, Feature Selection Models, Datasets and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150682</link>
        <id>10.14569/IJACSA.2024.0150682</id>
        <doi>10.14569/IJACSA.2024.0150682</doi>
        <lastModDate>2024-06-29T11:26:05.4600000+00:00</lastModDate>
        
        <creator>Hessah Alqahtani</creator>
        
        <creator>Monir Abdullah</creator>
        
        <subject>Intrusion detection system; feature selection; support vector machine; random forest; decision tree; NSL-KDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Applications of the Internet of Things (IoT) are becoming increasingly popular nowadays. Network security and privacy are major concerns for the IoT, as many IoT devices are connected to the network via the Internet, making IoT networks more vulnerable to various cyber-attacks. An Intrusion Detection System (IDS) is a solution that deals with security and privacy issues by protecting IoT networks from different types of attacks. In this paper, we provide a taxonomy of IDS in IoT. Different Machine Learning (ML) classifiers, feature selection models, and datasets with high detection accuracy are presented. Our analysis indicates a heightened emphasis on ML-based IDS, with Support Vector Machines (SVMs) at 33% and Random Forests (RFs) at 31% being the most widely used classifiers. Despite the diversity of datasets used for IDS, NSL-KDD is the most common, appearing in 49% of studies. In the realm of feature selection, the K-means and SMO algorithms emerge with an impressive 99.33%, the highest percentage in previous research on feature selection for ML-based IDS. Moreover, we address future pathways and challenges of IDS detection.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_82-A_Taxonomy_of_IDS_in_IoTs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Fire Detection Algorithm Based on Improved YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150681</link>
        <id>10.14569/IJACSA.2024.0150681</id>
        <doi>10.14569/IJACSA.2024.0150681</doi>
        <lastModDate>2024-06-29T11:26:05.4270000+00:00</lastModDate>
        
        <creator>Dawei Zhang</creator>
        
        <creator>Yutang Chen</creator>
        
        <subject>YOLOv5; FasterNet; GSConv; VoV-GSCSP; Fire detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Among all kinds of disasters, fire is one of the most frequent and common major disasters threatening public safety and social development. At present, the widely used smoke sensor method for detecting fire is susceptible to factors such as distance, resulting in untimely detection. With the development of computer vision technology, image detection based on machine learning has surpassed traditional detection methods in both accuracy and speed, and has gradually become the emerging mainstream in the field of fire detection. However, most methods proposed in related studies rely on high-performance hardware devices, which limits the practical application of the results. This paper proposes an improved fire detection algorithm based on the YOLOv5 model to address the common issues of high memory usage, slow detection speed, and high operating costs in current fire detection algorithms. The algorithm introduces the FasterNet network into the backbone to reduce model memory usage and improve detection speed, uses Ghost-Shuffle Convolution (GSConv) in the neck network to reduce the number of model parameters and computational cost, and introduces a one-time aggregation cross-stage partial network module (VoV-GSCSP) to enhance feature extraction capability and improve detection accuracy. The experimental results show that, compared with the original YOLOv5 model, the improved model achieves better recognition performance, with an average accuracy of 98.3%, a 31.4% reduction in memory usage, and a 13% increase in detection speed. The number of parameters decreased by 33%, and the computational workload decreased by 35%. The improved algorithm can achieve fast and accurate identification of fires, and the lightweight model is more suitable for deployment on general embedded hardware.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_81-Lightweight_Fire_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pest Detection in Agricultural Farms using SqueezeNet and Multi-Layer Perceptron Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150680</link>
        <id>10.14569/IJACSA.2024.0150680</id>
        <doi>10.14569/IJACSA.2024.0150680</doi>
        <lastModDate>2024-06-29T11:26:05.4130000+00:00</lastModDate>
        
        <creator>Intan Nurma Yulita</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Firman Ardiansyah</creator>
        
        <creator>Juli Rejito</creator>
        
        <creator>Asep Sholahuddin</creator>
        
        <creator>Rudi Rosadi</creator>
        
        <subject>Pest detection; Squeezenet; multi-layer perceptron; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Pest detection is essential to protect agricultural systems from economic losses, lower food production, and environmental degradation. Detecting pests is a crucial aspect of agricultural sustainability because it helps to allocate resources, reduce production costs, and increase producers&#39; profits. Artificial intelligence (AI) has revolutionized the detection of agronomic pests by employing deep learning models to accurately detect individual pests and differentiate between species and life stages. Combining SqueezeNet and a Multi-Layer Perceptron (MLP), this study extracts feature vectors from image data to detect pests. There are four primary phases: preprocessing, image embedding with SqueezeNet, final classification with the MLP, and 10-fold cross-validation. Data for this study were acquired in the form of plant pest images, 3150 in total, with 350 from each class. Based on the research, the combined model demonstrates excellent performance: each experiment&#39;s accuracy is greater than 99%. This shows that SqueezeNet can effectively extract the data&#39;s features, whereas the Multi-Layer Perceptron can process these features for optimal classification performance. Nevertheless, several classes, such as mites, sawflies, and stem borers, were not always classified correctly because each image&#39;s background is unique. These promising findings have broad implications for boosting agricultural output and decreasing pest-related losses. Optimal use of this approach in a variety of agricultural contexts requires more study and field testing.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_80-Pest_Detection_in_Agricultural_Farms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Matching Model Combining Ranking Information and Negative Example Smoothing Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150679</link>
        <id>10.14569/IJACSA.2024.0150679</id>
        <doi>10.14569/IJACSA.2024.0150679</doi>
        <lastModDate>2024-06-29T11:26:05.3970000+00:00</lastModDate>
        
        <creator>Xiaodong Cai</creator>
        
        <creator>Lifang Dong</creator>
        
        <creator>Yeyang Huang</creator>
        
        <creator>Mingyao Chen</creator>
        
        <subject>Text matching; ranking information; negative example smoothing strategy; Jensen-Shannon divergence; ListNet sorting algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Current text matching methods struggle to accurately capture fine-grained ranking information between texts and suffer from insufficient information interaction between different negative examples. To address these problems, a text matching model combining ranking information and a negative example smoothing strategy is proposed. Firstly, it uses Jensen-Shannon divergence to ensure consistency between the rankings of the two sentence representations of the input text obtained under different Dropout masks. Secondly, it utilizes the pre-trained SimCSE as the teacher model to obtain coarse-grained ranking information and distills this information into the student model through the ListNet sorting algorithm to obtain fine-grained ranking information. Finally, the negative examples are augmented by a negative example smoothing strategy, which effectively solves the problem of insufficient information interaction between negative examples without increasing the batch size. Experimental results on the standard semantic textual similarity task show that the proposed model achieves a significant improvement in the Spearman correlation coefficient compared with existing state-of-the-art methods, proving its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_79-Text_Matching_Model_Combining_Ranking_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Sentiment Analysis: Dimensionality Reduction for Machine Learning (DRML)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150678</link>
        <id>10.14569/IJACSA.2024.0150678</id>
        <doi>10.14569/IJACSA.2024.0150678</doi>
        <lastModDate>2024-06-29T11:26:05.3670000+00:00</lastModDate>
        
        <creator>Dhamayanthi N</creator>
        
        <creator>Lavanya B</creator>
        
        <subject>Machine learning; text mining; natural language processing; sentiment analysis; opinion classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Sentiment analysis is vital for understanding public opinion, but improving its performance is challenging due to the complexities of high-dimensional text data and diverse user-generated content. We propose a novel framework based on Dimensionality Reduction for Machine Learning (DRML) that enhances the classification performance by 21.55% while reducing the dimension of the feature matrix by 99.63%. Our research addresses the fundamental question of whether it is possible to reduce the feature space significantly while improving sentiment analysis performance. Our approach employs Principal Component Analysis (PCA) to effectively capture essential textual features and includes the development of an algorithm for identifying principal components from positive and negative reviews. We then create a supervised dataset by combining these components. Furthermore, we integrate a range of state-of-the-art machine learning algorithms (Decision Tree, K-Nearest Neighbours, Bernoulli Na&#239;ve Bayes, and Majority Voting Ensemble) into our framework, along with a custom tokenizer, to harness the full potential of reduced-dimensional data for sentiment classification. We have conducted extensive experiments using gold standard multi-domain benchmark datasets from Amazon to show that DRML outperforms other state-of-the-art approaches. Our proposed methodology gives superior performance with an average performance of 98.38%, which is a significant increase in performance by 21.55% compared to the baseline methodology using Bag of Words (BoW). In terms of individual evaluation parameters, DRML shows an increase of 21.84% in Accuracy, 20.4% in Precision, 21.84% in Recall, and 22.11% in F1-score. In comparison with the state-of-the-art (SOTA) methodologies applied to the same benchmark dataset in recent years, our framework demonstrates a significant average increase in Accuracy for Sentiment Analysis by 10.96%. This substantial improvement underscores the effectiveness of our approach. To conclude, our research contributes to the field of sentiment analysis by introducing an innovative framework that not only improves the efficiency of sentiment analysis but also paves the way for the analysis of extensive textual data in diverse real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_78-A_Novel_Framework_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Ensemble Algorithm for Boosting k-Nearest Neighbors Classification Performance via Feature Bagging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150677</link>
        <id>10.14569/IJACSA.2024.0150677</id>
        <doi>10.14569/IJACSA.2024.0150677</doi>
        <lastModDate>2024-06-29T11:26:05.3330000+00:00</lastModDate>
        
        <creator>Huu-Hoa Nguyen</creator>
        
        <subject>Bagging; ensemble; feature; k-nearest neighbors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This paper proposes a novel ensemble algorithm aimed at improving the performance of k-Nearest Neighbors (KNN) classification by incorporating feature bagging techniques, which help overcome the inherent limitations of KNN in Big Data scenarios. The proposed algorithm, termed FBE (Feature Bagging-based Ensemble), employs an efficient ensemble strategy with sorted feature subset techniques to reduce the time complexity from linear to logarithmic. By focusing on essential features during iterative training and utilizing a binary search in the testing phase, FBE boosts computational efficiency and accuracy in high-dimensional and imbalanced datasets. Our study rigorously evaluates the proposed FBE algorithm against traditional KNN, Random Forest (RF), and AdaBoost algorithms across ten benchmark datasets from the UCI Machine Learning Repository. The experimental results demonstrate that FBE not only outperforms the conventional KNN and AdaBoost across all evaluated metrics (accuracy, precision, recall, and F1 score) but also shows competitive performance compared to RF. Specifically, FBE exhibits remarkable improvements in datasets characterized by high dimensionality and class imbalances. The main contributions of this research include the development of an adaptive KNN framework that addresses the typical computational demands and vulnerability to noise in the data, making it well-suited for large-scale datasets. The ensemble methodology within FBE also helps reduce overfitting, a common challenge in standard KNN models, by diversifying the decision-making process across multiple data subsets. This strategy ensures robustness and reliability, positioning FBE as a suitable tool for classification tasks in diverse domains such as healthcare and image processing.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_77-An_Efficient_Ensemble_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-based Real-Time Electricity Metering Data Analysis and its Application to Anti-Theft Actions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150676</link>
        <id>10.14569/IJACSA.2024.0150676</id>
        <doi>10.14569/IJACSA.2024.0150676</doi>
        <lastModDate>2024-06-29T11:26:05.3170000+00:00</lastModDate>
        
        <creator>Kai Liu</creator>
        
        <creator>Anlei Liu</creator>
        
        <creator>Xun Ma</creator>
        
        <creator>Xuchao Jia</creator>
        
        <subject>Artificial intelligence; real-time electrical energy; metering data analysis; anti-power theft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This study focuses on the key issue of identifying electricity theft behavior in power systems, aiming to improve the security and efficiency of electric energy management. Against the background of today&#39;s smart grid, electricity theft not only causes serious economic losses but also threatens the stability of power grid operation. To address this situation, this paper proposes a novel and effective feature extraction and optimization method that uses recursive feature elimination (RFE), combined with correlation and exclusion analysis of the features, to achieve deep screening and dimensionality reduction of a large amount of raw data, thereby refining the core feature set that is most discriminative for electricity theft behavior. The paper then constructs a hybrid model integrating a long short-term memory network (LSTM) and an autoencoder. The model combines the strength of the LSTM in capturing time-series dependencies with the autoencoder&#39;s powerful ability in feature learning and noise reduction, and is specifically designed to recognize electricity theft behavior in real time and with high accuracy. To verify the performance and practicality of the proposed method, rigorous simulation experiments and practical case studies are carried out. Compared with classical electricity theft recognition methods, the results show that the proposed hybrid model exhibits significant advantages in both recognition accuracy and response speed. In both simulation environments and actual application scenarios, the method can effectively identify and warn of potential electricity theft, providing strong technical support for power companies&#39; anti-theft management.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_76-Artificial_Intelligence_based_Real_Time_Electricity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LSTM-GNOG: A New Paradigm to Address Cold Start Movie Recommendation System using LSTM with Gaussian Nesterov’s Optimal Gradient</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150675</link>
        <id>10.14569/IJACSA.2024.0150675</id>
        <doi>10.14569/IJACSA.2024.0150675</doi>
        <lastModDate>2024-06-29T11:26:05.3030000+00:00</lastModDate>
        
        <creator>Ravikumar R N</creator>
        
        <creator>Sanjay Jain</creator>
        
        <creator>Manash Sarkar</creator>
        
        <subject>Cold start; Gaussian Nesterov’s optimal gradient; long short-term memory; movie recommendation system; probabilistic matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>On modern streaming platforms, the movie recommendation system is an important tool for enabling users to find new content tailored to their interests. To address the cold start problem prevalent in movie recommendation systems, we introduce the Long Short-Term Memory-Gaussian Nesterov’s Optimal Gradient (LSTM-GNOG) approach. This model utilizes both implicit and explicit feedback to effectively manage sparse rating data. By integrating the Bayesian Personalized Ranking (BPR) and Probabilistic Matrix Factorization (PMF) algorithms with preprocessing via Singular Value Decomposition (SVD), our system enhances data robustness. Our empirical results on the MovieLens 100K, MovieLens 1M, FilmTrust, and Ciao datasets demonstrate significant improvements, with Mean Absolute Error (MAE) values of 0.4962, 0.5249, 0.4625, and 0.5341, respectively. Compared to traditional methods such as Unsupervised Boltzmann Machine-based Time-aware Recommendation (UBMTR) and the Efficient Gowers-Jaccard-Sigmoid Measure (EGJSM), LSTM-GNOG achieves higher prediction accuracy. These results underscore the effectiveness of LSTM-GNOG in overcoming data sparsity issues in movie recommendation.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_75-LSTM_GNOG_A_New_Paradigm_to_Address_Cold_Start_Movie.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Shader Termination and Throttling for Side-Channel Security on GPUOwl</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150674</link>
        <id>10.14569/IJACSA.2024.0150674</id>
        <doi>10.14569/IJACSA.2024.0150674</doi>
        <lastModDate>2024-06-29T11:26:05.2700000+00:00</lastModDate>
        
        <creator>Nelson Lungu</creator>
        
        <creator>Satyendr Singh</creator>
        
        <creator>Simon Tembo</creator>
        
        <creator>Manoj Ranjan Mishra</creator>
        
        <creator>Hani Moaiteq Aljahdali</creator>
        
        <creator>Lalbihari Barik</creator>
        
        <creator>Parthasarathi Pattnayak</creator>
        
        <creator>Mahendra Kumar Gourisaria</creator>
        
        <creator>Sudhansu Shekhar Patra</creator>
        
        <subject>Graphics processing units; security; side-channel attacks; shader throttling; GPUOwl</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>GPUs are becoming increasingly appealing targets for side-channel attacks because of their high levels of parallelism and shared hardware resources. To mitigate side-channel attacks on GPUs, this paper presents a novel dynamic shader termination and throttling approach. The main concept is to use runtime profiling and heuristics to dynamically terminate shader programs and restrict their frequency and concurrency. The proposed method is implemented on the open-source GPGPU simulator GPUOwl. Our findings show that the method can successfully thwart a variety of side-channel attacks while introducing only a modest performance cost: over a range of benchmarks, the average overhead of dynamic shader termination and throttling is 5.6%, while the method defeats recently demonstrated cache-based and timing-based side-channel attacks on GPUs. The proposed technique thus offers an efficient software-based defence to enhance the side-channel security of GPUs.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_74-Dynamic_Shader_Termination_and_Throttling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Resolution Remote Sensing Image Object Detection System for Small Unmanned Aerial Vehicles Based on MPSOC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150673</link>
        <id>10.14569/IJACSA.2024.0150673</id>
        <doi>10.14569/IJACSA.2024.0150673</doi>
        <lastModDate>2024-06-29T11:26:05.2400000+00:00</lastModDate>
        
        <creator>Hui Xia</creator>
        
        <subject>UAVs; remote sensing images; object recognition; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the maturation of remote sensing technology, the applications of small unmanned aerial vehicles are rapidly expanding. Efficient image object detection algorithms have become crucial for information extraction by unmanned aerial vehicles. To meet this demand, an improved YOLOv5s algorithm was developed and deployed on a multi-processor system to optimize object detection performance on high-resolution remote sensing images captured by small unmanned aerial vehicles. Through adjustments to the structure and parameters of YOLOv5s, the algorithm was enhanced to improve object recognition in high-resolution remote sensing imagery. Experimental results demonstrated that the improved YOLOv5s (I-YOLOv5s) algorithm effectively mitigates interference from shadows and other external factors, enabling precise identification of objects. During training, I-YOLOv5s exhibited faster convergence, reaching its optimum after approximately 176 iterations. In performance evaluation, the algorithm achieved F1 and Recall values of 0.92 and 0.94, respectively, significantly outperforming single-shot multibox detectors. I-YOLOv5s attained a maximum average precision of 0.96, markedly higher than comparative algorithms, with its loss value reduced to a mere 0.06. The enhanced algorithm not only improves the accuracy and efficiency of object detection but also advances the application of unmanned aerial vehicles in fields such as environmental monitoring, traffic management, and disaster assessment.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_73-High_Resolution_Remote_Sensing_Image_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Receive Satellite-Terrestrial Networks Data using Multi-Domain BGP Protocol Gateways</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150672</link>
        <id>10.14569/IJACSA.2024.0150672</id>
        <doi>10.14569/IJACSA.2024.0150672</doi>
        <lastModDate>2024-06-29T11:26:05.2230000+00:00</lastModDate>
        
        <creator>Tieshi Song</creator>
        
        <creator>Zhanbo Liu</creator>
        
        <subject>Internet of Things; satellite-terrestrial networks; multi-domain; BGP-5; protocol gateways</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In terms of communication media, computer network technology has advanced significantly as a means of communication between devices. The Border Gateway Protocol (BGP) is an Internet protocol used to route traffic and exchange data between autonomous systems (AS). At present, however, BGP version 5 (BGP-5) suffers from a fairly prevalent problem that degrades the performance of modern IP networks: &quot;high convergence delay&quot; when making routing changes. Satellite-terrestrial networks (STN) have drawn attention since their emergence at the start of the twenty-first century. Particularly in data centers and enterprise networks, this technology has greatly improved traffic control, administration, and monitoring. When adopting the STN paradigm, difficulties were encountered in providing administrative control, security, administration, and monitoring across domain borders. In a multi-domain STN, BGP-5 is used to route traffic and exchange data across many domains or autonomous systems. Through fewer advertisement paths, BGP-5 shields terrestrial networks from the high dynamics of satellites. Furthermore, a genuine network environment is constructed for authentic testing. The findings show that BGP-5 can lower CPU consumption by 8.23% to 9.56% and the bandwidth resource occupancy of the terrestrial network by 32.12% to 73.26%.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_72-Receive_Satellite_Terrestrial_Networks_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Residual Attention Recommendation Model Based on Interest Social Influence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150671</link>
        <id>10.14569/IJACSA.2024.0150671</id>
        <doi>10.14569/IJACSA.2024.0150671</doi>
        <lastModDate>2024-06-29T11:26:05.1930000+00:00</lastModDate>
        
        <creator>Sheng Fang</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <creator>Yun Xue</creator>
        
        <creator>Wei Lu</creator>
        
        <subject>Social recommendation; redundant and noisy; interest social mapping; social selection mechanism; adaptive residual attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Existing social recommendation models mostly use original social data in the social space directly. However, original social data may contain a large number of redundant and noisy social relationships. Additionally, existing feature fusion methods struggle to deeply and adaptively fuse features between nodes, which can degrade the recommendation performance of the model. To address these issues, this paper proposes an Adaptive Residual Attention Recommendation Model based on Interest Social Influence. Firstly, we construct a novel Interest Social Mapping Module that models the confidence of social relationships based on user interests and maps original social data into an interest social space, thereby gaining a deeper understanding of user interest relationships in social networks. Secondly, we introduce a unique Social Selection Mechanism that uses social confidence scores to dynamically filter and remove meaningless social interactions in the interest social space, effectively filtering out social information that may interfere with or mislead users. Finally, we design an Adaptive Residual Attention Mechanism to flexibly adjust how node features are fused, thereby obtaining more effective node information to improve recommendation accuracy. Experimental results show that, compared to several state-of-the-art methods, the proposed model achieves significant improvements on the Ciao and Epinions datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_71-Adaptive_Residual_Attention_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Research Trends in Maritime Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150670</link>
        <id>10.14569/IJACSA.2024.0150670</id>
        <doi>10.14569/IJACSA.2024.0150670</doi>
        <lastModDate>2024-06-29T11:26:05.1630000+00:00</lastModDate>
        
        <creator>G. Pradeep Reddy</creator>
        
        <creator>Shrutika Sinha</creator>
        
        <creator>Soo-Hyun Park</creator>
        
        <subject>Artificial intelligence (AI); internet of things (IoT); maritime communication; maritime research trends; Scopus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The maritime industry plays an important role in the transport of goods and passengers and is a major contributor to global trade. With the advent of new communication technologies, advances in Artificial Intelligence, and the ubiquitous Internet of Things, the maritime industry is evolving day by day. Effective communication plays a key role in ensuring the smooth operation of maritime activities. Researchers in this domain therefore need to understand and analyze research trends that can offer useful insights. To this end, this paper provides a clear picture of the scientific landscape in maritime communication based on the data available in the Scopus database. Scopus, the largest abstract and citation database from Elsevier, provides comprehensive detail about the literature in various subject fields. This research considers data from 2013 to 2023 for the analysis. A total of 505 publications were obtained from the database, including various document types such as articles, conference papers, and reviews. The analysis is carried out from various perspectives, including year, country, subject area, funding sponsor, document type, affiliation, author, and source. Further, the collaborations between different countries, the co-occurrence of various keywords, and the bibliographic coupling among diverse sources are also analyzed. This analysis provides a clear view that serves researchers willing to work in this area, as well as other stakeholders, in understanding various perspectives in this domain.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_70-Analysis_of_Research_Trends_in_Maritime_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVNN-ExpTODIM Technique for Maturity Evaluation of Digital Transformation in Retail Enterprises Under Single-Valued Neutrosophic Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150669</link>
        <id>10.14569/IJACSA.2024.0150669</id>
        <doi>10.14569/IJACSA.2024.0150669</doi>
        <lastModDate>2024-06-29T11:26:05.1470000+00:00</lastModDate>
        
        <creator>Xiaoling Yang</creator>
        
        <subject>Multiple-attribute group decision-making (MAGDM); single-valued neutrosophic sets (SVNSs); information entropy; exponential TODIM; maturity evaluation of digital transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The digital economy has become an important force driving the transformation of old and new growth drivers in China&#39;s economy, and also provides an opportunity for retail enterprises to &quot;overtake&quot; by changing lanes. Evaluating the maturity of digital transformation in retail enterprises plays an important role in their digital transformation process. Although more and more retail enterprises are recognizing the important role of digital transformation in their own development, the digital transformation of a retail enterprise is a complex issue that involves all aspects of its management. Many retail enterprises still lack clear strategic goals and practical paths, as well as effective supporting assessments and institutional incentives, in the process of digital transformation, which may further widen the digital gap between retail enterprises. The maturity evaluation of digital transformation in retail enterprises is a multiple-attribute group decision-making (MAGDM) problem. Recently, the Exponential TODIM (ExpTODIM) technique has been employed to cope with MAGDM. Single-valued neutrosophic sets (SVNSs) are presented as a decision tool for characterizing fuzzy information during the maturity evaluation of digital transformation in retail enterprises. In this study, the single-valued neutrosophic number Exponential TODIM (SVNN-ExpTODIM) technique is presented to solve MAGDM problems under SVNSs. Finally, a numerical study on the maturity evaluation of digital transformation in retail enterprises is presented to validate the SVNN-ExpTODIM technique through comparative analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_69-SVNN_ExpTODIM_Technique_for_Maturity_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acne Severity Classification on Mobile Devices using Lightweight Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150668</link>
        <id>10.14569/IJACSA.2024.0150668</id>
        <doi>10.14569/IJACSA.2024.0150668</doi>
        <lastModDate>2024-06-29T11:26:05.1170000+00:00</lastModDate>
        
        <creator>Nor Surayahani Suriani</creator>
        
        <creator>Syaidatus Syahira Ahmad Tarmizi</creator>
        
        <creator>Mohd Norzali Hj Mohd</creator>
        
        <creator>Shaharil Mohd Shah</creator>
        
        <subject>Acne detection; severity level; MobileNetV2; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Acne is a prevalent skin condition affecting millions of people globally, impacting not just physical health but also mental well-being. Early detection of skin diseases such as acne is important for making treatment decisions and preventing the disease from spreading. The main goal of this project is to develop an Android mobile application with deep learning that allows users to diagnose skin diseases and to detect their severity at three levels: mild, moderate, and severe. Most deep learning methods require devices with high computational resources, which can hardly be provided in mobile applications. To overcome this problem, this research focuses on lightweight Convolutional Neural Networks (CNN). This study examines the efficiency of MobileNetV2 in the Android application used in this project to detect skin diseases and severity levels. Android Studio is used to create the GUI, and the model is deployed successfully using TensorFlow Lite. Classification of acne images into severity levels (mild, moderate, and severe) achieves 92% accuracy. The study also demonstrated good results when implemented in an Android application with live camera input.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_68-Acne_Severity_Classification_on_Mobile_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of a Supply Chain Innovation System Based on Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150667</link>
        <id>10.14569/IJACSA.2024.0150667</id>
        <doi>10.14569/IJACSA.2024.0150667</doi>
        <lastModDate>2024-06-29T11:26:05.1000000+00:00</lastModDate>
        
        <creator>Ahmed El Maalmi</creator>
        
        <creator>Kaoutar Jenoui</creator>
        
        <creator>Laila El Abbadi</creator>
        
        <subject>Supply chain management; blockchain technologies; traceability; security validation; business validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the dynamic field of global supply chain management, the adoption of cutting-edge technologies is critical for securing a competitive edge and enhancing operational efficiency. This paper explores the transformative potential of blockchain technology within the context of supply chain operations. While the theoretical promise of blockchain as a secure, transparent, and decentralized transaction recording system is undeniable, its practical adoption in supply chain systems remains ensnared in skepticism and caution. This investigation aims to unravel the efficacy of blockchain in enhancing security, efficiency, accuracy, and cost-effectiveness within supply chain systems. By bridging theoretical aspirations with practical realities, the study sheds light on both the advantages and the constraints of incorporating blockchain into supply chain management. The application of a blockchain-based system in this research demonstrates significant enhancements in supply chain processes and supplier selection within a decentralized framework. Key performance indicators underscore the system&#39;s robustness and utility. Furthermore, the deployment of smart contracts, which enable automatic verification of data modifications and access rights, underscores the platform&#39;s capability in handling diverse operations. Despite ongoing concerns regarding blockchain&#39;s performance and scalability, this study observes a positive trend towards overcoming these challenges. The findings contribute to the growing body of knowledge on blockchain technology, marking a significant step forward in its application within the realm of supply chain management.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_67-Validation_of_a_Supply_Chain_Innovation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Up on the Go: Designing a Piezoelectric Shoe Charger</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150666</link>
        <id>10.14569/IJACSA.2024.0150666</id>
        <doi>10.14569/IJACSA.2024.0150666</doi>
        <lastModDate>2024-06-29T11:26:05.0670000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Rex Bacarra</creator>
        
        <creator>Abdul Halim Bin Dahalan</creator>
        
        <creator>Pugaaneswari Velautham</creator>
        
        <creator>Khaled Abidallah Salameh Aldarab’ah</creator>
        
        <subject>Piezoelectric (PZT); generates electricity; energy harvesting; eco-friendly charging; servomotor; sustainable technology; kinetic power generation; Arduino control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As modern society continues to develop, electricity has become an essential component of daily life. However, as the demand for electricity rises, supplying some electrical loads becomes a struggle, affecting even simple tasks such as charging a mobile phone. To meet ever-expanding energy demands, it is crucial to explore cleaner and renewable power sources. This paper highlights a promising electricity generation method that utilizes piezoelectric materials. Specifically, the study employs piezoelectric (PZT) material to convert pressure from human movement into electrical power. A bridge rectifier circuit is designed to store this power in a battery, which can be used to charge mobile phones. In addition, a microcontroller is implemented to program an auto-lacing light function, with the piezoelectric material serving as the power supply for the microcontroller. The circuit is designed to calculate the total power produced by the piezoelectric material. Multisim software was used to simulate the circuit design, and the results indicate that the power generated is sufficient to charge mobile phones. The study finds that a single piezoelectric plate can generate 5 mA in one second when placed under mechanical stress (i.e., human movement). Using four piezoelectric plates, the study generated 13.48 V in one second when mechanical force was applied. This is more than enough to charge a mobile phone, as well as to power an LED and a 5 V servomotor.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_66-Power_Up_on_the_Go.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximizing Human Capital: Talent Decision-Making Using Information Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150665</link>
        <id>10.14569/IJACSA.2024.0150665</id>
        <doi>10.14569/IJACSA.2024.0150665</doi>
        <lastModDate>2024-06-29T11:26:05.0530000+00:00</lastModDate>
        
        <creator>Rui Zhang</creator>
        
        <creator>Xiaobai Li</creator>
        
        <creator>Gang Liu</creator>
        
        <subject>WASPAS; information technology; virtual reality; entropy; machine learning; T-spherical fuzzy Sets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the current fiercely competitive landscape, an organization’s success depends on its ability to leverage information technology to support personnel decisions that optimize the use of its human resources. This research examines five strategies for maximizing human capital through information technology within the framework of multi-criteria decision-making (MCDM). The alternatives considered are data-driven performance monitoring systems, artificial intelligence-driven talent acquisition platforms, virtual reality (VR) onboarding and training simulations, predictive analytics tools for succession planning and talent forecasting, and machine learning algorithms for skill assessment and development. Eight criteria (efficacy, efficiency, accuracy, accessibility, scalability, ethical concerns, influence on organizational success, and trend adaptability) were developed to assess these options. By applying the entropy-weighted WASPAS (weighted aggregated sum product assessment) approach on top of T-spherical fuzzy set (T-SFS) theory, we determine the weight associated with each option and rank the alternatives. This study adds to our understanding of how businesses can use information technology wisely to enhance human resource management, in addition to providing guidance on how to assess various approaches based on their performance across a variety of metrics. Human resource specialists and organizational leaders may use the study’s practical suggestions to improve personnel decision-making procedures and to make the most of their workforce’s potential in the digital age.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_65-Maximizing_Human_Capital.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Big Data Mining: Comparison of Multiple Machine Learning Algorithms in Predictive Modelling of Student Academic Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150664</link>
        <id>10.14569/IJACSA.2024.0150664</id>
        <doi>10.14569/IJACSA.2024.0150664</doi>
        <lastModDate>2024-06-29T11:26:05.0200000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Lee Shi Hock</creator>
        
        <creator>Omolayo M. Ikumapayi</creator>
        
        <subject>Academic performance; CGPA; education data mining; machine learning; predictive modelling; R&amp;D investment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Educational Data Mining (EDM) can be useful in predicting students’ academic performance to mitigate student attrition, support resource allocation, and aid decision-making processes in higher education institutions. This article uses a large dataset from the Programme for International Student Assessment (PISA), consisting of 612,004 participants from 79 countries, together with a machine learning approach to predict student academic performance. Unlike most of the literature, which is confined to one geographical location or to limited datasets and factors, this article studies additional factors that contribute to academic success and uses student data from various backgrounds. The proposed model predicts student performance with 74% accuracy. Gradient Boosted Trees are found to surpass the other classification models considered (Logistic Regression, Na&#239;ve Bayes, Deep Learning, Random Forest, Fast Large Margin, Generalised Linear Model, Decision Tree and Support Vector Machine). Reading skills and habits are of the highest importance in predicting students’ academic performance.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_64-Educational_Big_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ERFN: Leveraging Context for Enhanced Emotion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150663</link>
        <id>10.14569/IJACSA.2024.0150663</id>
        <doi>10.14569/IJACSA.2024.0150663</doi>
        <lastModDate>2024-06-29T11:26:05.0070000+00:00</lastModDate>
        
        <creator>Navneet Gupta</creator>
        
        <creator>R. Vishnu Priya</creator>
        
        <creator>Chandan Kumar Verma</creator>
        
        <subject>Context-based emotion recognitions; deep learning; optical flow; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The majority of previous methods for identifying emotions concentrate on facial expressions rather than the rich contextual information that signals significant emotional states. To fully utilize this contextual information and compensate for the scarcity of emotion cues, this work presents the Emotion Recognition Fusion Network (ERFN), a novel model that uses advanced techniques for efficient context-aware human emotion recognition. It incorporates the Flow Context Aware Loss Fusion (FCALF) model, which focuses on emotion analysis in a video sequence. The model uses deep feature extraction (VGG16), the Farneb&#228;ck optical flow model, and L1 loss to calculate the Average Contextual Loss (ACL) for selecting key frames. The selected frames are used to obtain the resultant optical flow images. Data augmentation techniques are applied exclusively to the training images. The resultant optical flow images undergo feature extraction using both InceptionResNetV2 and VGG16, fine-tuned by adding a layer followed by GlobalMaxPool2D and a dense layer, capturing intricate details and flow-contextual information from face, body, and scene. The fused features are fed into a Softmax layer for classification. Experimental results show that the ERFN outperforms existing models in terms of accuracy and generalization, confirming its effectiveness in capturing context-aware emotions. The proposed approach shows promising results on both a real-world uncontrolled dataset (CAER-S) and a laboratory-controlled dataset (CK+).</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_63-ERFN_Leveraging_Context_for_Enhanced_Emotion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategies for Optimizing Personalized Learning Pathways with Artificial Intelligence Assistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150662</link>
        <id>10.14569/IJACSA.2024.0150662</id>
        <doi>10.14569/IJACSA.2024.0150662</doi>
        <lastModDate>2024-06-29T11:26:04.9730000+00:00</lastModDate>
        
        <creator>Weifeng Deng</creator>
        
        <creator>Lin Wang</creator>
        
        <creator>Xue Deng</creator>
        
        <subject>Personalized learning pathways (PLPs); artificial intelligence (AI); dynamic model; incremental learning; resource recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the deepening application of artificial intelligence (AI) in the field of education, Personalized Learning Pathways (PLPs) have garnered widespread attention as a strategy to revolutionize traditional educational models. This paper explores strategies for optimizing PLPs with the aid of AI, in order to enhance learning efficiency, stimulate students&#39; interest in learning, and foster their holistic development. The background section discusses the &quot;one-size-fits-all&quot; teaching methods prevalent in traditional education models and the importance and necessity of PLPs. The study then delves into the limitations of existing methods for optimizing PLPs, especially in terms of dynamic adaptability and real-time feedback mechanisms. The paper consists of two main parts: the first constructs a dynamic model to simulate the impact of PLP design features on the student learning process; the second proposes a dynamic PLP resource recommendation algorithm based on incremental learning. By updating students&#39; abilities, preferences, and knowledge states in real time, the algorithm can provide more precise learning resource recommendations. The experimental results demonstrate that the proposed dynamic PLP resource recommendation algorithm based on incremental learning is highly effective in optimizing PLP design: it improves the accuracy of the recommendation system and positively influences students&#39; long-term learning state transitions. This demonstrates the potential and practical value of dynamic models in the field of personalized education. The methods and findings of this study not only enrich the theoretical foundation of personalized learning but also offer robust technical support for educational practice, holding significant academic and practical value.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_62-Strategies_for_Optimizing_Personalized_Learning_Pathways.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection of Learning Styles using Online Activities and Model Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150661</link>
        <id>10.14569/IJACSA.2024.0150661</id>
        <doi>10.14569/IJACSA.2024.0150661</doi>
        <lastModDate>2024-06-29T11:26:04.9600000+00:00</lastModDate>
        
        <creator>Alia Lestari</creator>
        
        <creator>Armin Lawi</creator>
        
        <creator>Sri Astuti Thamrin</creator>
        
        <creator>Nurul Hidayat</creator>
        
        <subject>Learning style; Felder-Silverman Learning Style Model; machine learning; support vector machine; recursive feature elimination; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Understanding learning styles is essential for learners and instructors to identify strengths and weaknesses in the education system. Although the Felder-Silverman Learning Style Model (FSLSM) is commonly used for this purpose, its reliance on in-person surveys can be time-consuming and prone to inaccuracies. This paper proposes an automated approach using Machine Learning (ML) to detect learning styles. The method extracts features from online activity data in Learning Management System (LMS) databases, aligning them with FSLSM indicators to label different learning styles. The dataset is divided into training and testing groups to build and evaluate Support Vector Machine (SVM) classifiers, respectively. Feature selection is performed using the Recursive Feature Elimination (RFE) algorithm to improve classifier performance, yielding the SVM-RFE algorithm. The experimental results showed promising accuracy for all model dimensions: 95.76% for the processing, 85.88% for the perception, 93.16% for the input, and 96.42% for the understanding dimension. This approach offers a robust framework for automated learning style detection, which significantly reduces reliance on manual surveys and improves efficiency in educational settings.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_61-Automated_Detection_of_Learning_Styles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Noise-Robustness of Convolutional and Recurrent Neural Networks for Baby Cry Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150660</link>
        <id>10.14569/IJACSA.2024.0150660</id>
        <doi>10.14569/IJACSA.2024.0150660</doi>
        <lastModDate>2024-06-29T11:26:04.9270000+00:00</lastModDate>
        
        <creator>Medhanita Dewi Renanti</creator>
        
        <creator>Agus Buono</creator>
        
        <creator>Karlisa Priandana</creator>
        
        <creator>Sony Hartono Wijaya</creator>
        
        <subject>Baby cry recognition; deep learning; gated recurrent unit; long short-term memory; noise robustness; signal-to-noise ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Reliable baby cry recognition plays a crucial role in infant care and monitoring, yet real-world environments pose challenges to system accuracy because of background noise. This study proposes a novel CNN architecture for baby cry recognition under varying noise conditions, featuring three convolutional layers, a max pooling layer, and a dropout rate of 0.5, and compares its performance against standard RNN models. The models were trained for 100 epochs with a batch size of 64 and evaluated in both clean and noisy environments. To simulate real-world scenarios, recordings were transformed into audio signals and subjected to varying levels of background noise at different signal-to-noise ratios (SNRs). Results indicate that both models achieved high accuracy (&gt;89%) in noise-free conditions. However, the proposed CNN maintained higher precision (93%) and overall accuracy (91%) than the RNN under 10 dB noise, demonstrating its superior noise robustness for baby cry recognition. This improvement is attributed to the CNN’s capacity to capture spatial features in audio signals, making it less susceptible to noise disruptions. These findings contribute to the development of more reliable and robust baby cry recognition systems.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_60-Evaluating_Noise_Robustness_of_Convolutional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Causal Inference and Machine Learning for Early Diagnosis and Management of Diabetes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150659</link>
        <id>10.14569/IJACSA.2024.0150659</id>
        <doi>10.14569/IJACSA.2024.0150659</doi>
        <lastModDate>2024-06-29T11:26:04.9130000+00:00</lastModDate>
        
        <creator>Sahar Echajei</creator>
        
        <creator>Mohamed Hafdane</creator>
        
        <creator>Hanane Ferjouchia</creator>
        
        <creator>Mostafa Rachik</creator>
        
        <subject>Machine learning; classification; causal inference; Bayesian networks; ensemble technique; diabetes diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the context of the increasing prevalence of diabetes, this work focuses on integrating causal inference with Machine Learning (ML) for early diagnosis and effective management of diabetes. We applied a series of advanced techniques to improve model performance, including the use of data preprocessing methods, evaluation of variable importance and causal analysis, Feature Engineering methods, and hyperparameter optimization. The diabetes prediction model is a Stacking ensemble model that combines the predictions of several base models (namely: Random Forest Classifier, XGBClassifier, Gradient Boosting Classifier). Initial results showed a precision of 0.70, a recall of 0.70, an Area Under Curve (AUC) of 0.768, and a Mean Cross Entropy (MCE) of 0.299. After optimization, precision increased to 0.73, recall to 0.73, AUC to 0.798, and MCE improved to 0.271. This approach has demonstrated a significant improvement in diabetes prediction, suggesting that the integration of causal inference and Machine Learning is a promising path for the diagnosis and management of diabetes. The reduction in MCE, alongside improvements in precision, recall, and AUC, underscores the effectiveness of our optimization techniques in enhancing model reliability and performance.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_59-Integrating_Causal_Inference_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Secure User Authentication and Authorized Scheme for Smart Home Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150658</link>
        <id>10.14569/IJACSA.2024.0150658</id>
        <doi>10.14569/IJACSA.2024.0150658</doi>
        <lastModDate>2024-06-29T11:26:04.8800000+00:00</lastModDate>
        
        <creator>Md. Razu Ahmed</creator>
        
        <creator>Mohammad Osiur Rahman</creator>
        
        <subject>Smart home automation; Internet of Things; security and privacy; ACL; IPSec; VPN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Owing to rapid technological advances, home automation has gained popularity, making daily life easier. Digitalization and automation now encompass a wide range of activities and industries, and the IoT makes home automation more affordable and appealing. With IoT-enabled remote appliance control, smart home automation can improve living standards: a home gateway configures smart, multimedia, and home networks for IoT devices. Safety of life and property is essential to human well-being, and home automation systems provide motion detection and surveillance features to enhance home security. However, avoiding excessive or fraudulent notifications remains difficult, and intelligent response and monitoring mechanisms are needed to improve the efficiency of smart home automation. This study introduces a smart home automation system designed to control household devices, monitor environmental conditions, and identify unauthorized entry into the smart home network and its immediate surroundings. A smart home network design and configuration is presented that enables secure IoT services with an Access Control List (ACL) for home networks. The research aims to design a robust authentication scheme that guarantees secure communication in a smart home environment. A Next Generation Access Control (NGAC) technique is implemented together with Telnet, SSH, IPSec and VPN to detect unauthorized access and mitigate security issues. The efficacy of the suggested design and configuration is validated using a simulation, demonstrating notable performance in the context of enhanced security measures.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_58-An_Enhanced_Secure_User_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced CoCoSo Technique for Sport Teaching Quality Evaluation with Double-Valued Neutrosophic Number Multiple-Attribute Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150657</link>
        <id>10.14569/IJACSA.2024.0150657</id>
        <doi>10.14569/IJACSA.2024.0150657</doi>
        <lastModDate>2024-06-29T11:26:04.8500000+00:00</lastModDate>
        
        <creator>Xuan Wen</creator>
        
        <creator>Changhong Pan</creator>
        
        <subject>Multiple-attribute decision-making (MADM); double-valued neutrosophic sets (DVNSs); CoCoSo technique; blended teaching quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Only by effectively combining online and offline teaching, and vigorously promoting their integration in college physical education, can the reform and innovation of college physical education teaching be maximized and teaching quality continuously improved. Although blended teaching has become one of the important techniques in college physical education and has continuously achieved new results, there are still problems in its organization and implementation that need to be seriously addressed. Blended teaching quality evaluation is treated as a defined multiple-attribute decision-making (MADM) problem. Recently, the CoCoSo and entropy techniques have been utilized to cope with MADM. Double-valued neutrosophic sets (DVNSs) are utilized to characterize fuzzy information during blended teaching quality evaluation. In this study, CoCoSo is constructed for MADM under DVNSs, and the double-valued neutrosophic number CoCoSo (DVNN-CoCoSo) technique is constructed for MADM. Finally, a numerical example of blended teaching quality evaluation is put forward to demonstrate the DVNN-CoCoSo technique.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_57-Enhanced_CoCoSo_Technique_for_Blended_Teaching_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Art Design of Wheel Rims Based on Image Mapping of Image Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150656</link>
        <id>10.14569/IJACSA.2024.0150656</id>
        <doi>10.14569/IJACSA.2024.0150656</doi>
        <lastModDate>2024-06-29T11:26:04.8330000+00:00</lastModDate>
        
        <creator>Jianhui Li</creator>
        
        <subject>Wheels; art design; styling design; user needs; image clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the customization of wheel rims, to convert users’ emotional images and needs into design solutions, research is conducted based on pixel theory, using clustering algorithms, principal component analysis and other techniques to establish image association sample libraries, obtain image mapping relationships, and construct a wheel rim shape design platform system along with system design improvements. The results showed that, unlike methods such as support vector machines, the K-means algorithm had higher classification accuracy and a smaller mean absolute error: the classification accuracy of the K-means algorithm was 93.15%, versus 84.33% for the support vector machine, and the minimum mean absolute error of the K-means algorithm was 0.56. In the application of the wheel personalized customization platform system, the improved design raised user satisfaction and ease of use, with corresponding scores of 4.40 and 4.35, respectively. The research method can transform users’ image needs into wheel shape design schemes that meet user needs.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_56-Personalized_Art_Design_of_Wheel_Rims.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Squeeze-and-Excitation-Enhanced Deep Learning Method for Automatic Modulation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150655</link>
        <id>10.14569/IJACSA.2024.0150655</id>
        <doi>10.14569/IJACSA.2024.0150655</doi>
        <lastModDate>2024-06-29T11:26:04.8030000+00:00</lastModDate>
        
        <creator>Nadia Kassri</creator>
        
        <creator>Abdeslam Ennouaary</creator>
        
        <creator>Slimane Bah</creator>
        
        <subject>Cognitive radio; modulation classification; deep learning; convolutional neural networks; Gated Recurrent Units; squeeze and excitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The rapid proliferation of mobile devices and Internet of Things (IoT) gadgets has led to a critical shortage of spectral resources. Cognitive Radio (CR) emerges as a promising technology to tackle this issue by enabling the opportunistic use of underexploited frequency bands. Automatic Modulation Classification (AMC), a technique for blindly identifying the modulation types of received signals, plays a pivotal role in carrying out several CR functions, including interference detection and link adaptation. Recent research has turned to Deep Learning (DL) networks to overcome the shortcomings of traditional AMC techniques. However, most existing DL approaches are impractical for resource-limited systems. To address this challenge, we propose a novel lightweight hybrid neural network for AMC that fuses Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) layers, along with a customized Squeeze and Excitation (SE) block. The integration of CNNs and GRUs allows for the learning of both spatial and temporal dependencies in modulated signals, while the SE block recalibrates features by modeling interdependencies between CNN network channels. Our experimental results, using the RadioML 2016.10A dataset, clearly demonstrate the superior performance of our approach in effectively managing the tradeoff between accuracy and complexity compared to baseline methods. Specifically, our approach achieves the highest accuracy of 91.73%, surpassing all reference models while reducing the memory footprint by at least 45%. In future work, further investigation is warranted to differentiate modulations sharing temporal or frequency domain characteristics and to enhance classification accuracy in high-noise environments.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_55-Efficient_Squeeze_and_Excitation_Enhanced_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Aspect-Based Sentiment Analysis in the Moroccan Cosmetics Industry with Transformer-based Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150654</link>
        <id>10.14569/IJACSA.2024.0150654</id>
        <doi>10.14569/IJACSA.2024.0150654</doi>
        <lastModDate>2024-06-29T11:26:04.8030000+00:00</lastModDate>
        
        <creator>Kawtar Mouyassir</creator>
        
        <creator>Abderrahmane Fathi</creator>
        
        <creator>Noureddine Assad</creator>
        
        <subject>MABST; Aspect-Based Sentiment Analysis (ABSA); transformer-based models; Moroccan cosmetics industry; natural language processing (NLP); influencer marketing; albert; DistillBERT; electra; XLNet (Transformer models)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In navigating the dynamic consumer landscape, this study emphasizes the collaborative synergy between influencers and brands, focusing on a cosmetics brand in the Moroccan market. Employing advanced Natural Language Processing (NLP) models, the research explores multifaceted aspects to provide a comprehensive insight into consumer sentiments and product aspects. The primary objective is to empower decision-makers by identifying both the strengths and weaknesses of their products, including evaluating how effectively the influencer promotes their product. Central to this study is the introduction of the MultiLingual Aspect-Based Sentiment Transformer (MABST) framework, a hybrid sentiment analysis model tailored for the beauty and cosmetics industry. MABST integrates cutting-edge transformer models such as Albert, DistillBERT, Electra, and XLNet, enabling advanced sentiment extraction across diverse linguistic contexts in cosmetic product reviews and influencer collaborations. This framework enhances understanding of influencer marketing dynamics and equips businesses with insights to inform strategic decisions and refine promotional strategies in the competitive digital landscape.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_54-Elevating_Aspect_Based_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effectiveness of Brain Tumor Image Generation using Generative Adversarial Network with Adam Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150653</link>
        <id>10.14569/IJACSA.2024.0150653</id>
        <doi>10.14569/IJACSA.2024.0150653</doi>
        <lastModDate>2024-06-29T11:26:04.7870000+00:00</lastModDate>
        
        <creator>Aryaf Al-Adwan</creator>
        
        <subject>Generative Adversarial Networks; images; medical; Convolutional Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Deep learning models known as Generative Adversarial Networks (GANs) have shown great potential in several applications, such as computer vision and image synthesis. They are now a viable tool in medical imaging, useful for tasks like improving diagnostic model performance, generating new images, and augmenting existing data. This paper aims to utilize the capabilities of GANs to produce synthetic MRI images with the purpose of enhancing the training dataset for tumor classification. A new method is presented to classify tumors in MRI images by combining GANs and Convolutional Neural Networks (CNNs). The method employs the Adam optimizer and Binary Cross Entropy (BCE) with Logits Loss as the criterion, which together optimize the training process and stabilize the GANs. The proposed method achieved an average accuracy of 95.1% and an average loss of 0.080 with large images. Furthermore, the proposed method is evaluated based on Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) and is compared to existing GAN models. These outcomes highlight the potential of the GAN-based approach in contributing to improved medical diagnostics and treatments.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_53-Evaluating_the_Effectiveness_of_Brain_Tumor_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Channel Coding to Enhance the Performance in Rayleigh Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150652</link>
        <id>10.14569/IJACSA.2024.0150652</id>
        <doi>10.14569/IJACSA.2024.0150652</doi>
        <lastModDate>2024-06-29T11:26:04.7700000+00:00</lastModDate>
        
        <creator>Srividya L</creator>
        
        <creator>Sudha P. N</creator>
        
        <subject>Adaptive error control coding; turbo coding; convolutional coding; bit error rate; throughput; Rayleigh channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The Rayleigh fading channel model is commonly used to model real-time wireless mobile communication, as it can emulate multipath scattering, dispersion, fading, reflection, refraction and Doppler shift. Mobility and interference change the channel conditions over time, and with them the error environment, resulting in variable bit error rates (BER). Fixed channel coding schemes have proven effective at providing data reliability despite poor channel conditions, but they fail to contend with time-varying channel conditions and therefore suffer a loss in information rate during good channel conditions. An adaptive scheme is needed that adapts dynamically to channel conditions, improving overall performance and communication reliability. This paper proposes an adaptive channel coding (ACC) technique that requires only simple statistics from the receiver and switches between two channel coding schemes dynamically in response to the changing environment, distinguishing it from schemes that dynamically tune the parameters of a single Error Control Coding (ECC) scheme. This strategy guarantees not only reliability but also spectral efficiency, as channel capacity is utilized effectively by switching between two ECCs: a less robust (high data rate) convolutional ECC is used when channel conditions are good, and a more robust (low data rate) turbo ECC when channel conditions degrade. The proposed concept is implemented in MATLAB, and the results outperform conventional fixed ECC schemes, achieving an effective reduction in the Eb/N0 required for a target BER compared to fixed or predetermined ECCs. ACC is tested under various mobile channel environments and proves resilient to varying channel conditions. It also provides flexibility in QoS by changing the switching criteria according to the application.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_52-Adaptive_Channel_Coding_to_Enhance_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Change Detection Based on Fuzzy Clustering and Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150651</link>
        <id>10.14569/IJACSA.2024.0150651</id>
        <doi>10.14569/IJACSA.2024.0150651</doi>
        <lastModDate>2024-06-29T11:26:04.7400000+00:00</lastModDate>
        
        <creator>Chenwei Wang</creator>
        
        <creator>Xiating Li</creator>
        
        <subject>Fuzzy C-means algorithm; fuzzy membership degree; Gabor texture; channel attention; neural networks; synthetic aperture radar images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the change detection of synthetic aperture radar images, image quality and change detection accuracy struggle to meet application requirements due to the influence of speckle noise. Therefore, this study improved the fuzzy C-means algorithm by introducing fuzzy membership degree and Gabor texture features. Features were weighted through channel attention, resulting in an image change detection model: the fuzzy local information C-means for Gabor textures with a multi-scale channel attention wavelet convolutional neural network. The segmentation accuracy of the model was 0.995, an improvement of 0.119 over the traditional fuzzy C-means algorithm. When multiplicative noise with different variances was added, the accuracy of the algorithm still reached 0.982 at a noise variance of 0.30. In practical application analysis, the detection and segmentation accuracy for river images was 0.983 with a partition coefficient of 0.935, and the segmentation accuracy for farmland images was 0.960 with a partition coefficient of 0.902. The algorithm therefore has good stability and anti-noise performance. It can be widely applied in various fields of synthetic aperture radar image change detection, such as disaster assessment, urban development monitoring, and environmental change monitoring. This paper provides more accurate analysis results, which help with policy formulation and effective resource management.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_51-Image_Change_Detection_Based_on_Fuzzy_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Anomaly Detection Model Based on Pearson Correlation Coefficient and Gradient Booster Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150650</link>
        <id>10.14569/IJACSA.2024.0150650</id>
        <doi>10.14569/IJACSA.2024.0150650</doi>
        <lastModDate>2024-06-29T11:26:04.7230000+00:00</lastModDate>
        
        <creator>Tuo Ding</creator>
        
        <creator>He Sui</creator>
        
        <subject>Anomaly detection; class overlap; Pearson correlation coefficient; gradient booster mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Anomaly detection aims to build a decision model that estimates the class of new data based on historical sample features. However, samples in the feature space are sometimes very close to one another, making them indistinguishable to the detection model; this is the class overlap problem. To address this issue, an anomaly detection model based on the Pearson correlation coefficient and a gradient booster mechanism is proposed in this paper. Unlike traditional resampling methods, the proposed method first groups and sorts features along several dimensions, such as feature correlation, feature importance, and feature exclusivity. It then deletes features with higher correlation and lower importance to improve the training accuracy of the detector. Furthermore, a unilateral gradient sampling mechanism further reduces ineffective or inefficient training samples to improve the training efficiency of the detector. Finally, the proposed method was compared with three feature selection methods and six anomaly detection ensemble models on six datasets. The experimental results showed that the proposed method has significant advantages in feature selection, detection performance, detection stability, and computational cost.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_50-An_Anomaly_Detection_Model_Based_on_Pearson_Correlation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Privacy Issues in Network Function Virtualization: A Review from Architectural Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150649</link>
        <id>10.14569/IJACSA.2024.0150649</id>
        <doi>10.14569/IJACSA.2024.0150649</doi>
        <lastModDate>2024-06-29T11:26:04.6930000+00:00</lastModDate>
        
        <creator>Bilal Zahran</creator>
        
        <creator>Naveed Ahmed</creator>
        
        <creator>Abdel Rahman Alzoubaidi</creator>
        
        <creator>Md Asri Ngadi</creator>
        
        <subject>Network functions virtualization; virtualized network function; network security; security threat; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Network Function Virtualization (NFV) delivers numerous benefits to customers: as a cost-effective evolution of legacy networks, it allows rapid network augmentation and extension at low cost because network functions are virtualized. On the other hand, the shared infrastructure raises significant security concerns for NFV users. Many studies in the literature report various NFV security threats. In this paper, we categorize these threats according to the NFV architecture and delineate a taxonomy of NFV security threats. This work provides detailed information about security threats, their causes, and countermeasures that reduce the security vulnerabilities of NFV. We believe that studying NFV security threats from an architectural perspective is a step toward better insight into these threats, since the roots of many NFV threats lie in the architecture. We also present how NFV design should be revamped to mitigate NFV security threats, a recent trend in this area. Finally, we highlight future research directions to provide enhanced security for future NFV-based networks.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_49-Security_and_Privacy_Issues_in_Network_Function_Virtualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Reliable Hybrid Machine Learning Model for Objective Soccer Player Valuation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150648</link>
        <id>10.14569/IJACSA.2024.0150648</id>
        <doi>10.14569/IJACSA.2024.0150648</doi>
        <lastModDate>2024-06-29T11:26:04.6770000+00:00</lastModDate>
        
        <creator>Hongtao Yu</creator>
        
        <creator>Jialiang Li</creator>
        
        <subject>Market value; machine learning; soccer player; decision tree regression; Honey Badger Algorithm; Jellyfish Search Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Football is both a popular sport and a big business. Team managers face important decisions regarding player transfers, player valuation, and in particular the determination of market values and transfer fees. Market values matter because they can be thought of as estimates of the transfer fees or prices that could be paid for a player on the transfer market. Historically, football specialists have estimated market values, but expert opinions are opaque and imprecise. Data analytics may therefore offer a reliable substitute for, or supplement to, expert-based market value estimates. This paper suggests a quantitative, objective approach to valuing football players on the market, based on applying machine learning algorithms to player performance data. To this end, Decision Tree Regression (DTR) was employed to predict the market value of football players, and two novel metaheuristic algorithms, the Honey Badger Algorithm (HBA) and the Jellyfish Search Optimizer (JSO), were utilized to enhance the performance of the DTR model. The experiment used FIFA 20 game data gathered from sofifa.com and also aimed to examine the data and pinpoint the key factors influencing market value assessment. The results showed that the hybrid DTJS model outperformed the other algorithms in predicting the players&#39; market prices, achieving the highest accuracy with an R2 value of 0.984 and the lowest error ratio compared with the baselines. These findings may prove valuable in negotiations between football teams and players&#39; agents, serving as a springboard to expedite the negotiation process and provide a quantifiable, objective assessment of a player&#39;s market worth.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_48-Developing_a_Reliable_Hybrid_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Framework to Implement DevOps Practices on Blockchain Applications (DevChainOps)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150647</link>
        <id>10.14569/IJACSA.2024.0150647</id>
        <doi>10.14569/IJACSA.2024.0150647</doi>
        <lastModDate>2024-06-29T11:26:04.6300000+00:00</lastModDate>
        
        <creator>Ramadan Nasr</creator>
        
        <creator>Mohamed I. Marie</creator>
        
        <creator>Ahmed El Sayed</creator>
        
        <subject>Blockchain; decentralized applications (dapps); DevOps; smart contracts; continuous integration (CI); continuous deployment (CD); model-driven development (MDD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As the adoption and utilization of blockchain technology continue to expand in enterprise software development, integrating the established DevOps approach emerges as a rational decision. This integration has the potential to accelerate software development and delivery, enhance software quality, and improve overall productivity. However, there is currently a lack of guidance on a structured DevOps approach within the realm of blockchain-based software development. This paper emphasizes the importance of implementing an effective DevOps process and investigates its utilization in the development of blockchain smart contracts. Specifically, this study introduces a framework developed to seamlessly incorporate DevOps principles into the smart contract development process. The primary focus of this framework is to streamline the continuous delivery and deployment of blockchain smart contracts packaged in containers. It comprises two fundamental components, delivery and deployment, which communicate through Git-distributed version control. Smart contract applications and node-specific deployment configurations are stored in dedicated GitHub repositories. The delivery component guarantees that the deployment package is synchronized with the most recent version of the smart contract application and the node deployment configuration files. The deployment component, meanwhile, is responsible for executing blockchain decentralized applications in containers across all blockchain nodes, leveraging GitHub, Jenkins, and Docker. To validate its effectiveness, the proposed framework was tested on Quorum&#39;s simple storage, Sawtooth&#39;s XO Integerkey, and Corda&#39;s token decentralized applications (dapps).</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_47-A_Hybrid_Framework_to_Implement_DevOps_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning-based Weed Classification and Detection for Precision Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150646</link>
        <id>10.14569/IJACSA.2024.0150646</id>
        <doi>10.14569/IJACSA.2024.0150646</doi>
        <lastModDate>2024-06-29T11:26:04.6170000+00:00</lastModDate>
        
        <creator>Nurul Ayni Mat Pauzi</creator>
        
        <creator>Seri Mastura Mustaza</creator>
        
        <creator>Nasharuddin Zainal</creator>
        
        <creator>Muhammad Faiz Bukhori</creator>
        
        <subject>Artificial intelligence; deep learning; CNN; transfer learning; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Artificial intelligence (AI) technologies, including deep learning (DL), have seen a sharp rise in agricultural applications in recent years. Numerous issues in agriculture have led to crop losses and detrimental effects on the environment. Precision agriculture tasks are becoming increasingly complicated; however, AI enables large improvements in learning capacity thanks to advances in deep learning techniques. This study examined how a CNN and VGG16 (transfer learning) were used for weed classification, for the application of spraying herbicides selectively in palm oil plantations, depending on the optimizer, learning rate, and weight decay used in the models. The results show that the VGG16-BN model with the Adagrad optimizer, a learning rate of 0.001, and a weight decay of 0.0001 achieved an average accuracy of 97.6 percent and a highest accuracy of 99 percent.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_46-Transfer_Learning_based_Weed_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Friend Recommender System to Influence Friends on Social Networks Based on B-Mine Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150645</link>
        <id>10.14569/IJACSA.2024.0150645</id>
        <doi>10.14569/IJACSA.2024.0150645</doi>
        <lastModDate>2024-06-29T11:26:04.5830000+00:00</lastModDate>
        
        <creator>Tingting Feng</creator>
        
        <creator>Wenya Jin</creator>
        
        <creator>Wei Li</creator>
        
        <subject>Influential nodes; recommender; social networks and B-mine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Social networks are linked by one or more particular kinds of connections, including web links, friendships, family ties, and the sharing of ideas and money. In social network analysis, graph theory is used to investigate social relationships: the individuals in the networks are the vertices, and the connections among them are the edges, of which there can be a wide variety between vertices. With the rise in Internet usage, online shopping, and social media in recent years, recommender systems have become increasingly popular, and numerous websites have successfully put such systems into place. This paper introduces an approach that uses the B-mine method to explore common patterns and enhance the accuracy of identifying influential nodes in social networks. In this method, two user similarity criteria, coverage and confidence, are used simultaneously to improve the recommender system. The behavior of previous users is analyzed, and recommendations are made to the current user based on friends&#39; behavior and similarity, as well as on their interactions and preferences across different groups. According to the simulation results, the suggested approach performs satisfactorily, with accuracy and sensitivity of 89% and 76%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_45-Friend_Recommender_System_to_Influence_Friends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Class Flower Counting Model with Zha-KNN Labelled Images Using Ma-Yolov9</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150644</link>
        <id>10.14569/IJACSA.2024.0150644</id>
        <doi>10.14569/IJACSA.2024.0150644</doi>
        <lastModDate>2024-06-29T11:26:04.5530000+00:00</lastModDate>
        
        <creator>A. Jasmine Xavier</creator>
        
        <creator>S. Valarmathy</creator>
        
        <creator>J. Gowrishankar</creator>
        
        <creator>B. Niranjana Devi</creator>
        
        <subject>Flower counting; bilateral filter; Zhang Shasha Algorithm distance measured-K-Nearest Neighbor (ZSA-KNN); Gradient Intensity-Canny Edge Detection (GI-CED); mish-activated YoloV9</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The flowering period is a critical time in the growth of plants, and counting flowers can help farmers predict the corresponding field&#39;s yield. Although several works have been proposed for flower counting, they do not predict counts for different kinds of flowers. Hence, a novel model is proposed in this study. The model is fed different flower images as input, which first undergo pre-processing: the images are converted to grayscale for improved accuracy, and noise is removed using bilateral filters. The noise-free images are then passed to edge detection using GI-CED. The edge-detected images are augmented to improve the learning rate of the model and labeled using ZHA-KNN. Features extracted from the labeled images are given to MA-YoloV9, which is pre-trained to detect the flowers in each image, and the count is obtained as output. Overall, the proposed model achieved an accuracy of about 98.8% and an F1-score of 92.2%, far better than previous counting models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_44-Multi_Class_Flower_Counting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Second Life Affects the Existence of Arab Residents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150643</link>
        <id>10.14569/IJACSA.2024.0150643</id>
        <doi>10.14569/IJACSA.2024.0150643</doi>
        <lastModDate>2024-06-29T11:26:04.5370000+00:00</lastModDate>
        
        <creator>Galal Eldin Abbas Eltayeb</creator>
        
        <subject>Internet; second life; virtual worlds; Arab; values; ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The 3D virtual community known as Second Life (SL), available on the Internet (www.secondlife.com), represents the latest generation of online services for business, learning, training, and entertainment. People, regional and ethnic groups, business organizations, social activities, and various societal environments populate this world. Its inhabitants, known as residents, use personal avatars to represent themselves. People from the Arab region are also present in this world, bringing their activities, emotions, and actions as human beings into it. Like others, Arab residents cannot avoid living in these communities and using this unlimited space and time. SL societies honour their own traditions, ethics, and behaviors as personal values. Since Arab society, in particular, has its own values, traditions, and ethics, could these values be significantly reflected in Second Life society? This paper aims to pinpoint the possible consequences that certain ethical attitudes hold for Arab residents, while also posing the crucial question of whether these values and ethics align with the diverse societies of the SL realm. The paper identifies the possibility of a decline in the popularity and population of SL due to the reluctance of Arab societies, even though a large number of Arabs have access to Internet services, given the technical capabilities and Internet provision available in most Arab countries.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_43-Virtual_Second_Life_Affects_the_Existence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artistic Color Matching Technology Based on Silhouette Coefficient and Visual Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150642</link>
        <id>10.14569/IJACSA.2024.0150642</id>
        <doi>10.14569/IJACSA.2024.0150642</doi>
        <lastModDate>2024-06-29T11:26:04.5070000+00:00</lastModDate>
        
        <creator>Huizhou Li</creator>
        
        <creator>Wubin Zhu</creator>
        
        <subject>Silhouette coefficient; visual perception; K-Means; color matching; similarity measurement; Pix2Pix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Nowadays, traditional color matching methods cannot meet current market demand, and the many factors that must be considered in a design reduce design efficiency, so more efficient design methods are needed. This study therefore proposed an improved K-Means algorithm based on silhouette coefficients and designed an adaptive model for extracting an image&#39;s main colors. Subsequently, an evaluation method for artistic color matching schemes based on visual perception and similarity measurement was introduced. Finally, a Pix2Pix network based on visual aesthetics was designed to develop color matching schemes. The results confirmed that, in the objective evaluation of main color extraction, the structural similarity of the main color images generated using the silhouette coefficient was superior to other methods, with a maximum of 0.675 and an average of 0.663. Meanwhile, the peak signal-to-noise ratio of the main color images generated by this method reached a maximum of 21.49 dB, with an average of 21.05 dB. In the validation of the aesthetics-based Pix2Pix, its design schemes achieved an average color palette similarity of 0.807 and an average comprehensive evaluation index of 0.798, better than Pix2Pix without visual aesthetics. In the verification of computational efficiency, the average color matching time of the aesthetics-based Pix2Pix network model was only 13.75 ms, whereas the K-Means clustering model averaged 135.67 ms. Overall, the designed main color extraction model and color matching model have strong practical applicability, providing effective auxiliary design solutions for the design and development of artistic products and helping to improve design efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_42-Artistic_Color_Matching_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Technology Investigation Based on Fingerprint Devices and Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150641</link>
        <id>10.14569/IJACSA.2024.0150641</id>
        <doi>10.14569/IJACSA.2024.0150641</doi>
        <lastModDate>2024-06-29T11:26:04.4900000+00:00</lastModDate>
        
        <creator>Xuemei Zhao</creator>
        
        <subject>Investigation technology; fingerprint devices; image vision; fingerprint localization; image recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In response to the inaccurate visual positioning of fingerprint data images in investigative techniques, a new method based on wireless networks and artificial intelligence is proposed. The method integrates wireless networks with image vision, and enhances fingerprint data and images using cross-temporal generative networks and channel state information. The results indicated that the maximum positioning error of the new model was 1.3 m, which was 0.7 m, 0.2 m, and 0.4 m lower than that of the other models. Its minimum positioning error in indoor environments was 0.9 m, lower than the 1.0 m, 1.4 m, and 1.6 m of the other models. The proposed model therefore had higher localization performance and recognition accuracy: the average accuracy improved by about 4.5% over the TDF method, which had the lowest accuracy, and the average root mean square error was relatively low, with a minimum of 2.15, which was 4.43 lower than that of the highest, the SDF model. The proposed method thus offers better fingerprint recognition, localization, and investigation techniques, providing useful guidance for research on fingerprint localization and image recognition localization.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_41-Image_Technology_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Foliar Nitrogen Estimation with Artificial Intelligence and Technological Tools: State of the Art and Future Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150640</link>
        <id>10.14569/IJACSA.2024.0150640</id>
        <doi>10.14569/IJACSA.2024.0150640</doi>
        <lastModDate>2024-06-29T11:26:04.4600000+00:00</lastModDate>
        
        <creator>Angeles Gallegos</creator>
        
        <creator>Mayra E. Gavito</creator>
        
        <creator>Heberto Ferreira-Medina</creator>
        
        <subject>Digital images; spectral data; estimation models; technological tools; nitrogen</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Nitrogen plays a fundamental role in plant growth, but its over-application has significant negative impacts on farmers and the environment. This nutrient is often supplied in excess to prevent growth limitations when it ought to be administered in exact quantities, because many farmers lack access to technology or to affordable soil and plant chemical analyses. Precision agriculture through the monitoring of crop nutrition may be possible with quantitative, non-destructive methods and technological tools that allow farmers to conduct a rapid and representative verification of their fertilizer applications. In this sense, we carried out a systematic review and bibliometric analysis of recent scientific research to answer three questions: 1) Can artificial-intelligence-based, non-destructive analysis of plant nutrition provide relevant information for decision-making in agricultural systems? 2) Have recent studies reached the stage of developing technological tools to be applied in agricultural systems and field conditions? 3) What is the way forward to popularize the application and development of technological tools in agricultural systems? We found that non-destructive analyses of foliar nutrition need to provide more supportive information for decision-making, given the challenge of interpreting and replicating results in agricultural systems operating under uncontrolled conditions, such as field conditions. To address this issue, we propose developing accessible technological tools, such as mobile applications, tailored to farmers’ needs. However, most studies had not yet considered developing a technological tool as part of their objectives. It is therefore critical to develop accessible and affordable technologies and monitoring systems that bring precision agriculture within reach, since the conservation and sustainable management of natural resources demand translating scientific knowledge into supporting tools that reach farmers and decision-makers worldwide. The way forward is innovation through technological developments that enhance current agricultural systems.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_40-Foliar_Nitrogen_Estimation_with_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Latent Variables Improve Hard-Constrained Controllable Text Generation on Weak Correlation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150639</link>
        <id>10.14569/IJACSA.2024.0150639</id>
        <doi>10.14569/IJACSA.2024.0150639</doi>
        <lastModDate>2024-06-29T11:26:04.4430000+00:00</lastModDate>
        
        <creator>Weigang Zhu</creator>
        
        <creator>Xiaoming Liu</creator>
        
        <creator>Guan Yang</creator>
        
        <creator>Jie Liu</creator>
        
        <creator>Haotian Qi</creator>
        
        <subject>Latent variables; controllable text generation; weak correlation; hard constraint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Hard-constrained controllable text generation aims to forcefully generate texts that contain a specified constrained vocabulary, fulfilling the demands of more specialized application scenarios in comparison to soft-constrained controllable text generation. However, in the presence of multiple weak-correlation constraints in the constraint set, soft-constrained controllable models aggravate the constraint loss phenomenon, while hard-constrained controllable models suffer significant quality degradation. To address this problem, a method for hard-constrained controllable text generation based on latent variables, improving on weak correlations, is proposed. The method utilizes latent variables to capture both global and local constraint correlation information to guide the language model to generate hard-constrained controllable text at the macro and micro levels, respectively. The introduction of latent variables not only reveals the latent correlation between constraints, but also helps the model to precisely satisfy these constraints while maintaining semantic coherence and logical correctness. Experimental findings reveal that under conditions of weak-correlation hard constraints, the quality of text generated by the proposed method exceeds that of currently established strong baseline models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_39-Latent_Variables_Improve_Hard_Constrained_Controllable_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natsukashii: A Sentiment Emotion Analytics Based on Recent Music Choice on Spotify</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150638</link>
        <id>10.14569/IJACSA.2024.0150638</id>
        <doi>10.14569/IJACSA.2024.0150638</doi>
        <lastModDate>2024-06-29T11:26:04.3970000+00:00</lastModDate>
        
        <creator>Khor Zhen Win</creator>
        
        <creator>Mafas Raheem</creator>
        
        <subject>Spotify; sentiment analysis; data preparation; data visualization; web development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Natsukashii offers a delightful platform for users to seamlessly connect with their Spotify accounts and delve into cherished musical moments, fostering a profound emotional connection with their recent experiences. This platform harnesses the power of Spotify&#39;s data, facilitating a secure connection to users&#39; accounts while ensuring that no Spotify data is stored locally. Its array of features includes captivating data visualizations, such as display cards, radar charts, and area charts, elegantly showcasing both recent favorites and top-listened tunes. However, the crowning jewel of Natsukashii lies in its ability to provide users with a heartfelt insight into their current mood, derived from the audio features of their recent playlist selections. By meticulously preparing and analyzing the audio features provided by Spotify, Natsukashii delivers a personalized sentiment analysis, offering users a poignant glimpse into their emotional state through the lens of their musical preferences. Moreover, this enriching experience is seamlessly accessible across desktop and mobile platforms, compatible with popular web browsers like Google Chrome, Firefox, and Microsoft Edge.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_38-Natsukashii_A_Sentiment_Emotion_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A GAN-based Hybrid Deep Learning Approach for Enhancing Intrusion Detection in IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150637</link>
        <id>10.14569/IJACSA.2024.0150637</id>
        <doi>10.14569/IJACSA.2024.0150637</doi>
        <lastModDate>2024-06-29T11:26:04.3500000+00:00</lastModDate>
        
        <creator>S. Balaji</creator>
        
        <creator>G. Dhanabalan</creator>
        
        <creator>C. Umarani</creator>
        
        <creator>J. Naskath</creator>
        
        <subject>Distributed Denial of Service (DDoS); Internet of Things (IoT); Deep Neural Network (DNN); intrusion detection; Generative Adversarial Network (GAN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The Internet of Things (IoT) involves intelligent objects sharing information to accomplish tasks in the environment and improve living standards. In resource-constrained environments, it is an extremely difficult chore to impart security against intrusion. IoT networks are unprotected against Distributed Denial of Service (DDoS), gray hole, sinkhole, wormhole, spoofing, and Sybil attacks. In recent years, deep neural network (DNN) methodologies have been widely used to detect malicious attacks. We develop a hybrid deep learning-based GAN network (HDGAN) to detect malicious attacks in IoT networks. Due to the complex and time-varying dynamic environment of IoT networks, model training samples are insufficient, and intrusion samples mixed with normal samples lead to a high false detection rate. We created a dynamic distributed IDS to detect malicious behaviors without centralized controllers. Preprocessing sets threshold values to identify malicious behaviors. Experimental results show that HDGAN outperforms existing algorithms, with higher accuracy (98%) and precision (98%) and a 95% lower False Positive Rate (FPR).</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_37-A_GAN_based_Hybrid_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Postpartum Depression Identification: Integrating Mutual Learning-based Artificial Bee Colony and Proximal Policy Optimization for Enhanced Diagnostic Precision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150636</link>
        <id>10.14569/IJACSA.2024.0150636</id>
        <doi>10.14569/IJACSA.2024.0150636</doi>
        <lastModDate>2024-06-29T11:26:04.3170000+00:00</lastModDate>
        
        <creator>Yayuan Tang</creator>
        
        <creator>Tangsen Huang</creator>
        
        <creator>Xiangdong Yin</creator>
        
        <subject>Postpartum depression; imbalanced classification; Proximal Policy Optimization; Artificial Bee Colony; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Postpartum depression (PPD) affects approximately 12% of mothers, posing significant challenges for maternal and child health. Despite its prevalence, many affected women lack adequate support. Early identification of those at high risk is cost-effective but remains challenging. This study introduces an innovative model for PPD detection, combining the Mutual Learning-based Artificial Bee Colony (ML-ABC) method with Proximal Policy Optimization (PPO). This model uses a PPO-based algorithm tailored to the imbalanced dataset characteristics, employing an artificial neural network (ANN) for policy formation in categorization tasks. PPO enhances stability by preventing drastic policy shifts during training, treating the training process as a series of interconnected decisions, with each data point considered a state. The network, acting as an agent, improves at recognizing fewer common classes through rewards or penalties. The model incorporates an advanced pre-training strategy using ML-ABC to adjust initial weight configurations to increase classification precision, enhancing early pattern recognition. Evaluated on a Swedish study (2009-2018) dataset comprising 4313 cases, the model demonstrates superior precision and accuracy, with accuracy and F-measure scores of 0.91 and 0.88, respectively, proving highly effective for identifying PPD.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_36-Postpartum_Depression_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Algorithm Research and Performance Optimization of Financial Treasury Big Data Monitoring Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150635</link>
        <id>10.14569/IJACSA.2024.0150635</id>
        <doi>10.14569/IJACSA.2024.0150635</doi>
        <lastModDate>2024-06-29T11:26:04.2700000+00:00</lastModDate>
        
        <creator>Yanbing Wang</creator>
        
        <creator>Ding Ding</creator>
        
        <subject>Deep learning; financial database big data monitoring; algorithm research; performance optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the rapid development of information technology and the advent of the digital age, the management of the fiscal treasury is facing unprecedented challenges and opportunities. In order to improve the efficiency and effectiveness of deep learning algorithms in the fiscal treasury big data monitoring platform, this paper studies performance optimization methods for such models. It reviews the basic concepts, methods, and applications of deep learning and their use in the fiscal treasury big data monitoring platform, where deep learning algorithms are widely applied in image recognition, natural language processing, recommendation systems, and other fields. The paper first conducts in-depth theoretical research on deep learning algorithms, including various neural network structures (such as convolutional neural networks, CNNs, and recurrent neural networks, RNNs), optimization algorithms (such as gradient descent and its variants), and regularization techniques. In addition, we study the practical applications of deep learning in fields such as image processing, natural language processing, and recommendation systems. To verify the effectiveness of deep learning algorithms in the fiscal treasury big data monitoring platform, we designed corresponding experiments, including using deep learning algorithms for image recognition of financial documents, natural language processing, and building recommendation systems. We collected real fiscal treasury data as the experimental dataset and preprocessed and annotated the data.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_35-Deep_Learning_Algorithm_Research_and_Performance_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Construction Benefit Evaluation Method Combining C-OWA Operator and Grey Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150634</link>
        <id>10.14569/IJACSA.2024.0150634</id>
        <doi>10.14569/IJACSA.2024.0150634</doi>
        <lastModDate>2024-06-29T11:26:04.2230000+00:00</lastModDate>
        
        <creator>Yunzhu Sun</creator>
        
        <creator>Yunfeng Zhang</creator>
        
        <subject>C-OWA; Grey system; clustering; intelligent construction; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Currently, there is a lack of effective, objective quantitative methods for evaluating the benefits of smart construction. Therefore, this study proposes a comprehensive method for evaluating the benefits of smart construction. This method establishes an indicator system from the perspective of evaluation objectives, and on this basis uses a continuous ordered weighted average operator to ensure the objectivity of indicator weight allocation. Afterwards, the grey clustering method is used to form a scoring matrix, achieving an effective comprehensive quantitative evaluation. The results showed that for the selected project, the comprehensive benefit value evaluated was 8.342, indicating that the smart construction efficiency of the project had reached a good level. Meanwhile, the comprehensive benefits of the project showed a stepwise upward trend from 2021 to 2023. This study aims to design and apply a smart construction benefit evaluation method that integrates the continuous ordered weighted average operator and grey clustering, which is practical and can provide data reference for the project management of smart buildings.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_34-A_Smart_Construction_Benefit_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incremental Learning for GRU and RNN-based Assamese UPoS Tagger</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150633</link>
        <id>10.14569/IJACSA.2024.0150633</id>
        <doi>10.14569/IJACSA.2024.0150633</doi>
        <lastModDate>2024-06-29T11:26:04.2100000+00:00</lastModDate>
        
        <creator>Kuwali Talukdar</creator>
        
        <creator>Shikhar Kumar Sarma</creator>
        
        <subject>Assamese UPoS; PoS tagger; RNN; GRU; incremental learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This research paper introduces a novel approach to enhance the performance of Universal Part-of-Speech (UPoS) tagging for the low-resource language Assamese, employing Recurrent Neural Networks (RNNs) and Gated Recurrent Units (GRUs). The novelty added in this study is the experimentation with Incremental Learning, a dynamic paradigm allowing the models to continually refine their understanding as they encounter new sets of linguistic data. The proposed model utilizes the strengths of GRUs and traditional RNNs to capture long-range sequential dependencies and contextual information within Assamese sentences. Incorporation of Incremental Learning ensures the model&#39;s adaptability to evolving linguistic patterns, which is particularly crucial for under-resourced languages like Assamese. Experimental results showcase the superiority of the proposed approach, achieving state-of-the-art accuracy in Assamese UPoS tagging. The research not only contributes to the field of natural language processing but also addresses the specific challenges posed by under-resourced languages. The significance of Incremental Learning is highlighted, showcasing its role in dynamically updating the model&#39;s knowledge base with new UPoS-tagged data. This feature proves essential in real-world scenarios where language evolves, ensuring sustained optimal performance in Assamese UPoS tagging. The paper presents the details of an innovative framework for UPoS tagging in Assamese, combining Incremental Learning with Deep Learning techniques and pushing the boundaries of natural language processing models for low-resource languages while exploring the importance of dynamic learning paradigms.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_33-Incremental_Learning_for_GRU_and_RNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT Solution to Detect Overheated Idler Rollers in Belt Conveyors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150632</link>
        <id>10.14569/IJACSA.2024.0150632</id>
        <doi>10.14569/IJACSA.2024.0150632</doi>
        <lastModDate>2024-06-29T11:26:04.1770000+00:00</lastModDate>
        
        <creator>Manuel J. Ibarra-Cabrera</creator>
        
        <creator>Jaime Guevara Rios</creator>
        
        <creator>Dennis Vargas Ovalle</creator>
        
        <creator>Mario Aquino-Cruz</creator>
        
        <creator>Hugo D. Calderon-Vilca</creator>
        
        <creator>Sergio F. Ochoa</creator>
        
        <subject>IoT system; overheated idler detection; conveyor belts; mining companies; autonomous and on-demand monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>It is common knowledge that mechanical systems need oversight and maintenance procedures. There are numerous prevalent operation monitoring techniques, and in the era of IoT and predictive maintenance, it is possible to find multiple solutions to supervise these systems. This article describes the design and implementation of a low-cost system, which uses an IoT approach to detect overheated idlers in conveyor belts at mining facilities. The system involves the use of temperature sensors, in coordination with heat map image sensors. The users (i.e., mining operators) can monitor overheated idlers along the whole conveyor belt, making on-demand queries using Telegram or a website, and also receiving autonomous warnings. Prototypes of this system were installed on a conveyor belt at a construction materials manufacturing company, and also at a copper mining company, both located in Apurimac, Peru. The usability and usefulness of the system were evaluated by 20 experts in the maintenance and operation of conveyor belts, who completed the questionnaire proposed by the Technology Acceptance Model (TAM). The results show that 91% of them consider the system useful for detecting the overheating of idlers in a conveyor belt, and 93% consider the solution easy to use.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_32-An_IoT_Solution_to_Detect_Overheated_Idler_Rollers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing the Accuracy of Writer Identification Based on Bee Colony Optimization Algorithm and Hybrid Deep Learning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150631</link>
        <id>10.14569/IJACSA.2024.0150631</id>
        <doi>10.14569/IJACSA.2024.0150631</doi>
        <lastModDate>2024-06-29T11:26:04.1470000+00:00</lastModDate>
        
        <creator>Hao Libo</creator>
        
        <creator>Xu Jingqi</creator>
        
        <subject>Optimization; bee colony algorithm; deep learning; author identity recognition; handwriting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Identifying a writer&#39;s identity from offline handwriting images is one of the most important and challenging classification problems, and it has been the focus of many researchers in recent years. This article presents a novel approach to identifying the author of offline Persian manuscripts from scanned images based on deep convolutional neural networks. For the first time in the proposed network, the bee colony algorithm is used in the middle layers of a deep convolutional neural network in order to improve the accuracy of author identification, optimize the parameters, and improve learning performance. The presented approach was tested independently of the written language, on both Persian and English. The proposed method is more accurate than previous studies for the IMA dataset, with an accuracy of 97.60%. Moreover, for the Firemaker dataset, the proposed model has significantly improved over existing results, with the accuracy of the current model being 99.71%, a value that is 1.78% higher than the results of previous models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_31-Increasing_the_Accuracy_of_Writer_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an Experimental Setup for Data Provenance Tracking using a Public Blockchain: A Case Study using a Water Bottling Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150630</link>
        <id>10.14569/IJACSA.2024.0150630</id>
        <doi>10.14569/IJACSA.2024.0150630</doi>
        <lastModDate>2024-06-29T11:26:04.1300000+00:00</lastModDate>
        
        <creator>O. L. Mokalusi</creator>
        
        <creator>R. B. Kuriakose</creator>
        
        <creator>H. J. Vermaak</creator>
        
        <subject>Data provenance; public blockchain; smart contracts; supply chain; smart manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Data provenance, in an end-to-end supply chain context, refers to the tracking of the origin and history of every raw material, process, packaging and distribution step involved in a manufacturing network. The traditional client-server architecture utilised in centralised systems stores data in a single location, making it vulnerable to single points of failure, data tampering, and unauthorised access. As a result, there is a lack of data provenance and standardisation for products in a manufacturing supply chain, which leads to a lack of traceability and transparency. Therefore, this article presents the hypothesis that these challenges can be overcome by incorporating data provenance into blockchain-based smart contracts for traceability and transparency. This article is structured such that it firstly discusses data provenance traceability, with a focus on the cloud-based storage system architecture domains for data provenance traceability across end-to-end supply chains. Secondly, it sheds more light on the design of an experimental setup for blockchain-based data provenance traceability in a manufacturing supply chain, using a case study of a water bottling plant. Finally, it showcases and discusses the results of the experiments conducted for this purpose.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_30-Designing_an_Experimental_Setup_for_Data_Provenance_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Entropy of the Heart Rate Signal During the Creative Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150629</link>
        <id>10.14569/IJACSA.2024.0150629</id>
        <doi>10.14569/IJACSA.2024.0150629</doi>
        <lastModDate>2024-06-29T11:26:04.1000000+00:00</lastModDate>
        
        <creator>Ning Zhu</creator>
        
        <subject>Heart rate signal; creative process; entropy; autonomous signals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Among the most important cognitive behaviors, creativity is essential for the flourishing of societies and mastery of various aspects of life around us. The effects of creative activities on the brain have only been examined in a few limited studies to date, and their effects on the autonomic system have not been extensively studied. In this study, the changes in the heart rate signal before and during creative activity were examined using methods based on extracting chaotic and non-linear features from the heart rate signal. In particular, this study explores the qualitative changes in entropy during creative thinking and compares them with the resting state to determine whether or not creative activity is progressing. Analyzing the heart rate signals of 52 people while they performed the three activities of the Torrance creativity test, and comparing them with the resting state, showed that approximate entropy and fuzzy entropy increased with the progress of the creative process. Moreover, comparing each stage of creativity to the previous stage during each activity shows, for both types of entropy, an increase in the average value at the end of each activity. Comparing these stages with the stage two minutes earlier shows consistently incremental changes in activity 3 for both entropies. These entropies increase as the signal becomes more irregular and complex during the creative process. Our findings reveal significant increases in both approximate entropy and fuzzy entropy during creative activities compared to the resting state, suggesting heightened complexity and irregularity in heart rate dynamics as creativity unfolds. These results not only advance our understanding of the physiological correlates of creativity but also highlight the potential of heart rate entropy analysis as a tool for evaluating and enhancing creative abilities.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_29-Analysis_of_the_Entropy_of_the_Heart_Rate_Signal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Theoretical Framework for Temporal Graph Warehousing with Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150628</link>
        <id>10.14569/IJACSA.2024.0150628</id>
        <doi>10.14569/IJACSA.2024.0150628</doi>
        <lastModDate>2024-06-29T11:26:04.0670000+00:00</lastModDate>
        
        <creator>Annie Y. H. Chou</creator>
        
        <creator>Frank S. C. Tseng</creator>
        
        <subject>Data warehousing; graph database; graph warehousing; social computing; temporal data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The evolution of data management systems has witnessed a paradigm shift towards dynamic and temporal representations of relationships. Graph databases, positioned as key players in managing highly-connected data with a fundamental requirement for relationship analysis, have recognized the need for incorporating temporal features. These features are crucial for capturing the temporal dynamics inherent in various applications, offering a more comprehensive understanding of relationships over time. This theoretical exploration emphasizes the importance of incorporating temporal dimensions into graph data warehousing for contemporary applications. Temporal features introduce a dynamic dimension to graph data, enabling a more nuanced understanding of relationships and patterns over time. The integration of temporal features in graph data management and analysis not only addresses the dynamic nature of contemporary applications but also contributes to enhanced modeling and analytical capabilities.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_28-A_Theoretical_Framework_for_Temporal_Graph_Warehousing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Digital Twin Model for Improved Pasture Management at Sheep Farm to Mitigate the Impact of Climate Change</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150627</link>
        <id>10.14569/IJACSA.2024.0150627</id>
        <doi>10.14569/IJACSA.2024.0150627</doi>
        <lastModDate>2024-06-29T11:26:04.0370000+00:00</lastModDate>
        
        <creator>Ntebaleng Junia Lemphane</creator>
        
        <creator>Ben Kotze</creator>
        
        <creator>Rangith Baby Kuriakose</creator>
        
        <subject>Artificial intelligence; artificial neural network; climate change; digital twin; Internet of Things; machine learning; pasture management; smart farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Small-scale livestock farmers experience significant losses because of decreased productivity caused by the decline in pasture production brought on by climate change. Technology in livestock farming introduced the idea of &quot;smart farming,&quot; which has simplified pasture management. Internet of Things (IoT), Artificial Intelligence (AI) and data analytics are just a few of the cutting-edge technology techniques that smart farming incorporates. Digital twin technology is proposed in this study to alleviate the challenge of changing weather patterns that affect pasture management. A digital twin model is developed to predict pasture height, to ascertain the predicted amount of pasture and ensure that the sheep have access to enough food for sustainable production. Pasture growth is influenced by temperature, rainfall and soil moisture; thus, pasture height predictions depend on these factors. The digital twin is made of predictive models built on historical and real-time data collected from the IoT sensors and stored in the ThingSpeak&#174; cloud. Data analysis was performed in MATLAB&#174; using a neural network algorithm, and the predictions of the system are modelled in the SIMULINK&#174; platform. The digital twin predicted the pasture height to be 52 cm, while the observed reading was 56 cm. Therefore, with a prediction error of -4 cm, the digital twin can serve to enhance pasture management through its capabilities and assist farmers in decision making.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_27-Developing_a_Digital_Twin_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Sensor Fusion and YOLOv5 Model for Automated Detection of Aircraft Cabin Door</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150626</link>
        <id>10.14569/IJACSA.2024.0150626</id>
        <doi>10.14569/IJACSA.2024.0150626</doi>
        <lastModDate>2024-06-29T11:26:04.0200000+00:00</lastModDate>
        
        <creator>Ihnsik Weon</creator>
        
        <creator>Soon-Geul Lee</creator>
        
        <subject>Jet bridge; Yolo-v5; sensor fusing; segmentation; door detects; automation docking system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This study investigated perception technology for an autonomous driving system to enable independent connection between an aircraft and a boarding bridge. GigE video sensors and solid-state lidars were installed on the cabin side of the boarding bridge, and a technology that fuses the data from these two different sensors was developed and applied. Using the fused data, a technique for identifying the aircraft door was researched based on Yolo-v5, a deep learning-based feature point extractor that was able to identify the door after being trained with more than 10,000 frames of images under predetermined weather and time conditions. Additionally, a parallel alignment control function was applied between the aircraft body and the cabin of the boarding bridge to increase the reliability of the aircraft door identification technology based on the fused data. To achieve this, a region of interest was set within the fused data so that the distance deviation to the left and right of the cabin could be calculated. Finally, to verify the research results, tests were conducted to identify aircraft doors under various environmental conditions with more than six airlines selected. Originally, the Yolo-v5 model achieved 93.5% accuracy, but through this study, the detection accuracy for aircraft doors in the limited environment was increased to over 95%.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_26-Multi_Sensor_Fusion_and_YOLOv5_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Image Encryption Technology Based on Chaotic Sequence Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150624</link>
        <id>10.14569/IJACSA.2024.0150624</id>
        <doi>10.14569/IJACSA.2024.0150624</doi>
        <lastModDate>2024-06-29T11:26:03.9900000+00:00</lastModDate>
        
        <creator>Li Shen</creator>
        
        <subject>Chaotic sequence algorithm; image encryption; mapping effect; pixel code; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the wide application of computer images and the popularization of network transmission, the public demand for image encryption technology is becoming more and more urgent. Privacy and data security can be effectively guaranteed through image encryption, but the existing encryption technology still has problems such as high overhead and poor encryption performance. Therefore, in order to improve the processing efficiency of encryption technology, the study constructs a two-dimensional composite chaotic system based on an analysis of existing chaotic sequence algorithms. Additionally, a novel approach to image encryption is put forth by merging the composite chaotic system with algorithmic optimization of disruption and diffusion in the image encryption phase. According to the experimental results, the chaotic mapping performed best when the chaotic system&#39;s parameters were between 10 and 75. At this setting, the algorithm had the highest encryption speed of 632 Mbit/s and decryption speed of 583 Mbit/s, the lowest resource consumption rate of 21.4% and the lowest delay rate of 11.5%. It can be seen that the proposed method shows significant advantages in terms of the security and effectiveness of image encryption and is capable of realizing high-quality encryption of computer images.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_24-Computer_Image_Encryption_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network-Powered Intrusion Detection in Multi-Cloud and Fog Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150625</link>
        <id>10.14569/IJACSA.2024.0150625</id>
        <doi>10.14569/IJACSA.2024.0150625</doi>
        <lastModDate>2024-06-29T11:26:03.9900000+00:00</lastModDate>
        
        <creator>Yanfeng ZHANG</creator>
        
        <creator>Zhe XU</creator>
        
        <subject>Cloud computing; fog computing; intrusion detection; privacy protection; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Cloud Computing has revolutionized the technological landscape, offering a platform for resource provisioning where organizations can access computing resources, storage, applications, and services. The shared nature of these resources introduces complexities in ensuring security and privacy. With the advent of edge and fog computing alongside cloud technologies, the processing, data storage, and management paradigm faces challenges in safeguarding against potential intrusions. Attacks on fog computing, IoT cloud, and related advancements can have pervasive and detrimental consequences. To address these concerns, various security standards and schemes have been suggested and deployed to enhance fog computing security. In particular, the focus of these security measures has become vital due to the involvement of multiple networks and numerous fog nodes through which end-users interact. These nodes facilitate the transfer of sensitive information, amplifying privacy concerns. This paper proposes a multi-layered intermittent neural network model tailored specifically for enhancing security in fog computing, especially in proximity to end-users and IoT devices. Emphasizing the need to mitigate privacy risks inherent in extensive network connections, the model leverages a customized adaptation of the NSL-KDD dataset, a challenging dataset commonly applied to evaluate intrusion detection systems. A range of current models and feature sets are rigorously investigated to quantify the effectiveness of the proposed approach. Through comprehensive research findings and replication studies, the paper demonstrates the stability and robustness of the suggested method across various performance metrics employed for intrusion detection. The assessment illustrates the model&#39;s superior capability in addressing privacy and security challenges in hybrid cloud environments incorporating intrusion detection systems, offering a promising solution for the evolving landscape of cloud-based computing technologies.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_25-Neural_Network_Powered_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Risk Prediction and Management using Machine Learning and Natural Language Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150623</link>
        <id>10.14569/IJACSA.2024.0150623</id>
        <doi>10.14569/IJACSA.2024.0150623</doi>
        <lastModDate>2024-06-29T11:26:03.9600000+00:00</lastModDate>
        
        <creator>Tianyu Li</creator>
        
        <creator>Xiangyu Dai</creator>
        
        <subject>Financial risk management; machine learning; Natural Language Processing (NLP); Deep FM model; risk prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the continuous development and changes in the global financial markets, financial risk management has become increasingly important for the stable operation of enterprises. Traditional financial risk management methods, primarily relying on financial statement analysis and historical data statistics, show clear limitations when dealing with large-scale unstructured data. The rapid development of machine learning and Natural Language Processing (NLP) technologies in recent years offers new perspectives and methods for financial risk prediction and management. This paper explores and conducts empirical analysis of financial risk management using these advanced technologies, with a particular focus on the application of NLP in measuring financial risk tendencies, and on financial risk prediction and management based on a Deep neural network - Factorization Machine (DeepFM) model. Through in-depth analysis and research, this paper proposes a new financial risk management model that combines NLP and deep learning technologies, aimed at improving the accuracy and efficiency of financial risk prediction. This study not only broadens the theoretical horizons of financial risk management but also provides effective technical support and decision-making references for practical operations.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_23-Financial_Risk_Prediction_and_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Control-based Adaptive Adjustment of Dynamic Stiffness for Stewart Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150622</link>
        <id>10.14569/IJACSA.2024.0150622</id>
        <doi>10.14569/IJACSA.2024.0150622</doi>
        <lastModDate>2024-06-29T11:26:03.9430000+00:00</lastModDate>
        
        <creator>Zhiqiang Zhao</creator>
        
        <creator>Yuetao Liu</creator>
        
        <creator>Changsong Yu</creator>
        
        <subject>Fuzzy control; regulation methods; Stewart platform; stiffness adaptive</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>An adaptive adjustment strategy for Stewart platform dynamic stiffness based on fuzzy control is explored in this paper. The transient response, steady-state accuracy, anti-disturbance and robustness of the Stewart platform are improved remarkably. Simulation experiments and data analysis show that, compared with traditional fixed stiffness or PID control, this fuzzy control strategy can quickly achieve steady state under various operating conditions, effectively deal with load mutation, parameter change and model uncertainty, and greatly enhance the overall stability and performance of the Stewart platform. In an application example, the strategy is used in the precision machining field to optimize Stewart platform support and accurately control a high-speed machine table in the face of frequent dynamic load fluctuations. The fuzzy controller takes displacement error, speed error, cutting force and material hardness as inputs and dynamic stiffness as output, and constructs a fuzzy rule base and optimized membership functions suitable for various machining conditions. The evaluation shows that fuzzy control performs well in transient response, with the response time shortened by about 30% in the face of large sudden load changes. In steady-state accuracy, the displacement error is strictly controlled within &#177;0.05 mm and the velocity error within &#177;0.1&#176;/s, which is better than pure PID control. In the anti-disturbance test, fuzzy control successfully reduces the influence of random disturbance on the platform trajectory by 70%. Robustness tests show that the fuzzy controller maintains a stable control effect even when the system parameters vary by &#177;10%, and the system performance score is above 8.5, far superior to that of a traditional PID controller under the same conditions.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_22-Fuzzy_Control_based_Adaptive_Adjustment_of_Dynamic_Stiffness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Network Attack Intrusion Detection System Based on Improved FWA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150621</link>
        <id>10.14569/IJACSA.2024.0150621</id>
        <doi>10.14569/IJACSA.2024.0150621</doi>
        <lastModDate>2024-06-29T11:26:03.9270000+00:00</lastModDate>
        
        <creator>Qingsong Chang</creator>
        
        <creator>Weiyan Feng</creator>
        
        <creator>Xingguo Wang</creator>
        
        <subject>Fireworks algorithm; fitness; initial cluster; characteristics; intrusion detection; network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The increasing diversity of network attack behaviors has led to increasingly serious network security issues. On this basis, this study proposes an optimized fireworks algorithm to build an intrusion detection model. Firstly, the traditional algorithm is optimized by improving the uniformity of the initial individual distribution and designing a fitness value update strategy, which greatly reduces the computational burden of the model and improves recognition accuracy. Then, the feature analysis detection strategy is selected and fused with the model to ensure system stability. Finally, to validate the effectiveness of the model, a comparative experimental analysis is conducted. The results validated that the average accuracy of the research model was 99.06%, with an average detection rate of 96.98%, 2.57% higher than the other models. The error warning rate was only 0.13%, lower than the other models&#39; 1.60%. In summary, the proposed intrusion detection model based on the fireworks algorithm and feature analysis can effectively identify attack behaviors and classify them correctly.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_21-Design_of_Network_Attack_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing the VPN with Top-Down to Improve Information Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150620</link>
        <id>10.14569/IJACSA.2024.0150620</id>
        <doi>10.14569/IJACSA.2024.0150620</doi>
        <lastModDate>2024-06-29T11:26:03.8970000+00:00</lastModDate>
        
        <creator>Valero Andia Billy Scott</creator>
        
        <creator>Sanchez Atuncar Giancarlo</creator>
        
        <subject>VPN; cyber-attacks; security information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This article presents a systematic review of virtual private networks (VPNs) and their contribution to improving information security, with a particular focus on the Andia Consortium. It examines how VPN technology, through its ability to provide a secure channel for communication between devices, can protect organizations&#39; valuable digital data against cyber-attacks. Various types of VPN systems, their security strategies, advantages and disadvantages, and their dependence on different protocols and standards are discussed. Additionally, tunneling technology, a key technology in VPN implementation, is explored. Through this study, we seek to identify the benefits and limitations of using VPNs to improve information security. This work aims to provide a deeper understanding of how VPNs can be designed from the top down to improve information security in organizations.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_20-Designing_the_VPN_with_Top_Down_to_Improve_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Improved Deep Convolutional Neural Network Algorithm in Damaged Information Restoration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150619</link>
        <id>10.14569/IJACSA.2024.0150619</id>
        <doi>10.14569/IJACSA.2024.0150619</doi>
        <lastModDate>2024-06-29T11:26:03.8670000+00:00</lastModDate>
        
        <creator>Wenya Jia</creator>
        
        <subject>Damaged document information; restoration; deep convolutional neural network; grayscale rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The repair of damaged documents has practical significance in multiple fields and can help people better analyze data information. This study proposes an improved algorithm model based on deep convolutional neural networks to address the issues of poor restoration performance and insufficient restored information in the current process of restoring damaged document information. The new model improves document image classification and recognition by using deep convolutional neural networks and incorporates grayscale rules to address the edge information restoration problem in the document information restoration process. The results indicated that in the repair of document data, the research model could achieve good document repair results. The average accuracy of the research model was 94.2%, which was 4.6% higher than the 89.6% of other models. The average percentage error of the model was around 3.6, about 2.2 lower than other models. The algorithm model used had the lowest average root mean square error of only 4.4, which was 1.9 lower than the highest model, and its stability was the best among the compared models. Therefore, the new model has a good repair effect in document information restoration, which has good guiding significance for research on damaged information restoration.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_19-Application_of_Improved_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart City Traffic Data Analysis and Prediction Based on Weighted K-means Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150618</link>
        <id>10.14569/IJACSA.2024.0150618</id>
        <doi>10.14569/IJACSA.2024.0150618</doi>
        <lastModDate>2024-06-29T11:26:03.8330000+00:00</lastModDate>
        
        <creator>Lei Li</creator>
        
        <subject>K-means; smart cities; traffic flow; prediction; holt; weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Urban traffic congestion is becoming a more serious issue as urbanization picks up speed. This study improved the conventional K-means method to create a new traffic flow prediction algorithm that can more accurately estimate a city&#39;s traffic flow. Firstly, different weights are assigned in the traditional K-means algorithm so as to analyze traffic congestion in five urban areas of Chengdu by varying the weight values; on this basis, a traffic flow prediction model is further designed by combining the weighted algorithm with Holt&#39;s exponential smoothing. The findings showed that the weighted K-means method is capable of accurately identifying the patterns of traffic congestion in Chengdu&#39;s five urban regions and that the prediction model combined with Holt&#39;s exponential smoothing algorithm had better prediction performance. Under high-traffic-flow conditions, when the time was close to 12:00, the designed model obtained a prediction value of 9.81 pcu/h, consistent with the actual situation. This shows that this study not only provides new ideas and methods for traffic management in smart cities but also provides a reference for the design of traffic prediction models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_18-Smart_City_Traffic_Data_Analysis_and_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Improved Raft Consensus Algorithm in IoT Information Security Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150617</link>
        <id>10.14569/IJACSA.2024.0150617</id>
        <doi>10.14569/IJACSA.2024.0150617</doi>
        <lastModDate>2024-06-29T11:26:03.8170000+00:00</lastModDate>
        
        <creator>Mingzhen Zhang</creator>
        
        <subject>Blockchain; consensus algorithm; Internet of Things; information; management; raft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the context of the rapid expansion of the Internet of Things, information security management has become particularly crucial. In response to the performance bottleneck of the traditional Raft consensus algorithm, this study proposes an improved Raft algorithm that combines a density noise spatial clustering algorithm and a vote change mechanism, aiming to improve the data processing efficiency and consistency of Internet of Things systems in large-scale environments. Firstly, a density noise spatial clustering algorithm is added to the traditional Raft algorithm to partition all consensus nodes into multiple sub-clusters. Subsequently, a vote change mechanism is introduced to optimize the leadership election process. Finally, an Internet of Things information security management model is built using the improved Raft algorithm. The results showed that the improved Raft algorithm could complete 500 client requests in just 9.5 minutes of consensus transaction time. The log replication accuracy of the management model built using this algorithm under four bandwidth conditions of 0.5 Mbps, 5 Mbps, 50 Mbps, and 500 Mbps was as high as 0.98, 0.99, 0.98, and 0.97, respectively. Therefore, the designed consensus algorithm not only has good data processing capabilities, but the model built using it can also achieve good performance in practical applications.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_17-Implementation_of_Improved_Raft_Consensus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Body Pressure Relief Support Wearable Devices Integrating 3D Printing and Gait Recognition Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150616</link>
        <id>10.14569/IJACSA.2024.0150616</id>
        <doi>10.14569/IJACSA.2024.0150616</doi>
        <lastModDate>2024-06-29T11:26:03.7870000+00:00</lastModDate>
        
        <creator>Yaqiong Zhou</creator>
        
        <creator>Bing Hu</creator>
        
        <subject>3D printing; gait recognition; body decompression support; wearing devices; electromyographic signal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>To improve wearing comfort and achieve individual recognition, this study designs an ankle exoskeleton that simulates natural human movement based on the joint structure of the human lower limbs. The function of the sole spring is achieved through compression springs on the exoskeleton framework coupled with the foot, and a customized insole is designed using 3D printing technology. This study uses a gait recognition algorithm based on a convolutional gated recurrent unit fully convolutional network with a dual attention mechanism to achieve individual recognition. The results showed that compared to the natural state, when walking with exoskeletons, the integrated electromyographic signals of the gastrocnemius and tibialis anterior muscles decreased by 5.4% and 3.6%, respectively, and the intelligent insole reduced plantar pressure to a certain extent. The accuracy of the proposed gait recognition algorithm could reach 95.26%, which was 2.03% higher than that of fully convolutional networks. In addition, the fuzzy output signals of the left and right feet were combined to obtain the proportions of single support phase and double support phase during walking, which were 92.7% and 7.3%, respectively. This study indicates that a body pressure reducing support wearable device that integrates 3D printing and gait recognition algorithms can reduce lower limb joint pressure, providing a new possibility for improving wearing comfort and achieving individual recognition. It also helps to improve the quality of life for the target audience.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_16-Optimization_of_Body_Pressure_Relief_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Obtaining the California Bearing Ratio Prediction via Hybrid Composition of Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150615</link>
        <id>10.14569/IJACSA.2024.0150615</id>
        <doi>10.14569/IJACSA.2024.0150615</doi>
        <lastModDate>2024-06-29T11:26:03.7700000+00:00</lastModDate>
        
        <creator>Bensheng Wu</creator>
        
        <creator>Yan Zheng</creator>
        
        <subject>California bearing ratio; gold rush optimizer; stochastic paint optimizer; electrostatic discharge algorithm; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Artificial intelligence algorithms have become much more sophisticated, so the most complex and challenging problems can be solved with them. The California Bearing Ratio (CBR) is an essential parameter in indexing the resistance provided by a structure&#39;s subterranean formation or foundation soil, and univariate and multivariate regression methods have been used to estimate it. CBR is a crucial factor in pavement design. However, its determination in laboratory conditions can be a time-consuming process. This makes it necessary to look for an alternative method to estimate CBR in the soil subgrade, especially in the developed layers of the soil. This study developed a machine learning (ML) model, Random Forest (RF), to predict the CBR. Additionally, several meta-heuristic algorithms, namely the Gold Rush Optimizer (GRO), Stochastic Paint Optimizer (SPO), and Electrostatic Discharge Algorithm (EDA), were used to improve the accuracy and optimize the output of the prediction. The results of the hybrid models were compared via several criteria to choose the desired model. SPO had the most desirable performance when coupled with RF compared to the other optimizers, exhibiting a high R2 and low RMSE.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_15-Obtaining_the_California_Bearing_Ratio_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Utilization of a Multi-Layer Perceptron Model for Estimation of the Heating Load</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150614</link>
        <id>10.14569/IJACSA.2024.0150614</id>
        <doi>10.14569/IJACSA.2024.0150614</doi>
        <lastModDate>2024-06-29T11:26:03.7400000+00:00</lastModDate>
        
        <creator>Ken Chen</creator>
        
        <creator>Wenyao Zhu</creator>
        
        <subject>Heating energy consumption; heating load; Multi-Layer Perceptron; Artificial Hummingbird Algorithm; Improved Arithmetic Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The growing significance of energy-efficient building management techniques has led to research that combines precise heating demand predictions with sophisticated optimization algorithms. This research seeks a comprehensive solution to enhance building energy efficiency, addressing the growing concern for sustainability and responsible resource use in contemporary research and practice. In this research endeavor, the complex topic of energy optimization within the complex domain of heating, ventilation, and air conditioning (HVAC) systems is being tackled with a combination of creative problem-solving techniques and thorough examination. The significance of accurate heating load forecasts for raising HVAC system efficiency and cutting expenses is emphasized in this study. It introduces innovative methods by combining two advanced optimization algorithms, the Artificial Hummingbird Algorithm (AHA) and the Improved Arithmetic Optimization Algorithm (IAOA), with the Multi-Layer Perceptron (MLP) model. The main objective is to improve heating load forecast accuracy and expedite HVAC system optimization procedures. This study emphasizes how important precise heating load forecasts are to attaining energy efficiency, cost savings, and the ultimate objective of encouraging environmental sustainability in building management. The assessments unequivocally illustrate that the MLAH (Multi-Layer Perceptron with Artificial Hummingbird Algorithm) model in the second layer emerges as the most exceptional predictor. It attains an impressive maximum Coefficient of Determination (R2) value of 0.998 during the testing phase, reflecting a remarkable explanatory capacity and displaying remarkably low Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values of 0.43 and 0.337, indicating minimal prediction discrepancies compared to alternative models.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_14-The_Utilization_of_a_Multi_Layer_Perceptron_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced IoT Techniques for Detecting Water Leaks in Supply Networks with LoRaWAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150613</link>
        <id>10.14569/IJACSA.2024.0150613</id>
        <doi>10.14569/IJACSA.2024.0150613</doi>
        <lastModDate>2024-06-29T11:26:03.7100000+00:00</lastModDate>
        
        <creator>Essouabni Mohammed</creator>
        
        <creator>El Mhamdi Jamal</creator>
        
        <creator>Jilbab Abdelilah</creator>
        
        <subject>Internet of things; LoRaWAN; leak detection; pipeline monitoring; ultrasonic liquid level sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Water leaks are a common problem when water flows through pipes, causing significant losses of this valuable resource. Our solution uses the Internet of Things (IoT) to address these losses. We employ LoRaWAN (Long Range Wide Area Network) technology to collect data from sensors, allowing real-time monitoring of pipelines and the detection of leaks and bursts as soon as they occur. Our goal is to contribute to the preservation of available water resources. We propose non-destructive ultrasonic level sensors to mitigate this issue, thereby avoiding water supply interruptions. These sensors are easy to install and maintain, at an affordable cost compared to other existing solutions. Our work aims to gather as much information as possible from water pipelines to ensure rapid leak detection. By using IoT and the LoRaWAN communication protocol, we automate the management of water supply facilities, enhancing efficiency and reducing wastage of this precious resource. We achieved satisfactory results using this solution on our test water pipe.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_13-Advanced_IoT_Techniques_for_Detecting_Water_Leaks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified SFWBP Framework for Vocal Teaching Quality Evaluation Based on the MEREC Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150612</link>
        <id>10.14569/IJACSA.2024.0150612</id>
        <doi>10.14569/IJACSA.2024.0150612</doi>
        <lastModDate>2024-06-29T11:26:03.6930000+00:00</lastModDate>
        
        <creator>Lei Huang</creator>
        
        <subject>Multiple-attribute group decision-making; Spherical fuzzy sets (SFSs); MEREC; bidirectional projection technique; vocal teaching quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the gradual improvement of people&#39;s living standards, their pursuit of art is also constantly increasing. Vocal music is not only an important course in the training process of music majors but also an important factor in improving personal qualities and expanding one&#39;s own abilities. In the process of vocal teaching, there are many factors that affect the quality of teaching, among which the teacher-student factor is one of the important influencing factors. How to enhance the role of teacher-student factors in improving the quality of vocal teaching has become one of the main directions for the development of vocal teaching. Vocal teaching quality evaluation can be regarded as a multiple-attribute group decision-making (MAGDM) problem. Spherical fuzzy sets (SFSs) can portray the uncertainty and fuzziness in vocal teaching quality evaluation more effectively and deeply. In this paper, based on bidirectional projection, we propose the spherical fuzzy bidirectional projection (SFBP) technique and the spherical fuzzy weighted bidirectional projection (SFWBP) technique. First of all, the definition of SFSs is introduced. Furthermore, the SFBP and SFWBP techniques with SFSs are developed based on bidirectional projection. Based on the developed SFWBP technique, the MAGDM procedure is organized and all computing steps are presented. Finally, a numerical example for vocal teaching quality evaluation is employed to verify the SFWBP technique, and comparative analyses are conducted to verify the advantages of the SFWBP technique with SFSs.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_12-Modified_SFWBP_Framework_for_Vocal_Teaching_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PhyGame: An Interactive and Gamified Learning Support System for Secondary Physics Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150611</link>
        <id>10.14569/IJACSA.2024.0150611</id>
        <doi>10.14569/IJACSA.2024.0150611</doi>
        <lastModDate>2024-06-29T11:26:03.6630000+00:00</lastModDate>
        
        <creator>Toshiki Katanosaka</creator>
        
        <creator>M. Fahim Ferdous Khan</creator>
        
        <creator>Ken Sakamura</creator>
        
        <subject>Gamification; interactive learning; online education; engagement; STEM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>With the rapid development of affordable digital technology, digital transformation is progressing in different sectors of society. Education is no exception; online education in particular has spread widely since the coronavirus pandemic. While online education enables individuals to overcome the constraints associated with traditional offline formats (e.g. flexibility regarding time and place), it also poses several challenges. Particularly, in STEM subjects that require hands-on experience, there are limits to what online education can offer. Therefore, online education platforms for such subjects should be developed with the goal of replicating offline hands-on experience as much as possible. It has been reported that many learners lose their motivation and drop out of online courses. Previous research has shown that virtual hands-on experiments are vital for enhancing learners’ motivation. Taking these factors into consideration, we have developed a system called PhyGame for secondary-level students’ physics education using interactive elements and gamification. Through evaluation by 44 secondary-level students, the system proved to be an effective platform for learning physics with enjoyment while maintaining a high level of student motivation and engagement.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_11-PhyGame_An_Interactive_and_Gamified_Learning_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Can Semi-Supervised Learning Improve Prediction of Deep Learning Model Resource Consumption?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150610</link>
        <id>10.14569/IJACSA.2024.0150610</id>
        <doi>10.14569/IJACSA.2024.0150610</doi>
        <lastModDate>2024-06-29T11:26:03.6300000+00:00</lastModDate>
        
        <creator>Karthick Panner Selvam</creator>
        
        <creator>Mats Brorsson</creator>
        
        <subject>Performance model; deep learning; Graph neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>As computational demands for deep learning models escalate, accurately predicting training characteristics like training time and memory usage has become crucial. These predictions are essential for optimal hardware resource allocation. Traditional performance prediction methods primarily rely on supervised learning paradigms. Our novel approach, TraPPM (Training characteristics Performance Predictive Model), combines the strengths of unsupervised and supervised learning to enhance prediction accuracy. We use an unsupervised Graph Neural Network (GNN) to extract complex graph representations from unlabeled deep learning architectures. These representations are then integrated with a sophisticated, supervised GNN-based performance regressor. Our hybrid model excels in predicting training characteristics with greater precision. Through empirical evaluation using the Mean Absolute Percentage Error (MAPE) metric, TraPPM demonstrates notable efficacy. The model achieves a MAPE of 9.51% for predicting training step duration and 4.92% for memory usage estimation. These results affirm TraPPM’s enhanced predictive accuracy, significantly surpassing traditional supervised prediction methods. Code and data are available at: https://github.com/karthickai/trappm</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_10-Can_Semi_Supervised_Learning_Improve_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effect on Heart Rate Variability of Adults Exposed to Radio-Frequency Electromagnetic Fields in Modern Office Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150609</link>
        <id>10.14569/IJACSA.2024.0150609</id>
        <doi>10.14569/IJACSA.2024.0150609</doi>
        <lastModDate>2024-06-29T11:26:03.6000000+00:00</lastModDate>
        
        <creator>Sanda Dale</creator>
        
        <creator>Romulus Reiz</creator>
        
        <creator>Sorin Popa</creator>
        
        <creator>Andreea Ardelean-Dale</creator>
        
        <creator>Julian Keller</creator>
        
        <creator>Jens Uwe Geier</creator>
        
        <subject>Radio frequency electromagnetic fields; heart rate variability; office environment; Wi-Fi; DECT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The objective of the study was to investigate whether heart rate variability (HRV) is an appropriate method to describe potential effects of RF-EMF on humans considering a modern office environment radiation level with the frequencies 1.8 GHz (DECT) and 2.45 GHz (Wi-Fi) and an exposure time of 10 min. The emitters were 1 m distant from the test subjects. The HRV parameters SDNN, RMSSD, LF and HF were recorded from 60 adults in three runs, totaling up to 154 recordings. Effects were evident for the parameter SDNN. In two runs, HRV changed from control to exposure phase, in one run from exposure phase to control. The cofactors smoking, coffee consumption, and the use of strong medications did not modulate EMF effects. HRV seems to be suitable to detect effects of radio-frequency electromagnetic fields on humans under certain conditions. In the future, prolonged exposure and new frequencies (5G) should be included in order to provide a better description of RF-EMF effects in modern office environments.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_9-Evaluating_the_Effect_on_Heart_Rate_Variability_of_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Machine Learning Techniques to Assess Technical Document Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150608</link>
        <id>10.14569/IJACSA.2024.0150608</id>
        <doi>10.14569/IJACSA.2024.0150608</doi>
        <lastModDate>2024-06-29T11:26:03.5830000+00:00</lastModDate>
        
        <creator>Muhammad Junaid Iqbal</creator>
        
        <creator>Fabio Massimo Zanzotto</creator>
        
        <creator>Usman Nawaz</creator>
        
        <subject>Image verification; machine learning; ensemble approach; multi-feature image recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Information is disseminated through images in newspapers, periodicals, the internet, and academic journals. With the aid of various tools such as Adobe, GIMP, and Corel Draw, distinguishing between an original image and a forgery has become increasingly challenging. Most conventional methods rely on constructed traits for detecting image counterfeiting. Image verification plays a crucial role in securing and ensuring the authenticity of individuals&#39; identities in sensitive documents. This research proposes a machine learning approach (Support Vector Machine, SVM, and Histogram of Oriented Gradients, HOG) to identify images and confirm their authenticity. The Histogram of Oriented Gradients (HOG) is employed to extract diverse features including matching, image size, and dimensions for image verification. The training and testing phases are carried out using a Support Vector Machine (SVM). The proposed image verification technique is evaluated using extensive datasets to ascertain image recognition accuracy, alongside metrics such as specificity, sensitivity, and precision. Comparative analysis with existing techniques reveals that the average image verification accuracy of the proposed method stands at 98%, surpassing previous image verification methods.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_8-Utilizing_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capability Assessment Framework for Artificial Intelligence and Blockchain Adoption in Public Sector of United Arab Emirates (UAE)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150607</link>
        <id>10.14569/IJACSA.2024.0150607</id>
        <doi>10.14569/IJACSA.2024.0150607</doi>
        <lastModDate>2024-06-29T11:26:03.5530000+00:00</lastModDate>
        
        <creator>Ahmad Mofleh Al Graibeh</creator>
        
        <creator>Saba Khan</creator>
        
        <creator>Salah Al-Majeed</creator>
        
        <creator>Shujun Zhang</creator>
        
        <subject>Conceptual framework; Artificial Intelligence; Blockchain; maturity model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>This is an ongoing study with the aim to develop a maturity model for efficient deployment of Artificial Intelligence (AI) and Blockchain (BC) in the United Arab Emirates (UAE) public sector. Organizations would leverage this maturity model to assess the efficacy of deploying AI and BC technologies in their operations, highlighting their capabilities for successful integration of these technologies while underscoring their shortcomings and directing their attention towards areas of improvement. To achieve this aim, a conceptual framework is initially proposed, which would act as the primary frame of reference for conducting empirical research in this area and developing a maturity model. This study presents the conceptual framework, which highlights the essential dimensions and factors that should be assessed and enhanced for successful implementation of AI and BC technologies. The framework also introduces five stages of maturity/development to mark the progress of each dimension of the conceptual framework. The conceptual framework is a 4x5 grid that presents the four dimensions vertically and the five stages of maturity horizontally. Strategy &amp; Governance, Technology, People, and Process are the dimensions of the framework, whereas initial, developed, defined, managed, and optimized are the stages of maturity.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_7-Capability_Assessment_Framework_for_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Campus Communication: NLP-Powered University Chatbots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150606</link>
        <id>10.14569/IJACSA.2024.0150606</id>
        <doi>10.14569/IJACSA.2024.0150606</doi>
        <lastModDate>2024-06-29T11:26:03.5200000+00:00</lastModDate>
        
        <creator>Ritu Ramakrishnan</creator>
        
        <creator>Priyanka Thangamuthu</creator>
        
        <creator>Austin Nguyen</creator>
        
        <creator>Jinzhu Gao</creator>
        
        <subject>Artificial intelligence; natural language processing; chatbot; machine learning; recommender systems; neural network; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Artificial intelligence (AI) based chatbots leverage programmed software instructions to simulate human speech and user interaction. These versatile tools can be employed in various domains, from managing smart home devices to providing personal virtual assistants. They can also be useful in responding to common queries and can make information easier to access. In response to this need, we developed a specialized chatbot tailored for the academic environment by training an NLP model to answer frequently asked questions (FAQs) without the need to search through the university website. The main goal is to optimize user engagement and streamline information retrieval within a university setting. By employing ML and NLP techniques, we enhance the chatbot&#39;s capabilities, enabling it to provide effective and precise answers, contributing to a more seamless and efficient experience for users seeking information about the university. The study discusses the pivotal decision-making process between implementing a custom neural network and the BERT model. Through a comparative analysis, the custom neural network emerges as the preferred solution, displaying efficiency, quick deployment, and superior accuracy in handling task-specific queries. While BERT presents unparalleled versatility in natural language processing, its resource-intensive pre-training and challenges in adapting to the intricacies of the university-specific dataset limit its efficiency in this application. This research emphasizes the importance of customization to meet the unique demands of a university chatbot, providing valuable insights for developers seeking to strike a balance between efficiency and specialization in similar applications.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_6-Revolutionizing_Campus_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Word-Pattern: Enhancement of Usability and Security of User-Chosen Recognition Textual Password</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150605</link>
        <id>10.14569/IJACSA.2024.0150605</id>
        <doi>10.14569/IJACSA.2024.0150605</doi>
        <lastModDate>2024-06-29T11:26:03.4900000+00:00</lastModDate>
        
        <creator>Hassan Wasfi</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Ulrike Genschel</creator>
        
        <subject>Authentication; password; passphrase; recognition; recall; pattern; usability; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Knowledge-based authentication systems are the most common methods used to verify users’ identity, especially textual passwords. However, periodic changes in password complexity exacerbate humans’ limited ability to remember hard passwords over time. Therefore, a novel authentication method called Word Pattern Recognition Textual Password (WPRTP) was proposed to overcome these issues. WPRTP is based on drawing a pattern on a grid with a specific security requirement to balance usability and security. This paper aims to compare WPRTP with a recall passphrase to explore its potential for enhancing user experience, usability, and security. Fifty-four users evaluated the efficiency of WPRTP on memorability, registration time, and login time. The results indicated that WPRTP is significantly more memorable over long-term periods, with a 100% success rate, and required less registration time (29 seconds for WPRTP and 122 seconds for recall passphrase). Additionally, WPRTP users demonstrated faster login times (20 seconds for WPRTP and 42 seconds for recall passphrase). Thus, WPRTP is a potential alternative to conventional authentication methods. Future work will focus on systematically managing and reducing the tendency among users to depend on familiar, repetitive patterns that create weak passwords.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_5-Word_Pattern_Enhancement_of_Usability_and_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Operator Machine Augmentation Resource Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150604</link>
        <id>10.14569/IJACSA.2024.0150604</id>
        <doi>10.14569/IJACSA.2024.0150604</doi>
        <lastModDate>2024-06-29T11:26:03.4730000+00:00</lastModDate>
        
        <creator>Mohammed Ameen</creator>
        
        <creator>Richard Stone</creator>
        
        <creator>Majed Hariri</creator>
        
        <creator>Faisal Binzagr</creator>
        
        <subject>Crowd monitoring; public security; Operator Machine Augmentation Resource (OMAR) framework; CCTV operator; surveillance system; crowd monitoring systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>The growing number of people gathering in public spaces, together with the massive incidents that have occurred in recent years, raises questions about public safety and security. This paper illustrates the technical implementation of the operator machine augmentation resource (OMAR) framework, which integrates advanced technologies, including a Computer Vision model and CCTV operators’ training techniques, to address the limitations of traditional surveillance systems. The OMAR framework enhances the productivity of surveillance systems by facilitating operators’ tasks and improving their performance. The framework’s components, including Alert Triggers, a Computer Vision model, and human training, work together to create better output and a more reliable system, improving the quality of security and reducing human effort. Although the OMAR framework represents a potentially significant step forward in surveillance security systems, it remains a theoretical model requiring further investigation and rigorous testing. Future work will focus on evaluating the effectiveness of the OMAR framework through an empirical study, examining its impact on various aspects of human performance and adaptations.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_4-Operator_Machine_Augmentation_Resource_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Quantitative Study on Real-Time Police Patrol Route Optimization using Dynamic Hotspot Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150603</link>
        <id>10.14569/IJACSA.2024.0150603</id>
        <doi>10.14569/IJACSA.2024.0150603</doi>
        <lastModDate>2024-06-29T11:26:03.4600000+00:00</lastModDate>
        
        <creator>Rakesh Ramakrishnan</creator>
        
        <creator>Soumithri Chilakamarri</creator>
        
        <creator>Roopalatha Mangalseth Budda</creator>
        
        <creator>Ashik Dawood Mohammed Anifa</creator>
        
        <subject>Route optimization; redesigning police patrol; data-driven strategies; novel patrol routing; random forest; real-time crime prediction; crime data; 911 incident response; hamilton path</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>A quantitative study on the optimization of police patrol routes in real-time using dynamic hotspot allocation is presented in this article. Ensuring public safety necessitates addressing the difficulties law enforcement agencies encounter in optimizing patrol routes within limited resources. In dynamic environments, static patrol route planning and traditional random routing are inadequate. In order to prevent crime, this study suggests using big data analysis to pinpoint crime hotspots and create patrol routes that are most effective. Our suggested approach, when paired with the Random Forest algorithm, predicts crime-prone areas by combining 911 incident response data and crime datasets. This allows for the efficient use of police resources and successful preventive measures. A greedy algorithm is used to steer patrol units toward the best routes, maximizing their presence close to hotspots. In addition, a Hamilton path is dynamically constructed based on updated hotspots and emergency call nodes. While the spatial selection technique addresses the limitations of randomized exploration, efficient policing remains pivotal for societal well-being and economic development. Advances in technology provide decision-makers with real-time data on criminal activities, enabling resource-friendly strategies within budgetary constraints. Effective communication with the public is crucial, as security affects many aspects of society, including investment decisions. Hence, cutting-edge approaches are essential for informed decision-making and maintaining public security.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_3-A_Quantitative_Study_on_Real_Time_Police_Patrol_Route.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Conversational Agent for Education using a Personality-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150602</link>
        <id>10.14569/IJACSA.2024.0150602</id>
        <doi>10.14569/IJACSA.2024.0150602</doi>
        <lastModDate>2024-06-29T11:26:03.3500000+00:00</lastModDate>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Jim Q. Chen</creator>
        
        <creator>Dingfang Kang</creator>
        
        <creator>Susantha Herath</creator>
        
        <creator>Abdullah AbuHussein</creator>
        
        <subject>Conversational agent/chatbot; personality-based UX design; human-centered AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>Conversational agents (CA) for education are dialog systems that can interact with students intelligently. They are gaining popularity because of their potential benefits for education. However, there is very little research focusing on personality-based educational CA design. Therefore, we designed and built a high-fidelity educational CA prototype with four personality dimensions via Juji. This personality-based UX design supports the interaction between the CA and diverse users with eight personality styles within four dimensions. During the analysis and design phase, we extracted the keywords, attributes, distinctive behaviors, and interaction expectations to streamline the literal description of personalities into concrete design guidelines applicable to the prototype. The design guidelines were generated based on this extraction to specify interaction features, user expectations, and potential behaviors or actions that should be avoided. Based on the guidelines, we further developed four personality-based design logics in this integrated prototype. This work provides design guidelines for future user personality-based educational CA design. Moreover, the design is among the first to provide four personality dimensions of design logic in one integrated prototype to better serve students. It sheds light on the future development of human-centered personality-based AI design in industry at a time when most chatbots are still rapidly developing.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_2-Designing_a_Conversational_Agent_for_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Advanced Language Models and Vector Database for Enhanced AI Query Retrieval in Web Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150601</link>
        <id>10.14569/IJACSA.2024.0150601</id>
        <doi>10.14569/IJACSA.2024.0150601</doi>
        <lastModDate>2024-06-29T11:26:03.2870000+00:00</lastModDate>
        
        <creator>Xiaoli Huan</creator>
        
        <creator>Hong Zhou</creator>
        
        <subject>LLM (Large Language Model); vector databases; retrieval-augmented generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(6), 2024</description>
        <description>In the dynamic field of web development, the integration of sophisticated AI technologies for query processing has become increasingly crucial. This paper presents a framework that significantly improves the relevance of web query responses by leveraging cutting-edge technologies like Hugging Face, FAISS, Google PaLM, Gemini, and LangChain. We explore and compare the performance of both PaLM and Gemini, two powerful LLMs, to identify strengths and weaknesses in the context of web development query retrieval. Our approach capitalizes on the synergistic combination of these freely accessible tools, ultimately leading to a more efficient and user-friendly query processing system.</description>
        <description>http://thesai.org/Downloads/Volume15No6/Paper_1-Integrating_Advanced_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Log Clustering-based Method for Repairing Missing Traces with Context Probability Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505144</link>
        <id>10.14569/IJACSA.2024.01505144</id>
        <doi>10.14569/IJACSA.2024.01505144</doi>
        <lastModDate>2024-05-30T17:06:15.7600000+00:00</lastModDate>
        
        <creator>Huan Fang</creator>
        
        <creator>Wenjie Su</creator>
        
        <subject>Trace clustering; log repairing; process mining; context semantics; conditional probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In real business processes, low-quality event logs caused by outliers and missing values tend to degrade the performance of process mining algorithms, which in turn affects the correct execution of decisions. To repair missing values in event logs when the reference model of the process system is unknown, this paper proposes a method that can repair consecutive missing values. First, the event logs are divided according to the integrity of each trace, and a clustering algorithm is applied to the complete logs to generate homogeneous trace clusters. Each missing trace is then matched to the most similar sub-log, candidate sequences are generated according to the context of the missing part, the context probability of each candidate sequence is calculated, and the sequence with the highest probability is selected as the repair result. When the number of missing items in a trace is 1, our method achieves its highest repair accuracy: 97.5% on the Small log and 93.3% on the real event log bpic20. Finally, the feasibility of the method is verified on four event logs with different missing ratios, showing certain advantages over existing methods.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_144-Log_Clustering_based_Method_for_Repairing_Missing_Traces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Networks: An In-Depth Analysis of Intrusion Detection using Machine Learning and Model Explanations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505143</link>
        <id>10.14569/IJACSA.2024.01505143</id>
        <doi>10.14569/IJACSA.2024.01505143</doi>
        <lastModDate>2024-05-30T17:06:15.7430000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <creator>Phuc Pham Tien</creator>
        
        <subject>Intrusion detection systems; machine learning models; model interpretability; cybersecurity; LIME; SHAP; explain-able machine learning models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>As cyber threats continue to evolve in complexity, the need for robust intrusion detection systems (IDS) becomes increasingly critical. Machine learning (ML) models have demonstrated their effectiveness in detecting anomalies and potential intrusions. In this article, we delve into intrusion detection by exploring the application of four distinct ML models: XGBoost, Decision Trees, Random Forests, and Bagging, and we leverage the interpretability tools LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to explain the classification results. Our exploration begins with an in-depth analysis of each machine learning model, shedding light on their strengths, weaknesses, and suitability for intrusion detection. However, machine learning models often operate as "black boxes", making it crucial to explain their inner workings, and this article introduces LIME and SHAP as indispensable tools for model interpretability. Throughout the article, we demonstrate the practical application of LIME and SHAP to explain and interpret the output of our intrusion detection models. By doing so, we gain valuable insights into the decision-making process of these models, enhancing our ability to identify and respond to potential threats effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_143-Securing_Networks_An_In_Depth_Analysis_of_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ACNGCNN: Improving Efficiency of Breast Cancer Detection and Progression using Adversarial Capsule Network with Graph Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505142</link>
        <id>10.14569/IJACSA.2024.01505142</id>
        <doi>10.14569/IJACSA.2024.01505142</doi>
        <lastModDate>2024-05-30T17:06:15.7300000+00:00</lastModDate>
        
        <creator>Srinivasa Rao Pallapu</creator>
        
        <creator>Khasim Syed</creator>
        
        <subject>Breast cancer detection; deep learning; image pre-processing; disease progression; recurrent neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>New diagnostic methods are needed to improve the accuracy and efficiency of breast cancer detection and progression monitoring. Although successful, current methods frequently lack precision, accuracy, and timeliness, especially in the early phases of breast cancer progression. Our research proposes a new deep learning model to improve breast cancer detection and classification, addressing these constraints. Our breast cancer image and sample preprocessing approach combines a non-local means (NLM) filter and Generative Adversarial Networks (GAN). The model classifies datasets using LSTM with BiGRU-based Recurrent ShuffleNet V2, a highly efficient and accurate technique for sequential data samples. The integration of a Capsule Network with Graph Convolutional Neural Networks (CNGCNN) significantly improves breast cancer detection. This method was carefully tested on BreaKHis. The results were striking, showing gains across multiple metrics: 4.9% greater precision, 3.5% higher accuracy, 3.4% higher recall, 2.5% higher AUC (Area Under the Curve), 1.9% higher specificity, and 3.4% decreased delay in the identification of breast cancer stages. Particularly notable was the model's performance in diagnosing disease progression, where it displayed 3.5% greater precision, 3.9% higher accuracy, 4.5% higher recall, 3.4% higher AUC, 2.9% higher specificity, and 1.5% lower latency. This work has significant clinical implications. Our methodology enables early diagnosis and precise staging of breast cancer, enabling focused therapies that improve patient outcomes and survival rates. The greater precision and reduced time lag in diagnosing disease progression also allow for more effective monitoring and treatment modifications. Overall, this study marks a considerable improvement in the field of breast cancer diagnostics, delivering a more efficient, accurate, and reliable tool for healthcare providers in their fight against this widespread disease.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_142-ACNGCNN_Improving_Efficiency_of_Breast_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Watermarking: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505141</link>
        <id>10.14569/IJACSA.2024.01505141</id>
        <doi>10.14569/IJACSA.2024.01505141</doi>
        <lastModDate>2024-05-30T17:06:15.7130000+00:00</lastModDate>
        
        <creator>Mohammad Shorif Uddin</creator>
        
        <creator>Ohidujjaman</creator>
        
        <creator>Mahmudul Hasan</creator>
        
        <creator>Tetsuya Shimamura</creator>
        
        <subject>Audio watermarking; deep learning approach; spread-spectrum method; signal-processing attacks; time domain; transform domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Audio watermarking has emerged as a potent technology for copyright protection, content authentication, content monitoring, and tracking in the digital age. This paper offers a comprehensive exploration of audio watermarking principles, techniques, applications, and challenges. Initially, it presents the fundamental concepts of digital watermarking, elucidating its key characteristics and functionalities. After that, different audio watermarking methods in both the time and transform domains are explained, such as feature-based, parametric, and spread-spectrum methods, along with how they work, and their pros and cons. The paper further addresses critical challenges in maintaining key criteria such as imperceptibility, robustness, and payload capacity associated with audio watermarking. Additionally, it examines watermarking evaluation metrics, datasets, and performance findings under diverse signal-processing attacks. Finally, the review concludes by discussing future directions in audio watermarking research, emphasizing advancements in deep learning-based approaches and emerging applications.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_141-Audio_Watermarking_A_Comprehensive_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimal Knowledge Distillation for Formulating an Effective Defense Model Against Membership Inference Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505140</link>
        <id>10.14569/IJACSA.2024.01505140</id>
        <doi>10.14569/IJACSA.2024.01505140</doi>
        <lastModDate>2024-05-30T17:06:15.6970000+00:00</lastModDate>
        
        <creator>Thi Thanh Thuy Pham</creator>
        
        <creator>Huong-Giang Doan</creator>
        
        <subject>Knowledge distillation; membership inference attack; teacher model; student model; privacy-utility trade-off</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>A membership inference attack (MIA) on machine learning models aims to determine the sensitive data that has been used to train those models. Machine learning-based applications (MLaaS—machine learning as a service) in finance, banking, healthcare, etc. face the risk of private data leaks through MIA. Several solutions have been proposed for mitigating MIA attacks, such as confidence score masking, regularization, and knowledge distillation (KD). However, the utility-privacy trade-off remains a major challenge for existing approaches. In this work, we explore the KD-based approach to defending against MIA attacks. This approach has recently received increasing attention in the machine learning safety research community, as it aims to effectively address the above-mentioned challenge of mitigating MIA attacks. An efficient KD-based defense framework that includes multiple teacher and student models is proposed in this work for alleviating MIA attacks. Three main phases are deployed in this framework: (1) teacher model training; (2) knowledge distillation from the teacher model to the student model based on prediction augmentation and aggregation from the teacher model; and (3) repeated knowledge distillation among student models. The experimental results on standard datasets show that the proposed framework outperforms other state-of-the-art solutions for mitigating MIA in both model utility and privacy.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_140-An_Optimal_Knowledge_Distillation_for_Formulating_an_Effective_Defense_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AEGANB3: An Efficient Framework with Self-attention Mechanism and Deep Convolutional Generative Adversarial Network for Breast Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505139</link>
        <id>10.14569/IJACSA.2024.01505139</id>
        <doi>10.14569/IJACSA.2024.01505139</doi>
        <lastModDate>2024-05-30T17:06:15.6830000+00:00</lastModDate>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Nguyen Thai-Nghe</creator>
        
        <subject>Breast cancer; classification; Convolutional Neural Network (CNN); Vision Transformer (ViT); fine-tuning; transfer learning; self-attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Breast cancer remains a significant illness worldwide and is among the most dangerous cancers facing women. Early detection is paramount in improving prognosis and treatment, and ultrasonography has emerged as a valuable diagnostic tool for breast cancer. However, the accurate interpretation of ultrasound images requires expertise. To address these challenges, recent advancements in computer vision, such as convolutional neural networks (CNN) and vision transformers (ViT), have become popular for the classification of medical images and promise to increase the accuracy and efficiency of breast cancer detection. Specifically, transfer learning and fine-tuning techniques have been developed to leverage pre-trained CNN models. With the self-attention mechanism in ViT, models can effectively extract features and learn from limited annotated medical images. In this study, the Breast Ultrasound Images Dataset (Dataset BUSI), with three classes (normal, benign, and malignant), was utilized to classify breast cancer images. Additionally, Deep Convolutional Generative Adversarial Networks (DCGAN) with several techniques were applied for data augmentation and preprocessing to increase robustness and address data imbalance. The AttentiveEfficientGANB3 (AEGANB3) framework is proposed with a customized EfficientNetB3 model and a self-attention mechanism, achieving an impressive test accuracy of 98.01%. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to visualize the model's decisions.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_139-AEGANB3_An_Efficient_Framework_with_Self_attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-Friendly Privacy-Preserving Blockchain-based Data Trading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505138</link>
        <id>10.14569/IJACSA.2024.01505138</id>
        <doi>10.14569/IJACSA.2024.01505138</doi>
        <lastModDate>2024-05-30T17:06:15.6670000+00:00</lastModDate>
        
        <creator>Jiahui Cao</creator>
        
        <creator>Junyao Ye</creator>
        
        <creator>Junzuo Lai</creator>
        
        <subject>Data trading; blockchain; personalized local differential privacy; data security; user-friendly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>As the digital economy flourishes, the use of blockchain technology for data trading has seen a surge in popularity. Yet, previous approaches have frequently faltered in harmonizing security with user experience, culminating in suboptimal transactional efficiency. This study introduces a personalized local differential privacy framework, adeptly tackling data security concerns while accommodating the individual privacy preferences of data owners. Furthermore, the framework bolsters transaction flexibility and efficiency by catering to the needs of data consumers for detailed queries and enabling data owners to effortlessly raise their privacy budget to achieve greater financial returns. The efficacy of our approach is validated through a comprehensive series of experiments.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_138-User_Friendly_Privacy_Preserving_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach to Convert Handwritten Arabic Text to Digital Form</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505137</link>
        <id>10.14569/IJACSA.2024.01505137</id>
        <doi>10.14569/IJACSA.2024.01505137</doi>
        <lastModDate>2024-05-30T17:06:15.6500000+00:00</lastModDate>
        
        <creator>Bayan N. Alshahrani</creator>
        
        <creator>Wael Y. Alghamdi</creator>
        
        <subject>Deep learning; convolutional neural networks; bidirectional long short term memory; connectional temporal classification; Arabic handwriting recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The recognition of Arabic words presents considerable difficulties owing to the complex characteristics of the Arabic script, which encompasses letters positioned both above and below the baseline, hamzas, and dots. To address these intricacies, we provide a structured approach for transforming handwritten Arabic text into a digital format. We employ a hybrid deep learning technique that combines Convolutional Neural Networks (CNNs), Bidirectional Long Short-Term Memory (BLSTM), and Connectionist Temporal Classification (CTC). We collected datasets that cover a wide range of Arabic text variations and created a pre-processing pipeline. Our methodology achieved an accuracy rate of 99.52% at the character level and 98.36% at the full-word level. To evaluate the effectiveness of our proposed method for recognizing handwritten text, we utilize two essential metrics, Word Error Rate (WER) and Character Error Rate (CER), to compare its performance. The experimental research demonstrates a WER of 1.64% and a CER of 0.48%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_137-A_Deep_Learning_Approach_to_Convert_Handwritten_Arabic_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Security Risks in Firewalls and Web Applications using Vulnerability Assessment and Penetration Testing (VAPT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505136</link>
        <id>10.14569/IJACSA.2024.01505136</id>
        <doi>10.14569/IJACSA.2024.01505136</doi>
        <lastModDate>2024-05-30T17:06:15.6500000+00:00</lastModDate>
        
        <creator>Alanoud Alquwayzani</creator>
        
        <creator>Rawabi Aldossri</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Web Application Firewalls (WAFs); Vulnerability Assessment and Penetration Testing (VAPT); cybersecurity; security vulnerabilities; security misconfigurations; network scanning tools; vulnerability detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In today’s digital age, both organizations and individuals heavily depend on web applications for a wide range of activities. However, this reliance on the web also opens up opportunities for attackers to exploit security weaknesses present in these applications. Web Application Firewalls (WAFs) are typically the first line of defense, protecting web apps by filtering and monitoring HTTP traffic. However, if these firewalls are not properly configured, they can be bypassed or compromised by attackers. The escalating number of attacks targeting web applications underscores the urgent need to enhance their security. This paper offers an in-depth review of existing research on web application Vulnerability Assessment and Penetration Testing (VAPT). Our unique contribution lies in the comprehensive synthesis and categorization of VAPT tools based on their optimal use cases, which provides a practical guide for selecting the appropriate tools for specific scenarios. Additionally, this study integrates emerging technologies such as artificial intelligence and machine learning into the VAPT framework, addressing the evolving nature of cyber threats. The paper also identifies common challenges encountered during the VAPT process and proposes actionable recommendations to overcome these obstacles. Furthermore, it discusses best practices such as secure coding practices and defense-in-depth strategies to improve the effectiveness and efficiency of VAPT efforts. By offering these insights, this paper aims to advance the current understanding and application of VAPT in enhancing the security of web applications and firewalls.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_136-Mitigating_Security_Risks_in_Firewalls_and_Web_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer Meets External Context: A Novel Approach to Enhance Neural Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505135</link>
        <id>10.14569/IJACSA.2024.01505135</id>
        <doi>10.14569/IJACSA.2024.01505135</doi>
        <lastModDate>2024-05-30T17:06:15.6370000+00:00</lastModDate>
        
        <creator>Mohammed Alsuhaibani</creator>
        
        <creator>Kamel Gaanoun</creator>
        
        <creator>Ali Alsohaibani</creator>
        
        <subject>Deep learning; transformers; context; NMT; neural machine translation; natural language processing systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Most neural machine translation (NMT) systems rely on parallel data, comprising text in the source language and its corresponding translation in the target language. While it is acknowledged that context enhances NMT models, this work proposes a novel approach by incorporating external context, specifically explanations of source text meanings, akin to how human translators leverage context for comprehension. The suggested methodology innovatively addresses the challenge of incorporating lengthy contextual information into NMT systems. By employing state-of-the-art transformer-based models, external context is integrated, thereby enriching the translation process. A key aspect of the approach lies in the utilization of diverse text summarization techniques, strategically employed to efficiently distill extensive contextual details into the NMT framework. This novel solution not only overcomes the obstacle posed by lengthy context but also enhances translation quality, marking an advancement in the field of NMT. Furthermore, the data-centric approach ensures robustness and effectiveness, yielding improvements in translation quality, as evidenced by a considerable boost in BLEU score points ranging from 0.46 to 1.87 over baseline models. Additionally, we make our dataset publicly available, facilitating further research in this domain.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_135-Transformer_Meets_External_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HybridGCN: An Integrative Model for Scalable Recommender Systems with Knowledge Graph and Graph Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505134</link>
        <id>10.14569/IJACSA.2024.01505134</id>
        <doi>10.14569/IJACSA.2024.01505134</doi>
        <lastModDate>2024-05-30T17:06:15.6200000+00:00</lastModDate>
        
        <creator>Dang-Anh-Khoa Nguyen</creator>
        
        <creator>Sang Kha</creator>
        
        <creator>Thanh-Van Le</creator>
        
        <subject>Large-scale dataset processing; recommender systems; graph neural network; knowledge graph construction; data segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Graph Neural Networks (GNNs) have emerged as a state-of-the-art approach in building modern Recommender Systems (RS). By leveraging the complex relationships among items, users, and their attributes, which can be represented as a Knowledge Graph (KG), these models can explore implicit semantic sub-structures within graphs, thereby enhancing the learning of user and item representations. In this paper, we propose an end-to-end architectural framework for developing recommendation models based on GNNs and KGs, namely HybridGCN. Our proposed methodologies aim to address three main challenges: (1) making graph-based RS scalable on large-scale datasets, (2) constructing domain-specific KGs from unstructured data sources, and (3) tackling the issue of incomplete knowledge in constructed KGs. To achieve these goals, we design a multi-stage integrated procedure, ranging from user segmentation and an LLM-supported KG construction process to interconnected propagation between the KG and the Interaction Graph (IG). Our experimental results on a telecom e-commerce domain dataset demonstrate that our approach not only makes existing GNN-based recommender baselines feasible on large-scale data but also achieves competitive performance with the HybridGCN core.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_134-HybridGCN_An_Integrative_Model_for_Scalable_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Reversible Data Hiding in Encrypted Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505133</link>
        <id>10.14569/IJACSA.2024.01505133</id>
        <doi>10.14569/IJACSA.2024.01505133</doi>
        <lastModDate>2024-05-30T17:06:15.6030000+00:00</lastModDate>
        
        <creator>Ghadeer Asiri</creator>
        
        <creator>Atef Masmoudi</creator>
        
        <subject>Reversible data hiding; encrypted image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The creation and application of multimedia have undergone a revolution in the last several years as a result of the rise in internet-based communications, which involve the exchange of digital data in the form of text, audio, video, and image files. For this reason, multimedia has emerged as a vital aspect of people's everyday lives. Information security is crucial, since several threats target multimedia integrity, confidentiality, and authentication. Multimedia data therefore needs to be safeguarded, for example through encryption. This survey investigates reversible data hiding in encrypted images (RDHEI), a process which functions by embedding extra data in an image. Practitioners and academics alike are becoming increasingly interested in RDHEI due to its vast range of applications. The purpose of this review is to introduce the various RDHEI schemes, identify the most important RDHEI techniques with varying embedding rates, and then examine the applications and future prospects of RDHEI. The main characteristics of each representative RDHEI technique considered in this survey are enumerated in a comparison table.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_133-A_Survey_of_Reversible_Data_Hiding_in_Encrypted_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Facial Expression Recognition using CNN-BiLSTM with Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505132</link>
        <id>10.14569/IJACSA.2024.01505132</id>
        <doi>10.14569/IJACSA.2024.01505132</doi>
        <lastModDate>2024-05-30T17:06:15.5870000+00:00</lastModDate>
        
        <creator>Samanthisvaran Jayaraman</creator>
        
        <creator>Anand Mahendran</creator>
        
        <subject>Facial expression recognition; occlusion; attention mechanism; convolutional neural networks; bidirectional long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In recent years, facial expression recognition has been one of the hottest research topics among researchers and experts in the fields of Computer Vision and Human-Computer Interaction. Traditional deep learning models have found it difficult to process images with occlusion, illumination, and pose variations, and imbalances across datasets have led to large differences in recognition rates, slow convergence, and low accuracy. In this paper, we propose a hybrid Convolutional Neural Network-Bidirectional Long Short-Term Memory model incorporating a point-multiplication attention mechanism and Linear Discriminant Analysis to tackle the aforementioned non-frontal image properties, with a Median Filter and Global Contrast Normalization used in data preprocessing. Following this, DenseNet and Softmax are used to reconstruct images by enhancing feature maps with the essential information for classifying images in the input datasets, FER2013 and CK+. The proposed model is compared with traditional models such as CNN-LSTM, DSCNN-LSTM, CNN-BiLSTM, and ACNN-LSTM in terms of accuracy, precision, recall, and F1 score. On FER2013, the proposed model achieved the highest classification accuracy of 95.12%, which is 3.1% higher than CNN-LSTM, 2.7% higher than DSCNN-LSTM, 2% higher than CNN-BiLSTM, and 3.7% higher than ACNN-LSTM. On CK+, it achieved 98.98% accuracy, which is 5.1% higher than CNN-LSTM, 5.7% higher than DSCNN-LSTM, 3.3% higher than CNN-BiLSTM, and 6.9% higher than ACNN-LSTM.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_132-An_Improved_Facial_Expression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Digital Image Forgeries with Copy-Move and Splicing Image Analysis using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505131</link>
        <id>10.14569/IJACSA.2024.01505131</id>
        <doi>10.14569/IJACSA.2024.01505131</doi>
        <lastModDate>2024-05-30T17:06:15.5730000+00:00</lastModDate>
        
        <creator>Divya Prathana Timothy</creator>
        
        <creator>Ajit Kumar Santra</creator>
        
        <subject>Copy-move; splicing; deep learning; image forgery detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The proliferation of digitally altered images across social media platforms has escalated the urgency for robust image forgery detection systems. Traditional detection methodologies, while varied, often fall short in addressing the multifaceted nature of image forgeries in the digital landscape. Recognizing the need for advanced solutions, this paper introduces a novel deep-learning approach that leverages the architectural strengths of GNNs, CNNs, VGG16, MobileNet, and ResNet50. Our method uniquely integrates these architectures to effectively detect and analyze multiple types of image forgeries, including image splicing and copy-move forgeries. This approach is groundbreaking as it adapts these networks to focus on identifying discrepancies in the compression quality between forged and original image regions. By examining the differences between the original and compressed image versions, our model constructs a feature-rich representation, which is then analyzed by a tailored deep-learning network. This network has been enhanced by removing its original classifier and implementing a new one specifically designed for binary forgery classification. Very few researchers have explored the application of deep learning techniques in copy-move and splice image analysis for detecting digital image forgeries, making our work particularly significant. A comprehensive comparative analysis with pre-trained models underscores the superiority of our method, with the GNN model achieving an impressive accuracy of 98.54 percent on the CASIA V1 dataset. This not only sets a new benchmark in the field but also highlights the efficiency of our model, which benefits from reduced training parameters and accelerated training times.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_131-Detecting_Digital_Image_Forgeries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Learning Model for Detecting Wheat Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505130</link>
        <id>10.14569/IJACSA.2024.01505130</id>
        <doi>10.14569/IJACSA.2024.01505130</doi>
        <lastModDate>2024-05-30T17:06:15.5570000+00:00</lastModDate>
        
        <creator>Mohammed Abdalla</creator>
        
        <creator>Osama Mohamed</creator>
        
        <creator>Elshaimaa M. Azmi</creator>
        
        <subject>Food security; image recognition; deep learning; conventional neural networks; digital agriculture; agriculture sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Nowadays, the wheat plant is considered a crucial source of protein, energy, and micronutrients. The motivation behind this study is to increase wheat crop growth and prevent wheat diseases, as this plant has a significant impact on food security all over the world. Wheat plant diseases can be divided into fungal, bacterial, viral, nematode, insect pest, physiological and genetic anomalies, and mineral and environmental stress. Digital images containing wheat plant diseases were collected from public sources such as Kaggle and GitHub. In this study, an adaptive deep-learning model is developed to classify and detect various types of wheat diseases in an efficient and accurate manner. The dataset is split into two sets: approximately 80% of the data (8,946 images) for the training set and 20% (2,259 images) for the validation set. The training set is composed of 1445, 1478, 1557, 1510, 1424, and 1532 images of healthy, leaf rust, powdery mildew, septoria, stem rust, and stripe rust classes, while the validation set contains 357, 360, 404, 402, 353, and 383 images respectively. The suggested method achieved 97.47% validation accuracy and a testing accuracy of 98.42% on the testing set. This study offers a method for the classification and detection of wheat diseases using a mix of recently established pre-trained convolutional neural networks (CNNs), DenseNet, ResNet, and EfficientNet, integrated with the one-fit-cycle policy. In comparison to the current state of the art, the proposed model is accurate and efficient.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_130-Adaptive_Learning_Model_for_Detecting_Wheat_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Diabetes Prediction: An Improved Boosting Algorithm for Diabetes Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505129</link>
        <id>10.14569/IJACSA.2024.01505129</id>
        <doi>10.14569/IJACSA.2024.01505129</doi>
        <lastModDate>2024-05-30T17:06:15.5400000+00:00</lastModDate>
        
        <creator>Md. Shahin Alam</creator>
        
        <creator>Most. Jannatul Ferdous</creator>
        
        <creator>Nishat Sarkar Neera</creator>
        
        <subject>Diabetes prediction; ensemble technique; machine learning; binary classification; Moderated-AdaBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Diabetes, which is increasing gradually due to the body&#39;s inability to use insulin effectively, threatens public health. People whose diabetes goes undiagnosed at early stages have a high risk of heart disease, kidney disease, eye problems, stroke, and nerve damage, which makes early diagnosis crucial. Our advanced machine learning algorithm opens the possibility of detecting whether a person has diabetes. The method was developed on one hundred thousand (one lakh) records, with the main objective of creating a new diabetes prediction model, named Moderated Ada-Boost (AB), that can accurately diagnose diabetes. About 10 different classification methods are applied in this research: Random Forest classifier (RF), Logistic Regression (LR), Decision Tree classifier (DT), Support Vector Machine (SVM), Bayesian Classifier (BC) or Naive Bayes Classifier (NB), Bagging Classifier (BG), Stacking Classifier (ST), Moderated Ada-Boost (AB) Classifier, K-Neighbors Classifier (KN), and Artificial Neural Network (ANN). A crucial contribution is finding appropriate values for the different models through hyper-parameter tuning. We propose a new boosting model, named Moderated Ada-Boost (AB), which combines the hyper-parameter-tuned random forest model and the Ada-Boost model. Different evaluation metrics, such as accuracy, precision, recall, and F1 score, are used to evaluate the performance of the models. Our proposed boosting algorithm, Moderated Ada-Boost (AB), provides better accuracy than the other models, with a training accuracy of 99.95% and a testing accuracy of 98.14%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_129-Enhancing_Diabetes_Prediction_An_Improved_Boosting_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inclusive Smart Cities: IoT-Cloud Solutions for Enhanced Energy Analytics and Safety</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505128</link>
        <id>10.14569/IJACSA.2024.01505128</id>
        <doi>10.14569/IJACSA.2024.01505128</doi>
        <lastModDate>2024-05-30T17:06:15.5100000+00:00</lastModDate>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Faisal S. Alsubaei</creator>
        
        <creator>Nasir Ayub</creator>
        
        <creator>Noor Zaman Jhanjhi</creator>
        
        <subject>IoT Security; theft detection; smart cities; cloud computing; disability support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Securing smart cities in the evolving Internet of Things (IoT) demands innovative security solutions that extend beyond conventional theft detection. This study introduces temporal convolutional networks and gated recurrent units (TCGR), a pioneering model tailored for the dynamic IoT-SM dataset, addressing eight distinct forms of theft. In contrast to conventional techniques, TCGR utilizes Jaya tuning (TCGRJ), ensuring improved accuracy and computational efficiency. The technique employs ResNeXt for feature extraction to extract important patterns from IoT device-generated data and Edited Nearest Neighbors for data balancing. Empirical evaluations validate TCGRJ’s greater precision (96.7%) and accuracy (97.1%) in detecting theft. The model significantly aids in preventing theft-related risks and is designed for real-time Internet of Things applications in smart cities, aligning with the broader goal of creating safer spaces by reducing hazards associated with unauthorized electrical connections. TCGRJ promotes sustainable energy practices that benefit every resident, particularly those with disabilities, by discouraging theft and encouraging economical power consumption. This research underscores the crucial role of advanced theft detection technologies in developing smart cities that prioritize inclusivity, accessibility, and an enhanced quality of life for all individuals, including those with disabilities.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_128-Inclusive_Smart_Cities_IoT_Cloud_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Enhanced Hand Gesture Recognition for Efficient Drone use in Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505127</link>
        <id>10.14569/IJACSA.2024.01505127</id>
        <doi>10.14569/IJACSA.2024.01505127</doi>
        <lastModDate>2024-05-30T17:06:15.4930000+00:00</lastModDate>
        
        <creator>Phaitoon Srinil</creator>
        
        <creator>Pattharaporn Thongnim</creator>
        
        <subject>Deep learning; Convolutional Neural Network; hand gesture recognition; drone; agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The use of deep learning in unmanned aerial vehicles (UAVs), or drones, has greatly improved various technologies by making complex tasks easier, faster, and requiring less human help. This study looks into how artificial intelligence (AI) can be used in farming, especially through creating a system where drones can be controlled by hand gestures to support agricultural activities. By using a special type of AI called a Convolutional Neural Network (CNN) with an EfficientNet B3 model, this research developed a gesture recognition system. It was trained on 1,393 pictures of different hand signals taken under various light conditions and from three different people. The system was evaluated based on its training and testing performance, showing very high scores in terms of loss, accuracy, F1 score, and the Area Under the Curve (AUC), which means it can recognize gestures accurately and work well in different situations. This has big implications for farming, as it gives farmers an easy way to control drones for tasks like checking on crops and spraying them precisely, which also helps keep them safe. This study is an important step towards smarter farming practices. Moreover, the system’s ability to perform well in different settings shows it could also be useful in other areas like construction, where drones need to operate precisely and flexibly.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_127-Deep_Learning_Enhanced_Hand_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionary AI-Driven Skeletal Fingerprinting for Remote Individual Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505126</link>
        <id>10.14569/IJACSA.2024.01505126</id>
        <doi>10.14569/IJACSA.2024.01505126</doi>
        <lastModDate>2024-05-30T17:06:15.4800000+00:00</lastModDate>
        
        <creator>Achraf BERRAJAA</creator>
        
        <creator>Ayyoub El OUTMANI</creator>
        
        <creator>Issam BERRAJAA</creator>
        
        <creator>Nourddin SAIDOU</creator>
        
        <subject>Artificial intelligence; recognition of individuals; new fingerprint; lagrange polynomials; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research aims to devise a distinct mathematical key for individual identification and recognition. This key, represented through signals, is constructed using Lagrange polynomials derived from skeletal points. Consequently, we present this key as a novel fingerprint categorized among physiological fingerprints. It is crucial to highlight that the primary application of this fingerprint is remote individual identification, specifically excluding any bodily masking. Subsequently, we implement an artificial intelligence model, specifically a Convolutional Neural Network (CNN), for the automated detection of individuals. The proposed CNN is trained on an extensive dataset comprising 10000 real-world cases and augmented data. Our skeletal fingerprint recognition system demonstrates superior performance compared to other physiological fingerprints, achieving a remarkable 98% accuracy in detecting individuals at a distance.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_126-Revolutionary_AI_Driven_Skeletal_Fingerprinting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Various Factors on the Convolutional Neural Networks Model on Arabic Handwritten Character Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505125</link>
        <id>10.14569/IJACSA.2024.01505125</id>
        <doi>10.14569/IJACSA.2024.01505125</doi>
        <lastModDate>2024-05-30T17:06:15.4630000+00:00</lastModDate>
        
        <creator>Alhag Alsayed</creator>
        
        <creator>Chunlin Li</creator>
        
        <creator>Ahmed Fat’hAlalim</creator>
        
        <creator>Mohammed Hafiz</creator>
        
        <creator>Jihad Mohamed</creator>
        
        <creator>Zainab Obied</creator>
        
        <creator>Mohammed Abdalsalam</creator>
        
        <subject>Arabic Handwritten Character Recognition (AHCR); Optical Character Recognition (OCR); Deep Learning (DL); Convolutional Neural Network (CNN); Characters Recognition (CR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Recognizing Arabic handwritten characters (AHCR) poses a significant challenge due to the intricate and variable nature of the Arabic script. However, recent advancements in machine learning, particularly through Convolutional Neural Networks (CNNs), have demonstrated promising outcomes in accurately identifying and categorizing these characters. While numerous studies have explored languages like English and Chinese, the Arabic language still requires further research to enhance its compatibility with computer systems. This study investigates the impact of various factors on the CNN model for AHCR, including batch size, filter size, the number of blocks, and the number of convolutional layers within each block. A series of experiments were conducted to determine the optimal model configuration for the AHCD dataset. The most effective model was identified with the following parameters: Batch Size (BS) = 64, Number of Blocks (NB) = 3, Number of Convolution Layers in Block (NC) = 3, and Filter Size (FS) = 64. This model achieved an impressive training accuracy of 98.29% and testing accuracy of 97.87%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_125-The_Impact_of_Various_Factors_on_the_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NLP-Based Automatic Summarization using Bidirectional Encoder Representations from Transformers-Long Short Term Memory Hybrid Model: Enhancing Text Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505124</link>
        <id>10.14569/IJACSA.2024.01505124</id>
        <doi>10.14569/IJACSA.2024.01505124</doi>
        <lastModDate>2024-05-30T17:06:15.4630000+00:00</lastModDate>
        
        <creator>Ranju S Kartha</creator>
        
        <creator>Sanjay Agal</creator>
        
        <creator>Niyati Dhirubhai Odedra</creator>
        
        <creator>Ch Sudipta Kishore Nanda</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Annaji M Kuthe</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Automated text compression; BERT-based extractive summarization; LSTM-based abstractive summarization; NLP-based hybrid approach; Particle Swarm Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>As the amount of online text data continues to grow, the need for summarized text documents becomes increasingly important. Manually summarizing lengthy articles and determining the domain of the content is a time-consuming and tiresome process for humans. Modern technology can classify large amounts of text documents, identifying key phrases that serve as essential concepts or terms to be included in the summary. Automated text compression allows users to quickly identify the key points and generate novel wording for the document. The study introduces an NLP-based hybrid approach for automatic text summarization that combines BERT-based extractive summarization with LSTM-based abstractive summarization techniques. The model aims to create concise and informative summaries. Trained on the BBC news summary dataset, a widely accepted benchmark for text summarization tasks, the model&#39;s parameters are optimized using Particle Swarm Optimization (PSO), a metaheuristic optimization technique. The hybrid model integrates BERT&#39;s extractive capabilities to identify important sentences and LSTM&#39;s abstractive abilities to generate coherent summaries, resulting in improved performance compared to the individual approaches. PSO enhances the model&#39;s efficiency and convergence during training. Experimental results report ROUGE-1 of 0.671428, ROUGE-2 of 0.56428, and ROUGE-L of 0.671428, demonstrating the effectiveness of the proposed approach in enhancing text compression and producing summaries that capture the original text while minimizing redundancy and preserving key information. The study contributes to advancing text summarization tasks and highlights the potential of hybrid NLP-based models in this field.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_124-NLP_Based_Automatic_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating AI and IoT in Advanced Optical Systems for Sustainable Energy and Environment Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505123</link>
        <id>10.14569/IJACSA.2024.01505123</id>
        <doi>10.14569/IJACSA.2024.01505123</doi>
        <lastModDate>2024-05-30T17:06:15.4470000+00:00</lastModDate>
        
        <creator>Shamim Ahmad Khan</creator>
        
        <creator>Abdul Hameed Kalifullah</creator>
        
        <creator>Kamila Ibragimova</creator>
        
        <creator>Akhilesh Kumar Singh</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Venubabu Rachapudi</creator>
        
        <subject>Auto encoder; artificial intelligence; Internet of Things; gated recurrent unit; sustainable energy; environmental monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The increasing demand for sustainable energy solutions and environmental monitoring necessitates advanced technologies. This work combines the capabilities of AI, in the form of a GRU-Autoencoder, with IoT-connected Advanced Optical Systems to create a comprehensive monitoring system. Current monitoring systems often face limitations in real-time analysis and adaptability. Conventional methods struggle to provide timely insights for sustainable energy and environmental management due to the complexity of data patterns and the lack of dynamic adaptability. Our proposed methodology introduces an optimized GRU-Autoencoder, which excels at learning complex temporal patterns, making it well-suited for dynamic environmental and energy data. The integration with Advanced Optical Systems ensures a continuous influx of high-quality real-time data through IoT, enabling more accurate and adaptive analysis. The study involves optimizing the GRU-Autoencoder through hyperparameter tuning and gradient clipping. The model is integrated into an IoT platform that connects with Advanced Optical Systems for seamless data flow. Real-time data from environmental and energy sensors are processed through the AI model, providing immediate insights. Performance is evaluated based on the system&#39;s ability to accurately predict environmental trends, optimize energy consumption, and adapt to dynamic changes. Comparative analyses with traditional methods show the advantages of the suggested strategy in terms of efficiency and accuracy. This research presents a significant development in the study of sustainable energy and environment monitoring, offering a robust solution for real-time data analysis and adaptive decision-making. The integration of an optimized GRU-Autoencoder with IoT-connected Advanced Optical Systems shows promising results in improving overall system performance and sustainability.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_123-Integrating_AI_and_IoT_in_Advanced_Optical_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Generation of Animation Drawing Robot Based on Knowledge Distillation and Semantic Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505122</link>
        <id>10.14569/IJACSA.2024.01505122</id>
        <doi>10.14569/IJACSA.2024.01505122</doi>
        <lastModDate>2024-05-30T17:06:15.4330000+00:00</lastModDate>
        
        <creator>Dujuan Wang</creator>
        
        <subject>Knowledge distillation; semantic constraints; robot; image; generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the development of robot technology, animation drawing robots have gradually entered the public eye. Animation drawing robots can generate many types of images, but they also suffer from problems such as poor quality of generated images and long drawing times. To improve the quality of images generated by animation drawing robots, an animation face line drawing generation algorithm based on knowledge distillation was designed to reduce computational complexity. To further improve the quality of images generated by robots, the research also designed an unsupervised facial caricature image generation algorithm based on semantic constraints, which uses facial semantic labels to constrain the facial structure of the generated images. The results show that the maximum values of the peak signal-to-noise ratio and feature similarity index measurements of the line drawing generation model are 39.45 and 0.7660 respectively, and the minimum values are 37.51 and 0.7483 respectively. The average values of the gradient magnitude similarity deviation and structural similarity of the loss function used in this model are 0.2041 and 0.8669 respectively. The maximum and minimum values of the Fr&#233;chet Inception Distance of the face caricature image generation model are 81.60 and 71.32 respectively, and the maximum and minimum processing times are 15.21s and 13.24s respectively. Both the line drawing generation model and the face caricature image generation model perform well and can provide technical support for the image generation of animation drawing robots.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_122-Image_Generation_of_Animation_Drawing_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Machine Learning Approach to Forecast Fuel Consumption of Backhoe Loader Equipment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505121</link>
        <id>10.14569/IJACSA.2024.01505121</id>
        <doi>10.14569/IJACSA.2024.01505121</doi>
        <lastModDate>2024-05-30T17:06:15.4170000+00:00</lastModDate>
        
        <creator>Poonam Katyare</creator>
        
        <creator>Shubhalaxmi Joshi</creator>
        
        <creator>Mrudula Kulkarni</creator>
        
        <subject>Machine learning; construction equipment; fuel consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This study addresses the challenge of forecasting fuel consumption for various categories of construction equipment, with a specific focus on Backhoe Loaders (BL). Accurate predictions of fuel usage are crucial for optimizing operational efficiency in the increasingly technology-driven construction industry. The proposed methodology involves the application of multiple machine learning (ML) models, including Multiple Linear Regression (MLR), Support Vector Regression (SVR), and Decision Tree Regression (DT), to analyze historical data and key equipment characteristics. The results demonstrate that Decision Tree models outperform other techniques in terms of precision, as evidenced by comparative analysis of the coefficient of determination. These findings enable construction firms to make informed decisions about equipment utilization, resource allocation, and operational productivity, thereby enhancing cost efficiency and minimizing environmental impact. This study provides valuable insights for decision-makers in construction project cost estimation, emphasizing the significant influence of fuel consumption on overall project expenses.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_121-Utilizing_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Artificial Intelligence Method for Identifying Cardiovascular Disease with a Combination CNN-XG-Boost Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505120</link>
        <id>10.14569/IJACSA.2024.01505120</id>
        <doi>10.14569/IJACSA.2024.01505120</doi>
        <lastModDate>2024-05-30T17:06:15.4000000+00:00</lastModDate>
        
        <creator>J Chandra Sekhar</creator>
        
        <creator>T L Deepika Roy</creator>
        
        <creator>K. Sridharan</creator>
        
        <creator>Natrayan L</creator>
        
        <creator>Dr.K.Aanandha Saravanan</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Cardiovascular disease; CNN; XGBoost; traditional approaches; explainable AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Cardiovascular disease (CVD) is a globally significant health issue that presents with a multitude of risk factors and complex physiology, making early detection, avoidance, and effective management a challenge. Early detection is essential for effective treatment of CVD, and typical approaches involve an integrated strategy that includes lifestyle modifications like exercise and diet, medications to control risk factors like high blood pressure and cholesterol, interventions like angioplasty or bypass surgery in extreme cases, and ongoing surveillance to prevent complications and promote heart function. Traditional approaches often rely on manual interpretation, which is time-consuming and prone to error. In this paper, the proposed study uses an automated detection method based on machine learning. The hybrid technique combines the best characteristics of the CNN and XGBoost algorithms. CNN is excellent at identifying pertinent features in medical images, while XGBoost performs well with tabular data. By combining these strategies, both the robustness and the precision of the model in predicting CVD are increased. Furthermore, data normalization techniques are employed to confirm the accuracy and consistency of the model&#39;s projections. By standardizing the input data, the normalization procedure lowers variability and increases the model&#39;s ability to extrapolate across instances. This work explores a novel approach to CVD detection using a CNN/XGBoost hybrid model. The hybrid CNN-XGBoost and explainable AI system has undergone extensive testing and validation, and its performance in accurately detecting CVD is encouraging. Due to its ease of use and effectiveness, this technique may be applied in clinical settings, potentially assisting medical professionals in the prompt assessment and care of patients with cardiovascular disease.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_120-Explainable_Artificial_Intelligence_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Resource Allocation in Cloud Environments using Fruit Fly Optimization and Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505119</link>
        <id>10.14569/IJACSA.2024.01505119</id>
        <doi>10.14569/IJACSA.2024.01505119</doi>
        <lastModDate>2024-05-30T17:06:15.3830000+00:00</lastModDate>
        
        <creator>Taviti Naidu Gongada</creator>
        
        <creator>Girish Bhagwant Desale</creator>
        
        <creator>Shamrao Parashram Ghodake</creator>
        
        <creator>K. Sridharan</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Cloud computing; resource utilization; task scheduling; Fruit Fly Optimization; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Cloud computing environments play a crucial role in modern computing infrastructures, offering scalability, flexibility, and cost-efficiency. However, optimizing resource utilization and performance in such dynamic and complex environments remains a significant challenge. This study addresses this challenge by proposing a novel framework that integrates Fruit Fly Optimization (FFO) with Convolutional Neural Networks (CNN) for task scheduling optimization. The background emphasizes the importance of efficient resource allocation and management in cloud computing to meet increasing demands for computational resources while minimizing costs and enhancing overall system performance. The objective of this research is to develop a comprehensive framework that leverages the complementary strengths of FFO and CNN to address the shortcomings of traditional task scheduling approaches. The novelty of the proposed framework lies in its integration of optimization techniques with advanced data analysis methods, enabling dynamic and adaptive task allocation based on real-time workload patterns. The proposed framework is thoroughly evaluated using historical workload data, and results demonstrate significant improvements over traditional methods. Specifically, the FFO-CNN framework achieves average response times ranging from 120 to 180 milliseconds, while maintaining high resource utilization rates ranging from 90% to 98%. These results highlight the effectiveness of the FFO-CNN framework in enhancing resource utilization and performance in cloud computing environments. This research contributes to advancing the state-of-the-art in cloud resource management by introducing a novel approach that combines optimization and data analysis techniques. The proposed framework offers a promising solution to the challenges of resource allocation and task scheduling in cloud computing environments, paving the way for more efficient and sustainable cloud infrastructures in the future.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_119-Optimizing_Resource_Allocation_in_Cloud_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Arachnid Swarm-Tuned Convolutional Neural Network Model for Efficient Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505117</link>
        <id>10.14569/IJACSA.2024.01505117</id>
        <doi>10.14569/IJACSA.2024.01505117</doi>
        <lastModDate>2024-05-30T17:06:15.3700000+00:00</lastModDate>
        
        <creator>Nishit Patil</creator>
        
        <creator>Shubhlaxmi Joshi</creator>
        
        <subject>Intrusion Detection; arachnid swarm optimization; Convolutional Neural Network; pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Digital systems in the connected world of today bring convenience but also complicated cyber security challenges. The inadequacies of conventional intrusion detection techniques are exposed by the constant adaptation and exploitation of vulnerabilities by advanced cyber threats. Identifying dangers in massive data flows gets more difficult as networks grow, necessitating innovative methods. With the aim of minimizing these concerns, a new intrusion detection (ID) model is created utilizing cutting-edge machine learning to proactively and flexibly combat dynamic cyber attacks. With regard to evolving cyber attackers, this model seeks to improve accuracy and protection systems. This research develops an arachnid swarm optimization-based Convolutional neural network (ASO opt CNN) model to improve ID performance. An improved modified residual CNN is employed in the model to lessen the vanishing and exploding gradient problems in deep networks and to facilitate the optimization process, making it easier for deep networks to learn. The developed model is adjusted using arachnid swarm optimization (ASO), which is the hybridization of particle swarm optimization (PSO) and social spider optimization (SSO). Finally, the model&#39;s efficacy is evaluated using test data. This test data is also subjected to preprocessing, which leads to the creation of a robust detection model that can identify the presence of network attacks. Experimentation and comparison indicate the approach&#39;s effectiveness by attaining accuracies of 95.95%, 95.61%, and 95.00% for three datasets respectively. This highlights the developed model’s potential to detect intrusions more effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_117-Enhanced_Arachnid_Swarm_Tuned_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Offensive Language Detection: CNN-GRU and BERT for Enhanced Hate Speech Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505118</link>
        <id>10.14569/IJACSA.2024.01505118</id>
        <doi>10.14569/IJACSA.2024.01505118</doi>
        <lastModDate>2024-05-30T17:06:15.3700000+00:00</lastModDate>
        
        <creator>M. Madhavi</creator>
        
        <creator>Sanjay Agal</creator>
        
        <creator>Niyati Dhirubhai Odedra</creator>
        
        <creator>Harish Chowdhary</creator>
        
        <creator>Taranpreet Singh Ruprah</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Bidirectional encoder representations from transformers; convolutional neural network; Gated Recurrent Unit; hate speech; hugging face transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Upholding a secure and accepting digital environment is severely hindered by hate speech and inappropriate information on the internet. A novel approach that combines a Convolutional Neural Network with a GRU and BERT from Transformers is proposed for enhancing the identification of offensive content, particularly hate speech. The method utilizes the strengths of both the CNN-GRU and BERT models to capture complex linguistic patterns and contextual information present in hate speech. The proposed model first utilizes CNN-GRU to extract local and sequential features from textual data, allowing for effective representation learning of offensive language. Subsequently, BERT, an advanced transformer-based model, is employed to capture contextualized representations of the text, thereby enhancing the understanding of detailed linguistic nuances and cultural contexts associated with hate speech. The BERT model is fine-tuned using the Hugging Face transformer. Tests are executed using publicly accessible hate speech identification datasets to show how well the method identifies inappropriate content. By assisting with the continuing efforts to prevent the dissemination of hate speech and undesirable language online, the proposed framework promotes a more diverse and secure digital environment. The proposed method is implemented using Python. The method achieves 98%, a competitive performance compared to existing approaches (LSTM and RNN, CNN, LSTM and GBAT), showcasing its potential for real-world applications in combating online hate speech. Furthermore, it provides insights into the interpretability of the model&#39;s predictions, highlighting key linguistic and contextual factors influencing offensive language detection. The study contributes to advancing hate speech detection by integrating the CNN-GRU and BERT models, giving a robust solution for enhancing offensive content identification in online platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_118-Elevating_Offensive_Language_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Enhanced Object Detection and Classification Methods for Alstroemeria Genus Morado</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505116</link>
        <id>10.14569/IJACSA.2024.01505116</id>
        <doi>10.14569/IJACSA.2024.01505116</doi>
        <lastModDate>2024-05-30T17:06:15.3530000+00:00</lastModDate>
        
        <creator>Yaru Huang</creator>
        
        <creator>Yangxu Wang</creator>
        
        <subject>Alstroemeria; object detection; maturity classification; multi-scale feature fusion; Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>As an important ornamental plant, the automatic detection and classification of the maturity of Alstroemeria Genus Morado flowers hold significant importance in precision agriculture. However, this task faces numerous challenges due to the diversity of morphological characteristics, complex growth environments, and factors such as occlusion and lighting variations. Currently, this field is relatively unexplored, necessitating innovative methods to overcome existing difficulties. To fill this research gap, this study developed a deep learning-based object detection framework, the Alstroemeria Genus Morado Network (AGMNet), specifically optimized for the detection and classification of Alstroemeria Genus Morado flowers. This convolutional neural network utilizes multi-scale feature fusion techniques and spatial attention mechanisms, along with a dual-path detection structure, significantly enhancing its capability for automatic maturity classification and detection of flowers. Notably, AGMNet addresses the issue of class imbalance in its design and employs advanced data augmentation techniques to enhance the model&#39;s generalization ability. In comparative experiments on the morado_5may dataset, AGMNet demonstrated superior performance in Precision, Recall, and F1-score, with a 3.8% improvement in the mAP metric over the latest YOLOv9 model, showcasing stronger generalization capabilities. AGMNet is expected to play a more significant role in enhancing agricultural production efficiency and automation levels.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_116-Exploring_Enhanced_Object_Detection_and_Classification_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Optimal Image Processing-based Internet of Things Monitoring Approaches for Sustainable Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505115</link>
        <id>10.14569/IJACSA.2024.01505115</id>
        <doi>10.14569/IJACSA.2024.01505115</doi>
        <lastModDate>2024-05-30T17:06:15.3370000+00:00</lastModDate>
        
        <creator>Weiwei LIU</creator>
        
        <creator>Guifeng CHEN</creator>
        
        <subject>Sustainable cities; Internet of Things; image processing; urban monitoring; smart city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Population growth and urbanization demand innovative strategies for sustainable city management. This paper focuses on the integration of the Internet of Things (IoT) and image processing technologies for environmental monitoring in sustainable urban development. The IoT forms an integral part of the Information and Communication Technology (ICT) infrastructure in smart sustainable cities. It offers a new model for urban design, due to its ability to offer environmentally sustainable alternatives. Furthermore, image processing is a method employed in computer vision that provides reliable approaches for extracting significant data from images. The convergence of these technologies has the capacity to enhance the effectiveness and durability of our urban surroundings. This paper discusses the current state-of-the-art in both IoT and image processing, highlighting their individual applications, architectures, and challenges. This paper explores the integration of the aforementioned technologies in a harmonized monitoring system to promote synergies and complementarities. Several case studies demonstrate the successful adoption of the harmonized approach in urban contexts, focusing on environmental monitoring, energy management, transportation, and social well-being. The combination of IoT with image processing raises concerns regarding privacy, standardization, and scalability. The study has provided a direction for future research and suggested that more participatory and multi-strategy approaches could be beneficial to address some existing limitations and move toward a more sustainable urban context. It should therefore be viewed as a compass or a roadmap for future research in the areas of IoT and image processing-based monitoring towards today&#39;s and future sustainable urban environments.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_115-Towards_Optimal_Image_Processing_based_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosis of NEC using a Multi-Feature Fusion Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505114</link>
        <id>10.14569/IJACSA.2024.01505114</id>
        <doi>10.14569/IJACSA.2024.01505114</doi>
        <lastModDate>2024-05-30T17:06:15.3230000+00:00</lastModDate>
        
        <creator>Jiahe Li</creator>
        
        <creator>Yue Han</creator>
        
        <creator>Yunzhou Li</creator>
        
        <creator>Jin Zhang</creator>
        
        <creator>Ling He</creator>
        
        <creator>Tao Xiong</creator>
        
        <creator>Qian Gao</creator>
        
        <subject>Diagnosis of necrotizing enterocolitis (NEC); bowel sound; feature fusion; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Necrotizing enterocolitis (NEC) is a severe gastrointestinal emergency in neonates, marked by its complex etiology, ambiguous clinical manifestations, and significant morbidity and mortality, profoundly affecting long-term pediatric health outcomes. The prevailing diagnostic approaches for NEC, including traditional manual auscultation of bowel sounds, suffer from limited sensitivity and specificity, leading to potential misdiagnoses and delayed treatment. In this paper, we introduce a groundbreaking NEC diagnostic framework employing machine learning algorithms that utilize multi-feature fusion of bowel sounds, significantly improving the diagnostic accuracy. Bowel sounds from NEC patients and healthy newborns are meticulously captured using a specialized acquisition system, designed to overcome the inherent challenges associated with the low amplitude, substantial background noise, and high variability of neonatal bowel sounds. To enhance the diagnostic framework, we extract mel-frequency cepstral coefficient (MFCC), short-time energy (STE), and zero-crossing rate (ZCR) to capture comprehensive frequency and time domain features, ensuring a robust representation of bowel sound characteristics. These features are then integrated using a multi-feature fusion technique to form a singular feature vector, providing a rich, integrated dataset for the machine learning algorithm. Employing the support vector machine (SVM), the algorithm achieved an accuracy (ACC) of 88.00%, sensitivity (SEN) of 100.00%, and an area under the receiver operating characteristic (ROC) curve (AUC) of 97.62%, achieving high accuracy in diagnosing NEC. This innovative approach not only improves the accuracy and objectivity of NEC diagnosis but also shows promise in revolutionizing neonatal care through facilitating early and precise diagnosis. It significantly enhances clinical outcomes for affected neonates.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_114-Diagnosis_of_NEC_using_a_Multi_Feature_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted Recursive Graph Color Coding for Enhanced Load Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505113</link>
        <id>10.14569/IJACSA.2024.01505113</id>
        <doi>10.14569/IJACSA.2024.01505113</doi>
        <lastModDate>2024-05-30T17:06:15.3070000+00:00</lastModDate>
        
        <creator>Li Zhang</creator>
        
        <creator>Hengtao Ai</creator>
        
        <creator>Yuhang Liu</creator>
        
        <creator>Shiqing Li</creator>
        
        <creator>Tao Zhang</creator>
        
        <subject>Non-Intrusive Load Monitoring (NILM); Weighted Recurrence Graph (WRG); color coding; AlexNet neural network; load signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the pursuit of high-precision load identification, traditional methodologies grapple with significant drawbacks, including low recognition rates, intricate signature construction, and narrow applicability. This study introduces a novel approach employing weighted recursive graph (WRG) color coding to surmount these challenges. Power consumption data, procured from advanced load monitoring devices, undergo extraction of single-cycle currents, which are then subjected to dimensional reduction via Piece-wise Aggregate Approximation (PAA). In a transformative step, these currents are encoded into load signatures through the recursive graph time series methodology, culminating in the generation of WRG images. An AlexNet neural network model is engaged to distil and assimilate the distinctive features of the WRG images. The simulation results indicate that the identification rate can exceed 97%. Additionally, an experimental platform was set up to verify the method proposed in this paper, and the results show that the actual identification rate can reach over 96%. Both the simulation results and experiments fully demonstrate that the proposed identification method has a high accuracy. This method not only sets a new standard in non-intrusive load identification but also enhances the generalization of load signature applicability across diverse scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_113-Weighted_Recursive_Graph_Color_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance of a Temporal Multi-Modal Sentiment Analysis Model Based on Multitask Learning in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505112</link>
        <id>10.14569/IJACSA.2024.01505112</id>
        <doi>10.14569/IJACSA.2024.01505112</doi>
        <lastModDate>2024-05-30T17:06:15.2770000+00:00</lastModDate>
        
        <creator>Lin He</creator>
        
        <creator>Haili Lu</creator>
        
        <subject>Multi task learning; multi-modal; emotional analysis; attention mechanism; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>When conducting sentiment analysis on social networks, facing the challenge of temporal and multi-modal data, it is necessary to enable the model to deeply mine and combine information from various modalities. Therefore, this study constructs an emotion analysis model based on multitask learning. This model utilizes a comprehensive framework of convolutional networks, bidirectional gated recurrent units, and multi-head self-attention mechanisms to represent single-modal temporal features in an innovative way, and adopts a cross-modal feature fusion strategy. The experiment showed that the model achieved an average precision of 0.83 and an F1-value of 0.83. In contrast with the multi-scale attention (0.69, 0.70), aspect-based sentiment analysis (0.78, 0.74), and long short-term memory network (0.71, 0.78) models, this model demonstrated higher robustness and classification accuracy. Especially in terms of parallel computing efficiency, the acceleration ratio of the model reached 1.61, which is the highest among all compared models, highlighting the potential for time savings in large data volumes. This study has shown good performance in sentiment analysis in social networks, providing a novel perspective for solving complex sentiment classification problems.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_112-The_Performance_of_a_Temporal_Multi_Modal_Sentiment_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Cutting-Edge Developments in Deep Learning for Biomedical Signal Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505111</link>
        <id>10.14569/IJACSA.2024.01505111</id>
        <doi>10.14569/IJACSA.2024.01505111</doi>
        <lastModDate>2024-05-30T17:06:15.2770000+00:00</lastModDate>
        
        <creator>Yukun Zhu</creator>
        
        <creator>Haiyan Zhang</creator>
        
        <creator>Bing Liu</creator>
        
        <creator>Junyan Dou</creator>
        
        <subject>Biomedical signal processing; health monitoring; deep learning; electrocardiography; electromyography; electroencephalography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Biomedical condition monitoring devices are progressing quickly by incorporating cost-effective and non-invasive sensors to track vital signs, record medical circumstances, and deliver meaningful responses. These sophisticated innovations rely on breakthrough technology to provide intelligent platforms for health monitoring, quick illness recognition, and precise treatment. Biomedical signal processing determines patterns of signals and serves as the backbone for reliable applications, medical diagnostics, and research. Deep Learning (DL) methods have brought significant innovation in biomedical signal processing, leading to the transformation of the health sector and medical diagnostics. This article covers an entire range of technical innovations evolved for DL-based biomedical signal processing where different modalities have been considered, including Electrocardiography (ECG), Electromyography (EMG), and Electroencephalography (EEG). A vast amount of biomedical data in various forms is available, and DL concepts are required to extract and model this data in order to identify hidden complex patterns that can be utilized to improve the diagnosis, prognosis, and personalized treatment of diseases in an individual. The nature of this developing topic certainly gives rise to a number of challenges. First, the application of sensitive and noisy time series data requires truly robust models. Second, many inferences made at the bedside must have interpretability by design. Third, the field will require that processing be performed in real-time if used for therapeutic interventions. We systematically evaluate these challenges and highlight areas where continued research is needed. The general expansion of DL technologies into the biomedical domain gives rise to novel concerns about accountability and transparency of algorithmic decision-making, a subject which we briefly touch upon as well.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_111-Exploring_Cutting_Edge_Developments_in_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Diagnosis of Attention-Deficit/Hyperactivity Disorder and Bipolar Disorder using Steady-State Visual Evoked Potentials</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505110</link>
        <id>10.14569/IJACSA.2024.01505110</id>
        <doi>10.14569/IJACSA.2024.01505110</doi>
        <lastModDate>2024-05-30T17:06:15.2600000+00:00</lastModDate>
        
        <creator>Xiaoxia Li</creator>
        
        <subject>Attention-deficit/Hyperactivity disorder (ADHD); bipolar disorder; electroencephalography (EEG); steady-state visual evoked potential (SSVEP); machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Bipolar disorder and Attention-deficit/Hyperactivity disorder (ADHD) are two prevalent disorders whose symptoms are similar. In order to reduce the misdiagnosis between bipolar disorder and ADHD, a machine learning-based system using electroencephalography (EEG) and steady-state potentials (i.e., steady-state visual evoked potential [SSVEP]) was evaluated to classify ADHD, bipolar disorder and normal conditions. Indeed, this research was conducted for the first time with the aim of designing a machine learning system for EEG detection of ADHD, bipolar disorder, and normal conditions using SSVEPs. For this purpose, both linear and nonlinear dynamics of extracted SSVEPs were analyzed. After data preprocessing, spectral analysis and recurrence quantification analysis (RQA) were applied to SSVEPs. Then, feature selection was utilized through the DISR. Finally, we utilized various machine learning techniques to classify the linear and nonlinear features extracted from SSVEPs into three classes of ADHD, bipolar disorder and normal: k-nearest neighbors (KNN), support vector machine (SVM), linear discriminant analysis (LDA) and Na&#239;ve Bayes. Experimental results showed that the SVM classifier with linear kernel yielded an accuracy of 78.57% for ADHD, bipolar disorder and normal classification through the leave-one-subject-out (LOSO) cross-validation. Although this research is the first to evaluate the utilization of signal processing and machine learning approaches in SSVEP classification of these disorders, it has limitations that future studies should investigate to enhance the efficacy of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_110-Differential_Diagnosis_of_Attention_Deficit_Hyperactivity_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Competition Characteristics of Athletes Through Video Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505109</link>
        <id>10.14569/IJACSA.2024.01505109</id>
        <doi>10.14569/IJACSA.2024.01505109</doi>
        <lastModDate>2024-05-30T17:06:15.2430000+00:00</lastModDate>
        
        <creator>Yuzhong Liu</creator>
        
        <creator>Tianfan Zhang</creator>
        
        <creator>Zhe Li</creator>
        
        <creator>Mengshuang Ma</creator>
        
        <subject>Video analysis technology; scene recognition method; athlete identification; posture reconstruction; table tennis competition; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The vast repositories of training and competition video data serve as indispensable resources for athlete training and competitor analysis, providing a solid foundation for strategic competition analysis and tactics formulation. However, the effectiveness of these analyses hinges on the abundance and precision of data, often requiring costly professional systems for existing video analysis techniques. Meanwhile, readily accessible non-professional data frequently lacks standardization, compelling manual analysis and experiential judgments, thus limiting the widespread adoption of video analysis technologies. To address these challenges, we have devised an intelligent video analysis technology and a methodology for identifying athletes&#39; competition characteristics. Initially, we employed target detection models, such as You Only Look Once (YOLO), renowned for their ease of deployment and low environmental dependency, to perform fundamental detection tasks. This was further complemented by the intelligent selection of standardized scenes through customizable scene rules, leading to the formation of a standardized scene dataset. On this robust foundation, we achieved classification and identification of competition participants as well as sideline recognition, ultimately compiling a comprehensive competitive dataset. Subsequently, we constructed an athlete posture estimation method utilizing OpenPose, aimed at minimizing interference caused by obstructions and enhancing the accuracy of feature extraction. In experimental validation, we gathered a diverse collection of table tennis competition video data from the internet, serving as a validation dataset. The results were impressive, with a detection success rate for standardized scenes exceeding 94% and an identification success rate for competitors surpassing 98%. The accuracy of posture reconstruction for obstructed individuals exceeded 60%, and the effectiveness of identifying athletes&#39; main features exceeded 90%, convincingly demonstrating the effectiveness of the proposed video analysis method.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_109-Identifying_Competition_Characteristics_of_Athletes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New 3D Shape Descriptor Extraction using CatBoost Classifier for Accurate 3D Model Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505107</link>
        <id>10.14569/IJACSA.2024.01505107</id>
        <doi>10.14569/IJACSA.2024.01505107</doi>
        <lastModDate>2024-05-30T17:06:15.2300000+00:00</lastModDate>
        
        <creator>Mohcine BOUKSIM</creator>
        
        <creator>Fatima RAFII ZAKANI</creator>
        
        <creator>Khadija ARHID</creator>
        
        <creator>Azzeddine DAHBI</creator>
        
        <creator>Taoufiq GADI</creator>
        
        <creator>Mohamed ABOULFATAH</creator>
        
        <subject>3D object retrieval; 3D shape retrieval; 3D shape matching; indexing; descriptor; CatBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Given the wide application of 3D model analysis, covering domains such as medicine, engineering, and virtual reality, the demand for innovative content-based 3D shape retrieval systems capable of handling complex 3D data efficiently has significantly increased. This paper proposes a new 3D shape retrieval method that uses the CatBoost classifier, a machine learning algorithm, to capture a unique descriptor for each 3D mesh. The main idea of our method is to obtain a specific and unique signature, or descriptor, for each 3D model by training the CatBoost classifier with features obtained directly from the 3D models. This idea not only accelerates the training process, but also ensures the consistency and relevance of the data fed to the classifier during training. Once fully trained, the classifier generates a descriptor that is used during the indexing and retrieval process. The efficiency of our method is demonstrated by conducting extensive experiments on the Princeton shape benchmark database. The results demonstrate high retrieval accuracy in comparison to various existing methods in the literature. Our method&#39;s ability to outperform these methods shows its potential as a highly useful tool in the field of content-based 3D shape retrieval.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_107-New_3D_Shape_Descriptor_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YOLO-T: Multi-Target Detection Algorithm for Transmission Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505108</link>
        <id>10.14569/IJACSA.2024.01505108</id>
        <doi>10.14569/IJACSA.2024.01505108</doi>
        <lastModDate>2024-05-30T17:06:15.2300000+00:00</lastModDate>
        
        <creator>Shengwen Li</creator>
        
        <creator>Huabing Ouyang</creator>
        
        <creator>Tian Chen</creator>
        
        <creator>Xiaokang Lu</creator>
        
        <creator>Zhendong Zhao</creator>
        
        <subject>Transmission line inspection; contextual transformer; attention mechanism; ghost convolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>During UAV inspections of transmission lines, inspectors often encounter long-distance and obstructed targets. Existing detection algorithms perform inadequately on such targets, lacking effective detection capabilities for small objects and complex backgrounds. Therefore, we propose an improved YOLOv8-based YOLO-T algorithm for detecting multiple targets on transmission lines, optimized using transfer learning. Firstly, the model is made lightweight while maintaining detection accuracy by replacing the original convolution block in the C2f module of the neck network with Ghost convolution. Secondly, to improve the target detection ability of the model, the C2f module in the backbone network is replaced with the Contextual Transformer module. Then, the feature extraction of the model is improved by integrating the Attention module and the residual edge on the SPPF (Spatial Pyramid Pooling-Fast). Finally, we introduce a new shallow feature layer to enable multi-scale feature fusion, optimizing the model&#39;s detection accuracy for small and obscured objects. Parameters and GFLOPs are conserved by using the Add operation instead of the Concat operation. The experiments reveal that the enhanced algorithm achieves a mean detection accuracy of 97.19% on the transmission line dataset, which is 2.03% higher than the baseline YOLOv8 algorithm. It can also effectively detect small and occluded targets at long distances with a high FPS (98.91 frames/s).</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_108-YOLO_T_Multi_Target_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved SegNet with Hybrid Classifier for Lung Cancer Segmentation and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505106</link>
        <id>10.14569/IJACSA.2024.01505106</id>
        <doi>10.14569/IJACSA.2024.01505106</doi>
        <lastModDate>2024-05-30T17:06:15.2130000+00:00</lastModDate>
        
        <creator>Rathod Dharmesh Ishwerlal</creator>
        
        <creator>Reshu Agarwal</creator>
        
        <creator>K.S. Sujatha</creator>
        
        <subject>Improved SegNet; LGXP; MBN; LSTM; LinkNet; lung cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Lung cancer (LC) is a leading cause of death worldwide, underscoring the urgent need for prompt diagnosis to save lives. While CT scans serve as a primary imaging tool for LC detection, manual analysis is laborious and prone to inaccuracies. Recognizing these challenges, computational techniques, particularly machine learning (ML) and deep learning (DL) algorithms, are being increasingly explored as efficient alternatives to enhance the precise identification of cancerous and non-cancerous regions within CT scans, aiming to expedite diagnosis and mitigate errors. The proposed model employs preprocessing to standardize image features, followed by segmentation using an Improved SegNet framework to delineate cancerous regions. Features like LGXP and MBP are then extracted, facilitating classification with a hybrid classifier which combines LSTM and LinkNet models. Implemented in Python, the model&#39;s performance is evaluated against conventional methods, showcasing superior accuracy, sensitivity, and precision. This framework promises to revolutionize LC diagnosis, enabling early intervention and improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_106-Improved_SegNet_with_Hybrid_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Multi-Factor Authentication Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505105</link>
        <id>10.14569/IJACSA.2024.01505105</id>
        <doi>10.14569/IJACSA.2024.01505105</doi>
        <lastModDate>2024-05-30T17:06:15.1970000+00:00</lastModDate>
        
        <creator>Muhammad Syahreen</creator>
        
        <creator>Noor Hafizah</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <creator>Mayasarah Maslinan</creator>
        
        <subject>Data privacy; information; multi-factor authentication; security challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the new era of technology, where information can be accessed and gained at the push of a button, security concerns are raised about protecting the system and data privacy and confidentiality. Traditional user authentication methods are vulnerable to multiple attacks across all platforms. Various studies propose the use of more than one authentication process to enhance the security level of a system, whether hosted on-premise or on the cloud. However, there is limited research on guidelines and appropriate authentication frameworks that suit the needs of an organization. A systematic literature review of Multi-Factor Authentication frameworks was conducted through five primary databases: Scopus, IEEE, Science Direct, Springer Link, and Web of Science. The review examined the proposed solutions and the underlying methods in each Multi-Factor Authentication framework. Numerous authentication methods were combined to address specific system and data security challenges. The most common authentication method is biometric authentication, which addresses the uniqueness of the user&#39;s biological identity. The majority of the proposed solutions were proofs of concept and require a pilot test or experiment in the future.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_105-A_Systematic_Review_on_Multi_Factor_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Workload Prediction Based on Bayesian-Optimized Autoformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505104</link>
        <id>10.14569/IJACSA.2024.01505104</id>
        <doi>10.14569/IJACSA.2024.01505104</doi>
        <lastModDate>2024-05-30T17:06:15.1830000+00:00</lastModDate>
        
        <creator>Biying Zhang</creator>
        
        <creator>Yuling Huang</creator>
        
        <creator>Zuoqiang Du</creator>
        
        <creator>Zhimin Qiu</creator>
        
        <subject>Cloud computing; deep learning; workload prediction; Autoformer; Bayesian optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Accurate workload forecasting plays a pivotal role in the management of cloud computing resources, enabling significant enhancement in the performance of the cloud platform and effective prevention of resource wastage. However, the complexity, variability, and strong time dependencies of cloud workloads make prediction difficult. To address the challenge of enhancing accuracy in contemporary cloud workload prediction, this paper employs empirical and quantitative research methods, introducing a cloud workload prediction method based on Bayesian-optimized Autoformer, termed BO-Autoformer. Initially, the cloud workload data were divided according to the time-sliding window to construct a continuous feature sequence, which was used as the input of the model to construct the Autoformer prediction model. Subsequently, to further enhance the model&#39;s performance, the Bayesian optimization method was employed to identify the optimal combination of hyperparameters, resulting in the development of the Bayesian optimization-based Autoformer cloud workload prediction model. Finally, experiments were conducted on a real Google dataset to evaluate the model&#39;s effectiveness. The findings reveal that, compared to alternative models, the proposed prediction model demonstrates superior performance on the cloud workload dataset, and can effectively improve the prediction accuracy of the cloud workload.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_104-Cloud_Workload_Prediction_Based_on_Bayesian.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Artificial Bee Colony Algorithm for Load Balancing in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505103</link>
        <id>10.14569/IJACSA.2024.01505103</id>
        <doi>10.14569/IJACSA.2024.01505103</doi>
        <lastModDate>2024-05-30T17:06:15.1670000+00:00</lastModDate>
        
        <creator>Qian LI</creator>
        
        <creator>Xue WANG</creator>
        
        <subject>Resource utilization; cloud computing; task scheduling; Artificial Bee Colony; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Task scheduling in cloud computing is a complex optimization problem influenced by ever-changing user requirements and the different architectures of cloud systems. Efficiently distributing workloads across Virtual Machines (VMs) is critical to mitigate the negative consequences of inadequate and excessive workloads, such as higher power consumption and possible machine malfunctions. This paper presents a novel method for dynamic load balancing using a Modified Artificial Bee Colony (MABC) algorithm. The ABC algorithm, based on the foraging behavior of bee colonies, has exceptional competence in solving complex nonlinear optimization problems. Nevertheless, the traditional version of the ABC algorithm cannot effectively use resources, resulting in a rapid decline in population diversity and an ineffective spread of knowledge about the best solution between generations. To address these limitations, this study integrates a genetic model into the algorithm, enhancing population diversity through crossover and mutation operators. The developed algorithm is compared with prevailing algorithms to confirm its effectiveness. The results show that the proposed MABC load-balancing algorithm is more beneficial than current algorithms in terms of cost and energy as well as resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_103-Modified_Artificial_Bee_Colony_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Yoga&#39;s Impact on Individuals with Venous Chronic Cerebrospinal System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505102</link>
        <id>10.14569/IJACSA.2024.01505102</id>
        <doi>10.14569/IJACSA.2024.01505102</doi>
        <lastModDate>2024-05-30T17:06:15.1500000+00:00</lastModDate>
        
        <creator>Sanjun Qiu</creator>
        
        <subject>Yoga; varicose veins; Extra Tree Classification; cheetah optimization algorithm; Black Widow Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>People leading a modern lifestyle often experience varicose veins, commonly attributed to factors associated with work and diet, such as prolonged periods of standing or excess weight. These disorders involve elevated blood pressure in the lower extremities, especially the legs. An often-researched metric associated with these illnesses is the Vascular Clinical Severity Score (VCSS), which is connected to discomfort and skin discolorations. However, yoga appears to be a viable way to prevent and manage these problems, significantly lessening the negative consequences of varicose veins. The investigation of yoga&#39;s effect on VCSS in this study uses a novel strategy combining machine learning with the Extra Tree Classification (ETC), which is improved by the Cheetah Optimizer (CO) and Black Widow Optimizer (BWO). In this study, the ETC model was combined with the aforementioned optimizers, yielding two amalgamated models, referred to as ETBW and ETCO. Evaluation of these models showed that the ETCO model achieved the higher prediction accuracy for VCSS. By revealing subtle correlations between yoga treatments and VCSS results, this multidisciplinary approach seeks to provide a thorough understanding of preventative and control processes. This research advances the understanding of vascular health by correlating yoga interventions with VCSS outcomes using machine learning and optimization algorithms. By enhancing predictive accuracy, it promotes multidisciplinary collaboration, personalized medicine, and innovation in healthcare, promising improved patient care and outcomes in varicose vein management.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_102-Predictive_Modeling_of_Yogas_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Path Planning of Mobile Robots Based on the Improved SAC Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505100</link>
        <id>10.14569/IJACSA.2024.01505100</id>
        <doi>10.14569/IJACSA.2024.01505100</doi>
        <lastModDate>2024-05-30T17:06:15.1330000+00:00</lastModDate>
        
        <creator>Ruihong Zhou</creator>
        
        <creator>Caihong Li</creator>
        
        <creator>Guosheng Zhang</creator>
        
        <creator>Yaoyu Zhang</creator>
        
        <creator>Jiajun Liu</creator>
        
        <subject>Mobile robots; local path planning; reinforcement learning; SAC algorithm; priority experience replay; experience pool adjustment; Robot Operating System (ROS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This paper proposes a new EP-PER-SAC algorithm to solve the problems of slow training speed and low learning efficiency of the SAC (Soft Actor Critic) algorithm in the local path planning of mobile robots by introducing the Priority Experience Replay (PER) strategy and Experience Pool (EP) adjustment technique. This algorithm replaces equal-probability random sampling with priority-based experience sampling to increase the frequency of extracting important samples, thereby improving the stability and convergence speed of model training. On this basis, the algorithm continuously monitors the robot&#39;s learning progress and exploration rate changes to dynamically adjust the experience pool, so the robot can adapt effectively to environmental changes while the algorithm&#39;s storage requirements and learning efficiency remain balanced. Then, the algorithm&#39;s reward and punishment function is improved to reduce the blindness of algorithm training. Finally, experiments are conducted under different obstacle environments, on both a ROS (Robot Operating System) simulation platform and in a real environment, to verify the feasibility of the algorithm. The results show that the improved EP-PER-SAC algorithm achieves a shorter path length and faster model convergence than the original SAC algorithm and the PER-SAC algorithm.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_100-Local_Path_Planning_of_Mobile_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Music Style Transfer and Innovative Composition using Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01505101</link>
        <id>10.14569/IJACSA.2024.01505101</id>
        <doi>10.14569/IJACSA.2024.01505101</doi>
        <lastModDate>2024-05-30T17:06:15.1330000+00:00</lastModDate>
        
        <creator>Sujie He</creator>
        
        <subject>Deep learning; style transfer; innovative composition; Generative Adversarial Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Automatic music generation represents a challenging task within the field of artificial intelligence, aiming to harness machine learning techniques to compose music that is appreciable by humans. In this context, we introduce a text-based music data representation method that bridges the gap for the application of large text-generation models in music creation. Addressing the characteristics of music, such as smaller note dimensionality and longer length, we employed a deep generative adversarial network model based on music measures (MT-CHSE-GAN). This model integrates paragraph text generation methods and improves the quality and efficiency of music melody generation through measure-wise processing and channel attention mechanisms. The MT-CHSE-GAN model provides a novel framework for music data processing and generation, offering an effective solution to the problem of long-sequence music generation. To comprehensively evaluate the quality of the generated music, we used accuracy, loss rate, and music theory knowledge as evaluation metrics and compared our model with other music generation models. Experimental results demonstrate our method&#39;s significant advantages in music generation quality. Despite progress in the field of automatic music generation, its application still faces challenges, particularly in terms of quantitative evaluation metrics and the breadth of model applications. Future research will continue to explore expanding the model&#39;s application scope, enriching evaluation methods, and further improving the quality and expressiveness of the generated music. This study not only advances the development of music generation technology but also provides valuable experience and insights for research in related fields.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_101-Exploring_Music_Style_Transfer_and_Innovative_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Generalized Linear Regression with Two Step-AS Algorithm for COVID-19 Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150599</link>
        <id>10.14569/IJACSA.2024.0150599</id>
        <doi>10.14569/IJACSA.2024.0150599</doi>
        <lastModDate>2024-05-30T17:06:15.1200000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <creator>Sultan Menwer Altarrazi</creator>
        
        <creator>Altyeb Taha</creator>
        
        <subject>Generalized Linear model; COVID-19; TwoStep-AS; clustering; X-ray images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research introduces a computer-aided intelligence model designed to automatically identify positive instances of COVID-19 for routine medical applications. The model, built on the Generalized Linear architecture, employs the TwoStep-AS cluster method with diverse screen relatives; weight-sharing and stripping characteristics automatically identify distinctive features in chest X-ray images. Unlike the conventional transformational learning approach, our model underwent training both before and after clustering. The dataset was subjected to a compilation process that involved subdividing samples and categories into multiple sub-samples and subgroups. New cluster labels were then assigned to each cluster, treating each subject cluster as a distinct category. Discriminant features extracted from this process were used to train the Generalized Linear model, which was subsequently applied to classify instances. The TwoStep-AS clustering method was modified by pre-compiling the data before employing the Generalized Linear model to identify COVID samples from chest X-ray results. Tests conducted on the COVID-radiology data confirmed the correctness of the results. The suggested model demonstrated an impressive accuracy of 90.6%, establishing it as a highly efficient, cost-effective, and rapid intelligence tool for the detection of Coronavirus infections.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_99-An_Integrated_Generalized_Linear_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Sentiment Analysis on Social Media Data with Advanced Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150598</link>
        <id>10.14569/IJACSA.2024.0150598</id>
        <doi>10.14569/IJACSA.2024.0150598</doi>
        <lastModDate>2024-05-30T17:06:15.1030000+00:00</lastModDate>
        
        <creator>Huu-Hoa Nguyen</creator>
        
        <subject>Sentiment analysis; deep learning; hyperparameter; feature extraction; social media; digital platform; gridsearchcv; BiLSTM; TF-IDF; word2vec; glove; Scikit-learn and Gensim</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This paper introduces a comprehensive methodology for conducting sentiment analysis on social media using advanced deep learning techniques to address the unique challenges of this domain. As digital platforms play an increasingly pivotal role in shaping public discourse, the demand for real-time sentiment analysis has expanded across various sectors, including policymaking, brand monitoring, and personalized services. Our study details a robust framework that encompasses every phase of the deep learning process, from data collection and preprocessing to feature extraction and model optimization. We implement sophisticated data preprocessing techniques to improve data quality and adopt innovative feature extraction methods such as TF-IDF, Word2Vec, and GloVe. Our approach integrates several advanced deep learning configurations, including variants of BiLSTMs, and employs tools like Scikit-learn and Gensim for efficient hyperparameter tuning and model optimization. Through meticulous optimization with GridSearchCV, we enhance the robustness and generalizability of our models. We conduct extensive experimental analysis to evaluate these models against multiple configurations using standard metrics to identify the most effective techniques. Additionally, we benchmark our methods against prior studies, and our findings demonstrate that our proposed approaches outperform comparative techniques. These results provide valuable insights for implementing deep learning in sentiment analysis and contribute to setting benchmarks in the field, thus advancing both the theoretical and practical applications of sentiment analysis in real-world scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_98-Enhancing_Sentiment_Analysis_on_Social_Media_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Stepwise Discriminant Analysis and FBCSP Feature Selection Strategy for EEG MI Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150597</link>
        <id>10.14569/IJACSA.2024.0150597</id>
        <doi>10.14569/IJACSA.2024.0150597</doi>
        <lastModDate>2024-05-30T17:06:15.0870000+00:00</lastModDate>
        
        <creator>YingHui Meng</creator>
        
        <creator>YaRu Su</creator>
        
        <creator>Duan Li</creator>
        
        <creator>JiaoFen Nan</creator>
        
        <creator>YongQuan Xia</creator>
        
        <subject>Stepwise discriminant analysis; electroencephalogram; motor imagery; sliding time window; filter bank common spatial pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Accurate decoding of brain intentions is a pivotal technology within Brain-Computer Interface (BCI) systems that rely on Motor Imagery (MI). The effective extraction of information features plays a critical role in the precise decoding of these brain intentions. However, there exists significant individual and environmental variability in signals, and the sensitivity of EEG signals from different subjects also varies, imposing higher demands on both feature exploration and accurate decoding. To address these challenges, we employ adaptive sliding time windows and a stepwise discriminant analysis strategy to selectively extract features obtained through the Filter Bank Common Spatial Pattern (FBCSP). This entails the identification of an optimal feature combination tailored to specific patients, thereby mitigating individual differences and environmental variations. Initially, adaptive sliding time windows are applied to segment electroencephalogram (EEG) data for different subjects, followed by FBCSP for feature extraction. Subsequently, a stepwise discriminant analysis (SDA) incorporating prior knowledge is employed for optimal feature selection, effectively and adaptively identifying the best feature combination for specific subjects. The proposed method is evaluated using two publicly available datasets, the EEG recognition accuracy for Dataset A is 98.47%, and for Dataset B, it is 95.2%. In comparison to current publicly reported research results (utilizing Power Spectral Density (PSD) + Support Vector Machine (SVM) methods) for Dataset A, the proposed method improves MI recognition accuracy by 25.37%. For Dataset B, compared to current publicly reported results (FBCNet method), the proposed method improves MI recognition accuracy by 26.4%. The experimental results underscore the method&#39;s broad applicability, scalability, and substantial value for promotion and application.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_97-A_Stepwise_Discriminant_Analysis_and_FBCSP_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Modal Fine-Grained Interaction Fusion in Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150596</link>
        <id>10.14569/IJACSA.2024.0150596</id>
        <doi>10.14569/IJACSA.2024.0150596</doi>
        <lastModDate>2024-05-30T17:06:15.0730000+00:00</lastModDate>
        
        <creator>Zhanbin Che</creator>
        
        <creator>GuangBo Cui</creator>
        
        <subject>Fake news detection; attention mechanism; multimodal feature fusion; local similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The popularity of social media has significantly increased the speed and scope of news dissemination, making the emergence and spread of fake news easier. Current fake news detection methods often ignore the correlation between text and images, leading to insufficient modal interaction and fusion. To address these issues, a cross-modal fine-grained interaction and fusion model for fake news detection is proposed. Specifically, this study addresses the correlation problem between text and image modalities by designing an interaction similarity domain. It extracts features of text word weight distribution using an attention mechanism network, guides the features of different regions of the image, and calculates the local similarity between the two. This approach analyzes positive and negative correlations between modalities at a fine-grained level, thereby strengthening the intermodal connection. Additionally, to tackle the problem of insufficient fusion of semantic feature vectors between text and images, this paper designs a fusion network that employs improved encoding and decoding using a Transformer for inter-modal information fusion, achieving the final multimodal feature representation. Experimental results show that our proposed method achieves excellent performance on WeiboA and Twitter, with accuracies of 88.2% and 89%, respectively, outperforming the benchmark model in several evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_96-Cross_Modal_Fine_Grained_Interaction_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EfficientSkinCaSV2B3: An Efficient Framework Towards Improving Skin Classification and Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150595</link>
        <id>10.14569/IJACSA.2024.0150595</id>
        <doi>10.14569/IJACSA.2024.0150595</doi>
        <lastModDate>2024-05-30T17:06:15.0570000+00:00</lastModDate>
        
        <creator>Quy Lu Thanh</creator>
        
        <creator>Triet Minh Nguyen</creator>
        
        <subject>Skin cancer; Convolutional Neural Network (CNN); transfer learning; fine tuning; classification; segmentation; EfficientNetV2B3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Ozone layer depletion has gained attention as a serious environmental issue because of its effects on human health, especially skin cancer. Ultraviolet (UV) radiation is known to be a major risk factor for skin cancer: it can damage the DNA in skin cells, leading to mutations that may eventually result in cancerous growth. Basal cell carcinoma, squamous cell carcinoma, and melanoma are the three primary forms of skin cancer linked to UV exposure. UV exposure also triggers associated conditions including nevus, seborrheic keratosis, actinic keratosis, dermatofibroma, and vascular lesions. Many medical and computational studies have been published to address these disorders, especially studies applying transfer learning and fine-tuning, an aspect of deep learning, to the classification of skin images. In this research, the EfficientSkinCaSV2B3 framework is proposed and applied to classify and segment a skin cancer dataset collected and validated by The International Skin Imaging Collaboration (ISIC). In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) is used in skin cancer classification to visually explain images, aiding in understanding model decisions and highlighting important areas. Based on color and texture, k-means clustering was used to segment healthy from unhealthy regions. The study reached an accuracy of 84.91% in classifying skin cancer into nine classes. In other experiments, the customized EfficientNetV2B3 model achieved 94.00% in classifying malignant versus benign lesions. Moreover, in classifying six classes (i.e., benign skin diseases) and three classes (i.e., malignant skin diseases), the model earned high accuracies of 89.56% and 96.74%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_95-EfficientSkinCaSV2B3_An_Efficient_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced U-Net Architecture for Lung Segmentation on Computed Tomography and X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150594</link>
        <id>10.14569/IJACSA.2024.0150594</id>
        <doi>10.14569/IJACSA.2024.0150594</doi>
        <lastModDate>2024-05-30T17:06:15.0270000+00:00</lastModDate>
        
        <creator>Gulnara Saimassay</creator>
        
        <creator>Mels Begenov</creator>
        
        <creator>Ualikhan Sadyk</creator>
        
        <creator>Rashid Baimukashev</creator>
        
        <creator>Askhat Maratov</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <subject>Lung disease; deep learning; U-Net; computed tomography; segmentation; diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the expanding field of medical imaging, precise segmentation of anatomical structures is critical for accurate diagnosis and therapeutic interventions. This research paper introduces an innovative approach, building upon the established U-Net architecture, to enhance lung segmentation techniques applied to Computed Tomography (CT) images. Traditional methods of lung segmentation in CT scans often confront challenges such as heterogeneous tissue densities, variability in human anatomy, and pathological alterations, necessitating an approach that embodies greater robustness and precision. Our study presents a modified U-Net model, characterized by an integration of advanced convolutional layers and innovative skip connections, enlarging the receptive field and facilitating the retention of high-frequency details essential for capturing the lung&#39;s intricate structures. The enhanced U-Net architecture demonstrates substantial improvements in dealing with the subtleties of lung parenchyma, effectively distinguishing between subtle nuances of tissues and pathologies. Rigorous quantitative evaluations showcase a significant increase in the Dice coefficient and a decrease in the Hausdorff distance, indicating a more refined segmentation output compared to predecessor models. Additionally, the proposed model manifests exceptional versatility and computational efficiency, making it conducive to real-time clinical applications. This research underlines the transformative potential of employing advanced deep learning architectures for biomedical imaging, paving the way for early intervention, accurate diagnosis, and personalized treatment paradigms in pulmonary disorders. The findings have profound implications, propelling forward the nexus of artificial intelligence and healthcare towards unprecedented horizons.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_94-Enhanced_U_Net_Architecture_for_Lung_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Deep Learning Models for Traffic Sign Recognition in Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150593</link>
        <id>10.14569/IJACSA.2024.0150593</id>
        <doi>10.14569/IJACSA.2024.0150593</doi>
        <lastModDate>2024-05-30T17:06:14.9930000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Zhanar Bidakhmet</creator>
        
        <creator>Marina Vorogushina</creator>
        
        <creator>Zhuldyz Tashenova</creator>
        
        <creator>Bella Tussupova</creator>
        
        <creator>Elmira Nurlybaeva</creator>
        
        <creator>Dastan Kambarov</creator>
        
        <subject>Traffic sign recognition; machine learning; deep learning; computer vision; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research paper investigates the development of deep learning models for traffic sign recognition in autonomous vehicles. Leveraging convolutional neural networks (CNNs), the study explores various architectural configurations and evaluation methodologies to assess the efficacy of CNNs in accurately identifying and classifying traffic signs. Through a systematic evaluation process utilizing metrics such as accuracy, precision, recall, and F-score, the research demonstrates the robustness and generalization capability of the developed models across diverse environmental conditions. Furthermore, the utilization of visualization techniques, including the Matplotlib library, enhances the interpretability of model training dynamics and optimization progress. The findings highlight the significance of CNN architecture in facilitating hierarchical feature extraction and spatial dependency learning, thereby enabling reliable and efficient traffic sign recognition. The successful recognition of traffic signs under varying lighting conditions underscores the resilience of the developed models to environmental perturbations. Overall, this research contributes to advancing the capabilities of autonomous vehicle systems and lays the groundwork for the implementation of intelligent traffic sign recognition systems aimed at enhancing road safety and navigational efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_93-Development_of_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contrastive Learning and Multi-Choice Negative Sampling Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150592</link>
        <id>10.14569/IJACSA.2024.0150592</id>
        <doi>10.14569/IJACSA.2024.0150592</doi>
        <lastModDate>2024-05-30T17:06:14.9800000+00:00</lastModDate>
        
        <creator>Yun Xue</creator>
        
        <creator>Xiaodong Cai</creator>
        
        <creator>Sheng Fang</creator>
        
        <creator>Li Zhou</creator>
        
        <subject>Recommendation algorithms; comparative learning; negative sampling; pruning strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Most existing recommendation models that directly model user interests on user-item interaction data usually ignore the natural noise present in the interaction data, leading to bias in the model&#39;s learning of user preferences during data propagation and aggregation. In addition, the currently adopted negative sampling strategy does not consider the relationship between the prediction scores of positive samples and the degree of difficulty of negative samples, and is unable to adaptively select a suitable negative sample for each positive sample, leading to a decrease in recommendation performance. To solve the above problems, this paper proposes a Contrastive Learning and Multi-choice Negative Sampling Recommendation model. Firstly, an improved topology-aware pruning strategy is used to process the user-item bipartite graph, using the topology information of the graph to remove noise and improve the accuracy of model prediction. In addition, a new multivariate selective negative sampling module is designed, which ensures that each positive sample selects a negative sample of appropriate hardness through two sampling principles, improving the representation capability of the model&#39;s embedding space and, in turn, its recommendation accuracy. Experimental results on the Urban-Book and Yelp2018 datasets show that the proposed algorithm significantly improves all metrics compared to state-of-the-art models, demonstrating the effectiveness and sophistication of the algorithm in different scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_92-Contrastive_Learning_and_Multi_Choice_Negative_Sampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Personality Recognition in Videos using Dynamic Networks and Rank Loss</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150591</link>
        <id>10.14569/IJACSA.2024.0150591</id>
        <doi>10.14569/IJACSA.2024.0150591</doi>
        <lastModDate>2024-05-30T17:06:14.9630000+00:00</lastModDate>
        
        <creator>Nethravathi Periyapatna Sathyanarayana</creator>
        
        <creator>Karuna Pandith</creator>
        
        <creator>Manjula Sanjay Koti</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Automatic personality recognition; facial movements; individual-specific representation; personality factors; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Current automatic personality recognition technologies face several difficulties, two of which are discussed in this article: the use of very brief video segments or individual frames, rather than long-term behavior, to infer personality factors; and the absence of techniques to capture individuals&#39; facial movements for personality recognition. To address these concerns, this work first offers a unique Rank Loss for self-supervised learning of facial movements that exploits the natural temporal evolution of facial movements instead of personality labels. Our method begins by training a basic U-net style network to predict general facial movements from a collection of unlabeled face recordings. The trained model is then frozen, and a series of intermediary filters is added to the architecture. Self-supervised learning is then resumed, but only with videos of the target individual. As a result, the weights of the learnt filters are individual-specific, making them a useful tool for modeling individual facial dynamics. The weights of the learnt filters are then concatenated into an individual-specific representation used to predict personality factors without the assistance of other components of the network. The proposed strategy is tested on the ChaLearn personality dataset. We find that the tasks performed by the individual in the video matter: merging or combining tasks achieves the highest precision. Moreover, multi-scale characteristics are more discriminative than single-scale dynamics, and the method achieves impressive results in predicting personality factor scores from videos.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_91-Automatic_Personality_Recognition_in_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Ascaris Lumbricoides in Microscopic Images using Convolutional Neural Networks (CNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150590</link>
        <id>10.14569/IJACSA.2024.0150590</id>
        <doi>10.14569/IJACSA.2024.0150590</doi>
        <lastModDate>2024-05-30T17:06:14.9470000+00:00</lastModDate>
        
        <creator>Giovanni Gelber Martinez Pastor</creator>
        
        <creator>Cesar Roberto Ancco Ruelas</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Victor Luis V&#225;squez Huerta</creator>
        
        <subject>Ascaris lumbricoides; Convolutional Neural Networks; OpenCV; microscopic images; moment-based detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Parasites are disease-causing agents both in Peru and worldwide. In many contexts, diagnosis is done manually by observing microscopic images, where it&#39;s necessary to identify parasite eggs. However, this process is notably slow, and sometimes image clarity may be insufficient, making rapid and accurate identification challenging. This can be due to various factors, such as image quality or the presence of noise. This paper focuses on a Convolutional Neural Network (CNN) model; through this approach, we carried out the training, testing, and validation stages of our CNN model to detect and identify Ascaris lumbricoides parasite eggs. The results show that the proposed CNN model, combined with image preprocessing, yielded highly favorable results in parasite egg identification. Additionally, very satisfactory values were achieved in model testing and validation, indicating its effectiveness and precision in diagnosing parasite presence. This research represents a significant advancement in the field of parasitological diagnosis, offering an efficient and accurate solution for parasite detection through microscopic image analysis. It is hoped that these results contribute to improving diagnosis and treatment methods for parasitic diseases.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_90-Automatic_Detection_of_Ascaris_Lumbricoides.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating an Ensemble Classifier Based on Multi-Objective Genetic Algorithm for Machine Learning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150589</link>
        <id>10.14569/IJACSA.2024.0150589</id>
        <doi>10.14569/IJACSA.2024.0150589</doi>
        <lastModDate>2024-05-30T17:06:14.9330000+00:00</lastModDate>
        
        <creator>Zhiyuan LIU</creator>
        
        <subject>Machine learning; genetic algorithm; ensemble classification; classification error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Ensemble learning in machine learning applications is crucial because it leverages the collective wisdom of multiple models to enhance predictive performance and generalization. Ensemble learning is a method to provide a better approximation of an optimal classifier. A number of basic classifiers are used in ensemble learning. In order to improve performance, it is important for the basic classifiers to possess adequate efficacy and exhibit distinct classification errors. Additionally, an appropriate technique should be employed to amalgamate the outcomes of these classifiers. Numerous methods for ensemble classification have been introduced, including voting, bagging, and boosting methods. In this particular study, an ensemble classifier that relies on the weighted mean of the basic classifiers&#39; outputs was proposed. To estimate the combination weights, a multi-objective genetic algorithm, considering factors such as classification error, diversity, sparsity, and density criteria, was utilized. Through implementations on UCI datasets, the proposed approach demonstrates a significant enhancement in classification accuracy compared to other conventional ensemble classifiers. In summary, the obtained results showed that genetic-based ensemble classifiers provide advantages such as enhanced capability to handle complex datasets, improved robustness and generalization, and flexible adaptability. These advantages make them a valuable tool in various domains, contributing to more accurate and reliable predictions. Future studies should test and validate this method on more and larger datasets to determine its actual performance.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_89-Investigating_an_Ensemble_Classifier_Based_on_Multi_Objective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Optimization of Oilfield Development Planning Based on Shuffled Frog Leaping Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150588</link>
        <id>10.14569/IJACSA.2024.0150588</id>
        <doi>10.14569/IJACSA.2024.0150588</doi>
        <lastModDate>2024-05-30T17:06:14.9170000+00:00</lastModDate>
        
        <creator>Jun Wei</creator>
        
        <subject>Shuffled frog leaping algorithm; oilfield development; multi-objective; optimization; improve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Oilfield development planning is a complex task that involves multiple optimization objectives and constraints. Therefore, this study proposes an improved shuffled frog leaping algorithm for multi-objective optimization tasks. In multi-objective problems, the fitness value of the algorithm is not adaptive to the memetic evolution, resulting in local search failures. The shuffled frog leaping algorithm is improved through the non-dominated sorting genetic algorithm-II, memetic evolution, and traversal methods, and the effectiveness of the algorithm is then verified. The results showed that when the population was 30 and the grouping was 5, the proposed algorithm had the fastest search speed and the best optimization effect. The improved shuffled frog leaping algorithm had advantages in both construction period and cost compared to the original shuffled frog leaping algorithm, with a construction period difference of 19 days and a cost difference of $13871. In comparative experiments with other algorithms, the average optimal solution and running time of the proposed algorithm were 0.324 and 7.2 seconds, respectively, allowing it to find the optimal solution quickly. The proposed algorithm can effectively handle the complex objectives and constraints in oilfield development planning problems.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_88-Multi_objective_Optimization_of_Oilfield_Development_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach to Classify Brain Tumors from Magnetic Resonance Imaging Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150587</link>
        <id>10.14569/IJACSA.2024.0150587</id>
        <doi>10.14569/IJACSA.2024.0150587</doi>
        <lastModDate>2024-05-30T17:06:14.9000000+00:00</lastModDate>
        
        <creator>Asma Ahmed A. Mohammed</creator>
        
        <subject>Deep learning; brain tumor; MRI images; Convolutional Neural Networks (CNN); Xception; VGG-16; ResNet50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Brain tumors are one of the primary causes of mortality all over the globe, and their proper diagnosis and classification into many different types poses one of the most complicated tasks in contemporary medicine. Both benign and malignant tumors affect the lives of patients, as they may lead to mortality or, at the least, to many related complications and illnesses. Typically, MRI (Magnetic Resonance Imaging) is used as a diagnostic technique, where experts manually analyze the images to detect tumors. Advanced technologies such as deep learning, however, can aid the diagnosis and classification procedures in a much more time-efficient and precise manner. MRI images are an effective input for deep learning techniques such as CNNs to accurately detect brain tumors. In this study, VGG-16, ResNet50, and Xception were trained on a Kaggle dataset consisting of brain tumor MRI images. The performance of the models was evaluated, and it was found that brain tumors can be efficiently detected from MRI images with high accuracy and precision using VGG-16, ResNet50, and Xception. The highest performing model was the proposed Xception model, with perfect scores.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_87-Deep_Learning_Approach_to_Classify_Brain_Tumors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Establishment of Economic Analysis Model Based on Artificial Intelligence Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150586</link>
        <id>10.14569/IJACSA.2024.0150586</id>
        <doi>10.14569/IJACSA.2024.0150586</doi>
        <lastModDate>2024-05-30T17:06:14.8830000+00:00</lastModDate>
        
        <creator>Jiqing Shi</creator>
        
        <subject>Artificial intelligence; lasso regression; BP neural network; economic analysis model; major global economies; multi-hidden layer variable analysis; economic trends</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the continuous evolution of artificial intelligence technology, its integration into economic analysis models is becoming increasingly prevalent. This paper employs the Lasso Back Propagation neural network method to conduct financial analysis and prediction for major global economies, focusing on total Gross Domestic Product, combined Gross Domestic Product growth rate, and Consumer Price Index. The real Gross Domestic Product of the top 30 countries in the global ranking is meticulously analyzed and categorized into various economic types. This categorization, coupled with the utilization of neural network multi-hidden layer variable analysis, facilitates the analysis and prediction of national economic trends. The findings reveal that overall economic growth among the top 30 countries is sluggish, albeit showing a growth trajectory. However, the driving force for economic growth remains notably inadequate. Moreover, employing a single time series model effectively predicts Gross Domestic Product and Consumer Price Index growth rates, alongside other macroeconomic indicators. Notably, the absence of autocorrelation in the fitting residual series underscores the applicability of the time series method for combined forecasting, affirming the robustness of the predictive framework.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_86-Establishment_of_Economic_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying the Behavior of a Modified Deep Learning Model for Disease Detection Through X-ray Chest Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150585</link>
        <id>10.14569/IJACSA.2024.0150585</id>
        <doi>10.14569/IJACSA.2024.0150585</doi>
        <lastModDate>2024-05-30T17:06:14.8700000+00:00</lastModDate>
        
        <creator>Elma Zanaj</creator>
        
        <creator>Lorena Balliu</creator>
        
        <creator>Gledis Basha</creator>
        
        <creator>Elunada Gjata</creator>
        
        <creator>Elinda Kajo Me&#231;e</creator>
        
        <subject>Machine learning; big data; X-ray chest image; CNN; VGG19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In modern medical diagnostics, deep learning models are commonly used for illness diagnosis, especially on chest X-ray images. Deep learning approaches, by combining sophisticated algorithms with large datasets, provide unmatched promise for early identification, prognosis, and treatment evaluation across a range of illnesses. Studying these models is crucial for developing improved ones and advancing the precision, effectiveness, and scalability of disease identification. This paper presents the study of a CNN+VGG19 deep learning architecture (a subset of machine learning), both before and after its modification. The same dataset is used with the existing and modified models to compare metrics under the same conditions. The models are compared using metrics such as loss, accuracy, precision, sensitivity, and AUC. These metrics display lower values in the updated model than in the original one. The numbers demonstrate the occurrence of overfitting, most likely the result of the model&#39;s increased complexity for a small dataset; the noise in the images included in the dataset may also be a cause. As a result, it can be stated that regularization techniques should be applied; otherwise, additional feature extraction and classification layers should not be added to the model, to prevent overfitting.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_85-Studying_the_Behavior_of_a_Modified_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Convolutional Recurrent Neural Network for Cyberbullying Detection on Textual Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150584</link>
        <id>10.14569/IJACSA.2024.0150584</id>
        <doi>10.14569/IJACSA.2024.0150584</doi>
        <lastModDate>2024-05-30T17:06:14.8530000+00:00</lastModDate>
        
        <creator>Altynzer Baiganova</creator>
        
        <creator>Saniya Toxanova</creator>
        
        <creator>Meruert Yerekesheva</creator>
        
        <creator>Nurshat Nauryzova</creator>
        
        <creator>Zhanar Zhumagalieva</creator>
        
        <creator>Aigerim Tulendi</creator>
        
        <subject>Cyberbullying detection; social media; CNN; RNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the burgeoning use of social media platforms, online harassment and cyberbullying have become significant concerns. Traditional mechanisms often falter, necessitating advanced methodologies for efficient detection. This study presents an innovative approach to identifying cyberbullying incidents on social media sites, employing a hybrid neural network architecture that amalgamates Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN). By harnessing the sequential processing capabilities of LSTM to analyze the temporal progression of textual data, and the spatial discernment of CNN to pinpoint bullying keywords and patterns, the model demonstrates substantial improvement in detection accuracy compared to extant methods. A diverse dataset, encompassing multiple social media platforms and linguistic styles, was utilized to train and test the model, ensuring robustness. Results evince that the LSTM-CNN amalgamation can adeptly handle varied sentence structures and contextual nuances, outstripping traditional machine learning classifiers in both specificity and sensitivity. This research underscores the potential of hybrid neural networks in addressing contemporary digital challenges, urging further exploration into blended architectures for nuanced problem-solving in cyber realms.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_84-Hybrid_Convolutional_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Road Lane-Lines Detection using Mask-RCNN Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150583</link>
        <id>10.14569/IJACSA.2024.0150583</id>
        <doi>10.14569/IJACSA.2024.0150583</doi>
        <lastModDate>2024-05-30T17:06:14.8370000+00:00</lastModDate>
        
        <creator>Gulbakhram Beissenova</creator>
        
        <creator>Dinara Ussipbekova</creator>
        
        <creator>Firuza Sultanova</creator>
        
        <creator>Karasheva Nurzhamal</creator>
        
        <creator>Gulmira Baenova</creator>
        
        <creator>Marzhan Suimenova</creator>
        
        <creator>Kamar Rzayeva</creator>
        
        <creator>Zhanar Azhibekova</creator>
        
        <creator>Aizhan Ydyrys</creator>
        
        <subject>Lane lines; detection; classification; segmentation; Mask-RCNN; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This paper presents a novel approach to real-time road lane-line detection using the Mask R-CNN framework, with the aim of enhancing the safety and efficiency of autonomous driving systems. Through extensive experimentation and analysis, the proposed system demonstrates robust performance in accurately detecting and segmenting lane boundaries under diverse driving conditions. Leveraging deep learning techniques, the system exhibits a high level of accuracy in handling complex scenarios, including variations in lighting conditions and occlusions. Real-time processing capabilities enable instantaneous feedback, contributing to improved driving safety and efficiency. However, challenges such as model generalizability, interpretability, computational efficiency, and resilience to adverse weather conditions remain to be addressed. Future research directions include optimizing the system&#39;s performance across different geographic regions and road types and enhancing its adaptability to adverse weather conditions. The findings presented in this paper contribute to the ongoing efforts to advance autonomous driving technology, with implications for improving road safety and transportation efficiency in real-world settings. The proposed system holds promise for practical deployment in autonomous vehicles, paving the way for safer and more efficient transportation systems in the future.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_83-Real_Time_Road_Lane_Lines_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming Pixels: Crafting a 3D Integer Discrete Cosine Transform for Advanced Image Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150582</link>
        <id>10.14569/IJACSA.2024.0150582</id>
        <doi>10.14569/IJACSA.2024.0150582</doi>
        <lastModDate>2024-05-30T17:06:14.8230000+00:00</lastModDate>
        
        <creator>R. Rajprabu</creator>
        
        <creator>T. Prathiba</creator>
        
        <creator>Deepa Priya V.</creator>
        
        <creator>Arthy Rajkumar</creator>
        
        <creator>Rajkannan. C</creator>
        
        <creator>P. Ramalakshmi</creator>
        
        <subject>Discrete cosine transform; 3D integer DCT; Image compression; JPEG algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>We propose an innovative technique for image compression based on the 3-dimensional Integer Discrete Cosine Transform (3D-Integer DCT), which serves as an alternative to the existing DCT-based compression technique. If an image is encoded as cubes [row &#215; column &#215; temporal length] instead of blocks [row &#215; column], higher compression can be achieved. Here, the number of blocks is represented as the temporal length. To construct cubes, we use highly correlated blocks, and the correlation level is determined using the mean absolute difference (MAD). The suggested 3D-Integer DCT-based coder can achieve a higher compression ratio while maintaining the required image quality. It also needs fewer coefficients to encode an image than the usual Joint Photographic Experts Group (JPEG) coder. Adopting integer DCT further reduces the computational complexity of the proposed algorithm, given the abundance of methods available in the literature for determining equivalent integers for DCT. We choose an optimum integer group that minimizes mean squared error (MSE) and improves coding efficiency for computing the 3D-Integer DCT. We also conducted a detailed analysis to examine the impact of implementing integer DCT in image compression. In terms of peak signal-to-noise ratio (PSNR), bits per pixel, and structural similarity index (SSIM), the proposed algorithm outperforms standard real-valued DCT-based compression algorithms such as JPEG.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_82-Transforming_Pixels_Crafting_a_3D_Integer_Discrete_Cosine_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Simulation Study of Stiffness Variation of Stewart Platform under Different Loads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150581</link>
        <id>10.14569/IJACSA.2024.0150581</id>
        <doi>10.14569/IJACSA.2024.0150581</doi>
        <lastModDate>2024-05-30T17:06:14.8070000+00:00</lastModDate>
        
        <creator>Zhiqiang Zhao</creator>
        
        <creator>Yuetao Liu</creator>
        
        <creator>Changsong Yu</creator>
        
        <creator>Peicen Jiang</creator>
        
        <subject>Stewart; different loads; stiffness variation; computer simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The ability of the Stewart platform to resist deformation is an important target when designing and optimizing the platform, and studying how its stiffness varies under different loads helps us understand the dynamic characteristics of the platform, guide its design and control, and improve its performance and stability. The purpose of this paper is to study the variation of the stiffness of the Stewart platform under different loads, the factors that influence it, and the effect of stiffness changes on the performance and stability of the platform. Firstly, using MATLAB, the kinematic and mechanical model of the Stewart platform was established, the analytical expression of the stiffness matrix was derived, and the stiffness characteristics and stiffness singularities of the platform were analyzed. Then, using ADAMS, a dynamic simulation model of the Stewart platform was established, and the stiffness of the platform was simulated and analyzed. The results show that the stiffness of the Stewart platform exhibits singularities or sudden changes at certain special positions or loads, which should be avoided as much as possible so as not to affect the performance and stability of the platform. There is a certain correlation between dynamic and static stiffness, but it is also affected by structural nonlinearity, damping, coupling, and other factors.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_81-Computer_Simulation_Study_of_Stiffness_Variation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Multi-Strategy Predator Algorithm for Passenger Flow Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150580</link>
        <id>10.14569/IJACSA.2024.0150580</id>
        <doi>10.14569/IJACSA.2024.0150580</doi>
        <lastModDate>2024-05-30T17:06:14.7900000+00:00</lastModDate>
        
        <creator>Peng Guo</creator>
        
        <subject>Passenger Flow Prediction; regularized extreme learning machine; Collaborative Filtering Algorithm; Marine Predator Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Faced with a rapidly recovering tourism market, accurate prediction of passenger flow can help local authorities regulate resources more effectively. Therefore, based on big data technology, a multi-strategy predator algorithm is proposed, which uses the Marine Predator Algorithm combined with regularized extreme learning machines and Collaborative Filtering Algorithms to achieve accurate passenger flow prediction. The experimental findings indicate that the performance of the algorithm is excellent, with extremely strong convergence: only 30 iterations are needed to reach the optimal solution. The fitting degree of this algorithm is 97.8%, which is 6.27%-19.31% higher than that of long short-term memory networks, random forest algorithms, and support vector machine regression. In actual passenger flow prediction, the error rate of this algorithm is only 2.29%, which is 3.47%-6.50% lower than that of the three comparison algorithms. This study provides a new and efficient method for passenger flow prediction. Its excellent predictive performance can not only help relevant departments predict and manage passenger traffic more accurately, but also serve as a reference for traffic prediction in other fields. Overall, this study has important reference value and practical significance for the research and practice of passenger flow prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_80-Big_Data_Multi_Strategy_Predator_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Analysis of Network Security Situational Awareness Model for Asset Information Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150579</link>
        <id>10.14569/IJACSA.2024.0150579</id>
        <doi>10.14569/IJACSA.2024.0150579</doi>
        <lastModDate>2024-05-30T17:06:14.7770000+00:00</lastModDate>
        
        <creator>Yuemei Ren</creator>
        
        <creator>Xianju Feng</creator>
        
        <subject>Asset information protection; cyber security; situational awareness; knowledge graph; attack scenarios</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The popularity of the Internet has driven rapid network development. However, network security threats are becoming more complex and hidden, and traditional network security alarm systems suffer from low accuracy and low efficiency when dealing with huge volumes of redundant data. Therefore, this research comprehensively considers network security problems, proposes a network security situational awareness model for asset information protection combined with a knowledge graph, establishes an asset-based network security knowledge graph, utilizes attribute graphs to complete network attack scenario discovery and network situational understanding, and verifies the effectiveness and superiority of the model. The experimental results show that the proposed model detects an average of 9706 out of 10000 attacks. For 100 high-risk attacks, the number of detections exceeds 98. The average accuracy, recall, and false alarm rates of the proposed model are 99.48%, 99.04%, and 0.86%, respectively. In addition, when the model is running, its maximum memory usage is only 22.67%, and it completes attack detection in 258.4 s, both of which are much lower than the comparison algorithms. Finally, the proposed model effectively reflects the impact of attack events on the posture of asset nodes. The proposed cybersecurity situational awareness model is of great theoretical and practical significance for improving organizational cybersecurity, innovating cybersecurity solutions, and maintaining the security of asset information in the digital era.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_79-Application_Analysis_of_Network_Security_Situational_Awareness_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Security Evaluation Based on Improved Genetic Algorithm and Weighted Error Backpropagation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150578</link>
        <id>10.14569/IJACSA.2024.0150578</id>
        <doi>10.14569/IJACSA.2024.0150578</doi>
        <lastModDate>2024-05-30T17:06:14.7600000+00:00</lastModDate>
        
        <creator>Jinlong Pang</creator>
        
        <creator>Chongwei Liu</creator>
        
        <subject>Genetic algorithm; error backpropagation algorithm; cybersecurity evaluation; weighting; network vulnerability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the rapid advancement of network technology and the popularization of applications, network security problems are becoming increasingly prominent: all kinds of network attacks and security threats are on the rise, and the demand for network security evaluation is ever more urgent. To address the long running time and low accuracy of traditional network security evaluation models, this study proposes a network security evaluation model based on an improved genetic algorithm and a weighted error backpropagation (BP) algorithm. The study first combines the weighted error BP algorithm with the improved genetic algorithm for data analysis, and then integrates the two to construct a network security evaluation model. The results show that, in detecting network security vulnerabilities, the evaluation model achieves a vulnerability detection accuracy of 93.28% and a risk detection rate of 91.88%. The function training error of the model is 8.93%, while its decoding accuracy and stability are 90.43% and 92.07%, respectively, which are better than the comparison method. This indicates that the method has high accuracy and robustness in network security evaluation and can provide network administrators and users with a more scientific and reliable basis for decision-making.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_78-Network_Security_Evaluation_Based_on_Improved_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exhaustive Insights Towards Social-Media Driven Disaster Management Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150577</link>
        <id>10.14569/IJACSA.2024.0150577</id>
        <doi>10.14569/IJACSA.2024.0150577</doi>
        <lastModDate>2024-05-30T17:06:14.7430000+00:00</lastModDate>
        
        <creator>Nethravathy Krishnappa</creator>
        
        <creator>D Saraswathi</creator>
        
        <creator>Chandrasekar Chelliah</creator>
        
        <subject>Artificial intelligence; disaster management; information propagation; social media; community</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This manuscript discusses disaster management approaches that use social media. The rising popularity of social media has been observed to contribute significantly to information propagation and community participation in the event of a disaster. Unlike conventional disaster management policies, the inclusion of social media-based approaches is quite novel and promising. However, information about the effectiveness of such schemes remains unclear. Hence, this manuscript bridges this information gap by carrying out an exhaustive and systematic review of the existing methodologies frequently adopted for disaster management using social media, viz. early warning methods, information dissemination methods, crisis mapping methods, and predictive approaches, among which Artificial Intelligence was noted to be the dominant scheme. The findings of this review provide a clear visualization of current research trends and the critical learning outcomes associated with the identified research gaps, with an illustrated discussion of the reviewed articles, offering clear and informative findings for future researchers. The review also answers the formulated research questions to give potential insight into existing systems. It finds that existing approaches have both benefits and limitations, the latter associated with complex learning approaches, high infrastructure cost, model complexity, security threats, and high resource dependencies.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_77-Exhaustive_Insights_Towards_Social_Media_Driven_Disaster_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Smart Contract Security Through Multi-Agent Deep Reinforcement Learning Fuzzing: A Survey of Approaches and Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150576</link>
        <id>10.14569/IJACSA.2024.0150576</id>
        <doi>10.14569/IJACSA.2024.0150576</doi>
        <lastModDate>2024-05-30T17:06:14.7300000+00:00</lastModDate>
        
        <creator>Muhammad Farman Andrijasa</creator>
        
        <creator>Saiful Adli Ismail</creator>
        
        <creator>Norulhusna Ahmad</creator>
        
        <creator>Othman Mohd Yusop</creator>
        
        <subject>Smart contract security; multi-agent systems; deep reinforcement learning; fuzzing techniques; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Multi-Agent Systems (MAS) and Deep Reinforcement Learning (DRL) have emerged as powerful tools for enhancing security measures, particularly in the context of smart contract security in blockchain technology. This literature review explores the integration of Multi-Agent DRL fuzzing techniques to bolster the security of smart contracts. The study delves into the formalization of emergence in MAS, the comprehensive survey of multi-agent reinforcement learning, and progress on the state explosion problem in model checking. By addressing challenges such as state space explosion, real-time detection, and adaptability across blockchain platforms, researchers aim to advance the field of smart contract security. The review emphasizes the significance of Multi-Agent DRL fuzzing in improving security testing processes and calls for future research and collaboration to enhance the resilience and integrity of decentralized applications. Through advancements in algorithmic efficiency, the incorporation of Explainable AI, cross-domain applications of MAS, and cooperation with blockchain development teams, the future of smart contract security holds promise for robust and secure blockchain ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_76-Enhancing_Smart_Contract_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generation of Topical Educational Content by Estimation of the Number of Patents in the Digital Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150575</link>
        <id>10.14569/IJACSA.2024.0150575</id>
        <doi>10.14569/IJACSA.2024.0150575</doi>
        <lastModDate>2024-05-30T17:06:14.7130000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <subject>Time series; patent analysis; parameter plane; predictive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Analysis of trends in the development of emerging technologies based on patents is a well-recognized approach. An increase in the number of patents precedes the extensive spread of technological solutions and their incorporation in production and professional activities, making it possible to perform predictive analysis. The field of digital technology, which is changing most dynamically among production areas, was chosen as the object of study. The study develops an approach to the analysis of emerging technologies that are related to a given domain. Methods for obtaining quantitative parameters have been developed based on time series representing the number of patents per year. The concept of a parameter plane has been introduced. It includes the parameters of stable growth/decline and annual quantity of patents. A special feature of the approach is the calculation of parameters for the last observed segment of the stable dynamic behavior of the time series based on the developed algorithm. The work takes the Digital Marketing domain as an example and presents analysis of 296 keywords related to this concept. Based on time series constructed from the patent database for 2000-2021, the most promising technologies were identified. The application of the results for the generation of topical educational content in the Digital Marketing field is considered.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_75-Generation_of_Topical_Educational_Content_by_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Age Estimation from Handwriting: A Deep Learning Approach with Attention Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150574</link>
        <id>10.14569/IJACSA.2024.0150574</id>
        <doi>10.14569/IJACSA.2024.0150574</doi>
        <lastModDate>2024-05-30T17:06:14.6970000+00:00</lastModDate>
        
        <creator>Li Zhao</creator>
        
        <creator>Xiaoping Wu</creator>
        
        <creator>Xiaoming Chen</creator>
        
        <subject>Age estimation; coordinate attention mechanism; handwriting analysis; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Currently, age estimation is a hot research topic in the field of forensic biology. Age estimation methods based on facial or brain features are easily affected by external factors. In contrast, handwriting analysis is a more reliable method for age estimation. This paper aims to improve the accuracy and efficiency of age prediction using handwriting analysis by proposing a novel method that integrates a coordinate attention mechanism in a deep residual network (CA-ResNet). This method can more accurately capture important features in the input handwritten images while reducing the number of model parameters, thereby improving the accuracy (Acc) and efficiency of the model for age estimation. The proposed method is evaluated on standard handwriting datasets and the created dataset, and it is compared with the current state-of-the-art methods. The results show that the method consistently outperforms others, achieving an accuracy of 79.60% on the IAM handwriting dataset, with a 6.31% improvement over other methods.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_74-Enhancing_Age_Estimation_from_Handwriting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Communication Design Based on Sparsity-Enhanced Image Processing Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150573</link>
        <id>10.14569/IJACSA.2024.0150573</id>
        <doi>10.14569/IJACSA.2024.0150573</doi>
        <lastModDate>2024-05-30T17:06:14.6830000+00:00</lastModDate>
        
        <creator>Zheng Wang</creator>
        
        <creator>Dongsik Hong</creator>
        
        <subject>Deep neural networks; convolutional neural networks; sparsity; dictionary learning; image reinforcement processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the field of visual communication, image clarity and accuracy are key to conveying information effectively. A new sparsity-enhanced image processing model is introduced to address the limitations of traditional image processing models in terms of image resolution and fidelity. This model combines a deep neural network learning framework with a sparse convolutional neural network enhancement module for image reinforcement processing, thereby achieving more accurate image reconstruction. Dictionary learning is used to train the model so that the sparse representations of low-resolution and high-resolution images share the same dictionary coefficients. Compared with the existing techniques Enhanced Super-Resolution Generative Adversarial Network, Wide Activation for Efficient and Accurate Image Super-Resolution, and Bicubic Interpolation, the new model achieves an average peak signal-to-noise ratio of 32.9334 dB, significantly outperforming the comparison group with improvements of 1.9252 dB, 6.6509 dB, and 9.7297 dB, respectively. In addition, the new model demonstrates advantages in structural similarity and learned perceptual image patch similarity, implying that it not only enhances the objective quality of the image but also improves its subjective visual effect. The improved resolution and fidelity of the output image confirm the model&#39;s superior performance in processing details and textures. This advancement not only improves the accuracy and efficiency of image processing techniques, but also provides strong technical support for the creation and dissemination of high-quality visual content, and is particularly suitable for application scenarios requiring high-precision visual displays, such as satellite image analysis, remote sensing detection, and medical imaging.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_73-Visual_Communication_Design_Based_on_Sparsity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Migration Learning and Multi-View Training for Low-Resource Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150572</link>
        <id>10.14569/IJACSA.2024.0150572</id>
        <doi>10.14569/IJACSA.2024.0150572</doi>
        <lastModDate>2024-05-30T17:06:14.6670000+00:00</lastModDate>
        
        <creator>Jing Yan</creator>
        
        <creator>Tao Lin</creator>
        
        <creator>Shuai Zhao</creator>
        
        <subject>Low-resource machine translation; migration learning; multi-view training; continual pretraining; multidimensional linguistic feature integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This paper discusses the main challenges of low-resource machine translation and the strategies for addressing them, and proposes a novel translation method combining migration learning and multi-view training. In a low-resource environment, neural machine translation models are prone to problems such as insufficient generalization performance, inaccurate translation of long sentences, difficulty in processing out-of-vocabulary words, and inaccurate translation of domain-specific terms, owing to their heavy reliance on massively parallel corpora. Migration learning borrows the general translation knowledge of high-resource languages, using pre-trained models such as BERT and XLM-R, and gradually adapts to the translation tasks of low-resource languages during fine-tuning. Multi-view training, in turn, emphasizes the integration of source- and target-language features at multiple levels, such as the lexical, syntactic, and semantic levels, in order to enhance the model&#39;s comprehension and translation ability under limited data conditions. In the experiments, the study designed an experimental scheme covering pre-trained model selection, multi-view feature construction, and the fusion of migration learning with multi-view training, and compared performance against a randomly initialized Transformer model, a pre-training-only model, and a traditional statistical machine translation model. The experiments demonstrate that the model with the multi-view training strategy significantly outperforms the baseline models on evaluation metrics such as BLEU, TER, and ChrF, and exhibits stronger robustness and accuracy in processing complex language structures and domain-specific terminology.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_72-Migration_Learning_and_Multi_View_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Trajectory Planning for Robotic Arm Based on Improved Dynamic Multi-Population Particle Swarm Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150571</link>
        <id>10.14569/IJACSA.2024.0150571</id>
        <doi>10.14569/IJACSA.2024.0150571</doi>
        <lastModDate>2024-05-30T17:06:14.6500000+00:00</lastModDate>
        
        <creator>Rong Wu</creator>
        
        <creator>Yong Yang</creator>
        
        <creator>Xiaotong Yao</creator>
        
        <creator>Nannan Lu</creator>
        
        <subject>Particle swarm optimization; Gaussian mutation; mixed particles; levy flight; greedy strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>To address the basic particle swarm optimization algorithm&#39;s tendency to fall into local optima and its low execution efficiency when planning trajectories for 6-degree-of-freedom robots under kinematic constraints, a trajectory planning method based on an improved dynamic multi-population particle swarm optimization algorithm is proposed. According to the average fitness value, the population is divided into three subpopulations: the subpopulation with fitness values higher than the average is classified as the inferior group, the subpopulation with fitness values lower than the average is classified as the superior group, and an equal number of particles selected from both forms a mixed group. The inferior group is updated using Gaussian mutation and mixed particles, the superior group is updated using Levy flight and greedy strategies, and the mixed group is updated using improved learning factors and inertia weights. Simulation results demonstrate that the improved dynamic multi-population particle swarm optimization algorithm enhances work efficiency and convergence speed, validating the feasibility and effectiveness of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_71-Optimal_Trajectory_Planning_for_Robotic_Arm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Security Optimization at Cloud Storage using Confidentiality-based Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150570</link>
        <id>10.14569/IJACSA.2024.0150570</id>
        <doi>10.14569/IJACSA.2024.0150570</doi>
        <lastModDate>2024-05-30T17:06:14.6330000+00:00</lastModDate>
        
        <creator>Dorababu Sudarsa</creator>
        
        <creator>A. Nagaraja Rao</creator>
        
        <creator>A. P. Sivakumar</creator>
        
        <subject>Cloud storage; data privacy; data security; data classification; degree of confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Data is the most valuable asset of any organization, stored in individual systems, on servers, or on a cloud platform. The cloud, one of the most widely adopted storage systems today, represents the state of the art in storage technology. The major concerns accompanying this technological growth are the privacy and security of data: hosting data on this platform must preserve both. Hence, there is an urgent need for services that provide data security to stakeholders. Although existing approaches provide data security at different levels, they incur high costs in terms of processing time. This research aims at providing a novel classification-based security algorithm (CBSA) composed of confidentiality-based classification and encryption at low cost. The confidentiality-based classification divides the data into three levels according to its degree of confidentiality, and the confidentiality-based encryption dynamically applies a suitable and proportionate security mechanism to each level of data. Thus, the data security process becomes optimal and cost-effective. The proposed algorithm outperforms existing algorithms in terms of processing time and entropy, improving both by 10%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_70-Data_Security_Optimization_at_Cloud_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Making Systems for Pneumonia Detection using Deep Learning on X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150569</link>
        <id>10.14569/IJACSA.2024.0150569</id>
        <doi>10.14569/IJACSA.2024.0150569</doi>
        <lastModDate>2024-05-30T17:06:14.6200000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Elmira Nurlybaeva</creator>
        
        <creator>Madina Suleimenova</creator>
        
        <creator>Dinargul Mukhammejanova</creator>
        
        <creator>Marina Vorogushina</creator>
        
        <creator>Zhanar Bidakhmet</creator>
        
        <creator>Mukhit Maikotov</creator>
        
        <subject>CNN; machine learning; pneumonia; X-ray; image analysis; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research paper investigates the application of Convolutional Neural Networks (CNNs) for the classification of pneumonia using chest X-ray images. Through rigorous experimentation and data analysis, the study demonstrates the model&#39;s impressive learning capabilities, achieving a notable accuracy of 96% in pneumonia classification. The consistent decrease in training and validation losses across 25 learning epochs underscores the model&#39;s adaptability and proficiency. However, the research also highlights the challenge of dataset imbalance and the need for improved model interpretability. These findings emphasize the potential of deep learning models in enhancing pneumonia diagnosis but also underscore the importance of addressing existing limitations. The study calls for future research to explore techniques for addressing dataset imbalances, enhance model interpretability, and extend the scope to address nuanced diagnostic challenges within the field of pneumonia classification. Ultimately, this research contributes to the advancement of medical image analysis and the potential for deep learning models to aid in early and accurate pneumonia diagnosis, thereby improving patient care and clinical outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_69-Decision_Making_Systems_for_Pneumonia_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Raise of Security Concern in IoT Devices: Measuring IoT Security Through Penetration Testing Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150568</link>
        <id>10.14569/IJACSA.2024.0150568</id>
        <doi>10.14569/IJACSA.2024.0150568</doi>
        <lastModDate>2024-05-30T17:06:14.6030000+00:00</lastModDate>
        
        <creator>Abdul Ghafar Jaafar</creator>
        
        <creator>Saiful Adli Ismail</creator>
        
        <creator>Abdul Habir</creator>
        
        <creator>Khairul Akram Zainol Ariffin</creator>
        
        <creator>Othman Mohd Yusop</creator>
        
        <subject>IoT Security; IoT penetration testing; security assessment; automated penetration testing; penetration testing framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Despite the widespread adoption of IoT devices across different industries to enhance human activities, there is a pressing need to address the vulnerabilities associated with these devices, as they can potentially give rise to a plethora of cyber threats. Cyberattacks targeting IoT devices are predominantly attributed to inadequate patching and security updates. Furthermore, the current landscape of IoT penetration testing primarily focuses on specific devices and sectors while leaving certain fields behind, such as household devices. This study delves into recent penetration testing of IoT devices. Further, it discusses and critically analyzes the significance of, and issues in, conducting IoT penetration tests. The findings of this study reveal a substantial demand for automated IoT penetration testing serving diverse industries, because such testing has the capacity to diminish the consequences of cyber-attacks across the numerous industries that utilize IoT devices for various purposes. This study is intended to be a ready reference for the research community in constructing effective and innovative solutions in IoT penetration testing covering various fields.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_68-A_Raise_of_Security_Concern_in_IoT_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method Resource Sharing in On-Premises Environment Based on Cross-Origin Resource Sharing and its Application for Safety-First Constructions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150567</link>
        <id>10.14569/IJACSA.2024.0150567</id>
        <doi>10.14569/IJACSA.2024.0150567</doi>
        <lastModDate>2024-05-30T17:06:14.5870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kodai Norikoshi</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Cross-Origin Resource Sharing: CORS; CSRF (Cross-Site Request Forgery); SameSite; withCredentials flag; Access-Control-Allow-Credentials header; safety first constructions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The method of resource sharing in an on-premises environment based on Cross-Origin Resource Sharing (CORS) is proposed for security reasons. However, using CORS entails several risks: Cross-Site Request Forgery (CSRF), difficulties in secure configuration, handling credentials, controlling complex requests, and restrictions associated with using wildcards. To mitigate these risks, the following countermeasures are proposed: (1) use CSRF tokens and the “SameSite” attribute; (2) minimize preflight requests by allowing only specific origins; (3) use the “withCredentials” flag or set the “Access-Control-Allow-Credentials” header on the server; (4) handle custom headers by adding the required headers to the CORS settings; and (5) specify a specific origin in the “Access-Control-Allow-Origin” header instead of using wildcards. Additionally, applying CORS for safety-first constructions, which helps raise awareness of dangerous actions in construction fields, is also being explored.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_67-Method_Resource_Sharing_in_On_Premises_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Hate Speech using Advanced Learning Model-based Multi-Layered Approach (MLA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150566</link>
        <id>10.14569/IJACSA.2024.0150566</id>
        <doi>10.14569/IJACSA.2024.0150566</doi>
        <lastModDate>2024-05-30T17:06:14.5730000+00:00</lastModDate>
        
        <creator>Puspendu Biswas</creator>
        
        <creator>Donavalli Haritha</creator>
        
        <subject>Multi-Layered Approach (MLA); Deep Learning (DL); DistilBERT; GloVe; Bi-grams (2-grams)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Hate speech is becoming an increasingly serious problem for users of social media. Some users on online social networking sites (OSNS) cause considerable harm by uploading hate speech. OSNS applications have developed many models to prevent hate speech in text and videos; however, such content still reaches OSNS users. Sophisticated techniques that automatically identify and detect hate speech material are needed to solve this problem. This paper proposes an advanced learning model-based Multi-Layered Approach (MLA) for hate speech recognition. The proposed model analyses textual data and finds hate speech patterns using multiple deep learning (DL) architectures. The algorithm can generalize well across settings and languages because it was trained on text datasets that include various types of hate speech. The final step is an integrated model called Text Convolutional Neural Networks (TCNN), which combines hate text pattern detection with T-Convolutionals. Essential components of the model include the pre-trained DistilBERT model; integrated pre-processing techniques such as text cleaning, lemmatization, and stemming; and feature extraction techniques such as GloVe and bi-grams (2-grams) to capture contextual information and nuances within language. The model integrates continuous learning techniques to handle the dynamic nature of hate speech, enabling it to update its comprehension of new language patterns and evolving forms of objectionable content. The evaluation of the proposed model involves benchmarking against existing hate speech detection methods, demonstrating superior precision, recall, and overall accuracy. Finally, the proposed MLA offers a practical and adaptable solution for recognizing hate speech, contributing to the creation of safer online environments.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_66-Recognition_of_Hate_Speech_using_Advanced_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Accident Detection using SVM and Learning: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150565</link>
        <id>10.14569/IJACSA.2024.0150565</id>
        <doi>10.14569/IJACSA.2024.0150565</doi>
        <lastModDate>2024-05-30T17:06:14.5570000+00:00</lastModDate>
        
        <creator>Fatima Qanouni</creator>
        
        <creator>Hakim El Massari</creator>
        
        <creator>Noreddine Gherabi</creator>
        
        <creator>Maria El Badaoui</creator>
        
        <subject>Road accidents; road traffic management; machine learning; SVM; deep learning; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Every day, a great many children and young adults (aged five to 29) lose their lives in road accidents. The most frequent causes are driver behavior, poor-quality street infrastructure, and the delayed response of emergency services, especially in rural areas. There is a need for automatic road accident detection systems that can assist in recognizing road accidents and determining their locations. This work reviews existing machine learning approaches for road accident detection. We propose three distinct classifiers: a Convolutional Neural Network (CNN), a Recurrent Convolutional Neural Network (R-CNN), and a Support Vector Machine (SVM), using a CCTV footage dataset. These models are evaluated based on the ROC curve, F1 measure, precision, accuracy, and recall, and the achieved accuracies were 92%, 82%, and 93%, respectively. In addition, we suggest using an ensemble learning strategy to maximize the strengths of the individual classifiers, raising detection accuracy to 94%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_65-Road_Accident_Detection_using_SVM_and_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method of Budding Detection with YOLO-based Approach for Determination of the Best Time to Plucking Tealeaves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150564</link>
        <id>10.14569/IJACSA.2024.0150564</id>
        <doi>10.14569/IJACSA.2024.0150564</doi>
        <lastModDate>2024-05-30T17:06:14.5400000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoho Kawaguchi</creator>
        
        <subject>Budding; YOLO; plucking; tealeaves; quality; quantity; hyperparameter; learning performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>A method of budding detection with YOLO (You Only Look Once) for determining the best time to pluck tealeaves is proposed. In order to obtain the best quality and quantity of tealeaves, it is very important to determine the best plucking date. The number of days elapsed after the budding of the tealeaves is most likely the most effective indicator for determining the best plucking day; therefore, a method for detecting budding is of growing importance. In this paper, YOLO-based object detection is proposed, for which the hyperparameters of YOLO have to be optimized. Also, a comparative study is conducted on the resolution of the cameras used for acquiring tealeaf images, from the point of view of the learning performance of YOLO. Through experiments, it is found that the proposed budding detection method is effective in terms of learning performance for obtaining the best quality and quantity of harvested tealeaves.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_64-Method_of_Budding_Detection_with_YOLO_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Controlling System for Smart Farming-based Internet of Things (IoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150563</link>
        <id>10.14569/IJACSA.2024.0150563</id>
        <doi>10.14569/IJACSA.2024.0150563</doi>
        <lastModDate>2024-05-30T17:06:14.5270000+00:00</lastModDate>
        
        <creator>Dodi Yudo Setyawan</creator>
        
        <creator>Warsito</creator>
        
        <creator>Roniyus Marjunus</creator>
        
        <creator>Sumaryo</creator>
        
        <subject>IoT; agriculture; automation; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The integration of IoT systems in agriculture has become a very important need amid high population and increasingly limited farmland, which demands that researchers be more innovative in addressing these issues. IoT systems are used for automatic irrigation, fertilization, and cooling based on sensor values transmitted over internet networks. A poor internet connection leads to the failure of automation and of sustained operation, which can be very dangerous for plants. This paper presents a new IoT-based control system divided into two parts, an automation system and an IoT system, which can maintain sustained operation even when the connection is lost, ensuring that plants in the planting area are always controlled. In addition, the sensors used underwent calibration processes to determine the increase in precision of the sensor values produced. The research results show that the system maintains sustained operation: mobile apps are available for control when the system is online, but if it goes offline and is unable to reconnect, the Arduino Mega fully manages control, using soil moisture sensor values to trigger irrigation whenever the values fall below a certain threshold. This demonstrates the sustainability of the system, allowing continuous control and reducing the risk of plant death in the planting area. The calibration results show an increase in precision for air temperature and humidity (DHT11 sensor) of 7.14 and 6.15, respectively. Additionally, the precision improvement for the soil pH sensor is 1.81, while for the soil moisture sensor and the water flow sensor it is 0.13 and 0.008, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_63-A_Novel_Controlling_System_for_Smart_Farming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology Driven for Mapping a Relational Database to a Knowledge-based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150562</link>
        <id>10.14569/IJACSA.2024.0150562</id>
        <doi>10.14569/IJACSA.2024.0150562</doi>
        <lastModDate>2024-05-30T17:06:14.5100000+00:00</lastModDate>
        
        <creator>Abdelrahman Osman Elfaki</creator>
        
        <creator>Yousef H. Alfaifi</creator>
        
        <subject>Mapping knowledge; ontology-based; relational database; online analytical processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The mapping of a relational database system to a knowledge-based system is a key stage in developing an online analytical processing (OLAP) system. OLAP is a cornerstone in discovering hidden knowledge in any business. Hence, the existence of an OLAP system is one of the modern success factors in a business environment. Mapping has proven benefits for knowledge-based systems in terms of enabling the discovery of hidden relationships among objects and the inference of new information. However, there remains room for improvement in respect of the quality of the mapping output. Therefore, in this paper, a rule-based method for mapping a relational database to a knowledge-based system is introduced. First, the proposed mapping process, which involves converting the tables and relationships of a relational database into facts and rules for a knowledge-based system, is illustrated through the use of a detailed case study. Then the correctness of the proposed method is proved by testing the tautology results against equivalent SQL queries. In addition, the completeness of the proposed method is proved by demonstrating that the used predicates are sufficient to allow a complete modeling of the required system. Furthermore, the experimental results show that the performance of the knowledge-based system that was developed using the proposed method is much better than that of an equivalent relational database.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_62-Ontology_Driven_for_Mapping_a_Relational_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tourist Attraction Recommendation Model Based on RFPAP-NNPAP Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150561</link>
        <id>10.14569/IJACSA.2024.0150561</id>
        <doi>10.14569/IJACSA.2024.0150561</doi>
        <lastModDate>2024-05-30T17:06:14.4930000+00:00</lastModDate>
        
        <creator>Jun Li</creator>
        
        <subject>Tourist attractions; recommendation model; RF; ANN; FP-Growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Driven by globalization and digitization, the tourism industry is facing new challenges and opportunities brought about by big data and artificial intelligence. The recommendation of tourist attractions, as an important part of the industry, has a direct influence on the tourist experience. However, with the diversification and personalization of tourism demand, traditional recommendation methods have shown shortcomings: weak processing ability for complex nonlinear data, which affects recommendation accuracy and personalization, and insufficient efficiency and stability when processing large-scale data. Faced with this challenge, this study proposed a hybrid tourist attraction recommendation model combining random forest, artificial neural network, and frequent pattern growth. This model utilized the powerful classification and regression capabilities of random forests, as well as the complex nonlinear mapping ability of artificial neural networks, to predict tourist attraction preferences. On this basis, the frequent pattern growth algorithm was introduced to mine attractions associated with tourist preferences, thereby achieving accurate recommendation of tourist attractions. In experimental verification, the proposed model demonstrated superior performance. It not only surpassed traditional tourist attraction recommendation methods in accuracy and personalization, but also proved efficient and stable when processing large-scale data. After about 16 iterations, the MAPE value of the hybrid model decreased to 0.44%; after about 39 iterations, it decreased to 0.40%. The average accuracy, recall, and F-value of the proposed model are 92.26%, 82.11%, and 84.43%, respectively, which are superior to those of the comparison algorithms, and its error correction accuracy fluctuates around 90%. This study provides a new solution to the problem of personalized recommendation of tourist attractions, offering theoretical guidance for tourism applications of random forests and artificial neural networks, improving the tourist experience, and promoting the development of the tourism industry.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_61-Tourist_Attraction_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved MobileNet Model Integrated Spatial and Channel Attention Mechanisms for Tea Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150560</link>
        <id>10.14569/IJACSA.2024.0150560</id>
        <doi>10.14569/IJACSA.2024.0150560</doi>
        <lastModDate>2024-05-30T17:06:14.4930000+00:00</lastModDate>
        
        <creator>Li Zhang</creator>
        
        <creator>Jiacheng Sun</creator>
        
        <creator>Minghui Yang</creator>
        
        <subject>Tea disease; MobileNetV3; attention mechanism; convolutional neural network component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>To address the challenges of large model parameters, high computational cost, and low accuracy in traditional tea disease identification models, an improved MobileNet model integrating spatial and channel attention mechanisms (MobileNet-SCA) was proposed for tea disease identification. Firstly, the tea disease identification dataset was augmented through random clipping, rotation transformation, and perspective transformation to simulate diverse image acquisition perspectives and mitigate overfitting. Secondly, based on the convolutional neural network (CNN) framework, the Channel Attention (CA) and Spatial Attention (SA) mechanisms were introduced to carry out global average pooling and group normalization operations on the input feature maps, respectively, and to adjust the channel weights using the learned parameters. The h-swish activation function was then utilized for scaling, and the two attention mechanisms were spliced and mixed to enrich the channel and spatial information. In addition, the MobileNetV3 network&#39;s structure was optimized by adjusting the number of input channels, the size of the convolution kernel, and the number of channels in the residual blocks. The experimental results showed that the identification accuracy of MobileNet-SCA for tea diseases was 5.39% higher than that of the original model. This method balances identification accuracy and identification time well, and it meets the requirements for accurate and rapid identification of tea diseases.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_60-An_Improved_MobileNet_Model_Integrated_Spatial_and_Channel_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application with Augmented Reality Applying the MESOVA Methodology to Improve the Learning of Primary School Students in an Educational Center</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150559</link>
        <id>10.14569/IJACSA.2024.0150559</id>
        <doi>10.14569/IJACSA.2024.0150559</doi>
        <lastModDate>2024-05-30T17:06:14.4800000+00:00</lastModDate>
        
        <creator>Anthony Wilder Arias Vilchez</creator>
        
        <creator>Tomas Silvestre Marcelo Lloclla Soto</creator>
        
        <creator>Giancarlo Sanchez Atuncar</creator>
        
        <subject>Augmented reality; mobile application; MESOVA methodology; Kolmogorov-Smirnov; Wilcoxon; education; Vuforia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The application was developed using the MESOVA methodology, employing technologies such as Unity, Vuforia, and Visual Studio with the purpose of enhancing the educational experience for elementary school students. This innovative tool integrates augmented reality with the pedagogical principles of MESOVA, standing out notably from other research. Focusing on topics such as scientific knowledge and design and construction skills, the application not only provides information but also includes games that encourage interaction with the universe and planets, offering a participative and meaningful educational experience. The pretest results revealed an average scientific knowledge of 9.75%, significantly increasing to 15.55% in the posttest. Similarly, design and construction skills, initially evaluated at 8.24%, experienced a remarkable increase to 14.99% in the posttest. The adaptability of the application to the specific needs of elementary school students creates a stimulating and personalized learning environment. The combination of MESOVA and augmented reality enriches the educational experience, promoting understanding, collaboration, and critical thinking among students. In conclusion, the initiative goes beyond providing basic information; it becomes a transformative educational resource that equips students with fundamental cognitive and social skills as they explore the universe through augmented reality. Ultimately, it highlights the potential of technology and pedagogy to create a dynamic and enriching educational environment for elementary school students.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_59-Mobile_Application_with_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Residual Network Designed for Detecting Cracks in Buildings of Historical Significance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150558</link>
        <id>10.14569/IJACSA.2024.0150558</id>
        <doi>10.14569/IJACSA.2024.0150558</doi>
        <lastModDate>2024-05-30T17:06:14.4630000+00:00</lastModDate>
        
        <creator>Zlikha Makhanova</creator>
        
        <creator>Gulbakhram Beissenova</creator>
        
        <creator>Almira Madiyarova</creator>
        
        <creator>Marzhan Chazhabayeva</creator>
        
        <creator>Gulsara Mambetaliyeva</creator>
        
        <creator>Marzhan Suimenova</creator>
        
        <creator>Guldana Shaimerdenova</creator>
        
        <creator>Elmira Mussirepova</creator>
        
        <creator>Aidos Baiburin</creator>
        
        <subject>Crack detection; historical buildings; deep learning; convolutional neural networks; heritage conservation; image analysis; machine learning; non-destructive testing; preservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research paper investigates the application of deep learning techniques, specifically convolutional neural networks (CNNs), for crack detection in historical buildings. The study addresses the pressing need for non-invasive and efficient methods of assessing structural integrity in heritage conservation. Leveraging a dataset comprising images of historical building surfaces, the proposed CNN model demonstrates high accuracy and precision in identifying surface cracks. Through the integration of convolutional and fully connected layers, the model effectively distinguishes between positive and negative instances of cracks, facilitating automated detection processes. Visual representations of crack detection cases in ancient buildings validate the model&#39;s efficacy in real-world applications, offering tangible evidence of its capability to detect structural anomalies. While the study highlights the potential of deep learning algorithms in heritage preservation efforts, it also acknowledges challenges such as model generalization, computational complexity, and interpretability. Future research endeavors should focus on addressing these challenges and exploring new avenues for innovation to enhance the reliability and accessibility of crack detection technologies in cultural heritage conservation. Ultimately, this research contributes to the development of sustainable solutions for safeguarding architectural heritage, ensuring its preservation for future generations.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_58-A_Deep_Residual_Network_Designed_for_Detecting_Cracks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Offensive Language Detection on Social Media using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150557</link>
        <id>10.14569/IJACSA.2024.0150557</id>
        <doi>10.14569/IJACSA.2024.0150557</doi>
        <lastModDate>2024-05-30T17:06:14.4470000+00:00</lastModDate>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Serik Muktarovich Kenesbayev</creator>
        
        <creator>Kamalbek Berkimbayev</creator>
        
        <creator>Gumyrbek Toikenov</creator>
        
        <creator>Elmira Abdrashova</creator>
        
        <creator>Oichagul Alchinbayeva</creator>
        
        <creator>Aizhan Ydyrys</creator>
        
        <subject>Machine learning; deep learning; hate speech; CNN; RNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This research paper addresses the critical issue of cyberbullying detection within the realm of social networks, employing a comprehensive examination of various machine learning and deep learning techniques. The study investigates the performance of these methodologies through rigorous evaluation using standard metrics, including Accuracy, Precision, Recall, F-measure, and AUC-ROC. The findings highlight the notable efficacy of deep learning models, particularly the Bidirectional Long Short-Term Memory (BiLSTM) architecture, in consistently outperforming alternative methods across diverse classification tasks. Confusion matrices and graphical representations further elucidate model performance, emphasizing the BiLSTM-based model&#39;s remarkable capacity to discern and classify cyberbullying instances accurately. These results underscore the significance of advanced neural network structures in capturing the complexities of online hate speech and offensive content. This research contributes valuable insights toward fostering safer and more inclusive online communities by facilitating early identification and mitigation of cyberbullying. Future investigations may explore hybrid approaches, additional feature integration, or real-time detection systems to further refine and advance the state-of-the-art in addressing this critical societal concern.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_57-Offensive_Language_Detection_on_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion-based Autism Spectrum Disorder Detection by Leveraging Transfer Learning and Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150556</link>
        <id>10.14569/IJACSA.2024.0150556</id>
        <doi>10.14569/IJACSA.2024.0150556</doi>
        <lastModDate>2024-05-30T17:06:14.4330000+00:00</lastModDate>
        
        <creator>I. Srilalita Sarwani</creator>
        
        <creator>D. Lalitha Bhaskari</creator>
        
        <creator>Sangeeta Bhamidipati</creator>
        
        <subject>Autism Spectrum Disorder; transfer learning; image features; VGG; ResNet; Inception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Autism Spectrum Disorder (ASD) presents as a neurodevelopmental condition impacting social interaction, communication, and behavior, underscoring the imperative of early detection and intervention to enhance outcomes. This paper introduces a novel approach to ASD detection utilizing facial features extracted from the Autistic Children Facial Dataset. Leveraging transfer learning models, including VGG16, ResNet, and Inception, high-level features are extracted from facial images. Additionally, fine-grained details are captured through the utilization of handcrafted image features such as Histogram of Oriented Gradients, Local Binary Patterns, Scale-Invariant Feature Transform, and PHASH descriptors. Integration of these features yields three distinct feature vectors, combining image features with VGG16, ResNet, and Inception features. Subsequently, multiple machine learning classifiers, including Random Forest, KNN, Decision Tree, SVM, and Logistic Regression, are employed for ASD classification. Through rigorous experimentation and evaluation, the performance of these classifiers across three datasets is compared to identify the optimal approach for ASD detection. By evaluating multiple classifiers and feature combinations, this work offers insights into the most effective approaches for ASD detection.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_56-Emotion_based_Autism_Spectrum_Disorder_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Optimal Service Composition in the Internet of Things via Cloud-Fog Integration and Improved Artificial Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150555</link>
        <id>10.14569/IJACSA.2024.0150555</id>
        <doi>10.14569/IJACSA.2024.0150555</doi>
        <lastModDate>2024-05-30T17:06:14.4170000+00:00</lastModDate>
        
        <creator>Guixia Xiao</creator>
        
        <subject>Internet of Things (IoT); fog computing; service composition; Artificial Bee Colony (ABC) Algorithm; Dynamic Reduction Mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the quest to delve deeper into the burgeoning realm of the service-oriented Internet of Things (IoT), the pressing challenge of smoothly integrating functionalities within smart objects emerges prominently. IoT devices, notorious for their resource constraints, often lean heavily on cloud infrastructures to function effectively. However, the emergence of fog computing offers a promising alternative, allowing the processing of IoT applications closer to the sensors and thereby slashing delays. This research develops a novel method for IoT service composition that leverages both fog and cloud computing, utilizing an enhanced version of the Artificial Bee Colony (ABC) algorithm to refine its convergence rate. The approach introduces a Dynamic Reduction (DR) mechanism designed to perturb dimensions innovatively. Traditionally, the ABC algorithm generates new solutions that closely mimic their parent solutions, which unfortunately slows down convergence. By initiating the process with significant dimension disparities among solutions and gradually reducing these disparities over successive iterations, this method strikes an optimal balance between exploration and exploitation through dynamic adjustment of dimension perturbation counts. Comparative analyses against contemporary methodologies reveal significant improvements: a 17% decrease in average energy consumption, a 10% boost in availability, an 8% enhancement in reliability, and a remarkable 23% reduction in average cost. Combining the strengths of fog and cloud computing with the refined ABC algorithm through the Dynamic Reduction mechanism significantly advances the efficiency and effectiveness of IoT service compositions.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_55-Toward_Optimal_Service_Composition_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Stock Market Prices with Histogram-based Gradient Boosting Regressor: A Case Study on Alphabet Inc</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150553</link>
        <id>10.14569/IJACSA.2024.0150553</id>
        <doi>10.14569/IJACSA.2024.0150553</doi>
        <lastModDate>2024-05-30T17:06:14.4000000+00:00</lastModDate>
        
        <creator>Shigen Li</creator>
        
        <subject>Alphabet Inc.; market movement; stock; financial markets; Histogram-based gradient boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>One of the most important and common activities mentioned while discussing the financial markets is stock market trading. An investor is constantly searching for methods to estimate future trends to minimize losses and maximize profits due to the unavoidable volatility in stock prices. It is undeniable, nonetheless, that there is currently no mechanism for accurately estimating future market patterns despite numerous approaches being investigated to enhance model performance as much as feasible. Findings indicate notable improvements in accuracy compared to traditional Histogram-based gradient-boosting models. Experiments conducted on historical stock price datasets verify the efficacy of the proposed method. The combined strength of HGBoost and optimization techniques, including Particle Swarm Optimization, Slime Mold Algorithm, and Grey Wolf Optimization, not only increases prediction accuracy but also fortifies the model&#39;s ability to adjust to changing market conditions. The results for HGBoost, PSO-HGBoost, SMA-HGBoost, and GWO-HGBoost were 0.964, 0.973, 0.981, and 0.988, in that order. Compared to HGBoost, the result of GWO-HGBoost shows how combining with the optimizer can enhance the output of the given model.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_53-Estimating_Stock_Market_Prices_with_Histogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Wireless Sensor Node Localization and Target Tracking Based on Improved Locust Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150554</link>
        <id>10.14569/IJACSA.2024.0150554</id>
        <doi>10.14569/IJACSA.2024.0150554</doi>
        <lastModDate>2024-05-30T17:06:14.4000000+00:00</lastModDate>
        
        <creator>Tan SONGHE</creator>
        
        <creator>Qin Qi</creator>
        
        <subject>Improved locust algorithm; wireless sensors; node localization; target tracking; target state; unknown node position</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>To improve the positioning accuracy of wireless sensor nodes and ensure the target tracking effect, a wireless sensor node positioning and target tracking method based on an improved locust algorithm is proposed. The DV-Hop algorithm is used to calculate the minimum hops and average hop distance between the unknown node and each anchor node to obtain the location of the unknown node, realize the rough positioning of wireless sensor nodes, and analyze the positioning error to determine the positioning accuracy objective function. The improved locust algorithm is then used to solve this objective function to obtain the sensor node positioning results with the minimum error. A target tracking model is established and, according to the target observation information obtained by all sensor nodes, the target state in the wireless sensor network model is tracked using the probability hypothesis density filtering algorithm. The test results show that the algorithm performs better: the spatial evaluation index results are all lower than 0.020, and the individual distribution in the solution set is better; the location of each unknown node in different node distribution states can be obtained; the positioning error on both surfaces and planes is less than 0.012; the maximum target tracking error is 0.142 m; and the method can track both single and multiple targets.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_54-A_Study_on_Wireless_Sensor_Node_Localization_and_Target_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive and Simulated Modeling of a Centralized Transport Robot Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150552</link>
        <id>10.14569/IJACSA.2024.0150552</id>
        <doi>10.14569/IJACSA.2024.0150552</doi>
        <lastModDate>2024-05-30T17:06:14.3830000+00:00</lastModDate>
        
        <creator>Murad Bashabsheh</creator>
        
        <subject>Artificial intelligence; centralized control system; transport robots; automatic system; agent-based modeling; AnyLogic; Arduino microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This work proposes a new simulation model for a centralized transport robot control system that was created with the AnyLogic environment and a special blend of agent-based and discrete-event approaches. The model aims to provide a comprehensive analysis of the centralized request distribution algorithm among robots, gauging the effectiveness of the transport system based on service arrival times. For in-depth testing, a transport robot model was developed using Arduino microcontrollers and NRF24L01 transceivers for communication. Item movement test sequences were created to be uniform in both full-scale and simulation testing. Good, though not perfect, agreement was found between the simulation and experimental results, underscoring the difficulty of obtaining high accuracy in real-time coordinate identification in the absence of sensors. This shortcoming notwithstanding, the novel simulation model provides an invaluable instrument for determining the viability and efficiency of transportation systems as well as analyzing decentralized control mechanisms prior to actual deployment. The novelty of this paper lies in building a thorough simulation model for a centralized transport robot control system using an AnyLogic environment and a unique blend of discrete-event and agent-based approaches. This comprehensive technique is a novel contribution to the discipline since it enables a thorough evaluation of a centralized request distribution system.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_52-Comprehensive_and_Simulated_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing SDN Anomaly Detection: A Hybrid Deep Learning Model with SCA-TSO Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150551</link>
        <id>10.14569/IJACSA.2024.0150551</id>
        <doi>10.14569/IJACSA.2024.0150551</doi>
        <lastModDate>2024-05-30T17:06:14.3530000+00:00</lastModDate>
        
        <creator>Ahmed Mohanad Jaber ALHILO</creator>
        
        <creator>Hakan Koyuncu</creator>
        
        <subject>SDN; Intrusion Detection System; deep learning; CNN; LSTM; SCA; TSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The paper explores the evolving landscape of network security in Software-Defined Networking (SDN), highlighting the challenges faced by security measures as networks transition to software-based control. SDN revolutionizes Internet technology by simplifying network management and boosting capabilities through the OpenFlow protocol, but it also brings forth security vulnerabilities. To address this, we present a hybrid Intrusion Detection System (IDS) tailored for SDN environments, leveraging a state-of-the-art dataset optimized for SDN security analysis along with machine learning and deep learning approaches. This comprehensive research incorporates data preprocessing, feature engineering, and advanced model development techniques to combat the intricacies of cyber threats in SDN settings. Our approach merges feature selection based on the sine cosine algorithm (SCA) and tuna swarm optimization (TSO) to optimize the fusion of Long Short-Term Memory networks (LSTM) and Convolutional Neural Networks (CNN). By capturing both spatial and temporal aspects of network traffic dynamics, our model excels at detecting and categorizing cyber threats, including zero-day attacks. Thorough evaluation includes analysis using confusion matrices, ROC curves, and classification reports to assess the model&#39;s ability to differentiate between attack types and normal network behavior. Our research indicates that network security in software-defined environments can be improved by implementing deep learning and machine learning strategies, paving the way for more reliable and effective network administration solutions.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_51-Enhancing_SDN_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Palliative Care: A Systematic Review of Effectiveness, Accessibility, and Patient Satisfaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150550</link>
        <id>10.14569/IJACSA.2024.0150550</id>
        <doi>10.14569/IJACSA.2024.0150550</doi>
        <lastModDate>2024-05-30T17:06:14.3370000+00:00</lastModDate>
        
        <creator>Rihab El Sabrouty</creator>
        
        <creator>Abdelmajid Elouadi</creator>
        
        <creator>Mai Abdou Salifou Karimoune</creator>
        
        <subject>Palliative care; eHealth; patient; artificial intelligence; well-being</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Remote palliative care has emerged as a viable option to address the complex needs of patients facing life-limiting illnesses, particularly in the context of evolving healthcare landscapes and technological advancements. This systematic review aims to comprehensively examine the effectiveness, accessibility, and patient satisfaction of remote palliative care interventions. Through a meticulous analysis of empirical studies, clinical trials, and qualitative research, this review synthesizes evidence about the impact of remote palliative care on clinical outcomes, patient access to services, and overall satisfaction levels. Our findings highlight the benefits of remote palliative care, including improved symptom management, enhanced patient autonomy, and greater convenience in accessing care, particularly for individuals in rural or underserved areas. Moreover, we identify key facilitators and barriers influencing the implementation and uptake of remote palliative care services, such as technological proficiency, infrastructure limitations, and concerns regarding the quality of interpersonal communication. By critically evaluating the existing literature, this review underscores the significance of remote palliative care as a patient-centred approach to delivering compassionate end-of-life care. Furthermore, it underscores the need for ongoing research efforts and policy initiatives to optimize the effectiveness and accessibility of remote palliative care services to ensure equitable and high-quality care for all patients facing serious illnesses.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_50-Remote_Palliative_Care_A_Systematic_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Flow Prediction at Intersections: Enhancing with a Hybrid LSTM-PSO Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150549</link>
        <id>10.14569/IJACSA.2024.0150549</id>
        <doi>10.14569/IJACSA.2024.0150549</doi>
        <lastModDate>2024-05-30T17:06:14.3230000+00:00</lastModDate>
        
        <creator>Chaimaa CHAOURA</creator>
        
        <creator>Hajar LAZAR</creator>
        
        <creator>Zahi JARIR</creator>
        
        <subject>Deep learning; intersection congestion; intelligent transport systems; traffic flow prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The growth in traffic volumes presents a real challenge for road safety, emergency response, and overall transport efficiency. Intelligent transportation systems play a fundamental role in addressing these challenges through accurate traffic prediction. In this study, we propose a hybrid model that combines the Long Short-Term Memory (LSTM) algorithm and Particle Swarm Optimization (PSO) to predict traffic flow more accurately at intersections. Our approach takes advantage of the strength of PSO, a robust optimization technique inspired by swarm intelligence, to optimize the hyperparameters of the LSTM algorithm. Through in-depth benchmarking, we evaluate the performance of our hybrid LSTM-PSO model against other existing models. By evaluating measures such as root mean square error and mean absolute error, we demonstrate the superior efficiency of the proposed hybrid model. Our results highlight the effectiveness of our approach in outperforming alternative models, offering a promising solution for intelligent transportation systems to accurately predict traffic flow at intersections and improve overall traffic management efficiency.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_49-Traffic_Flow_Prediction_at_Intersections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation of Scalability in EHRs using Healthcare 4.0 and Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150548</link>
        <id>10.14569/IJACSA.2024.0150548</id>
        <doi>10.14569/IJACSA.2024.0150548</doi>
        <lastModDate>2024-05-30T17:06:14.2900000+00:00</lastModDate>
        
        <creator>Ahmad Fayyaz Madni</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Muhammad Al-Naeem</creator>
        
        <subject>EHRs; secure cloud; Healthcare 4.0; Blockchain; scalability; cyber threats; medical information; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the past decade, cloud-based Electronic Health Records (EHRs) have become popular in empowering remote patient monitoring. The rise of Healthcare 4.0, which includes using system elements and cloud services to access health records remotely, has gained the attention of experts. Healthcare 4.0 requires the consistent collection, combination, transmission, exchange, and storage of medical information related to patients. Because patient information is private data, it can be challenging to keep it out of hackers&#39; reach. As a result, secure cloud storage, access, and exchange of patient medical information is critical in ensuring that the information is not exposed in any unauthorized manner. Security mechanisms that employ Blockchain technology have become popular in recent years since they can provide robust data sharing amongst a large number of users and provide storage protection with low computing costs. Researchers have now shifted their focus to using Blockchain to protect healthcare information administration. This work presents an architecture to investigate the scalability of Healthcare 4.0 systems that use Blockchain. The investigations are carried out under different test scenarios and are evaluated under numerous circumstances, including varying user and data volumes, while also considering the presence of cyber threats. The results demonstrate interesting findings related to the efficiency and effectiveness of deploying Healthcare 4.0 and Blockchain in EHRs.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_48-An_Investigation_of_Scalability_in_EHRs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generative AI-Powered Predictive Analytics Model: Leveraging Synthetic Datasets to Determine ERP Adoption Success Through Critical Success Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150547</link>
        <id>10.14569/IJACSA.2024.0150547</id>
        <doi>10.14569/IJACSA.2024.0150547</doi>
        <lastModDate>2024-05-30T17:06:14.2770000+00:00</lastModDate>
        
        <creator>Koh Chee Hong</creator>
        
        <creator>Abdul Samad Bin Shibghatullah</creator>
        
        <creator>Thong Chee Ling</creator>
        
        <creator>Samer Muthana Sarsam</creator>
        
        <subject>ERP adoption; predictive analytics; generative AI; synthetic data; GANs; VAEs; Pearson&#39;s correlation coefficient; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Data scarcity is a significant problem in Enterprise Resource Planning (ERP) adoption prediction, limiting the accuracy and reliability of traditional predictive models. This study addresses this issue by integrating Generative Artificial Intelligence (AI) technologies, specifically Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), to generate synthetic data that supplements sparse real-world data. A systematic literature review identified critical gaps in existing ERP adoption models, underscoring the need for innovative approaches. The generated synthetic data, validated through comprehensive statistical analyses including mean, variance, skewness, kurtosis, and the Kolmogorov-Smirnov test, demonstrated high accuracy and reliability, aligning closely with real-world data. A hybrid predictive model was developed, combining Generative AI with Pearson Correlation Coefficient (PCC) and Random Forest techniques. This model was rigorously tested and compared against traditional models such as SVM, Neural Networks, Linear Regression, and Decision Trees. The hybrid model achieved superior performance, with an accuracy of 90%, precision of 88%, recall of 89%, and Area Under the Receiver Operating Characteristic Curve (AUC-ROC) score of 0.91, significantly outperforming traditional models in predicting ERP adoption outcomes. The research also established continuous monitoring and adaptation mechanisms to ensure the model&#39;s long-term effectiveness. The findings provide practical insights for organizations, offering a robust tool for forecasting ERP adoption success and facilitating more informed decision-making and resource allocation. This study not only advances theoretical understanding by addressing data scarcity through synthetic data generation but also provides a practical framework for enhancing ERP adoption strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_47-Generative_AI_Powered_Predictive_Analytics_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Processing-based Performance Evaluation of KNN and SVM Classifiers for Lung Cancer Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150546</link>
        <id>10.14569/IJACSA.2024.0150546</id>
        <doi>10.14569/IJACSA.2024.0150546</doi>
        <lastModDate>2024-05-30T17:06:14.2600000+00:00</lastModDate>
        
        <creator>Kavitha B C</creator>
        
        <creator>Naveen K B</creator>
        
        <subject>K-Nearest Neighbors; lung cancer detection; machine learning; medical data; performance metrics; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Cure rates in advanced stages of lung cancer are remarkably low, which underscores the importance of early detection as a means to increase survival chances. A strong area of focus in lung cancer diagnosis research is the search for ways to identify the disease at its early stages. The methodology described here is proposed to facilitate early detection of lung cancer and proceeds in two phases. The study evaluates the effectiveness of three classifiers, K-Nearest Neighbors (KNN), Random Forest, and Support Vector Machine (SVM), in identifying lung cancer cases through assessment of relevant medical data. The evaluation measures the accuracy of these classifiers in discriminating between cancerous and non-cancerous instances within the dataset. To rate the adequacy of the classifiers in distinguishing classes, performance metrics such as accuracy, precision, recall, and F1-score are used. Furthermore, the research compares KNN, Random Forest, and SVM, explaining their specific advantages and disadvantages with respect to lung cancer detection. This investigation shows promising results, suggesting that machine learning techniques could help identify lung cancer as accurately and as early as possible, supporting more successful diagnostic procedures and better patient outcomes. The experimental findings show that SVM gives the best result at 95.06%, with KNN second at 86.89%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_46-Image_Processing_based_Performance_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Classification of MRI Images for Early Detection and Staging of Alzheimer&#39;s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150545</link>
        <id>10.14569/IJACSA.2024.0150545</id>
        <doi>10.14569/IJACSA.2024.0150545</doi>
        <lastModDate>2024-05-30T17:06:14.2430000+00:00</lastModDate>
        
        <creator>Parvatham Niranjan Kumar</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <subject>Alzheimer’s disease (AD); Convolution Neural Network (CNN); Deep Learning (DL); Transfer Learning (TL); imaging pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Alzheimer&#39;s disease (AD) poses a significant challenge to modern healthcare, as effective treatment remains elusive. Drugs may slow the progression of the disease, but there is currently no cure. Early AD identification is crucial for providing the required medications before brain damage occurs. In this research, we studied various deep learning techniques to address the challenge of early AD detection by utilizing structural MRI (sMRI) images as biomarkers. Deep learning techniques are pivotal in accurately analyzing vast amounts of MRI data to identify Alzheimer&#39;s and anticipate its progression. A balanced MRI image dataset of 12,936 images was used in this study to extract sufficient features for accurately distinguishing Alzheimer&#39;s disease stages; because the characteristics of its early stages are similar, more images were required than in previous studies. The GoogLeNet model was utilized in our investigation to derive features from each MRI scan image. These features were then input into a feed-forward neural network (FFNN) for AD stage prediction. The FFNN model, utilizing GoogLeNet features, underwent rigorous training over multiple epochs using a small batch size to ensure robust performance on unseen data, and achieved 98.37% accuracy, 98.39% sensitivity, 98.50% precision, and 99.45% specificity. Most remarkably, our results show that the model detected AD with an average accuracy of 99.01%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_45-Deep_Learning_based_Classification_of_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Quantum Orthogonal Frequency-Division Multiplexing Transmission Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150544</link>
        <id>10.14569/IJACSA.2024.0150544</id>
        <doi>10.14569/IJACSA.2024.0150544</doi>
        <lastModDate>2024-05-30T17:06:14.2130000+00:00</lastModDate>
        
        <creator>Mohammed R. Almasaoodi</creator>
        
        <creator>Abdulbasit M. A. Sabaawi</creator>
        
        <creator>Sara El Gaily</creator>
        
        <creator>S&#225;ndor Imre</creator>
        
        <subject>Discrete Fourier transform; quantum Fourier transform; orthogonal frequency-division multiplexing; quantum channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Recently, extensive research attention has been dedicated to making Orthogonal Frequency-Division Multiplexing (OFDM) waveforms compatible with modern communication systems. Encoding data as OFDM waveforms still faces significant problems, such as the peak-to-average power ratio (PAPR) and the cyclic prefix (CP), which are important factors affecting spectral efficiency. To meet the quality-of-service requirements imposed by communication system applications, this paper proposes to replace the classical encoding and decoding schemes, classical channel, discrete Fourier transform (DFT), and inverse discrete Fourier transform (IDFT) with their quantum counterparts. This new quantum OFDM transmission scheme allows a quantum OFDM symbol to be prepared without the need to incorporate a CP. To validate the accuracy of the suggested quantum OFDM transmission scheme, we compared it with the most widely recognised reference quantum transmission scheme. We have demonstrated that increasing the channel resistivity results in a higher probability of correctly measuring the quantum state in the quantum OFDM transmission scheme compared to the reference quantum transmission scheme. The results are verified with IBM&#39;s Qiskit.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_44-A_Novel_Quantum_Orthogonal_Frequency_Division.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Segmentation in Complex Backgrounds using an Improved Generative Adversarial Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150543</link>
        <id>10.14569/IJACSA.2024.0150543</id>
        <doi>10.14569/IJACSA.2024.0150543</doi>
        <lastModDate>2024-05-30T17:06:14.1830000+00:00</lastModDate>
        
        <creator>Mei Wang</creator>
        
        <creator>Yiru Zhang</creator>
        
        <subject>Generative Adversarial Networks (GANs); complex backgrounds; image segmentation; data augmentation; feature dimensionality reduction encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>As technology advances, solving image segmentation challenges in complex backgrounds has become a key issue across various fields. Traditional image segmentation methods underperform in addressing these challenges, and existing generative adversarial networks (GANs) also face several problems when applied in complex environments, such as low generation quality and unstable model training. To address these issues, this study introduces an improved GAN approach for image segmentation in complex backgrounds. This method encompasses preprocessing of complex background image datasets, feature reduction encoding based on cerebellar neural networks, image data augmentation in complex backgrounds, and the application of an improved GAN. In this paper, new generator and discriminator network structures are designed and image data enhancement is implemented through self-play learning. Experimental results demonstrate significant improvements in image segmentation tasks in various complex backgrounds, enhancing the accuracy and robustness of segmentation. This research offers new insights and methodologies for image processing in complex backgrounds, holding substantial theoretical and practical significance.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_43-Image_Segmentation_in_Complex_Backgrounds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SchemaLogix: Advancing Interoperability with Machine Learning in Schema Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150542</link>
        <id>10.14569/IJACSA.2024.0150542</id>
        <doi>10.14569/IJACSA.2024.0150542</doi>
        <lastModDate>2024-05-30T17:06:14.1500000+00:00</lastModDate>
        
        <creator>Mohamed Raoui</creator>
        
        <creator>Mohammed Ennaouri</creator>
        
        <creator>Moulay Hafid El Yazidi</creator>
        
        <creator>Ahmed Zellou</creator>
        
        <subject>Interoperability; data integration; schema matching; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Schema matching, a fundamental process in data integration, traditionally employs pairwise comparisons to discern semantic correspondences among elements in disparate schemas. However, recent developments underscore the necessity of concurrent matching of interconnected schemas, termed schema alignment, to reconcile heterogeneous elements. This paper presents SchemaLogix, an innovative machine learning-based approach for schema matching. SchemaLogix addresses challenges such as data scarcity and domain-specific constraints through an inventive bootstrapping method, autonomously generating extensive datasets. Furthermore, SchemaLogix capitalizes on inherent alignment context constraints to optimize learning and improve precision across varied schema structures. Additionally, SchemaLogix incorporates user contributions to validate chosen correspondences, refining outputs based on valuable feedback. Empirical evaluations establish SchemaLogix&#39;s superiority over traditional methods, achieving an exceptional maximum S1 score of 0.90. These results offer practical insights for real-world applications, substantially advancing data integration and interoperability endeavors.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_42-SchemaLogix_Advancing_Interoperability_with_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Capacitance-base System Design for Measurement of Crude Oil Moisture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150541</link>
        <id>10.14569/IJACSA.2024.0150541</id>
        <doi>10.14569/IJACSA.2024.0150541</doi>
        <lastModDate>2024-05-30T17:06:14.1330000+00:00</lastModDate>
        
        <creator>Zhixue Shi</creator>
        
        <creator>Xudong Zhao</creator>
        
        <subject>Capacitance measurement; crude oil moisture; PCAP01; STM32; time-frequency domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Challenges such as difficulty in cleaning and low measurement accuracy widely exist in traditional methods for measuring moisture in crude oil. To solve these problems, a capacitance measurement device combining PCAP01 and STM32 has been designed. PCAP01 is employed as the processing core of the sensor, which significantly enhances the measurement accuracy of capacitance-based methods. The STM32 plays a critical role in data acquisition, signal processing, and data transmission. In addition, the capacitance-based device contains two symmetric half-cylindrical electrode plates closely attached to the outer wall of the cylindrical glass vessel that holds the crude oil sample to be tested. This design prevents direct contact between the liquid sample and the electrode plates, thus eliminating issues related to cleaning difficulties. Time-frequency domain expansion is presented to realize the fit between moisture content and capacitance. Experimental results indicate that the designed system delivers high accuracy across the entire 0-100% range.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_41-A_Capacitance_based_System_Design_for_Measurement_of_Crude_Oil.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Whale Optimization Algorithm with Differential Evolution and L&#233;vy Flight for Robot Path Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150540</link>
        <id>10.14569/IJACSA.2024.0150540</id>
        <doi>10.14569/IJACSA.2024.0150540</doi>
        <lastModDate>2024-05-30T17:06:14.1030000+00:00</lastModDate>
        
        <creator>Rongrong TANG</creator>
        
        <creator>Xuebang TANG</creator>
        
        <creator>Hongwang ZHAO</creator>
        
        <subject>Path planning; mobile robot; differential evolution; Whale Optimization Algorithm; l&#233;vy flight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Path planning is a prominent and essential part of mobile robot navigation in robotics. It allows robots to determine the optimal path from a given starting point to a desired goal. Additionally, it enables robots to navigate around obstacles, recognize secure pathways, and select the optimal route to follow, considering multiple aspects. The Whale Optimization Algorithm (WOA) is a frequently adopted approach to planning mobile robot paths. However, conventional WOA suffers from drawbacks such as a sluggish convergence rate, inefficiency, and local optimization traps. This study presents a novel methodology integrating WOA with L&#233;vy flight and Differential Evolution (DE) to plan robot paths. As WOA evolves, the L&#233;vy flight promotes global search capabilities. In turn, DE enhances WOA&#39;s ability to perform local searches and exploitation while also maintaining a variety of solutions to avoid getting stuck in local optima. The simulation results demonstrate that the proposed approach offers greater planning efficiency and enhanced route quality.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_40-Enhancing_Whale_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Impact of PCA Variants on Intrusion Detection System Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150539</link>
        <id>10.14569/IJACSA.2024.0150539</id>
        <doi>10.14569/IJACSA.2024.0150539</doi>
        <lastModDate>2024-05-30T17:06:14.0870000+00:00</lastModDate>
        
        <creator>CHENTOUFI Oumaima</creator>
        
        <creator>CHOUKHAIRI Mouad</creator>
        
        <creator>CHOUGDALI Khalid</creator>
        
        <creator>ALLOUG Ilyas</creator>
        
        <subject>Intrusion detection; dimensionality reduction; feature extraction; KDDCup’99; NSL-KDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Intrusion detection systems (IDS) play a critical role in safeguarding network security by identifying malicious activities within network traffic. However, the effectiveness of an IDS hinges on its ability to extract relevant features from the vast amount of data it collects. This study investigates the impact of different feature extraction methods on the performance of IDS. We compare the performance of various feature extraction techniques on two widely used intrusion detection datasets: KDD Cup 99 and NSL-KDD. By evaluating these techniques on both datasets, we aim to gain insights into the generalizability and robustness of each method across different dataset characteristics. The study compares the performance of these methods using standard metrics like detection rate, F-measure and FPR for intrusion detection.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_39-Exploring_the_Impact_of_PCA_Variants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150538</link>
        <id>10.14569/IJACSA.2024.0150538</id>
        <doi>10.14569/IJACSA.2024.0150538</doi>
        <lastModDate>2024-05-30T17:06:14.0730000+00:00</lastModDate>
        
        <creator>DING Wei-jie</creator>
        
        <creator>Shen Xuchen</creator>
        
        <creator>Yuan Ying</creator>
        
        <creator>MAO Ting-yun</creator>
        
        <creator>SUN Guo-dao</creator>
        
        <creator>CHEN Li-li</creator>
        
        <creator>CHEN Bing-ting</creator>
        
        <subject>Deep learning; deep neural networks; adversarial attacks; adversarial examples; interactive visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Deep learning has been widely used in various scenarios such as image classification, natural language processing, and speech recognition. However, deep neural networks are vulnerable to adversarial attacks, resulting in incorrect predictions. Adversarial attacks involve generating adversarial examples and attacking a target model. The generation mechanism of adversarial examples and the prediction principle of the target model for adversarial examples are complicated, which makes it difficult for deep learning users to understand adversarial attacks. In this paper, we present an adversarial attack visualization system called AdvAttackVis to assist users in learning, understanding, and exploring adversarial attacks. Based on the designed interactive visualization interface, the system enables users to train and analyze adversarial attack models, understand the principles of adversarial attacks, analyze the results of attacks on the target model, and explore the prediction mechanism of the target model for adversarial examples. Through real case studies on adversarial attacks, we demonstrate the usability and effectiveness of the proposed visualization system.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_38-AdvAttackVis_An_Adversarial_Attack_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion Lightweight Steel Surface Defect Detection Algorithm Based on Improved Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150537</link>
        <id>10.14569/IJACSA.2024.0150537</id>
        <doi>10.14569/IJACSA.2024.0150537</doi>
        <lastModDate>2024-05-30T17:06:14.0570000+00:00</lastModDate>
        
        <creator>Fei Ren</creator>
        
        <creator>Jiajie Fei</creator>
        
        <creator>HongSheng Li</creator>
        
        <creator>Bonifacio T. Doma Jr</creator>
        
        <subject>Deep learning; improved YOLOv5; YOLOv5-mobilenet; surface defects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In industrial production, timely and accurate detection and identification of surface defects in steel materials were crucial for ensuring product quality, enhancing production efficiency, and reducing production costs. This study addressed the problem of surface defect detection in steel materials by proposing an algorithm based on an improved version of YOLOv5. The algorithm achieved lightweight and high efficiency by incorporating the MobileNet series network. Experimental results demonstrated that the improved algorithm significantly reduced inference time and model file size while maintaining performance. Specifically, the YOLOv5-MobileNet-Small model exhibited slightly lower performance but excelled in inference time and model file size. On the other hand, the YOLOv5-MobileNet-Large model achieved a slight performance improvement while significantly reducing inference time and model file size. These results indicated that the improved algorithm could achieve lightweighting while maintaining performance, showing promising applications in steel surface defect detection tasks. It provided an efficient and feasible solution for this important domain, offering new insights and methods for similar surface defect detection problems and contributing to research and applications in related fields.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_37-Fusion_Lightweight_Steel_Surface_Defect_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Public System of Urban Art: Navigating Human-Computer Interaction in Artistic Design for Innovative Urban Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150536</link>
        <id>10.14569/IJACSA.2024.0150536</id>
        <doi>10.14569/IJACSA.2024.0150536</doi>
        <lastModDate>2024-05-30T17:06:14.0400000+00:00</lastModDate>
        
        <creator>Yuan Yao</creator>
        
        <creator>Ying Liu</creator>
        
        <subject>Urban art; digitalization; human-computer interaction; creative expression; public art; technological innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The convergence of digital technology and urban art has given rise to novel urban art digitalization systems. This paper investigates the relationship between Human-Computer Interaction (HCI) and creative design, particularly in the age of 3D printing, Virtual Reality (VR), and digital art. We highlight the transformative potential of these technologies using examples that include interactive public installations and VR art exhibitions, thereby providing empirical evidence to ground our discussion of the evolving paradigms of the mutual constitution of technology and public art. We also contribute prescriptive guidance for bringing digital art into cities. We hope to offer a complete guide to understanding how digital innovation can catalyze the growth and evolution of the wealth of cultural assets that characterize cities.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_36-Digital_Public_System_of_Urban_Art.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Responsive Agriculture Hub via e-Commerce to Sustain Food Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150535</link>
        <id>10.14569/IJACSA.2024.0150535</id>
        <doi>10.14569/IJACSA.2024.0150535</doi>
        <lastModDate>2024-05-30T17:06:14.0270000+00:00</lastModDate>
        
        <creator>Wan Nurhayati Wan Ab. Rahman</creator>
        
        <creator>Wan Nurfarah Wan Zulkifli</creator>
        
        <creator>Nur Nabilah Zainuri</creator>
        
        <creator>Hanis Amira Khairol Anwar</creator>
        
        <subject>Digital agriculture hub model; digital value chain; responsive agriculture hub; food security; multi-sided e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Ensuring food security in the face of evolving environmental, economic, and societal challenges requires innovative solutions that leverage emerging technologies. This paper proposes a model for a responsive agriculture hub facilitated through e-commerce platforms to address the dynamic demands of food production, distribution, and consumption. The model integrates data-driven decision-making, supply chain optimization, and digital marketplaces to enhance the efficiency and resilience of agricultural systems. By harnessing real-time data analytics, predictive algorithms, and smart logistics, the proposed hub enables agile responses to fluctuating market conditions, climatic variability, and resource constraints. Through case studies and simulation analyses, we demonstrate the effectiveness of the model in enhancing the accessibility, affordability, and sustainability of food systems. Furthermore, we discuss the implications of this approach for stakeholders across the agricultural value chain, including farmers, distributors, retailers, and consumers. The findings underscore the potential of leveraging e-commerce platforms as catalysts for transformative change in agriculture, contributing to the overarching goal of achieving food security in an increasingly uncertain world.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_35-Model_for_Responsive_Agriculture_Hub.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardhat-YOLO: A YOLOv5-based Lightweight Hardhat-Wearing Detection Algorithm in Substation Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150534</link>
        <id>10.14569/IJACSA.2024.0150534</id>
        <doi>10.14569/IJACSA.2024.0150534</doi>
        <lastModDate>2024-05-30T17:06:14.0100000+00:00</lastModDate>
        
        <creator>Wanbo Luo</creator>
        
        <creator>Ahmad Ihsan Mohd Yassin</creator>
        
        <creator>Khairul Khaizi Mohd Shariff</creator>
        
        <creator>Rajeswari Raju</creator>
        
        <subject>Hardhat-wearing detection; You Only Look Once (YOLO); MobileNet; Substation; power Internet of Things (PIoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Accidents at substation sites have occurred frequently in recent years due to workers violating power safety regulations by not wearing hardhats. Therefore, it is necessary to provide real-time warnings when workers without hardhats are detected. Nevertheless, deploying deep learning-based algorithms requires a multitude of parameters and computations, which in turn increases hardware costs. Using a lightweight backbone can address this issue well. This paper explored methods such as deep learning, power Internet of Things (PIoT), and edge computing, and proposed a lightweight and effective method called hardhat-YOLO for hardhat-wearing detection. First, the MobileNetv3-small backbone replaced the backbone of You Only Look Once (YOLO) v5s to reduce parameters and increase detection speed. In addition, the Convolutional Block Attention Module (CBAM) was effectively integrated into the network to improve detection precision. Finally, the hardhat-YOLO model, trained with a customized dataset, was transmitted to edge computing terminals in substations through PIoT for hardhat-wearing detection. Compared to the YOLOv5s model, the Parameters and Giga Floating Point Operations (GFLOPs) of the proposed model decreased by about 35.5% and 54.4%, respectively, and Frames per Second (FPS) increased by approximately 17.3%. The experimental results indicated that the hardhat-YOLO model achieved a Mean Average Precision of 83.3% at 50% intersection over union (mAP50), correctly and effectively conducting hardhat-wearing detection tasks.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_34-Hardhat_YOLO_A_YOLOv5_based_Lightweight_Hardhat.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Quantity-based Strategies for Supply Chain Sustainability and Resilience in Uncertain Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150533</link>
        <id>10.14569/IJACSA.2024.0150533</id>
        <doi>10.14569/IJACSA.2024.0150533</doi>
        <lastModDate>2024-05-30T17:06:13.9930000+00:00</lastModDate>
        
        <creator>Dounia SAIDI</creator>
        
        <creator>Aziz AIT BASSOU</creator>
        
        <creator>Mustapha HLYAL</creator>
        
        <creator>Jamila EL ALAMI</creator>
        
        <subject>Supply chain management; competition; sustainability; resilience; demand uncertainty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In today&#39;s interconnected world, where supply chains are the backbone of commerce, ensuring their resilience and sustainability is paramount. This study investigates how quantity-based strategies in supply chain networks are influenced by sustainability and resilience considerations. A conceptual framework is devised, focusing on a two-echelon supply chain network comprising a central supplier and multiple stores. A stochastic mathematical model is constructed to tackle demand uncertainty while incorporating parameters related to sustainability and resilience. Competitive negotiations between suppliers and stores aim at maximizing expected profits. Two store configurations are examined: non-cooperative and cooperative. Supplier resilience is reinforced through strategies like security stocks and diversified sourcing, while sustainability efforts are considered by the supplier and stores. Results show that demand following a uniform distribution benefits stores and suppliers, and cooperative behavior among stores leads to higher profitability. Sustainability initiatives impact expected profits, with security stocks particularly advantageous for supplier profitability. The utilization of foreign products has a detrimental effect on expected profits, emphasizing the significance of government regulation via customs fees. The study underscores the importance of integrating sustainability and resilience in supply chain networks. It concludes with reflections on model limitations and proposes avenues for future research in this domain.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_33-Analyzing_Quantity_based_Strategies_for_Supply_Chain_Sustainability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved VMD and Wavelet Hybrid Denoising Model for Wearable SSVEP-BCI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150532</link>
        <id>10.14569/IJACSA.2024.0150532</id>
        <doi>10.14569/IJACSA.2024.0150532</doi>
        <lastModDate>2024-05-30T17:06:13.9800000+00:00</lastModDate>
        
        <creator>Yongquan Xia</creator>
        
        <creator>Keyun Li</creator>
        
        <creator>Duan Li</creator>
        
        <creator>Jiaofen Nan</creator>
        
        <creator>Ronglei Lu</creator>
        
        <subject>Brain-computer interface; steady-state visual evoked potential; style; variational mode decomposition; wavelet packet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEP) has attracted considerable attention due to its non-invasiveness, low user training requirements, and efficient information transfer rate. To optimize the accuracy of SSVEP detection, we propose an innovative hybrid EEG denoising model combining variational mode decomposition (VMD) with wavelet packet transform (WPT). This model ingeniously integrates VMD decomposition and WPT denoising techniques, employing detrended fluctuation analysis (DFA) thresholding to deeply filter the noisy data collected from wearable devices. The filtered components are then reconstructed alongside the unprocessed components. Finally, three classification algorithms are used to validate the proposed method on a wearable SSVEP-BCI dataset. Our proposed algorithm achieves accuracies of 71.27% and 86.35% on dry and wet electrodes, respectively. Compared with VMD combined with adaptive wavelet denoising and with direct VMD denoising, the classification accuracy of our method improved by 3.68% and 0.26% on dry electrodes, respectively, and by 3.28% and 0.66% on wet electrodes, respectively. The proposed approach demonstrates excellent performance and holds promising potential for application and generalization in the field of wearable EEG denoising.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_32-An_Improved_VMD_and_Wavelet_Hybrid_Denoising_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-based Method for Determining Semantic Similarity of English Translation Keywords</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150531</link>
        <id>10.14569/IJACSA.2024.0150531</id>
        <doi>10.14569/IJACSA.2024.0150531</doi>
        <lastModDate>2024-05-30T17:06:13.9470000+00:00</lastModDate>
        
        <creator>Wu Zhili</creator>
        
        <creator>Zhang Qian</creator>
        
        <subject>Deep learning; English translation; keyword; semantic similarity; co-occurrence of words; bidirectional LSTM neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the English translation task, the semantics of context play an important role in correctly understanding the subtle differences between keywords. A bidirectional LSTM comprises a forward LSTM and a reverse LSTM, so when processing sequence data it can consider information from the preceding and following text simultaneously. Therefore, to capture the subtle semantic differences between English translation keywords and accurately evaluate their similarity, a new semantic similarity determination method for English translation keywords is studied with the bidirectional LSTM neural network in deep learning as the main algorithm. This method introduces an English translation keyword extraction algorithm based on word co-occurrence and uses the co-occurrence relationship between words to identify and extract keywords in English translation. The extracted keywords are input into a deep learning-based bidirectional LSTM keyword semantic similarity judgment model, and the weights of the bidirectional LSTM neural network are optimized using the sparrow search algorithm. After the bidirectional LSTM neural network is trained, the keyword word vector information is captured, and the similarity between keyword word vectors is evaluated. The experimental results show that the sentence similarity calculated by the proposed method for English translation is very close to the result of professional manual scoring. The Spearman rank correlation coefficient of the semantic similarity determination result is 1, and the determination result is accurate.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_31-A_Deep_Learning_based_Method_for_Determining_Semantic_Similarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of an Information Management System for College Students in Higher Education Institutions Based on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150530</link>
        <id>10.14569/IJACSA.2024.0150530</id>
        <doi>10.14569/IJACSA.2024.0150530</doi>
        <lastModDate>2024-05-30T17:06:13.9330000+00:00</lastModDate>
        
        <creator>Mo Bin</creator>
        
        <subject>Cloud computing; student information; management system; load balancing; virtual gateway; personalized recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>A cloud computing-based system has been developed to enhance the efficiency and practicality of the information management system for college students in higher vocational colleges. This system incorporates a well-defined architecture that leverages cloud computing technology. The management layer&#39;s logic module ensures the security of vocational college students&#39; information by deploying virtual gateways at strategic points within the system, thereby controlling access, sharing, and exchange of information. The resource module in the application layer optimizes server cluster load balancing by minimizing task completion time and improving load balancing effectiveness. Additionally, the M-Cloud storage mode is employed to store and back up application layer cloud information, along with the distributed Bigtable information base. The user access layer provides users with convenient services through the corresponding cloud service access interface in the application layer. Furthermore, the employment information of college students and enterprise position information are clustered using the K-means algorithm based on data mining, and personalized employment recommendations are made using similarity calculations. Experimental results demonstrate that the system boasts a user-friendly interface design, efficient operation, and comprehensive management functions. The system&#39;s server cluster exhibits strong load-balancing capabilities, effectively mitigating network congestion and minimizing the risks of network storms and paralysis.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_30-Design_and_Implementation_of_an_Information_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Enhancement of Wi-Fi Fingerprinting-based Indoor Positioning using Truncated Singular Value Decomposition and LSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150529</link>
        <id>10.14569/IJACSA.2024.0150529</id>
        <doi>10.14569/IJACSA.2024.0150529</doi>
        <lastModDate>2024-05-30T17:06:13.9170000+00:00</lastModDate>
        
        <creator>Duc Khoi Nguyen</creator>
        
        <creator>Thi Hang Duong</creator>
        
        <creator>Le Cuong Nguyen</creator>
        
        <creator>Manh Kha Hoang</creator>
        
        <subject>Indoor positioning; Wi-Fi fingerprinting; Truncated Singular Value Decomposition; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Wi-Fi-based indoor positioning has been considered the most promising approach for civil location-based services due to the widespread availability of Wi-Fi systems in many buildings. One of the most favorable approaches is to employ the received signal strength indicator (RSSI) of Wi-Fi access points as the signal for estimating mobile object locations. However, developing a solution that achieves high positioning accuracy while reducing system complexity, using either traditional methods or deep learning-based methods, remains a very challenging task. This paper presents a proposal to combine the Truncated Singular Value Decomposition (SVD) technique with a Long Short-Term Memory (LSTM) model to enhance the performance of an indoor positioning system. Experimental results on a public dataset demonstrate that the proposed approach outperforms other state-of-the-art solutions in terms of positioning accuracy as well as computational cost.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_29-Performance_Enhancement_of_Wi_Fi_Fingerprinting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Dual Objective Optimization Model Combining Non-Dominated Genetic Algorithm on Rural Industrial Ecological Economy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150528</link>
        <id>10.14569/IJACSA.2024.0150528</id>
        <doi>10.14569/IJACSA.2024.0150528</doi>
        <lastModDate>2024-05-30T17:06:13.9000000+00:00</lastModDate>
        
        <creator>Ying Wang</creator>
        
        <subject>Industrial chain production mode; ecological economy; environmental benefits; non-dominated sorting genetic algorithm; dual objective programming model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The development of the industrial economy has caused serious damage to the ecological environment. Based on the industrial structure and production scale, rural industrial economic parks are planned to analyze the quantity and weight of pollutants emitted by the original industries. The results showed that the quantity and weight of hydrogen sulfide in the coking industry were 10kg/t and 94, respectively. The weights of smoke and carbon monoxide in the steelmaking industry were relatively high, at 54 and 34, respectively. A non-dominated sorting genetic algorithm and a multi-objective programming model are used to optimize the comprehensive benefits and industrial structure of the rural industrial ecological economy. According to the experimental results, when the scale of the coking industry was 135600 tons, the steelmaking industry 314900 tons, the ironmaking industry 148100 tons, and the underground coal gasification industry 424.76 million Nm3, the comprehensive economic benefits of the industry reached the optimal level of 0.6415. The environmental and comprehensive benefits generated by the added power generation industry were 64.98 and 40.87, respectively. This indicates that the dual objective programming model combined with the non-dominated sorting genetic algorithm can improve the rural industrial ecological economy.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_28-The_Impact_of_Dual_Objective_Optimization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedding Emotions in the Metaverse: The Emotive Keywords for Augmented Reality Mobile Library Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150527</link>
        <id>10.14569/IJACSA.2024.0150527</id>
        <doi>10.14569/IJACSA.2024.0150527</doi>
        <lastModDate>2024-05-30T17:06:13.8830000+00:00</lastModDate>
        
        <creator>Nik Azlina Nik Ahmad</creator>
        
        <creator>Munaisyah Abdullah</creator>
        
        <creator>Ahmad Iqbal Hakim Suhaimi</creator>
        
        <creator>Anitawati Mohd Lokman</creator>
        
        <subject>Affective engineering; emotional design; human factor; Kansei engineering; metaverse library; mobile augmented reality; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The emergence of the metaverse, marked by the seamless integration of augmented reality (AR) applications across various sectors, is driving a profound transformation in the digital landscape. As we delve into the digital realm of the metaverse, just like other applications, it unfolds as an equally captivating canvas for emotional exploration, where a comprehensive understanding of human emotion is vital for a better user experience (UX). Although efforts to investigate emotions within the metaverse are in progress, there is a notable absence of extensive research examining users’ emotional experiences with a tailored set of keywords specifically for designing user interface (UI) products in this context, resulting in a substantial void in this particular domain. Therefore, the objective of this research is to synthesise and validate an extensive array of emotive keywords explicitly tailored for AR-based Mobile Library Application (MLA) design. This endeavor involves an exhaustive review of the literature and a rigorous validation process, encompassing input from both linguistic and technical experts in the field. The result is an explicit collection of sixty emotive keywords that will significantly contribute to the metaverse realm by adding a layer of emotional depth to enrich the AR-based MLA experience. These findings offer valuable guidance for practitioners and researchers, advancing the landscape of MLA interface design and ultimately boosting UX in the educational sector.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_27-Embedding_Emotions_in_the_Metaverse.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Fraud Detection in Credit Card Transactions using Optimized Federated Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150526</link>
        <id>10.14569/IJACSA.2024.0150526</id>
        <doi>10.14569/IJACSA.2024.0150526</doi>
        <lastModDate>2024-05-30T17:06:13.8530000+00:00</lastModDate>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <creator>Doaa L. El-Bably</creator>
        
        <creator>Khaled M. Fouad</creator>
        
        <creator>M. Salah Eldin Elsayed</creator>
        
        <subject>Credit card fraud detection (CCFD); federated learning; optimization algorithms; identically independent distributions (IIDs); metaheuristic optimization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In recent years, credit card transaction fraud has inflicted significant losses on both consumers and financial institutions. To address this critical issue, we propose an optimized framework for fraud detection. This study deals with non-identically independent distributions (IIDs) involving different numbers of clients. The proposed framework empowers banks to construct robust fraud detection models using their internal training data. Specifically, by optimizing the initial global model prior to the federated learning phase, the suggested optimization technique accelerates convergence by reducing communication costs during federated training. The optimization employs three recent metaheuristic optimizers, namely: an improved gorilla troops optimizer (AGTO), the Coati Optimization Algorithm (CoatiOA), and the Coati Optimization Algorithm (COA). Furthermore, credit card data is highly skewed, which makes it challenging to predict fraudulent transactions. A resampling strategy is used as a preprocessing step to improve the outcomes on unbalanced or skewed data. The performance of these algorithms is documented and compared. Accuracy, precision, recall, F-measure, loss, and computation time are used to assess the algorithms&#39; performance. The experimental results show that AGTO and CoatiOA exhibit higher accuracy, precision, recall, and F1 scores compared to the baseline FL model. Additionally, they achieve lower loss values.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_26-Enhancing_Fraud_Detection_in_Credit_Card_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting a New Approach for Clustering Optimization in Wireless Sensor Networks using Fuzzy Cuckoo Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150525</link>
        <id>10.14569/IJACSA.2024.0150525</id>
        <doi>10.14569/IJACSA.2024.0150525</doi>
        <lastModDate>2024-05-30T17:06:13.8370000+00:00</lastModDate>
        
        <creator>Bing ZHOU</creator>
        
        <creator>Youyou LI</creator>
        
        <subject>Wireless sensor network; fuzzy cuckoo search algorithm; clustering; fuzzy model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Owing to developments in this technology, wireless sensor networks are now among the most commonly used networks in the domains of agriculture, harsh environments, medicine, and the military. Among the many problems with these networks is their limited lifespan. Much work has been done in the fields of sensor communication, routing, and data gathering to reduce energy usage and increase network life. Routing protocols and clustering algorithms are two techniques for reducing energy use. Selecting the cluster head is the most important stage in any clustering technique. The objectives of this article are to decrease total energy consumption, increase packet delivery rates, and lengthen the network&#39;s lifetime. To this end, the LEACH protocol is modified to use cuckoo search instead of a probability distribution during the cluster head selection step and fuzzy logic during the routing phase. A MATLAB environment was utilized to evaluate the proposed method against the LEACH algorithm under identical conditions. The results of the comparison show that the recommended approach does a better job of prolonging the network&#39;s lifetime than the LEACH protocol.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_25-Presenting_a_New_Approach_for_Clustering_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stock Market Volatility Estimation: A Case Study of the Hang Seng Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150524</link>
        <id>10.14569/IJACSA.2024.0150524</id>
        <doi>10.14569/IJACSA.2024.0150524</doi>
        <lastModDate>2024-05-30T17:06:13.8230000+00:00</lastModDate>
        
        <creator>Shengwen Wu</creator>
        
        <creator>Qiqi Lin</creator>
        
        <creator>Xuefeng liu</creator>
        
        <subject>Hang Seng index; financial market; stock price prediction; Random Forest; biological bases optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The stock market is among the influential elements of the national economy. It is a multifaceted system that combines economics, investor psychology, and other market mechanics. The objective of financial market investment is to maximize profits, but due to the market&#39;s complexity and the multitude of factors that might impact it, predicting its future behavior is challenging. The challenging process of stock price prediction requires the analysis of a wide range of social, political, and economic factors, including market trends, financial statements, earnings reports, and other data. The goal of this project is to develop an accurate hybrid stock price forecasting model using Random Forest combined with optimization. Random Forest is a type of machine learning that is often used in time series analysis. This study forecasts stock prices using data from 2015 to 2023 for the Hang Seng Index, which consists of the largest and most liquid corporations publicly traded on the Hong Kong Stock Exchange. The Dow Jones and KOSPI were evaluated as two additional indices. This study examines several optimization approaches, including the genetic algorithm, grey wolf optimization, and biogeography-based optimization, which drew inspiration from the phenomenon of species migrating between islands in search of a suitable habitat. Biogeography-based optimization showed the best result among these optimizations. The proposed hybrid model obtained coefficient of determination values of 0.992, 0.997, and 0.9937 for the HSI, Dow Jones, and KOSPI markets, respectively. These results indicate the model&#39;s ability to predict the stock market with a high degree of accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_24-Stock_Market_Volatility_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Assessing Financial Market Price Behavior: An Analysis of the Shanghai Stock Exchange Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150523</link>
        <id>10.14569/IJACSA.2024.0150523</id>
        <doi>10.14569/IJACSA.2024.0150523</doi>
        <lastModDate>2024-05-30T17:06:13.8230000+00:00</lastModDate>
        
        <creator>Zhi Huang</creator>
        
        <creator>Jiansheng Li</creator>
        
        <subject>Financial market; shanghai stock exchange price; gated recurrent unit; grasshopper optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>A stock market is a venue where the shares of publicly traded companies are available for purchase and sale by individuals. The financial markets exert a substantial influence on various domains, including technology, employment, and business. Given the substantial rewards and risks associated with stock trading, investors are exceedingly concerned with the precision of future stock value forecasts. They modify their investment strategies in an effort to achieve even greater returns. Accurate stock price forecasting can be challenging in the securities industry due to the complex nature of the problem and the requirement for a comprehensive understanding of various interconnected factors. The stock market is influenced by a variety of factors, including politics, society, and economics. A multitude of interrelated factors contribute to these behaviors, and stock price fluctuations are capricious. In order to tackle a range of these difficulties, the present investigation proposes an innovative framework that integrates a Grasshopper optimization method with the gated recurrent unit model, a machine-learning approach. The research used data from the Shanghai Stock Exchange Index for the period of 2015–2023. The proposed hybrid model was also tested on the 2013–2022 S&amp;P 500 and Nikkei 225. The proposed model demonstrated optimal performance, exhibiting a minimal error rate and exceptional effectiveness. The study&#39;s findings demonstrate that the proposed model is more suitable for the volatile stock market and surpasses other existing strategies in terms of predictive accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_23-A_Method_for_Assessing_Financial_Market_Price_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining the Various Neural Network Algorithms Considering the Superiority of Mouth Brooding Fish in Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150522</link>
        <id>10.14569/IJACSA.2024.0150522</id>
        <doi>10.14569/IJACSA.2024.0150522</doi>
        <lastModDate>2024-05-30T17:06:13.8070000+00:00</lastModDate>
        
        <creator>Lang Liu</creator>
        
        <creator>Yong Zhu</creator>
        
        <subject>Medical data analysis; clinical decision support; dataset classification; Mouth Brooding Fish; Support Vector Machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Data classification, a crucial practice in information management, involves categorizing data based on its sensitivity to determine appropriate access levels and protection measures. This paper explores the utilization of novel algorithms, including Mouth Brooding Fish (MBF), alongside machine learning techniques, for the analysis of medical health data. The SVM exhibits suboptimal performance in the data categorization task, and AdaBoost may be considered a viable alternative; however, MBF surpasses it in terms of F-score, accuracy, specificity, and sensitivity. The accuracy of MBF, at about 95%, exceeds that of AdaBoost, which stands at 77%, by a significant margin. The F-score, accuracy, and specificity values obtained for MBF are exceptional compared with the other chosen models, at 97.17%, 93.6%, and 96.5%, respectively. The proposed algorithm exhibits promising advancements in health data categorization, offering a potential breakthrough in data classification methodologies. Leveraging this innovative approach could facilitate more accurate and efficient management of sensitive medical data, thereby enhancing healthcare systems&#39; capabilities for data protection and analysis. The main novelty of this study lies in the introduction and evaluation of the MBF algorithm for data classification within the medical domain. Unlike traditional algorithms, MBF draws inspiration from the collective behavior of mouth-brooding fish, offering a unique optimization strategy that enhances both exploration and exploitation of the solution space. This novel approach presents a promising avenue for advancing healthcare analytics and decision-making processes.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_22-Examining_the_Various_Neural_Network_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study: Mouth Brooding Fish (MBF) as a Novel Approach for Android Malware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150521</link>
        <id>10.14569/IJACSA.2024.0150521</id>
        <doi>10.14569/IJACSA.2024.0150521</doi>
        <lastModDate>2024-05-30T17:06:13.7900000+00:00</lastModDate>
        
        <creator>Kangle Zhou</creator>
        
        <creator>Panpan Wang</creator>
        
        <creator>Baiqing He</creator>
        
        <subject>Android malware detection; ensemble learning; SVM; MLP; RF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Android has become increasingly prevalent, holding the highest market share among mobile operating systems due to its open-source nature and user-friendliness. This has resulted in an uncontrolled proliferation of malicious applications targeting the Android platform. Emerging Android malware employs highly sophisticated detection- and analysis-evasion techniques, rendering traditional signature-based detection methods less effective in identifying modern and unknown malware. Alternative approaches, such as machine learning methods, have emerged as leading solutions for timely zero-day anomaly detection. Ensemble learning, a common meta-approach in machine learning, seeks to improve predictive performance by amalgamating predictions from multiple models. This paper introduces an enhanced strategy, Mouth Brooding Fish (MBF), based on ensemble learning for Android Malware Detection (AMD). The findings are compared with the outputs of various algorithms, including Support Vector Machine (SVM), AdaBoost, Multilayer Perceptron (MLP), Gaussian Kernel (GK), and Random Forest (RF). Compared to the other selected models, MBF exhibits remarkable performance with an F-score of 98.57%, precision of 99.65%, sensitivity of 97.51%, and specificity of 97.51%. Thus, the significant novelty of this work lies in the accuracy and authenticity of the selected algorithms, demonstrating their superior performance overall.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_21-Comparative_Study_Mouth_Brooding_Fish.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Work to Highlight the Superiority of Mouth Brooding Fish (MBF) over the Various ML Techniques in Password Security Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150520</link>
        <id>10.14569/IJACSA.2024.0150520</id>
        <doi>10.14569/IJACSA.2024.0150520</doi>
        <lastModDate>2024-05-30T17:06:13.7770000+00:00</lastModDate>
        
        <creator>Yan Shi</creator>
        
        <creator>Yue Wang</creator>
        
        <subject>Mouth Brooding Fish (MBF); password security; Sber dataset; SVM; Random Forest; AdaBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Within the domain of password security classification, the pursuit of practical and dependable methodologies has prompted the examination of both biological and technological paradigms. The present study investigates the efficacy of Mouth Brooding Fish (MBF) as an innovative method, in contrast to conventional machine learning (ML) approaches, for classifying password security. The research entails a rigorous comparative analysis of MBF and ML algorithms, evaluating their effectiveness in password classification against several criteria, including accuracy, robustness, flexibility, and durability against adversarial attacks. The findings suggest that ML approaches have shown significant effectiveness in classifying passwords; however, the MBF-inspired methodology demonstrates a higher degree of resistance against typical cyber threats. The intrinsic biological mechanisms of MBF, encompassing adaptive behaviors and inherent protection, play a role in enhancing the resilience and adaptability of the password security categorization system. The results offer significant insights that can inform the evolution of password security systems, integrating biological principles with technical progress to enhance safeguarding measures in digital environments. To emphasize the advantages of the suggested approach, several ML methods are investigated, including Support Vector Machines (SVM), AdaBoost, Multilayer Perceptron (MLP), Gaussian Kernel (GK), and Random Forest (RF). The F-score, accuracy, sensitivity, and specificity metrics for MBF exhibit noteworthy performance compared to the other selected models, with all values reaching 100%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_20-A_Comparative_Work_to_Highlight_the_Superiority_of_Mouth_Brooding_Fish.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Enabled Real-Time Monitoring and Alert System for Primary Network Resource Scheduling and Large-Scale Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150519</link>
        <id>10.14569/IJACSA.2024.0150519</id>
        <doi>10.14569/IJACSA.2024.0150519</doi>
        <lastModDate>2024-05-30T17:06:13.7600000+00:00</lastModDate>
        
        <creator>Bin Zhang</creator>
        
        <creator>Hongchun Shu</creator>
        
        <creator>Dajun Si</creator>
        
        <creator>Jinding He</creator>
        
        <creator>Wenlin Yan</creator>
        
        <subject>Cloud computing; main network scheduling; large users; real-time monitoring; monitoring and prediction; systems research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This paper innovatively combines cloud computing with Bayesian networks, aiming to provide an efficient and real-time prediction and scheduling platform for power main network scheduling and large-scale user monitoring. The core of the research lies in the development of a set of novel intelligent scheduling algorithms, which integrates multi-objective optimization theory and deep reinforcement learning technology to achieve dynamic and optimal allocation of power grid resources in the cloud environment. By constructing a comprehensive evaluation system, this study verifies the advancement of the proposed model in multiple dimensions: not only does it make breakthroughs in the in-depth parsing and accurate prediction of electric power data, but it also significantly improves the prediction accuracy of main grid load changes, dynamic tariff adjustments, grid security posture, and the power consumption patterns of large users. The empirical study shows that, compared with existing methods, the model proposed in this study effectively reduces energy consumption and operation costs while improving prediction accuracy and dispatching efficiency, demonstrating its significant innovative value and practical significance in the field of intelligent grid management. The innovation of this paper lies in the development of a composite prediction model that integrates the powerful classification and prediction capabilities of Bayesian networks with the efficient learning mechanism of deep reinforcement learning in complex decision-making scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_19-Cloud_Enabled_Real_Time_Monitoring_and_Alert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Logistics Path Planning Method using NSGA-II Algorithm and BP Neural Network in the Era of Logistics 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150518</link>
        <id>10.14569/IJACSA.2024.0150518</id>
        <doi>10.14569/IJACSA.2024.0150518</doi>
        <lastModDate>2024-05-30T17:06:13.7430000+00:00</lastModDate>
        
        <creator>Liuqing Li</creator>
        
        <subject>Whale optimization algorithm; non-dominant ordering genetic algorithm; backpropagation network; logistics and distribution; path planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The distribution of fresh food is affected by its perishable nature; compared with ordinary logistics distribution, its delivery routes must be planned very carefully. However, existing food logistics planning methods do not consider the complexity of the actual road network or the time-varying nature of traffic conditions. To solve this problem, this study takes road-section travel-time prediction as the starting point and uses the non-dominated sorting genetic algorithm II (NSGA-II) and a backpropagation network to construct a new logistics path planning model. Firstly, road condition information collected by stationary detectors and floating-vehicle technology is integrated and fed into the backpropagation network model for travel-time prediction. To improve the prediction model&#39;s performance, it is refined with a whale optimization algorithm. Then, based on the prediction results, NSGA-II is used for distribution route planning. Experimental analysis showed that the average distribution cost of the method designed in this study was 9,476 yuan and the average carbon emission was 2,871 kg. Compared with the other three algorithms, the distribution cost was more than 15% lower and the carbon emissions were more than 12.5% lower. The planning method designed in this study can achieve more reasonable, lower-cost, and environmentally friendly logistics and distribution, bringing more satisfactory services to the lives of urban residents.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_18-Logistics_Path_Planning_Method_using_NSGA_II_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Device Identity Authentication Method Based on rPPG and CNN Facial Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150517</link>
        <id>10.14569/IJACSA.2024.0150517</id>
        <doi>10.14569/IJACSA.2024.0150517</doi>
        <lastModDate>2024-05-30T17:06:13.7130000+00:00</lastModDate>
        
        <creator>Liwan Wu</creator>
        
        <creator>Chong Yang</creator>
        
        <subject>Internet of Things; identity authentication; facial recognition; remote photoplethysmography; error rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>This study aims to address the insufficient recognition accuracy and the limitations of authentication techniques in current IoT authentication methods. The research presents a more accurate face-video authentication technique using a new method that combines convolutional neural networks (CNN) and remote photoplethysmography (rPPG) signal tracing. This method comprehensively analyzes facial video images to achieve effective authentication of user identity. The results showed that the new method had higher recognition accuracy under low-light conditions and performed better in ablation experiments: its error rate was 1.12% lower than the standalone CNN model and 1.73% lower than the rPPG model. Its half-error rate was lower than that of the traditional face authentication model, and the method performed better overall. Meanwhile, images with high similarity showed better recognition stability. The new method is thus able to address problems such as recognition accuracy in identity authentication, although recognition under extreme conditions requires further research. The research provides a new technical solution for the authentication of Internet of Things devices, which helps to improve the security and accuracy of the authentication system. By combining the CNN model and rPPG, the research not only improves recognition accuracy in complex environments but also enhances the system&#39;s adaptability to environmental changes, providing a new solution for the advancement of Internet of Things authentication technology.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_17-IoT_Device_Identity_Authentication_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Math Performance in High School Students using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150516</link>
        <id>10.14569/IJACSA.2024.0150516</id>
        <doi>10.14569/IJACSA.2024.0150516</doi>
        <lastModDate>2024-05-30T17:06:13.6970000+00:00</lastModDate>
        
        <creator>Yuan hui</creator>
        
        <subject>Student performance; math grade prediction; feature selection; regression analysis; machine learning; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the field of education, understanding and predicting student performance plays a crucial role in improving the quality of system management decisions. In this study, the power of various machine learning techniques was investigated for the complicated task of predicting students&#39; performance in math courses using demographic data from 395 students. Predicting students&#39; performance from demographic information makes it possible to predict their performance before the start of the course. Filter and wrapper feature selection methods were used to find the 10 most important features for predicting students&#39; final math grades. Then, all features of the dataset, as well as the 10 features selected by each feature selection method, were used as input for regression analysis with the AdaBoost model. Finally, the prediction performance of each of these feature sets in predicting students&#39; math grades was evaluated using criteria such as Pearson&#39;s correlation coefficient and mean squared error. The best result was obtained from feature selection by the LASSO method. After LASSO feature selection, the Extra Trees and Gradient Boosting Machine methods, respectively, gave the best predictions of the final math grade. The present study showed that the LASSO feature selection technique integrated with regression analysis using the AdaBoost model is a suitable data mining framework for predicting students&#39; mathematical performance.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_16-Predicting_Math_Performance_in_High_School_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Differential Entropy and Multifractal Cumulants for EEG-based Mental Workload Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150515</link>
        <id>10.14569/IJACSA.2024.0150515</id>
        <doi>10.14569/IJACSA.2024.0150515</doi>
        <lastModDate>2024-05-30T17:06:13.6830000+00:00</lastModDate>
        
        <creator>Yan Lu</creator>
        
        <subject>Mental workload; EEG; nonlinear analysis; multifractal; differential entropy; fuzzy KNN; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>In the current research, two nonlinear features were utilized for the design of EEG-based mental workload recognition: one based on differential entropy and the other based on multifractal cumulants. Clean EEGs recorded from 36 healthy volunteers in both resting and task states were subjected to feature extraction via differential entropy and multifractal cumulants. These nonlinear features were then utilized as input for a fuzzy KNN classifier. Experimental results showed that the multifractal cumulants feature vector achieved an AUC of 0.951, higher than that of the differential entropy feature vector (AUC = 0.935). However, the combination of both feature sets provided added value in identifying these two mental workloads (AUC = 0.993). Furthermore, the multifractal cumulants feature vector (best classification accuracy = 94.76%) obtained better classification results than the differential entropy feature vector (best classification accuracy = 92.61%). The combination of these two feature vectors achieved the best classification results: accuracy of 96.52%, sensitivity of 97.68%, specificity of 95.58%, and F1-score of 96.61%. This shows that the two feature vectors are complementary in identifying different mental workloads.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_15-Exploring_Differential_Entropy_and_Multifractal_Cumulants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Motor Imagery Detection Through EEG Analysis and Deep Learning Models for Brain-Computer Interface Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150514</link>
        <id>10.14569/IJACSA.2024.0150514</id>
        <doi>10.14569/IJACSA.2024.0150514</doi>
        <lastModDate>2024-05-30T17:06:13.6670000+00:00</lastModDate>
        
        <creator>Yang Li</creator>
        
        <creator>Bocheng Liu</creator>
        
        <creator>Yujia Tian</creator>
        
        <subject>Brain-computer interface (BCI); Electroencephalogram (EEG); motor imagery; deep learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The classification of motor imagery holds significant importance within brain-computer interface (BCI) research, as it allows for the identification of a person&#39;s intention, such as controlling a prosthesis. Motor imagery involves the brain&#39;s dynamic activities, commonly captured using electroencephalography (EEG) to record nonstationary time series with low signal-to-noise ratios. While various methods exist for extracting features from EEG signals, the application of deep learning techniques to enhance the representation of EEG features for improved motor imagery classification performance has been relatively unexplored. This research introduces a new deep learning approach based on two-dimensional CNNs with different architectures. Specifically, time-frequency domain representations of the EEGs are obtained by the wavelet transform with different mother wavelets (Mexican hat, Cmor, and Cgaus) and used as the CNN inputs. The BCI Competition IV-2a dataset, released in 2008, was utilized for testing the proposed deep learning approaches. Several experiments were conducted, and the results showed that the proposed method achieved better performance than some state-of-the-art methods. The findings of this study showed that the architecture of the CNN, and specifically the number of convolution layers in the deep learning network, has a significant effect on the classification performance of motor imagery brain data. In addition, the choice of mother wavelet in the wavelet transform is very important to the classification performance on motor imagery EEG data.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_14-Automated_Motor_Imagery_Detection_Through_EEG_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting User Credibility on Twitter using a Hybrid Machine Learning Model of Features’ Selection and Weighting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150513</link>
        <id>10.14569/IJACSA.2024.0150513</id>
        <doi>10.14569/IJACSA.2024.0150513</doi>
        <lastModDate>2024-05-30T17:06:13.6500000+00:00</lastModDate>
        
        <creator>Nahid R. Abid-Althaqafi</creator>
        
        <creator>Hessah A. Alsalamah</creator>
        
        <subject>User credibility; supervised machine learning; feature selection; feature weighting; social network; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the pervasive and rapidly growing presence of the internet and social media, creating untrustworthy accounts has become effortless, allowing fake news to be spread for personal or private interests. As a result, it is crucial to investigate the credibility of users on social networking platforms such as Twitter. In this research, we integrate existing solutions from previous research to create a hybrid model. Our approach selects and weights features using supervised machine learning methods such as ExtraTreesClassifier, correlation-based methods, and SelectKBest to extract newly ranked and weighted features from the dataset, which are then used to train our model to discover their impact on the accuracy of user credibility detection. The research objective is to combine feature selection and weighting methods with supervised machine learning to evaluate their impact on the accuracy of user credibility detection on Twitter. In addition, we measure the effectiveness of different feature categories on this detection. Experiments are conducted on one of the openly available datasets. We employ features extracted from a user&#39;s profile together with statistical and emotional information. The experimental results are then compared to discover the effectiveness of the proposed solution. This study focuses on revealing the credibility of Twitter (recently renamed the X platform) accounts, which may require some adjustments before generalizing its outputs to other social media platforms such as LinkedIn and Facebook.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_13-Detecting_User_Credibility_on_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Student Behavior Detection Algorithm Based on Improved SSD Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150512</link>
        <id>10.14569/IJACSA.2024.0150512</id>
        <doi>10.14569/IJACSA.2024.0150512</doi>
        <lastModDate>2024-05-30T17:06:13.6330000+00:00</lastModDate>
        
        <creator>Yongqing CAO</creator>
        
        <creator>Dan LIU</creator>
        
        <subject>Improved single shot detector (SSD) model; mobilenet network; class behavior recognition; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Despite advancements in educational technology, traditional action recognition algorithms have struggled to effectively monitor student behavior in dynamic classroom settings. To address this gap, the Single Shot Detector (SSD) algorithm was optimized for educational environments. This study aimed to assess whether integrating the Mobilenet architecture with the SSD algorithm could improve the accuracy and speed of detecting student behavior in classrooms, and how these enhancements would impact the practical implementation of behavior-monitoring technologies in education. An improved SSD algorithm was developed using Mobilenet, known for its efficient data processing capabilities. A dataset of 2,500 images depicting various student behaviors was collected and enhanced through preprocessing methods to train the model. The optimized SSD model outperformed traditional algorithms in accuracy and speed, thanks to the integration of Mobilenet. Evaluation metrics such as precision, recall, and frames per second (fps) confirmed the superior performance of the Mobilenet-enhanced SSD algorithm in real-time environmental analysis. This advancement represents a significant improvement in surveillance technologies for educational settings, enabling more precise and timely assessments of student behavior. Despite the promising outcomes, the study faced limitations due to the uniformity of the dataset, which mainly consisted of controlled environment images. To improve the generalizability of the findings, it is suggested that future research should broaden the dataset to encompass a wider range of educational settings and student demographics. Additionally, it is encouraged to explore alternative advanced machine learning frameworks and conduct longitudinal studies to evaluate the influence of real-time behavior monitoring on educational outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_12-Optimization_of_Student_Behavior_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Scheduling of Robots in the Mixed Flow Workshop of Industrial Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150511</link>
        <id>10.14569/IJACSA.2024.0150511</id>
        <doi>10.14569/IJACSA.2024.0150511</doi>
        <lastModDate>2024-05-30T17:06:13.6030000+00:00</lastModDate>
        
        <creator>Dejun Miao</creator>
        
        <creator>Rongyan Xu</creator>
        
        <creator>Yizong Dai</creator>
        
        <creator>Jiusong Chen</creator>
        
        <subject>Industrial Internet of Things; mixed flow workshop; robot; Markov decision-making process; SPMCTS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the deep integration of industrial Internet of Things technology and artificial intelligence, material-handling robots have been widely used in IoT workshops. In view of complex factors such as real-time dynamic changes and uncertain conditions in the workshop, this paper proposes to realize adaptive workshop scheduling decisions through component-layer construction and an SPMCTS search method that takes the real-time state as the root node. This method transforms the robot scheduling problem into a Markov decision process and gives a detailed representation of workshop states, actions, rewards, and strategies. In the real-time scheduling process, the search method is based on the workpiece component-layer construction and only considers the state relationship between two adjacent groups, which simplifies the computation. In the subtree search, SPMCTS is applied with the real-time state as the root node, and expansion and pruning methods are applied for strategy exploration and information accumulation, so that the deeper the real-time state node lies in the subtree, the more quickly and accurately the optimal strategy can be obtained. Finally, the effectiveness and superiority of the proposed method are verified by simulation analysis of a real case.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_11-Adaptive_Scheduling_of_Robots_in_the_Mixed_Flow_Workshop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Empirical Mode Decomposition Based on Sparse Bayesian Learning with Mixed Kernel for Landslide Displacement Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150510</link>
        <id>10.14569/IJACSA.2024.0150510</id>
        <doi>10.14569/IJACSA.2024.0150510</doi>
        <lastModDate>2024-05-30T17:06:13.5870000+00:00</lastModDate>
        
        <creator>Ping Jiang</creator>
        
        <creator>Jiejie Chen</creator>
        
        <subject>Bubble; cubic; ensemble empirical mode decomposition; landslide; Sparse Bayesian Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Inspired by the principles of decomposition and ensemble, we introduce an Ensemble Empirical Mode Decomposition (EEMD) method that incorporates Sparse Bayesian Learning (SBL) with Mixed Kernel, referred to as EEMD-SBLMK, specifically tailored for landslide displacement prediction. EEMD and Mutual Information (MI) techniques were jointly employed to identify potential input variables for our forecast model. Additionally, each selected component was trained using distinct kernel functions. By minimizing the number of Relevance Vector Machine (RVM) rules computed, we achieved an optimal balance between kernel functions and selected parameters. The EEMD-SBLMK approach generated final results by summing the prediction values of each subsequence along with the residual function associated with the corresponding kernel function. To validate the performance of our EEMD-SBLMK model, we conducted a real-world case study on the Liangshuijing (LSJ) landslide in China. Furthermore, in comparison to RVM-Cubic and RVM-Bubble, EEMD-SBLMK emerged as the most effective method, delivering superior results in the same measurement metrics.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_10-Ensemble_Empirical_Mode_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Cloud Computing Task Scheduling Model Based on Simulated Annealing Hybrid Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150509</link>
        <id>10.14569/IJACSA.2024.0150509</id>
        <doi>10.14569/IJACSA.2024.0150509</doi>
        <lastModDate>2024-05-30T17:06:13.5730000+00:00</lastModDate>
        
        <creator>Kejin Lv</creator>
        
        <creator>Tianxu Huang</creator>
        
        <subject>Simulated annealing algorithm; taboo search optimization algorithm; cloud computing; task scheduling; completion time; load balancing degree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the development of cloud computing technology, effective task scheduling can help improve work efficiency. This study therefore presents a hybrid algorithm based on simulated annealing and taboo search to optimize the cloud computing task scheduling model. The model uses the simulated annealing algorithm and the taboo search algorithm to convert the objective function into an energy function, allowing atoms to arrange quickly according to a certain rule to obtain the optimal solution. The study analyzed the model through simulation experiments, which showed that the optimal value of the hybrid algorithm in high-dimensional unimodal testing was 7.15E-247, far superior to the whale optimization algorithm&#39;s 3.99E-28 and the grey wolf optimization algorithm&#39;s 1.10E-28. The completion time of the hybrid algorithm decreased as the number of virtual machines grew, with the shortest time being 8.6 seconds, whereas its load balancing degree increased with the number of virtual machines. The final results indicated that the proposed hybrid algorithm exhibits high efficiency and superior performance in cloud computing task scheduling, especially when dealing with large-scale, complex optimization problems.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_9-Construction_of_Cloud_Computing_Task_Scheduling_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tendon-Driven Robotic Arm Control Method Based on Radial Basis Function Adaptive Tracking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150508</link>
        <id>10.14569/IJACSA.2024.0150508</id>
        <doi>10.14569/IJACSA.2024.0150508</doi>
        <lastModDate>2024-05-30T17:06:13.5570000+00:00</lastModDate>
        
        <creator>Xiaoke Fang</creator>
        
        <subject>Tendon drive; adaptive neural network; dynamic relationship; sliding mode control; trajectory tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>With the rapid development of intelligent technology, robotic arms are widely used in many fields. This study combines tendon-drive theory and a radial basis function neural network to construct a robotic arm model, and then combines the back-stepping method with a non-singular fast terminal sliding mode to improve the controller and optimize the system of the tendon-driven robotic arm model. Simulation tests on a commercial mathematical software platform showed that joint 2 achieves stable overlap of the position and velocity trajectories after 0.2s and 0.5s, with errors of 1&#176; and 1&#176;/s, respectively. The radial basis function neural network approximation of the robotic arm error converged to the true value at 14s. The optimized joint achieved accurate trajectory tracking after 0.2s. The control torque of joint 2 changes at 1.5s, 4.5s, and 8s, and the change is small. The tendon tension curve was smoother and more stable within the range of -0.05N~0.05N, showing that the robotic arm model is superior after controller optimization, and the disturbance observer accurately estimated the tracking trajectory of the tendon-driven robotic arm. Therefore, the radial basis function-based adaptive tracking algorithm achieves higher accuracy for the tendon-driven robotic arm model and provides a technical reference for the control systems of intelligent robotic arms.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_8-Tendon_Driven_Robotic_Arm_Control_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tile Defect Recognition Network Based on Amplified Attention Mechanism and Feature Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150507</link>
        <id>10.14569/IJACSA.2024.0150507</id>
        <doi>10.14569/IJACSA.2024.0150507</doi>
        <lastModDate>2024-05-30T17:06:13.5400000+00:00</lastModDate>
        
        <creator>JiaMing Zhang</creator>
        
        <creator>ZanXia Qiang</creator>
        
        <creator>YuGang Li</creator>
        
        <subject>Amplified attention mechanism; defect recognition; small target recognition; Yolo; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>To address the low AP of current tile defect detection and the incomplete coverage of defect types, this paper proposes YOLO-SA, a detection neural network based on an enhanced attention mechanism and feature fusion. We propose an enhanced attention mechanism, named the amplified attention mechanism, to reduce the attenuation of defect information within the neural network and improve its AP. We then use the EIoU loss function, four-layer feature fusion, and direct involvement of the backbone network in detection, among other methods, to construct an effective tile defect detection and recognition model, YOLO-SA. In experiments, this network achieves better results, with an improvement of 8.15 percentage points over YOLOv5s and 8.93 percentage points over YOLOv8n. The proposed model has high application value in tile defect recognition.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_7-Tile_Defect_Recognition_Network_Based_on_Amplified_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Hospital Cybersecurity Through IoT-Enabled Neural Network for Human Behavior Analysis and Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150506</link>
        <id>10.14569/IJACSA.2024.0150506</id>
        <doi>10.14569/IJACSA.2024.0150506</doi>
        <lastModDate>2024-05-30T17:06:13.5270000+00:00</lastModDate>
        
        <creator>Faisal ALmojel</creator>
        
        <creator>Shailendra Mishra</creator>
        
        <subject>IoT security; cyber security; network security; machine learning; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The integration of Internet of Things (IoT) technologies in hospital environments has introduced transformative changes in patient care and operational efficiency. However, this increased connectivity also presents significant cybersecurity challenges, particularly concerning the protection of patient data and healthcare operations. This research explores the application of advanced machine learning models, specifically LSTM-CNN hybrid architectures, for anomaly detection and behavior analysis in hospital IoT ecosystems. Employing a mixed-methods approach, the study utilizes LSTM-CNN models, coupled with the Mobile Health Human Behavior Analysis dataset, to analyze human behavior in a hospital cybersecurity context. The model architecture, tailored to the dynamic nature of hospital IoT activities, features a layered design. The training accuracy attains an impressive 99.53%, underscoring the model&#39;s proficiency in learning from the training data. On the testing set, the model exhibits robust generalization with an accuracy of 91.42%. This paper represents a significant advancement in the convergence of AI and healthcare cybersecurity. The model&#39;s efficacy and promising outcomes underscore its potential deployment in real-world hospital scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_6-Advancing_Hospital_Cybersecurity_Through_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Find a Research Collaborator: An Ontology-Based Solution to Find the Right Resources for Research Collaboration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150505</link>
        <id>10.14569/IJACSA.2024.0150505</id>
        <doi>10.14569/IJACSA.2024.0150505</doi>
        <lastModDate>2024-05-30T17:06:13.5100000+00:00</lastModDate>
        
        <creator>Nada Abdullah Alrehaili</creator>
        
        <creator>Muhammad Ahtisham Aslam</creator>
        
        <creator>Amani Falah Alharbi</creator>
        
        <creator>Rehab Bahaaddin Ashari</creator>
        
        <subject>Higher Education Ontology (HEO); Linked Open Data (LOD); Machine Reasoning; Semantic Web (SW); SPARQL Queries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Researchers in Higher Education (HE) institutions/academia and in industry are continuously engaged in generating new solutions and products for existing and emergent problems. Doing quality research and producing better scientific results depend greatly on solid research teams and scientific collaborators. Research output in HE institutions and industry can be optimized with appropriate resources in research teams and collaborations with suitable research partners. The main challenge in finding suitable resources for joint research projects and scientific collaborations pertains to the availability of data and metadata of researchers and their scientific work in traditional formats, for instance, websites, portals, documents, and traditional databases. However, these traditional data sources do not support intelligent and smart ways of finding and querying the right resources for joint research and scientific collaboration. A possible solution resides in the deployment of Semantic Web (SW) techniques and technologies for representing researchers and their research contribution data in a machine-understandable format, thus ultimately proving useful for smart and intelligent query-answering purposes. In pursuit of this, we present a general Methodology for Ontology Design and Development (MODD). We also describe the use of this methodology to design and develop Higher Education Ontology (HEO). This HEO can be used to automate various activities and processes in HE. In addition, we describe the use and adoption of the HEO through a case study on the topic of “finding the right resources for joint research and scientific collaboration”. Finally, we provide an analysis and evaluation of our methodology for posing smart queries and evaluating the results based on machine reasoning.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_5-Find_a_Research_Collaborator_An_Ontology_Based_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Framework in a Serverless Computing for Serving using Artificial Intelligence and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150504</link>
        <id>10.14569/IJACSA.2024.0150504</id>
        <doi>10.14569/IJACSA.2024.0150504</doi>
        <lastModDate>2024-05-30T17:06:13.4930000+00:00</lastModDate>
        
        <creator>Deepak Khatri</creator>
        
        <creator>Sunil Kumar Khatri</creator>
        
        <creator>Deepti Mishra</creator>
        
        <subject>Machine learning; data analytics; serverless computing; performance testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Serverless computing has grown in popularity as a paradigm for deploying applications in the cloud due to its ability to scale, cost-effectiveness, and simplified infrastructure management. Serverless architectures can benefit AI and Machine Learning (ML) models, which are becoming increasingly complex and resource-intensive. This study investigates the integration of AI/ML frameworks and models into serverless computing environments. It explains the steps involved, including model training, deployment, packaging, function implementation, and inference. Serverless platforms&#39; auto-scaling capabilities allow for seamless handling of varying workloads, while built-in monitoring and logging features ensure effective management. Continuous integration and deployment pipelines simplify the deployment process. Using serverless computing for AI/ML models offers developers scalability, flexibility, and cost savings, allowing them to focus on model development rather than infrastructure issues. The proposed model leverages performance forecasting and serverless computing model deployment using virtual machines, specifically utilizing the Knative platform. Experimental validation demonstrates that the model effectively predicts performance based on specific parameters with minimal data collection. The results indicate significant improvements in scalability and cost efficiency while maintaining optimal performance. This performance model can guide application owners in selecting the best configurations for varying workloads and assist serverless providers in setting adaptive defaults for target value configurations.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_4-Intelligent_Framework_in_a_Serverless_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Method for Collecting and Analyzing Voice Reviews to Gauge Customer Satisfaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150503</link>
        <id>10.14569/IJACSA.2024.0150503</id>
        <doi>10.14569/IJACSA.2024.0150503</doi>
        <lastModDate>2024-05-30T17:06:13.4800000+00:00</lastModDate>
        
        <creator>Nail Khabibullin</creator>
        
        <subject>Voice reviews; customer satisfaction; text mining; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Customer loyalty and customer satisfaction are premier goals of modern business, since these factors indicate customers’ future behaviour and ultimately impact the revenue and value of a business. Customers’ reviews, ratings, and rankings are a primary source for gauging customer satisfaction levels. Similar efforts have been reported in the literature. However, there has been no solution that can record real-time views of customers and provide analysis of those views. In this paper, a novel approach is presented that records, stores, and analyzes live customer reviews and uses text mining to perform various levels of analysis of the reviews. The approach also involves steps such as voice-to-text conversion, pre-processing, sentiment analysis, and sentiment report generation. This paper also presents a prototype tool that is the outcome of the present research. This research not only provides novel functionalities in the domain but also outperforms similar solutions in performance.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_3-An_Intelligent_Method_for_Collecting_and_Analyzing_Voice_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiview Outlier Filtered Pediatric Heart Sound Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150501</link>
        <id>10.14569/IJACSA.2024.0150501</id>
        <doi>10.14569/IJACSA.2024.0150501</doi>
        <lastModDate>2024-05-30T17:06:13.4630000+00:00</lastModDate>
        
        <creator>Sagnik Dakshit</creator>
        
        <subject>Deep learning; outlier filtering; machine learning; ECG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>Advancements in deep learning have generated large-scale interest in developing black-box models for various use cases in domains such as healthcare, in both at-home and critical-care settings, for the diagnosis and monitoring of various health conditions. The use of audio signals as a view for diagnosis is nascent, and the success of deep learning models in ingesting multimedia data provides an opportunity to use them as a diagnostic medium. For the widespread use of these decision support systems, it is prudent to develop high-performing systems, which require large quantities of training data, alongside low-cost methods of data collection that make them more accessible to developing regions of the world and the general population. However, data collected from low-cost collection, especially wireless devices, is prone to outliers and anomalies. The presence of outliers skews the hypothesis space of the model and leads to model drift on deployment. In this paper, we propose a multiview pipeline with interpretable outlier filtering on the small Mendeley Children Heart Sound dataset, collected using a wireless low-cost digital stethoscope. Our pipeline provides dimensionally reduced, interpretable visualizations for a functional understanding of how various outlier filtering methods affect the deep learning model hypothesis space, and explores fusion strategies for multiple views of heart sound data, namely the raw time-series signal and Mel Frequency Cepstrum Coefficients, achieving a state-of-the-art testing accuracy of 98.19%.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_1-Multiview_Outlier_Filtered_Pediatric_Heart_Sound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trigger Screen Restriction Framework, iOS use Case Towards Building a Gamified Physical Intervention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150502</link>
        <id>10.14569/IJACSA.2024.0150502</id>
        <doi>10.14569/IJACSA.2024.0150502</doi>
        <lastModDate>2024-05-30T17:06:13.4630000+00:00</lastModDate>
        
        <creator>Majed Hariri</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Gamification; physical activity; screen-time restriction; triggered screen restriction framework; TSR Framework; personalized interventions; gamified physical intervention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(5), 2024</description>
        <description>The growing trend of inactive lifestyles caused by excessive use of mobile devices raises severe concerns about people’s health and well-being. This paper illustrates the technical implementation of the Trigger Screen Restriction (TSR) framework, which integrates advanced technologies, including machine learning and gamification techniques, to address the limitations of traditional gamified physical interventions. The TSR framework encourages physical activity by leveraging the fear of missing out phenomenon, strategically restricting access to social media applications based on activity goals. The framework’s components, including the Screen Time Restriction, Notification Triggers, Computer Vision Model, and Reward Engine, work together to create an engaging and personalized experience that motivates users to engage in regular physical activity. Although the TSR framework represents a potentially significant step forward in gamified physical activity interventions, it remains a theoretical model requiring further investigation and rigorous testing.</description>
        <description>http://thesai.org/Downloads/Volume15No5/Paper_2-Trigger_Screen_Restriction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Extreme Learning Machine Based on p-order Laplace Kernel-Induced Loss Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504128</link>
        <id>10.14569/IJACSA.2024.01504128</id>
        <doi>10.14569/IJACSA.2024.01504128</doi>
        <lastModDate>2024-05-01T14:48:53.1400000+00:00</lastModDate>
        
        <creator>Liutao Luo</creator>
        
        <creator>Kuaini Wang</creator>
        
        <creator>Qiang Lin</creator>
        
        <subject>p-order Laplace kernel-induced loss; extreme learning machine; robustness; iterative reweighted</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Since datasets in practical problems are usually affected by various noises and outliers, the traditional extreme learning machine (ELM) shows low prediction accuracy and significant fluctuation in prediction results when learning such datasets. To overcome this shortcoming, the l2 loss function in the traditional ELM is replaced by a correntropy loss function induced by the p-order Laplace kernel. Correntropy is a local similarity measure that can reduce the impact of outliers during learning. In addition, introducing the p-order into the correntropy loss function helps reduce the sensitivity of the model to noises and outliers, and selecting an appropriate p can enhance the robustness of the model. An iterative reweighted algorithm is used to obtain the optimal hidden-layer output weights; outliers are given smaller weights in each iteration, significantly enhancing the robustness of the model. To verify the regression performance of the proposed model, it is compared with other methods on artificial datasets and eighteen benchmark datasets. Experimental results demonstrate that the proposed method outperforms the others in the majority of cases.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_128-Robust_Extreme_Learning_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of Network Security Attack Classification using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504127</link>
        <id>10.14569/IJACSA.2024.01504127</id>
        <doi>10.14569/IJACSA.2024.01504127</doi>
        <lastModDate>2024-05-01T14:48:53.0930000+00:00</lastModDate>
        
        <creator>Abdulaziz Saeed Alqahtani</creator>
        
        <creator>Osamah A. Altammami</creator>
        
        <creator>Mohd Anul Haq</creator>
        
        <subject>Machine learning; cyber security; intrusion detection; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>As internet usage and connected devices continue to proliferate, the concern for network security among individuals, businesses, and governments has intensified. Cybercriminals exploit these opportunities through various attacks, including phishing emails, malware, and DDoS attacks, leading to disruptions, data exposure, and financial losses. In response, this study investigates the effectiveness of machine learning algorithms for enhancing intrusion detection systems in network security. Our findings reveal that Random Forest demonstrates superior performance, achieving 90% accuracy and balanced precision-recall scores. KNN exhibits robust predictive capabilities, while Logistic Regression delivers commendable accuracy, precision, and recall. However, Naive Bayes exhibits slightly lower performance compared to other algorithms. The study underscores the significance of leveraging advanced machine learning techniques for accurate intrusion detection, with Random Forest emerging as a promising choice. Future research directions include refining models and exploring novel approaches to further enhance network security.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_127-A_Comprehensive_Analysis_of_Network_Security_Attack_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Packet Loss Concealment Estimating Residual Errors of Forward-Backward Linear Prediction for Bone-Conducted Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504126</link>
        <id>10.14569/IJACSA.2024.01504126</id>
        <doi>10.14569/IJACSA.2024.01504126</doi>
        <lastModDate>2024-04-30T10:33:21.4600000+00:00</lastModDate>
        
        <creator>Ohidujjaman </creator>
        
        <creator>Nozomiko Yasui</creator>
        
        <creator>Yosuke Sugiura</creator>
        
        <creator>Tetsuya Shimamura</creator>
        
        <creator>Hisanori Makinae</creator>
        
        <subject>Autocorrelation method; bone-conducted speech; modified covariance method; packet loss concealment; residual error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This study proposes a suitable model for packet loss concealment (PLC) by estimating the residual error of the linear prediction (LP) method for bone-conducted (BC) speech. Instead of conventional LP-based PLC techniques, in which the residual error is ignored, we employ forward-backward linear prediction (FBLP), known as the modified covariance (MC) method, by incorporating residual error estimates. The MC method provides precise LP estimation for short data lengths, reduces numerical difficulties, and produces a stable model, whereas the conventional autocorrelation (ACR) method of LP suffers from numerical problems. The MC method also has the effect of compressing the spectral dynamic range of the BC speech, which alleviates the numerical difficulties. Simulation results reveal that the proposed method provides excellent outcomes in several objective evaluation scores compared with conventional PLC techniques.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_126-Packet_Loss_Concealment_Estimating_Residual_Errors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rigorous Experimental Analysis of Tabular Data Generated using TVAE and CTGAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504125</link>
        <id>10.14569/IJACSA.2024.01504125</id>
        <doi>10.14569/IJACSA.2024.01504125</doi>
        <lastModDate>2024-04-30T10:33:21.4300000+00:00</lastModDate>
        
        <creator>Parul Yadav</creator>
        
        <creator>Manish Gaur</creator>
        
        <creator>Rahul Kumar Madhukar</creator>
        
        <creator>Gaurav Verma</creator>
        
        <creator>Pankaj Kumar</creator>
        
        <creator>Nishat Fatima</creator>
        
        <creator>Saqib Sarwar</creator>
        
        <creator>Yash Raj Dwivedi</creator>
        
        <subject>Synthetic data generation; tabular data generation; data privacy; conditional generative adversarial networks; variational autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Research on synthetic data generation has been progressing at a rapid pace, and novel methods are continually being designed. Earlier, statistical methods were used to learn the distributions of real data and then sample synthetic data from those distributions. Recent advances in generative models have led to more efficient modeling of complex high-dimensional datasets, and privacy concerns have driven the development of robust models with a lower risk of privacy breaches. Firstly, this paper presents a comprehensive survey of existing techniques for tabular data generation and evaluation metrics. Secondly, it elaborates a comparative analysis of state-of-the-art synthetic data generation techniques, specifically CTGAN and TVAE, on small, medium, and large-scale datasets with varying data distributions. It further evaluates the synthetic data using quantitative and qualitative metrics and techniques. Finally, the paper presents the outcomes and highlights the issues and shortcomings that still need to be addressed.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_125-Rigorous_Experimental_Analysis_of_Tabular_Data_Generated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Chicken Disease Classification Based on Vision Transformer and Combine with Integrated Gradients Explanation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504124</link>
        <id>10.14569/IJACSA.2024.01504124</id>
        <doi>10.14569/IJACSA.2024.01504124</doi>
        <lastModDate>2024-04-30T10:33:21.4130000+00:00</lastModDate>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Triet Minh Nguyen</creator>
        
        <subject>Vision Transformer; ViT16; classification chicken disease; transfer learning; fine-tuning; image classification; integrated gradients explanation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Chicken diseases are an important problem in the livestock industry, affecting the health and production performance of chicken flocks worldwide. These diseases can seriously damage the health of chickens, reduce egg production, or increase mortality, causing great economic losses to farmers. Therefore, detecting and preventing diseases in chickens is a top concern in the livestock industry, to ensure the health and sustainable production of chicken flocks. In recent years, advances in machine learning techniques have shown promise in solving challenges related to image diagnosis and classification. Leveraging the power of machine learning models, we propose the ViT16 model for disease classification in chickens, demonstrating its potential in assisting healthcare professionals to diagnose chicken flocks more effectively. In this study, ViT16 demonstrated its potential and strengths when compared with five models in the CNN architecture and ViT32 in the ViT architecture in the task of classifying chicken disease images, with accuracies of 99.25%, 99.75%, 100%, and 98.25% in four experimental scenarios with our enhanced dataset and fine-tuning. These results were generated from transfer learning and model tuning on an augmented dataset consisting of 8067 images classified into four classes: Coccidiosis, Newcastle Disease, Salmonella, and Healthy. Furthermore, the Integrated Gradients explanation has an important role in increasing the transparency and understanding of the image classification model, thereby improving and optimizing model performance. The performance evaluation of each model is done through in-depth analysis, including metrics such as precision, recall, F1 score, accuracy, and confusion matrix.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_124-Improving_Chicken_Disease_Classification_Based_on_Vision_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Combination of Multi-Input and Self-Attention for Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504123</link>
        <id>10.14569/IJACSA.2024.01504123</id>
        <doi>10.14569/IJACSA.2024.01504123</doi>
        <lastModDate>2024-04-30T10:33:21.4000000+00:00</lastModDate>
        
        <creator>Nam Vu Hoai</creator>
        
        <creator>Thuong Vu Van</creator>
        
        <creator>Dat Tran Anh</creator>
        
        <subject>Multi-input; self-attention; deep learning models; video-based sign language; sign language recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Sign language recognition can be considered as a branch of human action recognition. The deaf-muted community utilizes upper body gestures to convey sign language words. With the rapid development of intelligent systems based on deep learning models, video-based sign language recognition models can be integrated into services and products to improve the quality of life for the deaf-muted community. However, comprehending the relationship between different words within videos is a complex and challenging task, particularly in understanding sign language actions in videos, further constraining the performance of previous methods. Recent methods have been explored to generate video annotations to address this challenge, such as creating questions and answers for images. An optimistic approach involves fine-tuning autoregressive language models trained using multi-input and self-attention mechanisms to facilitate understanding of sign language in videos. We have introduced a bidirectional transformer language model, MISA (multi-input self-attention), to enhance solutions for VideoQA (video question and answer) without relying on labeled annotations. Specifically, (1) one direction of the model generates descriptions for each frame of the video to learn from the frames and their descriptions, and (2) the other direction generates questions for each frame of the video, then integrates inference with the first aspect to produce questions that effectively identify sign language actions. Our proposed method has outperformed recent techniques in VideoQA by eliminating the need for manual labeling across various datasets, including CSL-Daily, PHOENIX14T, and PVSL (our dataset). Furthermore, it demonstrates competitive performance in low-data environments and operates under supervision.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_123-On_the_Combination_of_Multi_Input_and_Self_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Task Offloading Optimization in Mobile Edge Computing Systems with Time-Varying Workloads Using Improved Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504122</link>
        <id>10.14569/IJACSA.2024.01504122</id>
        <doi>10.14569/IJACSA.2024.01504122</doi>
        <lastModDate>2024-04-30T10:33:21.3830000+00:00</lastModDate>
        
        <creator>Mohammad Asique E Rasool</creator>
        
        <creator>Anoop Kumar</creator>
        
        <creator>Asharul Islam</creator>
        
        <subject>Particle Swarm Optimization (PSO); Mobile Edge Computing (MEC); Multi-User Multi-Server systems; dynamic load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Mobile edge computing (MEC) enables offloading of compute-intensive and latency-sensitive tasks from resource-constrained mobile devices to servers at the network edge. This paper considers the dynamic optimization of task offloading in multi-user multi-server MEC systems with time-varying task workloads. The arrival times and computational demands of tasks are modeled as stochastic processes. The goal is to minimize the average task delay by optimal dynamic server selection over time. A particle swarm optimization (PSO) based algorithm is proposed that makes efficient offloading decisions in each time slot based on newly arrived tasks and pending workload across servers. The PSO-based policy is shown to outperform heuristics like genetic algorithms and simulated annealing in terms of adaptability to workload fluctuations and spikes. Experiments under varying task arrival rates demonstrate PSO’s capability to dynamically optimize time-averaged delay and energy costs through joint optimization of server selection and resource allocation. The proposed techniques provide a practical and efficient dynamic load balancing mechanism for real-time MEC systems with variable workloads.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_122-Dynamic_Task_Offloading_Optimization_in_Mobile_Edge_Computing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Potato Diseases Classification Based on Custom ConvNeXtSmall and Combine with the Explanation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504121</link>
        <id>10.14569/IJACSA.2024.01504121</id>
        <doi>10.14569/IJACSA.2024.01504121</doi>
        <lastModDate>2024-04-30T10:33:21.3530000+00:00</lastModDate>
        
        <creator>Huong Hoang Luong</creator>
        
        <subject>Potato disease; classification; fine-tuning; transfer learning; Convolutional Neural Network (CNN); k-means clustering; Gradient-weighted Class Activation Mapping (Grad-CAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Potatoes are short-term crops grown for harvesting tubers. The potato is a tuber that grows on roots and is the fourth most common crop after rice, wheat, and corn. Fresh potatoes can also be used in an incredible variety of dishes by baking, boiling, or frying them. Moreover, the paper, textile, wood, and pharmaceutical industries also make extensive use of potato starch. However, soil and climate pollution are highly unfavorable for potato growth and lead to many diseases such as common scab, black scurf, blackleg, dry rot, and pink rot. Thus, several types of research in medicine and computing were started for the early detection, classification, and treatment of potato diseases. In this study, transfer learning and fine-tuning were applied to potato disease classification based on a custom ConvNeXtSmall model. In addition, Gradient-weighted Class Activation Mapping (i.e., Grad-CAM) is provided for visual explanation in the final result after classification. For potato illness segmentation, k-means clustering was used to differentiate between healthy and diseased sections based on color and texture. The data was collected from numerous websites and validated by the Bangladesh Agricultural Research Institute (i.e., BARI), including six types of potato diseases and healthy images. With a Convolutional Neural Network (i.e., CNN) model from the Keras library, our study achieved validation accuracy, test accuracy, and F1 scores in seven-class classification of 99.49%, 98.97%, and 98.97%, respectively. Concerning four-class classification, high accuracy values were obtained for most of the models (i.e., 100%).</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_121-Improving_Potato_Diseases_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Educational Robot for Exploring the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504120</link>
        <id>10.14569/IJACSA.2024.01504120</id>
        <doi>10.14569/IJACSA.2024.01504120</doi>
        <lastModDate>2024-04-30T10:33:21.3370000+00:00</lastModDate>
        
        <creator>Zhumaniyaz Mamatnabiyev</creator>
        
        <creator>Christos Chronis</creator>
        
        <creator>Iraklis Varlamis</creator>
        
        <creator>Meirambek Zhaparov</creator>
        
        <subject>Educational robots; Internet of Things; IoT Education; Arduino for Education; IoT Educational Kit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Educational robots, when integrated into STEM (Science, Technology, Engineering, and Mathematics) education across a range of age groups, serve to enhance learning experiences by facilitating hands-on activities. These robots are particularly instrumental in the realm of Internet of Things (IoT) education, guiding learners from basic to advanced applications. This paper introduces the IoTXplorBot, an open-source and open-design educational robot, developed to foster the learning of IoT concepts in a cost-effective manner. The robot is equipped with a variety of low-cost sensors and actuators and features an interchangeable microcontroller that is compatible with other development boards from the Arduino Nano family. This compatibility allows for diverse programming languages and varied purposes. The robot’s printed circuit board is designed to be user-friendly, even for those with no engineering skills. The proposed board includes additional pins and a breadboard on the robot’s chassis, enabling the extension of the robot with other hardware components. The use of the Arduino board allows learners to leverage all capabilities from Arduino, such as the Arduino IoT cloud, dashboard, online compiler, and project hub. These resources aid in the development of new projects and in finding solutions to encountered problems. The paper concludes with a discussion on the future development of this robot, underscoring its potential for ongoing adaptation and improvement.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_120-Development_of_an_Educational_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictor Model for Chronic Kidney Disease using Adaptive Gradient Clipping with Deep Neural Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504119</link>
        <id>10.14569/IJACSA.2024.01504119</id>
        <doi>10.14569/IJACSA.2024.01504119</doi>
        <lastModDate>2024-04-30T10:33:21.3030000+00:00</lastModDate>
        
        <creator>Neeraj Sharma</creator>
        
        <creator>Praveen Lalwani</creator>
        
        <subject>CT Kidney; VGG16; ResNet50; InceptionV3; gradient clipping; image processing; multiclass classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This research aims to develop a computer vision based predictive model for three prominent kidney ailments, namely Cyst, Stone, and Tumor, which are common renal disorders that require timely medical intervention. This classification model is trained and tested using the multi-class CT Kidney Dataset, which contains 12,446 images collected from PACS (Picture Archiving and Communication System) from different hospitals in Dhaka, Bangladesh. Initial models are built using plain VGG16, ResNet50, and InceptionV3 deep neural nets. The clip-value filter of the Adam optimizer is then applied, which results in marginally improved accuracy; finally, Adaptive Gradient Clipping is applied as a replacement for the batch normalization process, which produces the best overall results. The Adaptive Gradient Clipping based model achieves accuracy of 97.15% with VGG16, 99.5% with ResNet50, and 99.23% with InceptionV3. Overall classification metrics are best for ResNet50 and InceptionV3 with the Adaptive Gradient Clipping technique.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_119-Predictor_Model_for_Chronic_Kidney_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficiency Hardware Design for Lane Detector Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504118</link>
        <id>10.14569/IJACSA.2024.01504118</id>
        <doi>10.14569/IJACSA.2024.01504118</doi>
        <lastModDate>2024-04-30T10:33:21.2900000+00:00</lastModDate>
        
        <creator>Duc Khai Lam</creator>
        
        <subject>FPGA; Hough transform; look up table; lane detector; autonomous vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The Hough Transform (HT) algorithm is a popular method for lane detection based on the &#39;voting&#39; process to extract complete lines. The voting process is derived from the HT algorithm and then executed in parameter space (ρ, θ) to identify the &#39;votes&#39; with the highest count, meaning that image points with pairs of angle θ and distance ρ corresponding to those &#39;votes&#39; lie on the same line. However, this algorithm requires significant memory and computational complexity. In this paper, we propose a new algorithm for the Hough Space (HS) by utilizing (Y-intercept, θ) parameterization instead of (ρ, θ) parameterization and lane direction. This simplifies the inverse LHT operation and reduces the accumulator&#39;s size and computational complexity compared to the standard LHT. We aim to minimize processing time per frame for real-time processing. Our implementation operates at a frequency of 250MHz, and the processing time for each frame with a resolution of 1024x1024 is 4.19ms, achieving an accuracy of 85.49%. This design is synthesized on the Virtex-7 VC707 FPGA.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_118-An_Efficiency_Hardware_Design_for_Lane_Detector_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Trust Management Scheme Based on Blockchain and KNN Reinforcement Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504117</link>
        <id>10.14569/IJACSA.2024.01504117</id>
        <doi>10.14569/IJACSA.2024.01504117</doi>
        <lastModDate>2024-04-30T10:33:21.2570000+00:00</lastModDate>
        
        <creator>Ahdab Hulayyil Aljohani</creator>
        
        <creator>Abdulaziz Al-shammri</creator>
        
        <subject>Vehicular Ad hoc Networks (VANETs); Blockchain; trust management; reinforcement learning algorithm; privacy preservation; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>There has been a continual rise in the quantity of smart and autonomous automobiles in recent decades. The effectiveness of communication among vehicles in Vehicular Ad-hoc Networks (VANETs) is critical for ensuring the safety of drivers’ lives. The primary objective of VANETs is to share critical information regarding life-threatening events, such as traffic jams and accident alerts, in a timely and accurate manner. Nevertheless, typical VANETs encounter several security issues involving threats to confidentiality, integrity, and availability. This paper proposes a new decentralized and tamper-resistant scheme for privacy preservation. We designed a new trust management system that utilizes blockchain technology. We strive to establish trust between vehicles and infrastructure and preserve privacy by guaranteeing the authenticity and integrity of the information exchanged in VANETs. Our proposal adopts the principles of reinforcement learning to dynamically evaluate and allocate trust scores to vehicles and infrastructure based on their behavior. The scheme’s performance has been evaluated based on key metrics. The results show that our new system provides an effective behavior management technique while preserving vehicle privacy.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_117-New_Trust_Management_Scheme_Based_on_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GROCAFAST: Revolutionizing Grocery Shopping for Seamless Convenience and Enhanced User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504116</link>
        <id>10.14569/IJACSA.2024.01504116</id>
        <doi>10.14569/IJACSA.2024.01504116</doi>
        <lastModDate>2024-04-30T10:33:21.2430000+00:00</lastModDate>
        
        <creator>Abeer Hakeem</creator>
        
        <creator>Layan Fakhurji</creator>
        
        <creator>Raneem Alshareef</creator>
        
        <creator>Elaf Aloufi</creator>
        
        <creator>Manar Altaiary</creator>
        
        <creator>Afraa Attiah</creator>
        
        <creator>Linda Mohaisen</creator>
        
        <subject>Grocery shopping app; route map; grocery shopping experience; Dijkstra’s algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This paper presents the Smart Grocery Shopping system (GROCAFAST), a system for optimizing the grocery shopping experience and improving efficiency for shoppers. The GROCAFAST system consists of a mobile app and a server component. The mobile app allows shoppers to create, manage, and update grocery lists while providing store navigation assistance. The server component processes data, generates optimized route maps, maintains an inventory database, and facilitates the online chat room. Unlike existing grocery shopping systems, GROCAFAST is cost-effective as it does not rely on any extra infrastructure and reduces both shopping time and walking steps. GROCAFAST utilizes Dijkstra’s algorithm to efficiently guide shoppers through the store, minimizing the time needed to visit all aisles containing their desired items. The user-friendly interface and time-saving features make grocery shopping more efficient and enjoyable. The evaluation results demonstrate that GROCAFAST reduces the total shopping time by 67.6% when compared to a traditional approach that mimics the way shoppers visit a grocery store, browse aisles, and select items. It also reduces the walking steps by 59%.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_116-GROCAFAST_Revolutionizing_Grocery_Shopping_for_Seamless_Convenience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Patient-Centric Healthcare IoT Platform with Blockchain and Smart Contract Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504115</link>
        <id>10.14569/IJACSA.2024.01504115</id>
        <doi>10.14569/IJACSA.2024.01504115</doi>
        <lastModDate>2024-04-30T10:33:21.2100000+00:00</lastModDate>
        
        <creator>Duc B. T</creator>
        
        <creator>Trung P. H. T</creator>
        
        <creator>Trong N. D. P</creator>
        
        <creator>Phuc N. T</creator>
        
        <creator>Khoa T. D</creator>
        
        <creator>Khiem H. G</creator>
        
        <creator>Nam B. T</creator>
        
        <creator>Bang L. K</creator>
        
        <subject>Medical test result; blockchain; smart contract; NFT; Ethereum; Fantom; polygon; binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The Internet of Things (IoT) has been rapidly integrated into various industries, with healthcare emerging as a key area of impact. A notable development in this sector is the IoHT-MBA system, a specialized Internet of Healthcare Things (IoHT) framework. This system utilizes a microservice approach combined with a brokerless architecture, efficiently tackling issues like data gathering, managing users and devices, and controlling devices remotely. Despite its effectiveness, there’s a growing need to improve the privacy and control of patient data. To address this, we propose an enhanced version of the IoHT-MBA system, incorporating blockchain technology, specifically through the use of Hyperledger Fabric. This integration aims to create a more secure, transparent, and patient-centric data management platform. The system enables patients to oversee their peripheral devices, such as smartphones and sensors. These devices are integrated as part of the edge layer of the IoHT, contributing to a decentralized storage service. In our model, data is primarily retained on user devices, with only summarized data being communicated to service providers and recorded on the blockchain. This approach significantly boosts data privacy and user control. Access to user data is strictly regulated and must align with the patient’s privacy conditions, which are established through smart contracts, thus providing an additional layer of security and transparency. We have conducted an evaluation of our blockchain-enhanced platform using key theories in microservice and brokerless architecture, such as Round Trip Time and Broken Connection Test Cases. Additionally, we’ve performed tests on data generation and queries using Hyperledger Caliper. The results confirm the strength and efficiency of our blockchain-integrated system in the healthcare IoT domain.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_115-Developing_a_Patient_Centric_Healthcare_IoT_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing AI to Generate Indian Sign Language from Natural Speech and Text for Digital Inclusion and Accessibility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504114</link>
        <id>10.14569/IJACSA.2024.01504114</id>
        <doi>10.14569/IJACSA.2024.01504114</doi>
        <lastModDate>2024-04-30T10:33:21.1970000+00:00</lastModDate>
        
        <creator>Parul Yadav</creator>
        
        <creator>Puneet Sharma</creator>
        
        <creator>Pooja Khanna</creator>
        
        <creator>Mahima Chawla</creator>
        
        <creator>Rishi Jain</creator>
        
        <creator>Laiba Noor</creator>
        
        <subject>Sign language generation; automatic speech recognition; speech-to-indian sign language; indian sign language; digital inclusion and accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Sign language is the fundamental mode of communication for those who are deaf and mute, as well as for individuals with hearing impairments. Regrettably, there has been a dearth of research on Indian Sign Language, primarily due to the lack of adequate grammar and regional variations in the language. Consequently, research in this area has been limited. The primary objective of our research is to develop a sophisticated speech/text-to-Indian sign language conversion system that employs advanced 3D modeling techniques to display sign language motions. Our research is motivated by our desire to promote effective communication between hearing and hearing-impaired individuals in India. The proposed model integrates Automatic Speech Recognition (ASR) technology, which effectively transforms spoken words into text, and leverages 3D modeling techniques to generate corresponding sign language motions. We have conducted a comprehensive study of the grammar of Indian Sign Language, which includes identifying sentence structure and signs that represent the tense of the subject. It is noteworthy that the sentence structure of Indian Sign Language follows the Subject-Object-Verb sequence, in contrast to spoken language, which follows the Subject-Verb-Object structure. To enhance user experience as well as digital inclusion and accessibility, the research incorporates user-friendly and simple interfaces that allow individuals to interact effortlessly and intuitively with the system. The model/system is equipped to receive speech input through a microphone or text and provide immediate feedback through 3D-modeled videos that display the generated sign language gestures, and has achieved 99.2% accuracy. Our main goal is to promote digital inclusion, improve accessibility, and enhance the user experience.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_114-Harnessing_AI_to_Generate_Indian_Sign_Language_from_Natural_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating Tomato Ripeness Classification and Counting with YOLOv9</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504113</link>
        <id>10.14569/IJACSA.2024.01504113</id>
        <doi>10.14569/IJACSA.2024.01504113</doi>
        <lastModDate>2024-04-30T10:33:21.1800000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Phuc Pham Tien</creator>
        
        <subject>Tomato monitoring; manual counting; Artificial Intelligence (AI); Image analysis techniques; YOLO; YOLOv9</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This article proposes a novel solution to the long-standing issue of manual monitoring and counting of ripe tomatoes, which often relies on visual inspection and is time-consuming, labor-intensive, and prone to inaccuracies. By leveraging the power of artificial intelligence (AI) and image analysis techniques, a more efficient and precise method for automating this process is introduced. This approach promises to significantly reduce labor requirements while enhancing accuracy, thus improving overall quality and productivity. In this study, we explore the application of the latest version of YOLO (You Only Look Once), specifically YOLOv9, in automating the classification of tomato ripeness levels and counting tomatoes. To assess the performance of the proposed model, the study employs standard evaluation metrics including Precision, Recall, and mAP50. These metrics provide valuable insights into the model’s ability to accurately detect and count tomatoes in real-world scenarios. The results indicate that the YOLOv9-based model achieves superior performance, as evidenced by the following evaluation metrics: Precision: 0.856, Recall: 0.832, and mAP50: 0.882. By leveraging YOLOv9 and comprehensive evaluation metrics, this research aims to provide a robust solution for automating tomato monitoring processes. Additionally, future integration of robotics into the collection phase can further optimize efficiency and enable the expansion of cultivation areas.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_113-Automating_Tomato_Ripeness_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Driven Decentralization of Electronic Health Records in Saudi Arabia: An Ethereum-Based Framework for Enhanced Security and Patient Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504112</link>
        <id>10.14569/IJACSA.2024.01504112</id>
        <doi>10.14569/IJACSA.2024.01504112</doi>
        <lastModDate>2024-04-30T10:33:21.1630000+00:00</lastModDate>
        
        <creator>Atef Masmoudi</creator>
        
        <creator>Maha Saeed</creator>
        
        <subject>Blockchain; Ethereum; smart contract; Web 3.0; decentralized application; electronic health records</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the rapidly evolving landscape of e-HealthCare in Saudi Arabia, enhancing the security and integrity of Electronic Health Records (EHRs) is imperative. Existing systems encounter challenges stemming from centralized storage, vulnerable data integrity, susceptibility to power failures, and issues of ownership by entities other than the patients themselves. Moreover, the sharing of sensitive patient information among anonymous bodies exacerbates the vulnerability of these records. In response to these challenges, this paper advocates for the transformative potential of blockchain technology. Blockchain, with its decentralized and distributed architecture, offers a revolutionary approach to communication among network nodes, eliminating the need for a central authority. This paper proposes a solution that places the patient at the forefront, empowering them as the primary controller of their medical data. The research delves into the current state of e-HealthCare in Saudi Arabia, examines the challenges faced by existing EHR systems, and introduces blockchain technology, particularly Ethereum, as a viable and transformative solution. The paper details the use of the Ethereum blockchain to secure and manage medical records, with a Public Key Infrastructure (PKI) applied to safeguard the confidentiality of patient information. The decentralized InterPlanetary File System (IPFS) is employed for the secure and resilient storage of encrypted medical records. Additionally, smart contracts, integral to the Ethereum blockchain, play a central role in automating and enforcing the rules governing access to medical records. Moreover, a Web 3.0 decentralized application (DApp) is developed to provide a user-friendly interface, empowering patients to seamlessly interact with and control access to their health data. Finally, this paper presents a guiding framework for clinicians, policymakers, and academics, illustrating the transformative potential of blockchain and associated technologies in revolutionizing EHR management in Saudi Arabia’s healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_112-Blockchain_Driven_Decentralization_of_Electronic_Health_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of PID Controller Parameter using the Geometric Mean Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504111</link>
        <id>10.14569/IJACSA.2024.01504111</id>
        <doi>10.14569/IJACSA.2024.01504111</doi>
        <lastModDate>2024-04-30T10:33:21.1500000+00:00</lastModDate>
        
        <creator>Osama Abdellatif</creator>
        
        <creator>Mohamed Issa</creator>
        
        <creator>Ibrahim Ziedan</creator>
        
        <subject>Metaheuristics; PID controller; GMO; DC motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The PID controller is a crucial element in numerous engineering applications. However, a significant challenge with PID lies in selecting optimal parameter values. Conventional methods need extra tuning and may not yield the best performance. In this study, a recently introduced metaheuristic algorithm, the Geometric Mean Optimizer (GMO), is employed to identify the most suitable PID parameter values. In conventional methods, fixed empirical equations are applied to select the parameter values of PID. In GMO, there is a wide search space for selecting the optimal parameter values of PID based on an objective function. The objective function that the GMO seeks to minimize is the Integral of Absolute Error (IAE). GMO is chosen for its effectiveness in balancing exploration and exploitation of the search space, as well as its robustness and scalability. GMO is tested in the context of optimizing PID parameters for an engineering application: DC motor regulation. The results demonstrated GMO’s superiority over comparable algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_111-Optimization_of_PID_Controller_Parameter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Granularity Feature Fusion for Enhancing Encrypted Traffic Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504110</link>
        <id>10.14569/IJACSA.2024.01504110</id>
        <doi>10.14569/IJACSA.2024.01504110</doi>
        <lastModDate>2024-04-30T10:33:21.1170000+00:00</lastModDate>
        
        <creator>Quan Ding</creator>
        
        <creator>Zhengpeng Zha</creator>
        
        <creator>Yanjun Li</creator>
        
        <creator>Zhenhua Ling</creator>
        
        <subject>Encrypted traffic classification; BERT; multi-granularity fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Encrypted traffic classification, a pivotal process in network security and management, involves analyzing and categorizing data traffic that has been encrypted for privacy and security. This task demands the extraction of distinctive and robust feature representations from content-concealed data to ensure accurate and reliable classification. Traditional approaches have focused on utilizing either the payload of encrypted traffic or statistical features for more precise classification. While these methods achieve relative success, their limitation lies in not harnessing multi-grained features, thus impeding further advancements in encrypted traffic classification capabilities. To tackle this challenge, ET-CompBERT is presented, an innovative framework specifically designed for the fusion of multi-granularity features in encrypted traffic, encompassing both payload and global temporal attributes. The extensive experiments reveal that our approach significantly enhances classification performance in data-rich scenarios (achieving up to a +4.43% improvement in certain cases over existing methods) and establishes state-of-the-art results on training sets with different sizes. The source codes will be released after paper acceptance.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_110-Multi-Granularity_Feature_Fusion_for_Enhancing_Encrypted_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unified Approach for Scalable Task-Oriented Dialogue System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504108</link>
        <id>10.14569/IJACSA.2024.01504108</id>
        <doi>10.14569/IJACSA.2024.01504108</doi>
        <lastModDate>2024-04-30T10:33:21.0870000+00:00</lastModDate>
        
        <creator>Manisha Thakkar</creator>
        
        <creator>Nitin Pise</creator>
        
        <subject>Task-oriented dialogue system; unified; adaptive multi-domain; large language models; prompts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Task-oriented dialogue (TOD) systems are currently the subject of extensive research owing to their immense significance in the fields of human-computer interaction and natural language processing. These systems assist users in accomplishing certain tasks efficiently. However, most commercial TOD systems rely on handcrafted rules and offer functionalities in a single domain. These systems perform well but are not scalable to multiple domains without manual effort. Pretrained language models (PLMs) have been popularly applied to enhance these systems via fine-tuning. Recently, large language models (LLMs) have made significant advancements in this field but lack the ability to converse proactively over multiple turns, which is an essential parameter for designing TOD systems. To address these challenges, this paper initially studies the impact of language understanding on the overall performance of a TOD system in a multi-domain environment. Furthermore, to design an efficient TOD system, we propose a unified approach by leveraging an LLM with a reinforcement learning (RL) based dialogue policy. The experimental results demonstrate that a unified approach using an LLM is more promising for scaling the capabilities of TOD systems, with prompt-adaptive instructions and more user-friendly, human-like response generation.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_108-Unified_Approach_for_Scalable_Task_Oriented_Dialogue_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Day Trading Strategy Based on Transformer Model, Technical Indicators and Multiresolution Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504109</link>
        <id>10.14569/IJACSA.2024.01504109</id>
        <doi>10.14569/IJACSA.2024.01504109</doi>
        <lastModDate>2024-04-30T10:33:21.0870000+00:00</lastModDate>
        
        <creator>Salahadin A. Mohammed</creator>
        
        <subject>Artificial neural network; Saudi stock exchange; machine learning; deep learning; transformer model; stock price prediction; time series analysis; technical analysis; multiresolution analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Stock prices are very volatile because they are affected by an infinite number of factors, such as economic, social, political, and human-behavioral ones. This makes finding a consistently profitable day trading strategy extremely challenging, which is why an overwhelming majority of stock traders lose money over time. Professional day traders, who are very few in number, have a trading strategy that can exploit this price volatility to consistently earn profit from the market. This study proposes a consistently profitable day trading strategy based on price volatility, the transformer model, time2vec, technical indicators, and multiresolution analysis. The proposed trading strategy has eight trading systems, each with a different profit target based on the risk taken per trade. This study shows that the proposed trading strategy results in consistent profits when the profit target is 1.5 to 3.5 times the risk taken per trade. If the profit target is not in that range, then it may result in a loss. The proposed trading strategy was compared with the buy-and-hold strategy; it showed consistent profits with all the stocks, whereas the buy-and-hold strategy was inconsistent and resulted in losses in half the stocks. Also, three of the consistently profitable trading systems showed significantly higher average profits and expectancy than the buy-and-hold trading strategy.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_109-Day_Trading_Strategy_Based_on_Transformer_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of the IoT Integration and Sustainability on Competition Within an Oligopolistic 3PL Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504107</link>
        <id>10.14569/IJACSA.2024.01504107</id>
        <doi>10.14569/IJACSA.2024.01504107</doi>
        <lastModDate>2024-04-30T10:33:21.0400000+00:00</lastModDate>
        
        <creator>Kenza Izikki</creator>
        
        <creator>Aziz Ait Bassou</creator>
        
        <creator>Mustapha Hlyal</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>Third party logistics; internet of things; sustainability; oligopoly; game theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The third party logistics (3PL) sector holds a crucial role in modern supply chains, streamlining the movement of goods and optimizing logistics operations. The 3PL industry’s journey towards digitalization and sustainability reflects a crucial strategy to create an efficient and resilient supply chain. The industry is increasingly integrating Internet of Things (IoT) technologies into its operations. The IoT is a cutting-edge technology widely used in the supply chain realm, as it offers numerous advantages, namely traceability and real-time decision-making capability. In view of growing concerns for the environment and social welfare, supply chain actors are pursuing various initiatives to shift to more sustainable practices. This paper studies the competition within an oligopolistic market of 3PL firms. Through the lens of game theory, we construct a mathematical model in which a supply chain composed of n firms competes through pricing, IoT integration efforts, and sustainability efforts. Results show that IoT integration and sustainability efforts impact the pricing decisions of the firm. Moreover, this study highlights how rivals’ decisions on IoT integration and sustainability efforts impact the firm’s decision-making processes. Furthermore, a comparison of the model decision variables within a duopoly and an oligopolistic setting is conducted. This paper concludes that rivals’ strategies have a significant impact on the firm’s decisions and profitability.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_107-Impact_of_the_IoT_Integration_and_Sustainability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Machine Learning for Epileptic Seizure Detection using EEG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504106</link>
        <id>10.14569/IJACSA.2024.01504106</id>
        <doi>10.14569/IJACSA.2024.01504106</doi>
        <lastModDate>2024-04-30T10:33:21.0230000+00:00</lastModDate>
        
        <creator>S. Vasanthadev Suryakala</creator>
        
        <creator>T. R. Sree Vidya</creator>
        
        <creator>S. Hari Ramakrishnans</creator>
        
        <subject>Federated Machine Learning (FML); electroencephalography; epileptic seizure; cross-decentralization; health care; sensitivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Early seizure detection in epilepsy is difficult. The use of Electroencephalography (EEG) data has proven transformational; however, standard centralized machine learning algorithms have privacy and generalization issues. A decentralized approach to epileptic seizure detection using Federated Machine Learning (FML) is presented in this research. The concentration of critical EEG data in conventional models may compromise patient confidentiality. The proposed FML technique trains models using local datasets without sharing raw EEG recordings. Hence, the dataset used for the model is devoid of noise, rendering preprocessing unnecessary. Training using decentralized data sources broadens the model&#39;s seizure pattern repertoire, improving its adaptability to case heterogeneity. The FML model shows that the suggested method for EEG-based epileptic seizure identification is promising for healthcare implementation and deployment. The proposed approach obtains sensitivity, specificity, and accuracy of 98.24%, 99.23%, and 99%, respectively. The proposed study is validated against the existing literature, and the developed model outperforms the existing studies.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_106-Federated_Machine_Learning_for_Epileptic_Seizure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Algorithm using Rivest-Shamir-Adleman and Elliptic Curve Cryptography for Secure Email Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504105</link>
        <id>10.14569/IJACSA.2024.01504105</id>
        <doi>10.14569/IJACSA.2024.01504105</doi>
        <lastModDate>2024-04-30T10:33:21.0070000+00:00</lastModDate>
        
        <creator>Kwame Assa-Agyei</creator>
        
        <creator>Kayode Owa</creator>
        
        <creator>Tawfik Al-Hadhrami</creator>
        
        <creator>Funminiyi Olajide</creator>
        
        <subject>RSA; ECC; Advanced Encryption Standard; encryption; decryption; signature generation; verification; key exchange time; hybrid encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Email serves as the primary communication system in our daily lives, and to bolster its security and efficiency, many email systems employ Public Key Infrastructure (PKI). However, the convenience of email also introduces numerous security vulnerabilities, including unauthorized access, eavesdropping, identity spoofing, interception, and data tampering. This study is primarily focused on examining how two encryption techniques, RSA and ECC, affect the efficiency of secure email systems. Furthermore, the research seeks to introduce a hybrid cryptography algorithm that utilizes both RSA and ECC to ensure security and confidentiality in the context of secure email communication. The research evaluates various performance metrics, including key exchange time, encryption and decryption durations, signature generation, and verification times, to understand how these encryption methods affect the efficiency and efficacy of secure email communication. The experimental findings highlight the advantages of ECC in terms of key exchange time, making it a compelling choice for establishing secure email communication channels. While RSA demonstrates a slight advantage in encryption, decryption, and signature generation for smaller files, ECC&#39;s efficiency becomes apparent as file sizes increase, positioning it as a favorable option for handling larger attachments in secure emails. The experimental comparison also shows that the hybrid encryption algorithm optimizes key exchange time, encryption efficiency, and signature generation and verification times.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_105-Hybrid_Algorithm_using_Rivest_Shamir_Adleman.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification through Transfer Learning with Vision Transformer, PCA, and Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504104</link>
        <id>10.14569/IJACSA.2024.01504104</id>
        <doi>10.14569/IJACSA.2024.01504104</doi>
        <lastModDate>2024-04-30T10:33:20.9930000+00:00</lastModDate>
        
        <creator>Juan Gutierrez-Cardenas</creator>
        
        <subject>Breast cancer; vision transformer; transfer learning; PCA; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Breast cancer is a leading cause of death among women worldwide, making early detection crucial for saving lives and preventing the spread of the disease. Deep Learning and Machine Learning techniques, coupled with the availability of diverse breast cancer datasets, have proven to be effective in assisting healthcare practitioners worldwide. Recent advancements in image classification models, such as Vision Transformers and pretrained models, offer promising avenues for breast cancer imaging classification research. In this study, we employ a pretrained Vision Transformer (ViT) model, specifically trained on the ImageNet dataset, as a feature extractor. We combine this with Principal Component Analysis (PCA) for dimensionality reduction and evaluate two classifiers, namely a Multilayer Perceptron (MLP) and a Support Vector Machine (SVM), for breast mammogram image classification. The results demonstrate that the transfer learning approach using ViT, PCA, and an MLP classifier achieves an average accuracy, precision, recall, and F1-score of 98% on the DSMM dataset and 95% on the INbreast dataset, results that are comparable to the current state-of-the-art.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_104-Breast_Cancer_Classification_Through_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Target Region Attention Network-based Human Pose Estimation in Smart Classroom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504103</link>
        <id>10.14569/IJACSA.2024.01504103</id>
        <doi>10.14569/IJACSA.2024.01504103</doi>
        <lastModDate>2024-04-30T10:33:20.9770000+00:00</lastModDate>
        
        <creator>Jianwen Mo</creator>
        
        <creator>Guiyun Jiang</creator>
        
        <creator>Hua Yuan</creator>
        
        <creator>Zhaoyu Shou</creator>
        
        <creator>Huibing Zhang</creator>
        
        <subject>Human pose estimation; smart classroom; Lite-HRNet; deformable convolutional encoding network; target region attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In smart classroom environments, problems such as occlusion and overlap make the acquisition of student pose information challenging. To address these problems, a lightweight human pose estimation model with Adaptive Target Region Attention based on Lite-HRNet is proposed for smart classroom scenarios. Firstly, the Deformable Convolutional Encoding Network (DCEN) module is designed to reconstruct the encoding of features through an encoder; a multi-layer deformable convolutional module is then used to adaptively focus on the image region to obtain a feature representation that focuses on the target region of interest of the student subject. Secondly, the Channel And Spatial Attention (CASA) module is designed to attenuate or enhance the feature attention in different regions of the feature map to obtain a more accurate representation of the target feature. Finally, extensive experiments were conducted on the COCO dataset and the smart classroom dataset (SC-Data) to compare the proposed model with current mainstream human pose estimation frameworks. The experimental results show that the model reaches 67.5 mAP on the COCO dataset, an improvement of 2.7 mAP over the Lite-HRNet model, and 86.6 mAP on the SC-Data dataset, an improvement of 1.6 mAP over the Lite-HRNet model.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_103-Adaptive_Target_Region_Attention_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Cooling Load Estimation via Hybrid Models Based on the Radial Basis Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504102</link>
        <id>10.14569/IJACSA.2024.01504102</id>
        <doi>10.14569/IJACSA.2024.01504102</doi>
        <lastModDate>2024-04-30T10:33:20.9470000+00:00</lastModDate>
        
        <creator>Sirui Zhang</creator>
        
        <creator>Hao Zheng</creator>
        
        <subject>Cooling load estimation; machine learning; building energy consumption; radial basis functions; dynamic arithmetic optimization algorithm; golden eagle optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>To advance energy conservation in cooling systems within buildings, a pivotal technology known as cooling load prediction is essential. Traditional industry computational models typically employ forward or inverse modeling techniques, but these methods often demand extensive computational resources and involve lengthy procedures. However, artificial intelligence (AI) surpasses these approaches, with its models exhibiting the capability to autonomously discern intricate patterns, adapt dynamically, and enhance their performance as data volumes increase. AI models excel in forecasting cooling loads, accounting for various factors like weather conditions, building materials, and occupancy. This results in agile and responsive predictions, ultimately leading to heightened energy efficiency. The dataset of this study, which comprised 768 samples, was derived from previous studies. The primary objective of this study is to introduce a novel framework for the prediction of cooling load via integrating the Radial Basis Function (RBF) with two innovative optimization algorithms, specifically the Dynamic Arithmetic Optimization Algorithm (DAO) and the Golden Eagle Optimization Algorithm (GEO). The predictive outcomes indicate that the RBDA prediction model outperforms RBF in cooling load predictions, with RMSE = 0.792, approximately half that of RBF. Furthermore, the RBDA model&#39;s performance, especially in the training phase, confirmed an optimal value of R2 = 0.993.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_102-Investigating_Cooling_Load_Estimation_via_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Keyword Acquisition for Language Composition Based on TextRank Automatic Summarization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504101</link>
        <id>10.14569/IJACSA.2024.01504101</id>
        <doi>10.14569/IJACSA.2024.01504101</doi>
        <lastModDate>2024-04-30T10:33:20.9130000+00:00</lastModDate>
        
        <creator>Yan Jiang</creator>
        
        <creator>Chunlin Xiang</creator>
        
        <creator>Lingtong Li</creator>
        
        <subject>Language composition; keywords; best match 25; textrank; digests</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>It is important to extract keywords from text quickly and accurately for composition analysis, but the accuracy of traditional keyword acquisition models is not high. Therefore, in this study, the Best Match 25 algorithm was first used to preprocess the compositions and evaluate the similarity between sentences. Then, TextRank was used to extract the abstract and construct segmentation and named entity models, and finally the research content was verified. The results show that, in the performance test, the Best Match 25 similarity algorithm has higher accuracy, recall rate, and F1 value, with an average running time of only 2182 ms, and has the largest area under the receiver operating characteristic curve, which is significantly higher than other models, reaching 0.954. The accuracy of the TextRank algorithm is above 90%; the average accuracy over 100 text analyses is 94.23%, and the average recall rate and F1 value are 96.67% and 95.85%, respectively. In a comparison of the four methods, the research model shows obvious advantages: the average keyword coverage rate is 94.54%, the average processing time for 16 texts is 11.29 seconds, and the average 24-hour memory usage is only 15.67%, which is lower than the other three methods. The experimental results confirm the superiority of the model in terms of keyword extraction accuracy. This research not only provides a new technical tool for language composition teaching and evaluation, but also provides a new idea and method for keyword extraction research in the field of natural language processing.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_101-Keyword_Acquisition_for_Language_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Method for Digital Twin Manufacturing System Based on NSGA-II</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01504100</link>
        <id>10.14569/IJACSA.2024.01504100</id>
        <doi>10.14569/IJACSA.2024.01504100</doi>
        <lastModDate>2024-04-30T10:33:20.9000000+00:00</lastModDate>
        
        <creator>Yu Ding</creator>
        
        <creator>Longhua Li</creator>
        
        <subject>Multi-objective optimization; NSGA-II; Digital twin; Production time; Production energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the wave of industrial modernization, a concept that comprehensively covers the product lifecycle has been proposed, namely the digital twin manufacturing system. The digital twin manufacturing system can conduct three-dimensional simulation of the workshop, thereby achieving dynamic scheduling and energy efficiency optimization of the workshop. The optimization of digital twin manufacturing systems has become a focus of research. In order to reduce power consumption and production time in manufacturing workshops, the study adopted a non-dominated sorting genetic algorithm and improved its elitist retention strategy to address the problem of easily falling into local optima. Based on the idea of multi-objective optimization, the optimization was carried out with the production time and power consumption of the manufacturing workshop as the objectives. The experiments showed that the improved algorithm outperforms the decomposition-based multi-objective optimization algorithm and the Pareto-dominance-based evolutionary algorithm. Compared to the two comparison algorithms, the production time and power consumption optimization effects for different numbers of devices were 11.12%-21.37% and 2.14%-6.89% higher, respectively. The optimization time of the improved algorithm was 713.5 seconds, which was 173.8 seconds and 179.8 seconds less than the other two algorithms, respectively. The total power consumption of the improved optimization model was 2883.7 kW·s, which was 32.0 kW·s and 45.5 kW·s less than the other two algorithms, respectively. This study proposed a new multi-objective optimization algorithm for the current digital twin manufacturing industry. This algorithm effectively reduces production time and power consumption, and has important guiding significance for manufacturing system optimization in actual production environments.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_100-Optimization_Method_for_Digital_Twin_Manufacturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Proposal for Improving Economic Decision-Making Through Stock Price Index Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150499</link>
        <id>10.14569/IJACSA.2024.0150499</id>
        <doi>10.14569/IJACSA.2024.0150499</doi>
        <lastModDate>2024-04-30T10:33:20.8670000+00:00</lastModDate>
        
        <creator>Xu Yao</creator>
        
        <creator>Weikang Zeng</creator>
        
        <creator>Lei Zhu</creator>
        
        <creator>Xiaoxiao Wu</creator>
        
        <creator>Di Li</creator>
        
        <subject>Hybrid model; recurrent neural networks; grey wolf optimization; stock price prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The non-stationary, non-linear, and extremely noisy nature of stock price time series data, which are created from economic factors and systematic and unsystematic risks, makes it difficult to make reliable predictions of stock prices in the securities market. Conventional methods may improve forecasting accuracy, but they can additionally complicate the computations involved, increasing the likelihood of prediction errors. To address these issues, a novel hybrid model that combines recurrent neural networks and grey wolf optimization was introduced in the current study. The suggested model outperformed the other models in the study with high efficacy, minimal error, and peak performance. The effectiveness of the hybrid model was assessed using data from Alphabet stock spanning from January 1, 2015, to June 29, 2023. The gathered information comprised daily prices and trading volume. The outcomes showed that the suggested model is a reliable and effective method for analyzing and forecasting the time series of the financial market. The suggested model is also particularly well-suited to the volatile stock market and outperforms other recent strategies in terms of forecasting accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_99-A_Novel_Proposal_for_Improving_Economic_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unveiling Spoofing Attempts: A DCGAN-based Approach to Enhance Face Spoof Detection in Biometric Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150498</link>
        <id>10.14569/IJACSA.2024.0150498</id>
        <doi>10.14569/IJACSA.2024.0150498</doi>
        <lastModDate>2024-04-30T10:33:20.8530000+00:00</lastModDate>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Shirisha Kasireddy</creator>
        
        <creator>Annapurna Mishra</creator>
        
        <creator>R. Salini</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Khaled Bedair</creator>
        
        <subject>Biometric authentication systems; deep convolutional generative adversarial networks; face spoof detection; synthetic image generation; unauthorized access</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Face spoofing attacks have become more dangerous as biometric identification has become more widely used. In these attacks, attackers use false facial photographs to fool systems, endangering the security of biometric authentication devices and potentially allowing unauthorized access to private information. Effectively recognizing and thwarting such spoofing attacks is critical to the dependability and credibility of biometric identification systems in a variety of applications. This research seeks to offer a unique strategy that uses Deep Convolutional Generative Adversarial Networks (DCGANs) to improve face spoof detection in order to counter the challenge posed by face spoofing attacks. In order to strengthen the security of biometric authentication systems in applications like identity verification, access control, and mobile device unlocking, the goal is to increase the accuracy and effectiveness of facial spoof detection. The DCGAN is used to generate synthetic facial images, and the training dataset is then supplemented with these artificial images, which strengthens the face spoof detection system&#39;s resilience. More accurate face spoof detection is made possible by the strategy, which leverages the discriminative characteristics obtained throughout the process to train the discriminator network with adversarial learning to discriminate between real and fake images. Experiments on the CelebFacesAttributes (CelebA) dataset show how effective the suggested method is compared to traditional techniques. The suggested technique outperforms conventional methods and achieves an astounding accuracy of 99.1% in face-spoof detection systems. The system exhibits impressive precision in differentiating between real and fake faces through the efficient use of artificial intelligence and adversarial learning. This effectively decreases the possibility of unwanted access and enhances the overall dependability of biometric authentication methods.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_98-Unveiling_Spoofing_Attempts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing IoT Network Security: ML and Blockchain for Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150497</link>
        <id>10.14569/IJACSA.2024.0150497</id>
        <doi>10.14569/IJACSA.2024.0150497</doi>
        <lastModDate>2024-04-30T10:33:20.8370000+00:00</lastModDate>
        
        <creator>N. Sunanda</creator>
        
        <creator>K. Shailaja</creator>
        
        <creator>Prabhakar Kandukuri</creator>
        
        <creator>Krishnamoorthy</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <subject>Intrusion detection; IoT networks; machine learning; random forest; red fox optimization; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Given the proliferation of connected devices and the evolving threat landscape, intrusion detection plays a pivotal role in safeguarding IoT networks. However, traditional methodologies struggle to adapt to the dynamic and diverse settings of IoT environments. To address these challenges, this study proposes an innovative framework that leverages machine learning, specifically Red Fox Optimization (RFO) for feature selection, and Attention-based Bidirectional Long Short-Term Memory (Bi-LSTM). Additionally, the integration of blockchain technology is explored to provide immutable and tamper-proof logs of detected intrusions, bolstering the overall security of the system. Previous research has highlighted the limitations of conventional intrusion detection techniques in IoT networks, particularly in accommodating diverse data sources and rapidly evolving attack strategies. The attention mechanism enables the model to concentrate on pertinent features, enhancing the accuracy and efficiency of anomaly and malicious activity detection in IoT traffic. Furthermore, the utilization of RFO for feature selection aims to reduce data dimensionality and enhance the scalability of the intrusion detection system. Moreover, the inclusion of blockchain technology enhances security by ensuring the integrity and immutability of intrusion detection logs. The proposed framework is implemented using Python for machine learning tasks and Solidity for blockchain development. Experimental findings demonstrate the efficacy of the approach, achieving a detection accuracy of approximately 98.9% on real-world IoT datasets. These results underscore the significance of the research in advancing IoT security practices. By amalgamating machine learning, optimization techniques, and blockchain technology, this framework provides a robust and scalable solution for intrusion detection in IoT networks, fostering improved efficiency and security in interconnected environments.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_97-Enhancing_IoT_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Basketball Free Throw Posture Analysis and Hit Probability Prediction System Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150496</link>
        <id>10.14569/IJACSA.2024.0150496</id>
        <doi>10.14569/IJACSA.2024.0150496</doi>
        <lastModDate>2024-04-30T10:33:20.8030000+00:00</lastModDate>
        
        <creator>Yuankai Luo</creator>
        
        <creator>Yan Peng</creator>
        
        <creator>Juan Yang</creator>
        
        <subject>Deep learning; CBAM; OpenPose; Free throws; Posture analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>With the continuous progress of basketball technology and tactics, educators need to adopt new teaching methods to cultivate high-quality athletes who meet the needs of modern basketball development. In basketball teaching, the accuracy of free throw techniques directly affects teaching effectiveness. Therefore, the automated prediction of free throw hits is of great significance for reducing manual labor and improving training efficiency. In order to automatically predict free throw hits and reduce manual fatigue, the study conducts an in-depth analysis of the criticality of free throws in basketball. In this study, a target detection model for basketball players is constructed based on YOLOv5 and CBAM, and a basketball free throw hit prediction model is constructed based on the OpenPose algorithm. The main quantitative results showed that the proposed model could accurately recognize athlete posture in free throw actions and save it as video frames in practical applications. Specifically, when using the free throw keyframe limb angle as features, the model achieved a prediction accuracy of 71% and a recall rate of 86% in internal testing. In external testing, the prediction accuracy improved to 89% and the recall rate was 77%. In addition, combining the relative position difference and angle characteristics of joint points, the accuracy of internal testing was significantly improved to 80%, and the recall rate increased to 96%. The accuracy of external testing improved to 95%, with a recall rate of 75%. The experimental results showed that the various functional modules of the system basically meet expectations, confirming that the basketball free throw posture analysis and hit probability prediction system based on deep learning can effectively assist basketball teaching and meet practical teaching application needs. The contribution of the research lies in providing a scientific basketball free throw training tool, which helps coaches and athletes better understand and improve free throw techniques, thereby improving free throw accuracy. Meanwhile, this study also provides new theoretical and practical references for the application of deep learning in motor skill analysis and training, which has potential value for updating the basketball education system and reducing teacher workload.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_96-Basketball_Free_Throw_Posture_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Threat Detection in Financial Cyber Security Through Auto Encoder-MLP Hybrid Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150495</link>
        <id>10.14569/IJACSA.2024.0150495</id>
        <doi>10.14569/IJACSA.2024.0150495</doi>
        <lastModDate>2024-04-30T10:33:20.7900000+00:00</lastModDate>
        
        <creator>Layth Almahadeen</creator>
        
        <creator>Ghayth ALMahadin</creator>
        
        <creator>Kathari Santosh</creator>
        
        <creator>Mohd Aarif</creator>
        
        <creator>Pinak Deb</creator>
        
        <creator>Maganti Syamala</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Financial cyber security; auto encoder; multilayer perceptron; threat detection; hybrid models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Cyber-attacks have the potential to cause power outages, malfunctions of military equipment, and breaches of sensitive data. Owing to the substantial financial value of the information it contains, the banking sector is especially vulnerable. As banks&#39; digital footprints grow, so does the attack surface available to hackers. This paper presents a unique approach to improving financial cyber security threat detection by integrating Auto Encoder-Multilayer Perceptron (AE-MLP) hybrid models. These models use MLP neural networks&#39; discriminative capabilities for detection tasks, while also utilizing auto encoders&#39; strengths in capturing complex patterns and abnormalities in financial data. The NSL-KDD dataset, which is varied and includes transaction records, user activity patterns, and network traffic, was thoroughly analysed. The results show that the AE-MLP hybrid models perform well in spotting possible risks including fraud, data breaches, and unauthorized access attempts. Auto encoders improve the accuracy of threat detection methods by efficiently compressing and rebuilding complicated data representations. This makes it easier to extract latent characteristics that are essential for differentiating between normal and abnormal activity. The approach is implemented in Python. The recommended hybrid AE+MLP approach achieves a better accuracy of 99%, which is 13.16% higher than the traditional approach. The suggested approach improves financial cyber security systems&#39; predictive capacity while also providing scalability and efficiency when handling massive amounts of data in real-time settings.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_95-Enhancing_Threat_Detection_in_Financial_Cyber_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Convolutional Neural Networks for Predictive Analysis of Traumatic Brain Injury: Advancements in Decentralized Health Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150494</link>
        <id>10.14569/IJACSA.2024.0150494</id>
        <doi>10.14569/IJACSA.2024.0150494</doi>
        <lastModDate>2024-04-30T10:33:20.7570000+00:00</lastModDate>
        
        <creator>Tripti Sharma</creator>
        
        <creator>Desidi Narsimha Reddy</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>R. Salini</creator>
        
        <creator>Adapa Gopi</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Traumatic brain injury; federated learning; convolutional neural network; grasshopper optimization algorithm; health monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Traumatic Brain Injury (TBI) is a significant global health concern, often leading to long-term disabilities and cognitive impairments. Accurate and timely diagnosis of TBI is crucial for effective treatment and management. In this paper, we propose a novel federated convolutional neural network (FedCNN) framework for predictive analysis of TBI in decentralized health monitoring. The framework is implemented in Python, leveraging three diverse datasets: CQ500, RSNA, and CENTER-TBI, each containing annotated brain CT images associated with TBI. The methodology encompasses data preprocessing, feature extraction using the gray level co-occurrence matrix (GLCM), feature selection employing the Grasshopper Optimization Algorithm (GOA), and classification using FedCNN. Our approach achieves superior performance compared to existing methods such as DANN, RF and DT, and LSTM, with an accuracy of 99.2%, surpassing other approaches by 1.6%. The FedCNN framework offers decentralized privacy-preserving training across individual networks while sharing model parameters with a central server, ensuring data privacy and decentralization in health monitoring. Evaluation metrics including accuracy, precision, recall, and F1-score demonstrate the effectiveness of our approach in accurately classifying normal and abnormal brain CT images associated with TBI. The ROC analysis further validates the discriminative ability of the FedCNN framework, highlighting its potential as an advanced tool for TBI diagnosis. Our study contributes to the field of decentralized health monitoring by providing a reliable and efficient approach for TBI management, offering significant advancements in patient care and healthcare management. Future research could explore extending the FedCNN framework to incorporate additional modalities and datasets, as well as integrating advanced deep learning architectures and optimization algorithms to further improve performance and scalability in healthcare applications.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_94-Federated_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable Artificial Intelligence: Assessing Performance in Detecting Fake Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150493</link>
        <id>10.14569/IJACSA.2024.0150493</id>
        <doi>10.14569/IJACSA.2024.0150493</doi>
        <lastModDate>2024-04-30T10:33:20.7430000+00:00</lastModDate>
        
        <creator>Othman A. Alrusaini</creator>
        
        <subject>Artificial intelligence; image validation; deep learning; deep neural networks; fake images; image forgery; image manipulations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Detecting fake images is crucial because they may confuse and influence people into making bad judgments or adopting incorrect stances that might have disastrous consequences. In this study, we investigate not only the effectiveness of artificial intelligence, specifically deep learning and deep neural networks, for fake image detection but also the sustainability of these methods. The primary objective of this investigation was to determine the efficacy and sustainable application of deep learning algorithms in detecting fake images. We measured the amplitude of observable phenomena using effect sizes and random effects. Our meta-analysis of 32 relevant studies revealed a compelling effect size of 1.7337, indicating that the model&#39;s performance is robust. Despite this, some moderate heterogeneity was observed (Q-value = 65.5867; I2 = 52.7344%). While deep learning solutions such as CNNs and GANs emerged as leaders in detecting fake images, their efficacy and sustainability were contingent on the nature of the training images and the resources consumed during training and operation. The study highlighted adversarial confrontations, the need for perpetual model revisions due to the ever-changing nature of image manipulations, and data scarcity as technical obstacles. Additionally, the sustainable deployment of these AI technologies in diverse environments was considered crucial.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_93-Sustainable_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach for Enhanced Depression Detection using Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150492</link>
        <id>10.14569/IJACSA.2024.0150492</id>
        <doi>10.14569/IJACSA.2024.0150492</doi>
        <lastModDate>2024-04-30T10:33:20.7270000+00:00</lastModDate>
        
        <creator>Ganesh D. Jadhav</creator>
        
        <creator>Sachin D. Babar</creator>
        
        <creator>Parikshit N. Mahalle</creator>
        
        <subject>Depression detection; machine learning; extended- distress analysis interview corpus; ensemble-LSRG model; mamdani fuzzy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>According to the World Health Organization (WHO), depression affects over 350 million people worldwide, making it the most common health problem. Depression has numerous causes, including fluctuations in business, social life, the economy, and personal relationships. Depression is one of the leading contributors to mental illness, and it also has an impact on a person&#39;s thoughts, behavior, emotions, and general wellbeing. This study aids in the clinical understanding of the mental health of patients with depression. The primary objective of the research is to examine learning strategies to enhance the effectiveness of depression detection. The proposed work includes a description of the ‘Extended-Distress Analysis Interview Corpus’ (E-DAIC) labeled dataset and the proposed methodology. The membership function is applied to the Patient Health Questionnaire (PHQ8_Score) for Mamdani fuzzy depression detection levels, in addition to the study of the hybrid approach. The work also reviews the proposed techniques used for depression detection to improve the performance of the system. Finally, we developed the Ensemble-LSRG (Logistic classifier, Support Vector classifier, Random Forest classifier, Gradient boosting classifier) model, which gives 98.21% accuracy, precision of 99%, recall of 99%, F1 score of 99%, mean squared error of 1.78%, mean absolute error of 1.78%, and R2 of 94.23.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_92-Hybrid_Approach_for_Enhanced_Depression_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Security Situation Prediction Technology Based on Fusion of Knowledge Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150491</link>
        <id>10.14569/IJACSA.2024.0150491</id>
        <doi>10.14569/IJACSA.2024.0150491</doi>
        <lastModDate>2024-04-30T10:33:20.6970000+00:00</lastModDate>
        
        <creator>Wei Luo</creator>
        
        <subject>Knowledge graph; network security situation; gated recurrent unit; Bayesian attack graph; relationship extraction; relationship recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>It is difficult for existing methods to accurately reflect different network attack events in real time, which leads to poor performance in predicting network security situations. A knowledge graph-based entity recognition model and an entity relationship extraction model were developed to enhance the reliability and processing efficiency of security data. Then a knowledge graph-based situational assessment method was introduced, and a network security situation prediction model based on the self-attention mechanism and gated recurrent unit was constructed. The study&#39;s results showed that the constructed prediction model achieved stable mean square error values of approximately 0.0127 and 0.0136 after being trained on the NSL-KDD and CICIDS2017 datasets for 678 and 589 iterations, respectively. The mean square error value was lower, with fewer training iterations, compared to other prediction models. The model was embedded into the information security system of an actual Internet company, and the detection accuracy for the number of network attacks was more than 95%. The results of the study indicate that the proposed method can accurately predict the network security situation and provide technical support for predicting network information security of the same type.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_91-Network_Security_Situation_Prediction_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Automated and Adaptive Educational Resources Through Semantic Analysis with BERT and GRU in English Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150490</link>
        <id>10.14569/IJACSA.2024.0150490</id>
        <doi>10.14569/IJACSA.2024.0150490</doi>
        <lastModDate>2024-04-30T10:33:20.6630000+00:00</lastModDate>
        
        <creator>V Moses Jayakumar</creator>
        
        <creator>R. Rajakumari</creator>
        
        <creator>Sana Sarwar</creator>
        
        <creator>Darakhshan Mazhar Syed</creator>
        
        <creator>Prema S</creator>
        
        <creator>Santhosh Boddupalli</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>BERT; Content Generation; English Language Learning; Gated Recurrent Unit; Semantic Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Semantics describe how language and its constituent parts are understood or interpreted. Semantic analysis is the computational analysis of language to derive connections, meaning, and context from words and sentences. In English language learning, dynamic content generation entails developing instructional materials that adjust to the specific requirements of each student, delivering individualized and contextually appropriate information to boost understanding and engagement. To tailor instructional materials to the varied requirements of students, dynamic content creation is essential in English language learning (ELL). This work presents a unique method for automatic and adaptive content production in ELL that uses Gated Recurrent Unit (GRU) and Bidirectional Encoder Representations from Transformers (BERT) together. The suggested approach uses BERT for content selection, adaptation, and adaptive educational content production, and GRU for semantic feature extraction and capturing contextual information from textual input. The evaluation uses an extensive dataset of persuasive essays collected in the PERSUADE 2.0 corpus, annotated with discourse components and competency scores. After extensive testing, this approach shows outstanding outcomes, with accuracy reaching 97% when compared to the existing Spiking Neural Network (SNN) &amp; Convolutional Neural Network (CNN), Logistic Regression (LR), and Convolutional Bidirectional Recurrent Neural Network (CBRNN). Python is used to implement the suggested work. The suggested strategy improves ELL engagement and understanding by providing individualized, contextually appropriate learning resources to each student. In addition, the flexibility of the system allows for real-time modifications to suit the changing needs and preferences of learners. By providing instructors and students in a variety of educational contexts with a scalable and effective approach, this study advances automated content development in ELL. Future work will improve the model architecture, expand the application into other domains outside of ELL, and investigate new language aspects.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_90-Advancing_Automated_and_Adaptive_Educational_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing HCI Through Real-Time Gesture Recognition with Federated CNNs: Improving Performance and Responsiveness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150489</link>
        <id>10.14569/IJACSA.2024.0150489</id>
        <doi>10.14569/IJACSA.2024.0150489</doi>
        <lastModDate>2024-04-30T10:33:20.6500000+00:00</lastModDate>
        
        <creator>R. Stella Maragatham</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Srilakshmi V</creator>
        
        <creator>K. Sridharan</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <subject>Real-time gesture detection; federated convolutional neural networks; privacy-preserving machine learning; adaptive learning rate scheduling; Decentralized human-computer interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>To facilitate smooth human-computer interaction (HCI) in a variety of contexts, from augmented reality to sign language translation, real-time gesture detection is essential. In this paper, the researchers leverage federated convolutional neural networks (CNNs) to present a novel strategy that tackles these issues. By utilizing federated learning, the authors can cooperatively train a global CNN model on several decentralized devices without sharing raw data, protecting user privacy. Using this concept, the researchers create a federated CNN architecture designed for real-time applications including gesture recognition. This federated approach enables continuous model refinement and adaptation to various user behaviours and environmental situations by pooling local model updates from edge devices. This paper suggests improvements to the federated learning system to maximize responsiveness and speed. To lessen the probability of privacy violations when aggregating models, this research uses techniques like differential privacy. Additionally, to reduce communication overhead and speed up convergence, adaptive learning rate scheduling and model compression techniques are incorporated. Comprehensive tests on benchmark datasets show that the federated CNN approach can achieve state-of-the-art performance in real-time gesture detection tasks. In addition to performing better than centralized learning techniques, this approach guarantees improved responsiveness and adaptability to dynamic contexts. Furthermore, federated learning&#39;s decentralized architecture protects user confidentiality and data security, which qualifies it for use in delicate HCI applications. All things considered, the proposed design offers a viable path forward for the advancement of real-time gesture detection systems, facilitating more organic and intuitive computer-human interactions while preserving user privacy and data integrity. The proposed federated CNN framework, implemented in Python, achieves a prediction accuracy of 98.70% in real-time gesture detection tasks, outperforming centralized learning techniques while preserving user privacy and data integrity.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_89-Enhancing_HCI_Through_Real_Time_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepCardioNet: Efficient Left Ventricular Epicardium and Endocardium Segmentation using Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150488</link>
        <id>10.14569/IJACSA.2024.0150488</id>
        <doi>10.14569/IJACSA.2024.0150488</doi>
        <lastModDate>2024-04-30T10:33:20.6330000+00:00</lastModDate>
        
        <creator>Bukka Shobharani</creator>
        
        <creator>S Girinath</creator>
        
        <creator>K. Suresh Babu</creator>
        
        <creator>J. Chenni Kumaran</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>S. Farhad</creator>
        
        <subject>DeepCardioNet; attention swin U-Net; ventricular epicardium; endocardium; computer vision approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the realm of medical image analysis, accurate segmentation of cardiac structures is essential for reliable diagnosis and therapy planning. This study presents DeepCardioNet, a novel computer vision approach for efficiently segmenting the left ventricular epicardium and endocardium in medical images. The main innovation of DeepCardioNet is its use of the Attention Swin U-Net architecture, a state-of-the-art framework well known for its capacity to capture contextual information and complex attributes. Specially adapted for the segmentation task, the Attention Swin U-Net guarantees superior performance in identifying the relevant left ventricular characteristics. The model&#39;s ability to identify positive instances with high precision and a low false-positive rate is demonstrated by its strong sensitivity, specificity, and accuracy. Beyond accuracy, the Dice Similarity Coefficient (DSC) illustrates the improved performance of the proposed method, showing how effectively it captures the spatial overlap between predicted and ground-truth segmentations. Application and evaluation across multiple datasets demonstrate the model&#39;s generalizability and performance in a variety of medical imaging contexts. DeepCardioNet is a promising method for enhancing cardiac image segmentation, with potential applications in clinical diagnosis and treatment planning. The proposed method achieves an accuracy of 99.21% using a deep neural network architecture, significantly outperforming existing models such as TransUNet, MedT, and FAT-Net. The implementation, in Python, demonstrates the language&#39;s versatility and usefulness for the scientific computing community.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_88-DeepCardioNet_Efficient_Left_Ventricular_Epicardium_and_Endocardium_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-time Air Quality Monitoring in Smart Cities using IoT-enabled Advanced Optical Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150487</link>
        <id>10.14569/IJACSA.2024.0150487</id>
        <doi>10.14569/IJACSA.2024.0150487</doi>
        <lastModDate>2024-04-30T10:33:20.6030000+00:00</lastModDate>
        
        <creator>Anushree A. Aserkar</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Krishnamoorthy</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <subject>Internet of Things (IoT); air quality control; low-cost sensors; ESP-WROOM-32 microcontroller; Amazon Web Server (AWS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Air quality control has drawn a lot of attention in both theoretical research and practical application due to the increasing severity of the air pollution problem. As urbanization accelerates, the need for effective air quality monitoring in smart cities becomes increasingly critical. Traditional methods of air quality monitoring often rely on stationary monitoring stations, providing limited coverage and outdated data. In response to the pressing issue of air pollution and its increased importance, this study proposes an Internet of Things (IoT)-centred framework equipped with inexpensive devices to monitor pollutants vital to human health, in line with World Health Organization recommendations. The hardware development entails building a device that can track significant pollutant concentrations: ammonia, carbon monoxide, nitrogen dioxide, PM2.5 and PM10 particulate matter, and ozone. The device is driven by the ESP-WROOM-32 microcontroller, which has Bluetooth and Wi-Fi capabilities for easy data connection to a cloud server, and uses PMSA003, MICS-6814, and MQ-131 sensors. The device activates indicators when a pollutant concentration exceeds the allowable limit, and its software enables immediate response and intervention. This work leverages the robust cloud architecture of Amazon Web Server (AWS), integrating it into the system to improve accessibility and data control. This combination not only ensures data preservation but also enables real-time tracking and analysis, contributing to a comprehensive and preventive strategy for reducing air pollution and preserving public health. The Real-Time Alerts with AWS Integration model, implemented in Python, achieves the lowest RMSE score of 3.7656.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_87-Real_time_Air_Quality_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Event-based Smart Contracts for Automated Claims Processing and Payouts in Smart Insurance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150486</link>
        <id>10.14569/IJACSA.2024.0150486</id>
        <doi>10.14569/IJACSA.2024.0150486</doi>
        <lastModDate>2024-04-30T10:33:20.5870000+00:00</lastModDate>
        
        <creator>Araddhana Arvind Deshmukh</creator>
        
        <creator>Prabhakar Kandukuri</creator>
        
        <creator>Janga Vijaykumar</creator>
        
        <creator>Anna Shalini</creator>
        
        <creator>S. Farhad</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Blockchain technology; smart contracts; event-based triggers; automated claims processing; transparency and trustworthiness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The combination of blockchain technology and smart contracts has become a viable way to expedite claims processing and payouts in the quickly changing insurance industry. Automating procedures through event-based smart contracts that initiate on predetermined triggers can enhance efficiency, transparency, and reliability for the industry. Conventional insurance procedures can be laborious, slow, and prone to human error, causing inefficiencies and delays in the resolution of claims. This research proposes a simplified system that automates the whole claims process from submission to reimbursement by utilizing blockchain technology and smart contracts. The suggested method does away with the requirement for manual claim filing by having policyholders&#39; claims automatically triggered by predetermined occurrences. These occurrences might range from medical emergencies to natural calamities, enabling prompt and precise claim initiation. The whole claims process is managed by smart contracts programmed with precise triggers and conditions, guaranteeing transaction immutability, security, and transparency. Moreover, reimbursements are carried out automatically after the triggering event has been verified, bypassing conventional bureaucratic processes and drastically cutting down on processing times. This strategy decreases the possibility of fraud and dispute while also improving operational efficiency by combining self-executing contracts with decentralized ledger technology. Insurance companies and policyholders alike will profit from an accelerated, transparent, and reliable claims processing procedure thanks to the use of event-based smart contracts. A Python-implemented system achieving 97.6% accuracy using the proposed method demonstrates its efficacy and reliability for the given task.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_86-Event_Based_Smart_Contracts_for_Automated_Claims.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strengthening Sentence Similarity Identification Through OpenAI Embeddings and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150485</link>
        <id>10.14569/IJACSA.2024.0150485</id>
        <doi>10.14569/IJACSA.2024.0150485</doi>
        <lastModDate>2024-04-30T10:33:20.5700000+00:00</lastModDate>
        
        <creator>Nilesh B. Korade</creator>
        
        <creator>Mahendra B. Salunke</creator>
        
        <creator>Amol A. Bhosle</creator>
        
        <creator>Prashant B. Kumbharkar</creator>
        
        <creator>Gayatri G. Asalkar</creator>
        
        <creator>Rutuja G. Khedkar</creator>
        
        <subject>OpenAI; embedding; sentence similarity; FastText; Word2Vec; CNN; LSTM; precision; recall; F1-score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Discovering similarity between sentences can be beneficial to a variety of systems, including chatbots for customer support, educational platforms, e-commerce customer inquiries, and community forums or question-answering systems. One of the primary issues that online question-answering platforms and customer service chatbots face is the large number of duplicate inquiries placed on the platform. In addition to cluttering up the platform, these repetitive queries degrade the content&#39;s quality and make it harder for visitors to locate pertinent information. It is therefore necessary to automatically detect sentence similarity in order to improve the user experience and quickly match user expectations. The present study makes use of the Quora dataset to construct a framework for similarity discovery in sentence pairs. As part of our research, we have built additional attributes based on the textual data to improve the accuracy of similarity prediction. The study investigates several vectorization methods and their influence on accuracy. To convert preprocessed text input to numerical vectors, we implemented Word2Vec, FastText, Term Frequency-Inverse Document Frequency (TF-IDF), CountVectorizer (CV), and OpenAI embedding. To judge sentence similarity, the embeddings produced by these approaches were used with various models, including cosine similarity, Random Forest (RF), AdaBoost, XGBoost, LSTM, and CNN. The results demonstrate that all algorithms trained on OpenAI embeddings yield excellent outcomes. The OpenAI-created embedding offers excellent information to the models trained on it and has significant potential for capturing sentence similarity.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_85-Strengthening_Sentence_Similarity_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach with Xception and NasNet for Early Breast Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150484</link>
        <id>10.14569/IJACSA.2024.0150484</id>
        <doi>10.14569/IJACSA.2024.0150484</doi>
        <lastModDate>2024-04-30T10:33:20.5530000+00:00</lastModDate>
        
        <creator>Yassin Benajiba</creator>
        
        <creator>Mohamed Chrayah</creator>
        
        <creator>Yassine Al-Amrani</creator>
        
        <subject>Breast Cancer; CNN; Hybrid Model; Xception; NasNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Breast cancer is the most common cancer in women, accounting for 12.5% of global cancer cases in 2020, and the leading cause of cancer deaths in women worldwide. Early detection is therefore crucial to reducing deaths, and recent studies suggest that deep learning techniques can detect breast cancer more accurately than experienced doctors. Experienced doctors can detect breast cancer with only 79% accuracy, while machine learning techniques can achieve up to 91% accuracy (and sometimes up to 97%). To improve breast cancer classification, we conducted a study using two deep learning models, Xception and NasNet, which we combined to achieve better results in distinguishing between malignant and benign tumours in digital databases and cell images obtained from mammograms. Our hybrid model showed good classification results, with an accuracy of over 96.2% and an AUC of 0.993 (99.3%) for mammography data. Remarkably, these results outperformed all the other models we compared them with, including ResNet101 and VGG, which achieved accuracies of only 87%, 88% and 84.4% respectively. Our results were also the best in the field, surpassing the accuracy of other recent hybrid models such as MOD-RES + NasMobile with 89.50% accuracy and VGG 16 + LR with 92.60% accuracy. By achieving this high accuracy rate, our work can make a significant contribution to reducing breast cancer deaths worldwide by helping doctors to detect the disease early and begin treatment immediately.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_84-A_Hybrid_Approach_with_Xception_and_NasNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COOT-Optimized Real-Time Drowsiness Detection using GRU and Enhanced Deep Belief Networks for Advanced Driver Safety</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150483</link>
        <id>10.14569/IJACSA.2024.0150483</id>
        <doi>10.14569/IJACSA.2024.0150483</doi>
        <lastModDate>2024-04-30T10:33:20.5400000+00:00</lastModDate>
        
        <creator>Gunnam Rama Devi</creator>
        
        <creator>Hayder Musaad Al-Tmimi</creator>
        
        <creator>Ghadir Kamil Ghadir</creator>
        
        <creator>Shweta Sharma</creator>
        
        <creator>Eswar Patnala</creator>
        
        <creator>B Kiran Bala</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Drowsiness detection; driver safety; real-time monitoring; gated recurrent units; enhanced deep belief networks; COOT optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Drowsiness among drivers is a major hazard to road safety, resulting in innumerable incidents globally. Despite substantial study, existing approaches for detecting drowsiness in real time continue to confront obstacles such as low accuracy and efficiency. In this context, this study tackles the critical problems of drowsiness identification and driver safety by suggesting a novel approach that leverages the combined effectiveness of Gated Recurrent Units (GRU) and Enhanced Deep Belief Networks (EDBN), optimised using COOT, a new optimisation algorithm based on the collective behaviour of coot birds. The study begins by emphasising the relevance of drowsiness detection in improving driver safety and the limitations of prior studies in reaching high accuracy in real-time detection. The suggested method aims to close this gap by combining the GRU and EDBN models, known for their temporal modelling and feature learning capabilities respectively, to give a comprehensive solution for drowsiness detection. Following thorough experimentation, the suggested technique achieves an outstanding accuracy of around 99%, indicating its efficiency in detecting drowsiness states in real-time driving scenarios. The relevance of this research stems from its potential to greatly reduce the number of accidents caused by drowsy driving, hence improving overall road safety. Furthermore, the use of COOT to optimise the parameters of the GRU and EDBN models adds a new dimension to the research, demonstrating the effectiveness of nature-inspired optimisation methodologies for improving the performance of machine learning algorithms in critical applications such as driver safety.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_83-COOT_Optimized_Real_Time_Drowsiness_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Arnold and Bessel Function-based Image Encryption on Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150482</link>
        <id>10.14569/IJACSA.2024.0150482</id>
        <doi>10.14569/IJACSA.2024.0150482</doi>
        <lastModDate>2024-04-30T10:33:20.5070000+00:00</lastModDate>
        
        <creator>Abhay Kumar Yadav</creator>
        
        <creator>Virendra P. Vishwakarma</creator>
        
        <subject>Arnold transformation; block encryption; Bessel functions; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Images store large amounts of information used in the visual representation, analysis, and expression of data. The storage and retrieval of images pose a great challenge to researchers globally. This research paper presents an integrated approach to image encryption and decryption using an Arnold map and first-order Bessel function-based chaos. Traditional methods of image encryption are generally based on single algorithms or techniques, making them vulnerable to various security threats. To address these challenges, our novel method combines the robustness of the Arnold transformation with the unique properties of Bessel function-based chaos. Furthermore, we leverage the decentralized nature of blockchain technology to store and manage encryption keys securely. By utilizing blockchain&#39;s tamper-resistant and transparent ledger, we enhance the integrity and traceability of the encryption process, mitigating the risk of unauthorized access or tampering. The proposed method exploits the chaotic behavior of the Bessel function to enhance the security of the encryption process: a chaotic sequence obtained from the first-order Bessel function serves as the encryption key applied after the Arnold transformation. The resulting ciphertext is stored on the blockchain in the form of encrypted blocks for secure storage and added security. Experimental evaluations demonstrate the efficiency, effectiveness, and robustness of our proposed encryption method compared with previously developed techniques, highlighting its superiority in protecting image data against unauthorized access.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_82-An_Integrated_Arnold_and_Bessel_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Diagnosis of Depression and Anxiety Through Explainable Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150481</link>
        <id>10.14569/IJACSA.2024.0150481</id>
        <doi>10.14569/IJACSA.2024.0150481</doi>
        <lastModDate>2024-04-30T10:33:20.4930000+00:00</lastModDate>
        
        <creator>Mai Marey</creator>
        
        <creator>Dina Salem</creator>
        
        <creator>Nora El Rashidy</creator>
        
        <creator>Hazem ELBakry</creator>
        
        <subject>Mental health; Recursive Feature Elimination (RFE); machine learning; Xgboost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Diagnosing depression and anxiety involves various methods, including questionnaire-based approaches that may lack accuracy. However, machine learning has emerged as a promising approach to address these limitations and improve diagnostic accuracy. In this paper, we present a study that applies machine learning techniques to a digital dataset for diagnosing psychological disorders. The study employs numerical, statistical, and mathematical analytic methodologies to extract dataset features. The Recursive Feature Elimination (RFE) algorithm is used for feature selection, and several classification algorithms, including SVM, decision tree, random forest, logistic regression, and XGBoost, are evaluated to identify the most effective technique for the proposed methodology. The dataset consists of 783 samples from patients with depression and anxiety, which are used to test the proposed strategies. The classification results are evaluated using performance metrics such as accuracy (AC), precision (PR), recall (RE), and F1-score (F1). The objective of this study is to identify the best algorithm based on these metrics, aiming to achieve optimal classification of depression and anxiety disorders. The results obtained will be further enhanced by modifying the dataset and exploring additional machine learning algorithms. This research contributes significantly to the field of mental health diagnosis by leveraging machine learning techniques to enhance the accuracy and effectiveness of diagnosing depression and anxiety disorders.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_81-Enhancing_the_Diagnosis_of_Depression_and_Anxiety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Emotion Analysis Model IABC-Deep Learning-based for Vocal Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150480</link>
        <id>10.14569/IJACSA.2024.0150480</id>
        <doi>10.14569/IJACSA.2024.0150480</doi>
        <lastModDate>2024-04-30T10:33:20.4770000+00:00</lastModDate>
        
        <creator>Zhenjie Zhu</creator>
        
        <creator>Xiaojie Lv</creator>
        
        <subject>Vocal performance; deep learning; Artificial Bee Colony (ABC); emotion analysis model; Deep Neural Network (DNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>With the development of deep learning technology, and owing to its potential for solving optimization problems with deep structures, deep learning is gradually being applied to sentiment analysis models. However, most existing deep learning-based sentiment analysis models suffer from low accuracy. Therefore, this study focuses on the problem of emotion analysis in vocal performance. First, drawing on vocal performance experts and user experience, the emotions expressed in vocal performance works are classified to identify the emotional representations of the music. On this basis, to improve the accuracy of deep learning-based emotion analysis models for vocal performance, an improved artificial bee colony algorithm (IABC) was developed to optimize deep neural networks (DNN). Finally, the effectiveness of the proposed deep neural network based on the improved artificial bee colony (IABC-DNN) was verified using a training set of 150 vocal performance works and a testing set of 30 vocal performance works. The results indicate that the accuracy of the IABC-DNN-based emotion analysis model for vocal performance can reach 98%.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_80-Design_of_Emotion_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Lesk Algorithm with Cosine Semantic Similarity to Resolve Polysemy for Setswana Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150479</link>
        <id>10.14569/IJACSA.2024.0150479</id>
        <doi>10.14569/IJACSA.2024.0150479</doi>
        <lastModDate>2024-04-30T10:33:20.4470000+00:00</lastModDate>
        
        <creator>Tebatso Gorgina Moape</creator>
        
        <creator>Oludayo O. Olugbara</creator>
        
        <creator>Sunday O. Ojo</creator>
        
        <subject>Word sense disambiguation; Lesk algorithm; cosine similarity; Bidirectional Encoder Representations from Transformers (BERT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Word Sense Disambiguation (WSD) serves as an intermediate task for enhancing text understanding in Natural Language Processing (NLP) applications, including machine translation, information retrieval, and text summarization. Its role is to enhance the effectiveness and efficiency of these applications by ensuring the accurate selection of the appropriate sense for polysemous words in diverse contexts. The task is recognized as an AI-complete problem, indicating its longstanding complexity since the 1950s. One of the earliest proposed solutions to polysemy in NLP is the Lesk algorithm, which has seen various adaptations by researchers for different languages over the years. This study proposes a simplified, Lesk-based algorithm to resolve polysemy for the Setswana language. Instead of the combinatorial comparisons among candidate senses on which Lesk is based, which cause computational complexity, this study models word sense glosses using Bidirectional Encoder Representations from Transformers (BERT) and the cosine similarity measure, which have been proven to achieve strong performance in WSD. The proposed algorithm was evaluated on Setswana and obtained an accuracy of 86.66 and an error rate of 14.34, surpassing the accuracy of other Lesk-based algorithms for other languages.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_79-Integrating_Lesk_Algorithm_with_Cosine_Semantic_Similarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Coconut Yield Production using Hyperparameter Tuning of Long Short-Term Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150478</link>
        <id>10.14569/IJACSA.2024.0150478</id>
        <doi>10.14569/IJACSA.2024.0150478</doi>
        <lastModDate>2024-04-30T10:33:20.4130000+00:00</lastModDate>
        
        <creator>Niranjan Shadaksharappa Jayanna</creator>
        
        <creator>Raviprakash Madenur Lingaraju</creator>
        
        <subject>Auto-regressive integrated moving average; coconut yield production; improved sine cosine algorithm; long short-term memory; time series</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Coconut production is one of the significant sources of revenue in India. In this research, an Auto-Regressive Integrated Moving Average (ARIMA)-Improved Sine Cosine Algorithm (ISCA) with Long Short-Term Memory (LSTM) is proposed for estimating coconut yield production from time series data. ARIMA is used to convert non-stationary data into stationary time series data by applying differencing. The Holt-Winters Seasonal Method, a variation of Exponential Smoothing, is utilized for the seasonal data. The time-series data are given as input to the LSTM classifier to classify the yield production, and the LSTM model’s hyperparameters are tuned using the Improved Sine Cosine Algorithm (ISCA). In the basic SCA, parameter setting and search precision are crucial; the modified SCA improves the convergence speed and search precision of the algorithm. The model’s performance is estimated using R2, Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Square Error (RMSE) with data on yield from 2011 to 2021, categorizing yearly production into 120 records and eight million nuts. The outcomes show that the LSTM-ISCA achieves values of 0.38, 0.126, 0.049 and 0.221 for the R2, MAE, MSE and RMSE metrics, offering more precise yield prediction compared with other models.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_78-Estimating_Coconut_Yield_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Workload Prediction and Balancing in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150477</link>
        <id>10.14569/IJACSA.2024.0150477</id>
        <doi>10.14569/IJACSA.2024.0150477</doi>
        <lastModDate>2024-04-30T10:33:20.4000000+00:00</lastModDate>
        
        <creator>Syed Karimunnisa</creator>
        
        <creator>Yellamma Pachipala</creator>
        
        <subject>Task scheduling; virtual machines; optimization; workload prediction; migration; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Cloud computing, regarded as one of the most revolutionary technologies serving huge user demand, occupies a prominent place in research. Among the several parameters that influence cloud performance, factors like workload prediction and scheduling pose challenges for researchers seeking to leverage system proficiency. Prior contributions to workload prediction leave scope for further enhancement in terms of makespan, migration efficiency, and cost. Anticipating future workload in time to avoid unfair allocation of cloud resources is a crucial aspect of efficient resource allocation. Our work aims to address this gap and improve efficiency by proposing a Deep Max-out prediction model, which predicts future workload and facilitates workload balancing, paving the path for enhanced scheduling with a hybrid Tasmanian Devil-assisted Bald Eagle Search (TES) optimization algorithm. The evaluation results show that TES achieves a makespan improvement of 16.342% and a migration efficiency of 14.75% over existing approaches like WACO, MPSO, and DBOA (Weighted Ant Colony Optimization, Modified Particle Swarm Optimization, and Discrete Butterfly Optimization Algorithm). Similarly, prediction error was analyzed using measures such as MSE, RMSE, MAE, and MSLE, on which our proposed method yields lower error than the traditional methods.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_77-Deep_Learning_Approach_for_Workload_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Bi-Level Particle Swarm Optimization for Joint Pricing in a Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150476</link>
        <id>10.14569/IJACSA.2024.0150476</id>
        <doi>10.14569/IJACSA.2024.0150476</doi>
        <lastModDate>2024-04-30T10:33:20.3830000+00:00</lastModDate>
        
        <creator>Umar Mansyuri</creator>
        
        <creator>Andreas Tri Panudju</creator>
        
        <creator>Helena Sitorus</creator>
        
        <creator>Widya Spalanzani</creator>
        
        <creator>Nunung Nurhasanah</creator>
        
        <creator>Dedy Khaerudin</creator>
        
        <subject>Bi-Level algorithm; joint pricing; optimization; particle swarm optimization; supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This study examines the integration of pricing and lot-sizing strategies within a system comprising a single producer and a single retailer. The adoption of a bi-level programming technique is justified in establishing a producer-led bi-level joint pricing model owing to the hierarchical nature of the supply chain. The problem maximizes manufacturer and retailer profitability by setting the wholesale quantity, lot size, and retail price simultaneously. We created a bi-level particle swarm optimization (BPSO) algorithm to solve bi-level programming problems (BLPPs). This algorithm effectively addresses BLPPs by eliminating the need for a priori assumptions about the conditions of the problem. The BPSO algorithm demonstrated a commendable level of efficacy when applied to a set of eight benchmark bi-level problems. The proposed bi-level model was solved using BPSO and analyzed using experimental data.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_76-The_Bi_level_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges and Solutions of Agile Software Development Implementation: A Case Study Indonesian Healthcare Organization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150475</link>
        <id>10.14569/IJACSA.2024.0150475</id>
        <doi>10.14569/IJACSA.2024.0150475</doi>
        <lastModDate>2024-04-30T10:33:20.3530000+00:00</lastModDate>
        
        <creator>Ulfah Nur Mukharomah</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <creator>Ni Wayan Trisnawaty</creator>
        
        <subject>Agile Software Development; challenge solutions; IT projects; information technology; application implementation; healthcare organization; Literature Review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>A healthcare organization in Indonesia has implemented Agile software development (ASD) for its software projects. The organization&#39;s problems are post-deployment system bugs, and some software development projects must be carried over to the following year. This study aims to assess and provide recommendations for improving agile software development by identifying the challenges faced. A literature review of previous research was conducted to identify ASD challenges in several organizations. A quantitative survey of software development teams was also conducted to validate the implementation challenges and provide recommendations for addressing them. Thirty-one respondents participated in the survey. The study found that 14 challenges were faced in other organizations, of which 11 were faced by the healthcare organization in Indonesia. Healthcare organizations in Indonesia can apply the recommendations to raise awareness of agile software development culture and to adjust project documentation by aligning it with agile values.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_75-Challenges_and_Solutions_of_Agile_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart AI Framework for Backlog Refinement and UML Diagram Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150474</link>
        <id>10.14569/IJACSA.2024.0150474</id>
        <doi>10.14569/IJACSA.2024.0150474</doi>
        <lastModDate>2024-04-30T10:33:20.3370000+00:00</lastModDate>
        
        <creator>Samia NASIRI</creator>
        
        <creator>Mohammed LAHMER</creator>
        
        <subject>Artificial intelligence; NLP; Agile methodology; UML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In Agile development, it is crucial to refine the backlog to prioritize tasks, resolve problems quickly, and align development efforts with project goals. Automated tools can help in this process by generating Unified Modeling Language (UML) diagrams, allowing teams to work more efficiently and communicate product requirements with a clear, shared understanding. This paper presents an automated approach to Agile methodology that refines backlogs by detecting duplicate user stories and clustering them. Following the refinement process, our approach generates UML diagrams automatically for each cluster, including both class and use case diagrams. Our method is based on machine learning and natural language processing techniques. To implement our approach, we developed a tool that selects the user stories file, groups the stories by actor, and employs the unsupervised k-means algorithm to form clusters. We then used Sentence Bidirectional Encoder Representations from Transformers (SBERT) to measure the similarity between user stories within a cluster. The tool highlights the most similar user stories and facilitates the decision to delete or keep them. Additionally, our approach detects similar or duplicate use cases in the UML use case diagram, making it more convenient for computer system designers. We evaluated our approach on a set of case studies using different performance measures. The results demonstrated its effectiveness in detecting duplicate user stories in the backlog and duplicate use cases. Our automated approach not only saves time and reduces errors, but also improves collaboration between team members. With automatic generation of UML diagrams from user stories, all team members can understand product requirements clearly and consistently, regardless of their technical expertise.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_74-A_Smart_AI_Framework_for_Backlog_Refinement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Graph Convolutional Neural Networks (GCNNs)-based Framework to Enhance the Detection of COVID-19 from X-Ray and CT Scan Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150473</link>
        <id>10.14569/IJACSA.2024.0150473</id>
        <doi>10.14569/IJACSA.2024.0150473</doi>
        <lastModDate>2024-04-30T10:33:20.3030000+00:00</lastModDate>
        
        <creator>D. Raghu</creator>
        
        <creator>Hrudaya Kumar Tripathy</creator>
        
        <creator>Raiza Borreo</creator>
        
        <subject>COVID-19 detection; Graph Neural Networks; X-ray; CT scan images; hybrid optimization; medical imaging analysis; diagnostic tools; pandemic response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The constant need for robust and efficient COVID-19 detection methodologies has prompted the exploration of advanced techniques in medical imaging analysis. This paper presents a novel framework that leverages Graph Convolutional Neural Networks (GCNNs) to enhance the detection of COVID-19 from CT scan and X-Ray images. The GCNN parameters are tuned by hybrid optimization to obtain more exact detection; the resulting technique is termed Hybrid NADAM Graph Neural Prediction (NAGNP). The framework is designed to achieve efficiency through a hybrid optimization strategy. The methodology involves constructing graph representations from chest X-ray or CT scan images, where nodes encapsulate critical image patches or regions of interest. These graphs are fed into GCNN architectures tailored for graph-based data, facilitating intricate feature extraction and information aggregation. A hybrid optimization approach is employed to optimize the model&#39;s performance, encompassing fine-tuning of GCNN hyperparameters and strategic model optimization techniques. Through rigorous evaluation and validation on diverse datasets, our framework demonstrates promising results in accurate and efficient COVID-19 diagnosis. Integrating GCNNs and hybrid optimization presents a viable pathway toward reliable and practical diagnostic tools for combating the ongoing pandemic.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_73-A_Novel_Graph_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Cryptographic Algorithms for Medical IoT Devices using Combined Transformation and Expansion (CTE) and Dynamic Chaotic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150472</link>
        <id>10.14569/IJACSA.2024.0150472</id>
        <doi>10.14569/IJACSA.2024.0150472</doi>
        <lastModDate>2024-04-30T10:33:20.2900000+00:00</lastModDate>
        
        <creator>Abdul Muhammed Rasheed</creator>
        
        <creator>Retnaswami Mathusoothana Satheesh Kumar</creator>
        
        <subject>Internet of Things (IoT); data transmission; data security; medical IoT devices; lightweight cryptography; encryption; decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>IoT is growing in prominence as a result of its various applications across many industries. IoT devices gather information from the real world and send it over networks. The number of small computing devices, such as RFID tags, wireless sensors, embedded devices, and IoT devices, has increased significantly in the last few years. They are anticipated to produce enormous amounts of sensitive data for control and monitoring purposes. The security of these devices is crucial because they handle precious private data, so an encryption algorithm is required to safeguard them. Traditional encryption ciphers like RSA or AES are computationally costly for such constrained devices and hamper their performance. In the realm of IoT security, lightweight image encryption is crucial. For image encryption, the majority of currently used lightweight techniques apply separate pixel-value and pixel-position modifications; such schemes are limited by their high vulnerability to cracking. This paper introduces a Lightweight Cryptography (LWC) algorithm for medical IoT devices using Combined Transformation and Expansion (CTE) and a Dynamic Chaotic System. The proposed system is evaluated in terms of cross-entropy, UACI, and NPCR. As demonstrated by the experimental results, the proposed system is well suited to medical IoT systems, with very high encryption and decryption efficiency, low memory usage, and simplicity.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_72-Lightweight_Cryptographic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Network Optimization for Analysis and Classification of High Band Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150471</link>
        <id>10.14569/IJACSA.2024.0150471</id>
        <doi>10.14569/IJACSA.2024.0150471</doi>
        <lastModDate>2024-04-30T10:33:20.2570000+00:00</lastModDate>
        
        <creator>Manju Sundararajan</creator>
        
        <creator>S. J Grace Shoba</creator>
        
        <creator>Y. Rajesh Babu</creator>
        
        <creator>P N S Lakshmi</creator>
        
        <subject>Deep learning networks; Convolutional Neural Network (CNN); spectral-spatial transformer; adaptive motion optimization; high-band image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Analysis and classification of high-band images refers to the process of analysing and categorizing pictures captured across many bands. Deep learning networks are known for their capacity to extract intricate information from such images. The novelty of this work lies in the integration of CNN-based feature extraction, a spectral-spatial transformer for categorization, and adaptive motion optimization, enhancing high-band image search efficiency and accuracy; these are the three primary parts of the technique. Initially, a CNN extracts hierarchical characteristics from the high-band images, enabling precise feature representation and matching the images&#39; high- and low-level features well. The spectral-spatial transformer module then modifies the spectral and spatial properties of the images, enabling more careful categorization; by integrating spectral and spatial information, the method performs better on complicated and variable image data. Additionally, adaptive motion optimization is incorporated into the training of the deep learning network: during training, this technique dynamically modifies the motion parameter for quicker convergence and better generalization performance. The challenges of hyperspectral imaging (HSI) classification, driven by high dimensionality and complex spectral-spatial relationships, demand innovative solutions; current methodologies, including CNNs and transformer-based networks, suffer from resource demands and interpretability issues, necessitating the exploration of combined approaches for enhanced accuracy. The usefulness of the suggested strategy is demonstrated through numerous experiments on real-world high-band image datasets: in high-band image evaluation and classification tasks, the approach delivers state-of-the-art performance, and the Python-implemented model achieves a 97.8% accuracy rate, exceeding previous methods.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_71-Deep_Learning_Network_Optimization_for_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Prostate Cancer Diagnostics with Image Masking Techniques in Medical Image Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150470</link>
        <id>10.14569/IJACSA.2024.0150470</id>
        <doi>10.14569/IJACSA.2024.0150470</doi>
        <lastModDate>2024-04-30T10:33:20.2430000+00:00</lastModDate>
        
        <creator>H. V. Ramana Rao</creator>
        
        <creator>V RaviSankar</creator>
        
        <subject>Prostate cancer; data exploration; image analysis; medical conditions; background prediction techniques; data preparation; diagnostic accuracy; dataset characteristics; image dimensions; deep learning; statistical analysis; prostate cancer detection; advanced imaging modalities; healthcare diagnostics; medical image analysis; machine learning; target variables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Prostate cancer is a prevalent health concern characterized by the abnormal and uncontrolled growth of cells within the prostate gland in men. This research paper outlines a standardized methodology for integrating medical slide images into machine learning algorithms, specifically emphasizing advancing healthcare diagnostics. The methodology involves thorough data collection, exploration, and image analysis, establishing a foundation for future progress in medical image analysis. The study investigates the relationships among image characteristics, data providers, and target variables to reveal patterns conducive to diagnosing medical conditions. Novel background prediction techniques are introduced, highlighting the importance of meticulous data preparation for improved diagnostic accuracy. The results of our research offer insights into dataset characteristics and image dimensions, facilitating the development of machine-learning models for healthcare diagnosis. Through deep learning and statistical analysis, we contribute to the evolving field of prostate cancer detection, showcasing the potential of advanced imaging modalities. This research promises to revolutionize healthcare diagnostics and shape the trajectory of medical image analysis, providing a robust framework for applying machine learning algorithms in the field. The standardized approach presented in this paper aims to enhance the reproducibility and comparability of studies in medical image analysis, fostering advancements in healthcare technology.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_70-Advancing_Prostate_Cancer_Diagnostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Supply Chain Management Efficiency: A Data-Driven Approach using Predictive Analytics and Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150469</link>
        <id>10.14569/IJACSA.2024.0150469</id>
        <doi>10.14569/IJACSA.2024.0150469</doi>
        <lastModDate>2024-04-30T10:33:20.2100000+00:00</lastModDate>
        
        <creator>Shamrao Parashram Ghodake</creator>
        
        <creator>Vinod Ramchandra Malkar</creator>
        
        <creator>Kathari Santosh</creator>
        
        <creator>L. Jabasheela</creator>
        
        <creator>Shokhjakhon Abdufattokhov</creator>
        
        <creator>Adapa Gopi</creator>
        
        <subject>Supply chain management; predictive analytics; demand forecasting; inventory management; exploratory data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Contemporary firms rely heavily on the effectiveness of their supply chain management. Modern supply chains are complicated and unpredictable, and traditional methods frequently find it difficult to adjust to these factors. Supply chain efficiency can be increased through improved supplier performance, demand prediction, inventory optimisation, and streamlined logistics processes by utilising sophisticated data analytics and machine learning approaches. To improve supply chain management efficiency, this study proposes a data-driven strategy based on Deep Q-Learning (DQL). The goal is to create optimisation frameworks and prediction models that can support well-informed decision-making and supply chain operational excellence. The study&#39;s innovation lies in its thorough integration of DQL into supply chain management. The suggested framework gives a comprehensive method for tackling the difficulties of contemporary supply chain management by integrating methodologies including demand forecasting, inventory optimisation, supplier performance prediction, and logistics optimisation. Three essential elements of the proposed framework are data preparation, predictive modelling, and performance assessment. Data preparation cleanses and converts raw data to make it easier to analyse. Predictive modelling uses DQL to create machine learning frameworks for applications like demand forecasting and logistics optimisation. The method&#39;s efficacy in raising supply chain efficiency is evaluated through performance assessment, achieving 98.9% accuracy in implementation. Findings show that the suggested DQL-based strategy is beneficial: demand is precisely predicted using the predictive models, which improves inventory control and lowers stockouts, and DQL-based optimisation brings supply chain efficiencies including lower costs and better service quality. Performance assessment measures show notable gains over baseline methods, highlighting the importance of DQL in supply chain management. This study demonstrates how Deep Q-Learning can transform supply chain management procedures. In today&#39;s dynamic environment, organisations may gain competitive advantage and sustainable development through supply chain operations that are more efficient, agile, and resilient, thanks to the incorporation of modern analytical methodologies and data-driven insights.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_69-Enhancing_Supply_Chain_Management_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Timber Defect Identification: Enhanced Classification with Residual Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150468</link>
        <id>10.14569/IJACSA.2024.0150468</id>
        <doi>10.14569/IJACSA.2024.0150468</doi>
        <lastModDate>2024-04-30T10:33:20.1970000+00:00</lastModDate>
        
        <creator>Teo Hong Chun</creator>
        
        <creator>Ummi Raba’ah Hashim</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <subject>Residual neural network; convolutional neural network; timber defect identification; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This study investigates the potential enhancement of classification accuracy in timber defect identification through the utilization of deep learning, specifically residual networks. By exploring the refinement of these networks via increased depth and multi-level feature incorporation, the goal is to develop a framework capable of distinguishing various defect classes. A sequence of ablation experiments was conducted, comparing the performance of our proposed architectures (R1, R2 and R3) with the original ResNet50 architecture. Furthermore, the framework’s classification accuracy was evaluated across different timber species, and statistical analyses such as independent t-tests and one-way ANOVA tests were conducted to identify significant differences. Results showed that while the R1 architecture demonstrated slight improvement over ResNet50, particularly with the addition of an extra layer (&quot;ConvG&quot;), the latter still maintained superior overall performance in defect identification. Similarly, the R2 architecture, despite achieving notable accuracy improvements, slightly lagged behind R1. Integration of full pre-activation functions in the R3 architecture yielded significant enhancements, with a 14.18% increase in classification accuracy compared to ResNet50. The R3 architecture showed commendable defect identification performance across various timber species, though with slightly lower accuracy on Rubberwood. Nonetheless, its performance surpassed both ResNet50 and the other proposed architectures, suggesting its suitability for timber defect identification. Statistical analysis confirmed the superiority of the R3 architecture across multiple timber species, underscoring the significance of integrating network depth and full pre-activation functions in improving classification performance. In conclusion, while the wood industry has made strides towards automation in timber grading, significant challenges remain. Overcoming these challenges will require innovative approaches and advancements in image processing and artificial intelligence to realize the full potential of automated grading systems.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_68-Timber_Defect_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Entity Relation Joint Extraction Method Based on Insertion Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150467</link>
        <id>10.14569/IJACSA.2024.0150467</id>
        <doi>10.14569/IJACSA.2024.0150467</doi>
        <lastModDate>2024-04-30T10:33:20.1800000+00:00</lastModDate>
        
        <creator>Haotian Qi</creator>
        
        <creator>Weiguang Liu</creator>
        
        <creator>Fenghua Liu</creator>
        
        <creator>Weigang Zhu</creator>
        
        <creator>Fangfang Shan</creator>
        
        <subject>Entity relation extraction; tagging strategy; joint extraction; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Existing multi-module multi-step and multi-module single-step methods for entity relation joint extraction suffer from issues such as cascading errors and redundant mistakes. In contrast, the single-module single-step modeling approach effectively alleviates these limitations. However, the single-module single-step method still faces challenges when dealing with complex relation extraction tasks, such as excessive negative samples and long decoding times. To address these issues, this paper proposes an entity relation joint extraction method based on Insertion Transformers, which adopts the single-module single-step approach and integrates the newly proposed tagging strategy. This method iteratively identifies and inserts tags in the text, and then effectively reduces decoding time and the count of negative samples by leveraging attention mechanisms combined with contextual information, while also resolving the problem of entity overlap. Compared to the state-of-the-art models on two public datasets, this method achieves high F1 scores of 93.2% and 91.5%, respectively, demonstrating its efficiency in resolving entity overlap issues.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_67-Entity_Relation_Joint_Extraction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Patrol Platform Based on Unmanned Aerial Vehicle for Urban Safety and Intelligent Social Governance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150466</link>
        <id>10.14569/IJACSA.2024.0150466</id>
        <doi>10.14569/IJACSA.2024.0150466</doi>
        <lastModDate>2024-04-30T10:33:20.1630000+00:00</lastModDate>
        
        <creator>Ying Yang</creator>
        
        <creator>Rui Ma</creator>
        
        <creator>Fengjiao Zhou</creator>
        
        <subject>Patrol drones; trajectory planning; smart city governance; crow search algorithm; swarm intelligence algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Urban patrols can detect emergencies in a timely manner and collect information, which helps to improve the quality of services in the city and enhance the comfort of residents. This study proposes the use of IoT-based drones for urban patrol tasks, aiming to explore the potential applications of drones in smart city governance. The main technical challenge in urban patrols by drones is planning their flight paths. Therefore, this article first designs a smart patrol system based on drones and the Internet of Things (IoT). Meanwhile, as information collection is an important aspect of urban patrol tasks, a mathematical model with the goal of maximizing information collection is established to provide cost-effective patrol services. On this basis, to improve the accuracy of the crow search algorithm (CSA), a differential crow search strategy and a variable flight step size are designed. In addition, the Levy flight strategy is introduced into the traditional CSA, and an improved crow search algorithm (ICSA) is proposed. Finally, a simulation environment was established based on an actual urban scene, and the proposed algorithm was compared with other algorithms. The numerical results indicate that the algorithm designed in this paper outperforms the other three swarm intelligence algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_66-A_Patrol_Platform_Based_on_Unmanned_Aerial_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Privacy Implications and Security Vulnerabilities in Single Sign-On Systems: A Case Study on OpenID Connect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150465</link>
        <id>10.14569/IJACSA.2024.0150465</id>
        <doi>10.14569/IJACSA.2024.0150465</doi>
        <lastModDate>2024-04-30T10:33:20.1330000+00:00</lastModDate>
        
        <creator>Mohammed Al Shabi</creator>
        
        <creator>Rashiq Rafiq Marie</creator>
        
        <subject>Single Sign-On; OpenID connect protocol; vulnerabilities; privacy; third-party</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Single Sign-On (SSO) systems have gained popularity for simplifying the login process, enabling users to authenticate through a single identity provider (IDP). However, their widespread adoption raises concerns regarding user privacy, as IDPs like Google or Facebook can accumulate extensive data on user web behavior. This presents a significant challenge for privacy-conscious users seeking to restrict disclosure of their online activities to third-party entities. This paper presents a comprehensive study focused on the OpenID Connect protocol, a widely utilized SSO standard. Our analysis delves into the protocol&#39;s operation, identifying security flaws and vulnerabilities across its various stages. Additionally, we systematically examine the privacy implications associated with user access to SSO systems. We offer a detailed account of how easily user information can be accessed, shedding light on potential risks. The findings underscore the imperative to address privacy vulnerabilities within SSO infrastructures. We advocate for proactive measures to enhance system security and safeguard user privacy effectively. By identifying weaknesses in the OpenID Connect protocol and its implementations, stakeholders can implement targeted strategies to mitigate risks and ensure the protection of user data. This research aims to foster a more secure and privacy-respecting environment within the evolving landscape of SSO systems.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_65-Analyzing_Privacy_Implications_and_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Predictive Maintenance in Industrial Environments via IIoT and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150464</link>
        <id>10.14569/IJACSA.2024.0150464</id>
        <doi>10.14569/IJACSA.2024.0150464</doi>
        <lastModDate>2024-04-30T10:33:20.1170000+00:00</lastModDate>
        
        <creator>Saleh Othman Alhuqayl</creator>
        
        <creator>Abdulaziz Turki Alenazi</creator>
        
        <creator>Hamad Abdulaziz Alabduljabbar</creator>
        
        <creator>Mohd Anul Haq</creator>
        
        <subject>Predictive maintenance; IIoT; data visualization; machine learning; industrial systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Optimizing maintenance procedures is essential in today&#39;s industrial settings to reduce downtime and increase operational effectiveness. To improve predictive maintenance in industrial settings, this article investigates the combination of machine learning (ML) techniques and the Industrial Internet of Things (IIoT). The goal of this research is to advance predictive maintenance in industrial settings by integrating ML with IIoT in a seamless manner. Addressing the complexities of industrial systems and the limitations of traditional maintenance methods, this study presents a methodology leveraging four distinct ML models. The technique includes a thorough assessment of these models&#39; correctness, revealing differences that highlight the significance of a careful model selection procedure. The analysis identifies the most effective model for predictive maintenance tasks through thorough data analysis and visualization. Our work offers a potential path forward for the industrial sector and provides insights into the complex interactions between IIoT and ML. This study lays the groundwork for future developments in predictive maintenance, which will reduce downtime and extend the life of industrial equipment.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_64-Improving_Predictive_Maintenance_in_Industrial_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Book Recommendation System using Weighted Alternating Least Square (WALS) Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150463</link>
        <id>10.14569/IJACSA.2024.0150463</id>
        <doi>10.14569/IJACSA.2024.0150463</doi>
        <lastModDate>2024-04-30T10:33:20.0870000+00:00</lastModDate>
        
        <creator>Kavitha V K</creator>
        
        <creator>Sankar Murugesan</creator>
        
        <subject>Recommendation system; user ratings; matrix factorization; alternating least square; weighted matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Book recommendation systems are essential resources for connecting people with the correct books, encouraging a love of reading, and sustaining a vibrant literary ecosystem in an era when information overload is a prevalent problem. With the emergence of digital libraries and large online book retailers, readers increasingly rely on customized book suggestions to find their next great literary journey. This work offers a novel way to improve book recommendation systems using the Weighted Alternating Least Squares (WALS) technique, which is intended to uncover meaningful patterns in user ratings. The suggested approach minimizes the Root Mean Square Error (RMSE), a crucial indicator of recommendation system (RS) performance, in order to tackle the problem of optimizing recommendations. By representing user-item interactions as a matrix factorization problem, the WALS approach improves the recommendation process. In contrast to conventional techniques, WALS adds weighted elements that highlight the significance of specific user-item pairings, increasing the recommendations&#39; accuracy. Through an empirical study, the proposed approach demonstrates a significant reduction in RMSE when compared to a standard RS, highlighting its effectiveness in enhancing the quality of book recommendations. By leveraging weighted matrix factorization, the proposed method adapts to the nuanced preferences and behaviors of users, resulting in more accurate and personalized book recommendations. This advancement in recommendation technology is poised to benefit both readers and the book industry by fostering more engaging and satisfying reading experiences.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_63-An_Effective_Book_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowdsourcing Requirements Engineering: A Taxonomy-based Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150462</link>
        <id>10.14569/IJACSA.2024.0150462</id>
        <doi>10.14569/IJACSA.2024.0150462</doi>
        <lastModDate>2024-04-30T10:33:20.0700000+00:00</lastModDate>
        
        <creator>Ghadah Alamer</creator>
        
        <creator>Sultan Alyahya</creator>
        
        <creator>Hmood Al-Dossari</creator>
        
        <subject>Crowdsourcing requirements engineering; crowdsourcing; CrowdRE; crowd</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Interesting insights have been found by the research community indicating that early user involvement in Requirements Engineering (RE) has a considerable association with higher requirements quality, software project success, and stronger user loyalty. In addition, traditional RE approaches face scalability issues and are time-consuming and expensive to apply to contemporary applications that may involve a large crowd of users. Therefore, recent attention has turned to leveraging the principle of Crowdsourcing (CS) in requirements engineering. Engaging the crowd in RE activities has been researched by several studies. Hence, we synthesize and review the literature of the knowledge domain of Crowdsourcing Requirements Engineering using a proposed taxonomy of the area. A total of 52 studies were selected for review in this paper. The review aims to identify potential directions in the area and pave the way for other researchers to understand it and find possible gaps.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_62-Crowdsourcing_Requirements_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Recognition with Intensity Level from Bangla Speech using Feature Transformation and Cascaded Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150460</link>
        <id>10.14569/IJACSA.2024.0150460</id>
        <doi>10.14569/IJACSA.2024.0150460</doi>
        <lastModDate>2024-04-30T10:33:20.0400000+00:00</lastModDate>
        
        <creator>Md. Masum Billah</creator>
        
        <creator>Md. Likhon Sarker</creator>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Md Abdus Samad Kamal</creator>
        
        <subject>Bangla speech emotion recognition; speech signal transformation; convolutional neural network; bidirectional long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Speech Emotion Recognition (SER) identifies and categorizes emotional states by analyzing speech signals. The intensity of specific emotional expressions (e.g., anger) conveys critical directives and plays a crucial role in social behavior. SER is intrinsically language-specific; this study investigated a novel cascaded deep learning (DL) model for Bangla SER with intensity levels. The proposed method employs the Mel-Frequency Cepstral Coefficient, Short-Time Fourier Transform (STFT), and Chroma STFT signal transformation techniques; the respective transformed features are blended into a 3D form and used as the input of the DL model. The cascaded model performs the task in two stages: classifying emotion in Stage 1 and then measuring its intensity in Stage 2. The same DL architecture is used in both stages; it consists of a 3D Convolutional Neural Network (CNN), a Time Distribution Flatten (TDF) layer, a Long Short-Term Memory (LSTM), and a Bidirectional LSTM (Bi-LSTM). The CNN first extracts features from the 3D-formed input; the features are passed through the TDF layer, Bi-LSTM, and LSTM; finally, the model classifies the emotion along with its intensity level. The proposed model has been evaluated rigorously using the developed KBES dataset and other datasets. The proposed model proved to be the best-suited SER method compared to existing prominent methods, achieving accuracies of 88.30% and 71.67% on the RAVDESS and KBES datasets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_60-Emotion_Recognition_with_Intensity_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Deep Learning for Efficient and Noise-Robust License Plate Detection and Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150461</link>
        <id>10.14569/IJACSA.2024.0150461</id>
        <doi>10.14569/IJACSA.2024.0150461</doi>
        <lastModDate>2024-04-30T10:33:20.0400000+00:00</lastModDate>
        
        <creator>Seong-O Shim</creator>
        
        <creator>Romil Imtiaz</creator>
        
        <creator>Safa Habibullah</creator>
        
        <creator>Abdulrahman A. Alshdadi</creator>
        
        <subject>De-noising; image analysis; image processing; computer vision; image restoration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Accurate license plate recognition (LPR) remains a crucial task in various applications, from traffic monitoring to security systems. However, noisy environments with challenging factors like blurred images, low light, and complex backgrounds can significantly impede traditional LPR methods. This work proposes a deep learning based LPR system optimized for performance in noisy environments through hyperparameter tuning and bounding box refinement. We first preprocessed the noisy images with noise reduction, which is crucial for robust LPR. We employed a Convolutional Autoencoder (CAE) trained on noisy/clean image pairs to remove noise and enhance details. We utilized the InceptionResNetV2 architecture, pre-trained on ImageNet, for its strong feature extraction capabilities. We then added a Region Proposal Network (RPN) head to InceptionResNetV2 to predict candidate bounding boxes around potential license plates. We employed grid search to optimize key hyperparameters like learning rate, optimizer settings, and RPN anchor scales, ensuring optimal model performance for the specific noise patterns in the target dataset. Non-maximum suppression (NMS) eliminates redundant proposals, and a separate detection head classifies each remaining bounding box as license plate or background. Finally, bounding boxes are refined for improved accuracy. For confirmed license plates, a Bidirectional LSTM/CRNN network extracts and decodes character sequences within the refined bounding boxes. Compared to recent methods, the proposed approach yielded the highest detection and recognition performance in noisy environments, making it well suited for traffic monitoring and security systems in such conditions. Our optimized LPR system demonstrates significantly improved accuracy and robustness compared to baseline methods, particularly in noisy environments.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_61-Optimizing_Deep_Learning_for_Efficient_and_Noise_Robust_License.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Hybrid Convolutional Network for Tumor Classification Using Brain MRI Image Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150459</link>
        <id>10.14569/IJACSA.2024.0150459</id>
        <doi>10.14569/IJACSA.2024.0150459</doi>
        <lastModDate>2024-04-30T10:33:20.0070000+00:00</lastModDate>
        
        <creator>Satish Bansal</creator>
        
        <creator>Rakesh S Jadon</creator>
        
        <creator>Sanjay K. Gupta</creator>
        
        <subject>CNN; SVM; MRI images; brain tumor; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Brain tumour detection at an early stage is challenging for experts and doctors. Many advanced techniques are used for the detection and analysis of different cancers using different medical images. Deep learning (DL), a branch of artificial intelligence, is used to analyse and characterise medical images, including the classification of brain cancer. Magnetic Resonance Imaging (MRI) has become the keystone of brain cancer recognition, and the fusion of advanced imaging methods with cutting-edge DL models has shown great potential in enhancing accuracy. This research aims to develop an efficient hybrid CNN model that employs a support vector machine (SVM) classifier to advance the efficacy and stability of the proposed convolutional neural network (CNN) model. Two distinct brain MRI image datasets (Dataset_MC and Dataset_BC) are binary- and multi-classified using the suggested CNN and hybrid CNN-SVM models. The suggested CNN model employs fewer layers and parameters for feature extraction, while the SVM functions as a classifier to preserve maximum accuracy in a shorter amount of time. The experimental results show that, compared to other CNN models, the hybrid CNN-SVM gives the maximum accuracy on the test datasets, at 99% (Dataset_BC) and 98% (Dataset_MC).</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_59-A_Robust_Hybrid_Convolutional_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering the Global Landscape of Agri-Food and Blockchain: A Bibliometric Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150458</link>
        <id>10.14569/IJACSA.2024.0150458</id>
        <doi>10.14569/IJACSA.2024.0150458</doi>
        <lastModDate>2024-04-30T10:33:19.9770000+00:00</lastModDate>
        
        <creator>Sharifah Khairun Nisa’ Habib Elias</creator>
        
        <creator>Sahnius Usman</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Agri-food supply chain; bibliometric; blockchain; traceability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The agri-food supply chain encompasses all the entities involved in the production and processing of food, from producers to consumers. Traceability is crucial in ensuring that food products are available, affordable, and accessible. Blockchain technology has been proposed as a way to improve traceability in the agri-food supply chain by providing transparency and trust. However, research in this area is still in its early stages. This study aims to examine the trend of blockchain in agri-food supply chain traceability for food security. A bibliometric analysis was conducted on 1047 scholarly works from the Scopus database, starting in 2016. The analysis looked at citation patterns and the development of blockchain technology in agri-food supply chain research and identified trends by source title, nation, institution, and key players. The analysis also examined the frequency of keywords, titles, and abstracts to identify key themes. The analysis has revealed a strong correlation between blockchain technology and traceability in the agri-food supply chain, indicating a promising area for further research. The results show that blockchain-based research for traceability in the agri-food supply chain has increased and is being widely distributed, particularly in regions beyond Europe. The potential benefits it can bring to the supply chain will contribute to the success of the Sustainable Development Goals (SDGs) by ensuring a safe and sufficient global food supply.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_58-Discovering_the_Global_Landscape_of_Agri_Food.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Effect of Small Sample Process Capability Index Under Different Bootstrap Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150457</link>
        <id>10.14569/IJACSA.2024.0150457</id>
        <doi>10.14569/IJACSA.2024.0150457</doi>
        <lastModDate>2024-04-30T10:33:19.9300000+00:00</lastModDate>
        
        <creator>Liyan Wang</creator>
        
        <creator>Guihua Bo</creator>
        
        <creator>Mingjuan Du</creator>
        
        <subject>Process capability indices; bootstrap; confidence interval; small samples</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the quality control of multi-variety, small-batch products, the calculation of the process capability index is particularly important. However, when the sample size is insufficient, the process distribution cannot be judged; if the traditional method is still used to calculate the process capability index, misapplication or misuse will result. In this paper, the Bootstrap method is introduced into the estimation of the process capability index and the calculation of its confidence interval; the Standard Bootstrap (SB), Percentile Bootstrap (PB), Percentile-t Bootstrap (PTB), and Bias-corrected Percentile Bootstrap (BCPB) methods were used to analyze and compare the process capability index. It is found that for symmetric distributions, only the sample size has a significant effect on the length of the confidence interval, but for asymmetric distributions, the sample size and the Bootstrap method are both significant factors affecting the length of the confidence interval.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_57-Investigating_the_Effect_of_Small_Sample_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Pigment Epithelial Detachment in Optical Coherence Tomography Images using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150456</link>
        <id>10.14569/IJACSA.2024.0150456</id>
        <doi>10.14569/IJACSA.2024.0150456</doi>
        <lastModDate>2024-04-30T10:33:19.9130000+00:00</lastModDate>
        
        <creator>T. M. Sheeba</creator>
        
        <creator>S. Albert Antony Raj</creator>
        
        <subject>Artificial neural network; k-nearest neighbor; logistic regression; layer segmentation; na&#239;ve bayes; optical coherence tomography; pigment epithelial detachment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Pigment Epithelial Detachment (PED) is an eye condition that can affect adults over 50 and eventually harm their central vision. The PED region is positioned between Bruch&#39;s membrane (BM) and the Retinal Pigment Epithelium (RPE) layer. Due to PED, the RPE layer is elevated into an arc shape. In this paper, a method to extract the best features to detect PED is proposed. This method uses a four-stage strategy, inspired by Optical Coherence Tomography (OCT) imaging, to detect PED. In the first stage, speckle noise is reduced; in the second stage, the RPE layer is segmented. In the third stage, a novel method is proposed to extract the best features to detect PED, and in the fourth stage, machine learning classifiers such as K-Nearest Neighbors (KNN), Logistic Regression (LR), Na&#239;ve Bayes (NB), and Artificial Neural Networks (ANN) are used to predict PED. For the experimental results, 150 retinal OCT volumes were used: 75 normal OCT volumes and 75 pigment epithelial detachment volumes. Of the 150 images, 80% (120 images) were used for training and 20% (30 images) for testing. A confusion matrix is generated based on the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Logistic Regression achieved the highest accuracy among the ANN, LR, NB, and KNN models, predicting PED with an accuracy of 96.67%.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_56-Prediction_of_Pigment_Epithelial_Detachment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leather Image Quality Classification and Defect Detection System using Mask Region-based Convolution Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150455</link>
        <id>10.14569/IJACSA.2024.0150455</id>
        <doi>10.14569/IJACSA.2024.0150455</doi>
        <lastModDate>2024-04-30T10:33:19.8830000+00:00</lastModDate>
        
        <creator>Azween Bin Abdullah</creator>
        
        <creator>Malathy Jawahar</creator>
        
        <creator>Nalini Manogaran</creator>
        
        <creator>Geetha Subbiah</creator>
        
        <creator>Koteeswaran Seeranagan</creator>
        
        <creator>Balamurugan Balusamy</creator>
        
        <creator>Abhishek Chengam Saravanan</creator>
        
        <subject>Image leather classification; leather defect detection; Convolutional Neural Network; CNN; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The leather industry is increasingly becoming one of the most important manufacturing industries in the world. Increasing demand has posed a great challenge as well as an opportunity for these industries. The quality of a leather product has always been the main factor in setting its market selling price. Usually, quality control is done with manual inspection. However, with human-related errors such as fatigue and loss of concentration, misclassification of the produced leather quality becomes a very serious issue. To tackle this issue, image processing algorithms have traditionally been used, but have not been effective due to low accuracies and high processing times. The introduction of deep learning methodologies such as Convolutional Neural Networks (CNNs), however, makes image classification much simpler. They incorporate automated feature learning and extraction, giving accurate results in less time. In addition, deep learning can also be applied to defect detection, that is, locating defects in the image. In this paper, a system for leather image classification and defect detection is proposed. Initially, the captured images are sent to a classification system, which classifies each image as good quality or defect quality. If the output of the classification system is defect quality, then a defect detection system works on the images and locates the defects in the image. The classification system and the defect detection system are developed using the Inception V3 CNN and Mask R-CNN, respectively. Experimental results using these CNNs have shown great potential with respect to object classification and detection, which, with further development, can give unparalleled performance for applications in these fields.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_55-Leather_Image_Quality_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating Mushroom Culture Classification: A Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150454</link>
        <id>10.14569/IJACSA.2024.0150454</id>
        <doi>10.14569/IJACSA.2024.0150454</doi>
        <lastModDate>2024-04-30T10:33:19.8670000+00:00</lastModDate>
        
        <creator>Hamimah Ujir</creator>
        
        <creator>Irwandi Hipiny</creator>
        
        <creator>Mohamad Hasnul Bolhassan</creator>
        
        <creator>Ku Nurul Fazira Ku Azir</creator>
        
        <creator>SA Ali</creator>
        
        <subject>Machine learning; convolution neural networks; mushroom cultivation; rhizomorph mycelium</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Traditionally, the classification of mushroom cultures has relied on manual inspection by human experts. However, this methodology is susceptible to human bias and errors, primarily due to its dependency on individual judgments. To overcome these limitations, we introduce an innovative approach that harnesses machine learning methodologies to automate the classification of mushroom cultures. Our methodology employs two distinct strategies: the first utilizes the histogram profile of the HSV color space, while the second employs a convolutional neural network (CNN)-based technique. We evaluated a dataset of 1400 images from two strains of Pleurotus ostreatus mycelium samples over a period of 14 days. During the cultivation phase, we base our operations on the histogram profiles of the masked areas. The application of the HSV histogram profile led to an average precision of 74.6% for phase 2, with phase 3 yielding a higher precision of 95.2%. For the CNN-based method, discriminative image features are extracted from captured images of rhizomorph mycelium growth. These features are then used to train a machine learning model that can accurately estimate the growth rate of a rhizomorph mycelium culture and predict contamination status. Using the MNet and MConNet approaches, our results achieved an average accuracy of 92.15% for growth prediction and 97.81% for contamination prediction. Our results suggest that computer-based approaches could revolutionize the mushroom cultivation industry by making it more efficient and productive. Our approach is less prone to human error than manual inspection, and it can be used to produce mushrooms more efficiently and with higher quality.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_54-Automating_Mushroom_Culture_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Pandemic Tweets with COVID-19 as a Prototype</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150453</link>
        <id>10.14569/IJACSA.2024.0150453</id>
        <doi>10.14569/IJACSA.2024.0150453</doi>
        <lastModDate>2024-04-30T10:33:19.8370000+00:00</lastModDate>
        
        <creator>Mashail Almutiri</creator>
        
        <creator>Mona Alghamdi</creator>
        
        <creator>Hanan Elazhary</creator>
        
        <subject>COVID-19; deep learning; machine learning; sentiment analysis; text mining; tweets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>One of the most important applications of text mining is sentiment analysis of pandemic tweets. For example, it can enable governments to predict the onset of pandemics and to put safe policies in place based on people&#39;s feelings. Many research studies have addressed this issue using various datasets and models; nevertheless, this is still an open area of research in which many datasets and models are yet to be explored. This paper is interested in the sentiment analysis of COVID-19 tweets as a prototype. Our literature review revealed that as the dataset size increases, the accuracy generally tends to decrease. This suggests that using a small dataset might provide misleading results that cannot be generalized; hence, it is better to consider large datasets and try to improve analysis performance on them. Accordingly, in this paper we consider a huge dataset, namely COVIDSenti, which is composed of three sub-datasets (COVIDSenti_A, COVIDSenti_B, and COVIDSenti_C). These datasets have previously been processed with a number of Machine Learning (ML) models, Deep Learning (DL) models, and transformers. In this paper, we examine other ML and DL models aiming to find superior solutions. Specifically, we consider Ridge Classifier (RC), Multinomial Na&#239;ve Bayes (MNB), Stochastic Gradient Descent (SGD), Support Vector Classification (SVC), Extreme Gradient Boosting (XGBoost), and the DL Gated Recurrent Unit (GRU). Experimental results have shown that, unlike the other models we tested and the state-of-the-art models on the same dataset, the SGD technique with a count vectorizer showed consistently high performance on all four datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_53-Sentiment_Analysis_of_Pandemic_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cinematic Curator: A Machine Learning Approach to Personalized Movie Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150452</link>
        <id>10.14569/IJACSA.2024.0150452</id>
        <doi>10.14569/IJACSA.2024.0150452</doi>
        <lastModDate>2024-04-30T10:33:19.8200000+00:00</lastModDate>
        
        <creator>B. Venkateswarlu</creator>
        
        <creator>N. Yaswanth</creator>
        
        <creator>A. Manoj Kumar</creator>
        
        <creator>U. Satish</creator>
        
        <creator>K. Dwijesh</creator>
        
        <creator>N. Sunanda</creator>
        
        <subject>Machine learning algorithms; decision tree; random forest; model evaluation; accuracy; precision; F1 score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This work proposes a sophisticated movie recommendation system that offers individualized recommendations based on user preferences by combining content-based filtering, collaborative filtering, and deep learning approaches. The system uses natural language processing (NLP) to examine user-generated content, movie summaries, and reviews in order to gain a nuanced comprehension of thematic aspects and narrative styles. The model includes SHAP for explainability to improve transparency and give users insight into the reasoning behind recommendations. The user-friendly interface, accessible via web and mobile applications, guarantees a smooth experience. The system is able to adjust to changing user preferences and market trends through ongoing updates based on fresh data. The system&#39;s efficacy is validated by user research and A/B testing, which show precise and customized movie recommendations that satisfy a range of tastes.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_52-Cinematic_Curator_A_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Social Skills in Children with Autism Spectrum Disorder Through the use of a Video Game</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150451</link>
        <id>10.14569/IJACSA.2024.0150451</id>
        <doi>10.14569/IJACSA.2024.0150451</doi>
        <lastModDate>2024-04-30T10:33:19.8030000+00:00</lastModDate>
        
        <creator>Luis C. Soles-N&#250;&#241;ez</creator>
        
        <creator>Segundo E. Cieza-Mostacero</creator>
        
        <subject>Video games; social skills; autism spectrum disorder; SUM methodology; academic software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The main research objective was to improve social skills through a video game. The research was applied, with a pure experimental design and a sample of 60 children with autism spectrum disorder from the Christa McAuliffe school, randomly allocated 30 to the control group (CG) and 30 to the experimental group (EG), the latter using a video game developed with Unity 3D. Data were collected by means of a test adapted from the cited authors; subsequently, the data were analyzed and processed using the Jamovi v2.3.28 statistical software. The results obtained were an increase of 27.8% on average in the level of communication skills, an increase of 22.4% on average in the level of skills related to feelings, an increase of 20.4% on average in the level of skills alternative to violence, and an increase of 19% on average in the level of pro-amical skills. It was concluded that the use of the video game significantly improved social skills.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_51-Improvement_of_Social_Skills_in_Children_with_Autism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a New Artificial Intelligence-based Framework for Teachers’ Online Continuous Professional Development Programs: Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150450</link>
        <id>10.14569/IJACSA.2024.0150450</id>
        <doi>10.14569/IJACSA.2024.0150450</doi>
        <lastModDate>2024-04-30T10:33:19.7730000+00:00</lastModDate>
        
        <creator>Hamza Fakhar</creator>
        
        <creator>Mohammed Lamrabet</creator>
        
        <creator>Noureddine Echantoufi</creator>
        
        <creator>Khalid El khattabi</creator>
        
        <creator>Lotfi Ajana</creator>
        
        <subject>Artificial intelligence; continuous professional development; Moroccan in-service teachers; digital teacher; online training; adaptive development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In recent years, the Artificial Intelligence (AI) field has witnessed rapid growth, affecting diverse sectors, including education. In this systematic literature review, we aimed to analyze studies concerning the integration of AI in the continuous professional development (CPD) of teachers in order to build a global vision of its potential to enhance the quality of CPD programs at the international level, and to provide recommendations for its application in the Moroccan context. To achieve our objective, we reviewed studies published between 2019 and 2023 in international indexed databases (Scopus, Web of Science, ERIC), using the PICO framework to formulate our search query and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to select 25 relevant studies based on inclusion and exclusion criteria such as publishing year, type of document, publishing mode, subject area, and language. The results reveal that AI integration has a positive impact on CPD programs by offering beneficial intelligent tools that can tailor adaptive training programs to meet teachers’ specific needs, preferences, and proficiency levels. Furthermore, our findings identify the importance of integrating AI as a core topic within CPD programs to enhance teachers’ AI literacy, enabling them to effectively navigate and utilize AI-based tools in their educational environment. This is important for preparing teachers to engage with the technological advances shaping the educational system. In conclusion, our systematic review emphasizes the significance of AI integration in CPD programs and offers tailored recommendations for its implementation in the Moroccan educational context. By adopting these recommendations, Morocco can pave the way for a dynamic CPD framework that meets the evolving needs of educators and students alike.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_50-Towards_a_New_Artificial_Intelligence_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Enhancement of Prediction of Cardiovascular Disease Diagnosis using Machine Learning Models SVM, SGD, and XGBoost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150449</link>
        <id>10.14569/IJACSA.2024.0150449</id>
        <doi>10.14569/IJACSA.2024.0150449</doi>
        <lastModDate>2024-04-30T10:33:19.7430000+00:00</lastModDate>
        
        <creator>Sandeep Tomar</creator>
        
        <creator>Deepak Dembla</creator>
        
        <creator>Yogesh Chaba</creator>
        
        <subject>CVD; SVM; SGD; XGBoost; classifiers; machine learning; ROC; accuracy; confusion matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Cardiovascular disease (CVD), claiming 17.9 million lives annually, is exacerbated by factors like high blood pressure and obesity, prompting extensive data collection for deeper insights. Machine learning aids in accurate diagnosis, with techniques like SVM, SGD, and XGBoost proposed for heart disease prediction, addressing challenges such as data imbalance and optimizing diagnostic accuracy. This study integrates these algorithms to improve cardiovascular disease diagnosis, aiming to reduce mortality rates through timely interventions. This research investigates the efficacy of Support Vector Machine (SVM), Stochastic Gradient Descent (SGD), and XGBoost machine learning techniques for heart disease prediction. Analysis of the models&#39; performance metrics reveals distinct characteristics and capabilities. SVM demonstrates robust performance with a training accuracy of 88.28% and a model accuracy score of 87.5%, exhibiting high precision and recall values across both classes. SGD, while commendable with a training accuracy of 83.65% and a model accuracy score of 84.24%, falls slightly behind SVM in accuracy and precision. XGBoost Classifier showcases perfect training accuracy but potential overfitting, yet demonstrates comparable precision and recall values to SVM. Overall, SVM emerges as the most effective model for heart disease prediction, followed by SGD and XGBoost Classifier. Further optimization and investigation into generalization capabilities are recommended to enhance the performance of SGD and XGBoost Classifier in clinical settings.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_49-Analysis_and_Enhancement_of_Prediction_of_Cardiovascular_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Algorithm-based Approach for Design-level Class Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150448</link>
        <id>10.14569/IJACSA.2024.0150448</id>
        <doi>10.14569/IJACSA.2024.0150448</doi>
        <lastModDate>2024-04-30T10:33:19.7100000+00:00</lastModDate>
        
        <creator>Bayu Priyambadha</creator>
        
        <creator>Nobuya Takahashi</creator>
        
        <creator>Tetsuro Katayama</creator>
        
        <subject>Genetic algorithm; refactoring; class decomposition; blob smell; software internal quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Software is continually changed to accommodate environmental changes and preserve its usefulness. As changes accumulate, the internal structure of the software tends to decline in quality, so refactoring is worth running to preserve it. The decomposition process is a suitable refactoring for the Blob smell in a class: it splits the class up based on context in order to arrange it by responsibility. A previous approach has been implemented but still leaves problems, as it cannot achieve the optimum class arrangement. A genetic algorithm provides a search mechanism for finding the optimum state based on criteria stated at the beginning of the process. This paper presents the use of a genetic algorithm to solve the design-level class decomposition problem, explaining several points, including the conversion from a class to a chromosome construct, the fitness function calculation, selection, crossover, and mutation. The results show that the genetic algorithm was able to solve the previous problems, including the local optimum problem of the earlier approach; the increase in the fitness function on the case study confirms this.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_48-A_Genetic_Algorithm_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation Analysis for Brain Stroke Diagnosis Based on Susceptibility-Weighted Imaging (SWI) using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150447</link>
        <id>10.14569/IJACSA.2024.0150447</id>
        <doi>10.14569/IJACSA.2024.0150447</doi>
        <lastModDate>2024-04-30T10:33:19.6970000+00:00</lastModDate>
        
        <creator>Shaarmila Kandaya</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Norhashimah Mohd Saad</creator>
        
        <creator>Ezreen Farina</creator>
        
        <creator>Ahmad Sobri Muda</creator>
        
        <subject>Magnetic Resonance Imaging (MRI) diagnosis; time is brain; Susceptibility Weighted Imaging (SWI); dice coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Magnetic Resonance Imaging (MRI) plays a crucial role in diagnosing brain disorders, with stroke being a significant category among them. Recent studies emphasize the importance of swift treatment for stroke, known as &quot;time is brain,&quot; as early intervention within six hours of stroke onset can save lives and improve outcomes. However, conventional manual diagnosis of brain stroke by neuroradiologists is subjective and time-consuming. To address this issue, this study presents an automatic technique for diagnosing and segmenting brain stroke from MRI images of pre- and post-stroke patients. The technique utilizes machine learning methods, focusing on Susceptibility Weighted Imaging (SWI) sequences, and involves four stages: pre-processing, segmentation, feature extraction, and classification. In this paper, pre-processing and segmentation are proposed to identify the stroke region. The segmentation performance is assessed using the Jaccard index, Dice coefficient, false positive rate, and false negative rate. The results show that adaptive thresholding performs best for stroke lesion segmentation, with the good-improvement stroke patient achieving the highest Dice coefficient of 0.96. In conclusion, the proposed stroke segmentation technique has promising potential for diagnosing early brain stroke, providing an efficient and automated approach to aid medical professionals in timely and accurate diagnoses.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_47-Segmentation_Analysis_for_Brain_Stroke_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>StockBiLSTM: Utilizing an Efficient Deep Learning Approach for Forecasting Stock Market Time Series Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150446</link>
        <id>10.14569/IJACSA.2024.0150446</id>
        <doi>10.14569/IJACSA.2024.0150446</doi>
        <lastModDate>2024-04-30T10:33:19.6630000+00:00</lastModDate>
        
        <creator>Diaa Salama Abd Elminaam</creator>
        
        <creator>Asmaa M M. El-Tanany</creator>
        
        <creator>Mohamed Abd El Fattah</creator>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <subject>Stock prediction; Univariate LSTM models; Deep learning; financial forecasting; Vanilla LSTM; Stacked LSTM; Bidirectional LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The article introduces a novel approach for forecasting stock market prices, employing a computationally efficient Bidirectional Long Short-Term Memory (BiLSTM) model enhanced with a global pooling mechanism. Based on the deep learning framework, this method leverages the temporal dynamics of stock data in both forward and reverse time frames, enabling enhanced predictive accuracy. Utilizing datasets from significant market players (HPQ, Bank of New York Mellon, and Pfizer), the authors demonstrate that the proposed single-layered BiLSTM model, optimized with RMSprop, significantly outperforms traditional Vanilla and Stacked LSTM models. The results are quantitatively evaluated using root mean squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R^2), where the BiLSTM model shows a consistent improvement in all metrics across different stock datasets. Hyperparameter tuning was optimized using two distinct optimizers (ADAM, RMSprop) on the HPQ, Bank of New York Mellon, and Pfizer datasets. The datasets were preprocessed to account for missing values, standardize the features, and separate them into training and testing sets. Moreover, line graphs and candlestick charts illustrate the models&#39; ability to capture stock market trends. The proposed algorithms attained respective RMSE values of 0.413, 0.704, and 0.478, showing their superiority over recently published models. In addition, it is concluded that the proposed single-layered BiLSTM-based architecture is computationally efficient and can be recommended for real-time applications involving stock market time series data.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_46-StockBiLSTM_Utilizing_an_Efficient_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Prediction Accuracy using Random Forest Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150445</link>
        <id>10.14569/IJACSA.2024.0150445</id>
        <doi>10.14569/IJACSA.2024.0150445</doi>
        <lastModDate>2024-04-30T10:33:19.6500000+00:00</lastModDate>
        
        <creator>Nesma Elsayed</creator>
        
        <creator>Sherif Abd Elaleem</creator>
        
        <creator>Mohamed Marie</creator>
        
        <subject>Corporate bankruptcy; feature selection; financial ratios; prediction models; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>One of the latest research directions in bankruptcy prediction concerns the performance of financial prediction models. Although several models have been developed, they often do not achieve high performance, especially when using an imbalanced dataset. This highlights the need for more exact prediction models. This paper examines the application and benefits of machine learning for constructing prediction models in the field of corporate financial performance. There is a lack of scientific research on the effects of using random forest algorithms in attribute selection and prediction for enhancing financial prediction; this paper tests various feature selection methods along with different prediction models to fill the gap. The study used a quantitative approach to develop and propose a business failure model, analyzing and preprocessing a large dataset of bankrupt and non-bankrupt enterprises. The performance of the model was then evaluated using metrics such as accuracy, precision, and recall. Findings from the present study show that random forest is recommended as the best model to predict corporate bankruptcy. Moreover, the findings indicate that the proper use of attribute selection methods helps to enhance the prediction precision of the proposed models. The use of the random forest algorithm in feature selection and prediction can produce more exact and reliable results in predicting bankruptcy. The study demonstrates the potential of machine learning techniques to enhance financial performance prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_45-Improving_Prediction_Accuracy_using_Random_Forest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Ultimate Bearing Capacity Assessment of Rock Foundations using a Hybrid Decision Tree Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150444</link>
        <id>10.14569/IJACSA.2024.0150444</id>
        <doi>10.14569/IJACSA.2024.0150444</doi>
        <lastModDate>2024-04-30T10:33:19.6330000+00:00</lastModDate>
        
        <creator>Mei Guo</creator>
        
        <creator>Ren-an Jiang</creator>
        
        <subject>Ultimate bearing capacity; decision tree; zebra optimization algorithm; coronavirus herd immunity optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Accurately estimating the ultimate bearing capacity of piles embedded in rock is of paramount importance in the domains of civil engineering, construction, and foundation design. This research introduces an innovative solution to this problem, leveraging a fusion of the Decision Tree (DT) method with two state-of-the-art optimization algorithms: the Zebra Optimization Algorithm and the Coronavirus Herd Immunity Optimizer. The research approach encompassed the creation of a hybridized model unifying the DT with the Zebra Optimization Algorithm and the Coronavirus Herd Immunity Optimizer, with the primary objective of improving the precision of ultimate bearing capacity prediction for piles embedded in rock. This hybridization strategy harnessed the capabilities of the DT along with the two pioneering optimizers to address the inherent uncertainty stemming from diverse factors impacting bearing capacity. The Zebra Optimization Algorithm and Coronavirus Herd Immunity Optimizer showcased their efficacy in refining the base model, leading to substantial enhancements in predictive performance. This study&#39;s findings make a significant stride in the realm of geotechnical engineering by furnishing a sturdy approach to forecasting ultimate bearing capacity in rock-socketed piles, and the hybridization method is a promising path for future research and practical implementation. Specifically, the DT + Zebra Optimization Algorithm model yielded dependable outcomes, as evidenced by an impressive R-squared value of 0.9981 and a low root mean squared error of 629.78. These outcomes empower engineers and designers to make well-informed choices concerning structural foundations in soft soil settings. Ultimately, this research advocates for safer and more efficient construction methodologies, mitigating the hazards linked to foundation failures.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_44-Enhancing_Ultimate_Bearing_Capacity_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Sampler Impact on AI Image Generation: A Case Study on Dogs Playing in the River</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150443</link>
        <id>10.14569/IJACSA.2024.0150443</id>
        <doi>10.14569/IJACSA.2024.0150443</doi>
        <lastModDate>2024-04-30T10:33:19.5870000+00:00</lastModDate>
        
        <creator>Sanjay Deshmukh</creator>
        
        <subject>Artificial Intelligence; image generation; filter; sampler; Euler; Heun</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>AI image generation is a new and exciting field with many different uses, and it is important to understand how different sampling techniques affect the quality of AI-generated images in order to get the best results. This study examines how different sampling techniques affect the quality of AI-generated images of dogs playing in a river, a scenario chosen because few such images already exist on the internet. The study used the Playground.ai open-source web platform to test different sampling techniques. DDIM was found to be the best sampling technique for generating realistic images of dogs playing in the river. Euler was also found to be very fast, which is an important consideration when choosing a sampling technique. These findings show that different sampling techniques have different strengths and weaknesses, and that it is important to choose the right technique for the specific task at hand. The study thus provides valuable insights into how sampling techniques affect AI image generation and demonstrates the societal relevance of AI-generated imagery in various applications.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_43-Investigating_Sampler_Impact_on_AI_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Contradicting Subtle Emotion Cues on Large Language Models with Various Prompting Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150442</link>
        <id>10.14569/IJACSA.2024.0150442</id>
        <doi>10.14569/IJACSA.2024.0150442</doi>
        <lastModDate>2024-04-30T10:33:19.5700000+00:00</lastModDate>
        
        <creator>Noor Ul Huda</creator>
        
        <creator>Sanam Fayaz Sahito</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Ahsanullah Abro</creator>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Aeshah Alsughayyir</creator>
        
        <creator>Abdul Sattar Palli</creator>
        
        <subject>Emotion cues; prompt; Large Language Model (LLM); Human Computer Interactions (HCI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The landscape of human-machine interaction is undergoing a transformation with the integration of conversational technologies. In various domains, Large Language Model (LLM) based chatbots are progressively taking on roles traditionally handled by human agents, such as task execution, answering queries, offering guidance, and delivering social and emotional assistance. Consequently, enhancing user satisfaction with these technologies is crucial for their effective incorporation. Emotions play an important role in the responses generated by reinforcement-learning-based chatbots. In text-based prompts, emotions can be signaled by visual (emojis, emoticons) and linguistic (misspellings, tone of voice, word choice, sentence length, similes) aspects. Therefore, researchers are harnessing the power of Artificial Intelligence (AI) and Natural Language Processing techniques to imbue chatbots with emotional intelligence capabilities. This research explores the impact of feeding contradicting emotional cues to LLMs through different prompting techniques, evaluating specified instructions against the provided emotional signals. Each prompting technique is scrutinized by inducing a variety of emotions on widely used LLMs, ChatGPT 3.5 and Gemini. Instead of automating the prompting process, the prompts are given under cognitive load to be more realistic with regard to Human-Computer Interaction (HCI). The responses are evaluated using human-provided qualitative insights. The results indicate that simile-based cues have the highest impact on both ChatGPT and Gemini, and that Gemini is more sensitive to emotional cues. The findings of this research can benefit multiple fields: HCI, AI development, Natural Language Processing, prompt engineering, psychology, and emotion analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_42-Impact_of_Contradicting_Subtle_Emotion_Cues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permanent Magnet Motor Control System Based on Fuzzy PID Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150441</link>
        <id>10.14569/IJACSA.2024.0150441</id>
        <doi>10.14569/IJACSA.2024.0150441</doi>
        <lastModDate>2024-04-30T10:33:19.5400000+00:00</lastModDate>
        
        <creator>Yin Sha</creator>
        
        <creator>Huwei Chen</creator>
        
        <subject>Permanent magnet motor; fuzzy PID; fuzzy control; automatic control system; artificial bee colony</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Although the traditional permanent magnet synchronous motor control system is simple and convenient, the control of speed and accuracy is often affected by external interference, which impacts the dynamic and static performance requirements. Therefore, this study introduces fuzzy rules to improve the proportional integral differential (PID) control method, and further integrates intelligent optimization algorithms into the fuzzy PID control method to construct an efficient and feasible permanent magnet synchronous motor control method. The simulation experiment demonstrates that under fuzzy PID control there is no overshoot in the waveform when facing changes in load, and the tuning time increases from 0.01 seconds to 0.12 seconds. The steady-state error of speed control is small, and there is no obvious oscillation in the waveform; fuzzy control thus enhances the control system. After optimization with the artificial bee colony algorithm, the control system has a faster speed response, with the overshoot diminished from 11.2% to 3.1% and the adjustment time reduced from 0.27 seconds to 0.19 seconds, enhancing its adaptability. Under load regulation, the speed response curve of the optimized control system responds in a timely manner without obvious overshoot or oscillation. Optimizing variable universe fuzzy PID control enables the control system to achieve better static and dynamic performance, and enhances the adaptability and tracking performance of the control system. The current curve starts to stabilize at 0.04 s, overcoming the control system oscillations early. The speed response curve and the motor torque curve are improved by the optimized variable universe theory, and the amount of overshoot is significantly reduced. The research and design of a permanent magnet motor control system has practical significance for improving the application performance and adaptability of permanent magnet motors.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_41-Permanent_Magnet_Motor_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Feature Fusion Video Description Model Integrating Attention Mechanisms and Contrastive Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150440</link>
        <id>10.14569/IJACSA.2024.0150440</id>
        <doi>10.14569/IJACSA.2024.0150440</doi>
        <lastModDate>2024-04-30T10:33:19.5230000+00:00</lastModDate>
        
        <creator>Wang Zhihao</creator>
        
        <creator>Che Zhanbin</creator>
        
        <subject>Multimodal feature fusion; video description; spatiotemporal attention; comparative learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>To address the significant redundancy in the spatiotemporal features extracted by multimodal video description methods and the substantial semantic gaps between different modalities within video data, this paper proposes a two-stage video description approach built upon the TimeSformer model (Multimodal Feature Fusion Video Description Model Integrating Attention Mechanism and Contrastive Learning, MFFCL). The TimeSformer encoder extracts spatiotemporal attention features from the input video and performs feature selection. Contrastive learning is employed to establish semantic associations between the spatiotemporal attention features and textual descriptions. Finally, GPT2 is employed to generate the descriptive text. Experimental validations on the MAVD, MSR-VTT, and VATEX datasets were conducted against several typical benchmark methods, including Swin-BERT and GIT. The results indicate that the proposed method achieves outstanding performance on metrics such as Bleu-4, METEOR, ROUGE-L, and CIDEr, that the spatiotemporal attention features extracted by the model can fully express the video content, and that the language model can generate complete video description text.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_40-Multimodal_Feature_Fusion_Video_Description_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Artificial Bee Colony Algorithm for Investigating Job Creation and Economic Enhancement in Medical Waste Recycling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150439</link>
        <id>10.14569/IJACSA.2024.0150439</id>
        <doi>10.14569/IJACSA.2024.0150439</doi>
        <lastModDate>2024-04-30T10:33:19.4930000+00:00</lastModDate>
        
        <creator>El Liazidi Sara</creator>
        
        <creator>Dkhissi Btissam</creator>
        
        <subject>Medical waste recycling; social impacts; genetic artificial bee colony algorithm; job creation; economic value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The effective management of end-of-life products, whether through recycling or incineration for electricity generation, holds pivotal significance amidst escalating concerns over economic, environmental, and social ramifications. While the economic and environmental dimensions often receive primary focus, the social aspect remains comparatively neglected within sustainability discourse. This paper undertakes a comprehensive exploration of the positive social impacts engendered by medical waste recycling, with a specific focus on job creation and economic value enhancement. The principal aim of this research is to highlight the social benefits derived from medical waste recycling, elucidating its role in fostering employment opportunities, and augmenting economic prosperity. By employing a Genetic Artificial Bee Colony algorithm, this study addresses two mathematical problems pertinent to optimizing recycling processes, thereby contributing to the advancement of sustainable waste management practices. Additionally, the proposed algorithm exhibits superior performance, highlighting its potential in addressing sustainability challenges. Ultimately, integrating the social dimension into end-of-life product management discussions can lead to a more comprehensive approach to sustainability, balancing environmental preservation with socio-economic progress.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_39-A_Genetic_Artificial_Bee_Colony_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Influence of a Serious Video Game on the Behavior of Drivers in the Face of Automobile Incidents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150438</link>
        <id>10.14569/IJACSA.2024.0150438</id>
        <doi>10.14569/IJACSA.2024.0150438</doi>
        <lastModDate>2024-04-30T10:33:19.4600000+00:00</lastModDate>
        
        <creator>Bryan S. Diaz-Sipiran</creator>
        
        <creator>Segundo E. Cieza-Mostacero</creator>
        
        <subject>Videogame; serious; behavior; driving; incidents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The primary objective of this research was to enhance driver behavior during incidents through the use of a serious video game. The study employed a true experimental design. The research population consisted of an unspecified number of drivers from the city of Trujillo. Sixty drivers from Trujillo were randomly selected, with 30 assigned to the control group and 30 to the experimental group. The experimental group utilized a video game developed in Unreal Engine 5.2.1; observation forms were used to gather information, and the collected data were subsequently analyzed and processed using the statistical software Jamovi v2.4.11. The results revealed a 43.75% reduction in the number of action mistakes, a 51.14% reduction in the number of intention mistakes, a 31.4% decrease in the number of traffic law violations, and a 42.92% reduction in the number of aggressive attitudes. In conclusion, the use of a serious video game significantly improved driver behavior during incidents.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_38-Influence_of_a_Serious_Video_Game_on_the_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Transformer Models for Sentiment Analysis in Low-Resource Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150437</link>
        <id>10.14569/IJACSA.2024.0150437</id>
        <doi>10.14569/IJACSA.2024.0150437</doi>
        <lastModDate>2024-04-30T10:33:19.4470000+00:00</lastModDate>
        
        <creator>Yusuf Aliyu</creator>
        
        <creator>Aliza Sarlan</creator>
        
        <creator>Kamaluddeen Usman Danyaro</creator>
        
        <creator>Abdulahi Sani B A Rahman</creator>
        
        <subject>Sentiment analysis; low-resource languages; multilingual; word-embedding; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The analysis of sentiments expressed on social media platforms is a crucial tool for understanding user opinions and preferences. Much of the text found on social media is written in many different languages. However, the accuracy of sentiment analysis in these systems faces various challenges in multilingual low-resource settings. Recent advancements in deep learning transformer models have demonstrated superior performance compared to traditional machine learning techniques. The majority of preceding works are predominantly built on monolingual languages. This study presents a comparative analysis that assesses the effectiveness of transformer models for sentiment analysis in multilingual low-resource languages. The study aims to improve upon the existing baseline performance in analyzing tweets written in 12 low-resource African languages. Four widely used state-of-the-art transformer models were employed. The experiment was carried out using standard datasets of tweets. The study showcases AfriBERTa as a robust performer, exhibiting superior sentiment analysis capabilities across diverse linguistic contexts. It outperformed the established benchmarks in both SemEval-2023 Task 12 and the AfriSenti baseline. Our framework achieves remarkable results with an F1-score of 81% and an accuracy rate of 80.9%. This study validates the framework&#39;s robustness in the domain of sentiment analysis across low-resource linguistic contexts. Our research not only contributes a comprehensive sentiment analysis framework for low-resource African languages but also charts a roadmap for future enhancements, emphasizing the ongoing pursuit of adaptability and robustness in sentiment analysis models for diverse linguistic landscapes.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_37-Comparative_Analysis_of_Transformer_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid MCDM Model for Service Composition in Cloud Manufacturing using O-TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150436</link>
        <id>10.14569/IJACSA.2024.0150436</id>
        <doi>10.14569/IJACSA.2024.0150436</doi>
        <lastModDate>2024-04-30T10:33:19.4130000+00:00</lastModDate>
        
        <creator>Syed Omer Farooq Ahmed</creator>
        
        <creator>Adapa Gopi</creator>
        
        <subject>Cloud manufacturing (CMFg); CRITIC method; O-TOPSIS method; service composition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The purpose of this research article was to define the current and future usage of Industry 4.0 technologies (Cloud Computing, IoT, etc.) to improve industrial manufacturing. The goal of this study is to rate the options using a hybrid CRITIC - O-TOPSIS Multi Criteria Decision Making model, in which the CRITIC technique is used to calculate objective weights and the findings are compared with TOPSIS. First, a thorough Systematic Literature Review is conducted. Second, a theoretical approach is taken to recognize the index system of criteria. Third, a hybrid model of CRITIC and O-TOPSIS for decision making is created. Lastly, the options are compared and ranked. The proposed technique successfully addresses the ambiguity and uncertainty of heterogeneous information while maintaining assessment data accuracy. Also, because objective weights are more grounded in reality than subjective weights, the result is more precise. The CRITIC approach results reveal that Ease of Opting has the most weight and Ease of Implementation has the least weight. The O-TOPSIS method ranks the alternatives in the following order: A4&gt;A5&gt;A3&gt;A1&gt;A2. This paper ranks alternatives based on an extensive set of 22 criteria for Service Composition in Cloud Manufacturing using the hybrid CRITIC - O-TOPSIS model.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_36-A_Hybrid_MCDM_Model_for_Service_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Resource Sharing Method of Library and Document Center Under the Multimedia Background</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150435</link>
        <id>10.14569/IJACSA.2024.0150435</id>
        <doi>10.14569/IJACSA.2024.0150435</doi>
        <lastModDate>2024-04-30T10:33:19.3830000+00:00</lastModDate>
        
        <creator>Jianhui Zhang</creator>
        
        <subject>Multimedia background; library and reference center; resource sharing; virtual technology; multimedia technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In order to improve the utilization of the resources of the book and document center and ensure the security of its resource sharing, resource sharing methods for the book and document center under the multimedia background are studied. The resource layer of this method is based on multimedia technology combined with virtual technology to build a multimedia document cloud resource pool. At the same time, an adaptive clustering algorithm based on empirical mode feature decomposition is used to obtain the number of document resource clusters and the resource category labels, completing the resource clustering of the book and document center, with the results stored in the constructed resource pool. Users log in directly through the document resource sharing service of the service layer and enter the resource center after authentication by the management layer. The service layer uses a regional document information resource co-construction and sharing mechanism based on blockchain to encrypt, reach consensus on, and decrypt the clustered resources in the resource pool, and then shares the resources of the book and document center. The test results show that the clustering purity and silhouette coefficient of the method are above 0.970, indicating good clustering quality. The security of resource sharing is good, with a sensitivity result of 10.11% when the resource sharing ratio is 100%. The method can effectively complete resource sharing in the book and document center and meet the sharing needs of book and document resources.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_35-Research_on_Resource_Sharing_Method_of_Library.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Discriminator Image Restoration Algorithm Based on Hybrid Dilated Convolution Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150434</link>
        <id>10.14569/IJACSA.2024.0150434</id>
        <doi>10.14569/IJACSA.2024.0150434</doi>
        <lastModDate>2024-04-30T10:33:19.3530000+00:00</lastModDate>
        
        <creator>Chunming Wu</creator>
        
        <creator>Fengshuo Qi</creator>
        
        <subject>GAN; image restoration; hybrid dilated convolution; attention mechanism; two-stage network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>With the continuous development of generative adversarial networks (GAN), many image restoration problems that are difficult to solve with traditional methods have been given new research avenues. Nevertheless, problems such as structural distortion and texture blurring in the completed image remain in the face of irregular missing regions. In order to overcome these problems and retrieve the lost critical data of the image, a two-stage image restoration and completion network is proposed in this paper. While introducing hybrid dilated convolution, two attention mechanisms are added to the network, which is optimized using multiple loss functions. This not only results in better image quality metrics, but also clearer and more coherent image details. In this paper, we tested the network on the CelebA-HQ, Places2, and Paris datasets and compared it with several classical image restoration models, such as GLC, Gconv, Musical, and RFR; the results show that the images completed by the proposed network are improved compared with the others.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_34-Multi_Discriminator_Image_Restoration_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Learning Approach for Improving ECG Signal Classification and Arrhythmia Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150433</link>
        <id>10.14569/IJACSA.2024.0150433</id>
        <doi>10.14569/IJACSA.2024.0150433</doi>
        <lastModDate>2024-04-30T10:33:19.3370000+00:00</lastModDate>
        
        <creator>Sarah Allabun</creator>
        
        <subject>Electrocardiogram; cardiovascular diseases; classification; ResNet-50; Xception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The development of deep learning algorithms in recent years has shown promise in interpreting ECGs, as these algorithms can be trained on large datasets and can learn to identify patterns associated with different heart conditions. The advantage of these algorithms is their ability to process large amounts of data quickly and accurately, which can help improve the speed and accuracy of diagnoses, especially for patients with heart conditions. Our proposed work provides performant models based on residual neural networks to automate the diagnosis of 12-lead ECG signals with more than 25 classes comprising different cardiovascular diseases (CVDs) and a healthy sinus rhythm. We conducted an experimental study using public datasets from Germany, the USA, and China and trained two models based on Residual Neural Networks-50 (ResNet-50) and Xception, two of the most effective CNN classification techniques. Our models achieved high performance in both training and test tasks in terms of accuracy, precision, recall, and loss, with accuracy, recall, and precision exceeding 99.87% for the two proposed models during training and validation. The loss obtained by the end of these two phases was 3.38 × 10^-4. With these promising results, our suggested models can serve as diagnostic aids for cardiologists to evaluate ECG signals more quickly and objectively. Further quantitative and qualitative evaluations are presented and discussed in the study, and our work can be extended to other multi-modal big biological data tied with ECG for similar sets of patients to obtain a better understanding of the proposed approach for the benefit of the medical world.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_33-An_Intelligent_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Effective IoT-based Transcutaneous Electrical Nerve Stimulation (TENS): Proof-of-Concept Design and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150432</link>
        <id>10.14569/IJACSA.2024.0150432</id>
        <doi>10.14569/IJACSA.2024.0150432</doi>
        <lastModDate>2024-04-30T10:33:19.3030000+00:00</lastModDate>
        
        <creator>Ahmad O. Alokaily</creator>
        
        <creator>Meshael J. Almansour</creator>
        
        <creator>Ahmed A. Aldohbeyb</creator>
        
        <creator>Suhail S. Alshahrani</creator>
        
        <subject>Electro-stimulator; Internet of Things; TENS; pain management; smart health; IoT; telemedicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Transcutaneous electrical nerve stimulation (TENS) systems have been extensively used as a noninvasive and non-pharmaceutical approach for pain management and rehabilitation programs. Moreover, recent advances in telemedicine applications and the Internet of Things (IoT) have led to an increased interest in developing affordable systems that facilitate the remote monitoring of home-based therapeutic programs that help quantify usage and adherence, especially in clinical trials and research. Therefore, this study introduces the design and proof of concept validation of an IoT-enabled, cost-effective, single-channel TENS for remote monitoring of stimulation parameters. The presented prototype features programmable software that supports manipulating the stimulation parameters such as stimulation patterns, pulse width, and frequency. This flexibility can help researchers substantially investigate the effect of different stimulation parameters and develop subject-specific stimulation protocols. The IoT-based TENS system was built using commercial-grade electronic components controlled with open-source software. The system was validated for generating low-frequency (10 Hz) and high-frequency TENS stimulation (100 Hz). The developed system could produce constant biphasic pulses with an adjustable compliance voltage of 5–32 V. The stimulation current corresponding to the applied voltage was quantified across a resistive load of 1 kΩ, resulting in a stimulation current of approximately 4.88–28.79 mA. Furthermore, synchronizing the TENS system with an IoT platform provided the advantage of monitoring the usage and important stimulation parameters, which could greatly benefit healthcare providers. Hence, the proposed system discussed herein has the potential to be used in education, research, and clinics to investigate the effect of TENS devices in a variety of applications outside of the clinical setup.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_32-A_Cost_Effective_IoT_based_Transcutaneous_Electrical_Nerve.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ConvADD: Exploring a Novel CNN Architecture for Alzheimer&#39;s Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150431</link>
        <id>10.14569/IJACSA.2024.0150431</id>
        <doi>10.14569/IJACSA.2024.0150431</doi>
        <lastModDate>2024-04-30T10:33:19.2900000+00:00</lastModDate>
        
        <creator>Mohammed G Alsubaie</creator>
        
        <creator>Suhuai Luo</creator>
        
        <creator>Kamran Shaukat</creator>
        
        <subject>Alzheimer’s disease; AD detection; convolution neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Alzheimer&#39;s disease (AD) poses a significant healthcare challenge, with an escalating prevalence and a forecasted surge in affected individuals. The urgency for precise diagnostic tools to enable early interventions and improved patient care is evident. Despite advancements, existing detection frameworks exhibit limitations in accurately identifying AD, especially in its early stages; model optimisation and accuracy are further issues. This paper aims to address this critical research gap by introducing ConvADD, an advanced Convolutional Neural Network architecture tailored for AD detection. By meticulously designing ConvADD, this study endeavours to surpass the limitations of current methodologies and enhance the accuracy, optimisation, and reliability of AD diagnosis. The dataset was collected from Kaggle and consists of preprocessed 2D images extracted from 3D images. Through rigorous experimentation, ConvADD demonstrates remarkable performance metrics, showcasing its potential as a robust and effective tool for AD detection. The proposed model shows remarkable results, with an accuracy of 98.01%, precision of 98%, recall of 98%, and an F1-score of 98%, with only 2.1 million parameters. However, despite its promising results, several challenges and limitations remain, such as generalizability across diverse populations and the need for further validation studies. By elucidating these gaps and challenges, this paper contributes to the ongoing discourse on improving AD detection methodologies and lays the groundwork for future research endeavours in this domain.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_31-ConvADD_Exploring_a_Novel_CNN_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Bug Bounty Programs for Efficient Malware-Related Vulnerability Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150430</link>
        <id>10.14569/IJACSA.2024.0150430</id>
        <doi>10.14569/IJACSA.2024.0150430</doi>
        <lastModDate>2024-04-30T10:33:19.2730000+00:00</lastModDate>
        
        <creator>Semi Yulianto</creator>
        
        <creator>Benfano Soewito</creator>
        
        <creator>Ford Lumban Gaol</creator>
        
        <creator>Aditya Kurniawan</creator>
        
        <subject>Bug bounty; malware; vulnerability discovery; cyber defense</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Conventional security measures struggle to keep pace with the rapidly evolving threat of malware, which demands novel approaches for vulnerability discovery. Although Bug Bounty Programs (BBPs) are promising, they often underperform in attracting researchers, particularly in uncovering malware-related vulnerabilities. This study optimizes BBP structures to maximize engagement and target malware vulnerability discovery, ultimately strengthening cyber defense. Employing a mixed-methods approach, we compared public and private BBPs and analyzed the key factors influencing researcher participation and the types of vulnerabilities discovered. Our findings reveal a blueprint for effective malware-focused BBPs that enable targeted detection, faster patching, and broader software coverage. This empowers researchers and fosters collaboration within the cybersecurity community, significantly reducing the attack surface for malicious actors. However, challenges related to resource sustainability and legal complexity persist. By optimizing BBPs, we unlocked a powerful tool to fight cybercrime.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_30-Optimizing_Bug_Bounty_Programs_for_Efficient_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autoencoder and CNN for Content-based Retrieval of Multimodal Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150429</link>
        <id>10.14569/IJACSA.2024.0150429</id>
        <doi>10.14569/IJACSA.2024.0150429</doi>
        <lastModDate>2024-04-30T10:33:19.2570000+00:00</lastModDate>
        
        <creator>Suresh Kumar J S</creator>
        
        <creator>Maria Celestin Vigila S</creator>
        
        <subject>Medical image retrieval; multiclass medical images; artificial intelligence; deep learning; convolutional neural network; autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Content-Based Medical Image Retrieval (CBMIR) is a widely adopted approach for retrieving related images by comparing the inherent features of the input image to those stored in the database. However, the domain of CBMIR specific to multiclass medical images faces formidable challenges, primarily stemming from a lack of comprehensive research in this area. Existing methodologies in this field have demonstrated suboptimal performance and propagated misinformation, particularly during the crucial feature extraction process. In response, this investigation seeks to leverage deep learning, a subset of artificial intelligence, for the extraction of features and the elevation of overall performance outcomes. The research focuses on multiclass medical images employing the ImageNet dataset, aiming to rectify the deficiencies observed in previous studies. The utilization of the CNN-based autoencoder method manifests as a strategic choice to enhance the accuracy of feature extraction, thereby fostering improved retrieval results. On the ImageNet dataset, the results obtained from the proposed CBMIR model demonstrate notable average values for accuracy (95.87%), precision (96.03%), and recall (95.54%). This underscores the efficacy of the CNN-based autoencoder model in achieving good accuracy and its potential as a transformative tool in advancing medical image retrieval.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_29-Autoencoder_and_CNN_for_Content_based_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning-based CNN Model for the Classification of Breast Cancer from Histopathological Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150428</link>
        <id>10.14569/IJACSA.2024.0150428</id>
        <doi>10.14569/IJACSA.2024.0150428</doi>
        <lastModDate>2024-04-30T10:33:19.2270000+00:00</lastModDate>
        
        <creator>Sumitha A</creator>
        
        <creator>Rimal Isaac R S</creator>
        
        <subject>Breast cancer; transfer learning; ResNet152v2; medical image analysis; ICAIR 2018 dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Breast cancer can have significant emotional and physical repercussions for women and their families. The timely identification of potential breast cancer risks is crucial for prompt medical intervention and support. In this research, we introduce innovative methods for breast cancer detection, employing a Convolutional Neural Network (CNN) architecture and a Transfer Learning (TL) technique. Our foundation is the ICAIR dataset, encompassing a diverse array of histopathological images. To harness the capabilities of deep learning and expand the model&#39;s knowledge base, we propose a TL model. The CNN component adeptly extracts spatial features from histopathological images, while the TL component incorporates pre-trained weights into the model. To tackle challenges arising from limited labeled data and to prevent overfitting, we employ ResNet152v2. A CNN model pre-trained on extensive image datasets initializes our CNN component, enabling the network to learn pertinent features from histopathological images. The proposed model achieves commendable accuracy (96.47%), precision (96.24%), F1-score (97.18%), and recall (96.63%) in identifying potential breast cancer cases. This approach holds the potential to assist medical professionals in early breast cancer risk assessment and intervention, ultimately enhancing the quality of care for women&#39;s health.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_28-Transfer_Learning_based_CNN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a New Chaotic Function-based Algorithm for Encrypting Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150427</link>
        <id>10.14569/IJACSA.2024.0150427</id>
        <doi>10.14569/IJACSA.2024.0150427</doi>
        <lastModDate>2024-04-30T10:33:19.2100000+00:00</lastModDate>
        
        <creator>Dhian Sweetania</creator>
        
        <creator>Suryadi MT</creator>
        
        <creator>Sarifuddin Madenda</creator>
        
        <subject>Chaotic function; decryption; encryption; function composition; key space; MS tent map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This paper discusses the development of a new chaotic function (the proposed chaotic map) as a keystream generator for encrypting and decrypting images. The proposed chaotic function is obtained through the composition of two chaotic functions, the MS map and the Tent map, with the aim of increasing data resistance to attacks. The randomness properties of the keystream generated by this function have been tested using bifurcation diagrams, the Lyapunov exponent, and NIST randomness analysis. All the analysis results indicate that the keystream passed the randomness tests and is safe to use for image encryption. The performance of the proposed chaotic function was measured by analyzing its initial value sensitivity, key space, and the correlation coefficient of the encrypted image. This function further increases resilience against brute force attacks, minimizing their likelihood, and has a key space of 1.05 &#215; 10^959, much greater than the key space of 5.832 &#215; 10^958 generated by the MS map + Tent map. Finally, quantitative measurements of encrypted image quality show an MSE value of 0 and a PSNR value of ∞, meaning that the decrypted image is identical to the original, both numerically and visually.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_27-Development_of_a_New_Chaotic_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Training Model of High-Rise Building Project Management Talent under Multi-Objective Evolutionary Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150426</link>
        <id>10.14569/IJACSA.2024.0150426</id>
        <doi>10.14569/IJACSA.2024.0150426</doi>
        <lastModDate>2024-04-30T10:33:19.1800000+00:00</lastModDate>
        
        <creator>Pan QI</creator>
        
        <subject>Multi-objective evolution; high-rise building; engineering project; management personnel training; skill proficiency; project cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>To meet the development needs of the construction engineering industry and further optimize the talent training mode, this paper studies the talent training model of high-rise construction project management under a multi-objective evolutionary algorithm. The cognitive ability model of management talent is constructed, and the learning ability of management talent is analyzed. With the optimization objectives of minimizing the construction period, minimizing the project cost, and maximizing the benefit of skill growth in high-rise building projects, and taking average proficiency and average construction duration as constraints, a hybrid immune genetic algorithm incorporating the double-island model is adopted to carry out multi-objective evolution of management talent training, so as to obtain the best training scheme for management talent in high-rise building projects. The experimental results show that after optimization with this model, the skill proficiency of project management personnel is effectively improved, construction time is effectively reduced, construction efficiency is improved, and construction costs are reduced.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_26-Training_Model_of_High_rise_Building_Project_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Superframe Segmentation for Content-based Video Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150425</link>
        <id>10.14569/IJACSA.2024.0150425</id>
        <doi>10.14569/IJACSA.2024.0150425</doi>
        <lastModDate>2024-04-30T10:33:19.1630000+00:00</lastModDate>
        
        <creator>Priyanka Ganesan</creator>
        
        <creator>Senthil Kumar Jagatheesaperumal</creator>
        
        <creator>Abirami R</creator>
        
        <creator>Lekhasri K</creator>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <subject>Video summarization; deep learning; super frame segmentation; keyframes; keyshot identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Video summarization is a complex computer vision task that involves compressing lengthy videos into shorter yet informative summaries that retain the crucial content of the original footage. This paper presents a content-based video summarization approach that utilizes superframe segmentation to identify and extract keyframes representing the most significant information in a video. Unlike other methods that rely solely on visual cues, our approach segments the video into meaningful and coherent visual content units while also preserving the original video&#39;s temporal coherence. This method helps keep the context and continuity of the video in the summary. It involves dividing the video into superframes, each of which is a cluster of adjacent frames with similar motion and visual characteristics. The superframes are then ranked based on their salience scores, which are calculated using visual and motion features. The proposed method selects the top-ranked superframes for the video summary. It has been evaluated on the SumMe and TVSum datasets and achieved state-of-the-art results for F1-score and accuracy. Based on the experimental outcomes, the suggested superframe segmentation method is effective for video summarization and could be particularly useful for monitoring student activities, for example during online exams.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_25-Superframe_Segmentation_for_Content_based_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Machine Learning Methods for Crime Analysis in Textual Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150424</link>
        <id>10.14569/IJACSA.2024.0150424</id>
        <doi>10.14569/IJACSA.2024.0150424</doi>
        <lastModDate>2024-04-30T10:33:19.1330000+00:00</lastModDate>
        
        <creator>Shynar Mussiraliyeva</creator>
        
        <creator>Gulshat Baispay</creator>
        
        <subject>Machine learning; artificial intelligence; crime analysis; text processing; natural language processing; text analysis; data-driven decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This paper explores the application of machine learning techniques to crime analysis, specifically focusing on the classification of crime-related textual data. Through a comparative analysis of various machine learning models, including traditional approaches and deep learning architectures, the study evaluates their effectiveness in accurately detecting and categorizing crime-related text data. The performance of the models is assessed using rigorous evaluation metrics, such as the area under the receiver operating characteristic curve (AUC-ROC), to provide insights into their discriminative power and reliability. The findings reveal that deep learning models consistently outperform conventional machine learning approaches, highlighting the potential of advanced neural network architectures in crime analysis tasks. The implications of these findings for law enforcement agencies and researchers are discussed, emphasizing the importance of leveraging advanced machine learning techniques to enhance crime prevention and intervention efforts. Furthermore, avenues for future research are identified, including the integration of multiple data sources and the exploration of interpretability and explainability of machine learning models in crime analysis. Overall, this research contributes to advancing the field of crime analysis and underscores the importance of leveraging innovative computational approaches to address complex societal challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_24-Leveraging_Machine_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Ensemble Model for Diabetes Mellitus Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150423</link>
        <id>10.14569/IJACSA.2024.0150423</id>
        <doi>10.14569/IJACSA.2024.0150423</doi>
        <lastModDate>2024-04-30T10:33:19.1170000+00:00</lastModDate>
        
        <creator>Abdulaziz A Alzubaidi</creator>
        
        <creator>Sami M Halawani</creator>
        
        <creator>Mutasem Jarrah</creator>
        
        <subject>Diabetes mellitus; machine learning; deep learning; stacking; ensemble learning; RF; CNN-LSTM; SDLs; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Diabetes Mellitus (DM) is a chronic illness that affects populations worldwide, leading to complications such as renal failure, visual impairment, and cardiovascular disease, thus significantly compromising the individual&#39;s quality of life. Detecting DM at an early stage is both challenging and critical for healthcare professionals, given that delayed diagnosis can result in difficulties in managing the progression of the disease. This study introduces an innovative stacking ensemble model for early DM detection, utilizing an ensemble of machine learning and deep learning models. Our proposed stacking model integrates multiple prediction learners, including Random Forest (RF), a Convolutional Neural Network with Long Short-Term Memory networks (CNN-LSTM), and Sequential Dense Layers (SDLs) as base learner models, with the Extreme Gradient Boosting model (XGBoost) serving as the meta-learner. Findings demonstrate that our proposed model achieves 99% accuracy on the Pima dataset and 97% accuracy on the DPD dataset in detecting diabetes mellitus. In conclusion, our model holds promise as a diagnostic tool for DM, and further testing on the types of diabetes mellitus is recommended to evaluate and enhance its performance comprehensively.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_23-Integrated_Ensemble_Model_for_Diabetes_Mellitus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Underwater Video Image Restoration and Visual Communication Optimization Based on Improved Non Local Prior Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150422</link>
        <id>10.14569/IJACSA.2024.0150422</id>
        <doi>10.14569/IJACSA.2024.0150422</doi>
        <lastModDate>2024-04-30T10:33:19.1170000+00:00</lastModDate>
        
        <creator>Tian Xia</creator>
        
        <subject>Improving non local prior algorithms; underwater video images; visual communication effect; optical characteristic processing; image quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Underwater image processing must balance image clarity restoration with a comprehensive display of underwater scenes, requiring image fusion and stitching techniques. The pixel-level fusion method operates on pixels; by fusing different image data, it eliminates stitching gaps and sudden changes in lighting intensity and preserves detailed information, thus improving the accuracy of stitched images. In the process of restoring underwater video images with non-local priors, there is still room for optimization in steps such as removing atmospheric light values, estimating transmittance, and calculating dehazed images through regularization. Based on the characteristics of Jerlov water types, water quality is classified according to the properties of suspended solids, and each channel is adjusted to the compensation space to improve the restoration algorithm. Background light estimation is used to determine the degree of image degradation, select the optimal attenuation coefficient ratio, and restore the image. The experimental results show that it is crucial to choose a ratio of attenuation coefficients that is close to the actual water quality environment being photographed. Both this model and traditional algorithms have an accuracy rate of over 99.0%, with the accuracy of this model sometimes reaching 99.9%. Pixel-level fusion and background light estimation technology optimize underwater images, improve stitching accuracy and clarity, enhance target detection and recognition, and have important value for marine exploration.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_22-Underwater_Video_Image_Restoration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Particle Swarm Optimization Performance Through CUDA and Tree Reduction Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150421</link>
        <id>10.14569/IJACSA.2024.0150421</id>
        <doi>10.14569/IJACSA.2024.0150421</doi>
        <lastModDate>2024-04-30T10:33:19.0870000+00:00</lastModDate>
        
        <creator>Hussein Younis</creator>
        
        <creator>Mujahed Eleyat</creator>
        
        <subject>Particle swarm optimization; tree reduction algorithm; parallel implementations; CUDA; GPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In this paper, we present an enhancement of Particle Swarm Optimization (PSO) performance utilizing CUDA and a tree reduction algorithm. PSO is a widely used metaheuristic algorithm that has been adapted into a CUDA version known as CPSO. The tree reduction algorithm is employed to efficiently compute the global best position. To evaluate our approach, we compared the speedup achieved by our CUDA version against the standard version of PSO, observing a maximum speedup of 37x. Additionally, we identified a linear relationship between swarm size and execution time; as the number of particles increases, so does the computational load, highlighting the efficiency of parallel implementations in reducing execution time. Our proposed parallel PSOs have demonstrated significant reductions in execution time along with improvements in convergence speed and local optimization performance, which is particularly beneficial for solving large-scale problems with high computational loads.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_21-Enhancing_Particle_Swarm_Optimization_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transmission Line Monitoring Technology Based on Compressed Sensing Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150419</link>
        <id>10.14569/IJACSA.2024.0150419</id>
        <doi>10.14569/IJACSA.2024.0150419</doi>
        <lastModDate>2024-04-30T10:33:19.0530000+00:00</lastModDate>
        
        <creator>Shuling YIN</creator>
        
        <creator>Renping YU</creator>
        
        <creator>Longzhi WANG</creator>
        
        <subject>Compressed sensing; transmission line; wireless sensor network; orthogonal wavelet transform; data reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Given wireless sensor networks&#39; significant data transmission requirements, conventional direct transmission often leads to bandwidth constraints and excessive network energy consumption. This paper proposes a transmission line monitoring technology based on compressed sensing wireless sensor networks to achieve real-time monitoring of ice-covered power lines. Grounded in compressed sensing theory, this method utilizes dual orthogonal wavelet transform sparse matrices for sparse representation of sensor data. Considering the practical requirements of power line monitoring, a data transmission model is established to implement compressed sampling transmission. The regularization orthogonal matching pursuit algorithm is employed for high-precision reconstruction of compressed data. The software and hardware components of the power line monitoring system are designed, and experiments are conducted under real-world conditions. The results demonstrate that: 1) the system operates stably with an ideal data compression effect, achieving a compression ratio of 93.191%. The absolute reconstruction errors for temperature, humidity, and wind speed sensor data are 0.064&#176;C, 0.052%, and 0.128 m/s, respectively, indicating high reconstruction accuracy and effectively avoiding transmission impacts caused by bandwidth issues. 2) In a 36-hour energy consumption loss test, compared to direct transmission, the compressed transmission mode exhibits a lower rate of battery voltage decay, with a decrease of approximately 11.18%, effectively extending the network&#39;s lifespan.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_19-Transmission_Line_Monitoring_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Cosine Similarity Algorithm on Omnibus Law Drafting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150420</link>
        <id>10.14569/IJACSA.2024.0150420</id>
        <doi>10.14569/IJACSA.2024.0150420</doi>
        <lastModDate>2024-04-30T10:33:19.0530000+00:00</lastModDate>
        
        <creator>Aristoteles </creator>
        
        <creator>Muhammad Umaruddin Syam</creator>
        
        <creator>Tristiyanto</creator>
        
        <creator>Bambang Hermanto</creator>
        
        <subject>Cosine similarity; FastAPI; Laravel; Omnibus Law</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Drafting of Omnibus Laws presents a complex challenge in legal governance, often involving the integration and consolidation of disparate legal provisions into a unified framework. In this context, the application of advanced computational techniques becomes crucial for streamlining the drafting process and ensuring coherence across the law&#39;s various components. Cosine similarity, a widely used measure in natural language processing and document analysis, offers a quantitative means to assess the similarity between different sections or articles within an Omnibus Law draft. By representing legal texts as high-dimensional vectors in a vector space model, cosine similarity enables the comparison of textual similarity based on the cosine of the angle between these vectors. Implemented with FastAPI and Laravel, cosine similarity can be a valuable tool for analyzing similarity between legal documents, especially in the context of omnibus law. Legal practitioners and researchers can use the cosine similarity measure to compare the textual content of different legal documents and identify similarities, which can aid in tasks such as legal document retrieval, clustering similar provisions, and detecting potential inconsistencies. The combination of FastAPI and Laravel provides a potent and efficient way to develop and deploy this functionality, contributing to the advancement of legal informatics and analysis. The dataset consists of Indonesian laws (Undang-Undang, UU) written in Bahasa Indonesia, spanning 1945 to 2022 and comprising a total of 1705 UU. The implemented cosine similarity yielded a recall rate of 90.10% on this corpus.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_20-Implementation_of_Cosine_Similarity_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Powered Lung Cancer Diagnosis: Harnessing IoT Medical Data and CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150418</link>
        <id>10.14569/IJACSA.2024.0150418</id>
        <doi>10.14569/IJACSA.2024.0150418</doi>
        <lastModDate>2024-04-30T10:33:19.0400000+00:00</lastModDate>
        
        <creator>Xiao Zhang</creator>
        
        <creator>Xiaobo Wang</creator>
        
        <creator>Tao Huang</creator>
        
        <creator>Jinping Sheng</creator>
        
        <subject>Diagnosis; CT images; deep learning; convolutional neural network; lung cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Currently, lung cancer poses a significant global threat, ranking among the most perilous and lethal ailments. Accurate early detection and effective treatment play pivotal roles in mitigating its mortality rates. Analyzed with deep learning techniques, CT scans offer a highly advantageous imaging modality for diagnosing lung cancer. In this study, we introduce an innovative approach employing a hybrid Deep Convolutional Neural Network (DCNN), trained on both CT scan images and medical data retrieved from IoT wearable sensors. Our method encompasses a CNN comprising 22 layers, amalgamating latent features extracted from CT scan images and IoT sensor data to enhance the detection accuracy of our model. Training our model on a balanced dataset, we evaluate its performance based on metrics including accuracy, Area Under the Curve (AUC) score, loss, and recall. Upon assessment, our method surpasses comparable approaches, exhibiting promising prospects for lung cancer diagnosis compared to alternative models.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_18-Deep_Learning_Powered_Lung_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Dynamic Prediction Algorithm in the Process of Entity Information Search for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150417</link>
        <id>10.14569/IJACSA.2024.0150417</id>
        <doi>10.14569/IJACSA.2024.0150417</doi>
        <lastModDate>2024-04-30T10:33:19.0070000+00:00</lastModDate>
        
        <creator>Tianqing Liu</creator>
        
        <subject>Internet of Things; sensors; swinging door trending (SDT); support vector machine (SVM); data dynamic prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>To address the issue of insufficient real-time capability in existing Internet search engines within the Internet of Things environment, this research investigates the architecture of Internet of Things search systems. It proposes a data dynamic prediction algorithm tailored for the process of entity information search in the Internet of Things. The study is based on the design of a data compression algorithm for the Internet of Things entity information search process using the Rotating Gate Compression Algorithm. The algorithm employs the Least Squares Support Vector Machine to dynamically predict changes in entity node states in the Internet of Things, aiming to reduce sensor node resource consumption and achieve real-time search. Finally, the research introduces an Internet of Things entity information search system based on the data dynamic algorithm. Performance test results indicate that the segmented compression algorithm designed in the study can enhance compression accuracy and compression rate. As compression accuracy increases, errors also correspondingly increase. The prediction algorithm designed in the study shows a decrease in node energy consumption as reporting cycles increase, reaching 0.2 at 5 cycles. At the 5-cycle point, the prediction errors on two research datasets are 0.5 and 7.8, respectively. The optimized data dynamic prediction algorithm in the study effectively reduces node data transmission, lowers node energy consumption, and accurately predicts node state changes to meet user search demands.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_17-Data_Dynamic_Prediction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Diagnosis Method of Common Knee Diseases Based on Subjective Symptoms and Random Forest Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150416</link>
        <id>10.14569/IJACSA.2024.0150416</id>
        <doi>10.14569/IJACSA.2024.0150416</doi>
        <lastModDate>2024-04-30T10:33:18.9930000+00:00</lastModDate>
        
        <creator>Guangjun Wang</creator>
        
        <creator>Mengxia Hu</creator>
        
        <creator>Linlin Lv</creator>
        
        <creator>Hanyuan Zhang </creator>
        
        <creator>Yining Sun</creator>
        
        <creator>Benyue Su</creator>
        
        <creator>Zuchang Ma</creator>
        
        <subject>Knee diseases; subjective symptoms; random forest algorithm; self-diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Knee diseases are common in the elderly, and timely, effective diagnosis of knee diseases is essential for treatment and rehabilitation training. In this study, we construct a diagnostic model of common knee diseases based on subjective symptoms and the random forest algorithm to enable patients&#39; initial self-diagnosis. We first constructed a questionnaire of subjective knee symptoms and set up a questionnaire system to guide users to fill out the questionnaire correctly. Clinical data collection was then carried out to obtain clinical questionnaire data. Finally, diagnostic analysis of three common knee joint diseases was carried out using the random forest machine learning method. Through leave-one-out cross-validation, the accuracies for meniscus injury, anterior cruciate ligament injury, and knee osteoarthritis are 0.79, 0.84, and 0.81, respectively; the sensitivities are 0.79, 0.84, and 0.88; and the specificities are 0.80, 0.84, and 0.79. The results show that the method achieves good self-diagnosis performance and provides a convenient and effective approach to knee joint disease screening.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_16-Research_on_Diagnosis_Method_of_Common_Knee_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Financial Markets Utilizing an Innovatively Optimized Hybrid Model: A Case Study of the Hang Seng Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150415</link>
        <id>10.14569/IJACSA.2024.0150415</id>
        <doi>10.14569/IJACSA.2024.0150415</doi>
        <lastModDate>2024-04-30T10:33:18.9770000+00:00</lastModDate>
        
        <creator>Xiaopeng YANG</creator>
        
        <subject>Financial markets; stock future trend; Hang Seng Index; Gated Recurrent Units; Aquila Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Stock trading is a highly consequential and frequently discussed subject in the realm of financial markets. Due to the volatile and unpredictable nature of stock prices, investors are perpetually seeking methods to forecast future trends in order to minimize losses and maximize profits. Nevertheless, despite the ongoing investigation of various approaches to optimize the predictive efficacy of models, it is indisputable that a method for accurately forecasting forthcoming market trends does not yet exist. A multitude of algorithms are currently being employed to forecast stock prices due to significant developments that have occurred in recent years. An innovative algorithm for predicting stock prices, a Gated Recurrent Unit combined with the Aquila Optimizer, is examined in this paper. The dataset of this research comprises Hang Seng Index stock prices collected between 2015 and the end of June 2023. In the study, several additional methods for predicting stock market movements are also detailed. A comprehensive comparative analysis of the stock price prediction performances of the aforementioned algorithms has also been carried out to offer a more in-depth analysis, and the results are displayed in an understandable tabular and graphical manner. The proposed model obtained values of 0.9934, 0.71, 143.62, and 36530.58 for R^2, MAPE, MAE, and MSE, respectively. These results proved the efficiency and accuracy of the suggested method, and it was determined that the proposed algorithm produces results with a high degree of accuracy and performs best when forecasting a time series or stock price.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_15-Prediction_of_Financial_Markets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review and Analysis of Financial Market Movements: Google Stock Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150414</link>
        <id>10.14569/IJACSA.2024.0150414</id>
        <doi>10.14569/IJACSA.2024.0150414</doi>
        <lastModDate>2024-04-30T10:33:18.9470000+00:00</lastModDate>
        
        <creator>Yiming LU</creator>
        
        <subject>Stock future trend; financial market; investment; machine learning algorithms; Google stock</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>A financial marketplace where shares of publicly listed companies are bought and sold is called the stock market. It serves as a gauge of a nation&#39;s economic health by taking into account the operations of individual businesses as well as the general business climate. The relationship between supply and demand affects stock prices. Though it can be risky, stock market investing has the potential to provide large rewards in the long run. Together with increased prediction accuracy, optimization techniques such as Biogeography-Based Optimization (BBO), the Artificial Bee Colony algorithm (ABC), and the Aquila Optimization (AO) algorithm further enhance the ability of Extreme Gradient Boosting (XGBoost) to adapt to changing market conditions. The results were 0.955, 0.966, 0.972, and 0.982 for XGBoost, BBO-XGBoost, ABC-XGBoost, and AO-XGBoost, in that order. The performance difference between AO-XGBoost and XGBoost shows how combining the model with an optimizer may enhance its performance. By comparing the output of several optimizers, the most accurate one has been selected as the model&#39;s main optimizer.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_14-Review_and_Analysis_of_Financial_Market_Movements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Building Energy Efficiency: A Hybrid Meta-Heuristic Approach for Cooling Load Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150412</link>
        <id>10.14569/IJACSA.2024.0150412</id>
        <doi>10.14569/IJACSA.2024.0150412</doi>
        <lastModDate>2024-04-30T10:33:18.9300000+00:00</lastModDate>
        
        <creator>Chenguang Wang</creator>
        
        <creator>Yanjie Zhou</creator>
        
        <creator>Libin Deng</creator>
        
        <creator>Ping Xiong</creator>
        
        <creator>Jiarui Zhang</creator>
        
        <creator>Jiamin Deng</creator>
        
        <creator>Zili Lei</creator>
        
        <subject>Building energy; cooling load; machine learning; radial basis function; self-adaptive bonobo optimizer; differential squirrel search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The research tackles the complex problem of accurately predicting cooling loads in the context of energy efficiency and building management. It presents a novel approach that increases the precision of cooling load forecasts by utilizing machine learning (ML). The main objective is to incorporate a hybridization strategy into Radial Basis Function (RBF) models, a commonly used method for cooling load prediction, to improve their effectiveness. This new method significantly increases accuracy and reliability. The resulting hybrid models, which combine two powerful optimization techniques, outperform the state-of-the-art approaches and mark a major advancement in predictive modelling. The study performs in-depth analyses to compare standalone and hybrid model configurations, guaranteeing an unbiased and thorough performance evaluation. The deliberate choice of incorporating the Self-adaptive Bonobo Optimizer (SABO) and Differential Squirrel Search Algorithm (DSSA) underscores the significance of leveraging the distinctive strengths of each optimizer. The study delves into three variations of the RBF model: RBF, RBDS, and RBSA. Among these, the RBF model integrating the SABO optimizer (RBSA) distinguishes itself with an impressive R2 value of 0.995, denoting an exceptionally close alignment with the data. Furthermore, a low Root Mean Square Error (RMSE) value of 0.700 underscores the model&#39;s remarkable precision. The research showcases the effectiveness of fusing ML techniques in the RBSA model for precise cooling load predictions. This hybrid model furnishes more dependable insights for energy conservation and sustainable building operations, thereby contributing to a more environmentally conscious and sustainable future.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_12-Enhancing_Building_Energy_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing IoT Environment by Deploying Federated Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150413</link>
        <id>10.14569/IJACSA.2024.0150413</id>
        <doi>10.14569/IJACSA.2024.0150413</doi>
        <lastModDate>2024-04-30T10:33:18.9300000+00:00</lastModDate>
        
        <creator>Saleh Alghamdi</creator>
        
        <creator>Aiiad Albeshri</creator>
        
        <subject>Internet of Things (IoT); security breaches; machine learning; Deep Learning (DL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The vast network of interconnected devices, known as the Internet of Things (IoT), produces significant volumes of data and is vulnerable to security threats. The proliferation of IoT protocols has resulted in numerous zero-day attacks, which traditional machine learning systems struggle to detect due to IoT networks&#39; complexity and the sheer volume of these attacks. This situation highlights the urgent need for developing more advanced and effective attack detection methods to address the growing security challenges in IoT environments. In this research, we propose an attack detection mechanism based on deep learning for federated learning in IoT. Specifically, we aim to detect and prevent malicious attacks in the form of model poisoning and Byzantine attacks that can compromise the accuracy and integrity of the trained model. The objective is to compare the performance of a distributed attack detection system using a DL model against a centralized detection system that uses shallow machine learning models. The proposed approach uses a distributed attack detection system that consists of multiple nodes, each with its own DL model for detecting attacks. The DL model is trained using a large dataset of network traffic to learn high-level features that can distinguish between normal and malicious traffic. The distributed system allows for efficient and scalable detection of attacks in a federated learning network within the IoT. The experiments show that the distributed attack detection system using DL outperforms centralized detection systems that use shallow machine learning models. The proposed approach has the potential to improve the security of the IoT by detecting attacks more effectively than traditional machine learning systems. However, there are limitations to the approach, such as the need for a large dataset for training the DL model and the computational resources required for the distributed system.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_13-Securing_IoT_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Weeding Systems for Weed Detection and Removal in Garlic / Ginger Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150411</link>
        <id>10.14569/IJACSA.2024.0150411</id>
        <doi>10.14569/IJACSA.2024.0150411</doi>
        <lastModDate>2024-04-30T10:33:18.9000000+00:00</lastModDate>
        
        <creator>Tsubasa Nakabayashi</creator>
        
        <creator>Kohei Yamagishi</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Weed detection; weeding; mask R-CNN; agriculture robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The global agriculture industry has faced various problems, such as rapid population growth and climate change. Among several countries, Japan has a declining agricultural workforce. To solve this problem, the Japanese government aims to realize “Smart agriculture” that applies information and communication technology, artificial intelligence, and robotics. Smart agriculture requires the development of robot technology to perform weeding and other labor-intensive agricultural tasks. Robotic weeding consists of an object detection method using machine learning to classify weeds and crops and an autonomous weeding system using robot hands and lasers. However, the approach used for these methods changes depending on the crop growth. The weeding system must consider the combination according to crop growth. This study addresses weed detection and autonomous weeding in crop-weed mixed ridges, such as garlic and ginger fields. We first develop a weed detection method using Mask R-CNN, which can detect individual weeds by instance segmentation from color images captured by an RGB-D camera. The proposed system can obtain weed coordinates in physical space based on the detected weed region and the depth image captured by the camera. Subsequently, we propose an approach to guide the weeding manipulator toward the detected weed coordinates. This paper integrates weed detection and autonomous weeding through these two proposed methods. We evaluate the performance of the Mask R-CNN trained on images taken in an actual field and demonstrate that the proposed autonomous weeding system works on a reproduced ridge with artificial weeds similar to garlic and weed leaves.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_11-Automated_Weeding_Systems_for_Weed_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Optimal Learning Approaches for Nursing Students in Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150410</link>
        <id>10.14569/IJACSA.2024.0150410</id>
        <doi>10.14569/IJACSA.2024.0150410</doi>
        <lastModDate>2024-04-30T10:33:18.8830000+00:00</lastModDate>
        
        <creator>Samira Fadili</creator>
        
        <creator>Merouane Ertel</creator>
        
        <creator>Aziz Mengad</creator>
        
        <creator>Said Amali</creator>
        
        <subject>Learning styles; nursing students; predictive modeling; classification; personalized education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In nursing education, recognizing and accommodating diverse learning styles is imperative for the development of effective educational programs and the success of nursing students. This article addresses the crucial challenge of classifying the learning styles of nursing students in Morocco, where contextual studies are limited. To address this research gap, a contextual approach is proposed, aiming to develop a predictive model of the most appropriate learning approach (observational, experiential, reflective and active) for each nursing student in Morocco. This model incorporates a comprehensive set of variables such as age, gender, education, work experience, preferred learning strategies, engagement in social activities, attitudes toward failure, and self-assessment preferences. We used four multivariate machine learning algorithms, namely SVM, Tree, Neural Network, and Naive Bayes, to determine the most reliable and effective classifiers. The results show that neural network and decision tree classifiers are particularly powerful in predicting the most suitable learning approach for each nursing student. This research endeavors to enhance the success of nursing students and raise the overall quality of healthcare delivery in the country by tailoring educational programs to match individual learning styles.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_10-Predicting_Optimal_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Approaches for Access Level Modelling of Employees in an Organization Through Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150409</link>
        <id>10.14569/IJACSA.2024.0150409</id>
        <doi>10.14569/IJACSA.2024.0150409</doi>
        <lastModDate>2024-04-30T10:33:18.8530000+00:00</lastModDate>
        
        <creator>Priyanka C Hiremath</creator>
        
        <creator>Raju G T</creator>
        
        <subject>Access control; machine learning; employee behavior modeling; data analysis; organizational performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the contemporary business landscape, organizational trustworthiness is of utmost importance. Employee behavior, a pivotal aspect of trustworthiness, undergoes analysis and prediction through data science methodologies. Simultaneously, effective control over employee access within an organization is imperative for security and privacy assurance. This research proposes an innovative approach to model employee access levels using Geo-Social data and machine learning techniques like Linear Regression, K-Nearest Neighbours, Decision Tree, Random Forest, XGBoost, and Multi-Layered Perceptron. The data, sourced from social and geographical realms, encompasses details on employee geography, navigation preferences, spatial exploration, and choice set formations. Utilizing this information, a behavioral model is constructed to assess employee trustworthiness, categorizing them into access levels: low, moderate, high, and very high. The model&#39;s periodic review ensures adaptive access level adjustments based on evolving behavioral patterns. The proposed approach not only cultivates a more trustworthy organizational network but also furnishes a precise and reliable trustworthiness evaluation. This refinement contributes to heightened organizational coherence, increased employee commitment, and reduced turnover. Additionally, the approach ensures enhanced control over employee access, mitigating the risks of data breaches and information leaks by restricting the access of employees with lower trustworthiness.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_9-Novel_Approaches_for_Access_Level_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DUF-Net: A Retinal Vessel Segmentation Method Integrating Global and Local Features with Skeleton Fitting Assistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150408</link>
        <id>10.14569/IJACSA.2024.0150408</id>
        <doi>10.14569/IJACSA.2024.0150408</doi>
        <lastModDate>2024-04-30T10:33:18.8200000+00:00</lastModDate>
        
        <creator>Xuelin Xu</creator>
        
        <creator>Ren Lin</creator>
        
        <creator>Jianwei Chen</creator>
        
        <creator>Huabin He</creator>
        
        <subject>Fundus image; vessel segmentation; skeleton fitting; data augmentation; patch classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Assisted evaluation through retinal vessel segmentation facilitates the early prevention and diagnosis of retinal lesions. To address the scarcity of medical samples, current research commonly employs image patching techniques to augment the training dataset. However, the vascular features in fundus images exhibit a complex distribution, and patch-based methods frequently encounter the challenge of isolated patches lacking contextual information, resulting in issues such as vessel discontinuity and loss. Additionally, retinal images contain more samples with strong-contrast vessels than with weak-contrast vessels. Moreover, within individual patches, there are more pixels of strong-contrast vessels than of weak-contrast vessels, leading to lower segmentation accuracy for small vessels. Hence, this study introduces a patch-based deep neural network method for retinal vessel segmentation to address these issues. Firstly, a novel architecture, termed Double U-Net with a Feature Fusion Module (DUF-Net), is proposed. This network structure effectively supplements missing contextual information and alleviates the problem of vessel discontinuity. Furthermore, an algorithm is introduced to classify vascular patches based on their contrast levels. Subsequently, conventional data augmentation methods were employed to achieve a balance in the number of samples with strong- and weak-contrast vessels. Additionally, a method with skeleton fitting assistance is introduced to improve the segmentation of vessels with weak contrast. Finally, the proposed method is evaluated across four publicly available datasets: DRIVE, CHASE_DB1, STARE, and HRF. The results demonstrate that the proposed method effectively ensures the continuity of segmented blood vessels while maintaining accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_8-DUF_Net_A_Retinal_Vessel_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Student Performance Prediction: A Data Mining Approach with MLPC Model and Metaheuristic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150407</link>
        <id>10.14569/IJACSA.2024.0150407</id>
        <doi>10.14569/IJACSA.2024.0150407</doi>
        <lastModDate>2024-04-30T10:33:18.8030000+00:00</lastModDate>
        
        <creator>Qing Hai</creator>
        
        <creator>Changshou Wang</creator>
        
        <subject>Educational data mining; multilayer perceptron classification; pelican optimization algorithm; crystal structure algorithm; student performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Given the information stored in educational databases, automated prediction of learner achievement is essential. The field of educational data mining (EDM) handles this task. EDM creates techniques for analyzing data gathered from educational settings. These techniques are applied to understand students and the environment in which they learn. Institutions of higher learning are frequently interested in predicting how many students will pass or fail required courses. Prior research has shown that many researchers focus only on selecting the right algorithm for classification, ignoring issues that arise throughout the data mining stage, such as classification error, class imbalance, and high-dimensional data, among other issues. These kinds of issues decrease the model&#39;s accuracy. This study emphasizes the application of Multilayer Perceptron Classification (MLPC) for supervised learning to predict student performance, with various popular classification methods being employed in this field. Furthermore, an ensemble technique is utilized to enhance the accuracy of the classifier. The goal of the collaborative approach is to address forecasting and categorization issues. This study demonstrates how crucial algorithm fine-tuning and data pre-treatment are for addressing data quality concerns. The optimizers explored in this study are the Pelican Optimization Algorithm (POA) and the Crystal Structure Algorithm (CSA). In this research, a hybrid approach is embraced, integrating the mentioned optimizers to facilitate the development of MLPO and MLCS. Based on the findings, MLPO2 demonstrated superior efficiency compared to the other methods, achieving an impressive 95.78% success rate.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_7-Optimizing_Student_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Accuracy of Cloud-based 3D Human Pose Estimation Tools: A Case Study of MOTiON by RADiCAL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150406</link>
        <id>10.14569/IJACSA.2024.0150406</id>
        <doi>10.14569/IJACSA.2024.0150406</doi>
        <lastModDate>2024-04-30T10:33:18.7730000+00:00</lastModDate>
        
        <creator>Hamza Khalloufi</creator>
        
        <creator>Mohamed Zaifri</creator>
        
        <creator>Abdessamad Benlahbib</creator>
        
        <creator>Fatima Zahra Kaghat</creator>
        
        <creator>Ahmed Azough</creator>
        
        <subject>3D; human pose estimation; animation; evaluation; motion tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>The use of 3D Human Pose Estimation (HPE) has become increasingly popular in the field of computer vision due to its various applications in human-computer interaction, animation, surveillance, virtual reality, video interpretation, and gesture recognition. However, traditional sensor-based motion capture systems are limited by their high cost and the need for multiple cameras and physical markers. To address these limitations, cloud-based HPE tools, such as DeepMotion and MOTiON by RADiCAL, have been developed. This study presents the first scientific evaluation of MOTiON by RADiCAL, a cloud-based 3D HPE tool based on deep learning and cloud computing. The evaluation was conducted using the CMU dataset, which was filtered and cleaned for this purpose. The results were compared to the ground truth using two metrics, the Mean Per Joint Position Error (MPJPE) and the Percentage of Correct Keypoints (PCK). The results showed an accuracy of 98 mm MPJPE and 96% PCK for most scenarios and genders. This study suggests that cloud-based HPE tools such as MOTiON by RADiCAL can be a suitable alternative to traditional sensor-based motion capture systems for simple scenarios with slow movements and little occlusion.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_6-Evaluating_the_Accuracy_of_Cloud_based_3D_Human_Pose.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Optimization Scheduling Consistency Algorithm for Smart Grid and its Application in Cost Control of Power Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150405</link>
        <id>10.14569/IJACSA.2024.0150405</id>
        <doi>10.14569/IJACSA.2024.0150405</doi>
        <lastModDate>2024-04-30T10:33:18.7570000+00:00</lastModDate>
        
        <creator>Lihua Shang</creator>
        
        <creator>Meijiao Sun</creator>
        
        <creator>Cheng Pan</creator>
        
        <creator>Xiaoqiang San</creator>
        
        <subject>Distributed consistency algorithm; convex optimization; economic dispatch; smart grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>There are problems such as low scalability and low convergence accuracy in the economic dispatch of smart grids. To address these situations, this study considers various constraints such as supply-demand balance constraints, ramping constraints, and capacity constraints based on the unified consensus algorithm of multi-agent systems. By using Lagrange duality theory and the interior penalty function method, the optimization of smart grid economic dispatch is transformed into an unconstrained optimization problem, and a distributed second-order consistency algorithm is proposed to solve the model. IEEE 6-bus system testing showed that the generator cost of the distributed second-order consistency algorithm in the first, second, and third time periods was 2.2475 million yuan, 5.8236 million yuan, and 3.7932 million yuan, respectively. Compared to the first-order consistency algorithm, the generator cost during the corresponding time periods changed by 10.23%, 11.36%, and 13.36%. The actual total output reached supply-demand balance in a short period of time under changes in renewable energy, while maintaining supply-demand balance throughout the scheduling process. The actual total output during low, peak, and off-peak periods was 99MW, 147MW, and 120MW, respectively. This study uses the distributed second-order consistency algorithm to solve the economic dispatch model of the smart grid, achieving higher convergence accuracy and speed. The study is limited by the assumption that the cost functions of each power generation unit are quadratic convex cost functions under ideal conditions. This economic dispatch model may not accurately reflect practical applications.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_5-Distributed_Optimization_Scheduling_Consistency_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Telemedicine in Media Coverage Pre- and Post-COVID-19 using Unsupervised Latent Dirichlet Topic Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150404</link>
        <id>10.14569/IJACSA.2024.0150404</id>
        <doi>10.14569/IJACSA.2024.0150404</doi>
        <lastModDate>2024-04-30T10:33:18.7100000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Telemedicine; COVID-19; medical law; healthcare transformation; LDA topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Telemedicine, driven by technology, has become a game-changer in healthcare, with the COVID-19 pandemic amplifying its significance by necessitating remote healthcare solutions. This study explores the evolution of telemedicine through news big data analysis. Our research encompassed a vast dataset from 51 media outlets (total 28,372 articles), including national and regional dailies, economic newspapers, broadcasters, and professional journals. Using LDA analysis, we delved into pre- and post-pandemic telemedicine trends comprehensively. A crucial revelation was the prominence of &quot;medical law&quot; in telemedicine discussions, underscoring the need for legal reforms. Keywords like &quot;artificial intelligence&quot; and &quot;big data&quot; underscored technology&#39;s pivotal role. Post-pandemic, keywords like &quot;COVID-19,&quot; &quot;online healthcare,&quot; and &quot;telemedicine&quot; surged, reflecting the pandemic&#39;s impact on remote healthcare reliance. These keywords&#39; increased frequency highlights the pandemic&#39;s transformative influence. This study stresses addressing healthcare&#39;s legal constraints and maximizing technology&#39;s potential. To seamlessly integrate telemedicine, policy support and institutional backing are imperative. In summary, telemedicine&#39;s rise, propelled by COVID-19, signifies a healthcare paradigm shift. This study sheds light on its trajectory, emphasizing legal reforms, tech innovation, and pandemic-induced changes. The post-pandemic era must prioritize informed policy decisions for telemedicine&#39;s effective and accessible implementation.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_4-Comparative_Analysis_of_Telemedicine_in_Media_Coverage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assisted Requirements Selection by Clustering using an Analytical Hierarchical Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150403</link>
        <id>10.14569/IJACSA.2024.0150403</id>
        <doi>10.14569/IJACSA.2024.0150403</doi>
        <lastModDate>2024-04-30T10:33:18.7100000+00:00</lastModDate>
        
        <creator>Shehzadi Nazeeha Saleem</creator>
        
        <creator>Linda Mohaisen</creator>
        
        <subject>Requirements prioritization; next release plan; software product planning; decision support; MoSCoW; AHP; k-Means; GMM; BIRCH; PAM; hierarchical; clustering; clusters evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>This research investigates the fusion of the Analytic Hierarchy Process (AHP) with clustering techniques to enhance project outcomes. Two quantitative datasets comprising 20 and 100 software requirements are analyzed. A novel AHP dataset is developed to impartially evaluate clustering strategies. Five clustering algorithms (K-means, Hierarchical, PAM, GMM, BIRCH) are employed, providing diverse analytical tools. Cluster quality and coherence are assessed using evaluation criteria including the Dunn Index, Silhouette Index, and Calinski Harabaz Index. The MoSCoW technique organizes requirements into clusters, prioritizing critical requirements. This strategy combines strategic prioritization with quantitative analysis, facilitating objective evaluation of clustering results and resource allocation based on requirement priority. The study demonstrates how clustering can prioritize software requirements and integrate advanced data analysis into project management, showcasing the transformative potential of converging AHP with clustering in software engineering.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_3-Assisted_Requirements_Selection_by_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Thoracic Abnormalities from Chest X-Ray Images with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150402</link>
        <id>10.14569/IJACSA.2024.0150402</id>
        <doi>10.14569/IJACSA.2024.0150402</doi>
        <lastModDate>2024-04-30T10:33:18.6330000+00:00</lastModDate>
        
        <creator>Usman Nawaz</creator>
        
        <creator>Muhammad Ummar Ashraf</creator>
        
        <creator>Muhammad Junaid Iqbal</creator>
        
        <creator>Muhammad Asaf</creator>
        
        <creator>Mariam Munsif Mir</creator>
        
        <creator>Usman Ahmed Raza</creator>
        
        <creator>Bilal Sharif</creator>
        
        <subject>Localization; classification; ensemble learning; YOLOV5; VINBigData; thoracic abnormalities; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>Chest X-Rays (CXRs) are widely used by radiologists worldwide to detect chest diseases, particularly anomalies of the heart and lung regions. Examining large volumes of X-rays at busy medical facilities is time-consuming and costly, and accurate detection demands expert skill and attention. In this research, multi-level Deep Learning for CXR disease detection has been used to address these issues. Automatically spotting these anomalies with high precision will significantly improve practical diagnosis processes. However, the scarcity of efficient public databases and benchmark analyses makes it hard to compare and characterize appropriate diagnosis techniques. The publicly accessible VinBigData dataset has been used to address these difficulties and to study the performance of established multi-level Deep Learning architectures on various abnormalities. High accuracy in CXR abnormality detection on this dataset has been achieved. The focus of this research is to develop a multi-level Deep Learning approach for localization and classification of thoracic abnormalities from chest radiographs. The proposed technique automatically localizes and categorizes fourteen types of thoracic abnormalities from chest radiographs. The dataset consists of 18,000 scans annotated by experienced radiologists. The YOLOv5 model has been trained with fifteen thousand independently labeled images and evaluated on a test set of three thousand images. These annotations were collected via VinBigData&#39;s web-based platform, VinLab. Image preprocessing techniques are utilized for noise removal, image sequence normalization, and contrast enhancement. Finally, Deep Ensemble approaches are used for feature extraction and classification of thoracic abnormalities from chest radiographs.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_2-Classification_of_Thoracic_Abnormalities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Traditional and Machine Learning Methods in Forecasting the Stock Markets of China and the US</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150401</link>
        <id>10.14569/IJACSA.2024.0150401</id>
        <doi>10.14569/IJACSA.2024.0150401</doi>
        <lastModDate>2024-04-30T10:33:18.4600000+00:00</lastModDate>
        
        <creator>Shangshang Jin</creator>
        
        <subject>Machine learning; Holt&#39;s LES; SVR; LSTM; GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(4), 2024</description>
        <description>In the volatile and uncertain financial markets of the post-COVID-19 era, our study conducts a comparative analysis of traditional econometric models—specifically, the AutoRegressive Integrated Moving Average (ARIMA) and Holt&#39;s Linear Exponential Smoothing (Holt&#39;s LES)—against advanced machine learning techniques, including Support Vector Regression (SVR), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRU). Focused on the daily stock prices of the S&amp;P 500 and SSE Index, the study utilizes a suite of metrics such as R-squared, RMSE, MAPE, and MAE to evaluate the forecasting accuracy of these methodologies. This approach allows us to explore how each model fares in capturing the complex dynamics of stock market movements in major economies like the U.S. and China amidst ongoing market fluctuations instigated by the pandemic. The findings reveal that while traditional models like ARIMA demonstrate strong predictive accuracy over short-term horizons, LSTM networks excel in capturing complex, non-linear patterns in the data, showcasing superior performance over longer forecast horizons. This nuanced comparison highlights the strengths and limitations of each model, with LSTM emerging as the most effective in navigating the unpredictable dynamics of post-pandemic financial markets. Our results offer crucial insights into optimizing forecasting methodologies for stock price predictions, aiding investors, policymakers, and scholars in making informed decisions amidst ongoing market challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No4/Paper_1-A_Comparative_Analysis_of_Traditional_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Low-Resource Question-Answering Performance Through Word Seeding and Customized Refinement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503138</link>
        <id>10.14569/IJACSA.2024.01503138</id>
        <doi>10.14569/IJACSA.2024.01503138</doi>
        <lastModDate>2024-03-30T14:57:43.0330000+00:00</lastModDate>
        
        <creator>Hariom Pandya</creator>
        
        <creator>Brijesh Bhatt</creator>
        
        <subject>Embedding learning; words seeding; bilingual dataset generation; low-resource question-answering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The state-of-the-art approaches in Question-Answering (QA) systems necessitate extensive supervised training datasets. In low-resource languages (LRL), the scarcity of data poses a bottleneck, and the manual annotation of labeled data is a rigorous process. Addressing this challenge, some recent efforts have explored cross-lingual or multilingual QA learning by leveraging training data from resource-rich languages (RRL). However, the efficiency of such approaches relies on syntactic compatibility between languages. This paper introduces an innovative method that involves seeding LRL data into RRL to create a bilingual supervised corpus while preserving the syntactic structure of the RRL. The method employs the translation and transliteration of selected parts-of-speech (POS) category words. Additionally, the paper also proposes a customized approach to fine-tune the models using bilingual data. Employing the bilingual data and the proposed fine-tuning approach, the most successful model has achieved a 75.62 F1 score on the XQuAD Hindi dataset and a 68.92 F1 score on the MLQA Hindi dataset in a zero-shot architecture. In the experiments conducted using a few-shot learning setup, the highest F1 scores of 79.17 on the XQuAD Hindi dataset and 70.42 on the MLQA Hindi dataset have been achieved.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_138-Enhancing_Low_Resource_Question_Answering_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unveiling the Dynamic Landscape of Malware Sandboxing: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503137</link>
        <id>10.14569/IJACSA.2024.01503137</id>
        <doi>10.14569/IJACSA.2024.01503137</doi>
        <lastModDate>2024-03-30T14:57:43.0330000+00:00</lastModDate>
        
        <creator>Elhaam Debas</creator>
        
        <creator>Norah Alhumam</creator>
        
        <creator>Khaled Riad</creator>
        
        <subject>Malware analysis; threat hunting; security operations; machine learning; cutting-edge AI; sandboxing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In contemporary times, the landscape of malware analysis has advanced into an era of sophisticated threat detection. Today’s malware sandboxes conduct rudimentary analyses and have evolved to incorporate cutting-edge artificial intelligence and machine learning capabilities. These advancements empower them to discern subtle anomalies and recognize emerging threats with a heightened level of accuracy. Moreover, malware sandboxes have adeptly adapted to counteract evasion tactics, creating a more realistic and challenging environment for malicious entities attempting to detect and evade analysis. This paper delves into the maturation of malware sandbox technology, tracing its progression from basic analysis to the intricate realm of advanced threat hunting. At the core of this evolution is the instrumental role played by malware sandboxes in providing a secure and dynamic environment for the in-depth examination of malicious code, contributing significantly to the ongoing battle against evolving cyber threats. In addressing the ongoing challenges of evasive malware detection, the focus lies on advancing detection mechanisms, leveraging machine learning models, and evolving malware sandboxes to create adaptive environments. Future efforts should prioritize the creation of comprehensive datasets, distinguish between legitimate and malicious evasion techniques, enhance detection of unknown tactics, optimize execution environments, and enable adaptability to zero-day malware through efficient learning mechanisms, thereby fortifying cybersecurity defences against emerging threats.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_137-Unveiling_the_Dynamic_Landscape_of_Malware_Sandboxing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepSL: Deep Neural Network-based Similarity Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503136</link>
        <id>10.14569/IJACSA.2024.01503136</id>
        <doi>10.14569/IJACSA.2024.01503136</doi>
        <lastModDate>2024-03-30T14:57:43.0170000+00:00</lastModDate>
        
        <creator>Mohamedou Cheikh Tourad</creator>
        
        <creator>Abdali Abdelmounaim</creator>
        
        <creator>Mohamed Dhleima</creator>
        
        <creator>Cheikh Abdelkader Ahmed Telmoud</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <subject>Similarity learning; Siamese networks; MCESTA; triplet loss; similarity metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The quest for a top-performing similarity metric is inherently task-specific, with no universally &quot;great&quot; metric applicable across all domains. Notably, the efficacy of a similarity metric is often contingent on the nature of the task and the characteristics of the data at hand. This paper introduces an innovative mathematical model called MCESTA, a versatile and effective technique designed to enhance similarity learning through the combination of multiple similarity functions. Each function within it is assigned a specific weight, tailored to the requirements of the given task and data type. This adaptive weighting mechanism enables it to outperform conventional methods by providing a more nuanced approach to measuring similarity. The technique demonstrates significant enhancements in numerous machine learning tasks, highlighting the adaptability and effectiveness of our model in diverse applications.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_136-DeepSL_Deep_Neural_Network_based_Similarity_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Screening Cyberattacks and Fraud via Heterogeneous Layering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503135</link>
        <id>10.14569/IJACSA.2024.01503135</id>
        <doi>10.14569/IJACSA.2024.01503135</doi>
        <lastModDate>2024-03-30T14:57:43.0000000+00:00</lastModDate>
        
        <creator>Abdulrahman Alahmadi</creator>
        
        <subject>Fraud; Internet of Things (IoT); Deep Learning (DL); ensemble; stacking; cyberattack; Machine Learning (ML)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the Internet of Things (IoT) age, intelligent equipment is employed to provide effective and dependable applications. IoT devices can sense and supply extensive information while also intelligently processing that data. Data systems, control systems, and sensing are growing increasingly vital in contemporary manufacturing processes. The number of IoT devices and methods in use is growing, which has culminated in a rise in attacks. Such attacks have the potential to interrupt international operations and cause major financial losses. Multiple methods, including Machine Learning (ML) and Deep Learning (DL), are being utilized for identifying cyberattacks. In this investigation, the researchers offer an ensemble stacking approach, a strong ML strategy, for detecting attacks on the Internet of Things with excellent accuracy. Tests were carried out using three distinct datasets: credit card data, NSL-KDD, and UNSW. The suggested stacked ensemble classifier outperformed the individual base classifiers. The results show that the cyberattack detection model in this research achieved a 95.15% accuracy, while the credit card fraud detection model achieved a 93.50% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_135-Screening_Cyberattacks_and_Fraud_via_Heterogeneous_Layering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Various Machine Learning Techniques for Diabetes Mellitus Feature Selection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503134</link>
        <id>10.14569/IJACSA.2024.01503134</id>
        <doi>10.14569/IJACSA.2024.01503134</doi>
        <lastModDate>2024-03-30T14:57:42.9870000+00:00</lastModDate>
        
        <creator>Alaa Sheta</creator>
        
        <creator>Walaa H. Elashmawi</creator>
        
        <creator>Ahmad Al-Qerem</creator>
        
        <creator>Emad S. Othman</creator>
        
        <subject>Diabetes; machine learning; random forest; SMOTE technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Diabetes mellitus is a chronic disease affecting over 38.4 million adults worldwide. Unfortunately, 8.7 million were undiagnosed. Early detection and diagnosis of diabetes can save millions of people’s lives. Significant benefits can be achieved if we have the means and tools for the early diagnosis and treatment of diabetes, since it can reduce the rate of cardiovascular disease and mortality. It is urgently necessary to explore computational methods and machine learning for possible assistance in the diagnosis of diabetes to support physician decisions. This research utilizes machine learning to diagnose diabetes based on several selected features collected from patients. This research provides a complete process for data handling and pre-processing, feature selection, model development, and evaluation. Among the models tested, our results reveal that Random Forest performs best, with an accuracy of 0.945. This emphasizes Random Forest’s efficiency in helping to precisely diagnose and reduce the risk of diabetes.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_134-Utilizing_Various_Machine_Learning_Techniques_for_Diabetes_Mellitus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Air Traffic Departure Sequence According to the Standard Instrument Departures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503133</link>
        <id>10.14569/IJACSA.2024.01503133</id>
        <doi>10.14569/IJACSA.2024.01503133</doi>
        <lastModDate>2024-03-30T14:57:42.9700000+00:00</lastModDate>
        
        <creator>Abdelmounaime Bikir</creator>
        
        <creator>Otmane Idrissi</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <creator>Mohamed Qbadou</creator>
        
        <subject>Air traffic management; standard instrument departure; departure traffic sequencing; genetic algorithm; heuristic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Efficiently sequencing departure traffic remains one of the critical parts of air traffic management today. It not only reduces delays and congestion at hold points, but it also enhances airport operations, improves traffic planning, and increases capacity. This research paper proposes an approach that employs a genetic algorithm (GA) to help air traffic controllers organize a sequence for departure traffic based on the standard instrument departure (SID) configuration. A scenario with randomly assigned types, SIDs, and departure times was applied to a set of aircraft in a terminal area with a four-SID configuration to assess the performance of the suggested GA. Subsequently, a comparison with the standard method of First Come First Served (FCFS) was conducted. The testing data revealed promising results in terms of the total time spent reaching a specified altitude after takeoff.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_133-An_Optimized_Air_Traffic_Departure_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting and Visualizing Implementation Feature Interactions in Extracted Core Assets of Software Product Line</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503132</link>
        <id>10.14569/IJACSA.2024.01503132</id>
        <doi>10.14569/IJACSA.2024.01503132</doi>
        <lastModDate>2024-03-30T14:57:42.9530000+00:00</lastModDate>
        
        <creator>Hamzeh Eyal Salman</creator>
        
        <creator>Yaqin Al-Ma’aitah</creator>
        
        <creator>Abdelhak-Djamel Seriai</creator>
        
        <subject>Unwanted feature interaction; core assets; extractive approach; visualization; shared artifacts; implementation dependency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Recently, software products have played a vital role in our daily lives, having a significant impact on industries and the economy. Software product line engineering is an engineering strategy that allows for the systematic reuse and development of a set of software products simultaneously, rather than just one software product at a time. This strategy mainly relies on feature composition to generate multiple new software products. Unwanted feature interactions, where the implementations of multiple features hinder each other, are a key challenge in this strategy, as they can lead to performance degradation and unexpected behaviors. In this article, we propose an approach to detect and visualize all feature interactions early. Our approach depends on an unsupervised clustering technique called formal concept analysis to achieve this goal. The effectiveness of the proposed approach is evaluated by applying it to a large benchmark case study in this domain. The results indicate that the proposed approach effectively detects and visualizes all interacting features. It also saves developer effort in detecting interacting features by between 67% and 93%.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_132-Detecting_and_Visualizing_Implementation_Feature_Interactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unified Access Management for Digital Evidence Storage: Integrating Attribute-based and Role-based Access Control with XACML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503131</link>
        <id>10.14569/IJACSA.2024.01503131</id>
        <doi>10.14569/IJACSA.2024.01503131</doi>
        <lastModDate>2024-03-30T14:57:42.9400000+00:00</lastModDate>
        
        <creator>Ayu Maulina</creator>
        
        <creator>Zulfany Erlisa Rasjid</creator>
        
        <subject>ABAC; RBAC; digital evidence storage; XACML; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Digital evidence is stored in digital evidence storage. An access control system is crucial in situations where not all users may access digital evidence, ensuring that each user’s access is limited to what is essential for them to do their jobs. As a result, access control must be included. Role-based access control (RBAC) and attribute-based access control (ABAC) are two of the several varieties of access control. In prior research, only the ABAC model has been applied to digital evidence storage systems. In order to obtain more precise findings, some academics have suggested combining these two models. In light of this, this study proposes a hybrid paradigm for digital evidence storage that combines the key components of both ABAC and RBAC, utilizing the eXtensible Access Control Markup Language (XACML) throughout the policy statement creation process. XACML is a language that uses the XML format to specify RBAC and ABAC rules. The study’s findings demonstrate that the ABAC and RBAC models can function in accordance with the developed permit and deny test scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_131-Unified_Access_Management_for_Digital_Evidence_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Blockchain Neighbor Selection Framework Based on Agglomerative Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503130</link>
        <id>10.14569/IJACSA.2024.01503130</id>
        <doi>10.14569/IJACSA.2024.01503130</doi>
        <lastModDate>2024-03-30T14:57:42.9400000+00:00</lastModDate>
        
        <creator>Marwa F. Mohamed</creator>
        
        <creator>Mostafa Elkhouly</creator>
        
        <creator>Safa Abd El-Aziz</creator>
        
        <creator>Mohamed Tahoun</creator>
        
        <subject>Blockchain; scalability; agglomerative clustering; broadcasting; optimized neighbor selection; minimum spanning tree; parallel processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Blockchain-based decentralized applications have garnered significant attention and have been widely deployed in recent years. However, blockchain technology faces several challenges, such as limited transaction throughput, large blockchain sizes, scalability, and consensus protocol limitations. This paper introduces an efficient framework to accelerate broadcast efficiency and enhance the blockchain system’s throughput by reducing block propagation time. It addresses these concerns by proposing a dynamic and optimized Blockchain Neighbor Selection Framework (BNSF) based on agglomerative clustering. The main idea behind the BNSF is to divide the network into clusters and select a leader node for each cluster. Each leader node resolves the Minimum Spanning Tree (MST) problem for its cluster in parallel. Once these individual MSTs are connected, they form a comprehensive MST for the entire network, where nodes obtain optimal neighbors to facilitate the process of block propagation. The evaluation of BNSF showed superior performance compared to neighbor selection solutions such as Dynamic Optimized Neighbor Selection Algorithm (DONS), Random Neighbor Selection (RNS), and Neighbor Selection based on Round Trip Time (RTT-NS). Furthermore, BNSF significantly reduced the block propagation time, surpassing DONS, RTT-NS, and RNS by 51.14%, 99.16%, and 99.95%, respectively. The BNSF framework also achieved an average MST calculation time 27.92% lower than the DONS algorithm.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_130-An_Efficient_Blockchain_Neighbor_Selection_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video-based Domain Generalization for Abnormal Event and Behavior Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503129</link>
        <id>10.14569/IJACSA.2024.01503129</id>
        <doi>10.14569/IJACSA.2024.01503129</doi>
        <lastModDate>2024-03-30T14:57:42.9230000+00:00</lastModDate>
        
        <creator>Salma Kammoun Jarraya</creator>
        
        <creator>Alaa Atallah Almazroey</creator>
        
        <subject>Domain generalization; abnormal event; abnormal behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Surveillance cameras have been widely deployed in public and private areas in recent years to enhance security and ensure public safety, necessitating the monitoring of unforeseen incidents and behaviors. An intelligent automated system is essential for detecting anomalies in video scenes to save the time and cost associated with manual detection by laborers monitoring displays. This study introduces a deep learning method to identify abnormal events and behaviors in surveillance footage of crowded areas, utilizing a scene-based domain generalization strategy. By utilizing the keyframe selection approach, keyframes containing relevant information are extracted from video frames. The chosen keyframes are utilized to create a spatio-temporal entropy template that reflects the motion area. The acquired template is then fed into the pre-trained AlexNet network to extract high-level features. The study utilizes the ReliefF feature selection approach to choose suitable features, which then serve as input to a Support Vector Machine (SVM) classifier. The model is assessed using six available datasets and two datasets built in this research, containing videos of normal and abnormal events and behaviors. The study found that the proposed method, utilizing domain generalization, surpassed state-of-the-art methods in terms of detection accuracy, achieving accuracy ranging from 87.5% to 100%. It also demonstrated the model’s effectiveness in detecting anomalies from various domains with an accuracy rate of 97.13%.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_129-Video-based Domain Generalization for Abnormal Event.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Stability Analysis of Switched Neutral Delayed Systems with Parameter Uncertainties</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503128</link>
        <id>10.14569/IJACSA.2024.01503128</id>
        <doi>10.14569/IJACSA.2024.01503128</doi>
        <lastModDate>2024-03-30T14:57:42.9070000+00:00</lastModDate>
        
        <creator>Nidhal Khorchani</creator>
        
        <creator>Rafika El Harabi</creator>
        
        <creator>Wiem Jebri Jemai</creator>
        
        <creator>Hassen Dahman</creator>
        
        <subject>Switched neutral systems; parameter uncertainties; delay-dependent; robust stability; multiple quadratic Lyapunov-Krasovskii; LMI technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>A neutral time-delay system is a class of system that exhibits delays in both the state values and their derivatives. In this case, it is critical to maintain system stability. The focus of this paper is the stability investigation of uncertain switched neutral systems with state time delays. A novel sufficient condition, in terms of the feasibility of Linear Matrix Inequalities (LMIs), is offered to guarantee the global asymptotic stability of this category of systems with parameter uncertainties, based on the Lyapunov-Krasovskii functional method. Additionally, robustness against errors and disturbances can be ensured using Multiple Quadratic Lyapunov Functions (MQLFs). The designed method’s effectiveness is proven through a numerical example.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_128-Robust_Stability_Analysis_of_Switched_Neutral_Delayed_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning-based Solution for Monitoring of Converters in Smart Grid Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503127</link>
        <id>10.14569/IJACSA.2024.01503127</id>
        <doi>10.14569/IJACSA.2024.01503127</doi>
        <lastModDate>2024-03-30T14:57:42.8930000+00:00</lastModDate>
        
        <creator>Umaiz Sadiq</creator>
        
        <creator>Fatma Mallek</creator>
        
        <creator>Saif Ur Rehman</creator>
        
        <creator>Rao Muhammad Asif</creator>
        
        <creator>Ateeq Ur Rehman</creator>
        
        <creator>Habib Hamam</creator>
        
        <subject>Artificial intelligence; photovoltaic; support vector machine; machine learning; K-Nearest neighbor; maximum power point tracking; pulse width modulation; prognostic analysis; one-against-rest; one-against-one; direct acyclic graph; multi class support vector machine; DC-DC converter; zeta c</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The integration of renewable energy sources and the advancement of smart grid technologies have revolutionized the power distribution landscape. As the smart grid evolves, the monitoring and control of power converters play a crucial role in ensuring the stability and efficiency of the overall system. This research paper introduces a converter monitoring system for photovoltaic systems; the main concern is to protect the electrical system from disastrous failures that occur while the system is operating. The reliability of the converters is significantly influenced by the degradation of their passive components, which can be characterized in various ways. For instance, the aging of inductors and capacitors can be characterized by a decrease in their inductance and capacitance values. Identifying which component is undergoing degradation, and assessing whether or not it is in a critical condition, is crucial for implementing cost-effective maintenance strategies. This paper explores a set of machine learning classification algorithms trained on data collected from a Zeta converter simulated in Matlab Simulink. The paper presents observations on how effectively each algorithm predicts the component and its condition, and a graphical performance comparison of the different ML techniques serves to evaluate and understand their effectiveness. The goal is to provide a comprehensive overview of how these techniques fare on criteria such as accuracy, precision, recall, F1 score, and specificity, among others. The Quadratic Support Vector Machine (SVM) yields superior results compared to the other machine learning techniques employed in training our dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_127-A_Machine_Learning_based_Solution_for_Monitoring_of_Converters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Pre-processing on a Convolutional Neural Network Model for Dorsal Hand Vein Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503126</link>
        <id>10.14569/IJACSA.2024.01503126</id>
        <doi>10.14569/IJACSA.2024.01503126</doi>
        <lastModDate>2024-03-30T14:57:42.8930000+00:00</lastModDate>
        
        <creator>Omar Tarawneh</creator>
        
        <creator>Qotadeh Saber</creator>
        
        <creator>Ahmed Almaghthawi</creator>
        
        <creator>Hamza Abu Owida</creator>
        
        <creator>Abedalhakeem Issa</creator>
        
        <creator>Nawaf Alshdaifat</creator>
        
        <creator>Ghaith Jaradat</creator>
        
        <creator>Suhaila Abuowaida</creator>
        
        <creator>Mohammad Arabiat</creator>
        
        <subject>CNNs; preprocessing; dorsal hand vein; recognition; CNN; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>There are numerous techniques for identifying users, including cards, passwords, and biometrics. Emerging technologies such as cloud computing, smart gadgets, and home automation have raised users’ awareness of the privacy and security of their data. The current study aimed to utilise a CNN model augmented with various pre-processing filters to create a reliable identification system based on the dorsal hand vein (DHV). In addition, the proposed approach implements several pre-processing filters to enhance CNN recognition accuracy. The study used a dataset of 500 hand-vein images collected from 50 patients, with training performed using the data augmentation technique. Without image pre-processing, the proposed model’s classification accuracy was 70%. Moreover, the results indicated that using the mean filter to remove noise gave better results, with accuracy reaching 99% in both training conditions.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_126-The_Effect_of_Pre_processing_on_a_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmasking Fake Social Network Accounts with Explainable Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503125</link>
        <id>10.14569/IJACSA.2024.01503125</id>
        <doi>10.14569/IJACSA.2024.01503125</doi>
        <lastModDate>2024-03-30T14:57:42.8770000+00:00</lastModDate>
        
        <creator>Eman Alnagi</creator>
        
        <creator>Ashraf Ahmad</creator>
        
        <creator>Qasem Abu Al-Haija</creator>
        
        <creator>Abdullah Aref</creator>
        
        <subject>Explainable Artificial Intelligence (XAI); Shapley Additive exPlanations (SHAP); feature selection; fake accounts detection; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Today’s global social network platforms have woven a web connecting people universally, encouraging unprecedented social interaction and information exchange. However, this digital connectivity has also spawned the growth of fake social media accounts used for mass spamming and targeted attacks on certain accounts or sites. In response, carefully constructed artificial intelligence (AI) models have been used across numerous digital domains as a defense against these dishonest accounts. However, clear articulation and validation are required to integrate these AI models into security and commerce. This study addresses this crucial turning point by using the SHAP technique from Explainable AI to explain the results of an XGBoost model trained on a pair of datasets collected from Instagram and Twitter. The outcomes are carefully inspected, assessed, and benchmarked against traditional feature selection techniques. The analysis culminates in a discussion demonstrating SHAP’s suitability as a reliable explainable AI (XAI) technique for this crucial goal.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_125-Unmasking_Fake_Social_Network_Accounts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Sharing of Patient Controlled e-Health Record using an Enhanced Access Control Model with Encryption Based on User Identity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503124</link>
        <id>10.14569/IJACSA.2024.01503124</id>
        <doi>10.14569/IJACSA.2024.01503124</doi>
        <lastModDate>2024-03-30T14:57:42.8600000+00:00</lastModDate>
        
        <creator>Mohinder Singh B</creator>
        
        <creator>Jaisankar N</creator>
        
        <subject>Access permissions; fine-grained access control; identity based encryption; key generation center; electronic health record</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The healthcare industry is going digital due to the constantly evolving medical needs of the modern digital age. Many researchers have proposed models such as Ciphertext Policy Attribute Based Encryption (CPABE) to secure health records. However, the CPABE variants fail to give total control of a medical record to its owner, i.e., the patient. Recently, Mittal et al. suggested that Identity Based Encryption (IBE) can be used to achieve this. However, that model used a Key Generation Center (KGC) to maintain keys, which reduces trust since the keys may be leaked. To overcome this problem, an enhanced access control model with data encryption is presented in which a separate key generation center is not needed. As a result, the processing time for key setup and extraction is minimized. The total processing time of the proposed model is 74.42 ms, compared with 92.89 ms, 165.42 ms, and 218.75 ms for Boneh-Franklin, Zhang et al., and Yu et al., respectively. The proposed model also gives a patient complete control of his/her own health record. The data owner can decide who can access the record (full/partial) and with what access rights (read/write/update). The data requestors can be doctors, nurses, insurance providers, researchers, and so on. Requestors are granted access not on the basis of groups or roles but on the basis of an identity accepted by the data owner. The proposed model also withstands the key leakage attacks that arise from a key generation center.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_124-Secure_Sharing_of_Patient_Controlled_e_Health_Record.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting ICU Admission for COVID-19 Patients in Saudi Arabia: A Comparative Study of AdaBoost and Bagging Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503123</link>
        <id>10.14569/IJACSA.2024.01503123</id>
        <doi>10.14569/IJACSA.2024.01503123</doi>
        <lastModDate>2024-03-30T14:57:42.8430000+00:00</lastModDate>
        
        <creator>Hamza Ghandorh</creator>
        
        <creator>Mohammad Zubair Khan</creator>
        
        <creator>Mehshan Ahmed Khan</creator>
        
        <creator>Yousef M. Alsofayan</creator>
        
        <creator>Ahmed A. Alahmari</creator>
        
        <creator>Anas A. Khan</creator>
        
        <subject>COVID-19; adaptive boosting; bootstrap aggregation; prediction; ICU admission; Saudi Arabia; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>COVID-19’s high fatality rate and accurately determining the mortality rate within a particular geographic region continue to be significant concerns. In this study, the authors investigated and assessed the performance of two advanced machine learning approaches, Adaptive Boosting (AdaBoost) and Bootstrap Aggregation (Bagging), as strong predictors of COVID-19-related intensive care unit (ICU) admissions within Saudi Arabia. These models may help Saudi healthcare organizations determine who is at a higher risk of readmission, allowing for more targeted interventions and improved patient outcomes. The authors found that the AdaBoost-RF and Bagging-RF methods produced the most precise models, with accuracy rates of 97.4% and 97.2%, respectively. This work, like prior studies, illustrates the viability of developing, validating, and using machine learning (ML) prediction models to forecast ICU admission in COVID-19 cases. The ML models that have been developed hold tremendous potential in the fight against COVID-19 in the healthcare industry.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_123-Predicting_ICU_Admission_for_COVID_19_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Binary Matrix Processing to Encrypt-Decrypt Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503121</link>
        <id>10.14569/IJACSA.2024.01503121</id>
        <doi>10.14569/IJACSA.2024.01503121</doi>
        <lastModDate>2024-03-30T14:57:42.8300000+00:00</lastModDate>
        
        <creator>Mohamad Al-Laham</creator>
        
        <creator>Firas Omar</creator>
        
        <creator>Ziad A. Alqadi</creator>
        
        <subject>Image processing; binary matrix; encrypt-decrypt; digital image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research study presents a simple cryptographic solution for protecting grayscale and colored digital images, which are commonly used in computer applications. Due to their widespread use, protecting these images is crucial to preventing unauthorized access. The methodology in this article manipulates an image’s binary matrix using basic operations: expanding the 8-column matrix to 64 columns, reorganizing it into 64 columns, separating it into four blocks, and shuffling the columns using secret index keys. These keys are produced using four sets of common chaotic logistic parameters. Each set executes a chaotic logistic map model to generate a chaotic key, which is then translated into an index key. This index key shuffles columns during encryption and reverses the shuffle during decryption. The cryptographic approach promises a large key space that can withstand hacking. The encrypted image is secure since the decryption procedure is sensitive to the precise private key values. The private keys are chaotic logistic parameters, making the encryption resilient. This method is convenient since it supports images of any size and kind without modifying the encryption or decryption techniques. Shuffling replaces the difficult logical procedures of typical data encryption methods, simplifying the cryptographic process. Experiments with several images evaluate the proposed strategy, and the encrypted and decrypted images are examined to ensure the method meets cryptographic standards. Speed tests also compare the proposed method to existing cryptographic methods, showing its potential to speed up image cryptography by lowering encryption and decryption times.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_121-Image_Binary_Matrix_Processing_to_Encrypt_Decrypt_Digital_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An End-to-End Model of ArVi-MoCoGAN and C3D with Attention Unit for Arbitrary-view Dynamic Gesture Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503122</link>
        <id>10.14569/IJACSA.2024.01503122</id>
        <doi>10.14569/IJACSA.2024.01503122</doi>
        <lastModDate>2024-03-30T14:57:42.8300000+00:00</lastModDate>
        
        <creator>Huong-Giang Doan</creator>
        
        <creator>Hong-Quan Luong</creator>
        
        <creator>Thi Thanh Thuy Pham</creator>
        
        <subject>Dynamic gesture recognition; attention unit; generative adversarial network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Human gesture recognition is an attractive research area in computer vision with many applications such as human-machine interaction, virtual reality, etc. Recent deep learning techniques have been effectively applied to gesture recognition, but they require a large and diverse amount of training data. In fact, the available gesture datasets contain mostly static gestures and/or certain fixed viewpoints. Some contain dynamic gestures, but they are not diverse in poses and viewpoints. In this paper, we propose a novel end-to-end framework for dynamic gesture recognition from unknown viewpoints. It has two main components: (1) an efficient GAN-based architecture, named ArVi-MoCoGAN; (2) the gesture recognition component, which contains C3D backbones and an attention unit. ArVi-MoCoGAN aims at generating videos at multiple fixed viewpoints from a real dynamic gesture at an arbitrary viewpoint. It also returns the probability that a real arbitrary-view gesture belongs to each of the fixed-viewpoint gestures. These outputs of ArVi-MoCoGAN are processed in the next component to improve arbitrary-view recognition performance through multi-view synthetic gestures. The proposed system is extensively analyzed and evaluated on four standard dynamic gesture datasets. Experimental results show that our proposed method outperforms current solutions by 1% to 13.58% for arbitrary-view gesture recognition and by 1.2% to 7.8% for single-view gesture recognition.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_122-An_End_to_End_Model_of_ArVi_MoCoGAN_and_C3D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503120</link>
        <id>10.14569/IJACSA.2024.01503120</id>
        <doi>10.14569/IJACSA.2024.01503120</doi>
        <lastModDate>2024-03-30T14:57:42.8130000+00:00</lastModDate>
        
        <creator>Ganesh Ingle</creator>
        
        <creator>Sanjesh Pawale</creator>
        
        <subject>Adversarial attacks; Input Adversarial Training (IAT); deep learning security; model robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Adversarial attacks present a formidable challenge to the integrity of Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) models, particularly in the domain of power quality disturbance (PQD) classification, necessitating the development of effective defense mechanisms. These attacks, characterized by their subtlety, can significantly degrade the performance of models critical for maintaining power system stability and efficiency. This study introduces the concept of adversarial attacks on CNN-LSTM models and emphasizes the critical need for robust defenses. We propose Input Adversarial Training (IAT) as a novel defense strategy aimed at enhancing the resilience of CNN-LSTM models. IAT involves training models on a blend of clean and adversarially perturbed inputs, intending to improve their robustness. The effectiveness of IAT is assessed through a series of comparisons with established defense mechanisms, employing metrics such as accuracy, precision, recall, and F1-score on both unperturbed and adversarially modified datasets. The results are compelling: models defended with IAT exhibit remarkable improvements in robustness against adversarial attacks. Specifically, IAT-enhanced models demonstrated an increase in accuracy on adversarially perturbed data to 85%, a precision improvement to 86%, a recall rise to 85%, and an F1-score enhancement to 85.5%. These figures significantly surpass those achieved by models utilizing standard adversarial training (75% accuracy) and defensive distillation (70% accuracy), showcasing IAT’s superior capacity to maintain model accuracy under adversarial conditions. In conclusion, IAT stands out as an effective defense mechanism, significantly bolstering the resilience of CNN-LSTM models against adversarial perturbations. This research not only sheds light on the vulnerabilities of these models to adversarial attacks but also establishes IAT as a benchmark in defense strategy development, promising enhanced security and reliability for PQD classification and related applications.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_120-Enhancing_Model_Robustness_and_Accuracy_Against_Adversarial_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Defect Prediction via Generative Adversarial Networks and Pre-Trained Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503119</link>
        <id>10.14569/IJACSA.2024.01503119</id>
        <doi>10.14569/IJACSA.2024.01503119</doi>
        <lastModDate>2024-03-30T14:57:42.7970000+00:00</lastModDate>
        
        <creator>Wei Song</creator>
        
        <creator>Lu Gan</creator>
        
        <creator>Tie Bao</creator>
        
        <subject>Software defect prediction; semi-supervised learning; generative adversarial networks; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Software defect prediction, which aims to predict defective modules during software development, has been implemented to assist developers in identifying defects and ensuring software quality. Traditional defect prediction methods utilize manually designed features such as “Lines Of Code” that fail to capture the syntactic and semantic structures of code. Moreover, the high cost and difficulty of building the training set lead to insufficient data, which poses a significant challenge for training deep learning models, particularly for new projects. To overcome the practical challenge of data limitation and improve predictive capacity, this paper presents DP-GANPT, a novel defect prediction model that integrates generative adversarial networks and state-of-the-art code pre-trained models, employing a novel bi-modal code-prompt input representation. The proposed approach explores the use of code pre-trained models as auto-encoders and employs generative adversarial network algorithms and semi-supervised learning techniques for optimization. To facilitate effective training and evaluation, a new software defect prediction dataset is constructed based on the existing PROMISE dataset and its associated engineering files. Extensive experiments are performed on both within-project and cross-project defect prediction tasks to evaluate the effectiveness of DP-GANPT. The results reveal that DP-GANPT outperforms all the state-of-the-art baselines, and achieves performance comparable to them using significantly less labeled data.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_119-Software_Defect_Prediction_via_Generative_Adversarial_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SCEditor: A Graphical Editor Prototype for Smart Contract Design and Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503118</link>
        <id>10.14569/IJACSA.2024.01503118</id>
        <doi>10.14569/IJACSA.2024.01503118</doi>
        <lastModDate>2024-03-30T14:57:42.7830000+00:00</lastModDate>
        
        <creator>Yassine Ait Hsain</creator>
        
        <creator>Naziha Laaz</creator>
        
        <creator>Samir Mbarki</creator>
        
        <subject>Blockchain; Metamodel; Model-driven Engineering (MDE); Smart Contract (SC); SC Programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent years, particularly with the advent of the Ethereum blockchain, smart contracts have gained significant interest as a means of regulating exchanges among multiple parties via code. This surge has prompted the emergence of various smart contract (SC) programming languages, each possessing distinct philosophies, grammatical structures, and components. Consequently, developers are increasingly involved in SC programming. However, these languages are platform-specific, implying that a transition to another platform necessitates the use of different languages. Additionally, developers require a certain level of control over SCs to address encountered bugs and ensure maintenance. To address these developer-centric challenges, this paper presents SCEditor, a novel Eclipse Sirius-based prototype editor designed for the visualization, design, and creation of SCs. The editor proposes a means of standardizing the usage of SC programming languages through the incorporation of a graphical syntax and a metamodel conforming to Model-Driven Engineering (MDE) principles and SC construction rules to generate an abstract SC model. The efficacy of this editor is demonstrated through testing on a voting SC written in the Vyper and Solidity languages. Furthermore, the editor holds potential for future exploitation in model transformation and code generation for various SC languages.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_118-SCEditor_A_Graphical_Editor_Prototype_for_Smart_Contract_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scientometric Analysis and Knowledge Mapping of Cybersecurity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503117</link>
        <id>10.14569/IJACSA.2024.01503117</id>
        <doi>10.14569/IJACSA.2024.01503117</doi>
        <lastModDate>2024-03-30T14:57:42.7830000+00:00</lastModDate>
        
        <creator>Fahad Alqurashi</creator>
        
        <creator>Istiak Ahmad</creator>
        
        <subject>Cybersecurity; cyber threats; scientometric analysis; bibliometric analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cybersecurity research includes several areas, such as authentication, software and hardware vulnerabilities, and defences against cyberattacks. However, only a limited number of cybersecurity experts have a comprehensive understanding of all aspects of this sector. Hence, it is vital to possess an impartial comprehension of the prevailing patterns in cybersecurity research. Scientometric analysis and knowledge mapping may effectively detect cybersecurity research trends, significant studies, and emerging technologies within this particular context. The main aim of this research is to understand the developmental trend of the academic literature on the concepts of “malware detection” and “cybersecurity”. We collected 9,967 publications from January 2019 to December 2023 and used the CiteSpace tool for scientometric analysis. This study found six co-citation clusters, namely malware classification, evading malware classifiers, Android malware detection, IoT networks, CNNs, and ransomware families. Additionally, this study discovered that the top contributing countries are the USA, China, and India based on citation count, and that the Chinese Academy of Sciences, the University of California, and the University of Texas are the top contributing institutions based on publication frequency.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_117-Scientometric_Analysis_and_Knowledge_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issuance Policies of Route Origin Authorization with a Single Prefix and Multiple Prefixes: A Comparative Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503116</link>
        <id>10.14569/IJACSA.2024.01503116</id>
        <doi>10.14569/IJACSA.2024.01503116</doi>
        <lastModDate>2024-03-30T14:57:42.7500000+00:00</lastModDate>
        
        <creator>Zetong Lai</creator>
        
        <creator>Zhiwei Yan</creator>
        
        <creator>Guanggang Geng</creator>
        
        <creator>Hidenori Nakazato</creator>
        
        <subject>BGP; RPKI; route origin authorization; inter-domain routing security; computer network protocols; routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Resource Public Key Infrastructure (RPKI) is a solution to mitigate the security issues faced by inter-domain routing. Within the RPKI framework, Route Origin Authorization (ROA) plays a crucial role as an RPKI object. ROA allows address space holders to place a single IP address prefix or multiple IP address prefixes in it. However, this feature has introduced security risks during the global deployment of RPKI. In this study, we analyze the current status of ROA issuance and discuss the impact of using two ROA issuance policies on RPKI security and synchronization efficiency. Based on the aforementioned work, recommendations are proposed for the utilization of ROA issuance policies.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_116-Issuance_Policies_of_Route_Origin_Authorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spherical Fuzzy Z-Numbers-based CRITIC CRADIAS and MARCOS Approaches for Evaluating English Teacher Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503115</link>
        <id>10.14569/IJACSA.2024.01503115</id>
        <doi>10.14569/IJACSA.2024.01503115</doi>
        <lastModDate>2024-03-30T14:57:42.7500000+00:00</lastModDate>
        
        <creator>Jie Niu</creator>
        
        <subject>SFZNS; CRITIC technique; CRADIAS method; MARCOS method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Consider combining quantitative and qualitative data for these case studies, such as interviews with English teachers, student evaluations, classroom observations, and surveys. Contextual elements, including community support, resources, and school demographics, should also be taken into consideration. The assessment process in English teaching performance evaluation is very complicated and diverse, making it a perfect fit for the Multi-Attribute Group Decision Making (MAGDM) framework. The utilization of Spherical Fuzzy Z-Number Sets (SFZNS) is essential in MAGDM to handle intricate problems. These sets are significantly more capable of handling higher levels of uncertainty than the fuzzy set designs used today. Here, we provide a method, Compromise Ranking of Alternatives from Distance to Ideal Solution (CRADIAS), designed to address MAGDM problems in SFZNS, particularly in cases when attribute weights are opaque. Attribute weights may be found by applying the CRITIC technique. The first section of the research covers the examination of spherical fuzzy Z-numbers, their accuracy and scoring functions, and the main concept behind their functioning. We then propose the use of spherical fuzzy Z-number data to handle MAGDM cases in a decision-making process. This work strengthens the topic’s theoretical underpinnings as well as its practical applicability. By conducting a comparison study, we apply the MARCOS approach to validate and illustrate the validity of our findings. This methodical approach guarantees a thorough evaluation of the suggested method’s effectiveness and adds to the current discussion on how to make wise decisions in difficult and uncertain situations.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_115-Spherical_Fuzzy_Z_Numbers_based_CRITIC_CRADIAS_and_MARCOS_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Octane Number Prediction Based on Feature Selection and Multi-Model Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503114</link>
        <id>10.14569/IJACSA.2024.01503114</id>
        <doi>10.14569/IJACSA.2024.01503114</doi>
        <lastModDate>2024-03-30T14:57:42.7370000+00:00</lastModDate>
        
        <creator>Junlin Gu</creator>
        
        <subject>Feature selection; random forest model; support vector machine model; RON loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The catalytic cracking-based process for lightening heavy oil yields gasoline products with sulfur and olefin contents surpassing 95%, consequently diminishing the Research Octane Number (RON) of gasoline during desulfurization and olefin reduction stages. Hence, investigating methodologies to mitigate RON loss in gasoline while maintaining effective desulfurization is imperative. This study addresses this challenge by initially performing data cleaning and augmentation, employing box plot modeling and Grubbs’ test for outlier detection and removal. Subsequently, through the integration of mutual information and the Lasso method, data dimensionality is reduced, with the top 30 variables selected as primary factors. A predictive model for RON loss is then established based on these 30 variables, utilizing Random Forest and Support Vector Regression (SVR) models. Employing this model enables the computation of RON loss for each data sample. Compared with existing methods, our approach ensures a balance between effective desulfurization and mitigated RON loss in gasoline products.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_114-Research_Octane_Number_Prediction_Based_on_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted PSO Ensemble using Diversity of CNN Classifiers and Color Space for Endoscopy Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503113</link>
        <id>10.14569/IJACSA.2024.01503113</id>
        <doi>10.14569/IJACSA.2024.01503113</doi>
        <lastModDate>2024-03-30T14:57:42.7200000+00:00</lastModDate>
        
        <creator>Diah Arianti</creator>
        
        <creator>Azizi Abdullah</creator>
        
        <creator>Shahnorbanun Sahran</creator>
        
        <creator>Wong Zhyqin</creator>
        
        <subject>Convolution neural network; particle swarm optimization; diversity; weighted ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Endoscopic imaging applies visualization technology to the human gastrointestinal tract, allowing detection of abnormalities, characterization of lesions, and guidance for therapeutic interventions. Accurate and reliable classification of endoscopy images remains challenging due to variations in image quality, diverse anatomical structures, and subtle abnormalities such as polyps and ulcers. Convolutional Neural Networks (CNNs) are widely used in modern medical imaging, especially for abnormality classification tasks. However, relying on a single CNN classifier limits the model’s ability to capture the full complexity and variability of endoscopy images. A potential solution involves ensemble learning, which combines multiple models to reach a final decision. Nevertheless, this learning approach presents several challenges, notably a significant risk of data bias. This issue arises from the unequal influence of weak and strong learners in most ensemble strategies, such as standard voting, which usually depend on certain assumptions, including equal performance among the models, and thereby limit diverse model collaboration. Therefore, this paper proposes two solutions. Firstly, we create a diverse pool of CNNs with an end-to-end approach, which promotes model diversity and enhances confidence in the final decision. Secondly, we employ Particle Swarm Optimization to optimize the weights of the members in the ensemble learner, creating a more resilient and accurate model than the standard ensemble learning approach. The experiment demonstrates that the proposed ensemble model outperforms the baseline model on both the Kvasir 1 and Kvasir 2 datasets, highlighting the effectiveness of the suggested approach in integrating diverse information from the baseline model.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_113-Weighted_PSO_Ensemble_using_Diversity_of_CNN_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ceramic Microscope Image Classification Based on Multi-Scale Fusion Bottleneck Structure and Chunking Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503112</link>
        <id>10.14569/IJACSA.2024.01503112</id>
        <doi>10.14569/IJACSA.2024.01503112</doi>
        <lastModDate>2024-03-30T14:57:42.7030000+00:00</lastModDate>
        
        <creator>Zhihuang Zhuang</creator>
        
        <creator>Xing Xu</creator>
        
        <creator>Xuewen Xia</creator>
        
        <creator>Yuanxiang Li</creator>
        
        <creator>Yinglong Zhang</creator>
        
        <subject>Deep learning; ceramic anti-counterfeiting; image classification; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent years, the status of ceramics in fields such as art, culture, and historical research has been continuously improving. However, the increase in malicious counterfeiting and forgery of ceramics has disrupted the normal order of the ceramic market and brought challenges to the identification of authenticity. Due to the intricate and interference-prone microscopic characteristics of ceramics, traditional identification methods have suffered from low accuracy and efficiency. To address these issues, we propose a multi-scale fusion bottleneck structure and a chunking attention module that improve the Resnet50 neural network model for ceramic microscopic image classification and recognition. Firstly, the original bottleneck structure is replaced with a multi-scale fusion bottleneck structure, which builds a feature pyramid and establishes associations between different feature layers, effectively focusing on features at different scales. Then, chunking attention modules are added to the shallow and deep networks, respectively, to establish remote dependencies in low-level detail features and high-level semantic features and to reduce the impact of convolutional receptive field restrictions. The experimental results show that, in terms of classification accuracy and other indicators, this model surpasses mainstream neural network models, improving classification accuracy by 3.98% over the benchmark Resnet50 model and achieving 98.74%. Meanwhile, comparison with non-convolutional network models shows that convolutional models are more suitable for the recognition of ceramic microscopic features.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_112-Ceramic_Microscope_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Healthcare by Unleashing the Power of Machine Learning in Diagnosis and Treatment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503111</link>
        <id>10.14569/IJACSA.2024.01503111</id>
        <doi>10.14569/IJACSA.2024.01503111</doi>
        <lastModDate>2024-03-30T14:57:42.6900000+00:00</lastModDate>
        
        <creator>Medini Gupta</creator>
        
        <creator>Sarvesh Tanwar</creator>
        
        <creator>Salil Bharany</creator>
        
        <creator>Faisal Binzagr</creator>
        
        <creator>Hadia Abdelgader Osman</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Samsul Ariffin Abdul Karim</creator>
        
        <subject>Machine Learning; Health Diagnosis; Supervised Learning; Prediction; Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Machine learning (ML) is a versatile technology that has the potential to revolutionize various industries. ML can predict future trends in customer expectations, allowing organizations to develop new products accordingly. ML is a crucial field of data science that uses different algorithms to predict insights and improve decision-making. With its widespread acceptance, ML can provide helpful information from the enormous volume of health data generated regularly. By adopting ML techniques, doctors can deliver quicker diagnoses, bring down medical charges, and apply pattern identification algorithms to examine medical images. Every technology brings its challenges; likewise, ML faces several challenges in healthcare that need to be acknowledged before we witness complete automation in medical diagnosis. People are still reluctant to share their personal information with intermediaries for treatment. Medical record governance is essential to ensure that health records are not missed. Manual diagnosis often goes in the wrong direction, as doctors are also human. Lack of communication between medical workers and patients, together with insufficient data for diagnosing disease, sometimes results in deteriorating health conditions. This paper presents an introduction to machine learning, the ML algorithms widely used for health diagnosis, a comparative analysis of the literature to date, existing challenges of the healthcare system, applications of machine learning in the healthcare industry, real-life use cases, a practical implementation of disease prediction, and a conclusion with future scope.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_111-Revolutionizing_Healthcare_by_Unleashing_the_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Water Quality Forecasting Reliability Through Optimal Parameterization of Neuro-Fuzzy Models via Tunicate Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503110</link>
        <id>10.14569/IJACSA.2024.01503110</id>
        <doi>10.14569/IJACSA.2024.01503110</doi>
        <lastModDate>2024-03-30T14:57:42.6730000+00:00</lastModDate>
        
        <creator>Kambala Vijaya Kumar</creator>
        
        <creator>Y Dileep Kumar</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Elangovan Muniyandy</creator>
        
        <subject>Water quality forecasting; neuro-fuzzy models; tunicate swarm optimization; parameter optimization; environmental decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Forecasting water quality is critical to environmental management because it facilitates quick decision-making and resource allocation. On the other hand, current methods are not always able to produce reliable forecasts, often due to challenges in parameter optimization for complex models. This research presents a novel approach to enhance water quality forecasting accuracy by optimizing neuro-fuzzy models using Tunicate Swarm Optimisation (TSO). The introduction highlights the limitations of current techniques as well as the necessity for precise estimates of water quality. One of the drawbacks is that neuro-fuzzy models are often poorly parameterized, which makes it harder for them to identify the minute patterns in water quality data. The suggested approach is unique in that it applies TSO, a nature-inspired optimization algorithm that emulates tunicates&#39; behaviour, to the parameter optimization of neuro-fuzzy models. The highly complex parameter space is effectively navigated by TSO&#39;s swarm intelligence, which strikes a balance between exploration and exploitation to improve model performance. To optimize model parameters, the process comprises three steps: creating an objective function, defining the neuro-fuzzy model, and seamlessly integrating TSO. By mimicking the motions of tunicates as they look for the best conditions in the marine environment, TSO constantly optimizes the variables. Experiments demonstrate that the proposed strategy is more effective than traditional optimization techniques in forecasting water quality. As seen from the optimised neuro-fuzzy model&#39;s increased prediction accuracy and validation on several datasets, Tunicate Swarm Optimisation has potential for reliable environmental forecasting. This work presents a promising path towards improved environmental decision-making systems by offering a nature-inspired optimisation strategy that overcomes the limitations of existing methods and enhances water quality forecasting tools.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_110-Enhancing_Water_Quality_Forecasting_Reliability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Spatiotemporal Aspect-Based Sentiment Analysis for Chain Restaurants using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503109</link>
        <id>10.14569/IJACSA.2024.01503109</id>
        <doi>10.14569/IJACSA.2024.01503109</doi>
        <lastModDate>2024-03-30T14:57:42.6570000+00:00</lastModDate>
        
        <creator>Mouyassir Kawtar</creator>
        
        <creator>Abderrahmane Fathi</creator>
        
        <creator>Noureddine Assad</creator>
        
        <creator>Ali Kartit</creator>
        
        <subject>HISABSA; hybrid model; NLP; ML; VADER Lexicon; AFINN model; TextBlob; ABSA; Restaurant reviews; Transformer-based models; Lexicon-based methods; RoBERTa model; BERT model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent years, aspect-based sentiment analysis of restaurant business reviews has emerged as a pivotal area of research in natural language processing (NLP), aiming to provide detailed analytical methods benefiting both consumers and industry professionals. This study introduces a novel approach, Hierarchical Spatiotemporal Aspect-Based Sentiment Analysis (HISABSA), which combines lexicon-based methods such as the VADER Lexicon, the AFINN model, and TextBlob with contextual methods. By integrating advanced machine learning (ML) techniques, this hybrid methodology facilitates sentiment analysis, empowering chain restaurants to assess changes in sentiments towards specific aspects of their services across different branches and over time. Leveraging transformer-based models such as RoBERTa and BERT, this approach achieves effective sentiment classification and aspect extraction from text reviews. The results demonstrate the reliability of extracting valid aspects from online reviews of specific branches, offering valuable insights to business owners striving to succeed in competitive markets.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_109-Hierarchical_Spatiotemporal_Aspect_Based_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure IoT Seed-based Matrix Key Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503108</link>
        <id>10.14569/IJACSA.2024.01503108</id>
        <doi>10.14569/IJACSA.2024.01503108</doi>
        <lastModDate>2024-03-30T14:57:42.6570000+00:00</lastModDate>
        
        <creator>Youssef NOUR-EL AINE</creator>
        
        <creator>Cherkaoui LEGHRIS</creator>
        
        <subject>Security; IoT; steganography; key exchange; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The rapid evolution of the Internet of Things (IoT) has significantly transformed various aspects of both personal and professional spheres, offering innovative solutions in fields from home automation to industrial manufacturing. This progression is driven by the integration of physical devices with digital networks, facilitating efficient communication and data processing. However, such advancements bring forth critical security challenges, especially regarding data privacy and network integrity. Conventional cryptographic methods often fall short in addressing the unique requirements of IoT environments, such as limited device computational power and the need for efficient energy consumption. This paper introduces a novel approach to IoT security, inspired by the principles of steganography, the art of concealing information within other non-secret data. This method enhances security by embedding secret information within the payload or communication protocols, aligning with the low-power and minimal processing capabilities of IoT devices. We propose a steganographic key generation algorithm, adapted from the Diffie-Hellman key exchange model and tailored for IoT. This approach eliminates the need for explicit parameter exchange, thereby reducing vulnerability to key interception and unauthorized access, prevalent in IoT networks. The algorithm utilizes a pre-shared 2D matrix and a synchronized seed-based approach for covert communication without explicit data exchange. Furthermore, we have rigorously tested our algorithm using the NIST Statistical Test Suite (STS), comparing its execution time with other algorithms. The results underscore our algorithm&#39;s superior performance and suitability for IoT applications, highlighting its potential to secure IoT networks effectively without compromising on efficiency and device resource constraints. This paper presents the design, implementation, and potential implications of this algorithm for enhancing IoT security, ensuring the full realization of IoT benefits without compromising user security and privacy.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_108-Secure_IoT_Seed_Based_Matrix_Key_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Planning and Control of Intelligent Delivery UAV Based on Internet of Things and Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503107</link>
        <id>10.14569/IJACSA.2024.01503107</id>
        <doi>10.14569/IJACSA.2024.01503107</doi>
        <lastModDate>2024-03-30T14:57:42.6430000+00:00</lastModDate>
        
        <creator>Xiuzhu Zhang</creator>
        
        <subject>Internet of things; edge computing; smart distribution; drone path; planning and control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This paper investigates the path planning and control problem of intelligent delivery UAVs based on the Internet of Things and edge computing, and proposes a novel model and algorithm to realize collaborative optimization of UAV path planning and control, improving the intelligence level and flight efficiency of UAVs. First, the mathematical model of UAV path planning and control is established; the relationships and influencing factors among UAVs, edge servers, delivery tasks, path planning, and control are analyzed; and the optimization objectives and constraints are proposed. Then, an algorithmic framework for UAV path planning and control is designed, using the support and guidance of edge computing to achieve cooperative optimization, taking into account the constraints and objectives of the UAVs themselves as well as the synergy and competition between UAVs. Finally, specific algorithms for UAV path planning and control are proposed, adopting methods such as meta-heuristics to solve the optimization problem and improve the intelligence level and flight performance of UAVs.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_107-Path_Planning_and_Control_of_Intelligent_Delivery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Reinforcement Learning for Virtual Machines Placement in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503105</link>
        <id>10.14569/IJACSA.2024.01503105</id>
        <doi>10.14569/IJACSA.2024.01503105</doi>
        <lastModDate>2024-03-30T14:57:42.6270000+00:00</lastModDate>
        
        <creator>Chayan Bhatt</creator>
        
        <creator>Sunita Singhal</creator>
        
        <subject>Virtual machines placement; cloud computing; reinforcement learning; energy consumption; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The rapid demand for cloud services has pushed cloud providers to efficiently resolve the problem of Virtual Machine Placement in the cloud. This paper presents a VM Placement approach using Reinforcement Learning (VMRL) that aims to provide optimal resource and energy management for cloud data centers. Reinforcement Learning provides better decision-making as it handles the complexity of the VM Placement problem caused by the tradeoff among objectives, and hence is useful for mapping requested VMs onto the minimum number of Physical Machines. An enhanced Tournament-based selection strategy along with Roulette Wheel sampling has been applied to ensure that the optimization balances exploration and exploitation, thereby giving better solution quality. Two heuristics have been used for the ordering of VMs, considering the impact of CPU and memory utilization on VM placement. Moreover, the concept of the Pareto approximate set has been adopted to ensure that both objectives are prioritized according to the users’ perspective. The proposed technique has been implemented in MATLAB 2020b. Simulation analysis showed that VMRL performed favorably, showing improvements of 17%, 20%, and 18% in energy consumption, resource utilization, and fragmentation, respectively, in comparison to other multi-objective algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_105-Multi_Objective_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Aerial Image Segmentation Approach with Statistical Multimodal Markov Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503106</link>
        <id>10.14569/IJACSA.2024.01503106</id>
        <doi>10.14569/IJACSA.2024.01503106</doi>
        <lastModDate>2024-03-30T14:57:42.6270000+00:00</lastModDate>
        
        <creator>Jamal Bouchti</creator>
        
        <creator>Ahmed Bendahmane</creator>
        
        <creator>Adel Asselman</creator>
        
        <subject>Image segmentation; multimodal markov fields statistical integration; CIEDE2000 color difference; texture features; edge information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Aerial images, captured by drones, satellites, or aircraft, are omnipresent in diverse fields, from mapping and surveillance to precision agriculture. The efficacy of image analysis in these domains hinges on the quality of segmentation, and the precise delineation of objects and regions of interest. In this context, leveraging Markov fields for aerial image segmentation emerges as a promising avenue. The segmentation of aerial images presents a formidable challenge due to the variability in capture conditions, lighting, vegetation, and environmental factors. To meet this challenge, the work proposes an innovative method harnessing the power of Markov fields by integrating a multimodal energy function. This energy function amalgamates key attributes, including color difference measured by the CIEDE2000 metric, texture features, and detected edge information. The CIEDE2000 metric, derived from the CIELab color space, is renowned for its ability to measure color difference more consistently with human perception than conventional metrics. By incorporating this metric into the energy function, the approach enhances sensitivity to subtle color variations crucial for aerial image segmentation. Texture, a vital attribute characterizing regions in aerial images, offers crucial insights into terrain or objects. The method incorporates texture features to refine the separation of homogeneous regions. Contours, playing a fundamental role in segmentation, are identified using an edge detector to pinpoint boundaries between regions of interest. This information is integrated into the energy function, elevating contour consistency and segmentation accuracy. This article comprehensively presents the methodological approach, the conducted experiments, obtained results, and a thorough discussion of the method&#39;s advantages and limitations.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_106-A_New_Aerial_Image_Segmentation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Food Calorie Estimation Method in Dietary Assessment: An Advanced Approach using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503104</link>
        <id>10.14569/IJACSA.2024.01503104</id>
        <doi>10.14569/IJACSA.2024.01503104</doi>
        <lastModDate>2024-03-30T14:57:42.6100000+00:00</lastModDate>
        
        <creator>Kalivaraprasad B</creator>
        
        <creator>Prasad M.V.D</creator>
        
        <creator>Naveen kishore Gattim</creator>
        
        <subject>Deep learning; convolutional neural networks; food calorie estimation; dietary assessment; computer vision; health informatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Dietary pattern assessments, essential for chronic illness management and well-being, involve time-consuming manual data entry and recall of food intake. A more dependable, automated approach is needed, since such procedures may introduce mistakes and inconsistencies. This study addresses this long-standing problem by automating nutritional assessment using deep learning and image analysis. CNNs, deep learning models for image processing, were employed in our study. Food classification models are trained on thousands of pictures and can distinguish multiple food items within a digital photograph. After identification, our method estimates food portions: photometric measurements are obtained using reference objects such as plates and forks, and a further deep learning model predicts portion sizes. Finally, the method estimates food calories by matching the identified food types and portions against nutritional databases. These findings might enable automated, improved, and user-centric assessment of food intake in health informatics. Our first experiments are encouraging, but we must understand the approach&#39;s limits and the need for refinement. The findings underpin future research and development. This approach envisions a future where patients can monitor their nutrition and doctors can obtain accurate data, which may help prevent and treat lifestyle-related problems.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_104-Deep_Learning_based_Food_Calorie_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Harassment Toward Women in Twitter During Pandemic Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503103</link>
        <id>10.14569/IJACSA.2024.01503103</id>
        <doi>10.14569/IJACSA.2024.01503103</doi>
        <lastModDate>2024-03-30T14:57:42.5930000+00:00</lastModDate>
        
        <creator>Wan Nor Asyikin Wan Mustapha</creator>
        
        <creator>Norlina Mohd Sabri</creator>
        
        <creator>Nor Azila Awang Abu Bakar</creator>
        
        <creator>Nik Marsyahariani Nik Daud</creator>
        
        <creator>Azilawati Azizan</creator>
        
        <subject>Harassment; women; detection; twitter; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Harassment is offensive, intimidating behavior that can cause discomfort to its victims. In some cases, harassment can lead to traumatic experiences for vulnerable victims. Currently, harassment toward women on social media has become more daring and is on the rise. The increasing number of social media users since the Covid-19 pandemic in 2020 might be one of the factors. Due to this problem, this research aims to assist in detecting harassment sentiments toward women on Twitter. The sentiment analysis is based on a machine learning approach, and Support Vector Machine (SVM) was chosen due to its acceptable performance in sentiment classification. The objective of the research is to explore the capability of SVM in detecting harassment toward women on Twitter. The research methodology covers data collection using Tweepy, data preprocessing, data labelling using TextBlob, feature extraction using the TF-IDF vectorizer, and dataset splitting using the Hold-Out method. The algorithm was evaluated using the Confusion Matrix and ROC analysis, and was integrated with a Graphical User Interface (GUI) built with Streamlit for ease of use. The implementation of the SVM algorithm in detecting harassment toward women was successful and reliable, achieving a good performance of 81% accuracy. Recommendations for improving the SVM model are to train on datasets in other languages and to collect Twitter data regularly. The performance of SVM should also be compared with other machine learning algorithms for further validation.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_103-Detection_of_Harassment_Toward_Women_in_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Profiling and Classification of Users Through a Customer Feedback-based Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503102</link>
        <id>10.14569/IJACSA.2024.01503102</id>
        <doi>10.14569/IJACSA.2024.01503102</doi>
        <lastModDate>2024-03-30T14:57:42.5800000+00:00</lastModDate>
        
        <creator>Jihane LARIOUI</creator>
        
        <creator>Abdeltif EL BYED</creator>
        
        <subject>Machine learning; urban mobility; multimodal transportation; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Systems that predict user preferences and provide recommendations are now widely used in applications such as online shops, social websites, and tourist guide websites. These systems typically rely on collecting user data and learning from it in order to improve their performance. In the context of urban mobility, user profiling and classification represent a crucial step in the continuous enhancement of the services provided by our multi-agent system for multimodal transportation. In this paper, our goal is to implement and compare several machine learning (ML) algorithms. We address the technical aspects of this implementation, demonstrating how the model leverages customer feedback to develop a thorough understanding of individual preferences and travel behaviors. Through this approach, we can categorize users into distinct groups, enabling finer personalization of route recommendations and transportation preferences. The ML model analyzes customer feedback, identifies recurring patterns, and continuously adjusts user profiles as they evolve. This innovative approach aims to optimize the user experience by offering more precise and tailored recommendations, while fostering dynamic adaptation of the system to the changing needs of urban users.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_102-Profiling_and_Classification_of_Users_through_a_Customer_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Strawberry Disease Detection in Agriculture: A Transfer Learning Approach with YOLOv5 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503101</link>
        <id>10.14569/IJACSA.2024.01503101</id>
        <doi>10.14569/IJACSA.2024.01503101</doi>
        <lastModDate>2024-03-30T14:57:42.5630000+00:00</lastModDate>
        
        <creator>Chunmao LIU</creator>
        
        <subject>Strawberry disease detection; deep learning; agricultural; YOLOv5 model; training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Strawberry disease detection in the agricultural sector is of paramount importance, as it directly impacts crop yield and quality. A multitude of methods have been explored in the literature to address this challenge, but deep learning techniques have consistently demonstrated superior accuracy in disease detection. Nevertheless, the current research challenge in deep learning-based strawberry disease detection remains the demand for consistently high accuracy rates. In this study, we propose a deep learning model based on the YOLOv5 architecture to address the aforementioned research challenge effectively. Our approach involves the generation of a custom dataset tailored to strawberry disease detection and the execution of comprehensive training, validation, and testing processes to fine-tune the model. Experimental results and performance evaluations were conducted to validate our proposed method, demonstrating its ability to achieve accurate results consistently. This research contributes to the ongoing efforts to enhance strawberry disease detection methods within the agricultural sector, ultimately aiding in the early identification and mitigation of diseases to preserve crop yield and quality.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_101-Advancing_Strawberry_Disease_Detection_in_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Employee Performance Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01503100</link>
        <id>10.14569/IJACSA.2024.01503100</id>
        <doi>10.14569/IJACSA.2024.01503100</doi>
        <lastModDate>2024-03-30T14:57:42.5470000+00:00</lastModDate>
        
        <creator>Zbakh Mourad</creator>
        
        <creator>Aknin Noura</creator>
        
        <creator>Chrayah Mohamed</creator>
        
        <creator>Bouzidi Abdelhamid</creator>
        
        <subject>HRM; HR analytics; Employee Performance Prediction; Support Vector Machine (SVM) Algorithm; K-Nearest Neighbor (KNN) Algorithm; Multiple Linear Regression (MLR) algorithm; Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Human resource management (HRM) plays a crucial role in the effective functioning of modern businesses. However, as the volume of data continues to increase, HR professionals face growing challenges in objectively gathering, measuring, and interpreting human resources data. The research problem addressed in this study is the need for improved methods for the objective classification of teams based on the most relevant performance factors, given the subjectivity of current tools. To tackle this issue, the research questions focus on the possibility of developing an efficient model for team classification using supervised machine learning algorithms. This study develops and validates three team classification models using the support vector machine (SVM), the K-nearest neighbor (KNN) algorithm, and the multiple linear regression (MLR) algorithm, after applying PCA for data reduction. Following extensive validation, the model based on MLR was identified as the most effective, achieving an accuracy of 87.5% in predicting employee performance, which makes it possible to anticipate and fill employee skills gaps and optimize recruiting efforts. This work provides human resources professionals with data-driven decision support to enhance human resources management using machine learning.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_100-Enhancing_Employee_Performance_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Innovative Design of Towable Caravans Integrating Kano-AHP and TRIZ Theories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150399</link>
        <id>10.14569/IJACSA.2024.0150399</id>
        <doi>10.14569/IJACSA.2024.0150399</doi>
        <lastModDate>2024-03-30T14:57:42.5330000+00:00</lastModDate>
        
        <creator>Jinyang Xu</creator>
        
        <creator>Xuedong Zhang</creator>
        
        <creator>Aihu Liao</creator>
        
        <creator>Shun Yu</creator>
        
        <creator>Yanming Chen</creator>
        
        <creator>Longping Chen</creator>
        
        <subject>Kano model; towed caravans; exterior design; Analytic Hierarchy Process (AHP); TRIZ theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The caravan industry in China is facing significant challenges, primarily because the mode of caravan travel is relatively niche within the country and the industry as a whole has had a slow start. This has ultimately resulted in a mismatch between the design aesthetics of caravans and the preferences of Chinese consumers. Based on the foundation of understanding user preferences, this study proposes a new design methodology that integrates the Kano model, the Analytic Hierarchy Process (AHP), and TRIZ theory to align with the preferences of Chinese users. Initially, a Kano model is constructed based on the suggestions from experts and users to categorize user needs. Subsequently, the AHP method is employed to reclassify the key needs identified in the Kano model, establish judgment matrices, and develop a scoring system to provide a scientific basis for design decisions. Finally, TRIZ theory is applied to address potential physical and technical contradictions encountered during the design process, thereby developing practical and aesthetically pleasing caravan design solutions.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_99-Research_on_Innovative_Design_of_Towable_Caravans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated CNN-BiLSTM Approach for Facial Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150398</link>
        <id>10.14569/IJACSA.2024.0150398</id>
        <doi>10.14569/IJACSA.2024.0150398</doi>
        <lastModDate>2024-03-30T14:57:42.5170000+00:00</lastModDate>
        
        <creator>B. H. Pansambal</creator>
        
        <creator>A. B. Nandgaokar</creator>
        
        <creator>J. L. Rajput</creator>
        
        <creator>Abhay Wagh</creator>
        
        <subject>CNN (Convolutional Neural Network); BiLSTM (Bi Directional Long Short Term Memory); facial expression recognition; deep learning; flattening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Deep learning algorithms have demonstrated good performance in many sectors and applications. Facial expression recognition (FER), the task of recognizing emotions from images, is an integral part of many applications. With the help of an integrated CNN-BiLSTM approach, higher accuracy can be achieved in identifying facial expressions. The convolutional neural network (CNN) consists of a Conv2D layer; it divides the given images into batches, performs normalization, and, if required, flattens the data, i.e., converts it into a 1D array. BiLSTM consists of two LSTMs, one operating in the forward direction and the other in the backward direction. One can use an LSTM alone to process the image datasets; however, a BiLSTM can predict expressions with higher accuracy, since input data is available in both directions (forward and backward), which helps maintain context. Application areas where a BiLSTM can give higher prediction accuracy include forecasting models, text recognition, speech recognition, classification of large datasets, and the proposed facial expression recognition. The integrated approach (CNN and BiLSTM) increases accuracy significantly, as discussed in the results and discussion section. This approach can be categorized as a fusion technique, where two methods are integrated to achieve higher accuracy. The results and discussion section elaborates the effectiveness of the integrated approach compared to HERO: human emotions recognition for realizing the intelligent internet of things. Compared to the HERO approach, CNN-BiLSTM gives good results in terms of precision and recall.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_98-An_Integrated_CNN_BiLSTM_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximizing Solar Panel Efficiency in Partial Shade: The Improved POA Solution for MPPT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150397</link>
        <id>10.14569/IJACSA.2024.0150397</id>
        <doi>10.14569/IJACSA.2024.0150397</doi>
        <lastModDate>2024-03-30T14:57:42.5000000+00:00</lastModDate>
        
        <creator>Youssef Mhanni</creator>
        
        <creator>Youssef Lagmich</creator>
        
        <subject>Pelican Optimization Algorithm (POA); Maximum Power Point Tracking (MPPT); Solar Photovoltaic Systems; Partial Shading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This paper presents an innovative approach to improving Maximum Power Point Tracking (MPPT) in solar photovoltaic (PV) systems affected by partial shading, a common challenge that significantly reduces efficiency. Our research focuses on enhancing the Pelican Optimization Algorithm (POA), a promising tool in solar energy optimization, to better tackle the efficiency drop observed under shaded conditions. The enhancements to the POA involve the integration of advanced adaptive mechanisms that enable more precise response to the fluctuating irradiance patterns typical of partially shaded environments. This revised version of the POA demonstrates remarkable adaptability and precision in identifying and tracking the maximum power point, significantly outperforming its original iteration. The methodology of this study encompasses a series of rigorous simulations and real-world testing scenarios, designed to evaluate the POA&#39;s performance under various degrees and patterns of shading. The results show a notable improvement in efficiency, with the enhanced POA maintaining high levels of energy capture even in suboptimal sunlight conditions. Additionally, the improved algorithm exhibits robustness against the rapid changes in irradiance, which is characteristic of partially shaded solar PV systems. Our findings underscore the potential of the enhanced POA as a robust, adaptive solution for optimizing solar energy collection, offering significant benefits for solar installations in geographies prone to shading. This work not only contributes to the field of renewable energy optimization but also provides valuable insights for the development of more resilient and efficient solar energy systems.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_97-Maximizing_Solar_Panel_Efficiency_in_Partial_Shade.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Emotion Recognition-based Engagement Detection in Autism Spectrum Disorder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150395</link>
        <id>10.14569/IJACSA.2024.0150395</id>
        <doi>10.14569/IJACSA.2024.0150395</doi>
        <lastModDate>2024-03-30T14:57:42.4870000+00:00</lastModDate>
        
        <creator>Noura Alhakbani</creator>
        
        <subject>Engagement detection; facial emotion recognition; autistic children; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Engagement is the state of alertness that a person experiences and the deliberate focus of their attention on a task-relevant stimulus. It positively correlates with many aspects such as learning, social support, and acceptance. Facial emotion recognition using artificial intelligence can be beneficial to automatically measure individual engagement, especially when using automated learning and playing modalities such as robots. In this study, we proposed an automatic engagement detection model through facial emotion recognition, particularly in determining autistic children’s engagement. The methodology employed a transfer learning approach at the dataset level, utilizing facial image datasets from typically developing (TD) children and children with ASD. The classification task was performed using convolutional neural network (CNN) methods. Comparative analysis revealed that the CNN method demonstrated superior accuracy compared to random forest (RF), support vector machine (SVM), and decision tree algorithms in both the TD and ASD datasets. The findings highlight the potential of CNN-based facial emotion recognition for accurately assessing engagement in children with ASD, with implications for enhancing learning, social support, and acceptance in this population. This research contributes to the field of engagement measurement in autism and underscores the importance of leveraging AI techniques for improving understanding and support for children with ASD.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_95-Facial_Emotion_Recognition_based_Engagement_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-Feedback Control of Ball-Plate System: Geometric Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150396</link>
        <id>10.14569/IJACSA.2024.0150396</id>
        <doi>10.14569/IJACSA.2024.0150396</doi>
        <lastModDate>2024-03-30T14:57:42.4870000+00:00</lastModDate>
        
        <creator>Khalid Lefrouni</creator>
        
        <creator>Saoudi Taibi</creator>
        
        <subject>Ball-plate system; Delay systems; Geometric approach; State-feedback control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research focuses on investigating the issue of accurately controlling the location of the ball in the ball and plate system. The findings of this research have practical applications across several domains, including optimizing the alignment of solar panels to enhance their energy generation capacity. In this work, we propose the development of a system dynamics model using the Euler-Lagrangian approach. Furthermore, we analyze a technique in the frequency domain known as the geometric approach to create a state-feedback control that ensures the stability of the system. This study primarily focuses on analyzing the characteristic equations associated with the closed-loop system, while also considering the impact of feedback delay. Ultimately, the proposed technique is substantiated by presenting simulation data for validation.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_96-State_Feedback_Control_of_Ball_Plate_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Threshold Tuning-based Load Balancing (ATTLB) for Cost Minimization in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150394</link>
        <id>10.14569/IJACSA.2024.0150394</id>
        <doi>10.14569/IJACSA.2024.0150394</doi>
        <lastModDate>2024-03-30T14:57:42.4700000+00:00</lastModDate>
        
        <creator>Lama S. Khoshaim</creator>
        
        <subject>Cloud computing; load balancing; threshold optimization; cost minimization; pricing models; CloudSim; resource allocation; cost-aware load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cloud computing has revolutionized on-demand resource provisioning through virtualization. However, dynamic pricing of cloud resources presents cost management challenges. Load balancing is critical for cloud efficiency; however, current algorithms use static thresholds and are unable to adapt to fluctuating prices. This study proposes a novel Adaptive Threshold Tuning-based Load Balancing (ATTLB) algorithm that optimizes the CPU and memory thresholds of a load balancer based on real-time pricing. The ATTLB algorithm has a pricing monitor to track spot prices; a VM profiler to record capacities; a threshold optimizer to tune thresholds based on pricing, capacity, and SLAs; and a load dispatcher to assign requests to VMs using the optimized thresholds. Extensive simulations compare ATTLB with weighted round-robin (WRR), ant colony optimization (ACO), and least connection-based load balancing (LCLB) algorithms using the CloudSim toolkit. The results demonstrate the ability of ATTLB to reduce total costs by over 35% and reduce SLA violations by 41% compared with prior techniques for cloud load balancing. Adaptive threshold tuning provides robustness against dynamic pricing and demand changes. ATTLB balances cost, performance, and utilization through real-time threshold adaptation.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_94-Adaptive_Threshold_Tuning_based_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Time-Series Classification Approach for Human Activity Recognition with Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150393</link>
        <id>10.14569/IJACSA.2024.0150393</id>
        <doi>10.14569/IJACSA.2024.0150393</doi>
        <lastModDate>2024-03-30T14:57:42.4530000+00:00</lastModDate>
        
        <creator>Youssef Errafik</creator>
        
        <creator>Younes Dhassi</creator>
        
        <creator>Adil Kenzi</creator>
        
        <subject>Deep Learning (DL); multivariate time series; Time Series Classification (TSC); Human Activity Recognition (HAR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Accurate classification of multivariate time series data represents a major challenge for scientists and practitioners exploring time series data in different domains. LSTM auto-encoders are deep learning models that aim to represent input data efficiently while minimizing information loss during the reconstruction phase. Although they are commonly used for dimensionality reduction and data augmentation, their potential for extracting dynamic features and temporal patterns for temporal data classification is not fully exploited, in contrast to the tasks of time-series prediction and anomaly detection. In this article, we present a multi-level hybrid TSC-LSTM-Auto-Encoder architecture that takes full advantage of the incorporation of temporal labels to comprehensively capture temporal features and patterns. This approach aims to improve the performance of temporal data classification using this additional information. We evaluated the proposed architecture for Human Activity Recognition (HAR) using the UCI-HAR and WISDM public benchmark datasets. The achieved performance outperforms the current state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_93-A_New_Time_Series_Classification_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence System for Malaria Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150392</link>
        <id>10.14569/IJACSA.2024.0150392</id>
        <doi>10.14569/IJACSA.2024.0150392</doi>
        <lastModDate>2024-03-30T14:57:42.4400000+00:00</lastModDate>
        
        <creator>Phoebe A Barracloug</creator>
        
        <creator>Charles M Were</creator>
        
        <creator>Hilda Mwangakala</creator>
        
        <creator>Gerhard Fehringer</creator>
        
        <creator>Dornald O Ohanya</creator>
        
        <creator>Harison Agola</creator>
        
        <creator>Philip Nandi</creator>
        
        <subject>Malaria diagnosis; malaria symptoms; artificial intelligence and machine learning classifier; malaria classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Malaria has remained one of the major global health threats over the past decades, particularly in low- and middle-income countries. 70% of the Kenyan population lives in malaria endemic zones, and the majority face barriers to accessing health services due to factors including lack of income, distance, and social culture. Despite various research efforts, the standard method of examining blood smears under a microscope, while advantageous, is time consuming and requires skilled personnel. To address this issue effectively, this study introduces a new method integrating the InfoGainAttributeEval feature selection technique and a parameter tuning method based on Artificial Intelligence and Machine Learning (AIML) classifiers to diagnose types of malaria more accurately. The proposed method uses 100 features extracted from 4000 samples. Sets of experiments were conducted using Artificial Neural Network (ANN), Na&#239;ve Bayes (NB), and Random Forest (RF) classifiers and Ensemble methods (Meta Bagging, Random Committee Meta, and Voting). Na&#239;ve Bayes achieved the best result, reaching 100% accuracy and building the model in 0.01 second. The results demonstrate that the proposed method can classify malaria types accurately and achieves the best result compared to previously reported results in the field.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_92-Artificial_Intelligence_System_for_Malaria_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Customer Segmentation Insights by using RFM + Discount Proportion Model with Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150390</link>
        <id>10.14569/IJACSA.2024.0150390</id>
        <doi>10.14569/IJACSA.2024.0150390</doi>
        <lastModDate>2024-03-30T14:57:42.4230000+00:00</lastModDate>
        
        <creator>Victor Hugo Antonius</creator>
        
        <creator>Devi Fitrianah</creator>
        
        <subject>Clustering; RFM; discount proportion; customer segmentation; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In this digital era, the use of e-commerce has expanded and is widely adopted by society. One of the reasons why people use e-commerce platforms is their convenience and ease of use. However, the rapid growth of e-commerce has led to a substantial rise in transactions within these platforms, involving various business entities. Therefore, it is crucial to perform customer segmentation to group customers based on their purchasing behavior. The implementation of data mining techniques, such as clustering, is highly beneficial in this case. Clustering helps process datasets and transform them into useful information. In this study, transaction data was obtained from an e-commerce store, MurahJaya888, and analyzed using various clustering methods: K-means, K-medoids, Fuzzy c-means, and Mini-batch k-means. We also propose a new model that extends the clustering attributes, namely RFM + DP (Discount Proportion). The discount proportion rate provides additional insight for customer segmentation, as it helps capture purchasing behavior that is more responsive to discounts. Implementing these four clustering methods with the RFM + DP model resulted in four clusters based on the optimal elbow method. Furthermore, the evaluation and performance metrics for each clustering algorithm indicate that Mini-batch k-means achieved the highest silhouette score of 0.50, while K-means obtained the highest CH index value of 1056 compared to the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_90-Enhancing_Customer_Segmentation_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adapting Outperformer from Topic Modeling Methods for Topic Extraction and Analysis: The Case of Afaan Oromo, Amharic, and Tigrigna Facebook Text Comments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150391</link>
        <id>10.14569/IJACSA.2024.0150391</id>
        <doi>10.14569/IJACSA.2024.0150391</doi>
        <lastModDate>2024-03-30T14:57:42.4230000+00:00</lastModDate>
        
        <creator>Naol Bakala Defersha</creator>
        
        <creator>Kula Kekeba Tune</creator>
        
        <creator>Solomon Teferra Abate</creator>
        
        <subject>Afaan oromo; amharic; tigrigna; BERTopic; topic extraction; social media data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Facebook users generate a vast amount of data, including posts, comments, and replies, in various formats such as short text, long text, structured, unstructured, and semi-structured. Consequently, obtaining important information from social media data becomes a significant challenge for low-resource languages such as Afaan Oromo, Amharic, and Tigrigna. Topic modeling algorithms are designed to identify and categorize topics within a set of documents based on their semantic similarity, which helps obtain insight from documents. This study applies latent Dirichlet allocation, matrix factorization, probabilistic latent semantic analysis, and BERTopic to extract topics from Facebook text comments in Afaan Oromo, Amharic, and Tigrigna. The study utilized text comments from the Facebook pages of various individuals, including activists, politicians, athletes, media companies, and government offices. BERTopic was found to be the most effective for identifying major topics and providing valuable insights into user conversations and social media trends, with coherence scores of 82.74%, 87.85%, and 81.79% for the three languages, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_91-Adapting_Outperformer_from_Topic_Modeling_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Botnet Detection and Incident Response in Security Operation Center (SOC): A Proposed Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150389</link>
        <id>10.14569/IJACSA.2024.0150389</id>
        <doi>10.14569/IJACSA.2024.0150389</doi>
        <lastModDate>2024-03-30T14:57:42.4070000+00:00</lastModDate>
        
        <creator>Roslaily Muhammad</creator>
        
        <creator>Saiful Adli Ismail</creator>
        
        <creator>Noor Hafizah Hassan</creator>
        
        <subject>Botnet detection; threat incident response; security operation center</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the dynamic landscape of evolving cyber threats, Security Operations Centers (SOCs) play an important role in protecting digital assets. Among these threats, botnets are particularly challenging due to their ability to take over many devices and launch coordinated attacks. Through comparative analysis, the research gaps in existing frameworks have been identified. Based on these insights, a botnet detection and incident response framework aligned with SOC practices has been proposed. This proposed framework emphasizes proactive measures by integrating threat intelligence with detection and monitoring tools to detect botnet attacks and facilitate rapid response. Future research will focus on conducting evaluation and validation studies to assess the effectiveness and performance of the framework in controlled environments. This effort will contribute to developing the framework and ensuring it aligns with practical cybersecurity needs.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_89-Botnet_Detection_and_Incident_Response_in_Security_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepEmoVision: Unveiling Emotion Dynamics in Video Through Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150388</link>
        <id>10.14569/IJACSA.2024.0150388</id>
        <doi>10.14569/IJACSA.2024.0150388</doi>
        <lastModDate>2024-03-30T14:57:42.3930000+00:00</lastModDate>
        
        <creator>Prathwini</creator>
        
        <creator>Prathyakshini</creator>
        
        <subject>Emotion detection; video analysis; Recurrent Neural Networks (RNN); Support Vector Machines (SVM); K-Nearest Neighbours (KNN); Convolutional Neural Networks (CNN); facial expression recognition; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Emotion detection from videos plays a pivotal role in understanding human behavior and interaction. This study delves into a cutting-edge method that leverages Recurrent Neural Networks (RNN), Support Vector Machines (SVM), K-Nearest Neighbours (KNN), and Convolutional Neural Networks (CNN) to precisely detect emotions exhibited in video content, holding significant importance in comprehending human behavior and interactions. The devised approach entails a multi-phase procedure: initially, video frames are extracted and pre-processed, and CNN-based feature extraction is employed to isolate facial expressions and pertinent visual cues. These extracted features capture intricate patterns and spatial information crucial for discerning emotions. The results of the trials show that CNN, SVM, KNN, and RNN have promising performance, highlighting their potential. Among these machine learning models, RNN attained a 95% accuracy rate in recognizing and classifying emotions in video information. This combination of approaches provides a thorough plan for identifying emotions in dynamic visual material in real time.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_88-DeepEmoVision_Unveiling_Emotion_Dynamics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Cryptojacking Detection Through Hybrid Black Widow Optimization and Generative Adversarial Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150387</link>
        <id>10.14569/IJACSA.2024.0150387</id>
        <doi>10.14569/IJACSA.2024.0150387</doi>
        <lastModDate>2024-03-30T14:57:42.3770000+00:00</lastModDate>
        
        <creator>Meenal R. Kale</creator>
        
        <creator>Deepa</creator>
        
        <creator>Anil Kumar N</creator>
        
        <creator>N. Lakshmipathi Anantha</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>Cryptojacking; attack detection; Generative Adversarial Networks; Black Widow Optimization; cybercriminals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cybercriminals now find cryptocurrency mining to be a lucrative endeavour. This is frequently seen in the form of cryptojacking, the illegal use of computer resources for cryptocurrency mining. Protecting user resources and preserving the integrity of digital ecosystems depend heavily on the detection and mitigation of such threats. This research presents a unique method that combines Hybrid Black Widow Optimisation (HBWO) with Generative Adversarial Networks (GANs) to improve the detection of cryptojacking. Due to its covert nature and tendency to elude conventional detection methods, cryptojacking is still a widespread concern. In order to overcome this difficulty, our work makes use of the complementary abilities of deep learning and metaheuristic optimisation. To maximise feature selection for efficient identification of cryptojacking activity, HBWO, which draws inspiration from the foraging behaviour of spiders, is utilised. Simultaneously, GANs are employed to produce artificial data augmentations, which strengthen the detection model&#39;s resilience and enrich the training dataset. Our technique first preprocesses the dataset to extract pertinent features and then utilises HBWO to identify the most discriminative ones. The training dataset is then supplemented with artificial data samples created using GANs, which enhances the detection model&#39;s capacity for generalisation. Experiments conducted on real-world datasets show the effectiveness of our solution, outperforming baseline techniques. The proposed hybrid technique offers a viable way to improve the detection of cryptojacking. Through the combination of HBWO for feature optimisation and GANs for data augmentation, our approach achieves an improved accuracy of 98.02% and greater resilience in detecting cryptojacking activity. With its novel framework for fending off new dangers in the digital sphere, this research adds to the continuing efforts in cybersecurity.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_87-Enhancing_Cryptojacking_Detection_Through_Hybrid_Black_Widow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining in European Union – Achievements and Challenges: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150386</link>
        <id>10.14569/IJACSA.2024.0150386</id>
        <doi>10.14569/IJACSA.2024.0150386</doi>
        <lastModDate>2024-03-30T14:57:42.3600000+00:00</lastModDate>
        
        <creator>Corina Simionescu</creator>
        
        <creator>Mirela Danubianu</creator>
        
        <creator>Bogdanel Constantin Gradinaru</creator>
        
        <creator>Marius Silviu Maciuca</creator>
        
        <subject>Educational data mining; systematic literature review; European Union; Kitchenham methodology; data mining techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The quality of education is one of the pillars of sustainable development, as set out in “The 2030 Agenda for Sustainable Development”, adopted by all United Nations Member States in 2015. Recent social and technological developments, as well as events such as the COVID-19 pandemic or conflicts in many parts of the world, have led to essential changes in the way education processes are carried out. In addition, they have made it possible to generate, collect and store large amounts of data related to these processes, data that can hide useful information for decisions that, in the medium or long term, can lead to a significant increase in the quality of education. Uncovering this information is the subject of Educational Data Mining. To understand the state-of-the-art reflected by recent developments, trends, theories, methodologies, and applications in this field in the European Union, we considered it appropriate to conduct a systematic and critical literature review. Our paper aims to identify, analyze, and synthesize relevant information from the selected articles, both to build a foundation for further studies and to identify gaps or unexplored issues that can be addressed in future research. The analysis is based on research identified in three international databases recognized for content quality: Scopus, ScienceDirect, and IEEE Xplore.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_86-Educational_Data_Mining_in_European_Union.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Penetration Testing Framework using the Q Learning Ensemble Deep CNN Discriminator Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150385</link>
        <id>10.14569/IJACSA.2024.0150385</id>
        <doi>10.14569/IJACSA.2024.0150385</doi>
        <lastModDate>2024-03-30T14:57:42.3430000+00:00</lastModDate>
        
        <creator>Dipali Nilesh Railkar</creator>
        
        <creator>Shubhalaxmi Joshi</creator>
        
        <subject>Penetration testing; Q-learning; ensemble deep CNN; prairie natural swarm optimization; Nmap script engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Penetration testing (PT) serves as an effective tool for examining networks and identifying vulnerabilities by simulating a hacker&#39;s attack to uncover valuable information, such as details about the host&#39;s operating and database systems. Strong penetration testing is crucial for assessing system vulnerabilities in the constantly changing world of cyber security. Existing methods often struggle with adapting to dynamic threats, provide limited automation, and lack the ability to discern subtle security weaknesses. In comparison to manual PT, intelligent PT has gained widespread popularity due to its efficiency, resulting in reduced time consumption and lower labor costs. Considering this, an effective penetration testing framework is developed using a prairie natural swarm (PNS) optimized Q-learning ensemble deep CNN. Initially, the penetration testing environment (Shodan search engine) is simulated, and an expert knowledge base is also generated. Subsequently, the Nmap script engine and Metasploit are deployed, providing robust tools for network investigation and vulnerability assessment. The system state is then relayed to the Q-learning ensemble deep convolutional neural network (Q-learning ensemble deep CNN) classifier. This unique ensemble combines the strengths of Q-learning and deep CNNs, enabling optimal policy learning for decision-making. The prairie natural swarm optimization algorithm is developed through the hybridization of coyote and particle swarm characteristics to fine-tune classifier parameters, enhancing performance. Additionally, the discriminator is trained to maximize standard action rewards while minimizing discounted action rewards, distinguishing valuable from less valuable information. By evaluating the advantage function, the likelihood of successful penetration is determined, informing situational decision-making through the Q-learning ensemble deep CNN classifier. Accuracy, sensitivity, and specificity are used to evaluate the proposed PNS-optimized Q-learning ensemble deep CNN model. In comparison to other approaches currently in use, it achieves values of 94.54%, 94.40%, and 94.90% for TP, and 94.64%, 94.69%, and 94.52% for k-fold.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_85-Penetration_Testing_Framework_using_the_Q_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student Performance Estimation Through Innovative Classification Techniques in Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150384</link>
        <id>10.14569/IJACSA.2024.0150384</id>
        <doi>10.14569/IJACSA.2024.0150384</doi>
        <lastModDate>2024-03-30T14:57:42.3300000+00:00</lastModDate>
        
        <creator>Hui Fan</creator>
        
        <creator>Guoping Zhu</creator>
        
        <creator>Jianhua Zhan</creator>
        
        <subject>Student performance; Gaussian Process Classification; Golden Eagle Optimizer; Pelican Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the current era of intense educational competition, institutions must effectively classify individuals based on their abilities, proactively forecast student performance, and work towards enhancing their forthcoming examination outcomes. Providing early guidance to students is crucial in helping them focus their efforts on specific areas to boost their academic achievements. This analytical approach supports educational institutions in mitigating failure rates by utilizing students&#39; previous performance in relevant courses to predict their outcomes in a specific program. Data mining encompasses a variety of techniques used to reveal hidden patterns within vast datasets. In the context of educational data mining, these methods are applied within the educational sphere, with a specific emphasis on analyzing data from both students and educators. These patterns can offer significant value for predictive and analytical objectives. In this study, Gaussian Process Classification (GPC) was employed for the prediction of student performance. To improve the model&#39;s accuracy, two cutting-edge optimizers, namely the Golden Eagle Optimizer (GEO) and the Pelican Optimization Algorithm (POA), were incorporated. When assessing the model&#39;s performance, four widely used metrics were utilized: Accuracy, Precision, Recall, and F1-score. The results of this study underscore the effectiveness of both the POA and GEO optimizers in enhancing GPC performance. Specifically, GPC+GEO demonstrated remarkable effectiveness in the Poor grade, while GPC+POA excelled in the Acceptable and Excellent categories. This highlights the positive impact of these optimization techniques on the model&#39;s predictive capabilities.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_84-Student_Performance_Estimation_Through_Innovative_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Security in IoT Networks: Advancements in Key Exchange, User Authentication, and Data Integrity Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150383</link>
        <id>10.14569/IJACSA.2024.0150383</id>
        <doi>10.14569/IJACSA.2024.0150383</doi>
        <lastModDate>2024-03-30T14:57:42.2970000+00:00</lastModDate>
        
        <creator>Alumuru Mahesh Reddy</creator>
        
        <creator>M. Kameswara Rao</creator>
        
        <subject>IoT; Public key; key authentication; gate way node; data integrity mechanisms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The Future Internet (FI) will be shaped by the Internet of Things (IoT); however, because of their limited resources and varied communication capabilities, IoT devices present substantial challenges when it comes to securing connectivity. The adoption of robust security measures is hindered by limited compute power, memory, and energy resources, hence diminishing the promise of improved IoT capabilities. Confidentiality, integrity, and authenticity are ensured via authentication mechanisms, which are influenced by privacy needs driven by the types of customers that IoT networks serve. Authentication is crucial in vital industries like linked cars and smart cities, where attackers might exploit vulnerabilities to access sensor data. Verification of the Gate Way Node (GWN), which is responsible for mutual authentication, user and sensor registration, and session key creation, is essential. The efficiency of key creation has been enhanced to tackle the temporal intricacies linked to different key lengths. With notable advantages, the novel method shortens the time required to generate cryptographic keys: only 60 milliseconds for 100-bit keys and 120 milliseconds for 256-bit keys. This improvement fortifies resistance against new cyber threats by strengthening the security basis of IoT networks and enhancing responsiveness and dependability. Through open transmission channels, users send login requests and, after successfully authenticating, create session keys to establish secure connections with cloud servers. Python simulation results show how resilient the system is to security threats while preserving affordable interaction, processing, and storage. This development not only strengthens IoT networks but also guarantees their sustainability in the face of changing security threats.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_83-Enhancing_Security_in_IoT_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Hybridization Approach for Estimation of The Heating Load of Residential Buildings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150382</link>
        <id>10.14569/IJACSA.2024.0150382</id>
        <doi>10.14569/IJACSA.2024.0150382</doi>
        <lastModDate>2024-03-30T14:57:42.2970000+00:00</lastModDate>
        
        <creator>Huanhuan Li</creator>
        
        <subject>Heating load; residential buildings; k-nearest neighbor; snake optimizer; black widow optimizers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent times, the world&#39;s growing population, coupled with its ever-increasing energy demands, has led to a significant rise in the consumption of fossil fuels. Consequently, this surge in fossil fuel usage has exacerbated the threat of global warming. Building energy consumption represents a significant portion of global energy usage. Accurately determining the energy consumption of buildings is crucial for effective energy management and preventing excessive usage. In pursuit of this goal, this study introduces a novel and robust machine learning (ML) method based on the K-nearest Neighbor (KNN) algorithm for predicting the heating load of residential buildings. While the KNN model demonstrates satisfactory performance in predicting heating loads, to attain optimal results and accuracy, two novel optimizers, the Snake Optimizer (SO) and the Black Widow Optimizer (BWO), have been hybridized with the KNN model. The results highlight the effectiveness of the KNN-SO hybrid (KNSO) in predicting heating load, as evidenced by its impressive R2 value of 0.986 and its low RMSE value of 1.231. This breakthrough contributes significantly to the ever-pressing pursuit of energy efficiency in the built environment and its pivotal role in addressing global environmental challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_82-Reliable_Hybridization_Approach_for_Estimation_of_The_Heating_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Enabled Cybersecurity Framework for Safeguarding Patient Data in Medical Informatics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150381</link>
        <id>10.14569/IJACSA.2024.0150381</id>
        <doi>10.14569/IJACSA.2024.0150381</doi>
        <lastModDate>2024-03-30T14:57:42.2830000+00:00</lastModDate>
        
        <creator>Prajakta U. Waghe</creator>
        
        <creator>A Suresh Kumar</creator>
        
        <creator>Arun B Prasad</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Block Chain; Cybersecurity; Diffie Hellmen; Patient Data; Proof of Work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Securing patient information is crucial in the quickly changing field of healthcare informatics to guarantee privacy, reliability, and adherence to legal requirements. This article presents a complete cybersecurity architecture enabled by blockchain and customized for the medical informatics area. The framework aims to provide adequate safeguards for sensitive patient data by utilizing AES-Diffie-Hellman key exchange for secure communication, blockchain technology with Proof-of-Work (PoW), and Role-Based Access Control (RBAC) for fine-grained access management. A strong cybersecurity architecture is crucial for maintaining the security, credibility, and availability of private patient information in the current healthcare information management environment. By using decentralized storage, access control methods, and cutting-edge encryption strategies, the suggested framework overcomes these difficulties. The framework ensures safe data transport and storage by showcasing effective AES encryption and decryption procedures through performance evaluation. PoW consensus combined with blockchain technology provides the framework with auditable and immutable data storage, reducing the possibility of data manipulation and unwanted access. Additionally, granular access control is made possible by the integration of RBAC, guaranteeing that only those with the proper authorization may access patient data. Python is used to implement the suggested framework. The suggested method considerably outperformed NTRU, RSA, and DES, with encryption and decryption times of 12.1 and 12.2 seconds, respectively. The proposed Blockchain-Enabled Cybersecurity Framework demonstrates exceptional efficacy, evidenced by its ability to achieve a 97.9% reduction in unauthorized access incidents, thus offering robust protection for patient data in medical informatics.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_81-Blockchain_Enabled_Cybersecurity_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A User Control Framework for Cloud Data Migration in Software as a Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150380</link>
        <id>10.14569/IJACSA.2024.0150380</id>
        <doi>10.14569/IJACSA.2024.0150380</doi>
        <lastModDate>2024-03-30T14:57:42.2670000+00:00</lastModDate>
        
        <creator>Danga Imbaji Injuwe</creator>
        
        <creator>Hamidah Ibrahim</creator>
        
        <creator>Fatimah Sidi</creator>
        
        <creator>Iskandar Ishak</creator>
        
        <subject>Comparative analysis; cloud computing; cloud data migration; on-premises to cloud migration; user control; Software as a Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cloud computing represents the overarching paradigm that enables organizations to leverage cloud services for data storage and application deployment. Nowadays, organizations that use cloud services can migrate their data using Software as a Service (SaaS). The organizations’ data and applications are deployed over the cloud through the on-premises to cloud migration process, referring to the transition from legacy, locally hosted systems to the cloud environment. Several data migration frameworks have emerged to guide users in the migration process. While numerous studies have addressed the importance of granting control to users during the cloud data migration process, a user control framework is yet to be created, depriving users of visibility and a sense of ownership, customization to meet user needs, compliance and governance, and training. This paper aims to achieve this by proposing a conceptual user control framework for the cloud data migration process in SaaS. The framework is constructed based on a comprehensive analysis of existing research works related to cloud data migration, with the aim of identifying the steps/phases of the data migration process, the factors affecting user control with regard to the identified phases, and the control metrics of each identified factor. An initial conceptual user control framework is constructed based on this analysis of the literature, and the framework is further enhanced based on expert reviews.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_80-A_User_Control_Framework_for_Cloud_Data_Migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Retrieval-Augmented Generation Approach: Document Question Answering using Large Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150379</link>
        <id>10.14569/IJACSA.2024.0150379</id>
        <doi>10.14569/IJACSA.2024.0150379</doi>
        <lastModDate>2024-03-30T14:57:42.2500000+00:00</lastModDate>
        
        <creator>Kurnia Muludi</creator>
        
        <creator>Kaira Milani Fitria</creator>
        
        <creator>Joko Triloka</creator>
        
        <creator>Sutedi</creator>
        
        <subject>Natural Language Processing; Large Language Model; Retrieval Augmented Generation; Question Answering; GPT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This study employs the Retrieval-Augmented Generation (RAG) method to improve Question-Answering (QA) systems by addressing document processing problems in Natural Language Processing. It represents a recent advance in applying RAG to document question-answering applications, overcoming obstacles faced by previous QA systems. RAG combines search techniques over a vector store with the text generation mechanism of Large Language Models, offering a time-efficient alternative to the limitations of manual reading. The research evaluates RAG using the Generative Pre-trained Transformer 3.5 (GPT-3.5-turbo) model from ChatGPT and its impact on document data processing, comparing it with other applications. This research also provides datasets to test the capabilities of the document QA system. The proposed dataset and the Stanford Question Answering Dataset (SQuAD) are used for performance testing. The study contributes theoretically by advancing methodologies and knowledge representation, supporting benchmarking in research communities. Results highlight RAG&#39;s superiority: achieving a precision of 0.74 in Recall-Oriented Understudy for Gisting Evaluation (ROUGE) testing, outperforming others at 0.5; obtaining an F1 score of 0.88 in BERTScore, surpassing other QA apps at 0.81; attaining a precision of 0.28 in Bilingual Evaluation Understudy (BLEU) testing, surpassing others at 0.09; and scoring 0.33 in Jaccard Similarity, outshining others at 0.04. These findings underscore RAG&#39;s efficiency and competitiveness, promising a positive impact on various industrial sectors through advanced Artificial Intelligence (AI) technology.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_79-Retrieval_Augmented_Generation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rolling Bearing Life Prediction Technology Based on Feature Screening and LSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150378</link>
        <id>10.14569/IJACSA.2024.0150378</id>
        <doi>10.14569/IJACSA.2024.0150378</doi>
        <lastModDate>2024-03-30T14:57:42.2370000+00:00</lastModDate>
        
        <creator>Yujun Zhao</creator>
        
        <subject>Features; rolling bearings; prediction; fisher score; bidirectional long short-term memory model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>As one of the important components of industrial equipment, the health condition of rolling bearings directly affects the operational effectiveness of the equipment. Therefore, to ensure equipment safety and reduce maintenance costs, an intelligent rolling bearing life prediction technology is proposed. First, fault information of rolling bearings is extracted, and the Fisher score is introduced for feature selection. Simultaneously, a variational modal analysis method based on improved particle swarm optimization is introduced to denoise rolling bearing signals. Finally, an improved bidirectional long short-term memory model is introduced to construct a prediction model and achieve life prediction of rolling bearings. In the performance analysis of the denoising model, the optimal modal component K value was experimentally determined to be 3, and the optimal penalty factor to be 1000. In the time-domain signal analysis of the two models, the proposed model decomposes the original signal better than the comparative model, and the signal denoising ability is improved by 26.35%. In predicting rolling bearing life, the proposed model accurately predicts both early and late bearing life. For example, when the collection time is 100, the actual remaining life is 0.712 while the proposed model predicts 0.721, which is better than other models. In the comparison of mean absolute error, the proposed model achieves 0.035, outperforming other models. This indicates that the proposed rolling bearing life prediction model has excellent predictive performance. The research provides essential technical references for the maintenance of industrial machinery and equipment, as well as equipment life monitoring.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_78-Rolling_Bearing_Life_Prediction_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Framework for Detection and Classification of Implant Manufacturer using X-Ray Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150377</link>
        <id>10.14569/IJACSA.2024.0150377</id>
        <doi>10.14569/IJACSA.2024.0150377</doi>
        <lastModDate>2024-03-30T14:57:42.2200000+00:00</lastModDate>
        
        <creator>Attar Mahay Sheetal</creator>
        
        <creator>K. Sreekumar</creator>
        
        <subject>Machine learning; deep learning; convolution neural network; Generative Adversarial Network (GAN); Principal Component Analysis (PCA); shoulder implants</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Nowadays, artificial prostheses are widely used to mitigate pain in damaged shoulders and restore their mobility. The process involves a complex surgery that fixes an artificial prosthesis into a damaged shoulder as a replacement for the ball-and-socket joint of the shoulder. Long after the surgical process, the need for revision or reoperation may arise due to problems with the prosthesis. Identifying the prosthesis manufacturer is a paramount step in the reoperation exercise. The traditional approach compares the prosthesis under consideration with prostheses from a vast number of manufacturers. This approach is cost-efficient and requires no extra training for the physician to identify the prosthesis manufacturer. However, the method is time-inefficient and prone to mistakes. Systems based on machine learning have the potential to reduce human errors and expedite the revision process. This paper proposes a shallow 2D convolutional neural network (CNN) for the classification of shoulder prostheses. To speed up the learning process and improve the performance of the deep learning model for implant classification, this paper employs three different techniques. Firstly, a generative adversarial network (GAN) is applied to the dataset to augment the classes with fewer samples, eliminating the data imbalance problem. Secondly, the most discriminative features are extracted using principal component analysis (PCA) and used to train the proposed model. Lastly, the model hyper-parameters are optimised to ensure optimal model performance. The model trained with extracted features with a variance of 0.99 achieved the best accuracy of 99.8%.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_77-A_Deep_Learning_Framework_for_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Strategy for Industrial Machinery Product Selection Scheme Based on DMOEA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150376</link>
        <id>10.14569/IJACSA.2024.0150376</id>
        <doi>10.14569/IJACSA.2024.0150376</doi>
        <lastModDate>2024-03-30T14:57:42.2030000+00:00</lastModDate>
        
        <creator>Shichang Liu</creator>
        
        <creator>Xinbin Yang</creator>
        
        <creator>Haihua Huang</creator>
        
        <subject>Industrial machinery products; optional configuration plan; multi objective evolutionary algorithm; density calculation; selection success rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>With the continuous innovation and replacement of industrial machinery products, traditional optional configuration plans can no longer complete product selection work with high quality. To further optimize the product selection process and solve the multi-objective selection problem of industrial machinery products, a multi-objective problem model for product selection is normalized and constructed based on the existing difficulties in industrial machinery product selection. A new product selection model is proposed by introducing a multi-objective evolutionary algorithm based on density calculation to solve the model. The experimental results showed that the new model achieved the highest selection success rate of 97% and a selection accuracy close to 95% at 250 iterations. In addition, the maximum sum of absolute errors of the selected bearing and bearing seat diameters under this model was 0.002, the maximum relative error was 0.01%, and the highest reliability of algorithm fitting was 99.9%. Simulation tests found that the average selection success rate was 93% and the average selection quality loss was 26%. In summary, the new selection model proposed in the study has clear advantages and feasibility, and can provide effective decision-making solutions for the design and selection of industrial machinery products.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_76-Optimization_Strategy_for_Industrial_Machinery_Product.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student Outcome Assessment on Structured Query Language using Rubrics and Automated Feedback Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150374</link>
        <id>10.14569/IJACSA.2024.0150374</id>
        <doi>10.14569/IJACSA.2024.0150374</doi>
        <lastModDate>2024-03-30T14:57:42.1900000+00:00</lastModDate>
        
        <creator>Sidhidatri Nayak</creator>
        
        <creator>Reshu Agarwal</creator>
        
        <creator>Sunil Kumar Khatri</creator>
        
        <creator>Masoud Mohammadian</creator>
        
        <subject>Automated SQL Query grading system; Cosine similarity; LSA; MultinomialNB; KNN; Logistic regression; student outcome assessment; rubric; feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Automated assessment of student assignments based on SQL (Structured Query Language) queries is an efficient method for evaluating and providing feedback on students&#39; DBMS-related skills. This paper presents a three-step approach to assessing student submissions automatically using various machine learning approaches and introduces an automated grading system for SQL queries. The Automated SQL Query Grading System (ASQGS) evaluates SQL queries submitted by students in a classroom. Due to the difficulties involved in the automatic grading procedure, this endeavor continues to attract researchers&#39; interest in developing new and superior grading systems. The purpose of this study is to demonstrate how text relevance is calculated between a reference query set by the teacher and a query submitted by the student. To compute the grade, the similarity value between the student and reference queries is compared. Various feature similarity techniques are discussed, which are required before applying the machine learning model to automatically assess the grade of the student&#39;s SQL assignment. In the second step, the grade produced by the ASQGS is used for student outcome assessment using rubrics with respect to Bloom&#39;s taxonomy, and scores are calculated using predefined rubric criteria. Additionally, in the third step, the system generates feedback for students, highlighting specific areas of improvement, errors, or suggestions to enhance their queries among different groups of students segregated by their SQL knowledge.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_74-Student_Outcome_Assessment_on_Structured_Query_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithms and Feature Selection for Improving the Classification Performance in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150375</link>
        <id>10.14569/IJACSA.2024.0150375</id>
        <doi>10.14569/IJACSA.2024.0150375</doi>
        <lastModDate>2024-03-30T14:57:42.1900000+00:00</lastModDate>
        
        <creator>Alaa Alassaf</creator>
        
        <creator>Eman Alarbeed</creator>
        
        <creator>Ghady Alrasheed</creator>
        
        <creator>Abdulsalam Almirdasie</creator>
        
        <creator>Shahd Almutairi</creator>
        
        <creator>Mohammed Abullah Al-Hagery</creator>
        
        <creator>Faisal Saeed</creator>
        
        <subject>Cancer classification; gene expression; feature selection; microarray data; algorithm; machine learning; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Microarray technology emerged recently and is used in genetic research to study gene expression. Microarrays have been widely applied to many fields, especially the health sector, for tasks such as diagnosing and predicting diseases, specifically cancers. These experiments usually generate a huge amount of gene expression data with analytical and computational complexities. Therefore, feature selection techniques and different classifiers help solve these problems by eliminating irrelevant and redundant features. This paper presents a proposed method for classifying the data using eight classification machine learning algorithms. Then, a Genetic Algorithm (GA) is applied to improve the selection of the best features and parameters for the model. The highest accuracy of the model among the different classifiers is used as the fitness measure in the genetic algorithm; this means that the model&#39;s accuracy can be used to select the best solutions in the population. The proposed method was applied to colon, breast, prostate, and Central Nervous System (CNS) disease datasets, and experimental outcomes demonstrated accuracy rates of 93.75, 96.15, 82.76, and 93.33, respectively. Based on these findings, the proposed method works well and effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_75-Genetic_Algorithms_and_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing the Metaverse in Astrosociology: Examine Students&#39; Perspectives of Space Science Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150373</link>
        <id>10.14569/IJACSA.2024.0150373</id>
        <doi>10.14569/IJACSA.2024.0150373</doi>
        <lastModDate>2024-03-30T14:57:42.1730000+00:00</lastModDate>
        
        <creator>Yahya Almurtadha</creator>
        
        <subject>Metaverse; space science education; astrosociology; virtual reality; space simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Major economies must invest in space skills to create a favorable business environment, particularly in KSA considering the present mindset toward outer space. KSA&#39;s vast landmass is a tremendous asset that makes it a perfect position from which to provide space services throughout the Middle East and the world. Space science education is becoming increasingly important, requiring advanced technology and computational skills to benefit early-career scientists. The Ministry of Education in KSA has declared that students will take Earth and Space Sciences to prepare them for global competition. Traditional learning experiences seem to have little to no impact on students&#39; conceptual understanding of space science courses. The sociological interests of Generation Z serve as the foundation for modern Metaverse approaches. Students&#39; comprehension of and interest in studying space and the galaxy are increased by providing a simulation of space travel using metaverse technology. The major goal of this study is to underline the significance and usefulness of employing metaverse technology when creating a new space science curriculum to advance knowledge in the field of space science education. Another goal is to introduce the value of astrosociology in understanding how people might interact with one another in space. As part of this study, a voluntary survey was completed by 39 students prior to their training in the metaverse space simulation. They then used the space simulation under careful observation, after which they responded to a follow-up survey. The findings supported the suggestion that the metaverse should be included in space science curricula. A number of comments and interests also arose on the viability of space travel, social interaction, and the advantages of using the metaverse to research these issues.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_73-Utilizing_the_Metaverse_in_Astrosociology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Forecasting Approach of Temperature Enabling Climate Change Analysis in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150372</link>
        <id>10.14569/IJACSA.2024.0150372</id>
        <doi>10.14569/IJACSA.2024.0150372</doi>
        <lastModDate>2024-03-30T14:57:42.1570000+00:00</lastModDate>
        
        <creator>Sultan Noman Qasem</creator>
        
        <creator>Samah M. Alzanin</creator>
        
        <subject>Climate change; Saudi Arabia; temperature; forecasting; recurrent neural network models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Climate change is a global issue with far-reaching consequences, and understanding regional temperature patterns is critical for effective climate change analysis. In this context, accurate temperature forecasting is critical for understanding and mitigating its impact. This study proposes an effective temperature forecasting approach for Saudi Arabia, a region highly vulnerable to the effects of climate change, particularly rising temperatures. The approach uses advanced recurrent neural network models, namely Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM). A comparative analysis of these models is also presented to determine the most effective model for forecasting mean temperatures in the following years, understanding climate variability, and informing sustainable adaptation strategies. Several experiments are conducted to train and evaluate the models on time series data of temperatures in Saudi Arabia, taken from a public dataset of countries&#39; historical global average land temperatures. Performance metrics such as Mean Absolute Error (MAE), Mean Relative Error (MRE), Root Mean Squared Error (RMSE), and the coefficient of determination (R-squared) are employed to measure the accuracy and reliability of each model. Experimental results show the models&#39; ability to capture short-term fluctuations and long-term trends in temperature patterns. The findings contribute to the advancement of climate modeling methodologies and offer a basis for selecting a suitable model in similar environmental contexts.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_72-An_Effective_Forecasting_Approach_of_Temperature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NTDA: The Mitigation of Denial of Service (DoS) Cyberattack Based on Network Traffic Detection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150370</link>
        <id>10.14569/IJACSA.2024.0150370</id>
        <doi>10.14569/IJACSA.2024.0150370</doi>
        <lastModDate>2024-03-30T14:57:42.1430000+00:00</lastModDate>
        
        <creator>Muhannad Tahboush</creator>
        
        <creator>Adel Hamdan</creator>
        
        <creator>Firas Alzobi</creator>
        
        <creator>Moath Husni</creator>
        
        <creator>Mohammad Adawy</creator>
        
        <subject>Network security; DoS attack; cyberattack; network traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Security is one of the important aspects used to protect data availability from being compromised. The denial of service (DoS) attack is a common type of cyberattack and has become a serious security threat to information systems and current computer networks. DoS consists of explicit attempts to consume and disrupt a victim&#39;s resources and limit access to information services by flooding a target system with a high volume of traffic, thereby denying legitimate users access to those resources. Several solutions have been developed to overcome the DoS attack but still suffer from limitations, such as requiring additional hardware, failing to provide a unified solution, and incurring high detection delay. Therefore, the network traffic detection approach (NTDA) is proposed to detect the DoS attack more effectively based on two scenarios: the first relies on high network traffic measurements and mean deviation, and the second on the sender&#39;s transmission rate per second (TPS). The proposed NTDA algorithm was simulated using MATLAB R2020a. The performance metrics taken into consideration are false negative rate, accuracy, detection rate, and true positive rate. The simulation results show that the proposed NTDA algorithm outperformed other well-known algorithms in DoS detection.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_70-NTDA_The_Mitigation_of_Denial_of_Service_DoS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Word2vec-based Latent Semantic Indexing (Word2Vec-LSI) for Contextual Analysis in Job-Matching Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150371</link>
        <id>10.14569/IJACSA.2024.0150371</id>
        <doi>10.14569/IJACSA.2024.0150371</doi>
        <lastModDate>2024-03-30T14:57:42.1430000+00:00</lastModDate>
        
        <creator>Sukri Sukri</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Ezak Fadzrin</creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <creator>Liza Trisnawati</creator>
        
        <subject>Contextual; LSI; job-matching; text-base; word2vec</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Job-matching applications have become a technology that supports decisions about offering and seeking employment. Contextual analysis of documents or data from job matching is needed to make such decisions. Some existing studies on job-matching applications use the Latent Semantic Indexing (LSI) method, which is based on word-to-word comparisons in text. LSI has the advantage of contextual analysis and can analyze data of more than 10,000 words. However, the conventional LSI method is limited in contextual analysis because it matches identical words that may carry different meanings. Therefore, this paper proposes a new technique called word2vec-based Latent Semantic Indexing (Word2vec-LSI) for contextual analysis, built on Gensim as a multi-language word library, with WordNet and stopword removal as basic text modeling. We then used Word2vec-LSI to perform contextual analysis on the Irish (IE), Swedish (SE), and United Kingdom (UK) entries in the dataset (Jobs on CareerBuilder UK). Conventional LSI achieves an accuracy of 79%, recall of 79%, precision of 62%, and an F1-score of 70%, with a similarity level of up to 50%. Word2vec-LSI increases accuracy, recall, and precision, achieves an F1-score of 84% in contextual analysis, and raises the similarity level to up to 95%. Experiments confirm the usefulness of Word2vec-LSI in increasing accuracy for contextual analysis applicable to natural language text mining.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_71-Word2vec_based_Latent_Semantic_Indexing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Automatic Code Generation from High Fidelity Graphical User Interface Mockups using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150369</link>
        <id>10.14569/IJACSA.2024.0150369</id>
        <doi>10.14569/IJACSA.2024.0150369</doi>
        <lastModDate>2024-03-30T14:57:42.1270000+00:00</lastModDate>
        
        <creator>Michel Samir</creator>
        
        <creator>Ahmed Elsayed</creator>
        
        <creator>Mohamed I. Marie</creator>
        
        <subject>Code generation; graphical user interfaces; deep learning; computer vision; mockups</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The graphical user interface (GUI) is the most prevalent type of user interface (UI) due to its visual nature, which allows direct manipulation of and interaction with the software. Mockup-based design is a frequently used workflow for constructing GUIs. In this workflow, the UI design process typically progresses through multiple steps, culminating in the creation of a high-fidelity mockup and its subsequent implementation in code. The design process involves repeating these steps because of ongoing changes in requirements, which can make the process tedious and necessitate modifications to the GUI code. Additionally, implementing and converting a design into GUI code is itself a laborious and time-consuming task that can prevent developers from dedicating the bulk of their time to implementing the software&#39;s functionality and logic, making it a costly endeavor. Automating code generation from GUI design images can mitigate these issues and allow more time to be allocated to building the application&#39;s functionality. In this paper, deep learning object detectors are employed to detect the predominant UI elements and their spatial arrangement in a high-fidelity UI mockup image. This approach generates an intermediate representation, including the layout hierarchy of the user interface, leading to the automation of front-end code generation for the mockup. The proposed approach demonstrates its effectiveness through experimental results, achieving a recognition mean average precision (mAP) of 91.37% for atomic elements and 87.40% for container elements in the mockup image. Additionally, similarity metrics are employed to assess the visual resemblance between the generated mockups and the original ones.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_69-A_Model_for_Automatic_Code_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Deep Belief Networks Based Categorization of Type 2 Diabetes using Tabu Search Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150368</link>
        <id>10.14569/IJACSA.2024.0150368</id>
        <doi>10.14569/IJACSA.2024.0150368</doi>
        <lastModDate>2024-03-30T14:57:42.1100000+00:00</lastModDate>
        
        <creator>Smita Panigrahy</creator>
        
        <creator>Sachikanta Dash</creator>
        
        <creator>Sasmita Padhy</creator>
        
        <subject>Deep belief network; Tabu search; diabetes mellitus; hyper-parameters; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Diabetes mellitus has the potential to result in numerous complications. Based on increasing morbidity rates in recent years, it is projected that the global diabetic population will surpass 642 million by 2040, meaning that approximately one in every ten individuals will have diabetes. This alarming statistic necessitates urgent focus from both academia and industry to foster innovation and advancement in diabetes prediction, with the aim of preserving patients&#39; lives. Deep learning (DL) has been employed to forecast a multitude of ailments as a result of its swift advancement. Nevertheless, DL approaches continue to face challenges in achieving optimal prediction performance due to the selection of hyper-parameters and tuning of parameters. Hence, the careful choice of hyper-parameters plays a crucial role in enhancing classification performance. This paper introduces TSO-DBN, a Deep Belief Network (DBN) optimized with Tabu Search Optimization (TSO), an approach that has demonstrated exceptional performance in several medical fields. The TSO algorithm is used to select hyper-parameters and optimize parameters. During the experiments, two problems were tackled in order to improve the findings. The TSO-DBN model exhibited exceptional performance, surpassing other models with an accuracy of 96.23%, an F1-score of 0.8749, and a Matthews Correlation Coefficient (MCC) of 0.8863.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_68-Optimized_Deep_Belief_Networks_Based_Categorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Model for Prediction of Cardiovascular Disease Using Heart Sound</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150367</link>
        <id>10.14569/IJACSA.2024.0150367</id>
        <doi>10.14569/IJACSA.2024.0150367</doi>
        <lastModDate>2024-03-30T14:57:42.0930000+00:00</lastModDate>
        
        <creator>Rohit Ravi</creator>
        
        <creator>P. Madhavan</creator>
        
        <subject>Cardiovascular disease; prediction; LSTM; MFCC; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cardiovascular disease is increasingly prevalent among today&#39;s youth, and knowing the condition of the heart is essential to addressing this disease appropriately. An electronic stethoscope is used in the cardiac auscultation technique to listen to and analyze heart sounds, and several pathologic cardiac diseases can be detected by auscultation. Unlike heart murmurs, heart sounds are discrete, brief auditory phenomena that usually originate from a single source. This article proposes a deep-learning model for predicting cardiovascular disease. The combined deep learning model uses MFCC for feature extraction and LSTM for prediction of cardiovascular disease. The model achieved an accuracy of 94.3%. The sound dataset used in this work is retrieved from the UC Irvine Machine Learning Repository. The main focus of this research is to create an automated system that can assist doctors in identifying normal and abnormal heart sounds.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_67-A_Deep_Learning_Model_for_Prediction_of_Cardiovascular_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NovSRC: A Novelty-Oriented Scientific Collaborators Recommendation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150366</link>
        <id>10.14569/IJACSA.2024.0150366</id>
        <doi>10.14569/IJACSA.2024.0150366</doi>
        <lastModDate>2024-03-30T14:57:42.0800000+00:00</lastModDate>
        
        <creator>Xiuxiu Li</creator>
        
        <creator>Mingyang Wang</creator>
        
        <creator>Chaoran Wang</creator>
        
        <creator>Yujia Fu</creator>
        
        <creator>Xianjie Wang</creator>
        
        <subject>Scientific collaborator recommendation; novelty; heterogeneous academic collaboration network; network representation learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Collaborator recommendation is a crucial topic in research management. This paper proposes a Novelty-Oriented Scientific Research Collaborator recommendation model (NovSRC). By recommending collaborators under the guidance of novelty indicators, NovSRC aims to broaden scholars&#39; research perspectives and facilitate the progress of research innovation. NovSRC utilizes heterogeneous academic networks composed of different academic entities and their relationships to learn vector representations of scholars and quantify their novelty metrics. A weighted academic collaboration network was constructed by measuring the novelty collaboration strength (NCS) among scholars under the novelty index, and based on this network, the final vector representation of scholars under the guidance of novelty characteristics was learned. By calculating the similarity between scholar vectors, NovSRC generates a Top-N recommendation list with a focus on novelty. The experimental results indicate that NovSRC achieved the best recommendation performance. Compared with the baseline models, the recommendation precision of NovSRC has improved by 6.9%, the F1 value has increased by 17.3%, and the novelty collaboration strength among scholars has increased by 3.3%. The analysis of the recommended list shows that compared to the target scholars, scholars recommended by the NovSRC model exhibit a wider distribution of research interests, which confirms that novelty has become a key benchmark factor for scholars seeking collaborators.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_66-NovSRC_A_Novelty_Oriented_Scientific_Collaborators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality and Augmented Reality in Artistic Expression: A Comprehensive Study of Innovative Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150365</link>
        <id>10.14569/IJACSA.2024.0150365</id>
        <doi>10.14569/IJACSA.2024.0150365</doi>
        <lastModDate>2024-03-30T14:57:42.0800000+00:00</lastModDate>
        
        <creator>Fan Wang</creator>
        
        <creator>Zonghai Zhang</creator>
        
        <creator>Liangyi Li</creator>
        
        <creator>Siyu Long</creator>
        
        <subject>Virtual reality; augmented reality; artistic expression; emerging technologies; immersive experiences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Over the last decade, Virtual Reality (VR) and Augmented Reality (AR) have gained popularity across various industries, particularly the arts, thanks to technological advances and inexpensive hardware and software availability. These technologies have redefined the boundaries of creativity and immersive experiences in artistic expression. This paper explores the dynamic interface between AR, VR, and the diverse Information Technology (IT) landscape. In this context, AR augments the physical world with digital overlays, while VR places users in fully simulated environments. This paper discusses these technologies in detail, including their basic concepts and hardware and software components. This survey examines how AR and VR can positively impact artistic fields such as virtual art galleries, augmented public installations, and innovative theatrical performances. We discuss limitations in hardware, software development, user experience, and ethical considerations. Further, we emphasize collaboration possibilities, accessibility, and inclusivity to probe AR and VR&#39;s profound impact on artistic creativity. The paper illustrates the transformative power of these technologies through case studies and noteworthy projects. Finally, future trends are outlined, highlighting advancements, emerging artistic forms, and social and cultural implications.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_65-Virtual_Reality_and_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhance Telecommunication Security Through the Integration of Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150364</link>
        <id>10.14569/IJACSA.2024.0150364</id>
        <doi>10.14569/IJACSA.2024.0150364</doi>
        <lastModDate>2024-03-30T14:57:42.0630000+00:00</lastModDate>
        
        <creator>Agus Tedyyana</creator>
        
        <creator>Adi Affandi Ahmad</creator>
        
        <creator>Mohd Rushdi Idrus</creator>
        
        <creator>Ahmad Hanis Mohd Shabli</creator>
        
        <creator>Mohamad Amir Abu Seman</creator>
        
        <creator>Osman Ghazali</creator>
        
        <creator>Jaroji</creator>
        
        <creator>Abd Hadi Abd Razak</creator>
        
        <subject>Call security system; artificial intelligence; support vector machine; data analysis; fraud detection system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research investigates the escalating issue of telephone-based fraud in Indonesia, a consequence of enhanced connectivity and technological advancements. As the telecommunications sector expands, it faces increased threats from sophisticated criminal activities, notably voice call fraud, which leads to significant financial losses and diminishes trust in digital systems. This study presents a novel security system that leverages the capabilities of Support Vector Machines (SVM) for the advanced classification of complex patterns inherent in fraudulent activities. By integrating SVM algorithms, this system aims to effectively process and analyze large volumes of data to identify and prevent fraudulent acts. The utilization of SVM in our proposed framework represents a significant strategy to combat the adaptive and evolving tactics of cybercriminals, thereby bolstering the resilience of telecommunications infrastructure. Upon further refinement, the system exhibited a substantial improvement in identifying fraudulent activities, with accuracy rates increasing from 81% to 86%. This enhancement underscores the system&#39;s efficacy in real-world scenarios. Our research underscores the critical need to marry technological innovations with ethical and privacy considerations, highlighting the role of public awareness and education in augmenting security measures. The development of this SVM-based security system constitutes a pivotal step towards reinforcing Indonesia&#39;s telecommunications infrastructure, contributing to the national objective of securing the digital economy and fostering a robust digital ecosystem. By addressing current and future cyber threats, this approach exemplifies Indonesia&#39;s commitment to leveraging technology for societal welfare, ensuring a secure and prosperous digital future for its citizens.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_64-Enhance_Telecommunication_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning Approaches for Predicting and Diagnosing Major Depressive Disorder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150363</link>
        <id>10.14569/IJACSA.2024.0150363</id>
        <doi>10.14569/IJACSA.2024.0150363</doi>
        <lastModDate>2024-03-30T14:57:42.0470000+00:00</lastModDate>
        
        <creator>N. Balakrishna</creator>
        
        <creator>M. B. Mukesh Krishnan</creator>
        
        <creator>D. Ganesh</creator>
        
        <subject>Major Depressive Disorder (MDD); hybrid machine learning; CatBoost; random forest; XGBoost; XGB random forest; SVM; logistic regression; EEG data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Major Depressive Disorder (MDD) is common and debilitating, requiring accurate prediction and diagnosis. This study tests hybrid machine learning methods for MDD prediction and diagnosis using EEG data alongside clinical and demographic information; EEG data reveals the electrical activity of the brain and can identify MDD patterns and traits. Employing algorithms such as CatBoost, Random Forest, XGBoost, XGB Random Forest, SVM with a linear kernel, and logistic regression with Elasticnet regularization, the study found that CatBoost achieved the highest accuracy of 93.1% in MDD prediction and diagnosis, surpassing other models. Additionally, the ensemble model combining XGBoost and Random Forest showed strong performance in ROC analysis, effectively discriminating between individuals with and without MDD. These findings underscore the potential of EEG data integration and hybrid machine learning techniques in accurately identifying and classifying MDD patients, paving the way for personalized interventions and targeted treatments in depressive disorders.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_63-Hybrid_Machine_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Harris Hawks Optimization Algorithm for Resource Allocation in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150362</link>
        <id>10.14569/IJACSA.2024.0150362</id>
        <doi>10.14569/IJACSA.2024.0150362</doi>
        <lastModDate>2024-03-30T14:57:42.0330000+00:00</lastModDate>
        
        <creator>Ganghua Bai</creator>
        
        <subject>Cloud computing; virtual machine allocation; energy efficiency; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cloud computing is revolutionizing the delivery of on-demand, scalable, and customizable resources. With its flexible resource access and diverse service models, cloud computing is essential to modern computing infrastructure. In cloud environments, assigning Virtual Machines (VMs) to Physical Machines (PMs) remains a complex and challenging task critical to optimizing resource utilization and minimizing energy consumption. Given the NP-hard nature of VM allocation, solving this optimization problem requires efficient strategies, usually addressed by metaheuristic algorithms. This study introduces a novel method for allocating VMs based on the Harris Hawks Optimization (HHO) algorithm. Inspired by the hunting behavior of Harris&#39;s hawks in the natural world, HHO has exhibited the capacity to provide optimal solutions to specific problems. However, it often suffers from premature convergence to local optima, which affects solution quality. To mitigate this challenge, this study employs a tent chaotic map during the initialization phase, aiming for enhanced diversity in the initial population. The proposed method, Enhanced HHO (EHHO), shows superior performance compared to previous algorithms. The results confirm the effectiveness of the introduced tent chaotic map improvement and suggest that EHHO offers improved solution quality, higher convergence speed, and greater robustness in addressing VM allocation challenges in cloud computing deployments.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_62-Enhancing_Harris_Hawks_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Constructing and Managing Level of Detail for Non-Closed Boundary Models of Buildings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150360</link>
        <id>10.14569/IJACSA.2024.0150360</id>
        <doi>10.14569/IJACSA.2024.0150360</doi>
        <lastModDate>2024-03-30T14:57:42.0170000+00:00</lastModDate>
        
        <creator>Ahyun Lee</creator>
        
        <subject>GIS; digital twin; 3D map; level of detail</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>An urban digital twin (UDT) involves creating a virtual three-dimensional (3D) digital replica of a real-world city. Building a UDT model requires a comprehensive 3D representation of the city&#39;s terrain, buildings, and infrastructure. To effectively visualize and manage large-scale spatial data in 3D, it is essential to establish and maintain an appropriate level of detail (LoD) for the 3D model. This study proposes a method to construct and manage LoDs for VWorld building data. However, since buildings are often composed of non-closed boundary models, applying a quadric mesh-based simplification algorithm may result in the deletion of meshes containing important contour information that defines the shape of the building. To overcome this problem, this paper proposes a geometric filtering algorithm that preserves the building outline shape.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_60-A_Method_for_Constructing_and_Managing_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Optimal Allocation Method for Energy Storage in Low Voltage Distribution Power Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150361</link>
        <id>10.14569/IJACSA.2024.0150361</id>
        <doi>10.14569/IJACSA.2024.0150361</doi>
        <lastModDate>2024-03-30T14:57:42.0170000+00:00</lastModDate>
        
        <creator>Lin Zhu</creator>
        
        <creator>Xiaofang Meng</creator>
        
        <creator>Nannan Zhang</creator>
        
        <subject>Optimal allocation; voltage over-limit; distributed energy storage; low voltage distribution networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In order to promote the absorption of photovoltaic power in low-voltage distribution networks and reduce the voltage over-limit problem caused by a high proportion of distributed photovoltaics, this paper proposes a method for optimizing the allocation of distributed energy storage systems in low voltage distribution networks. Firstly, based on the node voltages of the maximum load day and the whole day, the optimal cluster number k is obtained by the elbow method, and the K-means clustering algorithm is used to partition the distribution network into zones. Secondly, a multi-objective optimization model for the optimal configuration of distributed energy storage systems in low voltage distribution networks is constructed, with objectives of improving node voltage, reducing power loss, and minimizing the comprehensive cost of energy storage investment, while considering constraints such as power balance and energy storage battery limits. After normalizing each objective function, the weight coefficient of each objective function is determined using the analytic hierarchy process. The whale optimization algorithm is used to solve the model and determine the best installation locations and capacities of distributed energy storage. Finally, taking an actual area as an example, the effectiveness of the proposed model in mitigating voltage over-limit at low voltage distribution network nodes is verified.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_61-The_Optimal_Allocation_Method_for_Energy_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Emotion Recognition in Multimodal Environments with Transformer: Arabic and English Audio Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150359</link>
        <id>10.14569/IJACSA.2024.0150359</id>
        <doi>10.14569/IJACSA.2024.0150359</doi>
        <lastModDate>2024-03-30T14:57:42.0000000+00:00</lastModDate>
        
        <creator>Esraa A. Mohamed</creator>
        
        <creator>Abdelrahim Koura</creator>
        
        <creator>Mohammed Kayed</creator>
        
        <subject>Speech emotion recognition; transformer encoder; fine-tuning; wav2vec; multimodal emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Speech Emotion Recognition (SER) is a fast-developing area of study with a primary goal of automatically identifying and analyzing the emotional states expressed in speech. Emotions are crucial in human communication as they impact the effectiveness and meaning of linguistic expressions. SER aims to create computational approaches and models to detect and interpret emotions from speech signals. One of the primary applications of SER is evident in the field of Human-Computer Interaction (HCI), where it can be used to develop interactive systems that adapt to the user&#39;s emotional state based on their voice. This paper investigates the use of speech data for speech emotion recognition. Additionally, we applied a transformation process to convert the speech data into 2D images. Subsequently, we compared the outcomes of this transformation with the original speech data, aligning the comparison with a dataset containing labeled speech samples in both Arabic and English. Our experiments compare three methods: a transformer-based model, a Vision Transformer (ViT) based model, and a wav2vec-based model. The transformer model is trained from scratch on two significant audio datasets: the Arabic Natural Audio Dataset (ANAD) and the Toronto Emotional Speech Set (TESS), while the vision transformer is evaluated alongside wav2vec as part of transfer learning. The results are impressive: the transformer model achieved remarkable accuracies of 94% and 99% on the ANAD and TESS datasets, respectively. Additionally, ViT demonstrates strong capabilities, achieving accuracies of 88% and 98% on the ANAD and TESS datasets, respectively. To assess the transfer learning potential, we also explore the wav2vec model with fine-tuning. However, the findings suggest limited success, achieving only a 56% accuracy rate on the ANAD dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_59-Speech_Emotion_Recognition_in_Multimodal_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Data Warehouses Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150358</link>
        <id>10.14569/IJACSA.2024.0150358</id>
        <doi>10.14569/IJACSA.2024.0150358</doi>
        <lastModDate>2024-03-30T14:57:41.9870000+00:00</lastModDate>
        
        <creator>Muhanad A. Alkhubouli</creator>
        
        <creator>Hany M. Lala</creator>
        
        <creator>AbdAllah A. AlHabshy</creator>
        
        <creator>Kamal A. ElDahshan</creator>
        
        <subject>Data warehouse; data security; encryption; security issues; data integrity; privacy; confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Data Warehouses (DWs) are essential for enterprises, containing valuable business information and thus becoming prime targets for internal and external attacks. Data warehouses are crucial assets for organizations, serving critical purposes in business and decision-making. They consolidate data from diverse sources, making it easier for organizations to analyze and derive insights from their data. However, as data is moved from one source to another, security issues arise. Unfortunately, current data security solutions often fail in DW environments due to resource-intensive processes, increased query response times, and frequent false positive alarms. The structure of the data warehouse is designed to facilitate efficient analysis. Developing and deploying a data warehouse is a difficult process, and its security is an even greater concern. This study provides a comprehensive review of existing data security methods, emphasizing their implementation challenges in DW environments. Our analysis highlights the limitations of these solutions, particularly in meeting scalability and performance needs. We conclude that current methods are impractical for DW systems and advocate for a comprehensive solution tailored to their specific requirements. Our findings underscore the ongoing significance of data warehouse security in industrial projects, necessitating further research to address remaining challenges and unanswered questions.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_58-Enhancing_Data_Warehouses_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Machine Learning-based Model for Corporate Loan Default Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150357</link>
        <id>10.14569/IJACSA.2024.0150357</id>
        <doi>10.14569/IJACSA.2024.0150357</doi>
        <lastModDate>2024-03-30T14:57:41.9870000+00:00</lastModDate>
        
        <creator>Imane RHZIOUAL BERRADA</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <creator>Omar BACHIR ALAMI</creator>
        
        <subject>Loan default; prediction; artificial intelligence; data analysis; machine learning; companies; corporate; real estate; bank</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>As the core business of the banking system is to lend money and then recover it, loan default is one of the most crucial issues for commercial banks. By using data analysis and artificial intelligence to extract valuable information from historical data, banks would be able to classify their customers and predict the probability of credit repayment instead of relying on traditional methods, thereby lowering their losses. As most existing research focuses on individual loans, the novelty of the present paper is to treat corporate loans. Its main objective is to propose a model that uses selected machine learning algorithms to classify companies into two classes in order to predict loan defaulters. This paper delves into the Corporate Loan Default Prediction Model (CLDPM), which is designed to forecast loan defaults in corporations. The model is grounded in the CRISP-DM process, commencing with comprehending corporate requirements and implementing classification techniques. The data acquisition and preparation phase is critical in testing the selected algorithms, which include Logistic Regression, Decision Tree, Support Vector Machine, Random Forest, XGBoost, and AdaBoost. The model&#39;s efficacy is assessed using various metrics, namely Accuracy, Precision, Recall, F1 score, and AUC. Subsequently, the model is evaluated on an actual dataset of loans to Moroccan real estate firms. The findings reveal that the Random Forest and XGBoost algorithms outperformed the others, with every metric surpassing 90%. This was accomplished by utilizing SMOTE as an oversampling method, given the dataset&#39;s imbalance. Furthermore, when concentrating on financial statements and selecting the five most significant financial ratios and the company&#39;s age, Random Forest was adept at predicting defaulters with good results: accuracy of 90%, precision of 75%, recall of 50%, F1 score of 60%, and AUC of 77%.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_57-Towards_a_Machine_Learning_based_Model_for_Corporate_Loan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Convolutional Neural Networks Fusion with Support Vector Machines and K-Nearest Neighbors for Precise Crop Leaf Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150356</link>
        <id>10.14569/IJACSA.2024.0150356</id>
        <doi>10.14569/IJACSA.2024.0150356</doi>
        <lastModDate>2024-03-30T14:57:41.9700000+00:00</lastModDate>
        
        <creator>Sunil Kumar H R</creator>
        
        <creator>Poornima K M</creator>
        
        <subject>Deep Convolutional Neural Network (DCNN); multiclass Support Vector Machine (SVM); K-Nearest Neighbor (KNN); ensemble; features; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Maize and paddy are pivotal crops in India, playing a vital role in ensuring food security. Timely detection of diseases and the implementation of remedial measures are crucial for securing optimal crop yield and profitability for farmers. This study utilizes a dataset encompassing images of diseased maize and paddy leaves, addressing various conditions such as corn blight, common rust, gray leaf spot, brown spot, hispa, and leaf blast, alongside images of healthy leaves. The dataset used here is a combination of an online repository as well as manually collected samples from neighborhood farmlands at different growth stages. An accessible, quick, robust, and cost-effective machine vision approach to determining crop leaf diseases is the need of the hour. In the proposed work, using a transfer-learning approach, several Deep Convolutional Neural Networks (DCNNs) and hybrid DCNNs have been developed, trained, validated, and tested. To achieve better accuracy, DCNNs are integrated with machine learning classifiers such as multiclass Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) algorithms. The research is carried out in four stages. In the first stage, DCNNs are used as classifiers. Subsequently, these same DCNNs are repurposed as feature extractors, and the extracted features are input into classifiers such as multiclass SVM and KNN. In the third stage, an ensemble of DCNNs is formed from the networks exhibiting excellent performance during the first stage. In the fourth stage, features extracted from these ensemble networks are fed into the same multiclass SVM and KNN classifiers to assess accuracy. A total of 1600 images for training and 400 images for testing are used. For the maize dataset, 100% accuracy is achieved with the AlexNet plus VGG-16 hybrid network and multiclass SVM at a 75:25 split ratio, and for the paddy dataset, 99.51% accuracy is achieved with the ResNet-50 plus Darknet-53 hybrid network and multiclass SVM at a 75:25 split ratio. In the proposed study, a comprehensive analysis is conducted, exploring features from various layers and adjusting data split ratios.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_56-Deep_Convolutional_Neural_Networks_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Developing an Ontology: Learned from Business Model Ontology Design and Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150355</link>
        <id>10.14569/IJACSA.2024.0150355</id>
        <doi>10.14569/IJACSA.2024.0150355</doi>
        <lastModDate>2024-03-30T14:57:41.9530000+00:00</lastModDate>
        
        <creator>Ahadi Haji Mohd Nasir</creator>
        
        <creator>Mohd Firdaus Sulaiman</creator>
        
        <creator>Liew Kok Leong</creator>
        
        <creator>Ely Salwana</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <subject>Ontology; Ontology Development Method (ODM); Business Model Ontology (BMO); Unified Ontology Approach (UOA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Ontology, serving as an explicit specification of conceptualization, has found widespread applications across various fields. Business Model Ontology (BMO) stands out as a prominent ontology, especially in the domains of business and entrepreneurship. This study employs the narrative literature review method to delve into the Ontology Development Method (ODM). By identifying commonalities among various ODMs and drawing insights from the BMO, the study proposes a Unified Ontology Approach (UOA) as an alternative ODM. The UOA is derived by combining the common characteristics and key steps of various ODMs, aiming to streamline the ontology development process and enhance its effectiveness. Through an extensive analysis of existing methodologies, this research contributes to the field by offering a consolidated perspective on ODMs. The study findings shed light on the strengths and weaknesses of different approaches, facilitating informed decision-making for ontology developers. Furthermore, the discussion explores the implications of adopting the UOA in practical applications, emphasizing its potential to improve ontology quality, interoperability, and adaptability across diverse domains. In conclusion, this paper advocates for the adoption of the UOA as a comprehensive and flexible framework for ontology development. By synthesizing the strengths of existing ODMs and insights from the BMO, the UOA offers a promising avenue for advancing the field of ontology development and driving progress in various domains and applications.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_55-An_Approach_for_Developing_an_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis for Secret Message Sharing using Different Levels of Encoding Over QSDC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150353</link>
        <id>10.14569/IJACSA.2024.0150353</id>
        <doi>10.14569/IJACSA.2024.0150353</doi>
        <lastModDate>2024-03-30T14:57:41.9400000+00:00</lastModDate>
        
        <creator>Nur Shahirah Binti Azahari</creator>
        
        <creator>Nur Ziadah Binti Harun</creator>
        
        <creator>Chai Wen Chuah</creator>
        
        <creator>Rosmamalmi Mat Nawi</creator>
        
        <creator>Zuriati Binti Ahmad Zukarnain</creator>
        
        <creator>Nor Iryani Binti Yahya</creator>
        
        <subject>Multiphoton approach; multi-party; level of encoding; scalability; error probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Quantum secure direct communication (QSDC), a branch of quantum cryptography, was recently proposed to secure data transfers from sender to receiver without relying on computational complexity. Despite the benefits of the multiphoton approach, sending secret messages among several parties over a quantum channel remains a challenge because current multiphoton schemes consider only two parties. When more parties are included, a scalability problem becomes apparent. Therefore, a scalable multiphoton approach is needed to allow secure sharing among the legitimate parties. Manipulating the level of encoding provides new opportunities for more efficient quantum information processing and message sharing. This research proposes a strategy that uses four-level encoding with the multiphoton approach to share secret messages among multiple parties. The analysis shows that a higher level of encoding can shorten the time taken for photon transmission between parties and gives an attacker a lower probability of launching a successful attack; however, communication is affected by high sensitivity to noise.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_53-Performance_Analysis_for_Secret_Message_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handling Transactional Data Features via Associative Rule Mining for Mobile Online Shopping Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150354</link>
        <id>10.14569/IJACSA.2024.0150354</id>
        <doi>10.14569/IJACSA.2024.0150354</doi>
        <lastModDate>2024-03-30T14:57:41.9400000+00:00</lastModDate>
        
        <creator>Maureen Ifeanyi Akazue</creator>
        
        <creator>Sebastina Nkechi Okofu</creator>
        
        <creator>Arnold Adimabua Ojugo</creator>
        
        <creator>Patrick Ogholuwarami Ejeh</creator>
        
        <creator>Christopher Chukwufunaya Odiakaose</creator>
        
        <creator>Frances Uche Emordi</creator>
        
        <creator>Rita Erhovwo Ako</creator>
        
        <creator>Victor Ochuko Geteloma</creator>
        
        <subject>Association rule mining; online shopping platforms; feature evolution; concept drift; concept evolution; shelf placement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Transactional data processing is often a reflection of a consumer&#39;s buying behavior. The relational records, if properly mined, help business managers and owners improve their sales volume. Transaction datasets are often riddled with inherent challenges in their manipulation, storage, and handling due to their infinite length, evolution of product features, evolution in product concept, and, oftentimes, a complete drift away from product features. Previous studies&#39; inability to resolve many of these challenges, alongside the assumption that transactional datasets are stationary when association rules are used, has been found to hinder their performance, as it deprives the decision support system of the flexibility and robust adaptiveness needed to manage the dynamics of concept drift that characterize transaction data. Our study proposes an associative rule mining model using four consumer theories with RapidMiner and Hadoop Tableau analytic tools to handle and manage such large data. The dataset was retrieved from Roban Store Asaba and consists of 556,000 transactional records. The model is a 6-layered framework and yields its best result with a 0.1 value for both the confidence and support levels at 94% accuracy, 87% sensitivity, 32% specificity, and a 20-second convergence and processing time.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_54-Handling_Transactional_Data_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Landscape: Analysis of Model Results on Various Convolutional Neural Network Architectures for iRESPOND System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150352</link>
        <id>10.14569/IJACSA.2024.0150352</id>
        <doi>10.14569/IJACSA.2024.0150352</doi>
        <lastModDate>2024-03-30T14:57:41.9230000+00:00</lastModDate>
        
        <creator>Freddie Prianes</creator>
        
        <creator>Kaela Marie Fortuno</creator>
        
        <creator>Rosel Onesa</creator>
        
        <creator>Brenda Benosa</creator>
        
        <creator>Thelma Palaoag</creator>
        
        <creator>Nancy Flores</creator>
        
        <subject>Artificial intelligence; image classification; emergency response; model training; optimizers; learning rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the era of rapid technological advancement, the integration of cutting-edge technologies plays a pivotal role in enhancing the efficiency and responsiveness of critical systems. iRESPOND, a real-time Geospatial Information and Alert System, stands at the forefront of such innovations, facilitating timely and informed decision-making in dynamic environments. As the demand for accurate and swift responses grows, the role of CNN models in iRESPOND becomes significant. The study focuses on seven prominent CNN architectures, namely EfficientNet (B0, B7, V2B0, and V2L), InceptionV3, ResNet50, and VGG19, integrated with different optimizers and learning rates. The methodology employed a strategic implementation of looping during the training phase. This iterative approach is designed to systematically re-train the CNN models, with an emphasis on identifying the most suitable architecture among the seven considered variants. The primary objective is to discern the optimal architecture and fine-tune critical parameters, explicitly targeting the optimizer and learning rate values. The study also examines the differential impact of each model on the system&#39;s ability to discern patterns and anomalies in the image datasets. ResNet50 exhibited robust performance, showcasing suitability for real-time processing in dynamic environments with a better accuracy result of 95.02%. However, the EfficientNetV2B0 model, characterized by its advancements in network scaling, presented promising results with a lower loss of 0.187. Overall, the findings not only contribute valuable insights into the optimal selection of architectures for iRESPOND but also highlight the importance of fine-tuning hyperparameters through an iterative training approach, paving the way for the continued enhancement of iRESPOND as an adaptive system.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_52-Exploring_the_Landscape_Analysis_of_Model_Results.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Improved Scale Invariant Feature Transformation Algorithm in Facial Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150350</link>
        <id>10.14569/IJACSA.2024.0150350</id>
        <doi>10.14569/IJACSA.2024.0150350</doi>
        <lastModDate>2024-03-30T14:57:41.9070000+00:00</lastModDate>
        
        <creator>Yingzi Cong</creator>
        
        <subject>Haar wavelet features; scale invariant feature transformation algorithm; deep belief network; facial recognition; performance improvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Currently, face recognition models suffer from insufficient accuracy, stability, and computational efficiency. To address this issue, an improved feature extraction algorithm based on Haar wavelet features and the scale invariant feature transformation algorithm is proposed. The study also combines this algorithm with deep belief networks to construct an improved facial recognition model. The effectiveness of the proposed improved feature extraction algorithm was verified, and its recognition accuracy was found to be 94.2%, better than that of the comparative algorithms. In addition, an empirical analysis of the improved facial recognition model found that the recognition accuracy of the model was 0.92 and the feature matching time was 2.6 seconds, outperforming the comparative models. Based on the above results, the proposed facial recognition model significantly improves recognition accuracy and efficiency compared to traditional models. It can provide a theoretical reference for improving the universality of facial recognition applications in different fields.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_50-The_Application_of_Improved_Scale_Invariant_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Mobile Application for Identifying Strawberry Diseases with YOLOv8 Model Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150351</link>
        <id>10.14569/IJACSA.2024.0150351</id>
        <doi>10.14569/IJACSA.2024.0150351</doi>
        <lastModDate>2024-03-30T14:57:41.9070000+00:00</lastModDate>
        
        <creator>Thuy Van Tran</creator>
        
        <creator>Quang - Huy Do Ba</creator>
        
        <creator>Kim Thanh Tran</creator>
        
        <creator>Dang Hai Nguyen</creator>
        
        <creator>Dinh Chung Dang</creator>
        
        <creator>Van - Luc Dinh</creator>
        
        <subject>Computer vision; YOLOv8; strawberry diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Progress in computer vision has led to the development of potential solutions, becoming a versatile technological key to addressing challenging issues in agriculture. These solutions aim to enhance the quality of agricultural products, boost the economy&#39;s competitiveness, and reduce labor and costs. Specifically, the detection of diseases in various fruits before harvest, to avoid reducing product quality and quantity, still relies on the experience of long-time farmers. This leads to difficulties in controlling disease sources over large cultivated areas, resulting in uneven quality control after harvest, which may lead to low prices or failure to meet the export requirements of developed markets. Therefore, modern technology is now being applied at this stage to gradually replace manual inspection. In this paper, we propose a mobile application to detect four common diseases in strawberry plants by using image processing technology combined with an artificial intelligence network for identification based on size, color, and shape defects on the surface of the fruit. The proposed model consists of different versions of YOLOv8 with RGB input to accurately detect diseases in strawberries and provide assessments. Among these, the YOLOv8n model uses the fewest parameters, only 11M, yet achieves higher accuracy than some other YOLOv8 models, with an average accuracy of approximately 87.9%. Therefore, the proposed method emerges as one of the possible solutions for strawberry disease detection.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_51-Designing_a_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bloom Cognitive Hierarchical Classification Model for Chinese Exercises Based on Improved Chinese-RoBERTa-wwm and BiLSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150349</link>
        <id>10.14569/IJACSA.2024.0150349</id>
        <doi>10.14569/IJACSA.2024.0150349</doi>
        <lastModDate>2024-03-30T14:57:41.8930000+00:00</lastModDate>
        
        <creator>Zhaoyu Shou</creator>
        
        <creator>Yipeng Liu</creator>
        
        <creator>Dongxu Li</creator>
        
        <creator>Jianwen Mo</creator>
        
        <creator>Huibing Zhang</creator>
        
        <subject>Chinese Text Classification; Chinese-RoBERTa-wwm; BiLSTM; Bloom Cognitive Hierarchy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Assessing students&#39; cognitive ability is one of the most important prerequisites for improving learning effectiveness, and the process involves aspects such as exercises, students&#39; answers, and teaching cases. In order to effectively assess students&#39; cognitive ability, this paper proposes a Chinese text classification model that automatically and accurately classifies exercises according to Bloom&#39;s cognitive hierarchy. Firstly, FreeLB perturbation is added to the input embedding to enhance the generalization performance of the model, and Chinese-RoBERTa-wwm is used to obtain the pooler information and sequence information of the text; secondly, LSTM is used to extract the deep associative features in the sequence information, which are combined with the pooler information to construct semantically informative word vectors; lastly, the word vectors are fed into BiLSTM to learn bi-directional sequence dependency information and obtain more comprehensive semantic features, achieving accurate classification of the exercises. Experiments show that the proposed model significantly outperforms the baseline models on three Chinese public datasets, achieving accuracies of 94.8%, 94.09%, and 94.71%, respectively, and also effectively performs the Bloom cognitive hierarchy classification task on two smaller Chinese exercise datasets.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_49-A_Bloom_Cognitive_Hierarchical_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Design of a Robotic Arm Prototype with Complex Movements Based on Surface EMG Signals to Assist Disabilities in Vietnam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150348</link>
        <id>10.14569/IJACSA.2024.0150348</id>
        <doi>10.14569/IJACSA.2024.0150348</doi>
        <lastModDate>2024-03-30T14:57:41.8770000+00:00</lastModDate>
        
        <creator>Ngoc–Khoat Nguyen</creator>
        
        <creator>Thi–Mai–Phuong Dao</creator>
        
        <creator>Van–Kien Nguyen</creator>
        
        <creator>Van–Hung Pham</creator>
        
        <creator>Van–Minh Pham</creator>
        
        <creator>Van–Nam Pham</creator>
        
        <subject>Disabilities; sEMG; signal processing; human arm; robotic arm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent years, surface electromyography (sEMG) signals have been recognized as a type of signal with significant practical implications not only in medicine but also in science and engineering for functional rehabilitation. This study focuses on the application of surface electromyography signals in controlling a robotic arm to assist disabled individuals in Vietnam. The raw sEMG signals, collected using appropriate sensors, are processed using an effective method that includes several steps, such as A/D conversion and the use of band-pass and low-pass filters combined with an envelope detector. To demonstrate the effectiveness of the processed sEMG signals, the study designed a robotic arm model with complex finger movements similar to those of a human. The experimental results show that the robotic arm operates effectively, with fast response times, meeting the support needs of disabled individuals.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_48-Novel_Design_of_a_Robotic_Arm_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defining Integrated Agriculture Information System Non-Functional Requirement and Re-engineering the Metadata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150347</link>
        <id>10.14569/IJACSA.2024.0150347</id>
        <doi>10.14569/IJACSA.2024.0150347</doi>
        <lastModDate>2024-03-30T14:57:41.8600000+00:00</lastModDate>
        
        <creator>Argo Wibowo</creator>
        
        <creator>Antonius Rachmat Chrismanto</creator>
        
        <creator>Gabriel Indra Widi Tamtama</creator>
        
        <creator>Rosa Delima</creator>
        
        <subject>Information system; non-functional requirements; BPMN; metadata; feature-driven development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Developing a well-functioning information system like integrated agriculture information system (IAIS) requires a list of task requirements that will be transformed into system features. Feature Driven Development (FDD) model is suitable for this situation. The requirements for building an information system are not solely based on functional needs but also non-functional requirements (NFR). Non-functional requirements also play a crucial role in system development as they affect business process management. A well-defined business process will ultimately result in robust system features. It is essential to map non-functional requirements to the business process to clearly identify the information system requirements that will become new features. Not only can NFR enrich system metadata and databases, but they also serve as the initial foundation for the system coding process, leading to the final information system output. This study creates a flow diagram mapping NFR to the business process using Business Process Management Notation (BPMN). Several identified NFR categories are then transformed into metadata and use case diagrams. The formation of this NFR mapping flow diagram is expected to facilitate information system development by visualizing system requirements in a forward and backward flow according to the sequence of processes. Feature development can be streamlined in the event of NFR changes by tracing NFR and related features.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_47-Defining_Integrated_Agriculture_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Track Music Generation Based on the AC Algorithm and Global Value Return Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150346</link>
        <id>10.14569/IJACSA.2024.0150346</id>
        <doi>10.14569/IJACSA.2024.0150346</doi>
        <lastModDate>2024-03-30T14:57:41.8600000+00:00</lastModDate>
        
        <creator>Wei Guo</creator>
        
        <subject>AC; global value; return network; track; music model; rhythm; melody</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the current field of deep learning and music information retrieval, automated music generation has become a hot research topic. This study addresses the issues of low clarity and musicality in current multi-track music generation by combining the Actor-Critic algorithm and the Global Value Return Network to create a novel multi-track music generation model. The study first utilizes the Actor-Critic algorithm to generate single-track music rhythm and melody models. Building upon this foundation, the study further optimizes the single-track models using the Global Value Return Network and proposes the multi-track music model. The results demonstrate that the harmonization accuracy of the final multi-track music generation model ranges from 0.90 to 0.98, with a maximum value of 0.98. Additionally, the audience satisfaction and expert satisfaction of the model are 0.96 and 0.97, respectively, indicating that the model has a high musical appreciation value. Overall, the multi-track music generation model designed in this study addresses the limitations of single-track music generation and produces more rhythmically diverse multi-track music.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_46-Multi_Track_Music_Generation_Based_on_the_AC_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting the Yoga Influence on Chronic Venous Insufficiency: Employing Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150345</link>
        <id>10.14569/IJACSA.2024.0150345</id>
        <doi>10.14569/IJACSA.2024.0150345</doi>
        <lastModDate>2024-03-30T14:57:41.8430000+00:00</lastModDate>
        
        <creator>Xiao Du</creator>
        
        <subject>Chronic Venous Insufficiency; yoga; classification; machine learning; Support Vector Classification; smell agent optimization; Dwarf Mongoose Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This investigation introduces a groundbreaking approach to unravel the complexities of Chronic Venous Insufficiency (CVI) by leveraging machine learning, notably Support Vector Classification (SVC), alongside optimization systems such as Dwarf Mongoose Optimization (DMO) and Smell Agent Optimization (SAO). This strategy not only aims to bolster predictive precision but also seeks to optimize personalized treatment paradigms for CVI, presenting a compelling avenue for the advancement of healthcare solutions. The study aims to predict the impact of yoga on CVI using a comprehensive dataset incorporating demographic information, baseline severity indicators, and yoga practice details. Through meticulous feature engineering, machine learning algorithms forecast outcomes such as changes in symptom severity and overall well-being improvements. This predictive model has the potential to transform personalized CVI treatment plans by offering tailored recommendations for specific yoga practices, optimizing therapeutic approaches, and guiding efficient healthcare resource allocation. Ethical considerations, patient preferences, and safety are highlighted for responsible translation into clinical settings. The integration of SVC with optimization systems presents a novel and promising approach, contributing meaningfully to personalized CVI management and providing valuable insights for current and future practices. The results obtained for VCSS-PRE and VCSS-1 unequivocally highlight the outstanding performance of the SVDM model in both prediction and categorization. The model achieved remarkable accuracy and precision values, attaining 92.9% and 93.1% for VCSS-PRE and 94.3% and 94.9% for VCSS-1.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_45-Forecasting_the_Yoga_Influence_on_Chronic_Venous_Insufficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Disaster Area Detection with Just One SAR Data Acquired on the Day After Earthquake Based on YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150344</link>
        <id>10.14569/IJACSA.2024.0150344</id>
        <doi>10.14569/IJACSA.2024.0150344</doi>
        <lastModDate>2024-03-30T14:57:41.8300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yushin Nakaoka</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>SAR; YOLOv8; Detectron2; earthquake; disaster; disaster area detection; noto peninsula earthquake</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>A method for earthquake disaster area detection using just one satellite-based SAR dataset acquired on the day after an earthquake, based on the object detection methods YOLOv8 and Detectron2, is proposed. Through experiments with several SAR datasets derived from the different SAR satellites that observed the Noto Peninsula earthquake of 1 January 2024, it is found that the proposed method works well to detect several types of damage effectively. It is also found that the proposed approach, based on “Roboflow” together with YOLOv8 and Detectron2 for annotation and object detection, is appropriate for disaster area detection. Furthermore, disaster areas can be detected even with just one SAR dataset acquired on the day after the disaster, because a trained learning model for disaster area detection is created through the experiments.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_44-Method_for_Disaster_Area_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Security Intrusion Detection and Bot Data Collection using Deep Learning in the IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150343</link>
        <id>10.14569/IJACSA.2024.0150343</id>
        <doi>10.14569/IJACSA.2024.0150343</doi>
        <lastModDate>2024-03-30T14:57:41.8300000+00:00</lastModDate>
        
        <creator>Fahad Ali Alotaibi</creator>
        
        <creator>Shailendra Mishra</creator>
        
        <subject>Internet of things; intrusion detection system; random neural networks; feed forward neural networks; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the digital age, cybersecurity is a growing concern, especially as IoT continues to grow rapidly. Cybersecurity intrusion detection systems are critical in protecting IoT environments from malicious activity. Deep learning approaches have emerged as promising intrusion detection techniques due to their ability to automatically learn complex patterns and features from large-scale data sets. In this research, we give a detailed assessment of the use of deep learning algorithms for cybersecurity intrusion detection in IoT contexts. The study discusses the challenges of securing IoT systems, such as device heterogeneity, limited computational resources, and the dynamic nature of IoT networks. To detect intrusions in IoT environments, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been used. The NF-UQ-NIDS and NF-Bot-IoT data sets are used for training and assessing deep learning-based intrusion detection systems. Our study also explores using deep learning approaches to identify botnets in IoT settings to counter the growing botnet threat. We also analyze representative bot data sets and explain their significance in understanding botnet behavior and effective defenses. The study evaluated IDS performance and traffic flow in the IoT context using various machine learning algorithms. For IoT environments, the results highlight the importance of selecting appropriate algorithms and employing effective data pre-processing techniques to improve accuracy and performance. Cyber-attack detection with the proposed system is highly accurate when compared with other algorithms for both the NF-UQ-NIDS and NF-BoT-IoT data sets.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_43-Cyber_Security_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disease-Aware Chest X-Ray Style GAN Image Generation and CatBoost Gradient Boosted Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150342</link>
        <id>10.14569/IJACSA.2024.0150342</id>
        <doi>10.14569/IJACSA.2024.0150342</doi>
        <lastModDate>2024-03-30T14:57:41.8130000+00:00</lastModDate>
        
        <creator>Andi Besse Firdausiah Mansur</creator>
        
        <subject>Artificial intelligence; StyleGAN; chest X-ray prediction; COVID19; CatBoost gradient boosted trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Artificial Intelligence has significantly advanced and is proficient in image classification. Even though the COVID-19 pandemic has ended, the virus is now considered to have entered an endemic phase. Historically, COVID-19 detection has predominantly depended on a single technology known as the polymerase chain reaction (PCR). The academic community is keen to use radiograph data to forecast COVID-19 because of its prospective advantages. The proposed methodology aims to improve dataset quality by utilizing artificially generated images produced by StyleGAN. A ratio of 59:41 was used to combine the synthetic datasets with the real ones. The StyleGAN framework is combined with VGG19 and CatBoost Gradient Boosted Trees to improve prediction accuracy. Accurate and precise measurements significantly impact the evaluation of a model&#39;s performance. The assessment resulted in 98.67% accuracy and 97.21% precision. In the future, we may enhance the diversity and quality of the collection by integrating other datasets from different sources with the Chest X-ray dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_42-Disease_Aware_Chest_X_Ray_Style_GAN_Image_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Education: Cutting-Edge Predictive Models for Student Success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150341</link>
        <id>10.14569/IJACSA.2024.0150341</id>
        <doi>10.14569/IJACSA.2024.0150341</doi>
        <lastModDate>2024-03-30T14:57:41.7970000+00:00</lastModDate>
        
        <creator>Moyan Li</creator>
        
        <creator>Suyawen</creator>
        
        <subject>Student performance; Support Vector Classification; sea horse optimization; adaptive opposition slime mould algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Student performance prediction systems are crucial for improving educational outcomes in various institutions, including universities, schools, and training centers. These systems gather data from diverse sources such as examination centers, registration departments, virtual courses, and e-learning platforms. Analyzing educational data is challenging due to its vast and varied nature, and to address this, machine learning techniques are employed. Dimensionality reduction, enabled by machine learning algorithms, simplifies complex datasets, making them more manageable for analysis. In this study, the Support Vector Classification (SVC) model is used for student performance prediction. SVC is a powerful machine-learning approach for classification tasks. To further enhance the model&#39;s efficiency and accuracy, two optimization algorithms, the Sea Horse Optimization (SHO) and the Adaptive Opposition Slime Mould Algorithm (AOSMA), are integrated. Machine learning (ML) reduces complexity through techniques like feature selection and dimensionality reduction, improving the effectiveness of student performance prediction systems and enabling data-informed decisions for educators and institutions. The combination of SVC with these innovative optimization strategies highlights the study&#39;s commitment to leveraging the latest advancements in ML and bio-inspired algorithms for more precise and robust student performance predictions, ultimately enhancing educational outcomes. The obtained outcomes reveal that the SVSH model registered the best performance in predicting and categorizing student performance, with Accuracy=92.4%, Precision=93%, Recall=92%, and F1_Score=92%. Applying the SHO and AOSMA optimizers to the SVC model improved accuracy by 2.12% and 0.89%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_41-Revolutionizing_Education_Cutting_Edge_Predictive_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental IoT System to Maintain Water Quality in Catfish Pond</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150340</link>
        <id>10.14569/IJACSA.2024.0150340</id>
        <doi>10.14569/IJACSA.2024.0150340</doi>
        <lastModDate>2024-03-30T14:57:41.7970000+00:00</lastModDate>
        
        <creator>Adani Bimasakti Wibisono</creator>
        
        <creator>Riyanto Jayadi</creator>
        
        <subject>IoT; aquaculture; catfish cultivation; monitoring; controlling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This study investigates the challenges in catfish aquaculture, mainly focusing on water quality, which is crucial for successful fish farming. This research aims to implement Internet of Things (IoT) technology with sensors connected to a microcontroller to monitor and control critical parameters such as temperature, pH, and oxygen levels in catfish ponds. Utilizing NodeMCU and specific sensors, the system provides real-time monitoring, enabling early detection of environmental changes that could impact fish health. The research findings indicate that IoT technology in catfish aquaculture can enhance fish health and growth. Real-time monitoring reduces the risk of diseases by providing an optimal environment for the fish. Additionally, automatic control using fuzzy logic, which triggers email notifications automatically, together with actuators such as water pumps and pH regulators that operate automatically based on conditions, helps maintain the stability of water quality. A comparison between conventional and IoT-based farming reveals that the IoT system can reduce catfish mortality by optimizing feed distribution and regulating pH levels. Thus, this study positively contributes to developing more efficient, sustainable, and healthy catfish aquaculture methods through the implementation of IoT technology.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_40-Experimental_IoT_System_to_Maintain_Water_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Detection of COVID-19 using Deep Learning and Multi-Agent Framework: The DLRPET Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150339</link>
        <id>10.14569/IJACSA.2024.0150339</id>
        <doi>10.14569/IJACSA.2024.0150339</doi>
        <lastModDate>2024-03-30T14:57:41.7830000+00:00</lastModDate>
        
        <creator>Rupinder Kaur Walia</creator>
        
        <creator>Harjot Kaur</creator>
        
        <subject>COVID-19; deep learning; SVM; ResNet; disease classification; biomedical applications; multi-agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The ongoing global pandemic caused by the novel coronavirus (COVID-19) has emphasized the urgent need for accurate and efficient methods of detection. Over the past few years, several methods were proposed by various researchers for detecting COVID-19, but there is still scope for improvement. Considering this, an effective and highly accurate detection model based on deep learning and multi-agent concepts is presented in this paper. Our main objective is to develop a model that can not only detect COVID-19 with high accuracy but also reduce complexity and dimensionality issues. To accomplish this objective, we applied a Deep Layer Relevance Propagation and Extra Tree (DLRPET) technique for selecting only crucial and informative features from the processed dataset. Also, a lightweight ResNet-based deep learning model is proposed for classifying the disease. The ResNet model is initialized three times, creating agents which analyze the data individually. The novel contribution of this work is that instead of passing the entire training set to the classifier, we have divided the training dataset into three subsets. Each subset is passed to a specific agent for training and making individual predictions. The final prediction in the proposed network is made by implementing a majority voting mechanism to determine whether an individual is COVID-19 positive or negative. The experimental outcomes indicated that our approach achieved an accuracy of 99.73%, which is around 2% higher than the best-performing standard KISM model. Moreover, the proposed model attained a precision of 100%, a recall of 99.73%, and an F1-score of 98.59%, showing an increase of 5% in precision, 4.73% in recall, and 4.59% in F1-score over the best-performing SVM model.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_39-Enhanced_Detection_of_COVID_19_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-based KNN Approaches for Predicting Cooling Loads in Residential Buildings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150338</link>
        <id>10.14569/IJACSA.2024.0150338</id>
        <doi>10.14569/IJACSA.2024.0150338</doi>
        <lastModDate>2024-03-30T14:57:41.7670000+00:00</lastModDate>
        
        <creator>Zhaofang Du</creator>
        
        <subject>Cooling load; K-nearest neighbor; dynamic arithmetic optimization; wild geese algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Cooling Load (CL) estimation in residential buildings is crucial for optimizing energy consumption and ensuring indoor comfort. This article presents an innovative approach that leverages Artificial Intelligence (AI) techniques, particularly K-Nearest Neighbors (KNN), in combination with advanced optimizers, including Dynamic Arithmetic Optimization (DAO) and the Wild Geese Algorithm (WGA), to enhance the accuracy of CL predictions. The proposed method harnesses the power of KNN, a machine-learning algorithm renowned for its simplicity and efficiency in regression tasks. By training on 768 samples of historical CL data and relevant building parameters, the KNN model can make precise predictions, considering factors such as Glazing Area, Glazing Area Distribution, Surface Area, Orientation, Overall Height, Wall Area, Roof Area, and Relative Compactness. Two state-of-the-art optimizers, DAO and WGA, are introduced to refine the CL estimation process further. The integration of KNN with DAO and WGA yields a robust AI-driven framework proficient in the precise estimation of CL in residential constructions. This approach not only enhances energy efficiency by optimizing cooling system operations but also contributes to sustainable building design and reduced environmental impact. Through extensive experimentation and validation, this study demonstrates the effectiveness of the proposed method, showcasing its potential to revolutionize CL estimation in residential buildings. The results indicate that the hybridization of KNN with the DAO optimizer yields promising outcomes in predicting CL. The high R2 value of 0.996 and low RMSE value of 0.698 demonstrate the accuracy of the KNDA model.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_38-AI_based_KNN_Approaches_for_Predicting_Cooling_Loads.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Driven Rice Yield Predictions and Prescriptive Analytics for Sustainable Agriculture in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150337</link>
        <id>10.14569/IJACSA.2024.0150337</id>
        <doi>10.14569/IJACSA.2024.0150337</doi>
        <lastModDate>2024-03-30T14:57:41.7500000+00:00</lastModDate>
        
        <creator>Muhammad Marong</creator>
        
        <creator>Nor Azura Husin</creator>
        
        <creator>Maslina Zolkepli</creator>
        
        <creator>Lilly Suriani Affendey</creator>
        
        <subject>Rice yield prediction; sustainable agriculture; linear regression; support vector machine; artificial neural network; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Maximizing rice yield is critical for ensuring food security and sustainable agriculture in Malaysia. This research investigates the impact of environmental conditions and management methods on crop yields, focusing on accurate predictions to inform decision-making by farmers. Utilizing machine learning algorithms as decision-support tools, the study analyses commonly used models—Linear Regression, Support Vector Machines, Random Forest, and Artificial Neural Networks—alongside key environmental factors such as temperature, rainfall, and historical yield data. A comprehensive dataset for rice yield prediction in Malaysia was constructed, encompassing yield data from 2014 to 2018. To elucidate the influence of climatic factors, long-term rainfall records spanning 1981 to 2018 were incorporated into the analysis. This extensive dataset facilitates the exploration of recent agricultural trends in Malaysia and their relationship to rice yield. The study specifically evaluates the performance of Random Forest, Support Vector Machine (SVM), and Neural Network (NN) models using metrics like Correlation Coefficient, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Squared Error (MSE), and Mean Absolute Percentage Error (MAPE). Results reveal Random Forest as the standout performer with a Correlation Coefficient of 0.954, indicating a robust positive linear relationship between predictions and actual yield data. SVM and NN also exhibit respectable Correlation Coefficients of 0.767 and 0.791, respectively, making them effective tools for rice yield prediction in Malaysia. By integrating diverse environmental and management factors, the proposed methodology enhances prediction accuracy, enabling farmers to optimize practices for better economic outcomes. This approach holds significant potential for contributing to sustainable agriculture, improved food security, and enhanced economic efficiency in Malaysia&#39;s rice farming sector. Leveraging machine learning, the research aims to transform rice yield prediction into a proactive decision-making tool, fostering a resilient and productive agrarian ecosystem in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_37-Data_Driven_Rice_Yield_Predictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning to Predict Start-Up Business Success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150336</link>
        <id>10.14569/IJACSA.2024.0150336</id>
        <doi>10.14569/IJACSA.2024.0150336</doi>
        <lastModDate>2024-03-30T14:57:41.7500000+00:00</lastModDate>
        
        <creator>Lobna Hsairi</creator>
        
        <subject>Deep learning; Convolutional Neural Network (CNN); prediction; start-up business</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Over the past few decades, there has been rapid growth in the formation of new start-ups around the world. Thus, it is an important and challenging task to understand what makes start-ups successful and to predict their success. Several factors are responsible for the success or failure of a start-up, including bad management, lack of funds, etc. This work aims to create a predictive model for start-ups based on many key factors involved in the early stages of a start-up’s life. Current research on predicting success mainly focuses on financial data such as ROI, revenue, etc. Therefore, in this paper, a different approach is proposed by first investigating other, non-financial factors affecting start-up success and failure. Second, an algorithm that has not been widely used for predicting start-up success, the Convolutional Neural Network (CNN), is adopted. The dataset was acquired from Kaggle. The final model was reached through a series of four experiments to determine which model predicts better. The final model was implemented using a CNN with an average accuracy of 82%, an average loss of 0.4, an average recall of 0.9, and an average precision of 0.9.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_36-Deep_Learning_to_Predict_Start_up_Business_Success.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cyclic Framework for Ethical Implications of Artificial Intelligence in Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150335</link>
        <id>10.14569/IJACSA.2024.0150335</id>
        <doi>10.14569/IJACSA.2024.0150335</doi>
        <lastModDate>2024-03-30T14:57:41.7370000+00:00</lastModDate>
        
        <creator>Ahmed M. Shamsan Saleh</creator>
        
        <subject>Artificial intelligence; autonomous vehicles; ethical implications; AI decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The emergence of artificial intelligence (AI)-powered autonomous vehicles (AVs) represents a significant turning point in the field of transportation, offering the potential for improved safety, efficiency, and convenience. However, the use of AI in this particular context exhibits significant ethical implications that require careful examination. This paper presents an extensive analysis of ethical considerations related to the integration of AI in AVs. It employs a multi-faceted approach to investigate the ethical concerns of AI-powered decision-making, including the well-known trolley problem and moral judgments generated by AI algorithms. Additionally, it explores the complexities of safety and liability issues in the occurrence of incidents involving AVs, addressing the legal and ethical obligations of manufacturers, regulators, and users. The paper addresses the complex interaction between AI-driven transportation and its potential effects on employment and society. It provides an analysis of job displacement and associated workforce disruptions, as well as the consequences for urban planning and public transportation systems. Furthermore, this study investigates the domain of privacy and data security in AVs, delving into issues related to the gathering and utilization of data, as well as the ethical handling of personal information. Finally, this paper proposes a cyclic framework for ethical governance in AVs integrated with AI. It outlines future directions that prioritize transparency, accountability, and adherence to international humanitarian regulations. The study&#39;s findings and recommendations are of significant importance to policymakers, industry participants, and society. These stakeholders play a crucial role in guiding the progress of AI in AVs, to create a transportation environment that is both safer and more ethically aligned.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_35-A_Cyclic_Framework_for_Ethical_Implications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Deep Learning Enabled Augmented Reality Framework for Monitoring the Physical Quality Training of Future Trainers-Teachers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150334</link>
        <id>10.14569/IJACSA.2024.0150334</id>
        <doi>10.14569/IJACSA.2024.0150334</doi>
        <lastModDate>2024-03-30T14:57:41.7200000+00:00</lastModDate>
        
        <creator>Sarsenkul Tileubay</creator>
        
        <creator>Meruert Yerekeshova</creator>
        
        <creator>Altynzer Baiganova</creator>
        
        <creator>Dariqa Janyssova</creator>
        
        <creator>Nurlan Omarov</creator>
        
        <creator>Bakhytzhan Omarov</creator>
        
        <creator>Zakhira Baiekeyeva</creator>
        
        <subject>PoseNET; MoveNET; deep learning; exercise; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The fusion of augmented reality (AR) and deep learning technologies has ushered in a transformative era in the realm of real-time physical activity monitoring. This research paper introduces a system that harnesses the capabilities of PoseNet-based skeletal keypoint extraction and deep neural networks to achieve unparalleled accuracy and real-time functionality in the identification and classification of a wide spectrum of physical activities. With an impressive accuracy rate of 98% within 100 training epochs, the system proves its mettle in precise activity recognition, making it invaluable in domains such as fitness training, physical education, sports coaching, and home-based fitness. The system&#39;s real-time feedback mechanism, bolstered by AR technology, not only enhances user engagement but also motivates users to optimize their exercise routines. This paper not only elucidates the system&#39;s architecture and functionality but also highlights its potential applications across diverse fields. Furthermore, it delineates the trajectory of future research avenues, including the development of advanced feedback mechanisms, exploration of multi-modal sensing techniques, personalization for users, assessment of long-term impacts, and endeavors to ensure accessibility, inclusivity, and data privacy. In essence, this research sets the stage for the evolution of real-time physical activity monitoring, offering a compelling framework to improve fitness, physical education, and athletic training while promoting healthier lifestyles and the overall well-being of individuals worldwide.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_34-Development_of_Deep_Learning_Enabled_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep CNN Approach with Visual Features for Real-Time Pavement Crack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150333</link>
        <id>10.14569/IJACSA.2024.0150333</id>
        <doi>10.14569/IJACSA.2024.0150333</doi>
        <lastModDate>2024-03-30T14:57:41.7200000+00:00</lastModDate>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <creator>Gulnar Astaubayeva</creator>
        
        <creator>Gulnara Tleuberdiyeva</creator>
        
        <creator>Janna Alimkulova</creator>
        
        <creator>Gulzhan Nussupbekova</creator>
        
        <creator>Olga Kisseleva</creator>
        
        <subject>Road damage; crack; image processing; classification; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research delves into an innovative approach to an age-old urban maintenance challenge: the timely and accurate detection of pavement cracks, a key issue linked to public safety and fiscal efficiency. Harnessing the power of Deep Convolutional Neural Networks (DCNNs), the study introduces a cutting-edge model, meticulously optimized for the nuanced task of identifying fissures in diverse pavement types, under various lighting and environmental conditions. Traditional methodologies often stumble in this regard, plagued by issues of low accuracy and high false-positive rates, predominantly due to their inability to adeptly handle the intricate variations in images caused by shadows, traffic, or debris. This paper propounds a robust algorithm that trains the model using a rich library of images, capturing an array of crack types, from hairline fractures to gaping crevices, thus imbuing the system with an astute &#39;understanding&#39; of target anomalies. One salient breakthrough detailed is the model&#39;s capacity for &#39;context-aware&#39; analysis, allowing for a more adaptive, precision-driven scrutiny that significantly mitigates the issue of over-generalization common in less sophisticated systems. Furthermore, the research breaks ground by integrating a novel feedback mechanism, enabling the DCNN to learn dynamically from misclassifications in an iterative refinement process, markedly enhancing detection reliability over time. The findings underscore not only improved accuracy but also heightened processing speeds, promising substantial implications for scalable real-world application and establishing a significant leap forward in predictive urban infrastructure maintenance.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_33-Deep_CNN_Approach_with_Visual_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Enhanced Framework for Big Data Modeling with Application in Industry 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150332</link>
        <id>10.14569/IJACSA.2024.0150332</id>
        <doi>10.14569/IJACSA.2024.0150332</doi>
        <lastModDate>2024-03-30T14:57:41.7030000+00:00</lastModDate>
        
        <creator>Gulnur Kazbekova</creator>
        
        <creator>Zhuldyz Ismagulova</creator>
        
        <creator>Botagoz Zhussipbek</creator>
        
        <creator>Yntymak Abdrazakh</creator>
        
        <creator>Gulzipa Iskendirova</creator>
        
        <creator>Nurgul Toilybayeva</creator>
        
        <subject>Industry 4.0; machine learning; big data; application; management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the dynamic milieu of Industry 4.0, characterized by the deluge of big data, this research promulgates a groundbreaking framework that harnesses machine learning (ML) to optimize big data modeling processes, addressing the intricate requirements and challenges of contemporary industrial domains. Traditional data processing mechanisms falter in the face of the sheer volume, velocity, and variety of big data, necessitating more robust, intelligent solutions. This paper delineates the development and application of an innovative ML-augmented framework, engineered to interpret and model complex, multifaceted data structures more efficiently and accurately than has been feasible with conventional methodologies. Central to our approach is the integration of advanced ML strategies—including but not limited to deep learning and neural networks—with sophisticated analytics tools, collectively capable of automated decision-making, predictive analysis, and trend identification in real-time scenarios. Beyond theoretical formulation, our research rigorously evaluates the framework through empirical analysis and industrial case studies, demonstrating tangible enhancements in data utility, predictive accuracy, operational efficiency, and scalability within various Industry 4.0 contexts. The results signify a marked improvement over existing models, particularly in handling high-dimensional data and facilitating actionable insights, thereby empowering industrial entities to navigate the complexities of digital transformation. This exploration underscores the potential of machine learning as a pivotal ally in evolving data strategies, setting a new precedent for data-driven decision-making paradigms in the era of Industry 4.0.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_32-Machine_Learning_Enhanced_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Fuzzy-PID Temperature Control System for Ensuring Comfortable Microclimate in an Intelligent Building</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150331</link>
        <id>10.14569/IJACSA.2024.0150331</id>
        <doi>10.14569/IJACSA.2024.0150331</doi>
        <lastModDate>2024-03-30T14:57:41.6900000+00:00</lastModDate>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Kamalbek Berkimbayev</creator>
        
        <creator>Angisin Seitmuratov</creator>
        
        <creator>Almira Ibashova</creator>
        
        <creator>Akbayan Aliyeva</creator>
        
        <creator>Gulira Nurmukhanbetova</creator>
        
        <subject>Fuzzy logic; PID; Temperature; Microclimate; Smart Building</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In an era characterized by the growing significance of energy-efficient and human-centric environmental control systems, this research endeavors to investigate the efficacy of a Fuzzy Proportional-Integral-Derivative (PID) control approach for temperature regulation within Heating, Ventilation, and Air Conditioning (HVAC) systems. The study leverages the adaptability and robustness of fuzzy logic to dynamically tune the PID controller&#39;s parameters in response to changing environmental conditions. Through comprehensive simulations and comparative analyses, the research showcases the superior performance of the proposed fuzzy PID control system in terms of rapid response, overload avoidance, and minimal steady-state error, particularly when contrasted with conventional PID control and model predictive control (MPC) methodologies. Furthermore, the research extends its scope to assess the control system&#39;s resilience in the face of significant load variations, affirming its practical applicability in real-world HVAC scenarios. Beyond its immediate implications for HVAC systems, this research underscores the broader potential of fuzzy PID control in enhancing control precision and adaptability across various domains, including robotics, industrial automation, and process control. By advocating for future research endeavors in optimizing fuzzy membership functions, implementing real-time solutions, and exploring multi-objective optimization, among other avenues, this study seeks to contribute to the ongoing discourse surrounding advanced control strategies for achieving energy-efficient and human-centric environmental regulation.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_31-Intelligent_Fuzzy_PID_Temperature_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Personalized Recommendation Algorithms Based on User Profile</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150330</link>
        <id>10.14569/IJACSA.2024.0150330</id>
        <doi>10.14569/IJACSA.2024.0150330</doi>
        <lastModDate>2024-03-30T14:57:41.6900000+00:00</lastModDate>
        
        <creator>Guo Hui</creator>
        
        <creator>Zhou LiQing</creator>
        
        <creator>Chen Mang</creator>
        
        <creator>Xv ShiKun</creator>
        
        <subject>Recommender system; large language model; user profile; multi-disciplinary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent decades, recommendation systems (RS) have played a pivotal role in societal life, closely intertwined with people&#39;s everyday activities. However, traditional recommendation systems still lack thorough consideration of comprehensive user profiles and have struggled to provide more personalized and accurate recommendation services. This paper delves into the analysis and enrichment of user profiles, using them as a foundation to tailor recommendations for individuals across domains such as movies, TV shows, and books. The paper constructs a chart comprising 246 types of user profile attributes spanning 16 dimensions, primarily covering gender, age, occupation, and religious beliefs. This chart integrates approximately 1.2 million data points encompassing information relevant to movies, TV shows, and novels. Through training on this dataset, the study enhanced the model&#39;s recommendation effectiveness: after training, recommendation accuracy surpasses the pre-training baseline on the proposed evaluation metrics. Furthermore, under manual evaluation, the recommended results are more reasonable and align better with user profiles.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_30-Research_on_Personalized_Recommendation_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Organization of Medical Processes in Medical Institutions Based on Big Data Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150329</link>
        <id>10.14569/IJACSA.2024.0150329</id>
        <doi>10.14569/IJACSA.2024.0150329</doi>
        <lastModDate>2024-03-30T14:57:41.6730000+00:00</lastModDate>
        
        <creator>Botagoz Zhussipbek</creator>
        
        <creator>Tursinbay Turymbetov</creator>
        
        <creator>Nuraim Ibragimova</creator>
        
        <creator>Zinegul Yergalauova</creator>
        
        <creator>Gulmira Nigmetova</creator>
        
        <creator>Saule Tanybergenova</creator>
        
        <creator>Zhanar Musagulova</creator>
        
        <subject>Big data; data-driven technology; artificial intelligence; medical processes; medical institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research paper delves into the burgeoning field of Big Data analytics in healthcare, proposing an innovative framework aimed at refining the organization and management of medical processes within healthcare institutions. Through the lens of detailed case studies, including stroke diagnosis leveraging the UNet model, and the identification of heart and respiratory diseases via machine learning algorithms applied to data from wearable devices, the study illuminates the profound capabilities of Big Data technologies in enhancing the precision of diagnostics, tailoring patient treatment, and elevating the overall efficiency of healthcare services. It meticulously interprets the outcomes of these applications, discusses the practical implications for healthcare professionals and institutions, confronts the challenges inherent in the integration of sophisticated analytics in clinical settings, and outlines potential directions for future research. Among the pivotal challenges highlighted are issues related to data privacy, security, the need for advanced infrastructure, and the imperative for ongoing training and interdisciplinary cooperation to navigate the complexities of Big Data in healthcare. The paper underscores the transformative promise of Big Data analytics, suggesting that comprehensive adoption and adept implementation could revolutionize healthcare delivery, making it more personalized, efficient, and cost-effective. Through this exploration, the paper contributes to the ongoing discourse on the integration of technology in healthcare, offering insights into how Big Data analytics can serve as a cornerstone for the next generation of medical diagnostics and patient care management, thereby enhancing health outcomes on a global scale.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_29-Framework_for_Organization_of_Medical_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Real-Time Image Processing System Based on Sobel Edge Detection using Model-based Design Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150328</link>
        <id>10.14569/IJACSA.2024.0150328</id>
        <doi>10.14569/IJACSA.2024.0150328</doi>
        <lastModDate>2024-03-30T14:57:41.6570000+00:00</lastModDate>
        
        <creator>Taoufik Saidani</creator>
        
        <creator>Refka Ghodhbani</creator>
        
        <creator>Mohamed Ben Ammar</creator>
        
        <creator>Marouan Kouki</creator>
        
        <creator>Mohammad H Algarni</creator>
        
        <creator>Yahia Said</creator>
        
        <creator>Amani Kachoukh</creator>
        
        <creator>Amjad A. Alsuwaylimi</creator>
        
        <creator>Albia Maqbool</creator>
        
        <creator>Eman H. Abd-Elkawy</creator>
        
        <subject>Image processing; Sobel edge detection; high-level synthesis; model-based design; Zynq7000; MATLAB HDL Coder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Image processing and computer vision applications often use the Sobel edge detection technique to detect edges in input images, improving accuracy and efficiency. For the great majority of today&#39;s image processing applications, real-time implementation of techniques such as Sobel edge detection in hardware devices like field-programmable gate arrays (FPGAs) is required. FPGAs make it feasible to achieve higher algorithmic throughput, which is needed to match real-time speeds or in circumstances where faster data rates are critical. The results of this study allowed the Sobel edge detection approach to be implemented in a manner that is both fast and space-efficient. To realize the proposed implementation, a high-level synthesis (HLS) design approach based on application-specific bit widths for intermediate data nodes was used. Register transfer level (RTL) code was generated from the high-level model using the MATLAB HDL Coder for HLS. The resulting hardware description language (HDL) code was implemented on a Xilinx ZedBoard with the aid of the Vivado software and tested in real time with an input video stream.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_28-Design_and_Implementation_of_a_Real_Time_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting a Hybrid Method to Overcome the Challenges of Determining the Uncertainty of Future Stock Price Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150327</link>
        <id>10.14569/IJACSA.2024.0150327</id>
        <doi>10.14569/IJACSA.2024.0150327</doi>
        <lastModDate>2024-03-30T14:57:41.6570000+00:00</lastModDate>
        
        <creator>Zhiqiong Zou</creator>
        
        <creator>Guangyu Xiao</creator>
        
        <subject>Stock market movement prediction; prediction models; Radial basis function; optimization approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>A particular location, framework, or forum where buyers and sellers congregate to trade products, services, or assets is referred to as an economic market. While the future is unpredictable and unknowable, it is still possible to make informed predictions about the course of events. Predicting stock market movements using artificial intelligence and machine learning is one such possibility. Even though the stock market is volatile, it is still feasible and wise to use artificial intelligence to create well-informed forecasts before making an investment. The current work suggests a novel approach to increase stock price forecast accuracy by integrating the Radial basis function (RBF) with Particle Swarm Optimization, Slime Mold Algorithm, and Moth Flame Optimization. The objective of the study is to improve stock price forecast accuracy while accounting for the complexity and volatility of financial markets. The efficacy of the proposed strategy has been tested in the real world using historical stock price statistics. Results demonstrate considerable accuracy improvements over traditional RBF models. The combined strength of RBF and the optimization technique enhances the model&#39;s ability to adapt to changing market conditions in addition to increasing prediction accuracy. Results were 0.984, 0.990, 0.991, and 0.994 for RBF, PSO-RBF, SMA, and MFO-RBF, respectively. The performance of MFO-RBF in comparison to RBF shows how combining the model with an optimizer can enhance its performance. By contrasting the outcomes of the various optimizers, the most accurate one has been selected as the main optimizer of the model.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_27-Presenting_a_Hybrid_Method_to_Overcome_the_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Obesity in Nutritional Patients using Decision Tree Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150326</link>
        <id>10.14569/IJACSA.2024.0150326</id>
        <doi>10.14569/IJACSA.2024.0150326</doi>
        <lastModDate>2024-03-30T14:57:41.6430000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Luis Mirano-Portilla</creator>
        
        <creator>Manuel Gamarra-Mendoza</creator>
        
        <creator>Wilmer Robles-Espiritu</creator>
        
        <subject>Obesity; Machine Learning (ML); Decision Tree (DT); Prediction; CRISP-DM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Obesity has become a widespread problem that affects not only physical well-being but also mental health. To address this problem and provide solutions, Machine Learning (ML) technology tools are being applied, and studies are under way to improve the prediction of obesity. This study aimed to predict obesity levels in nutritional patients by analyzing their physical and dietary habits using the Decision Tree (DT) model. For the development of this work, we chose the CRISP-DM framework to proceed in an organized way, achieving a better understanding of the data and describing, evaluating, and analyzing the results. The results yielded significant metrics for predicting obesity: an accuracy of 92.89%, a sensitivity of 94%, and an F1 score of 93%. Likewise, accuracy above 88% was obtained for each level of obesity, demonstrating the effectiveness of the DT model in this type of task. These significant results motivate further research to continue improving accuracy in obesity prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_26-Predicting_Obesity_in_Nutritional_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Precision Face Mask Detection in Crowded Environment using Machine Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150325</link>
        <id>10.14569/IJACSA.2024.0150325</id>
        <doi>10.14569/IJACSA.2024.0150325</doi>
        <lastModDate>2024-03-30T14:57:41.6270000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Chan Yoke Lin</creator>
        
        <creator>Mohammed Nasser Mohammed Al-Andoli</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <creator>Ida Syafiza Md Isa</creator>
        
        <subject>Face mask detection; machine vision; cascade object detector; cross-validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the face of rampant global disease transmission, effective preventive strategies are imperative. This study tackles the challenge of ensuring compliance in crowded settings by developing a sophisticated face mask detection system. Utilizing MATLAB and the Cascade Object detector, the system focuses on detecting white surgical masks in frontal images. Training the system is critical for accuracy; therefore, cross-validation is employed due to limited data. The results reveal accuracies of 76.67% for initial training, 67.50% for a 9:11 cropping ratio, and 89.17% for a 9:4:7 cropping ratio, highlighting the system&#39;s remarkable precision in mask detection. Looking ahead, the system&#39;s adaptability can be further expanded to include various mask colors and types, extending its effectiveness beyond COVID-19 to combat a range of respiratory illnesses. This research represents a significant advancement in reinforcing preventive measures against future disease outbreaks, especially in densely populated environments, contributing significantly to global public health and safety initiatives.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_25-Precision_Face_Mask_Detection_in_Crowded_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Attention-based Deep Learning Architectures for Classifying EEG in ADHD and Typical Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150324</link>
        <id>10.14569/IJACSA.2024.0150324</id>
        <doi>10.14569/IJACSA.2024.0150324</doi>
        <lastModDate>2024-03-30T14:57:41.6100000+00:00</lastModDate>
        
        <creator>Mingzhu Han</creator>
        
        <creator>Guoqin Jin</creator>
        
        <creator>Wei Li</creator>
        
        <subject>ADHD; EEG; deep learning; attention mechanisms; CNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Although limited research has explored the integration of electroencephalography (EEG) and deep learning approaches for attention deficit hyperactivity disorder (ADHD) detection, applying deep learning models to real-world data such as EEGs remains a difficult endeavour. The purpose of this work was to evaluate how different attention mechanisms affect the performance of well-established deep-learning models for the identification of ADHD. Two specific architectures, namely long short-term memory (LSTM) + attention (Att) and convolutional neural network (CNN) + Att, were compared. The CNN+Att model consists of a dropout, an LSTM layer, a dense layer, and a CNN layer merged with the convolutional block attention module (CBAM) structure. For the LSTM+Att model, an extra LSTM layer comprising T LSTM cells was stacked on top of the first LSTM layer. The output of this stacked LSTM structure was then passed to a dense layer, which in turn was connected to a classification layer comprising two neurons. Experimental results showed that the best classification result was achieved by the LSTM+Att model, with 98.91% accuracy, 99.87% sensitivity, 97.79% specificity and 98.87% F1-score. The LSTM, CNN+Att, and CNN models then classified ADHD and normal EEG signals with 98.45%, 97.74% and 97.16% accuracy, respectively. The information in the data was successfully exploited by investigating the application of attention mechanisms and the precise position of the attention layer inside the deep learning model. This finding creates opportunities for further study on large-scale EEG datasets and more reliable information extraction from massive data sets, ultimately allowing links to be made between brain activity and specific behaviours or task execution.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_24-Assessment_of_Attention_based_Deep_Learning_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond BERT: Exploring the Efficacy of RoBERTa and ALBERT in Supervised Multiclass Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150323</link>
        <id>10.14569/IJACSA.2024.0150323</id>
        <doi>10.14569/IJACSA.2024.0150323</doi>
        <lastModDate>2024-03-30T14:57:41.5930000+00:00</lastModDate>
        
        <creator>Christian Y. Sy</creator>
        
        <creator>Lany L. Maceda</creator>
        
        <creator>Mary Joy P. Canon</creator>
        
        <creator>Nancy M. Flores</creator>
        
        <subject>Multi-class text classification; Bidirectional Encoder Representations from Transformers (BERT); RoBERTa; ALBERT; Universal Access to Quality Tertiary Education (UAQTE) program; educational policy reforms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This study investigates the performance of transformer-based machine learning models, specifically BERT, RoBERTa, and ALBERT, in multiclass text classification within the context of the Universal Access to Quality Tertiary Education (UAQTE) program. The aim is to systematically categorize and analyze qualitative responses to uncover domain-specific patterns in students&#39; experiences. Through rigorous evaluation of various hyperparameter configurations, consistent enhancements in model performance are observed with smaller batch sizes and increased epochs, while optimal learning rates further boost accuracy. However, achieving an optimal balance between sequence length and model efficacy presents nuanced challenges, with instances of overfitting emerging after a certain number of epochs. Notably, the findings underscore the effectiveness of the UAQTE program in addressing student needs, particularly evident in categories such as &quot;Family Support&quot; and &quot;Financial Support,&quot; with RoBERTa emerging as a standout choice due to its stable performance during training. Future research should focus on fine-tuning hyperparameter values and adopting continuous monitoring mechanisms to reduce overfitting. Furthermore, ongoing review and modification of educational efforts, informed by evidence-based decision-making and stakeholder feedback, is critical to fulfill students&#39; changing needs effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_23-Beyond_BERT_Exploring_the_Efficacy_of_RoBERTa_and_ALBERT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RUICP: Commodity Recommendation Model Based on User Real Time Interest and Commodity Popularity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150322</link>
        <id>10.14569/IJACSA.2024.0150322</id>
        <doi>10.14569/IJACSA.2024.0150322</doi>
        <lastModDate>2024-03-30T14:57:41.5800000+00:00</lastModDate>
        
        <creator>Wenchao Xu</creator>
        
        <creator>Ling Xia</creator>
        
        <subject>User real-time interest; commodity popularity; recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>At present, the recommendation of massive commodity catalogs depends mainly on the short-term click-through rate of commodities and on the data generated when users browse and click directly. This approach can meet the shopping needs of users, but it has two shortcomings: it recommends homogeneous commodities to long-term shoppers, and it cannot capture real-time changes in users&#39; interests, so it can only return results similar to recently clicked products. This study therefore establishes a time-varying expression of user interest intensity to correct the bias in real-time recommendation content and proposes RUICP, a recommendation model based on time-dependent user interest and commodity popularity. First, the user&#39;s basic data and cumulative usage information are used to build a profile: the usage data is divided into equal time intervals for deep semantic feature analysis, the model is optimized, and the user&#39;s long-term interest intensity is obtained through parameter estimation. Next, the user&#39;s short-term interest is extracted by splitting the short-term usage data, and the final interest is computed by combining the short-term interest with the long-term interest intensity. A commodity popularity score is then calculated by incorporating the repeated click-through rate of products, and the commodity ranking is updated accordingly. Finally, the classic item-based collaborative filtering algorithm is used to compute the matching degree between user interest and commodities and to produce recommendations. Simulation results show that, compared with other methods, RUICP achieves higher recommendation accuracy for established users and offers some value in addressing the cold-start problem.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_22-RUICP_Commodity_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Digital Preservation of Cultural Heritage: Exploring Serious Games for Songket Tradition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150321</link>
        <id>10.14569/IJACSA.2024.0150321</id>
        <doi>10.14569/IJACSA.2024.0150321</doi>
        <lastModDate>2024-03-30T14:57:41.5800000+00:00</lastModDate>
        
        <creator>Nor Hafidzah Abdullah</creator>
        
        <creator>Wan Malini Wan Isa</creator>
        
        <creator>Syadiah Nor Wan Shamsuddin</creator>
        
        <creator>Norkhairani Abdul Rawi</creator>
        
        <creator>Maizan Mat Amin</creator>
        
        <creator>Wan Mohd Adzim Wan Mohd Zain</creator>
        
        <subject>Cultural Heritage; digital preservation; serious game; Songket</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Over the past few decades, Malaysia has undergone remarkable technological advancement, establishing itself as a vibrant hub for innovation in Southeast Asia. However, technological progress must be harmonized with preserving and promoting the country&#39;s cultural heritage. Digital preservation of cultural heritage emerges as a critical endeavor, particularly for future generations. Nonetheless, there remains a notable deficiency in preservation methodologies for cultural heritage, particularly concerning technological approaches. This paper delves into the realm of cultural heritage and presents findings from a study on preserving Songket&#39;s heritage. Interviews were conducted with three experts on Songket heritage, revealing a prevailing lack of awareness regarding Songket heritage preservation. Additionally, the analysis highlights inherent flaws in current preservation methods, hindering efforts to engage a wider audience, particularly the younger generation. The experts unanimously advocate digitizing heritage knowledge, including the integration of serious games, to facilitate Songket preservation and safeguarding efforts. The use of serious games can also attract and engage the younger generation to the heritage of Songket.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_21-Towards_Digital_Preservation_of_Cultural_Heritage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Underwater Image Enhancement via Higher-Order Moment CLAHE Model and V Channel Substitute</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150320</link>
        <id>10.14569/IJACSA.2024.0150320</id>
        <doi>10.14569/IJACSA.2024.0150320</doi>
        <lastModDate>2024-03-30T14:57:41.5630000+00:00</lastModDate>
        
        <creator>Chen Yahui</creator>
        
        <creator>Liang Yitao</creator>
        
        <creator>Li Yongfeng</creator>
        
        <creator>Liu Hongyue</creator>
        
        <creator>Li Lan</creator>
        
        <subject>Underwater images; contrast enhancement; adaptive CLAHE; high-order moments; dynamic features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Images captured underwater often exhibit low contrast and color distortion owing to the special properties of light in water. Underwater image enhancement methods have become an effective way to address these issues thanks to their simplicity and effectiveness. However, existing enhancement methods such as CLAHE struggle to increase image contrast while generalizing across images. Here, an underwater image enhancement method based on a higher-order moment CLAHE model and V channel substitution is proposed to enhance contrast and correct color distortion. First, the statistical features of image histograms are analyzed and quantified in a targeted manner with higher-order moments, which are incorporated into CLAHE so that the improved CLAHE can accurately enhance the contrast of an underwater image according to the dynamic features of its image blocks, avoiding over- or under-enhancement. Then, to address color distortion, this paper novelly substitutes gray data for the V channel in the HSV color space, compensating for lost information and achieving color correction in terms of visual perception. Finally, the image is color-corrected with the gray-world method, which effectively mitigates the color distortion problem. Our method is compared qualitatively and quantitatively with multiple state-of-the-art methods on a public dataset, demonstrating that it better resolves low contrast and color distortion; in addition, details are more realistic and the underwater image quality evaluation indexes are better.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_20-Underwater_Image_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Cardiovascular Disease using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150319</link>
        <id>10.14569/IJACSA.2024.0150319</id>
        <doi>10.14569/IJACSA.2024.0150319</doi>
        <lastModDate>2024-03-30T14:57:41.5470000+00:00</lastModDate>
        
        <creator>Mahesh Kumar Joshi</creator>
        
        <creator>Deepak Dembla</creator>
        
        <creator>Suman Bhatia</creator>
        
        <subject>Cardiovascular disease; heart; logistic regression; K-NN; machine learning; na&#239;ve bayes; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The heart is the most critical organ of the body, responsible for regulating and maintaining blood circulation. Globally, heart disease is prevalent and constitutes a significant cause of mortality. Manifestations such as chest discomfort and irregular heartbeat are notable symptoms. The healthcare sector has amassed substantial knowledge in this domain. Analyzing prior research, this paper delves into the use of ML algorithms to predict cardiac diseases. This research employs a diverse array of machine learning techniques, including decision tree, support vector classifier, random forest, K-NN, logistic regression, and naive Bayes. These algorithms utilize specific characteristics to forecast cardiac diseases effectively. Leveraging machine learning algorithms to analyze and predict outcomes from the extensive healthcare-generated data shows considerable promise. Recent advancements in machine learning models have incorporated numerous features, and this study proposes integrating these features into machine learning algorithms to forecast cardiovascular ailments. The main objective of this research is to assess the performance of the mentioned machine learning algorithms for predicting cardiovascular disease.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_19-Prediction_of_Cardiovascular_Disease_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fire and Smoke Detection Model Based on YOLOv8 Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150318</link>
        <id>10.14569/IJACSA.2024.0150318</id>
        <doi>10.14569/IJACSA.2024.0150318</doi>
        <lastModDate>2024-03-30T14:57:41.5470000+00:00</lastModDate>
        
        <creator>Pengcheng Gao</creator>
        
        <subject>Fire and smoke detection; deep learning; computer vision; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Early warning of fire and smoke safeguards people&#39;s lives and property. The use of deep learning for fire and smoke warning has been an active area of research, and target detection algorithms in particular have achieved significant results. To improve the fire and smoke detection performance of the model in different scenarios, a high-precision and lightweight improvement of the You Only Look Once (YOLO) model is developed. It utilizes partial convolutions to reduce model complexity and adds an attention block to acquire cross-space learning capability. In addition, the neck network is redesigned to realize bidirectional feature fusion. Experiments show significantly improved results on all metrics on the public Fire-Smoke dataset, while the size of the model is also greatly reduced. Comparisons with other popular target detection models under the same conditions indicate that the improved model performs best as well. For a more visual comparison with the detectability of the original model, heatmap experiments were also conducted, which likewise demonstrate a lower miss rate and more focused attention.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_18-A_Fire_and_Smoke_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A CNN-based Deep Learning Framework for Driver’s Drowsiness Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150317</link>
        <id>10.14569/IJACSA.2024.0150317</id>
        <doi>10.14569/IJACSA.2024.0150317</doi>
        <lastModDate>2024-03-30T14:57:41.5330000+00:00</lastModDate>
        
        <creator>Ali Sohail</creator>
        
        <creator>Asghar Ali Shah</creator>
        
        <creator>Sheeba Ilyas</creator>
        
        <creator>Nizal Alshammry</creator>
        
        <subject>Drowsiness detection; face detection; eye detection; yawn detection; deep learning; convolutional neural network; electroencephalograph; eye aspect ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Accidents are among the major causes of injuries and deaths worldwide. According to a WHO report, an estimated 1.3 million people died in road accidents in 2022. Driver fatigue is a primary factor in these traffic accidents. A number of studies have been presented by previous researchers in the context of driver drowsiness detection. The majority of earlier strategies relied on image processing systems that used algorithms to identify the yawning, eye closure, and eyebrows of the driver captured from a live video camera. Major issues in previous studies were detection delay and the datasets used: these studies relied on physical sensors for monitoring driver behavior, which caused delays in detection. In this article, a deep learning approach is used to provide a continuous strategy for detecting driver drowsiness using an efficient dataset. The trained algorithm is employed on video taken from a live camera to extract the driver&#39;s facial landmarks, which are subsequently processed to produce results. The dataset used for training the CNN algorithm consists of 2904 images taken from various subjects under various driving circumstances. The data is preprocessed by different methods, including statistical moments, CNN filters, frequency vector determination, and position incidence vector calculation. After training, feature-based cascade classifier files are used to recognize the face in real-life scenarios using the live camera. The accuracy of the proposed model is 95%, the highest among the compared models, based on data gathered from different kinds of scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_17-A_CNN_based_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Effective Models to Provide a Novel Method to Identify the Future Trend of Nikkei 225 Stocks Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150316</link>
        <id>10.14569/IJACSA.2024.0150316</id>
        <doi>10.14569/IJACSA.2024.0150316</doi>
        <lastModDate>2024-03-30T14:57:41.5170000+00:00</lastModDate>
        
        <creator>Jiang Zhu</creator>
        
        <creator>Haiyan Wu</creator>
        
        <subject>Financial market; stock future trend; Nikkei 225 index; random forest; artificial bee colony</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>The stock market refers to a financial market in which individuals and institutions engage in the buying and selling of shares of publicly listed firms. The valuation of stocks is influenced by the interplay between the forces of supply and demand. The act of allocating funds to the stock market entails a certain degree of risk, while it presents the possibility of substantial gains over an extended period. The task of predicting stock prices in the securities market is further complicated by the presence of non-stationary and non-linear characteristics in financial time series data. While traditional techniques have the potential to enhance the accuracy of forecasting, they are also associated with computational complexities that might lead to an elevated occurrence of prediction mistakes. This is the reason why the financial industry has seen a growing prevalence of novel methods, particularly in the stock market. This work introduces a novel model that effectively addresses several challenges by integrating the random forest methodology with the artificial bee colony algorithm. In the current study, the hybrid model demonstrated superior performance and effectiveness compared to the other models. The proposed model exhibited optimum performance and demonstrated a significant degree of effectiveness with low errors. The efficiency of the predictive model for stock price forecasts was established via the analysis of data obtained from the Nikkei 225 index. The data included the timeframe from January 2013 to December 2022. The results reveal that the proposed framework demonstrates efficacy and reliability in evaluating and predicting the price time series of equities. The empirical evidence suggests that, when compared to other current methodologies, the proposed model has a greater degree of accuracy in predicting outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_16-Integration_of_Effective_Models_to_Provide_a_Novel_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Algorithms in Sentiment Analysis Techniques in Social Media – A Rapid Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150314</link>
        <id>10.14569/IJACSA.2024.0150314</id>
        <doi>10.14569/IJACSA.2024.0150314</doi>
        <lastModDate>2024-03-30T14:57:41.5000000+00:00</lastModDate>
        
        <creator>Vasile Daniel Pavaloaia</creator>
        
        <subject>Clustering algorithms; K-means; HAC; DBSCAN; sentiment analysis; natural language processing techniques; social media datasets; Twitter/X</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Given the high dynamism of the Sentiment Analysis (SA) topic in the recent publication landscape, the current review attempts to fill a research gap. Consequently, the paper examines the most recent body of literature to extract and analyze the papers that elaborate on clustering algorithms applied to social media datasets for performing SA. The current rapid review attempts to answer the research questions by analyzing a pool of 46 articles published between Dec 2020 and Dec 2023. The manuscripts were thoroughly selected from the Scopus (Sco) and Web of Science (WoS) databases and, after filtering the initial pool of 164 articles, the final 46 were extracted and read in full.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_14-Clustering_Algorithms_in_Sentiment_Analysis_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of the Literature on the Use of Artificial Intelligence in Forecasting the Demand for Products and Services in Various Sectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150315</link>
        <id>10.14569/IJACSA.2024.0150315</id>
        <doi>10.14569/IJACSA.2024.0150315</doi>
        <lastModDate>2024-03-30T14:57:41.5000000+00:00</lastModDate>
        
        <creator>Jos&#233; Rolando Neira Villar</creator>
        
        <creator>Miguel Angel Cano Lengua</creator>
        
        <subject>Demand; agglomeration algorithm; services; PRISMA methodology; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This systematic review, carried out under the PRISMA methodology, aims to identify the recently proposed artificial intelligence models for demand forecasting, distinguishing the problems they try to overcome, recognizing the artificial intelligence methods used, detailing the performance metrics used, recognizing the performance achieved by these models and identifying what is new in them. Studies in the manufacturing, retail trade, tourism and electric energy sectors were considered in order to facilitate the transfer of knowledge from different sectors. 33 articles were analyzed, with the main results being that the proposed models are generally ensembles of various artificial intelligence methods; that the complexity of data and its scarcity are the main problems addressed; that combinations of simple machine learning, “bagging”, “boosting” and deep neural networks, are the most used methods; that the performance of the proposed models surpasses the classic statistical methods and other reference models; and that, finally, the proposed novelties cover aspects such as the type of data used, the pattern extraction techniques used, the assembly forms of the applied models and the use of algorithms for automating the adjustment of the models. Finally, a forecast model is proposed that includes the most innovative aspects found in this research.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_15-A_Systematic_Review_of_the_Literature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing an Innovative Approach to Mitigate Investment Risk in Financial Markets: A Case Study of Nikkei 225</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150313</link>
        <id>10.14569/IJACSA.2024.0150313</id>
        <doi>10.14569/IJACSA.2024.0150313</doi>
        <lastModDate>2024-03-30T14:57:41.4870000+00:00</lastModDate>
        
        <creator>Xiao Duan</creator>
        
        <subject>NIKKEI 225 index; artificial bee colony; stock price; financial markets; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>When the value of an investor&#39;s stock portfolio rises during a period of strong market performance, investors often experience a gain in wealth. Spending may increase when people feel more at ease and confident about their financial circumstances. Conversely, during a market crisis, a fall in wealth can lead to lower consumer spending, which may impede economic growth. Stock market trend prediction is therefore considered an important and fruitful endeavor, since prudent investing decisions informed by accurate forecasts can yield significant returns. Owing to outdated and erratic data, however, stock market forecasting poses a serious challenge and is among the main difficulties faced by investors trying to optimize their return on investment. The goal of this research is to provide an accurate hybrid stock price forecasting model using Nikkei 225 index data from 2013 to 2022. The support vector regression is constructed by integrating multiple optimization approaches, including moth flame optimization, artificial bee colony, and genetic algorithms. Moth flame optimization proves to produce the best results of these optimization techniques. The evaluation criteria used in this research are MAE, MAPE, MSE, and RMSE. The MAPE of 0.70 obtained for MFO-SVR shows the high accuracy of this model for estimating the price of the Nikkei 225.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_13-Introducing_an_Innovative_Approach_to_Mitigate_Investment_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Intelligent System for Controlling the Speed of Vehicle using Fuzzy Logic and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150311</link>
        <id>10.14569/IJACSA.2024.0150311</id>
        <doi>10.14569/IJACSA.2024.0150311</doi>
        <lastModDate>2024-03-30T14:57:41.4700000+00:00</lastModDate>
        
        <creator>Anup Lal Yadav</creator>
        
        <creator>Sandip Kumar Goyal</creator>
        
        <subject>Speed control; fuzzy logics; deep learning; decision making; collision avoidance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Vehicle collisions are a significant problem worldwide, causing injuries, fatalities, and property damage. There are several reasons for vehicle collisions, such as rash driving, overspeeding, poor driving skills, the increasing number of vehicles, and drunk driving. Among these, overspeeding is one of the most critical factors. To address this issue, the current article proposes a fuzzy-based algorithm to prevent and control the speed of the vehicle. The major objective of the proposed system is to control vehicle speed for proactive collision avoidance. Deep learning and a fuzzy system together provide an integrated approach to controlling speed and avoiding vehicle collisions. Fuzzification of the speed variable provides an advanced and viable solution for speed control. The current research uses an RNN and other deep learning algorithms to predict traffic and identify traffic frequency. The traffic frequency in a time-series frame provides the frequency of traffic within a time window, which can be detected with the involvement of IoT.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_11-An_Efficient_and_Intelligent_System_for_Controlling_the_Speed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Single Stage Detector for Breast Cancer Detection on Digital Mammogram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150312</link>
        <id>10.14569/IJACSA.2024.0150312</id>
        <doi>10.14569/IJACSA.2024.0150312</doi>
        <lastModDate>2024-03-30T14:57:41.4700000+00:00</lastModDate>
        
        <creator>Li Xu</creator>
        
        <creator>Nan Jia</creator>
        
        <creator>Mingmin Zhang</creator>
        
        <subject>Breast cancer detection; digital mammogram; deep learning; YOLOv5 algorithm; medical image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Medical image processing plays a pivotal role in modern healthcare, particularly in the early detection of breast cancer in digital mammograms. Several methods have been explored in the literature to improve breast cancer detection, with deep-learning approaches emerging as particularly promising due to their ability to provide accurate results. However, a persistent research challenge in deep learning-based breast cancer detection lies in addressing the historically low accuracy rates observed in previous studies. This paper presents a novel deep-learning model utilizing a single-stage detector based on the YOLOv5 algorithm, designed specifically to tackle the issue of low accuracy in breast cancer detection. The proposed method involves the generation of a custom dataset and subsequent training, validation, and testing phases to evaluate the model&#39;s performance rigorously. Experimental results and comprehensive performance evaluations demonstrate that the proposed method achieves remarkable accuracy, marking a significant advancement in breast cancer detection.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_12-A_Single_Stage_Detector_for_Breast_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Offline Author Identification using Non-Congruent Handwriting Data Based on Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150310</link>
        <id>10.14569/IJACSA.2024.0150310</id>
        <doi>10.14569/IJACSA.2024.0150310</doi>
        <lastModDate>2024-03-30T14:57:41.4530000+00:00</lastModDate>
        
        <creator>Ying LIU</creator>
        
        <creator>Gege Meng</creator>
        
        <creator>Naiyue ZHANG</creator>
        
        <subject>Handwriting recognition; offline author identification; deep convolutional neural network; image processing; language versatility; feature extraction; hierarchical model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This investigation presents a novel technique for offline author identification using handwriting samples across diverse experimental conditions, addressing the intricacies of various writing styles and the imperative for organizations to authenticate authorship. Notably, the study leverages inconsistent data and develops a method independent of language constraints. Utilizing a comprehensive dataset adhering to American Society for Testing and Materials (ASTM) standards, a deep convolutional neural network (DCNN) model, enhanced with pre-trained networks, extracts features hierarchically from raw manuscript data. The inclusion of heterogeneous data underscores a significant advantage of this study, while the applicability of the proposed DCNN model to multiple languages further highlights its versatility. Experimental results demonstrate the efficacy of the proposed method in author identification. Specifically, the proposed model outperforms conventional approaches across four comprehensive datasets, exhibiting superior accuracy. Comparative analysis with engineered features and traditional methods such as Support Vector Machine (SVM) and Backpropagation Neural Network (BPNN) underscores the superiority of the proposed technique, yielding approximately a 13% increase in identification accuracy while reducing reliance on expert knowledge. The validation results showcase the diminishing network error and increasing accuracy, with the proposed model achieving 99% accuracy after 200 iterations, surpassing the performance of the LeNet model. These findings underscore the robustness and utility of the proposed technique in diverse applications, positioning it as a valuable asset for handwriting recognition experts.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_10-Offline_Author_Identification_using_Non_Congruent_Handwriting_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Performance Prediction with Random Forest and Innovative Optimizers: A Data Mining Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150308</link>
        <id>10.14569/IJACSA.2024.0150308</id>
        <doi>10.14569/IJACSA.2024.0150308</doi>
        <lastModDate>2024-03-30T14:57:41.4400000+00:00</lastModDate>
        
        <creator>Yanli Chen</creator>
        
        <creator>Ke Jin</creator>
        
        <subject>Student performance; Random Forest Classification; Victoria Amazonia; Phasor Particle Swarm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In the ever-evolving landscape of education, institutions grapple with the intricate task of evaluating individual capabilities and forecasting student performance. Providing timely guidance becomes pivotal, steering students toward specific areas for focused academic enhancement. Within the educational domain, the utilization of data mining emerges as a powerful tool, revealing latent patterns within vast datasets. This study adopts the Random Forest classifier (RFC) for predicting student performance, bolstered by the integration of two innovative optimizers—Victoria Amazonia Optimization (VAO) and Phasor Particle Swarm Optimizer (PPSO). A notable contribution of this research lies in the introduction of these novel optimizers to augment the model&#39;s accuracy, elevating the precision of predictions. Robust evaluation metrics, including Accuracy, Precision, Recall, and F1-score, meticulously gauge the model&#39;s effectiveness in this context. Remarkably, the results underscore the supremacy of RFC+VAO, showcasing exceptional values for Accuracy (0.934), Precision (0.940), Recall (0.930), and F1-score (0.930). This substantiates the significant contribution of integrating VAO into the Random Forest framework, promising substantial advancements in predictive analytics for educational institutions. The findings not only accentuate the efficacy of the proposed methodology but also herald a new era of precision and reliability in predicting student performance, thereby enriching the landscape of educational data analytics.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_8-Educational_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Deep Learning Approach for the Early Detection of Degenerative Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150309</link>
        <id>10.14569/IJACSA.2024.0150309</id>
        <doi>10.14569/IJACSA.2024.0150309</doi>
        <lastModDate>2024-03-30T14:57:41.4400000+00:00</lastModDate>
        
        <creator>Chairani </creator>
        
        <creator>Suhendro Y. Irianto</creator>
        
        <creator>Sri Karnila</creator>
        
        <creator>Adimas</creator>
        
        <subject>Degenerative diseases; brain tumor; fuzzy; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Degenerative diseases can impact individuals of any age, including children and teenagers; however, they typically affect adults of working age. Globally, conventional and advanced diagnostic methods, including those developed in Indonesia, have emerged to identify and manage these health conditions. A central problem in brain tumor detection is the intricate process of precisely and efficiently identifying the presence of tumors in the brain. Moreover, diagnosing brain tumors in the laboratory poses issues of time consumption, inaccuracy, inconsistency, and cost. This study concentrates on the early detection of brain tumors by analyzing images generated through MRI scans. Unlike the traditional method of manual image analysis conducted by seasoned physicians, our approach integrates fuzzy logic to enable the early identification of brain tumors. The principal objective of this research is to enhance understanding and develop an intelligent, swift, and precise application for diagnosing brain tumors using medical imaging. The segmentation technique provides practical technology for the early detection of brain tumors. Utilizing a dataset comprising over 13,000 data points and a year-long training process with approximately 1,310 MRI images, the research culminates in a software application for the analysis of medical images. Despite the impressive precision score of 0.9992, highlighting exceptional accuracy in correctly identifying positive instances, the recall value of 0.5767 suggests that a significant number of actual positive instances may be excluded from its predictions.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_9-Fuzzy_Deep_Learning_Approach_for_the_Early_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing Regression Models to Predict Property Crime in High-Risk Lima Districts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150307</link>
        <id>10.14569/IJACSA.2024.0150307</id>
        <doi>10.14569/IJACSA.2024.0150307</doi>
        <lastModDate>2024-03-30T14:57:41.4230000+00:00</lastModDate>
        
        <creator>Maria Escobedo</creator>
        
        <creator>Cynthia Tapia</creator>
        
        <creator>Juan Gutierrez</creator>
        
        <creator>Victor Ayma</creator>
        
        <subject>Supervised techniques; machine learning; regression; crime; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Crime continues to be an issue affecting society in Metropolitan Lima, Peru. Our focus is on property crimes, for which we identified a lack of predictive studies. To tackle this problem, we used regression techniques such as XGBoost, Extra Trees, Support Vector, Bagging, Random Forest, and AdaBoost. Through GridSearchCV we optimized hyperparameters to enhance our research findings. The results showed that Extra Trees regression stood out as the best model, with an R2 value of 0.79. Additionally, error metrics such as MSE (185.43), RMSE (13.62), and MAE (10.47) were considered to evaluate the model&#39;s performance. Our approach considers time patterns in crime incidents and contributes meaningfully to addressing the issue of insecurity.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_7-Comparing_Regression_Models_to_Predict_Property_Crime.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Generative AI for Advancing Agile Software Development and Mitigating Project Management Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150306</link>
        <id>10.14569/IJACSA.2024.0150306</id>
        <doi>10.14569/IJACSA.2024.0150306</doi>
        <lastModDate>2024-03-30T14:57:41.4070000+00:00</lastModDate>
        
        <creator>Anas BAHI</creator>
        
        <creator>Jihane GHARIB</creator>
        
        <creator>Youssef GAHI</creator>
        
        <subject>Artificial intelligence; software engineering; Agile software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Agile software development emphasizes iterative progress, adaptability, and stakeholder collaboration. It champions flexible planning, continuous improvement, and rapid delivery, aiming to respond swiftly to change and deliver value efficiently. Integrating Generative Artificial Intelligence (AI) into Agile software development processes presents a promising avenue for overcoming project management challenges and enhancing the efficiency and effectiveness of software development endeavors. This paper explores the potential benefits of leveraging Generative AI in Agile methodologies, aiming to streamline development workflows, foster innovation, and mitigate common project management challenges. By harnessing the capabilities of Generative AI for tasks such as code generation, automated testing, and predictive analytics, Agile teams can augment their productivity, accelerate delivery cycles, and improve the quality of software products. Additionally, Generative AI offers opportunities for enhancing collaboration, facilitating decision-making, and addressing uncertainties inherent in Agile project management. Through an in-depth analysis of the integration of Generative AI within Agile frameworks, this paper provides insights into how organizations can harness the transformative potential of AI to advance Agile software development practices and navigate the complexities of modern software projects more effectively.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_6-Integrating_Generative_AI_for_Advancing_Agile_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generative Adversarial Neural Networks for Realistic Stock Market Simulations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150305</link>
        <id>10.14569/IJACSA.2024.0150305</id>
        <doi>10.14569/IJACSA.2024.0150305</doi>
        <lastModDate>2024-03-30T14:57:41.4070000+00:00</lastModDate>
        
        <creator>Badre Labiad</creator>
        
        <creator>Abdelaziz Berrado</creator>
        
        <creator>Loubna Benabbou</creator>
        
        <subject>Limit order book simulations; transformers; wasserstein GAN with gradient penalty; FI-2010 benchmark dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Stock market simulations are widely used to create synthetic environments for testing trading strategies before deploying them to real-time markets. However, the weak realism often found in these simulations presents a significant challenge. Improving the quality of stock market simulations could be facilitated by the availability of rich and granular real Limit Order Books (LOB) data. Unfortunately, access to LOB data is typically very limited. To address this issue, a framework based on Generative Adversarial Networks (GAN) is proposed to generate synthetic realistic LOB data. This generated data can then be utilized for simulating downstream decision-making tasks, such as testing trading strategies, conducting stress tests, and performing prediction tasks. To effectively tackle challenges related to the temporal and local dependencies inherent in LOB structures and to generate highly realistic data, the framework relies on a specific data representation and preprocessing scheme, transformers, and conditional Wasserstein GAN with gradient penalty. The framework is trained using the FI-2010 benchmark dataset and an ablation study is conducted to demonstrate the importance of each component of the proposed framework. Moreover, qualitative and quantitative metrics are proposed to assess the quality of the generated data. Experimental results indicate that the framework outperforms existing benchmarks in simulating realistic market conditions, thus demonstrating its effectiveness in generating synthetic LOB data for diverse downstream tasks.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_5-Generative_Adversarial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Experience and Behavioural Adaptation Based on Repeated Usage of Vehicle Automation: Online Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150304</link>
        <id>10.14569/IJACSA.2024.0150304</id>
        <doi>10.14569/IJACSA.2024.0150304</doi>
        <lastModDate>2024-03-30T14:57:41.3930000+00:00</lastModDate>
        
        <creator>Naomi Y. Mbelekani</creator>
        
        <creator>Klaus Bengler</creator>
        
        <subject>Automated vehicles; automation effects; user experience (UX); trust; acceptance; behavioural adaptations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>For years, Level 2 vehicle automation systems (VAS) have been commercially available, yet the extent to which users comprehend their capabilities and limitations remains largely unclear. This study aimed to evaluate user knowledge regarding Level 2 VAS and explore the correlation between user experiences (UX), behavioural adaptations, trust, and acceptance. By using an online survey, we sought to deepen understanding of how UX, trust, and acceptance of Level 2 automated vehicles (AVs) evolve with prolonged use in urban traffic. The survey, comprising demographic data and knowledge inquiries (automated driving experience and timeframes, vehicle operation competency, driving skills over long-term use of automation, the learning process, automation-induced effects, trust in automation, and ADS researchers and manufacturers), was completed by various drivers (N=16). This investigation focused on users&#39; long-term experiences with automation in urban traffic. Consequently, we offer user-centric transformative insights into users&#39; experiences with driving automation in urban traffic settings. Results revealed that users’ knowledge of automation reflects their learning patterns, trust, and acceptance. Moreover, users’ attitudes, trust, and acceptance vary across different user profiles. What we have also learned about UX and the changing nature of user behaviours towards automation is that automated driving changes influence the safety and risk conditions in which users and AVs interact. These findings can inform the development of interaction design strategies and policy aimed at enhancing the UX of AV users.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_4-User_Experience_and_Behavioural_Adaptation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Aware Decision Making: The Effect of Privacy Nudges on Privacy Awareness and the Monetary Assessment of Personal Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150303</link>
        <id>10.14569/IJACSA.2024.0150303</id>
        <doi>10.14569/IJACSA.2024.0150303</doi>
        <lastModDate>2024-03-30T14:57:41.3770000+00:00</lastModDate>
        
        <creator>Vera Schmitt</creator>
        
        <creator>James Nicholson</creator>
        
        <creator>Sebastian Moller</creator>
        
        <subject>Privacy protection; privacy policy analysis; GDPR; willingness to pay; privacy awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>Nowadays, smartphones are equipped with various sensors collecting a huge amount of sensitive personal information about their users. However, it remains hidden from smartphone users which sensitive information is accessed by the applications they use and by data requesters. Moreover, governmental institutions have no means to verify whether applications requesting sensitive information are compliant with the General Data Protection Regulation (GDPR), as it is infeasible to check the technical details and data requested by applications that are on the market. Thus, this research aims to shed light on the compliance analysis of applications with the GDPR. Therefore, a multidimensional analysis is applied to analyze the permission requests of applications and to empirically test whether the information provided about potentially dangerous permissions influences users&#39; privacy awareness and their willingness to pay for or sell personal data. The use case of Google Maps has been chosen to examine privacy awareness and the monetary assessment of data in a concrete scenario. The information about the multidimensional analysis of the permission requests of Google Maps and the privacy consent form is used to design privacy nudges to inform users about potentially harmful permission requests that are not in line with the GDPR. The privacy nudges are evaluated in two crowdsourcing experiments with overall 426 participants, showing that information about harmful data collection practices increases privacy awareness and also the willingness to pay for the protection of personal data.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_3-Privacy_Aware_Decision_Making_The_Effect_of_Privacy_Nudges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Gait Motion Sensor Mobile Authentication with Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150302</link>
        <id>10.14569/IJACSA.2024.0150302</id>
        <doi>10.14569/IJACSA.2024.0150302</doi>
        <lastModDate>2024-03-30T14:57:41.3770000+00:00</lastModDate>
        
        <creator>Sara Kokal</creator>
        
        <creator>Mounika Vanamala</creator>
        
        <creator>Rushit Dave</creator>
        
        <subject>Machine learning; machine learning algorithms; behavioral biometrics; gait dynamics; motion sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>In recent decades, mobile devices have evolved in potential and prevalence significantly while advancements in security have stagnated. As smartphones now hold unprecedented amounts of sensitive data, there is an increasing need to resolve this gap in security. To address this issue, researchers have experimented with biometric-based authentication methods to improve smartphone security. Following a comprehensive review, it was found that gait-based mobile authentication is under-researched compared to other behavioral biometrics. This study aims to contribute to the knowledge of biometric and gait-based authentication through the analysis of recent gait datasets and their potential with machine learning algorithms. Two recently published gait datasets were used with algorithms such as Random Forest, Decision Tree, and XGBoost to successfully differentiate users based on their respective walking features. Throughout this paper, the datasets, methodology, algorithms, experimental results, and goals for future work will be described.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_2-Analysis_of_Gait_Motion_Sensor_Mobile_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Intrusion Detection in Cloud Environments: A Comparative Analysis of Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150301</link>
        <id>10.14569/IJACSA.2024.0150301</id>
        <doi>10.14569/IJACSA.2024.0150301</doi>
        <lastModDate>2024-03-30T14:57:41.3600000+00:00</lastModDate>
        
        <creator>Sina Ahmadi</creator>
        
        <subject>Cloud networking; cloud security; firewall; intrusion detection; NIDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(3), 2024</description>
        <description>This research study comprehensively analyzes network intrusion detection in cloud environments by examining several approaches. These approaches have been explored and compared to determine the optimal and appropriate choice based on specific conditions. This research study employs a qualitative approach, specifically conducting a thematic literature analysis from 2020 to 2024. The research material has been exclusively obtained via Google Scholar. The traditional approaches identified in this research include anomaly-based and signature-based detection, along with innovative technologies and methods such as user behavior monitoring and machine learning. The findings of these studies demonstrate the effectiveness of conventional methods in detecting known threats, but also show that these methods struggle to identify novel attacks, underscoring the need for hybrid approaches that integrate the strengths of both. In this research study, the authors have addressed challenges such as privacy compliance, performance scalability, and false positives, highlighting the importance of continuous monitoring, privacy-preserving technologies, and real-time threat intelligence integration. This study also highlights the importance of stakeholder buy-in and staff training for the successful implementation of a network intrusion detection system (NIDS), especially given the evolving nature of cyber threats. This study concludes by defining a balanced approach combining new and old methodologies to offer an effective defense against diverse cyber threats in cloud environments. The future scope of NIDS in cloud environments has also been discussed, including enhancing privacy compliance capabilities and integrating AI-driven anomaly detection to meet emerging threats and regulatory requirements.</description>
        <description>http://thesai.org/Downloads/Volume15No3/Paper_1-Network_Intrusion_Detection_in_Cloud_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Big Data Task Scheduling Optimization Algorithm Based on Improved Deep Q-Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01502103</link>
        <id>10.14569/IJACSA.2024.01502103</id>
        <doi>10.14569/IJACSA.2024.01502103</doi>
        <lastModDate>2024-03-04T06:05:02.5930000+00:00</lastModDate>
        
        <creator>Fu Chen</creator>
        
        <creator>Chunyi Wu</creator>
        
        <subject>Big data; Task scheduling; Policy gradient; Deep Q-network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Big data analysis can provide valuable insights not easily obtained from traditional data scales. However, addressing scheduling issues in big data can be challenging due to the vast amount and diverse nature of the data. To overcome this, a scheduling model based on the Markov decision process is proposed. The deep Q-network algorithm is used for directed acyclic graph task scheduling. To improve this model further, the gradient strategy algorithm is introduced. From the results, when the dataset size was about 500, the hybrid algorithm achieved a recall rate of 0.96, outperforming the gradient strategy algorithm (0.83), deep Q-network algorithm (0.79), and estimated earliest completion time algorithm (0.63). Across different dataset sizes, the estimated earliest completion time algorithm had the longest training times, while the hybrid algorithm&#39;s training time was slightly longer than that of the gradient strategy algorithm and slightly shorter than that of the deep Q-network algorithm. Overall, the proposed algorithm exhibits superior performance and significant value in solving engineering problems.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_103-Design_of_Big_Data_Task_Scheduling_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of an Art Education Teaching System Assisted by Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01502102</link>
        <id>10.14569/IJACSA.2024.01502102</id>
        <doi>10.14569/IJACSA.2024.01502102</doi>
        <lastModDate>2024-03-04T06:05:02.5600000+00:00</lastModDate>
        
        <creator>Xianyu Wang</creator>
        
        <creator>Xiaoguang Sun</creator>
        
        <subject>Feature extraction; BP neural network; Tone recognition; smart art teaching; MEL frequency cepstral coefficient; MFCC algorithm; time frequency characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>With the continuous progress of art education and artificial intelligence technology, traditional music teaching models are facing transformation. This article aims to construct an art education and teaching system based on artificial intelligence, especially for teaching music sound recognition. Through in-depth research, we have designed a music sound recognition system that uses Mel frequency cepstral coefficient (MFCC) for feature parameter extraction, and combines BP neural network algorithm to construct a music sound learning model. The main purpose is to improve the efficiency and accuracy of music teaching through artificial intelligence technology. The main challenge we face in this process is how to effectively extract the features of music sounds and accurately identify different tones through algorithms. By using the MFCC algorithm, we have successfully solved this problem as it can effectively describe the time-frequency characteristics of music sound. Our proposed music sound learning model is based on a BP neural network, which trains the network to learn the mapping relationship between music sound and pitch. The experiment used piano sound as an example to verify the accuracy and reliability of the system. The simulation experiments conducted in MATLAB environment show that our system can accurately recognize and extract the main frequency of music, and has higher performance compared to traditional methods.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_102-Construction_of_an_Art_Education_Teaching_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Load Balance in Fog Nodes by Reinforcement Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01502101</link>
        <id>10.14569/IJACSA.2024.01502101</id>
        <doi>10.14569/IJACSA.2024.01502101</doi>
        <lastModDate>2024-03-04T06:05:02.3270000+00:00</lastModDate>
        
        <creator>Hongwei DING</creator>
        
        <creator>Ying ZHANG</creator>
        
        <subject>Fog computing; resource allocation; reinforcement learning; delay; load balancing; fog nodes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Fog computing is a distributed computing paradigm that brings cloud services out to the network&#39;s edge. Fog nodes process real-time user queries and data streams. Tasks must be shared among fog nodes in a balanced manner in order to maximize speed and efficiency, optimize resource utilization, and minimize response time; hence, in this article, a novel approach to improving load balancing in fog computing is presented. In the suggested algorithm, a task submitted to a fog node via a mobile device is, based on a reinforcement learning decision, either processed by that fog node, passed on to a neighboring fog node, or handed over to the cloud. According to the simulation findings, the suggested algorithm achieved a reduced execution time compared to other approaches by properly allocating the work among the nodes. Consequently, the suggested technique reduced the chance of incorrect job assignment by 24.02% and the response time to the user by 31.60% when compared to similar methods.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_101-Improving_Load_Balance_in_Fog_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosing Autism Spectrum Disorder in Pediatric Patients via Gait Analysis using ANN and SVM with Electromyography Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01502100</link>
        <id>10.14569/IJACSA.2024.01502100</id>
        <doi>10.14569/IJACSA.2024.01502100</doi>
        <lastModDate>2024-02-28T12:51:31.7870000+00:00</lastModDate>
        
        <creator>Rozita Jailani</creator>
        
        <creator>Nur Khalidah Zakaria</creator>
        
        <creator>M. N. Mohd Nor</creator>
        
        <creator>Heru Supriyono</creator>
        
        <subject>Autism Spectrum Disorder; Electromyography signals; Artificial Neural Network; Support Vector Machine; precision health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Autism Spectrum Disorder (ASD) is a permanent neurological maturation condition that impacts communication, social interaction, and behavior. It is also associated with atypical walking patterns. This study aims to create an automated classification model to distinguish ASD children during walking based on muscle Electromyography (EMG) signals. The study involved 35 children diagnosed with ASD and an equal number of typically developing (TD) children, all aged between 6 and 13 years. The Trigno Wireless EMG System was used to collect EMG signals from specific muscles in the lower limb (Biceps Femoris - BF, Rectus Femoris - RF, Tibialis Anterior - TA, Gastrocnemius - GAS) and the arm (Biceps Brachii - BB, Triceps Brachii - TB) on the left side. The dataset contained 42 features derived from the analysis of six muscles across seven distinct walking phases throughout a single gait cycle. To identify the most significant features influencing walking in ASD children, the Mann-Whitney Test was utilized for feature selection, uncovering five significantly distinctive features within the EMG signals between children with ASD and those who were typically developing. The most notable EMG features were subsequently employed in constructing classification models, namely an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), aimed at distinguishing between children with ASD and those who were typically developing. The results indicated that the SVM classifier outperformed the ANN classifier, achieving an accuracy rate of 75%. This discovery shows potential for employing EMG signal analysis and classification model algorithms in diagnosing autism, thereby advancing precision health.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_100-Diagnosing_Autism_Spectrum_Disorder_in_Pediatric_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Management System of IoT Informatization Training Room Based on Improved YOLOV4 Detection and Recognition Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150299</link>
        <id>10.14569/IJACSA.2024.0150299</id>
        <doi>10.14569/IJACSA.2024.0150299</doi>
        <lastModDate>2024-02-28T12:51:31.7530000+00:00</lastModDate>
        
        <creator>Huiling Hu</creator>
        
        <subject>YOLOV4 algorithm; Internet of Things informatization; training room; management system; detection and recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In response to the problems of low recognition rates and long operation times in equipment detection for existing IoT informatization training room management systems, this research proposes an IoT informatization training room equipment detection and management system based on an improved YOLOV4 detection and recognition algorithm. Firstly, the YOLOV4 algorithm is used to detect and identify equipment in the IoT informatization training room. Then, clustering methods are used to improve the YOLOV4 algorithm, enhancing its detection accuracy and robustness, and thereby improving the performance of the equipment management system for the training room. Finally, the performance of the training room management system was validated using datasets and simulation experiments. The results showed that the loss value of the training room equipment management system constructed using the improved YOLOV4 algorithm during training was 0.16. The accuracy and recall rates of device recognition were 95.71% and 92.83%, respectively, the false alarm rate during device detection and recognition was only 2.15%, and the mAP value was 91.66%; all detection and recognition indicators exceeded those of the comparison methods. This indicates that the training room equipment management system constructed in this study adapts well to equipment detection and recognition in IoT informatization training rooms. The research aims to provide effective technical support for the management of IoT training room equipment.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_99-The_Management_System_of_IoT_Informatization_Training_Room.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm Based on Priority Rules for Solving a Multi-drone Routing Problem in Hazardous Waste Collection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150298</link>
        <id>10.14569/IJACSA.2024.0150298</id>
        <doi>10.14569/IJACSA.2024.0150298</doi>
        <lastModDate>2024-02-28T12:51:31.7400000+00:00</lastModDate>
        
        <creator>Youssef Harrath</creator>
        
        <creator>Jihene Kaabi</creator>
        
        <creator>Eman Alaradi</creator>
        
        <creator>Manar Alnoaimi</creator>
        
        <creator>Noor Alawadhi</creator>
        
        <subject>Drones; trip assignment; priority rules; flying capacity; load balance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This research investigates the problem of assigning pre-scheduled trips to multiple drones to collect hazardous waste from different sites in the minimum time. Each drone is subject to two essential restrictions: maximum flying capacity and recharge operation. The goal is to assign the trips to the drones so that the waste is collected in the minimum time, which is achieved when the total flying time is equally distributed among the drones. An algorithm was developed to solve the problem, based on two main ideas: sort the trips according to a given priority rule and assign the current trip to the first available drone. Three different priority rules were tested: Shortest Flying Time, Longest Flying Time, and Median Flying Time. Two recharging conditions were considered: recharging only for the needed time and recharging for the full duration. Combining each priority rule with each recharging condition yields six versions of the algorithm, all of which were implemented in the Java programming language. The results were analyzed and compared, showing that the Longest Flying Time priority rule surpasses the other two rules. Moreover, recharging a drone just enough to take the next trip proved better than fully recharging it.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_98-An_Algorithm_Based_on_Priority_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Bandwidth Reservation Decision Time in Vehicular Networks using Batched LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150297</link>
        <id>10.14569/IJACSA.2024.0150297</id>
        <doi>10.14569/IJACSA.2024.0150297</doi>
        <lastModDate>2024-02-28T12:51:31.7230000+00:00</lastModDate>
        
        <creator>Abdullah Al-khatib</creator>
        
        <creator>Klaus Moessner</creator>
        
        <creator>Holger Timinger</creator>
        
        <subject>Networked vehicular application; time-sensitive networking; network reservation; batched LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Time-sensitive and safety-critical networked vehicular applications, such as autonomous driving, require deterministic guaranteed resources. This is achieved through advance individual bandwidth reservations. The efficient timing of a vehicle's decision to place a cost-efficient reservation request is crucial, as vehicles typically lack sufficient information about future bandwidth resource availability and costs. Bandwidth costs are often predicted using time-series machine learning models such as Long Short-Term Memory (LSTM). However, standard LSTM models typically require longer durations of multiple input data sets to achieve high accuracy. In certain scenarios, quick decisions must be made, even if this means sacrificing some accuracy. We propose a batched LSTM model to assist vehicles in placing bandwidth reservation requests with limited data for an upcoming driving path. The model divides data during training to enhance computational efficiency and model performance. We validated our model using historical Amazon price data, providing a real-world scenario for the experiment. The results demonstrate that the batched LSTM model not only achieves higher accuracy within a short input data duration but also significantly reduces bandwidth costs by up to 27% compared to traditional time-series machine learning models.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_97-Optimizing_Bandwidth_Reservation_Decision_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model for Ischemic Stroke Brain Segmentation from MRI Images using CBAM and ResNet50-Unet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150296</link>
        <id>10.14569/IJACSA.2024.0150296</id>
        <doi>10.14569/IJACSA.2024.0150296</doi>
        <lastModDate>2024-02-28T12:51:31.6930000+00:00</lastModDate>
        
        <creator>Fathia ABOUDI</creator>
        
        <creator>Cyrine DRISSI</creator>
        
        <creator>Tarek KRAIEM</creator>
        
        <subject>Medical image segmentation; ischemic stroke disease; UNet; ResNet50; convolution block attention module; magnetic resonance imaging; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Ischemic stroke is the most prevalent type of stroke and a leading cause of mortality and long-term impairment globally. Timely identification, precise localization, and early detection of ischemic stroke brain lesions are critical in healthcare. Various modalities are employed for detection, and magnetic resonance imaging stands out as the most effective. Different magnetic resonance imaging techniques have been proposed for the detection of ischemic stroke lesions, allowing for image uploading and visualization. Automated segmentation of ischemic stroke lesions from magnetic resonance images plays an important role in the analysis, prognosis, diagnosis, and clinical treatment planning of some neurological diseases. Recently, computer-aided diagnosis systems based on deep learning techniques have demonstrated significant promise in medical image analysis, particularly in multi-modality medical image segmentation. Automated segmentation is a difficult task due to the enormous quantity of data provided by magnetic resonance imaging and the variation in the location and size of the lesion. In this study, we develop an automated computer-aided diagnosis system for the automatic segmentation of ischemic stroke lesions from magnetic resonance images using a Convolution Block Attention Module (CBAM) and a hybrid UNet-ResNet50 model. The UNet model is integrated into the architecture, and the ResNet50 backbone is pre-trained to enhance feature extraction. The CBAM block is applied in this approach to extract the most effective feature maps. The proposed approach is evaluated on the public Ischemic Stroke Lesion Segmentation Challenge 2015 dataset, arranged into T1-weighted (T1), T2-weighted (T2), FLAIR, and DWI sequences. Experimental results demonstrate the efficacy of our approach, achieving an impressive accuracy of 99.56%, a precision of 97.12%, and a DC of 79.6%. Notably, our approach outperforms other state-of-the-art methods, particularly in terms of accuracy, highlighting its potential as a robust tool for automated ischemic stroke lesion segmentation in magnetic resonance imaging.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_96-A_Hybrid_Model_for_Ischemic_Stroke_Brain_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Aircraft Engine Failures using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150295</link>
        <id>10.14569/IJACSA.2024.0150295</id>
        <doi>10.14569/IJACSA.2024.0150295</doi>
        <lastModDate>2024-02-28T12:51:31.6770000+00:00</lastModDate>
        
        <creator>Asmae BENTALEB</creator>
        
        <creator>Kaoutar TOUMLAL</creator>
        
        <creator>Jaafar ABOUCHABAKA</creator>
        
        <subject>Aircraft engine failures; machine learning; predictive maintenance; C-MAPSS; aviation safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Nowadays, the aviation sector continues to develop, especially with the emergence of new technologies and solutions. Hence, there is an increasing demand for enhanced safety and operational efficiency in the aviation industry. To guarantee this safety, aircraft engines must be monitored, controlled, and maintained in an efficient way. Thus, the research community is working continuously to provide solutions that are efficient and cost-effective. Artificial intelligence, and more specifically machine learning models, have been employed to this end. This article presents solutions that implement predictive maintenance using machine learning models to predict aircraft engine failures, in order to avoid unscheduled maintenance operations and service disruptions.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_95-Predicting_Aircraft_Engine_Failures_using_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Internet of Things-based Predictive Maintenance Architecture for Intensive Care Unit Ventilators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150294</link>
        <id>10.14569/IJACSA.2024.0150294</id>
        <doi>10.14569/IJACSA.2024.0150294</doi>
        <lastModDate>2024-02-28T12:51:31.6600000+00:00</lastModDate>
        
        <creator>Oumaima Manchadi</creator>
        
        <creator>Fatima-Ezzahraa BEN-BOUAZZA</creator>
        
        <creator>Zineb El Otmani Dehbi</creator>
        
        <creator>Aymane Edder</creator>
        
        <creator>Idriss Tafala</creator>
        
        <creator>Mehdi Et-Taoussi</creator>
        
        <creator>Bassma Jioudi</creator>
        
        <subject>Internet of things; predictive maintenance; embedded machine learning; data analytics; failure modes; mechanical ventilator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Intensive care units (ICUs) commonly utilize mechanical ventilators to treat patients with different medical conditions, and these devices are crucial for patient care and survival. ICU ventilators have evolved through four distinct generations, each displaying unique features. Despite progress made since the 1940s, contemporary designs are insufficient to meet the increasing needs of patients and hospitals. Malfunctions in mechanical ventilators pose significant dangers to patients, highlighting the importance of focusing on their safety, security, precision, and dependability. Our study aims to address this issue. Furthermore, the IoT industry has garnered significant attention because of rapid progress in smart devices, sensors, and actuators. The healthcare industry has seen a notable increase in health data as a result of the growing utilization of IoT and cloud computing technologies. To sustain this growth, new models and distributed data analytics strategies must be developed to fully exploit the value of the vast datasets generated, including the incorporation of embedded machine learning. The study conducts Pareto and Failure Modes, Effects, and Criticality Analysis (FMECA) on ventilators in a specific hospital’s ICU, specifically those manufactured by the same company and unit. The analysis aims to identify the most critical and failure-prone component. Subsequently, we propose an IoT-focused framework for a predictive maintenance system implemented at the component level. The architecture comprises a monitoring framework and a data analytics module to predict potential system failures in advance, enhancing overall reliability.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_94-An_Internet_of_Things_based_Predictive_Maintenance_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Grape Leaf Disease Identification Through Transfer Learning and Hyperparameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150293</link>
        <id>10.14569/IJACSA.2024.0150293</id>
        <doi>10.14569/IJACSA.2024.0150293</doi>
        <lastModDate>2024-02-28T12:51:31.6470000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Phuc Pham Tien</creator>
        
        <creator>Huan Lam Le</creator>
        
        <subject>Grape disease recognition; disease identification; transfer learning; hyperparameter optimization; hyperband strategy; fine-tuning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Grapes are a globally cultivated fruit with significant economic and nutritional value, but they are susceptible to diseases that can harm crop quality and yield. Identifying grape leaf diseases accurately and promptly is vital for effective disease management and sustainable viticulture. To address this challenge, we employ a transfer learning approach, utilizing well-established pre-trained models such as ResNet50V2, ResNet152V2, MobileNetV2, Xception, and InceptionV3, renowned for their exceptional performance across various tasks. Our primary objective is to identify the most suitable network architecture for the classification of grape leaf diseases. This is achieved through a rigorous evaluation process that considers key metrics such as accuracy, F1 score, precision, recall, and loss. By systematically assessing these models, we aim to select the one that demonstrates the best performance on our dataset. Following model selection, we proceed to the crucial phase of fine-tuning the model’s hyperparameters. This fine-tuning process is essential to enhance the model’s predictive capabilities and overall effectiveness in disease identification. To accomplish this, we conduct an extensive hyperparameter search using the Hyperband strategy. Hyperparameters play a pivotal role in shaping the behavior and performance of deep learning models, and by systematically exploring a wide range of hyperparameter combinations, our goal is to identify the most optimal configuration that maximizes the model’s performance on the given dataset. Additionally, the study’s results were compared with those of numerous relevant studies.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_93-Optimizing_Grape_Leaf_Disease_Identification_Through_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing K-means Clustering Results with Gradient Boosting: A Post-Processing Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150292</link>
        <id>10.14569/IJACSA.2024.0150292</id>
        <doi>10.14569/IJACSA.2024.0150292</doi>
        <lastModDate>2024-02-28T12:51:31.6130000+00:00</lastModDate>
        
        <creator>Mousa Alzakan</creator>
        
        <creator>Hissah Almousa</creator>
        
        <creator>Arwa Almarzoqi</creator>
        
        <creator>Mohammed Alghasham</creator>
        
        <creator>Munirah Aldawsari</creator>
        
        <creator>Mohammed Al-Hagery</creator>
        
        <subject>K-means; gradient boosting; post-processing; misclassification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>As the volume and complexity of data continue to grow exponentially, finding efficient and accurate clustering algorithms has become crucial for many applications. K-means clustering is a widely used unsupervised machine learning technique for data analysis and pattern recognition. Despite its popularity, k-means suffers from certain limitations, such as sensitivity to initial conditions, difficulty in determining the optimal number of clusters, and the potential for misclassification. This research paper proposes an enhanced approach for improving the accuracy and performance of the k-means clustering algorithm by incorporating post-processing techniques using a gradient boosting algorithm. The proposed method comprises training the gradient boosting model on the labeled training set, i.e., the samples with correct cluster assignments obtained from the k-means algorithm, to predict the correct cluster assignments for the misclassified samples in the testing set. This results in refined cluster assignments for the testing set. The k-means algorithm is only used initially to cluster the data and obtain initial cluster assignments. The effectiveness of the proposed approach is validated through experiments on several benchmark datasets, and the results show a significant improvement in clustering accuracy and robustness compared to the standard k-means algorithm. The proposed approach has the potential to enhance the performance of k-means in various real-world applications and domains.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_92-Enhancing_K_means_Clustering_Results_with_Gradient_Boosting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation Process for Learning Outcome Predictions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150291</link>
        <id>10.14569/IJACSA.2024.0150291</id>
        <doi>10.14569/IJACSA.2024.0150291</doi>
        <lastModDate>2024-02-28T12:51:31.5970000+00:00</lastModDate>
        
        <creator>Minh-Phuong Han</creator>
        
        <creator>Trung-Tung Doan</creator>
        
        <creator>Minh-Hoan Pham</creator>
        
        <creator>Trung-Tuan Nguyen</creator>
        
        <subject>Machine learning; predictive learning outcomes; education; logistic regression; k-nearest neighbors; Gaussian Naive Bayes; Random Forest; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This paper presents a comprehensive study on the evaluation of algorithms for automating learning outcome predictions, with a focus on the application of machine learning techniques. We investigate various predictive models (logistic regression, random forest, Gaussian Naive Bayes, k-nearest neighbors, and support vector regression) to assess their efficacy in forecasting student performance in educational settings. Our experimental approach involves the application of these models to predict the outcomes of a specific course, analyzing their accuracy and reliability. We also highlight the significance of an automation process in facilitating the practical application of these predictive models. This study highlights the promise of machine learning in advancing educational assessment and paves the way for further investigations into enhancing the adaptability and inclusivity of algorithms in various educational settings.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_91-Automation_Process_for_Learning_Outcome_Predictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Modal Sentiment Analysis Based on CLIP Image-Text Attention Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150290</link>
        <id>10.14569/IJACSA.2024.0150290</id>
        <doi>10.14569/IJACSA.2024.0150290</doi>
        <lastModDate>2024-02-28T12:51:31.5830000+00:00</lastModDate>
        
        <creator>Xintao Lu</creator>
        
        <creator>Yonglong Ni</creator>
        
        <creator>Zuohua Ding</creator>
        
        <subject>Multi-modal; image-text interaction; multi-head attention mechanism; sentiment analysis; cross-modal fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Multimodal sentiment analysis extends traditional text-based sentiment analysis techniques. However, the field of multi-modal sentiment analysis still faces challenges such as inconsistent cross-modal feature information, poor interaction capabilities, and insufficient feature fusion. To address these issues, this paper proposes a cross-modal sentiment model based on CLIP image-text attention interaction. The model utilizes pre-trained ResNet50 and RoBERTa to extract primary image-text features. After contrastive learning with the CLIP model, it employs a multi-head attention mechanism for cross-modal feature interaction to enhance information exchange between different modalities. Subsequently, a cross-modal gating module is used to fuse feature networks, combining features at different levels while controlling feature weights. The final output is fed into a fully connected layer for sentiment recognition. Comparative experiments are conducted on the publicly available datasets MSVA-Single and MSVA-Multiple. The experimental results demonstrate that our model achieved accuracy rates of 75.38% and 73.95%, and F1-scores of 75.21% and 73.83% on the mentioned datasets, respectively. This indicates that the proposed approach exhibits higher generalization and robustness compared to existing sentiment analysis models.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_90-Cross_Modal_Sentiment_Analysis_Based_on_CLIP_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IMEO: Anomaly Detection for IoT Devices using Semantic-based Correlations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150289</link>
        <id>10.14569/IJACSA.2024.0150289</id>
        <doi>10.14569/IJACSA.2024.0150289</doi>
        <lastModDate>2024-02-28T12:51:31.5500000+00:00</lastModDate>
        
        <creator>Seungmin Oh</creator>
        
        <creator>Jihye Hong</creator>
        
        <creator>Daeho Kim</creator>
        
        <creator>Eun-Kyu Lee</creator>
        
        <creator>Junghee Jo</creator>
        
        <subject>Security; anomaly detection; semantics; Internet of Things; attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the Internet of Things (IoT) security, anomalies due to attacks or device malfunctions can have serious consequences in our daily lives. Previous solutions have been struggling with high rates of false alarms and missing many actual anomalies. They also take a long time to detect anomalies, even when they do so successfully. To overcome these limitations, this paper proposes a novel anomaly detection system, named IoT Malfunction Extraction Observer (IMEO), that utilizes semantics and correlation information for smart homes. Given IoT devices installed at home, IMEO creates virtual correlations based on semantic information such as applications, device types, relationships, and installation locations. The generated correlations are validated and improved using event logs extracted from smart home applications. The finally extracted correlations are then used to simulate the normal behaviors of the smart home. While comparing correlations and event logs, any discrepancy between the actual state of a device and the simulated state is reported as abnormal. IMEO also utilizes the observation that malfunctions of IoT devices occur repeatedly. An anomaly database is created and used so that repetitive malfunctions are quickly detected, which eventually reduces processing time. This paper builds a smart home testbed in a real-world residential house and deploys IoT devices. Six different types of anomalies are analyzed, synthesized, and injected into the testbed, with which IMEO’s detection performance is evaluated and compared with the state-of-the-art correlation-only detection method. Experimental results demonstrate that the proposed method achieves higher detection accuracy with faster processing time.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_89-IMEO_Anomaly_Detection_for_IoT_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Future Iris Imaging with Advanced Fuzzified Histogram Equalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150288</link>
        <id>10.14569/IJACSA.2024.0150288</id>
        <doi>10.14569/IJACSA.2024.0150288</doi>
        <lastModDate>2024-02-28T12:51:31.5370000+00:00</lastModDate>
        
        <creator>Nurul Amirah Mashudi</creator>
        
        <creator>Norulhusna Ahmad</creator>
        
        <creator>Rudzidatul Akmam Dziyauddin</creator>
        
        <creator>Norliza Mohd Noor</creator>
        
        <subject>Image enhancement; fuzzy logic; histogram equalization; CLAHE; iris recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Images captured under low lighting frequently exhibit low brightness, low contrast, and a small grayscale range. These features can affect the individual’s view and severely limit the performance of machine vision systems, particularly when data annotation is involved. Hence, these issues motivate this study to examine the effectiveness of advanced fuzzified histogram equalization for image enhancement. A comparative study was conducted on low-lighting iris images to evaluate three image enhancement methods: Advanced Fuzzified Histogram Equalization (AFHE), Contrast Limited Adaptive Histogram Equalization (CLAHE), and Fuzzy Contrast Enhancement (FCE), using the MIREIS dataset. The Gaussian membership functions (GMF) were modified accordingly to satisfy the suitable pixel intensity of the input iris images. The results were compared using the peak signal-to-noise ratio (PSNR) value, including the central processing unit (CPU) times. As a result, AFHE showed a better PSNR value of 76.02 dB with a faster CPU time of 4.04 s compared to CLAHE and FCE. Although the PSNR value of HE is slightly lower than CLAHE (0.3%) and FCE (0.7%), AFHE improved the image’s quality and brightness, which can help other researchers with the data annotation process. The performance of the proposed methods was validated by comparing them with state-of-the-art methods. The results demonstrated that AFHE, CLAHE, and FCE exceeded other HE, AHE, CLAHE, and hybrid fuzzy HE approaches on PSNR metrics.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_88-Future_Iris_Imaging_with_Advanced_Fuzzified_Histogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust License Plate Detection and Recognition Framework for Arabic Plates with Severe Tilt Angles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150287</link>
        <id>10.14569/IJACSA.2024.0150287</id>
        <doi>10.14569/IJACSA.2024.0150287</doi>
        <lastModDate>2024-02-28T12:51:31.5200000+00:00</lastModDate>
        
        <creator>Khaled Hefnawy</creator>
        
        <creator>Ahmed Lila</creator>
        
        <creator>Elsayed Hemayed</creator>
        
        <creator>Mohamed Elshenawy</creator>
        
        <subject>License plate detection; license plate recognition; feature extraction; Mask R-CNN; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This paper addresses the challenge of accurately detecting and recognizing Arabic license plates, particularly those subjected to severe tilt angles. It presents a robust license plate detection and recognition framework that consists of three main steps: plate detection and segmentation, plate perspective correction, and vehicle number recognition. In the first step, a Mask R-CNN model is used to detect the plate location, providing pixel-wise labels of identified plates’ areas. Following this, a perspective correction technique is used to obtain a clear and rectangular image of each license plate in the image. Lastly, the framework employs a Bidirectional Long Short-Term Memory (Bi-LSTM) model for accurate vehicle number recognition. The framework’s efficacy is demonstrated through its application to build a plate recognition system tailored for Egyptian license plates. The system was tested on a dataset collected from campus gate cameras at Zewail City of Science and Technology, achieving a character accuracy of 97%.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_87-A_Robust_License_Plate_Detection_and_Recognition_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cephalometric Landmarks Identification Through an Object Detection-based Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150286</link>
        <id>10.14569/IJACSA.2024.0150286</id>
        <doi>10.14569/IJACSA.2024.0150286</doi>
        <lastModDate>2024-02-28T12:51:31.5030000+00:00</lastModDate>
        
        <creator>Idriss Tafala</creator>
        
        <creator>Fatima-Ezzahraa Ben-Bouazza</creator>
        
        <creator>Aymane Edder</creator>
        
        <creator>Oumaima Manchadi</creator>
        
        <creator>Mehdi Et-Taoussi</creator>
        
        <creator>Bassma Jioudi</creator>
        
        <subject>Cephalometry; YOLOv8; landmark detection; orthodontics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the field of orthodontics, the accurate identification of cephalometric landmarks in dental radiography plays a crucial role in ensuring precise diagnoses and efficient treatment planning. Previous studies have demonstrated the impressive capabilities of advanced deep learning models in this particular domain. However, due to the ever-changing technological landscape, it is imperative to consistently investigate and explore emerging algorithms to further improve efficiency in this field. The present study centers around the assessment of the effectiveness of YOLOv8, the most recent version of the ’You Only Look Once (YOLO)’ algorithm series, with a particular emphasis on its autonomous capability to accurately identify cephalometric landmarks. In this study, a thorough examination was conducted to evaluate the YOLOv8 algorithm’s efficiency in detecting cephalometric landmarks. The assessments encompassed various aspects such as precision, adaptability in challenging conditions, and a comparative analysis with alternative algorithms. The predefined proximities of 2 mm, 2.5 mm, and 3 mm were utilized for the comparisons. By focusing on its potential as a noteworthy breakthrough, the investigation seeks to ascertain whether the recent enhancements indeed bring about a significant stride in the precise identification of cephalometric landmarks.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_86-Cephalometric_Landmarks_Identification_Through_an_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Dynamic Model and Bio-Inspired Feature Selection Method-based Decision Support System for Predicting Multiple Organ Dysfunction Syndrome in the ICU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150285</link>
        <id>10.14569/IJACSA.2024.0150285</id>
        <doi>10.14569/IJACSA.2024.0150285</doi>
        <lastModDate>2024-02-28T12:51:31.4900000+00:00</lastModDate>
        
        <creator>Anas Maach</creator>
        
        <creator>El Houssine El Mazoudi</creator>
        
        <creator>Jamila Elalami</creator>
        
        <creator>Noureddine Elalami</creator>
        
        <subject>Ensemble dynamic model; MODS prediction; decision support system; Bio-Inspired feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Multiple Organ Dysfunction Syndrome (MODS) is one of the most common and severe conditions affecting patients admitted to intensive care units (ICUs). It is characterized by the simultaneous failure or dysfunction of at least two organ systems. Although no specific remedy for MODS has been identified to date, early diagnosis and adequate organ support can significantly improve patient outcomes. Identifying patients at risk of developing MODS in the ICU is challenging. Currently, several methods are used for this purpose, including scoring systems like SOFA and the MOD Score, as well as machine learning-based approaches. However, these methods often have limitations. Some require invasive features, making them complex to use in a smart healthcare system. Others underperform for various reasons, which can lead to unreliable predictions. Feature selection can improve ML models’ performance. Recently, bio-inspired feature selection techniques have shown promise in improving the performance of machine learning methods in many domains, but their effectiveness in MODS prediction has not yet been evaluated. Additionally, research on early MODS prediction, particularly utilizing time-series data and dynamic ensemble methods, remains limited. To fill this gap, the present research used state-of-the-art machine learning algorithms, namely dynamic ensemble techniques, to predict patients at risk of developing MODS in the ICU. Dynamic ensembles are new methods that select an ensemble of the best-performing models for every new test case. We compared the performance of these models with full features and with feature selection. Three nature-inspired meta-heuristic optimization models, namely the binary bat algorithm (BBA), grey wolf optimization (GWO), and genetic algorithm (GA), were evaluated to select the optimal feature subset. The models were built using non-invasive patient features and time-series data from the first 12 hours of ICU admission. The results showed that feature selection significantly improved the performance of dynamic ensemble models. Notably, the METADES model, employing grey wolf optimization for feature selection, demonstrated the best performance in terms of accuracy (96.5%), F1 score (96.4%), precision (97.2%), recall (95.7%), and area under the ROC curve (AUC) (98.4%). These findings highlight the potential and effectiveness of our approach for early MODS prediction in ICUs.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_85-An_Ensemble_Dynamic_Model_and_Bio_Inspired_Feature_Selection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TPMN: Texture Prior-Aware Multi-Level Feature Fusion Network for Corrugated Cardboard Parcels Defect Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150284</link>
        <id>10.14569/IJACSA.2024.0150284</id>
        <doi>10.14569/IJACSA.2024.0150284</doi>
        <lastModDate>2024-02-28T12:51:31.4730000+00:00</lastModDate>
        
        <creator>Xing He</creator>
        
        <creator>Haoxiang Fan</creator>
        
        <creator>Cuifeng Du</creator>
        
        <creator>Xingyu Zhu</creator>
        
        <creator>Yuyu Zhou</creator>
        
        <creator>Renzhang Chen</creator>
        
        <creator>Zhefu Li</creator>
        
        <creator>Guihua Zheng</creator>
        
        <creator>Yuansheng Zhong</creator>
        
        <creator>Changjiang Liu</creator>
        
        <creator>Jiandan Yang</creator>
        
        <creator>Quanlong Guan</creator>
        
        <subject>Logistics; surface defect detection; multi-level feature fusion; prior attention; corrugated cardboard boxes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Surface defect detection is the task of identifying and localizing defects on the surface of an object, which is a widely applied task in various industries. In the logistics industry, logistics companies need to monitor the condition of goods for potential defects throughout the entire logistics process for effective logistics quality control. However, effective defect detection methods are still lacking for courier packages using corrugated cardboard boxes, which rely on judging whether deformation and leakage have occurred by examining areas on their surface with abundant texture. Specifically, the defect rate and supporting structure of the packages are influenced by temperature and humidity, and the openings and bends of defects are inconsistent. This results in defective packages having rich and non-uniform texture features. Moreover, convolutional neural networks struggle to effectively extract low-level semantic texture features of defects and perceive multi-level image features of packages. Considering the above challenges, we propose a novel texture prior-aware multi-level feature fusion network (TPMN). We first introduce prior knowledge and attention mechanisms to enable the neural network to focus on extracting low-level texture features from the image in the early stages. We also design a multi-level feature fusion method to integrate features from different levels, avoiding the gradual loss of low-level semantic information in CNN and enabling comprehensive perception of multi-level image features. To support further research, we contribute the cardboard-boxes-dataset, comprising 1210 images of packages. Experiments on this dataset showcase the superior performance of TPMN, even in few-shot learning scenarios, demonstrating its effectiveness in surface defect detection within the logistics and supply chain domains.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_84-TPMN_Texture_Prior_Aware_Multi_Level_Feature_Fusion_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on DDoS Attacks Classifying and Detection by ML/DL Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150283</link>
        <id>10.14569/IJACSA.2024.0150283</id>
        <doi>10.14569/IJACSA.2024.0150283</doi>
        <lastModDate>2024-02-28T12:51:31.4430000+00:00</lastModDate>
        
        <creator>Haya Malooh Alqahtani</creator>
        
        <creator>Monir Abdullah</creator>
        
        <subject>Classification; DDoS attacks; machine learning; cybersecurity; detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Internet security is under serious threat from Distributed Denial of Service (DDoS) attacks. These attacks inflict considerable damage by disrupting network services, impairing and even completely disabling system functions. The accurate classification and detection of DDoS attacks is therefore extremely important. We provide a review of different Machine Learning (ML)/Deep Learning (DL)-based DDoS attack detection models used by researchers, considering the different classifiers they employ. Our analysis indicates a heightened emphasis on ML-based classifiers, where 22% of studies opted for the widely recognized SVM classifier; among DL-based classifiers, 27% of the studies opted for the widely recognized CNN. While the majority of researchers formulated their own datasets, NSL-KDD was employed in 55% of the studies. In addition, we discuss the future directions and challenges of DDoS detection.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_83-A_Review_on_DDoS_Attacks_Classifying_and_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Attacks Detection in IoV using ML-based Models with an Enhanced Feature Selection Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150282</link>
        <id>10.14569/IJACSA.2024.0150282</id>
        <doi>10.14569/IJACSA.2024.0150282</doi>
        <lastModDate>2024-02-28T12:51:31.4270000+00:00</lastModDate>
        
        <creator>Ohoud Ali Albishi</creator>
        
        <creator>Monir Abdullah</creator>
        
        <subject>Random forest; IoV; DDoS; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The Internet of Vehicles (IoV) is an open and integrated network system with high reliability and security control capabilities. The system consists of vehicles, users, infrastructure, and related networks. Despite the many advantages of IoV, it is also vulnerable to various types of attacks due to the continuous and increasing growth of cyber security attacks. One of the most significant is the Distributed Denial of Service (DDoS) attack, in which an intruder or a group of attackers attempts to deny legitimate users access to a service. This attack is launched from many systems, and the attacker uses high-performance processing units. The most common DDoS attacks are User Datagram Protocol (UDP) Lag and SYN Flood. There are many solutions to deal with these attacks, but DDoS attacks require high-quality solutions. In this research, we explore how these attacks can be addressed through Machine Learning (ML) models. We propose a method for identifying DDoS attacks using ML models, evaluated on the CICDDoS2019 dataset, which contains instances of such attacks. This approach also provides a good estimate of the models’ performance based on the feature extraction strategy, while remaining computationally efficient when dividing the dataset into training and testing sets. For the UDP Lag attack, Decision Tree (DT) and Random Forest (RF) achieved the best results, with a precision, recall, and F1-score of 99.9%. For the SYN Flood attack, the best-tested ML models, including K-Nearest Neighbor (KNN), DT, and RF, demonstrated superior results with 99.9% precision, recall, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_82-DDoS_Attacks_Detection_in_IoV_using_ML_based_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prominent Security Vulnerabilities in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150281</link>
        <id>10.14569/IJACSA.2024.0150281</id>
        <doi>10.14569/IJACSA.2024.0150281</doi>
        <lastModDate>2024-02-28T12:51:31.4100000+00:00</lastModDate>
        
        <creator>Alanoud Alquwayzani</creator>
        
        <creator>Rawabi Aldossri</creator>
        
        <creator>Mounir Frikha</creator>
        
        <subject>Cloud computing; vulnerabilities; cloud security; cloud misconfigurations; data loss; threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This research study examines the significant security vulnerabilities and threats in cloud computing, analyzes their potential consequences for enterprises, and proposes effective solutions for mitigating these vulnerabilities. This paper discusses the increasing significance of cloud security in a time characterized by rapid data expansion and technological progress. The paper examines prevalent vulnerabilities in cloud computing, including cloud misconfigurations, data leakage, shared technology threats, and insider threats. It emphasizes the necessity of adopting a proactive and comprehensive approach to ensure cloud security. The report places significant emphasis on the shared responsibility paradigm, adherence to industry laws, and the dynamic nature of cybersecurity threats. The situation necessitates the cooperation of researchers, cybersecurity professionals, and enterprises to proactively address these difficulties. This partnership aims to provide a thorough manual for organizations aiming to bolster their cloud security measures and safeguard valuable data in an ever-evolving digital landscape.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_81-Prominent_Security_Vulnerabilities_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structure-Aware Scheduling Algorithm for Deadline-Constrained Scientific Workflows in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150280</link>
        <id>10.14569/IJACSA.2024.0150280</id>
        <doi>10.14569/IJACSA.2024.0150280</doi>
        <lastModDate>2024-02-28T12:51:31.3800000+00:00</lastModDate>
        
        <creator>Ali Al-Haboobi</creator>
        
        <creator>Gabor Kecskemeti</creator>
        
        <subject>Workflow scheduling; workflow structure; cloud computing; resource provisioning; deadline constrained; infrastructure as a service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Cloud computing provides pay-per-use IT services through the Internet. Although cloud computing resources can help scientific workflow applications, several algorithms face the problem of meeting the user’s deadline while minimising the cost of workflow execution. In the cloud, selecting the appropriate type and the exact number of VMs is a major challenge for scheduling algorithms, as tasks in workflow applications are distributed very differently. Depending on workflow requirements, algorithms need to decide when to provision or de-provision VMs. Therefore, this paper presents an algorithm for effectively selecting and allocating resources. Based on the workflow structure, it decides the type and number of VMs to use and when to lease and release them. For some structures, our proposed algorithm uses the initial rented VMs to schedule all tasks of the same workflow to minimise data transfer costs. We evaluate the performance of our algorithm by simulating it with synthetic workflows derived from real scientific workflows with different structures. Our algorithm is compared with Dyna and CGA approaches in terms of meeting deadlines and execution costs. The experimental results show that the proposed algorithm met all the deadline factors of each workflow, while the CGA and Dyna algorithms met 25% and 50%, respectively, of all the deadline factors of all workflows. The results also show that the proposed algorithm provides more cost-efficient schedules than CGA and Dyna.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_80-Structure_Aware_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Information Classification of IoT Perception Data Based on Density Peak Fast Search Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150279</link>
        <id>10.14569/IJACSA.2024.0150279</id>
        <doi>10.14569/IJACSA.2024.0150279</doi>
        <lastModDate>2024-02-28T12:51:31.3630000+00:00</lastModDate>
        
        <creator>Lin Chen</creator>
        
        <creator>Jinli Hu</creator>
        
        <creator>Weisheng Wang</creator>
        
        <subject>Clustering algorithm; Internet of Things; perceived data; classification; peak density; semantic information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the rapidly developing field of the Internet of Things, effective processing and analysis of perception data has become crucial. IoT perception data is usually large-scale, diverse, and high-dimensional, which poses new challenges to data clustering algorithms. This study utilizes the K-center point algorithm to optimize the density peak fast search clustering algorithm, proposes a new clustering algorithm, and applies it to the semantic classification of IoT perception data. Firstly, the K-center algorithm was used to optimize the cluster-center selection process of the density peak fast search clustering algorithm. Then, the optimized algorithm was applied to the automatic semantic classification model, establishing a new automatic semantic annotation model for IoT perception data. The research results showed that the classification accuracy of the proposed optimization algorithm reached 0.98, the running stability of the automatic semantic annotation model optimized with this algorithm reached 0.99, and the running time was as low as 1 s. In summary, the automatic semantic annotation model built in this study can effectively improve the efficiency and accuracy of semantic classification, thereby providing more accurate and efficient data support for intelligent services.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_79-Semantic_Information_Classification_of_IoT_Perception_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Simulation of Light Scattering Effects in the Atmosphere</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150278</link>
        <id>10.14569/IJACSA.2024.0150278</id>
        <doi>10.14569/IJACSA.2024.0150278</doi>
        <lastModDate>2024-02-28T12:51:31.3330000+00:00</lastModDate>
        
        <creator>Huiling Guo</creator>
        
        <creator>Xiliang Ren</creator>
        
        <creator>Jing Zhao</creator>
        
        <creator>Yong Tang</creator>
        
        <subject>Light scattering; ray marching; jittered sampling; color synthesis; real-time rendering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Atmospheric light scattering encompasses intricate physical processes, including diverse scattering mechanisms and optical parameters. Addressing the challenges posed by the computationally intensive task of simulating this phenomenon, this study introduces an efficient real-time simulation strategy. The proposed approach employs physics-driven atmospheric modeling, leveraging a unified phase function to emulate both Rayleigh and Mie scattering. The scattering integral is approximated and discretized using ray marching. Based on the characteristics of different light sources, accurate ray-marching lengths are determined, streamlining the computational trajectory of the light path. Additionally, the introduction of texture dithering enhances the randomness of the initial sampling positions. The Shadow Map algorithm is employed to generate shadow mapping textures, eliminating the need for light calculations within shadowed regions and thereby reducing the number of samples and the computational workload. Finally, color synthesis is used to determine the rendering color of the atmosphere under various fog density conditions. Experimental results show that this approach significantly improves rendering efficiency and achieves real-time rendering while maintaining a realistic light scattering effect compared with other advanced light scattering rendering methods.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_78-Efficient_Simulation_of_Light_Scattering_Effects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Occupancy Measurement in Under-Actuated Zones: YOLO-based Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150277</link>
        <id>10.14569/IJACSA.2024.0150277</id>
        <doi>10.14569/IJACSA.2024.0150277</doi>
        <lastModDate>2024-02-28T12:51:31.3170000+00:00</lastModDate>
        
        <creator>Ade Syahputra</creator>
        
        <creator>Yaddarabullah</creator>
        
        <creator>Mohammad Faiz Azhary</creator>
        
        <creator>Aedah Binti Abd Rahman</creator>
        
        <creator>Amna Saad</creator>
        
        <subject>YOLO; HVAC system; occupant’s position; occupant calculation; under-actuated zone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The challenge of accurately detecting and identifying individuals within under-actuated zones presents a relevant research problem in occupant detection. This study aims to address the challenge of occupant detection in under-actuated zones through the utilization of the You Only Look Once version 8 (YOLO v8) object detection model. The research methodology involves a comprehensive evaluation of YOLO v8&#39;s performance across three distinct zones, where its precision, accuracy, and recall capabilities in identifying occupants are rigorously assessed. The outcomes of this performance evaluation, expressed through quantitative metrics, provide compelling evidence of the efficacy of the YOLO v8 model in the context of occupant detection in under-actuated zones. Across these three diverse under-actuated zones, YOLO v8 consistently exhibits remarkable mean Average Precision (mAP) scores, achieving 99.2% in Zone 1, 78.3% in Zone 2, and 96.2% in Zone 3. These mAP scores serve as a testament to the model&#39;s precision, indicating its proficiency in accurately localizing and identifying occupants within each zone. Furthermore, YOLO v8 demonstrates impressive efficiency in executing occupant detection tasks. The model boasts rapid processing times, with all three zones being analyzed in a matter of milliseconds. Specifically, YOLO v8 achieves execution times of 0.004 seconds in both Zone 1 and Zone 3, while Zone 2, which entails slightly more computational effort, still maintains an efficient execution time of 0.024 seconds. This efficiency constitutes a pivotal advantage of YOLO v8, as it ensures expeditious and effective occupant detection in the context of under-actuated zones.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_77-Occupancy_Measurement_in_Under_Actuated_Zones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Animation Media Art Teaching Design Based on Big Data Fusion Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150275</link>
        <id>10.14569/IJACSA.2024.0150275</id>
        <doi>10.14569/IJACSA.2024.0150275</doi>
        <lastModDate>2024-02-28T12:51:31.2870000+00:00</lastModDate>
        
        <creator>Rongjuan Wang</creator>
        
        <creator>Yiran Tao</creator>
        
        <subject>Animation; big data fusion; classification regression tree algorithm; media art teaching system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Animation, as an ancient art form, continues to develop vigorously, and society&#39;s need for animation talent increases daily. This study first introduces the definition of animation and its development at home and abroad. After that, the principle and function theorem of the classification regression tree algorithm are described. Based on big data fusion technology, this study divides the data into original and new animations and establishes a media art teaching system with search, recommendation, and playback as its three cores. Additionally, iteration is used to calculate the optimal hidden semantic matrix, the benefits and drawbacks of the Sigmoid, Tanh, and ReLU functions are compared, and the ReLU function is chosen as the activation function. Compared with the loss value in the ideal case, the experimental findings meet the expected criteria, and the error rate predicted by the classification regression tree algorithm model falls within acceptable limits. In practice, the system model characterizes animation features best when the hidden factor dimension is 12. The comparison shows that the non-standard collaborative filtering recommendation system is inferior to recommendations filtered by the classification regression tree algorithm model. Following use of the system, the students&#39; drawing and directing abilities, animation scope, and animation appreciation level all improved significantly. A questionnaire survey concluded that the teachers and students of animation majors in universities were satisfied with the system.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_75-Animation_Media_Art_Teaching_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Human Action Recognition and Medical Image Segmentation using GRU Networks with V-Net Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150276</link>
        <id>10.14569/IJACSA.2024.0150276</id>
        <doi>10.14569/IJACSA.2024.0150276</doi>
        <lastModDate>2024-02-28T12:51:31.2870000+00:00</lastModDate>
        
        <creator>Dustakar Surendra Rao</creator>
        
        <creator>L. Koteswara Rao</creator>
        
        <creator>Vipparthi Bhagyaraju</creator>
        
        <creator>P. Rohini</creator>
        
        <subject>Human action recognition; medical image segmentation; gated recurrent unit; V-net architecture; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This study presents a novel framework that leverages advanced neural network architectures to improve Medical Image Segmentation and Human Action Recognition (HAR). Gated Recurrent Units (GRUs) are used in the HAR domain to efficiently capture complex temporal correlations in video sequences, yielding better accuracy, precision, recall, and F1 score than current models. In computer vision and medical imaging, the current research environment highlights the significance of advanced techniques, especially when addressing problems like computational complexity, resilience, and noise in real-world applications. Improved medical image segmentation and HAR are of growing interest. While methods such as the V-Net architecture for medical image segmentation and Spatial Temporal Graph Convolutional Networks (ST-GCNs) for HAR have shown promise, they are constrained by factors such as processing requirements and noise sensitivity. The suggested methods highlight the necessity of sophisticated neural network topologies and optimisation techniques for medical image segmentation and HAR, with further study focusing on transfer learning and attention mechanisms. A Python tool has been implemented to perform min-max normalization, utilize GRUs for human action recognition, employ V-Net for medical image segmentation, and optimize with the Adam optimizer, with performance evaluation metrics integrated for comprehensive analysis. This study provides an optimised GRU network strategy for Human Action Recognition with 92% accuracy, and a V-Net-based method for Medical Image Segmentation with 88% Intersection over Union and a 92% Dice Coefficient.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_76-Advancing_Human_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Recommendation Algorithm Based on Trajectory Mining Model in Intelligent Travel Route Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150274</link>
        <id>10.14569/IJACSA.2024.0150274</id>
        <doi>10.14569/IJACSA.2024.0150274</doi>
        <lastModDate>2024-02-28T12:51:31.2530000+00:00</lastModDate>
        
        <creator>Jingya Shi</creator>
        
        <creator>Qianyao Sun</creator>
        
        <subject>Trajectory mining; personalized recommendations; travel routes; genetic algorithm; visiting sequence of scenic spots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>With the increasing demand for personalized travel, traditional travel route planning methods can no longer meet the diverse needs of users. In view of this, based on the analysis of user trajectory data at the temporal and spatial levels, a new scenic spot recommendation model is proposed by combining personalized recommendation algorithms. Meanwhile, an improved genetic algorithm and a minimum spanning tree algorithm were introduced to adjust the structure of the personalized recommendation model. After matching the visiting sequence of scenic spots, the final personalized tourism route recommendation model was proposed. The experiments demonstrate that the optimal pause time for the personalized scenic spot recommendation model is 45 minutes, the pause distance is 15 meters, and the clustering radius is 500 meters. The model achieves its highest accuracy in the Top-10 testing environment, with a maximum value of 90%. In addition, the new personalized tourism route recommendation model achieves a highest accuracy of 85.6%, a highest recall of 88.7%, a highest F1 value of 92.4%, and an average convergence rate of 88.9%. In summary, the new scenic spot and route recommendation model proposed in this study can achieve more intelligent and personalized travel route planning, providing new guidance for the intelligent development of travel route recommendation.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_74-Personalized_Recommendation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employing a Hybrid Convolutional Neural Network and Extreme Learning Machine for Precision Liver Disease Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150273</link>
        <id>10.14569/IJACSA.2024.0150273</id>
        <doi>10.14569/IJACSA.2024.0150273</doi>
        <lastModDate>2024-02-28T12:51:31.2400000+00:00</lastModDate>
        
        <creator>Araddhana Arvind Deshmukh</creator>
        
        <creator>R. V. V. Krishna</creator>
        
        <creator>Rahama Salman</creator>
        
        <creator>S Sandhiya</creator>
        
        <creator>Balajee J</creator>
        
        <creator>Daniel Pilli</creator>
        
        <subject>Liver disease prognosis; convolutional neural network extreme learning machine; grey wolf optimization; patient care</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This paper discusses the critical relevance of precise forecasting in liver disease, as well as the need for early identification and categorization for immediate action and personalized treatment strategies. The paper describes a unique strategy for improving liver disease classification using ultrasound image processing. The recommended technique combines the properties of the Extreme Learning Machine (ELM) and Convolutional Neural Network (CNN) with Grey Wolf Optimization (GWO) to form an integrated model known as CNN-ELM-GWO. The data is provided by Pakistan&#39;s Multan Institute of Nuclear Medicine and Radiotherapy, and it is then pre-processed utilizing bilateral and optimal wavelet filtering techniques to increase the dataset&#39;s quality. To properly extract significant visual information, feature extraction employs a deep CNN architecture using six convolutional layers, batch normalization, and max-pooling. The ELM serves as a classifier, whereas the CNN is a feature extractor. The GWO algorithm, based on grey wolf searching strategies, refines the CNN and ELM hyperparameters in two stages, progressively boosting the system&#39;s classification accuracy. When implemented in Python, CNN-ELM-GWO outperforms traditional machine learning algorithms (MLP, RF, KNN, and NB) in terms of accuracy, precision, recall, and F1-score metrics. The proposed technique achieves an impressive 99.7% accuracy, revealing its potential to significantly enhance the classification of liver disease by employing ultrasound images. The CNN-ELM-GWO technique outperforms conventional approaches in liver disease forecasting by a substantial margin of 27.5%, showing its potential to revolutionize medical imaging and prospects.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_73-Employing_a_Hybrid_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring Student Attendance Through Vision Transformer-based Iris Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150272</link>
        <id>10.14569/IJACSA.2024.0150272</id>
        <doi>10.14569/IJACSA.2024.0150272</doi>
        <lastModDate>2024-02-28T12:51:31.2230000+00:00</lastModDate>
        
        <creator>Slimane Ennajar</creator>
        
        <creator>Walid Bouarifi</creator>
        
        <subject>Iris Recognition; Vision transformer; student attendance; vision transformer models; educational technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the context of the ongoing digital transformation, the effective monitoring of student attendance holds paramount significance for educational establishments. This study presents an innovative approach using Vision Transformer technology for iris recognition to automate student attendance tracking. We fine-tuned Vision Transformer models, specifically ViT-B16, ViT-B32, ViT-L16, and ViT-L32, using the CASIA-Iris-Syn dataset and focused on overcoming challenges related to intra-class variation through data augmentation techniques, including rotation, shearing, and brightness adjustments. The results reveal that ViT-L16 is the most proficient, achieving an impressive accuracy of 95.69%. Comparative analysis with prior methodologies, specifically those employing Vision Transformer with Convolutional Neural Network, underscores the superiority of our proposed ViT-L16 model. This superiority is evident across various metrics, including accuracy, precision, recall, and F1 score. The experimental setup involves the use of Jupyter Notebook, Python technologies, TensorFlow, and Keras, emphasizing evaluations based on loss, accuracy, and Confusion Matrix. ViT-L16 consistently outshines other models, showcasing its resilience in iris recognition for student attendance. This research marks a significant step towards modernizing attendance systems, offering an accurate and automated solution suitable for the evolving needs of educational settings. Future work could explore integrating additional biometric modalities and refining Vision Transformer architecture for enhanced performance and broader application in educational environments.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_72-Monitoring_Student_Attendance_Through_Vision_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Odia Handwritten Character and Numeral Recognition System&#39;s Performance with an Ensemble of Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150271</link>
        <id>10.14569/IJACSA.2024.0150271</id>
        <doi>10.14569/IJACSA.2024.0150271</doi>
        <lastModDate>2024-02-28T12:51:31.2070000+00:00</lastModDate>
        
        <creator>Mamatarani Das</creator>
        
        <creator>Mrutyunjaya Panda</creator>
        
        <creator>Soumya Sahoo</creator>
        
        <subject>Odia language; ensemble learning; machine learning; Gabor features; CNN; DNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Offline handwritten character recognition (OHCR) is considered a challenging task in pattern recognition due to the inter-class similarity and intra-class variations among the symbols present in the alphabet set. In this work, a learning-based weighted average ensemble of deep neural network models (WEnDNN) is proposed to classify the 10 digits and 47 characters present in the alphabet set of the Odia language, an official language of India. To build the base models for the ensemble network (EnDNN), three suitable convolutional neural networks (CNNs) are designed and trained from scratch. The WEnDNN&#39;s accuracy is increased by using a grid search approach to determine the ideal weight allocations to give to the top-performing models. The performance of the WEnDNN model is compared with several standard machine learning models, which take the non-handcrafted features extracted from the finely tuned, pre-trained VGG16 model and a combination of Gabor and pixel intensity values to create handcrafted features. On several benchmark handwritten datasets, including NITR Odia characters (OHCS v1.0), ISI Kolkata Odia numerals, and IITBBS Odia numerals, the performance of the proposed WEnDNN model is assessed and compared. The experimental results demonstrate that, in terms of recognition accuracy, the proposed approach beats other state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_71-Enhancing_the_Odia_Handwritten_Character_and_Numeral_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Multiple Data Sources for Suicide Risk Detection: A Deep Learning Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150270</link>
        <id>10.14569/IJACSA.2024.0150270</id>
        <doi>10.14569/IJACSA.2024.0150270</doi>
        <lastModDate>2024-02-28T12:51:31.1930000+00:00</lastModDate>
        
        <creator>Saraf Anika</creator>
        
        <creator>Swarup Dewanjee</creator>
        
        <creator>Sidratul Muntaha</creator>
        
        <subject>BiGRU-CNN hybrid; multisource dataset; word embeddings; NLP; sentiment analysis; cross-dataset testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the current digital landscape, social media’s extensive user-generated content presents a unique opportunity for identifying emotional distress signals. With suicide rates on the rise, this study employs Natural Language Processing (NLP) and Sentiment Analysis to detect suicide risk. Centering primarily on deep learning (DL) architectures, including Convolutional Neural Network (CNN), Bidirectional Gated Recurrent Unit (Bi-GRU) and their combined hybrid BiGRU-CNN model, the research incorporates machine learning (ML) for comparative analysis through multisource datasets from Reddit and Twitter. The methodology commenced with data pre-processing, followed by exploring word embedding techniques. This research included an analysis of both Word2Vec variants as well as pretrained GloVe embeddings, where Skip-Gram paired with the Adam optimizer showed superior results. For thorough evaluation, Receiver Operating Characteristic (ROC) curves, Confusion Matrix and Accuracy-Loss graphs were utilized. Furthermore, the generalizability of the employed models was tested and evaluated through in-depth inspection, accomplished via manual input tests, cross-dataset tests, and k-fold cross-validation. Across these evaluations, the proposed BiGRU-CNN model outperformed the traditional DL and ML models with consistent and reliable performance. Correspondingly, the proposed model achieved accuracies of 93.07% and 92.47% on the respective datasets, which advocates its potential as a tool for the early detection of suicidal thoughts.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_70-Analyzing_Multiple_Data_Sources_for_Suicide_Risk_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Smart Industry Security: An Advanced IoT-Integrated Framework for Detecting Suspicious Activities using ELM and LSTM Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150268</link>
        <id>10.14569/IJACSA.2024.0150268</id>
        <doi>10.14569/IJACSA.2024.0150268</doi>
        <lastModDate>2024-02-28T12:51:31.1470000+00:00</lastModDate>
        
        <creator>Mohammad Eid Alzahrani</creator>
        
        <subject>Internet of Things (IoT); Smart Industries; Extreme Learning Machine (ELM); Long Short-Term Memory (LSTM); Activity Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The proliferation of Internet of Things (IoT) devices in smart industrial contexts necessitates robust security measures to thwart potential threats. This study addresses the escalating security challenges arising from the widespread deployment of IoT devices in smart industrial environments. Focusing on the identification and categorization of potentially harmful activities, our research introduces an innovative framework that seamlessly integrates networks of Extreme Learning Machines (ELM) with Long Short-Term Memory (LSTM). The primary goal is to significantly enhance the accuracy and efficiency of real-time detection of suspicious activities. Implemented using Python, the framework exhibits a remarkable 97.5% improvement in recognizing and accurately categorizing suspicious activities compared to traditional methods such as Conv 1D and 3D CNN. Rigorous testing on a substantial real-world dataset simulating smart industry scenarios underlines this substantial improvement over conventional approaches in identifying and precisely classifying questionable activities. The design excels in comprehending complex behavioral trends within the dynamic IoT data environment, leveraging the temporal memory retention capacity of LSTM networks. This research lays the groundwork for fortifying cybersecurity in smart industries against emerging online threats and malicious actions. The proposed framework capitalizes on the synergies between LSTM and ELM networks to achieve heightened accuracy in identifying suspicious activities, providing comprehensive and dynamic insights from real-time IoT data. These insights are crucial for proactive threat detection and prevention in smart industrial settings, contributing to an elevated level of security against evolving threats.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_68-Elevating_Smart_Industry_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Agricultural Yield Forecasting with Deep Convolutional Generative Adversarial Networks and Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150269</link>
        <id>10.14569/IJACSA.2024.0150269</id>
        <doi>10.14569/IJACSA.2024.0150269</doi>
        <lastModDate>2024-02-28T12:51:31.1470000+00:00</lastModDate>
        
        <creator>D. Anuradha</creator>
        
        <creator>Ramu Kuchipudi</creator>
        
        <creator>B Ashreetha</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Ayadi Rami</creator>
        
        <subject>Agricultural yield prediction; DCGANs; CNN; satellite imagery; data augmentation; synthetic image generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Ensuring food security amidst a growing global population and environmental changes is imperative. This research introduces a pioneering approach that integrates cutting-edge deep learning techniques, namely Deep Convolutional Generative Adversarial Networks (DCGANs) and Convolutional Neural Networks (CNNs), with high-resolution satellite imagery to optimize agricultural yield prediction. The model leverages DCGANs to generate synthetic satellite images resembling real agricultural settings, enriching the dataset for training a CNN-based yield estimation model alongside actual satellite data. DCGANs facilitate data augmentation, enhancing the model&#39;s generalization across diverse environmental and seasonal scenarios. Extensive experiments with multi-temporal and multi-spectral satellite image datasets validate the proposed method&#39;s effectiveness. The trained CNN adeptly discerns intricate patterns related to crop growth phases, health, and yield potential. Leveraging Python software, the study confirms that integrating DCGANs significantly enhances agricultural production forecasting compared to conventional CNN-based approaches. Against established optimization methods like RCNN, YOLOv3, Deep CNN, and Two Stage Neural Networks, the proposed DCGAN-CNN fusion achieves 98.6% accuracy, a 3.62% improvement. Synthetic images augment model resilience by exposing it to varied situations and enhancing adaptability to diverse geographic regions and climatic shifts. Moreover, the research delves into CNN model interpretability, elucidating learnt features and their correlation with yield-related factors. This paradigm promises to advance agricultural output projections, advocate sustainable farming, and aid policymakers in addressing global food security amidst evolving environmental challenges.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_69-Enhancing_Agricultural_Yield_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Federated Learning for Enhanced Real-Time Traffic Prediction in Smart Urban Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150267</link>
        <id>10.14569/IJACSA.2024.0150267</id>
        <doi>10.14569/IJACSA.2024.0150267</doi>
        <lastModDate>2024-02-28T12:51:31.1130000+00:00</lastModDate>
        
        <creator>Mamta Kumari</creator>
        
        <creator>Zoirov Ulmas</creator>
        
        <creator>Suseendra R</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Federated Learning; smart city; convolutional neural network; recurrent neural network; traffic prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Federated Learning (FL), a crucial advancement in smart city technology, combines real-time traffic predictions with the potential to enhance urban mobility. This paper suggests a novel approach to real-time traffic prediction in smart cities: a hybrid Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) architecture. The investigation started with the systematic collection and preprocessing of a low-resolution dataset (1.6 GB) derived from real-time Closed Circuit Television (CCTV) traffic camera images at significant intersections in Guntur and Vijayawada. The dataset has been cleaned up utilizing min-max normalization to facilitate use. The primary contribution of this study is the hybrid architecture that it develops by fusing RNN to detect temporal dynamics with CNN for geographic extraction of characteristics. While the RNN&#39;s recurrent interactions preserve hidden states for sequential processing, the CNN efficiently retrieves high-level spatial information from static traffic images. Weight adjustments and backpropagation are used in the training of the proposed hybrid model in order to enhance real-time predictions that aid in traffic management. Notably, the implementation is done with Python software. The model reaches a testing accuracy of 99.8% by the 100th epoch, demonstrating excellent performance in the results and discussion section. The Mean Absolute Error (MAE) results, which show a 4.5% improvement over existing methods like Long Short Term Memory (LSTM), Support Vector Machine (SVM), Sparse Auto Encoder (SAE), and Gated Recurrent Unit (GRU), illustrate the efficacy of the model. This demonstrates how well complex patterns may be represented by the model, yielding precise real-time traffic predictions in crowded metropolitan settings. A new era of more precise and effective real-time traffic forecasts is about to begin, thanks to the hybrid CNN-RNN architecture, which is validated by the combined strengths of FL, CNN, and RNN as well as the overall outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_67-Utilizing_Federated_Learning_for_Enhanced_Real_Time_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Machine Learning for Enhanced Cyber Attack Detection and Defence in Big Data Management and Process Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150266</link>
        <id>10.14569/IJACSA.2024.0150266</id>
        <doi>10.14569/IJACSA.2024.0150266</doi>
        <lastModDate>2024-02-28T12:51:31.0970000+00:00</lastModDate>
        
        <creator>Taviti Naidu Gongada</creator>
        
        <creator>Amit Agnihotri</creator>
        
        <creator>Kathari Santosh</creator>
        
        <creator>Vijayalakshmi Ponnuswamy</creator>
        
        <creator>Narendran S</creator>
        
        <creator>Tripti Sharma</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Machine learning; data mining; cyber-attack detection; big data; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the rapidly developing field of &quot;Commercial Operation Divergence Analysis,&quot; this research seeks to identify and understand differences in commercial systems that exceed expected results. Approaches in this domain aim to identify the characteristics of process implementations that are associated with changes in process effectiveness. This entails identifying the features of procedural behaviours that result in unpleasant results and figuring out which behaviours have the biggest impact on increased efficiency. As the scale and complexity of big data management and process mining continue to expand, the threat of cyber-attacks poses a critical challenge. This research leverages machine learning techniques for the detection of and defence against cyber threats within the realm of big data management and process mining. The study introduces novel metrics such as Skewness, Coefficient of Variation, Standard Deviation, Maximum, Minimum, and Mean for assessing the security state, utilizing variables like SPI, SPEI, and SSI. The research addresses prior issues in cyber-attack detection by integrating machine learning into the specific context of big data and process mining. The novelty lies in the application of Skewness and other statistical metrics to enhance the precision of threat detection. The results demonstrate the effectiveness of the proposed methodology, showcasing promising outcomes in identifying and mitigating cyber threats in the given dataset; the model, which makes use of Support Vector Regression (SVR), has a standard deviation of 0.9, consistent with the variability shown in SVM. The results demonstrate a significant achievement, with a Mean Absolute Error (MAE) of 0.98, indicating the efficacy of the proposed approach in providing accurate and timely insights for cyber-attack detection and defence, thereby enhancing the overall security posture in data-intensive systems. The results highlight how well the proposed method extracts significant insights from complicated event data, with important ramifications for real-world application and decision-making procedures.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_66-Leveraging_Machine_Learning_for_Enhanced_Cyber_Attack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Inter Patient ECG Arrhythmia Classification Approach with Deep Feature Extraction and 1D Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150265</link>
        <id>10.14569/IJACSA.2024.0150265</id>
        <doi>10.14569/IJACSA.2024.0150265</doi>
        <lastModDate>2024-02-28T12:51:31.0830000+00:00</lastModDate>
        
        <creator>Mohamed Elmehdi Ait Bourkha</creator>
        
        <creator>Anas Hatim</creator>
        
        <creator>Dounia Nasir</creator>
        
        <creator>Said El Beid</creator>
        
        <creator>Assia Sayed Tahiri</creator>
        
        <subject>Electrocardiogram (ECG); Cardiovascular Diseases (CVD); Wavelet Scattering Transform (WST); Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The World Health Organization (WHO) sheds light on the escalating prevalence of heart diseases, foreseeing a substantial rise in the years ahead, impacting a vast global population. Swift and accurate early detection becomes pivotal in managing severe complications, underscoring the urgency of timely identification. While Ventricular Ectopic Beats (V) might initially be considered normal, their frequent occurrence could serve as a potential red flag for progressing to severe conditions like atrial fibrillation, Ventricular Tachycardia, and even cardiac arrest. This accentuates the need for developing an automated approach for early detection of cardiovascular diseases (CVD). This paper presents a novel method to classify arrhythmias. It leverages the Wavelet Scattering Transform (WST) to extract morphological features from Electrocardiogram (ECG) heartbeats, which seamlessly integrate into a 1D Convolutional Neural Network (CNN). The CNN is finely tuned to distinguish between V, Supraventricular Ectopic Beats (S), and Non-Ectopic Beats (N). Our model&#39;s performance surpasses state-of-the-art models, boasting precision, sensitivity, and specificity of 94.56%, 97.26%, and 99.54% for V, and 99.25%, 98.65%, and 93.26% for N. Remarkably, it achieves 68.01% precision, 77.75% sensitivity, and 99.14% specificity for S.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_65-A_Novel_Inter_Patient_ECG_Arrhythmia_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection of Autism Spectrum Disorder Symptoms using Text Mining and Machine Learning for Early Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150264</link>
        <id>10.14569/IJACSA.2024.0150264</id>
        <doi>10.14569/IJACSA.2024.0150264</doi>
        <lastModDate>2024-02-28T12:51:31.0670000+00:00</lastModDate>
        
        <creator>Mihaela Chistol</creator>
        
        <creator>Mirela Danubianu</creator>
        
        <subject>Text mining; machine learning; artificial intelligence; assistive technologies; Autism Spectrum Disorder; early diagnosis; screening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Autism spectrum disorder (ASD) is a neurological condition whose etiology is still insufficiently understood. The heterogeneity of manifestations makes the diagnosis process difficult. Thus, many children are diagnosed too late, which leads to the loss of precious time that can be used for therapy. A viable solution could be to equip medical staff with modern technologies to detect autism in its early stages. The objective of this research was to investigate, through empirical means, how text mining and machine learning (ML) algorithms can aid in the early ASD diagnosis by identifying patterns and ASD symptoms in text data regarding children’s behavior that concerned parents provided. The research involved the design of an innovative technical solution based on text mining for the identification of ASD symptoms in unstructured text data describing children’s behavior and the practical implementation of the solution using Rapid Miner. The dataset was created through a controlled experiment with 44 participants, parents of children diagnosed with ASD, who answered questions about their children’s (35 boys and 9 girls) behavior. Analysis of the performance of models trained with ML algorithms: Na&#239;ve Bayes, K-Nearest Neighbors, Deep Learning and Random Forest revealed that the K-Nearest Neighbors classifier outperformed the other methods, achieving the highest accuracy of 78.69%. Results obtained using text mining and ML demonstrated the feasibility of using parents’ narratives to develop predictive models for autism symptoms detection. The achieved accuracy highlights the potential of text mining as an autonomous and time- and cost-effective method for early identification of ASD in children.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_64-Automated_Detection_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Enhanced Customer Transaction Insights: An Apriori Algorithm-based Analysis of Sales Patterns at University Industrial Corporation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150263</link>
        <id>10.14569/IJACSA.2024.0150263</id>
        <doi>10.14569/IJACSA.2024.0150263</doi>
        <lastModDate>2024-02-28T12:51:31.0500000+00:00</lastModDate>
        
        <creator>Alex Alfredo Huaman Llanos</creator>
        
        <creator>Lenin Qui&#241;ones Huatangari</creator>
        
        <creator>Jeimis Royler Yalta Meza</creator>
        
        <creator>Alexander Huaman Monteza</creator>
        
        <creator>Orestes Daniel Adrianzen Guerrero</creator>
        
        <creator>John Smith Rodriguez Estacio</creator>
        
        <subject>Apriori algorithm; association rules; Customer Relationship Management (CRM); decision making; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The University Industrial Corporation (CIU) at the National University of Jaen offers a range of consumable products, encompassing nectar, water, coffee, chocolate, and chocoteja. However, its sales transactions had functioned without systematic analysis. To address this, the study gathered and analyzed sales data from March to November 2023, aiming to identify and delineate associations among frequently co-purchased products, revealing underlying interdependencies and associations. Employing text mining methodologies, this study preprocessed and analyzed 1542 sales records using the Apriori algorithm, culminating in the extraction of 17 association rules. Among these rules, three standout associations were uncovered: the purchase of chocolate, chocoteja and water suggests a purchase of nectar; chocolate, nectar and water acquisitions correlate with chocoteja purchases; lastly, chocolate and nectar purchases are associated with chocoteja acquisitions. These findings provide insights to support potential production adjustments within the CIU, enabling the leveraging of established associations to boost sales and revenue. Moreover, the identified rules serve as a cornerstone for decision-makers and actionable guidance for stakeholders, enabling the identification of co-purchased products, fostering informed production planning, fine-tuning marketing strategies for customer relationship management (CRM), and enhancing CIU&#39;s market competitiveness and profitability.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_63-Toward_Enhanced_Customer_Transaction_Insights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Classifying X-Ray Images of Scoliosis and Spondylolisthesis Based on Fine-Tuned Xception Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150262</link>
        <id>10.14569/IJACSA.2024.0150262</id>
        <doi>10.14569/IJACSA.2024.0150262</doi>
        <lastModDate>2024-02-28T12:51:31.0370000+00:00</lastModDate>
        
        <creator>Quy Thanh Lu</creator>
        
        <creator>Triet Minh Nguyen</creator>
        
        <subject>Transfer learning; fine tuning; spondylolisthesis; scoliosis; classification; Xception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The vertebral column is a marvel of biological engineering and is considered a main part of the skeleton in vertebrate animals. In addition, it serves as the central axis of the human body, comprising a series of interlocking vertebrae that provide structural support and flexibility. From basic movements like bending and twisting to more complex actions such as walking and running, the spine&#39;s impact on human life is profound, underscoring its indispensable role in maintaining physical well-being and overall functionality. Moreover, given the demanding schedules of modern life, a number of diseases affect the vertebral column, such as spondylolisthesis and scoliosis. As a result, numerous studies, including machine learning approaches, have been conducted to help treat or prevent these illnesses. In this study, transfer learning and fine tuning were used for the classification of X-ray images of vertebral disease, avoiding complexity and wasted time in the medical examination process. The dataset of vertebral disease X-ray images was collected at King Abdullah University Hospital and Jordan University of Science and Technology in Irbid, Jordan. It comprised 338 subjects: 79 spondylolisthesis, 188 scoliosis, and 71 normal X-ray images. With the customized-layer Xception model used for image classification, we obtained remarkably high results, with validation accuracy, test accuracy, and F1 score in the three-class classification (i.e., spondylolisthesis, scoliosis, and normal) of 99.00%, 97.86%, and 97.86%, respectively. Additionally, two-class detection also achieved high accuracy values (i.e., 98.86% and 99.57%). The consistently high performance metrics indicate a robust ability to identify vertebral diseases from X-ray images. The study found that machine learning significantly improves medical examination compared to traditional methods, offering a myriad of benefits in terms of accuracy, efficiency, and diagnostic capabilities.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_62-An_Approach_to_Classifying_X_Ray_Images_of_Scoliosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval Evaluation Metric for Songket Motif</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150261</link>
        <id>10.14569/IJACSA.2024.0150261</id>
        <doi>10.14569/IJACSA.2024.0150261</doi>
        <lastModDate>2024-02-28T12:51:31.0200000+00:00</lastModDate>
        
        <creator>Nadiah Yusof</creator>
        
        <creator>Amirah Ismail</creator>
        
        <creator>Nazatul Aini Abd Majid</creator>
        
        <creator>Zurina Muda</creator>
        
        <subject>Heritage; songket motifs; songket motifs retrieval; ground truth data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Songket is a fine art heritage that showcases the unique features of Malay identity. Past studies have shown that hundreds of Songket motifs have been produced, but unfortunately most were not stored digitally. The digital collection of image data and the determination of its ground truth data therefore deserve attention. This paper focuses on an evaluation metric for Songket motif image retrieval, where the initial label of each class of images in the database serves as the ground truth data. Determining the ground truth data involved two research activities. Activity One identified the ground truth data set of Songket motifs, yielding two ground truth data sets, namely a training data set and a test data set, covering six categories: &#39;Flora&#39;, &#39;Fauna&#39;, &#39;Nature&#39;, &#39;Cosmos&#39;, &#39;Food&#39; and &#39;Calligraphy&#39;. This phase was carried out through a participatory qualitative survey in which 15 respondents classified 413 specific motif images into 56 Songket motif categories referring to the six prominent motifs. Activity Two was a validation-classification test of the ground truth data sets by three experts to reconcile the selections of general and expert respondents and obtain training data sets for testing purposes across the six categories. After rearrangement, only 50 ground truth specific motifs were selected. The correlation coefficient method was then applied to examine the relationship between the two data sets from a statistical standpoint. In addition, precision and recall values were computed for each ground truth data set, and the F-measure was used to produce a single evaluation. The F-measure results for each category were &#39;Flora&#39;: 26.7 – 100 (20 ID-Category), &#39;Fauna&#39;: 35.3 – 100 (6 ID-Category), &#39;Nature&#39;: 30.8 – 100 (5 ID-Category), &#39;Cosmos&#39;: 53.3 – 100 (7 ID-Category), and &#39;Motif&#39;: 47.6 – 100 (9 ID-Category). Using ground truth data enables image retrieval research to conduct unbiased system testing and evaluation.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_61-Image_Retrieval_Evaluation_Metric_for_Songket_Motif.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Taguchi Method and Support Vector Machine for Enhanced Surface Roughness Modeling and Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150260</link>
        <id>10.14569/IJACSA.2024.0150260</id>
        <doi>10.14569/IJACSA.2024.0150260</doi>
        <lastModDate>2024-02-28T12:51:31.0030000+00:00</lastModDate>
        
        <creator>Ashanira Mat Deris</creator>
        
        <creator>Rozniza Ali</creator>
        
        <creator>Ily Amalina Ahmad Sabri</creator>
        
        <creator>Nurezayana Zainal</creator>
        
        <subject>Support Vector Machine; surface roughness; end milling; Taguchi method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The end milling process is widely used in various industrial applications, including the health, aerospace and manufacturing industries. Over the years, end milling machine technology has grown rapidly to meet the needs of various fields, especially the manufacturing industry, whose main concern is to obtain good quality products. Machined product quality is commonly correlated with the surface roughness (Ra) value, a vital aspect that can influence overall machining performance. However, finding the optimal surface roughness value remains a challenging task because it involves many considerations in the cutting process, especially the selection of suitable machining parameters as well as cutting materials and workpiece. Hence, this study presents a support vector machine (SVM) prediction model to obtain the minimum Ra for the end milling machining process. The prediction model was developed with three input parameters, namely feed rate, depth of cut and spindle speed, while Ra is the output parameter. The end milling data were collected from case studies based on machining experiments on a titanium alloy workpiece with three types of cutting tools, namely uncoated carbide WC-Co (uncoated), common PVD-TiAlN (TiAlN) and Supernitride coating (SNTR). The prediction results show that SVM is an effective prediction model, giving better Ra values than the experimental and regression results.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_60-Integrating_Taguchi_Method_and_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Neural Network for Accurate Rice Panicle Detection and Counting in Field Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150259</link>
        <id>10.14569/IJACSA.2024.0150259</id>
        <doi>10.14569/IJACSA.2024.0150259</doi>
        <lastModDate>2024-02-28T12:51:30.9900000+00:00</lastModDate>
        
        <creator>Wenchao Xu</creator>
        
        <creator>Yangxu Wang</creator>
        
        <subject>Computer vision; deep learning; lightweight; neural network architecture; remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Monitoring rice spikelet yield is crucial for ensuring food security, but manual observations are tedious and subjective. Deep learning approaches for automated counting often require high device resources, limiting their applicability on low-cost edge devices. This paper presents the Rice Lightweight Feature Detection Network (RLFDNet). RLFDNet, designed for the field of computer vision, features a lightweight encoder and decoder, effectively decoding shallow and deep information within its neural network architecture. Innovative designs, including a dense feature pyramid network, reinforcement learning guidance, attention mechanisms, dynamic receptive field adjustment, and shape feature fusion, enable outstanding performance in object detection and counting, even with low-resolution images. Across different elevations, ranging from 7m to 20m, RLFDNet demonstrates significantly superior accuracy and inference efficiency compared to other advanced object detection methods. With a parameter count of only 4.40 million, it achieves an impressive frame rate of 80.43 FPS on a GTX1080Ti GPU, meeting real-time application requirements on inexpensive devices. RLFDNet&#39;s exceptional performance is further highlighted by an MAE of 1.86 and an R&#178; of 0.9461, along with an average precision (mAP@0.5) of 0.91. These results underscore RLFDNet&#39;s capability as a potent and reliable visual tool for agricultural practitioners, offering promising prospects for future research endeavors.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_59-A_Lightweight_Neural_Network_for_Accurate_Rice_Panicle_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>i-Tech: Empowering Educators to Bring Experimental Learning to Classrooms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150258</link>
        <id>10.14569/IJACSA.2024.0150258</id>
        <doi>10.14569/IJACSA.2024.0150258</doi>
        <lastModDate>2024-02-28T12:51:30.9730000+00:00</lastModDate>
        
        <creator>Amani Alqarni</creator>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Abdullah Abuhussein</creator>
        
        <subject>Virtual reality; 360&#176; video; user behavior analysis; content delivery; immersive media; education; technology in education; instructional design; human-computer interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The integration of technology in education has gained significant attention, with Virtual Reality (VR), Augmented Reality (AR), and 360&#176; VR emerging as transformative tools for enhancing student learning experiences. Despite their potential benefits, these immersive technologies have not achieved widespread adoption in education. Educators face numerous challenges in finding suitable 360&#176; content for their courses and in integrating complex content creation tools. Creating educational 360&#176; content often involves hiring programmers or mastering intricate programming techniques, which can be time-consuming and daunting. Educators also struggle to find platforms to host, edit, and segment video content by topic, and to add subtitles and translations to their 360&#176; videos. To address these challenges, this paper presents the implementation and evaluation of a user-friendly prototype tool with a step-by-step graphical user interface. This high-fidelity prototype assists educators in uploading 360&#176; content, segmenting it into chapters or topics, incorporating questions or requirements within video segments, adding subtitles and translations, and facilitating content sharing among educators. This design aims to assist teachers in publishing their 360&#176; content while reducing the complex VR programming required of them. It enables them to integrate immersive learning into their classrooms with ease. The final goal is to promote greater adoption of 360&#176; VR content in education and enhance learning outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_58-i_Tech_Empowering_Educators_to_Bring_Experimental_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MR-FNC: A Fake News Classification Model to Mitigate Racism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150257</link>
        <id>10.14569/IJACSA.2024.0150257</id>
        <doi>10.14569/IJACSA.2024.0150257</doi>
        <lastModDate>2024-02-28T12:51:30.9730000+00:00</lastModDate>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Ahmad S. Alghamdi</creator>
        
        <creator>Ammar Saeed</creator>
        
        <creator>Faisal S. Alsubaei</creator>
        
        <subject>Machine learning; deep learning; fake news detection; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>One of the most challenging tasks in processing natural language text is authenticating the correctness of the provided information, particularly for the classification of fake news. Fake news has also become a growing source of apprehension in recent times with respect to hate speech. For instance, the followers of various beliefs face constant discrimination and receive negative perspectives directed at them. Fake news is one of the most prominent drivers of various kinds of racism and stands at par with the individual, interpersonal, and structural types of racism observed worldwide, yet it receives little attention and remains neglected. In this paper, to mitigate racism, we address fake news regarding beliefs related to Islam as a case study. Though fake news has remained a concerning factor since the beginning of Islam, a significant increase has been noticed over the last three years. Additionally, the accessibility of social media platforms and the growth in their use have helped to propagate misinformation, hate speech, and unfavorable views about Islam. Based on these deductions, this study intends to categorize such anti-Islamic content and misinformation found in Twitter posts. Several preprocessing and data enhancement steps were employed on the retrieved data. Word2vec and GloVe were implemented to derive deep features, while TF-IDF and BOW were applied to derive textual features from the data. Finally, the classification phase was performed using four machine learning (ML) algorithms, namely Random Forest (RF), Na&#239;ve Bayes (NB), Logistic Regression (LR) and Support Vector Machine (SVM), as well as a custom deep CNN. The results, compared across several performance evaluation measures, show that on average the ML models perform better than the CNN for the utilized dataset.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_57-MR_FNC_A_Fake_News_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Compression for Remote Sensing: Multispectral Transform and Deep Recurrent Neural Networks for Lossless Hyper-Spectral Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150256</link>
        <id>10.14569/IJACSA.2024.0150256</id>
        <doi>10.14569/IJACSA.2024.0150256</doi>
        <lastModDate>2024-02-28T12:51:30.9570000+00:00</lastModDate>
        
        <creator>D. Anuradha</creator>
        
        <creator>Gillala Chandra Sekhar</creator>
        
        <creator>Annapurna Mishra</creator>
        
        <creator>Puneet Thapar</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Maganti Syamala</creator>
        
        <subject>Multi-Spectral transform; lossless compression; hyper-spectral data; deep recurrent neural network; compression algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Remote sensing technologies, which are essential for everything from environmental monitoring to disaster relief, enable large-scale multispectral data collection. In the field of hyper-spectral imaging, where high-dimensional data is required for precise analysis, effective compression techniques are critical so that datasets can be transmitted and stored efficiently without sacrificing analytical precision. The paper presents advanced compression techniques that combine deep Recurrent Neural Networks (RNNs) with multispectral transforms to achieve lossless compression in hyper-spectral imaging. The Discrete Wavelet Transform (DWT) is used to efficiently capture spectral and spatial information by utilizing the properties of multispectral transforms. Simultaneously, deep RNNs are used to model the complex dependencies in the hyper-spectral data, allowing for sequential compression. The overall compression efficiency, increased by the integration of spatial and spectral information, allows for reduced storage requirements and improved transmission efficiency. Python software is used to implement the proposed model. When compared to Linear Spectral Mixture Analysis (LSMA) based compression, Spatial Orientation Tree Wavelet (STW)-Wavelet Difference Reduction (WDR), and DPCM, the proposed DWT-RNN-LSTM method achieves a better PSNR value of 45 dB and a lower MSE of 7.50%. Adaptive compression methods are presented in order to dynamically adapt to various data properties and ensure applicability across various hyperspectral scenes. Studies on hyper-spectral images of various sizes and resolutions demonstrate the approach&#39;s scalability and generalization, as well as the utility and adaptability of the proposed compression framework in a variety of remote sensing scenarios.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_56-Efficient_Compression_for_Remote_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Fuzzy-PID Controller for Supporting Comfort Microclimate in Smart Homes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150255</link>
        <id>10.14569/IJACSA.2024.0150255</id>
        <doi>10.14569/IJACSA.2024.0150255</doi>
        <lastModDate>2024-02-28T12:51:30.9430000+00:00</lastModDate>
        
        <creator>Nazbek Katayev</creator>
        
        <creator>Ainur Zhakish</creator>
        
        <creator>Nurlan Kulmyrzayev</creator>
        
        <creator>Assylzat Abuova</creator>
        
        <creator>Sveta Toxanova</creator>
        
        <creator>Aiymkhan Ostayeva</creator>
        
        <creator>Gulsim Dossanova</creator>
        
        <subject>HVAC; Fuzzy logic; energy management; comfort management; smart home</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Addressing the challenge of ensuring a comfortable indoor environment in both commercial and residential buildings through the use of heating, ventilation, and air conditioning (HVAC) systems is a critical issue. This challenge is intricately connected to the development of sophisticated multi-channel controllers to regulate temperature and humidity effectively. This academic discussion initially focuses on the development and examination of a complex, interactive, nonlinear mathematical model that encapsulates the ideal parameters for temperature and humidity to achieve the desired comfort levels. The paper then progresses to explore various methodologies in the design of temperature and humidity control systems. It delves into the traditional Proportional-Integral-Derivative (PID) controllers, a mainstay in the industry, and extends to more advanced iterations. These include the integration of PID controllers with distinct decoupled controllers and the innovative combination of PID controllers with self-adjusting parameters, which are informed by the principles of fuzzy logic. This combination is particularly significant for the processes of heating and humidification. Subsequently, the paper presents the results obtained from simulations conducted on a proposed fuzzy-PID controller using Matlab, a widely used computational tool. These simulations are crucial in evaluating the efficacy of the controller design. Additionally, the paper offers an analysis of experimental data collected over a six-month period. This data is instrumental in assessing the real-world performance of the proposed system, providing valuable insights into its practical applicability and effectiveness in managing indoor climate conditions. In summary, this comprehensive study not only lays the groundwork for an interactive model for climate control but also compares various controller designs, culminating in the proposal and evaluation of an advanced fuzzy-PID controller. This work stands as a significant contribution to the ongoing efforts to enhance indoor climate control in buildings.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_55-An_Intelligent_Fuzzy_PID_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing in DCN Servers Through Software Defined Network Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150254</link>
        <id>10.14569/IJACSA.2024.0150254</id>
        <doi>10.14569/IJACSA.2024.0150254</doi>
        <lastModDate>2024-02-28T12:51:30.9100000+00:00</lastModDate>
        
        <creator>Gulbakhram Beissenova</creator>
        
        <creator>Aziza Zhidebayeva</creator>
        
        <creator>Zhadyra Kopzhassarova</creator>
        
        <creator>Pernekul Kozhabekova</creator>
        
        <creator>Bayan Myrzakhmetova</creator>
        
        <creator>Mukhtar Kerimbekov</creator>
        
        <creator>Dinara Ussipbekova</creator>
        
        <creator>Nabi Yeshenkozhaev</creator>
        
        <subject>Software defined network; DCN; machine learning; deep learning; server; load balancing; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In this research paper, we delve into the innovative realm of optimizing load balancing in Data Center Networks (DCNs) by leveraging the capabilities of Software-Defined Networking (SDN) and machine learning algorithms. Traditional DCN architectures face significant challenges in handling unpredictable traffic patterns, leading to bottlenecks, network congestion, and suboptimal utilization of resources. Our study proposes a novel framework that integrates the flexibility and programmability of SDN with the predictive and analytical prowess of machine learning. We employed a multi-layered methodology, initially constructing a virtualized environment to simulate real-world DCN traffic scenarios, followed by the implementation of SDN controllers to instill adaptiveness and programmability. Subsequently, we integrated machine learning models, training them on a substantial dataset encompassing diverse traffic patterns and network conditions. The crux of our approach was the application of these trained models to anticipate network congestion and dynamically adjust traffic flows, ensuring efficient load distribution among servers. A comparative analysis was conducted against prevailing load balancing methods, revealing our model&#39;s superiority in terms of latency reduction, enhanced throughput, and improved resource allocation. Furthermore, our research illuminates the potential for machine learning&#39;s self-learning mechanism to foresee and adapt to future network states or exigencies, marking a significant advancement from reactive to proactive network management. This convergence of SDN and machine learning, as demonstrated, ushers in a new era of intelligent, scalable, and highly reliable DCNs, demanding further exploration and investment for future-ready data centers.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_54-Load_Balancing_in_DCN_Servers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Augmented with SMOTE for Timely Alzheimer&#39;s Disease Detection in MRI Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150253</link>
        <id>10.14569/IJACSA.2024.0150253</id>
        <doi>10.14569/IJACSA.2024.0150253</doi>
        <lastModDate>2024-02-28T12:51:30.8970000+00:00</lastModDate>
        
        <creator>P Gayathri</creator>
        
        <creator>N. Geetha</creator>
        
        <creator>M. Sridhar</creator>
        
        <creator>Ramu Kuchipudi</creator>
        
        <creator>K. Suresh Babu</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <creator>B Kiran Bala</creator>
        
        <subject>Alzheimer&#39;s disease; MRI scans; Convolutional Neural Networks (CNNs); Synthetic Minority Over-sampling Technique (SMOTE); Spider Monkey Optimization (SMO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Timely diagnosis of Alzheimer&#39;s Disease (AD) is pivotal for effective intervention and improved patient outcomes, utilizing Magnetic Resonance Imaging (MRI) to unveil structural brain changes associated with the disorder. This research presents an integrated methodology for early detection of Alzheimer&#39;s Disease from Magnetic Resonance Imaging, combining advanced techniques. The framework initiates with Convolutional Neural Networks (CNNs) for intricate feature extraction from structural MRI data indicative of Alzheimer&#39;s Disease. To address class imbalance in medical datasets, the Synthetic Minority Over-sampling Technique (SMOTE) ensures a balanced representation of Alzheimer&#39;s Disease and non-Alzheimer&#39;s Disease instances. The classification phase employs Spider Monkey Optimization (SMO) to optimize model parameters, enhancing precision and sensitivity in Alzheimer&#39;s Disease diagnosis. This work aims to provide a comprehensive approach, improving accuracy and tackling the challenges of imbalanced datasets in early Alzheimer&#39;s detection. Experimental outcomes demonstrate the proposed approach outperforming conventional techniques in terms of classification accuracy, sensitivity, and specificity. With a notable 91% classification accuracy, particularly significant in medical diagnostics, this method holds promise for practical application in clinical settings, showcasing robustness and potential for enhancing patient outcomes in early-stage Alzheimer&#39;s diagnosis. The implementation is conducted in Python.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_53-Deep_Learning_Augmented_with_SMOTE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Robust Stacked Broad Learning System for Noisy Data Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150252</link>
        <id>10.14569/IJACSA.2024.0150252</id>
        <doi>10.14569/IJACSA.2024.0150252</doi>
        <lastModDate>2024-02-28T12:51:30.8800000+00:00</lastModDate>
        
        <creator>Kai Zheng</creator>
        
        <creator>Jie Liu</creator>
        
        <subject>Robust; stacking; broad learning system; deep learning; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The robust broad learning system (RBLS) demonstrates generalization and robustness in solving uncertain data regression tasks. To enhance the representation ability of RBLS, this paper develops a novel robust stacked broad learning system for solving noisy data regression problems, termed RSBLS. In our work, we expand the traditional BLS into a stacked broad learning system model with a deep structure of feature nodes and enhancement nodes. Furthermore, the ℓ1-norm loss function is employed to update the objective function of RSBLS for processing noisy data, and the augmented Lagrange multiplier (ALM) method is applied to obtain the output weights of RSBLS, which maintains effectiveness and efficiency compared with a weighted loss function. Simulation results on several regression datasets with outliers demonstrate that the proposed RSBLS performs favorably, with better robustness than RVFL, BLS, Huber-WBLS, KDE-WBLS and RBLS.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_52-A_Novel_Robust_Stacked_Broad_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficiency of Hybrid Decision Tree Algorithms in Evaluating the Academic Performance of Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150251</link>
        <id>10.14569/IJACSA.2024.0150251</id>
        <doi>10.14569/IJACSA.2024.0150251</doi>
        <lastModDate>2024-02-28T12:51:30.8470000+00:00</lastModDate>
        
        <creator>Yanxin Xie</creator>
        
        <subject>Academic performance; decision tree; pelican optimization algorithm; runge kutta optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Educational institutions are anticipated to take substantial and proactive roles in guaranteeing students&#39; successful program completion. Academic performance is conventionally employed to categorize and forecast students&#39; future ability to confront post-graduation challenges. A student&#39;s academic accomplishments are instrumental in shaping exceptional individuals who may become future leaders. Using algorithms to assess and predict academic performance is a well-established practice in machine learning, encompassing techniques such as neural networks (NN), logistic regression (LR), decision trees (DT), and others. The goal of this project is to improve decision trees&#39; ability to predict students&#39; academic achievement via the use of data mining methods and meta-heuristic algorithms. Educational data mining involves the utilization of data analysis methodologies and tools to examine the extensive data generated within educational establishments as a result of students&#39; interactions and activities throughout their academic journey. The Pelican Optimization Algorithm (POA) and Runge Kutta optimization (RKO) are utilized in developing hybrid models, both of which can efficiently search for optimal or near-optimal splits by fine-tuning the hyperparameters of decision tree models. Students&#39; final grades were predicted through training and testing models and categorized into four classes: Excellent, Good, Acceptable, and Poor. The classification capability of a single model and its optimized counterparts was evaluated using Accuracy, Recall, Precision, and F1-score in separate phases for each category. The obtained results revealed that POA and RKO improved the Accuracy of the DTC by 1.86% and 0.87%, respectively. Precision and Recall metric analysis further manifests the superiority of DTPO. Prediction based on classifiers, especially workable optimized versions such as DTPO, paves the way for institutions to raise student success rates.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_51-Efficiency_of_Hybrid_Decision_Tree_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Parkinson&#39;s Disease Severity Prediction using Multimodal Convolutional Recursive Deep Belief Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150250</link>
        <id>10.14569/IJACSA.2024.0150250</id>
        <doi>10.14569/IJACSA.2024.0150250</doi>
        <lastModDate>2024-02-28T12:51:30.8170000+00:00</lastModDate>
        
        <creator>Shaikh Abdul Hannan</creator>
        
        <subject>Parkinson&#39;s Disease (PD); Convolutional Neural Networks (CNN); Deep Belief Networks (DBN); Rat Swarm Optimization (RSO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Parkinson&#39;s disease (PD), a progressive neurological ailment predominantly affecting individuals over the age of 60, involves the gradual loss of dopamine-producing neurons. The challenges associated with the subjectivity, resource intensity, and limited efficacy of current diagnostic methods, including the Unified Parkinson’s Disease Rating Scale (UPDRS), neuroimaging, and genetic analysis, underscore the need for innovative approaches. This paper introduces a groundbreaking multimodal deep learning framework that integrates Recurrent Neural Networks (RNN-DBN) for precise feature selection and Convolutional Neural Networks (CNNs) for robust feature extraction, aiming to enhance the accuracy of PD severity prediction. The methodology synergistically incorporates genetic data, imaging data from MRI and PET scans, and clinical evaluations. CNNs effectively capture spatial and temporal patterns within each data modality, preserving inter-modal linkages. The proposed RNN-DBN architecture, by skillfully leveraging temporal dependencies, improves model interpretability and provides a clearer understanding of the progression of Parkinson&#39;s disease symptoms. Evaluation across diverse PD datasets demonstrates superior predictive performance compared to existing methods. This multimodal deep learning framework holds the potential to revolutionize PD diagnosis and monitoring, offering physicians a valuable tool for assessing the condition&#39;s severity. The integration of various data sources enhances the model&#39;s accuracy, providing a holistic perspective on Parkinson&#39;s disease progression. This, in turn, facilitates improved clinical decision-making and patient care. Notably, the implementation in Python achieves a remarkable accuracy of 94.87%, surpassing existing methods like EOFSC and CNN by 1.44%.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_50-Advancing_Parkinsons_Disease_Severity_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards High Quality PCB Defect Detection Leveraging State-of-the-Art Hybrid Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150249</link>
        <id>10.14569/IJACSA.2024.0150249</id>
        <doi>10.14569/IJACSA.2024.0150249</doi>
        <lastModDate>2024-02-28T12:51:30.8000000+00:00</lastModDate>
        
        <creator>Tuan Anh Nguyen</creator>
        
        <creator>Hoanh Nguyen</creator>
        
        <subject>PCB defect detection; hybrid neural network; bottleneck transformer; ghost convolution; wise-IoU loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The automatic detection of defects in printed circuit boards (PCBs) is a critical step in ensuring the reliability of electronic devices. This paper introduces a novel approach for PCB defect detection. It incorporates a state-of-the-art hybrid architecture that leverages both convolutional neural networks (CNNs) and transformer-based models. Our model comprises three main components: a Backbone for feature extraction, a Neck for feature map refinement, and a Head for defect prediction. The Backbone utilizes ResNet and Bottleneck Transformer blocks, which are proficient at highlighting small defect features and overcoming the shortcomings of previous models. The Neck module, designed with Ghost Convolution, refines feature maps. It reduces computational demands while preserving the quality of feature representation. This module also facilitates the integration of multi-scale features, essential for accurately detecting a wide range of defect sizes. The Head employs a Fully Convolutional One-stage detection approach, allowing for the prediction process to proceed without reliance on predefined anchors. Within the Head, we incorporate the Wise-IoU loss to refine bounding box regression. This optimizes the model&#39;s focus on high-overlap regions and mitigates the influence of outlier samples. Comprehensive experiments on standard PCB datasets validate the effectiveness of our proposed method. The results show significant improvements over existing techniques, particularly in the detection of small and subtle defects.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_49-Towards_High_Quality_PCB_Defect_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Airborne Target Tracking using DeepSort Algorithm and Yolov7 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150248</link>
        <id>10.14569/IJACSA.2024.0150248</id>
        <doi>10.14569/IJACSA.2024.0150248</doi>
        <lastModDate>2024-02-28T12:51:30.7870000+00:00</lastModDate>
        
        <creator>Yasmine Ghazlane</creator>
        
        <creator>Ahmed El Hilali Alaoui</creator>
        
        <creator>Hicham Medomi</creator>
        
        <creator>Hajar Bnouachir</creator>
        
        <subject>Real-time detection; target tracking; anti-drone; Artificial Intelligence; Computer Vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In light of the explosive growth of drones, it is more critical than ever to strengthen and secure aerial security and privacy. Drones are used maliciously by exploiting gaps in artificial intelligence and cybersecurity. Airborne target detection and tracking tasks have gained paramount importance in various domains, encompassing surveillance, security, and traffic management. As airspace security systems aiming to regulate drone activities, anti-drone systems leverage advances in artificial intelligence and computer vision to perform airborne target detection, identification, and tracking effectively and accurately. The reliability of anti-drone systems relies mostly on the ability of the incorporated models to strike an optimal compromise between inference speed and detection performance, since the system should recognize targets effectively and rapidly to take appropriate actions regarding each target. This research article explores the efficacy of the DeepSort algorithm coupled with the YOLOv7 model in detecting and tracking five distinct airborne targets, namely drones, birds, airplanes, daytime frames, and buildings, across diverse contexts. The DeepSort and YOLOv7 models are intended for use in anti-drone systems to detect and track the most commonly encountered airborne targets and thereby reinforce airspace safety and security. The study conducts a comparative analysis of tracking performance under different scenarios to evaluate the algorithm&#39;s versatility, robustness, and accuracy. The experimental results show the effectiveness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_48-Real_Time_Airborne_Target_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Student Performance Prediction using Extra-Trees Classifier and Meta-Heuristic Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150247</link>
        <id>10.14569/IJACSA.2024.0150247</id>
        <doi>10.14569/IJACSA.2024.0150247</doi>
        <lastModDate>2024-02-28T12:51:30.7530000+00:00</lastModDate>
        
        <creator>Yangbo Li</creator>
        
        <creator>Mengfan He</creator>
        
        <subject>Student performance; mathematics; machine learning; Extra-Trees Classifier; Gorilla Troops Optimizer; Reptile Search Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the highly competitive landscape of academia, the study addresses the multifaceted challenge of analyzing voluminous and diverse educational datasets through the application of machine learning, specifically emphasizing dimensionality reduction techniques. This sophisticated approach facilitates educators in making data-informed decisions, providing timely guidance for targeted academic improvement, and enhancing the overall educational experience by stratifying individuals based on their innate aptitudes and mitigating failure rates. To fortify predictive capabilities, the study employs the robust Extra-Trees Classifier (ETC) model for classification tasks. This model is enhanced by integrating the Gorilla Troops Optimizer (GTO) and Reptile Search Algorithm (RSA), cutting-edge optimization algorithms designed to refine decision-making processes and improve predictive precision. This strategic amalgamation underscores the research&#39;s commitment to leveraging advanced machine learning and bio-inspired algorithms to achieve more accurate and resilient student performance predictions in the mathematics course, ultimately aiming to elevate educational outcomes. Analyses of G1 and G3 showcase the efficacy of the ETRS model, demonstrating 97.5% Accuracy, F1-Score, and Recall in predicting the G1 values. Similarly, the ETRS model emerges as the premier predictor for G3, attaining 95.3% Accuracy, Recall, and F1-Score. These outcomes underscore the significant contributions of the proposed models in advancing precision and discernment in student performance prediction, aligning with the overarching goal of refining educational outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_47-Elevating_Student_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on the Implementation of Multimodal Continuous Authentication in Smartphones: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150246</link>
        <id>10.14569/IJACSA.2024.0150246</id>
        <doi>10.14569/IJACSA.2024.0150246</doi>
        <lastModDate>2024-02-28T12:51:30.7400000+00:00</lastModDate>
        
        <creator>Rahmad Syalevi</creator>
        
        <creator>Aji Prasetyo</creator>
        
        <creator>Rizal Fathoni Aji</creator>
        
        <subject>Authentication; continuous multimodal; biometric authenticator; smartphone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Profound societal shifts have resulted from the inception of the Industry 4.0 era and rapid technological advancements. The widespread adoption of e-services has resulted in substantial reliance on smartphones to access diverse offerings. Even so, account breaches and data leaks are risks that users take when they rely so heavily on their smartphones. Authentication is an essential method of safeguarding personal information. The purpose of this study is to undertake a thorough review of the literature on the deployment and trends of multimodal biometric authentication on smartphones. The review examines several biometric modalities, such as behavioral and physiological characteristics, and the pattern recognition algorithms used in continuous authentication systems. The results show various biometric authenticators and emphasize the importance of behavioral features in smartphone authentication. In addition, the research underlines the significance of machine learning algorithms in pattern identification for rapid and accurate analysis. This study helps to understand the present authentication technique landscape and offers ideas for future advances in secure and user-friendly smartphone authentication systems.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_46-Study_on_the_Implementation_of_Multimodal_Continuous_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Neuro-Genetic Security Framework for Misbehavior Detection in VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150244</link>
        <id>10.14569/IJACSA.2024.0150244</id>
        <doi>10.14569/IJACSA.2024.0150244</doi>
        <lastModDate>2024-02-28T12:51:30.7070000+00:00</lastModDate>
        
        <creator>Ila Naqvi</creator>
        
        <creator>Alka Chaudhary</creator>
        
        <creator>Anil Kumar</creator>
        
        <subject>VANET security; genetic algorithm; ANN fitness function; misbehavior detection; hybrid detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The Genetic Algorithm (GA) is an excellent optimization algorithm which has attracted the attention of researchers in various fields. Many papers have been published on work done with GA, but no paper has yet utilized this algorithm for misbehavior detection in VANETs. This is because GA requires manual definition of a fitness function, and defining a fitness function for VANETs is a complex task. Automating the creation of these fitness functions remains a difficulty, even though studies have found several successful applications of GA. In this study, a neuro-genetic security framework has been built with an ANN classifier for detecting misbehavior in VANETs. It leverages a genetic algorithm for feature reduction with the ANN as a dynamic fitness function, considering both node behaviors and contextual GPS data. Deployed at the Roadside Unit (RSU) level, the framework detects misbehaving nodes, broadcasting alerts to RSUs, the Central Authority, and the vehicles. The ANN-based fitness function employed in the GA enabled it to select the best results. The 10-fold CV used kept the whole system unbiased, giving a precision of 0.9976 with recall and F1 scores of 0.9977 and 0.9977, respectively. Comparative evaluations, using the VeReMi Extension dataset, demonstrate the framework&#39;s superiority in precision, recall, and F1 score for binary and multiclass classification. This hybrid genetic algorithm with an ANN fitness function presents a robust, adaptive solution for VANET misbehavior detection. Its context-aware nature accommodates dynamic scenarios, offering an effective security framework against the evolving threats in vehicular environments.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_44-A_Neuro_Genetic_Security_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Predictive Trend Analytics with SNS Information for Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150245</link>
        <id>10.14569/IJACSA.2024.0150245</id>
        <doi>10.14569/IJACSA.2024.0150245</doi>
        <lastModDate>2024-02-28T12:51:30.7070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Yusuke Nakagawa</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>X (formerly Twitter); Instagram; Facebook; YouTube; TikTok; market trend; AWS; Google analytics; keyword analysis; page view analysis; access analysis; heat map analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>A method for predictive trend analytics with social media information is proposed for marketing. Through keyword analysis, page view analysis, access analysis, heat map analysis, Google Analytics, real-time analysis, company and competitor analysis, and trend analysis of social media data derived from X (formerly Twitter), Instagram, Facebook, YouTube, and TikTok, market trends can be predicted. The proposed method is implemented on a local server and extended to the AWS cloud. The proposed system also supports negative/positive sentiment analysis of the acquired social media information. Through experiments, it is found that by using AI to analyze social data by category, one can visualize the degree of attention for each keyword, model relationships between pieces of information, identify trending keywords and where they are in their lifecycle, and predict which ones will scale up in the next six months. In addition, corporate product development and marketing personnel can identify themes, materials, benefits, etc. that show signs of becoming popular, based on insights from the predictive behavioral data obtained from the proposed method and system, and utilize them in new business development and new product planning.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_45-Method_for_Predictive_Trend_Analytics_with_SNS_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Detail Preservation Technique for Enhancing Speckle Reduction Filtering Performance in Medical Ultrasound Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150243</link>
        <id>10.14569/IJACSA.2024.0150243</id>
        <doi>10.14569/IJACSA.2024.0150243</doi>
        <lastModDate>2024-02-28T12:51:30.6770000+00:00</lastModDate>
        
        <creator>Yasser M. Kadah</creator>
        
        <creator>Ahmed F. Elnokrashy</creator>
        
        <subject>Edge detail preservation; image quality metrics; speckle reduction; ultrasound imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Ultrasound imaging is a unique medical imaging modality due to its clinical versatility, manageable biological effects, and low cost. However, a significant limitation of ultrasound imaging is the noisy appearance of its images due to speckle noise, which reduces image quality and hence makes diagnosis more challenging. Consequently, this problem has received interest from many research groups, and many methods have been proposed for speckle suppression using various filtering techniques. The common problem with such methods is that they tend to distort the edge detail content within the image, and blurring is commonly encountered. In this work, we propose a new method that can be combined with previous speckle suppression techniques to preserve the edge detail content of the image. The original image is first processed to extract the edge detail content. Rather than presenting the original image directly to the speckle suppression filtering technique, the edge detail content is subtracted from the original image before it is filtered. Then, this edge detail content is added to the output of filtering to form the final image. The new method is practically verified using 26 imaging experiments as well as ultrasound images from publicly available databases, in combination with four widely used speckle reduction filters. The results are evaluated qualitatively and quantitatively using standard image quality metrics.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_43-Edge_Detail_Preservation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Applications of Electroencephalogram: Includes Imagined Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150242</link>
        <id>10.14569/IJACSA.2024.0150242</id>
        <doi>10.14569/IJACSA.2024.0150242</doi>
        <lastModDate>2024-02-28T12:51:30.6470000+00:00</lastModDate>
        
        <creator>S. Santhakumari</creator>
        
        <creator>Kamalakannan. J</creator>
        
        <subject>Electroencephalogram; brain signals; invasive; non-invasive; imagined speech; electrodes; epilepsy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the last two decades, Brain-Computer Interface systems based on EEG signals have assisted people in various ways: in particular, patients with paralysis, epilepsy, and Alzheimer&#39;s disease, as well as physically and visually challenged people and hard-of-hearing people. The electroencephalogram (EEG) is one of the non-invasive methods that can read human brain activity. The EEG has been used in many applications, especially in medicine; its applications are not limited to the medical domain and keep extending to many areas. This review covers the various applications of EEG, with particular emphasis on imagined speech. The main objective of this survey is to provide an understanding of imagined speech and, to some extent, useful future directions for decoding it. The different models used for imagined speech classification are discussed, along with the significance of choosing the number of electrodes and the main challenges in EEG.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_42-A_Review_on_Applications_of_Electroencephalogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Actor Critic-based Multi Objective Reinforcement Learning for Multi Access Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150241</link>
        <id>10.14569/IJACSA.2024.0150241</id>
        <doi>10.14569/IJACSA.2024.0150241</doi>
        <lastModDate>2024-02-28T12:51:30.6300000+00:00</lastModDate>
        
        <creator>Vishal Khot</creator>
        
        <creator>Vallisha M</creator>
        
        <creator>Sharan S Pai</creator>
        
        <creator>Chandra Shekar R K</creator>
        
        <creator>Kayarvizhy N</creator>
        
        <subject>Edge computing; reinforcement learning; multi objective optimization; neural networks; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In recent times, large applications that need near real-time processing are increasingly being used on devices with limited resources. Multi access edge computing is a computing paradigm that provides a solution to this problem by placing servers as close to resource-constrained devices as possible. However, the edge device must consider multiple conflicting objectives, viz., energy consumption, latency, task drop rate, and quality of experience. Many previous approaches optimize only one objective or a fixed linear combination of multiple objectives. These approaches do not ensure the best performance for applications that run on edge servers, as there is no guarantee that the solution they obtain lies on the Pareto front. In this work, Multi Objective Reinforcement Learning with an Actor-Critic model is proposed to optimize the drop rate, latency, and energy consumption parameters during offloading decisions. The model is compared with MORL-Tabular, MORL-Deep Q Network, and MORL-Double Deep Q Network models. The proposed model outperforms all the other models in terms of drop rate and latency.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_41-Actor_Critic_based_Multi_Objective_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepBiG: A Hybrid Supervised CNN and Bidirectional GRU Model for Predicting the DNA Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150240</link>
        <id>10.14569/IJACSA.2024.0150240</id>
        <doi>10.14569/IJACSA.2024.0150240</doi>
        <lastModDate>2024-02-28T12:51:30.5970000+00:00</lastModDate>
        
        <creator>Chai Wen Chuah</creator>
        
        <creator>Wanxian He</creator>
        
        <creator>De-Shuang Huang</creator>
        
        <subject>DNA sequencing; deep learning; convolutional neural networks; bidirectional gated recurrent; k-mer; tokenizing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Understanding the deoxyribonucleic acid (DNA) sequence is a major component of bioinformatics research. The amount of biological data is increasing tremendously. Hence, there is a need for effective approaches to handle the critical problem in the general computational framework of DNA sequence prediction and classification. Numerous deep learning approaches can be used to complete these tasks, compared to manual techniques that have been followed for ages. The aim of this project is to employ effective approaches for pre-processing DNA sequences and to use deep learning models to train on the sequences for making judgments, predictions, and classifications of DNA sequences into known categories. In this study, the pre-processing methods include k-mers and tokenization. We employ a novel hybrid deep learning algorithm that combines convolutional neural networks followed by bidirectional gated recurrent networks. This combination can capture dependencies within the genome sequence, even in large datasets with a lot of noise. The proposed model is compared with existing widely used models and classifiers. The results show that the proposed model achieves a good result with an accuracy of 82.90%. The dataset consists of 44,391 labeled DNA sequences obtained from the Encode project.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_40-DeepBiG_A_Hybrid_Supervised_CNN_and_Bidirectional_GRU_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Personal Protective Equipment (PPE) using an Anchor Free-Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150239</link>
        <id>10.14569/IJACSA.2024.0150239</id>
        <doi>10.14569/IJACSA.2024.0150239</doi>
        <lastModDate>2024-02-28T12:51:30.5830000+00:00</lastModDate>
        
        <creator>Honggang WANG</creator>
        
        <subject>PPE detection; deep learning; YOLOv8; industrial environments; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In industrial environments, the utilization of Personal Protective Equipment (PPE) is paramount for safeguarding workers from potential hazards. While various PPE detection methods have been explored in the literature, deep learning approaches have consistently demonstrated superior accuracy in comparison to other methodologies. However, addressing the pressing research challenge in deep learning-based PPE detection, which pertains to achieving high accuracy rates, non-destructive monitoring, and real-time capabilities, remains a critical need. To address this challenge, this study proposes a deep learning model based on the YOLOv8 architecture. This model is specifically designed to meet the rigorous demands of PPE detection, ensuring accurate results. The methodology involves the creation of a custom dataset and encompasses rigorous training, validation, and testing processes. Experimental results and performance evaluations validate the proposed method, illustrating its ability to achieve highly accurate results consistently. This research contributes to the field by offering an effective and robust solution for PPE detection in industrial environments, emphasizing the paramount importance of accuracy, non-destructiveness, and real-time capabilities in ensuring workplace safety.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_39-Detection_of_Personal_Protective_Equipment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Weighted Ensemble Model to Improve the Performance of Software Project Failure Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150238</link>
        <id>10.14569/IJACSA.2024.0150238</id>
        <doi>10.14569/IJACSA.2024.0150238</doi>
        <lastModDate>2024-02-28T12:51:30.5500000+00:00</lastModDate>
        
        <creator>Mohammad A. Ibraigheeth</creator>
        
        <creator>Aws I. Abu Eid</creator>
        
        <creator>Yazan A. Alsariera</creator>
        
        <creator>Waleed F. Awwad</creator>
        
        <creator>Majid Nawaz</creator>
        
        <subject>Ensemble learning; failure prediction; base models; project outcome</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The development of a software project is frequently influenced by various risk factors that can lead to project failure. Predicting potential software project failures early can aid organizations in making decisions regarding possible solutions and improvements. This paper proposes a software project failure prediction model based on a weighted ensemble learning approach. The proposed model aims to determine the failure probability as well as the expected project outcome (Success/Failure). Various ensemble approaches, such as simple majority voting, can be employed in predicting software project failure. However, in majority voting algorithms, all base models have the same weights, resulting in an equal effect on the final prediction result, regardless of their predictive abilities. Our proposed algorithm assigns higher weights to base models that demonstrate a greater ability to correctly predict more challenging data instances. The proposed model is developed based on a dataset gathered from real previous software project reports, comprising both successful and failed projects, to provide evidence supporting the predictive model&#39;s capabilities and to obtain high-confidence results. The performance of the developed model is comprehensively assessed through various measures, revealing its superiority in predicting software project failures compared to both simple majority voting and individual models. This research suggests that the proposed model can be integrated into the software system development process, spanning requirement analysis, planning, design, and implementation phases, to evaluate the project&#39;s status and identify potential risks.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_38-A_New_Weighted_Ensemble_Model_to_Improve_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Multistage Ensemble 1D Convolutional Neural Network for Trust Worthy Credit Decision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150237</link>
        <id>10.14569/IJACSA.2024.0150237</id>
        <doi>10.14569/IJACSA.2024.0150237</doi>
        <lastModDate>2024-02-28T12:51:30.5370000+00:00</lastModDate>
        
        <creator>Pavitha N</creator>
        
        <creator>Shounak Sugave</creator>
        
        <subject>Credit risk prediction; explainable AI; multistage ensemble; 1D convolutional neural network; interpretability; transparency; lending decisions; financial institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Banking is a dynamic industry that places significant importance on risk management, requiring accurate and interpretable AI models to make transparent lending decisions. This study introduces a groundbreaking approach that combines a multistage ensemble technique with a 1D convolutional neural network (CNN) architecture. The algorithm not only delivers superior classification performance but also offers interpretable explanations for its decisions. The algorithm is designed with multiple strategic steps to enhance model performance without sacrificing explainability. Thorough experiments were conducted using datasets from private banks and non-banking financial companies (NBFCs) in India to evaluate the algorithm&#39;s performance. It was compared against various state-of-the-art models, demonstrating remarkable precision, recall, F1 score, and accuracy values of 0.994, 0.992, 0.993, and 0.991, respectively. This outperformed competing models like homogeneous deep ensembles, 1D CNN, and Artificial Neural Networks (ANN). Furthermore, individual borrower dataset evaluations confirmed the proposed algorithm&#39;s consistency and efficiency, achieving precision, recall, F1 score, and accuracy values of 0.960, 0.961, 0.952, and 0.964, respectively. The research emphasizes the effectiveness of the explanatory integration decision process, wherein the Explainable Multistage Ensemble 1D CNN not only provides enhanced credit risk prediction but also facilitates transparent and interpretable lending decisions. The algorithm&#39;s ability to offer understandable explanations empowers financial institutions to make well-informed lending decisions, reduce credit risk, and foster a more stable and inclusive financial ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_37-Explainable_Multistage_Ensemble_1D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Action Recognition Method of Basketball Training Based on Big Data Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150236</link>
        <id>10.14569/IJACSA.2024.0150236</id>
        <doi>10.14569/IJACSA.2024.0150236</doi>
        <lastModDate>2024-02-28T12:51:30.5030000+00:00</lastModDate>
        
        <creator>Dongsheng CHEN</creator>
        
        <creator>Zhen Ni</creator>
        
        <subject>Action recognition; computer vision; big data technology; three-dimensional convolution; channel and spatial attention mechanisms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>To address the problem that improper posture in basketball players leads to poor training outcomes, this paper proposes an action recognition method combining computer vision and big data technology and applies it to athletes&#39; daily training and competition. Firstly, based on current mainstream motion recognition models, 3D graph convolutions are used to improve the original 3D convolution, enhancing the expression of spatial structure features and temporal features in skeleton sequences. Secondly, channel and spatial attention mechanisms are introduced to focus on the weight distribution of key points and strong features in different posture recognition processes. Finally, the proposed model is tested on real data, and the test results show that the model runs smoothly while maintaining high recognition performance. It can more effectively direct basketball players to implement comprehensive, systematic, and scientific teaching and training standards that directly support raising the overall level of play.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_36-Action_Recognition_Method_of_Basketball_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Impact of Train / Test Split Ratio on the Performance of Pre-Trained Models with Custom Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150235</link>
        <id>10.14569/IJACSA.2024.0150235</id>
        <doi>10.14569/IJACSA.2024.0150235</doi>
        <lastModDate>2024-02-28T12:51:30.4900000+00:00</lastModDate>
        
        <creator>Houda Bichri</creator>
        
        <creator>Adil Chergui</creator>
        
        <creator>Mustapha Hain</creator>
        
        <subject>Artificial intelligence; classification; MobileNetV2; ResNet50v2; sensitivity; specificity; train / test split ratio; VGG19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The proper allocation of data between training and testing is a critical factor influencing the performance of deep learning models, especially those built upon pre-trained architectures. Having a suitable training set size is an important factor for the classification model’s generalization performance. The main goal of this study is to find the appropriate training set size for three pre-trained networks using different custom datasets. To this end, the study presented in this paper explores the effect of varying the train / test split ratio on the performance of three popular pre-trained models, namely MobileNetV2, ResNet50v2 and VGG19, with a focus on the image classification task. In this work, three balanced datasets never seen by the models have been used, each containing 1000 images divided into two classes. The train / test split ratios used for this study are: 60-40, 70-30, 80-20 and 90-10. The focus was on the critical metrics of sensitivity, specificity and overall accuracy to evaluate the performance of the classifiers under the different ratios. Experimental results show that the performance of the classifiers is affected by varying the train / test split ratio for the three custom datasets. Moreover, with the three pre-trained models, using more than 70% of the dataset images for the training task gives better performance.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_35-Investigating_the_Impact_of_Train_Test_Split_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Neuro-Linguistic Decoding: Deepening Neural-Device Interaction with RNN-GRU for Non-Invasive Language Decoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150233</link>
        <id>10.14569/IJACSA.2024.0150233</id>
        <doi>10.14569/IJACSA.2024.0150233</doi>
        <lastModDate>2024-02-28T12:51:30.4570000+00:00</lastModDate>
        
        <creator>V Moses Jayakumar</creator>
        
        <creator>R. Rajakumari</creator>
        
        <creator>Kuppala Padmini</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Vijayalakshmi Ponnuswamy</creator>
        
        <subject>Recurrent Neural Networks (RNN); Gated Recurrent Units (GRU); neurolinguistic learning; neural devices; brain machine interfaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Exploring innovative pathways for non-invasive neural communication with language interfaces, this research delves into the interdisciplinary realm of neurolinguistic learning, merging neuroscience and machine learning. It scrutinizes the intricacies of decoding neural patterns associated with language comprehension. Leveraging advanced neural network architectures, specifically Deep Recurrent Neural Networks (RNN) and Gated Recurrent Units (GRU), the study aims to amplify the landscape of neuro-device interaction. The focus of Neurolinguistic Learning lies in extracting language-related brain signals without resorting to invasive procedures. Employing cutting-edge non-invasive methods and deep learning techniques, the research aims to elevate the capabilities of neural devices such as brain-machine interfaces and neuroprosthetics. A distinctive approach involves crafting a sophisticated Deep RNN-GRU model designed to capture intricate brain patterns linked to language processing. This architectural innovation, implemented in the Python software environment, harnesses the strengths of RNNs and GRUs to enhance language decoding. The study&#39;s outcomes hold promise for advancing non-invasive brain language decoding systems, contributing to the expanding knowledge base in neurolinguistic learning. The remarkable accuracy of the proposed RNN-GRU model, boasting a 90% accuracy rate, signifies its potential application in critical real-world scenarios. This includes assistive technologies and brain-machine interfaces where precise decoding of cerebral language signals is paramount. The research underscores the efficacy of deep learning methodologies in pushing the boundaries of neurotechnology. Notably, the model outperforms established techniques, surpassing alternatives like CSP-SVM and EEGNet by an impressive 30.4% in accuracy. The model&#39;s proficiency in deciphering topic words underscores its ability to extract intricate language patterns from non-invasive brain inputs.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_33-Elevating_Neuro_Linguistic_Decoding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Post Pandemic Tourism: Sentiment Analysis using Support Vector Machine Based on TikTok Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150234</link>
        <id>10.14569/IJACSA.2024.0150234</id>
        <doi>10.14569/IJACSA.2024.0150234</doi>
        <lastModDate>2024-02-28T12:51:30.4570000+00:00</lastModDate>
        
        <creator>Norlina Mohd Sabri</creator>
        
        <creator>Siti Nur Athira Muhamad Subki</creator>
        
        <creator>Ummu Fatihah Mohd Bahrin</creator>
        
        <creator>Mazidah Puteh</creator>
        
        <subject>Post pandemic tourism; support vector machine; sentiment classification; TikTok data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The tourism industry was one of the businesses hit hardest by the Covid-19 pandemic and has been struggling to recover ever since. However, the industry has now started to flourish again with the lifting of all Covid-19 restrictions. This research aims to analyze the sentiments of tourists using the Support Vector Machine (SVM) algorithm to understand their views on tourist spots after the pandemic. The scope of the research covers the state of Terengganu, which is popularly known for its islands and unique culture on the east coast of Malaysia. TikTok data has been used as the source of data, as social media has become one of the top mediums for reviewing, selling and promoting products and services. The objective of the research is to explore the SVM algorithm in the sentiment classification of tourist spots in Terengganu. This research is expected to help Tourism Terengganu improve its tourist spots and services. The phases of the research include collecting data from TikTok, data pre-processing, data labelling, feature extraction, model creation using SVM, graphical user interface development and performance evaluation. The evaluation results showed that the performance of the SVM classifier model was good and reliable, with 90.68% accuracy. Future work would involve regularly collecting more data from TikTok to further improve the accuracy of the model.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_34-Post_Pandemic_Tourism_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Modal Video Retrieval Model Based on Video-Text Dual Alignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150232</link>
        <id>10.14569/IJACSA.2024.0150232</id>
        <doi>10.14569/IJACSA.2024.0150232</doi>
        <lastModDate>2024-02-28T12:51:30.4270000+00:00</lastModDate>
        
        <creator>Zhanbin Che</creator>
        
        <creator>Huaili Guo</creator>
        
        <subject>Video-text alignment; cross-modal; contrastive learning; similarity measure; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Cross-modal video retrieval remains a major challenge in natural language processing due to the natural semantic divide between video and text. Most approaches use a single encoder to extract video and text features separately and train on video-text pairs by means of contrastive learning, but this global alignment of video and text tends to neglect the more fine-grained features of both. In addition, some studies focus only on profiling the video description text, ignoring its correlation with the video. Therefore, this paper proposes a video retrieval method based on video-text alignment, which realizes both global and fine-grained alignment between video and text. For global alignment, the video and text are aligned by a single encoder after linear projection; for fine-grained alignment, the video encoder is trained to align the video and text by masking some semantic information in the text. Through experimental comparison with multiple existing methods on the MSR-VTT and MSVD datasets, the model achieves R@1 (recall at 1) metrics of 51.5% and 52.4%, respectively, which indicates that the proposed model can improve the efficiency of cross-modal video retrieval.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_32-Cross_Modal_Video_Retrieval_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Precision Insulin Delivery: Predictive Modelling for Bolus Insulin Injection in Real-Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150231</link>
        <id>10.14569/IJACSA.2024.0150231</id>
        <doi>10.14569/IJACSA.2024.0150231</doi>
        <lastModDate>2024-02-28T12:51:30.4100000+00:00</lastModDate>
        
        <creator>V. K. R. Rajeswari Satuluri</creator>
        
        <creator>Vijayakumar Ponnusamy</creator>
        
        <subject>Continuous glucose monitoring; bolus insulin prediction; data curation; data detersion; diabetes mellitus; exploratory data analysis; feature selection; machine learning; pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Insulin is recommended for patients with Diabetes Mellitus (DM). It is challenging for doctors to prescribe accurate bolus insulin before every meal due to real-time factors such as the size of the meal, skipping a previous meal, and physical activity, which can put the patient at risk of hyperglycemia or hypoglycemia. Previous studies performed insulin prediction with methods that did not consider the cases of controlled glucose levels, the type of insulin prescribed, the time of insulin induction, and data detersion, all of which can alter the predictions. To address these problems, our work proposes an insulin predictive model built from the integration of Internet of Things (IoT) devices, i.e., a Continuous Glucose Monitoring (CGM) sensor and insulin pumps with rapid-acting insulin, where the insulin dosage with corresponding Current Blood Glucose levels (CBG) and improved Next Blood Glucose levels (NBG) are chosen. The dataset is subjected to data detersion, where pre-processing, Exploratory Data Analysis (EDA), and Feature Selection are performed. Machine Learning (ML) models are applied to the curated dataset, where the Decision Tree (DT)-Bagging algorithm performed best, with a Mean Absolute Error (MAE) of 1.54 and a Mean Square Error (MSE) of 4.15. The performance metrics of the current study imply its suitability in medical applications for accurate prediction of real-time insulin dosage.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_31-Precision_Insulin_Delivery_Predictive_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Thyroid Cancer Diagnostics Through Hybrid Machine Learning and Metabolomics Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150230</link>
        <id>10.14569/IJACSA.2024.0150230</id>
        <doi>10.14569/IJACSA.2024.0150230</doi>
        <lastModDate>2024-02-28T12:51:30.3800000+00:00</lastModDate>
        
        <creator>Meghana G Raj</creator>
        
        <subject>Thyroid cancer; hybrid ML models; metabolomics; diagnostic accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Thyroid cancer, a prevalent endocrine malignancy, necessitates advanced diagnostic techniques for accurate and early detection. This study introduces an innovative approach that integrates hybrid Machine Learning (ML) algorithms with metabolomics, offering a novel pathway in thyroid cancer diagnostics. Our methodology employs a range of hybrid ML models, combining the strengths of various algorithms to analyze complex metabolomic data effectively. These models include ensemble methods, neural network-based hybrids, and integrations of unsupervised and supervised learning techniques, tailored to decipher the intricate patterns within metabolic profiles associated with thyroid cancer. The study demonstrates how these hybrid ML algorithms can efficiently process and interpret metabolomic data, leading to enhanced diagnostic accuracy. By leveraging the distinct characteristics of each ML model, our approach not only improves the detection of thyroid cancer but also contributes to a deeper understanding of its metabolic underpinnings. The findings of this study pave the way for more personalized and precise medical interventions in thyroid cancer management, showcasing the potential of hybrid ML models in revolutionizing cancer diagnostics. Our system analyzes thyroid cancer metabolomic data using ensemble methods, neural network-based hybrids, and unsupervised and supervised learning integrations. The research shows that hybrid ML models may revolutionize cancer diagnosis by improving accuracy. LSTM+CNN, LSTM+GRU, and CNN+GRU achieve high accuracy rates, helping us comprehend thyroid cancer&#39;s biochemical roots. Hybrid ML models enhance thyroid cancer diagnosis and management, enabling more tailored and accurate medical treatments. Hybrid machine learning models such as LSTM+CNN, LSTM+GRU, and CNN+GRU outperform CNN, VGG-19, Inception-ResNet-v2, decision support systems, and random forests, reaching 99.45% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_30-Enhancing_Thyroid_Cancer_Diagnostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance-Optimised Design of the RISC-V Five-Stage Pipelined Processor NRP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150229</link>
        <id>10.14569/IJACSA.2024.0150229</id>
        <doi>10.14569/IJACSA.2024.0150229</doi>
        <lastModDate>2024-02-28T12:51:30.3800000+00:00</lastModDate>
        
        <creator>Hongkui Li</creator>
        
        <creator>Chaoxia Jing</creator>
        
        <creator>Jie Liu</creator>
        
        <subject>Architecture; FPGA; RISC-V; RV32I; Verilog HDL; five-stage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The five-stage pipeline processor is a mature and stable processor architecture suitable for many applications in the field of computer hardware. Based on the RISC-V instruction set architecture, the five-stage pipeline processor has advantages in performance, functionality, and power consumption. This paper presents an optimized RV32I five-stage pipeline processor, NRP, and proposes two optimization methods to improve the performance of NRP: instruction decoding unit optimization and branch prediction optimization. We implemented NRP using Verilog HDL and verified its performance using Vivado and the Xilinx Arty A7-35T FPGA board. Experimental data shows that after adopting these methods, the CoreMark score of the five-stage pipeline processor reached 3.11 CoreMark/MHz, representing an 11.07% performance improvement.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_29-Performance_Optimised_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved ORB Algorithm Through Feature Point Optimization and Gaussian Pyramid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150228</link>
        <id>10.14569/IJACSA.2024.0150228</id>
        <doi>10.14569/IJACSA.2024.0150228</doi>
        <lastModDate>2024-02-28T12:51:30.3470000+00:00</lastModDate>
        
        <creator>Rohmat Indra Borman</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Wahyono</creator>
        
        <subject>Feature point; Gaussian pyramid; image matching; ORB algorithm; scale invariance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Feature points obtained using traditional ORB methods often exhibit redundancy, uneven distribution, and lack scale invariance. This study enhances the traditional ORB algorithm by presenting an optimal technique for extracting feature points, thereby overcoming these challenges. Initially, the image is partitioned into several areas. The determination of the quantity of feature points to be extracted from each region takes into account both the overall number of feature points and the number of divisions that the image undergoes. This method tackles concerns related to the overlap and redundancy of feature points in the extraction process. To counteract the non-scale invariance issue in feature points obtained via the ORB method, a Gaussian pyramid is employed, and feature points are extracted at each level. Experimental findings demonstrate that our method successfully extracts feature points with greater uniformity and rationality, while preserving image matching accuracy. Specifically, our technique outperforms the traditional ORB algorithm by approximately 4% and the SURF algorithm by 2% in terms of matching performance. Additionally, the processing time of our proposed algorithm is three times faster than that of the SURF algorithm and twelve times faster than the SIFT algorithm.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_28-Improved_ORB_Algorithm_Through_Feature_Point_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Enhanced Comprehensive Liver Tumor Prediction using Convolutional Autoencoder and Genomic Signatures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150227</link>
        <id>10.14569/IJACSA.2024.0150227</id>
        <doi>10.14569/IJACSA.2024.0150227</doi>
        <lastModDate>2024-02-28T12:51:30.3170000+00:00</lastModDate>
        
        <creator>G. Prabaharan</creator>
        
        <creator>D. Dhinakaran</creator>
        
        <creator>P. Raghavan</creator>
        
        <creator>S. Gopalakrishnan</creator>
        
        <creator>G. Elumalai</creator>
        
        <subject>Liver tumor prediction; autoencoder; segmentation; feature extraction; genomics; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Liver tumor prediction plays a pivotal role in optimizing treatment strategies and improving patient outcomes. In our proposed work, we present an innovative AI-driven framework for liver tumor prediction, uniting cutting-edge techniques to enhance precision and depth of analysis. The framework integrates a Histological Convolutional Autoencoder (HistoCovAE) for meticulous tumor segmentation in medical imaging, and Genomic Feature Extraction (MIRSLiC) for a nuanced understanding of molecular markers. Additionally, a Multidimensional Feature Extraction module amalgamates videomics, radiomics, acoustics, and clinical data, creating a comprehensive dataset. These dimensions synergize in a unified model, offering detailed predictions encompassing tumor characteristics, subtypes, and prognosis. Model evaluation and continuous improvement, guided by real-world outcomes, underscore reliability. This integrative approach transcends conventional boundaries, providing clinicians with actionable insights for personalized treatment strategies and heralding a new era in liver tumor prediction. Our model undergoes rigorous evaluation against diverse datasets, and the performance metrics underscore its reliability and accuracy. With precision exceeding 87%, recall rates above 92%, and a Dice coefficient surpassing 0.89 in tumor segmentation, our model showcases exceptional accuracy and robustness. In prognostic modeling, survival prediction accuracy consistently surpasses 84%, highlighting the model&#39;s ability to provide valuable insights into the future trajectory of liver cancer.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_27-AI_Enhanced_Comprehensive_Liver_Tumor_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Plant Disease Detection in Leaves: An Innovative Hybrid ABOCNN Framework for Advanced and Accurate Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150226</link>
        <id>10.14569/IJACSA.2024.0150226</id>
        <doi>10.14569/IJACSA.2024.0150226</doi>
        <lastModDate>2024-02-28T12:51:30.3000000+00:00</lastModDate>
        
        <creator>V. Krishna Pratap</creator>
        
        <creator>N. Suresh Kumar</creator>
        
        <subject>Convolutional Neural Network (CNN); attention model; leaf disease detection; attention-based one-class neural network; crop production</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Plant diseases are a persistent threat to the global agricultural economy, compromising food supply and security. Accurate and early diagnosis is vital for effective agricultural management. This study addresses this gap by introducing a better approach for identifying plant diseases in leaves: the Integrated Hybrid Attention-Based One-Class Neural Network (ABOCNN) System. The system uses deep learning and domain-specific information, as well as powerful neural networks and attention processes, to extract features unique to a certain ailment while excluding irrelevant data. By dynamically focusing on prominent areas in leaf images, the proposed methodology obtains an impressive 99.6% accuracy, beating both traditional approaches and cutting-edge deep-learning approaches by an average of 12.7%. The practical use of this strategy has a significant influence on crop yield and agricultural sustainability. Attention maps increase interpretability and help individuals comprehend more fully how decisions are made, giving important information about the model&#39;s decision-making processes. The system, written in Python, is precise, scalable, and adaptable, making it a helpful tool for a wide range of agricultural applications combining multiple plant species and disease classifications.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_26-Revolutionizing_Plant_Disease_Detection_in_Leaves.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sky Pixel Detection in Outdoor Urban Scenes: U-Net with Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150225</link>
        <id>10.14569/IJACSA.2024.0150225</id>
        <doi>10.14569/IJACSA.2024.0150225</doi>
        <lastModDate>2024-02-28T12:51:30.2870000+00:00</lastModDate>
        
        <creator>Athar Ibrahim Alboqomi</creator>
        
        <creator>Rehan Ullah Khan</creator>
        
        <subject>Computer vision; transfer learning; semantic segmentation; sky detection; U-Net; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The sky carries high visual importance in outdoor scenes, often appearing in video sequences and photos. Accurate sky detection is crucial in several computer vision applications, such as scene understanding, navigation, surveillance, and weather forecasting. Detection is made difficult by variations in the sky&#39;s size, weather and lighting conditions, and the sky&#39;s reflection on other objects. This article presents a new contribution to address these challenges. A unique dataset was built that includes scenes with distinct lighting and atmospheric phenomena. Additionally, a modified U-Net architecture was proposed with pre-trained models (VGG19, EfficientNetB4, InceptionV3, and DenseNet121) as encoders for sky detection, to overcome outdoor image limitations and evaluate the influence of different encoders when integrated with the U-Net, aiming to identify which encoder describes sky features most accurately. The proposed approach shows encouraging results: the modified U-Net with InceptionV3 performs best on the proposed dataset, achieving mean Intersection over Union, Dice similarity coefficient, recall, precision, and accuracy of 98.57%, 99.57%, 99.41%, 99.73%, and 99.40%, respectively. The best loss, 0.09, was achieved by U-Net with VGG19.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_25-Sky_Pixel_Detection_in_Outdoor_Urban_Scenes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Driving Area Detection Algorithm Based on Improved Swin Transformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150224</link>
        <id>10.14569/IJACSA.2024.0150224</id>
        <doi>10.14569/IJACSA.2024.0150224</doi>
        <lastModDate>2024-02-28T12:51:30.2530000+00:00</lastModDate>
        
        <creator>Shuang Liu</creator>
        
        <creator>Ying Li</creator>
        
        <creator>Huankun Sheng</creator>
        
        <subject>CNNs; driving area detection; multiscale fusion; semantic segmentation; Swin Transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Drivable area or free space detection is an essential part of the perception system of an autonomous vehicle. It helps intelligent vehicles understand road conditions and determine safe driving areas. Most driving area detection algorithms are based on semantic segmentation, which classifies each pixel into its category, and recent advances in convolutional neural networks (CNNs) have significantly facilitated semantic segmentation in driving scenarios. Though promising results have been obtained, existing CNN-based drivable area detection methods usually process one local neighborhood at a time, and the locality of the convolutional operation fails to capture long-range dependencies. To solve this problem, we propose an improved Swin Transformer based on shifted windows, named Multi-Swin. First, an improved patch merging strategy is proposed to enhance feature interactions between adjacent patches. Second, a decoder with an upsampling layer is designed to restore the resolution of the feature map. Finally, a multi-scale fusion module is utilized to improve the representation of global semantic and geometric information. Our method is evaluated and tested on the publicly available Cityscapes dataset. The experimental results show that our method achieves 91.92% IoU in road segmentation, surpassing state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_24-A_Driving_Area_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ethnicity Classification Based on Facial Images using Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150223</link>
        <id>10.14569/IJACSA.2024.0150223</id>
        <doi>10.14569/IJACSA.2024.0150223</doi>
        <lastModDate>2024-02-28T12:51:30.2400000+00:00</lastModDate>
        
        <creator>Abdul-aziz Kalkatawi</creator>
        
        <creator>Usman Saeed</creator>
        
        <subject>Vision transformer; deep learning; ethnicity; race; classification; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Race and ethnicity are terminologies used to describe and categorize humans into groups based on biological and sociological criteria. One of these criteria is physical appearance, such as facial traits, which are explicitly represented by a person’s facial structure. The field of computer science has mostly been concerned with the automatic detection of human ethnicity using computer vision-based techniques, which can be challenging due to the ambiguity and complexity of how an ethnic class can be implicitly inferred from facial traits in terms of quantitative and conceptual models. Current techniques for ethnicity recognition in computer vision are based on encoded facial feature descriptors or Convolutional Neural Network (CNN) based feature extractors. However, deep learning techniques developed for image-based classification can provide a better end-to-end solution for ethnicity recognition. This paper is a first attempt to utilize a deep learning-based technique called the vision transformer to recognize the ethnicity of a person using real-world facial images. The implementation of the Multi-Axis Vision Transformer achieves 77.2% classification accuracy for the ethnic groups of Asian, Black, Indian, Latino Hispanic, Middle Eastern, and White.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_23-Ethnicity_Classification_Based_on_Facial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sound Classification for Javanese Eagle Based on Improved Mel-Frequency Cepstral Coefficients and Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150222</link>
        <id>10.14569/IJACSA.2024.0150222</id>
        <doi>10.14569/IJACSA.2024.0150222</doi>
        <lastModDate>2024-02-28T12:51:30.2230000+00:00</lastModDate>
        
        <creator>Silvester Dian Handy Permana</creator>
        
        <creator>T. K. Abdul Rahman</creator>
        
        <subject>Improved MFCC; deep convolutional neural network; Javanese eagle sound; sound classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The Javanese Eagle is a rare and protected animal in Indonesia. Few individuals remain, the species is threatened with extinction, and the birds need to be bred to avoid dying out. Javanese Eagles communicate with one another through their calls, which can be recorded, studied, and classified to support the conservation of this endangered animal. This study classifies Javanese Eagle calls using an improved MFCC (Mel-Frequency Cepstral Coefficients) front end and a Deep Convolutional Neural Network. The calls are classified into three states: lacking food or drink, the normal state, and searching for a mate. The research compared the CNN architecture against AlexNet and VGGNet models with various combinations of training, validation, and test data; the best model used a split of 80% for training, 10% for validation, and 10% for testing. Both the IMFCC and VGGNet models were trained and tested on the same dataset. VGGNet achieved 100% accuracy during training and 99% during testing, with ROC AUCs of 0.996 for &#39;Normal&#39;, 1.000 for &#39;Looking for Partner&#39;, and 0.996 for &#39;Looking for Food&#39;. This study aids Javanese Eagle conservation, crucial for preventing extinction at conservation sites.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_22-Sound_Classification_for_Javanese_Eagle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rural Revitalization Evaluation using a Hybrid Method of BP Neural Network and Genetic Algorithm Based on Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150221</link>
        <id>10.14569/IJACSA.2024.0150221</id>
        <doi>10.14569/IJACSA.2024.0150221</doi>
        <lastModDate>2024-02-28T12:51:30.1770000+00:00</lastModDate>
        
        <creator>Songmei Wang</creator>
        
        <creator>Min Han</creator>
        
        <subject>The rural revitalization strategy; deep learning model; the GA-BP neural network; evaluation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The rural revitalization strategy is a comprehensive plan for supporting rural revival in the new development stage while prioritizing agricultural and rural area development. Establishing a rural revitalization evaluation model will help monitor and guide the development of rural revitalization strategies and comprehensively deepen rural reforms. This research combines the benefits of the BP neural network with the genetic algorithm, introducing the genetic algorithm to optimize the weights and thresholds of the BP neural network, and develops a GA-BP neural network model to evaluate and predict rural revitalization. The research findings demonstrate that the GA-BP neural network model converges rapidly and evaluates and predicts rural revitalization accurately and stably.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_21-Rural_Revitalization_Evaluation_using_a_Hybrid_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Q-KGSWS: Querying the Hybrid Framework of Knowledge Graph and Semantic Web Services for Service Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150220</link>
        <id>10.14569/IJACSA.2024.0150220</id>
        <doi>10.14569/IJACSA.2024.0150220</doi>
        <lastModDate>2024-02-28T12:51:30.1600000+00:00</lastModDate>
        
        <creator>Pooja Thapar</creator>
        
        <creator>Lalit Sen Sharma</creator>
        
        <subject>Ontologies; knowledge graph; semantic web services; SPARQL query language; OWLS; data integration; service discovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In the era of big data, Knowledge Graphs (KGs) have become essential tools for managing interconnected datasets across various domains. This paper introduces a novel RDF (Resource Description Framework) based Knowledge Graph of Semantic Web Services (KGSWS), designed to enhance service discovery. Leveraging the versatile SPARQL query language, the framework facilitates precise querying operations on KGSWS, enabling customized service matching for user queries. Through comprehensive experimentation and analysis, notable improvements in accuracy (69.75% and 90.01%) and rapid response times (0.61s and 1.57s) across two semantic search levels are demonstrated, validating the efficacy of the approach. Furthermore, research questions regarding the interlinking of ontologies, methods for formulating automatic queries, and efficient retrieval of services are addressed, offering insights into future avenues for research. This work represents a significant advancement in the domain of semantic web services, with potential applications across various industries reliant on efficient service identification and integration. Future phases of research will focus on logical inference and the integration of machine learning-based graph embedding models, promising even greater strides in knowledge discovery within the KGSWS framework, thus reshaping the domain of semantic web services.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_20-Q_KGSWS_Querying_the_Hybrid_Framework_of_Knowledge_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Dust Reduction System: An IoT Intervention for Air Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150219</link>
        <id>10.14569/IJACSA.2024.0150219</id>
        <doi>10.14569/IJACSA.2024.0150219</doi>
        <lastModDate>2024-02-28T12:51:30.1300000+00:00</lastModDate>
        
        <creator>Bosharah Makki Zakri</creator>
        
        <creator>Ohoud Alzamzami</creator>
        
        <creator>Amal Babour</creator>
        
        <subject>Dust Suppression; dust elimination; digital dust sensor; humidifier; dust intensity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Air quality is of great importance due to its direct impact on the environment, human health, and quality of life. It can be affected negatively by the presence of dust particles in the atmosphere. Thus, it is vital to purify the air of dust and mitigate its impact on air quality. In this regard, dust sensors play a vital role in monitoring and measuring airborne dust particles. They utilize various techniques, such as optical scattering, to detect and quantify the concentration of dust in the air. Microcontrollers are powerful and versatile devices, which have been widely used in many Internet of Things (IoT) applications. They process the data collected from sensors and react accordingly by controlling the operation of IoT devices. Accordingly, the primary goal of this paper is to develop a model for reducing the amount of dust and other particulates in the air to improve its quality. In addition to the microcontroller, which controls the overall operation of the proposed model, two other main components are utilized: a sensor and a sprinkler. The results show that the model can successfully reduce the dust concentration and suppress the dust intensity to less than 0.1%. These results confirm that the proposed model achieves its primary goal by integrating a sensor and a sprinkler into an intelligent dust removal model.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_19-Automatic_Dust_Reduction_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Combined Ensemble Model (CEM) for a Liver Cancer Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150218</link>
        <id>10.14569/IJACSA.2024.0150218</id>
        <doi>10.14569/IJACSA.2024.0150218</doi>
        <lastModDate>2024-02-28T12:51:30.1130000+00:00</lastModDate>
        
        <creator>T. Sumallika</creator>
        
        <creator>R. Satya Prasad</creator>
        
        <subject>Liver Cancer; Hepatocellular Carcinoma (HCC); Combined Ensemble Model (CEM); RESNET50; Extreme Gradient Boosting (EGB); Recurrent Neural Network (RNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The liver is one of the most important organs in the human body. The liver&#39;s proper function is critical for overall health, and liver diseases or disorders can have serious consequences. Liver cancer, also known as hepatic cancer, comprises several types distinguished by the cell type in which the cancer originates. The most common type is hepatocellular carcinoma (HCC), which accounts for up to 85% of liver cancer cases worldwide. Early detection of liver cancer is essential in healthcare because it increases the chances of successful treatment and improves patient outcomes. Many researchers have developed models that help detect and diagnose liver cancer. The first step in detecting liver cancer is identifying people at higher risk. Chronic hepatitis B or C infection, cirrhosis, heavy alcohol use, obesity, and exposure to certain chemicals and toxins are all risk factors. This paper focuses on detecting the cancer-affected regions of the liver. A combined ensemble model (CEM) for a liver cancer detection system is developed to find and detect liver cancer and liver disorders in their early stages. A pre-trained RESNET50 model with transfer learning is used to extract features, and an advanced preprocessing technique filters noise from the input CT scan images. A hybrid feature extraction (HFE) technique also extracts significant features from the input CT scan images. Finally, the proposed CEM combines an Extreme Gradient Boosting (EGB) algorithm with a Recurrent Neural Network (RNN) to detect the abnormal cancer cells present in input CT scan images. The CEM achieves a high accuracy of 98.48%, a detection rate roughly 10% higher than the previous 88.12%.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_18-A_Combined_Ensemble_Model_CEM_for_a_Liver_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Framework for Enhancing Automated Teller Machines (ATMs) Fraud Prevention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150217</link>
        <id>10.14569/IJACSA.2024.0150217</id>
        <doi>10.14569/IJACSA.2024.0150217</doi>
        <lastModDate>2024-02-28T12:51:30.0830000+00:00</lastModDate>
        
        <creator>Mohamed Abdelsalam Ahmed</creator>
        
        <creator>Nada Tarek Abbas Haleem</creator>
        
        <creator>Amira M. Idrees</creator>
        
        <subject>Automated Teller Machines (ATMs); digital banking; image processing; iris recognition; One Time Password (OTP); machine learning; fraud detection; fraud prevention; biometrics; security; banking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Over the past years, clients have largely depended on and trusted Automated Teller Machines (ATMs) to fulfill their banking needs and control their accounts easily and quickly. Despite the significant advantages of ATMs, fraud has become a serious risk, as it can give attackers control of clients&#39; accounts. In this paper, the proposed framework uses iris recognition technology combined with a one-time password (OTP) to detect and prevent both known and unknown attacks on ATMs, and maintains a table of attackers and suspected attackers with a counter used to take preventive action against them. The proposed preventive actions are: card withdrawal, flagging the identified iris as an attacker in the database, notifying the card owner of the suspicious behavior, reporting to the Central Bank of Egypt (CBE), and calling the police when an attacker&#39;s iris is captured three times, even across different cards. Two case studies were conducted to achieve the highest accuracy: the first used the Chinese Academy of Sciences&#39; Institute of Automation V1.0 (CASIA-IrisV1) dataset with Cosine Distance; the second used the Indian Institute of Technology Delhi (IITD) dataset with k-Nearest Neighbors (KNN) and Histogram of Oriented Gradients (HOG) together, reaching 100% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_17-A_Smart_Framework_for_Enhancing_Automated_Teller_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Roadmap for Generative Models Redefining Learning in Egyptian Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150216</link>
        <id>10.14569/IJACSA.2024.0150216</id>
        <doi>10.14569/IJACSA.2024.0150216</doi>
        <lastModDate>2024-02-28T12:51:30.0670000+00:00</lastModDate>
        
        <creator>Laila Mohamed ElFangary</creator>
        
        <subject>Artificial intelligence; generative models; prompt engineering; higher education; Egyptian universities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Artificial Intelligence (AI) generative models have become powerful tools across the sciences, research, academia, and business. Egyptian universities need to leverage these models, using them ethically and responsibly, to survive in the current global market. This paper traces the evolution of these models, from basic natural language processing by IBM in 1954 to today&#39;s powerful generative models. It reviews techniques for eliciting desired outputs and behaviors from generative models, including prompt engineering, chain-of-thought prompting, and ReAct. It then presents the readiness of Egypt and Egyptian universities and the steps taken to benefit from the latest AI technologies, and examines the training of these models to identify their advantages and disadvantages for university members, focusing on the Egyptian context. The resulting roadmap for the use of generative models at Egyptian universities comprises: a SWOT analysis; an infographic of policies and guidelines for faculty and student use of generative models that promotes academic integrity and innovation while minimizing the risks associated with this technology; a table specifying the types and severities of penalties for policy violations by students; and a framework of reusable patterns that helps nontechnical users obtain the optimal desired output from the models.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_16-Roadmap_for_Generative_Models_Redefining_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Addressing Imbalanced Data in Network Intrusion Detection: A Review and Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150215</link>
        <id>10.14569/IJACSA.2024.0150215</id>
        <doi>10.14569/IJACSA.2024.0150215</doi>
        <lastModDate>2024-02-28T12:51:30.0370000+00:00</lastModDate>
        
        <creator>Elham Abdullah Al-Qarni</creator>
        
        <creator>Ghadah Ahmad Al-Asmari</creator>
        
        <subject>Network intrusion detection system; data imbalance; resampling; data level techniques; hybrid techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The proliferation of internet-connected devices, including smartphones, smartwatches, and computers, has led to an unprecedented surge in data generation. The rapid rise in device connectivity points to an urgent need for robust cybersecurity measures to counter the mounting wave of cyber threats. Among the strategies aimed at establishing efficient network intrusion detection systems, the integration of machine learning techniques is a prominent avenue. However, the application of machine learning models to imbalanced intrusion detection datasets, such as NSL-KDD, CICIDS2017, and UGR&#39;16, presents challenges. In such intricate scenarios, accurately distinguishing network intrusions poses a formidable challenge. The term &quot;imbalance&quot; refers to an uneven distribution of data across classes, which adversely affects the precision of machine learning classifiers. This comprehensive survey embarks on a thorough exploration of the spectrum of methodologies proposed to address the challenge of imbalanced data. Simultaneously, it assesses the efficacy of these methodologies within the realm of network intrusion detection. Moreover, by shedding light on the potential consequences of not effectively tackling imbalanced data, this study aims to provide a holistic understanding of the intricate interplay between machine learning and intrusion detection in imbalanced settings.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_15-Addressing_Imbalanced_Data_in_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Intrusion Detection System Based on Data Resampling and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150214</link>
        <id>10.14569/IJACSA.2024.0150214</id>
        <doi>10.14569/IJACSA.2024.0150214</doi>
        <lastModDate>2024-02-28T12:51:30.0370000+00:00</lastModDate>
        
        <creator>Huan Chen</creator>
        
        <creator>Gui-Rong You</creator>
        
        <creator>Yeou-Ren Shiue</creator>
        
        <subject>Intrusion detection; deep learning; random undersampling; synthetic minority oversampling technique; convolutional neural network; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The growth of the internet has advanced information-sharing capabilities and vastly increased the importance of global network security. However, because new and inconspicuous abnormal behaviors are nearly impossible to detect in massive network access environments, modern intrusion detection systems have identified a high rate of false-positive (FP) and false-negative (FN) attacks. To overcome this, this paper proposes a hybrid deep learning model that significantly mitigates the disadvantages of consistently imbalanced sample attack data. First, it resolves imbalanced data using random undersampling and synthetic minority oversampling techniques. Then, convolutional neural networks (CNNs) extract local and spatial features, and a transformer encoder extracts global and temporal features. The novelty of this combination increases recognition accuracy at the algorithm level, which is crucial to reducing FPs and FNs. The model was subjected to multiclassification testing on the NSL-KDD and CICIDS2017 benchmark datasets, and the results show that our model has higher classification accuracy and lower FP rates than state-of-the-art intrusion detection models. Moreover, it significantly improves the detection rate of low-frequency attacks.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_14-Hybrid_Intrusion_Detection_System_Based_on_Data_Resampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web-based Expert Bots System in Identifying Complementary Personality Traits and Recommending Optimal Team Composition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150213</link>
        <id>10.14569/IJACSA.2024.0150213</id>
        <doi>10.14569/IJACSA.2024.0150213</doi>
        <lastModDate>2024-02-28T12:51:30.0030000+00:00</lastModDate>
        
        <creator>Mysaa Fatani</creator>
        
        <creator>Haneen Banjar</creator>
        
        <subject>Web-based expert system; personality traits; team composition; workplace efficiency and chatbot integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The use of web-based expert systems in the workplace has become increasingly common in recent years, with companies using these automated tools to streamline a range of tasks, from customer service to employee training. However, the potential of web-based expert bot systems to help build more effective teams by identifying employees with complementary personality traits and providing recommendations for team composition has received less attention. This paper investigates the application of a web-based expert bot system in identifying complementary personality traits among employees to recommend optimal team compositions. We developed a web-based expert bot system, augmented by a chatbot interface, to evaluate and synthesize employee personality profiles for improved team alignment. The results, derived from questionnaire feedback and prototype assessments, demonstrate the system&#39;s capability to enhance team performance metrics and behavioral competencies. The discussion outlines the system&#39;s advantages and its potential in organizational settings, and acknowledges its limitations. Web-based expert systems with chatbots that exhibit unique personalities tend to be more engaging and effective. Consequently, this system is expected not only to foster better team cohesion but also to increase user involvement and satisfaction. Future work is dedicated to expanding the system&#39;s capabilities and conducting extensive field testing to establish its practical effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_13-Web_based_Expert_Bots_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Manipulation in Wireless Sensor Networks: Enhancing Security Through Blockchain Integration with Proposal Mitigation Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150212</link>
        <id>10.14569/IJACSA.2024.0150212</id>
        <doi>10.14569/IJACSA.2024.0150212</doi>
        <lastModDate>2024-02-28T12:51:29.9900000+00:00</lastModDate>
        
        <creator>Ayoub Toubi</creator>
        
        <creator>Abdelmajid Hajami</creator>
        
        <subject>Wireless sensor networks; blockchain technology; network security; data integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>In recent years, Wireless Sensor Networks (WSNs) have become integral in various applications ranging from environmental monitoring to defense. However, the security and reliability of these networks remain a paramount concern due to their susceptibility to various types of cyber-attacks and failures. This paper proposes a novel integration of blockchain technology with WSNs to address these challenges. Blockchain, with its decentralized and tamper-resistant ledger, offers a robust framework to enhance the security and reliability of sensor networks. The study begins by analyzing the current security threats and challenges faced by WSNs, emphasizing the need for a solution that can ensure data integrity, confidentiality, and network resilience. We then introduce blockchain technology and discuss its key features, such as decentralization, immutability, and consensus algorithms, which are beneficial in creating a secure and reliable WSN environment. Subsequently, we present a detailed architecture of how blockchain can be integrated with WSNs. This includes the deployment of a lightweight blockchain protocol suited for the limited computational resources of sensor nodes. We also explore the use of smart contracts for automated, secure data handling and network management within WSNs. To validate the proposed integration, we conduct simulations based on network attacks. The results demonstrate significant improvements in the security and reliability of WSNs when blockchain is implemented. This is evidenced by enhanced resistance to common attacks, such as data manipulation and node compromise, and by increased network uptime.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_12-Data_Manipulation_in_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Kuwaiti Chronic Kidney Diseases (KCKD): Leveraging Electronic Health Records for Clinical Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150211</link>
        <id>10.14569/IJACSA.2024.0150211</id>
        <doi>10.14569/IJACSA.2024.0150211</doi>
        <lastModDate>2024-02-28T12:51:29.9730000+00:00</lastModDate>
        
        <creator>Talal M. Alenezi</creator>
        
        <creator>Taiseer H. Sulaiman</creator>
        
        <creator>Mohamed Abdelrazek</creator>
        
        <creator>Amr M. AbdelAziz</creator>
        
        <subject>Chronic kidney diseases; Electronic Health Records (EHR); classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Chronic kidney disease (CKD) represents a significant public health concern globally, and its prevalence is on the rise. In the context of Kuwait, this study addresses the imperative of predicting CKD by leveraging the wealth of information embedded in electronic health records (EHRs). The primary objective is to develop a predictive model capable of early identification of individuals at risk for CKD, thereby enabling timely interventions and personalized healthcare strategies and equipping clinicians with information that enhances their ability to make well-informed decisions regarding prognoses or therapeutic interventions. In this study, a dataset has been created from Kuwaiti healthcare institutions, emphasizing the richness and diversity of patient information encapsulated in EHRs, and a feature engineering step has been applied to label it. Various ensemble learning algorithms (AdaBoost, Extreme Gradient Boosting, Extra Trees, Gradient Boosting, and Random Forest) and various single learning algorithms (Decision Tree, K-Nearest Neighbors, Logistic Regression, Multilayer Perceptron, Stochastic Gradient Descent, and Support Vector Machines) have been implemented. By examining the empirical findings of our tests, our results showcase the models’ capability to identify individuals at risk for CKD at an early stage, facilitating targeted healthcare interventions. Decision Tree was the best classifier, achieving 99.5% accuracy and a 99.3% macro-averaged F1-score.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_11-Predictive_Modeling_of_Kuwaiti_Chronic_Kidney_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Cannabis Traceability in Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150210</link>
        <id>10.14569/IJACSA.2024.0150210</id>
        <doi>10.14569/IJACSA.2024.0150210</doi>
        <lastModDate>2024-02-28T12:51:29.9570000+00:00</lastModDate>
        
        <creator>Piwat Nowvaratkoolchai</creator>
        
        <creator>Natcha Thawesaengskulthai</creator>
        
        <creator>Wattana Viriyasitavat</creator>
        
        <creator>Pramoch Rangsunvigit</creator>
        
        <subject>Blockchain; cannabis; traceability; supply chain management; polygon; on-chain and off-chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The typical cannabis supply chain encounters obstacles in tracing compliance with product regulations and standards. It is a complex structure involving multiple organizations and healthcare products. Questionable products finding their way onto the legal market are potentially dangerous. The proportion of Tetrahydrocannabinol (THC)/Cannabidiol (CBD) and the source of the cannabis strains have an impact on human treatment, limiting the traditional cannabis supply chain from seed to sale. Currently, the cannabis supply chain involves multiple stakeholders, which complicates the validation of various essential criteria, including license management, Certificate of Analysis (COA), and conformance with quality standards and regulations. Existing traceability systems involve a centralized authority, leading to a lack of transparency and tracking system immutability. This study offers a Polygon blockchain-based strategy using smart contracts and decentralized on-chain and off-chain storage for efficient information searches in the cannabis supply chain. By eliminating the need for middlemen, the blockchain-based solution gives all stakeholders data security and an immutable transaction history. The study presents the on-chain and off-chain storage structure, the algorithms, and the operating principles of the suggested solution. In addition, the suggested system delivers query efficiency and assures supply chain management authenticity and dependability. In assessing the performance of the cannabis supply chain, the scalability of the blockchain-based traceability process avoids delays and high transaction fees.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_10-Blockchain_based_Cannabis_Traceability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Packaging Beautification Design Based on Visual Image and Personalized Pattern Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150209</link>
        <id>10.14569/IJACSA.2024.0150209</id>
        <doi>10.14569/IJACSA.2024.0150209</doi>
        <lastModDate>2024-02-28T12:51:29.9270000+00:00</lastModDate>
        
        <creator>Deli Chen</creator>
        
        <subject>Visual images; personalized patterns; total variational model; GrabCut model; migration model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Visual image technology is widely used in the field of product art design, enriching the visual beautification design effect of products. To improve the design effect of product packaging, a personalized packaging pattern matching technology is proposed based on computer vision image technology. Firstly, based on user needs, a pattern feature extraction technology is proposed, which uses the total variation model and GrabCut model to smooth and segment the image. Secondly, an improved style transfer generative adversarial network model is proposed for transfer training between feature elements and targets. Considering the problem of insufficient detail preservation in traditional transfer models, attention layers are incorporated into the transfer model for improvement. In the pattern feature extraction experiment, the proposed model had the best pixel accuracy in Image 1. In the pattern matching experiment, the proposed model had the lowest mapping loss in both pattern combinations, with a value of 0.135 in the Zhuang brocade pattern and 0.236 in the blue and white porcelain pattern, which was superior to other models. Comparing the effect of different model pattern combinations, in the blue and white porcelain pattern combination, the proposed model had an optimal peak signal-to-noise ratio of 32.32, which was superior to other models. The proposed model has excellent application effects in packaging design beautification. The research content will provide critical technical references for e-commerce product packaging design and intelligent image processing.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_9-Packaging_Beautification_Design_Based_on_Visual_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physical Training in Higher Vocational Colleges Based on Sequencing Adaptive Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150208</link>
        <id>10.14569/IJACSA.2024.0150208</id>
        <doi>10.14569/IJACSA.2024.0150208</doi>
        <lastModDate>2024-02-28T12:51:29.9100000+00:00</lastModDate>
        
        <creator>Quanzhong Gao</creator>
        
        <subject>Sequencing adaptive genetic algorithm; higher vocational colleges; sports training; convergence speed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This study is based on the sequencing adaptive genetic algorithm and conducts an in-depth discussion on optimization issues in the field of higher vocational sports training. By analyzing the shortcomings of traditional genetic algorithms in optimizing training plans, a new sequencing adaptive genetic algorithm is proposed to improve the optimization effect and adaptability of training plans. First, the optimization goals and constraints in higher vocational sports training were studied, including the diversity of training content and the rationality of training intensity. Secondly, based on the sequencing adaptive genetic algorithm, an optimization algorithm framework suitable for higher vocational sports training was designed, including key steps such as individual coding, fitness evaluation, and crossover mutation. Then, the proposed algorithm was verified and analyzed using experimental data. The results showed that the algorithm can effectively improve the optimization effect of the training plan and has strong adaptability and generalization capabilities. Finally, through comparison with traditional genetic algorithms and other optimization algorithms, the superiority and practicability of sequencing adaptive genetic algorithms in higher vocational sports training are further verified.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_8-Physical_Training_in_Higher_Vocational_Colleges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correlation Analysis Between Student Psychological State and Grades Based on Data Mining Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150207</link>
        <id>10.14569/IJACSA.2024.0150207</id>
        <doi>10.14569/IJACSA.2024.0150207</doi>
        <lastModDate>2024-02-28T12:51:29.8970000+00:00</lastModDate>
        
        <creator>Zeng Daoyan</creator>
        
        <creator>Chen Disi</creator>
        
        <subject>Data mining algorithms; vocational students; positive psychological state; academic performance; correlation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>As society has evolved and educational reform has become more profound, the psychological state and academic performance of vocational college students have become the focus of attention for educators. This study aims to construct a correlation model between the positive psychological state and academic performance of vocational college students based on data mining algorithms to offer a conceptual foundation and practical guidance for the optimization of vocational education. The relationship between positive psychological state and academic performance was analyzed through a literature review, as well as the application of data mining algorithms in the field of education. A certain amount of data on vocational college students was collected using questionnaire surveys and empirical research methods, including their basic information, positive psychological status indicators, and academic performance data. Subsequently, data mining algorithms were used to preprocess and analyze the collected data, and a correlation model between the positive psychological state and academic performance of vocational college students was constructed. Finally, through validation and evaluation of the model, it was found that there is a significant positive correlation between positive psychological state and academic performance, and the model has high predictive accuracy. The study&#39;s results suggest that the positive psychological state of vocational college students has a significant impact on their academic performance. Educators should consider students&#39; mental health and take effective measures to enhance their positive psychological state, thereby improving their academic performance. This study provides a new research perspective and method for the field of vocational education, which helps to promote the development and reform of vocational education.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_7-Correlation_Analysis_Between_Student_Psychological_State.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>China&#39;s Science and Technology Finance and Economic Corridor Development: A Coupling Relationship Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150206</link>
        <id>10.14569/IJACSA.2024.0150206</id>
        <doi>10.14569/IJACSA.2024.0150206</doi>
        <lastModDate>2024-02-28T12:51:29.8630000+00:00</lastModDate>
        
        <creator>Rui Tian</creator>
        
        <creator>Birong Xu</creator>
        
        <subject>Technology finance; regional economic development; industrial clusters; life cycle theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>This study aims to explore the coupling relationship between science and technology finance and economic corridor development in China, based on the life cycle theory of industrial clusters. By analyzing the interaction between science and technology finance and the development of economic corridors, their degree of correlation and influence mechanisms at different stages are revealed. The main findings of this study are as follows: at different stages of the industrial cluster life cycle, the effect of science and technology finance on the development of economic corridors differs, showing a gradually strengthening or weakening trend; there is a strong positive coupling relationship between science and technology finance and the development of economic corridors, such that the development of science and technology finance promotes the construction and development of economic corridors, and vice versa; and this coupling relationship has important policy implications, providing useful reference and guidance for government departments, business managers, and scientific research institutions. These findings are of great value for the formulation and implementation of science and technology finance policies, the planning and construction of economic corridors, and the cultivation and development of industrial clusters. Government departments can adjust science and technology finance and economic corridor development policies based on the characteristics of the coupling relationship to promote coordinated development and a virtuous cycle. Drawing on the research results, business managers and scientific research institutions can optimize resource allocation, enhance innovation capabilities, and achieve sustainable development.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_6-Chinas_Science_and_Technology_Finance_and_Economic_Corridor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Motorcycle Rider Helmet on a Low Light Video Scene using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150205</link>
        <id>10.14569/IJACSA.2024.0150205</id>
        <doi>10.14569/IJACSA.2024.0150205</doi>
        <lastModDate>2024-02-28T12:51:29.8470000+00:00</lastModDate>
        
        <creator>John Paul Q. Tomas</creator>
        
        <creator>Bonifacio T. Doma</creator>
        
        <subject>Artificial intelligence; computer vision; computer vision problems; object detection; YOLOv5; YOLOv7; Deep SORT; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>For safety in transportation, it is important to continuously monitor the proper use of motorcycle helmets, especially at night. One way to enforce transportation rules and regulations on wearing proper motorcycle helmets is to use computer vision technology. This study focuses on classifying motorcycle rider helmets in low-light video conditions, such as at dusk and at night, using YOLOv5 and YOLOv7 with Deep SORT. In these deep learning methods, the study tunes and optimizes hyperparameters to attain high accuracy in classifying motorcycle rider helmets in this challenging environment. To accomplish this objective, a vast and diverse dataset collected in Metro Manila, Philippines, was employed, containing classes such as riders, different types of helmets (valid and invalid), and instances of riders not wearing helmets at all. The results show that Hyperparameter setting 3 consistently outperformed other settings in terms of precision (95.6%), recall (91.2%), and mean average precision (mAP) scores across multiple scales and time frames, with 95.1% on mAP@0.5 and 76.3% on mAP@0.95, owing to more epochs, higher learning rates, and smaller batch sizes.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_5-Classifying_Motorcycle_Rider_Helmet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SkySculptor: Intuitive Drone Control Through Ground-Integrated Radar and Foot Gestures in Smart Indoor Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150204</link>
        <id>10.14569/IJACSA.2024.0150204</id>
        <doi>10.14569/IJACSA.2024.0150204</doi>
        <lastModDate>2024-02-28T12:51:29.8170000+00:00</lastModDate>
        
        <creator>Alexandru-Ionut Siean</creator>
        
        <subject>Drone control; gesture input; ultra-wideband radar; software application; smart environments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>SkySculptor is a software application designed to optimize drone control in smart indoor environments. The primary focus is on using gesture input for drone control, particularly investigating mid-air free-foot interactions detected by radar sensing. This software application simplifies the process of controlling drones in smart indoor environments. Additionally, outcomes of utilizing a 15-antenna ultra-wideband 3D radar are presented, establishing a dictionary of six directional swipe gestures for controlling drone functions. Based on the findings of this research article, guidelines for the future development of software applications for drone control in intelligent indoor environments are proposed.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_4-SkySculptor_Intuitive_Drone_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture and Color Descriptor Features-based Vacant Parking Space Detection using K-Nearest Neighbors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150203</link>
        <id>10.14569/IJACSA.2024.0150203</id>
        <doi>10.14569/IJACSA.2024.0150203</doi>
        <lastModDate>2024-02-28T12:51:29.8000000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <subject>Texture; color descriptor; k-nearest neighbors; computer vision; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The importance of the detection of vacant parking spaces is increasing gradually. A system capable of detecting vacant parking spaces in real time can play an important role in saving valuable time for motorists, decreasing traffic jams, and reducing air pollution. Vision-based parking space detection methods are advantageous in terms of installation and maintenance, as existing security cameras in a parking area can be used without the requirement of additional hardware, and the detection program can be run on a local or a remote server. One major problem of vision-based detection methods in this context is making the model generalize to detection in various weather conditions. This research proposes a hybrid method to detect vacant parking spaces that uses texture and color descriptors. A weighted KNN is used for the classification of parking spaces. The proposed method was evaluated on PKLot, a large benchmark dataset that contains images of three parking areas under three weather conditions. The proposed model achieves an average accuracy of 99.47% when trained with 10-fold cross-validation and an average accuracy of 99.41% when tested on unseen data. The model shows robustness and better performance in terms of accuracy and processing speed. Several comparisons are also made to show how well it performs against methods found in previous research.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_3-Texture_and_Color_Descriptor_Features_based_Vacant_Parking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing and Mitigating Network Vulnerabilities in Philips Hue and Nest Protect Smart Home Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150202</link>
        <id>10.14569/IJACSA.2024.0150202</id>
        <doi>10.14569/IJACSA.2024.0150202</doi>
        <lastModDate>2024-02-28T12:51:29.7870000+00:00</lastModDate>
        
        <creator>Arvind Sredhar</creator>
        
        <creator>Adil Khan</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Aeshah Alsughayyir</creator>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Bandeh Ali Talpur</creator>
        
        <subject>Internet of Things (IoT); Smart Home Devices (SHDs); network vulnerability assessment; Philips Hue; Nest Protect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>The Internet of Things (IoT) has gained momentum across various sectors, particularly in the consumer market with the adoption of smart devices. IoT extends internet connectivity to physical devices, enabling control via smartphones, environmental sensing, and updates. However, smart home devices are susceptible to cyberattacks due to vulnerabilities and a lack of monitoring and built-in security. They can also participate in botnets, leading to large-scale attacks. Vulnerabilities in these devices may exist at the sensing, network, or application layers, impacting data confidentiality, integrity, and service availability. This research aims to identify network-layer vulnerabilities affecting the &#39;Availability&#39; of Philips Hue and Nest Protect. By establishing a test environment, the baseline behavior of these devices is examined, followed by scans for open ports and services to detect network-based threats. Volumetric flood attacks are then conducted to assess susceptibility, and findings are shared to define the devices&#39; default security posture. The research also addresses security issues related to home routers and aims to reduce the attack surface of smart home devices through isolation and network-level protection. This involves deploying a firewall to isolate smart devices from non-IoT devices and prevent intrusions.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_2-Assessing_and_Mitigating_Network_Vulnerabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Network-based Methods for Brain Image De-noising: A Short Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150201</link>
        <id>10.14569/IJACSA.2024.0150201</id>
        <doi>10.14569/IJACSA.2024.0150201</doi>
        <lastModDate>2024-02-28T12:51:29.7400000+00:00</lastModDate>
        
        <creator>Keyan Rahimi</creator>
        
        <creator>Noorbakhsh Amiri Golilarz</creator>
        
        <subject>CNN; Deep neural network; de-noising; MR image; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024</description>
        <description>Various types of noise may affect the visual quality of images during capturing and transmitting procedures. Finding a proper technique to remove the possible noise and improve both quantitative and qualitative results is always considered one of the most important and challenging pre-processing tasks in image and signal processing. In this paper, we made a short comparison between two well-known approaches called thresholding neural network (TNN) and deep neural network (DNN) based methods for image de-noising. De-noising results of TNNs, Dn-CNNs, Flashlight CNN (FLCNN) and Diamond de-noising networks (DmDN) have been compared with each other. In this regard, several experiments have been performed in terms of Peak Signal to Noise Ratio (PSNR) to validate the performance analysis of various de-noising methods. The analysis indicates that DmDNs perform better than other learning-based algorithms for de-noising brain MR images. DmDN achieved a PSNR value of 29.85 dB, 30.74 dB, 29.15 dB, and 29.45 dB for de-noising MR image 1, MR image 2, MR image 3 and MR image 4, respectively, for a standard deviation of 15.</description>
        <description>http://thesai.org/Downloads/Volume15No2/Paper_1-Deep_Neural_Network_based_Methods_for_Brain_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modern Education: Advanced Prediction Techniques for Student Achievement Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501126</link>
        <id>10.14569/IJACSA.2024.01501126</id>
        <doi>10.14569/IJACSA.2024.01501126</doi>
        <lastModDate>2024-01-30T15:33:24.6130000+00:00</lastModDate>
        
        <creator>Xi LU</creator>
        
        <subject>Student performance; classification; decision tree classification; fox optimization; black widow optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Enhancing educational outcomes across varied institutions like universities, schools, and training centers necessitates accurately predicting student performance. These systems aggregate data from multiple sources—exam centers, virtual courses, registration departments, and e-learning platforms. Analyzing this complex and diverse educational data is a challenge, thus necessitating the application of machine learning techniques. Utilizing machine learning algorithms for dimensionality reduction simplifies intricate datasets, enabling more comprehensive analysis. Through machine learning, educational data is refined, uncovering valuable patterns and forecasts by simplifying complexities via feature selection and dimensionality reduction methods. This refinement significantly amplifies the efficacy of student performance prediction systems, empowering educators and institutions with data-driven insights and thereby enriching the overall educational landscape. In this particular research, the Decision Tree Classification (DTC) model is used for forecasting student performance. DTC stands out as a potent machine-learning method for classification purposes. Two optimization algorithms, namely the Fox Optimization (FO) and the Black Widow Optimization (BWO), are integrated to further heighten the model&#39;s accuracy and efficiency. The amalgamation of DTC with these pioneering optimization techniques underscores the study&#39;s dedication to harnessing the forefront of machine learning and bio-inspired algorithms, ensuring more precise and resilient predictions of student performance, ultimately culminating in improved educational outcomes. From the results garnered for G1 and G3, it is evident that the DTBW model demonstrated the most exceptional performance in both predicting and categorizing G1, achieving an Accuracy and Precision value of 93.7 percent. Conversely, the DTFO model emerged as the most precise predictor for G3, achieving an Accuracy and Precision of 93.4 and 93.5 percent, respectively, in the prediction task.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_126-Modern_Education_Advanced_Prediction_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Continuous Temporal Improvement Approach for Real-Time Business Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501125</link>
        <id>10.14569/IJACSA.2024.01501125</id>
        <doi>10.14569/IJACSA.2024.01501125</doi>
        <lastModDate>2024-01-30T15:33:24.6130000+00:00</lastModDate>
        
        <creator>Asma Ouarhim</creator>
        
        <creator>Karim Baina</creator>
        
        <creator>Brahim Elbhiri</creator>
        
        <subject>Real-time business process; real-time enterprises; temporal latency; process validation; continuous improvement approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Time is relative, which makes interaction highly time-sensitive. Indeed, the concept of real-time enterprises has long resembled an idealized notion that seemed unattainable and impracticable in reality. Consequently, we give a new definition of the real-time concept according to our needs and targets for a successful business process. According to this definition, we can move towards a real-time business process validation algorithm, whose goal is to ensure quality in terms of time, i.e., time latency ≃ 0. Put simply, it serves as a method to assess the consistency of a process. This approach aids in comprehending the temporal patterns inherent in a process as it evolves, empowering decision-makers to glean insights and swiftly form initial judgments for effective problem-solving and the identification of appropriate solutions. Thus, our main purpose is to deliver the right information and knowledge to the right person at the right time. To achieve this, we introduce a novel real-time component within the Business Process Model and Notation (BPMN), encompassing various attributes that facilitate process monitoring. This extension transforms the BPMN into a unified real-time business process meta-model. To be more specific, our contribution proposes a continuous temporal improvement assessment and knowledge management, as temporal knowledge helps to evaluate the real-time situation of the business process.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_125-Towards_a_Continuous_Temporal_Improvement_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning in Malware Analysis: Current Trends and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501124</link>
        <id>10.14569/IJACSA.2024.01501124</id>
        <doi>10.14569/IJACSA.2024.01501124</doi>
        <lastModDate>2024-01-30T15:33:24.5970000+00:00</lastModDate>
        
        <creator>Safa Altaha</creator>
        
        <creator>Khaled Riad</creator>
        
        <subject>Malware; malware analysis; machine learning; deep learning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Malware analysis is a critical component of cyber-security due to the increasing sophistication and widespread proliferation of malicious software. Machine learning is highly significant in malware analysis because it can process huge amounts of data, identify complex patterns, and adjust to changing threats. This paper provides a comprehensive overview of existing work related to Machine Learning (ML) methods used to analyze malware, along with a description of each trend. The results of the survey demonstrate the effectiveness and importance of three trends, which are: deep learning, transfer learning, and XML techniques in the context of malware analysis. These approaches improve accuracy, interpretability, and transparency in detecting and analyzing malware. Moreover, the related challenges and issues are presented. After identifying these challenges, we highlight future directions and potential areas that require more attention and improvement, such as distributed computing and parallelization techniques, which can reduce training time and memory requirements for large datasets. Also, further investigation is needed to develop image resizing techniques to be used during the visual representation of malware to minimize information loss while maintaining consistent image sizes. These areas can contribute to the enhancement of machine learning-based malware analysis.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_124-Machine_Learning_in_Malware_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reciprocal Bucketization (RB) - An Efficient Data Anonymization Model for Smart Hospital Data Publishing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501123</link>
        <id>10.14569/IJACSA.2024.01501123</id>
        <doi>10.14569/IJACSA.2024.01501123</doi>
        <lastModDate>2024-01-30T15:33:24.5800000+00:00</lastModDate>
        
        <creator>Rajesh S M</creator>
        
        <creator>Prabha R</creator>
        
        <subject>Anatomization; anonymization; entropy; pearson’s contingency coefficient; and KL – Divergence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the lightning growth of the Internet of Things (IoT), enormous applications have been developed to serve industries, the environment, society, etc. Smart Healthcare is one of the significant applications of the IoT, where intelligent environments enrich safety and ease of surveillance. The database of the Smart Hospital records the patient’s sensitive information, which could face various potential privacy breaches through linkage attacks. Publishing such sensitive data to society makes adopting the best privacy preservation model to defend against linkage attacks challenging. In this paper, we propose a novel Reciprocal Bucketization Anonymization model as the privacy preservation method to defend against Identity, Attribute, and Correlated Linkage attacks. The proposed anonymization method creates Buckets of patient records and then partitions the data into sensor trajectory and Multiple Sensitive Attributes (MSA). Local suppression is employed on the sensor trajectory data and Slicing on the MSA; the anonymized data to be published is then obtained by combining the anonymized sensor trajectory and MSA. The proposed method is validated on synthetic and real-world datasets by comparing its data utility loss in both the sensor trajectory and the MSA. The experimental results demonstrate that RB-Anonymization provides the best privacy preservation against Identity, Attribute, and Correlated linkage attacks with negligible utility loss compared with existing methods.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_123-Reciprocal_Bucketization_RB_An_Efficient_Data_Anonymization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformative Automation: AI in Scientific Literature Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501122</link>
        <id>10.14569/IJACSA.2024.01501122</id>
        <doi>10.14569/IJACSA.2024.01501122</doi>
        <lastModDate>2024-01-30T15:33:24.5670000+00:00</lastModDate>
        
        <creator>Kirtirajsinh Zala</creator>
        
        <creator>Biswaranjan Acharya</creator>
        
        <creator>Madhav Mashru</creator>
        
        <creator>Damodharan Palaniappan</creator>
        
        <creator>Vassilis C. Gerogiannis</creator>
        
        <creator>Andreas Kanavos</creator>
        
        <creator>Ioannis Karamitsos</creator>
        
        <subject>Artificial intelligence; systematic literature review; scholarly data analysis; machine learning algorithms; natural language processing; scientific publication automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This paper investigates the integration of Artificial Intelligence (AI) into systematic literature reviews (SLRs), aiming to address the challenges associated with the manual review process. SLRs, a crucial aspect of scholarly research, often prove time-consuming and prone to errors. In response, this work explores the application of AI techniques, including Natural Language Processing (NLP), machine learning, data mining, and text analytics, to automate various stages of the SLR process. Specifically, we focus on paper identification, information extraction, and data synthesis. The study delves into the roles of NLP and machine learning algorithms in automating the identification of relevant papers based on defined criteria. Researchers now have access to a diverse set of AI-based tools and platforms designed to streamline SLRs, offering automated search, retrieval, text mining, and analysis of relevant publications. The dynamic field of AI-driven SLR automation continues to evolve, with ongoing exploration of new techniques and enhancements to existing algorithms. This shift from manual efforts to automation not only enhances the efficiency and effectiveness of SLRs but also marks a significant advancement in the broader research process.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_122-Transformative_Automation_AI_in_Scientific_Literature_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Deep Learning Model for Terrain Slope Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501121</link>
        <id>10.14569/IJACSA.2024.01501121</id>
        <doi>10.14569/IJACSA.2024.01501121</doi>
        <lastModDate>2024-01-30T15:33:24.5670000+00:00</lastModDate>
        
        <creator>Abdulaziz Alorf</creator>
        
        <subject>Terrain slope estimation; spacecrafts; robotics; artificial intelligence; machine learning techniques; deep neural network; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Interest in autonomous robots has grown significantly in recent years, motivated by the many advances in computational power and artificial intelligence. Space probes landing on extra-terrestrial celestial bodies, as well as vertical take-off and landing on unknown terrains, are two examples of high levels of autonomy being pursued. These robots must be endowed with the capability to evaluate the suitability of a given portion of terrain to perform the final touchdown. In these scenarios, the slope of the terrain where a lander is about to touch the ground is crucial for a safe landing. The capability to measure the slope of the terrain underneath the vehicle is essential to perform missions where landing on unknown terrain is desired. This work attempts to develop algorithms to assess the slope of the terrain below a vehicle using monocular images in the visible spectrum. A lander takes these images with a camera pointing in the landing direction at the final descent before the touchdown. The algorithms are based on convolutional neural networks, which classify the perceived slope into discrete bins. To this end, three convolutional neural networks were trained using images taken from multiple types of surfaces, extracting features that indicate the existing inclination in the photographed surface. The metrics of the experiments show that it is feasible to identify the inclination of surfaces, along with their respective orientations. Our overall aim is that if a hazardous slope is detected, the vehicle can abort the landing and search for another, more appropriate site.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_121-A_Robust_Deep_Learning_Model_for_Terrain_Slope_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Guiding 3D Digital Content Generation with Pre-Trained Diffusion Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501120</link>
        <id>10.14569/IJACSA.2024.01501120</id>
        <doi>10.14569/IJACSA.2024.01501120</doi>
        <lastModDate>2024-01-30T15:33:24.5500000+00:00</lastModDate>
        
        <creator>Jing Li</creator>
        
        <creator>Zhengping Li</creator>
        
        <creator>Peizhe Jiang</creator>
        
        <creator>Lijun Wang</creator>
        
        <creator>Xiaoxue Li</creator>
        
        <creator>Yuwen Hao</creator>
        
        <subject>3D Digital content; computer vision; artificial intelligence; diffusion models; 3D representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The production technology of 3D digital content involves multiple stages, including 3D modeling, simulation animation, visualization rendering, and perceptual interaction. It is not only the core technology supporting the creation of 3D digital content but also a key element in enhancing immersive application experiences in virtual reality and the metaverse. A primary focus in computer vision and computer graphics research has been on how to create 3D digital content that is efficient, convenient, controllable, and editable. Currently, producing high-quality 3D digital content still requires significant time and effort from a large number of designers. To address this challenge, leveraging artificial intelligence-generated methods to break down production barriers has emerged as an effective strategy. With the substantial breakthroughs achieved by diffusion models in the field of image generation, they also demonstrate tremendous potential in 3D digital content generation, potentially becoming a foundational model in this area. Recent studies have shown that diffusion model-based techniques for generating 3D digital content can significantly reduce production costs and enhance efficiency. Therefore, it is essential to summarize and categorize existing methods to facilitate further research. This paper systematically reviews 3D digital content generation methods, introducing related 3D representation techniques and focusing on 3D digital content generation schemes, algorithms, and pipelines based on diffusion models. We perform a horizontal comparison of different approaches in terms of generation speed and quality, deeply analyze existing challenges, and propose viable solutions. Furthermore, we thoroughly explore future research themes and directions in this domain, aiming to provide guidance and reference for subsequent research endeavors.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_120-Guiding_3D_Digital_Content_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing UAV Data for Neural Network-based Classification of Melon Leaf Diseases in Smart Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501119</link>
        <id>10.14569/IJACSA.2024.01501119</id>
        <doi>10.14569/IJACSA.2024.01501119</doi>
        <lastModDate>2024-01-30T15:33:24.5330000+00:00</lastModDate>
        
        <creator>Siti Nur Aisyah Mohd Robi</creator>
        
        <creator>Norulhusna Ahmad</creator>
        
        <creator>Mohd Azri Mohd Izhar</creator>
        
        <creator>Hazilah Mad Kaidi</creator>
        
        <creator>Norliza Mohd Noor</creator>
        
        <subject>Smart agriculture; plant disease; melon leaf disease; image processing; neural network; UAV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Integrating unmanned aerial vehicle (UAV) technology with plant disease detection is a significant advancement in agricultural surveillance, marking the beginning of a transformational era characterised by innovation. Traditionally, farmers have had to rely on manual visual inspections to identify melon leaf diseases, which proves to be a time-consuming and costly process in terms of labour. This paper aims to use UAV technology for plant disease detection to achieve notable progress in agricultural surveillance. Incorporating UAV technology, specifically utilising the You Only Look Once version 8 (YOLOv8) deep-learning model, is revolutionary in precision agriculture. This study uses UAV imagery in precision agriculture to explore the utility of YOLOv8, a powerful deep-learning model, for detecting diseases in melon leaves. The labelled dataset is created by annotating disease-affected areas using bounding boxes. The YOLOv8 model has been trained using a labelled dataset to detect and classify various diseases accurately. Following the training, the performance of YOLOv8 stands out significantly compared to other models, boasting an impressive accuracy of 83.2%. This high level of accuracy underscores its effectiveness in object detection tasks and positions it as a robust choice in computer vision applications. Rigorous evaluation shows that the model can reliably detect diseases, suggesting its suitability for early intervention in precision farming. This has the potential to assist farmers in promptly identifying and addressing plant issues, hence improving their crop management practices.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_119-Utilizing_UAV_Data_for_Neural_Network_based_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview of Data Augmentation Techniques in Time Series Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501118</link>
        <id>10.14569/IJACSA.2024.01501118</id>
        <doi>10.14569/IJACSA.2024.01501118</doi>
        <lastModDate>2024-01-30T15:33:24.5200000+00:00</lastModDate>
        
        <creator>Ihababdelbasset ANNAKI</creator>
        
        <creator>Mohammed RAHMOUNE</creator>
        
        <creator>Mohammed BOURHALEB</creator>
        
        <subject>Time series; data augmentation; machine learning; deep learning; synthetic data generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Time series data analysis is vital in numerous fields, driven by advancements in deep learning and machine learning. This paper presents a comprehensive overview of data augmentation techniques in time series analysis, with a specific focus on their applications within deep learning and machine learning. We commence with a systematic methodology for literature selection, curating 757 articles from prominent databases. Subsequent sections delve into various data augmentation techniques, encompassing traditional approaches like interpolation and advanced methods like Synthetic Data Generation, Generative Adversarial Networks (GANs), and Variational Autoencoders (VAEs). These techniques address complexities inherent in time series data. Moreover, we scrutinize limitations, including computational costs and overfitting risks. Our analysis does not end with limitations, however; we also comprehensively analyze the advantages and applicability of the techniques under consideration. This holistic evaluation allows us to provide a balanced perspective. In summary, this overview illuminates data augmentation’s role in time series analysis within deep and machine-learning contexts. It provides valuable insights for researchers and practitioners, advancing these fields and charting paths for future exploration.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_118-Overview_of_Data_Augmentation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EpiNet: A Hybrid Machine Learning Model for Epileptic Seizure Prediction using EEG Signals from a 500 Patient Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501116</link>
        <id>10.14569/IJACSA.2024.01501116</id>
        <doi>10.14569/IJACSA.2024.01501116</doi>
        <lastModDate>2024-01-30T15:33:24.5030000+00:00</lastModDate>
        
        <creator>Oishika Khair Esha</creator>
        
        <creator>Nasima Begum</creator>
        
        <creator>Shaila Rahman</creator>
        
        <subject>Epilepsy; seizure prediction; computer vision; hybrid model; electroencephalography; bonn dataset; proactive healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The accurate prognosis of epileptic seizures has great significance in enhancing the management of epilepsy, necessitating the creation of robust and precise predictive models. EpiNet, our hybrid machine learning model for EEG signal analysis, incorporates key elements of computer vision and machine learning, positioning it within this advancing technological domain for enhanced seizure prediction accuracy. Hence, this research aims to provide a thorough investigation using the Bonn Electroencephalogram (EEG) signals dataset as an alternative method. The methodology used in this study encompasses the training of five machine learning models, namely Support Vector Machines (SVM), Gaussian Naive Bayes, Gradient Boosting, XGBoost, and LightGBM. Performance criteria, including accuracy, sensitivity, specificity, precision, recall, and F1-score, are extensively used to assess the efficacy of each model. A unique contribution is the development of a hybrid model, integrating predictions from individual models to enhance the overall accuracy of epilepsy identification. Experimental results demonstrate notable success, with the hybrid model achieving an accuracy of 99.81%. Performance metrics for both classes demonstrate the hybrid model’s epileptic seizure prediction reliability. Visualizations, including ROC-AUC curves and accuracy curves, provide a nuanced understanding of the models’ discriminative abilities and performance improvement with increasing sample size. A comparative analysis with existing studies reaffirms the advancement of our research, positioning it at the forefront of epileptic seizure prediction. This study not only highlights the promising integration of machine learning in medical diagnostics but also emphasises areas for future refinement. The achieved results open avenues for proactive healthcare management and improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_116-EpiNet_A_Hybrid_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of ChatGPT-based and Hybrid Parser-based Sentence Parsing Methods for Semantic Graph-based Induction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501117</link>
        <id>10.14569/IJACSA.2024.01501117</id>
        <doi>10.14569/IJACSA.2024.01501117</doi>
        <lastModDate>2024-01-30T15:33:24.5030000+00:00</lastModDate>
        
        <creator>Walelign Tewabe</creator>
        
        <creator>Laszlo Kovacs</creator>
        
        <subject>Adverb prediction; ChatGPT; hybrid parser-based; natural language processing; sentence parsing; semantic graph induction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Sentence parsing is a fundamental step in the conversion of a text document into semantic graphs. In this research, novel phrase parsing techniques for semantic graph-based induction are presented, namely the ChatGPT-based and Hybrid Parser-based approaches. The performance of these two approaches in the context of inducing semantic networks from textual data is assessed through a comprehensive analysis in this study. The primary purpose is to enhance the construction of semantic graphs, specifically focusing on capturing detailed event descriptions and relationships within text. The research finds that the Hybrid Parser-Based approach exhibits a slight advantage in accuracy (acc_hybrid = 0.87) compared to ChatGPT (acc_GPT = 0.85) in sentence parsing tasks. Furthermore, the efficiency analysis reveals that ChatGPT’s response quality varies with different prompt sizes, while the Hybrid Parser-Based method consistently maintains an “excellent” response quality rating.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_117-A_Comparative_Study_of_ChatGPT_based_and_Hybrid_Parser_based_Sentence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Paper-based Multiple Choice Scoring Framework using Fast Object Detection Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501115</link>
        <id>10.14569/IJACSA.2024.01501115</id>
        <doi>10.14569/IJACSA.2024.01501115</doi>
        <lastModDate>2024-01-30T15:33:24.4870000+00:00</lastModDate>
        
        <creator>Pham Doan Tinh</creator>
        
        <creator>Ta Quang Minh</creator>
        
        <subject>Optical mark reader; multiple choice exam; automatic scoring; segmentation; fast object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Optical mark reader (OMR) technology is an important research topic in artificial intelligence, with a wide range of applications such as text processing, document recognition, surveying, statistics, and process automation. Researchers have proposed many methods employing either traditional image processing and statistics or complex machine learning models. This paper presents a feasible solution for the OMR problem. It uses a fast object detection model to detect markers effectively and then segments the answer sheet into smaller regions for the mark reader model to recognize the user’s selections accurately. The experimental results on actual answer sheets from college exams show that the error is less than 0.5 percent, and the processing speed can reach up to 50 answer sheets per minute on standard Core i5 personal computers.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_115-Automated_Paper_based_Multiple_Choice_Scoring_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Adversarial Defense in Neural Networks by Combining Feature Masking and Gradient Manipulation on the MNIST Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501114</link>
        <id>10.14569/IJACSA.2024.01501114</id>
        <doi>10.14569/IJACSA.2024.01501114</doi>
        <lastModDate>2024-01-30T15:33:24.4730000+00:00</lastModDate>
        
        <creator>Ganesh Ingle</creator>
        
        <creator>Sanjesh Pawale</creator>
        
        <subject>Feature masking; neural networks; gradient manipulation; adversarial resilience; fast gradient sign method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This research investigates the escalating issue of adversarial attacks on neural networks within AI security, specifically targeting image recognition using the MNIST dataset. Our exploration centered on the potential of a combined approach incorporating feature masking and gradient manipulation to bolster adversarial defense. The main objective was to evaluate the extent to which this integrated strategy enhances network resilience against such attacks, contributing to the advancement of more robust AI systems. In our experimental framework, we utilized a conventional neural network architecture, integrating various levels of feature masking alongside established training protocols. A baseline model, devoid of feature masking, functioned as a comparative standard to gauge the efficacy of our proposed technique. We assessed the model’s performance in standard scenarios as well as under Fast Gradient Sign Method (FGSM) adversarial assaults. The outcomes provided significant insights. The baseline model demonstrated a high test accuracy of 98% on the MNIST dataset, yet it showed limited resistance to adversarial incursions, with accuracy diminishing to 60% under FGSM onslaughts. Conversely, models incorporating feature masking exhibited an inverse relationship between masking proportion and accuracy, counterbalanced by an enhancement in adversarial resilience. Specifically, a 10% masking ratio achieved a 96% accuracy rate coupled with 75% robustness against attacks, 30% masking led to 94% accuracy with an 80% robustness level, and a 50% masking threshold resulted in 92% accuracy, attaining the apex of robustness at 85%. These results affirm the efficacy of feature masking in augmenting adversarial defense, highlighting a pivotal equilibrium between accuracy and resilience. The study lays the groundwork for further investigations into refined masking methodologies and their amalgamation with other defensive strategies, potentially broadening the scope of neural network security against adversarial threats. Our contributions are significant to the realm of AI security, showcasing an effective strategy for the development of more secure and dependable neural network frameworks.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_114-Enhancing_Adversarial_Defense_in_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Deep Learning to Recognize Fake Faces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501113</link>
        <id>10.14569/IJACSA.2024.01501113</id>
        <doi>10.14569/IJACSA.2024.01501113</doi>
        <lastModDate>2024-01-30T15:33:24.4570000+00:00</lastModDate>
        
        <creator>Jaffar Atwan</creator>
        
        <creator>Mohammad Wedyan</creator>
        
        <creator>Dheeb Albashish</creator>
        
        <creator>Elaf Aljaafrah</creator>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Bandar Alshawi</creator>
        
        <subject>Deep learning; machine learning; deepfake; convolutional neural network; global average pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In recent times, many fake faces have been created using deep learning and machine learning. Most fake faces made with deep learning are referred to as “deepfake photos.” Our study’s primary goal is to propose a useful framework for recognizing deepfake photos using deep learning and transfer learning techniques. This paper proposes convolutional neural network (CNN) models based on deep transfer learning methodologies, in which a classifier composed of global average pooling (GAP), dropout, and a two-neuron dense layer with SoftMax is substituted for the final fully connected layer of the pretrained models. DenseNet201 produced the best accuracy of 86.85% on the combined deepfake and real picture datasets, while MobileNet produced a lower accuracy of 82.78%. The obtained experimental results showed that the proposed method outperformed other state-of-the-art fake picture discriminators in terms of performance. The proposed architecture helps cybersecurity specialists fight deepfake-related cybercrimes.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_113-Using_Deep_Learning_to_Recognize_Fake_Faces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bystander Detection: Automatic Labeling Techniques using Feature Selection and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501112</link>
        <id>10.14569/IJACSA.2024.01501112</id>
        <doi>10.14569/IJACSA.2024.01501112</doi>
        <lastModDate>2024-01-30T15:33:24.4570000+00:00</lastModDate>
        
        <creator>Anamika Gupta</creator>
        
        <creator>Khushboo Thakkar</creator>
        
        <creator>Veenu Bhasin</creator>
        
        <creator>Aman Tiwari</creator>
        
        <creator>Vibhor Mathur</creator>
        
        <subject>Bystanders; cyberbullying; machine learning; defender; instigator; impartial; toxicity; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Hostile or aggressive behavior on an online platform by an individual or a group of people is termed cyberbullying. A bystander is one who sees or knows about such incidents of cyberbullying. Bystanders can take on three roles: a defender, who intervenes and can mitigate the impact of bullying; an instigator, who abets the bully and can add to the victim’s suffering; and an impartial onlooker, who remains neutral and observes the scenario without getting engaged. Studying the behavior of bystander roles can help in understanding the scale and progression of bullying incidents. However, the lack of data hinders research in this area. Recently, a dataset of Twitter threads, CYBY23, containing main tweets and the replies of bystanders was published on Kaggle in Oct 2023. The dataset includes extracted features related to the toxicity and sensitivity of the main tweets and reply tweets, and the authors engaged manual annotators to assign the labels of bystanders’ roles. Manually labeling bystanders’ roles is a labor-intensive task, which raises the need for an automatic labeling technique for identifying the bystander role. In this work, we aim to suggest a highly efficient machine-learning model for the automatic labeling of bystanders. Initially, the dataset was re-sampled using SMOTE to balance it. Next, we experimented with 12 models using various feature engineering techniques. The best features were selected for further experimentation by removing highly correlated and less relevant features. The models were evaluated on the metrics of accuracy, precision, recall, and F1 score. We found that the Random Forest Classifier (RFC) model with a certain set of features is the highest scorer among all 12 models. The RFC model was further tested against various splits of training and test sets. The highest results were achieved using a training set of 85% and a test set of 15%: 78.83% accuracy, 81.79% precision, 74.83% recall, and 79.45% F1 score. The automatic labeling proposed in this work will help in scaling the dataset, which will be useful for further studies related to cyberbullying.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_112-Bystander_Detection_Automatic_Labeling_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Tree-based Ensemble Strategies for Imbalanced Network Attack Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501111</link>
        <id>10.14569/IJACSA.2024.01501111</id>
        <doi>10.14569/IJACSA.2024.01501111</doi>
        <lastModDate>2024-01-30T15:33:24.4400000+00:00</lastModDate>
        
        <creator>Hui Fern Soon</creator>
        
        <creator>Amiza Amir</creator>
        
        <creator>Hiromitsu Nishizaki</creator>
        
        <creator>Nik Adilah Hanin Zahri</creator>
        
        <creator>Latifah Munirah Kamarudin</creator>
        
        <creator>Saidatul Norlyana Azemi</creator>
        
        <subject>Multiclass imbalanced classification; ensemble algorithm; network attack; UNSW-NB15 dataset; F1-score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the continual evolution of cybersecurity threats, the development of effective intrusion detection systems is increasingly crucial and challenging. This study tackles these challenges by exploring imbalanced multiclass classification, a common situation in network intrusion datasets mirroring real-world scenarios. The paper aims to empirically assess the performance of diverse classification algorithms in managing imbalanced class distributions. Experiments were conducted using the UNSW-NB15 network intrusion detection benchmark dataset, comprising ten highly imbalanced classes. The evaluation includes basic, traditional algorithms like the Decision Tree, K-Nearest Neighbor, and Gaussian Naive Bayes, as well as advanced ensemble methods such as Gradient Boosted Decision Trees (GraBoost) and AdaBoost. Our findings reveal that the Decision Tree surpassed the Multi-Layer Perceptron, K-Nearest Neighbor, and Naive Bayes in terms of overall F1-score. Furthermore, thorough evaluations of nine tree-based ensemble algorithms were performed, showcasing their varying efficacy. Bagging, Random Forest, ExtraTrees, and XGBoost achieved the highest F1-scores. However, in individual class analysis, XGBoost demonstrated exceptional performance relative to the other algorithms, achieving the highest F1-scores in eight of the ten classes within the dataset. These results establish XGBoost as a predominant method for handling multiclass imbalanced classification, with Bagging being the closest feasible alternative, as Bagging achieves accuracy and F1-scores nearly identical to XGBoost’s.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_111-Evaluating_Tree_based_Ensemble_Strategies_for_Imbalanced_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Impact of Preprocessing Techniques and Representation Models on Arabic Text Classification using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501110</link>
        <id>10.14569/IJACSA.2024.01501110</id>
        <doi>10.14569/IJACSA.2024.01501110</doi>
        <lastModDate>2024-01-30T15:33:24.4230000+00:00</lastModDate>
        
        <creator>Mahmoud Masadeh</creator>
        
        <creator>Moustapha. A</creator>
        
        <creator>Sharada B</creator>
        
        <creator>Hanumanthappa J</creator>
        
        <creator>Hemachandran K</creator>
        
        <creator>Channabasava Chola</creator>
        
        <creator>Abdullah Y. Muaad</creator>
        
        <subject>Arabic Text Classification (ATC); Text Mining (TM); Machine Learning (ML); preprocessing methods; representation models; Feature Extraction (FE); Feature Selection (FS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Arabic Text Classification (ATC) is a crucial step for various Natural Language Processing (NLP) applications. It emerged as a response to the exponential growth of online content like social posts and review comments. In this study, preprocessing techniques and representation models are used to evaluate the effectiveness of ATC using Machine Learning (ML). Generally, the ATC operation depends on various factors, such as stemming in preprocessing, feature extraction and selection, and the nature of the dataset. To enhance the overall classification performance, preprocessing methodologies are primarily employed to transform each Arabic term into its root form and reduce the dimensionality of representation. In the representation of Arabic text, feature extraction and selection processes are imperative, as they significantly enhance the performance of ATC. This study implements the chosen classifiers using various feature selection algorithms. The comprehensive assessment of classification outcomes is conducted by comparing various classifiers, including Multinomial Naive Bayes (MNB), Bernoulli Naive Bayes (BNB), Stochastic Gradient Descent (SGD), Support Vector Classifier (SVC), Logistic Regression (LR), and linear Support Vector Classifier (LSVC). These ML classifiers are assessed utilizing short and long Arabic text benchmark datasets called BBC Arabic corpus and the COVID-19 dataset. The assessment findings indicate that the efficacy of classification is significantly influenced by the preprocessing methods, representation model, classification algorithm, and the datasets’ characteristics. In most cases, the SGDC and LSVC have consistently surpassed other classifiers for the datasets under consideration when significant features are chosen.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_110-Investigating_the_Impact_of_Preprocessing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved K-means Clustering Algorithm Towards an Efficient Educational and Economical Data Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501109</link>
        <id>10.14569/IJACSA.2024.01501109</id>
        <doi>10.14569/IJACSA.2024.01501109</doi>
        <lastModDate>2024-01-30T15:33:24.4100000+00:00</lastModDate>
        
        <creator>Rabab El Hatimi</creator>
        
        <creator>Cherifa Fatima Choukhan</creator>
        
        <creator>Mustapha Esghir</creator>
        
        <subject>Education assessment; unsupervised learning; statistical analysis; world bank data; K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Education is one of the most crucial pillars for the sustainable development of societies. It is essential for each country to assess its level of access to education. However, the conventional methods of ranking access to education have their limitations. Therefore, there is a need for strategic planning to develop new classification methods. This study aims to address this need by developing an innovative and efficient unsupervised K-Means model capable of predicting global access to education. The novel approach adopted in this research fills a gap in traditional ranking methods for assessing access to education. Utilizing statistical analysis of data sourced from the World Bank, we evaluated education access across 217 countries spanning various continents and levels of development. By employing economic and educational factors as input for the K-Means algorithm, we successfully identified three distinct clusters, each comprising countries with similar levels of education access. The reliability of our approach was reinforced through rigorous statistical testing to validate the results. Furthermore, we compared the economies of countries within each cluster using primary data, enabling specific recommendations at the economic level to assist countries with limited education access in enhancing their circumstances. Finally, this study makes a significant contribution by introducing a new approach to globally assess education access. The findings provide practical recommendations to aid countries in improving their educational opportunities.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_109-An_Improved_K_means_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Style Transfer Algorithm in Artistic Design Expression of Terrain Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501108</link>
        <id>10.14569/IJACSA.2024.01501108</id>
        <doi>10.14569/IJACSA.2024.01501108</doi>
        <lastModDate>2024-01-30T15:33:24.4100000+00:00</lastModDate>
        
        <creator>Yangfei Chen</creator>
        
        <subject>Generative adversarial network; terrain; style transfer; artistic; peak signal-to-noise ratio; Structural Similarity Index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The use of artistic expression to depict terrain and landforms can not only convey terrain information but also spread art and culture. Existing landscape design methods focus on the accurate expression of terrain height and the realistic expression of form, but neglect the aesthetic aspect of landscape design. In view of this, this paper studied the use of generative adversarial networks, constructed a presentation mode for landscape plane styles, and realized the expression of landscape art styles. A terrain style transfer model based on a pre-trained deep neural network model and a style transfer algorithm was constructed to achieve a variety of terrain style expressions. The results showed that, in terms of Peak Signal-to-Noise Ratio (PSNR), the proposed style transfer algorithm exceeded the style attention network and adaptive instance normalization, improving the PSNR index value by 7.5% and 16.5%, respectively. This indicated that the proposed style transfer model had advantages in image diversity and fidelity. The Structural Similarity Index of the proposed algorithm was also greatly improved. This research expands methods for computer rendering of terrain environment art, which is of great significance for the preservation of traditional Chinese culture.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_108-Application_of_Style_Transfer_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Ant Colony Algorithm Based on Binarization in Computer Text Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501107</link>
        <id>10.14569/IJACSA.2024.01501107</id>
        <doi>10.14569/IJACSA.2024.01501107</doi>
        <lastModDate>2024-01-30T15:33:24.3930000+00:00</lastModDate>
        
        <creator>Zhen Li</creator>
        
        <subject>Binarization; ant colony algorithm; text recognition; edge detection; Otsu algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Pheromones, path selection, and probability transfer functions are the main factors that affect the performance of computer text recognition, with the path selection function being the most important factor affecting the recognition rate. In response to the difficulties of path selection and slow algorithm convergence in text recognition, an edge detection algorithm based on an improved ant colony optimization algorithm is proposed. The strong denoising performance of the ant colony optimization algorithm reduces the interference of textured backgrounds, and the edge extraction effect is analyzed in the connected domain to overcome complex background effects. Finally, the improved Otsu binarization algorithm is used to recognize the text. According to the results, the proposed method effectively preserves the edge information of characters in images, the text area is located well, and the accuracy rate reaches around 85%. The tuned threshold improves the binarization effect. The text recognition rate of the improved ant colony algorithm generally reaches 80%, with good text positioning accuracy and recognition rate, giving it great practical significance in computer text recognition.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_107-Improved_Ant_Colony_Algorithm_Based_on_Binarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Evaluation and Improvement of Government Short Video Communication Effect Based on Big Data Statistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501106</link>
        <id>10.14569/IJACSA.2024.01501106</id>
        <doi>10.14569/IJACSA.2024.01501106</doi>
        <lastModDate>2024-01-30T15:33:24.3770000+00:00</lastModDate>
        
        <creator>Man Xu</creator>
        
        <subject>Big data statistics; short videos of government affairs; communication effect; linear regression; mainstream media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Mainstream media is no longer the only way for people to obtain information, and official media no longer has absolute control. People can choose the form and content of the information they receive according to their preferences, which poses a new challenge to government departments that have traditionally been formal. From the emergence of short video through its rise to prosperity, the government has shown great interest in its characteristics and functions. It has begun to deploy government-affairs short videos on platforms such as TikTok and Kwai, opening accounts one after another and actively participating in the production and dissemination of content. Through the continuous release of well-designed &quot;viral&quot; content, the popularity of government-affairs short videos on TikTok and other platforms has continued to rise, attracting large numbers of followers and social attention and producing good results and responses. This paper proposes an optimization design scheme for evaluating and improving the dissemination effect of government short videos based on big data statistics. The basic situation of government videos is obtained through content analysis, and the coefficient of determination and linear regression from big data statistics are then used to extract common factors that improve the dissemination effect of government short videos, thereby increasing their influence. Finally, a simulation test and analysis are carried out. Simulation results show that the proposed algorithm achieves a certain accuracy, 8.24% higher than the traditional algorithm. Research on promotion planning and design centered on the dissemination of government-affairs short videos has important practical guiding significance for helping local grassroots governments build public services and public feedback.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_106-Research_on_Evaluation_and_Improvement_of_Government_Short_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Teaching Mode and Evaluation Method of Effect of Art Design Course from the Perspective of Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501105</link>
        <id>10.14569/IJACSA.2024.01501105</id>
        <doi>10.14569/IJACSA.2024.01501105</doi>
        <lastModDate>2024-01-30T15:33:24.3630000+00:00</lastModDate>
        
        <creator>Danjun ZHU</creator>
        
        <creator>Gangtian LIU</creator>
        
        <subject>Big data perspective; teaching mode; evaluation system; art and design; hybrid teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In modern educational curriculum teaching, we should fully leverage the advantages of modern technology, especially in teaching methods, and deeply understand and apply big data technology. This article explores the design and effectiveness evaluation methods of curriculum teaching models from the perspective of big data. We utilized big data thinking and conducted research and practical exploration to compare and evaluate teaching mode design methods. In the art and design course, we adopted a blended learning model, combining MOOC and SPOC, and innovated traditional teaching methods and plans. Meanwhile, we investigated the teaching effectiveness and feasibility of this blended learning model. By extensively evaluating teaching techniques, evaluation methods, and technologies that support the learning process, we reconstructed blended learning evaluation indicators and evaluated the effectiveness of learning outcomes and processes under different teaching modes. The research results show that the blended learning model based on the big data perspective can significantly improve the effectiveness of classroom teaching. Moreover, learners&#39; self-learning ability and practical innovation ability have also been further improved.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_105-Design_of_Teaching_Mode_and_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of MIR Technology in Higher Vocational English Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501103</link>
        <id>10.14569/IJACSA.2024.01501103</id>
        <doi>10.14569/IJACSA.2024.01501103</doi>
        <lastModDate>2024-01-30T15:33:24.3470000+00:00</lastModDate>
        
        <creator>Xiaoting Deng</creator>
        
        <subject>English teaching in higher vocational colleges; multimedia information retrieval technology; applied research; modern teaching models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The traditional teaching model is teacher-centered, with conservative textbooks and methods. Multimedia information retrieval technology can, to some extent, provide relevant information based on user query conditions, thereby alleviating the problem of information overload. This study applies the image, audio, and video retrieval techniques of multimedia information retrieval technology to vocational English education. It is recommended to include visual, auditory, and video materials in the course plan to meet the needs of all students, which will help ensure that the teaching objectives of each unit are achieved. Multimedia information retrieval technology may create a new learning mode in which vocational college students can use mobile terminals for learning activities anytime and anywhere, making learning more comfortable and personalized. A random double-blind survey questionnaire was designed to investigate student satisfaction and evaluate the effectiveness of multimedia information retrieval technology in vocational college English teaching. According to the survey results, the majority of students acknowledge the performance of multimedia information retrieval technology in English teaching. Therefore, the application of multimedia information retrieval technology in vocational English teaching is conducive to cultivating students&#39; self-learning ability and creative thinking ability. Meanwhile, multimedia information retrieval technology has improved the quality and level of information literacy education for college students.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_103-The_Application_of_MIR_Technology_in_Higher_Vocational_English.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meta-Model Classification Based on the Na&#239;ve Bias Technique Auto-Regulated via Novel Metaheuristic Methods to Define Optimal Attributes of Student Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501104</link>
        <id>10.14569/IJACSA.2024.01501104</id>
        <doi>10.14569/IJACSA.2024.01501104</doi>
        <lastModDate>2024-01-30T15:33:24.3470000+00:00</lastModDate>
        
        <creator>Zhen Ren</creator>
        
        <creator>Mingmin He</creator>
        
        <subject>Student performance; machine learning; classification; Naive Bayes Classification; Alibaba and the forty thieves; Leader Harris Hawk’s Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Accurately assessing and predicting student performance is critical in today’s educational environment. Schools are dependent on evaluating students’ skills, forecasting their grades, and providing customized instruction to improve their academic performance. Early intervention is essential for pinpointing areas in need of development. By predicting students’ futures in particular subjects, data mining, a potent technique for revealing hidden patterns within large datasets, helps lower failure rates. These methods are combined in the field of educational data mining, which focuses on the analysis of data from educators and students with the aim of raising academic achievement. In this study, the Naive Bayes classification (NBC) model is given the main responsibility for predicting student performance. However, two cutting-edge optimization strategies, Alibaba and the Forty Thieves (AFT) and Leader Harris Hawk’s optimization (LHHO), have been used to improve the model’s accuracy. The study’s findings show that the NBC+AFT model performs more accurately than the other models. Accuracy, Precision, Recall, and F1-Score all display impressive performance metrics for a superior model, with values of 0.891, 0.9, 0.89, and 0.89, respectively. These metrics outperform those of competing models, highlighting how successful this strategy is. Because of the NBC+AFT model’s strong performance, educational institutions are getting closer to a time when they will be able to predict students’ success more precisely and help them along the way, making everyone’s academic journey more promising and brighter.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_104-Meta_Model_Classification_Based_on_the_Na&#239;ve_Bias_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Ant Colony Optimization Improved Clustering Algorithm in Malicious Software Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501102</link>
        <id>10.14569/IJACSA.2024.01501102</id>
        <doi>10.14569/IJACSA.2024.01501102</doi>
        <lastModDate>2024-01-30T15:33:24.3300000+00:00</lastModDate>
        
        <creator>Yong Qian</creator>
        
        <subject>Ant colony algorithm; clustering algorithm; malicious software identification; computer security; optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Due to the increasing threat of malware to computer systems and networks, traditional malware detection and recognition technologies face difficulties and limitations. Exploring new methods to improve the accuracy and efficiency of malware identification has therefore become an urgent need. This study introduces the ant colony algorithm to optimize traditional clustering algorithms and their parameters. The experimental results showed that the improvement rates of the improved algorithm in accuracy, echo value, and false alarm rate were 0.253, 0.115, and 0.056, respectively. The accuracy on the training and validation sets continued to increase, and the loss curve continued to decrease. In addition, the improved algorithm had stronger modeling ability for data feature relationships and temporal information, which helps improve the recognition of virus and worm software. The improved algorithm occupied fewer computing resources than the other algorithms while still effectively monitoring device operation. Compared with traditional methods, this method can more accurately identify malicious software and effectively identify malicious samples in large-scale datasets, which is of great significance for protecting computer systems and network security.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_102-Application_of_Ant_Colony_Optimization_Improved_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Heating Load Consumption in Residual Buildings using Optimized Regression Models Based on Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501101</link>
        <id>10.14569/IJACSA.2024.01501101</id>
        <doi>10.14569/IJACSA.2024.01501101</doi>
        <lastModDate>2024-01-30T15:33:24.3170000+00:00</lastModDate>
        
        <creator>Chao WANG</creator>
        
        <creator>Xuehui QIU</creator>
        
        <subject>Heating load demand; prediction models; building energy consumption; support vector machine; metaheuristic optimization algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Accurate energy consumption forecasting and assessing retrofit options are vital for energy conservation and emissions reduction. Predicting building energy usage is complex due to factors like building attributes, energy systems, weather conditions, and occupant behavior. Extensive research has led to diverse methods and tools for estimating building energy performance, including physics-based simulations. However, accurate simulations often require detailed data and vary based on modeling sophistication. The growing availability of public building energy data offers opportunities for applying machine learning to predict building energy performance. This study evaluates Support Vector Regression (SVR) models for estimating building heating load consumption. These models encompass a single model, one optimized with the Transit Search Optimization Algorithm (TSO) and another optimized with the Coot optimization algorithm (COA). The training dataset consists of 70% of the data, which incorporates eight input variables related to the geometric and glazing characteristics of the buildings. Following the validation of 15% of the dataset, the performance of the remaining 15% is evaluated using five different assessment metrics. Among the three candidate models, Support Vector Regression optimized with the Coot optimization algorithm (SVCO) demonstrates remarkable accuracy and stability, reducing prediction errors by an average of 20% to over 50% compared to the other two models and achieving a maximum R2 value of 0.992 for heating load prediction.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_101-Estimation_of_Heating_Load_Consumption_in_Residual_Buildings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-based Framework for Vehicle License Plate Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.01501100</link>
        <id>10.14569/IJACSA.2024.01501100</id>
        <doi>10.14569/IJACSA.2024.01501100</doi>
        <lastModDate>2024-01-30T15:33:24.3170000+00:00</lastModDate>
        
        <creator>Deming Yang</creator>
        
        <creator>Ling Yang</creator>
        
        <subject>Intelligent traffic monitoring; smart transportation; deep learning; Yolov5; performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the contemporary landscape of smart transportation systems, the imperative role of intelligent traffic monitoring in bolstering efficiency, safety, and sustainability cannot be overstated. Leveraging recent strides in computer vision, machine learning, and data analytics, this study addresses the pressing need for advancements in car license plate recognition within these systems. Employing an innovative approach based on the YOLOv5 architecture in deep learning, the study focuses on refining the accuracy of license plate recognition. A bespoke dataset is meticulously curated to facilitate a comprehensive evaluation of the proposed methodology, with extensive experiments conducted and metrics such as precision, recall, and F1-score employed for assessment. The outcomes underscore the efficacy of the approach in significantly enhancing the precision and accuracy of license plate recognition using performance evaluation of the proposed method. This tailored dataset ensures a rigorous evaluation, affirming the practical viability of the proposed approach in real-world scenarios. The study not only showcases the successful application of deep learning and YOLOv5 in achieving accurate license plate detection and recognition but also contributes to the broader discourse on advancing intelligent traffic monitoring for more robust and efficient smart transportation systems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_100-A_Deep_Learning_based_Framework_for_Vehicle_License.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Students&#39; Academic Performance Through Machine Learning Classifiers: A Study Employing the Naive Bayes Classifier (NBC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150199</link>
        <id>10.14569/IJACSA.2024.0150199</id>
        <doi>10.14569/IJACSA.2024.0150199</doi>
        <lastModDate>2024-01-30T15:33:24.3000000+00:00</lastModDate>
        
        <creator>Xin ZHENG</creator>
        
        <creator>Conghui LI</creator>
        
        <subject>Machine learning; Naive Bayes Classifier; Artificial Rabbits Optimization; Jellyfish Search Optimizer; student performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Modern universities must strategically analyze and manage student performance, utilizing knowledge discovery and data mining to extract valuable insights and enhance efficiency. Educational Data Mining (EDM) is a theory-oriented approach in academic settings that integrates computational methods to improve academic performance and faculty management. Machine learning algorithms are essential for knowledge discovery, enabling accurate performance prediction and early student identification, with classification being a widely applied method in predicting student performance based on various traits. Utilizing the Naive Bayes classifier (NBC) model, this research predicts student performance by harnessing the robust capabilities inherent in this classification tool. To bolster both efficiency and accuracy, the model integrates two optimization algorithms, namely Jellyfish Search Optimizer (JSO) and Artificial Rabbits Optimization (ARO). This underscores the research&#39;s commitment to employing cutting-edge machine learning and algorithms inspired by nature to achieve heightened precision in predicting student performance through the refinement of decision-making and prediction quality. To classify and predict G1 and G3 grades and evaluate students&#39; performance in this study, a comprehensive analysis of the information pertaining to 395 students has been conducted. The results indicate that in predicting G1, the NBAR model, with an F1_Score of 0.882, performed almost 1.03% better than the NBJS model, which had an F1_Score of 0.873. In G3 prediction, the NBAR model outperformed the NBJS model with F1_Score values of 0.893 and 0.884, respectively.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_99-Predicting_Students_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Artificial Intelligence Technology in Ideological and Political Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150198</link>
        <id>10.14569/IJACSA.2024.0150198</id>
        <doi>10.14569/IJACSA.2024.0150198</doi>
        <lastModDate>2024-01-30T15:33:24.2830000+00:00</lastModDate>
        
        <creator>Chao Xu</creator>
        
        <creator>Lin Wu</creator>
        
        <subject>Artificial intelligence; ideological and political education; wisdom development; semantic understanding and emotional analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>For many schools, artificial intelligence is more than a practical backdrop; it is also a technical tool and an opportunity for development. The in-depth integration and standardization of artificial intelligence can inject new technological momentum into effectively identifying educational objects&#39; ideological dynamics, improving the accuracy of educational content, and expanding the spatial dimension of education; it has become an inevitable trend of innovation and development. However, there are also many potential risks and practical problems at the level of value premises, technical limits, and specific operations, such as privacy protection and ideological security risks, the loss of educational subjectivity, the digitization of educational relations, and the lack of specialized talent. Therefore, it is necessary to view the technical momentum and potential risks of artificial intelligence dialectically, promote the rationality of educational values, strengthen technical supervision, build an intelligent education team, reasonably define the integration boundary and application scope of artificial intelligence, and combine human initiative with machine intelligence, actively exploring a path of coexistence and co-prosperity between education and technology and consciously constructing an intelligent form of both.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_98-The_Application_of_Artificial_Intelligence_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attraction Recommendation and Itinerary Planning for Smart Rural Tourism Based on Regional Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150196</link>
        <id>10.14569/IJACSA.2024.0150196</id>
        <doi>10.14569/IJACSA.2024.0150196</doi>
        <lastModDate>2024-01-30T15:33:24.2700000+00:00</lastModDate>
        
        <creator>Ruiping Chen</creator>
        
        <creator>Yanli Zhou</creator>
        
        <creator>Dejun Zhang</creator>
        
        <subject>Regional division; trip planning; recommended tourist attractions; clustering algorithm; time factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>As the rural tourism industry develops, effective attraction recommendation and itinerary planning are crucial to the tourist experience. Accordingly, a rural scenic spot recommendation and planning technology based on regional segmentation is proposed. The scenic area was divided into multiple grids based on tourist check-in behaviour, and the interest and influence of the scenic area were associated with grid check-in behaviour. Content recommendation was achieved through two factors: popularity and regional location. Considering the sparsity of recommendation data, clustering algorithms were introduced to model tourist check-in behaviour based on factors such as time and regional location, and content recommendation was achieved through tourist preferences. In the performance analysis of recommendation models, the proposed model has an accuracy of 0.965 and 0.956 on the Gowalla and Yelp datasets, respectively, which is superior to other models. Comparing the recommendation loss of different models, the proposed model has an RMSE of 0.120 on the Gowalla dataset, which is superior to other models. In practical application analysis, when the number of recommendations is 5, the accuracy and recall of the proposed model are 0.138 and 0.069, respectively, which are superior to other models. In tourism itinerary planning, the overall planning time of the model is the shortest. Therefore, the proposed model has excellent application effects, and the research provides important technical references for tourist travel and rural tourism destination planning.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_96-Attraction_Recommendation_and_Itinerary_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid GAN-BiGRU Model Enhanced by African Buffalo Optimization for Diabetic Retinopathy Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150197</link>
        <id>10.14569/IJACSA.2024.0150197</id>
        <doi>10.14569/IJACSA.2024.0150197</doi>
        <lastModDate>2024-01-30T15:33:24.2700000+00:00</lastModDate>
        
        <creator>Sasikala P</creator>
        
        <creator>Sushil Dohare</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>African Buffalo Optimization (ABO); Bidirectional Gated Recurrent Unit (BI-GRU); Generative Adversarial Network (GAN); diabetic retinopathy; medical diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Diabetic retinopathy (DR) is a severe complication of diabetes mellitus, leading to vision impairment or even blindness if not diagnosed and treated early. Manual inspection of the patient&#39;s retina is the conventional way of diagnosing diabetic retinopathy. This study offers a novel method for the identification of diabetic retinopathy in medical diagnosis, using a hybrid Generative Adversarial Network (GAN) and Bidirectional Gated Recurrent Unit (BiGRU) model further refined with the African Buffalo Optimization algorithm. The model&#39;s capacity to identify minute patterns suggestive of diabetic retinopathy is improved by the GAN&#39;s skill in extracting complex characteristics from retinal images. Feature extraction plays a critical role in revealing information that may be hidden yet is essential for a precise diagnosis. The BiGRU component then works on the extracted characteristics, efficiently maintaining temporal relationships and enabling thorough information absorption. The combination of the GAN&#39;s feature extraction capabilities with the BiGRU&#39;s sequential information processing creates a synergistic interaction that gives the model a comprehensive grasp of retinal images. Moreover, the African Buffalo Optimization technique is utilized to fine-tune the model&#39;s parameters for improved accuracy in the identification of diabetic retinopathy. Implemented in Python, the proposed approach obtains a 98.5% accuracy rate, demonstrating its ability to reach high levels of accuracy in diabetic retinopathy detection.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_97-A_Hybrid_GAN_BiGRU_Model_Enhanced_by_African_Buffalo_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Style Conversion Based on AutoML and Big Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150195</link>
        <id>10.14569/IJACSA.2024.0150195</id>
        <doi>10.14569/IJACSA.2024.0150195</doi>
        <lastModDate>2024-01-30T15:33:24.2530000+00:00</lastModDate>
        
        <creator>Dan Chi</creator>
        
        <subject>AutoML; audio style conversion; machine learning; big data analysis; adain module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the field of audio style conversion research, the application of AutoML and big data analysis has shown great potential. The study used AutoML and big data analysis methods to conduct deep learning on audio styles, especially in style transitions between flutes and violins. The results show that using iterative learning for audio style conversion training, the training curve tends to stabilize after 100 iterations, while the validation curve reaches stability after 175 iterations. In terms of efficiency analysis, the efficiency of the yellow curve and the green curve reached 1.05 and 1.34, respectively, with the latter being significantly more efficient. This study achieved significant results in audio style conversion through the application of AutoML and big data analysis, successfully improving conversion accuracy. This progress has practical application value in multiple fields, including music production and sound effect design.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_95-Audio_Style_Conversion_Based_on_AutoML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an Adaptive Effective Intrusion Detection System for Smart Home IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150194</link>
        <id>10.14569/IJACSA.2024.0150194</id>
        <doi>10.14569/IJACSA.2024.0150194</doi>
        <lastModDate>2024-01-30T15:33:24.2370000+00:00</lastModDate>
        
        <creator>Hassen Sallay</creator>
        
        <subject>Smart home; IoT; IDS; taxonomy; architecture; SDN; ELM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>As the ubiquity of IoT devices in smart homes escalates, so does the vulnerability to cyber threats that exploit weaknesses in device security. Timely and accurate detection of attacks is critical to protect smart home networks. Intrusion Detection Systems (IDS) are a cornerstone of any layered security defense strategy. However, building such a system is challenging given smart home devices&#39; resource constraints and the diversity of their behaviors. This paper presents an adaptive IDS based on a device-specific approach and SDN deployment. We categorize devices based on traffic profiles to enable specialized architectural design and dynamically assign the suitable detection model. We demonstrate the IDS&#39;s efficiency, effectiveness, and adaptability by thoroughly benchmarking an ensemble of machine learning models, mainly tree ensemble models and extreme learning machine variants, on the up-to-date CICIoT2023 IoT security dataset. Our multi-component, device-aware IDS architecture leverages software-defined networking and virtualized network functions for scalable deployment, with an edge computing design to meet strict latency requirements. The results reveal that our adaptive model selection ensures detection accuracy while maintaining low latency, meeting the critical requirements of real-time accuracy and adaptability to smart home devices&#39; traffic patterns.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_94-Designing_An_Adaptive_Effective_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Temperature Control Method of Instrument Based on Fuzzy PID Control Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150193</link>
        <id>10.14569/IJACSA.2024.0150193</id>
        <doi>10.14569/IJACSA.2024.0150193</doi>
        <lastModDate>2024-01-30T15:33:24.2230000+00:00</lastModDate>
        
        <creator>Wenfang Li</creator>
        
        <creator>Yuqiao Wang</creator>
        
        <subject>Fuzzy PID control; instrumentation; intelligent temperature control; differential negative feedback; grey wolf optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The current intelligent temperature control of instrumentation is generally realized with PID control technology, whose efficiency and precision are low and cannot meet actual production requirements. A fuzzy PID (FPID) control technique is suggested as a solution to this issue, with the goal of increasing control precision by adjusting the PID parameters in real time using a fuzzy algorithm. In addition, a multi-strategy-fused Improved Grey Wolf Optimization (MGWO) algorithm is used to obtain the optimal fuzzy rule parameters for the fuzzy controller, achieving the optimization of FPID. On this basis, the MGWO-FPID-based intelligent instrumentation temperature control model is created to enhance the instrumentation&#39;s ability to regulate temperature. The testing results demonstrated that the MGWO-FPID model outperformed the other two models, with an objective function value of 5&#215;10^-8, an adaptation degree of 13.1, a control regulation time of 2.08 s, an F1 value of 96.14%, an MAE value of 8.53, a Recall value of 95.37%, and an AUC value of 0.995. These results show that the proposed MGWO-FPID-based model has high accuracy and efficiency and can effectively realize intelligent instrumentation temperature control in industrial production, thereby improving the accuracy and efficiency of temperature control, ensuring safe industrial production, and promoting industrial development. The model can monitor and regulate temperature in real time during industrial production, avoiding safety accidents caused by temperature anomalies, and its application can improve production efficiency and product quality, reduce production costs, and improve economic benefits. This can not only promote the development of related industries, but also drive the economic development of society as a whole.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_93-Intelligent_Temperature_Control_Method_of_Instrument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Generalized Anxiety Disorder Detection using a Deep Learning Approach with MGADHF Architecture on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150192</link>
        <id>10.14569/IJACSA.2024.0150192</id>
        <doi>10.14569/IJACSA.2024.0150192</doi>
        <lastModDate>2024-01-30T15:33:24.2230000+00:00</lastModDate>
        
        <creator>Faisal Alshanketi</creator>
        
        <subject>Deep learning; machine learning; anxiety disorder; social media; grey wolf optimization technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the contemporary landscape, social media has emerged as a dominant medium through which individuals articulate a wide range of emotions, encompassing both positive and negative sentiments, thereby offering significant insights into their psychological well-being. The ability to identify these emotional signals plays a vital role in the timely identification of persons undergoing depression and other mental health difficulties, facilitating the implementation of potentially life-saving therapies. A multitude of intelligent algorithms already demonstrate high accuracy in predicting depression. Despite the availability of many machine learning (ML) techniques for detecting persons with depression, the overall effectiveness of these systems has been deemed unsatisfactory. To overcome this constraint, the present study introduces an innovative methodology for identifying depression by employing deep learning (DL) techniques, specifically the Deep Learning Multi-Aspect Generalized Anxiety Disorder Detection with Hierarchical-Attention Network and Fuzzy (MGADHF). Feature selection is conducted by employing the Adaptive Particle and Grey Wolf optimization techniques together with fuzzy logic. The Multi-Aspect Depression Detection with Hierarchical Attention Network (MDHAN) model is subsequently utilized to categorize Twitter data, differentiating between those exhibiting symptoms of depression and those who do not. Comparative assessments are performed against established methodologies such as Convolutional Neural Network (CNN), Support Vector Machine (SVM), Minimum Description Length (MDL), and MDHAN. The proposed MGADHF architecture demonstrates a notable accuracy level, reaching 99.19%, surpassing the performance of frequency-based DL models and achieving a reduced false-positive rate.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_92-Revolutionizing_Generalized_Anxiety_Disorder_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2D-CNN Architecture for Accurate Classification of COVID-19 Related Pneumonia on X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150191</link>
        <id>10.14569/IJACSA.2024.0150191</id>
        <doi>10.14569/IJACSA.2024.0150191</doi>
        <lastModDate>2024-01-30T15:33:24.2070000+00:00</lastModDate>
        
        <creator>Nurlan Dzhaynakbaev</creator>
        
        <creator>Nurgul Kurmanbekkyzy</creator>
        
        <creator>Aigul Baimakhanova</creator>
        
        <creator>Iyungul Mussatayeva</creator>
        
        <subject>Machine learning; deep learning; X-Ray; CNN; detection; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the wake of the COVID-19 pandemic, the use of medical imaging, particularly X-ray radiography, has become integral to the rapid and accurate diagnosis of pneumonia induced by the virus. This research paper introduces a novel two-dimensional Convolutional Neural Network (2D-CNN) architecture specifically tailored for the classification of COVID-19 related pneumonia in X-ray images. Leveraging the advancements in deep learning, our model is designed to distinguish between viral pneumonia, typical of COVID-19, and other types of pneumonia, as well as healthy lung imagery. The architecture of the proposed 2D-CNN is characterized by its depth and a unique layer arrangement, which optimizes feature extraction from X-ray images, thus enhancing the model&#39;s diagnostic precision. We trained our model using a substantial dataset comprising thousands of annotated X-ray images, including those of patients diagnosed with COVID-19, patients with other pneumonia types, and individuals with no lung infection. This dataset enabled the model to learn a wide range of radiographic features associated with different lung conditions. Our model demonstrated exceptional performance, achieving high accuracy, sensitivity, and specificity in preliminary tests. The results indicate that our 2D-CNN model not only outperforms existing pneumonia classification models but also provides a valuable tool for healthcare professionals in the early detection and differentiation of COVID-19 related pneumonia. This capability is crucial for prompt and appropriate treatment, potentially reducing the pandemic&#39;s burden on healthcare systems. Furthermore, the model&#39;s design allows for easy integration into existing medical imaging workflows, offering a practical and efficient solution for frontline medical facilities. Our research contributes to the ongoing efforts to combat COVID-19 by enhancing diagnostic procedures through the application of artificial intelligence in medical imaging.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_91-2D_CNN_Architecture_for_Accurate_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Target Detection in Martial Arts Competition Video using Kalman Filter Algorithm Based on Multi target Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150190</link>
        <id>10.14569/IJACSA.2024.0150190</id>
        <doi>10.14569/IJACSA.2024.0150190</doi>
        <lastModDate>2024-01-30T15:33:24.1900000+00:00</lastModDate>
        
        <creator>Zhiguo Xin</creator>
        
        <subject>Multi target tracking; Kalman filtering algorithm; martial arts competition videos; target detection; feature matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>To address the low accuracy and poor stability of traditional object tracking methods for martial arts competition videos, a Kalman filtering algorithm based on feature matching and multi-target tracking is proposed for object detection in martial arts competition videos. Firstly, feature matching in multi-target tracking is studied. Then, based on target feature matching, the Kalman filtering algorithm is fused to construct a target detection model for martial arts videos. Finally, simulation experiments are conducted to verify the performance and application effectiveness of the model. The results showed that the average tracking errors of the model on the X and Y axes were 3.86% and 3.38%, respectively, while the average accuracy and recall rate during video target tracking were 93.64% and 95.48%, respectively. After 100 iterations, the results gradually stabilized, indicating that the constructed model could accurately detect targets in martial arts competition videos with high tracking accuracy and robustness. Compared with traditional object detection methods, this algorithm offers better performance and effectiveness. The Kalman filter algorithm based on feature matching and multi-target tracking therefore has broad application prospects and research value for target detection in martial arts competition videos.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_90-Target_Detection_in_Martial_Arts_Competition_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EmotionNet: Dissecting Stress and Anxiety Through EEG-based Deep Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150189</link>
        <id>10.14569/IJACSA.2024.0150189</id>
        <doi>10.14569/IJACSA.2024.0150189</doi>
        <lastModDate>2024-01-30T15:33:24.1730000+00:00</lastModDate>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Electroencephalography (EEG); Long short-term memory (LSTM); Convolutional neural network (CNN); human stress; anxiety detection; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Amid global health crises, such as the COVID-19 pandemic, the heightened prevalence of mental health disorders like stress and anxiety has underscored the importance of understanding and predicting human emotions. This paper introduces &quot;EmotionNet,&quot; an advanced system that leverages deep learning and state-of-the-art hardware capabilities to predict emotions, specifically stress and anxiety. Through the analysis of electroencephalography (EEG) signals, EmotionNet is uniquely poised to decode human emotions in real time. To extract information from pre-processed EEG signals, the EmotionNet architecture synergistically combines convolutional neural networks (CNN) and long short-term memory (LSTM) networks. This dual approach first decomposes EEG signals into their core alpha, beta, and theta rhythms. We preprocess these decomposed signals and develop a CNN-LSTM-based architecture for feature extraction, in which the LSTM captures the intricate temporal dynamics of EEG signals. The final stage classifies signals into &quot;stress&quot; or &quot;anxiety&quot; states through an AdaBoost classifier. Evaluation against the esteemed DEEP, SEED, and DASPS datasets showcased EmotionNet&#39;s exceptional performance, achieving a remarkable accuracy of 98.6%, which surpasses even human detection rates. Beyond its technical accomplishments, EmotionNet emphasizes the paramount importance of addressing and safeguarding mental health.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_89-EmotionNet_Dissecting_Stress_and_Anxiety_Through_EEG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Early Detection of Tomato Leaf Diseases: A ResNet-18 Approach for Sustainable Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150188</link>
        <id>10.14569/IJACSA.2024.0150188</id>
        <doi>10.14569/IJACSA.2024.0150188</doi>
        <lastModDate>2024-01-30T15:33:24.1730000+00:00</lastModDate>
        
        <creator>Asha M S</creator>
        
        <creator>Yogish H K</creator>
        
        <subject>Convolution neural networks; tomato crop health; deep learning; binary classification; disease detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The paper explores the application of Convolutional Neural Networks (CNNs), specifically ResNet-18, in revolutionizing the identification of diseases in tomato crops. Facing threats from pathogens like Phytophthora infestans, timely disease detection is crucial for mitigating economic losses and ensuring food security. Traditionally, manual inspection and labour-intensive tests posed limitations, prompting a shift to CNNs for more efficient solutions. The study uses a well-organized dataset, employing data preprocessing techniques and ResNet-18 architecture. The model achieves remarkable results, with a 91% F1 score, indicating its proficiency in distinguishing healthy and unhealthy tomato leaves. Metrics such as accuracy, sensitivity, specificity, and a high AUC score on the ROC curve underscore the model&#39;s exceptional performance. The significance of this work lies in its practical applications for early disease detection in agriculture. The ResNet-18 model, with its high precision and specificity, presents a powerful tool for crop management, contributing to sustainable agriculture and global food security.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_88-Deep_Learning_for_Early_Detection_of_Tomato_Leaf_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Short-Term Traffic Flow Prediction Model Based on IoT and Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150187</link>
        <id>10.14569/IJACSA.2024.0150187</id>
        <doi>10.14569/IJACSA.2024.0150187</doi>
        <lastModDate>2024-01-30T15:33:24.1600000+00:00</lastModDate>
        
        <creator>Xiaowei Sun</creator>
        
        <creator>Huili Dou</creator>
        
        <subject>Internet of things; deep learning algorithm; short term traffic flow; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>On a global scale, traffic problems are an essential factor affecting urban operations, with the frequent occurrence of traffic congestion and accidents posing a particular challenge. Solving this problem requires real-time and accurate prediction of traffic flow. This article explores the application of the Internet of Things (IoT) and deep learning to traffic flow prediction, aiming to overcome the inability of existing methods to meet real-time and accuracy requirements. IoT devices, such as road sensors and in-vehicle GPS devices, provide rich information for traffic flow prediction. Deep learning can not only learn from and abstract a large amount of complex traffic data but also handle traffic flow prediction tasks in various complex situations. During model construction, the complexity of the road network was fully considered, practical algorithms were designed to fuse multi-source data, and the structure of the model was optimized to meet the needs of real-time prediction. The experimental results show that the absolute error of the test results is generally less than 6 km/h, so the model can reliably reflect the future traffic speed of a road section.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_87-Construction_of_Short_Term_Traffic_Flow_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Intellectual Decision Making System for Logistic Business Process Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150186</link>
        <id>10.14569/IJACSA.2024.0150186</id>
        <doi>10.14569/IJACSA.2024.0150186</doi>
        <lastModDate>2024-01-30T15:33:24.1430000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Leilya Kuntunova</creator>
        
        <creator>Shirin Amanzholova</creator>
        
        <creator>Almagul Bizhanova</creator>
        
        <creator>Marina Vorogushina</creator>
        
        <creator>Aizhan Kuparova</creator>
        
        <creator>Mukhit Maikotov</creator>
        
        <creator>Elmira Nurlybayeva</creator>
        
        <subject>Decision making; logistics; business process; machine learning; management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This research paper delves into the design and development of an Intellectual Decision Making System (IDMS) incorporated into a Logistic Business Process Management System (LBPSMS), employing advanced Machine Learning (ML) models. Aimed at streamlining and optimizing logistics business operations, the focal point of this study is to significantly elevate efficiency, enhance decision-making precision, and substantially reduce operational costs. This research introduces a pioneering hybrid approach that amalgamates both supervised and unsupervised machine learning algorithms, creating a unique paradigm for predictive analytics, trend analysis, and anomaly detection in logistics business processes. The practical application of these combined methodologies extends to diverse areas such as accurate demand forecasting, optimal route planning, efficient inventory management, and predictive customer behavior analysis. Empirical evidence from experimental trials corroborates the efficacy of the proposed IDMS, showcasing its profound impact on the decision-making process, with clear and measurable enhancements in operational efficiency and overall business performance within the logistics sector. This study thus delivers invaluable insights into the realm of machine learning applications within logistics, extending a comprehensive blueprint for future research undertakings and practical system implementations. With its practical significance and academic relevance, this research underscores the transformative potential of machine learning in revolutionizing the logistics business process management systems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_86-Development_of_Intellectual_Decision_Making_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art Review of Deep Learning Methods in Fake Banknote Recognition Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150185</link>
        <id>10.14569/IJACSA.2024.0150185</id>
        <doi>10.14569/IJACSA.2024.0150185</doi>
        <lastModDate>2024-01-30T15:33:24.1270000+00:00</lastModDate>
        
        <creator>Ualikhan Sadyk</creator>
        
        <creator>Rashid Baimukashev</creator>
        
        <creator>Cemil Turan</creator>
        
        <subject>Fake banknote; detection; classification; recognition; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the burgeoning epoch of digital finance, the exigency for fortified monetary transactions is paramount, underscoring the need for advanced counterfeit deterrence methodologies. The research paper provides an exhaustive analysis, delving into the profundities of employing sophisticated deep learning (DL) paradigms in the battle against fiscal fraudulence through fake banknote detection. This comprehensive review juxtaposes the traditional machine learning approaches with the avant-garde DL techniques, accentuating the conspicuous superiority of the latter in terms of accuracy, efficiency, and the diminution of human oversight. Spanning multiple continents and currencies, the discourse highlights the universal applicability and potency of DL, incorporating convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) in discerning the most cryptic of counterfeits, a feat unachievable by obsolete technologies. The paper meticulously dissects the architectures, learning processes, and operational facets of these systems, offering insights into their convolutional strata, pooling heuristics, backpropagation, and loss minimization algorithms, alluding to their consequential roles in feature extraction and intricate pattern recognition - the quintessentials of authenticating banknotes. Furthermore, the exploration broaches the ethical and privacy concerns stemming from DL, including data bias and over-reliance on technology, suggesting the harmonization of algorithmic advancements with robust legislative frameworks. Conclusively, this seminal review posits that while DL techniques herald a revolutionary competence in fake banknote recognition, continuous research, and multi-faceted strategies are imperative in adapting to the ever-evolving chicanery of counterfeit malefactors.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_85-State_of_the_art_Review_of_Deep_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Fruit Sorting in Smart Agriculture System: Analysis of Deep Learning-based Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150183</link>
        <id>10.14569/IJACSA.2024.0150183</id>
        <doi>10.14569/IJACSA.2024.0150183</doi>
        <lastModDate>2024-01-30T15:33:24.1130000+00:00</lastModDate>
        
        <creator>Cheng Liu</creator>
        
        <creator>Shengxiao Niu</creator>
        
        <subject>Smart agriculture; automated fruit sorting; deep learning; Convolutional Neural Network (CNN); analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Automated fruit sorting plays a crucial role in smart agriculture, enabling efficient and accurate classification of fruits based on various quality parameters. Traditionally, rule-based and machine-learning methods have been employed for fruit sorting, but in recent years, deep learning-based approaches have gained significant attention. This paper investigates deep learning methods for fruit sorting and justifies their prevalence in the field. However, CNN-based fruit sorting methods still have limitations that must be addressed to improve their effectiveness. This paper therefore presents a comprehensive analysis of CNN-based methods, highlighting their strengths and limitations. The analysis aims to advance automated fruit sorting in smart agriculture and provide insights for future research and development in deep learning-based fruit sorting techniques.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_83-Automated_Fruit_Sorting_in_Smart_Agriculture_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-driven Training and Improvement Methods for College Students&#39; Line Dancing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150184</link>
        <id>10.14569/IJACSA.2024.0150184</id>
        <doi>10.14569/IJACSA.2024.0150184</doi>
        <lastModDate>2024-01-30T15:33:24.1130000+00:00</lastModDate>
        
        <creator>Xiaohui WANG</creator>
        
        <subject>Motion capture; artificial intelligence technology; virtual reality; college students’ line dance training; dance ascension</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the advancement of computer technology, artificial intelligence technology has gradually become a research focus, and researchers&#39; attention has gradually shifted from the computer itself to the interaction between computers and humans. Artificial intelligence has begun to appear in various industries. With its rigorous computing logic and efficient computing speed, artificial intelligence technology is gradually replacing high-precision or highly repetitive work. However, little specific data supports claims about the resulting work efficiency and output. In this context, this paper studies methods of applying AI to the training and improvement of college students&#39; line dancing. Virtual reality technology mainly undertakes functions such as virtual space modeling, sound positioning, sensory feedback, voice interaction, and visual and spatial tracking, to ensure accurate positioning during choreography and motion capture; mechanical capture devices are used for motion capture in the virtual reality space. This article uses intelligent capture technology based on virtual reality technology and artificial intelligence algorithms to capture and analyze dance postures, generate analysis reports in a timely manner, and provide corrections and suggestions for dance posture. The final results show that AI can improve the efficiency of college students&#39; line dance training and can increase the degree and variety of innovation in dance posture by 7% to 13% compared with purely manual training. This shows that artificial intelligence technology plays a positive role in college students&#39; overall line dance training. The paper also argues that artificial intelligence technology can effectively improve the overall productivity of traditional industries.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_84-Artificial_Intelligence_driven_Training_and_Improvement_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Computer Vision and Machine Learning Techniques in STEM-Education Self-Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150182</link>
        <id>10.14569/IJACSA.2024.0150182</id>
        <doi>10.14569/IJACSA.2024.0150182</doi>
        <lastModDate>2024-01-30T15:33:24.0970000+00:00</lastModDate>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Assyl Tuimebayev</creator>
        
        <creator>Botagoz Zhussipbek</creator>
        
        <creator>Kalmurat Utebayev</creator>
        
        <creator>Venera Nakhipova</creator>
        
        <creator>Oichagul Alchinbayeva</creator>
        
        <creator>Gulfairuz Makhanova</creator>
        
        <creator>Olzhas Kazhybayev</creator>
        
        <subject>Load balancing; machine learning; server; classification; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In this innovative exploration, &quot;Applying Computer Vision Techniques in STEM-Education Self-Study,&quot; the research delves into the transformative intersection of advanced computer vision (CV) technologies and self-directed learning within Science, Technology, Engineering, and Mathematics (STEM) education. Challenging traditional educational paradigms, this study posits that sophisticated CV algorithms, when judiciously integrated with modern educational frameworks, can profoundly augment the efficacy of self-study models for students navigating the increasingly intricate STEM curricula. By leveraging state-of-the-art facial recognition, object detection, and pattern analysis, the study underscores how CV can monitor, analyze, and thereby enhance students&#39; engagement and interaction with digital content, a pioneering stride that addresses the prevalent disconnect between static study materials and the dynamic nature of learner engagement. Furthermore, the research illuminates the critical role of CV in generating personalized study roadmaps, effectively responding to individual learner&#39;s behavioral patterns and cognitive absorption rhythms, identified through meticulous analysis of captured visual data, thereby transcending the one-size-fits-all educational approach. Through rigorous qualitative and quantitative research methods, the paper offers groundbreaking insights into students&#39; study habits, proclivities, and the nuanced obstacles they face, facilitating the creation of responsive, adaptive, and deeply personalized learning experiences. Conclusively, this research serves as a clarion call to educators, technologists, and policy-makers, emphatically demonstrating that the thoughtful application of computer vision techniques not only catalyzes a more engaging self-study landscape but also holds the latent potential to revolutionize the holistic STEM education ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_82-Applying_Computer_Vision_and_Machine_Learning_Techniqes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Skeletal Skinned Mesh Algorithm Based on 3D Virtual Human Model in Computer Animation Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150181</link>
        <id>10.14569/IJACSA.2024.0150181</id>
        <doi>10.14569/IJACSA.2024.0150181</doi>
        <lastModDate>2024-01-30T15:33:24.0800000+00:00</lastModDate>
        
        <creator>Zhongkai Zhan</creator>
        
        <subject>3D virtual human; skinned mesh algorithm; weight; character animation; dual quaternion; motion capture data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>3D virtual character animation is the core technology of games, animation, and virtual reality. To improve its visual and realistic effects, this research focused on the skeletal skinned mesh algorithm. Firstly, a three-dimensional human body model was established based on motion capture data. Then, skin vertex weight calculation and bone skin animation design were completed for the human body model. The experiments confirm that the designed weight calculation method produces smooth weight transitions and good computational stability. The designed skinned mesh algorithm outperforms existing skinned mesh algorithms in accuracy, recall, and area under curve values, with a maximum area under curve value of 0.927. Its smoothness and volume retention rates are both above 90.00%, with no obvious collapse phenomenon. Its other objective and subjective evaluation indicators are also superior to those of existing advanced skinned mesh algorithms, and the skinning effect is realistic and smooth. Overall, this study contributes to the creation of 3D virtual character animation, enhances the visual realism of virtual creation, and provides key support for the animation performance of virtual characters.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_81-Application_of_Skeletal_Skinned_Mesh_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HarborSync: An Advanced Energy-efficient Clustering-based Algorithm for Wireless Sensor Networks to Optimize Aggregation and Congestion Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150180</link>
        <id>10.14569/IJACSA.2024.0150180</id>
        <doi>10.14569/IJACSA.2024.0150180</doi>
        <lastModDate>2024-01-30T15:33:24.0670000+00:00</lastModDate>
        
        <creator>Ibrahim Aqeel</creator>
        
        <subject>Clustering; congestion control; cluster head selection; energy-efficient clustering; wireless sensor networks; energy optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the ever-evolving landscape of Wireless Sensor Networks (WSNs), the demand for cutting-edge algorithms has never been more critical. This paper proposes an algorithm, HarborSync, to improve stability, energy efficiency, durability, and congestion control in WSNs. When selecting cluster heads and backup nodes, HarborSync applies the Optimised Stable Clustering Algorithm (OSCA) and the Weighted Clustering Algorithm (WCA). This fresh method lays the groundwork for better performance by introducing techniques to intentionally postpone cluster head changes and compute priorities. Using the innovative Cluster-based Aggregation and Congestion Control (CACC) features, HarborSync provides enhanced routing, adaptive reconfiguration, efficient aggregation techniques, and dynamic congestion monitoring. Among HarborSync’s strengths, stability stands out with a 90% rating, surpassing LEACH (78%), LEACH-C (82%), TEEN (88%), and PEGASIS (76%). For durability, HarborSync scores 88%, better than LEACH (75%), LEACH-C (80%), TEEN (85%), and PEGASIS (72%). For congestion control, HarborSync scores 3.85%, compared with LEACH and LEACH-C managing 5.22%, TEEN achieving 4.98%, and PEGASIS 7.32%. Regarding adaptability, HarborSync showcases its versatility with an 85% rating, surpassing LEACH (72%), competing with LEACH-C (78%) and TEEN (90%), and outperforming PEGASIS (68%). In the critical realm of packet loss management, HarborSync demonstrates efficiency with a reduced rate of 6.179%, outperforming LEACH (7.811%), LEACH-C (6.897%), and PEGASIS (7.973%), while TEEN records 4.953%.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_80-HarborSync_An_Advanced_Energy_efficient_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Neural Network-based Automatic Music Multi-Instrument Classification Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150179</link>
        <id>10.14569/IJACSA.2024.0150179</id>
        <doi>10.14569/IJACSA.2024.0150179</doi>
        <lastModDate>2024-01-30T15:33:24.0670000+00:00</lastModDate>
        
        <creator>Ribin Guo</creator>
        
        <subject>Neural network; musical instrument; automatic classification; auditory feature; sparrow search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The automatic classification of multiple instruments plays a crucial role in providing services for music retrieval and recommendation. This paper focuses on automatic multi-instrument classification. First, instrument features were analyzed, and the Mel-frequency cepstral coefficient (MFCC) and perceptual linear predictive coefficient (PLPC) were extracted from instrument signals. Features were selected using the entropy weight method. The optimal initial weight threshold of a back-propagation neural network (BPNN) was obtained using the sparrow search algorithm (SSA), yielding an SSA-BPNN classifier. Experiments were conducted on the IRMAS dataset. The results demonstrated that the combination of MFCC and PLPC features selected through the entropy weight method achieved the best performance in automatic multi-instrument classification, with precision, recall, and F1 values of 0.72, 0.71, and 0.71, respectively. Moreover, it outperformed other algorithms such as the support vector machine and XGBoost. These results confirm the reliability of the proposed automatic multi-instrument classification method, making it suitable for practical applications.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_79-Research_on_Neural_Network_based_Automatic_Music.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Display Model of Oil Painting Art Based on Digital Vision Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150178</link>
        <id>10.14569/IJACSA.2024.0150178</id>
        <doi>10.14569/IJACSA.2024.0150178</doi>
        <lastModDate>2024-01-30T15:33:24.0500000+00:00</lastModDate>
        
        <creator>Qiong Yang</creator>
        
        <creator>Zixuan Yue</creator>
        
        <subject>Oil painting; spatial visualization; Stereo matching; Spatial display; ELAS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Oil painting, owing to its unique expressive approach, holds infinite charm in classical artistic creation, yet introduces complexities in terms of manual maintenance. In pursuit of digital spatial visualization of oil painting art, this study employs a stereo matching algorithm and Efficient Large-Scale Stereo (ELAS) matching, focusing on aspects like disparity maps and pixel contrasts. Furthermore, enhancements to the algorithm involve the incorporation of the cross-arms strategy for image registration and the selection of auxiliary point sets to optimize the handling of image features. Results indicate that the proposed model, evaluated on the Middlebury dataset, achieves high accuracy, recall rate, and F1 score, measuring 97.2%, 95.0%, and 97.5%, respectively, surpassing the DecStereo algorithm by 3.4%, 8.2%, and 5.7%. When tested on the Photo2monet oil painting dataset, the proposed model achieves peak signal-to-noise ratio and average structural similarity index values of 16.781 and 0.833, respectively. This suggests that the proposed model excels in the digital visual representation of oil paintings, exhibiting higher image precision, stronger stereo matching capability, and superior spatial display performance.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_78-Spatial_Display_Model_of_Oil_Painting_Art.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geospatial Pharmacy Navigator: A Web and Mobile Application Integrating Geographical Information System (GIS) for Medicine Accessibility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150176</link>
        <id>10.14569/IJACSA.2024.0150176</id>
        <doi>10.14569/IJACSA.2024.0150176</doi>
        <lastModDate>2024-01-30T15:33:24.0330000+00:00</lastModDate>
        
        <creator>Mia Amor C. Tinam-isan</creator>
        
        <creator>Sherwin D. Sandoval</creator>
        
        <creator>Nathanael R. Neri</creator>
        
        <creator>Nasrollah L. Gandamato</creator>
        
        <subject>ICT in health; mobile application; web application; GIS; pharmacy mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This project introduces a web and mobile application that integrates Geographic Information Systems (GIS) to identify pharmacies with available prescription drugs, addressing the expanding role of Information and Communication Technology (ICT) in healthcare. The primary objective is to offer the general public an easy-to-use platform that locates the closest pharmacy stocking the searched drugs or medicines. Adopting the Rapid Application Development methodology ensures continuous engagement with stakeholders, allowing developers to closely align the application with user requirements. Essential elements of the web platform include chat functionality, inventory management, pharmacy oversight, and the display of medication listings. With the mobile application, general users may check medication lists, search pharmacies, find pharmacy locations and the best routes, search for specific medications, access comprehensive medication information, and more. Fifty respondents, comprising five pharmacists and forty-five general users, expressed overall satisfaction with the system&#39;s functionality, emphasizing its ease of use and straightforward navigation across most features. This project not only underscores the importance of ICT in the healthcare industry, but also shows how technology can be successfully integrated to improve accessibility and expedite healthcare procedures for both the general public and professionals.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_76-Geospatial_Pharmacy_Navigator_A_Web_and_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Bio-Inspired Optimization-based Cloud Resource Demand Prediction using Improved Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150177</link>
        <id>10.14569/IJACSA.2024.0150177</id>
        <doi>10.14569/IJACSA.2024.0150177</doi>
        <lastModDate>2024-01-30T15:33:24.0330000+00:00</lastModDate>
        
        <creator>Nisha Sanjay</creator>
        
        <creator>Sasikumaran Sreedharan</creator>
        
        <subject>Cloud computing; resource demand; machine learning; cloud resource demand prediction; bio-inspired algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In order to meet diverse resource requirements in cloud computing, numerous resources are integrated into a data centre. Delivering resources in a timely and accurate manner to meet user expectations is a significant concern. However, users&#39; resource demands fluctuate greatly and change frequently, so resource provisioning may not happen on time. Furthermore, because some physical resources are shut down to save energy, there may occasionally not be enough of them to meet user requests. Therefore, it is critical to provision resources proactively to ensure a positive user experience with cloud computing. To enable resource provisioning in advance, it is essential to accurately estimate future resource demands. Using machine learning techniques, this study offers a unique approach that identifies key features, accelerating the forecast of cloud resource consumption. Finding the classification method with the best fit and maximum classification accuracy is crucial when predicting cloud resource consumption. An attribute selection method is used to reduce the dataset, and the reduced data is then passed to the classification process. The hybrid attribute selection method used in the investigation, which combines the bio-inspired genetic algorithm, the pulse-coupled neural network, and the particle swarm optimization algorithm, improves classification accuracy. The prediction accuracy of this technique is examined using a variety of performance criteria. The experimental results show that the proposed machine learning method predicts cloud resource demand more effectively than traditional machine learning models.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_77-Hybrid_Bio_Inspired_Optimization_based_Cloud_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Accelerated Intelligent Charging Strategy Recommendation for Electric Vehicles Based on Deep Q-Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150175</link>
        <id>10.14569/IJACSA.2024.0150175</id>
        <doi>10.14569/IJACSA.2024.0150175</doi>
        <lastModDate>2024-01-30T15:33:24.0200000+00:00</lastModDate>
        
        <creator>Xianhao Shen</creator>
        
        <creator>Zhen Wu</creator>
        
        <creator>Yexin Zhang</creator>
        
        <creator>Shaohua Niu</creator>
        
        <subject>Scalable acceleration; smart charging; Deep Q-network; Markov decision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the rapid development of electric vehicles, their charging strategies significantly impact the overall power grid. Solving the spatiotemporal scheduling problem of vehicle charging has become a hot research topic. This paper focuses on recommending suitable charging stations for electric vehicles and proposes a scalable accelerated intelligent charging strategy recommendation algorithm based on Deep Q-Networks (DQN). The strategy recommendation problem is formulated as a Markov decision process, where the continuous sequence of regional charging requests within a time slice is fed into the DQN network as the input state, enabling optimal charging strategy recommendations for each electric vehicle. The algorithm aims to maintain regional load balance while minimizing user waiting time. To enhance the algorithm&#39;s applicability, a scalable, accelerated charging strategy framework is further proposed, which incorporates information filtering and shared experience pool mechanisms to adapt to different expansion scenarios and expedite strategy iterations in new scenarios. Simulation results demonstrate that the proposed DQN-based strategy recommendation algorithm outperforms the shortest path-first strategy, and the scalable, accelerated charging strategy framework achieves a 64.3% improvement in iteration speed in new scenarios, which helps to reduce the cloud server load and saves overheads.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_75-Scalable_Accelerated_Intelligent_Charging_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting an Optimized Hybrid Model for Stock Price Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150174</link>
        <id>10.14569/IJACSA.2024.0150174</id>
        <doi>10.14569/IJACSA.2024.0150174</doi>
        <lastModDate>2024-01-30T15:33:24.0030000+00:00</lastModDate>
        
        <creator>Liangchao LIU</creator>
        
        <subject>Stock prediction; machine learning approaches; ensemble learning; grasshopper optimization; histogram-based gradient boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the finance sector, stock price forecasting is deemed crucial for traders and investors. In this study, a detailed comparison and analysis of various machine learning models for stock price forecasting were undertaken. Historical stock data and an array of technical indicators were utilized in these models. The focus was the enhancement of the Histogram-Based Gradient Boosting (HGBR) method for predicting the Nasdaq stock index. Optimization techniques such as the genetic algorithm, biogeography-based optimization, and the grasshopper optimization algorithm were applied. Among these, the most promising results were shown by the grasshopper optimization method. The optimized HGBR models, namely GA-HGBR, BBO-HGBR, and GOA-HGBR, achieved significant improvements, with coefficient of determination values of 0.96, 0.98, and 0.99, respectively. These figures underscore the substantial advancement of these models compared to the baseline HGBR model. Metrics such as Mean Absolute Error, Root Mean Square Error, Mean Absolute Percentage Error, and the Coefficient of Determination were employed to assess the performance of the models.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_74-Presenting_an_Optimized_Hybrid_Model_for_Stock_Price_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method to Increase the Analysis Accuracy of Stock Market Valuation: A Case Study of the Nasdaq Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150173</link>
        <id>10.14569/IJACSA.2024.0150173</id>
        <doi>10.14569/IJACSA.2024.0150173</doi>
        <lastModDate>2024-01-30T15:33:23.9870000+00:00</lastModDate>
        
        <creator>Haixia Niu</creator>
        
        <subject>Machine learning; Nasdaq index; support vector regression; gray wolf optimizer; slime mould algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>For a significant period, conventional methodologies have been employed to assess fundamental and technical aspects in forecasting and analyzing stock market performance. The precision and availability of stock market predictions have been enhanced by machine learning. Various machine learning methods have been utilized for stock market predictions. A novel, optimized machine-learning approach for financial market analysis is aimed to be introduced by this study. A unique method for improving the accuracy of stock price forecasting by incorporating support vector regression with the slime mould algorithm is presented in the present work. Other optimization algorithms were employed to enhance the prediction accuracy and the convergence speed of the network, which were Biogeography-based optimization and Gray Wolf Optimizer. An assessment of the proposed model&#39;s effectiveness in predicting stock prices was conducted through research employing Nasdaq index data extending from January 1, 2015, to June 29, 2023. Substantial improvements in accuracy for the proposed model were indicated by the results compared to other models, with an R-squared value of 0.991, a root mean absolute error of 149.248, a mean absolute percentage error of 0.930, and a mean absolute error of 116.260. Furthermore, not only is the prediction accuracy enhanced by the integration of the proposed model, but the model&#39;s adaptability to dynamic market conditions is also increased.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_73-A_Method_to_Increase_the_Analysis_Accuracy_of_Stock_Market.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time FPGA Implementation of a High Speed for Video Encryption and Decryption System with High Level Synthesis Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150172</link>
        <id>10.14569/IJACSA.2024.0150172</id>
        <doi>10.14569/IJACSA.2024.0150172</doi>
        <lastModDate>2024-01-30T15:33:23.9730000+00:00</lastModDate>
        
        <creator>Ahmed Alhomoud</creator>
        
        <subject>Security; encryption; decryption; AES; HDL coder; high level synthesis; FPGA; Zynq7000</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The development of communication networks has made information security more important than ever for both transmission and storage. Since the majority of networks involve images, image security is becoming a difficult challenge. To provide real-time image encryption and decryption, this study proposes a well-optimized FPGA implementation of a video cryptosystem based on high-level synthesis (HLS). The MATLAB HDL Coder and Xilinx Vivado tools are used in the design, implementation, and validation of the algorithm on the Xilinx Zynq FPGA platform. The hardware architecture is well suited to low resource consumption and pipelined processing, and the approach targets real-time applications involving secret image encryption and decryption. This study presents an implementation of the encryption-decryption system that is both highly efficient and area-optimized. A unique HLS design technique based on application-specific bit widths for intermediate data nodes was used to realize the proposed implementation. For HLS, the MATLAB HDL Coder was used to generate the register transfer level (RTL) design. Using the Vivado software, the RTL design was implemented on the Xilinx ZedBoard, and its functioning was tested in real time using an input video stream. The results produced are faster and more area-efficient (using fewer gates on the target FPGA than before) than those of earlier solutions for the same target board.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_72-Real_Time_FPGA_Implementation_of_a_High_Speed_for_Video_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experience Replay Optimization via ESMM for Stable Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150171</link>
        <id>10.14569/IJACSA.2024.0150171</id>
        <doi>10.14569/IJACSA.2024.0150171</doi>
        <lastModDate>2024-01-30T15:33:23.9570000+00:00</lastModDate>
        
        <creator>Richard Sakyi Osei</creator>
        
        <creator>Daphne Lopez</creator>
        
        <subject>Experience replay; experience replay optimization; experience retention strategy; experience selection strategy; replay memory management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The memorization and reuse of experience, popularly known as experience replay (ER), has improved the performance of off-policy deep reinforcement learning (DRL) algorithms such as deep Q-networks (DQN) and deep deterministic policy gradients (DDPG). Despite its success, ER faces the challenges of noisy transitions, large memory sizes, and unstable returns. Researchers have introduced replay mechanisms focusing on experience selection strategies to address these issues. However, the choice of experience retention strategy has a significant influence on the selection strategy. Experience Replay Optimization (ERO) is a novel reinforcement learning algorithm that uses a deep replay policy for experience selection. However, ERO relies on the na&#239;ve first-in-first-out (FIFO) retention strategy, which seeks to manage replay memory by constantly retaining recent experiences irrespective of their relevance to the agent’s learning. FIFO sequentially overwrites the oldest experience with a new one when the replay memory is full. To improve the retention strategy of ERO, we propose an experience replay optimization with enhanced sequential memory management (ERO-ESMM). ERO-ESMM uses an improved sequential retention strategy to manage the replay memory efficiently and stabilize the performance of the DRL agent. The efficacy of the ESMM strategy is evaluated together with five additional retention strategies across four distinct OpenAI environments. The experimental results indicate that ESMM performs better than the other five fundamental retention strategies.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_71-Experience_Replay_Optimization_via_ESMM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Flow Prediction in Urban Networks: Integrating Sequential Neural Network Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150170</link>
        <id>10.14569/IJACSA.2024.0150170</id>
        <doi>10.14569/IJACSA.2024.0150170</doi>
        <lastModDate>2024-01-30T15:33:23.9400000+00:00</lastModDate>
        
        <creator>Eva Lieskovska</creator>
        
        <creator>Maros Jakubec</creator>
        
        <creator>Pavol Kudela</creator>
        
        <subject>Traffic flow; short-term prediction; machine learning; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The rapid growth of urban areas has significantly compounded traffic challenges, amplifying concerns about congestion and the need for efficient traffic management. Accurate short-term traffic flow prediction remains important for strategic infrastructure planning within these expanding urban networks. This study explores a Transformer-based model designed for traffic flow prediction, conducting a comprehensive comparison with established models such as Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Time-Delay Neural Network (TDNN). Our approach integrates traditional time series values with derived time-related features, enhancing the model&#39;s predictive capabilities. The aim is to effectively capture temporal dependencies within operational data. Despite the effectiveness of existing models, internal complexities persist due to diverse road conditions that influence traffic dynamics. The proposed Transformer model consistently demonstrates competitive performance and offers adaptability when learning from longer time spans. However, the simpler BiLSTM model proved to be the most effective when applied to the utilized data.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_70-Traffic_Flow_Prediction_in_Urban_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Data Clustering based on Self-Adaptive Bacteria Foraging Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150169</link>
        <id>10.14569/IJACSA.2024.0150169</id>
        <doi>10.14569/IJACSA.2024.0150169</doi>
        <lastModDate>2024-01-30T15:33:23.9230000+00:00</lastModDate>
        
        <creator>Tanmoy Singha</creator>
        
        <creator>Rudra Sankar Dhar</creator>
        
        <creator>Joydeep Dutta</creator>
        
        <creator>Arindam Biswas</creator>
        
        <subject>Data clustering; Self-Adaptive Bacterial Foraging Optimization (SABFO); Particle Swarm Optimization (PSO); FBADE scheme; the k-means algorithm and the classical BFO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Data clustering reduces the number of data objects by grouping similar data objects together. In this process, data are divided into meaningful groups (clusters) without any prior information. This manuscript presents a clustering algorithm based on an adaptive strategy known as Self-Adaptive Bacterial Foraging Optimization (SABFO). It is an optimization strategy for clustering problems in which a colony of bacteria forages and converges to definite locations, taken as the final cluster centers, by minimizing the fitness function. The quality of this method is assessed on numerous well-known benchmark data sets. In this paper, the authors compare the proposed technique with several well-known advanced clustering approaches: the k-means algorithm, the Particle Swarm Optimization algorithm, and the Fitness-Based Adaptive Differential Evolution (FBADE) scheme. The experimental findings demonstrate the usefulness of the proposed algorithm as a clustering method that can operate on data sets with different densities and cluster sizes.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_69-A_Novel_Approach_to_Data_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Practical Application of AI and Large Language Models in Software Engineering Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150168</link>
        <id>10.14569/IJACSA.2024.0150168</id>
        <doi>10.14569/IJACSA.2024.0150168</doi>
        <lastModDate>2024-01-30T15:33:23.9230000+00:00</lastModDate>
        
        <creator>Vasil Kozov</creator>
        
        <creator>Galina Ivanova</creator>
        
        <creator>Desislava Atanasova</creator>
        
        <subject>Application of AI-powered software; AI generated images; software engineering; stable diffusion; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Subjects with previously limited application in the software industry, such as AI, have recently received a tremendous boost due to the development and rising publicity of LLMs. LLM-powered software has a wide array of practical applications that must be taught to Software Engineering students so that they remain relevant in the field. The pace of technological change is extremely fast, and university curricula must keep up with it. Renewing and creating new methodologies and workshops is a difficult task to complete successfully in such a dynamic environment full of cutting-edge technologies. This paper showcases our approach to using LLM-powered software for AI-generated images, such as Stable Diffusion, and code generation tools such as ChatGPT in workshops for two relevant subjects – Analysis of Software Requirements and Specifications, and Artificial Intelligence. A comparison between the available image-generating LLMs is made, and the choice between them is explained. Student feedback is presented, and a generally positive and motivational impact is noted during and after the workshops. A brief introduction covers the subjects where AI is applied. Proposed solutions for several uses of AI in higher education, specifically software engineering, are presented. Several workshops have been developed and included in the curriculum, and the results of their application are analyzed. Further development directions based on the gained experience, feedback, and retrieved data are proposed. Conclusions are drawn on the application of AI in higher education, and different ways to utilize such tools are presented.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_68-Practical_Application_of_AI_and_Large_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context-Aware Transfer Learning Approach to Detect Informative Social Media Content for Disaster Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150167</link>
        <id>10.14569/IJACSA.2024.0150167</id>
        <doi>10.14569/IJACSA.2024.0150167</doi>
        <lastModDate>2024-01-30T15:33:23.9100000+00:00</lastModDate>
        
        <creator>Saima Saleem</creator>
        
        <creator>Monica Mehrotra</creator>
        
        <subject>Disaster management; twitter; distilBERT; deep learning; multistage finetuning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the wake of disasters, timely access to accurate information about on-the-ground situation is crucial for effective disaster response. In this regard, social media (SM) like Twitter have emerged as an invaluable source of real-time user-generated data during such events. However, accurately detecting informative content from large amounts of unstructured user-generated data under such time-sensitive circumstances remains a challenging task. Existing methods predominantly rely on non-contextual language models, which fail to accurately capture the intricate context and linguistic nuances within the disaster-related tweets. While some recent studies have explored context-aware methods, they are based on computationally demanding transformer architectures. To strike a balance between effectiveness and computational efficiency, this study introduces a new context-aware transfer learning approach based on DistilBERT for the accurate detection of disaster related informative content on SM. Our novel approach integrates DistilBERT with a Feed Forward Neural Network (FFNN) and involves multistage finetuning of the model on balanced benchmark real-world disaster datasets. The integration of DistilBERT with an FFNN provides a simple and computationally efficient architecture, while the multistage finetuning facilitates a deeper adaptation of the model to the disaster domain, resulting in improved performance. Our proposed model delivers significant improvements compared to the state-of-the-art (SOTA) methods. This suggests that our model not only addresses the computational challenges but also enhances the contextual understanding, making it a promising advancement for accurate and efficient disaster-related informative content detection on SM platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_67-Context_Aware_Transfer_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare Intrusion Detection using Hybrid Correlation-based Feature Selection-Bat Optimization Algorithm with Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150166</link>
        <id>10.14569/IJACSA.2024.0150166</id>
        <doi>10.14569/IJACSA.2024.0150166</doi>
        <lastModDate>2024-01-30T15:33:23.8930000+00:00</lastModDate>
        
        <creator>H. Kanakadurga Bella</creator>
        
        <creator>S. Vasundra</creator>
        
        <subject>Convolutional neural network; deep learning; intrusion detection system; healthcare; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Cloud computing is popular among users in various areas such as healthcare, banking, and education due to its low-cost services alongside increased reliability and efficiency. However, security is a significant problem in cloud-based systems because cloud services are accessed via the Internet by a variety of users. Therefore, the patient’s health information needs to be kept confidential, secure, and accurate. Moreover, any change in actual patient data potentially results in errors during diagnosis and treatment. In this research, the hybrid Correlation-based Feature Selection-Bat Optimization Algorithm (HCFS-BOA) based on the Convolutional Neural Network (CNN) model is proposed for intrusion detection to secure the entire network in the healthcare system. Initially, the data is obtained from the CIC-IDS2017 and NSL-KDD datasets, after which min-max normalization is performed on the acquired data. HCFS-BOA is employed in feature selection to examine the appropriate features that not only have significant correlations with the target variable, but also contribute to the optimal performance of intrusion detection in the healthcare system. Finally, CNN classification is performed to identify and classify intrusions accurately and effectively in the healthcare system. The existing methods, namely SafetyMed, Hybrid Intrusion Detection System (HIDS), and Blockchain-orchestrated Deep learning method for Secure Data Transmission in IoT-enabled healthcare systems (BDSDT), are employed to evaluate the efficacy of HCFS-BOA-based CNN. The proposed HCFS-BOA-based CNN achieves a better accuracy of 99.45% when compared with the existing methods: SafetyMed, HIDS, and BDSDT.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_66-Healthcare_Intrusion_Detection_using_Hybrid_Correlation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decoding the Narrative: Patterns and Dynamics in Monkeypox Scholarly Publications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150165</link>
        <id>10.14569/IJACSA.2024.0150165</id>
        <doi>10.14569/IJACSA.2024.0150165</doi>
        <lastModDate>2024-01-30T15:33:23.8770000+00:00</lastModDate>
        
        <creator>Muhammad Khahfi Zuhanda</creator>
        
        <creator>Desniarti</creator>
        
        <creator>Anil Hakim Syofra</creator>
        
        <creator>Andre Hasudungan Lubis</creator>
        
        <creator>Prana Ugiana Gio</creator>
        
        <creator>Habib Satria</creator>
        
        <creator>Rahmad Syah</creator>
        
        <subject>Bibliometrics; monkeypox virus; research trends; publication patterns; research impact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This study conducts a bibliometric analysis of monkeypox research to uncover trends, influential publishers, and key research topics. A dataset of Google Scholar-indexed articles was analyzed using bibliometric methods and tools such as Publish or Perish (PoP), VOSviewer, and Bibliometrix. The study reveals a growing research interest in monkeypox, with a notable increase in publications over the past decade. The Wiley Online Library emerged as the leading publisher, while highly cited articles covered various aspects of the disease. Cluster analysis identified key research topics, including clinical features, zoonotic transmission, and outbreak patterns. Network visualization and bigram analysis showcased relationships between authors, keywords, and publishers, with &quot;monkeypox&quot; being the most frequent keyword. By visualizing topic trends over time, the study identified emerging areas of investigation. The findings contribute to a comprehensive understanding of monkeypox research, aiding in identifying research gaps and guiding future studies. This research highlights the relevance of bibliometric analysis in health and information sciences. By uncovering trends, influential publishers, and key topics in monkeypox research, this study informs prevention, vaccination, and treatment strategies for mitigating the impact of monkeypox on public health.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_65-Decoding_the_Narrative_Patterns_and_Dynamics_in_Monkeypox.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Spatial Accessibility Measurement Algorithm for Sanya Tourist Attractions Based on Seasonal Factor Adjustment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150164</link>
        <id>10.14569/IJACSA.2024.0150164</id>
        <doi>10.14569/IJACSA.2024.0150164</doi>
        <lastModDate>2024-01-30T15:33:23.8630000+00:00</lastModDate>
        
        <creator>Xiaodong Mao</creator>
        
        <creator>Yan Zhuang</creator>
        
        <subject>Seasonal factors; adjustment analysis; Sanya Tourist Attractions; spatial accessibility measure; GIS technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Seasonal factors lead to changes in tourists&#39; demand for scenic spots across seasons, which affects the traffic network and road conditions and, in turn, the convenience and efficiency with which tourists reach scenic spots. Based on the adjustment and analysis of seasonal factors, this study puts forward an algorithm for measuring the spatial accessibility of Sanya tourist attractions. Principal component analysis is used to denoise the data of Sanya tourist attractions in different seasons, and independent component analysis is used to extract the features of the denoised data. On this basis, the spatial accessibility index of Sanya tourist attractions is calculated by combining the spatial information of the attractions with GIS technology, and a spatial accessibility measurement model is constructed to analyze and measure the spatial accessibility of the attractions. The experimental results show that the proposed spatial accessibility measurement method is effective: it improves the accuracy of accessibility measurement and shortens the measurement time. The method aims to help decision makers plan and optimize tourist routes and improve the efficiency and convenience of tourists arriving at their destinations.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_64-Research_on_Spatial_Accessibility_Measurement_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Diabetes Management: A Hybrid Adaptive Machine Learning Approach for Intelligent Patient Monitoring in e-Health Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150162</link>
        <id>10.14569/IJACSA.2024.0150162</id>
        <doi>10.14569/IJACSA.2024.0150162</doi>
        <lastModDate>2024-01-30T15:33:23.8470000+00:00</lastModDate>
        
        <creator>Sushil Dohare</creator>
        
        <creator>Deeba K</creator>
        
        <creator>Laxmi Pamulaparthy</creator>
        
        <creator>Shokhjakhon Abdufattokhov</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>Diabetes; machine learning; convolutional neural network; support vector machine; grey wolf optimization; e-health systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The goal of the present research is to better understand the need for accurate and ongoing monitoring in the complicated chronic metabolic disease known as diabetes. With the integration of an intelligent system utilising a hybrid adaptive machine learning classifier, the suggested method presents a novel approach to tracking individuals with diabetes. The system uses cutting-edge technologies like intelligent tracking and machine learning (ML) to improve the efficacy and accuracy of diabetes patient monitoring. Integrating smart gadgets, sensors, and telephones in key locations to gather full-body dimension data that is essential for diabetic health forms the architectural basis. Using a dataset that includes comprehensive data on the patient&#39;s characteristics and glucose levels, this investigation looks at sixty-two diabetic patients who were followed up daily for sixty-seven days. The study presents a hybrid architecture that combines a Convolutional Neural Network (CNN) with a Support Vector Machine (SVM) in order to optimise system performance. To train and optimise the hybrid model, Grey Wolf Optimisation (GWO) is utilised, drawing inspiration from collaborative optimisation in wolf packs. Thorough assessment, utilising standardised performance criteria including recall, F1-Score, accuracy, precision, and the Receiver Operating Characteristic (ROC) Curve, methodically verifies the suggested solution. The results reveal a remarkable 99.6% accuracy rate, which shows a considerable increase throughout training epochs. The CNN-SVM hybrid model achieves a classification accuracy advantage of around 4.15% over traditional techniques such as SVM, Decision Trees, and Sequential Minimal Optimisation. Python software is used to implement the suggested CNN-SVM technique. This research advances e-health systems by presenting a novel framework for effective diabetic patient monitoring that integrates machine learning, intelligent tracking, and optimisation techniques. The results point to a great deal of promise for the proposed method in the field of medicine, especially in the accurate diagnosis and follow-up of diabetic patients, which would provide opportunities for tailored and adaptable patient care.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_62-Enhancing_Diabetes_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection Model Development on Near-Infrared Spectroscopy Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150163</link>
        <id>10.14569/IJACSA.2024.0150163</id>
        <doi>10.14569/IJACSA.2024.0150163</doi>
        <lastModDate>2024-01-30T15:33:23.8470000+00:00</lastModDate>
        
        <creator>Ridwan Raafi’udin</creator>
        
        <creator>Y. Aris Purwanto</creator>
        
        <creator>Imas Sukaesih Sitanggang</creator>
        
        <creator>Dewi Apri Astuti</creator>
        
        <subject>Beef quality prediction; feature selection; machine learning; Random Forest Regressor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This study aims to develop a feature selection model on Near-Infrared Spectroscopy (NIRS) data. The object used is beef with six quality parameters: color, drip loss, pH, storage time, Total Plate Colony (TPC), and water moisture. The prediction model is a Random Forest Regressor (RFR) with default parameters. The feature selection model is carried out by mapping spectroscopic data into line form. The collection of lines is reduced to a single line by taking the mean value. Next, a line simplification method based on angle elimination is applied, starting from the smallest angle to the largest. Each iteration eliminates one angle, removing one column of data from the corresponding dataset. Then, the predicted values in the form of R2 are collected, and the highest value is considered the best feature selection formation. RFR prediction results with R2 values are as follows: color R2=0.597, drip loss R2=0.891, pH R2=0.797, storage time R2=0.889, TPC R2=0.721, and water moisture R2=0.540. Meanwhile, after applying the feature selection model, the R2 values for all parameters increased to color R2=0.877, drip loss R2=0.943, pH R2=0.904, storage time R2=0.917, TPC R2=0.951, and water moisture R2=0.893. Based on the increases in the R2 values of the six parameters, the average improvement in prediction accuracy is 17.49%. Thus, the feature selection method based on line simplification with angle elimination provides very good results.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_63-Feature_Selection_Model_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA-based Implementation of a Resource-Efficient UNET Model for Brain Tumour Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150161</link>
        <id>10.14569/IJACSA.2024.0150161</id>
        <doi>10.14569/IJACSA.2024.0150161</doi>
        <lastModDate>2024-01-30T15:33:23.8300000+00:00</lastModDate>
        
        <creator>Modise Kagiso Neiso</creator>
        
        <creator>Nicasio Maguu Muchuka</creator>
        
        <creator>Shadrack Maina Mambo</creator>
        
        <subject>UNET; field programmable gate array; high-level synthesis for machine learning; brain tumour segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In this study, an optimized UNET model is used for FPGA-based inference in the context of brain tumour segmentation using the BraTS dataset. The presented model features reduced depth and fewer filters, tailored to enhance efficiency on FPGA hardware. The implementation leverages High-Level Synthesis for Machine Learning (HLS4ML) to optimize and convert a Keras-based UNET model to Hardware Description Language (HDL) on the Kintex Ultrascale (xcku085-flva1517-3-e) FPGA. Resource strategy, First-In-First-Out (FIFO) depth optimization, and precision adjustment were employed to optimize FPGA resource utilization. The resource strategy is demonstrated to be effective, with resource utilization reaching a saturation point at a 1000-reuse factor. Following FIFO optimization, significant reductions are observed, including a 55 percent decrease in Block RAM (BRAM) usage, a 43 percent reduction in Flip-Flops (FF), and a 49 percent reduction in Look-Up Tables (LUT). In C/RTL co-simulation, the proposed FPGA-based UNET model achieves an Intersection over Union (IoU) score of 74 percent, demonstrating segmentation accuracy comparable to the original Keras model. These findings underscore the viability of the optimized UNET model for efficient brain tumour segmentation on FPGA platforms.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_61-FPGA_based_Implementation_of_a_Resource_Efficient_UNET_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual-Branch Grouping Multiscale Residual Embedding U-Net and Cross-Attention Fusion Networks for Hyperspectral Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150160</link>
        <id>10.14569/IJACSA.2024.0150160</id>
        <doi>10.14569/IJACSA.2024.0150160</doi>
        <lastModDate>2024-01-30T15:33:23.8170000+00:00</lastModDate>
        
        <creator>Ning Ouyang</creator>
        
        <creator>Chenyu Huang</creator>
        
        <creator>Leping Lin</creator>
        
        <subject>U-Net; multiscale; cross-attention; hyperspectral image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Due to the high cost and time-consuming nature of acquiring labelled samples of hyperspectral data, the classification of hyperspectral images with a small number of training samples has been an urgent problem. In recent years, U-Net has shown that high-precision models can be trained with a small amount of data, demonstrating good performance in small-sample settings. To this end, this paper proposes a dual-branch grouping multiscale residual embedding U-Net and cross-attention fusion network (DGMRU_CAF) for hyperspectral image classification. The network contains two branches, spatial GMRU and spectral GMRU, which reduce the interference between the two types of features, spatial and spectral. Each branch introduces U-Net and designs a grouped multiscale residual block (GMR), which is used in spatial GMRUs to compensate for the loss of feature information caused by down-sampling of spatial features, and in spectral GMRUs to address redundancy in the spectral dimensions. To achieve effective fusion of spatial and spectral features between the two branches, the spatial-spectral cross-attention fusion (SSCAF) module is designed to enable the interactive fusion of spatial-spectral features. Experimental results on the WHU-Hi-HanChuan and Pavia Center datasets show the superiority of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_60-Dual_Branch_Grouping_Multiscale_Residual_Embedding_U_Net.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Software Project Development: A CNN-LSTM Hybrid Model for Effective Defect Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150158</link>
        <id>10.14569/IJACSA.2024.0150158</id>
        <doi>10.14569/IJACSA.2024.0150158</doi>
        <lastModDate>2024-01-30T15:33:23.8000000+00:00</lastModDate>
        
        <creator>Selvin Jose G</creator>
        
        <creator>J Charles</creator>
        
        <subject>Data driven software development; proactive defect identification; software quality; predictive analytics; software defect prediction; artificial intelligence; long short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Within the domain of software development, the practice of software defect prediction (SDP) holds a central and critical position, significantly contributing to the efficiency and ultimate success of projects. It embodies a proactive approach that harnesses data-driven techniques and analytics to preemptively identify potential defects or vulnerabilities within software systems, thereby enhancing overall quality and reliability while significantly impacting project timelines and resource allocation. The efficiency of software development projects hinges on their ability to adhere to deadlines, budget constraints, and deliver high-quality products. SDP contributes to these objectives through various means. This paper introduces a novel SDP model that harnesses the combined capabilities of Convolutional Neural Networks (CNNs) and Long Short Term Memory (LSTMs) unit. CNNs excel at extracting features from structured data, enabling them to discern patterns and dependencies within code repositories and change histories. LSTMs, conversely, excel in handling sequential data, which is pivotal for capturing the temporal aspects of software development and tracking the evolution of defects over time. The outcomes of the proposed CNN-LSTM hybrid model showcase its superior predictive performance. Simulation results affirm the substantial potential of this model to bolster the efficiency and reliability of software development processes. As technology advances and data-driven methodologies become increasingly prevalent in the software industry, the integration of such hybrid models presents a promising avenue for continually elevating software quality and ensuring the triumph of software projects. In summary, the utilization of this innovative SDP model offers a transformative approach to efficient software development, positioning it as a vital tool for project success and quality assurance.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_58-Revolutionizing_Software_Project_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>US Road Sign Detection and Visibility Estimation using Artificial Intelligence Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150159</link>
        <id>10.14569/IJACSA.2024.0150159</id>
        <doi>10.14569/IJACSA.2024.0150159</doi>
        <lastModDate>2024-01-30T15:33:23.8000000+00:00</lastModDate>
        
        <creator>Jafar AbuKhait</creator>
        
        <subject>Road sign detection; YOLOv8; driver assistance system; fuzzy logic; detectability; visibility estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This paper presents a fully-automated system for detecting road signs in the United States and assessing their visibility during daytime from the perspective of the driver, using images captured by an in-vehicle camera. The system deploys YOLOv8 to build a multi-label detection model and then calculates various readability and detectability factors, including the simplicity of the surroundings, potential obstructions, and the angle at which the road sign is positioned, to determine the overall visibility of the sign. This proposed system can be integrated into Driver Assistance Systems (DAS) to manage the information delivered to drivers, as an excess of information could potentially distract them. Road signs are categorized based on their visibility levels, allowing Driver Assistance Systems to caution drivers about signs that may have lower visibility but are of significant importance. The system comprises four main stages: 1) identifying road signs using YOLOv8; 2) segmenting the surrounding areas; 3) measuring visibility parameters; and 4) determining visibility levels through a fuzzy logic inference system. This paper introduces a visibility estimation system for road signs specifically tailored to the United States. Experimental results showcase the system’s effectiveness. The visibility levels generated by the proposed system were subjectively compared to decisions made by human experts, revealing a substantial agreement between the two approaches.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_59-US_Road_Sign_Detection_and_Visibility_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Machine Learning Classification Algorithm Based on Ensemble Learning for Detection of Vegetable Crops Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150157</link>
        <id>10.14569/IJACSA.2024.0150157</id>
        <doi>10.14569/IJACSA.2024.0150157</doi>
        <lastModDate>2024-01-30T15:33:23.7830000+00:00</lastModDate>
        
        <creator>Pradeep Jha</creator>
        
        <creator>Deepak Dembla</creator>
        
        <creator>Widhi Dubey</creator>
        
        <subject>DNN; transfer learning; crop; ensemble model; deep stacking and stacking approach; image pre-processing; tomato; bell pepper; potato; disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In India, plant diseases pose a significant threat to food security, requiring precise detection and management protocols to minimize potential damage. This research introduces an innovative ensemble machine learning model for precise disease detection in tomato, potato, and bell pepper crops. Utilizing transfer learning, pre-trained models such as MobileNet and Inception are fine-tuned on a dataset of over 10,403 images of diseased and healthy plant leaves. The models are combined into a diverse ensemble, enhancing the precision and robustness of disease detection. The proposed ensemble models achieve an impressive accuracy rate of 98.95%, demonstrating their superiority over individual models in reducing misclassification and false positives. This advancement in plant disease detection provides valuable support to farmers and agricultural experts by enabling early disease identification and intervention.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_57-Implementation_of_Machine_Learning_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolving Adoption of eLearning Tools and Developing Online Courses: A Practical Case Study from Al-Baha University, Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150156</link>
        <id>10.14569/IJACSA.2024.0150156</id>
        <doi>10.14569/IJACSA.2024.0150156</doi>
        <lastModDate>2024-01-30T15:33:23.7700000+00:00</lastModDate>
        
        <creator>Hassan Alghamdi</creator>
        
        <creator>Naif Alzahrani</creator>
        
        <subject>eLearning; ICT competencies; Higher Education Institutions (HEIs); Learning Management System (LMS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>eLearning or online learning has gained acceptance worldwide, particularly after the Covid-19 pandemic. Although the pandemic forced the shift towards this learning mode, there is still a continuous need to improve instructors&#39; cognitive and practical competencies to effectively design and deliver online courses. In this paper, a practical case study from Al-Baha University, a Higher Education Institution (HEI) in Saudi Arabia, is presented, showing the development stages of eLearning at the university and how the effective utilization of eLearning tools through a structured methodology, in a short time and with minimum resources, helped to improve the teaching and learning experiences for both instructors and students at the university before the pandemic. Various standards and research techniques have been adopted to develop and assess the methodology and the viability of its implementation in other higher education institutions. The findings show the methodology’s effectiveness and how it helped Al-Baha University smoothly adapt to the online shift at the onset of the pandemic. The methodology was presented to the committee of eLearning and distance education deans in Saudi universities in March 2023, where it gained acceptance and a recommendation for application in other HEIs in Saudi Arabia. It also received the Anthology Middle East award for community engagement in November 2023.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_56-Evolving_Adoption_of_eLearning_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Caption Generation using Deep Learning For Video Summarization Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150155</link>
        <id>10.14569/IJACSA.2024.0150155</id>
        <doi>10.14569/IJACSA.2024.0150155</doi>
        <lastModDate>2024-01-30T15:33:23.7530000+00:00</lastModDate>
        
        <creator>Mohammed Inayathulla</creator>
        
        <creator>Karthikeyan C</creator>
        
        <subject>Video summarization; deep learning; image caption synthesis; densenet201; GloVe embeddings; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the area of video summarization applications, automatic image caption synthesis using deep learning is a promising approach. This methodology utilizes the capabilities of neural networks to autonomously produce detailed textual descriptions for significant frames or instances in a video. Through the examination of visual elements, deep learning models possess the capability to discern and classify objects, scenarios, and actions, hence enabling the generation of coherent and useful captions. This paper presents a novel methodology for generating image captions in the context of video summarization applications. The DenseNet201 architecture is used to extract image features, enabling the effective extraction of comprehensive visual information from keyframes in the videos. In text processing, GloVe embeddings, which are pre-trained word vectors that capture semantic associations between words, are employed to efficiently represent textual information. The utilization of these embeddings establishes a fundamental basis for comprehending the contextual variations and semantic significance of words contained within the captions. LSTM models are subsequently utilized to process the GloVe embeddings, facilitating the development of captions that maintain coherence, context, and readability. The integration of GloVe embeddings with LSTM models in this study facilitates the effective fusion of visual and textual data, leading to the generation of captions that are both informative and contextually relevant for video summarization. The proposed model significantly enhances performance by combining the strengths of convolutional neural networks for image analysis and recurrent neural networks for natural language generation. The experimental results demonstrate the effectiveness of the proposed approach in generating informative captions for video summarization, offering a valuable tool for content understanding, retrieval, and recommendation.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_55-Image_Caption_Generation_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Association Model of Temperature and Cattle Weight Influencing the Weight Loss of Cattle Due to Stress During Transportation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150154</link>
        <id>10.14569/IJACSA.2024.0150154</id>
        <doi>10.14569/IJACSA.2024.0150154</doi>
        <lastModDate>2024-01-30T15:33:23.7530000+00:00</lastModDate>
        
        <creator>Jajam Haerul Jaman</creator>
        
        <creator>Agus Buono</creator>
        
        <creator>Dewi Apri Astuti</creator>
        
        <creator>Sony Hartono Wijaya</creator>
        
        <creator>Burhanuddin</creator>
        
        
        <subject>Association rule; animal welfare; cattle management; animal product quality; modern agriculture; recommendations; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This study aimed to enhance animal welfare in the context of modern agriculture. The Association Rule analysis method, using the FP-Growth and Apriori algorithms, was employed to identify patterns and factors influencing animal welfare, particularly live cattle weight loss (shrink) due to stress during transportation. Data obtained from several farms and clinical tests were used to develop insights into the relationship between farming practices, data science, and animal welfare. The research stages included data preprocessing, initial analysis, modeling, evaluation and interpretation of results, recommendations and implications, and conclusions. The results indicate that the FP-Growth and Apriori algorithms uncovered hidden patterns in the data, yielding four association rules from FP-Growth and five from Apriori. These rules aid in designing recommendations to enhance animal welfare, improve agricultural efficiency, and support the sustainability of the cattle sector. Our findings have significant implications for animal welfare and sustainable farm management.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_54-Association_Model_of_Temperature_and_Cattle_Weight_Influencing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Classification using Combined Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150153</link>
        <id>10.14569/IJACSA.2024.0150153</id>
        <doi>10.14569/IJACSA.2024.0150153</doi>
        <lastModDate>2024-01-30T15:33:23.7370000+00:00</lastModDate>
        
        <creator>Mohd Azahari Mohd Yusof</creator>
        
        <creator>Noor Zuraidin Mohd Safar</creator>
        
        <creator>Zubaile Abdullah</creator>
        
        <creator>Firkhan Ali Hamid Ali</creator>
        
        <creator>Khairul Amin Mohamad Sukri</creator>
        
        <creator>Muhamad Hanif Jofri</creator>
        
        <creator>Juliana Mohamed</creator>
        
        <creator>Abdul Halim Omar</creator>
        
        <creator>Ida Aryanie Bahrudin</creator>
        
        <creator>Mohd Hatta Mohamed Ali @ Md Hani</creator>
        
        <subject>DDoS; machine learning; accuracy; false positive rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Nowadays, attackers frequently seek to disrupt network systems. An attacker can generate various types of DDoS attacks simultaneously, including the Smurf attack, ICMP flood, UDP flood, and TCP SYN flood. This DDoS issue motivated the design of a classification technique against DDoS attacks entering a computer network environment. The technique, called the Packet Threshold Algorithm (PTA), is combined with several machine learning methods to classify incoming packets that have been captured and recorded. The combined techniques can differentiate between normal packets and DDoS attacks. All techniques in this research achieved high detection accuracy while mitigating the issue of a high false positive rate. The four techniques examined are PTA-SVM, PTA-NB, PTA-LR, and PTA-KNN. Based on the detection accuracy and false positive rate results for all techniques, PTA-KNN proves the most effective at detecting whether incoming packets are DDoS attacks or normal packets.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_53-DDoS_Classification_using_Combined_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Brain Tumor MRI Image Classification Prediction based on Fine-tuned MobileNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150152</link>
        <id>10.14569/IJACSA.2024.0150152</id>
        <doi>10.14569/IJACSA.2024.0150152</doi>
        <lastModDate>2024-01-30T15:33:23.7230000+00:00</lastModDate>
        
        <creator>Quy Thanh Lu</creator>
        
        <creator>Triet Minh Nguyen</creator>
        
        <creator>Huan Le Lam</creator>
        
        <subject>Brain tumor; fine-tuning; transfer learning; Magnetic Resonance Imaging (MRI); MobileNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Brain tumors are a prevalent issue in contemporary society as they impact human health. The location of the tumor in the brain determines the variety of symptoms that may manifest. Frequent symptoms include cephalalgia, convulsions, visual impairments, nausea, emesis, asthenia, paresthesia, dysphasia, personality alterations, and amnesia. The prognosis for brain cancer differs considerably depending on the cancer type. Nevertheless, brain tumors are amenable to treatment with surgical intervention, chemotherapy, and radiotherapy if the diagnosis is timely. Furthermore, artificial intelligence and machine learning can assist in the detection of brain tumors, as they have significant implications for the analysis of Magnetic Resonance Imaging (MRI). To accomplish this objective, automated measurement instruments were proposed based on the processing of MRI. In this study, we employed the latest developments in deep transfer learning and fine-tuning to identify tumors without many complex steps. We gathered data from authentic MRI of 3264 subjects (i.e., 926 glioma tumors, 937 meningioma tumors, 901 pituitary tumors, and 500 normal). With the MobileNet model from the Keras library, the highest validation accuracy, test accuracy, and F1 score in four-class classification were 97.24%, 97.86%, and 97.85%, respectively. Concerning two-class classification, high accuracy values were obtained for most of the models (i.e., ~100%). These outcomes and other performance indicators demonstrate a strong capability to diagnose brain tumors from conventional MRI. The current research developed a supportive machine learning model that can aid doctors in making accurate diagnoses with less time and fewer mistakes.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_52-Improving_Brain_Tumor_MRI_Image_Classification_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explore Innovative Depth Vision Models with Domain Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150151</link>
        <id>10.14569/IJACSA.2024.0150151</id>
        <doi>10.14569/IJACSA.2024.0150151</doi>
        <lastModDate>2024-01-30T15:33:23.7070000+00:00</lastModDate>
        
        <creator>Wenchao Xu</creator>
        
        <creator>Yangxu Wang</creator>
        
        <subject>Deep learning; neural network; domain adaptation; lightweight; regularization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In recent years, deep learning has garnered widespread attention for graph-structured data. Nevertheless, due to the high cost of collecting labeled graph data, domain adaptation becomes particularly crucial in supervised graph learning tasks. The performance of existing methods may degrade when there are disparities between training and testing data, especially in challenging scenarios such as remote sensing image analysis. In this study, an approach to achieving high-quality domain adaptation without explicit adaptation was explored. The proposed Efficient Lightweight Aggregation Network (ELANet) model addresses domain adaptation challenges in graph-structured data by employing an efficient lightweight architecture and regularization techniques. Through experiments on real datasets, ELANet demonstrated robust domain adaptability and generality, performing exceptionally well in cross-domain settings of remote sensing images. Furthermore, the research indicates that regularization techniques play a crucial role in mitigating the model&#39;s sensitivity to domain differences, especially when incorporating a module that adjusts feature weights in response to redefined features. Moreover, the study finds that under the same training and validation set configurations, the model achieves better training outcomes with appropriate data transformation strategies. The achievements of this research extend beyond the agricultural domain, showing promising results in various object detection scenarios and contributing to the advancement of domain adaptation research.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_51-Explore_Innovative_Depth_Vision_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Topology Approach for Crude Oil Price Forecasting of Particle Swarm Optimization and Long Short-Term Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150150</link>
        <id>10.14569/IJACSA.2024.0150150</id>
        <doi>10.14569/IJACSA.2024.0150150</doi>
        <lastModDate>2024-01-30T15:33:23.7070000+00:00</lastModDate>
        
        <creator>Marina Yusoff</creator>
        
        <creator>Darul Ehsan</creator>
        
        <creator>Muhammad Yusof Sharif</creator>
        
        <creator>Mohamad Taufik Mohd Sallehud-din</creator>
        
        <subject>Crude oil; deep learning; Particle Swarm Optimization; Long Short-Term Memory; forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Forecasting crude oil prices holds significant importance in finance, energy, and economics, given its extensive impact on worldwide markets and socio-economic equilibrium. Long Short-Term Memory (LSTM) neural networks have exhibited noteworthy achievements in time series forecasting, specifically in predicting crude oil prices. Nevertheless, LSTM models frequently depend on the manual adjustment of hyperparameters, a task that can be laborious and demanding. This study presents a novel methodology incorporating Particle Swarm Optimization (PSO) into LSTM networks to optimize the network architecture and minimize the error. Using historical crude oil price data, the study autonomously explores and identifies optimal hyperparameters, embedding the star and ring topologies of PSO to address local and global search capabilities. The findings demonstrate that LSTM+starPSO is superior to LSTM+ringPSO, previous hybrid LSTM-PSO approaches, conventional LSTM networks, and statistical time series methods in predictive accuracy. The LSTM+starPSO model offers a better RMSE of about +0.16% and +22.82% for the WTI and BRENT datasets, respectively. The results indicate that the LSTM model, when enhanced with PSO, better captures the patterns and inherent dynamics of crude oil price changes. The proposed model offers a dual benefit: it alleviates the need for manual hyperparameter tuning and serves as a valuable resource for stakeholders in the energy and financial industries interested in dependable insights into crude oil price fluctuations.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_50-Topology_Approach_for_Crude_Oil_Price_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SpanBERT-based Multilayer Fusion Model for Extractive Reading Comprehension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150149</link>
        <id>10.14569/IJACSA.2024.0150149</id>
        <doi>10.14569/IJACSA.2024.0150149</doi>
        <lastModDate>2024-01-30T15:33:23.6900000+00:00</lastModDate>
        
        <creator>Pu Zhang</creator>
        
        <creator>Lei He</creator>
        
        <creator>Deng Xi</creator>
        
        <subject>Machine reading comprehension; pre-trained model; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Extractive reading comprehension is a prominent research topic in machine reading comprehension, which aims to predict the correct answer from the given context. Pre-trained models have recently shown considerable effectiveness in this area. However, during the training process, most existing models face the problem of semantic information loss. To address this problem, this paper proposes a model based on the SpanBERT pre-trained model to predict answers using a multi-layer fusion method. Both the outputs of the intermediate layer and the prediction layer of the transformer are fused to perform answer prediction, thereby improving the model&#39;s performance. The proposed model achieves F1 scores of 92.54%, 84.02%, 80.86%, 71.32%, and EM scores of 86.27%, 81.25%, 69.10%, 56.42% on the SQuAD1.1, SQuAD2.0, Natural Questions and NewsQA datasets, respectively. Experimental results show that our model outperforms a number of existing models and has excellent performance.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_49-SpanBERT_based_Multilayer_Fusion_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Approach to Question Classification: Integrating Electra Transformer, GloVe, and LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150148</link>
        <id>10.14569/IJACSA.2024.0150148</id>
        <doi>10.14569/IJACSA.2024.0150148</doi>
        <lastModDate>2024-01-30T15:33:23.6730000+00:00</lastModDate>
        
        <creator>Sanad Aburass</creator>
        
        <creator>Osama Dorgham</creator>
        
        <creator>Maha Abu Rumman</creator>
        
        <subject>Ensemble learning; long short term memory; transformer models; Electra; GloVe; TREC dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Natural Language Processing (NLP) has emerged as a critical technology for understanding and generating human language, with applications including machine translation, sentiment analysis, and, most importantly, question classification. As a subfield of NLP, question classification focuses on determining the type of information being sought, an important step for downstream applications such as question answering systems. This study introduces an ensemble approach to question classification that combines the strengths of the Electra, GloVe, and LSTM models. Evaluated thoroughly on the well-known TREC dataset, the model shows that combining these different technologies can produce better outcomes. Electra uses transformers to understand complex language; GloVe provides global vector representations for word-level meaning; and LSTM models long-term relationships through sequence learning. By combining these components, the ensemble model offers a strong and effective solution to the hard problem of question classification, achieving 80% accuracy on the test dataset when compared against well-known models such as BERT, RoBERTa, and DistilBERT.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_48-An_Ensemble_Approach_to_Question_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Double Branch Lightweight Finger Vein Recognition based on Diffusion Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150147</link>
        <id>10.14569/IJACSA.2024.0150147</id>
        <doi>10.14569/IJACSA.2024.0150147</doi>
        <lastModDate>2024-01-30T15:33:23.6600000+00:00</lastModDate>
        
        <creator>Zhiyong Tao</creator>
        
        <creator>Yajing Gao</creator>
        
        <creator>Sen Lin</creator>
        
        <subject>Finger vein recognition; convolution neural network; diffusion model; multi-head self-attention mechanism; lightweight network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Aiming at the problems of high complexity, insufficient global information extraction, and easy overfitting in finger vein recognition, a finger vein recognition method based on a diffusion model is proposed. Firstly, finger vein images are generated from the dataset by the diffusion model to prevent overfitting; secondly, a streamlined convolutional neural network is combined with an improved multi-head self-attention mechanism to form a two-branch lightweight backbone network, which effectively reduces the complexity of the model; finally, to maximally extract the image&#39;s overall information, convolution is used to merge the extracted local and global features, and the recognition results are output. The algorithm reaches a maximum recognition rate of 99.78% on multiple datasets, while the number of parameters is only 2.15M, further reducing the complexity of the algorithm while maintaining high accuracy compared to other novel finger vein recognition algorithms and lightweight convolutional neural network models. As the first attempt in this field, it provides new ideas for future research.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_47-Double_Branch_Lightweight_Finger_Vein_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low-Light Image Enhancement using Retinex-based Network with Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150146</link>
        <id>10.14569/IJACSA.2024.0150146</id>
        <doi>10.14569/IJACSA.2024.0150146</doi>
        <lastModDate>2024-01-30T15:33:23.6600000+00:00</lastModDate>
        
        <creator>Shaojin Ma</creator>
        
        <creator>Weiguo Pan</creator>
        
        <creator>Nuoya Li</creator>
        
        <creator>Songjie Du</creator>
        
        <creator>Hongzhe Liu</creator>
        
        <creator>Bingxin Xu</creator>
        
        <creator>Cheng Xu</creator>
        
        <creator>Xuewei Li</creator>
        
        <subject>Low-light image enhancement; decomposition network; FEM attention mechanism; denoising network; detail enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Images captured in low-light conditions typically exhibit significant degradation such as low contrast, color shift, noise, and artifacts, which diminish the accuracy of recognition tasks in computer vision. To address these challenges, this paper proposes a low-light image enhancement method based on Retinex. Specifically, a decomposition network is designed to acquire high-quality illumination and reflection maps, complemented by a comprehensive loss function. A denoising network is proposed to mitigate the noise in low-light images with the assistance of the images’ spatial information. Notably, an extended convolution layer replaces the maximum pooling layer, and the Basic-Residual-Module (BRM) from the decomposition network has been integrated into the denoising network. To address shadow blocks and halo artifacts, an enhancement module is integrated into the skip connections of U-Net. This enhancement module leverages the Feature-Extraction-Module (FEM) attention mechanism, which improves the network’s capacity to learn meaningful features by integrating image features across both the channel dimension and a spatial attention mechanism, receiving more detailed illumination information about the object while suppressing useless information. In experiments conducted on the public datasets LOL-V1 and LOL-V2, our method demonstrates noteworthy performance improvements, achieving averages of 23.15, 0.88, 0.419, and 0.0040 on four evaluation metrics: PSNR, SSIM, NIQE, and GMSD. These results are superior to mainstream methods.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_46-Low_Light_Image_Enhancement_using_Retinex_based_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Explainable and Optimized Network Intrusion Detection Model using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150145</link>
        <id>10.14569/IJACSA.2024.0150145</id>
        <doi>10.14569/IJACSA.2024.0150145</doi>
        <lastModDate>2024-01-30T15:33:23.6430000+00:00</lastModDate>
        
        <creator>Haripriya C</creator>
        
        <creator>Prabhudev Jagadeesh M. P</creator>
        
        <subject>Network Intrusion Detection; deep learning; hyper parameter optimization; hyperband; CSE CIC IDS 2018 dataset; XAI methods; LIME; SHAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the current age, the internet and its usage have become a core part of human existence, and with it we have developed technologies that seamlessly integrate with various phases of our day-to-day activities. The main challenge with most modern-day infrastructure is that security requirements are often an afterthought. Despite growing awareness, current solutions are still unable to completely protect computer networks and internet applications from the ever-evolving threat landscape. In recent years, deep learning algorithms have proved very efficient in detecting network intrusions. However, it is exhausting, time-consuming, and computationally expensive to manually adjust the hyperparameters of deep learning models. It is also important to develop models that not only make accurate predictions but also help in understanding how the model makes those predictions; such model explainability helps increase users’ trust. The current research gap in the domain of Network Intrusion Detection is the absence of a holistic framework that incorporates both optimization and explainable methods. In this research article, a hybrid approach to hyperparameter optimization using hyperband is proposed. An overall accuracy of 98.58% is achieved by considering all the attack types of the CSE CIC IDS 2018 dataset. The proposed hybrid framework enhances the performance of Network Intrusion Detection by choosing an optimized set of parameters and leverages explainable AI (XAI) methods such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to understand model predictions.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_45-An_Explainable_and_Optimized_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Time Series to Images: Revolutionizing Stock Market Predictions with Convolutional Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150144</link>
        <id>10.14569/IJACSA.2024.0150144</id>
        <doi>10.14569/IJACSA.2024.0150144</doi>
        <lastModDate>2024-01-30T15:33:23.6270000+00:00</lastModDate>
        
        <creator>TATANE Khalid</creator>
        
        <creator>SAHIB Mohamed Rida</creator>
        
        <creator>ZAKI Taher</creator>
        
        <subject>Technical indicators; convolutional neural networks; stock trend forecasting; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Predicting the trend of stock prices is a hard task due to the numerous factors and prerequisites that can affect price movement in a specific direction. Various strategies have been proposed to extract relevant features of stock data, which is crucial in this domain. Owing to its powerful data processing capabilities, deep learning has demonstrated remarkable results in the financial field among modern tools. This research proposes a convolutional deep neural network model that utilizes a 2D-CNN to process and classify images. The images are created by transforming the top technical indicators of a financial time series, each calculated over 21 different day periods, into images of specific sizes. The images are labeled Sell, Hold, or Buy based on the original trading data. Compared to the Long Short-Term Memory model and the one-dimensional Convolutional Neural Network, the proposed model exhibits the best performance.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_44-From_Time_Series_to_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students&#39; Perception of ChatGPT Usage in Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150143</link>
        <id>10.14569/IJACSA.2024.0150143</id>
        <doi>10.14569/IJACSA.2024.0150143</doi>
        <lastModDate>2024-01-30T15:33:23.6130000+00:00</lastModDate>
        
        <creator>Irena Valova</creator>
        
        <creator>Tsvetelina Mladenova</creator>
        
        <creator>Gabriel Kanev</creator>
        
        <subject>Artificial intelligence in education; assessment; ChatGPT; Generative Pretrained Transformer 3; GPT-3; higher education; learning; teaching; Natural Language Processing (NLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This research article delves into the impact of ChatGPT on education, focusing on the perceptions and usage patterns among high school and university students. The article begins by introducing ChatGPT, emphasizing its rapid user adoption and widespread interest. It explores the application of ChatGPT in various fields, including healthcare, agriculture, and education. A comprehensive survey involving 102 students, both high school and university, is detailed, covering aspects like familiarity with ChatGPT, reasons for usage, self-assessment of its effectiveness, and attitudes toward informing teachers about its use. The findings reveal varied perspectives on the benefits and challenges of incorporating ChatGPT in the learning process. The article concludes by emphasizing the need for careful consideration and integration of AI technologies in education, highlighting the risks of uncritical reliance on such tools and advocating for a balanced approach to foster students&#39; critical thinking and intellectual growth.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_43-Students_Perception_of_ChatGPT_Usage_in_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Algorithm with YOLOv5s for Obstacle Detection of Rail Transit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150142</link>
        <id>10.14569/IJACSA.2024.0150142</id>
        <doi>10.14569/IJACSA.2024.0150142</doi>
        <lastModDate>2024-01-30T15:33:23.6130000+00:00</lastModDate>
        
        <creator>Shuangyuan Li</creator>
        
        <creator>Zhengwei Wang</creator>
        
        <creator>Yanchang Lv</creator>
        
        <creator>Xiangyang Liu</creator>
        
        <subject>Railroad track intrusion detection; CBAM (Convolutional Block Attention Module) attention; activation function; decoupling probe; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>As infrastructure for urban development, ensuring the safe operation of urban rail transit is particularly important. Foreign object intrusion into urban rail transit areas is one of the main causes of train accidents. To tackle the obstacle detection challenge in rail transit, this paper introduces the CS-YOLO urban rail foreign object intrusion detection model. It improves the YOLOv5s algorithm by incorporating an enhanced convolutional attention CBAM module to replace the C3 module of the original YOLOv5s backbone network. In addition, the KM-Decoupled Head is proposed to decouple the detection head, and SIoU is applied as the loss function. Tested on the WZ dataset, the average accuracy increased from 0.844 to 0.893. The research method in this paper provides a reference for urban rail transit safety detection.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_42-Improved_Algorithm_with_YOLOv5s_for_Obstacle_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Object Detection Revolution: Deep Learning with Attention, Semantic Understanding, and Instance Segmentation for Real-World Precision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150141</link>
        <id>10.14569/IJACSA.2024.0150141</id>
        <doi>10.14569/IJACSA.2024.0150141</doi>
        <lastModDate>2024-01-30T15:33:23.5970000+00:00</lastModDate>
        
        <creator>Karimunnisa Shaik</creator>
        
        <creator>Dyuti Banerjee</creator>
        
        <creator>R. Sabin Begum</creator>
        
        <creator>Narne Srikanth</creator>
        
        <creator>Jonnadula Narasimharao</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>Semantic segmentation; instance segmentation; convolutional neural network; bidirectional long short-term memory; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Semantic and instance segmentation are critical goals that span a wide range of applications, from autonomous driving to object recognition in different fields. Existing approaches have limitations, especially for the difficult task of identifying and detecting minute objects in intricate real-world situations. This work presents a novel method that uses a hybrid deep learning architecture, implemented in Python, to seamlessly combine semantic and instance segmentation. The suggested approach addresses the pressing need for accurate localization and fine-grained object detection in challenging real-world settings. By combining the strengths of a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory Network (BiLSTM), the hybrid model effectively achieves semantic segmentation using sequential input and spatial information. A parallel attention mechanism is seamlessly incorporated into the segmentation process to further improve the model&#39;s capabilities and enable the recognition of important object attributes. This study highlights the difficulties caused by changing environmental factors, emphasizing the need for precise object localization and understanding in addition to the complexities of fine-grained object detection. The suggested approach achieves an outstanding accuracy rate of 99.66%, outperforming existing approaches by 25.22%. This significant increase highlights the benefits of the hybrid design over individual techniques and shows how effective it is at resolving issues that arise in dynamic real-world circumstances. The research highlights the importance of attention mechanisms in deep learning and demonstrates how they can improve the specificity and accuracy of object detection and localization in intricate real-world scenarios. The improved performance of the suggested methodology is compared with well-known techniques such as RCNN, CNN, and DNN, reaffirming its status as a reliable means of advancing object localization and recognition in difficult situations.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_41-Dynamic_Object_Detection_Revolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Vision Transformers and CNNs for Enhanced Transmission Line Segmentation in Aerial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150140</link>
        <id>10.14569/IJACSA.2024.0150140</id>
        <doi>10.14569/IJACSA.2024.0150140</doi>
        <lastModDate>2024-01-30T15:33:23.5800000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <creator>Tuan Anh Nguyen</creator>
        
        <subject>Vision transformers; convolutional neural networks; transmission lines segmentation; hybrid model; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This paper presents a novel architecture for the segmentation of transmission lines in aerial images, utilizing a hybrid model that combines the strengths of Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs). The proposed method first employs a Swin Transformer backbone (Swin-B) that processes the input image through a hierarchical structure, effectively capturing multi-scale contextual information. Following this, an upsampling strategy is employed, wherein the features extracted by the transformer are refined through convolutional layers, ensuring that the resolution is maintained, and spatial details are recovered. To integrate multi-level feature maps, a feature fusion module with a squeeze-and-excitation (SE) layer is introduced, which consolidates the benefits of both high-level and low-level feature extractions. The SE layer plays a pivotal role in augmenting the feature channels, focusing the model&#39;s attention on the most informative features for transmission line detection. By leveraging the global receptive field of ViTs for comprehensive context and the local precision of CNNs for fine-grained detail, our method aims to set a new benchmark for transmission line segmentation in aerial imagery. The effectiveness of our approach is demonstrated through extensive experiments and comparisons with existing state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_40-Hybrid_Vision_Transformers_and_CNNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revolutionizing Magnetic Resonance Imaging Image Reconstruction: A Unified Approach Integrating Deep Residual Networks and Generative Adversarial Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150139</link>
        <id>10.14569/IJACSA.2024.0150139</id>
        <doi>10.14569/IJACSA.2024.0150139</doi>
        <lastModDate>2024-01-30T15:33:23.5670000+00:00</lastModDate>
        
        <creator>M Nagalakshmi</creator>
        
        <creator>M. Balamurugan</creator>
        
        <creator>B. Hemantha Kumar</creator>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <creator>Abdul Rahman Mohammed ALAnsari</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Magnetic Resonance Imaging (MRI); deep learning; generative adversarial network; deep residual network; ResNet50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Advancements in data capture techniques in the field of Magnetic Resonance Imaging (MRI) offer faster retrieval of critical medical imagery. Even with these advances, reconstruction techniques are generally slow and visually poor, making it difficult to incorporate compressed sensing. To address these issues, this work proposes a novel hybrid GAN-DRN architecture for MRI reconstruction. By combining Generative Adversarial Networks (GANs) with Deep Residual Networks (DRNs), this approach greatly improves texture, boundary characteristics, and image fidelity over previous methods. One important innovation is the GAN&#39;s all-encompassing learning mechanism, which modifies the generator&#39;s behaviour to protect the network against corrupted input, while the discriminator simultaneously and thoroughly assesses prediction validity. With this technique, intrinsic features of the original image are skillfully extracted and managed, producing excellent results that adhere to predetermined quality criteria. The hybrid GAN-DRN technique&#39;s effectiveness is demonstrated by experimental findings, implemented in Python, which achieve a 0.99 SSIM (Structural Similarity Index) and a 50.3 peak signal-to-noise ratio. This achievement is a significant advancement in MRI reconstruction and has the potential to transform the medical imaging industry. In the future, efforts will be directed towards improving real-time MRI reconstruction, extending to multi-modal MRI fusion, confirming clinical effectiveness via trials, and investigating robustness, intuitive interfaces, transfer learning, and explainability techniques to improve clinical interpretation and adoption.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_39-Revolutionizing_Magnetic_Resonance_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Quality-of-Service in Software-Defined Networks Through the Integration of Firefly-Fruit Fly Optimization and Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150138</link>
        <id>10.14569/IJACSA.2024.0150138</id>
        <doi>10.14569/IJACSA.2024.0150138</doi>
        <lastModDate>2024-01-30T15:33:23.5670000+00:00</lastModDate>
        
        <creator>Mahmoud Aboughaly</creator>
        
        <creator>Shaikh Abdul Hannan</creator>
        
        <subject>Software Defined Network (SDN); Quality of Service (QoS); firefly-fruit fly optimization; Deep Reinforcement Learning (DRL); adaptive QoS enhancement; network optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The Software Defined Networking (SDN) paradigm has emerged as a critical tool for meeting the dynamic demands of network management with respect to efficiency and flexibility. Quality of Service (QoS) optimization, which encompasses essential features including bandwidth allocation, latency, and packet loss, is a major problem in SDN systems because of its direct influence on network application performance and user experience. To deal with these important issues, this paper tackles the critical problem of QoS optimization in SDNs. The proposed Firefly-Fruit Fly Optimised Deep Reinforcement Learning (DQ-FFO-DRL) framework presents a novel combination of optimization techniques derived from Firefly and Fruit Fly behaviors with Deep Q-Learning. The framework effectively explores ideal network configurations by utilizing the distinct advantages of the Firefly and Fruit Fly optimization components, while the Deep Q-Learning component dynamically adjusts to changing network circumstances by drawing conclusions from prior experiences. Extensive testing and modeling reveal that the DQ-FFO-DRL approach performs very well in SDNs compared to conventional QoS management solutions. The algorithm demonstrates exceptional adaptability in negotiating the ever-changing landscape of resource allocation, network usage, and overall network performance. The suggested system, implemented in Python, offers an advanced and flexible method for enhancing QoS in SDN systems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_38-Enhancing_Quality_of_Service_in_Software_Defined_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Fake News Detection Techniques for Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150137</link>
        <id>10.14569/IJACSA.2024.0150137</id>
        <doi>10.14569/IJACSA.2024.0150137</doi>
        <lastModDate>2024-01-30T15:33:23.5500000+00:00</lastModDate>
        
        <creator>Taghreed Alotaibi</creator>
        
        <creator>Hmood Al-Dossari</creator>
        
        <subject>Fake news detection; rumors; classification; Arabic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The growing proliferation of social networks provides users worldwide with access to vast amounts of information. Social media users have benefitted significantly from the rise of various platforms, e.g., by expressing their opinions, finding products and services, and checking reviews; however, this rise has also raised critical problems, such as the spread of fake news. Spreading fake news affects not only individual citizens but also governments and countries. This situation necessitates the immediate integration of artificial intelligence methodologies to address and alleviate this issue effectively. Researchers in the field have leveraged different techniques to mitigate this problem. However, research on fake news detection in the Arabic language is still in its early stages compared with other languages, such as English. This review paper intends to provide a clear view of Arabic research in the field. In addition, the paper aims to give other researchers working on Arabic fake news detection a better understanding of the common features used in feature extraction, machine learning, and deep learning algorithms. Moreover, a list of publicly available datasets is provided to describe their characteristics and facilitate researcher access. Furthermore, some of the limitations and challenges related to Arabic fake news and rumor detection are discussed to encourage further research.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_37-A_Review_of_Fake_News_Detection_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Extracting Traffic Parameters from Drone Videos to Assist Car-Following Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150136</link>
        <id>10.14569/IJACSA.2024.0150136</id>
        <doi>10.14569/IJACSA.2024.0150136</doi>
        <lastModDate>2024-01-30T15:33:23.5330000+00:00</lastModDate>
        
        <creator>Xiangzhou Zhang</creator>
        
        <creator>Zhongke Shi</creator>
        
        <subject>UAV; Yolov7-tiny; DeepSort; car-following model; stability analysis; traffic congestion; safety assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>A new method for extracting traffic parameters from UAV videos to assist in establishing a car-following model is proposed in this paper. In the target detection stage, an improved ShuffleNet network and the GSConv module are introduced into the Yolov7-tiny neural network model. In the tracking and matching stage, HOG features and IoU motion metrics are introduced into the DeepSort multi-object tracking algorithm. Experiments on a self-built UAV aerial traffic dataset show that the new method improves several detection and tracking indicators. In addition, it reduces the false detections, missed detections, and incorrect ID switches of the previous algorithm, and improves the accuracy and lightweight design of multi-target tracking. Finally, grey relational analysis was applied to the traffic parameters extracted by the new method, and the driver&#39;s visual perception of collision was introduced into the car-following model. Through stability analysis, small-disturbance simulation, and collision risk assessment, the newly proposed traffic flow parameter extraction method is shown to improve the dynamic characteristics and safety of the car-following model, and can be used to alleviate traffic congestion and improve driving safety.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_36-A_Method_for_Extracting_Traffic_Parameters_from_Drone_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensionality Reduction: A Comparative Review using RBM, KPCA, and t-SNE for Micro-Expressions Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150135</link>
        <id>10.14569/IJACSA.2024.0150135</id>
        <doi>10.14569/IJACSA.2024.0150135</doi>
        <lastModDate>2024-01-30T15:33:23.5200000+00:00</lastModDate>
        
        <creator>Viola Bakiasi</creator>
        
        <creator>Markela Mu&#231;a</creator>
        
        <creator>Rinela Kap&#231;iu</creator>
        
        <subject>Dimensionality reduction; Kernel Principal Component Analysis (KPCA); t-distributed Stochastic Neighbor Embedding (t-SNE); Restricted Boltzmann Machine (RBM); facial feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Facial expressions are the main way humans display emotions. Emotions can also appear in the special form of micro-expressions: very brief facial expressions that occur on people’s faces in certain circumstances, typically when a person tries to lie or hide something. Studying micro-expressions is very attractive, but the number of pixels an image contains makes it difficult. Feature extraction techniques are the most popular ones for reducing data dimensionality. These techniques create a new low-dimensional dataset that tries to represent as much information as the original dataset. Many methods are used for dimensionality reduction; Restricted Boltzmann Machine (RBM), Kernel Principal Component Analysis (KPCA), and t-distributed stochastic neighbor embedding (t-SNE) are currently widely used by researchers, and choosing the right dimensionality reduction technique is time consuming. This study proposes a framework for micro-expression recognition. The two key processes of this framework are facial feature extraction (Dlib) and dimensionality reduction using RBM, KPCA, and t-SNE. We select the technique that generates the new dataset that best represents the original dataset. The framework is trained with images from the CASMEII database, which was built specially for research purposes, and tested with new, previously unseen images. The experiments are conducted in Python.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_35-Dimensionality_Reduction_A_Comparative_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Emotion Analysis Model using Machine Learning in Saudi Dialect: COVID-19 Vaccination Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150134</link>
        <id>10.14569/IJACSA.2024.0150134</id>
        <doi>10.14569/IJACSA.2024.0150134</doi>
        <lastModDate>2024-01-30T15:33:23.5030000+00:00</lastModDate>
        
        <creator>Abdulrahman O. Mostafa</creator>
        
        <creator>Tarig M. Ahmed</creator>
        
        <subject>Data mining; natural language processing; sentiment analysis; emotion analysis; machine learning; support vector machine; logistic regression; decision tree; Covid-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Sentiment Analysis (SA) and Emotion Analysis (EA) are active areas of research aimed at automatically detecting and recognizing the sentiment expressed in a text and identifying the underlying opinion towards a specific topic. Although they are often considered interchangeable terms, they differ slightly. The primary purpose of SA is to find the polarity expressed in a text by distinguishing between positive, negative, and neutral opinions, whereas EA is concerned with detecting finer-grained emotion categories, such as happiness, anger, sadness, and fear. EA thus allows the analysis to extract more accurate and detailed results suited to the field in which it is applied. This work delves into EA within the Saudi Arabian dialect, focusing on sentiments related to COVID-19 vaccination campaigns. Our endeavor addresses the absence of research on developing an effective EA machine-learning model for Saudi dialect texts, particularly within the healthcare and vaccination domain, exacerbated by the lack of a manually labeled EA corpus. Using a systematic approach, a dataset of 33,373 tweets was collected, annotated, and preprocessed. Thirty-six machine learning experiments encompassing SVM, Logistic Regression, and Decision Tree models, three stemming techniques, and four feature extraction methods enhance the understanding of public sentiment surrounding COVID-19 vaccination campaigns. Our Logistic Regression model achieved 74.95% accuracy. The findings reveal a predominantly positive sentiment, particularly happiness, among Saudi citizens. This research contributes valuable insights for healthcare communication, public sentiment monitoring, and decision-making, while providing a labeled corpus and ML model comparison results for improving model performance and exploring broader linguistic and dialectal applications.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_34-Enhanced_Emotion_Analysis_Model_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Observational Quantitative Study of Healthy Lifestyles and Nutritional Status in Firefighters of the fifth Command of Callao, Ventanilla 2023</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150133</link>
        <id>10.14569/IJACSA.2024.0150133</id>
        <doi>10.14569/IJACSA.2024.0150133</doi>
        <lastModDate>2024-01-30T15:33:23.5030000+00:00</lastModDate>
        
        <creator>Genrry Perez-Olivos</creator>
        
        <creator>Exilda Garcia-Carhuapoma</creator>
        
        <creator>Ethel Gurreonero-Seguro</creator>
        
        <creator>Julio M&#233;ndez-Nina</creator>
        
        <creator>Sebastian Ramos-Cosi</creator>
        
        <creator>Alicia Alva Mantari</creator>
        
        <subject>BMI; firemen; lifestyles; excess weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Given the high concern for human health, the aim is to determine the relationship between healthy lifestyles and nutritional status among firefighters of the VCD Callao Ventanilla 2023. This study was conducted in four volunteer fire companies, namely B-75, B-184, B-207, and B-232, located in the districts of Ventanilla and Mi Per&#250;. The population consists of 291 personnel, with a sample of 168 participants. It was observed that 58.9% (99) of the participants are under 36 years old, 29.8% (50) are between 36 and 45 years old, and 11.3% (19) are 46 years or older. In terms of gender, 62.5% (105) are male. Regarding the duration of their firefighting service, 70.2% (118) have at most 10 years of seniority. Concerning lifestyles, 57.7% (97) of the participants have an unhealthy lifestyle, 40.5% (68) have a healthy lifestyle, and 1.8% (3) have a very healthy lifestyle. Regarding nutritional status, it was found that 53.3% (89) of the firefighters are overweight, 26.8% are of normal weight, 19.6% (33) are obese, and 0.6% (1) are underweight. It is worth mentioning that according to Rodr&#237;guez C&#39;s study, 95.2% of volunteers belonging to the B107 Fire Company lead a healthy lifestyle, while 4.8% do not. Statistically, we can assert that there is no significant relationship between healthy lifestyles and nutritional status. However, a direct relationship is observed between nutritional status and age. Likewise, it can be affirmed that at least 72.9% of the studied population carries excess weight, being either overweight or obese.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_33-Observational_Quantitative_Study_of_Healthy_Lifestyles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Efficient Approach for Creating Virtual Fitting Room using Generative Adversarial Networks (GANs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150132</link>
        <id>10.14569/IJACSA.2024.0150132</id>
        <doi>10.14569/IJACSA.2024.0150132</doi>
        <lastModDate>2024-01-30T15:33:23.4870000+00:00</lastModDate>
        
        <creator>Kirolos Attallah</creator>
        
        <creator>Girgis Zaky</creator>
        
        <creator>Nourhan Abdelrhim</creator>
        
        <creator>Kyrillos Botros</creator>
        
        <creator>Amjad Dife</creator>
        
        <creator>Nermin Negied</creator>
        
        <subject>Generative Adversarial Networks (GANs); virtual reality; human body segmentation; image generator; conditional generator; background removal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Customers all over the world want to see whether clothes fit them before purchasing. Customers therefore naturally prefer brick-and-mortar clothes shopping, where they can try on products before buying them. After the COVID-19 pandemic, however, many sellers either shifted to online shopping or closed their fitting rooms, which made the shopping process hesitant and doubtful. The fact that clothes may not suit their buyers after purchase led us to use new AI technologies to create an online platform, a virtual fitting room (VFR), in the form of a mobile application and a model deployed via a webpage that can later be embedded into any online store, where customers can try on any number of clothing items without physically wearing them. Besides saving much of the time spent searching, it will reduce crowding in physical shops: the same technology can be applied in a special type of mirror that enables customers to try on clothes faster. From business owners&#39; perspective, this project will greatly increase online sales and preserve product quality by avoiding the issues of physical try-ons. The main approach used in this work applies Generative Adversarial Networks (GANs) combined with image processing techniques to generate one output image from two input images: the person image and the cloth image. This work achieved results that outperform the state-of-the-art approaches found in the literature.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_32-A_Cost_Efficient_Approach_for_Creating_Virtual_Fitting_Room.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of SVM kernels in Credit Card Fraud Detection using GANs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150131</link>
        <id>10.14569/IJACSA.2024.0150131</id>
        <doi>10.14569/IJACSA.2024.0150131</doi>
        <lastModDate>2024-01-30T15:33:23.4730000+00:00</lastModDate>
        
        <creator>Bandar Alshawi</creator>
        
        <subject>Fraud transactions; credit card; Generative Adversarial Network; Support Vector Machine kernels; imbalanced dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The technological evolution of smartphones and telecommunication systems has led people to depend more on online shopping and electronic payments, which has created a burdensome transaction validation task for many financial institutions. This paper examined and evaluated the efficacy of Support Vector Machine (SVM) kernels on Generative Adversarial Network (GAN)-generated synthetic data for detecting fraudulent credit card transactions. Four SVM kernels were investigated and compared: linear, polynomial, sigmoid, and radial basis function. The accuracy results indicated that the linear and polynomial kernels reached over 91%, while the sigmoid and radial basis function kernels reached 79% and 83%, respectively. The linear and polynomial models achieved over 90% in ROC and F1 score; in contrast, the ROC scores were lower for sigmoid (81%) and radial basis function (83%). Both the sigmoid and radial basis function kernels achieved over 80% in F1 score. The precision scores were high for both the linear and polynomial kernels, reaching 99%, while sigmoid and radial basis function achieved over 80%. These results show that the imbalanced dataset issue can be addressed by generating synthetic data with GANs and applying SVM kernels to it.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_31-Comparison_of_SVM_kernels_in_Credit_Card_Fraud_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Category Decomposition-based Within Pixel Information Retrieval Method and its Application to Partial Cloud Extraction from Satellite Imagery Pixels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150129</link>
        <id>10.14569/IJACSA.2024.0150129</id>
        <doi>10.14569/IJACSA.2024.0150129</doi>
        <lastModDate>2024-01-30T15:33:23.4570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yasunori Terayama</creator>
        
        <creator>Masao Moriyama</creator>
        
        <subject>Category decomposition; information retrieval; cloud cover estimation; Generalized Inverse Matrix Method (GIMM); Least Square Method (LSM); Maximum Likelihood Method (MLH)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>A category decomposition-based within-pixel information retrieval method is proposed, together with its application to partial cloud extraction from satellite imagery pixels. A comparative study was conducted on estimating the sea surface temperature of pixels suffering from partial cloud cover. Three methods for estimating partial cloud cover within a pixel, based on the proposed category decomposition approach with the Generalized Inverse Matrix Method (GIMM), the well-known Least Square Method (LSM), and the Maximum Likelihood Method (MLH), were compared. It was found that an RMS (Root Mean Square) error of around 9% can be achieved. It was also found that estimation accuracy depends strongly on the variance of the representative vectors for cloud and ocean, and on observation noise. The experimental results with simulated data show that the RMS error of GIMM is highly dependent on noise, followed by MLH and LSM. The results also show that the best estimation accuracy is achieved by MLH, followed by LSM and GIMM.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_29-Category_Decomposition_based_Within_Pixel_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Costless Expert Systems Development and Re-engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150130</link>
        <id>10.14569/IJACSA.2024.0150130</id>
        <doi>10.14569/IJACSA.2024.0150130</doi>
        <lastModDate>2024-01-30T15:33:23.4570000+00:00</lastModDate>
        
        <creator>Manal Alsharidi</creator>
        
        <creator>Abdelgaffar Hamed Ali</creator>
        
        <subject>Model-Driven Architecture (MDA); Unified Modelling Language (UML); Platform-Independent Model (PIM); Platform-Specific Model (PSM); Query-View-Transform (QVT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Symbolic AI is indispensable for current LLM agents, which use it, for example, to reason about the context of questions. An expert system is a symbolic AI that can explain the reasoning by which it reached a conclusion. Typically rule-based, such systems have been attractive in domains such as medicine, agriculture, and operations. On average, these systems involve hundreds of rules that are unstable; moreover, they are coded at low levels of abstraction. Therefore, designing and re-engineering an expert system is still costly and requires technical knowledge, because of the manual process and the maintenance of a low-level abstraction. On the other hand, model-driven architecture (MDA) has proven to be a successful technology that raised the abstraction level and formalized it to automate software development. It specifies business aspects in the platform-independent model (PIM) and implementation aspects in a platform-specific model (PSM), and then automates the mapping between them using a standard mapping language called Query-View-Transform (QVT). This paper argues that utilizing MDA principles such as automation, the abstractions represented by the PIM and PSM descriptors, and mapping metamodels will not only overcome the instability of expert-system rules but also provide new insights for their usage. Therefore, this work proposes an MDA-compliant methodology that adopts a UML sequence diagram and a class diagram for the PIM descriptor, and a generic PSM based on production rules. Moreover, a UML profile has been developed to support features lacking in the sequence model. The paper also argues for a new kind of process-oriented expert system. The approach not only allows domain experts to develop or participate in expert systems but also reduces the cost of developing new systems and of re-engineering or maintaining critical, large-scale legacy expert systems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_30-Costless_Expert_Systems_Devolplment_and_Re_engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Agile Values and Principles in Real Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150128</link>
        <id>10.14569/IJACSA.2024.0150128</id>
        <doi>10.14569/IJACSA.2024.0150128</doi>
        <lastModDate>2024-01-30T15:33:23.4400000+00:00</lastModDate>
        
        <creator>Abdullah A H Alzahrani</creator>
        
        <subject>Agile; software engineering; information systems; change management; organisational change</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Software engineering is the field concerned with the development of information systems. However, the development process can often be complicated, and many researchers have introduced approaches to manage this complexity, leading to new subfields such as change management and organisational change. Agile can be regarded as a collection of best practices sharing the same values and principles. Since the introduction of the Agile manifesto, many researchers, manufacturers, and organisations have contributed thoughts, tools, and models to enhance the understanding and adoption of Agile. A shared understanding of Agile among the people involved is essential for adopting it. This paper investigates the understanding of Agile among IT professionals. In addition, the factors that impact the understanding and adoption of Agile are highlighted and studied. A survey methodology was employed among IT professionals from different organisations. The results of this study show that productivity and the ability to accept change are points on which participants&#39; understanding conflicts. Furthermore, participants&#39; experience has an impact on the ways in which Agile is adopted.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_28-Investigating_Agile_Values_and_Principles_in_Real_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Trajectory Clustering using Meta-Heuristic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150126</link>
        <id>10.14569/IJACSA.2024.0150126</id>
        <doi>10.14569/IJACSA.2024.0150126</doi>
        <lastModDate>2024-01-30T15:33:23.4230000+00:00</lastModDate>
        
        <creator>Haiyang Li</creator>
        
        <creator>Xinliu Diao</creator>
        
        <subject>Ant colony method; particle swarm algorithm; HCM clustering; trajectory lines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The rapid growth of GPS trajectories conceals valuable information regarding urban road infrastructure, urban traffic patterns, and population mobility. An innovative method termed trajectory regression clustering is introduced to improve the extraction of this hidden information and generate more precise clustering results. The approach belongs to the unsupervised trajectory clustering category and aims to minimize the loss of local information inside the trajectory while preventing the algorithm from getting stuck in a suboptimal solution. The methodology consists of three primary stages. To begin with, we present the notion of trajectory clustering and devise a distinctive approach known as angle-based partitioning to segment line segments. The evaluation results indicate a significant improvement in the clustering accuracy of the proposed method compared to existing methodologies, especially for a high number of clusters. The HCMGA and HCMMOPSO algorithms improved clustering accuracy for MBP values by 0.61% and 0.64%, respectively, compared to previous approaches. Moreover, based on the implementation findings, the ant colony approach demonstrates superior accuracy compared to alternative methods, while the particle swarm method exhibits faster convergence.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_26-Improving_the_Trajectory_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainability and Resilience Analysis in Supply Chain Considering Pricing Policies and Government Economic Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150127</link>
        <id>10.14569/IJACSA.2024.0150127</id>
        <doi>10.14569/IJACSA.2024.0150127</doi>
        <lastModDate>2024-01-30T15:33:23.4230000+00:00</lastModDate>
        
        <creator>Dounia SAIDI</creator>
        
        <creator>Aziz AIT BASSOU</creator>
        
        <creator>Jamila EL ALAMI</creator>
        
        <creator>Mustapha HLYAL</creator>
        
        <subject>Supply chain management; pricing policies; sustainability; resilience; government regulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Sustainability and resilience are becoming increasingly critical in shaping supply chain pricing strategies. They ensure that supply chains can withstand disruptions while adhering to environmental and social standards, thereby securing long-term economic viability. Despite their importance, the integration of these two pillars with the promotion of domestic products remains under-explored, especially concerning their influence on the competitive dynamics within supply chains. This study seeks to bridge this gap by examining the influence of sustainability, resilience, and domestic product promotion on supply chain pricing strategies. We introduce a model that captures the interactions among a central supplier, multiple stores, and the government, focusing on strategies adopted by each stakeholder to maximize its profit while adhering to sustainability and resilience requirements. The study reveals that stores&#39; pricing strategies are significantly influenced by their sustainability efforts, with the cost coefficient of these efforts and the elasticity of sustainability efforts directly affecting profit margins. It also finds that the supplier&#39;s resilience strategy involves allocating inventory reserves to manage wholesale pricing effectively. Governmental regulatory measures, through taxation and subsidies, are shown to play a crucial role in maintaining the balance between domestic and foreign products and providing flexibility to diversify product sources to cope with local disruptions. Finally, perspectives are provided to enrich the understanding of how sustainability and resilience can be considered and impact pricing policies of the whole network.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_27-Sustainability_and_Resilience_Analysis_in_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Machine Learning Classifiers for Predicting Denial-of-Service Attack in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150125</link>
        <id>10.14569/IJACSA.2024.0150125</id>
        <doi>10.14569/IJACSA.2024.0150125</doi>
        <lastModDate>2024-01-30T15:33:23.4100000+00:00</lastModDate>
        
        <creator>Omar Almomani</creator>
        
        <creator>Adeeb Alsaaidah</creator>
        
        <creator>Ahmad Adel Abu Shareha</creator>
        
        <creator>Abdullah Alzaqebah</creator>
        
        <creator>Malek Almomani</creator>
        
        <subject>Cybersecurity; IDS; DOS attack; IoT; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Eliminating security threats on the Internet of Things (IoT) requires recognizing attacks. IoT and its applications are currently among the most active scientific fields. When it comes to real-world implementations, IoT&#39;s attributes make it simple to apply on the one hand, but expose it to cyber-attacks on the other. The Denial of Service (DoS) attack is a type of threat that is now widespread in the IoT field; its primary goal is to stop or damage a service or capability on a target. Conventional Intrusion Detection Systems (IDS) are no longer sufficient for detecting these sophisticated attacks with unpredictable behaviors. Machine learning (ML)-based intrusion detection does not need a massive list of expected activities or a variety of threat signatures to create detection rules. This study aims to evaluate different ML classifiers for network intrusion detection focused on DoS attacks in the IoT environment, in order to determine the ML classifier that best detects the DoS attack. The XGBoost, Decision Tree (DT), Gaussian Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM) classifiers were evaluated on the UNSW-NB15 dataset. The obtained accuracy rates were 98.92% for XGBoost, 98.62% for SVM, 83.75% for Gaussian NB, 97.74% for LR, 99.48% for RF, and 99.16% for DT. The precision rates for XGBoost, SVM, Gaussian NB, LR, RF, and DT were 98.40%, 98.29%, 77.50%, 97.14%, 99.21%, and 99.12%, respectively, and the sensitivity rates were 99.29%, 98.76%, 91.87%, 98.06%, 99.69%, and 99.08%, respectively. The results show that the RF classifier outperformed the other classifiers in terms of accuracy, precision, and sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_25-Performance_Evaluation_of_Machine_Learning_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Driven Integration of Genetic and Textual Data for Enhanced Genetic Variation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150124</link>
        <id>10.14569/IJACSA.2024.0150124</id>
        <doi>10.14569/IJACSA.2024.0150124</doi>
        <lastModDate>2024-01-30T15:33:23.3930000+00:00</lastModDate>
        
        <creator>Malkapurapu Sivamanikanta</creator>
        
        <creator>N Ravinder</creator>
        
        <subject>Precision medicine; genetic testing; driver mutations; cancer genomes; textual clinical literature; text mining; genetic variations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Precision medicine and genetic testing have the potential to revolutionize disease treatment by identifying driver mutations crucial for tumor growth in cancer genomes. However, clinical pathologists face the time-consuming and error-prone task of classifying genetic variations using textual clinical literature. In this research paper, titled “Machine Learning-Driven Integration of Genetic and Textual Data for Enhanced Genetic Variation Classification”, we propose a solution to automate this process. We aim to develop a robust machine learning algorithm with a knowledge base foundation to streamline precision medicine. Our methods leverage advanced machine learning and natural language processing techniques, coupled with a comprehensive knowledge base that incorporates clinical and genetic data to inform mutation significance. We use text mining to extract relevant information from scientific literature, enhancing classification accuracy. Our results demonstrate significant improvements in efficiency and accuracy compared to manual methods. Our system excels at identifying driver mutations, reducing the burden on clinical pathologists and minimizing errors. Automating this critical aspect of precision medicine promises to empower healthcare professionals to make more precise treatment decisions, advancing the field and improving patient care.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_24-Machine_Learning_Driven_Integration_of_Genetic_and_Textual_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crime Prediction Model using Three Classification Techniques: Random Forest, Logistic Regression, and LightGBM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150123</link>
        <id>10.14569/IJACSA.2024.0150123</id>
        <doi>10.14569/IJACSA.2024.0150123</doi>
        <lastModDate>2024-01-30T15:33:23.3930000+00:00</lastModDate>
        
        <creator>Abdulrahman Alsubayhin</creator>
        
        <creator>Muhammad Sher Ramzan</creator>
        
        <creator>Bander Alzahrani</creator>
        
        <subject>Crime prediction; random forest; logistic regression; LightGBM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Predicting the likelihood of a crime occurring is difficult, but machine learning can be used to develop models that can do so. Random forest, logistic regression, and LightGBM are three well-known classification methods that can be applied to crime prediction. Random forest is an ensemble learning algorithm that predicts by combining multiple decision trees. It is an effective method for classification tasks, and it is frequently employed for crime prediction because it handles imbalanced datasets well. Logistic regression is a linear model that can be used to predict the probability of a binary outcome, such as the occurrence of a crime. It is a relatively straightforward technique that can be effective for crime prediction if the features are carefully chosen. LightGBM is a gradient-boosting decision tree algorithm with a reputation for speed and precision. It is a relatively new algorithm, but because it can achieve high accuracy even on small datasets, it has rapidly gained popularity for crime prediction. The experimental results show that LightGBM performs best for binary classification, followed by Random Forest and Logistic Regression.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_23-Crime_Prediction_Model_using_Three_Classification_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Perceived Benefits and Challenges of Implementing CMMI on Agile Project Management: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150122</link>
        <id>10.14569/IJACSA.2024.0150122</id>
        <doi>10.14569/IJACSA.2024.0150122</doi>
        <lastModDate>2024-01-30T15:33:23.3770000+00:00</lastModDate>
        
        <creator>Anggia Astridita</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <creator>Anita Nur Fitriani</creator>
        
        <subject>CMMI; SPI; Agile project management; systematic literature review; PRISMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In an era where the agility and responsiveness of Agile project management are paramount, the integration of structured models like the Capability Maturity Model Integration (CMMI) presents a blend of unique opportunities and challenges. This study conducts a comprehensive systematic literature review of 23 scientific articles, chosen through the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, to explore the benefits and challenges of CMMI and software development integration within the context of Agile project management. Emphasizing the enhancement of Agile project management maturity, the research delves into the role of CMMI, particularly CMMI-DEV, as a pivotal element in Software Process Improvement (SPI) models tailored to Agile environments. The study’s novelty lies in its systematic and in-depth investigation of CMMI’s integration with Agile project management methodologies, a critical yet underexplored area in the existing literature. Addressing the urgency highlighted by global trends of resource inefficiencies and project management challenges, this research offers timely insights for both academia and industry. This study also categorizes key benefits while identifying prevalent challenges, such as resource constraints and organizational resistance. Additionally, this research also suggests solutions and improvements to these challenges. By offering a comprehensive evaluation, the research significantly advances the understanding of the complexities and potential of CMMI and Agile project management integration. It provides valuable insights for practical applications in organizational settings, emphasizing the potential of integrating structured models like CMMI-DEV with Agile project management methodologies. This integration is essential for enhancing project management maturity, marking a significant step forward in academic research and practical applications in this vital domain.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_22-Perceived_Benefits_and_Challenges_of_Implementing_CMMI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving of Smart Health Houses: Identifying Emotion Recognition using Facial Expression Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150121</link>
        <id>10.14569/IJACSA.2024.0150121</id>
        <doi>10.14569/IJACSA.2024.0150121</doi>
        <lastModDate>2024-01-30T15:33:23.3630000+00:00</lastModDate>
        
        <creator>Yang SHI</creator>
        
        <creator>Yanbin BU</creator>
        
        <subject>Smart health houses; computer vision; facial expression; emotion recognition; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Smart health houses have shown great potential for providing advanced healthcare services and support to individuals. Although various computer vision-based approaches have been developed, current facial expression analysis methods still have limitations that need to be addressed. This research paper introduces a facial expression analysis technique for emotion recognition based on a YOLOv4-based algorithm. The proposed method involves the use of a custom dataset for training, validation, and testing of the model. By overcoming the limitations of existing methods, the proposed technique delivers precise and accurate results in detecting subtle changes in facial expressions. Through several experiments and performance evaluation tasks, we have assessed the efficacy of our proposed method and demonstrated its potential to enhance the accuracy of smart health houses. This study emphasizes the importance of addressing emotional well-being in healthcare. As the experimental results show, the proposed method achieved a satisfactory accuracy rate, and the effectiveness of the YOLOv4 model for emotion detection suggests that emotional intelligence training can be a valuable tool in achieving this goal.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_21-Improving_of_Smart_Health_Houses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Solution to Improve the Detection of the Nominal Value of the Financial Market: A Case Study of the Alphabet Stocks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150119</link>
        <id>10.14569/IJACSA.2024.0150119</id>
        <doi>10.14569/IJACSA.2024.0150119</doi>
        <lastModDate>2024-01-30T15:33:23.3470000+00:00</lastModDate>
        
        <creator>Zhaohua Li</creator>
        
        <creator>Xinyue Chang</creator>
        
        <subject>Alphabet stock; machine learning; light gradient boosting machine; optimization; artificial bee colony algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Given the regular occurrence of non-stationarity, non-linearity, and high levels of noise in time series data, predicting the value of stocks is considerably difficult. Traditional methods have the potential to enhance forecasting precision, but they concurrently introduce computational complexity, increasing the probability of prediction errors. To effectively tackle these concerns, this research proposes a novel approach that combines a light gradient boosting machine, a machine learning methodology, with artificial bee colony optimization. In the dynamic stock market examined, the proposed model demonstrated better efficiency and performance compared to alternative models, exhibiting optimal performance with a low error rate and high efficacy. The analysis utilized data on the Alphabet stock over the period from January 2, 2015, to June 29, 2023. The outcomes of the study provide evidence of the predictive accuracy of the proposed model in determining stock prices, and present a pragmatic methodology for evaluating and forecasting stock price time series. The findings show that, in terms of forecast accuracy, the proposed model performs better than the methods currently in use.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_19-A_Solution_to_Improve_the_Detection_of_the_Nominal_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Financial Market via an Optimized Machine Learning Algorithm: A Case Study of the Nasdaq Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150120</link>
        <id>10.14569/IJACSA.2024.0150120</id>
        <doi>10.14569/IJACSA.2024.0150120</doi>
        <lastModDate>2024-01-30T15:33:23.3470000+00:00</lastModDate>
        
        <creator>Lei Wang</creator>
        
        <creator>Mingzhu Xie</creator>
        
        <subject>Stock market prediction; Nasdaq index; random forest; moth-flame optimization; MFO-RF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The complex interaction among economic variables, market forces, and investor psychology presents a formidable obstacle to making accurate forecasts in the realm of finance. Moreover, the non-stationary, non-linear, and highly volatile nature of stock price time series data further compounds the difficulty of accurately predicting stock prices within the securities market. Traditional methods have the potential to enhance forecasting precision, although they concurrently introduce computational complexities that may increase prediction errors. This paper presents a unique model that effectively handles several challenges by integrating the Moth Flame optimization technique with the random forest method. The hybrid model demonstrated superior efficacy and performance compared to other models in the present investigation, exhibiting a high level of efficacy with little error and optimal performance. The study evaluated the efficacy of the proposed predictive model for forecasting stock prices by analyzing data from the Nasdaq index for the period from January 1, 2015, to June 29, 2023. The results indicate that the proposed model is a reliable and effective approach for analyzing and forecasting stock price time series, and the experimental findings indicate superior predictive accuracy compared to other contemporary methodologies.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_20-Analysis_of_the_Financial_Market_via_an_Optimized_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Telemedicine and its Impact on the Preoperative Period</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150118</link>
        <id>10.14569/IJACSA.2024.0150118</id>
        <doi>10.14569/IJACSA.2024.0150118</doi>
        <lastModDate>2024-01-30T15:33:23.3300000+00:00</lastModDate>
        
        <creator>Raquel Elisa Apaza-Avila</creator>
        
        <subject>Telemedicine; digital health; e-health; preoperative care; preoperative period; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The application of telemedicine has attracted considerable interest in the field of chronic disease care, which is associated with clinical medicine. The aim of this research is to systematically evaluate the published evidence on telemedicine in the preoperative period. A systematic search covering the last five years was conducted, excluding secondary research. Selection criteria were applied, yielding 68 articles that met both these and quality criteria. The results show that the largest production is carried out in the United States and the United Kingdom, with collaboration between institutions and countries. The main use of telemedicine was in teleconsultation and telecounseling activities. In addition, telemedicine in the preoperative period was applied mostly to general procedures without distinction of surgical specialty, oncological surgery, and traumatology. The observed increase in production can be related to the need for physical distancing due to the pandemic. Future research could include the co-occurrence of search terms, the impact of smartphones, NER terms, and the impact of polarity and objectivity on readers&#39; choice of articles to read, share, and cite.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_18-Telemedicine_and_its_Impact_on_the_Preoperative_Period.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Construction and Application of Library Intelligent Acquisition Decision Model Based on Decision Tree Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150116</link>
        <id>10.14569/IJACSA.2024.0150116</id>
        <doi>10.14569/IJACSA.2024.0150116</doi>
        <lastModDate>2024-01-30T15:33:23.3170000+00:00</lastModDate>
        
        <creator>Hong Pan</creator>
        
        <subject>Decision tree; machine learning; fuzzy logic; intelligent interview model; post-pruning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In today&#39;s digital age, libraries, as the core institutions of knowledge management and information services, are facing increasing demand from readers. In order to provide more efficient, accurate, and personalized interview services, intelligent interview decision-making in libraries has become an important research field. Traditional manual interview services face challenges such as personnel training and knowledge updates, making it difficult to quickly adapt to new needs and changes. To address these issues, this study performs post-pruning on standard decision trees using machine learning techniques and combines them with fuzzy logic to design a fuzzy decision tree. The experimental results show that the false rejection rate (FN) of the model rapidly decreases to about 0.1 as the number of training iterations increases, and stabilizes at around 0.05 after 210 rounds of training, 0.10 lower than the FN of the rule-based decision model. The intelligent acquisition decision-making model designed in this study has higher accuracy and stability, and shows application potential in the field of intelligent acquisition decision-making in libraries.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_16-The_Construction_and_Application_of_Library_Intelligent_Acquisition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Predictive Sales System Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150117</link>
        <id>10.14569/IJACSA.2024.0150117</id>
        <doi>10.14569/IJACSA.2024.0150117</doi>
        <lastModDate>2024-01-30T15:33:23.3170000+00:00</lastModDate>
        
        <creator>Jean Paul Luyo Ballena</creator>
        
        <creator>Cristhian Pool Ortiz Pallihuanca</creator>
        
        <creator>Ernesto Adolfo Carrera Salas</creator>
        
        <subject>Deep learning; neural network architectures; sales prediction; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>There are several techniques for predictive sales systems; in this study, a system based on different machine learning algorithms is developed for a trading company in Lima. Like any company, it needs to be accurate in its sales calculations to manage the volume of production or product purchases. With the system, the trading company has a mechanism to order products from its supplier based on predictions and estimates of its needs according to the projection of its sales. For the predictive sales system, deep learning technology and the neural network architectures GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory) and RNN (Recurrent Neural Network) were used; 10 products were sampled, and the sales quantities of the last 12 months were obtained for the evaluation. The study found that the LSTM architecture excels in accuracy, significantly outperforming GRU and RNN in terms of Mean Absolute Percentage Error (MAPE), achieving an average MAPE of 7.07%, in contrast to 27.14% for GRU and 36.17% for RNN. These findings support the effectiveness and versatility of LSTM in time series prediction, demonstrating its usefulness in a variety of real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_17-A_Predictive_Sales_System_Based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Scheme Design of Wearable Sensor for Exercise Habits Based on Random Game</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150115</link>
        <id>10.14569/IJACSA.2024.0150115</id>
        <doi>10.14569/IJACSA.2024.0150115</doi>
        <lastModDate>2024-01-30T15:33:23.3000000+00:00</lastModDate>
        
        <creator>Youqin Huang</creator>
        
        <creator>Zhaodi Feng</creator>
        
        <subject>Random game; adaptive search hybrid learning algorithm; wearable sensors; physical exercise; evolution of actuators; exercise habits; anchor node positioning; semi definite programming method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The development of random game theory has enabled wearable sensors to capture actuator evolution in sports exercise; accordingly, the design of user exercise habits during the exercise process has begun to be studied. Conventional devices only focus on automatic adjustment of sports design, with shortcomings in personalization. To address this issue, this study added an anchor node localization device to the adaptive search hybrid learning algorithm and analyzed the exercise goals of athletes. At the same time, a semi-definite programming method was implemented in the wearable sensors to monitor the physical condition of athletes. To verify the performance of the fused device, this study conducted experiments on the Physical dataset and compared it with three models, including Harris Eagle Optimization. The accuracy rates of the four devices in designing exercise habit schemes were 97.4%, 96.5%, 94.7%, and 91.2%, respectively, indicating that the proposed model is the most stable. Under the same running time, the energy loss of this model was 0.11 kW·h, the best among the four models. For athletes of different ages, the F1 values of the four devices are 5.9, 4.5, 4.2, and 3.6, respectively. The results indicate that the proposed fusion model has strong robustness and is suitable for designing exercise habit schemes in the evolution of sports exercise actuators.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_15-The_Scheme_Design_of_Wearable_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Effect of Human-Computer Interactive Gymnastic Sports Action Recognition System Based on PTP-CNN Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150113</link>
        <id>10.14569/IJACSA.2024.0150113</id>
        <doi>10.14569/IJACSA.2024.0150113</doi>
        <lastModDate>2024-01-30T15:33:23.2700000+00:00</lastModDate>
        
        <creator>Yonge Ren</creator>
        
        <creator>Keshuang Sun</creator>
        
        <subject>PTP; CNN; human-computer interaction; gymnastic sports; action recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the rapid development of artificial intelligence technology, the recognition accuracy of traditional gymnastic sports action recognition systems can no longer meet the needs of today&#39;s society. To address these problems, an improved action recognition algorithm combining the Precision Time Protocol (PTP) and Convolutional Neural Networks (CNN) is proposed, and a human-computer interaction gymnastic action recognition system based on the PTP-CNN algorithm is constructed. A performance test of the proposed PTP-CNN algorithm found that its accuracy was 92.8% and its recall rate was 95.2%, better than the comparison algorithms. Performance comparison experiments on the gymnastic action recognition system based on the PTP-CNN algorithm found that its recognition accuracy was 96.3% and its running time was 3.4 s, better than the other comparison systems. The comprehensive results show that the proposed PTP-CNN recognition algorithm and the improved gymnastic action recognition system can effectively improve on the performance of traditional algorithms and models, and have practical application value and great application potential.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_13-Application_Effect_of_Human_Computer_Interactive_Gymnastic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lean Service Conceptual Model for Digital Transformation in the Competitive Service Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150114</link>
        <id>10.14569/IJACSA.2024.0150114</id>
        <doi>10.14569/IJACSA.2024.0150114</doi>
        <lastModDate>2024-01-30T15:33:23.2700000+00:00</lastModDate>
        
        <creator>Nur Niswah Hasina Mohammad Amin</creator>
        
        <creator>Amelia Natasya Abdul Wahab</creator>
        
        <creator>Nur Fazidah Elias</creator>
        
        <creator>Ruzzakiah Jenal</creator>
        
        <creator>Muhammad Ihsan Jambak</creator>
        
        <creator>Nur Afini Natrah Mohd Ashril</creator>
        
        <subject>Lean principles; digital transformation; conceptual model; service industry; waste; dimension; qualitative research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In today&#39;s competitive service industry, the pressure to boost productivity, cut costs, and improve service quality is immense. By integrating lean principles and digital transformation, organizations can streamline processes and reduce waste. Although various lean models have been developed for different service industries, there is no universal standard. Hence, this study aims to address this gap by proposing a Lean Service Conceptual Model through qualitative research, identifying nine types of waste and seven lean dimensions. Interviews, observations, and audio-visual materials are the data collection methods used in this study. The model aligns seamlessly with modern digital technologies such as big data, the Internet of Things, blockchain, cloud computing, and artificial intelligence, making it adaptable for service organizations to excel in the digital age. The model focuses on enhancing efficiency and effectiveness while primarily reducing waste in service operations. Due to restrictions during the pandemic and the interest expressed by the informants in participating in this study, the focus is on a single case study, which may lead to biased findings. However, future studies will be performed on multiple case studies to strengthen the findings. Exploring and reviewing the array of best practices, techniques, and tools available for waste reduction within organizational operations is paramount.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_14-A_Lean_Service_Conceptual_Model_for_Digital_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection in Structural Health Monitoring with Ensemble Learning and Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150112</link>
        <id>10.14569/IJACSA.2024.0150112</id>
        <doi>10.14569/IJACSA.2024.0150112</doi>
        <lastModDate>2024-01-30T15:33:23.2530000+00:00</lastModDate>
        
        <creator>Nan Huang</creator>
        
        <subject>Structural health monitoring; anomaly detection; reinforcement learning; differential evolution; imbalanced classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This research introduces a novel approach for improving the analysis of Structural Health Monitoring (SHM) data in civil engineering. SHM data, essential for assessing the integrity of infrastructures like bridges, often contains inaccuracies because of sensor errors, environmental factors, and transmission glitches. These inaccuracies can severely hinder identifying structural patterns, detecting damages, and evaluating overall conditions. Our method combines advanced techniques from machine learning, including dilated convolutional neural networks (CNNs), an enhanced differential evolution (DE) model, and reinforcement learning (RL), to effectively identify and filter out these irregularities in SHM data. At the heart of our approach lies the use of CNNs, which extract key features from the SHM data. These features are then processed to classify the data accurately. We address the challenge of imbalanced datasets, common in SHM, through an RL-driven method that treats the training procedure as a sequence of choices, with the network learning to distinguish between less and more common data patterns. To further refine our method, we integrate a novel mutation operator within the DE framework. This operator identifies key clusters in the data, guiding the backpropagation process for more effective learning. Our approach was rigorously tested on a dataset from a large cable-stayed bridge in China, provided by the IPC-SHM community. The results of our experiments highlight the effectiveness of our approach, demonstrating an Accuracy of 0.8601 and an F-measure of 0.8540, outperforming the other methods compared in our study. This underscores the potential of our method in enhancing the accuracy and reliability of SHM data analysis in civil infrastructure monitoring.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_12-Anomaly_Detection_in_Structural_Health_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brightness Equalization Algorithm for Chinese Painting Pigments in Low-Light Environment Based on Region Division</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150111</link>
        <id>10.14569/IJACSA.2024.0150111</id>
        <doi>10.14569/IJACSA.2024.0150111</doi>
        <lastModDate>2024-01-30T15:33:23.2370000+00:00</lastModDate>
        
        <creator>Lijuan Cheng</creator>
        
        <subject>Chinese painting; low-light; region division; guided filtering; scaling factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>With the promotion and development of Chinese painting and the advancement of photography technology, people can appreciate various types of Chinese paintings through images and other media. However, Chinese painting images in low-light environments suffer from extremely uneven brightness distribution. The solutions currently proposed for this problem are not sufficient. Therefore, this research proposes a brightness equalization algorithm for Chinese painting pigments in low-light environments based on region division. The algorithm also utilizes guided filtering for image denoising. In performance testing, the proposed method has a runtime of 16.63 seconds under a scaling factor of 1 and a runtime of 8.37 seconds under a scaling factor of 0.1, the fastest among the compared algorithms. In simulation experiments, the brightness equalization value of the proposed method is 198.93, the best among all the compared algorithms. This research provides a valuable research direction for the brightness equalization of Chinese painting pigments.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_11-Brightness_Equalization_Algorithm_for_Chinese_Painting_Pigments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Analysis of Deep Learning Method for Fragmenting Brain Tissue in MRI Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150110</link>
        <id>10.14569/IJACSA.2024.0150110</id>
        <doi>10.14569/IJACSA.2024.0150110</doi>
        <lastModDate>2024-01-30T15:33:23.2070000+00:00</lastModDate>
        
        <creator>Ting Yang</creator>
        
        <creator>Jiabao Sun</creator>
        
        <subject>Brain tumor; deep learning; neural networks; magnetic resonance imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Brain tumour segmentation is an essential component of medical image processing. Image segmentation is the process of assigning each pixel a label so that pixels bearing the same label share characteristics, helping to distinguish the target. Early identification can avoid a higher fatality rate and additional dangers. Manually segmenting brain tumours from the numerous MRI images generated during medical procedures in order to diagnose malignancy can be challenging and time-consuming. This is the fundamental reason why brain tumour imaging has to be automated. This work examines and enhances deep learning techniques for the segmentation of brain tissue in magnetic resonance imaging (MRI) images. Researchers are using deep learning techniques, convolutional neural networks in particular, to tackle the complex problem of object recognition in biological image segmentation. In contrast to traditional classification techniques that take in manually constructed features, convolutional neural networks automatically extract the required complex features from the data itself. This solves a number of problems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_10-Design_and_Analysis_of_Deep_Learning_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Yolo-based Approach for Fire and Smoke Detection in IoT Surveillance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150109</link>
        <id>10.14569/IJACSA.2024.0150109</id>
        <doi>10.14569/IJACSA.2024.0150109</doi>
        <lastModDate>2024-01-30T15:33:23.1900000+00:00</lastModDate>
        
        <creator>Dawei Zhang</creator>
        
        <subject>IoT; surveillance systems; fire detection; deep learning; Yolov8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Fire and smoke detection in IoT surveillance systems is of utmost importance for ensuring public safety and preventing property damage. While traditional methods have been used for fire detection, deep learning-based approaches have gained significant attention due to their ability to learn complex patterns and achieve high accuracy. This paper addresses the current research challenge of achieving high accuracy rates with deep learning-based fire detection methods while keeping computation costs low. This paper proposes a method based on the Yolov8 algorithm that effectively tackles this challenge through model generation using a custom dataset and the model&#39;s training, validation, and testing. The model&#39;s efficacy is succinctly assessed by the precision, recall and F1-curve metrics, with notable proficiency in fire detection, crucial for early warnings and prevention. Experimental results and performance evaluations show that our proposed method outperforms other state-of-the-art methods. This makes it a promising fire and smoke detection approach in IoT surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_9-A_Yolo_based_Approach_for_Fire_and_Smoke_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Processing of Large-Scale Medical Data in IoT: A Hybrid Hadoop-Spark Approach for Health Status Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150108</link>
        <id>10.14569/IJACSA.2024.0150108</id>
        <doi>10.14569/IJACSA.2024.0150108</doi>
        <lastModDate>2024-01-30T15:33:23.1900000+00:00</lastModDate>
        
        <creator>Yu Lina</creator>
        
        <creator>Su Wenlong</creator>
        
        <subject>Internet of Things; big data; hadoop; spark-based machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the realm of Internet of Things (IoT)-driven healthcare, diverse technologies, including wearable medical devices, mobile applications, and cloud-based health systems, generate substantial data streams, posing challenges in real-time operations, especially during emergencies. This study recommends a hybrid architecture utilizing Hadoop for real-time processing of extensive medical data within the IoT framework. By employing distributed machine learning models, the system analyzes health-related data streams ingested into Spark streams via Kafka threads, aiming to transform conventional machine learning methodologies within Spark&#39;s real-time processing, crafting scalable and efficient distributed approaches for predicting health statuses related to diabetes and heart disease while navigating the landscape of big data. Furthermore, the system provides real-time health status forecasts based on a multitude of input features, disseminates alert messages to caregivers, and stores this valuable information within a distributed database, which is instrumental in health data analysis and the production of flow reports. We compute a range of evaluation parameters to evaluate the proposed methods&#39; efficacy. This assessment phase encompasses measuring the performance of the Spark-based machine learning algorithm in a distributed parallel computing environment.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_8-Efficient_Processing_of_Large_Scale_Medical_Data_in_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Enhanced Linear Regression Models for Resource Usage Prediction in Dynamic Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150107</link>
        <id>10.14569/IJACSA.2024.0150107</id>
        <doi>10.14569/IJACSA.2024.0150107</doi>
        <lastModDate>2024-01-30T15:33:23.1730000+00:00</lastModDate>
        
        <creator>Xiaoxiao Ma</creator>
        
        <subject>Cloud computing; resource utilization; prediction; linear regression; metaheuristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA&#39;s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2024.0150107.retraction</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_7-Enhanced_Linear_Regression_Models_for_Resource_Usage_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Framework for Predicting Students&#39; Academic Performance in STEM Education using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150105</link>
        <id>10.14569/IJACSA.2024.0150105</id>
        <doi>10.14569/IJACSA.2024.0150105</doi>
        <lastModDate>2024-01-30T15:33:23.1600000+00:00</lastModDate>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Ainur Zhaxanova</creator>
        
        <creator>Malika Karatayeva</creator>
        
        <creator>Gulzhan Zholaushievna Niyazova</creator>
        
        <creator>Kamalbek Berkimbayev</creator>
        
        <creator>Assyl Tuimebayev</creator>
        
        <subject>Load balancing; machine learning; server; classification; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>In the continuously evolving educational landscape, the prediction of students&#39; academic performance in STEM (Science, Technology, Engineering, Mathematics) disciplines stands as a paramount concern for educational stakeholders aiming at enhancing learning methodologies and outcomes. This research paper delves into a sophisticated analysis, employing Machine Learning (ML) algorithms to predict students&#39; achievements, focusing explicitly on the multifaceted realm of STEM education. By harnessing a robust dataset drawn from diverse educational backgrounds, incorporating myriad factors such as historical academic data, socioeconomic demographics, and individual learning interactions, the study innovates by transcending traditional prediction parameters. The research meticulously evaluates several machine learning models, including Random Forest, Support Vector Machines, and Neural Networks, juxtaposing their efficacies through rigorous methodologies, and subsequently advocates for an ensemble approach to bolster prediction accuracy. Critical insights reveal that customized learning pathways, preemptive identification of at-risk candidates, and the nuanced understanding of contributing influencers are significantly enhanced through the ML framework, offering a transformative lens for academic strategies. Furthermore, the paper confronts the ethical quandaries and data privacy challenges emerging in the wake of advanced analytics in education, proposing a holistic guideline for stakeholders. This exploration not only underscores the potential of machine learning in revolutionizing predictive strategies in STEM education but also advocates for continuous model optimization, embracing a symbiotic integration between pedagogical methodologies and technological advancements, thereby redefining the trajectories of educational paradigms.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_5-Development_of_a_Framework_for_Predicting_Students_Academic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Recognition of Marine Creatures using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150106</link>
        <id>10.14569/IJACSA.2024.0150106</id>
        <doi>10.14569/IJACSA.2024.0150106</doi>
        <lastModDate>2024-01-30T15:33:23.1600000+00:00</lastModDate>
        
        <creator>Oudayrao Ittoo</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>Marine creature identification; machine learning; deep learning; MobileNetV1; Mauritius</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>The identification of marine species is a challenge for people all over the world, and the situation is no different for Mauritians. It is of utmost importance to create an automated system to correctly identify marine species. In the past, researchers have used machine learning to address the issue of marine creature recognition. The manual feature extraction part of machine learning complicates model creation, as features have to be extracted manually using an appropriate filter. In this work, we have used deep learning models to automate the feature extraction procedure. Currently, there is no publicly available dataset of marine creatures from the Indian Ocean. We created one of the biggest datasets used in this field, consisting of 51 different marine species collected from the Odysseo Oceanarium in Mauritius. The original dataset has a total of 5,709 images and is imbalanced. Image augmentations were performed to create an oversampled version of the dataset with 171 images per class, for a total of 8,721 images. The MobileNetV1 model trained on the oversampled dataset with a split ratio of 80% for training and 10% each for validation and testing was the best performing one in terms of classification accuracy and inference time. The model had the smallest inference time of 0.10 seconds per image and attained a classification accuracy of 99.89% and an F1 score of 99.89%.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_6-Automatic_Recognition_of_Marine_Creatures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ML-based Meta-Model Usability Evaluation of Mobile Medical Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150104</link>
        <id>10.14569/IJACSA.2024.0150104</id>
        <doi>10.14569/IJACSA.2024.0150104</doi>
        <lastModDate>2024-01-30T15:33:23.1430000+00:00</lastModDate>
        
        <creator>Khalid Hamid</creator>
        
        <creator>Muhammad Ibrar</creator>
        
        <creator>Amir Mohammad Delshadi</creator>
        
        <creator>Mubbashar Hussain</creator>
        
        <creator>Muhammad Waseem Iqbal</creator>
        
        <creator>Abdul Hameed</creator>
        
        <creator>Misbah Noor</creator>
        
        <subject>ANOVA; completeness; efficiency; effectiveness; perceptional usability; response surface methodology; actual usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Mobile medical applications (MMAPPs) are one of the recent trends in mobile applications (Apps). MMAPPs allow users to address health issues easily and effectively from their own location. However, the primary issue is effective usability for MMAPP users. Hardly any study analyzes usability issues with respect to the user&#39;s age, gender, device, or experience. The purpose of this study is to determine the extent of usability issues with respect to the attributes and experience of mobile medical users. The study uses a quantitative method, combining user experiments with perceptions gathered through a questionnaire completed by 677 participants performing six distinct tasks on the applications&#39; interface. A post-experiment survey was then completed with the same participants. Response surface methodology (RSM) is used for both the perceptional and the experimental designs; in each case, participants are divided into 13 runs or groups. Experimental groups are involved after recording participants&#39; perceptions of theoretical usability for the different attributes of the usability model through the questionnaire. The difference between users&#39; perception of usability (theoretical usability) and their actual performance is then recorded. Analysis of variance (ANOVA) shows that mobile medical applications need improvement, and it is also recommended to minimize the gap between the perception level of laymen and the actual performance of IT-literate users with respect to usability. The experiments measure the task usability of various mobile medical applications in terms of effectiveness, efficiency, completeness, learnability, memorability, easiness, complexity, number of errors, and satisfaction. Each design model also yields a mathematical expression for calculating usability from its attributes. The results of this study will help improve the usability of MMAPPs for users in their own context.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_4-ML_based_Meta_Model_Usability_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Anti-Phishing Technique for Social Media Users: A Multilayer Q-Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150103</link>
        <id>10.14569/IJACSA.2024.0150103</id>
        <doi>10.14569/IJACSA.2024.0150103</doi>
        <lastModDate>2024-01-30T15:33:23.1270000+00:00</lastModDate>
        
        <creator>Asif Irshad Khan</creator>
        
        <creator>Bhuvan Unhelkar</creator>
        
        <subject>Multilayer Q-learning; anti-phishing model; social media users; machine learning; optimization; URLs; logistic Bayesian LSTM model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>As social media usage grows in popularity, so does the risk of encountering malicious Uniform Resource Locators (URLs). Determining the authenticity of a URL can be highly challenging, primarily due to the sophisticated attack structures employed in phishing attempts. Phishing exploits the vulnerabilities of computer users, making it difficult to discern between genuine and fraudulent URLs. To address this issue, a self-learning AI framework is required to warn social media users of potentially dangerous links. While several anti-phishing techniques exist, including blacklists, heuristics, and machine learning-based techniques, there is still a need for improvement in terms of detection accuracy. Hence, this study proposes a novel approach to combat phishing attacks using artificial neural networks, with the main aim of creating and validating an anti-phishing tool for accurate detection. Initially, the URL data is collected, followed by preprocessing and then analysis for malicious activity using the Logistic Bayesian Long Short-Term Memory model (LB-LSTM). The observed malicious URL features are extracted using multilayer Q-learning with the CaspNet and swarm optimization models. Analysis of these features enables the identification of a malicious URL, which is then removed, and the social media user is warned. The proposed technique attained a detection accuracy of 94.33%, Area under the ROC Curve (AUC) of 98.71%, Mean Squared Error (MSE) of 5.67%, mean average precision of 88.67%, recall of 98.67%, and F1 score of 94.34%.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_3-An_Enhanced_Anti_Phishing_Technique_for_Social_Media_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Automatic Question Generation from Program Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150102</link>
        <id>10.14569/IJACSA.2024.0150102</id>
        <doi>10.14569/IJACSA.2024.0150102</doi>
        <lastModDate>2024-01-30T15:33:23.1130000+00:00</lastModDate>
        
        <creator>Jawad Alshboul</creator>
        
        <creator>Erika Baksa-Varga</creator>
        
        <subject>Question generation; e-learning; python question generator; semantic code conversion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>Generating questions is one of the most challenging tasks in the natural language processing discipline. With the significant emergence of electronic educational platforms like e-learning systems and the large scalability achieved with e-learning, there is an increased need to generate intelligent and deliberate questions to measure students&#39; understanding. Much work has been done in this field using different techniques; however, most approaches extract questions from text. This research aims to build a model that can conceptualize and generate questions about the Python programming language from program codes. Various models have been proposed that take text as input and generate questions; the challenge, however, is understanding the concepts in the code snippets and linking them to the lessons so that the model can generate relevant and reasonable questions for students. Therefore, the standards applied to measure the results are code complexity and question validity. The method used to achieve this goal combines the QuestionGenAi framework and an ontology based on semantic code conversion. The results produced are questions based on the provided code snippets. The evaluation criteria were code complexity, question validity, and question context. This work has great potential to improve e-learning platforms and the overall experience for both learners and instructors.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_2-A_Hybrid_Approach_for_Automatic_Question_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliability Evaluation Framework for Centralized Agricultural Internet of Things (Agri-IoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2024</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2024.0150101</link>
        <id>10.14569/IJACSA.2024.0150101</id>
        <doi>10.14569/IJACSA.2024.0150101</doi>
        <lastModDate>2024-01-30T15:33:23.0970000+00:00</lastModDate>
        
        <creator>Fatoumata Thiam</creator>
        
        <creator>Maissa Mbaye</creator>
        
        <creator>Maya Flores</creator>
        
        <creator>Alexander Wyglinski</creator>
        
        <subject>Energy; IoT; reliability; real-world testbed; optimization; Agri-IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 15(1), 2024</description>
        <description>This paper presents a holistic reliability evaluation framework for Agri-IoT based on a real-world testbed and mathematical modeling of network failure prediction. A testbed has been designed, implemented, and deployed in the real world on the experimental farm at Saint-Louis, Senegal, as a representative area of Sahel conditions. The collected data has been used for real-world reliability analysis and to feed mathematical modeling of network reliability based on energy and environmental condition data, using Kaplan-Meier and Nelson-Aalen estimators. Key factors affecting the network’s lifespan, such as network coverage and density, are explored, along with a comprehensive evaluation of energy consumption to understand the impact of node discharge rates. The survival analysis, employing Kaplan-Meier and Nelson-Aalen estimators, establishes network stability and the probability of node survival over time. The findings contribute to the understanding of Agri-IoT reliability in a real-world Sahel environment, offering practical insights for system optimization and environmental challenge mitigation in real-world deployments.</description>
        <description>http://thesai.org/Downloads/Volume15No1/Paper_1-Reliability_Evaluation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Data (QoD) in Internet of Things (IOT): An Overview, State-of-the-Art, Taxonomy and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412110</link>
        <id>10.14569/IJACSA.2023.01412110</id>
        <doi>10.14569/IJACSA.2023.01412110</doi>
        <lastModDate>2023-12-29T11:47:35.0900000+00:00</lastModDate>
        
        <creator>Jameel Shehu Yalli</creator>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <creator>Nazleeni Samiha Haron</creator>
        
        <creator>Mujeeb Ur Rehman Shaikh</creator>
        
        <creator>Nafeesa Yousuf Murad</creator>
        
        <creator>Abdullahi Lawal Bako</creator>
        
        <subject>Quality of Data (QoD); Internet of Things (IoT); RFID; WSN; Taxonomy; trustworthiness; outlier; anomaly; confusion matrix; QoD assurance technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Internet of Things (IoT) data is the main component for making intelligent decisions and enables other services to be explored and used. Data originates from smart things that can connect and share data extensively with other things in the IoT ecosystem. However, the level of intelligence obtained and the type of services provided all depend on whether the data is trusted or not. High-quality data is the most trusted; it can be used to extract meaningful insights from an event and to provide good services to humans. Therefore, decisions based on high-quality and trusted data can be good, whereas those based on low-quality or untrusted data are not only bad but can also have severe consequences. The term Quality of Data (QoD) is used to represent data trustworthiness and is used throughout this paper. To the best of our knowledge, this work is the first to coin the term QoD. The problems that hinder QoD are identified and discussed. One of them is outliers, a major feature of the data that degrades its overall quality. Several machine-learning techniques that detect outliers have been studied and presented, along with a few data-cleaning techniques. This paper aims to present the elements necessary to ensure QoD by presenting an overview of the IoT state-of-the-art. Then, data quality, data in IoT, and outliers are studied, and some quality assurance techniques that maintain data quality are presented. A comprehensive taxonomy is provided to represent the state of the art of data in IoT. Open issues and future directions are suggested at the end of the paper.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_110-Quality_of_Data_QoD_in_Internet_of_Things_IOT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Extreme Learning Machine with Exponential Squared Loss via DC Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412109</link>
        <id>10.14569/IJACSA.2023.01412109</id>
        <doi>10.14569/IJACSA.2023.01412109</doi>
        <lastModDate>2023-12-29T11:47:35.0770000+00:00</lastModDate>
        
        <creator>Kuaini Wang</creator>
        
        <creator>Xiaoxue Wang</creator>
        
        <creator>Weicheng Zhan</creator>
        
        <creator>Mingming Wang</creator>
        
        <creator>Jinde Cao</creator>
        
        <subject>Extreme learning machine; exponential squared loss; DC programming; DCA; robust regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Extreme learning machines (ELMs) have recently attracted considerable attention because of their fast learning speed, simple model structure, and good generalization ability. However, the classical ELM with a least squares loss function is prone to overfitting and lacks robustness when dealing with real-world datasets containing noise and outliers. In this paper, inspired by the maximum correntropy criterion, an exponential squared loss function is introduced, which is nonconvex and insensitive to noise and outliers. A robust ELM with exponential squared loss (RESELM) is presented to overcome the overfitting problem. The proposed model, being nonconvex, is difficult to optimize directly. Considering the superior performance of difference of convex functions (DC) programming in solving nonconvex problems, this paper optimizes the model by expressing the objective function as a DC function and employing the DC algorithm (DCA). To examine the effectiveness of the proposed algorithm in noisy environments, different levels of outliers are added to the training samples in the experiments. Experimental results on benchmark datasets with different outlier levels illustrate that the proposed RESELM achieves significant advantages in generalization performance and robustness, especially at higher outlier levels.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_109-Robust_Extreme_Learning_Machine_with_Exponential_Squared_Loss.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indonesian Twitter Emotion Recognition Model using Feature Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412108</link>
        <id>10.14569/IJACSA.2023.01412108</id>
        <doi>10.14569/IJACSA.2023.01412108</doi>
        <lastModDate>2023-12-29T11:47:35.0430000+00:00</lastModDate>
        
        <creator>Rhio Sutoyo</creator>
        
        <creator>Harco Leslie Hendric Spits Warnars</creator>
        
        <creator>Sani Muhamad Isa</creator>
        
        <creator>Widodo Budiharto</creator>
        
        <subject>Text classification; feature engineering; emotion recognition; Indonesian tweet; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Twitter is a social media platform with a large amount of unstructured natural language text. The content of Twitter can be used to capture human behavior via the emotions emphasized in tweets. In their tweets, people commonly express emotions to show their feelings. Hence, it is crucial to recognize a text’s underlying emotions to understand the message’s meaning. Feature engineering is the process of refining raw data into often-overlooked features. This research explores feature engineering techniques to find the best features for building an emotion recognition model on an Indonesian Twitter dataset. Two different text data representations were used, namely TF-IDF and word embedding. This research proposed 12 feature engineering configurations in TF-IDF by combining data stemming, data augmentation, and machine learning classifiers. Moreover, it proposed 27 feature engineering configurations in word embedding by combining three word-embedding models, three pooling techniques, and three machine learning classifiers. In total, there are 39 feature engineering combinations. The configuration with the best F1 score is TF-IDF with logistic regression on the stemmed and augmented dataset. The model achieved 65.27% accuracy and a 66.09% F1 score. The detailed characteristics of the top seven TF-IDF models also follow the same feature engineering configuration. Lastly, this work improves performance over previous research by 1.44% and 2.01% on the word2vec and fastText approaches, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_108-Indonesian_Twitter_Emotion_Recognition_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformative Learning Through Augmented Reality Empowered by Machine Learning for Primary School Pupils: A Real-Time Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412107</link>
        <id>10.14569/IJACSA.2023.01412107</id>
        <doi>10.14569/IJACSA.2023.01412107</doi>
        <lastModDate>2023-12-29T11:47:35.0130000+00:00</lastModDate>
        
        <creator>Abinaya M</creator>
        
        <creator>Vadivu G</creator>
        
        <subject>Artificial intelligence; augmented reality; adaptive learning; machine learning; transformative learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Academic performance and student engagement are constant challenges in the field of modern education. When it comes to engaging students, traditional teaching methods frequently fall short, so creative solutions are needed. The transformative potential of Augmented Reality (AR) technology as a cutting-edge teaching strategy is examined in this study. AR presents a dynamic, immersive learning environment that has the potential to completely transform conventional classrooms. By incorporating AR into the curriculum, our research transforms pedagogical paradigms, closes the engagement gap, and raises academic performance through an adaptive learning system. The study reveals the complex dynamics of AR-enhanced education through thorough analysis, powerful visualizations, and significant ANOVA results (p-value=0.03). It challenges accepted educational theories and provides insights into the complex effects on learning outcomes and student engagement. This study highlights the significance of AR in educational settings and promotes its incorporation as a transformative instrument that can establish dynamic and captivating learning environments, encourage critical thinking, creativity, and early field exploration through Artificial Intelligence (AI), and ultimately mould future leaders who can succeed.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_107-Transformative_Learning_Through_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Modeling of Landslide Susceptibility in Soft Soil Canal Regions: A Focus on Early Warning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412106</link>
        <id>10.14569/IJACSA.2023.01412106</id>
        <doi>10.14569/IJACSA.2023.01412106</doi>
        <lastModDate>2023-12-29T11:47:34.9800000+00:00</lastModDate>
        
        <creator>Dang Tram Anh</creator>
        
        <creator>Luong Vinh Quoc Danh</creator>
        
        <creator>Chi-Ngon Nguyen</creator>
        
        <subject>Landslide early warning; soft soil; Mekong Delta; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The Mekong Delta (MD) has suffered significant losses in land resources, economic damage, and human and property casualties due to recent landslides. A landslide early warning system is a valuable tool for the effective and timely detection of changes in the soil, making it possible to promptly determine solutions and minimize the damage caused by landslides in an area. In this study, we apply a machine learning approach based on the Long Short-Term Memory (LSTM) algorithm to experiment with early warning of landslide events on soft soil in the MD. Horizontal pressure, the changes in the inclination angles of the sensor pile in both the x and y directions caused by the sliding soil mass, and the warning levels determined from the deformation and displacement of the soil along the riverbank are considered candidate input factors for the model. Data from the established sensor system is used to train the model, yielding a training and testing dataset of 374,415 samples. The accuracy of the system’s detection and classification threshold is measured using the average F1 score derived from precision and recall values. The optimal prediction results are obtained from an observational window of 4 minutes and 30 seconds used to project roughly 2 hours into the future. The validation process yielded recall, precision, and F1-score values of 0.8232, with a remarkably low standard deviation of about 1%. The successful application of this research can help identify abnormal events leading to riverbank landslides due to loading, thereby creating the conditions for a reliable information system that gives managers the ability to suggest timely solutions to protect residents’ lives and property as well as infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_106-Predictive_Modeling_of_Landslide_Susceptibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning Framework for Efficient Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412105</link>
        <id>10.14569/IJACSA.2023.01412105</id>
        <doi>10.14569/IJACSA.2023.01412105</doi>
        <lastModDate>2023-12-29T11:47:34.9670000+00:00</lastModDate>
        
        <creator>Asish Karthikeya Gogineni</creator>
        
        <creator>S Kiran Sai Reddy</creator>
        
        <creator>Harika Kakarala</creator>
        
        <creator>Yaswanth Chowdary Gavini</creator>
        
        <creator>M Pavana Venkat</creator>
        
        <creator>Koduru Hajarathaiah</creator>
        
        <creator>Murali Krishna Enduri</creator>
        
        <subject>Sentiment analysis; LSTM; GRU; Convolutional Neural Networks (CNNs); BOW; TF-IDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In the era of microblogging and the rapid growth of online platforms, the volume of data generated by internet users across various domains has risen exponentially. In particular, the creation of digital and textual data is expanding significantly, because consumers respond to comments made on social media platforms about events or products based on their personal experiences. Sentiment analysis is usually used to accomplish this kind of classification on a large scale. It is described as the process of going through all user reviews and comments found in product reviews, events, or similar sources in order to analyze unstructured text comments. Our study examines how deep learning models such as LSTM, GRU, CNN, and hybrid models (LSTM+CNN, LSTM+GRU, GRU+CNN) capture complex sentiment patterns in text data. Additionally, we study integrating BOW and TF-IDF as complementary features to improve model predictive power. Combining CNNs with RNNs consistently improves outcomes, demonstrating the synergy between convolutional and recurrent neural network architectures in recognizing nuanced emotional subtleties. In addition, TF-IDF typically outperforms BOW in enhancing the sentiment analysis accuracy of deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_105-A_Hybrid_Deep_Learning_Framework_for_Efficient_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Assamese Word Recognition for CBIR: A Comparative Study of Ensemble Methods and Feature Extraction Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412104</link>
        <id>10.14569/IJACSA.2023.01412104</id>
        <doi>10.14569/IJACSA.2023.01412104</doi>
        <lastModDate>2023-12-29T11:47:34.9500000+00:00</lastModDate>
        
        <creator>Naiwrita Borah</creator>
        
        <creator>Udayan Baruah</creator>
        
        <creator>Barnali Dey</creator>
        
        <creator>Merin Thomas</creator>
        
        <creator>Sunanda Das</creator>
        
        <creator>Moumi Pandit</creator>
        
        <creator>Bijoyeta Roy</creator>
        
        <creator>Amrita Biswas</creator>
        
        <subject>Assamese literary works; automatic word recognition; comparative analysis; feature-based approaches; intelligent assistive technology; machine learning; word image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>This study conducts a thorough assessment of ensemble machine learning methods, specifically focusing on the identification of Assamese words. This task is crucial for improving Content-Based Image Retrieval systems and safeguarding the digital heritage of Assamese culture. We analyze the efficacy of different algorithms, such as CatBoost, XGBoost, Gradient Boosting, Random Forest, Bagging, AdaBoost, Stacking, and Histogram-Based Gradient Boosting, by thoroughly examining their performance in terms of accuracy, precision, recall, Kappa, F1-score, Matthews Correlation Coefficient, and AUC. The CatBoost algorithm stands out as the top performer, achieving an accuracy rate of 97.7%, precision rate of 95%, and recall rate of 96%. XGBoost is also acknowledged for its substantial effectiveness. This comparative analysis emphasizes CatBoost’s superiority in terms of precision and recall. Additionally, it underscores the strong ability of ensemble classifiers to enhance assistive technologies, promote social inclusivity, and seamlessly integrate the Assamese language into technological applications.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_104-Enhancing Assamese Word Recognition for CBIR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CLFM: Contrastive Learning and Filter-attention Mechanism for Joint Relation Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412102</link>
        <id>10.14569/IJACSA.2023.01412102</id>
        <doi>10.14569/IJACSA.2023.01412102</doi>
        <lastModDate>2023-12-29T11:47:34.9330000+00:00</lastModDate>
        
        <creator>Zhiyuan Wang</creator>
        
        <creator>Chuyuan Wei</creator>
        
        <creator>Jinzhe Li</creator>
        
        <creator>Lei Zhang</creator>
        
        <creator>Cheng Lv</creator>
        
        <subject>Natural language processing; relation extraction; attention mechanism; contrastive learning; multi-task learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Relation extraction is a fundamental task in natural language processing, which involves extracting structured information from textual data. Despite the success of joint methods in recent years, most of them still suffer from the propagation of cascade errors. Specifically, errors in the former step are accumulated into the final combined triples. Meanwhile, these methods also encounter another challenge related to insufficient interaction between subtasks. To alleviate these issues, this paper proposes a novel joint relation extraction model that integrates a contrastive learning approach and a filter-attention mechanism. The proposed model incorporates a potential relation decoder that utilizes contrastive learning to reduce error propagation and enhance the accuracy of relation classification, particularly in scenarios involving multiple relationships. It also includes a relation-specific sequence tagging decoder that employs a filter-attention mechanism to highlight more informative features, alongside an auxiliary matrix that amalgamates information related to entity pairs. Extensive experiments are conducted on two public datasets, and the results demonstrate that this approach outperforms other models with the same structure in recall and F1. Moreover, experiments show that both the contrastive learning strategy and the proposed filter-attention mechanism work well.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_102-CLFM_Contrastive_Learning_and_Filter_attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Airborne Disease Prediction: Integrating Deep Infomax and Self-Organizing Maps for Risk Factor Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412103</link>
        <id>10.14569/IJACSA.2023.01412103</id>
        <doi>10.14569/IJACSA.2023.01412103</doi>
        <lastModDate>2023-12-29T11:47:34.9330000+00:00</lastModDate>
        
        <creator>Bhakti S. Pimpale</creator>
        
        <creator>Anala A. Pandit</creator>
        
        <subject>Asthma; deepinfomax; self organizing map; risk factors; air pollution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Asthma poses a significant global public health concern, particularly in urban centers where environmental pollutants and variable weather patterns contribute to heightened prevalence and symptom exacerbation. The Deonar dumping ground, one of Mumbai’s largest landfills, releases a complex mix of particulate matter and hazardous gases, posing a serious threat to local respiratory health. Despite the urgency for comprehensive research integrating patient-specific data with localized weather and air quality metrics, such studies remain limited. This study addresses the critical research gap by investigating asthma risk factors near the Deonar dumping ground. Integrating detailed patient records with precise local weather and air quality measurements, our research aims to unravel the intricate relationship between environmental exposure and respiratory health outcomes. The findings provide crucial insights into the specific risk factors influencing asthma incidence and severity in this region, informing the development of targeted interventions and mitigation strategies. Employing a novel ensemble Deep Info Max - Self-Organizing Map (DIM-SOM) technique, our study compares its performance with various clustering algorithms, including SOM, K-Means, Bisecting K-Means, DBSCAN, and others. The novel ensemble DIM-SOM demonstrated superior performance, achieving a significantly higher Silhouette Score of 0.9234, a lower Davies-Bouldin Score of 0.1276, and a more favorable Calinski-Harabasz Score of 389723.6225 compared to other algorithms. These findings underscore the efficacy of the novel ensemble DIM-SOM approach in generating dense, well-separated, and meaningful clusters, emphasizing its potential to enhance clustering performance compared to traditional algorithms. The study further emphasizes the need for proactive mitigation measures and tailored healthcare interventions based on the identified environmental risk factors.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_103-Enhancing_Airborne_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learnable Local Similarity for Face Forgery Detection and Localization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412101</link>
        <id>10.14569/IJACSA.2023.01412101</id>
        <doi>10.14569/IJACSA.2023.01412101</doi>
        <lastModDate>2023-12-29T11:47:34.9200000+00:00</lastModDate>
        
        <creator>Lingyun Leng</creator>
        
        <creator>Jianwei Fei</creator>
        
        <creator>Yunshu Dai</creator>
        
        <subject>Face forgery detection; local similarity; forgery localization; generalized detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The emergence of many face forgery technologies has led to the widespread presence of forged faces on the Internet, causing a series of serious social impacts; thus, face forgery detection technology has attracted increasing attention. While many face forgery detection algorithms have demonstrated impressive performance against known manipulation methods, their efficacy tends to diminish severely when applied to unknown forgeries. Previous research commonly viewed face forgery detection as a binary classification problem, disregarding the crucial distinction between real and forged faces, thereby limiting the generalizability of detection algorithms. To overcome this issue, this paper proposes a novel face forgery detection method that utilizes a trainable metric to learn local similarity between local features of facial images, achieving more generalized detection. Moreover, it incorporates cross-level features to accurately locate forgery regions. Extensive experiments on FaceForensics++, Celeb-DF-v2, and DFD demonstrate that the effectiveness of the proposed method is comparable to state-of-the-art detection algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_101-Learnable_Local_Similarity_for_Face_Forgery_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition and Translation of Ancient South Arabian Musnad Inscriptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01412100</link>
        <id>10.14569/IJACSA.2023.01412100</id>
        <doi>10.14569/IJACSA.2023.01412100</doi>
        <lastModDate>2023-12-29T11:47:34.8730000+00:00</lastModDate>
        
        <creator>Afnan Altalhi</creator>
        
        <creator>Atheer Alwethinani</creator>
        
        <creator>Bashaer Alghamdi</creator>
        
        <creator>Jumanah Mutahhar</creator>
        
        <creator>Wojood Almatrafi</creator>
        
        <creator>Seereen Noorwali</creator>
        
        <subject>Musnad inscriptions; text recognition; deep learning; VGG16; ResNet-50; MobileNetV2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Inscriptions play an important role in preserving historical information. As such, conservation of these inscriptions provides valuable insights into the history and cultural heritage of the region. Musnad inscriptions are considered one of the earliest forms of writing from the Arabian Peninsula, preceding the modern Arabic font; however, most Musnad inscriptions remain unread and untranslated, signifying a substantial loss of historical information. In response, this paper represents a significant contribution to the field by proposing a successful approach to interpreting Musnad inscriptions. To do so, a dataset was prepared from the Saudi Arabian Ministry of Culture and subjected to preprocessing for optimal recognition, a step that entailed several experiments to enhance image quality and preparedness for recognition. The dataset was then trained and tested with 29 classes using three different convolutional neural network (CNN) architectures: Visual Geometry Group 16 (VGG16), Residual Network 50 (ResNet50) and MobileNetV2. Thereafter, the performance of each architecture was evaluated based on its accuracy in recognising Musnad inscriptions. The results demonstrate that VGG16 achieved the highest accuracy of 93.81%, followed by ResNet50 at 89.39% and MobileNetV2 at 80.02%.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_100-Recognition_and_Translation_of_Ancient_South_Arabian_Musnad.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Safety and Multifaceted Preferences to Optimise Cycling Routes for Cyclist-Centric Urban Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141299</link>
        <id>10.14569/IJACSA.2023.0141299</id>
        <doi>10.14569/IJACSA.2023.0141299</doi>
        <lastModDate>2023-12-29T11:47:34.8570000+00:00</lastModDate>
        
        <creator>Mohammed Alatiyyah</creator>
        
        <subject>Bike routing; dynamic vehicle routing inventory routing; approximate dynamic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>To optimise bicycle routes across multiple parameters, including safety, efficiency, and subtle rider preferences, this work explores the difficult domain of the Bike Routing Problem (BRP) using a sophisticated Simulated Annealing approach. In this innovative framework, a wide range of constraints and preferences are combined and carefully calibrated to create routes that skillfully meet the varied and changing needs of cyclists. Extensive testing on a dataset representing a range of rider preferences demonstrates the effectiveness of this novel approach, resulting in significant improvements in route selection. This research is a significant resource for urban planners and policymakers: its data-driven solutions and strategic recommendations will help them strengthen bicycle infrastructure, even beyond its immediate applicability in resolving the BRP.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_99-Enhancing_Safety_and_Multifaceted_Preferences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Classification of Airplane Accidents Severity using Feature Selection, Extraction and Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141298</link>
        <id>10.14569/IJACSA.2023.0141298</id>
        <doi>10.14569/IJACSA.2023.0141298</doi>
        <lastModDate>2023-12-29T11:47:34.8400000+00:00</lastModDate>
        
        <creator>Rachid KAIDI</creator>
        
        <creator>Mohammed AL ACHHAB</creator>
        
        <creator>Mohamed LAZAAR</creator>
        
        <creator>Hicham OMARA</creator>
        
        <subject>Airplane accident; severity; flights safety; machine learning; KNN; Random Forest (RF); Decision Tree (DT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Air travel is statistically the most secure mode of transportation. This is because flights require several conditions and precautions, as aviation accidents are most often fatal and have disastrous consequences. For this purpose, the main goal of this paper is to study the different levels of fatality of airplane accidents using machine learning models. The study relies on an airplane accident severity dataset to implement three machine learning models: KNN, Decision Tree, and Random Forest. The study began by applying two feature selection and extraction methods, PCA and RFE, in order to reduce dataset dimensionality, model complexity, and training time, then implemented the machine learning models on the dataset and measured their performance. Results show that KNN and Decision Tree demonstrate high performance, achieving 100% on both accuracy and F1-score metrics, while Random Forest achieves its best performance after application of PCA, reaching an accuracy of 97.83% and an F1-score of 97.82%.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_98-Improving_the_Classification_of_Airplane_Accidents_Severity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Recognition Models for Holy Quran Recitation Based on Modern Approaches and Tajweed Rules: A Comprehensive Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141297</link>
        <id>10.14569/IJACSA.2023.0141297</id>
        <doi>10.14569/IJACSA.2023.0141297</doi>
        <lastModDate>2023-12-29T11:47:34.8270000+00:00</lastModDate>
        
        <creator>Sumayya Al-Fadhli</creator>
        
        <creator>Hajar Al-Harbi</creator>
        
        <creator>Asma Cherif</creator>
        
        <subject>Speech recognition; acoustic models; language model; neural network; deep learning; quran recitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Speech is considered the most natural way to communicate with people. The purpose of speech recognition technology is to allow machines to recognize and understand human speech, enabling them to take action based on the spoken words. Speech recognition is especially useful in educational fields, as it can provide powerful automatic correction for language learning purposes. In the context of learning the Quran, it is essential for every Muslim to recite it correctly. Traditionally, this involves an expert qari who listens to the student’s recitation, identifies any mistakes, and provides appropriate corrections. While effective, this method is time-consuming. To address this challenge, apps that help students fix their recitation of the Holy Quran are becoming increasingly popular. However, these apps require a robust and error-free speech recognition model. While recent advancements in speech recognition have produced highly accurate results for written and spoken Arabic and non-Arabic speech recognition, the field of Holy Quran speech recognition is still in its early stages. Therefore, this paper aims to provide a comprehensive literature review of the existing research in the field of Holy Quran speech recognition. Its goal is to identify the limitations of current works, determine future research directions, and highlight important research in the fields of spoken and written languages.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_97-Speech_Recognition_Models_for_Holy_Quran_Recitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Memetic Algorithm to Solve the Two-Echelon Collaborative Multi-Centre Multi-Periodic Vehicle Routing Problem with Specific Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141296</link>
        <id>10.14569/IJACSA.2023.0141296</id>
        <doi>10.14569/IJACSA.2023.0141296</doi>
        <lastModDate>2023-12-29T11:47:34.8100000+00:00</lastModDate>
        
        <creator>Camelia Snoussi</creator>
        
        <creator>Abdellah El Fallahi</creator>
        
        <creator>Sarir Hicham</creator>
        
        <subject>Collaborative vehicle routing problem; two-echelon networks; memetic algorithm; Variable neighbourhood search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Collaboration between distribution companies has gained great interest in recent years due to its benefits in reducing delivery costs. In this work, we study the centralized two-echelon collaborative multi-centre multi-periodic vehicle routing problem with specific constraints, in which each distribution centre conserves its VIP customers and each partner keeps its delivery schedule unchanged. The problem is modelled as a MILP, and a hybrid algorithm is proposed to solve it. This algorithm combines a multi-population memetic algorithm (MPMA) and a variable neighbourhood search algorithm that integrates a tabu search list (VNS-T). The results obtained are compared with those obtained by the CPLEX solver and the best known solutions of the multi-depot vehicle routing problem (MDVRP).</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_96-A_Memetic_Algorithm_to_Solve_the_Two_Echelon_Collaborative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Learning Control for High Relative Degree Discrete-Time Systems with Random Initial Shifts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141295</link>
        <id>10.14569/IJACSA.2023.0141295</id>
        <doi>10.14569/IJACSA.2023.0141295</doi>
        <lastModDate>2023-12-29T11:47:34.7930000+00:00</lastModDate>
        
        <creator>Dongjie Chen</creator>
        
        <creator>Tiantian Lu</creator>
        
        <creator>Zhenjie Yin</creator>
        
        <subject>Relative degree; iterative learning control; random initial shifts; difference equation; discrete-time system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In this paper, an iterative learning control (ILC) strategy under a compression mapping framework is presented for high relative degree discrete-time systems with random initial shifts. Firstly, utilizing the high relative degree of the system and the difference term, a control law is designed and a p-order non-homogeneous linear difference equation is established. The appropriate control gain is selected according to the characteristics of the solution of the difference equation and the initial shifts, so as to ensure that the high relative degree discrete-time system can reach a steady-state deviation output at a fixed time. Subsequently, a PD-type control law is employed to correct the fixed deviation of the system. Theoretical analysis indicates that this ILC strategy can ensure that high relative degree systems achieve accurate tracking after the predefined time. Finally, simulation experiments are conducted on a linear discrete-time Multiple-Input Multiple-Output (MIMO) system with relative degree 1 and a Multiple-Input Single-Output (MISO) system with relative degree 2, respectively, and the results verify the effectiveness of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_95-Iterative_Learning_Control_for_High_Relative_Degree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Banking Feedback and News Data using Synonyms and Antonyms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141294</link>
        <id>10.14569/IJACSA.2023.0141294</id>
        <doi>10.14569/IJACSA.2023.0141294</doi>
        <lastModDate>2023-12-29T11:47:34.7770000+00:00</lastModDate>
        
        <creator>Aniruddha Mohanty</creator>
        
        <creator>Ravindranath C. Cherukuri</creator>
        
        <subject>ELECTRA; Synonyms and antonyms; sentiment analysis; datasets; sentiment score; control distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Sentiment analysis is crucial for deciphering customers’ enthusiasm, frustration, and the market mood within the banking sector. This importance arises from financial data’s specialized and sensitive nature, enabling a deeper understanding of customer sentiments. In today’s digital and social marketing landscape within the banking and financial sector, sentiment analysis is significant in shaping customer insights, product development, brand reputation management, risk management, customer service improvement, fraud detection, market research, compliance regulations, etc. This paper introduces a novel approach to sentiment analysis in the banking sector, emphasizing integrating diverse text features to enable dynamic analysis. This proposed approach aims to assess the sentiment score of distinct words used within a document and classify them as positive, negative, or neutral. After rephrasing sentences using synonyms and antonyms of unique words, the system calculates sentence similarity using a distance control mechanism. Then, the system updates the dataset with the positive, negative, and neutral labels. Ultimately, the ELECTRA model utilizes the self-trained sentiment-scored data dictionary, and the newly created dataset is processed using the SoftMax activation function in combination with a customized ADAM optimizer. The approach’s effectiveness is confirmed through the analysis of post-bank customer feedback and the phrase bank dataset, yielding accuracy scores of 92.15% and 93.47%, respectively. This study stands out due to its unique approach, which centers on evaluating customer satisfaction and market sentiment by utilizing sentiment scores of words and assessing sentence similarities.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_94-Sentiment_Analysis_on_Banking_Feedback_and_News_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient IoT Security: Weighted Voting for BASHLITE and Mirai Attack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141293</link>
        <id>10.14569/IJACSA.2023.0141293</id>
        <doi>10.14569/IJACSA.2023.0141293</doi>
        <lastModDate>2023-12-29T11:47:34.7630000+00:00</lastModDate>
        
        <creator>Marwan Abu-Zanona</creator>
        
        <subject>Internet of Things; IoT security; BASHLITE; Mirai attacks; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The increasing number of devices in the Internet of Things (IoT) has exposed various vulnerabilities, such as BASHLITE and Mirai attacks, making it easier for cyber threats to emerge. Due to these vulnerabilities, developing innovative detection and mitigation strategies is essential. Our proposed solution is an ensemble-based weighted voting model that combines different classifiers, including Random Forest, eXtreme Gradient Boosting (XGBoost), Gradient Boosting, K-nearest neighbor (KNN), Multilayer Perceptron (MLP), and Adaptive Boosting (AdaBoost), using artificial intelligence and machine learning. We evaluated our model on the N-BaIoT dataset, a benchmark in this domain. Our results show that the weighted voting approach has exceptional accuracy, precision, recall, and F1-Score. This highlights the effectiveness of our model in classifying various attack instances within the IoT security context. Our approach performs better than other state-of-the-art methods, achieving a remarkable accuracy of 99.9955% in detecting and preventing BASHLITE and Mirai cyber-attacks on IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_93-Efficient_IoT_Security_Weighted_Voting_for_BASHLITE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The PSR-Transformer Nexus: A Deep Dive into Stock Time Series Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141292</link>
        <id>10.14569/IJACSA.2023.0141292</id>
        <doi>10.14569/IJACSA.2023.0141292</doi>
        <lastModDate>2023-12-29T11:47:34.7630000+00:00</lastModDate>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>Jan Platos</creator>
        
        <subject>Stock market forecasting; deep learning; chaos theory; phase space reconstruction; transformer neural networks; time series analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Accurate stock market forecasting has remained an elusive endeavor due to the inherent complexity of financial system dynamics. While deep neural networks have shown initial promise, robustness concerns around long-term dependencies persist. This research pioneers a synergistic fusion of nonlinear time series analysis and algorithmic advances in representation learning to enhance predictive modeling. Phase space reconstruction (PSR) provides a principled way to reconstruct multidimensional phase spaces from single-variable measurements, elucidating dynamical evolution. Transformer networks with self-attention have recently propelled state-of-the-art results in sequence modeling tasks. This paper introduces PSR-Transformer Networks specifically tailored for stock forecasting by feeding PSR-derived constructs to transformer encoders. Extensive empirical evaluation on 20 years of historical equities data demonstrates significant accuracy improvements, along with enhanced robustness, compared with LSTM, CNN-LSTM, and Transformer models. The proposed interdisciplinary fusion establishes new performance benchmarks in modeling financial time series, validating synergies between domain-specific reconstruction and cutting-edge deep learning.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_92-The_PSR_Transformer_Nexus_A_Deep_Dive_into_Stock_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Deepfakes by Analyzing Temporal Feature Inconsistency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141291</link>
        <id>10.14569/IJACSA.2023.0141291</id>
        <doi>10.14569/IJACSA.2023.0141291</doi>
        <lastModDate>2023-12-29T11:47:34.7470000+00:00</lastModDate>
        
        <creator>Junlin Gu</creator>
        
        <creator>Yihan Xu</creator>
        
        <creator>Juan Sun</creator>
        
        <creator>Weiwei Liu</creator>
        
        <subject>Face forgery detection; Convolutional Neural Network; Long Short-Term Memory Network; time consistency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In recent years, the rapid advancement of image generation technology has facilitated the creation of counterfeit images and videos, posing significant challenges for content authenticity verification. Malefactors can easily extract videos from social networks and generate their own deceptive renditions using state-of-the-art techniques. The latest Deepfake face forgery videos have reached an unprecedented level of sophistication, making it exceptionally difficult to discern signs of manipulation. While several methods have been proposed for detecting fraudulent media, they often target specific aspects, and as new attack methods emerge, these approaches tend to become obsolete. This paper presents a novel detection approach that combines Convolutional Neural Networks (CNN) and Long Short-Term Memory Networks (LSTM). Initially, the CNN is employed to extract image features from each frame of the input facial video, capturing subtle alterations and irregularities in manipulated content. Subsequently, the extracted feature sequence is used to train the LSTM network, mimicking the temporal consistency of human visual perception and enhancing the effectiveness of counterfeit video detection. To validate this methodology, a comprehensive evaluation is conducted using the FaceForensics++ dataset, affirming its proficiency in identifying Deepfake counterfeit videos.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_91-Exploiting_Deepfakes_by_Analyzing_Temporal_Feature_Inconsistency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Planning and Expansion of the Transmission Network in the Presence of Wind Power Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141289</link>
        <id>10.14569/IJACSA.2023.0141289</id>
        <doi>10.14569/IJACSA.2023.0141289</doi>
        <lastModDate>2023-12-29T11:47:34.7170000+00:00</lastModDate>
        
        <creator>Hui Sun</creator>
        
        <subject>Wind farms; Transmission Expansion Planning (TEP); multi-objective optimization model; Shuffled Frog Leaping Algorithm (SFLA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The proliferation of renewable energy sources, particularly wind farms, is rapidly gaining momentum owing to their numerous benefits. Consequently, it is imperative to account for the impact of wind farms on transmission expansion planning (TEP), which is a crucial aspect of power system planning. This article presents a multi-objective optimization model that utilizes DC load flow to address the TEP challenge while also incorporating wind farm uncertainties into the model. The present study aims to optimize the expansion and planning of the TEP in the power system by considering investment and maintenance costs as objective functions. To achieve this, a multi-objective approach utilizing the shuffled frog leaping algorithm (SFLA) is proposed and implemented. The proposed objectives are simulated on the RTS-IEEE 24-bus test network. The results obtained from the proposed algorithm are compared with those of the Genetic Algorithm (GA) to assess and validate the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_89-Planning_and_Expansion_of_the_Transmission_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LPDA: Cross-Project Software Defect Prediction Approach via Locality Preserving and Distribution Alignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141290</link>
        <id>10.14569/IJACSA.2023.0141290</id>
        <doi>10.14569/IJACSA.2023.0141290</doi>
        <lastModDate>2023-12-29T11:47:34.7170000+00:00</lastModDate>
        
        <creator>Jin Xian</creator>
        
        <creator>Jinglei Li</creator>
        
        <creator>Quanyi Zou</creator>
        
        <creator>Yunting Xian</creator>
        
        <subject>Cross Project Defect Prediction; discriminative distribution alignment; local preserving; domain adaption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Cross-Project Defect Prediction (CPDP) based on domain adaptation aims to achieve defect prediction in an unlabeled target software project by borrowing defect knowledge extracted from well-annotated source software projects. Most existing CPDP approaches enhance transferability between projects but struggle with misalignments due to limited exploration of class-specific features and an inability to preserve original local relationships in transformed features. To tackle these challenges, the article introduces a novel CPDP approach called Locality Preserving and Distribution Alignment (LPDA), which combines transferability and discriminability for CPDP tasks. It uses a locality-preserving projection to maintain module consistency, and distribution alignment that includes transferable and discriminant distribution alignment: the former narrows the distributions of the source and target projects, while the latter increases the discrepancy between different classes across projects. The effectiveness of LPDA was tested through 118 cross-project prediction tasks involving 22 software projects from four distinct repositories. The results showed that LPDA outperforms baseline CPDP methods by efficiently learning representations that integrate transferability and discriminability while preserving local geometry to optimize distances within and between categories.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_90-LPDA_Cross_Project_Software_Defect_Prediction_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Cognitive Decision-Making Algorithm in Cross-Border e-Commerce Digital Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141288</link>
        <id>10.14569/IJACSA.2023.0141288</id>
        <doi>10.14569/IJACSA.2023.0141288</doi>
        <lastModDate>2023-12-29T11:47:34.7000000+00:00</lastModDate>
        
        <creator>Xuehui Wang</creator>
        
        <subject>Cross-border e-commerce; decision-making; pricing issues; optimization algorithms; ABO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Extensive global research aims to improve digital marketing profits through pricing decision-making and optimization. A dual word-of-mouth diffusion pricing model is developed for cross-border e-commerce, addressing word-of-mouth accumulation and information diffusion effects. The traditional artificial bee colony algorithm is optimized with security domain search and information diffusion profiles, enhancing its global search capability. Performance tests reveal that word-of-mouth scale significantly influences cross-border e-commerce profits: consumer conversions and optimal profits increase with the scale coefficient. The proposed algorithm demonstrates high efficiency and fast convergence, outperforming comparison algorithms in both iteration count and benefit on the clothing pricing problem. The comprehensive imitation effect is -0.14, and the word-of-mouth scale effect is 1.34. Pre-sale and sales prices for clothing are set at 347.49 and 641.393, respectively. Similarly, in pricing cross-border e-commerce electronic products, the algorithm achieves optimal profits after 230 iterations, surpassing other algorithms. Overall, the proposed model exhibits superior computational performance in cross-border e-commerce pricing decision-making compared to conventional approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_88-The_Application_of_Cognitive_Decision_Making_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presentation of a New Method for Intrusion Detection by using Deep Learning in Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141287</link>
        <id>10.14569/IJACSA.2023.0141287</id>
        <doi>10.14569/IJACSA.2023.0141287</doi>
        <lastModDate>2023-12-29T11:47:34.6830000+00:00</lastModDate>
        
        <creator>Hui MA</creator>
        
        <subject>Attack; security on cyberspace; classification; intrusion detection; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Intrusion detection in cyberspace is an important research field within computer network security. Intrusion detection systems are designed and implemented to accurately categorize virtual users, hackers, and network intruders based on their normal or abnormal behavior. Due to the significant increase in the volume of data exchanged in cyberspace, identifying and reducing inappropriate data characteristics plays a significant role in increasing the accuracy and speed of intrusion detection systems. The most advanced intrusion detection systems recognize an attack only by inspecting its full data, meaning the attack can be detected only after it has been executed on the target computer. In this paper, an end-to-end early intrusion detection system is presented to prevent network attacks before they cause further damage. The proposed method uses a deep neural network classifier to detect attacks; the network is trained in a supervised manner to extract relevant features from raw network traffic data. The approach has been evaluated experimentally on the NSL-KDD dataset. Extensive experiments show that the presented approach outperforms state-of-the-art approaches in terms of accuracy, detection rate, and false positive rate, and also improves the detection rate for minority classes.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_87-Presentation_of_a_New_Method_for_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supply Chain Disturbance Management Scheduling Model Based on HPSO Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141286</link>
        <id>10.14569/IJACSA.2023.0141286</id>
        <doi>10.14569/IJACSA.2023.0141286</doi>
        <lastModDate>2023-12-29T11:47:34.6700000+00:00</lastModDate>
        
        <creator>Ling Wang</creator>
        
        <subject>HPSO algorithm; disturbance management; supply chain; system dynamics; anti-production behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The continuous expansion of business has driven enterprises from vertical integration toward horizontal integration and made supply chain systems increasingly interlinked, but counterproductive behavior factors and frequent disruption events make supply chain scheduling difficult, which affects the development of enterprises. To address these problems, the study analyzes the factors influencing counterproductive behavior based on system dynamics, constructs a supply chain disruption management scheduling model on this basis, and solves the model using a Hybrid Particle Swarm Optimization (HPSO) algorithm. The findings indicate that, under condition A, the number of non-inferior solutions, the uniformity of their distribution, their dominance ratio, their average and maximum distances to the optimal Pareto front, their dispersion, and their coverage for the hybrid particle swarm algorithm are 12.3, 5.283, 0.264, 0.611, 4.474, 4.627, and 601.300, respectively. Under condition B, the corresponding values are 12.3, 5.283, 0.264, 0.611, and 4.474. In summary, the proposed algorithm has excellent performance and can effectively reduce the impact of disruption events, thereby improving supply chain disturbance management and scheduling and promoting the sustainable development of this field.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_86-Supply_Chain_Disturbance_Management_Scheduling_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recurrence Prediction and Risk Classification of COPD Patients Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141285</link>
        <id>10.14569/IJACSA.2023.0141285</id>
        <doi>10.14569/IJACSA.2023.0141285</doi>
        <lastModDate>2023-12-29T11:47:34.6700000+00:00</lastModDate>
        
        <creator>Xin Qi</creator>
        
        <creator>Hong Chen</creator>
        
        <subject>Machine learning; COPD; BiLSTM; XGBoost; k-means; recurrence; risk classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In response to the frequent recurrence and readmission of patients with chronic obstructive pulmonary disease (COPD), a machine learning-based recurrence risk prediction and risk classification model for COPD patients is constructed. Approach: The model first utilizes an optimized long short-term memory network to recognize named entities in patient electronic medical records and extract entity features. Then, XGBoost is used to predict the probability of patient relapse and readmission and to classify its risk. Results: The results confirm that the optimized bidirectional long short-term memory network has the best performance in electronic medical record named entity recognition, with an accuracy of 84.36%. XGBoost achieves the highest accuracy on both the training and testing sets, at 0.8827 and 0.8514, respectively, giving it the best predictive ability and effectiveness. By using k-means for stratification, the workload of manual evaluation was reduced by 91%, and the overall simulation accuracy of the model reached 97.3% and 96.4%. Conclusions: These results indicate that the method can be used to balance risk, cost, and resources for high-risk patients.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_85-Recurrence_Prediction_and_Risk_Classification_of_COPD_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Oral English Teaching System Based on Speech Recognition Technology and Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141284</link>
        <id>10.14569/IJACSA.2023.0141284</id>
        <doi>10.14569/IJACSA.2023.0141284</doi>
        <lastModDate>2023-12-29T11:47:34.6530000+00:00</lastModDate>
        
        <creator>Na He</creator>
        
        <creator>Weihua Liu</creator>
        
        <subject>Deep neural network; Markov model; voice design technology; Viterbi algorithm; oral English teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>With the development of computer technology, computer-aided instruction is being used more and more widely in education. Based on speech recognition technology and a deep neural network, this paper proposes an online oral English teaching system. First, speech recognition technology is introduced and its feature extraction is elaborated in detail. Then, the three basic problems of hidden Markov model (HMM)-based speech recognition and the three basic algorithms for solving them are discussed. The application of HMM technology in speech recognition systems is studied, and several algorithms are optimized. Logarithmic processing of the Viterbi algorithm greatly reduces the amount of computation compared with the traditional algorithm and solves the overflow problem during operation. By combining a deep network with the HMM, continuous speech signal modeling is realized. The limitations of the DNN-HMM model are also analyzed, namely its difficulty in modeling long-term dependencies in speech signals and in training on complex problems. Based on Kaldi, comparative training experiments on the monophone model, the triphone model, and the addition of feature transformation technology are carried out to continuously improve model performance. Finally, simulation experiments show that the optimized DNN-HMM hybrid model proposed in this paper achieves the highest recognition rate, 97.5%, followed by the HMM model at 95.4%, while the PNN model has the lowest recognition rate at 90.1%.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_84-Network_Oral_English_Teaching_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining Unsupervised and Supervised Learning to Predict Poverty Households in Sakon Nakhon, Thailand</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141283</link>
        <id>10.14569/IJACSA.2023.0141283</id>
        <doi>10.14569/IJACSA.2023.0141283</doi>
        <lastModDate>2023-12-29T11:47:34.6370000+00:00</lastModDate>
        
        <creator>Sutisa Songleknok</creator>
        
        <creator>Suthasinee Kuptabut</creator>
        
        <subject>K-prototype; decision tree; feature selection; Sakon Nakhon poverty households; unsupervised learning; supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Poverty is a problem that various government agencies are attempting to address accurately and precisely. The solution relies on data and on analysis of the features affecting poverty. Machine learning is used to analyze poverty features encompassing five livelihood capitals (human, physical, economic, natural, and social) to understand the household context and environment. The dataset contains 1,598 poverty households from Kut Bak district, Sakon Nakhon, Thailand. K-prototype was used to group the categorical and numerical data into four clusters, labelled Destitute, Extreme poor, Moderate poor, and Vulnerable non-poor. The performance of a Decision tree classifier combined with feature selection algorithms, including MI, ReliefF, RFE, and SFS, is compared. SFS performs best, with F-measure, precision, and recall of 74.6%, 74.8%, and 74.7%, respectively. The result is a set of decision tree rules for predicting the poverty level of households, enabling the establishment of guidelines for resolving household issues and addressing broader problems within the areas.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_83-Combining_Unsupervised_and_Supervised_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Efficient CNN Acceleration Through Mixed Precision Quantization: A Comprehensive Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141282</link>
        <id>10.14569/IJACSA.2023.0141282</id>
        <doi>10.14569/IJACSA.2023.0141282</doi>
        <lastModDate>2023-12-29T11:47:34.6230000+00:00</lastModDate>
        
        <creator>Yizhi He</creator>
        
        <creator>Wenlong Liu</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Zhao Li</creator>
        
        <creator>Shaoshuang Zhang</creator>
        
        <creator>Hussain Bux Amur</creator>
        
        <subject>Convolutional Neural Networks (CNNs); edge computing technologies; Field Programmable Gate Array (FPGA) accelerator; mixed precision quantization; loss variation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>To overcome challenges associated with deploying Convolutional Neural Networks (CNNs) on edge computing devices with limited memory and computing resources, we propose a mixed-precision CNN calculation method on a Field Programmable Gate Array (FPGA). This approach involves a collaborative design encompassing both software and hardware aspects. Initially, we devised a CNN quantization method tailored for the fixed-point operation characteristics of FPGA, addressing the computational challenges posed by floating-point parameters. We introduce a bit-width strategy search algorithm that assigns bit-widths to each layer based on CNN loss variation induced by quantization. Through retraining, this strategy mitigates the degradation in CNN inference accuracy. For FPGA acceleration design, we employ a flow processing architecture with multiple Processing Elements (PEs) to support mixed-precision CNNs. Our approach incorporates a folding design method to implement shared PEs between layers, significantly reducing FPGA resource usage. Furthermore, we designed a data reading method, incorporating a register set buffer between memory and processing elements to alleviate issues related to mismatched data reading and computing speeds. Our implementation of the mixed-precision ResNet20 model on the Kintex-7 Eco R2 development board achieves an inference accuracy of 91.68% and a computing speed 4.27 times faster than the Central Processing Unit (CPU) on the CIFAR-10 dataset, with an accuracy drop of only 1.21%. Compared to a unified 16-bit FPGA accelerator design method, our proposed approach demonstrates an 89-fold increase in computing speed while maintaining similar accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_82-Research_on_Efficient_CNN_Acceleration_Through_Mixed_Precision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Techniques for Recognizing Emotions: A Unified Approach using Facial Patterns, Speech Attributes, and Multimedia Descriptors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141281</link>
        <id>10.14569/IJACSA.2023.0141281</id>
        <doi>10.14569/IJACSA.2023.0141281</doi>
        <lastModDate>2023-12-29T11:47:34.6070000+00:00</lastModDate>
        
        <creator>Kummari Ramyasree</creator>
        
        <creator>Chennupati Sumanth Kumar</creator>
        
        <subject>Local Difference Pattern (LDP); Mel-Frequency Cepstral Coefficients (MFCC); Long-Term Average Spectrum (LTAS); Self-Similarity Distance Matrix (SSDM); Support Vector Machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Because of their inability to efficiently encode distinguishing edges, local appearance-based texture descriptors generally show limited performance in facial expression analysis. Existing technology has drawbacks such as susceptibility to edge-related disturbance in face photos and reliance on preset feature sets that may fail to adequately represent the subtleties of emotions in a variety of contexts. To overcome the difficulties of facial expression identification and emotion categorization, this study presents an innovative structure that combines three different information sets: new multimedia descriptors, prosodic features, and the Local Difference Pattern (LDP). The principal motivation is the presence of noise-induced warped and weak edges in face pictures, which lead to inaccurate assessment of expression characteristics. Unlike standard local descriptors, the LDP approach improves the robustness of facial feature extraction by identifying and encoding only the stronger edge responses. Robinson and Kirsch compass masks are used for edge recognition, and the LDP formulation encodes each pixel with seven bits of information to reduce code repetition. The acoustic feature set comprises the Long-Term Average Spectrum (LTAS) obtained from speech signals, Mel-Frequency Cepstral Coefficients (MFCC), and formant features. The Fisher criterion is used for dimensionality reduction and feature selection. Emotion prediction is achieved by classifying two distinct conditions using Support Vector Machine (SVM) and Decision Tree (DT) algorithms and combining the obtained results. The research also presents unique audio-visual descriptors that prioritize key structure selection and face positioning for audio-visual input. A concise depiction of expression is offered by the proposed Self-Similarity Distance Matrix (SSDM), which uses facial highlight points to estimate both temporal and spatial correlations. The acoustic signal is characterized by formant frequency ranges, energy measures, probabilistic properties, and spectral aspects. The emotion recognition algorithm attains a 98% accuracy rate. Validation studies on the SAVEE and RML datasets show major improvements over state-of-the-art techniques, highlighting the usefulness of the proposed model in identifying and categorizing emotions and facial movements in a variety of contexts. The research is implemented using the Python tool.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_81-Advanced_Techniques_for_Recognizing_Emotions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Graph-Cut Guided ROI Segmentation Algorithm with Lightweight Deep Learning Framework for Cervical Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141280</link>
        <id>10.14569/IJACSA.2023.0141280</id>
        <doi>10.14569/IJACSA.2023.0141280</doi>
        <lastModDate>2023-12-29T11:47:34.5900000+00:00</lastModDate>
        
        <creator>Shiny T L</creator>
        
        <creator>Kumar Parasuraman</creator>
        
        <subject>Cervical cancer classification; deep learning; lightweight deep learning framework; graph-cut guided ROI segmentation algorithm; nuclei region isolation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Cervical cancer classification has witnessed numerous advancements through deep learning methods; however, existing approaches often rely on multiple models for segmentation and classification, leading to heightened computational demands and prolonged training times. In this research, a lightweight deep learning framework for cervical cancer classification is presented. The framework comprises three primary components: a Graph-Cut Guided Region of Interest (ROI) segmentation algorithm, a streamlined DenseNet architecture, and a Multi-Class Logistic Regression classifier. The Graph-Cut Guided ROI segmentation algorithm accurately isolates nuclei regions within multicellular Pap smear images; it is a lightweight algorithm that achieves high segmentation accuracy with minimal computational overhead. The streamlined DenseNet architecture efficiently extracts salient features from the segmented images and is specifically designed to reduce feature redundancy and eliminate incongruous feature maps. The Multi-Class Logistic Regression classifier assigns the segmented images to different cell types and stages of cervical cancer. Experimental results show the proposed method achieves high classification accuracy with minimal training time. The framework was trained and evaluated on a dataset of 963 Pap smear images, achieving 98% precision, recall, and F1-score for cell-type classification of multi-cell Pap smear images, with very low training loss. The average training time was 21 minutes across different sets of training images, and the average testing time was 0.50 seconds across different sizes of testing images, which is much lower than existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_80-A_Graph_Cut_Guided_ROI_Segmentation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Human-Machine Interaction Method Based on Natural Language Processing and Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141278</link>
        <id>10.14569/IJACSA.2023.0141278</id>
        <doi>10.14569/IJACSA.2023.0141278</doi>
        <lastModDate>2023-12-29T11:47:34.5770000+00:00</lastModDate>
        
        <creator>Shuli Wang</creator>
        
        <creator>Fei Long</creator>
        
        <subject>Human-computer interaction; speech recognition; natural language processing; lexical analysis; syntactic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>With the rapid development of artificial intelligence technology, robots have gradually entered people&#39;s lives and work, and robot human-machine interaction systems for image recognition have been widely used. However, there are still many problems with robot human-machine interaction methods that utilize natural language processing and speech recognition. Therefore, this study proposes a new robot human-machine interaction method that combines a structured perceptron lexical analysis model and a transfer dependency syntactic analysis model on the basis of existing interaction systems, with the aim of further exploring language-based human-machine interaction systems and improving interaction performance. Experiments show that the testing accuracy of the structured perceptron model reaches 95%, the recall rate reaches 81%, and the F1 value reaches 82%. The transfer dependency syntax analysis model achieves a data analysis speed of up to 750K/s. In simulation testing, the new robot human-machine interaction method achieves an accuracy of 92%, higher than other existing methods, and exhibits excellent robustness and response sensitivity. In summary, the proposed method provides a theoretical and practical basis for improving robot interaction capabilities and for the further development of human-machine collaboration.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_78-Robot_Human_Machine_Interaction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating of Deep Learning-based Approaches for Anomaly Detection in IoT Surveillance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141279</link>
        <id>10.14569/IJACSA.2023.0141279</id>
        <doi>10.14569/IJACSA.2023.0141279</doi>
        <lastModDate>2023-12-29T11:47:34.5770000+00:00</lastModDate>
        
        <creator>Jianchang HUANG</creator>
        
        <creator>Yakun CAI</creator>
        
        <creator>Tingting SUN</creator>
        
        <subject>Internet of Things; surveillance systems; anomaly detection; deep learning; video analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Anomaly detection plays a crucial role in ensuring the security and integrity of Internet of Things (IoT) surveillance systems. Nowadays, deep learning methods have gained significant popularity in anomaly detection because of their ability to learn and extract intricate features from complex data automatically. However, despite the advancements in deep learning-based anomaly detection, several limitations and research gaps exist. These include the need for improving the interpretability of deep learning models, addressing the challenges of limited training data, handling concept drift in evolving IoT environments, and achieving real-time performance. It is crucial to conduct a comprehensive review of existing deep learning methods to address these limitations as well as identify the most accurate and effective approaches for anomaly detection in IoT surveillance systems. This review paper presents an extensive analysis of existing deep learning methods by collecting results and performance evaluations from various studies. The collected results enable the identification and comparison of the most accurate deep-learning methods for anomaly detection. Finally, the findings of this review will contribute to the development of more efficient and reliable anomaly detection techniques for enhancing the security and effectiveness of IoT surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_79-Investigating_of_Deep_Learning_based_Approaches_for_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rural Homestay Spatial Planning and Design Based on Bert BiLSTM EIC Algorithm in the Background of Digital Ecology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141277</link>
        <id>10.14569/IJACSA.2023.0141277</id>
        <doi>10.14569/IJACSA.2023.0141277</doi>
        <lastModDate>2023-12-29T11:47:34.5600000+00:00</lastModDate>
        
        <creator>Zhibin Qiu</creator>
        
        <creator>Junghoon Mok</creator>
        
        <subject>Rural homestay; spatial planning; deep learning; emotional analysis; bidirectional long short term memory network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>There are promising development prospects in the digital ecosystem, and in this context the spatial planning of rural homestays has received widespread attention. This research aims to better explore the advantages and determine the development direction of rural homestays while providing two-way demand support for consumers and managers. Therefore, this study combines bidirectional long short-term memory networks, pre-trained models, and an emotional information attention mechanism from deep learning to propose a new emotional analysis model, which is then applied to the spatial planning of homestays near Chengdu Normal University and Chengdu Neusoft University. The experimental results show that the accuracy, recall, and F1 value of the proposed model reach 94%, 93%, and 94%, respectively. In terms of consumer satisfaction with homestay spatial location before and after renovation, the average score of homestays near Chengdu Normal University increases by 21% compared to before the renovation, and that of homestays near Chengdu Neusoft University increases by 40%. In summary, the new emotional analysis model proposed in this research is feasible and effective for planning rural homestay spatial locations, providing new ideas for homestay location planning.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_77-Rural_Homestay_Spatial_Planning_and_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Weighted Ensemble and Majority Voting Algorithms for Intrusion Detection in OpenStack Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141276</link>
        <id>10.14569/IJACSA.2023.0141276</id>
        <doi>10.14569/IJACSA.2023.0141276</doi>
        <lastModDate>2023-12-29T11:47:34.5270000+00:00</lastModDate>
        
        <creator>Pravin Patil</creator>
        
        <creator>Geetanjali Kale</creator>
        
        <creator>Nidhi Bivalkar</creator>
        
        <creator>Agneya Kolhatkar</creator>
        
        <subject>Intrusion detection; ensemble algorithms; cloud security; openstack; weighted ensemble; majority voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In the ever-evolving landscape of cybersecurity, the detection of malicious activities within cloud environments remains a critical challenge. This research aims to compare the effectiveness of two ensemble algorithms, the weighted ensemble algorithm and the majority voting algorithm, in the context of intrusion detection within an OpenStack cloud environment. To conduct this study, a dataset was generated using a network of 10 virtual machines, simulating the complex dynamics of a real cloud infrastructure. Various attack scenarios were simulated, and system metrics including CPU usage, memory utilization, and network traffic were monitored and logged. The weighted ensemble algorithm combines the predictions of multiple individual models with varying weights, while the majority voting algorithm aggregates predictions from multiple models. Through a rigorous experimental setup, these algorithms were applied to the generated dataset, and their performance was evaluated using standard metrics such as accuracy, precision, recall, and F1-score. The findings provide valuable insights into the strengths and weaknesses of ensemble algorithms for intrusion detection in cloud environments and highlight the importance of selecting appropriate algorithms based on specific security requirements and threat profiles. Different attack scenarios may require different algorithmic approaches to achieve optimal results. Overall, this study contributes to the understanding of ensemble techniques in cloud security and offers a foundation for further research in optimizing intrusion detection strategies within dynamic and complex cloud environments. By identifying the strengths and weaknesses of different ensemble algorithms, cybersecurity professionals can make informed decisions in selecting the most suitable approach to enhance the security of cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_76-Comparative_Analysis_of_Weighted_Ensemble_and_Majority_Voting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Multi-Object Detection via the Integration of PSO, Kalman Filtering, and CNN Compressive Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141275</link>
        <id>10.14569/IJACSA.2023.0141275</id>
        <doi>10.14569/IJACSA.2023.0141275</doi>
        <lastModDate>2023-12-29T11:47:34.4970000+00:00</lastModDate>
        
        <creator>S. V. Suresh Babu Matla</creator>
        
        <creator>S. Ravi</creator>
        
        <creator>Muralikrishna Puttagunta</creator>
        
        <subject>Multi-Object tracking; object detection; convolutional neural networks; kalman filtering; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Many inventive techniques have been created in the field of machine vision to address the challenging task of detecting and tracking one or more objects under difficult conditions, such as obstacles, object motion, changes in light, shaking, and rotations. This research article provides a novel method that combines Convolutional Neural Networks (CNNs), Compressive Sensing, Kalman Filtering, and Particle Swarm Optimization (PSO) to address the challenges of multi-object tracking under dynamic conditions. Initially, a CNN-based object classification and identification system is demonstrated, which efficiently locates objects in video frames. Subsequently, compressive sensing techniques are utilized to produce precise representations of object appearances. The Kalman Filter ensures adaptability to irregular observations, eliminates erroneous data, and reduces uncertainty. PSO enhances tracking efficiency by optimizing forecast precision. When combined, these techniques provide robust tracking even in the presence of complex movement patterns, occlusions, and visual disparities. The efficiency of this strategy is demonstrated by an empirical investigation that produces a remarkable tracking accuracy of 98%, which is 3.15% greater than other methods across a range of challenging settings. The technique has been compared to various existing approaches, including the Clustering Method, the YOLOv4 DNN model, and the YOLOv3 model, and its deployment is made easier with Python software. This hybrid technique, which addresses the limitations of separate approaches and offers a holistic approach to multi-object monitoring, has potential applications in surveillance, robotics, and autonomous systems.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_75-Enhanced_Multi_Object_Detection_via_the_Integration_of_PSO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Production System Performance: Failure Detection and Availability Improvement with Deep Learning and Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141274</link>
        <id>10.14569/IJACSA.2023.0141274</id>
        <doi>10.14569/IJACSA.2023.0141274</doi>
        <lastModDate>2023-12-29T11:47:34.4970000+00:00</lastModDate>
        
        <creator>Artika Farhana</creator>
        
        <creator>Shaista Sabeer</creator>
        
        <creator>Ayasha Siddiqua</creator>
        
        <creator>Afsana Anjum</creator>
        
        <subject>Autoencoder; availability enhancement; convolutional neural network; failure detection; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>A crucial component of industrial operations is the detection of production system failures, which aims to spot any problems before they get worse. By applying cutting-edge methods like deep learning and genetic algorithms, failure detection accuracy may be improved, allowing for preemptive actions to reduce downtime and maximize system availability. These methods improve reactivity to possible errors and solve dynamic issues, which enhances the overall efficiency and reliability of production systems. This study offers a novel method for improving the availability and failure detection of production systems using deep learning techniques and genetic algorithms in a data-driven strategy. The goal of the project is to provide a complete framework for efficient failure detection that incorporates deep learning models, particularly a Convolutional Neural Network (CNN) Autoencoder. Furthermore, system configurations are optimized through the use of genetic algorithms, improving overall availability. The suggested model is able to identify complex patterns and connections in the data by being trained on a variety of datasets that contain information about equipment failure. The incorporation of the genetic algorithm guarantees flexibility and resilience in system setups, hence augmenting total availability. The study presents a proactive and flexible approach to the dynamic issues encountered in industrial environments, providing a notable breakthrough in failure detection and availability improvement. The proposed model is implemented in Python software. It achieves an astounding 99.32% accuracy rate, which is 3.58% higher than that of current techniques like CNN-LSTM (Long Short-Term Memory), Bi-LSTM (Bi-directional Long Short-Term Memory), and CNN-RNN (Recurrent Neural Network). The data-driven approach&#39;s high accuracy highlights its efficacy in forecasting and avoiding problems, which minimizes downtime and maximizes production efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_74-Enhancing_Production_System_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Fusion Deep Learning Approach for Retinal Disease Diagnosis Enhanced by Web Application Predictive Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141273</link>
        <id>10.14569/IJACSA.2023.0141273</id>
        <doi>10.14569/IJACSA.2023.0141273</doi>
        <lastModDate>2023-12-29T11:47:34.4800000+00:00</lastModDate>
        
        <creator>Nani Gopal Barai</creator>
        
        <creator>Subrata Banik</creator>
        
        <creator>F M Javed Mehedi Shamrat</creator>
        
        <subject>Retinal disease; RetiNet; hybrid model; learning; Web application; gaussian blur; histogram equalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Retinal disorders such as age-related macular degeneration and diabetic macular edema can lead to permanent blindness. Optical coherence tomography (OCT) enables professionals to observe cross-sections of the retina, which aids in diagnosis. Manually analyzing images is time-consuming, difficult, and prone to mistakes. In the dynamic and constantly evolving domain of artificial intelligence (AI) and medical imaging, our research represents a significant development in the field of retinal diagnostics. In this study, we introduced “RetiNet”, an advanced hybrid model that is derived from the best features of ResNet50 and DenseNet121. To train the model, we utilized an open-source retinal dataset that underwent a meticulous refinement process using a series of preprocessing techniques. These techniques involved Histogram Equalization to achieve optimal contrast, Gaussian blur to mitigate noise, morphological operations to facilitate precise feature extraction, and Data Balancing to ensure impartial model training. These operations enabled RetiNet to attain a test accuracy of 98.50%, surpassing the performance standard set by existing models. A web application has been developed for disease prediction, providing doctors with assistance in their diagnostic procedures. Through the development of RetiNet, our research not only transforms the accuracy of retinal diagnostics but also introduces an innovative combination of deep learning and application-oriented solutions. This innovation ushers in a novel era of improved reliability and efficiency in the field of medical diagnostics.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_73-A_Novel_Fusion_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Evaluation and Optimization of Postgraduate Education Comprehensive Ability Training under the Mode of “One Case, Three Systems”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141272</link>
        <id>10.14569/IJACSA.2023.0141272</id>
        <doi>10.14569/IJACSA.2023.0141272</doi>
        <lastModDate>2023-12-29T11:47:34.4500000+00:00</lastModDate>
        
        <creator>Yong Xiang</creator>
        
        <creator>Zeyou Chen</creator>
        
        <creator>Liyu Lu</creator>
        
        <creator>Yao Wei</creator>
        
        <subject>Comprehensive ability training of graduate students; one case; three systems; feature weighted clustering algorithm; sampling method; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>This study aims to explore intelligent evaluation and optimization methods for the comprehensive ability training of graduate students under the mode of &quot;one case, three systems&quot; to improve the quality and effect of graduate training. Firstly, a weighted clustering algorithm for mixed attributes is designed. Secondly, an evaluation model of postgraduate training quality based on a sampling method and ensemble learning is established. Finally, the performance of the algorithm and the model is compared and tested. The test results show that, as the number of experiments increases, the accuracy of the proposed weighted clustering algorithm can reach more than 90%, an improvement of 10%. The average number of iterations is 276, and the accuracy and F1 value reach their highest levels with fewer iterations and stable algorithm performance. Compared with the R1 model, the F1 value and accuracy of the model proposed in this study are enhanced by 3.29% and 6.75%, respectively. The feature-weighted clustering algorithm and the training quality evaluation model designed here complement each other and jointly construct a more elaborate and comprehensive training system. For the first time, a feature-weighted clustering algorithm oriented to mixed attributes is combined with sampling methods and ensemble learning in education ability training. Moreover, a multi-dimensional, intelligent postgraduate training evaluation framework is constructed, which provides a new idea for improving the quality of postgraduate training.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_72-Intelligent_Evaluation_and_Optimization_of_Postgraduate_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Piano Single Tone Recognition and Classification Method Based on CNN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141271</link>
        <id>10.14569/IJACSA.2023.0141271</id>
        <doi>10.14569/IJACSA.2023.0141271</doi>
        <lastModDate>2023-12-29T11:47:34.4330000+00:00</lastModDate>
        
        <creator>Miaoping Geng</creator>
        
        <creator>Ruidi He</creator>
        
        <creator>Ziyihe Zhou</creator>
        
        <subject>CNN model; piano; single tone recognition; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>To improve the recognition and classification of piano single tones, this paper constructs a piano single tone recognition and classification model based on a CNN (Convolutional Neural Network) and equalizes the uniformly irradiated parabolic tone transmission hardware. The analytic method is used to calculate the direction diagram of the tone transmission hardware, and an analytical expression for calculating the gain of the tone transmission hardware is obtained. Moreover, this paper gives the calculation and analytical expression of the hardware gain of the tone transmission in the main lobe, and obtains the calculation method of the relative position of the two pieces of tone transmission hardware by using the conversion relationship between global and local coordinates. Finally, the variation law of the received power with the azimuth/elevation angle of the receiving tone transmission hardware and the incident high-power microwave frequency is given. The experimental study shows that the proposed CNN-based method can play an important role in piano single tone recognition. This article improves the note recognition algorithm for piano music by combining note features with the frequency spectrum to obtain the note spectrum, which improves the accuracy of audio classification and recognition.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_71-A_Piano_Single_Tone_Recognition_and_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Synthetic Data Utilization with Generative Adversarial Network in Flood Classification using K-Nearest Neighbor Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141270</link>
        <id>10.14569/IJACSA.2023.0141270</id>
        <doi>10.14569/IJACSA.2023.0141270</doi>
        <lastModDate>2023-12-29T11:47:34.4200000+00:00</lastModDate>
        
        <creator>Wahyu Afriza</creator>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <creator>Dyah Aruming Tyas</creator>
        
        <subject>Classification; rainfall; synthetic data; KNN; GAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Indonesia is a country with a tropical climate, high rainfall rates, and considerable uncertainty in weather and climate conditions. Given this uncertainty, the occurrence of floods, minimal predictive information on flooding, and the limited availability of data on the causes of flooding, this study analyzes a comparison between synthetic data generated from the minimal data available from BMKG (temperature, humidity, rainfall, and wind speed) and synthetic data generated from annual rainfall data from the Kaggle online platform. This research aims to obtain comparative analysis results for synthetic data generation from different datasets, benchmarked by the results of a classification system using K-Nearest Neighbor (KNN) and accuracy evaluation with a confusion matrix. The research uses climate data from the BMKG DI Yogyakarta Climatology Station covering 20 months, the Geophysical Station covering 12 months, and Kerala data covering 1901–2018. Synthetic data generation is done using the Conditional Tabular Generative Adversarial Network (CTGAN) model. CTGAN produces reasonably good data in terms of distribution and data differences when the original data is large and the amount of synthetic data generated is small. The KNN classification system on the BMKG data experienced overfitting, as indicated by the evaluation accuracy increasing in the range of 85–94% while the validation accuracy decreased in the range of 89–65%. This is because there is no uniqueness in the data, and too little original data was turned into synthetic data, which makes it difficult for the classification system to identify data with quite different distances and values generated by CTGAN. On the Kerala data, the evaluation accuracy is in the range of 92–95% and the validation accuracy is in the range of 0.7–0.83%, with Classifier k1 being the most optimal system.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_70-Analysis_of_Synthetic_Data_Utilization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Autism Severity Classification: Integrating LSTM into CNNs for Multisite Meltdown Grading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141269</link>
        <id>10.14569/IJACSA.2023.0141269</id>
        <doi>10.14569/IJACSA.2023.0141269</doi>
        <lastModDate>2023-12-29T11:47:34.4030000+00:00</lastModDate>
        
        <creator>Sumbul Alam</creator>
        
        <creator>S Pravinth Raja</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <subject>Autism spectrum disorder; mutilating meltdown; convolution neural network; long short term memory; multisite meltdown; video classification; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by deficits in social interaction, verbal and non-verbal communication, and is often associated with cognitive and neurobehavioral challenges. Timely screening and diagnosis of ASD are crucial for early educational planning, treatment, family support, and timely medical intervention. Manual diagnostic methods are time-consuming and labor-intensive, underscoring the need for automated approaches to assist caretakers and parents. While various researchers have employed machine learning and deep learning techniques for ASD diagnosis, existing models often fall short in capturing the complexity of multisite meltdowns and fully leveraging the interdependence among these meltdowns for severity assessment in acquired facial images of children, hindering the development of a comprehensive grading system. This paper introduces a novel approach using a Long Short Term Memory (LSTM) integrated Convolution Neural Network (CNN) designed to identify multisite meltdowns and exploit their interdependence for severity assessment in ASD. The process begins with image pre-processing, involving discrete convolution filters for noise removal and contrast enhancement to improve image quality. The enhanced image then undergoes instance segmentation using the Segment Anything model to identify significant regions in the child&#39;s image. The segmented region is subjected to principal component analysis for feature extraction, and these features are utilized by the LSTM-integrated CNN for meltdown detection and severity classification. The model is trained using children&#39;s images extracted from videos, and testing is performed on videos captured during children&#39;s observations. Performance analysis reveals superior results, with a training accuracy of 88% and validation accuracy of 84%, outperforming conventional methods. This innovative approach not only enhances the efficiency of ASD diagnosis but also provides a more nuanced understanding of multisite meltdowns and their impact on severity, contributing to the development of a robust grading system.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_69-Enhancing_Autism_Severity_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Review of Healthcare Prediction using Data Science with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141268</link>
        <id>10.14569/IJACSA.2023.0141268</id>
        <doi>10.14569/IJACSA.2023.0141268</doi>
        <lastModDate>2023-12-29T11:47:34.3870000+00:00</lastModDate>
        
        <creator>Asha Latha Thandu</creator>
        
        <creator>Pradeepini Gera</creator>
        
        <subject>Data science; deep belief network; healthcare; sparse auto encoder; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Data science in healthcare prediction technology can identify diseases, spot even the smallest changes in a patient&#39;s health factors, and help prevent disease. Several factors make data science crucial to healthcare today; the most important among them is the competitive demand for valuable information in healthcare systems. Data science technology, along with Deep Learning (DL) techniques, supports medical records, disease diagnosis, and especially real-time monitoring of patients. Each DL algorithm performs differently on different datasets, and this variability in predictive results may affect overall outcomes. The variability of prognostic results is large in the clinical decision-making process. Consequently, it is necessary to understand the various DL algorithms required for handling large amounts of data in the healthcare sector. Therefore, this review paper highlights the basic DL algorithms used for prediction and classification and explains how they are used in the healthcare sector. The goal of this review is to provide a clear overview of data science technologies in healthcare solutions. The analysis determines that each DL algorithm has several drawbacks, and that an optimal method is necessary for critical healthcare prediction data. This review also offers several examples of data science and DL used to identify upcoming trends in healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_68-A_Comprehensive_Review_of_Healthcare_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Intelligent Service Delivery System to Increase Efficiency of Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141267</link>
        <id>10.14569/IJACSA.2023.0141267</id>
        <doi>10.14569/IJACSA.2023.0141267</doi>
        <lastModDate>2023-12-29T11:47:34.3730000+00:00</lastModDate>
        
        <creator>Serik Joldasbayev</creator>
        
        <creator>Saya Sapakova</creator>
        
        <creator>Almash Zhaksylyk</creator>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <creator>Reanta Armankyzy</creator>
        
        <creator>Aruzhan Bolysbek</creator>
        
        <subject>Load balancing; machine learning; server; classification; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The burgeoning complexity in network management has garnered considerable attention, specifically focusing on Software-Defined Networking (SDN), a transformative technology that addresses limitations inherent in traditional network infrastructures. Despite its advantages, SDN is often susceptible to bottlenecks and excessive load issues, underscoring the necessity for more robust load balancing solutions. Previous research in this realm has predominantly concentrated on employing static or dynamic methodologies, encapsulating only a handful of parameters for traffic management, thereby limiting their effectiveness. This study introduces an innovative, intelligence-led approach to service delivery systems in SDN, specifically by orchestrating packet forwarding—encompassing both TCP and UDP traffic—through a multi-faceted analysis utilizing twelve distinct parameters elaborated in subsequent sections. This research leverages advanced machine learning algorithms, notably K-Means and DBSCAN clustering, to discern patterns and optimize traffic distribution, ensuring a more nuanced, responsive load balancing mechanism. A salient feature of this methodology involves determining the ideal number of operational clusters to enhance efficiency systematically. The proposed system underwent rigorous testing with an escalating scale of network packets, encompassing counts of 5,000 to an extensive 10,000,000, to validate performance under varying load conditions. Comparative analysis between K-Means and DBSCAN&#39;s results reveals critical insights into their operational efficacy, corroborated by juxtaposition with extant scholarly perspectives. This investigation&#39;s findings significantly contribute to the discourse on adaptive network solutions, demonstrating that an intelligent, parameter-rich approach can substantively mitigate load-related challenges, thereby revolutionizing service delivery paradigms within Software-Defined Networks.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_67-Development_of_an_Intelligent_Service_Delivery_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Big Data Analysis and Machine Learning Approaches for Optimal Production Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141266</link>
        <id>10.14569/IJACSA.2023.0141266</id>
        <doi>10.14569/IJACSA.2023.0141266</doi>
        <lastModDate>2023-12-29T11:47:34.3730000+00:00</lastModDate>
        
        <creator>Sarsenkul Tileubay</creator>
        
        <creator>Bayanali Doszhanov</creator>
        
        <creator>Bulgyn Mailykhanova</creator>
        
        <creator>Nurlan Kulmurzayev</creator>
        
        <creator>Aisanim Sarsenbayeva</creator>
        
        <creator>Zhadyra Akanova</creator>
        
        <creator>Sveta Toxanova</creator>
        
        <subject>Optimal production; smart manufacturing; machine learning; big data; management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In this research paper, we delve into the transformative potential of integrating Big Data analytics with machine learning (ML) techniques, orchestrating a paradigm shift in production management methodologies. Traditional production systems, often marred by inefficiencies stemming from data opacity, have encountered bottlenecks that throttle scalability and adaptability, particularly in complex, fluctuating markets. By harnessing the voluminous streams of data—both structured and unstructured—generated in contemporary production environments, and subjecting these data lakes to advanced ML algorithms, we unveil profound insights and predictive patterns that remain elusive under conventional analytical methods. Our discourse juxtaposes the multidimensionality of Big Data—emphasizing velocity, variety, veracity, and volume—with the finesse of ML models, such as neural networks and reinforcement learning, which adapt iteratively to the dynamism inherent in production landscapes. This symbiosis underpins a more holistic, anticipatory decision-making process, empowering stakeholders to pinpoint and mitigate operational hiccups, optimize supply chain vectors, and streamline quality assurance protocols, thereby catalyzing a more resilient, responsive, and cost-effective production framework. Furthermore, we explore the ethical contours of data stewardship in this context, advocating for a judicious balance between technological ascendancy and responsible data governance. The culmination of this exploration is the conceptualization of a predictive, self-regulating production ecosystem that thrives on continuous learning and improvement, dynamically calibrating itself in response to an ever-evolving market tableau and thereby heralding a new era of optimal, sustainable, and intelligent production management.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_66-Applying_Big_Data_Analysis_and_Machine_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring a Novel Machine Learning Approach for Evaluating Parkinson&#39;s Disease, Duration, and Vitamin D Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141265</link>
        <id>10.14569/IJACSA.2023.0141265</id>
        <doi>10.14569/IJACSA.2023.0141265</doi>
        <lastModDate>2023-12-29T11:47:34.3570000+00:00</lastModDate>
        
        <creator>Md. Asraf Ali</creator>
        
        <creator>Md. Kishor Morol</creator>
        
        <creator>Muhammad F Mridha</creator>
        
        <creator>Nafiz Fahad</creator>
        
        <creator>Md Sadi Al Huda</creator>
        
        <creator>Nasim Ahmed</creator>
        
        <subject>Parkinson&#39;s disease; machine learning; vitamin D; severity; disease duration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Parkinson&#39;s disease (PD) is an increasingly prevalent, degenerative neurological condition predominantly afflicting individuals aged 50 and older. As global life expectancy continues to rise, the imperative for a deeper comprehension of factors influencing the course and intensity of PD becomes more pronounced. This investigation delves into these facets, scrutinizing various parameters including patient medical history, dietary practices, and vitamin D levels. A dataset comprising 50 PD patients and 50 healthy controls, sourced from Dhaka Medical Institute, serves as the foundation for this study. Machine learning techniques, notably the Modified Random Forest Classifier (MRFC), are harnessed to prognosticate both PD severity and duration. Strikingly, the MRFC-based prediction model for PD severity attains an impressive accuracy of 97.14%, while the predictive model for PD duration demonstrates an accuracy of 95.16%. Noteworthy is the observation that vitamin D levels are notably higher in the healthy cohort compared to PD-afflicted individuals, exerting a substantial positive influence on both the severity and duration predictions, surpassing the influence of other measured parameters. This inquiry underscores the practicality of machine learning in forecasting PD progression and duration and underscores the pivotal role of vitamin D levels as a predictive factor. These discoveries provide invaluable insights into advancing our comprehension and management of PD in an aging population.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_65-Exploring_A_Novel_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sophisticated Deep Learning Framework of Advanced Techniques to Detect Malicious Users in Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141264</link>
        <id>10.14569/IJACSA.2023.0141264</id>
        <doi>10.14569/IJACSA.2023.0141264</doi>
        <lastModDate>2023-12-29T11:47:34.3400000+00:00</lastModDate>
        
        <creator>Sailaja Terumalasetti</creator>
        
        <creator>Reeja S R</creator>
        
        <subject>Online social networks; malicious user behavior; convolution neural networks; long short-term memory; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Malicious user detection is an active domain of cybersecurity research because of the growing risks of data breaches and cyberattacks. Malicious users can harm a system by engaging in unauthorized actions or stealing sensitive data. This paper proposes the dual-powered CLM technique, a sophisticated methodology for distinguishing malicious user behavior that combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks with an optimization stage to enhance the results. A genetic algorithm fine-tunes the model&#39;s parameters, augmenting its capability to perceive evolving and nuanced malicious behavior. Given the rising vulnerability to data breaches and cyber-attacks, malicious user identification in Online Social Networks (OSN) is a significant topic of research in cybersecurity. The proposed technique seeks to identify anomalous user behavior patterns by assessing the vast quantities of data generated by digital systems with CLM and by optimizing detection accuracy with genetic algorithms. Performance was measured on Twibot-20, a public social media bot dataset comprising user activity data. The outcomes demonstrate that our technique achieved an accuracy of 98.7%, outperforming conventional machine learning algorithms such as SVM and RF, which obtained 92.3% and 88.9% accuracy respectively. The other metrics were also assessed, and the proposed technique outperformed traditional machine learning algorithms in every case.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_64-A_Sophisticated_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Contribution of Health Management Information Systems to Enhancing Healthcare Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141263</link>
        <id>10.14569/IJACSA.2023.0141263</id>
        <doi>10.14569/IJACSA.2023.0141263</doi>
        <lastModDate>2023-12-29T11:47:34.3270000+00:00</lastModDate>
        
        <creator>Majzoob K. Omer</creator>
        
<subject>Health management information system; cronbach alpha; moderator; information timeliness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Various strategies for enhancing quality have been implemented by developed and developing countries in light of the worldwide emphasis on bolstering healthcare systems. Many nations are currently directing their attention towards bolstering their existing information systems or establishing new ones, recognizing the critical role of information in the functioning of healthcare systems. The study aimed to assess the impact of leadership style, organizational factors, technology, and healthcare provider behavior on the implementation of health management information systems in healthcare organizations. While the study was informed by the performance framework of routine information systems, it was primarily based on system theory. After conducting the analysis in Python and SPSS, the data was presented using descriptive statistics, such as means and standard deviations, and inferential statistics including regression analysis. The study observed that information timeliness significantly moderated the connection between the technical factor and the integration of health management information systems.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_63-The_Contribution_of_Health_Management_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Network Security and Performance Through the Integration of Hybrid GAN-RNN Models in SDN-based Access Control and Traffic Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141262</link>
        <id>10.14569/IJACSA.2023.0141262</id>
        <doi>10.14569/IJACSA.2023.0141262</doi>
        <lastModDate>2023-12-29T11:47:34.3100000+00:00</lastModDate>
        
        <creator>Ganesh Khekare</creator>
        
        <creator>K. Pavan Kumar</creator>
        
        <creator>Kundeti Naga Prasanthi</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Venubabu Rachapudi</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Software-defined networking; generative adversarial networks; recurrent neural networks; traffic engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>By offering flexible and adaptable infrastructures, Software-Defined Networking (SDN) has emerged as a disruptive technology that has completely changed network provisioning and administration. By seamlessly integrating Hybrid Generative Adversarial Network-Recurrent Neural Network (GAN-RNN) modeling into the foundation of SDN-based traffic engineering and access control methods, this work presents a novel and comprehensive method to improve network efficiency and security. The proposed Hybrid GAN-RNN models address two important aspects of network management, traffic optimization and access control, by combining the benefits of Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs). Traditional traffic engineering techniques frequently struggle to adjust quickly to rapidly changing conditions within today&#39;s dynamic networking environments. The capacity of the models to generate synthetic traffic patterns that closely replicate the complexity of real network traffic demonstrates the power of GANs. This state-of-the-art technology enables network administrators to allocate resources and routing strategies more dynamically and in response to real-time network anomalies. The Hybrid GAN-RNN technique also addresses the enduring problem of network security: implemented in Python, the RNNs, known for their continuous learning, drive the adaptive management of access rules. With an impressive 99.4% accuracy rate, the proposed GAN-RNN approach outperforms the other approaches. A comprehensive evaluation of network traffic and emerging security risks allows for the immediate modification of these policies. This work is notable because it combines hybrid GAN-RNN algorithms to strengthen security protocols with adaptive access control while also optimizing network efficiency through realistic traffic modeling.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_62-Optimizing_Network_Security_and_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Crop Yield Prediction in Precision Agriculture with Hyperspectral Imaging-Unmixing and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141261</link>
        <id>10.14569/IJACSA.2023.0141261</id>
        <doi>10.14569/IJACSA.2023.0141261</doi>
        <lastModDate>2023-12-29T11:47:34.2770000+00:00</lastModDate>
        
        <creator>Deeba K</creator>
        
        <creator>O. Rama Devi</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Bhargavi Peddi Reddy</creator>
        
        <creator>Manohara H T</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <subject>Crop yield prediction; hyper spectral image; spectral unmixing; resource management; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>The optimization of crop yield projections has arisen as a major problem in modern agriculture, owing to the increasing demand for food supply and the necessity for effective resource management. Precision and scalability are hampered by the limits of conventional agricultural yield prediction techniques, which rely mostly on manual observations and simple data sources. While methods like random forest (RF) and K-nearest neighbors (KNN) are widely used, their reliance on subjective assessments and insufficient knowledge of crop attributes typically results in less accurate forecasts and makes them unsuitable for precision agriculture. The suggested method combines deep learning, spectral unmixing, and hyperspectral imaging to overcome these obstacles. Hyperspectral imaging records a vast array of data invisible to the human eye, allowing crop attributes to be examined thoroughly, while spectral unmixing approaches identify the unique spectral fingerprints of different agricultural constituents, making it easier to evaluate the health and growth phases of the crop. Deep learning algorithms then use this augmented spectral data to create a solid, data-driven basis for precise crop yield prediction. The suggested workflow was implemented in MATLAB and Python. Among the algorithms examined, this integrated approach has the lowest Root Mean Square Error (RMSE) of 0.15 and Mean Absolute Error (MAE) of 0.14, demonstrating higher prediction accuracy than other current models. This novel method represents a substantial breakthrough in precision agriculture while also improving crop yield prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_61-Optimizing_Crop_Yield_Prediction_in_Precision_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Software User Interface Testing Through Few Shot Deep Learning: A Novel Approach for Automated Accuracy and Usability Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141260</link>
        <id>10.14569/IJACSA.2023.0141260</id>
        <doi>10.14569/IJACSA.2023.0141260</doi>
        <lastModDate>2023-12-29T11:47:34.2630000+00:00</lastModDate>
        
        <creator>Aris Puji Widodo</creator>
        
        <creator>Adi Wibowo</creator>
        
        <creator>Kabul Kurniawan</creator>
        
        <subject>Deep learning; efficientnet; few-shot; software testing; UI screen classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Traditional user interface (UI) testing methods in software development are time-consuming and prone to human error, requiring more efficient and accurate approaches. Moreover, deep learning requires extensive training data to develop accurate automated UI software testing. This paper proposes an efficient and accurate method for automating UI software testing using deep learning under training data limitations. We propose a novel deep learning-based framework suitable for UI element analysis in data-scarce situations, focusing on few-shot learning. Our framework begins with several robust feature extraction modules that employ and compare sophisticated encoder models adept at capturing complex patterns from a sparse dataset. The methodology employs the Enrico and UI screen mistake datasets, overcoming training data limitations. Among the encoder models evaluated, including CNN, VGG-16, ResNet-50, MobileNet-V3, and EfficientNet-B1, the EfficientNet-B1 model excelled in the five-shot setting of few-shot learning with an average accuracy of 76.05%, improving on the state-of-the-art method. Our findings demonstrate the effectiveness of few-shot learning in UI screen classification, setting new benchmarks in software testing and usability evaluation, particularly in limited data scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_60-Enhancing_Software_User_Interface_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence for Confidential Information Sharing Based on Knowledge-Based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141259</link>
        <id>10.14569/IJACSA.2023.0141259</id>
        <doi>10.14569/IJACSA.2023.0141259</doi>
        <lastModDate>2023-12-29T11:47:34.2470000+00:00</lastModDate>
        
        <creator>Bouchra Boulahiat</creator>
        
        <creator>Salima Trichni</creator>
        
        <creator>Mohammed Bougrine</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>IT security; cryptography; confidentiality; Knowledge-Based system; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Ensuring the security of sensitive data and protecting user privacy remains one of the most significant challenges in our contemporary landscape. Organizations cannot adopt a new technology without reassurance regarding data confidentiality. To address these challenges, we present an innovative system that draws upon extensive knowledge and expertise in the field of cryptography, especially in encryption methods. This system tailors its strategies to align with specific scenarios, prioritizing data confidentiality. Our solution is based on Knowledge-Based Systems (KBS), an Artificial Intelligence technique, and extends the intelligent encryption methods from our previous research. This new system, however, takes a novel approach by reconfiguring them within a KBS architecture. We have introduced additional technical components, including knowledge bases, an inference engine, and the Nearest Neighbor (NN) search algorithm. As a result, this revised architecture not only enhances security and system performance but also showcases improved maintainability and scalability.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_59-Artificial_Intelligence_for_Confidential_Information_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of ANN, LSTM and CNN Classifiers for the New MSCC and BSCC Methods in the Detection of Parkinson&#39;s Disease by Voice Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141258</link>
        <id>10.14569/IJACSA.2023.0141258</id>
        <doi>10.14569/IJACSA.2023.0141258</doi>
        <lastModDate>2023-12-29T11:47:34.2300000+00:00</lastModDate>
        
        <creator>Miyara Mounia</creator>
        
        <creator>Boualoulou Nouhaila</creator>
        
        <creator>Nsiri Benayad</creator>
        
        <creator>Belhoussine Drissi Taoufiq</creator>
        
<subject>Parkinson’s Disease (PD); Bark Spectrogram Cepstral Coefficients (BSCC); Mel Spectrogram Cepstral Coefficients (MSCC); Long Short-Term Memory Neural Networks (LSTM); Convolutional Neural Networks (CNN); Artificial Neural Networks (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Parkinson&#39;s disease (PD) is a neurodegenerative condition that impacts a significant global population. The timely and precise identification of PD plays a pivotal role in facilitating early intervention and the efficient management of the condition. Recently, speech analysis has emerged as a promising non-invasive technique for the detection of PD due to its accessibility and ability to reveal subtle vocal biomarkers associated with the disease. This research introduces an innovative approach utilizing Short-Time Fourier Transform (STFT) to generate spectrograms, specifically Bark Spectrogram Cepstral Coefficients (BSCC) and Mel Spectrogram Cepstral Coefficients (MSCC). These coefficients are compared with traditional and well-known coefficients, namely Mel-Frequency Cepstral Coefficients (MFCC) and Bark Frequency Cepstral Coefficients (BFCC). To extract the most effective coefficients for Parkinson&#39;s disease detection, three robust classification techniques—Long Short-Term Memory neural networks (LSTM), Convolutional Neural Networks (CNN), and Artificial Neural Networks (ANN)—are employed. As a result, the BSCC and MSCC algorithms achieve a maximum accuracy rate of 90%, surpassing the accuracy of the traditional MFCC and BFCC coefficients. Therefore, these newly proposed coefficients prove to be more precise in diagnosing Parkinson&#39;s disease compared to the conventional MFCC and BFCC coefficients.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_58-Use_of_ANN_LSTM_and_CNN_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Honeycomb Lung Segmentation Network Combining Multi-Paradigms Representation and Cascade Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141256</link>
        <id>10.14569/IJACSA.2023.0141256</id>
        <doi>10.14569/IJACSA.2023.0141256</doi>
        <lastModDate>2023-12-29T11:47:34.2170000+00:00</lastModDate>
        
        <creator>Bingqian Yang</creator>
        
        <creator>Xiufang Feng</creator>
        
        <creator>Yunyun Dong</creator>
        
        <subject>Honeycomb lung; attention; convolutional neural network; transformer; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Honeycomb lung is a pulmonary manifestation that occurs in the terminal stage of various lung diseases and greatly threatens patients. Owing to the varying locations and irregular shapes of lesions, accurate segmentation of the honeycomb region is an essential and challenging problem. However, most deep learning methods struggle to effectively utilize both global and local information from lesion images and therefore cannot segment the lesion accurately. In addition, these methods often ignore semantic information that is necessary for segmenting the lesion location and shape in the decoding stage. To alleviate these challenges, in this paper we propose a dual-branch encoder and cascaded decoder network (DECDNet) for segmenting honeycomb lesions. First, we design a dual-branch encoder consisting of ResNet34 and Swin-Transformer, whose different paradigm representations extract local features and long-range dependencies respectively. Next, to further combine the different paradigm features, we develop a feature fusion module to obtain richer representation information. Finally, considering the problem of information loss during decoding, a cascaded attention decoder is constructed to aggregate the multi-stage encoder information and produce the final segmentation result. Experimental results demonstrate that our method outperforms other methods on the in-house honeycomb lung dataset. Notably, compared with nine other universal methods, the proposed DECDNet obtains the highest IoU (86.34%), Dice (92.66%), Precision (93.21%), Recall (92.13%), and F1-Score (92.66%), and achieves the lowest HD95 (7.33) and ASD (2.30). In particular, our method precisely segments lesions under different clinical scenarios as well. Our code and dataset are available at https://github.com/ybq17/DECDNet.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_56-An_Efficient_Honeycomb_Lung_Segmentation_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Deep Reinforcement Learning for Smart Buildings: Integrating Energy Storage Systems Through Advanced Energy Management Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141257</link>
        <id>10.14569/IJACSA.2023.0141257</id>
        <doi>10.14569/IJACSA.2023.0141257</doi>
        <lastModDate>2023-12-29T11:47:34.2170000+00:00</lastModDate>
        
        <creator>Artika Farhana</creator>
        
        <creator>Nimmati Satheesh</creator>
        
        <creator>Ramya M</creator>
        
        <creator>Janjhyam Venkata Naga Ramesh</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Deep q-network; cost optimization; smart building; energy management; peak demand</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>This study presents a novel and workable approach to the critical issue of improving energy management in smart buildings. Using a large dataset from a seven-story office building in Bangkok, Thailand, our work introduces an approach that combines Deep Q-Network (DQN) algorithms with energy storage models and cost optimization strategies. The suggested approach is intended to reduce operational expenses, improve energy-economic performance, and efficiently control peak demand. The energy storage model used in this research incorporates the capabilities of advanced storage technologies in smart buildings, particularly lithium-ion batteries and supercapacitors. When the cost optimization approach is applied using linear programming, energy consumption costs are significantly reduced. Notably, our method outperforms current algorithms, especially Genetic and Fuzzy Algorithms, demonstrating its effectiveness in smart building energy management. In comparison to traditional methods, the DQN algorithm, implemented in Python, exhibits an impressive 8.6% reduction in Mean Square Error (MSE) and a 6.4% drop in Mean Absolute Error (MAE), making it a standout performer in this research. The results highlight the significance of optimizing DQN algorithm parameters for the best outcomes, with a focus on adaptability to the varied properties of smart buildings. This investigation is novel because it integrates cost optimization, reinforcement learning, and energy storage, yielding a flexible and all-inclusive framework for effective and sustainable energy management in smart buildings.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_57-Efficient_Deep_Reinforcement_Learning_for_Smart_Buildings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pancreatic Cancer Detection Through Hyperparameter Tuning and Ensemble Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141255</link>
        <id>10.14569/IJACSA.2023.0141255</id>
        <doi>10.14569/IJACSA.2023.0141255</doi>
        <lastModDate>2023-12-29T11:47:34.2000000+00:00</lastModDate>
        
        <creator>Koteswaramma Dodda</creator>
        
        <creator>G. Muneeswari</creator>
        
        <subject>Pancreatic cancer; Machine learning (ML); urinary biomarkers; grid search hyper parameter tuning; Random Forest (RF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>Computing techniques have brought about a significant transformation in the field of medical research. Machine learning techniques have facilitated the analysis of vast amounts of data, the modeling of complex scenarios, and the ability to make well-informed decisions. This presents an opportunity to develop reliable and effective medical systems, which may include the automatic recognition of uncertain health issues. Currently, significant research efforts are being directed towards the prediction of cancer, particularly focusing on the various health complications caused by this disease, which can adversely impact multiple organs within the body. Pancreatic Cancer (PC) stands out as a highly lethal form of tumor, with a discouraging global five-year survival rate of approximately 5%. Early detection increases the survival rate and helps radiologists give better treatment to those affected at early stages. Creatinine, LYVE1, REG1B, and TFF1 are urinary proteomic biomarkers that offer a promising non-invasive and affordable diagnostic technique for detecting pancreatic cancer. This study proposes a novel model that uses the GridSearchCV technique to find the optimal combination of hyperparameters for a Random Forest classifier, together with a new ensemble method to enhance the classification of pancreatic cancer and non-cancer cases using urinary biomarkers collected from Kaggle. The implemented model achieved an accuracy of 99.98%, an F1-score of 99.98, a precision of 99.98, and a recall of 99.98.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_55-Pancreatic_Cancer_Detection_Through_Hyperparameter_Tuning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Reference Architecture for Semantic Interoperability in Multi-Cloud Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141254</link>
        <id>10.14569/IJACSA.2023.0141254</id>
        <doi>10.14569/IJACSA.2023.0141254</doi>
        <lastModDate>2023-12-29T11:47:34.1830000+00:00</lastModDate>
        
        <creator>Norazian M Hamdan</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <subject>Cloud computing; multi-cloud; reference architecture; semantic interoperability; semantic technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
<description>This paper focuses on semantic interoperability as one of the most significant issues in multi-cloud platforms. Organizations and individuals that adopt the multi-cloud strategy often use various cloud services and platforms. On top of that, cloud service providers may offer a range of services with unique data formats, structures, and semantics. Hence, semantic interoperability is required to enable applications and services to understand and use data consistently, regardless of the cloud service provider. The main goal of this study is to propose a reference architecture for semantic interoperability in multi-cloud platforms. Towards this goal, the paper presents two main contributions. The first contribution is an extended cloud computing interoperability taxonomy, with the semantic approach as one of the solutions for facilitating semantic cloud interoperability. Two fundamental semantic approaches have been identified, namely semantic technologies and frameworks, which will be adopted as the main building blocks. Semantic technologies, such as ontologies, can be used to represent the semantics, or meanings, of data. Data may be reliably represented across multiple cloud platforms by employing a common ontology; this promotes semantic interoperability by ensuring that data is interpreted and processed uniformly across diverse cloud platforms. A framework, on the other hand, offers a standardized and organized way of managing, exchanging, and representing data and services. For the second contribution of this paper, a review of recent (2018-2023) related works has been conducted, investigating the state-of-the-art of semantic interoperability in multi-cloud platforms. As a result, the proposed solution will be implemented in the context of a reference architecture. The reference architecture will act as a blueprint to systematically represent semantic interoperability in multi-cloud platforms using a hybrid role-based and layer-based approach. Additionally, a semantic layer will be extended in the reference architecture to facilitate semantic interoperability.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_54-Towards_a_Reference_Architecture_for_Semantic_Interoperability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Road Safety: Precision Driver Detection System with Integrated Overspeed, Alcohol Detection, and Tracking Capabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141253</link>
        <id>10.14569/IJACSA.2023.0141253</id>
        <doi>10.14569/IJACSA.2023.0141253</doi>
        <lastModDate>2023-12-29T11:47:34.1530000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Khivisha S. Mohan</creator>
        
        <creator>A K M Zakir Hossain</creator>
        
        <creator>Serhii Leoshchenko</creator>
        
        <subject>Integrated driver safety; overspeed detection system; alcohol monitoring technology; comprehensive vehicle security; real-time accident prevention; ESP32 GPS safety; responsible driving solutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In response to ongoing concerns about road accidents linked to overspeeding and drunk driving, this study introduces a groundbreaking solution: the Integrated Driver Safety system, a comprehensive vehicle safety system designed for real-time prevention. Built with cutting-edge components including an ESP32, an MQ3 sensor, a relay, and GPS, the system operates on a dual framework: it swiftly detects instances of overspeeding, triggering immediate email alerts, while concurrently inhibiting engine ignition upon detecting alcohol consumption, actively thwarting drunk driving attempts. This proactive approach not only provides real-time notifications but also physically prevents intoxicated driving, drastically reducing accidents caused by these factors. With an overspeed detection accuracy surpassing 95% and an efficient alcohol monitoring system, this technology cultivates responsible driving habits. Its potential widespread adoption foretells a future where road safety reaches unprecedented levels, underscoring the industry&#39;s dedication to innovation and safer driving experiences. Through this research, a compelling case emerges for the global embrace of these innovative preventive measures, illuminating a path toward significantly enhanced road safety standards.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_53-Advancing_Road_Safety_Precision_Driver_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Atrial Fibrillation Detection-based Wavelet Scattering Transform with Time Window Selection and Neural Network Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141252</link>
        <id>10.14569/IJACSA.2023.0141252</id>
        <doi>10.14569/IJACSA.2023.0141252</doi>
        <lastModDate>2023-12-29T11:47:34.1530000+00:00</lastModDate>
        
        <creator>Mohamed Elmehdi Ait Bourkha</creator>
        
        <creator>Anas Hatim</creator>
        
        <creator>Dounia Nasir</creator>
        
        <creator>Said El Beid</creator>
        
        <subject>Electrocardiogram (ECG); Atrial Fibrillation (AF); Wavelet Scattering Network (WSN); Artificial Neural Network (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Atrial Fibrillation (AF), a prevalent anomaly in cardiac rhythm, significantly impacts a substantial portion of the population, with projections indicating an escalation in its prevalence in the near future. This disorder manifests as irregular and accelerated heartbeats originating within the heart&#39;s upper chambers, known as the atria. Neglecting to address this condition could potentially lead to serious consequences, particularly an elevated susceptibility to stroke and heart failure. This underscores the critical importance of developing an automated approach for detecting AF. In our study, an automatic approach was introduced for classifying short single-lead Electrocardiogram (ECG) recordings into four categories: Atrial fibrillation (AF), Normal rhythm (N), Noisy rhythm (~), or Other rhythms (O). The wavelet scattering network (WSN) is employed to extract morphological features from the ECG signals, which are then fed into an Artificial Neural Network (ANN) with time window selection and majority voting. The results on the testing data show that our proposed model outperforms state-of-the-art models, achieving a remarkable overall accuracy of 87.35% and an F1 score of 89.13%.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_52-Enhanced_Atrial_Fibrillation_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computational Prediction Model of Blood-Brain Barrier Penetration Based on Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141251</link>
        <id>10.14569/IJACSA.2023.0141251</id>
        <doi>10.14569/IJACSA.2023.0141251</doi>
        <lastModDate>2023-12-29T11:47:34.1370000+00:00</lastModDate>
        
        <creator>Deep Himmatbhai Ajabani</creator>
        
        <subject>Central Nervous System (CNS); Blood-Brain Barrier (BBB); Machine Learning (ML); Simplified Molecular Input Line Entry System (SMILES); Support Vector Machine (SVM); K-Nearest Neighbor (KNN); Logistic Regression (LR); Multi-Layer Perceptron (MLP); Light Gradient Boosting Machine (LightGBM); Random Forest (RF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Within the field of medical sciences, addressing brain illnesses such as Alzheimer&#39;s disease, Parkinson&#39;s disease, and brain tumors poses significant difficulties. Despite thorough investigation, the search for truly successful neurotherapies continues to be challenging. The blood-brain barrier (BBB), which is currently a major area of research, restricts the passage of medicinal substances into the central nervous system (CNS). It is crucial in the field of neuroscience to create drugs that can effectively cross the BBB and treat cognitive disorders. The objective of this study is to improve the accuracy of machine learning models in predicting BBB permeability, which is a critical factor in medication development. In recent times, a range of machine learning models such as Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), Artificial Neural Networks (ANN), and Random Forests (RF) have been utilized for BBB permeability prediction. By employing descriptors of varying dimensions (1D, 2D, or 3D), these models demonstrate the potential to make precise predictions. However, the majority of these studies are biased by the nature of their datasets. To accomplish our objective, we utilized three BBB datasets for training and testing our model. The Random Forest (RF) model has shown exceptional performance when used on larger datasets and extensive feature sets. The RF model attained an overall accuracy of 90.36% with 10-fold cross-validation. Additionally, it earned an AUC of 0.96, a sensitivity of 77.73%, and a specificity of 94.74%. The assessment on an external dataset resulted in an accuracy rate of 91.89%, an AUC value of 0.94, a sensitivity rate of 91.43%, and a specificity rate of 92.31%.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_51-A_Computational_Prediction_Model_of_Blood_Brain_Barrier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Underwater Object Recognition Through the Synergy of Transformer and Feature Enhancement Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141250</link>
        <id>10.14569/IJACSA.2023.0141250</id>
        <doi>10.14569/IJACSA.2023.0141250</doi>
        <lastModDate>2023-12-29T11:47:34.1230000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <creator>Tuan Anh Nguyen</creator>
        
        <subject>Underwater object recognition; swin transformer; self-attention; feature alignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Underwater object recognition presents a unique set of challenges due to the complex and dynamic characteristics of marine environments. This paper introduces a novel, multi-layered architecture that leverages the capabilities of Swin Transformer modules to process segmented image patches derived from aquatic scenes. A key component of our approach is the integration of the Feature Alignment Module (FAM), which is designed to address the complexities of underwater object recognition by enabling the model to selectively emphasize essential features. It combines multi-level features from various network stages, thereby enhancing the depth and scope of feature representation. Furthermore, this paper incorporates multiple detection heads, each embedded with the innovative ACmix module. This module offers an integrated fusion of convolution and self-attention mechanisms, refining detection precision. With the combined strengths of the Swin Transformer, FAM, and ACmix module, the proposed method achieves significant improvements in underwater object detection. To demonstrate the robustness and effectiveness of the proposed method, we conducted experiments on the UTDAC2020 dataset, highlighting its potential and contributions to the field.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_50-Enhancing_Underwater_Object_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Risk Prediction in the Health Insurance Sector using GIS and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141249</link>
        <id>10.14569/IJACSA.2023.0141249</id>
        <doi>10.14569/IJACSA.2023.0141249</doi>
        <lastModDate>2023-12-29T11:47:34.1070000+00:00</lastModDate>
        
        <creator>Prasanta Baruah</creator>
        
        <creator>Pankaj Pratap Singh</creator>
        
        <creator>Sanjiv kumar Ojah</creator>
        
        <subject>Risk prediction; data analytics; predictive analytics; underwriting; geographical information systems; random forest; artificial neural network; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Risk evaluation is a key component in categorizing the customers of life insurance businesses. The underwriting process is carried out by the industry to price policies appropriately. Given the huge availability of data, the underwriting process can be automated using data analytics technology, making it faster and able to quickly process a large number of applications. This study is carried out to enhance the risk assessment of applicants in the life insurance industry using predictive analytics. In this research, a Geographical Information Systems (GIS) system is used to collect data such as air pollution, industrial areas, Covid-19, and malaria across various geographic areas of our country, since these factors contribute to the risk of a life insurance applicant. Thereafter, the research is carried out using this dataset along with another dataset containing more than 50,000 entries of standard applicant attributes from a life insurance company. Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) algorithms are applied on both datasets to predict the risks of the applicants. The results showed that Random Forest outperformed all the other algorithms, providing the most accurate results.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_49-A_Novel_Framework_for_Risk_Prediction_in_the_Health_Insurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Hyperparameter Tuning of EfficientNetV2-based Image Classification by Deliberately Modifying Optuna Tuned Result</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141248</link>
        <id>10.14569/IJACSA.2023.0141248</id>
        <doi>10.14569/IJACSA.2023.0141248</doi>
        <lastModDate>2023-12-29T11:47:34.0900000+00:00</lastModDate>
        
        <creator>Jin Shimazoe</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Mari Oda</creator>
        
        <subject>Hyperparameter tuning; EfficientNetV2; Optuna; textile pattern; optimal hyperparameter; learning process; pattern fluctuation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>A method for hyperparameter tuning of EfficientNetV2-based image classification by deliberately modifying the Optuna-tuned result is proposed. An example of the proposed method for textile pattern quality evaluation (classifying textile pattern fluctuation quality as good or bad) is shown. When the hyperparameters obtained by Optuna were used without modification, the accuracy certainly improved. Furthermore, when learning was performed after changing the hyperparameter with the highest importance, the accuracy changed, confirming that its importance was indeed high. However, the accuracy also changes when learning is performed after changing the least important hyperparameter, and in some cases the accuracy is improved compared to learning with the supposedly optimal hyperparameters. From this result, it is found that the optimal hyperparameters obtained with Optuna are not necessarily optimal.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_48-Method_for_Hyperparameter_Tuning_of_EfficientNetV2.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Construction of Campus Network Public Opinion Analysis Model Based on T-GAN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141247</link>
        <id>10.14569/IJACSA.2023.0141247</id>
        <doi>10.14569/IJACSA.2023.0141247</doi>
        <lastModDate>2023-12-29T11:47:34.0770000+00:00</lastModDate>
        
        <creator>Jianan Zhang</creator>
        
        <subject>Public opinion analysis; T-GAN; feature extraction; multi-scale convolutional neural network; campus network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The advancement of information technology has made the internet and social media an indispensable part of modern life, but with it comes a flood of false information and rumors. The aim of this study is to develop a technology that can automatically identify campus network public opinion information, in order to protect student groups from the intrusion of erroneous information, maintain their mental health, and promote a clear campus public opinion environment. This study used the Scrapy framework to write web scraping scripts to collect campus public opinion data, which was then cleaned and preprocessed. Then, a transformer-based generative adversarial network (T-GAN) model was designed, combined with a multi-scale convolutional neural network (MCNN) structure, for public opinion analysis on campus networks. The results show that on the processed dataset the T-GAN model achieves higher accuracy than LightGBM, KNN, SVM, and RoBERTa, proving that the campus network public opinion analysis model based on the T-GAN model helps to automatically identify campus network public opinion, protect students&#39; physical and mental health, and promote the healthy development of the campus network environment.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_47-The_Construction_of_Campus_Network_Public_Opinion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Durian Disease Classification using Vision Transformer for Cutting-Edge Disease Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141246</link>
        <id>10.14569/IJACSA.2023.0141246</id>
        <doi>10.14569/IJACSA.2023.0141246</doi>
        <lastModDate>2023-12-29T11:47:34.0600000+00:00</lastModDate>
        
        <creator>Marizuana Mat Daud</creator>
        
        <creator>Abdelrahman Abualqumssan</creator>
        
        <creator>Fadilla ‘Atyka Nor Rashid</creator>
        
        <creator>Mohamad Hanif Md Saad</creator>
        
        <creator>Wan Mimi Diyana Wan Zaki</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <subject>Vision transformer; durian disease; deep learning; disease control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The durian fruit holds a prominent position as a beloved fruit not only in ASEAN countries but also in European nations. Its significant potential for contributing to economic growth in the agricultural sector is undeniable. However, the prevalence of durian leaf diseases in various ASEAN countries, including Malaysia, Indonesia, the Philippines, and Thailand, presents formidable challenges. Traditionally, the identification of these leaf diseases has relied on manual visual inspection, a laborious and time-consuming process. In response to this challenge, an innovative approach is presented for the classification and recognition of durian leaf diseases, which delves into cutting-edge disease control strategies using a vision transformer. The disease classes include leaf spot, blight spot, algal leaf spot, and a healthy class. Our methodology incorporates well-established deep learning models, specifically the vision transformer model, with meticulous fine-tuning of hyperparameters such as epochs, optimizers, and maximum learning rates. Notably, our research demonstrates an outstanding achievement: the vision transformer attains an impressive accuracy rate of 94.12% using the Adam optimizer with a maximum learning rate of 0.001. This work not only provides a robust solution for durian disease control but also showcases the potential of advanced deep learning techniques in agricultural practices. Our work contributes to the broader field of precision agriculture and underscores the critical role of technology in securing the future of durian farming.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_46-Durian_Disease_Classification_using_Vision_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Machine Learning in Learning Problems and Disorders: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141245</link>
        <id>10.14569/IJACSA.2023.0141245</id>
        <doi>10.14569/IJACSA.2023.0141245</doi>
        <lastModDate>2023-12-29T11:47:34.0430000+00:00</lastModDate>
        
        <creator>Mario Aquino Cruz</creator>
        
        <creator>Oscar Alcides Choquehuallpa Hurtado</creator>
        
        <creator>Esther Calatayud Madariaga</creator>
        
        <subject>Machine learning; learning disorder; deep learning; ADHD; dyslexia; learning impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Learning Disorders, which affect approximately 10% of the school population, represent a significant challenge in the educational field. The lack of proper diagnosis and treatment can have profound consequences, triggering psychological problems in those affected by disorders that impact reading, writing, numeracy, and attention, among others. Notable among them are Attention Deficit Hyperactivity Disorder (ADHD) and dyslexia. In this context, a literature review focusing on Machine Learning applications to address these educational problems is conducted. The methodology proposed by Barbara Kitchenham guides this analysis, using the online tool Parsifal for the review, generation of search strings, formulation of research questions, and management of information sources. The initial findings of this research highlight a growing trend in the application of Machine Learning techniques to learning problems and disorders, especially in the last five years (from 2019 onward). Among the primary sources, the IEEE Digital Library emerges as a key source of information in this rapidly developing field. This innovative approach has the potential to significantly improve early detection, accurate diagnosis, and implementation of personalized interventions, thus offering new perspectives in understanding and addressing the educational challenges associated with Learning Disorders.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_45-Application_of_Machine_Learning_in_Learning_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Privacy of Cloud Data Auditing Protocols: A Review, State-of-the-art, Open Issues, and Future Research Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141243</link>
        <id>10.14569/IJACSA.2023.0141243</id>
        <doi>10.14569/IJACSA.2023.0141243</doi>
        <lastModDate>2023-12-29T11:47:34.0270000+00:00</lastModDate>
        
        <creator>Muhammad Farooq</creator>
        
        <creator>Mohd Rushdi Idrus</creator>
        
        <creator>Adi Affandi Ahmad</creator>
        
        <creator>Ahmad Hanis Mohd Shabli</creator>
        
        <creator>Osman Ghazali</creator>
        
        <subject>Cloud computing; proof of possession; data integrity auditing; proof of retrievability; public auditing; proof of ownership</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Cloud service providers offer a trustworthy and resilient storage environment for on-demand cloud services to which clients outsource their data. Several researchers and business entities currently adopt cloud services to store their data in remote cloud storage servers for cost-saving purposes. Cloud storage offers numerous advantages to users, such as scalability, low capital expenses, and data availability from any place, anytime, regardless of location and device. However, as users lose physical access to and control over their data, the storage service raises security and privacy issues, such as the confidentiality, integrity, and availability of outsourced data. Data integrity is a primary concern for cloud users, who need to confirm whether their outsourced data remains intact. This paper presents a comprehensive review of cloud data auditing schemes and a comparative analysis of their desirable features. Furthermore, it provides the advantages and disadvantages of the state-of-the-art techniques and a performance comparison regarding the communication and computation costs of the involved entities. It also highlights desirable features of different techniques, open issues, and future research trends of cloud data auditing protocols.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_43-Security_and_Privacy_of_Cloud_Data_Auditing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Promises, Challenges and Opportunities of Integrating SDN and Blockchain with IoT Applications: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141244</link>
        <id>10.14569/IJACSA.2023.0141244</id>
        <doi>10.14569/IJACSA.2023.0141244</doi>
        <lastModDate>2023-12-29T11:47:34.0270000+00:00</lastModDate>
        
        <creator>Loubna Elhaloui</creator>
        
        <creator>Mohamed Tabaa</creator>
        
        <creator>Sanaa Elfilali</creator>
        
        <creator>El habib Benlahmar</creator>
        
        <subject>Internet of things; SDN; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Security is a major issue in the IT world, and its aim is to maintain user confidence and the coherence of the entire information system. Various international and European research projects, as well as IT manufacturers, have proposed new solutions and mechanisms to solve the problem of security in the IoT environment. Software-Defined Networking (SDN) and Blockchain are advanced technologies utilized globally for establishing secure network communication and constructing resilient network infrastructures. They serve as a robust and dependable foundation for addressing various challenges, including security, privacy, scalability, and access control. Indeed, SDN and Blockchain technologies have demonstrated their ability to efficiently manage resource utilization and facilitate secure network communication within the Internet of Things (IoT) ecosystem. Nonetheless, there exists a research gap concerning the creation of a comprehensive framework that can fulfill the unique requirements of the IoT environment. Consequently, this paper presents a recent investigation into the integration of SDN and Blockchain with IoT. The objective is to analyze their primary contributions and identify the challenges involved. Subsequently, we offer relevant recommendations to address these challenges and enhance the security and privacy of the IoT landscape.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_44-Promises_Challenges_and_Opportunities_of_Integrating_SDN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Pothole Detection for Intelligent Transportation: A YOLOv5 Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141242</link>
        <id>10.14569/IJACSA.2023.0141242</id>
        <doi>10.14569/IJACSA.2023.0141242</doi>
        <lastModDate>2023-12-29T11:47:34.0130000+00:00</lastModDate>
        
        <creator>Qian Li</creator>
        
        <creator>Yanjuan Shi</creator>
        
        <creator>Qing Liu</creator>
        
        <creator>Gang Liu</creator>
        
        <subject>Pothole detection; deep learning; intelligent transportation systems; YOLOv5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Pothole detection plays a crucial role in intelligent transportation systems, ensuring road safety and efficient infrastructure management. Extensive research in the literature has explored various methods for pothole detection. Among these approaches, deep learning-based methods have emerged as highly accurate alternatives, surpassing other techniques. The widespread adoption of deep learning in pothole detection can be justified by its ability to automatically learn discriminative features, leading to improved detection performance. Nevertheless, the present research challenge lies in achieving high accuracy rates while maintaining non-destructiveness and real-time processing. In this study, we propose a deep learning model based on the YOLOv5 architecture to address this challenge. Our method includes generating a custom dataset and conducting training, validation, and testing processes. Experimental outcomes and performance evaluations demonstrate the proposed method&#39;s efficacy, showcasing its accurate detection capabilities.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_42-Deep_Learning_based_Pothole_Detection_for_Intelligent_Transportation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Cluster Head Selection in Wireless Sensor Network via Combined Osprey-Chimp Optimization Algorithm: CIOO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141241</link>
        <id>10.14569/IJACSA.2023.0141241</id>
        <doi>10.14569/IJACSA.2023.0141241</doi>
        <lastModDate>2023-12-29T11:47:33.9970000+00:00</lastModDate>
        
        <creator>Vikhyath K B</creator>
        
        <creator>Achyutha Prasad N</creator>
        
        <subject>Wireless sensor network; clustering; cluster head; cluster head selection; chimp optimization; osprey optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The development of Wireless Sensor Networks (WSNs) has gained significant attention for smart systems due to their potential use in a wide range of areas. A WSN consists of tiny, independently arranged sensor nodes that run on batteries, so the resource and energy usage of sensor nodes are major factors. In particular, unbalanced node loads raise energy use and reduce the network lifespan. Energy-efficient cluster head selection in WSNs remains a challenging task. Clustering is the best-established method for reducing node energy consumption. However, current clustering strategies fail to properly balance the energy needs of the nodes, as they do not consider energy features, node quantity, and adaptability. Hence, there is a need for an advanced clustering process with new optimization tactics, and accordingly, a new cluster-head selection model for WSNs is proposed in this work. Initially, the clustering process is done by the k-means algorithm. Cluster Head (CH) selection then follows, considering node energy, distance, delay, and risk. A novel CIOO (Chimp Integrated Osprey Optimization) algorithm, combining the Osprey and Chimp optimization algorithms, is proposed for Cluster Head Selection (CHS). Finally, the performance of the proposed model is evaluated against conventional methods.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_41-Optimal_Cluster_Head_Selection_in_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of the Application of Weighted Cosine Similarity and Minkowski Distance Similarity Methods in Stroke Diagnostic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141240</link>
        <id>10.14569/IJACSA.2023.0141240</id>
        <doi>10.14569/IJACSA.2023.0141240</doi>
        <lastModDate>2023-12-29T11:47:33.9800000+00:00</lastModDate>
        
        <creator>Joko Purwadi</creator>
        
        <creator>Rosa Delima</creator>
        
        <creator>Argo Wibowo</creator>
        
        <creator>Angelina Rumuy</creator>
        
        <subject>Expert system; stroke; case-based reasoning; Minkowski Distance; jaccard coefficient; weighted cosine; threshold; accuracy; diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Stroke is a critical medical condition requiring prompt intervention due to its multifaceted symptoms and causes influenced by various factors, including psychological aspects and the patient&#39;s lifestyle or daily habits that impact risk factors. The recovery process involves consistent medical care and lifestyle adjustments tailored to the individual case. Expert Systems, a scientific field focused on studying and developing diagnostic systems, can employ the Case-based Reasoning method to identify the type of stroke based on similarities with prior patient cases, considering specific causes and symptoms. This study utilizes the Weighted Cosine, Jaccard Coefficient, and Minkowski Distance methods to assess the similarity of stroke cases. The evaluation is based on input data such as patient causes or symptoms and risk factors from medical records. The analysis of case similarity and solutions involves applying the Weighted Cosine, Jaccard Coefficient, and Minkowski Distance methods, with a defined threshold value. The highest similarity values from previous patient cases are selected for each method. The test outcomes suggest that employing the Minkowski Distance method with a threshold value of 75 and an r value of three or four yields the highest levels of accuracy, recall, and precision. The Minkowski Distance achieves an accuracy and recall rate of more than 88 percent with 100 percent precision.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_40-Comparison_of_the_Application_of_Weighted_Cosine_Similarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-based Secure 5G Network Slicing: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141239</link>
        <id>10.14569/IJACSA.2023.0141239</id>
        <doi>10.14569/IJACSA.2023.0141239</doi>
        <lastModDate>2023-12-29T11:47:33.9670000+00:00</lastModDate>
        
        <creator>Meshari Huwaytim Alanazi</creator>
        
        <subject>5G; accuracy; deep learning; efficiency; security; machine learning; network slicing; scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>As the fifth-generation (5G) wireless networks continue to advance, the concept of network slicing has gained significant attention for enabling the provisioning of diverse services tailored to specific application requirements. However, the security concerns associated with network slicing pose significant challenges that demand comprehensive exploration and analysis. In this paper, we present a systematic literature review that critically examines the existing body of research on machine learning techniques for securing 5G network slicing. Through an extensive analysis of a wide range of scholarly articles selected from specific search databases, we identify and classify the key machine learning approaches proposed for enhancing the security of network slicing in the 5G environment. We investigate these techniques based on their effectiveness in addressing various security threats and vulnerabilities while considering factors such as accuracy, scalability, and efficiency. Our review reveals that machine learning techniques, including deep learning algorithms, have been proposed for anomaly detection, intrusion detection, and authentication in 5G network slicing. However, we observe that these techniques face challenges related to accuracy under dynamic and heterogeneous network conditions, scalability when dealing with a large number of network slices, and efficiency in terms of computational complexity and resource utilization. To overcome these challenges, our experimentation shows that the integration of reinforcement learning techniques with CNNs, multi-agent reinforcement learning, and distributed SVM frameworks emerged as potential solutions with improved accuracy and scalability in network slicing. Furthermore, we identify promising research directions, including the exploration of hybrid machine learning models, the adoption of explainable AI techniques, and the investigation of privacy-preserving mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_39-Machine_Learning_based_Secure_5G_Network_Slicing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Aware Clustering in the Internet of Things using Tabu Search and Ant Colony Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141238</link>
        <id>10.14569/IJACSA.2023.0141238</id>
        <doi>10.14569/IJACSA.2023.0141238</doi>
        <lastModDate>2023-12-29T11:47:33.9500000+00:00</lastModDate>
        
        <creator>Mei Li</creator>
        
        <creator>Jing Ai</creator>
        
        <subject>Internet of things; clustering; data transmission; energy efficiency; ant colony optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The Internet of Things (IoT) significantly impacts communication systems&#39; efficiency and the requirements for applications in our daily lives. Among the major challenges in data transmission over IoT networks is the development of an energy-efficient clustering mechanism. Recent methods suffer from long transmission delays, imbalanced load distribution, and limited network lifespan. This paper proposes a new cluster-based routing method combining the Tabu Search (TS) and Ant Colony Optimization (ACO) algorithms. The TS algorithm compensates for a disadvantage of ACO, in which ants move randomly throughout the colony in search of food sources; this randomness can trap ants in local optima and considerably increase the time required for local searches. The TS algorithm eliminates the problem of getting stuck in local optima caused by the randomness of the search process. Experimental results indicate that the proposed hybrid algorithm outperforms the ACO, LEACH, and genetic algorithms in terms of energy consumption and network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_38-Energy_Aware_Clustering_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encryption Traffic Classification Method Based on ConvNeXt and Bilinear Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141237</link>
        <id>10.14569/IJACSA.2023.0141237</id>
        <doi>10.14569/IJACSA.2023.0141237</doi>
        <lastModDate>2023-12-29T11:47:33.9330000+00:00</lastModDate>
        
        <creator>Xiaohua Feng</creator>
        
        <creator>Yuan Liu</creator>
        
        <subject>Encryption traffic recognition; end-to-end; convolutional neural network; bilinear attention module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The rapid growth in internet traffic has led to the emergence of network traffic categorization as a crucial area of research in network performance and management. This technological advancement has demonstrated its efficacy in helping network administrators identify anomalies in network behavior. However, the widespread adoption of encryption technology and the continual evolution of encryption protocols present a novel challenge in the classification of encrypted traffic. Addressing this challenge, this paper introduces an innovative methodology for classifying encrypted traffic by harnessing ConvNeXt and a fusion attention mechanism. By representing traffic data as images and integrating a bilinear attention mechanism into the model, our proposed approach attains heightened precision in the classification of encrypted network traffic. To substantiate the effectiveness of our methodology, experiments were conducted on the publicly available ISCX VPN-nonVPN dataset. The experimental findings show superior recognition performance, underscoring the efficacy of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_37-Encryption_Traffic_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urban Image Segmentation in Media Integration Era Based on Improved Sparse Matrix Generation of Digital Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141235</link>
        <id>10.14569/IJACSA.2023.0141235</id>
        <doi>10.14569/IJACSA.2023.0141235</doi>
        <lastModDate>2023-12-29T11:47:33.9200000+00:00</lastModDate>
        
        <creator>Dan Zheng</creator>
        
        <creator>Yan Xie</creator>
        
        <subject>Media integration; digital image processing; city image; image segmentation; improved sparse matrix generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Media integration combines the resources of various media platforms, including audience, technical, and human resources. In the era of media integration, media across different channels, levels, and fields provide diverse options for the image communication and brand building of cities. Computer-based image processing technology has gradually affected all aspects of people&#39;s lives and work, bringing ever greater convenience. This paper studies the application of digital image processing technology to city image communication in the age of media integration. A new sparse matrix creation method is proposed, and the created sparse matrix is used as the similarity matrix for spectral clustering image segmentation, so that edge contours weakened during gradient calculation can be corrected and strengthened again. The research shows that the improved algorithm is superior to the traditional algorithm: compared with the fuzzy entropy algorithm based on exhaustive search, the gray contrast between regions and the Bezdek partition coefficient are improved by 9.301% and 4.127%, respectively. In terms of speed, the proposed algorithm has a clear advantage, demonstrating its high application value.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_35-Urban_Image_Segmentation_in_Media_Integration_Era.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Stacking Ensemble Model for Predicting Diabetes Mellitus using Combination of Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141236</link>
        <id>10.14569/IJACSA.2023.0141236</id>
        <doi>10.14569/IJACSA.2023.0141236</doi>
        <lastModDate>2023-12-29T11:47:33.9200000+00:00</lastModDate>
        
        <creator>Abdulaziz A Alzubaidi</creator>
        
        <creator>Sami M Halawani</creator>
        
        <creator>Mutasem Jarrah</creator>
        
        <subject>DM; Diabetes Mellitus; Stacking; Ensemble learning; Machine Learning; Random Forest (RF); Logistic Regression (LR); Extreme Gradient Boosting model (XGBoost)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Diabetes Mellitus (DM) is a chronic disease affecting the world&#39;s population; it causes long-term complications such as kidney failure, blindness, and heart disease, harming quality of life. Diagnosing diabetes mellitus at an early stage is a challenging and decisive task for medical experts, as delayed diagnosis complicates controlling the progression of the disease. Therefore, this research aims to develop a novel stacking ensemble model to predict diabetes mellitus using a combination of machine learning models: an ensemble of prediction classifiers, Random Forest (RF) and Logistic Regression (LR), serves as the base learners, and the Extreme Gradient Boosting model (XGBoost) serves as the meta-learner. The results indicated that our proposed stacking model can predict diabetes mellitus with 83% accuracy on the Pima dataset and 97% on the DPD dataset. In conclusion, our proposed model can be used to build a diagnostic application for diabetes mellitus, and we recommend testing the model on a larger and more diverse dataset to obtain more accurate results.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_36-Towards_A_Stacking_Ensemble_Model_for_Predicting_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Crack Detection and Crack Length Calculation Method using Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141234</link>
        <id>10.14569/IJACSA.2023.0141234</id>
        <doi>10.14569/IJACSA.2023.0141234</doi>
        <lastModDate>2023-12-29T11:47:33.9030000+00:00</lastModDate>
        
        <creator>Jewon Oh</creator>
        
        <creator>Yutaka Matsumoto</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Image processing; crack detection; length calculation; color detection; canny; threshold; OpenCV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>To evaluate the integrity of a building, many experts and engineers have classified building damage based on superficial visual information gathered through field surveys. On-site surveys are hazardous and require several years of experience and expertise. In this study, a system for detecting the presence or absence of cracks and calculating their lengths was developed using image processing technology. The accuracy of the system was examined using crack image data obtained from shear force experiments. A crack detection method was developed using Canny edge detection, thresholding, and HSV color detection. Crack presence detection was coupled with image segmentation to improve detection accuracy. A method for calculating crack length using image processing was also developed. In this study, we proposed a method to calculate cracks as straight-line lengths and obtained results with 98.1% accuracy. However, curved cracks required rotating or segmenting the image.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_34-Development_of_Crack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-based Optimization Models for the Technical Workforce Allocation and Routing Problem Considering Productivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141233</link>
        <id>10.14569/IJACSA.2023.0141233</id>
        <doi>10.14569/IJACSA.2023.0141233</doi>
        <lastModDate>2023-12-29T11:47:33.8870000+00:00</lastModDate>
        
        <creator>Mariam Alzeraif</creator>
        
        <creator>Ali Cheaitou</creator>
        
        <subject>Productivity; workforce; maintenance; optimization; allocation; routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Ensuring the reliability and availability of electric power networks is essential due to increasing demand. An effective preventive maintenance strategy requires efficient allocation of resources to perform the maintenance tasks, particularly the technical workforce. This paper introduces an innovative artificial intelligence-based approach to predict workforce productivity, aiming to optimize both the allocation of the technical workforce for maintenance tasks and their routing. In this study, two mathematical optimization models are introduced that utilize the output of Artificial Neural Networks (ANN) for optimal resource allocation and routing. The first model focuses on team formation, considering the predicted productivity to ensure effective collaboration, while the second model focuses on the optimal assignment and routing of these teams to specific maintenance tasks. Validated with real-world data, the models show considerable promise in enhancing resource allocation, task assignment, and cost-efficiency in the electricity industry. Furthermore, sensitivity analysis has been conducted and managerial insights have been explored. The study also paves the way for future research, highlighting the potential for refining these models for more extensive applications.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_33-Artificial_Intelligence_based_Optimization_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach with VADER and Multinomial Logistic Regression for Multiclass Sentiment Analysis in Online Customer Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141232</link>
        <id>10.14569/IJACSA.2023.0141232</id>
        <doi>10.14569/IJACSA.2023.0141232</doi>
        <lastModDate>2023-12-29T11:47:33.8570000+00:00</lastModDate>
        
        <creator>Murahartawaty Arief</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <subject>Hybrid approach; multiclass sentiment analysis; VADER; multinomial logistic regression; online customer review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Sentiment analysis is crucial for businesses to understand customer reviews and assess sentiment polarity. A hybrid technique combining VADER and Multinomial Logistic Regression was used to analyze customer sentiment in online customer review data. VADER is a lexicon-based approach that labels reviews with sentiment using a predefined lexicon, whereas Multinomial Logistic Regression determines the polarity of sentiment using the VADER-labeled data. This study employed multiclass classification using TF-IDF vectorization to categorize sentiment into positive, negative, and neutral classes. Correctly managing neutral sentiments can help businesses identify improvement opportunities. The utilization of the VADER lexicon and Multinomial Logistic Regression has been shown to significantly improve the performance of sentiment analysis in multiclass classification problems. With a 75.213% accuracy rate, the VADER lexicon accurately recognizes neutral sentiment and is well suited to categorizing sentiment in customer reviews. Combined with Multinomial Logistic Regression, accuracy increases to 92.778%. In conclusion, the hybrid approach with VADER and Multinomial Logistic Regression can leverage the accuracy and reliability of multiclass customer sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_32-Hybrid_Approach_with_VADER_and_Multinomial_Logistic_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Paradigm for IoT Security: ResNet-GRU Model Revolutionizes Botnet Attack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141231</link>
        <id>10.14569/IJACSA.2023.0141231</id>
        <doi>10.14569/IJACSA.2023.0141231</doi>
        <lastModDate>2023-12-29T11:47:33.8570000+00:00</lastModDate>
        
        <creator>Jyotsna A</creator>
        
        <creator>Mary Anita E. A</creator>
        
        <subject>Internet of things; federated learning; Gated Recurrent Neural Networks; Long Short Term Memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The rapid proliferation of the Internet of Things (IoT) has engendered substantial security apprehensions, chiefly due to the emergence of botnet attacks. This research study delves into the realm of Intrusion Detection Systems (IDS) by leveraging the IoT23 dataset, with a specific emphasis on the intricate domain of IoT at the network&#39;s edge. The evolution of edge computing underscores the exigency for tailored security solutions. An array of statistical methodologies, encompassing ANOVA, Kruskal-Wallis, and Friedman tests, is systematically employed to illuminate the evolving trends across multiple facets of the study. Given the intricacies entailed in feature selection within edge environments, Chi-square analyses, Recursive Feature Elimination (RFE), and Lasso-based techniques are strategically harnessed to unearth meaningful feature subsets. A meticulous evaluation encompassing 19 classifiers, meticulously selected from both machine learning (ML) and deep learning (DL) paradigms, is rigorously conducted. Initial findings underscore the potential of the Gated Recurrent Unit (GRU) model, especially when coupled with intrinsic lasso-based feature selection. This promising outcome catalyzes the formulation of an ensemble approach that harnesses multiple LassoCV models, aimed at amplifying feature selection proficiency. Furthermore, an optimized ResNet-GRU model emerges from the fusion of the GRU and ResNet architectures, with the objective of augmenting classification performance. In response to mounting concerns regarding data privacy at the edge, a resilient federated learning ecosystem is meticulously crafted. The seamless integration of the optimized ResNet-GRU model into this framework facilitates the employment of FedAvg, a widely acclaimed federated learning methodology, to adeptly navigate the intricacies associated with data sharing challenges. A comprehensive performance evaluation is undertaken, wherein the ResNet-GRU model is benchmarked against FedAvg and a diverse array of other federated learning algorithms, including FedProx and Fed+. This extensive comparative analysis encompasses a spectrum of performance metrics and processing time benchmarks, shedding comprehensive light on the capabilities of the model.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_31-A_Novel_Paradigm_for_IoT_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Framework for Elevating the Usage of eLearning Technologies in Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141230</link>
        <id>10.14569/IJACSA.2023.0141230</id>
        <doi>10.14569/IJACSA.2023.0141230</doi>
        <lastModDate>2023-12-29T11:47:33.8400000+00:00</lastModDate>
        
        <creator>Naif Alzahrani</creator>
        
        <creator>Hassan Alghamdi</creator>
        
        <subject>eLearning; higher education; business and IT alignment; enterprise architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Adopting eLearning technologies is no longer optional for Higher Education Institutions (HEIs) seeking to support teaching and learning activities. However, despite steps taken by most of these institutions towards effective utilization of technologies, the current situation on the ground shows two significant challenges. The first is the misalignment between institutions&#39; strategies and the technical implementation of these technologies. The second is the scattered implementation and usage of eLearning technologies among stakeholders, which increases operational overhead due to the lack of a unified approach and usage procedures that promote optimal utilization of such technologies. This paper aims to introduce a framework for elevating the usage of eLearning technologies in HEIs. It guides the alignment between strategic goals and technology implementation for effective and progressive eLearning technology usage. Design science research methodology is adopted to guide the development of this framework: it drives the development process by first building awareness of the problem from a real-life context and then proposing a solution. Principles from business and IT alignment and enterprise architecture are adopted to propose this framework, which is meant to be comprehensive so that eLearning technologies fit the institution&#39;s purpose while achieving strategic goals.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_30-Towards_a_Framework_for_Elevating_the_Usage_of_eLearning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Fruit Identification and Counting using Machine Vision Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141229</link>
        <id>10.14569/IJACSA.2023.0141229</id>
        <doi>10.14569/IJACSA.2023.0141229</doi>
        <lastModDate>2023-12-29T11:47:33.8270000+00:00</lastModDate>
        
        <creator>Madhura R. Shankarpure</creator>
        
        <creator>Dipti D. Patil</creator>
        
        <subject>Image processing; fruit; multiphase approach; counting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Estimating fruit yield holds significant importance for farmers as it enables them to make precise resource management decisions for fruit harvesting. The adoption of automated image processing technology not only reduces the human labor required but also enhances the accuracy of ripe fruit estimates. This research examines the performance of an image processing algorithm designed to count and identify oranges. The study employed a multi-phase approach, starting with the creation of a mask to isolate orange content, followed by the detection of circular shapes within the mask. Lastly, the algorithm filters and counts the identified circles. The outcome of this study revealed that the algorithm achieved a success rate of approximately 72.4% in correctly identifying oranges, with a standard deviation of +/- 12.20.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_29-Smart_Fruit_Identification_and_Counting_using_Machine_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multidimensional Private Information Portrait in Social Network Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141228</link>
        <id>10.14569/IJACSA.2023.0141228</id>
        <doi>10.14569/IJACSA.2023.0141228</doi>
        <lastModDate>2023-12-29T11:47:33.8100000+00:00</lastModDate>
        
        <creator>Fangfang Shan</creator>
        
        <creator>Mengyi Wang</creator>
        
        <creator>Huifang Sun</creator>
        
        <subject>Social network; personal privacy information; privacy information portrait; sensitivity; privacy protection; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In order to tackle the challenges of users&#39; weak privacy awareness and frequent disclosure of private information in social networks, this paper proposes a multidimensional privacy information portrait model of users in Chinese social networks. Because the standard TF-IDF (Term Frequency-Inverse Document Frequency) algorithm does not consider the distribution of feature terms among and within classes, this paper uses a TF-IDF algorithm based on the bag-of-words model to calculate the sensitivity of user privacy information. Considering the diversity of user privacy information, this paper proposes the PROLM (Positive Reverse Order Lookaround Matching) algorithm, which is combined with the FlashText+ (improved FlashText) algorithm and a string matching algorithm (SMA) into PROLM_FlashText+_SMA, to extract users&#39; personal privacy information and its location, and to return its sensitivity. Using a BERT (Bidirectional Encoder Representations from Transformers)-Softmax privacy information classification model, privacy information is classified into high, moderate, and mild categories, and a multidimensional privacy information portrait of the user is constructed based on the privacy information and its sensitivity. Experiments show that the PROLM_FlashText+_SMA algorithm reaches 93.63% accuracy in privacy information extraction, and privacy information classification using the BERT-Softmax model reaches an overall F1 score of 0.9798 on the test set, outperforming the baseline comparison models.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_28-Multidimensional_Private_Information_Portrait_in_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing User Requirements for e-Resources Interface Design in University Libraries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141227</link>
        <id>10.14569/IJACSA.2023.0141227</id>
        <doi>10.14569/IJACSA.2023.0141227</doi>
        <lastModDate>2023-12-29T11:47:33.8100000+00:00</lastModDate>
        
        <creator>Yuli Rohmiyati</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <creator>Noraidah Sahari</creator>
        
        <creator>Siti Aishah Hanawi</creator>
        
        <subject>User requirement; interface design; e-resources; social presence; university library; element</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>e-Resources in the university library are one of the primary learning-resource services that promote learning and research to improve university productivity. At present, users find it difficult to access e-resources and require assistance in finding them; when using the system, users feel frustrated, confused, and lost. Moreover, the e-resources service systems on library websites lack sociability and a sense of human warmth. Sociability and a sense of human warmth can be integrated into the website interface, which may evoke the sensation of being with an actual individual even when the service is provided online. This study investigated the social presence aspects that can be implemented in the library&#39;s e-resources system. The purpose of this study is to elicit social presence features that can be implemented in the design of e-resource interfaces on library websites. The study proceeded in three phases: a) web content analysis of twelve university library interfaces from several countries; b) interviews with library staff; and c) a questionnaire-based assessment of library website users. Website content analysis was used to identify elements offering unique features that support the implementation of social presence through the e-resources interface; interviews were used to validate the elements found in the web content analysis; and the questionnaire phase assessed user requirements for these social presence elements. The results of the empirical studies show that users need certain elements of social presence, such as comments, chat, ratings, voice, a personalized welcome in library accounts, tools, preferred language, links for reference managers, and social media, as well as ease-of-access features such as readable help font, color, and font size.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_27-Assessing_User_Requirements_for_e_Resources_Interface_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Topic Models for Short and Long Text Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141226</link>
        <id>10.14569/IJACSA.2023.0141226</id>
        <doi>10.14569/IJACSA.2023.0141226</doi>
        <lastModDate>2023-12-29T11:47:33.7930000+00:00</lastModDate>
        
        <creator>Astha Goyal</creator>
        
        <creator>Indu Kashyap</creator>
        
        <subject>Topic modeling; Nonnegative Matrix Factorization (NMF); Latent Dirichlet Allocation (LDA); evaluation metrics; short text mining; long text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The digital age has brought significant information to the Internet through long text articles, webpages, and short text messages on social media platforms. As the information sources continue to grow, Machine Learning and Natural Language Processing techniques, including topic modeling, are employed to analyze and demystify this data. The performance of topic modeling algorithms varies significantly depending on the text data&#39;s characteristics, such as text length. This comprehensive analysis aims to compare the performance of the state-of-the-art topic models: Nonnegative Matrix Factorization (NMF), Latent Dirichlet Allocation using Variational Bayes modeling (LDA-VB), and Latent Dirichlet Allocation using Collapsed Gibbs-Sampling (LDA-CGS), over short and long text datasets. This work utilizes four datasets: Conceptual Captions and Wider Captions, image captions for short text data, and 20 Newsgroups news articles and Web of Science containing science articles for long text data. The topic models are evaluated for each dataset using internal and external evaluation metrics and are compared against a known value of topic &#39;K.&#39; The internal and external evaluation metrics are the statistical metrics that assess the model&#39;s performance on classification, significance, coherence, diversity, similarity, and clustering aspects. Through comprehensive analysis and rigorous evaluation, this work illustrates the impact of text length on the choice of topic model and suggests a topic model that works for varied text length data. The experiment shows that LDA-CGS performed better than other topic models over the internal and external evaluation metrics for short and long text data.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_26-Comprehensive_Analysis_of_Topic_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Double Encryption Approach for Enhanced Cloud Data Security in Post-Quantum Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141225</link>
        <id>10.14569/IJACSA.2023.0141225</id>
        <doi>10.14569/IJACSA.2023.0141225</doi>
        <lastModDate>2023-12-29T11:47:33.7770000+00:00</lastModDate>
        
        <creator>Manjushree C V</creator>
        
        <creator>Nandakumar A N</creator>
        
        <subject>Cloud data security; double encryption; Post-Quantum Cryptography (PQC); NTRU Encrypt; AES Encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Quantum computers and research on quantum computers are increasing due to the efficiency and speed required for critical applications. This scenario also highlights the vital need to protect data against threats from quantum computers. Research on post-quantum threats remains minimal so far, yet it is much needed to protect the enormous amounts of healthcare, governmental, and other crucial data stored in the cloud. This research work presents an advanced hybrid double encryption approach for cloud data security based on Post-Quantum Cryptography (PQC) to prevent unauthorized access. The suggested approach combines the benefits of the NTRU and AES encryption algorithms and works in hybrid mode, offering strong security while resolving issues with real-time performance and cost-efficiency. A streamlined key management system is put in place to improve real-time processing, significantly reducing encryption and decryption delay times. Moreover, NTRU Encrypt dynamic parameter selection, which adapts security parameters based on data sensitivity, preserves both information accuracy and security. In addition to addressing real-time performance and data security, an innovative development in this method, known as Quantum-Adaptive Stream Flow Encryption (QASFE), enables secure data sharing and collaborative working within a quantum-resistant framework. This innovative feature enhances data accessibility while maintaining the highest level of security. In the era of post-quantum cryptography, our multifactor authentication technique, integrating double encryption and QASFE, is a proactive and flexible solution for securing cloud data and protecting data security and privacy against emerging threats.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_25-A_Hybrid_Double_Encryption_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based License Plate Recognition in IoT Smart Parking Systems using YOLOv6 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141223</link>
        <id>10.14569/IJACSA.2023.0141223</id>
        <doi>10.14569/IJACSA.2023.0141223</doi>
        <lastModDate>2023-12-29T11:47:33.7630000+00:00</lastModDate>
        
        <creator>Ming Li</creator>
        
        <creator>Li Zhang</creator>
        
        <subject>Internet of things; deep learning; smart parking system; license plate recognition; YOLOv6</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>License plate recognition (LPR) is pivotal for the seamless operation of Internet of things (IoT) and smart parking systems, ensuring the swift and effective identification and management of vehicles. Recent research has concentrated on refining LPR methods through deep learning approaches, proposing diverse strategies to enhance accuracy and reduce computation costs. This work tackles these challenges by introducing an innovative method rooted in the YOLOv6 algorithm. Leveraging a tailored dataset for model generation, the study employs rigorous methodologies involving validation, testing, and training. The resultant model demonstrates marked improvements in license plate recognition capabilities, surpassing the performance of existing methods. This breakthrough bears significant implications for advancing IoT smart parking systems, promising heightened reliability and efficiency in vehicle identification and management. Thorough experimental results and performance evaluations validate the efficacy of the proposed YOLOv6-based method. In-depth discussions and comparisons with state-of-the-art methods in the field lead to the conclusion that the introduced approach not only elevates accuracy but also enhances overall efficiency in license plate recognition for smart parking systems, thereby providing valuable contributions to the domain.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_23-Deep_Learning_based_License_Plate_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Influence of Membership Function and Degree on Sorghum Growth Prediction Models in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141224</link>
        <id>10.14569/IJACSA.2023.0141224</id>
        <doi>10.14569/IJACSA.2023.0141224</doi>
        <lastModDate>2023-12-29T11:47:33.7630000+00:00</lastModDate>
        
        <creator>Abdul Rahman</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Dedik Budianta</creator>
        
        <creator>Abdiansah</creator>
        
        <subject>Prediction; MANFIS; membership function; organic fertilizer; sorghum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Rapid advances in science and technology have significantly changed plant growth modeling. The main contribution to this transformation lies in the use of Machine Learning (ML) techniques. This study focuses on sorghum, an important agricultural crop with significant economic implications. Crop yield studies typically consider temperature, humidity, climate, rainfall, and soil nutrition. The novelty of this research lies in the input factors used to predict sorghum plant growth, namely the treatment of applying organic fertilizer and dolomite lime to the sorghum planting land. Because three growth factors are predicted, namely height, biomass, and panicle weight, the Multiple Adaptive Neural Fuzzy Inference System (MANFIS) model is used. This research investigates the impact of the membership function and its degree on the MANFIS model. A comprehensive comparison of various membership functions, including Gaussian, Triangular, Bell, and Trapezoidal functions, along with various degrees of membership, has been carried out. The dataset comprises sorghum growth data obtained from field experiments. The main objective was to assess the effectiveness of membership functions and degrees in accurately predicting the sorghum growth parameters of height, biomass, and panicle weight. This assessment uses metrics such as MAPE (Mean Absolute Percentage Error), MAE (Mean Absolute Error), and RMSE (Root Mean Square Error) to evaluate the predictive performance of the MANFIS model across the four membership function types and degrees. The best accuracy was obtained in predicting panicle weight (ANFIS-3) under the chicken manure treatment, using the Trapezoidal membership function with degree of membership [3,3], yielding a MAPE of 5.77%, an MAE of 0.2994, and an RMSE of 0.395.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_24-Influence_of_Membership_Function_and_Degree_on_Sorghum_Growth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gamers Intention Towards Purchasing Game Items in Virtual Community: Extending the Theory of Planned Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141222</link>
        <id>10.14569/IJACSA.2023.0141222</id>
        <doi>10.14569/IJACSA.2023.0141222</doi>
        <lastModDate>2023-12-29T11:47:33.7470000+00:00</lastModDate>
        
        <creator>Abbi Nizar Muhammad</creator>
        
        <creator>Achmad Nizar Hidayanto</creator>
        
        <subject>Theory of planned behavior; in-game items; purchasing intention; virtual community</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Virtual communities serve as bustling marketplaces where gamers engage in transactions for in-game items, driving the digital economy&#39;s expansion. This research aims to illuminate the determinants of steering users&#39; decisions within these online environments. Focusing on the constructs of Attitude, Subjective Norms, and Perceived Behavioral Control derived from the Theory of Planned Behavior (TPB), we investigate the factors shaping purchasing intentions. Employing structural equation modeling (SEM) on a robust dataset of 300 validated respondents, our analysis unveils insights into user motivations. Notably, the amalgamation of Attitude, Subjective Norms, and Perceived Behavioral Control explains 84% of the drivers guiding in-game item transactions within virtual communities. Our findings underscore the significance of certain attributes. Specifically, the perceived wisdom inherent in these transactions, the constructive influence of community discussions, and the ease of communication and negotiation channels within virtual realms emerge as pivotal determinants influencing user behavior. This study not only contributes to understanding user behavior in virtual spaces but also holds practical implications for scholars and industry stakeholders. By shedding light on these influential factors, this research informs strategies and interactions within virtual communities, offering valuable insights into the dynamics of the digital marketplace.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_22-Gamers_Intention_Towards_Purchasing_Game_Items.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Microaneurysms and Exudates for Early Detection of Diabetic Retinopathy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141221</link>
        <id>10.14569/IJACSA.2023.0141221</id>
        <doi>10.14569/IJACSA.2023.0141221</doi>
        <lastModDate>2023-12-29T11:47:33.7300000+00:00</lastModDate>
        
        <creator>G Indira Devi</creator>
        
        <creator>D. Madhavi</creator>
        
        <subject>Diabetic retinopathy; microaneurysms; hard exudates; SVM; LRC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Diabetic retinopathy (DR) is a complication of diabetes that can damage the retina and other small blood vessels throughout the body. Microaneurysms (MAs) and hard exudates (HEs) are two symptoms that occur in the early stage of DR, and their accurate and reliable detection in color fundus images is of great importance for DR screening. This paper presents a machine learning approach that detects MAs and HEs in retinal fundus images, using dynamic thresholding and fuzzy c-means clustering with characteristic feature extraction and different classification techniques. The performance of the system is evaluated by computing parameters such as sensitivity, specificity, accuracy, and precision, and the results are compared across different types of classifiers. The Logistic Regression classifier (LRC) performs best, with an accuracy of 94.6% in detecting MAs and 96.2% in detecting HEs.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_21-Identification_of_Microaneurysms_and_Exudates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable and Efficient Model for Water Quality Prediction and Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141219</link>
        <id>10.14569/IJACSA.2023.0141219</id>
        <doi>10.14569/IJACSA.2023.0141219</doi>
        <lastModDate>2023-12-29T11:47:33.7170000+00:00</lastModDate>
        
        <creator>Azween Abdullah</creator>
        
        <creator>Himakshi Chaturvedi</creator>
        
        <creator>Siddhesh Fuladi</creator>
        
        <creator>Nandhika Jhansi Ravuri</creator>
        
        <creator>Deepa Natesan</creator>
        
        <creator>M. K Nallakaruppan</creator>
        
        <subject>Random forest; logistic regression; feature importance; decision trees; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Water quality is a crucial aspect of environmental and public health. Hence, its assessment is of paramount importance. This research paper aims to leverage machine learning models to classify water quality based on a comprehensive dataset. The dataset contains various water quality indicators, and the primary objective is to predict whether the water is safe or not to consume or use. This research evaluates the performance of diverse machine learning algorithms, such as Decision Trees, Random Forest, Logistic Regression, Support Vector Machines, and more for comparative analysis. Performance metrics such as accuracy, precision, recall, and F1-score are used to assess the models&#39; effectiveness in classifying water quality. The Random Forest algorithm gave the best performance with an accuracy of 95.08%, an F1-Score of 94.69%, a Precision of 90.48%, a Recall of 93.10%, and an AUC score of 0.91. A comparative plot for the ROC AUC curve is also plotted between the various machine learning models used. Feature importance, which can help identify which water quality parameters have the greatest impact on predicting water quality outcomes, is also found in the research work.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_19-Reliable_and_Efficient_Model_for_Water_Quality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining Application Forecast of Business Trends of Electronic Products</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141220</link>
        <id>10.14569/IJACSA.2023.0141220</id>
        <doi>10.14569/IJACSA.2023.0141220</doi>
        <lastModDate>2023-12-29T11:47:33.7170000+00:00</lastModDate>
        
        <creator>Kheo Chau Mui</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <subject>Data mining; sales forecasting; clusters; regression tree; RMSE; MSE; k-prototypes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Sales forecasting is a pressing concern for companies amid rising consumer demand and intensifying competition, compounded by declining sales due to growing socio-economic challenges. Currently, many companies have difficulty selling products due to a lack of management systems. Data mining techniques are introduced to assist with this, but the data are difficult to evaluate and accurately forecasting large amounts of data is practically impossible. Nevertheless, data mining remains an important management tool that supports early decisions to increase profits, innovate business trends, and improve sales by generating intelligence from the company&#39;s data resources. In this article, the research object is the data of a nationwide electronics company, whose sales volume data for consumer electronics was used in this study. The study used a &quot;clustering&quot; algorithm to group the data based on the unique characteristics of each product, region, season, and time, estimating past sales volumes in order to forecast the volume of goods to be sold in following years and to identify market trends. With k = 3, the numbers of elements in the three clusters are 771422, 11874, and 312, respectively. Combined with the &quot;regression tree&quot; algorithm for cluster partitioning, and using MSE and RMSE to evaluate the accuracy of the model (a result of 43065.66), the sales forecasting results show that the model&#39;s accuracy is close to realistic accuracy and depends on seasonal factors. Based on these results, the business&#39;s marketing campaigns and strategies can be deployed to achieve strong results.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_20-Data_Mining_Application_Forecast_of_Business_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Factors in Congenital Heart Disease Transition using Fuzzy DEMATEL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141218</link>
        <id>10.14569/IJACSA.2023.0141218</id>
        <doi>10.14569/IJACSA.2023.0141218</doi>
        <lastModDate>2023-12-29T11:47:33.7000000+00:00</lastModDate>
        
        <creator>Raghavendra M Devadas</creator>
        
        <creator>Vani Hiremani</creator>
        
        <creator>Ranjeet Vasant Bidwe</creator>
        
        <creator>Bhushan Zope</creator>
        
        <creator>Veena Jadhav</creator>
        
        <creator>Rohini Jadhav</creator>
        
        <subject>DEMATEL; Fuzzy DEMATEL; factors; pediatric patients; heart disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The transition from pediatric to adult cardiology care is a pivotal moment in the healthcare journey of individuals with congenital heart conditions or childhood-onset heart diseases. This multifaceted process requires meticulous consideration of clinical, psychosocial, and logistical factors. This research aims to explore the critical criteria for transitioning pediatric patients to adult cardiology, delving into the challenges and opportunities inherent in this healthcare shift. The identified factors for successful transition, including age and developmental stage, medical complexity, cardiac function, psychosocial factors, insurance, and financial considerations, play integral roles in the transition process. Leveraging analytical methodologies, particularly the Fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL), this study involves three experts who assess criteria linguistically, converted to Triangular Fuzzy Numbers, and averaged. Defuzzification, using the CFCS method, yields crisp values. Results reveal that Medical Complexity (U+V = 3.96, U-V = 0.233), Insurance (U+V = 3.931, U-V = 0.22), Psychosocial Factors (U+V = 3.839, U-V = 0.387), and Age and Developmental Stage (U+V = 3.802, U-V = 0.106) follow Cardiac Function (U+V = 4.312, U-V = 0.946) in ranking. Age and Developmental Stage, Medical Complexity, Psychosocial Factors, and Insurance are considered causal variables, with Cardiac Function as an effect. These numerical insights enhance our understanding of transition criteria interdependencies, informing tailored healthcare strategies.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_18-Identifying_Factors_in_Congenital_Heart_Disease_Transition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Qubit Mapping Technique Based on Batch SWAP Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141217</link>
        <id>10.14569/IJACSA.2023.0141217</id>
        <doi>10.14569/IJACSA.2023.0141217</doi>
        <lastModDate>2023-12-29T11:47:33.6830000+00:00</lastModDate>
        
        <creator>Hui Li</creator>
        
        <creator>Kai Lu</creator>
        
        <creator>Ziao Han</creator>
        
        <creator>Huiping Qin</creator>
        
        <creator>Mingmei Ju</creator>
        
        <creator>Shujuan Liu</creator>
        
        <subject>Quantum computing; quantum circuit compilation; initial qubit mapping; Batch SWAP Optimization Strategy (BSOS); best SWAP choice; Batch Update Technology (BUT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The conventional approach to initial qubit mapping in the Noisy Intermediate-Scale Quantum (NISQ) era typically uses a static heuristic strategy that overlooks insufficient qubit adjacency in subsequent operations, resulting in excess additional SWAP gates. To address this, we introduce a multifactor interaction cost function considering qubit distance, interaction time, and gate operation error rates, enhancing SWAP gate selection in the traditional strategy. Taking quantum hardware constraints into account, we propose the Batch SWAP Optimization Strategy (BSOS). BSOS tackles qubit mapping challenges by leveraging optimal SWAP gate selection and a SWAP-based batch update technique, effectively minimizing SWAP gates throughout circuit execution. Experimental results show that BSOS significantly reduces additional gates by intelligently selecting SWAP gates and using batch updating, with a 38.1% average decrease in inserted SWAP gates, leading to a 12% reduction in hardware gate count overhead.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_17-Research_on_Qubit_Mapping_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Convolutional Neural Network (CNN)-based Object Detection Approach for Smart Surveillance Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141215</link>
        <id>10.14569/IJACSA.2023.0141215</id>
        <doi>10.14569/IJACSA.2023.0141215</doi>
        <lastModDate>2023-12-29T11:47:33.6700000+00:00</lastModDate>
        
        <creator>Weiguo Ni</creator>
        
        <subject>Smart surveillance; lightweight object detection; YOLOv7; small object recognition; vision-based identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In the realm of smart surveillance systems, a fundamental technique for tracking and evaluating consumer behavior is object detection through video surveillance. While existing research underscores object detection through deep learning techniques, a notable gap exists in adapting these methods to effectively capture and recognize small, intricate objects. This study addresses this gap by introducing a customized methodology tailored to meet the nuanced requirements of accurate and lightweight detection for small objects, especially in scenarios prone to visual complexity and object similarity challenges. The primary objective is to furnish a vision-based object identification method designed for surveillance applications in smart stores, with a particular focus on locating jewelry objects. To achieve this, a Convolutional Neural Network (CNN)-based object detector utilizing YOLOv7 is employed for precise object detection and location extraction. The YOLOv7 network undergoes rigorous training and verification on a unique dataset specifically curated for this purpose. Experimental results affirm the efficacy of the proposed object identification method, demonstrating its capacity to detect items relevant to smart surveillance applications.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_15-Implementation_of_a_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Energy Efficient Routing Algorithm using Chaotic Grey Wolf with Mobile Sink-based Path Optimization for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141216</link>
        <id>10.14569/IJACSA.2023.0141216</id>
        <doi>10.14569/IJACSA.2023.0141216</doi>
        <lastModDate>2023-12-29T11:47:33.6700000+00:00</lastModDate>
        
        <creator>Latifah Alharthi</creator>
        
        <creator>Alaa E. S. Ahmed</creator>
        
        <creator>Mostafa E. A. Ibrahim</creator>
        
        <subject>Wireless sensor network; clustering algorithm; grey wolf optimizer; slime mould algorithm; mobile sink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Deploying energy-conscious wireless sensor networks (WSNs) is a challenging task. One of the most effective methods for conserving WSN energy is clustering. The clustering algorithm divides the deployed sensors into groups, and each group&#39;s cluster head (CH) is chosen to gather and combine data from the other sensors in the group. Mobile wireless sensor networks, which enable moving the sink node, aid in reducing energy consumption. Thus, this paper introduces an energy-efficient clustering algorithm and an optimized path for a mobile sink using swarm intelligence algorithms. The Chaotic Grey Wolf Optimization (CGWO) approach is used to form clusters and identify CHs, while the Slime Mould Algorithm (SMA) is utilized to determine the shortest path between the mobile sink and the CHs. The effectiveness of the suggested routing strategy is evaluated against that of other current, cutting-edge protocols. The findings demonstrate that, in terms of overall energy consumption and network lifetime, the suggested algorithm performs better than the others, while for stability period it outperforms three of the compared algorithms and is close to the fourth.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_16-An_Energy_Efficient_Routing_Algorithm_using_Chaotic_Grey_Wolf.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Detection of COVID-19 Through X-ray Imaging using CovidFusionNet with Hybrid CNN Fusion and Multi-resolution Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141214</link>
        <id>10.14569/IJACSA.2023.0141214</id>
        <doi>10.14569/IJACSA.2023.0141214</doi>
        <lastModDate>2023-12-29T11:47:33.6530000+00:00</lastModDate>
        
        <creator>Majdi Khalid</creator>
        
        <subject>COVID-19 diagnosis; X-ray imaging; wavelet transform; Adaptive Histogram Equalization (AHE); oversampling; image denoising; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The rapid diagnosis of COVID-19 through imaging is crucial in the current pandemic scenario. This study introduces CovidFusionNet, a novel model adapted for efficient COVID-19 image classification. By effectively fusing features from seven pre-trained convolutional neural networks (CNNs), our model achieves better accuracy in detecting COVID-19 from X-ray images. Three separate datasets, obtained from Kaggle, were used in this study to ensure the reliability and robustness of the model. The Continuous and Discrete Wavelet Transform was implemented for robust multi-resolution image analysis to maintain image properties after denoising. A novel enhancement method was also proposed, combining the capabilities of Adaptive Histogram Equalization (AHE) and Wavelet Transforms to emphasize finer details and concurrently heighten clarity while minimizing noise. Furthermore, to mitigate class imbalance, an oversampling approach was implemented. Comprehensive validation using 12 metrics across each dataset verified the proposed model&#39;s consistent performance, with remarkable accuracies of 98.02% for Dataset One, 99.30% for Dataset Two, and 98.25% for Dataset Three. Comparing CovidFusionNet against seven well-known pre-trained models showed that CovidFusionNet is more capable. This research advances the area of image-based diagnosis of COVID-19 and provides a model for quick medical actions.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_14-Advanced_Detection_of_COVID_19_Through_X_ray_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Stemming Techniques on the Malay Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141213</link>
        <id>10.14569/IJACSA.2023.0141213</id>
        <doi>10.14569/IJACSA.2023.0141213</doi>
        <lastModDate>2023-12-29T11:47:33.6370000+00:00</lastModDate>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Nazratul Naziah Mohd Muhait</creator>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Nur Fadilla Akma Mamat</creator>
        
        <subject>Algorithm; ahmad algorithm; malay language; othman algorithm; rule-based; stemming; stemmer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Text stemming, an essential preprocessing step in the development of Natural Language Processing (NLP) applications, involves the transformation of various word forms into their root words. Stemming plays a critical role in decreasing the volume of text, thereby enhancing the efficiency of various computational tasks such as information retrieval, text classification, and text clustering. Stemming is typically a rule-based approach; however, it frequently suffers from affixation errors that result in under-stemming, over-stemming, or both, as well as unstemmed words or spelling exceptions. Every language has different stemming techniques, and among the most well-known Malay stemming algorithms are the Othman and Ahmad algorithms. Therefore, this study aims to compare the stemming errors of the Othman and Ahmad algorithms on Malay text, particularly across two different domains of textual datasets: course summaries from the education domain and housebreaking crime reports from the crime domain. The Othman algorithm presents a set of 121 stemming rules (set A). Meanwhile, the Ahmad algorithm proposes two distinct sets of stemming rules, comprising 432 rules (set B) and 561 rules (set C), respectively. Based on the experiment results with 100 course summaries, the Ahmad algorithm (Set B) obtained the highest accuracy rate of 93.61%. The second highest is the Ahmad algorithm (Set C) with 93.53%. The Othman algorithm achieved the lowest accuracy at 86.04% compared to the other two algorithms. Meanwhile, findings from the experiment with 100 housebreaking crime reports show similar results, with the Ahmad algorithm (Set C) achieving the highest stemming accuracy of approximately 93.80% and the Othman algorithm producing the lowest stemming accuracy (83.09%). The results indicate that stemming accuracy is consistent across different types of datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_13-A_Comparative_Study_of_Stemming_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Exploratory Analysis of using Chatbots in Academia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141212</link>
        <id>10.14569/IJACSA.2023.0141212</id>
        <doi>10.14569/IJACSA.2023.0141212</doi>
        <lastModDate>2023-12-29T11:47:33.6230000+00:00</lastModDate>
        
        <creator>Njood K Al-harbi</creator>
        
        <creator>Amal A. Al-shargabi</creator>
        
        <subject>AI; chatbots; ChatGPT; GPT-4; bard; ethics; machine learning; topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>With the advancement of technology in this era, chatbots have evolved beyond robots that conduct time-consuming and labor-intensive routine tasks; they now interact and produce content much like a human. Despite the efficacy and productivity of chatbots like ChatGPT-4 and Bard, there are significant ethical implications for the academic community, particularly students and researchers. The current study experiments with ChatGPT-4 and Bard by producing scientific articles with specific criteria, then applying topic modeling to assess the extent to which the content of the articles relates to the required topic, and verifying the references, plagiarism, and accuracy of the chatbot-generated articles. The results indicated that the content is relevant to the topic and that the accuracy of ChatGPT-4 is greater than that of Bard. ChatGPT-4 achieved 96%, and the majority of its bibliographies are accurate, whereas Bard achieved 52%, and the majority of its bibliographies are incorrect, with some unavailable. It is unethical to rely on a chatbot to produce scientific content because it is not as accurate as humans, and the content it generates requires a thorough review. Furthermore, a chatbot alters its responses based on the individual interrogating it, regardless of whether its answers are correct, as it is unable to defend its knowledge.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_12-An_Exploratory_Analysis_of_using_Chatbots_in_Academia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of AI in Mitigating Climate Change: Predictive Modelling for Renewable Energy Deployment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141211</link>
        <id>10.14569/IJACSA.2023.0141211</id>
        <doi>10.14569/IJACSA.2023.0141211</doi>
        <lastModDate>2023-12-29T11:47:33.6070000+00:00</lastModDate>
        
        <creator>Nawaf Alharbe</creator>
        
        <creator>Reyadh Alluhaibi</creator>
        
        <subject>Renewable energy; climate change; predictive models; and Artificial Intelligence (AI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>This study looks at how AI algorithms such as Random Forest, Support Vector Machines (SVM), and Deep Boltzmann Machines (DBM) can be used for predictive modeling to facilitate the deployment of renewable energy sources while reducing the negative effects of climate change. Predictive models based on Artificial Intelligence show possible ways to get the most out of green energy sources, which could lead to fewer carbon emissions. The results of the preliminary studies show that these AI systems can make accurate predictions about green energy generation because of their strong predictive and generalization capabilities. This makes it possible to use resources effectively, which improves the reliability of the grid and encourages more people to adopt green energy sources. Ultimately, these AI programs will serve as powerful tools in combating climate change and fostering a more sustainable and eco-friendly environment.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_11-The_Role_of_AI_in_Mitigating_Climate_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptualizing an Inductive Learning Situation in Online Learning Enabled by Software Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141210</link>
        <id>10.14569/IJACSA.2023.0141210</id>
        <doi>10.14569/IJACSA.2023.0141210</doi>
        <lastModDate>2023-12-29T11:47:33.5900000+00:00</lastModDate>
        
        <creator>Ouariach Soufiane</creator>
        
        <creator>Khaldi Maha</creator>
        
        <creator>Khaldi Mohamed</creator>
        
        <subject>Software engineering approach; instructional design; conceptualization scenario; online learning situation; inductive approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Our work highlights the importance of adopting a systematic and methodical software engineering approach to the development of information technology projects for e-learning. We place particular emphasis on conceptualizing pedagogical scenarios and an inductive online learning situation. To ensure effective management of the information systems development process, we applied instructional design principles and adopted the 2TUP process, a refined version of the Rational Unified Process (RUP) suitable for projects of all sizes. To provide a visual representation of the system architecture and inform instructional design decisions, we used the Unified Modeling Language (UML) to create class, use case, activity, and sequence diagrams. We aim to demonstrate the potential of a structured software engineering approach to creating effective and efficient e-learning systems by conceptualizing an inductive online learning situation and five concrete examples illustrating the system&#39;s functionality. Our work underlines the importance of using standardized modeling languages such as UML to facilitate communication between stakeholders and collaboration between instructional designers and software developers.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_10-Conceptualizing_an_Inductive_Learning_Situation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Mobile Ad Hoc Network Routing using Biomimicry Buzz and a Hybrid Forest Boost Regression - ANNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141209</link>
        <id>10.14569/IJACSA.2023.0141209</id>
        <doi>10.14569/IJACSA.2023.0141209</doi>
        <lastModDate>2023-12-29T11:47:33.5770000+00:00</lastModDate>
        
        <creator>D Dhinakaran</creator>
        
        <creator>S. M. Udhaya Sankar</creator>
        
        <creator>S. Edwin Raja</creator>
        
        <creator>J. Jeno Jasmine</creator>
        
        <subject>MANET; routing protocols; optimized route selection; regression; machine learning; Artificial Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>A mobile ad hoc network (MANET) is a network of moving nodes that can interact with one another without the aid of a centrally located infrastructure. In MANETs, every node acts as both a router and a host, generating and consuming data. However, due to the mobility of nodes and the absence of centralized control, the routing process in MANETs is challenging. Therefore, routing protocols in MANETs are required to be efficient, scalable, and adaptable to the dynamic topology changes of the network. This paper proposes an optimized route selection approach for MANETs via the biomimicry buzz algorithm with the Bellman-Ford-Dijkstra algorithm to improve the effectiveness and accuracy of the routing process. By integrating these behaviors into the algorithm, the approach can select the shortest path in a network, leading to an optimal routing solution. Furthermore, the paper explores the use of Forest Boost Regression (FR), a novel machine learning algorithm, to predict energy consumption in MANETs; utilizing this prediction will help the network run more efficiently and last longer. Additionally, the paper discusses the use of Artificial Neural Networks (ANNs) to forecast link failure in MANETs, thereby increasing network performance and dependability. The proposed work is evaluated experimentally using Ns-3 as the simulation tool. The experimental results indicate a variation in packet delivery ratio from 97% to 90%, an average end-to-end delay of approximately 19 ms, an increase in node-speed energy consumption from 60 to 87 joules, and a simulation-time energy consumption of 89 joules over 60 seconds. These results provide insights into the performance and efficiency of the proposed strategy in the context of MANETs.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_9-Optimizing_Mobile_Ad_Hoc_Network_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Fruit using YOLOv8-based Single Stage Detectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141208</link>
        <id>10.14569/IJACSA.2023.0141208</id>
        <doi>10.14569/IJACSA.2023.0141208</doi>
        <lastModDate>2023-12-29T11:47:33.5600000+00:00</lastModDate>
        
        <creator>Xiuyan GAO</creator>
        
        <creator>Yanmin ZHANG</creator>
        
        <subject>Fruit detection; agricultural sector; deep learning; YOLOv8 model; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>In the agricultural sector, the precise detection of fruits plays a pivotal role in optimizing harvesting procedures, minimizing waste, and ensuring the delivery of high-quality produce. Deep learning methods have consistently exhibited superior accuracy compared to alternative techniques, making them a focal point in fruit detection research. However, the ongoing challenge lies in meeting the stringent accuracy requirements essential for real-world applications in agriculture. Addressing this critical concern, this study proposes an innovative solution utilizing the YOLOv8 architecture for fruit detection. The methodology involves the meticulous creation of a custom dataset tailored to capture the diverse characteristics of agricultural fruits, followed by rigorous training, validation, and testing processes. Through extensive experimentation and performance evaluations, the findings underscore the exceptional accuracy achieved by the YOLOv8-based model. This methodology not only surpasses existing benchmarks but also establishes a robust foundation for transforming fruit detection practices in agriculture. By effectively addressing the challenges associated with accuracy rates, this approach opens new avenues for optimized harvesting, waste reduction, and enhanced efficiency in agricultural practices, contributing significantly to the evolution of precision farming technologies.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_8-Detection_of_Fruit_using_YOLOv8_based_Single_Stage_Detectors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-based Approach for Vision-based Weeds Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141207</link>
        <id>10.14569/IJACSA.2023.0141207</id>
        <doi>10.14569/IJACSA.2023.0141207</doi>
        <lastModDate>2023-12-29T11:47:33.5430000+00:00</lastModDate>
        
        <creator>Yan Wang</creator>
        
        <subject>Smart agriculture; weed detection; remote sensing; deep learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Weed detection is an essential component of smart agriculture, and the use of remote sensing technologies has the potential to significantly improve weed management practices, reduce herbicide usage, and increase crop yields. This study proposes an approach to weed detection using computer vision and deep learning technologies. By utilizing remote sensing methods based on deep learning, this approach has the potential to optimize weed management strategies, minimize herbicide use, and enhance crop productivity. The weed detection algorithm is based on the YOLOv8 framework, and a custom model is trained using images from popular datasets as well as the internet. To evaluate the model&#39;s effectiveness, it is tested on both validation and testing sets. Furthermore, the model&#39;s performance is assessed using images that are not included in the original dataset. As the experimental results show, the deep learning-based approach is a promising solution for weed detection in agriculture.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_7-A_Deep_Learning_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technology-Mediated Interventions for Autism Spectrum Disorder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141205</link>
        <id>10.14569/IJACSA.2023.0141205</id>
        <doi>10.14569/IJACSA.2023.0141205</doi>
        <lastModDate>2023-12-29T11:47:33.5270000+00:00</lastModDate>
        
        <creator>Mihaela Chistol</creator>
        
        <creator>Mirela Danubianu</creator>
        
        <creator>Adina-Luminita Bar&#238;la</creator>
        
        <subject>Autism spectrum disorder; technology-mediated interventions; assistive technologies; therapy; cross-sectional study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), Autism Spectrum Disorder (ASD) is a complex neurological and developmental condition characterized by impairments in social interaction and communication. Despite significant advancements in the research field, no pharmaceutical medication has been designed for ASD treatment. Therefore, ASD treatment relies mainly on therapeutic intervention. Interactive technologies have emerged as valuable therapy augmentation tools. This research focuses on interactive technologies developed for ASD therapeutic intervention. The study introduces a conceptual framework for understanding the full spectrum of technologies involved in the ASD context. The employed methodology encompasses expert opinions and entails a cross-sectional study that included 59 participants with significant experience in interacting with individuals diagnosed with ASD in various real-life settings, including therapists, teachers, and parents of children with ASD. The research findings revealed a broad spectrum of technologies involved in ASD interventions, including applications, devices, and robots. The results bring a new perspective on the interactive technologies used in the therapy and diagnosis of ASD and highlight their important characteristics that can serve as a standard in the development of future technological solutions.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_5-Technology_Mediated_Interventions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drier Bed Adsorption Predictive Model with Enhancement of Long Short-Term Memory and Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141206</link>
        <id>10.14569/IJACSA.2023.0141206</id>
        <doi>10.14569/IJACSA.2023.0141206</doi>
        <lastModDate>2023-12-29T11:47:33.5270000+00:00</lastModDate>
        
        <creator>Marina Yusoff</creator>
        
        <creator>Mohamad Taufik Mohd Sallehud-din</creator>
        
        <creator>Nooritawati Md. Tahir</creator>
        
        <creator>Wan Fairos Wan Yaacob</creator>
        
        <creator>Nur Niswah Naslina Azid</creator>
        
        <creator>Jasni Mohamad Zain</creator>
        
        <creator>Putri Azmira R Azmi</creator>
        
        <creator>Calvin Karunakumar</creator>
        
        <subject>Adsorption; Long Short-Term Memory; net loading capacity; Particle Swarm Optimization; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The drier bed adsorption processes remove moisture from gases and liquids, ensuring product quality, extending equipment lifespan, and enhancing safety in various applications. The longevity of adsorption beds is quantified by net loading capacity values that directly impact the effectiveness of the moisture removal process. Predictive modeling has emerged as a valuable tool to enhance drier bed adsorption systems. Despite the increasing significance of predictive modeling in enhancing the efficiency of drier bed adsorption processes, the existing methodologies frequently exhibit deficiencies in accuracy and flexibility, which are crucial for optimizing process performance. This research investigates the effectiveness of a hybrid approach combining Long Short-Term Memory and Particle Swarm Optimization (LSTM+PSO) as a proposed method to predict the net loading capacity of a drier bed. Train-test split ratios and the rolling origin technique are explored to assess model performance. The findings reveal that LSTM+PSO with a 70:30 train-test split ratio outperforms the other methods with the lowest error. Bed 1 exhibits an RMSE of 1.31 and an MSE of 0.91; Bed 2 achieves RMSE and MSE values of 0.81 and 0.72, respectively; Bed 3 attains an RMSE of 0.19 and an MSE of 0.13; Bed 4 follows with an RMSE of 0.67 and an MSE of 0.36; and Bed 5 exhibits an RMSE of 0.42 and an MSE of 0.34. Furthermore, this research compares LSTM+PSO with LSTM and conventional predictive methods: Support Vector Regression, Seasonal Autoregressive Integrated Moving Average with Exogenous Variables, and Random Forest.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_6-Drier_Bed_Adsorption_Predictive_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Migration: Identifying the Sources of Potential Technical Challenges and Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141204</link>
        <id>10.14569/IJACSA.2023.0141204</id>
        <doi>10.14569/IJACSA.2023.0141204</doi>
        <lastModDate>2023-12-29T11:47:33.5130000+00:00</lastModDate>
        
        <creator>Nevena Staevsky</creator>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <subject>Digital transformation; cloud; cloud migration; cloud models; PaaS; SaaS; IaaS; challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Digital Transformation is emerging as a crucial factor for successful adaptation to the modern digital world for all possible economic and social entities. In recent years, cloud migration and the adoption of cloud services and computing solutions have been popular enablers of Digital Transformation. During the Digital Transformation process, organizations and institutions face various technical challenges and implementation problems. This article explores the issues related to cloud migration and existing cloud service models. It investigates the advantages and disadvantages of the most popular cloud services offered by leading service providers, summarizes the main challenges in cloud migration processes, and explains how organizations can overcome them. The results help organizations understand the sources of potential technical challenges and implementation problems affecting cloud adoption and address these issues at an early stage of the initiative in order to reduce the threat of failure, avoid potential pitfalls, and achieve the desired cloud capabilities and business benefits.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_4-Cloud_Migration_Identifying_the_Sources_of_Potential_Technical_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Social Media Data and Historical Stock Prices for Predictive Analysis: A Reinforcement Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141203</link>
        <id>10.14569/IJACSA.2023.0141203</id>
        <doi>10.14569/IJACSA.2023.0141203</doi>
        <lastModDate>2023-12-29T11:47:33.4970000+00:00</lastModDate>
        
        <creator>Mei Li</creator>
        
        <creator>Ye Zhang</creator>
        
        <subject>Social media; stock market; sentiment analysis; unbalanced classification; reinforcement learning; differential equation; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The reliance on data collection for assessing individual behavior and actions has intensified, particularly with the proliferation of digital platforms. People often use the Internet to express their opinions and experiences about various products and services on social media and personal websites. Concurrently, the stock market, a key driver of commercial and industrial growth, has seen a surge in research focused on predicting market trends. The vast array of information on social media regarding public sentiment towards current events, coupled with the known impact of financial news on stock prices, has led to the application of data mining techniques for understanding market volatility. This research proposes a novel method that integrates social media data, encompassing public sentiment, news, and historical stock prices, to predict future stock trends. The approach involves two primary phases. The first phase develops a sentiment analysis (SA) model using three dilated convolution layers for feature extraction and classification. Addressing the challenge of unbalanced classification, a reinforcement learning (RL)-based strategy is employed, wherein an agent receives varied rewards for accurate classification, with a bias towards the minority class. Additionally, a unique clustering-based mutation operator within a differential equation (DE) framework is introduced to initiate the backpropagation (BP) process. The second phase incorporates an attention-based long short-term memory (LSTM) model, merging historical stock prices with sentiment data. An experimental analysis of the study dataset is conducted to determine optimal values for significant parameters, including the reward function.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_3-Integrating_Social_Media_Data_and_Historical_Stock_Prices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Vision-based Efficient Segmentation Method for Left Ventricular Epicardium and Endocardium using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141201</link>
        <id>10.14569/IJACSA.2023.0141201</id>
        <doi>10.14569/IJACSA.2023.0141201</doi>
        <lastModDate>2023-12-29T11:47:33.4800000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Trung Duong</creator>
        
        <creator>Zachary Holden</creator>
        
        <subject>Convolutional neural network; segmentation; computer vision; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>Segmentation of the Left Ventricular Epicardium and Endocardium remains challenging and significant for valuable investigation of cardiac image classification. Previous research methods did not consider the flexibility of the heart area, so their measurements lacked consistency and accuracy. In addition, previous methods ignored the presence of additional structures, such as the lungs, inside the frame during segmentation. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for assessing cardiac medical images. In this context, a Convolutional Neural Network (CNN) can be an effective way to segment the left ventricular epicardium and endocardium, as a CNN can take image data as input, assign importance to various regions or objects in the image, and differentiate one from the other. This research proposes an efficient method for segmenting the left ventricular epicardium and endocardium using the InceptionV3 convolutional neural network. Rather than placing fully connected layers on top of the feature maps, the proposed method takes the average of each feature map, and the resulting vector is fed directly into the SoftMax layer. A data augmentation technique was used to validate the proposed method on a large number of dataset images. In addition, the proposed method was validated on publicly available cardiac MRI datasets. Comprehensive experimental analysis was performed using a large number of performance metrics, i.e., cosine similarity, log cosh error, mean absolute error, mean absolute percentage error, mean squared error, mean squared logarithmic error, and root mean squared error. The proposed method demonstrated superior performance for localization of the left ventricular epicardium and endocardium in terms of all these performance metrics. It also produced a smoother curve covering the region than previous research, owing to the use of an interpolation technique to draw the curve.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_1-Computer_Vision_based_Efficient_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Alzheimer&#39;s Progression in Mild Cognitive Impairment: Longitudinal MRI with HMMs and SVM Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141202</link>
        <id>10.14569/IJACSA.2023.0141202</id>
        <doi>10.14569/IJACSA.2023.0141202</doi>
        <lastModDate>2023-12-29T11:47:33.4800000+00:00</lastModDate>
        
        <creator>Deep Himmatbhai Ajabani</creator>
        
        <subject>Alzheimer&#39;s disease; image processing; Magnetic Resonance Imaging; Mild Cognitive Impairment; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(12), 2023</description>
        <description>The number of elderly people has increased due to the huge growth in human life expectancy over the past few decades. As a result, age-related illnesses and ailments have become more prevalent, including Alzheimer&#39;s Disease (AD). A notable deterioration in cognitive functions, particularly memory and thinking skills, characterizes Mild Cognitive Impairment (MCI), a condition that lies between normal aging and dementia. Therefore, MCI carries a noticeably higher chance of developing into AD and frequently serves as a prelude to dementia. However, using cutting-edge image processing and machine learning techniques, it is possible to examine and find underlying patterns in these complex diseases. By using these techniques, it is possible to separate groups, identify the causes of such separation, and create disease prediction models. Clinical trials, mostly using cross-sectional Magnetic Resonance Imaging (MRI) data, have extensively looked into the use of MRI for the early identification of AD and MCI. On the other hand, longitudinal studies follow the same subjects over an extended period, giving researchers the chance to investigate cross-sectional trends as well as the development of the disease. Three different techniques are put forth in this study for the analysis and assessment of the structural data found in longitudinal MRI scans. Without considering any other diagnostic measures, this information is used to forecast the progression of those who have been diagnosed with MCI. These techniques utilize Hidden Markov Models (HMMs) and capitalize on the advantages of Support Vector Machine (SVM) classifiers.</description>
        <description>http://thesai.org/Downloads/Volume14No12/Paper_2-Predicting_Alzheimers_Progression_in_Mild_Cognitive_Impairment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Deep Learning-Assisted SVD-based Method for Medical Image Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411146</link>
        <id>10.14569/IJACSA.2023.01411146</id>
        <doi>10.14569/IJACSA.2023.01411146</doi>
        <lastModDate>2023-12-02T10:20:50.0870000+00:00</lastModDate>
        
        <creator>Saima Kanwal</creator>
        
        <creator>Feng Tao</creator>
        
        <creator>Rizwan Taj</creator>
        
        <subject>Singular value decomposition; medical image watermarking; digital watermarking; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In the present era, the administration of medical images faces various security challenges that necessitate the authentication of image source and origin for accurate patient identification. With the increasing exchange of medical images between hospitals to facilitate informed decision-making, the adoption of digital watermarking techniques has emerged as an efficient solution to address the imperceptibility and robustness requirements in medical image watermarking. This research work introduces a technically advanced approach that combines singular value decomposition (SVD) watermarking with deep learning segmentation models to enhance the security of medical image sharing and transfer. The primary objective is to seamlessly integrate the watermark while minimizing distortion to preserve critical medical information within the image. The proposed methodology involves utilizing a ResNet-based U-Net segmentation model to segment X-Ray radiographs into the Region of Interest (ROI) and the Region of Non-Interest (RONI). The watermark data is then encoded into the ROI using singular value decomposition. Subsequently, the ROI and RONI are merged to reconstruct the complete image, preserving its original identity. Additionally, XOR encryption is applied to the watermarked image to enhance data integrity and copyright protection. On the extraction side of the methodology, the reconstructed image is again separated into ROI and RONI. The ROI is decoded to recover the original transferred content. To assess the efficacy of the proposed method, a publicly available X-Ray radiograph dataset is employed, and evaluation metrics demonstrate an impressive segmentation accuracy of 98.27%. The proposed approach ensures information integrity, patient confidentiality during data sharing, and robustness against various conventional attacks, demonstrating its effectiveness in the field of medical image watermarking.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_146-A_Novel_Deep_Learning_Assisted_SVD_based_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying the Security and Privacy Issues of Big Data in the Saudi Medical Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411145</link>
        <id>10.14569/IJACSA.2023.01411145</id>
        <doi>10.14569/IJACSA.2023.01411145</doi>
        <lastModDate>2023-12-02T10:20:50.0570000+00:00</lastModDate>
        
        <creator>Ramy Elnaghy</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <subject>Security; privacy; healthcare; medical data; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In today’s era of Big Data, with the integration of data from various systems, devices, and machines used by healthcare service providers, health insurance companies, and their sub-sectors, maintaining privacy and security has become crucial. It is important to uphold the confidentiality and security of data exchanged between data service providers and insurance companies as required by law. The purpose of this paper is to focus on addressing the security and privacy issues associated with healthcare data, particularly concerning medical data in both in-transit and at-rest modes. We aim to provide a proposed solution to enhance data security and maximize privacy protection.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_145-Studying_the_Security_and_Privacy_Issues_of_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Review of Deep Learning Approaches for Animal Detection on Video Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411144</link>
        <id>10.14569/IJACSA.2023.01411144</id>
        <doi>10.14569/IJACSA.2023.01411144</doi>
        <lastModDate>2023-12-02T10:20:50.0100000+00:00</lastModDate>
        
        <creator>Prashanth Kumar</creator>
        
        <creator>Suhuai Luo</creator>
        
        <creator>Kamran Shaukat</creator>
        
        <subject>Machine learning; deep learning; animal detection; convolutional neural networks; video-based; deep learning models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Integrating deep learning techniques into computer vision applications has ushered in a new era of automated analysis and interpretation of visual data. In recent years, a surge of interest has been witnessed in applying these methodologies to detecting animals in video streams, promising transformative impacts on diverse fields such as ecology and agriculture. This paper presents an extensive and meticulous review of the latest deep learning approaches employed for animal detection in video data, exploring methods for detecting many animal species in multiple environments. The analysis also pays close attention to data preparation, feature selection, and transfer learning. In addition to highlighting successful methodologies, this review addresses the challenges and limitations inherent in these approaches: issues such as limited data availability and adapting to technological advancements present significant hurdles. Recognising and understanding these challenges is crucial in shaping the future focus of research endeavours. Thus, this comprehensive review is an indispensable tool for anyone keen on employing these potent computer methods for animal detection in videos. It summarizes the latest ideas and shows where further study can improve them. Furthermore, this comprehensive review has demonstrated that a more sustainable and balanced relationship between humans and animals can be achieved by harnessing the power of deep learning in animal detection. This research contributes to computer vision and holds immense promise in safeguarding biodiversity and promoting responsible land use practices, especially within agricultural domains. The insights from this study propel us towards a future where advanced technology and ecological harmony go hand in hand, ultimately benefiting both humans and the animal kingdom. The survey aims to provide a comprehensive overview of the cutting-edge developments in applying deep learning models for animal detection through cameras by elucidating the significance of these techniques in advancing the accuracy and efficiency of animal detection processes.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_144-A_Comprehensive_Review_of_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generate Adversarial Attack on Graph Neural Network using K-Means Clustering and Class Activation Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411143</link>
        <id>10.14569/IJACSA.2023.01411143</id>
        <doi>10.14569/IJACSA.2023.01411143</doi>
        <lastModDate>2023-11-30T12:00:12.7270000+00:00</lastModDate>
        
        <creator>Ganesh Ingle</creator>
        
        <creator>Sanjesh Pawale</creator>
        
        <subject>Graph Neural Networks; adversarial attacks; K-Means clustering; class activation mapping; robustness; defense mechanisms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing complex structured data, including social networks, biological networks, and recommendation systems. However, their susceptibility to adversarial attacks poses a significant challenge, especially in critical tasks such as node classification and link prediction. Adversarial attacks on GNNs can introduce harmful input graphs, leading to biased model predictions and compromising the integrity of the network. We propose a novel adversarial attack method that leverages the combination of K-Means clustering and Class Activation Mapping (CAM) to conduct subtle yet effective attacks against GNNs. The clustering algorithm identifies critical nodes within the graph, whose perturbations are likely to have a substantial impact on model performance. Additionally, CAM highlights regions of the graph that significantly influence GNN predictions, enabling more targeted and efficient attacks. We assess the efficacy of state-of-the-art GNN defenses against our proposed attack, underscoring the pressing need for robust defense mechanisms. Our study focuses on countering attacks on GNN networks by utilizing K-Means clustering and CAM to enhance the effectiveness and efficiency of the adversarial strategy. Through our observations, we emphasize the necessity for stronger security measures to safeguard GNN-based applications, particularly in sensitive environments. Furthermore, our research highlights the importance of developing robust GNNs that can withstand adversarial attacks, ensuring the reliability and trustworthiness of these models in critical applications. Strengthening the robustness of GNNs against adversarial manipulation is crucial for maintaining the security and integrity of systems that heavily rely on these advanced analytical tools. Our findings underscore the ongoing efforts required to fortify GNN-based applications, urging the research community and practitioners to collaborate in developing and implementing more robust security measures for these powerful neural network models.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_143-Generate_Adversarial_Attack_on_Graph_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mukh-Oboyob: Stable Diffusion and BanglaBERT enhanced Bangla Text-to-Face Synthesis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411142</link>
        <id>10.14569/IJACSA.2023.01411142</id>
        <doi>10.14569/IJACSA.2023.01411142</doi>
        <lastModDate>2023-11-30T12:00:12.7130000+00:00</lastModDate>
        
        <creator>Aloke Kumar Saha</creator>
        
        <creator>Noor Mairukh Khan Arnob</creator>
        
        <creator>Nakiba Nuren Rahman</creator>
        
        <creator>Maria Haque</creator>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Rashik Rahman</creator>
        
        <subject>Bangla text-to-face synthesis; Natural Language Processing (NLP); Bangla NLP; Computer Vision (CV); Generative Model; stable diffusion; BanglaBERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Facial image generation from textual description is one of the most complicated tasks within the broader topic of Text-to-Image (TTI) synthesis. It is relevant in several fields, including scientific research, cartoon and animation development, online marketing, and game development. There have been extensive studies on Text-to-Face (TTF) synthesis in the English language. However, the amount of relevant existing work in Bangla is limited and not comprehensive. As the TTF field is largely unexplored for the Bangla language, the objective of this study is to explore the possibilities in the fields of Bangla Natural Language Processing and Computer Vision. In this paper, a novel system for generating highly detailed facial images from textual descriptions in the Bangla language is proposed. The proposed system, named Mukh-Oboyob, consists of two essential components: a pre-trained language model, BanglaBERT, and Stable Diffusion. BanglaBERT, a transformer-based pre-trained text encoder, is used to transform Bangla sentences into vector representations. Stable Diffusion is used by Mukh-Oboyob to generate facial images utilizing the text embeddings of the Bangla sentences. Moreover, the work utilizes CelebA Bangla, a modified version of the CelebA dataset consisting of face images, Bangla facial attributes, and Bangla text descriptions, to develop and train the proposed system. This paper establishes a system for image synthesis with excellent performance and detailed image outcomes, as evidenced by a comprehensive analysis incorporating both qualitative and quantitative measures, with the system achieving an impressive FID score of 34.6828 and an LPIPS score of 0.4541.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_142-Mukh_Oboyob_Stable_Diffusion_and_BanglaBERT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preventing Cyberbullying on Social Networks with Spanish Parental Control NLP System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411141</link>
        <id>10.14569/IJACSA.2023.01411141</id>
        <doi>10.14569/IJACSA.2023.01411141</doi>
        <lastModDate>2023-11-30T12:00:12.6970000+00:00</lastModDate>
        
        <creator>Gabriel A. Leon-Paredes</creator>
        
        <creator>Omar G. Bravo-Quezada</creator>
        
        <creator>Pedro P. Bermeo-Aguaysa</creator>
        
        <creator>Maria J. Pelaez-Currillo</creator>
        
        <creator>Ledys L. Jimenez-Gonzalez</creator>
        
        <subject>Cyberbullying; parental control system; natural language processing; Spanish cyberbullying prevention system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The boom in social networks and digital communication has given rise to innovative forms of social interaction. However, it has also made possible new forms of harassment that are anonymous and without repercussions. Such is the case of cyberbullying, an increasingly common problem, especially among young people. Its effects on individuals can be devastating, ranging from anxiety and depression to social isolation and low self-esteem. Furthermore, there is a wide variety of applications, called parental control applications, which allow parents to see the pages the child or adolescent has accessed, know how often the child or adolescent accesses them, and control the time spent on social networks or other entertainment platforms. Therefore, the present research aimed to analyze, design, and implement an intelligent application based on data mining algorithms and the Latent Semantic Analysis (LSA) method for the detection of presumed cyberbullying of adolescents on social networks. The methodological process of the study followed the fundamentals of applied research with a qualitative-quantitative, descriptive, and cross-sectional approach. As a result, a multi-platform application was obtained that alerts parents or guardians about suspected bullying. For the validation of the application, the technique of expert judgment was applied. In addition, negative and positive text similarity was computed based on cosine similarity. In the analysis of Twitter accounts, values of 46% for negative texts and 6.71% for positive texts were obtained, which allows inferring a presumed case of cyberbullying in this account.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_141-Preventing_Cyberbullying_on_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Imbalance Node Classification with Graph Neural Networks (GNN): A Study on a Twitter Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411140</link>
        <id>10.14569/IJACSA.2023.01411140</id>
        <doi>10.14569/IJACSA.2023.01411140</doi>
        <lastModDate>2023-11-30T12:00:12.6970000+00:00</lastModDate>
        
        <creator>Alda Kika</creator>
        
        <creator>Arber Ceni</creator>
        
        <creator>Denada Collaku</creator>
        
        <creator>Emiranda Loka</creator>
        
        <creator>Ledia Bozo</creator>
        
        <creator>Klesti Hoxha</creator>
        
        <subject>GNN; imbalanced data; Twitter; social networks; GCN; GraphSage; GAT; GraphSMOTE; ReNode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Social networks produce a large volume of information, part of which is fake. Social media platforms do a good job of moderating content and banning fake news spreaders, but a proactive solution is more desirable, especially during global threats like the COVID-19 pandemic and war. A proactive solution would be to ban users who spread fake news before they become important spreaders. In this paper, we propose to model users’ interactions in a social media platform as a graph and then evaluate state-of-the-art (SOTA) graph neural networks (GNNs) that can classify users’ (nodes’) profiles as suspended or not. As with other real-world data, we are faced with the imbalanced data problem, and we evaluate different algorithms that try to fix this issue. Data for this study were collected from X (Twitter) using Twitter API 1.1 from November 2021 to July 2022, focusing on information spread through tweets about vaccines. The aim of this paper is to evaluate whether current models can deal with real-world imbalanced data.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_140-Imbalance_Node_Classification_with_Graph_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Speech Transfer on Demand based on Contextual Information and Generative Models: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411139</link>
        <id>10.14569/IJACSA.2023.01411139</id>
        <doi>10.14569/IJACSA.2023.01411139</doi>
        <lastModDate>2023-11-30T12:00:12.6800000+00:00</lastModDate>
        
        <creator>Andrea Veronica Porco</creator>
        
        <creator>Kang Dongshik</creator>
        
        <subject>Emotion transfer; contextual information; speech processing; generative models; variational autoencoder; conditional generative adversarial networks; empathetic systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The automated generation of speech audio that closely resembles human emotional speech has garnered significant attention from society and the engineering academia. This attention is due to its diverse applications, including audiobooks, podcasts, and the development of empathetic home assistants. This study introduces a novel approach to emotional speech transfer that utilizes generative models and a selected emotional target for the output speech. The natural speech has been extended with contextual information related to emotional speech cues. The generative models used for this task are a variational autoencoder (VAE) model and a conditional generative adversarial network (CGAN) model. In this case study, an input voice audio, a desired utterance, and user-selected emotional cues are used to produce emotionally expressive speech audio, transferring an ordinary speech audio with added contextual cues into a happy emotional speech audio by a variational autoencoder model. The model tries to reproduce, in the ordinary speech, the emotion present in the emotional contextual cues used for training. The results show that the proposed unsupervised VAE model with a custom dataset for generating emotional data reaches an MSE lower than 0.010 and an SSIM approaching 0.70, with most values greater than 0.60, with respect to the input data and the generated data. When generating new emotional data on demand, the CGAN and VAE models show a certain degree of success under an emotion classifier that measures similarity to real emotional audio.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_139-Emotional_Speech_Transfer_on_Demand_Based_on_Contextual_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Air-Writing Tamil Alphabetical Vowel Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411138</link>
        <id>10.14569/IJACSA.2023.01411138</id>
        <doi>10.14569/IJACSA.2023.01411138</doi>
        <lastModDate>2023-11-30T12:00:12.6670000+00:00</lastModDate>
        
        <creator>Rukshani Puvanendran</creator>
        
        <creator>Vijayanathan Senthooran</creator>
        
        <subject>Air-writing; Tamil alphabetical vowel; convolutional neural network; feature extraction; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent years, there has been a lot of focus on gesture recognition because of its potential as a means of communication for cutting-edge gadgets. As a special category of gesture recognition, air-writing is the practice of forming letters or words in the air using one’s fingers or hand movements. The primary objective of this study is to propose a classification framework with feature extraction techniques to enhance the recognition of vowel characters in the Tamil language. The data collection and classification procedure involved a set of 12 distinct letters. A methodology has been developed to facilitate the analysis of various configurations for the purpose of evaluation. To extract useful features from the 2-second time window data segments, this study uses a one-dimensional convolutional neural network (1D CNN). In our approach, we employ five machine learning methods to conduct our evaluation: Naive Bayes, Random Forest, K-Nearest Neighbor, Support Vector Machine, and Decision Tree. The classification algorithms are compared based on the results obtained from our dataset in this experiment. The results of the tests show that the K-Nearest Neighbor (KNN) algorithm works very well with k = 1 and a 0.6:0.4 split ratio for training and testing. Specifically, the KNN model achieved an accuracy rate of 91.67%. The present study builds upon previous research by utilizing applications that have been employed in prior studies. However, a unique aspect of our system is the integration of cutting-edge technology, which utilizes collected sensor data to classify the characters. The examination of the window size has the potential to enhance accuracy and performance.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_138-Identification_of_Air_Writing_Tamil_Alphabetical_Vowel_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Transfer Learning Approach for Accurate Dragon Fruit Ripeness Classification and Visual Explanation using Grad-CAM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411137</link>
        <id>10.14569/IJACSA.2023.01411137</id>
        <doi>10.14569/IJACSA.2023.01411137</doi>
        <lastModDate>2023-11-30T12:00:12.6500000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <subject>Dragon fruit classification; ripeness classification; densenet201 model; Grad-CAM visualization; guided grad-CAM; visual interpretation; Explainable AI; XAI; deep learning; pre-trained models; model fine-tuning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Dragon fruit, known for its rich antioxidant content and low-calorie attributes, has garnered significant attention as a health-promoting fruit. Its economic value has also surged due to increasing consumer demand and its potential as an export commodity in various regions. The classification of dragon fruit ripeness is a pivotal task in ensuring product quality and minimizing post-harvest losses. This research article presents a comprehensive study on the classification of ripe and unripe dragon fruits (Hylocereus spp) using the Densenet201 model through three distinct approaches: as a classifier, feature extractor, and fine-tuner. To explain the outcomes of the image classification model and thereby enhance its performance, optimization, and reliability, this study employs advanced visualization techniques. Specifically, it utilizes Grad-CAM (Gradient-weighted Class Activation Mapping) and Guided Grad-CAM techniques. These techniques offer insights into the model’s decision-making process and pinpoint regions of interest within the images. This approach empowers researchers to iteratively validate the model’s accuracy and enhance its performance. The utilization of Densenet201 as a classifier, feature extractor, and fine-tuner, coupled with the insights from Grad-CAM and Guided Grad-CAM, presents a holistic approach to enhancing dragon fruit ripeness classification. The findings contribute to the broader discourse on agricultural technology, image analysis, and the optimization of classification models.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_137-A_Deep_Transfer_Learning_Approach_for_Accurate_Dragon_Fruit_Ripeness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevating Android Privacy: A Blockchain-Powered Paradigm for Secure Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411136</link>
        <id>10.14569/IJACSA.2023.01411136</id>
        <doi>10.14569/IJACSA.2023.01411136</doi>
        <lastModDate>2023-11-30T12:00:12.6330000+00:00</lastModDate>
        
        <creator>Bang Khanh Le</creator>
        
        <creator>Ngan Thi Kim Nguyen</creator>
        
        <creator>Khiem Gia Huynh</creator>
        
        <creator>Phuc Trong Nguyen</creator>
        
        <creator>Anh The Nguyen</creator>
        
        <creator>Khoa Dand Tran</creator>
        
        <creator>Trung Hoang Tuan Phan</creator>
        
        <subject>Medical test result; blockchain; smart contract; NFT; Ethereum; Fantom; Polygon; Binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The significance of medical test records in diagnosing and treating illnesses cannot be overstated. These records serve as the foundation upon which medical professionals craft precise treatment strategies tailored to a patient’s unique health condition and ailment. However, in several developing nations, such as Vietnam, a concerning trend persists: medical test records predominantly exist in vulnerable paper format, entrusted to patients for safekeeping. When patients transition between healthcare facilities, the responsibility of carrying these paper-based medical histories rests with them, introducing a significant risk factor due to the inherent fragility of paper documents, which can be easily damaged by fire or water. The loss of these crucial records can lead to severe disruptions in the diagnostic and therapeutic journey of patients, potentially compromising their well-being. Despite the emergence of various alternatives to address this vulnerability, Vietnam faces multifaceted challenges. These challenges encompass low technological literacy among patients and substantial infrastructural limitations. In response to these pressing issues, this study endeavors to harness the transformative potential of blockchain technology, smart contracts, and Non-Fungible Tokens (NFTs) to effectively mitigate the drawbacks associated with paper-based medical test records. Our comprehensive approach includes meticulous cataloging of current hospital practices, the introduction of a purpose-built blueprint for decentralized record sharing, the proposal of an innovative NFT-backed authentication model, the development of a practical proof-of-concept, and comprehensive platform testing. Through these efforts, we aim to revolutionize the management of medical test records in Vietnam, enhancing accessibility, security, and reliability for both patients and healthcare providers.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_136-Elevating_Android_Privacy_A_Blockchain_Powered_Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Embeddings for Arabic Retrieval Augmented Generation (ARAG)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411135</link>
        <id>10.14569/IJACSA.2023.01411135</id>
        <doi>10.14569/IJACSA.2023.01411135</doi>
        <lastModDate>2023-11-30T12:00:12.6330000+00:00</lastModDate>
        
        <creator>Hazem Abdelazim</creator>
        
        <creator>Mohamed Tharwat</creator>
        
        <creator>Ammar Mohamed</creator>
        
        <subject>Arabic NLP; large language models; retrieval augmented generation; semantic embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent times, Retrieval Augmented Generation (RAG) models have garnered considerable attention, primarily due to the impressive capabilities exhibited by Large Language Models (LLMs). Nevertheless, the Arabic language, despite its significance and widespread use, has received relatively less research emphasis in this field. A critical element within RAG systems is the Information Retrieval component, and at its core lies the vector embedding process commonly referred to as “semantic embedding”. This study encompasses an array of multilingual semantic embedding models, intending to enhance the model’s ability to comprehend and generate Arabic text effectively. We conducted an extensive evaluation of the performance of ten cutting-edge multilingual semantic embedding models, employing the publicly available ARCD dataset as a benchmark and assessing their performance using the average Recall@k metric. The results showed that the Microsoft E5 sentence embedding model outperformed all other models on the ARCD dataset, with Recall@10 exceeding 90%.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_135-Semantic_Embeddings_for_Arabic_Retrieval_Augmented_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>D2-Net: Dilated Contextual Transformer and Depth-wise Separable Deconvolution for Remote Sensing Imagery Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411134</link>
        <id>10.14569/IJACSA.2023.01411134</id>
        <doi>10.14569/IJACSA.2023.01411134</doi>
        <lastModDate>2023-11-30T12:00:12.6200000+00:00</lastModDate>
        
        <creator>Huaping Zhou</creator>
        
        <creator>Qi Zhao</creator>
        
        <creator>Kelei Sun</creator>
        
        <subject>YOLOv7; dilated contextual transformer; depth-wise separable deconvolution; circular smooth label; remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Remote sensing-based object detection faces challenges in arbitrary orientations, complex backgrounds, dense distributions, and large aspect ratios. Considering these issues, this paper introduces a novel method called D2-Net, which incorporates a transformer structure into a convolutional neural network. First, a new feature extraction module called dilated contextual transformer block is designed to minimize the loss of object information due to complex backgrounds and dense targets. In addition, an efficient approach using depth-wise separable deconvolution as an up-sampling method is developed to recover lost feature information effectively. Finally, the circular smooth label is incorporated to compute the angular loss to complete the rotated detection of remote sensing images. Experimental evaluations are conducted on the DOTA and HRSC2016 datasets. On the DOTA dataset, the proposed method achieves 79.2% and 78.00% accuracy in horizontal and rotated object detection, respectively; it achieves 94.00% accuracy in the rotated detection of the HRSC2016 dataset. The proposed model shows a significant performance improvement over other comparative models on the dataset, which verifies the effectiveness of our proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_134-D2_Net_Dilated_Contextual_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Hazardous Environments Through Speech and Ambient Noise Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411133</link>
        <id>10.14569/IJACSA.2023.01411133</id>
        <doi>10.14569/IJACSA.2023.01411133</doi>
        <lastModDate>2023-11-30T12:00:12.6030000+00:00</lastModDate>
        
        <creator>Andrea Veronica Porco</creator>
        
        <creator>Kang Dongshik</creator>
        
        <subject>Dangerous environment detection; speech analysis; acoustic audio analysis; ambient noises; variational autoencoder model; empathetic systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent years, significant attention has been directed towards the development of artificial empathy within the engineering academic community. Replicating artificial empathy necessitates the capability of agents to discern human emotions and comprehend environmental risks. Analyzing acoustic data in real environments offers a higher level of non-invasive privacy compared to video and camera data, limiting the agent’s understanding to specific patterns. However, current studies are negatively affected by subjective inferences from real data, which can result in inaccurate predictions, leading to both false positives and negatives, especially when contextual data and human speech are involved. This paper proposes the estimation of a dangerous environment based on emotional speech and additional ambient noises. In this approach, we implement a variational autoencoder model in conjunction with a classifier for training the classification task. Additional regularization techniques are applied to bridge the gap between the original training data and the expected data. The classifier utilizes feature data generated by the variational autoencoder to extract class patterns and determine whether the environment is hazardous. Emotional speech classified as angry, sad, or scared contributes to the classification of danger, while happy, calm, and neutral emotions are considered safe. Various ambient noise types, including gunfire and broken glass, are categorized as dangerous, while real-life indoor noises like cooking, eating, and movements are considered safe.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_133-Estimation_of_Hazardous_Environments_Through_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Speech Recognition System Based on AutoEncoder-GAN for Biometric Access Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411132</link>
        <id>10.14569/IJACSA.2023.01411132</id>
        <doi>10.14569/IJACSA.2023.01411132</doi>
        <lastModDate>2023-11-30T12:00:12.6030000+00:00</lastModDate>
        
        <creator>Oussama Mounnan</creator>
        
        <creator>Otman Manad</creator>
        
        <creator>Abdelkrim El Mouatasim</creator>
        
        <creator>Larbi Boubchir</creator>
        
        <creator>Boubaker Daachi</creator>
        
        <subject>Speaker identification; speech recognition; biometric access control; authentication; verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Speech recognition-based biometric access control systems are promising solutions that have resolved many issues related to security and convenience. Speech recognition, as a biometric modality, offers unique advantages such as user-friendliness and non-intrusiveness. However, developing robust and accurate speaker identification and authentication systems poses challenges due to variations in speech patterns and environmental factors. Integrating deep learning techniques, especially AutoEncoder and Generative Adversarial Network models, has shown promising results in addressing these challenges. This article presents a novel approach based on the combination of two deep learning models, namely, AE and GAN, for speech recognition-based biometric access control. In the model architecture, the AutoEncoder takes the MFCC coefficients as input, and the encoder converts the latter to the latent space, whereas the decoder reconstructs the data. Then, speech features extracted from the latent space are used in the GAN generator to generate additional speech data. The discriminator network has a dual role, serving as both a feature extractor and a classifier: the former extracts relevant features from generated samples, while the latter distinguishes between generated samples and authentic samples that come from the AutoEncoder. This strategy outperforms DNN and LSTM models on the VoxCeleb 2, LibriSpeech, and Aishell-1 datasets. The models are trained to minimize Mean Squared Error (MSE) for both the generator and discriminator, aiming at achieving highly realistic datasets and a robust, interpretable model. This approach addresses challenges in feature extraction, data augmentation, realistic biometric sample generation, data variability handling, and data generalization enhancement, providing, therefore, a comprehensive solution.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_132-Deep_Speech_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Particle Filter based Visual Object Tracking: A Systematic Review of Current Trends and Research Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411131</link>
        <id>10.14569/IJACSA.2023.01411131</id>
        <doi>10.14569/IJACSA.2023.01411131</doi>
        <lastModDate>2023-11-30T12:00:12.5870000+00:00</lastModDate>
        
        <creator>Md Abdul Awal</creator>
        
        <creator>Md Abu Rumman Refat</creator>
        
        <creator>Feroza Naznin</creator>
        
        <creator>Md Zahidul Islam</creator>
        
        <subject>Particle filter; visual object tracking; non-Gaussian noises; Kalman filter; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Visual object tracking is a crucial research area in computer vision because it must handle dynamic environments with non-linear motions and multi-modal non-Gaussian noises. This paper presents an overview of recent developments in particle filter-based visual object tracking algorithms and discusses the pros and cons of particle filters. Many different methodologies and algorithms are presented in the research literature, and the majority of current visual object tracking research builds on particle filters. In addition, the most advanced techniques for visual object tracking combine the convolutional neural network (CNN) with the particle filter. The advantages of particle filters are that they can handle nonlinear models and non-Gaussian noise, sequentially concentrate on the areas of the state space with higher densities, support parallelization, and are simple to implement. The particle filter also offers a robust framework for visual object tracking because it incorporates uncertainty, and it outperforms other filters such as the Kalman filter, kernelized correlation filter, optical filter, mean shift filter, and extended Kalman filter in recognition tests. Finally, this study provides information on various particle filter features and classifiers.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_131-A_Particle_Filter_Based_Visual_Object_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triggered Screen Restriction: Gamification Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411130</link>
        <id>10.14569/IJACSA.2023.01411130</id>
        <doi>10.14569/IJACSA.2023.01411130</doi>
        <lastModDate>2023-11-30T12:00:12.5730000+00:00</lastModDate>
        
        <creator>Majed Hariri</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Gamification; physical activity; sedentary behavior; Triggered Screen Restriction (TSR) framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The prevalence of sedentary lifestyles is increasingly becoming a significant public health concern, with numerous health risks ranging from obesity to heart disease. Several gamified interventions have been employed to counter sedentary behavior by promoting physical activity. However, the existing approaches have yielded mixed results, making it crucial to explore new methodologies. While existing approaches have utilized gamification elements to encourage activity, they often lack a comprehensive blend of psychological elements and advanced technology to drive a meaningful behavioral alteration. This paper introduces the Triggered Screen Restriction (TSR) framework, an interdisciplinary approach integrating behavioral psychology, gamification, and screen-time restriction technologies. The TSR framework aims to elevate gamified physical activity by leveraging the psychological Fear of Missing Out phenomenon, encouraging users to meet specific activity goals to unlock social media applications. The TSR framework presents a promising avenue for future research. The proposed framework’s unique approach is designed to motivate users to be more physically active, and it fills a literature gap in the current implementation of gamified physical interventions. Further studies are needed to empirically validate the framework’s effectiveness and potential to contribute to the gamification ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_130-Triggered_Screen_Restriction_Gamification_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CESSO-HCRNN: A Hybrid CRNN With Chaotic Enriched SSO-based Improved Information Gain to Detect Zero-Day Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411129</link>
        <id>10.14569/IJACSA.2023.01411129</id>
        <doi>10.14569/IJACSA.2023.01411129</doi>
        <lastModDate>2023-11-30T12:00:12.5570000+00:00</lastModDate>
        
        <creator>Dharani Kanta Roy</creator>
        
        <creator>Ripon Patgiri</creator>
        
        <subject>Hackers; vulnerability; zero-day attack; chaotic enriched salp swarm optimization; data cleaning; normalization; MATLAB software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Hackers exploit a vulnerability before programmers have a chance to fix it, which is known as a zero-day attack. Zero-day attackers have a variety of abilities, including the ability to alter files, control machines, steal data, and install malware or adware. When a series of complex assaults uses one or more zero-day exploits, the result is a zero-day attack path. Timely assessment of zero-day threats might be enabled by early detection of zero-day attack pathways. To detect such zero-day attacks, this paper introduces a Chaotic Enriched Salp Swarm Optimization (CESSO) combined with a hybrid Convolutional Recursive Neural Network (HCRNN). The input data is retrieved from two datasets, IDS 2018 Intrusion CSVs (CSE-CIC-IDS2018) and NSL-KDD, and is pre-processed with the help of data cleaning and normalization. A unique hybrid feature selection method based on CESSO and Information Gain (IG) is introduced. The CESSO is also used to improve the Recursive Neural Network (RNN) performance to produce an optimized RNN. The selected features are classified, and prediction is performed using the hybrid Convolutional Neural Network (CNN) with RNN, called HCRNN. The zero-day attack detection is implemented using MATLAB software. The accuracy achieved for dataset 1 is 98.36%, and for dataset 2 is 97.14%.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_129-CESSO-HCRNN_A_Hybrid_CRNN_With_Chaotic_Enriched_SSO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Powered Mobile App for Fast and Accurate COVID-19 Detection from Chest X-rays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411127</link>
        <id>10.14569/IJACSA.2023.01411127</id>
        <doi>10.14569/IJACSA.2023.01411127</doi>
        <lastModDate>2023-11-30T12:00:12.5400000+00:00</lastModDate>
        
        <creator>Rahhal Errattahi</creator>
        
        <creator>Fatima Zahra Salmam</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <creator>Asmaa El Hannani</creator>
        
        <creator>Abdelhak Aqqal</creator>
        
        <subject>COVID-19 diagnosis; computer vision; deep learning; X-ray images; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The COVID-19 pandemic has imposed significant challenges on healthcare systems globally, necessitating swift and precise screening methods to curb transmission. Traditional screening approaches are time-consuming and prone to errors, prompting the development of an innovative solution: a mobile application employing machine learning for automated COVID-19 screening. This application harnesses computer vision and deep learning algorithms to analyze X-ray images, rapidly detecting virus-related symptoms. This solution aims to enhance the accuracy and speed of COVID-19 screening, particularly in resource-constrained or densely populated settings. The paper details the use of convolutional neural networks (CNNs) and transfer learning in diagnosing COVID-19 from chest X-rays, highlighting their efficacy in image classification. The trained model is deployed in a mobile application for real-world testing, aiming to aid healthcare professionals in the battle against the pandemic. The paper provides a comprehensive overview of the background, methodology, results, and the application’s architecture and functionalities, concluding with avenues for future research.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_127-Deep_Learning_Powered_Mobile_App_for_Fast_and_Accurate_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explicit Knowledge Database Interface Model System Based on Natural Language Processing Techniques and Immersive Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411128</link>
        <id>10.14569/IJACSA.2023.01411128</id>
        <doi>10.14569/IJACSA.2023.01411128</doi>
        <lastModDate>2023-11-30T12:00:12.5400000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jose Herrera</creator>
        
        <creator>Antonio Arroyo</creator>
        
        <creator>Lucy Delgado</creator>
        
        <creator>Elisa Castaneda</creator>
        
        <subject>Knowledge management; explicit knowledge databases; natural language processing; natural user interfaces; Immersive technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This work focuses on the proposal and development of an interface system model, based on natural language processing, immersive technologies and natural user interfaces, for interaction with explicit knowledge databases. Five phases were proposed: user testing characterization, establishment of the state of the art and the theoretical foundation, software design and development, system implementation, and functional tests and evaluation of the usability of the interface model. In order to establish the user testing characterization and the corresponding theoretical framework, the expert guide on Knowledge Management and Virtual Reality was followed, based on the approach of Usability and Computer Ergonomics compatible with the ISO 9241 standard. The traditional interfaces and the proposal in this work were evaluated for each of the metrics defined by the ISO 9241 standard, considering the dimensions of effectiveness, efficiency and satisfaction. Student’s t-test established that there is enough evidence to confirm the following significant differences: effectiveness is lower using the proposed interface model, while efficiency and satisfaction are higher using the proposed interface model. Based on the conducted tests, it can be established that the proposed interface model is superior to the traditional interface in terms of the “Efficiency” and “Satisfaction” dimensions and inferior in terms of “Effectiveness”. Consequently, it can be concluded that the scientific article exploration model using VR and NLP is superior to the traditional model.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_128-Explicit_Knowledge_Database_Interface_Model_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-based Cross-Modality Multiscale Fusion for Multispectral Pedestrian Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411126</link>
        <id>10.14569/IJACSA.2023.01411126</id>
        <doi>10.14569/IJACSA.2023.01411126</doi>
        <lastModDate>2023-11-30T12:00:12.5270000+00:00</lastModDate>
        
        <creator>Zhou Hui</creator>
        
        <subject>Pedestrian detection; multispectral pedestrian detection; attention mechanism; cross-modal fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Multispectral pedestrian detection has wide applications in fields such as autonomous driving and intelligent surveillance. Mining complementary information between modalities is one of the most effective approaches to improve the performance of multispectral pedestrian detection. However, the inevitable introduction of redundant information between modalities during the fusion process leads to feature degradation. To address this challenge, we propose a multiscale differential fusion algorithm that leverages complementary information between modalities to suppress feature degradation caused by noise propagation along the network. We compare our algorithm with other cross-modal fusion pedestrian detection algorithms on the LLVIP and cleaned KAIST datasets. Experimental results demonstrate that our algorithm outperforms others, particularly in nighttime scenes where our algorithm achieves a 7.28% improvement in recall rate compared to the baseline on the cleaned KAIST dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_126-Attention_based_Cross_Modality_Multiscale_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Telemedicine Adoption for Healthcare Delivery: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411125</link>
        <id>10.14569/IJACSA.2023.01411125</id>
        <doi>10.14569/IJACSA.2023.01411125</doi>
        <lastModDate>2023-11-30T12:00:12.5100000+00:00</lastModDate>
        
        <creator>Taif Ghiwaa</creator>
        
        <creator>Imran Khan</creator>
        
        <creator>Martin White</creator>
        
        <creator>Natalia Beloff</creator>
        
        <subject>Telemedicine; systematic review; technology acceptance model; adoption; telehealth; healthcare provider; patient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Telemedicine is the delivery of healthcare services using telecommunication and information technologies. The adoption of telemedicine has been promoted by advancements in technology, increased accessibility to the Internet, and the need for convenient and efficient healthcare delivery. Understanding the theoretical foundations of telemedicine adoption among healthcare providers and patients is crucial for successful acceptance and utilization. This systematic review aims to explore the theoretical frameworks and models that have been widely utilized to understand telemedicine adoption among healthcare providers and patients. A systematic search was conducted across two popular electronic databases, resulting in the inclusion of 21 relevant studies. The selected studies were analyzed to identify the theoretical perspectives employed in telemedicine adoption research. The key findings reveal that the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) model are the most widely used models to illustrate the factors affecting telemedicine adoption among healthcare providers and patients across different countries and telemedicine contexts. Understanding these theoretical models is crucial for policymakers and healthcare professionals as it can provide insight into the key factors influencing the widespread adoption of telemedicine. This knowledge can serve as guidance for crafting initiatives and tailoring policies to promote the successful acceptance and utilization of telemedicine among providers and patients in diverse healthcare environments.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_125-Telemedicine_Adoption_for_Healthcare_Delivery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating News Tags into Neural News Recommendation in Indonesian Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411124</link>
        <id>10.14569/IJACSA.2023.01411124</id>
        <doi>10.14569/IJACSA.2023.01411124</doi>
        <lastModDate>2023-11-30T12:00:12.4930000+00:00</lastModDate>
        
        <creator>Maxalmina Satria Kahfi</creator>
        
        <creator>Evi Yulianti</creator>
        
        <creator>Alfan Farizki Wicaksono</creator>
        
        <subject>News recommendation; recommendation systems; news tags; user modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>News recommendation system holds the potential to aid users in discovering articles that align with their interests, which is critical to alleviate user information overload. To generate effective news recommendations, one key capability is to accurately capture the contextual meaning of text in the news articles, since this is pivotal in acquiring useful representations for both news content and users. In this work, we examine the effectiveness of neural news recommendation with attentive multi-view learning (NAML) method to conduct a news recommendation task in the Indonesian language. We further propose to incorporate news tags, which at some levels may capture the important contextual meanings contained in the news articles, to improve the effectiveness of the NAML method in the Indonesian news recommendation system. Our results show that the NAML method leads to significant improvement (if not comparable) in the effectiveness of neural-based Indonesian news recommendations. Further incorporating news tags is shown to significantly increase the performance of the NAML method by 5.86% in terms of NDCG@5 metric.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_124-Incorporating_News_Tags_into_Neural_News_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Digital Data: A New Edge Detection and XOR Coding Approach for Imperceptible Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411123</link>
        <id>10.14569/IJACSA.2023.01411123</id>
        <doi>10.14569/IJACSA.2023.01411123</doi>
        <lastModDate>2023-11-30T12:00:12.4770000+00:00</lastModDate>
        
        <creator>Hayat Al-Dmour</creator>
        
        <subject>Steganography; information hiding; bits modification; decoding algorithm; edge detection; canny edge detection; human visual system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The rapid progress of digital devices and technology, coupled with the emergence of the internet, has amplified the risks and perils associated with malicious attacks. Consequently, it becomes crucial to protect valuable information transmitted through the internet. Steganography is a tried-and-true technique for hiding information beneath digital content, such as pictures, texts, audio, and video. Various methodologies of image steganography have been developed recently. In image recognition, edge detection segments an image into well-defined areas. This paper introduces a novel image steganography algorithm with edge detection and XOR coding techniques. The proposed approach aims to conceal a confidential message within the spatial domain of the original image. In contrast to uniform regions, the Human Visual System (HVS) is less responsive to variations in sharp areas; therefore, an edge detection algorithm is applied to identify edge pixels. Furthermore, to enhance the efficiency and reduce the embedding impact, the XOR operation has been utilized to embed the secret message in the Least Significant Bit (LSB). According to the results of the experiments, the proposed method embeds confidential data without causing noticeable modifications to the stego image. The proposed method produced imperceptible stego images with minimal embedding distortions compared to existing methods. Based on the results, the proposed approach outperforms conventional methods regarding image distortion. The PSNR values achieved by the proposed method are higher than the acceptable level.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_123-Securing_Digital_Data_A_New_Edge_Detection_and_XOR_Coding_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Effectiveness of ChatGPT for Providing Personalized Learning Experience: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411122</link>
        <id>10.14569/IJACSA.2023.01411122</id>
        <doi>10.14569/IJACSA.2023.01411122</doi>
        <lastModDate>2023-11-30T12:00:12.4630000+00:00</lastModDate>
        
        <creator>Raneem N. Albdrani</creator>
        
        <creator>Amal A. Al-Shargabi</creator>
        
        <subject>Personalized learning; data science education; ChatGPT; generative AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The demand for personalized learning experiences that cater to the unique needs of individual learners has increased with the emergence of data science. This paper investigates the potential use of ChatGPT, a generative AI tool, in providing personalized learning experiences for data science education, specifically focusing on Deep Learning. The paper presents a case study that applies the 5Es model to test personalized learning for students using ChatGPT. The study aims to answer the question of how educators can leverage ChatGPT in their pedagogy to enhance student learning, and whether ChatGPT can provide a better learning experience than traditional teaching methods. The paper also discusses the limitations faced during the study and the findings. The results suggest that ChatGPT can be a valuable resource for data science education, providing personalized and instant feedback to learners. However, ethical considerations such as the potential for biased or inaccurate responses and the need for transparency in AI-generated content should be carefully addressed by educators. The study highlights ChatGPT’s potential as a research tool for data science educators to investigate the effectiveness of AI in personalized learning experiences. Overall, this paper contributes to the ongoing dialogue on the role of AI in data science education and provides insights into how educators can utilize ChatGPT to enhance student learning and engagement.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_122-Investigating_the_Effectiveness_of_ChatGPT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nature-Inspired Optimization for Virtual Machine Allocation in Cloud Computing: Current Methods and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411121</link>
        <id>10.14569/IJACSA.2023.01411121</id>
        <doi>10.14569/IJACSA.2023.01411121</doi>
        <lastModDate>2023-11-30T12:00:12.4470000+00:00</lastModDate>
        
        <creator>Xiaoqing YANG</creator>
        
        <subject>Cloud computing; virtualization; virtual machine allocation; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>An expanding range of services is offered by cloud data centers. The execution of application tasks is facilitated by assigning Virtual Machines (VMs) to Physical Machines (PMs). Regarding VM allocation in the cloud service center, two key factors are taken into consideration: quality of service (QoS) and energy consumption. The cloud service center aims to optimize these aspects while allocating VMs. On the other hand, cloud users have their priorities and focus on their specific requirements, particularly throughput and reliability. User requirements are considered by the cloud service center, resulting in VM allocation that meets QoS targets and optimizes energy consumption. Cloud service centers must, therefore, find a balance between QoS and energy efficiency while considering the user&#39;s requirements. To achieve this, various optimization algorithms and techniques must be employed. The objective is to find the best allocation of VMs to PMs. Due to the NP-hardness of the VM allocation problem, nature-inspired meta-heuristic algorithms have become commonly used to solve it. However, there are no comprehensive and in-depth review papers on this specific area. This paper aims to bridge a knowledge gap by providing an understanding of the significance of metaheuristic methods to address the VM allocation issue effectively. It not only highlights the role played by these algorithms but also examines the existing methods, provides comprehensive comparisons of strategies based on key parameters, and concludes with valuable recommendations for future research.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_121-Nature_Inspired_Optimization_for_Virtual_Machine_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Classification of Multiclass Brain Tumor MRI Images using Enhanced Deep Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411120</link>
        <id>10.14569/IJACSA.2023.01411120</id>
        <doi>10.14569/IJACSA.2023.01411120</doi>
        <lastModDate>2023-11-30T12:00:12.4300000+00:00</lastModDate>
        
        <creator>Faiz Ainur Razi</creator>
        
        <creator>Alhadi Bustamam</creator>
        
        <creator>Arnida L. Latifah</creator>
        
        <creator>Shandar Ahmad</creator>
        
        <subject>Brain tumor; enhanced deep learning; MRI; multiclass; neuroimaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The brain is a vital organ, and the brain tumor is one of the most dangerous types of tumors in the world. Neuroimaging is an interesting and important discussion in diagnosing central nervous system tumors. Brain tumors have several types, namely meningioma, glioma, pituitary, schwannoma, and neurocytoma. A radiologist uses magnetic resonance imaging (MRI) to detect brain tumors because of its advantages over computed tomography. However, classifying multiclass MRI is difficult and takes a long time. This study proposes an automated classification of multiclass brain tumors using enhanced deep learning techniques. Various models are used in this research, namely VGG16, NasNet-Mobile, InceptionV3, ResNet50, and EfficientNet. For EfficientNet, we applied EfficientNet-B0 through B7. From the experiments, EfficientNet-B2 is superior, with the highest training accuracy of 99.90%, testing accuracy of 99.55%, precision of 99.50%, recall of 99.67%, and F1-Score of 99.58%, with a training time of 15 minutes. The development of this automatic classification can assist radiologists in classifying brain tumor types more efficiently.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_120-Automated_Classification_of_Multiclass_Brain_Tumor_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Neural Network Algorithm Application in User Behavior Portrait Construction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411119</link>
        <id>10.14569/IJACSA.2023.01411119</id>
        <doi>10.14569/IJACSA.2023.01411119</doi>
        <lastModDate>2023-11-30T12:00:12.4000000+00:00</lastModDate>
        
        <creator>Peisen Song</creator>
        
        <creator>Bengcheng Yu</creator>
        
        <creator>Chen Chen</creator>
        
        <subject>User behavior profiling; momentum gradient descent method; adaptive fuzzy neural network; error backpropagation algorithm; least squares estimation method; subtractive clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>With the increasing number of online users, constructing user behavior profiles has received widespread attention from relevant scholars. In order to construct user behavior profiles more accurately, the research first designed an adaptive fuzzy neural network algorithm based on the momentum gradient descent method. It uses momentum gradient descent to optimize and learn the parameters adjusted by the error backpropagation algorithm and the least squares estimation method, and optimizes the structure of the fuzzy neural network through subtractive clustering. Finally, the improved algorithm is applied to the construction of user behavior profiles. The results showed that in error analysis, the error range of the improved algorithm was within [-0.10, 0.10], and the accuracy was relatively high. In indicator calculation, the improved algorithm had a recall rate 0.07 and 0.09 higher than the other two algorithms, an accuracy rate 0.03 and 0.07 higher, and an F1 score 0.07 and 0.08 higher, indicating good overall performance. In the ROC curve, the average detection rate of the designed user behavior profiling model was 0.065 and 0.155 higher than the other two models, respectively, with higher detection accuracy. These results demonstrated the effectiveness of the improved algorithm and the designed model, providing certain reference value for the development of related fields.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_119-Fuzzy_Neural_Network_Algorithm_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Focal Loss-based Multi-layer Perceptron for Diagnosis of Cardiovascular Risk in Athletes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411118</link>
        <id>10.14569/IJACSA.2023.01411118</id>
        <doi>10.14569/IJACSA.2023.01411118</doi>
        <lastModDate>2023-11-30T12:00:12.3530000+00:00</lastModDate>
        
        <creator>Chuan Yang</creator>
        
        <subject>Cardiovascular diseases; multi-layer perceptron; focal loss; artificial bee colony; imbalanced classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Cardiovascular diseases (CVDs) are a prevalent cause of heart failure around the world, and research is required to investigate potential approaches to treating the disease. The article presents a focal loss (FL)-based multi-layer perceptron called MLP-FL-CRD to diagnose cardiovascular risk in athletes. In 2012, 26,002 athletes were measured for height, weight, age, sex, blood pressure, and pulse rate in a medical exam that included electrocardiography at rest. Outcomes were negative for the large majority, leading to class imbalance. Training on imbalanced data hurts classifier performance. To address this, the study proposes a training approach based on focal loss, which effectively emphasizes minority-class examples. Focal loss softens the influence of easy samples, enabling the model to concentrate on more intricate examples, and is useful in circumstances where there is a substantial class imbalance. Additionally, the paper highlights a challenge in the training phase, which is often characterized by the use of gradient-based learning methods like backpropagation. These methods exhibit several disadvantages, including sensitivity to initialization. The paper recommends the implementation of a mutual learning-based artificial bee colony (ML-ABC). This approach adjusts the initial weights by substituting the food-source candidate, which is selected due to superior fitness, with one based on a mutual learning factor between two individuals. The model obtains strong outcomes, outperforming other machine learning models. Optimal values for important parameters are identified for the model based on experiments on the study dataset. Ablation studies that exclude FL and ML-ABC from the model confirm the additive, non-negative contribution of these components to the model’s performance.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_118-A_Focal_Loss_based_Multi_layer_Perceptron.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MG-CS: Micro-Genetic and Cuckoo Search Algorithms for Load-Balancing and Power Minimization in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411117</link>
        <id>10.14569/IJACSA.2023.01411117</id>
        <doi>10.14569/IJACSA.2023.01411117</doi>
        <lastModDate>2023-11-30T12:00:12.3070000+00:00</lastModDate>
        
        <creator>Jun ZHOU</creator>
        
        <creator>Youyou Li</creator>
        
        <subject>Resource utilization; cloud computing; energy consumption; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Cloud computing has emerged as a transformative technology, offering remote access to various computing resources. However, efficiently managing these resources while curbing escalating energy consumption remains a critical challenge. In response, this paper presents the Micro-Genetic Algorithm with Cuckoo Search (MG-CS), a novel approach for enhancing cloud computing efficiency. MG-CS optimizes load balancing and power reduction and significantly contributes to reducing operational costs, ensuring compliance with service level agreements, and enhancing overall service quality. Our experiments showcase MG-CS&#39;s versatility in achieving a well-balanced distribution of workloads, resource optimization, and substantial energy savings. This multifaceted approach redefines cloud resource management, offering an environmentally sustainable and cost-effective solution. By introducing MG-CS, this research addresses the pressing challenges in cloud computing, aligning it with environmental responsibility and economic efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_117-MG_CS_Micro_Genetic_and_Cuckoo_Search_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Depression from Video Frames by using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411116</link>
        <id>10.14569/IJACSA.2023.01411116</id>
        <doi>10.14569/IJACSA.2023.01411116</doi>
        <lastModDate>2023-11-30T12:00:12.2770000+00:00</lastModDate>
        
        <creator>Jianwen WANG</creator>
        
        <creator>Xiao SHA</creator>
        
        <subject>Deep learning; depression recognition; Convolutional Neural Network (CNN); attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Mood disturbances are closely related to emotions. Specifically, the behaviour of people with mood disorders, such as unipolar depression, shows a strong temporal correlation with the emotional dimensions of arousal and valence. Moreover, psychiatrists and psychologists take into account facial cues and vocal cues when assessing a patient’s condition. Depression produces observable behaviours such as weak facial expressions, reduced eye contact, and the use of short, flat-voiced sentences. Artificial intelligence has combined various automated frameworks for the detection of depression severity using hand-crafted features, and deep learning methods have been successfully applied to detect depression. In the current article, a unified architecture, a deep convolutional neural network based on global attention, is proposed to diagnose depression. This method uses a CNN with an attention mechanism and integrates weighted spatial pyramid pooling to learn deep global representations. In this method, two branches are introduced: the CNN based on local attention focuses on local patches, while the CNN based on global attention learns universal patterns from the whole face area. To exchange complementary information between the two parts, a CNN based on local-global attention is proposed. The experiments were conducted on two datasets, AVEC2014 and AVEC2013. The results show that the presented approach can extract depression patterns from video frames. The outcomes also show that the presented approach is superior to the best video-based methods for depression detection.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_116-Recognition_of_Depression_from_Video_Frames.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel CNN-based Model for Medical Image Registration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411115</link>
        <id>10.14569/IJACSA.2023.01411115</id>
        <doi>10.14569/IJACSA.2023.01411115</doi>
        <lastModDate>2023-11-30T12:00:12.2430000+00:00</lastModDate>
        
        <creator>Hui GAO</creator>
        
        <creator>Mingliang LIANG</creator>
        
        <subject>Image registration; convolutional neural network; Pyramid Registration (PR); encoder-decoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Deformable image registration is widely applied to image diagnosis, disease monitoring, and surgical navigation, with the aim of learning the anatomical correspondence between a moving image and a static image. The image registration procedure mainly includes three steps: creating a deformation model, designing a function to measure similarity, and a learning step to optimize the parameters. In the current article, a two-stream architecture is designed that can sequentially estimate multi-level registration fields from a pair of feature pyramids. A two-stream 3D encoder-decoder network is designed that computes two convolutional feature pyramids separately from the two input volumes. Sequential pyramid registration is also proposed, in which a chain of pyramid registration (PR) modules predicts the multi-level registration fields directly from the decoded feature pyramids. In addition, the PR modules can be augmented with the computation of local 3D correlations between the feature pyramids, which leads to further improvement of the presented approach and makes it capable of capturing the detailed anatomical structure of the brain. The proposed method is tested on three benchmark datasets for brain MRI registration. The evaluation outcomes show that the presented approach outperforms advanced approaches by a large margin.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_115-A_Novel_CNN_based_Model_for_Medical_Image_Registration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Hybrid Jaro-Winkler and Manhattan Distance using Dissimilarity Measure for Test Case Prioritization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411114</link>
        <id>10.14569/IJACSA.2023.01411114</id>
        <doi>10.14569/IJACSA.2023.01411114</doi>
        <lastModDate>2023-11-30T12:00:12.2130000+00:00</lastModDate>
        
        <creator>Siti Hawa Mohamed Shareef</creator>
        
        <creator>Rabatul Aduni Sulaiman</creator>
        
        <creator>Abd Samad Hasan Basari</creator>
        
        <subject>Test case prioritization; software product line; dissimilarity-based technique; string distance; new enhanced hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Software product line (SPL) is a concept that has revolutionized the software development industry. It refers to a set of related software products that are developed from a common set of core assets but can be customized to meet specific customer requirements. Integrating SPL techniques into test case prioritization (TCP) can greatly enhance its effectiveness. By considering variability across different products within an SPL, it becomes possible to prioritize test cases based on their relevance to specific product configurations. However, the concept itself still has certain issues, such as finding the highest rate of early failure detection. Various solutions have been proposed to mitigate this problem, among them improving the calculation of string distance using a hybrid technique to achieve a high degree of similarity. The Dissimilarity-based Technique (DBP) is the basis for our ranking method. The objective is to identify further weaknesses in the product lines as well as the differences between the experiment and real-world applications. Our focus is to enhance hybrid techniques that produce the highest rate of early failure detection. In this paper, early fault detection is selected as the performance goal. In order to choose the optimal methods for DBP for TCP, a comparison between several string distance measures was conducted. This study proposed hybrid techniques that combine the Jaro-Winkler and Manhattan string distances, namely New Enhanced Hybrid Technique 1 (NEHT1), New Enhanced Hybrid Technique 2 (NEHT2), and New Enhanced Hybrid Technique 3 (NEHT3). The case study was generated using the PLEDGE tool based on a Feature Model (FM). Six test cases were used in the experiment. Results show the effectiveness of the combination, which achieved a higher degree of similarity for T1 vs. T4, T2 vs. T3, T2 vs. T6, and T3 vs. T6, as well as a perfect degree of similarity for NEHT1 (100.00%). The result proves that the combination of both techniques improves SPL testing effectiveness compared to existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_114-The_Hybrid_Jaro_Winkler_and_Manhattan_Distance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Bangla Image Captioning Based on Transformer Model in Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411113</link>
        <id>10.14569/IJACSA.2023.01411113</id>
        <doi>10.14569/IJACSA.2023.01411113</doi>
        <lastModDate>2023-11-30T12:00:12.1970000+00:00</lastModDate>
        
        <creator>Md. Anwar Hossain</creator>
        
        <creator>Mirza AFM Rashidul Hasan</creator>
        
        <creator>Ebrahim Hossen</creator>
        
        <creator>Md Asraful</creator>
        
        <creator>Md. Omar Faruk</creator>
        
        <creator>AFM Zainul Abadin</creator>
        
        <creator>Md. Suhag Ali</creator>
        
        <subject>Bangla image captioning; image processing; natural language processing; attention mechanism; transformer model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Image Captioning has become a crucial aspect of contemporary artificial intelligence because it tackles two central parts of the AI field: Computer Vision and Natural Language Processing. Bangla currently stands as the seventh most widely spoken language globally, and image captioning in Bangla has accordingly gained recognition as a significant research direction. Many established datasets exist for English, but there is no standard dataset in Bangla. For our research, we used the BAN-Cap dataset, which contains 8091 images with 40455 sentences. Many effective encoder-decoder and visual attention approaches have been used for image captioning, where a CNN serves as the encoder and an RNN as the decoder. In this study, however, we propose a transformer-based image captioning model with different pre-trained image feature extraction models, namely ResNet50, InceptionV3, and VGG16, on the BAN-Cap dataset, evaluate its efficiency and accuracy using performance metrics such as BLEU, METEOR, ROUGE, and CIDEr, and identify the drawbacks of other models.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_113-Automatic_Bangla_Image_Captioning_Based_on_Transformer_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ODFM: Abnormal Traffic Detection Based on Optimization of Data Feature and Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411112</link>
        <id>10.14569/IJACSA.2023.01411112</id>
        <doi>10.14569/IJACSA.2023.01411112</doi>
        <lastModDate>2023-11-30T12:00:12.1670000+00:00</lastModDate>
        
        <creator>Xianzong Wu</creator>
        
        <subject>Abnormal traffic; detection; data mining; feature dimension optimization; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The booming of computer networks and software applications has led to an explosive growth in the potential damage caused by network attacks. Efficient detection of abnormal network traffic is appealing because it enables traffic tracking and localization for network usage at low resource cost. High-quality detection of abnormal Internet traffic becomes particularly relevant for the automated services of multiple application scenarios. This paper proposes a novel abnormal traffic detection algorithm called ODFM based on the optimization of data feature and mining. Specifically, we develop a feature selection strategy to reduce the feature analysis dimension, and set up a peer-to-peer (P2P) traffic identification module to filter and mine the related service traffic, reducing the amount of data to be examined and facilitating abnormal traffic detection. Experimental results demonstrate that the proposed algorithm greatly improves detection accuracy, which verifies its effectiveness and competitiveness in general abnormal network traffic detection tasks.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_112-ODFM_Abnormal_Traffic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Security Detection Method Based on Abnormal Traffic Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411111</link>
        <id>10.14569/IJACSA.2023.01411111</id>
        <doi>10.14569/IJACSA.2023.01411111</doi>
        <lastModDate>2023-11-30T12:00:12.1500000+00:00</lastModDate>
        
        <creator>Tao Xiao</creator>
        
        <creator>Yang Ke</creator>
        
        <creator>Hu YiWen</creator>
        
        <creator>Wang HongYa</creator>
        
        <subject>Abnormal traffic; network security detection; data dimensionality reduction; flow characteristics; traffic capture; alarm module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To discover potential risks and vulnerabilities in the network in time and ensure its safe operation, a network security detection method based on abnormal traffic detection is studied. The network security detection architecture is constructed from several modules: the front-end interface module, control center module, network status extraction module, anomaly detection module, alarm module, and database module. NetFlow technology is used to capture network traffic in the form of flows, and the KNN algorithm in the traffic filtering submodule filters network traffic packets and eliminates duplicate traffic data. After filtering, the traffic data is transmitted to the feature selection submodule, where the PCA-TS algorithm reduces the dimensionality of the network traffic data and selects the network traffic characteristics, which are then input into an SVM classifier. The improved SVM multi-classification algorithm classifies normal and abnormal traffic, completing abnormal traffic detection and achieving network security detection. Experimental results show that the feature selection time of this method does not exceed 3.0s, and the G score during detection remains above 0.70, indicating that this method has strong network security detection capability.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_111-Network_Security_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Cybersecurity Situation Awareness Model in Saudi SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411110</link>
        <id>10.14569/IJACSA.2023.01411110</id>
        <doi>10.14569/IJACSA.2023.01411110</doi>
        <lastModDate>2023-11-30T12:00:12.1500000+00:00</lastModDate>
        
        <creator>Monerah Faisal Almoaigel</creator>
        
        <creator>Ali Abuabid</creator>
        
        <subject>Cyber situation awareness; cybersecurity control and precaution; Saudi; SMEs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Saudi Small and Medium-sized Enterprises (SMEs) are witnessing rapid growth in technology and innovation. However, this growth is accompanied by increased cybersecurity threats, which pose significant challenges for SMEs. Cyber threats are becoming more complex and sophisticated, with SMEs becoming prime targets due to their weaker cybersecurity defenses. There exists a rich literature on the critical challenges facing SMEs, addressing many research issues (e.g., finance, technology adoption, and management). However, one critical issue that has so far received no rigorous attention is cybersecurity situation awareness in the SME context. Thus, this study used a quantitative approach aiming to empirically test a model of cybersecurity situational awareness that can support SMEs in Saudi Arabia in implementing cybersecurity measures and precautions with efficacy. An online survey of 350 participants was conducted to collect the research data. The study identified a significant positive relationship between Cyber Situational Awareness (Csa) and Implementation of Cybersecurity Controls (Icsc), suggesting that enhancing awareness can contribute to better control implementation. Finally, the paper provides several interesting findings and outlines future research directions.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_110-Implementation_of_Cybersecurity_Situation_Awareness_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating a Framework for Care Needs Hub for Persons with Disabilities and Senior Citizens</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411109</link>
        <id>10.14569/IJACSA.2023.01411109</id>
        <doi>10.14569/IJACSA.2023.01411109</doi>
        <lastModDate>2023-11-30T12:00:12.1330000+00:00</lastModDate>
        
        <creator>Guillermo V. Red</creator>
        
        <creator>Thelma D. Palaoag</creator>
        
        <creator>Vince Angelo E. Naz</creator>
        
        <subject>Care need framework; persons with disability; CareAide; 4+1 view model; CareNeed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Patient satisfaction is a measure of how effectively a company’s goods or services fulfil consumer expectations. This study aims to design an architectural framework for a care needs hub for people with disabilities and senior citizens. Using systems modelling for crafting architectural frameworks, the researchers used a 4+1 view model with UML to describe the features of the care needs hub in depth. Quality attributes were used to indicate how well the system would satisfy the needs of the stakeholders beyond its basic functions. The design includes the system&#39;s functional and non-functional features, as well as their corresponding diagrams drawn in a unified modelling language in accordance with the 4+1 view model, to assist the system&#39;s developer in mapping the system&#39;s functionalities correctly and accurately. Architecture models and design patterns are developed and executed to understand how the system&#39;s primary components fit together, how messages and data move effectively across the system, and how other structural issues work. The proposed model includes verified and validated development paradigms and architectural and design patterns that may help accelerate the development process. The architecture and design patterns fulfil all of the system&#39;s criteria. The researchers designed a comprehensive tool for completing the development of the care needs hub, which would greatly help the system&#39;s developers in crafting the correct features and data abstractions needed to build and implement it. This research aims to develop an innovative solution that addresses the current challenges faced by persons with disabilities and senior citizens in accessing care services and provides a comprehensive and accessible platform for their care.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_109-Creating_a_Framework_for_Care_Needs_Hub.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Generative Adversarial Networks and Ensemble Learning for Multi-Modal Medical Image Fusion to Improve the Diagnosis of Rare Neurological Disorders</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411108</link>
        <id>10.14569/IJACSA.2023.01411108</id>
        <doi>10.14569/IJACSA.2023.01411108</doi>
        <lastModDate>2023-11-30T12:00:12.1200000+00:00</lastModDate>
        
        <creator>Bhargavi Peddi Reddy</creator>
        
        <creator>K Rangaswamy</creator>
        
        <creator>Doradla Bharadwaja</creator>
        
        <creator>Mani Mohan Dupaty</creator>
        
        <creator>Partha Sarkar</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <subject>Multi-modal medical images; ensemble learning; CNN; GAN; neurological disorders; image-to-image method; transfer learning; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The research suggests a unique ensemble learning approach for precise feature extraction and feature fusion from multi-modal medical images, which may be applied to the diagnosis of rare neurological illnesses. The proposed method makes use of the combined characteristics of Convolutional Neural Networks and Generative Adversarial Networks (CNN-GAN) to improve diagnostic accuracy and enable early identification. To this end, a diverse dataset of multi-modal medical records of patients with rare neurological disorders was gathered. The multi-modal images are combined using a GAN-based image-to-image translation technique to produce synthetic images that effectively gather crucial clinical data from the different modalities. To extract features from extensive clinical imaging databases, the research employs models trained with transfer learning approaches on CNN frameworks designed specifically for analyzing medical images. By compiling distinctive traits from each modality, a thorough grasp of the underlying pathophysiology is produced. By combining the strengths of several CNN algorithms using ensemble learning techniques, including majority voting, weight averaging, and stacking, the forecasts were integrated to arrive at the final diagnosis. In addition, the ensemble approach enhances the robustness and reliability of the assessment algorithm, resulting in increased effectiveness in identifying unusual neurological conditions. The analysis of the collected data shows that the proposed technique outperforms single-modal designs, demonstrating the importance of multi-modal image fusion and feature extraction. The proposed method significantly outperforms existing methods, achieving an accuracy of 99.99%, as opposed to 85.69% for XGBoost and 96.12% for LSTM, an average increase in accuracy of approximately 13.3%. The proposed method was implemented using Python software.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_108-Using_Generative_Adversarial_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-Driven Optimization Approach Based on Genetic Algorithm in Mass Customization Supplying and Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411106</link>
        <id>10.14569/IJACSA.2023.01411106</id>
        <doi>10.14569/IJACSA.2023.01411106</doi>
        <lastModDate>2023-11-30T12:00:12.1030000+00:00</lastModDate>
        
        <creator>Shereen Alfayoumi</creator>
        
        <creator>Neamat Eltazi</creator>
        
        <creator>Amal Elgammal</creator>
        
        <subject>Mass customization manufacturing; metaheuristic search; genetic algorithm; optimization; supply chain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Numerous artificial intelligence (AI) techniques are currently utilized to identify planning solutions for supply chains, which comprise suppliers, manufacturers, wholesalers, and customers. Continuous optimization of these chains is necessary to enhance their performance. Manufacturing is a critical stage within the supply chain that requires continuous optimization. Mass Customization Manufacturing is one such manufacturing type that involves high-volume production with a wide variety of materials. However, genetic algorithms have not been used to minimize both time and cost in the context of mass customization manufacturing. We therefore propose an artificial intelligence solution using a genetic algorithm to build a model that minimizes the time and cost associated with mass-customized orders. Our problem formulation is based on a real-world case and adheres to expert descriptions. Our proposed optimization model incorporates two strategies to solve the optimization problem. The first strategy employs a single objective function focused on either time or cost, while the second applies the multi-objective function NSGA-II to optimize both time and cost simultaneously. The effectiveness of the proposed model was evaluated using a real case study, and the results demonstrated that leveraging genetic algorithms for mass customization optimization outperformed expert estimations in finding efficient solutions. On average, the evaluation revealed a 20.4% improvement for time optimization, a 29.8% improvement for cost optimization, and a 25.5% improvement for combined time and cost optimization compared to traditional expert optimization.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_106-AI_Driven_Optimization_Approach_Based_on_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Model Construction of Emotional Expression and Propagation Path of Deep Learning in National Vocal Music</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411107</link>
        <id>10.14569/IJACSA.2023.01411107</id>
        <doi>10.14569/IJACSA.2023.01411107</doi>
        <lastModDate>2023-11-30T12:00:12.1030000+00:00</lastModDate>
        
        <creator>Zhangcheng Tang</creator>
        
        <subject>Deep learning; national vocal music; innovation; emotion; dissemination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Emotional expression is important in Chinese national vocal music art. Emotional expression in national vocal music is grounded in the art form itself, with distinct characteristics and requirements, and its ultimate goal is to convey the full range of emotions in the national vocal music art. Promoting the spread of national vocal music singing through modern media is an urgent requirement for the inheritance and development of this art. With the rapid development of science and technology, integrating deep learning and traditional music has become the general trend. Deep learning has gradually been applied to melody recognition, intelligent composition, virtual performance, and other aspects of traditional music and has achieved good results, but a series of conceptual, technical, and ethical issues lies hidden behind it. In this paper, the application of deep learning is discussed and its prospects are examined. The recognition rate of emotional expression in national vocal music is 92%. In terms of communication, combined with the deep learning algorithm, this paper analyzes the characteristics and requirements of emotional expression in the art of national vocal music singing and puts forward a new method for promoting the development of this art, hoping to attract more attention, enhance social awareness of the application field, and promote the steady development of Chinese traditional music in the information age.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_107-Application_Model_Construction_of_Emotional_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Design of Ethnic Patterns in Clothing using Improved DCGAN for Real-Time Style Transfer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411105</link>
        <id>10.14569/IJACSA.2023.01411105</id>
        <doi>10.14569/IJACSA.2023.01411105</doi>
        <lastModDate>2023-11-30T12:00:12.0870000+00:00</lastModDate>
        
        <creator>Yingjun Liu</creator>
        
        <creator>Ming Wu</creator>
        
        <subject>Computer vision; improved DCGAN; style transfer; adaptive instance normalization; intelligent design of patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In view of the problems that traditional real-time style transfer technology requires large numbers of training samples and produces images of low quality lacking realism and detail, this study combines an improved generative adversarial network (GAN) with real-time style transfer technology and enhances the real-time style transfer computation with adaptive instance normalization. As a result, a novel intelligent clothing ethnic pattern design model is developed. Experimental results show that the model reduces physical memory usage by 45.7%, using only 453MB, and utilizes only 26% of CPU resources. The training time is approximately 20 minutes and 48 seconds. The performance of this model is clearly higher than that of other models. The intelligent clothing ethnic pattern design model presented in this study demonstrates higher clarity and shorter processing time, and has potential applications in the field of image generation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_105-Intelligent_Design_of_Ethnic_Patterns_in_Clothing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Analyzing Employee Turnover in Enterprises Based on Improved XGBoost Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411104</link>
        <id>10.14569/IJACSA.2023.01411104</id>
        <doi>10.14569/IJACSA.2023.01411104</doi>
        <lastModDate>2023-11-30T12:00:12.0730000+00:00</lastModDate>
        
        <creator>Linzhi Nan</creator>
        
        <creator>Han Zhang</creator>
        
        <subject>Data preprocessing; linear white noise; root mean square error; Newton’s law of cooling; step cooling curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To accurately predict the possibility of employee turnover during enterprise operation and improve the benefits created by talent in the enterprise, research based on the extreme gradient boosting (XGBoost) algorithm has received widespread attention. However, with the exponential growth of the various types of resignation reasons, this algorithm is not comprehensive enough when dealing with complex psychological profiles. To solve this problem, this study uses the extreme gradient boosting algorithm to predict employee turnover on the Company dataset and combines it with differenced autoregressive moving average variable optimization to generate a fusion algorithm. The research first performs stepwise regression processing on the training data, expanding the objective function to a second-order Taylor expansion; then variance coding is added to the square-integrable linear white noise, and the step cooling curve is smoothed by changing the temperature control constant; then the root mean square error of Newton&#39;s law of cooling is calculated to obtain its derivative loss variable. Linear white noise is the chaotic data produced by the improved extreme gradient boosting algorithm when forecasting the original enterprise employee data, and it affects the results of data preprocessing in the loss analysis. To reduce the operational error of the algorithm, the step cooling curves are drawn according to the cooling law, and their root mean square errors are then calculated. Finally, the fusion algorithm was applied to the Company dataset, and its prediction accuracy was compared with that of the particle swarm optimization algorithm. A total of 400 experiments were conducted; the fusion algorithm made correct predictions in 398 of them, an accuracy rate of 99.5%, while the accuracy of the particle swarm optimization algorithm was 83.2%. The experimental results indicate that the algorithm model proposed in the study can accurately predict the possibility of employee turnover in enterprises, enabling the company to receive timely information for its next budgeting step.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_104-A_Model_for_Analyzing_Employee_Turnover_in_Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Image Algorithm for Face Recognition Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411103</link>
        <id>10.14569/IJACSA.2023.01411103</id>
        <doi>10.14569/IJACSA.2023.01411103</doi>
        <lastModDate>2023-11-30T12:00:12.0570000+00:00</lastModDate>
        
        <creator>Qiang Wu</creator>
        
        <subject>Multi-task deep learning; face recognition; convolutional neural network; multi-task; dimension</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>As requirements for applications become ever higher, the recognition of facial features has received increasing attention. Current facial feature recognition algorithms not only take a long time but also suffer from problems such as high system resource consumption and long running times in practical applications. On this basis, the research proposes a multi-task face recognition algorithm by combining multi-task deep learning with a convolutional neural network, and analyzes its performance in four dimensions: face identity, age, gender, and fatigue state. The experimental results show that the multi-task face recognition algorithm model obtained through layer-by-layer progression takes less time than other models and can complete more tasks in the same training time. Comparing the best model M44 with other algorithms across the four dimensions, the lowest Mean Absolute Error is 3.53, and the highest Accuracy value is 98.3%. Overall, the multi-task face recognition algorithm proposed in the study can recognize facial features efficiently and quickly; its training time is short, its calculation speed is fast, and its recognition accuracy is much higher than that of other algorithms. It has strong practical significance in applications such as intelligent driving behavior analysis and intelligent clothing navigation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_103-Research_on_Image_Algorithm_for_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Method of Traditional Art Painting Style Based on Color Space Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411102</link>
        <id>10.14569/IJACSA.2023.01411102</id>
        <doi>10.14569/IJACSA.2023.01411102</doi>
        <lastModDate>2023-11-30T12:00:12.0570000+00:00</lastModDate>
        
        <creator>Xu Zhe</creator>
        
        <subject>Color space; traditional art; painting style; classification method; fuzzy c-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In order to improve the accuracy and efficiency of traditional art painting style classification, a classification method based on color space transformation is proposed. The method first preprocesses the painting images, improving their contrast and making their colors and details more vivid, which provides the basis for the subsequent color space conversion. After the images are stretched using an automatic contrast stretching method, the color space is transformed, converting each image from one color space to another so that its features can be better extracted. Based on the color-space-converted images, contrast-limited adaptive histogram equalization is applied to obtain enhanced painting images, further increasing contrast, making details more prominent, and improving the overall visual effect. Taking the enhanced images as input, the fuzzy C-means method is used to classify the traditional art painting styles, effectively dividing the images into different categories according to their characteristics. The experimental results show that this method can effectively enhance traditional art painting images and classify paintings of different styles, demonstrating strong practical applicability.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_102-Classification_Method_of_Traditional_Art_Painting_Style.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Fusion Method of Virtual Reality Technology and 3D Movie Animation Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411101</link>
        <id>10.14569/IJACSA.2023.01411101</id>
        <doi>10.14569/IJACSA.2023.01411101</doi>
        <lastModDate>2023-11-30T12:00:12.0400000+00:00</lastModDate>
        
        <creator>Xiang Yuan</creator>
        
        <creator>He Huixuan</creator>
        
        <subject>Virtual reality technology; 3D film and television; animation design; model optimization; roaming interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To further improve the design of 3D film and television animation, the integration of virtual reality technology with 3D film and television animation design is studied. The method uses the 3ds Max software employed in virtual reality technology to build 3D animation scenes through manual modeling. Texture mapping is applied to the established scenes, after which the 3D animation character models are built and simulated. Once the scene and character models are optimized using the improved quadratic error measurement algorithm, roaming interaction within the 3D animation scene is realized through the Unity3D software, completing the integration of virtual reality technology and 3D film and television animation design. The experimental results indicate that the 3D animation scenes created using virtual reality technology are highly realistic and that the method can effectively optimize the 3D animation models. The number of path nodes during scene roaming and interaction is the smallest, indicating a significant practical effect.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_101-The_Fusion_Method_of_Virtual_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Question Pairs Identification with Ensemble Learning: Integrating Machine Learning and Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01411100</link>
        <id>10.14569/IJACSA.2023.01411100</id>
        <doi>10.14569/IJACSA.2023.01411100</doi>
        <lastModDate>2023-11-30T12:00:12.0270000+00:00</lastModDate>
        
        <creator>Salsabil Tarek</creator>
        
        <creator>Hatem M. Noaman</creator>
        
        <creator>Mohammed Kayed</creator>
        
        <subject>Ensemble learning; natural language processing; deep learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This study investigates the effectiveness of machine learning (ML) and deep learning (DL) models on the Quora question pairs dataset. Among the ML models, AdaBoost reached 73.44% test accuracy, while ensemble learning approaches improved the results further, with the Hard-Voting Ensemble achieving 76.13%. DL models, such as an FCN, demonstrated a test accuracy of 81% with cross-validation. These findings contribute to natural language processing by demonstrating the potential of ensemble learning for ML models and the detailed pattern-capturing capacity of DL models.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_100-Enhancing_Question_Pairs_Identification_with_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of an Intelligent Evaluation Model of Yield Risk Based on Empirical Probability Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141199</link>
        <id>10.14569/IJACSA.2023.0141199</id>
        <doi>10.14569/IJACSA.2023.0141199</doi>
        <lastModDate>2023-11-30T12:00:12.0100000+00:00</lastModDate>
        
        <creator>Zhou Yanru</creator>
        
        <creator>Yang Jing</creator>
        
        <subject>Empirical probability distribution; yield; risk intelligence evaluation; principal component analysis; clustering; weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To improve the accuracy of yield risk evaluation, an intelligent yield risk evaluation model based on empirical probability distribution is constructed. A risk factor dimensionality reduction method based on principal component analysis is adopted: after the multiple data dimensions of the risk factors affecting the rate of return are adjusted to a unified dimension, a cluster-based evaluation index screening method is used to build the evaluation index set that best reflects yield risk. An index weight vector equation method based on entropy weight and information entropy is then used to set the evaluation index weights. Finally, through a comprehensive evaluation model based on the empirical probability distribution of the risk indicators, the empirical probability distribution information of risk indicators at all levels is analyzed, and the yield risk level is intelligently evaluated. The research results show that the model can effectively evaluate the level of return risk and provide an effective reference for preventing and controlling investment return risk.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_99-Construction_of_an_Intelligent_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of a Machine Learning-based Model to Optimize the Detection of Cyber-attacks in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141198</link>
        <id>10.14569/IJACSA.2023.0141198</id>
        <doi>10.14569/IJACSA.2023.0141198</doi>
        <lastModDate>2023-11-30T12:00:11.9930000+00:00</lastModDate>
        
        <creator>Cheikhane Seyed</creator>
        
        <creator>Jeanne roux BILONG NGO</creator>
        
        <creator>Mbaye KEBE</creator>
        
        <subject>IoT; Machine learning; cyber-security; detection of attacks; weka tool; classification quality and consistency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In this article, we propose a model to optimize the detection of attacks in the IoT. The IoT is a promising technology that connects living and non-living things around the world. Despite the rapid development of these technologies, security remains a weakness, leaving the IoT vulnerable to numerous cyber-attacks. Automatic intrusion detection systems are, of course, deployed, but they do not mobilize the full potential of machine learning. Our approach offers a means of selecting the ML method with the lowest learning cost in order to optimize the prediction of threats to IoT objects. To do this, we adopt a modular design based on two layers. The first module is a canvas containing the most commonly used ML methods, namely supervised learning, unsupervised learning, and reinforcement learning. The second module introduces a mechanism that measures the learning cost of each of these methods so as to choose the least expensive one and thus detect intrusions in IoT objects quickly and efficiently. To validate the proposed model, we simulated it using the Weka tool. The results show a classification quality rate of 93.66%, supported by a classification consistency rate of 0.882 (close to unity), demonstrating a trend towards convergence between observation and prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_98-Proposal_of_a_Machine_Learning_based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Integrated Neural Networks: A New Frontier in MRI-based Brain Tumor Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141197</link>
        <id>10.14569/IJACSA.2023.0141197</id>
        <doi>10.14569/IJACSA.2023.0141197</doi>
        <lastModDate>2023-11-30T12:00:11.9930000+00:00</lastModDate>
        
        <creator>Subrata Banik</creator>
        
        <creator>Nani Gopal Barai</creator>
        
        <creator>F M Javed Mehedi Shamrat</creator>
        
        <subject>Brain tumor; MRI imaging; BrainTumorNet; deep learning; image classification; augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Brain tumors, which originate from the uncontrolled growth of abnormal cells in the brain, present a significant challenge in healthcare due to their varied symptoms and infrequency. While Magnetic Resonance Imaging (MRI) is essential for accurately identifying and diagnosing malignant tumors, manual interpretation is often complex and prone to mistakes. To address this, we introduce BrainTumorNet, a specialized convolutional neural network (CNN) created for MRI-based brain tumor diagnosis. We ensure improved image quality and a robust training dataset by including preprocessing approaches involving CLAHE and data augmentation. Additionally, we integrated a blockchain-based data retrieval technology to enhance security, traceability, and collaboration in MRI data management across several medical institutions. This blockchain framework ensures that MRI data, once input from hospitals, stay immutable and can be safely retrieved based on unique hospital IDs, promoting a trustworthy environment for data exchange. Performance assessments conducted on multiple MRI datasets showcased BrainTumorNet's proficiency, with accuracy rates of 98.66%, 97.17%, and 94.24% on dataset 1, dataset 2, and dataset 3, respectively. The model's performance was evaluated using a comprehensive set of metrics, including accuracy, specificity, recall, precision, F1-score, and the confusion matrix. These measures are essential for assessing a model's strengths and limits, emphasizing BrainTumorNet's ability to generate accurate and relevant predictions and its effectiveness in determining negative classifications. BrainTumorNet's performance was compared with six renowned deep learning architectures: VGG16, ResNet50, AlexNet, MobileNetV2, InceptionV3, and DenseNet121. Our work highlights BrainTumorNet's potential to simplify and boost the accuracy of MRI-based brain tumor diagnosis while ensuring data integrity and collaboration through blockchain.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_97-Blockchain_Integrated_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Segmentation Algorithm Based on Asymmetric Encoder and Multimodal Cross-Collaboration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141196</link>
        <id>10.14569/IJACSA.2023.0141196</id>
        <doi>10.14569/IJACSA.2023.0141196</doi>
        <lastModDate>2023-11-30T12:00:11.9770000+00:00</lastModDate>
        
        <creator>Pengyue Zhang</creator>
        
        <creator>Qiaomei Ma</creator>
        
        <subject>Brain tumor; multimodal cross-collaboration; asymmetric encoder; coordinate attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To address the challenges of insufficient multimodal information fusion and insufficient extraction of long-range dependency features in brain tumor segmentation, this paper proposes a novel network based on an asymmetric encoder and multimodal cross-collaboration. The network employs an asymmetric encoder-decoder architecture. Firstly, the inverted ConvNext split convolution (ICSC) block is used in the local refinement encoder, and an improved SwinTransformer with DscMLP enhancements (DscSwinTransformer) module is used in the global associative encoder; the local and long-range dependencies at each stage of the two parallel encoders are well extracted through hybrid fusion. Moreover, this paper adds a multimodal cross-collaboration (MCC) module at the beginning of the two encoders to fully exploit the complementary information between modalities and reduce the reliance on a single modality during model training. Coordinate Attention (CA) is used in the bridge between the encoder and decoder to capture important spatial location information. Then, the depthwise separable convolution (DscConv) module is used in the decoder branch to reduce computation while maintaining good feature extraction ability. Finally, this paper uses a hybrid loss function of BCE, Dice, and L2 loss to mitigate the problem of class imbalance. Experimental results show that our model achieves Dice coefficients of 0.897, 0.905, and 0.824 in the whole, core, and enhanced tumor regions, respectively. These results show that the proposed method outperforms several existing methods in the core and enhanced tumor regions.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_96-Brain_Tumor_Segmentation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Deep Reinforcement Learning Training Convergence using Fuzzy Logic for Autonomous Mobile Robot Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141195</link>
        <id>10.14569/IJACSA.2023.0141195</id>
        <doi>10.14569/IJACSA.2023.0141195</doi>
        <lastModDate>2023-11-30T12:00:11.9470000+00:00</lastModDate>
        
        <creator>Abdurrahman bin Kamarulariffin</creator>
        
        <creator>Azhar bin Mohd Ibrahim</creator>
        
        <creator>Alala Bahamid</creator>
        
        <subject>Autonomous navigation; deep reinforcement learning; mobile robots; neuro-symbolic; Fuzzy Logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Autonomous robotic navigation has become a research hotspot, particularly in complex environments, where inefficient exploration can lead to inefficient navigation. Previous approaches often relied on a wide range of assumptions and prior knowledge. Adaptations of machine learning (ML) approaches, especially deep learning, play a vital role in navigation, detection, and prediction in robotic applications, and further development is needed due to the fast growth of urban megacities. The main problem of training convergence time in deep reinforcement learning (DRL) for mobile robot navigation refers to the amount of time it takes the agent to learn an optimal policy through trial and error; it is caused by the need to collect a large amount of data and by the computational demands of training deep neural networks. Meanwhile, the assumption of a reward in DRL for navigation is problematic, as it can be difficult or impossible to define a clear reward function in real-world scenarios, making it challenging to train the agent to navigate effectively. This paper proposes a neuro-symbolic approach that combines the strengths of deep reinforcement learning and fuzzy logic to address these challenges of training time and the reward assumption by incorporating symbolic representations to guide the learning process and by inferring the underlying objectives of the task, which is expected to reduce the training convergence time.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_95-Improving_Deep_Reinforcement_Learning_Training_Convergence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method for Revealing Traffic Patterns in Video Surveillance using a Topic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141194</link>
        <id>10.14569/IJACSA.2023.0141194</id>
        <doi>10.14569/IJACSA.2023.0141194</doi>
        <lastModDate>2023-11-30T12:00:11.9470000+00:00</lastModDate>
        
        <creator>Yao Wang</creator>
        
        <subject>Group thin topic coding; QM_UL video; optical flow; traffic patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Research on video surveillance systems, for instance in intelligent transportation systems, has advanced due to the growing requirement for monitoring, control, and intelligent management. Given the volume of data produced by these systems, one of the next issues is extracting patterns and classifying them automatically. In this study, a topic-model approach is utilized to translate visual patterns into visual words in order to reveal and extract traffic patterns at intersections. The supplied video is first cut into segments. The optical flow technique is then used to determine each clip's optical flow characteristics, which are based on a large amount of local motion vector data, and to translate them into visual words. The thin-group topic coding method is then used to teach traffic patterns to the proposed system using a non-probabilistic topic model. By responding to a behavioral query such as &quot;Where is a vehicle going?&quot;, these patterns convey observable motion that can be utilized to characterize a scene. The results of applying the suggested method to the QM_UL video database demonstrate that it can accurately identify and depict significant traffic patterns such as left turns, right turns, and intersection crossings.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_94-A_New_Method_for_Revealing_Traffic_Patterns_in_Video_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bitcoin Optimized Signal Allocation Strategies using Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141193</link>
        <id>10.14569/IJACSA.2023.0141193</id>
        <doi>10.14569/IJACSA.2023.0141193</doi>
        <lastModDate>2023-11-30T12:00:11.9300000+00:00</lastModDate>
        
        <creator>Sherin M. Omran</creator>
        
        <creator>Wessam H. El-Behaidy</creator>
        
        <creator>Aliaa A. A. Youssif</creator>
        
        <subject>Bitcoin; technical analysis; decomposition; particle swarm optimization; MOEA/D</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Bitcoin is the first and most famous cryptocurrency. It is a virtual currency operated in a decentralized form using a cryptographic structure called a blockchain. Although it has experienced significant market acceptance by traders and investors in recent years, it also suffers from volatility and riskiness. Technical analysis is one of the most powerful tools for allocating trading signals, using algorithmic strategies called technical indicators. In this research, a newly proposed multi-objective decomposition-based particle swarm optimization algorithm is used to find the best parameter values for several technical indicators, which in turn generate the best trading signals for Bitcoin trading. In this context, three conflicting objectives are used: the return on investment, the Sortino ratio, and the number of trades. The proposed algorithm is compared to the original MOEA/D algorithm as well as to the indicators with their original parameters. The results show the superiority of the proposed algorithm over the other benchmarks during both the training and testing periods.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_93-Bitcoin_Optimized_Signal_Allocation_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality In-Use of Mobile Geographic Information Systems for Data Collection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141192</link>
        <id>10.14569/IJACSA.2023.0141192</id>
        <doi>10.14569/IJACSA.2023.0141192</doi>
        <lastModDate>2023-11-30T12:00:11.9170000+00:00</lastModDate>
        
        <creator>Badr El Fhel</creator>
        
        <creator>Ali Idri</creator>
        
        <subject>Mobile GIS for data collection; machine learning; software product quality; ISO/IEC 25010; natural language processing; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Mobile Geographic Information Systems (GIS) play a vital role in data collection, offering diverse functionalities for spatial data handling. Despite advancements, accurately determining the usage environment during development remains challenging. This study uses machine learning and natural language processing to automatically classify user reviews based on the ISO 25010 quality-in-use model. Motivated by the challenge of gauging user experience during development, stakeholders analyze user reviews for insights. An experimental study compares Support Vector Machine (SVM), Random Forest, Logistic Regression, and Naive Bayes classifiers, revealing superior performance by SVM and Random Forest, particularly in efficiency evaluation. The findings underscore the efficacy of SVM in classifying user reviews, emphasizing its effectiveness in evaluating efficiency within mobile GIS applications, and provide valuable insights for stakeholders, contributing to the enhancement of the software quality of mobile GIS apps.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_92-Quality_In_Use_of_Mobile_Geographic_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection and Classification of Soccer Field Objects using YOLOv7 and Computer Vision Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141191</link>
        <id>10.14569/IJACSA.2023.0141191</id>
        <doi>10.14569/IJACSA.2023.0141191</doi>
        <lastModDate>2023-11-30T12:00:11.9000000+00:00</lastModDate>
        
        <creator>Jafar AbuKhait</creator>
        
        <creator>Murad Alaqtash</creator>
        
        <creator>Ahmad Aljaafreh</creator>
        
        <creator>Waleed Othman</creator>
        
        <subject>Soccer game; football; YOLOv7; human detection and classification; ball detection; improved color coherence vector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In the last two decades, many technologies have been deployed in soccer (football) as a result of the huge investment of the Fédération Internationale de Football Association (FIFA). These technologies aim to monitor and track all soccer match objects, including the players and the ball itself, in order to measure player performance and track the players' positions and movements on the field. The latest artificial intelligence and computer vision techniques have recently been used in many systems and deployed in different scenarios, and automatically identifying all field objects is the first step in monitoring soccer games. In this paper, we propose an automated system that detects and tracks the ball and detects and classifies players and referees on the soccer field. The proposed system implements a detection model using the real-time object detector YOLOv7 to detect the ball and all humans on the field, after building a labeled dataset of 1300 different soccer game frames. It also deploys Improved Color Coherence Vector (ICCV) features to classify all humans on the field into five classes (Team1, Team2, Goalkeeper1, Goalkeeper2, and Referee) using the K-Nearest Neighbor algorithm. The proposed system achieves high accuracy in both the detection and classification modules.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_91-Automated_Detection_and_Classification_of_Soccer_Field_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New AHP Improvement using COMET Method Characteristic to Eliminate Rank Reversal Phenomenon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141190</link>
        <id>10.14569/IJACSA.2023.0141190</id>
        <doi>10.14569/IJACSA.2023.0141190</doi>
        <lastModDate>2023-11-30T12:00:11.8830000+00:00</lastModDate>
        
        <creator>Yulistia </creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <creator>Abdiansah</creator>
        
        <subject>Method; combination; C-AHP; rank reversal; elimination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Rank reversal in Multi-Criteria Decision Making (MCDM) is a phenomenon in which adding or deleting an alternative causes a change in the order in which the results are ranked. The criteria weights, established according to how important the decision maker considers each criterion, affect the alternative ranking result in MCDM, and changes in the ranking of decision results, called rank reversal, are not acceptable. Many researchers have created new methods for eliminating rank reversal, but research continues to show that these new methods are not free from rank reversal. The Analytical Hierarchy Process (AHP), the oldest decision support method, has the advantage of producing decisions according to the decision maker's (DM's) preferences, yet it is vulnerable to the rank reversal phenomenon, while the Characteristic Objects Method (COMET) is claimed to be free of rank reversal. This paper discusses how integrating COMET into AHP, specifically by adding the generation of characteristic values and characteristic objects to the AHP phases, affects digital marketing strategy decision-making for private universities in Indonesia, especially in the city of Palembang. The combination of COMET and AHP is tested with several testing tools: case study testing, accuracy testing, and sensitivity analysis testing. The resulting combination, named C-AHP, takes the DM's preference into account for the criteria weights, and its generation of alternative comparisons based on criteria or other attributes makes AHP free from rank reversal.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_90-New_AHP_Improvement_using_COMET_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bidirectional Long Short-Term Memory for Analysis of Public Opinion Sentiment on Government Policy During the COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141189</link>
        <id>10.14569/IJACSA.2023.0141189</id>
        <doi>10.14569/IJACSA.2023.0141189</doi>
        <lastModDate>2023-11-30T12:00:11.8700000+00:00</lastModDate>
        
        <creator>Intan Nurma Yulita</creator>
        
        <creator>Ahmad Faaiz Al-Auza’i</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Asep Sholahuddin</creator>
        
        <creator>Firman Ardiansyah</creator>
        
        <creator>Indra Sarathan</creator>
        
        <creator>Yusa Djuyandi</creator>
        
        <subject>Sentiment analysis; COVID-19; BiLSTM; deep learning; government policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>One of the initiatives adopted by the Indonesian government to combat the spread of COVID-19 in Indonesia is Community Activities Restrictions Enforcement. Many public opinions emerged, both for and against this policy. With so many comments posted every second, it is certainly not easy to analyze them by reading each one individually, and the task necessitates computer applications. Therefore, this study was conducted to produce an application that can help classify public sentiment on the policy, expressed through social media, namely Twitter, into three classes: positive, neutral, and negative. The method used in this research is bidirectional long short-term memory (BiLSTM), a deep learning algorithm. The model is trained on a dataset of 10,486 tweets and achieves an F1-score of 76.67%. Thus, the model can be used to analyze public sentiment when the same policy is enforced and to determine public acceptance of the policy, so the system created in this research can serve as evaluation material for the government when reviewing the policy for future implementation. However, this study concentrates on how to develop the sentiment analysis system and does not examine how the community responds to government policy.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_89-Bidirectional_Long_Short_Term_Memory_for_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Steganography Method for Hiding Text into RGB Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141188</link>
        <id>10.14569/IJACSA.2023.0141188</id>
        <doi>10.14569/IJACSA.2023.0141188</doi>
        <lastModDate>2023-11-30T12:00:11.8530000+00:00</lastModDate>
        
        <creator>AL-Hasan Amer Ibrahim</creator>
        
        <creator>Ruaa Shallal Abbas Anooz</creator>
        
        <creator>Mohammed Ghassan Abdulkareem</creator>
        
        <creator>Musatafa Abbas Abbood Albadr</creator>
        
        <creator>Fahad Taha AL-Dhief</creator>
        
        <creator>Yaqdhan Mahmood Hussein</creator>
        
        <creator>Hatem Oday Hanoosh</creator>
        
        <creator>Mohammed Hasan Mutar</creator>
        
        <subject>Steganography techniques; color images; XOR gate; XNOR gate; Huffman technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Nowadays, networks play a significant role in transferring data and knowledge quickly and accurately from sender to receiver. However, data are still not secure enough to be transferred confidentially, and data protection is considered one of the principal challenges in information sharing over communication networks. Steganography techniques were therefore proposed; these are the art of hiding information so that secret text messages cannot be detected by intruders. Nevertheless, most steganography methods embed only a small number of secret-message bits. Moreover, these methods apply a single logic gate for encrypting the secret message. Therefore, this paper proposes a new method for the encryption of secret messages based on the Huffman technique to reduce the secret message dimensions. In addition, the proposed method uses two different logic gates, namely XOR and XNOR, to increase message security. The RGB Lena image is used as the cover image for the secret message. Six experiments were conducted with secret messages of various lengths in bits. The experimental results show that when using the highest number of bits (i.e., 66288), the proposed method achieved 0.0233 MSE, 64.4589 PSNR, 0.9999998 SSIM, and an encryption time of 8.2383. The proposed method is thus able to encrypt secret messages with a high number of bits.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_88-A_New_Steganography_Method_for_Hiding_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FOREX Prices Prediction Using Deep Neural Network and FNF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141187</link>
        <id>10.14569/IJACSA.2023.0141187</id>
        <doi>10.14569/IJACSA.2023.0141187</doi>
        <lastModDate>2023-11-30T12:00:11.8370000+00:00</lastModDate>
        
        <creator>Asmaa M. Moustafa</creator>
        
        <creator>Mohamed Waleed Fakhr</creator>
        
        <creator>Fahima A. Maghraby</creator>
        
        <subject>FOREX prediction; CNN; normalization function; SVR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>One of the largest financial markets in the world is the foreign exchange (FOREX) market. Banks, retail traders, businesses, and individuals trade more than $5.1 trillion in FOREX daily. Predicting prices in advance is very challenging due to the market&#39;s complex, volatile, and highly fluctuating nature. In this study, the new FOREX Normalization Function (FNF) is proposed and used with different models to predict the prices of the AUD/USD, EUR/USD, USD/JPY, CHF/INR, USD/CHF, AUD/JPY, USD/CAD, and GBP/USD pairs. Two models are proposed in this study. The first contains FNF as a normalization and feature-extraction step, followed by a Convolutional Neural Network (CNN). The second utilizes FNF and a Support Vector Regressor (SVR). The forecasts use a one-day timeframe, with predictions made 1, 3, 7, and 15 days ahead. The ability of the proposed method to solve the FOREX prediction problem is demonstrated through experiments on nine real-world datasets from different currencies. The models are evaluated using Mean Absolute Error (MAE) and Mean Squared Error (MSE). Applying the presented models to the nine datasets improved MAE by an average of between 0.5% and 58%.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_87-FOREX_Prices_Prediction_Using_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Engagement of Children with Dyslexia Through Tangible User Interface: An Experiment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141186</link>
        <id>10.14569/IJACSA.2023.0141186</id>
        <doi>10.14569/IJACSA.2023.0141186</doi>
        <lastModDate>2023-11-30T12:00:11.8230000+00:00</lastModDate>
        
        <creator>Siti Nurliana Jamali</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <creator>Azrina Kamaruddin</creator>
        
        <creator>Saadah Hassan</creator>
        
        <subject>Dyslexia; Tangible User Interface; mobile application; user centered design; engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This paper presents the evaluation of a mobile application employing Tangible User Interface (TUI) technology to enhance the educational engagement of children with dyslexia. The primary objective of this application is to assist these children in overcoming challenges related to reading, spelling, pronunciation, and writing, issues often associated with lower self-esteem and dissatisfaction in an academic setting. The study adopts a User-Centered Design (UCD) approach, focusing on the specific needs and preferences of children with dyslexia during development. The evaluation involved 30 children with dyslexia, divided into two groups: a control group using the non-tangible DisleksiaBelajar mobile app (DB) and a treatment group using the DisleksiaBelajar 3D Tangible (DB3dT) app, which incorporates tangible elements. Results indicated that the DB3dT app achieved significantly higher usability scores (79.5%) than the DisleksiaBelajar app (51%). Furthermore, the treatment group surpassed the control group in learning performance. In summary, the evaluation demonstrated that integrating tangible elements into the DB3dT app notably enhanced the learning experience for children with dyslexia compared to the non-tangible DisleksiaBelajar app. The children exhibited increased engagement and a willingness to repeat activities, suggesting potential advancements in learning outcomes and performance.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_86-Learning_Engagement_of_Children_with_Dyslexia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hotspot Identification Through Pick-Up and Drop-Off Analysis of Ride-Hailing Transport Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141185</link>
        <id>10.14569/IJACSA.2023.0141185</id>
        <doi>10.14569/IJACSA.2023.0141185</doi>
        <lastModDate>2023-11-30T12:00:11.8070000+00:00</lastModDate>
        
        <creator>Ragil Saputra</creator>
        
        <creator>Suprapto</creator>
        
        <creator>Agus Sihabudin</creator>
        
        <subject>Hotspot identification; ride-hailing; transportation; PUDO location; clustering analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Extracting hotspots in urban traffic networks is important for improving driver route efficiency. This research aims to identify hotspot pick-up and drop-off (PUDO) areas in ride-hailing transportation services using a clustering approach. However, applying clustering algorithms to trajectory data in Global Positioning System (GPS) coordinates poses challenges. This research therefore proposes a modification of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm that considers the radius from the cluster center to determine the presence of amenities around the cluster. We used a dataset containing 55,988 trip trajectories of Grab drivers over a two-week period in Jakarta. A preliminary statistical analysis was carried out to understand the distribution of trips. Next, we identified the PUDO point of each trip for use in the clustering analysis. The research explores various parameters and settings of the clustering method and their impact on the results. The study found that the clustering results are sensitive to parameter selection, including the epsilon radius and the minimum number of points needed to form a cluster. With the best parameters (eps: 0.25, minpts: 100), the pick-up (PU) location analysis produced 17 clusters with a silhouette coefficient of 0.752, while the drop-off (DO) location analysis produced 18 clusters with a silhouette coefficient of 0.694. Overall, the research highlights the potential of clustering analysis for ride-hailing transportation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_85-Hotspot_Identification_Through_Pick_Up.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Enhancement using Fully Convolutional UNET and Gated Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141184</link>
        <id>10.14569/IJACSA.2023.0141184</id>
        <doi>10.14569/IJACSA.2023.0141184</doi>
        <lastModDate>2023-11-30T12:00:11.7900000+00:00</lastModDate>
        
        <creator>Danish Baloch</creator>
        
        <creator>Sidrah Abdullah</creator>
        
        <creator>Asma Qaiser</creator>
        
        <creator>Saad Ahmed</creator>
        
        <creator>Faiza Nasim</creator>
        
        <creator>Mehreen Kanwal</creator>
        
        <subject>Speech enhancement; speech denoising; deep neural network; raw waveform; fully convolutional neural network; gated linear unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Speech enhancement aims to improve audio intelligibility by reducing the background noise that often degrades the quality and intelligibility of speech. This paper brings forward a deep learning approach for suppressing background noise in a speaker&#39;s voice. Noise is a complex nonlinear function, so classical techniques such as Spectral Subtraction and Wiener filtering are not well suited to non-stationary noise removal. The audio signal is processed as a raw waveform to enable an end-to-end speech enhancement approach. The proposed model&#39;s architecture is a 1-D Fully Convolutional Encoder-to-Decoder Gated Convolutional Neural Network (CNN). The model takes a simulated noisy signal and generates its clean representation. The model is optimized in both the spectral and time domains, using an L1 loss to minimize the error between time and spectral magnitudes. Although trained exclusively on English speech, the generative model can also denoise Urdu speech. Experimental results show that it can generate a clean representation directly from a noisy signal when trained on samples of the Valentini dataset. Performance is evaluated using objective measures such as PESQ (Perceptual Evaluation of Speech Quality) and STOI (Short-Time Objective Intelligibility). The system can be used with recorded videos and as a preprocessor for voice assistants such as Alexa and Siri, sending clear and clean instructions to the device.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_84-Speech_Enhancement_using_Fully_Convolutional_UNET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing IoT Security and Privacy with Claims-based Identity Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141183</link>
        <id>10.14569/IJACSA.2023.0141183</id>
        <doi>10.14569/IJACSA.2023.0141183</doi>
        <lastModDate>2023-11-30T12:00:11.7770000+00:00</lastModDate>
        
        <creator>Mopuru Bhargavi</creator>
        
        <creator>Yellamma Pachipala</creator>
        
        <subject>Internet of Things (IoT); identity management; privacy preservation; access control; security; DCapBAC; CP-ABE; interconnected devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The Internet of Things (IoT) has ushered in a new era of ubiquitous connectivity among devices, necessitating robust identity management (IdM) solutions to address privacy, security, and efficiency challenges. This study delves into various IdM approaches in the context of IoT, examining their implications for privacy preservation, user experience, integration, and efficiency. The paper presents an innovative holistic IdM system that leverages emerging cryptographic technologies and a claims-based approach. This system empowers both users and smart objects to manage data disclosure via partial identities and efficient proof mechanisms, ensuring privacy while facilitating seamless interactions. The proposed IdM system is integrated with Distributed Capability-Based Access Control (DCapBAC) and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to cater to diverse IoT scenarios. A comparative evaluation highlights the limitations of conventional IdM methods and OAuth-based approaches and underscores the superior efficiency of the proposed system. The IdM system stands as a notably efficient solution for ensuring secure, private, and resource-effective interactions within the ever-expanding IoT landscape. As the IoT domain continues to evolve, embracing advanced identity management systems such as the one proposed here becomes indispensable for fostering trust, bolstering security, and optimizing interactions across interconnected devices and services.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_83-Enhancing_IoT_Security_and_Privacy_with_Claims_based_Identity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Alzheimer&#39;s Disease Diagnosis: The Efficacy of the YOLO Algorithm Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141182</link>
        <id>10.14569/IJACSA.2023.0141182</id>
        <doi>10.14569/IJACSA.2023.0141182</doi>
        <lastModDate>2023-11-30T12:00:11.7600000+00:00</lastModDate>
        
        <creator>Tran Quang Vinh</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Machine learning; deep learning; YOLO; alzheimer’s disease; dementia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The diagnosis and early detection of Alzheimer&#39;s Disease (AD) and other forms of dementia have become increasingly crucial as the aging population grows. In recent years, deep learning, particularly the You Only Look Once (YOLO) architecture, has emerged as a promising tool in the field of neuroimaging and machine learning for AD diagnosis. This comprehensive review investigates recent advances in the application of YOLO to AD diagnosis and classification. We scrutinized five research papers that have explored the potential of YOLO, delving into the methodologies, datasets, and results presented. Our review reveals the remarkable strides made in AD diagnosis using YOLO, while also highlighting challenges such as data scarcity and the limited body of research. The paper provides insights into the growing role of YOLO in the early detection of AD and its potential to transform clinical practices in the field. This review aims to inspire further research and innovation to enhance AD diagnosis and, ultimately, patient care.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_82-Enhancing_Alzheimers_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection of Driver and Passenger Without Seat Belt using YOLOv8</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141181</link>
        <id>10.14569/IJACSA.2023.0141181</id>
        <doi>10.14569/IJACSA.2023.0141181</doi>
        <lastModDate>2023-11-30T12:00:11.7430000+00:00</lastModDate>
        
        <creator>Sutikno </creator>
        
        <creator>Aris Sugiharto</creator>
        
        <creator>Retno Kusumaningrum</creator>
        
        <subject>Windshield detection; passenger classification; seat belt classification; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Traffic accident fatalities are a serious concern on a global scale, and one contributing factor is the failure of drivers to adhere to seat belt usage. A notable challenge arises from the limited availability of law enforcement personnel monitoring this issue, creating a compelling need for an automated detection system. Such a system has previously been developed using YOLOv5; however, it suffers from long training and detection times. Therefore, this paper proposes a new system using the YOLOv8 method to detect drivers and passengers who violate seat belt regulations. The proposed system is divided into three subsystems: windshield detection, passenger classification, and seat belt classification. YOLOv8 is the latest version of the YOLO (You Only Look Once) method and has been proven to provide better performance than previous versions. Furthermore, this paper compares five YOLOv8 models, namely YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x. The proposed model is trained and tested using image data collected from several roads in Indonesia. The experimental results show that the YOLOv8s model produced the best mean Average Precision (mAP) of 0.960 for windshield detection. The YOLOv8s-cls and YOLOv8l-cls models achieved the same accuracy of 0.8923 for passenger classification, and the YOLOv8l-cls model produced the best accuracy of 0.8846 for seat belt classification. In addition, the proposed method improves mAP and training time for windshield detection compared to YOLOv5.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_81-Automated_Detection_of_Driver_and_Passenger_Without_Seat_Belt.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Organizing Control Systems for Nonlinear Spacecraft in the Class of Structurally Stable Mappings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141179</link>
        <id>10.14569/IJACSA.2023.0141179</id>
        <doi>10.14569/IJACSA.2023.0141179</doi>
        <lastModDate>2023-11-30T12:00:11.7270000+00:00</lastModDate>
        
        <creator>Orisbay Abdiramanov</creator>
        
        <creator>Daniyar Taiman</creator>
        
        <creator>Mamyrbek Beisenbi</creator>
        
        <creator>Mira Rakhimzhanova</creator>
        
        <creator>Islam Omirzak</creator>
        
        <subject>Impulsive sound; machine learning; deep learning; CNN; LSTM; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent developments within the domain of aerospace engineering, there is a burgeoning interest in the autonomous control of nonlinear spacecraft using advanced methodologies. The present research delves deep into the realm of self-organizing control systems tailored for such nonlinear spacecraft, emphasizing its application within the framework of structurally stable mappings. By harnessing the inherent characteristics of structurally stable mappings — often renowned for their resilience to minor perturbations and local modifications — this research endeavors to design a control mechanism that mitigates the challenges presented by the intrinsic nonlinearity of spacecraft dynamics. Initial findings suggest a commendable enhancement in spacecraft maneuverability and robustness against unforeseen disturbances. Furthermore, the employment of self-organization principles leads to an adaptive and resilient system that can reconfigure its control strategies in real-time, basing decisions on immediate environmental feedback. This adaptability, in essence, mimics biological systems that evolve and adapt in the face of challenges. Such a breakthrough in nonlinear spacecraft control not only widens the horizons for space exploration by making missions safer and more efficient but also contributes foundational knowledge to the broader field of nonlinear dynamic system controls. Researchers and practitioners are encouraged to explore this synergistic combination of self-organization and structurally stable mappings to further harness its potential in diverse arenas beyond aerospace.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_79-Self_Organizing_Control_Systems_for_Nonlinear_Spacecraft.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Offensive Language Detection on Online Social Networks using Hybrid Deep Learning Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141180</link>
        <id>10.14569/IJACSA.2023.0141180</id>
        <doi>10.14569/IJACSA.2023.0141180</doi>
        <lastModDate>2023-11-30T12:00:11.7270000+00:00</lastModDate>
        
        <creator>Gulnur Kazbekova</creator>
        
        <creator>Zhuldyz Ismagulova</creator>
        
        <creator>Zhanar Kemelbekova</creator>
        
        <creator>Sarsenkul Tileubay</creator>
        
        <creator>Boranbek Baimurzayev</creator>
        
        <creator>Aizhan Bazarbayeva</creator>
        
        <subject>Offensive language; machine learning; deep learning; social media; detection; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In the digital era, online social networks (OSNs) have revolutionized communication, creating spaces for vibrant public discourse. However, these platforms also harbor offensive language that proliferates hate speech, cyberbullying, and discrimination, significantly undermining the quality of online interactions and posing severe social implications. This research paper introduces a sophisticated approach to offensive language detection on OSNs, employing a novel Hybrid Deep Learning Architecture (HDLA). The urgency of addressing offensive content is juxtaposed with the challenges inherent in accurately identifying nuanced communications, thus necessitating an advanced model that transcends the limitations of traditional natural language processing techniques. The proposed HDLA model synergistically integrates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) networks, capitalizing on the strengths of both methodologies. While the CNN component excels in the hierarchical extraction of spatial features within text data, identifying offensive patterns often concealed in structural nuances, the LSTM network, adept at processing sequential data, captures the contextual dependencies in user posts over time. This duality ensures a comprehensive analysis of complex linguistic constructs, enhancing detection accuracy for both overt and covert offensive content. Our research meticulously evaluates the HDLA model using extensive, multi-source datasets reflective of diverse OSN environments, establishing benchmarks against prevailing deep learning models. Results indicate a substantial improvement in precision, recall, and F1-score, demonstrating the model&#39;s efficacy in identifying offensive language amidst varying degrees of subtlety and complexity. Furthermore, the model maintains high interpretability, providing insights into the intricate mechanisms of offensive content propagation. Our findings underscore the potential of HDLA in fostering healthier online communities by efficiently curating digital content, thereby upholding the integrity of digital communication spaces.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_80-Offensive_Language_Detection_on_Online_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Ransomware Impact on Android Systems using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141178</link>
        <id>10.14569/IJACSA.2023.0141178</id>
        <doi>10.14569/IJACSA.2023.0141178</doi>
        <lastModDate>2023-11-30T12:00:11.7130000+00:00</lastModDate>
        
        <creator>Anfal Sayer M. Al-Ruwili</creator>
        
        <creator>Ayman Mohamed Mostafa</creator>
        
        <subject>Ransomware; machine learning; malware detection; phishing detection; spam filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Ransomware is a significant threat to Android systems. Traditional methods of detection and prediction have been used, but with advances in technology and artificial intelligence, new and innovative techniques have been developed. Machine learning (ML) algorithms, a branch of artificial intelligence, offer several important capabilities, including phishing detection, malware detection, and spam filtering. ML algorithms can also detect ransomware by learning the patterns and behaviors associated with ransomware attacks, enabling detection systems that are more effective than traditional signature-based methods. The selection of the dataset is a crucial step in developing an ML-based ransomware detection system: the dataset should be large, diverse, and representative of the real-world threats the system will face, and it should include a variety of features that are informative for ransomware detection. This research presents a survey of ML algorithms for ransomware detection and prediction. The authors discuss the advantages of ML-based ransomware detection systems over traditional signature-based methods, as well as the importance of selecting a large, diverse, and representative dataset for training. Two datasets, the SEL and ransomware datasets, are used in the conducted experiments. The experiments are repeated with different splitting ratios to identify the overall performance of each ML algorithm. The results are also compared to recent ransomware detection methods and show the high performance of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_78-Analysis_of_Ransomware_Impact_on_Android_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Movies Recommendation System Based on Demographics and Facial Expression Analysis using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141177</link>
        <id>10.14569/IJACSA.2023.0141177</id>
        <doi>10.14569/IJACSA.2023.0141177</doi>
        <lastModDate>2023-11-30T12:00:11.6970000+00:00</lastModDate>
        
        <creator>Mohammed Balfaqih</creator>
        
        <subject>Recommender system; movies recommendation; emotion prediction; k-means clustering; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Cinemas and digital platforms offer an extensive array of content requiring tailored filtering to cater to individual preferences. While recommender systems prove invaluable for this purpose, conventional movie recommendations tend to emphasize specific attributes, leading to a reduction in overall accuracy and reliability. Notably, the extraction process of facial temporal attributes exhibits a suboptimal level of accuracy, thereby influencing the classification of attributes and the overall accuracy of the recommendation system. This article introduces a hybrid recommender system that seamlessly integrates collaborative filtering and content-based methodologies. The system takes into account crucial factors such as age, gender, emotion, and genre attributes. Films undergo an initial categorization based on genre, with a subsequent selection of the most representative genres to ascertain group preferences. Ratings for these selected movies are then predicted and organized in descending order. Employing Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, the system achieves real-time extraction of facial attributes, particularly enhancing the accuracy of emotion attribute extraction through sequential processing. The CNN model demonstrates a commendable 55.3% accuracy score, the LSTM model excels with a 59.1% score, while the combined CNN and LSTM models showcase an impressive 60.2% accuracy. The performance of the recommendation system is rigorously evaluated using standard metrics, including precision, recall, and F1-measure. Results underscore the superior performance of the proposed system across various testing scenarios compared to the established benchmark. Nevertheless, it is noteworthy that the precision of the benchmark marginally surpasses the proposed system in the age groups of 8-14 and 15-24.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_77-A_Hybrid_Movies_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Multimodal Medical Data and a Hybrid Optimization Model to Improve Diabetes Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141176</link>
        <id>10.14569/IJACSA.2023.0141176</id>
        <doi>10.14569/IJACSA.2023.0141176</doi>
        <lastModDate>2023-11-30T12:00:11.6800000+00:00</lastModDate>
        
        <creator>A. Leela Sravanthi</creator>
        
        <creator>Sameh Al-Ashmawy</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>K. Aanandha Saravanan</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <subject>Diabetes prediction; multimodal medical data; binary grey wolf optimization; crow search optimization; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Diabetes is a major health issue that affects people all over the world. Accurate early diagnosis is essential to enabling adequate therapy and prevention actions. Through the use of electronic health records and recent advancements in data analytics, there is growing interest in merging multimodal medical data to increase the precision of diabetes prediction. This study presents a novel hybrid optimisation strategy that seamlessly combines machine learning techniques to improve the accuracy of diabetes prediction. The study employs a collaborative learning technique to merge multiple models in a way that maximises efficiency while enhancing prediction accuracy, and makes use of two separate Pima Indians diabetes datasets. A feature selection process is used to streamline error-free classification. Binary Grey Wolf-based Crow Search Optimisation (BGW-CSO), produced by merging the Binary Grey Wolf Optimisation Algorithm (BGWO) with Crow Search Optimisation (CSO), is introduced to further enhance feature selection. This hybrid optimisation approach addresses the challenges of the high-dimensional feature space and enhances the generalisation capability of the system. The Support Vector Machine (SVM) method is used to analyse the selected features, and the BGW-CSO technique improves conventional SVM performance by optimising the number of hidden neurons within the SVM. The proposed method is implemented in Python. The suggested BGW-CSO-SVM approach outperforms existing methods, such as the Soft Voting Classifier, Random Forest, DMP_MI, and Bootstrap Aggregation, with a remarkable accuracy of 96.62%, an average improvement of around 16% over the compared methods. Comparative evaluations demonstrate the suggested approach&#39;s improved performance and its potential for real-world use in healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_76-Utilizing_Multimodal_Medical_Data_and_a_Hybrid_Optimization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficiency Analysis of Firefly Optimization-Enhanced GAN-Driven Convolutional Model for Cost-Effective Melanoma Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141175</link>
        <id>10.14569/IJACSA.2023.0141175</id>
        <doi>10.14569/IJACSA.2023.0141175</doi>
        <lastModDate>2023-11-30T12:00:11.6670000+00:00</lastModDate>
        
        <creator>Lakshmi K</creator>
        
        <creator>Sridevi Gadde</creator>
        
        <creator>Murali Krishna Puttagunta</creator>
        
        <creator>G. Dhanalakshmi</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Melanoma; cost effective analysis; long short-term memory; firefly optimization; generative adversarial network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Early identification is essential for successful treatment of melanoma, a potentially fatal type of skin cancer. This work takes a fresh approach to addressing the urgent need for an accurate and economical melanoma classification system. Inaccuracy, inefficiency, and high resource usage are common problems with current techniques. To overcome these limitations, this study used a model that incorporates a number of innovative methods. To improve data quality, pre-processing with a Gaussian filter is first applied, and the dataset is augmented with Generative Adversarial Networks (GAN). To extract and classify features, the proposed model makes use of Convolutional Long Short-Term Memory (LSTM) networks. The model performs better and is substantially more accurate when Firefly Optimization is used. A cost-effective analysis examines the model&#39;s ability to lower healthcare costs, especially when detecting melanoma, including situations involving bleeding lesions. This analysis allows the proposed FFO Enhanced Conv-LSTM to be compared favourably to deep convolutional neural networks (DCNN), showcasing its promise for melanoma classification accuracy and healthcare resource allocation optimization. Python was used as the implementation tool. The suggested model achieves a 99.1% accuracy rate, which is better than current techniques. A comparative study with well-known models such as ResNet-50, MobileNet, and DenseNet-169 highlights the notable enhancement provided by the proposed Firefly Optimization-enhanced Conv-LSTM method, which shows an average gain of roughly 5.6% in accuracy over these existing approaches. This model offers a promising advancement in the precise and economical classification of melanoma due to its high accuracy and cost-effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_75-Efficiency_Analysis_of_Firefly_Optimization_Enhanced_GAN_Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Insights of Bat Algorithm-Driven XGB-RNN (BARXG) for Optimal Fetal Health Classification in Pregnancy Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141174</link>
        <id>10.14569/IJACSA.2023.0141174</id>
        <doi>10.14569/IJACSA.2023.0141174</doi>
        <lastModDate>2023-11-30T12:00:11.6670000+00:00</lastModDate>
        
        <creator>Suresh Babu Jugunta</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <creator>Sridevi Gadde</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>Namrata Verma</creator>
        
        <creator>Farhat Embarak</creator>
        
        <subject>BAT; fetal health; pregnancy monitoring; RNN; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Pregnancy monitoring plays a pivotal role in ensuring the well-being of both the mother and the fetus. Accurate and timely classification of fetal health is essential for early intervention and appropriate medical care. This work presents a novel method for optimal fetal health classification that effectively combines the Bat Algorithm (BA) with a hybrid model of Recurrent Neural Networks (RNN) and Extreme Gradient Boosting (XGB). The Bat Algorithm, inspired by the echolocation behaviour of bats, is employed to optimize the hyperparameters of the XGB-RNN hybrid model, enabling it to adapt dynamically to the complexities of fetal health data and enhancing its performance and predictive accuracy. The XGB-RNN hybrid model is designed to capitalize on the strengths of both algorithms: XGB provides superior feature selection and gradient boosting capabilities, while RNN excels in capturing temporal dependencies in the data. Combining these techniques effectively addresses the difficulties involved in classifying fetal health in the context of pregnancy monitoring. Python is used to implement the proposed framework. To validate the performance of the proposed approach, extensive experiments were conducted on a comprehensive dataset comprising a wide range of physiological parameters related to fetal health. The Bat Algorithm-driven XGB-RNN (BARXG) performs outstandingly on fetal health classification, surpassing other classifiers in terms of accuracy, sensitivity, and specificity. The proposed BARXG model achieves greater accuracy (98.2%) than existing techniques, including SVM, Random Forest Classifier, LGBM, Voting Classifier, and EHG.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_74-Exploring_the_Insights_of_Bat_Algorithm_Driven_XGB_RNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unleashing the Potential of Artificial Bee Colony Optimized RNN-Bi-LSTM for Autism Spectrum Disorder Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141173</link>
        <id>10.14569/IJACSA.2023.0141173</id>
        <doi>10.14569/IJACSA.2023.0141173</doi>
        <lastModDate>2023-11-30T12:00:11.6500000+00:00</lastModDate>
        
        <creator>Suresh Babu Jugunta</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>K. Aanandha Saravanan</creator>
        
        <creator>Kanakam Siva Rama Prasad</creator>
        
        <creator>S. Koteswari</creator>
        
        <creator>Venubabu Rachapudi</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <subject>Autism spectrum disorder; artificial bee colony; recurrent neural network; bidirectional long short-term network; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The diagnosis of Autism Spectrum Disorder (ASD) is a crucial, drawn-out, and sometimes subjective procedure that calls for a high level of knowledge. Automating this diagnostic procedure appears possible thanks to recent developments in machine learning techniques. This paper presents a unique method for improving the performance of a Recurrent Neural Network with a Bidirectional Long Short-Term Memory (RNN-BiLSTM) model for ASD diagnosis by utilizing the power of Artificial Bee Colony (ABC) optimization. The implementation is carried out in Python, ensuring accessibility and adaptability in clinical contexts. The suggested approach is thoroughly contrasted with current techniques, such as ABC optimization for feature extraction, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) models, and Transfer Learning, in order to highlight its effectiveness. The outcomes demonstrate the superiority of the RNN-BiLSTM over other methods, with much greater accuracy and precision. Combining RNN-BiLSTM with ABC optimization demonstrates not just cutting-edge accuracy but also excellent interpretability. By using this sophisticated model&#39;s capabilities, an outstanding diagnosis accuracy of 99.12% is attained, which is 2.77% higher than previous approaches. The model helps physicians comprehend the diagnosis process by highlighting important characteristics and trends that influence its conclusion. Additionally, it lessens the subjectivity and unpredictability involved in human diagnosis, which may result in quicker and more accurate diagnoses of ASD. The research emphasizes how well the Artificial Bee Colony optimized RNN-BiLSTM model diagnoses autism spectrum disorder. By integrating AI-driven diagnostic tools into clinical practice, this research improves early diagnosis and intervention for ASD.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_73-Unleashing_the_Potential_of_Artificial_Bee_Colony_Optimized_RNN_Bi_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Unmanned Aircraft Development Model in Indonesia using the AHP Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141172</link>
        <id>10.14569/IJACSA.2023.0141172</id>
        <doi>10.14569/IJACSA.2023.0141172</doi>
        <lastModDate>2023-11-30T12:00:11.6330000+00:00</lastModDate>
        
        <creator>Agus Bayu Utama</creator>
        
        <creator>Siswo Hadi Sumantri</creator>
        
        <creator>Romie Oktovianus Bura</creator>
        
        <creator>Gita Amperiawan</creator>
        
        <subject>Analytical Hierarchy Process (AHP); decision-making; development model; medium altitude long endurance (MALE); unmanned aircraft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Countries worldwide are attempting to acquire or create Class 3 unmanned aircraft as part of their armies’ primary weapons systems. The development of medium altitude long endurance (MALE) unmanned aircraft in Indonesia forms part of the national strategic program. Based on documentation studies, three alternative MALE-class unmanned aircraft development models were identified. This study aims to determine the most appropriate unmanned aircraft development model for the MALE class for Indonesia’s current situation. This will aid decision-making by the government and stakeholders related to the drone development model. The analytical hierarchy process (AHP) method was used to analyze the decision-making for the selection of an unmanned aircraft development model. The study began with a questionnaire survey of 11 experts from various institutions. The results show that the priority criterion should be the benefits obtained, followed by the opportunity and budget criteria, and, finally, the risk. The consortium model, which had the highest score of 0.548, is the most suitable for Indonesia’s development of MALE-class unmanned aircraft. The results of the study are expected to provide useful input for AHP researchers, government institutions, and stakeholders.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_72-Selection_of_Unmanned_Aircraft_Development_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strengthening AES Security through Key-Dependent ShiftRow and AddRoundKey Transformations Utilizing Permutation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141171</link>
        <id>10.14569/IJACSA.2023.0141171</id>
        <doi>10.14569/IJACSA.2023.0141171</doi>
        <lastModDate>2023-11-30T12:00:11.6200000+00:00</lastModDate>
        
        <creator>Tran Thi Luong</creator>
        
        <subject>AES; ShiftRow; AddRoundKey; dynamic AES; key-dependent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>AES (Advanced Encryption Standard) is a widely applied block cipher standard in the United States, used in various security applications today. Currently, there are numerous research endeavors aimed at making AES block ciphers dynamic to improve their security against contemporary strong attacks. The most common dynamic approach involves the dynamization of AES block transformations, including SubByte, ShiftRow, AddRoundKey, and MixColumn operations. The combination of these transformations has also been explored and proposed. However, to the best of our knowledge, the dynamic combination of AddRoundKey and ShiftRow transformations remains unexplored. Therefore, in this paper we introduce algorithms for generating key-dependent AddRoundKey and ShiftRow transformations based on permutations. Subsequently, these key-dependent transformations are applied to AES to create dynamic AES block ciphers. Security analysis and evaluation of NIST’s statistical criteria are performed, and the entropy of AES and dynamic AES is assessed. From our findings, it is evident that dynamic AES block ciphers can significantly enhance AES security and meet stringent randomness criteria, similar to AES.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_71-Strengthening_AES_Security_Through_Key_Dependent_ShiftRow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Data Poisoning Attacks using Federated Learning with Deep Neural Networks: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141170</link>
        <id>10.14569/IJACSA.2023.0141170</id>
        <doi>10.14569/IJACSA.2023.0141170</doi>
        <lastModDate>2023-11-30T12:00:11.5870000+00:00</lastModDate>
        
        <creator>Hatim Alsuwat</creator>
        
        <subject>Poisoning attacks; deep learning; network security; data classification; malicious data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The advent of intelligent networks powered by machine learning (ML) methods over the past few years has dramatically facilitated various facets of human lives, including healthcare, transportation, and entertainment. However, the use of ML in intelligent networks raises serious concerns about privacy and security, particularly in the context of data poisoning attacks. In order to address these concerns, this research paper presents a novel technique for detecting data poisoning attacks in intelligent networks, focusing on addressing privacy and security concerns associated with the use of machine learning (ML) methods. The research combines federated learning and deep learning approaches to analyze network data in a distributed and privacy-preserving manner. The technique employs a federated neural network to identify malicious data by analyzing network traffic, leveraging the power of Bayesian convolutional neural networks for efficient and accurate detection. The research follows an empirical approach, conducting experimental analyses to evaluate the proposed technique&#39;s effectiveness in terms of network security and data classification. The results demonstrate significant performance, including high throughput, quality of service, transmission rate, and low root mean square error for network security. Furthermore, the technique achieves impressive accuracy, recall, precision and malicious data analysis for data detection. The findings of this research contribute to enhancing the security and integrity of intelligent networks, benefiting various stakeholders, including network administrators, data privacy advocates, and users relying on secure network communication.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_70-Detecting_Data_Poisoning_Attacks_using_Federated_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Separability-based Quadratic Feature Transformation to Improve Classification Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141169</link>
        <id>10.14569/IJACSA.2023.0141169</id>
        <doi>10.14569/IJACSA.2023.0141169</doi>
        <lastModDate>2023-11-30T12:00:11.5870000+00:00</lastModDate>
        
        <creator>Usman Sudibyo</creator>
        
        <creator>Supriadi Rustad</creator>
        
        <creator>Pulung Nurtantio Andono</creator>
        
        <creator>Ahmad Zainul Fanani</creator>
        
        <subject>Separability; feature transformation; quadratic function; fisher’s criterion; fisher score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Feature transformation is an essential part of data preprocessing to improve the predictive performance of machine learning (ML) algorithms. Box-Cox transformation with the goal of separability is proven to align with the performance improvement of ML algorithms. However, the features mapped using Box-Cox transformation preserve the order of the data, so it is ineffective when used to improve the separability of multimodal distributed features. This research aims to build a feature transformation method using quadratic functions to improve class separability that can adaptively change the order of the data when necessary. Fisher score (Fs) measures the separability level by maximizing the Fisher&#39;s Criteria of the quadratic function. In addition to increasing the Fs value of each feature, this method can also make the feature more informative, as evidenced by the increasing value of information gain, information gain ratio, Gini decrease, ANOVA, Chi-Square, reliefF, and FCBF. The increase in Fs is particularly significant for bimodally distributed features. Experiments were conducted on 11 public datasets with two statistical-based machine learning algorithms representing linear and nonlinear ML algorithms to validate the success of this method, namely LDA and QDA. The experimental results show an improvement in accuracy in almost all datasets and ML algorithms, where the highest accuracy improvement is 0.268 for LDA and 0.188 for QDA.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_69-Separability_based_Quadratic_Feature_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Machine Learning Models to Electronic Health Records for Chronic Disease Diagnosis in Kuwait</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141168</link>
        <id>10.14569/IJACSA.2023.0141168</id>
        <doi>10.14569/IJACSA.2023.0141168</doi>
        <lastModDate>2023-11-30T12:00:11.5730000+00:00</lastModDate>
        
        <creator>Talal M. Alenezi</creator>
        
        <creator>Taiseer H. Sulaiman</creator>
        
        <creator>Amr M. AbdelAziz</creator>
        
        <subject>Chronic diseases; Electronic Health Records (EHR); machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The leading cause of death nowadays is chronic disease. As a result, personal wellbeing has received a considerable boost as a healthcare preventative strategy. A notable development in data-driven healthcare technology is the creation of prediction models for chronic diseases. In this setting, computational intelligence is used to analyze electronic health data to provide clinicians with knowledge that will help them make more informed decisions about prognoses or therapies. In this study, various classification algorithms were implemented, namely Decision Tree, K-Nearest Neighbors, Logistic Regression, Multilayer Perceptron, Na&#239;ve Bayes, Random Forest, and Support Vector Machines, to examine the medical records of patients in Kuwait with chronic conditions. The Support Vector Machines classifier was the best for predicting diabetes and chronic kidney disease: for diabetes, it achieved 88.5% accuracy and a 93.6% f1-score, while for kidney disease it achieved 94.9% accuracy and a 92.6% f1-score. For predicting heart disease, MLP was the best, achieving 84.7% accuracy and an 87.8% f1-score.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_68-Applying_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Improved Method to Remove Pectoral Muscle for Better Diagnosis of Breast Cancer in Mammography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141167</link>
        <id>10.14569/IJACSA.2023.0141167</id>
        <doi>10.14569/IJACSA.2023.0141167</doi>
        <lastModDate>2023-11-30T12:00:11.5570000+00:00</lastModDate>
        
        <creator>Golnoush Abaei</creator>
        
        <creator>Zahra Rezaei</creator>
        
        <creator>Usama Qasim Mian</creator>
        
        <creator>Yasir Azhari Abdalgadir Abdalla</creator>
        
        <creator>Nitin Mathew</creator>
        
        <creator>Leong Yi Gan</creator>
        
        <subject>Breast cancer; preprocessing pectoral muscle segmentation; level set algorithm; region growing algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Mammography is a non-invasive method to study breast tissues for abnormalities. Computer-aided diagnosis (CAD) can automate the process of diagnosing malignant and benign tumors accurately. However, accurate results can be hampered by the presence of the pectoral muscle, which has a similar opacity to the breast tissue area. Detecting and removing pectoral muscles is not trivial due to various factors, and there are artifacts present near the pectoral muscle that can hamper proper segmentation. Given the significance of the topic, it is crucial to devise an accurate method for automatically detecting the muscle area in a mammography image and eliminating it from the rest of the image. This process of removing the pectoral muscle from the breast image can aid in precise segmentation and diagnosis of the tumor area, ultimately leading to faster diagnosis and better outcomes for patients. This study examined two segmentation algorithms, Level Set and Region Growing, for segmenting the pectoral muscle. An Improved Region Growing-based (IRG) algorithm was also proposed and showed promising results in automatically segmenting the pectoral muscle. All algorithms were tested on the MIAS dataset, and radiologists evaluated the results, showing an accuracy rating of up to 83% for IRG. The results indicated that IRG outperformed Level Set considerably due to many optimizations and modifications. IRG can be used as part of the preprocessing unit of an automated cancer diagnosis system.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_67-Developing_an_Improved_Method_to_Remove_Pectoral_Muscle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Detection System using Deep Learning Based on Fusion Features and Statistical Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141165</link>
        <id>10.14569/IJACSA.2023.0141165</id>
        <doi>10.14569/IJACSA.2023.0141165</doi>
        <lastModDate>2023-11-30T12:00:11.5400000+00:00</lastModDate>
        
        <creator>Suleyman A. AlShowarah</creator>
        
        <subject>Breast cancer detection; breast cancer classification; deep learning; vgg-19; breast tumor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Breast cancer is considered the second leading cause of death for women. The earlier it is diagnosed, the easier patients can recover. The need for studies that detect this kind of cancer easily and accurately stems from the exponentially growing number of breast cancer patients. This study investigates the use of a deep learning model for breast cancer detection using the VGG-19 architecture and ultrasound images. Two layers of the VGG-19 structure were used, namely fc6 and fc7. Based on these two layers, new datasets were created by applying statistical operations. These datasets were employed as input for the following machine learning classifiers: K-Nearest Neighbors, Random Forest, Na&#239;ve Bayes, and Decision Tree. Data augmentation was used to increase the dataset size for better CNN learning. Random Forest achieved high accuracy (88.63%), precision (0.88), recall (0.88), and F-measure (0.88). The classification accuracy in the three scenarios is broadly similar, which shows that breast cancer can be detected even when the training dataset is small.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_65-Breast_Cancer_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Threats from Live Videos using Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141166</link>
        <id>10.14569/IJACSA.2023.0141166</id>
        <doi>10.14569/IJACSA.2023.0141166</doi>
        <lastModDate>2023-11-30T12:00:11.5400000+00:00</lastModDate>
        
        <creator>Rawan Aamir Mushabab AlShehri</creator>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>Deep learning; machine learning; object detection; threat detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Threat detection is an important area of research, particularly in security and surveillance applications. This research focuses on developing a threat detection system using deep learning (DL) techniques. The system aims to detect potential threats in real-time video streams, enabling early identification and timely response to potential security risks. The study uses two state-of-the-art DL models, MobileNet and YOLOv5, to train the object detection system. The TensorFlow object detection API is employed for training and evaluating the models. The results indicate that MobileNet outperforms YOLOv5 in terms of detection accuracy, speed, and overall performance. The selection of MobileNet over YOLOv5 is justified by several factors. First, MobileNet has a lightweight architecture, making it suitable for real-time applications where processing speed is critical. Second, it is memory-efficient, enabling it to operate effectively on low-resource devices. Third, MobileNet provides high accuracy in detecting objects of different sizes and shapes. The performance of the threat detection system was evaluated using various metrics, including mean average precision (mAP), mean average recall (mAR), and Intersection over Union (IoU). The results show that the system achieved high accuracy in detecting threats, with an overall mAP of 0.9125, mAR of 0.9565, and IoU of 0.9045. The research demonstrates the superiority of MobileNet over YOLOv5 in terms of performance, and the results validate the effectiveness of the proposed system in detecting potential threats in real-time video streams.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_66-Detecting_Threats_from_Live_Videos_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional LSTM Network for Real-Time Impulsive Sound Detection and Classification in Urban Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141164</link>
        <id>10.14569/IJACSA.2023.0141164</id>
        <doi>10.14569/IJACSA.2023.0141164</doi>
        <lastModDate>2023-11-30T12:00:11.5270000+00:00</lastModDate>
        
        <creator>Aigerim Altayeva</creator>
        
        <creator>Nurzhan Omarov</creator>
        
        <creator>Sarsenkul Tileubay</creator>
        
        <creator>Almash Zhaksylyk</creator>
        
        <creator>Koptleu Bazhikov</creator>
        
        <creator>Dastan Kambarov</creator>
        
        <subject>Deep learning; CNN; LSTM; hybrid model; ANN; impulsive sound</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent years, the escalating challenges of noise pollution in urban environments have necessitated the development of more sophisticated sound detection and classification systems. This research introduces a novel approach employing a Convolutional Long Short-Term Memory (ConvLSTM) network tailored for real-time impulsive sound detection in metropolitan landscapes. Impulsive sounds, characterized by sudden onsets and short durations—such as honking, abrupt shouts, or breaking glass—are inherently sporadic but can significantly impact urban soundscapes and the well-being of city dwellers. Traditional sound detection mechanisms often falter in identifying these ephemeral noises amidst the cacophony of urban life. The ConvLSTM network proposed in this study amalgamates the spatial feature learning capabilities of Convolutional Neural Networks (CNN) with the temporal sequence retention attributes of LSTM, culminating in an architecture that excels in both sound detection and classification tasks. The model was trained and evaluated on a comprehensive dataset sourced from various urban settings and demonstrated commendable proficiency in discerning impulsive sounds with minimal false positives. Furthermore, the system&#39;s real-time processing capabilities ensure timely interventions, paving the way for smarter noise management in cities. This research not only propels the frontier of impulsive sound detection but also underscores the potential of ConvLSTM in addressing multifaceted urban challenges.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_64-Convolutional_LSTM_Network_for_Real_Time_Impulsive_Sound_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ascertaining Speech Emotion using Attention-based Convolutional Neural Network Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141163</link>
        <id>10.14569/IJACSA.2023.0141163</id>
        <doi>10.14569/IJACSA.2023.0141163</doi>
        <lastModDate>2023-11-30T12:00:11.5100000+00:00</lastModDate>
        
        <creator>Ashima Arya</creator>
        
        <creator>Vaishali Arya</creator>
        
        <creator>Neha Kohli</creator>
        
        <creator>Namrata Sukhija</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Salil Bharany</creator>
        
        <creator>Faisal Binzagr</creator>
        
        <creator>Farkhana Binti Muchtar</creator>
        
        <creator>Mohamed Mamoun</creator>
        
        <subject>Convolutional neural network; emotions; speech; transfer learning models; spectrogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Conversation among people is a profuse form of interaction that also carries emotional information. Speech input has been the subject of numerous studies over the last ten years, and it is now crucial for human-computer interaction, as well as for medical care, privacy, and stimulation. This research aims to evaluate whether the suggested framework can aid in speech emotion recognition (SER) tasks and determine whether Convolutional Neural Network (CNN) systems are efficient for SER tasks using transfer learning models on spectrograms. In this investigation, the authors present a brand-new attention-based CNN framework and evaluate its efficacy against several well-known CNN architectures from earlier research. The effectiveness of the suggested system is assessed using the SAVEE dataset, an open-access resource for emotive speech, against well-known CNN models such as VGG16, InceptionV3, ResNet50, InceptionResNetV2, and Xception. The authors used stacked 10-fold cross-validation on SAVEE for all trials. Amongst these CNN structures, the suggested model had the greatest accuracy (87.14%), followed by VGG16 (83.19%) and InceptionResNetV2 (82.22%). Compared to contemporary techniques, the test results and evaluation show the proposed approach to have steady and impressive results.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_63-Ascertaining_Speech_Emotion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Anomaly Detection with Graph Convolutional Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141162</link>
        <id>10.14569/IJACSA.2023.0141162</id>
        <doi>10.14569/IJACSA.2023.0141162</doi>
        <lastModDate>2023-11-30T12:00:11.4930000+00:00</lastModDate>
        
        <creator>Aabid A. Mir</creator>
        
        <creator>Megat F. Zuhairi</creator>
        
        <creator>Shahrulniza Musa</creator>
        
        <subject>Anomaly detection; deep learning; dynamic graphs; Graph Convolutional Networks (GCNs); Graph Neural Networks (GNNs); network data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Anomaly detection in network data is a critical task in various domains, and graph-based approaches, particularly Graph Convolutional Networks (GCNs), have gained significant attention in recent years. This paper provides a comprehensive analysis of anomaly detection techniques, focusing on the importance and challenges of network anomaly detection. It introduces the fundamentals of GCNs, including graph representation, graph convolutional operations, and the graph convolutional layer. The paper explores the applications of GCNs in anomaly detection, discussing the graph convolutional layer, hierarchical representation learning, and the overall process of anomaly detection using GCNs. A thorough review of the literature is presented, with a comparative analysis of GCN-based approaches. The findings highlight the significance of graph-based techniques, deep learning, and various aspects of graph representation in anomaly detection. The paper concludes with a discussion on key insights, challenges, and potential advancements, such as the integration of deep learning models and dynamic graph analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_62-Graph_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sleep Apnea Detection Method Based on Improved Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141161</link>
        <id>10.14569/IJACSA.2023.0141161</id>
        <doi>10.14569/IJACSA.2023.0141161</doi>
        <lastModDate>2023-11-30T12:00:11.4930000+00:00</lastModDate>
        
        <creator>Xiangkui Wan</creator>
        
        <creator>Yang Liu</creator>
        
        <creator>Liuwang Yang</creator>
        
        <creator>Chunyan Zeng</creator>
        
        <creator>Danni Hao</creator>
        
        <subject>Sleep apnea; fuzzy c-means; backward feature elimination method; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Random forest (RF) helps to solve problems such as the detection of sleep apnea (SA) by constructing multiple decision trees, but there is no definite rule for the selection of input features in the model. In this paper, we propose a SA detection method based on fuzzy C-means clustering (FCM) and the backward feature elimination method, which improves the sensitivity and accuracy of SA detection by selecting the optimal set of features to input to the random forest model. Firstly, FCM clustering is performed on the RR interval features of ECG signals, and then the backward feature elimination method is used to combine the intra-cluster tightness, inter-cluster separation and contour coefficient metrics to eliminate redundant features and determine the optimal feature set, which is then input into the RF to detect SA. The experimental results of this method on the Apnea-ECG database show that the SA detection accuracy is 88.6%, sensitivity is 90.5%, and specificity is 85.5%. The algorithm can adaptively select a smaller number of more discriminative features through FCM to reduce the input dimensions and improve the accuracy and sensitivity of the RF model for sleep apnea detection.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_61-Sleep_Apnea_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SmishGuard: Leveraging Machine Learning and Natural Language Processing for Smishing Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141160</link>
        <id>10.14569/IJACSA.2023.0141160</id>
        <doi>10.14569/IJACSA.2023.0141160</doi>
        <lastModDate>2023-11-30T12:00:11.4770000+00:00</lastModDate>
        
        <creator>Saleem Raja Abdul Samad</creator>
        
        <creator>Pradeepa Ganesan</creator>
        
        <creator>Justin Rajasekaran</creator>
        
        <creator>Madhubala Radhakrishnan</creator>
        
        <creator>Hariraman Ammaippan</creator>
        
        <creator>Vinodhini Ramamurthy</creator>
        
        <subject>Smishing; phishing; SMS; machine learning; natural language processing; TF-IDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>SMS facilitates the transmission of concise text messages between mobile phone users, serving a range of functions in personal and business domains such as appointment confirmation, authentication, alerts, notifications, and banking updates. It plays a vital role in daily communication due to its accessibility, reliability, and compatibility. However, because SMS is so widely available and trusted, it unintentionally creates an environment where smishing can occur. Smishing attackers exploit this trust to trick victims into divulging sensitive information or performing malicious actions. Early detection saves users from being victimized. Researchers have introduced different methods for accurately detecting smishing attacks. Machine learning models, coupled with natural language processing methods, are promising approaches for combating the escalating menace of SMS phishing attacks by analyzing large datasets of SMS messages to differentiate between legitimate and fraudulent messages. This paper presents two methods (SmishGuard) to detect smishing attacks that leverage machine learning models and language processing techniques. The results indicate that TF-IDF with the LDA method outperforms Weighted Average Word2Vec in precision and F1-Score, and that Random Forest and Extreme Gradient Boosting demonstrate higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_60-SmishGuard_Leveraging_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Hand Gestures as a Tool for Presentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141159</link>
        <id>10.14569/IJACSA.2023.0141159</id>
        <doi>10.14569/IJACSA.2023.0141159</doi>
        <lastModDate>2023-11-30T12:00:11.4630000+00:00</lastModDate>
        
        <creator>Hope Orovwode</creator>
        
        <creator>John Amanesi Abubakar</creator>
        
        <creator>Onuora Chidera Gaius</creator>
        
        <creator>Ademola Abdullkareem</creator>
        
        <subject>Hand gesture; linear classifier; motion classifier; LSTM; interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Our hands play a crucial role in daily activities, serving as a primary tool for interacting with technology. This paper explores using hand gestures to control presentations, offering a dynamic alternative to traditional devices like mice or keyboards. These conventional methods often limit presenters to a fixed position and depend on the device&#39;s proximity. In contrast, hand gesture controls promise a more fluid and engaging presentation style. This study utilizes the HaGRID dataset, supplemented by custom-recorded data, divided into 80% for training, and 10% each for validation and testing. The data undergoes preprocessing and a linear classifier with four dense layers and a SoftMax activation layer is employed. The model, optimized with the Adam optimizer and a learning rate of 1e-1, incorporates a motion classifier (LSTM) with two dense layers and an LSTM layer, tailored for long-distance body pose estimation. The resulting application, a local desktop tool independent of internet connectivity, uses tkinter for its user interface. It demonstrates high accuracy in classifying gestures, achieving 90.1%, 89%, and 90% in training, validation, and testing, respectively, for the linear classifier. The motion classifier records 79.8%, 72%, and 70.1%. The model effectively recognizes and categorizes dataset gestures, capturing live camera feeds to manage presentations. Users benefit from various features, including PowerPoint selection, distance mode, gesture toggling and assignment, and appearance mode. This study illustrates how hand gesture control can enhance presentation experiences, merging technology with natural human movement for a more seamless interaction.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_59-The_Use_of_Hand_Gestures_as_a_Tool_for_Presentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Neural Network-based Approach for Apple Leaf Diseases Detection in Smart Agriculture Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141158</link>
        <id>10.14569/IJACSA.2023.0141158</id>
        <doi>10.14569/IJACSA.2023.0141158</doi>
        <lastModDate>2023-11-30T12:00:11.4470000+00:00</lastModDate>
        
        <creator>Shengjie Gan</creator>
        
        <creator>Defeng Zhou</creator>
        
        <creator>Yuan Cui</creator>
        
        <creator>Jing Lv</creator>
        
        <subject>Smart agriculture; plant disease; apple leaf disease; image processing; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Plant diseases significantly harm agriculture, which has an impact on nations&#39; economies and levels of food security. Early plant disease detection is essential in smart agriculture. For the diagnosis of plant diseases, a number of methods, including imaging, have been used recently. Some of the existing imaging-based methods for plant disease detection have limitations. Firstly, high computational cost: some methods require complex image processing algorithms or manual design of features, which can increase the time and resources needed for detection. Secondly, low accuracy: most of the methods rely on simple classifiers or handcrafted features that may not capture the subtle differences between different diseases or healthy leaves. Thirdly, dependency on expert knowledge: some methods need human intervention or prior knowledge of the diseases and pests to perform the detection. These limitations can affect the efficiency of the detection system. In this study, three apple tree leaf diseases—apple black spot, Alternaria, and Minoz blight—are detected using a neural network (NN) and a digital image processing technique. The sample images are prepared, processed, and used to extract attributes using a digital image processing approach, and the NN is used to classify the diseases. An evaluation of the proposed system&#39;s performance in identifying illnesses in apple trees shows satisfactory accuracy and strong overall performance. Additionally, when compared to other techniques already in use, this strategy is more effective at diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_58-A_Neural_Network_based_Approach_for_Apple_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Depth Estimation using Stereo Matching and Disparity Refinement Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141156</link>
        <id>10.14569/IJACSA.2023.0141156</id>
        <doi>10.14569/IJACSA.2023.0141156</doi>
        <lastModDate>2023-11-30T12:00:11.4300000+00:00</lastModDate>
        
        <creator>Deepa </creator>
        
        <creator>Jyothi K</creator>
        
        <creator>Abhishek A Udupa</creator>
        
        <subject>Census transform; deep learning; depth; generative adversarial network; occlusion; stereo matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Stereo matching is a vital subject in computer vision. It focuses on finding accurate disparity maps, which are used in several applications, namely reconstruction of a 3D scene, robot navigation, and augmented reality. It is a method of obtaining corresponding matching points in stereo images to get a disparity map. With additional details, this disparity map can be converted into the depth of a scene. Obtaining an efficient disparity map in textureless, occluded, and discontinuous areas is a difficult job. A matching cost using an improved Census transform and an optimization framework is proposed to produce an initial disparity map. The classic Census transform focuses on the value of the pixel at the center. If this pixel is prone to noisy conditions, then the census encoding may differ, which leads to mismatches. To overcome this issue, an improved Census transform based on weighted sum values of the neighborhood pixels is proposed, which suppresses noise during stereo matching. Additionally, a deep learning based disparity refinement technique using a generative adversarial network to handle textureless, occluded, and discontinuous areas is proposed. The suggested method offers cutting-edge performance in terms of both qualitative and quantitative outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_56-An_Improved_Depth_Estimation_using_Stereo_Matching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated-Learning Topic Modeling Based Text Classification Regarding Hate Speech During COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141157</link>
        <id>10.14569/IJACSA.2023.0141157</id>
        <doi>10.14569/IJACSA.2023.0141157</doi>
        <lastModDate>2023-11-30T12:00:11.4300000+00:00</lastModDate>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Ammar Saeed</creator>
        
        <creator>Ahmed Almaghthawi</creator>
        
        <subject>Knowledge extraction; text mining; pandemics and society; hate speech; Islamophobia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>One of the most challenging tasks in knowledge discovery is extracting the semantics of content regarding emotional context from natural language text. The COVID-19 pandemic gave rise to many serious concerns and has led to several controversies, including the spreading of false news and hate speech. This paper particularly focuses on Islamophobia during the COVID-19 pandemic. The widespread usage of social media platforms during the pandemic for spreading false information about Muslims and their common religious practices has further fueled the existing problem of Islamophobia. In this respect, it becomes very important to distinguish between genuine information and Islamophobia-related false information. Accordingly, the proposed technique in this paper extracts features from the textual content using approaches like Word2Vec and Global Vectors. Next, text classification is performed using various machine learning and deep learning techniques. The performance comparison of various algorithms has also been reported. After experimental evaluation, it was found that performance metrics such as the F1-score indicate that the Support Vector Machine performs better than the other alternatives. Similarly, the Convolutional Neural Network also achieved promising results.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_57-Federated_Learning_Topic_Modeling_Based_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flood Prediction using Hydrologic and ML-based Modeling: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141155</link>
        <id>10.14569/IJACSA.2023.0141155</id>
        <doi>10.14569/IJACSA.2023.0141155</doi>
        <lastModDate>2023-11-30T12:00:11.4170000+00:00</lastModDate>
        
        <creator>A Fares Hamad Aljohani</creator>
        
        <creator>Ahmad. B. Alkhodre</creator>
        
        <creator>Adnan Ahamad Abi Sen</creator>
        
        <creator>Muhammad Sher Ramazan</creator>
        
        <creator>Bandar Alzahrani</creator>
        
        <creator>Muhammad Shoaib Siddiqui</creator>
        
        <subject>Flood prediction; hydrologic model; machine learning; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Flooding, caused by the overflow of water bodies beyond their natural boundaries, has severe environmental and socioeconomic consequences. To effectively predict and mitigate flood events, accurate and reliable flood modeling techniques are essential. This study provides a comprehensive review of the latest modeling techniques used in flood prediction, classifying them into two main categories: hydrologic models and machine learning models based on artificial intelligence. By objectively assessing the advantages and disadvantages of each model type, we aim to synthesize a systematic analysis of the various flood modeling approaches in the current literature. Additionally, we explore the potential of hybrid strategies that combine both modeling methods&#39; best characteristics to develop more effective flood control measures. Our findings provide valuable insights for researchers and practitioners in the field of flood modeling, and our recommendations can contribute to the development of more efficient and accurate flood prediction systems.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_55-Flood_Prediction_using_Hydrologic_and_ML_based_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Zero-Trust Model for Intrusion Detection in Drone Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141154</link>
        <id>10.14569/IJACSA.2023.0141154</id>
        <doi>10.14569/IJACSA.2023.0141154</doi>
        <lastModDate>2023-11-30T12:00:11.4000000+00:00</lastModDate>
        
        <creator>Said OUIAZZANE</creator>
        
        <creator>Malika ADDOU</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <subject>Fleet of drones; security; zero trust; intrusions; cybersecurity; zero day; Multi-Agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Today&#39;s worldwide introduction of drone fleets in a range of industrial applications has led to numerous network security issues, opening drones up to cyberthreats. In response to these challenges, an innovative approach has been proposed to protect drone fleet networks against potentially dangerous cyberattacks. Indeed, drones are considered flying computers, and the proposed approach takes into account their complex network structure and communication protocols. The proposed system is designed around a multi-agent architecture, with a hybrid zero-trust detection mechanism against known and emerging cyberthreats. The CICIDS2017 dataset was exploited after performing some essential pre-processing tasks including data cleaning, balancing, binarization and dimension reduction. The proposed approach guaranteed high levels of accuracy and scalability, enabling an effective response to potentially dangerous cyber threat scenarios threatening drone fleets. To evaluate the effectiveness of the proposed system, a test portion of CICIDS2017 was used. The accuracy in recognizing benign network traffic reached 99.99% with a very low false alarm rate, ensuring the system&#39;s effectiveness against known and unknown cyber threats. Extensive experimental testing has been carried out on never-before-seen data, highlighting the system&#39;s remarkable ability to rapidly recognize cyber threats in real time, thereby enhancing the overall security of drone networks. The contribution of the proposed approach is significant for drone network security, as it introduces a comprehensive model designed to meet the specific security requirements of drone fleets. Finally, the proposed approach offers practical prospects for improving the security of drone applications.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_54-A_Zero_Trust_Model_for_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tailored Expert Finding Systems for Vietnamese SMEs: A Five-step Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141153</link>
        <id>10.14569/IJACSA.2023.0141153</id>
        <doi>10.14569/IJACSA.2023.0141153</doi>
        <lastModDate>2023-11-30T12:00:11.3830000+00:00</lastModDate>
        
        <creator>Thi Thu Le</creator>
        
        <creator>Xuan Lam Pham</creator>
        
        <creator>Thanh Huong Nguyen</creator>
        
        <subject>Expert Finding System (EFS); Small and Medium-sized Enterprises (SMEs); experts; Vietnamese expert resources; business expertise identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This study addresses the underexplored area of Expert Finding Systems (EFSs) tailored for business applications, with a specific focus on supporting Small and Medium-sized Enterprises (SMEs). The principal objective of this research is to develop an EFS designed to cater to the needs of Vietnamese SMEs. The study methodology involves conducting in-depth interviews with Vietnamese SMEs to ascertain their requirements for Vietnamese EFSs. Subsequently, the research proposes an architectural model for the EFS and proceeds to develop the corresponding system. The EFS operates by collecting and analyzing data from diverse online sources to identify Vietnamese experts and individuals of Vietnamese origin who can provide valuable insights and support to enterprises operating in Vietnam. This research framework is guided by five key issues from Husain (2019): 1) Expertise evidence selection, 2) Expert representation, 3) Model building, 4) Model evaluation, and 5) Interaction design. By addressing these issues, the study aims to contribute to the development of an effective EFS tailored to the specific needs of Vietnamese SMEs in their quest to find and engage experts for business growth and innovation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_53-Tailored_Expert_Finding_Systems_for_Vietnamese_SMEs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Data Mining Technology with Improved Clustering Algorithm in Library Personalized Book Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141151</link>
        <id>10.14569/IJACSA.2023.0141151</id>
        <doi>10.14569/IJACSA.2023.0141151</doi>
        <lastModDate>2023-11-30T12:00:11.3700000+00:00</lastModDate>
        
        <creator>Xiao Lin</creator>
        
        <creator>Wenjuan Guan</creator>
        
        <creator>Ying Zhang</creator>
        
        <subject>Peak density; distance optimization; warhill algorithm; collaborative filtering; book recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The information construction work of university libraries is becoming increasingly perfect. However, the massive amount of data poses significant challenges to the personalized recommendation of books. Cluster analysis has always been an important research topic in data mining technology, and it has a wide range of application fields. Clustering is a fundamental operation in big data processing, and it also has good application value in the personalized recommendation of library books. To improve the personalized service quality of libraries, this study proposes a clustering algorithm based on density-based spatial clustering of applications with noise. This study introduced a distance optimization strategy and the Warhill algorithm into the proposed algorithm to address the difficulties in selecting initial parameter neighborhoods and density thresholds in traditional models, as well as their computational complexity. Afterwards, this study integrates the improved algorithm with the density peak algorithm to further improve the operational efficiency of the model. The performance verification of the model demonstrated superior clustering performance. The average accuracy of the proposed model&#39;s recommendation is 98.97%, indicating superiority. The practical application results have confirmed that there is a significant similarity between the books read by the readers and the books read by the target readers, and the effectiveness and feasibility of the proposed model have been verified. Therefore, the proposed model can contribute to the personalized recommendation function of libraries and has certain practical significance.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_51-Application_of_Data_Mining_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workforce Planning for Cleaning Services Operation using Integer Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141152</link>
        <id>10.14569/IJACSA.2023.0141152</id>
        <doi>10.14569/IJACSA.2023.0141152</doi>
        <lastModDate>2023-11-30T12:00:11.3700000+00:00</lastModDate>
        
        <creator>Mandy Lim Man Yee</creator>
        
        <creator>Rosshairy Abd Rahman</creator>
        
        <creator>Nerda Zura Zaibidi</creator>
        
        <creator>Syariza Abdul-Rahman</creator>
        
        <creator>Norhafiza Mohd Noor</creator>
        
        <subject>Workforce planning; cleaning services industry; optimization approach; integer programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The cleaning services industry in Malaysia faces significant challenges in effectively managing its workforce. Workforce planning, a critical procedure that aligns employee skills with suitable positions at the right time, is becoming increasingly essential across various organizations, including postal delivery and cleaning services. However, the absence of proper workforce planning from management teams has emerged as a primary concern in this sector. This study identifies an opportunity to improve workforce planning in the cleaning industry by employing an optimization approach that aims to minimize hiring costs. The main objective of this study is to minimize hiring costs in cleaning services operations at a public university in Malaysia. To achieve this, an optimization model based on integer programming was proposed to represent the current situation. Data collection involved interviews and company reports for the purpose of understanding the current conditions comprehensively. Factors influencing hiring costs were meticulously selected, considering the organization&#39;s specific situation. Model evaluation was conducted through what-if analysis, which allowed the evaluation of solutions provided by the modified models in three what-if scenarios. The findings indicated that the proposed modified model could assist organizations in improving workforce planning by optimizing the allocation of resources, reducing hiring costs, and enhancing cleaner performance. This study offers valuable insights for the management of cleaning services, paving the way for more effective and efficient workforce planning practices in the industry.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_52-Workforce_Planning_for_Cleaning_Services_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Basketball Motion Recognition Model Analysis Based on Perspective Invariant Geometric Features in Skeleton Data Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141150</link>
        <id>10.14569/IJACSA.2023.0141150</id>
        <doi>10.14569/IJACSA.2023.0141150</doi>
        <lastModDate>2023-11-30T12:00:11.3530000+00:00</lastModDate>
        
        <creator>Jiaojiao Lu</creator>
        
        <subject>Skeleton data; perspective invariance; geometric features; basketball recognition; spatio-temporal feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The study proposes a recognition method based on skeleton data to address the challenges of basketball action recognition, especially those posed by viewpoint changes in videos. The key to this method is to extract viewpoint-invariant geometric features and combine them with spatio-temporal feature fusion techniques. In addition, the study constructs a dynamic topological map of the human skeleton based on long short-term memory neural networks to improve the model performance. The experimental results showed that the research method had an average accuracy of 97.85% for Top-5 metrics on the Kinetics dataset and 97.82% for Top-5 metrics on the NTU RGB+D dataset, significantly better than the three other state-of-the-art methods. According to the experimental results, the method achieves efficient and stable basketball action recognition that is significantly superior to existing methods. This research not only provides a more efficient method for basketball motion recognition, but also provides valuable references for other sports action recognition fields.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_50-Basketball_Motion_Recognition_Model_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Evaluation Method of Urban Human Settlement Environment Quality Based on Back Propagation Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141149</link>
        <id>10.14569/IJACSA.2023.0141149</id>
        <doi>10.14569/IJACSA.2023.0141149</doi>
        <lastModDate>2023-11-30T12:00:11.3370000+00:00</lastModDate>
        
        <creator>Siyuan Zhang</creator>
        
        <creator>Wenbo Song</creator>
        
        <subject>Back propagation; neural network; urban human settlements; quality evaluation; morbidity index; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In order to improve people&#39;s living experience, a method for evaluating the quality of urban human settlements based on a back propagation neural network is proposed. Firstly, the initial evaluation index system is constructed and screened, and the final evaluation index system is constructed from the remaining evaluation indexes. Then, the back propagation neural network is constructed to build an evaluation model, and the evaluation model is trained through the processes of network initialization, hidden layer output calculation, and output layer output calculation. Finally, an improved genetic algorithm is used to optimize the back propagation neural network, improving its evaluation performance and realizing the evaluation of human settlements quality. The experimental results show that the accuracy of the evaluation results of urban human settlements quality output by the trained back propagation neural network model reaches 96.3%, demonstrating good performance.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_49-Research_on_Evaluation_Method_of_Urban_Human_Settlement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Framework for Classification of Impulsive Urban Sounds using BiLSTM Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141148</link>
        <id>10.14569/IJACSA.2023.0141148</id>
        <doi>10.14569/IJACSA.2023.0141148</doi>
        <lastModDate>2023-11-30T12:00:11.3230000+00:00</lastModDate>
        
        <creator>Nazbek Katayev</creator>
        
        <creator>Aigerim Altayeva</creator>
        
        <creator>Bayan Abduraimova</creator>
        
        <creator>Nurgul Kurmanbekkyzy</creator>
        
        <creator>Zhumabay Madibaiuly</creator>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <subject>Impulsive sound; machine learning; deep learning; CNN; LSTM; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Urban environments are awash with myriad sounds, among which impulsive noises stand distinct due to their brief and often disruptive nature. As cities evolve and expand, the accurate classification and management of these impulsive sounds become paramount for urban planners, environmental scientists, and public health advocates. This paper introduces a novel framework leveraging the Bidirectional Long Short-Term Memory (BiLSTM) Network for the systematic categorization of impulsive urban sounds. Traditional methodologies often falter in recognizing the nuanced intricacies of such noises. In contrast, the presented BiLSTM-based approach adapts to the temporal variability intrinsic to these sounds, thereby enhancing classification accuracy. The research harnesses an expansive dataset, curated from various urban settings, to train and validate the model. Preliminary findings suggest that our BiLSTM framework outperforms existing models, with a marked increase in both specificity and sensitivity metrics. The outcome of this study holds profound implications for city acoustics management, noise pollution control, and urban health interventions. Moreover, the framework&#39;s adaptability paves the way for its application across diverse acoustic landscapes beyond the urban realm. Future endeavors should seek to further optimize the model by integrating more diverse soundscapes and addressing potential biases in data collection.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_48-Development_of_a_Framework_for_Classification_of_Impulsive_Urban_Sounds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear and Nonlinear Analysis of Photoplethysmogram Signals and Electrodermal Activity to Recognize Three Different Levels of Human Stress</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141147</link>
        <id>10.14569/IJACSA.2023.0141147</id>
        <doi>10.14569/IJACSA.2023.0141147</doi>
        <lastModDate>2023-11-30T12:00:11.3070000+00:00</lastModDate>
        
        <creator>Yan Su</creator>
        
        <creator>Yuanyuan Li</creator>
        
        <creator>Shumin Zhang</creator>
        
        <creator>Hui Wang</creator>
        
        <subject>Stress detection; biological signal; linear analysis; nonlinear analysis; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>All human beings experience different levels of psychological stress during their daily activities, and stress is an integral part of human life. So far, few studies have attempted to identify different levels of stress by analyzing physiological signals. However, it should be noted that developing a practical system for detecting multiple stress levels is a challenging task, and no standard system has been developed for this purpose. Therefore, in the current study, we propose a new detection system based on linear and nonlinear analysis of photoplethysmogram (PPG) and electrodermal activity (EDA) signals to classify three levels of stress (low, medium and high). We recorded the physiological EDA and PPG signals during three trials of a Stroop color word test that induced three levels of stress in 42 healthy male volunteers. Mean, median, standard deviation, variance, skewness, kurtosis, minimum, maximum, and RMS features in the time domain were calculated from physiological signals as linear features. Also, approximate entropy, sample entropy, permutation entropy, Hurst exponent, Katz fractal dimension, Higuchi fractal dimension, Petrosian fractal dimension, detrended fluctuation analysis (DFA), and the embedding dimension and time delay parameters from phase space reconstruction of the signals were calculated as nonlinear features. The combination of nonlinear and linear features extracted from both PPG and EDA signals resulted in the highest mean accuracy (88.36%), intraclass correlation (ICC) (98.82%) and F1 (89.24%) values in the classification of three levels of mental stress through a multilayer perceptron neural network. Our findings showed that the combination of nonlinear and linear approaches for biological data analysis (PPG and EDA) could help to develop a stress detection system.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_47-Linear_and_Nonlinear_Analysis_of_Photoplethysmogram_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contactless Palm Vein Recognition System with Integrated Learning Approach System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141146</link>
        <id>10.14569/IJACSA.2023.0141146</id>
        <doi>10.14569/IJACSA.2023.0141146</doi>
        <lastModDate>2023-11-30T12:00:11.2900000+00:00</lastModDate>
        
        <creator>Ram Gopal Musunuru</creator>
        
        <creator>T Sivaprakasam</creator>
        
        <creator>G Krishna Kishore</creator>
        
        <subject>Palm Vein Recognition (PVR); Light Gradient Boosting Machine (LightGBM); Transfer Learning; Integrated Learning Approach (ILA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Palm Vein Recognition (PVR) is a new biometric authentication technology that provides both security and convenience. This paper describes a contactless PVR system (CPVR) that uses an integrated learning approach (ILA) to recognise the palm veins from the given input images while ensuring user comfort and ease of use. Contactless palm vein scanning technology is used in the proposed system, eliminating the need for physical contact with the scanning device. The proposed method combines advanced feature extraction techniques with a light gradient boosting machine (LightGBM) and transfer learning. A pre-trained model, EfficientNetB1, is used to train the model to extract significant factors from the input PVR images. The proposed method improves user comfort and reduces the risk of cross-contamination in environments where hygiene is critical, such as hospitals, banking, and other secured places. The cutting-edge contactless palm vein scanner captures the unique vein patterns beneath the user&#39;s palm without requiring direct physical contact. The proposed ILA illuminates and captures vein patterns using near-infrared (NIR) light, ensuring high accuracy and robustness. The system employs advanced pre-processing techniques and enhanced image segmentation techniques to continuously improve recognition accuracy. It adjusts to changes in the user&#39;s vein patterns over time, considering factors like ageing and injuries. The ILA improves the system&#39;s ability to adjust to changes in palm positioning and lighting. The ILA-based Contactless Palm Vein Recognition System also has numerous applications, such as access control, secure authentication for financial transactions, healthcare record access, and more. The system is built to be scalable, allowing organisations to use it in various settings, ranging from small-scale installations to large enterprise-level deployments. Finally, the proposed ILA approach increased the detection rate by accurately recognising users.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_46-Contactless_Palm_Vein_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-based Autonomous Search and Rescue Drone for Precision Firefighting and Disaster Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141145</link>
        <id>10.14569/IJACSA.2023.0141145</id>
        <doi>10.14569/IJACSA.2023.0141145</doi>
        <lastModDate>2023-11-30T12:00:11.2770000+00:00</lastModDate>
        
        <creator>Shubeeksh Kumaran</creator>
        
        <creator>V Aditya Raj</creator>
        
        <creator>Sangeetha J</creator>
        
        <creator>V R Monish Raman</creator>
        
        <subject>Search and rescue; firefighting; internet of things; disaster management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Disaster management is a line of work that deals with the lives of people; such work requires utmost precision, accuracy, and tough decision-making under critical situations. Our research aims to utilize Internet of Things (IoT)-based autonomous drones to provide detailed situational awareness and assessment of these dangerous areas to rescue personnel, firefighters, and police officers. The research involves the integration of four systems with our drone, each capable of tackling situations the drone can be in. As the recognition and protection of civilians is a key aspect of disaster management, our first system (i.e., the Enhanced Human Identification System) detects trapped victims and provides rescue personnel with the identity of the human located. Moreover, it also leverages an Enhanced Deep Super-Resolution Network (EDSR) x4-based upscaling technology to improve the image of the human located. The second system is the Fire Extinguishing System, which is equipped with an inbuilt fire extinguisher and a webcam to detect and put out fires at disaster sites to ensure the safety of both trapped civilians and rescue personnel. The third system (i.e., the Active Obstacle Avoidance system) ensures the safety of the drone as well as any civilians the drone encounters by detecting any obstacle surrounding its pre-defined path and preventing the drone from colliding with an obstacle. The final system (i.e., the Air Quality and Temperature Monitoring system) provides situational awareness to the rescue personnel, accurately analyzing the area and its safety levels to inform the rescue force on whether to take precautions, such as wearing a fire proximity suit in case of high temperature, or to try a different approach to manage the disaster. With these integrated systems, autonomous surveillance drones with such capabilities will greatly improve autonomous Search and Rescue (SAR) operations, as every aspect of our approach considers both the rescuer and the victims in a region of disaster.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_45-IoT_based_Autonomous_Search_and_Rescue_Drone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Different Deep Learning Techniques Used in Road Accident Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141144</link>
        <id>10.14569/IJACSA.2023.0141144</id>
        <doi>10.14569/IJACSA.2023.0141144</doi>
        <lastModDate>2023-11-30T12:00:11.2600000+00:00</lastModDate>
        
        <creator>Vinu Sherimon</creator>
        
        <creator>Sherimon P. C</creator>
        
        <creator>Alaa Ismaeel</creator>
        
        <creator>Alex Babu</creator>
        
        <creator>Sajina Rose Wilson</creator>
        
        <creator>Sarin Abraham</creator>
        
        <creator>Johnsymol Joy</creator>
        
        <subject>Deep learning; road traffic; road accident detection; MLP; CNN; LSTM; DenseNet; RNN; inception V3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Every year, numerous lives are tragically lost because of traffic accidents. While many factors may lead to these accidents, one of the most serious issues is the emergency services&#39; delayed response. Often, valuable time is lost due to a lack of information or difficulty determining the location and severity of an accident. To solve this issue, extensive research has been conducted on the creation of effective traffic accident detection and information communication systems. These systems use new technology, such as deep learning algorithms, to spot accidents quickly and correctly and communicate important information to emergency workers. This study provides an overview of current research in this field and identifies similarities among various systems. Based on the review findings, it was found that researchers utilised various techniques, including MLP (Multilayer Perceptron), CNN (Convolutional Neural Network), and models such as DenseNet, Inception V3, LSTM (Long short-term memory), YOLO (You Only Look Once), and RNN (Recurrent Neural Network), among others. Among these models, the MLP model demonstrated high accuracy. However, the Inception V3 model outperformed the others in terms of prediction time, making it particularly well-suited for real-time deployment at the edge and providing end-to-end functionality. The insights gained from this review will help enhance systems for detecting traffic accidents, which will lead to safer roads and fewer casualties. Future research must address several challenges, despite the promising results showcased by the proposed systems. These challenges include low visibility during nighttime conditions, occlusions that hinder accurate detection, variations in traffic patterns, and the absence of comprehensive annotated datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_44-An_Overview_of_Different_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IAM-TSP: Iterative Approximate Methods for Solving the Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141143</link>
        <id>10.14569/IJACSA.2023.0141143</id>
        <doi>10.14569/IJACSA.2023.0141143</doi>
        <lastModDate>2023-11-30T12:00:11.2430000+00:00</lastModDate>
        
        <creator>Esra’a Alkafaween</creator>
        
        <creator>Samir Elmougy</creator>
        
        <creator>Ehab Essa</creator>
        
        <creator>Sami Mnasri</creator>
        
        <creator>Ahmad S. Tarawneh</creator>
        
        <creator>Ahmad Hassanat</creator>
        
        <subject>Greedy algorithms; TSP; NP-Hard problems; polynomial time algorithms; combinatorial problems; optimization methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>TSP is a well-known combinatorial optimization problem with several practical applications. It is an NP-hard problem, which means that finding the optimal solution for large instances is computationally impractical. As a result, researchers have focused their efforts on devising efficient algorithms for obtaining approximate solutions to the TSP. This paper proposes Iterative Approximate Methods for Solving TSP (IAM-TSP), a new method that provides an approximate solution to the TSP in polynomial time. The proposed method begins by forming an initial loop from four extreme cities, and then adds each remaining city to the route using a greedy technique that evaluates the cost of inserting each city at different positions along the route. This method determines both the best position at which to add a city and the best city to be added. The resultant route is further improved by employing local constant permutations. When compared to existing state-of-the-art methods, our experimental results show that the proposed method is more capable of producing high-quality solutions. The proposed approach, with an average approximation ratio of 1.09, can be recommended for practical usage in its current form or as a pre-processing step for another optimizer.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_43-IAM_TSP_Iterative_Approximate_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Land Use and Land Cover Classification Through Human Group-based Particle Swarm Optimization-Ant Colony Optimization Integration with Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141142</link>
        <id>10.14569/IJACSA.2023.0141142</id>
        <doi>10.14569/IJACSA.2023.0141142</doi>
        <lastModDate>2023-11-30T12:00:11.2270000+00:00</lastModDate>
        
        <creator>Moresh Mukhedkar</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Divvela Srinivasa Rao</creator>
        
        <creator>Shweta Bandhekar</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Maganti Syamala</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <subject>Land use and land cover; human group-based particle swarm optimization; ant colony optimization; convolutional neural network; satellite image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Reliable classification of Land Use and Land Cover (LULC) using satellite images is essential for disaster management, environmental monitoring, and urban planning. This paper introduces a unique method that combines a Convolutional Neural Network (CNN) with Human Group-based Particle Swarm Optimization (HPSO) and Ant Colony Optimization (ACO) algorithms to improve the accuracy of LULC classification. The suggested hybrid HPSO-ACO-CNN architecture effectively solves the issues with feature selection, parameter optimization, and model training that are present in conventional LULC classification techniques. During the initial phases, HPSO and ACO are crucial in identifying the best hyperparameters for the CNN model and fine-tuning the selection of critical spectral bands. ACO modifies the CNN&#39;s hyperparameters (learning rate, batch size, and convolutional layers), whereas HPSO finds the optimal selection of spectral bands. This optimization technique reduces the probability of overfitting while substantially enhancing the model&#39;s ability to generalize. Utilizing the selected spectral bands and optimum parameter configuration, the CNN algorithm is trained in the second phase. With a Python implementation, this method uses both the spatial and spectral characteristics that the CNN detects to reach an outstanding 99.3% accuracy in LULC classification. The hybrid approach outperforms traditional methods such as Deep Neural Network (DNN), Multiclass Support Vector Machine (MSVM), and Long Short-Term Memory (LSTM) in experiments using benchmark satellite image datasets, demonstrating a significant 10.5% increase in accuracy. This hybrid HPSO-ACO-CNN architecture advances accurate and dependable LULC classification, offering an advantageous instrument for remote sensing applications. It enhances the field of satellite imagery evaluation by combining the advantages of deep learning techniques with optimization algorithms, enabling more accurate mapping of land use and cover for sustainable land management and environmental preservation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_42-Enhanced_Land_Use_and_Land_Cover_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond the Norm: A Modified VGG-16 Model for COVID-19 Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141140</link>
        <id>10.14569/IJACSA.2023.0141140</id>
        <doi>10.14569/IJACSA.2023.0141140</doi>
        <lastModDate>2023-11-30T12:00:11.1970000+00:00</lastModDate>
        
        <creator>Shimja M</creator>
        
        <creator>K. Kartheeban</creator>
        
        <subject>Covid-19; coronavirus; artificial intelligence; deep learning; transfer learning; VGG-16; performance metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The outbreak of Coronavirus Disease 2019 (COVID-19) in the initial days of December 2019 has severely harmed human health and the world&#39;s overall condition. There are currently five million confirmed cases, and the novel virus continues to spread quickly throughout the entire world. The manual Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test is time-consuming and difficult, and many hospitals throughout the world do not yet have an adequate number of testing kits. Designing an automated and early diagnosis system that can deliver quick decisions and significantly lower diagnosis error is therefore crucial. Recent advances in emerging Deep Learning (DL) algorithms and Artificial Intelligence (AI) approaches have made chest X-ray images a viable option for early COVID-19 screening. For visual image analysis, CNNs are the most often utilized class of deep learning neural networks. At the core of a CNN is a multi-layered neural network that offers solutions, particularly for the analysis, classification, and recognition of videos and images. This paper proposes a modified VGG-16 model for detection of COVID-19 infection from chest X-ray images. The model has been analyzed by considering important parameters such as accuracy, precision, and recall, and has been validated on publicly available chest X-ray images. The best performance is obtained by the proposed model, with an accuracy of 97.94%.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_40-Beyond_the_Norm_A_Modified_VGG_16_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Crack Detection: The Integration of Coarse and Fine Networks in Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141141</link>
        <id>10.14569/IJACSA.2023.0141141</id>
        <doi>10.14569/IJACSA.2023.0141141</doi>
        <lastModDate>2023-11-30T12:00:11.1970000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <creator>Tuan Anh Nguyen</creator>
        
        <subject>Deep learning; crack segmentation; coarse-to-fine strategy; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In recent years, the automation of detecting structural deformities, particularly cracks, has become vital across a wide range of applications, spanning from infrastructure maintenance to quality assurance. While numerous methods, ranging from traditional image processing to advanced deep learning architectures, have been introduced for crack segmentation, reliable and precise segmentation remains challenging, especially when dealing with complex or low-resolution images. This paper introduces a novel method that adopts a dual-network model to optimize crack segmentation through a coarse-to-fine strategy. This model integrates both a coarse network, focusing on the global context of the entire image to identify probable crack areas, and a fine network that zooms in on these identified regions, processing them at higher resolutions to ensure detailed crack segmentation results. The foundation of this architecture lies in utilizing shared encoders throughout the networks, which highlights the extraction of uniform features, paired with the introduction of separate decoders for different segmentation levels. The efficiency of the proposed model is evaluated through experiments on two public datasets, highlighting its capability to deliver superior results in crack detection and segmentation.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_41-Optimizing_Crack_Detection_The_Integration_of_Coarse_and_Fine_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on 3D Target Detection Algorithm Based on PointFusion Algorithm Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141139</link>
        <id>10.14569/IJACSA.2023.0141139</id>
        <doi>10.14569/IJACSA.2023.0141139</doi>
        <lastModDate>2023-11-30T12:00:11.1670000+00:00</lastModDate>
        
        <creator>Jun Wang</creator>
        
        <creator>Shuai Jiang</creator>
        
        <creator>Linglang Zeng</creator>
        
        <creator>Ruiran Zhang</creator>
        
        <subject>Neural network; target detection; autonomous driving; PointFusion; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>With the continuous development of autonomous driving technology, the accuracy requirements for 3D target detection in complex traffic scenes continue to rise. To address the problems of low recognition rate, long detection time, and poor robustness in traditional detection methods, this paper proposes a new method based on an improved PointFusion model. The method uses the PointFusion network architecture to feed 3D point cloud data and RGB image data into PointNet++ and ResNeXt neural network structures, respectively, and adopts a dense fusion method to predict, point by point, the spatial offsets from each input point to each vertex of the 3D bounding box, outputting the 3D prediction box of the target. Experimental results on the KITTI dataset show that, compared with the original PointFusion network model, the improved model proposed in this paper increases 3D target detection accuracy in all three difficulty modes (easy, medium, and hard) and performs best in the medium difficulty mode. These findings highlight the potential of the proposed method for application in autonomous driving, providing a reliable basis for navigating self-driving cars in complex environments.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_39-Research_on_3D_Target_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Roadmap for Optimizing Predictive Maintenance of Industrial Equipment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141138</link>
        <id>10.14569/IJACSA.2023.0141138</id>
        <doi>10.14569/IJACSA.2023.0141138</doi>
        <lastModDate>2023-11-30T12:00:11.1500000+00:00</lastModDate>
        
        <creator>Maria Eddarhri</creator>
        
        <creator>Mustapha Hain</creator>
        
        <creator>Jihad Adib</creator>
        
        <creator>Abdelaziz Marzak</creator>
        
        <subject>Predictive maintenance; intelligent system; aeronautical wiring companies; machine learning; IIoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Nowadays, the maintenance management of industrial equipment, particularly in the aeronautical industry, has become a substantial challenge and a critical concern for the sector. Aeronautical wiring companies are currently grappling with escalating difficulties in equipment maintenance. This paper proposes an intelligent system for the automated detection of machine failures. It assesses predictive maintenance approaches and underscores the significance of sensor selection for optimizing outcomes. The integration of Machine Learning techniques with the Industrial Internet of Things (IIoT) and intelligent sensors is presented, showcasing the heightened accuracy and effectiveness of predictive maintenance, especially in the aeronautical industry. The research aims to leverage Predictive Maintenance to enhance the performance of production machines, predict their failures, recognize faults, and determine maintenance dates through the analysis and processing of collected data. The study emphasizes real-time data collection, data traceability, and enhanced precision in predicting potential failures using Machine Learning. The findings underscore the collaboration between sensors and the synergy of Machine Learning with IIoT, ultimately aiming for sustained reliability and efficiency of predictive maintenance in aeronautical wiring companies.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_38-A_Proposed_Roadmap_for_Optimizing_Predictive_Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Cities, Smarter Roads: A Review of Leveraging Cutting-Edge Technologies for Intelligent Event Detection from Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141137</link>
        <id>10.14569/IJACSA.2023.0141137</id>
        <doi>10.14569/IJACSA.2023.0141137</doi>
        <lastModDate>2023-11-30T12:00:11.1330000+00:00</lastModDate>
        
        <creator>Ebtesam Ahmad Alomari</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <subject>Mobility; smart cities; event detection; social media; big data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The rapidly evolving landscape of smart cities and intelligent transportation systems makes the timely detection of traffic events a critical element for optimizing urban mobility. Furthermore, social media emerges as a valuable source of real-time information, with users acting as active sensors who spontaneously share observations and experiences related to traffic incidents. This review paper offers a comprehensive understanding of the state-of-the-art in traffic event detection from social media. The paper explores leveraging cutting-edge technologies, including machine learning and deep learning combined with big data technologies and high-performance computing. The discussion unfolds with an in-depth examination of recent approaches for event detection, followed by an exploration of techniques for spatio-temporal information extraction and sentiment analysis, both considered fundamental to enhancing the contextual understanding of traffic events. Further, the review explores the pivotal role of big data technologies in addressing scalability challenges inherent in the vast expanse of social data. The examination encompasses how big data frameworks facilitate efficient storage, processing, and analysis of large-scale social media datasets, thereby empowering machine learning and deep learning models for robust and real-time traffic event detection. Subsequently, the challenges and future directions are highlighted. Addressing these challenges and leveraging advanced technologies facilitates the proactive detection and management of these events, paving the way for smart mobility systems.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_37-Smart_Cities_Smarter_Roads.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Deep Learning Based Semantic Segmentation Models for Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141136</link>
        <id>10.14569/IJACSA.2023.0141136</id>
        <doi>10.14569/IJACSA.2023.0141136</doi>
        <lastModDate>2023-11-30T12:00:11.1200000+00:00</lastModDate>
        
        <creator>Xiaoyan Wang</creator>
        
        <creator>Huizong Li</creator>
        
        <subject>Semantic segmentation; autonomous vehicles; deep learning approaches; performance analysis; accuracy; inference time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Semantic segmentation plays a pivotal role in enhancing the perception capabilities of autonomous vehicles and self-driving cars, enabling them to comprehend and navigate complex real-world environments. Numerous techniques have been developed to achieve semantic segmentation; among these, the paper emphasizes the effectiveness of deep learning approaches, which have demonstrated impressive capabilities in capturing intricate patterns and features from images, resulting in highly accurate segmentation results. Although various studies have been conducted in the literature, there is a need for careful investigation and analysis of existing methods, especially with respect to two critical aspects: accuracy and inference time. To address this need, the research focuses on three widely used deep learning architectures: ResNet, VGG, and MobileNet. By thoroughly evaluating these models in terms of accuracy and inference time, the study aims to identify the models that strike the best balance between precision and speed. The findings of this study highlight the most accurate and efficient models for semantic segmentation, aiding the development of reliable self-driving technology.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_36-Investigation_of_Deep_Learning_Based_Semantic_Segmentation_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thai Finger-Spelling using Vision Transformer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141135</link>
        <id>10.14569/IJACSA.2023.0141135</id>
        <doi>10.14569/IJACSA.2023.0141135</doi>
        <lastModDate>2023-11-30T12:00:11.1030000+00:00</lastModDate>
        
        <creator>Kullawat Chaowanawatee</creator>
        
        <creator>Kittasil Silanon</creator>
        
        <creator>Thitinan Kliangsuwan</creator>
        
        <subject>Thai finger-spelling; vision transformer; deep learning; image recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In this paper, we present a Thai Finger-Spelling (TFS) recognition system that employs a deep learning model called the vision transformer. We extracted 15 characters of the Thai alphabet from publicly available datasets and our own collected data to build the recognition system. To train the model, we employed four EVA-02 vision transformer models, each of which showed impressive performance at a different model size. We conducted four experiments to determine the best-performing model. In Experiment 1, we trained the models directly to compare their performance. In Experiment 2, we used augmentation techniques to generate additional data. Experiment 3 applied the Test-Time Augmentation (TTA) technique to generate test images with random variations. Lastly, in Experiment 4, we used Pseudo-Labelling (mixing labeled and pseudo-labeled unlabeled data in each batch) to train the network. Furthermore, we developed a mobile application that collects user image data and provides helpful information related to finger-spelling, such as meanings, gestures, and usage examples.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_35-Thai_Finger_Spelling_using_Vision_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperchaotic Image Encryption System Based on Deep Learning LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141134</link>
        <id>10.14569/IJACSA.2023.0141134</id>
        <doi>10.14569/IJACSA.2023.0141134</doi>
        <lastModDate>2023-11-30T12:00:11.0730000+00:00</lastModDate>
        
        <creator>Shuangyuan Li</creator>
        
        <creator>Mengfan Li</creator>
        
        <creator>Qichang Li</creator>
        
        <creator>Yanchang Lv</creator>
        
        <subject>Image encryption; Lorenz Chaotic System; LSTM model; deep learning; DNA encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This paper introduces an advanced method for enhancing the security of image transmission. It presents a novel color image encryption algorithm that combines hyperchaotic dynamics with deep learning long short-term memory (LSTM) networks. First, a chaotic sequence is generated using the Lorenz hyperchaotic system, which is discretized and iteratively processed using the fourth-order Runge-Kutta (RK4) method; a deep learning LSTM model is then trained to transform the processed chaotic sequence into a new sequence. Finally, based on the new chaotic signal, Arnold scrambling and Deoxyribonucleic Acid (DNA) encoding are applied as a double scrambling-diffusion step to derive the final encrypted image. Simulation experiments on multiple color images show that the proposed algorithm effectively encrypts color images, achieves lossless encryption, and offers strong resistance to differential, statistical, and brute-force attacks. Compared with results reported in the literature, the correlation coefficient, information entropy, and pixel change rate of this method are closer to their ideal values, indicating higher security and a better encryption effect.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_34-Hyperchaotic_Image_Encryption_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Patient Medical Records with Blockchain Technology in Cloud-based Healthcare Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141133</link>
        <id>10.14569/IJACSA.2023.0141133</id>
        <doi>10.14569/IJACSA.2023.0141133</doi>
        <lastModDate>2023-11-30T12:00:11.0570000+00:00</lastModDate>
        
        <creator>Mohammed K Elghoul</creator>
        
        <creator>Sayed F. Bahgat</creator>
        
        <creator>Ashraf S. Hussein</creator>
        
        <creator>Safwat H. Hamad</creator>
        
        <subject>Security; blockchain; cloud; hyperledger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Blockchain technology presents a promising solution to myriad challenges pervasive in the healthcare domain, particularly concerning the secure and efficient management of burgeoning health information technology (HIT) data. This paper delineates a novel blockchain-based approach to enhance various aspects of healthcare management, including data accuracy, drug prescriptions, pregnancy data, supply chain management, electronic health record (EHR) management, and risk data management, with a special emphasis on ensuring secure access, immutable record-keeping, and robust data sharing. We propose a solution focusing on leveraging blockchain technology, particularly utilizing a Hyperledger network within Amazon Web Services (AWS), to securely manage patients&#39; medical records in the cloud. The implemented framework, housed within a Virtual Private Cloud (Amazon VPC) to ensure restricted access and cost-effective resource utilization, underscores advancements in data availability, security, traceability, and sharing, addressing key challenges within healthcare data management, and presenting a scalable, efficient, and secure approach to EHR management in contemporary healthcare contexts.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_33-Securing_Patient_Medical_Records_with_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Style Transfer with GANs: Perceptual Loss and Semantic Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141132</link>
        <id>10.14569/IJACSA.2023.0141132</id>
        <doi>10.14569/IJACSA.2023.0141132</doi>
        <lastModDate>2023-11-30T12:00:11.0400000+00:00</lastModDate>
        
        <creator>A Satchidanandam</creator>
        
        <creator>R. Mohammed Saleh Al Ansari</creator>
        
        <creator>A L Sreenivasulu</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <subject>Artistic style transfer; Generative Adversarial Networks (GANs); semantic segmentation; visual fidelity; deep Convolutional Neural Networks (deep-CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The goal of artistic style transfer is to combine the content of one image with the artistic style of another. Current approaches often fail to consistently capture complex stylistic elements or to maintain uniform stylization across semantic segments, which results in artefacts. This paper proposes a novel approach that combines perceptual loss functions computed with deep neural networks and semantic segmentation to address these issues. By ensuring contextually aware style distribution together with content preservation, the combination improves overall aesthetic correctness during style transfer. In this technique, perceptual features are extracted from both the content and style images using pre-trained deep neural networks. These features are combined into perceptual loss terms, which are then incorporated into the design of a Generative Adversarial Network (GAN). To give the model a better grasp of the semantics of a given image, an automatic segmentation module is then introduced; this semantic information guides the style transfer process, producing a more precise and nuanced result. Experimental results confirm the efficacy of the method and demonstrate improved visual fidelity over earlier approaches: using semantic segmentation and perceptual loss together yields a significant 95.6% improvement in visual accuracy. The method effectively overcomes the drawbacks of earlier approaches, providing precise and reliable style transfer and constituting a noteworthy advancement in the field of artistic style transfer. The output images further demonstrate the value of the approach by integrating stylistic elements into semantically meaningful regions.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_32-Enhancing_Style_Transfer_with_GANs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Network Intrusion Detection with a Hybrid Adaptive Neuro Fuzzy Inference System and AVO-based Predictive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141131</link>
        <id>10.14569/IJACSA.2023.0141131</id>
        <doi>10.14569/IJACSA.2023.0141131</doi>
        <lastModDate>2023-11-30T12:00:11.0270000+00:00</lastModDate>
        
        <creator>Sweety Bakyarani. E</creator>
        
        <creator>Anil Pawar</creator>
        
        <creator>Sridevi Gadde</creator>
        
        <creator>Eswar Patnala</creator>
        
        <creator>P. Naresh</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Network intrusion; cyberthreats; normalization; African vulture optimization; data cleaning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Detecting network intrusions is vital for protecting data and computer systems and for preserving the accessibility, integrity, and confidentiality of critical information in the face of constantly changing cyberthreats. Existing intrusion detection models have limits in properly capturing and interpreting complex patterns in network behavior, which frequently leads to difficulties in robust feature selection and a lack of overall intrusion detection accuracy. The drawbacks of current methods are addressed by a unique approach to network intrusion detection presented in this paper. This framework discusses the difficulties presented by changing cyberthreats and the critical requirement for efficient intrusion detection in a society growing more networked by the day. Using a Hybrid Adaptive Neuro Fuzzy Inference System and African Vulture Optimization model with Min-Max normalization and data cleaning on the NSL-KDD dataset, the methodology outlined here overcomes issues with complex network behavior patterns and improves feature selection for precise identification of potential security threats. This approach meets the need for an effective intrusion detection system. Python software is used to implement the suggested model since it is flexible and reliable. The results show a notable improvement in accuracy, with the Hybrid Adaptive Neuro Fuzzy Inference System and African Vulture Optimization model surpassing previous approaches significantly and obtaining an exceptional accuracy rate of 99.3%. The accuracy of the proposed model was improved by African Vulture Optimization, rising from 99.2% to 99.3%. When compared to Artificial Neural Network (78.51%), Random Forest (92.21%), and Linear Support Vector Machine (97.4%), the improvement is clear. When compared to other techniques, the suggested model exhibits an average accuracy gain of about 20.79%.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_31-Optimizing_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Evaluation of SLAM Methods and Integration of Human Detection with YOLO Based on Multiple Optimization in ROS2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141130</link>
        <id>10.14569/IJACSA.2023.0141130</id>
        <doi>10.14569/IJACSA.2023.0141130</doi>
        <lastModDate>2023-11-30T12:00:11.0100000+00:00</lastModDate>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Nghi Nguyen Vinh</creator>
        
        <creator>Nguyen Trung Nguyen</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Indoor robotic; SLAM; ROS2; Robot model; Human detection; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In the realm of robotics, indoor robotics is an increasingly prominent field, and enhancing robot performance stands out as a crucial concern. This research undertakes a comparative analysis of various Simultaneous Localization and Mapping (SLAM) algorithms with the overarching objective of augmenting the navigational capabilities of robots. This is accomplished within an open-source framework known as the Robotic Operating System (ROS2) in conjunction with additional software components such as RVIZ and Gazebo. The central aim of this study is to identify the most efficient SLAM approach by evaluating map accuracy and the time it takes for a robot model to reach its destinations when employing three distinct SLAM algorithms: GMapping, Cartographer SLAM, and SLAM_toolbox. Furthermore, this study addresses indoor human detection and tracking assignments, in which we evaluate the effectiveness of YOLOv5, YOLOv6, YOLOv7, and YOLOv8 models in conjunction with various optimization algorithms, including SGD, AdamW, and AMSGrad. The study concludes that YOLOv8 with SGD optimization yields the most favorable outcomes for human detection. These proposed systems are rigorously validated through experimentation, utilizing a simulated Gazebo environment within the Robot Operating System 2 (ROS2).</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_30-Efficient_Evaluation_of_SLAM_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Approach for Realizing Robust Security and Isolation in Virtualized Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141129</link>
        <id>10.14569/IJACSA.2023.0141129</id>
        <doi>10.14569/IJACSA.2023.0141129</doi>
        <lastModDate>2023-11-30T12:00:10.9930000+00:00</lastModDate>
        
        <creator>Rawan Abuleil</creator>
        
        <creator>Samer Murrar</creator>
        
        <creator>Mohammad Shkoukani</creator>
        
        <subject>Virtual Machine (VM); High-Performance Computing (HPC); cybersecurity; hypervisor security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Transitioning into the next generation of supercomputing resources, we’re faced with expanding user bases and diverse workloads, increasing the demand for improved security measures and deeper software compartmentalization. This is especially pertinent for virtualization, a key cloud computing component that’s at risk from attacks due to hypervisors’ integration into privileged OSs and shared use across VMs. In response to these challenges, our paper presents a two-pronged approach: introducing secure computing capabilities into the HPC software stack and proposing SecFortress, an enhanced hypervisor design. By porting the Kitten Lightweight Kernel to the ARM64 architecture and integrating it with the Hafnium hypervisor, we substitute the Linux-based resource management infrastructure, reducing overheads. Concurrently, SecFortress employs a nested kernel approach, preventing the outerOS from accessing the mediator’s memory and creating a hypervisor box to isolate the effects of untrusted VMs. Our initial results highlight significant performance improvements on small-scale ARM-based SoC platforms and enhanced hypervisor security with minimal runtime overhead, establishing a solid foundation for further research in secure, scalable high-performance computing.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_29-An_Enhanced_Approach_for_Realizing_Robust_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis Predictions in Digital Media Content using NLP Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141128</link>
        <id>10.14569/IJACSA.2023.0141128</id>
        <doi>10.14569/IJACSA.2023.0141128</doi>
        <lastModDate>2023-11-30T12:00:10.9630000+00:00</lastModDate>
        
        <creator>Abdulrahman Radaideh</creator>
        
        <creator>Fikri Dweiri</creator>
        
        <subject>Sentiment analysis; digital media; decision-making; quality assurance; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In the current digital landscape, understanding sentiment in digital media is crucial for informed decision-making and content quality. The primary objective is to improve decision-making processes and enhance content quality within this dynamic environment. To achieve this, a comprehensive comparative analysis of NLP for tweet sentiment analysis was conducted, revealing compelling insights. The BERT pre-trained model stood out, achieving an accuracy rate of 94.56%, emphasizing the effectiveness of transfer learning in text classification. Among machine learning algorithms, the Random Forest model excelled with an accuracy rate of 70.82%, while the K Nearest Neighbours model trailed at 55.36%. Additionally, the LSTM model demonstrated excellence in Recall, Precision, and F1 metrics, recording values of 81.12%, 82.32%, and 80.12%, respectively. Future research directions include optimizing model architecture, exploring alternative deep learning approaches, and expanding datasets for improved generalizability. While valuable insights are provided by our study, it is important to acknowledge its limitations, including a Twitter-centric focus, constrained model comparisons, and binary sentiment analysis. These constraints highlight opportunities for more nuanced and diverse sentiment analysis within the digital media landscape.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_28-Sentiment_Analysis_Predictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting and Improving Behavioural Factors that Boosts Learning Abilities in Post-Pandemic Times using AI Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141127</link>
        <id>10.14569/IJACSA.2023.0141127</id>
        <doi>10.14569/IJACSA.2023.0141127</doi>
        <lastModDate>2023-11-30T12:00:10.9300000+00:00</lastModDate>
        
        <creator>Jaya Gera</creator>
        
        <creator>Ekta Bhambri Marwaha</creator>
        
        <creator>Reema Thareja</creator>
        
        <creator>Aruna Jain</creator>
        
        <subject>Academic performance; machine learning; chatbot; educational data mining; learning analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Quantifying student academic performance has always been challenging as it hinges on several factors, including academic progress, personal characteristics, and behaviours relating to learning activities. Several research studies are therefore being conducted to identify these factors so that appropriate measures can be taken by academic institutions, the family, and the student to boost his/her academic performance. The present study investigates personal characteristics, psychological factors, behavioural factors, social factors, and learning capabilities that directly or indirectly affect a student&#39;s academic performance, which were tapped by administering a self-designed questionnaire. The data was collected from 214 undergraduate students studying in various streams of the University of Delhi, after which semi-structured interviews were conducted to obtain in-depth information. The results confirmed the correlation between the aforementioned factors and the learning capabilities of the students. Using the results of the analysis, a machine learning model based on the k-NN algorithm was built to predict student performance. A chatbot is also proposed to provide guidance to students in strenuous situations, motivate them, and interact with them without personal bias.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_27-Predicting_and_Improving_Behavioural_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Metering Infrastructure Data Aggregation Scheme Based on Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141126</link>
        <id>10.14569/IJACSA.2023.0141126</id>
        <doi>10.14569/IJACSA.2023.0141126</doi>
        <lastModDate>2023-11-30T12:00:10.9170000+00:00</lastModDate>
        
        <creator>Hongliang TIAN</creator>
        
        <creator>Naiqian ZHENG</creator>
        
        <creator>Yuzhi JIAN</creator>
        
        <subject>Smart grid; blockchain; advanced metering infrastructure; data aggregation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Smart grid stands as both the cornerstone of the modern energy system and the pivotal technology for addressing energy-related challenges. Advanced Metering Infrastructure constitutes a critical component within the smart grid ecosystem, providing real-time energy consumption data to power utility companies. Advanced Metering Infrastructure enables these companies to make timely and accurate decisions. Hence, the issue of data security pertaining to Advanced Metering Infrastructure assumes profound significance. Presently, Advanced Metering Infrastructure data confronts challenges associated with centralized data storage, rendering it susceptible to potential cyberattacks. Moreover, with the burgeoning number of electricity consumers, the resultant data volumes have swelled considerably. Consequently, the transmission of this data becomes intricate and its efficiency is compromised. To address these issues, this paper presents a lightweight blockchain data aggregation scheme. By integrating fog computing and cloud computing, a three-tier blockchain-based architecture is devised. Initially, digital signatures are employed to ensure the validity and integrity of user data. The innate attributes of blockchain technology are harnessed to safeguard the security of electricity energy data. Through secondary data aggregation, the privacy-sensitive user data is efficiently compressed and subsequently integrated into the blockchain, thereby mitigating the storage pressure on the blockchain and enhancing data transmission efficiency. Ultimately, through rigorous theoretical analysis and simulated experimentation, the paper demonstrates that, in comparison to existing methodologies, the lightweight blockchain data aggregation scheme exhibits heightened security. Additionally, the lightweight blockchain data aggregation scheme holds a competitive advantage in terms of computational and communication costs.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_26-Advanced_Metering_Infrastructure_Data_Aggregation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Regional Dialect Identification (ARDI) using Pair of Continuous Bag-of-Words and Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141125</link>
        <id>10.14569/IJACSA.2023.0141125</id>
        <doi>10.14569/IJACSA.2023.0141125</doi>
        <lastModDate>2023-11-30T12:00:10.9170000+00:00</lastModDate>
        
        <creator>Ahmed H. AbuElAtta</creator>
        
        <creator>Mahmoud Sobhy</creator>
        
        <creator>Ahmed A. El-Sawy</creator>
        
        <creator>Hamada Nayel</creator>
        
        <subject>Dialect identification; continuous Bag-of-Words; data augmentation; text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Author profiling is the process of finding characteristics that make up an author&#39;s profile. This paper presents a machine learning-based author profiling model for Arabic users, considering the author&#39;s regional dialect as a crucial characteristic. Various classification algorithms have been implemented: decision tree, KNN, multilayer perceptron, random forest, and support vector machines. A pair of Continuous Bag-of-Words (CBOW) models has been used for word representation. A well-known dataset has been used to evaluate the proposed model, and a data augmentation process has been implemented to improve the quality of the training data. Support vector machines achieved an F1-score of 50.52%, outperforming the other models.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_25-Arabic_Regional_Dialect_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of a Trustworthy Technique for Fraud Prevention in the Digital Banking Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141124</link>
        <id>10.14569/IJACSA.2023.0141124</id>
        <doi>10.14569/IJACSA.2023.0141124</doi>
        <lastModDate>2023-11-30T12:00:10.9000000+00:00</lastModDate>
        
        <creator>Bandar Ali M. Al-Rami Al-Ghamdi</creator>
        
        <subject>Digital banking fraud; analytical network process; intuitionistic fuzzy sets; fraud prevention and detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Digital banking fraud poses a threat to the global economy and fintech applications. Sustainable models are essential to address this issue and minimize its economic impact. Hybrid methods have been developed to assess strategies for preventing digital banking fraud, aiding global stakeholders in making well-informed judgments. However, many of these models concentrate on the numerical features of digital banking ratios while overlooking crucial financial fraud protection qualities. This paper introduces a computational method for discovering and measuring the influence of digital banking fraud prevention strategies on sustainable fraud prevention. This innovative approach combines intuitionistic fuzzy set theory and the analytical network process for decision-making. Initially, an intuitionistic fuzzy expert system prioritizes crucial indices based on the preferences of financial decision-makers. This technique is then compared to alternative decision-making models across multiple variables. Empirical data demonstrate the superiority of the intuitionistic fuzzy-based decision-making system, outperforming other models and facilitating the recognition of financial statement fraud in global banking networks. Consequently, it offers a sustainable fintech solution. The findings of this study are pertinent to fintech scholars and practitioners engaged in the global battle against digital banking fraud.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_24-Selection_of_a_Trustworthy_Technique_for_Fraud_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional State Prediction Based on EEG Signals using Ensemble Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141123</link>
        <id>10.14569/IJACSA.2023.0141123</id>
        <doi>10.14569/IJACSA.2023.0141123</doi>
        <lastModDate>2023-11-30T12:00:10.8700000+00:00</lastModDate>
        
        <creator>Norah Alrebdi</creator>
        
        <creator>Amal A. Al-Shargabi</creator>
        
        <subject>Electroencephalograph; mental health; feature extraction; random forest; extreme gradient boosting; adaptive boost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The emotional state is an essential factor that affects mental health. Electroencephalography (EEG) signal analysis is a promising method for detecting emotional states. Although multiple studies exist on EEG emotional signals classification, they have rarely considered processing time as a metric for classification model evaluation. Instead, they used either model accuracy and/or the number of features for evaluation. Processing time is an important factor to be considered in the context of mental health. Many people commonly use smart devices, such as smartwatches, to monitor their emotional state, and such devices require a short processing time. This research proposes an EEG-based model that detects emotional signals based on three factors: accuracy, number of features, and processing time. Two feature extraction algorithms were applied to EEG emotional signals: principal components analysis (PCA) and fast independent components analysis (FastICA). In the classification process, ensemble method classifiers were adopted due to their powerful performance. Three ensemble classifiers were used: random forest (RF), extreme gradient boosting (XGBoost), and adaptive boost (AdaBoost). The experimental results showed that RF and XGBoost achieved the best accuracy, i.e., 95% for both methods. However, XGBoost outperformed RF in terms of the number of features; it used 33 components extracted by PCA within 14 seconds, while RF used 36 within 4 seconds. AdaBoost was the worst in terms of both accuracy and processing time in the two experiments.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_23-Emotional_State_Prediction_Based_on_EEG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Nursing Process Expert System for Android-based Nursing Student Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141122</link>
        <id>10.14569/IJACSA.2023.0141122</id>
        <doi>10.14569/IJACSA.2023.0141122</doi>
        <lastModDate>2023-11-30T12:00:10.8370000+00:00</lastModDate>
        
        <creator>Aristoteles</creator>
        
        <creator>Abie Perdana Kusuma</creator>
        
        <creator>Anie Rose Irawati</creator>
        
        <creator>Dwi Sakethi</creator>
        
        <creator>Lisa Suarni</creator>
        
        <creator>Dedy Miswar</creator>
        
        <creator>Rika Ningtias Azhari</creator>
        
        <subject>Classification; expert system; forward chaining; blackbox testing; android; flutter; nursing process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Nurses are professionals who provide health services using a scientific process called the nursing process. In nursing, problem-solving uses the nursing process, a critical thinking method: nurses must analyze the data found in patients to diagnose and determine the outcomes and appropriate intervention plans. Prospective nursing students are required to be able to apply the nursing process in carrying out nursing care according to existing nursing standards, with supervision by nursing experts, to improve the quality of medical services. This study aims to develop an Android application with the help of an expert system as a nursing diagnosis tool, which helps nursing students learn the nursing process and helps lecturers monitor the nursing process carried out by nursing students. This research uses 116 symptom records, 22 diagnosis records, 60 intervention records, 8 type records, and 864 description records. The result of this research is an Android-based expert system using the forward chaining method that has been tested using the black box testing method.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_22-Development_of_Nursing_Process_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Shuttle-Bus Systems in Mega-Events using Computer Modeling: A Case Study of Pilgrims&#39; Transportation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141121</link>
        <id>10.14569/IJACSA.2023.0141121</id>
        <doi>10.14569/IJACSA.2023.0141121</doi>
        <lastModDate>2023-11-30T12:00:10.8070000+00:00</lastModDate>
        
        <creator>Mohamed S. Yasein</creator>
        
        <creator>Esam Ali Khan</creator>
        
        <subject>Computer modeling; simulation; optimization; shuttle-bus systems; mega-events; hajj</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Mega-events are held in one or more cities during a limited time, which requires special attention to the infrastructure and the offered services. The Hajj event, hosted in Makkah, Saudi Arabia, is an excellent example of a religious mega-event. Computer modeling and simulation is one of the main technical tools that help in understanding the risks of crowds and studying safety measures during the organization of many major events in the world. This paper focuses on using computer simulation to optimize the pilgrims&#39; shuttle-bus transportation system in the Holy Sites (Mashaaer), as a case study of optimizing shuttle-bus systems in mega-events using computer modeling. The objective of this paper is to develop a model of the shuttle-bus transport system to give insights into the advantages of its use as an alternative for transporting pilgrims, as well as to provide decision makers with a tool that can be used to select the best parameters of the system for the most efficient operation. For this purpose, pilgrims&#39; evacuation time, traffic congestion, and average trip time from Arafat to Muzdalifa are identified as the performance measures for evaluating the proposed transport system. The conducted simulation can be used to assess the current systems, recommend changes to them, and offer indicators and readings to assist decision makers.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_21-Optimizing_Shuttle_Bus_Systems_in_Mega_Events.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Gray Wolf Optimization Algorithm based on Gompertz Inertia Weight Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141120</link>
        <id>10.14569/IJACSA.2023.0141120</id>
        <doi>10.14569/IJACSA.2023.0141120</doi>
        <lastModDate>2023-11-30T12:00:10.7900000+00:00</lastModDate>
        
        <creator>Qiuhua Pan</creator>
        
        <subject>Gray wolf optimization algorithm; inertia weight; adaptive; Gompertz function; swarm intelligence algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>To address the problems that the Gray Wolf Optimizer (GWO) converges too slowly and its solution accuracy is not high enough, this paper proposes an Adaptive Gray Wolf Optimizer based on a Gompertz inertia weighting strategy (GGWO). GGWO uses the characteristics of the Gompertz function to achieve nonlinear adjustment of the inertia weight, which better balances the speed of global search and the accuracy of local search of the GWO algorithm. At the same time, the Gompertz function is used to realize the adaptive adjustment of the individual gray wolf&#39;s position and to better update the gray wolves&#39; positions according to the fitness values of different gray wolf individuals. Six classic test functions are used to compare the optimization performance of GGWO against 10 other classic or improved swarm intelligence algorithms. Results show that GGWO achieves better solution accuracy, better stability, and faster convergence than all 10 other swarm intelligence algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_20-Adaptive_Gray_Wolf_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Retrieval System for Scientific Publications of Lampung University by using VSM, K-Means, and LSA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141119</link>
        <id>10.14569/IJACSA.2023.0141119</id>
        <doi>10.14569/IJACSA.2023.0141119</doi>
        <lastModDate>2023-11-30T12:00:10.7770000+00:00</lastModDate>
        
        <creator>Rahman Taufik</creator>
        
        <creator>Didik Kurniawan</creator>
        
        <creator>Anie Rose Irawati</creator>
        
        <creator>Dewi Asiah Shofiana</creator>
        
        <subject>Information retrieval; Vector Space Model (VSM); k-means; Latent Semantic Analysis (LSA); clustering; topic identification; scientific publication information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The Lampung University repository system stores data related to research, community service, and other scientific works, and currently holds 37242 documents accessible through repository.lppm.unila.ac.id. Despite the amount of data, its optimal use for information retrieval remains unrealized, hindering the effective promotion of Lampung University&#39;s excellence in scientific publication. Recognizing the limitations of existing information retrieval systems, which are limited to specific methods for topic identification through clustering, this study aims to develop a retrieval system for Lampung University&#39;s repository using the Vector Space Model (VSM), K-Means, and Latent Semantic Analysis (LSA) that generates clusters and study expertise at the level of the study program, the faculty, and Lampung University. The methodology includes data collection, preprocessing, modeling, evaluation, and system deployment. The results show that the number of clusters obtained for the university level is 7, for the faculty level 6, 7, 8, and 10, and for the program level 3 to 5. In addition, topic identification indicates that the expertise topics at Lampung University are agriculture, soil, education, plants, learning, society, and Lampung. This study contributes to optimizing the information retrieval system, promoting academic excellence, and advancing the understanding of study expertise at Lampung University.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_19-Information_Retrieval_System_for_Scientific_Publications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Seismic Magnitude Classification Through Convolutional and Reinforcement Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141118</link>
        <id>10.14569/IJACSA.2023.0141118</id>
        <doi>10.14569/IJACSA.2023.0141118</doi>
        <lastModDate>2023-11-30T12:00:10.7600000+00:00</lastModDate>
        
        <creator>Qiuyi Lin</creator>
        
        <creator>Jin Li</creator>
        
        <subject>Earthquake early warning; the magnitude of the earthquake; imbalanced classification; artificial bee colony; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Earthquake Early Warning (EEW) systems are crucial in reducing the dangers associated with earthquakes. This paper delves into the realm of EEWs, focusing on rapidly determining earthquake magnitudes (EMs). Traditional methods for swift magnitude categorization often grapple with challenges such as data disparity and cumbersome processes. Our research introduces an innovative EEW model, employing a 7-second seismic waveform record from three different components provided by the China Earthquake Network Center (CENC). This empirical, quantitative study pioneers a method combining dilated convolutional techniques with a novel mutual learning-based artificial bee colony (ML-ABC) algorithm and reinforcement learning (RL) for EM classification. The proposed model utilizes an ensemble of convolutional neural networks (CNNs) to simultaneously extract feature vectors from input images, which are then amalgamated for classification. To address the imbalances in the dataset, we implement an RL-based algorithm, conceptualizing the training process as a series of decisions with individual samples representing distinct states. Within this framework, the network operates as an agent, receiving rewards or penalties based on its precision in distinguishing between the minority and majority classes. A key innovation in our approach is the initial weight pre-training using the ML-ABC method. This technique dynamically optimizes the &quot;food source&quot; for candidates, integrating mutual learning elements related to the initial weights. Extensive experiments were carried out on the selected dataset to ascertain the most effective parameter values, including the reward function. The findings demonstrate the superiority of our proposed model over other evaluated methods, highlighting its potential as a robust tool for EM classification in seismology. This research provides valuable insights for both seismologists and developers of EEW systems, offering a novel, efficient approach to earthquake magnitude determination.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_18-Advanced_Seismic_Magnitude_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Edge Computing-based Handgun and Knife Detection Method in IoT Video Surveillance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141117</link>
        <id>10.14569/IJACSA.2023.0141117</id>
        <doi>10.14569/IJACSA.2023.0141117</doi>
        <lastModDate>2023-11-30T12:00:10.7430000+00:00</lastModDate>
        
        <creator>Haibo Liu</creator>
        
        <creator>Zhubing HU</creator>
        
        <subject>Real-time detection; handgun and knife detection; edge devices; IoT video surveillance; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Real-time handgun and knife detection on edge devices within Internet of Things (IoT) video surveillance systems holds paramount importance in ensuring public safety and security. Numerous methods have been explored for handgun and knife detection in video-based surveillance systems, with deep learning-based approaches demonstrating superior accuracy compared to other methods. However, the current research challenge lies in achieving high accuracy rates while managing the computational demands to meet real-time requirements. This paper proposes a solution by introducing a single-stage convolutional neural network (CNN) model tailored to address this challenge. The proposed method is developed using a custom dataset, encompassing model generation, training, validation, and testing phases. Extensive experiments and performance evaluations substantiate the efficacy of the proposed approach, which achieves remarkable accuracy results, thus showcasing its potential for enhancing real-time handgun and knife detection capabilities in IoT-based video surveillance systems.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_17-An_Edge_Computing_based_Handgun_and_Knife_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Model for Postpartum Depression Identification using Deep Reinforcement Learning and Differential Evolution Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141115</link>
        <id>10.14569/IJACSA.2023.0141115</id>
        <doi>10.14569/IJACSA.2023.0141115</doi>
        <lastModDate>2023-11-30T12:00:10.7130000+00:00</lastModDate>
        
        <creator>Sunyuan Shen</creator>
        
        <creator>Sheng Qi</creator>
        
        <creator>Hongfei Luo</creator>
        
        <subject>Postpartum depression; deep reinforcement learning; differential evolution algorithm; weight initialization; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Postpartum depression (PPD) affects approximately 12% of new mothers, posing a significant health concern for both the mother and child. However, many women with PPD do not receive proper care. Preventative interventions are more cost-effective for high-risk women, but identifying those at risk can be challenging. To address this problem, we present an automatic model for PPD identification using a deep reinforcement learning approach and a differential evolution (DE) algorithm for weight initialization. DE is known for its ability to search for global optima in high-dimensional spaces, making it a promising approach for weight initialization. The policy of the model is based on an artificial neural network (ANN), treating the classification problem as a stage-by-stage decision-making process. The DE algorithm is used to acquire initial weight values, with the agent obtaining samples and performing classifications at each step. The environment provides a reward for every classification action, assigning a greater reward for identification of the minority class to encourage precise detection. Through this reward mechanism and the reinforcement learning framework, the agent eventually learns the best policy for achieving its goals. The model&#39;s efficiency is evaluated on a dataset acquired from the population-based BASIC study carried out in Uppsala, Sweden, which covers the period from 2009 to 2018 and consists of 4313 samples. The experimental results, measured by standard evaluation criteria, indicate that the model achieved better precision and accuracy, making it suitable for identifying PPD. The proposed model could have significant implications for identifying at-risk women and providing timely interventions to improve maternal and child health outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_15-Automatic_Model_for_Postpartum_Depression_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Cloud-Connected Robot Control using Private Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141116</link>
        <id>10.14569/IJACSA.2023.0141116</id>
        <doi>10.14569/IJACSA.2023.0141116</doi>
        <lastModDate>2023-11-30T12:00:10.7130000+00:00</lastModDate>
        
        <creator>Muhammad Amzie Muhammad Fauzi</creator>
        
        <creator>Mohamad Hanif Md Saad</creator>
        
        <creator>Sallehuddin Mohamed Haris</creator>
        
        <creator>Marizuana Mat Daud</creator>
        
        <subject>Internet of Things (IoT); robot control; cloud computing; cybersecurity; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>With the increasing demand for remote operations and the challenges posed during the COVID-19 pandemic, industries across various sectors, including logistics, manufacturing, and education, have adopted virtual solutions. Cloud-based robot control has emerged as a viable approach for enabling safe remote operation of robots. However, along with the benefits, there are also risks associated with cloud-based robots. In this study, a secure cloud-based robot control system using a blockchain system was developed. The robot utilizes supervisory control to navigate via the internet. The communication system of the robot relies on the ThingsSentral cloud-based IoT platform, enabling communication between the user, via a GUI developed using Python Tkinter, and the local robot over the internet. To facilitate internet communication, the robot in this study incorporates an ESP32 microcontroller, which provides a low-cost and low-power system capable of connecting to Wi-Fi. However, cloud-based control systems are susceptible to cyberattacks, prompting the use of blockchain cybersecurity in this study to mitigate the risks. The data sent by the supervisor is stored within a private blockchain developed using Python, simultaneously being transmitted to the cloud platform. The developed security system addresses the risks associated with cloud-based robot control systems, such as data tampering and unauthorized misuse, by leveraging the Proof of Work (PoW) and hashing mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_16-Secure_Cloud_Connected_Robot_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure IoT Routing through Manifold Criterion Trust Evaluation using Ant Colony Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141113</link>
        <id>10.14569/IJACSA.2023.0141113</id>
        <doi>10.14569/IJACSA.2023.0141113</doi>
        <lastModDate>2023-11-30T12:00:10.6800000+00:00</lastModDate>
        
        <creator>Afsah Sharmin</creator>
        
        <creator>Rashidah Funke Olanrewaju</creator>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Farhat Anwar</creator>
        
        <creator>S. M. A. Motakabber</creator>
        
        <creator>Nur Fatin Liyana Mohd Rosely</creator>
        
        <creator>Aisha Hassan Abdalla Hashim</creator>
        
        <subject>Internet of things (IoT); secure IoT routing; manifold criterion trust evaluation; ant colony optimization (ACO); bioinspired computing; pheromone management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The paper presents a simplified yet innovative computational framework to enable secure routing for sensors within a vast and dynamic Internet of Things (IoT) environment. In the proposed design methodology, a unique trust evaluation scheme utilizing a modified version of Ant Colony Optimization (ACO) is introduced. This scheme formulates a manifold criterion for secure data transmission, optimizing the sensor&#39;s residual energy and trust score. A distinctive pheromone management scheme is devised using trust score and residual energy. Concurrently, several attributes are employed for constraint modeling to determine a secure data transmission path among the IoT sensors. Moreover, the trust model introduces a dual-tiered system of primary and secondary trust evaluations, enhancing reliability towards securing trusted nodes and alleviating trust-based discrepancies. The comprehensive implementation of the proposed scheme integrates mathematical modeling, leveraging a streamlined bioinspired approach of the revised ACO using crowding distance. Quantitative results demonstrate that our approach yields a 35% improvement in throughput, an 89% reduction in delay, a 54% decrease in energy consumption, and a 73% enhancement in processing speed compared to prevailing secure routing protocols. Additionally, the model introduces an efficient asynchronous updating rule for local and global pheromones, ensuring greater trust in secure data propagation in IoT.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_13-Secure_IoT_Routing_Through_Manifold_Criterion_Trust_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Sentiment in Terms of Online Feedback on Top of Users&#39; Experiences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141114</link>
        <id>10.14569/IJACSA.2023.0141114</id>
        <doi>10.14569/IJACSA.2023.0141114</doi>
        <lastModDate>2023-11-30T12:00:10.6800000+00:00</lastModDate>
        
        <creator>Mohammed Alonazi</creator>
        
        <subject>Sentiment analysis; product review; machine learning; recommendation system; collaborative filtering; exploratory data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Since most businesses today are conducted online, it is crucial that each customer provide feedback on the various items offered. Evaluating online product sentiment and recommending products using state-of-the-art machine learning and deep learning algorithms requires a comprehensive pipeline, and this paper addresses that need. The methodology of the research is divided into two parts: the Sentiment Analysis Approach and the Product Recommendation Approach. The study applies several state-of-the-art algorithms, including Na&#239;ve Bayes, Logistic Regression, Support Vector Machine (SVM), Decision Tree, Random Forest, Bidirectional Long Short-Term Memory (BI-LSTM), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Stacked LSTM, with proper hyperparameter optimization techniques. The study also uses the collaborative filtering approach with the k-Nearest Neighbours (KNN) model to recommend products. Among these models, Random Forest achieved the highest accuracy of 95%, while the LSTM model scored 79%. The proposed model is evaluated using the Receiver Operating Characteristic (ROC) - Area under the ROC Curve (AUC). Additionally, the study conducted exploratory data analysis, including Bundle or Bought-Together analysis, point-of-interest-based analysis, and sentiment analysis on reviews (1996-2018). Overall, the study achieves its objectives and proposes an adaptable solution for real-life scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_14-Analyzing_Sentiment_in_Terms_of_Online_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study: Automating e-Commerce Product Rating Through an Analysis of Customer Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141112</link>
        <id>10.14569/IJACSA.2023.0141112</id>
        <doi>10.14569/IJACSA.2023.0141112</doi>
        <lastModDate>2023-11-30T12:00:10.6670000+00:00</lastModDate>
        
        <creator>Uvaaneswary Rajendran</creator>
        
        <creator>Salfarina Abdullah</creator>
        
        <creator>Khairi Azhar Aziz</creator>
        
        <creator>Sazly Anuar</creator>
        
        <subject>e-commerce website; sentiment analysis technique; manual product rating; automated product rating; product review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>e-Commerce today is a remarkable experience. However, finding and purchasing the right quality product based on numerous product reviews and manual ratings on e-commerce websites consumes much time for consumers. This paper presents the problems faced by consumers when buying products on e-commerce websites and a solution to solve them. The idea of an automated product rating system would be very useful for consumers, in that it rates products automatically based on the reviews given by buyers. To do this, a technique called sentiment analysis is used. It also ranks the products in order based on the product rating that is generated automatically. It would provide a way for consumers to purchase their desired product within minutes. Surveys and interviews were conducted to find out the problems faced by consumers when purchasing a product online through e-commerce websites. Research was also conducted to study the product rating and product review sections on current e-commerce websites. To conclude, this automated product rating system eventually saves consumers the effort and time of reading numerous reviews and trusting inaccurate product ratings to find the best quality product for them.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_12-An_Empirical_Study_Automating_e_Commerce_Product_Rating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Efficiency of Soil Classification System using Neural Network Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141111</link>
        <id>10.14569/IJACSA.2023.0141111</id>
        <doi>10.14569/IJACSA.2023.0141111</doi>
        <lastModDate>2023-11-30T12:00:10.6500000+00:00</lastModDate>
        
        <creator>Pappala Mohan Rao</creator>
        
        <creator>Kunjam Nageswara Rao</creator>
        
        <creator>Sitaratnam Gokuruboyina</creator>
        
        <creator>Neeli Koti Siva Sai Priyanka</creator>
        
        <subject>Agricultural; convolution neural network; soil classification deep learning; VGG16; VGG19; InceptionV3; multi-classification; ResNet50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Soil is a vital requirement for agricultural activities, providing numerous functionalities and restoring both abiotic and biotic materials. There are different types of soils, and each type possesses distinctive characteristics and unique harvesting properties that impact agricultural development in various ways. Traditionally, farmers used to analyse soil by looking at it visually, while some prefer laboratory tests, which are time-consuming and costly. Testing of soil is done to analyse the features and characteristics of the soil type, which results in selecting a suitable crop. This in turn results in increased food productivity, which is very beneficial to farmers. Hence, to recognize the soil type, an automatic soil identification model is proposed by implementing deep learning techniques. It is used to classify the soil for crop recommendation by analysing the soil type accurately. Different Convolutional Neural Networks have been applied in the proposed model: VGG16, VGG19, InceptionV3 and ResNet50. Among these techniques, the best results were obtained with ResNet50, with an accuracy of about 87% in multi-classification of Black soil, Laterite soil, Yellow soil, Cinder soil &amp; Peat soil.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_11-Investigating_Efficiency_of_Soil_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of a Security Defense Model for the University&#39;s Cyberspace Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141110</link>
        <id>10.14569/IJACSA.2023.0141110</id>
        <doi>10.14569/IJACSA.2023.0141110</doi>
        <lastModDate>2023-11-30T12:00:10.6330000+00:00</lastModDate>
        
        <creator>Wang Bin</creator>
        
        <subject>Machine learning; University&#39;s cyberspace; security defense; construction of a model; compressed sensing; non-negative matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In order to ensure the security of university teachers and students using cyberspace, a machine learning based university cyberspace security defense model is constructed. A compressed sensing based data collection method is adopted for the university&#39;s cyberspace, and data collection is completed through sparse representation, compression measurement, and recovery reconstruction. The features of university cyberspace data are then extracted by combining the advantages of the Convolutional Neural Network (CNN) model in spatial feature extraction and the Long Short-Term Memory (LSTM) model in sequential feature extraction. After multi-feature dimensionality reduction of the university network data based on the non-negative matrix factorization algorithm, the dimensionality reduction results are input into the ConvLSTM-CNN model; after convolution calculation and integration, the security threat detection results for the university&#39;s cyberspace are output. Based on these detection results, corresponding network attack defense measures are selected to ensure the security of the university&#39;s cyberspace. The experimental results show that the average attack interception rate of the model after application can reach 97.6%. It has been proven that the constructed model can accurately detect security threats to the university&#39;s cyberspace and achieve defense against various network attacks in different environments.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_10-Construction_of_a_Security_Defense_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Unsupervised Neural Machine Translation Based on Syntactic Knowledge Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141109</link>
        <id>10.14569/IJACSA.2023.0141109</id>
        <doi>10.14569/IJACSA.2023.0141109</doi>
        <lastModDate>2023-11-30T12:00:10.6200000+00:00</lastModDate>
        
        <creator>Aiping Zhou</creator>
        
        <subject>Unsupervised; Neural network; Machine translation; Grammatical knowledge; Transformer; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Unsupervised Neural Machine Translation is a crucial machine translation method that can translate in the absence of a parallel corpus and opens up new avenues for intercultural dialogue. Existing unsupervised neural machine translation models still struggle to deal with intricate grammatical relationships and linguistic structures, which leads to less-than-ideal translation quality. This study combines the Transformer structure and syntactic knowledge to create a new unsupervised neural machine translation model, which enhances the performance of the existing model. The study first creates a neural machine translation model based on the Transformer structure, and then introduces sentence syntactic structure and various syntactic fusion techniques, yielding a Transformer that combines grammatical knowledge. The results show that this grammar-enhanced Transformer, paired with Bi-directional Long Short-Term Memory as proposed in this research, has better performance. The accuracy and F1 value of the combined model on the training dataset are as high as 0.97. In addition, the time of the model in real sentence translation is kept within 2 s, and the translation accuracy is above 0.9. In conclusion, the unsupervised neural machine translation model proposed in this study has better performance, and its application to actual translation can achieve better translation results.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_9-Optimization_of_Unsupervised_Neural_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security in Software-Defined Networks Against Denial-of-Service Attacks Based on Increased Load Balancing Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141108</link>
        <id>10.14569/IJACSA.2023.0141108</id>
        <doi>10.14569/IJACSA.2023.0141108</doi>
        <lastModDate>2023-11-30T12:00:10.6030000+00:00</lastModDate>
        
        <creator>Ying ZHANG</creator>
        
        <creator>Hongwei DING</creator>
        
        <subject>Security; load balancing; denial-of-service attacks; software-defined networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>The goal of software-defined networks (SDNs), which enable centralized control by separating the control layer from the data layer, is to increase manageability and network compatibility. However, this form of network is vulnerable to the control layer going down in the face of a denial-of-service attack because of the centralized control policy. The considerable increase in events brought on by the introduction of new flows into the network puts a lot of strain on the control plane when the system is in reactive mode. Additionally, recurring events that seriously impair the control plane&#39;s ability to function, such as the gathering of statistical data from the entire network, can have a negative impact. This article introduces a new approach that uses a control box comprising a coordinating controller, a main controller that establishes the flow rules, and one or more sub-controllers, used when needed, that establish the rules to fend off the attack and avoid network paralysis. The controllers that currently set the rules are relieved of some work by giving the coordinating controller management and supervision responsibilities. Additionally, the coordinating controller distributes the load at the control level by splitting up incoming traffic among the flow-rule controllers. Thus, the proposed method can avoid performance disruption of the flow rule setter&#39;s main controller and withstand denial-of-service attacks by distributing the traffic load brought on by such an attack to one or more sub-controllers of the flow rule setter. The results of the experiments conducted indicate that, when compared to existing solutions, the proposed solution performs better in the face of a denial-of-service attack.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_8-Security_in_Software_Defined_Networks_Against_Denial_of_Service_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Extractive Summarization using GAN Boosted by DistilBERT Word Embedding and Transductive Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141107</link>
        <id>10.14569/IJACSA.2023.0141107</id>
        <doi>10.14569/IJACSA.2023.0141107</doi>
        <lastModDate>2023-11-30T12:00:10.5870000+00:00</lastModDate>
        
        <creator>Dongliang Li</creator>
        
        <creator>Youyou Li</creator>
        
        <creator>Zhigang ZHANG</creator>
        
        <subject>Extractive text summarization; generative adversarial network; transductive learning; long short-term memory; DistilBERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Text summarization is crucial in diverse fields such as engineering and healthcare, greatly enhancing time and cost efficiency. This study introduces an innovative extractive text summarization approach utilizing a Generative Adversarial Network (GAN), Transductive Long Short-Term Memory (TLSTM), and DistilBERT word embedding. DistilBERT, a streamlined BERT variant, offers significant size reduction (approximately 40%), while maintaining 97% of language comprehension capabilities and achieving a 60% speed increase. These benefits are realized through knowledge distillation during pre-training. Our methodology uses GANs, consisting of the generator and discriminator networks, built primarily using TLSTM, an expert at decoding temporal nuances in time-series prediction. For more effective model fitting, transductive learning is employed, assigning higher weights to samples nearer to the test point. The generator evaluates the probability of each sentence for inclusion in the summary, and the discriminator critically examines the generated summary. This reciprocal relationship fosters a dynamic iterative process, generating top-tier summaries. To train the discriminator efficiently, a unique loss function is proposed, incorporating multiple factors such as the generator’s output, actual document summaries, and artificially created summaries. This strategy motivates the generator to experiment with diverse sentence combinations, generating summaries that meet high-quality and coherence standards. Our model’s effectiveness was tested on the widely accepted CNN/Daily Mail dataset, a benchmark for summarization tasks. According to the ROUGE metric, our experiments demonstrate that our model outperforms existing models in terms of summarization quality and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_7-Automatic_Extractive_Summarization_using_GAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality SDK Overview for General Application Use</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141106</link>
        <id>10.14569/IJACSA.2023.0141106</id>
        <doi>10.14569/IJACSA.2023.0141106</doi>
        <lastModDate>2023-11-30T12:00:10.5730000+00:00</lastModDate>
        
        <creator>Suzanna</creator>
        
        <creator>Sasmoko</creator>
        
        <creator>Ford Lumban Gaol</creator>
        
        <creator>Tanty Oktavia</creator>
        
        <subject>Augmented reality; software development kits; AR SDK; platform; framework; AR technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Augmented Reality Software Development Kits, commonly called AR SDKs, are useful for developers to build digital objects in AR. This paper presents a comparative study of AR SDKs. The comparison is based on several significant criteria in order to select the most suitable SDK. The evaluation used the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) method. Based on a comparative analysis of the features and virtual elements available for application development with the AR SDKs, the researchers suggest that the main functions of an AR SDK are to offer an AR application editing platform and to facilitate software creation without requiring knowledge of algorithms. Besides that, it is possible to establish some general observations regarding the benefits and limitations of AR SDKs. The result of this research is expected to provide a clear framework for processing the data that has been collected, summarized, and tested from the case study, so that researchers can reach useful conclusions. From the literature study that was conducted, it was concluded that among the many SDK tools, 15 were the most employed by AR developers. These 15 tools were selected based on certain main attributes and supported platforms. Finally, this paper also presents the advantages and limitations of these 15 tools.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_6-Augmented_Reality_SDK_Overview_for_General_Application_Use.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of University Archives Business Data Push System Based on Big Data Mining Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141105</link>
        <id>10.14569/IJACSA.2023.0141105</id>
        <doi>10.14569/IJACSA.2023.0141105</doi>
        <lastModDate>2023-11-30T12:00:10.5570000+00:00</lastModDate>
        
        <creator>Zhongke Wang</creator>
        
        <creator>Jun Li</creator>
        
        <subject>Big data mining technology; system design; business data push; hidden Markov model; similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>Aiming at the problems of low accuracy, recall, coverage and push efficiency in university archives business data, a university archives business data push system based on big data mining technology is designed. Firstly, the overall architecture and topological structure of the system are designed, followed by its functional modules. Big data mining technology is used to mine user behavior, a model is built according to the user behavior sequence, and a model for predicting user behavior sequences is designed based on hidden Markov model theory. Finally, the user behavior sequence is analyzed, and factors such as user collaboration, similarity of user behavior sequences and data timeliness are comprehensively considered to push university archives business data to users. The experimental results show that the proposed method has high data push accuracy, recall, coverage and push efficiency, and can effectively push the required business data to users.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_5-Design_of_University_Archives_Business_Data_Push_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Vehicle Safety: A Comprehensive Accident Detection and Alert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141104</link>
        <id>10.14569/IJACSA.2023.0141104</id>
        <doi>10.14569/IJACSA.2023.0141104</doi>
        <lastModDate>2023-11-30T12:00:10.5400000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Mohamad Amirul Aliff bin Abdillah</creator>
        
        <creator>Ahmed Jamal Abdullah Al-Gburi</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <creator>Andrii Oliinyk</creator>
        
        <subject>Vehicle accident detection; microcontroller-based system; accelerometer sensor; Global Positioning System (GPS) localization; Global System for Mobile (GSM) communication; emergency response; safety innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This research pioneers a ground-breaking system meticulously engineered to swiftly detect vehicular accidents and dispatch immediate alerts to both emergency services and pre-assigned contacts. This symphony of cutting-edge technologies includes an accelerometer sensor attuned to detect acceleration in any vector, a dynamic Liquid-Crystal Display (LCD) display for rapid alert dissemination, an assertive buzzer for resonant alarms, a Global System for Mobile (GSM) module for the swift transmission of distress messages, and pinpoint location data provided by a Global Positioning System (GPS) module. A user-friendly &#39;cancel&#39; button acts as an escape hatch from potential false alarms. Orchestrated by the dexterity of an Arduino Uno microcontroller, this ensemble orchestrates a harmonious ballet of safety. This solution boasts cost-effectiveness, steadfastness, and unparalleled efficiency. Rigorous testing across diverse scenarios confirms its precision and robustness. By enhancing accident detection accuracy, expediting emergency responses, and facilitating rapid location dissemination, this innovation serves as a vital lifeline, empowering both passengers and rescue services upon accident initiation. With location data as its guiding star, emergency services gain a swift navigational edge, offering a beacon of hope in the battle against accident-related casualties.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_4-Enhancing_Vehicle_Safety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State of the Art in Intent Detection and Slot Filling for Question Answering System: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141103</link>
        <id>10.14569/IJACSA.2023.0141103</id>
        <doi>10.14569/IJACSA.2023.0141103</doi>
        <lastModDate>2023-11-30T12:00:10.5270000+00:00</lastModDate>
        
        <creator>Anis Syafiqah Mat Zailan</creator>
        
        <creator>Noor Hasimah Ibrahim Teo</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Mike Joy</creator>
        
        <subject>Intent detection; intent classification; slot filling; question answering system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>A Question Answering System (QAS), also known as a chatbot, is a Natural Language Processing (NLP) application that automatically provides accurate responses to questions posed by humans in natural language. Intent detection and classification are crucial elements in NLP, especially in a task-oriented dialogue system. In this paper, we conduct a systematic literature review that performs a comparative analysis of different techniques or algorithms being implemented for intent detection and classification with slot filling. The goals of this paper are to identify the distribution, methodology, techniques or algorithms, and evaluation methods that can be used to develop and construct a model of intent detection and classification with slot filling. This paper also reviews academic documents published from 2019 to 2023, selected through a four-step process of identification, screening, eligibility, and inclusion. In order to examine these documents, a systematic review was conducted and four main research questions were answered. The results discuss the methodology that can be used for the implementation of intent detection and classification with slot filling, along with the techniques, algorithms and evaluation methods that are widely used and currently implemented by other researchers.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_3-State_of_the_Art_in_Intent_Detection_and_Slot_Filling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Sampling: Enhancing Recommendation Diversity and User Engagement in the Headspace Meditation App</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141102</link>
        <id>10.14569/IJACSA.2023.0141102</id>
        <doi>10.14569/IJACSA.2023.0141102</doi>
        <lastModDate>2023-11-30T12:00:10.5270000+00:00</lastModDate>
        
        <creator>Rohan Singh Rajput</creator>
        
        <creator>Christabelle Pabalan</creator>
        
        <creator>Akhil Chaturvedi</creator>
        
        <creator>Prathamesh Kulkarni</creator>
        
        <creator>Adam Brownell</creator>
        
        <subject>Information retrieval; machine learning; recommender system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>In this paper, we present a clever approach to enhance the performance of sequential recommendation systems, specifically in the context of meditation recommendations within the Headspace app. Our method, termed “Semantic Sampling”, leverages the power of language embeddings and clustering techniques to introduce diversity and novelty in the recommendations. We augment the Time Interval Aware Self-Attention for Sequential Recommendation (TiSASRec) model with semantic sampling, where the next recommended item is randomly sampled from a cluster of semantically similar items. Our empirical evaluation, conducted on a sample set of 276,700 users, reveals a statistically significant increase of 2.26% in content start rate for the treatment group (TiSASRec with semantic sampling) compared to the control group (TiSASRec alone). Furthermore, our approach demonstrates improved coverage and rarity, indicating a broader range of recommendations and higher novelty. The results underscore the potential of Semantic Sampling in enhancing user engagement and satisfaction in recommendation systems.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_2-Semantic_Sampling_Enhancing_Recommendation_Diversity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment-Driven Forecasting LSTM Neural Networks for Stock Prediction-Case of China Bank Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141101</link>
        <id>10.14569/IJACSA.2023.0141101</id>
        <doi>10.14569/IJACSA.2023.0141101</doi>
        <lastModDate>2023-11-30T12:00:10.4930000+00:00</lastModDate>
        
        <creator>Shangshang Jin</creator>
        
        <subject>Machine learning; LSTM; sentiment; forecasting; banking sector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(11), 2023</description>
        <description>This study explores the predictive analysis of public sentiment in China&#39;s financial market, focusing on the banking sector, through the application of machine learning techniques. Specifically, it utilizes the Baidu Index and Long Short-Term Memory (LSTM) networks. The Baidu Index, akin to China&#39;s version of Google Trends, serves as a sentiment barometer, while LSTM networks excel in analyzing sequential data, making them apt for stock price forecasting. Our model integrates sentiment indices from Baidu with historical stock data of significant Chinese banks, aiming to unveil how digital sentiment influences stock price movements. The model&#39;s forecasting prowess is rigorously evaluated using metrics such as R-squared (R2), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and confusion matrices, the latter being instrumental in assessing the model&#39;s capability in correctly predicting stock up or down movements. Our findings predominantly showcase superior prediction performance of the sentiment-based LSTM model compared to a standard LSTM model. However, effectiveness varies across different banks, indicating that sentiment integration enhances prediction capabilities, yet individual stock characteristics significantly contribute to the prediction accuracy. This inquiry not only underscores the importance of integrating public sentiment in financial forecasting models but also provides a pioneering framework for leveraging digital sentiment in financial markets. Through this endeavor, we offer a robust analytical tool for investors, policymakers, and financial institutions, aiding in better navigation through the intricate financial market dynamics, thereby potentially leading to more informed decision-making in the digital age.</description>
        <description>http://thesai.org/Downloads/Volume14No11/Paper_1-Sentiment_Driven_Forecasting_LSTM_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Artificial Intelligence for Information Diffusion Prediction: Regression-based Key Features Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410123</link>
        <id>10.14569/IJACSA.2023.01410123</id>
        <doi>10.14569/IJACSA.2023.01410123</doi>
        <lastModDate>2023-11-01T13:37:06.5600000+00:00</lastModDate>
        
        <creator>Majed Algarni</creator>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <subject>Information diffusion; social media data; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Information diffusion prediction is essential in marketing, advertising, and public health. Public health officials may avoid disease outbreaks, and businesses can optimize marketing campaigns and target audiences. Information diffusion prediction helps identify influential nodes in social networks, enabling targeted interventions to spread positive messages or counter misinformation. Organizations can make informed decisions and improve society by analyzing information propagation patterns. This research study investigates the prediction of information diffusion on social media platforms using a diverse set of features and advanced machine learning and deep learning models. We explore the impact of network structure, early retweet dynamics, and tweet content using a publicly available dataset from Weibo, a social network similar to Twitter. By training the models on each set of features separately, we observed different performances. The Random Forest model using all features achieved an R-squared of 76.690%. The Random Forest (RF) model focusing on the following-network structure achieved an R-squared of 90.773%. The RF model analyzing the retweeting-network structure achieved an R-squared of 98.161%.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_123-Applications_of_Artificial_Intelligence_for_Information_Diffusion_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated, Bidirectional Pronunciation, Morphology, and Diacritics Finite-State System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410122</link>
        <id>10.14569/IJACSA.2023.01410122</id>
        <doi>10.14569/IJACSA.2023.01410122</doi>
        <lastModDate>2023-11-01T13:37:06.5300000+00:00</lastModDate>
        
        <creator>Maha Alkhairy</creator>
        
        <creator>Afshan Jafri</creator>
        
        <creator>Adam Cooper</creator>
        
        <subject>Computational linguistics; phonology; morphology; modern standard Arabic; diacritization; text-to-speech; language learning tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>A bidirectional phonetizer, morphologizer, and diacritizer pipeline (FSPMD) for modern standard Arabic (MSA), integrating pronunciation, concatenative and templatic morphology, and diacritization, was developed. Grammar and segmental phonology rules were applied in the forward direction to ensure proper rule ordering, supplemented with special backward-direction rules. The FSPMD comprises bidirectional finite-state transducers (FSTs) consisting of an ordered composition of FSTs, unordered parallel FSTs, unioned FSTs, and, for validity, finite-state acceptors. The FSPMD has unique, innovative features and can be used as an integrated pipeline or as a standalone phonetizer (FSAP), morphologizer (FSAM), or diacritizer (FSAD). As the system is bidirectional, it can be used in the forward (generation, synthesis) and backward (analysis, decomposition) directions and can be integrated into systems such as automatic speech recognition (ASR) and language learning tools. The FSPMD is rule-based and avoids stem listings for morphology or pronunciation dictionaries, which makes it scalable and generalizable to similar languages. The FSPMD models authentic rules with fine granularity and nuance, including rewrite and morphophonemic rules and subcategory identification and utilization (e.g., irregular verbs). FSAP performance on text from the Tashkeela corpus and Wikipedia demonstrated that the pronunciation system can accurately pronounce all text and words, with the only errors related to foreign words and misspellings, which were outside the system’s scope. FSAM and FSAD coverage and accuracy were evaluated using the Tashkeela corpus and a gold standard derived from its intersection with the UD_PADT treebank. The coverage of extraction of roots and properties from words is 82%. Accuracy results are: roots computed from a word (92%), words generated from a root (100%), non-root properties (97%), and diacritization (84%). FSAM non-root results matched and/or surpassed those from MADAMIRA; however, root result comparisons were not conducted because of the concatenative nature of publicly available morphologizers.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_122-An_Integrated_Bidirectional_Pronunciation_Morphology_and_Diacritics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI Animation Character Behavior Modeling and Action Recognition in Virtual Studio</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410121</link>
        <id>10.14569/IJACSA.2023.01410121</id>
        <doi>10.14569/IJACSA.2023.01410121</doi>
        <lastModDate>2023-11-01T13:37:06.4830000+00:00</lastModDate>
        
        <creator>Yaoyao Xu</creator>
        
        <subject>Virtual broadcasting; animated characters; behavioral modeling; action recognition; behavior tree; long and short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the advancement of virtual broadcasting technology, the use of artificial intelligence animated characters in virtual scenes is becoming increasingly widespread. However, a series of challenges and limitations remain in making the behavior of animated characters more natural, intelligent, and diverse. Therefore, this study proposes a behavior-tree-based animation character behavior modeling method and a long short-term memory action recognition method combining human geometric features. The research results indicate that when the behavior modeling model faces different obstacles, the successful avoidance rate is over 80%, and the avoidance reaction time is 0.41s-0.65s. The accuracy and loss function values of the action recognition method gradually converge to 1 and 0, respectively, as the number of iterations grows. For the recognition of seven types of actions, the accuracy of raising the left hand, raising the right hand, waving the left hand, and waving the right hand reaches 100%, and the recall rate of raising the right hand is 100%. The majority of action types have F-value scores above 0.9. Relative to the recurrent neural network model, the double-layer long short-term memory model achieves an accuracy of 95.8%, significantly better than the former&#39;s 86.3%, showing better recognition performance. In summary, modeling and identifying the behavior of artificial intelligence animated characters can make the characters in virtual broadcasting more intelligent, natural, and realistic, thereby improving the viewing experience of virtual broadcasting. This has significant practical and research value, providing insightful references for related fields.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_121-AI_Animation_Character_Behavior_Modeling_and_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Stitching Method and Implementation for Immersive 3D Ink Element Animation Production</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410120</link>
        <id>10.14569/IJACSA.2023.01410120</id>
        <doi>10.14569/IJACSA.2023.01410120</doi>
        <lastModDate>2023-10-30T10:49:52.7170000+00:00</lastModDate>
        
        <creator>Chen Yang</creator>
        
        <creator>Siti Salmi Jamali</creator>
        
        <creator>Adzira Husain</creator>
        
        <creator>Nianyou Zhu</creator>
        
        <creator>Jian Wen</creator>
        
        <subject>Immersive; 3D; ink element animation; image stitching; stereo matching algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the growth of immersive 3D animation, its application in ink element animation is constantly being updated and advanced. However, current immersive 3D ink element animation production suffers from a lack of innovation and from repeated development, so this research innovatively designs and develops an image stitching method for immersive 3D ink element animation production. The method is built on a stereo matching algorithm and the scale-invariant feature transform algorithm, and the stereo matching algorithm is optimized with a weighted median filtering method based on the guide map. In addition, the study designs the specific implementation of this method across different functional modules. The experimental results show that on four different datasets, the error percentages of the optimized stereo matching algorithm in non-occluded areas are 0.3885%, 0.4743%, 1.6848%, and 1.34%, respectively. The error percentages over all areas are 0.8316%, 0.8253%, 4.3235%, and 4.1760%, respectively. The image stitching method designed in this research can be applied in other fields and has good practical significance.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_120-Image_Stitching_Method_and_Implementatio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Fruit Grading in Precise Agriculture using You Only Look Once Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410119</link>
        <id>10.14569/IJACSA.2023.01410119</id>
        <doi>10.14569/IJACSA.2023.01410119</doi>
        <lastModDate>2023-10-30T10:49:52.7030000+00:00</lastModDate>
        
        <creator>Weiwei Zhang</creator>
        
        <subject>Precise agriculture; automated fruit grading; deep learning; computer vision; Yolov5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In the realm of precision agriculture, the automated grading of fruits stands as a critical endeavor, serving to maintain consistent quality assessment and streamline the sorting process. Traditional methods based on computer vision and deep learning techniques have both been explored extensively in the context of fruit grading, with the latter gaining prominence due to its superior performance. However, the existing research landscape in the domain of deep learning-based fruit grading confronts a compelling challenge: striking a balance between accuracy and computational cost. This challenge has been consistently noted through an extensive analysis of prior studies. In response, this study introduces an innovative approach built upon the YOLOv5 algorithm. This methodology encompasses the creation of a bespoke dataset and the division of data into training, validation, and testing sets, facilitating the training of a robust and computationally efficient model. The findings of the experiments and the subsequent performance evaluation underscore the effectiveness of the proposed method. This approach yields significant improvements in both accuracy and computational efficiency, thus addressing the ongoing challenge in deep learning-based fruit grading. Therefore, this study contributes valuable insights into the field of automated fruit grading, offering a promising solution to the trade-off between accuracy and computational cost while demonstrating the practical viability of the YOLOv5-based approach.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_119-Automated_Fruit_Grading_in_Precise_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of the False Data Injection Cyberattacks on the Internet of Things by using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410118</link>
        <id>10.14569/IJACSA.2023.01410118</id>
        <doi>10.14569/IJACSA.2023.01410118</doi>
        <lastModDate>2023-10-30T10:49:52.6870000+00:00</lastModDate>
        
        <creator>Henghe Zheng</creator>
        
        <creator>Xiaojing Chen</creator>
        
        <creator>Xin Liu</creator>
        
        <subject>Cyberattacks; false data injection (FDI) attacks; internet of things (IoT); deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the expanding use of cyber-physical systems and communication networks, cyberattacks have become a serious threat in various networks, including Internet of Things (IoT) sensor networks. State estimation algorithms play an important role in determining the present operational scenario of IoT sensors. False data injection (FDI) attacks are a serious menace to these estimation strategies (adopted by the operators of IoT sensors), injecting malicious data into the acquired measurements. Real-time recognition of this class of attacks increases network resilience while ensuring secure network operation. This paper presents a new method for real-time FDI attack detection that uses a deep-learning-based state prediction method along with a new intrusion identification approach based on the error covariance matrix. The architecture of the presented method, along with its optimal set of meta-parameters, yields a real-time, scalable, effective state prediction method with a minimal error bound. The results show that the proposed method outperforms some recent literature on predicting the remaining useful life (RUL) using the C-MAPSS dataset. Next, two types of false data injection attacks are modeled, and their effectiveness is evaluated using the proposed method. The results show that FDI attacks, even on a small number of IoT sensors, can severely disrupt RUL prediction in all instances. In addition, our proposed model remains accurate and flexible under FDI attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_118-Identification_of_the_False_Data_Injection_Cyberattacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of YOLO-based Model for Fall Detection in IoT Smart Home Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410117</link>
        <id>10.14569/IJACSA.2023.01410117</id>
        <doi>10.14569/IJACSA.2023.01410117</doi>
        <lastModDate>2023-10-30T10:49:52.6700000+00:00</lastModDate>
        
        <creator>Pengcheng Gao</creator>
        
        <subject>Smart home; IoT; elderly care; computer vision; deep learning; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In smart home applications, effective fall detection is a critical concern to minimize the occurrence of falls leading to injuries, especially for the assistance of elderly individuals. Various methods have been proposed, including both vision-based and non-vision-based approaches. Among these, vision-based approaches have garnered significant attention from researchers due to their practicality and applicability. However, existing vision-based methods face challenges such as low accuracy rates and high computational costs, which still need further exploration to enhance fall detection effectiveness. This study aims to develop an accurate, lightweight, vision-based fall detection method tailored for smart home care applications and applicable on IoT platforms. A You Only Look Once (YOLO) based network is trained and tested to identify human falls accurately. The experimental results demonstrate that the developed YOLO-based technique shows promising outcomes for human fall detection and holds potential for integration in Internet of Things (IoT) enabled smart home applications.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_117-Development_of_YOLO_based_Model_for_Fall_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wrapper-based Modified Binary Particle Swarm Optimization for Dimensionality Reduction in Big Gene Expression Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410116</link>
        <id>10.14569/IJACSA.2023.01410116</id>
        <doi>10.14569/IJACSA.2023.01410116</doi>
        <lastModDate>2023-10-30T10:49:52.6570000+00:00</lastModDate>
        
        <creator>Hend S. Salem</creator>
        
        <creator>Mohamed A. Mead</creator>
        
        <creator>Ghada S. El-Taweel</creator>
        
        <subject>Alzheimer disease; big gene expression; binary particle swarm optimization; deep learning; dimensionality reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Gene expression data has emerged as a crucial aspect of big data in genomics. The advent of high-throughput technologies such as microarrays and next-generation sequencing has enabled the generation of extensive gene expression data. These datasets are characterized by their complexity, fast data generation, diversity, and high dimensionality. Analyzing high dimensional gene expression data offers both challenges and opportunities. Computational intelligence and deep learning techniques have been employed to extract meaningful information from these enormous datasets. However, the challenges related to preprocessing, reducing dimensionality, and normalization continue to exist. This study explored the effectiveness of the Wrapper-based Modified Binary Particle Swarm Optimization (WMBPSO) algorithm in reducing dimensionality of big gene expression data for Alzheimer’s disease (AD) prediction, using the GSE33000 dataset. The reduced dataset was then used as input to a CNN-LSTM model for prediction. The WMBPSO method identified 4303 genes out of a total of 39280 genes as being relevant for AD. These genes were selected based on their discriminatory power and potential contribution to the classification task, achieving an accuracy score of 0.98. The performance of the CNN-LSTM model was evaluated using these selected genes, and the results were highly promising. The results of our analysis are 0.968 for mean cross-validation accuracy, 0.995 for AUC, and 0.967 for recall, precision, and F1 score. Importantly, our approach outperforms conventional feature selection methods and alternative machine and deep learning algorithms. By addressing the critical challenge of dimensionality reduction in gene expression data, our study contributes to advancing the field of AD prediction and underscores the potential for improved diagnosis and patient care.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_116-Wrapper_based_Modified_Binary_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Production of Valuable Metabolites using a Hybrid of Constraint-based Model and Machine Learning Algorithms: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410115</link>
        <id>10.14569/IJACSA.2023.01410115</id>
        <doi>10.14569/IJACSA.2023.01410115</doi>
        <lastModDate>2023-10-30T10:49:52.6400000+00:00</lastModDate>
        
        <creator>Kauthar Mohd Daud</creator>
        
        <creator>Ridho Ananda</creator>
        
        <creator>Suhaila Zainudin</creator>
        
        <creator>Chan Weng Howe</creator>
        
        <subject>Flux balance analysis; genome-scale metabolic model; machine learning; metabolic engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The advances in genome sequencing and metabolic engineering have allowed the reengineering of the cellular function of an organism. Furthermore, given the abundance of omics data, data collection has increased considerably, thus shifting the perspective of molecular biology. Therefore, researchers have recently used artificial intelligence and machine learning tools to simulate and improve the reconstruction and analysis by identifying meaningful features from the large multiomics dataset. This review paper summarizes research on the hybrid of constraint-based models and machine learning algorithms in optimizing valuable metabolites. The research articles published between 2020 and 2023 on machine learning and constraint-based modeling have been collected, synthesized, and analyzed. The articles are obtained from the Web of Science and Scopus databases using the keywords: “Machine learning”, “flux balance analysis”, and “metabolic engineering”. At the end of the search, this review contained 13 records. This review paper aims to provide current trends and approaches in in silico metabolic engineering while providing research directions by highlighting the research gaps. In addition, we have discussed the methodology for integrating machine learning and constraint-based modeling approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_115-Optimizing_the_Production_of_Valuable_Metabolites_using_a_Hybrid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Text Generation Techniques on Neural Image Captioning: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410114</link>
        <id>10.14569/IJACSA.2023.01410114</id>
        <doi>10.14569/IJACSA.2023.01410114</doi>
        <lastModDate>2023-10-30T10:49:52.6400000+00:00</lastModDate>
        
        <creator>Linna Ding</creator>
        
        <creator>Mingyue Jiang</creator>
        
        <creator>Liming Nie</creator>
        
        <creator>Zuzhang Qing</creator>
        
        <creator>Zuohua Ding</creator>
        
        <subject>Image captioning; encoder-decoder; text generation techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Image captioning is an advanced NLP task that has various practical applications. To meet the requirement of visual information understanding and textual information generation, the encoder-decoder framework has been widely adopted by image captioning models. In this context, the encoder is responsible for transforming an image into a vector representation, and the decoder acts as a text generator for yielding an image caption. It is obvious and intuitive that the decoder is crucial for the entire image captioning model. However, there is a lack of comprehensive studies in which the impact of various aspects of the decoder on image captioning is investigated. To advance the understanding of the impacts of text generation techniques employed by the decoder, we conduct an extensive empirical analysis of three types of language models, two types of decoding strategies, and two types of training methods, based on four state-of-the-art image captioning models. Our experimental results demonstrate that the language model affects the performance of image captioning models, while different language models may benefit different image captioning models. In addition, it is also revealed that among the decoding and training strategies under investigation, beam search, the AOA mechanism, and the reinforcement learning based training method can generally improve the performance of image captioning models. Moreover, the results also show that the combinational usage of these strategies always outperforms the use of a single strategy for the task of image captioning.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_114-The_Impact_of_Text_Generation_Techniques_on_Neural_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-based End-to-End Object Detection in Aerial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410113</link>
        <id>10.14569/IJACSA.2023.01410113</id>
        <doi>10.14569/IJACSA.2023.01410113</doi>
        <lastModDate>2023-10-30T10:49:52.6230000+00:00</lastModDate>
        
        <creator>Nguyen D. Vo</creator>
        
        <creator>Nguyen Le</creator>
        
        <creator>Giang Ngo</creator>
        
        <creator>Du Doan</creator>
        
        <creator>Do Le</creator>
        
        <creator>Khang Nguyen</creator>
        
        <subject>Object detection; aerial images; end-to-end; transformer-based; DETR; DAB-DETR; DINO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Transformer models have achieved significant milestones in the field of Artificial Intelligence in recent years, primarily in text processing and natural language processing. However, the application of these models in the domain of image processing, particularly to aerial image data, is an active area of research. This study concentrates on the experimental evaluation of Transformer-based models such as DETR, DAB-DETR, and DINO on the challenging VisDrone dataset, which is also essential for aerial image data processing. The experimental results indicate that Transformer-based models exhibit substantial potential, especially for object detection on aerial image data. Nevertheless, their application is not without challenges, including low resolution, dense object occurrences, and environmental noise. This work provides an initial glimpse into both the capabilities and limitations of Transformer-based approaches within this domain, with the aim of stimulating further development and optimization for practical applications, including traffic monitoring, environmental protection, and various other domains.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_113-Transformer_based_End_to_End_Object_Detection_in_Aerial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Classification of Diseases on Leaves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410112</link>
        <id>10.14569/IJACSA.2023.01410112</id>
        <doi>10.14569/IJACSA.2023.01410112</doi>
        <lastModDate>2023-10-30T10:49:52.6100000+00:00</lastModDate>
        
        <creator>Quy Thanh Lu</creator>
        
        <subject>Classification of diseases on leaves; transfer learning; finetuning; image classification; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, significant advancements have been made in the realm of plant disease classification, with a particular focus on leveraging the capabilities of deep learning techniques. This study delves into the utilization of renowned Convolutional Neural Network (CNN) models, including EfficientNetB5, MobileNet, ResNet50, InceptionV3, and VGG16, for the purpose of plant disease classification. The core methodology involves transfer learning, wherein these established CNN models serve as a foundation and are subsequently fine-tuned on a publicly accessible plant disease dataset. The study also compares the results with those of other deep learning models and with the state of the art. Among the tested CNNs, EfficientNetB5 showed the best performance, outperforming the other models with a classification accuracy of 99.2%.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_112-An_Approach_for_Classification_of_Diseases_on_Leaves.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Small Dummy Disrupting Database Reconstruction in a Cache Side-Channel Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410111</link>
        <id>10.14569/IJACSA.2023.01410111</id>
        <doi>10.14569/IJACSA.2023.01410111</doi>
        <lastModDate>2023-10-30T10:49:52.5930000+00:00</lastModDate>
        
        <creator>Hyeonwoo Han</creator>
        
        <creator>Eun-Kyu Lee</creator>
        
        <creator>Junghee Jo</creator>
        
        <subject>Attack; cache attack; side-channel attack; security; database reconstruction; privacy; clique-finding; database volume</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This paper demonstrates the feasibility of a database reconstruction attack on open-source database engines and presents a defense method against it. We launch a Flush+Reload attack on SQLite, which returns the approximate, noisy volumes returned by range queries over a private database. Given the volumes, our database reconstruction uses two algorithms, a Modified Clique-Finding algorithm and a Match-Extension algorithm, to recover the database. Experiments show that an attacker can reconstruct the victim’s database with a size of 10,000 and a range of 12 with an error rate of at most 0.07%. To mitigate the attack, a small amount of dummy data is added to the result volumes of range queries, which further confounds the approximation. Experimental results show that by adding about 1% of dummy data, the attack success rate (in terms of the number of reconstructed volumes in the database) is reduced from 100% to 60% and the error rate increases from 0.07% to 15%. It is also observed that by adding about 2% of dummy data, the reconstruction fails completely.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_111-A_Small_Dummy_Disrupting_Database_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting and Unmasking AI-Generated Texts through Explainable Artificial Intelligence using Stylistic Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410110</link>
        <id>10.14569/IJACSA.2023.01410110</id>
        <doi>10.14569/IJACSA.2023.01410110</doi>
        <lastModDate>2023-10-30T10:49:52.5770000+00:00</lastModDate>
        
        <creator>Aditya Shah</creator>
        
        <creator>Prateek Ranka</creator>
        
        <creator>Urmi Dedhia</creator>
        
        <creator>Shruti Prasad</creator>
        
        <creator>Siddhi Muni</creator>
        
        <creator>Kiran Bhowmick</creator>
        
        <subject>Detecting AI generated text; computer generated text; AI generated text; text classification; machine learning; pattern recognition; Stylistic features; Explainable AI; Lime; Shap</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, Artificial Intelligence (AI) has significantly transformed various aspects of human activities, including text composition. Advancements in AI technology have enabled computers to generate text that closely mimics human writing, raising concerns about misinformation, identity theft, and security vulnerabilities. To address these challenges, understanding the underlying patterns of AI-generated text is essential. This research focuses on uncovering these patterns to establish ethical guidelines for distinguishing between AI-generated and human-generated text, and contributes to the ongoing discourse on AI-generated content by elucidating methodologies for distinguishing between human and machine-generated text. The research delves into parameters such as syllable count, word length, sentence structure, functional word usage, and punctuation ratios to detect AI-generated text. Furthermore, the research integrates Explainable AI (xAI) techniques, LIME and SHAP, to enhance the interpretability of machine learning model predictions. The model demonstrated excellent efficacy, achieving an accuracy of 93%. Leveraging xAI techniques further revealed that pivotal attributes such as Herdan’s C, MaaS, and Simpson’s Index played a dominant role in the classification process.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_110-Detecting_and_Unmasking_AI_Generated_Texts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Grained Differences-Similarities Enhancement Network for Multimodal Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410109</link>
        <id>10.14569/IJACSA.2023.01410109</id>
        <doi>10.14569/IJACSA.2023.01410109</doi>
        <lastModDate>2023-10-30T10:49:52.5630000+00:00</lastModDate>
        
        <creator>Xiaoyu Wu</creator>
        
        <creator>Shi Li</creator>
        
        <creator>Zhongyuan Lai</creator>
        
        <creator>Haifeng Song</creator>
        
        <creator>Chunfang Hu</creator>
        
        <subject>Fake news detection; social media; pre-training model; multimodal; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The use of social media has proliferated dramatically in recent years due to its increasing reach and ease of use. Along with this enlarged influence of social media platforms and the relative anonymity afforded to content contributors, an increasingly significant proportion of social media is composed of untruthful or “fake” news. Hence, for various reasons of personal and national security, it is essential to be able to identify and eliminate fake news sources. The automated detection of fake news is complicated by the fact that most news posts on social media take very diverse forms, including text, images, and videos. Most existing multimodal fake news detection models are structurally complex and not interpretable; the main reason for this is the difficulty of identifying the essential features which characterize fake social media posts, leading to different models focusing on multiple different aspects of the news detection task. In this paper, we show that contrasting the different and similar (DS) features of social media posts serves as an important identifying marker for their authenticity, with the consequence that we only need to direct our attention to this aspect when designing a multimodal fake news detector. To address this challenge, we propose the Fine-Grained Differences-Similarities Enhancement Network (FG-DSEN), which improves detection with a simple and interpretable structure to enhance the DS aspect between images and text. Our proposed method was evaluated on two social media datasets in different languages, Weibo in Chinese and Twitter in English. It achieved accuracies 3% and 3.8% higher than other state-of-the-art methods, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_109-Fine_Grained_Differences_Similarities_Enhancement_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Steganography: Hiding Secret Messages in Images using Quantum Circuits and SIFT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410107</link>
        <id>10.14569/IJACSA.2023.01410107</id>
        <doi>10.14569/IJACSA.2023.01410107</doi>
        <lastModDate>2023-10-30T10:49:52.5470000+00:00</lastModDate>
        
        <creator>Hassan Jameel Azooz</creator>
        
        <creator>Khawla Ben Salah</creator>
        
        <creator>Monji Kherallah</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <subject>Clustering; keypoints; k-means; cover image; quantum steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In today’s era of escalating digital threats and the growing need for safeguarding sensitive information, this research strives to advance the field of information concealment by introducing a pioneering steganography methodology. Our approach goes beyond the conventional boundaries of image security by seamlessly integrating classical image processing techniques with the cutting-edge realm of quantum encoding. The foundation of our technique lies in the meticulous identification of distinctive features within the cover image, a crucial step achieved through the utilization of SIFT (Scale-Invariant Feature Transform). These identified key points are further organized into coherent clusters employing the K-means clustering algorithm, forming a structured basis for our covert communication process. The core innovation of this research resides in the transformation of the concealed message into a NEQR (Novel Enhanced Quantum Representation) code, a quantum encoding framework that leverages the power of quantum circuits. This transformative step ensures not only the secrecy but also the integrity of the hidden information, making it highly resistant to even the most sophisticated decryption attempts. The strategic placement of the quantum circuit representing the concealed message at the centroids of the clusters generated by the K-means algorithm conceals it within the cover image seamlessly. This fusion of classical image processing and quantum encoding results in an unprecedented level of security for the embedded information, rendering it virtually impervious to unauthorized access. Empirical findings from extensive experimentation affirm the robustness and efficacy of our proposed strategy.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_107-Quantum_Steganography_Hiding_Secret_Messages_in_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Interactive Data Visualization System in Three-Dimensional Immersive Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410108</link>
        <id>10.14569/IJACSA.2023.01410108</id>
        <doi>10.14569/IJACSA.2023.01410108</doi>
        <lastModDate>2023-10-30T10:49:52.5470000+00:00</lastModDate>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Homaira Adiba</creator>
        
        <creator>Tamzid Hossain</creator>
        
        <creator>Aloke Kumar Saha</creator>
        
        <creator>Rashik Rahman</creator>
        
        <subject>Immersive space; data visualization; VR system; python API; unity 3D</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Today’s data-driven environments require innovative tools and methods to analyze and present data. The growth of data across many domains and remarkable technological advances have necessitated a shift from 2D data representations. The rapid growth in dataset scale, variety, and speed has revealed the limitations of conventional charts and graphs. Significant progress has been made in the domain of interactive, three-dimensional data visualizations as a means to address this challenge. The integration of Virtual Reality (VR) and Augmented Reality (AR) technologies enables users to achieve a heightened level of immersion in a simulated environment, where data is transformed into physical, interactive entities. Recent research in the domain of immersive analytics has provided evidence that VR and AR technologies possess the capacity to provide succinct multiple layouts, facilitate collaborative data exploration, enable immersive multiview maps, establish spatial environments, enhance spatial memory, and enable interactions in three dimensions. The primary aim of this research is to design and implement a sophisticated data visualization system that integrates the development of a data pipeline within the Unity 3D framework, with the specific goal of aggregating data. The resulting system enables the presentation of data from CSV files within a three-dimensional immersive environment. The prospective ramifications of this development have the capacity to yield positive effects in diverse domains, including e-commerce analysis, financial services, engineering technology, medical services, data analysis, and interactive data display, among others. The proposed system presents a methodical framework for the development of a 3D data visualization system that integrates VR technologies, Unity, and Python, with the aim of redefining the process of data exploration within a VR environment. This paper also examines the integration of continuous testing methodologies within the context of the Python API and VR environments, allowing for the creation of an engaging and immersive experience that meets user needs.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_108-Development_of_Interactive_Data_Visualization_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigate the Impact of Stemming on Mauritanian Dialect Classification using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410106</link>
        <id>10.14569/IJACSA.2023.01410106</id>
        <doi>10.14569/IJACSA.2023.01410106</doi>
        <lastModDate>2023-10-30T10:49:52.5300000+00:00</lastModDate>
        
        <creator>Mohamed El Moustapha El Arby CHRIF</creator>
        
        <creator>Cheikhane Seyed</creator>
        
        <creator>Cheikhne Mohamed Mahmoud</creator>
        
        <creator>EL BENANY Mohamed Mahmoud</creator>
        
        <creator>Fatimetou Mint Mohamed-Saleck</creator>
        
        <creator>Moustapha Mohamed Saleck</creator>
        
        <creator>Omar EL BEQQALI</creator>
        
        <creator>Mohamedade Farouk NANNE</creator>
        
        <subject>Machine learning; Natural Language Processing; Arabic text classification; HASSANIYA dialect; Weka; stemming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Despite the plethora and diversity of research on Natural Language Processing (NLP), a technique that allows computers to understand, generate, and manipulate human language, it still remains insufficient, especially with regard to the processing of Arabic texts and their widely used dialects. The proposed approach focuses on the application of machine learning techniques, taking into account evaluation criteria such as training, to comments expressed in the Mauritanian dialect and published on social media, notably Facebook, and compares the results generated by three algorithms: Random Forest (RF), Naïve Bayes Multinomial (NBM), and Logistic Regression (LR). We then study the effect of machine learning techniques when different stemmers are combined with other features, such as the tokenizers used to process the dataset. Although major challenges exist, such as the morphology of Arabic being completely different from that of Latin-letter languages and the absence of a pre-existing dataset or dictionary to train the algorithms, the results obtained from the experiments carried out in Weka show that the RF and NBM algorithms are more efficient when applied with ArabicStemmerKhoja, giving results of 96.37% and 71.40%, respectively, while Logistic Regression obtains its best performance, 81.65%, with the Null Stemmer. Results obtained by the three techniques applied with a light Arabic stemmer were above 70%. This article presents a contribution to NLP based on machine learning and also describes an important study that can help determine the best Arabic classifier.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_106-Investigate_the_Impact_of_Stemming_on_Mauritanian_Dialect_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gamification in Physical Activity: State-of-the-Art</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410105</link>
        <id>10.14569/IJACSA.2023.01410105</id>
        <doi>10.14569/IJACSA.2023.01410105</doi>
        <lastModDate>2023-10-30T10:49:52.5170000+00:00</lastModDate>
        
        <creator>Majed Hariri</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Physical activity; gamification; gamified systems; gamification and motivation; state-of-the-art</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Physical activity is decreasing globally, and more people are becoming sedentary, which is associated with numerous adverse health outcomes. To counter this trend, gamification emerges as a promising strategy for enhancing participation in physical activity interventions. The review investigates the influence of gamified systems on the promotion of physical activity and examines associated behavioral and psychological outcomes. The analysis incorporates empirical studies focused on adult participants, published in peer-reviewed English-language journals over the last five years. Several critical aspects are considered in the analysis, including specific types of physical activity targeted, employed gamification systems, involved motivational features, and behavioral and psychological outcomes, thus offering a state-of-the-art overview of gamification and physical activity. Findings confirm that gamification serves as an effective mechanism for promoting physical activity. To address gaps in existing research, recommendations for future work include broadening the range of metrics used for measuring physical activity and investigating the psychological benefits of gamification in physical activity interventions. Moreover, future research could benefit from leveraging addictive game design elements and utilizing artificial intelligence and computer vision models to monitor user progress and suggest appropriate challenges. In conclusion, the review outlines the considerable potential of gamification to positively affect participation in physical activity, highlighting the need for additional research to fully realize this potential.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_105-Gamification_in_Physical_Activity_State_of_the_Art.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Keyphrase Distance Analysis Technique from News Articles as a Feature for Keyphrase Extraction: An Unsupervised Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410104</link>
        <id>10.14569/IJACSA.2023.01410104</id>
        <doi>10.14569/IJACSA.2023.01410104</doi>
        <lastModDate>2023-10-30T10:49:52.5000000+00:00</lastModDate>
        
        <creator>Mohammad Badrul Alam Miah</creator>
        
        <creator>Suryanti Awang</creator>
        
        <creator>Md Mustafizur Rahman</creator>
        
        <creator>A. S. M. Sanwar Hosen</creator>
        
        <subject>Curve fitting technique; data pre-processing; data processing; feature extraction; KDA technique; keyphrase extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Due to the rapid expansion of information and online sources, automatic keyphrase extraction remains an important and challenging problem in current research. The use of keyphrases is extremely beneficial for many tasks, including information retrieval (IR) systems and natural language processing (NLP). It is essential to extract the features of those keyphrases for extracting the most significant keyphrases as well as summarizing the texts to the highest standard. In order to analyze the distance between keyphrases in news articles as a feature of keyphrases, this research proposes a region-based unsupervised keyphrase distance analysis (KDA) technique. The proposed method is broken down into eight steps: gathering data, data preprocessing, data processing, searching keyphrases, distance calculation, averaging distance, curve plotting, and lastly, the curve fitting technique. The proposed approach begins by gathering two distinct datasets containing the news items, which are then used in the data preprocessing step, which makes use of a few preprocessing techniques. This preprocessed data is then employed in the data processing phase, where it is routed to the keyphrase searching, distance computation, and distance averaging phases. Finally, the curve fitting method is applied after a curve plotting analysis. These two benchmark datasets are then used to evaluate and test the performance of the proposed approach. The proposed approach is then contrasted with different approaches to show how effective, advantageous, and significant it is. The results of the evaluation also proved that the proposed technique considerably improved the efficiency of keyphrase extraction techniques. It produces an F1-score of 96.91%, while the rate of present keyphrases is 94.55%.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_104-Keyphrase_Distance_Analysis_Technique_from_News_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Human Interactions in Still Images using AdaptiveDRNet with Multi-level Attention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410103</link>
        <id>10.14569/IJACSA.2023.01410103</id>
        <doi>10.14569/IJACSA.2023.01410103</doi>
        <lastModDate>2023-10-30T10:49:52.4830000+00:00</lastModDate>
        
        <creator>Arnab Dey</creator>
        
        <creator>Samit Biswas</creator>
        
        <creator>Dac-Nhoung Le</creator>
        
        <subject>Human interaction recognition; still images; adaptiveDRNet; multi level attention; human interactions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Human-Human Interaction Recognition (H2HIR) is a multidisciplinary field that combines computer vision, deep learning, and psychology. Its primary objective is to decode and understand the intricacies of human-human interactions. H2HIR holds significant importance across various domains as it enables machines to perceive, comprehend, and respond to human social behaviors, gestures, and communication patterns. This study aims to identify human-human interactions from just one frame, i.e. from an image. Diverging from the realm of video-based interaction recognition, a well-established research domain that relies on the utilization of spatio-temporal information, the complexity of the task escalates significantly when dealing with still images due to the absence of these intrinsic spatio-temporal features. This research introduces a novel deep learning model called AdaptiveDRNet with Multi-level Attention to recognize Human-Human (H2H) interactions. Our proposed method demonstrates outstanding performance on the Human-Human Interaction Image dataset (H2HID), encompassing 4049 meticulously curated images representing fifteen distinct human interactions, and on the publicly accessible HII and HIIv2 related benchmark datasets. Notably, our proposed model excels with a validation accuracy of 97.20% in the classification of human-human interaction images, surpassing the performance of EfficientNet, InceptionResNetV2, NASNet Mobile, ConvXNet, ResNet50, and VGG-16 models. H2H interaction recognition’s significance lies in its capacity to enhance communication, improve decision-making, and ultimately contribute to the well-being and efficiency of individuals and society as a whole.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_103-Recognition_of_Human_Interactions_in_Still_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive System for Managing Blood Resources Leveraging Blockchain, Smart Contracts, and Non-Fungible Tokens</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410101</link>
        <id>10.14569/IJACSA.2023.01410101</id>
        <doi>10.14569/IJACSA.2023.01410101</doi>
        <lastModDate>2023-10-30T10:49:52.4670000+00:00</lastModDate>
        
        <creator>Khiem H. G</creator>
        
        <creator>Huong H. L</creator>
        
        <creator>Phuc N. T</creator>
        
        <creator>Khoa T. D</creator>
        
        <creator>Khanh H. V.</creator>
        
        <creator>Quy L. T</creator>
        
        <creator>Ngan N. T. K</creator>
        
        <creator>Triet N. M</creator>
        
        <creator>Kha N. H</creator>
        
        <creator>Anh N. T</creator>
        
        <creator>Trong. V. C. P</creator>
        
        <creator>Bang L. K</creator>
        
        <creator>Hieu D. M</creator>
        
        <creator>Bao T. Q</creator>
        
        <subject>Blood donation; blockchain; ethereum; blood products supply chain; smart contract; NFT; fantom; polygon; binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The escalating demand for blood and its derivatives in the medical field underpins its indispensable nature for disease diagnosis and therapy. Such essential life-giving components are irreplaceable, necessitating a continuous reliance on voluntary blood donors. Existing methodologies primarily address the challenges of blood storage and its logistical distribution among healthcare centers. These conventional strategies lean towards centralized systems, often compromising data transparency and accessibility. Notably, there remains a significant gap in incentivizing and raising awareness among potential and existing donors regarding the life-saving act of blood donation. Recognizing these challenges, we introduce a robust and innovative framework that harnesses the potential of Blockchain technology, coupled with the power of smart contracts. Furthermore, to foster a sustainable blood donation ecosystem, we advocate the shift from traditional paper-based recognition to digitized donor acknowledgment using Non-Fungible Tokens (NFTs). Our novel approach encapsulates four key areas: (a) Introduction of a supply chain oversight mechanism for blood and its derivatives through Blockchain and smart contracts; (b) Development of a digital certification system for blood donors utilizing NFTs; (c) Execution of our suggested framework via smart contracts, offering a tangible proof-of-concept; and (d) Assessment and implementation of the proof-of-concept across four prominent platforms: ERC721 (ETH’s NFT), and the Ethereum Virtual Machine (EVM) employing the Solidity language – this encompasses the BNB Smart Chain, Fantom, Polygon, and Celo, aiming to discern the optimal platform compatible with our innovative framework.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_101-A_Comprehensive_System_for_Managing_Blood_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced CoD System Leveraging Blockchain, Smart Contracts, and NFTs: A New Approach for Trustless Transactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410102</link>
        <id>10.14569/IJACSA.2023.01410102</id>
        <doi>10.14569/IJACSA.2023.01410102</doi>
        <lastModDate>2023-10-30T10:49:52.4670000+00:00</lastModDate>
        
        <creator>Phuc N. T</creator>
        
        <creator>Khanh H. V</creator>
        
        <creator>Khoa T. D</creator>
        
        <creator>Khiem H. G</creator>
        
        <creator>Huong H. L</creator>
        
        <creator>Ngan N. T. K</creator>
        
        <creator>Triet N. M</creator>
        
        <creator>Kha N. H</creator>
        
        <creator>Anh N. T</creator>
        
        <creator>Trong. V. C. P</creator>
        
        <creator>Bang L. K</creator>
        
        <creator>Hieu D. M</creator>
        
        <creator>Quy L. T</creator>
        
        <subject>Letter-of-credit; cash-on-delivery; blockchain; smart contract; NFT; Ethereum; Fantom; polygon; binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The global transportation of goods has evolved in response to varied economic demands. The rapid progression of modern scientific and technological innovations offers a shift from traditional shipping paradigms. Current systems, whether domestic like Cash-on-Delivery (CoD) or international such as Letter-of-Credit, necessitate trust-building through an intermediary—be it a carrier or a financial institution. While these conventional systems provide certain benefits, they inherently present several challenges and potential vulnerabilities, affecting both sellers and buyers. The introduction of blockchain technology and smart contracts has been explored as a viable alternative to bypass these intermediaries. However, simply removing the shipping intermediary presents its own set of issues, particularly when disputes arise. Notably, the shipper remains unaffected in situations of contention. Consequently, some models are now incorporating the shipper’s role, either as a singular entity or in collaboration with others. Yet, a considerable number of these models still depend on an external trusted party for conflict resolution. Our study introduces a unique framework, blending the robustness of blockchain, the enforceability of smart contracts, and the authenticity assurance of NFTs. This system creates a streamlined CoD operation encompassing the seller, shipper, and buyer, using NFTs to produce digital receipts, guaranteeing both proof-of-purchase and a security deposit. Furthermore, our system provides an inherent mechanism for dispute resolution. 
Key contributions of our work include i) The design of a novel CoD system anchored on blockchain and smart contract capabilities; ii) The incorporation of Ethereum-based NFT (specifically, ERC721) for securely logging package information; iii) The development of smart contracts that facilitate NFT generation and transfer between transactional entities; and iv) Performance evaluation and deployment of these contracts across multiple EVM-compatible platforms such as BNB Smartchain, Fantom, Celo, and Polygon, establishing the optimal environment for our innovative system.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_102-An_Enhanced_CoD_System_Leveraging_Blockchain_Smart_Contracts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Metaheuristic Algorithm for Edge Site Deployment with User Coverage Maximization and Cost Minimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01410100</link>
        <id>10.14569/IJACSA.2023.01410100</id>
        <doi>10.14569/IJACSA.2023.01410100</doi>
        <lastModDate>2023-10-30T10:49:52.4530000+00:00</lastModDate>
        
        <creator>Xiaodong Xing</creator>
        
        <creator>Ying Song</creator>
        
        <creator>Bo Wang</creator>
        
        <subject>Edge computing; edge deployment; GA; PSO; metaheuristic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, edge computing has attracted increasing attention due to its ultra-low-delay service delivery. Plenty of works have focused on improving the performance of edge computing through, e.g., edge server deployment, edge caching, and task offloading. However, there is a lack of work on reducing the investment cost of building or upgrading edge infrastructure by deciding at which places edge sites should be deployed. In this paper, we focus on the edge site deployment problem (ESDP), which aims to maximize user coverage with the fewest edge sites. We first formulate ESDP as a binary nonlinear program with the two optimization objectives of user coverage maximization and edge site minimization, and prove that ESDP is NP-complete. Then, we propose a hybrid metaheuristic algorithm with polynomial time complexity to solve ESDP, which combines the crossover and mutation operators of the genetic algorithm with the self- and social-cognition of particle swarm optimization. Finally, we conduct extensive simulated experiments based on a real data set to evaluate the performance of the proposed algorithm. The results show that our algorithm achieves 100% user coverage with far fewer edge sites than seven other metaheuristic algorithms, and has good scalability.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_100-A_Hybrid_Meta_heuristic_Algorithm_for_Edge_Site_Deployment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Dyslexia Through Images of Handwriting using Hybrid AI Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141099</link>
        <id>10.14569/IJACSA.2023.0141099</id>
        <doi>10.14569/IJACSA.2023.0141099</doi>
        <lastModDate>2023-10-30T10:49:52.4370000+00:00</lastModDate>
        
        <creator>Norah Dhafer Alqahtani</creator>
        
        <creator>Bander Alzahrani</creator>
        
        <creator>Muhammad Sher Ramzan</creator>
        
        <subject>Dyslexia; dyslexia detection; deep learning; dyslexia classification; CNN model; SVM model; random forest model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Dyslexia is a neurodevelopmental disorder characterized by difficulties in acquiring reading skills, despite the presence of appropriate learning opportunities, sufficient education, and a suitable sociocultural context. Dyslexia negatively affects children&#8217;s educational development and their acquisition of language and writing. Early detection of dyslexia is therefore of great importance. The prediction of dyslexia through handwriting has been an active research field for almost five years. In this paper, we propose hybrid models (CNN-SVM and CNN-RF) to detect dyslexia through images of handwriting. We develop a CNN model to extract features from handwriting images, since CNNs are highly reliable at extracting features from images, and use SVM as the classifier in CNN-SVM, due to its generalization abilities, and random forest (RF) as the classifier in CNN-RF. The study combines a deep learning (DL) model with machine learning (ML) classifiers to improve model performance. A dataset consisting of 176,673 images of handwriting was used in this study. The hyperparameters of the model were adjusted and examined in order to classify the three categories of handwriting. The CNN model achieved an outstanding accuracy of 98.71% in effectively categorizing three distinct types of handwriting, 99.33% with SVM, and 98.44% with the CNN-RF model. The aim of recognizing dyslexic handwriting through CNN-SVM was successfully attained, and our model outperformed all previous models.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_99-Detection_of_Dyslexia_Through_Images_of_Handwriting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Deep Learning Algorithms for Forecasting Indian Stock Market Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141098</link>
        <id>10.14569/IJACSA.2023.0141098</id>
        <doi>10.14569/IJACSA.2023.0141098</doi>
        <lastModDate>2023-10-30T10:49:52.4200000+00:00</lastModDate>
        
        <creator>Mrinal Kanti Paul</creator>
        
        <creator>Purnendu Das</creator>
        
        <subject>Stock prediction; machine learning technique; deep learning; stock market; National Stock Exchange</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This research underscores the vital significance of providing investors with timely and dependable information within the dynamic landscape of today’s stock market. It delves into the expanding utilization of data science and machine learning methods for anticipating stock market movements. The study conducts a comprehensive analysis of past research to pinpoint effective predictive models, with a specific focus on widely acknowledged algorithms. By employing an extensive dataset spanning 27 years of NIFTY 50 index data from the National Stock Exchange (NSE), the research facilitates a thorough comparative investigation. The primary goal is to support both investors and researchers in navigating the intricate domain of stock market prediction. Stock price prediction is challenging due to numerous influencing factors, and identifying the optimal deep learning model and parameters is a complex task. This objective is accomplished by harnessing the capabilities of deep learning, thereby contributing to well-informed decision-making and the efficient utilization of predictive tools. The paper scrupulously examines prior contributions from fellow researchers in stock prediction and implements established deep learning algorithms on the NIFTY 50 dataset to assess their predictive accuracy. The study extensively analyzes NIFTY 50 data to anticipate market trends. It employs three distinct deep learning models—RNN, SLSTM, and BiLSTM. The results underscore SLSTM as the most effective model for predicting the NIFTY 50 index, achieving an impressive accuracy of 99.10%. It’s worth noting that the accuracy of BiLSTM falls short when compared to RNN and SLSTM.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_98-A_Comparative_Study_of_Deep_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Surroundings Awareness using Different Visual Parameters in Virtual Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141097</link>
        <id>10.14569/IJACSA.2023.0141097</id>
        <doi>10.14569/IJACSA.2023.0141097</doi>
        <lastModDate>2023-10-30T10:49:52.4070000+00:00</lastModDate>
        
        <creator>Fatma E. Ibrahim</creator>
        
        <creator>Neven A. M. Elsayed</creator>
        
        <creator>Hala H. Zayed</creator>
        
        <subject>Virtual reality; visual distraction; attention; awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Due to the popularity of digital games, there is a growing interest in using games as therapeutic interventions. The ability of games to capture attention can be beneficial to distract patients from pain. In this paper, we investigate the impact of visual parameters (color, shapes, and animation) on users’ awareness of their surroundings in virtual reality. We conducted a user study in which experiments included a visual search task using a virtual reality game. Through the game, the participants were asked to find a target among distraction objects. The results showed that the different visual representations of the target among distraction objects could affect the users’ awareness of their surroundings. The least awareness of the surroundings occurred when the target and distractors shared similar features. Further, the conjunction of low similarity between distractors-distractors and high similarity between target-distractors provided less awareness of the surroundings. Additionally, results revealed that there is a strong positive correlation between search time and awareness of the surroundings. Less awareness of the surroundings while playing a game implies that users are positively engaged in that game. These results offered a set of criteria that can be applied to future virtual reality interventions for medical pain distraction.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_97-Measuring_Surroundings_Awareness_using_Different_Visual_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Cloud Workflow Scheduling with Inverted Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141096</link>
        <id>10.14569/IJACSA.2023.0141096</id>
        <doi>10.14569/IJACSA.2023.0141096</doi>
        <lastModDate>2023-10-30T10:49:52.3900000+00:00</lastModDate>
        
        <creator>Hongwei DING</creator>
        
        <creator>Ying ZHANG</creator>
        
        <subject>Cloud computing; workflow scheduling; virtualization; task allocation; swarm intelligence; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cloud computing has risen as a prominent paradigm, offering users on-demand access to computing resources and services via the Internet. In cloud environments, workflow scheduling plays a vital role in optimizing resource utilization, reducing execution time, and minimizing overall costs. As workflows comprise interdependent tasks that need to be assigned to Virtual Machines (VMs), the complexity of the scheduling problem increases with workflow size and VM availability. Due to its NP-hard nature, finding an optimal scheduling solution for workflows remains a challenging task. To address this problem, researchers have turned to metaheuristic approaches, which have shown promise in finding near-optimal solutions for complex combinatorial optimization problems. This paper proposes a novel metaheuristic algorithm called Inverted Ant Colony Optimization (IACO) for workflow scheduling in cloud environments. IACO is a variation of the traditional ACO algorithm in which the updated pheromone has an inverted influence on the path chosen by the ants. By leveraging the complementary nature of the standard and inverted pheromone mechanisms, our proposed algorithm aims to achieve superior workflow scheduling performance in terms of total execution time and cost, surpassing existing approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_96-Efficient_Cloud_Workflow_Scheduling_with_Inverted_Ant_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS and Energy-aware Resource Allocation in Cloud Computing Data Centers using Particle Swarm Optimization Algorithm and Fuzzy Logic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141095</link>
        <id>10.14569/IJACSA.2023.0141095</id>
        <doi>10.14569/IJACSA.2023.0141095</doi>
        <lastModDate>2023-10-30T10:49:52.3900000+00:00</lastModDate>
        
        <creator>Yu Wang</creator>
        
        <creator>Lin Zhu</creator>
        
        <subject>Cloud computing; resource allocation; scheduling; PSO; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cloud computing has become a viable option for many organizations due to its flexibility and scalability in providing virtualized resources via the Internet. It offers the possibility of hosting pervasive applications in the consumer, scientific, and business domains utilizing a pay-as-you-go model. This makes cloud computing a cost-effective solution for businesses, as it eliminates the need for large investments in hardware and software infrastructure. Furthermore, cloud computing enables organizations to quickly and easily scale their services to meet the demands of their customers. Resource allocation is a major challenge in cloud computing; it is an NP-hard problem and can be solved using metaheuristic algorithms. This study optimizes resource allocation using the Particle Swarm Optimization (PSO) algorithm and a fuzzy logic system, developed under the proposed time and cost models in the cloud computing environment. Receiving, processing, and waiting time are included in the time model; the cost model incorporates processing and receiving costs. Two experiments demonstrate the performance of the proposed algorithm. The simulation results show the potential of our mechanism, with improved performance over previous approaches in aspects such as providers&#39; total income, users&#39; total revenue, resource utilization, and energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_95-QoS_and_Energy_aware_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Integrated Aquila Optimizer for Efficient Service Composition with Quality of Service Guarantees in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141094</link>
        <id>10.14569/IJACSA.2023.0141094</id>
        <doi>10.14569/IJACSA.2023.0141094</doi>
        <lastModDate>2023-10-30T10:49:52.3730000+00:00</lastModDate>
        
        <creator>Xiaofei Liu</creator>
        
        <subject>Cloud computing; service composition; Particle Swarm Optimization; Aquila optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The prompt evolution of cloud computing technology has given rise to the emergence of countless cloud-based services. However, guaranteeing Quality of Service (QoS) awareness in service composition poses a substantial difficulty in cloud computing. A solitary service cannot effectively handle the complicated requests and varied demands of real-world situations. In some instances, one service alone may not be enough to fulfill users&#39; particular requirements, prompting the integration of several services to satisfy these needs. As an NP-hard problem, service composition has been addressed using many metaheuristic algorithms. In this context, the proposed methodology presents a new blended technique, referred to as Integrated Aquila Optimizer (IAO), which amalgamates conventional Aquila Optimizer (AO) and Particle Swarm Optimization (PSO) algorithm. The central objective of this hybridization is to tackle the shortcomings confronted by both AO and PSO algorithms. Specifically, these algorithms are known to get stuck in local search areas and show limited solution variety. To address these challenges, the proposed method introduces a novel transition mechanism that facilitates suitable adjustments between the search operators, ensuring continual improvements in the solutions. The transition mechanism allows the algorithm to switch between AO and PSO when any of them gets stuck or when the diversity of solutions decreases. This adaptability enhances the overall performance and effectiveness of the hybrid approach. The proposed IAO method is exhaustively tested through experiments conducted using the Cloudsim simulation platform. The numerical findings confirm the effectiveness of the suggested approach regarding dependability, accessibility, and expenses, which are essential factors of cloud computing.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_94-Hybrid_Integrated_Aquila_Optimizer_for_Efficient_Service_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach for Emotion Classification in Bengali Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141093</link>
        <id>10.14569/IJACSA.2023.0141093</id>
        <doi>10.14569/IJACSA.2023.0141093</doi>
        <lastModDate>2023-10-30T10:49:52.3600000+00:00</lastModDate>
        
        <creator>Md. Rakibul Islam</creator>
        
        <creator>Amatul Bushra Akhi</creator>
        
        <creator>Farzana Akter</creator>
        
        <creator>Md Wasiul Rashid</creator>
        
        <creator>Ambia Islam Rumu</creator>
        
        <creator>Munira Akter Lata</creator>
        
        <creator>Md. Ashrafuzzaman</creator>
        
        <subject>XgBoost; gradient boosting; CatBoost; random forest; MFCC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In this research work, we have presented a machine learning strategy for Bengali speech emotion categorization with a focus on Mel-frequency cepstral coefficients (MFCC) as features. The commonly utilized method of MFCC in speech processing has proved effective in obtaining crucial phoneme-specific data. This paper analyzes the efficacy of four machine learning algorithms: Random Forest, XGBoost, CatBoost, and Gradient Boosting, and tackles the paucity of research on emotion categorization in non-English languages, particularly Bengali. With CatBoost obtaining the greatest accuracy of 82.85%, Gradient Boosting coming in second with 81.19%, XGBoost coming in third with 80.03%, and Random Forest coming in fourth with 80.01%, experimental evaluation shows encouraging outcomes. MFCC features improve classification precision and offer insightful information on the distinctive qualities of emotions expressed in Bengali speech. By demonstrating how well MFCC characteristics can identify emotions in Bengali speech, this study advances the field of emotion classification. Future research can investigate more sophisticated feature extraction methods, look into how temporal dynamics are incorporated into emotion classification models, and investigate practical uses for emotion detection systems in Bengali speech. This study advances our knowledge of emotion classification and paves the way for more effective emotion identification systems in Bengali speech by utilizing MFCC and machine learning techniques. Our work addresses the need for thorough and efficient techniques to recognize and classify emotions in speech signals in the context of emotion categorization. Understanding emotions is essential for many applications, as they are a basic component of human communication. 
By investigating cutting-edge strategies that show promise for enhancing the precision and effectiveness of emotion recognition, this study advances the field of emotion classification.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_93-A_Machine_Learning_Approach_for_Emotion_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Practice of Virtual Reality Technology in Animation Production</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141092</link>
        <id>10.14569/IJACSA.2023.0141092</id>
        <doi>10.14569/IJACSA.2023.0141092</doi>
        <lastModDate>2023-10-30T10:49:52.3430000+00:00</lastModDate>
        
        <creator>He Huixuan</creator>
        
        <creator>Xiang Yuan</creator>
        
        <subject>Virtual reality technology; animation production; 3D modeling; OpenGL; geometric modeling; real time roaming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>To improve the viewing experience for animation audiences, an innovative practice of virtual reality technology in animation production is proposed. According to the object structure information, the method uses 3ds Max software to produce 3D animated character models. It extracts texture features of the character prototype through boundary contour extraction, image top-hat transformation, and discrete grid projection. OpenGL texture mapping is used to map textures onto the 3D animated character models, and after boundary optimization of the texture seams, the best 3D animated character modeling effect is obtained. Geometric modeling technology and DOF nodes are used to build static and dynamic scene entity models, completing the construction of a 3D animation scene. A space-based interactive visualization platform is introduced to visualize the interactive animation scene, with the animation scene regarded as the image base. Points of equal arc length are selected along the curve, and the camera is switched in combination with the roaming speed to realize real-time roaming of 3D animation scenes, completing the innovative practical application of virtual reality technology in animation production. Experimental results show that this method improves the smoothness, integrity, and authenticity of the animation, improves the smoothness of motion, and ensures the real-time roaming effect.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_92-Innovative_Practice_of_Virtual_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Hybrid A* Algorithm of Path Planning for Hotel Service Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141091</link>
        <id>10.14569/IJACSA.2023.0141091</id>
        <doi>10.14569/IJACSA.2023.0141091</doi>
        <lastModDate>2023-10-30T10:49:52.3430000+00:00</lastModDate>
        
        <creator>Xiaobing Cao</creator>
        
        <creator>Yicen Xu</creator>
        
        <creator>Yonghong Yao</creator>
        
        <creator>Chenbo Zhi</creator>
        
        <subject>Path planning; bidirectional A* algorithm; JPS algorithm; security weight square matrix; cubic spline interpolation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Due to the increasing demand for unmanned services in the hotel industry in recent years, how to efficiently use hotel service robots to further improve the efficiency of the hotel industry has become a hot research issue. To solve the problems of lengthy path-finding time and poor route security of conventional service robots in complex environments, the current study provides an improved A* path-finding algorithm for application in the hotel environment. Firstly, the conventional A* algorithm is combined with bidirectional search and the Jump Point Search (JPS) algorithm, which makes the search more effective. Secondly, the traditional A* algorithm is combined with a security weight square matrix to make the path trajectory safer. Cubic spline interpolation is chosen to smooth the transitions at the corners of the paths planned by the improved A* algorithm. Simulation experiments were conducted on grid maps of sizes 10&#215;10, 20&#215;20, and 50&#215;50. Compared with the conventional A* algorithm, the search time decreased by 67%, 77%, and 95%, respectively, and the number of search nodes decreased by 80%, 76%, and 95%, respectively. Meanwhile, the distance between the robot and the obstacles increased. The results indicate that the improved A* algorithm suggested in the present research makes the path trajectory safer while keeping the path search efficient.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_91-An_Improved_Hybrid_A_Algorithm_of_Path_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CDCA: Transparent Cache Architecture to Improve Content Delivery by Internet Service Providers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141090</link>
        <id>10.14569/IJACSA.2023.0141090</id>
        <doi>10.14569/IJACSA.2023.0141090</doi>
        <lastModDate>2023-10-30T10:49:52.3270000+00:00</lastModDate>
        
        <creator>Alwi M Bamhdi</creator>
        
        <subject>Content caching; content delivery network; content search algorithm; information-centric networking; multimedia; network function virtualization; software defined networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The popularity of on-demand multimedia such as video streaming services has been rapidly increasing the overall Internet traffic volume in the world. As of the beginning of 2023, almost 82% of this global Internet traffic came from video transmission through on-demand online services, trending towards changing the Internet paradigm from location-based to content-based and culminating in a new paradigm of Information-Centric Networking (ICN). ICN focuses on content distribution based on name rather than location, allowing Internet Service Providers (ISPs) to implement local content caching systems for faster delivery, reduced transmission delays, and unnoticeable jitter or distortion. ICN can be implemented over a Software-Defined Networking (SDN) infrastructure. SDN enables flexible programming and implementation of packet forwarding rules within a network domain seamlessly. This paper proposes a hybrid architecture that combines ICN and SDN to create a transparent in-network caching system for content distribution over the traditional IP network. The architecture aims to improve the performance of Video-on-Demand (VoD) services for customers while efficiently utilizing network provider resources. A prototype called CDCA was developed and evaluated in a Mininet emulation environment. The evaluation results demonstrate that the CDCA hybrid caching architecture enhances VoD service performance and optimizes network resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_90-CDCA_Transparent_Cache_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cold Chain Logistics Path Planning and Design Method based on Multi-source Visual Information Fusion Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141089</link>
        <id>10.14569/IJACSA.2023.0141089</id>
        <doi>10.14569/IJACSA.2023.0141089</doi>
        <lastModDate>2023-10-30T10:49:52.3130000+00:00</lastModDate>
        
        <creator>Ke XUE</creator>
        
        <creator>Bing Han</creator>
        
        <subject>Multi-source visual information fusion; cold chain logistics road; path planning; ant colony optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Complete cold chain logistics is needed to maintain temperature control of refrigerated and frozen food throughout the closed environment, storage, and transportation, including when goods are loaded and unloaded. Studying how to optimize vehicle scheduling and reduce transportation time and cost is therefore very important. The research object of this paper is the path planning of urban cold chain logistics. The paper considers cold chain distribution with multiple coexisting vehicles, builds an integer programming model, designs a targeted ACO (Ant Colony Optimization) solution model, and verifies it with an example. Based on multi-source visual information fusion technology, independent heterogeneous data sources are accessed through cloud computing resource integration to establish unified data integration middleware. The pheromone update model selected in this paper is the ant-cycle model, which uses global information to record the optimal path of the ants. The results show that delivery-time satisfaction still lags: the average satisfaction of high-value key customers is only 55.1%, although this is 18.3% higher than that of planning that does not consider customer value. The method can provide a real-time optimized path within an effective time range and improve the efficiency of distribution services, which has certain theoretical significance and practical value.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_89-Cold_Chain_Logistics_Path_Planning_and_Design_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Scale Deep Learning-based Recurrent Neural Network for Improved Medical Image Restoration and Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141088</link>
        <id>10.14569/IJACSA.2023.0141088</id>
        <doi>10.14569/IJACSA.2023.0141088</doi>
        <lastModDate>2023-10-30T10:49:52.2970000+00:00</lastModDate>
        
        <creator>A. B. Pawar</creator>
        
        <creator>C Priya</creator>
        
        <creator>V. V. Jaya Rama Krishnaiah</creator>
        
        <creator>V. Antony Asir Daniel</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Multi-Scale Deep Learning (MSDL); Recurrent Neural Network (RNN); deep learning; medical image; Artificial Bee Colony (ABC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Improving medical image quality is essential for accurate diagnosis, treatment planning, and ongoing condition monitoring. A crucial step in many medical applications, the restoration of damaged input images, aims to recover lost high-quality data. Despite significant advancements in image restoration, two major problems remain. First, it is important to preserve spatial features, although doing so frequently results in the loss of related data. Second, while producing linguistically sound outputs is important, location accuracy can sometimes suffer. To overcome these issues and improve medical imaging, the Multi-Scale Deep Learning-based Recurrent Neural Network (MSDL-RNN) is proposed in this paper. In contrast to standard RNN-based techniques, which generally use both full-resolution and gradually reduced-resolution approximations, the model makes use of multiple scales during construction. This multi-scale approach uses deep learning to address problems including noise reduction, defect elimination, and improvement of overall image quality. Artificial Bee Colony Optimization is employed for efficient segmentation. By combining local and global data, the MSDL-RNN technique effectively enhances and restores a variety of medical imaging modalities. It generalizes the optimization strategy for model capacity assurance by incorporating crucial pre-processing methods tailored to various medical image types. The proposed approach was implemented in Python and achieves an accuracy of 99.23%, which is 4.33% higher than existing methods such as DesNet, AGNet, and NetB0. This study paves the way for important developments in improving the quality of medical images and their uses in healthcare.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_88-Multi_Scale_Deep_Learning_based_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Cloud Data Portability Frameworks for Analyzing Object to NoSQL Database Mapping from ONDM&#39;s Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141086</link>
        <id>10.14569/IJACSA.2023.0141086</id>
        <doi>10.14569/IJACSA.2023.0141086</doi>
        <lastModDate>2023-10-30T10:49:52.2800000+00:00</lastModDate>
        
        <creator>Salil Bharany</creator>
        
        <creator>Kiranbir Kaur</creator>
        
        <creator>Safaa Eltayeb Mohamed Eltaher</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Sandeep Sharma</creator>
        
        <creator>Mohammed Merghany Mohammed Abd Elsalam</creator>
        
        <subject>NoSQL; portability; cloud; middleware; platform as a service; platform services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cloud computing revolves around storing and retrieving data in a portable manner. However, practical data portability across multiple Database-as-a-Service (DBaaS) cloud data stores is challenging. This becomes even more complicated when data needs to be migrated between different types of data storage, such as SQL and NoSQL databases. NoSQL databases have gained significant popularity among developers due to their ability to provide high availability, fault tolerance, and scalability, making them suitable for managing big data in large-scale infrastructures. However, the varied data models of NoSQL databases make it difficult to migrate or port data among data repositories. Object-to-NoSQL database mappers (ONDMs) address this problem. However, only a few ONDMs are available for C#.NET development, and the ONDMs used in Java development could be more stable. To address this issue, we propose building a middleware solution using the .NET framework to support cloud data portability, leveraging the capabilities of ONDMs. In this study, we evaluate several frameworks and compare them to our proposed middleware solution through empirical research. Our middleware solution can perform both object-to-NoSQL database mapping (ONDM) and object-relational mapping (ORM).</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_86-A Comparative_Study_of_Cloud_Data_Portability_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Application of Antibacterial Drug Resistance Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141087</link>
        <id>10.14569/IJACSA.2023.0141087</id>
        <doi>10.14569/IJACSA.2023.0141087</doi>
        <lastModDate>2023-10-30T10:49:52.2800000+00:00</lastModDate>
        
        <creator>Wei Zhang</creator>
        
        <creator>Yanhua Zhang</creator>
        
        <creator>Qiang Zhang</creator>
        
        <creator>Caixia Xie</creator>
        
        <creator>Yan Chen</creator>
        
        <creator>Jingwei Lei</creator>
        
        <subject>Deep learning; antibacterial drug; CNN algorithm; pharmaceutical chemical analysis; drug resistance testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The continuous improvement of deep learning technology has led to its deeper application in related fields, especially in the detection of antimicrobial resistance in the medical field. In drug resistance detection, the CNN-ATT-TChan model, which fuses a CNN with an attention mechanism, can classify and organize large amounts of antimicrobial resistance data, achieving standardized processing. Drug resistance test data were obtained using mature chemical analysis and testing methods, and the training duration and classification accuracy F of the model were examined in combination with the test data. At the same time, based on relevant research literature, the changes in ROC curves and AUC values between different models were compared. The results showed that the CNN with a fused attention mechanism can reduce the training time of the model while improving its classification accuracy. Therefore, the application of the attention-based CNN-ATT-TChan model in the detection of antimicrobial resistance provides further support for the development of antimicrobial resistance testing.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_87-Analysis_and_Application_of_Antibacterial_Drug_Resistance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Syntax Dependency with Lexicon and Logistic Regression for Aspect-based Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141085</link>
        <id>10.14569/IJACSA.2023.0141085</id>
        <doi>10.14569/IJACSA.2023.0141085</doi>
        <lastModDate>2023-10-30T10:49:52.2670000+00:00</lastModDate>
        
        <creator>Mohammad Mashrekul Kabir</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Mohd Ridzwan Yaakub</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <subject>Aspect-based Sentiment Analysis; dependency parsing; lexicon; customer review; opinion mining; hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Aspect-based Sentiment Analysis (ABSA) is a fine-grained form of sentiment analysis (SA) that greatly benefits customers and the real world. ABSA of customer reviews has become a trendy topic because of the profuse information shared through these reviews. While SA, also known as opinion mining, helps to identify opinions, ABSA greatly impacts the business world by converting these reviews into a finer-grained form with aspects and opinions or sentiments. Review words are interwoven internally in a way that depends on semantics as well as syntax, and sometimes there are long dependencies. Recently, hybrid methods for ABSA have become popular, but most of them merely consider whether syntax and long dependencies exist, thus missing the inclusion of multi-word and infrequent aspects. In addition, in most of the literature, sentiment classification is performed directly without calculating sentiment scores in ABSA. To this effect, this paper proposes a hybrid of syntax dependency and the lexicon for aspect and sentiment extraction, with polarity classification by a Logistic Regression (LR) classifier, to overcome these issues in ABSA. The proposed method addresses the challenges of ABSA in a number of ways. First, it can extract multi-word and infrequent aspects by using syntactic dependency information. Second, it can calculate sentiment scores, which provides a more nuanced understanding of the overall sentiment expressed towards an aspect. Third, it can capture long dependencies between words by using syntactic dependency and semantic information. The proposed hybrid model outperformed the other methods by an average of 8-10 percent in accuracy on a standard public dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_85-Hybrid_Syntax_Dependency_with_Lexicon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Routing Using Petal Ant Colony Optimization for Mobile Ad-hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141084</link>
        <id>10.14569/IJACSA.2023.0141084</id>
        <doi>10.14569/IJACSA.2023.0141084</doi>
        <lastModDate>2023-10-30T10:49:52.2500000+00:00</lastModDate>
        
        <creator>Sathyaprakash B. P</creator>
        
        <creator>Manjunath Kotari</creator>
        
        <subject>Petal ant routing; dynamic petal ant routing; MANET; ant colony optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>A Mobile Ad-hoc Network (MANET) is a temporary wireless network that configures itself as needed. Each MANET node has a finite number of resources and serves as both a node and a router at the same time. MANET nodes are mobile and move from one location to another. Because MANET nodes are dynamic, choosing an optimal node for data transfer is a difficult problem. Since packets must propagate in a multi-hop manner, they take longer paths and may endure longer delays, causing them to become lost in the network. The network’s overall performance suffers as a result of the re-transmission of those lost packets. In this research study, we propose a modified version of a nature-inspired algorithm called Petal Ant based Dynamic Routing (PADR), which restricts data packets to traverse within a given region and achieves minimal delay during data transmission. PADR is simulated in Network Simulator (NS2) and compared against nature-inspired routing protocols such as PAR and SARA, as well as traditional routing protocols such as AODV.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_84-Dynamic_Routing_Using_Petal_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Teaching Evaluation System for Ensuring Data Integrity and Anonymity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141083</link>
        <id>10.14569/IJACSA.2023.0141083</id>
        <doi>10.14569/IJACSA.2023.0141083</doi>
        <lastModDate>2023-10-30T10:49:52.2330000+00:00</lastModDate>
        
        <creator>Md. Mijanur Rahman</creator>
        
        <creator>Uttam Kumar Saha</creator>
        
        <creator>Shohedul Islam</creator>
        
        <creator>Sanjida Akhter</creator>
        
        <subject>Blockchain; student feedback system; faculty performance evaluation; anonymity; smart contract; ethereum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The significance of student feedback within educational institutions cannot be overstated, as it serves as a pivotal tool for evaluating faculty performance and identifying potential gaps in course content. Blockchain technology has emerged as an increasingly promising solution for diverse digital applications, owing to its distinctive attributes and robust security features. This study explores the use of blockchain technology for secure student feedback systems in education, specifically for analyzing faculty performance in a course. A noteworthy challenge that plagues existing feedback systems is their inability to ensure complete anonymity, leading to students&#39; hesitancy in providing candid and honest feedback. Furthermore, these conventional systems often rely on databases for data storage, rendering them susceptible to tampering and data breaches. In response to these pressing concerns, the present paper proposes a comprehensive and innovative solution. The crux of the proposed approach is the implementation of a blockchain-based student feedback system designed to guarantee both student anonymity and tamper-proof data storage, thereby facilitating the evaluation of teaching effectiveness. By leveraging an Ethereum-based blockchain, a secure and trusted platform is established, catering to the sensitive realm of student feedback in a confidential and tamper-resistant manner. A user-friendly web application is developed to complement the proposed system, documenting the implementation process, the Smart Contract, and the project code. Notably, this feedback system provides an invaluable layer of security, fostering heightened user trust and engendering an environment conducive to genuine and authentic evaluations.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_83-Blockchain_based_Teaching_Evaluation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigations of Modified Functional Connectivity at Rest in Drug-Resistant Temporal Lobe Epilepsy Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141082</link>
        <id>10.14569/IJACSA.2023.0141082</id>
        <doi>10.14569/IJACSA.2023.0141082</doi>
        <lastModDate>2023-10-30T10:49:52.2170000+00:00</lastModDate>
        
        <creator>Deepa Nath</creator>
        
        <creator>Anil Hiwale</creator>
        
        <creator>Nilesh Kurwale</creator>
        
        <creator>C. Y. Patil</creator>
        
        <subject>Temporal Lobe Epilepsy (TLE); resting-state Functional Magnetic Resonance Imaging (rs-fMRI); Functional Connectivity (FC); Blood Oxygen Level-Dependent (BOLD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In this experimental study, patients with temporal lobe epilepsy (TLE) and controls were compared for functional connectivity (FC) using resting-state functional magnetic resonance imaging (rs-fMRI). This work examines these alterations to better understand the abnormal brain activity of individuals suffering from TLE during the resting state. The major objective of this study is to investigate FC-related alterations in the resting state to fully comprehend the complex nature of epilepsy. It is observed that FC is altered in specific regions in patients with left-sided and right-sided TLE compared to controls. Using rs-fMRI, the right-sided TLE patient group is found to have altered hippocampal networks compared to the right-side control group. There are considerable differences between the left and right regions of controls and the groups with left- and right-sided mesial temporal hippocampal sclerosis. Compared to left-side control brain regions, the left-side TLE group exhibits reduced connectivity between the anterior cingulate gyrus and the affected hippocampus, and increased regional connectivity between the affected hippocampus and the posterior cingulate cortex region of the default mode network.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_82-Investigations_of_Modified_Functional_Connectivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Lightweight Deep Learning Model in Landscape Architecture Planning and Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141081</link>
        <id>10.14569/IJACSA.2023.0141081</id>
        <doi>10.14569/IJACSA.2023.0141081</doi>
        <lastModDate>2023-10-30T10:49:52.2030000+00:00</lastModDate>
        
        <creator>Linyu Zhang</creator>
        
        <subject>Deep learning; landscape architecture; landscape element; neural network; artificial neural network; view planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The holistic view of garden construction is first reflected in the integration of the elements that make up the garden: primary and secondary elements are distinguished from the perspective of the whole city, the continuation of upper-level planning, coordination with surrounding groups, and the harmony of the internal gardening elements. The primary goal of ANN (artificial neural network) learning here is to understand drawings and to convert information such as plant numbers and positions in digital drawings into standard digital formats for storage. The SSD (Single Shot Multi-box Detector) network model places at its front end a standard image-classification architecture, called the base network, which is fused for comprehensive detection. This paper proposes a network model flow for a 3D object voxel modeling method based on a lightweight DL (deep learning) model. A recurrent 2D encoder, a recurrent 3D decoder, and a view planner are integrated into a unified framework responsible for feature extraction and fusion, feature decoding, and view planning. The results show that the pixel accuracy, average accuracy, and average IU value are the highest among the compared methods, with a pixel accuracy of 90.44%, an average accuracy of 93.15%, and an average IU value of 92.72%. In landscape image processing, this provides a foundation for future landscape planning and design.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_81-Application_of_Lightweight_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Multidimensional Reference Model for Heterogeneous Textual Datasets using Context, Semantic and Syntactic Clues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141080</link>
        <id>10.14569/IJACSA.2023.0141080</id>
        <doi>10.14569/IJACSA.2023.0141080</doi>
        <lastModDate>2023-10-30T10:49:52.1870000+00:00</lastModDate>
        
        <creator>Ganesh Kumar</creator>
        
        <creator>Shuib Basri</creator>
        
        <creator>Abdullahi Abubakar Imam</creator>
        
        <creator>Abdullateef Oluwagbemiga Balogun</creator>
        
        <creator>Hussaini Mamman</creator>
        
        <creator>Luiz Fernando Capretz</creator>
        
        <subject>Reference model; computational linguistics; heterogeneous data; context clues; semantic clues; syntactic clues</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the advent of technology and the use of the latest devices, voluminous data are produced. Of these data, 80% are unstructured and the remaining 20% are structured or semi-structured. The produced data are heterogeneous in format and do not follow any standard. Among heterogeneous (structured, semi-structured, and unstructured) data, textual data are nowadays used by industries for the prediction and visualization of future challenges. Extracting useful information from such data is challenging for stakeholders due to the difficulty of lexical and semantic matching. A few studies have addressed this issue using ontologies and semantic tools, but the main limitation of that work is the limited coverage of multidimensional terms. To solve this problem, this study aims to produce a novel multidimensional reference model (MRM) using linguistic categories for heterogeneous textual datasets. The linguistic categories, namely context, semantic, and syntactic clues, are considered along with their scores. The main contribution of the MRM is that it checks each token against each term based on an indexing of linguistic categories such as synonym, antonym, formal, lexical word order, and co-occurrence. The experiments show that the MRM outperforms the state-of-the-art single-dimension reference model in terms of coverage, linguistic categories, and heterogeneous datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_80-A_Novel_Multidimensional_Reference_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating Natural Language Processing into Virtual Assistants: An Intelligent Assessment Strategy for Enhancing Language Comprehension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141079</link>
        <id>10.14569/IJACSA.2023.0141079</id>
        <doi>10.14569/IJACSA.2023.0141079</doi>
        <lastModDate>2023-10-30T10:49:52.1700000+00:00</lastModDate>
        
        <creator>Franciskus Antonius</creator>
        
        <creator>Purnachandra Rao Alapati</creator>
        
        <creator>Mahyudin Ritonga</creator>
        
        <creator>Indrajit Patra</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>Manikandan Rengarajan</creator>
        
        <subject>Natural language processing; virtual assistants; smart evaluation approach; artificial intelligence; human-computer interactions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The study introduces a comprehensive technique for enhancing the Natural Language Processing (NLP) capabilities of virtual assistant systems. The method addresses the challenges of efficient information transfer and optimizing model size while ensuring improved performance, with a primary focus on model pretraining and distillation. To tackle the issue of vocabulary size affecting model performance, the study employs the SentencePiece tokenizer with unigram settings. This approach allows for the creation of a well-balanced vocabulary, which is essential for striking the right balance between task performance and resource efficiency. A novel pre-layernorm design is introduced, drawing inspiration from models like BERT and RoBERTa; this design optimizes the placement of layer normalization within transformer layers during the pretraining phase. Teacher models are effectively trained using masked language modeling objectives and the DeepSpeed scaling framework. Modifications to model operations are made, and mixed-precision training strategies are explored to ensure stability. The two-stage distillation method efficiently transfers knowledge from teacher models to student models: it begins with an intermediate model, and the knowledge is distilled carefully using logit and hidden-layer matching techniques. This knowledge transfer significantly enhances the final student model while maintaining an ideal model size for low-latency applications. In this approach, innovative measurements, such as the precision of filling a mask, are employed to assess the effectiveness and quality of the methods. The findings demonstrate substantial improvements over publicly available models, showcasing the effectiveness of the strategy within complete virtual assistant systems. The proposed approach confirms the potential of the technique to enhance language comprehension and efficiency within virtual assistants, specifically addressing the challenges posed by real-world user inputs. Through extensive testing and rigorous analysis, the capability of the method to meet these objectives is validated.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_79-Incorporating_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Utilization of Program Semantics in Extreme Code Summarization: An Experimental Study Based on Acceptability Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141077</link>
        <id>10.14569/IJACSA.2023.0141077</id>
        <doi>10.14569/IJACSA.2023.0141077</doi>
        <lastModDate>2023-10-30T10:49:52.1570000+00:00</lastModDate>
        
        <creator>Jiuli Li</creator>
        
        <creator>Yan Liu</creator>
        
        <subject>Extreme code summarization; program semantics utilization; acceptability analysis of code summary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the rise of deep learning methods, neural network architectures adopted from neural machine translation have been widely studied in code summarization by learning the sequential content of code. Given the inherent nature of programming languages, learning the representation of source code from parsed structural information is also a typical way to construct code summarization models. Recent studies show that the overall performance of neural models for code summarization can be improved by utilizing sequential and structural information in a hybrid manner. However, neither of these two kinds of information fed to the neural models embraces the semantics of source code snippets in an explicit way. Is it really a good idea to leave the semantics hidden in the source code and have the neural models capture whatever they can? To observe the utilization of program semantics in automatic code summarization, we conducted an experimental study by analyzing the acceptability of the extreme code summaries generated by neural models. To align the models in the same context for this experimental study and to focus on the observation of semantics, we re-implemented the neural models from three selected studies as extreme code summarization solutions. After an intuitive observation and exploration of the summaries generated by the models trained on a Java dataset, we identify five acceptability aspects: (1) function name format; (2) function naming style; (3) semantic-level similarity; (4) differences in the hitting rate of representative words; and (5) the correlation between extreme code summaries and the function body. Based on the false negative and false positive phenomena in the results, ablation experiments show that the use of program semantics has a positive effect on generating high-quality summaries in neural models. Our work demonstrates the potential of utilizing program semantics explicitly in code summarization, and possible future directions are also indicated.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_77-Exploring_the_Utilization_of_Program_Semantics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Topic in Summarization for Vietnamese Paragraph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141078</link>
        <id>10.14569/IJACSA.2023.0141078</id>
        <doi>10.14569/IJACSA.2023.0141078</doi>
        <lastModDate>2023-10-30T10:49:52.1570000+00:00</lastModDate>
        
        <creator>Dat Tien Dieu</creator>
        
        <creator>Dien Dinh</creator>
        
        <subject>Automatic text summarization; theme-based approach; BERT model; Latent Dirichlet Allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This article delves into the realm of refining the precision of automated text summarization by harnessing the underlying themes within documents. Our training data draws upon the VNDS dataset (A_Vietnamese_Dataset_for_Summarization), encompassing a total of 150,704 samples aggregated from diverse online news sources such as vnexpress.net, tuoitre.vn, and more. These articles have been meticulously processed to ensure they align with our training objectives and criteria. This paper presents a theme-oriented approach to text summarization, utilizing Latent Dirichlet Allocation to delineate the document&#39;s subject matter. The data were subsequently fed into the BERT model, which constitutes one of the subtasks within the broader domain of abstractive summarization: summarizing content based on pivotal concepts. The results attained, although modest, underscore the challenges we&#39;ve confronted. Consequently, our model necessitates further development and refinement to unlock its full potential.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_78-Using_Topic_in_Summarization_for_Vietnamese_Paragraph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Effective Method for Detecting and Tracking Moving Objects using Overlapping Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141076</link>
        <id>10.14569/IJACSA.2023.0141076</id>
        <doi>10.14569/IJACSA.2023.0141076</doi>
        <lastModDate>2023-10-30T10:49:52.1400000+00:00</lastModDate>
        
        <creator>Yuanyuan ZHANG</creator>
        
        <subject>Tracking; moving object detection; image processing; binary patterns; HSV histogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Overlay approaches for moving object detection and tracking have recently received attention as a crucial field for computer science and computer vision research. Using pixel overlap and visual attributes, these techniques enable the recognition and tracking of objects in videos. This article presents two features, color and edge, for the proposed method. The proposed approach uses the SED algorithm; since the edges contain less data than the entire image, this reduction of information makes processing faster. The color feature is the HSV (hue, saturation, and value) histogram, because it is close to human vision. The edges, however, contain important information because they delineate shapes for the human eye. These considerations lead to the choice of the histogram of gradient angles based on local binary patterns as the edge feature of the proposed system. There are two justifications for employing local binary patterns. First, the principal edges are emphasized by using local binary patterns. Second, the image produced by this method displays the image&#39;s texture; in other words, the shape feature is extracted from the context of the texture, which constitutes a form of feature combination. Several criteria were evaluated to assess the proposed tracking approach in comparison to related systems, the most significant being precision, recall, and similarity. In comparison to other works, the findings show improvements of 25% in precision, 17% in recall, and 12% in similarity.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_76-A_Cost_Effective_Method_for_Detecting_and_Tracking_Moving_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Surface Reconstruction from Unstructured Point Cloud Data for Building Digital Twin</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141075</link>
        <id>10.14569/IJACSA.2023.0141075</id>
        <doi>10.14569/IJACSA.2023.0141075</doi>
        <lastModDate>2023-10-30T10:49:52.1230000+00:00</lastModDate>
        
        <creator>F. A. Ismail</creator>
        
        <creator>S.A. Abdul Shukor</creator>
        
        <creator>N.A. Rahim</creator>
        
        <creator>R. Wong</creator>
        
        <subject>Surface reconstruction; point cloud; building reconstruction; 3D mesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This study highlights the methods used for surface reconstruction from unstructured point cloud data, characterized by their simplicity, robustness, and broad applicability. The input data consist of unstructured 3D point cloud data representing a building. The reconstruction methods tested here are the Poisson Reconstruction Algorithm, the Ball Pivoting Algorithm, the Alpha Shape Algorithm, and 3D surface refinement, employing mesh refinement through Laplacian Smoothing and Simple Smoothing techniques. Analysis of the algorithm parameters and their influence on reconstruction quality, as well as their impact on computational time, is discussed. The findings offer valuable insights into parameter behavior and its effects on computational efficiency and level of detail in the reconstruction process, contributing to enhanced 3D modeling and digital twins for buildings.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_75-Surface_Reconstruction_from_Unstructured_Point.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Role of Machine Learning Algorithms in Predicting Sepsis using Vital Sign Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141073</link>
        <id>10.14569/IJACSA.2023.0141073</id>
        <doi>10.14569/IJACSA.2023.0141073</doi>
        <lastModDate>2023-10-30T10:49:52.1100000+00:00</lastModDate>
        
        <creator>Amit Sundas</creator>
        
        <creator>Sumit Badotra</creator>
        
        <creator>Gurpreet Singh</creator>
        
        <creator>Amit Verma</creator>
        
        <creator>Salil Bharany</creator>
        
        <creator>Imtithal A. Saeed</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <subject>Machine learning; sepsis; vital sign; prediction; electronic health records</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Objective: In hospitals, sepsis is a common and costly condition, but machine learning systems that utilize electronic health records can enhance the timely detection of sepsis. The purpose of this research is to verify the effectiveness of a machine learning tool that makes use of a gradient boosted ensemble for sepsis diagnosis and prediction relative to commonly used scoring systems. The University of California, San Francisco (UCSF) Medical Center and the Medical Information Mart for Intensive Care (MIMIC) databases were consulted for historical information. The study encompassed adult patients who were admitted without sepsis and had at least one recording of six vital signs (SpO2, temperature, heart rate, respiratory rate, and systolic and diastolic blood pressure). Using the area under the receiver operating characteristic (AUROC) curve, the performance of the machine learning algorithm (MLA) was compared to commonly used scoring systems, and its accuracy was determined. Performance of the MLA was evaluated at sepsis onset, as well as 24 and 48 hours before onset. The AUROC for the MLA was 0.88, 0.84, and 0.83 for sepsis onset, 24 hours prior, and 48 hours prior, respectively. At the time of onset, these values were superior to those of SOFA, MEWS, qSOFA, and SIRS. Using UCSF data for training and MIMIC data for testing, the sepsis onset AUROC was 0.89. The MLA can safely predict sepsis up to forty-eight hours before it occurs, and its accuracy in detecting the onset of sepsis is higher than that of traditional instruments. When trained and evaluated on distinct datasets, the MLA maintains high performance for sepsis detection.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_73-Investigating_the_Role_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel and Efficient Point Cloud Registration by using Coarse-to-Fine Strategy Integrating PointNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141074</link>
        <id>10.14569/IJACSA.2023.0141074</id>
        <doi>10.14569/IJACSA.2023.0141074</doi>
        <lastModDate>2023-10-30T10:49:52.1100000+00:00</lastModDate>
        
        <creator>Chunxiang Liu</creator>
        
        <creator>Tianqi Cheng</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Mingchu Li</creator>
        
        <creator>Zhouqi Liu</creator>
        
        <creator>Lei Wang</creator>
        
        <subject>Point cloud registration; PointNet; coarse-fine registration; random sample consensus (RANSAC) algorithm; Lucas and Kanade (LK) algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The registration of point clouds plays a critical and fundamental role in the computer vision domain. Although quite good registration results have been obtained using global, local, and learning-based registration strategies, many problems remain. For example, local methods based on geometric features are very sensitive to attitude deviation, global shape-based methods easily produce inconsistency when distribution differences are obvious, and learning-based registration methods rely heavily on large amounts of labeled data. A novel and effective registration method for point cloud data, integrating a coarse-to-fine strategy with an improved PointNet network, is proposed to overcome the above-mentioned drawbacks and improve registration accuracy. An improved Random Sample Consensus (RANSAC) algorithm is developed to effectively handle the initial attitude deviation problem in the coarse registration procedure, an improved Lucas and Kanade (LK) algorithm based on the classical PointNet framework is proposed to reduce the errors of the fine registration, and the whole registration procedure is implemented within a trainable recurrent deep learning architecture. Compared with state-of-the-art point cloud registration methods, experimental results fully demonstrate that the proposed method can effectively handle significant attitude deviation and partial overlap, achieving stronger robustness and higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_74-A_Novel_and_Efficient_Point_Cloud_Registration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable Smart Home IoT to Open and Close the House Fence using a Scanning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141072</link>
        <id>10.14569/IJACSA.2023.0141072</id>
        <doi>10.14569/IJACSA.2023.0141072</doi>
        <lastModDate>2023-10-30T10:49:52.0930000+00:00</lastModDate>
        
        <creator>Heri Purwanto</creator>
        
        <creator>Rikky Wisnu Nugraha</creator>
        
        <creator>Fahmi Reza Ferdiansyah</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <creator>Rudy Sofian</creator>
        
        <creator>Muhammad Faridh Rizaldy</creator>
        
        <subject>Smart home; Internet of Things (IoT); Radio Frequency Identification (RFID); scanning; sustainable smart home; smart city; process innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>A home that is connected to the Internet allows all of its appliances and systems to communicate with one another via the Internet of Things (IoT), making it a component of a sustainable smart home. The issue addressed by this study is that some homes still use manual gates, which must be opened and closed by pushing the gate by hand. Considering that a building&#39;s gate is its primary form of security, this is viewed as less effective, and additional locks are required on the fence to overcome its frail defenses, which do not deter criminals. This project aims to create a smart home that uses the Internet to automate the opening and closing of home gates based on IoT. Prototyping is used as the software development strategy, while card objects are detected using a scanning method. The findings demonstrate that Radio Frequency Identification (RFID), connected to a smartphone as a communication medium between the device and the user, links the microcontroller and the stepper motor so that the home gate can be operated automatically. The test findings indicate that when the user taps the RFID card to drive the gate, the reaction time from the RFID to the stepper is between 7.35 and 10.10 seconds. The accuracy of reading RFID cards with the RFID reader is about 1 - 5 cm, which is the limitation of this study. According to the test findings, it can be said that the developed automatic fence control system increases the effectiveness of home security and allows direct control from a smartphone. Future research can use long-range RFID technology, with a reading distance of 5 - 12 meters and a radio frequency band refarming process of 800 - 900 MHz, for any smart home or smart building.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_72-Sustainable_Smart_Home_IoT_to_Open_and_Close_the_House_Fence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The GSO-Deep Learning-based Financial Risk Management System for Rural Economic Development Organization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141071</link>
        <id>10.14569/IJACSA.2023.0141071</id>
        <doi>10.14569/IJACSA.2023.0141071</doi>
        <lastModDate>2023-10-30T10:49:52.0770000+00:00</lastModDate>
        
        <creator>Weiliang Chen</creator>
        
        <subject>Deep learning; Glowworm Swarm Optimization (GSO) algorithm; Deep Neural Networks (DNN); financial risk prediction; rural economic development organization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Financial risk management has always been a key concern for major enterprises. At the same time, with the continuous attention to impoverished rural areas worldwide, financial risk management tools have become an important component of rural economic development organizations to avoid financial risks. With the rapid development of artificial intelligence technologies such as neural networks and deep learning, and due to their strong learning ability, high adaptability, and good portability, some financial risk management tools are gradually adopting technologies such as neural networks and machine learning. However, existing financial risk management tools based on neural networks are mostly developed for large enterprises such as banks or power grid companies, and cannot guarantee their full applicability to rural economic development organizations. Therefore, this study focuses on the financial risk management system used for rural economic development organizations. In order to improve the accuracy of deep learning algorithms in predicting financial risks, this paper designs an improved Glowworm Swarm Optimization (IGSO) algorithm to optimize Deep Neural Networks (DNN). Finally, the effectiveness of the financial risk management tool based on IGSO-DNN proposed in this article was fully validated using data from 45 rural economic development organizations as a test set.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_71-The_GSO_Deep_Learning_based_Financial_Risk_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep CNN for the Identification of Pneumonia Respiratory Disease in Chest X-Ray Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141069</link>
        <id>10.14569/IJACSA.2023.0141069</id>
        <doi>10.14569/IJACSA.2023.0141069</doi>
        <lastModDate>2023-10-30T10:49:52.0630000+00:00</lastModDate>
        
        <creator>Dias Nessipkhanov</creator>
        
        <creator>Venera Davletova</creator>
        
        <creator>Nurgul Kurmanbekkyzy</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <subject>X-Ray; deep learning; classification; respiratory disease; pneumonia; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Addressing the challenges of diagnosing lower respiratory tract infections, this study unveils the potential of Deep Convolutional Neural Networks (Deep CNN) as transformative tools in medical image interpretation. Our research presents a tailored Deep CNN model, optimized for distinguishing pneumonia in chest X-ray images, a task often complicated by subtle radiological differences. We utilized an extensive dataset comprising 12,000 chest X-rays, incorporating both pneumonia-affected and healthy samples. Through rigorous pre-processing, encompassing noise abatement, normalization, and data augmentation, a fortified training set emerged. This set was the basis for our Deep CNN, marked by intricate convolutional designs, planned dropouts, and modern activation functions. With 85% of the images used for training and the balance for validation, the model achieved an impressive 98.1% accuracy, surpassing preceding approaches. Crucially, specificity and sensitivity stood at 97.5% and 98.8%, highlighting the model&#39;s precision in separating pneumonia cases from healthy ones, thus reducing diagnostic errors. These results emphasize Deep CNN&#39;s transformative capability in pneumonia diagnosis via X-rays and suggest potential applications across various facets of medical imaging. However, as we champion these outcomes, we must carefully assess potential hurdles to clinical application, encompassing ethical deliberations, model scalability, and adaptability to ever-changing pulmonary disease profiles.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_69-Deep_CNN_for_the_Identification_of_Pneumonia_Respiratory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Digital Recognition Method Based on Improved SVD-DHNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141070</link>
        <id>10.14569/IJACSA.2023.0141070</id>
        <doi>10.14569/IJACSA.2023.0141070</doi>
        <lastModDate>2023-10-30T10:49:52.0630000+00:00</lastModDate>
        
        <creator>Xuemei Yao</creator>
        
        <creator>Jiajia Zhang</creator>
        
        <creator>Juan Wang</creator>
        
        <creator>Jiaying Wei</creator>
        
        <subject>Discrete Hopfield Neural Network; particle swarm optimization; singular value decomposition; digital recognition; sparse matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The Discrete Hopfield Neural Network (DHNN) is widely used in character recognition because of its associative memory. It is a fully connected neural network whose weight initialization is a random process. In order to fully exploit the associative memory of the DHNN and overcome the pseudo-stable points and complex structure caused by random initialization, an improved SVD-DHNN model is proposed. Firstly, the weights of the DHNN are optimized by the global search capability of Particle Swarm Optimization (PSO) to help the model escape pseudo-stable points; secondly, the weight matrix of the DHNN is readjusted by singular value decomposition (SVD), and the contribution rate is used to trim the weights, which reduces the complexity of the network structure; finally, the validity and applicability of the new model are verified through digital recognition experiments.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_70-A_Novel_Digital_Recognition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Holistic Expression Factors of Emotional Motion for Non-humanoid Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141068</link>
        <id>10.14569/IJACSA.2023.0141068</id>
        <doi>10.14569/IJACSA.2023.0141068</doi>
        <lastModDate>2023-10-30T10:49:52.0470000+00:00</lastModDate>
        
        <creator>Qisi Xie</creator>
        
        <creator>Ding-Bang Luh</creator>
        
        <subject>Human-robot interaction; robot emotion; non-humanoid robot; movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The development of technology and the increasing prevalence of solitary living have transformed non-humanoid robots, such as robotic sweepers and mechanical pets, into potential sources of emotional support for individuals. Nevertheless, the majority of non-humanoid robots currently in existence are task-oriented and lack features such as facial expressions and sound. Existing research primarily emphasizes the details of human motion in robot motion design, while devoting less attention to the analysis of universal emotional expression factors and methods rooted in human recognition patterns. As an initial step, a theoretical framework and holistic expression factors were proposed based on Gestalt theory and SOR theory. These factors encompass vertical and horizontal motion direction, stimulation, and vertical repetition. Subsequently, animation simulation tests were conducted to confirm and examine the contributions of each factor to the recognition of emotional expressions. The results indicate that both vertical and horizontal movements can convey emotional valence. However, when both are present, neither direction dominates the valence recognition result; instead, valence recognition is influenced by the combined effects of stimulation, vertical repetition, and movement direction. At the same time, non-humanoid robots can display recognizable emotional content when guided by holistic expression factors. This framework can serve as a universal guide for emotional expression tasks in non-humanoid robots, supporting the hypothesis that Gestalt theory is applicable to dynamic emotion recognition tasks. These findings also offer a new holistic perspective for designing emotional expression methods for robots.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_68-The_Holistic_Expression_Factors_of_Emotional_Motion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Detection and Defense Countermeasure Inference of Ransomware based on API Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141067</link>
        <id>10.14569/IJACSA.2023.0141067</id>
        <doi>10.14569/IJACSA.2023.0141067</doi>
        <lastModDate>2023-10-30T10:49:52.0300000+00:00</lastModDate>
        
        <creator>Shuqin Zhang</creator>
        
        <creator>Tianhui Du</creator>
        
        <creator>Peiyu Shi</creator>
        
        <creator>Xinyu Su</creator>
        
        <creator>Yunfei Han</creator>
        
        <subject>Ransomware detection; API sequences; WGAN-GP; ATT&amp;CK; machine learning; ontology; defense countermeasures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Currently, ransomware attacks have become an important threat in the field of network security, and the detection of and defense against ransomware have become particularly important. However, the data and behavior patterns collected dynamically are insufficient to detect variants and unknown ransomware, and specialized defense strategies for ransomware are also lacking. In response, this article proposes a ransomware early detection and defense system (REDDS) based on application programming interface (API) sequences. REDDS first dynamically collects API sequences from the pre-encryption stage of the ransomware, and converts them into feature vectors using the n-gram model and the TF-IDF algorithm. Due to the limitations of dynamic data collection, the API sequences are augmented using Wasserstein GAN with Gradient Penalty (WGAN-GP), and machine learning classification algorithms are then trained on the augmented data to detect ransomware. By mapping the malicious APIs of ransomware to public security knowledge bases such as Adversarial Tactics, Techniques, and Common Knowledge (ATT&amp;CK), a Ransomware Defense Countermeasures Ontology (RDCO) is proposed. Based on the ontology model, a set of inference rules is designed to automatically infer defense countermeasures against ransomware. The experimental results show that WGAN-GP can augment API sequence data more effectively than other GAN models. After data augmentation, the accuracy of the machine learning detection models improved significantly, reaching a maximum of 99.32%. Based on malicious APIs in ransomware, defense countermeasures can be inferred to help security managers respond to ransomware attacks and deploy appropriate security solutions.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_67-Early_Detection_and_Defense_Countermeasure_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Cancer Detection using Segmented 3D Tensors and Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141066</link>
        <id>10.14569/IJACSA.2023.0141066</id>
        <doi>10.14569/IJACSA.2023.0141066</doi>
        <lastModDate>2023-10-30T10:49:52.0300000+00:00</lastModDate>
        
        <creator>Zaib un Nisa</creator>
        
        <creator>Arfan Jaffar</creator>
        
        <creator>Sohail Masood Bhatti</creator>
        
        <creator>Umair Muneer Butt</creator>
        
        <subject>Deep learning; lung cancer; LUNA16; machine learning; nodules; PyTorch</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cancer is currently the second most prevalent cause of mortality, and its prevalence is expanding rapidly. The development of pulmonary nodules inside the lungs is suggestive of the existence of lung cancer. The detection of cancer is achieved using nodules detected in computed tomography (CT) images obtained from the LUNA16 dataset. This study uses the Python library &quot;PyTorch&quot; for this purpose. A three-dimensional model has been used to train on and extract the nodular segments from CT scan images, referred to as CT-scan chunks; this is done because handling the whole CT scan image is impractical due to its vast size. The aforementioned chunks are then transformed into PyTorch tensors. The tensors are subsequently input into a deep learning model to extract features, which are then passed through a sequence of machine learning classifiers for classification. These classifiers include Support Vector Machines, Multi-layer Perceptron, Random Forest Classifier, Logistic Regression, K Nearest Neighbor, and Linear Discriminant Analysis. Our research has shown that the use of chunk extraction from CT scan images, coupled with the creation of tensors from segmented CT scans, significantly enhances the precision of various machine learning algorithms. Additionally, this approach has the advantage of reducing computational time at runtime. In our study, the use of Support Vector Machines yielded the highest accuracy, reaching 99.68%. The findings of this study have the potential to be valuable in the practical implementation of real-time lung nodule identification applications.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_66-Lung_Cancer_Detection_using_Segmented_3D_Tensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting the RPL Version Number Attack in IoT Networks using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141065</link>
        <id>10.14569/IJACSA.2023.0141065</id>
        <doi>10.14569/IJACSA.2023.0141065</doi>
        <lastModDate>2023-10-30T10:49:52.0170000+00:00</lastModDate>
        
        <creator>Ayoub KRARI</creator>
        
        <creator>Abdelmajid HAJAMI</creator>
        
        <creator>Ezzitouni JARMOUNI</creator>
        
        <subject>Attack; deep learning; detection; IoT; machine learning; RPL; security; version number</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This research presents a novel approach for detecting the highly perilous RPL version number attack in IoT networks using deep learning models, specifically Long Short-Term Memory (LSTM) and Deep Neural Networks (DNN). The study employs the Cooja simulator to create a comprehensive dataset for simulating the attack. By training LSTM and DNN models on this dataset, intricate attack patterns are learned for effective detection. The urgency of this work is underscored by the critical need to bolster IoT network security. IoT networks have become increasingly integral in various domains, including healthcare, smart cities, and industrial automation. Any compromise in their security could result in severe consequences, including data breaches and potential harm. Traditional intrusion detection systems often struggle to counter advanced attacks like the RPL version number attack, which could lead to unauthorized access and disruption of essential services. Experimental results in this research showcase outstanding accuracy rates, surpassing traditional machine learning algorithms used in IoT network intrusion detection. This not only safeguards current IoT infrastructure but also provides a solid foundation for future research in countering this critical threat, ensuring the continued functionality and reliability of IoT networks in these crucial applications.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_65-Detecting_the_RPL_Version_Number_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Convolutional Neural Network for Accurate Prediction of Seismic Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141064</link>
        <id>10.14569/IJACSA.2023.0141064</id>
        <doi>10.14569/IJACSA.2023.0141064</doi>
        <lastModDate>2023-10-30T10:49:52.0000000+00:00</lastModDate>
        
        <creator>Assem Turarbek</creator>
        
        <creator>Maktagali Bektemesov</creator>
        
        <creator>Aliya Ongarbayeva</creator>
        
        <creator>Assel Orazbayeva</creator>
        
        <creator>Aizhan Koishybekova</creator>
        
        <creator>Yeldos Adetbekov</creator>
        
        <subject>Deep learning; CNN; random forest; SVM; neural network; prediction; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, the realm of seismology has witnessed an increased integration of advanced computational techniques, seeking to enhance the precision and timeliness of earthquake predictions. The paper titled &quot;Deep Convolutional Neural Network and Machine Learning Enabled Framework for Analysis and Prediction of Seismic Events&quot; embarks on an ambitious exploration of this interstice, marrying the formidable prowess of Deep Convolutional Neural Networks (CNNs) with an array of machine learning algorithms. At the forefront of our investigation is the Deep CNN, known for its unparalleled capability to process spatial hierarchies and multi-dimensional seismic data. Accompanying this neural behemoth is LightGBM, a gradient boosting framework that offers superior speed and performance, especially with voluminous datasets. Additionally, conventional neural networks, noted for their adeptness in pattern recognition, offer a robust method to gauge the intricacies of seismic data. Our exploration doesn&#39;t halt here; the research delves deeper with Random Forest and Support Vector Machines (SVM), both renowned for their resilient performance in classification tasks. By amalgamating these diverse methodologies, this research crafts a multifaceted and synergistic framework. The culmination is a sophisticated tool poised to not only discern the minutiae of seismic activities with heightened accuracy but to predict forthcoming events with a degree of certainty previously deemed elusive. In this era of escalating seismic activities, our research offers a timely beacon, heralding a future where communities are better equipped to respond to the Earth&#39;s capricious tremors.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_64-Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Comparative Study of Machine Learning Methods for Chronic Kidney Disease Classification: Decision Tree, Support Vector Machine, and Naive Bayes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141063</link>
        <id>10.14569/IJACSA.2023.0141063</id>
        <doi>10.14569/IJACSA.2023.0141063</doi>
        <lastModDate>2023-10-30T10:49:51.9830000+00:00</lastModDate>
        
        <creator>Admi Syarif</creator>
        
        <creator>Olivia Desti Riana</creator>
        
        <creator>Dewi Asiah Shofiana</creator>
        
        <creator>Akmal Junaidi</creator>
        
        <subject>Chronic kidney disease (CKD); classification; decision tree; machine learning; na&#239;ve bayes; support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Based on the findings of the 2010 Global Burden of Disease analysis, there was an increase in the global ranking of Chronic Kidney Disease (CKD) as a major contributor to mortality, moving from 27th position in 1990 to 18th. Approximately 10 percent of the global population experiences CKD, and every year millions of lives are lost due to limited access to adequate treatment. CKD poses a substantial global health concern, greatly affecting both the well-being and life span of individuals afflicted by the condition. This study aims to evaluate the performance of three major classification algorithms in CKD diagnosis: Decision Tree, Support Vector Machine (SVM), and Na&#239;ve Bayes. This research distinguishes itself from previous studies through an innovative data processing approach. Data preprocessing involved transforming categorical values into numerical form using label encoding, as well as applying Exploratory Data Analysis (EDA) to identify outliers and test data assumptions. In addition, missing values were handled with appropriate strategies to maintain the integrity of the dataset. The classification methods were evaluated using a dataset of 400 samples from Kaggle with 24 attributes. Through careful experimentation, the accuracy results of each algorithm are presented and compared. The results of this study can help in the development of a more efficient and accurate decision support system for the early diagnosis of CKD.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_63-A_Comprehensive_Comparative_Study_of_Machine_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diabetes Prediction Empowered with Multi-level Data Fusion and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141062</link>
        <id>10.14569/IJACSA.2023.0141062</id>
        <doi>10.14569/IJACSA.2023.0141062</doi>
        <lastModDate>2023-10-30T10:49:51.9670000+00:00</lastModDate>
        
        <creator>Ghofran Bassam</creator>
        
        <creator>Amina Rouai</creator>
        
        <creator>Reyaz Ahmad</creator>
        
        <creator>Muhammd Adnan Khan</creator>
        
        <subject>Disease prediction; machine learning (ML); fused approach; artificial neural network (ANN); support vector machine (SVM); disease diagnosis; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Technology improvements have benefited the medical industry, especially in the area of diabetes prediction. In order to find patterns and risk factors related to diabetes, machine learning and Artificial Intelligence (AI) are vital in the analysis of enormous volumes of data, including medical records, lifestyle variables, and biomarkers. This makes tailored management and early detection possible, which might revolutionize healthcare. This study examines how machine learning algorithms may be used to identify diseases, with an emphasis on diabetes prediction. The proposed Diabetes Prediction Empowered with Multi-level Data Fusion and Machine Learning (DPEMDFML) model combines two distinct types of models, the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), to create a fused machine learning technique. Two separate datasets were utilized for training and testing the model in order to assess its performance. To ensure a thorough evaluation of the model&#39;s predictive ability, the datasets were split in two experiments in proportions of 70:30 and 75:25, respectively. The study&#39;s findings were encouraging, with the ANN algorithm obtaining a remarkable accuracy of 97.43%, indicating that the model identified instances of diabetes with a high degree of accuracy. Further assessment and validation of the model&#39;s performance using various measures would yield a more thorough understanding of its predictive ability.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_62-Diabetes_Prediction_Empowered_with_Multi_level_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Nuclei Segmentation on Microscopic Images using Deep Residual U-Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141061</link>
        <id>10.14569/IJACSA.2023.0141061</id>
        <doi>10.14569/IJACSA.2023.0141061</doi>
        <lastModDate>2023-10-30T10:49:51.9670000+00:00</lastModDate>
        
        <creator>Ramya Shree H P</creator>
        
        <creator>Minavathi</creator>
        
        <creator>Dinesh M S</creator>
        
        <subject>Nuclei segmentation; convolutional neural networks; neural networks; U-Net; deep learning; semantic segmentation; 2018 data science bowl</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Nuclei segmentation is the preliminary step in the task of medical image analysis. Nowadays, there exist several deep learning-based techniques built on Convolutional Neural Networks (CNNs) for the task of nuclei segmentation. In this study, we present a neural network for semantic segmentation. This network harnesses the strengths of both residual learning and U-Net methodologies, thereby amplifying cell segmentation performance. This hybrid approach also facilitates the creation of a network with diminished parameter requirements. The residual units incorporated in the network contribute to a smoother training process and mitigate the issue of vanishing gradients. Our model is tested on a microscopy image dataset which is publicly available from the 2018 Data Science Bowl grand challenge and assessed against U-Net and several other state-of-the-art deep learning approaches designed for nuclei segmentation. Our proposed approach showcases a notable improvement in average Intersection over Union (IoU) compared to prevailing state-of-the-art techniques, exhibiting significant margins of 1.1% and 5.8% higher gains over the original U-Net. Our model also excels across various key indicators, including accuracy, precision, recall and dice-coefficient. The outcomes underscore the potential of our proposed approach as a promising nuclei segmentation method for microscopy image analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_61-An_Automatic_Nuclei_Segmentation_on_Microscopic_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Failure Modes Effect and Criticality Analysis of the Procurement Process of Artificial Intelligent Systems/Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141060</link>
        <id>10.14569/IJACSA.2023.0141060</id>
        <doi>10.14569/IJACSA.2023.0141060</doi>
        <lastModDate>2023-10-30T10:49:51.9530000+00:00</lastModDate>
        
        <creator>Khalid Alshehhi</creator>
        
        <creator>Ali Cheaitou</creator>
        
        <creator>Hamad Rashid</creator>
        
        <subject>Fuzzy Failure Mode Effect and Criticality Analysis (FMECA); procurement; Artificial Intelligent (AI) System; public sector; United Arab Emirates (UAE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This study focuses on the ranking of risks associated with the procurement of Artificial Intelligent (AI) systems/services for UAE public sectors. Considering the involvement of human-based reasoning, this study proposes to use Fuzzy Failure Mode Effect and Criticality Analysis (FMECA). The risks were identified from the literature and subsequently, using 40 interviews with practitioners, the final list was developed on the basis of the presence of these risks in the AI procurement process. For Fuzzy FMECA, the input data were collected from fifteen experts. The values of Severity (S) and Detection (D) for each risk element are averaged for use as input. An If-Then rule-based fuzzy inference system is employed to obtain the Fuzzy Risk Priority Numbers (RPNs) of the risk elements. The traditional RPN and Fuzzy RPN numbers are compared, and it is found that the Fuzzy RPN gives a realistic picture of the ranking of risks. Privacy and security risks, integration risks, risk of malfunction of systems/services, and ethical risks are found to be high priorities. This study provides valuable insight for policymakers to develop strategies to mitigate these risks for smooth procurement and implementation of AI-related projects.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_60-Fuzzy_Failure_Modes_Effect_and_Criticality_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-Enabled Security Framework for Enhancing IoT Networks: A Two-Layer Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141059</link>
        <id>10.14569/IJACSA.2023.0141059</id>
        <doi>10.14569/IJACSA.2023.0141059</doi>
        <lastModDate>2023-10-30T10:49:51.9370000+00:00</lastModDate>
        
        <creator>Hosny H. Abo Emira</creator>
        
        <creator>Ahmed A. Elngar</creator>
        
        <creator>Mohammed Kayed</creator>
        
        <subject>Internet of Things (IoT); Blockchain (BC); LEACH; clustering; authentication; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The increasing proliferation of Internet of Things (IoT) nodes poses significant security challenges for their networks’ communication. Blockchain technology, with its decentralized and distributed nature, has the potential to address these security concerns within IoT networks. The LEACH (Low Energy Adaptive Clustering Hierarchy) algorithm and blockchain technology together enhance IoT network security, enabling energy-efficient data management and transaction integrity, and improving network lifespan and protection. This paper presents a security model that combines the LEACH algorithm and blockchain technology to improve IoT networks&#39; security. The LEACH algorithm forms clusters of IoT devices, with a designated cluster head (CH) responsible for data aggregation and forwarding. Our model incorporates blockchain technology&#39;s core principles and cryptographic foundations, providing additional security measures. It consists of two main layers: the LEACH clustering-based routing protocol, which forms clusters and layers, and a blockchain simulator module. The LEACH algorithm reduces energy consumption, enables efficient data management within clusters, and ensures the integrity, transparency, and immutability of transactions. Our model is implemented on a simulator, allowing for experimentation and modification to evaluate the performance and effectiveness of the security-enhanced IoT network model. Our results demonstrate the effectiveness of the proposed enhanced LEACH algorithm compared to previous algorithms, with the last node dying after 1868 transactions; the proposed framework also records a state rate of 0.058% and a throughput of 2.75. Simulation results are validated against previous algorithms, and the model obtained higher accuracy compared to them.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_59-Blockchain_Enabled_Security_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instance Segmentation Method based on R2SC-Yolact++</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141058</link>
        <id>10.14569/IJACSA.2023.0141058</id>
        <doi>10.14569/IJACSA.2023.0141058</doi>
        <lastModDate>2023-10-30T10:49:51.9200000+00:00</lastModDate>
        
        <creator>Liqun Ma</creator>
        
        <creator>Chuang Cai</creator>
        
        <creator>Haonan Xie</creator>
        
        <creator>Xuanxuan Fan</creator>
        
        <creator>Zhijian Qu</creator>
        
        <creator>Chongguang Ren</creator>
        
        <subject>Instance segmentation; Yolact++; Res2net; Cluster-NMS; insulator dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>To address the problems of missed detection, segmentation error, and poor target edge segmentation in instance segmentation models, an R2SC-Yolact++ instance segmentation approach based on the improved Yolact++ is proposed. Firstly, the backbone network adopts Res2Net, which introduces a spatial attention mechanism (SAM) to alleviate segmentation errors by better extracting feature information; then, high-quality masks are obtained by fusing the detail information of the shallow feature P2 as the input to the prototype mask branch; finally, the problem of missed detection is solved by introducing Cluster-NMS in order to improve the accuracy of the detection boxes. To illustrate the effectiveness of the improved model, experiments were conducted on two publicly available datasets, COCO and CVPPP. The experimental results show that the accuracy on the COCO dataset is 1.1% higher than that of the original model, and the accuracy on the CVPPP dataset is 1.7% better than before the improvement, which surpasses other mainstream instance segmentation algorithms such as Mask RCNN. Finally, the improved model is applied to the insulator dataset, where it can accurately segment the sheds of insulators.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_58-Instance_Segmentation_Method_Based_on_R2SC_Yolact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Hyperparameters for Improved Melanoma Classification using Metaheuristic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141057</link>
        <id>10.14569/IJACSA.2023.0141057</id>
        <doi>10.14569/IJACSA.2023.0141057</doi>
        <lastModDate>2023-10-30T10:49:51.9200000+00:00</lastModDate>
        
        <creator>Shamsuddeen Adamu</creator>
        
        <creator>Hitham Alhussian</creator>
        
        <creator>Norshakirah Aziz</creator>
        
        <creator>Said Jadid Abdulkadir</creator>
        
        <creator>Ayed Alwadin</creator>
        
        <creator>Abdullahi Abubakar Imam</creator>
        
        <creator>Aliyu Garba</creator>
        
        <creator>Yahaya Saidu</creator>
        
        <subject>Deep learning; machine learning; classification; metaheuristic algorithm; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Melanoma, a prevalent and formidable skin cancer, necessitates early detection for improved survival rates. The rising incidence of melanoma poses significant challenges to healthcare systems worldwide. While deep neural networks offer the potential for precise melanoma classification, the optimization of hyperparameters remains a major obstacle. This paper introduces a groundbreaking approach that harnesses the Manta Rays Foraging Optimizer (MRFO) to empower melanoma classification. MRFO efficiently fine-tunes hyperparameters for a Convolutional Neural Network (CNN) using the ISIC 2019 dataset, which comprises 776 images (438 melanoma, 338 non-melanoma). The proposed cost-effective DenseNet121 model surpasses other optimization methods in various metrics during training, testing, and validation. It achieves an impressive accuracy of 99.26%, an AUC of 99.56%, an F1 score of 0.9091, a precision of 94.06%, and a recall of 87.96%. Comparative analysis with EfficientB1, EfficientB7, EfficientNetV2B0, NASNetLarge, ResNet50, VGG16, and VGG19 models demonstrates its superiority. These findings underscore the potential of the novel MRFO-based approach in achieving superior accuracy for melanoma classification. The proposed method has the potential to be a valuable tool for early detection and improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_57-Optimizing_Hyperparameters_for_Melanoma_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Pervasive Computing and Wearable Devices for Sustainable Healthcare Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141056</link>
        <id>10.14569/IJACSA.2023.0141056</id>
        <doi>10.14569/IJACSA.2023.0141056</doi>
        <lastModDate>2023-10-30T10:49:51.9070000+00:00</lastModDate>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <creator>Rajermani Thinakan</creator>
        
        <creator>Malathy Batumalay</creator>
        
        <creator>Tri Basuki Kurniawan</creator>
        
        <subject>Internet of Things; wearable devices; pervasive computing; sustainable healthcare; healthcare applications; public health; health system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The user’s demands in systems supported by the Internet of Things are frequently controlled effectively using a pervasive computing system. Pervasive computing is a term used to describe a system that integrates several communication and distributed network technologies while still properly accommodating user needs. It is quite difficult to be inventive in a pervasive computing system when it comes to the delivery of information, handling standards, and extending heterogeneous aid to scattered clients. In this view, our paper intends to utilize a Dispersed and Elastic Computing Model (DECM) to enable proper and reliable communication for people who are using IoT-based wearable healthcare devices. Recurrent Reinforcement Learning (RRL) is used in the suggested model and the connected system to analyze resource allocation in response to requirements and other allocative factors. To provide effective data transmission over wearable medical devices, the built system gives mobility management additional consideration alongside resource allocation and distribution. The results show that the pervasive computing system provides services to the user with reduced latency and an increased rate of communication for healthcare wearable devices based on the determined demands of the resources, which is an important aspect of sustainable healthcare. We employ assessment metrics consisting of request failure, response time, managed and backlogged requests, bandwidth, and storage to capture the consistency of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_56-A_Model_for_Pervasive_Computing_and_Wearable_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fortifying Against Cyber Fraud: Instrument Development with the Protection Motivation Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141055</link>
        <id>10.14569/IJACSA.2023.0141055</id>
        <doi>10.14569/IJACSA.2023.0141055</doi>
        <lastModDate>2023-10-30T10:49:51.8900000+00:00</lastModDate>
        
        <creator>Norhasyimatul Naquiah Ghazali</creator>
        
        <creator>Syahida Hassan</creator>
        
        <creator>Rahayu Ahmad</creator>
        
        <subject>Cyber security; cyber fraud; e-services; instrument development; content validity; Protection Motivation Theory (PMT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cybersecurity has become a trending topic in this technological era. Crimes keep happening in this medium, challenging researchers and IT professionals worldwide to find the best solutions to overcome this issue. Crimes primarily related to fraud on e-services have become a red alert that needs to concern netizens. Instead of simply trusting human-created networks and systems, individuals should acquire and implement protective behaviours for themselves. Thus, factors such as source credibility, perceived value of data, wishful thinking, perceived threat severity, perceived threat vulnerability, maladaptive rewards, and response efficacy have been investigated in this study, and the Protection Motivation Theory is used to counter cybersecurity issues faced by users. An instrument has been created to facilitate the collection of the empirical data necessary for verifying the proposed model. Analyses such as the Content Validity Index (CVI) and Scale-level CVI (S-CVI) have been used to validate the items. The findings indicate that one of the items does not meet the criteria; however, experts have suggested revising it to make it comprehensible for use in the main study. This paper also includes a discussion of the implications of the experts&#39; evaluation. This study, in particular, can help boost the understanding of cyber fraud and the proper methods a user can employ to avoid becoming a victim.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_55-Fortifying_Against_Cyber_Fraud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Code-Mixed Sentiment Analysis using Transformer for Twitter Social Media Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141053</link>
        <id>10.14569/IJACSA.2023.0141053</id>
        <doi>10.14569/IJACSA.2023.0141053</doi>
        <lastModDate>2023-10-30T10:49:51.8730000+00:00</lastModDate>
        
        <creator>Laksmita Widya Astuti</creator>
        
        <creator>Yunita Sari</creator>
        
        <creator>Suprapto</creator>
        
        <subject>Sentiment analysis; code-mixed; BERT; bahasa Indonesia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The underrepresentation of the Indonesian language in the field of Natural Language Processing (NLP) can be attributed to several key factors, including the absence of annotated datasets, limited language resources, and a lack of standardization in these resources. One notable linguistic phenomenon in Indonesia is code-mixing between Bahasa Indonesia and English, which is influenced by various sociolinguistic factors, including individual speaker characteristics, the linguistic environment, the societal status of languages, and everyday language usage. In an effort to address the challenges posed by code-mixed data, this research project has successfully created a code-mixed dataset for sentiment analysis. This dataset was constructed based on keywords derived from the sociolinguistic phenomenon observed among teenagers in South Jakarta. Utilizing this newly developed dataset, we conducted a series of experiments employing different pre-processing techniques and pre-trained models. The results of these experiments have demonstrated that the IndoBERTweet pre-trained model is highly effective in solving sentiment analysis tasks when applied to Indonesian-English code-mixed data. These experiments yielded an average precision of 76.07%, a recall of 75.52%, an F-1 score of 75.51%, and an accuracy of 76.56%.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_53-Code_Mixed_Sentiment_Analysis_using_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Application of Random Forest-based Feature Selection Algorithm in Data Mining Experiments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141054</link>
        <id>10.14569/IJACSA.2023.0141054</id>
        <doi>10.14569/IJACSA.2023.0141054</doi>
        <lastModDate>2023-10-30T10:49:51.8730000+00:00</lastModDate>
        
        <creator>Huan Wang</creator>
        
        <subject>Random forest; SVM; machine learning; big data; feature selection; best-first search; rough set theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Handling high-dimensional big data presents substantial challenges for Machine Learning (ML) algorithms, mainly due to the curse of dimensionality that leads to computational inefficiencies and increased risk of overfitting. Various dimensionality reduction and Feature Selection (FS) techniques have been developed to alleviate these challenges. Random Forest (RF), a widely-used Ensemble Learning Method (ELM), is recognized for its high accuracy and robustness, including its lesser-known capability for effective FS. While specialized RF models are designed for FS, they often struggle with computational efficiency on large datasets. Addressing these challenges, this study proposes a novel Feature Selection Model (FSM) integrated with data reduction techniques, termed Dynamic Correlated Regularized Random Forest (DCRRF). The architecture operates in four phases: Preprocessing, Feature Reduction (FR) using Best-First Search with Rough Set Theory (BFS-RST), FS through DCRRF, and feature efficacy assessment using a Support Vector Machine (SVM) classifier. Benchmarked against four gene expression datasets, the proposed model outperforms existing RF-based methods in computational efficiency and classification accuracy. This study introduces a robust and efficient approach to feature selection in high-dimensional big-data scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_54-Research_on_the_Application_of_Random.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-based Method Ensuring Integrity of Shared Data in a Distributed-Control Intersection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141052</link>
        <id>10.14569/IJACSA.2023.0141052</id>
        <doi>10.14569/IJACSA.2023.0141052</doi>
        <lastModDate>2023-10-30T10:49:51.8600000+00:00</lastModDate>
        
        <creator>Mohamed El Ghazouani</creator>
        
        <creator>Abdelouafi Ikidid</creator>
        
        <creator>Charafeddine Ait Zaouiat</creator>
        
        <creator>Aziz Layla</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <creator>Latifa Er-Rajy</creator>
        
        <subject>Security; data integrity; blockchain; distributed system; congestion; intelligent agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In modern urban transportation systems, the efficient management of traffic intersections is crucial to ensure smooth traffic flow and reduce congestion. Distributed-control intersection networks, where control decisions are made collaboratively by multiple entities, offer promising solutions. However, maintaining the security and integrity of shared data among these entities poses significant challenges, including the risk of data tampering and unauthorized modifications. This paper proposes a novel approach that leverages blockchain technology and intelligent agents to address these integrity concerns. By utilizing the decentralized and transparent nature of blockchain, our method ensures the authenticity and immutability of shared data within the distributed-control intersection network. The paper presents a detailed architecture, highlighting the integration of blockchain into the existing infrastructure, and discusses the benefits of this approach in enhancing data integrity, trust, and overall system reliability. Through a case study and simulation results, the proposed approach demonstrates its effectiveness in maintaining the integrity of shared data, thereby contributing to the advancement of secure and efficient traffic management systems.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_52-A_Blockchain_Based_Method_Ensuring_Integrity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AHP-based Design of a Finger Training Device for Stroke</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141051</link>
        <id>10.14569/IJACSA.2023.0141051</id>
        <doi>10.14569/IJACSA.2023.0141051</doi>
        <lastModDate>2023-10-30T10:49:51.8430000+00:00</lastModDate>
        
        <creator>Hua Wei</creator>
        
        <creator>Ding-Bang Luh</creator>
        
        <creator>Xin Li</creator>
        
        <creator>Hai-Xia Yan</creator>
        
        <subject>Stroke; rehabilitation training equipment; finger muscle strength; AHP; specific finger actions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This study aims to develop a stroke finger training device specifically for office hand scenes which exercises the small muscles of the fingertips and improves the hand strength of stroke patients. The device has a real-time recording function for muscle strength changes during finger muscle training and enhances interaction through the feedback of training device data, thereby improving training effectiveness. This research involves analyzing hand postures and muscle movements in computer office scenes, designing questionnaires to obtain user requirements, and using the Delphi analysis method to screen key indicators and form standards and program layers. The Analytic Hierarchy Process (AHP) evaluates and ranks the core design elements. According to the design elements, the structure and training system design are guided, and a prototype is built for experimental testing. The results show that the training device effectively improves participants&#39; hand strength, stability, and coordination and helps restore hand function. The AHP method allows for evaluating and ranking the device&#39;s design elements, making the device design more reasonable and comprehensive. Overall, the training device significantly improves the finger muscle strength of participants.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_51-AHP_Based_Design_of_a_Finger_Training_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Ensemble Learning and Advanced Data Mining Techniques to Improve the Diagnosis of Chronic Kidney Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141050</link>
        <id>10.14569/IJACSA.2023.0141050</id>
        <doi>10.14569/IJACSA.2023.0141050</doi>
        <lastModDate>2023-10-30T10:49:51.8430000+00:00</lastModDate>
        
        <creator>Muneer Majid</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Shahnawaz Ayoub</creator>
        
        <creator>Farhana Khan</creator>
        
        <creator>Faheem Ahmad Reegu</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Wassim Jaziri</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <subject>Kidney; chronic kidney disease; support vector machine; k-nearest neighbors; artificial neural network; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Kidney failure is a condition with far-reaching, potentially life-threatening consequences on the human body. Leveraging the power of machine learning and data mining, this research focuses on precise disease prediction to equip decision-makers with critical data-driven insights. The accuracy of classification systems hinges on the dataset&#39;s inherent characteristics, prompting the application of feature selection techniques to streamline algorithm models and optimize classification precision. Various classification methodologies, including K-Nearest Neighbor, J48, Artificial Neural Network (ANN), Naive Bayes, and Support Vector Machine, are employed to detect chronic renal disease. A predictive framework is devised, blending ensemble methods with feature selection strategies to forecast chronic kidney disease. Specifically, the predictive model for chronic kidney disease is meticulously constructed through the fusion of an information gain-based feature evaluator and a ranker search mechanism, fortified by the wrapper subset evaluator and the best first algorithm. J48, in tandem with the Info Gain Attribute Evaluator and ranker search system, exhibits a remarkable accuracy rate of 97.77%. The Artificial Neural Network (ANN), coupled with the Wrapper Subset Evaluator and the highly effective Best First search strategy, yields precise results at a rate of 97.78%. Similarly, the Naive Bayes model, when integrated with the Wrapper Subset Evaluator (WSE) and the Best First search engine, demonstrates exceptional performance, achieving an accuracy rate of 97%. Furthermore, the Support Vector Machine algorithm achieves a notable accuracy rate of 97.12% when utilizing the Info Gain Attribute Evaluator. The K-Nearest Neighbor Classifier, in conjunction with the Wrapper Subset Evaluator, emerges as the most accurate among the foundational classifiers, boasting an impressive prediction accuracy of 98%. A second model is introduced, incorporating five diverse classifiers operating through a voting mechanism to form an ensemble model. Investigative findings highlight the efficacy of the proposed ensemble model, which attains a precision rate of 98.85%, as compared to individual base classifiers. This research underscores the potential of combining feature selection and ensemble techniques to significantly enhance the precision and accuracy of chronic kidney disease prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_50-Using_Ensemble_Learning_and_Advanced_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Missing Data Imputation Methods in Wastewater Treatment Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141049</link>
        <id>10.14569/IJACSA.2023.0141049</id>
        <doi>10.14569/IJACSA.2023.0141049</doi>
        <lastModDate>2023-10-30T10:49:51.8270000+00:00</lastModDate>
        
        <creator>Abdellah Chaoui</creator>
        
        <creator>Kaoutar Rebija</creator>
        
        <creator>Kaoutar Chkaiti</creator>
        
        <creator>Mohammed Laaouan</creator>
        
        <creator>Rqia Bourziza</creator>
        
        <creator>Karima Sebari</creator>
        
        <creator>Wafae Elkhoumsi</creator>
        
        <subject>Systematic literature review; Kitchenham’s method; wastewater treatment; imputation methods; missing data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Missing data pose a big challenge in the field of wastewater treatment, representing a frequent data-quality issue that can result in misleading analyses and compromised decision-making accuracy. The initial step in data preprocessing involves the estimation and handling of missing values. The primary aim of this study is to conduct a comprehensive examination of the existing research concerning missing value imputation in wastewater treatment plants (WWTPs). The focus is specifically on identifying and outlining various imputation techniques employed in this field, while paying close attention to their respective strengths and limitations. To ensure a methodical approach, this study adopts the systematic literature review (SLR) process following Kitchenham’s guidelines. In order to gather relevant and up-to-date papers, the research leverages the scientific database &quot;Scopus&quot; to retrieve and analyze all pertinent papers during the search process. By doing so, this research aims to contribute valuable insights into the different strategies used for imputing missing values in WWTPs and to shed light on their practical implications and potential drawbacks. From 599 retrieved papers, a total of 16 research papers were selected to address the review questions. Finally, several recommendations are given to address the limitations identified in the reviewed studies and to contribute to more accurate and reliable data analysis and decision-making in the wastewater treatment domain.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_49-Applications_of_Missing_Data_Imputation_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HHO-SMOTe: Efficient Sampling Rate for Synthetic Minority Oversampling Technique Based on Harris Hawk Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141047</link>
        <id>10.14569/IJACSA.2023.0141047</id>
        <doi>10.14569/IJACSA.2023.0141047</doi>
        <lastModDate>2023-10-30T10:49:51.8130000+00:00</lastModDate>
        
        <creator>Khaled SH. Raslan</creator>
        
        <creator>Almohammady S. Alsharkawy</creator>
        
        <creator>K. R. Raslan</creator>
        
        <subject>Imbalanced data; machine learning; over-sampling; SMOTE; HHO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Classifying imbalanced datasets presents a significant challenge in the field of machine learning, especially with big data, where instances are unevenly distributed among classes, leading to class imbalance issues that affect classifier performance. Synthetic Minority Over-sampling Technique (SMOTE) is an effective oversampling method that addresses this by generating new instances for the under-represented minority class. However, SMOTE&#39;s efficiency relies on the sampling rate for minority class instances, making optimal sampling rates crucial for solving class imbalance. In this paper, we introduce HHO-SMOTe, a novel hybrid approach that combines the Harris Hawk optimization (HHO) search algorithm with SMOTE to enhance classification accuracy by determining optimal sample rates for each dataset. We conducted extensive experiments across diverse datasets to comprehensively evaluate our binary classification model. The results demonstrated our model&#39;s exceptional performance, with an AUC score exceeding 0.96, a high G-means score of 0.95 highlighting its robustness, and an outstanding F1-score consistently exceeding 0.99. These findings collectively establish our proposed approach as a formidable contender in the domain of binary classification models.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_47-HHO_SMOTe_Efficient_Sampling_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multitask Learning System for Trait-based Automated Short Answer Scoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141048</link>
        <id>10.14569/IJACSA.2023.0141048</id>
        <doi>10.14569/IJACSA.2023.0141048</doi>
        <lastModDate>2023-10-30T10:49:51.8130000+00:00</lastModDate>
        
        <creator>Dadi Ramesh</creator>
        
        <creator>Suresh Kumar Sanampudi</creator>
        
        <subject>Sentence embedding; coherence; LSTM; short answer scoring; trait score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Evaluating students&#39; responses and providing feedback in the education system is widely acknowledged. However, while most research on Automated Essay Scoring (AES) has focused on generating a final score for given responses, only a few studies have attempted to generate feedback. These studies often rely on statistical features and fail to capture coherence and content-based features. To address this gap, we propose a multitask learning system that captures linguistic, coherence, and content-based features with Bidirectional Encoder Representations from Transformers (BERT) sentence by sentence and generates overall essay and trait scores. Our proposed system outperformed other existing models, achieving Quadratic Weighted Kappa (QWK) scores of 0.766, 0.69, and 0.701 against human rater scores. We evaluated our model on the Automated Student Assessment Prize (ASAP) Kaggle and operating system (OS) data sets. Compared with other previously proposed models, the multitask learning system is a promising step towards more effective and comprehensive writing assessment and feedback.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_48-A_Multitask_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Deep Neural Network for Object Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141046</link>
        <id>10.14569/IJACSA.2023.0141046</id>
        <doi>10.14569/IJACSA.2023.0141046</doi>
        <lastModDate>2023-10-30T10:49:51.7970000+00:00</lastModDate>
        
        <creator>Dulari Bhatt</creator>
        
        <creator>Chirag Patel</creator>
        
        <creator>Madhuri Chopade</creator>
        
        <creator>Madhvi Dave</creator>
        
        <creator>Chintan Patel</creator>
        
        <subject>Convolutional Neural Network (CNN); depth-wise separable convolution; dimension-based generic convolution unit (DBGC); CNN architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Object recognition has gained significance due to the rise in CCTV surveillance and the need for automated detection of objects or activities in images and videos. Lightweight process frameworks are in demand for sensor networks. While Convolutional Neural Networks (CNNs) are widely used in computer vision, many existing architectures are specialized. This paper introduces the Dimension-Based Generic Convolution Block (DBGC), enhancing CNNs with dimension-wise selection of kernels for improved performance. The DBGC offers flexibility for height, width, and depth kernels and can be applied to different dimension combinations. A key feature is the dimension selector block. Unoptimized kernel dimensions reduce computational operations and accuracy, while semi-optimized ones maintain accuracy with fewer operations. Optimized dimensions provide 5-6% higher accuracy and reduced operations. This work addresses the challenge of generic architecture in object recognition research.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_46-Modified_Deep_Neural_Network_for_Object_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Distributed Cooperative Control for Multi-Missile System to Track Maneuvering Targets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141044</link>
        <id>10.14569/IJACSA.2023.0141044</id>
        <doi>10.14569/IJACSA.2023.0141044</doi>
        <lastModDate>2023-10-30T10:49:51.7800000+00:00</lastModDate>
        
        <creator>Belkacem Kada</creator>
        
        <creator>Khalid A. Juhany</creator>
        
        <creator>Ibraheem Al-Qadi</creator>
        
        <creator>Mostefa Bourchak</creator>
        
        <subject>Multi-missile cooperative control; missile autopilot; smooth control; high-order sliding mode; target tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The current paper provides unique smooth control methods for constructing resilient nonlinear autopilot systems and cooperative control protocols for single and multi-missile systems. To develop the single autopilots, a high-order framework based on asymptotic output stability principles and local relative degree for nonlinear affine systems is first applied. Then, using asymptotic exponential functions and graph theory, free-chattering distributed protocols are constructed to allow multi-missile systems to track and intercept high-risk targets. The Lyapunov approach is used to derive the essential requirements for smooth asymptotic consensus. The proposed method minimizes computing load while enhancing accuracy. The simulation results indicate the efficacy of the recommended strategies.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_44-Design_of_Distributed_Cooperative_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberbullying Detection using Machine Learning and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141045</link>
        <id>10.14569/IJACSA.2023.0141045</id>
        <doi>10.14569/IJACSA.2023.0141045</doi>
        <lastModDate>2023-10-30T10:49:51.7800000+00:00</lastModDate>
        
        <creator>Aljwharah Alabdulwahab</creator>
        
        <creator>Mohd Anul Haq</creator>
        
        <creator>Mohammed Alshehri</creator>
        
        <subject>Cyberbullying detection; machine learning; deep learning; natural language processing (NLP); feature extraction; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Social networks were invented to serve the human desire to gain knowledge, learn new things, and follow news from around the world, which has led to their rapid spread and widespread use. However, social networks have both a bright and a dark side. The dark side is that strangers or anonymous people harass some users with obscene words, causing them psychological harm, and this study investigates how to detect such cyberbullying in order to curb this alarming phenomenon. In this context, Natural Language Processing (NLP) is employed in the present investigation to detect cyberbullying. The machine learning (ML) method is moderated based on specific features or criteria for detecting cyberbullying on social media. The collected features were analyzed using the K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Naive Bayes (NB), Decision Tree (DT), and Random Forest (RF) methods. Test results obtained with the proposed framework in a multi-category setting are assessed using kappa, classifier accuracy, and F-measure standards. These outcomes show that the suggested model is a valuable method for predicting cyberbullying behavior, its strength, and its impact on social networks via the Internet. Finally, we compared the results of the proposed and baseline features with machine learning techniques, which demonstrates the importance and effectiveness of the proposed features for detecting cyberbullying. The evaluated models achieved accuracies of 0.90 (KNN), 0.92 (SVM), and 0.96 (deep learning).</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_45-Cyberbullying_Detection_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of the REST API Model using QR Codes on Mobile Devices to Order Parking Tickets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141043</link>
        <id>10.14569/IJACSA.2023.0141043</id>
        <doi>10.14569/IJACSA.2023.0141043</doi>
        <lastModDate>2023-10-30T10:49:51.7670000+00:00</lastModDate>
        
        <creator>Mauluddini Amras</creator>
        
        <creator>Erwin Yulianto</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <creator>Awan Setiawan</creator>
        
        <subject>Representational state transfer application programming interface (REST API); online parking ticket; quick response code (QR Code); smart city; public transport</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Many parking lots still operate manually, and delays commonly occur during the parking process when unforeseen events arise, such as the parking ticket paper running out or the ticket machines jamming. New online services are added to the parking system with the aim of decreasing the amount of time that people spend waiting in line to park. This is done through a parking booking system that issues a parking ticket in the form of a QR Code, along with parking information, payment transactions, and other elements that otherwise interfere with the parking process. In this study, the Forward Chaining Algorithm is combined with the survey research method as the research methodology, and the Rapid Application Development (RAD) model is used for analysis and design. The Representational State Transfer Application Programming Interface (REST API) is one of the solutions offered to overcome this problem. With the advent of online parking services, it is envisioned that customers who intend to park their vehicles in public spaces will be able to reserve a parking space in advance, greatly simplifying the process and eliminating drawn-out queues.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_43-Implementation_of_the_REST_API_Model_using_QR_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Autism Spectrum Disorder (ASD) from Natural Language Text using BERT and ChatGPT Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141041</link>
        <id>10.14569/IJACSA.2023.0141041</id>
        <doi>10.14569/IJACSA.2023.0141041</doi>
        <lastModDate>2023-10-30T10:49:51.7500000+00:00</lastModDate>
        
        <creator>Prasenjit Mukherjee</creator>
        
        <creator>Gokul R. S</creator>
        
        <creator>Sourav Sadhukhan</creator>
        
        <creator>Manish Godse</creator>
        
        <creator>Baisakhi Chakraborty</creator>
        
        <subject>BERT model; ChatGPT model; autism; machine learning; generative AI; autism detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Autism spectrum disorder (ASD) is a developmental disorder that affects people in different ways. ASD may be caused by a combination of genetic and environmental factors, including gene mutations and exposure to toxins. People with ASD may have trouble forming social relationships, experience difficulty with communication and language, and struggle with sensory sensitivity. These difficulties can range from mild to severe and can affect a person&#39;s ability to interact with the world around them. Early detection of ASD in a child gives parents the opportunity to start corrective therapies and treatment and to take action to reduce their child&#39;s ASD symptoms. The proposed work detects ASD in a child from a parent’s dialog. The popular BERT model and the recent ChatGPT have been utilized to analyze the sentiment of each statement from parents for the detection of ASD symptoms. The BERT model is built on transformers, which are widely used in the natural language processing field, whereas ChatGPT is a large language model (LLM) trained with Reinforcement Learning from Human Feedback (RLHF) that can generate the sentiment of a sentence, computer language code, text paragraphs, and more. Sentiment analysis was performed on parents’ dialog using the BERT and ChatGPT models. The data were collected from various autism groups on social sites and other resources on the internet, then cleaned and prepared to train the BERT and ChatGPT models. The BERT model is able to detect the sentiment of each sentence from parents; any positive sentiment detection signals that parents should pay closer attention to their child. The proposed model achieved 83 percent accuracy on the prepared data.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_41-Detection_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Power Management in Distribution Networks: A Mathematical Modeling Approach for Coordinated Directional Over-Current Relay Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141042</link>
        <id>10.14569/IJACSA.2023.0141042</id>
        <doi>10.14569/IJACSA.2023.0141042</doi>
        <lastModDate>2023-10-30T10:49:51.7500000+00:00</lastModDate>
        
        <creator>Simardeep Kaur</creator>
        
        <creator>Shimpy Ralhan</creator>
        
        <creator>Mangal Singh</creator>
        
        <creator>Mahesh Singh</creator>
        
        <subject>Optimization; Cuckoo Search Algorithm (CSA); Fire-Fly Algorithm (FFA); Harmony Search Algorithm (HSA); Jaya algorithm; directional over-current relays</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This paper presents a study focused on enhancing power management in distribution networks by optimizing the coordination of directional over-current relays. Directional over-current relays are critical components of power distribution systems, designed to safeguard the network against over-current faults while maintaining operational stability. Proper coordination of these relays is vital to ensure that faults are isolated and cleared efficiently without causing extensive disruptions. A mathematical modeling approach is employed to address the optimization of power management in distribution networks, encompassing the development of mathematical models and algorithms that consider factors such as fault types, fault locations, network topology, and relay settings to improve the coordination of directional over-current relays. Different optimization algorithms have been implemented to optimize the operating time of the relays and hence power management. The Cuckoo Search Algorithm (CSA), Fire-Fly Algorithm (FFA), Harmony Search Algorithm (HSA), and Jaya Algorithm are employed to solve the coordination problem for directional over-current relays (DOCRs) on different test systems. The outcomes of this research may have practical applications in power distribution systems, potentially leading to more resilient and responsive networks that better manage power distribution and reduce disruptions during faults and outages.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_42-Optimizing_Power_Management_in_Distribution_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Driven Web Security: Detecting and Preventing Explicit Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141040</link>
        <id>10.14569/IJACSA.2023.0141040</id>
        <doi>10.14569/IJACSA.2023.0141040</doi>
        <lastModDate>2023-10-30T10:49:51.7330000+00:00</lastModDate>
        
        <creator>Ganeshayya Shidaganti</creator>
        
        <creator>Shubeeksh Kumaran</creator>
        
        <creator>Vishwachetan D</creator>
        
        <creator>Tejas B N Shetty</creator>
        
        <subject>Web safety; machine learning; cloud computing; natural language processing; web-scraping; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In today&#39;s digital age, the vast expanse of online content has made it increasingly easy for users to encounter inappropriate text, images, and videos. The repercussions of such exposure are concerning, impacting individuals and society adversely. Exposure to violent content can lead to undesirable effects, including desensitization, aggression, and other harmful outcomes. We utilize a machine learning approach aimed at real-time detection of violence in text, images, and videos embedded in websites. The foundation of this approach is a deep learning model trained on a large dataset of manually labeled images categorized as violent or non-violent. The model achieves high accuracy in identifying violence in images, subsequently filtering violent content out of online platforms. By performing all processing-intensive tasks in the cloud and storing the data in a database, the necessary detection is completed in a shorter time frame, improving the user experience while reducing the processing load on the user&#39;s local system. Violent videos are detected by a CNN model trained on violent and non-violent video data, and emotions in text are detected by an NLP-based algorithm. By implementing this approach, web safety can undergo a significant improvement: users can navigate the web with confidence, free from concerns about accidentally encountering violent content, fostering improved mental health and a more positive online environment. We achieve 67% accuracy in detecting violent content, in approximately 2.5 seconds in the best case.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_40-Deep_Learning_Driven_Web_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Web System with Chatbot Service for Sales Management - A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141039</link>
        <id>10.14569/IJACSA.2023.0141039</id>
        <doi>10.14569/IJACSA.2023.0141039</doi>
        <lastModDate>2023-10-30T10:49:51.7170000+00:00</lastModDate>
        
        <creator>Jorge Barrantes-Saucedo</creator>
        
        <creator>Cristian Garc&#237;a-Leandro</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Rosalynn Ornella Flores-Casta&#241;eda</creator>
        
        <subject>Web system; chatbot; chatbot service; sales management; sales automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The objective of this research was to analyze studies of web systems with chatbot services in the sales management process published between 2018 and 2022, drawing on four databases: Science Direct, Taylor &amp; Francis, IEEE Xplore, and Springer. The PRISMA methodology was applied, selecting 60 manuscripts. The year with the most publications was 2021 (35%); the USA led scientific production with 23.33%; scientific articles were the predominant type of research at 70%; and all were in the English language. Finally, two relevant components were identified regarding the implementation of a web system with a chatbot service for sales management. The first comprises the evaluated aspects, which focus on the analysis of the intelligent system, chatbot, website, Google API, e-commerce, machine learning, IBM service, mobile application, web, relationship with customer service, sales management, digital transformation, information system, algorithm, and innovation. The second comprises the conditioning factors, which refer to the context in which the use of chatbots in sales management occurs, covering technical features such as the algorithm, type of system, chatbot-customer relationship, sales and innovation, and the sales-system relationship.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_39-Implementation_of_a_Web_System_with_Chatbot_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT-based Smart Plug Energy Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141038</link>
        <id>10.14569/IJACSA.2023.0141038</id>
        <doi>10.14569/IJACSA.2023.0141038</doi>
        <lastModDate>2023-10-30T10:49:51.7030000+00:00</lastModDate>
        
        <creator>Lamya Albraheem</creator>
        
        <creator>Haifa Alajlan</creator>
        
        <creator>Najoud aljenedal</creator>
        
        <creator>Lenah Abo Alkhair</creator>
        
        <creator>Sarab Bin Gwead</creator>
        
        <subject>Internet of things; IoT; smart plugs; electricity; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Over the years, considerable efforts have been made to conserve electricity. However, there is still a significant need to explore new technologies and solutions to conserve and enhance the electricity supply. This project discusses research studies and applications conducted in the field of energy control, including a comparison of these applications undertaken to highlight constraints that need to be further addressed. This can be considered the first step in developing a system that helps building owners control their electricity consumption using Internet of Things (IoT) technologies. The main phases of the proposed system are data collection, data analysis, and mobile application development. The project utilizes Wi-Fi smart plugs to collect active power consumption data, which are analyzed in the cloud. The mobile application allows the building owner to manage buildings and to obtain active and accumulated consumption data for plugged-in devices. This paper covers the architecture design of the proposed system, along with its experimentation, testing, and implementation. The application was tested, and the active and accumulated consumption per device and per building were reported. To confirm the accuracy of the active power consumption measurements from the smart plugs, these values were compared with the active power consumption figures measured by the company and shown on the device labels. The results showed that using IoT-based smart plugs gives accurate readings.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_38-An_IoT_based_Smart_Plug_Energy_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recyclable Waste Classification using SqueezeNet and XGBoost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141037</link>
        <id>10.14569/IJACSA.2023.0141037</id>
        <doi>10.14569/IJACSA.2023.0141037</doi>
        <lastModDate>2023-10-30T10:49:51.7030000+00:00</lastModDate>
        
        <creator>Intan Nurma Yulita</creator>
        
        <creator>Firman Ardiansyah</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Muhammad Rasyid Ramdhani</creator>
        
        <creator>Mokhamad Arfan Wicaksono</creator>
        
        <creator>Agus Trisanto</creator>
        
        <creator>Asep Sholahuddin</creator>
        
        <subject>Garbage classification; image; machine learning; SqueezeNet; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The unregulated buildup of waste can result in fires. This phenomenon poses a substantial threat to both the ecological system and human welfare. To tackle this problem, the current study proposes the implementation of machine learning technology to automate the sorting of waste. The methodology incorporates SqueezeNet as an image embedding method in conjunction with XGBoost as the final classifier. This work examines the efficacy of the technique through a comparative analysis with several alternative final classifiers, including LightGBM, XGBoost, CatBoost, Random Forest, SVM, Na&#239;ve Bayes, KNN, and Decision Tree. The experimental results indicate that the integration of SqueezeNet and XGBoost produces the highest performance in garbage classification, as supported by an F1-score of 0.931. SqueezeNet is employed for image embedding, enabling the extraction of salient features from images; this procedure enables the recognition of unique characteristics linked to different classes, which XGBoost may then utilize to enhance classification. XGBoost can also generate a feature importance score, enabling the recognition of the most prominent attributes. This methodology has the capacity to alleviate the fire risk that arises from the accumulation of unregulated trash, making a substantial contribution to environmental conservation and the improvement of public safety.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_37-Recyclable_Waste_Classification_using_SquezeeNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reduced Feature-Set OCR System to Recognize Handwritten Tamil Characters using SURF Local Descriptor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141036</link>
        <id>10.14569/IJACSA.2023.0141036</id>
        <doi>10.14569/IJACSA.2023.0141036</doi>
        <lastModDate>2023-10-30T10:49:51.6870000+00:00</lastModDate>
        
        <creator>Ashlin Deepa R N</creator>
        
        <creator>S. Sankara Narayanan</creator>
        
        <creator>Adithya Padthe</creator>
        
        <creator>Manjula Ramannavar</creator>
        
        <subject>Image processing; feature extraction; Convolutional Neural Networks; SURF; handwritten character recognition; optical character recognition system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>High dimensionality in variable-length feature sets of real datasets negatively impacts the classification accuracy of traditional classifiers. Convolutional Neural Networks (CNNs) with convolution filters have been widely used for handling the classification of high-dimensional image datasets. However, these models require massive amounts of high-dimensional training data, posing a challenge for many image-processing applications. In contrast, traditional feature detectors and descriptors, with a minor trade-off in precision, have shown success in various computer vision tasks. This paper introduces the Nearest Angles (NA) classifier tailored for a handwritten character recognition system, employing Speeded-Up Robust Features (SURF) as local descriptors. These descriptors make local decisions, while global decisions on the test image are accomplished through a ranking-based classification approach. Image similarity scores generated from the SURF descriptors are ranked to make local decisions, and these ranks are then used by the NA classifier to produce a global class similarity score. The proposed method achieves recognition rates of 96.4% for Tamil, 96.5% for Devanagari, and 97% for Telugu handwritten character datasets. Although the proposed approach shows slightly lower accuracy compared to CNN-based models, it significantly reduces the computational complexity and the number of parameters required for the classification tasks. As a result, the proposed method offers a computationally efficient alternative to deep learning models, reducing computational time several-fold without a substantial loss in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_36-A_Reduced_Feature_Set_OCR_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Simplification using Hybrid Semantic Compression and Support Vector Machine for Troll Threat Sentences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141035</link>
        <id>10.14569/IJACSA.2023.0141035</id>
        <doi>10.14569/IJACSA.2023.0141035</doi>
        <lastModDate>2023-10-30T10:49:51.6700000+00:00</lastModDate>
        
        <creator>Juhaida Abu Bakar</creator>
        
        <creator>Nooraini Yusoff</creator>
        
        <creator>Nor Hazlyna Harun</creator>
        
        <creator>Maslinda Mohd Nadzir</creator>
        
        <creator>Salehah Omar</creator>
        
        <subject>Text simplification; semantic compression; machine learning; natural language processing; cyber bullying</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Text Simplification (TS) is an emerging field in Natural Language Processing (NLP) that aims to make complex text more accessible. However, there is limited research on TS in the Malay language, known as Bahasa Malaysia, which is widely spoken in Southeast Asia. The challenges in this domain revolve around data availability, feature engineering, and the suitability of methods for text simplification. Previous studies predominantly employed single methods, such as semantic compression or machine learning with the Support Vector Machine (SVM) classifier, consistently achieving an accuracy of approximately 70% in identifying troll sentences—statements containing threats from online trolls notorious for their disruptive online behavior. This study combines semantic compression and machine learning methods across lexical, syntactic, and semantic levels, utilizing frequency dictionaries as semantic features. Support Vector Machine and Decision Tree classifiers are applied and tested on 6,836 datasets, divided into training and testing sets. When comparing SVM and Decision Tree with and without semantic features, SVM with semantics achieves an average accuracy of 92.37%, while Decision Tree with semantics reaches 91.21%. The proposed TS method is evaluated on troll sentences, which are often associated with cyberbullying. Cyberbullying has been reported to be a significant issue, with Malaysia ranking second worst out of the 28 countries surveyed in Asia. Therefore, the outcomes of the study could potentially offer means, such as machine translation and relation extraction, to help prevent cyberbullying in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_35-Text_Simplification_using_Hybrid_Semantic_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensionality Reduction with Truncated Singular Value Decomposition and K-Nearest Neighbors Regression for Indoor Localization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141034</link>
        <id>10.14569/IJACSA.2023.0141034</id>
        <doi>10.14569/IJACSA.2023.0141034</doi>
        <lastModDate>2023-10-30T10:49:51.6570000+00:00</lastModDate>
        
        <creator>Hang Duong Thi</creator>
        
        <creator>Kha Hoang Manh</creator>
        
        <creator>Vu Trinh Anh</creator>
        
        <creator>Trang Pham Thi Quynh</creator>
        
        <creator>Tuyen Nguyen Viet</creator>
        
        <subject>Dimensionality Reduction; Indoor Positioning System; KNN regression; Truncated Singular Value Decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Indoor localization presents formidable challenges across diverse sectors, encompassing indoor navigation and asset tracking. In this study, we introduce an inventive indoor localization methodology that combines Truncated Singular Value Decomposition (Truncated SVD) for dimensionality reduction with the K-Nearest Neighbors Regressor (KNN Regression) for precise position prediction. The central objective of this proposed technique is to mitigate the complexity of high-dimensional input data while preserving critical information essential for achieving accurate localization outcomes. To validate the effectiveness of our approach, we conducted an extensive empirical evaluation employing a publicly accessible dataset. This dataset covers a wide spectrum of indoor environments, facilitating a comprehensive assessment. The performance evaluation metrics adopted encompass the Root Mean Squared Error (RMSE) and the Euclidean distance error (EDE)—widely embraced in the field of localization. Importantly, the simulated results demonstrated promising performance, yielding an RMSE of 1.96 meters and an average EDE of 2.23 meters. These results surpass the achievements of prevailing state-of-the-art techniques, which typically attain localization accuracies ranging from 2.5 meters to 2.7 meters using the same dataset. The enhanced accuracy in localization can be attributed to the synergy between Truncated SVD&#39;s dimensionality reduction and the proficiency of KNN Regression in capturing intricate spatial relationships among data points. Our proposed approach highlights its potential to deliver heightened precision in indoor localization outcomes, with immediate relevance to real-time scenarios. Future research endeavors involving comprehensive comparative analyses with advanced techniques hold promise in propelling the field of accurate indoor localization solutions forward.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_34-Dimensionality_Reduction_with_Truncated_Singular_Value_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Advanced Distributed Adaptive Control for Multi-SMA Actuators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141033</link>
        <id>10.14569/IJACSA.2023.0141033</id>
        <doi>10.14569/IJACSA.2023.0141033</doi>
        <lastModDate>2023-10-30T10:49:51.6230000+00:00</lastModDate>
        
        <creator>Belkacem Kada</creator>
        
        <creator>Khalid A. Juhany</creator>
        
        <creator>Ibraheem Al-Qadi</creator>
        
        <creator>Mostefa Bourchak</creator>
        
        <subject>Adaptive backstepping; hysteresis; I&amp;I control; L2-gain control; rotary actuator; shape memory alloy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Aerospace applications place high demands on the design of Shape Memory Alloy (SMA) actuators, including accuracy, dependability, high-performance criteria, and cooperative activation. Because of their portability, durability, and performance under extreme conditions, SMAs have found a home in the aerospace industry as single and array actuators. This paper presents the development of a control scheme for thermally activating rotary SMA actuators as single and cooperative actuators. The control scheme is a hybrid adaptive robust control abbreviated as HARC. The immersion and invariance adaptive (I&amp;I adaptive) and L2-gain control frameworks are utilized in developing the HARC approach. To create stable transient responses despite parametric and non-parametric errors, recursive backstepping is utilized for asymptotic stability, while L2-gain control is applied to ensure the global stability of the transient closed-loop system; the two techniques are used in conjunction with one another. In contrast to the conventional I&amp;I, the robust control law can be developed without needing a target system or the solution of PDEs to satisfy the I&amp;I condition. The parametric uncertainty is estimated with the help of an adaptive rule, and the non-parametric uncertainty brought on by the phase change of the SMA material and modeling errors is accounted for with the help of asymptotic nonlinear functions. The designed HARC is then extended to cover the actuation of multi-SMA or array actuators to respond to the increasing demand for cooperative controllers using distributed control protocols. It has been demonstrated through simulation testing on a rotational NiTi SMA actuator that the suggested control approach is both practical and resilient.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_33-Design_of_an_Advanced_Distributed_Adaptive_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Sampling Methods for Dealing with Imbalanced Wearable Sensor Data in Human Activity Recognition using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141032</link>
        <id>10.14569/IJACSA.2023.0141032</id>
        <doi>10.14569/IJACSA.2023.0141032</doi>
        <lastModDate>2023-10-30T10:49:51.6100000+00:00</lastModDate>
        
        <creator>Mariam El Ghazi</creator>
        
        <creator>Noura Aknin</creator>
        
        <subject>Human activity recognition (HAR); class imbalance; sampling methods; wearable sensors; deep learning; synthetic minority over-sampling technique (SMOTE); random undersampling; PAMAP2 dataset; bayesian optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Human Activity Recognition (HAR) holds significant implications across diverse domains, including healthcare, sports analytics, and human-computer interaction. Deep learning models demonstrate great potential in HAR, but performance is often hindered by imbalanced datasets. This study investigates the impact of class imbalance on deep learning models in HAR and conducts a comprehensive comparative analysis of various sampling techniques to mitigate this issue. The experimentation involves the PAMAP2 dataset, encompassing data collected from wearable sensors. The research includes four primary experiments. Initially, a performance baseline is established by training four deep-learning models on the imbalanced dataset. Subsequently, Synthetic Minority Over-sampling Technique (SMOTE), random under-sampling, and a hybrid sampling approach are employed to rebalance the dataset. In each experiment, Bayesian optimization is employed for hyperparameter tuning, optimizing model performance. The findings underscore the paramount importance of dataset balance, resulting in substantial improvements across critical performance metrics such as accuracy, F1 score, precision, and recall. Notably, the hybrid sampling technique, combining SMOTE and Random Undersampling, emerges as the most effective method, surpassing other approaches. This research contributes significantly to advancing the field of HAR, highlighting the necessity of addressing class imbalance in deep learning models. Furthermore, the results offer practical insights for the development of HAR systems, enhancing accuracy and reliability in real-world applications. Future works will explore alternative public datasets, more complex deep learning models, and diverse sampling techniques to further elevate the capabilities of HAR systems.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_32-A_Comparison_of_Sampling_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Seamless Data Exchange: Advancing Healthcare with Cross-Chain Interoperability in Blockchain for Electronic Health Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141031</link>
        <id>10.14569/IJACSA.2023.0141031</id>
        <doi>10.14569/IJACSA.2023.0141031</doi>
        <lastModDate>2023-10-30T10:49:51.5930000+00:00</lastModDate>
        
        <creator>Reval Prabhu Puneeth</creator>
        
        <creator>Govindaswamy Parthasarathy</creator>
        
        <subject>Electronic health records; data sharing scheme; blockchain technology; solidity smart contract; cross-chain interoperability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The rapid digitization of healthcare records has led to the accumulation of vast amounts of sensitive patient data, stored across various systems and platforms. To ensure the secure and efficient exchange of Electronic Health Records (EHRs) among healthcare providers, researchers, and patients themselves, the concept of cross-chain interoperability within blockchain technology emerges as a promising solution. Nevertheless, existing blockchain platforms exhibit several limitations. In order to address the issue of non-interoperability, the suggested method involves creating a connection between two similar blockchain networks. This solution is exemplified through the use of an Electronic Health Records (EHR) structure, which is distributed across distinct Ethereum Testnets and implemented via a Solidity Smart Contract. The paper aims to demonstrate the viability of bridging the gap and fostering seamless interoperability between blockchain networks. However, establishing effective communication between these smart contracts proves to be a complex endeavor, whether within a singular blockchain or spanning multiple blockchains. This complexity presents a formidable obstacle, particularly when diverse hospitals require the sharing or exchange of critical information. Consequently, a solution becomes imperative to facilitate cross-chain communication among smart contracts. This solution provides seamless operation both within the confines of a single blockchain and across disparate blockchains. By achieving this, cross-chain interoperability can be realized, enabling distinct blockchain networks to mutually comprehend and actively engage with each other.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_31-Seamless_Data_Exchange_Advancing_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dance Motion Detection Algorithm Based on Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141030</link>
        <id>10.14569/IJACSA.2023.0141030</id>
        <doi>10.14569/IJACSA.2023.0141030</doi>
        <lastModDate>2023-10-30T10:49:51.5770000+00:00</lastModDate>
        
        <creator>Yan Wang</creator>
        
        <creator>Zhiguo Wu</creator>
        
        <subject>Dance motion detection; computer vision; human posture recognition; Kinect 3D sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Human posture recognition is an essential link in the development of human-computer interaction. Existing dance movement training methods often require students to repeatedly watch videos or have a tutor correct them during practice to achieve good results, which not only consumes considerable time and energy but also creates difficulties and challenges for students. The research goal of this paper was to use computer recognition technology to detect dance movements and identify body postures. This paper develops a Kinect dance auxiliary training system based on the body skeleton tracking technology of the Kinect 3D sensor. The paper not only introduces a fixed-axis-based expression method for joint angles to improve the stability of joint angles but also improves the body position detection algorithm using joint-point angles to achieve accurate recognition of human body posture. In the experiment, when the trainee&#39;s arm was raised to its highest position it still failed to meet the requirement, indicating that the wrist needed to be raised a further 200 mm; the hand was also retracted too quickly, deviating from the standard action. The test results showed that the system could effectively improve the dance movements of the students.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_30-Dance_Motion_Detection_Algorithm_Based_on_Computer_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced System for Computer-Aided Detection of MRI Brain Tumors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141028</link>
        <id>10.14569/IJACSA.2023.0141028</id>
        <doi>10.14569/IJACSA.2023.0141028</doi>
        <lastModDate>2023-10-30T10:49:51.5630000+00:00</lastModDate>
        
        <creator>Abdullah Alhothali</creator>
        
        <creator>Ali Samkari</creator>
        
        <creator>Umar S. Alqasemi</creator>
        
        <subject>Computer-aided detection; MRI; brain tumor; MATLAB; machine learning; support vector machine; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The categorization of brain images into normal or abnormal categories is a critical task in medical imaging analysis. In this research, we propose a software solution that automatically classifies MRI brain scans as normal or abnormal, specifically focusing on glioblastoma as an abnormal condition. The software utilizes first-order statistical features extracted from brain images and employs seven different classifiers, including Support Vector Machine (SVM) and K-Nearest Neighbors (KNN), for classification. The performance of the classifiers was evaluated using an open-source dataset, and our findings showed that SVM and KNN classifiers performed equally well in accurately categorizing brain scans. However, further improvements can be made by incorporating more images and features to enhance the accuracy of the classifier. The developed software has the potential to assist healthcare professionals in efficiently identifying abnormal brain scans, particularly in cases of glioblastoma, which could aid in early detection and timely intervention. Further research and development in this area could contribute to the advancement of healthcare technology and patient care.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_28-Enhanced_System_for_Computer_Aided_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Entanglement Classification for Three-qubit Pure Quantum System using Special Linear Group under the SLOCC Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141029</link>
        <id>10.14569/IJACSA.2023.0141029</id>
        <doi>10.14569/IJACSA.2023.0141029</doi>
        <lastModDate>2023-10-30T10:49:51.5630000+00:00</lastModDate>
        
        <creator>Amirul Asyraf Zhahir</creator>
        
        <creator>Siti Munirah Mohd</creator>
        
        <creator>Mohd Ilias M Shuhud</creator>
        
        <creator>Bahari Idrus</creator>
        
        <creator>Hishamuddin Zainuddin</creator>
        
        <creator>Nurhidaya Mohamad Jan</creator>
        
        <creator>Mohamed Ridza Wahiddin</creator>
        
        <subject>Quantum entanglement; entanglement classification; three-qubit quantum system; special linear group; SL(2); stochastic local operations and classical communication; SLOCC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Quantum technology has been introduced in Industry 4.0, breeding a new era of advanced technology that is revolutionizing the future. Hence, understanding quantum entanglement, the key resource of quantum technology, is vital. Growing interest in quantum technologies has prompted comprehensive studies of quantum entanglement, especially entanglement classification. Multipartite entanglement classification using the Special Linear group SL(n) under the SLOCC protocol is not widely studied because of its complex structure, which has hindered the development of classification methods. Therefore, this paper develops and delivers a classification method for pure multipartite three-qubit quantum states using the Special Linear group model operator SL(2) x SL(2) x SL(2) under SLOCC, classifying entanglement with the model operator under certain selected parameters. Further analysis determined six subgroups, namely fully separable (A-B-C), bi-separable (A-BC, B-AC and C-AB), and genuinely entangled (W and GHZ).</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_29-Entanglement_Classification_for_Three_qubit_Pure_Quantum_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Deep Learning Method for Video Summarization Based on the User Object of Interest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141027</link>
        <id>10.14569/IJACSA.2023.0141027</id>
        <doi>10.14569/IJACSA.2023.0141027</doi>
        <lastModDate>2023-10-30T10:49:51.5470000+00:00</lastModDate>
        
        <creator>Hafiz Burhan Ul Haq</creator>
        
        <creator>Watcharapan Suwansantisuk</creator>
        
        <creator>Kosin Chamnongthai</creator>
        
        <subject>Video summarization; deep learning; user object of interest; surveillance systems; SumMe</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Surveillance video now plays a vital role in maintaining security and protection thanks to advances in digital video technology. Businesses, both private and public, employ surveillance systems to monitor and track their daily operations. As a result, video generates a significant volume of data that must be further processed to satisfy security protocol requirements. Analyzing video demands substantial effort and time, as well as fast equipment. To overcome these limitations, the concept of video summarization has emerged. In this study, a deep learning-based method for customized video summarization is presented. The method enables users to produce a video summary in accordance with a User Object of Interest (UOoI), such as a car, airplane, person, or bicycle. Several experiments were conducted on two datasets, SumMe and a self-created dataset, to assess the efficiency of the proposed method. On SumMe and the self-created dataset, the overall accuracy is 98.7% and 97.5%, respectively, with summarization rates of 93.5% and 67.3%. Furthermore, a comparison study demonstrates that the proposed method is superior to existing methods in terms of video summarization accuracy and robustness. Additionally, a graphical user interface was created to assist the user in summarizing a video using the UOoI.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_27-An_Optimized_Deep_Learning_Method_for_Video_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Structural Health Monitoring Advances Based on Internet of Things (IoT) Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141025</link>
        <id>10.14569/IJACSA.2023.0141025</id>
        <doi>10.14569/IJACSA.2023.0141025</doi>
        <lastModDate>2023-10-30T10:49:51.5300000+00:00</lastModDate>
        
        <creator>Hao DENG</creator>
        
        <creator>JianHua CHEN</creator>
        
        <subject>Structural health monitoring; civil structures; internet of things; sensors; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Structural Health Monitoring (SHM) is a technique that ensures the safety and reliability of structures through continuous, real-time monitoring. IoT-based sensors have become a popular solution for implementing SHM systems, and research in this area is essential for improving the accuracy and reliability of SHM systems. A review of the current state-of-the-art is necessary to identify the challenges and opportunities for further development of SHM systems based on IoT sensors. This study first presents a comprehensive survey of SHM, focusing on IoT sensors. Second, it establishes a categorization of current civil structural monitoring methods and addresses their advantages and disadvantages. Third, an analysis is performed and the results are compared across the civil structural monitoring methods. Finally, key features of the methods are discussed and summarized, and some directions for future studies are presented.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_25-A_Survey_of_Structural_Health_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Whale Optimization Algorithm for Energy-Efficient Task Allocation in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141026</link>
        <id>10.14569/IJACSA.2023.0141026</id>
        <doi>10.14569/IJACSA.2023.0141026</doi>
        <lastModDate>2023-10-30T10:49:51.5300000+00:00</lastModDate>
        
        <creator>Shan YANG</creator>
        
        <creator>Renping YU</creator>
        
        <creator>Xin JIN</creator>
        
        <subject>Task allocation; internet of things; energy efficiency; optimization; whale optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The Internet of Things (IoT) represents a new paradigm where various physical devices interact and collaborate to achieve common goals. This technology encompasses sensors, mobile phones, actuators, and other smart devices that work together to perform tasks and applications. To ensure optimal performance of these tasks and applications, task allocation becomes a critical aspect of IoT networks. Task allocation in IoT networks is a complex problem due to the intricate connections and interactions among devices; it is generally recognized as NP-hard, necessitating the development of effective optimization solutions. This paper proposes a solution using the Whale Optimization Algorithm (WOA) to address the task allocation problem in IoT networks. By leveraging the capabilities of the WOA, our algorithm aims to improve energy efficiency and enhance network stability. The performance of the algorithm was tested comprehensively on the MATLAB simulation platform. The findings show that our algorithm outperforms existing algorithms in the literature, especially in terms of energy efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_26-Whale_Optimization_Algorithm_for_Energy_Efficient_Task_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detectors in Autonomous Vehicles: Analysis of Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141024</link>
        <id>10.14569/IJACSA.2023.0141024</id>
        <doi>10.14569/IJACSA.2023.0141024</doi>
        <lastModDate>2023-10-30T10:49:51.5170000+00:00</lastModDate>
        
        <creator>Lei Du</creator>
        
        <subject>Autonomous vehicles; object detection; deep learning; two-stage object detectors; one-stage object detectors; comprehensive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Autonomous vehicles have emerged as a transformative technology with wide-ranging implications for smart cities, revolutionizing transportation systems and optimizing urban mobility. Object detection plays a crucial role in autonomous vehicles, accurately identifying and localizing pedestrians, vehicles, and traffic signs for safe navigation. Deep learning-based approaches have revolutionized object detection, leveraging deep neural networks to extract intricate features from visual data, enabling superior performance in various domains. Two-stage algorithms like R-FCN and Mask R-CNN focus on precise object localization and instance-level segmentation, while one-stage algorithms like SSD, RetinaNet, and YOLO offer real-time performance through single-pass processing. To advance object detection for autonomous vehicles, comprehensive studies are needed, particularly on two-stage and one-stage algorithms. This study aims to conduct an in-depth analysis, evaluating the strengths, limitations, and performance of R-FCN, Mask R-CNN, SSD, RetinaNet, and YOLO algorithms in the context of autonomous vehicles and smart cities. The research contributions include a thorough analysis of two-stage algorithms, a comprehensive examination of one-stage algorithms, and a comparison of different YOLO variants to highlight their advantages and drawbacks in object detection tasks.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_24-Object_Detectors_in_Autonomous_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Vision-based Human Posture Detection Approach for Smart Home Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141023</link>
        <id>10.14569/IJACSA.2023.0141023</id>
        <doi>10.14569/IJACSA.2023.0141023</doi>
        <lastModDate>2023-10-30T10:49:51.5000000+00:00</lastModDate>
        
        <creator>Yangxia Shu</creator>
        
        <creator>Lei Hu</creator>
        
        <subject>Posture identification; smart home applications; vision-based recognition; YOLO network; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Effective posture identification in smart home applications is a challenging problem whose solution could reduce the occurrence of improper postures. Vision-based posture identification has been used to construct systems for identifying people&#39;s postures. However, the system complexity, low accuracy rate, and slow identification speed of existing vision-based systems make them unsuitable for smart home applications. The goal of this work is to address these issues by creating a vision-based posture recognition system that can recognize human posture and be used in smart home applications. The suggested method involves training and testing a You Only Look Once (YOLO) network to identify the postures. The approach builds on YOLOv5, which provides a high accuracy rate and satisfactory speed in posture detection. Experimental results show the effectiveness of the developed system for posture recognition in smart home applications.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_23-A_Vision_based_Human_Posture_Detection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking the LGBM, Random Forest, and XGBoost Models Based on Accuracy in Classifying Melon Leaf Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141022</link>
        <id>10.14569/IJACSA.2023.0141022</id>
        <doi>10.14569/IJACSA.2023.0141022</doi>
        <lastModDate>2023-10-30T10:49:51.4830000+00:00</lastModDate>
        
        <creator>Chaerur Rozikin</creator>
        
        <creator>Agus Buono</creator>
        
        <creator>Sri Wahjuni</creator>
        
        <creator>Chusnul Arif</creator>
        
        <creator>Widodo</creator>
        
        <subject>Classification; Downy mildew; LGBM; disease level; melon leaves</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Leaf diseases in melon plants cause losses for melon farmers: diseased plants become less productive or even die. Downy mildew is a foliar disease that spreads rapidly in melon plants, so determining its severity level on melon leaves is important. By determining the level of downy mildew disease, farmers can carry out preventive treatment appropriate to its severity. This study aimed to create a classification model for the level of downy mildew disease on melon leaves using combined features and to compare three classification models: LGBM, Random Forest, and XGBoost. The combined features consist of colour, texture, Shannon entropy, and Canny edge features, and are used as input to a classification model that predicts the level of downy mildew leaf disease in melon plants. Model evaluation was carried out with three data-splitting scenarios: the first with 90% training data and 10% test data; the second with 80% training data and 20% test data; and the third with 70% training data and 30% test data. Evaluation with the confusion matrix shows that for the first and second scenarios, the highest accuracy was achieved by the Random Forest algorithm, with 72% and 73% accuracy, respectively. For the third scenario, the highest accuracy was obtained using the XGBoost algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_22-Benchmarking_the_LGBM_Random_Forest_and_XGBoost_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality in Training: A Case Study on Investigating Immersive Training for Prisoners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141021</link>
        <id>10.14569/IJACSA.2023.0141021</id>
        <doi>10.14569/IJACSA.2023.0141021</doi>
        <lastModDate>2023-10-30T10:49:51.4670000+00:00</lastModDate>
        
        <creator>Abdulaziz Alshaer</creator>
        
        <subject>Component; virtual reality; correctional services; technology acceptance; rehabilitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>This study addresses the pressing issue of prison rehabilitation by comparing traditional and Virtual Reality (VR) based training services offered by the General Directorate of Prisons in Saudi Arabia. Utilising Technology Acceptance Model (TAM) metrics such as perceived usefulness, ease of use, and enjoyment, the study evaluates the acceptance of VR technologies across two different headset platforms. Findings reveal that VR-based training services received significantly higher acceptance ratings than traditional methods. Both VR platforms were highly rated in terms of perceived usefulness, ease of use, and enjoyment but showed no significant differences between the headsets. These results indicate that VR-based methods could be more effective, engaging, and safer alternatives in correctional rehabilitation programs. Importantly, this research contributes to the field of Human-Computer Interaction (HCI) by suggesting design frameworks tailored for effective interventions in training and rehabilitative contexts where safety and psychological health are of high concern.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_21-Virtual_Reality_in_Training_A_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of an Intelligent Robot Path Recognition System Supported by Deep Learning Network Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141019</link>
        <id>10.14569/IJACSA.2023.0141019</id>
        <doi>10.14569/IJACSA.2023.0141019</doi>
        <lastModDate>2023-10-30T10:49:51.4530000+00:00</lastModDate>
        
        <creator>Jiong Chen</creator>
        
        <subject>Deep learning; reinforcement learning; intelligent robots; path planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, intelligent robots have been widely used in fields such as express transportation, industrial automation, and healthcare, bringing great convenience to people&#39;s lives. As one of the core technologies of intelligent robots, path planning has become a research highlight in the field of robotics. To achieve path planning in unknown environments, a path planning algorithm based on an improved double deep Q-network (DDQN) is proposed. In simple and complex grid environments, the paths planned by the improved DDQN have 4 and 9 inflection points, respectively, with path lengths of 27.21 m and 28.63 m, both less than those of the standard DDQN and the adaptive ant colony optimization algorithm. The average reward values of the improved DDQN in simple and complex environments are 1.12 and 1.02, respectively, higher than those of the standard DDQN. In a random environment, the lowest probability of the improved DDQN successfully reaching the destination without colliding with obstacles is 95.1%, higher than that of the other two algorithms. In the Gazebo environment, when the number of iterations reaches 2000, the average cumulative reward value of the improved DDQN is positive, and it exceeds 500 within the iteration ranges 3500-4000 and 4000-4500. The average cumulative reward value of the standard DDQN is positive only within the iteration ranges 2500-3000 and 3000-3500, and does not exceed 100. According to the findings, the path planning ability of the improved DDQN is better than that of the standard DDQN and the adaptive ant colony optimization algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_19-Construction_of_an_Intelligent_Robot_Path_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing Ensemble in Machine Learning for Accurate Early Prediction and Prevention of Heart Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141020</link>
        <id>10.14569/IJACSA.2023.0141020</id>
        <doi>10.14569/IJACSA.2023.0141020</doi>
        <lastModDate>2023-10-30T10:49:51.4530000+00:00</lastModDate>
        
        <creator>Mohammad Husain</creator>
        
        <creator>Pankaj Kumar</creator>
        
        <creator>Mohammad Nadeem Ahmed</creator>
        
        <creator>Arshad Ali</creator>
        
        <creator>Mohammad Ashiquee Rasool</creator>
        
        <creator>Mohammad Rashid Hussain</creator>
        
        <creator>Muhammad Shahid Dildar</creator>
        
        <subject>Heart disease; machine learning; predictive modeling; cardiovascular disorders; medical diagnosis; feature selection; model evaluation; public health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Cardiovascular diseases (CVDs) remain a significant global health concern, demanding precise and early prediction methods for effective intervention. In this comprehensive study, various machine learning algorithms were rigorously evaluated to identify the most accurate approach for forecasting heart disease. Through meticulous analysis, it was established that precision, recall, and the F1-score are critical metrics, overshadowing the mere accuracy of predictions. Among the classifiers explored, the Decision Tree (DT) and Random Forest (RF) algorithms emerged as the most proficient, boasting remarkable accuracy rates of 96.75%. The DT Classifier exhibited a precision rate of 97.81% and a recall rate of 95.73%, resulting in an exceptional F1-score of 96.76%. Similarly, the RF Classifier achieved an outstanding precision rate of 95.85% and a recall rate of 97.88%, yielding an exemplary F1-score of 96.85%. In stark contrast, other methods, including Logistic Regression, Support Vector Machine, and K-Nearest Neighbor, demonstrated inferior predictive capabilities. This study conclusively establishes the combination of Decision Tree and Random Forest algorithms as the most potent and dependable approach for predicting cardiac illnesses, providing a groundbreaking avenue for early intervention and personalized patient care. These findings signify a significant advancement in the field of predictive healthcare analytics, offering a robust framework for enhancing healthcare strategies related to cardiovascular diseases.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_20-Harnessing_Ensemble_in_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Depression in News Articles Before and After the COVID-19 Pandemic Based on Unsupervised Learning and Latent Dirichlet Allocation Topic Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141018</link>
        <id>10.14569/IJACSA.2023.0141018</id>
        <doi>10.14569/IJACSA.2023.0141018</doi>
        <lastModDate>2023-10-30T10:49:51.4370000+00:00</lastModDate>
        
        <creator>Seonjae Been</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>COVID-19; depression; news articles; LDA topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>As of 2023, South Korea maintains the highest suicide rate among OECD countries, accompanied by a notably high prevalence of depression. The onset of the COVID-19 pandemic in 2020 further exacerbated the prevalence of depression, attributed to shifts in lifestyle and societal factors. In this research, differences in depression-related keywords were analyzed using a news big data set, comprising 45,376 news articles from January 1st, 2016 to November 30th, 2019 (pre-COVID-19 pandemic) and 50,311 news articles from December 1st, 2019 to May 5th, 2023 (post-pandemic declaration). Latent Dirichlet Allocation (LDA) topic modeling was utilized to discern topics pertinent to depression. LDA topic modeling outcomes indicated the emergence of topics related to suicide and depression in association with COVID-19 following the pandemic&#39;s onset. Exploring strategies to manage such scenarios during future infectious disease outbreaks becomes imperative.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_18-Analysis_of_Depression_in_News_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prediction of South African Public Twitter Opinion using a Hybrid Sentiment Analysis Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141017</link>
        <id>10.14569/IJACSA.2023.0141017</id>
        <doi>10.14569/IJACSA.2023.0141017</doi>
        <lastModDate>2023-10-30T10:49:51.4200000+00:00</lastModDate>
        
        <creator>Matthew Brett Shackleford</creator>
        
        <creator>Timothy Temitope Adeliyi</creator>
        
        <creator>Seena Joseph</creator>
        
        <subject>Sentiment analysis; opinion mining; machine learning; government; public service delivery; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Sentiment analysis, a subfield of Natural Language Processing, has garnered a great deal of attention within the research community. To date, numerous sentiment analysis approaches have been adopted and developed by researchers to suit a variety of application scenarios. This consistent adaptation has allowed for the optimal extraction of the author&#39;s emotional intent within text. A contributing factor to the growth in application scenarios is the mass adoption of social media platforms and the boundless topics of discussion they hold. For governments, organizations, and other parties, these opinions hold vital insight into public mindset, welfare, and intent. Successful utilization of these insights could lead to better methods of addressing the public and, in turn, could improve the overall state of public well-being. In this study, a framework using a hybrid sentiment analysis approach was developed. Various combinations were created, each consisting of a simplified version of the Valence Aware Dictionary and sEntiment Reasoner (VADER) lexicon and an instance of a classical machine learning algorithm. A total of 67,585 public opinion-oriented Tweets created in 2020 and applicable to the South African (ZA) domain were analyzed. The developed hybrid sentiment analysis approaches were compared against one another using well-known performance metrics. The results show that the hybrid of the simplified VADER lexicon and the Medium Gaussian Support Vector Machine (MGSVM) algorithm outperformed the other seven hybrid algorithms. The Twitter dataset utilized serves to demonstrate model capability, specifically within the ZA context.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_17-A_Prediction_of_South_African_Public_Twitter_Opinion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Rubric Ontology in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141016</link>
        <id>10.14569/IJACSA.2023.0141016</id>
        <doi>10.14569/IJACSA.2023.0141016</doi>
        <lastModDate>2023-10-30T10:49:51.4070000+00:00</lastModDate>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Nur Fadila Akma Mamat</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Noor Azliza Che Mat</creator>
        
        <subject>Assessment; higher education; learning outcomes; Malaysia Qualification Framework (MQF); ontology; rubric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Assessing students is a common practice in educational settings. Students are evaluated using several methods or tools to determine how well they have acquired knowledge or progressed. There are two distinct types of assessment: summative and formative. Rubrics are used to evaluate student performance. However, developing a rubric is challenging because subject-matter expertise is required. Ontology has been utilized in certain research to represent knowledge relevant to rubrics, but these studies do not map to the important learning outcomes. In Malaysia, rubrics are developed to support outcome-based education (OBE) based on the Malaysia Qualification Framework (MQF). It is essential to discover whether the technology supports rubrics that leverage learning outcomes to produce the best possible rubric. A systematic literature review (SLR) was used to carry out this analysis, covering 42 papers published in the years 2018 through 2022. In conclusion, the key finding of this work is that rubric-based outcome learning is the most recent research area to receive attention, and only a small number of studies have used ontologies to develop rubrics based on learning outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_16-Systematic_Review_of_Rubric_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation Method of Physical Education Students&#39; Mental Health based on Multi-source and Heterogeneous Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141015</link>
        <id>10.14569/IJACSA.2023.0141015</id>
        <doi>10.14569/IJACSA.2023.0141015</doi>
        <lastModDate>2023-10-30T10:49:51.3900000+00:00</lastModDate>
        
        <creator>YongCheng WU</creator>
        
        <subject>Multi-source heterogeneous data; sports major; students; mental health; assess the situation; confidence level; linear regression analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>To enhance the ability to evaluate the mental health status of physical education students, a method of evaluating the mental well-being of physical education students based on multi-source heterogeneous data mining is proposed. A fuzzy information detection model of multi-source heterogeneous data on the mental health status of physical education students is constructed, with four factors as dependent variables: compulsion, interpersonal sensitivity, hostility, and depression. Combined with the hierarchical index parameter detection and analysis method, the statistical analysis of the multi-source heterogeneous information is accomplished. Based on the factor extraction outcomes of the multi-source heterogeneous information, combined with the subspace heterogeneous fusion method, an estimated-parameter feature clustering model is established. Combining the results of characteristic distributed clustering and linear regression analysis, the evaluation of the mental well-being of physical education students is realized. The results of empirical analysis show that this method has higher accuracy and better feature resolution in evaluating the mental well-being of physical education students, improving the reliability and confidence level of the assessment.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_15-Evaluation_Method_of_Physical_Education_Students_Mental_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning-based Answer Selection with Class Imbalance Handling and Efficient Differential Evolution Initialization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141014</link>
        <id>10.14569/IJACSA.2023.0141014</id>
        <doi>10.14569/IJACSA.2023.0141014</doi>
        <lastModDate>2023-10-30T10:49:51.3600000+00:00</lastModDate>
        
        <creator>Jia Wei</creator>
        
        <subject>Answer selection; imbalanced classification; reinforcement learning; DistilBERT; differential evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Answer selection (AS) involves the task of selecting the best answer from a given list of potential options. Current methods commonly approach the AS problem as a binary classification task, using pairs of positive and negative samples. However, the number of negative samples is usually much larger than the positive ones, resulting in a class imbalance. Training on imbalanced data can negatively impact classifier performance. To address this issue, a novel reinforcement learning-based technique is proposed in this study. In this approach, the AS problem is formulated as a sequence of sequential decisions, where an agent classifies each received instance and receives a reward at each step. To handle the class imbalance, the reward assigned to the majority class is lower than that for the minority class. The parameters of the policy are initialized using an improved Differential Evolution (DE) technique. To enhance the efficiency of the DE algorithm, a novel cluster-based mutation operator is introduced. This operator utilizes the K-means clustering approach to identify the winning cluster and employs an upgrade strategy to incorporate potentially viable solutions into the existing population. For word embedding, the DistilBERT model is utilized, which reduces the size of the BERT (Bidirectional encoder representations from transformers) model by 40% and improves computational efficiency by running 60% faster. Despite the decrease, the DistilBERT model maintains 97% of its language comprehension abilities by utilizing knowledge distillation in the pretraining phase. Extensive experiments are carried out on LegalQA, TrecQA, and WikiQA datasets to assess the suggested model. The outcomes showcase the superiority of the proposed model over existing techniques in the domain of AS.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_14-Reinforcement_Learning_based_Answer_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Greenhouse Horticulture Automation with Crops Protection by using Arduino</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141013</link>
        <id>10.14569/IJACSA.2023.0141013</id>
        <doi>10.14569/IJACSA.2023.0141013</doi>
        <lastModDate>2023-10-30T10:49:51.3430000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Chee Kai Hern</creator>
        
        <creator>Mohd Riduan AHMAD</creator>
        
        <creator>Vadym Shkarupylo</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>IoT-based; automation greenhouse intrusion detection and prevention; real-time monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Agriculture contributes significantly to economic growth, generating employment opportunities and stimulating small-scale agricultural enterprises. However, unforeseeable weather patterns, natural disasters, and unwelcome intruders are significant threats that bring severe financial losses to the owner. To overcome these challenges, this study aims to develop an IoT-based automated greenhouse integrated with an intrusion detection and prevention system. The automated greenhouse provides optimal environmental conditions for crop growth and enhances agricultural productivity, while the intrusion detection and prevention components detect and respond immediately to intruder approaches. The IoT-based system provides real-time monitoring and control, as instant intrusion notifications are sent to the user remotely through a mobile application. Thus, the IoT-based automated greenhouse provides sustainable environmental conditions for crop growth and reduces crop losses from threats.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_13-Greenhouse_Horticulture_Automation_with_Crops_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Configuration of Deep Learning Algorithms for an Arabic Named Entity Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141012</link>
        <id>10.14569/IJACSA.2023.0141012</id>
        <doi>10.14569/IJACSA.2023.0141012</doi>
        <lastModDate>2023-10-30T10:49:51.3270000+00:00</lastModDate>
        
        <creator>AZROUMAHLI Chaimae</creator>
        
        <creator>MOUHIB Ibtihal</creator>
        
        <creator>El YOUNOUSSI Yacine</creator>
        
        <creator>BADIR Hassan</creator>
        
        <subject>Algorithm automatic configuration; natural language processing; named entity recognition; word embeddings; finetuning; irace</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Word embedding models have been widely used by many researchers to extract linguistic features for Natural Language Processing (NLP) tasks. However, the creation of an adequate Word embedding model depends on choosing the right language model method and architecture, in addition to finetuning the various parameters of the language model. Each parameter combination could result in a different model, and each model can behave differently according to the targeted NLP task. In this paper, we present an approach that combines a range of Word embedding models, multiple clustering and classification methods, and Irace for automatic algorithm configuration. The goal is to facilitate the construction of the most accurate Arabic Named Entity Recognition (NER) model for our dataset. Our approach involves the creation of different Word embedding models, the implementation of these models in different classification and clustering methods, and finetuning these implementations with different parameter combinations to create an Arabic NER System with the highest accuracy rate.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_12-Automatic_Configuration_of_Deep_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Method for Implementing Applications of Smart Devices Based on Mobile Fog Processing in a Secure Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141011</link>
        <id>10.14569/IJACSA.2023.0141011</id>
        <doi>10.14569/IJACSA.2023.0141011</doi>
        <lastModDate>2023-10-30T10:49:51.3130000+00:00</lastModDate>
        
        <creator>Huaibao Ding</creator>
        
        <creator>Xiaomei Ding</creator>
        
        <creator>Fang Xia</creator>
        
        <creator>Fei Zhou</creator>
        
        <subject>Cloud environment; IoT; real-time systems; smart devices; mobile fog; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Smart technology and the Internet of Things (IoT) are advancing and growing daily in the modern world. As the number of smart devices in our surroundings increases, so does the demand for solutions that execute complex applications while protecting user security and privacy. Mobile fog processing aids in this situation by providing a fresh and effective method for running smart device applications in a secure setting. Due to latency and the high volume of requests, the centralized, traditional architecture of cloud processing cannot handle high user demand or effectively run delay-sensitive, real-time programs. To address these issues, a virtual mobile fog processing architecture that establishes a layer between mobile apps and the cloud layer was developed in this work. In this layer, storage, processing, and encrypted communication occur on separate nodes not connected to the cloud; these nodes are implemented virtually on a single server. An Android-based augmented reality application that uses a marker to display dynamic 3D objects has been introduced, and its functioning has been assessed under both the cloud-based architecture and the suggested architecture on two mobile internet networks (4G and a telecom network). The evaluation findings demonstrate the suggested architecture&#39;s superior performance on both communication networks: the suggested mobile fog-based architecture creates high-volume 3D models quickly enough to satisfy a real-time application. In addition to these accomplishments, the results demonstrate that the suggested architecture outperforms the typical cloud-based architecture by lowering overall energy consumption by up to 34%.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_11-An_Efficient_Method_for_Implementing_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensembling of Attention-based Recurrent Units for Detection and Mitigation of Multiple Attacks in Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141010</link>
        <id>10.14569/IJACSA.2023.0141010</id>
        <doi>10.14569/IJACSA.2023.0141010</doi>
        <lastModDate>2023-10-30T10:49:51.2970000+00:00</lastModDate>
        
        <creator>Kalaivani M</creator>
        
        <creator>Padmavathi G</creator>
        
        <subject>Multiple threats; deep learning algorithm; attention enabled gated recurrent networks; NSL-KDD; UNSW; CIDC-001</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>In recent years, the number of threats to network security has grown exponentially with the number of Internet users, posing a serious threat to cloud storage applications. Detecting and defending against multiple threats is currently a hot topic in industry and considered one of the challenging research problems in academia. Many methodologies and algorithms have been devised to predict different attacks. Still, most methods cannot simultaneously achieve high prediction performance with a small number of false alarms. In this scenario, Deep Learning (DL) algorithms are appropriate and intelligent for categorizing multiple attacks. However, most existing DL techniques are computationally inefficient, which may degrade performance in predicting both normal and attack traffic. To overcome this problem, this paper proposes a hybrid combination of attention maps with deep recurrent networks to mitigate multiple attacks with low computational overhead. Initially, a pre-processing step scales the inputs to a specified range. The input data are then fed into the Attention Enabled Gated Recurrent Networks (AEGRN), which remove redundant features and select the optimal features that aid better classification. To further enhance response speed, deep feed-forward layers are proposed to replace the traditional deep neural networks. Numerous performance metrics, including accuracy, precision, recall, specificity, and F1-score, are examined and analyzed as part of thorough experimentation utilizing multiple datasets, including NSL-KDD-99, UNSW-2019, and CIDC-001. Performance comparisons between the suggested method and existing DL models demonstrate the proposed algorithm&#39;s supremacy. According to this investigation, the suggested framework surpasses the other DL models and achieves the best prediction accuracy with little computational overhead.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_10-Ensembling_of_Attention_based_Recurrent_Units.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Immersive Virtual Reality: A New Dimension in Physiotherapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141009</link>
        <id>10.14569/IJACSA.2023.0141009</id>
        <doi>10.14569/IJACSA.2023.0141009</doi>
        <lastModDate>2023-10-30T10:49:51.2800000+00:00</lastModDate>
        
        <creator>Siok Yee Tan</creator>
        
        <creator>Meng Chun Lam</creator>
        
        <creator>Joshua Faburada</creator>
        
        <creator>Monirul Islam Pavel</creator>
        
        <subject>Android; COVID-19; physiotherapy; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Physiotherapy treatments often necessitate patients to perform exercises at home as part of their rehabilitation regimen. However, outside the clinic, patients are often left with inadequate guidance, typically provided in the form of static images or sketches on paper. The ongoing COVID-19 pandemic has further disrupted the ability for patients and physiotherapists to engage in face-to-face sessions, leading to suboptimal compliance and concerns about the accuracy of exercise performance. In recent years, there has been a growing body of scientific literature on the application of virtual reality (VR) in physiotherapy. This emerging trend highlights the potential of VR technology to enhance the guidance and effectiveness of physiotherapy regimens. This research paper aims to investigate the impact of VR-based physiotherapy on the guidance and completion of prescribed exercises. To address the limitations faced by patients unable to access in-person physiotherapy due to the pandemic or geographical constraints, we propose the FisioVR application, specifically designed for Android devices. What sets FisioVR apart is its intrinsic guidance and support from physiotherapy experts. To evaluate the effectiveness of FisioVR, we conducted tests with eight respondents who provided valuable feedback via an online form. The results clearly demonstrate that each physiotherapy session carried out using FisioVR has a positive impact and is conducive to achieving the intended therapeutic objectives, effectively promoting recovery. In summary, FisioVR has the potential to bridge the gap between patients and care providers, facilitating home-based and individualized physiotherapy. This innovative application leverages the power of virtual reality to offer a more accessible, guided, and personalized approach to physiotherapy, especially crucial during times when in-person sessions are challenging.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_9-Immersive_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Sports Culture Recommendation Model Combining Big Data Technology and Video Semantic Comprehension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141008</link>
        <id>10.14569/IJACSA.2023.0141008</id>
        <doi>10.14569/IJACSA.2023.0141008</doi>
        <lastModDate>2023-10-30T10:49:51.2670000+00:00</lastModDate>
        
        <creator>Bin Xie</creator>
        
        <creator>Fuye Zhang</creator>
        
        <subject>Big data; video semantic comprehension; sports culture; semantic sequences; convolutional neural networks (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The information explosion makes it harder for users to filter the content they are interested in. This study aims to combine big data and video semantic comprehension technology to realize the recommendation of sports culture videos by exploring video semantics and taking advantage of multi-source heterogeneous information. The semantic structure of unstructured video data is defined first, and on this basis, Convolutional 3D (C3D) with Connectionist Temporal Classification (CTC) is employed to complete the extraction of sub-action semantics and the integration of behaviour semantic sequences. To address the problem of low model accuracy in semantic extraction from unlabeled videos, this study proposes an unsupervised semantic extraction method based on C3D-RAE, which completes the compression and integration of the semantic sequences; the accuracy of both models is verified through experiments. To solve the problem of insufficient accuracy in video recommendation algorithms based solely on video semantic similarity or topic similarity, this study comprehensively considers both video semantic similarity and video topic similarity and proposes a multi-modal video recommendation algorithm. The experimental results show that the accuracy of the COMSIM-based algorithm is 7.8% higher than that of Video + CNN + K-NearestNeighbor (KNN) and 15.9% higher than that of CLIP + CNN + Ncut + LDA.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_8-Construction_of_Sports_Culture_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation Method of English Composition Automatic Grading Based on Genetic Optimization Algorithm and CNN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141007</link>
        <id>10.14569/IJACSA.2023.0141007</id>
        <doi>10.14569/IJACSA.2023.0141007</doi>
        <lastModDate>2023-10-30T10:49:51.2500000+00:00</lastModDate>
        
        <creator>Li Wang</creator>
        
        <subject>Genetic optimization algorithm; CNN model; English composition; Automatic scoring; Teaching effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>To address the problems of traditional genetic algorithms in evaluating English compositions and further enhance the stability of automatic grading, this article evaluates the teaching effectiveness of automatic English composition grading using a fusion algorithm that combines a genetic optimization algorithm with a CNN model. By analyzing genetic content and optimization algorithms, a corresponding fusion optimization model was obtained, and the automatic evaluation of English compositions was analyzed and predicted through experimental verification. The results indicate that the curves of individual numbers under different scale factors exhibit typical segmentation features for different parameters. Quantitative description and analysis of the curves show that the change in scale factor has the dominant influence on the genetic algorithm&#39;s number of offspring. As the number of samples increases, the performance of the genetic optimization algorithm under the f function shows an upward trend. The research shows that the writing content index has the greatest impact on English writing scores, while grammar errors have the smallest impact. Finally, the accuracy of the optimized model was verified by comparing the model curve with experimental data. This study provides theoretical support for the use of genetic optimization algorithms and CNN models in English teaching and provides ideas for the use of optimization algorithms in other fields.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_7-An_Evaluation_Method_of_English_Composition_Automatic_Grading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Establishment and Optimization of Video Analysis System in Metaverse Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141006</link>
        <id>10.14569/IJACSA.2023.0141006</id>
        <doi>10.14569/IJACSA.2023.0141006</doi>
        <lastModDate>2023-10-30T10:49:51.2330000+00:00</lastModDate>
        
        <creator>Dandan WANG</creator>
        
        <creator>Tianci Zhang</creator>
        
        <subject>Artificial intelligence; metaverse; video perception; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>The current communication architecture has not fundamentally changed. At present, so-called metaverse media technology only applies its key elements to existing communication architectures; more importantly, this type of integration remains limited to isolated examples and promotional marketing. Turning it into a new growth point for the deep collaborative development of metaverse media requires strengthened research and exploration. Although AI analysis technology is powerful, its sensitivity, accuracy, and adaptability remain unsatisfactory due to the complexity of real-world scenarios. Given the shortcomings of existing research, we designed a video analysis system for the metaverse environment, combining virtual reality and artificial intelligence, with video perception, networking, and information technology as the medium and big data as the technical support, to build a fully intelligent video analysis system. The system is based on the YOLOv3 model and, combined with the actual video scene, performs analysis according to the human behavior and environmental changes in the video. Experiments show that the system has obvious advantages in the accuracy and recall of video analysis and detection, that detection performance is significantly improved, and that video target analysis and detection in complex scenes are realized.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_6-Establishment_and_Optimization_of_Video_Analysis_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Image Style Transfer Based on Normalized Residual Network in Art Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141005</link>
        <id>10.14569/IJACSA.2023.0141005</id>
        <doi>10.14569/IJACSA.2023.0141005</doi>
        <lastModDate>2023-10-30T10:49:51.2330000+00:00</lastModDate>
        
        <creator>Jing Pu</creator>
        
        <creator>Yuke Li</creator>
        
        <subject>CNN; residual network; normalization; image segmentation transfer; art and design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>With the development of computer vision technology, image style transfer based on deep learning has developed vigorously. It has been widely applied in fields such as art design, painting creation, and film and television effects production. However, existing image style transfer methods still have shortcomings, including low efficiency and poor transfer quality, and cannot adequately meet the actual needs of various art and design activities. Therefore, a residual network structure is introduced to construct an image style transfer model based on convolutional neural networks. Meanwhile, a normalization layer is added to the residual network structure to optimize the image style transfer, yielding an image style transfer model based on the normalized residual network. The experimental results show that the accuracy, recall, and F1 values of the improved image style transfer model proposed in the study are 97.35%, 96.49%, and 97.52%, respectively, and that it can complete high-quality image style transfer. This indicates that the proposed image style transfer model performs well and can effectively improve the efficiency and quality of image style transfer, providing effective support for various art and design activities.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_5-Application_of_Image_Style_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Four Demosaicing Methods for Facial Recognition Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141003</link>
        <id>10.14569/IJACSA.2023.0141003</id>
        <doi>10.14569/IJACSA.2023.0141003</doi>
        <lastModDate>2023-10-30T10:49:51.2030000+00:00</lastModDate>
        
        <creator>M. El&#233;onore Elvire HOUSSOU</creator>
        
        <creator>A. Tidjani SANDA MAHAMA</creator>
        
        <creator>Pierre GOUTON</creator>
        
        <creator>Guy DEGLA</creator>
        
        <subject>Multispectral image database; multispectral imaging; multispectral filter array (MSFA); one-shot camera; facial recognition system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Multispectral imaging has become more important in several areas during this decade to overcome the limitations of color imaging. There are several types of multispectral acquisition systems, including single-shot cameras that incorporate Multispectral Filter Arrays (MSFA), an extension of the color filter array. Acquisition systems that incorporate spectral filter arrays are very fast, lightweight, and able to acquire moving scenes. However, these cameras ship at best with software for filter positioning correction, without demosaicing software. Hence there is a need to identify a suitable demosaicing algorithm in terms of image quality, computation time, and decorrelation factor. This paper presents a comparative study of four relevant demosaicing methods in the facial recognition process using images acquired with a single-shot MSFA camera designed in our laboratory. To achieve this goal, the four demosaicing methods, namely bilinear interpolation, discrete wavelet transform, binary tree, and median vector, were adapted to multispectral images acquired using an MSFA camera. Evaluations were first performed using the NIQE performance metric and the correlation coefficient. The demosaiced images were then used to train the VGG19 neural network to determine which demosaicing method better preserves relevant features for recognition and offers better computation time. Results reveal that bilinear interpolation provides the least correlated images and the binary tree gives the best-quality images, with a NIQE of 8.99 and an accuracy of 100% for face recognition.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_3-Comparison_of_Four_Demosaicing_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Intelligence Method for Automatic Assessment of Fuzzy Semantics in English Literature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141004</link>
        <id>10.14569/IJACSA.2023.0141004</id>
        <doi>10.14569/IJACSA.2023.0141004</doi>
        <lastModDate>2023-10-30T10:49:51.2030000+00:00</lastModDate>
        
        <creator>Meiyan LI</creator>
        
        <subject>Automatic assessment; recurrent neural networks; long short-term memory (LSTM); quadratic weighted kappa (QWK); convolutional layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Online writing and evaluation are becoming increasingly popular, as is automatic literature assessment. The most popular way to obtain a good evaluation of an essay or article is the automatic scoring model. However, assessing the fuzzy semantics contained in reports and papers takes much work. An automated essay and article assessment model using the long short-term memory (LSTM) neural network is developed and validated to obtain an appropriate assessment. The relevant theoretical basis of the recurrent neural network is introduced first, and the quadratic weighted kappa (QWK) evaluation method is adopted to develop the model. The LSTM network is then used to build the general automatic assessment model, which is modified to obtain better performance by adding convolutional layer(s). Finally, a data set of 7000 essays is segmented in a 6:2:2 ratio to train, validate, and test the model. The results indicate that the LSTM network can effectively capture the general properties of essays and articles. After adding the convolutional layer(s), the LSTM+convolutional layer(s) model achieves better performance. The QWK values are higher than 0.6, an improvement of 0.097 to 0.134 over the LSTM network alone, which proves that the results of the LSTM network combined with the convolutional layer(s) model are overall satisfactory and that the modified model has practical value.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_4-An_Artificial_Intelligence_Method_for_Automatic_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multispectral Image Analysis using Convolution Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141002</link>
        <id>10.14569/IJACSA.2023.0141002</id>
        <doi>10.14569/IJACSA.2023.0141002</doi>
        <lastModDate>2023-10-30T10:49:51.1700000+00:00</lastModDate>
        
        <creator>Arun D. Kulkarni</creator>
        
        <subject>Convolution neural networks; machine learning; multispectral images; remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Machine learning (ML) techniques are often used to classify pixels in multispectral images. Recently, there has been growing interest in using Convolution Neural Networks (CNNs) for classifying multispectral images. CNNs are preferred because of their high performance, advances in hardware such as graphical processing units (GPUs), and the availability of several CNN architectures. In a CNN, units in the first hidden layer view only a small image window and learn low-level features. Deeper layers learn more expressive features by combining low-level features. In this paper, we propose a novel approach to classify pixels in a multispectral image using deep convolution neural networks (DCNNs). In our approach, each feature vector is mapped to an image. We used the proposed framework to classify two Landsat scenes obtained from the New Orleans and Juneau, Alaska areas. The suggested approach is compared with commonly used classifiers such as the Decision Tree (DT), Support Vector Machine (SVM), and Random Forest (RF). The proposed approach has shown state-of-the-art results.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_2-Multispectral_Image_Analysis_using_Convolution_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Coach Technology Reactance Factors and their Influence on End-Users&#39; Acceptance of e-Health Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0141001</link>
        <id>10.14569/IJACSA.2023.0141001</id>
        <doi>10.14569/IJACSA.2023.0141001</doi>
        <lastModDate>2023-10-30T10:49:51.1700000+00:00</lastModDate>
        
        <creator>Sarah Janb&#246;cke</creator>
        
        <creator>Toshimi Ogawa</creator>
        
        <creator>Johanna Langendorf</creator>
        
        <creator>Koki Kobayashi</creator>
        
        <creator>Ryan Browne</creator>
        
        <creator>Rainer Wieching</creator>
        
        <creator>Yasuyuki Taki</creator>
        
        <subject>Technology acceptance; technology reactance; human-machine-interface; technology mediator; technology leverage; human coach; digital health; e-health; virtual coach; active aging; healthy aging; healthcare information technology introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(10), 2023</description>
        <description>Project e-VITA is a joint research effort from Europe and Japan that examines various cutting-edge e-health applications for older adult care. These specific users do not necessarily feel technology-savvy or secure enough to open up to innovative home tech systems. Thus, it is essential to provide virtual and human support side by side. Human coaches will provide this support, fulfilling the role of mediator between the technological system and the end-user. Reactance towards the system on the mediator&#39;s part could lead to the system&#39;s failure with the end user, thus failing the development. The effect of technology reactance in the integration process of a technological system can be the decisive factor in evaluating the success or failure of that system. We used part-standardized, problem-centered interviews to understand the human coaches&#39; challenges. The sample included people who act in the mediator role between the user and the technological system in the test application in the study centers. The interviews focused on experienced or imagined hurdles in the communication process with the user and on the mediator role, as well as the later relationship dynamic between the mediator, end-user, and technological system. The technological challenges described during the testing phase led the human coaches to responsibility diffusion and uncertainty within their role. Furthermore, they led to a feeling of not fulfilling role expectations, which in the long term could indicate missing self-efficacy for the human coaches. We describe possible solutions mentioned by the interviewees and deepen the understanding of decisive factors for sustainable system integration of e-health applications.</description>
        <description>http://thesai.org/Downloads/Volume14No10/Paper_1-Human_Coach_Technology_Reactance_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Composite Noise Removal Network Based on Multi-domain Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409124</link>
        <id>10.14569/IJACSA.2023.01409124</id>
        <doi>10.14569/IJACSA.2023.01409124</doi>
        <lastModDate>2023-10-03T10:19:24.3970000+00:00</lastModDate>
        
        <creator>Fan Bai</creator>
        
        <creator>Pengfei Li</creator>
        
        <creator>Haoyang Sun</creator>
        
        <creator>Hui Zhang</creator>
        
        <subject>Image denoising; domain adaptation; generative adversarial network; autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Addressing the limitation of conventional single-scene image denoising algorithms in filtering mixed environmental disturbances, and recognizing the drawbacks of cascaded image enhancement algorithms, which have poor real-time performance and high computational demands, the composite weather adaptive denoising network (CWADN) is proposed. A Cascade Hourglass Feature Extraction Network is constructed with a visual attention mechanism to extract characteristics of rain, fog, and low-light noise from authentic natural images. These features are then transferred from their original real distribution domain to a synthetic distribution domain using a deep residual convolutional neural network. The generator and style encoder of the adversarial network work together to adaptively remove the transferred noise through a combination of supervised and unsupervised training; this approach achieves adaptive denoising capabilities tailored to complex natural environmental noise. Experimental results demonstrate that the proposed denoising network yields a high signal-to-noise ratio while maintaining excellent image fidelity. It effectively prevents image distortion, particularly in critical target areas. Additionally, it adapts to various types of mixed noise, making it a valuable tool for preprocessing images in advanced machine vision algorithms such as target recognition and tracking.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_124-A_Composite_Noise_Removal_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autism Diagnosis using Linear and Nonlinear Analysis of Resting-State EEG and Self-Organizing Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409123</link>
        <id>10.14569/IJACSA.2023.01409123</id>
        <doi>10.14569/IJACSA.2023.01409123</doi>
        <lastModDate>2023-10-03T10:19:24.3800000+00:00</lastModDate>
        
        <creator>Jie Xu</creator>
        
        <creator>Wenxiao Yang</creator>
        
        <subject>Autism; EEG; linear analysis; nonlinear analysis; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The prevalence of autism has increased dramatically in recent years, and many people around the world are facing this difficult condition. There is a need to develop an objective method to diagnose autism. Various analysis methods have been used to classify the EEG signals of people with autism, from linear methods in the time and frequency domains to nonlinear methods based on chaos theory. However, there is still no consensus on which method of EEG signal analysis can provide the best diagnostic accuracy and valid biomarkers for autism diagnosis. Therefore, in this study, we evaluate different feature extraction methods for EEG signals to distinguish individuals with autism from healthy individuals. For this purpose, EEG analysis was performed in the time, time-frequency, frequency, and nonlinear domains. Furthermore, the self-organizing map (SOM) method was used to classify features extracted from autistic and normal EEG. The data used in this study were recorded by the research team from 24 children with autism and 24 normal children. Accuracies of 92.31%, 93.57%, 95.63%, and 97.10% were achieved through time and morphological, frequency, time-frequency, and nonlinear analyses, respectively. Indeed, the findings showed that nonlinear analysis could yield the best classification results (accuracy = 97.10%, sensitivity = 98.80% and specificity = 97.02%) in the EEG discrimination of autistic children from typical children through the SOM neural network.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_123-Autism_Diagnosis_using_Linear_and_Nonlinear_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method for Intrusion Detection in Computer Networks using Computational Intelligence Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409122</link>
        <id>10.14569/IJACSA.2023.01409122</id>
        <doi>10.14569/IJACSA.2023.01409122</doi>
        <lastModDate>2023-10-03T10:19:24.3500000+00:00</lastModDate>
        
        <creator>Yanrong HAO</creator>
        
        <creator>Shaohui YAN</creator>
        
        <subject>Decision tree; network intrusion detection; particle swarm algorithm; basic-radial neural network; frog jump algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This paper introduces a novel and integrated approach to intrusion detection in computer networks that makes use of the benefits of both abuse detection and anomaly detection techniques. The proposed method combines anomaly detection and abuse detection technologies to enhance intrusion detection functionality. The intrusion detection system is implemented using a set of algorithms and models in the proposed approach. The frog jump algorithm has been utilized to choose the system&#39;s ideal input attributes. The decision tree is utilized in this system&#39;s abuse detection portion. Support vector machines or basic-radial neural network models have been utilized to find anomalies in this system. In the process of training neural networks, other techniques like particle swarm or genetic optimization are also utilized. The NSL-KDD dataset was used in the experiment, and the findings are reported. These findings demonstrate that, in comparison to using only anomaly or abuse detection, the proposed approach can increase the effectiveness of intrusion detection in the network. Additionally, a model that uses the frog leap algorithm for feature selection and classification and combines decision tree and support vector machine techniques with ten chosen input features has a detection rate of 98.2%. This is true despite the fact that the detection rates of systems trained on comparable data in prior studies, with 33 and 14 selected input features, were 83.2% and 84.2%, respectively. Additionally, the algorithm executes up to 29 times faster than the aforementioned approaches while the intrusion detection rate is maintained at the level of the other competing methods simulated in this work.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_122-A_New_Method_for_Intrusion_Detection_in_Computer_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fruit Ripeness Detection Method using Adapted Deep Learning-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409121</link>
        <id>10.14569/IJACSA.2023.01409121</id>
        <doi>10.14569/IJACSA.2023.01409121</doi>
        <lastModDate>2023-10-03T10:19:24.3330000+00:00</lastModDate>
        
        <creator>Weiwei Zhang</creator>
        
        <subject>Fruit ripeness detection; precise agriculture; deep learning; vision system; YOLOv8</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Fruit ripeness detection plays a crucial role in precise agriculture, enabling optimal harvesting and post-harvest handling. Various methods have been investigated in the literature for fruit ripeness detection in vision-based systems, with deep learning approaches demonstrating superior accuracy compared to other approaches. However, the current research challenge lies in achieving high accuracy rates in deep learning-based fruit ripeness detection. This study proposes a method based on the YOLOv8 algorithm to address this challenge. The proposed method involves generating a model using a custom dataset and conducting training, validation, and testing processes. Experimental results and performance evaluation demonstrate the effectiveness of the proposed method in achieving accurate fruit ripeness detection. The proposed method surpasses existing approaches through extensive experiments and performance analysis, providing a reliable solution for fruit ripeness detection in precise agriculture.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_121-A_Fruit_Ripeness_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Decision-Making with Data Science in the Internet of Things Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409120</link>
        <id>10.14569/IJACSA.2023.01409120</id>
        <doi>10.14569/IJACSA.2023.01409120</doi>
        <lastModDate>2023-10-03T10:19:24.3030000+00:00</lastModDate>
        
        <creator>Lei Hu</creator>
        
        <creator>Yangxia Shu</creator>
        
        <subject>Internet of Things; IoT data; data science; data preprocessing; machine learning; real-time analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The Internet of Things (IoT) has emerged as a transformative technology, enabling various devices to interconnect and generate vast amounts of data. The insights contained within this data can revolutionize industries and improve decision-making processes. The heterogeneity, scale, and complexity of IoT data pose challenges for efficient analysis and utilization. This paper explores the field of data science in the IoT context, focusing on critical techniques, applications, and challenges vital to realizing the full potential of IoT data. The distinctive qualities of IoT data, including its volume, velocity, variety, and veracity, are examined, and their impact on data science approaches is analyzed. Additionally, cutting-edge data science approaches and methodologies designed for IoT data, such as data preprocessing, data fusion, machine learning, and anomaly detection, are discussed. The importance of scalable and distributed data processing frameworks to handle IoT data&#39;s large-scale and real-time nature is highlighted. Furthermore, the application of data science in various IoT fields, such as smart cities, healthcare, agriculture, and industrial IoT, is explored. Finally, areas for future research and development are identified, such as privacy and security issues, understanding machine learning models, and ethical aspects of data science in IoT.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_120-Enhancing_Decision_Making_with_Data_Science.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Single-Stage Deep Learning-based Approach for Real-Time License Plate Recognition in Smart Parking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409119</link>
        <id>10.14569/IJACSA.2023.01409119</id>
        <doi>10.14569/IJACSA.2023.01409119</doi>
        <lastModDate>2023-10-03T10:19:24.2870000+00:00</lastModDate>
        
        <creator>Lina YU</creator>
        
        <creator>Shaokun LIU</creator>
        
        <subject>Smart parking; license plate recognition; deep learning; single-stage detector; Yolo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>License plate recognition in smart parking systems plays a crucial role in enhancing parking management efficiency and security. Traditional methods and deep learning-based approaches have been explored for license plate recognition. Deep learning methods have gained prominence due to their ability to extract meaningful features and achieve high accuracy rates. However, existing deep learning-based license plate recognition methods face challenges in terms of accuracy, real-time requirements, and computation cost, as evident from previous studies. To address these challenges, we propose a single-stage deep learning approach using the YOLO (You Only Look Once) algorithm. Our method involves generating a custom dataset and conducting training, validation, and testing processes to train the YOLO-based model. Experimental results and performance evaluations demonstrate that our proposed method achieves high accuracy rates and satisfies real-time requirements, validating its effectiveness for license plate recognition in smart parking systems.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_119-A_Single_Stage_Deep_Learning_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Providing a Hybrid and Symmetric Encryption Solution to Provide Security in Cloud Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409118</link>
        <id>10.14569/IJACSA.2023.01409118</id>
        <doi>10.14569/IJACSA.2023.01409118</doi>
        <lastModDate>2023-10-03T10:19:24.2230000+00:00</lastModDate>
        
        <creator>Desong Shen</creator>
        
        <subject>Hybrid encryption algorithm; security; cloud computing; symmetric algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>One of the most crucial components of information technology infrastructure in the modern world is cloud data centers. Customers have access to these data centers&#39; infrastructure and software, which enable them to store and process massive amounts of data. However, the security and protection of private data in cloud data centers is a serious problem that needs effective solutions. Security and privacy issues exist because cloud computing outsources the processing of sensitive data. Consumer worries about cloud infrastructure security remain, particularly those related to data privacy. A thorough analysis of research efforts in the area of cloud security is the main objective of this study. To this end, a variety of models were evaluated, their advantages and disadvantages were identified, and a viable security solution based on symmetric algorithms was put forth. In the proposed solution (a hybrid encryption algorithm), the original text is first encrypted using the faster symmetric key method AES, and its key is then encrypted using the asymmetric key scheme RSA. This increases efficiency and speed. This method shortens the time required for data encryption while enhancing its security. The final step was implementing the proposed solution in the Eclipse software environment and comparing it against the Blowfish and RSA algorithms. The evaluation&#39;s findings indicate that the solution is more advantageous, resulting in a nearly two-fold decrease in execution time and a marked increase in throughput compared to the RSA algorithm. Additionally, the execution time has shrunk and throughput has been vastly improved compared to the Blowfish method.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_118-Providing_a_Hybrid_and_Symmetric_Encryption_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Fingerprint Liveness Detection Method using Empirical Mode Decomposition and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409117</link>
        <id>10.14569/IJACSA.2023.01409117</id>
        <doi>10.14569/IJACSA.2023.01409117</doi>
        <lastModDate>2023-09-30T12:22:54.2100000+00:00</lastModDate>
        
        <creator>Shekun Tong</creator>
        
        <creator>Chunmeng Lu</creator>
        
        <subject>Fingerprint; liveness; biometric; neural network; empirical mode decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>One of the most common biometric systems is fingerprint identification, which has been misused due to issues such as fraud. Hence, intelligent methods should be designed and used to recognize live fingerprints. Therefore, in the current work, we propose a novel fingerprint liveness detection framework with low computational cost and excellent accuracy, based on empirical mode decomposition and a neural network, to distinguish real from fake fingerprints. The fingerprint images were cropped into 200 &#215; 200 images and then the two-dimensional (2D) images were converted into one-dimensional (1D) data, greatly reducing the computational process. The empirical mode decomposition (EMD) technique decomposed the data, and the first five intrinsic mode functions (IMFs) were targeted for feature extraction through simple statistical features. The findings revealed that our suggested system can yield an average accuracy of 97.72% in distinguishing fake from real fingerprints through a multilayer perceptron (MLP) neural network. This framework is very efficient compared to other techniques because a single fingerprint image is enough to defend against spoof attacks. Therefore, such a framework can reduce the cost of fingerprint biometric systems, as no further hardware is needed. In addition, our framework gives the best classification results in comparison to previous techniques in live fingerprint recognition while being simple with lower computational cost. Therefore, this framework can be practically used in commercial biometric systems.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_117-A_Novel_Fingerprint_Liveness_Detection_Method_using_Empirical_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Corpus Generation to Develop Amharic Morphological Segmenter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409116</link>
        <id>10.14569/IJACSA.2023.01409116</id>
        <doi>10.14569/IJACSA.2023.01409116</doi>
        <lastModDate>2023-09-30T10:43:05.4900000+00:00</lastModDate>
        
        <creator>Terefe Feyisa</creator>
        
        <creator>Seble Hailu</creator>
        
        <subject>Amharic; Amharic morphology; segmentation corpus; seq2seq; under-resourced languages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>A morphological segmenter is an important component in Amharic natural language processing systems. Despite this fact, Amharic lacks a large morphologically segmented corpus, and a large corpus is often a requirement for developing neural network-based language technologies. This paper presents an alternative method to generate a large morph-segmented corpus for the Amharic language. First, a relatively small (138,400 words) morphologically annotated Amharic seed-corpus is manually prepared; the annotation identifies the prefixes, stem, and suffixes of a given word. Second, a supervised approach is used to create a conditional random field-based seed-model on the seed-corpus. Applying the seed-model for prediction, in an unsupervised manner, to a large set of unsegmented raw Amharic words, a large corpus (3,777,283 segmented words) is automatically generated. Third, the newly generated corpus is used to train an Amharic morphological segmenter based on a supervised neural sequence-to-sequence (seq2seq) approach using character embeddings. Using the seq2seq method, an F-score of 98.65% was measured. Results show agreement with previous efforts for the Arabic language. The work presented here has profound implications for future studies of Ethiopian language technologies and may one day help solve the problem of the digital divide between resource-rich and under-resourced languages.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_116-Corpus_Generation_to_Develop_Amharic_Morphological_Segmenter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Machine Allocation in Cloud Computing Environments using Giant Trevally Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409115</link>
        <id>10.14569/IJACSA.2023.01409115</id>
        <doi>10.14569/IJACSA.2023.01409115</doi>
        <lastModDate>2023-09-30T10:43:05.4770000+00:00</lastModDate>
        
        <creator>Hai-yu Zhang</creator>
        
        <subject>Cloud computing; resource allocation; virtualization; Giant Trevally Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Cloud computing has gained prominence due to its potential for computational tasks, but the associated energy consumption and carbon emissions remain significant challenges. Allocating Virtual Machines (VMs) to Physical Machines (PMs) in cloud data centers, a known NP-hard problem, offers an avenue for enhancing energy efficiency. This paper presents an energy-conscious optimization approach utilizing the Giant Trevally Optimizer (GTO) which is inspired by the hunting strategies of the giant trevally, a proficient marine predator. Our study mathematically models the trevally&#39;s hunting behavior when targeting seabirds. The trevally&#39;s approach involves strategic selection of optimal hunting locations based on food availability, including pursuing seabird prey in the air or seizing it from the water&#39;s surface. Through extensive simulations, our method demonstrates superior performance in terms of skewness, CPU utilization, memory utilization, and overall resource allocation efficiency. This research offers a promising avenue for addressing the energy consumption challenges in cloud data centers while optimizing resource utilization for sustainable and cost-effective cloud operations.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_115-Virtual_Machine_Allocation_in_Cloud_Computing_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Model for Smoke Detection Based on Concentration Features using YOLOv7tiny</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409114</link>
        <id>10.14569/IJACSA.2023.01409114</id>
        <doi>10.14569/IJACSA.2023.01409114</doi>
        <lastModDate>2023-09-30T10:43:05.4770000+00:00</lastModDate>
        
        <creator>Yuanpan ZHENG</creator>
        
        <creator>Liwei Niu</creator>
        
        <creator>Xinxin GAN</creator>
        
        <creator>Hui WANG</creator>
        
        <creator>Boyang XU</creator>
        
        <creator>Zhenyu WANG</creator>
        
        <subject>YOLOv7tiny; smoke detection; dark channel; smoke concentration; feature fusion; depthwise separable convolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Smoke is often present in the early stages of a fire. Detecting low smoke concentration and small targets during these early stages can be challenging. This paper proposes an improved smoke detection algorithm that leverages the characteristics of smoke concentration using YOLOv7tiny. The improved algorithm consists of the following components: 1) utilizing the dark channel prior theory to extract smoke concentration characteristics and using the synthesized αRGB image as an input feature to enhance the features of sparse smoke; 2) designing a light-BiFPN multi-scale feature fusion structure to improve the detection performance of small-target smoke; 3) using depthwise separable convolution to replace the original standard convolution, reducing the number of model parameters. Experimental results on a self-made dataset show that the improved algorithm performs better in detecting sparse smoke and small-target smoke, with mAP@0.5 and Recall reaching 94.03% and 95.62% respectively, and the detection speed increasing to 118.78 frames/s. Moreover, the model parameter count decreases to 4.97M. The improved algorithm demonstrates superior performance in the detection of sparse and small smoke in the early stages of a fire.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_114-Improved_Model_for_Smoke_Detection_Based_on_Concentration_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Voice Feature AVA and its Application to the Pathological Voice Detection Through Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409113</link>
        <id>10.14569/IJACSA.2023.01409113</id>
        <doi>10.14569/IJACSA.2023.01409113</doi>
        <lastModDate>2023-09-30T10:43:05.4600000+00:00</lastModDate>
        
        <creator>Abdulrehman Altaf</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Ruhaila Maskat</creator>
        
        <creator>Shazlyn Milleana Shaharudin</creator>
        
        <creator>Abdullah Altaf</creator>
        
        <creator>Awais Mahmood</creator>
        
        <subject>Pathological voice; healthy voice; voice feature; amplitudes; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Voice pathology is a universal problem that must be addressed. Traditionally, this malady is treated with surgical instruments in varied healthcare settings. In the current era, machine learning experts have paid increasing attention to solving this problem by exploiting signal processing of the voice. For this purpose, numerous voice features have been used to classify healthy and pathological voice signals. In particular, Mel-Frequency Cepstral Coefficients (MFCC) are a widely used feature in speech and audio signal processing, representing the spectral characteristics of a voice signal, particularly of human speech. However, computing MFCC is time-consuming, which conflicts with the urgent demands of modern clinical practice. This study develops yet another voice feature based on the average value of the amplitudes (AVA) of the voice signals. Moreover, a Gaussian Naive Bayes classifier is employed to classify given voice signals as healthy or pathological. The dataset was acquired from the SVD (Saarbrucken Voice Database) to demonstrate the workability of the proposed voice feature and its usage in the classifier. The experiments rendered very promising results: Recall, F1, and accuracy scores of 100%, 83%, and 80%, respectively. These results imply that the proposed classifier can be deployed in various healthcare settings.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_113-A_Novel_Voice_Feature_AVA_and_its_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual-Level Blind Omnidirectional Image Quality Assessment Network Based on Human Visual Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409112</link>
        <id>10.14569/IJACSA.2023.01409112</id>
        <doi>10.14569/IJACSA.2023.01409112</doi>
        <lastModDate>2023-09-30T10:43:05.4430000+00:00</lastModDate>
        
        <creator>Deyang Liu</creator>
        
        <creator>Lu Zhang</creator>
        
        <creator>Lifei Wan</creator>
        
        <creator>Wei Yao</creator>
        
        <creator>Jian Ma</creator>
        
        <creator>Youzhi Zhang</creator>
        
        <subject>Omnidirectional image quality assessment; dual-level network; human visual perception; human attention; multi-scale</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>With the rapid development of virtual reality (VR) technology, a large number of omnidirectional images (OIs) with uncertain quality are flooding into the internet. As a result, Blind Omnidirectional Image Quality Assessment (BOIQA) has become increasingly urgent. The existing solutions mainly focus on manually or automatically extracting high-level features from OIs, which overlook the important guiding role of human visual perception in this immersive experience. To address this issue, a dual-level network based on human visual perception is developed in this paper for BOIQA. Firstly, a human attention branch is proposed, in which the transformer-based model can efficiently represent attentional features of the human eye within a multi-distance perception image pyramid of the viewport. Then, inspired by the hierarchical perception of the human visual system, a multi-scale perception branch is designed, in which hierarchical features of six orientational viewports are considered and obtained by a residual network in parallel. Additionally, the correlation features among viewports are investigated to assist the multi-viewport feature fusion, in which the feature maps extracted from different viewports are further measured for their similarity and correlation by the attention-based module. Finally, the output values from both branches are regressed by a fully connected layer to derive the final predicted quality score. Comprehensive experiments on two public datasets demonstrate the significant superiority of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_112-Dual_Level_Blind_Omnidirectional_Image_Quality_Assessment_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Touchless Control System for a Clinical Robot with Multimodal User Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409111</link>
        <id>10.14569/IJACSA.2023.01409111</id>
        <doi>10.14569/IJACSA.2023.01409111</doi>
        <lastModDate>2023-09-30T10:43:05.4270000+00:00</lastModDate>
        
        <creator>Julio Alegre Luna</creator>
        
        <creator>Anthony Vasquez Rivera</creator>
        
        <creator>Alejandra Loayza Mendoza</creator>
        
        <creator>Jesús Talavera S.</creator>
        
        <creator>Andres Montoya A</creator>
        
        <subject>Multimodal user interface; human–robot interaction; clinical robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This article introduces the development of a multi-modal user interface for touchless control of a clinical robot. This system seamlessly integrates distinct control modalities: voice commands, an accelerometer-embedded gauntlet, and a virtual reality (VR) headset to display real-time robot video and system alerts. By synergizing these control approaches, a more versatile and intuitive means of commanding the robot has been established. This assertion finds support through comprehensive assessments conducted with both seasoned professionals and novices in the domain of clinical robotics, all within a controlled experimental setting. The diverse array of test results unequivocally demonstrates the system’s efficacy, substantiating its ability to proficiently govern a robotic arm in the clinical environment. The user interface’s usability is measured at an impressive 90.2 on the system usability scale, affirming its suitability for robotic control. Notably, the interface offers both comfort and intuitiveness for operators of varying levels of expertise.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_111-Development_of_a_Touchless_Control_System_for_a_Clinical_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ArCyb: A Robust Machine-Learning Model for Arabic Cyberbullying Tweets in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409110</link>
        <id>10.14569/IJACSA.2023.01409110</id>
        <doi>10.14569/IJACSA.2023.01409110</doi>
        <lastModDate>2023-09-30T10:43:05.4130000+00:00</lastModDate>
        
        <creator>Khalid T. Mursi</creator>
        
        <creator>Abdulrahman Y. Almalki</creator>
        
        <creator>Moayad M. Alshangiti</creator>
        
        <creator>Faisal S. Alsubaei</creator>
        
        <creator>Ahmed A. Alghamdi</creator>
        
        <subject>Natural language processing; machine learning; neural network; bullying; cyberbullying</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The widespread use of computers and smartphones has led to an increase in social media usage, where users can express their opinions freely. However, this freedom of expression can be misused for spreading abusive and bullying content online. To ensure a safe online environment, cybersecurity experts are continuously researching effective and intelligent ways to respond to such activities. In this work, we present ArCyb, a robust machine-learning model for detecting cyberbullying in social media using a manually labeled Arabic dataset. The model achieved 89% prediction accuracy, surpassing the state-of-the-art cyberbullying models. The results of this work can be utilized by social media platforms, government agencies, and internet service providers to detect and prevent the spread of bullying posts in social networks.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_110-ArCyb_A_Robust_Machine_Learning_Model_for_Arabic_Cyberbullying_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Outdoor Mobility and Environment Perception for Visually Impaired Individuals Through a Customized CNN-based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409109</link>
        <id>10.14569/IJACSA.2023.01409109</id>
        <doi>10.14569/IJACSA.2023.01409109</doi>
        <lastModDate>2023-09-30T10:43:05.4130000+00:00</lastModDate>
        
        <creator>Athulya N K</creator>
        
        <creator>Sivakumar Ramachandran</creator>
        
        <creator>Neetha George</creator>
        
        <creator>Ambily N</creator>
        
        <creator>Linu Shine</creator>
        
        <subject>Visually impaired; Terrain identification; Puddle detection; Deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Visual impairment indicates any kind of vision loss including blindness. Individuals with visual impairments face significant challenges when trying to perceive their surroundings from a global perspective and navigating unfamiliar environments. Existing assistive technologies predominantly focus on obstacle avoidance, neglecting to provide comprehensive information about the overall environment. To address this gap, the proposed system employs a customized Convolutional Neural Network (CNN) model tailored to accurately predict the type of outdoor ground terrain the user is traversing. This information is then conveyed to the user audibly. It can also detect the presence of puddles on the road and let the user know whether the outside floor is wet (slippery). The proposed deep-learning architecture is trained on images collected from sources including the Stagnant Water dataset, the GTOS-Mobile dataset and a custom dataset. The trained model is then integrated into an Android app, providing visually impaired (VI) people with effective surrounding perception capabilities, leading to better travel and, ultimately, better living.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_109-Enhancing_Outdoor_Mobility_and_Environment_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Convolutional Neural Network Classification Model for Several Sign Language Alphabets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409108</link>
        <id>10.14569/IJACSA.2023.01409108</id>
        <doi>10.14569/IJACSA.2023.01409108</doi>
        <lastModDate>2023-09-30T10:43:05.3970000+00:00</lastModDate>
        
        <creator>Ahmed Osman Mahmoud</creator>
        
        <creator>Ibrahim Ziedan</creator>
        
        <creator>Amr Ahmed Zamel</creator>
        
        <subject>Convolutional neural network (CNN); sign language; Arabic sign language (ArSL); American sign language (ASL); Korean sign language (KSL); Complexity time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Although deaf people represent over 5% of the world’s population, according to what the World Health Organization stated in May 2022, they suffer from social and economic marginalization. One way to improve the lives of deaf people is to make communication between them and others easier. Sign language, the means through which deaf people communicate with other people, can benefit from modern techniques in machine learning. In this study, several convolutional neural network (CNN) models are designed to develop an efficient model, in terms of accuracy and computational time, for the classification of different signs. This research presents a methodology for developing an efficient CNN architecture from scratch to classify multiple sign language alphabets, which has numerous advantages over other contemporary CNN models in terms of prediction time and accuracy. This framework analyses the effect of varying CNN hyper-parameters, such as kernel size, number of layers, and number of filters in each layer, and picks the ideal parameters for CNN model construction. In addition, the suggested CNN architecture operates directly on unprocessed data without the need for preprocessing, generalizing it across other datasets. Furthermore, the capacity of the model to generalize to diverse sign languages is rigorously evaluated using three distinct sign language alphabets and five datasets, namely, Arabic (ArSL), two American English (ASL), Korean (KSL), and the combination of Arabic and American datasets. The proposed CNN architecture (SL-CNN) outperforms state-of-the-art CNN models and traditional machine learning models, achieving an accuracy of 100%, 98.47%, 100%, and 99.5% for English, Arabic, Korean, and combined Arabic-English alphabets, respectively. The prediction or inference time of the model is about three milliseconds on average, making it suitable for real-time applications. Thus, in the future, this model could readily be turned into a mobile application.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_108-An_Efficient_Convolutional_Neural_Network_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Genetic Algorithm with Chromosome Replacement and Rescheduling for Task Offloading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409107</link>
        <id>10.14569/IJACSA.2023.01409107</id>
        <doi>10.14569/IJACSA.2023.01409107</doi>
        <lastModDate>2023-09-30T10:43:05.3800000+00:00</lastModDate>
        
        <creator>Hui Fu</creator>
        
        <creator>Guangyuan Li</creator>
        
        <creator>Fang Han</creator>
        
        <creator>Bo Wang</creator>
        
        <subject>Genetic algorithm; task offloading; task scheduling; edge computing; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>End-Edge-Cloud Computing (EECC) has been applied in many fields due to the increasing popularity of smart devices. However, the cooperation of end devices, edge, and cloud resources remains a challenge for improving service quality and resource efficiency in EECC. In this paper, we focus on task offloading to address this challenge. We formulate the offloading problem as a mixed integer nonlinear program and solve it with a Genetic Algorithm (GA). In the GA-based offloading algorithm, each chromosome encodes an offloading solution, and the evolution iteratively searches for the global best solution. To improve the performance of GA-based task offloading, we integrate two improvement schemes into the algorithm: chromosome replacement and task rescheduling. Chromosome replacement replaces the chromosome of every individual with its better offspring after every crossover, substituting for the selection operator in population evolution. Task rescheduling reschedules each rejected task to available resources, given the offloading solution decoded from every chromosome. Extensive experiments show that, compared with nine classical and up-to-date algorithms, our proposed algorithm improves user satisfaction by up to 32%, resource efficiency by up to 12%, and processing efficiency by up to 35.3%.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_107-An_Improved_Genetic_Algorithm_with_Chromosome_Replacement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-contact Respiratory Rate Monitoring Based on the Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409106</link>
        <id>10.14569/IJACSA.2023.01409106</id>
        <doi>10.14569/IJACSA.2023.01409106</doi>
        <lastModDate>2023-09-30T10:43:05.3800000+00:00</lastModDate>
        
        <creator>Hoda El Boussaki</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Amine Saddik</creator>
        
        <creator>Zakaria El Khadiri</creator>
        
        <creator>Hicham El Boujaoui</creator>
        
        <subject>RGB; breathing rate; non-contact; principal component analysis; plethysmography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Assessing respiratory rate is a critical determinant of one’s health status. The proposed approach relies on principal component analysis (PCA) for the continuous monitoring of breathing rate using an RGB camera. This method employs remote plethysmography, a video-based technique enabling contactless tracking of blood volume fluctuations by detecting variations in pixel intensity on the skin. These pixels encompass the red, blue, and green channels, whose values, after PCA dimensionality reduction, encode the signal containing vital information about the breathing rate. To assess the method’s performance, it was tested on a group of seven volunteers, including individuals of both genders. The results reveal a Mean Absolute Deviation of 0.714 BPM and a Root Mean Square Error of 2.035 BPM when comparing the experimental measurements to the actual readings.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_106-Non_contact_Respiratory_Rate_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Two Dimensional Deep CNN Model for Vision-based Fingerspelling Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409105</link>
        <id>10.14569/IJACSA.2023.01409105</id>
        <doi>10.14569/IJACSA.2023.01409105</doi>
        <lastModDate>2023-09-30T10:43:05.3670000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Elmira Nurlybaeva</creator>
        
        <creator>Leilya Kuntunova</creator>
        
        <creator>Shirin Amanzholova</creator>
        
        <creator>Marina Vorogushina</creator>
        
        <creator>Mukhit Maikotov</creator>
        
        <creator>Kaden Kenzhekhan</creator>
        
        <subject>Fingerspelling; recognition; computer vision; CNN; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This paper presents a novel approach to fingerspelling recognition in real-time, utilizing a two-dimensional Convolutional Neural Network (2D CNN). Existing recognition systems often fall short in real-world conditions due to variations in illumination, background, and user-specific characteristics. Our method addresses these challenges, delivering significantly improved performance. Leveraging a robust 2D CNN architecture, the system processes image sequences representing the dynamic nature of fingerspelling. We focus on low-level spatial features and temporal patterns, thereby ensuring a more accurate capture of the intricate nuances of fingerspelling. Additionally, the incorporation of real-time video feed enhances the system&#39;s responsiveness. We validate our model through comprehensive experiments, showcasing its superior recognition rate over current methods. In scenarios involving varied lighting, different backgrounds, and distinct user behaviors, our system consistently outperforms existing methods. The findings demonstrate that the 2D CNN approach holds promise in improving fingerspelling recognition, thereby aiding communication for the hearing-impaired community. This work paves the way for further exploration of deep learning applications in real-time sign language interpretation. This research bears profound implications for accessibility and inclusivity in communication technology.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_105-Two_Dimensional_Deep_CNN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Smart Cities: A Comprehensive Review of Applications and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409104</link>
        <id>10.14569/IJACSA.2023.01409104</id>
        <doi>10.14569/IJACSA.2023.01409104</doi>
        <lastModDate>2023-09-30T10:43:05.3500000+00:00</lastModDate>
        
        <creator>Xiaoning Dou</creator>
        
        <creator>Weijing Chen</creator>
        
        <creator>Lei Zhu</creator>
        
        <creator>Yingmei Bai</creator>
        
        <creator>Yan Li</creator>
        
        <creator>Xiaoxiao Wu</creator>
        
        <subject>Smart city; machine learning; artificial intelligence; intelligent transportation system; smart grids</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The smart city concept originated a few years ago as a combination of ideas about how information and communication technologies can improve urban life. With the advent of the digital revolution, many cities globally are investing heavily in designing and implementing smart city solutions and projects. Machine Learning (ML) has evolved into a powerful tool within the smart city sector, enabling efficient resource management, improved infrastructure, and enhanced urban services. This paper discusses the diverse ML algorithms and their potential applications in smart cities, including Artificial Intelligence (AI) and Intelligent Transportation Systems (ITS). The key challenges, opportunities, and directions for adopting ML to make cities smarter and more sustainable are outlined.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_104-Machine_Learning_for_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Clothing Color Classification Method based on Improved FCM Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409102</link>
        <id>10.14569/IJACSA.2023.01409102</id>
        <doi>10.14569/IJACSA.2023.01409102</doi>
        <lastModDate>2023-09-30T10:43:05.3330000+00:00</lastModDate>
        
        <creator>Jinliang Liu</creator>
        
        <subject>Fuzzy clustering; SWK-FCM; fashion color scheme; Gaussian noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In the apparel industry, clothing color is an important factor in enhancing the market competitiveness of enterprise products. However, current prediction samples of clothing fashion color styling information do not incorporate practical cutting-edge fashion information. Therefore, a Self-adaptive Weighted Kernel function (SWK) is introduced into the traditional Fuzzy C-Means (FCM) clustering algorithm, yielding the SWK-FCM clustering algorithm, which enhances the classification ability for fashion colors and hue. Two prediction models have been developed using the finalized data of the International Fashion Color Committee along with the SWK-FCM clustering algorithm, and their accuracy has been verified experimentally. The experimental results show that the classification coefficients of the SWK-FCM clustering algorithm are 0.9553 and 0.9258 under 5% Gaussian noise, higher than those of FCM (0.7063) and FLICM (0.8598). The classification entropy is lower than that of the comparison algorithms, and the same results hold under other conditions and in the actual experiments. In addition, the overall MSE of the GM (1,1) prediction model using the final case information is 0.00028, which is close to the order of 10^-4, while the MSE of the BP neural network prediction model using the same information ranges from 0.000529 to 0.011025. Overall, the SWK-FCM clustering algorithm has good classification performance, and the GM (1,1) model based on SWK-FCM yields better prediction results, which can be effectively applied in practical clothing color classification and popular color prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_102-Research_on_Clothing_Color_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Next-Generation Intrusion Detection and Prevention System Performance in Distributed Big Data Network Security Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409103</link>
        <id>10.14569/IJACSA.2023.01409103</id>
        <doi>10.14569/IJACSA.2023.01409103</doi>
        <lastModDate>2023-09-30T10:43:05.3330000+00:00</lastModDate>
        
        <creator>Michael Hart</creator>
        
        <creator>Rushit Dave</creator>
        
        <creator>Eric Richardson</creator>
        
        <subject>Big data systems; zero trust architecture; benchmarking; distributed denial of service attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Big data systems are expanding to support the rapidly growing needs of massive-scale data analytics. To safeguard user data, the design and placement of cybersecurity systems are also evolving as organizations increase their big data portfolios. One of several challenges presented by these changes is benchmarking real-time big data systems that use different network security architectures. This work introduces an eight-step benchmark process to evaluate big data systems in varying architectural environments. The benchmark is tested on real-time big data systems running in perimeter-based and perimeter-less network environments. Findings show that marginal I/O differences exist on distributed file systems between network architectures. However, during various types of cyber incidents, such as distributed denial of service (DDoS) attacks, certain security architectures like zero trust require more system resources than perimeter-based architectures. Results illustrate the need to broaden research on optimal benchmarking and security approaches for massive-scale distributed computing systems.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_103-Next_Generation_Intrusion_Detection_and_Prevention_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation Submarine Cable Object Detection YOLOv4 based with Graphical User Interface (GUI) for Remotely Operated Vehicle (ROV)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409101</link>
        <id>10.14569/IJACSA.2023.01409101</id>
        <doi>10.14569/IJACSA.2023.01409101</doi>
        <lastModDate>2023-09-30T10:43:05.3200000+00:00</lastModDate>
        
        <creator>Fikri Arif Wicaksana</creator>
        
        <creator>Eueung Mulyana</creator>
        
        <creator>Syarif Hidayat</creator>
        
        <creator>Rahadian Yusuf</creator>
        
        <subject>Submarine cable; object detection; GUI; ROV; YOLOv4</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The use of submarine cables as underwater transmission channels for distributing electrical energy in Indonesian waters is crucial. However, the detection and maintenance of submarine cables still heavily rely on human observation, leading to limitations in time and subjective interpretations. This research aims to design and implement an underwater object detection system based on YOLOv4 integrated with a Graphical User Interface (GUI) on a Remotely Operated Vehicle (ROV) for submarine cable detection. The YOLOv4 model was trained using a balanced dataset, achieving performance with precision of 0.89, recall of 0.85, and f1-score of 0.87. Detection of Good Condition (SC-Good-Condition) achieved an Average Precision (AP) of 97.62%, while Bad Condition detection (SC-Bad-Condition) had an AP of 87.54%, resulting in an overall mAP of 92.58%. The implemented GUI successfully detected submarine cables in two test videos with FPS rates of 0.178 and 0.083. The designed underwater object detection system using YOLOv4 and GUI on ROV demonstrated satisfactory performance in detecting submarine cables. However, further efforts are needed to improve the GUI&#39;s FPS to make it more responsive and efficient. This research contributes to the development of underwater detection technology that supports environmental observation and electrical energy distribution in Indonesian waters.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_101-Design_and_Implementation_Submarine_Cable_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing Strategies and Protocols for Efficient Data Transmission in the Internet of Vehicles: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01409100</link>
        <id>10.14569/IJACSA.2023.01409100</id>
        <doi>10.14569/IJACSA.2023.01409100</doi>
        <lastModDate>2023-09-30T10:43:05.3030000+00:00</lastModDate>
        
        <creator>Yijun Xu</creator>
        
        <subject>Internet of things; internet of vehicles; Vehicular Ad Hoc Networks (VANETs); routing; network adaptability; vehicular technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The Internet of Vehicles (IoV) integrates wireless communication, vehicular technology, and the Internet to create intelligent transportation systems. Efficient routing of data packets within the IoV is crucial for seamless communication and service enablement. This paper provides a comprehensive review of routing strategies and protocols in the IoV environment, categorizing and evaluating existing approaches. Routing protocols are classified, their adaptability is assessed to network variations, and their performance is compared. Insights are drawn from researchers&#39; experiences. The paper offers a taxonomy of routing protocols, highlights adaptability to network conditions, and presents a comparative analysis. Lessons from researchers shed light on practical implications. The review identifies key routing challenges in IoV and provides a valuable resource for understanding and addressing these challenges in future research.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_100-Routing_Strategies_and_Protocols_for_Efficient_Data_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A QoS-aware Mechanism for Reducing TCP Retransmission Timeouts using Network Tomography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140999</link>
        <id>10.14569/IJACSA.2023.0140999</id>
        <doi>10.14569/IJACSA.2023.0140999</doi>
        <lastModDate>2023-09-30T10:43:05.2870000+00:00</lastModDate>
        
        <creator>Jingfu LI</creator>
        
        <subject>Latency; network tomography; end to end; depending on the probe</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>A wide range of web-based applications uses the Transmission Control Protocol (TCP) to ensure network resources are shared efficiently and fairly. As wired and wireless networks have become more complex, various end-to-end Congestion Control (CC) schemes have been developed, offering solutions through their proposed TCP variants. Network tomography, a powerful analytical tool, offers a unique perspective by measuring end-to-end performance to estimate internal network parameters, including latency. This estimation capability proves valuable, especially in cases where precise protocol performance evaluation is essential. The TCP protocol can be improved significantly by properly estimating the Round-Trip Time (RTT), resulting in better network conditions, improved reliability, and higher user satisfaction. In this study, we propose a method to infer the link delay using network tomography and then adjust the RTT based on the delay estimate obtained in the previous step. Simulation results obtained using the NS2 software show that the proposed method improves the TCP protocol&#39;s RTT estimation by more than 15%. It reduces congestion, improves information transfer efficiency, and ensures the highest level of service in the network.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_99-A_QoS_aware_Mechanism_for_Reducing_TCP_Retransmission_Timeouts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Rabbits Optimizer with Deep Learning Model for Blockchain-Assisted Secure Smart Healthcare System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140998</link>
        <id>10.14569/IJACSA.2023.0140998</id>
        <doi>10.14569/IJACSA.2023.0140998</doi>
        <lastModDate>2023-09-30T10:43:05.2730000+00:00</lastModDate>
        
        <creator>Mousa Mohammed Khubrani</creator>
        
        <subject>Blockchain; smart healthcare; artificial rabbit’s optimizer; deep learning; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Smart healthcare is based on the electronic health and medical histories of residents, combined with information technology (IT), which can be used to construct a variety of systems, including humanised health management systems and convenient medical service systems. The transparency, traceability, decentralization and security of blockchain (BC) technology and machine learning (ML) will enable the medical sector to upgrade and optimise different forms of quality and service. Therefore, this study introduces an artificial rabbits optimizer with deep learning for a Blockchain-Assisted Secure Smart Healthcare System (ARODL-BSSHS) technique. The presented ARODL-BSSHS technique designs a new healthcare monitoring technique using BC technology, classifies the presence of malicious activities in the healthcare system, and takes the needed actions to predict the disease. For intrusion detection, the ARODL-BSSHS technique exploits the ARO algorithm with an improved Hopfield neural network (IHNN) model. On the other hand, the ARODL-BSSHS technique applies a deep extreme learning machine (DELM) model for disease detection purposes. Finally, the heap-based optimization (HBO) technique is exploited as a hyperparameter optimizer for the DELM model. The ARODL-BSSHS technique involves BC technology for the secure transmission of healthcare data. A series of simulations was carried out on benchmark datasets (the heart disease and NSL-KDD databases) to examine the performance of the ARODL-BSSHS technique. The experimental values highlighted that the ARODL-BSSHS method obtains superior performance compared to other approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_98-Artificial_Rabbits_Optimizer_with_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A QoS-Aware Resource Allocation Method for Internet of Things using Ant Colony Optimization Algorithm and Tabu Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140997</link>
        <id>10.14569/IJACSA.2023.0140997</id>
        <doi>10.14569/IJACSA.2023.0140997</doi>
        <lastModDate>2023-09-30T10:43:05.2570000+00:00</lastModDate>
        
        <creator>Shuling YIN</creator>
        
        <creator>Renping YU</creator>
        
        <subject>Internet of things; resource allocation; virtualization; Ant Colony Optimization; Tabu Search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In today&#39;s computing era, the Internet of Things (IoT) stands out for its implementation of automation, high-quality ecosystems, creative and efficient services, and higher productivity. IoT has found applications in various fields, such as education, healthcare, agriculture, military, and industry, where diverse resource requirements present a major challenge. To address this issue, we propose a novel QoS-aware resource allocation method for IoT systems. Our approach combines the Ant Colony Optimization (ACO) and Tabu Search (TS) algorithms to manage resources effectively, minimize energy consumption, reduce communication delays, and enhance overall system performance. Experimental results demonstrate the efficiency and effectiveness of our approach, with significant improvements in QoS metrics compared to traditional methods. By merging ACO and TS algorithms, our research contributes to the advancement of IoT capabilities, energy conservation, and business optimization.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_97-A_QoS_Aware_Resource_Allocation_Method_for_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marginal Distribution Algorithm for Feature Model Test Configuration Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140996</link>
        <id>10.14569/IJACSA.2023.0140996</id>
        <doi>10.14569/IJACSA.2023.0140996</doi>
        <lastModDate>2023-09-30T10:43:05.2400000+00:00</lastModDate>
        
        <creator>Mohd Zanes Sahid</creator>
        
        <creator>Mohd Zainuri Saringat</creator>
        
        <creator>Mohd Hamdi Irwan Hamzah</creator>
        
        <creator>Nurezayana Zainal</creator>
        
        <subject>Estimation of distribution algorithm; marginal distribution algorithm; test configuration generation; pairwise testing; software product line</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Generating test configurations for a Software Product Line (SPL) is difficult due to the exponential effect of feature combination. Pairwise testing can generate test inputs for a single software product that deviate from exhaustive testing, yet it has proven to be effective. In the context of SPL testing, generating a minimal test configuration that maximizes pairwise coverage is not trivial, especially when dealing with a huge number of features and when constraints must be satisfied, which is the case in most SPL systems. In this paper, we propose an estimation of distribution algorithm, based on pairwise testing, to alleviate this problem. Comparisons are made against a greedy-based and a constraint-handling-based approach. The experiments demonstrate the feasibility of the proposed algorithm, such that it achieves better test configuration dissimilarity while maintaining the test configuration size and pairwise coverage. This is supported by analysis using descriptive statistics.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_96-Marginal_Distribution_Algorithm_for_Feature_Model_Test_Configuration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Group Intelligence Recommendation System based on Knowledge Graph and Fusion Recommendation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140994</link>
        <id>10.14569/IJACSA.2023.0140994</id>
        <doi>10.14569/IJACSA.2023.0140994</doi>
        <lastModDate>2023-09-30T10:43:05.2270000+00:00</lastModDate>
        
        <creator>Chengning Huang</creator>
        
        <creator>Bo Jing</creator>
        
        <creator>Lili Jiang</creator>
        
        <creator>Yuquan Zhu</creator>
        
        <subject>Knowledge graphs; recommendation system; graph convolutional networks; label propagation algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>As the use of group intelligence recommendation systems in everyday life increases, further improving the accuracy of a system&#39;s recommendations in a data-limited environment is a crucial challenge. Through the fusion of different types of auxiliary information, this study develops a multi-feature fusion model based on the conventional recommendation model by introducing knowledge graphs. It also considers the homogeneity of push results caused by graph convolutional network smoothing when using knowledge graphs, and designs a fused label propagation algorithm and graph convolution network model. Over the representation dimension interval [2, 32], the multi-feature fusion model had a maximum hit rate of over 80% and a normalised discounted gain of up to 43%, with a running time much lower than that of the conventional graph convolution recommendation model, while the fused label propagation and graph convolution network model maintained a hit rate and normalised discounted gain higher than those of the conventional model over 10 consecutive epochs. With a hit rate and normalised discounted gain 2 to 10 percentage points higher than the conventional model, the coverage rate increased to 49.8%. This study is useful for research on group intelligence recommendation systems and can serve as a technical guide for improving the ability of group intelligence systems to make recommendations quickly.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_94-Group_Intelligence_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Language Model-based Analysis of English Corpora and Literature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140995</link>
        <id>10.14569/IJACSA.2023.0140995</id>
        <doi>10.14569/IJACSA.2023.0140995</doi>
        <lastModDate>2023-09-30T10:43:05.2270000+00:00</lastModDate>
        
        <creator>Wenwen Chai</creator>
        
        <subject>Statistical language model; corpus; English literature; reordering; grammatical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Despite the widespread use of statistical language models in language processing, their ability to process natural languages is not advanced, and they struggle to effectively capture linguistic information. Furthermore, there is a lack of automatic processing models in the field of natural language processing. In order to address these issues and improve the processing ability of statistical language models for the English language, a statistical language model optimization algorithm has been proposed. This algorithm is based on an improved reordering algorithm and is specifically applied to process English literary texts. Experimental results indicate that the proposed algorithm outperforms the N-gram algorithm on a majority of texts, with a maximum accuracy improvement of 14.5%. Additionally, in terms of the grammar analysis model, there is a high level of consistency between the model&#39;s scoring and expert manual scoring, as reflected by a correlation coefficient of 0.7893. This high level of consistency between the grammar analysis model and expert analysis results holds significant importance for the advancement of natural language processing.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_95-Statistical_Language_Model_based_Analysis_of_English_Corpora.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PRESSNet: Assessment of Building Damage Caused by the Earthquake</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140993</link>
        <id>10.14569/IJACSA.2023.0140993</id>
        <doi>10.14569/IJACSA.2023.0140993</doi>
        <lastModDate>2023-09-30T10:43:05.2100000+00:00</lastModDate>
        
        <creator>Dewa Ayu Defina Audrey Nathania</creator>
        
        <creator>Alexander Agung Santoso Gunawan</creator>
        
        <creator>Edy Irwansyah</creator>
        
        <subject>Remote sensing; deep learning; PSPNet; ResNet; spatial attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Loss of life and property often occurs due to natural disasters and other significant occurrences such as earthquakes, which makes manual damage assessment a time-consuming and inefficient process. In an attempt to address this challenge, researchers have been investigating the field of automated damage assessment in remote sensing. With time, this area of research has transformed from conventional machine learning techniques to more sophisticated deep learning techniques. The study puts forward the PRESSNet model as a solution for assessing building damage. The effectiveness of the proposed PRESSNet model is compared to that of a baseline model, PSPNet, and ResNet 50, across different types of damage. This study contributes by introducing a spatial attention module to the baseline model. The xBD dataset, with imagery from before and after the Palu earthquake disaster, was used. The results show that PRESSNet performs similarly to or slightly better than the baseline model in all damage categories. This illustrates the impressive ability of the proposed PRESSNet architecture to accurately detect and classify building damage. This research sheds light on the development of effective models for assessing disaster damage and lays the foundation for future progress in this crucial area.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_93-PRESSNet_Assessment_of_Building_Damage_Caused.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tampering Detection and Segmentation Model for Multimedia Forensic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140992</link>
        <id>10.14569/IJACSA.2023.0140992</id>
        <doi>10.14569/IJACSA.2023.0140992</doi>
        <lastModDate>2023-09-30T10:43:05.1930000+00:00</lastModDate>
        
        <creator>Manjunatha S</creator>
        
        <creator>Malini M Patil</creator>
        
        <creator>Swetha M D</creator>
        
        <creator>Prabhu Vijay S S</creator>
        
        <subject>Convolution neural networks; digital image forensic; hybrid image transformation; resampling feature; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>When an image undergoes a hybrid post-processing transformation, detecting the tampered region, localizing it, and segmenting it become very difficult tasks. In particular, when a copy-move attack with hybrid transformation has contrast and illumination parameters similar to those of the authentic image, tamper detection becomes difficult. Additionally, under small-smooth attacks, existing tamper identification models provide very poor segmentation outcomes and sometimes fail to identify an image as tampered. This article focuses on addressing this difficulty through the adoption of a deep learning model. The proposed technique is efficient in detecting tampering with good segmentation outcomes. Existing models, however, fail to capture the relationships among adjacent pixels, which affects segmentation outcomes. In this paper, an Improved Convolution Neural Network (ICNN) assuring a correlation-awareness-based Tamper Detection and Segmentation (TDS) model for image forensics is presented. This model brings good correlation among adjacent pixels through the introduction of an additional layer, namely the correlation layer, alongside the vertical and horizontal layers. The TDS-ICNN is very effective in localizing and segmenting tamper regions, even under small-smooth post-processing tampering attacks, by using a feature descriptor built with an aggregated three-layer ICNN architecture. An experiment is done to compare TDS-ICNN with other tamper identification models using various datasets such as MICC, Coverage, and CoMoFoD. The TDS-ICNN is very efficient under different post-processing hybrid attacks when compared with existing models.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_92-Tampering_Detection_and_Segmentation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Image Encryption Algorithm using Latin Square Matrix and Logistics Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140991</link>
        <id>10.14569/IJACSA.2023.0140991</id>
        <doi>10.14569/IJACSA.2023.0140991</doi>
        <lastModDate>2023-09-30T10:43:05.1770000+00:00</lastModDate>
        
        <creator>Emmanuel Oluwatobi Asani</creator>
        
        <creator>Godsfavour Biety-Nwanju</creator>
        
        <creator>Abidemi Emmanuel Adeniyi</creator>
        
        <creator>Salil Bharany</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Anas W. Abulfaraj</creator>
        
        <creator>Wamda Nagmeldin</creator>
        
        <subject>Image encryption; algorithm; logistics map; Latin square matrix; chaos technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The goal of this study was to develop a robust image cryptographic scheme based on the Latin square matrix and the logistic map, capable of effectively securing sensitive data. Logistic mapping is a comparatively strong chaos system which enciphers with an unpredictability that significantly reduces the chance of deciphering. Additionally, the Latin square matrix stands out for its uniform histogram distribution, thereby bolstering the encryption&#39;s potency. The consequent integration of these algorithms in this study was therefore grounded in the scientific rationale of establishing a strong and resilient cipher technique. The study provides a new chaos-based method and extends the application of the probabilistic approach to the domain of symmetric-key image encryption. Permutation and substitution approaches to image encryption were deployed to address the issue of image volume and differing sizes. The issue of misplaced pixel positions in the image was also adequately addressed, making it an effective method for image encryption. The hybrid technique was simulated on image data and evaluated to gauge its performance. Results showed that the algorithm was able to securely protect image data and the private information associated with them, while also making it very difficult for unauthorized users to decrypt the information. The average encryption time of 184 μs on seven (7) images showed that it could be deployed for real-time systems. The proposed method obtained an average entropy of 7.9398 with a key space of 1.17x10^77 and an average avalanche effect of 49.9823%, confirming the security and resilience of the developed method.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_91-Development_of_an_Image_Encryption_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of an Intelligent Rendering System for New Year&#39;s Paintings Color Based on B/S Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140989</link>
        <id>10.14569/IJACSA.2023.0140989</id>
        <doi>10.14569/IJACSA.2023.0140989</doi>
        <lastModDate>2023-09-30T10:43:05.1630000+00:00</lastModDate>
        
        <creator>Zaozao Guo</creator>
        
        <subject>B/S architecture; intelligent rendering; adversarial neural network; Chinese New Year painting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>With the arrival of the artificial intelligence era, computer science has brought a new way of thinking to the protection and inheritance of intangible cultural heritage, and the diversity of intangible cultural heritage also offers greater opportunities for computer technology; however, many gaps remain in the research relevant to applying artificial intelligence to New Year&#39;s paintings. Training a Cycle Generative Adversarial Network (CycleGAN) realizes the task of extracting plots of different site types from planar maps and the rendering generation from planar color block maps to color texture maps. This paper first introduces the B/S network architecture, the Python programming language, and the Django framework. Then the specific approach of applying computer intelligence to the task of rendering Chinese New Year artwork is clarified via modeling, learning algorithms, and network architecture. Finally, a hierarchical fusion generative adversarial neural network structure is designed based on generative adversarial neural networks. The structural and textural features of the image are fused by the texture GAN and then rendered to generate the New Year paintings. The test results show that this algorithm draws New Year&#39;s pictures with clear texture, realistic imagery and full color, and the IS index reaches 3.16 in the quantitative analysis, which is higher than other comparison algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_89-Design_and_Development_of_an_Intelligent_Rendering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using EEG Effective Connectivity Based on Granger Causality and Directed Transfer Function for Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140990</link>
        <id>10.14569/IJACSA.2023.0140990</id>
        <doi>10.14569/IJACSA.2023.0140990</doi>
        <lastModDate>2023-09-30T10:43:05.1630000+00:00</lastModDate>
        
        <creator>Weisong Wang</creator>
        
        <creator>Wenjing Sun</creator>
        
        <subject>EEG; effective connectivity; granger causality; directed transfer function; emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Emotion is a complex phenomenon that originates from everyday issues and has significant effects on individual decisions. Electroencephalography (EEG) is one of the widely used tools for examining the neural correlates of emotions. In this research, the two concepts of Granger causality and the directed transfer function were utilized to analyze EEG data recorded from 36 healthy volunteers in positive, negative, and neutral emotional states and to determine the effective connectivity between different brain sources (obtained through independent component analysis). Shannon entropy was utilized to sort the brain sources obtained by the ICA method, and average topography helps to add spatial information to the proposed connectivity models. According to the obtained confusion matrix, our method yielded an overall accuracy of 75% in recognizing the three emotional states. Positive emotion was recognized with the highest accuracy of 87.96% (precision = 0.78, recall = 0.78 and F1-score = 0.81), followed by neutral (accuracy = 82.41%) and negative (accuracy = 79.63%) emotions. The proposed model has the ability to identify emotions in a completely personalized way based on neurobiological data. In the future, the proposed approach can be integrated with machine learning and neural network methods.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_90-Using_EEG_Effective_Connectivity_based_on_Granger_Causality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enterprise Marketing Decision: Advertising Click Through Rate Prediction Based on Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140988</link>
        <id>10.14569/IJACSA.2023.0140988</id>
        <doi>10.14569/IJACSA.2023.0140988</doi>
        <lastModDate>2023-09-30T10:43:05.1470000+00:00</lastModDate>
        
        <creator>Luyao Zhan</creator>
        
        <subject>Click through rate prediction; deep learning; deep neural network; online advertising; marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>With the rapid growth of modern information technology, online advertising, as a new form of advertising on the Internet, has begun to emerge, demonstrating enormous development potential. To improve the accuracy of advertising placement estimation and the operational efficiency of the advertising placement system, an improved deep neural network model for forecasting advertising click through rate was studied and designed. The values of the activation function and the dropout parameter are determined, and the prediction accuracy of the baseline deep neural network model and the improved model is compared and analyzed. The experimental results show that the training time of the improved prediction model is shortened by about 73.25%, a significant improvement in computational efficiency. At 110 iterations, the logarithmic loss of the baseline model is 0.208 and that of the improved model is 0.207, an average loss reduction of 0.4%. In the comparison of the area under the receiver operating characteristic curve, the model before improvement scored 0.7092 and the improved model 0.7207. Meanwhile, compared to before the improvement, the prediction accuracy of the improved model increased by 1.6%. The data validates that the optimized model has high prediction precision and efficiency, and has definite application potential and commercial value in marketing.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_88-Enterprise_Marketing_Decision_Advertising_Click.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Predictive Modelling of Cybersecurity Threats Utilising Behavioural Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140987</link>
        <id>10.14569/IJACSA.2023.0140987</id>
        <doi>10.14569/IJACSA.2023.0140987</doi>
        <lastModDate>2023-09-30T10:43:05.1300000+00:00</lastModDate>
        
        <creator>Ting Tin Tin</creator>
        
        <creator>Khiew Jie Xin</creator>
        
        <creator>Ali Aitizaz</creator>
        
        <creator>Lee Kuok Tiung</creator>
        
        <creator>Teoh Chong Keat</creator>
        
        <creator>Hasan Sarwar</creator>
        
        <subject>Cybersecurity threat; cybersecurity risk; predictive modeling; undergraduates; cybercrime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>With the rapid advancement of technology in Malaysia, the number of cybercrimes is also increasing. To curb this increase, everyone, including ordinary citizens, needs to know how secure they are while using digital appliances. A system is developed to predict users&#39; risk based on their online behaviour, using real-life behavioural data obtained from 207 undergraduates at a private university. Five supervised machine learning methods are tested: Logistic Regression, K-Nearest Neighbour (KNN), Decision Tree (DT), Support Vector Machine (SVM), and the Na&#239;ve Bayesian Classifier, with the aid of the RapidMiner tool. The algorithms are used to construct, test, and validate predictive models for three categories of cybercrime threat (Malware, Social Engineering, and Password Attack). It was found that the KNN model produces the highest accuracy and lowest classification error for all three threat categories. This system is believed to be crucial in alerting users to whether their behavioural risk is high or low and what further actions can be taken to increase awareness, thereby encouraging them to be more proactive in cybersecurity and helping to prevent the rise in cybercrimes.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_87-Machine_Learning_based_Predictive_Modelling_of_Cybersecurity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying and Prioritizing Digital Transformation Elements Using Fuzzy Analytic Hierarchy Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140986</link>
        <id>10.14569/IJACSA.2023.0140986</id>
        <doi>10.14569/IJACSA.2023.0140986</doi>
        <lastModDate>2023-09-30T10:43:05.1170000+00:00</lastModDate>
        
        <creator>Mohammed Hitham M. H</creator>
        
        <creator>Hatem Elkadi</creator>
        
        <creator>Neamat El Tazi</creator>
        
        <subject>Digital transformation; MCDM; AHP; fuzzy AHP introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Digital transformation addresses multiple aspects of the organization. These aspects are the elements to be addressed for digital transformation in any organization and are categorized as dimensions and sub-dimensions. In this work, these elements are collected from a wide range of related literature (56 publications). The most relevant elements were then identified through an expert survey involving 12 experts. The weights for these elements were identified using multi-criteria decision-making (MCDM) techniques. The Analytical Hierarchy Process (AHP) is one of the most frequently used MCDM techniques for incorporating individual and subjective preferences into an analysis and converting complex issues into a clear hierarchical structure. This work applies fuzzy AHP to address the treatment of uncertainty issues in AHP, using the geometric mean method and an iterative process to calculate the weights of the various dimensions and sub-dimensions and to prioritize them within the proposed roadmap for digital transformation implementation. Sensitivity analysis and comparison with AHP were used to validate our findings and the robustness of our approach. The proposed approach identified 9 main dimensions and 42 sub-dimensions, which align with the majority of the literature. However, the advantage of this approach is the prioritization of these nine dimensions and their sub-dimensions according to the weights assigned to each, allowing the project manager to allocate the available resources to the dimensions with the highest priority. The results show that the strategy and business process dimensions are the most crucial in the implementation of digital transformation.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_86-Identifying_and_Prioritizing_Digital_Transformation_Elements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing RNA-Seq Gene Expression Data for Cancer Classification Through ML Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140984</link>
        <id>10.14569/IJACSA.2023.0140984</id>
        <doi>10.14569/IJACSA.2023.0140984</doi>
        <lastModDate>2023-09-30T10:43:05.1000000+00:00</lastModDate>
        
        <creator>Abdul Wahid</creator>
        
        <creator>M Tariq Banday</creator>
        
        <subject>RNA-Sequence; gene expression; feature extraction; voting classifier; ensemble approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Purpose: Ribonucleic Acid Sequencing (RNA-Seq) is a technique that allows an efficient genome-wide analysis of gene expression. Such analysis is a strategy for identifying hidden patterns in data, including those related to cancer-specific biomarkers. Prior analyses used RNA-Seq data from the same type of cancer as both the positive and negative samples, without samples of different cancer kinds. Therefore, different cancer types must be evaluated to uncover differentially expressed genes and perform multi-cancer classification. Problem: Since gene expression reflects both the genetic make-up of an organism and the biochemical activities occurring in tissue and cells, it can be crucial in the early identification of cancer. The aim of this study is to classify RNA-Seq data into five different cancer forms, namely LUAD, BRCA, KIRC, LUSC, and UCEC, through an ensemble approach of machine learning algorithms. RNA-Seq data for five different cancer types from the UCI Machine Learning Repository are examined in this research. Methods: As a first step, the relevant features of the RNA-Seq data are extracted using Principal Component Analysis (PCA). Then, the extracted features are given to an ensemble of machine learning classifiers to classify the type of cancer. The ensemble of classifiers is built using Support Vector Machine (SVM), Naive Bayes (NB), and K-Nearest Neighbor (KNN). Results: The results demonstrated that the proposed ensemble classifier outperformed the existing machine-learning approaches with an accuracy of 99.59%.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_84-Analyzing_RNA_Seq_Gene_Expression_Data_for_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Historical Building 3D Reconstruction for a Virtual Reality-based Documentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140985</link>
        <id>10.14569/IJACSA.2023.0140985</id>
        <doi>10.14569/IJACSA.2023.0140985</doi>
        <lastModDate>2023-09-30T10:43:05.1000000+00:00</lastModDate>
        
        <creator>Ahmad Zainul Fanani</creator>
        
        <creator>Arry Maulana Syarif</creator>
        
        <subject>Virtual reality; immersive presentation; 3D reconstruction; historical heritage building preservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>An innovative preservation approach was proposed to document historical buildings as 3D models and present them virtually. The approach was applied to the Lawang Sewu building, one of the architectural masterpieces that is part of Indonesian history. Virtual Reality (VR) technology was used to create a Lawang Sewu VR application program that allows users to virtually walk around the building. A new method for 3D reconstruction was proposed, in which photo, video, and miniature documentation data, as well as notes collected from observations, were used as the main reference. Architectural record data was used in cases where information could not be obtained through the main reference. The proposed method focuses on traditional techniques at both the data acquisition and 3D modelling stages. Poly modelling was chosen for the 3D reconstruction based on its ease and flexibility in controlling the number of polys in 3D models and its suitability for repetitive spatial typologies such as the Lawang Sewu building. After textures were applied, the 3D model was sent to the VR editor. In addition to running on the desktop platform, a Head Mounted Device (HMD), which supports the creation of an immersive experience, was also chosen to run the Lawang Sewu VR. The evaluation carried out to measure the similarity of the 3D model to the original building and the immersive experience felt by users shows good results.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_85-Historical_Building_3D_Reconstruction_for_a_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Survival Prediction Method for Kidney Transplant Recipients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140983</link>
        <id>10.14569/IJACSA.2023.0140983</id>
        <doi>10.14569/IJACSA.2023.0140983</doi>
        <lastModDate>2023-09-30T10:43:05.0830000+00:00</lastModDate>
        
        <creator>Benita Jose Chalissery</creator>
        
        <creator>V. Asha</creator>
        
        <subject>Cox proportional hazard model; random survival forest; C-index; brier score; area under curve; organ transplantation; survival prognosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Human organ transplantation is a lifesaving process for many patients suffering from end-stage diseases. Transplantation surgeons are often confronted with the question of the expected survival prognosis for this expensive and perilous process. The aim of this work is to identify an optimal model for predicting recipient survival based on the available organ. This study identifies important features of the recipient and donor parameters for training the model. The study compares the performance of the Random Survival Forest (RSF), a machine learning method, and the Cox Proportional Hazard (CPH) model, a statistical model, to identify the more accurate model for survival prediction. Variations of the C-index, Brier score, and cumulative Area Under Curve are used to evaluate the survival models considered. This study suggests that CPH, the statistical method, is the better option for forecasting graft and patient survival for an improved clinical outcome.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_83-An_Optimized_Survival_Prediction_Method_for_Kidney_Transplant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques for Diabetes Classification: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140982</link>
        <id>10.14569/IJACSA.2023.0140982</id>
        <doi>10.14569/IJACSA.2023.0140982</doi>
        <lastModDate>2023-09-30T10:43:05.0700000+00:00</lastModDate>
        
        <creator>Hiri Mustafa</creator>
        
        <creator>Chrayah Mohamed</creator>
        
        <creator>Ourdani Nabil</creator>
        
        <creator>Aknin Noura</creator>
        
        <subject>Machine learning; support vector machine; artificial neural networks; decision tree; random forest; logistic regression; naive bayes; principal component analysis; classification; diabetes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In light of the growing global diabetes epidemic, there is a pressing need for enhanced diagnostic tools and methods. Enter machine learning, which, with its data-driven predictive capabilities, can serve as a powerful ally in the battle against this chronic condition. This research took advantage of the Pima Indians Diabetes Data Set, which captures diverse patient information, both diabetic and non-diabetic. Leveraging this dataset, we undertook a rigorous comparative assessment of six dominant machine learning algorithms, specifically: Support Vector Machine, Artificial Neural Networks, Decision Tree, Random Forest, Logistic Regression, and Naive Bayes. Aiming for precision, we introduced principal component analysis to the workflow, enabling strategic dimensionality reduction and thus spotlighting the most salient data features. Upon completion of our analysis, it became evident that the Random Forest algorithm stood out, achieving an exemplary accuracy rate of 98.6% when &#39;BP&#39; and &#39;SKIN&#39; attributes were set aside. This discovery prompts a crucial discussion: not all data attributes weigh equally in their predictive value, and a discerning approach to feature selection can significantly optimize outcomes. Concluding, this study underscores the potential and efficiency of machine learning in diabetes diagnosis. With Random Forest leading the pack in accuracy, there&#39;s a compelling case to further embed such computational techniques in healthcare diagnostics, ushering in an era of enhanced patient care.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_82-Machine_Learning_Techniques_for_Diabetes_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Blockchain Scalability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140981</link>
        <id>10.14569/IJACSA.2023.0140981</id>
        <doi>10.14569/IJACSA.2023.0140981</doi>
        <lastModDate>2023-09-30T10:43:05.0530000+00:00</lastModDate>
        
        <creator>Asmaa Aldoubaee</creator>
        
        <creator>Noor Hafizah Hassan</creator>
        
        <creator>Fiza Abdul Rahim</creator>
        
        <subject>Blockchain; scalability; sharding; consensus algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Blockchain is an exciting new technology that has garnered attention across multiple industries. It offers several advantages, including decentralization, transparency, and immutability. However, several issues limit its effectiveness, such as scalability, interoperability, and privacy. A systematic review of blockchain scalability research was conducted using three primary databases: ACM, Science Direct, and IEEE. The review examined the state of the art in blockchain scalability, identifying the most important research trends and challenges. The established solutions can be categorized into two main groups: those pertaining to block storage and those pertaining to the underlying blockchain mechanism. Numerous solutions were suggested within each main group. The most commonly proposed solutions in the literature for improving the scalability of blockchain networks are improving the consensus algorithm and using sharding. Most of the proposed solutions are proofs of concept and require further investigation in the future.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_81-A_Systematic_Review_on_Blockchain_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Osteoporosis Detection and Classification of Femur X-ray Images Through Spectral Domain Analysis using Texture Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140980</link>
        <id>10.14569/IJACSA.2023.0140980</id>
        <doi>10.14569/IJACSA.2023.0140980</doi>
        <lastModDate>2023-09-30T10:43:05.0530000+00:00</lastModDate>
        
        <creator>Dhanyavathi A</creator>
        
        <creator>Veena M B</creator>
        
        <subject>Classification; feature; femur; images; normal; osteopenia; osteoporosis; texture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Osteoporosis is commonly diagnosed as a bone disorder that affects a significant portion of the population. Dual X-ray Absorptiometry (DXA) is one of the most accepted standard methods for analyzing the disorder, but it is exorbitant, whereas X-ray is cost-effective. The proposed work therefore introduces a new technique to improve osteoporosis detection and classification from femur bone X-ray images. Texture features of spectral sub-band images are used to analyze the Region Of Interest (ROI) of the femoral head trabecular bone. A spectral domain based on the Two-Dimensional Discrete Wavelet Transform (2D-DWT) is used to represent variations in the finer details of the image. Trabecular femur bone texture is determined only by the horizontal, vertical, and diagonal sub-bands of the DWT coefficients. The sub-band images are further enhanced by applying the maximum response filter (MRF) at different scales, thereby strengthening the most significant responses. The sum of the MRFs of the different scale images is then taken as the supervised database. To detect osteoporosis, the test and supervised images are analyzed to calculate two significant attributes: Zero Mean Normalized Cross-Correlation (ZMNC) and Sum Squared Difference (SSD). Experimental results show that the performance metrics improve in all aspects over current methods.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_80-Osteoporosis_Detection_and_Classification_of_Femur_X_ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Road Surface Damage Detection Framework based on Mask R-CNN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140979</link>
        <id>10.14569/IJACSA.2023.0140979</id>
        <doi>10.14569/IJACSA.2023.0140979</doi>
        <lastModDate>2023-09-30T10:43:05.0370000+00:00</lastModDate>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <creator>Magzat Nurlybek</creator>
        
        <creator>Gulnar Astaubayeva</creator>
        
        <creator>Gulnara Tleuberdiyeva</creator>
        
        <creator>Serik Zholdasbayev</creator>
        
        <creator>Abdimukhan Tolep</creator>
        
        <subject>Deep learning; CNN; random forest; SVM; neural network; prediction; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In the ever-evolving realm of infrastructure management, the timely and accurate detection of road surface damages is imperative for the longevity and safety of transportation networks. This research paper introduces a pioneering framework centered on the Mask R-CNN (Region-based Convolutional Neural Networks) model for real-time road surface damage detection. The overarching methodology encapsulates a deep learning-based approach to discern and classify various road aberrations such as potholes, cracks, and rutting. The chosen Mask R-CNN architecture, renowned for its proficiency in instance segmentation tasks, has been fine-tuned and optimized specifically for the unique challenges posed by road surfaces under diverse lighting and environmental conditions. A diverse dataset, amalgamating urban, suburban, and rural roadways under varied climatic conditions, served as the foundation for model training and validation. Preliminary results have not only underscored the model&#39;s robustness in real-time detection but also its superiority in terms of accuracy and computational efficiency when juxtaposed with extant methods. Concomitantly, the framework emphasizes scalability and adaptability, positing it as a frontrunner for potential integration into automated road maintenance systems and vehicular navigation aids. This trailblazing endeavor elucidates the potentialities of deep learning paradigms in revolutionizing road management systems, thus fostering safer and more efficient transportation environments.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_79-Real_Time_Road_Surface_Damage_Detection_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid CNN-LSTM Network for Cyberbullying Detection on Social Networks using Textual Contents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140978</link>
        <id>10.14569/IJACSA.2023.0140978</id>
        <doi>10.14569/IJACSA.2023.0140978</doi>
        <lastModDate>2023-09-30T10:43:05.0230000+00:00</lastModDate>
        
        <creator>Daniyar Sultan</creator>
        
        <creator>Mateus Mendes</creator>
        
        <creator>Aray Kassenkhan</creator>
        
        <creator>Olzhas Akylbekov</creator>
        
        <subject>Deep learning; machine learning; NLP; classification; detection; cyberbullying</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In the face of escalating cyberbullying and its associated online activities, devising effective mechanisms for its detection remains a critical challenge. This study proposes an innovative approach, integrating Long Short-Term Memory (LSTM) networks with Convolutional Neural Networks (CNN), for the detection of cyberbullying in online textual content. The method uses LSTM to understand the temporal aspects and sequential dependencies of text, while CNN is employed to automatically and adaptively learn spatial hierarchies of features. We introduce a hybrid LSTM-CNN model which has been designed to optimize the detection of potential cyberbullying signals within large quantities of online text, through the application of advanced natural language processing (NLP) techniques. The paper reports the results from rigorous testing of this model across an extensive dataset drawn from multiple online platforms, indicative of the current digital landscape. Comparisons were made with prevailing methods for cyberbullying detection, demonstrating a substantial improvement in accuracy, precision, recall and F1-score. This research constitutes a significant step forward in developing robust tools for detecting online cyberbullying, thereby enabling proactive interventions and informed policy development. The effectiveness of the LSTM-CNN hybrid model underscores the transformative potential of leveraging artificial intelligence for social safety and cohesion in an increasingly digitized society. The potential applications and limitations of this model, alongside avenues for future research, are discussed.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_78-Hybrid_CNN_LSTM_Network_for_Cyberbullying_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Deep Neural Network to Analyze and Monitoring the Physical Training Relation to Sports Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140977</link>
        <id>10.14569/IJACSA.2023.0140977</id>
        <doi>10.14569/IJACSA.2023.0140977</doi>
        <lastModDate>2023-09-30T10:43:05.0070000+00:00</lastModDate>
        
        <creator>Bakhytzhan Omarov</creator>
        
        <creator>Nurlan Nurmash</creator>
        
        <creator>Bauyrzhan Doskarayev</creator>
        
        <creator>Nagashbek Zhilisbaev</creator>
        
        <creator>Maxat Dairabayev</creator>
        
        <creator>Shamurat Orazov</creator>
        
        <creator>Nurlan Omarov</creator>
        
        <subject>ANN; PoseNET; exercise monitoring; machine learning; neural networks; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In this research paper, the authors meticulously detail the development, testing, and application of an innovative deep learning model aimed at monitoring the physical activities of students in real time. Drawing upon the advanced capabilities of convolutional neural networks (CNNs), the proposed system exhibits an exceptional ability to track, analyze, and evaluate the physical exercises performed by students, thereby providing unprecedented scope for customization in physical education strategies. This piece of scholarly work bridges the gap between physical education and cutting-edge technology, highlighting the burgeoning role of artificial intelligence in the health and fitness sector. With an expansive study spanning various cohorts of physical culture students, the paper provides compelling empirical evidence that underlines the superiority of the deep learning system over conventional methods in terms of accuracy, speed, and efficiency of monitoring. The authors demonstrate the transformative potential of their system, which is capable of facilitating personalized and optimized physical training strategies based on real-time feedback. Moreover, the potential implications of the study extend beyond the realm of education and into wider public health applications, with the possibility of fostering improved health outcomes on a larger scale. This research paper makes a significant contribution to the growing field of AI in physical education, embodying a paradigm shift in the approach to physical fitness and health monitoring. It underscores the potential of AI-driven technology to revolutionize traditional methods in physical education, paving the way for more personalized and effective teaching and training regimes, and ultimately contributing to enhanced health and fitness outcomes among students.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_77-A_Novel_Deep_Neural_Network_to_Analyze_and_Monitoring_the_Physical_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Level of Safety Feeling of Bangladeshi Internet users using Data Mining and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140976</link>
        <id>10.14569/IJACSA.2023.0140976</id>
        <doi>10.14569/IJACSA.2023.0140976</doi>
        <lastModDate>2023-09-30T10:43:04.9900000+00:00</lastModDate>
        
        <creator>Md. Safiul Alam</creator>
        
        <creator>Anirban Roy</creator>
        
        <creator>Partha Protim Majumder</creator>
        
        <creator>Sharun Akter Khushbu</creator>
        
        <subject>Bangladesh; data analysis; data mining; important factors; machine learning; prediction; performance evaluation metrics; safety level</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This study presents a combination of cutting-edge data mining and machine learning methodologies to predict the level of safety feeling among Bangladeshi internet users, marking a significant departure in this subject area. By leveraging advanced algorithms and innovative data sources, this work provides new insights into how this demographic perceives online safety, shedding light on an essential yet underappreciated aspect of their digital lives. The study&#39;s original research adds to the body of knowledge on online safety and paves the way for policy recommendations and intervention strategies intended to enable Bangladesh to become a global leader in internet security.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_76-Predicting_the_Level_of_Safety_Feeling_of_Bangladeshi_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Stethoscope for Early Detection of Heart Disease on Phonocardiography Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140975</link>
        <id>10.14569/IJACSA.2023.0140975</id>
        <doi>10.14569/IJACSA.2023.0140975</doi>
        <lastModDate>2023-09-30T10:43:04.9770000+00:00</lastModDate>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Assyl Tuimebayev</creator>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Bakytgul Yeskarayeva</creator>
        
        <creator>Daniyar Sultan</creator>
        
        <creator>Kanat Aidarov</creator>
        
        <subject>Deep learning; CNN; random forest; SVM; neural network; prediction; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The burgeoning realm of digital healthcare has unveiled a novel diagnostic instrument: a digital stethoscope tailored for the early detection of heart disease as elucidated in this research. By harnessing the nuanced capabilities of phonocardiography, this device captures intricate heart sounds, subsequently processed through advanced machine learning algorithms. Traditional stethoscopes, although indispensable, might miss subtle anomalies – a lacuna this digital counterpart addresses by meticulously analyzing phonocardiographic data for the slightest deviations indicative of cardiac anomalies. As the digital stethoscope delves into this trove of aural cues, the machine learning component discerns patterns and irregularities often imperceptible to human auditors. The confluence of these digital acoustics and computational analytics not only augments the accuracy of early heart disease diagnosis but also facilitates the archival of this data, engendering a continuous, longitudinal assessment of cardiac health. The initial foray into real-world application registered an encouraging precision rate, cementing its potential as an invaluable asset in preemptive cardiac care. With this innovation, we stand on the cusp of a paradigm shift in how heart diseases are diagnosed, making strides towards timely interventions and improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_75-Digital_Stethoscope_for_Early_Detection_of_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Conv-LSTM Network for Arrhythmia Detection using ECG Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140973</link>
        <id>10.14569/IJACSA.2023.0140973</id>
        <doi>10.14569/IJACSA.2023.0140973</doi>
        <lastModDate>2023-09-30T10:43:04.9600000+00:00</lastModDate>
        
        <creator>Alisher Mukhametkaly</creator>
        
        <creator>Zeinel Momynkulov</creator>
        
        <creator>Nurgul Kurmanbekkyzy</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <subject>Deep learning; Conv-LSTM; classification; ECG; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In the evolving realm of medical diagnostics, electrocardiogram (ECG) data stands as a cornerstone for cardiac health assessment. This research introduces a novel approach, leveraging the capabilities of a Deep Convolutional Long Short-Term Memory (Conv-LSTM) network for the early and accurate detection of arrhythmias using ECG data. Traditionally, cardiac anomalies have been diagnosed through heuristic means, often requiring intricate scrutiny and expertise. However, the Deep Conv-LSTM model proposed herein addresses the inherent limitations of traditional methods by amalgamating the spatial feature extraction capability of convolutional neural networks (CNN) with the temporal sequence learning capacity of LSTM networks. Initial results derived from a diverse dataset, comprising myriad ECG waveform anomalies, demonstrated an enhancement in accuracy, reducing false positives and facilitating timely interventions. Notably, the model showcased adaptability in handling the burstiness of ECG signals, reflecting various heart rhythms, and the perplexity inherent in diagnosing subtle arrhythmic events. Additionally, the model&#39;s ability to discern longer, more complex patterns alongside transient anomalies offers potential for broader applications in telemetry and continuous patient monitoring systems. It is anticipated that this innovative fusion of CNN and LSTM architectures will usher in a paradigm shift in automated arrhythmia detection, bridging the chasm between technology and the intricate nuances of cardiac physiology, thus improving patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_73-Deep_Conv_LSTM_Network_for_Arrhythmia_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DetBERT: Enhancing Detection of Policy Violations for Voice Assistant Applications using BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140974</link>
        <id>10.14569/IJACSA.2023.0140974</id>
        <doi>10.14569/IJACSA.2023.0140974</doi>
        <lastModDate>2023-09-30T10:43:04.9600000+00:00</lastModDate>
        
        <creator>Rawan Baalous</creator>
        
        <creator>Joud Alzahrani</creator>
        
        <creator>Mariam Ali</creator>
        
        <creator>Rana Asiri</creator>
        
        <creator>Eman Nooli</creator>
        
        <subject>Alexa; Google assistant; BERT; policy violation detector; voice assistant; user privacy; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Voice Assistants, also known as VAs, have gained popularity in the last few years. They make our daily tasks easier via simple voice instructions. VA platforms allow third-party developers to develop voice applications and publish them on the platforms. However, VA applications may collect users’ personal information for different purposes. To maintain the security and privacy of users, VA platforms have specified a set of policies that must be adhered to by VA application developers. This paper aims to automatically detect voice apps that do not comply with the VA platforms&#39; policies. To this end, DetBERT, a comprehensive testing tool, was built. DetBERT evaluates voice apps&#39; compliance with the policies using the BERT model by analyzing the apps’ behaviors and detecting violations. With DetBERT, a total of 50,000 voice assistant apps from the Amazon Alexa and Google Assistant platforms were tested. The paper demonstrates that DetBERT can accurately identify whether or not a voice assistant application has violated the platform’s policy.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_74-DetBERT_Enhancing_Detection_of_Policy_Violations_for_Voice_Assistant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Feature Fusion for the Classification of Histopathological Carcinoma Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140972</link>
        <id>10.14569/IJACSA.2023.0140972</id>
        <doi>10.14569/IJACSA.2023.0140972</doi>
        <lastModDate>2023-09-30T10:43:04.9430000+00:00</lastModDate>
        
        <creator>Salini S Nair</creator>
        
        <creator>M. Subaji</creator>
        
        <subject>Breast cancer; machine learning; artificial intelligence; feature extraction; ensemble classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Breast cancer is a significant global health concern, demanding advanced diagnostic approaches. Although traditional imaging and manual examinations are common, the potential of artificial intelligence (AI) and machine learning (ML) in breast cancer detection remains underexplored. This study proposes a hybrid approach combining image processing and ML methods to address breast cancer diagnosis challenges. The method utilizes feature fusion with gray-level co-occurrence matrix (GLCM), local binary patterns (LBP), and histogram features, alongside an ensemble learning technique for improved classification. Results demonstrate the approach&#39;s effectiveness in accurately classifying three carcinoma classes (ductal, lobular, and papillary). The Voting Classifier, an ensemble learning model, achieves the highest accuracy, precision, recall, and F1-scores across carcinoma classes. By harnessing feature extraction and ensemble learning, the proposed approach offers advantages such as early detection, improved accuracy, personalized medicine recommendations, and efficient analysis. Integration of AI and ML in breast cancer diagnosis shows promise for enhancing accuracy, effectiveness, and personalized patient care, supporting informed decision-making by healthcare professionals. Future research and technological advancements can refine AI-ML algorithms, contributing to earlier detection, better treatment outcomes, and higher survival rates for breast cancer patients. Validation and scalability studies are needed to confirm the effectiveness of the proposed hybrid approach. In conclusion, leveraging AI and ML techniques has the potential to revolutionize breast cancer diagnosis, leading to more accurate and personalized detection and treatment. Technology-driven advances can significantly impact breast cancer care and management.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_72-A_Novel_Feature_Fusion_for_the_Classification_of_Histopathological_Carcinoma.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-based Volleyball Target Detection and Behavior Recognition Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140970</link>
        <id>10.14569/IJACSA.2023.0140970</id>
        <doi>10.14569/IJACSA.2023.0140970</doi>
        <lastModDate>2023-09-30T10:43:04.9270000+00:00</lastModDate>
        
        <creator>Jieli Huang</creator>
        
        <creator>Wenjun Zou</creator>
        
        <subject>Volleyball; video detail enhancement; hmm; CamShift tracking; detection; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Volleyball has limitations in relying solely on judges’ subjective judgments to call penalties for infractions on the court. While video detail enhancement technology is extremely useful for target tracking and extraction in sports video, current research on video detail enhancement technology pays little attention to the tracking and recognition of violations in ball games. Therefore, the study uses a fusion of the wavelet exchange method, the three-frame difference method, and the background subtraction method to detect and extract moving targets, and uses an improved CamShift tracking algorithm and an HMM to track and identify violation actions. Overall, the study constructs a tracking and recognition model for volleyball violations based on video enhancement technology to achieve accurate penalty calls in intensely competitive games. Experimental analysis and comparison show that the tracking F-measure of the constructed model is 0.89, achieving a good tracking effect; the recognition accuracy is 99.76% and the average error is 0.003. The model can thus effectively track and recognize players’ illegal actions during volleyball play and support objective court penalties that guarantee the fairness and justice of the game.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_70-Artificial_Intelligence_based_Volleyball_Target_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Multiple Bleeding Detection in Wireless Capsule Endoscopy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140971</link>
        <id>10.14569/IJACSA.2023.0140971</id>
        <doi>10.14569/IJACSA.2023.0140971</doi>
        <lastModDate>2023-09-30T10:43:04.9270000+00:00</lastModDate>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Ghaida Ali Alkhudhair</creator>
        
        <creator>Lena Saleh Alotaibi</creator>
        
        <creator>Noura Abdulhakeem Almhizea</creator>
        
        <creator>Sara Mohammed Almuhanna</creator>
        
        <creator>Shouq Fahad Alzeer</creator>
        
        <subject>Wireless Capsule Endoscopy (WCE); Multiple Bleeding Spots (MBS); Gastrointestinal (GI) disease; deep learning; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Wireless Capsule Endoscopy (WCE) is a diagnostic technology for gastrointestinal tract pathology detection. It has emerged as an alternative to conventional endoscopy, which can be distressing to the patient. However, the diagnosis process requires viewing and analyzing hundreds of frames extracted from a WCE video, which makes the diagnosis tedious. For this reason, research related to the automatic detection of signs of gastrointestinal diseases has intensified. In this paper, we design a pattern recognition system for detecting Multiple Bleeding Spots (MBS) in WCE video. The proposed system relies on the Deep Learning approach to accurately recognize multiple bleeding spots in the gastrointestinal tract. Specifically, the You Only Look Once (YOLO) Deep Learning models are explored in this paper, namely YOLOv3, YOLOv4, YOLOv5 and YOLOv7. The experimental results showed that YOLOv7 is the most appropriate model for designing the proposed MBS detection system. Specifically, the proposed system achieved a mAP of 0.86 and an IoU of 0.8. Moreover, the detection results were further improved by augmenting the training data, reaching a mAP of 0.883.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_71-Deep_Learning_Based_Multiple_Bleeding_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mechatronics Design and Development of T-EVA: Bio-Sensorized Space System for Astronaut’s Upper Body Temperature Monitoring During Extravehicular Activities on the Moon and Mars</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140969</link>
        <id>10.14569/IJACSA.2023.0140969</id>
        <doi>10.14569/IJACSA.2023.0140969</doi>
        <lastModDate>2023-09-30T10:43:04.9130000+00:00</lastModDate>
        
        <creator>Paul Palacios</creator>
        
        <creator>Jose Cornejo</creator>
        
        <creator>Juan C. Chavez</creator>
        
        <creator>Carlos Cornejo</creator>
        
        <creator>Jorge Cornejo</creator>
        
        <creator>Mariela Vargas</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <creator>Julio Valdivia-Silva</creator>
        
        <subject>Extravehicular-activities astronauts; spacesuits; body temperature; Mars; space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The exploration of the universe is progressively increasing; within this endeavor, the planet Mars and the Moon, as well as their colonization and settlement, remain a mystery and a challenge. During extravehicular activities (EVA), astronauts will work in extreme environments performing tasks such as exploration and the collection of rock and soil samples for later analysis; while performing these activities, they will be exposed to extreme environmental parameters such as radiation, temperature, gravity, and many other extreme conditions. Therefore, the Center of Space Emerging Technologies (C-SET) proposed a project called T-EVA, developed within the Research Line: Space Suits and Assistive Devices, and the Research Area: Biomechatronics and Life Support Systems, with the aim of monitoring astronaut temperature during work outside the base station, making it possible to know the astronaut&#39;s body temperature and whether there is a risk of hypothermia or hyperthermia, which could cause irreparable damage. The electronic design was made for testing both in the laboratory and outdoors, and the design was mounted on a lycra garment, resulting in a feasible prototype that can be implemented in real situations with easy access to temperature reports.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_69-Mechatronics_Design_and_Development_of_T_EVA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Personalized Recommendation and Sharing Management System for Science and Technology Achievements based on WEBSOCKET Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140968</link>
        <id>10.14569/IJACSA.2023.0140968</id>
        <doi>10.14569/IJACSA.2023.0140968</doi>
        <lastModDate>2023-09-30T10:43:04.8970000+00:00</lastModDate>
        
        <creator>Shan Zuo</creator>
        
        <creator>Kai Xiao</creator>
        
        <creator>Taitian Mao</creator>
        
        <subject>Research management; personalised recommendations; WebSocket; ruby on rails; informatization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Scientific research is becoming more and more crucial to contemporary society as the backbone of the nation&#39;s innovation-driven development. The rapid growth of information technology and its increasing adoption in scientific research both contribute to the globalization of research. Small research groups, however, still lack a place to showcase and share their accomplishments. To integrate scientific research information and apply personalised recommendation technology that suggests developments of interest to users based on their historical behaviour data, the study proposes a personalised recommendation and sharing management system for scientific and technological achievements based on the Ruby on Rails framework. According to the testing results, the system had a request response time of 299 ms, a maximum request resource size of 1 KB, and a data transfer time of 20 ms. Additionally, the study&#39;s user-based collaborative filtering recommendation algorithm achieves an accuracy rate of 41% when the nearest-neighbour parameter is set to 50, the number of information suggestions is 10, and the training set ratio is 0.7, which essentially satisfies the system criteria. In conclusion, the research suggests that the personalised recommendation and sharing management system for scientific and technological accomplishments can essentially satisfy the needs of small research teams to communicate and share scientific accomplishments, as well as realise the sharing of scientific achievements.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_68-Design_of_Personalized_Recommendation_and_Sharing_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An MILP-based Lexicographic Approach for Robust Selective Full Truckload Vehicle Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140967</link>
        <id>10.14569/IJACSA.2023.0140967</id>
        <doi>10.14569/IJACSA.2023.0140967</doi>
        <lastModDate>2023-09-30T10:43:04.8800000+00:00</lastModDate>
        
        <creator>Karim EL Bouyahyiouy</creator>
        
        <creator>Anouar Annouch</creator>
        
        <creator>Adil Bellabdaoui</creator>
        
        <subject>Vehicle routing problem; full truckload; robust optimization; MILP-based lexicographic approach; uncertain travel time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Full truckload (FTL) shipment is one of the largest trucking modes. It is an essential part of the transportation industry, where the carriers are required to move FTL transportation demands (orders) at a minimal cost between pairs of locations using a certain number of trucks available at the depots. The drivers who pick up and deliver these orders must return to their home depots within a given time. In practice, satisfying those orders within a given time frame (e.g., one day) could be impossible while adhering to all operational constraints. As a result, the investigated problem is distinguished by the selective aspect, in which only a subset of transportation demands is serviced. Furthermore, travel times between nodes can be uncertain and vary depending on various possible scenarios. The robustness subsequently consists of identifying a feasible solution in all scenarios. Therefore, this study introduces an MILP-based lexicographic approach to solve a robust selective full truckload vehicle routing problem (RSFTVRP). We demonstrated the proposed method’s efficiency through experimental results on newly generated instances for the considered problem.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_67-An_MILP_based_Lexicographic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AIRA-ML: Auto Insurance Risk Assessment-Machine Learning Model using Resampling Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140966</link>
        <id>10.14569/IJACSA.2023.0140966</id>
        <doi>10.14569/IJACSA.2023.0140966</doi>
        <lastModDate>2023-09-30T10:43:04.8800000+00:00</lastModDate>
        
        <creator>Ahmed Shawky Elbhrawy</creator>
        
        <creator>Mohamed A. Belal</creator>
        
        <creator>Mohamed Sameh Hassanein</creator>
        
        <subject>Risk assessment; machine learning; imbalanced data; rapid miner; CRISP-DM methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Predicting underwriting risk has become a major challenge due to the imbalanced datasets in the field. A real-world imbalanced dataset is used in this work, with 12 variables in 30,144 cases, where most of the cases were classified as &quot;accepting the insurance request&quot;, while a small percentage were classified as &quot;refusing insurance&quot;. This work developed 55 machine learning (ML) models to predict whether or not to renew policies. The models were developed using the original dataset and four data-level resampling techniques: random oversampling, SMOTE, random undersampling, and hybrid methods, combined with 11 ML algorithms to address the issue of imbalanced data (11 ML algorithms &#215; (4 resampling techniques + the unbalanced dataset) = 55 ML models). Seven classifier efficiency measures were used to evaluate the 55 models developed using the 11 ML algorithms: logistic regression (LR), random forest (RF), artificial neural network (ANN), multilayer perceptron (MLP), support vector machine (SVM), naive Bayes (NB), decision tree (DT), XGBoost, k-nearest neighbors (KNN), stochastic gradient boosting (SGB), and AdaBoost. The seven classifier efficiency measures are accuracy, sensitivity, specificity, AUC, precision, F1-measure, and kappa. The CRISP-DM methodology is utilised to ensure that the study is conducted in a rigorous and systematic manner. Additionally, RapidMiner software was used to apply the algorithms and analyze the data, which highlighted the potential of ML to improve the accuracy of risk assessment in insurance underwriting. The results showed that all ML classifiers became more effective when using resampling strategies; the hybrid resampling methods improved the performance of the ML models on imbalanced data, with an accuracy of 0.9967 and a kappa statistic of 0.992 for the RF classifier.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_66-AIRA_ML_Auto_Insurance_Risk_Assessment_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection and Recognition in Remote Sensing Images by Employing a Hybrid Generative Adversarial Networks and Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140965</link>
        <id>10.14569/IJACSA.2023.0140965</id>
        <doi>10.14569/IJACSA.2023.0140965</doi>
        <lastModDate>2023-09-30T10:43:04.8670000+00:00</lastModDate>
        
        <creator>Araddhana Arvind Deshmukh</creator>
        
        <creator>Mamta Kumari</creator>
        
        <creator>V.V. Jaya Rama Krishnaiah</creator>
        
        <creator>Suraj Bandhekar</creator>
        
        <creator>R. Dharani</creator>
        
        <subject>Object detection; Generative Adversarial Networks (GAN); Convolutional Neural Networks (CNN); deep learning; remote sensing; satellite images; hybrid model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Due to diverse backdrops, scale fluctuations, and a lack of annotated training data, the identification and recognition of objects in remote sensing images present major problems. In order to overcome these difficulties, this work suggests a novel hybrid technique that blends GANs and CNNs. The suggested approach expands the small labelled dataset by synthesising realistic training examples using the generative abilities of GANs. The generated samples capture the diverse variations and backgrounds found in remote sensing images, improving the object identification and recognition model&#39;s capacity to generalise. Additionally, CNNs, which are recognised for their outstanding feature extraction skills, are incorporated into the hybrid approach, enabling precise and reliable object identification and recognition. The model&#39;s CNN component is trained using both real and synthetic data, effectively combining the advantages of both domains. Several experiments are conducted on a large dataset of satellite images to evaluate the performance of the proposed method. The results demonstrate that the hybrid model, with an accuracy of 97.32%, outperforms traditional approaches and pure CNN-based approaches in terms of dependability and resilience. The model may be efficiently generalised to unseen remote sensing images thanks to the GAN-generated samples, which bridge the gap between synthetic and actual data. The hybrid methodology used in this study demonstrates the potential of merging GANs and CNNs for object detection and recognition in remote sensing images using deep learning.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_65-Object_Detection_and_Recognition_in_Remote_Sensing_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Image Encryption using Non-Adjacent Bits Dynamic Encoding DNA with RSA and Chaotic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140964</link>
        <id>10.14569/IJACSA.2023.0140964</id>
        <doi>10.14569/IJACSA.2023.0140964</doi>
        <lastModDate>2023-09-30T10:43:04.8500000+00:00</lastModDate>
        
        <creator>Marwa A. Elmenyawi</creator>
        
        <creator>Nada M. Abdel Aziem</creator>
        
        <subject>Cryptography; image encryption; hash function; chaotic map; DNA encoding; DNA operations; RSA algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Image encryption is a crucial aspect that helps to maintain the images&#39; confidentiality and security in diverse applications. Ongoing research is focused on improving the efficiency and effectiveness of encryption. Image encryption has many practical applications in today&#39;s digital world, such as securing confidential images transmitted over networks, protecting sensitive personal information stored in images, and ensuring the privacy of medical images. The suggested work represents a breakthrough in image encryption by proposing a model that leverages the power of DNA, RSA, and chaos. This model has three phases: key generation, confusion, and diffusion. The key generation phase employs a hash function and a hyperchaotic technique to generate a strong key. During the confusion phase, the positions of pixels are rearranged, either at the image level or within blocks, using the Duffing chaotic map. Once the scrambling level is determined, each pixel undergoes two successive scrambling steps, with the Henon and Arnold chaotic maps, to change its location. During the diffusion phase, the encryption model employs a two-way approach to ensure maximum security: first, it utilizes dynamic DNA cryptography for non-adjacent bits, followed by robust RSA cryptography. The experimental results indicate that the model possesses a strong level of security and randomness and can withstand different attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_64-Hybrid_Image_Encryption_using_Non_Adjacent_Bits_Dynamic_Encoding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Deep Convolutional Neural Networks and Non-Negative Matrix Factorization for Multi-Modal Image Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140963</link>
        <id>10.14569/IJACSA.2023.0140963</id>
        <doi>10.14569/IJACSA.2023.0140963</doi>
        <lastModDate>2023-09-30T10:43:04.8330000+00:00</lastModDate>
        
        <creator>Nripendra Narayan Das</creator>
        
        <creator>Santhakumar Govindasamy</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>E.Thenmozhi</creator>
        
        <subject>Image fusion; deep convolution network; non-negative matrix factorization; multi-modal images; vector space model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>A key element of contemporary computer vision, image fusion tries to improve the quality and interpretability of images by combining complementary data from several image sources or modalities. This paper offers a unique method for multi-modal image fusion, combining the benefits of Deep Convolutional Neural Networks (CNNs) and Non-Negative Matrix Factorization (NMF), by using current developments in deep learning and matrix factorization techniques. Deep CNNs have proven to be remarkably effective in extracting features from images, capturing complex patterns and discriminative data. In the suggested technique, a group of deep CNNs is trained on a varied dataset of multi-modal images. With the help of these networks, which extract and encode pertinent characteristics from several modalities, information-rich representations may then be combined. Concatenating the features derived from the CNNs during the fusion process results in a fused feature representation that faithfully expresses the input modalities. The main novelty is the two-stage integration of NMF: first, breaking down the fused feature representation into non-negative basis vectors and coefficients, and then, using NMF to further extract important patterns from the fused feature maps. The non-negativity requirement in NMF guarantees the preservation of the natural structures and characteristics present in the source images, resulting in fused images that are both aesthetically pleasing and semantically intelligible. Visual examination of the merged images demonstrates the method&#39;s capacity to successfully extract important information from several modalities. The better performance and robustness of the suggested approach, which has an accuracy of roughly 99.12%, are highlighted by comparison with existing fusion approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_63-Utilizing_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feline Wolf Net: A Hybrid Lion-Grey Wolf Optimization Deep Learning Model for Ovarian Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140962</link>
        <id>10.14569/IJACSA.2023.0140962</id>
        <doi>10.14569/IJACSA.2023.0140962</doi>
        <lastModDate>2023-09-30T10:43:04.8330000+00:00</lastModDate>
        
        <creator>Moresh Mukhedkar</creator>
        
        <creator>Divya Rohatgi</creator>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>K V S S Ramakrishna</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>V. Antony Asir Daniel</creator>
        
        <subject>Ovarian cancer; deep learning; bidirectional long short term memory; CT images; convolutional neural network; lion grey wolf optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Ovarian cancer is a major cause of mortality among gynecological malignancies, emphasizing the critical role of early detection in improving patient outcomes. This paper presents an automated computer-aided diagnosis system that combines deep learning techniques with an optimization mechanism for accurate ovarian cancer detection, utilizing a pelvic CT image dataset. The key contribution of this work is the development of an optimized Bi-directional Long Short-Term Memory (Bi-LSTM) model integrated into the layers of a Convolutional Neural Network (CNN), enhancing the learning process. Additionally, a feature selection method based on Lion with Grey Wolf Optimization (LGWO) is employed to enhance classifier efficiency and accuracy. The proposed approach classifies ovarian tumors as benign or malignant using the Bi-LSTM model, evaluated on the Ovarian Cancer University of Kaggle dataset. Results showcase the effectiveness of the method, achieving remarkable performance metrics, including 98% accuracy, 99.7% recall, 93% precision, and an impressive F1 score of 98%. The proposed method&#39;s efficiency is validated through comparison with validation data, demonstrating consistent and reliable results. The study&#39;s significance lies in its potential to provide an accurate and efficient solution for early ovarian cancer detection. By leveraging deep learning and optimization, the proposed method outperforms existing approaches, highlighting the promise of advanced computational techniques in improving healthcare outcomes. The findings contribute to the field of ovarian cancer detection, emphasizing the value of integrating cutting-edge technologies for effective medical diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_62-Feline_Wolf_Net_A_Hybrid_Lion_Grey_Wolf_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Diabetic Retinopathy Detection Through Machine Learning with Restricted Boltzmann Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140961</link>
        <id>10.14569/IJACSA.2023.0140961</id>
        <doi>10.14569/IJACSA.2023.0140961</doi>
        <lastModDate>2023-09-30T10:43:04.8200000+00:00</lastModDate>
        
        <creator>Venkateswara Rao Naramala</creator>
        
        <creator>B. Anjanee Kumar</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Annapurna Mishra</creator>
        
        <creator>Shaikh Abdul Hannan</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Optic disc (OD); Optic cup (OC); U-network; restricted Boltzmann machines; squirrel search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Diabetic retinopathy, a persistent eye ailment, is a potentially sight-threatening condition that can lead to blindness if left undetected, so timely diagnosis is critical to prevent irreversible vision loss. However, the traditional method of diagnosing diabetic retinopathy through retinal examination by ophthalmologists is labor-intensive and time-consuming. Additionally, early identification of glaucoma, indicated by the Cup-to-Disc Ratio (CDR), is vital to prevent vision impairment, yet its subtle initial symptoms make timely detection challenging. This research addresses these diagnostic challenges by leveraging machine learning and deep learning techniques. In particular, the study introduces the application of Restricted Boltzmann Machines (RBM) to the domain. By extracting and analyzing multiple features from retinal images, the proposed model aims to accurately categorize anomalies and automate the diagnostic process. The investigation further advances with the utilization of a U-network model for optic segmentation and employs the Squirrel Search Algorithm (SSA) to fine-tune RBM hyperparameters for optimal performance. The experimental evaluation conducted on the RIM-ONE DL dataset demonstrates the efficacy of the proposed methodology. A comprehensive comparison of results against previous prediction models is carried out, assessing accuracy, cross-validation, and Receiver Operating Characteristic (ROC) metrics. Remarkably, the proposed model achieves an accuracy value of 99.2% on the RIM-ONE DL dataset. By bridging the gap between automated diagnosis and ophthalmological practice, this research contributes significantly to the medical field. The model&#39;s robust performance and superior accuracy offer a promising avenue to support healthcare professionals in enhancing their decision-making processes, ultimately improving the quality of care for patients with retinal anomalies.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_61-Enhancing_Diabetic_Retinopathy_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Skin Cancer Detection Through an AI-Powered Framework by Integrating African Vulture Optimization with GAN-based Bi-LSTM Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140960</link>
        <id>10.14569/IJACSA.2023.0140960</id>
        <doi>10.14569/IJACSA.2023.0140960</doi>
        <lastModDate>2023-09-30T10:43:04.8030000+00:00</lastModDate>
        
        <creator>N. V. Rajasekhar Reddy</creator>
        
        <creator>Araddhana Arvind Deshmukh</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Sanjiv Rao Godla</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Liz Maribel Robladillo Bravo</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Skin cancer; generative adversarial network; Bi-LSTM; African Vulture Optimisation (AVO); deep learning (DL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Skin cancer is considered one of the most prevalent and severe types of cancer. The main objective is to detect melanoma at an initial stage and save millions of lives. One of the most difficult aspects of developing an effective automatic classification system is the lack of large datasets; the resulting data imbalance and overfitting problems degrade accuracy. In the proposed work, this problem is addressed using a Generative Adversarial Network (GAN) to generate additional training images. Traditional recurrent models attempt to overcome memory constraints through a cyclic link on the hidden layer, as in Long Short-Term Memory networks; however, RNNs suffer from the vanishing gradient issue, which affects learning performance. To overcome these challenges, this work proposes a Bidirectional Long Short-Term Memory (Bi-LSTM) deep learning framework for skin cancer detection. A dataset collected from the International Skin Imaging Collaboration was used for image processing. A novel metaheuristic inspired by the behaviour of African vultures is also proposed: the African Vulture Optimisation Algorithm (AVOA) is designed to select optimal features of skin images. The proposed method achieves an accuracy of 98.5%. This comprehensive framework, encompassing GAN-generated data, a Bi-LSTM architecture, and AVOA-based feature optimization, contributes significantly to enhancing early melanoma detection.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_60-Enhancing_Skin_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SE-RESNET: Monkeypox Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140959</link>
        <id>10.14569/IJACSA.2023.0140959</id>
        <doi>10.14569/IJACSA.2023.0140959</doi>
        <lastModDate>2023-09-30T10:43:04.8030000+00:00</lastModDate>
        
        <creator>Krishnan Thiruppathi</creator>
        
        <creator>Selvakumar K</creator>
        
        <creator>Vairachilai Shenbagavel</creator>
        
        <subject>Squeeze-and-Excitation (SE); monkeypox; poxviridae; prodromal; chickenpox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The monkeypox virus, a species of the Orthopoxvirus genus within the family Poxviridae, is responsible for causing monkeypox. The symptoms of monkeypox last for about two to three weeks, and the infection is often self-limiting, although severe cases may occur. Recently, the case fatality rate has been in the region of 3-6%. When developing a clinical diagnosis, it is vital to consider other rash diseases such as chickenpox, measles, bacterial skin infections, scabies, syphilis, and medication-associated allergies. Pathology at the symptomatic stage of the sickness could aid in distinguishing monkeypox from chickenpox or smallpox. The dataset’s machine learning model should not be used for clinical diagnosis, but rather for developing a new model to identify the illness quickly. The grayscale versions of the original photos in the Monkeypox grey file could enable faster training. The “Squeeze-and-Excitation” (SE) block adaptively re-calibrates channel-wise feature responses; to do this, cross-channel dependency must be explicitly modeled. This work demonstrates how these building pieces may be layered to produce SE-ResNet designs that generalize very well on monkeypox image sets, and shows that employing SE blocks significantly enhances the performance of current state-of-the-art CNNs while incurring just a little computational cost.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_59-SE_RESNET_Monkeypox_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberbullying Detection Based on Hybrid Ensemble Method using Deep Learning Technique in Bangla Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140958</link>
        <id>10.14569/IJACSA.2023.0140958</id>
        <doi>10.14569/IJACSA.2023.0140958</doi>
        <lastModDate>2023-09-30T10:43:04.7870000+00:00</lastModDate>
        
        <creator>Md. Tofael Ahmed</creator>
        
        <creator>Afroza Sharmin Urmi</creator>
        
        <creator>Maqsudur Rahman</creator>
        
        <creator>Abu Zafor Muhammad Touhidul Islam</creator>
        
        <creator>Dipankar Das</creator>
        
        <creator>Md. Golam Rashed</creator>
        
        <subject>Bangla dataset; cyberbullying; exploratory data analysis; machine learning; deep learning; hybrid ensemble method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Globalization is certainly a blessing for us. Still, it has also brought things that not only create social insecurities but also diminish our mental health, and one of them is cyberbullying. Cyberbullying is not only a misuse of technology but also a form of social harassment. Research on cyberbullying detection has gained increasing attention in many languages, including Bengali; however, the amount of work on the Bengali language compared to others is insignificant. Here we introduce a hybrid ensemble method using a voting classifier for Bangla cyberbullying detection and compare it with traditional Machine Learning and Deep Learning classifiers. Before implementation, Exploratory Data Analysis was performed on the dataset to gain better insight. Many papers published for other languages show that hybrid approaches provide better outcomes than traditional methods. Thus, we propose a well-motivated method for cyberbullying detection on a Bangla dataset using a hybrid ensemble method with a voting classifier. The overall deployment consists of three Machine Learning classifiers, three Deep Learning classifiers, and a hybrid approach using the voting classifier. Finally, the hybrid ensemble method yields the best performance, with an accuracy of 85%, compared with the other Machine and Deep Learning methods.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_58-Cyberbullying_Detection_Based_on_Hybrid_Ensemble_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Hypermodel using Transfer Learning to Detect DDoS Attacks in the Cloud Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140957</link>
        <id>10.14569/IJACSA.2023.0140957</id>
        <doi>10.14569/IJACSA.2023.0140957</doi>
        <lastModDate>2023-09-30T10:43:04.7730000+00:00</lastModDate>
        
        <creator>Marram Amitha</creator>
        
        <creator>Muktevi Srivenkatesh</creator>
        
        <subject>Machine learning; deep learning; support vector machine; k-nearest neighbors algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The present research proposes a detection approach that analyzes the performance of various algorithms used for more accurate detection of Distributed Denial-of-Service (DDoS) attacks in cloud computing. From the start, this study uses machine learning and deep learning to explore how information security has evolved in recent years. The deployment of intrusion detection systems and distributed denial-of-service attacks are then discussed, and the most common DDoS attack types are summarized. In addition, this study reviews the existing approaches and techniques for DDoS attack detection. Various pre-processing subsystems as well as attribute-based selection techniques for DDoS detection are briefly described. The proposed intrusion detection system uses transfer learning for detecting DDoS attacks in networks. The system is built on the SDN dataset for network intrusion detection, which contains 23 features suitable for detecting network intrusions and consists of training and testing data for detecting attacks in the network. The detection and prevention subsystems based on ML and DL strategies are briefly discussed, and the proposed deep learning model for DDoS attack detection in cloud storage applications is explained. After that, various preprocessing strategies employed in the detection are described, among them data rebalancing, data cleaning, data splitting, and data normalization such as min-max normalization. The author created a hypermodel that combines the parameters of baseline classifiers such as Support Vector Machine, K-Nearest Neighbors, XGBoost, and various other machine learning models. The proposed model gives very good accuracy compared to other machine learning models.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_57-Design_of_a_Hypermodel_using_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Content-based Image Retrieval System using Logical AND and OR Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140955</link>
        <id>10.14569/IJACSA.2023.0140955</id>
        <doi>10.14569/IJACSA.2023.0140955</doi>
        <lastModDate>2023-09-30T10:43:04.7570000+00:00</lastModDate>
        
        <creator>Ranjana Battur</creator>
        
        <creator>Jagadisha Narayana</creator>
        
        <subject>Medical images; support vector machine; fuzzy logic; X-ray images; time complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This paper proposes an innovative ensemble learning framework for classifying medical images using Support Vector Machine (SVM) and Fuzzy Logic classifiers. The proposed approach utilizes logical AND and OR operations to combine the predictions from the two classifiers, aiming to capitalize on the strengths of each. The SVM and Fuzzy Logic classifiers were independently trained on a comprehensive database of medical images comprising various types of X-ray images. The logical OR operation was then used to create an ensemble classifier that outputs a positive classification if either of the individual classifiers does so. On the other hand, the logical AND operation was used to construct an ensemble classifier that outputs a positive classification only if both individual classifiers do so. The proposed method aims to increase sensitivity and precision by capturing as many positive instances as possible, thereby reducing false positives. The scope of the proposed work is validated in terms of overall time complexity and retrieval accuracy. The simulation outcome shows promising results, with an accuracy score of 98.36 and a time of 1.8 seconds to retrieve all the images in the query database.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_55-A_Novel_Approach_for_Content_based_Image_Retrieval_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Imperative Role of Digital Twin in the Management of Hospitality Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140956</link>
        <id>10.14569/IJACSA.2023.0140956</id>
        <doi>10.14569/IJACSA.2023.0140956</doi>
        <lastModDate>2023-09-30T10:43:04.7570000+00:00</lastModDate>
        
        <creator>Ramnarayan </creator>
        
        <creator>Rajesh Singh</creator>
        
        <creator>Anita Gehlot</creator>
        
        <creator>Kapil Joshi</creator>
        
        <creator>Ashraf Osman Ibrahim</creator>
        
        <creator>Anas W. Abulfaraj</creator>
        
        <creator>Faisal Binzagr</creator>
        
        <creator>Salil Bharany</creator>
        
        <subject>Hospitality industry; digital twin; sensor and actuator; IoT; augment and virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Digital twin implementation enables more effective evaluation and planning, as well as more effective utilization of resources, providing a wealth of knowledge to improve real-time services. Hospitality industry settings utilize digital twin technologies to introduce new ideas with sensors, actuators, and AR/VR, improving production and customer services. Currently, the hospitality industry is focused on creating a fast, virtual space where customers can experience a real world of hospitality. The digital twin of a large hotel facility can be implemented to create both discrete and continuous event simulations in order to precisely conceptualize the events that occur in distinct frameworks. Based on the above facts, the adoption of the digital twin in the hospitality industry has gained significant attention. With this motivation, the study aims to investigate the significance and application of digital twins in the hospitality industry for establishing innovative digital infrastructure. In addition, the study discusses different elements that are significant for the digital twin. Finally, the article summarizes and offers vital recommendations for the adoption of the digital twin in the hospitality industry.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_56-Imperative_Role_of_Digital_Twin_in_the_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Heart Disease Prediction System with Applications in Jordanian Hospitals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140954</link>
        <id>10.14569/IJACSA.2023.0140954</id>
        <doi>10.14569/IJACSA.2023.0140954</doi>
        <lastModDate>2023-09-30T10:43:04.7400000+00:00</lastModDate>
        
        <creator>Mohammad Subhi Al-Batah</creator>
        
        <creator>Mowafaq Salem Alzboon</creator>
        
        <creator>Raed Alazaidah</creator>
        
        <subject>Heart disease; machine learning; predictive models; classification; clinical data; predictions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Heart disease is the leading cause of mortality worldwide. Early identification and prediction can play a crucial role in preventing and treating it. Based on patient data, machine learning techniques may be used to construct cardiac disease prediction models. This work aims to investigate the usage of machine learning models for heart disease prediction utilizing a publicly available dataset. The dataset contains patient information on clinical and demographic characteristics and the presence or absence of cardiac disease. Based on classification performance, many machine learning methods were tested and compared. The findings reveal that machine learning models can predict cardiac disease with promising accuracy and AUC values. Furthermore, the developed system is used to examine some Jordanian patients, and the prediction results are satisfactory. The study&#39;s findings might have far-reaching consequences for the early identification and prevention of heart disease, as well as for improving patient outcomes and lowering healthcare expenditures.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_54-Intelligent_Heart_Disease_Prediction_System_with_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>K-Means Extensions for Clustering Categorical Data on Concept Lattice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140953</link>
        <id>10.14569/IJACSA.2023.0140953</id>
        <doi>10.14569/IJACSA.2023.0140953</doi>
        <lastModDate>2023-09-30T10:43:04.7270000+00:00</lastModDate>
        
        <creator>Mohammed Alwersh</creator>
        
        <creator>L&#225;szl&#243; Kov&#225;cs</creator>
        
        <subject>Clustering algorithms; categorical data; k-means; cluster analysis; formal concept analysis; concept lattice</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Formal Concept Analysis (FCA) is a key tool in knowledge discovery, representing data relationships through concept lattices. However, the complexity of these lattices often hinders interpretation, prompting the need for innovative solutions. In this context, the study proposes clustering formal concepts within a concept lattice, ultimately aiming to minimize lattice size. To address this, the study introduces two novel extensions of the k-means algorithm to handle categorical data efficiently, a crucial aspect of the FCA framework. These extensions, namely K-means Dijkstra on Lattice (KDL) and K-means Vector on Lattice (KVL), are designed to minimize the concept lattice size. However, the current study focuses on introducing and refining these new methods, laying the groundwork for the future goal of lattice size reduction. KDL utilizes FCA to build a graph of formal concepts and their relationships, applying a modified Dijkstra algorithm for distance measurement, thus replacing the Euclidean distance in traditional k-means. The defined centroids are formal concepts with minimal intracluster distances, enabling effective categorical data clustering. In contrast, the KVL extension transforms formal concepts into numerical vectors to leverage the scalability offered by traditional k-means, potentially at the cost of clustering quality due to oversight of the data&#39;s inherent hierarchy. After rigorous testing, KDL and KVL proved robust in managing categorical data. The introduction and demonstration of these novel techniques lay the groundwork for future research, marking a significant stride toward addressing current challenges in categorical data clustering within the FCA framework.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_53-K_Means_Extensions_for_Clustering_Categorical_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texton Tri-alley Separable Feature Merging (TTSFM) Capsule Network for Brain Tumor Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140951</link>
        <id>10.14569/IJACSA.2023.0140951</id>
        <doi>10.14569/IJACSA.2023.0140951</doi>
        <lastModDate>2023-09-30T10:43:04.7100000+00:00</lastModDate>
        
        <creator>Vivian Akoto-Adjepong</creator>
        
        <creator>Obed Appiah</creator>
        
        <creator>Peter Appiahene</creator>
        
        <creator>Patrick Kwabena Mensah</creator>
        
        <subject>Texton; separable convolutions; capsule neural network; dynamic routing; brain tumor; brain tumor detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Brain tumors represent one of the most perilous and lethal forms of tumors in both children and adults. Early detection and treatment of such malignant disease types may reduce the mortality rate. However, manual procedures can be used to diagnose such disorders, and this process necessitates a careful, in-depth analysis which is prone to errors, tedious for health professionals, and time-consuming. Therefore, this research aims to design a Texton Tri-alley Separable Feature Merging (TTSFM) Capsule Network based on dynamic routing, suitable for the automatic detection of brain tumors. The TTSFM Capsule Network’s Texton layer helps to extract important features from the input image, and the separable convolutions coupled with the use of fewer filters and kernel sizes help to reduce the time for training, the size of the model on disk, and the number of trainable parameters generated by the model. The model’s evaluation results on the brain tumor dataset consisting of four classes show better performance than the traditional capsule network, and are comparable to the state-of-the-art models, with an overall accuracy of 97.64%, specificity of 99.24%, precision of 97.43%, sensitivity of 97.45%, f1-score of 97.44%, ROC rate of 99.50%, PR rate of 99.00%. The components and properties of the proposed model make the model deployable on devices with low memory like mobile devices. This model with better performance can assist physicians in the diagnosis of brain tumors.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_51-Texton_Tri_alley_Separable_Feature_Merging_TTSFM_Capsule_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unraveling Ransomware: Detecting Threats with Advanced Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140952</link>
        <id>10.14569/IJACSA.2023.0140952</id>
        <doi>10.14569/IJACSA.2023.0140952</doi>
        <lastModDate>2023-09-30T10:43:04.7100000+00:00</lastModDate>
        
        <creator>Karam Hammadeh</creator>
        
        <creator>M. Kavitha</creator>
        
        <subject>Ransomware; cuckoo sandbox; PEFile; YARA rules; machine learning; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In our contemporary world, the pervasive influence of information technology, computer engineering, and the Internet has undeniably catalyzed innovation, fostering unparalleled economic growth and revolutionizing education. This technological juggernaut, however, has unwittingly ushered in a parallel era of new criminal frontiers, a magnet for hackers and cybercriminals. These malevolent actors exploit the vast expanse of electronic devices and interconnected networks to perpetrate an array of cybercrimes, and among these insidious digital threats, ransomware reigns supreme. Ransomware, characterized by its ominous ability to encrypt victims&#39; data and extort payment for its release, stands as a dire menace to individuals and organizations alike. Operating with stealth and propagating with alarming alacrity through digital networks, ransomware has emerged as a formidable adversary in the digital age. This research paper focuses on the evolving stages of ransomware, driven by cutting-edge technologies, and proposes essential methods and ideas to detect and combat this menace. The proposed methodology, anchored in Cuckoo Sandbox, PE file feature extraction, and YARA rules, orchestrates three crucial phases: data collection, feature selection, and data preprocessing, all harmonizing to strengthen our defense against this concealed cyber menace. This paper contributes to the development of effective solutions for detecting and mitigating this hidden and insidious cyber threat. This work involves the application of multiple machine learning algorithms, including LSTM, which achieves an impressive accuracy of 99% in identifying ransomware attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_52-Unraveling_Ransomware_Detecting_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multispectral Ariel Image Stitching using Decortification and EEG Signal Extraction Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140950</link>
        <id>10.14569/IJACSA.2023.0140950</id>
        <doi>10.14569/IJACSA.2023.0140950</doi>
        <lastModDate>2023-09-30T10:43:04.6930000+00:00</lastModDate>
        
        <creator>Mukul Manohar S</creator>
        
        <creator>K N Muralidhara</creator>
        
        <subject>EEG signal extraction; feature extraction; image stitching; multispectral image; UAV video</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>UAV videos and other remote-sensing innovations have increased the demand for multispectral image stitching methods, which can gather data on a broad area by looking at different aspects of the same scene. For large-scale hyperspectral remote-sensing images, state-of-the-art techniques frequently have accumulating errors and high processing costs. However, this research paper aims to produce high-precision multispectral mapping with minimal spatial and spectral distortion. The stitching framework was created in the following manner: First, the UAV collects the raw input data, which is then labeled as a signal using a connected component labeling strategy that correlates to each pixel or label using the EEG (Alpha, Beta, Theta, and Delta) technique. Next, the feature extraction process follows a novel decortication Hydrolysis CNN approach which extracts active and passive characteristics. After feature extraction, a novel chromatographic classification approach is employed for separating features without overfitting. Finally, a novel yield mapping georeferencing technique is employed for all images stitched together with proper alignment and segmented overlapping fields of view. The suggested deep learning model is an effective method for real-time mosaic image feature extraction which is faster by an average of 11.5 times compared to existing approaches as noted on the samples for experimental analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_50-A_Multispectral_Ariel_Image_Stitching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing IoT Devices in e-Health using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140949</link>
        <id>10.14569/IJACSA.2023.0140949</id>
        <doi>10.14569/IJACSA.2023.0140949</doi>
        <lastModDate>2023-09-30T10:43:04.6800000+00:00</lastModDate>
        
        <creator>Haifa Khaled Alanazi</creator>
        
        <creator>A. A. Abd El-Aziz</creator>
        
        <creator>Hedi Hamdi</creator>
        
        <subject>IoT; ML; DL; attack classification; e-health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The Internet of Things (IoT) has gained significance over the past several years and is currently one of the most important technologies. The capacity to link everyday objects, such as home appliances, medical equipment, autos, and baby monitors, to the internet via embedded devices with a minimum of human interaction has made continuous communication between people, processes, and things feasible. IoT devices have established themselves in many sectors, of which electronic health is considered the most important. The IoT environment handles a large amount of private and sensitive health data that must be kept safe from tampering or theft. If safety precautions are not implemented, these dangers and assaults against IoT devices in the health sector might completely destroy this industry. Detecting security threats to an IoT environment requires sophisticated technology; these attacks can be identified using machine learning (ML) techniques, which can also predict snooping behavior based on unidentified patterns. In this paper, it is proposed to apply five strategies to detect attacks in network traffic based on the NF-ToN-IoT dataset. The classifiers used are Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. These algorithms have been used instead of a centralized method to deliver compact security systems for IoT devices. The dataset was pre-processed to eliminate extraneous or missing data, and then a feature engineering approach was used to extract key features. Applying each of the listed classifiers yielded a maximum classification accuracy of 98%, achieved by the RF model, which compares favorably with other work.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_49-Securing_IoT_Devices_in_e_Health_using_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Performance Analysis of Point CNN and Mask R-CNN for Building Extraction from Multispectral LiDAR Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140948</link>
        <id>10.14569/IJACSA.2023.0140948</id>
        <doi>10.14569/IJACSA.2023.0140948</doi>
        <lastModDate>2023-09-30T10:43:04.6800000+00:00</lastModDate>
        
        <creator>Asmaa A. Mandouh</creator>
        
        <creator>Mahmoud El Nokrashy O. Ali</creator>
        
        <creator>Mostafa H.A. Mohamed</creator>
        
        <creator>Lamyaa Gamal EL-Deen Taha</creator>
        
        <creator>Sayed A. Mohamed</creator>
        
        <subject>Multispectral LiDAR; Mask R-CNN; Point CNN; deep learning; building extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The extraction of buildings from multispectral Light Detection and Ranging (LiDAR) data holds significance in various domains such as urban planning, disaster response, and environmental monitoring. State-of-the-art deep learning models, including Point Convolutional Neural Network (Point CNN) and Mask Region-based Convolutional Neural Network (Mask R-CNN), have effectively addressed this particular task. Data and application characteristics affect model performance. This research compares multispectral LiDAR building extraction models, Point CNN and Mask R-CNN. Models are tested for accuracy, efficiency, and capacity to handle irregularly spaced point clouds using multispectral LiDAR data. Point CNN extracts buildings from multispectral LiDAR data more accurately and efficiently than Mask R-CNN. CNN-based point cloud feature extraction avoids preprocessing like voxelization, improving accuracy and processing speed over Mask R-CNN. CNNs can handle LiDAR point clouds with variable spacing. Mask R-CNN outperforms Point CNN in some cases. Mask R-CNN uses image-like data instead of point clouds, making it better at detecting and categorizing objects from different angles. The study emphasizes selecting the right deep learning model for building extraction from multispectral LiDAR data. Point CNN or Mask R-CNN for accurate building extraction depends on the application. For building extraction from multispectral LiDAR data, two approaches were compared utilizing precision, recall, and F1 score. The point-CNN model outperformed Mask R-CNN. The point-CNN model had 93.40% precision, 92.34% recall, and 92.72% F1 score. Mask R-CNN has moderate precision, recall, and F1.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_48-A_Performance_Analysis_of_Point_CNN_and_Mask_R_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Predicting Academic Success using Classification Method through Filter-Based Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140947</link>
        <id>10.14569/IJACSA.2023.0140947</id>
        <doi>10.14569/IJACSA.2023.0140947</doi>
        <lastModDate>2023-09-30T10:43:04.6630000+00:00</lastModDate>
        
        <creator>Dafid</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <subject>Academic success; framework; filter-based feature selection; classifier; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Students’ academic success is still a serious problem faced by higher education institutions worldwide. A strategy is needed to increase the students’ academic performance and prevent students from failing. Early, accurate information about poor academic performance is a must and can be obtained by constructing a prediction model. Therefore, an effective technique is required to provide accurate information and improve the accuracy of the prediction model. This study evaluates the filter-based feature selection, especially the filter-based feature ranking techniques, for predicting academic success. It provides a comparative study of filter-based feature selection techniques for determining the type of features (redundant, irrelevant, relevant) that affect the accuracy of the prediction models. Furthermore, this study proposes a novel feature selection technique based on attribute dependency for improving the performance of the prediction model through a framework. The experimental results show that the proposed technique significantly improved the accuracy of the prediction models by 2-8%, outperforming the existing techniques, and the Decision Tree classifier performs best, with an accuracy score of 92.64%.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_47-A_Framework_for_Predicting_Academic_Success.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of the Impact of the Internet of Things Integration on Competition Among 3PLs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140946</link>
        <id>10.14569/IJACSA.2023.0140946</id>
        <doi>10.14569/IJACSA.2023.0140946</doi>
        <lastModDate>2023-09-30T10:43:04.6470000+00:00</lastModDate>
        
        <creator>Kenza Izikki</creator>
        
        <creator>Mustapha Hlyal</creator>
        
        <creator>Aziz Ait Bassou</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>Internet of things; third party logistics; game theory; cournot duopoly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The Third-Party Logistics (3PL) industry plays an important role in modern supply chains, facilitating the efficient movement of goods and optimizing logistics operations. With the advent of advanced technologies, such as the Internet of Things (IoT), automation, artificial intelligence, and data analytics, the landscape of the 3PL industry has undergone significant transformation. With their tracking ability and real-time data capabilities, IoT technologies have gained great attention from researchers and practitioners and have been widely used in the supply chain sector. This paper employs the Cournot duopoly model within the framework of game theory to investigate the profound implications of the use of IoT technology on competition and operational strategies within the 3PL sector. In this study, we construct a Cournot duopoly model focusing on the assessment of the service level of third party logistics (3PL) within the market. We consider variables such as service level and the IoT adoption rates as crucial factors influencing the behavior of these firms. Through numerical simulations we quantify the impact of the technology on the overall profitability for both firms. Our findings have demonstrated the positive impact of integrating IoT on enhancing the profits of the 3PL firms. Additionally, the IoT adoption rates and the overall IoT integration costs play a critical role in determining market equilibrium and profit distribution.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_46-Study_of_the_Impact_of_the_Internet_of_Things_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Machine Learning Algorithms for Phishing Website Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140945</link>
        <id>10.14569/IJACSA.2023.0140945</id>
        <doi>10.14569/IJACSA.2023.0140945</doi>
        <lastModDate>2023-09-30T10:43:04.6470000+00:00</lastModDate>
        
        <creator>Kamal Omari</creator>
        
        <subject>Phishing detection; cybersecurity; machine learning; Gradient Boosting; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Phishing, a prevalent online threat where attackers impersonate legitimate organizations to obtain sensitive information from victims, poses a significant cybersecurity challenge. Recent advancements in phishing detection, particularly machine learning-based methods, have shown promising results in countering these malicious attacks. In this study, we developed and compared seven machine learning models, namely Logistic Regression (LR), k-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naive Bayes (NB), Decision Tree (DT), Random Forest (RF), and Gradient Boosting, to assess their efficiency in detecting phishing domains. Employing the UCI phishing domains dataset as a benchmark, we rigorously evaluated the performance of these models. Our findings indicate that the Gradient Boosting-based model, in conjunction with the Random Forest, exhibits superior performance compared to the other techniques and aligns with existing solutions in the literature. Consequently, it emerges as the most accurate and effective approach for detecting phishing domains.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_45-Comparative_Study_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Plagiarism Detection Through Advanced Natural Language Processing and E-BERT Framework of the Smith-Waterman Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140944</link>
        <id>10.14569/IJACSA.2023.0140944</id>
        <doi>10.14569/IJACSA.2023.0140944</doi>
        <lastModDate>2023-09-30T10:43:04.6300000+00:00</lastModDate>
        
        <creator>Franciskus Antonius</creator>
        
        <creator>Myagmarsuren Orosoo</creator>
        
        <creator>Aanandha Saravanan K</creator>
        
        <creator>Indrajit Patra</creator>
        
        <creator>Prema S</creator>
        
        <subject>Natural language processing; encoder representation from transformers; document to vector + logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Effective detection has been extremely difficult due to plagiarism&#39;s pervasiveness throughout a variety of fields, including academia and research. Increasingly complex plagiarism detection strategies are being used by people, making traditional approaches ineffective. The assessment of plagiarism involves a comprehensive examination encompassing syntactic, lexical, semantic, and structural facets. In contrast to traditional string-matching techniques, this investigation adopts a sophisticated Natural Language Processing (NLP) framework. The preprocessing phase entails a series of intricate steps ultimately refining the raw text data. The crux of this methodology lies in the integration of two distinct metrics within the Encoder Representation from Transformers (E-BERT) approach, effectively facilitating a granular exploration of textual similarity. Within the realm of NLP, the amalgamation of Deep and Shallow approaches serves as a lens to delve into the intricate nuances of the text, uncovering underlying layers of meaning. The discerning outcomes of this research unveil the remarkable proficiency of Deep NLP in promptly identifying substantial revisions. Integral to this innovation is the novel utilization of the Smith-Waterman algorithm and an English-Spanish dictionary, which contribute to the selection of optimal attributes. Comparative evaluations against alternative models employing distinct encoding methodologies, along with logistic regression as a classifier, underscore the potency of the proposed implementation. The culmination of extensive experimentation substantiates the system&#39;s prowess, boasting an impressive 99.5% accuracy rate in extracting instances of plagiarism. This research serves as a pivotal advancement in the domain of plagiarism detection, ushering in effective and sophisticated methods to combat the growing spectre of unoriginal content.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_44-Enhanced_Plagiarism_Detection_Through_Advanced_Natural_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors and Models Influencing Value Co-Creation in the Supply Chain of Collection Resources for Library Distribution Providers Under Data Ecology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140942</link>
        <id>10.14569/IJACSA.2023.0140942</id>
        <doi>10.14569/IJACSA.2023.0140942</doi>
        <lastModDate>2023-09-30T10:43:04.6170000+00:00</lastModDate>
        
        <creator>Xiaoyun Lin</creator>
        
        <subject>Resource supply chain; data mining; value co-creation; K-Means clustering algorithm; pavilion dispenser</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Under the data ecology, the advancement of relevant technology and the utilization of relevant resources have provided more efficient technical services for various industries. However, with the proliferation of data resources, problems such as information pollution and data redundancy have arisen in the process of supply chain services for collection resources. To solve such problems and enhance the collection resource supply efficiency for librarians, the study uses data mining technology combined with an improved K-Means clustering algorithm to design a value co-creation model of the library collection resource supply chain for librarians under data ecology. The outcomes indicate that on the Wine dataset the traditional K-Means algorithm&#39;s running time ranges from 40 ms to 115 ms, while the improved K-Means algorithm&#39;s running time is stable at 59 ms; on the Iris dataset the traditional algorithm&#39;s running time ranges from 26 ms to 58 ms, while the improved algorithm&#39;s is stable at 53 ms. The clustering accuracy of the improved K-Means algorithm is 98.2% on the Wine dataset, 0.3 percentage points higher than the traditional K-Means algorithm&#39;s 97.9%, and 100% on the Iris dataset, 2.4 percentage points higher than the traditional algorithm&#39;s 97.6%. In summary, the proposed model applies well to the factors influencing value co-creation in the collection resource supply chain for library distribution providers under data ecology.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_42-Factors_and_Models_Influencing_Value_Co_Creation_in_the_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Flexible Manufacturing System based on Virtual Simulation Technology for Building Flexible Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140943</link>
        <id>10.14569/IJACSA.2023.0140943</id>
        <doi>10.14569/IJACSA.2023.0140943</doi>
        <lastModDate>2023-09-30T10:43:04.6170000+00:00</lastModDate>
        
        <creator>Zhangchi Sun</creator>
        
        <subject>Flexible platform; virtual simulation technology; manufacturing control system; multi-objective genetic algorithm; slotting optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Flexible manufacturing systems have become relatively mature in the industrial field, representing the most advanced research achievements in the development of the manufacturing industry. However, universities currently face scarce resources and high costs when building such practical systems, which cannot meet the practical teaching requirements of students across multiple majors. In response to the above issues, this study first designed a flexible manufacturing system from the overall architecture, then introduced and integrated virtual simulation technology, and utilized a multi-objective genetic algorithm for cargo location optimization to improve the work efficiency of the flexible system. The research results indicate that after 213 iterations of the proposed algorithm, the iteration curve of the total objective function value tends to be stable, and the effect of cargo location optimization is relatively ideal. At this time, the total objective function value is 142.5. In addition, as the scale expands, the corresponding number of iterations for the multi-objective genetic algorithm at its maximum scale is 411.2. The application effect of the virtual flexible manufacturing system in practical teaching in universities is good, and visual learning methods can better attract students&#39; attention.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_43-A_Flexible_Manufacturing_System_based_on_Virtual_Simulation_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Residual Convolutional Long Short-term Memory Network for Option Price Prediction Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140941</link>
        <id>10.14569/IJACSA.2023.0140941</id>
        <doi>10.14569/IJACSA.2023.0140941</doi>
        <lastModDate>2023-09-30T10:43:04.6000000+00:00</lastModDate>
        
        <creator>Artur Dossatayev</creator>
        
        <creator>Ainur Manapova</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <subject>Deep learning; CNN; LSTM; prediction; option price</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In the realm of financial markets, the precise prediction of option prices remains a cornerstone for effective portfolio management, risk mitigation, and ensuring overall market equilibrium. Traditional models, notably the Black-Scholes, often encounter challenges in comprehensively integrating the multifaceted interplay of contemporary market variables. Addressing this lacuna, this study elucidates the capabilities of a novel Deep Residual Convolutional Long Short-term Memory (DR-CLSTM) network, meticulously designed to amalgamate the superior feature extraction prowess of Convolutional Neural Networks (CNNs) with the unparalleled temporal sequence discernment of Long Short-term Memory (LSTM) networks, further augmented by deep residual connections. Rigorous evaluations conducted on an expansive dataset, representative of diverse market conditions, showcased the DR-CLSTM&#39;s consistent supremacy in prediction accuracy and computational efficacy over both its traditional and deep learning contemporaries. Crucially, the integration of residual pathways accelerated training convergence rates and provided a formidable defense against the often detrimental vanishing gradient phenomenon. Consequently, this research positions the DR-CLSTM network as a pioneering and formidable contender in the arena of option price forecasting, offering substantive implications for quantitative finance scholars and practitioners alike, and hinting at its potential versatility for broader financial instrument applications and varied market scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_41-Deep_Residual_Convolutional_Long_Short_term_Memory_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Challenges and Impacts of Artificial Intelligence Implementation in Project Management: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140940</link>
        <id>10.14569/IJACSA.2023.0140940</id>
        <doi>10.14569/IJACSA.2023.0140940</doi>
        <lastModDate>2023-09-30T10:43:04.5830000+00:00</lastModDate>
        
        <creator>Muhammad Irfan Hashfi</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <subject>Artificial intelligence; project management; PMBOK process groups; challenge; impact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This paper presents a systematic literature review (SLR) investigating the challenges and impacts of implementing artificial intelligence (AI) in project management, specifically mapping them into the process groups defined in the Project Management Body of Knowledge (PMBOK). The study aims to contribute to the understanding of integrating AI in project management and provides insights into the challenges and impacts within each process group. The SLR methodology was applied, and a total of 34 scientific articles were analyzed. The results and analysis reveal the specific challenges and impacts within each process group. In the Initiating process group, AI tools and analysis techniques address challenges in risk assessment, cost prediction, and decision-making. The Planning process group benefits from various tools and methodologies that improve risk assessment, project selection, cost estimation, resource allocation, and decision-making. The Execution process group emphasizes the importance of advanced tools and techniques in enhancing productivity, resource utilization, cost reduction, and decision-making. The Monitoring and Controlling process group demonstrates the potential of advanced tools in achieving efficiency, cost reduction, improved quality, and informed decision-making. Lastly, the Closing process group emphasizes the importance of utilizing advanced tools to minimize waste, optimize resource utilization, reduce costs, improve quality, and increase project closure success. Overall, this research provides valuable insights and strategies for organizations seeking to implement AI in project management, thereby enhancing the potential for success across the PMBOK process groups.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_40-Exploring_the_Challenges_and_Impacts_of_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contributed Factors in Predicting Market Values of Loaned Out Players of English Premier League Clubs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140939</link>
        <id>10.14569/IJACSA.2023.0140939</id>
        <doi>10.14569/IJACSA.2023.0140939</doi>
        <lastModDate>2023-09-30T10:43:04.5830000+00:00</lastModDate>
        
        <creator>Muhammad Daffa Arviano Putra</creator>
        
        <creator>Deshinta Arrova Dewi</creator>
        
        <creator>Wahyuningdiah Trisari Putri</creator>
        
        <creator>Retno Hendrowati</creator>
        
        <creator>Tri Basuki Kurniawan</creator>
        
        <subject>Data analytics; predicting market value; English Premier League; loaned out players; consumption; resource use</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The top tier of the English football league system is the English Premier League (EPL). It has become a global phenomenon, showcasing exhilarating skill, and is one of the most-watched professional football leagues on the planet. In the EPL, a player may temporarily play for a club other than the one to which they are currently contracted; such a player is known as a &quot;loan player&quot;, and each player has a market value. Market value is an estimate of how much a player costs when a club wants to buy the player&#39;s contract from another club. The purpose of this study is to determine the factors that influence a player&#39;s market value at the conclusion of a loan period. Using the Transfermarkt player transfer record dataset for the years 2004 through 2020, we apply linear regression analysis. Our study found that a football player&#39;s market value at the end of a loan period is influenced by several factors, including market value at the beginning, goals, appearances, and total loans.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_39-Contributed_Factors_in_Predicting_Market_Values_of_Loaned_Out_Players.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Capsule Endoscopy Video Summarization using Transfer Learning and Random Forests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140938</link>
        <id>10.14569/IJACSA.2023.0140938</id>
        <doi>10.14569/IJACSA.2023.0140938</doi>
        <lastModDate>2023-09-30T10:43:04.5700000+00:00</lastModDate>
        
        <creator>Parminder Kaur</creator>
        
        <creator>Rakesh Kumar</creator>
        
        <subject>Bayesian optimization; capsule endoscopy; MobileNetV2; random forest classifier; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Wireless Capsule Endoscopy (WCE) is a diagnostic technique for identifying gastrointestinal diseases and abnormalities. Gastroenterologists face a considerable challenge when reviewing a lengthy video to identify a disease. One solution is automated video summarization that produces concise WCE video summaries. This paper presents such a summarization technique, based on transfer learning and a Random Forest classifier. Using a computationally light, pre-trained MobileNetV2 for feature extraction helped deliver results quickly, while the Random Forest effectively handled the small dataset and mitigated the risk of overfitting. The Random Forest&#39;s hyperparameters are optimized through Bayesian optimization. The proposed approach achieved an accuracy of 98.75% in disease prediction while significantly reducing the viewing time for the video summary. Furthermore, it attained an average F-score of 0.98, demonstrating its efficacy and reliability.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_38-Wireless_Capsule_Endoscopy_Video_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cocoa Pods Diseases Detection by MobileNet Confluence and Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140937</link>
        <id>10.14569/IJACSA.2023.0140937</id>
        <doi>10.14569/IJACSA.2023.0140937</doi>
        <lastModDate>2023-09-30T10:43:04.5530000+00:00</lastModDate>
        
        <creator>Diarra MAMADOU</creator>
        
        <creator>Kacoutchy Jean AYIKPA</creator>
        
        <creator>Abou Bakary BALLO</creator>
        
        <creator>Brou M&#233;dard KOUASSI</creator>
        
        <subject>Cocoa pods diseases; MobileNetV2; classification algorithms; machine learning; hybrid method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Cocoa cultivation is of immense importance to the people of C&#244;te d&#39;Ivoire. However, this crop faces significant challenges due to diseases spread by various agents such as bacteria, viruses, and fungi, which cause considerable economic losses. Currently, the methods available to detect these cocoa diseases force farmers to seek the expertise of agronomists for visual inspections and diagnostics, a laborious and complex process. In the search for solutions, many studies have opted for convolutional neural networks (CNNs) to identify diseases in cocoa pods. However, an essential advance is to develop hybrid approaches that combine the advantages of a CNN with sophisticated classification algorithms. This research stands out for its innovative contribution, combining MobileNetV2, a convolutional neural network architecture, with classification algorithms such as Logistic Regression (LR), K-Nearest Neighbors (KNN), Support Vector Machines (SVM), XGBoost, and Random Forest. The study was conducted in two distinct phases: first, each algorithm was evaluated individually; then, performance was measured when MobileNetV2 was merged with the algorithms mentioned. These hybrid approaches complement and amplify MobileNetV2&#39;s capabilities, drawing on its inherent ability to extract key features and enhance information quality. By combining this feature extraction with the classification methods of the other models, the hybrid approaches outperform the individual techniques, with accuracy rates ranging from 72.4% to 86.04%. This performance range underlines the effectiveness of the synergy between MobileNetV2&#39;s extracted features and the classification skills of the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_37-Cocoa_Pods_Diseases_Detection_by_MobileNet_Confluence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Local Search Algorithm for Optimization Route of Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140935</link>
        <id>10.14569/IJACSA.2023.0140935</id>
        <doi>10.14569/IJACSA.2023.0140935</doi>
        <lastModDate>2023-09-30T10:43:04.5370000+00:00</lastModDate>
        
        <creator>Muhammad Khahfi Zuhanda</creator>
        
        <creator>Noriszura Ismail</creator>
        
        <creator>Rezzy Eko Caraka</creator>
        
        <creator>Rahmad Syah</creator>
        
        <creator>Prana Ugiana Gio</creator>
        
        <subject>Travelling Salesman Problem; heuristic algorithms; hybridization techniques; algorithm performance; route optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This study explores the Traveling Salesman Problem (TSP) in Medan City, North Sumatra, Indonesia, analyzing 100 geographical locations for the shortest route determination. Four heuristic algorithms—Nearest Neighbor (NN), Repetitive Nearest Neighbor (RNN), Hybrid NN, and Hybrid RNN—are investigated using RStudio software and benchmarked against various problem instances and TSPLIB data. The results reveal that algorithm performance is contingent on problem size and complexity, with hybrid methods showing promise in producing superior solutions. Statistical analysis confirms the significance of the differences between non-hybrid and hybrid methods, emphasizing the potential for hybridization to enhance solution quality. This research advances our understanding of heuristic algorithm performance in TSP problem-solving and underscores the transformative potential of hybridization strategies in optimization.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_35-Hybrid_Local_Search_Algorithm_for_Optimization_Route.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review of Computational Studies in Aquaponic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140936</link>
        <id>10.14569/IJACSA.2023.0140936</id>
        <doi>10.14569/IJACSA.2023.0140936</doi>
        <lastModDate>2023-09-30T10:43:04.5370000+00:00</lastModDate>
        
        <creator>Khaoula Taji</creator>
        
        <creator>Ali Sohail</creator>
        
        <creator>Yassine Taleb Ahmad</creator>
        
        <creator>Ilyas Ghanimi</creator>
        
        <creator>Sheeba Ilyas</creator>
        
        <creator>Fadoua Ghanimi</creator>
        
        <subject>Aquaponics; machine learning; internet of thing (IoT); message queue telemetry transport; sensors; SMART aquaculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Aquaponics refers to growing aquatic organisms and plants together in a controlled environment: the nutrients used for sustainable plant growth are obtained from the aquatic organisms, and the plants, by absorbing those nutrients, remediate the water for aquatic life. Advances in computational methods play a vital role in every field. The aim of this study is to analyze in depth the computational studies that applied IoT, AI, machine learning, and deep learning to aquaponic systems between 2019 and 2022. The literature survey discusses the proposed methodologies and examines the fundamental research, tools, advantages, limitations, concepts, and results of recent studies on aquaponic systems. The study extracted 41 research articles, selected on the basis of year of publication, title, methodology, citations, paper quality, and abstract, from seven research libraries: Google Scholar, Worldwide Science, IEEE Xplore, Google Books, Refseek, ACM Digital Library, and Science Direct. This survey provides a state-of-the-art foundation for future researchers to address the gaps in previous work efficiently. The results show that IoT-based machine learning and deep learning frameworks achieve state-of-the-art results for nutrient regulation, sensing, monitoring, and control of the aquaponic environment. It is concluded that an ensemble learning model with an efficient dataset still needs to be developed for the aquaponic setting.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_36-A_Systematic_Literature_Review_of_Computational_Studies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strengthening Network Security: Evaluation of Intrusion Detection and Prevention Systems Tools in Networking Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140934</link>
        <id>10.14569/IJACSA.2023.0140934</id>
        <doi>10.14569/IJACSA.2023.0140934</doi>
        <lastModDate>2023-09-30T10:43:04.5230000+00:00</lastModDate>
        
        <creator>Wahyu Adi Prabowo</creator>
        
        <creator>Khusnul Fauziah</creator>
        
        <creator>Aufa Salsabila Nahrowi</creator>
        
        <creator>Muhammad Nur Faiz</creator>
        
        <creator>Arif Wirawan Muhammad</creator>
        
        <subject>IDPS; network security; computer performance; Quality of Service; DDoS attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This study aims to enhance network security by comprehensively evaluating various Intrusion Detection and Prevention System (IDPS) tools in networking systems. The objectives of this research were to assess the performance of different IDPS tools in terms of computer resource utilization and Quality of Service (QoS) metrics, namely delay, jitter, throughput, and packet loss, and to assess their effectiveness in countering Distributed Denial of Service (DDoS) attacks, specifically ICMP Flood and SYN Flood. The evaluation used popular IDPS tools, including Snort, Suricata, Zeek, OSSEC, and Honeypot Cowrie. Real attack scenarios were simulated to measure the tools&#39; performance. The results indicated variations in CPU and RAM usage among the tools, with Snort and Suricata showing efficient resource utilization. Regarding QoS metrics, Snort demonstrated superior performance in delay, jitter, throughput, and packet loss mitigation for both attack types. The implication for further research lies in exploring optimal configurations and fine-tuning of IDPS tools to achieve the best possible network security against DDoS attacks. This research provides valuable insights for network administrators, cybersecurity professionals, and organizations selecting appropriate IDPS tools to fortify their infrastructure against evolving cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_34-Strengthening_Network_Security_Evaluation_of_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Network-based Detection of Road Traffic Objects from Drone-Captured Imagery Focusing on Road Regions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140933</link>
        <id>10.14569/IJACSA.2023.0140933</id>
        <doi>10.14569/IJACSA.2023.0140933</doi>
        <lastModDate>2023-09-30T10:43:04.5070000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <subject>Deep learning; drone images; vehicle detection; road segmentation; data imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This paper presents a novel deep learning approach for the detection of traffic objects from drone-based imagery, focusing predominantly on the rapid and accurate detection of vehicles within road sections. The proposed method consists of two primary components: a road segmentation module and a vehicle detection network. The former leverages a residual unit with skip-connections to effectively extract road areas, while the latter employs a modified version of the YOLOv3 architecture, tailored for high-accuracy and high-speed vehicle detection. To address the issue of data imbalance, which is a pervasive challenge in drone images, this paper utilizes a range of data augmentation techniques to improve the robustness of the proposed model. Experimental results on the UAVDT and UAVid datasets exhibit that the proposed model attains a substantial boost in accuracy and inference speed of vehicle detection in comparison to the existing methods. These findings underscore the potential of the proposed approach for real-world traffic monitoring applications, where rapid and reliable vehicle detection is paramount.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_33-Deep_Neural_Network_based_Detection_of_Road_Traffic_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SFFT-CapsNet: Stacked Fast Fourier Transform for Retina Optical Coherence Tomography Image Classification using Capsule Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140932</link>
        <id>10.14569/IJACSA.2023.0140932</id>
        <doi>10.14569/IJACSA.2023.0140932</doi>
        <lastModDate>2023-09-30T10:43:04.5070000+00:00</lastModDate>
        
        <creator>Michael Opoku</creator>
        
        <creator>Benjamin Asubam Weyori</creator>
        
        <creator>Adebayo Felix Adekoya</creator>
        
        <creator>Kwabena Adu</creator>
        
        <subject>Capsule network; convolution neural network; medical imaging; optical coherence tomography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Manually detecting specific eye-related diseases is challenging for ophthalmologists, especially when screening through large volumes of data. Deep learning models can leverage medical imaging, such as retina Optical Coherence Tomography (OCT) image datasets, to help with this classification task. As a result, many solutions based on deep convolutional neural networks (CNNs) have been proposed. However, limitations such as the inability to recognize pose and pooling operations that reduce the resolution of feature maps have prevented CNNs from achieving the best accuracies. This study proposes a Capsule network (CapsNet) with contrast limited adaptive histogram equalization (CLAHE) and the Fast Fourier transform (FFT), a method we call Stacked Fast Fourier Transform-CapsNet (SFFT-CapsNet). The SFFT is used as an enhancement layer to reduce noise in the retina OCT images. A two-block framework was designed, each block being a three-layer convolutional capsule network. The dataset used for this study was provided by the University of California San Diego (UCSD) and consists of 84,495 OCT images categorized into four classes (NORMAL, CNV, DME, and DRUSEN). Experiments were conducted on the SFFT-CapsNet model, and results were compared with baseline models using accuracy, sensitivity, precision, specificity, and AUC as evaluation metrics. The evaluation indicates that the proposed model outperformed the baseline and state-of-the-art models, achieving the best scores of 99.0%, 100%, and 99.8% on overall accuracy (OA), overall sensitivity (OS), and overall precision (OP), respectively. The results show that the proposed method can be adopted to aid ophthalmologists in retina disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_32-SFFT_CapsNet_Stacked_Fast_Fourier_Transform_for_Retina.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Artifact Removal Strategy and Spatial Attention-based Multiscale CNN for MI Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140931</link>
        <id>10.14569/IJACSA.2023.0140931</id>
        <doi>10.14569/IJACSA.2023.0140931</doi>
        <lastModDate>2023-09-30T10:43:04.4900000+00:00</lastModDate>
        
        <creator>Duan Li</creator>
        
        <creator>Peisen Liu</creator>
        
        <creator>Yongquan Xia</creator>
        
        <subject>Motor Imagery (MI); Brain Computer Interface (BCI); EEG signal; artifact removal; spatial attention; Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The brain-computer interface (BCI) based on motor imagery (MI) is a promising technology aimed at assisting individuals with motor impairments in regaining their motor abilities by capturing brain signals during specific tasks. However, non-invasive electroencephalogram (EEG) signals collected using EEG caps often contain large numbers of artifacts. Automatically and effectively removing these artifacts while preserving task-related brain components is a key issue for MI decoding. Additionally, multi-channel EEG signals encompass temporal-, frequency-, and spatial-domain features. Although deep learning has achieved good results in extracting features and decoding motor imagery EEG (MI-EEG) signals, obtaining a high-performance MI network that optimally matches feature extraction and classification remains challenging. In this study, we propose a scheme that combines a novel automatic artifact removal strategy with a spatial attention-based multiscale CNN (SA-MSCNN). This work obtained independent component analysis (ICA) weights from the first subject in the dataset and used K-means clustering to determine the best feature combination, which was then applied to the other subjects for artifact removal. Additionally, this work designed an SA-MSCNN comprising multiscale convolution modules capable of extracting information from multiple frequency bands, spatial attention modules that weight spatial information, and separable convolution modules that reduce feature information. The performance of the proposed model was validated on a real-world public dataset, BCI Competition IV dataset 2a, where the method achieved an average accuracy of 79.83%. Ablation experiments demonstrate the effectiveness of the proposed artifact removal method and the SA-MSCNN network, and the results are compared with outstanding models and state-of-the-art (SOTA) studies. The results confirm the effectiveness of the proposed method and provide a theoretical and experimental foundation for the development of new MI-BCI systems, which can help people with disabilities regain their independence and improve their quality of life.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_31-A_Novel_Artifact_Removal_Strategy_and_Spatial_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Brain Tumor Detection and Classification in MRI Scans using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140929</link>
        <id>10.14569/IJACSA.2023.0140929</id>
        <doi>10.14569/IJACSA.2023.0140929</doi>
        <lastModDate>2023-09-30T10:43:04.4770000+00:00</lastModDate>
        
        <creator>Ruqsar Zaitoon</creator>
        
        <creator>Hussain Syed</creator>
        
        <subject>Multi-layer Convolutional Neural Networks (CNNs); MRI images; tumor segmentation and classification; deep learning; learning rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Tumor detection is one of the most critical and challenging tasks in medical image processing, due to the risk of incorrect prediction and diagnosis when human-aided categorization is used for cancer cell identification. Analyzing the input data is labor-intensive, particularly for low-quality scans, because of background, contrast, noise, texture, and data volume; when there are many input images to analyze, the task becomes more onerous still. It is difficult to distinguish tumor areas in raw MRI scans because tumors vary in appearance and superficially resemble normal tissue. Deep learning techniques are widely applied to medical images to delineate tumor contours and high-intensity areas in input images. This automated method is proposed to enable timely diagnosis and appropriate treatment with less human involvement, and to improve detection and classification accuracy. The proposed work identifies and classifies tumors in 2D MRI scans of the brain. The dataset used contains images with and without tumors of varied sizes, locations, and forms, with different image intensities and textures. In this paper, multi-layer Convolutional Neural Network (CNN) architectures are implemented, and two main experiments assess the accuracy and performance of the model: first, a five-layer CNN architecture with two different split ratios; second, a six-layer CNN architecture with two different split ratios. In addition, image pre-processing and hyper-parameter tuning were performed to improve the classification accuracy. The results show that the five-layer CNN architecture outperforms the six-layer architecture. When compared with state-of-the-art methods, the proposed segmentation and classification model performs better, achieving an accuracy of 99.87%.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_29-Enhanced_Brain_Tumor_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Hyperparameter Tuning of Image Classification with PyCaret</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140930</link>
        <id>10.14569/IJACSA.2023.0140930</id>
        <doi>10.14569/IJACSA.2023.0140930</doi>
        <lastModDate>2023-09-30T10:43:04.4770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Jin Shimazoe</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>PyCaret; extra trees classifier; AUC; gini; entropy; feature split; ROC curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>A method for hyperparameter tuning of image classification with PyCaret is proposed. The application example compares 14 classification methods and confirms that the Extra Trees Classifier has the best performance among them, with AUC=0.978, Recall=0.879, Precision=0.969, F1=0.912, and Time=0.609. The Extra Trees Classifier produces a large number of decision trees, similar to the random forest algorithm, but with random sampling for each tree without replacement. This creates a dataset for each tree containing unique samples, and a certain number of features are also randomly selected for each tree from the ensemble feature set. The most important and unique property of the Extra Trees Classifier is that the feature split values are chosen randomly: instead of using Gini or entropy to compute locally optimal split values, the algorithm selects split values at random. This makes the trees diverse and uncorrelated. Therefore, its classification performance is considered better than that of other classification methods. Parameter tuning of the Extra Trees Classifier was performed, and training performance, test performance, the ROC curve, accuracy-rate characteristics, etc. were evaluated.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_30-Method_for_Hyperparameter_Tuning_of_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Testing of Memorable Word in Security Enhancing in e-Government and e-Financial Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140928</link>
        <id>10.14569/IJACSA.2023.0140928</id>
        <doi>10.14569/IJACSA.2023.0140928</doi>
        <lastModDate>2023-09-30T10:43:04.4600000+00:00</lastModDate>
        
        <creator>Hanan Alotaibi</creator>
        
        <creator>Dania Aljeaid</creator>
        
        <creator>Amal Alharbi</creator>
        
        <subject>Security; usability testing; two factor authentication; one time password; memorable word</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Most applications increase their security by adding an extra layer to the login process using two-factor authentication (2FA). In Saudi Arabia, the One-Time Password (OTP), a form of 2FA, is the most common method used when users log in to their accounts. However, some issues have emerged with using OTP as 2FA; these issues, drawn from previous research, were investigated in this study. The study also proposed a new method of account authentication, the Memorable Word (MW). The MW is a second, short password of which the user enters a certain number of characters instead of the whole password. The study conducted usability testing to compare the two 2FA methods, OTP and MW. Sixty participants logged into a simulated website using both authentication methods and then completed a questionnaire. The analysis of the collected data showed a favourable opinion of the MW method.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_28-Usability_Testing_of_Memorable_Word_in_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperparameter Tuning of Semi-Supervised Learning for Indonesian Text Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140927</link>
        <id>10.14569/IJACSA.2023.0140927</id>
        <doi>10.14569/IJACSA.2023.0140927</doi>
        <lastModDate>2023-09-30T10:43:04.4430000+00:00</lastModDate>
        
        <creator>Siti Khomsah</creator>
        
        <creator>Nur Heri Cahyana</creator>
        
        <creator>Agus Sasmito Aribowo</creator>
        
        <subject>Text annotation; semi-supervised; parameter-tuning; grid search; random search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>A crucial issue in sentiment analysis is the annotation task involved in data labeling. This critical step is typically performed by linguists, as the nuanced meaning of text significantly influences its contextual interpretation. For large volumes of data, annotation is time-consuming and financially burdensome. Addressing these challenges, semi-supervised learning (SSL) annotation, which integrates human annotators and artificial intelligence algorithms, emerges as a potent solution. Building an accurate SSL pipeline requires exploring the best architecture, including the combination of machine learning algorithm and labeling mechanism. This research aims to construct a semi-supervised text annotation model by tuning the parameters of the machine learning algorithm to obtain the most accurate model. This study employed a Support Vector Machine and a Random Forest algorithm to build the semi-supervised annotation, with Grid Search and Random Search used to tune their parameters. The semi-supervised annotation model was applied to annotate Indonesian texts. The outcomes show that hyperparameter tuning enhances SSL performance, surpassing the performance achieved with default parameters. The experiments also show that SSL annotation using a Support Vector Machine tuned by Grid Search and Random Search is more robust than the Random Forest algorithm. Hyperparameter tuning is also robust to training data that contains many manual labeling errors by experts.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_27-Hyperparameter_Tuning_of_Semi_Supervised_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>He and She in Video Games: Impact of Gender on Video Game Participation and Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140926</link>
        <id>10.14569/IJACSA.2023.0140926</id>
        <doi>10.14569/IJACSA.2023.0140926</doi>
        <lastModDate>2023-09-30T10:43:04.4270000+00:00</lastModDate>
        
        <creator>Deena Alghamdi</creator>
        
        <subject>College students; gender differences; KSA; video games</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Playing video games is now considered one of the day-to-day activities of many adolescents and young people. This research studies the gender impact on video game participation and perspectives among college students in the Kingdom of Saudi Arabia (KSA). The data were collected by first conducting discussions involving four focus groups with a total of 26 participants to explore the topic. An online questionnaire was then distributed, and a total of 2,756 responses were received. The analysis of the data shows a clear impact of gender on the playing practices adopted, perceptions towards the pros and cons of video games, and the most used consoles and popular games. However, the practices and perspectives of male and female players did not differ regarding bullying in video games. The findings of this study can advance the understanding of this subject, and game developers who are targeting the KSA game market can use the results as the basis for developing games that are more suitable for the players in that country.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_26-He_and_She_in_Video_Games_Impact_of_Gendera.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Improved YOLO-X Model for Tomato Disease Severity Detection using Field Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140925</link>
        <id>10.14569/IJACSA.2023.0140925</id>
        <doi>10.14569/IJACSA.2023.0140925</doi>
        <lastModDate>2023-09-30T10:43:04.4130000+00:00</lastModDate>
        
        <creator>Rajasree R</creator>
        
        <subject>Convolutional neural network; deep learning; object classification; plant disease detection; spatial pyramid pooling; YOLOX</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2023.0140925.retraction</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_25-Improved_YOLO_X_Model_for_Tomato_Disease_Severity_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LAD-YOLO: A Lightweight YOLOv5 Network for Surface Defect Detection on Aluminum Profiles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140924</link>
        <id>10.14569/IJACSA.2023.0140924</id>
        <doi>10.14569/IJACSA.2023.0140924</doi>
        <lastModDate>2023-09-30T10:43:04.3970000+00:00</lastModDate>
        
        <creator>Dongxue Zhao</creator>
        
        <creator>Shenbo Liu</creator>
        
        <creator>Yuanhang Chen</creator>
        
        <creator>Da Chen</creator>
        
        <creator>Zhelun Hu</creator>
        
        <creator>Lijun Tang</creator>
        
        <subject>YOLOv5; ShuffleNetv2; lightweight and fast spatial pyramid pooling structure; convolutional block attention module; aluminum profiles surface defect detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>In this paper, we leverage the advantages of YOLOv5 in target detection to propose a highly accurate and lightweight network, called LAD-YOLO, for surface defect detection on aluminum profiles. LAD-YOLO addresses the issues of computational complexity, low precision, and the large number of model parameters encountered when YOLOv5 is applied to aluminum profile defect detection. LAD-YOLO reduces the model parameters, computation, and model size by utilizing the ShuffleNetV2 module and depthwise separable convolution in the backbone and neck networks, respectively. Meanwhile, a lightweight structure called &quot;Ghost_SPPFCSPC_group&quot;, which combines the Cross Stage Partial Network connection operation, Ghost Convolution, Group Convolution, and the Spatial Pyramid Pooling-Fast structure, is designed. This structure is incorporated into the backbone along with the Convolutional Block Attention Module (CBAM) to achieve a lightweight design while enhancing the model&#39;s ability to extract features of weak and small targets and improving its capability to learn information at different scales. The experimental results show that the mean Average Precision (mAP) of LAD-YOLO on the aluminum profile defect dataset reaches 96.9%, the model size is 6.64 MB, and the computational cost is 5.5 GFLOPs (Giga Floating Point Operations). Compared with YOLOv5, YOLOv5s-MobileNetv3, and other networks, the LAD-YOLO proposed in this paper has higher accuracy, fewer parameters, and lower floating-point computation.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_24-LAD_YOLO_A_Lightweight_YOLOv5_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Prototype for Inclusive Literacy for People with Reading Disabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140923</link>
        <id>10.14569/IJACSA.2023.0140923</id>
        <doi>10.14569/IJACSA.2023.0140923</doi>
        <lastModDate>2023-09-30T10:43:04.3970000+00:00</lastModDate>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Roberto Santiago Bellido-Garc&#237;a</creator>
        
        <creator>Pedro Molina-Velarde</creator>
        
        <creator>Cesar Yactayo-Arias</creator>
        
        <subject>Atlas TI 22; inclusive literacy; mobile applications; reading disability; design thinking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This article details the process of creating a prototype mobile application that aims to promote inclusive literacy for people with reading disabilities. The goal of this application is to help people with reading difficulties become more independent so that they can participate in society and take advantage of educational and employment opportunities that were previously unavailable to them. The methodology used in this work is Design Thinking, a user-centered creative approach to solving difficult challenges that addresses creativity, design, and problem solving. The results obtained from the expert judgment based on Atlas TI 22 provide a valuable perspective on the viability and potential of these technological tools. The analysis of the results shows an encouraging picture: 85% rated the application prototype designs positively, 75% confirmed that the app effectively complements inclusive literacy efforts, a significant achievement in line with the objective, and 70% appreciated the app&#39;s interaction with people with reading disabilities. Finally, 87% would gladly recommend the app, underscoring its valuable impact. In conclusion, the article discusses how mobile applications can help people with reading difficulties become more literate. The good reception of the prototype confirms the importance of technology in inclusive education and the value of this approach to improving the lives and education of this demographic.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_23-Application_Prototype_for_Inclusive_Literacy_for_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method for Classifying Intracerebral Hemorrhage (ICH) Based on Diffusion Weighted –Magnetic Resonance Imaging (DW-MRI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140922</link>
        <id>10.14569/IJACSA.2023.0140922</id>
        <doi>10.14569/IJACSA.2023.0140922</doi>
        <lastModDate>2023-09-30T10:43:04.3800000+00:00</lastModDate>
        
        <creator>Andi Kurniawan Nugroho</creator>
        
        <creator>Jajang Edi Priyanto</creator>
        
        <creator>Dinar Mutiara Kusumo Nugraheni</creator>
        
        <subject>Batch size; Epoch; ML-CNN; SGD; Stroke</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Stroke is a condition in which the blood supply to the brain is cut off. This occurs due to the rupture of blood vessels in the intracerebral area, known as Intracerebral Hemorrhage (ICH). Examination by health workers is generally carried out to get an overview of the affected part of the brain of a patient who has had a stroke. The weakness in diagnosing this disease is that deeper knowledge is needed to classify the type of stroke, especially ICH. This study aims to use the Modified Layers Convolutional Neural Network (ML-CNN) method to classify ICH stroke images based on Diffusion-Weighted (DW) MRI. The data used in this study is a DWI stroke MRI dataset of 3,484 images, consisting of 1,742 normal and 1,742 ICH images validated by a radiologist. Because the dataset is relatively small and to keep computational time manageable, Stochastic Gradient Descent (SGD) is used. This study compares the basic CNN model scenario with models that add layers to the original CNN to produce the highest accuracy value. Furthermore, each model is cross-validated with different values of k, evaluated under changes to batch size and epochs, and compared with machine learning models such as SVM, Random Forest, Extra Trees, and kNN. The results showed that the smaller the batch size and the greater the number of epochs, the higher the accuracy, which reached 99.86%. The four machine learning methods, all with accuracy, sensitivity, and specificity below 90%, were then compared to CNN2. In summary, the proposed CNN modification works better than the four machine learning models in classifying stroke images.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_22-A_New_Method_for_Classifying_Intracerebral_Hemorrhage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Convolutional Neural Network for Churn Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140921</link>
        <id>10.14569/IJACSA.2023.0140921</id>
        <doi>10.14569/IJACSA.2023.0140921</doi>
        <lastModDate>2023-09-30T10:43:04.3670000+00:00</lastModDate>
        
        <creator>Priya Gopal</creator>
        
        <creator>Nazri Bin MohdNawi</creator>
        
        <subject>Customer churn analysis; deep learning; variational autoencoder; convolutional neural networks; dimensionality reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The significance of customer churn analysis has escalated due to the increasing availability of relevant data and intensifying competition. Researchers and practitioners are focused on enhancing prediction accuracy in modeling approaches, with deep neural networks emerging as appealing due to their robust performance across domains. However, the computational demands surge due to the challenges posed by dimensionality and inherent characteristics of the data. To address these issues, this research proposes a novel hybrid model that strategically integrates Convolutional Neural Networks (CNN) and a modified Variational Autoencoder (VAE). By carefully adjusting the parameters of the VAE to capture the central tendency and range of variation, the study aims to enhance the effectiveness of classifying high-dimensional churn data. The proposed framework&#39;s efficacy is evaluated using six benchmark datasets from various domains, with performance metrics encompassing accuracy, f1-score, precision, recall, and response time. Experimental results underscore the prowess of the hybrid technique in effectively handling high-dimensional and imbalanced time series data, thus offering a robust pathway for enhanced churn analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_21-An_Improved_Convolutional_Neural_Network_for_Churn_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Review of Modern Methods to Improve Diabetes Self-Care Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140920</link>
        <id>10.14569/IJACSA.2023.0140920</id>
        <doi>10.14569/IJACSA.2023.0140920</doi>
        <lastModDate>2023-09-30T10:43:04.3670000+00:00</lastModDate>
        
        <creator>Alhuseen Omar Alsayed</creator>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Layla Hasan</creator>
        
        <creator>Farhat Embarak</creator>
        
        <subject>Diabetes self-care; diabetes management; systematic literature review; BCT theories</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Diabetes mellitus has become a global epidemic, with an increasing number of individuals affected by this chronic metabolic disorder. Effective management of diabetes requires a comprehensive self-care approach, which encompasses various aspects such as monitoring blood glucose levels, adherence to medication, modifications in lifestyle, and regular healthcare monitoring. As a result of advances in technology and healthcare systems, innovative techniques for improving diabetes self-care management have recently been developed. This comprehensive review examines the modern methods that have emerged to enhance diabetes self-care management systems. The review focuses on the integration of technology, Behavioural Change Techniques (BCTs), and behavioural health theories such as the Transtheoretical Model (TTM), the Health Belief Model (HBM), the Theory of Reasoned Action/Planned Behaviour (TPB), and Social Cognitive Theory (SCT) to promote optimal diabetes care outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 standards were followed in documenting this research. The Systematic Literature Review (SLR) covered the period from 2009 to 2020 to provide the most recent complete review. Overall, the SLR results show that self-care interventions have a favourable impact on behaviour modification, the encouragement of good lifestyle habits, the lowering of blood glucose levels, and the accomplishment of significant weight loss. According to the review&#39;s findings, diabetes self-management interventions that incorporated behavioural health theories and BCTs in their design tended to be more successful. Finally, to assist academics and practitioners with the creation of future applications, the limitations and future directions were defined, recognising the potential of combining BCT methodologies and behavioural theories to create self-management interventions. Based on these recognised cutting-edge mechanisms, the current SLR can assist application developers in building a model to construct efficient self-care interventions for diabetes.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_20-A_Comprehensive_Review_of_Modern_Methods_to_Improve_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preserving Cultural Heritage Through AI: Developing LeNet Architecture for Wayang Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140919</link>
        <id>10.14569/IJACSA.2023.0140919</id>
        <doi>10.14569/IJACSA.2023.0140919</doi>
        <lastModDate>2023-09-30T10:43:04.3500000+00:00</lastModDate>
        
        <creator>Muhathir</creator>
        
        <creator>Nurul Khairina</creator>
        
        <creator>Rehia Karenina Isabella Barus</creator>
        
        <creator>Mutammimul Ula</creator>
        
        <creator>Ilham Sahputra</creator>
        
        <subject>Wayang; LeNet; artificial intelligence; deep learning; cultural tradition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Wayang, an ancient cultural tradition in Java, has been an integral part of Indonesian culture for 1500 years. Rooted in Hindu cultural influences, wayang has evolved into a highly esteemed and beloved performance art. In the form of wayang kulit, this tradition conveys profound philosophical messages and implicit meanings that resonate with Javanese society. This research aims to develop an artificial intelligence (AI) model using deep learning with the LeNet architecture to accurately classify wayang images. The model was tested with 2515 Punakawan wayang images, showing excellent performance with an accuracy of 80% to 85%. Although the model successfully recognizes and distinguishes wayang classes, it faces some challenges in classifying specific classes, particularly in scenarios 2 and 4. Nevertheless, this research has a positive impact on cultural preservation, as the developed AI model can be used for automatic wayang image recognition. These implications open opportunities to better understand and preserve this rich cultural heritage through AI technology. With further improvements, this model has the potential to become a valuable tool in the efforts to preserve and introduce wayang culture to future generations.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_19-Preserving_Cultural_Heritage_Through_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Machine Learning Algorithms for Crime Prediction in Dubai</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140918</link>
        <id>10.14569/IJACSA.2023.0140918</id>
        <doi>10.14569/IJACSA.2023.0140918</doi>
        <lastModDate>2023-09-30T10:43:04.3330000+00:00</lastModDate>
        
        <creator>Shaikha Khamis AlAbdouli</creator>
        
        <creator>Ahmad Falah Alomosh</creator>
        
        <creator>Ali Bou Nassif</creator>
        
        <creator>Qassim Nasir</creator>
        
        <subject>Machine learning; crime analysis; crime patterns; KNN; random forest; SVM; ANN; Na&#239;ve Bayes; Decision Tree; major crime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>This study aims to find the most accurate algorithm for predicting crimes in Dubai. It compares models on a dataset of sample crimes in the Emirate of Dubai, United Arab Emirates, using the open-source data mining software WEKA, which enabled us to apply Random Forest, KNN, SVM, ANN, Na&#239;ve Bayes, and Decision Tree. We chose these algorithms because former studies used them effectively. We applied the algorithms to a dataset containing 13,440 major crimes in four categories that occurred between 2014 and 2018. After comparing the models and analyzing their success rates, we identified the best-performing algorithms and evaluated the effectiveness of the variables in making predictions by measuring their correlation coefficients. One of the study&#39;s most crucial recommendations is to increase the variables and data, adding more details about the crime, the criminal, and the victim, as these variables affect the analysis and the ultimate prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_18-Comparison_of_Machine_Learning_Algorithms_for_Crime_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improvement for Spatial-Temporal Queries of ATMGRAPH</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140917</link>
        <id>10.14569/IJACSA.2023.0140917</id>
        <doi>10.14569/IJACSA.2023.0140917</doi>
        <lastModDate>2023-09-30T10:43:04.3200000+00:00</lastModDate>
        
        <creator>ZHANG Zhiyuan</creator>
        
        <creator>HAN Boyang</creator>
        
        <subject>Air traffic management; knowledge graph; storage model; spatial-temporal query; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>As a knowledge graph for the field of ATM (Air Traffic Management), ATMGRAPH integrates aviation information from various sources and provides a new way to comprehensively analyze ATM data. However, the storage schema of ATMGRAPH is inefficient for trajectory-related queries, which have typical spatial-temporal characteristics, and thus cannot meet application requirements. This paper presents an improved storage model for ATMGRAPH: specifically, we design a cluster structure that connects trajectory points with spatial-temporal information to speed up trajectory-related queries, and we link flights, airports, and weather information in an effective way to speed up weather-related queries. We create a dataset of about 10,000 real domestic flights and build from it a knowledge graph containing about 11.66 million triplets. Experimental results show that an ATM knowledge graph constructed with this storage model can significantly improve the efficiency of spatial-temporal queries.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_17-An_Improvement_for_Spatial_Temporal_Queries_of_ATMGRAPH.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Evolving Performance Analysis Technologies, Algorithms and Models for Sports</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140916</link>
        <id>10.14569/IJACSA.2023.0140916</id>
        <doi>10.14569/IJACSA.2023.0140916</doi>
        <lastModDate>2023-09-30T10:43:04.3030000+00:00</lastModDate>
        
        <creator>Shamala Subramaniam</creator>
        
        <creator>Manoj Ravi Shankar</creator>
        
        <creator>Azyyati Adiah Zazali</creator>
        
        <creator>Hong Siaw Swin</creator>
        
        <creator>Zarina Muhamed</creator>
        
        <creator>Sivakumar Rajagopal</creator>
        
        <creator>Mohamad Zamri Napiah</creator>
        
        <creator>Faisal Embung</creator>
        
        <subject>Sports performance analysis technology; on-field analysis; IoT; real-time monitoring; off-field analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The emergence and extensive development and deployment of Industrial Revolution 4.0 have distinctly transformed the methodologies of sports performance monitoring. Consequently, new and adapted technologies have emerged in various areas of sports, such as competition analysis, player performance analysis, and many others. The rich and heterogeneous set of sports performance analysis technologies, algorithms, and frameworks provides a constant basis for opening new horizons in sports technology. Thus, this paper aims to encompass significant findings and provide a comprehensive survey of this area. Previous surveys have extensively focused on various methodologies of sports performance analysis, sport-specific analysis, and other technology revolving around sports performance analysis. However, most of that focus is on training and competition performance rather than off-field activity. The objective of this paper is to understand the current research trends, challenges, and future directions of the dynamically evolving technology embedded in the world of sports. This survey contributes to this rich repository with a new off-field focus, examining the connection between the athlete, the sports aspect of their life, the non-sport aspect, and the methodologies of sports performance analysis. In addition, the exponential growth of Artificial Intelligence (AI) as a basis for sports performance analysis systems and platforms is analysed extensively. This paper also presents a comprehensive classification of athlete performance analysis in terms of algorithmic tools and sports performance platforms and systems. Subsequently, the detailed analysis of this taxonomy has enabled the identification and detailed analysis of open issues and future directions.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_16-A_Survey_of_Evolving_Performance_Analysis_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Intelligent Model with Optimization Algorithm for Clustering Energy Consumption in Public Buildings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140915</link>
        <id>10.14569/IJACSA.2023.0140915</id>
        <doi>10.14569/IJACSA.2023.0140915</doi>
        <lastModDate>2023-09-30T10:43:04.3030000+00:00</lastModDate>
        
        <creator>Ahmed Abdelaziz</creator>
        
        <creator>Vitor Santos</creator>
        
        <creator>Miguel Sales Dias</creator>
        
        <subject>Energy consumption in public buildings; self-organizing map; K-means; genetic algorithm; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Recently, intelligent applications have gained a significant role in the energy management of public buildings due to their ability to enhance energy consumption performance. Energy management of these buildings represents a big challenge due to their unexpected energy consumption characteristics and the deficiency of design guidelines for energy efficiency and sustainability solutions. Therefore, an analysis of energy consumption patterns in public buildings becomes necessary, which reveals the significance of understanding and classifying energy consumption patterns in these buildings. This study seeks to find the optimal intelligent technique for classifying the energy consumption of public buildings into levels (e.g., low, medium, and high), find the critical factors that influence energy consumption, and finally, extract the scientific rules (If-Then rules) that help decision-makers determine the energy consumption level of each building. To achieve these objectives, correlation coefficient analysis was used to determine the critical factors that influence the energy consumption of public buildings, and two intelligent models, Self-Organizing Map (SOM) and Batch-SOM based on Principal Component Analysis (PCA), were used to determine the number of clusters of energy consumption patterns. SOM outperforms Batch-SOM in terms of quantization error: the quantization error of SOM and Batch-SOM is 8.97 and 9.24, respectively. K-means with a genetic algorithm was used to predict cluster levels in each building. By analyzing cluster levels, If-Then rules were extracted, helping decision-makers determine the most energy-consuming buildings. In addition, this study helps decision-makers in the energy field to rationalize the consumption of occupants of public buildings during the periods that consume the most energy and to change energy suppliers for those buildings.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_15-A_Proposed_Intelligent_Model_with_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compression Analysis of Hybrid Model Based on Scalable WDR Method and CNN for ROI-based Medical Image Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140914</link>
        <id>10.14569/IJACSA.2023.0140914</id>
        <doi>10.14569/IJACSA.2023.0140914</doi>
        <lastModDate>2023-09-30T10:43:04.2870000+00:00</lastModDate>
        
        <creator>Bindulal T.S</creator>
        
        <subject>Medical image segmentation; compression; region of interest; wavelet difference reduction; convolutional neural network; singular value decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Image compression techniques are fast-growing methods that have been developed on a large scale. Among them, wavelet-based compression methods are among the most promising and efficient techniques, widely used in the field of medical image processing and transmission. Compression techniques are treated as lossy or lossless models, and these can be applied to medical images depending on the situation. The medical image is separated into two regions: the central part of the image is treated as the core region, called the region of interest (ROI), and the rest is treated as non-ROI. ROI-based coding techniques are considered most important in the medical field for efficient transmission of clinical data. The proposed method focuses on these concepts. The ROI parts considered are either smooth or textured regions, extracted using a singular value decomposition (SVD) based segmentation method. An efficient run-length coding method, the wavelet difference reduction (WDR) method with a region-growing approach, is used to code the extracted ROI part after applying a 5/3-based integer wavelet transform. The remaining parts, called the non-ROI part or background artifacts, are coded using a Convolutional Neural Network (CNN). The proposed method is also restructured as a layered structure to achieve an adaptive scalability property and is named the scalable WDR-CNN (SWDR-CNN) method. The proposed SWDR-CNN method has been evaluated using rate-distortion metrics such as Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The coding gains of the SWDR-CNN method in terms of PSNR have been analysed and compared with a popular scalable algorithm, S-SPIHT. The SWDR-CNN method achieved coding gains of 0.2 dB to 6 dB in PSNR. Hence, the proposed model can be used to code the ROI of images and has applications in the field of medical image data coding and transmission.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_14-Compression_Analysis_of_Hybrid_Model_Based_on_Scalable_WDR_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Generation of Image Caption Based on Semantic Relation using Deep Visual Attention Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140912</link>
        <id>10.14569/IJACSA.2023.0140912</id>
        <doi>10.14569/IJACSA.2023.0140912</doi>
        <lastModDate>2023-09-30T10:43:04.2730000+00:00</lastModDate>
        
        <creator>M. M. EL-GAYAR</creator>
        
        <subject>Semantic image captioning; deep visual attention model; long short-term memory; wavelet driven convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>While modern systems for managing, retrieving, and analyzing images heavily rely on deriving semantic captions to categorize images, this task presents a considerable challenge due to the extensive capabilities required for manual processing, particularly with large images. Despite significant advancements in automatic image caption generation and human attention prediction through convolutional neural networks, there remains a need to enhance attention models in these networks through efficient multi-scale feature utilization. Addressing this need, our study presents a novel image decoding model that integrates a wavelet-driven convolutional neural network with a dual-stage discrete wavelet transform, enabling the extraction of salient features within images. We utilize a wavelet-driven convolutional neural network as the encoder, coupled with a deep visual prediction model and Long Short-Term Memory as the decoder. The deep visual prediction model calculates channel and location attention for visual attention features, with local features assessed by considering the spatial-contextual relationship among objects. Our primary contribution is an encoder-decoder model that automatically creates a semantic caption for the image based on the semantic contextual information and spatial features present in the image. We also improved the performance of this model, as demonstrated through experiments conducted on three widely used datasets: Flickr8K, Flickr30K, and MSCOCO. The proposed approach outperformed current methods, achieving superior results in BLEU, METEOR, and GLEU scores. This research offers a significant advancement in image captioning and attention prediction models, presenting a promising direction for future work in this field.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_12-Automatic_Generation_of_Image_Caption_Based_on_Semantic_Relation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Oil Price Forecasting Through an Intelligent Hybridized Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140913</link>
        <id>10.14569/IJACSA.2023.0140913</id>
        <doi>10.14569/IJACSA.2023.0140913</doi>
        <lastModDate>2023-09-30T10:43:04.2730000+00:00</lastModDate>
        
        <creator>Hicham BOUSSATTA</creator>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <creator>Mohammed CHINY</creator>
        
        <subject>Oil market; prediction; crude oil; hybrid approach; CPSE stock ETF price; machine learning; stock markets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The oil market has long experienced price fluctuations driven by diverse factors. These shifts in crude oil prices wield substantial influence over the costs of various goods and services. Moreover, the price per barrel is intricately intertwined with global economic activities, themselves influenced by the trajectory of oil prices. Analyzing oil behavior stands as a pivotal means for tracking the evolution of barrel prices and predicting future oil costs. This analytical approach significantly contributes to the field of crude oil price forecasting. Researchers and scientists alike prioritize accurate crude oil price forecasting. Yet, such endeavors are often challenged by the intricate nature of oil price behavior. Recent times have witnessed the effective employment of various approaches, including Hybrid and Machine Learning techniques to address similarly complex tasks, though they often yield elevated error rates, as observed in financial markets. In this study, the goal is to enhance the predictive precision of several weak supervised learning predictors by harnessing hybridization, particularly within the context of the crude oil market&#39;s multifaceted variations. The focus extends to a vast dataset encompassing CPSE Stock ETF prices over a period of 23 years. Ten distinct models, namely SVM, XGBoost, Random Forest, KNN, Gradient Boosting, Decision Tree, Ridge, Lasso, Elastic Net, and Neural Network, were employed to derive elemental predictions. These predictions were subsequently amalgamated via Linear Regression, yielding heightened performance. The investigation underscores the efficacy of hybridization as a strategy. Ultimately, the proposed approach&#39;s performance is juxtaposed against its individual weak predictors, with experiment results validating the findings.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_13-Enhancing_Oil_Price_Forecasting_Through_an_Intelligent_Hybridized_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bibliometric Analysis of Smart Home Acceptance by the Elderly (2004-2023)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140911</link>
        <id>10.14569/IJACSA.2023.0140911</id>
        <doi>10.14569/IJACSA.2023.0140911</doi>
        <lastModDate>2023-09-30T10:43:04.2570000+00:00</lastModDate>
        
        <creator>Bo Yuan</creator>
        
        <creator>Norazlyn Kamal Basha</creator>
        
        <subject>Smart home; acceptance; elderly people; ageing-in-place; bibliometric analysis; VOSviewer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Both academia and business firmly endorse the notion that a smart home would be the solution to easing the excessive social burden associated with demographic ageing and improving older adults&#39; quality of life by enhancing living independence while encouraging their desire to age in place. This study uses bibliometric analysis to examine the research trends on elderly people&#39;s acceptance of smart homes. The results are derived from analysis using the VOSviewer software on 257 documents in the Scopus database. The results reveal that: there has been an accelerating growth rate in the smart home literature focusing on the elderly’s acceptance since 2004; the majority of these studies are journal articles filed in the research area of computer science; the most commonly mentioned keywords include “smart home(s)” and “older adults”; the US has produced the highest number of related works; and the most cited articles are authored by researchers from multiple nations in close collaboration.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_11-A_Bibliometric_Analysis_of_Smart_Home_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized YOLOv7 for Small Target Detection in Aerial Images Captured by Drone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140909</link>
        <id>10.14569/IJACSA.2023.0140909</id>
        <doi>10.14569/IJACSA.2023.0140909</doi>
        <lastModDate>2023-09-30T10:43:04.2400000+00:00</lastModDate>
        
        <creator>Yanxin Liu</creator>
        
        <creator>Shuai Chen</creator>
        
        <creator>Lin Luo</creator>
        
        <subject>Small target detection; drone aerial photography; YOLOv7; clustering algorithm; spatial pyramid pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>It is challenging to detect small targets in aerial images captured by drones due to variations in target sizes and occlusions arising from the surrounding environment. This study proposes an optimized object detection algorithm based on YOLOv7 to address the above-mentioned challenges. The proposed method comprises the design of a Genetic K-means (1-IoU) clustering algorithm to obtain customized anchor boxes that are better suited to the dataset. Moreover, the SPPFCSPC_group structure is optimized using group convolutions to reduce model parameters. The fusion of Spatial Pyramid Pooling-Fast (SPPF) and Cross Stage Partial (CSP) structures leads to increased detection accuracy and an enhanced multi-scale feature fusion network. Furthermore, a Detect Head is incorporated into the classification phase for more accurate position and class predictions. According to experimental findings, the optimized YOLOv7 algorithm performs quite well on the VisDrone2019 dataset in terms of detection accuracy. Compared with the original YOLOv7 algorithm, the optimized version shows a 0.18% increase in Average Precision (AP), a reduction of 5.7 M model parameters, and a 1.12 Frames Per Second (FPS) improvement in frame rate. With the above-described enhancements in AP and parameter reduction, the precision of small target detection and the real-time detection speed are notably increased. In general, the optimized YOLOv7 algorithm offers superior accuracy and real-time capability, making it well-suited for small target detection tasks in real-time drone aerial photography.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_9-Optimized_YOLOv7_for_Small_Target_Detection_in_Aerial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DevOps Implementation Challenges in the Indonesian Public Health Organization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140910</link>
        <id>10.14569/IJACSA.2023.0140910</id>
        <doi>10.14569/IJACSA.2023.0140910</doi>
        <lastModDate>2023-09-30T10:43:04.2400000+00:00</lastModDate>
        
        <creator>Muhammad Yazid Al Qahar</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <subject>DevOps; challenges; fuzzy AHP; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The importance of accelerating software development to meet rapidly changing business needs has driven the Indonesian Public Health Organization (IPHO) to adopt DevOps. However, after three years, the expected benefits have not been achieved. This research aims to identify the main challenges and obstacles in implementing DevOps at IPHO. A comprehensive examination of existing literature is employed to recognize prevalent difficulties encountered by organizations when implementing DevOps. The main factors are ranked using the Fuzzy Analytic Hierarchy Process (FAHP) based on survey data from DevOps practitioners at IPHO. This study helps fill gaps left by empirical studies on the challenges of applying DevOps, especially in the public healthcare sector. It also streamlines the data collection and analysis process by utilizing FAHP, simplifying the survey process and reducing the number of questions compared to previous approaches. According to the research findings, the primary hurdle that requires attention is the mindset shift from a traditional approach to continuous delivery. In addition, the lack of understanding of the benefits of implementing DevOps and the lack of cross-functional leadership are also identified as challenges that need to be considered. However, IPHO does not view the use of legacy tools and technologies as a significant impediment to adopting DevOps.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_10-DevOps_Implementation_Challenges_in_the_Indonesian_Public_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Method for Trajectory Data Based on Satellite Doppler Velocimetry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140908</link>
        <id>10.14569/IJACSA.2023.0140908</id>
        <doi>10.14569/IJACSA.2023.0140908</doi>
        <lastModDate>2023-09-30T10:43:04.2270000+00:00</lastModDate>
        
        <creator>Junzhuo Li</creator>
        
        <creator>Wenyong Li</creator>
        
        <creator>Guan Lian</creator>
        
        <subject>Urban transportation; Kalman Filter; information fusion; trajectory data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Due to cost and energy consumption limitations, there are significant differences in the positioning capabilities of mobile terminals, resulting in unsatisfactory quality of trajectory data. In this paper, satellite Doppler data is used to optimize trajectory data. First, the system state equation is established by the kinematic relationship between the measured velocity and position, and the static linear Kalman filter estimates the optimal system state. Then a dynamic Kalman filter system is established by correlating the measurement error matrix parameters of the Kalman filter with the vertical dilution of precision of satellite positioning. Finally, the whole-day trajectory of a taxi in Shenzhen was visualized, and the deviation between the trajectory points and the urban road was calculated to compare the optimized and non-optimized taxi trajectories. The results show that the proposed optimization method can effectively reduce the deviation between trajectory points and urban roads, and this method can be used to process vehicle trajectory data in urban traffic research.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_8-Optimization_Method_for_Trajectory_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Promise of Self-Supervised Learning for Dental Caries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140907</link>
        <id>10.14569/IJACSA.2023.0140907</id>
        <doi>10.14569/IJACSA.2023.0140907</doi>
        <lastModDate>2023-09-30T10:43:04.2100000+00:00</lastModDate>
        
        <creator>Tran Quang Vinh</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Machine learning; dental imaging; dental caries; oral diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Self-supervised learning (SSL) is a type of machine learning that does not require labeled data. Instead, SSL algorithms learn from unlabeled data by predicting the order of image patches, predicting the missing pixels in an image, or predicting the rotation of an image. SSL has been shown to be effective for a variety of tasks, including image classification, object detection, and segmentation. Dental image processing is a rapidly growing field with a wide range of applications, such as caries detection, periodontal disease progression prediction, and oral cancer detection. However, the manual annotation of dental images is time-consuming and expensive, which limits the development of dental image processing algorithms. In recent years, there has been growing interest in using SSL for dental image processing. SSL algorithms have the potential to overcome the challenges of manual annotation and to improve the accuracy of dental image analysis. This paper conducts a comparative examination between studies that have used SSL for dental caries processing and others that use machine learning methods. We also discuss the challenges and opportunities for using SSL in dental image processing. We conclude that SSL is a promising approach for dental image processing. SSL has the potential to improve the accuracy and efficiency of dental image analysis, and it can be used to overcome the challenges of manual annotation. We believe that SSL will play an increasingly important role in dental image processing in the years to come.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_7-The_Promise_of_Self_Supervised_Learning_for_Dental_Caries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Versatile Shuffle Resource Units Recomputation Algorithm for Uplink OFDMA Random Access</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140906</link>
        <id>10.14569/IJACSA.2023.0140906</id>
        <doi>10.14569/IJACSA.2023.0140906</doi>
        <lastModDate>2023-09-30T10:43:04.1930000+00:00</lastModDate>
        
        <creator>Azyyati Adiah Zazali</creator>
        
        <creator>Shamala Subramaniam</creator>
        
        <creator>Zuriati Ahmad Zukarnain</creator>
        
        <creator>Abdullah Muhammed</creator>
        
        <subject>IEEE 802.11ax; OFDMA; UORA; random access; backoff; resource units allocation; multi-user</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>IEEE 802.11ax introduces Uplink Orthogonal Frequency Division Multiple Access (OFDMA)-based Random Access (UORA), a novel feature for facilitating random channel access in Wireless Local Area Networks (WLANs). Similar to the conventional random access scheme in WLANs, UORA employs the OFDMA backoff (OBO) procedure to access the channel’s Resource Units (RUs) and selects a random OBO counter within the OFDMA contention window (OCW) range. The Access Point (AP) can determine and communicate this OCW range to each station (STA). When multiple STAs select the same RU, transmission fails due to an RU collision, while RUs that remain unaccessed by any STA are idle and wasted. Efforts to optimize channel efficiency require minimizing both collisions and idle RUs despite the challenges arising from UORA’s distributed and random nature. The Fisher-Yates shuffle algorithm introduces a random uniform distribution strategy for managing RU allocations among STAs. The results demonstrate that this approach enables STAs to access RUs in a distributed manner, effectively reducing idle and wasted RUs, especially in scenarios involving a limited number of STAs. Furthermore, this approach effectively mitigates collisions among STAs, even in scenarios with a more significant number of STAs.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_6-A_Versatile_Shuffle_Resource_Units_Recomputation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stacked LSTM and Kernel-PCA-based Ensemble Learning for Cardiac Arrhythmia Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140905</link>
        <id>10.14569/IJACSA.2023.0140905</id>
        <doi>10.14569/IJACSA.2023.0140905</doi>
        <lastModDate>2023-09-30T10:43:04.1770000+00:00</lastModDate>
        
        <creator>Azween Abdullah</creator>
        
        <creator>S. Nithya</creator>
        
        <creator>M. Mary Shanthi Rani</creator>
        
        <creator>S. Vijayalakshmi</creator>
        
        <creator>Balamurugan Balusamy</creator>
        
        <subject>Arrhythmia classification; ensemble learning; extreme gradient boosting; kernel PCA; LSTM; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Cardiovascular diseases (CVD) are the most prevalent causes of death and disability worldwide. Cardiac arrhythmia is one of the chronic cardiovascular diseases that create panic in human life. Early diagnosis aids physicians in securing life. ECG is a non-stationary physiological signal representing the heart&#39;s electrical activity. Automated tools to detect arrhythmia from ECG signals are possible with Machine Learning (ML). The ensemble learning technique combines the power of two or more classifiers to solve a computational intelligence problem. It enhances the performance of the models by fusing two or more models, which greatly increases their strength. The proposed ensemble machine learning approach amalgamates the potency of Long Short-Term Memory (LSTM) and ensemble learning, opening up a new direction for research. In this research work, two novel ensemble methods of Extreme Gradient Boosting-LSTM (EXGB-LSTM) are developed, which use LSTM as a base learner and are transformed into an ensemble learner by coalescing with Extreme Gradient Boosting. Kernel Principal Component Analysis (K-PCA) is a significant non-linear dimensionality reduction technique. It can manage high-dimensional datasets with various features by lowering the dimensionality of the data while retaining the most crucial details. It has been applied as a preprocessing step for feature reduction in the dataset, and the performance of EXGB-LSTM is tested with and without K-PCA. Experimental results showed that the first method, EXGB-LSTM, reached an accuracy of 92.1%, a precision of 90.6%, an F1-score of 94%, and a recall of 92.7%. The second proposed method, K-PCA with EXGB-LSTM, attained the highest accuracy of 94.3%, with a precision of 92%, an F1-score of 98%, and a recall of 94.9% for multi-class cardiac arrhythmia classification.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_5-Stacked_LSTM_and_Kernel_PCA_based_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Breast Cancer Diagnosis using a Modified Elman Neural Network with Optimized Algorithm Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140904</link>
        <id>10.14569/IJACSA.2023.0140904</id>
        <doi>10.14569/IJACSA.2023.0140904</doi>
        <lastModDate>2023-09-30T10:43:04.1630000+00:00</lastModDate>
        
        <creator>Linkai Chen</creator>
        
        <creator>CongZhe You</creator>
        
        <creator>Honghui Fan</creator>
        
        <creator>Hongjin Zhu</creator>
        
        <subject>Breast cancer model; Elman Neural Network; upgraded imperialist competitive algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Breast cancer is a class of cancer that starts in the cells of the breast, occurring when breast cells divide and multiply abnormally and uncontrollably. Other parts of the body, including the lymph nodes, bones, lungs, and liver, can be affected by breast cancer. Early diagnosis and treatment are critical to lessening the risk of death from breast cancer. Machine learning is a type of artificial intelligence that can be used to diagnose breast cancer; it uses algorithms to analyze data and assess patterns associated with the disease. Machine learning models can help improve diagnostic accuracy, reduce false-positive results, and improve the efficiency of diagnosis. Elman Neural Networks (ENNs) are machine learning algorithms that can be used to diagnose breast cancer. ENNs use medical data to detect patterns associated with the presence of cancer. The accuracy of ENNs in diagnosing breast cancer is still being researched, but they have the potential to improve diagnostic accuracy and reduce false-positive results. In the present study, a new modified version of the ENN, based on an upgraded version of the imperialist competitive algorithm, is proposed for this objective. A comparison of the model&#39;s results with other methods demonstrated the proposed method&#39;s higher efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_4-Enhancing_Breast_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Coherence Indices Extracted from EEG Signals of Mild and Severe Autism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140903</link>
        <id>10.14569/IJACSA.2023.0140903</id>
        <doi>10.14569/IJACSA.2023.0140903</doi>
        <lastModDate>2023-09-30T10:43:04.1300000+00:00</lastModDate>
        
        <creator>Lingyun Wu</creator>
        
        <subject>Autism spectrum disorder; electroencephalography (EEG); classification; neural network; support vector machine; coherence feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>Autism spectrum disorder is a debilitating neurodevelopmental illness characterized by serious impairments in communication and social skills. Due to the increasing prevalence of autism worldwide, the development of a new diagnostic approach for autism spectrum disorder is of great importance. Diagnosing the severity of autism is also very important for clinicians in the treatment process. Therefore, in this study, we intend to classify the electroencephalogram (EEG) signals of mild and severe autism patients. Twelve patients with mild autism and twelve patients with severe autism in the age range of 10-30 years participated in the present research. Due to the difficulties of working with autism patients and recording EEG signals from these patients in the awake state, the Emotiv EPOC headset device was utilized in this work. After signal preprocessing, we calculated short-range and long-range coherence values in the frequency range of 1-45 Hz, including short- and long-range intra- and inter-hemispheric coherence features. Then, statistical analysis was conducted to select coherence features with statistically significant differences between the two groups. A multilayer perceptron (MLP) neural network and a support vector machine (SVM) with a radial basis function (RBF) kernel were used in the classification stage. Our results showed that the best MLP classification performance was obtained with selected inter-hemispheric coherence features, with accuracy, sensitivity, and specificity of 96.82%, 97.82%, and 96.92%, respectively. The best SVM classification performance was likewise obtained with selected inter-hemispheric coherence features, with accuracy, sensitivity, and specificity of 94.70%, 93.85%, and 95.55%, respectively. However, it should be noted that the MLP neural network imposes a much higher computational cost than the SVM classifier. Considering that our simple system gives promising results in distinguishing patients with mild and severe autism from EEG, there is scope for further work with a larger sample size and different ages and genders.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_3-Classification_of_Coherence_Indices_Extracted_from_EEG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Motion Objects in Video Frames using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140902</link>
        <id>10.14569/IJACSA.2023.0140902</id>
        <doi>10.14569/IJACSA.2023.0140902</doi>
        <lastModDate>2023-09-30T10:43:04.1170000+00:00</lastModDate>
        
        <creator>Feng JIANG</creator>
        
        <creator>Jiao LIU</creator>
        
        <creator>Jiya TIAN</creator>
        
        <subject>Segmentation; video processing; motion objects; deep convolutional neural network (DCNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The segmentation of moving objects in video sequences is one of the most widely studied tasks in the machine vision field and has attracted the attention of researchers in recent decades. It is a challenging task, especially when several moving objects appear in a video and the system must decide which objects should be segmented and tracked. Therefore, in this article, we present a new method to segment several moving objects simultaneously. The main idea of this work is to propagate the confidence of confidently-estimated (CE) frames by fine-tuning the DCNN model on the other frames. We apply a pre-trained DCNN model to the frames to estimate the class of each object; then, we gather the frames where the estimate is locally or globally reliable. Next, we use the collection of CE frames as a training set to fine-tune the pre-trained network on the examples present in a video. Our proposed model provides acceptable results that surpass those of similar models. These comparisons are made on the YouTube-VOS dataset. Our approach is also applied to the DAVIS-2017 dataset, where the obtained results again exceed those of similar works.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_2-Segmentation_of_Motion_Objects_in_Video_Frames.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Anomaly Detection in the Internet of Things: A Review of Current Trends and Research Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140901</link>
        <id>10.14569/IJACSA.2023.0140901</id>
        <doi>10.14569/IJACSA.2023.0140901</doi>
        <lastModDate>2023-09-30T10:43:04.1000000+00:00</lastModDate>
        
        <creator>Min Yang</creator>
        
        <creator>Jiajie Zhang</creator>
        
        <subject>Internet of things; anomaly detection; security; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(9), 2023</description>
        <description>The Internet of Things (IoT) has revolutionized how we interact with the physical world, bringing a new era of connectivity. Billions of interconnected devices seamlessly communicate, generating an unprecedented volume of data. However, the dramatic growth of IoT applications also raises an important issue: the reliability and security of IoT data. Data anomaly detection plays a pivotal role in addressing this critical issue, allowing for identifying abnormal patterns, deviations, and malicious activities within IoT data. This paper discusses the current trends, methodologies, and challenges in data anomaly detection within the IoT domain. In this paper, we discuss the strengths and limitations of various anomaly detection techniques, such as statistical methods, machine learning algorithms, and deep learning methods. IoT data anomaly detection carries unique characteristics and challenges that must be carefully considered. We explore these intricacies, such as data heterogeneity, scalability, real-time processing, and privacy concerns. By delving into these challenges, we provide a holistic understanding of the complexity associated with IoT data anomaly detection, paving the way for more targeted and effective solutions.</description>
        <description>http://thesai.org/Downloads/Volume14No9/Paper_1-Data_Anomaly_Detection_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Detection System for Electrical Equipment based on Deep Learning and Infrared Image Processing Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408124</link>
        <id>10.14569/IJACSA.2023.01408124</id>
        <doi>10.14569/IJACSA.2023.01408124</doi>
        <lastModDate>2023-09-01T07:59:23.4770000+00:00</lastModDate>
        
        <creator>Mingxu Lu</creator>
        
        <creator>Yuan Xie</creator>
        
        <subject>Deep learning; infrared images; electrical equipment; intelligent detection; adaptive median filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The demand for reliability in power grid systems is gradually increasing with the development of the power industry, making it necessary to promptly identify and eliminate hidden dangers. To meet the needs of online monitoring and early warning for electrical equipment, an intelligent detection system based on deep learning and infrared image processing technology is proposed in this study. Firstly, the infrared image is preprocessed for noise reduction. Then, an improved SSD (Single Shot MultiBox Detector) network is used to optimize the infrared image detection method. On this basis, an intelligent detection system for electrical equipment is designed. The results show that the mAP value of the improved SSD network after 1200 iterations is about 92.58%, and its area under the Precision-Recall (PR) curve is higher than that of other algorithms. The simulation analysis of the detection system shows that the improved method detects a fault degree of 57.85%, which is closer to the 59.74% observed in the real situation. The experimental results indicate that the newly established intelligent detection system for electrical equipment can effectively detect abnormal situations.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_124-Intelligent_Detection_System_for_Electrical_Equipment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rural Landscape Design Data Analysis Based on Multi-Media, Multi-Dimensional Information Based on a Decision Tree Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408123</link>
        <id>10.14569/IJACSA.2023.01408123</id>
        <doi>10.14569/IJACSA.2023.01408123</doi>
        <lastModDate>2023-08-30T09:54:15.0500000+00:00</lastModDate>
        
        <creator>Ning Leng</creator>
        
        <creator>Hongxin Wang</creator>
        
        <subject>Multi-media multi-dimensional information rural landscape; data mining; decision tree; data preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This paper analyzes and studies the design characteristics of multi-dimensional information rural scenes. Drawing on data mining and the Decision Tree (DT) method, a preprocessing system and method for multi-dimensional information rural landscape design are put forward. Through analysis of the multi-dimensional value of multimedia mountain villages, forms of planning and design analysis and their corresponding methods are derived. Using one village as a case study, we investigated the villagers, roads, services, greening, ecology, and other aspects of the village in detail and then carried out the multi-media, multi-resource village&#39;s detailed planning and design.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_123-Rural_Landscape_Design_Data_Analysis_Based_on_Multi_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of Image Conceptualization Split-Screen Stitching and Positioning Technology in Film and Television Production</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408122</link>
        <id>10.14569/IJACSA.2023.01408122</id>
        <doi>10.14569/IJACSA.2023.01408122</doi>
        <lastModDate>2023-08-30T09:54:15.0370000+00:00</lastModDate>
        
        <creator>Zhouzhou Deng</creator>
        
        <creator>Rongshen Zhu</creator>
        
        <subject>Convolutional neural network; attention mechanism; null space convolutional pooling pyramid; spatial rich model; dense block</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In order to study the technology of image conception, split-screen stitching, and positioning in film and television production, this paper first reviews the relevant research literature, then designs an improved biomedical image segmentation convolutional network model applied to film and television production, and then verifies the effectiveness of the proposed model. Finally, the paper summarizes the research findings. Aiming at the problem that the traditional image mosaic positioning model has poor robustness because of its insufficient ability to extract features and its inaccurate segmentation and positioning regions, this study proposes a biomedical image segmentation convolutional network model based on dense blocks and a void space convolutional pooling pyramid module. Additionally, an attention mechanism is introduced to enhance the biomedical image segmentation convolutional network model. The results show that the accuracy, recall, and F1 value of the biomedical image segmentation convolutional network model are 96.48%, 95.24%, and 95.96%, respectively, on the Columbia uncompressed image stitching detection dataset, while the accuracy, recall, and F1 value of the improved model are 98.19%, 96.23%, and 97.21%, respectively. In summary, the improved convolutional network model for biomedical image segmentation has excellent performance, and it has certain application value in image conception, split-screen stitching, and positioning in film and television production.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_122-The_Implementation_of_Image_Conceptualization_Split_Screen.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Data Sharing in Smart Homes: An Efficient Approach Based on Local Differential Privacy and Randomized Responses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408121</link>
        <id>10.14569/IJACSA.2023.01408121</id>
        <doi>10.14569/IJACSA.2023.01408121</doi>
        <lastModDate>2023-08-30T09:54:15.0030000+00:00</lastModDate>
        
        <creator>Amr T. A. Elsayed</creator>
        
        <creator>Almohammady S. Alsharkawy</creator>
        
        <creator>Mohamed S. Farag</creator>
        
        <creator>S. E. Abo-Youssef</creator>
        
        <subject>Smart homes; security; privacy-preserving; differential privacy; RAPPOR; randomized responses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Smart homes are smart spaces that contain devices connected to each other, collecting information and providing users with comfortable living, safety, and energy management features. To improve individuals’ quality of life, smart device companies and service providers collect data about user activities, user needs, power consumption, etc.; these data need to be shared with companies under privacy-preserving practices. In this paper, an effective approach to securing data transmission to the service provider is proposed based on local differential privacy (LDP), which enables residents of smart homes to report statistics on their power usage as perturbed Bloom filters. Randomized Aggregatable Privacy-Preserving Ordinal Response (RAPPOR) is a privacy technique that allows sharing of data and statistics while preserving the privacy of individual users. The proposed approach applies two randomized responses, the permanent random response (PRR) and the instantaneous random response (IRR), and then applies machine learning algorithms to decode the perturbed Bloom filters on the service provider side. The simulation results show that the proposed approach achieves good performance in terms of privacy preservation, accuracy, recall, and f-measure metrics. The results indicate that the proposed LDP approach for smart homes achieved a good utility-privacy trade-off at the LDP parameter value ϵ = 0.95. The classification accuracy is between 95.4% and 98% for the utilized classification techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_121-Secure_Data_Sharing_in_Smart_Homes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Cancer Classification using Reinforcement Learning-based Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408120</link>
        <id>10.14569/IJACSA.2023.01408120</id>
        <doi>10.14569/IJACSA.2023.01408120</doi>
        <lastModDate>2023-08-30T09:54:14.9900000+00:00</lastModDate>
        
        <creator>Shengping Luo</creator>
        
        <subject>Lung cancer; ensemble learning; reinforcement learning; artificial bee colony; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Lung cancer is a significant health issue affecting millions of people worldwide annually. However, current manual detection methods used by physicians and radiologists to identify lung nodules are inefficient because of the diverse shapes and locations of the nodules in the lungs. New methods are needed to improve the accuracy and speed of detecting lung nodules. This is important because early detection of nodules can increase the likelihood of successful treatment and recovery. This paper introduces a new LLC-QE model that combines ensemble learning and reinforcement learning to classify lung cancer. Initially, the model undergoes pre-training through the utilization of the Artificial Bee Colony (ABC) algorithm. This approach aims to decrease the probability of the model getting stuck in a local optimum. Subsequently, a set of convolutional neural networks (CNNs) is used to simultaneously derive feature vectors from input images, which are subsequently combined for classification in downstream processes. The LIDC-IDRI dataset, predominantly composed of cases without cancer, was employed to train and evaluate the model. To mitigate the dataset imbalance, the training procedure using reinforcement learning is formulated as a series of interconnected decisions. During this process, the images are regarded as states; the network acts as the agent, and the agent is given a greater reward/punishment for accurately/incorrectly classifying the underrepresented class compared to the overrepresented class. The LLC-QE model achieves excellent results (F-measure 89.8%; geometric mean 92.7%), outperforming other deep models. Identifying the optimal values for the reward function and determining the ideal number of CNN feature extractors in the ensemble are achieved through experiments conducted on the study dataset. Ablation studies that exclude ABC pre-training and reinforcement learning from the model confirm these components’ independent positive incremental impact on the model’s performance.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_120-Lung_Cancer_Classification_using_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Agriculture Plant Disease Prediction using Deep Learning Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408119</link>
        <id>10.14569/IJACSA.2023.01408119</id>
        <doi>10.14569/IJACSA.2023.01408119</doi>
        <lastModDate>2023-08-30T09:54:14.9570000+00:00</lastModDate>
        
        <creator>Mohammelad Baljon</creator>
        
        <subject>Suggested agricultural plant disease prediction system; tuned ML models; machine learning classifiers; plant disease detection; deep learning architectures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The agricultural industry in Saudi Arabia suffers from the effects of vegetable diseases in the Central Province. The primary causes documented in this analysis were 32 fungal diseases, two viral diseases, two physiological diseases, and one parasitic disease. Because early diagnosis of plant diseases may boost the productivity and quality of agricultural operations, tomato, pepper, and onion were selected for the experiment. The primary goal is to fine-tune the hyperparameters of common machine learning classifiers and deep learning architectures in order to make precise diagnoses of plant diseases. The first stage makes use of common image processing methods with ML classifiers: the input picture is median filtered, its contrast is increased, and the background is removed using HSV color space segmentation. After shape, texture, and color features have been extracted using feature descriptors, hyperparameter-tuned machine learning (ML) classifiers such as k-nearest neighbor, logistic regression, support vector machine, and random forest are used to determine an outcome. Finally, the proposed Deep Learning Plant Disease Detection System (DLPDS) makes use of the tuned ML models. In the second stage, candidate Convolutional Neural Network (CNN) designs were evaluated on the supplied input dataset with the SGD (Stochastic Gradient Descent) optimizer. To increase classification accuracy, the best CNN model is fine-tuned using several optimizers. It is concluded that MCNN (Modified Convolutional Neural Network) achieved 99.5% classification accuracy and an F1 score of 1.00 for pepper disease in the first-phase module. Enhanced GoogleNet with the Adam optimizer achieved a classification accuracy of 99.5% and an F1 score of 0.997 for pepper diseases, which is much higher than previous models. Thus, the suggested strategy may be adapted to different crops to identify and diagnose diseases more effectively.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_119-A_Framework_for_Agriculture_Plant_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Heart Disease using an Ensemble Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408118</link>
        <id>10.14569/IJACSA.2023.01408118</id>
        <doi>10.14569/IJACSA.2023.01408118</doi>
        <lastModDate>2023-08-30T09:54:14.9430000+00:00</lastModDate>
        
        <creator>Ghalia A. Alshehri</creator>
        
        <creator>Hajar M. Alharbi</creator>
        
        <subject>Machine learning; ensemble learning; classification; disease prediction; heart disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The ability to predict diseases early is essential for improving healthcare quality and can assist patients in avoiding potentially dangerous health conditions before it is too late. Various machine learning techniques are used in the medical field, and machine learning is critical in shaping the future of pharmaceuticals and patients’ health because its classification techniques provide a high level of accuracy. However, because so much data are gathered from patients, it becomes harder to derive meaningful heart disease predictions, and identifying the relevant characteristics is a vital research task. Individual classification algorithms in this situation cannot generate flawless models capable of reliably predicting heart disease. As a result, higher performance might be achieved by using ensemble learning approaches (ELA), producing accurate heart disease predictions. In the present research work, we utilized an ELA for the early prediction of heart disease, using a new combination of four machine learning algorithms—adaptive boosting, support vector machine, decision tree, and random forest—to increase the accuracy of the prediction results. We used two wrapper methods for feature selection: forward selection and backward elimination. We evaluated the proposed model on three datasets: the StatLog UCI dataset, the Z-Alizadeh Sani dataset, and the Cardiovascular Disease (CVD) dataset. We obtained the highest accuracy with the Z-Alizadeh Sani dataset, where it was 0.91, while the StatLog UCI dataset yielded 0.83. The CVD dataset gave the lowest accuracy, 0.73.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_118-Prediction_of_Heart_Disease_using_an_Ensemble_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating Nature-Resembling Tertiary Protein Structures with Advanced Generative Adversarial Networks (GANs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408117</link>
        <id>10.14569/IJACSA.2023.01408117</id>
        <doi>10.14569/IJACSA.2023.01408117</doi>
        <lastModDate>2023-08-30T09:54:14.9100000+00:00</lastModDate>
        
        <creator>Mena Nagy A. Khalaf</creator>
        
        <creator>Taysir Hassan A Soliman</creator>
        
        <creator>Sara Salah Mohamed</creator>
        
        <subject>Molecular structure; protein structure; protein modeling; tertiary structure; generative adversarial learning; deep learning; proteomic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In the field of molecular chemistry, the functions, interactions, and bonds between proteins depend on their tertiary structures. Proteins naturally exhibit dynamism under different physiological conditions, as they alter their tertiary structures to accommodate interactions with other molecular partners. Significant advancements in Generative Adversarial Networks (GANs) have been leveraged to generate tertiary structures closely mimicking the natural features of real proteins, including the backbone and local and distal characteristics. Our research has led to the development of the stable model ROD-WGAN, which is capable of generating tertiary structures that closely resemble those found in nature. Four key contributions have been made to achieve this goal: (1) utilizing the Ratio Of Distribution (ROD) as a penalty function in the Wasserstein Generative Adversarial Network (WGAN), (2) developing a GAN network architecture that utilizes residual blocks in the generator, (3) increasing the length of the generated protein structures to 256 amino acids, and (4) revealing consistent correlations through the Structural Similarity Index Measure (SSIM) in protein structures of varying lengths. These models represent a significant step towards robust deep generative models that can explore the highly diverse set of protein molecule structures that support various cellular activities. Moreover, they provide a valuable source of data augmentation for critical applications such as molecular structure prediction, inpainting, dynamics, and drug design. Data, code, and trained models are available at https://github.com/mena01/Generating-Tertiary-Protein-Structures-Resembling-Nature-using-Advanced-WGAN.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_117-Generating_Nature_Resembling_Tertiary_Protein_Structures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Precision in Lung Cancer Diagnosis Through Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408116</link>
        <id>10.14569/IJACSA.2023.01408116</id>
        <doi>10.14569/IJACSA.2023.01408116</doi>
        <lastModDate>2023-08-30T09:54:14.8970000+00:00</lastModDate>
        
        <creator>Nasareenbanu Devihosur</creator>
        
        <creator>Ravi Kumar M G</creator>
        
        <subject>Lung cancer diagnosis; machine learning; precision medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Lung cancer continues to pose a significant threat worldwide, leading to high cancer-related mortality rates and underscoring the urgent need for improved early diagnosis approaches. Despite the valuable technology currently employed for lung cancer diagnosis, some limitations hinder timely and accurate diagnoses, resulting in delayed treatment and unfavorable outcomes. In this research, we propose a comprehensive methodology that harnesses the power of various machine learning algorithms, including Logistic Regression, Gradient Boost, LGBM, and Support Vector Machine, to address these challenges and improve patient care. These algorithms have been thoughtfully chosen for their ability to effectively handle the complexity of lung cancer data and enable accurate classification and prediction of cases. By leveraging these advanced techniques, our methodology aims to enhance the efficiency and accuracy of lung cancer diagnosis, enabling earlier interventions and tailored treatment plans that can significantly impact patient outcomes and quality of life. Through rigorous assessments conducted on benchmark datasets and real-world cases, our study has yielded promising results. Random Forest achieved an impressive accuracy of 97%, showcasing its ability to effectively capture complex patterns and features within the lung cancer dataset. By pushing the boundaries of medical innovation and precision medicine, we envision a future where machine learning algorithms seamlessly integrate into healthcare systems, leading to personalized and efficient care for lung cancer patients.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_116-Enhancing_Precision_in_Lung_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating Learned Depth Perception Into Monocular Visual Odometry to Improve Scale Recovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408115</link>
        <id>10.14569/IJACSA.2023.01408115</id>
        <doi>10.14569/IJACSA.2023.01408115</doi>
        <lastModDate>2023-08-30T09:54:14.8800000+00:00</lastModDate>
        
        <creator>Hamza Mailka</creator>
        
        <creator>Mohamed Abouzahir</creator>
        
        <creator>Mustapha Ramzi</creator>
        
        <subject>Visual odometry; scale recovery; depth estimation; DPT model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>A growing interest in autonomous driving has led to a comprehensive study of visual odometry (VO). It has been well studied how VO can estimate the pose of moving objects by examining the images taken from onboard cameras. In the last decade, it has been proposed that supervised deep learning can be employed to estimate depth maps and visual odometry (VO). In this paper, we propose a DPT (Dense Prediction Transformer)-based monocular visual odometry method for scale estimation. Scale-drift problems are common in traditional monocular systems and in recent deep learning studies. In order to recover the scale, it is imperative that depth estimation be accurate. The DPT framework, which bases its computation for dense prediction tasks on vision transformers instead of convolutional networks, serves as an accurate model for estimating depth maps. Scale recovery and depth refinement are accomplished iteratively. This allows our approach to simultaneously improve the depth estimate while eradicating scale drift. The depth map estimated using the DPT model is accurate enough to achieve the best possible efficiency on a VO benchmark, eliminating the scale-drift issue.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_115-Incorporating_Learned_Depth_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Quality Medical Drug Data Towards Meaningful Data using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408114</link>
        <id>10.14569/IJACSA.2023.01408114</id>
        <doi>10.14569/IJACSA.2023.01408114</doi>
        <lastModDate>2023-08-30T09:54:14.8470000+00:00</lastModDate>
        
        <creator>Suleyman Al-Showarah</creator>
        
        <creator>Abubaker Al-Taie</creator>
        
        <creator>Hamzeh Eyal Salman</creator>
        
        <creator>Wael Alzyadat</creator>
        
        <creator>Mohannad Alkhalaileh</creator>
        
        <subject>Classification; alternative drugs; medical; decision tree; support vector machine; naive bayes; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This research aims to improve the process of finding alternative drugs by utilizing artificial intelligence algorithms. It is not an easy task for human beings to classify the drugs manually, as this requires much longer time and more effort than doing it using classifiers. The study focuses on predicting high-quality medical drug data by considering ingredients, dosage forms, and strengths as features. Two datasets were generated from the original drug dataset, and four machine learning classifiers were applied to these datasets: Random Forest, Support Vector Machine, Naive Bayes, and Decision Tree. The classification performance was evaluated under three different scenarios, which varied the ratio of the training and test data for both datasets, as follows: (i) 80% (training) and 20% (test dataset), (ii) 70% (training) and 30% (test dataset), and (iii) 50% (training) and 50% (test dataset). The results indicated that the Decision Tree, Naive Bayes, and Random Forest classifiers showed superior performance in terms of classification accuracy, with over 90% accuracy achieved in all scenarios. The results also showed that there was no significant difference between the results of the two datasets. The findings of this study have implications for streamlining the process of identifying alternative drugs.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_114-Predicting_Quality_Medical_Drug_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tomato Disease Recognition: Advancing Accuracy Through Xception and Bilinear Pooling Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408113</link>
        <id>10.14569/IJACSA.2023.01408113</id>
        <doi>10.14569/IJACSA.2023.01408113</doi>
        <lastModDate>2023-08-30T09:54:14.8330000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <subject>Tomato disease recognition; Xception; Bilinear pooling; convolutional neural networks; disease management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Accurate detection and classification of tomato diseases are essential for effective disease management and maintaining agricultural productivity. This paper presents a novel approach to tomato disease recognition that combines Xception, a pre-trained convolutional neural network (CNN), with bilinear pooling to advance accuracy. The proposed model consists of two parallel Xception-based CNNs that independently process input tomato images. Bilinear pooling is applied to combine the feature maps generated by the two CNNs, capturing intricate interactions between different image regions. This fusion of Xception and bilinear pooling results in a comprehensive representation of tomato diseases, leading to improved recognition performance. Extensive experiments were conducted on a diverse dataset of annotated tomato disease images to evaluate the effectiveness of the suggested approach. The model achieved a remarkable test accuracy of 98.7%, surpassing conventional CNN approaches. This high accuracy demonstrates the efficacy of the integrated Xception and bilinear pooling model in accurately identifying and classifying tomato diseases. The implications of this research are significant for automated tomato disease recognition systems, enabling timely and precise disease diagnosis. The model’s exceptional accuracy empowers farmers and agricultural practitioners to implement targeted disease management strategies, minimizing crop losses and optimizing yields.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_113-Tomato_Disease_Recognition_Advancing_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Convolutional Neural Network Architecture for Pollen-Bearing Honeybee Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408112</link>
        <id>10.14569/IJACSA.2023.01408112</id>
        <doi>10.14569/IJACSA.2023.01408112</doi>
        <lastModDate>2023-08-30T09:54:14.8170000+00:00</lastModDate>
        
        <creator>Thi-Nhung Le</creator>
        
        <creator>Thi-Minh-Thuy Le</creator>
        
        <creator>Thi-Thu-Hong Phan</creator>
        
        <creator>Huu-Du Nguyen</creator>
        
        <creator>Thi-Lan Le</creator>
        
        <subject>Pollen-bearing honeybee; image classification; convolutional neural network; honeybee monitoring system; Pollen dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Monitoring the pollen foraging behavior of honeybees is an important task that is beneficial to beekeepers, allowing them to understand the health status of their honeybee colonies. To perform this task, monitoring systems should have the ability to automatically recognize images of pollen-bearing honeybees extracted from videos recorded at the beehive entrance. In this paper, a novel convolutional neural network architecture is proposed for recognizing pollen-bearing and non-pollen-bearing honeybees from their images. The performance of the proposed model is illustrated based on a real dataset and the obtained results show that it performs better than some other state-of-the-art deep learning architectures like VGG16, VGG19, or Resnet50 in terms of both accuracy and execution time. Thus, the proposed model can be considered an effective algorithm for designing automatic honeybee colony monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_112-A_Novel_Convolutional_Neural_Network_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low-Cost Wireless Sensor System for Power Quality Management in Single-Phase Domestic Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408111</link>
        <id>10.14569/IJACSA.2023.01408111</id>
        <doi>10.14569/IJACSA.2023.01408111</doi>
        <lastModDate>2023-08-30T09:54:14.8000000+00:00</lastModDate>
        
        <creator>Cristian A. Aldana B</creator>
        
        <creator>Edison F. Montenegro A</creator>
        
        <subject>Cost-effective; current measurement; energy consumption; ESP32 microcontroller; non-invasive; power quality; remote monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This article presents a novel low-cost hardware and software tool for monitoring power quality in single-phase domestic networks using an ESP32 microcontroller. The proposed embedded system allows remote evaluation and monitoring of electrical energy consumption behavior through non-invasive current measurement parameters. Based on these measurements, power, power factor, total harmonic distortion, and energy consumption are calculated. The collected data is then published and visualized on a free and open IoT application in the cloud. The tool was designed to be both cost-effective and high-quality. During laboratory testing, the equipment demonstrated a high level of precision, as compared to a network analyzer. Additionally, the design utilized the smallest number of components possible, while still maintaining quality performance. The ESP32 microcontroller enables wireless data transmission, making remote monitoring and management of energy consumption more accessible and efficient. Moreover, the non-invasive measurement method makes the tool safer and more user-friendly, as it does not require any interruption of power supply. The proposed tool can help identify and address power quality issues that arise in domestic networks, which can have a significant impact on energy consumption and costs. The IoT application enables users to access their power consumption data remotely, facilitating better energy management and reducing wastage.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_111-A_Low_Cost_Wireless_Sensor_System_for_Power_Quality_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach of Test Case Generation with Software Requirement Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408109</link>
        <id>10.14569/IJACSA.2023.01408109</id>
        <doi>10.14569/IJACSA.2023.01408109</doi>
        <lastModDate>2023-08-30T09:54:14.7700000+00:00</lastModDate>
        
        <creator>Adisak Intana</creator>
        
        <creator>Kuljaree Tantayakul</creator>
        
        <creator>Kanjana Laosen</creator>
        
        <creator>Suraiya Charoenreh</creator>
        
        <subject>Software testing; software requirement specification; ontology; test case; equivalence and classification tree method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Software testing plays an essential role in the software development process, since it helps ensure that the developed software product is free from errors and meets the defined specifications before delivery. As software specifications are mostly written in natural language, this may lead to ambiguity and misunderstanding by software developers and result in incorrect test cases being generated from an unclear specification. Therefore, to solve this problem, this paper presents a novel hybrid approach, Software Requirement Ontologies based Test Case Generation (ReqOntoTestGen), to enhance the reliability of existing software testing techniques. This approach enables a framework that combines ontology engineering with software test case generation approaches. Controlled Natural Language (CNL) provided by the ROO (Rabbit to OWL Ontologies Authoring) tool is used by the framework to build the software requirement ontology from unstructured functional requirements. This eliminates the inconsistency and ambiguity of requirements before test case generation. The OWL ontology resulting from ontology engineering is then transformed into an XML data dictionary file. The Combination of Equivalence and Classification Tree Method (CCTM) is used to generate test cases from this XML file with the decision tree. This allows us to reduce the redundancy of test cases and increase testing coverage. The proposed approach is demonstrated with the developed prototype tool. The contribution of the tool is confirmed, as expected, by the validation and evaluation results with two real case studies, a Library Management System (LMS) and a Kidney Failure Diagnosis (KFD) subsystem.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_109-An_Approach_of_Test_Case_Generation_with_Software_Requirement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eligible Personal Loan Applicant Selection using Federated Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408110</link>
        <id>10.14569/IJACSA.2023.01408110</id>
        <doi>10.14569/IJACSA.2023.01408110</doi>
        <lastModDate>2023-08-30T09:54:14.7700000+00:00</lastModDate>
        
        <creator>Mehrin Anannya</creator>
        
        <creator>Most. Shahera Khatun</creator>
        
        <creator>Md. Biplob Hosen</creator>
        
        <creator>Sabbir Ahmed</creator>
        
        <creator>Md. Farhad Hossain</creator>
        
        <creator>M. Shamim Kaiser</creator>
        
        <subject>Loan eligibility prediction; machine learning; random forest; K-Nearest Neighbour; Adaboost; extreme gradient boost; artificial neural network; federated learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Loan sanctioning develops a paramount financial dependency between banks and customers. Banks assess bundles of documents from individuals or business entities seeking loans, depending on the loan type, since only reliable candidates are chosen for a loan. This reliability materializes after assessing previous transaction history, financial stability, and other diverse criteria to justify the bank's reliance on an applicant. To reduce the workload of this laborious assessment, in this research, a machine learning (ML) based web application has been developed to predict eligible candidates considering the multiple criteria that banks generally use in their calculation; in short, this can be termed loan eligibility prediction. Data from prior customers, who were authorized for loans based on a set of criteria, are used in this research. As ML techniques, Random Forest, K-Nearest Neighbour, Adaboost, Extreme Gradient Boost Classifier, and Artificial Neural Network algorithms are utilized for training and testing on the dataset. A federated learning approach is employed to ensure the privacy of loan applicants. Performance analysis reveals that the Random Forest classifier provided the best results, with an accuracy of 91%. Based on this prediction, the web application can decide whether a customer's requested loan should be accepted or rejected. The application was developed using NodeJs, ReactJS, REST API, HTML, and CSS. In future work, parameter tuning can further improve the performance of the web application, along with a usable user interface ensuring global accessibility for various types of users.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_110-Eligible_Personal_Loan_Applicant_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Sentence Embeddings using BERT for Textual Entailment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408108</link>
        <id>10.14569/IJACSA.2023.01408108</id>
        <doi>10.14569/IJACSA.2023.01408108</doi>
        <lastModDate>2023-08-30T09:54:14.7530000+00:00</lastModDate>
        
        <creator>Mohammed Alsuhaibani</creator>
        
        <subject>Textual entailment; deep learning; entailment detection; BERT; text processing; natural language processing systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This study directly and thoroughly investigates the practicalities of utilizing sentence embeddings, derived from the foundations of deep learning, for textual entailment recognition, with a specific emphasis on the robust BERT model. As a cornerstone of our research, we incorporated the Stanford Natural Language Inference (SNLI) dataset. Our study emphasizes a meticulous analysis of BERT’s variable layers to ascertain the optimal layer for generating sentence embeddings that can effectively identify entailment. Our approach deviates from traditional methodologies, as we base our evaluation of entailment on the direct and simple comparison of sentence norms, subsequently highlighting the geometrical attributes of the embeddings. Experimental results revealed that the L2 norm of sentence embeddings, drawn specifically from BERT’s 7th layer, emerged superior in entailment detection compared to other setups.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_108-Deep_Learning_based_Sentence_Embeddings_using_BERT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing a Blockchain, Smart Contract, and NFT Framework for Waste Management Systems in Emerging Economies: An Investigation in Vietnam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408107</link>
        <id>10.14569/IJACSA.2023.01408107</id>
        <doi>10.14569/IJACSA.2023.01408107</doi>
        <lastModDate>2023-08-30T09:54:14.7400000+00:00</lastModDate>
        
        <creator>Khiem H. G</creator>
        
        <creator>Khanh H. V</creator>
        
        <creator>Huong H. L</creator>
        
        <creator>Quy T. L</creator>
        
        <creator>Phuc T. N.</creator>
        
        <creator>Ngan N. T. K.</creator>
        
        <creator>Triet M. N.</creator>
        
        <creator>Bang L. K.</creator>
        
        <creator>Trong D. P. N.</creator>
        
        <creator>Hieu M. D.</creator>
        
        <creator>Bao Q. T.</creator>
        
        <creator>Khoa D. T.</creator>
        
        <subject>Vietnam waste management; blockchain; smart contracts; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The management and disposal of various types of waste (including industrial, domestic, and medical waste) are worldwide issues, which are particularly critical in developing nations such as Vietnam. Given the extensive population and inadequate waste treatment facilities, addressing this challenge is of utmost importance. Predominantly, the majority of such waste is not processed for composting but is instead subjected to elimination, thereby posing severe threats to public health and environmental safety. Furthermore, insufficient standards in existing waste treatment plants contribute to the rising volume of environmental waste. Emphasizing the process of waste recycling instead of total elimination is an alternate strategy that needs to be considered seriously. However, the implementation of waste segregation in Vietnam is still not sufficiently prioritized by individuals or organizations. This study presents a unique model for waste segregation and treatment, leveraging the capacities of blockchain technology and smart contracts. We also scrutinize the adherence or non-compliance to waste segregation mandates as a mechanism to incentivize or penalize individuals and organizations, respectively. To address this, we employ Non-Fungible Token (NFT) technology for the storage of compliance proofs and associated metadata. The paper’s primary contributions can be delineated into four components: i) presentation of a waste segregation and treatment model in Vietnam, utilizing Blockchain technology and Smart Contracts; ii) application of NFTs for storage of compliance-related content and its metadata; iii) offering a proof-of-concept implementation rooted in the Ethereum platform; and iv) executing the proposed model on four EVM and ERC721 compliant platforms, namely BNB Smart Chain, Fantom, Polygon, and Celo, to identify the most suitable platform for our proposition.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_107-Implementing_a_Blockchain_Smart_Contract_and_NFT_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Dual Confusion and Diffusion Approach for Grey Image Encryption using Multiple Chaotic Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408106</link>
        <id>10.14569/IJACSA.2023.01408106</id>
        <doi>10.14569/IJACSA.2023.01408106</doi>
        <lastModDate>2023-08-30T09:54:14.7230000+00:00</lastModDate>
        
        <creator>S Phani Praveen</creator>
        
        <creator>V Sathiya Suntharam</creator>
        
        <creator>S Ravi</creator>
        
        <creator>U. Harita</creator>
        
        <creator>Venkata Nagaraju Thatha</creator>
        
        <creator>D Swapna</creator>
        
        <subject>Image encryption; dual confusion and diffusion; chaotic maps; grey images; robust encryption; key generation; image analysis; performance evaluation; histogram analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the exponential growth of the internet and social media, images have become a predominant form of information transmission, including confidential data. Ensuring the proper security of these images has become crucial in today’s digital age. This research study proposes a unique strategy for solving this demand by presenting a dual confusion and diffusion technique for encrypting gray-scale pictures. This method is presented as an innovative means of meeting this need. To improve the effectiveness of the encryption process, the encryption method uses several chaotic maps, including the logistic map, the tent map, and the Lorenz attractor. Python is used for the implementation of the suggested approach. Furthermore, a thorough assessment of the encryption mechanism is carried out to determine its efficacy and resilience. By employing the combined strength of chaotic maps and dual confusion and diffusion techniques, the proposed method aims to provide a high level of security for confidential image transmission. The experimental results demonstrate the algorithm’s effectiveness in terms of encryption speed, security, and resistance against common attacks. The encrypted images exhibit properties such as randomness, key sensitivity, and resilience against statistical analysis and differential attacks. Moreover, the proposed method maintains a reasonable computational efficiency, and it is compatible with real-time applications. This study makes a contribution to the growing area of picture encryption by presenting an original and effective encryption method that overcomes the shortcomings of previously used approaches. Future work can explore additional security features and extend the proposed approach to encrypt other forms of multimedia data.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_106-A_Novel_Dual_Confusion_and_Diffusion_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Blockchain, Smart Contracts, and NFTs for Streamlining Medical Waste Management: An Examination of the Vietnamese Healthcare Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408105</link>
        <id>10.14569/IJACSA.2023.01408105</id>
        <doi>10.14569/IJACSA.2023.01408105</doi>
        <lastModDate>2023-08-30T09:54:14.7070000+00:00</lastModDate>
        
        <creator>Triet M. N</creator>
        
        <creator>Khanh H. V</creator>
        
        <creator>Huong H. L</creator>
        
        <creator>Khiem H. G</creator>
        
        <creator>Phuc T. N.</creator>
        
        <creator>Ngan N. T. K.</creator>
        
        <creator>Quy T. L.</creator>
        
        <creator>Bang L. K.</creator>
        
        <creator>Trong D. P. N.</creator>
        
        <creator>Hieu M. D.</creator>
        
        <creator>Bao Q. T.</creator>
        
        <creator>Khoa D. T.</creator>
        
        <creator>Anh T. N.</creator>
        
        <subject>Medical waste management; blockchain; smart contracts; NFTs; ethereum; fantom; polygon; binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Medical waste is deemed hazardous due to its potential health implications and the predominant practice of discarding it post six months of utilization. Furthermore, the reusable proportion of such waste is minimal. The implications of this scenario were brought to the fore during the COVID-19 pandemic when sub-optimal medical waste management was identified as a factor exacerbating the spread of the virus worldwide. The predicament is particularly grave in developing nations, such as Vietnam, where the underdeveloped state of medical infrastructure renders efficient waste management a daunting task. The waste management challenge also stems from the significant roles played by different stakeholders (healthcare workers and patients confined to isolation wards), whose actions directly influence waste classification, impact the waste treatment process, and indirectly contribute to environmental pollution. Given that waste management involves a chain of activities requiring the coordinated efforts of medical, transportation, and waste treatment personnel, inaccuracies in the initial stages, such as waste sorting, can negatively impact subsequent processes. In light of these issues, our study puts forth a unique model aimed at enhancing waste classification and management practices in Vietnam. This model innovatively integrates Blockchain technology, smart contracts, and non-fungible tokens (NFTs) with the intent to foster an increased individual and collective consciousness towards effective waste classification within healthcare settings.
Our research is notable for its four-fold contribution: (a) suggesting a unique mechanism based on blockchain technology and smart contracts, designed specifically to improve medical waste classification and treatment in Vietnam; (b) introducing a model for instituting rewards or penalties based on NFT technology to influence behaviors of individuals and organizations; (c) demonstrating the feasibility of the proposed model through a proof-of-concept; (d) executing the proof-of-concept on four prominent platforms that support ERC721 - NFT of Ethereum and EVM for executing smart contracts programmed in the Solidity language, namely BNB Smart Chain, Fantom, Polygon, and Celo.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_105-Leveraging_Blockchain_Smart_Contracts_and_NFTs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decentralized Management of Medical Test Results Utilizing Blockchain, Smart Contracts, and NFTs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408104</link>
        <id>10.14569/IJACSA.2023.01408104</id>
        <doi>10.14569/IJACSA.2023.01408104</doi>
        <lastModDate>2023-08-30T09:54:14.6930000+00:00</lastModDate>
        
        <creator>Quy T. L</creator>
        
        <creator>Khanh H. V</creator>
        
        <creator>Huong H. L</creator>
        
        <creator>Khiem H. G</creator>
        
        <creator>Phuc T. N</creator>
        
        <creator>Ngan N. T. K</creator>
        
        <creator>Triet M. N</creator>
        
        <creator>Bang L. K</creator>
        
        <creator>Trong D. P. N.</creator>
        
        <creator>Hieu M. D.</creator>
        
        <creator>Bao Q. T.</creator>
        
        <creator>Khoa D. T.</creator>
        
        <subject>Medical test result; blockchain; smart contract; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In today’s medical landscape, the effective management and availability of diagnostic data, including current and historical medical tests, play a critical role in informing physicians’ therapeutic decisions. However, the conventional centralized storage system presents a significant impediment, particularly when patients switch healthcare providers. Given the sensitive nature of medical data, retrieving this information from a different healthcare facility can be fraught with challenges. While decentralized storage models using blockchain and smart contracts have been suggested as potential solutions, these methodologies often expose sensitive personal information due to the inherently open nature of data on the blockchain. Addressing these challenges, we present an innovative approach integrating Non-Fungible Tokens (NFTs) to facilitate the creation and sharing of medical document sets based on test results within a medical environment. This novel approach effectively balances data accessibility and security, introducing four key contributions: (a) We introduce a mechanism for sharing medical test results while preserving data privacy. (b) We offer a model for generating certified, NFT-based document sets that encapsulate these results. (c) We provide a proof-of-concept reflecting the proposed model’s functionality. (d) We deploy this proof-of-concept across four EVM-supported platforms—BNB Smart Chain, Fantom, Polygon, and Celo—to identify the most compatible platform for our proposed model. Our work underscores the potential of blockchain, smart contracts, and NFTs to revolutionize medical data management, demonstrating a practical solution to the challenges posed by centralized storage systems.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_104-Decentralized_Management_of_Medical_Test_Results.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Analysis of Job Market Demands using Large Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408103</link>
        <id>10.14569/IJACSA.2023.01408103</id>
        <doi>10.14569/IJACSA.2023.01408103</doi>
        <lastModDate>2023-08-30T09:54:14.6600000+00:00</lastModDate>
        
        <creator>Myo Thida</creator>
        
        <subject>ChatGPT; labour market analysis; skills identification; online job adverts; skills demand</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This paper presents a comprehensive analysis of labor market demands for Myanmar workers in Japan and Thailand, focusing on opportunities for individuals without higher education degrees. Leveraging ChatGPT’s text classification and summarization capabilities, we extracted vital insights from extensive job advertisements and social media groups. The dataset comprises 152 job advertisements from Thailand and 30 from Japan, collected in 2023. Our research provides a valuable snapshot of skill demands and job opportunities, offering insights for informed decision-making by both job seekers and international non-governmental organizations. The innovative approach of using ChatGPT highlights its efficacy in understanding labor market dynamics. These findings serve as a foundation for tailored interventions to bridge employment challenges faced by marginalized Myanmar youths.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_103-Automated_Analysis_of_Job_Market_Demands.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Enterprise Supply Chain Anti-Disturbance Management Based on Improved Particle Swarm Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408102</link>
        <id>10.14569/IJACSA.2023.01408102</id>
        <doi>10.14569/IJACSA.2023.01408102</doi>
        <lastModDate>2023-08-30T09:54:14.6600000+00:00</lastModDate>
        
        <creator>Tongqing Dai</creator>
        
        <subject>Supply chain; particle swarm optimization algorithm; genetic algorithm; inverse production behaviour; neighbourhood structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>An effective, high-caliber supply chain boosts customer satisfaction as well as sales and earnings, increasing a company&#39;s competitiveness in the market. Standard supply chain management techniques, however, leave the supply chain with weak stability because they have little ability to withstand manufacturers&#39; adverse production behaviour. To solve this issue, an enterprise supply chain anti-disturbance management model is built using the study&#39;s proposed particle swarm optimisation technique, which is based on a genetic algorithm with a stochastic neighbourhood structure. In a performance comparison test, the suggested technique outperformed the two comparison algorithms, reaching a stable particle swarm fitness value of 0.016 after 800 iterations with the fastest convergence. The proposed model was then empirically examined, and the results revealed that the production team using the model completed the same volume of orders in 32 days while making $460,000 more in profit. With scores of 4.5, 4.5, 4.3, 4.3, 4.2, and 4.2, respectively, the team also had the lowest values across the six forms of employee anti-production conduct, outperforming the comparative management style. In summary, the study proposes an anti-disturbance management model for enterprise supply chains that can rationalise the scheduling of manufacturers&#39; production behaviour and thus improve the stability of the supply chain.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_102-Research_on_Enterprise_Supply_Chain_Anti_Disturbance_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Blockchain-enabled Cloud Computing: A Taxonomy of Security Issues and Recent Advances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408101</link>
        <id>10.14569/IJACSA.2023.01408101</id>
        <doi>10.14569/IJACSA.2023.01408101</doi>
        <lastModDate>2023-08-30T09:54:14.6300000+00:00</lastModDate>
        
        <creator>Shengli LIU</creator>
        
        <subject>Cloud computing; security; blockchain; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Blockchain technology offers a promising solution for addressing performance and security challenges within distributed systems. This paper presents a comprehensive taxonomy of security issues in cloud computing and explores recent advances in utilizing blockchain to enhance security and efficiency in this domain. We employ a systematic literature review approach to analyze various blockchain-enabled solutions for cloud computing. Our findings reveal that blockchain&#39;s decentralized and immutable nature empowers cloud computing services to establish secure and private data interactions. By leveraging blockchain&#39;s consensus mechanism, we demonstrate the feasibility of creating a robust platform for authenticating transactions involving digital assets. Through cryptographic methods, blocks of transactions are securely linked, ensuring data integrity. This paper provides a roadmap for understanding security concerns in cloud computing and offers insights into the potential of blockchain technology. We conclude by outlining future research directions that can drive innovation in this exciting intersection of fields.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_101-Towards_Secure_Blockchain_enabled_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Black Widow Optimization Algorithm for Virtual Machines Migration in the Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01408100</link>
        <id>10.14569/IJACSA.2023.01408100</id>
        <doi>10.14569/IJACSA.2023.01408100</doi>
        <lastModDate>2023-08-30T09:54:14.6130000+00:00</lastModDate>
        
        <creator>Chuang Zhou</creator>
        
        <subject>Cloud computing; migration; energy consumption; optimization; black widow algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Cloud data centers use virtualization technology to manage computing resources. Using a group of connected Virtual Machines (VMs), users can process data efficiently and effectively; virtualization improves resource utilization, thereby reducing hardware requirements. Recovering affected services requires VM-based infrastructure overhaul schemes, and dedicated routing is also desirable to improve the reliability of Domain Controller (DC) services. Migrating a VM experiencing a node failure makes maintaining reliability challenging, and the selection of VMs strongly influences the number of VM migrations: choosing one or more suitable VMs for migration reduces the servers&#39; workload. This paper presents an energy-aware VM migration method for cloud computing based on the Black Widow Optimization (BWO) algorithm. The proposed algorithm was implemented and evaluated in Java. We then compared our results against existing methodologies in terms of resource availability, energy consumption, load, and migration cost.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_100-Black_Widow_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Decentralized AI IoT System Based on Back Propagation Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140899</link>
        <id>10.14569/IJACSA.2023.0140899</id>
        <doi>10.14569/IJACSA.2023.0140899</doi>
        <lastModDate>2023-08-30T09:54:14.5830000+00:00</lastModDate>
        
        <creator>Xiaomei Zhang</creator>
        
        <subject>BP neural networks; artificial intelligence; IoT systems; fog devices; Docker containers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In the Internet of Things (IoT) era, when user needs are continually evolving, the coupling of AI and IoT technologies is unavoidable. This study proposes the design of a decentralized AI IoT system based on a Back Propagation (BP) neural network model: fog devices are introduced into the IoT system and given the function of the hidden-layer neurons of a BP neural network, and Docker containers are used to realize the mapping between devices and neurons in order to improve the quality of service of IoT devices. The testing data revealed that, at various data transfer intervals, the average transmission rate between the fog device and the sensing device was 8.265 Mbps, and the device&#39;s transmission rate could satisfy user demand. When the data transmission interval was 20 s, the network data transmission rate was greater than 8.5 Mbps and did not vary much as the number of data transmissions rose. The research demonstrates that the network performance of the decentralized AI IoT system, which is based on a back propagation neural network model, can match user usage requirements and has good stability.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_99-Design_of_A_Decentralized_AI_IoT_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction and Application of Automatic Scoring Index System for College English Multimedia Teaching Based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140898</link>
        <id>10.14569/IJACSA.2023.0140898</id>
        <doi>10.14569/IJACSA.2023.0140898</doi>
        <lastModDate>2023-08-30T09:54:14.5670000+00:00</lastModDate>
        
        <creator>Hui Dong</creator>
        
        <creator>Ping Wei</creator>
        
        <subject>Cognition of multimedia teaching in universities; scoring index; neural network; teaching system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the continuous development of interactive multimedia, multimedia is increasingly integrated into college English teaching, providing advanced teaching equipment and resources. While enriching the teaching environment, it also brings new challenges to teaching ideas and strategies. Although the proportion of independent, selective learning among college students has increased, classroom teaching still constitutes the most essential unit of educational activity, and classroom evaluation is an important, institutionalized means of improving the quality of university teaching. This paper analyzes the elements of multimedia classroom teaching and constructs an evaluation index system for English multimedia teaching. An improved neural network model is used to achieve automatic teaching grading, to acquire knowledge through environmental learning and improve its own performance, and to evaluate accurately and effectively the mathematical model of the English multimedia teaching evaluation system established with neural network theory. The results of automatic scoring of multimedia English teaching in colleges and universities are compared, and simulation software is used to verify the established neural network evaluation system. The simulation results show that the model fits the test data of English classroom teaching better than traditional methods and has a better prediction effect: the prediction error rate was below 2% for all 15 English teachers, and below 1% for 10 of them.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_98-Construction_and_Application_of_Automatic_Scoring_Index_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Application of Multi-Objective Algorithm Based on Tag Eigenvalues in e-Commerce Supply Chain Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140897</link>
        <id>10.14569/IJACSA.2023.0140897</id>
        <doi>10.14569/IJACSA.2023.0140897</doi>
        <lastModDate>2023-08-30T09:54:14.5370000+00:00</lastModDate>
        
        <creator>Man Huang</creator>
        
        <creator>Jie Lian</creator>
        
        <subject>Label features; multi-objective algorithm; sparse set; e-commerce supply chain; multi target regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the continuous development of Internet technology, the scale of Internet data is increasing day by day, and business forecasting has become more and more important in corporate business decision-making. Therefore, to improve the accuracy of Multi Target Regression in actual e-commerce supply chain forecasting, the research optimizes the method of constructing a label-specific feature for each target, obtaining the Multi-Target Regression via Sparse Integration and Label-Specific Features algorithm, and experimentally analyzes the algorithm&#39;s performance and its application effect in an actual e-commerce supply chain. The experimental results show that the average Relative Root Mean Square Error of the research algorithm is the lowest in most datasets, with a minimum of 0.058 in the experiments on prediction and label-specific features; in the experiments on the effect and flexibility of sparse sets, the lowest average Relative Root Mean Square Error of the research algorithm was likewise 0.058, and its average rank value was the smallest. In addition, the average Relative Root Mean Square Error of the research algorithm is the smallest under the target variable Y2 in the Enb data, with a value of 0.075. In the actual e-commerce supply chain forecast, the research algorithm achieved the highest score of 0.097. Overall, the research algorithm has a better forecasting effect, higher performance, and better practicality, and can perform well in actual e-commerce supply chain forecasting.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_97-Research_on_the_Application_of_Multi_Objective_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explore Chinese Energy Commodity Prices in Financial Markets using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140896</link>
        <id>10.14569/IJACSA.2023.0140896</id>
        <doi>10.14569/IJACSA.2023.0140896</doi>
        <lastModDate>2023-08-30T09:54:14.5200000+00:00</lastModDate>
        
        <creator>Yu Cui</creator>
        
        <creator>Tianhao Ma</creator>
        
        <subject>Chinese commodity price; exchange rate; stock markets; machine learning; international energy trade; global economic system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This study simultaneously investigates the causality and dynamic links between international energy trade and economic price changes, especially in the Chinese commodity market. To obtain a causal route, it attempts to identify the linear and nonlinear causality among commodity prices, equities, and the exchange rate in China and the United States (US). Here, we adapt multilayer perceptron networks to obtain a nonlinear autoregressive model for causality discovery. Comparing against methods that do not use networks, this study shows that the nonlinear causality discovery method using machine learning performs best on simulated data. Subsequently, we apply that causality analysis to actual data: we combine the causal routes, particularly from the machine learning methodology, to investigate the existence of direct or indirect causal relationships among Chinese commodity prices, long-term interest rates, stock indices, and exchange rates in China and the US. The steady-state accuracy of cmlpgranger is 99%. In most cases, the order of judgment accuracy of causality is cmlpgranger &gt; HSICLasso &gt; ARD &gt; LinSVR. The results show that energy trade is an element of the global economic system, and that the Chinese commodity price of energy has an interactive relationship with the Chinese commodity price of agricultural products. The significant transmission is from the commodity price of energy to equities, then to the exchange rate, and, finally, to the commodity price of agricultural products.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_96-Explore_Chinese_Energy_Commodity_Prices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Image Feature Recognition Method for Mobile Robots Based on Machine Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140895</link>
        <id>10.14569/IJACSA.2023.0140895</id>
        <doi>10.14569/IJACSA.2023.0140895</doi>
        <lastModDate>2023-08-30T09:54:14.4900000+00:00</lastModDate>
        
        <creator>Minghe Hu</creator>
        
        <creator>Jiancang He</creator>
        
        <subject>Machine vision; mobile robots; image recognition; convolutional neural network; K-means algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the continuous advancement of machine vision and computer technology, mobile robots with vision systems have received widespread attention in fields such as industry, agriculture, and services. However, current methods for processing the visual images of mobile robots struggle to meet the requirements of practical applications, suffering from low efficiency and low accuracy. Therefore, spatial information is first integrated into the K-means algorithm and image spatial structure constraints are introduced for visual image segmentation. A densely connected network is then added to the convolutional neural network structure, and this structure is combined with a bidirectional long short-term memory network to achieve visual image feature recognition. The results show that the improved K-means algorithm has a maximum recall rate of 97.35% on the Berkeley image segmentation dataset, with a maximum Rand index of 86.18%. Combined with the proposed improved convolutional neural network, the highest feature recognition rate across five scenes (mining, risk elimination, agriculture, factory, and building) is 96.1%, and the lowest error rate is 1.2%. The method possesses a high degree of recognition accuracy, can be effectively applied to visual feature recognition on mobile robots, and provides a novel reference point for visual image processing on mobile robots.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_95-Visual_Image_Feature_Recognition_Method_for_Mobile_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SLAM Mapping Method of Laser Radar for Tobacco Production Line Inspection Robot Based on Improved RBPF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140894</link>
        <id>10.14569/IJACSA.2023.0140894</id>
        <doi>10.14569/IJACSA.2023.0140894</doi>
        <lastModDate>2023-08-30T09:54:14.4730000+00:00</lastModDate>
        
        <creator>Zhiyuan Liang</creator>
        
        <creator>Pengtao He</creator>
        
        <creator>Wenbin Liang</creator>
        
        <creator>Xiaolei Zhao</creator>
        
        <creator>Bin Wei</creator>
        
        <subject>Improved RBPF; tobacco production line; patrol robot; LiDAR; slam mapping; drosophila optimization strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The study focuses on the laser radar SLAM mapping method employed by the tobacco production line inspection robot, utilizing an enhanced RBPF approach. It involves constructing a well-structured two-dimensional map of the inspection environment so that the robot can seamlessly execute its inspection tasks along the tobacco production line. Wheel odometer and IMU data are fused using the extended Kalman filter algorithm, and the resulting fused odometer motion model and LiDAR observation model jointly serve as the hybrid proposal distribution. Within this hybrid proposal distribution, the iterative closest point method is used to find sampling particles in the high-probability area, the matching score during particle scan matching is used as the fitness value, and the Drosophila (fruit fly) optimization strategy is used to adjust the particle distribution. The weight of each optimized particle is then computed, the particles are adaptively resampled according to these weights, and the robot&#39;s inspection map is updated according to its updated pose and observation information. The experimental results show that this method can realize laser radar SLAM mapping for the tobacco production line inspection robot and can build a near-ideal two-dimensional map of the inspection environment with fewer particles. Applied in practical work, it can achieve a very good working effect.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_94-SLAM_Mapping_Method_of_Laser_Radar_for_Tobacco_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Strategy and Application of Headwear with National Characteristics Based on Information Visualization Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140893</link>
        <id>10.14569/IJACSA.2023.0140893</id>
        <doi>10.14569/IJACSA.2023.0140893</doi>
        <lastModDate>2023-08-30T09:54:14.4430000+00:00</lastModDate>
        
        <creator>Ting Zhang</creator>
        
        <subject>Information visualization; headwear national characteristics; digital material library; yao nationality characteristic headdress design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the rapid development of big data, information, and visualization technology, traditional national headdress design has gradually been combined with them. The strategies and applications of national headdress design fully reflect the beauty of modern science and technology, a model of the combination of national classics and modern technology. On this basis, this paper deeply analyzes the various links and processes of data-driven design using the specific information of Yao ethnic headwear. Building on existing visual design, the paper takes Spring, Hibernate, and other frameworks as the basic software architecture of the design system, studies their visualization principles and data information visualization methods in depth, and applies data information visualization processing to the design of national headwear, in order to build a corresponding digital material library with national characteristics and a digital design process for national headwear. Through digital processing and matching of the whole design, the current design of national headwear can be simplified and optimized, improving design efficiency and providing reference samples for the design of other national characteristics. In the specific design part, the paper carries out design verification based on the Yao nationality&#39;s characteristic headdress design and evaluates the corresponding design from the perspectives of artistry, practicality, and nationality. The practice results show that the proposed information visualization design of national headwear has obvious advantages over traditional design, greatly improving design efficiency and simplifying the design process.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_93-Design_Strategy_and_Application_of_Headwear_with_National_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of VR Video Quality Evaluation Model Based on 3D-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140892</link>
        <id>10.14569/IJACSA.2023.0140892</id>
        <doi>10.14569/IJACSA.2023.0140892</doi>
        <lastModDate>2023-08-30T09:54:14.4100000+00:00</lastModDate>
        
        <creator>Hongxia Zhao</creator>
        
        <creator>Li Huang</creator>
        
        <subject>Virtual reality video; 3D convolutional neural network; residual network; quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Currently, virtual reality (VR) panoramic video content occupies a very important position on virtual reality platforms. Video quality directly affects the experience of platform users, and research on methods for evaluating VR video quality is increasing. This study therefore establishes a subjective evaluation library for VR video data and uses a viewport slicing method to segment VR videos, expanding the sample size. A classification prediction network structure was then constructed using a three-dimensional convolutional neural network (3D-CNN) to achieve objective evaluation of VR videos. However, during the research it was found that the increase in convolutional dimension inevitably leads to a significant increase in the parameter count of the entire neural network, causing a surge in algorithm time complexity. To address this defect, the study designs dual 3D convolutional layers and improves the 3D-CNN using residual networks. On this basis, a virtual reality video quality evaluation model based on the improved 3D-CNN was constructed. Experimental analysis shows that the constructed model achieves an average overall accuracy of 95.27%, an average accuracy of 95.94%, and an average Kappa coefficient of 96.18%. It can accurately and effectively evaluate the quality of virtual reality videos and promote the development of the virtual reality field.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_92-Construction_of_VR_Video_Quality_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Violent Physical Behavior Detection using 3D Spatio-Temporal Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140891</link>
        <id>10.14569/IJACSA.2023.0140891</id>
        <doi>10.14569/IJACSA.2023.0140891</doi>
        <lastModDate>2023-08-30T09:54:14.3970000+00:00</lastModDate>
        
        <creator>Xiuhong Xu</creator>
        
        <creator>Zhongming Liao</creator>
        
        <creator>Zhaosheng Xu</creator>
        
        <subject>Violence detection; surveillance cameras; 3D Convolutional Neural Network (3D CNN); Spatio-temporal convolution; deep learning; abnormal behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The use of surveillance cameras has made it possible to analyze a huge amount of data for automated surveillance. Security systems in schools, hotels, hospitals, and other secured areas are required to identify violent activities that can cause social, economic, and environmental damage. Detecting the moving objects in each frame is a fundamental phase in analyzing the video trail and recognizing violence. Therefore, a three-step approach is presented in this article. In our method, the separation of the frames containing motion information and the detection of violent behavior are applied at two levels of the network. First, the people in the video frames are identified using a convolutional neural network. In the second step, a sequence of 16 frames containing the identified people is fed into the 3D CNN. Furthermore, we optimize the 3D CNN for visual inference using a neural network optimization tool that transforms the pre-trained model into an intermediate representation. Finally, this method uses the OpenVINO toolkit to perform the optimization operations that increase performance. To evaluate the accuracy of our algorithm, two datasets have been analyzed: Violence in Movies and Hockey Fight. The results show that the final accuracy of this analysis is 99.9% and 96% on these datasets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_91-Violent_Physical_Behavior_Detection_Using_3D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Classification Approach of Network Attacks using Supervised and Unsupervised Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140890</link>
        <id>10.14569/IJACSA.2023.0140890</id>
        <doi>10.14569/IJACSA.2023.0140890</doi>
        <lastModDate>2023-08-30T09:54:14.3630000+00:00</lastModDate>
        
        <creator>Rahaf Hamoud R. Al-Ruwaili</creator>
        
        <creator>Osama M. Ouda</creator>
        
        <subject>Network attacks; supervised learning; unsupervised learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The increasing scale and sophistication of network attacks have become a major concern for organizations around the world. As a result, there is an increasing demand for effective and accurate classification of network attacks to enhance cyber security measures. Most existing schemes assume that the available training data is labeled; that is, classification is based on supervised learning. However, this is not always the case since the available real data is expected to be unlabeled. In this paper, this issue is tackled by proposing a hybrid classification approach that combines both supervised and unsupervised learning to build a predictive classification model for classifying network attacks. First, unsupervised learning is used to label the data available in the dataset. Then, different supervised machine learning algorithms are utilized to classify data with the labels obtained from the first step and compare the results with the ground truth labels. Moreover, the issue of the unbalanced dataset is addressed using both over-sampling and under-sampling techniques. Several experiments have been conducted, using the NSL-KDD dataset, to evaluate the efficiency of the proposed hybrid model and the obtained results demonstrate that the accuracy of our proposed model is comparable to supervised classification methods that assume that all data is labeled.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_90-A_Hybrid_Classification_Approach_of_Network_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review for Phonocardiography Classification Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140889</link>
        <id>10.14569/IJACSA.2023.0140889</id>
        <doi>10.14569/IJACSA.2023.0140889</doi>
        <lastModDate>2023-08-30T09:54:14.3470000+00:00</lastModDate>
        
        <creator>Abdullah Altaf</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Awais Mahmood</creator>
        
        <creator>Mohd Izuan Hafez Ninggal</creator>
        
        <creator>Abdulrehman Altaf</creator>
        
        <creator>Irfan Javid</creator>
        
        <subject>Heart sounds classification; Phonocardiogram (PCG); CVDs; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Phonocardiography, the recording and analysis of heart sounds, has become an essential tool in diagnosing cardiovascular diseases (CVDs). In recent years, machine learning and deep learning techniques have dramatically improved the automation of phonocardiogram classification, making it possible to delve deeper into intricate patterns that were previously difficult to discern. Deep learning, in particular, leverages layered neural networks to process data in complex ways, mimicking how the human brain works. This has contributed to more accurate and efficient diagnoses. This systematic review aims to examine the existing literature on phonocardiography classification based on machine learning, focusing on algorithms, datasets, feature extraction methods, and classification models utilized. The materials and methods used in the study involve a comprehensive search of relevant literature and a critical evaluation of the selected studies. The review also discusses the challenges encountered in this field, especially when incorporating deep learning techniques, and suggests future research directions. Key findings indicate the potential of machine and deep learning in enhancing the accuracy of phonocardiography classification, thereby improving cardiovascular disease diagnosis and patient care. The study concludes by summarizing the overall implications and recommendations for further advancements in this area.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_89-Systematic_Review_for_Phonocardiography_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motor Imagery EEG Signals Marginal Time Coherence Analysis for Brain-Computer Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140888</link>
        <id>10.14569/IJACSA.2023.0140888</id>
        <doi>10.14569/IJACSA.2023.0140888</doi>
        <lastModDate>2023-08-30T09:54:14.3170000+00:00</lastModDate>
        
        <creator>Md. Sujan Ali</creator>
        
        <creator>Jannatul Ferdous</creator>
        
        <subject>Brain-Computer Interface (BCI); Electroencephalogram (EEG); Short-time Fourier Transform (STFT); Synchrosqueezing Transform (SST); time-frequency coherence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The synchronization of neural activity in the human brain has great significance for coordinating its various cognitive functions. It changes over time and in response to frequency. This activity is measured through brain signals such as the electroencephalogram (EEG). In this research, the time-frequency (TF) synchronization among several EEG channels is measured using an efficient approach. Most frequently, the windowed Fourier transform, i.e., the short-time Fourier transform (STFT), as well as the wavelet transform (WT), are used to measure TF coherence. The information provided by these model-based methods in the TF domain is insufficient. The proposed synchrosqueezing transform (SST)-based TF representation is a data-adaptive approach that resolves the limitations of the traditional methods. It enables more accurate estimation and better tracking of TF components. The SST generates a clearly defined TF depiction because of its data adaptivity and frequency reassignment capabilities. Furthermore, a non-identical smoothing operator is used to smooth the TF coherence, which enhances the statistical consistency of neural synchronization. The experiment is run using both simulated and real EEG data. The outcomes show that the proposed SST-based method performs significantly better than the aforementioned traditional approaches. As a result, the coherences based on the proposed approach clearly distinguish between various forms of motor imagery movement. The TF coherence can be used to measure the interdependencies of neural activities.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_88-Motor_Imagery_EEG_Signals_Marginal_Time_Coherence_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting a Novel Method for Identifying Communities in Social Networks Based on the Clustering Coefficient</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140887</link>
        <id>10.14569/IJACSA.2023.0140887</id>
        <doi>10.14569/IJACSA.2023.0140887</doi>
        <lastModDate>2023-08-30T09:54:14.3000000+00:00</lastModDate>
        
        <creator>Zhihong HE</creator>
        
        <creator>Tao LIU</creator>
        
        <subject>Social network; detection of communities; butterfly fire algorithm; clustering coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In recent decades, social networks have been considered one of the most important topics in computer science and social science. Identifying the different communities and groups in these networks is very important because this information can be useful in analyzing and predicting various behaviors and phenomena, including the spread of information and social influence. One of the most important challenges in social network analysis is identifying communities. A community is a collection of people or organizations that are more densely connected to each other than to other network entities. In this article, a method to increase the accuracy, quality, and speed of community detection using the Fire Butterfly algorithm is presented; the algorithm is defined, and the parameters used in the proposed approach and its implementation are fully introduced. In this method, the social network is first converted into a graph, and then the clustering coefficient is calculated for each node. A butterfly algorithm based on the clustering coefficient (CC-BF) is proposed to identify communities in complex social networks. The proposed algorithm is new both in terms of generating the initial population and in terms of the mutation method, and these improve its efficiency and accuracy. This research is inspired by the Butterfly Flame meta-heuristic algorithm, based on the clustering coefficient, to find active nodes in the social network. The results show that the proposed algorithm improves by 23.6% over previous similar works. The findings of this research can be useful for researchers in computer science, social network managers, data analysts, organizations and companies, and the general public.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_87-Presenting_a_Novel_Method_for_Identifying_Communities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Real Dataset Creation to Develop an Intelligent System for Predicting Chemotherapy Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140886</link>
        <id>10.14569/IJACSA.2023.0140886</id>
        <doi>10.14569/IJACSA.2023.0140886</doi>
        <lastModDate>2023-08-30T09:54:14.2870000+00:00</lastModDate>
        
        <creator>Houda AIT BRAHIM</creator>
        
        <creator>Mariam BENLLARCH</creator>
        
        <creator>Nada BENHIMA</creator>
        
        <creator>Salah EL-HADAJ</creator>
        
        <creator>Abdelmoutalib METRANE</creator>
        
        <creator>Ghizlane BELBARAKA</creator>
        
        <subject>Dataset; breast cancer; cancer stage; chemotherapeutic regimen; machine learning; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Breast cancer is the most common cancer diagnosed in women. In developing countries, controlling this scourge is often problematic due to late diagnosis and a lack of medical and human resources. Automation and optimization of treatment are therefore needed to improve patient outcomes. According to medical staff and pharmacists, the use of medical datasets could assist them in clinical decision-making and would allow for better use of resources, especially when these are limited. In our paper, a new real dataset was produced by collecting medical and personal data from 601 patients with breast cancer at the University Hospital Center (UHC) Mohammed VI of Marrakech. Data of women diagnosed with breast cancer from January 2018 at the UHC were assessed. Patients were aged 24 to 85 years, with an average age of 48.84 years. Patient age, performance status (PS), cancer stage and subtype, treatment patterns, and correlations among the different variables were analyzed. The created dataset will help determine the most appropriate treatment regimen depending on the individual characteristics of patients, allowing for better use of limited resources.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_86-New_Real_Dataset_Creation_to_Develop_an_Intelligent_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Breast Cancer on Ultrasound Images using Attention U-Net Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140885</link>
        <id>10.14569/IJACSA.2023.0140885</id>
        <doi>10.14569/IJACSA.2023.0140885</doi>
        <lastModDate>2023-08-30T09:54:14.2530000+00:00</lastModDate>
        
        <creator>Sara LAGHMATI</creator>
        
        <creator>Khadija HICHAM</creator>
        
        <creator>Bouchaib CHERRADI</creator>
        
        <creator>Soufiane HAMIDA</creator>
        
        <creator>Amal TMIRI</creator>
        
        <subject>Breast cancer; deep learning; segmentation; attention U-Net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Breast cancer (BC) is one of the most prevailing and life-threatening types of cancer impacting women worldwide. Early detection and accurate diagnosis are crucial for effective treatment and improved patient outcomes. Deep learning techniques have shown remarkable promise in medical image analysis tasks, particularly segmentation. This research leverages the Breast Ultrasound Images (BUSI) dataset to develop two variations of a segmentation model using the Attention U-Net architecture. In this study, we trained the Attention3 U-Net and the Attention4 U-Net on the BUSI dataset, consisting of normal, benign, and malignant breast lesions. We evaluated the models&#39; performance based on standard segmentation metrics such as the Dice coefficient and Intersection over Union (IoU). The results demonstrate the effectiveness of the Attention U-Net in accurately segmenting breast lesions, with high overall performance, indicating agreement between predicted and ground truth masks. The successful application of the Attention U-Net to the BUSI dataset holds promise for improving breast cancer diagnosis and treatment. It highlights the potential of deep learning in medical image analysis, paving the way for more efficient and reliable diagnostic tools in breast cancer management.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_85-Segmentation_of_Breast_Cancer_on_Ultrasound_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Learning Approach for Multi-Modal Medical Image Fusion using Deep Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140884</link>
        <id>10.14569/IJACSA.2023.0140884</id>
        <doi>10.14569/IJACSA.2023.0140884</doi>
        <lastModDate>2023-08-30T09:54:14.2400000+00:00</lastModDate>
        
        <creator>Andino Maseleno</creator>
        
        <creator>D. Kavitha</creator>
        
        <creator>Koudegai Ashok</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Nimmati Satheesh</creator>
        
        <creator>R. Vijaya Kumar Reddy</creator>
        
        <subject>Deep convolutional neural networks; image fusion; generative adversarial network; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Medical image fusion plays a vital role in enhancing the quality and accuracy of diagnostic procedures by integrating complementary information from multiple imaging modalities. In this study, we propose an ensemble learning approach for multi-modal medical image fusion utilizing deep convolutional neural networks (DCNNs) to predict brain tumour. The proposed method aims to exploit the inherent characteristics of different modalities and leverage the power of CNNs for improved fusion results. The Generative Adversarial Network (GAN) strengthens the input images. The ensemble learning framework comprises two main stages. Firstly, a set of DCNN models is trained independently on the respective input modalities, extracting high-level features that capture modality-specific information. Each DCNN model is fine-tuned to optimize its performance for fusion. Secondly, a fusion module is designed to aggregate the individual modality features and generate a fused image. The fusion module employs a weighted averaging technique to assign appropriate weights to the features based on their relevance and significance. The fused image obtained through this process exhibits enhanced spatial details and improved overall quality compared to the individual modalities. On a diversified dataset made up of multi-modal medical images, thorough tests are carried out to assess the efficacy of the suggested approach. The fusion images exhibit improved visual quality, enhanced feature representation, and better preservation of diagnostic information. The BRATS 2018 dataset, which contains Multi-Modal MRI images and patients’ healthcare information were used. The proposed method also demonstrates robustness across different medical imaging modalities, highlighting its versatility and potential for widespread adoption in clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_84-An_Ensemble_Learning_Approach_for_Multi_Modal_Medical_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Patient-Centric Medical Image Management using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140883</link>
        <id>10.14569/IJACSA.2023.0140883</id>
        <doi>10.14569/IJACSA.2023.0140883</doi>
        <lastModDate>2023-08-30T09:54:14.2070000+00:00</lastModDate>
        
        <creator>Abdulaziz Aljaloud</creator>
        
        <subject>Smart healthcare; medical imaging; blockchain; ethereum; distributed storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In the smart systems context, the storage and distribution of health-critical data (medical images, test reports, clinical information, etc.) processed and transmitted via web portals and pervasive devices requires secure and efficient management of patients’ medical records. The reliance on centralized data centers in the cloud to process, store, and transmit patients’ medical records poses some critical challenges, including but not limited to operational costs, storage space requirements, and, importantly, threats and vulnerabilities to the security and privacy of health-critical data. To address these issues, this research proposes a framework and provides a proof of concept named the Patient-Centric Medical Image Management System (PCMIMS). The proposed PCMIMS solution utilizes the Ethereum blockchain and the Inter-Planetary File System (IPFS) to enable secure and decentralized storage capabilities that are lacking in existing solutions for patients’ medical image management. The PCMIMS design facilitates secure access to patient-centric information for health units, patients, medics, and third-party requestors by incorporating a patient-centric access control protocol, ensuring privacy and control over medical data. The proposed framework is validated through the deployment of a prototype based on a smart contract executed on the Ethereum TESTNET blockchain, which demonstrates the efficiency and feasibility of the solution. Validation results highlight a correlation between (i) the number of transactions (i.e., data storage and retrieval), (ii) gas consumption (i.e., energy efficiency), and (iii) data size (volume of patient-centric medical images) via repeated trials in a Microsoft Windows environment. Validation results also indicate the computational efficiency of the solution in processing the three most common types of medical images, namely (a) magnetic resonance imaging (MRI), (b) X-radiation (X-ray), and (c) computed tomography (CT) scans. This research primarily contributes by designing, implementing, and validating a blockchain-based practical solution for efficient and secure management of patient-centric medical images in the context of smart healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_83-A_Framework_for_Patient_Centric_Medical_Image_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Magnetic Resonance Image Denoising using Wasserstein Generative Adversarial Network with Residual Encoder-Decoders and Variant Loss Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140882</link>
        <id>10.14569/IJACSA.2023.0140882</id>
        <doi>10.14569/IJACSA.2023.0140882</doi>
        <lastModDate>2023-08-30T09:54:14.1930000+00:00</lastModDate>
        
        <creator>Hanaa A. Sayed</creator>
        
        <creator>Anoud A. Mahmoud</creator>
        
        <creator>Sara S. Mohamed</creator>
        
        <subject>Deep learning; image denoising; MRI; Wasserstein GAN; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Magnetic resonance imaging (MRI) is frequently contaminated by noise during the scanning and transmission of images, which deteriorates the accuracy of quantitative measures derived from the data and limits disease diagnosis by doctors or a computerized system. MRI commonly suffers from so-called Rician noise: because uncorrelated Gaussian noise with zero mean and equal standard deviation is present in both the real and imaginary parts of the complex K-space image, the noise distribution in magnitude MR images tends to follow a Rician distribution. To remove Rician noise from an MRI scan, deep learning has been used in the MRI denoising method to achieve improved performance. The proposed models were inspired by the Residual Encoder-Decoder Wasserstein Generative Adversarial Network (RED-WGAN). Specifically, the generator network consists of residual autoencoders combined with convolution and deconvolution operations, and the discriminator network consists of convolutional layers. By replacing the Mean Square Error (MSE) loss in RED-WGAN with a Structurally Sensitive Loss (SSL), RED-WGAN-SSL is proposed to overcome the loss of important structural details caused by over-smoothing the edges. The RED-WGAN-SSIM model has also been developed using the Structural Similarity (SSIM) loss. The proposed RED-WGAN-SSL and RED-WGAN-SSIM models incorporate the SSL, SSIM, Visual Geometry Group (VGG), and adversarial losses to form the new loss function. They preserve informative details and fine image structure better than RED-WGAN, so our models can effectively reduce noise and suppress artifacts.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_82-3D_Magnetic_Resonance_Image_Denoising.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Deep Learning Approach for Real-Time Sentiment Analysis in Video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140881</link>
        <id>10.14569/IJACSA.2023.0140881</id>
        <doi>10.14569/IJACSA.2023.0140881</doi>
        <lastModDate>2023-08-30T09:54:14.1600000+00:00</lastModDate>
        
        <creator>Tejashwini S. G</creator>
        
        <creator>Aradhana D</creator>
        
        <subject>Deep learning; emotion recognition; feature extraction; machine learning; sentiment analysis; visual data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Recognizing emotions from visual data, like images and videos, presents a daunting challenge due to the intricacy of visual information and the subjective nature of human emotions. Over the years, deep learning has showcased remarkable success in diverse computer vision tasks, including sentiment classification. This paper introduces a novel multi-view deep learning framework for emotion recognition from visual data. Leveraging Convolutional Neural Networks (CNNs) this framework extracts features from visual data to enhance sentiment classification accuracy. Additionally, we enhance the deep learning model through cutting-edge techniques like transfer learning to bolster its generalization capabilities. Furthermore, we develop an efficient deep learning classification algorithm, effectively categorizing visual sentiments based on the extracted features. To assess its performance, we compare our proposed model with state-of-the-art machine learning methods in terms of classification accuracy, training time, and processing speed. The experimental results unequivocally demonstrate the superiority of our framework, showcasing higher classification accuracy, faster training times, and improved processing speed compared to existing methods. This multi-view deep learning approach marks a significant stride in emotion recognition from visual data and holds the potential for various real-world applications, such as social media sentiment analysis and automated video content analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_81-Multimodal_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Strategic Decision Model of Human Resource Management based on Biological Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140880</link>
        <id>10.14569/IJACSA.2023.0140880</id>
        <doi>10.14569/IJACSA.2023.0140880</doi>
        <lastModDate>2023-08-30T09:54:14.1470000+00:00</lastModDate>
        
        <creator>Ke Xu</creator>
        
        <subject>Biological neural network; human resources management; strategic decision making; index selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The human resource management system is an indispensable part of enterprise information strategy construction. Based on the theory of biological neural networks, this paper constructs a strategic decision model for human resources management, then uses the micro-integration method to predict the demand for human resources and solves the quantification problem of human resources supply prediction. In the simulation process, the model analyzes the current situation of the personnel management system and the necessity of the research, and plans and designs a computer-aided personnel management information system based on a Client/Server biological neural network structure. Personnel quality evaluation, through assessment and analysis of the qualities of those evaluated, provides effective reference information for enterprise personnel decisions and index selection, and is of great significance for the allocation, use, training, and development of enterprise human resources. Neural networks rely on the powerful data storage, processing, and computing capabilities of computers to help enterprises respond quickly to changes in external market conditions, improve the efficiency of decision-making, and create greater value for enterprises. Experimental testing found that at 5 iterations the network verification results have the best consistency, and at 7 iterations the training target error standard set in this paper is reached. With 60 samples, the screening accuracy of the network reached 92.18%; when the samples increased to 80, the screening accuracy further improved to 92.84%, indicating that the screening accuracy of the network increases with the number of training samples and that the network can be used to detect and classify samples quickly, objectively, and accurately.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_80-Research_on_Strategic_Decision_Model_of_Human_Resource_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Sensor Signal-Assisted Behavioral Model and Control of Live Interaction in Digital Media Art</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140879</link>
        <id>10.14569/IJACSA.2023.0140879</id>
        <doi>10.14569/IJACSA.2023.0140879</doi>
        <lastModDate>2023-08-30T09:54:14.1130000+00:00</lastModDate>
        
        <creator>Pujie Li</creator>
        
        <creator>Shi Bai</creator>
        
        <subject>Intelligent sensors; digital media; VR technology; artistic interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Digital media art immersive scene design is a type of art design based on the flow theory of positive psychology: it uses digital media as the main technology and tool to build a scene that stimulates the user&#39;s senses and perception, so that the user achieves a state of immersion. In this paper, we discuss the application of digital experience technology in designing art scene interaction devices by combining intelligent sensor signal analysis with multimodal interaction. On this basis, a new inductive displacement sensing element is proposed, which adopts a square-wave driving mode and an op-amp circuit to extract signals. It overcomes the shortcomings of traditional inductive displacement sensing elements, offering small size, light weight, good linearity, high-frequency response, and simple driving and signal detection circuits, and it adapts more easily to microcomputer control. A comprehensive anti-interference and fault self-diagnosis design is carried out for the sensor system to ensure its stability and reliability. An intelligent digital filtering algorithm with program judgment is proposed, with better smoothing ability and faster response speed. The multimodal interaction strategy for digital experience design is applied in practice, and a series of diversified device design solutions suitable for on-site interaction behavior is proposed.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_79-Smart_Sensor_Signal_Assisted_Behavioral_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Layout Algorithm for Graphic Language in Visual Communication Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140878</link>
        <id>10.14569/IJACSA.2023.0140878</id>
        <doi>10.14569/IJACSA.2023.0140878</doi>
        <lastModDate>2023-08-30T09:54:14.0830000+00:00</lastModDate>
        
        <creator>Xiaofang Liao</creator>
        
        <creator>Xinqian Hu</creator>
        
        <subject>Graphic language; visual communication design; layout algorithm; design elements; grid layout; content layout</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>As computer technology advances, people&#39;s capacity for visual perception improves, and the demands placed on computerized layouts progressively rise. The simple style of graphics is no longer the only option for computer figure and video creation; instead, there is a greater tendency to represent effects visually and to improve the aesthetics and expressiveness of visuals and images. Graphic language uses visual components, including shapes, colors, typography, images, and icons, in a visual communication context to express messages, ideas, and emotions. Against the backdrop of the information era, graphic language faces both greater opportunities and greater obstacles; consequently, it is crucial to convert data into graphic language. Visual communication is evolving in promising directions with technological advances and cultural convergence. Graphic language has its own distinct visual meaning, and each person&#39;s visual experience is extremely diverse, encountered in daily life through various visual elements in different layouts. A hybridized Grid and Content-based Automatic Layout (HGC-AL) algorithm for graphic language in Visual Communication Design (VCD) has been developed to produce visually balanced layouts and establish a structured system for arranging content elements. The content-based layout uses design constraints for better alignment and avoids conflict loss. The hierarchical arrangement of graphic elements in a grid layout analyzes the types of visual elements, such as image, text, and color. Finally, graphic language enhances the visual score and offers flexibility by allowing changes and modifications within the grid layout. As design requirements change, the responsive fluid grid supports various graphical content, sizes, and alignments. Compared with existing layout algorithms, the proposed algorithm is validated with metrics such as Intersection over Union (IoU), alignment accuracy, content coverage ratio, visual score, scalability ratio, and overall layout quality.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_78-Automatic_Layout_Algorithm_for_Graphic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PMG-Net: Electronic Music Genre Classification using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140877</link>
        <id>10.14569/IJACSA.2023.0140877</id>
        <doi>10.14569/IJACSA.2023.0140877</doi>
        <lastModDate>2023-08-30T09:54:14.0670000+00:00</lastModDate>
        
        <creator>Yuemei Tang</creator>
        
        <subject>Music genre classification; deep neural networks; convolutional neural networks model; PMG-Net model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the rapid development of the electronic music industry, establishing automatic classification technology for electronic music genres has become an urgent problem. This paper utilized neural network (NN) technology to classify electronic music genres. The basic idea was to establish a deep neural network (DNN) based classification model to analyze audio signal processing and classification feature extraction for electronic music. In this paper, 2700 pieces of electronic music of different types were selected as experimental data from the publicly available dataset of W website and fed into a convolutional neural network (CNN) model, the PMG-Net electronic music genre classification model, and a traditional classification model for comparison. The results showed that the PMG-Net model had the best classification performance and the highest recognition accuracy. Its classification error in each round of training was smaller than that of the other two models, with little fluctuation, and its music signal processing and audio feature extraction in each round were faster than those of the traditional model and the CNN model. These results indicate that the PMG-Net model, customized on the basis of DNNs, provides a better classification effect for automatic electronic music genre classification and can efficiently complete classification over massive data.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_77-PMG_Net_Electronic_Music_Genre_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Artificial Intelligence and Computer Vision for Augmented Reality Game Development in Sports</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140876</link>
        <id>10.14569/IJACSA.2023.0140876</id>
        <doi>10.14569/IJACSA.2023.0140876</doi>
        <lastModDate>2023-08-30T09:54:14.0370000+00:00</lastModDate>
        
        <creator>Nurlan Omarov</creator>
        
        <creator>Bakhytzhan Omarov</creator>
        
        <creator>Axaule Baibaktina</creator>
        
        <creator>Bayan Abilmazhinova</creator>
        
        <creator>Tolep Abdimukhan</creator>
        
        <creator>Bauyrzhan Doskarayev</creator>
        
        <creator>Akzhan Adilzhan</creator>
        
        <subject>Augmented reality; computer vision; game development; action detection; action classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This paper delineates the process of crafting an Augmented Reality (AR)-enriched version of the Subway Surfers game, engineered with an emphasis on action recognition and Artificial Intelligence (AI) principles, with the primary objective of boosting children&#39;s enthusiasm for physical activity. The gameplay, predicated on advanced computer vision methodologies for discerning player movements and reinforced with machine learning tactics that modulate the game&#39;s difficulty in accordance with player capabilities, offers an immersive and engaging interface. This amalgamation not only catalyzes children&#39;s interest in active exercise but also lends it a playful aspect. Development of the game required the cohesive assimilation of a diverse spectrum of technologies, encompassing Unity for game development, TensorFlow for implementing machine learning algorithms, and Vuforia for crafting the AR elements. A preliminary study, conducted to assess the efficacy of the game in fostering a pro-sport attitude in children, reported encouraging outcomes. Given its potential to incite physical activity among young users, the game can be construed as a promising antidote to sedentarism and a potent catalyst for a healthier lifestyle.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_76-Applying_Artificial_Intelligence_and_Computer_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chatbot Program for Proposed Requirements in Korean Problem Specification Document</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140875</link>
        <id>10.14569/IJACSA.2023.0140875</id>
        <doi>10.14569/IJACSA.2023.0140875</doi>
        <lastModDate>2023-08-30T09:54:14.0200000+00:00</lastModDate>
        
        <creator>Young Yun Baek</creator>
        
        <creator>Soojin Park</creator>
        
        <creator>Young B. Park</creator>
        
        <subject>Requirement engineering; NLP machine learning; clustering; Korean document; chatbot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In software engineering, requirement analysis is a crucial task throughout the entire development process. Factors contributing to the failure of requirement analysis include communication breakdowns, divergent interpretations of requirements, and inadequate execution of requirements. To address these issues, the proposed approach applies NLP machine learning to Korean requirement documents to generate knowledge-based data and to deduce actors and actions from that knowledge-based information. The derived actors and actions are then structured into a hierarchy of sentences through clustering, establishing a conceptual hierarchy between sentences. This hierarchy is transformed into ontology data, yielding the final requirement list. A chatbot system provides users with the derived system event list and generates requirement diagrams and specification documents, and users can refer to the chatbot system&#39;s outputs to extract requirements. In this paper, the feasibility of the approach is demonstrated by applying it to a case involving Korean-language requirements for course enrollment.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_75-Chatbot_Program_for_Proposed_Requirements_in_Korean_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Enhanced Internet of Medical Things to Analyze Brain Computed Tomography Images of Stroke Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140874</link>
        <id>10.14569/IJACSA.2023.0140874</id>
        <doi>10.14569/IJACSA.2023.0140874</doi>
        <lastModDate>2023-08-30T09:54:13.9900000+00:00</lastModDate>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Azhar Tursynova</creator>
        
        <creator>Meruert Uzak</creator>
        
        <subject>Deep learning; machine learning; stroke; diagnosis; detection; computed tomography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In the realm of advancing medical technology, this paper explores an amalgamation of deep learning algorithms and the Internet of Medical Things (IoMT), demonstrating their efficacy in decoding the intricacies of brain Computed Tomography (CT) images from stroke patients. Deploying an avant-garde deep learning framework, we demonstrate the system&#39;s ability to distill complex patterns from multifarious imaging data that often elude traditional analysis techniques. Our research marks a pioneering leap from conventional, mostly uniform methods towards a nuanced approach that embraces the intricacies of the human brain. The system goes beyond mere novelty, evidencing a substantial enhancement in early detection and prognosis of strokes, expediting clinical decisions, and thereby potentially saving lives. Moreover, the inclusion of IoMT provides a digital highway for seamless, real-time data flow, enabling quick responses in critical situations. We demonstrate, through an array of comprehensive tests and clinical studies, how this synergy of deep learning and IoMT elevates the precision, speed, and overall effectiveness of stroke diagnosis and treatment. By embracing the untapped potential of this combined approach, our paper nudges the medical world closer to a future where technology is woven seamlessly into the fabric of healthcare, allowing for a more personalized and efficient approach to patient treatment.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_74-Deep_Learning_Enhanced_Internet_of_Medical_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Decision Tree Classification Algorithm on Decision-Making for Upstream Business</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140873</link>
        <id>10.14569/IJACSA.2023.0140873</id>
        <doi>10.14569/IJACSA.2023.0140873</doi>
        <lastModDate>2023-08-30T09:54:13.9730000+00:00</lastModDate>
        
        <creator>Mohd Shahrizan Abd Rahman</creator>
        
        <creator>Nor Azliana Akmal Jamaludin</creator>
        
        <creator>Zuraini Zainol</creator>
        
        <creator>Tengku Mohd Tengku Sembok</creator>
        
        <subject>Decision-making strategies; decision tree family; business decisions; upstream; oil &amp; gas; predictive analysis; project control; project planning; machine learning algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In today&#39;s rapidly advancing technological landscape and evolving business paradigms, the pursuit of insightful patterns and concealed knowledge beyond conventional big data becomes imperative. This pursuit serves a crucial role in aiding stakeholders, particularly in the realms of tactical decision-making and forecasting, with a particular focus on business strategy and risk management. Strategic and tactical decision-making holds the key to sustaining the longevity, profitability, and continuous enhancement of the oil and gas industry. Therefore, it is paramount to address this need by uncovering the most effective Decision Tree (DT) techniques for various challenges and identifying their practical applications in real-life scenarios. The integration of big data with Machine Learning (ML) stands as a pivotal approach to foster data-driven innovation within the oil and gas sector. This study aims to offer valuable insights and methodologies for efficient decision-making, catering to the diverse stakeholders within the oil and gas industry. It focuses on the exploration of optimal DT techniques for specific problems and their relevance in practical situations. By harnessing the potential of machine learning and collaborative efforts among research scientists, big data practitioners, data scientists, and analysts, the study strives to provide more precise and effective data. Furthermore, it is imperative to recognize that not all stakeholders are mathematicians. In project management, a holistic approach that considers humanistic perspectives, such as risk analysis, ethics, and empathy, is crucial. Ultimately, the output and findings of any system must be accessible, comprehensible, and interpretable by humans or human groups. The success of these insights lies not just in their mathematical precision but also in their ability to resonate with and guide human decision-makers. In this light, the study emphasizes the human element in data interpretation and decision-making, acknowledging that the system&#39;s output will require human interaction, analysis, and ethical considerations to be truly effective in driving positive outcomes in the industry.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_73-The_Application_of_Decision_Tree_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CryptoScholarChain: Revolutionizing Scholarship Management Framework with Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140872</link>
        <id>10.14569/IJACSA.2023.0140872</id>
        <doi>10.14569/IJACSA.2023.0140872</doi>
        <lastModDate>2023-08-30T09:54:13.9570000+00:00</lastModDate>
        
        <creator>Jadhav Swati</creator>
        
        <creator>Pise Nitin</creator>
        
        <subject>Blockchain; smart scholarship management; smart contract; solidity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Scholarship management is a crucial aspect of higher education systems, aimed at supporting deserving students and reducing financial barriers. However, traditional scholarship management processes often suffer from challenges such as a lack of transparency, inefficient communication, and difficulty tracking and verifying scholarship applications. Recently, Blockchain technology has emerged as a promising solution to these issues, offering a decentralized, transparent, and secure framework for scholarship management. However, existing literature lacks comprehensive solutions in critical areas such as scholarship management, storage facilities, payment systems, monitoring and auditing, and experimental validation. This research introduces an innovative smart scholarship management system leveraging Blockchain technology to overcome these limitations. The research presents an Ethereum-based implementation utilizing Solidity for backend smart contracts and ReactJS for the front end. Experimental evaluation validates the transaction execution gas costs and the deployment cost.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_72-CryptoScholarChain_Revolutionizing_Scholarship_Management_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Approach for Monkeypox Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140871</link>
        <id>10.14569/IJACSA.2023.0140871</id>
        <doi>10.14569/IJACSA.2023.0140871</doi>
        <lastModDate>2023-08-30T09:54:13.9270000+00:00</lastModDate>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Nguyen Hoang Khang</creator>
        
        <creator>Le Nhat Quynh</creator>
        
        <creator>Le Huu Thang</creator>
        
        <creator>Dang Minh Canh</creator>
        
        <creator>Ha Phuoc Sang</creator>
        
        <subject>Monkeypox; machine learning; deep learning; skin lesions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Public health concerns have been heightened by the emergence and spread of monkeypox, a viral disease that affects both humans and animals. The significance of early detection and diagnosis of monkeypox cannot be overstated, as it plays a crucial role in minimizing the negative impact on affected individuals and safeguarding public health. Monkeypox poses a considerable threat to human well-being, causing physical discomfort and mental distress, while also posing challenges to work productivity. This study proposes an applied model that combines deep learning models (ResNet-50, VGG16, and MobileNet) with machine learning models (Random Forest, K-Nearest Neighbors, Gaussian Naive Bayes, Decision Tree, Logistic Regression, and AdaBoost classifiers) to classify and detect monkeypox. The datasets used in this research are the Monkeypox Skin Lesion Dataset (MSLD) and the Monkeypox Image Dataset (MID), comprising 659 images in total, with subjects ranging from healthy cases to severe skin lesions. The test results show that the model combining deep learning and machine learning achieves positive results, with an Accuracy of 0.97 and an F1-score of 0.98.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_71-A_Proposed_Approach_for_Monkeypox_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Improvement of New Industrial Robot Mechanism Based on Innovative BP-ARIMA Combined Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140870</link>
        <id>10.14569/IJACSA.2023.0140870</id>
        <doi>10.14569/IJACSA.2023.0140870</doi>
        <lastModDate>2023-08-30T09:54:13.9100000+00:00</lastModDate>
        
        <creator>Yuanyuan Liu</creator>
        
        <subject>Industry 4.0; robotics; design; backpropagation autoregressive integrated moving average (BP-ARIMA); operation facilities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The main innovation of Industry 4.0, which involves human-robot cooperation, is transforming industrial operation facilities. Robotic systems have been developed as modern industrial solutions to assist operators in carrying out manual tasks in cyber-physical industrial environments. These robots integrate unique human talents with the capabilities of intelligent machinery. Due to the increasing demand for modern robotics, numerous ongoing industrial robotics studies exist. Robots offer advantages over humans in various aspects, as they can operate continuously. Enhanced efficiency is achieved through reduced processing time and increased industrial adaptability. When deploying interactive robotics, emphasis should be placed on optimal design and improvisation requirements. Robotic design is a very challenging procedure that involves extensive development and modeling efforts. Significant progress has been made in robotic design in recent years, providing multiple approaches to address this issue. Considering this, we propose utilizing the Backpropagation Autoregressive Integrated Moving Average (BP-ARIMA) combination model for designing and improving a novel industrial robot mechanism. The design outcomes were evaluated based on performance indicators, including accuracy, optimal performance, error rate, implementation cost, and energy consumption. The evaluation findings demonstrate that the suggested BP-ARIMA model offers optimal design for industrial robotics.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_70-Design_and_Improvement_of_New_Industrial_Robot_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Startup Efficiency: Multivariate DEA for Performance Recognition and Resource Optimization in a Dynamic Business Landscape</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140869</link>
        <id>10.14569/IJACSA.2023.0140869</id>
        <doi>10.14569/IJACSA.2023.0140869</doi>
        <lastModDate>2023-08-30T09:54:13.8630000+00:00</lastModDate>
        
        <creator>K. N. Preethi</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <creator>Esther Rosa Saenz Arenas</creator>
        
        <creator>Kathari Santosh</creator>
        
        <creator>Ricardo Fernando Cosio Borda</creator>
        
        <creator>Anuradha. S</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Startup efficiency; data envelopment analysis; logistic approach; resource allocation; dynamic business landscape</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Startups encounter a variety of difficulties in maximizing their performance and resource allocation in today&#39;s dynamic business environment. This study employs a two-stage methodology to address the challenges faced by startups in optimizing their performance and resource allocation in the dynamic contemporary business environment. The research utilizes an advanced Data Envelopment Analysis (DEA) technique to identify the factors influencing startups&#39; efficiency. In the first stage, the relative efficiency of startups is assessed by comparing their inputs and outputs through DEA, a non-parametric approach. This analysis not only reveals the successful startups but also establishes benchmarks for others to aspire to. By examining the efficiency scores, critical factors that significantly impact startup performance can be identified. In the second stage, a logistic approach is employed to predict the performance of startups based on these discovered factors. This prediction model can be valuable in making informed decisions regarding resource allocation, aiding startups in their survival and development endeavors. By evaluating relative efficiency and predicting performance based on the identified factors, the proposed two-stage methodology offers a comprehensive approach for startups to strategically allocate resources and enhance overall performance in the present dynamic business environment.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_69-Enhancing_Startup_Efficiency_Multivariate_DEA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Population-based Plagiarism Detection using DistilBERT-Generated Word Embedding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140868</link>
        <id>10.14569/IJACSA.2023.0140868</id>
        <doi>10.14569/IJACSA.2023.0140868</doi>
        <lastModDate>2023-08-30T09:54:13.8330000+00:00</lastModDate>
        
        <creator>Yuqin JING</creator>
        
        <creator>Ying LIU</creator>
        
        <subject>Plagiarism detection; LSTM; imbalanced classification; DistilBERT; differential evolution; focal loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Plagiarism is the unacknowledged use of another person’s language, information, or writing without crediting the source. This manuscript presents an innovative method for detecting plagiarism utilizing attention mechanism-based LSTM and the DistilBERT model, enhanced by an enriched differential evolution (DE) algorithm for pre-training and a focal loss function for training. DistilBERT reduces BERT’s size by 40% while maintaining 97% of its language comprehension abilities and being 60% quicker. Current algorithms utilize positive-negative pairs to train a two-class classifier that detects plagiarism. A positive pair consists of a source sentence and a suspicious sentence, while a negative pair comprises two dissimilar sentences. Negative pairs typically outnumber positive pairs, leading to imbalanced classification and significantly lower system performance. To combat this, a training method based on a focal loss (FL) is suggested, which carefully learns minority class examples. Another addressed issue is the training phase, which typically uses gradient-based methods like back-propagation for the learning process. As a result, the training phase has limitations, such as initialization sensitivity. A new DE algorithm is proposed to initiate the back-propagation process by employing a mutation operator based on clustering. A successful cluster for the current DE population is found, and a fresh updating approach is used to produce potential solutions. The proposed method is assessed using three datasets: SNLI, MSRP, and SemEval2014. The model attains excellent results that outperform other deep models, conventional, and population-based models. Ablation studies excluding the proposed DE and focal loss from the model confirm the independent positive incremental impact of these components on model performance.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_68-A_Population_based_Plagiarism_Detection_using_DistilBERT_Generated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm for Skeleton Action Recognition by Integrating Attention Mechanism and Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140867</link>
        <id>10.14569/IJACSA.2023.0140867</id>
        <doi>10.14569/IJACSA.2023.0140867</doi>
        <lastModDate>2023-08-30T09:54:13.8170000+00:00</lastModDate>
        
        <creator>Jianhua Liu</creator>
        
        <subject>Attention mechanism; convolutional neural network; action recognition; central differential network; spacetime converter; directed graph convolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>An action recognition model based on 3D skeleton data may experience a decrease in recognition accuracy when facing complex backgrounds, and it is easy to overlook the local connection between dynamic gradient information and dynamic actions, reducing the fault tolerance of the constructed model. To achieve accurate and fast capture of human skeletal movements, a directed graph convolutional network recognition model that integrates an attention mechanism and a convolutional neural network is proposed. By combining a spacetime converter and central differential graph convolution, a corresponding central differential converter graph convolutional network model is constructed to obtain dynamic gradient information in actions and calculate local connections between dynamic actions. The results show that the cross-target benchmark recognition rate of the directed graph convolutional network recognition model is 92.3%, and the cross-view benchmark recognition rate is 97.3%. The Top-1 accuracy is 37.6%, and the Top-5 accuracy is 60.5%. The cross-target recognition rate of the central differential converter graph convolutional network model is 92.9%, and the cross-view benchmark recognition rate is 97.5%. Under cross-target and cross-view benchmarks, the average recognition accuracy for similar actions is 81.3% and 88.9%, respectively. The accuracy of the entire action recognition model in single-person and multi-person action recognition experiments is 95.0%. These outcomes indicate that the constructed model has a higher recognition rate and more stable performance than existing neural network recognition models, and has clear research value.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_67-Algorithm_for_Skeleton_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Approach for Improved Prediction of Adaptive Learning Activities in Intelligent Tutoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140866</link>
        <id>10.14569/IJACSA.2023.0140866</id>
        <doi>10.14569/IJACSA.2023.0140866</doi>
        <lastModDate>2023-08-30T09:54:13.7870000+00:00</lastModDate>
        
        <creator>Fatima-Zohra Hibbi</creator>
        
        <creator>Otman Abdoun</creator>
        
        <creator>El Khatir Haimoudi</creator>
        
        <subject>Intelligent tutoring system; learner model; genetic algorithm; adaptive learning activities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The intelligent tutoring system registers the reference data of the learners in a database. This data is stored for later use in the instructional module. Designing a student model is not an easy task. It is first necessary to identify the knowledge acquired by the learner, then identify the learner&#39;s level of understanding of the functionality, and finally identify the pedagogical strategies used by the learner to solve a problem. These elements must be taken into account in the development of the learner model. Learner characteristics must be considered in several forms. To build an effective learner model, the system must take into consideration both static (learner preferences) and dynamic (compartmental action) student characteristics. The objective of the article is to develop the learner model of the intelligent tutoring system by suggesting a new learning path. This proposal is based on the constructivist approach and the activist style (based on experimentation). Following the KOLB model, the authors propose a list of pedagogical activities depending on the learners&#39; profile. Based on the learners&#39; actions, the system reduces the list of activities according to two criteria: the learner&#39;s preference and the presence of one or more activities based on the activist style, using a genetic algorithm as an evolutionary algorithm. The results obtained led us to improve the learning process through a new conception of the ITS learner model.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_66-Genetic_Approach_for_Improved_Prediction_of_Adaptive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Load Balancing Algorithm to Process the Multiple Transactions Over Banking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140865</link>
        <id>10.14569/IJACSA.2023.0140865</id>
        <doi>10.14569/IJACSA.2023.0140865</doi>
        <lastModDate>2023-08-30T09:54:13.7700000+00:00</lastModDate>
        
        <creator>Raghunadha Reddi Dornala</creator>
        
        <subject>Cloud computing; load balancing; ensemble algorithm; banking; transaction processing; resource utilization; response time; scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The banking industry has been transformed by cloud computing, which has provided scalable and cost-effective solutions for managing large volumes of transactions. However, as the number of transactions grows, the need for efficient load-balancing algorithms to ensure optimal utilization of cloud resources and improve system performance becomes critical. This paper proposes an ensemble cloud load-balancing algorithm (ECBA) specifically designed to process multiple banking transactions. The proposed algorithm combines the strengths of several load-balancing techniques to achieve a balanced distribution of transaction loads across various cloud servers. It considers factors such as transaction types, server capacities, and network conditions to make intelligent load distribution decisions. The algorithm dynamically adapts to changing workload patterns and optimizes resource allocation by leveraging machine learning and predictive analytics. A simulation environment that mimics the banking system&#39;s transaction processing workflow is created to evaluate the performance of the ensemble load balancing algorithm. Extensive experiments with various workload scenarios are conducted to assess the algorithm&#39;s effectiveness in load balancing, response time, resource utilization, and overall system performance. The results show that the proposed ECBA outperforms traditional banking load-balancing approaches. It reduces response time, improves resource utilization, and ensures every server is assigned a balanced share of transactions. The algorithm&#39;s adaptability and scalability make it well-suited for handling dynamic and fluctuating workloads, thus providing a robust solution for processing multiple transactions in the banking sector.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_65-An_Ensemble_Load_Balancing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach of Hybrid Sampling SMOTE and ENN to the Accuracy of Machine Learning Methods on Unbalanced Diabetes Disease Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140864</link>
        <id>10.14569/IJACSA.2023.0140864</id>
        <doi>10.14569/IJACSA.2023.0140864</doi>
        <lastModDate>2023-08-30T09:54:13.7530000+00:00</lastModDate>
        
        <creator>Hairani Hairani</creator>
        
        <creator>Dadang Priyanto</creator>
        
        <subject>SMOTE-ENN; data imbalance; SVM; random forest; health dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The performance of machine learning methods in disease classification is affected by the quality of the dataset, one aspect of which is unbalanced data. One example of health data that is unbalanced is diabetes disease data. If unbalanced data is not addressed, it can degrade the performance of the classification method. Therefore, this research proposed the SMOTE-ENN approach to improve the performance of the Support Vector Machine (SVM) and Random Forest classification methods for diabetes disease prediction. The methods used in this research were the SVM and Random Forest classification methods with SMOTE-ENN. The SMOTE-ENN method was used to balance the diabetes data and remove noisy data adjacent to the majority and minority classes. The balanced data was predicted using the SVM and Random Forest methods based on the division of training and testing data with 10-fold cross-validation. The results of this study showed that the Random Forest method with SMOTE-ENN achieved the best performance compared to the SVM method, with an accuracy of 95.8%, sensitivity of 98.3%, and specificity of 92.5%. In addition, the proposed approach (Random Forest with SMOTE-ENN) also obtained the best accuracy compared to the previous studies referenced. Thus, the proposed method can be adopted to predict diabetes in a health application.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_64-A_New_Approach_of_Hybrid_Sampling_SMOTE_and_ENN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Probability Values Based on Na&#239;ve Bayes for Fuzzy Random Regression Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140863</link>
        <id>10.14569/IJACSA.2023.0140863</id>
        <doi>10.14569/IJACSA.2023.0140863</doi>
        <lastModDate>2023-08-30T09:54:13.7230000+00:00</lastModDate>
        
        <creator>Hamijah Mohd Rahman</creator>
        
        <creator>Nureize Arbaiy</creator>
        
        <creator>Chuah Chai Wen</creator>
        
        <creator>Pei-Chun Lin</creator>
        
        <subject>Na&#239;ve Bayes; fuzziness; randomness; probability estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In the process of treating the uncertainties of fuzziness and randomness in real regression applications, fuzzy random regression was introduced to address the limitation of classical regression, which can only fit precise data. However, there is no systematic procedure to identify randomness by means of probability theory. Besides, existing models are mostly concerned with the fuzzy equation without discussing the probability equation, even though randomness plays a pivotal role in the fuzzy random regression model. Hence, this paper proposes a systematic Na&#239;ve Bayes procedure to estimate the probability values needed to handle randomness. The results show that the accuracy of the Na&#239;ve Bayes model can be improved by considering the probability estimation.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_63-Estimating_Probability_Values_Based_on_Na&#239;ve_Bayes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Medical Image Segmentation Framework using Deep Learning and Variational Autoencoders with Conditional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140862</link>
        <id>10.14569/IJACSA.2023.0140862</id>
        <doi>10.14569/IJACSA.2023.0140862</doi>
        <lastModDate>2023-08-30T09:54:13.6930000+00:00</lastModDate>
        
        <creator>Dustakar Surendra Rao</creator>
        
        <creator>L. Koteswara Rao</creator>
        
        <creator>Bhagyaraju Vipparthi</creator>
        
        <subject>Deep learning; variational autoencoders; CNN; medical image segmentation; automated diagnosis and treatment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Achieving correspondence between images through reliable image alignment is a highly difficult challenge, yet it is essential for numerous therapeutic activities such as combining images, creating tissue atlases, and tracking the development of tumors. This research presents a paradigm for segmenting healthcare images utilizing deep learning variational autoencoders and conditional neural networks. Image segmentation is one of the essential tasks in machine vision. Due to the requirement for low-level spatial data, this task is more challenging than other vision-related problems. By utilizing VAEs&#39; capacity to learn latent representations and combining CNNs in a conditional setting, the algorithm generates accurate and efficient segmentation results. Moreover, to learn the latent-space representation from labelled clinical images, the VAE is trained as part of the suggested system. After that, the learned representations and ground-truth categorizations are used to train the conditional neural network. Furthermore, during the inference stage, the trained model is utilized to accurately segment the regions of interest in new medical images. Experimental findings on several healthcare imaging databases show improved segmentation precision, highlighting the method&#39;s ability to enhance automated diagnosis and treatment. The suggested Deep Learning and Variational Autoencoders with Conditional Neural Networks (DL-VAE-CNN) approach is thus employed to solve the pixel-level classification problem that plagues earlier investigations.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_62-An_Automated_Medical_Image_Segmentation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of the Use of the Video Game SimCity on the Development of Critical Thinking in Students: A Quantitative Experimental Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140861</link>
        <id>10.14569/IJACSA.2023.0140861</id>
        <doi>10.14569/IJACSA.2023.0140861</doi>
        <lastModDate>2023-08-30T09:54:13.6770000+00:00</lastModDate>
        
        <creator>Jorge Luis Torres-Loayza</creator>
        
        <creator>Grunilda Telma Reymer-Morales</creator>
        
        <creator>Benjam&#237;n Maraza-Quispe</creator>
        
        <subject>SimCity; video games; critical thinking; critical learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The objective of the research is to determine to what extent the use of the SimCity video game allows the development of critical thinking in the teaching-learning processes of students. The methodology applied was a quantitative, experimental study, working with a sample of 25 students selected through simple random sampling from a population of 100 students. Ten sessions were developed using the SimCity video game, and the Watson-Glaser pretest and posttest of the skills and abilities required to develop critical thinking were applied; the dimensions measured were inferences, assumptions, deductive reasoning, logical interpretation, and evaluation of arguments. The results show that with adequate stimulation through the use of the SimCity video game, critical thinking can develop moderately but effectively in the students; comparison of the pretest and posttest data shows significant progress in terms of scores. Likewise, the effectiveness of the SimCity video game is reflected to a greater extent in inferences and evaluation of arguments, since greater progress was observed in these skills during the posttest evaluations compared to the others, while the interpretation of information showed less progress; skills such as deductive reasoning, inferences, and evaluation of arguments were moderately developed. In conclusion, the use of the SimCity video game allows the development of the skills and abilities underlying critical thinking depending on various factors, such as the way in which it is incorporated into the curriculum, the orientation and guidance of teachers, and the way in which reflection and analysis are carried out after the game experience.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_61-Impact_of_the_Use_of_the_Video_Game_SimCity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Framework for Context-Aware Semantic Service Provisioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140860</link>
        <id>10.14569/IJACSA.2023.0140860</id>
        <doi>10.14569/IJACSA.2023.0140860</doi>
        <lastModDate>2023-08-30T09:54:13.6600000+00:00</lastModDate>
        
        <creator>Wael Haider</creator>
        
        <creator>Hatem Abdelkader</creator>
        
        <creator>Amira Abdelwahab</creator>
        
        <subject>Internet of Things (IoT); Web of Things (WoT); Web of Objects (WoOs); context-awareness; service provisioning; interoperability; ontology; OWL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Web-hosted Internet of Things (IoT) applications are the next logical step in the recent endeavor by academia and industry to design and standardize new communication protocols for smart objects. Context awareness is defined as the property of a system that employs context to provide related information or services to the user, where the relationship is based on the user&#39;s task. Therefore, context-aware service discovery can be defined as utilizing context information to discover the most relevant services for the user. Merging context-aware concepts with the IoT facilitates the development of IoT systems that depend on complex environments with many sensors and actuators, users, and their environment. The main objective of this study is to design an abstract framework for provisioning smart objects as a service based on context-aware concepts while considering constraints of bandwidth, scalability, and performance. The proposed framework&#39;s building blocks include data acquisition and management services, data aggregation, and rule reasoning. The proposed framework is validated and evaluated by constructing an IoT network simulation, testing access to the service both in the traditional manner and according to the proposed framework, and comparing the results.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_60-A_Proposed_Framework_for_Context_Aware_Semantic_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human-object Behavior Analysis Based on Interaction Feature Generation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140859</link>
        <id>10.14569/IJACSA.2023.0140859</id>
        <doi>10.14569/IJACSA.2023.0140859</doi>
        <lastModDate>2023-08-30T09:54:13.6130000+00:00</lastModDate>
        
        <creator>Qing Ye</creator>
        
        <creator>Xiuju Xu</creator>
        
        <creator>Rui Li</creator>
        
        <subject>Two-stream human-object behavior analysis network; interaction feature generation algorithm; interactive feature information; ResNeXt; graph convolutional neural networks; graph model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Aiming at the problem of insufficient utilization of the interactive feature information between humans and objects, this paper proposes a two-stream human-object behavior analysis network based on an interaction feature generation algorithm. The network extracts human-object feature information and interactive feature information respectively. When extracting human-object feature information, considering that ResNeXt has powerful feature expression ability, this network is used to extract human and object features from images. When extracting interactive feature information between humans and objects, an interaction feature generation algorithm is proposed, which uses the feature reasoning ability of graph convolutional neural networks. A graph model is constructed by taking humans and objects as nodes and the interactions between them as edges. According to the interaction feature generation algorithm, the graph model is updated by traversing nodes, and new interactive features are generated during this process. Finally, the human and object feature information and the human-object interaction feature information are fused and sent to the classification network for behavior recognition, so as to fully utilize both kinds of feature information. The human-object behavior analysis network is experimentally verified. The results show that the accuracy of the network is significantly improved on the HICO-DET and V-COCO datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_59-Human_object_Behavior_Analysis_Based_on_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Instrument for Measuring Science, Technology, Engineering, and Mathematics: Digital Educational Game Acceptance and Player Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140858</link>
        <id>10.14569/IJACSA.2023.0140858</id>
        <doi>10.14569/IJACSA.2023.0140858</doi>
        <lastModDate>2023-08-30T09:54:13.5830000+00:00</lastModDate>
        
        <creator>Husna Hafiza R. Azami</creator>
        
        <creator>Roslina Ibrahim</creator>
        
        <creator>Suraya Masrom</creator>
        
        <creator>Rasimah Che Mohd Yusoff</creator>
        
        <creator>Suraya Yaacob</creator>
        
        <subject>Game; education; acceptance; experience; stem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Digital educational games (DEGs) are effective learning tools for subjects related to science, technology, engineering, and mathematics (STEM), yet they are still not widely used among students. Existing instruments typically assess player experience (PX) and acceptance separately, even though both are essential aspects of DEG evaluation that can be merged and analyzed concurrently in a thorough manner. This study, therefore, proposes an integrated instrument called DEGAPX that combines fundamental technology acceptance factors with a broad range of PX criteria. The proposed instrument can be used by educators and game designers in the selection and development of DEGs that satisfy the needs of target users. This article describes the process of developing the scale instrument and validating it through two rounds of expert judgment and among students after using three DEGs related to STEM. The proposed instrument, which comprised 15 constructs measured by 67 items, was proven to be reliable and valid.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_58-An_Integrated_Instrument_for_Measuring_Science.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Transfer Learning and Deep Neural Networks for Accurate Medical Disease Diagnosis from Multi-Modal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140857</link>
        <id>10.14569/IJACSA.2023.0140857</id>
        <doi>10.14569/IJACSA.2023.0140857</doi>
        <lastModDate>2023-08-30T09:54:13.5670000+00:00</lastModDate>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Abdul Rahman Mohammed Al-Ansari</creator>
        
        <creator>Taviti Naidu Gongada</creator>
        
        <creator>K. Aanandha Saravanan</creator>
        
        <creator>Divvela Srinivasa Rao</creator>
        
        <creator>Ricardo Fernando Cosio Borda</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Transfer learning; deep neural network; disease diagnosis; multi-modal data; Alexnet; GLCM; DNN; pre-trained model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Effective patient treatment and care depend heavily on accurate disease diagnosis. The availability of multi-modal medical data in recent years, such as genetic profiles, clinical reports, and imaging scans, has created new possibilities for increasing diagnostic precision. However, because of their inherent complexity and variability, analyzing and integrating these varied data types present significant challenges. In order to overcome the difficulties of precise medical disease diagnosis using multi-modal data, this research suggests a novel approach that combines Transfer Learning (TL) and Deep Neural Networks (DNN). An image dataset that included images from various stages of Alzheimer&#39;s disease (AD) was collected from the Kaggle repository. In order to improve the quality of the signals or images for further analysis, a Gaussian filter is applied during the preprocessing stage to smooth out and reduce noise in the input data. The features are then extracted using the Gray-Level Co-occurrence Matrix (GLCM). TL makes it possible for the model to use the information gained from previously trained models in other domains, requiring less training time and data. The pre-trained model used in this approach is AlexNet. The classification of the disease is done using a DNN. This integrated approach improves diagnostic precision, particularly in scenarios with limited data availability. The study assesses the effectiveness of the suggested method for diagnosing AD, focusing on evaluation metrics such as accuracy, precision, miss rate, recall, F1-score, and the Area under the Receiver Operating Characteristic Curve (AUC-ROC). The approach is a promising tool for medical professionals to make more accurate and timely diagnoses, which will ultimately improve patient outcomes and healthcare practices. The results show significant improvements in accuracy (99.32%).</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_57-Integrating_Transfer_Learning_and_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mechatronics Design and Robotic Simulation of Serial Manipulators to Perform Automation Tasks in the Avocado Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140856</link>
        <id>10.14569/IJACSA.2023.0140856</id>
        <doi>10.14569/IJACSA.2023.0140856</doi>
        <lastModDate>2023-08-30T09:54:13.5370000+00:00</lastModDate>
        
        <creator>Carlos Paredes</creator>
        
        <creator>Ricardo Palomares</creator>
        
        <creator>Josmell Alva</creator>
        
        <creator>Jos&#233; Cornejo</creator>
        
        <subject>Mechatronic design; inverse kinematics; dynamic modeling; pick and place; palletizing; Scara robot; universal robot; robot manipulators; path tracking simulation; kinematic control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Peru is considered one of the principal agroindustrial avocado exporters worldwide. At the beginning of 2022, the volume exported was 8.3% higher than in 2021, so the design and simulation of a pick-and-place and palletizing cell for agro-exporting companies in the La Libertad region was proposed. A methodology was followed that presented a flow diagram of the design of the cell, considering the size of the avocado and the dimensions of the box-type packaging. The forward and inverse kinematics for the Scara T6 and UR10 robots were developed in Matlab according to the Denavit-Hartenberg algorithm, and 3D CAD, dynamic modeling, and trajectory calculation were performed in Solidworks using a &quot;planner&quot; algorithm developed in Matlab, which takes into account the start and end points, maximum speeds, and travel time of each robot. Then, in CoppeliaSim, the working environment of the cell and the robots with their respective configurations were created. Finally, the simulation of trajectories was performed, describing the expected movement, and the task completion time was calculated: the Scara T6 robot had a working time of 1.18 s and the UR10 of 2.32 s. For 2023 - 2025, its implementation is proposed at the Camposol Company located in the district of Chao - La Libertad, considering the dynamic control of the system.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_56-Mechatronics_Design_and_Robotic_Simulation_of_Serial_Manipulators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Semantic Segmentation Method of Remote Sensing Image Based on Self-supervised Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140855</link>
        <id>10.14569/IJACSA.2023.0140855</id>
        <doi>10.14569/IJACSA.2023.0140855</doi>
        <lastModDate>2023-08-30T09:54:13.5200000+00:00</lastModDate>
        
        <creator>Wenbo Zhang</creator>
        
        <creator>Achuan Wang</creator>
        
        <subject>Computer vision; deep learning; self-supervised learning; remote sensing image; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>To address the challenge of requiring a large amount of manually annotated data for semantic segmentation of remote sensing images using deep learning, a method based on self-supervised learning is proposed. Firstly, to simultaneously learn the global and local features of remote sensing images, a self-supervised learning network structure called TBSNet (Triple-Branch Self-supervised Network) is constructed. This network comprises an image transformation prediction branch, a global contrastive learning branch, and a local contrastive learning branch. The contrastive learning part of the network employs a novel data augmentation method to simulate positive pairs of the same remote sensing images under different weather conditions, enhancing the model&#39;s performance. Meanwhile, the model integrates channel attention and spatial attention mechanisms in the projection head structure of the global contrastive learning branch, and replaces a fully connected layer with a convolutional layer in the local contrastive learning branch, thus improving the model&#39;s feature extraction ability. Secondly, to mitigate the high computational cost during the pre-training phase, an algorithm optimization strategy is proposed using the TracIn method and sequential optimization theory, which increases the efficiency of pre-training. Lastly, by fine-tuning the model with a small amount of annotated data, effective semantic segmentation of remote sensing images is achieved even with limited annotated data. The experimental results indicate that with only 10% annotated data, the overall accuracy (OA) and recall of this model have improved by 4.60% and 4.88% respectively, compared to the traditional self-supervised model SimCLR (A Simple Framework for Contrastive Learning of Visual Representations). This provides significant application value for tasks such as semantic segmentation in remote sensing imagery and other computer vision domains.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_55-Research_on_Semantic_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Modelling of Hand Grasping and Wrist Exoskeleton: An EMG-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140854</link>
        <id>10.14569/IJACSA.2023.0140854</id>
        <doi>10.14569/IJACSA.2023.0140854</doi>
        <lastModDate>2023-08-30T09:54:13.4900000+00:00</lastModDate>
        
        <creator>Mohd Safirin Bin Karis</creator>
        
        <creator>Hyreil Anuar Bin Kasdirin</creator>
        
        <creator>Norafizah Binti Abas</creator>
        
        <creator>Muhammad Noorazlan Shah Bin Zainudin</creator>
        
        <creator>Sufri Bin Muhammad</creator>
        
        <creator>Mior Muhammad Nazmi Firdaus Bin Mior Fadzil</creator>
        
        <subject>Hand grasping; wrist control; ANN; ANFIS; exoskeleton wrist design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Human motion intention plays an important role in designing exoskeleton hand-wrist control for post-stroke survivors, especially for hand grasping movement. Challenges arise because the sEMG signal is frequently affected by noise from its surroundings. To overcome these issues, this paper aims to establish the relationship between the sEMG signal, wrist angle, and handgrip force. ANN and ANFIS were the two approaches used to design dynamic models of hand grasping and wrist movement at different MVC levels. Input sEMG signal values from the FDS and EDC muscles were used to predict the hand grip force as the output signal. The experimental results show that the sEMG MVC signal level was directly proportional to hand grip force production, while the hand grip force values depended on the wrist angle position. It is also concluded that hand grip force production is higher when the wrist is in flexion than in extension. The strong relationship between the sEMG signal and wrist angle improved the estimation of hand grip force and thus the myoelectric control device for the exoskeleton hand. Moreover, ANN improved on the estimation accuracy of ANFIS by 0.22% in the summed integral absolute error, using the same testing dataset from the experiment.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_54-Dynamic_Modelling_of_Hand_Grasping_and_Wrist_Exoskeleton.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Campus Network Intrusion Detection Based on Gated Recurrent Neural Network and Domain Generation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140853</link>
        <id>10.14569/IJACSA.2023.0140853</id>
        <doi>10.14569/IJACSA.2023.0140853</doi>
        <lastModDate>2023-08-30T09:54:13.4730000+00:00</lastModDate>
        
        <creator>Qi Rong</creator>
        
        <creator>Guang Zhao</creator>
        
        <subject>Gated recurrent; domain generation algorithm; campus network; threat detection; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Network attacks are diverse, rare, and widely generalized, which has made the exploration and construction of network information flow packet threat detection systems a hot research topic in preventing network attacks. This study therefore establishes a network data threat detection model based on traditional network threat detection systems and deep learning neural networks. A convolutional neural network and data enhancement technology are used to optimize the model and improve the accuracy of recognizing rare data. The experiment confirms that this detection model has recognition probabilities of approximately 11% and 42% for two rare attacks when N=1. When N=2, the probabilities are 52% and 78%; when N=3, approximately 85% and 92%; and when N=4, about 58% and 68%, respectively, with N=3 giving the best recognition effect. In addition, the model&#39;s recognition efficiency for malicious domain name attacks and normal data remains around 90%, a significant advantage over traditional detection systems. The proposed network data flow threat detection model, which integrates a Gated Recurrent Neural Network and a Domain Generation Algorithm, therefore has practical value and feasibility.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_53-Campus_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pairwise Test Case Generation using (1+1) Evolutionary Algorithm for Software Product Line Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140852</link>
        <id>10.14569/IJACSA.2023.0140852</id>
        <doi>10.14569/IJACSA.2023.0140852</doi>
        <lastModDate>2023-08-30T09:54:13.4430000+00:00</lastModDate>
        
        <creator>Sharafeldin Kabashi Khatir</creator>
        
        <creator>Rabatul Aduni Binti Sulaiman</creator>
        
        <creator>Mohammed Adam Kunna Azrag</creator>
        
        <creator>Jasni Mohamad Zain</creator>
        
        <creator>Julius Beneoluchi Odili</creator>
        
        <creator>Samer Ali Al-Shami</creator>
        
        <subject>SPL; SPL testing; combinatorial testing; pairwise testing; evolutionary algorithm; 1+1 EA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Software product lines (SPLs) are groups of similar software systems that share some commonalities but stand out from one another in terms of the features they offer. Over the past few decades, SPLs have been the focus of a great deal of study and implementation in both the academic and commercial sectors. Using SPLs has been shown to improve product customization and decrease time to market. Additional difficulties arise when testing SPLs because it is impractical to test all possible product permutations. The use of combinatorial testing in SPL testing has been the subject of extensive study in recent years. The purpose of this study is to gather and analyze data on combinatorial testing applications in SPL, apply pairwise testing using the (1+1) evolutionary algorithm to SPL across four case studies, and assess the algorithm&#39;s efficacy using predetermined evaluation criteria. According to the findings, the performance of this technique is superior when the case study is larger, that is, when it has a higher number of features, than when the case study is smaller in scale.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_52-Pairwise_Test_Case_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methodological Insights Towards Leveraging Performance in Video Object Tracking and Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140851</link>
        <id>10.14569/IJACSA.2023.0140851</id>
        <doi>10.14569/IJACSA.2023.0140851</doi>
        <lastModDate>2023-08-30T09:54:13.4270000+00:00</lastModDate>
        
        <creator>Divyaprabha</creator>
        
        <creator>M. Z Kurian</creator>
        
        <subject>Object detection; object tracking; video; visual field; surveillance system; video feed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Video Object Detection and Tracking (VODT), one of the integral operations of present-day surveillance systems, provides a way to identify and track a target object autonomously and seamlessly within the visual field. However, the challenges associated with video feeds are immensely high, and the scene context is beyond human control, posing an impediment to a successful VODT model. The presented work discusses the effectiveness of existing VODT approaches under the identified taxonomies, viz. satellite-based, remote sensing-based, unmanned-based, real-time tracking-based, behavioral analysis and event detection-based approaches, integration of multiple data sources, and privacy and ethics. Further, the research trend is analyzed in terms of cumulative publications and evolving methods to identify the frequently used methodologies in VODT. The results of the review show a prominent research gap across manifold attributes that must be addressed to improve the performance of VODT.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_51-Methodological_Insights_Towards_Leveraging_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Improving Piano Performance Evaluation Method in Piano Assisted Online Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140850</link>
        <id>10.14569/IJACSA.2023.0140850</id>
        <doi>10.14569/IJACSA.2023.0140850</doi>
        <lastModDate>2023-08-30T09:54:13.4100000+00:00</lastModDate>
        
        <creator>Huayi Qi</creator>
        
        <creator>Chunhua She</creator>
        
        <subject>Short-term memory network; attention mechanism; musical instrument digital interface; online education; piano performance evaluation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the continuous progress of science and technology and the popularization of the Internet, online piano education has gradually emerged. This educational model provides piano learning resources and communication platforms through the network, so that students can learn piano at home anytime and anywhere. However, there are still problems with the evaluation methods used in piano-assisted online education, which hinder its development. Aiming at the problem that piano-assisted online education is difficult to evaluate correctly, this paper proposes to integrate a bidirectional long short-term memory network into the musical instrument digital interface piano performance evaluation model, and to incorporate an attention mechanism into the bidirectional long short-term memory network, with the goal of improving the evaluation accuracy of the model. In a comparison experiment of evaluation models, the accuracy of the bidirectional long short-term memory network model is 0.91, significantly higher than the comparison models. In addition, an empirical analysis of the model finds that a piano online education course integrating the model can improve students&#39; performance-level scores and promote their enthusiasm for participation. These results indicate that the digital interface piano performance evaluation model can not only evaluate digital interface piano performance more accurately but also promote the development of online piano education.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_50-Research_on_Improving_Piano_Performance_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mechanism for Bitcoin Price Forecasting using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140849</link>
        <id>10.14569/IJACSA.2023.0140849</id>
        <doi>10.14569/IJACSA.2023.0140849</doi>
        <lastModDate>2023-08-30T09:54:13.3800000+00:00</lastModDate>
        
        <creator>Karamath Ateeq</creator>
        
        <creator>Ahmed Abdelrahim Al Zarooni</creator>
        
        <creator>Abdur Rehman</creator>
        
        <creator>Muhammd Adna Khan</creator>
        
        <subject>Currency; bitcoin; LSTM; forecasting; models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Researchers and investors have recently become interested in cryptocurrency price forecasting, and the most important series to consider is the Bitcoin exchange rate. Some researchers have aimed at leveraging the technical and financial characteristics of Bitcoin to create predictive models, while others have utilized conventional statistical methods to explain these factors. This article explores an LSTM model for forecasting the value of Bitcoin using historical price series. Future Bitcoin prices are predicted by developing an accurate LSTM forecasting model, building an advanced LSTM forecasting model (LSTM-BTC), and comparing against past Bitcoin prices; the resulting model shows very high accuracy in predicting future prices. The performance of the proposed model is evaluated using five different datasets with monthly, weekly, daily, hourly, and minute-by-minute Bitcoin price data, with records covering January 1, 2021, to March 31, 2022. The results confirm the better forecasting accuracy of the proposed LSTM-BTC model. The analysis includes the mean squared error (MSE), RMSE, MAPE, and MAE of Bitcoin price forecasting. Compared to the conventional LSTM model, the suggested LSTM-BTC model performs better. The contribution of this research is a new framework for predicting the price of Bitcoin that addresses the selection and evaluation of input variables in LSTM without making firm assumptions about the data. The outcomes demonstrate its potential use in industry forecasting applications, including different cryptocurrencies, health data, and economic time series.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_49-A_Mechanism_for_Bitcoin_Price_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Metaheuristic Model for Efficient Analytical Business Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140848</link>
        <id>10.14569/IJACSA.2023.0140848</id>
        <doi>10.14569/IJACSA.2023.0140848</doi>
        <lastModDate>2023-08-30T09:54:13.3630000+00:00</lastModDate>
        
        <creator>Marischa Elveny</creator>
        
        <creator>Mahyuddin K. M Nasution</creator>
        
        <creator>Rahmad B. Y Syah</creator>
        
        <subject>Efficiency; analytics business; predictions; Particle Swarm Optimization (PSO); Gravitational Search Optimization (GSO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Accurate and efficient business analytical predictions are essential for decision making in today&#39;s competitive landscape. They involve using data analysis, statistical methods, and predictive modeling to extract insights and make decisions. Current trends focus on applying business analytics to predictions. Optimizing business analytics predictions involves increasing the accuracy and efficiency of predictive models used to forecast future trends, behavior, and outcomes in the business environment. By analyzing data and developing optimization strategies, businesses can improve their operations, reduce costs, and increase profits. The analytic business optimization method uses a hybrid PSO (Particle Swarm Optimization) and GSO (Gravitational Search Optimization) algorithm to increase the efficiency and effectiveness of the decision-making process in business. In this approach, the PSO algorithm is used to explore the search space and find the global best solution, while the GSO algorithm is used to refine the search around the global best solution. The hybrid meta-heuristic method optimizes the three components of business analytics: descriptive, predictive, and prescriptive. The hybrid model is designed to strike a balance between exploration and exploitation, ensuring effective search and convergence to high-quality solutions. The results show that the R2 value for each optimization parameter is close to one, indicating a better-fitting model. The RMSE value measures the average prediction error, with a lower error indicating that the model is performing well. MSE represents the mean of the squared difference between the predicted and optimized values. A lower error value indicates a higher level of accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_48-A_Hybrid_Metaheuristic_Model_for_Efficient_Analytical_Business.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Transfer Learning Strategies for Effective Kidney Tumor Classification with CT Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140847</link>
        <id>10.14569/IJACSA.2023.0140847</id>
        <doi>10.14569/IJACSA.2023.0140847</doi>
        <lastModDate>2023-08-30T09:54:13.3330000+00:00</lastModDate>
        
        <creator>Muneer Majid</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <creator>Shahnawaz Ayoub</creator>
        
        <creator>Farhana Khan</creator>
        
        <creator>Faheem Ahmad Reegu</creator>
        
        <creator>Mohammad Shuaib Mir</creator>
        
        <creator>Wassim Jaziri</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <subject>Kidney; kidney tumor; automatic diagnosis; machine learning algorithms; CT imaging; deep learning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Kidney tumours (KTs) rank seventh in global tumour prevalence among both males and females, posing a significant health challenge worldwide. Early detection of KT plays a crucial role in reducing mortality rates, mitigating side effects, and effectively treating the tumor. In this context, computer-assisted diagnosis (CAD) offers promising benefits, such as improved test accuracy, cost reduction, and time-saving compared to manual detection, which is known to be laborious and time-consuming. This research investigates the feasibility of employing machine learning (ML) and Fine-tuned Transfer Learning (TL) to improve KT detection. CT images of individuals with and without kidney tumors were utilized to train the models. The study explores three different image dimensions: 32x32, 64x64, and 128x128 pixels, employing the Grey Level Co-occurrence Matrix (GLCM) for feature engineering. The GLCM uses pixel pairs&#39; distance (d) and angle (θ) to calculate their occurrence in the image. Various detection approaches, including Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (GB), and Light Gradient Boosting Model (LGBM), were applied to identify KTs in CT images for diagnostic purposes. Additionally, the study experimented with fine-tuned ResNet-101 and DenseNet-121 models for more effective computer-assisted diagnosis of KT. Evaluation of the efficient diagnostics of fine-tuned ResNet-101 and DenseNet-121 was conducted by comparing their performance with four ML models (RF, SVM, LGBM, and GB). Notably, ResNet-101 and DenseNet-121 achieved the highest accuracy of 94.09%, precision of 95.10%, recall of 93.5%, and F1-score of 93.95% when using 32x32 input images. These results outperformed other models and even surpassed state-of-the-art methods. This research demonstrates the potential of accurately and efficiently classifying KT in CT kidney scans using ML approaches. The use of fine-tuned ResNet-101 and DenseNet-121 shows promising results and opens up avenues for enhanced computer-assisted diagnosis of kidney tumors.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_47-Enhanced_Transfer_Learning_Strategies_for_Effective_Kidney_Tumor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Analysis of Hydraulic Control System of Engineering Robot Arm Based on ADAMS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140846</link>
        <id>10.14569/IJACSA.2023.0140846</id>
        <doi>10.14569/IJACSA.2023.0140846</doi>
        <lastModDate>2023-08-30T09:54:13.3170000+00:00</lastModDate>
        
        <creator>Haiqing Wu</creator>
        
        <subject>Hydraulic control systems; ADAMS; simulation analysis; engineering robot arm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Substantial trenching capacity, communication capabilities, and simple configuration are just a few of the many benefits that make Hydraulic Control Systems (HCS) central to the physical devices used in geotechnical trenching. These characteristics have led to widespread application in water conservation and hydroelectric technology, architectural construction, local construction, and other fields. In this article, an HCS is proposed for an engineering robot arm. Subsequently, a digital model of the functional device is constructed using ADAMS (Automatic Dynamic Analysis of Mechanical Systems), a simulation program, by incorporating the associated constraints and workload. With the help of a simulation model of the HCS&#39;s working apparatus, this research obtains the fundamental factors of the excavator&#39;s operating range and the pressure condition variation curve at the location of every Hydraulic Actuator (HA). The findings, which provide a conceptual framework and enhancements for the control system equipment, significantly raise the bar for China&#39;s excavator architecture, expand digger efficiency, and foster the firm&#39;s fast growth. An in-depth examination of the HCS&#39;s current operating condition, including an examination of the simulated model&#39;s transmission phase, can be carried out. The findings provide a theoretical foundation for designing an optimal HCS.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_46-Simulation_Analysis_of_Hydraulic_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Logistics Frequent Path Data Mining Based on Statistical Density</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140845</link>
        <id>10.14569/IJACSA.2023.0140845</id>
        <doi>10.14569/IJACSA.2023.0140845</doi>
        <lastModDate>2023-08-30T09:54:13.2870000+00:00</lastModDate>
        
        <creator>Fengju Hou</creator>
        
        <subject>Statistical density; logistics; the path; the simulation data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Online sales and the real economy saw sharp increases and rapid development following the novel coronavirus outbreak. The e-commerce mode on the Internet has attracted much attention, and users&#39; purchases on the Internet have reached unprecedented levels. Among the many express companies, as the ones closest to consumers, they must still provide high-quality products in the face of huge market demand. Urban terminal logistics refers to express services that meet the needs of terminal customers under the requirements of logistics centralization and customer diversification. However, the geographical distribution of logistics services in China is extensive, and customers&#39; requirements are complex; practical problems in Chinese logistics enterprises significantly restrict the quality of logistics services. The final kilometer of distribution is composed of many links and is a very cumbersome chain; it includes determining the distribution scope, loading goods, arranging the distribution sequence, scheduling vehicles or personnel, and planning distribution routes. A Genetic Algorithm (GA) fused with a local search method is proposed for fast logistics data modeling and mining simulation analysis. Practical examples and literature data demonstrate the method&#39;s accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_45-Simulation_of_Logistics_Frequent_Path_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Improved Ant Colony Algorithm Integrating Adaptive Parameter Configuration in Robot Mobile Path Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140844</link>
        <id>10.14569/IJACSA.2023.0140844</id>
        <doi>10.14569/IJACSA.2023.0140844</doi>
        <lastModDate>2023-08-30T09:54:13.2530000+00:00</lastModDate>
        
        <creator>Jinli Han</creator>
        
        <subject>Ant colony algorithm; robots; mobile path planning; obstacle avoidance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Against the background of continuing Industry 4.0 reform, the market demand for mobile robots in the world&#39;s major economies is gradually increasing. To improve the quality of mobile robot path planning and obstacle avoidance, this research adjusted the node selection method, pheromone update mechanism, transition probability, and volatility coefficient calculation of the ant colony algorithm, and improved the search direction setting and cost estimation calculation of the A* algorithm. A robot movement path planning model was thus designed based on the improved ant colony algorithm and the A* algorithm. Simulation experiments on grid maps show that the planning model built on the improved algorithm, the traditional ant colony algorithm, the longicorn whisker search algorithm, and the particle swarm algorithm converged after 8, 37, 23, and 26 iterations, respectively. The minimum path lengths after convergence were 13.24 m, 17.82 m, 16.24 m, and 17.05 m, respectively. When the edge length of the grid map is 100 m, the minimum planned path lengths and total moving times of these four models are 49 m, 104 m, 75 m, and 93 m, and 49 s, 142 s, 93 s, and 127 s, respectively. This indicates that the model designed in this study can effectively shorten the mobile path and travel time while completing mobile tasks. The results of this study have reference value for optimizing the robot&#39;s movement mode and obstacle avoidance ability.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_44-Application_of_Improved_Ant_Colony_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prostate Cancer Detection and Analysis using Advanced Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140843</link>
        <id>10.14569/IJACSA.2023.0140843</id>
        <doi>10.14569/IJACSA.2023.0140843</doi>
        <lastModDate>2023-08-30T09:54:13.2230000+00:00</lastModDate>
        
        <creator>Mowafaq Salem Alzboon</creator>
        
        <creator>Mohammad Subhi Al-Batah</creator>
        
        <subject>Prostate cancer; machine learning; clinical data; radiological data; diagnosis; medical diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Prostate cancer is one of the leading causes of cancer-related deaths among men. Early detection of prostate cancer is essential in improving the survival rate of patients. This study aimed to develop a machine learning model for detecting and diagnosing prostate cancer using clinical and radiological data. The dataset consists of 200 patients with prostate cancer and 200 healthy controls, with features extracted from their clinical and radiological data. Several machine learning models, including logistic regression, decision tree, random forest, support vector machine, and neural network models, were then trained and evaluated using 10-fold cross-validation. Our results show that the random forest model achieved the highest accuracy of 0.92, with a sensitivity of 0.95 and a specificity of 0.89. The decision tree model achieved a nearly similar accuracy of 0.91, while the logistic regression, support vector machine, and neural network models achieved lower accuracies of 0.86, 0.87, and 0.88, respectively. Our findings suggest that machine learning models can effectively detect and diagnose prostate cancer using clinical and radiological data, and that the random forest model may be the most suitable for this task.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_43-Prostate_Cancer_Detection_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Local Path Planning for Mobile Robots based on PRO-Dueling Deep Q-Network (DQN) Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140842</link>
        <id>10.14569/IJACSA.2023.0140842</id>
        <doi>10.14569/IJACSA.2023.0140842</doi>
        <lastModDate>2023-08-30T09:54:13.2070000+00:00</lastModDate>
        
        <creator>Yaoyu Zhang</creator>
        
        <creator>Caihong Li</creator>
        
        <creator>Guosheng Zhang</creator>
        
        <creator>Ruihong Zhou</creator>
        
        <creator>Zhenying Liang</creator>
        
        <subject>Deep Q-Network (DQN) algorithm; local path planning; mobile robot; Pro-Dueling DQN algorithm; SumTree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This paper proposes a Pro-Dueling DQN algorithm to solve the problems of slow convergence and wasted effective experience in the traditional DQN (Deep Q-Network) algorithm for local path planning of mobile robots. The new algorithm introduces a priority experience replay mechanism based on SumTree to avoid forgetting effective learning experiences as the number of samples in the experience pool increases. A more detailed reward and punishment function is designed to reduce the blindness of experience extraction in the early stages of training. The feasibility of the algorithm is verified through comparative experiments on the ROS simulation platform and in a real scene, respectively. The results show that the designed Pro-Dueling DQN algorithm converges faster and plans shorter paths than the original DQN algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_42-Research_on_the_Local_Path_Planning_for_Mobile_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Cyber Security on Preventing and Mitigating Electronic Crimes in the Jordanian Banking Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140841</link>
        <id>10.14569/IJACSA.2023.0140841</id>
        <doi>10.14569/IJACSA.2023.0140841</doi>
        <lastModDate>2023-08-30T09:54:13.1770000+00:00</lastModDate>
        
        <creator>Tamer Bani Amer</creator>
        
        <creator>Mohammad Ibrahim Ahmed Al-Omar</creator>
        
        <subject>Cyber security; electronic crime; Jordanian banks; banking sector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>As technology advances and cyber threats continue to evolve, cyber security professionals play a critical role in developing and implementing robust security measures, staying ahead of potential risks, and mitigating the impact of cyber incidents. Many studies have examined the impact of cyber security on banks without focusing on electronic crimes. Despite its importance, to the best of our knowledge, there are no studies on the impact of cyber security on mitigating electronic crimes in the banking sector. Therefore, the purpose of this study is to ascertain how cyber security affects electronic crimes in the Jordanian banking industry. The study sample consisted of 270 senior Jordanian managers and employees who understand the importance of cyber security in the banking sector, drawn from 14 Jordanian commercial banks listed on the Amman stock exchange. The study used SPSS to evaluate how banks can enhance network security infrastructure to prevent unauthorized access and data breaches, and to determine the role of cyber security in granting banks a competitive advantage. A relative importance index (RII) was computed to rank the importance of the variables&#39; statements and test the hypotheses. The results found that the most important method by which banks can effectively mitigate the risk of electronic crimes and ensure the security of customers&#39; financial data is the use of robust encryption technologies to protect customer financial data both in transit and in storage (RII=0.740), with about 81.5% of the sample agreeing. In addition, banks with a strong cyber security system provide a secure platform for digital financial services, which increases their competitive advantage; this statement ranked first in relative importance at both the category level and overall (RII=0.754). The study recommended that the banking industry consistently educate its customers on information security techniques and how to avoid having their accounts hacked, and develop an alert system that can warn both banks and their customers of any possible unauthorized access to a customer&#39;s account or to confidential organizational information.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_41-The_Impact_of_Cyber_Security_on_Preventing_and_Mitigating_Electronic_Crimes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Herd Pigs Based on Improved YOLOv5s Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140840</link>
        <id>10.14569/IJACSA.2023.0140840</id>
        <doi>10.14569/IJACSA.2023.0140840</doi>
        <lastModDate>2023-08-30T09:54:13.1600000+00:00</lastModDate>
        
        <creator>Jianquan LI</creator>
        
        <creator>Xiao WU</creator>
        
        <creator>Yuanlin NING</creator>
        
        <creator>Ying YANG</creator>
        
        <creator>Gang LIU</creator>
        
        <creator>Yang MI</creator>
        
        <subject>Pig; deep learning; computer vision; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Fast and accurate detection technology for individual pigs raised in herds is crucial for subsequent research on counting and disease surveillance. In this paper, we propose an improved lightweight object detection method based on YOLOv5s to improve the speed and accuracy of detecting herd-raised pigs in real-world, complex environments. Specifically, we first introduce a lightweight feature extraction module called C3S, then replace the original large object detection layer with a small object detection layer at the output (head) of YOLOv5s. Finally, we propose a dual adaptive weighted PAN structure to compensate for the feature-map information loss caused by downsampling at the neck of YOLOv5s. Experiments show that our method achieves an accuracy of 95.2%, a recall of 89.1%, a mean Average Precision (mAP) of 95.3%, 3.64M model parameters, a detection speed of 154 frames per second, and a model depth of 183 layers. Compared with the original YOLOv5s model and current state-of-the-art object detection models, our proposed method achieves the best results in terms of mAP and detection speed.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_40-Detection_of_Herd_Pigs_Based_on_Improved_YOLOv5s_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Blockchain Architecture: Leveraging Hybrid Shard Generation and Data Partitioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140839</link>
        <id>10.14569/IJACSA.2023.0140839</id>
        <doi>10.14569/IJACSA.2023.0140839</doi>
        <lastModDate>2023-08-30T09:54:13.1300000+00:00</lastModDate>
        
        <creator>Praveen M Dhulavvagol</creator>
        
        <creator>Prasad M R</creator>
        
        <creator>Niranjan C Kundur</creator>
        
        <creator>Jagadisha N</creator>
        
        <creator>S G Totad</creator>
        
        <subject>Ethereum; shard generation; data partitioning; proof of work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Blockchain technology has gained widespread recognition and adoption in various domains, but its implementation beyond cryptocurrencies faces a significant challenge: poor scalability. The serial execution of transactions in existing blockchain systems hampers transaction throughput and increases network latency, limiting overall system performance. In response to this limitation, this paper proposes a static analysis-driven data partitioning approach to enhance blockchain system scalability. By enabling parallel and distributed transaction execution through a simultaneous block-level transaction approach, the proposed technique substantially improves transaction throughput and reduces network latency. The study employs a hybrid shard generation algorithm within the Geth node of the blockchain network to create multiple shards or partitions. Experimental results indicate promising outcomes, with miners experiencing a speedup of 1.91x and validators 1.90x, along with a substantial 35.34% reduction in network latency. These findings provide valuable insights and scalable solutions, empowering researchers and practitioners to address scalability concerns and promoting broader adoption of blockchain technology across various industries.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_39-Scalable_Blockchain_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advances in Value-based, Policy-based, and Deep Learning-based Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140838</link>
        <id>10.14569/IJACSA.2023.0140838</id>
        <doi>10.14569/IJACSA.2023.0140838</doi>
        <lastModDate>2023-08-30T09:54:13.0970000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Reinforcement learning; value-based algorithms; policy gradient-based reinforcement learning; reinforcement learning with intrinsic rewards; deep learning-based reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Machine learning is a branch of artificial intelligence in which computers use data to teach themselves and improve their problem-solving abilities. In this case, learning is the process by which computers use data and algorithms to build models that improve performance, and it can be divided into supervised learning, unsupervised learning, and reinforcement learning. Among them, reinforcement learning is a learning method in which AI interacts with the environment and finds the optimal strategy through actions, and it means that AI takes certain actions and learns based on the feedback it receives from the environment. In other words, reinforcement learning is a learning algorithm that allows AI to learn by itself and determine the optimal action for the situation by learning to find patterns hidden in a large amount of data collected through trial and error. In this study, we introduce the main reinforcement learning algorithms: value-based algorithms, policy gradient-based reinforcement learning, reinforcement learning with intrinsic rewards, and deep learning-based reinforcement learning. Reinforcement learning is a technology that enables AI to develop its own problem-solving capabilities, and it has recently gained attention among AI learning methods as the usefulness of the algorithms in various industries has become more widely known. In recent years, reinforcement learning has made rapid progress and achieved remarkable results in a variety of fields. Based on these achievements, reinforcement learning has the potential to positively transform human lives. In the future, more advanced forms of reinforcement learning with enhanced interaction with the environment need to be developed.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_38-Advances_in_Value_based_Policy_based_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Cryptocurrency Price using Time Series Data and Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140837</link>
        <id>10.14569/IJACSA.2023.0140837</id>
        <doi>10.14569/IJACSA.2023.0140837</doi>
        <lastModDate>2023-08-30T09:54:13.0830000+00:00</lastModDate>
        
        <creator>Michael Nair</creator>
        
        <creator>Mohamed I. Marie</creator>
        
        <creator>Laila A. Abd-Elmegid</creator>
        
        <subject>Cryptocurrency; deep learning; prediction; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Bitcoin (BTC) is one of the most significant and extensively utilized cryptocurrencies, used in many different financial and business activities. Forecasting cryptocurrency prices is crucial for investors and academics in this industry because of the frequent volatility in the price of this currency. However, the nonlinearity of the cryptocurrency market makes it challenging to evaluate the unique character of its time-series data and to provide accurate price forecasts. Predicting cryptocurrency prices has been the subject of several research studies utilizing machine learning (ML) and deep learning (DL) based methods. This research evaluates five different DL approaches for forecasting the price of the Bitcoin cryptocurrency: recurrent neural networks (RNN), long short-term memory (LSTM), gated recurrent units (GRU), bidirectional LSTM (Bi-LSTM), and 1D convolutional neural networks (CONV1D). The experimental findings demonstrate that the LSTM outperformed RNN, GRU, Bi-LSTM, and CONV1D in terms of prediction accuracy using measures such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), and R-squared score (R2). With RMSE=1978.68268, MAE=1537.14424, MSE=3915185.15068, and R2=0.94383, it may be considered the best method.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_37-Prediction_of_Cryptocurrency_Price_using_Time_Series_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Classification of Fire Weather Index using the SVM-FF Method on Forest Fire in North Sumatra, Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140836</link>
        <id>10.14569/IJACSA.2023.0140836</id>
        <doi>10.14569/IJACSA.2023.0140836</doi>
        <lastModDate>2023-08-30T09:54:13.0500000+00:00</lastModDate>
        
        <creator>Darwis Robinson Manalu</creator>
        
        <creator>Opim Salim Sitompul</creator>
        
        <creator>Herman Mawengkang</creator>
        
        <creator>Muhammad Zarlis</creator>
        
        <subject>Fire weather index; forest fire; support vector machine; SVM-FF model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>As a tropical Southeast Asian country, Indonesia has vast forests. Forest fires occur with varying frequency depending on land and forest conditions in the drought season. The indicator used to mitigate potential forest fires is the behavior of the fire weather index (FWI). Data were gathered from observation stations in North Sumatra province, and the FWI was computed and estimated from these data using the Canadian Forest Fire Weather Index system. Outliers were found in the gathered data. To handle this, it is necessary to classify and predict the dataset with a machine learning approach using the Support Vector Machine Forest Fire (SVM-FF) model, a further development of the earlier c-SVM and v-SVM models. This method includes a balancing parameter obtained by determining the lower and upper limits of a support vector, and allows the balancing parameter value to be negative. The results showed that the FWI was classified into low, medium, high, and extreme levels. The low FWI class has an average value of 0.5, within the 0 to 1 interval. The model&#39;s accuracy and performance improved over its predecessors, the c-SVM and v-SVM, which achieved 0.96 and 0.89, respectively. The SVM-FF model achieved a higher accuracy of 0.99, indicating that it is a useful alternative for classifying and predicting forest fires.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_36-Model_Classification_of_Fire_Weather_Index.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity Advances in SCADA Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140835</link>
        <id>10.14569/IJACSA.2023.0140835</id>
        <doi>10.14569/IJACSA.2023.0140835</doi>
        <lastModDate>2023-08-30T09:54:13.0370000+00:00</lastModDate>
        
        <creator>Bakil Al-Muntaser</creator>
        
        <creator>Mohamad Afendee Mohamed</creator>
        
        <creator>Ammar Yaseen Tuama</creator>
        
        <creator>Imran Ahmad Rana</creator>
        
        <subject>Threat detection; SCADA security; machine learning-based intrusion detection; cyber-physical systems security; insider attack prevention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The management of critical infrastructure heavily relies on Supervisory Control and Data Acquisition (SCADA) systems, but as they become more connected, insider attacks become a greater concern. Insider threat detection systems (IDS) powered by machine learning have emerged as a potential answer to this problem. This review paper examines the most recent developments in machine learning algorithms for insider IDS in SCADA security systems, aimed at identifying and neutralizing insider threats. A thorough analysis of research articles published in 2019 and later, covering a variety of machine learning methods, has been adopted in this review to highlight the difficulties and challenges faced by professionals and how this study contributes to overcoming them. The results show that, in addition to conventional methods, machine-learning-based intrusion detection techniques offer important advantages in identifying complex and covert insider attacks. Finding pertinent insider threat data for model training and guaranteeing data privacy and security remain difficult problems. Ensemble techniques and hybrid strategies show potential for improving detection resilience. In conclusion, machine learning-based insider IDS has the potential to protect critical infrastructures by strengthening SCADA systems against insider attacks. The similarities and differences between cyber-physical systems and SCADA systems, emphasizing security challenges and the potential for mutual improvement, were also reviewed in this study. To be as effective as possible, future research should concentrate on addressing issues with data collection and privacy, investigating the latest developments in technology, and creating hybrid models. By integrating machine learning advancements, SCADA systems can mount a proactive and effective defence against insider attacks, maintaining their dependability and security in the face of emerging threats.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_35-Cybersecurity_Advances_in_SCADA_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Testing the Usability of Serious Game for Low Vision Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140834</link>
        <id>10.14569/IJACSA.2023.0140834</id>
        <doi>10.14569/IJACSA.2023.0140834</doi>
        <lastModDate>2023-08-30T09:54:13.0200000+00:00</lastModDate>
        
        <creator>Nurul Izzah Othman</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <subject>Serious game; learning; low vision; usability; accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Serious games are powerful tools for building language, science, and math knowledge and skills. Despite a growing number of studies on using serious games for learning, children with visual impairment face obstacles when playing these games. Low vision children retain partial vision that can be supported with assistive technology. A 2D serious game for learning mathematics was developed in Unity for low vision children. To enhance the game&#39;s accessibility for low vision children, accessibility elements were implemented in the serious game prototype: screen design (buttons, menus, and navigation), multimedia (text, graphics, audio, and animation), object motion, and language. Upon completion of the serious game, usability testing was conducted to assess its accessibility to low vision children based on usability level, using the observation technique. The overall usability score is good across the tested aspects of effectiveness, efficiency, and user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_34-Testing_the_Usability_of_Serious_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hussein Search Algorithm: A Novel Efficient Searching Algorithm in Constant Time Complexity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140833</link>
        <id>10.14569/IJACSA.2023.0140833</id>
        <doi>10.14569/IJACSA.2023.0140833</doi>
        <lastModDate>2023-08-30T09:54:13.0030000+00:00</lastModDate>
        
        <creator>Omer H Abu El Haijia</creator>
        
        <creator>Arwa H. F. Zabian</creator>
        
        <subject>Binary search; prediction search procedure; prediction cost; constant time complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The Hussein search algorithm addresses the fundamental problem of searching in computer science and aims to enhance the retrieval of data from various data warehouses. Given the vast quantity of data being generated and stored in the cloud, the efficiency of cloud systems is substantially influenced by the manner in which data is saved and retrieved. Searching entails systematically locating a particular item within a large volume of data, and searching algorithms offer methodical strategies for accomplishing this task. A wide array of searching algorithms exists, varying in search procedure, time complexity, and space complexity; the choice of a suitable algorithm depends on factors including the size of the dataset, the distribution of the data, and the desired time and space complexity. This study presents a novel prediction-based searching algorithm named the Hussein search algorithm. It is designed to operate in a straightforward manner and makes use of a simple data structure. The approach relies on fundamental mathematical computations and incorporates the interpolation search algorithm, which introduces a search-by-prediction method for uniformly distributed lists and forecasts the precise position of the queried item. The cost of prediction remains constant and, in numerous instances, is O(1). The Hussein search algorithm exhibits enhanced efficiency in comparison to the binary search and ternary search algorithms, which are widely regarded as among the best methods for searching sorted data.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_33-Hussein_Search_Algorithm_a_Novel_Efficient_Searching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attitude Synchronization and Stabilization for Multi-Satellite Formation Flying with Advanced Angular Velocity Observers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140832</link>
        <id>10.14569/IJACSA.2023.0140832</id>
        <doi>10.14569/IJACSA.2023.0140832</doi>
        <lastModDate>2023-08-30T09:54:12.9900000+00:00</lastModDate>
        
        <creator>Belkacem Kada</creator>
        
        <creator>Khalid Munawar</creator>
        
        <creator>Muhammad Shafique Shaikh</creator>
        
        <subject>Attitude synchronization; coordinated control; finite-time control; high-order sliding mode observer; inter-satellite communication links; leader-following consensus; switching communication topology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This paper focuses on two aspects of satellite formation flying (SFF) control: finite-time attitude synchronization and stabilization under undirected time-varying communication topology and synchronization without angular velocity measurements. First, a distributed nonlinear control law ensures rapid convergence and robust disturbance attenuation. To prove stability, a Lyapunov function involving an integrator term is utilized. Specifically, attitude synchronization and stabilization conditions are derived using graph theory, local finite-time convergence for homogeneous systems, and LaSalle&#39;s non-smooth invariance principle. Second, the requirements for angular velocity measurements are loosened using a distributed high-order sliding mode estimator. Despite the failure of inter-satellite communication links, the homogeneous sliding mode observer precisely estimates the relative angular velocity and provides smooth control to prevent the actuators of the satellites from chattering. Simulations numerically demonstrate the efficacy of the proposed design scheme.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_32-Attitude_Synchronization_and_Stabilization_for_Multi_Satellite_Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Contactless Architecture for Upper Limb Virtual Rehabilitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140831</link>
        <id>10.14569/IJACSA.2023.0140831</id>
        <doi>10.14569/IJACSA.2023.0140831</doi>
        <lastModDate>2023-08-30T09:54:12.9570000+00:00</lastModDate>
        
        <creator>Emilio Valdivia-Cisneros</creator>
        
        <creator>Elizabeth Vidal</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <subject>Human computer interaction (HCI); multimodal; feedback; architecture; upper limb; rehabilitation; contactless</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The use of virtual rehabilitation systems for upper limbs has been implemented using different devices, and their efficiency as a complement to traditional therapies has been demonstrated. Multimodal systems are necessary for virtual rehabilitation because they allow multiple sources of information for both input and output, so that the participant can have a personalized interaction. This work presents a simplified multimodal contactless architecture for virtual reality systems that focuses on upper limb rehabilitation. This research presents the following: 1) the proposed architecture, 2) the implementation of a virtual reality system oriented to activities of daily living, and 3) an evaluation of the user experience and the kinematic results of the implementation. The two experiments showed positive results regarding the implementation of a multimodal contactless virtual rehabilitation system based on the architecture. User experience evaluation showed positive values with regard to six dimensions: perspicuity=2.068, attractiveness=1.987, stimulation=1.703, dependability=1.649, efficiency=1.517, and novelty=1.401. Kinematic evaluation was consistent with the score of the implemented game.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_31-Multimodal_Contactless_Architecture_for_Upper_Limb.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Vision Transformers for Image Processing: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140830</link>
        <id>10.14569/IJACSA.2023.0140830</id>
        <doi>10.14569/IJACSA.2023.0140830</doi>
        <lastModDate>2023-08-30T09:54:12.9430000+00:00</lastModDate>
        
        <creator>Ch. Sita Kameswari</creator>
        
        <creator>Kavitha J</creator>
        
        <creator>T. Srinivas Reddy</creator>
        
        <creator>Balaswamy Chinthaguntla</creator>
        
        <creator>Senthil Kumar Jagatheesaperumal</creator>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <subject>Vision transformers; image processing; natural language processing; image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Image processing technology has become increasingly essential in the education sector, with universities and educational institutions exploring innovative ways to enhance their teaching techniques and provide a better learning experience for their students. Vision transformer-based models have been highly successful in various domains of artificial intelligence, including natural language processing and computer vision, and have generated significant interest from academic and industrial researchers. These models have outperformed other networks, such as convolutional and recurrent networks, on visual benchmarks, making them a promising candidate for image processing applications. This article presents a comprehensive survey of vision transformer models for image processing and computer vision, focusing on their potential applications for student verification in university systems. The models can analyze biometric data, such as student ID cards and facial images, to verify students accurately in real time, a capability that is becoming increasingly vital as online learning continues to gain traction. By accurately verifying the identity of students, universities and educational institutions can guarantee that students have access to the relevant learning materials and resources necessary for their academic success.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_30-An_Overview_of_Vision_Transformers_for_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of an IoT Control and Monitoring System for the Optimization of Shrimp Pools using LoRa Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140829</link>
        <id>10.14569/IJACSA.2023.0140829</id>
        <doi>10.14569/IJACSA.2023.0140829</doi>
        <lastModDate>2023-08-30T09:54:12.9270000+00:00</lastModDate>
        
        <creator>Jos&#233; M. Pereira Pont&#243;n</creator>
        
        <creator>Ver&#243;nica Ojeda</creator>
        
        <creator>V&#237;ctor Asanza</creator>
        
        <creator>Leandro L. Lorente-Leyva</creator>
        
        <creator>Diego H. Peluffo-Ord&#243;&#241;ez</creator>
        
        <subject>Control and monitoring system; shrimp pools; IoT architecture; LoRa technology; fuzzy logic control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The shrimp farming industry in Ecuador, renowned for its shrimp breeding and exportation, faces challenges due to diseases related to variations in abiotic factors during the maturation stage. This is partly attributed to the traditional methods employed in shrimp farms. Consequently, a prototype has been developed for monitoring and controlling abiotic factors using IoT technology. The proposed system consists of three nodes communicating through the LoRa interface. For control purposes, a fuzzy logic system has been implemented that evaluates temperature and dissolved oxygen abiotic factors to determine the state of the aerator, updating the information in the ThingSpeak application. A detailed analysis of equipment energy consumption and the maximum communication range for message transmission and reception was conducted. Subsequently, the monitoring and control system underwent comprehensive testing, including communication with the visualization platform. The results demonstrated significant improvements in system performance. By modifying parameters in the microcontroller, a 2.55-fold increase in battery durability was achieved. The implemented fuzzy logic system enabled effective on/off control of the aerators, showing a corrective trend in response to variations in the analyzed abiotic parameters. The robustness of the LoRa communication interface was evident in urban environments, achieving a distance of up to 1 km without line of sight.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_29-Design_and_Implementation_of_an_IoT_Control_and_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Parameter Estimation in Image Processing using a Multi-Agent Hysteretic Q-Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140828</link>
        <id>10.14569/IJACSA.2023.0140828</id>
        <doi>10.14569/IJACSA.2023.0140828</doi>
        <lastModDate>2023-08-30T09:54:12.8970000+00:00</lastModDate>
        
        <creator>Issam QAFFOU</creator>
        
        <subject>Parameter estimation; reinforcement learning; cooperative agents; hysteretic q-learning; optimistic agent; object extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Optimizing image processing parameters is often a time-consuming and unreliable task that requires manual adjustments. In this paper, we present a novel approach that utilizes a multi-agent system with Hysteretic Q-learning to automatically optimize these parameters, providing a more efficient solution. We conducted an empirical study that focused on extracting objects of interest from textural images to validate our approach. Experimental results demonstrate that our multi-agent approach outperforms the traditional single-agent approach by quickly finding optimal parameter values and producing satisfactory results. Our approach&#39;s key innovation is the ability to enable agents to cooperate and optimize their behavior for the given task through the use of a multi-agent system. This feature distinguishes our approach from previous work that only used a single agent. By incorporating reinforcement learning techniques in a multi-agent context, our approach provides a scalable and effective solution to parameter optimization in image processing.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_28-Efficient_Parameter_Estimation_in_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approaches and Tools for Quality Assurance in Distance Learning: State-of-play</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140827</link>
        <id>10.14569/IJACSA.2023.0140827</id>
        <doi>10.14569/IJACSA.2023.0140827</doi>
        <lastModDate>2023-08-30T09:54:12.8800000+00:00</lastModDate>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <creator>Senthil Kumar Jagatheesaperumal</creator>
        
        <subject>Distance learning; quality assurance; assessment; stakeholder satisfaction; regulatory documents; performance indicators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>In recent years, distance learning has become an increasingly popular mode of education due to its flexibility and accessibility. However, the quality of distance learning programs has been a cause for concern, which has led to the development of various approaches and tools for quality assurance and assessment. This review article aims to provide an in-depth analysis of the current state of play of quality assurance in distance learning. The paper discusses the fundamental requirements to establish quality in distance learning and the challenges associated with ensuring quality in this mode of education. Then it explores the different approaches and tools used for quality assurance and assessment, such as course evaluations, self-assessments, and external reviews. In addition, the paper delves into the development of regulatory documents and manuals for quality assurance, which are essential for ensuring that distance learning programs adhere to established standards. It also discusses in detail the importance of audits and accreditations from assessment organizations in assuring quality in distance learning. As the satisfaction of all stakeholders (including students, faculty, and administrators) is crucial for ensuring the success of distance learning programmes, the paper highlights the various measures HEIs can take to ensure stakeholder satisfaction. Finally, the article discusses the processing of statistical data and performance indicators, which can provide valuable insights into the effectiveness of distance learning programmes.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_27-Approaches_and_Tools_for_Quality_Assurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Route Guide Chatbot Based on Random Forest Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140826</link>
        <id>10.14569/IJACSA.2023.0140826</id>
        <doi>10.14569/IJACSA.2023.0140826</doi>
        <lastModDate>2023-08-30T09:54:12.8800000+00:00</lastModDate>
        
        <creator>Puspa Miladin Nuraida Safitri A. Basid</creator>
        
        <creator>Fajar Rohman Hariri</creator>
        
        <creator>Fresy Nugroho</creator>
        
        <creator>Ajib Hanani</creator>
        
        <creator>Firman Jati Pamungkas</creator>
        
        <subject>Tourism; chatbot; artificial intelligence; random forest classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Improvements in the quality of tourism services and in the number of human resources affect the quality of social and information services provided to foreign tourists, thereby enhancing the services offered regarding tourist destination information in the Malang Raya area. Foreign tourists urgently need information on directions, routes, and access roads to their desired tourist destinations, especially in East Java, because data from the government agencies handling the tourism sector are limited and communication with residents, who may not understand what foreign tourists are saying, is difficult. An interactive chatbot that assists in obtaining route and access information to the desired tourist destinations would therefore facilitate foreign tourists. To improve the accuracy of the chatbot&#39;s answer-sentence selection, artificial intelligence, specifically the Random Forest Classifier, is used. This study obtained the highest accuracy using 200 trees, a maximum tree depth of 20, and a minimum sample split of 5, which resulted in an accuracy of 95.88%, precision of 96.29%, recall of 96.03%, and f-measure of 96.16%.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_26-Virtual_Route_Guide_Chatbot_Based_on_Random_Forest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Earth Observation Satellite: Big Data Retrieval Method with Fuzzy Expression of Geophysical Parameters and Spatial Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140825</link>
        <id>10.14569/IJACSA.2023.0140825</id>
        <doi>10.14569/IJACSA.2023.0140825</doi>
        <lastModDate>2023-08-30T09:54:12.8470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Fuzzy retrieval; earth observation satellite; big data; geophysical parameter; oceanographer; circle feature; arc feature; line feature; fuzzy expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>A method for fuzzy retrieval from an Earth observation satellite image database using geophysical parameters and spatial features is proposed. It is confirmed that the proposed method allows fuzzy expressions of queries with sea surface temperature, chlorophyll-a concentration, and cloud coverage, as well as circle, line, and edge features, for instance “rather cold sea surface temperature and a sort of circle feature”. Thus users, in particular oceanographers, may access the most appropriate image data from the database for finding cold cores (circle features), fronts (arc and line features), etc. in a simple manner. Although this is just an example for oceanographers, it is found that the proposed method allows data mining with fuzzy expressions of geophysical queries from big data platforms of Earth observation satellite databases.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_25-Earth_Observation_Satellite_Big_Data_Retrieval_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Drosophila Visual Neural Network Application in Vehicle Target Tracking and Collision Warning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140824</link>
        <id>10.14569/IJACSA.2023.0140824</id>
        <doi>10.14569/IJACSA.2023.0140824</doi>
        <lastModDate>2023-08-30T09:54:12.8330000+00:00</lastModDate>
        
        <creator>Jianyi Wu</creator>
        
        <subject>Drosophila visual neural network; collision warning; target calibration; target tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>To enable vehicle tracking and collision warning systems to handle more complex road information, the Drosophila visual neural network collision warning algorithm was improved, including its image stabilization algorithm, target region synthesis algorithm, and target tracking algorithm. The results showed that the improved image stabilization algorithm had significantly higher image stabilization quality: the peak signal-to-noise ratio of the stabilized image ranged from 54dB to 80dB before improvement and from 60dB to 82dB after improvement. The improved algorithm produced no false alarms or missed alarms in collision warning, whereas the unimproved algorithm produced false alarms in video 1 and missed alarms in video 2. In video 1, all frames were in a safe state, but the original algorithm displayed an alarm in frames 7-12, 13-22, and 23-31. In video 2, frames 8-24 contained dangerous situations that required an alarm, while the original algorithm displayed an alarm message only in frames 8-17, which was inconsistent with the actual situation. The improved target tracking algorithm can extract target motion curves: it extracted the motion curves of one target in video 1 and of two targets in video 2, consistent with the video content. The improvement of the Drosophila visual neural network collision warning model is thus effective and can improve the driving safety of vehicles in complex road conditions.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_24-Improved_Drosophila_Visual_Neural_Network_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Deep Learning with Optimization Algorithm for Emotion Recognition in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140823</link>
        <id>10.14569/IJACSA.2023.0140823</id>
        <doi>10.14569/IJACSA.2023.0140823</doi>
        <lastModDate>2023-08-30T09:54:12.8000000+00:00</lastModDate>
        
        <creator>Ambika G N</creator>
        
        <creator>Yeresime Suresh</creator>
        
        <subject>Blue monkey optimization (BMO); deep learning; electroencephalograph (EEG); emotion recognition; human-computer interaction (HCI); radial basis function networks (RBFN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Emotion recognition, or computers&#39; ability to interpret people&#39;s emotional states, is a rapidly expanding topic with many life-improving applications. However, most image-based emotion recognition algorithms have flaws, since people can disguise their emotions by changing their facial expressions. As a result, brain signals are being used to detect human emotions with increased precision. However, most proposed systems could do better, because electroencephalogram (EEG) signals are challenging to classify using typical machine learning and deep learning methods. Human-computer interaction, recommendation systems, online learning, and data mining all benefit from emotion recognition in images. However, there are challenges with removing irrelevant text aspects during emotion extraction, and as a consequence, emotion prediction is inaccurate. This paper proposes Radial Basis Function Networks (RBFN) with Blue Monkey Optimization (BMO) to address such challenges in human emotion recognition. The proposed RBFN-BMO detects faces in large-scale images before analyzing face landmarks to predict facial expressions for emotion recognition. Patch cropping and neural networks comprise the two stages of the RBFN-BMO. Pre-processing, feature extraction, ranking, and organizing are the four stages of the proposed model. In the ranking stage, appropriate features are extracted from the pre-processed information; the data are then classified, and accurate output is obtained from the classification phase. This study compares the results of the proposed RBFN-BMO algorithm with previous state-of-the-art algorithms on publicly available datasets. Furthermore, we demonstrate the efficacy of our framework in comparison with previous works. The results show that the proposed method can improve the rate of emotion recognition on datasets of various sizes.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_23-An_Efficient_Deep_Learning_with_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of an Automatic Scoring System for English Composition Based on Artificial Intelligence Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140822</link>
        <id>10.14569/IJACSA.2023.0140822</id>
        <doi>10.14569/IJACSA.2023.0140822</doi>
        <lastModDate>2023-08-30T09:54:12.7870000+00:00</lastModDate>
        
        <creator>Fengqin Zhang</creator>
        
        <subject>English composition; automatic scoring; artificial intelligence; text matching degree; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The automatic grading of English compositions involves utilizing natural language processing, statistics, artificial intelligence (AI), and other techniques to evaluate and score compositions. This approach is objective, fair, and resource-efficient. The current widely used evaluation system for English compositions falls short in off-topic assessment, as subjective factors in manual marking lead to inconsistent scoring standards, which affects objectivity and fairness. Hence, researching and implementing an AI-based automatic scoring system for English compositions holds significant importance. This paper examines various composition evaluation factors, such as vocabulary usage, sentence structure, errors, development, word frequency, and examples. These factors are classified, quantified, and analysed using methods such as standardization, cluster analysis, and TF word frequency. Scores are assigned to each feature factor based on fuzzy clustering analysis and the information entropy principle of rough set theory. The system can flexibly identify composition themes in batches and rapidly score English compositions, offering more objective and impartial quality control. The goal of the proposed system is to address existing issues in teacher corrections and evaluations, as well as low self-efficacy in students&#39; writing learning. The test results demonstrate that the system expands the learning material collections, enhances the identification of weak points, optimizes the marking engine performance with the text matching degree, reduces the marking time, and ensures efficient and high-quality assessments. Overall, this system shows great potential for widespread adoption.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_22-Design_and_Application_of_an_Automatic_Scoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Algorithm of Improved Response Time of ITS-G5 Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140821</link>
        <id>10.14569/IJACSA.2023.0140821</id>
        <doi>10.14569/IJACSA.2023.0140821</doi>
        <lastModDate>2023-08-30T09:54:12.7530000+00:00</lastModDate>
        
        <creator>Kawtar Jellid</creator>
        
        <creator>Tomader Mazri</creator>
        
        <subject>ITS-G5 (Intelligent Transport Systems); V2V (Vehicle-to-Vehicle); V2I (Vehicle-to-Infrastructure); V2X (Vehicle-to-everything); autonomous vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This research article proposes an algorithm for improving the ITS-G5 protocol, which addresses the issue of response time. The algorithm includes the integration of Dijkstra&#39;s algorithm to prioritize shorter paths for message transmission, resulting in reduced delays. The initial algorithm for the ITS-G5 protocol is presented, followed by the modified algorithm that incorporates Dijkstra&#39;s algorithm. The modified algorithm utilizes a node-based approach and implements Dijkstra&#39;s algorithm to find the shortest path between two nodes. The algorithm is evaluated in a scenario involving 20 vehicles, where each vehicle has its own message. The results show improved communication efficiency and reduced response time compared to the original ITS-G5 protocol.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_21-An_Enhanced_Algorithm_of_Improved_Response_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collateral Circulation Classification Based on Cone Beam Computed Tomography Images using ResNet18 Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140820</link>
        <id>10.14569/IJACSA.2023.0140820</id>
        <doi>10.14569/IJACSA.2023.0140820</doi>
        <lastModDate>2023-08-30T09:54:12.7230000+00:00</lastModDate>
        
        <creator>Nur Hasanah Ali</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Norhashimah Mohd Saad</creator>
        
        <creator>Ahmad Sobri Muda</creator>
        
        <subject>Collateral circulation; CBCT; ResNet; convolutional neural network; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Collateral circulation is an arterial anastomotic channel that supplies nutrient perfusion to areas of the brain. It arises when the regular sources of flow are disrupted, as in an ischemic stroke. The most recent method, Cone Beam Computed Tomography (CBCT) neuroimaging, is able to provide specific details regarding the extent and adequacy of collaterals. The current approaches for collateral circulation classification are based on manual observation and lead to inter- and intra-rater inconsistency. This paper presents a 2-class automatic classification, differentiating between good and poor collateral circulation, using techniques that are growing very fast in artificial intelligence disciplines. A pre-trained convolutional neural network (CNN), namely ResNet18, was used to learn features and was trained on 4368 CBCT images. Initially, the dataset was prepared, labeled, and augmented; the images were then trained using the ResNet18 method with certain specifications. The algorithm&#39;s performance was then evaluated on the CBCT images using metrics of accuracy, sensitivity, specificity, F1 score, and precision to classify collateral circulation accurately. The findings can automate collateral circulation classification to ease the limitations of standard clinical practice. It is a convincing method that supports neuroradiologists in assessing clinical scans and aids their clinical decisions about stroke treatment.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_20-Collateral_Circulation_Classification_Based_on_Cone_Beam.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine-Learning-based User Behavior Classification for Improving Security Awareness Provision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140819</link>
        <id>10.14569/IJACSA.2023.0140819</id>
        <doi>10.14569/IJACSA.2023.0140819</doi>
        <lastModDate>2023-08-30T09:54:12.7070000+00:00</lastModDate>
        
        <creator>Alaa Al-Mashhour</creator>
        
        <creator>Areej Alhogail</creator>
        
        <subject>Machine learning; user behavior analysis; cybersecurity; classification; security awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Users of information technology are regarded as essential components of information security. Users’ lack of cybersecurity awareness can result in external and internal security attacks and threats in any organization that has several users or employees. Although various security methods have been designed to protect organizations from external intrusions and attacks, the human factor is also essential because security risks by “insiders” can occur due to a lack of awareness. Therefore, instead of general nontargeted security training, comprehensive cybersecurity awareness should be provided based on employees’ online behavior. This study seeks to provide a machine-learning-based model that provides user behavior analysis in which organizations can profile their employees by analyzing their online behavior to classify them into different classes and, thus, help provide them with appropriate awareness sessions and training. The model proposed in this paper will be evaluated and assessed through its implementation on a sample dataset that reflects users’ online activities over a specific period to measure the model’s accuracy and effectiveness. A comparison between six classification techniques has been made, and random forest classification had the best performance regarding classification accuracy and performance time. After users are classified, each group can be provided with the appropriate training material. This study will stimulate additional research in this area, which has not been widely investigated, and it will provide a useful point of reference for other studies. Additionally, it should provide insightful information to help decision-makers in organizations provide necessary and effective security awareness.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_19-Machine_Learning_based_User_Behavior_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Automated Evaluation of the Quality of Educational Services in HEIs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140818</link>
        <id>10.14569/IJACSA.2023.0140818</id>
        <doi>10.14569/IJACSA.2023.0140818</doi>
        <lastModDate>2023-08-30T09:54:12.6930000+00:00</lastModDate>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <creator>Mariya Zhekova</creator>
        
        <creator>George Pashev</creator>
        
        <subject>Quality assurance; higher education; educational services; administrative services; data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The provision of high-quality educational services is a matter of concern to all stakeholders in higher education (academic staff, administration, students, etc.). According to many researchers, student satisfaction is an indicator of service quality in higher education institutions (HEIs), and having students evaluate the quality of educational and administrative services is an effective tool for improving the quality of HEIs. To ensure a competitive advantage over other educational institutions, HEI leadership should take measures to improve student feedback on the quality of the administrative and educational services provided, seek ways to exceed student expectations, and deliver high-quality services. Given the great importance of students’ opinions on the quality of the services offered, many HEIs develop and use tools to assess student satisfaction with the quality of their services. Little researched in the literature, however, is the need for tools that allow HEI leadership to analyze survey results, track trends over the years, and compare results across HEIs. Based on a detailed analysis of existing questionnaires for evaluating service quality, this paper explores the possibilities of automating the overall process of surveying student satisfaction with service quality. As a result, a software prototype of a tool to automate the entire process of assessing student satisfaction is proposed, from questionnaire modelling, survey organization, and survey delivery to the analysis of the collected data. The developed tool allows governing bodies in HEIs to make informed decisions to improve service quality and to compare their results with those of competing universities.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_18-Towards_Automated_Evaluation_of_the_Quality_of_Educational_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Yolo-based Violence Detection Method in IoT Surveillance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140817</link>
        <id>10.14569/IJACSA.2023.0140817</id>
        <doi>10.14569/IJACSA.2023.0140817</doi>
        <lastModDate>2023-08-30T09:54:12.6770000+00:00</lastModDate>
        
        <creator>Hui Gao</creator>
        
        <subject>Violence detection; IoT; surveillance systems; Yolo; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Violence detection in Internet of Things (IoT)-based surveillance systems has become a critical research area due to its potential to provide early warnings and enhance public safety. There has been extensive research on vision-based systems for violence detection, covering both traditional and deep learning-based methods. Deep learning-based methods have shown great promise in improving the efficiency and accuracy of violence detection. Despite recent advances in deep learning-based violence detection, significant limitations and research challenges still need to be addressed, including the development of standardized datasets and real-time processing. This study presents a deep learning method based on the You Only Look Once (YOLO) algorithm for the violence detection task to overcome these issues. We generate a violence detection model using violence and non-violence images in a prepared dataset divided into training, validation, and testing sets. The produced model is assessed using accepted performance indicators. The experimental results and performance evaluation show that the method accurately identifies violence and non-violence classes in real time.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_17-A_Yolo_based_Violence_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Learner-CBT with Secured Fault-Tolerant and Resumption Capability for Nigerian Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140816</link>
        <id>10.14569/IJACSA.2023.0140816</id>
        <doi>10.14569/IJACSA.2023.0140816</doi>
        <lastModDate>2023-08-30T09:54:12.6600000+00:00</lastModDate>
        
        <creator>Bridget Ogheneovo Malasowe</creator>
        
        <creator>Maureen Ifeanyi Akazue</creator>
        
        <creator>Ejaita Abugor Okpako</creator>
        
        <creator>Fidelis Obukohwo Aghware</creator>
        
        <creator>Deborah Voke Ojie</creator>
        
        <creator>Arnold Adimabua Ojugo</creator>
        
        <subject>Adaptive blended learning; computer-based test; fault tolerant design; resumption capabilities; Nigeria; FUPRE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Post-COVID-19 studies have reported a significant negative impact on global education and learning due to the closure of schools’ physical infrastructure from 2020 to 2022. Its effects continue to ripple across learning processes today, even with advances in e-learning and media literacy. The adoption and integration of e-learning on the Nigerian frontier is yet to be fully harnessed. From traditional to blended learning, and on to virtual learning, Nigeria must rise and develop new strategies to address issues with her educational theories and to bridge the gap and negative impact of the post-COVID-19 pandemic. This study implements a virtual learning framework that adequately fuses alternative-delivery asynchronous learning with traditional synchronous learning for adoption in the Nigerian educational system. Results showcase improved cognition in learners, engaged qualitative learning, and a learning scenario that ensures a power shift in the educational structure, further equipping learners to become knowledge producers and helping teachers to emancipate students academically, within a framework that measures the quality of engaged student learning.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_16-Adaptive_Learner_CBT_with_Secured_Fault_Tolerant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing Scrum Maturity of Digital and Business Process Reengineering Groups: A Case Study at an Indonesia’s State-Owned Bank</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140815</link>
        <id>10.14569/IJACSA.2023.0140815</id>
        <doi>10.14569/IJACSA.2023.0140815</doi>
        <lastModDate>2023-08-30T09:54:12.6300000+00:00</lastModDate>
        
        <creator>Gloria Saripah Patara</creator>
        
        <creator>Teguh Raharjo</creator>
        
        <subject>Transformation; scrum; digital project; BPR project; scrum maturity model; agile maturity model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Bank XYZ, an Indonesian state-owned bank, has been conducting business and digital transformation throughout its organization. According to a recent McKinsey survey, less than 30% of organizations succeed in transformation. Fast-changing business requirements and various technology-based initiatives compel the organization to employ an Agile methodology, Scrum, to cope with this situation. Grp-DGT and Grp-BPR are two groups in Bank XYZ that manage their projects using Scrum: Grp-DGT develops digital projects, whereas Grp-BPR develops Business Process Reengineering (BPR) projects. Scrum maturity in both groups needs to be appraised to promote sustainability in the long run. Comparing Scrum maturity between digital and BPR projects has not been done in previous works, especially in a state-owned bank in Indonesia. This research helps the organization through its outputs: the Scrum maturity level of both groups and proposed recommendations to improve Scrum practices. Other organizations can benefit from the recommendations as well. The Scrum Maturity Model (SMM) is used to appraise the practices, while the Agile Maturity Model (AMM) is used to calculate the maturity rating. This research finds that Grp-DGT has reached maturity level 5 (optimizing), whereas Grp-BPR is still at level 1 (initial). Based on the assessment results and Scrum guides, recommendations are then drafted: 15 recommendations are proposed for Grp-BPR to reach level 2 and beyond.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_15-Comparing_Scrum_Maturity_of_Digital_and_Business_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Medical Image Denoising Method Based on the CycleGAN and the Complex Shearlet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140814</link>
        <id>10.14569/IJACSA.2023.0140814</id>
        <doi>10.14569/IJACSA.2023.0140814</doi>
        <lastModDate>2023-08-30T09:54:12.6130000+00:00</lastModDate>
        
        <creator>ChunXiang Liu</creator>
        
        <creator>Jin Huang</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Yuwei Wang</creator>
        
        <creator>Faiz Ullah</creator>
        
        <subject>Medical image; image denoising; CycleGAN; complex shearlet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Medical image denoising plays an important role because noise in medical images can reduce visibility, thereby affecting doctors’ diagnostic results. Although well-known deep learning-based denoising methods have achieved good results owing to their strong learning ability, the loss of structural feature information and the preservation of edge information have not attracted considerable attention. To deal with these problems, a novel medical image denoising method based on an improved CycleGAN and the complex shearlet transform (CST) is proposed. The CST is used to construct the generator so as to embed more feature information in the training process, and denoising is modeled as adversarially learning the mapping between the noise-free image domain and the noisy image domain. With the recurrent-learning mechanism of the CycleGAN, the proposed method does not need paired training data, which clearly speeds up training and is more convenient than other classical methods. In comparison with five state-of-the-art denoising methods, experiments on an open dataset fully demonstrate the accuracy and efficiency of the proposed method in terms of visual quality and the quantitative PSNR, SSIM, and EPI metrics.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_14-The_Medical_Image_Denoising_Method_Based_on_the_CycleGAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis in Indonesian Healthcare Applications using IndoBERT Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140813</link>
        <id>10.14569/IJACSA.2023.0140813</id>
        <doi>10.14569/IJACSA.2023.0140813</doi>
        <lastModDate>2023-08-30T09:54:12.5830000+00:00</lastModDate>
        
        <creator>Helmi Imaduddin</creator>
        
        <creator>Fiddin Yusfida A’la</creator>
        
        <creator>Yusuf Sulistyo Nugroho</creator>
        
        <subject>Application; healthcare; IndoBERT; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The rapid growth of application development has made applications an integral part of people&#39;s lives, offering solutions to societal problems. Health service applications have gained popularity due to their convenience in accessing information on diseases, health, and medicine. However, many of these applications disappoint users with limited features, slow response times, and usability challenges. Therefore, this research focuses on developing a sentiment analysis system to assess user satisfaction with health service applications. The study aims to create a sentiment analysis model using reviews from health service applications on the Google Play Store, including Halodoc, Alodokter, and klikdokter. The dataset comprises 9,310 reviews, with 4,950 positive and 4,360 negative reviews. The IndoBERT pre-training method, a transfer learning model, is employed for sentiment analysis, leveraging its superior context representation. The study achieves impressive results with an accuracy score of 96%, precision of 95%, recall of 96%, and an F1-score of 95%. These findings underscore the significance of sentiment analysis in evaluating user satisfaction with health service applications. By utilizing the IndoBERT pre-training method, this research provides valuable insights into the strengths and weaknesses of health service applications on the Google Play Store, contributing to the enhancement of user experiences.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_13-Sentiment_Analysis_in_Indonesian_Healthcare_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Model for Automated Assessment of Short Subjective Answers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140812</link>
        <id>10.14569/IJACSA.2023.0140812</id>
        <doi>10.14569/IJACSA.2023.0140812</doi>
        <lastModDate>2023-08-30T09:54:12.5670000+00:00</lastModDate>
        
        <creator>Zaira Hassan Amur</creator>
        
        <creator>Yew Kwang Hooi</creator>
        
        <creator>Hina Bhanbro</creator>
        
        <creator>Mairaj Nabi Bhatti</creator>
        
        <creator>Gul Muhammad Soomro</creator>
        
        <subject>Natural language processing; short text; answer assessment; BERT; semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Natural Language Processing (NLP) has recently gained significant attention; semantic similarity techniques are widely used in diverse applications, such as information retrieval, question-answering systems, and sentiment analysis. One promising area where NLP is being applied is personalized learning, where assessments and adaptive tests are used to capture students&#39; cognitive abilities. In this context, open-ended questions are commonly used in assessments due to their simplicity, but their effectiveness depends on the type of answer expected. To improve comprehension, it is essential to understand the underlying meaning of short text answers, which is challenging due to their length, lack of clarity, and structure. Researchers have proposed various approaches, including distributed semantics and vector space models; however, assessing short answers using these methods presents significant challenges. Machine learning methods, such as transformer models with multi-head attention, have emerged as advanced techniques for understanding and assessing the underlying meaning of answers. This paper proposes a transformer learning model that utilizes multi-head attention to identify and assess students&#39; short answers, overcoming these issues. Our approach improves assessment performance and outperforms current state-of-the-art techniques. We believe our model has the potential to revolutionize personalized learning and significantly contribute to improving student outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_12-Machine_Learning_Model_for_Automated_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Identification of Figurative Language Types in Devanagari Scripted Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140811</link>
        <id>10.14569/IJACSA.2023.0140811</id>
        <doi>10.14569/IJACSA.2023.0140811</doi>
        <lastModDate>2023-08-30T09:54:12.5500000+00:00</lastModDate>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Preety Sagar</creator>
        
        <creator>Hema Gaikwad</creator>
        
        <subject>Figures of speech (FsoS); natural language processing (NLP); Koshur; Awadhi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Poetry can be defined as a form of literary expression that uses language and artistic techniques to evoke emotions, create imagery, and convey complex ideas in a concentrated and imaginative manner. It is a form of written or spoken art that often incorporates rhythm, meter, rhyme, and figurative language to engage the reader or listener on multiple levels. There is no automated system that can identify figures of speech (FsoS) in poetry using Natural Language Processing (NLP) methods. In this research paper, the authors categorized four types of FsoS, व्रत्या अनुप्रास (a type of alliteration), छेकानुप्रास (a type of alliteration), अन्तत्यानुप्रास (rhyme), and पुनरुक्ति (repetition), using two custom algorithms, Koshur and Awadhi (KA and AA), developed specifically for three language corpora of poems: Koshur (K), Awadhi (A), and Hindi (H). To evaluate the effectiveness of these algorithms, the authors conducted tests on the three languages using four distinct approaches: with stopwords without optimization, with stopwords with optimization, without stopwords without optimization, and without stopwords with optimization. The authors have put considerable effort into identifying FsoS not in a single language but in three Devanagari-scripted languages; this research work is the first of its kind. The average accuracy without stopwords was not up to the mark, so the authors optimized both algorithms and retested them on the same corpora with and without stopwords, resulting in a significant increase in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_11-A_Novel_Approach_for_Identification_of_Figurative_Language_Types.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Current State of Blockchain Consensus Mechanism: Issues and Future Works</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140810</link>
        <id>10.14569/IJACSA.2023.0140810</id>
        <doi>10.14569/IJACSA.2023.0140810</doi>
        <lastModDate>2023-08-30T09:54:12.5030000+00:00</lastModDate>
        
        <creator>Shadab Alam</creator>
        
        <subject>Blockchain; consensus mechanism; consensus algorithm; data security; distributed systems; bitcoin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Blockchain is a decentralized ledger that serves as the foundation of Bitcoin and has found applications in various domains due to its immutability. It has the potential to change digital transactions drastically and has been successfully used across multiple fields for record immutability and reliability. The consensus mechanism is the backbone of blockchain operations and validates newly generated blocks before they are added. To verify transactions in the ledger, validators in peer-to-peer (P2P) networks use different consensus algorithms to solve the reliability problem in a network with unreliable nodes. Blockchain security is mainly determined by the security and reliability of the underlying consensus algorithm. However, consensus algorithms consume significant resources when validating new blocks. Therefore, the safety and reliability of a blockchain system are based on the reliability and performance of its consensus mechanism. Although various consensus mechanisms and algorithms exist, there is no unified criterion for evaluating them. Evaluating consensus algorithms would explain system reliability and provide a mechanism for choosing the best consensus mechanism for a defined set of problems. This article comprehensively analyzes existing and recent consensus algorithms&#39; throughput, scalability, latency, energy efficiency, and other factors such as attacks, Byzantine fault tolerance, adversary tolerance, and decentralization levels. The paper defines consensus mechanism criteria, evaluates available consensus algorithms against them, and presents their advantages and disadvantages.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_10-The_Current_State_of_Blockchain_Consensus_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated CAD System for Early Stroke Diagnosis: Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140809</link>
        <id>10.14569/IJACSA.2023.0140809</id>
        <doi>10.14569/IJACSA.2023.0140809</doi>
        <lastModDate>2023-08-30T09:54:12.4730000+00:00</lastModDate>
        
        <creator>Izzatul Husna Azman</creator>
        
        <creator>Norhashimah Mohd Saad</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Rostam Affendi Hamzah</creator>
        
        <creator>Adam Samsudin</creator>
        
        <creator>Shaarmila A/P Kandaya</creator>
        
        <subject>Stroke diagnosis; CAD system; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Stroke is an important health issue that affects millions of people globally each year. Early and precise stroke diagnosis is crucial for efficient treatment and better patient outcomes. Traditional stroke detection procedures, such as manual visual evaluation of clinical data, can be time-consuming and error-prone. In recent years, computer-aided diagnosis (CAD) technologies have emerged as a viable option for early stroke diagnosis. These systems analyze medical images, such as magnetic resonance imaging (MRI), and identify indicators of stroke using modern algorithms and machine learning approaches. The goal of this review paper is to offer a thorough overview of the current state of the art in CAD systems for early stroke detection. We examine the merits and limitations of this technology, as well as directions for future research and development in this field. Finally, we contend that CAD systems represent a promising solution for improving the efficiency and accuracy of early stroke diagnosis, resulting in better patient outcomes and lower healthcare costs.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_9-Automated_CAD_System_for_Early_Stroke_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Tuberculosis Based on Hybridized Pre-Processing Deep Learning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140808</link>
        <id>10.14569/IJACSA.2023.0140808</id>
        <doi>10.14569/IJACSA.2023.0140808</doi>
        <lastModDate>2023-08-30T09:54:12.4570000+00:00</lastModDate>
        
        <creator>Mohamed Ahmed Elashmawy</creator>
        
        <creator>Irraivan Elamvazuthi</creator>
        
        <creator>Lila Iznita Izhar</creator>
        
        <creator>Sivajothi Paramasivam</creator>
        
        <creator>Steven Su</creator>
        
        <subject>Tuberculosis; CNN; pre-processing; CXR images; augmentation; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Tuberculosis (TB) is a serious health concern, as it primarily affects the lungs and can lead to fatalities. However, early detection and treatment can cure the disease. One potential method for detecting TB is the use of Computer-Aided Diagnosis (CAD) systems, which can analyze Chest X-Ray (CXR) images for signs of TB. This paper proposes a new approach for improving the performance of CAD systems by using a hybrid pre-processing method for Convolutional Neural Network (CNN) models. The goal of the research is to enhance the accuracy and Area Under Curve (AUC) of TB detection in CXR images by combining two different pre-processing methods and multi-classifying different manifestations of the disease. The hypothesis is that this approach will result in more accurate detection of TB in CXR images. To achieve this, the research used augmentation and segmentation techniques to pre-process the CXR images before feeding them into a pre-trained CNN model for classification. With the proposed pre-processing method, the VGG16 model achieved an AUC of 0.935, an accuracy of 90%, and an F1-score of 0.8975.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_8-Detection_of_Tuberculosis_Based_on_Hybridized_Pre_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Spatial Distribution of Atmospheric Water Vapor Based on Analytic Hierarchy Process and Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140807</link>
        <id>10.14569/IJACSA.2023.0140807</id>
        <doi>10.14569/IJACSA.2023.0140807</doi>
        <lastModDate>2023-08-30T09:54:12.4270000+00:00</lastModDate>
        
        <creator>Fengjun Wei</creator>
        
        <creator>Chunhua Liu</creator>
        
        <creator>Rendong Guo</creator>
        
        <creator>Xin Li</creator>
        
        <creator>Jilei Hu</creator>
        
        <creator>Chuanxun Che</creator>
        
        <subject>Global navigation satellite system; spatial distribution of water vapor; genetic algorithm; tomography technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The inversion of the spatial distribution of water vapor using ground-based global navigation satellite systems is a technique that utilizes the propagation delay of satellite signals in the atmosphere to retrieve atmospheric water vapor information. To further improve the accuracy of the information obtained by this method, a satellite system is designed to solve for the spatial distribution of atmospheric water vapor based on tomography technology and a genetic algorithm. Firstly, the accuracy of the empirical air temperature and pressure model for calculating the zenith static delay is analyzed. To optimize the global weighted mean temperature model, a model that considers the decreasing rate of the atmospheric weighted mean temperature and a model based on the linear relationship between surface heat and the weighted mean temperature are proposed. The idea of removal-interpolation-restoration is introduced to achieve regional interpolation of atmospheric precipitable water. Finally, in response to the problem of multiple solutions in the current water vapor tomography equation, a genetic algorithm-based tomography method is put forward to solve for the spatial distribution of atmospheric water vapor. Experimental analysis shows that the average root mean square error and average absolute error of the proposed method are 1.78 g/m3 and 1.41 g/m3, respectively, enabling the calculation of the atmospheric water vapor density distribution with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_7-The_Spatial_Distribution_of_Atmospheric_Water_Vapor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Symbol Detection in a Multi-class Dataset Based on Single Line Diagrams using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140806</link>
        <id>10.14569/IJACSA.2023.0140806</id>
        <doi>10.14569/IJACSA.2023.0140806</doi>
        <lastModDate>2023-08-30T09:54:12.3970000+00:00</lastModDate>
        
        <creator>Hina Bhanbhro</creator>
        
        <creator>Yew Kwang Hooi</creator>
        
        <creator>Worapan Kusakunniran</creator>
        
        <creator>Zaira Hassan Amur</creator>
        
        <subject>Single line diagrams; engineering drawings; synthetic data; symbol detection; deep learning; augmented dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Single Line Diagrams (SLDs) are used in electrical power distribution systems. These diagrams are crucial to engineers during the installation, maintenance, and inspection phases. For the digital interpretation of these documents, deep learning-based object detection methods can be utilized. However, few efforts have been made to digitize SLDs using deep learning methods, which is due to the class-imbalance problem of these technical drawings. In this paper, a method to address this challenge is proposed. First, we use the latest variant of You Only Look Once (YOLO), YOLO v8, to localize and detect the symbols present in the single-line diagrams. Our experiments determine that the accuracy of symbol detection based on YOLO v8 is almost 95%, which is higher than that of its previous versions. Secondly, we use a synthetic dataset generated using a multi-fake-class generative adversarial network (MFCGAN) and create fake classes to cope with the class-imbalance problem. The images generated using the GAN are then combined with the original images to create an augmented dataset, and YOLO v5 is used for the classification of the augmented dataset. The experiments reveal that the GAN model had the capability to learn properly from a small number of complex diagrams. The detection results show that the accuracy of YOLO v5 is more than 96.3%, which is higher than the YOLO v8 accuracy. After analyzing the experiment results, we can deduce that creating multiple fake classes improved the classification of engineering symbols in SLDs.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_6-Symbol_Detection_in_a_Multi_class_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of AI Systems in Virtual Reality: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140805</link>
        <id>10.14569/IJACSA.2023.0140805</id>
        <doi>10.14569/IJACSA.2023.0140805</doi>
        <lastModDate>2023-08-30T09:54:12.3970000+00:00</lastModDate>
        
        <creator>Medet Inkarbekov</creator>
        
        <creator>Rosemary Monahan</creator>
        
        <creator>Barak A. Pearlmutter</creator>
        
        <subject>Virtual Reality (VR); Artificial Intelligence (AI) Visualization; VR in AI Visualization; Human-Computer Interaction (HCI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>This study provides a comprehensive review of the utilization of Virtual Reality (VR) in the context of Human-Computer Interaction (HCI) for visualizing Artificial Intelligence (AI) systems. Drawing from 18 selected studies, the review illuminates a complex interplay of tools, methods, and approaches, notably the prominence of VR engines like Unreal Engine and Unity. However, despite these tools, a universal solution for effective AI visualization remains elusive, reflecting the unique strengths and limitations of each technique. The application of VR for AI visualization across multiple domains is observed, despite challenges such as high data complexity and cognitive load. Moreover, the review briefly discusses the emerging ethical considerations pertaining to the broad integration of these technologies. Despite these challenges, the field shows significant potential, emphasizing the need for dedicated research efforts to unlock the full potential of these immersive technologies. This review, therefore, outlines a roadmap for future research, encouraging innovation in visualization techniques, addressing identified challenges, and considering the ethical implications of VR and AI convergence.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_5-Visualization_of_AI_Systems_in_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure and Scalable Behavioral Dynamics Authentication Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140804</link>
        <id>10.14569/IJACSA.2023.0140804</id>
        <doi>10.14569/IJACSA.2023.0140804</doi>
        <lastModDate>2023-08-30T09:54:12.3630000+00:00</lastModDate>
        
        <creator>Idowu Dauda Oladipo</creator>
        
        <creator>Mathew Nicho</creator>
        
        <creator>Joseph Bamidele Awotunde</creator>
        
        <creator>Jemima Omotola Buari</creator>
        
        <creator>Muyideen Abdulraheem</creator>
        
        <creator>Tarek Gaber</creator>
        
        <subject>Behavioral authentication; keystroke dynamics; human-computer interface; two-factor authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Various authentication methods have been proposed to mitigate data breaches. However, the increasing frequency of data breaches and users&#39; lack of awareness have exposed traditional methods, including single-factor password-based systems and two-factor authentication systems, to vulnerabilities against attacks. While behavioral authentication holds promise in tackling these issues, it faces challenges concerning interoperability between operating systems, the security of behavioral data, accuracy enhancement, scalability, and cost. This research presents a scalable dynamic behavioral authentication model utilizing keystroke typing patterns. The model is constructed around five key components: human-computer interface devices, encryption of behavioral data, consideration of the authenticator&#39;s emotional state, incorporation of cross-platform features, and proposed implementation solutions. It addresses potential typing errors and employs data encryption for behavioral data, achieving a harmonious blend of usability and security by leveraging keyboard dynamics. This is accomplished through the implementation of a web-based authentication system that integrates Convolutional Neural Networks (CNNs) for advanced feature engineering. Keystroke typing patterns were gathered from participants and subsequently employed to evaluate the system&#39;s keystroke timing verification, login ID verification, and error handling capabilities. The web-based system uniquely identifies users by merging their username-password (UN-PW) credentials with their keyboard typing patterns, all while securely storing the keystroke data. Given the achievement of a 100% accuracy rate, the proposed Behavioral Dynamics Authentication Model (BDA) introduces future researchers to five scalable constructs. These constructs offer an optimal combination, tailored to the device and context, for maximizing effectiveness. This achievement underscores its potential applications in the realm of authentication.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_4-A_Secure_and_Scalable_Behavioral_Dynamics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Converting Data for Spiking Neural Network Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140803</link>
        <id>10.14569/IJACSA.2023.0140803</id>
        <doi>10.14569/IJACSA.2023.0140803</doi>
        <lastModDate>2023-08-30T09:54:12.3330000+00:00</lastModDate>
        
        <creator>Erik Sadovsky</creator>
        
        <creator>Maros Jakubec</creator>
        
        <creator>Roman Jarina</creator>
        
        <subject>SNN; rate coding; spike timing; data conversion; MNIST</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>The application of spiking neural networks (SNNs) to processing visual and auditory data necessitates the conversion of traditional neural network datasets into a format suitable for spike-based computations. Existing datasets designed for conventional neural networks are incompatible with SNNs due to their reliance on spike timing and specific preprocessing requirements. This paper introduces a comprehensive pipeline that enables the conversion of common datasets into rate-coded spikes, meeting the processing demands of SNNs. The proposed solution is evaluated on a Spike-CNN trained on Time-to-First-Spike-encoded MNIST and compared with a similar system trained on the neuromorphic dataset (N-MNIST). Both systems have comparable precision; however, the proposed solution is more energy efficient than the system based on neuromorphic computing. Moreover, the proposed solution is not limited to any specific data form and can be applied to various types of audio/visual content. By providing a means to adapt existing datasets, this research facilitates the exploration and advancement of SNNs across different domains.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_3-Converting_Data_for_Spiking_Neural_Network_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Security and Multi-Cloud Load Balancing for Data in Edge-based Computing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140802</link>
        <id>10.14569/IJACSA.2023.0140802</id>
        <doi>10.14569/IJACSA.2023.0140802</doi>
        <lastModDate>2023-08-30T09:54:12.3170000+00:00</lastModDate>
        
        <creator>Raghunadha Reddi Dornala</creator>
        
        <subject>Edge computing; cloud computing; dynamic load balancing; fog computing; multi-cloud load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>Edge computing has gained significant attention in recent years due to its ability to process data closer to the source, resulting in reduced latency and improved performance. However, ensuring data security and efficient data management in edge-based computing applications poses significant challenges. This paper proposes an ensemble security approach and a multi-cloud load-balancing strategy to address these challenges. The ensemble security approach leverages multiple security mechanisms, such as encryption, authentication, and intrusion detection systems, to provide a layered defense against potential threats. By combining these mechanisms, the system can detect and mitigate security breaches at various levels, ensuring the integrity and confidentiality of data in edge-based environments. The multi-cloud load-balancing strategy also aims to optimize resource utilization and performance by distributing data processing tasks across multiple cloud service providers. This approach takes advantage of the flexibility and scalability offered by the cloud, allowing for dynamic workload allocation based on factors like network conditions and computational capabilities. To evaluate the effectiveness of the proposed approach, we conducted experiments using a realistic edge-based computing environment. The results demonstrate that the ensemble security approach effectively detects and prevents security threats, while the multi-cloud load-balancing strategy, combined with edge computing, improves the overall system performance and resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_2-Ensemble_Security_and_Multi_Cloud_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Internet Protocol Network Intrusion Detection using Isolation Forest and One-Class Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140801</link>
        <id>10.14569/IJACSA.2023.0140801</id>
        <doi>10.14569/IJACSA.2023.0140801</doi>
        <lastModDate>2023-08-30T09:54:12.2700000+00:00</lastModDate>
        
        <creator>Gerard Shu Fuhnwi</creator>
        
        <creator>Victoria Adedoyin</creator>
        
        <creator>Janet O. Agbaje</creator>
        
        <subject>HTTP; SMTP; FTP; ANOVA F-test; AUCROC; OC-SVMs; FAR; DR; IF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(8), 2023</description>
        <description>With the increasing reliance on web-based applications and services, network intrusion detection has become a critical aspect of maintaining the security and integrity of computer networks. This paper presents an empirical study comparing the effectiveness of two machine learning algorithms, Isolation Forest (IF) and One-Class Support Vector Machines (OC-SVM), combined with ANOVA F-test feature selection, in detecting network intrusions in web services. The study used the NSL-KDD dataset, encompassing hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP) web service attacks and normal traffic patterns, to comprehensively evaluate the algorithms. The performance of the algorithms is evaluated based on several metrics, such as the F1-score, detection rate (recall), precision, false alarm rate (FAR), and Area Under the Receiver Operating Characteristic (AUCROC) curve. Additionally, the study investigates the impact of different hyper-parameters on the performance of both algorithms. Our empirical results demonstrate that while both IF and OC-SVM exhibit high efficacy in detecting network intrusion attacks on HTTP, SMTP, and FTP web services, OC-SVM outperforms IF in terms of F1-score (SMTP), detection rate (HTTP, SMTP, and FTP), AUCROC, and a consistently low false alarm rate (HTTP). We used the t-test to determine that OC-SVM statistically outperforms IF on DR and FAR.</description>
        <description>http://thesai.org/Downloads/Volume14No8/Paper_1-An_Empirical_Internet_Protocol_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep-Learning-based Analysis of the Patterns Associated with the Changes in the Grit Scores and Understanding Levels of Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407118</link>
        <id>10.14569/IJACSA.2023.01407118</id>
        <doi>10.14569/IJACSA.2023.01407118</doi>
        <lastModDate>2023-07-31T11:57:46.4800000+00:00</lastModDate>
        
        <creator>Ayako OHSHIRO</creator>
        
        <subject>Time series; dynamic time warping; decision tree; Grit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The purpose of this study is to classify the patterns of change in university students&#39; understanding levels during the class term and to analyze their relation to the changes in the Grit score before and after the class term. Dynamic time warping was applied to classify the understanding levels, and a decision tree was applied to analyze the relation between the changes in the understanding level and those in the Grit score. The results show that the patterns of change in the understanding level vary widely, as do the relations between the understanding level and the Grit score. These results should be taken into account when conducting effective lectures.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_118-Deep_Learning_based_Analysis_of_the_Patterns_Associated_with_the_Changes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepShield: A Hybrid Deep Learning Approach for Effective Network Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407117</link>
        <id>10.14569/IJACSA.2023.01407117</id>
        <doi>10.14569/IJACSA.2023.01407117</doi>
        <lastModDate>2023-07-31T11:57:46.4500000+00:00</lastModDate>
        
        <creator>Hongjie Lin</creator>
        
        <subject>Network intrusion detection system; IDS; cyber security; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In today&#39;s rapidly evolving digital landscape, ensuring the security of networks and systems has become more crucial than ever before. The ever-present threat of hackers and intruders attempting to disrupt networks and compromise online services highlights the pressing need for robust security measures. With the continuous advancement of security systems, new dangers arise, but so do innovative solutions. One such solution is the implementation of Network Intrusion Detection Systems (NIDSs), which play a pivotal role in identifying potential threats to computer systems by categorizing network traffic. However, the effectiveness of an intrusion detection system lies in its ability to prepare network data and identify critical attributes necessary for constructing robust classifiers. In light of this, this paper proposes DeepShield, a cutting-edge NIDS that harnesses the power of deep learning and leverages a hybrid feature selection approach for optimal performance. DeepShield consists of three essential steps: hybrid feature selection, rule assessment, and detection. By combining the strengths of machine learning and deep learning technologies, a new solution is developed that excels in detecting network intrusions. The process begins by capturing packets from the network, which are then carefully preprocessed to reduce their size while retaining essential information. These refined data packets are then fed into a deep learning algorithm, which learns and tests potential intrusion patterns. Simulation results demonstrate the superiority of DeepShield over previous approaches. DeepShield achieves an exceptional level of accuracy in detecting malicious attacks, as evidenced by its outstanding performance on the widely recognized CSE-CIC-IDS2018 dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_117-DeepShield_A_Hybrid_Deep_Learning_Approach_for_Effective_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Review of Fault-Tolerant Routing Mechanisms for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407116</link>
        <id>10.14569/IJACSA.2023.01407116</id>
        <doi>10.14569/IJACSA.2023.01407116</doi>
        <lastModDate>2023-07-31T11:57:46.4330000+00:00</lastModDate>
        
        <creator>Zhengxin Lan</creator>
        
        <subject>Internet of things; routing; data transmission; fault-tolerant; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The Internet of Things (IoT) facilitates intelligent communication and real-time data collection through dynamic networks. The IoT technology is ideally suited to meet intelligent city requirements and enable remote access. Several cloud-based approaches have been proposed for constrained IoT systems, including scalable data storage and effective routing. In real-world scenarios, the effectiveness of many methods for wireless networks and communication links can be challenged due to their unpredictable characteristics. These challenges can result in path failures and increased resource utilization. To enhance the reliability and resilience of IoT networks in the face of failures, fault tolerance mechanisms are crucial. Network failures can occur for various reasons, including the breakdown of the wireless nodes&#39; communication module, node failures caused by battery drain, and changes in the network topology. Addressing these issues is essential to ensure the continuous and reliable operation of IoT networks. Fault-tolerant routing plays a critical role in IoT-based networks, but no systematic and comprehensive research has been conducted in this area. Therefore, this paper aims to fill this gap by reviewing state-of-the-art mechanisms. An analysis of the practical techniques leads to recommendations for further research.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_116-A_Comprehensive_Review_of_Fault_Tolerant_Routing_Mechanisms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Cryptography Method using Extended Letters in Arabic and Persian Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407115</link>
        <id>10.14569/IJACSA.2023.01407115</id>
        <doi>10.14569/IJACSA.2023.01407115</doi>
        <lastModDate>2023-07-31T11:57:46.3870000+00:00</lastModDate>
        
        <creator>Ke Wang</creator>
        
        <subject>Cryptography; extended letters; Persian language; Arabic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cryptography is widely used in information security systems. In encryption, the goal is to hide information in such a way that only the sender and receiver are aware of the existence of the communication and its content. Encryption is applied to various media, such as images, sound, and text. Today, the rapid growth of network technologies and digital tools has made digital delivery fast and easy. However, the distribution of digital data over public networks such as the Internet faces various challenges, including copyright infringement, forgery, and fraud. Therefore, methods of protecting digital data, especially sensitive data, are very necessary. Accordingly, in this article, a combined solution is used based on the techniques of stretching letters and making minor changes to letters that have closed spaces, so that the bits of the hidden text can be inserted into Persian or Arabic text. For this purpose, a new solution has been designed in which the cover text looks like normal text, with the difference that, in addition to the extended letters, which are lengthened according to the content of the secret message, it also contains some letters whose closed spaces are slightly modified. This difference in the closed space between the original and modified letters is very slight, so an ordinary user cannot perceive the change. Finally, the proposed solution has been evaluated in MATLAB, and in terms of embedding capacity, the results show that the proposed method achieves, on average, more than 50% higher capacity than other common solutions.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_115-A_Hybrid_Cryptography_Method_using_Extended_Letters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Scheduling using Advanced Cat Swarm Optimization Algorithm to Improve Performance in Fog Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407114</link>
        <id>10.14569/IJACSA.2023.01407114</id>
        <doi>10.14569/IJACSA.2023.01407114</doi>
        <lastModDate>2023-07-31T09:34:38.9500000+00:00</lastModDate>
        
        <creator>Xiaoyan Huo</creator>
        
        <creator>Xuemei Wang</creator>
        
        <subject>Scheduling; fog computing; optimal balancing; cat swarm optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Fog computing can be considered a decentralized computing approach that essentially extends the capabilities offered by cloud computing to the periphery of the network. In addition, due to its proximity to the user, fog computing proves to be highly efficient in minimizing the volume of data that needs to be transmitted, reducing overall network traffic, and shortening the distance that data must travel. However, like other new technologies, fog computing has challenges, and scheduling and optimal allocation of resources is one of the most important of them. Accordingly, this research aims to propose an optimal solution for efficient scheduling within the fog computing environment through the application of an advanced cat swarm optimization algorithm. In this solution, the two main behaviors of cats are implemented in the form of seeking and tracing modes. Processing nodes are periodically examined and categorized based on the number of available resources, and servers with high resource availability are prioritized in the scheduling process. Subsequently, congested servers, which may be experiencing various issues, have their workloads migrated to alternative servers with ample resources using the virtual machine live migration technique. Ultimately, the effectiveness of the proposed solution is assessed using the iFogSim simulator, demonstrating notable reductions in execution time and energy consumption: the proposed solution reduces execution time by 20% while improving energy efficiency by more than 15% on average. This optimization represents a trade-off between improving performance and reducing resource consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_114-Optimal_Scheduling_using_Advanced_Cat_Swarm_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Fraud Detection in e-Commerce Transactions using Deep Reinforcement Learning and Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407113</link>
        <id>10.14569/IJACSA.2023.01407113</id>
        <doi>10.14569/IJACSA.2023.01407113</doi>
        <lastModDate>2023-07-31T09:34:38.9330000+00:00</lastModDate>
        
        <creator>Yuanyuan Tang</creator>
        
        <subject>Fraud detection; reinforcement learning; artificial neural network; artificial bee colony; imbalanced classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Fraud is a serious issue that has plagued e-commerce for many years, and despite significant efforts to combat it, current fraud detection strategies only catch a small portion of fraudulent transactions. This results in substantial financial losses, with billions of dollars being lost each year. Given the expected surge in the volume of online transactions in the upcoming years, there is a critical need for improved fraud detection strategies. To tackle this problem, the article proposes a deep reinforcement learning approach for the automatic detection of fraudulent e-commerce transactions. The architecture&#39;s policy is built on the implementation of artificial neural networks (ANNs). The classification problem is viewed as a step-by-step decision-making procedure. The implementation of the model involves the use of the artificial bee colony (ABC) algorithm to acquire initial weight values. After that, in each step, the agent obtains a sample and performs a classification, with the environment providing a reward for each classification action. To encourage the model to concentrate on detecting fraudulent transactions precisely, the reward for identifying the minority class is higher than that for the majority class. With the aid of a supportive learning setting and a specific reward system, the agent ultimately determines the best approach to achieve its objectives. The performance of the suggested model is assessed utilizing a publicly available dataset contributed by the Machine Learning group at the Universit&#233; Libre de Bruxelles. The experimental outcomes, determined using recognized evaluation measures, indicate that the model has attained a high level of accuracy. As a result, the suggested model is considered appropriate for identifying deceitful transactions in e-commerce.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_113-Automatic_Fraud_Detection_in_e_Commerce_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Cat Swarm Optimization Algorithm for Load Balancing in the Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407112</link>
        <id>10.14569/IJACSA.2023.01407112</id>
        <doi>10.14569/IJACSA.2023.01407112</doi>
        <lastModDate>2023-07-31T09:34:38.9030000+00:00</lastModDate>
        
        <creator>Wang Dou</creator>
        
        <subject>Cloud computing; resource utilization; load balancing; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Recently, cloud computing has gained recognition as a powerful tool for providing clients with flexible platforms, software services, and cost-effective infrastructures. Cloud computing is a form of distributed computing that allows users to store and process data in a virtual environment instead of on a physical server. This is beneficial because it allows businesses to quickly scale their computing capacity up or down, reducing the need to invest in expensive hardware. As cloud tasks continue to grow exponentially and the usage of cloud services increases, scheduling these tasks across diverse virtual machines poses a challenging NP-hard optimization problem with substantial requirements, including optimal resource utilization levels, a short execution time, and a reasonable implementation cost. The issue has consequently been addressed using a variety of meta-heuristic algorithms. In this paper, we propose a new load-balancing approach using the Cat Swarm Optimization (CSO) algorithm in order to distribute the load among the various servers within a data center. Statistical analyses indicate that our algorithm outperforms previous research, reducing energy consumption, makespan, and required time by up to 30%, 35%, and 40%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_112-Improved_Cat_Swarm_Optimization_Algorithm_for_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inverted Ant Colony Optimization Algorithm for Data Replication in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407111</link>
        <id>10.14569/IJACSA.2023.01407111</id>
        <doi>10.14569/IJACSA.2023.01407111</doi>
        <lastModDate>2023-07-31T09:34:38.8870000+00:00</lastModDate>
        
        <creator>Min YANG</creator>
        
        <subject>Cloud computing; data replication; cloud data centers; reliability; energy efficiency; inverted ant colony optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Data replication is crucial in enhancing data availability and reducing access latency in cloud computing. This paper presents a dynamic duplicate management method for cloud storage systems based on the Inverted Ant Colony Optimization (IACO) algorithm and a fuzzy logic system. The proposed approach optimizes data replication decisions focusing on energy consumption, response time, and cost. Extensive simulations demonstrate that the IACO-based method outperforms existing techniques, achieving a remarkable 25% reduction in energy consumption, a significant 15% improvement in response time, and a substantial 20% cost reduction. By addressing the research gap concerning integrating IACO and fuzzy logic for data replication, our work contributes to advancing cloud computing solutions for large datasets. The proposed method offers a viable and efficient approach to improve resource utilization and system performance, benefiting various scientific fields.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_111-Inverted_Ant_Colony_Optimization_Algorithm_for_Data_Replication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Hybrid Algorithm Approach for Solving Harmonic Problems in Power Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407110</link>
        <id>10.14569/IJACSA.2023.01407110</id>
        <doi>10.14569/IJACSA.2023.01407110</doi>
        <lastModDate>2023-07-31T09:34:38.8700000+00:00</lastModDate>
        
        <creator>Ning WANG</creator>
        
        <creator>Qiuju DENG</creator>
        
        <subject>Harmonic estimation; fuzzy PD controlled; harmonic components</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>A fundamental power quality problem in electrical systems is electrical harmonics. In order to limit harmonics and their effects on power systems, filters used in electric power systems must be designed with power harmonics taken into consideration. The approach suggested in this study differs from previously published hybrid strategies, and its procedures center mainly on reducing computational complexity and time. Over the past 20 years, the voltage and current waveforms of distribution networks have become significantly distorted due to the increased use of power electronic equipment and non-linear loads. This paper provides a new hybrid approach for harmonic estimation. Harmonic estimation of these deformed waveforms is a nonlinear problem because the sinusoidal waveforms contain nonlinear distortions. Owing to the slow convergence of nonlinear methods in estimating harmonic components, the combined technique splits the problem of harmonic estimation into two independent problems. The algorithm used in this study first estimates amplitude and frequency using the fuzzy logic control (FLC) approach, a non-linear estimator. A genetic algorithm is then used to minimize the error between the original signal and the estimated signal. The experiments show that the proposed method reduces harmonic estimation time by 36% compared with comparable methods. As a result, the suggested technique offers a number of benefits, including very quick computation times, more precise evaluation of amplitude and phase values for all conditions, and reduced complexity in the outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_110-A_Modified_Hybrid_Algorithm_Approach_for_Solving_Harmonic_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Task Scheduling using Particle Swarm Optimization and Capuchin Search Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407109</link>
        <id>10.14569/IJACSA.2023.01407109</id>
        <doi>10.14569/IJACSA.2023.01407109</doi>
        <lastModDate>2023-07-31T09:34:38.8700000+00:00</lastModDate>
        
        <creator>Gang WANG</creator>
        
        <creator>Jiayin FENG</creator>
        
        <creator>Dongyan JIA</creator>
        
        <creator>Jinling SONG</creator>
        
        <creator>Guolin LI</creator>
        
        <subject>Cloud computing; virtualization; task scheduling; optimization; resource utilization; capuchin search algorithm; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cloud providers offer heterogeneous virtual machines for the execution of a variety of tasks requested by users. These virtual machines are managed by the cloud provider, eliminating the need for users to set up and maintain their own hardware. This makes the computing resources necessary to run applications and services more accessible and cost-effective. The task scheduling problem can be expressed as a discrete optimization problem that is NP-hard. To address this problem, we propose a hybrid meta-heuristic algorithm using the Capuchin Search Algorithm (CapSA) and the Particle Swarm Optimization (PSO) algorithm. PSO excels in global exploration, while CapSA is adept at fine-tuning solutions through local search. We aim to achieve better convergence and solution quality by integrating both algorithms. Our proposed method&#39;s performance is thoroughly evaluated through extensive experimentation, comparing it to standalone PSO and CapSA approaches. The findings reveal that our hybrid algorithm outperforms the individual techniques in terms of both total execution time and total execution cost. The novelty of our work lies in the synergistic integration of PSO and CapSA, addressing the limitations of traditional optimization methods for cloud task scheduling. The proposed hybrid approach opens up intriguing directions for future research in dynamic task scheduling, multi-objective optimization, adaptive algorithms, integration with emerging technologies, and real-world deployment scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_109-Cloud_Task_Scheduling_using_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach based on Association Rules for Building and Optimizing OLAP Cubes on Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407108</link>
        <id>10.14569/IJACSA.2023.01407108</id>
        <doi>10.14569/IJACSA.2023.01407108</doi>
        <lastModDate>2023-07-31T09:34:38.8570000+00:00</lastModDate>
        
        <creator>Redouane LABZIOUI</creator>
        
        <creator>Khadija LETRACHE</creator>
        
        <creator>Mohammed RAMDANI</creator>
        
        <subject>NoSQL; graph-oriented databases; data warehouse; OLAP; aggregation algorithms; association rules; cypher language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The expansion of data has prompted the creation of various NoSQL (Not only SQL) databases, including graph-oriented databases, which provide an understandable abstraction for modeling complex domains and managing highly connected data. However, to add graph data to existing decision support systems, new data warehouse systems that consider the special characteristics of graphs need to be developed. This work proposes a novel method for creating a data warehouse under a graph database and demonstrates how OLAP (Online Analytical Processing) structures created for reporting can be handled by graph databases. Additionally, the paper suggests using aggregation algorithms based on association rule techniques to improve the efficiency of reporting and data analysis within a graph-based data warehouse. Finally, we provide a Cypher language implementation of the suggested approach to evaluate and validate it.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_108-New_Approach_based_on_Association_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart-Agri: A Smart Agricultural Management with IoT-ML-Blockchain Integrated Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407107</link>
        <id>10.14569/IJACSA.2023.01407107</id>
        <doi>10.14569/IJACSA.2023.01407107</doi>
        <lastModDate>2023-07-31T09:34:38.8400000+00:00</lastModDate>
        
        <creator>Md. Mamun Hossain</creator>
        
        <creator>Md. Ashiqur Rahman</creator>
        
        <creator>Sudipto Chaki</creator>
        
        <creator>Humayra Ahmed</creator>
        
        <creator>Ahsanul Haque</creator>
        
        <creator>Iffat Tamanna</creator>
        
        <creator>Sweety Lima</creator>
        
        <creator>Most. Jannatul Ferdous</creator>
        
        <creator>Md. Saifur Rahman</creator>
        
        <subject>Smart agriculture; machine learning; internet of things; energy harvesting; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This paper presents intuitive directions for field research by introducing a ground-breaking IoT-ML-driven intelligent farm management platform. This study’s main goal is to address agricultural difficulties by providing a thorough, integrated solution. This work makes a variety of important contributions. By utilizing cutting-edge technology like IoT and Machine Learning (ML), it first improves conventional farm management procedures. Farmers now have the capacity to remotely monitor and regulate irrigation management thanks to sensor-based real-time data. Second, based on data gathered from agricultural fields, our machine learning model offers improved water control management and fertilizer use recommendations, maximizing production while minimizing resource usage. The suggested solution also uses blockchain technology to create a safe, decentralized network that guarantees data integrity and defends against threats. We also introduce energy harvesting technology to address the issue of continuous energy supply for IoT devices, which lessens the load on farmers by removing the requirement for additional batteries. We achieved 89.5% accuracy in our proposed machine learning model. The suggested model would provide a variety of services to farmers, including pesticide recommendations and water motor control via mobile applications and a cloud database.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_107-Smart_Agri_A_Smart_Agricultural_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid TF-IDF and RNN Model for Multi-label Classification of the Deep and Dark Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407106</link>
        <id>10.14569/IJACSA.2023.01407106</id>
        <doi>10.14569/IJACSA.2023.01407106</doi>
        <lastModDate>2023-07-31T09:34:38.8230000+00:00</lastModDate>
        
        <creator>Ashwini Dalvi</creator>
        
        <creator>Soham Bhoir</creator>
        
        <creator>Nishavak Naik</creator>
        
        <creator>Atharva Kitkaru</creator>
        
        <creator>Irfan Siddavatam</creator>
        
        <creator>Sunil Bhirud</creator>
        
        <subject>Deep web; dark web; multi-label classification; TF-IDF; FastText; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The classification of content on the deep and dark web has been a topic of interest for researchers. As the data available on deep and dark web platforms continues to grow, researchers focus on adopting more efficient and effective classification methods. Multi-label classification is an approach for simultaneously categorizing content into multiple classes. To support this task, a hybrid approach combining Term Frequency-Inverse Document Frequency (TF-IDF) and a Recurrent Neural Network (RNN) has been proposed. The approach involves preprocessing a dataset of Hypertext Markup Language (HTML) documents, selecting specific HTML tags to generate embeddings using TF-IDF, and using an RNN model for multi-label classification. The proposed model was evaluated against commonly used methods (Binary Relevance, Classifier Chains, and Label Powerset) using precision, recall, and F1-score as evaluation metrics, demonstrating promising results in accurately classifying data from the deep and dark web. This contribution represents a noteworthy advancement for researchers and analysts working in this field.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_106-A_Hybrid_TF_IDF_and_RNN_Model_for_Multi_label_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Face Mask Detection for Real-Time Implementation on an Rpi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407105</link>
        <id>10.14569/IJACSA.2023.01407105</id>
        <doi>10.14569/IJACSA.2023.01407105</doi>
        <lastModDate>2023-07-31T09:34:38.8230000+00:00</lastModDate>
        
        <creator>Ivan George L. Tarun</creator>
        
        <creator>Vidal Wyatt M. Lopez</creator>
        
        <creator>Pamela Anne C. Serrano</creator>
        
        <creator>Patricia Angela R. Abu</creator>
        
        <creator>Rosula S.J. Reyes</creator>
        
        <creator>Ma. Regina Justina E. Estuar</creator>
        
        <subject>Face mask detection; multi-face detection; Raspberry Pi; embedded platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Mask-wearing remains one of the primary protective measures against COVID-19. To address the difficulty of manual compliance monitoring, face mask detection models that account for both frontal and angled faces were developed. This study aimed to test the performance of these models in classifying multi-face images and when running on a Raspberry Pi device. The accuracies and inference speeds were measured and compared when inferencing images with one, two, and three faces, and on the desktop and the Raspberry Pi. With an increasing number of faces in an image, the models’ accuracies were observed to decline, while their speeds were not significantly affected. Moreover, the YOLOv5 Small model was regarded as potentially the best model for use on lower-resource platforms, as it experienced a 3.33% increase in accuracy and recorded the least inference time of two seconds per image among the models.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_105-Performance_Evaluation_of_Face_Mask_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Computer-assisted Bone Fractures Diagnosis in Musculoskeletal Radiographs Based on Generative Adversarial Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407104</link>
        <id>10.14569/IJACSA.2023.01407104</id>
        <doi>10.14569/IJACSA.2023.01407104</doi>
        <lastModDate>2023-07-31T09:34:38.8100000+00:00</lastModDate>
        
        <creator>Nabila Ounasser</creator>
        
        <creator>Maryem Rhanoui</creator>
        
        <creator>Mounia Mikram</creator>
        
        <creator>Bouchra El Asri</creator>
        
        <subject>Deep learning; generative adversarial network; diagnosis; orthopedics; fracture detection; x-ray image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Computer-assisted diagnosis of bone fractures in musculoskeletal radiographs plays a crucial role in aiding medical professionals in accurate and timely fracture detection. In this work, we explore an approach based on Generative Adversarial Networks (GANs), powerful deep learning models capable of generating realistic images and detecting anomalies. Our proposed approach leverages the potential of GANs to generate synthetic radiographs with fractures and identify anomalous patterns, thereby enhancing fracture diagnosis. Through extensive experimentation and evaluation on musculoskeletal radiograph datasets (MURA), we demonstrate the effectiveness of GAN-based models in improving fracture detection performance using several evaluation metrics, notably accuracy, precision, F1-score, and detection speed. These findings highlight the potential of integrating GANs into computer-assisted diagnosis, contributing to the advancement of fracture diagnosis methodologies in orthopedics. It is important to note that GANs operate by training a generator network to produce synthetic images and a discriminator network to distinguish between real and generated images. This adversarial process fosters the generation of realistic radiographs with fractures, enabling accurate and automated detection. Our findings pave the way for more efficient and precise diagnostic tools in the field of orthopedics.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_104-Enhancing_Computer_assisted_Bone_Fractures_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FCCC: Forest Cover Change Calculator User Interface for Identifying Fire Incidents in Forest Region using Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407103</link>
        <id>10.14569/IJACSA.2023.01407103</id>
        <doi>10.14569/IJACSA.2023.01407103</doi>
        <lastModDate>2023-07-31T09:34:38.7930000+00:00</lastModDate>
        
        <creator>Anubhava Srivastava</creator>
        
        <creator>Sandhya Umrao</creator>
        
        <creator>Susham Biswas</creator>
        
        <creator>Rakesh Dubey</creator>
        
        <creator>Md. Iltaf Zafar</creator>
        
        <subject>GEE; remote sensing; classification; landsat; sentinel; forest fire</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>For the ecosystem to maintain a balance between the social and environmental spheres, forests play a crucial role. Given this significance, the greatest threat to forests is fires and natural disasters caused by several factors. It is crucial to assess the genesis and behavioral characteristics of fires in forest areas. The identification of forest fire areas and of fire intensity is greatly facilitated by satellite images obtained from different sensors and data sets. We suggest a novel approach that computes changes using spectral indices, employing Landsat-9 and Sentinel-2 satellite datasets to measure the change in forest areas affected by fire incidents over the Kochi area in March 2023. Kochi is a city in Kerala, South India, located at the coordinates 9&#176; 50’ 20.7348” N and 77&#176; 10’ 13.8828” E. Computation is performed by calculating forest area before the fire incident (pre-fire) and after the fire incident (post-fire), and total loss is calculated as the difference between the pre-fire and post-fire values. The proposed work uses Sentinel-2 and Landsat-9 satellite images to recover burn scars using several vegetation indicators. We have identified the fire locations using the object-based classification approach. To verify the results computed with vegetation indices, we have also performed land use land cover classification and calculated the changes in forest areas. Accuracy is computed with a confusion matrix, yielding an accuracy of 89.45% and a kappa coefficient of 87.68%. In particular, there was a strong correlation between forest loss and the burned area in the subtropical evergreen broadleaf forest zone (6.9%) and the deciduous coniferous forest zone (18.9% of the land). These findings serve as a foundation for future forecasts of fire-induced forest loss in regions with similar climatic and environmental conditions.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_103-FCCC_Forest_Cover_Change_Calculator_User_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employee Attrition Prediction using Nested Ensemble Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407101</link>
        <id>10.14569/IJACSA.2023.01407101</id>
        <doi>10.14569/IJACSA.2023.01407101</doi>
        <lastModDate>2023-07-31T09:34:38.7770000+00:00</lastModDate>
        
        <creator>Muneera Saad Alshiddy</creator>
        
        <creator>Bader Nasser Aljaber</creator>
        
        <subject>Nested ensemble learning; employee attrition; machine learning; employment process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In many industries, including the IT industry, rising employee attrition is a major concern. Hiring a candidate for an unsuitable job because of issues with the employment process can lead to employee attrition. Thus, enhancing the employment process would reduce the attrition rate. This paper aims to investigate the effect of ensemble learning techniques on enhancing the employment process by predicting employee attrition. This paper applied a two-layer nested ensemble model to the IBM HR Analytics Employee Attrition &amp; Performance dataset. The performance of this model was compared to that of the random forest (RF) algorithm as a baseline. The results showed that the proposed model outperformed the baseline algorithm. The RF model achieved an accuracy of 94.2417%, an F1-score of 94.2%, and an AUC of 98.4%, whereas the proposed model achieved the highest performance, with an accuracy of 94.5255%, an F1-score of 94.5%, and an AUC of 98.5%. The performance of the two models was compared using a paired t-test, according to which the proposed model was statistically better than the baseline algorithm at the 0.05 significance level. Thus, the two-layer nested ensemble model improved employee attrition prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_101-Employee_Attrition_Prediction_using_Nested_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bird Detection and Species Classification: Using YOLOv5 and Deep Transfer Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407102</link>
        <id>10.14569/IJACSA.2023.01407102</id>
        <doi>10.14569/IJACSA.2023.01407102</doi>
        <lastModDate>2023-07-31T09:34:38.7770000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Nhon Nguyen Thien</creator>
        
        <creator>Kheo Chau Mui</creator>
        
        <subject>Bird detection; species classification; YOLOv5; deep transfer learning models; automated bird monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Bird detection and species classification are important tasks in ecological research and bird conservation efforts. The study aims to address the challenges of accurately identifying bird species in images, which plays a crucial role in various fields such as environmental monitoring and wildlife conservation. This article presents a comprehensive study on bird detection and species classification using the YOLOv5 object detection algorithm and deep transfer learning models. The objective is to develop an efficient and accurate system for identifying bird species in images. The YOLOv5 model is utilized for robust bird detection, enabling the localization of birds within images. Deep transfer learning (TL) models, including VGG19, Inception V3, and EfficientNetB3, are employed for species classification, leveraging their pre-trained weights and learned features. The experimental findings show that the proposed approach is effective, with excellent accuracy in both bird detection and species classification tasks. The study showcases the potential of combining YOLOv5 with deep transfer learning models for comprehensive bird analysis, opening avenues for automated bird monitoring, ecological research, and conservation efforts. Furthermore, the study investigated the effects of optimization algorithms, including SGD, Adam, and Adamax, on the performance of the models. The findings contribute to the advancement of bird recognition systems and provide insights into the performance and suitability of various deep transfer learning architectures for avian image analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_102-Bird_Detection_and_Species_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Existing Datasets Used for Software Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01407100</link>
        <id>10.14569/IJACSA.2023.01407100</id>
        <doi>10.14569/IJACSA.2023.01407100</doi>
        <lastModDate>2023-07-31T09:34:38.7600000+00:00</lastModDate>
        
        <creator>Mizanur Rahman</creator>
        
        <creator>Teresa Goncalves</creator>
        
        <creator>Hasan Sarwar</creator>
        
        <subject>Software effort estimation; software effort prediction; software effort estimation datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Software Effort Estimation (SEE) produces an estimate of the amount of work that will be necessary to complete a project successfully. Managers usually want to know in advance how demanding a new project will be so that they can allocate their limited resources fairly. In practice, it is common to use effort datasets to train a prediction model that can predict how much work a project will take. Training a good estimator requires sufficient data, but most data owners are reluctant to share effort data from their closed-source projects because of privacy concerns, so only a small amount of effort data is available. The purpose of this research was to evaluate the quality of 15 datasets that have been widely utilized in studies of software project estimation. The analysis shows that most of the selected studies use artificial neural networks (ANN) as ML models, NASA datasets, and the mean magnitude of relative error (MMRE) as a measure of accuracy. In most cases, ANN and support vector machine (SVM) have outperformed other ML techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_100-Review_of_Existing_Datasets_Used_for_Software_Effort_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Customer Segment Changes to Enhance Customer Retention: A Case Study for Online Retail using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140799</link>
        <id>10.14569/IJACSA.2023.0140799</id>
        <doi>10.14569/IJACSA.2023.0140799</doi>
        <lastModDate>2023-07-31T09:34:38.7470000+00:00</lastModDate>
        
        <creator>Lahcen ABIDAR</creator>
        
        <creator>Dounia ZAIDOUNI</creator>
        
        <creator>Ikram EL ASRI</creator>
        
        <creator>Abdeslam ENNOUAARY</creator>
        
        <subject>Customer segment changes; customer retention; marketing actions; informed decisions; advertising strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In today’s highly competitive marketplace, advertisers strive to tailor their messages to specific individuals or groups, often overlooking their most significant clients. The Pareto principle, asserting that 80% of sales come from 20% of customers, offers valuable insight: imagine if companies could accurately forecast this vital 20% and recognize its historical significance. Predicting customer lifetime value (CLV) at this juncture becomes crucial in aiding firms to effectively prioritize their efforts. To achieve this, organizations can leverage predictive models and analytical tools to target specific customers with tailored campaigns, enabling well-informed decisions about advertising investments. By being aware of these segment transitions, advertisers can efficiently deploy resources and increase their return on investment. By implementing the strategies outlined in this study, businesses can gain a competitive edge by identifying and retaining their most valuable clients. The potential for growth and client retention is immense when anticipating changes in customer segments and adjusting advertising strategies accordingly. This paper provides a comprehensive methodology, tools, and insights to assist marketers in optimizing their advertising campaigns by anticipating customer lifetime value and actively predicting changes in client segmentation.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_99-Predicting_Customer_Segment_Changes_to_Enhance_Customer_Retention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Machine-Learning and Nature-Inspired Algorithms for Genome Assembly</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140798</link>
        <id>10.14569/IJACSA.2023.0140798</id>
        <doi>10.14569/IJACSA.2023.0140798</doi>
        <lastModDate>2023-07-31T09:34:38.7300000+00:00</lastModDate>
        
        <creator>Asmae Yassine</creator>
        
        <creator>Mohammed Essaid Riffi</creator>
        
        <subject>Artificial intelligence; genome assembly; machine learning; bioinformatics; bio-inspired algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Genome assembly plays a crucial role in the field of bioinformatics, as current sequencing technologies are unable to sequence an entire genome at once, necessitating fragmentation into short sequences and their reassembly. Genomes often contain repetitive sequences and duplicated regions, which can lead to ambiguities during assembly. Thus, the process of reconstructing a complete genome from a set of reads necessitates the use of efficient assembly programs. Over time, as genome sequencing technology has advanced, the methods for genome assembly have also evolved, resulting in the utilization of various genome assemblers. Many artificial intelligence techniques, such as machine learning and nature-inspired algorithms, have been applied to genome assembly in recent years. These technologies have the potential to significantly enhance the accuracy of genome assembly, leading to functionally correct genome reconstructions. This review paper aims to provide an overview of genome assembly, highlighting the significance of the different machine learning techniques and nature-inspired algorithms used to achieve accurate and efficient genome assembly. By examining the advancements and possibilities brought about by different machine learning and metaheuristic approaches, this review offers insights into the future directions of genome assembly.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_98-A_Review_on_Machine_Learning_and_Nature_Inspired_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generative Adversarial Network-based Approach for Automated Generation of Adversarial Attacks Against a Deep-Learning based XSS Attack Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140797</link>
        <id>10.14569/IJACSA.2023.0140797</id>
        <doi>10.14569/IJACSA.2023.0140797</doi>
        <lastModDate>2023-07-31T09:34:38.7300000+00:00</lastModDate>
        
        <creator>Rokia Lamrani Alaoui</creator>
        
        <creator>El Habib Nfaoui</creator>
        
        <subject>Deep learning; generative adversarial network; LSTM; web attacks; adversarial attacks; Cross Site Scripting attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cross Site Scripting (XSS) is one of the most well-known and dangerous web attacks. In XSS attacks, illegitimate technical methods are used by attackers to disclose sensitive data from website users, which results in significant financial and reputational loss to the website’s owner. Numerous XSS attack countermeasures exist. Deep Learning has been shown to be effective when used to detect XSS attacks in HTTP web requests. Yet, Deep Learning models are inherently vulnerable to adversarial attacks, which aim to deceive the detection model into misclassifying malicious HTTP web requests. Thus, it is important to evaluate the robustness of the detection model against adversarial attacks before its deployment to production in real web applications. In this work, we developed a Generative Adversarial Network (GAN) model for automated generation of adversarial XSS attacks against an LSTM-based XSS attack detection model. We showed that the detection model&#39;s performance drops drastically when evaluated on XSS instances originally used in model development but modified by the GAN model. We also provide some guidelines for the development of detection models that can defend against adversarial attacks in the particular context of web attack detection.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_97-Generative_Adversarial_Network_based_Approach_for_Automated_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Framework for Relevance Classification of Trending Topics in Arabic Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140796</link>
        <id>10.14569/IJACSA.2023.0140796</id>
        <doi>10.14569/IJACSA.2023.0140796</doi>
        <lastModDate>2023-07-31T09:34:38.7130000+00:00</lastModDate>
        
        <creator>Abdullah M. Alkadri</creator>
        
        <creator>Abeer ElKorany</creator>
        
        <creator>Cherry A. Ezzat</creator>
        
        <subject>Trending topics; social media platforms; machine learning; Arabic relevance classification; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Social media platforms such as Twitter are a valuable source of information about current events and trends. Trending topics aim to promote public events such as political events, market changes, and other types of breaking news. However, with so much data being generated, it would be difficult to identify relevant tweets that are related to a particular trending topic. Therefore, in this paper, an integrated framework is proposed for the detection of the degree of relevance between Arabic tweets and trending topics. This framework integrates natural language processing, data augmentation, and machine learning techniques to identify text that is likely to be relevant to a given trending topic. The proposed framework was evaluated using a real-life dataset of Arabic tweets that was collected and labeled. The results of the evaluation showed that the proposed framework achieved the highest macro F1 score of 82% in binary classification (relevant/irrelevant) and 77% in categorical classification (degree of relevance), which outperforms the current state of the art.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_96-An_Integrated_Framework_for_Relevance_Classification_of_Trending_Topics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Privacy Inference Preservation Algorithm for Indoor Trajectory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140794</link>
        <id>10.14569/IJACSA.2023.0140794</id>
        <doi>10.14569/IJACSA.2023.0140794</doi>
        <lastModDate>2023-07-31T09:34:38.7000000+00:00</lastModDate>
        
        <creator>Abdullah Alamri</creator>
        
        <subject>Privacy; semantic ontology; indoor space; routing algorithm; spatial databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Indoor location services have become an increasingly important part of our everyday lives in recent years. Despite the numerous benefits these services offer, serious concerns have arisen about the privacy of users’ locations. Adversaries can monitor user-requested locations in order to obtain sensitive information such as shopping patterns. Many users of indoor spaces want their movements and locations to be kept private so as not to reveal their visits to particular zones inside buildings. Research on semantic indoor trajectory-based human movement data has primarily focused on finding routes without taking into account the protection of privacy. Hence, the servers on which trajectory data is stored are not completely secure. In this paper, we propose a semantic privacy inference preservation algorithm for indoor trajectories that can issue path-finding and navigation instructions while achieving good privacy protection of moving entities by generating ambiguous trajectories. The proposed semantic indoor privacy algorithm was simulated in MATLAB.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_94-Semantic_Privacy_Inference_Preservation_Algorithm_for_Indoor_Trajectory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knee Cartilage Segmentation using Improved U-Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140795</link>
        <id>10.14569/IJACSA.2023.0140795</id>
        <doi>10.14569/IJACSA.2023.0140795</doi>
        <lastModDate>2023-07-31T09:34:38.7000000+00:00</lastModDate>
        
        <creator>Nawaf Waqas</creator>
        
        <creator>Sairul Izwan Safie</creator>
        
        <creator>Kushsairy Abdul Kadir</creator>
        
        <creator>Sheroz Khan</creator>
        
        <subject>Knee image segmentation; U-Net; loss function; squeeze and excitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Patello-femoral joint stability is a complex problem and requires a detailed anatomic parametric study to understand the associated breakdown of knee cartilage. Osteoarthritis is one of the main disorders that disrupt the normal bio-mechanics and stability of the patello-femoral joint, and radiologists need a lot of time to diagnose it. An improved network called PSU-Net is proposed for the automatic segmentation of femoral, tibial, and patellar cartilage in knee MR images. The model utilizes a Squeeze and Excitation block with residual connections for effective feature learning, which helps in learning the imbalanced anatomical structure between background, bone areas, and cartilage. The severity of knee cartilage degradation is measured by radiologists through the Kellgren and Lawrence (KL) grading system. An updated weighted loss function is also used during training to optimize the model and improve cartilage segmentation. Results demonstrate that PSU-Net can accurately and quickly identify cartilage compared to traditional procedures, aiding treatment planning in a very short amount of time. Future work will involve the use of augmentation methods, as well as the use of this architecture as the generator of a generative adversarial network to improve performance further. This work will help radiologists analyze the anatomy of the human knee in a short amount of time, which may prove helpful in standardizing and automating patello-femoral measurements in diverse patient populations.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_95-Knee_Cartilage_Segmentation_using_Improved_U_Net.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Anomaly Detection Method of Gateway Electrical Energy Metering Devices using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140793</link>
        <id>10.14569/IJACSA.2023.0140793</id>
        <doi>10.14569/IJACSA.2023.0140793</doi>
        <lastModDate>2023-07-31T09:34:38.6830000+00:00</lastModDate>
        
        <creator>Lihua Zhang</creator>
        
        <creator>Xu Chen</creator>
        
        <creator>Chao Zhang</creator>
        
        <creator>Lingxuan Zhang</creator>
        
        <creator>Binghang Zou</creator>
        
        <subject>Anomaly detection; gateway electric energy metering device; stacked autoencoder; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Accurate anomaly detection of gateway electrical energy metering devices is important for maintenance and operations in power systems. Traditionally, anomaly detection was performed manually through analysis of the collected energy information. However, the manual process is time-consuming and labor-intensive. Given this, this paper proposes a hybrid deep-learning model, which integrates a Stacked Autoencoder (SAE) and Long Short-Term Memory (LSTM), for intelligently detecting abnormal events in gateway electrical energy metering devices. The proposed model, named SAE-LSTM, first uses the SAE to extract deep latent features from three-phase voltage data collected from the gateway electrical energy metering device, and then adopts the LSTM to separate abnormal events based on the extracted deep latent features. The SAE-LSTM model can effectively highlight the temporal information of the electrical data, thereby enhancing the accuracy of anomaly detection. Simulation experiments verify the advantages of the SAE-LSTM model in anomaly detection under different signal-to-noise ratios. Experimental results on real datasets demonstrate that it is suitable for anomaly detection of gateway electrical energy metering devices in practical scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_93-Intelligent_Anomaly_Detection_Method_of_Gateway_Electrical_Energys.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Light Weight Circular Error Learning Algorithm (CELA) for Secure Data Communication Protocol in IoT-Cloud Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140792</link>
        <id>10.14569/IJACSA.2023.0140792</id>
        <doi>10.14569/IJACSA.2023.0140792</doi>
        <lastModDate>2023-07-31T09:34:38.6670000+00:00</lastModDate>
        
        <creator>Mangala N</creator>
        
        <creator>Eswara Reddy B</creator>
        
        <creator>Venugopal K R</creator>
        
        <subject>Quantum secure cryptography; homomorphic encryption; lattice-based cryptography; Learning With Errors (LWE); Ring Learning With Errors (RLWE); Circular Error Learning Algorithm (CELA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Data-driven smart applications utilize IoT, Cloud Computing, AI, and other digital technologies to create, curate, and operate on large amounts of data to provide intelligent solutions for day-to-day problems. Security of data in IoT-Cloud systems has become crucial, as attacks such as ransomware, data theft, and data corruption cause huge losses to application users. The basic impediment to providing strong security solutions for IoT systems is the resource limitations of IoT devices. Recently, there is the additional threat of quantum computing being able to break traditional cryptographic techniques. The objective of this research is to address this twofold challenge and design a lightweight quantum-secure communication protocol for the IoT-Cloud ecosystem. Ring Learning With Errors (RLWE) lattice-based cryptography has emerged as the most popular approach in the NIST PQC Standardization Program. A lightweight Circular Error Learning Algorithm (CELA) has been proposed by optimizing RLWE to make it suitable for the IoT-Cloud environment. CELA inherits the advantages of quantum security and homomorphic encryption from RLWE. It is observed that CELA is lightweight in terms of execution time and, with a slightly larger ciphertext size, provides higher security compared to RLWE. The paper also offers plausible solutions for future quantum-secure cryptographic protocols.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_92-Light_Weight_Circular_Error_Learning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Document Binarization of Engineering Drawings via Multi Noise CycleGAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140791</link>
        <id>10.14569/IJACSA.2023.0140791</id>
        <doi>10.14569/IJACSA.2023.0140791</doi>
        <lastModDate>2023-07-31T09:34:38.6670000+00:00</lastModDate>
        
        <creator>Luqman Hakim Rosli</creator>
        
        <creator>Yew Kwang Hooi</creator>
        
        <creator>Ong Kai Bin</creator>
        
        <subject>Image processing and computer vision; generative adversarial networks; document binarization; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The task of document binarization of degraded complex documents is tremendously challenging due to the various forms of noise often present in these documents. While current state-of-the-art deep learning approaches are capable of removing various noise types from documents with high accuracy, they employ a supervised learning scheme that requires matching clean and noisy document image pairs, which are difficult and costly to obtain for complex documents such as engineering drawings. In this paper, we propose a method for document binarization of engineering drawings using a ‘Multi Noise CycleGAN’. The method, which uses unsupervised learning with adversarial and cycle-consistency losses, is trained on unpaired noisy document images with various noise and image conditions. Experimental results for the removal of various noise types demonstrate that the method reliably produces a clean image for any given noisy image and, on certain noisy images, achieves significant improvements over existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_91-Unsupervised_Document_Binarization_of_Engineering_Drawings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Malware Classification Model Based on Image Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140790</link>
        <id>10.14569/IJACSA.2023.0140790</id>
        <doi>10.14569/IJACSA.2023.0140790</doi>
        <lastModDate>2023-07-31T09:34:38.6530000+00:00</lastModDate>
        
        <creator>Mohamed Abo Rizka</creator>
        
        <creator>Mohamed Hamed</creator>
        
        <creator>Hatem A. Khater</creator>
        
        <subject>Malware Classification; zero-day; Convolutional Neural Networks (CNN); grayscale image transformation; Bytehist</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Due to financial incentives, the number of malware infections is steadily rising. Accuracy and effectiveness are essential because malware detection systems serve as the first line of defense against harmful attacks. A zero-day vulnerability is a hole in the target operating system, device driver, application, or other tool in a computing environment that was previously unknown to anybody other than the attacker. Traditional malware detection systems usually use conventional machine learning algorithms, which call for time-consuming and error-prone feature gathering and extraction. Convolutional neural networks (CNNs) have been demonstrated to outperform conventional learning techniques in a number of applications, including image classification. This success prompts us to suggest a CNN-based malware categorization architecture. We evaluated our methodology using a larger dataset made up of 25 families within a corpus of 9342 malware samples. Finally, the model&#39;s measurements and performance are compared with other cutting-edge deep learning techniques. The overall testing accuracy of 98.31% attests to the excellent accuracy and robustness of the suggested procedure at a lower computational cost.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_90-An_Intelligent_Malware_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Face Recognition using Adaptive Multi-scale Transformer-based Resnet with Optimal Pattern Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140789</link>
        <id>10.14569/IJACSA.2023.0140789</id>
        <doi>10.14569/IJACSA.2023.0140789</doi>
        <lastModDate>2023-07-31T09:34:38.6370000+00:00</lastModDate>
        
        <creator>Santhosh Shivaprakash</creator>
        
        <creator>Sannangi Viswaradhya Rajashekararadhya</creator>
        
        <subject>Face recognition; facial images; optimal pattern extraction rate; local binary patterns; local vector patterns; improved rat swarm optimization algorithm; adaptive multi-scale transformer-based Resnet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The human face is the major characteristic for identifying a person, helping to differentiate one person from another. Face recognition methods are mainly useful for determining a person’s identity with the help of biometric techniques. They are used in many practical applications such as criminal identification, phone unlock systems, and home security systems. They do not need any key or card and require only facial images, providing high security across several applications. The interdependencies of encryption methods are highly reduced in deep learning-enabled face recognition models. Conventional methods do not satisfy present demand due to poor recognition accuracy. Therefore, an advanced deep learning-based face recognition framework is implemented to authenticate the identity of individuals with high accuracy using facial images. The required facial images are taken from standard databases. The collected images are preprocessed using median filtering. The preprocessed facial images are subjected to spatial feature extraction, where Local Binary Patterns (LBP) and Local Vector Patterns (LVP) are utilized to extract the relevant optimal patterns from the facial images. Here, optimal pattern extraction is done with the Improved Rat Swarm Optimization Algorithm (IRSO). Then, facial recognition is performed on the extracted optimal features using the implemented Adaptive Multi-scale Transformer-based Resnet (AMT-ResNet), where the parameters of the recognition network are optimized using the IRSO. The efficiency of the developed deep learning-based face recognition model is validated against different heuristic algorithms and baseline face recognition approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_89-Effective_Face_Recognition_using_Adaptive_Multi_scale_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Essence of Software Engineering Framework-based Model for an Agile Software Development Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140788</link>
        <id>10.14569/IJACSA.2023.0140788</id>
        <doi>10.14569/IJACSA.2023.0140788</doi>
        <lastModDate>2023-07-31T09:34:38.6200000+00:00</lastModDate>
        
        <creator>Teguh Raharjo</creator>
        
        <creator>Betty Purwandari</creator>
        
        <creator>Eko K. Budiardjo</creator>
        
        <creator>Rina Yuniarti</creator>
        
        <subject>Agile; common ground; the essence of software engineering framework; Design Science Research (DSR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Agile development&#39;s rapid growth is due to its ability to address complex problems and facilitate a smooth transition from traditional methods. However, no single Agile method can fit every organization, which leads to a lack of adoption guidelines. This triggers our investigation, which proposes an Agile development method model based on the Essence of software engineering framework, incorporating the common ground of popular methods such as Scrum, Kanban, Extreme Programming, SAFe, Less, Nexus, Spotify Agile, Scrum of Scrums, and Disciplined Agile. The Essence of software engineering framework provides an approach for organizations to develop software development methods based on common ground, or shared understanding, among methods. We enhance this approach for Agile methods, resulting in a model to support organizations in developing their own Agile methods and practices. Moreover, Design Science Research (DSR) was employed as the methodology to construct, demonstrate, and evaluate the artifact. We demonstrated the model in an Agile product development at a nationwide bank in Indonesia. This investigation enhances Agile methods in SWEBOK&#39;s Software Engineering Models and Methods knowledge area, benefiting academics and practitioners. Practitioners can use the model as a reference to implement their Agile projects.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_88-The_Essence_of_Software_Engineering_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovating Art with Augmented Reality: A New Dimension in Body Painting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140787</link>
        <id>10.14569/IJACSA.2023.0140787</id>
        <doi>10.14569/IJACSA.2023.0140787</doi>
        <lastModDate>2023-07-31T09:34:38.6200000+00:00</lastModDate>
        
        <creator>Dou Lei</creator>
        
        <creator>Wan Samiati Andriana W. Mohamad Daud</creator>
        
        <subject>Augmented reality; body paintings; artistic expression; technology acceptance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This study investigates the fusion of augmented reality (AR) and body painting as a novel concept for artistic expression. By combining the immersive capabilities of AR with the creative potential of body painting, this research explores individuals&#39; perceptions and attitudes towards this innovative artistic approach from an HCI perspective. Drawing upon the Technology Acceptance Model (TAM) and the Diffusion of Innovation Theory (DIT), the study examines the factors influencing individuals&#39; acceptance and intention to engage in AR-integrated body painting. Additionally, the research explores the mediating role of artistic expression in understanding the impact of these factors on the actual outcomes of this merged concept. A sample of 212 respondents participated in an online survey to accomplish the research objectives. The survey comprehensively measured participants&#39; perceptions of innovativeness, social system support, perceived usefulness, perceived ease of use, artistic expression, and behavioral intention towards AR-integrated body painting. Rigorous data analysis was conducted using Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine the intricate relationships between the variables. The findings underscore the significant impact of factors such as Innovativeness, social system support, perceived usefulness, and perceived ease of use on individuals&#39; acceptance and intention to engage in AR-integrated body painting from an HCI perspective. Moreover, the study reveals the mediating role of artistic expression in connecting these influential factors with the actual outcomes of this merged concept. These empirical insights substantially contribute to our understanding of the fundamental mechanisms driving the adoption and utilization of AR in artistic practices, particularly within the domain of body painting, from both an artistic and HCI standpoint.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_87-Innovating_Art_with_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GDM-PREP: A Rule-Based Technique to Enhance Early Detection of Gestational Diabetes Mellitus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140786</link>
        <id>10.14569/IJACSA.2023.0140786</id>
        <doi>10.14569/IJACSA.2023.0140786</doi>
        <lastModDate>2023-07-31T09:34:38.6070000+00:00</lastModDate>
        
        <creator>Ayunnie Azmi</creator>
        
        <creator>Nurulhuda Zainuddin</creator>
        
        <creator>Azmi Aminordin</creator>
        
        <creator>Masurah Mohamad</creator>
        
        <subject>Gestational diabetes mellitus (GDM); rule based; expert systems; risk factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Gestational diabetes mellitus (GDM), a condition occurring solely during pregnancy, poses risks to both expectant mothers and their infants, particularly among individuals with pre-existing risk factors. However, early diagnosis and effective management of GDM can help mitigate potential complications. As part of the Ministry of Health&#39;s efforts to enhance screening and management strategies for GDM in Malaysia, this study aims to utilize a rule-based technique, acting as an Expert System for Initial Screening of Gestational Diabetes Mellitus Detection. This application facilitates early diagnosis by assessing risk factors and symptoms to calculate the probability of GDM occurrence and classify it as low, medium, or high. Functionality and usability tests were conducted to ensure error-free performance and gather user feedback. The study&#39;s findings indicate that the self-check GDM system effectively utilizes the algorithm, while the mobile application demonstrates good usability, achieving an above-average System Usability Scale (SUS) score.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_86-GDM_PREP_A_Rule_Based_Techniques_to_Enhance_Early_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of VR Technology Based on Gesture Recognition in Animation-form Capture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140785</link>
        <id>10.14569/IJACSA.2023.0140785</id>
        <doi>10.14569/IJACSA.2023.0140785</doi>
        <lastModDate>2023-07-31T09:34:38.5900000+00:00</lastModDate>
        
        <creator>Jing Yang</creator>
        
        <creator>Hao Zhang</creator>
        
        <subject>Frame rate reduction method; model feature points; Wronsky function; mixed Gaussian model; weight learning rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>To accurately capture the posture of animation characters in virtual vision and optimize the user&#39;s experience when wearing virtual vision equipment, the hybrid Gaussian model has gained wide attention. However, various types of animation show an exponential growth trend, and the hybrid Gaussian model is prone to low-dimensional explosion when processing these single frames. Based on the mixed Gaussian model, this study conducts animation character gesture recognition experiments on the Disert data set to solve these problems. Meanwhile, the model is improved by a frame rate reduction method to generate a fusion algorithm. In this paper, the video is first converted to grayscale and filtered, and the model feature points of the image are marked. Then the weight learning rate is introduced and added to the set of pixels, and the peak signal-to-noise ratio of the Wronsky function is adjusted by changing the parameters. Then similar image sets are extracted, and opening and closing operations are applied to the structuring elements. Finally, the proposed algorithm is applied to the Disert data set. Meanwhile, the prediction accuracy of PSO is tested and compared with the fusion algorithm. A total of 400 experiments were conducted, and the fusion algorithm predicted correctly in 392 of them, yielding an accuracy of 98.0%. The accuracy of PSO is close to that of the fusion algorithm (88.2%). It is verified that the suggested model can identify the four common gestures of cartoon characters well, and users will get a good viewing experience.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_85-Application_of_VR_Technology_Based_on_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Open Information Extraction Methodology for a New Curated Biomedical Literature Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140783</link>
        <id>10.14569/IJACSA.2023.0140783</id>
        <doi>10.14569/IJACSA.2023.0140783</doi>
        <lastModDate>2023-07-31T09:34:38.5730000+00:00</lastModDate>
        
        <creator>Nesma Abdel Aziz Hassan</creator>
        
        <creator>Rania Ahmed Abdel Azeem Abul Seoud</creator>
        
        <creator>Dina Ahmed Salem</creator>
        
        <subject>Relation extraction; BERT; open information extraction; biomedical literature; ChemProt; DDI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The research articles contain a wealth of information about the interactions between biomedical entities. However, manual relation extraction from the literature by domain experts is time-consuming and costly. In addition, it is often prohibitively expensive and labor-intensive, especially in biomedicine, where domain knowledge is required. For this reason, computer strategies that can use unlabeled data to lessen the load of manual annotation are of great relevance in biomedical relation extraction. The present study solves relation extraction tasks in a completely unsupervised scenario. This article presents an unsupervised model for relation extraction between medical entities from PubMed abstracts, after filtering and preprocessing the abstracts. The verbs and relationship types are embedded in a vector space, and each verb is mapped to the relation type with the highest similarity score. The model achieves competitive performance compared to supervised systems on the evaluation using the ChemProt and DDI datasets, with F1-scores of 85.8 and 88.5, respectively. These improved results demonstrate the effectiveness of extracting relations without the need for manual annotation or human intervention.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_83-Open_Information_Extraction_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Investigation Model for the Hard Disk Drive Attacks using FTK Imager</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140784</link>
        <id>10.14569/IJACSA.2023.0140784</id>
        <doi>10.14569/IJACSA.2023.0140784</doi>
        <lastModDate>2023-07-31T09:34:38.5730000+00:00</lastModDate>
        
        <creator>Ahmad Alshammari</creator>
        
        <subject>HDD; cybercrimes; design science method; digital forensic tools; FTK imager</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>A computer hard disk drive (HDD) is a device that stores, organizes, and manages computer data. In general, it is used for system storage, in which the computer maintains its operating system and other programs. A hard disk drive can, however, be physically damaged as well as affected by software errors, data corruption, and viruses that are used by attackers to cause damage. This study aims to develop a detection and investigation model (DIM) for HDD to detect and investigate HDD attacks using the FTK Imager forensic tool. The design science method is adapted to develop and evaluate the DIM. The developed DIM consists of three main phases: detection, gathering, and analysis. In order to evaluate the capabilities of the developed DIM for HDD, a real scenario was used. According to the results, the DIM can detect and investigate the HDD easily using FTK Imager. Thus, organizations can use the developed DIM to detect, investigate, mitigate, or avoid HDD threats.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_84-Detection_and_Investigation_Model_for_the_Hard_Disk_Drive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Providing an Improved Resource Management Approach for Healthcare Big Data Processing in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140782</link>
        <id>10.14569/IJACSA.2023.0140782</id>
        <doi>10.14569/IJACSA.2023.0140782</doi>
        <lastModDate>2023-07-31T09:34:38.5600000+00:00</lastModDate>
        
        <creator>Fei Zhou</creator>
        
        <creator>Huaibao Ding</creator>
        
        <creator>Xiaomei Ding</creator>
        
        <subject>Healthcare; big data; cloud confederation; service quality; Cloud Resource Management (CRM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Due to the gathering of big data and the advancement of machine learning, the healthcare industry has recently experienced fast change. Acceleration of operations related to the analysis and retrieval of healthcare data is essential to facilitate surveillance. However, providing healthcare to the community is a complex task that is highly dependent on data processing. Also, processing health metadata can be very expensive for organizations. To meet the strict service quality requirements of the healthcare industry, large-scale healthcare data processing in the cloud confederation has emerged as a viable option. However, there are many challenges, including optimal resource management for metadata processing. Based on this, in the present study, a fuzzy solution for determining the optimal cloud using the resource forecasting technique is presented for health big data processing. During job processing, a fuzzy selection-based VM migration technique was used to move a virtual machine (VM) from a high-load server to a low-load server. The proposed architecture is divided into local and global levels: requests are first evaluated at the local component, and only if the local component cannot meet the requirements is the request forwarded to the global component. The hierarchical structure of the proposed method requires the generation of delivered requests before estimating the available resources. The proposed solution is compared with the PSO and ACO algorithms according to different criteria. The simulation results show the effectiveness and efficiency of the model compared to alternative methods.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_82-Providing_an_Improved_Resource_Management_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Nodule Segmentation and Classification using U-Net and Efficient-Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140781</link>
        <id>10.14569/IJACSA.2023.0140781</id>
        <doi>10.14569/IJACSA.2023.0140781</doi>
        <lastModDate>2023-07-31T09:34:38.5430000+00:00</lastModDate>
        
        <creator>Suriyavarman S</creator>
        
        <creator>Arockia Xavier Annie R</creator>
        
        <subject>Cancer; CT; U-Net; efficient-net; feature; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The ability to detect lung cancer has led to better health outcomes. Deep learning techniques are widely used in the medical field to detect lung tumors at an early stage. Deep learning models such as U-Net, Efficient-Net, ResNet, VGG-16, etc. have been incorporated in various studies to detect lung cancer accurately. To enhance the detection performance, this work proposes an algorithm that combines U-Net and Efficient-Net neural networks for lung nodule segmentation and classification. A feature-extraction-based semi-supervised method is used to take advantage of the large number of CT scan images with no pathological labels. Semi-supervised learning is achieved using a feature pyramid network (FPN) with a ResNet-50 model for feature extraction and a neural network classifier for predicting unlabelled nodules. The main innovation of U-Net is the skip-connections, which give the decoder access to the features that the encoder learned at various scales and enable accurate localization of lung nodules. Efficient-Net uses depth, width, and resolution scaling, combined with a compound coefficient that uniformly scales all network dimensions, resulting in an efficient neural network for image classification. This work has been evaluated on the publicly available LIDC-IDRI dataset and outperforms most existing methods. The proposed algorithm aims to address issues such as a high false-positive rate, small nodules, and a wide range of non-uniform longitudinal data. Experimental results show that this model achieves a higher accuracy of 91.67% when compared with previous works.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_81-Lung_Nodule_Segmentation_and_Classification_using_U_Net.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Neural Network for Binary and Multiclassification of Network Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140780</link>
        <id>10.14569/IJACSA.2023.0140780</id>
        <doi>10.14569/IJACSA.2023.0140780</doi>
        <lastModDate>2023-07-31T09:34:38.5270000+00:00</lastModDate>
        
        <creator>Bauyrzhan Omarov</creator>
        
        <creator>Alma Kostangeldinova</creator>
        
        <creator>Lyailya Tukenova</creator>
        
        <creator>Gulsara Mambetaliyeva</creator>
        
        <creator>Almira Madiyarova</creator>
        
        <creator>Beibut Amirgaliyev</creator>
        
        <creator>Bakhytzhan Kulambayev</creator>
        
        <subject>Neural networks; artificial intelligence; detection; classification; attacks; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Diving into the complex realm of network security, the research paper investigates the potential of leveraging artificial neural networks (ANNs) to identify and classify network intrusions. Balancing two distinct paradigms – binary and multiclassification – the study breaks fresh ground in this intricate field. Binary classification takes the stage initially, offering a bifurcated outlook: network traffic is either under attack, or it&#39;s not. This lays the foundation for an intuitive understanding of the network landscape. Then, the spotlight shifts to the finer-grained multiclassification, navigating through a realm that holds five unique classes: Normal traffic, DoS (Denial of Service), Probe, Privilege, and Access attacks. Each class serves a specific function, ranging from harmless communication (Normal) to various degrees and kinds of malicious intrusion. By integrating these two approaches, the research illuminates a path towards a more comprehensive understanding of network attack scenarios. It highlights the role of ANNs in enhancing the precision of network intrusion detection systems, contributing to the broader field of cybersecurity. The findings underline the potency of ANNs, offering fresh insights into their application and raising questions that promise to push the frontiers of cybersecurity research even further.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_80-Artificial_Neural_Network_for_Binary_and_Multiclassification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel 2D Deep Convolutional Neural Network for Multimodal Document Categorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140779</link>
        <id>10.14569/IJACSA.2023.0140779</id>
        <doi>10.14569/IJACSA.2023.0140779</doi>
        <lastModDate>2023-07-31T09:34:38.5100000+00:00</lastModDate>
        
        <creator>Rustam Abkrakhmanov</creator>
        
        <creator>Aruzhan Elubaeva</creator>
        
        <creator>Tursinbay Turymbetov</creator>
        
        <creator>Venera Nakhipova</creator>
        
        <creator>Shynar Turmaganbetova</creator>
        
        <creator>Zhanseri Ikram</creator>
        
        <subject>Scanned documents; classification; document categorization; artificial intelligence; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Digitized documents are increasingly becoming prevalent in various industries. The ability to accurately classify these documents is critical for efficient and effective management. However, digitized documents often come in different formats, making document classification a challenging task. In this paper, we propose a multimodal deep learning approach for digitized document classification. The proposed approach combines both text and image modalities to improve classification accuracy. The model architecture consists of a convolutional neural network (CNN) for image processing and a recurrent neural network (RNN) for text processing. The output features from the two modalities are then merged using a fusion layer to generate the final classification result. The proposed approach is evaluated on a dataset of digitized documents from various industries, including finance, healthcare, and legal fields. The experimental results demonstrate that the multimodal approach outperforms single-modality approaches, achieving high accuracy for document classification. The proposed model has significant potential for applications in various industries that rely heavily on document management systems. For example, in the finance industry, the proposed model can be used to classify loan applications or financial statements. In the healthcare industry, the model can classify patient records, medical images, and other medical documents. In the legal industry, the model can classify legal documents, contracts, and court filings. Overall, the proposed multimodal deep learning approach can significantly improve document classification accuracy, thus enhancing the efficiency and effectiveness of document management systems.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_79-A_Novel_2D_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Criminal Law Risk Management and Prediction Method based on Echo State Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140778</link>
        <id>10.14569/IJACSA.2023.0140778</id>
        <doi>10.14569/IJACSA.2023.0140778</doi>
        <lastModDate>2023-07-31T09:34:38.5100000+00:00</lastModDate>
        
        <creator>Zhe Li</creator>
        
        <subject>Echo state network; model; criminal law; risk prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Criminal law plays an important role in maintaining social security and achieving effective social control. However, criminal law has hidden risks that cannot be ignored at the legislative, judicial, and theoretical levels. This paper starts from all aspects of criminal law, analyzes criminal law risk and its management measures, and predicts and analyzes criminal law risk through an echo state network model. The prediction results of the echo state network model fit the actual situation well, and its validation can serve as a reference for the study of criminal law risk prediction and management systems. Legislative risk and theoretical risk belong to social factors and are also fundamental risks of criminal law. Judicial risk is mainly manifested at the level of judicial power. Criminal law is closely related to the political environment, social system, economic system, etc. In criminal law legislation, attention should be paid to the balance between criminal law rules and realistic social functions, and social risks should be properly controlled, so as to avoid the criminal law risks brought by the establishment of risky criminal law and provide the necessary guarantee for the national security system.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_78-Criminal_Law_Risk_Management_and_Prediction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Code-mixed Social Media Data on Philippine UAQTE using Fine-tuned mBERT Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140777</link>
        <id>10.14569/IJACSA.2023.0140777</id>
        <doi>10.14569/IJACSA.2023.0140777</doi>
        <lastModDate>2023-07-31T09:34:38.4970000+00:00</lastModDate>
        
        <creator>Lany L. Maceda</creator>
        
        <creator>Arlene A. Satuito</creator>
        
        <creator>Mideth B. Abisado</creator>
        
        <subject>Sentiment analysis; UAQTE; code-mixing; policy-making; multilingual BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The Universal Access to Quality Tertiary Education (UAQTE) marks a significant policy change in the Philippines. While the program’s objective is to offer free higher education and tertiary education subsidies to eligible Filipino students, its viability and effectiveness have been subject to scrutiny and continuous evaluation. This study explores the sentiments of Filipinos towards UAQTE. Leveraging a fine-tuned multilingual Bidirectional Encoder Representations from Transformers (mBERT) model, we conducted sentiment analysis on code-mixed data. With minimal preprocessing, our model achieved an accuracy of 80.21% and an F1 score of 81.14%, surpassing previous related studies and confirming its effectiveness in handling code-mixed data. The results reveal that the majority of social media users view UAQTE positively or beneficially. However, negative sentiments highlight concerns related to subsidy delays, alleged fund misuse, and application challenges. Additionally, neutral sentiments center around subsidy-related announcements. These findings provide valuable insights for its key stakeholders involved in the implementation, enhancement, and evaluation of UAQTE.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_77-Sentiment_Analysis_of_Code_mixed_Social_Media_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Architecture Based on Decentralised PoW Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140776</link>
        <id>10.14569/IJACSA.2023.0140776</id>
        <doi>10.14569/IJACSA.2023.0140776</doi>
        <lastModDate>2023-07-31T09:34:38.4800000+00:00</lastModDate>
        
        <creator>Cinthia P. Pascual Caceres</creator>
        
        <creator>Jose Vicente Berna Martinez</creator>
        
        <creator>Francisco Maci&#225; P&#233;rez</creator>
        
        <creator>Iren Lorenzo Fonseca</creator>
        
        <creator>Maria E. Almaral Martinez</creator>
        
        <subject>Blockchain technology; proof of work; consensus algorithm; proof of stake; Dissociated-PoW; security; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Blockchain has gained increasing popularity across various industries due to its decentralized, stable, and secure nature. Consensus algorithms play a crucial role in maintaining the security and efficiency of Blockchain systems and selecting the right algorithm can lead to significant performance improvements. This article aims to provide a comparative review of the most used Blockchain consensus algorithms, highlighting their strengths and weaknesses. Additionally, we propose a dissociated architecture for an efficient Blockchain system that doesn&#39;t compromise on security. A comparison is made between this architecture and the reviewed algorithms, considering aspects such as algorithm performance, energy consumption, mining, decentralization level, and vulnerability to security threats. The research findings demonstrate that the proposed architecture can support complex algorithms with high security while addressing issues related to efficiency, processing performance, and energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_76-Blockchain_Architecture_Based_on_Decentralised_PoW_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Characterization of Autism Spectrum Disorder using Combined Functional and Structural MRI Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140775</link>
        <id>10.14569/IJACSA.2023.0140775</id>
        <doi>10.14569/IJACSA.2023.0140775</doi>
        <lastModDate>2023-07-31T09:34:38.4630000+00:00</lastModDate>
        
        <creator>Nour El Houda Mezrioui</creator>
        
        <creator>Kamel Aloui</creator>
        
        <creator>Amine Nait-Ali</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <subject>Autism spectrum disorder (ASD); Magnetic Resonance Imaging (MRI); functional Magnetic Resonance Imaging (fMRI); Artificial Neural Network (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Autism Spectrum Disorders (ASD) are among the most critical health concerns of our time. These disorders typically present challenges in social interaction and communication, and exhibit repetitive behaviors. To diagnose and customize medical treatments for ASD effectively, the development of robust neuroimaging biomarkers is indispensable. Although extensive studies have recently delved into this area, only a handful have explored the differences between ASD subjects and controls. This study aspires to shed light on this relationship by analyzing both structural and functional brain data associated with ASD. We aim to provide an extensive characterization of ASD by combining techniques of structural and functional analysis. The framework we propose is based on analyzing the differences in structural and functional aspects between ASD and developmental control (DC) subjects. The study leverages a substantial dataset of 1114 T1-weighted structural and functional Magnetic Resonance Imaging scans, comprising 521 individuals with ASD and 593 controls, ranging in age from 5 to 64 years. These subjects are divided into three broad age categories. Utilizing automated labeling, we compute the features from subcortical and cortical regions. Statistical analyses help identify disparities between ASD and DC subjects. Principal Component Analysis (PCA) is employed to select the most discriminative features, which are subsequently used for classifying the two groups via an Artificial Neural Network (ANN) analysis. Our preliminary findings reveal a significant difference in the distribution of all tested features and subcortical regions between ASD subjects and DC subjects. Through our work, we contribute towards an enhanced understanding of ASD, potentially paving the way for future research and therapeutic interventions.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_75-Automated_Characterization_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Artificial Bee Colony Optimization Algorithm for Test Suite Minimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140774</link>
        <id>10.14569/IJACSA.2023.0140774</id>
        <doi>10.14569/IJACSA.2023.0140774</doi>
        <lastModDate>2023-07-31T09:34:38.4500000+00:00</lastModDate>
        
        <creator>Neeru Ahuja</creator>
        
        <creator>Pradeep Kumar Bhatia</creator>
        
        <subject>Test suite; test suite minimization; TLBO; ABC; nature inspired algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Software testing is an essential process for maintaining the quality of software. Due to changes in customer demands or industry, software needs to be updated regularly. As a result, software becomes more complex and test suite size increases exponentially, so testing incurs a large overhead in terms of time, resources, and costs. Additionally, handling and operating huge test suites can be cumbersome and inefficient, often resulting in duplication of effort and redundant test coverage. A test suite minimization strategy can help resolve this issue. Test suite reduction is an efficient method for increasing the overall efficacy of a test suite and removing obsolete test cases. The paper demonstrates an improved artificial bee colony optimization algorithm for test suite minimization. First, the exploitation behavior of the algorithm is improved by amalgamating the teaching-learning-based optimization technique. Second, the learner performance factor is used to explore more solutions. The aim of the algorithm is to remove redundant test cases while still ensuring the effectiveness of the fault detection capability. The algorithm is compared against three established methods (GA, ABC, and TLBO) using a benchmark dataset. The experimental results show that the proposed algorithm achieves a reduction rate of more than 50% with negligible loss in fault detection capability. The results obtained through empirical analysis show that the suggested algorithm surpasses the other algorithms in performance.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_74-An_Improved_Artificial_Bee_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Cardiac Arrest by the Hybrid Approach of Soft Computing and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140773</link>
        <id>10.14569/IJACSA.2023.0140773</id>
        <doi>10.14569/IJACSA.2023.0140773</doi>
        <lastModDate>2023-07-31T09:34:38.4500000+00:00</lastModDate>
        
        <creator>Subrata Kumar Nayak</creator>
        
        <creator>Sateesh Kumar Pradhan</creator>
        
        <creator>Sujogya Mishra</creator>
        
        <creator>Sipali Pradhan</creator>
        
        <creator>P. K. Pattnaik</creator>
        
        <subject>Ventricular fibrillation (VF); heart rate variability (HRV); Rough Set Theory (RST); support vector machine (SVM); regression analysis; Adaboost method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cardiac-related diseases are a major reason for the increased mortality rate. Early prediction of cardiac diseases such as ventricular fibrillation (VF) is always challenging for doctors and data analysts, yet early prediction of these diseases can save millions of lives: if the symptoms are identified early, the chance of survival increases significantly. For the prediction of VF, several researchers have used heart rate variability (HRV) analysis, exploring various alternatives that combine features taken from several domains. Techniques such as spectral analysis, Rough Set Theory (RST), Support Vector Machine (SVM), and AdaBoost do not require any pre-processing. In this work, medical data sets are randomly collected from various parts of Odisha, and regression together with Rough Set techniques is applied to reduce the dimension of the data set. Applying RST to the data set is useful not only for dimension reduction but also for producing a set of various alternatives. The last section of this work presents a comparative analysis between AdaBoost combined with RST and Empirical Mode Decomposition (EMD).</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_73-Prediction_of_Cardiac_Arrest_by_the_Hybrid_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepCyberDetect: Hybrid AI for Counterfeit Currency Detection with GAN-CNN-RNN using African Buffalo Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140772</link>
        <id>10.14569/IJACSA.2023.0140772</id>
        <doi>10.14569/IJACSA.2023.0140772</doi>
        <lastModDate>2023-07-31T09:34:38.4330000+00:00</lastModDate>
        
        <creator>Franciskus Antonius</creator>
        
        <creator>Jarubula Ramu</creator>
        
        <creator>P. Sasikala</creator>
        
        <creator>J. C. Sekhar</creator>
        
        <creator>S. Suma Christal Mary</creator>
        
        <subject>Fake currency; convolutional neural network; generative adversarial networks; recurrent neural network; African Buffalo Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Modern technology has contributed significantly to the distribution and valuation of counterfeit money. This paper recommends a deep learning-based methodology for currency recognition that extracts attributes and identifies monetary values, treating fake currency detection as a binary classification task in machine learning. Given sufficient information about genuine and fake notes, a model can be trained to distinguish between real and fake banknotes. The vast majority of older systems relied on hardware and image processing techniques; such strategies make identifying fake currency more challenging and inefficient. To address this issue, the proposed system deploys a deep convolutional neural network that detects counterfeit notes by analyzing currency images. The transfer-learned convolutional neural network is trained on data sets representing 2000 different currency notes in order to learn the unique feature map of the currencies; once the feature map is learned, the network is capable of real-time counterfeit detection. Deep learning models perform remarkably well in image classification tasks, and the deep CNN model developed in the proposed approach detects fake notes without manually extracting image properties. The model learns from the data set produced during training, allowing fake currency to be identified. Since deep learning techniques have proven more effective in multiple instances, deep learning is used to boost currency recognition accuracy. The techniques employed include the African Buffalo Optimization (ABO) approach, recurrent neural networks (RNN), convolutional neural networks, generative adversarial networks (GAN) for identifying bogus notes, and classical neural networks.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_72-DeepCyberDetect_Hybrid_AI_for_Counterfeit_Currency_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personating GA Neural Fuzzy Hybrid System for Computing HD Probability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140771</link>
        <id>10.14569/IJACSA.2023.0140771</id>
        <doi>10.14569/IJACSA.2023.0140771</doi>
        <lastModDate>2023-07-31T09:34:38.4170000+00:00</lastModDate>
        
        <creator>Rahul Kumar Jha</creator>
        
        <creator>Santosh Kumar Henge</creator>
        
        <creator>Sanjeev Kumar Mandal</creator>
        
        <creator>C Menaka</creator>
        
        <creator>Deepak Mehta</creator>
        
        <creator>Aditya Upadhyay</creator>
        
        <creator>Ashok Kumar Saini</creator>
        
        <creator>Neha Mishra</creator>
        
        <subject>Dickey-Fuller test case (DF-TC); HA prediction (HAP); heart rate variability (HRV); artificial based neural network (AbNN); Fuzzy Inference System (FuzIS); genetic-based algorithm (GbA); multi-objective evolutionary Fuzzy classifier (MOEFC); heart attack (HA); fuzzification-mode (FuzM); de-fuzzification-mode (De-FuzM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cardiovascular disease (CD) is a widespread, dangerous illness with an excessive rate of death that necessitates prompt attention for care and cure. Numerous diagnostic methods, such as angiography, are available to diagnose heart disease (HD). ML is a leading option for scientists seeking prediction-based solutions for heart disease, and several machine learning algorithms have been found to deliver key results in community assistance. Researchers have numerous conventional approaches at their disposal, and various supportive algorithmic sequences formulated through the artificial neural network (NN) family, such as adaptive, convolutional, and de-convolutional NNs, together with extended hybrid combinations, yield suitable outcomes. This research integrated the design and computational analysis of a unified model through a genetic algorithm-based Neural Fuzzy Hybrid System formulated for CD prediction. It includes a dual hybrid model to forecast CD and measure the degree of a healthy heart, as well as more precise heart attack complications. Stage 1 of the study integrates the two stages and plans HD prediction using patient data. The input was processed in stages: first, the data was passed through pre-processing; next, the mRMR algorithm was used to select features; finally, the model was trained using a variety of ML algorithms, including SVM, KNN, NB, DT, RF, LR, and NN. The results were compared, and based on those findings, the model was tuned to produce the best results. In Stage 2, HA possibilities and occurrences are determined by FuzIS intelligence using data from the first stage, which includes more than 13000 pre-generated fuzzy implication rules. These rules cover both normal-level and dangerous-level cases, and the medical parameters are integrated and tuned to produce membership functions that are then fed to the model. The study concludes with a comparison of the unified system, which consists of genetic algorithms, neural networks, and fuzzy inference systems. In the experiment, a Gaussian MF sketched the continuous series of data, enabling the inference system to achieve a good accuracy of 94% in calculating the problem probability.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_71-Personating_GA_Neural_Fuzzy_Hybrid_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>U-Net-based Pancreas Tumor Segmentation from Abdominal CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140770</link>
        <id>10.14569/IJACSA.2023.0140770</id>
        <doi>10.14569/IJACSA.2023.0140770</doi>
        <lastModDate>2023-07-31T09:34:38.4030000+00:00</lastModDate>
        
        <creator>H S Saraswathi</creator>
        
        <creator>Mohamed Rafi</creator>
        
        <subject>U-net; deep learning; segmentation; computed tomography images; hyper parameters; PDAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Pancreatic cancer is undoubtedly one of the deadliest types of cancer. Computed tomography (CT) is widely used to diagnose and stage pancreatic tumors; however, manual segmentation of volumetric CT scans is a time-consuming and subjective process. Although several deep learning models have been proposed, the U-Net model has been shown to be highly effective for semantic segmentation. In this study, we propose a U-Net-based method for pancreatic tumor segmentation from abdominal CT images and demonstrate its simplicity and effectiveness. Using the U-Net architecture, the pancreas is segmented from CT slices in the first stage, while tumors are segmented from masked CT images in the second stage. On the validation set of the NIH dataset, the proposed method&#39;s Dice scores show outstanding pancreas and tumor segmentation performance, demonstrating its potential to identify pancreatic cancer efficiently and accurately.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_70-U_Net_based_Pancreas_Tumor_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Ensemble of Hybrid RNN-GAN Models for Accurate and Automated Lung Tumour Detection from CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140769</link>
        <id>10.14569/IJACSA.2023.0140769</id>
        <doi>10.14569/IJACSA.2023.0140769</doi>
        <lastModDate>2023-07-31T09:34:38.3870000+00:00</lastModDate>
        
        <creator>Atul Tiwari</creator>
        
        <creator>Shaikh Abdul Hannan</creator>
        
        <creator>Rajasekhar Pinnamaneni</creator>
        
        <creator>Abdul Rahman Mohammed Al-Ansari</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>S. Prema</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Lung tumour; recurrent neural network; generative adversarial network; CT images; hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Identification of lung tumours is critical for the early diagnosis and treatment of lung cancer, the primary cause of cancer-related deaths globally. This work suggests a new method for detecting lung tumours that combines a Gaussian filter with a hybrid Recurrent Neural Network-Generative Adversarial Network (RNN-GAN). The RNN-GAN architecture exploits the sequential data present in lung tumour images: the RNN component looks for temporal relationships and patterns in the sequential input, while the GAN component improves RNN training for accurate classification by creating synthetic tumour specimens that resemble actual tumour images. In addition, the proposed approach pre-processes lung tumour images using a Gaussian filter to improve their quality; by reducing noise and smoothing the images, the filter improves feature extraction and the visibility of tumour borders. Experimental findings on a dataset of lung tumours show that the suggested strategy is successful. In comparison to conventional techniques, the hybrid RNN-GAN delivers higher accuracy in lung tumour identification due to the incorporation of the Gaussian filter. While the GAN component creates realistic tumour samples for improved training, the RNN component efficiently captures the sequential patterns of tumour images. The Gaussian pre-processing greatly enhances image quality and facilitates precise feature extraction. The proposed hybrid RNN-GAN with the Gaussian filter shows promising potential for accurate and early detection of lung tumours. Integrating deep learning techniques with image pre-processing methods can advance lung cancer diagnosis and treatment, ultimately improving patient outcomes and survival rates. Further research and validation are necessary to explore the full potential of this approach and its applicability in clinical settings.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_69-Optimized_Ensemble_of_Hybrid_RNN_GAN_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Port Operations: Synchronization, Collision Avoidance, and Efficient Loading and Unloading Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140768</link>
        <id>10.14569/IJACSA.2023.0140768</id>
        <doi>10.14569/IJACSA.2023.0140768</doi>
        <lastModDate>2023-07-31T09:34:38.3700000+00:00</lastModDate>
        
        <creator>Sakhi Fatima Ezzahra</creator>
        
        <creator>Bellat Abdelouahad</creator>
        
        <creator>Mansouri Khalifa</creator>
        
        <creator>Qbadou Mohammed</creator>
        
        <subject>Optimizing; synchronization; collision; efficient; time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This study focuses on optimizing the loading and unloading processes in a port environment by employing synchronization techniques and collision avoidance mechanisms. The objective function of this research aims to minimize the time required for these tasks while ensuring efficient coordination and safety. The obtained results are compared with previous studies, demonstrating significant improvements in overall performance. The synchronized handling systems, including gantries and cranes, along with speed control measures, facilitate streamlined operations, reduced delays, and enhanced productivity. By integrating these strategies, the port achieves better results in terms of task completion time compared to previous methodologies, thereby validating the effectiveness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_68-Optimizing_Port_Operations_Synchronization_Collision_Avoidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Stacking-based Ensemble Framework for Automatic Depression Detection using Audio Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140767</link>
        <id>10.14569/IJACSA.2023.0140767</id>
        <doi>10.14569/IJACSA.2023.0140767</doi>
        <lastModDate>2023-07-31T09:34:38.3700000+00:00</lastModDate>
        
        <creator>Suresh Mamidisetti</creator>
        
        <creator>A. Mallikarjuna Reddy</creator>
        
        <subject>Health care; depression detection; acoustic features; speech elicitation; feature selection; openSMILE; ensemble methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Mental illnesses are severe obstacles to global welfare. Depression is a psychological disorder that causes problems for the individual as well as his/her dependents. Machine learning methods using audio signals can differentiate patterns between healthy and depressive subjects, and can assist health care professionals in detecting depression. The literature on depression detection based on audio signals has used only single classifiers and fails to take advantage of diverse classifiers. The current work combines the predictive capabilities of diverse classifiers using a stacking method to detect depression. For acoustic analysis of speech, audio clips are recorded while a predefined paragraph is read aloud. A dataset is created containing extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) features extracted using the openSMILE toolkit. The normalized feature vectors are given as input to multiple classifiers to produce intermediate predictions, which are combined by a meta classifier to form the final outcome. K-Nearest Neighbours (KNN), Na&#239;ve Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT) classifiers are applied to the normalized feature vector for intermediate predictions, and Logistic Regression (LR) is used as the meta classifier to predict the final outcome. Our proposed method of using diverse classifiers achieved a significant accuracy of 79.1%, precision of 83.3%, recall of 76.9%, and F1-score of 80% on our dataset. Results obtained with the stacking method on our dataset are discussed and then compared with various baseline methods, as well as on a publicly available benchmarking dataset. Our results show that combining the predictive capability of multiple diverse classifiers helps in depression detection.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_67-A_Stacking_based_Ensemble_Framework_for_Automatic_Depression_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Transformer-CNN Hybrid Model for Cognitive Behavioral Therapy in Psychological Assessment and Intervention for Enhanced Diagnostic Accuracy and Treatment Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140766</link>
        <id>10.14569/IJACSA.2023.0140766</id>
        <doi>10.14569/IJACSA.2023.0140766</doi>
        <lastModDate>2023-07-31T09:34:38.3570000+00:00</lastModDate>
        
        <creator>Veera Ankalu Vuyyuru</creator>
        
        <creator>G Vamsi Krishna</creator>
        
        <creator>S. Suma Christal Mary</creator>
        
        <creator>S. Kayalvili</creator>
        
        <creator>Abraheem Mohammed Sulayman Alsubayhay</creator>
        
        <subject>CBT; psychological assessment; intervention; diagnostic accuracy; treatment efficiency; Transformer; CNN; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cognitive Behavioral Therapy (CBT) has proven to be a highly successful method for psychological assessment and intervention. However, by utilizing advances in artificial intelligence and natural language processing techniques, the diagnostic precision and therapeutic efficacy of CBT can be significantly improved. In this work, we propose a novel Transformer-CNN hybrid model for CBT in psychological evaluation and intervention. The hybrid model combines the strengths of the Transformer and Convolutional Neural Network (CNN) architectures: the Transformer model accurately captures the contextual dependencies and semantic linkages in the text data, while the CNN model effectively extracts local and global features from the input sequences. Merging these two architectures is intended to enhance the model&#39;s comprehension and interpretation of the complex linguistic patterns involved in psychological evaluation and intervention. We conduct comprehensive experiments on a sizable collection of clinical text data, which includes patient narratives, treatment transcripts, and diagnostic reports. The proposed Trans-CNN hybrid model outperformed all other methods with an impressive accuracy of 97%. In diagnosing psychiatric problems, the model shows improved diagnostic accuracy and offers more effective therapy advice. Our hybrid model&#39;s automatic real-time monitoring and feedback capabilities also enable prompt intervention and customized care during therapy sessions. By giving clinicians a formidable tool for precise evaluation and efficient intervention, the suggested approach has the potential to revolutionize the field of CBT and enhance patient outcomes for mental health. This work thus provides a transformational strategy that combines the advantages of the Transformer and CNN architectures to improve the diagnostic precision and therapeutic efficacy of CBT in psychological evaluation and intervention.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_66-A_Transformer_CNN_Hybrid_Model_for_Cognitive_Behavioral_Therapy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Whale Optimization-Driven Generative Convolutional Neural Network Framework for Anaemia Detection from Blood Smear Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140765</link>
        <id>10.14569/IJACSA.2023.0140765</id>
        <doi>10.14569/IJACSA.2023.0140765</doi>
        <lastModDate>2023-07-31T09:34:38.3400000+00:00</lastModDate>
        
        <creator>S. Yazhinian</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>J. C. Sekhar</creator>
        
        <creator>Suganthi Duraisamy</creator>
        
        <creator>E. Thenmozhi</creator>
        
        <subject>Generative adversarial network; blood smear images; convolutional neural network; anaemia; Whale Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Anaemia is a frequent blood disorder marked by a reduction in the quantity of haemoglobin or the number of red blood cells in the blood. Quick and accurate anaemia detection is crucial for prompt action and effective treatment. In this research, we provide a new framework called Whale Optimization-Driven Generative Convolutional Neural Network (WO-GCNN) for detecting anaemia from blood smear images. To increase detection accuracy, the WO-GCNN system combines the strength of generative models and convolutional neural networks (CNNs). Generative models, such as Generative Adversarial Networks (GANs), are used to create artificial blood smear images and learn the underlying data distribution. The functionality of the WO-GCNN system is improved by applying the Whale Optimisation Algorithm (WOA), which is based on the hunting behaviour of humpback whales: to find the optimal set of CNN weights, the WOA effectively balances exploitation and exploration. By incorporating the WOA into the training process, the WO-GCNN framework accelerates convergence and increases overall anaemia detection performance. We assess the suggested WO-GCNN system on a sizable dataset of blood smear images obtained from clinical settings. Combining generative models and CNNs with WOA optimisation yields a highly accurate and effective approach for the early identification of anaemia. By enabling early identification, the proposed WO-GCNN framework can substantially impact the field of medical image analysis and enhance patient care, serving as a useful tool that supports medical personnel in making decisions and providing urgent interventions to anaemia patients.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_65-Whale_Optimization_Driven_Generative_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Facemask Detection using Deep learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140763</link>
        <id>10.14569/IJACSA.2023.0140763</id>
        <doi>10.14569/IJACSA.2023.0140763</doi>
        <lastModDate>2023-07-31T09:34:38.3230000+00:00</lastModDate>
        
        <creator>Abdullahi Ahmed Abdirahman</creator>
        
        <creator>Abdirahman Osman Hashi</creator>
        
        <creator>Ubaid Mohamed Dahir</creator>
        
        <creator>Mohamed Abdirahman Elmi</creator>
        
        <creator>Octavio Ernest Romo Rodriguez</creator>
        
        <subject>Object detection; deep learning; detection; face detection; mask detection; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Face detection and mask detection are critical tasks in the context of public safety and compliance with mask-wearing protocols, making it important to identify individuals who violate these rules and regulations. Therefore, this paper implements four deep learning models for face detection and masked-face detection: MobileNet, ResNet50, InceptionV3, and VGG19. The models are evaluated using precision and recall metrics for both tasks. The results indicate that the proposed model based on ResNet50 achieves superior performance in face detection, demonstrating high precision (99.4%) and recall (98.6%) values, and also shows commendable accuracy in mask detection. MobileNet and InceptionV3 provide satisfactory results, while the proposed model based on VGG19 excels in face detection but shows slightly lower performance in mask detection. The findings contribute to the development of effective face mask detection systems, with implications for public safety.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_63-Enhancing_Facemask_Detection_using_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sequential Model-based Optimization Approach Deep Learning Model for Classification of Multi-class Traffic Sign Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140764</link>
        <id>10.14569/IJACSA.2023.0140764</id>
        <doi>10.14569/IJACSA.2023.0140764</doi>
        <lastModDate>2023-07-31T09:34:38.3230000+00:00</lastModDate>
        
        <creator>Si Thu Aung</creator>
        
        <creator>Jartuwat Rajruangrabin</creator>
        
        <creator>Ekkarut Viyanit</creator>
        
        <subject>Autonomous driving; convolutional neural network; deep learning; traffic sign; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Autonomous vehicles are currently gaining popularity in the future mobility ecosystem, yet the development of autonomous driving systems remains challenging in the research areas of image and signal processing. Extensive research has been conducted on various traffic sign datasets and has achieved respectable results, but a robust network structure still needs to be developed to improve traffic sign recognition (TSR) systems. This work presents an alternative approach to designing deep learning models for TSR systems; the proposed model was also tested on different datasets to obtain a generalized model. The model is based on a convolutional neural network (CNN), with Bayesian Optimization tuning the model’s hyperparameters to find the best hyperparameter grid. The optimized CNN model was then used to classify traffic sign images from three different datasets, including the German traffic sign recognition benchmark (GTSRB), the Belgium traffic sign classification (BTSC) dataset, and the Chinese traffic sign database, achieving average accuracy scores of 99.57%, 99.15%, and 99.35%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_64-Sequential_Model_based_Optimization_Approach_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Software Quality Characteristic Recommendation Model to Handle the Dynamic Requirements of Software Projects that Improves Service Quality and Cost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140762</link>
        <id>10.14569/IJACSA.2023.0140762</id>
        <doi>10.14569/IJACSA.2023.0140762</doi>
        <lastModDate>2023-07-31T09:34:38.3100000+00:00</lastModDate>
        
        <creator>Kamal Borana</creator>
        
        <creator>Meena Sharma</creator>
        
        <creator>Deepak Abhyankar</creator>
        
        <subject>Recommendation system; software quality model; ML (Machine Learning); quality matrix; software quality characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Software is created and constructed to address particular issues in its applied field. In this context, it is necessary to know which characteristics are crucial for assessing software quality. However, not all software requires checking every quality-of-service parameter, which results in wasted effort and time. Therefore, a software quality characteristics recommendation model is required to address and resolve this issue. The work in this paper can be subdivided into three main parts: (1) a review of popular software quality models and their comparison to create a complete set of predictable quality characteristics, (2) the design of an ML-based recommendation model for recommending the software quality model and software quality characteristics, and (3) performance analysis. The proposed recommendation system utilizes both the software quality-of-service attributes and the software attributes for which these models are suitably applied to satisfy the demands. Profiling of applications and their essential requirements has been performed based on the different quality-of-service parameters and the requirements of the applications. These profiles are learned by machine learning algorithms to distinguish application-based requirements and recommend the essential attributes. The proposed technique has been implemented using Python. The simulation aims to demonstrate how to minimize the cost of software testing and improve time and accuracy by utilizing the appropriate quality matrix. Finally, a conclusion is drawn and the future extension of the proposed model is reported.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_62-A_Novel_Software_Quality_Characteristic_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Purchase Intention and Sentiment Analysis on Twitter Related to Social Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140760</link>
        <id>10.14569/IJACSA.2023.0140760</id>
        <doi>10.14569/IJACSA.2023.0140760</doi>
        <lastModDate>2023-07-31T09:34:38.2930000+00:00</lastModDate>
        
        <creator>Muhammad Alviazra Virgananda</creator>
        
        <creator>Indra Budi</creator>
        
        <creator>Kamrozi</creator>
        
        <creator>Ryan Randy Suryono</creator>
        
        <subject>Algorithm; machine learning; sentiment; social commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Social commerce is a digital and efficient solution to transform existing commerce and address contemporary issues. TikTok Shop, a popular and trending social commerce platform, competes with established competitors like Facebook Marketplace and Instagram Shop. TikTok Shop offers benefits and incentives to attract users for both sales and product purchases. In this study, various algorithmic approaches such as Na&#239;ve Bayes, K-Nearest Neighbor, Support Vector Machine, Logistic Regression, Decision Tree, Random Forest, LGBM Boost, Ada Boost, and Voting Classifier are utilized to analyze and compare sentiments expressed on Twitter regarding Facebook, Instagram, and TikTok. The aim is to determine the best-performing method and identify the social commerce platform with the highest purchase intention and positive sentiment. The results indicate that TikTok has more positive sentiment than Facebook and Instagram, at 93.07%, with Decision Tree as the best-performing classification model. In conclusion, TikTok exhibits the highest positive sentiment percentage, indicating a greater number of positive reviews compared to Facebook and Instagram. According to the theory of evaluation scores for measuring model performance, values above 0.90 represent models with good performance.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_60-Purchase_Intention_and_Sentiment_Analysis_on_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Deep Learning (EDL) for Cyber-bullying on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140761</link>
        <id>10.14569/IJACSA.2023.0140761</id>
        <doi>10.14569/IJACSA.2023.0140761</doi>
        <lastModDate>2023-07-31T09:34:38.2930000+00:00</lastModDate>
        
        <creator>Zarapala Sunitha Bai</creator>
        
        <creator>Sreelatha Malempati</creator>
        
        <subject>Cyber bullying; ensemble deep learning (EDL); convolutional neural networks (CNNs); recurrent neural networks (RNNs); deep belief networks (DBNs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Cyber-bullying is a growing problem in the digital age, affecting millions of people worldwide. Deep learning algorithms have the potential to assist in identifying and combating Cyber-bullying by detecting and classifying harmful messages. This paper uses two Ensemble Deep Learning (EDL) models to detect Cyber-bullying in text data, images, and videos, and provides an overview of Cyber-bullying and its harmful effects on individuals and society. The advantages of using deep learning algorithms in the fight against Cyber-bullying include their ability to process large amounts of data and to learn and adapt to new patterns of Cyber-bullying behaviour. For text data, a pre-trained BERT (Bidirectional Encoder Representations from Transformers) model is first trained on Cyber-bullying text data. The paper then describes the data pre-processing and feature extraction techniques required to prepare data for deep learning algorithms. We also discuss the different types of deep learning algorithms that can be used for Cyber-bullying detection, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and deep belief networks (DBNs). The paper incorporates a sentiment analysis model, Aspect-Based Sentiment Analysis (ABSA), for classifying bullying messages. A Deep Neural Network (DNN) is used for the classification of Cyber-bullying images and videos. Experiments were conducted on three datasets: Twitter (Kaggle), Images (Online), and Videos (Online), collected from various online sources. The results demonstrate the effectiveness of EDL and DNN in detecting bullying data from the relevant datasets. The EDL and DNN obtained an accuracy of 0.987, precision of 0.976, F1-score of 0.975, and recall of 0.971 for the Twitter dataset. The Ensemble CNN achieved an accuracy of 0.887, precision of 0.88, F1-score of 0.88, and recall of 0.887 for the Image dataset. For the Video dataset, the Ensemble CNN achieved an accuracy of 0.807, precision of 0.81, F1-score of 0.82, and recall of 0.81. Future research should focus on developing more accurate and efficient deep learning algorithms for Cyber-bullying detection and investigating the ethical implications of using such algorithms in practice.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_61-Ensemble_Deep_Learning_EDL_for_Cyber_bullying_on_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Phishing Classification Model using Artificial Neural Network and Deep Learning Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140759</link>
        <id>10.14569/IJACSA.2023.0140759</id>
        <doi>10.14569/IJACSA.2023.0140759</doi>
        <lastModDate>2023-07-31T09:34:38.2770000+00:00</lastModDate>
        
        <creator>Noor Hazirah Hassan</creator>
        
        <creator>Abdul Sahli Fakharudin</creator>
        
        <subject>Phishing website; classification; artificial neural network; convolutional neural network; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Phishing is an online crime in which a cybercriminal tries to persuade internet users to reveal important and sensitive personal information, such as bank account details, usernames, passwords, and social security numbers, to the phisher, usually for malicious purposes. The target victim of the fraud suffers a financial loss, as well as the loss of personal information and reputation. Therefore, it is essential to identify an effective approach for phishing website classification. Machine learning approaches have been applied to the classification of phishing websites in recent years. The objectives of this research are to classify phishing websites using an artificial neural network (ANN) and a convolutional neural network (CNN) and then compare the results of the models. This study uses a phishing website dataset collected from the machine learning database of the University of California, Irvine (UCI). There were nine input attributes and three output classes representing website types: legitimate, suspicious, or phishing. The data was split into 70% and 30% for training and testing purposes, respectively. The results indicate that the modified ANN with Rectified Linear Unit (ReLU) activation function outperforms the other models by achieving the lowest average root mean square error (RMSE) for testing, 0.2703, while the CNN model produced the lowest average RMSE for training, 0.2631. The ANN with Sigmoid activation function obtained the highest average RMSE: 0.3516 for training and 0.3585 for testing.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_59-Web_Phishing_Classification_Model_using_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Maintenance Labor Productivity in Electricity Industry using Machine Learning: A Case Study and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140758</link>
        <id>10.14569/IJACSA.2023.0140758</id>
        <doi>10.14569/IJACSA.2023.0140758</doi>
        <lastModDate>2023-07-31T09:34:38.2600000+00:00</lastModDate>
        
        <creator>Mariam Alzeraif</creator>
        
        <creator>Ali Cheaitou</creator>
        
        <creator>Ali Bou Nassif</creator>
        
        <subject>Productivity; machine learning; maintenance; prediction; ANN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Predicting maintenance labor productivity is crucial for effective planning and decision-making in the electricity industry. This paper aims at predicting maintenance labor productivity using various machine learning methods, utilizing a real-world case study from the electricity industry. Additionally, the study evaluates the performance of the employed machine learning methods. To meet this objective, 1750 productivity measures have been used to train (80%) and test (20%) prediction models using Artificial Neural Networks, Support Vector Machines, Random Forest, and Multiple Linear Regression methods. The models&#39; performance was evaluated based on the mean squared error, mean absolute percentage error, and testing time. The results indicated that the Artificial Neural Networks model - specifically, a feedforward network with a backpropagation algorithm - outperformed the other models (Multiple Linear Regression, Support Vector Machines, Random Forest). These results highlight the effectiveness of machine learning, particularly the Artificial Neural Networks prediction model, as an invaluable tool for decision-makers in the electricity industry, aiding in more effective maintenance planning and potential productivity improvement.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_58-Predicting_Maintenance_Labor_Productivity_in_Electricity_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Educational Platform for Professional Development of Teachers with Elements of Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140757</link>
        <id>10.14569/IJACSA.2023.0140757</id>
        <doi>10.14569/IJACSA.2023.0140757</doi>
        <lastModDate>2023-07-31T09:34:38.2470000+00:00</lastModDate>
        
        <creator>Aivar Sakhipov</creator>
        
        <creator>Talgat Baidildinov</creator>
        
        <creator>Madina Yermaganbetova</creator>
        
        <creator>Nurzhan Ualiyev</creator>
        
        <subject>Blockchain; professional development; artificial intelligence; teaching; learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This paper presents an in-depth examination of the development and implementation of an innovative platform for teacher professional development, incorporating features of blockchain technology. The platform manifests a revolutionary step in enhancing teacher training, creating a secure, transparent, and decentralized approach for maintaining continuous professional development records. Using blockchain&#39;s inherent properties, the platform ensures immutable record-keeping and instills credibility in teachers&#39; career progression, empowering educators through direct ownership of their professional development milestones. Additionally, the platform fosters a culture of lifelong learning, encouraging educators to actively engage in their professional growth, while providing reliable evidence of their achievements. Alongside highlighting the design aspects of the platform, the paper delves into potential challenges and solutions associated with the incorporation of blockchain technology into educational contexts. Through this innovative intersection of technology and education, the platform showcases the potential of blockchain in reshaping and enriching professional development strategies for teachers, thereby elevating educational standards and practices across the board.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_57-Design_of_an_Educational_Platform_for_Professional_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dynamic Model for Risk Assessment of Cross-Border Fresh Agricultural Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140756</link>
        <id>10.14569/IJACSA.2023.0140756</id>
        <doi>10.14569/IJACSA.2023.0140756</doi>
        <lastModDate>2023-07-31T09:34:38.2470000+00:00</lastModDate>
        
        <creator>Honghong Zhai</creator>
        
        <subject>Cross-border fresh agricultural products; supply chain management; risk identification; system dynamics model; risk weighting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The cross-border trade of Fresh Agricultural Products (FAP) is widespread in the current society, and the demand for it is also increasing. The cross-border fresh agricultural product Supply Chain (SP) itself has strong complexity and high costs, and it also bears many risks. In order to alleviate the adverse impact of risk factors interfering with cross-border fresh agricultural product SPs and improve the overall SP efficiency, this study proposes a system dynamics model based on cross-border fresh agricultural product risk factors. The experiment first studied the possible risk factors in the SP of FAP. After discussing the causal relationship between possible risks, subjective and objective weighting methods were introduced to weight risk factors. After that, a system dynamics model of the cross-border fresh agricultural product SP was constructed for the purpose of enhancing product quality and the overall efficiency of the SP. In the system dynamics model constructed, risk factors are introduced for simulation experiments. It is demonstrated that the suggested model can truly reflect the dynamic changes of the actual SP, and can obtain the operational rules of the system.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_56-A_Dynamic_Model_for_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Detecting Network Intrusions Based on Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140755</link>
        <id>10.14569/IJACSA.2023.0140755</id>
        <doi>10.14569/IJACSA.2023.0140755</doi>
        <lastModDate>2023-07-31T09:34:38.2300000+00:00</lastModDate>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Nazgul Abdinurova</creator>
        
        <creator>Zhamshidbek Abdulkhamidov</creator>
        
        <subject>Attack detection; intrusion detection; machine learning; information security; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In the rapidly evolving landscape of cyber threats, the efficacy of traditional rule-based network intrusion detection systems has become increasingly questionable. This paper introduces a novel framework for identifying network intrusions, leveraging the power of advanced machine learning techniques. The proposed methodology steps away from the rigidity of conventional systems, bringing a flexible, adaptive, and intuitive approach to the forefront of network security. This study employs a diverse blend of machine learning models including but not limited to, Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forests. This research explores an innovative feature extraction and selection technique that enables the model to focus on high-priority potential threats, minimizing noise and improving detection accuracy. The framework&#39;s performance has been rigorously evaluated through a series of experiments on benchmark datasets. The results consistently surpass traditional methods, demonstrating a remarkable increase in detection rates and a significant reduction in false positives. Further, the machine learning-based model demonstrated its ability to adapt to new threat landscapes, indicating its suitability in real-world scenarios. By marrying the agility of machine learning with the concreteness of network intrusion detection, this research opens up new avenues for dynamic and resilient cybersecurity. The framework offers an innovative solution that can identify, learn, and adapt to evolving network intrusions, shaping the future of cyber defense strategies.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_55-A_Novel_Framework_for_Detecting_Network_Intrusions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effects of Training Data on Prediction Model for Students&#39; Academic Progress</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140754</link>
        <id>10.14569/IJACSA.2023.0140754</id>
        <doi>10.14569/IJACSA.2023.0140754</doi>
        <lastModDate>2023-07-31T09:34:38.2130000+00:00</lastModDate>
        
        <creator>Susana Limanto</creator>
        
        <creator>Joko Lianto Buliali</creator>
        
        <creator>Ahmad Saikhu</creator>
        
        <subject>Decision tree; effects of training data; heterogeneity; prediction; students’ academic performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The ability to predict students’ academic performance before the start of the class with credible accuracy could significantly aid the preparation of effective teaching and learning strategies. Several studies have been conducted to enhance the performance of prediction models by emphasizing three key factors: developing effective prediction algorithms, identifying significant predictor variables, and developing preprocessing techniques. Importantly, none of these studies focused on the effect of using different types of training data on the performance of prediction models. Therefore, this study was conducted to evaluate the effects of differences in training data on the performance of a prediction model designed to monitor students’ academic progress. The findings showed that the performance of the prediction model was strongly influenced by the heterogeneity of the values of the predictor variables, which should accommodate all the existing possibilities. It was also discovered that the application of training data with different characteristics and sizes did not improve the performance of the prediction model when its heterogeneity was not representative.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_54-Effects_of_Training_Data_on_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DefBDet: An Intelligent Default Borrowers Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140753</link>
        <id>10.14569/IJACSA.2023.0140753</id>
        <doi>10.14569/IJACSA.2023.0140753</doi>
        <lastModDate>2023-07-31T09:34:38.2000000+00:00</lastModDate>
        
        <creator>Fooz Alghamdi</creator>
        
        <creator>Nora Alkhamees</creator>
        
        <subject>Default borrowers; default loans; loan risks; machine learning models; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The growing popularity and availability of online lending platforms have attracted more borrowers and lenders. There have been several studies focusing on analyzing loan risks in the financial industry; however, defaulted loans still remain an issue that needs more attention. Hence, this research aims to develop an intelligent prediction model that is able to predict risky loans and default borrowers, named the Default Borrowers Detection Model (DefBDet). We seek to help loan lending platforms approve loans for those who are expected to comply with re-payments at the agreed time. Previous works developed binary classification prediction models (either default or repaid loan), where repaid loans include loans repaid on or after the loan deadline date. DefBDet, on the other hand, is a novel model: it predicts loan status on a multi-class basis rather than a binary basis. Hence, it can additionally identify loans expected to be repaid late, so that special conditions can be assigned before a loan is approved. This study employs seven different Machine Learning models, using a real-world dataset from 2009-2022 consisting of around 255k loan requests. Statistical measures such as Recall, Precision, and F-measure have been used for the models&#39; evaluation. Results show that Random Forest achieved the highest performance of 85%.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_53-DefBDet_An_Intelligent_Default_Borrowers_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing YOLO Performance for Traffic Light Detection and End-to-End Steering Control for Autonomous Vehicles in Gazebo-ROS2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140752</link>
        <id>10.14569/IJACSA.2023.0140752</id>
        <doi>10.14569/IJACSA.2023.0140752</doi>
        <lastModDate>2023-07-31T09:34:38.2000000+00:00</lastModDate>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Khang Hoang Nguyen</creator>
        
        <creator>Huy Khanh Hua</creator>
        
        <creator>Huynh Vu Nhu Nguyen</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Yolo models; PID; CNN; gazebo; ROS2; traffic-light; lane-keeping; autonomous</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Autonomous driving has become a popular area of research in recent years, with accurate perception and recognition of the environment being critical for successful implementation. Traditional methods for recognizing and controlling steering rely on the color and shape of traffic lights and road lanes, which can limit their ability to handle complex scenarios and variations in data. This paper presents an optimization of the You Only Look Once (YOLO) object detection algorithm for traffic light detection and end-to-end steering control for lane-keeping in the simulation environment. The study compares the performance of YOLOv5, YOLOv6, YOLOv7, and YOLOv8 models for traffic light signal detection, with YOLOv8 achieving the best results with a mean Average Precision (mAP) of 98.5%. Additionally, the study proposes an end-to-end convolutional neural network (CNN) based steering angle controller that combines data from a classical proportional integral derivative (PID) controller and the steering angle controller from human perception. This controller predicts the steering angle accurately, outperforming conventional open-source computer vision (OpenCV) methods. The proposed algorithms are validated on an autonomous vehicle model in a simulated Gazebo environment of Robot Operating System 2 (ROS2).</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_52-Optimizing_YOLO_Performance_for_Traffic_Light_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach Method for Multi Classification of Lung Diseases using X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140751</link>
        <id>10.14569/IJACSA.2023.0140751</id>
        <doi>10.14569/IJACSA.2023.0140751</doi>
        <lastModDate>2023-07-31T09:34:38.1830000+00:00</lastModDate>
        
        <creator>Sri Heranurweni</creator>
        
        <creator>Andi Kuniawan Nugroho</creator>
        
        <creator>Budiani Destyningtias</creator>
        
        <subject>Augmentation; machine learning; lung disease; preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Lung disease is one of the most common diseases in today&#39;s society. Treatment of lung disease is frequently delayed, usually due to a lack of understanding about proper treatment and a lack of clear information about the disease. One method of detecting lung disease is correctly reading X-ray images, which is usually done by experts who are familiar with these X-rays. However, the results of this diagnosis depend on the expert&#39;s practice schedule and take a long time. This study aims to classify lung disease images using preprocessing, augmentation, and multiple machine learning methods, with the goal of achieving high classification accuracy for multi-class lung disease. In experiments on an unbalanced dataset balanced through augmentation, the ExtraTrees classifier achieved 100% Precision, Recall, F1-Score, and Accuracy on training data and, on testing data, 89% Precision, 88% Recall, 87% F1-Score, and 85% Accuracy, outperforming other machine learning models such as K-Neighbors, Support Vector Machine (SVM), and Random Forest in classifying lung diseases. The conclusion from this research is that a machine learning approach can detect several lung diseases using X-ray images.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_51-A_New_Approach_Method_for_Multi_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing User Experience Via Calibration Minimization using ML Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140750</link>
        <id>10.14569/IJACSA.2023.0140750</id>
        <doi>10.14569/IJACSA.2023.0140750</doi>
        <lastModDate>2023-07-31T09:34:38.1670000+00:00</lastModDate>
        
        <creator>Sarah N. AbdulKader</creator>
        
        <creator>Taha M. Mohamed</creator>
        
        <subject>EMG signals; user independence; EMG user acceptance; HCI; movement classification; calibration minimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Electromyogram (EMG) signals are used to recognize gestures that could be used for prosthetic-based and hands-free human computer interaction. Minimizing calibration times for users while preserving accuracy is one of the main challenges facing the practicality, user acceptance, and spread of upper limb movement detection systems. This paper studies the effect of minimized user involvement, and thus user calibration time and effort, on user-independent system accuracy. It exploits time-based features extracted from EMG signals. One-versus-all kernel-based Support Vector Machine (SVM) and K Nearest Neighbors (KNN) are used for classification. The experiments are conducted using a dataset of five subjects performing six distinct movements. Two experiments were performed, one under a complete user-dependence condition and the other under partial dependence. The results show that involving at least two user samples, representing around 2% of the sample space, increases performance by 62.6% for SVM, achieving an average accuracy of 89.6%; involving at least three samples, representing around 3% of the sample space, increases performance by 50.6% for KNN, achieving an average accuracy of 78.2%. The results confirm the great impact on system accuracy of involving even a small number of user samples in the model-building process using traditional classification methods.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_50-Enhancing_User_Experience_Via_Calibration_Minimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Modeling Trust Cyber-Physical Systems: A Model-based System Engineering Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140748</link>
        <id>10.14569/IJACSA.2023.0140748</id>
        <doi>10.14569/IJACSA.2023.0140748</doi>
        <lastModDate>2023-07-31T09:34:38.1530000+00:00</lastModDate>
        
        <creator>Zina Oudina</creator>
        
        <creator>Makhlouf Derdour</creator>
        
        <subject>Cyber Physical Systems (CPSs); trust CPS; system engineering (SE); model-based system engineering (MBSE); SysML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Developing trust in cyber-physical systems (CPSs) is a challenging task. A trusted CPS is one that carries out its intended duties, is reasonably safe from misuse and intrusion, and enforces the applicable security policy. For example, in the case of smart medical devices, many studies have found that trust is a key factor in explaining the relationship between individual beliefs about technological attributes and acceptance behavior, and have associated medical device failures with severe patient injuries and deaths. A cyber-physical system is considered trustworthy if the principles of security and safety, confidentiality, integrity, availability, and other attributes are assured. However, a lack of sufficient analysis of such systems, as well as of appropriate explanation of relevant trust assumptions, may result in systems that fail to completely realize their functionality. The existing research does not provide suitable guidance for a systematic procedure or modeling language to support such trust-based analysis. The most pressing difficulties are achieving trust by design in CPS and systematically incorporating trust engineering into system development from the start of the system life cycle. Still, there is a need for a strategy or standard model to aid in the creation of a safe, secure, and trustworthy CPS. Model-based system engineering (MBSE) approaches for trusted cyber-physical systems are a means to address system trustworthiness design challenges. This work proposes a practical and efficient MBSE method for constructing trusted CPSs, which provides guidance for the process of trustworthiness analysis. A SysML-based profile is supplied, together with recommendations on which approach is required at each process phase. The MBSE method is demonstrated by extending the SysML and UML diagrams of an autonomous car, showing how trust considerations are integrated into the system development life cycle.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_48-Toward_Modeling_Trust_Cyber_Physical_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of an Ontology-based Document Collection for the IT Job Offer in Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140749</link>
        <id>10.14569/IJACSA.2023.0140749</id>
        <doi>10.14569/IJACSA.2023.0140749</doi>
        <lastModDate>2023-07-31T09:34:38.1530000+00:00</lastModDate>
        
        <creator>Zineb Elkaimbillah</creator>
        
        <creator>Bouchra El Asri</creator>
        
        <creator>Mounia Mikram</creator>
        
        <creator>Maryem Rhanoui</creator>
        
        <subject>Ontology; IT job descriptions; semantic links; DL query; Prot&#233;g&#233; 5.5.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Information Technology (IT) job offers are available on the web in a heterogeneous way. It is difficult for a candidate looking for an IT job to retrieve the exact information they need to find the ideal match for their profile without wasting time on fruitless searches. Traditional IT job search systems are based on simple keywords, which are generally not adapted to provide detailed answers because they do not take semantic links into account. In this article, an ontology is developed to meet the expectations of IT profiles, based on IT job descriptions collected and pre-annotated using the UBIAI tool. The classes and subclasses of the ontology are designed using the Prot&#233;g&#233; 5.5.0 editor. The object and data properties are then defined to enrich the ontology. The ontology is validated using DL queries: a number of questions are asked to retrieve the requested information for each IT profile, and the ontology answers all of them adequately. Finally, various plugins are used to display the ontology in a graphical representation.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_49-Construction_of_an_Ontology_based_Document_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drug Resistant Prediction Based on Plasmodium Falciparum DNA-Barcoding using Bidirectional Long Short Term Memory Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140747</link>
        <id>10.14569/IJACSA.2023.0140747</id>
        <doi>10.14569/IJACSA.2023.0140747</doi>
        <lastModDate>2023-07-31T09:34:38.1370000+00:00</lastModDate>
        
        <creator>Lailil Muflikhah</creator>
        
        <creator>Nashi Widodo</creator>
        
        <creator>Novanto Yudistira</creator>
        
        <creator>Achmad Ridok</creator>
        
        <subject>Drug resistant; plasmodium falciparum; Bi-LSTM; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Malaria mostly affects children and causes deaths every year. Multiple factors contribute to treatment failure, including anti-malarial drug resistance, which arises from a decrease in the efficacy of drugs against Plasmodium parasites. Therefore, we propose a computational approach using deep learning methods to predict anti-malarial drug resistance based on genetic variants of Plasmodium falciparum obtained through DNA barcoding. The DNA barcode, an organism identifier for Plasmodium, is employed as the dataset for predicting anti-malarial drug resistance. As a univariate amino acid sequence, it is transformed into numerical data for building the classifier model, which is constructed for prediction using Bidirectional Long Short-Term Memory (Bi-LSTM), an algorithm that extends LSTM by processing the sequence in two directions. In the first stage, the sequence is encoded into numerical data as input for the method, using a sigmoid activation function. Binary cross-entropy loss is then used to define the class: resistant or sensitive. In the final stage, hyper-parameters are tuned using the Adaptive Moment Estimation (Adam) optimizer to obtain the best performance. The experimental results show that the proposed Bi-LSTM method achieves high performance for resistance prediction in terms of precision, recall, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_47-Drug_Resistant_Prediction_Based_on_Plasmodium_Falciparum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Vehicle Classification System for Intelligent Transport System using Machine Learning in Constrained Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140746</link>
        <id>10.14569/IJACSA.2023.0140746</id>
        <doi>10.14569/IJACSA.2023.0140746</doi>
        <lastModDate>2023-07-31T09:34:38.1200000+00:00</lastModDate>
        
        <creator>Ahmed S. Alghamdi</creator>
        
        <creator>Talha Imran</creator>
        
        <creator>Khalid T. Mursi</creator>
        
        <creator>Atika Ejaz</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Abdullah Alamri</creator>
        
        <subject>Vehicle classification; intelligent transport system; deep learning; machine learning; CNN; digital image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Vehicle type classification has an extensive variety of applications, including intelligent parking systems, traffic flow statistics, toll collection systems, vehicle access control, congestion management, and security systems. These applications are designed for reliable and secure transportation. Vehicle classification is one of their major challenges, particularly in a constrained environment. Real-world constrained environments limit data quality due to noise, poor lighting conditions, low-resolution images, and bad weather. In this research, we build a more practical and robust vehicle type classification system for real-world constrained environments, with promising results: a validation accuracy of 90.85% and a testing accuracy of 87%. To this end, we design a framework for vehicle type classification from vehicle images using machine learning. We investigate the deep learning method of Convolutional Neural Networks (CNN), a specific type of neural network. CNNs are biologically inspired, multi-layer feed-forward neural networks that can automatically learn invariant features at several stages for the task at hand. For evaluation, we also compared the performance of our model with that of other machine learning algorithms such as Na&#239;ve Bayes, SVM, and Decision Trees.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_46-A_Vehicle_Classification_System_for_Intelligent_Transport_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Multi-layer Perceptron and Support Vector Machine Methods on Rainfall Data with Optimal Parameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140745</link>
        <id>10.14569/IJACSA.2023.0140745</id>
        <doi>10.14569/IJACSA.2023.0140745</doi>
        <lastModDate>2023-07-31T09:34:38.1070000+00:00</lastModDate>
        
        <creator>Marji </creator>
        
        <creator>Agus Widodo</creator>
        
        <creator>Marjono</creator>
        
        <creator>Wayan Firdaus Mahmudy</creator>
        
        <creator>Maulana Muhamad Arifin</creator>
        
        <subject>Rainfall; MLP; SVM; optimal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This study describes the search for optimal hyperparameter values on rainfall data from 49 cities in Australia, consisting of 145,460 records with 22 features. The preprocessing eliminates missing values and selects 16 numeric features as input features and one feature (Rain Tomorrow) as the output feature. The data are processed using the Multi-Layer Perceptron (MLP) and Support Vector Machine (SVM) methods based on Three Best Accuration (3BestAcc) and Best Three Nearest Neighbors (3BestNN). The results showed that the SVM method with a linear kernel gave an average accuracy of 0.85586, better than the MLP method with an accuracy of 0.854.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_45-Comparison_of_Multi_layer_Perceptron_and_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Framework for Analytical Operation in Intelligent Transportation System using Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140744</link>
        <id>10.14569/IJACSA.2023.0140744</id>
        <doi>10.14569/IJACSA.2023.0140744</doi>
        <lastModDate>2023-07-31T09:34:38.0900000+00:00</lastModDate>
        
        <creator>Mahendra G</creator>
        
        <creator>Roopashree H. R</creator>
        
        <subject>Intelligent transportation system; traffic management; machine learning; artifacts; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The Intelligent Transportation System (ITS) is the future of the current transport scheme. It is meant to incorporate intelligent traffic management operations that offer vehicles more safety and valuable traffic-related information. A review of existing approaches shows the implementation of various scattered schemes in which analytical operation is mainly emphasized. However, significant shortcomings remain in efficiently managing complex traffic data. Therefore, the proposed system introduces a novel computational framework with a joint operation toward analytical processing using big data, aiming to manage raw and complex traffic data efficiently. As a novel feature, the model introduces a data manager that can handle the complex traffic stream, followed by decentralized traffic management that can identify and eliminate artifacts using statistical correlation. Finally, predictive modelling is incorporated to offer knowledge discovery with the highest accuracy. The simulation outcome shows that Random Forest excels with 99% accuracy, the highest among the machine learning approaches considered, along with 11.77% reduced overhead, 1.3% reduced delay, and 67.47% reduced processing time compared to existing machine learning approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_44-Computational_Framework_for_Analytical_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dynamic Intrusion Detection System Capable of Detecting Unknown Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140743</link>
        <id>10.14569/IJACSA.2023.0140743</id>
        <doi>10.14569/IJACSA.2023.0140743</doi>
        <lastModDate>2023-07-31T09:34:38.0730000+00:00</lastModDate>
        
        <creator>Na Xing</creator>
        
        <creator>Shuai Zhao</creator>
        
        <creator>Yuehai Wang</creator>
        
        <creator>Keqing Ning</creator>
        
        <creator>Xiufeng Liu</creator>
        
        <subject>Intrusion detection systems; transformer; DUA-IDS; data playback; variable perspectives features; Knowledge distillation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In recent years, deep learning-based network intrusion detection systems (IDS) have shown impressive results in detecting attacks. However, most existing IDS can only recognize known attacks that were included in their training data. When faced with unknown attacks, these systems are often unable to take appropriate actions and incorrectly classify them into known categories, leading to reduced detection performance. Furthermore, as the number and types of network attacks continue to increase, it becomes challenging for these IDS to update their model parameters promptly and adapt to new attack scenarios. To address these issues, this paper introduces a dynamic intrusion detection system, the Dynamic Unknown Attack Intrusion Detection System (DUA-IDS), which aims to learn and detect unknown attacks effectively. DUA-IDS comprises three components: (1) a feature extractor, which employs CNN and Transformer models to extract data features from various perspectives; (2) a threshold-based classifier, which utilizes the nearest-mean rule over samples to distinguish known from unknown attacks; and (3) a dynamic learning module, which incorporates data playback and knowledge distillation techniques to retain existing category knowledge while continuously learning new attack categories. To assess the effectiveness of DUA-IDS, this paper conducted experiments on the public UNSW-NB15 dataset. The experimental results show that DUA-IDS improves classification accuracy on flow network data containing unknown traffic attacks: it can accurately distinguish unknown traffic and correctly classify known traffic, and when dynamically learning unknown traffic, the classification accuracy on previously learned known traffic is only slightly affected. This indicates the advantages of DUA-IDS in detecting unknown attacks and learning new attack categories.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_43-A_Dynamic_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Fake News Detection Models: Highlighting the Factors Affecting Model Performance and the Prominent Techniques Used</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140742</link>
        <id>10.14569/IJACSA.2023.0140742</id>
        <doi>10.14569/IJACSA.2023.0140742</doi>
        <lastModDate>2023-07-31T09:34:38.0600000+00:00</lastModDate>
        
        <creator>Suhaib Kh. Hamed</creator>
        
        <creator>Mohd Juzaiddin Ab Aziz</creator>
        
        <creator>Mohd Ridzwan Yaakub</creator>
        
        <subject>Fake news detection; social media; data augmentation; feature extraction; multimodal fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In recent times, social media has become the primary way people get news about what is happening in the world. Fake news surfaces on social media every day and has harmed several domains, including politics, the economy, and health; it has also negatively affected the stability of society. Although numerous studies have offered useful models for identifying fake news in social networks using many techniques, certain limitations and challenges remain. Moreover, the accuracy of detection models is still notably poor given that this is a critical topic. Despite the many review articles available, most previous ones concentrated on certain, repeatedly covered aspects of fake news detection models. For instance, the majority of reviews in this discipline only mentioned datasets or categorized them according to labels, content, and domain. Since the majority of detection models are built using a supervised learning method, how the limitations of these datasets affect detection models has not been investigated. This review article highlights the most significant components of the fake news detection model and the main challenges it faces. Data augmentation, feature extraction, and data fusion are some of the approaches explored in this review to improve detection accuracy. Moreover, it discusses the most prominent techniques used in detection models and their main advantages and disadvantages. This review aims to help other researchers improve fake news detection models.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_42-A_Review_of_Fake_News_Detection_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Protective Apparatus for Municipal Engineering Construction Personnel Based on Improved YOLOv5s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140741</link>
        <id>10.14569/IJACSA.2023.0140741</id>
        <doi>10.14569/IJACSA.2023.0140741</doi>
        <lastModDate>2023-07-31T09:34:38.0430000+00:00</lastModDate>
        
        <creator>Shuangyuan Li</creator>
        
        <creator>Yanchang Lv</creator>
        
        <creator>Mengfan Li</creator>
        
        <creator>Zhengwei Wang</creator>
        
        <subject>YOLOv5s; hard hat; reflective vest; simultaneous detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>With rapid economic development, the government has increased investment in municipal construction, which usually takes a long time, involves many open-air operations, and is affected by cross-construction, traffic, climate, and the environment. The safety protection of urban construction workers has long been a concern. In this paper, an improved algorithm based on YOLOv5s for the simultaneous detection of helmets and reflective vests is proposed for municipal construction management. First, a new data augmentation method, Mosaic-6, is used to improve the model&#39;s ability to learn local features. Second, the SE attention mechanism is introduced in the Focus module to expand the receptive field, strengthen the association between channel information and the detection target, and improve detection accuracy. Finally, the features of small-scale targets are interacted and fused in multiple dimensions according to the Swin Transformer network structure. The experimental results show that the improved algorithm achieves precision, recall, and mean average precision of 98.5%, 97.0%, and 92.7%, respectively, an improvement of 3.4 percentage points in mean average precision compared to the basic YOLOv5s. This study provides valuable insights for further research in the area of urban engineering safety and protection.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_41-Detection_of_Protective_Apparatus_for_Municipal_Engineering_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Model of Stroke Rehabilitation Service and User Demand Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140740</link>
        <id>10.14569/IJACSA.2023.0140740</id>
        <doi>10.14569/IJACSA.2023.0140740</doi>
        <lastModDate>2023-07-31T09:34:38.0270000+00:00</lastModDate>
        
        <creator>Hua Wei</creator>
        
        <creator>Ding-Bang Luh</creator>
        
        <creator>Yue Sun</creator>
        
        <creator>Xiao-Hong Mo</creator>
        
        <creator>Yu-Hao Shen</creator>
        
        <subject>Stroke; rehabilitation services; user needs; matching model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This article focuses on matching stroke rehabilitation services to patient needs through the interconnection between patient demand and rehabilitation service capabilities. A solution based on the KJ, fuzzy AHP, and QFD methods is proposed to address this problem. Specifically, the KJ method categorizes user needs, and the fuzzy AHP method calculates their weights and rankings. Furthermore, rehabilitation service capability indicators are developed, and the QFD method is applied to match customer needs with these indicators. The service indicator values are constructed through mapping relationships, and the rehabilitation service capability value is obtained by summing the results. The best matching scheme is predicted by comparing the rehabilitation service capability values of the service alternatives. The effectiveness of the model is demonstrated through a case study, in which it helped patients and service organizations find suitable caregivers. The research results illustrate that the proposed model can effectively address the problem of matching stroke rehabilitation services to patient needs and has practical value and potential applications. This research is therefore significant in enhancing the quality of stroke rehabilitation services and patient satisfaction, and it provides a reference for future studies of similar issues.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_40-The_Model_of_Stroke_Rehabilitation_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomalous Taxi Trajectory Detection using Popular Routes in Different Traffic Periods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140739</link>
        <id>10.14569/IJACSA.2023.0140739</id>
        <doi>10.14569/IJACSA.2023.0140739</doi>
        <lastModDate>2023-07-31T09:34:37.9970000+00:00</lastModDate>
        
        <creator>Lina Xu</creator>
        
        <creator>Yonglong Luo</creator>
        
        <creator>Qingying Yu</creator>
        
        <creator>Xiao Zhang</creator>
        
        <creator>Wen Zhang</creator>
        
        <creator>Zhonghao Lu</creator>
        
        <subject>Anomalous trajectory detection; time periods; popular routes; gridded distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Anomalous trajectory detection is an important approach to detecting taxi fraud in urban traffic systems. Existing methods usually ignore the integration of trajectory access locations with time and trajectory structure, so they incorrectly flag normal trajectories that bypass congested roads as anomalies and overlook circuitous travel. Therefore, this study proposes an anomalous trajectory detection algorithm that uses popular routes in different traffic periods to solve this problem. First, to obtain popular routes in different time periods, the study divides time according to the temporal distribution of the traffic trajectories. Second, the spatiotemporal frequency values of the nodes are obtained by combining trajectory point timestamps and time spans, excluding the interference of temporally anomalous trajectories on the frequency. Finally, a gridded distance measurement method is designed to quantitatively measure the anomaly between a trajectory and the popular routes by combining trajectory position and trajectory structure. Extensive experiments conducted on real taxi trajectory datasets show that the proposed method can effectively detect anomalous trajectories. Compared to the baseline algorithms, the proposed algorithm has a shorter running time and a significant improvement in F-score, with highest improvement rates of 7.9%, 5.6%, and 10.7%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_39-Anomalous_Taxi_Trajectory_Detection_using_Popular_Routes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inspection System for Glass Bottle Defect Classification based on Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140738</link>
        <id>10.14569/IJACSA.2023.0140738</id>
        <doi>10.14569/IJACSA.2023.0140738</doi>
        <lastModDate>2023-07-31T09:34:37.9800000+00:00</lastModDate>
        
        <creator>Niphat Claypo</creator>
        
        <creator>Saichon Jaiyen</creator>
        
        <creator>Anantaporn Hanskunatai</creator>
        
        <subject>Convolutional neural network; glass bottle; defect detection; long short-term memory; inspection machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The problem of defects in glass bottles is a significant issue in glass bottle manufacturing. Various types of defects can occur, including cracks, scratches, and blisters, and detecting them is crucial for ensuring the quality of glass bottle production. The inspection system must be able to accurately detect defects and automatically determine whether they affect a bottle&#39;s appearance and functionality; defective bottles must be identified and removed from the production line to maintain product quality. This paper proposes glass bottle defect classification using a Convolutional Neural Network with Long Short-Term Memory (CNNLSTM) and instance-based classification. The CNNLSTM is used for feature extraction to create representations of the class data, and the instance-based classifier predicts anomalies based on the similarity of these representations. The convolutional layer of the CNNLSTM method incorporates transfer learning, using pre-trained models such as ResNet50, AlexNet, MobileNetV3, and VGG16. In the experiments, the results were compared with the ResNet50, AlexNet, MobileNetV3, VGG16, ADA, image thresholding, and edge detection methods. The experimental results demonstrate the effectiveness of the proposed method, achieving high classification accuracies of 77% on the body dataset, 95% on the neck dataset, and an impressive 98% on the rotating dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_38-Inspection_System_for_Glass_Bottle_Defect_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Cloud Security: An Optimization-based Deep Learning Model for Detecting Denial-of-Service Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140737</link>
        <id>10.14569/IJACSA.2023.0140737</id>
        <doi>10.14569/IJACSA.2023.0140737</doi>
        <lastModDate>2023-07-31T09:34:37.9630000+00:00</lastModDate>
        
        <creator>Lamia Alhazmi</creator>
        
        <subject>DOS attack; cloud database; generative adversarial networks; attack detection; security threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Denial-of-Service (DoS) attacks pose an imminent threat to cloud services and can cause significant financial and intellectual damage to cloud service providers and their customers. DoS attacks can also result in revenue loss and security vulnerabilities due to system disruptions, interrupted services, and data breaches. However, despite machine learning methods being a subject of research for detecting DoS attacks, there has not been much advancement in this area. Consequently, additional research is required in this field to create the most effective models for detecting DoS attacks in cloud-based environments. This paper suggests a deep convolutional generative adversarial network as an optimization-based deep learning model for identifying DoS attacks in the cloud. The proposed model employs Deep Convolutional Generative Adversarial Networks (DCGAN) to capture the spatial and temporal features of network traffic data, thereby enabling the detection of patterns indicative of DoS attacks. Furthermore, to make the DCGAN more accurate and resistant to attacks, it is trained on a massive collection of network traffic data. Moreover, the model is optimized via backpropagation and stochastic gradient descent to reduce the loss function, which quantifies the gap between the simulated and observed traffic volumes. The test findings show that the suggested model is superior to state-of-the-art methods for identifying cloud-based DoS attacks in terms of precision and false-positive rate.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_37-Enhancing_Cloud_Security_An_Optimization_based_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SECI Model Design with a Combination of Data Mining and Data Science in Transfer of Knowledge of College Graduates’ Competencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140736</link>
        <id>10.14569/IJACSA.2023.0140736</id>
        <doi>10.14569/IJACSA.2023.0140736</doi>
        <lastModDate>2023-07-31T09:34:37.9500000+00:00</lastModDate>
        
        <creator>Mardiani </creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <creator>Abdiansah</creator>
        
        <subject>Model SECI; data mining; data science; competence of graduates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>One of the methods in knowledge management that can be used is the SECI Model, which transfers tacit and explicit knowledge in each quadrant. However, without tools, the transfer of technical knowledge encounters various obstacles, including the limited knowledge of informants, difficulty in translating what the informants convey, limited time and opportunities, and unclear results. The knowledge transfer needed by college institutions takes the form of input from their graduates. Graduates&#39; knowledge must be obtained to determine whether their competence aligns with their respective fields of knowledge. Information technology can help overcome technical problems in transferring knowledge, including the problem of large amounts of data. Data science delivers results from a combination of technology and mathematics, while data mining, especially its classification, clustering, and association functions, can provide a clear picture of what higher education institutions need to know about their graduates in order to assess the curriculum provided so far. The design of the SECI model and the implementation of this data mining use an empirical approach through observation and experimentation with quantitative data, as well as theoretical thinking to support the development of the model concept. Data mining and data science clarify the processes in the SECI Model quadrants, serving as technological tools for the circular transfer of knowledge between tacit and explicit forms, so that it becomes more directed and precise. Information extracted from graduate competencies can assist college institutions in formulating future strategies in the academic field, especially the curricula of study programs. This will impact future students, as the developed curriculum will focus more on the input of graduates.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_36-SECI_Model_Design_with_a_Combination_of_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Machine Learning Models for Predicting Graduation Timelines in Moroccan Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140734</link>
        <id>10.14569/IJACSA.2023.0140734</id>
        <doi>10.14569/IJACSA.2023.0140734</doi>
        <lastModDate>2023-07-31T09:34:37.9330000+00:00</lastModDate>
        
        <creator>Azeddine Sadqui</creator>
        
        <creator>Merouane Ertel</creator>
        
        <creator>Hicham Sadiki</creator>
        
        <creator>Said Amali</creator>
        
        <subject>Machine learning; logistic regression; classification reports; on time graduation; Moroccan universities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The escalating student numbers in Moroccan universities have intensified the complexities of managing on-time graduation. In this context, machine learning methodologies were utilized to analyze patterns and predict on-time graduation rates in a comprehensive manner. Our dataset comprised information from 5236 bachelor students who graduated in 2020 and 2021 from the Faculty of Law, Economic, and Social Sciences at Moulay Ismail University. The dataset incorporated a diverse range of student attributes, including age, marital status, gender, nationality, socio-economic category of parents, profession, disability status, province of residence, high school diploma attainment, and academic honors, all contributing to a comprehensive understanding of the factors influencing graduation outcomes. Five different machine learning models were implemented and evaluated: Support Vector Machines, Decision Tree, Naive Bayes, Logistic Regression, and Random Forest. These models were assessed based on their classification reports, confusion matrices, and Receiver Operating Characteristic (ROC) curves. The Random Forest model emerged as the most accurate in predicting on-time graduation, showcasing the highest accuracy and ROC AUC score. Despite these promising results, it is believed that performance enhancements can be achieved through further tuning and preprocessing of the dataset. Insights from this study could enable Moroccan universities, among others, to better comprehend the factors influencing on-time graduation and implement appropriate measures to improve academic outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_34-Evaluating_Machine_Learning_Models_for_Predicting_Graduation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient and Accurate Beach Litter Detection Method Based on QSB-YOLO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140735</link>
        <id>10.14569/IJACSA.2023.0140735</id>
        <doi>10.14569/IJACSA.2023.0140735</doi>
        <lastModDate>2023-07-31T09:34:37.9330000+00:00</lastModDate>
        
        <creator>Hanling Zhu</creator>
        
        <creator>Daoheng Zhu</creator>
        
        <creator>Xue Qin</creator>
        
        <creator>Fawang Guo</creator>
        
        <subject>Beach litter detection; QSB-YOLO; YOLOv7; Quantization-Aware RepVGG; simple parameter-free attention module; bidirectional feature pyramid network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Because of the potential threats it poses to marine ecosystems and human health, beach litter is becoming a major global environmental issue. Traditional manual sampling surveys of beach litter offer poor real-time capability, limited effectiveness, and a restricted detection area, making it extremely difficult to quickly clean up and recycle beach litter. Deep learning technology is advancing quickly, opening up a new approach to monitoring beach litter. A QSB-YOLO beach litter detection approach based on an improved YOLOv7 is proposed to address the problem of missed and false detections in beach litter detection. First, YOLOv7 is combined with the quantization-friendly Quantization-Aware RepVGG (QARepVGG) to reduce the model&#39;s parameters while maintaining its performance advantage. Second, a Simple, Parameter-Free Attention Module (SimAM) is used in YOLOv7 to enhance the network&#39;s feature extraction capacity for image regions of interest. Finally, the original neck is improved by incorporating the concept of the Bidirectional Feature Pyramid Network (BiFPN), allowing the network to better learn features of various sizes. Test results on a self-built dataset demonstrate that: (1) QSB-YOLO has a good detection effect for six types of beach litter; (2) QSB-YOLO achieves a 5.8% higher mAP than YOLOv7 with a 43% faster detection speed, and has the highest detection accuracy for styrofoam, plastic products, and paper products; (3) QSB-YOLO has the highest detection accuracy and efficiency among the compared models. The experimental results demonstrate that the proposed model satisfies the need for real-time beach litter identification.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_35-Efficient_and_Accurate_Beach_Litter_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Segmentation-based Token Identification for Recognition of Audio Mathematical Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140733</link>
        <id>10.14569/IJACSA.2023.0140733</id>
        <doi>10.14569/IJACSA.2023.0140733</doi>
        <lastModDate>2023-07-31T09:34:37.9170000+00:00</lastModDate>
        
        <creator>Vaishali A. Kherdekar</creator>
        
        <creator>Sachin A. Naik</creator>
        
        <creator>Prafulla Bafna</creator>
        
        <subject>Audio segmentation; classification; feature extraction; neural network; speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In human-computer interaction, humans can interact with the computer through text, audio, images, speech, etc. Interacting with the computer using speech recognition, and in particular audio segmentation, is a challenging task due to accents and pronunciation styles. Inputting mathematical symbols, words, functions, and expressions with a keyboard is tedious and time-consuming; providing this input via audio speeds up the process. In this paper, an audio Segmentation-Based Token Identification (SBTI) algorithm is proposed for the recognition of words in an audio mathematical expression. Six types of audio mathematical expressions are considered for recognition. The proposed algorithm segments the audio file into chunks, and from each chunk the temporal and spectral characteristics of the audio signal are selected to extract features. The model is trained using a neural network. The proposed algorithm achieves a classification accuracy of 100% for algebraic, quadratic, area, and differentiation expressions, 99% for trigonometric expressions, and 92% for summation expressions.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_33-A_Segmentation_based_Token_Identification_for_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Apps Performance Testing as a Service for Parallel Test Execution and Automatic Test Result Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140732</link>
        <id>10.14569/IJACSA.2023.0140732</id>
        <doi>10.14569/IJACSA.2023.0140732</doi>
        <lastModDate>2023-07-31T09:34:37.9030000+00:00</lastModDate>
        
        <creator>Amira Ali</creator>
        
        <creator>Huda Amin Maghawry</creator>
        
        <creator>Nagwa Badr</creator>
        
        <subject>Performance testing; mobile apps testing; mobile apps performance testing; automated testing; cloud computing; TaaS; model-based testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Nowadays, numerous mobile apps that influence the lives of people worldwide are developed daily. Mobile apps are implemented within a limited time and budget, to keep up with rapid business growth and to gain a competitive advantage in the market. Performance testing is a crucial activity that evaluates the behavior of the application under test (AUT) under various workloads. In mobile app development, performance testing is still a manual and time-consuming activity; as a negative consequence, it is ignored during the development of many mobile apps. Thus, mobile apps may suffer from weak performance that badly affects the user experience. Cloud technology has therefore been introduced as a solution in the domain of software testing: software testing is provided as a service (TaaS) that leverages cloud-based resources, overcoming testing issues and achieving high test quality. In this paper, a cloud-based testing-as-a-service architecture is proposed for performance testing of mobile apps. The proposed performance testing as a service (P-TaaS) adopts efficient approaches for automating the entire process, including test case generation, parallel test execution, and test result analysis. The proposed test case generation approach applies the model-based testing (MBT) technique, which generates test cases automatically from the AUT’s specification models and artifacts. The proposed P-TaaS lessens testing time and satisfies the fast time-to-release constraint of mobile apps. Additionally, it maximizes resource utilization and allows continuous resource monitoring.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_32-Mobile_Apps_Performance_Testing_as_a_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-feature Fusion for Relation Extraction using Entity Types and Word Dependencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140731</link>
        <id>10.14569/IJACSA.2023.0140731</id>
        <doi>10.14569/IJACSA.2023.0140731</doi>
        <lastModDate>2023-07-31T09:34:37.8870000+00:00</lastModDate>
        
        <creator>Pu Zhang</creator>
        
        <creator>Junwei Li</creator>
        
        <creator>Sixing Chen</creator>
        
        <creator>Jingyu Zhang</creator>
        
        <creator>Libo Tang</creator>
        
        <subject>Relation extraction; multi-feature fusion; information extraction; dependency tree; entity type</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Most existing methods do not make full use of different types of information sources to extract effective features for relation extraction. This paper proposes a multi-feature fusion model based on raw input sentences and external knowledge sources, which deeply integrates diverse lexical, semantic, and syntactic features into deep neural network models. Specifically, our model extracts lexical features of different granularity from the original input text representation, entity type features from the entity annotation information of the corpus, and dependency features from the dependency trees. Meanwhile, a dimension-based attention mechanism is proposed to enrich the diversity of entity type features and enhance their discriminability. The different features enable the model to comprehensively utilize various types of information, so this paper fuses these features and trains a classifier for relation extraction. The experimental results show that the proposed model outperforms existing state-of-the-art baselines on the TACRED Revisited, Re-TACRED, and SemEval datasets, with macro-average F1 scores of 81.2%, 90.2%, and 89.4%, respectively, improving performance by 1.4%, 4.4%, and 2% on average, which indicates the effectiveness of multi-feature fusion modeling.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_31-Multi_feature_Fusion_for_Relation_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Solutions for Solving Travelling Salesman Problem in Graph Theory using African Buffalo Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140730</link>
        <id>10.14569/IJACSA.2023.0140730</id>
        <doi>10.14569/IJACSA.2023.0140730</doi>
        <lastModDate>2023-07-31T09:34:37.8700000+00:00</lastModDate>
        
        <creator>Yousef Methkal Abd Algani</creator>
        
        <subject>African buffalo optimization; solutions; travelling salesman’s problem; graph theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The African Buffalo Optimization (ABO), a metaheuristic optimization algorithm derived from a thorough study of African buffaloes, a species of African bovines, in African woods and savannahs, is proposed in this study. In its pursuit of food across the African continent, this animal demonstrates unusual intelligence, sophisticated organizational capabilities, and remarkable navigational acumen. The African Buffalo Optimization builds a mathematical model based on this animal&#39;s behaviour and uses it to tackle several benchmark symmetric Travelling Salesman Problem instances and six tough asymmetric instances from the Travelling Salesman Problem Library (TSPLIB). According to this study, buffaloes ensure the effective exploitation and exploration of the problem space through frequent communication, teamwork, and a keen memory of previous best discoveries, as well as by tapping into the herd&#39;s collective experience. The results produced by solving these TSP problems with ABO were compared to those obtained with other prominent methods. The results indicate that ABO slightly outperformed the Lin-Kernighan and HBMO solutions on the ATSP instances under investigation, with an accuracy of 99.5% compared to 87% for Lin-Kernighan and 80% for HBMO. The African Buffalo Optimization algorithm produces very competitive outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_30-Optimization_Solutions_for_Solving_Travelling_Salesman_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attacks on the Vehicle Ad-hoc Network from Cyberspace</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140729</link>
        <id>10.14569/IJACSA.2023.0140729</id>
        <doi>10.14569/IJACSA.2023.0140729</doi>
        <lastModDate>2023-07-31T09:34:37.8570000+00:00</lastModDate>
        
        <creator>Anas Alwasel</creator>
        
        <creator>Shailendra Mishra</creator>
        
        <creator>Mohammed AlShehri</creator>
        
        <subject>Vehicular Ad hoc Network (VANET); Mobile Ad hoc Network (MANET); machine learning; random forest; linear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The emergence of Vehicular Ad hoc Networks (VANETs) in 2003 brought about a significant advancement in mobile networks. VANETs enable cars on the road to communicate with each other and with the street infrastructure through a set of sensors and Intelligent Transport Systems (ITS). However, VANETs are a low-trust environment, making them vulnerable to misbehavior attacks and abnormal use. Thus, it is crucial to ensure that VANET systems and applications are secure and protected from cyber-attacks. This research aims to identify security challenges and vulnerabilities in VANETs and proposes an algorithm that checks vehicle identity, location, and speed to detect and classify suspicious behavior. The research involves a study of the structures, architecture, and applications using VANET technology, the interconnection processes between them, and the types of architecture, layers, and applications that can pose a high risk. The research also focuses on the Confidentiality, Integrity, and Availability (CIA) information security triangle and develops a program that uses machine learning to classify and analyze risks and attacks. The proposed algorithm provides security and safety for everyone on the road by identifying harmful vehicle behaviors through knowledge of vehicle location and identity. Overall, this research contributes to the development of a stable and secure vehicular ad hoc network environment, enabling the integration of VANET security with smart cities.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_29-Attacks_on_the_Vehicle_Ad_Hoc_Network_from_Cyberspace.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Visual Sentiment Prediction Model Based on Event Concepts and Object Detection Techniques in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140728</link>
        <id>10.14569/IJACSA.2023.0140728</id>
        <doi>10.14569/IJACSA.2023.0140728</doi>
        <lastModDate>2023-07-31T09:34:37.8570000+00:00</lastModDate>
        
        <creator>Yasser Fouad</creator>
        
        <creator>Ahmed M. Osman</creator>
        
        <creator>Samah A. Z. Hassan</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <creator>Ahmed M. Elshewey</creator>
        
        <subject>Sentiment Analysis (SA); visual sentiment analysis; image analysis; object recognition; event concepts; events concepts with object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Nowadays, the increasing number of smartphones has led to the immediate sharing on social media of photographs capturing current events. Since the sentimental content of pictures from social events can now be obtained from visual material, visual sentiment analysis has become a vital research topic. This research aims to establish valuable criteria for adapting a visual sentiment prediction model based on event concepts and object detection techniques. In addition to adapting the approach for predicting visual sentiments in a social network according to concept scores, and measuring the performance of the visual sentiment prediction model as accurately as possible, the approach obtains a visual summary of social event images based on the visual elements that appear in the pictures, going beyond sentiment-specific features. In this method, attributes (color, texture) are assigned to sentiments, and affective objects are discovered and used to obtain the emotions related to a picture of a social event by mapping the top predicted qualities to feelings and extracting the prevailing emotion connected with the photograph. This method is valid for a wide range of social events. The strategy also demonstrates its effectiveness on a difficult social event image collection by classifying complicated event images into positive or negative sentiments.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_28-Adaptive_Visual_Sentiment_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anti-Spoofing in Medical Employee&#39;s Email using Machine Learning Uclassify Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140727</link>
        <id>10.14569/IJACSA.2023.0140727</id>
        <doi>10.14569/IJACSA.2023.0140727</doi>
        <lastModDate>2023-07-31T09:34:37.8400000+00:00</lastModDate>
        
        <creator>Bander Nasser Almousa</creator>
        
        <creator>Diaa Mohammed Uliyan</creator>
        
        <subject>Spoofing; phishing; machine learning; Uclassify algorithm; medical employee&#39;s email</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Since the advent of COVID-19, cybersecurity in healthcare and IT has been a pressing issue. The growth of digital services and remote labor has increased cyberattacks: July 2021 alone saw 260,642 phishing emails, and 94% of employees across 12 countries experienced pandemic-era cyberattacks. Phishing attacks steal sensitive data through spam emails or spoofed legitimate websites for profit, exploiting URL, domain, page, and content variables. Simple machine-learning methods can stop phishing emails. This study discusses phishing emails and the cybersecurity of patient data and healthcare employee accounts, covering COVID-19-related email and phishing detection. This article examines a message&#39;s URL, subject, email body, and links. Uclassify classifies content, spam, and languages and automates email handling; semi-supervised machine learning dominates in healthcare. The Uclassify algorithm used multinomial Naive Bayesian (MNB) classifiers, with document class scores in [0–1]. This article compared Multinomial Naive Bayes against other algorithms in two experiments. Experiment 1 achieved an MNB accuracy of 96% on a Kaggle phishing database. Experiment 2 showed that the Multinomial Naive Bayesian system accurately predicted URL and hyperlink targets based on PhishTank data: 96.67% of URLs and 91.6% of hyperlinks were correctly identified. Both experiments focused on tokenization, lemmatization, and feature extraction (FE), and used an internal feature set (IFS) and an external feature set (EFS). MNB is more exact than earlier methods since it uses decimal digits and word frequency. Although MNB takes only binary inputs, it can detect phishing and spoofing.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_27-Anti_Spoofing_in_Medical_Employees_Email.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Drying Efficiency Through an IoT-based Direct Solar Dryer System: Integration of Web Data Logger and SMS Notification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140726</link>
        <id>10.14569/IJACSA.2023.0140726</id>
        <doi>10.14569/IJACSA.2023.0140726</doi>
        <lastModDate>2023-07-31T09:34:37.8230000+00:00</lastModDate>
        
        <creator>Joel I. Miano</creator>
        
        <creator>Michael A. Nabua</creator>
        
        <creator>Alexander R. Gaw</creator>
        
        <creator>Apple Rose B. Alce</creator>
        
        <creator>Cris Argie M. Ecleo</creator>
        
        <creator>Jewelane V. Repulle</creator>
        
        <creator>Jaafar J. Omar</creator>
        
        <subject>Arduino Uno; Internet of Thing (IoT); solar dryer system; web application portal; ESP-32; SMS notification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Various agricultural and culinary products, mostly marine foods, are dried to extend their shelf lives. In many coastal locations of the Philippines, traditional fish drying is still practiced, although studies have shown that, due to weather conditions and other factors, this technique is neither reliable nor cost-effective. The Internet of Things (IoT)-based Direct Solar Dryer System optimizes drying efficiency by combining a web data logger and an SMS notification system using Arduino Uno and ESP-32, addressing these difficulties with reliability and cost-effectiveness. The study focuses on the potential and system efficiency of drying Sardinella fish (Tamban) in Brgy. Calibunan, Agusan Del Norte, Philippines, and investigates and assesses temperature, heat index, humidity, and temperature-range alert conditions using a web application portal that serves as a remote monitoring platform for dependable data visualizations. The system delivered the expected results: the direct solar dryer was able to raise and maintain the requisite temperature to accelerate drying while keeping an acceptable relative humidity. Furthermore, the system&#39;s monitoring and notification capabilities, along with effective data collection and display via physical and remote monitoring, are supported by SMS notifications. The effectiveness of upgrading traditional sun drying with IoT technology can thus help reduce the challenges and disadvantages that fish-drying farmers have faced. With correct drying monitoring criteria, the study could serve as a model for other food products that can be dried.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_26-Optimizing_Drying_Efficiency_Through_an_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Big Data and AI in Mobile Shopping: A Study in the Context of Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140725</link>
        <id>10.14569/IJACSA.2023.0140725</id>
        <doi>10.14569/IJACSA.2023.0140725</doi>
        <lastModDate>2023-07-31T09:34:37.8230000+00:00</lastModDate>
        
        <creator>Maher Abuhamdeh</creator>
        
        <creator>Osama Qtaish</creator>
        
        <creator>Hasan Kanaker</creator>
        
        <creator>Ahmad Alshanty</creator>
        
        <creator>Nidal Yousef</creator>
        
        <creator>Abdulla Mousa AlAli</creator>
        
        <subject>Bigdata; mobile shopping; artificial intelligence; internet of things; shopping; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This study investigates the current state of mobile shopping in Jordan and the integration of big data and AI technologies in this context. A mixed-methods approach, combining qualitative and quantitative data collection techniques, was utilized to gather comprehensive insights. The survey questionnaire was distributed to 105 individuals engaged in mobile shopping in Jordan. The findings highlight the popularity of mobile shopping and the preference for mobile apps as the primary platform. Personalized product recommendations emerged as a crucial factor in enhancing the mobile shopping experience. Privacy concerns regarding data sharing were present among respondents. Trust in AI-powered virtual assistants varied, indicating the potential for leveraging AI technologies. Respondents recognized the potential of big data and AI in improving the mobile shopping experience. The study concludes that businesses can enhance mobile shopping by utilizing AI-powered virtual assistants and prioritizing data security. The findings contribute to understanding mobile shopping dynamics and provide guidance for businesses and policymakers in optimizing mobile shopping experiences and driving economic growth in Jordan&#39;s digital economy. Future research and implementation efforts are encouraged to harness the potential of big data and AI in the mobile shopping landscape.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_25-Leveraging_Big_Data_and_AI_in_Mobile_Shopping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lane Road Segmentation Based on Improved UNet Architecture for Autonomous Driving</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140724</link>
        <id>10.14569/IJACSA.2023.0140724</id>
        <doi>10.14569/IJACSA.2023.0140724</doi>
        <lastModDate>2023-07-31T09:34:37.8100000+00:00</lastModDate>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Huynh Vu Nhu Nguyen</creator>
        
        <creator>Khang Hoang Nguyen</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Local binary patterns; feature extraction; UNet; semantic segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This paper introduces a real-time workflow for implementing neural networks in the context of autonomous driving. The UNet architecture is specifically selected for road segmentation due to its strong performance and low complexity. To further improve the model&#39;s capabilities, Local Binary Convolution (LBC) is incorporated into the skip connections, enhancing feature extraction and elevating the Intersection over Union (IoU) metric. The performance evaluation of the model focuses on road detection, utilizing the IoU metric. Two datasets are used for training and validation: the widely used KITTI dataset and a custom dataset collected within the ROS2 environment. Simulation validation is performed on both datasets to assess the performance of our model. The evaluation of our model on the KITTI dataset demonstrates an impressive IoU score of 97.90% for road segmentation. Moreover, when evaluated on our custom dataset, our model achieves an IoU score of 98.88%, which is comparable to the performance of conventional UNet models. Our proposed method of reconstructing the model structure and providing input feature extraction can effectively improve the performance of existing lane road segmentation methods.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_24-Lane_Road_Segmentation_Based_on_Improved_UNet_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Lane-Keeping Controller for Autonomous Vehicles Leveraging an Integrated CNN-LSTM Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140723</link>
        <id>10.14569/IJACSA.2023.0140723</id>
        <doi>10.14569/IJACSA.2023.0140723</doi>
        <lastModDate>2023-07-31T09:34:37.7930000+00:00</lastModDate>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Phuc Phan Hong</creator>
        
        <creator>Nghi Nguyen Vinh</creator>
        
        <creator>Nguyen Nguyen Trung</creator>
        
        <creator>Khang Hoang Nguyen</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>End-to-end steering control; convolutional neural network; LSTM; nvidia model; MobileNetv2; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Representing the task of navigating a car through traffic using traditional algorithms is a complex endeavor that presents significant challenges. To overcome this, researchers have started training artificial neural networks using data from front-facing cameras, combined with corresponding steering angles. However, many current solutions focus solely on the visual information from the camera frames, overlooking the important temporal relationships between these frames. This paper introduces a novel approach to end-to-end steering control by combining a VGG16 convolutional neural network (CNN) architecture with Long Short-Term Memory (LSTM). This integrated model enables the learning of both the temporal dependencies within a sequence of images and the dynamics of the control process. Furthermore, we will present and evaluate the estimated accuracy of the proposed approach for steering angle prediction, comparing it with various CNN models including the Nvidia classic model, Nvidia model, and MobilenetV2 model when integrated with LSTM. The proposed method demonstrates superior accuracy compared to other approaches, achieving the lowest loss function. To evaluate its performance, we recorded a video and saved the corresponding steering angle results based on human perception from the robot operating system (ROS2). The videos are then split into image sequences to be smoothly fed into the processing model for training.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_23-An_Improved_Lane_Keeping_Controller_for_Autonomous_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Two-dimensional Animation for Business Law: Elements of a Valid Contract</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140722</link>
        <id>10.14569/IJACSA.2023.0140722</id>
        <doi>10.14569/IJACSA.2023.0140722</doi>
        <lastModDate>2023-07-31T09:34:37.7770000+00:00</lastModDate>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Hazira Saleh</creator>
        
        <creator>Nur Zulaiha Fadlan Faizal</creator>
        
        <creator>Shahril Parumo</creator>
        
        <subject>2D animation; business law; elements of a valid contract; teaching and learning; multimedia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Elements of a valid contract is an important topic in corporate law. Since there are so many elements and related case studies, some students have difficulty remembering all the elements. Therefore, an animation will contribute to explaining all the elements in simpler terms, help the students remember the relevant case studies, and help lecturers teach students in an easier and more interactive way. An investigation of a 2D animation design and its effectiveness for corporate law and commerce learning is presented in this article. This paper aims to examine the 2D animation principles in animated explainer videos. In addition, the objective of this research is to develop an animation and evaluate the effectiveness of the 2D animation for Business Law teaching and learning. The expected outcome of this paper is a comprehensive analysis of the 2D animation used in Business Law learning, focusing on promoting student understanding and motivation in the Business Law course through a 2D animated approach. The project collaborates with the Department of Commerce, Politeknik Melaka, Malaysia, for content expertise and testing. The Multimedia Production Process is the methodology used for the development of this research work, and the ADDIE Model is applied for the instructional design. The application is developed using Adobe After Effects, Adobe Premiere Pro, Adobe Media Encoder, and the Audacity platform. The contribution of this study is clear, as the resulting outcomes can be used as guidelines for best practices in learning styles. The implications of this study will impact teaching and learning and increase understanding. This research is expected to improve teaching delivery while also increasing user understanding and motivation to learn.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_22-Development_of_a_Two_Dimensional_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>University’s Service Delivery Improvement Through a DSS-enabled Client Feedback System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140721</link>
        <id>10.14569/IJACSA.2023.0140721</id>
        <doi>10.14569/IJACSA.2023.0140721</doi>
        <lastModDate>2023-07-31T09:34:37.7770000+00:00</lastModDate>
        
        <creator>Belen M. Tapado</creator>
        
        <creator>John Gregory M. Bola</creator>
        
        <creator>Erickson T. Salazar</creator>
        
        <creator>Zcel T. Tablizo</creator>
        
        <subject>Decision support system; client satisfaction; client feedback system; rapid application development; service delivery improvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The expansion of products and services on a global scale demands the improvement of an organization’s performance. In addition to addressing the challenges of improving product and service delivery, companies must focus not only on meeting customer expectations but also on surpassing them. Consequently, valuing the opinions of clients, giving the best client experience, and measuring client satisfaction are deemed vital not only for the company’s survival but also for gaining a competitive edge for the organizations in the wired communities. It is because of these premises that the Client Feedback System was developed in this study for the university’s service delivery improvement. This system captured the results of the Client Satisfaction Survey from School Year 2015-2016 to School Year 2020-2021. Interpretation of these captured data was made, and actions for the improvement of service delivery in each department of the university were recommended using the Decision Support System (DSS) technique. The system was created using the Rapid Application Development (RAD) method and utilized various software and technologies such as HTML, CSS, and JavaScript for the front-end development, MySQL and PHP for the back-end, and Apache as the local server of the system during its development and pilot testing.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_21-Universitys_Service_Delivery_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Local-Global Graph Convolutional Network for Depression Recognition using EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140720</link>
        <id>10.14569/IJACSA.2023.0140720</id>
        <doi>10.14569/IJACSA.2023.0140720</doi>
        <lastModDate>2023-07-31T09:34:37.7600000+00:00</lastModDate>
        
        <creator>Yu Chen</creator>
        
        <creator>Xiuxiu Hu</creator>
        
        <creator>Lihua Xia</creator>
        
        <subject>Electroencephalogram; depression recognition; Local-Global Graph Convolutional Network (LG-GCN); multilevel spatial information; brain regions; multiple graphs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Graph Convolutional Networks (GCNs) have shown remarkable capabilities in learning the topological relationships among electroencephalogram (EEG) channels for recognizing depression. However, existing GCN methods often focus on a single spatial pattern, disregarding the relevant connectivity of local functional regions and neglecting the data dependency of the original EEG data. To address these limitations, we introduce the Local-Global GCN (LG-GCN), a novel GCN inspired by brain science research, which learns the local-global graph representation of EEG. Our approach leverages discriminative features extracted from EEG signals as auxiliary information to capture dynamic multi-level spatial information between EEG channels. Specifically, the representation learning of the topological space in brain regions comprises two graphs: one for exploring augmentation information in local functional regions and another for extracting global dynamic information. The aggregation of multiple graphs enables the GCN to acquire more robust features. Additionally, we develop an Information Enhancement Module (IEM) to capture multi-dimensional fused features. Extensive experiments conducted on public datasets demonstrate that our proposed method surpasses state-of-the-art (SOTA) models, achieving an impressive accuracy of 99.30% in depression recognition.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_20-A_Local_Global_Graph_Convolutional_Network_for_Depression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Continuous Software Engineering for Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140719</link>
        <id>10.14569/IJACSA.2023.0140719</id>
        <doi>10.14569/IJACSA.2023.0140719</doi>
        <lastModDate>2023-07-31T09:34:37.7470000+00:00</lastModDate>
        
        <creator>Suzanna</creator>
        
        <creator>Sasmoko</creator>
        
        <creator>Ford Lumban Gaol</creator>
        
        <creator>Tanty Oktavia</creator>
        
        <subject>Continuous software engineering; augmented reality; method in software engineering; continuous planning; continuous analysis; continuous design; continuous programming; continuous integration; continuous maintenance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Continuous software engineering is a new trend that has attracted increasing attention from the research community in recent years. In software engineering there are “continuous” stages that are used depending on the number of artifact repositories, such as databases, metadata, virtual machines, networks and servers, various logs, and reports. Augmented Reality (AR) technology is currently growing rapidly, and we can find it in various fields of life, but unfortunately continuous software engineering for Augmented Reality has not yet been established. The methods shown in previous research are general software engineering methods, so a theory of continuous software engineering for AR is needed, considering that AR is not just an ordinary application: there are 3D elements and specific components that must be met before it can be called AR. The main idea behind this research is to find a continuous pattern in the stages of the existing methods. For example, in general the stages of system development are planning, analysis, design, implementation, and maintenance. After the application has been built, does the process finish there? As we know, software always grows and develops according to human needs. Therefore, there are continuous stages that must be patterned so that the life cycle process can be maintained. In this paper we present our initial findings about the stages of continuous software engineering, namely continuous planning, continuous analysis, continuous design, continuous programming, continuous integration, and continuous maintenance.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_19-Continuous_Software_Engineering_for_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Text Classification of Legal Consultation Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140718</link>
        <id>10.14569/IJACSA.2023.0140718</id>
        <doi>10.14569/IJACSA.2023.0140718</doi>
        <lastModDate>2023-07-31T09:34:37.7300000+00:00</lastModDate>
        
        <creator>ZuoQiang Du</creator>
        
        <subject>Text classification; legal consultation; deep learning; KP+BILSTM+ATT model; word embedding layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Practitioners in the existing traditional legal service are unable to meet the huge demand, and a large number of citizens are unable to determine the scope of their problem when they encounter infringement or require various forms of legal assistance. Based on this, an automatic classification model for legal consultation based on Deep Learning is proposed in this paper. A KP+BiLSTM+Attention model is proposed, in which a Keyword Parser is introduced to extract key information. TF-IDF and part-of-speech tagging are used to filter out the important information in the user&#39;s legal problem description. The extracted keywords are given a weight value, and the weights of the other information are set to zero. The text information is passed into two parallel word vector embedding layers. One embedding layer converts the key information into vector form and transfers the results to the fusion layer for splicing, difference, and point-wise multiplication; the outputs are then connected as residuals with the results obtained from the other embedding layer. The final results are passed to the BiLSTM+Attention model for training. The test results show that the KP+BiLSTM+Attention model significantly improves the accuracy and F1 value over the best benchmark method on text classification tasks for legal consultation. Therefore, the KP+BiLSTM+Attention method performs better in classifying legal consultation issues.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_18-Research_on_the_Text_Classification_of_Legal_Consultation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explainable Artificial Intelligence (XAI) for the Prediction of Diabetes Management: An Ensemble Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140717</link>
        <id>10.14569/IJACSA.2023.0140717</id>
        <doi>10.14569/IJACSA.2023.0140717</doi>
        <lastModDate>2023-07-31T09:34:37.7300000+00:00</lastModDate>
        
        <creator>Rita Ganguly</creator>
        
        <creator>Dharmpal Singh</creator>
        
        <subject>Explainable Artificial Intelligence (XAI); diabetes; interpretability; machine learning; chronic disease management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Machine learning determines patterns from data to expedite the process of decision making; fact-based, data-driven decisions are specified by industry specialists. Due to the continuous growth of machine learning models in healthcare, these models are becoming increasingly complex black boxes. To make ML models transparent and authentically explainable, explainable AI (XAI) approaches have come into prevalence. This research scrutinizes explainable AI and its capabilities in the Indian healthcare system for detecting diabetes. LIME and SHAP are the two libraries and packages used to implement explainable AI. The proposed approach amalgamates local and global interpretability methods, which enhances the transparency of the complex model and provides insight into its behavior. Moreover, the obtained insight could also help clinical data scientists plan a more felicitous composition of computer-aided diagnosis. XAI is important for forecasting chronic disease: in the case of chronic diabetes, strong relationships persisted in the correlations between plasma glucose and insulin, between age and pregnancies, and between class (diabetic and non-diabetic) and plasma glucose. The PIDD (PIMA Indian Diabetes Dataset) with SHAP values is used for concise dependency analysis, while LIME is applicable when anchors and feature importance are both required simultaneously. Dependency plots help physicians visualize the relationships of independent variables with the predicted disease, and a correlation heatmap is used to identify dependencies among different attributes. From an academic perspective, XAI is indispensable and expected to mature in the near future. Corresponding studies on other applicable datasets are needed to estimate the performance of the approach.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_17-Explainable_Artificial_Intelligence_XAI_for_the_Prediction_of_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Federated Learning Framework and Multi-Party Communication for Cyber-Security Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140716</link>
        <id>10.14569/IJACSA.2023.0140716</id>
        <doi>10.14569/IJACSA.2023.0140716</doi>
        <lastModDate>2023-07-31T09:34:37.7130000+00:00</lastModDate>
        
        <creator>Fahad Alqurashi</creator>
        
        <subject>Federated learning; multi-party communication; cyber-security; machine learning; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The term &quot;Internet of Things&quot; (IoT) describes a global system of electronically linked devices and sensors capable of two-way communication and data sharing. IoT provides various advantages, including improved efficiency and production and lower operating expenses. However, concern about data breaches is constantly present, since devices with sensors capture and send confidential data whose leakage might have dire effects. Hence, this research proposed a novel hybrid federated learning framework with multi-party communication (FLbMPC) to address the cyber-security challenges. The proposed approach comprises four phases: data collection and standardization, model training, data aggregation, and attack detection. The research uses the UNSW-NB15 cyber-security dataset, which was collected and standardized using the z-score normalization approach. Federated learning was used to train the local models of each IoT device with their respective subsets of data. The MPC method is used to aggregate the encrypted local models into a global model while maintaining the confidentiality of the local models. Finally, in the attack detection phase, the global model compares real-time sensor data and predicted values to identify cyber-attacks. The experiment findings show that the suggested model outperforms the current methods in terms of accuracy, precision, F-measure, and recall.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_16-A_Hybrid_Federated_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Allocation Method of Incentive Pool for Financial Management Teaching Innovation Team Based on Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140715</link>
        <id>10.14569/IJACSA.2023.0140715</id>
        <doi>10.14569/IJACSA.2023.0140715</doi>
        <lastModDate>2023-07-31T09:34:37.7000000+00:00</lastModDate>
        
        <creator>Huang Jingjing</creator>
        
        <creator>Zhang Xu</creator>
        
        <subject>Data mining; financial management; teaching innovation team; incentive pool; dynamic allocation; artificial colony</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In order to reasonably allocate the amount of the incentive pool and promote the unity of members of the financial management teaching innovation team, a dynamic allocation method of the incentive pool for the financial management teaching innovation team based on data mining is proposed. This method constructs the incentive pool allocation index system by analyzing the principles of risk and income correlation, income and contribution consistency, and individual and overall profit consistency, as well as the actual contribution of the financial management teaching innovation team, members&#39; efforts, and other factors that affect the allocation of the incentive pool. After determining the index weights, the maximum entropy model is used to establish the incentive pool function of the financial management teaching innovation team project. The incentive pool scale decision model is established according to prospect theory. After outputting the scale of the financial management teaching innovation team&#39;s incentive pool using the constructed model, the incentive pool model of the financial management teaching innovation team is obtained. Based on the asymmetric Nash negotiation model, the allocation model for the incentive pool of the financial management teaching innovation team is established, an improved artificial colony algorithm from data mining is used to solve the model, and the dynamic allocation result of the incentive pool of the financial management teaching innovation team is obtained. The experiments show that this method can effectively calculate the size of the incentive pool and allocate it. The members of the financial management teaching innovation team have a high degree of satisfaction with the allocation result of the incentive pool, with allocation satisfaction consistently fluctuating around 96%.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_15-Dynamic_Allocation_Method_of_Incentive_Pool_for_Financial_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-label Filter Feature Selection Method Based on Approximate Pareto Dominance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140714</link>
        <id>10.14569/IJACSA.2023.0140714</id>
        <doi>10.14569/IJACSA.2023.0140714</doi>
        <lastModDate>2023-07-31T09:34:37.6830000+00:00</lastModDate>
        
        <creator>Jian Zhou</creator>
        
        <creator>Yinnong Guo</creator>
        
        <subject>Approximate Pareto dominance; multi-label data; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The Pareto dominance has been applied to resolve the issue of choosing significant features from a multi-label dataset. High-dimensional labels directly result in the difficulty of forming Pareto dominance. This work proposes a multi-label feature selection approach based on the approximate Pareto dominance (MAPD) to address this issue. It maps multi-label feature selection to the problem of solving the approximate Pareto dominant solution set. By introducing an approximate parameter, it is possible to efficiently cut down on the number of features in the chosen feature subset while also raising its quality. To verify the performance of MAPD, this research compares the MAPD algorithm with alternative approaches in terms of Hamming loss, accuracy, and chosen feature size using nine publicly available multi-label datasets. The findings indicate that the MAPD method performs better in terms of classification accuracy, Hamming loss, and the number of selected features.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_14-A_Multi_label_Filter_Feature_Selection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Image Quality Evaluation of Satellite-based SAR Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140713</link>
        <id>10.14569/IJACSA.2023.0140713</id>
        <doi>10.14569/IJACSA.2023.0140713</doi>
        <lastModDate>2023-07-31T09:34:37.6830000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Michihiro Mikamo</creator>
        
        <creator>Shunsuke Onishi</creator>
        
        <subject>Image quality; synthetic aperture radar (SAR); geometric fidelity; signal to noise ratio; frequency component; saturated pixel ratio; speckle noise; optimum filter kernel size; filter function for speckle noise reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>A method for image quality evaluation of satellite-based Synthetic Aperture Radar (SAR) data is proposed. Not only geometric fidelity but also signal-to-noise ratio, frequency component, saturated pixel ratio, speckle noise, optimum filter kernel size, and its filter function are evaluated. Through experiments with imagery data of the so-called QPS-SAR_2 (Q-shu Pioneers of Space SAR, the second), all these items are evaluated, and it is confirmed that the geometric and radiometric performances are good enough. Also, the geometric fidelity of QPS-SAR_2 is compared to Sentinel-1 SAR data provided by the European Space Agency (ESA), which was obtained on the same day as the QPS-SAR_2 data acquisition.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_13-Method_for_Image_Quality_Evaluation_of_Satellite_based_SAR_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Virtual Reality Technology in the Design of Interactive Interfaces for Public Service Announcements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140712</link>
        <id>10.14569/IJACSA.2023.0140712</id>
        <doi>10.14569/IJACSA.2023.0140712</doi>
        <lastModDate>2023-07-31T09:34:37.6670000+00:00</lastModDate>
        
        <creator>Rong Hu</creator>
        
        <subject>Virtual reality; interactive interfaces; interface design; greyscaling; image pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>With the development of technology, more and more public service announcements (PSAs) are being designed with interactive interfaces. There are many different ways to interact with interactive interfaces, and using appropriate design methods can expand the impact of PSAs. The study incorporates image pre-processing methods based on virtual reality, using the cvtColor greyscale function and a median filtering method to process the images, adopts an iterative approach to the design of the camera positioning method, and then conducts performance testing of the research algorithms. The test results showed that the peak signal-to-noise ratio of the research method was 13.390 dB in the image pre-processing process and 35.635 dB on lightly shaded images; in the error test, the rotational mean error of the research method was approximately 4.2 degrees at four reference points; and in the image plane reprojection test, 70% of the points of the research method almost coincided with the original point positions. The method generated 1203 designs with 40 reference points. The experimental results show that the research method can effectively design interactive interfaces for PSAs in virtual reality environments, propose more design solutions, and achieve better positioning performance in virtual reality environments.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_12-Application_of_Virtual_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Style Transfer Method of Art Works Based on Laplace Operator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140711</link>
        <id>10.14569/IJACSA.2023.0140711</id>
        <doi>10.14569/IJACSA.2023.0140711</doi>
        <lastModDate>2023-07-31T09:34:37.6530000+00:00</lastModDate>
        
        <creator>HaiTing Jia</creator>
        
        <subject>Laplace operator; artworks; adaptive style transfer; brightness migration; convolution neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>In order to improve the image quality of artworks after style transfer, an adaptive style transfer method for artworks based on the Laplace operator is studied. Through the three steps of dilation, erosion, and multi-scale morphological enhancement, the edges of the artwork content image are enhanced. The colour and brightness of the edge-enhanced artworks are transferred, and the transfer results are input into a convolutional neural network together with the style image. Based on the improved Laplace operator, the Laplace-operator loss term of the convolutional neural network is computed, the style loss term of the style image is determined, and the total loss function is constructed. According to the determined loss function, the convolutional neural network outputs the adaptive style transfer results for the paintings. The experimental results indicate that this technique realizes adaptive style transfer of paintings: after style transfer the picture quality of the paintings is high, and the adaptive transfer of artworks can be completed within 500 ms.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_11-Adaptive_Style_Transfer_Method_of_Art_Works.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bibliometric Analysis of Research on Risks in the Poultry Farming Industry: Trends, Themes, Collaborations, and Technology Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140710</link>
        <id>10.14569/IJACSA.2023.0140710</id>
        <doi>10.14569/IJACSA.2023.0140710</doi>
        <lastModDate>2023-07-31T09:34:37.6530000+00:00</lastModDate>
        
        <creator>Kamal Imran Mohd Sharif</creator>
        
        <creator>Mazni Omar</creator>
        
        <creator>Muhammad Danial Mohd Noor</creator>
        
        <creator>Mohd Azril Ismail</creator>
        
        <creator>Mohamad Ghozali Hassan</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <subject>Poultry farming risk; poultry farming industry; bibliometric analysis; Harzing’s Publish or Perish; VOSviewer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This paper explores the risks prevalent in the poultry farming industry, drawing upon an extensive examination conducted by researchers over the past decade. Employing a bibliometric analysis approach, a comprehensive search of the Scopus database was conducted using relevant keywords related to poultry farming risk and technology utilization. The search spanned from 2002 to 2022, yielding 345 pertinent documents. This study presents an overview of the current state of publications concerning poultry farming risk and its intersection with technology utilization. It delves into citation patterns, prevalent themes, and authorship analysis, focusing on the role of technology in mitigating risks. The comprehensive citation analysis highlights the impact of technology-related studies in the field. Frequency analysis employed Microsoft Excel, while VOSviewer facilitated data visualization. Harzing&#39;s Publish or Perish software was used for citation metrics and analysis. The findings reveal a consistent increase in publications on risk in poultry farming since 2002, particularly in relation to technology utilization. The United States emerges as the most active country in this area of research, with Wageningen University from the Netherlands identified as the most prolific institution contributing significantly to risk in poultry farming research, including technology applications. The research involved 32 scholars from 70 different countries and 32 distinct institutions, reflecting the multi-authorship and multicultural nature of the research. It is important to note that this paper focuses solely on the Scopus database, while future researchers may consider alternative databases for new studies, recognizing the expanding role of technology in addressing risks in the poultry farming industry.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_10-A_Bibliometric_Analysis_of_Research_on_Risks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Modified Grey Wolf Optimizer for Identification of Unauthorized Requests in Software-defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140709</link>
        <id>10.14569/IJACSA.2023.0140709</id>
        <doi>10.14569/IJACSA.2023.0140709</doi>
        <lastModDate>2023-07-31T09:34:37.6370000+00:00</lastModDate>
        
        <creator>Aminata Dembele</creator>
        
        <creator>Elijah Mwangi</creator>
        
        <creator>Abderrahim Bouchair</creator>
        
        <creator>Kennedy K Ronoh</creator>
        
        <creator>Edwin O Ataro</creator>
        
        <subject>Software-defined networks; security; DDoS attacks; metaheuristic algorithms; Grey Wolf Optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Software Defined Networking (SDN) is utilized to centralize network control within a controller, but its reliance on a single control plane can make it vulnerable to attacks such as DDoS. This highlights the importance of developing effective security mechanisms and using proactive measures such as detection and prevention strategies to mitigate the risk of attacks. Many DDoS attack detection technologies within SDN focus on detecting and mitigating the attack once it has already reached the controller, which leads to longer exposure, diminished precision, and high overhead. In this work, we have developed an Automated Modified Grey Wolf Optimizer Algorithm (AMGWOA) to detect this malicious activity in an SDN environment and prevent the attack from reaching the controller. Our methodology involves the development of the AMGWOA, which incorporates a mechanism to block malicious requests while reducing detection time and minimizing the use of storage and data resources for detection purposes. The results obtained show that our model performs well, blocking a very large number of malicious requests in less than 1 second, outperforming the Grey Wolf Optimizer and particle swarm optimization algorithms evaluated using the same datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_9-Automated_Modified_Grey_Wolf_Optimizer_for_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Internet of Things-enabled Approach to Monitor Patients’ Health Statistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140708</link>
        <id>10.14569/IJACSA.2023.0140708</id>
        <doi>10.14569/IJACSA.2023.0140708</doi>
        <lastModDate>2023-07-31T09:34:37.6370000+00:00</lastModDate>
        
        <creator>Xi GOU</creator>
        
        <subject>Internet of things; healthcare; telemedicine; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Leveraging Internet of Things (IoT) technology in healthcare systems improves patient care, reduces costs, and increases efficiency. Enabled by IoT, telemedicine allows remote patient monitoring, vital sign tracking, and seamless data accessibility for doctors across multiple locations. This article presents a novel IoT-enabled approach that utilizes artificial neural networks with radial basis functions to detect patients&#39; positions. This real-time tracking mechanism operates even without cellular connectivity, providing timely diagnoses and treatments. Our research aims to develop a smart and cost-effective healthcare approach, revolutionizing patient care. Mathematical analysis and experiments confirm the effectiveness of our proposed method, particularly in predicting patient location for the upcoming smart healthcare solution.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_8-A_Novel_Internet_of_Things_enabled_Approach_to_Monitor_Patients_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Evaluating the Competitiveness of Human Resources in High-tech Enterprises Based on Self-organized Data Mining Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140707</link>
        <id>10.14569/IJACSA.2023.0140707</id>
        <doi>10.14569/IJACSA.2023.0140707</doi>
        <lastModDate>2023-07-31T09:34:37.6200000+00:00</lastModDate>
        
        <creator>Sun Zhixin</creator>
        
        <subject>Self-organized data mining algorithm; high-tech enterprises; human resources; competitiveness evaluation; multi-level fuzzy evaluation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>The level of human resources competitiveness of high-tech companies affects the efficiency and effectiveness of enterprises to a certain extent. To achieve sustainable development of high-tech enterprises, an evaluation method for the human resource competitiveness of high-tech enterprises based on a self-organized data mining algorithm is proposed. The fuzzy clustering algorithm is used to select five first-level indexes for the evaluation of HR competitiveness of high-tech companies, including human capital power, human resources policy incentive power, and human resources performance manifestation power, and to construct the initial evaluation indicator set. The self-organized data mining algorithm is used to identify the key attributes related to the human resource competitiveness of high-tech companies within the initial assessment indicator set, reduce the complexity of the indexes and construct the final rating index system. The multi-level fuzzy evaluation method is applied to calculate the evaluation index weights and fuzzy evaluation matrix to obtain the assessment results of the HR competitiveness of high-tech enterprises. The experimental results show that the information contribution rate of the evaluation index system constructed by this method is higher than 95%, which means it can accurately evaluate the human resource competitiveness of high-tech enterprises.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_7-A_Method_for_Evaluating_the_Competitiveness_of_Human_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech-Music Classification Model Based on Improved Neural Network and Beat Spectrum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140706</link>
        <id>10.14569/IJACSA.2023.0140706</id>
        <doi>10.14569/IJACSA.2023.0140706</doi>
        <lastModDate>2023-07-31T09:34:37.6070000+00:00</lastModDate>
        
        <creator>Chun Huang</creator>
        
        <creator>Wei HeFu</creator>
        
        <subject>Vocal music; classification model; beat spectrum; feature parameter extraction; cosine similarity; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>A speech-music classification method based on an improved neural network and the beat spectrum is proposed to achieve accurate classification of speech and music. Vocal music signals are collected and preprocessed through pre-emphasis, endpoint detection, framing, windowing and other steps. After fast Fourier transform and triangle filter processing, the Mel frequency cepstrum coefficient (MFCC) is obtained, and a discrete cosine transform is performed to obtain the signal MFCC characteristic parameters. After calculating the similarity of the feature parameters through cosine similarity, the signal similarity matrix is obtained, from which the vocal music beat spectrum is derived. The residual structure is optimized by adding Swish and max-out activation functions, respectively, between convolutional neural network layers to build residual convolution layers and deepen the number of convolution layers. Connectionist temporal classification (CTC) is used as the objective loss function and applied to the softmax layer to build a deeply optimized residual convolutional neural network for the speech-music classification model. The beat spectrum of vocal music is used as the input to the model to realize classification. The experiments show that the classification accuracy of the designed model is higher than 99%; when the iteration reaches 1200, the training loss approaches 0; when the signal-to-noise ratio is 180 dB, the sensitivity and specificity are 99.98% and 99.96%, respectively; and the running time is 0.48 seconds. The model thus achieves high classification accuracy, low training loss, and good sensitivity and specificity, and can effectively classify speech and music.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_6-Speech_Music_Classification_Model_Based_on_Improved_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Method for Enhancing the Detection of Arabic Authorship Attribution using Acoustic and Textual-based Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140705</link>
        <id>10.14569/IJACSA.2023.0140705</id>
        <doi>10.14569/IJACSA.2023.0140705</doi>
        <lastModDate>2023-07-31T09:34:37.6070000+00:00</lastModDate>
        
        <creator>Mohammed Al-Sarem</creator>
        
        <creator>Faisal Saeed</creator>
        
        <creator>Sultan Noman Qasem</creator>
        
        <creator>Abdullah M Albarrak</creator>
        
        <subject>Authorship attribution; acoustic features; fusion approach; deep learning; CNN; ResNet34</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Authorship attribution (AA) is defined as the identification of the original author of an unseen text. It has been found that the style of an author’s writing can change from one topic to another, but the author’s habits remain the same across different texts. Authorship attribution has been extensively studied for texts written in languages such as English. However, few studies have investigated Arabic authorship attribution (AAA) due to the special challenges posed by Arabic scripts. Additionally, there is a need to identify the authors of texts extracted from livestream broadcasting and recorded speeches to protect the intellectual property of these authors. This paper aims to enhance the detection of Arabic authorship attribution by extracting different features and fusing the outputs of two deep learning models. The dataset used in this study was collected from the weekly livestream and recorded Arabic sermons that are publicly available on the official website of Al-Haramain in Saudi Arabia. The acoustic, textual and stylometric features were extracted for five authors. Then, the data were pre-processed and fed into the deep learning-based models (a CNN architecture and the pre-trained ResNet34). After that, hard and soft voting ensemble methods were applied to combine the outputs of the applied models and improve the overall performance. The experimental results showed that the use of CNN with textual data obtained an acceptable performance on all evaluation metrics. The ResNet34 model with acoustic features outperformed the other models and obtained an accuracy of 90.34%. Finally, the results showed that the soft voting ensemble method enhanced the performance of AAA and outperformed the other method in terms of accuracy and precision, obtaining 93.19% and 0.9311 respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_5-Deep_Learning_based_Method_for_Enhancing_the_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Apache Spark in Riot Games: A Case Study on Data Processing and Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140704</link>
        <id>10.14569/IJACSA.2023.0140704</id>
        <doi>10.14569/IJACSA.2023.0140704</doi>
        <lastModDate>2023-07-31T09:34:37.5900000+00:00</lastModDate>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Firdous Hussain Mohammad</creator>
        
        <creator>Deepak Parashar</creator>
        
        <subject>Riot games; Apache Spark; data processing; real-time analytics; distributed computing technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>This case study examines Riot Games&#39; use of Apache Spark and its effects on data processing and analytics. Riot Games, a well-known game production studio best known for the popular online multiplayer game League of Legends, manages enormous volumes of data produced daily by millions of players. Riot Games handled and analyzed this data quickly using Apache Spark, a distributed computing technology, which yielded insightful findings and improved user experiences. This case study explores the difficulties Riot Games faced, the company&#39;s adoption of Apache Spark, its implementation, and the advantages of utilizing Spark&#39;s capabilities. We evaluated the drawbacks and advantages of adopting Spark in the gaming sector and offered suggestions for game creators wishing to embrace Spark for their data processing and real-time analytics requirements. Our study adds to the increasing body of knowledge on the use of Spark in the gaming sector and offers suggestions and insights for both game producers and researchers.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_4-Apache_Spark_in_Riot_Games_A_Case_Study_on_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Evaluation Path for English Teaching Quality: Construction of an Evaluation Model Based on Improved BPNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140702</link>
        <id>10.14569/IJACSA.2023.0140702</id>
        <doi>10.14569/IJACSA.2023.0140702</doi>
        <lastModDate>2023-07-31T09:34:37.5730000+00:00</lastModDate>
        
        <creator>Weihua Shen</creator>
        
        <creator>Wei Lu</creator>
        
        <creator>Yukun Qi</creator>
        
        <subject>Teaching quality evaluation; English; BPNN; intelligent evaluation; dragonfly optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Current intelligent evaluation methods for English teaching quality are inefficient and lack the accuracy required for effective assessment. The paper suggests an evaluation model based on an improved Back Propagation Neural Network (BPNN) to overcome these issues. First, an English teaching quality evaluation index system is built with reference to existing research, and principal component analysis is utilized to reduce the dimensionality of the index system. Then, a multi-strategy improved dragonfly optimization algorithm (IDA) is adopted to remedy the defects of the Back Propagation Neural Network. Finally, to increase the efficacy and objectivity of English teaching quality evaluation, an intelligent evaluation model based on IDA-BPNN is developed. The experimental results demonstrate that the IDA-BPNN model achieves an evaluation accuracy of 98.96%, an F1 value of 0.950 on the training set and 0.968 on the test set, and a Recall value of 0.948 on the training set and 0.966 on the test set. These indicators are all superior to the most recent state-of-the-art approaches for evaluating teaching quality. The findings thus demonstrate that the proposed model has high performance and can successfully improve the accuracy and efficiency of English teaching quality evaluation, which has a positive impact on the development of English teaching.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_2-An_Intelligent_Evaluation_Path_for_English_Teaching_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants of Medical Internet of Things Adoption in Healthcare and the Role of Demographic Factors Incorporating Modified UTAUT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140703</link>
        <id>10.14569/IJACSA.2023.0140703</id>
        <doi>10.14569/IJACSA.2023.0140703</doi>
        <lastModDate>2023-07-31T09:34:37.5730000+00:00</lastModDate>
        
        <creator>Abdulaziz Alomari</creator>
        
        <creator>Ben Soh</creator>
        
        <subject>Medical internet of things; eHealth adoption; modified UTAUT; demographics and IT adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Medical Internet of Things (mIoT) is the IoT sub-set with vast potential in healthcare. However, the adoption of eHealth solutions such as mIoT has been a critical challenge in the health sector of the Kingdom of Saudi Arabia. Therefore, this study was conducted to explore the mIoT adoption determinants in Saudi public hospitals. Methods: A total of 271 participants were recruited from public hospitals in Riyadh, and a modified UTAUT model named UTAUT-HS was developed in this study to test its relevance with respect to mIoT adoption. Results: Ten path relationships were tested in this study, out of which six showed significant results. Similarly, three variables (Computer and English Language Self-efficacy or CESE, Performance Expectancy or PE and Social Influence or SI) showed a significant direct relationship with the behavioural intention to adopt mIoT. Furthermore, CESE showed the strongest relationship and emerged as a major sub-set of Effort Expectancy (EE) for mIoT adoption. However, moderator analysis showed substantial variations between different study demographic groups. In particular, the current study findings unravelled a comparatively novel relevance of Perceived Threat to Autonomy (PTA) for mIoT adoption for clinical and non-clinical staff and for older and younger participants. Conclusion: The study concludes that UTAUT-HS is an adequate model to explain mIoT adoption in healthcare. However, it also suggests conducting future large-scale studies in KSA and elsewhere to validate the relevance of UTAUT-HS in other contexts and with much more confidence.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_3-Determinants_of_Medical_Internet_of_Things_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Virtual Local Area Network Design and Implementation for Electronic Data Interchange</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140701</link>
        <id>10.14569/IJACSA.2023.0140701</id>
        <doi>10.14569/IJACSA.2023.0140701</doi>
        <lastModDate>2023-07-31T09:34:37.5600000+00:00</lastModDate>
        
        <creator>Jhansi Bharathi Madavarapu</creator>
        
        <creator>Firdous Hussain Mohammed</creator>
        
        <creator>Shailaja Salagrama</creator>
        
        <creator>Vimal Bibhu</creator>
        
        <subject>Electronic data interchange; value added network; virtual local area network; data link layer; time division multiplexing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(7), 2023</description>
        <description>Electronic Data Interchange is a popular platform for transmitting sensitive business transactional data over local and public networks. It requires a Value Added Network to communicate local data from one endpoint to another. In this paper, we propose a Value Added Network design and deployment using a Virtual Local Area Network. The Value Added Network over a Virtual Local Area Network eases the burden of managing the network and its functional devices. The proposed network provides on-demand network paths for the data traffic of Electronic Data Interchange applications. This advanced deployment model also offers more robust security for network entities and traffic. The proposed Virtual Local Area Network has been deployed in a Cisco environment, and all the specifications have been successfully implemented and tested for optimal security for Electronic Data Interchange. The Virtual Local Area Network was deployed with four different methods: transferring packets over a backbone, using the tagging method, using Time Division Multiplexing, and using a user-defined frame field. All these deployments were completed successfully, and a secure platform for Electronic Data Interchange data exchange from one local network to another has been achieved.</description>
        <description>http://thesai.org/Downloads/Volume14No7/Paper_1-Secure_Virtual_Local_Area_Network_Design_and_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of the Drug-related Adverse Events with the Help of Electronic Health Records and Natural Language Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406148</link>
        <id>10.14569/IJACSA.2023.01406148</id>
        <doi>10.14569/IJACSA.2023.01406148</doi>
        <lastModDate>2023-07-02T14:39:45.9770000+00:00</lastModDate>
        
        <creator>Sarah Allabun</creator>
        
        <creator>Ben Othman Soufiene</creator>
        
        <subject>Natural language processing; surveillance of pharmacovigilance; drug-related adverse events</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Surveillance of pharmacovigilance, also known as drug safety surveillance, involves the monitoring and evaluation of drug-related adverse events or side effects to ensure the safe and effective use of medications. Pharmacovigilance is an essential component of healthcare systems worldwide and plays a crucial role in identifying and managing drug safety concerns. Natural language processing (NLP) can play a crucial role in surveillance activities within pharmacovigilance by analyzing and extracting information from various sources, such as clinical trial reports, electronic health records, social media, and scientific literature. It is important to note that while NLP can be a powerful tool in pharmacovigilance surveillance, it should always be used in conjunction with human expertise. NLP algorithms can assist in the identification and extraction of relevant information, but the final assessment and decision-making should involve the knowledge and judgment of trained pharmacovigilance professionals. In this paper, we intend to train and test our models using the dataset from the Medication, Indication, and Adverse Drug Events challenge. This dataset will include patient notes as well as entity categories such as Medication, Indication, and ADE, as well as various sorts of relationships between these entities. Because ADE-related information extraction is a two-stage process, the outcome of the second step (i.e., relation extraction) will be utilized to compare all models. The analysis of drug-related adverse events using electronic health records and automated approaches can considerably increase the effectiveness of ADE-related information extraction, although this depends on the methodology, data, and other aspects. Our findings can help with ADE detection and NLP research.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_148-Study_of_the_Drug_related_Adverse_Events.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Mining-based Enterprise Financial Performance Evaluation in the Context of Enterprise Digital Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406147</link>
        <id>10.14569/IJACSA.2023.01406147</id>
        <doi>10.14569/IJACSA.2023.01406147</doi>
        <lastModDate>2023-07-02T14:39:45.9430000+00:00</lastModDate>
        
        <creator>Changrong Guo</creator>
        
        <creator>Jing Xing</creator>
        
        <subject>Web crawler; text mining; IF-FDP; entropy method; financial performance; evaluation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>As enterprises gradually move towards digitalization, it is increasingly difficult to accurately evaluate changes in corporate financial performance. To improve this situation, the study uses a text mining algorithm based on the web crawler principle to extract keywords from corporate annual reports, selects representative financial performance indicators through IF-FDP, and constructs a corporate financial performance evaluation model using the entropy weighting method. Performance comparison experiments show that the area under the precision-recall curve of the proposed text mining algorithm is 0.83 and the average F-value is 0.34, both better than other algorithms. In the empirical analysis of the financial performance evaluation model, it was found that the model had the smallest absolute error of 0.3%, which was lower than that of the other models. The above results indicate that both the text mining algorithm and the performance evaluation model proposed in the study outperform the comparison algorithms and models. Therefore, the proposed performance evaluation model can be used to accurately evaluate the financial performance of enterprises and promote their development, and thus has practical application value.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_147-Text_Mining_based_Enterprise_Financial_Performance_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Media Mining to Detect Online Violent Extremism using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406146</link>
        <id>10.14569/IJACSA.2023.01406146</id>
        <doi>10.14569/IJACSA.2023.01406146</doi>
        <lastModDate>2023-06-30T12:23:04.7170000+00:00</lastModDate>
        
        <creator>Shynar Mussiraliyeva</creator>
        
        <creator>Kalamkas Bagitova</creator>
        
        <creator>Daniyar Sultan</creator>
        
        <subject>NLP; machine learning; social networks; extremism detection; textual contents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In this paper, we explore the challenging domain of detecting online extremism in user-generated content on social media platforms, leveraging the power of Machine Learning (ML). We employ six distinct ML algorithms and present a comparative analysis of their performance. Recognizing the diverse and complex nature of social media content, we probe how ML can discern extremist sentiments hidden in the vast sea of digital communication. Our study is unique, situated at the intersection of linguistics, computer science, and sociology, shedding light on how coded language and intricate networks of online communication contribute to the propagation of extremist ideologies. The goal is twofold: not only to perfect detection strategies, but also to increase our understanding of how extremism proliferates in digital spaces. We argue that equipping machine learning algorithms with the ability to analyze online content with high accuracy is crucial in the ongoing fight against digital extremism. In conclusion, our findings offer a new perspective on online extremism detection and contribute to the broader discourse on the responsible use of ML in society.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_146-Social_Media_Mining_to_Detect_Online_Violent_Extremism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Roadmap Towards Optimal Resource Allocation Approaches in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406145</link>
        <id>10.14569/IJACSA.2023.01406145</id>
        <doi>10.14569/IJACSA.2023.01406145</doi>
        <lastModDate>2023-06-30T12:23:04.7000000+00:00</lastModDate>
        
        <creator>Jiyin Zhou</creator>
        
        <subject>Internet of things; resource utilization; resource allocation; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Introducing new technologies has facilitated people&#39;s lives more than ever. As one of these emerging technologies, the Internet of Things (IoT) enables objects we handle daily to interact with each other or with humans and exchange information through the Internet by being equipped with sensors and communication technologies. IoT turns the physical world into a virtual world where heterogeneous objects and devices can be interconnected and controlled. IoT-based networks face numerous challenges, including energy and sensor transmission limitations. New technologies are needed to spread the IoT platform, optimize costs, cover heterogeneous connections, reduce power consumption, and diminish delays. Users of IoT-based systems typically use services that are integrated into these networks, and service providers supply users with on-demand services. The interrelationship between this request and response must be managed using a resource allocation strategy. Therefore, resource allocation plays a major role in these systems and networks. The allocation of resources involves matters such as how much, where, and when available resources should be provided to the user economically. Resource allocation in the IoT environment is also subject to various challenges, including maintaining the quality of service, achieving a predetermined level of service, conserving power, controlling congestion, and reducing costs. As the IoT resource allocation problem is NP-hard, many research efforts have been devoted to this topic, and various algorithms have been developed. This paper reviews the published literature on IoT resource allocation, outlining the underlying principles, the latest developments, and current trends.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_145-A_Roadmap_Towards_Optimal_Resource_Allocation_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Mechanism of the Role of Big Data Knowledge Management in the Development of Enterprise Innovation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406144</link>
        <id>10.14569/IJACSA.2023.01406144</id>
        <doi>10.14569/IJACSA.2023.01406144</doi>
        <lastModDate>2023-06-30T12:23:04.6870000+00:00</lastModDate>
        
        <creator>Guangyu Yan</creator>
        
        <creator>Rui Ma</creator>
        
        <subject>Big data knowledge management; BP neural network algorithm; enterprise innovation development; principal component analysis; particle swarm optimization algorithm; correlation analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The effectiveness and efficiency of enterprise knowledge management depend on how well the enterprise implements knowledge management. Big data technology can collect, analyse and apply the massive amount of data in an organisation to support the implementation of knowledge management. Therefore, exploring the role of big data knowledge management in the development of enterprise innovation will help enterprises to better implement knowledge management. Based on this, the study aims to propose a model for predicting big data knowledge management and enterprise innovation development for high-tech enterprises in China. The study first used Principal Component Analysis (PCA) to decrease the dimensionality of the model, and then used a BP neural network optimized by the particle swarm optimization algorithm (PSO-BP) to evaluate enterprise knowledge management and enterprise innovation development. The results of the study show that the absolute values of the relative errors of the pre-processed model do not exceed the 5% threshold, and only the relative errors of some indicators are relatively large, such as X5 and X7, with values of 4.5% and -3.8%, indicating that the model performs well in predicting the innovation effect of enterprises.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_144-The_Mechanism_of_the_Role_of_Big_Data_Knowledge_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Traffic Video Retrieval Model based on Image Processing and Feature Extraction Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406143</link>
        <id>10.14569/IJACSA.2023.01406143</id>
        <doi>10.14569/IJACSA.2023.01406143</doi>
        <lastModDate>2023-06-30T12:23:04.6700000+00:00</lastModDate>
        
        <creator>Xiaoming Zhao</creator>
        
        <creator>Xinxin Wang</creator>
        
        <subject>Matching extraction; feature fusion; image retrieval; intelligent transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Intelligent transportation is a system that combines data-driven information with traffic management to achieve intelligent monitoring and retrieval functions. To further improve the retrieval accuracy of the system model, a new retrieval model was designed. The functional requirements of the system were summarized, and the three stages of data preprocessing, feature matching, and feature extraction were analyzed in detail. The study adopted preprocessing measures such as equalization and normalization to minimize the negative effects of noise and brightness. Based on the performance of various algorithms, the distance method was selected for feature matching, as it has wider applicability and is better at processing bulk data. Next, the study used the Euclidean distance method to extract keyframes and divided feature extraction into three parts: color, shape, and texture, which were extracted with color moments, the Canny operator, and the gray-level co-occurrence matrix, respectively, ultimately achieving relevant image retrieval. The research conducted multiple experiments on the retrieval performance of the model and analyzed the results of retrieving single and mixed features. The experimental results showed that the algorithm performed better on mixed feature extraction: compared with the average value of a single feature, the recall and precision of the three mixed features increased by 13.78% and 15.64%, respectively. Moreover, under a large number of concurrent requests, the algorithm also met the basic requirements; when the concurrency was 100, the average response time was 4.46 seconds. Therefore, the proposed algorithm effectively improves video retrieval capability and meets timeliness requirements, and can be widely applied in practice.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_143-Intelligent_Traffic_Video_Retrieval_Model_Based_on_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NLPashto: NLP Toolkit for Low-resource Pashto Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406142</link>
        <id>10.14569/IJACSA.2023.01406142</id>
        <doi>10.14569/IJACSA.2023.01406142</doi>
        <lastModDate>2023-06-30T12:23:04.6530000+00:00</lastModDate>
        
        <creator>Ijazul Haq</creator>
        
        <creator>Weidong Qiu</creator>
        
        <creator>Jie Guo</creator>
        
        <creator>Peng Tang</creator>
        
        <subject>NLP; text processing; word segmentation; POS tagging; BERT; LLMs; Pashto; low-resource languages; CRF; CNNs; RNNs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In recent years, natural language processing (NLP) has transformed numerous domains, becoming a vital area of research. However, the focus of NLP studies has predominantly centered on major languages like English, inadvertently neglecting low-resource languages like Pashto. Pashto, spoken by a population of over 50 million worldwide, remains largely unexplored in NLP research, lacking off-the-shelf resources and tools even for fundamental text-processing tasks. To bridge this gap, this study presents NLPashto, an open-source and publicly accessible NLP toolkit specifically designed for Pashto. The initial version of NLPashto introduces four state-of-the-art models for Spelling Correction, Word Segmentation, Part-of-Speech (POS) Tagging, and Offensive Language Detection. The toolkit also includes essential NLP resources like pre-trained static word embeddings, Word2Vec, fastText, and GloVe. Furthermore, we have pre-trained a monolingual language model for Pashto from scratch, using the Bidirectional Encoder Representations from Transformers (BERT) architecture. For the training and evaluation of all the models, we have developed several benchmark datasets and also included them in the toolkit. Experimental results demonstrate that the models exhibit satisfactory performance in their respective tasks. This study can be a significant milestone and will hopefully support and speed up future research in the field of Pashto NLP.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_142-NLPashto_NLP_Toolkit_for_Low_resource_Pashto_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Character Representation and Application Analysis of English Language and Literature Based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406141</link>
        <id>10.14569/IJACSA.2023.01406141</id>
        <doi>10.14569/IJACSA.2023.01406141</doi>
        <lastModDate>2023-06-30T12:23:04.6370000+00:00</lastModDate>
        
        <creator>Yao Song</creator>
        
        <subject>Neural network; English; literary image; character vector; similarity calculation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The development of computer technology has promoted the continuous progress of natural language processing technology and the flourishing of ideology and culture, prompting literary workers to create a large number of literary works. This poses a new challenge for the application of natural language processing technology, through which text analysis and processing are realized. In the information society, the amount of data is increasing exponentially, and the number of literary works produced is also rapidly increasing. To gain a comprehensive understanding of domestic and foreign history and culture, some Chinese readers are not satisfied with reading only Chinese works from ancient and modern times, but also hope to read and understand foreign literary works. Current mainstream methods for literary character analysis are manual, making the results highly subjective and inefficient for large-scale literary works. To address this problem, this study proposes a character representation and analysis method based on neural networks, using English novels as an example. By preprocessing the data and utilizing word dependency relationships to represent character vectors and calculate similarity, the study uses the Skip-gram model to train character vectors and K-means for clustering. An AGA-BPNN model is proposed for character and gender prediction and classification, achieving a 95.42% accuracy rate in character prediction classification, and an average accuracy, recall, and F1 score of 0.953, 0.962, and 0.962, respectively, in gender prediction and classification. The results demonstrate the effectiveness of the method and offer a new approach to novel character analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_141-Character_Representation_and_Application_Analysis_of_English_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Mobile Robot Target Object Localization and Pose Estimation Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406140</link>
        <id>10.14569/IJACSA.2023.01406140</id>
        <doi>10.14569/IJACSA.2023.01406140</doi>
        <lastModDate>2023-06-30T12:23:04.6230000+00:00</lastModDate>
        
        <creator>Caixia He</creator>
        
        <creator>Laiyun He</creator>
        
        <subject>Mobile robot; target object localization; pose estimation; YOLOv2 network; FCN semantic segmentation network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Target object localization and pose estimation (PE) are two key technologies in robotic object grasping, and the addition of a robotic vision system can dramatically enhance the flexibility and accuracy of grasping. Considering the limited computing power and memory resources of the embedded platform, the study optimizes the classical convolutional structure in the target detection network and replaces the original anchor frame mechanism with an adaptive anchor frame mechanism combined with the fused depth map. For evaluating the target’s pose, the smooth plane of its surface is identified using the semantic segmentation network, and the target’s pose information is obtained by solving the normal vector of the plane, so that the robotic arm can attach to the object surface along the direction of the plane normal vector to grasp the target. The adaptive anchor frame maintains an average accuracy of 85.75% even when the number of anchor frames is increased, demonstrating its robustness to overfitting. The detection accuracy of the target localization algorithm is 98.8%, the accuracy of the PE algorithm is 74.32%, and the operation speed reaches 25 frames/s, satisfying the requirements of real-time physical grasping. Based on the vision algorithm of the study, physical grasping experiments were carried out; the success rate of object grasping was above 75%, which effectively verifies the practicability of the approach.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_140-Deep_Learning_based_Mobile_Robot_Target_Object_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Intelligent Evaluation Method with Deep Learning in Calligraphy Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406139</link>
        <id>10.14569/IJACSA.2023.01406139</id>
        <doi>10.14569/IJACSA.2023.01406139</doi>
        <lastModDate>2023-06-30T12:23:04.6070000+00:00</lastModDate>
        
        <creator>Yu Wang</creator>
        
        <subject>Deep learning; calligraphy teaching; BPNN; intelligent evaluation; sparrow search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Scientific and effective teaching quality evaluation (QE) is helpful for improving teaching modes and teaching quality. At present, calligraphy teaching (CT) QE methods are few in number and have poor evaluation effect. Aiming at these problems, deep learning (DL) is introduced to realize intelligent evaluation of CT quality. First, based on relevant research, the CTQE indicator system is constructed. Secondly, rough set theory and principal component analysis (PCA) are used to reduce the dimension of the CTQE index system and extract four common factors. Then, the corresponding index data is input into a BP neural network (BPNN) model optimized by the improved sparrow search algorithm for fitting. Finally, combining the above, the improved sparrow search algorithm (ISSA)-BPNN model is built to realize intelligent evaluation of CT quality. The experimental results show that the loss value of the ISSA-BPNN model is 0.21 and the fitting degree on CT data is 0.953. The evaluation accuracy is 95%, precision is 0.945, recall is 0.923, F1 is 0.942, and AUC is 0.967. These values are superior to those of the most advanced teaching QE models available. The ISSA-BPNN CTQE model proposed in the study has excellent performance in CTQE. This is of positive significance for improving teaching quality and students&#39; calligraphy level.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_139-The_Application_of_Intelligent_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Unsupervised Segmentation Techniques on Long Wave Infrared Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406138</link>
        <id>10.14569/IJACSA.2023.01406138</id>
        <doi>10.14569/IJACSA.2023.01406138</doi>
        <lastModDate>2023-06-30T12:23:04.5900000+00:00</lastModDate>
        
        <creator>Mohammed Abuhussein</creator>
        
        <creator>Aaron L. Robinson</creator>
        
        <creator>Iyad Almadani</creator>
        
        <subject>Unsupervised segmentation; thermal images; texture analysis; pixel labeling; Gabor; GMM; image analysis; K-Means; MRF; Otsu’s; DNN; region-based clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This paper studies the different unsupervised segmentation algorithms that have been proposed and their efficacy on thermal images. The scope of this research is to develop a generalized approach to blindly segment urban thermal imagery to assist the system in identifying regions by shape instead of pixel values. Most methods can be classified as thresholding, edge-based, region-based, clustering, or texture analysis. We explain how each method works before applying the methods of interest to thermal images of 8-bit and 16-bit resolution and evaluating their performance. The evaluation section discusses where each method succeeded, where it failed, and how its performance can be enhanced. Finally, we study the time complexity of each method to assess the feasibility of implementing a fast and generalized method of pixel labeling.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_138-Review_of_Unsupervised_Segmentation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Encryption Algorithm for Information Security in Hadoop</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406137</link>
        <id>10.14569/IJACSA.2023.01406137</id>
        <doi>10.14569/IJACSA.2023.01406137</doi>
        <lastModDate>2023-06-30T12:23:04.5770000+00:00</lastModDate>
        
        <creator>Youness Filaly</creator>
        
        <creator>Fatna El mendili</creator>
        
        <creator>Nisrine Berros</creator>
        
        <creator>Younes El Bouzekri EL idrissi</creator>
        
        <subject>Hadoop distributed file system (HDFS); big data security; data encryption; data decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Network security has gained importance in recent years. Information system security is greatly aided by the development of encryption as a solution. Several strategies are required to safeguard shared information. Thanks to the cutting-edge Internet, networking corporations, health information, and cloud applications, our data is growing exponentially every minute. In order to handle enormous amounts of data efficiently, the Hadoop distributed file system (HDFS) was created. However, HDFS doesn’t come with any built-in data encryption tools, which poses serious security risks. Encryption techniques have been established to increase data security; nevertheless, standard algorithms fall short when dealing with bigger files. In this study, big data is secured using a novel hybrid encryption algorithm that combines CP-ABE (ciphertext-policy attribute-based encryption), AES (Advanced Encryption Standard), and RSA (Rivest-Shamir-Adleman). The suggested model’s performance is compared against that of traditional encryption algorithms such as DES, 3DES, and Blowfish in order to demonstrate improved performance in terms of decryption time, encryption time, and throughput. The results of the studies demonstrate that our suggested algorithm is more secure.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_137-Hybrid_Encryption_Algorithm_for_Information_Security_in_Hadoop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Vulnerabilities’ Detection by Analysing Application Execution Traces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406136</link>
        <id>10.14569/IJACSA.2023.01406136</id>
        <doi>10.14569/IJACSA.2023.01406136</doi>
        <lastModDate>2023-06-30T12:23:04.5600000+00:00</lastModDate>
        
        <creator>Gouayon Koala</creator>
        
        <creator>Didier Bassolé</creator>
        
        <creator>Telesphore Tiendrebeogo</creator>
        
        <creator>Oumarou Sié</creator>
        
        <subject>Execution traces; events; vulnerability detection; malware; applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Over the years, digital traces have proven to be significant for analyzing IT systems, including applications. With the persistent threats arising from the widespread proliferation of malware and the evasive techniques employed by cybercriminals, researchers and application vendors alike are concerned about finding effective solutions. In this article, we assess a hybrid approach to detecting software vulnerabilities based on analyzing traces of application execution. To accomplish this, we initially extract permissions and features from manifest files. Subsequently, we employ a tracer to extract events from each running application, utilizing a set of elements that indicate the behavior of the application. These events are then recorded in a trace. We convert these traces into features that can be utilized by machine learning algorithms. Finally, to identify vulnerable applications, we train these features using six machine learning algorithms (KNN, Random Forest, SVM, Naive Bayes, Decision Tree-CART, and MLP). The selection of these algorithms is based on the outcomes of several preliminary experiments. Our results indicate that the SVM algorithm produces the best performance, followed by Random Forest, achieving an accuracy of 98% for malware detection and 96% for benign applications. These findings demonstrate the relevance and utility of analyzing real application behavior through event analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_136-Software_Vulnerabilities_Detection_by_Analysing_Application_Execution_Traces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microbial Biomarkers Identification for Human Gut Disease Prediction using Microbial Interaction Network Embedded Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406135</link>
        <id>10.14569/IJACSA.2023.01406135</id>
        <doi>10.14569/IJACSA.2023.01406135</doi>
        <lastModDate>2023-06-30T12:23:04.5430000+00:00</lastModDate>
        
        <creator>Anushka Sivakumar</creator>
        
        <creator>Syama K</creator>
        
        <creator>J. Angel Arul Jothi</creator>
        
        <subject>Biomarker discovery; microbial interaction network; graph embedding feature selection; inflammatory bowel disease; colorectal cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Human gut microorganisms are crucial in regulating the immune system. Disruption of the healthy relationship between the gut microbiota and gut epithelial cells leads to the development of diseases. Inflammatory Bowel Disease (IBD) and Colorectal Cancer (CRC) are gut-related disorders with complex pathophysiological mechanisms. With the massive availability of microbiome data, computer-aided microbial biomarker discovery for IBD and CRC is becoming common. However, microbial interactions were not considered by many of the existing biomarker identification methods. Hence, in this study, we aim to construct a microbial interaction network (MIN). The MIN accounts for the associations formed and interactions among microbes and hosts. This work explores graph embedding feature selection through the construction of a sparse MIN using MAGMA embedded into a deep feedforward neural network (DFNN). This aims to reduce dimensionality and select prominent features that form the disease biomarkers. The selected features are passed through a deep forest classifier for disease prediction. The proposed methodology is experimentally cross-validated (5-fold) with different classifiers, existing works, and different models of MIN embedded in DFNN for the IBD and CRC datasets. Also, the selected biomarkers are verified against biological studies for the IBD and CRC datasets. The highest achieved AUC, accuracy, and f1-score are 0.863, 0.839, and 0.897, respectively, for the IBD dataset and 0.837, 0.768, and 0.757, respectively, for the CRC dataset. As observed, the proposed method is successful in selecting a subset of informative and prominent biomarkers for IBD and CRC.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_135-Microbial_Biomarkers_Identification_for_Human_Gut_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Image Generation from Bangla Textual Description using DCGAN and Bangla FastText</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406134</link>
        <id>10.14569/IJACSA.2023.01406134</id>
        <doi>10.14569/IJACSA.2023.01406134</doi>
        <lastModDate>2023-06-30T12:23:04.5430000+00:00</lastModDate>
        
        <creator>Noor Mairukh Khan Arnob</creator>
        
        <creator>Nakiba Nuren Rahman</creator>
        
        <creator>Saiyara Mahmud</creator>
        
        <creator>Md. Nahiyan Uddin</creator>
        
        <creator>Rashik Rahman</creator>
        
        <creator>Aloke Kumar Saha</creator>
        
        <subject>Bangla Text-to-Face Synthesis; Natural Language Processing (NLP); Computer Vision (CV); GAN; Text encoders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The synthesis of facial images from textual descriptions is a relatively difficult subfield of text-to-image synthesis. It is applicable in various domains like Forensic Science, Game Development, Animation, Digital Marketing, and Metaverse. However, no work was found that generates facial images from textual descriptions in Bangla, the 5th most spoken language in the world. This research introduces the first-ever system to generate facial images from Bangla textual descriptions. The proposed model comprises two fundamental constituents, namely a textual encoder and a Generative Adversarial Network (GAN). The text encoder is a pre-trained Bangla text encoder named Bangla FastText, which is employed to transform Bangla text into a latent vector representation. The utilization of Deep Convolutional GAN (DCGAN) allows for the generation of face images that correspond to the text embedding. Furthermore, a Bangla version of the CelebA dataset, CelebA Bangla, is created for this study to develop the proposed system. CelebA Bangla contains images of celebrities, their corresponding annotated Bangla facial attributes, and Bangla textual descriptions generated using a novel description generation algorithm. The proposed system attained a Fréchet Inception Distance (FID) score of 126.708, an Inception Score (IS) of 12.361, and a Face Semantic Distance (FSD) of 20.23. The novel text embedding strategy used in this study outperforms prior work. A thorough qualitative and quantitative analysis demonstrates the superior performance of the proposed system over other experimental systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_134-Facial_Image_Generation_from_Bangla_Textual_Description.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Variational AutoEncoder Approach for the Purpose of Deblurring Bangla License Plate Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406133</link>
        <id>10.14569/IJACSA.2023.01406133</id>
        <doi>10.14569/IJACSA.2023.01406133</doi>
        <lastModDate>2023-06-30T12:23:04.5300000+00:00</lastModDate>
        
        <creator>Md. Siddiqure Rahman Tusher</creator>
        
        <creator>Nakiba Nuren Rahman</creator>
        
        <creator>Shabnaz Chowdhury</creator>
        
        <creator>Anika Tabassum</creator>
        
        <creator>Md. Akhtaruzzaman Adnan</creator>
        
        <creator>Rashik Rahman</creator>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <subject>Image deblur; bangla license plate deblur; Variational AutoEncoder (VAE); computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Automated License Plate Detection and Recognition (ALPDR) is a well-studied area of computer vision and a crucial activity in a variety of applications, including surveillance, law enforcement, and traffic management. Such a system plays a key role in the investigation of vehicle-related offensive activities. When an input image or video frame travels through an ALPDR system for license plate detection, the detected license plate is frequently blurry due to the fast motion of the vehicle or low-resolution input. Images of license plates that are blurred or distorted can reduce the accuracy of ALPDR systems. In this paper, a novel Variational AutoEncoder (VAE) architecture is proposed for deblurring license plates. In addition, a dataset of obscured license plate images and corresponding ground truth images is proposed and used to train the novel VAE model. This dataset comprises 3788 image pairs, in which the train, test, and validation sets contain 2841, 568, and 379 pairs of images, respectively. Upon completion of the training process, the model undergoes an evaluation procedure utilizing the validation set, where it achieved an SSIM value of 0.934 and a PSNR value of 32.41. In order to assess the efficacy of our proposed VAE model, a comparison with contemporary deblurring techniques is presented in the results section. In terms of both quantitative metrics and the visual quality of the deblurred images, the experimental results indicate that our proposed method outperforms the other state-of-the-art deblurring methods. Therefore, it enhances the precision and dependability of an ALPDR system.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_133-An_Enhanced_Variational_AutoEncoder_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Uncertainty-Aware Traffic Prediction using Attention-based Deep Hybrid Network with Bayesian Inference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406132</link>
        <id>10.14569/IJACSA.2023.01406132</id>
        <doi>10.14569/IJACSA.2023.01406132</doi>
        <lastModDate>2023-06-30T12:23:04.5130000+00:00</lastModDate>
        
        <creator>Md. Moshiur Rahman</creator>
        
        <creator>Abu Rafe Md Jamil</creator>
        
        <creator>Naushin Nower</creator>
        
        <subject>Traffic flow prediction; uncertainty; deep learning; Bayesian inference; Dhaka city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Traffic congestion has an adverse impact on the economy and quality of life, and thus accurate traffic flow forecasting is critical for reducing congestion and enhancing transportation management. Recently, hybrid deep-learning approaches have shown promising contributions to prediction by handling various dynamic traffic features. Existing methods, however, frequently neglect the uncertainty associated with traffic estimates, resulting in inefficient decision-making and planning. To overcome these issues, this research presents an attention-based deep hybrid network with Bayesian inference. The suggested approach assesses the uncertainty associated with traffic projections and gives probabilistic estimates by applying Bayesian inference. The attention mechanism improves the ability of the model to detect unexpected situations that disrupt traffic flow. The proposed method is tested using real-world traffic data from Dhaka city, and the findings show that it outperforms other cutting-edge approaches on real-world traffic statistics.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_132-Uncertainty_Aware_Traffic_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Diagnosing Alzheimer’s Disease from MRI Scans using the ResNet50 Feature Extractor and the SVM Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406131</link>
        <id>10.14569/IJACSA.2023.01406131</id>
        <doi>10.14569/IJACSA.2023.01406131</doi>
        <lastModDate>2023-06-30T12:23:04.4970000+00:00</lastModDate>
        
        <creator>Farhana Islam</creator>
        
        <creator>Md. Habibur Rahman</creator>
        
        <creator>Nurjahan</creator>
        
        <creator>Md. Selim Hossain</creator>
        
        <creator>Samsuddin Ahmed</creator>
        
        <subject>Alzheimer’s disease; brain images; machine learning; deep learning; brain disorder; ADNI dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Alzheimer’s disease (AD), a chronic neurodegenerative brain disorder caused by the accumulation of abnormal proteins called amyloid, is one of the prominent causes of mortality worldwide. Since there is a scarcity of experienced neurologists, manual diagnosis of AD is very time-consuming and error-prone. Hence, automatic diagnosis of AD draws significant attention nowadays. Machine learning (ML) algorithms such as deep learning are widely used to support early diagnosis of AD from magnetic resonance imaging (MRI). However, they provide better accuracy in binary classification, which is not the case with multi-class classification. On the other hand, AD consists of a number of early stages, and accurate detection of them is necessary. Hence, this research focuses on how to support the multi-stage classification of AD, particularly in its early stage. After the MRI scans have been preprocessed (through median filtering and watershed segmentation), benchmark pre-trained convolutional neural network (CNN) models (AlexNet, VGG16, VGG19, ResNet18, ResNet50) carry out automatic feature extraction. Then, principal component analysis is used to optimize features. Conventional machine learning classifiers (Decision Tree, K-Nearest Neighbors, Support Vector Machine, Linear Programming Boost, and Total Boost) are deployed using the optimized features for staging AD. We have exploited the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, consisting of AD, mild cognitive impairment (MCI), and cognitively normal (CN) classes of images. In our experiment, the SVM classifier performed better with the extracted ResNet50 features, achieving multi-class classification accuracy of 99.78% during training, 99.52% during validation, and 98.71% during testing. Our approach is distinctive because it combines the advantages of deep feature extractors, conventional classifiers, and feature optimization.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_131-A_Novel_Method_for_Diagnosing_Alzheimers_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Identification of a Multilayer Perceptron Neural Network using an Optimized Salp Swarm Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406130</link>
        <id>10.14569/IJACSA.2023.01406130</id>
        <doi>10.14569/IJACSA.2023.01406130</doi>
        <lastModDate>2023-06-30T12:23:04.4670000+00:00</lastModDate>
        
        <creator>Mohamad Al-Laham</creator>
        
        <creator>Salwani Abdullah</creator>
        
        <creator>Mohammad Atwah Al-Ma’aitah</creator>
        
        <creator>Mohammed Azmi Al-Betar</creator>
        
        <creator>Sofian Kassaymeh</creator>
        
        <creator>Ahmad Azzazi</creator>
        
        <subject>Software development effort estimation; machine learning; multilayer perceptron neural network; salp swarm algorithm; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Effort estimation in software development (SEE) is a crucial concern within the software engineering domain, as it directly impacts cost estimation, scheduling, staffing, planning, and resource allocation accuracy. In this article, the authors aim to tackle this issue by integrating machine learning (ML) techniques with metaheuristic algorithms in order to improve prediction accuracy. For this purpose, they employ a multilayer perceptron neural network (MLP) to perform the estimation for SEE. However, the MLP network has numerous drawbacks, including weight dependency, premature convergence, and accuracy limits. To address these issues, the salp swarm algorithm (SSA) is employed to optimize the MLP weights and biases. At the same time, the SSA algorithm has shortcomings in some aspects of its search mechanisms, such as premature convergence and susceptibility to the local optimum trap. As a result, the genetic algorithm (GA) is utilized to address these shortcomings by fine-tuning its parameters. The main objective is to develop a robust and reliable prediction model that can handle a wide range of SEE problems. The developed techniques are tested on twelve benchmark SEE datasets to evaluate their performance. Furthermore, a comparative analysis with state-of-the-art methods is conducted to further validate the effectiveness of the developed techniques. The findings demonstrate that the developed techniques surpass all other methods on all benchmark problems, affirming their superiority.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_130-Parameter_Identification_of_a_Multilayer_Perceptron_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Classification Approach for Grape Leaf Disease Detection Based on Different Attention Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406128</link>
        <id>10.14569/IJACSA.2023.01406128</id>
        <doi>10.14569/IJACSA.2023.01406128</doi>
        <lastModDate>2023-06-30T12:23:04.4500000+00:00</lastModDate>
        
        <creator>S Phani Praveen</creator>
        
        <creator>Rajeswari Nakka</creator>
        
        <creator>Anuradha Chokka</creator>
        
        <creator>Venkata Nagaraju Thatha</creator>
        
        <creator>Sai Srinivas Vellela</creator>
        
        <creator>Uddagiri Sirisha</creator>
        
        <subject>Grape leaves; faster region-based convolutional neural networks; You Only Look Once-X; single shot detection; attention techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Preventing and controlling grape diseases is essential for a good grape harvest. With the help of “single shot multi-box detectors”, “faster region based convolutional neural networks”, &amp; “You only look once-X,” the study improved grape leaf disease detection accuracy with effective attention mechanisms, including the convolutional block attention module, squeeze &amp; excitation networks, &amp; efficient channel attention. The various attention techniques helped to emphasize important features while reducing the impact of irrelevant ones, which ultimately improved the precision of the models and allowed for real-time performance. As a result of examining the optimal models from the three types, it was found that the Faster R-CNN model had a lower precision value, while You only look once-X and SSD with various attention techniques required the fewest parameters with the highest precision and the best real-time performance. In addition to providing insights into grape diseases &amp; symptoms in automated agricultural production, this study offers valuable guidance for grape leaf disease detection.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_128-A_Novel_Classification_Approach_for_Grape_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting At-Risk Students’ Performance Based on LMS Activity using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406129</link>
        <id>10.14569/IJACSA.2023.01406129</id>
        <doi>10.14569/IJACSA.2023.01406129</doi>
        <lastModDate>2023-06-30T12:23:04.4500000+00:00</lastModDate>
        
        <creator>Amnah Al-Sulami</creator>
        
        <creator>Miada Al-Masre</creator>
        
        <creator>Norah Al-Malki</creator>
        
        <subject>Predict at-risk student; artificial neural network; learning management system; educational data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>It is of great importance for Higher Education (HE) institutions to continuously work on detecting at-risk students based on their performance during their academic journey, with the purpose of supporting their success and academic advancement. This is where Learning Analytics (LA) representing learners’ behaviour inside Learning Management Systems (LMS), Educational Data Mining (EDM), and Deep Learning (DL) techniques come into play as an academic sustainable pipeline, which can be used to extract meaningful predictions of learners’ future performance based on their online activity. Thus, the aim of this study is to implement a supervised learning approach which utilizes three artificial neural networks (vRNN, LSTM, and GRU) to develop models that can classify students’ final grade as Pass or Fail based on a number of LMS activity indicators; more precisely, detect failed students, who are the ones actually susceptible to risk. The three models, alongside a baseline MLP classifier, have been trained on two datasets (ELIA 101-1 and ELIA 101-2) illustrating the LMS activity and final assessment grade of 3529 students who enrolled in an English Foundation-Year course (ELIA 101) taught at King Abdulaziz University (KAU) during the first and second semesters of 2021. Results indicate that though all of the three DL models performed better than the MLP baseline, the GRU model achieved the highest classification accuracy on both datasets: 93.65% and 98.90%, respectively. As regards predicting at-risk students, all three DL models achieved Recall values of at least 81%, with notable variation in performance depending on the dataset, the highest being the GRU on ELIA 101-2.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_129-Predicting_At_Risk_Students_Performance_Based_on_LMS_Activity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Type 2 Diabetes Mellitus: Early Detection using Machine Learning Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406127</link>
        <id>10.14569/IJACSA.2023.01406127</id>
        <doi>10.14569/IJACSA.2023.01406127</doi>
        <lastModDate>2023-06-30T12:23:04.4370000+00:00</lastModDate>
        
        <creator>Gowthami S</creator>
        
        <creator>Venkata Siva Reddy</creator>
        
        <creator>Mohammed Riyaz Ahmed</creator>
        
        <subject>Diabetes Mellitus Type II; feature selection; machine learning methods; precision medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Type 2 Diabetes Mellitus (T2DM) is a growing global health problem that significantly impacts patients’ quality of life and longevity. Early detection of T2DM is crucial in preventing or delaying the onset of its associated complications. This study aims to evaluate the use of machine learning algorithms for the early detection of T2DM. A classification model is developed using a dataset of patients diagnosed with T2DM and healthy controls, incorporating feature selection techniques. The model is trained and tested with machine learning algorithms such as Logistic Regression, K-Nearest Neighbors, Decision Trees, Random Forest, and Support Vector Machines. The results showed that the Random Forest algorithm achieved the highest accuracy in detecting T2DM, with an accuracy of 98%. This high accuracy rate highlights the potential of machine learning algorithms in early T2DM detection and the importance of incorporating such methods in the clinical decision-making process. The findings of this study will contribute to the development of a more efficient precision medicine screening process for T2DM that can help healthcare providers detect the disease at its earliest stages, leading to improved patient outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_127-Type_2_Diabetes_Mellitus_Early_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MC-ABAC: An ABAC-based Model for Collaboration in Multi-Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406126</link>
        <id>10.14569/IJACSA.2023.01406126</id>
        <doi>10.14569/IJACSA.2023.01406126</doi>
        <lastModDate>2023-06-30T12:23:04.4200000+00:00</lastModDate>
        
        <creator>Mohamed Amine Madani</creator>
        
        <creator>Abdelmounaim Kerkri</creator>
        
        <creator>Mohammed Aissaoui</creator>
        
        <subject>ABAC model; multi-tenant; multi-cloud; collaboration; trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Collaborative systems allow a group of organizations to collaborate and complete shared tasks through distributed platforms. Organizations that collaborate often leverage cloud-based solutions to outsource their data and to benefit from the cloud’s capabilities. During such collaborations, tenants require access to and utilize resources held by other collaborating tenants, which are hosted across multiple cloud providers. Ensuring access control in a cloud-based collaborative application is a crucial problem that needs to be addressed, particularly in a multi-cloud environment. This paper presents the Multi-Cloud ABAC (MC-ABAC) model, an extension of the Attribute-Based Access Control (ABAC) model, suitable for ensuring secure collaboration and cross-tenant access in a multi-cloud environment. MC-ABAC introduces the notions of tenant, cloud customer, and cloud service provider as fundamental entities within the model. Additionally, it incorporates multiple trust relations to enable collaboration and resource sharing among tenants in the multi-cloud environment. To demonstrate its feasibility, we have implemented the MC-ABAC model in Python.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_126-MC_ABAC_An_ABAC_based_Model_for_Collaboration_in_Multi_Cloud_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hate Speech Detection in Bahasa Indonesia: Challenges and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406125</link>
        <id>10.14569/IJACSA.2023.01406125</id>
        <doi>10.14569/IJACSA.2023.01406125</doi>
        <lastModDate>2023-06-30T12:23:04.4030000+00:00</lastModDate>
        
        <creator>Endang Wahyu Pamungkas</creator>
        
        <creator>Divi Galih Prasetyo Putri</creator>
        
        <creator>Azizah Fatmawati</creator>
        
        <subject>Abusive language; hate speech detection; machine learning; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This study aims to provide an overview of the current research on detecting abusive language in Indonesian social media. The study examines existing datasets, methods, and challenges and opportunities in this field. The research found that most existing datasets for detecting abusive language were collected from social media platforms such as Twitter, Facebook, and Instagram, with Twitter being the most commonly used source. The study also found that hate speech is the most researched type of abusive language. Various models, including traditional machine learning and deep learning approaches, have been implemented for this task, with deep learning models showing more competitive results. However, the use of transformer-based models is less popular in Indonesian hate speech studies. The study also emphasizes the importance of exploring more diverse phenomena, such as islamophobia and political hate speech. Additionally, the study suggests crowdsourcing as a potential solution for the annotation approach for labeling datasets. Furthermore, it encourages researchers to consider code-mixing issues in abusive language datasets in Indonesia, as it could improve the overall model performance for detecting abusive language in Indonesian data. The study also suggests that the lack of effective regulations and the anonymity afforded to users on most social networking sites, as well as the increasing number of Twitter users in Indonesia, have contributed to the rising prevalence of hate speech in Indonesian social media. The study also notes the importance of considering code-mixed language, out-of-vocabulary words, grammatical errors, and limited context when working with social media data.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_125-Hate_Speech_Detection_in_Bahasa_Indonesia_Challenges_and_Opportunities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Night time Object Detection in Driver-Assistance Systems using Thermal Vision and YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406124</link>
        <id>10.14569/IJACSA.2023.01406124</id>
        <doi>10.14569/IJACSA.2023.01406124</doi>
        <lastModDate>2023-06-30T12:23:04.3870000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Driver-assistance systems; object detection; nighttime object detection; thermal vision; YOLOv5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Driver-assistance systems have become an indispensable component of modern vehicles, serving as a crucial element in enhancing safety for both drivers and passengers. Among the fundamental aspects of these systems, object detection stands out, posing significant challenges in low-light scenarios, particularly during nighttime. In this research paper, we propose an innovative and advanced approach for detecting objects during nighttime in driver-assistance systems. Our proposed method leverages thermal vision and incorporates You Only Look Once version 5 (YOLOv5), which demonstrates promising results. The primary objective of this study is to comprehensively evaluate the performance of our model, which utilizes a combination of stochastic gradient descent (SGD) and Adam optimizer. Moreover, we explore the impact of different activation functions, including SiLU, ReLU, Tanh, LeakyReLU, and Hardswish, on the efficiency of nighttime object detection within a driver assistance system that utilizes thermal imaging. To assess the effectiveness of our model, we employ standard evaluation metrics including precision, recall, and mean average precision (mAP), commonly used in object detection systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_124-Advanced_Night_time_Object_Detection_in_Driver_Assistance_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmanned Aerial Vehicle-based Applications in Smart Farming: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406123</link>
        <id>10.14569/IJACSA.2023.01406123</id>
        <doi>10.14569/IJACSA.2023.01406123</doi>
        <lastModDate>2023-06-30T12:23:04.3730000+00:00</lastModDate>
        
        <creator>El Mehdi Raouhi</creator>
        
        <creator>Mohamed Lachgar</creator>
        
        <creator>Hamid Hrimech</creator>
        
        <creator>Ali Kartit</creator>
        
        <subject>Artificial intelligence; internet of things; sensor; big data; cloud; unmanned aerial vehicle; smart farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>On one hand, the emergence of cutting-edge technologies like AI, Cloud Computing, and IoT holds immense potential in Smart Farming and Precision Agriculture. These technologies enable real-time data collection, including high-resolution crop imagery, using Unmanned Aerial Vehicles (UAVs). Leveraging these advancements can revolutionize agriculture by facilitating faster decision-making, cost reduction, and increased yields. Such progress aligns with precision agriculture principles, optimizing practices for the right locations, times, and quantities. On the other hand, integrating UAVs in Smart Farming faces obstacles related to technology selection and deployment, particularly in data acquisition and image processing. The relative novelty of UAV utilization in Precision Agriculture contributes to the lack of standardized workflows. Consequently, the widespread adoption and implementation of UAV technologies in farming practices are hindered. This paper addresses these challenges by conducting a comprehensive review of recent UAV applications in Precision Agriculture. It explores common applications, UAV types, data acquisition techniques, and image processing methods to provide a clear understanding of each technology’s advantages and limitations. By gaining insights into the advantages and challenges associated with UAV-based applications in Precision Agriculture, this study aims to contribute to the development of standardized workflows and improve the adoption of UAV technologies.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_123-Unmanned_Aerial_Vehicle_based_Applications_in_Smart_Farming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PaddyNet: An Improved Deep Convolutional Neural Network for Automated Disease Identification on Visual Paddy Leaf Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406122</link>
        <id>10.14569/IJACSA.2023.01406122</id>
        <doi>10.14569/IJACSA.2023.01406122</doi>
        <lastModDate>2023-06-30T12:23:04.3570000+00:00</lastModDate>
        
        <creator>Petchiammal A</creator>
        
        <creator>Murugan D</creator>
        
        <creator>Briskline Kiruba S</creator>
        
        <subject>Image annotation; data augmentation; deep learning; paddy leaf disease detection; PaddyNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Timely disease diagnosis in paddy is fundamental to preventing yield losses and ensuring an adequate supply of rice for a rapidly rising worldwide population. Recent advancements in deep learning have helped overcome the limitations of unsupervised learning methods. This paper proposes a novel PaddyNet model for enhanced accuracy in paddy leaf disease detection. The PaddyNet model, developed using 17 layers, captures and models patterns of different disease symptoms present in paddy leaf images. The effectiveness of the novel model is verified on a large dataset comprising 16,225 paddy leaf images across 13 classes, including a normal class and 12 disease classes. The performance results show that the new PaddyNet model classifies paddy leaf disease images effectively with 98.99% accuracy and a dropout value of 0.4.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_122-PaddyNet_An_Improved_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SbChain+: An Enhanced Snowball-Chain Approach for Detecting Communities in Social Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406121</link>
        <id>10.14569/IJACSA.2023.01406121</id>
        <doi>10.14569/IJACSA.2023.01406121</doi>
        <lastModDate>2023-06-30T12:23:04.3570000+00:00</lastModDate>
        
        <creator>Jayati Gulati</creator>
        
        <creator>Muhammad Abulaish</creator>
        
        <subject>Clique; clustering; community detection; graph mining; snowball sampling; social network analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In this paper, we present the snowball-chain (SbChain+) approach, an improved version of the SbChain community detection method in terms of the precision with which communities are identified in a social graph. It exploits the topology of a social graph in terms of the connections of a node, i.e., its degree centrality, betweenness centrality, and the number of links within its neighborhood defined by the local clustering coefficient. Two different functions have been used to identify neighbors for a given node; hence, two approaches are discussed with their pros and cons. In general, SbChain+ takes a social graph as input and aims to identify communities around the core nodes in the underlying network. The core nodes are expected to have a high degree and densely connected neighbors, and they guide the identification of cliques in the graph. The proposed approach takes its inspiration from the snowball sampling technique and keeps merging nodes with their neighboring nodes based on certain criteria to form snowballs. The first function (SbChain+(i)) uses a hyperparameter, λ, for merging snowballs, which further leads to the formation of communities. This hyperparameter also helps in achieving the desired level of coarseness in the communities and can be adjusted to fine-tune the identified communities. The second function (SbChain+(ii)) uses an average out-degree function to merge snowballs. The modularity values are calculated at each level of the dendrogram formed by combining nodes and snowballs to decide an appropriate cut for community determination. SbChain+ is empirically evaluated using these two functions over both real-world and LFR-benchmark datasets, and results are evaluated on modularity and normalized mutual information. The aim of this study is to improve upon the previously discussed technique (SbChain) and to study the use of the hyperparameter, i.e., the performance of the technique with and without it.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_121-SbChain_An_Enhanced_Snowball_Chain_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-dimensional Data Aggregation Scheme Supporting Fault-Tolerant Mechanism in Smart Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406120</link>
        <id>10.14569/IJACSA.2023.01406120</id>
        <doi>10.14569/IJACSA.2023.01406120</doi>
        <lastModDate>2023-06-30T12:23:04.3400000+00:00</lastModDate>
        
        <creator>Yong Chen</creator>
        
        <creator>Feng Wang</creator>
        
        <creator>Li Xu</creator>
        
        <creator>Zhongming Huang</creator>
        
        <subject>Cryptography; fault tolerance; privacy; multi-dimensional data aggregation; encryption; smart grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>With the large-scale deployment of smart grids, schemes for smart grid data aggregation have gradually proliferated in recent years. To protect user privacy, existing schemes usually introduce a trusted third party (TTP) to participate in the collaboration. However, this also increases the risk of privacy exposure, as an attacker can target the TTP that provides services to smart grid operators. In addition, many existing schemes do not take into account the operational requirements of smart meters in case of failure. Furthermore, some schemes ignore the control center’s demand for analyzing multi-dimensional data, which causes considerable inconvenience in actual operation. Therefore, a fault-tolerant multi-dimensional data aggregation scheme is proposed in this paper. We construct a scheme without TTP participation that also meets two requirements: it not only ensures the normal operation of the system when a smart meter fails but also meets the control center’s requirements for multi-dimensional data analysis. Security analysis shows that the proposed scheme can resist external attacks, internal attacks, and collusion attacks. The experimental results show that the proposed scheme improves fault tolerance and reduces the computational cost compared with existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_120-Multi_dimensional_Data_Aggregation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Semantic Segmentation using Residual U-Net++ Encoder-Decoder Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406119</link>
        <id>10.14569/IJACSA.2023.01406119</id>
        <doi>10.14569/IJACSA.2023.01406119</doi>
        <lastModDate>2023-06-30T12:23:04.3270000+00:00</lastModDate>
        
        <creator>Mai Mokhtar</creator>
        
        <creator>Hala Abdel-Galil</creator>
        
        <creator>Ghada Khoriba</creator>
        
        <subject>Brain tumor segmentation; medical image segmentation; BraTS; U-Net; U-Net++; residual network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Image segmentation is considered one of the essential tasks for extracting useful information from an image. Given the burden of brain tumors and their consumption of medical resources, this paper presents a deep learning method for segmenting brain tumors in patients’ MRI scans. Brain tumor segmentation is crucial in detecting and treating MRI brain tumors. Furthermore, it assists physicians in locating and measuring tumors and developing treatment and rehabilitation programs. The residual U-Net++ encoder-decoder-based architecture is designed as the primary network; it is a hybrid of ResU-Net and U-Net++. The proposed Residual U-Net++ is applied to MRI brain images from the most recent and well-known global benchmark challenges: BraTS 2017, BraTS 2019, and BraTS 2021. The proposed approach is evaluated on brain tumor MRI images. On the BraTS 2021 dataset, the dice similarity coefficient (DSC) is 90.3%, sensitivity is 96%, specificity is 99%, and the 95% Hausdorff distance (HD) is 9.9. On the BraTS 2019 dataset, the DSC is 89.2%, sensitivity is 96%, specificity is 99%, and HD is 10.2. On the BraTS 2017 dataset, the DSC is 87.6%, sensitivity is 94%, specificity is 99%, and HD is 11.2. Furthermore, Residual U-Net++ outperforms standard brain tumor segmentation approaches. The experimental results indicated that the proposed method is promising and can provide better segmentation than the standard U-Net. The segmentation improvement could help radiologists increase their segmentation accuracy and save time by 3%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_119-Brain_Tumor_Semantic_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Testcase Recommendation System to Engage Students in Learning: A Practice Study in Fundamental Programming Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406118</link>
        <id>10.14569/IJACSA.2023.01406118</id>
        <doi>10.14569/IJACSA.2023.01406118</doi>
        <lastModDate>2023-06-30T12:23:04.3100000+00:00</lastModDate>
        
        <creator>Tien Vu-Van</creator>
        
        <creator>Huy Tran</creator>
        
        <creator>Thanh-Van Le</creator>
        
        <creator>Hoang-Anh Pham</creator>
        
        <creator>Nguyen Huynh-Tuong</creator>
        
        <subject>Testcases recommendation system (TRS); learning management system (LMS); zone of proximal development (ZPD); singular value decomposition (SVD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This paper proposes a testcase recommendation system (TRS) to assist beginner-level learners in introductory programming courses with completing assignments on a learning management system (LMS). These learners often struggle to generate complex testcases and handle numerous code errors, causing them to disengage from their studies. The proposed TRS addresses this problem by applying a recommendation system based on singular value decomposition (SVD) and the zone of proximal development (ZPD) to provide a small and appropriate set of testcases matched to the learner’s ability. We deploy this TRS in university-level Fundamental Programming courses for evaluation. The data analysis demonstrates that TRS significantly increases student interactions with the system.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_118-An_Adaptive_Testcase_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-objective Task Scheduling Optimization Based on Improved Bat Algorithm in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406117</link>
        <id>10.14569/IJACSA.2023.01406117</id>
        <doi>10.14569/IJACSA.2023.01406117</doi>
        <lastModDate>2023-06-30T12:23:04.2930000+00:00</lastModDate>
        
        <creator>Dakun Yu</creator>
        
        <creator>Zhongwei Xu</creator>
        
        <creator>Meng Mei</creator>
        
        <subject>Cloud computing; task scheduling; optimization; bat algorithm; meta-heuristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In cloud computing environments, task completion time and virtual machine load balance are two critical issues that need to be addressed. To solve these problems, this paper proposes a Multi-objective Optimization Mutate Discrete Bat Algorithm (MOMDBA) that improves upon the traditional Bat algorithm (BA). The MOMDBA algorithm introduces a mutation factor and mutation inertia weight during the global optimization process to enhance the algorithm’s global search ability and convergence speed. Additionally, the local optimization logic is optimized according to the characteristics of cloud computing task scenarios to improve the degree of load balancing of virtual machines. Simulation experiments were conducted using CloudSim to evaluate the algorithm’s performance, and the results were compared with other scheduling algorithms. The results of our experiments indicate that when the cost difference between algorithms is within 4.47%, MOMDBA can significantly outperform other methods. Specifically, compared to PSO, GA, and LBACO, our algorithm reduces makespan by 56.26%, 59.87%, and 25.26%, respectively, while also increasing the degree of load balancing by 93.87%, 75.92%, and 39.13%, respectively. These findings demonstrate the superior performance of MOMDBA in optimizing task scheduling and load balancing.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_117-Multi_objective_Task_Scheduling_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis using Various Performance Metrics in Imbalanced Data for Multi-class Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406116</link>
        <id>10.14569/IJACSA.2023.01406116</id>
        <doi>10.14569/IJACSA.2023.01406116</doi>
        <lastModDate>2023-06-30T12:23:04.2800000+00:00</lastModDate>
        
        <creator>Slamet Riyanto</creator>
        
        <creator>Imas Sukaesih Sitanggang</creator>
        
        <creator>Taufik Djatna</creator>
        
        <creator>Tika Dewi Atikah</creator>
        
        <subject>Imbalanced data; undersampling; oversampling; SMOTE; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Precision, Recall, and F1-score are metrics that are often used to evaluate model performance. Precision and Recall are very important to consider when the data is balanced, but in the case of unbalanced data the F1-score is the most important metric. To establish the importance of these metrics, a comparative analysis is needed to determine which metric is appropriate for the data being analyzed. This study aims to perform a comparative analysis of various evaluation metrics on unbalanced data in multi-class text classification. This study uses an unbalanced multi-class text dataset with the classes: association, negative, cause of disease, and treatment of disease. This study involves five classifiers as the algorithm-level approach, namely: Multinomial Naive Bayes, K-Nearest Neighbors, Support Vector Machine, Random Forest, and Long Short-Term Memory. Meanwhile, as the data-level approach, this study involves undersampling, oversampling, and the synthetic minority oversampling technique. The evaluation metrics used to evaluate model performance include Precision, Recall, and F1-score. The results show that the most suitable evaluation metric for unbalanced data depends on the purpose of use and the desired priority, as does the classifier best suited for handling multi-class tasks on unbalanced data. The results of this study can assist practitioners in selecting evaluation metrics that are in accordance with the goals and application needs of multi-class text classification.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_116-Comparative_Analysis_using_Various_Performance_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stroke Risk Prediction: Comparing Different Sampling Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406115</link>
        <id>10.14569/IJACSA.2023.01406115</id>
        <doi>10.14569/IJACSA.2023.01406115</doi>
        <lastModDate>2023-06-30T12:23:04.2800000+00:00</lastModDate>
        
        <creator>Qiuyang Yin</creator>
        
        <creator>Xiaoyan Ye</creator>
        
        <creator>Binhua Huang</creator>
        
        <creator>Lei Qin</creator>
        
        <creator>Xiaoying Ye</creator>
        
        <creator>Jian Wang</creator>
        
        <subject>Stroke prediction; data mining; machine learning; unbalanced data; sampling algorithms; classification algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Stroke is a serious disease that has a significant impact on the quality of life and safety of patients. Accurately predicting stroke risk is of great significance for preventing and treating stroke. In the past few years, machine learning methods have shown potential in predicting stroke risk. However, due to the imbalance of stroke data and the challenges of feature selection and model selection, stroke risk prediction still faces some difficulties. This article aims to compare the performance differences between different sampling algorithms and machine learning methods in stroke risk prediction. This study used over-sampling algorithms (Random Over Sampling and SMOTE), under-sampling algorithms (Random Under Sampling and ENN), and a hybrid sampling algorithm (SMOTE-ENN), and combined them with common machine learning methods such as K-Nearest Neighbors, Logistic Regression, Decision Tree, and Support Vector Machine to build the prediction models. Through the analysis of the experimental results, we found that SMOTE combined with the LR model showed good performance in stroke risk prediction, with a high F1 score. In addition, this study found that the overall performance of the undersampling algorithms is better than that of the oversampling and hybrid sampling algorithms. These research results provide useful references for predicting stroke risk and provide a foundation for further research and application. Future research can continue to explore more sampling algorithms, machine learning methods, and feature engineering techniques to further improve the accuracy and interpretability of stroke risk prediction and promote its application in clinical practice.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_115-Stroke_Risk_Prediction_Comparing_Different_Sampling_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Surveillance Vehicle Detection Method Incorporating Attention Mechanism and YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406114</link>
        <id>10.14569/IJACSA.2023.01406114</id>
        <doi>10.14569/IJACSA.2023.01406114</doi>
        <lastModDate>2023-06-30T12:23:04.2630000+00:00</lastModDate>
        
        <creator>Yi Pan</creator>
        
        <creator>Zhu Zhao</creator>
        
        <creator>Yan Hu</creator>
        
        <creator>Qing Wang</creator>
        
        <subject>Attention mechanism; YOLOv5; vehicle detection; image recognition; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>With the rising number of vehicles nationwide and the consequent increase in traffic accidents, vehicle detection in traffic surveillance video is an effective method to reduce traffic accidents. However, existing video surveillance vehicle detection methods suffer from high computational load, low accuracy, and excessive reliance on large-scale computing servers. Therefore, this research fuses a coordinate attention (CA) mechanism into an improved YOLOv5 network, choosing the lightweight YOLOv5s for image recognition and using the K-means algorithm to modify the anchor boxes according to the characteristics of vehicle detection. The coordinate attention mechanism, itself a lightweight algorithm, is inserted into YOLOv5s so that the resulting lightweight vehicle detection model can run on embedded devices. The measurement experiments show that the YOLOv5+CA model converges after more than 100 iterations, with the localization loss and confidence loss gradually stabilizing at 0.002 and 0.028, and the classification loss gradually stabilizing at 0.017. Comparing YOLOv5+CA with the SSD, ResNet-101, and RefineDet algorithms, YOLOv5+CA detection accuracy is better than the other algorithms by about 9%, and its accuracy approaches 1.0 at a confidence level of 0.946. The experimental results show that the research design provides higher accuracy and high computational efficiency for video surveillance vehicle detection, and can provide reference value and reference methods for video surveillance vehicle detection and operation management.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_114-Video_Surveillance_Vehicle_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Methods of Image Design Based on Virtual Reality and Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406113</link>
        <id>10.14569/IJACSA.2023.01406113</id>
        <doi>10.14569/IJACSA.2023.01406113</doi>
        <lastModDate>2023-06-30T12:23:04.2470000+00:00</lastModDate>
        
        <creator>Shasha Mao</creator>
        
        <subject>Virtual reality; interactive; ANNs-DS information fusion algorithm; image design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The continuous improvement of virtual reality and interactive technology has led to a broader and deeper application in related fields, especially image design. In image design, creating usage scenarios for portable interactive experience products based on virtual reality and interactive technology can optimize and improve key parameters for real 3D techniques, thereby building a more comprehensive image design. This article constructs a three-dimensional image model of marine organisms and scenarios based on multi-sensory interactive interface generation technology and information fusion optimization ANNs-DS algorithm, targeting the image scenarios of product design. The relevant model information and parameter changes are analyzed. The results indicate that in the process of multi-sensory interface interactive image design, the virtual reality image design implemented using ANNs-DS information fusion algorithm can enhance participants&#39; multi-sensory visual experience of the interactive interface. The reasonable degree between objects in the interactive interface and the scene space image is basically within the range of 0.85-0.95. The fluency in different scenarios can be significantly improved. Therefore, virtual reality and interactive technology have laid the foundation for developing interactive image design.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_113-Application_Methods_of_Image_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Deep Learning Approach for Arabic News Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406112</link>
        <id>10.14569/IJACSA.2023.01406112</id>
        <doi>10.14569/IJACSA.2023.01406112</doi>
        <lastModDate>2023-06-30T12:23:04.2170000+00:00</lastModDate>
        
        <creator>Roobaea Alroobaea</creator>
        
        <subject>Deep learning (DL); machine-learning (ML); convolutional neural networks (CNNs); long short-term memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In this paper, we tackle the problem of Arabic news classification. A dataset of 5,000 news articles from various Saudi Arabian news sources was gathered, classified into six categories: business, entertainment, health, politics, sports, and technology. We conducted experiments using different pre-processing techniques, word embeddings, and deep learning architectures, including convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, as well as a hybrid CNN-LSTM model. Our proposed model achieved an accuracy of 93.15, outperforming the other models. Moreover, our model was evaluated on other Arabic news datasets and obtained competitive results. Our approach demonstrates the effectiveness of deep learning methods in Arabic news classification and emphasizes the significance of careful selection of preprocessing techniques, word embeddings, and deep learning architectures.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_112-An_Empirical_Deep_Learning_Approach_for_Arabic_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel ML Approach for Computing Missing Sift, Provean, and Mutassessor Scores in Tp53 Mutation Pathogenicity Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406111</link>
        <id>10.14569/IJACSA.2023.01406111</id>
        <doi>10.14569/IJACSA.2023.01406111</doi>
        <lastModDate>2023-06-30T12:23:04.2000000+00:00</lastModDate>
        
        <creator>Rashmi Siddalingappa</creator>
        
        <creator>Sekar Kanagaraj</creator>
        
        <subject>Decision tree (DT); deep neural networks (DNN); imputation; k-nearest neighbor (KNN); logistic regression (LR); missense mutations; Mutassessor; pathogenicity; Provean; random forest (RF); SIFT; support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Cancer is often caused by missense mutations, where a single nucleotide substitution leads to an amino acid change and affects protein function. This study proposes a novel machine learning (ML) approach to calculate missing values in the tp53 database for three computational methods: SIFT, Provean, and Mutassessor scores. The computed values are compared with those obtained from the imputation method. Using these values, an ML classification model trained on 80,406 samples achieves an accuracy of 85%, while the impute method achieves 75%. The scores and statistics are used to classify samples into five classes: Benign, likely pathogenic, possibly pathogenic, pathogenic, and a variant of uncertain significance. Additionally, a comparative analysis is conducted on 58,444 samples, evaluating six ML techniques. The accuracy obtained by each is mentioned alongside the algorithm: logistic regression (89%), k-nearest neighbor (99%), decision tree (95%), random forest (99.8%), support vector machine with the polynomial kernel (91%), support vector machine with the RBF kernel (84%), and deep neural networks (98.2%). These results demonstrate the effectiveness of the proposed ML approach for pathogenicity prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_111-A_Novel_ML_Approach_for_Computing_Missing_Sift.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Top-N Rule-based Optimal Recommendation System for Language Education Content based on Parallel Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406110</link>
        <id>10.14569/IJACSA.2023.01406110</id>
        <doi>10.14569/IJACSA.2023.01406110</doi>
        <lastModDate>2023-06-30T12:23:04.1870000+00:00</lastModDate>
        
        <creator>Nan Hu</creator>
        
        <subject>Data parallel computing; cloud computing; data crawlers; top-N rules; PRF algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In recent years, personalized recommendation services have been applied in many areas of society, typically in fields such as e-commerce and short videos. In response to the serious performance problems of content recommendation on current online language education platforms, and in the face of these opportunities and challenges, this paper designs a new online English education model that gives university students fuller, more three-dimensional training in English language learning. Based on the MU platform, this paper obtains data from the platform and uses crawler technology to sample and standardize the learning resources for online education. User information, such as explicit and implicit ratings of courses, is then selected as the main basis for training a user interest preference model. Next, a PRF algorithm combining data parallelism and task parallelism optimization is implemented on Apache Spark to optimize data accuracy and the content recommendation method. Finally, the top-N recommendation rule is used to propose a dynamic evolutionary process that identifies students’ preferences and learning habits from the results of the preceding data analysis, so as to make more accurate course content recommendations and provide learning content guidance for students’ English learning. The online three-dimensional teaching model proposed in this paper focuses more on time-series research than traditional algorithms and can more accurately capture the dynamic changes in students’ learning abilities.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_110-Application_of_Top_N_Rule_based_Optimal_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Damage Security Intelligent Identification of Wharf Concrete Structures under Deep Learning and Digital Image Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406109</link>
        <id>10.14569/IJACSA.2023.01406109</id>
        <doi>10.14569/IJACSA.2023.01406109</doi>
        <lastModDate>2023-06-30T12:23:04.1700000+00:00</lastModDate>
        
        <creator>Jinbo Zhu</creator>
        
        <creator>Yuesong Li</creator>
        
        <creator>Pengrui Zhu</creator>
        
        <subject>Structural damage identification; deep learning; neural network; digital image; concrete</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Artificial Intelligence (AI) technology has developed rapidly on the strength of modern computing power. At this stage, there are many mature non-destructive testing methods in civil engineering, but they are generally only suitable for simple structures with evident damage characteristics. It is therefore necessary to investigate the damage identification of wharf concrete structures using deep learning and digital image technology. The article proposes a damage detection and localization method based on Neural Network (NN) technology in deep learning and Digital Image Correlation (DIC) to identify internal damage in concrete used for wharf construction. Firstly, the identification model of the concrete structure is constructed using NN technology. Then, structural damage identification of concrete is further investigated using DIC. Finally, relevant experiments are designed to verify the effectiveness of the model. The results show that: (1) the damage model of the concrete structure constructed with NN technology has high convergence and stability and controls the test error well; (2) when the image output by the DIC equipment is processed and input into the NN, the errors of the various parameters of different concretes remain within the acceptable range. This paper aims to provide ideas and references for follow-up structural health monitoring and related topics, and has significant engineering application value.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_109-Damage_Security_Intelligent_Identification_of_Wharf_Concrete_Structures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technology Adoption and Usage Behaviors in Field Incident Management System Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406108</link>
        <id>10.14569/IJACSA.2023.01406108</id>
        <doi>10.14569/IJACSA.2023.01406108</doi>
        <lastModDate>2023-06-30T12:23:04.1530000+00:00</lastModDate>
        
        <creator>Cory Antonio Buyan</creator>
        
        <creator>Noelyn M. De Jesus</creator>
        
        <creator>Eltimar T. Castro Jr</creator>
        
        <subject>UTAUT; field incident management system (FIMS); regression analysis; user intention and acceptance; system adoption; usage behavior; manufacturing; IMS; effort expectancy; performance expectancy; social influence; facilitating conditions; behavioral intention; ANOVA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This study utilized the Unified Theory of Acceptance and Use of Technology (UTAUT) model to analyze the adoption and utilization of a field incident management system (IMS) in a manufacturing organization. The study specifically focused on the role of user behavior as a key factor in the adoption and utilization of the incident management system. Data were collected through a survey of employees who had used the IMS, and the UTAUT model was applied to analyze the data. The results indicated that user behavior within the system significantly influenced the adoption and utilization of the IMS. The study also found that the UTAUT model provided a useful framework for understanding the adoption and utilization of IMS, particularly the importance of performance expectancy, effort expectancy, social influence, and facilitating conditions. The study provides valuable insights for organizations looking to implement an IMS and improve their incident management processes. It highlights the importance of shaping user behavior in the system through appropriate user experience and user training. The findings of this study have important implications for manufacturing organizations seeking to enhance their incident management procedures through the adoption and utilization of IMS.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_108-Technology_Adoption_and_Usage_Behaviors_in_Field_Incident_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Precise Survey on Multi-agent in Medical Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406107</link>
        <id>10.14569/IJACSA.2023.01406107</id>
        <doi>10.14569/IJACSA.2023.01406107</doi>
        <lastModDate>2023-06-30T12:23:04.1370000+00:00</lastModDate>
        
        <creator>Arwa Alshehri</creator>
        
        <creator>Fatimah Alshahrani</creator>
        
        <creator>Habib Shah</creator>
        
        <subject>Artificial intelligence; agent systems; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Agent technology has provided many opportunities to improve the human standard of living in recent decades, from social life to business intelligence, tackling complicated communication, integration, and analysis challenges. These agents play an important role in human health, from diagnosis to treatment. Every day, sophisticated agents and expert systems are being developed for human beings. These agents have made it easier to deal with common diseases and provide high accuracy with less processing time. However, they also face challenges in their domain, especially when dealing with complex issues. To handle these challenges, the domain has become characterized by distinctive and creative methodologies and architectures. This survey provides a review of medical multi-agent systems, including typical intelligent agents, their main characteristics and applications, multi-agent systems, and challenges. A classification of multi-agent system applications and challenges is presented, along with references for additional studies. For researchers and practitioners in the field, we intend this paper to be an informative and complete resource on medical multi-agent systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_107-A_Precise_Survey_on_Multiagent_in_Medical_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System Dynamics Approach in Supporting The Achievement of The Sustainable Development on MSMEs: A Collection of Case Studies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406106</link>
        <id>10.14569/IJACSA.2023.01406106</id>
        <doi>10.14569/IJACSA.2023.01406106</doi>
        <lastModDate>2023-06-30T12:23:04.1230000+00:00</lastModDate>
        
        <creator>Julia Kurniasih</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Siti Azirah Asmai</creator>
        
        <creator>Agung Budhi Wibowo</creator>
        
        <subject>System dynamics; sustainable development; Micro Small and Medium Enterprises (MSMEs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Sustainable development in MSMEs is very important for encouraging economic growth, improving people’s welfare, and ensuring environmental sustainability. However, achieving sustainability in the MSME sector faces many challenges due to the complex interdependencies and dynamic interactions among various factors. The system dynamics approach makes it possible to model and simulate dynamic feedback loops, time delays, and nonlinear relationships between these factors. This paper provides an overview of the system dynamics approach and its suitability for addressing the complexities inherent in the MSME sector as applied to sustainable development. It explores the issues faced by MSMEs in achieving sustainable development and how the system dynamics approach models and analyzes the behavior of these MSMEs. These issues cover the dimensions of product development, technology and ICT inclusion, supply chain, business development, financial resources, and organizational support. This study examined several case studies from various industries, namely the steel industry, agro-industry, craft industry, tourism industry, plastic molding, manufacturing, cosmetics, and digital companies, drawn from various countries. The study concludes that the system dynamics approach has significant potential to support the achievement of sustainable development in MSMEs, because it allows MSMEs to effectively model and simulate the behavior of the various factors that affect their operations, such as resource allocation, environmental impacts, and social considerations; to proactively address sustainability challenges; to adapt to changing market conditions; and to contribute to broader socio-economic and environmental objectives.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_106-System_Dynamics_Approach_in_Supporting_The_Achievement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study of DCNN Algorithms-based Transfer Learning for Human Eye Cataract Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406105</link>
        <id>10.14569/IJACSA.2023.01406105</id>
        <doi>10.14569/IJACSA.2023.01406105</doi>
        <lastModDate>2023-06-30T12:23:04.1070000+00:00</lastModDate>
        
        <creator>Omar Jilani Jidan</creator>
        
        <creator>Susmoy Paul</creator>
        
        <creator>Anirban Roy</creator>
        
        <creator>Sharun Akter Khushbu</creator>
        
        <creator>Mirajul Islam</creator>
        
        <creator>S.M. Saiful Islam Badhon</creator>
        
        <subject>Cataract detection; eye disease; ocular images; deep convolutional neural network (DCNN); hybrid architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This study presents a comparative analysis of different deep convolutional neural network (DCNN) architectures, including VGG19, NASNet, ResNet50, and MobileNetV2, with and without data augmentation, for the automatic detection of cataracts in fundus images. Utilizing hybrid architecture models, namely ResNet50-NASNet and ResNet50+MobileNetV2, which combine two state-of-the-art DCNNs, this research demonstrates their superior performance. Specifically, MobileNetV2 and the combined ResNet50+MobileNetV2 outperform other models, achieving an impressive accuracy of 99.00%. By emphasizing the efficacy of diverse datasets and pre-processing techniques, as well as the potential of pretrained DCNN models, this study contributes to accurate cataract diagnosis. Furthermore, the proposed system has the potential to reduce reliance on ophthalmologists, decrease the cost of eye check-ups, and improve accessibility to eye care for a wider population. These findings showcase the successful application of deep learning and image processing techniques in the early detection and treatment of various medical conditions, including cataracts, addressing the needs of individuals with diminished vision through ocular images and innovative hybrid architectures.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_105-A_Comprehensive_Study_of_DCNN_Algorithms_Based_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unusual Human Behavior Detection System in Real-Time Video Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406104</link>
        <id>10.14569/IJACSA.2023.01406104</id>
        <doi>10.14569/IJACSA.2023.01406104</doi>
        <lastModDate>2023-06-30T12:23:04.0900000+00:00</lastModDate>
        
        <creator>Yanbin Bu</creator>
        
        <creator>Ting Chen</creator>
        
        <creator>Hongxiu Duan</creator>
        
        <creator>Mei Liu</creator>
        
        <creator>Yandan Xue</creator>
        
        <subject>Anomaly detection; video sequence; standard Convolutional Automatic Encoder (CAE); spatio-temporal structures; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Abnormal behavior detection has become a necessity in real-time visual systems. The main problem is the ambiguity in distinguishing the characteristics of abnormal from normal behavior, whose definition usually differs according to the preceding context of the images. In this research, three approaches are used. In the first approach, a standard Convolutional Automatic Encoder (CAE) is used. Evaluation revealed that the problem with the standard CAE is that it does not take into account the temporal aspect of the image frame sequence. The second method uses automatic encoding to learn the dataset&#39;s spatio-temporal structures. In the third approach, complex LSTM cells are used for further improvement. The test results show that the proposed methods perform better than many previous conventional methods, and their efficiency in identifying abnormal behavior is very competitive with earlier approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_104-Unusual_Human_Behavior_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Artificial Intelligence and Business Decision Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406103</link>
        <id>10.14569/IJACSA.2023.01406103</id>
        <doi>10.14569/IJACSA.2023.01406103</doi>
        <lastModDate>2023-06-30T12:23:04.0770000+00:00</lastModDate>
        
        <creator>Anupama Prasanth</creator>
        
        <creator>Densy John Vadakkan</creator>
        
        <creator>Priyanka Surendran</creator>
        
        <creator>Bindhya Thomas</creator>
        
        <subject>Artificial intelligence; business decision making; efficiency; accuracy; innovation; marketing strategy; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Artificial Intelligence (AI) has emerged as a transformative technology with profound implications for various sectors, including business. In recent years, AI has revolutionized decision-making processes by providing organizations with advanced analytical capabilities, enabling them to extract valuable insights from vast amounts of data. The application of AI in businesses may push the sector to rely on quicker, less expensive, and more accurate marketing techniques. By utilizing AI in marketing strategies, a business owner may increase audience response and build a strong online brand that can compete with others. In addition to marketing, it has the capacity to remodel a business with fresh concepts. Additionally, it provides solutions to challenging problems, aiding enormous business growth. The study&#39;s primary goal is to investigate how artificial intelligence and decision-making are deployed in business, and to explore how AI is being used to enhance decision-making processes and how it is changing business models. The study reveals that the role of artificial intelligence in business decision making is transformative, offering significant advantages in terms of efficiency, accuracy, and innovation. AI-powered systems enable businesses to process and analyze vast amounts of data efficiently, leading to quicker and more informed decision making. Overall, the integration of AI in business decision making has the potential to drive organizational success and shape the future of business practices.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_103-Role_of_Artificial_Intelligence_and_Business_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Recommendation of Open Educational Resources: Building a Recommendation Model Based on Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406102</link>
        <id>10.14569/IJACSA.2023.01406102</id>
        <doi>10.14569/IJACSA.2023.01406102</doi>
        <lastModDate>2023-06-30T12:23:04.0600000+00:00</lastModDate>
        
        <creator>Zongkui Wang</creator>
        
        <subject>Intelligent recommendation; deep neural networks; multilayer perceptron; educational resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Information overload is a challenge for the development of online education. To address the problem of intelligent recommendation of educational resources, the study proposes an intelligent recommendation model for educational resources based on deep neural networks. First, a deep neural network-based custom recommendation model for educational resources is constructed after a multilayer perceptron-based prediction model is established. The results showed that the prediction model proposed in the study steadily reduced the average absolute error as the number of iterations increases, reaching an average of 0.704, with the loss value stabilising at around 0.6, which is lower than that of the deep neural network prediction model. Compared to the deep neural network prediction model, the normalised discounted cumulative gain is typically 0.01 higher and the hit rate 0.03 higher. The prediction time of the similarity algorithm is faster than that of the neural network. The mean squared error ranged from a high of 1.29 to a low of 1.19, both lower than other algorithms, and the mean absolute error ranged from a high of 0.56 to a low of 0.54, lower than all other algorithms except the support vector machine algorithm. The average absolute error of the deep neural network resource representation algorithm ranged from a high of 1.46 to a low of 1.45, lower than all other algorithms except the support vector machine algorithm, and the average squared error ranged from a high of 3.43 to a low of 3.24, better than all other algorithms. In conclusion, the model constructed by the study performs well in recommending educational resources and helps promote the development of online education.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_102-Intelligent_Recommendation_of_Open_Educational_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Game Application Interfaces for Older Adults with Mild Cognitive Impairment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406101</link>
        <id>10.14569/IJACSA.2023.01406101</id>
        <doi>10.14569/IJACSA.2023.01406101</doi>
        <lastModDate>2023-06-30T12:23:04.0430000+00:00</lastModDate>
        
        <creator>Nita Rosa Damayanti</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <subject>System usability scale; older adults; mild cognitive impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>A digital game is software used as alternative entertainment for older adults for brain training. In this study, a digital game prototype for older adults with mild cognitive impairment, called EmoGame, has been developed and illustrated. The game is intended to assist older adults who experience emotional and cognitive impairment, and it implements reminiscence therapy in the design of the user interface. Applications for older adults have been developed in many studies, but applications using a reminiscence therapy approach still need improvement. User interface testing was carried out using the system usability scale (SUS). Interface testing with the SUS instrument was organized and precisely measured, using ten (10) questions as the evaluation benchmark among twenty (20) older adult respondents. The evaluation of the EmoGame prototype yielded a score of 82, representing an excellent rating. Future work will refine the prototype based on user feedback, iteratively improve its functionalities and interfaces, and conduct a longitudinal study to investigate the effect of the game on improving cognition among older adults with mild cognitive impairment.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_101-Evaluating_Game_Application_Interfaces_for_Older_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Garlic Land Based on Growth Phase using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01406100</link>
        <id>10.14569/IJACSA.2023.01406100</id>
        <doi>10.14569/IJACSA.2023.01406100</doi>
        <lastModDate>2023-06-30T12:23:04.0300000+00:00</lastModDate>
        
        <creator>Durrotul Mukhibah</creator>
        
        <creator>Imas Sukaesih Sitanggang</creator>
        
        <creator>Annisa</creator>
        
        <subject>Convolutional neural network; garlic; growth phase; horticulture; land classification; Sentinel-2; VGG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The Indonesian Government needs to monitor the realization of garlic land against production plans in several production areas during the growing season. A previous study, which used Sentinel-1A satellite imagery and Convolutional Neural Networks to classify garlic land, lacked information on growth phases. This study aims to address that limitation by creating a garlic land classification model based on growth phase using Convolutional Neural Networks. The dataset comprises 446 preprocessed Sentinel-2 images cross-referenced with drone ground-truth data. The model used both VGG16 and VGG19 architectures, and hyperparameter tuning was applied to obtain optimal values. After evaluating three scenarios (VGG16 base model, modified VGG16, and modified VGG19), the best model was obtained from the modified VGG19, with an accuracy of 81.81% and a loss of 0.71. The study successfully classified garlic land based on growth phase, with a precision of 0.43 for the initial growth and vegetation classes, and 0.22 for the harvest class. The study offers an alternative for monitoring garlic production throughout the growth phases using satellite imagery and deep learning.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_100-Classification_of_Garlic_Land_Based_on_Growth_Phase.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Impact of Hybrid Recommender Systems on Personalized Mental Health Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140699</link>
        <id>10.14569/IJACSA.2023.0140699</id>
        <doi>10.14569/IJACSA.2023.0140699</doi>
        <lastModDate>2023-06-30T12:23:04.0130000+00:00</lastModDate>
        
        <creator>Idayati Mazlan</creator>
        
        <creator>Noraswaliza Abdullah</creator>
        
        <creator>Norashikin Ahmad</creator>
        
        <subject>Recommender system; mental health; content-based filtering; collaborative filtering; hybrid recommender system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Personalized mental health recommendations are crucial in addressing the diverse needs and preferences of individuals seeking mental health support. This research investigates the impact of hybrid recommender systems on the provision of personalized recommendations for mental health interventions. This paper explores the integration of various recommendation techniques, including collaborative filtering, content-based filtering, and knowledge-based filtering, within a hybrid system to leverage their respective strengths for personalized mental health recommendations. Additionally, this paper discusses the challenges and considerations involved in combining multiple techniques, such as data integration and algorithm selection, for a hybrid recommender system in this domain. Furthermore, this paper discusses the data sources typically used in hybrid recommender systems for mental health and the evaluation metrics employed to assess the effectiveness of such systems. Future research opportunities, including incorporating emerging technologies and leveraging novel data sources, are identified to further enhance the performance and relevance of hybrid recommender systems in the mental health domain. The findings of this research contribute to the advancement of personalized mental health support and the development of effective recommendation systems tailored to individual mental health needs.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_99-Exploring_the_Impact_of_Hybrid_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diversity-based Test Case Prioritization Technique to Improve Faults Detection Rate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140698</link>
        <id>10.14569/IJACSA.2023.0140698</id>
        <doi>10.14569/IJACSA.2023.0140698</doi>
        <lastModDate>2023-06-30T12:23:04.0130000+00:00</lastModDate>
        
        <creator>Jamal Abdullahi Nuh</creator>
        
        <creator>Tieng Wei Koh</creator>
        
        <creator>Salmi Baharom</creator>
        
        <creator>Mohd Hafeez Osman</creator>
        
        <creator>Lawal Babangida</creator>
        
        <creator>Sukumar Letchmunan</creator>
        
        <creator>Si Na Kew</creator>
        
        <subject>Regression testing; fault detection; test case prioritization; test case diversity; test case coverage; species diversity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Regression testing is an important task in software development, but it is often associated with high costs and increased project expenses. To address this challenge, prioritizing test cases during test execution is essential, as it aims to swiftly identify hidden faults in the software. In the literature, several techniques for test case prioritization (TCP) have been proposed and evaluated. However, existing weight-based TCP techniques often overlook the true diversity coverage of test cases, resulting in average-based weighting practices and a lack of systematic calculation of test case weights. Our research revolves around prioritizing test cases by considering multiple code coverage criteria. The study presents a novel diversity technique that calculates a diversity coverage score for each test case. This score serves as a weight to effectively rank the test cases. To evaluate the proposed technique, an experiment was conducted using five open-source programs, and performance was measured in terms of the average percentage of fault detection (APFD). A comparison was made against an existing technique. The results revealed that the proposed technique significantly improved the fault detection rate compared to the existing approach. It is worth noting that this study is the first of its kind to incorporate the true diversity score of test cases into the TCP process. The findings of our research make valuable contributions to the field of regression testing by enhancing the effectiveness of the testing process through the utilization of diversity-based weighting techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_98-Diversity_based_Test_Case_Prioritization_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection-based Automatic Waste Segregation using Robotic Arm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140697</link>
        <id>10.14569/IJACSA.2023.0140697</id>
        <doi>10.14569/IJACSA.2023.0140697</doi>
        <lastModDate>2023-06-30T12:23:03.9970000+00:00</lastModDate>
        
        <creator>Azza Elsayed Ibrahim</creator>
        
        <creator>Rasha Shoitan</creator>
        
        <creator>Mona M. Moussa</creator>
        
        <creator>Heba A. Elnemr</creator>
        
        <creator>Young Im Cho</creator>
        
        <creator>Mohamed S. Abdallah</creator>
        
        <subject>Smart recycling; inverse kinematics; object detection; 4 DOF robotic arm; YOLOV6</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Today&#39;s overpopulation and fast urbanization present a significant challenge for developing countries in the form of excessive garbage generation. Managing waste is essential in creating sustainable and habitable communities, but it remains an issue for developing countries. Finding an efficient smart waste management system is a challenge in current research. In recent years, robots and artificial intelligence have influenced a wide range of industries, especially waste management. This research proposes a waste segregation system that integrates a robot arm and the YOLOv6 object detection model to automatically sort garbage according to its type and meet real-time requirements. The proposed algorithm leverages the hardware-friendly architecture of YOLOv6 while maintaining high accuracy in detecting and classifying garbage. Moreover, the proposed system creates a 3D model of a 4 DOF robotic arm using CAD tools. A new approach based on a geometric method is proposed to solve the inverse kinematics problem, precisely calculating the proper angles of the robot arm&#39;s joints via a unique solution with reduced computational time. The proposed system is evaluated on a modified TrashNet dataset with seven garbage classes. The experiments reveal that the proposed algorithm outperforms other recent YOLO models in terms of precision, recall, F1 score, and model size. Furthermore, the proposed algorithm takes only a fraction of a second to pick up a single object and place it in its proper basket.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_97-Object_Detection_based_Automatic_Waste_Segregation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of Online Courses under the Threshold of Smart Innovation Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140696</link>
        <id>10.14569/IJACSA.2023.0140696</id>
        <doi>10.14569/IJACSA.2023.0140696</doi>
        <lastModDate>2023-06-30T12:23:03.9670000+00:00</lastModDate>
        
        <creator>Qin Wang</creator>
        
        <creator>Anya Xiong</creator>
        
        <creator>Huirong Zhu</creator>
        
        <subject>Massive open online courses (MOOC); deep learning; collaborative neural network filtering model (FIUNeu); course recommendation; online learning recommendation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>With the rapid development of the Internet and the growing demand for education, a new online teaching mode, massive open online courses (MOOC), emerged in 2012. To address the problems of sparse data and poor recommendation performance in online course recommendation, this paper introduces deep learning into course recommendation and proposes an auxiliary-information-based neural network model (IUNeu), which is then improved to obtain a collaborative neural network filtering model (FIUNeu). Firstly, the principles and technical details of the deep learning base model are studied in depth to provide technical support for course recommendation models and online learning recommendation systems. Based on the existing neural matrix factorization model (NeuMF), we combine user information and course information and consider the interaction relationship between them to improve the accuracy with which the model represents users and courses. The auxiliary-information neural network model (IUNeu) is incorporated into the online learning platform, and system development is completed with a front-end/back-end separated design, realizing the online learning, course collection, course recommendation, and resource download modules. Finally, the experimental results are analyzed: under the same experimental conditions, the test experiments are repeated 10 times and the RMSE results are averaged. The deep-learning-based neural network collaborative filtering model (FIUNeu) proposed in this paper achieves an RMSE of 0.85517, the best performance, with high rating-prediction accuracy, and helps alleviate the data sparsity problem.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_96-Design_and_Application_of_Online_Courses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Myocardial Image Classification using Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140695</link>
        <id>10.14569/IJACSA.2023.0140695</id>
        <doi>10.14569/IJACSA.2023.0140695</doi>
        <lastModDate>2023-06-30T12:23:03.9500000+00:00</lastModDate>
        
        <creator>Qing kun Zhu</creator>
        
        <subject>Myocarditis; generative adversarial network; data augmentation; differential evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Myocarditis is an important public health concern since it can cause heart failure and sudden death. It can be diagnosed with cardiac magnetic resonance (CMR) imaging, a non-invasive imaging technology that nevertheless carries the potential for operator bias. The study provides a deep learning-based model for myocarditis detection using CMR images to support medical professionals. The proposed architecture comprises a convolutional neural network (CNN), a fully-connected decision layer, a generative adversarial network (GAN)-based algorithm for data augmentation, an enhanced differential evolution (DE) method for pre-training weights, and a reinforcement learning (RL)-based method for training. We present a new method of employing generated images for GAN-based data augmentation to improve the classification performance of the proposed CNN. Imbalanced data is one of the most significant classification issues, as negative samples outnumber positive ones, degrading system performance. To solve this issue, we offer an RL-based training method that learns minority-class examples with attention. In addition, we tackle the challenges associated with the training step, which typically relies on gradient-based techniques for the learning process; however, these methods often face issues such as sensitivity to initialization. To initialize the backpropagation (BP) process, we present an improved DE technique that leverages a clustering-based mutation operator. It identifies a successful cluster for DE and applies an original updating strategy to produce potential solutions. We assess the proposed model on the Z-Alizadeh Sani myocarditis dataset and show that it outperforms other methods.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_95-A_Novel_Method_for_Myocardial_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Classified Warning Method for Heavy Overload in Distribution Networks Considering the Characteristics of Unbalanced Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140694</link>
        <id>10.14569/IJACSA.2023.0140694</id>
        <doi>10.14569/IJACSA.2023.0140694</doi>
        <lastModDate>2023-06-30T12:23:03.9370000+00:00</lastModDate>
        
        <creator>Guohui Ren</creator>
        
        <subject>Imbalanced data; feature extraction; distribution network; overload classification warning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In order to achieve heavy overload warning and capacity planning for the distribution network, it is necessary to classify heavy overload warnings of the distribution network. A classified warning method for heavy overload in distribution networks based on imbalanced-dataset feature extraction is proposed. The method screens the feature indicator set related to distribution network overload, constructs a hierarchical prediction framework for the distribution network load situation, and combines information such as power distribution points, road construction, municipal planning, and power load distribution to form distribution network capacity planning and line renovation plans. Based on K-means clustering, an undersampling method is used to extract features from the imbalanced dataset for overload classification, using decision trees as the basic learning unit. The model comprises multiple decision trees trained with Bagging ensemble learning theory and the random subspace method. The random forest algorithm is used to realize feature detection for heavy overload classification and distribution network capacity planning, and classified early warning of heavy overload is realized according to the capacity planning results. Tests have shown that this method has good accuracy in predicting electrical loads and can effectively solve the problem of excess capacity caused by light or no load, improving heavy overload warning and capacity planning capabilities in the distribution network.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_94-A_Classified_Warning_Method_for_Heavy_Overload.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Settlement Prediction of Building Foundation in Smart City Based on BP Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140693</link>
        <id>10.14569/IJACSA.2023.0140693</id>
        <doi>10.14569/IJACSA.2023.0140693</doi>
        <lastModDate>2023-06-30T12:23:03.9200000+00:00</lastModDate>
        
        <creator>Luyao Wei</creator>
        
        <subject>Smart city; intelligent architecture; foundation settlement; settlement prediction; BP neural network; parameter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In the construction process of high-rise buildings, it is necessary to predict the settlement and deformation of the foundation; current prediction methods are mainly based on empirical theoretical calculations and on more accurate numerical analysis methods. Facing the interference of complex and ever-changing terrain and parameter values with these prediction methods, and in order to accurately determine the settlement of building foundations, this study designed a smart city building foundation settlement prediction method based on a BP neural network. Firstly, a real-time dynamic monitoring unit for building foundation settlement was constructed using Wireless Sensor Network (WSN) technology. Then, the monitoring data were used to calculate the relevant parameters of building foundation settlement through the layer-wise summation method. Finally, the monitoring data were input into the BP network, the weights of the output layer and hidden layer were adjusted using the settlement-related parameters, and the settlement prediction results for the smart city building foundation were output through training. The study selected average error and prediction time as evaluation criteria to test the feasibility of the proposed method. The method can effectively predict foundation settlement, with an average prediction error consistently below 4% and a prediction time consistently below 49 ms.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_93-Research_on_Settlement_Prediction_of_Building_Foundation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Breast Cancer using Traditional and Ensemble Technique: A Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140692</link>
        <id>10.14569/IJACSA.2023.0140692</id>
        <doi>10.14569/IJACSA.2023.0140692</doi>
        <lastModDate>2023-06-30T12:23:03.9030000+00:00</lastModDate>
        
        <creator>Tamanna Islam</creator>
        
        <creator>Amatul Bushra Akhi</creator>
        
        <creator>Farzana Akter</creator>
        
        <creator>Md. Najmul Hasan</creator>
        
        <creator>Munira Akter Lata</creator>
        
        <subject>Breast cancer; prediction; machine learning algorithms; ensemble models; voting; stacking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Breast cancer is a prevalent and potentially life-threatening disease that affects millions of individuals worldwide. Early detection plays a crucial role in improving patient outcomes and increasing the chances of survival. In recent years, machine learning (ML) techniques have gained significant attention in the field of breast cancer detection and diagnosis due to their ability to analyze large and complex datasets, extract meaningful patterns, and facilitate accurate classification. This research focuses on leveraging ML algorithms and models to enhance breast cancer detection and provide more reliable diagnostic results in the real world. Two datasets from Kaggle were used in this study, and Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbors (KNN), and other algorithms were applied to identify potential breast cancer cases. On the first dataset, A, the test accuracy using Logistic Regression, SVM, and GridSearchCV was 95.614%, whereas on dataset B the accuracy of Logistic Regression and Decision Tree increased to 99.270%. The Boosting Decision Tree likewise achieved an accuracy of 99.270% when compared to the other algorithms. To validate these performances, various ensemble models are used. To assign the optimal parameters to each classifier, a hyper-parameter tuning method is used. The experimental study examined the findings of recent studies and found that LRBO performed best, with the highest accuracy for predicting breast cancer being 95.614%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_92-Prediction_of_Breast_Cancer_using_Traditional_and_Ensemble_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A 3D Processing Technique to Detect Lung Tumor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140691</link>
        <id>10.14569/IJACSA.2023.0140691</id>
        <doi>10.14569/IJACSA.2023.0140691</doi>
        <lastModDate>2023-06-30T12:23:03.8870000+00:00</lastModDate>
        
        <creator>Nabila ELLOUMI</creator>
        
        <creator>Slim Ben CHAABANE</creator>
        
        <creator>Hassan SEDDIK</creator>
        
        <creator>TOUNSI Nadra</creator>
        
        <subject>Deep learning U-NET architecture; 3D CT scan (computerized tomography); DICOM images (Digital Imaging and Communications in Medicine); 2D slices; ROI (regions of interest)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In this paper, the authors introduce a new deep learning segmentation technique based on the U-NET algorithm for lung cancer segmentation, the main challenge that medical staff confront in their diagnosis process. The goal is to develop an ideal segmentation that enables medical personnel to distinguish the various tumor components using the fully convolutional U-NET network architecture, which is the most effective. First, the regions of interest (ROI) in the 2D slices are established by an expert using the Siemens syngo.via application. In this pre-processing step, the cancer area is isolated from its surroundings and used as training data for the U-NET algorithm. Second, the 2D U-NET model is used to segment the DICOM images (Digital Imaging and Communications in Medicine) into homogeneous regions. Finally, a post-processing step is used to obtain the 3D CT scan (computerized tomography) from the 2D slices. The proposed method is applied to biomedical images from nuclear medicine and radiotherapy extracted from the archiving system of the Salah Azaiez Institute in Tunisia. The segmentation results are validated, and the prediction accuracy on the available test data is evaluated. Finally, a comparison study with other existing techniques is presented. The experimental results demonstrate the superiority of the U-NET architecture for both 2D and 3D image segmentation.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_91-A_3D_Processing_Technique_to_Detect_Lung_Tumor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Multiple Indefinite Kernel Learning Framework for Disease Classification from Gene Expression Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140690</link>
        <id>10.14569/IJACSA.2023.0140690</id>
        <doi>10.14569/IJACSA.2023.0140690</doi>
        <lastModDate>2023-06-30T12:23:03.8730000+00:00</lastModDate>
        
        <creator>Swetha S</creator>
        
        <creator>Srinivasan G N</creator>
        
        <creator>Dayananda P</creator>
        
        <subject>Gene expression; optimized kernel principal component analysis; multiple indefinite kernel learning; flow direction algorithm based support vector machine; arithmetic optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In recent years, Machine Learning (ML) techniques have been used by several researchers to classify diseases using gene expression data. Disease categorization using heterogeneous gene expression data is often used for defining critical problems such as cancer analysis. A variety of evaluated factors known as genes are used to characterize the gene expression data gathered from DNA microarrays. Accurate classification of genetic data is essential to provide accurate treatments to sick people. A large number of genes can be viewed simultaneously from the collected data. However, processing this data has some limitations due to noise, redundant data, frequent errors, increased complexity, smaller samples with high dimensionality, difficult interpretation, etc. A model must be able to distinguish the features in such heterogeneous data with high accuracy to make accurate predictions. This paper therefore presents an innovative model to overcome these issues. The proposed model includes an effective multiple indefinite kernel learning based model for analyzing the gene expression microarray data, an optimized kernel principal component analysis (OKPCA) to select the best features, and a hybrid flow-directed arithmetic support vector machine (SVM)-based multiple indefinite kernel learning (FDASVM-MIKL) model for classification. Flow direction and arithmetic optimization algorithms are combined with SVM to increase classification accuracy. The proposed technique achieves accuracies of 99.95%, 99.63%, 99.60%, 99.51%, and 99.79% on the colon, Isolet, ALLAML, Lung_cancer, and Snp2 graph datasets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_90-A_Hybrid_Multiple_Indefinite_Kernel_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring Information Security of Web Resources Based on Blockchain Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140689</link>
        <id>10.14569/IJACSA.2023.0140689</id>
        <doi>10.14569/IJACSA.2023.0140689</doi>
        <lastModDate>2023-06-30T12:23:03.8570000+00:00</lastModDate>
        
        <creator>Barakova Aliya</creator>
        
        <creator>Ussatova Olga</creator>
        
        <creator>Begimbayeva Yenlik</creator>
        
        <creator>Ibrahim Sogukpinar</creator>
        
        <subject>Information security; data security; website protection; blockchain; network attacks; hash functions; web applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This project examines how blockchain technology can enhance data security and reliability for web applications. In this article, ways to improve data security on online course platforms that utilize blockchain technology are explored. To clarify, online course platforms are web-based applications that enable users to access course materials online. These platforms often deal with sensitive data which, if compromised, can cause significant harm to users. Unfortunately, this information is often the target of fraudulent operations and illegal actions aimed at stealing personal data that can be used for authentication on various platforms. This article identifies the weaknesses of these sites and discusses the importance of using sophisticated technologies to safeguard web resources effectively. This research explores how blockchain technology can protect against common web application attacks, which often target the user authorization process involving the transmission of identification and authentication data from the user to the website database. The study outlines the key components of blockchain technology, including hash functions, hash values, data structures, and blockchain classification. Additionally, the study presents a transaction block model for a web course developed using blockchain technology.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_89-Ensuring_Information_Security_of_Web_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Query-Focused Multi-document Summarization Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140688</link>
        <id>10.14569/IJACSA.2023.0140688</id>
        <doi>10.14569/IJACSA.2023.0140688</doi>
        <lastModDate>2023-06-30T12:23:03.8400000+00:00</lastModDate>
        
        <creator>Entesar Alanzi</creator>
        
        <creator>Safa Alballaa</creator>
        
        <subject>Text summarization; query-based extractive text summarization; multi-document; graph-based approach; clustering-based approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>With the exponential growth of textual information on the web and in multimedia, query-focused multi-document summarization (QFMS) has emerged as a critical research area. QFMS aims to generate concise summaries that address user queries and satisfy their information needs. This paper provides a comprehensive survey of state-of-the-art approaches in QFMS, focusing specifically on graph-based and clustering-based methods. Each approach is examined in detail, highlighting its advantages and disadvantages. The survey covers ranking algorithms, sentence selection techniques, redundancy removal methods, evaluation metrics, and available datasets. The principal aim of this paper is to present a thorough analysis of QFMS approaches, providing researchers and practitioners with valuable insights into the field. By surveying existing techniques, the paper identifies the challenges and issues faced in QFMS and discusses potential future directions. Moreover, the paper emphasizes the importance of addressing coherency, ambiguity, vague references, evaluation methods, redundancy, and diversity in QFMS. Performance standards and competing approaches are also discussed, showcasing the advancements made in QFMS. The paper acknowledges the need for improving summarization coherence, readability, and semantic efficiency, while balancing compression ratios and summarizing quality. Additionally, it highlights the potential of hybrid methods and the integration of extractive and abstractive techniques to achieve more human-like summaries.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_88-Query_Focused_Multi_document_Summarization_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Offensive Language Identification in Low Resource Languages using Bidirectional Long-Short-Term Memory Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140687</link>
        <id>10.14569/IJACSA.2023.0140687</id>
        <doi>10.14569/IJACSA.2023.0140687</doi>
        <lastModDate>2023-06-30T12:23:03.8270000+00:00</lastModDate>
        
        <creator>Aigerim Toktarova</creator>
        
        <creator>Aktore Abushakhma</creator>
        
        <creator>Elvira Adylbekova</creator>
        
        <creator>Ainur Manapova</creator>
        
        <creator>Bolganay Kaldarova</creator>
        
        <creator>Yerzhan Atayev</creator>
        
        <creator>Bakhyt Kassenova</creator>
        
        <creator>Ainash Aidarkhanova</creator>
        
        <subject>Offensive language; natural language processing; low resource language; machine learning; deep learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Offensive language identification is a critical task in today&#39;s digital era, enabling the development of effective content moderation systems. However, it poses unique challenges in low resource languages where limited annotated data is available. This research paper focuses on addressing the problem of offensive language identification specifically in the context of a low resource language, namely the Kazakh language. To tackle this challenge, we propose a novel approach based on Bidirectional Long-Short-Term Memory (BiLSTM) networks, which have demonstrated strong performance in natural language processing tasks. By leveraging the bidirectional nature of the BiLSTM architecture, we capture both contextual dependencies and long-term dependencies in the input text, enabling more accurate offensive language identification. Our approach further utilizes transfer learning techniques to mitigate the scarcity of annotated data in the low resource setting. Through extensive experiments on a Kazakh offensive language dataset, we demonstrate the effectiveness of our proposed approach, achieving state-of-the-art results in offensive language identification in the low resource Kazakh language. Moreover, we analyze the impact of different model configurations and training strategies on the performance of our approach. The findings from our study provide valuable insights into offensive language identification techniques in low resource languages and pave the way for more robust content moderation systems tailored to specific linguistic contexts.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_87-Offensive_Language_Identification_in_Low_Resource_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fertigation Technology Meets Online Market: A Multipurpose Mobile App for Urban Farming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140686</link>
        <id>10.14569/IJACSA.2023.0140686</id>
        <doi>10.14569/IJACSA.2023.0140686</doi>
        <lastModDate>2023-06-30T12:23:03.8100000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Asyraf Salmi</creator>
        
        <creator>Adam Wong Yoon Khang</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Vertical farming; mobile application; online market; farm monitoring; urban farmer; agriculture supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In a world where smartphones dominate the market and provide opportunities to a vast population, this work introduces an innovative application that enables users to order irrigated vegetable crops from urban farmers. The application utilizes simple fertigation system technology, which can be easily implemented in small areas such as homes. Currently, consumers and sellers rely on traditional methods like paper or other means to place orders for vertical farming. However, research shows that these methods are unreliable and only offer temporary relevance. Additionally, the traditional agriculture supply chain diminishes the appeal of urban farming as its benefits do not outweigh the disadvantages. The primary objective of this application is to promote the concept of urban farming by creating an online marketplace that bypasses traditional methods. This allows consumers to directly order from the farmers themselves, serving as an alternative to the agricultural supply chain. Furthermore, a monitoring system has been integrated into the application as an additional tool, enabling farmers to remotely monitor and control their farms. This feature is particularly beneficial for urban farmers with farms in multiple locations who may lack the time to physically visit each one.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_86-Fertigation_Technology_Meets_Online_Market.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skin Cancer Classification using Delaunay Triangulation and Graph Convolutional Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140685</link>
        <id>10.14569/IJACSA.2023.0140685</id>
        <doi>10.14569/IJACSA.2023.0140685</doi>
        <lastModDate>2023-06-30T12:23:03.7930000+00:00</lastModDate>
        
        <creator>Caroline Angelina Sunarya</creator>
        
        <creator>Jocelyn Verna Siswanto</creator>
        
        <creator>Grace Shirley Cam</creator>
        
        <creator>Felix Indra Kurniadi</creator>
        
        <subject>Skin cancer; Delaunay triangulation; graph convolutional network; GCN; multilabel image classification; convolutional neural network; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Oftentimes, many people or even medical workers misdiagnose skin cancer, which may lead to malpractice and thus result in delayed recovery or life-threatening complications. In this research, a Graph Convolutional Network (GCN) method is proposed as a classification model, with Delaunay triangulation as its feature extraction method, to classify various types of skin cancer. Delaunay triangulation serves the purpose of boundary extraction, and this implementation allows the model to focus only on the cancerous lesion and ignore the skin around it. This way, the types of skin cancer can be predicted more accurately. Furthermore, GCN offers many advantages in medical image analysis over traditional CNN models. GCN can model interactions between different regions and structures in an image and perform message passing between nodes, whereas CNN is not explicitly designed to do so. In addition, GCN can leverage transfer learning and few-shot learning techniques to address the challenges of limited annotated medical image datasets. However, the results show that the proposed model tends to overfit and is unable to generate correct predictions for new skin cancer images. Several factors could lead the model to overfit, such as imbalanced data, incorrect feature extraction, insufficient features for data prediction, or noise in the data.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_85-Skin_Cancer_Classification_using_Delaunay_Triangulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of DIDIM and IV Approaches using Double Least Squares Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140684</link>
        <id>10.14569/IJACSA.2023.0140684</id>
        <doi>10.14569/IJACSA.2023.0140684</doi>
        <lastModDate>2023-06-30T12:23:03.7800000+00:00</lastModDate>
        
        <creator>Fadwa SAADA</creator>
        
        <creator>David DELOUCHE</creator>
        
        <creator>Karim CHABIR</creator>
        
        <creator>Mohamed Naceur ABDELKRIM</creator>
        
        <subject>Identification; double least squares; instrumental variable; DIDIM method; robotics dynamics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Usually, identifying dynamic parameters for robots involves utilizing the Inverse Dynamic Model (IDM), which is linear in relation to the parameters being identified, alongside Linear Least Squares (LLS) methods. To implement this approach, precise measurements of both torque and position must be obtained at a high frequency. Additionally, velocities and accelerations must be estimated by applying a band-pass filtering technique to the position data. Given the presence of noise in the observation matrix and the closed-loop nature of the identification process, we have modified the Instrumental Variable (IV) method to address the issue of noisy observations. A novel identification technique named DIDIM (Direct and Inverse Dynamic Identification Model), which requires only torque measurements as input variables, has recently been successfully applied to a 6-degree-of-freedom industrial robot. DIDIM employs a closed-loop output error approach that utilizes closed-loop simulations of the robot. The experimental results reveal that the IV and DIDIM methods exhibit numerical equivalence. In this paper, we conduct a comparison of these two methods using a double step least squares (2SLS) analysis. We experimentally validate this study using a 2-degree-of-freedom planar robot.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_84-Comparative_Analysis_of_DIDIM_and_IV_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Essay Scoring for Arabic Short Answer Questions using Text Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140682</link>
        <id>10.14569/IJACSA.2023.0140682</id>
        <doi>10.14569/IJACSA.2023.0140682</doi>
        <lastModDate>2023-06-30T12:23:03.7630000+00:00</lastModDate>
        
        <creator>Maram Meccawy</creator>
        
        <creator>Afnan Ali Bayazed</creator>
        
        <creator>Bashayer Al-Abdullah</creator>
        
        <creator>Hind Algamdi</creator>
        
        <subject>Arabic language; Automated Essay Scoring (AES); Automated Scoring (AS); Educational Technologies; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Automated Essay Scoring (AES) systems involve using a specially designed computing program to mark students’ essays. It is a form of online assessment supported by natural language processing (NLP). These systems seek to exploit advanced technologies to reduce the time and effort spent on the exam scoring process. They have been applied in several languages, including Arabic. Nevertheless, the applicable NLP techniques in Arabic AES are still limited, and further investigation is needed to make NLP suitable for Arabic and achieve human-like scoring accuracy. Therefore, this comparative empirical study tested two word-embedding deep learning approaches, namely BERT and Word2vec, along with a knowledge-based similarity approach: Arabic WordNet. The study used the cosine similarity measure to provide optimal student answer scores. Several experiments were conducted for each of the proposed approaches on two available Arabic short answer question datasets to explore the effect of the stemming level. The quantitative results indicated that advanced contextual embedding models can improve the efficiency of Arabic AES, as the meaning of words can differ across contexts; the BERT approach achieved the best Pearson correlation (0.84) and RMSE (1.003), and can therefore serve as a catalyst for future research based on contextual embedding models. However, this research area needs further investigation to increase the accuracy of Arabic AES so that it can become a practical online scoring system.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_82-Automatic_Essay_Scoring_for_Arabic_Short_Answer_Questions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combination of Adaptive Neuro Fuzzy Inference System and Machine Learning Algorithm for Recognition of Human Facial Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140683</link>
        <id>10.14569/IJACSA.2023.0140683</id>
        <doi>10.14569/IJACSA.2023.0140683</doi>
        <lastModDate>2023-06-30T12:23:03.7630000+00:00</lastModDate>
        
        <creator>B. Dhanalaxmi</creator>
        
        <creator>B. Madhuravani</creator>
        
        <creator>Yeligeti Raju</creator>
        
        <creator>C. Balaswamy</creator>
        
        <creator>A. Athiraja</creator>
        
        <creator>G. Charles Babu</creator>
        
        <creator>T. Samraj Lawrence</creator>
        
        <subject>ANFIS; Image processing; face recognition; feature extraction; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>A face recognition system&#39;s initial three processes are face detection, feature extraction, and facial expression recognition. The first step, face detection, involves skin colour detection with a colour model, lighting adjustment to achieve uniformity on the face, and morphological techniques to retain the necessary face region. The output of the first step is used to extract facial characteristics such as the eyes, nose, and mouth. The third step applies automated facial emotion recognition. This study&#39;s goal is to apply the Adaptive Neuro Fuzzy Inference System (ANFIS) algorithm to increase the precision of current face recognition systems. To remove noise and unwanted information from the datasets, independent datasets and a pre-processing technique are built in this study based on colour, texture, and shape to determine the features of the face. The output of the three-feature extraction process is given as input to the ANFIS model, which has already been trained on our training image datasets. The model receives a test image as input, evaluates the three aspects of the input image, and then recognizes the test image based on correlation. Fuzzy logic is used to determine whether the input has been authenticated. The proposed ANFIS method is compared with existing methods such as the Minimum Distance Classifier (MDC), Support Vector Machine (SVM), and Case Based Reasoning (CBR) using quality measures such as error rate, accuracy, precision, and recall. Finally, the performance is analyzed by combining all feature extractions with existing classification methods such as MDC, KNN (K-Nearest Neighbour), SVM, ANFIS, and CBR. Based on the performance of the classification techniques, it is observed that face detection failures are reduced; the overall accuracy is 92% for CBR and 97% for ANFIS.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_83-Combination_of_Adaptive_Neuro_Fuzzy_Inference_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Anti-inflammatory Activity of Bio Copper Nanoparticle using an Innovative Soft Computing Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140681</link>
        <id>10.14569/IJACSA.2023.0140681</id>
        <doi>10.14569/IJACSA.2023.0140681</doi>
        <lastModDate>2023-06-30T12:23:03.7470000+00:00</lastModDate>
        
        <creator>Dyuti Banerjee</creator>
        
        <creator>G. Kiran Kumar</creator>
        
        <creator>Farrukh Sobia</creator>
        
        <creator>Subuhi Kashif Ansari</creator>
        
        <creator>Anuradha. S</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Copper; nanoparticles; green synthesis; prediction; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The objective of this work is to use a novel soft computing approach to predict the anti-inflammatory effect of bio copper nanoparticles. Using a modified technique, various doses of the Musa sapientum extract and copper nanoparticles were examined for their anti-inflammatory capabilities. Protein denaturation was evaluated, and an inhibition percentage was computed. The outcomes demonstrated that increasing the quantity of copper nanoparticles raised the inhibition percentage, indicating greater anti-inflammatory efficacy. An artificial neural network (ANN) was created to forecast the anti-inflammatory action based on the input variables of contact duration, operating temperature, and initial concentration. Using experimental data, the ANN model was developed, tested, and its performance assessed. The outcomes showed that the ANN model predicts the anti-inflammatory action with a high degree of accuracy. In summary, copper nanoparticles produced by Musa sapientum show considerable anti-inflammatory action, and the suggested soft computing technique, which included the creation of copper nanoparticles, accurately predicted their anti-inflammatory capabilities. This study aids in creating new methods for estimating the efficacy of bioactive nanoparticles in diverse therapeutic uses, such as the treatment of inflammation.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_81-Prediction_of_Anti_inflammatory_Activity_of_Bio_Copper_Nanoparticle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>End-to-End Real-time Architecture for Fraud Detection in Online Digital Transactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140680</link>
        <id>10.14569/IJACSA.2023.0140680</id>
        <doi>10.14569/IJACSA.2023.0140680</doi>
        <lastModDate>2023-06-30T12:23:03.7330000+00:00</lastModDate>
        
        <creator>ABBASSI Hanae</creator>
        
        <creator>BERKAOUI Abdellah</creator>
        
        <creator>ELMENDILI Saida</creator>
        
        <creator>GAHI Youssef</creator>
        
        <subject>Online fraud; big data analytics; fraud detection; behavior analysis; isolation forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The banking sector is witnessing fierce competition characterized by changing business models, new entrants such as FinTechs, and new customer behaviors. Financial institutions try to adapt to this trend and invent new ways and channels to reach and interact with their customers. While banks are opening up their services to avoid missing this shift, they become naturally exposed to fraud attempts through their digital banking platforms. Therefore, fraud prevention and detection are considered must-have capabilities. Detecting fraud at an optimal time requires developing and deploying scalable learning systems capable of ingesting and analyzing vast volumes of streaming records. Current improvements in data analytics algorithms and the advent of open-source technologies for big data processing and storage bring up novel avenues for fraud identification. In this article, we provide a real-time architecture for detecting transactional fraud via behavioral analysis that incorporates big data analysis techniques such as Spark, Kafka, and H2O with an unsupervised machine learning (ML) algorithm named Isolation Forest. The results of experiments on a significant dataset of digital transactions indicate that this architecture is robust, effective, and reliable across a large set of transactions, yielding an accuracy of 99% and a precision of 87%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_80-End-to-End Real-time Architecture for Fraud Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Underwater Image Enhancement using CNN and GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140679</link>
        <id>10.14569/IJACSA.2023.0140679</id>
        <doi>10.14569/IJACSA.2023.0140679</doi>
        <lastModDate>2023-06-30T12:23:03.7170000+00:00</lastModDate>
        
        <creator>Aparna Menon</creator>
        
        <creator>R Aarthi</creator>
        
        <subject>Convolutional neural network (CNN); generative adversarial networks (GAN); enhancing underwater visual perception (EUVP); underwater images; image enhancement; computer vision; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Underwater image-capturing technology has advanced over the years, and a variety of artificial intelligence-based applications have been developed on digital and synthetic images. Low-quality, low-resolution underwater images are a challenge for existing image processing pipelines in computer vision applications. Degraded or low-quality photos are common in the underwater imaging process due to natural factors like low illumination and scattering. Recent techniques use deep learning architectures like CNN, GAN, or other models for image enhancement. Although adversarial-based architectures provide good perceptual quality, they perform worse in quantitative tests than convolution-based networks. A hybrid technique is proposed in this paper that blends both designs to gain the advantages of the CNN and GAN architectures. The generator component produces images, which contributes to the creation of a sizable training set. The EUVP dataset is used for model training and testing. The PSNR score was observed to measure the visual quality of the resultant images produced by the models. The proposed system provides an improved image with higher PSNR and SSIM scores than state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_79-A_Hybrid_Approach_for_Underwater_Image_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Granularity Tooth Analysis via Faster Region-Convolutional Neural Networks for Effective Tooth Detection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140678</link>
        <id>10.14569/IJACSA.2023.0140678</id>
        <doi>10.14569/IJACSA.2023.0140678</doi>
        <lastModDate>2023-06-30T12:23:03.7000000+00:00</lastModDate>
        
        <creator>Samah AbuSalim</creator>
        
        <creator>Nordin Zakaria</creator>
        
        <creator>Salama A Mostafa</creator>
        
        <creator>Yew Kwang Hooi</creator>
        
        <creator>Norehan Mokhtar</creator>
        
        <creator>Said Jadid Abdulkadir</creator>
        
        <subject>Dental informatics; intra-oral image; deep learning; faster region-convolutional neural network; classification; granularity level; tooth detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In image classification, multi-granularity refers to the ability to classify images with different levels of detail or resolution. This is a challenging task because the distinction between subcategories is often minimal, needing a high level of visual detail and precise representation of the features specific to each class. In dental informatics, tooth classification in particular poses many challenges due to overlapping teeth and varying sizes, shapes, and illumination levels. To address these issues, this paper considers various data granularity levels, since a deeper level of detail can be acquired with increased granularity. Three tooth granularity levels are considered in this study, named Two Classes Granularity Level (2CGL), Four Classes Granularity Level (4CGL), and Seven Classes Granularity Level (7CGL), to analyze the performance of teeth detection and classification at multiple granularity levels on the Granular Intra-Oral Image (GIOI) dataset. Subsequently, a Faster Region-Convolutional Neural Network (FR-CNN) based on three ResNet models is proposed for teeth detection and classification at multiple granularity levels from the GIOI dataset. The FR-CNN-ResNet models exploit the tooth classification granularity technique to empower the models with accurate features that lead to improved performance. The results indicate a remarkable detection effect when investigating the influence of granularity on the FR-CNN-ResNet models&#39; performance. The FR-CNN-ResNet-50 model achieved 0.94 mAP for 2CGL, 0.74 mAP for 4CGL, and 0.69 mAP for 7CGL. The findings demonstrate that multi-granularity enables flexible and nuanced analysis of visual data, which can be useful in a wide range of applications.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_78-Multi_Granularity_Tooth_Analysis_via_Faster_Region_Convolutional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Global Structure Model for Unraveling Influential Nodes in Complex Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140677</link>
        <id>10.14569/IJACSA.2023.0140677</id>
        <doi>10.14569/IJACSA.2023.0140677</doi>
        <lastModDate>2023-06-30T12:23:03.6870000+00:00</lastModDate>
        
        <creator>Mohd Fariduddin Mukhtar</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Amir Hamzah Abdul Rasib</creator>
        
        <creator>Siti Haryanti Hairol Anuar</creator>
        
        <creator>Nurul Hafizah Mohd Zaki</creator>
        
        <creator>Ahmad Fadzli Nizam Abdul Rahman</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Abdul Samad Shibghatullah</creator>
        
        <subject>Centrality indices; combination; hybrid; global structure model; influential nodes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In graph analytics, the identification of influential nodes in real-world networks plays a crucial role in understanding network dynamics and enabling various applications. However, traditional centrality metrics often fall short in capturing the interplay between local and global network information. To address this limitation, the Global Structure Model (GSM) and its improved version (IGSM) have been proposed. Nonetheless, these models still lack an adequate representation of path length. This research aims to enhance existing approaches by developing a hybrid model called H-GSM. The H-GSM algorithm integrates the GSM framework with local and global centrality measurements, specifically Degree Centrality (DC) and K-Shell Centrality (KS). By incorporating these additional measures, the H-GSM model strives to improve the accuracy of identifying influential nodes in complex networks. To evaluate the effectiveness of the H-GSM model, real-world datasets are employed, and comparative analyses are conducted against existing techniques. The results demonstrate that the H-GSM model outperforms these techniques, showcasing its enhanced performance in identifying influential nodes. As future research directions, it is proposed to explore different combinations of index styles and centrality measures within the H-GSM framework.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_77-Hybrid_Global_Structure_Model_for_Unraveling_Influential_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Learning Set for the Detection of Jamming Attacks in 5G Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140676</link>
        <id>10.14569/IJACSA.2023.0140676</id>
        <doi>10.14569/IJACSA.2023.0140676</doi>
        <lastModDate>2023-06-30T12:23:03.6700000+00:00</lastModDate>
        
        <creator>Brou M&#233;dard KOUASSI</creator>
        
        <creator>Vincent MONSAN</creator>
        
        <creator>Abou Bakary BALLO</creator>
        
        <creator>Kacoutchy Jean AYIKPA</creator>
        
        <creator>Diarra MAMADOU</creator>
        
        <creator>Kablan J&#233;rome ADOU</creator>
        
        <subject>Jamming attacks; 5G mobile networks; ensemble learning; XGBOOST-ensemble learning; attack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Jamming attacks represent a significant problem in 5G mobile networks, requiring an effective detection mechanism to ensure network security. This study focused on finding effective methods for detecting these attacks using machine learning techniques. The effectiveness of Ensemble Learning and the XGBOOST-Ensemble Learning combination was evaluated by comparing their performance to other existing approaches. To carry out this study, the WSN-DS database, widely used in attack detection, was used. The results obtained show that the hybrid method, XGBOOST-Ensemble Learning, outperforms other approaches, including those described in the literature, with an accuracy ranging from 99.46% to 99.72%. This underlines the effectiveness of this method for accurately detecting jamming attacks in 5G networks. By using advanced machine learning techniques, the present study helps strengthen the security of 5G mobile networks by providing a reliable mechanism to detect and prevent jamming attacks. These encouraging results also open avenues for future research to further improve the accuracy and effectiveness of attack detection in radiocommunication in general and specifically in 5G networks, thereby ensuring better protection for next-generation wireless communications.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_76-Application_of_the_Learning_Set_for_the_Detection_of_Jamming_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Analysis of IT Infrastructure&#39;s Log Data with BERT Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140675</link>
        <id>10.14569/IJACSA.2023.0140675</id>
        <doi>10.14569/IJACSA.2023.0140675</doi>
        <lastModDate>2023-06-30T12:23:03.6530000+00:00</lastModDate>
        
        <creator>Deepali Arun Bhanage</creator>
        
        <creator>Ambika Vishal Pawar</creator>
        
        <subject>System log; log analysis; BERT; classification; failure prediction; failure detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Nowadays, failure detection and prediction have become a significant research focus for enhancing the reliability and availability of IT infrastructure components. Log analysis is an emerging domain aimed at diminishing downtime caused by the failure of IT infrastructure components. However, it can be challenging due to poor log quality and large data sizes. The proposed system automatically classifies logs based on log level and semantic analysis, allowing for a precise understanding of the meaning of log entries. Using the BERT pre-trained model, semantic vectors are generated for various IT infrastructures, such as Server Applications, Cloud Systems, Operating Systems, Supercomputers, and Mobile Systems. These vectors are then used to train machine learning (ML) classifiers for log categorization. The trained models are competent in classifying logs by comprehending the context of different types of logs. Additionally, semantic analysis outperforms sentiment analysis when dealing with unobserved log records. The proposed system significantly reduces engineers&#39; day-to-day error-handling work by automating the log analysis process.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_75-Robust_Analysis_of_IT_Infrastructures_Log_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motion Path Planning of Wearable Lower Limb Exoskeleton Robot Based on Feature Description</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140674</link>
        <id>10.14569/IJACSA.2023.0140674</id>
        <doi>10.14569/IJACSA.2023.0140674</doi>
        <lastModDate>2023-06-30T12:23:03.6370000+00:00</lastModDate>
        
        <creator>Ying Wang</creator>
        
        <creator>Songyu Sui</creator>
        
        <subject>Feature description; wearable lower limb exoskeleton robot; motion path planning; least square identification; geometric parameter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The wearable lower-limb exoskeleton robot is a kind of training equipment designed for people with lower-limb disabilities or weakness. To improve the robot&#39;s environmental adaptability and better match patients&#39; usage habits, its movement path must be planned and designed. A motion path planning model for the wearable lower-limb exoskeleton robot based on feature description is therefore proposed, which describes wearers with different wearing frequencies and training intensities. Taking the wearer&#39;s natural walking gait as the constraint feature quantity and the control object model, the spatial planning and design of exoskeleton structures such as the hip, knee, and ankle joints are adopted, and the traditional single-degree-of-freedom rotating pair is replaced by a four-bar mechanism, improving the bionic performance of the knee joint. Combining the feature description with the spatial planning algorithm model, an error compensation method based on iterative least squares is adopted to identify geometric parameters. A feature identification model for robot motion path planning is constructed, and adaptive strong-coupling tracking identification and path planning are realized using the feature description and the spatial distance error identification results. Simulation results show that planning the exoskeleton&#39;s movement path with this method reduces the cooperative positioning error and compensates the torque error in real time, yielding a better motion planning effect and enhancing the stability of the mechanism.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_74-Motion_Path_Planning_of_Wearable_Lower_Limb_Exoskeleton_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Medical Brain CT/MRI Image Fusion Algorithm based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140673</link>
        <id>10.14569/IJACSA.2023.0140673</id>
        <doi>10.14569/IJACSA.2023.0140673</doi>
        <lastModDate>2023-06-30T12:23:03.6230000+00:00</lastModDate>
        
        <creator>Dan Yang</creator>
        
        <subject>Convolutional neural network; image; integration; CT; MRI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In recent years, fused images have been developed for fast processing of medical images; because they contain information from multiple source images, they provide a more reliable basis for diagnosis and reduce the burden on physicians. To achieve fast and accurate recognition in medical image analysis, avoid similar blocks and shadow artifacts in CT/MRI fusion images, and improve the overall medical workflow, this study investigates CT/MRI fusion of brain images based on algorithms generated by a Convolutional Neural Network (CNN). The study uses a Rolling Guidance Filter (RGF) to divide the medical CT/MRI images into two parts, one used for model training and the other for image fusion. In the experiments, the results of all three trials are compared with the Nonsubsampled Contourlet Transform-Piecewise Convolutional Neural Network (NSCT-PCNN): the MI/IE/SSIM/AG values of CNN-RGF are superior to those of the conventional NSCT-PCNN algorithm, with an average improvement of 10.0% or more, and CNN-RGF detected meningitis, hydrocephalus, and cerebral infarction at a rate 24.8% higher on average than NSCT-PCNN. The outcomes show that, for brain image fusion and detection, the CNN-RGF approach put forward in this study performs better.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_73-Application_of_Medical_Brain_CTMRI_Image_Fusion_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Vision-based Approach for Optimizing Energy Consumption in Internet of Things and Smart Homes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140672</link>
        <id>10.14569/IJACSA.2023.0140672</id>
        <doi>10.14569/IJACSA.2023.0140672</doi>
        <lastModDate>2023-06-30T12:23:03.6070000+00:00</lastModDate>
        
        <creator>LIU Chenguang</creator>
        
        <subject>IoT; Internet of things; digital economics; smart cities; digitization; machine vision; YOLO; YOLOv5n</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>One of the primary forces behind digital transformation is how quickly the world is changing, and the world economy is being transformed by digital technology at a dizzying pace. The billions of daily online connections between individuals, organizations, devices, data, and processes that generate economic activity are known as the &quot;digital economy.&quot; The Internet, mobile technology, and the Internet of Things (IoT) all contribute to hyper-interconnection, the growing connectivity of people, organizations, and machines that forms the foundation of the digital economy. Alongside these developments, the demand for energy exceeds supply, leading to energy shortages, so new strategies are being developed to keep pace with demand. With the emergence and expansion of smart homes, there is a growing need for digitization in applications such as energy-efficient automation and safety, and with rising electricity consumption and the introduction of new energy sources, reducing household electricity costs becomes increasingly important. This article applies machine vision technology: a YOLO method is used for facial recognition, and among the YOLO variants compared, YOLOv5n was the fastest and most efficient. Deploying the YOLOv5s method on the Jetson Nano platform makes it possible to authenticate the residents of a house and, based on that identification, turn sources of energy consumption on or off. The presented system is therefore designed to optimize household energy consumption while ensuring the safety of the residents.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_72-An_Efficient_Vision_based_Approach_for_Optimizing_Energy_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Secure Activity Diagram for Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140671</link>
        <id>10.14569/IJACSA.2023.0140671</id>
        <doi>10.14569/IJACSA.2023.0140671</doi>
        <lastModDate>2023-06-30T12:23:03.5900000+00:00</lastModDate>
        
        <creator>Madhuri N. Gedam</creator>
        
        <creator>Bandu B. Meshram</creator>
        
        <subject>Unified modeling language; activity diagram; object constraint language; SQL injection; use case diagram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Unified Modeling Language (UML) activity diagrams are derived from use case diagrams, so it is essential to incorporate security features and maintain consistency between the diagrams during the analysis phase of the Software Development Life Cycle (SDLC). In current software development practice, software security must be a constant effort. Activity diagrams are used to model business processes, and this paper presents a detailed analysis of the activity diagram. The challenge lies in viewing the main activity diagram from an attacker&#39;s perspective and providing defense mechanisms to mitigate the attacks. This paper presents an extension of the activity diagram, named SecUML3Activity, that provides security through Object Constraint Language (OCL) constraints using Five Primary Security Input Validation Attributes (FPSIVA) parameters for input validation. It also proposes three security color-code notations and stereotypes for activity diagrams: white represents the activity diagram in its normal state, red with dotted lines represents attack activity components, and blue with double lines represents defensive activity components. Defense mechanism algorithms against SQL Injection (SQLI), Cross-Site Scripting (XSS), DoS/DDoS, and access validation attacks are provided, and the mapping of the Secure 3-Use Case diagram to the SecUML3Activity diagram is done through mathematical modeling.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_71-Proposed_Secure_Activity_Diagram_for_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Gravitational Search Algorithm Based on Improved Convergence Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140670</link>
        <id>10.14569/IJACSA.2023.0140670</id>
        <doi>10.14569/IJACSA.2023.0140670</doi>
        <lastModDate>2023-06-30T12:23:03.5770000+00:00</lastModDate>
        
        <creator>Norlina Mohd Sabri</creator>
        
        <creator>Ummu Fatihah Mohd Bahrin</creator>
        
        <creator>Mazidah Puteh</creator>
        
        <subject>Enhanced gravitational search algorithm; variant; improved convergence; exploration; exploitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The gravitational search algorithm (GSA) is a metaheuristic that has been widely applied to various optimization problems and performs well on highly nonlinear, complex problems. However, GSA has also been reported to have weak local search ability and slow convergence. This research proposes two new parameters, the mass ratio and the distance ratio, to improve GSA&#39;s convergence by strengthening its exploration and exploitation capabilities. The mass ratio parameter drives the exploration strategy, while the distance ratio parameter drives the exploitation strategy of the enhanced GSA (eGSA); together, these two parameters are expected to create a good balance between exploration and exploitation. Seven benchmark functions were tested on eGSA. The results show that, compared with two other GSA variants, eGSA achieves good performance in minimizing fitness values and execution times, confirming that the enhancements made to GSA successfully improve the algorithm&#39;s convergence as well as its solution quality and processing time. It is expected that eGSA can be applied in many fields and solve various optimization problems efficiently.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_70-Enhanced_Gravitational_Search_Algorithm_Based_on_Improved_Convergence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Security Policy for the Use of CCTV in the Northern Border University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140669</link>
        <id>10.14569/IJACSA.2023.0140669</id>
        <doi>10.14569/IJACSA.2023.0140669</doi>
        <lastModDate>2023-06-30T12:23:03.5600000+00:00</lastModDate>
        
        <creator>Ahmad Alshammari</creator>
        
        <subject>Closed circuit television; security policy; surveillance; educational institutes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The use of closed-circuit television (CCTV) in universities is a challenging task due to strong global opposition to its implementation at educational institutions. The Ministry of Higher Education of the Kingdom of Saudi Arabia (KSA) has initiated a plan for monitoring educational institutes across the Kingdom. This paper therefore proposes a new framework for developing a comprehensive security policy for using CCTV at Northern Border University, streamlining the implementation and usage of CCTV and the securing of its footage. The policy was developed by combining the principles of activity theory, international standards, and design science methodology. It considers six key elements from both theoretical and practical perspectives: government rules, technical aspects, training, security requirements, users, and legal issues. On this basis, a standard 12-principle policy was developed; to help organizations easily implement and evaluate the policy and secure the footage, the principles were classified into three categories: performance, security, and policy management. The findings showed that implementing the policy developed in this study not only improved the university&#39;s security measures but also built trust among the stakeholders, owing to the strong internal security and effective evaluation of the surveillance system.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_69-Developing_a_Security_Policy_for_the_Use_of_CCTV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Specular Highlight Removal using Generative Adversarial Network and Enhanced Grey Wolf Optimization Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140668</link>
        <id>10.14569/IJACSA.2023.0140668</id>
        <doi>10.14569/IJACSA.2023.0140668</doi>
        <lastModDate>2023-06-30T12:23:03.5430000+00:00</lastModDate>
        
        <creator>Maddikera Krishna Reddy</creator>
        
        <creator>J. C. Sekhar</creator>
        
        <creator>Vuda Sreenivasa Rao</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Jarubula Ramu</creator>
        
        <creator>R. Manikandan</creator>
        
        <subject>Highlight detection; optimization; specular highlight detection; GAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Image highlights play a major role in interactive media and computer vision tasks such as image segmentation, recognition, and matching. If an image contains highlights, the original data become unclear; highlights also reduce robustness on non-transparent and glassy objects and degrade accuracy. Hence, highlight removal is a crucial step in the domain of digital image enhancement, supporting texture enhancement in images and video analytics. Several state-of-the-art methods are used for removing highlights, but they face difficulties such as insufficient efficacy, limited accuracy, and small datasets. To overcome these issues, this paper proposes an optimized GAN technique. The Enhanced Grey Wolf Optimization (EGWO) technique is employed for the feature selection process. A Generative Adversarial Network is a machine learning (ML) model in which two neural networks compete with each other to produce better outputs; the algorithm generates realistic data, especially images, with strong practical results. The experimental outcomes reveal that the proposed algorithm can identify and eliminate illumination highlights in an image so that the real details can be recovered. The effectiveness of the proposed work is demonstrated by comparing the optimized GAN with other existing models on the highlight removal task, where it achieves a better accuracy of 99.91% than previous existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_68-Image_Specular_Highlight_Removal_using_Generative_Adversarial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical and Efficient Identity-based Encryption Against Side Channel Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140667</link>
        <id>10.14569/IJACSA.2023.0140667</id>
        <doi>10.14569/IJACSA.2023.0140667</doi>
        <lastModDate>2023-06-30T12:23:03.5300000+00:00</lastModDate>
        
        <creator>Qihong Yu</creator>
        
        <creator>Jian Shen</creator>
        
        <creator>Jiguo Li</creator>
        
        <creator>Sai Ji</creator>
        
        <subject>Identity-based encryption; side channel attack; hash proof system; composite order group</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Hierarchical identity-based encryption (HIBE) is very valuable and widely used in many settings. In the Internet of Things based on cloud services, efficient HIBE is likely to be applied in cloud service scenarios because of the limited computing ability of some terminal devices. Moreover, because side channel attacks undermine the security of cryptographic systems, the design of leakage-resilient cryptographic schemes has attracted increasing attention from cryptography researchers. In this study, an efficient leakage-resilient HIBE is constructed. (1) In essence, the scheme contains a hierarchical ID-based key encapsulation system; by applying an extractor to the encapsulated symmetric key, the scheme can resist disclosure of the symmetric key caused by side channel attacks, and the relative leakage ratio of the encapsulated key is close to 1. (2) We also construct a hierarchical identity-based hash proof system that underpins the security of our scheme. The proposed scheme not only resists side channel attacks but also has short public key parameters and computational efficiency, making it well suited to Internet of Things environments. (3) There is no limit to the hierarchy depth of the system; only the maximum hierarchy length must be given when the system is initialized.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_67-Hierarchical_and_Efficient_Identity_Based_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weight Optimization Based on Firefly Algorithm for Analogy-based Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140666</link>
        <id>10.14569/IJACSA.2023.0140666</id>
        <doi>10.14569/IJACSA.2023.0140666</doi>
        <lastModDate>2023-06-30T12:23:03.5130000+00:00</lastModDate>
        
        <creator>Ayman Jalal AlMutlaq</creator>
        
        <creator>Dayang N. A. Jawawi</creator>
        
        <creator>Adila Firdaus Binti Arbain</creator>
        
        <subject>Analogy-based estimation; firefly algorithm; software cost estimation; weight optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Proper cost estimation is one of the vital tasks in software project development. Owing to the complexity and uncertainty of the software development process, this task is ambiguous and difficult. Recently, analogy-based estimation (ABE) has become one of the popular approaches in this field due to its effectiveness and practicality in comparing completed projects with new projects to estimate development effort. However, despite its many achievements, this method cannot guarantee accurate estimates when confronting the complex relationship between independent features and software effort. In such cases, the performance of ABE can be improved by efficient feature weighting. This study introduces an enhanced estimation method that integrates the firefly algorithm (FA) with ABE to improve software development effort estimation (SDEE). The proposed model provides accurate identification of similar projects by optimizing the performance of the similarity function in the estimation process, in which the most relevant weights are assigned to project features to obtain more accurate estimates. A series of experiments was carried out on six real-world datasets. The statistical analysis showed that the integration of FA and ABE significantly outperformed the existing analogy-based approaches, especially on the ISBSG dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_66-Weight_Optimization_Based_on_Firefly_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Apache Spark in Healthcare: Advancing Data-Driven Innovations and Better Patient Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140665</link>
        <id>10.14569/IJACSA.2023.0140665</id>
        <doi>10.14569/IJACSA.2023.0140665</doi>
        <lastModDate>2023-06-30T12:23:03.4970000+00:00</lastModDate>
        
        <creator>Lalit Shrotriya</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Deepak Parashar</creator>
        
        <creator>Kushagra Mishra</creator>
        
        <creator>Sandeep Singh Rawat</creator>
        
        <creator>Harsh Pagare</creator>
        
        <subject>Apache spark; healthcare; patient; styling; predictive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The enormous amounts of data produced in the healthcare sector are managed and analyzed with the help of Apache Spark, an open-source distributed computing system. This case study examines how Spark is utilized in the healthcare industry to produce data-driven innovations and enhance patient care. The report gives a general overview of Spark&#39;s architecture, advantages, and healthcare use cases, such as managing electronic health records, predictive analytics for disease outbreaks, individualized medicine, medical image analysis, and remote patient monitoring. It also contains several case studies that highlight Spark&#39;s impact on lowering hospital readmission rates, detecting sepsis earlier, enhancing cancer research and therapy, and speeding up drug discovery. The report further identifies obstacles to employing Spark in healthcare, including data security and privacy, scalability and infrastructure, data integration and quality, and labor and skills shortages. Spark has overcome these obstacles by enabling efficient data-driven decision-making and enhancing patient outcomes, revolutionizing healthcare solutions. Additionally, the study looks at potential future advancements in healthcare, including the use of Spark with AI and ML, real-time analytics, the Internet of Medical Things (IoMT), enhanced interoperability and data sharing, and ethical standards. In conclusion, healthcare businesses can fully utilize Spark to transform their data into actionable insights that will enhance patient care and boost the efficiency of healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_65_Apache_Spark_in_Healthcare_Advancing_Data_Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Kalman Filter-based Signal Processing for Robot Target Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140663</link>
        <id>10.14569/IJACSA.2023.0140663</id>
        <doi>10.14569/IJACSA.2023.0140663</doi>
        <lastModDate>2023-06-30T12:23:03.4830000+00:00</lastModDate>
        
        <creator>Baofu Gong</creator>
        
        <subject>Motion target tracking; Kalman filter; CamShift algorithm; occlusion processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In the field of computer vision, tracking moving objects is a highly representative problem, so accurately and quickly tracking a target unit has become a research focus. Accordingly, a CamShift algorithm improved with a Kalman filtering algorithm is introduced to realize fast tracking of moving targets. The method uses the prediction function of the Kalman filter to predict the moving target in the next frame, transforming the global search problem into a local search problem and improving real-time performance. The experimental results show that, under complete occlusion, the trajectory produced by the unimproved algorithm deviates from the actual trajectory, whereas the improved algorithm shows no trajectory deviation. The error of the improved algorithm is about 4%, while the maximum error of the unimproved algorithm is about 90%. The improved algorithm reached the expected target accuracy after 110 and 78 training iterations in the X and Y coordinates, respectively, while the CamShift algorithm without Kalman filtering still failed to reach the expected error after 200 iterations in both coordinates. This indicates that the performance of the improved CamShift algorithm based on the Kalman filter is greatly improved, and that the algorithm proposed in this study is highly practical.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_63-Kalman_Filter_based_Signal_Processing_for_Robot_Target_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vehicle Path Planning Based on Gradient Statistical Mutation Quantum Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140664</link>
        <id>10.14569/IJACSA.2023.0140664</id>
        <doi>10.14569/IJACSA.2023.0140664</doi>
        <lastModDate>2023-06-30T12:23:03.4830000+00:00</lastModDate>
        
        <creator>Hui Li</creator>
        
        <creator>Huiping Qin</creator>
        
        <creator>Zi’ao Han</creator>
        
        <creator>Kai Lu</creator>
        
        <subject>Quantum genetic algorithm; path planning; gradient descent; adaptive mutation operator; quantum rotation gate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In the field of vehicle path planning, traditional intelligent optimization algorithms suffer from slow convergence, poor stability, and a tendency to fall into local extremes. Therefore, a gradient statistical mutation quantum genetic algorithm (GSM-QGA) is proposed. Building on dynamic rotation angle adjustment driven by chromosome fitness values, the quantum rotation gate adjustment strategy is improved by introducing the idea of gradient descent. Based on the statistical properties of chromosome change trends, a gradient-based mutation operator is designed to realize the mutation operation. Using the shortest path as the metric, a vehicle path planning model is built, and simulation experiments demonstrate the effectiveness of the modified algorithm in vehicle path planning. Compared with other optimization algorithms, the path planned by the improved algorithm is shorter and the search is more stable, and the algorithm is effectively prevented from falling into local optima.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_64-Vehicle_Path_Planning_Based_on_Gradient_Statistical_Mutation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Zero-Watermarking for Medical Images Based on Regions of Interest Detection using K-Means Clustering and Discrete Fourier Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140662</link>
        <id>10.14569/IJACSA.2023.0140662</id>
        <doi>10.14569/IJACSA.2023.0140662</doi>
        <lastModDate>2023-06-30T12:23:03.4670000+00:00</lastModDate>
        
        <creator>Rodrigo Eduardo Arevalo-Ancona</creator>
        
        <creator>Manuel Cedillo-Hernandez</creator>
        
        <subject>Zero-watermark; ROI detection; machine learning; k-means; image security; copyright protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Watermarking schemes ensure digital image security and copyright protection to prevent unauthorized distribution. Zero-watermarking methods do not modify the image, a requirement in tasks that demand image integrity, such as medical imaging. Zero-watermarking methods obtain specific features for the master share construction to protect the digital image. This paper proposes a zero-watermarking scheme based on K-means clustering for ROI detection to obtain those features. The K-means algorithm classifies the data according to proximity to the generated clusters and is applied for image segmentation to identify ROIs, detecting areas of the image that contain important information. The Discrete Fourier Transform (DFT) is then applied to the ROI features, using the high frequencies to increase robustness against geometric attacks. In addition, edge detection based on the Sobel operator is applied for QR code creation; this type of watermark avoids errors in watermark detection and increases the robustness of the watermarking system. The master share is created by an XOR logic operation between the features extracted from the selected ROI and the watermark. The method focuses on protecting the image even when it has been tampered with, whereas many proposed schemes focus only on protection against advanced image processing attacks. The experiments demonstrate that the presented algorithm is robust against both geometric and advanced signal-processing attacks, and that the DFT coefficients of the extracted ROI features increase its efficiency and robustness.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_62-Zero_Watermarking_for_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Breast Cancer using Convolutional Neural Networks with Learning Transfer Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140661</link>
        <id>10.14569/IJACSA.2023.0140661</id>
        <doi>10.14569/IJACSA.2023.0140661</doi>
        <lastModDate>2023-06-30T12:23:03.4500000+00:00</lastModDate>
        
        <creator>Victor Guevara-Ponce</creator>
        
        <creator>Ofelia Roque-Paredes</creator>
        
        <creator>Carlos Zerga-Morales</creator>
        
        <creator>Andrea Flores-Huerta</creator>
        
        <creator>Mario Aymerich-Lau</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <subject>Convolutional neural networks; transfer learning; deep learning; classification; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Breast cancer is the leading cause of mortality in women worldwide. One of the biggest challenges for physicians and technological support systems is early detection, because early-stage cancer is easier to treat and curative treatments are easier to establish. Currently, assistive technology systems use images to detect patterns in patients who have been found to have some type of cancer. This work aims to identify and classify breast cancer using deep learning models and convolutional neural networks (CNN) with transfer learning. For the breast cancer detection process, 7803 real images with benign and malignant labels were used, provided by BreaKHis on the Kaggle platform. The convolutional bases (parameters) of the pre-trained models VGG16, VGG19, ResNet-50, and Inception-V3 were used. The TensorFlow framework, Keras, and Python libraries were also used to retrain the parameters of the models proposed for this study. Metrics such as accuracy, error ratio, precision, recall, and F1-score were used to evaluate the models. The results show that the models based on VGG16, VGG19, ResNet-50, and Inception-V3 obtain accuracies of 88%, 86%, 97%, and 96%, recalls of 84%, 82%, 96%, and 96%, and F1-scores of 86%, 83%, 96%, and 95%, respectively. It is concluded that the model showing the best results is ResNet-50, which obtains high results on all the metrics considered, although the Inception-V3 model achieves very similar results to ResNet-50 on all metrics. In addition, these two models exceed the 95% threshold of correct results.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_61-Detection_of_Breast_Cancer_using_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Difficulty Adjustment of Serious-Game Based on Synthetic Fog using Activity Theory Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140660</link>
        <id>10.14569/IJACSA.2023.0140660</id>
        <doi>10.14569/IJACSA.2023.0140660</doi>
        <lastModDate>2023-06-30T12:23:03.4370000+00:00</lastModDate>
        
        <creator>Fresy Nugroho</creator>
        
        <creator>Puspa Miladin Nuraida Safitri Abdul Basid</creator>
        
        <creator>Firma Sahrul Bahtiar</creator>
        
        <creator>I. G. P. Asto Buditjahjanto</creator>
        
        <subject>Dynamic difficulty adjustment; serious-game; activity theory model; synthetic fog; synthetic player</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This study used the activity theory model to determine the dynamic difficulty adjustment of a serious-game based on synthetic fog. Different difficulty levels were generated in a 3-dimensional game environment by applying varying fog thickness. The activity theory model in serious-games aims to facilitate development analysis in terms of learning content, the equipment used, and the resulting in-game action. The difficulty level varies according to the player&#39;s ability, as the game is expected to reduce boredom and frustration. Furthermore, this study simulated scenarios with various conditions, scores, time remaining, and lives of synthetic players. The experimental results showed that the system can change the game environment with different fog thicknesses according to synthetic player parameters.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_60-Dynamic_Difficulty_Adjustment_of_Serious_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Behavior Intention of Chronic Illness Patients in Malaysia to Use IoT-based Healthcare Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140659</link>
        <id>10.14569/IJACSA.2023.0140659</id>
        <doi>10.14569/IJACSA.2023.0140659</doi>
        <lastModDate>2023-06-30T12:23:03.4030000+00:00</lastModDate>
        
        <creator>Huda Hussein Mohamad Jawad</creator>
        
        <creator>Zainuddin Bin Hassan</creator>
        
        <creator>Bilal Bahaa Zaidan</creator>
        
        <subject>Internet of things; IoT; chronic disease; adoption theories; adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The Internet of Things (IoT) has emerged as a trend in the healthcare industry to develop innovative solutions that enhance patient outcomes and operational efficiency. Healthcare has become more accessible, affordable, and efficient thanks to sensors, wearables, and health monitors. Despite these benefits, however, the healthcare industry&#39;s adoption of the Internet of Things lags behind other sectors. This study aims to investigate the extent to which chronic patients in Malaysia use healthcare services made possible by the Internet of Things. To that end, it proposes a unified framework to examine how the highlighted factors affect Behavioral Intention (BI) with regard to adopting IoT healthcare services. The innovation lies in bringing together three distinct theories: i) the Technology-Organization-Environment (TOE) Framework, which explains how organizations adopt new technologies; ii) the Unified Theory of Acceptance and Use of Technology (UTAUT); and iii) Social Exchange Theory (SE). Patients in Malaysia coping with long-term health issues were surveyed online, and SPSS and Smart Partial Least Squares (Smart PLS) were employed for data analysis. Eleven hypothesized predictive components were investigated. The results showed that chronic illness patients&#39; BI towards adopting IoT solutions was considerably affected by both individual and technological factors and related aspects; the impact of BI on Use Behaviour (UB) showed similar outcomes. Moreover, trust partially mediates the impact of both individual and technological factors on BI. The findings of this investigation will benefit policymakers and healthcare providers in Malaysia. Patients and their family members would also benefit, because the delivery of comprehensive treatment, especially in chronic disease management, will be improved through IoT healthcare services, and the Internet of Things will let medical staff work remotely and professionally.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_59-Behavior_Intention_of_Chronic_Illness_Patients_in_Malaysia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing IoT Security with Deep Stack Encoder using Various Optimizers for Botnet Attack Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140658</link>
        <id>10.14569/IJACSA.2023.0140658</id>
        <doi>10.14569/IJACSA.2023.0140658</doi>
        <lastModDate>2023-06-30T12:23:03.4030000+00:00</lastModDate>
        
        <creator>Archana Kalidindi</creator>
        
        <creator>Mahesh Babu Arrama</creator>
        
        <subject>Internet of things; botnet attacks; neural network methods; N-BaIoT; deep stack encoder; Adam optimizer; Adagrad optimizer; Adadelta optimizer; activation function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The Internet of Things (IoT) connects different sensors, devices, applications, databases, services, and people, bringing improvements to various aspects of our lives, such as cities, agriculture, finance, and healthcare. However, guaranteeing the safety and confidentiality of increasingly rich IoT data requires careful preparation and awareness. Machine learning techniques are used to predict different types of cyber-attacks, including denial of service (DoS), botnet attacks, malicious operations, unauthorized control, data probing, surveillance, scanning, and incorrect setups. In this study, to improve the security of IoT data, a method called the Deep Stack Encoder Neural Network is employed to predict botnet attacks using the N-BaIoT benchmark dataset. A new framework is introduced that improves the prediction rate to 94.5%. To evaluate the performance of this method, assessment criteria such as accuracy, precision, recall, and F1 score are adopted, comparing it with other models. Among the Adam, Adagrad, and Adadelta optimizers, Adam gave the highest accuracy with the ReLU activation function.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_58-Enhancing_IoT_Security_with_Deep_Stack_Encoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art Analysis of Multiple Object Detection Techniques using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140657</link>
        <id>10.14569/IJACSA.2023.0140657</id>
        <doi>10.14569/IJACSA.2023.0140657</doi>
        <lastModDate>2023-06-30T12:23:03.3870000+00:00</lastModDate>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Sandeep Singh Rawat</creator>
        
        <creator>Deepak Parashar</creator>
        
        <creator>Shivam Sharma</creator>
        
        <subject>Deep learning; neural networks; object detection; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Object detection has experienced a surge in interest due to its relevance in video analysis and image interpretation. Traditional object detection approaches relied on handcrafted features and shallow trainable algorithms, which limited their performance. However, the advancement of Deep Learning (DL) has provided more powerful tools that can extract semantic, high-level, and deep features, addressing the shortcomings of previous systems. Deep-learning-based object detection models differ in network architecture, training techniques, and optimization functions. In this study, common generic designs for object detection, along with various modifications and tips to enhance detection performance, have been investigated. Furthermore, future directions in object detection research, including advancements in neural-network-based learning systems and their open challenges, have been discussed. In addition, a comparative analysis of various versions of the YOLO approach for multiple object detection, based on performance parameters, has been presented.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_57-State_of_the_Art_Analysis_of_Multiple_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advances in Machine Learning and Explainable Artificial Intelligence for Depression Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140656</link>
        <id>10.14569/IJACSA.2023.0140656</id>
        <doi>10.14569/IJACSA.2023.0140656</doi>
        <lastModDate>2023-06-30T12:23:03.3730000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Depression; LIME; Explainable artificial intelligence; Machine learning; SHAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>There is a growing interest in applying AI technology in the field of mental health, particularly as an alternative that complements the limitations of human analysis, judgment, and accessibility in mental health assessments and treatments. Current mental health treatment services face a gap in which individuals who need help do not receive it, due to negative perceptions of mental health treatment, a shortage of professionals, and limited physical accessibility. To overcome these difficulties, there is a growing need for a new approach, and AI technology is being explored as a potential solution. Explainable artificial intelligence (X-AI), offering both accuracy and interpretability, can help improve the accuracy of expert decision-making, increase the accessibility of mental health services, and address the psychological problems of groups at high risk of depression. In this review, we examine the current use of X-AI technology in mental health assessments for depression. A review of six studies that used X-AI to discriminate high-risk groups for depression found that various algorithms, such as SHAP (SHapley Additive exPlanations) and Local Interpretable Model-Agnostic Explanations (LIME), were used for predicting depression. In psychiatric applications such as predicting depression, it is crucial that the justifications for AI predictions are clear and transparent. Therefore, ensuring the interpretability of AI models will be important in future research.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_56-Advances_in_Machine_Learning_and_Explainable_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Path Planning Algorithm Combining with A-Star Algorithm and Dynamic Window Approach Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140655</link>
        <id>10.14569/IJACSA.2023.0140655</id>
        <doi>10.14569/IJACSA.2023.0140655</doi>
        <lastModDate>2023-06-30T12:23:03.3570000+00:00</lastModDate>
        
        <creator>Kaiyu Li</creator>
        
        <creator>Xiugang Gong</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Tao Wang</creator>
        
        <creator>Rajesh Kumar</creator>
        
        <subject>AGV; path planning; A* algorithm; dynamic window approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In the Automated Guided Vehicle (AGV) warehouse system, the path planning algorithm for intelligent logistics vehicles is a key factor in ensuring stable and efficient operation. However, existing planning algorithms suffer from problems such as producing only a single route and being unable to intelligently evade moving obstacles. The academic community has proposed various solutions to these problems; although they have improved the efficiency and quality of path planning to some extent, they have not completely solved issues such as poor planning safety, a high number of path inflection points, poor path smoothness, and susceptibility to deadlocks, nor have they fully considered the running cost and practical implementation difficulty of the algorithms. To address these issues, this article studies the traditional A* algorithm and the Dynamic Window Approach (DWA) in depth and proposes a route planning method based on the fusion of the two. The method improves the A* algorithm by introducing a sub-node optimization algorithm to address problems such as poor global path planning safety and susceptibility to deadlock. Moreover, it reduces the number of global path inflection points and increases path consistency by improving the evaluation function and removing redundant points from the A* algorithm. Finally, by integrating the DWA algorithm, the intelligent logistics vehicle achieves dynamic obstacle avoidance for moving objects in the real world. Our MATLAB simulation results show that the algorithm significantly improves path smoothness, path length, path planning time, and environmental adaptability compared to traditional algorithms, and largely meets the path planning requirements of the AGV system for intelligent logistics vehicles.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_55-Towards_Path_Planning_Algorithm_Combining_with_A_Star_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hamming Distance Approach to Reduce Role Mining Scalability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140654</link>
        <id>10.14569/IJACSA.2023.0140654</id>
        <doi>10.14569/IJACSA.2023.0140654</doi>
        <lastModDate>2023-06-30T12:23:03.3400000+00:00</lastModDate>
        
        <creator>Nazirah Abd Hamid</creator>
        
        <creator>Siti Rahayu Selamat</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <creator>Mumtazimah Mohamad</creator>
        
        <subject>Role-based Access Control; role mining; hamming distance; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Role-based Access Control has become the standard of practice in many organizations for restricting access to limited resources in complicated infrastructures or systems. The main objective of role mining is to define appropriate roles that can be applied to the specified security access policies. However, the mining scales in this kind of setting are extensive and can place a huge load on system management. To resolve these problems, this paper proposes a model that implements the Hamming Distance approach, rearranging the existing matrix used as input data to overcome the scalability problem. The findings show that the generated file sizes of all datasets are substantially reduced compared to the original datasets. The Hamming Distance technique can successfully reduce the mining scale of datasets by between 30% and 47% and produce better candidate roles.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_54-Hamming_Distance_Approach_to_Reduce_Role_Mining_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Multi-Layer-Perceptron Training using Quadratic Interpolation Flower Pollination Neural Network on Non-Binary Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140653</link>
        <id>10.14569/IJACSA.2023.0140653</id>
        <doi>10.14569/IJACSA.2023.0140653</doi>
        <lastModDate>2023-06-30T12:23:03.3270000+00:00</lastModDate>
        
        <creator>Yulianto Triwahyuadi Polly</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Suprapto</creator>
        
        <creator>Bambang Sumiarto</creator>
        
        <subject>Quadratic interpolation; flower pollination algorithm; neural network; non-binary dataset; multi-layer-perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Machine Learning (ML) algorithms are widely used to solve classification problems. The biggest challenge of classification lies in the robustness of the ML algorithm across datasets with different characteristics. The Quadratic Interpolation Flower Pollination Neural Network (QIFPNN) is one such ML algorithm, whose capabilities have previously been measured on binary-type datasets. This research verifies that the remarkable ability of QIFPNN also extends to non-binary datasets with balanced and unbalanced data class characteristics. The Flower Pollination Neural Network (FPNN), Particle Swarm Optimisation Neural Network (PSONN), and Bat Neural Network (BANN) were used as comparisons. QIFPNN, FPNN, PSONN, and BANN were each used to train a Multi-Layer-Perceptron (MLP). The test results on five datasets show that QIFPNN obtains a higher average classification accuracy than its comparisons on three datasets with balanced and unbalanced data class characteristics, namely Iris, Wine, and Glass, with highest classification accuracies of 97.1462%, 98.6551%, and 73.1979%, respectively. The F1-score of QIFPNN is higher than that of all the comparisons on four datasets, Iris, Wine, Vertebral column, and Glass: 96.4599%, 98.7155%, 90.7517%, and 60.2843%, respectively. This proves that QIFPNN can also classify non-binary datasets with balanced and unbalanced data class characteristics, as it performs consistently across various datasets and is not susceptible to variations in dataset characteristics, so it can be applied to various types of data and cases.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_53_A_Novel_Approach_to_Multi_Layer_Perceptron_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Effects of 2D Animation on Business Law: Elements of a Valid Contract</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140652</link>
        <id>10.14569/IJACSA.2023.0140652</id>
        <doi>10.14569/IJACSA.2023.0140652</doi>
        <lastModDate>2023-06-30T12:23:03.3270000+00:00</lastModDate>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Nur Zulaiha Fadlan Faizal</creator>
        
        <creator>Shahril Parumo</creator>
        
        <creator>Hazira Saleh</creator>
        
        <subject>2D Animation; business law; elements of a valid contract; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This article presents an evaluation of Business Law 2D Animation: Elements of a Valid Contract. The developed application was produced to help business law students understand the contents of the topic Elements of a Valid Contract. An experiment was carried out to assess the usability of the application as learning and review material for business law students. This study comprised five major evaluation components, namely learnability, usability, accessibility, functionality, and effectiveness, to investigate user involvement and satisfaction with the proposed educational learning system. Online questionnaires were issued to acquire user testing results. There was a total of 63 respondents, including multimedia experts, students, and subject matter experts. The findings of the current study revealed that the majority of respondents were pleased with the outcomes of the animation. The results may help improve the teaching of the topic Elements of a Valid Contract for business law students, as the animation provides a visually appealing method of learning.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_52-Evaluation_of_the_Effects_of_2D_Animation_on_Business_Law.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Epileptic Seizure Detection using Improved Crystal Structure Algorithm with Stacked Autoencoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140651</link>
        <id>10.14569/IJACSA.2023.0140651</id>
        <doi>10.14569/IJACSA.2023.0140651</doi>
        <lastModDate>2023-06-30T12:23:03.3100000+00:00</lastModDate>
        
        <creator>Srikanth Cherukuvada</creator>
        
        <creator>R. Kayalvizhi</creator>
        
        <subject>Deep learning; EEG signals; epileptic seizure detection; hyperparameter tuning; stacked autoencoders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Epilepsy is a neurological disorder characterized by intractable seizures with serious consequences. To forecast such seizures, Electroencephalogram (EEG) data should be gathered continuously. EEG signals are recorded using numerous electrodes fixed on the scalp, which cannot be worn by patients continuously. Neurostimulators can intervene in advance to reduce the seizure rate, and their productivity is increased by heuristics such as advanced seizure prediction. In recent times, several authors have deployed various deep learning approaches for predicting epileptic seizures from EEG signals. In this work, an Automated Epileptic Seizure Detection using Improved Crystal Structure Algorithm with Stacked Autoencoder (AESD-ICSASAE) technique has been developed. The presented AESD-ICSASAE technique executes a three-stage process. At the initial level, it applies a min-max normalization approach to normalize the input data. Next, it uses an ICSA-based feature selection method for optimal choice of features. Finally, SAE-based classification takes place, with hyperparameter selection performed by the Arithmetic Optimization Algorithm (AOA). To depict the enhanced classification outcomes of the AESD-ICSASAE technique, a series of experiments was conducted. The proposed method&#39;s results have been tested on the CHB-MIT database, indicating an accuracy of 98.9%, the highest level of accuracy in seizure classification across all of the analyzed EEG data. A full set of experiments validated the AESD-ICSASAE method&#39;s enhancements.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_51-Automated_Epileptic_Seizure_Detection_using_Improved.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing COVID-19 Diagnosis Through a Hybrid CNN and Gray Wolf Optimizer Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140650</link>
        <id>10.14569/IJACSA.2023.0140650</id>
        <doi>10.14569/IJACSA.2023.0140650</doi>
        <lastModDate>2023-06-30T12:23:03.2930000+00:00</lastModDate>
        
        <creator>Yechun JIN</creator>
        
        <creator>Guanxiong ZHANG</creator>
        
        <creator>Jie LI</creator>
        
        <subject>Covid-19; respiratory disorder; medical imaging; convolutional neural networks; improved gray wolf algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Covid-19 is an infectious respiratory disorder caused by a novel coronavirus first identified in 2019. The severity of symptoms can vary from mild to life-threatening. With no vaccine or specific treatment yet developed for Covid-19, the most effective preventive measures are practicing social distancing and adhering to good hygiene practices. Medical imaging and convolutional neural networks are used in Covid-19 research to quickly identify infected individuals and detect changes in the lung tissue of those infected. Convolutional neural networks can analyze chest CT scans to detect potential signs of infection such as ground-glass opacities, which indicate the presence of Covid-19. This article introduces a powerful framework for classifying Covid-19 images utilizing a hybrid of a CNN and an improved version of the Gray Wolf Optimizer. To demonstrate the efficiency of the proposed framework, it is verified on a standard dataset and compared with other methods, with results indicating its superiority.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_50-Enhancing_COVID_19_Diagnosis_Through_a_Hybrid_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluations on Competitiveness of Service Sector in Yangtze River Economic Belt of China Based on Dual-core Diamond Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140649</link>
        <id>10.14569/IJACSA.2023.0140649</id>
        <doi>10.14569/IJACSA.2023.0140649</doi>
        <lastModDate>2023-06-30T12:23:03.2800000+00:00</lastModDate>
        
        <creator>Ming Zhao</creator>
        
        <creator>Qingjun Zeng</creator>
        
        <creator>Dan Wang</creator>
        
        <creator>Jiafu Su</creator>
        
        <subject>Dual-core diamond model; service sector; Yangtze river economic belt; principal component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>By expanding and innovating on Michael Porter&#39;s Diamond Model, a Dual-core Diamond Model is developed in this paper, with innovation and openness as the core factors, in consideration of the actual development needs of the service sector in the Yangtze River Economic Belt of China. The paper establishes an evaluation indicator system for service sector competitiveness and uses principal component analysis (PCA) to measure and evaluate the competitiveness of the service sector in 11 provinces and municipalities in the Yangtze River Economic Belt, based on the relevant data for those regions in 2015 and 2016. The research results indicate that the design of the Dual-core Diamond Model is in line with the current situation and future development needs of the service sector in the Yangtze River Economic Belt, and that the dual-core factors, namely innovation and openness, have become the most important factors influencing its competitiveness. Based on the model analysis results, strategies are proposed to enhance the competitiveness of the service industry in the Yangtze River Economic Belt: enhancing innovation ability and further expanding trade in services. Firstly, encourage the growth of related industries and create a coordinated development cluster for the service sector. Second, intensify efforts in talent cultivation and build a talent system aligned with the development of the service sector. Third, improve the relevant legal system and innovate the service supervision and governance system in the service sector. Last, focus on coordinated and integrated inter-region development.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_49-Evaluations_on_Competitiveness_of_Service_Sector_in_Yangtze_River.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auto-Regressive Integrated Moving Average Threshold Influence Techniques for Stock Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140648</link>
        <id>10.14569/IJACSA.2023.0140648</id>
        <doi>10.14569/IJACSA.2023.0140648</doi>
        <lastModDate>2023-06-30T12:23:03.2630000+00:00</lastModDate>
        
        <creator>Bhupinder Singh</creator>
        
        <creator>Santosh Kumar Henge</creator>
        
        <creator>Sanjeev Kumar Mandal</creator>
        
        <creator>Manoj Kumar Yadav</creator>
        
        <creator>Poonam Tomar Yadav</creator>
        
        <creator>Aditya Upadhyay</creator>
        
        <creator>Srinivasan Iyer</creator>
        
        <creator>Rajkumar A Gupta</creator>
        
        <subject>Dickey-Fuller test case (DF-TC); recurrent neural network (RNN); root mean square error (RMSE); long short-term memory (LSTM); machine learning (ML); auto-regressive integrated moving average (ARIMA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This study focuses on predicting and estimating possible stock assets in a favorable real-time scenario for financial markets, without the involvement of outside brokers in broadcast-based trading, using various performance factors and data metrics. Sample data from the Y-finance sector was assembled using API-based data series and was quite accurate and precise. The strong performance of machine learning algorithms on both classification and regression problems supports this approach. The volatility of stock movements produces noise and uncertainty that affect decision-making, and earlier research investigations used fewer performance metrics. In this study, Dickey-Fuller test scenarios were combined with time-series volatility forecasting and the Long Short-Term Memory algorithm, applied in a recurrent neural network setting to predict future closing prices of large businesses on the stock market. LSTM methods were combined with ARIMA in order to analyze the root mean squared error, mean squared error, mean absolute percentage error, mean deviation, and mean absolute error. The experimental scenarios were framed with fewer hardware resources, and test case simulations were carried out.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_48-Auto_Regressive_Integrated_Moving_Average_Threshold.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Skin Diseases Classification Through Dual Ensemble Learning and Pre-trained CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140647</link>
        <id>10.14569/IJACSA.2023.0140647</id>
        <doi>10.14569/IJACSA.2023.0140647</doi>
        <lastModDate>2023-06-30T12:23:03.2470000+00:00</lastModDate>
        
        <creator>Oussama El Gannour</creator>
        
        <creator>Soufiane Hamida</creator>
        
        <creator>Yasser Lamalem</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Shawki Saleh</creator>
        
        <creator>Abdelhadi Raihani</creator>
        
        <subject>Multi-modal approach; multi-task approach; transfer learning; deep learning; skin diseases classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Skin diseases represent a variety of disorders that can affect the skin. In fact, early diagnosis plays a central role in the treatment of this type of disease. This scholarly article introduces a novel approach to classifying skin diseases by leveraging two ensemble learning techniques, encompassing multi-modal and multi-task methodologies. The proposed classifier integrates diverse information sources, including skin lesion images and patient-specific data, aiming to enhance the accuracy of disease classification. By simultaneously utilizing image input and structured data input, the multi-task functionality of the classifier enables efficient disease classification. The integration of multi-modal and multi-task techniques allows for a comprehensive analysis of skin diseases, leading to improved classification performance and a more holistic understanding of the underlying factors influencing disease diagnosis. The efficacy of the classifier was assessed using the ISIC 2018 dataset, which comprises both image and clinical information for each patient with skin diseases. The dataset used in this study comprises images of seven different types of skin diseases and their associated medical information. The findings of our proposed approach show that it outperforms traditional single-modal and single-task classifiers. The results of this study demonstrate that the proposed model attained an accuracy of 97.66% for the initial classification task (image classification). Additionally, the second classification task (clinical data classification) achieved an accuracy of 94.40%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_47-Enhancing_Skin_Diseases_Classification_Through_Dual_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Feature Fusion Network for Lane Line Segmentation in Urban Traffic Scenes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140646</link>
        <id>10.14569/IJACSA.2023.0140646</id>
        <doi>10.14569/IJACSA.2023.0140646</doi>
        <lastModDate>2023-06-30T12:23:03.2330000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <subject>Lane line segmentation; deep learning; convolutional neural network; spatial and channel attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>As autonomous driving technology continues to advance at a rapid pace, the demand for precise and dependable lane detection systems has become increasingly critical. However, traditional methods often struggle with complex urban scenarios, such as crowded environments, diverse lighting conditions, unmarked lanes, curved lanes, and night-time driving. This paper presents a novel approach to lane line segmentation in urban traffic scenes with a Deep Feature Fusion Network (DFFN). The DFFN leverages the strengths of deep learning for feature extraction and fusion, aiming to enhance the accuracy and reliability of lane detection under diverse real-world conditions. To integrate multi-layer features, the DFFN employs both spatial and channel attention mechanisms in an appropriate manner. This strategy facilitates learning and predicting the relevance of each input feature during the fusion process. In addition, deformable convolution is employed in all up-sampling operations, enabling dynamic adjustment of the receptive field according to object scales and poses. The performance of DFFN is rigorously evaluated and compared with existing models, namely SCNN, ENet, and ENet-SAD, across different scenarios in the CULane dataset. Experimental results demonstrate the superior performance of DFFN across all conditions, highlighting its potential applicability in advanced driver assistance systems and autonomous driving applications.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_46-Deep_Feature_Fusion_Network_for_Lane_Line_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Model for Blood Cancer Classification Based on Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140645</link>
        <id>10.14569/IJACSA.2023.0140645</id>
        <doi>10.14569/IJACSA.2023.0140645</doi>
        <lastModDate>2023-06-30T12:23:03.2170000+00:00</lastModDate>
        
        <creator>Hagar Mohamed</creator>
        
        <creator>Fahad Kamal Elsheref</creator>
        
        <creator>Shrouk Reda Kamal</creator>
        
        <subject>Deep learning; convolutional neural networks (CNNs); leukemia; lymphoma; computer-aided diagnosis systems (CAD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Artificial intelligence and deep learning algorithms have become essential fields in medical science. These algorithms help doctors detect diseases early, reduce the incidence of errors, and decrease the time required for disease diagnosis, thereby saving human lives. Deep learning models are widely used in Computer-Aided Diagnosis Systems (CAD) for the classification of various diseases, including blood cancer. Early diagnosis of blood cancer is crucial for effective treatment and saving patients&#39; lives. Therefore, this study developed two distinct models to classify eight types of blood cancer. These types include follicular lymphoma (FL), mantle cell lymphoma (MCL), chronic lymphocytic leukemia (CLL), acute myeloid leukemia (AML), and the subtypes of acute lymphoblastic leukemia (ALL) known as early pre-B, pre-B, pro-B ALL, and benign. AML and ALL are specific classifications for human leukemia cancer, while FL, MCL, and CLL are specific classifications for lymphoma. Both models consist of different phases, including data collection, preprocessing, feature extraction techniques, and the classification process. The techniques applied in these phases are the same in both proposed models, except for the classification phase. The first model utilizes the VGG16 architecture, while the second model utilizes DenseNet-121. The results indicated that DenseNet-121 achieved a lower accuracy compared to VGG16. VGG16 exhibited excellent results, achieving an accuracy of 98.2% when classifying the eight classes. This outcome suggests that VGG16 is the most effective classifier for the utilized dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_45-A_New_Model_for_Blood_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>End to End Text to Speech Synthesis for Malay Language using Tacotron and Tacotron 2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140644</link>
        <id>10.14569/IJACSA.2023.0140644</id>
        <doi>10.14569/IJACSA.2023.0140644</doi>
        <lastModDate>2023-06-30T12:23:03.2000000+00:00</lastModDate>
        
        <creator>Azrul Fahmi Abdul Aziz</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <creator>Noraini Ruslan</creator>
        
        <subject>Text to speech; end-to-end TTS; Tacotron; Tacotron 2; Malay language; artificial intelligence; mean opinion score (MOS); naturalness; intelligibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Text-to-speech (TTS) technology is becoming increasingly popular in various fields such as education and business. However, the advancement of TTS technology for the Malay language is slower compared to other languages, especially English. The rise of artificial intelligence (AI) technology has taken TTS technology into a new dimension. An end-to-end (E2E) TTS system that generates speech directly from text input is one of the latest AI technologies for TTS, and implementing this E2E method for the Malay language will help to expand TTS technology for Malay. This study involves the development and comparison of two end-to-end TTS models for the Malay language, namely Tacotron and Tacotron 2. Both models were trained using a Malay corpus consisting of text and speech, and the synthesized speech was evaluated using Mean Opinion Scores (MOS) for naturalness and intelligibility. The results show that Tacotron outperformed Tacotron 2 in terms of naturalness and intelligibility, with both models falling short of human speech quality. Improving TTS technology for Malay can encourage its use in a wider range of contexts.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_44-End_to_End_Text_to_Speech_Synthesis_for_Malay_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-Dense U-Net: A Novel U-Net Architecture for Face Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140643</link>
        <id>10.14569/IJACSA.2023.0140643</id>
        <doi>10.14569/IJACSA.2023.0140643</doi>
        <lastModDate>2023-06-30T12:23:03.2000000+00:00</lastModDate>
        
        <creator>Ganesh Pai</creator>
        
        <creator>Sharmila Kumari M</creator>
        
        <subject>Semi-Dense U-Net; face detection; segmentation; U-Net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Face detection and localization have been a major field of study in facial analysis and computer vision. Several convolutional neural network-based architectures have been proposed in the literature, such as cascaded, single-stage, and two-stage architectures. Using image segmentation-based techniques for object/face detection and recognition has recently emerged as an alternative approach. In this paper, we propose detecting faces using U-Net segmentation architectures. Motivated by DenseNet, a variant of U-Net, called Semi-Dense U-Net, is designed to improve the binary masks generated by the segmentation model, which are further post-processed to detect faces. The proposed U-Net model has been trained and tested on the FDDB, Wider Face, and Open Image datasets and compared with state-of-the-art algorithms. We achieve a dice coefficient of 95.68% and an average precision of 91.60% on a set of test data from the Open Image dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_43-Semi_Dense_U_Net_A_Novel_U_Net_Architecture_for_Face_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-driven Decision Making in Higher Education Institutions: State-of-play</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140642</link>
        <id>10.14569/IJACSA.2023.0140642</id>
        <doi>10.14569/IJACSA.2023.0140642</doi>
        <lastModDate>2023-06-30T12:23:03.1870000+00:00</lastModDate>
        
        <creator>Silvia Gaftandzhieva</creator>
        
        <creator>Sadiq Hussain</creator>
        
        <creator>Slavoljub Hilcenko</creator>
        
        <creator>Rositsa Doneva</creator>
        
        <creator>Kirina Boykova</creator>
        
        <subject>Business intelligence; data analytics tools; decision-making framework; decision-making support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The paper highlights the importance of using data-driven decision-making tools in Higher Education Institutions (HEIs) to improve academic performance and support sustainable development. HEIs must utilize data analytics tools, including educational data mining, learning analytics, and business intelligence, to extract insights and knowledge from educational data. These tools can help HEIs’ leadership monitor and improve student enrolment campaigns, track student performance, evaluate academic staff, and make data-driven decisions. Although decision support systems have many advantages, they are still underutilized in HEIs, leaving room for further research and implementation. To address this, the authors summarize the benefits of applying data-driven decision approaches in HEIs and review various frameworks and methodologies, such as a course recommendation system and an academic prediction model, that aid educational decision-making. These tools articulate pedagogical theories, frameworks, and educational phenomena to establish the key components of learning and enable the design of better learning systems. The tools can be utilized by placement agencies or companies to identify probable trainees/recruits. They can help students with course selection, and educational management in becoming more efficient and effective.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_42-Data_driven_Decision_Making_in_Higher_Education_Institutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Type Identification and Size Measurement for Low-Voltage Metering Box Based on RGB-Depth Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140641</link>
        <id>10.14569/IJACSA.2023.0140641</id>
        <doi>10.14569/IJACSA.2023.0140641</doi>
        <lastModDate>2023-06-30T12:23:03.1700000+00:00</lastModDate>
        
        <creator>Pengyuan Liu</creator>
        
        <creator>Xurong Jin</creator>
        
        <creator>Shaokui Yan</creator>
        
        <creator>Tingting Hu</creator>
        
        <creator>Yuanfeng Zhou</creator>
        
        <creator>Ling He</creator>
        
        <creator>Xiaomei Yang</creator>
        
        <subject>Low-voltage metering box; RGB-D image processing; automated size detection; automated type detection; inspection automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The low-voltage metering box is a critical piece of equipment in the power supply system. The automated inspection of metering boxes is important in their production, transportation, installation, operation and maintenance. In this work, an automated type identification and size measurement method for low-voltage metering boxes based on RGB-D images is proposed. The critical components, including the door shell and window, connection terminal block, and metering compartment in the cabinet, are segmented first using the Mask-RCNN network. Then the proposed Sub-Region Closer-Neighbor algorithm is used to estimate the number of connection terminal blocks. Combined with the number of metering compartments, the type of metering box is classified. To refine the borders of the metering box components, an edge correction algorithm based on the Depth Difference (Dep-D) Constraint is presented. Finally, the automated size measurement is implemented based on the proposed Equal-Region Averaging algorithm. The experimental results show that the accuracies of the automated type identification and size measurement of the low-voltage metering box reach more than 92%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_41-Automated_Type_Identification_and_Size_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Security Techniques in Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140640</link>
        <id>10.14569/IJACSA.2023.0140640</id>
        <doi>10.14569/IJACSA.2023.0140640</doi>
        <lastModDate>2023-06-30T12:23:03.1530000+00:00</lastModDate>
        
        <creator>Sami Ghoul</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>Image steganography; data hiding; steganographic security; randomization; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Given the increased popularity of the internet, the exchange of sensitive information leads to concerns about privacy and security. Techniques such as steganography and cryptography have been employed to protect sensitive information. Steganography is one of the promising tools for securely exchanging sensitive information through an unsecured medium. It is a powerful tool for protecting a user’s data, wherein the user can hide messages inside other media, such as images, videos, and audio (cover media). Image steganography is the science of concealing secret information inside an image using various techniques. The nature of the embedding process makes the hidden information undetectable to human eyes. The challenges faced by image steganography techniques include achieving high embedding capacity, good imperceptibility, and high security. These criteria are inter-related, since enhancing one factor undermines one or more of the others. This paper provides an overview of existing research related to various techniques and security in image steganography. First, basic information in this domain is presented. Next, various kinds of security techniques used in steganography are explained, such as randomization, encryption, and region-based techniques. This paper covers research published from 2017 to 2022. This review is not exhaustive and aims to explore state-of-the-art techniques applied to enhance security, crucial issues in the domain, and future directions to assist new and current researchers.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_40-A_Review_on_Security_Techniques_in_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Personal Activity Recognition Under More Complex and Different Placement Positions of Smart Phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140639</link>
        <id>10.14569/IJACSA.2023.0140639</id>
        <doi>10.14569/IJACSA.2023.0140639</doi>
        <lastModDate>2023-06-30T12:23:03.1370000+00:00</lastModDate>
        
        <creator>Bhagya Rekha Sangisetti</creator>
        
        <creator>Suresh Pabboju</creator>
        
        <subject>Human activity recognition; deep learning; CNN; MHealth dataset; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Personal Activity Recognition (PAR) is an indispensable research area, as it is widely used in applications such as security, healthcare, gaming, surveillance, and remote patient monitoring. With sensors built into smart phones, data collection for PAR is made easy. However, PAR is a non-trivial and difficult task due to the bulk of data to be processed, its complexity, and varying sensor placement positions. Deep learning is found to be scalable and efficient in processing such data. However, the main problem with existing solutions is that they can recognize only up to 6 or 8 actions. Besides, they struggle to recognize other actions accurately and to deal with the complexity and different placement positions of the smart phone. To address this problem, in this paper we propose a framework named Robust Deep Personal Action Recognition Framework (RDPARF), based on an enhanced Convolutional Neural Network (CNN) model trained to recognize 12 actions. RDPARF is realized with our proposed algorithm, Enhanced CNN for Robust Personal Activity Recognition (ECNN-RPAR), which includes an early-stopping checkpoint to optimize resource consumption and achieve faster convergence. Experiments were conducted on the MHealth benchmark dataset collected from the UCI repository. Our empirical results reveal that ECNN-RPAR can recognize 12 actions under more complex and different placement positions of the smart phone, outperforming the state of the art with the highest accuracy of 96.25%.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_39-Deep_Learning_for_Personal_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Moroccan License Plate Recognition System Based on YOLOv5 Build with Customized Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140638</link>
        <id>10.14569/IJACSA.2023.0140638</id>
        <doi>10.14569/IJACSA.2023.0140638</doi>
        <lastModDate>2023-06-30T12:23:03.1230000+00:00</lastModDate>
        
        <creator>El Mehdi Ben Laoula</creator>
        
        <creator>Marouane Midaoui</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Omar Bouattane</creator>
        
        <subject>License plate recognition; YOLOv5; intelligent region segmentation; customized dataset; Moroccan license plate issues; fonts-based data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The rising number of automobiles has led to an increased demand for a reliable license plate identification system that can perform effectively in diverse conditions. This applies to local authorities, public organizations, and private companies in Morocco, as well as worldwide. To meet this need, a strong License Plate Recognition (LPR) system is required, taking into account local plate specifications and fonts used by plate manufacturers. This paper presents an intelligent LPR system based on the YOLOv5 framework, trained on a customized dataset encompassing multiple fonts and circumstances such as illumination, climate, and lighting. The system incorporates an intelligent region segmentation level that adapts to the plate&#39;s type, improving recognition accuracy and addressing separator issues. Remarkably, the model achieves an impressive precision rate of 99.16% on problematic plates with specific illumination, separators, and degradations. This research represents a significant advancement in the field of license plate recognition, providing a reliable solution for accurate identification and paving the way for broader applications in Morocco and beyond.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_38-Intelligent_Moroccan_License_Plate_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Based on Gray Wolf Optimization Algorithm for Internet of Things over Wireless Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140637</link>
        <id>10.14569/IJACSA.2023.0140637</id>
        <doi>10.14569/IJACSA.2023.0140637</doi>
        <lastModDate>2023-06-30T12:23:03.1070000+00:00</lastModDate>
        
        <creator>Chunfen HU</creator>
        
        <creator>Haifei ZHOU</creator>
        
        <creator>Shiyun LV</creator>
        
        <subject>Internet of things; energy consumption; clustering; optimization; gray wolf optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The Internet of Things (IoT) creates an environment where things are able to act, hear, listen, and talk. IoT devices encompass a wide range of objects, from basic sensors to intelligent devices, capable of exchanging information with or without human intervention. However, the integration of wireless nodes in IoT systems brings about both advantages and challenges. While wireless connectivity enhances system functionality, it also introduces constraints on resources, including power consumption, memory, and CPU processing capacity. Among these limitations, energy consumption emerges as a critical challenge. To address these challenges, metaheuristic algorithms have been widely employed to optimize routing patterns in IoT networks. This paper proposes a novel clustering strategy based on the Gray Wolf Optimization (GWO) algorithm. The GWO-based clustering approach aims to achieve energy efficiency and improve overall network performance. Experimental results demonstrate significant improvements in key performance metrics. Specifically, the proposed strategy achieves up to a 14% reduction in energy consumption, a 34% decrease in end-to-end delay, and a 10% increase in packet delivery rate compared to existing approaches. The findings of this research contribute to the advancement of energy-efficient and high-performance IoT networks. The utilization of the GWO algorithm for clustering enhances the network&#39;s ability to conserve energy, reduce latency, and improve the delivery of data packets. These outcomes highlight the effectiveness and potential of the proposed approach in addressing resource limitations and optimizing performance in IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_37-Clustering_Based_on_Gray_Wolf_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Intrusion Detection of Insider Threats in Industrial Control System Workstations Through File Integrity Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140636</link>
        <id>10.14569/IJACSA.2023.0140636</id>
        <doi>10.14569/IJACSA.2023.0140636</doi>
        <lastModDate>2023-06-30T12:23:03.0900000+00:00</lastModDate>
        
        <creator>Bakil Al-Muntaser</creator>
        
        <creator>Mohamad Afendee Mohamed</creator>
        
        <creator>Ammar Yaseen Tuama</creator>
        
        <subject>Industrial control system; insider threats; intrusion detection; file integrity monitoring; SCADA security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Industrial control systems (ICS) play a crucial role in various industries and ensuring their security is paramount for maintaining process continuity and reliability. In ICS, the most damaging cyber-attacks often come from trusted insiders rather than external threats or malware. Insiders have the advantage of bypassing security measures and staying undetected. This research focuses on developing a real-time intrusion detection system for ICS workstations that effectively detects insider threats while prioritizing user privacy. The approach employs file integrity monitoring to identify suspicious activities, particularly file violations such as data tampering and destruction. The model presented in this research demonstrates low system resource consumption by utilizing an event-triggered approach instead of continuous polling of file data. The model leverages built-in operating system functions, eliminating the need for third-party software installation. To minimize disruptions to the ICS network, the model operates at the supervisory level within the ICS architecture. Through extensive testing, the model achieves a high level of accuracy, detecting insider intrusions with a high true positive rate. This reliable detection capability contributes to enhancing the security of ICS and mitigating the risks associated with insider threats. By implementing this real-time intrusion detection system, organizations can effectively protect their control systems while preserving user privacy.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_36-Real_Time_Intrusion_Detection_of_Insider_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Evaluation of a Persuasive Learning Tool using Think-Aloud Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140635</link>
        <id>10.14569/IJACSA.2023.0140635</id>
        <doi>10.14569/IJACSA.2023.0140635</doi>
        <lastModDate>2023-06-30T12:23:03.0770000+00:00</lastModDate>
        
        <creator>Muhammad Aqil Abd Rahman</creator>
        
        <creator>Mohamad Hidir Mhd Salim</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <subject>Learning technology; persuasive technology; persuasive learning; persuasive design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>e-Learning has become a platform for students to gain and expand their knowledge through mobile applications or web-based systems. Even though e-learning systems usually aim to facilitate students&#39; understanding of the subject, some fail to convey the underlying learning outcomes. These circumstances arise because most e-learning methods or tools fail to keep students continuously engaged in their studies. Therefore, to overcome this problem, the Persuasive Learning Objects and Technologies (PLOT) model, which comprises persuasive design elements for online learning, was developed. A web-based statistical analysis assistant system called TemanKajianKu (Study Buddy) has been developed based on PLOT elements to assist students in identifying the correct approach to conducting and analyzing their experiments. This paper aims to evaluate users’ experience and examine the effectiveness of the persuasive design elements of the system. Ten participants were interviewed using the Think-Aloud protocol method. The study results showed that most participants conveyed positive opinions, giving good feedback on the system design. Most also stated that the system could help them make decisions by utilizing persuasive elements such as reduction, social signals, tunnelling, tailoring, and self-monitoring. This suggests that the Persuasive Learning Tool is effective in guiding the development of e-learning applications or web-based systems that help students in decision-making concerning their studies.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_35-The_Evaluation_of_a_Persuasive_Learning_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Arabic Intelligent Diagnosis Assistant for Psychologists using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140634</link>
        <id>10.14569/IJACSA.2023.0140634</id>
        <doi>10.14569/IJACSA.2023.0140634</doi>
        <lastModDate>2023-06-30T12:23:03.0600000+00:00</lastModDate>
        
        <creator>Asmaa Alayed</creator>
        
        <creator>Manar Alrabie</creator>
        
        <creator>Sarah Aldumaiji</creator>
        
        <creator>Ghaida Allhyani</creator>
        
        <creator>Sahar Siyam</creator>
        
        <creator>Reem Qaid</creator>
        
        <subject>Mental health; psychologist; mental illness diagnosis; psychological test; deep learning; CNN algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Mental illnesses have increased in recent years, especially after the COVID-19 pandemic. In Saudi Arabia, the number of psychiatric clinics is small compared to the population density. As a result, psychologists encounter a variety of difficulties at work. The main goal of the current research is to develop a system that assists psychologists in the diagnosis process, based on the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders). The work on this research started with collecting the requirements and identifying users’ needs. To this end, several interviews were conducted with Saudi psychologists, and a questionnaire was then developed and distributed to psychologists in Saudi Arabia. Following an analysis of the needs and requirements, the system was designed. A deep learning technique was applied during the diagnosing process to address the issues mentioned by psychologists. Additionally, the proposed system helps psychologists by quickly calculating the results of psychological tests. The system was built as a website. The Convolutional Neural Network (CNN) algorithm was used, achieving 96% accuracy, to automatically predict the appropriate diagnosis and suggest the most suitable psychological test for the patient to take. System testing and usability testing were also conducted, involving patients and Saudi psychologists, to assess the usability of the system and the accuracy of the CNN model. The results indicate that the diagnosis prediction was accurate and that each activity was completed faster. This demonstrates the model&#39;s high degree of accuracy and the clarity of the system&#39;s interfaces. Additionally, psychologists&#39; comments were encouraging and positive.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_34-An_Arabic_Intelligent_Diagnosis_Assistant_for_Psychologists.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Training of Deep Autoencoder for Network Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140633</link>
        <id>10.14569/IJACSA.2023.0140633</id>
        <doi>10.14569/IJACSA.2023.0140633</doi>
        <lastModDate>2023-06-30T12:23:03.0300000+00:00</lastModDate>
        
        <creator>Haripriya C</creator>
        
        <creator>Prabhudev Jagadeesh M. P</creator>
        
        <subject>Network intrusion detection systems; deep learning; autoencoders; cloud computing; distributed training; parallel computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The amount of data being exchanged over the internet is enormous. Attackers are finding novel ways to evade rules, probe network defenses, and launch successful attacks. Intrusion detection is one of the effective means to counter attacks. As network traffic continues to grow, it can be challenging for network administrators to detect intrusions. In huge networks connecting millions of computers, terabytes to zettabytes of data are generated every second. Deep learning is an effective means of analyzing network traffic and detecting intrusions. In this article, a distributed autoencoder is implemented on the CSE-CIC-IDS2018 dataset, considering all classes of the dataset. The proposed work is implemented on Azure Cloud using distributed training, as it helps speed up the training process and thereby detect intrusions faster. An overall accuracy of 98.96% is achieved. By incorporating such parallel computing into the security process, organizations can accomplish operations more quickly and respond to risks and remediate them at a rate that would not be possible with manual human capabilities alone.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_33-Distributed_Training_of_Deep_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature-based Transfer Learning to Improve the Image Classification with Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140632</link>
        <id>10.14569/IJACSA.2023.0140632</id>
        <doi>10.14569/IJACSA.2023.0140632</doi>
        <lastModDate>2023-06-30T12:23:03.0300000+00:00</lastModDate>
        
        <creator>Nina Sevani</creator>
        
        <creator>Kurniawati Azizah</creator>
        
        <creator>Wisnu Jatmiko</creator>
        
        <subject>Feature-transfer learning; image; feature selection; weight; distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In the big data era, there are several issues regarding real-world classification problems. Among the important challenges that still need to be overcome to produce an accurate classification model are data imbalance, difficulties in the labeling process, and differences in data distribution. Most classification problems are related to differences in data distribution and the lack of labels on some datasets while other datasets have abundant labels. To address this problem, this paper proposes a weighted-based feature-transfer learning (WbFTL) method to transfer knowledge between different but related domains, called cross-domain. The knowledge transfer is done by constructing a new feature representation that reduces the cross-domain distribution differences while maintaining the local structure of the domain. To build the new feature representation, we implement feature selection and inter-cluster class distance. We propose a two-stage feature selection process to capture the knowledge of each feature and its relation to the label. The first stage uses a threshold to select features. The second stage uses ANOVA (Analysis of Variance) to select features that are significant to the label. To enhance accuracy, the selected features are weighted before being used for training with SVM. The proposed WbFTL is compared to 1-NN and PCA as baseline 1 and baseline 2. Both baseline models represent traditional machine learning and dimensionality reduction methods, without transfer learning. It is also compared with TCA, the first feature-transfer learning work on this task, as baseline 3. The experimental results of 12 cross-domain tasks on the Office and Caltech datasets show that the proposed WbFTL can increase the average accuracy by 15.25%, 6.83%, and 3.59% compared to baseline 1, baseline 2, and baseline 3, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_32-A_Feature_based_Transfer_Learning_to_Improve_the_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Distance and Direction on Distress Keyword Recognition using Ensembled Bagged Trees with a Ceiling-Mounted Omnidirectional Microphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140631</link>
        <id>10.14569/IJACSA.2023.0140631</id>
        <doi>10.14569/IJACSA.2023.0140631</doi>
        <lastModDate>2023-06-30T12:23:03.0130000+00:00</lastModDate>
        
        <creator>Nadhirah Johari</creator>
        
        <creator>Mazlina Mamat</creator>
        
        <creator>Yew Hoe Tung</creator>
        
        <creator>Aroland Kiring</creator>
        
        <subject>Distress speech; ensemble bagged trees; audio surveillance; machine learning; distance; directions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Audio surveillance can provide an effective alternative to video surveillance in situations where the latter is impractical. Nevertheless, it is essential to note that audio recording raises privacy and legal concerns that require unambiguous consent from all parties involved. By utilizing keyword recognition, audio recordings can be filtered, allowing for the creation of a surveillance system that is activated by distress keywords. This paper investigates the performance of the Ensemble Bagged Trees (EBT) classifier in recognizing the distress keyword &quot;Please&quot; captured by a ceiling-mounted omnidirectional microphone in a room measuring 4.064m (length) x 2.54m (width) x 2.794m (height). The study analyzes the impact of different distances (0m, 1m, and 2m) and two directions (facing towards and away from the microphone) on recognition performance. Results indicate that the system is more sensitive and better able to identify targeted signals when they are farther away and facing toward the microphone. The validation process demonstrates excellent accuracy, precision, and recall values exceeding 98%. In testing, the EBT achieved a satisfactory recall rate of 86.7%, indicating moderate sensitivity, and a precision of 97.7%, implying less susceptibility to false alarms, a crucial feature of any reliable surveillance system. Overall, the findings suggest that a single omnidirectional microphone equipped with an EBT classifier is capable of detecting distress keywords in a low-noise enclosed room measuring up to 4.0 meters in length, 4.0 meters in width, and 2.794 meters in height. This study highlights the potential of employing an omnidirectional microphone and EBT classifier as an edge audio surveillance system for indoor environments.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_31-Effect_of_Distance_and_Direction_on_Distress_Keyword_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Technology Technical Support Success Factors in Higher Education: Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140630</link>
        <id>10.14569/IJACSA.2023.0140630</id>
        <doi>10.14569/IJACSA.2023.0140630</doi>
        <lastModDate>2023-06-30T12:23:02.9830000+00:00</lastModDate>
        
        <creator>Geeta Pursan</creator>
        
        <creator>Timothy. T. Adeliyi</creator>
        
        <creator>Seena Joseph</creator>
        
        <subject>Information technology; technical support services; key success factors; principal component analysis; higher education institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The use of information and communication technologies at higher education institutions is no longer an option, but rather a necessity. Information technology support is an essential function that entails giving end users assistance with hardware and software components. IT technical support has been recognized as a crucial element linked to student satisfaction because it helps students understand, access, and use technology efficiently. Identifying the essential success criteria that enable efficient and effective support for students and instructors will aid the successful implementation of IT technical support. Hence, the main aim of this study is to identify and rank the key success factors for the successful implementation of IT technical support at higher education institutions. 81 key success factors identified from 100 research papers were analyzed using principal component analysis. The findings led to the identification and ranking of 25 PCs. 95.35 percent of the observed variation was accounted for by the first 25 PCs with eigenvalues higher than 1. The cumulative percentages for the first 6 PCs were, in order, 11.87%, 22.21%, 30.64%, 38.25%, 45.12%, and 51.47%. This research provides useful information highlighting factors that can be used to examine areas in educational institutions that need continuous and special care to generate high student satisfaction, ensure future success, and gain a competitive advantage. These factors can assist the management of HEIs in determining the success or failure of an institution in terms of the technical support provided to students and student satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_30-Information_Technology_Technical_Support_Success_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Fuzzy Lexicon Expansion and Sentiment Aware Recommendation System in e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140629</link>
        <id>10.14569/IJACSA.2023.0140629</id>
        <doi>10.14569/IJACSA.2023.0140629</doi>
        <lastModDate>2023-06-30T12:23:02.9670000+00:00</lastModDate>
        
        <creator>Manikandan. B</creator>
        
        <creator>Rama. P</creator>
        
        <creator>Chakaravarthi. S</creator>
        
        <subject>Classification; e-commerce; preprocessing; recommendation system; recurrent neural network; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Customers’ feedback is essential for an online business to improve itself. Customer feedback reflects the quality of the products and of the e-commerce services. Companies must therefore carefully analyze customers’ feedback and reviews, applying new techniques to predict current trends, customers’ expectations, and the quality of their services. An e-business succeeds when it accurately predicts customer purchase patterns and expectations. For this purpose, we propose a new fuzzy-logic-incorporated, sentiment-analysis-based product recommendation system to predict customers’ needs and recommend suitable products. The proposed system incorporates a newly developed sentiment analysis model that performs classification through fuzzy temporal rules. Moreover, basic data preprocessing activities such as stemming, stop-word removal, syntax analysis, and tokenization are performed to enhance sentiment classification accuracy. Finally, the product recommendation system recommends suitable products to customers by predicting their needs and expectations. The proposed system is evaluated on the Amazon dataset and proves better than existing recommendation systems in terms of precision, recall, serendipity, and nDCG.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_29-A_New_Fuzzy_Lexicon_Expansion_and_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Conv-1D and Bi-LSTM to Classify and Detect Epilepsy in EEG Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140628</link>
        <id>10.14569/IJACSA.2023.0140628</id>
        <doi>10.14569/IJACSA.2023.0140628</doi>
        <lastModDate>2023-06-30T12:23:02.9670000+00:00</lastModDate>
        
        <creator>Chetana R</creator>
        
        <creator>A Shubha Rao</creator>
        
        <creator>Mahantesh K</creator>
        
        <subject>1D CNN; bidirectional LSTM; dataset (DS); deep learning; electroencephalogram (EEG); LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>EEG is used to study electrical changes in the brain and, with an automated method for accurate seizure detection, can support a conclusion of epileptic or not. Deep learning, a technique ahead of traditional machine learning tools, can self-discover relevant features for the detection and classification of EEG signals. Our work focuses on deep neural network architectures that capture the temporal dependencies in EEG signals. Algorithms and models based on deep learning techniques, namely Conv1D, Conv1D + LSTM, and Conv1D + Bi-LSTM, are applied for binary and multiclass classification. Convolutional Neural Networks can spontaneously extract and learn features independently from multichannel time-series EEG signals. The Long Short-Term Memory (LSTM) network, with its selective memory-retaining capability, a Fully Connected (FC) layer, and softmax activation, discovers hidden sparse features in EEG signals and predicts labels as output. Two independent LSTM networks operating in opposite directions combine to form a Bi-LSTM, gaining added visibility into upcoming information and thereby outperforming previous methods. Performance is assessed on long-term EEG recordings from the Bonn EEG database, the Hauz Khas epileptic database, and epileptic EEG signals from Spandana Hospital, Bangalore. Metrics such as precision, recall, f1-score, and support exhibit an improvement over traditional ML algorithms evaluated in the literature.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_28-Application_of_Conv_1D_and_Bi_LSTM_to_Classify.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speaker Recognition Improvement for Degraded Human Voice using Modified-MFCC with GMM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140627</link>
        <id>10.14569/IJACSA.2023.0140627</id>
        <doi>10.14569/IJACSA.2023.0140627</doi>
        <lastModDate>2023-06-30T12:23:02.9500000+00:00</lastModDate>
        
        <creator>Amit Moondra</creator>
        
        <creator>Poonam Chahal</creator>
        
        <subject>GMM; artificial intelligence; MFCC; fundamental frequency; melspectrum; speaker recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>A speaker’s voice is one of the speaker’s unique identifiers. Nowadays, not only humans but also machines can identify humans by their audio. Machines identify different audio properties of the human voice and classify speakers from their audio. Speaker recognition remains challenging with degraded human voice and limited datasets. A speaker can be identified effectively when feature extraction from the voice is more accurate. The Mel-Frequency Cepstral Coefficient (MFCC) is the most widely used method for human voice feature extraction. We introduce an improved feature extraction method for effective speaker recognition from degraded human audio signals. This article presents experimental results of a modified MFCC with a Gaussian Mixture Model (GMM) on a uniquely developed degraded human voice dataset. MFCC takes the human audio signal and transforms it into numerical values of audio characteristics, which are used to recognize the speaker efficiently with the help of a data science model. The experiment uses degraded human voice in which high background noise accompanies the audio signal. The experiment also covers the impact of sampling frequency (SF) on the overall speaker identification process when the signal-to-noise ratio (SNR) is low (up to 1 dB). With the modified MFCC, we observed improved speaker recognition when the speaker voice SNR is up to 1 dB, owing to a high SF and a low frequency range for the mel-scale triangular filter.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_27-Speaker_Recognition_Improvement_for_Degraded_Human_Voice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Detection of Autism Spectrum Disorder (ASD) using Traditional Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140626</link>
        <id>10.14569/IJACSA.2023.0140626</id>
        <doi>10.14569/IJACSA.2023.0140626</doi>
        <lastModDate>2023-06-30T12:23:02.9370000+00:00</lastModDate>
        
        <creator>Prasenjit Mukherjee</creator>
        
        <creator>Sourav Sadhukhan</creator>
        
        <creator>Manish Godse</creator>
        
        <creator>Baisakhi Chakraborty</creator>
        
        <subject>Support vector; logistic regression; cosine similarity; K-nearest neighbor; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Autism Spectrum Disorder (ASD) is a mental disorder among children that is difficult to diagnose at an early age. People with ASD have difficulty functioning in areas such as communication, social interaction, motor skills, and emotional regulation. They may also have difficulty processing sensory information and understanding language, which can lead to further difficulty in socializing. Early detection can help with learning coping skills, communication strategies, and other interventions that can make it easier for them to interact with the world. This kind of disorder is not curable, but it is possible to reduce the symptoms of ASD. Early-age detection of ASD helps to start several therapies corresponding to ASD symptoms. Detecting ASD symptoms at an early age of a child is our main problem, for which traditional machine learning algorithms such as Support Vector Machine, Logistic Regression, K-nearest neighbour, and Random Forest classifiers have been applied to parents’ dialogs to understand the sentiment of each statement about their child. After these models complete their predictions, each sentence related to positive ASD symptoms is used in a cosine similarity model for the detection of ASD problems. Samples of parents’ dialogs have been collected from social networks and special child training institutes. Data has been prepared according to the model for sentiment analysis. The accuracies of the proposed classifiers are 71%, 71%, 62%, and 69% on the prepared data. Another dataset has been prepared in which each sentence refers to a particular categorical ASD problem, and this has been used in the cosine similarity calculation for ASD problem detection.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_26-Early_Detection_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Point Cloud Classification Network Based on Multilayer Feature Fusion and Projected Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140625</link>
        <id>10.14569/IJACSA.2023.0140625</id>
        <doi>10.14569/IJACSA.2023.0140625</doi>
        <lastModDate>2023-06-30T12:23:02.9200000+00:00</lastModDate>
        
        <creator>Tengteng Song</creator>
        
        <creator>YiZhi He</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Jianbo Li</creator>
        
        <creator>Zhao Li</creator>
        
        <creator>Imran Saeed</creator>
        
        <subject>Point cloud; classification; graph convolution; attention mechanism; CLIP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Deep Learning (DL) based point cloud classification techniques now in use suffer from issues such as disregarding local feature extraction, missing connections between points, and failure to extract two-dimensional information features from point clouds. A point cloud classification network that utilizes multi-layer feature fusion and point cloud projection images is proposed to address the aforementioned problems and produce more accurate classification outcomes. Firstly, the network extracts local characteristics of point clouds through graph convolution to strengthen the connections between points. Then, a fusion attention mechanism is introduced to aggregate the useful characteristics of the point cloud while suppressing the useless ones, and the point cloud characteristics are combined by multi-layer feature fusion. Finally, a 3D point cloud network plug-in model based on point cloud projection images (3D CLIP) is proposed, which can make up for the inability of other 3D point cloud classification networks to extract two-dimensional information features from point clouds and address the low accuracy of similar-category recognition in datasets. The ModelNet40 dataset was used for classification studies, and the results show that the point cloud classification network, without the 3D CLIP plug-in model, achieves a classification accuracy of 92.5%. The point cloud classification network with the 3D CLIP plug-in model achieved a classification accuracy of 93.6%, demonstrating that this technique can successfully raise point cloud classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_25-Towards_Point_Cloud_Classification_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Convolutional Neural Networks using CCP-3 Block Architecture for Apparel Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140624</link>
        <id>10.14569/IJACSA.2023.0140624</id>
        <doi>10.14569/IJACSA.2023.0140624</doi>
        <lastModDate>2023-06-30T12:23:02.9030000+00:00</lastModDate>
        
        <creator>Natthamon Chamnong</creator>
        
        <creator>Jeeraporn Werapun</creator>
        
        <creator>Anantaporn Hanskunatai</creator>
        
        <subject>Convolutional neural networks (CNN); hierarchical CNN (H-CNN); CCP-3 block (two convolutional layers (CC) and one pooling layer (P) per block); apparel image classification; fashion applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In fashion applications, deep learning has been applied to automatically recognize and classify apparel images within the massive volume of visual data emerging on social networks. Classifying apparel correctly and quickly is challenging due to the variety of apparel features and the complexity of the classification. Recently, hierarchical convolutional neural networks (H–CNN) with the VGGNet architecture were proposed to classify the Fashion-MNIST datasets. However, the VGGNet (with many layers) required many filters (in the convolution layers) and many neurons (in the fully connected layers), leading to computational complexity and long training times. Therefore, this paper proposes to classify apparel images with an H–CNN in cooperation with the new shallow-layer CCP-3-Block architecture, where each building block consists of two convolutional layers (CC) and one pooling layer (P). The CCP-3-Block reduces the number of layers in the network, the number of filters in the convolution layers, and the number of neurons in the fully connected layers, while adding a new connection between the convolution layer and the pooling layer plus a batch-normalization step before the activation, so that the network can learn independently and train quickly. Moreover, dropout techniques were applied in the feature-mapping and fully connected layers to reduce overfitting, and the adaptive moment estimation optimizer was used to address decaying gradients, improving network performance. The experimental results showed that the improved H–CNN model with our CCP-3-Block outperformed the recent H–CNN model with the VGGNet in terms of decreased loss, increased accuracy, and faster training.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_24-Hierarchical_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Features Audio Extraction for Speech Emotion Recognition Based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140623</link>
        <id>10.14569/IJACSA.2023.0140623</id>
        <doi>10.14569/IJACSA.2023.0140623</doi>
        <lastModDate>2023-06-30T12:23:02.8870000+00:00</lastModDate>
        
        <creator>Jutono Gondohanindijo</creator>
        
        <creator>Muljono</creator>
        
        <creator>Edi Noersasongko</creator>
        
        <creator>Pujiono</creator>
        
        <creator>De Rosal Moses Setiadi</creator>
        
        <subject>Deep learning; multi-features extraction; RAVDESS; speech emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The increasing need for human interaction with computers makes the interaction process more advanced, one example of which is voice recognition. A voice command system also needs to consider the user&#39;s emotional state, because users indirectly treat computers like humans in general. By knowing a person&#39;s emotion, the computer can adjust the feedback it gives, so that the human-computer interaction (HCI) process runs more humanely. Based on the results of previous research, increasing the accuracy of recognizing types of human emotion remains a challenge, because not all types of emotion are expressed equally, especially across differences in language and cultural accent. This study proposes recognizing speech-based emotion types using multi-feature extraction and deep learning. The dataset is taken from the RAVDESS database and extracted using MFCC, Chroma, Mel-Spectrogram, Contrast, and Tonnetz features. Furthermore, PCA (Principal Component Analysis) and Min-Max Normalization techniques are applied to determine their impact. The data obtained from the pre-processing stage is then used by a Deep Neural Network (DNN) model to identify emotions such as calm, happy, sad, angry, neutral, fearful, surprised, and disgusted. Model testing uses the confusion matrix technique to determine the performance of the proposed method. The DNN model obtained an accuracy of 93.61%, a sensitivity of 73.80%, and a specificity of 96.34%. The use of multi-features in the proposed method improves the model&#39;s accuracy in determining the type of emotion on the RAVDESS dataset. In addition, the PCA method increases pattern correlation between features, so the classifier model shows performance improvements, especially in accuracy, specificity, and sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_23-Multi_Features_Audio_Extraction_for_Speech_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm Based on Self-balancing Binary Search Tree to Generate Balanced, Intra-homogeneous and Inter-homogeneous Learning Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140622</link>
        <id>10.14569/IJACSA.2023.0140622</id>
        <doi>10.14569/IJACSA.2023.0140622</doi>
        <lastModDate>2023-06-30T12:23:02.8730000+00:00</lastModDate>
        
        <creator>Ali Ben Ammar</creator>
        
        <creator>Amir Abdalla Minalla</creator>
        
        <subject>Learning group formation; balanced size groups; homogeneous groups; self-balancing binary search trees; greedy algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>This paper presents an algorithm, based on the self-balancing binary search tree, for forming learning groups. It aims to generate learning groups that are intra-homogeneous (student performance similarity within a group), inter-homogeneous (performance similarity between groups), and of balanced size. The algorithm mainly uses the 2-3 tree and the 2-3-4 tree as two implementations of a self-balancing binary search tree to form student blocks with close GPAs (grade point averages) and balanced sizes. Groups are then formed from those blocks in a greedy manner. Experiments showed the efficiency of the proposed algorithm, compared to traditional forming methods, in balancing group sizes and improving intra- and inter-homogeneity by up to 26%, regardless of the version of the self-balancing binary search tree used (2-3 or 2-3-4). For small samples of students, the 2-3-4 tree was distinguished for improving intra- and inter-homogeneity compared to the 2-3 tree. For large samples of students, experiments showed that the 2-3 tree was better than the 2-3-4 tree at improving inter-homogeneity, while the 2-3-4 tree was distinguished at improving intra-homogeneity.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_22-An_Algorithm_Based_on_Self_balancing_Binary_Search_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Cost Estimation using Stacked Ensemble Classifier and Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140621</link>
        <id>10.14569/IJACSA.2023.0140621</id>
        <doi>10.14569/IJACSA.2023.0140621</doi>
        <lastModDate>2023-06-30T12:23:02.8570000+00:00</lastModDate>
        
        <creator>Mustafa Hammad</creator>
        
        <subject>Software project management; effort estimation; prediction model; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Predicting the cost of the development effort is essential for successful projects. This helps software project managers to allocate resources, and determine budget or delivery date. This paper evaluates a set of machine learning algorithms and techniques in predicting the development cost of software projects. A feature selection algorithm is utilized to enhance the accuracy of the prediction process. A set of evaluations are presented based on basic classifiers and stacked ensemble classifiers with and without the feature selection approach. The evaluation study uses a dataset from 76 university students&#39; software projects. Results show that using a stacked ensemble classifier and feature selection technique can increase the accuracy of software cost prediction models.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_21-Software_Cost_Estimation_using_Stacked_Ensemble_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Accidents Risk Caused by Truck Drivers using a Fuzzy Bayesian Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140620</link>
        <id>10.14569/IJACSA.2023.0140620</id>
        <doi>10.14569/IJACSA.2023.0140620</doi>
        <lastModDate>2023-06-30T12:23:02.8400000+00:00</lastModDate>
        
        <creator>Imane Benallou</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Monir Azmani</creator>
        
        <subject>Heavy truck vehicle; road accident prevention; risk management; bayesian-fuzzy network; analysis tree event</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Road accidents cause hundreds of fatalities and injuries each year; due to their size and operating features, heavy trucks typically experience more severe accidents. Many factors are likely to cause such accidents; however, statistics mainly blame human error. This paper analyses the risk of accidents for heavy vehicles, focusing on driver-related factors contributing to accidents. A model is developed to anticipate the probability of an accident using Bayesian networks (BNs) and fuzzy logic. Three axioms were verified to validate the developed model, and a sensitivity analysis was performed to identify the factors with the most significant influence on truck accidents. Subsequently, the result provided by the model was used to examine the effects of in-vehicle road safety systems in preventing road accidents via an event tree analysis. The results underlined a strong link between the occurrence of accidents and driver-related parameters, such as alcohol and substance consumption, driving style, and reactivity. Similarly, unfavourable working conditions significantly impact the occurrence of accidents, since they contribute to fatigue, one of the leading causes of road accidents. The event tree analysis also highlighted the importance of equipping trucks with in-vehicle road safety systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_20-Evaluation_of_the_Accidents_Risk_Caused_by_Truck_Drivers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware with Classification Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140619</link>
        <id>10.14569/IJACSA.2023.0140619</id>
        <doi>10.14569/IJACSA.2023.0140619</doi>
        <lastModDate>2023-06-30T12:23:02.8400000+00:00</lastModDate>
        
        <creator>Mohd Azahari Mohd Yusof</creator>
        
        <creator>Zubaile Abdullah</creator>
        
        <creator>Firkhan Ali Hamid Ali</creator>
        
        <creator>Khairul Amin Mohamad Sukri</creator>
        
        <creator>Hanizan Shaker Hussain</creator>
        
        <subject>Malware; classification; machine learning; accuracy; false positive rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In today&#39;s digital landscape, the identification of malicious software has become a crucial undertaking. The ever-growing volume of malware threats renders conventional signature-based methods insufficient in shielding against novel and intricate attacks. Consequently, machine learning strategies have surfaced as a viable means of detecting malware. The following research report focuses on the implementation of classification machine learning methods for detecting malware. The study assesses the effectiveness of several algorithms, including Na&#239;ve Bayes, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Decision Tree, Random Forest, and Logistic Regression, through an examination of a publicly accessible dataset featuring both benign files and malware. Additionally, the influence of diverse feature sets and preprocessing techniques on the classifiers&#39; performance is explored. The outcomes of the investigation exhibit that machine learning methods can capably identify malware, attaining elevated precision levels and decreasing false positive rates. Decision Tree and Random Forest display superior performance compared to other algorithms with 100.00% accuracy. Furthermore, it is observed that feature selection and dimensionality reduction techniques can notably enhance classifier effectiveness while mitigating computational complexity. Overall, this research underscores the potential of machine learning approaches for detecting malware and offers valuable guidance for the development of successful malware detection systems.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_19-Detecting_Malware_with_Classification_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Socio Technical Framework to Improve Work Behavior During Smart City Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140618</link>
        <id>10.14569/IJACSA.2023.0140618</id>
        <doi>10.14569/IJACSA.2023.0140618</doi>
        <lastModDate>2023-06-30T12:23:02.8270000+00:00</lastModDate>
        
        <creator>Eko Haryadi</creator>
        
        <creator>Abdul Karim</creator>
        
        <creator>Lizawati Salahuddin</creator>
        
        <subject>Framework; socio technical; cybersecurity; behavior; threat; smart city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Every organization adheres to a security culture in its own way. Numerous studies have discovered that procrastinating, impulsive, forward-thinking, and risk-taking behaviors vary across organizations, which may help explain why organizations differ in their adherence to security policies. This study describes the human aspect of a government organization in contributing to the successful implementation of a smart city by minimizing cybersecurity threats. Improper employee behavior and a lack of understanding of cybersecurity negatively affect the successful development of smart cities. The purpose of this research is to develop a framework that determines the social and technical factors that improve work behavior. A socio-technical approach, applied through mixed methods, is used to explain how socio-technical integration can contribute to improving work behavior. The results indicated that several socio-technical factors, including technology, IT infrastructure, work organization, competency, training, and teamwork, contribute to improving work behaviors, which can serve as a basis for minimizing cybersecurity threats in smart city implementation.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_18-Socio_Technical_Framework_to_Improve_Work_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-branch Feature Fusion Model Based on Convolutional Neural Network for Hyperspectral Remote Sensing Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140617</link>
        <id>10.14569/IJACSA.2023.0140617</id>
        <doi>10.14569/IJACSA.2023.0140617</doi>
        <lastModDate>2023-06-30T12:23:02.8100000+00:00</lastModDate>
        
        <creator>Jinli Zhang</creator>
        
        <creator>Ziqiang Chen</creator>
        
        <creator>Yuanfa Ji</creator>
        
        <creator>Xiyan Sun</creator>
        
        <creator>Yang Bai</creator>
        
        <subject>Hyperspectral image classification; convolutional neural network (CNN); multi-branch network; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Hyperspectral image classification constitutes a pivotal research domain in the realm of remote sensing image processing. In the past few years, convolutional neural networks (CNNs) with advanced feature extraction capabilities have demonstrated remarkable performance in hyperspectral image classification. However, classification methods face the compounded difficulties of the &quot;curse of dimensionality&quot; and limited sample distinctiveness in hyperspectral images. Despite existing efforts to extract spectral-spatial information, low classification accuracy remains a persistent issue. Therefore, this paper proposes a multi-branch feature fusion classification model based on convolutional neural networks to fully extract more effective and adequate high-level semantic features. The proposed classification model first applies PCA dimensionality reduction, followed by a multi-branch network composed of three-dimensional and two-dimensional convolutions. Convolutional kernels of varying scales are utilized for multi-feature extraction. Among them, the 3D convolutions not only adapt to the cube structure of hyperspectral data but also fully exploit the spectral-spatial information, while the 2D convolutions learn deeper spatial information. The experimental results of the proposed model on three datasets demonstrate its superior performance over traditional classification models, enabling it to accomplish the task of hyperspectral image classification more effectively.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_17-A_Multi_branch_Feature_Fusion_Model_Based_on_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence Enabled Mobile Chatbot Psychologist using AIML and Cognitive Behavioral Therapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140616</link>
        <id>10.14569/IJACSA.2023.0140616</id>
        <doi>10.14569/IJACSA.2023.0140616</doi>
        <lastModDate>2023-06-30T12:23:02.7930000+00:00</lastModDate>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Zhandos Zhumanov</creator>
        
        <creator>Aidana Gumar</creator>
        
        <creator>Leilya Kuntunova</creator>
        
        <subject>Chatbot; artificial intelligence; machine learning; CBT; AIML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>In recent years, the demand for mental health services has increased exponentially, prompting the need for accessible, cost-effective, and efficient solutions. This paper introduces an Artificial Intelligence (AI) enabled mobile chatbot psychologist that leverages AIML (Artificial Intelligence Markup Language) and Cognitive Behavioral Therapy (CBT) to provide psychological support. The chatbot is designed to facilitate mental health care by offering personalized CBT interventions to individuals experiencing psychological distress. The proposed mobile chatbot psychologist employs AIML, a language created to facilitate human-computer interactions, to understand user inputs and generate contextually appropriate responses. To ensure the efficacy of the chatbot, it is equipped with a knowledge base comprising CBT principles and techniques, enabling it to provide targeted psychological interventions. The integration of CBT allows the chatbot to address a wide range of mental health issues, including anxiety, depression, stress, and phobias, by helping users identify and challenge cognitive distortions. The paper discusses the development and implementation of the mobile chatbot psychologist, detailing the AIML-based conversational engine and the incorporation of CBT techniques. The chatbot&#39;s effectiveness is evaluated through a series of user studies involving participants with varying levels of psychological distress. Results demonstrate the chatbot&#39;s ability to deliver personalized interventions, with users reporting significant improvements in their mental well-being. The AI-enabled mobile chatbot psychologist offers a promising solution to bridge the gap in mental health care, providing an easily accessible, cost-effective, and scalable platform for psychological support. This innovative approach can serve as a valuable adjunct to traditional therapy and help reduce the burden on mental health professionals, while empowering individuals to take charge of their mental well-being.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_16-Artificial_Intelligence_Enabled_Mobile_Chatbot_Psychologist.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bidirectional Long-Short-Term Memory with Attention Mechanism for Emotion Analysis in Textual Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140615</link>
        <id>10.14569/IJACSA.2023.0140615</id>
        <doi>10.14569/IJACSA.2023.0140615</doi>
        <lastModDate>2023-06-30T12:23:02.7800000+00:00</lastModDate>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Zhandos Zhumanov</creator>
        
        <subject>Deep learning; emotion detection; BiLSTM; machine learning; classification; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Emotion analysis in textual content plays a crucial role in various applications, including sentiment analysis, customer feedback monitoring, and mental health assessment. Traditional machine learning and deep learning techniques have been employed to analyze emotions; however, these methods often fail to capture complex and long-range dependencies in text. To overcome these limitations, this paper proposes a novel bidirectional long-short-term memory (Bi-LSTM) model for emotion analysis in textual content. The proposed Bi-LSTM model leverages the power of recurrent neural networks (RNNs) to capture both the past and future context of text, providing a more comprehensive understanding of the emotional content. By integrating the forward and backward LSTM layers, the model effectively learns the semantic representations of words and their dependencies in a sentence. Additionally, we introduce an attention mechanism to weigh the importance of different words in the sentence, further improving the model&#39;s interpretability and performance. To evaluate the effectiveness of our Bi-LSTM model, we conduct extensive experiments on Kaggle Emotion detection dataset. The results demonstrate that our proposed model outperforms several state-of-the-art baseline methods, including traditional machine learning algorithms, such as support vector machines and naive Bayes, as well as other deep learning approaches, like CNNs and vanilla LSTMs.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_15-Bidirectional_Long_Short_Term_Memory_with_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence-based Detection of Fava Bean Rust Disease in Agricultural Settings: An Innovative Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140614</link>
        <id>10.14569/IJACSA.2023.0140614</id>
        <doi>10.14569/IJACSA.2023.0140614</doi>
        <lastModDate>2023-06-30T12:23:02.7630000+00:00</lastModDate>
        
        <creator>Hicham Slimani</creator>
        
        <creator>Jamal El Mhamdi</creator>
        
        <creator>Abdelilah Jilbab</creator>
        
        <subject>Fava bean disease; deep learning; YOLOv8; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Traditional methods for identifying plant diseases mostly rely on expert opinion, which causes long waits and enormous expenses in the control of crop diseases and field activities, especially given that most existing crop infections present tiny targets, occlusions, and appearances similar to those of other diseases. To increase the efficiency and precision of rust disease classification in a fava bean field, a new optimized multilayer deep learning model based on YOLOv8 is proposed in this study. For the fava bean rust disease dataset, 3296 images were collected from a farm in eastern Morocco. We labeled all the data before training, evaluating, and testing our model. The results demonstrate that the model developed using transfer learning has a higher recognition precision than the other models, reaching 95.1%, and can classify and identify diseases into three severity levels: healthy, moderate, and critical. As performance indicators, the model achieved a mean Average Precision (mAP) of 93.7%, a recall of 90.3%, and an F1 score of 92%. The improved model&#39;s detection speed was 10.1 ms, sufficient for real-time detection. This study is the first to employ this method to find rust in fava bean crops. The results are encouraging and open new opportunities for crop disease research.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_14-Artificial_Intelligence_Based_Detection_of_Fava_Bean_Rust.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Characterization of Customer Churn Based on LightBGM and Experimental Approach for Mitigation of Churn</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140613</link>
        <id>10.14569/IJACSA.2023.0140613</id>
        <doi>10.14569/IJACSA.2023.0140613</doi>
        <lastModDate>2023-06-30T12:23:02.7470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Yusuke Nakagawa</creator>
        
        <creator>Ryoya Momozaki</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>Churn; LightBGM; churn characteristics; linear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>A method for customer churn characterization based on LightBGM (Light Gradient Boosting Machine) is proposed, together with experimental approaches for mitigating churn through churn prediction. The experiments reveal several churn characteristics: age dependency; gender dependency (with a high withdrawal rate among female customers); number-of-visits dependency (with a higher churn rate for customers with fewer visits); unit price (per hair salon visit) dependency (with a higher withdrawal rate for lower-priced services); date-of-first-visit dependency (with a high churn rate for recent customers); date-of-last-visit dependency; and menu dependency (with low attrition rates for gray hair dye and high attrition rates for school and child cuts). The experiments clarify these dependencies and show that the first visit date is the most significant factor in characterizing churn customers. It is also found that the “distance to hair salon” dependency may be related to the availability of parking lots, although this factor has an insignificant impact on the churn rate.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_13-Method_for_Characterization_of_Customer_Churn_Based_on_LightBGM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Intrusion Detection: A Novel Approach for Identifying Brute-Force Attacks on FTP and SSH Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140612</link>
        <id>10.14569/IJACSA.2023.0140612</id>
        <doi>10.14569/IJACSA.2023.0140612</doi>
        <lastModDate>2023-06-30T12:23:02.7330000+00:00</lastModDate>
        
        <creator>Noura Alotibi</creator>
        
        <creator>Majid Alshammari</creator>
        
        <subject>Artificial neural networks; machine learning; deep learning; intrusion detection system; detecting brute-force attacks on SSH and FTP protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>As networks continue to expand rapidly, the number and diversity of cyberattacks are also increasing, posing a significant challenge for organizations worldwide. Consequently, brute-force attacks targeting FTP and SSH protocols have become more prevalent. Intrusion detection systems (IDSes) offer an essential tool to detect these attacks, providing traffic analysis and system monitoring. Traditional IDSes employ signatures and anomalies to monitor information flow for malicious activity and policy violations; however, they often struggle to effectively identify unknown or novel patterns. In response, we propose a novel intelligent approach based on deep learning to detect brute-force attacks on FTP and SSH protocols. We conducted an extensive literature review and developed a metric to compare our work with the existing literature. Our findings indicate that our proposed approach achieves an accuracy of 99.9%, outperforming other comparable solutions in detecting brute-force attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_12-Deep_Learning_based_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proof of Spacetime as a Defensive Technique Against Model Extraction Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140611</link>
        <id>10.14569/IJACSA.2023.0140611</id>
        <doi>10.14569/IJACSA.2023.0140611</doi>
        <lastModDate>2023-06-30T12:23:02.7170000+00:00</lastModDate>
        
        <creator>Tatsuki Fukuda</creator>
        
        <subject>Proof of spacetime; model extraction attacks; machine learning; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>When providing a service that utilizes a machine learning model, countermeasures against cyber-attacks are required. The model extraction attack is one such attack, in which an attacker attempts to replicate the model by obtaining a large number of input-output pairs. While a defense using Proof of Work has already been proposed, an attacker can still conduct model extraction attacks by increasing their computational power. Moreover, this approach leads to unnecessary energy consumption and might not be environmentally friendly. In this paper, a defense method using Proof of Spacetime instead of Proof of Work is proposed to reduce energy consumption. Proof of Spacetime is a method that imposes spatial and temporal costs on the users of the service. While Proof of Work requires a user to compute until permission is granted, Proof of Spacetime requires a user to retain a result of computation, so energy consumption is reduced. Through computer simulations, it was found that systems with Proof of Spacetime, compared to those with Proof of Work, impose 0.79 times the power consumption and 1.07 times the temporal cost on attackers, and 0.73 times and 0.64 times, respectively, on non-attackers. Therefore, a system with Proof of Spacetime can prevent model extraction attacks with lower energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_11-Proof_of_Spacetime_as_a_Defensive_Technique_Against_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeLClustE: Protecting Users from Credit-Card Fraud Transaction via the Deep-Learning Cluster Ensemble</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140610</link>
        <id>10.14569/IJACSA.2023.0140610</id>
        <doi>10.14569/IJACSA.2023.0140610</doi>
        <lastModDate>2023-06-30T12:23:02.7170000+00:00</lastModDate>
        
        <creator>Fidelis Obukohwo Aghware</creator>
        
        <creator>Rume Elizabeth Yoro</creator>
        
        <creator>Patrick Ogholoruwami Ejeh</creator>
        
        <creator>Christopher Chukwufunaya Odiakaose</creator>
        
        <creator>Frances Uche Emordi</creator>
        
        <creator>Arnold Adimabua Ojugo</creator>
        
        <subject>Fraud transactions; fraud detection; deep learning ensemble; credit card fraud; cluster modeling; financial inclusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Fraud is the unlawful acquisition of valuable assets gained via intentional misrepresentation. It is a crime committed by either an internal or external user and is associated with acts of theft, embezzlement, and larceny. The proliferation of credit cards to aid financial inclusiveness has its usefulness, yet it also attracts malicious attacks for gain. Attempts to classify fraudulent credit card transactions have yielded formal taxonomies as these attacks seek to evade detection. We propose a deep learning ensemble combining a profile hidden Markov model with a deep neural network, which is poised to classify credit-card fraud with a high degree of accuracy, reduce errors, and operate in a timely fashion. The result shows the ensemble effectively classified benign transactions with a precision of 97 percent. Thus, we posit a new scheme that is more logical, intuitive, reusable, exhaustive, and robust in classifying such fraudulent transactions based on the attack source, cause(s), and attack time gap.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_10-DeLClustE_Protecting_Users_from_Credit_Card_Fraud_Transaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fine-grained Access Control Model with Enhanced Flexibility and On-chain Policy Execution for IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140609</link>
        <id>10.14569/IJACSA.2023.0140609</id>
        <doi>10.14569/IJACSA.2023.0140609</doi>
        <lastModDate>2023-06-30T12:23:02.7000000+00:00</lastModDate>
        
        <creator>Hoang-Anh Pham</creator>
        
        <creator>Ngoc Nhuan Do</creator>
        
        <creator>Nguyen Huynh-Tuong</creator>
        
        <subject>Attribute-based Access Control (ABAC); Internet of Things (IoT); blockchain; substrate framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Blockchain-based access control mechanisms have garnered significant attention in recent years due to their potential to address the security and privacy challenges in the Internet of Things (IoT) ecosystem. IoT devices generate massive amounts of data that are often transmitted to cloud-based servers for processing and storage. However, these devices are vulnerable to attacks and unauthorized access, which can lead to data breaches and privacy violations. Blockchain-based access control mechanisms can provide a secure and decentralized solution to these issues. This paper presents an improved Attribute-based Access Control (ABAC) approach with enhanced flexibility, which utilizes decentralized identity management on the Substrate Framework, codifies access control policies in the Rust programming language, and executes access control policies on-chain. The proposed design ensures trust and security while enhancing flexibility compared to existing works. In addition, we implement a proof of concept (PoC) to demonstrate the feasibility of the approach and investigate its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_9-A_Fine_grained_Access_Control_Model_with_Enhanced_Flexibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ConvNeXt-based Mango Leaf Disease Detection: Differentiating Pathogens and Pests for Improved Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140608</link>
        <id>10.14569/IJACSA.2023.0140608</id>
        <doi>10.14569/IJACSA.2023.0140608</doi>
        <lastModDate>2023-06-30T12:23:02.6870000+00:00</lastModDate>
        
        <creator>Asha Rani K P</creator>
        
        <creator>Gowrishankar S</creator>
        
        <subject>Mango disease; pest; pathogens; machine learning; deep learning; ConvNeXt models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Mango farming is a key economic activity in several locations across the world. Mango trees are prone to various diseases caused by pathogens and pests, which can substantially impair crops and affect farmers&#39; revenue. Early diagnosis of these diseases is essential to stop their spread and to lessen the crop damage they cause. Owing to recent developments in machine learning, growing interest has been shown in employing deep learning models to create automated disease detection systems for crops. This research article includes a study on the application of ConvNeXt models for the diagnosis of pathogen- and pest-caused illnesses in mango plants. The study intends to investigate the variation in how these illnesses manifest on mango leaves and to assess the efficiency of ConvNeXt models in identifying and categorizing them. The dataset used in the study includes images of healthy mango leaves as well as leaves with a variety of illnesses brought on by pathogens and pests. Deep learning models were applied to classify mango pests and pathogens. The models achieved high accuracy on both datasets, with better performance on the pathogen dataset. Larger models consistently outperformed smaller ones, indicating their ability to learn complex features. The ConvNeXtXLarge model showed the highest accuracy: 98.79% for mango pests, 100% for mango pathogens, and 99.17% for the combined dataset. This work holds significance for mango disease detection, aiding efficient management and offering potential economic benefits for farmers. However, the models&#39; performance can be influenced by dataset quality, preprocessing techniques, and hyperparameter selection.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_8-ConvNeXt_based_Mango_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating OpenAI’s ChatGPT Potentials in Generating Chatbot&#39;s Dialogue for English as a Foreign Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140607</link>
        <id>10.14569/IJACSA.2023.0140607</id>
        <doi>10.14569/IJACSA.2023.0140607</doi>
        <lastModDate>2023-06-30T12:23:02.6530000+00:00</lastModDate>
        
        <creator>Julio Christian Young</creator>
        
        <creator>Makoto Shishido</creator>
        
        <subject>ChatGPT; chatbots as learning partners; EFL chatbot system; dialogue creation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>A lack of opportunities is a significant hurdle for English as a Foreign Language (EFL) students during their learning journey. Previous studies have explored the use of chatbots as learning partners to address this issue. However, the success of chatbot implementation depends on the quality of the reference dialogue content, yet research focusing on this subject is still limited. Typically, human experts are involved in creating suitable dialogue materials for students to ensure the quality of such content. Research attempting to utilize artificial intelligence (AI) technologies for generating dialogue practice materials is relatively limited, given the constraints of existing AI systems, which may produce incoherent output. This research investigates the potential of leveraging OpenAI&#39;s ChatGPT, an AI system known for producing coherent output, to generate reference dialogues for an EFL chatbot system. The study aims to assess the effectiveness of ChatGPT in generating high-quality dialogue materials suitable for EFL students. By employing multiple readability metrics, we analyze the suitability of ChatGPT-generated dialogue materials and determine the target audience that can benefit the most. Our findings indicate that ChatGPT&#39;s dialogues are well-suited for students at the Common European Framework of Reference for Languages (CEFR) level A2 (elementary level). These dialogues are easily comprehensible, enabling students at this level to grasp most of the vocabulary used. Furthermore, a substantial portion of the dialogues intended for CEFR B1 (intermediate level) provides ample stimulation for learning new words. The integration of AI-powered chatbots in EFL education shows promise in overcoming these limitations and providing valuable learning resources to students.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_7-Investigating_OpenAIs_ChatGPT_Potentials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instructional Digital Model to Promote Virtual Teaching and Learning for Autism Care Centres</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140606</link>
        <id>10.14569/IJACSA.2023.0140606</id>
        <doi>10.14569/IJACSA.2023.0140606</doi>
        <lastModDate>2023-06-30T12:23:02.6370000+00:00</lastModDate>
        
        <creator>Norziana Yahya</creator>
        
        <creator>Nazean Jomhari</creator>
        
        <creator>Mohd Azahani Md Taib</creator>
        
        <creator>Nahdatul Akma Ahmad</creator>
        
        <subject>Instructional digital model; virtual teaching and learning; autism; online learning; pandemic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The COVID-19 pandemic has led to temporary school closures affecting over 90% of students worldwide. This has exacerbated educational inequality, particularly for students with learning disabilities such as autism spectrum disorder (ASD), disrupting the routines, services, and support they rely on. To address this issue, it is important to investigate virtual teaching and learning (VTL) strategies that can provide a more effective learning experience for these unique learners. The main objectives of this research are twofold: to investigate the challenges faced by teachers and students with ASD in Malaysia when adapting to online education, and to explore how the learning process occurs during the pandemic. Additionally, the study aimed to identify suitable VTL technology for autism care centres. Four autism care centres were visited for on-site observation activities, and interviews were conducted with the care centre principals. Two sets of online questionnaires were distributed to 10 autism care centres, where 6 principals and 16 teachers provided feedback. The data collected through on-site observations, interviews, and online questionnaires were then analysed to construct an instructional digital model (IDM) for VTL. The model serves as a significant guide for the development of a VTL platform for autism care centres. Finally, a VTL platform development framework was created, which provides a structure for system developers to conduct further research on the development of a VTL platform based on the IDM. The framework aims to facilitate the successful implementation of VTL.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_6-Instructional_Digital_Model_to_Promote_Virtual_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Pasture Classification Method using Ground-based Camera and the Modified Green Red Vegetation Index (MGRVI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140605</link>
        <id>10.14569/IJACSA.2023.0140605</id>
        <doi>10.14569/IJACSA.2023.0140605</doi>
        <lastModDate>2023-06-30T12:23:02.6230000+00:00</lastModDate>
        
        <creator>Boris Evstatiev</creator>
        
        <creator>Tsvetelina Mladenova</creator>
        
        <creator>Nikolay Valov</creator>
        
        <creator>Tsenka Zhelyazkova</creator>
        
        <creator>Mariya Gerdzhikova</creator>
        
        <creator>Mima Todorova</creator>
        
        <creator>Neli Grozeva</creator>
        
        <creator>Atanas Sevov</creator>
        
        <creator>Georgi Stanchev</creator>
        
        <subject>Pasture biomass; MGRVI; ground-based camera; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>The assessment of aboveground biomass is important for achieving rational usage of pasture resources and for maximizing the quantity and quality of milk and meat production. This study presents a method for fast approximation of pastures’ biomass. Unlike most similar studies, which rely on data obtained from unmanned aerial vehicles and satellites, this study focuses on photos taken by a stationary or mobile ground-based visible-spectrum camera. The developed methodology uses raster analysis, based on the MGRVI index, to classify the pasture into two categories: “grazed” and “ungrazed”. Thereafter, the methodology accounts for perspective in order to obtain the actual area of each class in square meters and in percent. The methodology was applied to an experimental pasture located near the city of Troyan (Bulgaria). Two images were selected, the first representing a mostly ungrazed pasture and the second a mostly grazed one. The images were then analyzed using QGIS 3.0 as well as a specially developed software tool. An important advantage of the proposed methodology is that it does not require expensive equipment or specialized technical knowledge, as it relies on commonly available tools such as mobile phone cameras.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_5-Fast_Pasture_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shape Control of a Dual-Segment Soft Robot using Depth Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140604</link>
        <id>10.14569/IJACSA.2023.0140604</id>
        <doi>10.14569/IJACSA.2023.0140604</doi>
        <lastModDate>2023-06-30T12:23:02.6070000+00:00</lastModDate>
        
        <creator>Hu Junfeng</creator>
        
        <creator>Zhang Jun</creator>
        
        <subject>Pneumatic soft robot; shape control; depth vision; shape feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Pneumatic soft robots outperform rigid robots in complex environments due to the high flexibility of their redundant configurations, and their shape control is considered a prerequisite for applications in unstructured environments. In this paper, we propose a depth vision-based shape control method for a two-segment soft robot, which uses a binocular camera to achieve 3D shape control of the robot. A closed-loop control algorithm based on depth vision is designed to compensate the shape against the robot’s own non-linear responsiveness and coupling, by solving for the shape feature parameters used to describe the robot and analytically modeling the motion of curved feature points. Experimental results show that the position and angle errors are less than 2 mm and 1&#176; respectively, the curvature error is less than 0.0001 mm^-1, and the algorithm converges for both L-type and S-type reference 3D shapes. This work provides a general method for adjusting the shape of a soft robot without on-board sensors.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_4-Shape_Control_of_a_Dual_Segment_Soft_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Method Based on Gravitational Search and Genetic Algorithms for Task Scheduling in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140603</link>
        <id>10.14569/IJACSA.2023.0140603</id>
        <doi>10.14569/IJACSA.2023.0140603</doi>
        <lastModDate>2023-06-30T12:23:02.5900000+00:00</lastModDate>
        
        <creator>Xiuyan Zhang</creator>
        
        <subject>Cloud computing; task scheduling; genetic algorithm; gravitational search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Cloud computing has emerged as a novel technology that offers convenient and cost-effective access to a scalable pool of computing resources over the internet. Task scheduling plays a crucial role in optimizing the functionality of cloud services. However, inefficient scheduling practices can result in resource wastage or a decline in service quality due to under- or overloaded resources. To address this challenge, this research paper introduces a hybrid approach that combines gravitational search and genetic algorithms to tackle the task scheduling problem in cloud computing environments. The proposed method leverages the strengths of both gravitational search and genetic algorithms to achieve enhanced scheduling performance. By integrating the unique search capabilities of the gravitational search algorithm with the optimization and adaptation capabilities of the genetic algorithm, a more effective and efficient solution is achieved. The experimental results validate the superiority of the proposed method over existing approaches in terms of total cost optimization. The experimental evaluation demonstrates that the hybrid method outperforms previous scheduling methods in achieving optimal resource allocation and minimizing costs. The improved performance is attributed to the combined strengths of the gravitational search and genetic algorithms in effectively exploring and exploiting the solution space. These findings underscore the potential of the proposed hybrid method as a valuable tool for addressing the task scheduling problem in cloud computing, ultimately leading to improved resource utilization and enhanced service quality.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_3-A_Hybrid_Method_Based_on_Gravitational_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the User Experience and Evaluating Usability Issues in AI-Enabled Learning Mobile Apps: An Analysis of User Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140602</link>
        <id>10.14569/IJACSA.2023.0140602</id>
        <doi>10.14569/IJACSA.2023.0140602</doi>
        <lastModDate>2023-06-30T12:23:02.5900000+00:00</lastModDate>
        
        <creator>Bassam Alsanousi</creator>
        
        <creator>Abdulmohsen S. Albesher</creator>
        
        <creator>Hyunsook Do</creator>
        
        <creator>Stephanie Ludi</creator>
        
        <subject>Human-Computer Interaction (HCI); Artificial Intelligence (AI); user reviews; AI-Enabled Mobile Apps; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Integrating artificial intelligence (AI) has become crucial in modern mobile application development. However, the current integration of AI in mobile learning applications presents several challenges regarding mobile app usability. This study aims to identify critical usability issues of AI-enabled mobile learning apps by analyzing user reviews. We conducted a qualitative and content analysis of user reviews for two groups of AI apps from the education category: language learning apps and educational support apps. Our findings reveal that while users generally report positive experiences, several AI-related usability issues impact user satisfaction, effectiveness, and efficiency. These challenges include AI-related functionality issues, performance problems, bias, inadequate explanations, and ineffective features. To enhance user experience and learning outcomes, developers must improve AI technology and adapt learning methodologies to meet users’ diverse demands and preferences while addressing these issues. By overcoming these challenges, AI-powered mobile learning apps can continue to evolve and provide users with engaging and personalized learning experiences.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_2-Investigating_the_User_Experience_and_Evaluating_Usability_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Reward and Punishment Scheme for Vehicular Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140601</link>
        <id>10.14569/IJACSA.2023.0140601</id>
        <doi>10.14569/IJACSA.2023.0140601</doi>
        <lastModDate>2023-06-30T12:23:02.5770000+00:00</lastModDate>
        
        <creator>Rezvi Shahariar</creator>
        
        <creator>Chris Phillips</creator>
        
        <subject>VANET; Trust management; fuzzy logic; Markov chain; reward and punishment; driver behaviour model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(6), 2023</description>
        <description>Trust management is an important security approach for the successful implementation of Vehicular Ad Hoc Networks (VANETs). Trust models evaluate messages to assign reward or punishment, which can be used to influence a driver’s future behaviour. In the authors’ previous work, a sender-side trust management framework was developed which avoids receiver evaluation of messages. However, this does not guarantee that a trusted driver will not lie. These “untrue attacks” are resolved by the RSUs, which collaborate to rule on a dispute, providing a fixed amount of reward and punishment. This lack of sophistication is addressed in this paper with a novel fuzzy RSU controller that considers the severity of the incident, the driver’s past behaviour, and RSU confidence to determine the reward or punishment for the conflicted drivers. Although any driver can lie in any situation, it is expected that trustworthy drivers are more likely to remain so, and vice versa. This behaviour is captured in a Markov chain model for sender and reporter drivers, where their lying characteristics depend on trust score and trust state. Each trust state defines the driver’s likelihood of lying using a different probability distribution. An extensive simulation is performed to evaluate the performance of the fuzzy assessment and to examine the Markov chain driver behaviour model while changing the initial trust score of all or some drivers in the Veins simulator. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme can encourage drivers to improve their behaviour.</description>
        <description>http://thesai.org/Downloads/Volume14No6/Paper_1-A_Fuzzy_Reward_and_Punishment_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-age Face Image Similarity Measurement Based on Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405123</link>
        <id>10.14569/IJACSA.2023.01405123</id>
        <doi>10.14569/IJACSA.2023.01405123</doi>
        <lastModDate>2023-05-31T11:22:24.1700000+00:00</lastModDate>
        
        <creator>Jing Zhang</creator>
        
        <creator>Ningyu Hu</creator>
        
        <subject>Cross-age; image recognition; RNN; feature fusion; decoupling; loss function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In this study, a multi-feature fusion and decoupling solution based on an RNN is proposed from a discriminative perspective. This method addresses the loss of identity and age information during feature extraction in cross-age face recognition. It not only constrains the correlation between identity and age using a correlation loss but also optimizes identity feature restoration using feature decoupling. The model was trained and evaluated on the CACD and CACD-VS datasets. The single-task learning model stabilized after 125 training iterations, while the multi-task learning model reached a stable, convergent state after 75 iterations. In terms of performance, the DE-RNN model had the highest recognition accuracy with a mAP of 92.4%. The Human Voting model achieved 90.2%, the Human Average model 81.8%, and the DAL model the lowest at 78.1%. Experiments proved that the model constructed in this study has effective recognition capability and application value in cross-age face recognition scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_123-Cross_age_Face_Image_Similarity_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Pneumonia with a Deep Learning Model and Random Data Augmentation Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405122</link>
        <id>10.14569/IJACSA.2023.01405122</id>
        <doi>10.14569/IJACSA.2023.01405122</doi>
        <lastModDate>2023-05-30T16:27:33.9930000+00:00</lastModDate>
        
        <creator>Tawfik Guesmi</creator>
        
        <subject>Deep learning; pneumonia detection; convolutional neural network; random data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This research paper presents an investigation into the detection of pneumonia using deep learning models and data augmentation techniques. The study compares and evaluates the performance of different models based on experimental results. The proposed model consists of multiple convolutional layers and maxpooling layers. Extensive experiments were conducted on a dataset, and the results demonstrate the efficiency and accuracy of our approach. The findings highlight the potential of deep learning in pneumonia detection and contribute to the existing body of knowledge in this field. The implications of this research can have a significant impact on improving diagnostic accuracy and patient outcomes. Future research directions could explore further enhancements in the model architecture, investigate additional data augmentation techniques, and consider larger datasets for more comprehensive evaluations.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_122-Detecting_Pneumonia_with_a_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opportunities in Real Time Fraud Detection: An Explainable Artificial Intelligence (XAI) Research Agenda</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405121</link>
        <id>10.14569/IJACSA.2023.01405121</id>
        <doi>10.14569/IJACSA.2023.01405121</doi>
        <lastModDate>2023-05-30T16:27:33.9770000+00:00</lastModDate>
        
        <creator>Eleanor Mill</creator>
        
        <creator>Wolfgang Garn</creator>
        
        <creator>Nick Ryman-Tubb</creator>
        
        <creator>Chris Turner</creator>
        
        <subject>Artificial intelligence; explainable AI; machine learning; credit card fraud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Regulatory and technological changes have recently transformed the digital footprint of credit card transactions, providing at least ten times the amount of data available for fraud detection practices that were previously available for analysis. This newly enhanced dataset challenges the scalability of traditional rule-based fraud detection methods and creates an opportunity for wider adoption of artificial intelligence (AI) techniques. However, the opacity of AI models, combined with the high stakes involved in the finance industry, means practitioners have been slow to adapt. In response, this paper argues for more researchers to engage with investigations into the use of Explainable Artificial Intelligence (XAI) techniques for credit card fraud detection. Firstly, it sheds light on recent regulatory changes which are pivotal in driving the adoption of new machine learning (ML) techniques. Secondly, it examines the operating environment for credit card transactions, an understanding of which is crucial for the ability to operationalise solutions. Finally, it proposes a research agenda comprised of four key areas of investigation for XAI, arguing that further work would contribute towards a step-change in fraud detection practices.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_121-Opportunities_in_Real_Time_Fraud_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video-based Heart Rate Estimation using Embedded Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405119</link>
        <id>10.14569/IJACSA.2023.01405119</id>
        <doi>10.14569/IJACSA.2023.01405119</doi>
        <lastModDate>2023-05-30T16:27:33.9600000+00:00</lastModDate>
        
        <creator>Hoda El Boussaki</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Amine Saddik</creator>
        
        <subject>Heart rate; driver; photoplethysmography; non-contact; embedded architectures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Monitoring a driver’s heart rate is an important indicator of his or her health condition. The monitoring system must be accurate and must not restrict the user’s actions. Detecting changes in a driver’s usual heartbeat pattern can prevent undesirable outcomes. Several methods exist to estimate heart rate without any contact. In this paper, we focus on a method that uses remote photoplethysmography (rPPG), a technique in which heart rate is extracted from a PPG signal. The signal is derived from changes in blood flow that correspond to the color variations recorded by an RGB camera. In this work, a study based on an existing algorithm is presented to determine its processing time. The proposed algorithm was divided into global blocks, and each block into functional blocks (FBs). By evaluating the processing time of all blocks, it was possible to identify the most time-consuming functional blocks. The algorithm was implemented on different architectures (Desktop, Odroid XU4, and Jetson Nano) to achieve higher performance.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_119-Video_based_Heart_Rate_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Death Counts Based on Short-term Mortality Fluctuations Data Series using Multi-output Regression Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405120</link>
        <id>10.14569/IJACSA.2023.01405120</id>
        <doi>10.14569/IJACSA.2023.01405120</doi>
        <lastModDate>2023-05-30T16:27:33.9600000+00:00</lastModDate>
        
        <creator>Md Imtiaz Ahmed</creator>
        
        <creator>Nurjahan</creator>
        
        <creator>Md. Mahbub-Or-Rashid</creator>
        
        <creator>Farhana Islam</creator>
        
        <subject>Multi-output regression model; short-term mortality fluctuations; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Effective public health responses to unexpected epidemiological hazards or disasters require rapid and reliable monitoring. However, monitoring fast-changing situations and acquiring timely, accurate, cross-national statistics to address short-term mortality fluctuations caused by these hazards is very challenging. Estimating weekly excess deaths is the most solid and accurate way to measure the mortality burden caused by short-term risk factors. The Short-term Mortality Fluctuations (STMF) data series is one of the significant collections of the Human Mortality Database (HMD), providing weekly death counts and rates by age and sex for each country. The data collected from the sources are not always broken down into specific age groups; sometimes only the total number of individual death records per week is reported. The researchers therefore reclassified the dataset according to the age and sex distributions of every country, so that the number of weekly deaths in each country can be derived from an equation and earlier distribution data. This paper focuses on the implementation of multi-output regression models, such as logistic regression, decision tree, random forest, k-nearest neighbors, lasso, support vector regressor, artificial neural network, and recurrent neural network, to correctly predict death counts for specific age groups. According to the results, random forest delivered the highest performance with an R-squared coefficient of 0.9975, a root mean square error of 43.2263, and a mean absolute error of 16.4069.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_120-Prediction_of_Death_Counts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-enabled Secure Privacy-preserving System for Public Health-center Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405118</link>
        <id>10.14569/IJACSA.2023.01405118</id>
        <doi>10.14569/IJACSA.2023.01405118</doi>
        <lastModDate>2023-05-30T16:27:33.9470000+00:00</lastModDate>
        
        <creator>Md. Shohidul Islam</creator>
        
        <creator>Mohamed Ariff Bin Ameedeen</creator>
        
        <creator>Husnul Ajra</creator>
        
        <creator>Zahian Binti Ismail</creator>
        
        <subject>Blockchain; data; health; public; secure transaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Health-center data comprises a large number of individual health records and is highly privacy-sensitive. In the virtual era of large-scale data, increasingly diverse health informatization makes it important that health data be stored precisely and securely. However, daily health data transactions carry the risk of privacy leaks that make sharing difficult. Moreover, existing permissioned blockchain applications suffer from deficient performance and a lack of privacy. This study presents a blockchain-based privacy-preserving and secure sharing and storage system for public health centers to address these issues. The system uses a hash-256-based access controller and transaction signatures with a consensus policy, and provides security for sharing and storing health data on the blockchain. In this approach, the blockchain guarantees scalability, privacy, integrity, and availability for data retention. The paper also measures transaction performance with confidentiality preservation, and reports the average transaction time and acceptable latency when accessing health data.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_118-Blockchain_enabled_Secure_Privacy_preserving_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: A Fast and Accurate Deep Learning based Approach to Measure Fetal Fat from MRI Scan Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405117</link>
        <id>10.14569/IJACSA.2023.01405117</id>
        <doi>10.14569/IJACSA.2023.01405117</doi>
        <lastModDate>2023-05-30T16:27:33.9300000+00:00</lastModDate>
        
        <creator>Nagabotu Vimala</creator>
        
        <creator>Anupama Namburu</creator>
        
        <subject>Fetal adipose tissue; fetal MRI; automatic segmentation; radiologist</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2023.01405117.retraction</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_117-A_Fast_and_Accurate_Deep_Learning_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Mango Grading System Based on Image Processing and Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405115</link>
        <id>10.14569/IJACSA.2023.01405115</id>
        <doi>10.14569/IJACSA.2023.01405115</doi>
        <lastModDate>2023-05-30T16:27:33.9130000+00:00</lastModDate>
        
        <creator>Thanh-Nghi Doan</creator>
        
        <creator>Duc-Ngoc Le-Thi</creator>
        
        <subject>Smart agriculture; mango grading; image processing; machine learning methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Mangoes are a major commercial fruit widely cultivated in tropical areas. In smart agriculture, automatic quality inspection and grading are essential to post-harvest processing, given the laborious nature and inconsistency of traditional manual visual grading. This paper presents a low-cost, efficient, and effective mango grading system based on image processing and machine learning methods. A novel database of classified mangoes was collected and built in An Giang province. Methodologies and algorithms that use digital image processing, content-based analysis, and statistical analysis are implemented to determine the grade of local mango production. On the collected dataset, the proposed system achieved an overall accuracy of 88% across all mango grades. The system shows promising results for higher-quality fruit sorting, quality maintenance, and production while reducing labor intensity.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_115-A_Novel_Mango_Grading_System_Based_on_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Module in Reconfigurable Intelligent Space: Applications and a Review of Developed Versions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405116</link>
        <id>10.14569/IJACSA.2023.01405116</id>
        <doi>10.14569/IJACSA.2023.01405116</doi>
        <lastModDate>2023-05-30T16:27:33.9130000+00:00</lastModDate>
        
        <creator>Dinh Tuan Tran</creator>
        
        <creator>Tatsuki Satooka</creator>
        
        <creator>Joo-Ho Lee</creator>
        
        <subject>Climbing Robot; intelligent space; iSpace; mobile module; MoMo; reconfigurable intelligent space; R+iSpace; smart home; ubiquitous environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Due to the immobility of devices in conventional intelligent spaces, the quality and quantity of their applications (i.e., services) are restricted. To provide better and more numerous applications, the devices in these spaces must be able to move autonomously to ideal positions. To solve this issue, the concepts of the reconfigurable intelligent space (R+iSpace) and mobile modules (MoMos) have been introduced. Each device in the R+iSpace is carried by one or more MoMos that can move freely on the ceiling and walls. Consequently, the R+iSpace has evolved into a user-centered intelligent space, where devices can move to the user to provide services instead of the user having to move to where the devices are. In this work, several promising applications are introduced as open research challenges for the R+iSpace and the MoMo. Various wall-climbing robots have been developed; however, their speed and carrying capacity are insufficient for adoption in the MoMo and the R+iSpace. Therefore, the development of the MoMo requires the creation of entirely new designs and mechanisms. In addition to introducing promising applications, this work provides an overview of all versions of the MoMo that have been developed to gradually make it deployable in a realistic R+iSpace.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_116-Mobile_Module_in_Reconfigurable_Intelligent_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Forest Transformation by Analyzing Spatial-temporal Attributes of Vegetation using Vegetation Indices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405114</link>
        <id>10.14569/IJACSA.2023.01405114</id>
        <doi>10.14569/IJACSA.2023.01405114</doi>
        <lastModDate>2023-05-30T16:27:33.9000000+00:00</lastModDate>
        
        <creator>Anubhava Srivastava</creator>
        
        <creator>Sandhya Umrao</creator>
        
        <creator>Susham Biswas</creator>
        
        <subject>Classification; change detection; vegetation indices; landsat; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The world’s ecosystems and environment are rapidly deteriorating, with forest conditions increasingly depleted by forest fires. In recent years, wildfire incidents in Sikkim have increased due to severe climatic changes such as turbulent rainfall, untimely summers, extreme winter droughts, and a reduction in the percentage of yearly rainfall. Forest fires are one of the many kinds of disasters that impose drastic changes on the entire environment and disrupt the complex interdependence of flora and fauna. The goal of this research is to examine vegetation indices under different climatic conditions to understand why forest vegetation has been decreasing from 2000 to 2023. The frequent changes in forest vegetation are studied extensively using satellite images. The data were collected by three satellites (Landsat-5, Landsat-8, and Landsat-9) for the vegetation indices NDVI, EVI, and NDWI. The East Sikkim area was chosen for computing forest vegetation indices because this hilly region remains unexplored; forest changes across the entire district can also be studied with different spatial-temporal indices in the future. The authors used Landsat multi-spectral data to assess changes in vegetated area in a sub-tropical, densely forested region of East Sikkim. The analysis processes the satellite images, computes the vegetation indices (NDVI, EVI, NDWI), and performs mathematical computation of the findings. The proposed method helps characterize the variation of vegetation across the entire East Sikkim region over the 2000–2023 time span. The analysis shows that the mean and standard deviation values of all indices change over the years. Using a classification model, a total change of 10% in forest area was found over approximately 22 years.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_114-Exploring_Forest_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QMX-BdSL49: An Efficient Recognition Approach for Bengali Sign Language with Quantize Modified Xception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405113</link>
        <id>10.14569/IJACSA.2023.01405113</id>
        <doi>10.14569/IJACSA.2023.01405113</doi>
        <lastModDate>2023-05-30T16:27:33.8830000+00:00</lastModDate>
        
        <creator>Nasima Begum</creator>
        
        <creator>Saqib Sizan Khan</creator>
        
        <creator>Rashik Rahman</creator>
        
        <creator>Ashraful Haque</creator>
        
        <creator>Nipa Khatun</creator>
        
        <creator>Nusrat Jahan</creator>
        
        <creator>Tanjina Helaly</creator>
        
        <subject>Bengali sign language; CNN; computer vision; model quantization; Raspberry Pi 4; transfer learning; Tiny ML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Sign language was developed to bridge the communication gap between individuals with and without hearing impairment or speech difficulties. Individuals with hearing and speech impairments typically rely on hand signs to express themselves. However, people in general may not have sufficient knowledge of sign language, so a sign language recognition system on an embedded device is much needed. Literature on such systems for embedded devices is scarce, as these recognition tasks are very complex and computationally expensive, and the limited resources of embedded devices cannot properly execute complex algorithms such as Convolutional Neural Networks (CNNs). Therefore, in this paper, we propose a novel deep learning architecture based on the default Xception architecture, named Quantized Modified Xception (QMX), to reduce the model’s size and enhance computational speed without compromising accuracy. Moreover, the proposed QMX model is highly optimized due to the weight compression of model quantization; as a result, its footprint is 11 times smaller than that of the Modified Xception (MX) model. To train the model, the BdSL49 dataset is utilized, which includes approximately 14,700 images divided into 49 classes. The proposed QMX model achieves an overall F1 score of 98%. In addition, a comprehensive analysis of QMX, Modified Xception Tiny (MXT), MX, and the default Xception model is provided in this research. Finally, the model has been implemented on a Raspberry Pi 4 and a detailed evaluation of its performance has been conducted, including a comparison with existing state-of-the-art approaches in this domain. The results demonstrate that the proposed QMX model outperforms prior work in terms of performance.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_113-QMX_BdSL49_An_Efficient_Recognition_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Light Field Spatial Super-resolution via Multi-level Perception and View Reorganization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405111</link>
        <id>10.14569/IJACSA.2023.01405111</id>
        <doi>10.14569/IJACSA.2023.01405111</doi>
        <lastModDate>2023-05-30T16:27:33.8670000+00:00</lastModDate>
        
        <creator>Yifan Mao</creator>
        
        <creator>Zaidong Tong</creator>
        
        <creator>Xin Zheng</creator>
        
        <creator>Xiaofei Zhou</creator>
        
        <creator>Youzhi Zhang</creator>
        
        <creator>Deyang Liu</creator>
        
        <subject>Light field image; spatial super-resolution; multi-level perception; view reorganization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Light field (LF) imaging can capture the spatial and angular information of a three-dimensional (3D) scene in a single shot, which enables a wide range of applications in 3D reconstruction, refocusing, virtual reality, and other fields. However, due to an inherent trade-off, the spatial resolution of acquired LF images is low, which hinders the widespread application of LF imaging. To alleviate this issue, an end-to-end LF spatial super-resolution network is proposed that considers multi-level perception and view reorganization. The method fully explores the highly interwoven LF spatial and angular structure information. Specifically, a multi-feature fusion enhancement block is introduced that can fully perceive LF spatial, angular, and EPI information for LF spatial super-resolution. Furthermore, the angular coherence between LF views is exploited by reorganizing the LF sub-aperture images and constructing a multi-angular stack structure. Compared with other state-of-the-art methods, the proposed method achieves superior performance in both visual and quantitative terms.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_111-Light_Field_Spatial_Super_resolution_via_Multi_level_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Intrusion Detection Systems with XGBoost Feature Selection and Deep Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405112</link>
        <id>10.14569/IJACSA.2023.01405112</id>
        <doi>10.14569/IJACSA.2023.01405112</doi>
        <lastModDate>2023-05-30T16:27:33.8670000+00:00</lastModDate>
        
        <creator>Khalid A. Binsaeed</creator>
        
        <creator>Alaaeldin M. Hafez</creator>
        
        <subject>Intrusion detection system; deep learning (DL); XG-Boost; feature extraction; Bidirectional Long Short-Term Memory (BiLSTM); Artificial Neural Networks (ANN); 1D Convolutional Neural Network (1DCNN); Synthetic Minority Oversampling Tech-nique (SMOTE); NSL-KDD dataset; CIC-IDS2017; UNSW-NB15</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>As cyber-attacks evolve in complexity and frequency, the development of effective network intrusion detection systems (NIDS) has become increasingly important. This paper investigates the efficacy of the XGBoost algorithm for feature selection combined with deep learning (DL) techniques, such as ANN, 1DCNN, and BiLSTM, to create accurate intrusion detection systems (IDSs), evaluating them against the NSL-KDD, CIC-IDS2017, and UNSW-NB15 datasets. The high accuracy and low error rate of the classification models demonstrate the potential of the proposed approach for IDS design. The study applied XGBoost feature extraction to obtain a reduced feature vector and addressed data imbalance using the synthetic minority oversampling technique (SMOTE), significantly improving the models’ precision and recall for individual attack classes. The ANN + BiLSTM model combined with SMOTE consistently outperformed the other models in this paper, emphasizing the importance of data balancing techniques and the effectiveness of integrating XGBoost and DL approaches for accurate IDSs. Future research can focus on implementing novel sampling techniques explicitly designed for IDSs to enhance minority class representation in public datasets during training.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_112-Enhancing_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Epileptic Seizures Based-on Channel Fusion and Transformer Network in EEG Recordings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405110</link>
        <id>10.14569/IJACSA.2023.01405110</id>
        <doi>10.14569/IJACSA.2023.01405110</doi>
        <lastModDate>2023-05-30T16:27:33.8530000+00:00</lastModDate>
        
        <creator>Jose Yauri</creator>
        
        <creator>Manuel Lagos</creator>
        
        <creator>Hugo Vega-Huerta</creator>
        
        <creator>Percy De-La-Cruz-VdV</creator>
        
        <creator>Gisella Luisa Elena Maquen-Niño</creator>
        
        <creator>Enrique Condor-Tinoco</creator>
        
        <subject>Epilepsy; epilepsy detection; EEG; EEG channel fusion; convolutional neural network; self-attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>According to the World Health Organization, epilepsy affects more than 50 million people worldwide, 80% of whom live in developing countries. Epilepsy has therefore become one of the major public health issues for many governments and deserves to be addressed. Epilepsy is characterized by uncontrollable seizures caused by sudden abnormal brain activity. Recurrent epileptic attacks change people’s lives and interfere with their daily activities. Although epilepsy has no cure, it can be mitigated with appropriate diagnosis and medication. Usually, epilepsy diagnosis is based on the analysis of a patient’s electroencephalogram (EEG). However, searching for seizure patterns in a multichannel EEG recording is a visually demanding and time-consuming task, even for experienced neurologists. Despite recent progress in automatic recognition of epilepsy, the multichannel nature of EEG recordings still challenges current methods. In this work, a new method to detect epilepsy in multichannel EEG recordings is proposed. First, the method uses convolutions to perform channel fusion; next, a self-attention network extracts temporal features to classify between interictal and ictal epilepsy states. The method was validated on the public CHB-MIT dataset using k-fold cross-validation and achieved 99.74% specificity and 99.15% sensitivity, surpassing current approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_110-Detection_of_Epileptic_Seizures_Based_on_Channel_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Primal-Optimal-Binding LPNet: Deep Learning Architecture to Predict Optimal Binding Constraints of a Linear Programming Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405109</link>
        <id>10.14569/IJACSA.2023.01405109</id>
        <doi>10.14569/IJACSA.2023.01405109</doi>
        <lastModDate>2023-05-30T16:27:33.8370000+00:00</lastModDate>
        
        <creator>Natdanai Kafakthong</creator>
        
        <creator>Krung Sinapiromsaran</creator>
        
        <subject>Deep learning; convolution neural network; linear programming; basic feasible solution; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Identifying an optimal basis for a linear programming problem is a challenging learning task. Traditionally, an optimal basis is obtained via the iterative simplex method, which moves from the current basic feasible solution to an adjacent one until it reaches the optimum. The result is the optimal solution value and the corresponding optimal basis. Even though learning the optimal value is hard, learning the optimal basis is possible via deep learning. This paper presents the primal-optimal-binding LPNet, which learns from massive linear programming problems of various sizes cast as all-unit-row-except-first-unit-column matrices. During training, these matrices are fed to a special row-column convolutional layer followed by a state-of-the-art deep learning architecture and two fully connected layers. The result is a probability vector over the non-negativity constraints and the original linear programming constraints at the optimal basis. Experiments show that LPNet achieves 99% accuracy in predicting a single binding optimal constraint on unseen test problems and Netlib problems. It correctly identifies 80% of LP problems having all optimal binding constraints, in less time than the CPLEX solution time.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_109-Primal_Optimal_Binding_LPNet_Deep_Learning_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble of Deep Learning Models for Multi-plant Disease Classification in Smart Farming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405108</link>
        <id>10.14569/IJACSA.2023.01405108</id>
        <doi>10.14569/IJACSA.2023.01405108</doi>
        <lastModDate>2023-05-30T16:27:33.8200000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <subject>Ensemble learning; automated disease detection systems; transfer learning models; plant diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Plant disease identification at an early stage plays a crucial role in ensuring efficient disease management and crop protection. Plant ailments can cause substantial reductions in both crop yield and quality, leading to financial setbacks for farmers and food shortages for consumers. Traditional methods of disease detection rely on visual observation, which can be time-consuming, labor-intensive, and often inaccurate. Automated disease detection systems based on machine learning techniques have the potential to greatly improve the precision and speed of disease detection. This article presents a model for classifying plant diseases that combines the output of two transfer learning models, EfficientNetB0 and MobileNetV2, to improve disease classification accuracy. The model was trained and tested on the PlantVillage Dataset, which contains 54,305 photos of 38 different plant disease classes, achieving an accuracy rate of 99.77% in disease classification. The use of an ensemble of deep learning models in this study shows promising results, indicating that the technique can enhance the accuracy of plant disease classification. Moreover, this study contributes to the development of accurate and reliable automated disease detection systems, thereby supporting sustainable agriculture and global food security.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_108-Ensemble_of_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Phishing Behavior Analysis and Feature Selection to Enhance Prediction Rate in Phishing Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405107</link>
        <id>10.14569/IJACSA.2023.01405107</id>
        <doi>10.14569/IJACSA.2023.01405107</doi>
        <lastModDate>2023-05-30T16:27:33.8200000+00:00</lastModDate>
        
        <creator>Asmaa Reda Omar</creator>
        
        <creator>Shereen Taie</creator>
        
        <creator>Masoud E.Shaheen</creator>
        
        <subject>Gradient boosting; light GBM; machine learning; phishing; phishing URL; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Phishing incidents have captured the attention of security experts and end users in recent years as they have become more frequent, widespread, and sophisticated. Researchers have offered a variety of strategies for detecting phishing attacks. Over time, these approaches suffer from insufficient performance and the inability to identify zero-day attacks. One limitation of these methods is that phishing techniques are constantly evolving, and the proposed methods are not keeping up. The objective of this research is to develop a URL phishing detection model that is robust against constantly changing attacks. One of the most significant contributions of this paper is the selection of a novel combination of features based on the literature and recent phishing behavior analysis. This makes the model sufficiently competent to recognize zero-day attacks and able to adjust to changes in phishing attacks. Furthermore, eleven machine learning classification techniques are utilized for classification and comparison. Moreover, three datasets with different instance distributions were constructed at different times for the model’s initial construction and evaluation. Several experiments were carried out to investigate and evaluate the proposed model’s performance, effectiveness, and robustness. The findings demonstrated that the GaussianNB method is the most durable, capable of maintaining performance even without retraining. Additionally, the LightGBM, Random Forest, and GradientBoost algorithms had the highest levels of performance, which they maintained when the model was routinely retrained with newer types of attacks. Models employing these three algorithms outperformed other current detection models with an average accuracy of about 99.7%, making them promising.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_107-From_Phishing_Behavior_Analysis_and_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fruit Classification using Colorized Depth Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405106</link>
        <id>10.14569/IJACSA.2023.01405106</id>
        <doi>10.14569/IJACSA.2023.01405106</doi>
        <lastModDate>2023-05-30T16:27:33.8030000+00:00</lastModDate>
        
        <creator>Dhong Fhel K. Gom-os</creator>
        
        <subject>Fruit classification; depth image; depth colorization; CNN; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Fruit classification is a computer vision task that aims to correctly classify fruit classes, given an image. Nearly all fruit classification studies have used RGB color images as inputs, a few have used costly hyperspectral images, and a few classical ML-based approaches have used colorized depth images. Depth images have apparent benefits such as invariance to lighting, lower storage requirements, better foreground-background separation, and more pronounced curvature details and object edge discontinuities. However, the use of depth images in CNN-based fruit classification remains unexplored. The purpose of this study is to investigate the use of colorized depth images in fruit classification with four CNN models, namely AlexNet, GoogleNet, ResNet101, and VGG16, and compare their performance and computational efficiency, as well as the impact of transfer learning. Depth images of apple, orange, mango, banana, and rambutan (Nephelium lappaceum) were manually collected using a depth sensor with sub-millimeter accuracy and subjected to jet, uniform, and inverse colorization to produce three datasets. Results show that depth images can be used to train CNN models for fruit classification, with ResNet101 achieving the best accuracy of 96% on the inverse dataset. It achieved 100% accuracy after transfer learning. GoogleNet showed the most significant improvement after transfer learning on the uniform dataset, at 12.27%. It also exhibited the lowest training and inference times. The results show the potential use of depth images for fruit classification and similar computer vision tasks.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_106-Fruit_Classification_using_Colorized_Depth_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Economic Development Efficiency Based on Tobit Model: Guided by Sustainable Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405104</link>
        <id>10.14569/IJACSA.2023.01405104</id>
        <doi>10.14569/IJACSA.2023.01405104</doi>
        <lastModDate>2023-05-30T16:27:33.7900000+00:00</lastModDate>
        
        <creator>Ming Liu</creator>
        
        <subject>Tobit model; DEA; economic development efficiency; Moran index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>At present, in resource-based regions of China, the harmonious growth of the green economy (GE) and the environment has been seriously restricted. To find and solve the problems that affect the quality of regional GE development, the study took Xinjiang, a resource-based province, as the research object. Using data from 14 prefectures and cities in Xinjiang from 2017 to 2022, an evaluation model for the efficiency of GE development based on DEA-Tobit was constructed. Data envelopment analysis (DEA) measures the spatial autocorrelation and distribution characteristics of GE development efficiency in the various prefectures and cities of Xinjiang, and the influencing factors were analyzed using the Tobit model. The empirical results show obvious differences in the spatial distribution of GE development among the prefectures and cities of Xinjiang. The average efficiency value is 0.7289, the highest value is 1, and the lowest value is 0.3684, a difference of 0.6316. The GE efficiency values of the seven regions R1, R2, R4, R6, R7, R9, and R13 reach 1, and DEA is effective. Based on the global and local Moran indices, there is no obvious spatial correlation between the GE development efficiency of the prefectures and cities of Xinjiang, and the absolute value of the coefficient does not exceed 0.5. The results of the Tobit model show that there is still room to raise the efficiency of GE development in most regions of Xinjiang. Based on the established DEA-Tobit GE development efficiency evaluation model, this study proposes targeted development strategies for improving the efficiency of GE development in Xinjiang.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_104-Economic_Development_Efficiency_Based_on_Tobit_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Approach for Motif Finding Problem Solved on Heterogeneous Cluster with Best Scheduling Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405105</link>
        <id>10.14569/IJACSA.2023.01405105</id>
        <doi>10.14569/IJACSA.2023.01405105</doi>
        <lastModDate>2023-05-30T16:27:33.7900000+00:00</lastModDate>
        
        <creator>Abdullah Barghash</creator>
        
        <creator>Ahmed Harbaoui</creator>
        
        <subject>Motif finding problem; scheduling algorithm; heterogeneous; high-performance computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The Motif Finding Problem (MFP) is the problem of finding patterns in DNA sequences. This paper presents an enhanced scheduling approach to solve the motif problem on a heterogeneous cluster by comparing exact algorithms. The method followed was to analyze several exact algorithms, compare them on specific measures, and improve performance by comparing the number of devices and peripheral units used in each situation and the running time of each method. Our experimental results show that the scheduling approach, which uses different algorithms on a heterogeneous cluster, makes a significant difference in the speed of solving the problem, completing it in record time with fewer resources, and that the proposed approach is more effective than the traditional method of distributing tasks to solve the motif problem.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_105-A_Proposed_Approach_for_Motif_Finding_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Multi-SVC Installation for Loss Control in Power System using Multi-Computational Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405103</link>
        <id>10.14569/IJACSA.2023.01405103</id>
        <doi>10.14569/IJACSA.2023.01405103</doi>
        <lastModDate>2023-05-30T16:27:33.7730000+00:00</lastModDate>
        
        <creator>N. Balasubramaniam</creator>
        
        <creator>N. A. M. Kamari</creator>
        
        <creator>I. Musirin</creator>
        
        <creator>A. A. Ibrahim</creator>
        
        <subject>Flexible AC Transmission Systems (FACTs); Shunt VARs Compensators (SVCs); Evolutionary Programming (EP); Artificial Immune System (AIS); Immune Evolutionary Programming (IEP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Flexible AC Transmission Systems (FACTS) play a vital role in minimizing power losses and improving the voltage profile in a power transmission system. They increase the real power transfer capacity of the system. However, the optimal location and sizing of the FACTS devices determine the extent of the benefits they provide to the transmission system. A non-optimal solution in terms of location and sizing may lead to under-compensation or over-compensation. Thus, robust optimization is a prerequisite for achieving an optimal solution. This paper presents a study on the effect of multiple static VAR compensator (SVC) installations for loss control in a power system using evolutionary programming (EP), an artificial immune system (AIS), and immune evolutionary programming (IEP). The objective is to minimize the real power transmission loss and improve the voltage profile of the transmission power system. The study, validated on the IEEE 30-Bus Reliability Test System (RTS), reveals that the installation of multiple SVC units significantly reduces the power loss and improves the voltage profile of the system.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_103-Effect_of_Multi_SVC_Installation_for_Loss_Control_in_Power_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Particle Swarm Optimization-based Modeling of Wireless Sensor Network Coverage Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405102</link>
        <id>10.14569/IJACSA.2023.01405102</id>
        <doi>10.14569/IJACSA.2023.01405102</doi>
        <lastModDate>2023-05-30T16:27:33.7570000+00:00</lastModDate>
        
        <creator>Guangyue Kou</creator>
        
        <creator>Guoheng Wei</creator>
        
        <subject>Particle swarm optimization; wireless sensor networks; network coverage; grey wolf optimization; grey wolf particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>To address the problems of insufficient wireless sensor network (WSN) coverage and poor network coverage in obstacle environments, the study proposes an improved particle swarm optimization (PSO) combined with a hybrid grey wolf algorithm. The speed and position updates of the PSO particles’ search are enhanced through the guidance of the superior wolf in grey wolf optimization (GWO), thus improving convergence speed and search precision. On this basis, the study applies the improved PSO to a WSN coverage optimization model and uses model comparison to test the effectiveness and superiority of the algorithm. According to the results, the node network coverage of PSO, the genetic algorithm (GA), data envelopment analysis (DEA), GWO, and grey wolf particle swarm optimization (GWPSO) reaches 85.97%, 87.24%, 88.76%, 89.31%, and 91.05%, respectively, in the trapezoidal obstacle environment, with the proposed GWPSO algorithm achieving the highest coverage of its kind. This shows that the proposed GWPSO has superior performance in optimizing sensor coverage deployment compared with similar algorithms. The design provides a new path for optimizing wireless sensor node network coverage.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_102-Hybrid_Particle_Swarm_Optimization_based_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for an Outdoor Oyster Mushroom Cultivation using a Smart IoT-based Adaptive Neuro Fuzzy Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405101</link>
        <id>10.14569/IJACSA.2023.01405101</id>
        <doi>10.14569/IJACSA.2023.01405101</doi>
        <lastModDate>2023-05-30T16:27:33.7430000+00:00</lastModDate>
        
        <creator>Dakhole Dipali</creator>
        
        <creator>Thiruselvan Subramanian</creator>
        
        <creator>G Senthil Kumaran</creator>
        
        <subject>Precision agriculture; adaptive neuro fuzzy inference system; fuzzy inference system; oyster mushroom cultivation; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Automatic environment control systems for greenhouses are becoming very significant because of food demand and the rise in the world’s temperature and population. This article proposes the design and implementation of a low-cost, robust, and water-efficient autonomous smart internet of things (IoT) system to monitor and control the temperature and humidity of an outdoor oyster mushroom growing unit. The IoT-based control system involves DHT22 sensors, an ESP32 controller, and actuators (a water pump and a cooling fan) to circulate an adequate amount of air to maintain temperature and supply water to maintain humidity inside the outdoor oyster mushroom growing unit as required. A real working prototype was developed and implemented by integrating a fuzzy inference system (FIS) into the ESP32 controller using Arduino C with the help of its integrated design environment. The FIS is designed to calculate the on/off switching times of the water pump and cooling fan from the current temperature and humidity sensed inside the oyster mushroom unit with respect to the ambient temperature and humidity, respectively. The prototype reports the inside temperature, inside humidity, ambient temperature, ambient humidity, water pump time, and fan time on the ThingSpeak platform in real time. Furthermore, the data is used for the design and simulation of an adaptive neuro-fuzzy inference system (ANFIS) controller for the outdoor oyster mushroom growing unit in MATLAB/Simulink to improve the performance of the system. The practical advantage of the proposed ANFIS controller over the FIS controller and an industrial PID controller is shown by simulation findings using the experimental data. The system reduces water use as well as the extraordinary administration effort required to monitor the mushroom unit, and it increases the robustness of the system.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_101-A_Novel_Approach_for_an_Outdoor_Oyster_Mushroom_Cultivation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Serious Game Design Principles for Children with Autism to Facilitate the Development of Emotion Regulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01405100</link>
        <id>10.14569/IJACSA.2023.01405100</id>
        <doi>10.14569/IJACSA.2023.01405100</doi>
        <lastModDate>2023-05-30T16:27:33.7430000+00:00</lastModDate>
        
        <creator>Nor Farah Naquiah Mohamad Daud</creator>
        
        <creator>Muhammad Haziq Lim Abdullah</creator>
        
        <creator>Mohd Hafiz Zakaria</creator>
        
        <subject>Autism spectrum disorders; serious game; emotion regulation; serious game design principles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Autism spectrum disorder (ASD) is a neurodevelopmental condition driven by deficits in three areas: social interactions, communication, and the presence of restricted interests and repetitive behaviours. Children with autism mainly suffer from emotional disturbance that emerges as meltdowns, tantrums, and aggression, increasing the risk of developing mental health issues. Several studies have assessed the use of serious games in helping children with autism enhance their communication, learning, and social skills. Significantly, these serious games focus on the strengths and weaknesses of the disorder to establish a comfortable and controlled environment that can support children with autism. However, there is still a lack of evidence in studies exploring the use of serious games for children with autism to facilitate the development of emotion regulation. The aim of this study is to consolidate and propose new serious game design principles for children with autism to facilitate the development of emotion regulation. The target age of the children involved in this study ranged between 6 and 12. A review of previous literature on serious game design principles was conducted, and more than 70 articles related to serious games for children with autism were analysed using thematic analysis. This study found 16 elements that influence the process of designing and developing a serious game for children with autism, organised into five attributes (user, game objectives, game elements, game aesthetics, and player experience). This study thus demonstrates the needs and requirements of children with autism that should inform the design of serious games.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_100-Serious_Game_Design_Principles_for_Children_with_Autism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimentation on Iterated Local Search Hyper-heuristics for Combinatorial Optimization Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140599</link>
        <id>10.14569/IJACSA.2023.0140599</id>
        <doi>10.14569/IJACSA.2023.0140599</doi>
        <lastModDate>2023-05-30T16:27:33.7270000+00:00</lastModDate>
        
        <creator>Stephen A. Adubi</creator>
        
        <creator>Olufunke O. Oladipupo</creator>
        
        <creator>Oludayo O. Olugbara</creator>
        
        <subject>Combinatorial optimization; heuristic algorithm; heuristic categorization; local search; Thompson sampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Designing effective algorithms to solve cross-domain combinatorial optimization problems is an important goal for which manifold search methods have been extensively investigated. However, finding an optimal combination of perturbation operations for solving cross-domain optimization problems is hard because of the different characteristics of each problem and the discrepancies in the strengths of perturbation operations. An algorithm that works effectively for one problem domain may completely falter on instances of other optimization problems. The objectives of this study are to describe three categories of a hyper-heuristic that combine low-level heuristics with an acceptance mechanism for solving cross-domain optimization problems, to compare the three hyper-heuristic categories against existing benchmark algorithms, and to experimentally determine the effects of low-level heuristic categorization on the standard optimization problems from the hyper-heuristic flexible framework. The hyper-heuristic categories are based on the methods of Thompson sampling and iterated local search to control the perturbation behavior of the iterated local search. The performances of the perturbation configurations in a hyper-heuristic were experimentally tested against existing benchmark algorithms on standard optimization problems from the hyper-heuristic flexible framework. Study findings suggest that the most effective hyper-heuristic, with improved performance compared to the existing hyper-heuristics investigated for solving cross-domain optimization problems, is the one with a good balance between “single shaking” and “double shaking” strategies. The findings not only provide a foundation for establishing comparisons with other hyper-heuristics but also demonstrate a flexible alternative for investigating effective hyper-heuristics to solve complex combinatorial optimization problems.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_99-Experimentation_on_Iterated_Local_Search_Hyper_heuristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Brake Controller Based on Intelligent Highway Signs to Avoid Accidents on Algerian Roads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140598</link>
        <id>10.14569/IJACSA.2023.0140598</id>
        <doi>10.14569/IJACSA.2023.0140598</doi>
        <lastModDate>2023-05-30T16:27:33.7100000+00:00</lastModDate>
        
        <creator>AHMED MALEK Nada</creator>
        
        <creator>BOUDOUR Rachid</creator>
        
        <subject>Intelligent Transportation Systems (ITS); Intelligent System Adaptation (ISA); road accidents; intelligent road signalization; decision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Despite the considerable efforts of the Algerian authorities to reduce the high number of accidents, and therefore fatalities, on the country’s roads, the problem persists. To address this worrying situation, it is necessary to adopt new technologies and approaches that can assist Algerian drivers by bringing them into compliance with driving rules, thus putting an end to this concerning issue. This research aims primarily to assist Algerian drivers in reducing the mortality rate, which is mainly caused by speeding and poor road conditions, including outdated and inadequate road signs. To achieve this objective, a complete system consisting of two complementary subsystems, intelligent traffic signs and an interactive smart speed limiter, is proposed. Existing projects in this field have shown deficiencies, particularly in the context of real-time critical systems. This work offers improved precision, real-time responsiveness, adaptability to changes, and reduced infrastructure dependency compared to existing solutions. The new approach was tested with the SUMO simulator, and a prototype based on Arduino boards was developed, demonstrating its feasibility. The results obtained from this study show that the proposed system can significantly reduce the mortality rate on Algerian roads.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_98-Intelligent_Brake_Controller_Based_on_Intelligent_Highway_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Label Propagation Method for Community Detection Based on Game Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140597</link>
        <id>10.14569/IJACSA.2023.0140597</id>
        <doi>10.14569/IJACSA.2023.0140597</doi>
        <lastModDate>2023-05-30T16:27:33.7100000+00:00</lastModDate>
        
        <creator>Mengqin Ning</creator>
        
        <creator>Jun Gong</creator>
        
        <creator>Zhipeng Zhou</creator>
        
        <subject>Community detection; label propagation; node linkage; complete subgraph; game theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Community is a mesoscopic feature of the multi-scale phenomenon of complex networks and a bridge to revealing the formation and evolution of complex networks. Due to its high computational efficiency, label propagation has become a topic of considerable interest within community detection, but its randomness produces serious fluctuations. Facing the inherent flaws of label propagation, this paper proposes a series of solutions. Firstly, it presents a heuristic label propagation algorithm named Label Propagation Algorithm using Cliques and Weight (LPA-CW). In this algorithm, labels are expanded from seeds and propagated based on a node linkage index; seeds are produced from complete subgraphs, and the node linkage index depends on neighboring nodes. This method produces competitive modularity Q but not Normalized Mutual Information (NMI), and complements existing methods such as the Stepping Community Detection Algorithm based on Label Propagation and Similarity (LPA-S). Secondly, to combine the advantages of different algorithms, this paper introduces a game theory framework, designs the profit function of the participant algorithms to attain Nash equilibrium, and builds an algorithm integration model for community detection (IA-GT). Thirdly, based on the above model, the paper presents an algorithm named Label Propagation Algorithm based on the IA-GT model (LPA-CW-S), which integrates LPA-CW and LPA-S and resolves the incompatibility between modularity and NMI. Fully tested on both computer-generated and real-world networks, this method gives better results in indicators such as modularity and NMI than existing methods, effectively resolving the contradiction between the theoretical community and the real community. Moreover, it significantly reduces randomness and runs faster.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_97-A_Novel_Label_Propagation_Method_for_Community_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and System Construction of ALSTM-LSTM Model-based Sports Jumping Rope Movement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140596</link>
        <id>10.14569/IJACSA.2023.0140596</id>
        <doi>10.14569/IJACSA.2023.0140596</doi>
        <lastModDate>2023-05-30T16:27:33.6970000+00:00</lastModDate>
        
        <creator>Peng Su</creator>
        
        <creator>Zhipeng Li</creator>
        
        <creator>Weiguo Li</creator>
        
        <creator>Yongli Yang</creator>
        
        <subject>ALSTM-LSTM model; jumping rope exercise; Sports; human posture estimation algorithm; attention mechanisms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Computer technology&#39;s maturity has enabled intelligent and interactive sports training. The jumping rope test in secondary schools faces difficulties due to bulky testing equipment and inefficient data measurement. An ALSTM-LSTM model based on visual human posture estimation is proposed for motion system analysis. Joint pose features are fused through LSTM, and the attention mechanism assigns weights to feature sequences to achieve motion recognition, considering the data&#39;s multidimensional and hierarchical nature. The model’s precision value is 95.83, and its average accuracy is much higher than that of LSTM, ML-KNN, and RSN models. Additionally, the model achieves 95.2% accuracy in localizing jump rope stance movements with low data consumption. The model can improve the accuracy of posture analysis in the jump rope sport based on the characteristics of human movement, and inspire new technical tools for teaching.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_96-Analysis_and_System_Construction_of_ALSTM_LSTM_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting a Planning Model for Urban Waste Transportation and Selling Recycled Products with a Green Chain Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140594</link>
        <id>10.14569/IJACSA.2023.0140594</id>
        <doi>10.14569/IJACSA.2023.0140594</doi>
        <lastModDate>2023-05-30T16:27:33.6800000+00:00</lastModDate>
        
        <creator>Baoqing Ju</creator>
        
        <subject>Planning model; urban waste transportation; recycled products; green supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The growing amount of municipal solid waste (MSW) is a significant issue, especially in large urban areas with inadequate landfill capacities and ineffective waste management systems. Several supply chain options exist for implementing an MSW management system; however, numerous technical, economic, environmental, and social factors must be evaluated to determine the optimal solution. This research aims to illustrate the difficulty of urban solid waste management in a multi-level supply chain network. Hence, a mathematical model is implemented as a mixed integer linear programming problem that encompasses a variety of functions, comprising trash collection in cities, waste separation in sorting facilities, waste treatment in industries, and waste transportation between processing facilities. In addition, given the significance of urban solid waste management to environmental concerns, we model the problem with a green approach. The model proposed in this article aims to determine the optimal distribution of waste among all units and maximize the net profit of the entire supply chain under a green approach. A case study has been undertaken to evaluate the efficacy and efficiency of the suggested model, which is used to solve the numerical problem with GAMS software and the grasshopper metaheuristic algorithm. The findings indicate that integrated municipal solid waste management can yield economic and environmental benefits.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_94-Presenting_a_Planning_Model_for_Urban_Waste_Transportation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Tuna Swarm-based U-EfficientNet: Skin Lesion Image Segmentation by Improved Tuna Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140595</link>
        <id>10.14569/IJACSA.2023.0140595</id>
        <doi>10.14569/IJACSA.2023.0140595</doi>
        <lastModDate>2023-05-30T16:27:33.6800000+00:00</lastModDate>
        
        <creator>Khaja Raoufuddin Ahmed</creator>
        
        <creator>Siti Zura A Jalil</creator>
        
        <creator>Sahnius Usman</creator>
        
        <subject>Skin lesion; skin cancer; segmentation; deep learning model; optimization; EfficientNet; Unet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Skin cancers have been on an upward trend, with melanoma being the most severe type. A growing body of research employs digital camera images for computer-aided examination of suspected skin lesions for cancer. Due to the presence of distracting elements, including lighting fluctuations and surface light reflections, interpretation of these images is typically difficult. Segmenting the area of the lesion from healthy skin is a crucial step in the diagnosis of cancer. Hence, in this research an optimized deep learning approach is introduced for skin lesion segmentation. For this, EfficientNet is integrated with UNet to enhance segmentation accuracy. Also, Improved Tuna Swarm Optimization (ITSO) is utilized to adjust the modifiable parameters of the U-EfficientNet and minimize information loss during the learning phase. The proposed ITSU-EfficientNet is assessed on various evaluation measures, namely Accuracy, Mean Square Error (MSE), Precision, Recall, IoU, and Dice Coefficient, achieving values of 0.94, 0.06, 0.94, 0.94, 0.92, and 0.94, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_95-Improved_Tuna_Swarm_based_U_EfficientNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Bias in Facial Image Processing: A Review of Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140593</link>
        <id>10.14569/IJACSA.2023.0140593</id>
        <doi>10.14569/IJACSA.2023.0140593</doi>
        <lastModDate>2023-05-30T16:27:33.6630000+00:00</lastModDate>
        
        <creator>Amarachi M. Udefi</creator>
        
        <creator>Segun Aina</creator>
        
        <creator>Aderonke R. Lawal</creator>
        
        <creator>Adeniran I. Oluwarantie</creator>
        
        <subject>Digital signal processing; facial image processing; bias, geo-diversity; facial image datasets; k-means clustering; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Facial image processing is a major research area in digital signal processing. According to recent studies, most commercial facial image processing systems exhibit bias towards specific races, ethnicities, cultures, ages, and genders. In some circumstances, bias may be traced back to the algorithms employed, while in others it can be traced to insufficient representation in datasets. This study tackles bias arising from insufficient representation in datasets. To this end, the research undertakes an exploratory review in which the composition of facial image datasets is analyzed to thoroughly examine the rate of bias. Facial image processing systems are developed using widely available public datasets, since generating datasets is costly. However, these datasets are strongly biased towards Whites and Asians, while other groups, such as indigenous Africans, are underrepresented. In this study, 40 large publicly accessible facial image datasets were examined. The races represented in the datasets were visualized using the t-distributed Stochastic Neighbor Embedding (t-SNE) method. Then, to measure the geo-diversity and rate of bias of the datasets, k-means clustering, principal component analysis (PCA), and the Oriented FAST and Rotated BRIEF (ORB) feature extraction techniques were used. The findings indicate that these datasets exhibit an obvious ethnicity representation bias, particularly against native African facial images; as a result, additional African indigenous datasets are required to reduce the bias currently present in most publicly available facial image datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_93-An_Analysis_of_Bias_in_Facial_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved 3D Rotation-based Geometric Data Perturbation Based on Medical Data Preservation in Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140592</link>
        <id>10.14569/IJACSA.2023.0140592</id>
        <doi>10.14569/IJACSA.2023.0140592</doi>
        <lastModDate>2023-05-30T16:27:33.6500000+00:00</lastModDate>
        
        <creator>Jayanti Dansana</creator>
        
        <creator>Manas Ranjan Kabat</creator>
        
        <creator>Prasant Kumar Pattnaik</creator>
        
        <subject>Data mining; privacy; health care data; machine learning; perturbation; improved fuzzy c-means; horse herd optimization; kernel based support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>With the rise in technology, a huge volume of data is being processed using data mining, especially in the healthcare sector. Medical data usually contain a great deal of personal information, and third parties utilize it in the data mining process. Perturbation of healthcare data greatly aids in preventing intruders from compromising patients’ privacy. One of the challenges in data perturbation is balancing data utility and privacy protection. Medical data mining has certain special properties compared with other data mining fields. Hence, in this work, a machine learning (ML)-based perturbation approach is introduced to provide more privacy for healthcare data. Here, clustering and IGDP-3DR processes are applied to improve healthcare privacy preservation. Initially, the dataset is pre-processed using data normalization. Then, the dimensionality is reduced by SVD with PCA (Singular Value Decomposition with Principal Component Analysis). Next, the clustering process is performed by IFCM (Improved Fuzzy C-Means): the high-dimensional data are divided into several segments, and every partition is set as a cluster. Then, Improved Geometric Data Perturbation (IGDP), a combination of GDP with 3D rotation (3DR), is used to perturb the clustered data. Finally, the perturbed data are classified using an ML classifier, Kernel Support Vector Machine with Horse Herd Optimization (KSVM-HHO), to ensure better accuracy. The overall evaluation of the proposed KSVM-HHO is carried out on the Python platform. The performance of IGDP-KSVM-HHO is compared on two benchmark datasets: Wisconsin Prognostic Breast Cancer (WBC) and Pima Indians Diabetes (PID). The proposed method obtains an overall accuracy of 98.08% on perturbed data for the WBC dataset and 98.04% for the PID dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_92-Improved_3D_Rotation_based_Geometric_Data_Perturbation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Monolith to Microservice: Measuring Architecture Maintainability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140591</link>
        <id>10.14569/IJACSA.2023.0140591</id>
        <doi>10.14569/IJACSA.2023.0140591</doi>
        <lastModDate>2023-05-30T16:27:33.6330000+00:00</lastModDate>
        
        <creator>Muhammad Hafiz Hasan</creator>
        
        <creator>Mohd. Hafeez Osman</creator>
        
        <creator>Novia Indriaty Admodisastro</creator>
        
        <creator>Muhamad Sufri Muhammad</creator>
        
        <subject>Monolith; cloud migration; software architecture; design quality; maintainability; quality metric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The migration of monolithic applications to the cloud is a popular trend, with microservice architecture being a commonly targeted architectural pattern. The motivation behind this migration is often rooted in the challenges associated with maintaining legacy applications and the need to adapt to rapidly changing business requirements. To ensure that the migration to microservices is a sound decision for enhancing maintainability, designers must carefully consider the underlying factors driving this software architecture migration. This study proposes a set of software architecture metrics for evaluating the maintainability of microservice architectural designs in monolith-to-microservice migration. These metrics consider various factors, such as coupling, complexity, cohesion, and size, which are crucial for ensuring that the software architecture remains maintainable in the long term. Drawing upon previous product quality models that share similar design properties with microservices, we have derived maintainability metrics that can help measure the quality of microservice architecture. In this work, we introduce the first version of our structural metrics for measuring the maintainability of microservice architecture with respect to its cloud-native characteristics, which allows us to get early feedback on the proposed metrics before a detailed evaluation. With these metrics, designers can measure the quality of their microservice architecture to fully leverage the benefits of the cloud environment, ensuring that the migration to microservices is a beneficial decision for enhancing the maintainability of their applications.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_91-From_Monolith_to_Microservice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Business Data Analysis Based on Kissmetric in the Context of Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140590</link>
        <id>10.14569/IJACSA.2023.0140590</id>
        <doi>10.14569/IJACSA.2023.0140590</doi>
        <lastModDate>2023-05-30T16:27:33.6330000+00:00</lastModDate>
        
        <creator>Kan Wang</creator>
        
        <subject>Big data; kissmetric; data analysis; density clustering; hierarchical clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The Kissmetric data analysis model can be used for the analysis of business data, and its central method is cluster analysis. To realize the effective application of the Kissmetric data analysis model, this method is improved in the experiment. An improved hierarchical clustering algorithm consisting of a splitting stage and a merging stage is proposed, and this algorithm is then combined with a density clustering method, with noise point handling, to achieve automatic determination of clustering centers and an improved clustering effect. Across different dimensions, the highest F-measure and ARI values of the hybrid clustering method are 0.997 and 0.998, respectively. Across different numbers of classes in the dataset, the highest F-measure and ARI values of the hybrid clustering method are 1.000 and 0.999, respectively. The mean accuracies and mean variances were 95.94% vs. 5.89%, 94.72% vs. 0.57%, 89.72% vs. 4.97%, 87.45% vs. 5.53%, 93.83% vs. 5.76%, and 88.43% vs. 5.40%, respectively. The mean and mean squared deviation of the hybrid clustering method’s accuracy were 89.71% vs. 6.17% and 88.85% vs. 0.33% on the real datasets 7 and 8, respectively. The quality and stability of the clustering results of the hybrid clustering method are better: compared with other clustering methods, its accuracy and stability are higher, giving it a certain superiority.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_90-Business_Data_Analysis_Based_on_Kissmetric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Virtual Technology Based on Posture Recognition in Art Design Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140589</link>
        <id>10.14569/IJACSA.2023.0140589</id>
        <doi>10.14569/IJACSA.2023.0140589</doi>
        <lastModDate>2023-05-30T16:27:33.6170000+00:00</lastModDate>
        
        <creator>Qinyan Gao</creator>
        
        <subject>Posture recognition; deep learning; art design; time convolutional network; VR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>With the development of virtual technology, posture recognition has been integrated into virtual environments. This new technology allows users to further understand and observe activities carried out in life scenes, beyond their original observation of the external world, and enables them to make intelligent decisions. Existing posture recognition cannot meet the requirements of precise positioning in virtual environments. Therefore, a two-stage three-dimensional pose recognition model is proposed. Experiments illustrate that its three-dimensional gesture recognition performance is excellent. In the ablation experiment, the error of the research model decreased by more than 5 mm, and the overall error decreased by 10%. On the P-R curve, the precision of the model reaches 0.741 and the recall reaches 0.65. In the empirical analysis, the virtual posture disassembly action is complete, the disassembly error is less than 5%, and the disassembly precision is good. The fit of the leg bending amplitude reaches over 96%, and the fit of the arm bending amplitude reaches over 95%. When the model is applied to actual teaching, the overall satisfaction score of teachers and students reaches 94.6 points. This has effectively improved the teaching effect of art design and is of great significance to the development of education in China.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_89-The_Application_of_Virtual_Technology_Based_on_Posture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Intelligent Methodology-based Bayesian Belief Networks Constructing for Big Data Economic Indicators Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140588</link>
        <id>10.14569/IJACSA.2023.0140588</id>
        <doi>10.14569/IJACSA.2023.0140588</doi>
        <lastModDate>2023-05-30T16:27:33.6030000+00:00</lastModDate>
        
        <creator>Adil Al-Azzawi</creator>
        
        <creator>Fernando Torre Mora</creator>
        
        <creator>Chanmann Lim</creator>
        
        <creator>Yi Shang</creator>
        
        <subject>STE; Bayesian networks; domain knowledge; discriminative methods; economic forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Economic indicator prediction in big data requires treating all random variables as an independent set of selected values and using them in a discriminative method for classification tasks. A Bayesian network is a popular graphical representation for modeling probabilistic dependencies and causality among a set of random variables, incorporating a large amount of human expert knowledge about the problem of interest for diagnostic reasoning over big data. In our study, we set out to construct Bayesian networks using the standard error of a least-squares linear regression (STE) and domain knowledge from the literature in the field for big-data economic prediction. The experimental results show that the proposed STE baseline achieved an accuracy of 20% to 58% in seven out of eight regions, including the aggregate for “World”. In comparison, the Bayesian networks generated by our first Domain Knowledge Model improved accuracy from 54% to 75% in the same regions.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_88-An_Artificial_Intelligent_Methodology_based_Bayesian_Belief_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Cooperative Control for Multi-UAV Flying Formation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140587</link>
        <id>10.14569/IJACSA.2023.0140587</id>
        <doi>10.14569/IJACSA.2023.0140587</doi>
        <lastModDate>2023-05-30T16:27:33.5870000+00:00</lastModDate>
        
        <creator>Belkacem Kada</creator>
        
        <creator>Abdullah Y.Tameem</creator>
        
        <creator>Ahmed A. Alzubairi</creator>
        
        <creator>Uzair Ansari</creator>
        
        <subject>Formation control; distributed consensus; multi-agent systems; multiple-UAV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The problem of collaborative pattern tracking in multi-agent systems (MAS) such as unmanned aerial vehicles (UAVs) is investigated in this article. First, a new method for distributed consensus is constructed within the framework of the leader-following approach for second-order nonlinear MAS. The technique cancels the chattering effect observed in conventional sliding mode-based control protocols by transmitting smooth input signals to the agents&#39; control channels. Second, a novel formation framework is proposed to accomplish three-dimensional formation tracking by including consensus procedures in the formation dynamics model, allowing formation tracking in all three dimensions. The stability and convergence of the proposed protocols are proven using Lyapunov theory. Numerical simulations have been carried out to demonstrate the proposed algorithms&#39; effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_87_Distributed_Cooperative_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weapons Detection System Based on Edge Computing and Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140586</link>
        <id>10.14569/IJACSA.2023.0140586</id>
        <doi>10.14569/IJACSA.2023.0140586</doi>
        <lastModDate>2023-05-30T16:27:33.5870000+00:00</lastModDate>
        
        <creator>Zufar R. Burnayev</creator>
        
        <creator>Daulet O. Toibazarov</creator>
        
        <creator>Sabyrzhan K. Atanov</creator>
        
        <creator>H&#252;seyin Canbolat</creator>
        
        <creator>Zhexen Y. Seitbattalov</creator>
        
        <creator>Dauren D. Kassenov</creator>
        
        <subject>Internet of Things; gun recognition; edge device; Raspberry pi; military systems control; network analytic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Early detection of armed threats is crucial in reducing accidents and deaths resulting from armed conflicts and terrorist attacks. The most significant applications of weapon detection systems are in public areas such as airports, stadiums, and central squares, and on the battlefield in urban or rural conditions. Modern closed-circuit television surveillance and control systems apply deep learning and machine learning algorithms for weapon detection on the basis of cloud architectures. However, cloud computing is inefficient in terms of network bandwidth, data privacy, and decision-making speed. To address these issues, edge computing can be applied, using a Raspberry Pi as the edge device with the EfficientDet model to develop the weapon detection system. The image processing results are transmitted as a text report to the cloud platform for further analysis by the operator. Soldiers can equip themselves with the suggested edge node and headphones for armed threat notifications, plugged into augmented reality glasses for visual data output. As a result, the application of edge computing makes it possible to ensure data safety, improve network bandwidth utilization, and allow the device to operate without the internet. Thus, an independent weapon detection system was developed that identifies weapons in 1.48 seconds without an internet connection.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_86-Weapons_Detection_System_Based_on_Edge_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Design of Optical Logic Gates with Transverse Electric and Magnetic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140585</link>
        <id>10.14569/IJACSA.2023.0140585</id>
        <doi>10.14569/IJACSA.2023.0140585</doi>
        <lastModDate>2023-05-30T16:27:33.5700000+00:00</lastModDate>
        
        <creator>Lili Liu</creator>
        
        <creator>Haiquan Sun</creator>
        
        <creator>Lishuang Hao</creator>
        
        <creator>Cailiang Chen</creator>
        
        <subject>Photonic crystal; hexagonal lattice; NOR; XNOR; transverse electric; transverse magnetic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This paper presents a new design of optical NOR and XNOR logic gates using a two-dimensional hexagonal photonic crystal (2D-HPhC) that supports both Transverse Electric (TE) and Transverse Magnetic (TM) polarization modes. The structure is very compact and has a low delay time. The design includes three input waveguides (A, B, C) and one output waveguide (Q), with the NOR gate having a delay of 0.75 ps and the XNOR gate a delay of 0.9 ps. The contrast ratio between the input and output for both gates is 7-8 dB. The XNOR gate has an optimum transmission rate of T = 96%. The structure uses a reference input to create the fundamental NOR and XNOR logic gates by adjusting the signal phase angle.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_85-A_New_Design_of_Optical_Logic_Gates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attribute-based Access Control Model in Healthcare Systems with Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140584</link>
        <id>10.14569/IJACSA.2023.0140584</id>
        <doi>10.14569/IJACSA.2023.0140584</doi>
        <lastModDate>2023-05-30T16:27:33.5530000+00:00</lastModDate>
        
        <creator>Prince Arora</creator>
        
        <creator>Avinash Bhagat</creator>
        
        <creator>Mukesh Kumar</creator>
        
        <subject>Blockchain; healthcare; permission level; consensus level; scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Blockchain in the healthcare sector faces a serious scalability concern, which involves the challenge of converting arbitrary values to fixed values. The transfer of arbitrary data from diverse sources is another point of concern in the blockchain. In this paper, the authors propose a model that receives data from diverse sources and converts it to a fixed type of value. The paper also proposes an access control scheme with various permission- and consensus-level protocols that allow a reduction in block size with respect to scalability. The consensus level allows access to an individual or a group of users, and the permission level applies to each block by considering the access granted to nodes of the blockchain. The addition of various permission and consensus levels allows only a restricted type of data to pass through the model. Once the data is verified and approved at the various levels, it is ready to become part of the blockchain. The paper introduces a model in which the time taken to create a new hash is 0.15625 microseconds. A total of 64 transactions taken from the dataset are considered, and the throughput is calculated for individual access. After applying the formula, the calculated throughput is 32.5 microseconds. With a lighter block size, data can be made available to patients. This research enables patients to keep track of their medical history, so that deaths due to medication overdose can be reduced.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_84-Attribute_based_Access_Control_Model_in_Healthcare_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on the Evaluation Model of In-depth Learning for Oral English Learning in Online Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140583</link>
        <id>10.14569/IJACSA.2023.0140583</id>
        <doi>10.14569/IJACSA.2023.0140583</doi>
        <lastModDate>2023-05-30T16:27:33.5400000+00:00</lastModDate>
        
        <creator>Yanli Ge</creator>
        
        <subject>Spoken English; online education; transformer model; deep learning; evaluation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>As globalization intensifies, people from different regions are communicating more closely. Therefore, the demand for second languages is constantly expanding, accelerating the development of both the field of English oral evaluation and online education. The study proposes a text-prior-based oral evaluation model, built on the Transformer architecture, that uses target phonemes as input to the decoder. The model successfully predicts the relationship between actual pronunciation and error labels. At the same time, a self-supervised oral evaluation model with accent is constructed, which simulates the training process of misreading data by calculating semantic distance. The experimental results show that when the training set ratio reaches its maximum in the Speed Ocean dataset and the L2 Arctic dataset, the F1 values of the proposed method are 0.612 and 0.596, respectively; the length of the target phoneme has a smaller impact on this model than on other models. Experiments have shown that the proposed deep learning method can alleviate deployment difficulties, directly optimize the effectiveness of oral evaluation, provide more accurate feedback, and offer users a better learning experience. This has practical significance for the development of the field of oral evaluation.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_83-A_Study_on_the_Evaluation_Model_of_In_depth_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Path Planning for Industrial Omnidirectional AGV Based on Mechatronic Engineering Intelligent Optical Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140582</link>
        <id>10.14569/IJACSA.2023.0140582</id>
        <doi>10.14569/IJACSA.2023.0140582</doi>
        <lastModDate>2023-05-30T16:27:33.5400000+00:00</lastModDate>
        
        <creator>Yuanyuan Pan</creator>
        
        <subject>Smart machinery; optical sensors; industrial development; autonomous path planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>With the rapid development of modern industry, the application of automated mechanical and electronic technology is gradually increasing, and the research on automatic path planning is also receiving increasing attention. In this environment of rapid technological progress, rapid growth of the knowledge economy, and fierce competition, industrial intelligence has become an indispensable part of social development. Industrial Automated Guided Vehicle (AGV) has put forward higher requirements for the application of automatic control technology in the planning and research of autonomous path planning. Autonomous path planning with AGV as the service object is currently the most widely used direction in industrial production processes, with the best development prospects and the highest market demand. Optimizing autonomous path planning for AGV is of great significance in promoting the process of industrial modernization and improving industrial production efficiency. In order to solve the problems of low path planning efficiency, excessive reliance on the rich experience and subjective judgment of relevant personnel, and excessive consumption of path planning costs in traditional AGV omnidirectional autonomous path planning, this article attempted to introduce sensor technology to conduct in-depth research on AGV omnidirectional automatic path planning. Based on intelligent optical sensors and combined with ant colony algorithm, the autonomous path planning model for AGV was optimized, and an innovative AGV omnidirectional autonomous path planning model application experiment was conducted in two industrial production enterprises in a certain region. Comparative analysis of experimental data showed that the innovative AGV omnidirectional autonomous path planning model studied in this article had an average improvement of about 17.8% in four evaluation indicators compared to traditional AGV omnidirectional autonomous path planning models.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_82-Autonomous_Path_Planning_for_Industrial_Omnidirectional_AGV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Intrusion Detection System using Ensemble Learning Technique in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140580</link>
        <id>10.14569/IJACSA.2023.0140580</id>
        <doi>10.14569/IJACSA.2023.0140580</doi>
        <lastModDate>2023-05-30T16:27:33.5070000+00:00</lastModDate>
        
        <creator>Rajesh Bingu</creator>
        
        <creator>S. Jothilakshmi</creator>
        
        <subject>Cloud; distributed denial of service; intrusion detection; ensemble; recurrent neural network; convolutional neural network; random forest; gated recurrent unit; K-means clustering; long short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The key advantage of the cloud is that it scales fluidly to fulfil changing requirements and provides an environment that is repeatable and can be scaled down instantly when needed. Therefore, it is necessary to protect this cloud environment from malicious attacks such as spamming, keylogging, Denial of Service (DoS), and Distributed Denial of Service (DDoS). Among these kinds of attacks, DDoS has the capability to launch a high flood of malicious traffic against a cloud environment or a Software Defined Networking (SDN) based cloud environment. Hence, in this work, an ensemble-based deep learning technique is proposed to detect attacks in cloud and SDN-based cloud environments. Here, the ensemble model is formed by combining K-means with deep learning classifiers such as the Long Short-Term Memory (LSTM) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Deep Neural Network (DNN). Initially, preprocessing with data cleaning and standardization is applied to the input data. Meanwhile, a random forest is implemented to extract the minimal set of significant features. After that, the proposed ensemble-based approach is utilized to detect intrusions. This approach enhances the performance of the deep learning classifiers without much computational complexity. The model is trained and evaluated using two datasets: the CICIDS 2018 dataset and an SDN-based DDoS attack dataset. The proposed approach provides better intrusion detection performance in terms of F1 measure, precision, accuracy, and recall. Using the proposed approach, the accuracy and precision values attained are 99.685 and 0.992, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_80-Design_of_Intrusion_Detection_System_using_Ensemble_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Anomaly Detection in the Internet of Things using Whale Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140581</link>
        <id>10.14569/IJACSA.2023.0140581</id>
        <doi>10.14569/IJACSA.2023.0140581</doi>
        <lastModDate>2023-05-30T16:27:33.5070000+00:00</lastModDate>
        
        <creator>Zhihui Zhu</creator>
        
        <creator>Meifang Zhu</creator>
        
        <subject>Internet of things; anomaly detection; intrusion detection; firewall; whale optimization algorithm; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The Internet of Things (IoT) is integral to human life due to its pervasive applications in home appliances, surveillance, and environment monitoring. Resource-constrained IoT devices are easily accessible to attackers due to their direct connection to the unsafe Internet. Public access to the Internet makes IoT objects more susceptible to intrusion. As the name implies, anomaly detection systems are designed to identify anomalous traffic patterns that conventional firewalls fail to detect. Effective Intrusion Detection System (IDS) design faces three major problems: handling high dimensionality, selecting a learning algorithm, and comparing incoming observations against traffic patterns using a distance or similarity measure. Considering the dynamic nature of the entities involved and the limited computing resources available, traditional anomaly detection approaches are insufficient. This paper proposes a novel method based on the Whale Optimization Algorithm (WOA) to detect anomalies in IoT-based networks that conventional firewall systems cannot detect. Experiments are conducted on the KDD dataset. The accuracy of the proposed method is compared against classifiers such as kNN, SVM, and DT. The detection accuracy rate of the proposed method is significantly higher than that of other methods for DoS, probing, R2L, and U2R attacks, as well as normal traffic. This method shows an impressive increase in accuracy when detecting a wide range of malicious activities, from DoS, probing, and privilege escalation attacks to remote-to-local and user-to-root attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_81-A_Novel_Method_for_Anomaly_Detection_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Reliable Transmission Mechanism for Vehicle Data in Mobile Internet of Vehicles Driven by Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140579</link>
        <id>10.14569/IJACSA.2023.0140579</id>
        <doi>10.14569/IJACSA.2023.0140579</doi>
        <lastModDate>2023-05-30T16:27:33.4930000+00:00</lastModDate>
        
        <creator>Wenjing Liu</creator>
        
        <subject>Mobile network; internet of vehicles; reliable transmission; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In order to meet the business requirements of different applications in heterogeneous, random, and time-varying mobile network environments, the design of a reliable transmission mechanism is the core problem of the mobile Internet of vehicles. The current research is mainly based on the computing power support of roadside units, and large delays and high costs are significant defects that are difficult to overcome. In order to overcome this deficiency, this paper integrates edge computing to design task unloading and routing protocol for the reliable transmission mechanism of mobile Internet of vehicles. Firstly, combined with edge computing technology, a mobile-aware edge task unloading mechanism in a vehicle environment is designed to improve resource utilization efficiency and strengthen network edge computing capacity so as to provide computing support for upper service applications; Secondly, with the support of computing power of edge task unloading mechanism, connectivity aware and delay oriented edge node routing protocol in-vehicle environment is constructed to realize reliable communication between vehicles. The main characteristics of this research are as follows: firstly, edge computing technology is introduced to provide distributed computing power, and reliable transmission routing is designed based on vehicle-to-vehicle network topology, which has prominent cost advantages and application value. Secondly, the reliability of transmission is improved through a variety of innovative technical designs, including taking the two hop range nodes as the service set search to reduce the amount of system calculation, fully considering the link connectivity state, and comprehensively using real-time and historical link data to establish the backbone link. This paper constructs measurement indicators based on delay and mobility as key elements of the computing offloading mechanism. The offloading decision is made through weighted calculation of delay estimation and computing cost, and a reasonable computing model is designed. The experimental simulation shows that the average task execution time under this model is 65.4% shorter than that of local computing, 18.4% shorter than that of cloud computing, and the routing coverage is about 6% higher than that of local computing when there are less than 60 nodes. These research and experimental results fully demonstrate that the mobile Internet of vehicles based on edge computing has good reliable transmission characteristics.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_79-Design_of_a_Reliable_Transmission_Mechanism_for_Vehicle_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MoveNET Enabled Neural Network for Fast Detection of Physical Bullying in Educational Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140578</link>
        <id>10.14569/IJACSA.2023.0140578</id>
        <doi>10.14569/IJACSA.2023.0140578</doi>
        <lastModDate>2023-05-30T16:27:33.4770000+00:00</lastModDate>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Bibinur Kirgizbayeva</creator>
        
        <creator>Gulbakyt Sembina</creator>
        
        <creator>Ulmeken Smailova</creator>
        
        <creator>Madina Suleimenova</creator>
        
        <creator>Arailym Keneskanova</creator>
        
        <creator>Zhumakul Baizakova</creator>
        
        <subject>MoveNET; neural networks; skeleton; bullying; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In this article, we provide a MoveNET-based technique for detecting violent actions. This strategy does not need high-computational technology and can be put into action in a very short amount of time. Our method comprises two stages: first, features are captured from image sequences to evaluate body position; next, an artificial neural network is applied for activity classification to determine whether the frames contain violent or hostile circumstances. A video aggression database was created, consisting of 400 minutes of one individual&#39;s actions and 20 hours of video data encompassing physical abuse, with 13 categories for distinguishing between the behaviors of the attacker and the victim. The suggested approach was then refined and validated using the collected dataset. According to the findings, an accuracy rate of 98% was attained in detecting aggressive behavior in video sequences. In addition, the findings indicate that the suggested technique can identify aggressive behavior and violent acts in a very short amount of time and is suitable for real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_78-MoveNET_Enabled_Neural_Network_for_Fast_Detection_of_Physical_Bullying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Term Frequency-Inverse Document Frequency for the Problem of Identifying Shrimp Diseases with State Description Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140577</link>
        <id>10.14569/IJACSA.2023.0140577</id>
        <doi>10.14569/IJACSA.2023.0140577</doi>
        <lastModDate>2023-05-30T16:27:33.4600000+00:00</lastModDate>
        
        <creator>Luyl-Da Quach</creator>
        
        <creator>Anh Nguyen Quynh</creator>
        
        <creator>Khang Nguyen Quoc</creator>
        
        <creator>An Nguyen Thi Thu</creator>
        
        <subject>TF-IDF; machine learning; deep learning; CNN; shrimp disease classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>With the increasing demand for research on shrimp disease recognition to assist remote farmers who need proper support for their shrimp farming, shrimp disease prediction research is still in its initial stage. Most current methods utilize vision-based models, which mainly face two challenges: symptom detection and image quality. Meanwhile, there is little language-based research addressing these issues. In this study, we experiment with natural language processing for recognizing shrimp diseases based on descriptions of shrimp status. This study provides an efficient solution for classifying multiple diseases in shrimp. We compare different machine learning and deep learning models (SVM, Logistic Regression, Multinomial Naive Bayes, Bernoulli Naive Bayes, Random Forest, DNN, LSTM, GRU, BRNN, RCNN) in terms of accuracy and performance. The study also evaluates the TF-IDF technique for feature extraction. Data were collected for 12 types of shrimp diseases with 1,037 descriptions. First, the data is preprocessed: Vietnamese accent typing is standardised, words are tokenized, text is converted to lowercase, and unnecessary characters and stopwords are removed. Then, TF-IDF is utilized to express the text feature weights. Machine learning-based and deep learning-based models are trained. The experimental results show that Random Forest (F1-score micro: 98%) and DNN (validation accuracy: 84%) are the most efficient models.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_77-Using_the_Term_Frequency_Inverse_Document_Frequency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Krill Herd Algorithm for Live Virtual Machines Migration in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140576</link>
        <id>10.14569/IJACSA.2023.0140576</id>
        <doi>10.14569/IJACSA.2023.0140576</doi>
        <lastModDate>2023-05-30T16:27:33.4470000+00:00</lastModDate>
        
        <creator>Hui Cao</creator>
        
        <creator>Zhuo Hou</creator>
        
        <subject>Cloud computing; migration; virtualization; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Green cloud computing is a modern approach that provides pay-per-use information and communication technologies with a minimal carbon footprint. Cloud computing enables users to access computing resources without the need for local servers or personal devices to operate applications. It allows businesses and developers to access infrastructure and hardware resources conveniently. Consequently, this results in a growing demand for data centers. Since data centers consume a disproportionate amount of energy, maintaining economic and environmental sustainability becomes crucial. This points to sustainability and energy consumption as important topics for research in cloud computing. This paper introduces a two-tiered VM placement algorithm. At the first level, a queuing model is proposed to handle large numbers of VM requests; it is easily implemented and validated in cloud simulation and provides an alternate method for allocating tasks to servers. Next, a multi-objective VM placement algorithm is proposed based on the Krill Herd (KH) algorithm, which maintains a balance between energy consumption and resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_76-Krill_Herd_Algorithm_for_Live_Virtual_Machines_Migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skin Cancer Image Detection and Classification by CNN based Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140575</link>
        <id>10.14569/IJACSA.2023.0140575</id>
        <doi>10.14569/IJACSA.2023.0140575</doi>
        <lastModDate>2023-05-30T16:27:33.4470000+00:00</lastModDate>
        
        <creator>Sarah Ali Alshawi</creator>
        
        <creator>Ghazwan Fouad Kadhim AI Musawi</creator>
        
        <subject>Medical images; skin cancer; machine learning; deep learning; ensemble learning; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Melanoma is a rare skin cancer responsible for a high mortality rate. Various imaging tests can be used to detect the metastatic spread of disease with a primary diagnosis or on clinical suspicion. The focus on melanoma detection, despite its unusual occurrence, stems from the fact that it is often misdiagnosed as other skin malignancies, leading to medical negligence. Sometimes melanoma is detected only when the metastasis has entered the bloodstream or lymph nodes. So, effective computational strategies for early detection of melanoma are essential. There are four principal types of skin melanoma, Superficial spreading, Nodular, Lentigo maligna, and Acral lentiginous, along with two subtypes, Lentigo and Subungual melanoma. Amelanotic melanoma, one particular type of melanoma, occurs in all kinds of skin tones. This research focuses on classifying melanoma into its classes. To reduce misclassification errors and overfitting issues and to improve accuracy, ensemble classifier models, namely AdaBoost, Random Forest, voted ensemble, voted CNN, boosted SVM, and boosted GMM, have been used in melanoma classification. The ensemble classifiers achieve high classification accuracy. However, imbalanced classification is found across all six classes of melanoma. Transfer learning and ensembled transfer learning approaches are implemented to reduce the imbalanced classification issues, and their performances are analyzed. Four ML/DL models, six ensembled models, four transfer learning models, and five ensembled transfer learning models are used in this investigation. All 19 classifiers are analyzed using standard performance metrics such as accuracy, precision, recall, Matthews correlation coefficient, Jaccard index, F1 measure, and Cohen&#39;s kappa.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_75-Skin_Cancer_Image_Detection_and_Classification_by_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combinatorial Optimization Design of Search Tree Model Based on Hash Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140574</link>
        <id>10.14569/IJACSA.2023.0140574</id>
        <doi>10.14569/IJACSA.2023.0140574</doi>
        <lastModDate>2023-05-30T16:27:33.4300000+00:00</lastModDate>
        
        <creator>Yun Liu</creator>
        
        <creator>Jiajun Li</creator>
        
        <creator>Jingjing Chen</creator>
        
        <subject>Combination optimization; game search algorithm; state space; transposition table</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The game search tree model usually does not consider the state information of similar nodes, which results in searching a huge state space and leads to problems such as an oversized game tree and long solution times. In view of this, the article proposes a scheme based on a combinatorial optimization algorithm, which has important applications in solving decision problems in tree-graph models. First, the special graph-theoretic structure of the point-grid game is analyzed, and the storage and search of states are optimized by designing hash functions; then, a branch-and-bound algorithm is used to search the state space, and the evaluation values of repeated nodes are calculated by dynamic programming; finally, the state space is greatly reduced by combining a two-way detection search strategy. The results show that the algorithm improves decision-making efficiency, achieving final winning rates of 37% and 42%, respectively. The design provides new ideas for computational complexity problems in the field of game search and also proposes new solutions for the field of combinatorial optimization.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_74-Combinatorial_Optimization_Design_of_Search_Tree_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Classification of Scanned Electronic University Documents using Deep Neural Networks with Conv2D Layers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140573</link>
        <id>10.14569/IJACSA.2023.0140573</id>
        <doi>10.14569/IJACSA.2023.0140573</doi>
        <lastModDate>2023-05-30T16:27:33.4130000+00:00</lastModDate>
        
        <creator>Aigerim Baimakhanova</creator>
        
        <creator>Ainur Zhumadillayeva</creator>
        
        <creator>Sailaugul Avdarsol</creator>
        
        <creator>Yermakhan Zhabayev</creator>
        
        <creator>Makhabbat Revshenova</creator>
        
        <creator>Zhenis Aimeshov</creator>
        
        <creator>Yerkebulan Uxikbayev</creator>
        
        <subject>Deep learning; CNN; RNN; classification; image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This paper proposes a novel approach for scanned document categorization using a deep neural network architecture. The proposed approach leverages the strengths of both convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract features from the scanned documents and model the dependencies between words in the documents. The pre-processed documents are first fed into a CNN, which learns and extracts features from the images. The extracted features are then passed through an RNN, which models the sequential nature of the text. The RNN produces a probability distribution over the predefined categories, and the document is classified into the category with the highest probability. The proposed approach is evaluated on a dataset of scanned documents, where each document is categorized into one of four predefined categories. The experimental results demonstrate that the proposed approach achieves high accuracy and outperforms existing methods. The proposed approach achieves an overall accuracy of 97.3%, which is significantly higher than the existing methods&#39; accuracy. Additionally, the proposed approach&#39;s performance was robust to variations in the quality of the scanned documents and the OCR accuracy. The contributions of this paper are twofold. Firstly, it proposes a novel approach for scanned document categorization using deep neural networks that leverages the strengths of CNNs and RNNs. Secondly, it demonstrates the effectiveness of the proposed approach on a dataset of scanned documents, highlighting its potential applications in various domains, such as information retrieval, data mining, and document management. The proposed approach can help organizations manage and analyze large volumes of data efficiently.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_73-Automatic_Classification_of_Scanned_Electronic_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Internet of Things Impact on e-Learning System: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140572</link>
        <id>10.14569/IJACSA.2023.0140572</id>
        <doi>10.14569/IJACSA.2023.0140572</doi>
        <lastModDate>2023-05-30T16:27:33.4000000+00:00</lastModDate>
        
        <creator>Duha Awad H. Elneel</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <creator>Abdul Sahli Fakharudin</creator>
        
        <creator>Mansoor Abdulhak</creator>
        
        <creator>Ahmad Salah Al-Ahmad</creator>
        
        <creator>Yehia Ibrahim Alzoubi</creator>
        
        <subject>e-Learning system; Internet of Things (IoT); software system; education; learning process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>e-Learning systems have reached their peak with the revolution of smart technologies. In the past few years, the Internet of Things (IoT) has become one of the most advanced and popular technologies, affecting many different areas. Integrating IoT into an e-learning system improves the system and makes it more inventive and cutting-edge. The key challenge addressed in this study is the acceptance of IoT usage in e-learning systems, as well as how to improve it so that it can be utilized properly. This research concentrates on how IoT can benefit e-learning systems and their users. A comprehensive literature review of the important research related to IoT technology and e-learning systems was conducted using online research databases and reliable scientific journals. The first research finding is that e-learning systems need modern techniques such as IoT to enable interconnection, increase reliability, and enhance the enjoyment of the educational process. The second is that research on the development of new technologies like the IoT has a significant impact on enhancing the performance of new systems and bringing about positive change. This study highlights the value of IoT, particularly in e-learning systems. It aids in the development of new strategies that will improve the efficacy of e-learning systems and stimulate researchers to develop advanced technology.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_72-Investigating_Internet_of_Things_Impact_on_e_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Student Personality Trait on Spear-Phishing Susceptibility Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140571</link>
        <id>10.14569/IJACSA.2023.0140571</id>
        <doi>10.14569/IJACSA.2023.0140571</doi>
        <lastModDate>2023-05-30T16:27:33.4000000+00:00</lastModDate>
        
        <creator>Mohamad Alhaddad</creator>
        
        <creator>Masnizah Mohd</creator>
        
        <creator>Faizan Qamar</creator>
        
        <creator>Mohsin Imam</creator>
        
        <subject>Spear-phishing; cyber-attack; personality; trait; embedded training; message framing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Spear-phishing emails are an effective cyber-attack method because the emails sent are highly personalized to look like regular legitimate email. Recently, it was discovered that the personality traits of the victim have an impact on a person&#39;s susceptibility to spear-phishing. This study aims to identify which personality traits affect spear-phishing susceptibility, besides other traits such as Information Technology background, gender, and age. In addition, it measures the effectiveness of embedded training systems and examines whether message framing can further increase that effectiveness. A personality trait survey was sent to 100 participants, followed by a real-life spear-phishing simulation to measure a certain personality trait’s influence on phishing susceptibility. After a two-week period, a second round of spear-phishing emails was sent to measure message framing effectiveness. The personality traits analysis results show that users with higher levels of Internet anxiety are less susceptible to spear-phishing emails. While the message framing did not show any significant results, the embedded training program reduced the click rate. Findings revealed that certain people are more susceptible to spear-phishing emails than others. Thus, this work can guide institutions and organizations in identifying which groups of people are more vulnerable to spear-phishing.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_71-Study_of_Student_Personality_Trait_on_Spear_Phishing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact and Analysis of Disease Spread in Paddy Crops using Environmental Factors with the Support of X-Step Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140570</link>
        <id>10.14569/IJACSA.2023.0140570</id>
        <doi>10.14569/IJACSA.2023.0140570</doi>
        <lastModDate>2023-05-30T16:27:33.3830000+00:00</lastModDate>
        
        <creator>P. Veera Prakash</creator>
        
        <creator>Muktevi Srivenkatesh</creator>
        
        <subject>Paddy crops; cash crop disease; green rice leafhopper; rice leaf folder; brown plant leafhopper; x-step algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>India is an agriculture-based country, with paddy being the main crop cultivated on nearly half of its agricultural lands. Paddy cultivation faces numerous challenges, particularly diseases that affect crop growth and yield. Adult paddy crops are especially vulnerable to diseases caused by various factors, such as the green rice leafhopper, rice leaf folder, and brown plant leafhopper. These insects inflict damage on the paddy crops, restricting their growth and leading to significant losses. This research paper investigates the impact of environmental factors on disease spread in paddy crops, using the X-Step Algorithm for analysis. The study aims to better understand the role of environmental conditions, including air, water, and soil quality, in the development and progression of diseases in rice crops. This knowledge will help to optimize disease prevention and management strategies for improved crop yields and food security. The X-Step Algorithm, a novel machine learning algorithm, was employed to model and predict disease spread, taking into account various environmental factors. The proposed algorithm analyses images of paddy crops either manually captured or taken by sensors to evaluate disease spread and growth in paddy crops. This data-driven approach allows for more accurate and timely predictions, enabling farmers and agricultural experts to implement appropriate interventions.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_70-Impact_and_Analysis_of_Disease_Spread_in_Paddy_Crops.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques in Keratoconus Classification: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140569</link>
        <id>10.14569/IJACSA.2023.0140569</id>
        <doi>10.14569/IJACSA.2023.0140569</doi>
        <lastModDate>2023-05-30T16:27:33.3670000+00:00</lastModDate>
        
        <creator>AATILA Mustapha</creator>
        
        <creator>LACHGAR Mohamed</creator>
        
        <creator>HRIMECH Hamid</creator>
        
        <creator>KARTIT Ali</creator>
        
        <subject>Ophthalmology; corneal disease; keratoconus classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Machine learning (ML) algorithms are being integrated into several disciplines. Ophthalmology is one field of the health sector that has benefited from the advantages and capacities of ML in processing different types of data. In a large number of studies, the detection and classification of various diseases, such as keratoconus, was carried out by analyzing corneal characteristics in different data types (images, measurements, etc.) using ML tools. The main objective of this study was to conduct a rigorous systematic review of the use of ML techniques in the detection and classification of keratoconus. Papers considered in this study were selected carefully from the Scopus and Web of Science digital databases, according to their content and to the adoption of ML methods in the classification of keratoconus. The selected studies were reviewed to identify the different ML techniques implemented and the data types handled in the diagnosis of keratoconus. A total of 38 articles, published between 2005 and 2022, were retained for review and discussion of their content.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_69-Machine_Learning_Techniques_in_Keratoconus_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recurrent Ascendancy Feature Subset Training Model using Deep CNN Model for ECG based Arrhythmia Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140568</link>
        <id>10.14569/IJACSA.2023.0140568</id>
        <doi>10.14569/IJACSA.2023.0140568</doi>
        <lastModDate>2023-05-30T16:27:33.3530000+00:00</lastModDate>
        
        <creator>Shaik Janbhasha</creator>
        
        <creator>S Nagakishore Bhavanam</creator>
        
        <subject>Feature selection; arrhythmia classification; convolution neural network; deep learning; electrocardiograms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The World Health Organization (WHO) has released a report warning of the worldwide epidemic of heart disease, which is reaching worrisome proportions among adults aged 40 and above. Heart problems can be detected and diagnosed by a variety of methods and procedures, and scientists are striving to find approaches that meet the required accuracy standards. An Electrocardiogram (ECG) reveals heart issues in the recorded waveform. Feature-based deep learning algorithms have been essential in the medical sciences for decades, centralising data in the cloud and making it available to researchers around the world. Manual analysis of the ECG signal is insufficient to promptly detect irregularities in the cardiac rhythm. ECGs play a crucial role in the evaluation of cardiac arrhythmias in daily clinical practice. In this research, a deep learning-based Convolution Neural Network (CNN) framework is adapted from its original classification task to automatically diagnose arrhythmias in ECGs. A deep convolution network trained with the most relevant feature subset is used for accurate classification. The primary goal of this research is to classify arrhythmia using a deep learning method that is straightforward, accurate, and easily deployable. This research proposes a Recurrent Ascendancy Feature Subset Training model using a Deep CNN model for arrhythmia Classification (RAFST-DCNN-AC). The suggested framework is tested on ECG waveform cases taken from the MIT-BIH arrhythmia database. When contrasted with existing models, the proposed model exhibits a better classification rate.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_68-Recurrent_Ascendancy_Feature_Subset_Training_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendation System Based on Double Ensemble Models using KNN-MF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140566</link>
        <id>10.14569/IJACSA.2023.0140566</id>
        <doi>10.14569/IJACSA.2023.0140566</doi>
        <lastModDate>2023-05-30T16:27:33.3370000+00:00</lastModDate>
        
        <creator>Krishan Kant Yadav</creator>
        
        <creator>Hemant Kumar Soni</creator>
        
        <creator>Nikhlesh Pathik</creator>
        
        <subject>Recommendation system; k nearest neighbour; matrix factorization; predictions using stacking; ensemble model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In today&#39;s digital environment, recommendation systems are essential as they provide personalised content to users, increasing user engagement and enhancing user satisfaction. This paper proposes a double ensemble recommendation model that combines two collaborative filtering algorithms, K Nearest Neighbour (KNN) and Matrix Factorization (MF). KNN is a neighbourhood-based algorithm that uses the similarity between users or items to make recommendations, while MF is a model-based algorithm that decomposes the user-item rating matrix into lower-dimensional matrices representing the latent user and item factors. The proposed double ensemble model uses KNN and MF to predict the missing ratings and combines their predictions using stacking. To evaluate the performance of the proposed ensemble model, we conducted experiments on three datasets, i.e., the MovieLens, BookCrossing, and Hindi Movie datasets, and compared the results with those of single-algorithm approaches. The experimental results demonstrate that the double ensemble model outperforms the single-algorithm approaches on accuracy metrics such as Mean Square Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). The results indicate that stacking KNN and MF leads to a more robust and more accurate recommendation system.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_66-Recommendation_System_Based_on_Double_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Regression Models and Climatological Data for Improved Precipitation Forecast in Southern India</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140567</link>
        <id>10.14569/IJACSA.2023.0140567</id>
        <doi>10.14569/IJACSA.2023.0140567</doi>
        <lastModDate>2023-05-30T16:27:33.3370000+00:00</lastModDate>
        
        <creator>J. Subha</creator>
        
        <creator>S. Saudia</creator>
        
        <subject>Ensemble models; machine learning models; rainfall precipitation forecast; R-squared value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Modern technologies like Artificial Intelligence (AI) and Machine Learning (ML) replicate intelligent human behavior and offer solutions in all domains, especially for human protection and disaster management. Nowadays, in both rural and urban areas, flood control is a serious issue in averting vast damage to life and property. The work proposes to identify an appropriate ML-based precipitation forecast model for the flood-prone southern states of India, namely Tamil Nadu, Karnataka, and Kerala, which receive the most precipitation, using climatological information obtained from the NASA POWER platform. The work investigates the effectiveness of the ML forecasting models Multiple Linear Regression (MLR), Support Vector Regression (SVR), Decision Tree Regression (DTR), and Random Forest Regression (RFR), and the Ensemble (E) learning approaches E-MLR-SVR, E-MLR-DTR, E-MLR-RFR, E-SVR-DTR, E-SVR-RFR, E-DTR-RFR, E-MLR-SVR-DTR, E-MLR-SVR-RFR, E-MLR-DTR-RFR, and E-SVR-DTR-RFR, in forecasting precipitation. The E-MLR-RFR model produces the most precise precipitation forecasts in terms of Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), and R2 values. A high precipitation forecast can be used to provide early warning about possible floods in a region.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_67-Integrating_Regression_Models_and_Climatological_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Model of Corn Commodity Productivity in Indonesia: Production and Operations Management, Quantitative Method (POM-QM) Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140565</link>
        <id>10.14569/IJACSA.2023.0140565</id>
        <doi>10.14569/IJACSA.2023.0140565</doi>
        <lastModDate>2023-05-30T16:27:33.3200000+00:00</lastModDate>
        
        <creator>Asriani </creator>
        
        <creator>Usman Rianse</creator>
        
        <creator>Surni</creator>
        
        <creator>Yani Taufik</creator>
        
        <creator>Dhian Herdhiansyah</creator>
        
        <subject>Forecasting model; productivity; corn commodity; POM-QM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Food is an essential ingredient needed by humans. In addition to being consumed, it can also be a valuable commodity for economic purposes through the productivity of food crops. Therefore, this study aims to model the forecasting of corn productivity in Indonesia using Production and Operations Management-Quantitative Method (POM-QM) software. The data collected on the productivity of corn commodities in Indonesia from 1980 to 2019 show fluctuations, with both deficit and surplus periods. This study uses a time series data-based forecasting model consisting of three methods, namely Double Moving Average (DMA), Weighted Moving Average (WMA), and Single Exponential Smoothing (SES). The selection of the best model was conducted based on the Mean Absolute Deviation (MAD), Mean Square Error (MSE), and Mean Absolute Percent Error (MAPE). SES emerged as the most preferred, with the lowest MAPE value of 4.913%. The predicted productivity of corn in Indonesia is estimated at 5.28 tons/ha/year, sufficient to meet consumers&#39; demand. Therefore, governments are recommended to use this information in predicting corn productivity to meet the national demand in the future.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_65-Forecasting_Model_of_Corn_Commodity_Productivity_in_Indonesia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Two Phase Detection Process to Mitigate LRDDoS Attack in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140563</link>
        <id>10.14569/IJACSA.2023.0140563</id>
        <doi>10.14569/IJACSA.2023.0140563</doi>
        <lastModDate>2023-05-30T16:27:33.3030000+00:00</lastModDate>
        
        <creator>Amrutha Muralidharan Nair</creator>
        
        <creator>R Santhosh</creator>
        
        <subject>LRDDoS attack; distance deviation; covariance vector; threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Distributed Denial of Service (DDoS) is a major attack carried out by attackers leveraging critical cloud computing technologies. DDoS attacks are carried out by flooding the victim servers with a massive volume of malicious traffic over a short period. Because of the enormous volume of malicious traffic, such assaults are easily detected. As a result, low-rate DDoS operations are increasingly appealing to attackers due to their stealth; such low-traffic assaults are also difficult to detect. In recent years, there has been a lot of focus on defense against low-rate DDoS attacks. This paper presents a two-phase detection technique for mitigating and reducing LRDDoS threats in a cloud environment. The proposed model includes two phases: one for calculating predicted packet size and entropy, and another for calculating the covariance vector. In this model, each cloud user accesses the cloud using a virtual machine, which has a unique session ID. This model identifies all LRDDoS assaults that take place using different protocols (TCP, UDP, ICMP). The experiment&#39;s findings demonstrate how the suggested data packet size, IP address, and flow behavior are used to identify attacks and prevent hostile users from using cloud services. The VM instances used by different users are controlled by this dynamic mitigation mechanism, which also upholds the cloud service quality. The results of the experiments reveal that the suggested method identifies LRDDoS attacks with excellent accuracy and scalability.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_63-Two_Phase_Detection_Process_to_Mitigate_LRDDoS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pig Health Abnormality Detection Based on Behavior Patterns in Activity Periods using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140564</link>
        <id>10.14569/IJACSA.2023.0140564</id>
        <doi>10.14569/IJACSA.2023.0140564</doi>
        <lastModDate>2023-05-30T16:27:33.3030000+00:00</lastModDate>
        
        <creator>Duc Duong Tran</creator>
        
        <creator>Nam Duong Thanh</creator>
        
        <subject>Deep learning; pig tracking; behavior patterns; pig health monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Detection of abnormal pig behaviors in pig farms is important for monitoring pig health and welfare. Pigs with health problems often have behavioral abnormalities. Observing pig behaviors can help detect pig health problems and enable early treatment to prevent disease from spreading. This paper proposes a deep learning method for automatically monitoring and detecting abnormalities in pig behaviors from cameras in pig farms, based on comparison of pig behavior patterns in activity periods. The approach consists of a pipeline of methods, including individual pig detection and localization, pig tracking, and behavioral abnormality analysis. From pig behaviors measured during the detection and tracking process, the behavior patterns of healthy pigs in different activity periods of the day, such as resting, eating, and playing periods, were built. Behavioral abnormalities can be detected if pigs behave differently from the normal patterns in the same activity period. The experiments showed that pig behavior patterns built over 30-minute durations can help detect behavioral abnormalities with over 90% accuracy when applying the activity period-based approach.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_64-Pig_Health_Abnormality_Detection_Based_on_Behavior_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Fuzzy Linear Regression Modeling to Predict High-risk Symptoms of Lung Cancer in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140562</link>
        <id>10.14569/IJACSA.2023.0140562</id>
        <doi>10.14569/IJACSA.2023.0140562</doi>
        <lastModDate>2023-05-30T16:27:33.2900000+00:00</lastModDate>
        
        <creator>Aliya Syaffa Zakaria</creator>
        
        <creator>Muhammad Ammar Shafi</creator>
        
        <creator>Mohd Arif Mohd Zim</creator>
        
        <creator>Siti Noor Asyikin Mohd Razali</creator>
        
        <subject>Lung cancer; high-risk symptom; fuzzy linear regression; H-value; mean square error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Lung cancer is the most prevalent cancer in the world, accounting for 12.2% of all newly diagnosed cases in 2020, and has the highest mortality rate due to its late diagnosis and poor symptom detection. Malaysia recorded 4,319 lung cancer deaths, representing 2.57 percent of all mortality in 2020. The late diagnosis of lung cancer is common, which makes survival more difficult. In Malaysia, most cases are detected when the tumors have become too large, or when cancer has spread to other body areas and cannot be removed surgically. This situation is frequent due to the lack of public awareness among Malaysians regarding cancer-related symptoms. Malaysians must be made aware of the high-risk symptoms of lung cancer to enhance the survival rate and reduce the mortality rate. This study aims to use a fuzzy linear regression model with triangular fuzzy number heights, following Tanaka (1982), with H-values ranging from 0.0 to 1.0, to predict high-risk symptoms of lung cancer in Malaysia. The secondary data are analyzed using the fuzzy linear regression model, with data collected from patients with lung cancer at Al-Sultan Abdullah Hospital (UiTM Hospital), Selangor. The results found that haemoptysis and chest pain have been shown to be the highest-risk among the symptoms obtained from the data analysis. It was discovered that the H-value of 0.0 has the least measurement error, with mean square error (MSE) and root mean square error (RMSE) values of 1.455 and 1.206, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_62-The_Use_of_Fuzzy_Linear_Regression_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence System for Detecting the Use of Personal Protective Equipment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140561</link>
        <id>10.14569/IJACSA.2023.0140561</id>
        <doi>10.14569/IJACSA.2023.0140561</doi>
        <lastModDate>2023-05-30T16:27:33.2730000+00:00</lastModDate>
        
        <creator>Josue Airton Lopez Cabrejos</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Personal protective equipment (PPE); artificial intelligence (AI); YOLO (You Only Look Once); object detection; neural network; custom PPE dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In recent years, occupational accidents have been increasing, and it has been suggested that this increase is related to poor or no supervision of personal protective equipment (PPE) use. This study proposes developing a system capable of identifying the use of PPEs using artificial intelligence through a neural network called YOLO. The results obtained from the development of the system suggest that automatic recognition of PPEs using artificial intelligence is possible with high precision. Gloves are the only critical object whose recognition can give false positives, but this can be addressed with a redundant system that performs two or more consecutive recognitions. This study also involved the preparation of a custom dataset for training the YOLO neural network. The dataset includes images of workers wearing different types of PPEs, such as helmets, gloves, and safety shoes. The system was trained using this dataset and achieved a precision of 98.13% and a recall of 86.78%. The high precision and recall values indicate that the system can accurately identify the use of PPEs in real-world scenarios, which can help prevent occupational accidents and ensure worker safety.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_61-Artificial_Intelligence_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Game Theory Approach for Open Innovation Systems Analysis in Duopolistic Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140559</link>
        <id>10.14569/IJACSA.2023.0140559</id>
        <doi>10.14569/IJACSA.2023.0140559</doi>
        <lastModDate>2023-05-30T16:27:33.2570000+00:00</lastModDate>
        
        <creator>Aziz Elmire</creator>
        
        <creator>Aziz Ait Bassou</creator>
        
        <creator>Mustapha Hlyal</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>Duopoly; open innovation; closed innovation; Cournot model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The approach used in this study involves applying the Cournot model, which is initially based on the analysis of product quantities in the market. Building upon the obtained equilibrium, a second analysis is conducted to examine the impact of the open innovation integration rate, utilizing a dynamic model. The obtained results have demonstrated that multiple equilibria are possible, and under certain conditions, competing firms have a stake in carefully analyzing the integration rate of open innovation.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_59-Game_Theory_Approach_for_Open_Innovation_Systems_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decentralised Access Control Framework using Blockchain: Smart Farming Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140560</link>
        <id>10.14569/IJACSA.2023.0140560</id>
        <doi>10.14569/IJACSA.2023.0140560</doi>
        <lastModDate>2023-05-30T16:27:33.2570000+00:00</lastModDate>
        
        <creator>Normaizeerah Mohd Noor</creator>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Sharifah Nabila S Azli Sham</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Nor Asiakin Hasbullah</creator>
        
        <subject>Access control; role-based access control; attribute-based access control; blockchain technology; internet of things; smart contract; smart farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The convergence of farming with cutting-edge technologies, like the Internet of Things (IoT), has led to the emergence of a smart farming revolution. IoT facilitates the interconnection of numerous devices across different agricultural ecosystems, enabling automation and ultimately enhancing the efficiency and quality of production. However, the implementation of IoT entails an array of potential risks. The accelerated adoption of IoT in the domain of smart farming has amplified existing cybersecurity concerns, specifically those pertaining to access control. In extensive IoT environments that require scalability, conventional centralized access control systems are insufficient. Therefore, to address these gaps, we propose a novel decentralized access control framework. The framework applies blockchain technology as the decentralization approach, with a smart contract application focused on the smart farming scenario, to protect and secure IoT devices from unauthorised access by anomalous entities. The proposed framework adopts attribute-based access control (ABAC) and role-based access control (RBAC) to establish access rules and access permissions for IoT. The framework is validated via simulation to determine the gas consumption cost of executing smart contracts to retrieve attributes, roles, and access rules across three smart contracts, providing a baseline value for future research references. Thus, this paper offers valuable insight into ongoing research on decentralized access control for IoT security to protect and secure IoT resources in the smart farming environment.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_60-Decentralised_Access_Control_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mask R-CNN Approach to Real-Time Lane Detection for Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140558</link>
        <id>10.14569/IJACSA.2023.0140558</id>
        <doi>10.14569/IJACSA.2023.0140558</doi>
        <lastModDate>2023-05-30T16:27:33.2430000+00:00</lastModDate>
        
        <creator>Rustam Abdrakhmanov</creator>
        
        <creator>Madina Elemesova</creator>
        
        <creator>Botagoz Zhussipbek</creator>
        
        <creator>Indira Bainazarova</creator>
        
        <creator>Tursinbay Turymbetov</creator>
        
        <creator>Zhalgas Mendibayev</creator>
        
        <subject>Road; lane; Mask R-CNN; detection; deep learning; autonomous vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The accurate and real-time detection of road lanes is crucial for the safe navigation of autonomous vehicles (AVs). This paper presents a novel approach to lane detection by leveraging the capabilities of the Mask Region-based Convolutional Neural Network (Mask R-CNN) model. Our method adapts Mask R-CNN to specifically address the challenges posed by diverse traffic scenarios and varying environmental conditions. We introduce a robust, efficient, and scalable architecture for lane detection, which segments the lane markings and generates precise boundaries for AVs to follow. We augment the model with a custom dataset, consisting of images collected from different geographical locations, weather conditions, and road types. This comprehensive dataset ensures the model&#39;s generalizability and adaptability to real-world conditions. We also introduce a multi-scale feature extraction technique, which improves the model&#39;s ability to detect lanes in both near and far fields of view. Our proposed method significantly outperforms existing state-of-the-art techniques in terms of accuracy, processing speed, and adaptability. Extensive experiments were conducted on public datasets and our custom dataset to validate the performance of the proposed method. Results demonstrate that our Mask R-CNN-based approach achieves high precision and recall rates, ensuring reliable lane detection even in complex traffic scenarios. Additionally, our model&#39;s real-time processing capabilities make it an ideal solution for implementation in AVs, enabling safer and more efficient navigation on roads.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_58-Mask_R_CNN_Approach_to_Real_Time_Lane_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Bearing Fault Diagnosis via a Multi-Layer Subdomain Adaptation Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140557</link>
        <id>10.14569/IJACSA.2023.0140557</id>
        <doi>10.14569/IJACSA.2023.0140557</doi>
        <lastModDate>2023-05-30T16:27:33.2270000+00:00</lastModDate>
        
        <creator>Nguyen Duc Thuan</creator>
        
        <creator>Nguyen Thi Hue</creator>
        
        <creator>Hoang Si Hong</creator>
        
        <subject>Bearing fault; fault diagnosis; domain adaptation; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Bearings play a crucial role in the functioning of rotating machinery, making it essential to monitor their condition for maintaining system stability and dependability. In recent years, intelligent diagnostic techniques for bearing issues have made significant progress due to advancements in artificial intelligence. These methods rely heavily on data, requiring data collection and labeling to develop the learning model, which is often highly challenging and nearly infeasible in industrial settings. As a result, a domain adaptation-based transfer learning approach has been suggested. This approach aims to minimize the difference between the distribution of accessible data and the unlabeled real-world data, enabling a model trained on public data to function effectively with actual data. In this paper, we introduce a sophisticated subdomain adaptation technique for cross-machine bearing fault diagnosis using vibration, termed multi-layer subdomain adaptation. Verification experiments were conducted, and the findings indicate that the proposed approach offers relatively high accuracy (up to 97.47%) and excellent transferability. Comparative experiments revealed that the proposed method is a superior technique for bearing fault diagnosis, slightly outperforming other methods (by 3-5%) in both predictive capability and noise robustness. Comprehensive validation experiments were conducted using the HUST dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_57-Unsupervised_Bearing_Fault_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Knowledge Based Framework for Cardiovascular Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140556</link>
        <id>10.14569/IJACSA.2023.0140556</id>
        <doi>10.14569/IJACSA.2023.0140556</doi>
        <lastModDate>2023-05-30T16:27:33.2100000+00:00</lastModDate>
        
        <creator>Abha Marathe</creator>
        
        <creator>Virendra Shete</creator>
        
        <creator>Dhananjay Upasani</creator>
        
        <subject>Machine learning; ensemble classifier; cardiovascular disease; performance metrics; classifier techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Cardiovascular disease has become a growing concern in the hectic and stressful life of the modern era. Machine learning techniques are becoming reliable aids to doctors in medical treatment, but ML algorithms are sensitive to data sets. Hence a smart, robust predictive system that can work efficiently on all data sets is essential. The study proposes an ensemble classifier, validating its performance on five different data sets: Cleveland, Hungarian, Long Beach, Statlog, and a combined dataset. The developed model deals with missing values and outliers. The Synthetic Minority Oversampling Technique (SMOTE) was used to resolve the class imbalance issue. In this study, the performance of five individual classifiers – Support Vector Machine Radial (SVM-R), Logistic Regression (LR), Na&#239;ve Bayes (NB), Random Forest (RF), and XGBoost – was compared with five ensemble classifiers on five different data sets. On each data set the top three performers were identified and combined to form an ensemble classifier. Thus, in total 25 experiments were conducted. The results show that, of all the classifiers implemented, the proposed system performs best on all data sets. The performance was validated by 10-fold cross validation. The proposed system gives the highest accuracy and sensitivity of 87% and 86%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_56-A_Knowledge_Based_Framework_for_Cardiovascular_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Consumer Product of Wi-Fi Tracker System using RSSI-based Distance for Indoor Crowd Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140555</link>
        <id>10.14569/IJACSA.2023.0140555</id>
        <doi>10.14569/IJACSA.2023.0140555</doi>
        <lastModDate>2023-05-30T16:27:33.2100000+00:00</lastModDate>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Trio Adiono</creator>
        
        <creator>Prasetiyo</creator>
        
        <creator>Harthian Widhanto</creator>
        
        <creator>Shorful Islam</creator>
        
        <creator>Tri Chandra Pamungkas</creator>
        
        <subject>Wi-Fi tracker system; RSSI-based distance; crowd monitoring; Unscented Kalman Filter; indoor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This study aims to design and develop a Wi-Fi tracker system that utilizes RSSI-based distance parameters for crowd-monitoring applications in indoor settings. The system consists of three main components, namely 1) an embedded node that runs on a Raspberry Pi Zero W, 2) a real-time localization algorithm, and 3) a server system with an online dashboard. The embedded node scans and collects relevant information from Wi-Fi-connected smartphones, such as MAC data, RSSI, and timestamps. These data are then transmitted to the server system, where the localization algorithm passively determines the location of devices (smartphones, tablets, and laptops) as long as Wi-Fi is enabled. The algorithm used is a non-linear system with the Levenberg–Marquardt method and an Unscented Kalman Filter (UKF). The server and online dashboard (a web-based application) have three functions: displaying and recording device localization results, setting parameters, and visualizing analyzed data. The node hardware was designed for minimum size and portability, resulting in a consumer electronics product outlook. A system demonstration was conducted in this study to validate its functionality and performance.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_55-A_Consumer_Product_of_Wi_Fi_Tracker_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Network Intrusion Detection Based on GAN-CNN-BiLSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140554</link>
        <id>10.14569/IJACSA.2023.0140554</id>
        <doi>10.14569/IJACSA.2023.0140554</doi>
        <lastModDate>2023-05-30T16:27:33.1970000+00:00</lastModDate>
        
        <creator>Shuangyuan Li</creator>
        
        <creator>Qichang Li</creator>
        
        <creator>Mengfan Li</creator>
        
        <subject>Intrusion detection; GAN; CNN; BiLSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>As network attacks become increasingly frequent and network security threats increasingly serious, it is important to detect network intrusions accurately and efficiently. With the continuous development of deep learning, many research achievements have been applied to intrusion detection. Deep learning is more accurate than traditional machine learning, but when learning from large amounts of data its performance degrades due to data imbalance. In view of the serious imbalance in current network traffic data sets, this paper proposes using a GAN for data augmentation to address data imbalance, combined with a CNN and BiLSTM to detect network intrusions. To verify the efficiency of the model, the CIC-IDS 2017 data set is used for evaluation, and the model is compared with machine learning methods such as Random Forest and Decision Tree. The experiments show that the performance of this model is significantly improved over traditional models: the GAN-CNN-BiLSTM model improves the efficiency of intrusion detection, and its overall accuracy is higher than that of SVM, DBN, CNN, BiLSTM, and other models.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_54-A_Method_for_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deadline-aware Task Scheduling for Cloud Computing using Firefly Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140553</link>
        <id>10.14569/IJACSA.2023.0140553</id>
        <doi>10.14569/IJACSA.2023.0140553</doi>
        <lastModDate>2023-05-30T16:27:33.1800000+00:00</lastModDate>
        
        <creator>BAI Ya-meng</creator>
        
        <creator>WANG Yang</creator>
        
        <creator>WU Shen-shen</creator>
        
        <subject>Cloud computing; energy efficiency; task scheduling; firefly algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Task scheduling poses a major challenge for cloud computing environments: it must ensure cost-effective task execution and improved resource utilization. Because task scheduling is classified as an NP-hard problem, researchers are motivated to employ meta-heuristic algorithms. The growing number of cloud users and computing capabilities is leading to increased concerns about energy consumption in cloud data centers. In order to leverage cloud resources in the most energy-efficient manner while delivering real-time services to users, a viable cloud task scheduling solution is necessary. This study proposes a new deadline-aware task scheduling algorithm for cloud environments based on the Firefly Optimization Algorithm (FOA). The suggested scheduling algorithm achieves a higher level of efficiency on multiple parameters, including execution time, waiting time, resource utilization, the percentage of missed tasks, power consumption, and makespan. According to simulation results, the proposed algorithm is more effective than, and superior to, the CSO algorithm under the HP2CN and NASA workload archives.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_53-Deadline_aware_Task_Scheduling_for_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Analysis on the Effectiveness of Covert Channel Data Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140551</link>
        <id>10.14569/IJACSA.2023.0140551</id>
        <doi>10.14569/IJACSA.2023.0140551</doi>
        <lastModDate>2023-05-30T16:27:33.1630000+00:00</lastModDate>
        
        <creator>Abdulrahman Alhelal</creator>
        
        <creator>Mohammed Al-Khatib</creator>
        
        <subject>Covert channel; transmission; limitations; file; encoding throughput; performance; time; audio; video; text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>A covert channel is a communication channel that allows parties to communicate and transfer data indirectly. Covert channel types include storage, timing, and behavior channels. Covert channels can be used for both malicious and benign applications: a covert channel is a mechanism for violating the communication security policy that was not anticipated by the system creator. Recently, covert channels have been used to transfer text, video, and audio information between entities. This article discusses studies related to the development of covert channels, as well as research works that focus on improving the performance and throughput of covert channels. It also analyzes previous studies in terms of publication type, year of publication, article title, article purpose, the file format transferred over the covert channel, coding technique, throughput performance, time needed to transfer files, and article limitations.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_51-Systematic_Analysis_on_the_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Analysis of Biblical Entities and Names using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140552</link>
        <id>10.14569/IJACSA.2023.0140552</id>
        <doi>10.14569/IJACSA.2023.0140552</doi>
        <lastModDate>2023-05-30T16:27:33.1630000+00:00</lastModDate>
        
        <creator>Mikolaj Martinjak</creator>
        
        <creator>Davor Lauc</creator>
        
        <creator>Ines Skelac</creator>
        
        <subject>Bible; deep learning; gospel of mark; natural language processing; social network analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Scholars from various fields have studied the translations of the Bible in different languages to understand the changes that have occurred over time. Taking into account recent advances in deep learning, there is an opportunity to improve the understanding of these texts and conduct analyses that were previously unattainable. This study used deep learning techniques of NLP to analyze the distribution and appearance of names in the Polish, Croatian, and English translations of the Gospel of Mark. Within the scope of social network analysis (SNA), various centrality metrics were used to determine the importance of different entities (names) within the gospel. Degree Centrality, Closeness Centrality, and Betweenness Centrality were leveraged, given their capacity to provide unique insights into the network structure. The findings of this study demonstrate that deep learning could help uncover interesting connections between individuals who may have initially been considered less important. It also highlighted the critical role of onomastic sciences and the philosophy of language in analyzing the richness and importance of human and other proper names in biblical texts. Further research should be conducted to produce more relevant language resources, improve parallel multilingual corpora and annotated data sets for the major languages of the Bible, and develop an accurate end-to-end deep neural model that facilitates joint entity recognition and resolution.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_52-Towards_Analysis_of_Biblical_Entities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Drug Response on Multi-Omics Data Using a Hybrid of Bayesian Ridge Regression with Deep Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140550</link>
        <id>10.14569/IJACSA.2023.0140550</id>
        <doi>10.14569/IJACSA.2023.0140550</doi>
        <lastModDate>2023-05-30T16:27:33.1500000+00:00</lastModDate>
        
        <creator>Talal Almutiri</creator>
        
        <creator>Khalid Alomar</creator>
        
        <creator>Nofe Alganmi</creator>
        
        <subject>Bayesian ridge regression; deep forest; deep learning; drug response prediction; machine learning; multi-omics data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Accurate drug response prediction for each patient is critical in personalized medicine. However, numerous studies that relied on single-omics datasets continue to have limitations. In addition, the curse of dimensionality is considered a challenge for drug response prediction. Deep learning has remarkable predictive effectiveness compared to traditional machine learning, but it requires enormous amounts of training data, which is a limitation because most biological data are small-scale. This paper presents an approach that combines Bayesian Ridge Regression (BRR) with Deep Forest (DF). BRR relies on the Bayesian approach, in which linear model estimation is based on probability distributions rather than point estimates; it was utilized to integrate multi-omics data, serving as a feature selection method that uses the coefficients as feature importances. DF reduces the computational cost and hyper-parameter tuning cost. The Cancer Cell Line Encyclopedia (CCLE) was used as the dataset to integrate gene expression, copy number variants, and single nucleotide variants. Root Mean Square Error (RMSE), Pearson Correlation Coefficient (PCC), and the coefficient of determination (R2) were used as the evaluation metrics. The obtained findings show that the proposed model outperforms Random Forest and Convolutional Neural Network in regression performance, achieving 0.175 for RMSE, 0.842 for PCC, and 0.708 for R2.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_50-Predicting_Drug_Response_on_Multi_Omics_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation of Asthma Experiences in Arabic Communities Through Twitter Discourse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140549</link>
        <id>10.14569/IJACSA.2023.0140549</id>
        <doi>10.14569/IJACSA.2023.0140549</doi>
        <lastModDate>2023-05-30T16:27:33.1330000+00:00</lastModDate>
        
        <creator>Mohammed Alotaibi</creator>
        
        <creator>Ahmed Omar</creator>
        
        <subject>Asthma; twitter; semantic analysis; LDA; Arab; communities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Artificial intelligence technologies can effectively analyze public opinion from social-media platforms like Twitter. This study aims to employ AI technology and big data to explore and discuss the common issues of asthma that patients in Arabic communities share on the Twitter platform. The data were acquired using the Twitter API version 2. Latent Dirichlet Allocation was used to group the data into two clusters: one providing information and tips about the treatment and prevention of asthma, and one containing personal experiences with asthma, including symptoms, diagnosis, and the negative impact of asthma on quality of life. Sentiment analysis and data frequency distribution techniques were used to analyze the data in both clusters. The analysis of the first cluster indicated that individuals are interested in learning about different ways to treat asthma and potentially finding a permanent solution. The analysis of the second cluster indicated the existence of negative sentiments about asthma, which also included religious expressions for improving the condition. The study also discusses the differences in expressions between Arabic communities and other communities.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_49-An_Investigation_of_Asthma_Experiences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a New Lightweight Encryption Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140548</link>
        <id>10.14569/IJACSA.2023.0140548</id>
        <doi>10.14569/IJACSA.2023.0140548</doi>
        <lastModDate>2023-05-30T16:27:33.1170000+00:00</lastModDate>
        
        <creator>Ardabek Khompysh</creator>
        
        <creator>Nursulu Kapalova</creator>
        
        <creator>Oleg Lizunov</creator>
        
        <creator>Dilmukhanbet Dyusenbayev</creator>
        
        <creator>Kairat Sakan</creator>
        
        <subject>Lightweight block cipher; S-box; linear transformation; avalanche effect; IoT devices; RFID tags; null hypothesis; NIST tests; D. Knuth tests</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>As devices with low hardware resources are increasingly used in everyday life, their susceptibility to various cyber-attacks increases. In this regard, one of the methods to ensure the security of information circulating in these devices is encryption. For devices with small hardware resources, the most applicable approach is low-resource (lightweight) cryptography. This article introduces a new lightweight encryption algorithm, ISL-LWS (Information Security Laboratory – lightweight system), designed to protect data on resource-constrained devices. The encryption algorithm is implemented in the C++ programming language. The paper presents the statistical properties of ciphertexts obtained using the developed algorithm. For experimental testing of statistical security, the statistical test suites of NIST and D. Knuth were used. Separately, the ISL-LWS algorithm was tested for avalanche effect properties. The obtained statistical test results were compared with those of the modern lightweight algorithms Present and Speck. A study and comparative analysis of the encryption and key generation speeds of the three algorithms were carried out on the Arduino Uno R3 board.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_48-Development_of_a_New_Lightweight_Encryption_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Lung Nodules in Computerized Tomography Lung Images using a Hybrid Method with Class Imbalance Reduction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140546</link>
        <id>10.14569/IJACSA.2023.0140546</id>
        <doi>10.14569/IJACSA.2023.0140546</doi>
        <lastModDate>2023-05-30T16:27:33.1030000+00:00</lastModDate>
        
        <creator>Yingqiang Wang</creator>
        
        <creator>Honggang Wang</creator>
        
        <creator>Erqiang Dong</creator>
        
        <subject>Lung cancer; artificial neural network; fuzzy c-means clustering method; reinforcement learning; restricted boltzmann machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Lung cancer is among the deadly diseases affecting millions globally every year. Manual detection of lung nodules by physicians and radiologists has low efficiency due to the variety of nodule shapes and locations. This paper aims to recognize lung nodules in computerized tomography (CT) lung images using a hybrid method that reduces the problem space at every step. First, the suggested method uses the fast and robust fuzzy c-means clustering (FRFCM) algorithm to segment CT images and extract the two lungs, followed by a convolutional neural network (CNN) to identify the sick lung for use in the next step. Then, an adaptive thresholding method detects suspected regions of interest (ROIs) among all available objects in the sick lung. Next, statistical features are selected from every ROI, and a restricted Boltzmann machine (RBM) is used as a feature extractor that extracts rich features from the selected features. After that, an artificial neural network (ANN) is employed to classify ROIs and determine whether each ROI contains a nodule. Finally, cancerous ROIs are localized by the Otsu thresholding algorithm. Naturally, sick ROIs outnumber healthy ones, leading to a class imbalance that substantially degrades the ANN&#39;s performance. To solve this problem, a reinforcement learning (RL)-based algorithm is used in which the states are sampled, and the agent receives a larger reward or penalty for correct or incorrect classification of examples from the minority class. The proposed model is compared with state-of-the-art methods on the lung image database consortium image collection (LIDC-IDRI) dataset using standard performance metrics. The results of the experiments demonstrate that the proposed model outperforms its rivals.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_46-Recognition_of_Lung_Nodules_in_Computerized_Tomography_Lung_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Theoretical Framework for Creating Folk Dance Motion Templates using Motion Capture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140547</link>
        <id>10.14569/IJACSA.2023.0140547</id>
        <doi>10.14569/IJACSA.2023.0140547</doi>
        <lastModDate>2023-05-30T16:27:33.1030000+00:00</lastModDate>
        
        <creator>Amir Irfan Mazian</creator>
        
        <creator>Wan Rizhan</creator>
        
        <creator>Normala Rahim</creator>
        
        <creator>Azrul Amri Jamal</creator>
        
        <creator>Ismahafezi Ismail</creator>
        
        <creator>Syed Abdullah Fadzli</creator>
        
        <subject>Motion capture; folk dance; motion template</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Folk dance (FD) is a type of traditional dance that has been handed down through a culture or group from generation to generation. It is crucial to preserve and safeguard this type of cultural legacy since it can reflect the history and identity of particular nations. However, the survival of FDs is increasingly threatened by ineffective preservation and conservation techniques. Their extinction may be caused by ignorance about, and disregard for, preservation and conservation efforts. The most efficient method for digitizing intangible cultural property, including FDs, is motion capture (MoCap). MoCap enables the conversion of real-time movement into digital performance to produce motion templates. This paper aims to provide suggestions and guidelines for conducting research to generate motion templates of FDs. Several key approaches are presented and discussed in detail, including acquaintance meetings, procedures and approval, interviews and experiments, and the framework. The proposed framework includes models for MoCap, skeleton generation, skeleton refinement, and evaluation. By implementing the proposed framework, motion templates for FDs can be created. The generated motion templates will preserve and conserve FDs and guarantee their originality and authenticity.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_47-A_Theoretical_Framework_for_Creating_Folk_Dance_Motion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Management Model for the Generation of Innovative Capacities in Organizations that Provide Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140545</link>
        <id>10.14569/IJACSA.2023.0140545</id>
        <doi>10.14569/IJACSA.2023.0140545</doi>
        <lastModDate>2023-05-30T16:27:33.0870000+00:00</lastModDate>
        
        <creator>Cristhian Ronceros</creator>
        
        <creator>Jos&#233; Medina</creator>
        
        <creator>Pedro Le&#243;n</creator>
        
        <creator>Alfredo Mendieta</creator>
        
        <creator>Jos&#233; Fern&#225;ndez</creator>
        
        <creator>Yuselys Martinez</creator>
        
        <subject>Component; model; knowledge management; intellectual capital; information and communication technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The research was oriented to the development of a knowledge management model for the generation of innovative capacities in organizations that provide services. A systematic review of articles published in the Scopus, IEEE Xplore and Google Scholar databases was carried out, from which 67 articles and 24 models were selected and subsequently analyzed based on their theoretical foundation, the strategies used for the generation and dissemination of knowledge, the incorporation of organizational culture, and the use of Information and Communication Technology (ICT) in the generation and dissemination of knowledge. The proposed model, unlike the models evaluated, is oriented towards generating added value with a new strategic approach structured in the knowledge management and organizational memory macro-processes, which are divided into 29 and 11 macro-activities respectively. These macro-activities incorporate the organizational culture and guide the organization to improve its functions by incorporating innovation and the use of ICT in all processes of the organization and in each stage of the generation and management of knowledge, establishing the essential parameters for the generation of innovative capacities, the generation of knowledge, intellectual capital, and the transfer of information to knowledge that can be used within the organization. The proposed model is also aimed at directly strengthening interpersonal relationships between members of the organization and between them and their clients. In the same way, it incorporates a maturity model made up of five levels to measure the state of the organization in relation to knowledge management.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_45-Knowledge_Management_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evolutive Knowledge Base for “AskBot” Toward Inclusive and Smart Learning-based NLP Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140544</link>
        <id>10.14569/IJACSA.2023.0140544</id>
        <doi>10.14569/IJACSA.2023.0140544</doi>
        <lastModDate>2023-05-30T16:27:33.0700000+00:00</lastModDate>
        
        <creator>Khadija El Azhari</creator>
        
        <creator>Imane Hilal</creator>
        
        <creator>Najima Daoudi</creator>
        
        <creator>Rachida Ajhoun</creator>
        
        <creator>Ikram Belgas</creator>
        
        <subject>Knowledge base; KB; artificial intelligence; AI; chatbot; e-learning; cosine similarity; soft cosine similarity; TF-IDF; FastText; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Artificial Intelligence chatbots have attracted growing interest in different domains, including e-learning. They support learners by answering their repetitive and massive questions. In this paper, we develop a smart learning architecture for an inclusive chatbot handling both text and voice messages, so that disabled learners can easily use it. We automatically extract, preprocess, vectorize, and construct AskBot&#39;s Knowledge Base. The present work evaluates various vectorization techniques with similarity measures to answer learners&#39; questions. The proposed architecture handles both Wh-Questions, starting with Wh words, and Non-Wh-Questions, beginning with unpredictable words. Regarding Wh-Questions, we develop a neural network model to classify intents. Our results show that the model&#39;s accuracy and F1-Score are equal to 99.5% and 97%, respectively. With a similarity score of 0.6, our findings indicate that TF-IDF performed well, correctly answering 90% of the tested Wh-Questions. Concerning Non-Wh-Questions, the soft cosine measure with FastText successfully answered 72% of them.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_44-An_Evolutive_Knowledge_Base_for_AskBot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hate Speech Detection in Social Networks using Machine Learning and Deep Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140542</link>
        <id>10.14569/IJACSA.2023.0140542</id>
        <doi>10.14569/IJACSA.2023.0140542</doi>
        <lastModDate>2023-05-30T16:27:33.0530000+00:00</lastModDate>
        
        <creator>Aigerim Toktarova</creator>
        
        <creator>Dariga Syrlybay</creator>
        
        <creator>Bayan Myrzakhmetova</creator>
        
        <creator>Gulzat Anuarbekova</creator>
        
        <creator>Gulbarshin Rakhimbayeva</creator>
        
        <creator>Balkiya Zhylanbaeva</creator>
        
        <creator>Nabat Suieuova</creator>
        
        <creator>Mukhtar Kerimbekov</creator>
        
        <subject>Machine learning; deep learning; hate speech; social network; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Hate speech on social media platforms like Twitter is a growing concern that poses challenges to maintaining a healthy online environment and fostering constructive communication. Effective detection and monitoring of hate speech are crucial for mitigating its adverse impact on individuals and communities. In this paper, we propose a comprehensive approach for hate speech detection on Twitter using both traditional machine learning and deep learning techniques. Our research encompasses a thorough comparison of these techniques to determine their effectiveness in identifying hate speech on Twitter. We construct a robust dataset, gathered from diverse sources and annotated by experts, to ensure the reliability of our models. The dataset consists of tweets labeled as hate speech, offensive language, or neutral, providing a more nuanced representation of online discourse. We evaluate the performance of LSTM, BiLSTM, and CNN models against traditional shallow learning methods to establish a baseline for comparison. Our findings reveal that deep learning techniques outperform shallow learning methods, with BiLSTM emerging as the most accurate model for hate speech detection. The BiLSTM model demonstrates improved sensitivity to context, semantic nuances, and sequential patterns in tweets, making it adept at capturing the intricate nature of hate speech. Furthermore, we explore the integration of word embeddings, such as Word2Vec and GloVe, to enhance the performance of our models. The incorporation of these embeddings significantly improves the models&#39; ability to discern between hate speech and other forms of online communication. This paper presents a comprehensive analysis of various machine learning methods for hate speech detection on Twitter, ultimately demonstrating the superiority of deep learning techniques, particularly BiLSTM, in addressing this critical issue. Our findings pave the way for further research into advanced methods of tackling hate speech and facilitating healthier online interactions.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_42-Hate_Speech_Detection_in_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Arabic Root Extraction Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140543</link>
        <id>10.14569/IJACSA.2023.0140543</id>
        <doi>10.14569/IJACSA.2023.0140543</doi>
        <lastModDate>2023-05-30T16:27:33.0530000+00:00</lastModDate>
        
        <creator>Nisrean Jaber Thalji</creator>
        
        <creator>Emran Aljarrah</creator>
        
        <creator>Roqia Rateb</creator>
        
        <creator>Amaal Rateb Mohammad Al-Shorman</creator>
        
        <subject>Natural language processing; Arabic root extraction algorithm; Arabic applications; Arabic morphology; Text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This research presents a new algorithm for Arabic root extraction, which aims to improve the accuracy of Arabic Natural Language Processing algorithms by addressing the weaknesses and errors of existing algorithms. The proposed algorithm utilizes a database that includes a collection of roots, patterns, and affixes to generate potential derivation roots of a word without eliminating affixes initially. By matching the derived word with patterns to identify potential roots, the proposed algorithm avoids the inaccuracies caused by eliminating affixes based on expectation methods. The study conducted a comparison of the proposed algorithm with three commonly used Arabic root extraction algorithms. The evaluation was performed on three corpora. Results showed that the proposed algorithm achieved an average accuracy rate of 96%, which is significantly higher than the others.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_43-New_Arabic_Root_Extraction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Secure Federated Learning for Event Detection in Big Data using Blockchain Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140541</link>
        <id>10.14569/IJACSA.2023.0140541</id>
        <doi>10.14569/IJACSA.2023.0140541</doi>
        <lastModDate>2023-05-30T16:27:33.0400000+00:00</lastModDate>
        
        <creator>K. Prasanna Lakshmi</creator>
        
        <creator>K. Swapnika</creator>
        
        <subject>Blockchain; cloud storage; decryption; encryption; federated learning; hashing; homomorphism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Currently, cloud storage combining blockchain and federated learning technology provides better security for data transmission and file access. However, security issues still arise in some cases. To avoid these problems and offer better protection in a cloud environment, a novel optimized buffalo-based Homomorphic SHA blockchain (OBHSB) is proposed. In this model, cloud storage data are accessed through a key-matching method: if an unauthenticated user attempts to access a file, the system first checks the key-matching parameter. The proposed model was developed to provide better security for big data stored in the cloud environment. The parameters of the proposed model were compared with those of existing models to confirm that better performance was attained. An attack was considered as an event in this research. In the performance analysis, the performance rate of the proposed model was validated. Subsequently, a case study was developed to explain the working procedure of the proposed design; the model performs hashing, encryption, decryption, and key-matching mechanisms. The results show that the proposed model achieves a 100% confidentiality rate after an attack.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_41-Optimized_Secure_Federated_Learning_for_Event_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining GAN and LSTM Models for 3D Reconstruction of Lung Tumors from CT Scans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140540</link>
        <id>10.14569/IJACSA.2023.0140540</id>
        <doi>10.14569/IJACSA.2023.0140540</doi>
        <lastModDate>2023-05-30T16:27:33.0230000+00:00</lastModDate>
        
        <creator>Cong Gu</creator>
        
        <creator>Hongling Gao</creator>
        
        <subject>3D tumor reconstruction; lung cancer; LSTM; Generative adversarial network; ResNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Generating a three-dimensional (3D) reconstruction of tumors is an efficient technique for obtaining accurate and highly detailed visualization of tumor structures. To create a 3D tumor model, a collection of 2D imaging data is required, including images from CT imaging. Generative adversarial networks (GANs) offer a method to learn helpful representations without extensively annotating the training dataset. The article proposes a technique for creating a 3D model of lung tumors from CT scans using a combination of GAN and LSTM models, with support from ResNet as a feature extractor for the 2D images. The model presented in this article involves three steps, starting with the segmentation of the lung, then the segmentation of the tumor, and concluding with the creation of a 3D reconstruction of the lung tumor. The segmentation of the lung and tumor is conducted utilizing snake optimization and the Gustafson–Kessel (GK) method. To prepare the 3D reconstruction component for training, a pre-trained ResNet model is utilized to capture characteristics from 2D lung tumor images. Subsequently, the series of extracted characteristics is fed into an LSTM network to generate compressed features as the final output. Ultimately, the condensed feature is utilized as input for the GAN framework, in which the generator is accountable for generating a sophisticated 3D lung tumor image. Simultaneously, the discriminator evaluates whether the 3D lung tumor image produced by the generator is authentic or synthetic. This model is the first attempt to utilize a GAN as a means of reconstructing 3D lung tumors. The suggested model is evaluated against traditional approaches using the LUNA dataset and standard evaluation metrics. The empirical findings suggest that the suggested approach shows a sufficient level of performance in comparison with other methods pursuing the same objective.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_40-Combining_GAN_and_LSTM_Models_for_3D_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing A Predictive Model for Selecting Academic Track Via GPA by using Classification Algorithms: Saudi Universities as Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140539</link>
        <id>10.14569/IJACSA.2023.0140539</id>
        <doi>10.14569/IJACSA.2023.0140539</doi>
        <lastModDate>2023-05-30T16:27:33.0070000+00:00</lastModDate>
        
        <creator>Thamer Althubiti</creator>
        
        <creator>Tarig M. Ahmed</creator>
        
        <creator>Madini O. Alassafi</creator>
        
        <subject>Data mining; educational data mining; classification algorithms; logistic regression; neural networks; gradient boosting; k-nearest neighbors; predicting students’ performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The main motivation of any educational institution is to provide quality education. Choosing an academic track can clearly be an obstacle for both students and universities, which in turn led to imposing a mandatory preparatory year program in Saudi Arabia. One of the main objectives of the preparatory year is to help students discover the right academic track. Nevertheless, some students choose the wrong academic track, which can be a stumbling block that may prevent their progress. Given the tremendous growth in the use of information technology, educational data mining (EDM) technology can be applied to discover useful patterns, unlike traditional data analysis methods. Most previous research focused on predicting the GPA after students choose an academic track. In contrast, our research focuses on using classification algorithms to develop a predictive model for advising students on selecting academic tracks via prediction of the GPA based on preparatory year data at Saudi Universities. The classification algorithms were then compared to identify the most accurate prediction. The dataset was extracted from a Saudi university and contains preparatory year data for 2363 students. This work was carried out using five classification algorithms: Gradient Boosting (GB), K-Nearest Neighbors (kNN), Logistic Regression (LG), Neural Network (NN) and Random Forest (RF). The results showed the superiority of the Logistic Regression algorithm in terms of accuracy over the other algorithms. Future work could add behavioral characteristics of students and use other algorithms to provide better accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_39-Developing_a_Predictive_Model_for_Selecting_Academic_Track.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PM2.5 Estimation using Machine Learning Models and Satellite Data: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140538</link>
        <id>10.14569/IJACSA.2023.0140538</id>
        <doi>10.14569/IJACSA.2023.0140538</doi>
        <lastModDate>2023-05-30T16:27:32.9930000+00:00</lastModDate>
        
        <creator>Mitra Unik</creator>
        
        <creator>Imas Sukaesih Sitanggang</creator>
        
        <creator>Lailan Syaufina</creator>
        
        <creator>I Nengah Surati Jaya</creator>
        
        <subject>AOD; machine learning; PM2.5; remote sensing; pollutant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Most researchers are beginning to appreciate the use of remote sensing satellites to assess PM2.5 levels and use machine learning algorithms to automate data collection, make sense of remote sensing data, and extract previously unseen data patterns. This study reviews fine particulate matter (PM2.5) predictions from satellite aerosol optical depth (AOD) and machine learning. Specifically, we review the characteristics and gap-filling methods of satellite-based AOD products, sources and components of PM2.5, observable AOD products, data mining, and the application of machine learning algorithms in publications of the past two years. The study also includes functional considerations and recommendations on covariate selection, addressing the spatiotemporal heterogeneity of the PM2.5-AOD relationship, and the use of cross-validation, to aid in determining the final model. A total of 79 articles were included out of 112 retrieved records: 43 articles published in 2022, 19 published in 2023 (through February), and 18 from other years. Finally, the latest methods work well for monthly PM2.5 estimates, while daily and hourly PM2.5 estimates can also be achieved. This is due to the increased availability and computing power of large datasets and increased awareness of the potential benefits of predictors working together to achieve higher estimation accuracy. Some key findings are also presented in the conclusion section of this article.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_38-PM2_5_Estimation_using_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Automated Visual Inspection System for Printed Circuit Boards Missing Footprints Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140537</link>
        <id>10.14569/IJACSA.2023.0140537</id>
        <doi>10.14569/IJACSA.2023.0140537</doi>
        <lastModDate>2023-05-30T16:27:32.9930000+00:00</lastModDate>
        
        <creator>Xiaoda Cao</creator>
        
        <subject>Automated visual inspection; Printed Circuit Boards (PCB); quality control; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Visual inspection systems (VIS) are vital for recognizing and assessing parts in mass-produced products on fabrication lines. In the past, product inspection was carried out manually, which made finding defects tedious, slow, and prone to error. VIS is a strategy to shorten processing times, boost product quality, and increase manufacturing competitiveness. A visual inspection framework is required for detecting missing components on bare printed circuit boards. The inspection task has become more challenging to accomplish at the required quality due to the more compact and complex surfaces of structured electronic components. This study proposes a real-time visual inspection system to detect missing footprints on Printed Circuit Boards (PCB). The system is composed of hardware and software frameworks. The main contribution of this study is the proposed software framework, which consists of component region analysis and missing-component detection using image processing, cross-correlation, and production rules. Experimental results show the viability and feasibility of the proposed system for PCB missing component detection.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_37-A_Real_Time_Automated_Visual_Inspection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Design of Bridge Inspection Vehicle Boom Structure Based on Improved Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140536</link>
        <id>10.14569/IJACSA.2023.0140536</id>
        <doi>10.14569/IJACSA.2023.0140536</doi>
        <lastModDate>2023-05-30T16:27:32.9770000+00:00</lastModDate>
        
        <creator>Ruihua Xue</creator>
        
        <creator>Shuo Lv</creator>
        
        <creator>Tingqi Qiu</creator>
        
        <subject>Genetic algorithm; Bridge inspection; Structural optimization; Finite element model; BP neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Excessive self-weight of bridge inspection vehicles increases the safety risk to the inspected bridge structures. In this study, a self-weight optimization design model for the bridge inspection vehicle arm structure is proposed to improve the efficiency and safety of bridge structure inspection. The model uses a finite element model of the arm structure to generate force data to train and validate a back propagation (BP) neural network-based self-weight prediction model of the arm structure, and uses an improved genetic algorithm to assist the prediction model in searching for the optimal solution. The experimental results show that the maximum stress and maximum deformation of the optimal solution from the proposed optimization model are lower than the allowable values of the material, and the total structural weight of the optimal solution is the lowest, at 4687.5 kg. The computational time of the proposed optimization model is lower than that of all the comparison models. The experimental data show that the proposed self-weight optimization model for the bridge inspection vehicle arm structure achieves a good optimization effect and has application potential.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_36-Optimization_Design_of_Bridge_Inspection_Vehicle_Boom_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Usability of Digital Game-based Learning for Low Carbon Awareness: Heuristic Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140535</link>
        <id>10.14569/IJACSA.2023.0140535</id>
        <doi>10.14569/IJACSA.2023.0140535</doi>
        <lastModDate>2023-05-30T16:27:32.9600000+00:00</lastModDate>
        
        <creator>Nur Fadhilah Abdul Jalil</creator>
        
        <creator>Umi Azmah Hasran</creator>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Muhammad Helmi Norman</creator>
        
        <subject>Heuristic evaluation; digital game-based learning; usability; low carbon awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Digital Game-based Learning (DGBL), which attracts many practitioners to engage students in promoting low carbon awareness, has been understudied. The evaluation phase plays a crucial part in determining the usability of the learning material. This study aims to identify the usability of DGBL, which consists of four components: game usability (GU), mobility (MO), playability (P), and learning contents (LC), from the perspective of targeted end-users using heuristic evaluation. This study also provides recommendations to help novice designers or practitioners improve the quality of any related DGBL. A DGBL prototype was developed to promote low carbon awareness through learning about fuel cells. The study was designed in two phases: (1) developing the heuristic evaluation instrument validated by experts and (2) playtesting to identify usability issues in DGBL via heuristic evaluation by targeted end-users, namely forty-six selected students aged fourteen. The evaluation shows that the DGBL prototype developed for fuel cell learning succeeded in achieving its learning objectives while promoting low carbon awareness.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_35-The_Usability_of_Digital_Game_based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Contribution of Numerical EEG Analysis for the Study and Understanding of Addictions with Substances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140534</link>
        <id>10.14569/IJACSA.2023.0140534</id>
        <doi>10.14569/IJACSA.2023.0140534</doi>
        <lastModDate>2023-05-30T16:27:32.9600000+00:00</lastModDate>
        
        <creator>Aziz Mengad</creator>
        
        <creator>Jamal Dirkaoui</creator>
        
        <creator>Merouane Ertel</creator>
        
        <creator>Meryem Chakkouch</creator>
        
        <creator>Fatima Elomari</creator>
        
        <subject>Electroencephalography (EEG); quantitative electroencephalography; drug addiction; spectral analysis; coherence analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Computerised electroencephalography (EEG) is one of a wide variety of brain imaging techniques used in addiction medicine. It is a sensitive measure of the effects of addiction on the brain and has been shown to reveal changes in brain electrical activity during addiction. However, the clinical value of computerised EEG recording in addictions is not yet clearly established. Several studies nonetheless argue that this non-invasive technique makes an undeniable contribution to the understanding, prediction, diagnosis and monitoring of addictions. The aim of this article is to assess, through a systematic review, the contribution and interest of computerised EEG in the study and understanding of substance abuse by describing the electrical activities that underlie it across the main frequency ranges: delta, theta, alpha, beta and gamma. We conducted a systematic review according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Cochrane Group. We included 25 studies with a total of 1897 cases of addiction and 1504 controls. The studies dealt with addictions related to five licit and illicit psychoactive substances (alcohol, nicotine, cannabis, heroin and cocaine). The group of addicted patients showed brain electrical characteristics significantly different from those of the control group across the various EEG rhythms, whether during acute substance intoxication, abuse, withdrawal, abstinence, relapse, progression or response to treatment. The majority of studies used EEG for diagnostic, predictive and monitoring purposes, and also to discover electro-physiological markers of certain addictions.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_34-The_Contribution_of_Numerical_EEG_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proactive Acquisition using Bot on Discord</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140533</link>
        <id>10.14569/IJACSA.2023.0140533</id>
        <doi>10.14569/IJACSA.2023.0140533</doi>
        <lastModDate>2023-05-30T16:27:32.9470000+00:00</lastModDate>
        
        <creator>Niken Dwi Wahyu Cahyani</creator>
        
        <creator>Daffa Syifa Pratama</creator>
        
        <creator>Nurul Hidayah Ab Rahman</creator>
        
        <subject>Discord; bot; cybercrime; social networks API; digital evidence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Data deletion increases the challenges of cybercrime investigation. To address this problem, proactive forensics for evidence collection is acknowledged to help investigators acquire potentially needed digital evidence. This study proposes a bot that records data from a Discord server in advance, hashing and saving it in proper storage for further forensic analysis. The recording process can be configured to collect activities and their related data (intact, modified, deleted), including text, pictures, videos, and audio. The Discord bot is designed using the main features of the Discord Social Networks Application Programming Interface (API). This paper examines the applicability of this approach by embedding the bot in a Discord server. Observation showed that the bot records real-time data, as it is always alive on the server, including deleted or modified messages and their timestamps. All recorded data is saved locally in the server’s storage in easy-to-read formats, CSV and JSON. The results showed that the bot could conduct data acquisition for 37 concurrent users with a 2.3% error rate and 97.7% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_33-Proactive_Acquisition_Using_Bot_on_Discord.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Machine Learning Techniques to Predict Types of Breast Cancer Recurrence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140531</link>
        <id>10.14569/IJACSA.2023.0140531</id>
        <doi>10.14569/IJACSA.2023.0140531</doi>
        <lastModDate>2023-05-30T16:27:32.9300000+00:00</lastModDate>
        
        <creator>Meryem Chakkouch</creator>
        
        <creator>Merouane Ertel</creator>
        
        <creator>Aziz Mengad</creator>
        
        <creator>Said Amali</creator>
        
        <subject>Breast cancer; machine learning; recurrence prediction; multi-class classification; logistic regression; decision tree; K-Nearest Neighbors; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The prediction of breast cancer recurrence is a crucial problem in cancer research that requires accurate and efficient prediction models. This study compares the performance of different machine learning techniques in predicting types of breast cancer recurrence. The performance of logistic regression, decision tree, K-Nearest Neighbors, and artificial neural network algorithms was compared on a breast cancer recurrence dataset. The results show that the artificial neural network algorithm outperformed the others with 91% accuracy; the decision tree (DT) and K-Nearest Neighbors (kNN) algorithms also performed well, with accuracies of 90.10% and 88.20%, respectively, while logistic regression had the lowest accuracy at 84.60%. These results provide insight into the effectiveness of different machine learning techniques in predicting types of breast cancer recurrence and could guide the development of more accurate prediction models.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_31-A_Comparative_Study_of_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Decision Making ResNet Feed-Forward Neural Network based Methodology for Diabetic Retinopathy Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140532</link>
        <id>10.14569/IJACSA.2023.0140532</id>
        <doi>10.14569/IJACSA.2023.0140532</doi>
        <lastModDate>2023-05-30T16:27:32.9300000+00:00</lastModDate>
        
        <creator>A. Aruna Kumari</creator>
        
        <creator>Avinash Bhagat</creator>
        
        <creator>Santosh Kumar Henge</creator>
        
        <creator>Sanjeev Kumar Mandal</creator>
        
        <subject>Retinal lesion (RL); Fundus Images (FunImg); Microaneurysms (MAs); Principal Component Analysis (PCA); Standard Scaler (StdSca); Feed-Forward Neural Network (FFNN); cross pooling (CxPool)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The detection of diabetic retinopathy (DR) is a time-consuming and labor-intensive process that requires an ophthalmologist to investigate and assess digital color fundus photographs of the retina and to discover DR from the existence of lesions linked with the vascular anomalies triggered by the disease. Integrating only a single type of sequential image yields few variations among images, which does not provide enough feasibility or sufficient mapping scenarios. This research proposes an automated decision-making ResNet feed-forward neural network methodology. Mapping techniques are integrated to analyze and map missing connections of retinal arterioles, microaneurysms, venules, dot points of the fovea, cotton-wool spots, the macula, the outer line of the optic disc, and hard exudates and hemorrhages between color and black-and-white images. Missing computations are included in the sequence of vectors, which helps identify DR stages. A total of 5672 sequential and 7231 non-sequential color fundus and black-and-white retinal images were included in the test cases. Good- and poor-quality images were split in an 80/20 ratio between training and testing, and 10-fold cross-validation was applied. The accuracy, sensitivity, and specificity were 98.9%, 98.7%, and 98.3% for good-quality images and 94.9%, 93.6%, and 93.2% for poor-quality images, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_32-Automated_Decision_Making_ResNet_Feed_Forward_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection using Particle Swarm Optimization for Sentiment Analysis of Drug Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140530</link>
        <id>10.14569/IJACSA.2023.0140530</id>
        <doi>10.14569/IJACSA.2023.0140530</doi>
        <lastModDate>2023-05-30T16:27:32.9130000+00:00</lastModDate>
        
        <creator>Afifah Mohd Asri</creator>
        
        <creator>Siti Rohaidah Ahmad</creator>
        
        <creator>Nurhafizah Moziyana Mohd Yusop</creator>
        
        <subject>Sentiment analysis; feature selection; particle swarm optimization; drug reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Feature selection (FS) is an essential classification pre-processing task that eliminates irrelevant, redundant, and noisy features. The primary benefits of performing this task include enhanced model performance, reduced computational expense, and a mitigated “curse of dimensionality”. The goal of FS is to find the best feature group that can be used to build an effective pattern recognition model. Drug reviews play a significant role in delivering valuable medical care information, such as the efficacy, side effects, and symptoms of drug use, facilities, drug pricing, and personal drug usage experience, to healthcare providers and patients. FS can be used to obtain relevant and valuable information and produce an optimal subset of features that helps achieve accurate results in the classification of drug reviews. The FS approach reduces the number of input variables by eliminating redundant or irrelevant features and narrowing the collection of features to those most significant to the machine learning model. However, the high dimensionality of the feature vector is a major issue that reduces the accuracy of sentiment classification, making it challenging to find the best feature subset. Thus, this article presents a method that performs FS by gathering information from the potential solutions generated by a particle swarm optimization (PSO) algorithm. This research aimed to apply this algorithm to identify the optimal feature subset of drug reviews to improve the classification accuracy of sentiment analysis. The experimental results showed that PSO provided better classification performance than a genetic algorithm (GA) and ant colony optimization (ACO) on most datasets. PSO demonstrated the highest levels of performance, with averages of 49.3% for precision, 73.6% for recall, 59% for F-score, and 57.2% for accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_30-Feature_Selection_using_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industrial Practitioner Perspective of Mobile Applications Programming Languages and Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140529</link>
        <id>10.14569/IJACSA.2023.0140529</id>
        <doi>10.14569/IJACSA.2023.0140529</doi>
        <lastModDate>2023-05-30T16:27:32.9130000+00:00</lastModDate>
        
        <creator>Amira T. Mahmoud</creator>
        
        <creator>Ahmad A. Muhammad</creator>
        
        <creator>Ahmed H. Yousef</creator>
        
        <creator>Hala H.Zayed</creator>
        
        <creator>Walaa Medhat</creator>
        
        <creator>Sahar Selim</creator>
        
        <subject>Android; cross-platform; development; iOS; mobile applications; questionnaire</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The growth of the mobile application development industry has made it crucial for researchers to study industry practices in choosing mobile application programming languages, systems, and tools. With the increased attention to cross-platform mobile application development from both researchers and industry, this paper aims to answer the question of whether most companies use cross-platform development or native development. The paper collects feedback about the industry’s most used mobile development systems. In addition, it provides a map of the different technologies used by mobile application development companies according to demographics such as company size and location. An online questionnaire was carried out to collect the data. The survey targeted both amateur and professional mobile developers, and a total of 85 participants answered it. Qualitative analysis using descriptive statistics was performed on the survey results. Although the results show an industrial trend towards cross-platform languages, native development is still used by well-established companies. More than 50% of the participants were found to be aware of the performance issues of cross-platform development languages and frameworks. A comparison with the findings of related work is discussed, which raises further research questions and draws out future research in this field.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_29-Industrial_Practitioner_Perspective_of_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Adaptive e-Learning System Based on Deep Learner Profile, Machine Learning Approach, and Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140528</link>
        <id>10.14569/IJACSA.2023.0140528</id>
        <doi>10.14569/IJACSA.2023.0140528</doi>
        <lastModDate>2023-05-30T16:27:32.9000000+00:00</lastModDate>
        
        <creator>Riad Mustapha</creator>
        
        <creator>Gouraguine Soukaina</creator>
        
        <creator>Qbadou Mohammed</creator>
        
        <creator>Aoula Es-S&#226;adia</creator>
        
        <subject>Adaptive e-learning system; deep learner profile; reinforcement learning; Q-learning; k-means; linear regression; learning path recommendation; learning object</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Nowadays, the great challenge for adaptive e-learning systems is to recommend an individualized learning scenario according to the specific needs of learners. The ideal adaptive e-learning system is therefore one that relies on a deep learner profile to recommend the most appropriate learning objects for that learner. Yet the majority of existing adaptive e-learning systems do not give high importance to the adequacy between the real, continuously updated learner profile and the one taken into account in the learning path recommendation. In this paper, we propose an intelligent adaptive e-learning system based on machine learning and reinforcement learning. The objectives of this system are the creation of a deep profile of a given learner, via the implementation of K-means and linear regression, and the recommendation of adaptive learning paths according to this deep profile, by implementing the Q-learning algorithm. The proposed system is decomposed into three principal modules: a data preprocessing module, a learner deep profile creation module, and a learning path recommendation module. These three modules interact with each other to provide personalized adaptation according to the learner&#39;s deep profile. The results obtained indicate that taking the learner&#39;s deep profile into account improves the quality of learning for learners.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_28-Towards_an_Adaptive_E_learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Descriptive Analysis to Find Patterns and Trends: A Case of Car Accidents in Washington D.C.</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140527</link>
        <id>10.14569/IJACSA.2023.0140527</id>
        <doi>10.14569/IJACSA.2023.0140527</doi>
        <lastModDate>2023-05-30T16:27:32.8670000+00:00</lastModDate>
        
        <creator>Zaid M. Altukhi</creator>
        
        <creator>Nasser F. Aljohani</creator>
        
        <subject>Descriptive analysis; trends; patterns; analytics; statistics; car accidents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Descriptive analysis can be used to find trends and patterns in historical data. In this article, descriptive analysis is used to describe car accidents in Washington, D.C., between 2009 and 2020. The dataset was downloaded from the District Department of Transportation (DDOT), the department responsible for car accidents in Washington, D.C. Multiple analytical and statistical models were applied to find the relationships between different variables and the patterns and trends in the data, including correlation analysis, confidence intervals, one-way ANOVA, decision trees, and visualizations. The article aims to find the common causes of accidents and help DDOT find ways to reduce and eliminate accidents in the area. The statistical and analytical tools examine multiple features to find patterns and trends in the datasets. Four main findings emerged from the analysis. First, the main factor in most crashes is intoxicated people, whether drivers or pedestrians. Second, the top factor causing deadly accidents is speeding. Third, most of the accidents are not dangerous. Finally, we identified the ten streets with the highest numbers of accidents and found that they are located on the north side of the town.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_27-Using_Descriptive_Analysis_to_Find_Patterns_and_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Challenges Facing Blockchain Based-IoV Network: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140526</link>
        <id>10.14569/IJACSA.2023.0140526</id>
        <doi>10.14569/IJACSA.2023.0140526</doi>
        <lastModDate>2023-05-30T16:27:32.8530000+00:00</lastModDate>
        
        <creator>Hamza El Mazouzi</creator>
        
        <creator>Anass Khannous</creator>
        
        <creator>Khalid Amechnoue</creator>
        
        <creator>Anass Rghioui</creator>
        
        <subject>Internet of vehicles (IoV); traffic congestion; smart sensors; connected vehicles; data privacy; data security; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The Internet of Vehicles (IoV) is an innovative concept aimed at addressing the critical problem of traffic congestion. IoV applications are part of a connected network that collects relevant data from various smart sensors installed in connected vehicles. This information is freely and easily exchanged between vehicles, leading to improved traffic management and a reduction in traffic accidents. As IoV technology continues to grow, the amount of data collected will increase, presenting new challenges for data privacy and security. The use of blockchain technology has been proposed as a solution, as its decentralized and distributed architecture has proven reliable with cryptocurrencies such as Bitcoin. However, studies have shown that blockchain alone may not be sufficient to address privacy and security concerns, and there are currently no tools available to evaluate its performance in an IoV simulation environment. This research aims to provide a comprehensive review of the challenges associated with the implementation of blockchain technology in the IoV context.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_26-Security_Challenges_Facing_Blockchain_Based_IoV_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Channel Selection and Graph ResNet Based Algorithm for Motor Imagery Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140525</link>
        <id>10.14569/IJACSA.2023.0140525</id>
        <doi>10.14569/IJACSA.2023.0140525</doi>
        <lastModDate>2023-05-30T16:27:32.8530000+00:00</lastModDate>
        
        <creator>Yongquan Xia</creator>
        
        <creator>Jianhua Dong</creator>
        
        <creator>Duan Li</creator>
        
        <creator>Keyun Li</creator>
        
        <creator>Jiaofen Nan</creator>
        
        <creator>Ruyun Xu</creator>
        
        <subject>Brain-Computer Interface; motor imagery; channel selection; deep learning; graph convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>In Brain-Computer interface (BCI) applications, achieving accurate control relies heavily on the classification accuracy and efficiency of motor imagery electroencephalogram (EEG) signals. However, factors such as mutual interference between multi-channel signals, inter-individual variability, and noise interference in the channels pose challenges to motor imagery EEG signal classification. To address these problems, this paper proposes an Adaptive Channel Selection algorithm aimed at optimizing classification accuracy and Information Translate Rate (ITR). First, C3, C4, and Cz are selected as key channels based on neurophysiological evidence and extensive experimental studies. Next, the channel selection is fine-tuned using spatial location and absolute Pearson correlation coefficients. By analyzing the relationship between EEG channels and key channels, the most relevant channel combination is determined for each subject, reducing confounding information and improving classification accuracy. To validate the method, the SHU Dataset and the PhysioNet Dataset are used in experiments. The Graph ResNet classification model is employed to extract features from the selected channel combinations using deep learning techniques. Experimental results show that the average classification accuracy is improved by 5.36% and 9.19%, and the Information Translate Rate is improved by 29.24% and 26.75%, respectively, compared to a single channel combination.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_25-An_Adaptive_Channel_Selection_and_Graph_ResNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Math-VR: Mathematics Serious Game for Madrasah Students using Combination of Virtual Reality and Ambient Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140524</link>
        <id>10.14569/IJACSA.2023.0140524</id>
        <doi>10.14569/IJACSA.2023.0140524</doi>
        <lastModDate>2023-05-30T16:27:32.8370000+00:00</lastModDate>
        
        <creator>Hani Nurhayati</creator>
        
        <creator>Yunifa Miftachul Arif</creator>
        
        <subject>Mathematics; serious game; virtual reality; ambient intelligence; MCRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The challenge of increasing students’ understanding of mathematics lessons in madrasah schools means the learning process requires the support of adaptive alternative learning media. In this study, we propose serious game-based learning media supported by virtual reality and ambient intelligence technology to equip students with adaptive responses to subject matter scenarios. The ambient intelligence works from recommendations generated by a Multi-Criteria Recommender System (MCRS). To calculate the similarity between users and reference data, the MCRS uses cosine-based similarity calculations, and average similarity is used for ranking. We developed this learning media, called Math-VR, using the Unity game engine. The experimental test results show that MCRS-based ambient intelligence technology can provide adaptive responses when recommending geometry subject matter to students according to their pre-test results. The analysis shows that the recommendation system, as part of the ambient intelligence, achieves its highest accuracy rate of 0.92 when using 80 reference data points.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_24-Math_VR_Mathematics_Serious_Game_for_Madrasah_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DIP-CBML: A New Classification of Thai Dragon Fruit Species from Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140523</link>
        <id>10.14569/IJACSA.2023.0140523</id>
        <doi>10.14569/IJACSA.2023.0140523</doi>
        <lastModDate>2023-05-30T16:27:32.8200000+00:00</lastModDate>
        
        <creator>Naruwan Yusamran</creator>
        
        <creator>Nualsawat Hiransakolwong</creator>
        
        <subject>Dragon fruit; classification model; outdoor dataset; image pre-processing; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The attractiveness of dragon fruit lies in its unusual exterior, beautiful colors, and high nutritional value. In Thailand, dragon fruit is both imported and exported, and each export package must contain only one species. A survey found that seven species of dragon fruit are cultivated in Thailand, and only some farmers can identify the species on their own farms. Therefore, this research focuses on the classification of Thai dragon fruit from both laboratory images and outdoor images, unlike previous works, which studied only laboratory images. The method is named DIP-CBML, which stands for digital image processing with content-based and machine learning. It consists of image type identification, pre-processing, red and yellow classification, image background removal, and six-class red species classification. The results show that DIP-CBML works with both datasets, giving 100%, 100%, and 95.53% accuracy for image type identification, red and yellow classification, and the classification of the six red species, respectively. It is hoped that this research will lead to innovations in the pre-harvest classification of Thai dragon fruit cultivars, applied to industrial applications and robot harvesting, and in the future may add value to the yield of Thai dragon fruit cultivation.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_23-DIP_CBML_A_New_Classification_of_Thai_Dragon_Fruit_Species.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Prophet Routing Protocol in Delay Tolerant Network by using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140522</link>
        <id>10.14569/IJACSA.2023.0140522</id>
        <doi>10.14569/IJACSA.2023.0140522</doi>
        <lastModDate>2023-05-30T16:27:32.8030000+00:00</lastModDate>
        
        <creator>Bonu Satish Kumar</creator>
        
        <creator>Sailaja Vishnubhatla</creator>
        
        <creator>Chevuru Madhu Babu</creator>
        
        <creator>S. Pallam Shetty</creator>
        
        <subject>DTN; ONE; Prophet; CNN; ANN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Delay-Tolerant Networking (DTN), or Disruption-Tolerant Networking, belongs to the category of wireless networks that operate without infrastructure. DTN is a type of computer network that provides solutions for several applications: communication is accomplished by storing packets briefly in intermediate nodes until an end-to-end route is re-established, hence the name Delay Tolerant Network. This paper presents models developed using Artificial Neural Networks (ANN) and Convolutional Neural Networks (CNN) for predicting the best alpha, beta, and gamma parameters of the Probabilistic Routing Protocol for Intermittently Connected Networks (PROPHET) for delay tolerant networks. The dataset was generated using the ONE simulator, and the generated data was analyzed using Python’s pandas module. Of this dataset, 80% was used for training and the remaining 20% was used for testing and validation. The models were developed and tested using the r2 score to predict the alpha, beta, and gamma parameters. Based on the predicted parameters, extensive experiments were performed, and it was found that the ANN model is better than the CNN model: the ANN model can predict optimal alpha, beta, and gamma values, whereas the CNN model failed to produce accurate predictions.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_22-Performance_Analysis_of_Prophet_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyze Transmission Data from a Multi-Node Patient&#39;s Respiratory FMCW Radar to the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140521</link>
        <id>10.14569/IJACSA.2023.0140521</id>
        <doi>10.14569/IJACSA.2023.0140521</doi>
        <lastModDate>2023-05-30T16:27:32.7900000+00:00</lastModDate>
        
        <creator>Rizky Rahmatullah</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Suisbiyanto Prasetya</creator>
        
        <creator>Arief Budi Santiko</creator>
        
        <creator>Yuyu Wahyu</creator>
        
        <creator>B.Berlian Surya Wicaksana</creator>
        
        <creator>Stevry Yushady CH Bissa</creator>
        
        <creator>Riyani Jana Yanti</creator>
        
        <creator>Aloysius Adya Pramudita</creator>
        
        <subject>FMCW Radar data; realtime monitoring; internet of things; transmission data; multi node</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This work extends a previously developed system: an FMCW radar for monitoring human or patient breathing, which can then identify the type of disease or disorder in the patient simply from the breathing pattern. The research uses FMCW radar data on human or patient breathing, which is converted into data that can be read in real time by the public, doctors, or medical teams through a web server; the web server used is iotmedis.brin.go.id. The novelty of this study is that respiratory data of various types are taken from various points, enabling new analysis, namely of the data transmission process in server traffic, i.e., the uplink and downlink processes. Specifically, the novelty lies in how multi-patient respiratory data from the OmnipreSense FMCW radar can be processed by a microprocessor using MQTT, and how the multi-patient data can be displayed on the server in real time.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_21-Analyze_Transmission_Data_from_a_Multi_Node_Patients_Respiratory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Height Accuracy Study Based on RTK and PPK Methods Outside the Standard Working Range</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140520</link>
        <id>10.14569/IJACSA.2023.0140520</id>
        <doi>10.14569/IJACSA.2023.0140520</doi>
        <lastModDate>2023-05-30T16:27:32.7900000+00:00</lastModDate>
        
        <creator>Mohamed Jemai</creator>
        
        <creator>Mohamed Anis Loghmari</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <subject>Real time kinematic; post-processing kinematic; high-precision positioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The aim of this paper is to study and analyze the Jack Up Vessel (JUV) foundation height accuracy, with the objective of the precise installation of an Offshore Wind Farm (OWF), based on Real Time Kinematic (RTK) and Post-Processing Kinematic (PPK) modes applied to short and long baseline lengths. The offshore wind farm project is located far from the coastline, not always within the standard working range of RTK. The standard allowed vertical installation tolerance for foundations is less than 10 cm. Taking into account all error sources (vessel deformation, motion, and lever arms) that impact the height measurement of the foundation, RTK and PPK are required to perform with an accuracy better than 5 cm. In this work, all measurements are evaluated against the tolerance specification of &#177;2.5 cm. The survey GNSS tests executed during the project on board a JUV should be able to answer the following questions: Despite the critical environment, does the RTK method reach the theoretical specifications? Does PPK improve accuracy compared to the RTK solution? What is the influence of the baseline length? How often do the results fall within the tolerance range? What is the ideal logging period in which accurate and reliable results can be obtained? What is the influence of the hardware and software variants used in the testing process on the accuracy of the results? Based on the test results and analysis, a clear description of the influence of the different parameters on precise OWF height measurement in a challenging environment is presented.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_20-A_Height_Accuracy_Study_based_on_RTK_and_PPK_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Library Face Book Return Model Based on Hybrid PCA and Kernel Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140519</link>
        <id>10.14569/IJACSA.2023.0140519</id>
        <doi>10.14569/IJACSA.2023.0140519</doi>
        <lastModDate>2023-05-30T16:27:32.7730000+00:00</lastModDate>
        
        <creator>Jianwen Shi</creator>
        
        <subject>Kernel function; multidimensional principal component analysis; face recognition; intelligent book return</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>With the improvement in the quality of university education in China, the borrowing and returning of library books by college students and teachers is becoming more and more frequent. During peak return periods, managers cannot process returned books in time. Therefore, this research uses kernel functions, multidimensional principal component analysis, and multidimensional linear discriminant analysis to construct a new face recognition algorithm for the automatic return of books in the university library. The test results show that the XT_2D_PL algorithm designed in this study achieves a face recognition rate of 96.8%. When each class in the test sample set contains 11 face samples and the number of feature dimensions is 14, the recognition rate peaks at 96.3%. When processing a sample of 500 pictures, the computation speed is 1.072 ms per photo, outperforming most comparison algorithms. The proposed face recognition algorithm has high recognition accuracy on the library face data, its computation speed meets the needs of practical applications, and it has clear potential for practical deployment.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_19-Research_on_Library_Face_Book_Return_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforcement Learning-based Aspect Term Extraction using Dilated Convolutions and Differential Equation Initialization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140518</link>
        <id>10.14569/IJACSA.2023.0140518</id>
        <doi>10.14569/IJACSA.2023.0140518</doi>
        <lastModDate>2023-05-30T16:27:32.7570000+00:00</lastModDate>
        
        <creator>Yuyu Xiong</creator>
        
        <creator>Mariani Md Nor</creator>
        
        <creator>Ye Li</creator>
        
        <creator>Hongxiang Guo</creator>
        
        <creator>Li Dai</creator>
        
        <subject>Aspect term extraction; sentiment analysis; differential evolution; reinforcement learning; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Aspect term extraction (ATE) is a crucial subtask in aspect-based sentiment analysis that aims to discover the aspect terms present in a text. In this paper, a method for ATE is proposed that employs dilated convolution layers to extract feature vectors in parallel, which are then concatenated for downstream classification. Reinforcement learning is used to protect the ATE model from class imbalance: the training procedure is posed as a sequential decision-making process in which the samples are the states and the network is the agent, and the agent receives a larger reward/penalty for correct/incorrect classification of the minority class than of the majority class. The training phase typically employs gradient-based approaches such as back-propagation, which suffer from drawbacks including sensitivity to initialization. To address this, a novel differential evolution (DE) approach that uses a clustering-based mutation operator to initialize the BP process is presented. Here, a winning cluster is identified for the current DE population, and a new updating strategy is used to generate candidate solutions. The BERT model is employed for word embeddings; it can be included in a downstream task and fine-tuned, and it captures various linguistic properties. The proposed method is evaluated on two English datasets (Restaurant and Laptop) and has achieved outstanding results, surpassing other deep models (Restaurant: Precision 85.44%, F1-score 87.35%; Laptop: Precision 80.88%, F1-score 80.78%).</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_18-Reinforcement_Learning_based_Aspect_Term_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Things of Interest Recommendation with Multidimensional Context Embedding in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140517</link>
        <id>10.14569/IJACSA.2023.0140517</id>
        <doi>10.14569/IJACSA.2023.0140517</doi>
        <lastModDate>2023-05-30T16:27:32.7430000+00:00</lastModDate>
        
        <creator>Shuhua Li</creator>
        
        <creator>Jingmin An</creator>
        
        <subject>Internet of things; things of interest; multidimensional context embedding; intrinsic information; instant information; matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The emerging Internet of Things (IoT) connects users and things closely, and the interactions between users and things generate massive context data in which preference information about time, space, and textual content is embedded. Traditional recommendation methods (e.g., movie, music, and location recommendations) are based on static intrinsic context information, which lacks consideration of real-time content and spatiotemporal features and therefore fails to adapt to personalized recommendation in IoT. To meet users&#39; interests and needs in IoT, a novel, effective, and efficient recommendation method is urgently needed. The paper focuses on mining users&#39; things of interest in IoT by leveraging multidimensional context embedding. Specifically, to address the challenge of massive context data embedding different user preference information, the paper employs Convolutional Neural Networks (CNN) to mine the intrinsic content information of things and learn their representations. To solve the real-time recommendation problem, the paper proposes a real-time multimodal model that embeds location, time, and instant content information to track the features of users and things. Furthermore, the paper proposes a matrix factorization-based framework that uses regularization to fuse the real-time context embedding and the intrinsic information embedding. The experimental results demonstrate that the proposed method, tailored to IoT, is adaptable and flexible and captures personalized user preferences effectively.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_17-Things_of_Interest_Recommendation_with_Multidimensional_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing the Impact and Effectiveness of Cybersecurity Measures in e-Learning on Students and Educators: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140516</link>
        <id>10.14569/IJACSA.2023.0140516</id>
        <doi>10.14569/IJACSA.2023.0140516</doi>
        <lastModDate>2023-05-30T16:27:32.7270000+00:00</lastModDate>
        
        <creator>Alaa Saeb Al-Sherideh</creator>
        
        <creator>Khaled Maabreh</creator>
        
        <creator>Majdi Maabreh</creator>
        
        <creator>Mohammad Rasmi Al Mousa</creator>
        
        <creator>Mahmoud Asassfeh</creator>
        
        <subject>e-learning; security; cyber security; privacy; countermeasure; Moodle; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>As e-learning has become increasingly prevalent, cyber security has become a major concern. e-Learning platforms collect and store large amounts of sensitive information, such as personal data and financial information, making them attractive targets for cybercriminals. To address these challenges and concerns, e-learning platforms must implement a comprehensive cyber security strategy that includes strong access controls, data encryption, regular software updates, and student training to help them identify and prevent insider threats. This research aims to investigate how satisfied students are with e-learning security and privacy, and whether these concerns affect the overall standard of education. A sample study is presented to assess both the impact of the security framework on students&#39; academic achievements and the students&#39; satisfaction with the security countermeasures in an e-learning system. Statistical analysis showed that the use of security and cyber security countermeasures had a significant effect on students&#39; frequent use of and participation in the contents of the system. Furthermore, encouraging feedback and communication from students about their e-learning experience, so they can share their concerns, questions, and suggestions, can help address security issues as well as increase students&#39; participation in the e-learning content.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_16-Assessing_the_Impact_and_Effectiveness_of_Cybersecurity_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Sharing using PDPA-Compliant Blockchain Architecture in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140515</link>
        <id>10.14569/IJACSA.2023.0140515</id>
        <doi>10.14569/IJACSA.2023.0140515</doi>
        <lastModDate>2023-05-30T16:27:32.7270000+00:00</lastModDate>
        
        <creator>Hasventhran Baskaran</creator>
        
        <creator>Salman Yussof</creator>
        
        <creator>Asmidar Abu Bakar</creator>
        
        <creator>Fiza Abdul Rahim</creator>
        
        <subject>Blockchain; smart contract; legitimacy; access protocol; retention protocol; data processor; data subject; data user; blockchain regulations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Data privacy is undoubtedly one of the biggest concerns for modern society and is becoming a key policy in data protection regulations. Organizations assemble massive amounts of users&#39; personal data for monetary and political purposes, and these data can be sold for commercial purposes without the prior knowledge or permission of the respective data owners. This can be mitigated by using blockchain to provide much-needed transparency. However, blockchain&#39;s transparency becomes a disadvantage when data owners want to be completely anonymous, as its transparent nature conflicts with non-linkability. Since the data in a blockchain is publicly viewable, any personal data or private transactions processed through the blockchain are exposed to every node in the network. Hence, blockchain implementations must also comply with privacy acts such as the Personal Data Protection Act (PDPA) to provide privacy by design and by default. Therefore, this paper proposes a PDPA-compliant blockchain architecture for data trading that gives users complete control of their data and full anonymity. A prototype implementing the proposed architecture is created using various tools. This study presents anonymous data sharing, data access, and data deletion features for users to verify the correctness of the proposed architecture.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_15-Data_Sharing_using_PDPA_Compliant_Blockchain_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Feature Detection Approach for COVID-19 Classification based on X-ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140514</link>
        <id>10.14569/IJACSA.2023.0140514</id>
        <doi>10.14569/IJACSA.2023.0140514</doi>
        <lastModDate>2023-05-30T16:27:32.7100000+00:00</lastModDate>
        
        <creator>Ayman Noor</creator>
        
        <creator>Priyadarshini Pattanaik</creator>
        
        <creator>Mohammed Zubair Khan</creator>
        
        <creator>Waseem Alromema</creator>
        
        <creator>Talal H. Noor</creator>
        
        <subject>COVID-19; coronavirus; deep learning; classification; chest X-ray images; DenseNet-121; XG-Boost classifier; EfficientNet-B0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>The novel human corona disease (COVID-19) is a pulmonary illness caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Chest radiography imaging plays a significant role in the screening, early diagnosis, and follow-up of suspected individuals due to the effects of COVID-19 on pneumonia-sensitive lung tissue. The disease also has a severe impact on the economy as a whole. If positive patients are identified early, the spread of the pandemic illness can be slowed, so predicting COVID-19 infection is critical for determining whether people are at risk. This paper categorizes chest CT samples of COVID-19-affected patients. The proposed two-stage deep learning technique extracts spatial features from images, making it an efficient approach to the image classification problem. Extensive experiments are conducted on benchmark chest Computed Tomography (chest-CT) image datasets. Comparative evaluation reveals that our proposed method outperforms 20 other existing pre-trained models. The test outcomes show that our proposed model achieved the best scores of 97.6%, 0.964, 0.964, and 0.982 with respect to accuracy, precision, recall, specificity, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_14-Deep_Feature_Detection_Approach_for_COVID_19_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Input Value Chain Affect Vietnamese Rice Yield: An Analytical Model Based on a Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140512</link>
        <id>10.14569/IJACSA.2023.0140512</id>
        <doi>10.14569/IJACSA.2023.0140512</doi>
        <lastModDate>2023-05-30T16:27:32.6800000+00:00</lastModDate>
        
        <creator>Thi Thanh Nga Nguyen</creator>
        
        <creator>NianSong Tu</creator>
        
        <creator>Thai Thuy Lam Ha</creator>
        
        <subject>Value chains; Vietnamese rice; machine learning; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Input value chains greatly affect rice yield; however, previous related studies were mainly based on empirical surveys and simple statistics and lacked generality and flexibility. The article presents a new method to predict the influence of the input value chain on rice yield in Vietnam based on a machine learning algorithm. Input value chain data is collected through field surveys of rice-growing households. We build a predictive model based on a neural network and a swarm intelligence optimization algorithm. The prediction results show that our proposed method has an accuracy of 96%, higher than other traditional methods. This provides a basis for managers to orient the input supply value chain for Vietnamese rice, contributing to the development of the Vietnamese rice brand in the world market.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_12-Input_Value_Chain_Affect_Vietnamese_Rice_Yield.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity in Healthcare: A Review of Recent Attacks and Mitigation Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140513</link>
        <id>10.14569/IJACSA.2023.0140513</id>
        <doi>10.14569/IJACSA.2023.0140513</doi>
        <lastModDate>2023-05-30T16:27:32.6800000+00:00</lastModDate>
        
        <creator>Elham Abdullah Al-Qarni</creator>
        
        <subject>Cybersecurity; healthcare industry; malware; ransomware; DoS; DDoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Cyberattacks on several businesses, including those in the healthcare, finance, and industrial sectors, have significantly increased in recent years. Due to inadequate security measures, antiquated practices, and sensitive data, including usernames, passwords, and medical records, the healthcare sector has emerged as a top target for cybercriminals. Cybersecurity has not received enough attention in the healthcare sector, despite being crucial for patient safety and a hospital&#39;s reputation. To prevent data breaches that could jeopardize the privacy of patients&#39; information, hospitals must deploy proper IT security measures. This research article reviews scholarly publications that examine ransomware attacks and other cyberattacks on hospitals between 2014 and 2020. The report summarizes the most recent defensive measures put forth in scholarly works that can be used in the healthcare industry. Additionally, the report provides a general review of the effects of cyberattacks and the steps hospitals have taken to manage and recover from these disasters. The study shows that cyberattacks on hospitals have serious repercussions and emphasizes the importance of prioritizing cybersecurity in the healthcare sector. To combat cyberattacks, hospitals must have clear policies and backup plans, constantly upgrade their systems, and instruct employees on how to spot and handle online threats. The article concludes that putting suitable cybersecurity safeguards in place can reduce the harm caused by system failures, reputational damage, and other associated problems.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_13-Cybersecurity_in_Healthcare_A_Review_of_Recent_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendation System on Travel Destination based on Geotagged Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140511</link>
        <id>10.14569/IJACSA.2023.0140511</id>
        <doi>10.14569/IJACSA.2023.0140511</doi>
        <lastModDate>2023-05-30T16:27:32.6630000+00:00</lastModDate>
        
        <creator>Clarice Wong Sheau Harn</creator>
        
        <creator>Mafas Raheem</creator>
        
        <subject>Geotagged data; travel recommendation system; travel recommender; collaborative filtering; matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Tourism research has benefitted from the worldwide spread and development of social networking services. People nowadays are more likely to rely on internet resources to plan their vacations, so travel recommendation systems are designed to sift through the mammoth amount of data and identify the ideal travel destinations for users. Moreover, the increasing availability and popularity of geotagged data has been shown to significantly impact the destination decision. However, most current research concentrates on reviews and textual information to develop recommendation models. Therefore, the proposed travel recommendation model examines the collective behaviour of and connections between users based on geotagged data to provide personalized suggestions for individuals. The model was developed using the user-based collaborative filtering technique. Matrix factorization was selected as the collaborative filtering technique to compute user similarities due to its adaptability in dealing with sparse rating matrices. The recommendation model generates prediction values to recommend the most appropriate locations. Finally, the performance of the proposed model was assessed against popularity and random models using a test design based on Mean Average Precision (MAP), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The findings indicated that the proposed matrix factorization model has an average MAP of 0.83, with RMSE and MAE values of 1.36 and 1.24, respectively. The proposed model achieved significantly higher MAP values and the lowest RMSE and MAE values compared to the two baseline models. The comparison shows that the proposed model is effective in providing personalized suggestions to users based on their past visits.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_11-Recommendation_System_on_Travel_Destination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Ad-hoc Blockchain of Wireless Mesh Networking with Agent and Initiate Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140510</link>
        <id>10.14569/IJACSA.2023.0140510</id>
        <doi>10.14569/IJACSA.2023.0140510</doi>
        <lastModDate>2023-05-30T16:27:32.6500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Blockchain; Ad-hoc network; agent and initiate nodes; number of hops; connectivity; routing protocol; multi-hop routing; packet forwarding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>A method for an ad-hoc blockchain over wireless mesh networking with agent and initiate nodes is proposed. The main concerns are minimizing the number of hops and maintaining connectivity of mobile terminals. Through simulation studies, it is found that increasing the number of initiator nodes causes nodes to route a large number of messages. These nodes therefore die out quickly, increasing the energy required to deliver the remaining messages and causing more nodes to die. This creates a cascading effect that shortens system lifetime. Multi-hop routing, however, implies high packet overhead (more nodes in the network means more hops are available). The packet overhead of multi-hop routing is extremely high compared to single-path routing, since many nodes near the shortest path participate in packet forwarding. This additional overhead, caused by moving nodes, can lead to congestion in the network.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_10-Method_for_Ad_Hoc_Blockchain_of_Wireless_Mesh_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Prediction of Airline Stock Price through Oil Price with Long Short-Term Memory Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140509</link>
        <id>10.14569/IJACSA.2023.0140509</id>
        <doi>10.14569/IJACSA.2023.0140509</doi>
        <lastModDate>2023-05-30T16:27:32.6330000+00:00</lastModDate>
        
        <creator>Jae Won Choi</creator>
        
        <creator>Youngkeun Choi</creator>
        
        <subject>Stock price prediction; airline; oil; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This study aims to present a model that predicts the stock price of an airline using economic and technical information about oil as features and leveraging the Long Short-Term Memory (LSTM) method. Oil price data covering about seven years, from January 4, 2016, to April 14, 2023, were collected through FinanceDataReader, together with a total of 1,833 days of AA stock price data. The price data consists of the following fields: Date, Open, High, Low, Close, Volume, and Change (prices in dollars). Data is recorded every 24 hours, which makes it well suited to the short-term (24 hours ahead) price prediction conducted in this study. In this paper, normalized closing price data was trained for 50 epochs. After training, the loss converged close to 0, and the model&#39;s MSE was 0.00049. The significance of this study is as follows. First, it can provide airline companies with indicators for more sophisticated prediction and risk management. Second, the selected oil price feature can compensate for the poor performance of a simple model and its tendency to overfit.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_9-A_Study_of_Prediction_of_Airline_Stock_Price.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security in the IoT: State-of-the-Art, Issues, Solutions, and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140507</link>
        <id>10.14569/IJACSA.2023.0140507</id>
        <doi>10.14569/IJACSA.2023.0140507</doi>
        <lastModDate>2023-05-30T16:27:32.6170000+00:00</lastModDate>
        
        <creator>Ahmed SRHIR</creator>
        
        <creator>Tomader MAZRI</creator>
        
        <creator>Mohammed BENBRAHIM</creator>
        
        <subject>Internet of things (IoT); IoT security; IoT protocols; security issues in IoT; network security; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Nowadays, the Internet of Things (IoT) has enormous potential and growth impact due to the ongoing technological revolution. It has received considerable attention from researchers and is considered the future of the Internet. According to Cisco Inc. reports, the IoT will be crucial in transforming our standards of living as well as our corporate and commercial models: by 2023, the number of devices connected to IP networks will reach more than three times the population of the entire world, and there will be 5.3 billion Internet users worldwide, representing 66% of the world&#39;s population, up from 3.9 billion in 2018. The IoT enables billions of devices and services to connect to each other and exchange information; however, most of these IoT devices can be easily compromised and are subject to various security attacks. In this article, we present and discuss the main IoT security issues, categorizing them according to the IoT layer architecture and the protocols used for networking. We then describe the security requirements as well as current attacks and methods, together with adequate solutions and architectures for avoiding these issues and security breaches.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_7-Security_in_the_IoT_State_of_the_Art_Issues_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobile App for the Identification of Flowers Using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140508</link>
        <id>10.14569/IJACSA.2023.0140508</id>
        <doi>10.14569/IJACSA.2023.0140508</doi>
        <lastModDate>2023-05-30T16:27:32.6170000+00:00</lastModDate>
        
        <creator>Gandhinee Rajkomar</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>Flowers; deep learning; mobile application; Mauritius</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Flowers are admired and used by people all around the world for their fragrance, religious significance, and medicinal capabilities. The accurate taxonomy of these flower species is critical for biodiversity conservation and research. Non-experts typically need to spend a lot of time examining botanical guides in order to accurately identify a flower, which can be challenging and time-consuming. In this study, an innovative mobile application named FloralCam has been developed for the identification of flower species that are commonly found in Mauritius. Our dataset, named FlowerNet, was collected using a smartphone in a natural environment setting and consists of 11660 images, with 110 images for each of the 106 flower species. Seventy percent of the data was used for training, twenty percent for validation and the remaining ten percent for testing. Using the approach of transfer learning, pre-trained convolutional neural networks (CNNs) such as InceptionV3, MobileNetV2 and ResNet50V2 were fine-tuned on the custom dataset created. The best performance was achieved with the fine-tuned MobileNetV2 model, with an accuracy of 99.74% and a prediction time of 0.09 seconds. The best model was then converted to TensorFlow Lite format and integrated into a mobile application built using Flutter. Furthermore, the models were also tested on the benchmark Oxford 102 dataset, on which MobileNetV2 obtained the highest classification accuracy of 95.90%. The mobile application, the dataset and the deep learning models developed can be used to support future research in the field of flower recognition.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_8-A_Mobile_App_for_the_Identification_of_Flowers_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ethereum Cryptocurrency Entry Point and Trend Prediction using Bitcoin Correlation and Multiple Data Combination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140506</link>
        <id>10.14569/IJACSA.2023.0140506</id>
        <doi>10.14569/IJACSA.2023.0140506</doi>
        <lastModDate>2023-05-30T16:27:32.5870000+00:00</lastModDate>
        
        <creator>Abdellah EL ZAAR</creator>
        
        <creator>Nabil BENAYA</creator>
        
        <creator>Hicham EL MOUBTAHIJ</creator>
        
        <creator>Toufik BAKIR</creator>
        
        <creator>Amine MANSOURI</creator>
        
        <creator>Abderrahim EL ALLATI</creator>
        
        <subject>Deep learning; cryptocurrency; bitcoin trend prediction; price action; convolutional neural network; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Deep learning methods have achieved significant success in various applications, including trend signal prediction in financial markets. However, most existing approaches only utilize price action data. In this paper, we propose a novel system that incorporates multiple data sources and market correlations to predict the trend signal of Ethereum cryptocurrency. We conduct experiments to investigate the relationship between price action, candlestick patterns, and Ethereum-Bitcoin correlation, aiming to achieve highly accurate trend signal predictions. We evaluate and compare two different training strategies for Convolutional Neural Networks (CNNs), one based on transfer learning and the other on training from scratch. Our proposed 1-Dimensional CNN (1DCNN) model can also identify inflection points in price trends during specific periods through the analysis of statistical indicators. We demonstrate that our model produces more reliable predictions when utilizing multiple data representations. Our experiments show that by combining different types of data, it is possible to accurately identify both inflection points and trend signals with an accuracy of 98%.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_6-Ethereum_Cryptocurrency_Entry_Point_and_Trend_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced SVM Model for Implicit Aspect Identification in Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140505</link>
        <id>10.14569/IJACSA.2023.0140505</id>
        <doi>10.14569/IJACSA.2023.0140505</doi>
        <lastModDate>2023-05-30T16:27:32.5700000+00:00</lastModDate>
        
        <creator>Halima Benarafa</creator>
        
        <creator>Mohammed Benkhalifa</creator>
        
        <creator>Moulay Akhloufi</creator>
        
        <subject>Implicit aspect-based sentiment analysis; machine learning; supervised approaches; support vector machines; wordnet; lesk algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Opinion Mining or Sentiment Analysis (SA) is a key component of E-commerce applications, where a vast number of reviews are generated by customers. SA operates at the aspect level, where views are expressed on a specific aspect of a product and have a strong influence on customers’ choices and businesses’ reputation. Aspect Based Sentiment Analysis (ABSA) is the task of categorizing text by aspect and identifying the sentiment attributed to it. Implicit Aspect Identification (IAI) is a subtask of ABSA. This paper empirically investigates how external knowledge (e.g. WordNet) can be integrated into the SVM model to address some of its intrinsic classification issues. To achieve this research goal, we propose an approach that improves the Support Vector Machines (SVM) model for IAI. Using WordNet (WN) semantic relations, we suggest an enhancement to the SVM kernel computation. Experiments are conducted on three benchmark datasets of product, laptop, and restaurant reviews. The effects of our approach are examined and analyzed according to three criteria: (i) the kernel function used, (ii) different experimental settings, and (iii) SVM behavior with respect to Overfitting and Underfitting. The finding of our work is that the integration of external knowledge (e.g. WordNet) is experimentally shown to be significantly helpful to SVM classification for IAI, especially for addressing Overfitting and Underfitting, which are considered two of the main structural SVM issues. The empirical results demonstrate that our approach helps SVM (i) improve its performance for the three considered kernels and under different experimental settings, and (ii) deal better with Overfitting and Underfitting.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_5-An_Enhanced_SVM_Model_for_Implicit_Aspect_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on COVID-19 Vaccine Tweets using Machine Learning and Deep Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140504</link>
        <id>10.14569/IJACSA.2023.0140504</id>
        <doi>10.14569/IJACSA.2023.0140504</doi>
        <lastModDate>2023-05-30T16:27:32.5530000+00:00</lastModDate>
        
        <creator>Tarun Jain</creator>
        
        <creator>Vivek Kumar Verma</creator>
        
        <creator>Akhilesh Kumar Sharma</creator>
        
        <creator>Bhavna Saini</creator>
        
        <creator>Nishant Purohit</creator>
        
        <creator>Bhavika</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Masitah Ahmad</creator>
        
        <creator>Rozanawati Darman</creator>
        
        <creator>Su-Cheng Haw</creator>
        
        <creator>Shazlyn Milleana Shaharudin</creator>
        
        <creator>Mohammad Syafwan Arshad</creator>
        
        <subject>Covid-19 vaccine; sentiment analysis; machine learning; deep learning; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>One of the main functions of NLP (Natural Language Processing) is to analyze the sentiment or opinion of a given text. The objective of this research is to analyze sentiment toward Covid-19 vaccination expressed in tweets. In this study, the tweets, collected as a dataset from Kaggle, are categorized into positive and negative depending on the polarity of the sentiment in each tweet, in order to visualize the overall situation. The reviews are translated into vector representations using various techniques, including Bag-of-Words and TF-IDF, to ensure the best result. Machine learning algorithms such as Logistic Regression, Na&#239;ve Bayes, and Support Vector Machine (SVM), and deep learning algorithms such as LSTM and BERT, were used to train the predictive models. The performance metrics show that Support Vector Machine (SVM) achieved the highest accuracy among the machine learning models, at 88.7989%. In comparison, the highest accuracy reported in related research using LSTM is 90.59%, while our model achieved its highest accuracy of 90.42% using BERT.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_4-Sentiment_Analysis_on_COVID_19_Vaccine_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Super-Resolution of Brain MRI via U-Net Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140503</link>
        <id>10.14569/IJACSA.2023.0140503</id>
        <doi>10.14569/IJACSA.2023.0140503</doi>
        <lastModDate>2023-05-30T16:27:32.5400000+00:00</lastModDate>
        
        <creator>Aryan Kalluvila</creator>
        
        <subject>MRI; U-Net; Super-Resolution; PyTorch; SRCNN; SR-GAN; Deep Learning; GPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>This paper proposes a U-Net-based deep learning architecture for the task of super-resolution of lower resolution brain magnetic resonance images (MRI). The proposed system, called MRI-Net, is designed to learn the mapping between low-resolution and high-resolution MRI images. The system is trained using 50-800 2D MRI scans, depending on the architecture, and is evaluated using peak signal-to-noise ratio (PSNR) metrics on 10 randomly selected images. The proposed U-Net architecture outperforms current state-of-the-art networks in terms of PSNR when evaluated with a 3 x 3 resolution downsampling index. The system&#39;s ability to super-resolve MRI scans has the potential to enable physicians to detect pathologies better and perform a wider range of applications. The symmetrical downsampling pipeline used in this study allows for generically representing low-resolution MRI scans to highlight proof of concept for the U-Net-based approach. The system is implemented on PyTorch 1.9.0 with NVIDIA GPU processing to speed up training time. U-Net is a promising tool for medical applications in MRI, which can provide accurate and high-quality images for better diagnoses and treatment plans. The proposed approach has the potential to reduce the costs associated with high-resolution MRI scans by providing a solution for enhancing the image quality of low-resolution scans.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_3-Super_Resolution_of_Brain_MRI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability and Security of Knowledge-based Authentication Systems: A State-of-the-Art Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140502</link>
        <id>10.14569/IJACSA.2023.0140502</id>
        <doi>10.14569/IJACSA.2023.0140502</doi>
        <lastModDate>2023-05-30T16:27:32.5230000+00:00</lastModDate>
        
        <creator>Hassan Wasfi</creator>
        
        <creator>Richard Stone</creator>
        
        <subject>Knowledge-based authentication; recognition; re-call; usability; security; memorability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Knowledge-based passwords are still the most dominant authentication method for securing digital platforms and services, in spite of the emergence of alternative systems such as token-based and biometric systems. This method has remained the most popular one mostly because of its usability, compatibility, affordability of implementation, and user familiarity. However, the main challenge of knowledge-based password schemes lies in creating passwords that balance memorability and security. This research compared various knowledge-based schemes in order to establish a strategy that provides high memorability and resilience to most cyberattacks. The review identifies areas of knowledge-based passwords for further research and refines the methodology, offering insight into usable, secure, and sustainable authentication approaches. Future work is recommended to explore the major features and drawbacks of recognition-based textual passwords, because this method provides the usability and security benefits of graphical passwords with the familiarity of textual passwords.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_2-Usability_and_Security_of_Knowledge_based_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatio-Temporal Features based Human Action Recognition using Convolutional Long Short-Term Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140501</link>
        <id>10.14569/IJACSA.2023.0140501</id>
        <doi>10.14569/IJACSA.2023.0140501</doi>
        <lastModDate>2023-05-30T16:27:32.5230000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Ebisa D. Wollega</creator>
        
        <creator>Sylvester A. Kalevela</creator>
        
        <subject>Convolutional neural network; recurrent neural network; long short-term memory; human action recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(5), 2023</description>
        <description>Recognition of human intention is crucial and challenging due to the subtle motion patterns of a series of action evolutions. Understanding human actions is the foundation of many applications, e.g., human-robot interaction, smart video monitoring, and autonomous driving. Existing deep learning methods use either spatial or temporal features during training. This research focuses on developing a lightweight method that uses both spatial and temporal features to predict human intention correctly. It proposes the Convolutional Long Short-Term Deep Network (CLSTDN), which consists of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). The CNN uses Inception-ResNet-v2 to classify object-specific class categories by extracting spatial features, and the RNN uses Long Short-Term Memory (LSTM) for final prediction based on temporal features. The proposed method was validated on four challenging benchmark datasets, i.e., UCF Sports, UCF-11, KTH, and UCF-50, and its performance was evaluated using seven metrics: accuracy, precision, recall, f-measure, error rate, loss, and confusion matrix. The proposed method showed better results compared with existing research and is expected to encourage its use in future real-time applications for predicting human intentions more robustly.</description>
        <description>http://thesai.org/Downloads/Volume14No5/Paper_1-Spatio_Temporal_Features_based_Human_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Method of Port Petrochemical Industry Throughput Development under the Background of Integration of Port, Industry and City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404106</link>
        <id>10.14569/IJACSA.2023.01404106</id>
        <doi>10.14569/IJACSA.2023.01404106</doi>
        <lastModDate>2023-05-01T07:35:25.4600000+00:00</lastModDate>
        
        <creator>Tingting Zhou</creator>
        
        <creator>Chen Guo</creator>
        
        <subject>Support vector machine algorithm; port throughput; chicken swarm optimization algorithm; grey correlation analysis; petrochemical products</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In order to accurately predict changes in the throughput of port petrochemical products and facilitate relevant decision-making, this paper analyzes the factors affecting the throughput of port petrochemical products in a city through the GRA method. After sorting and selection, the PCA method is used for preprocessing. In the SVM algorithm, ICSO is used to obtain the best parameters and to improve prediction accuracy and efficiency. In view of the variability of future development, three development scenarios are set up for the throughput forecast of petrochemical products in the city&#39;s port. The results show that the optimization speed of the ICSO algorithm is very fast: at training iteration 20, the best fitness value, 0.0572, is obtained. The training effect of the ICSO-SVM algorithm is good; the gap between its output and the original data is small, and the overall trend is close to the original data. In the test prediction, the ICSO-SVM algorithm achieves the best performance, with the smallest MAE, RMSE, and MAPE. Its minimum MAE is 762.2, which is 477.0 lower than that of the CSO-SVM algorithm (1239.2), and its minimum MAPE is 1.05%, versus 1.71% for the CSO-SVM algorithm. In general, the prediction error of the ICSO-SVM algorithm is smaller. Forecasts under the different development scenarios show that the throughput of petrochemical products in the city&#39;s port will increase over the next five years. This method can be applied to development forecasting of port petrochemical products and can provide a reference for decision-making.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_106-Simulation_Method_of_Port_Petrochemical_Industry_Throughput_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reverse Supply Chain Management Through a Quantity Flexibility Contract: A Case of Stochastic Remanufacturing Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404105</link>
        <id>10.14569/IJACSA.2023.01404105</id>
        <doi>10.14569/IJACSA.2023.01404105</doi>
        <lastModDate>2023-04-29T13:35:36.3000000+00:00</lastModDate>
        
        <creator>Changhao Zhang</creator>
        
        <subject>Reverse supply chain; channel coordination; uncertain remanufacturing capacity; quantity flexibility contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>This article investigates a two-echelon reverse supply chain (RSC) where a third-party logistics provider charges customers to return outdated products. A green manufacturer refurbishes qualified returned products through the remanufacturing process. Remanufacturing capacity is considered a stochastic variable. Under the volatility of remanufacturing capacity, some collected, examined, and qualified products may not be remanufactured. If a collected product cannot be processed, it must be salvaged at a lower value and is perceived as a lost profit. In such scenarios, increasing the quantity of returned outdated products is suitable only if there is a strong possibility of sufficient capacity in the remanufacturing process. This paper develops a stochastic model to identify the optimal order quantity under diverse contracts, including wholesale price, centralized, and quantity flexibility contracts. Under the quantity flexibility contract, the green manufacturer may cancel a restricted quantity of its preliminary order. Additionally, the third-party logistics provider offers a restricted quantity above the initial order to minimize understocking during peak seasons. Our numerical experiments demonstrate that the suggested quantity flexibility contract can coordinate the examined RSC under the volatility of remanufacturing capacity. Contrary to wholesale and centralized contracts, quantity flexibility is a more practical alternative from the perspective of participants’ profitability.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_105-Reverse_Supply_Chain_Management_Through_a_Quantity_Flexibility_Contract.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fraud Mitigation in Attendance Monitoring Systems using Dynamic QR Code, Geofencing and IMEI Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404104</link>
        <id>10.14569/IJACSA.2023.01404104</id>
        <doi>10.14569/IJACSA.2023.01404104</doi>
        <lastModDate>2023-04-29T12:39:01.2600000+00:00</lastModDate>
        
        <creator>Augustine Nwabuwe</creator>
        
        <creator>Baljinder Sanghera</creator>
        
        <creator>Temitope Alade</creator>
        
        <creator>Funminiyi Olajide</creator>
        
        <subject>Attendance management systems; fraud prevention; dynamic QR code; geofencing; IMEI verification; software algorithms; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Attendance monitoring is a vital activity in many organizations. Due to its importance, many attendance monitoring systems have been developed to automate this process. Despite several advancements in automated attendance management solutions, attendance fraud remains an issue, as some end users can exploit known vulnerabilities such as proxy attendance, buddy-punching, and early departure. In this paper, a fraud-resistant attendance management solution is developed by harnessing technologies such as geofencing, dynamic QR codes and IMEI checking. The proposed solution comprises a single-page web application, where a QR code can be enabled for attendance registration, and a mobile application, where end-users scan the generated QR code to register their attendance. Attendance cheating via QR code sharing is prevented by encoding the polygonal coordinates of the event venue in the QR code to determine whether the user is within the venue. The proposed system solves the problem of proxy attendance by registering and verifying the end user’s device IMEI number. Results obtained from testing indicate that attempts at committing a variety of attendance frauds are effectively mitigated.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_104-Fraud_Mitigation_in_Attendance_Monitoring_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Air Quality and Pollution using Statistical Methods and Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404103</link>
        <id>10.14569/IJACSA.2023.01404103</id>
        <doi>10.14569/IJACSA.2023.01404103</doi>
        <lastModDate>2023-04-29T12:39:01.2270000+00:00</lastModDate>
        
        <creator>V. Devasekhar</creator>
        
        <creator>P. Natarajan</creator>
        
        <subject>Air quality; forecasting; machine learning; statistical techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Air pollution is a major environmental issue, and machine learning techniques play an important role in analyzing and forecasting air quality data. Air quality is the outcome of complex interactions among several factors, involving chemical reactions, meteorological parameters, and emissions from natural and anthropogenic sources. In this paper, we propose an efficient combined technique that takes advantage of both statistical techniques and machine learning techniques to predict and forecast air quality and pollution in particular regions. This work also shows that prediction performance varies across different regions and cities in India. We used time series analysis, regression, and Ada-boosting to anticipate PM 2.5 concentration levels at several locations throughout Hyderabad on an annual basis, based on numerous atmospheric and surface parameters such as wind speed, air temperature, and pressure. The dataset for this investigation was taken from Kaggle; it was used to evaluate the proposed method, and the comparative results of our experiments are then plotted.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_103-Prediction_of_Air_Quality_and_Pollution_using_Statistical_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Particle Swarm Optimization with Imbalance Initialization and Task Rescheduling for Task Offloading in Device-Edge-Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404102</link>
        <id>10.14569/IJACSA.2023.01404102</id>
        <doi>10.14569/IJACSA.2023.01404102</doi>
        <lastModDate>2023-04-29T12:39:01.2130000+00:00</lastModDate>
        
        <creator>Hui Fu</creator>
        
        <creator>Guangyuan Li</creator>
        
        <creator>Fang Han</creator>
        
        <creator>Bo Wang</creator>
        
        <subject>Cloud computing; edge computing; task offloading; task scheduling; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Smart devices, e.g., smartphones and Internet-of-Things devices, have become prevalent in our lives. How to take full advantage of their limited resources to satisfy as many user requirements as possible is still a challenge. Thus, in this paper, we address this challenge through the task offloading problem in device-edge-cloud computing, using PSO improved with imbalance initialization and task rescheduling. The imbalance initialization increases the probability that a task is assigned to the computing node that provides a longer slack time. The task rescheduling reassigns tasks with deadline violations to other nodes, to improve the number of accepted tasks for each offloading solution. Extensive experiment results show that our proposed algorithm outperforms ten other classical and up-to-date algorithms in the maximization of the accepted task number, resource utilization, and processing rate.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_102-A_Particle_Swarm_Optimization_with_Imbalance_Initialization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Discover: A New Community-based Approach for Detecting Anomalies in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404101</link>
        <id>10.14569/IJACSA.2023.01404101</id>
        <doi>10.14569/IJACSA.2023.01404101</doi>
        <lastModDate>2023-04-29T12:39:01.1800000+00:00</lastModDate>
        
        <creator>Hedia Zardi</creator>
        
        <creator>Hajar Alrajhi</creator>
        
        <subject>Anomaly detection; community anomaly; anomaly ranking; social networks; relevant attributes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In this paper, a new method called Anomaly Discover is presented for detecting anomalies in communities with mixed attributes (binary, numerical, and categorical). Our strategy identifies unusual users in Online Social Network (OSN) communities and scores them according to how far they deviate from typical users. Our ranking is based on both users’ attributes and the network structure. Moreover, for effective anomaly detection, a context-selection process is performed to choose relevant attributes that demonstrate a strong contrast between normal and abnormal users. Thus, the anomaly score is defined as the degree of divergence in both the network structure and a context-specific subset of attributes. To assess the efficacy of our model, we used real and artificial networks and compared the outcomes to those of two state-of-the-art models. The outcomes show that our model performs well: it outperforms the other models and can pick up anomalies that competing models miss.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_101-Anomaly_Discover_A_New_Community_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IM2P-Medical: Towards Individual Management Privacy Preferences for the Medical Web Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01404100</link>
        <id>10.14569/IJACSA.2023.01404100</id>
        <doi>10.14569/IJACSA.2023.01404100</doi>
        <lastModDate>2023-04-29T12:39:01.1670000+00:00</lastModDate>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>Nguyen Thi Hoang Phuong</creator>
        
        <creator>Khiem G. Huynh</creator>
        
        <creator>Khanh H. Vo</creator>
        
        <creator>Phuc T. Nguyen</creator>
        
        <creator>Khoa D. Tran</creator>
        
        <creator>Bao Q. Tran</creator>
        
        <creator>Loc C. P. Van</creator>
        
        <creator>Duy T. Q. Nguyen</creator>
        
        <creator>Hieu M. Doan</creator>
        
        <creator>Bang K. Le</creator>
        
        <creator>Trong D. P. Nguyen</creator>
        
        <creator>Ngan T. K. Nguyen</creator>
        
        <creator>Huong H. Luong</creator>
        
        <creator>Duong Hon Minh</creator>
        
        <subject>Letter-of-credit; cash-on-delivery; blockchain; smart contract; NFT; ethereum; fantom; polygon; binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>With the advancement of technology, people are now able to monitor their health more efficiently. Mobile phones and smartwatches are equipped with sensors that can measure real-time changes in blood pressure, SPO2, and other attributes and publish them to service providers via web applications (called web apps) for health improvement suggestions. Moreover, users can share the collected health data with other people, such as doctors, relatives, or friends. However, the use of technology in healthcare has raised the issue of privacy. Some health web apps, by default, intrusively gather and share data. Additionally, smartwatches may monitor people’s health status 24/7. Therefore, users want to control how their health data are processed (e.g., collected and shared). This can be cumbersome, as they would have to configure each device manually. To address this problem, we have developed a privacy-preference prediction mechanism for web apps called IM2P-Medical: towards Individual Management of Privacy Preferences for Medical web apps. To capture individual privacy preferences, our model learns users’ privacy behavior from their responses in different medical scenarios. In practice, we exploited several machine learning algorithms: SVM, Gradient Boosting Classifier, Ada Boost Classifier, and Gradient Boosting Regressor. To prove the effectiveness of the proposed model, we set up several scenarios to measure the accuracy as well as the satisfaction level in two participant groups (i.e., expert and normal users). A key point in this research’s selection of participants is its focus on people living in developing countries, where privacy violation is not a commonly discussed topic. The main contribution of our model is that it allows users to preserve their privacy without configuring privacy settings themselves.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_100-IM2P_Medical_Towards_Individual_Management_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opportunities and Challenges in Human-Swarm Interaction: Systematic Review and Research Implications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140499</link>
        <id>10.14569/IJACSA.2023.0140499</id>
        <doi>10.14569/IJACSA.2023.0140499</doi>
        <lastModDate>2023-04-29T12:39:01.1330000+00:00</lastModDate>
        
        <creator>Alexandru-Ionut Siean</creator>
        
        <creator>Bogdanel-Constantin Gradinaru</creator>
        
        <creator>Ovidiu-Ionut Gherman</creator>
        
        <creator>Mirela Danubianu</creator>
        
        <creator>Laurentiu-Dan Milici</creator>
        
        <subject>Human swarm interactions; input modalities; swarm control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>We conducted a Systematic Literature Review of scientific papers that examined the interaction between operators and drone swarms based on the use of a command and control center. We present the results of a meta-analysis of nine scientific papers published in the ACM DL and IEEE Xplore databases. Our findings reveal that research on human-drone swarm interaction shows a disproportionate interest in hand gestures compared to other input modalities for drone swarm control. Furthermore, all articles reviewed explored gestures exclusively, and the size of the swarms used in the studies was limited, with a median of 3.0 and an average of 3.8 drones per study. We compiled an inventory of interaction modalities, recognition techniques, and application types from the scientific literature, which is presented in this paper. On the basis of our findings, we propose four areas for future research that can guide scientific investigations and practical developments in this field.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_99-Opportunities_and_Challenges_in_Human_Swarm_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Joint Potential of Blockchain and AI for Securing Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140498</link>
        <id>10.14569/IJACSA.2023.0140498</id>
        <doi>10.14569/IJACSA.2023.0140498</doi>
        <lastModDate>2023-04-29T12:39:01.1170000+00:00</lastModDate>
        
        <creator>Md. Tauseef</creator>
        
        <creator>Manjunath R Kounte</creator>
        
        <creator>Abdul Haq Nalband</creator>
        
        <creator>Mohammed Riyaz Ahmed</creator>
        
        <subject>IoT; blockchain; AI; security; attacks; decentralization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The emergence of the Internet of Things (IoT) has revolutionized the way we interact with the physical world. The rapid growth of IoT devices has led to a pressing need for robust security measures. Two promising approaches that can enhance IoT security are blockchain and artificial intelligence (AI). Blockchain can offer a decentralized and tamper-proof framework, ensuring the confidentiality and integrity of IoT data. AI can analyze large volumes of real-time data and detect anomalies in response to security threats in the IoT ecosystem. This paper explores the potential of these technologies and how they complement each other to provide a secured IoT system. Our main argument is that combining blockchain with AI can provide a robust solution for securing IoT networks and safeguarding the privacy of IoT users. This survey paper aims to provide a comprehensive understanding of the potential of these technologies for securing IoT networks and discuss the challenges and opportunities associated with their integration. It also provides a discussion on the current state of research on this topic and presents future research directions in this area.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_98-Exploring_the_Joint_Potential_of_Blockchain_and_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Egypt Monuments Dataset version 1: A Scalable Benchmark for Image Classification and Monument Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140497</link>
        <id>10.14569/IJACSA.2023.0140497</id>
        <doi>10.14569/IJACSA.2023.0140497</doi>
        <lastModDate>2023-04-29T12:39:01.0870000+00:00</lastModDate>
        
        <creator>Mennat Allah Hassan</creator>
        
        <creator>Alaa Hamdy</creator>
        
        <creator>Mona Nasr</creator>
        
        <subject>Deep learning; landmark datasets; landmark recognition; monument datasets; monument recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The success of machine learning (ML) as well as deep learning (DL) depends largely on data availability and quality. A system’s performance is frequently more affected by the amount and quality of its training data than by its architecture and training specifics. Consequently, demand exists for challenging datasets that both precisely measure performance and present unique challenges with real-world applications. The Egypt Monuments Dataset v1 (EGYPT-v1) is introduced as a new scalable benchmark for fine-grained image classification (IC) and object recognition (OR) in the domain of ancient Egyptian monuments. The EGYPT-v1 dataset is, to date, the first large dataset specific to this domain, with over seven thousand images and 40 distinct instance labels. The dataset comprises different categories of monuments such as pyramids, temples, mummies, statues, head statues, bust statues, heritage sites, palaces, and shrines. Several advanced deep network architectures were tested to appraise the classification difficulty of the EGYPT-v1 dataset, namely the ResNet50, Inception V3, and LeNet5 models, which achieved accuracy rates of 99.13%, 90.90%, and 92.64%, respectively. The dataset was predominantly created by manually collecting images from the popular global online video-sharing and social media platform YouTube, as well as WATCHiT, Egypt’s top streaming entertainment service. Additionally, Wikimedia Commons, the largest crowdsourced media repository in the world, was used as a secondary source of images. The images that comprise the dataset can be accessed on the GitHub repository https://github.com/mennatallahhassan/egypt-monuments-dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_97-Egypt_Monuments_Dataset_version_1_A_Scalable_Benchmark.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning-Based Approach for Anomaly Detection using Apache Spark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140496</link>
        <id>10.14569/IJACSA.2023.0140496</id>
        <doi>10.14569/IJACSA.2023.0140496</doi>
        <lastModDate>2023-04-29T12:39:01.0570000+00:00</lastModDate>
        
        <creator>Hanane Chliah</creator>
        
        <creator>Amal Battou</creator>
        
        <creator>Maryem Ait el hadj</creator>
        
        <creator>Adil Laoufi</creator>
        
        <subject>Anomaly detection; big data; Apache Spark; k-means; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Over the past few decades, the volume of data has increased significantly in both scientific institutions and universities, with large numbers of students enrolled and a high volume of related data. Furthermore, network traffic has increased post-pandemic with the use of online learning. Therefore, processing network traffic data is a complex and challenging task, and the growth in traffic increases the possibility of intrusions and anomalies. Traditional security systems cannot deal with such high-speed and big data traffic. Real-time anomaly detection should be able to process data as quickly as possible to detect abnormal and malicious data. This paper proposes a hybrid approach consisting of supervised and unsupervised learning for anomaly detection based on the big data engine Apache Spark. Initially, the k-means algorithm was implemented in Spark’s MLlib for clustering network traffic; then, for each cluster, the K-nearest neighbors algorithm (KNN) was implemented for classification and anomaly detection. The proposed model was trained and validated against a real dataset from Ibn Zohr University. The results indicate that the proposed model outperformed other well-known algorithms in detecting anomalies in the aforementioned dataset. The experimental results show that the proposed hybrid approach can reach up to 99.94% accuracy using the k-fold cross-validation method on the complete dataset with all 48 features.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_96-Hybrid_Machine_Learning_Based_Approach_for_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced MQTT Architecture for Smart Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140495</link>
        <id>10.14569/IJACSA.2023.0140495</id>
        <doi>10.14569/IJACSA.2023.0140495</doi>
        <lastModDate>2023-04-29T12:39:01.0400000+00:00</lastModDate>
        
        <creator>Raouya AKNIN</creator>
        
        <creator>Youssef Bentaleb</creator>
        
        <subject>Smart supply chain; internet of things; MQTT protocol; blockchain; smart contracts; Ethereum; solidity; one-time password</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In Industry 4.0, the use of smart supply chains has become necessary in order to overcome the shortcomings of traditional supply chains, such as overstocking, delivery delays, and stock-outs. However, the use of smart supply chains has introduced new security challenges because of the constrained nature of the Internet of Things (IoT). Thus, the problem raised is ensuring the supply chain security requirements while taking into consideration the properties of the constrained environment. For this purpose, this paper aims to strengthen the authentication and data transmission processes of the Message Queuing Telemetry Transport (MQTT) protocol, the most widely used communication protocol in the IoT environment, using blockchain and smart contracts. The new MQTT architecture avoids a single point of failure, ensures data immutability, and automates the authentication mechanism as well as the publishing and subscribing processes. In addition, the use of a one-time password (OTP) instead of a permanent one is another security measure used to protect the architecture from identity spoofing. The new architecture comprises three phases: Registration, Connection, and Publishing. Each phase is automatically controlled by a smart contract. For attack simulation tests, the smart contracts are implemented in the Remix environment. The results of the simulation tests show that the new architecture is robust and resistant to different attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_95-Enhanced_MQTT_Architecture_for_Smart_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Highly Accurate Deep Learning Model for Olive Leaf Disease Classification: A Study in Tacna-Per&#250;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140494</link>
        <id>10.14569/IJACSA.2023.0140494</id>
        <doi>10.14569/IJACSA.2023.0140494</doi>
        <lastModDate>2023-04-29T12:39:01.0100000+00:00</lastModDate>
        
        <creator>Erbert F. Osco-Mamani</creator>
        
        <creator>Israel N. Chaparro-Cruz</creator>
        
        <subject>Olive; leaf diseases; disease classification; deep learning; data augmentation; transfer learning; fine-tuning; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Deep learning applied to computer vision has many applications in agriculture, medicine, marketing, meteorology, etc. In agriculture, plant diseases can cause significant yield and quality losses, and their treatment depends on accurate and rapid classification. Olive leaf diseases are a problem that threatens the crop quality of olive growers. The objective of this work was to classify olive leaf diseases with Deep Learning in olive crops of the La Yarada-Los Palos area in the Tacna region, Peru. Disease classification is a critical task; nevertheless, for the most common diseases in the region, virosis, fumagina, and nutritional deficiencies, there is no dataset to train deep learning models. For this reason, a novel dataset of RGB olive leaf images is elaborated and published. Then, an extensive comparative experimental study was conducted using all possible configurations of the state-of-the-art methods of Learning from Scratch, Transfer Learning, Fine-Tuning, and Data Augmentation to train a modified VGG16 architecture for the classification of olive leaf diseases. It was demonstrated experimentally: (i) the ineffectiveness of Data Augmentation when the model learns from scratch, (ii) a high improvement using Transfer Learning vs. Learning from Scratch, (iii) similar performance using Transfer Learning vs. Transfer Learning + Fine-Tuning vs. Transfer Learning + Data Augmentation, and (iv) a very high improvement using Transfer Learning + Fine-Tuning + Data Augmentation. This led us to a Deep Learning model with an accuracy of 100%, 99.93%, and 100% on the training, validation, and test sets, and F1-Scores on the validation set of 1, 0.9901, and 0.9899 for the Nutritional Deficiencies, Fumagina, and Virosis olive leaf diseases, respectively. Replication of the results is ensured by publishing the novel dataset and the final model on GitHub.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_94-Highly_Accurate_Deep_Learning_Model_for_Olive_Leaf_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering COVID-19 Death Patterns from Deceased Patients: A Case Study in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140493</link>
        <id>10.14569/IJACSA.2023.0140493</id>
        <doi>10.14569/IJACSA.2023.0140493</doi>
        <lastModDate>2023-04-29T12:39:00.9930000+00:00</lastModDate>
        
        <creator>Abdulrahman Alomary</creator>
        
        <creator>Tarik Alafif</creator>
        
        <creator>Abdulmohsen Almalawi</creator>
        
        <creator>Anas Hadi</creator>
        
        <creator>Faris Alkhilaiwi</creator>
        
        <creator>Yasser Alatawi</creator>
        
        <subject>COVID-19; association rules; Apriori algorithm; patterns; death; chronic diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>COVID-19 is a serious infection that causes severe illness and death worldwide. The COVID-19 virus can infect people of all ages, especially the elderly. Furthermore, elderly people who have co-morbid conditions (e.g., chronic conditions) are at an increased risk of death. At present, no approach exists that can facilitate the characterization of patterns of COVID-19 death. In this study, an approach to identify patterns of COVID-19 death efficiently and systematically is applied by adapting the Apriori algorithm. Validation and evaluation of the proposed approach are based on a robust and reliable dataset collected from Health Affairs in the Makkah region of Saudi Arabia. The study results show that there are strong associations between hypertension, diabetes, cardiovascular disease, and kidney disease and death among deceased COVID-19 patients.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_93-Discovering_COVID_19_Death_Patterns_from_Deceased_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Hyperparameter Tuning in Transfer Learning for Driver Drowsiness Detection Based on Bayesian Optimization and Random Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140492</link>
        <id>10.14569/IJACSA.2023.0140492</id>
        <doi>10.14569/IJACSA.2023.0140492</doi>
        <lastModDate>2023-04-29T12:39:00.9630000+00:00</lastModDate>
        
        <creator>Hoang-Tu Vo</creator>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Hyperparameter tuning; driver drowsiness detection; transfer learning; Bayesian optimization; Random search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Driver drowsiness is a critical factor in road safety, and developing accurate models for detecting it is essential. Transfer learning has been shown to be an effective technique for driver drowsiness detection, as it enables models to leverage large, pre-existing datasets. However, the optimization of hyperparameters in transfer learning models can be challenging, as it involves a large search space. The core purpose of this research is to present an approach to hyperparameter tuning in transfer learning for driver fatigue detection based on Bayesian optimization and Random search algorithms. We examine the efficiency of our approach on a publicly available dataset using transfer learning models with the MobileNetV2, Xception, and VGG19 architectures. We explore the impact of hyperparameters such as dropout rate, activation function, the number of units (the number of dense nodes), optimizer, and learning rate on the transfer learning models’ overall performance. Our experiments show that our approach improves the performance of the transfer learning models, obtaining cutting-edge results on the dataset for all three architectures. We also compare the efficiency of Bayesian optimization and Random search in terms of their ability to find optimal hyperparameters and show that Bayesian optimization is more efficient than Random search. The results of our study provide insights into the importance of hyperparameter tuning for transfer learning-based driver drowsiness detection across different transfer learning models and can guide the selection of hyperparameters and models for future studies in this field. Our proposed approach can be applied to other transfer learning tasks, making it a valuable contribution to the field of ML.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_92-An_Approach_to_Hyperparameter_Tuning_in_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Effort Estimation using Machine Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140491</link>
        <id>10.14569/IJACSA.2023.0140491</id>
        <doi>10.14569/IJACSA.2023.0140491</doi>
        <lastModDate>2023-04-29T12:39:00.9470000+00:00</lastModDate>
        
        <creator>Mizanur Rahman</creator>
        
        <creator>Partha Protim Roy</creator>
        
        <creator>Mohammad Ali</creator>
        
        <creator>Teresa Gon&#231;alves</creator>
        
        <creator>Hasan Sarwar</creator>
        
        <subject>Software effort estimation; K-nearest neighbor regression; machine learning; decision tree; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Software engineering effort estimation plays a significant role in managing project cost, quality, and time when creating software. Researchers have paid close attention to software estimation over the past few decades, and a great amount of work has been done utilizing a variety of machine-learning techniques and algorithms. To evaluate predictions more effectively, this study recommends several machine learning algorithms for estimation, including k-nearest neighbor regression, support vector regression, and decision trees. These methods are now used by the software development industry for software estimation, with the goal of overcoming the limitations of parametric and conventional estimation techniques and advancing projects. Our dataset, which was created by a software company called Edusoft Consulted LTD, was used to assess the effectiveness of the established method. Three commonly used performance evaluation measures form the basis of the assessment: mean absolute error (MAE), mean squared error (MSE), and R-squared error. Comparative experimental results demonstrate that decision trees predict effort better than the other techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_91-Software_Effort_Estimation_using_Machine_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Motion Planning for a Differential Robot using Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140490</link>
        <id>10.14569/IJACSA.2023.0140490</id>
        <doi>10.14569/IJACSA.2023.0140490</doi>
        <lastModDate>2023-04-29T12:39:00.9170000+00:00</lastModDate>
        
        <creator>Fredy Martinez</creator>
        
        <creator>Angelica Rendon</creator>
        
        <subject>Autonomous motion planning; differential robot; ESP32 microcontroller; particle swarm optimization; PID controller; real-world environment; service robotics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In the field of robotics, particularly within the realm of service applications, one of the fundamental challenges lies in devising autonomous motion planning strategies for real-world environments. Addressing this issue necessitates the management of numerous variables, with the primary goal of enabling the robot to circumnavigate obstacles, attain its target destination in the most efficient manner, and adhere to the shortest possible route while prioritizing safety. Furthermore, the robot’s control mechanisms must exhibit stability, precision, and swift responsiveness. Prompted by these requirements, this paper explores the utilization of Particle Swarm Optimization (PSO) in conjunction with a Proportional-Integral-Derivative (PID) controller to devise a motion planning strategy for a differential robot operating in a multifaceted real-world setting. The proposed control system is implemented using an ESP32 microcontroller, which serves as the foundation for the robot’s motion planning and execution capabilities. Through a series of simulations, the efficacy of the suggested approach is demonstrated, emphasizing its potential as a robust solution for addressing the complex challenge of autonomous motion planning in real-world environments.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_90-Autonomous_Motion_Planning_for_a_Differential_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Design of a PSO-Tuned Multigene Symbolic Regression Genetic Programming Model for River Flow Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140489</link>
        <id>10.14569/IJACSA.2023.0140489</id>
        <doi>10.14569/IJACSA.2023.0140489</doi>
        <lastModDate>2023-04-29T12:39:00.9000000+00:00</lastModDate>
        
        <creator>Alaa Sheta</creator>
        
        <creator>Amal Abdel-Raouf</creator>
        
        <creator>Khalid M. Fraihat</creator>
        
        <creator>Abdelkarim Baareh</creator>
        
        <subject>River flow; forecasting; genetic programming; evolutionary computation; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The earth’s population is growing at a rapid rate, while the availability of water resources remains limited. Water is required for various purposes, including drinking, agriculture, industry, recreation, and development. Accurate forecasting of river flows can have a significant economic impact, particularly in agricultural water management and planning during water resource scarcity. Developing precise river flow forecasting models can greatly improve the management of water resources in many countries. In this study, we propose a two-phase model for predicting the flow of the Blackwater river located in the South Central United States. In the first phase, we use Multigene Symbolic Regression Genetic Programming (MG-GP) to develop a mathematical model. In the second phase, Particle Swarm Optimization (PSO) is employed to fine-tune the model parameters. Fine-tuning the MG-GP parameters improves the prediction accuracy of the model. The newly fine-tuned model exhibits 96% and 94% accuracy in training and testing cases, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_89-Evolutionary_Design_of_a_PSO_Tuned_Multigene_Symbolic_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Milgram and Kishino’s Reality-Virtuality Continuum and a Mathematical Formalization for Combining Multiple Reality-Virtuality Continua</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140488</link>
        <id>10.14569/IJACSA.2023.0140488</id>
        <doi>10.14569/IJACSA.2023.0140488</doi>
        <lastModDate>2023-04-29T12:39:00.8830000+00:00</lastModDate>
        
        <creator>Cristian Pamparau</creator>
        
        <subject>Systematic literature review; reality-virtuality continuum; mixed reality; transitional interfaces; mathematical formalization; XR transition protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>We explore in this paper theoretical contributions that are related to Milgram and Kishino’s Reality Virtuality Continuum by conducting a systematic literature review. From this study, we draw inspiration for our proposed mathematical formalization of combining multiple Reality-Virtuality Continua in a single, mixed reality experience. Also, we provide a definition for XR transition protocol. To complete our contribution, we discuss two potential examples that will exemplify our formalization and identify future work to be addressed.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_88-A_Review_of_Milgram_and_Kishino’s_Reality_Virtuality_Continuum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Wood Species Identification Using CNN-Based Networks at Different Magnification Levels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140487</link>
        <id>10.14569/IJACSA.2023.0140487</id>
        <doi>10.14569/IJACSA.2023.0140487</doi>
        <lastModDate>2023-04-29T12:39:00.8670000+00:00</lastModDate>
        
        <creator>Khanh Nguyen-Trong</creator>
        
        <subject>Wood species identification; convolutional neural network; ResNet50; DenseNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Wood species identification (WoodID) is a crucial task in many industries, including forestry, construction, and furniture manufacturing. However, this process currently requires highly trained individuals and is time-consuming. With the recent advances in machine learning and computer vision techniques, automatic WoodID using macro-images of cross-section wood has gained attention. Nevertheless, existing works have been evaluated on ad-hoc datasets with fixed magnification levels. To address this issue, this paper proposes an evaluation of deep learning-based methods for WoodID on multiple datasets with varying magnification levels. Several popular Convolutional Neural Networks, including DenseNet, ResNet50, and MobileNet, were examined to identify the best network and magnification levels. The experiments were conducted on five datasets with different magnifications, including a self-collected dataset and four existing ones. The results demonstrate that the DenseNet121 network achieved superior accuracy and F1-Score on the 20X dataset. The findings of this study provide useful insights into the development of automatic WoodID systems for practical applications.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_87-Evaluation_of_Wood_Species_Identification_Using_CNN_Based_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Trending Crowdsourcing Topics in Software Engineering Highlighting Mobile Crowdsourcing and AI Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140486</link>
        <id>10.14569/IJACSA.2023.0140486</id>
        <doi>10.14569/IJACSA.2023.0140486</doi>
        <lastModDate>2023-04-29T12:39:00.8530000+00:00</lastModDate>
        
        <creator>Mohammed Alghasham</creator>
        
        <creator>Mousa Alzakan</creator>
        
        <creator>Mohammed Al-Hagery</creator>
        
        <subject>Software engineering; crowdsourcing; mobile crowdsourcing; software management; software verification and validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Today’s modern technologies and requirements make the utilization of crowdsourcing more viable and applicable. It is one of the problem-solving models that can be used in various domains to reduce costs and time. It is also an excellent way to find new and different ideas and solutions. This paper studies the use of crowdsourcing in software engineering and reveals adequate details to highlight its significance. A few recent literature reviews have been published to address specific topics or study general attributes of papers in crowdsourced software engineering. This paper, however, explores all recent publications related to software and crowdsourcing to find the trends and highlight mobile and AI usage in software crowdsourcing. The findings of this paper show that most research papers are in the areas of software management and software verification and validation. The results also reveal that machine learning and data mining techniques are predominant in software management crowdsourcing and software verification and validation. Furthermore, this study shows that the methods and techniques used in general crowdsourcing apply to mobile crowdsourcing except in mobile testing, where there is a need for clustering and prioritization of test reports.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_86-A_Review_of_Trending_Crowdsourcing_Topics_in_Software_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BREPubSub: A Secure Publish-Subscribe Model using Blockchain and Re-encryption for IoT Data Sharing Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140485</link>
        <id>10.14569/IJACSA.2023.0140485</id>
        <doi>10.14569/IJACSA.2023.0140485</doi>
        <lastModDate>2023-04-29T12:39:00.8200000+00:00</lastModDate>
        
        <creator>Hoang-Anh Pham</creator>
        
        <subject>Publish-subscribe; blockchain; re-encryption; IoT data sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>As a result of the incredible growth and diversity of IoT systems and applications over the past several years, an enormous amount of sensing data has been generated, which is critical for developing IoT-based intelligent systems. So far, it has taken a significant amount of time and money to collect sufficient sensing data for these smart systems, leading to demands for sharing or exchanging available and valuable data to reduce the time and money spent on the data collection process. However, ensuring the data sharing process’s integrity, security, and fairness is fraught with challenges. This paper proposes a Blockchain-based model that supports a secure publish-subscribe protocol for data sharing management by addressing three criteria: confidentiality, integrity, and availability. In addition, the proposed model adopts a re-encryption technique to optimize shared data storage with multiple users and enhance the security of the data exchange process in a transparent and public environment like Blockchain. We have developed a DApp to demonstrate the feasibility of our design and evaluate its performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_85-BREPubSub_A_Secure_Publish_Subscribe_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Abnormal Residents’ Behavior Detection in Smart Homes for Risk Management using Fuzzy Logic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140484</link>
        <id>10.14569/IJACSA.2023.0140484</id>
        <doi>10.14569/IJACSA.2023.0140484</doi>
        <lastModDate>2023-04-29T12:39:00.8070000+00:00</lastModDate>
        
        <creator>Bo Feng</creator>
        
        <creator>Lili Miao</creator>
        
        <creator>HuiXiang Liu</creator>
        
        <subject>Smart home; abnormal detection; behavior analysis; activity recognition; elderly people; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In recent years, the population of sick and elderly people who live alone and need care has increased. This issue increases the need for a smart home that is aware of the patient&#39;s condition. Identifying the patient&#39;s activity using sensors embedded in the environment is the first step toward a smart home in which the people around the patient can leave the patient alone at home with less worry. In the literature, a variety of methods for detecting the behavior of users in the smart home are discussed. In this study, a method for abnormal behavior detection and risk-level identification is proposed, in which fuzzy logic is used in cases such as when an activity starts. Experimental results demonstrate that the proposed method achieved satisfactory performance with a 90% accuracy rate, presenting better results compared to other existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_84-Intelligent_Abnormal_Residents_Behavior_Detection_in_Smart_Homes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Develop an Olive-based Grading Algorithm using Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140483</link>
        <id>10.14569/IJACSA.2023.0140483</id>
        <doi>10.14569/IJACSA.2023.0140483</doi>
        <lastModDate>2023-04-29T12:39:00.7900000+00:00</lastModDate>
        
        <creator>Dongliang Jin</creator>
        
        <subject>Image processing; grading; color; olive; MATLAB; HSV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Olives come in a number of external and internal varieties. The Shengeh kind, which is available in three colours (green, brown, and black), was chosen at random by the researchers to ensure that the sample was diverse. To avoid discoloration throughout the experiment, 150 healthy olives were harvested and stored correctly. These olives had not been subjected to any external harm, such as crushing or milling. This particular kind of olive was kept chilled at 2&#176;C and preserved in water. This study investigates the possibility of grading Shengeh cultivars, whose olives have different uses, based on color using image processing. After preparing images of olives using MATLAB software and image processing techniques, olives are graded by color into three categories: immature (green), semi-ripe (brown), and ripe (black). The results showed that image processing technology can be used to grade olives of the Shengeh type in terms of their ripeness as a single-color grain with acceptable accuracy. The HSV color space is one of the best color spaces for separating the colors of the olive cultivar. The accuracy of the software for detecting olives with the mentioned degrees is 98%, 96%, and 100%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_83-Develop_an_Olive_based_Grading_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Localization Algorithm Integrating Attention Mechanism in Database Information Query</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140482</link>
        <id>10.14569/IJACSA.2023.0140482</id>
        <doi>10.14569/IJACSA.2023.0140482</doi>
        <lastModDate>2023-04-29T12:39:00.7730000+00:00</lastModDate>
        
        <creator>Yang Li</creator>
        
        <creator>Xianghui Hui</creator>
        
        <creator>Xiaolei Wang</creator>
        
        <creator>Fei Yin</creator>
        
        <subject>LSTM; attention mechanism; positioning; database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>This study aims to solve the problems of traditional indoor car-search positioning technology in terms of positioning accuracy and functionality. Based on database technology and deep learning technology, an LSTM model with an attention mechanism was established. This model can simultaneously extract temporal and spatial features and uses the attention mechanism for feature importance recognition. The entire positioning model is designed with three functional entries, covering positioning, car storage, and reverse car search, enhancing the user&#39;s coherent experience. The data results show that the root mean square error of the LSTM (Attention) model designed in the study is 0.216, and the variance is 0.092. Among similar positioning models, these index values are the smallest, while its CDF curve rises the fastest and reaches the highest maximum value. The research conclusion indicates that the LSTM (Attention) indoor positioning model designed in this study has better computational performance and can help users achieve more accurate positioning and vehicle navigation.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_82-Deep_Learning_Localization_Algorithm_Integrating_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Machine Learning Algorithm as a Method for Improving Stroke Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140481</link>
        <id>10.14569/IJACSA.2023.0140481</id>
        <doi>10.14569/IJACSA.2023.0140481</doi>
        <lastModDate>2023-04-29T12:39:00.7430000+00:00</lastModDate>
        
        <creator>Nojood Alageel</creator>
        
        <creator>Rahaf Alharbi</creator>
        
        <creator>Rehab Alharbi</creator>
        
        <creator>Maryam Alsayil</creator>
        
        <creator>Lubna A. Alharbi</creator>
        
        <subject>Stroke prediction; machine learning; PCA; decision tree; KNN; majority voting; Na&#239;ve Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Sudden strokes have had a severely negative impact on society, to the point of attracting efforts toward better stroke diagnosis and management. Technological advancement has also had an impact on the medical field: nowadays caregivers have better options for taking care of their patients by mining and archiving their medical records for ease of retrieval. Furthermore, it is essential to understand the risk factors that make a patient more susceptible to strokes, as some factors make stroke prediction much easier. This research offers an analysis of the factors that enhance the stroke prediction process based on electronic health records. The most important factors for stroke prediction are identified using statistical methods and Principal Component Analysis (PCA). It has been found that the most critical factors affecting stroke prediction are age, average glucose level, heart disease, and hypertension. A balanced dataset, created by sub-sampling since the dataset for stroke occurrence is highly imbalanced, is used for the model evaluation. In this study, seven different machine learning algorithms are implemented: Na&#239;ve Bayes, SVM, Random Forest, KNN, Decision Tree, Stacking, and majority voting, trained on the Kaggle dataset to predict the occurrence of stroke in patients. After preprocessing and splitting the dataset into training and testing sub-datasets, the proposed algorithms were evaluated according to accuracy, F1 score, recall, and precision. The NB classifier achieved the lowest accuracy (86%), whereas the rest of the algorithms achieved similar results: accuracy of 96%, F1 score of 0.98, precision of 0.97, and recall of 1.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_81-Using_Machine_Learning_Algorithm_as_a_Method_for_Improving_Stroke_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Reasoning based Reliability Fault Prediction of CNC Machine Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140480</link>
        <id>10.14569/IJACSA.2023.0140480</id>
        <doi>10.14569/IJACSA.2023.0140480</doi>
        <lastModDate>2023-04-29T12:39:00.7270000+00:00</lastModDate>
        
        <creator>Jie Yu</creator>
        
        <creator>Tiebin Wang</creator>
        
        <creator>Weidong Wang</creator>
        
        <creator>Gege Zhao</creator>
        
        <creator>Yue Yao</creator>
        
        <subject>CNC machine tools; reliability; fuzzy inference; fault prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>CNC machine tools are the infrastructure of the manufacturing industry, and many fields cannot do without them. This paper studies the fault data of a series of CNC machine tools and predicts the fault level based on the activity parameters of the Gutenberg-Richter curve and fuzzy information theory. The Gutenberg-Richter curve model is applied to the reliability analysis of CNC machine tools and used to fit curves for the activity parameters of each stage; the results show that the b value can reflect the fault activity frequency of CNC machine tools. Due to the correlation and fuzziness between system faults, it is more appropriate to use a fuzzy neural network, which has strong adaptability and good learning ability, easily adjusts its parameters, and can express a more complex, high-dimensional nonlinear system with fewer conditions. Fuzzy reasoning can link the nonlinear relationship between the fault level, b-value, and N-value. The error between the predicted fault level and the original level is analyzed, and the small error indicates that the model has good predictive ability. Applying this predictive ability to the reliability research of CNC machine tools will yield good results.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_80-Fuzzy_Reasoning_based_Reliability_Fault_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Review on Brain Tumor Classification and Segmentation using Convolutional Neural Network (CNN) and Capsule Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140479</link>
        <id>10.14569/IJACSA.2023.0140479</id>
        <doi>10.14569/IJACSA.2023.0140479</doi>
        <lastModDate>2023-04-29T12:39:00.6970000+00:00</lastModDate>
        
        <creator>Nurul Fatihah Binti Ali</creator>
        
        <creator>Siti Salasiah Mokri</creator>
        
        <creator>Syahirah Abd Halim</creator>
        
        <creator>Noraishikin Zulkarnain</creator>
        
        <creator>Ashrani Aizuddin Abd Rahni</creator>
        
        <creator>Seri Mastura Mustaza</creator>
        
        <subject>Deep learning; convolution neural network (CNN); capsule network; segmentation; classification; brain glioma</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Malignant brain glioma is considered one of the deadliest cancer diseases, with a fatality rate higher than its survival rate. In terms of brain glioma imaging and diagnosis, the processes of detection and segmentation are manually done by experts. However, with the advancement of artificial intelligence, the implementation of these tasks using deep learning provides an efficient solution to the management of brain glioma diagnosis and patient treatment. Deep learning networks are responsible for detecting, segmenting, and interpreting the tumors with high accuracy and repeatability so that the appropriate treatment planning can be offered to the patient. This paper presents a comparison review between two state-of-the-art deep learning networks, namely the convolutional neural network and the capsule network, in performing brain glioma classification and segmentation tasks. The performance of each published method is discussed along with its advantages and disadvantages. Next, the related constraints in both networks are outlined and highlighted for future research.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_79-Comparison_Review_on_Brain_Tumor_Classification_and_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Dropout Regularization Technique at Different Layers to Improve the Performance of Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140478</link>
        <id>10.14569/IJACSA.2023.0140478</id>
        <doi>10.14569/IJACSA.2023.0140478</doi>
        <lastModDate>2023-04-29T12:39:00.6800000+00:00</lastModDate>
        
        <creator>B. H. Pansambal</creator>
        
        <creator>A. B. Nandgaokar</creator>
        
        <subject>Convolutional layer; visible layer; hidden layer; dropout regularization; long short term memory (LSTM); deep learning; facial expression recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In many facial expression recognition models it is necessary to prevent overfitting by ensuring that no units (neurons) depend on each other. Dropout regularization can therefore be applied to randomly ignore a few nodes while processing the remaining neurons. Dropout thus helps deal with overfitting and predicts the desired results more accurately at different layers of the neural network, such as the ‘visible’, ‘hidden’ and ‘convolutional’ layers. Neural networks contain layers such as dense, fully connected, convolutional and recurrent (LSTM, long short term memory) layers, and the dropout layer can be embedded with any of them. The model drops units randomly from the neural network, meaning it removes their connections to other units. Many researchers have found dropout regularization to be a most powerful technique in machine learning and deep learning. Randomly dropping a few units (neurons) and processing the remaining units can be considered in two phases, the forward and backward passes. Once the model drops a few units randomly and selects ‘n’ of the remaining units, the weights of those units may change during processing; it must be noted that the updated weights are not reflected in the dropped units. Dropping and stepping-in a few units is an effective process, as the units that step in represent the network. The stepped-in units are assumed to have the maximum chance of less dependency on one another, so the model gives better results with higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_78-Integrating_Dropout_Regularization_Technique_at_Different_Layers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effective 3D MRI Reconstruction Method Driven by the Fusion Strategy in NSST Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140477</link>
        <id>10.14569/IJACSA.2023.0140477</id>
        <doi>10.14569/IJACSA.2023.0140477</doi>
        <lastModDate>2023-04-29T12:39:00.6500000+00:00</lastModDate>
        
        <creator>Jin Huang</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Tianqi Cheng</creator>
        
        <creator>Xinping Guo</creator>
        
        <creator>Yuwei Wang</creator>
        
        <creator>ChunXiang Liu</creator>
        
        <subject>Multiple magnetic resonance imaging; 3D reconstruction; non-subsampled shearlet transform; the algebraic reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The 3D reconstruction of medical images plays an important role in modern clinical diagnosis. Although analysis-based, iteration-based and deep learning-based methods have been popularly used, there are still many problems to deal with: the analysis-based methods are not accurate enough, the iteration-based methods are computationally intensive, and the deep learning-based methods depend heavily on the training data. To address the limitation that traditional methods use only a single scan sequence, a reconstruction method driven by the non-subsampled shearlet transform (NSST) and the algebraic reconstruction technique (ART) is proposed. Firstly, the multiple magnetic resonance imaging (MRI) sequences are decomposed into high-frequency and low-frequency components by NSST. Secondly, the low-frequency parts are fused with a weighted average fusion scheme and the high-frequency parts are fused with a weighted coefficient scheme guided by the regional average gradient and energy. Finally, the 3D reconstruction is performed using the ART algorithm. Compared with traditional reconstruction methods, the proposed method is able to capture more information from the multiple MRI sequences, which makes the reconstruction results much clearer and more accurate. Comparison with a single-sequence reconstruction model without fusion fully proves the accuracy and effectiveness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_77-The_Effective_3D_MRI_Reconstruction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Virtual Experiment Teaching of Inorganic Chemistry in Colleges and Universities Based on Unity3D</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140476</link>
        <id>10.14569/IJACSA.2023.0140476</id>
        <doi>10.14569/IJACSA.2023.0140476</doi>
        <lastModDate>2023-04-29T12:39:00.6330000+00:00</lastModDate>
        
        <creator>Xia Hu</creator>
        
        <subject>Unity3D; inorganic chemistry; virtual reality; collision detection algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Focusing on the high costs and the many risky and uncontrollable factors in chemical experiment teaching in modern colleges and universities, a virtual reality inorganic chemistry (IC) simulation experiment strategy based on Unity3D technology is studied. Starting from the needs of IC experiment teaching, a collision detection algorithm (CDA) between the experimental environment and the virtual scene is designed, with Unity3D as the main technical framework. The results show that the collision detection (CD) time of the CDA designed in the research is 99ms, the average detection rate is 95.29%, and the accuracy variance is 0.021. These values are better than those of algorithms of the same type, showing that the CD accuracy and efficiency are higher and the performance is stronger. In addition, the virtual chemistry experiment designed in the research can significantly improve students’ attitude toward use in three main aspects: perceived ease of use (PEU), immersion, and interactivity, and thus enhance students’ learning effect. Therefore, the method of virtual experiment teaching of IC in colleges and universities based on Unity3D is effective and provides a new idea for modern teaching reform.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_76-Design_of_Virtual_Experiment_Teaching_of_Inorganic_Chemistry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A PSL-based Approach to Human Activity Recognition in Smart Home Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140475</link>
        <id>10.14569/IJACSA.2023.0140475</id>
        <doi>10.14569/IJACSA.2023.0140475</doi>
        <lastModDate>2023-04-29T12:39:00.6170000+00:00</lastModDate>
        
        <creator>Yan Li</creator>
        
        <subject>Human activity recognition; probabilistic soft logic; MAP inference; temporal complexity; data uncertainty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Human activity recognition is widely used in smart cities, public safety and other fields, especially in smart home systems where it has a pivotal role. The study addresses the shortcomings of Markov logic networks for human activity recognition and proposes a human activity recognition method in smart home scenarios - an activity recognition framework based on Probabilistic Soft Logic (PSL). The framework is able to deal with logical uncertainty problems and provides expression and inference mechanisms for data uncertainty problems on this basis. The framework utilizes Deng entropy evidence theory to provide an evaluation method for sensor event uncertainty, and combines event calculus for activity modeling. Comparing the PSL method with three other common recognition methods, Ontology, Hidden Markov Model (HMM), and Markov logic network, on a public dataset, it was found that the PSL method has a much better ability to handle data uncertainty than the other three algorithms. The average recognition rates on the ADL and ADL-E sub datasets were 82.87% and 80.33%, respectively. In experiments to verify the ability of PSL to handle temporal complexity, PSL showed the least significant decrease in the average recognition rate and maintained an average recognition rate of 81.02% in the presence of concurrent and alternating activities. The human activity recognition method based on PSL has a better performance in handling both data uncertainty and temporal complexity.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_75-A_PSL_based_Approach_to_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rain Streaks Removal in Images using Extended Generative Adversarial-based Deraining Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140474</link>
        <id>10.14569/IJACSA.2023.0140474</id>
        <doi>10.14569/IJACSA.2023.0140474</doi>
        <lastModDate>2023-04-29T12:39:00.6030000+00:00</lastModDate>
        
        <creator>Subbarao Gogulamudi</creator>
        
        <creator>V. Mahalakshmi</creator>
        
        <creator>Indraneel Sreeram</creator>
        
        <subject>Deep learning; rain streaks removal; image generation; quality measure; capsule network; adversarial learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The visual quality of photographs and videos can be negatively impacted by various weather conditions, such as snow, haze, or rain, affecting the quality of the images and videos. Such impacts may greatly affect outdoor vision systems that rely on image/video data. It has recently drawn a lot of interest to remove rain streaks from a single image. Several deep learning-based methods have been introduced to address the issue of removing rain streaks from a single image. Still, the efficiency of rain streak removal with enhanced quality is challenging. Hence, a novel deep-learning method is introduced for rain streak removal. The proposed Extended Generative Adversarial based De-raining (Ex_GADerain) is the enhanced version of a traditional Generative adversarial network (GAN). The proposed Ex_GADerain introduced a Self-Attention based Convolutional Capsule Bidirectional Network (SA-CCapBiNet) based generator for enhancing the rain streaks removal process. Also, the loss function estimation using the adversarial loss and the mean absolute error loss minimizes the information loss during training. The minimal information loss enhances the generalization capability of Ex_GADerain, and hence the enhanced performance is acquired. The quality assessment of a derained image based on various assessment measures like SSIM, PSNR, RMSE, and DSSIM improved performance compared to the conventional rain streak removal methods. The maximal SSIM and PSNR acquired by the Ex_GADerain are 0.9923 and 26.7052, respectively. The minimal RMSE and DSSIM acquired by the Ex_GADerain are 0.9367 and 0.0051, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_74-Rain_Streaks_Removal_in_Images_using_Extended_Generative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Brain Tumor Segmentation in MRI Images through Enhanced Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140473</link>
        <id>10.14569/IJACSA.2023.0140473</id>
        <doi>10.14569/IJACSA.2023.0140473</doi>
        <lastModDate>2023-04-29T12:39:00.5700000+00:00</lastModDate>
        
        <creator>Kabirat Sulaiman Ayomide</creator>
        
        <creator>Teh Noranis Mohd Aris</creator>
        
        <creator>Maslina Zolkepli</creator>
        
        <subject>MRI brain tumor; anisotropic; segmentation; SVM classifier; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Achieving precise tumor segmentation is essential for accurate diagnosis. Since brain tumor segmentation requires a significant training process, reducing the training time is critical for timely treatment. The research focuses on enhancing brain tumor segmentation in MRI images using Convolutional Neural Networks and reducing training time through MATLAB&#39;s GoogLeNet, anisotropic diffusion filtering, morphological operations, and a support vector machine for MRI images. The proposed method allows for efficient analysis and management of enormous amounts of MRI image data, the earliest practicable diagnosis, and assistance in classifying patient cases as normal, benign, or malignant. The SVM classifier is used to find clusters of tumor development in an MR slice, identify tumor cells, and assess the size of the tumor present in order to diagnose brain tumors. The proposed method is evaluated using a dataset from Figshare that includes coronal, sagittal, and axial views of images taken with a T1-CE MRI modality. The accuracy of 2D tumor detection and segmentation is increased, enabling further 3D detection and achieving a mean classification accuracy of 98% across system records. Finally, a hybrid approach combining the GoogLeNet deep learning algorithm with Convolutional Neural Network-Support Vector Machine (CNN-SVM) deep learning is applied to increase the accuracy of tumor classification. The evaluations show that the proposed technique is significantly more effective than those currently in use. In the future, enhancement of the segmentation using artificial neural networks will help in the earlier and more precise detection of brain tumors. Early detection of brain tumors can benefit patients, healthcare providers, and the healthcare system as a whole. It can reduce healthcare costs associated with treating advanced-stage tumors and enables researchers to better understand the disease and develop more effective treatments.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_73-Improving_Brain_Tumor_Segmentation_in_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of the Recent Progress on Crowd Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140472</link>
        <id>10.14569/IJACSA.2023.0140472</id>
        <doi>10.14569/IJACSA.2023.0140472</doi>
        <lastModDate>2023-04-29T12:39:00.5570000+00:00</lastModDate>
        
        <creator>Sarah Altowairqi</creator>
        
        <creator>Suhuai Luo</creator>
        
        <creator>Peter Greer</creator>
        
        <subject>Crowd anomaly detection; advanced computer science; intelligent systems; video surveillance application; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Surveillance videos are crucial to public security, reducing or avoiding accidents that result from anomalies. Crowd anomaly detection is a rapidly growing research field that aims to identify abnormal or suspicious behavior in crowds. This paper provides a comprehensive review of the state of the art in crowd anomaly detection, covering different taxonomies, publicly available datasets, challenges, and future research directions. The paper first provides an overview of the field and the importance of crowd anomaly detection in applications such as public safety, transportation, and surveillance. Secondly, it presents the components of crowd anomaly detection and its different taxonomies based on the availability of labels and the type of anomalies. Thirdly, it reviews the recent progress in crowd anomaly detection. The review also covers publicly available datasets commonly used for evaluating crowd anomaly detection methods. The challenges faced by the field, such as handling variability in crowd behavior, dealing with large and complex datasets, and addressing data imbalance, are discussed. Finally, the paper concludes with a discussion of future research directions in crowd anomaly detection, including integrating multiple modalities, addressing privacy concerns, and addressing the ethical and legal implications of crowd monitoring systems.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_72-A_Review_of_the_Recent_Progress_on_Crowd_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leader-follower Optimal Control Method for Vehicle Platoons to Improve Fuel Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140471</link>
        <id>10.14569/IJACSA.2023.0140471</id>
        <doi>10.14569/IJACSA.2023.0140471</doi>
        <lastModDate>2023-04-29T12:39:00.5230000+00:00</lastModDate>
        
        <creator>Zhigang Li</creator>
        
        <creator>Yushi Guo</creator>
        
        <creator>Hua Wang</creator>
        
        <creator>Jianyong Li</creator>
        
        <creator>Yuye Xie</creator>
        
        <creator>Jingyu Liu</creator>
        
        <subject>Vehicle platoon speed optimization control; leader vehicles; follower vehicles; distributed receding horizon control method; vehicle platoon fuel consumption control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The automotive industry has experienced swift development with the rapid growth of the economy. The increasing number of vehicles has led to deteriorating road traffic conditions, increased consumption of nonrenewable energy, and excessive vehicle emissions. To tackle the problems of fuel efficiency and safety control for vehicle platoons, this study suggests a novel leader-follower optimal control method for vehicle platoons to improve fuel efficiency. Depending on where vehicles are in a platoon, they are classified into two categories: the leader vehicle and follower vehicles. The differing driving circumstances of these two types of vehicles are considered in this paper by using various control methods. On the one hand, the leader vehicle uses an approximate fuel consumption model to improve computational efficiency. At the same time, speed and acceleration are constrained to obtain the speed optimization curve. The follower vehicle uses a distributed receding horizon control method, which calculates the vehicle&#39;s speed optimization profile online. On the other hand, a linear following model is used to prevent collisions between vehicles and ensure the safety of the vehicle platoon. The simulation experiment has demonstrated that this speed optimization control method can reduce the fuel consumption of the vehicle platoon and ensure the safety of the vehicles.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_71-Leader_follower_Optimal_Control_Method_for_Vehicle_Platoons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Customer Relationship Management Using Fuzzy Association Rules and the Evolutionary Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140470</link>
        <id>10.14569/IJACSA.2023.0140470</id>
        <doi>10.14569/IJACSA.2023.0140470</doi>
        <lastModDate>2023-04-29T12:39:00.5100000+00:00</lastModDate>
        
        <creator>Ahmed Abu-Al Dahab</creator>
        
        <creator>Riham M Haggag</creator>
        
        <creator>Samir Abu-Al Fotouh</creator>
        
        <subject>Customer relationship management (CRM); fuzzy association rule mining; multilevel association rule; quantitative data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The importance of Customer Relationship Management (CRM) has never been higher. Given the competitive climate in which they operate, companies are forced to adopt new strategies that focus on customers. Thanks to recent technological development, companies can also maintain customer data within large databases that contain all customer-related information. Multilevel quantitative association mining is a significant field for discovering meaningful associations between data components across multiple abstraction levels. This paper develops a methodology that supports CRM in improving the relationship between retail companies and their customers, helping them retain existing customers and attract new ones, by applying data mining techniques based on a genetic algorithm that performs an integrated search. The proposed model is practical to implement because it does not require the user to specify minimum support and confidence thresholds, and the experimental results confirm that the proposed algorithm can effectively generate non-redundant fuzzy multilevel association rules.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_70-Enhancing_Customer_Relationship_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effectivity Deep Learning Optimization Model to Traditional Batak Culture Ulos Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140469</link>
        <id>10.14569/IJACSA.2023.0140469</id>
        <doi>10.14569/IJACSA.2023.0140469</doi>
        <lastModDate>2023-04-29T12:39:00.4930000+00:00</lastModDate>
        
        <creator>Rizki Muliono</creator>
        
        <creator>Mayang Septania Iranita</creator>
        
        <creator>Rahmad BY Syah</creator>
        
        <subject>Ulos; classification; convolutional neural network; modular neural network; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Ulos is one of the Batak culture&#39;s traditional heritage fabrics. Ulos cloth is divided into several types, each with a distinct function. Ulos Ragi Hotang, Ulos Pinunsaan, Ulos Tumtuman, Ulos Ragi Hidup, and Ulos Sadum are the five Batak ulos motifs. The Batak ulos motif has evolved over time and is now well known in other countries. However, many ordinary people have difficulty distinguishing ulos cloth from other fabrics. This study classifies the different types of ulos cloth so that the results can be used by ordinary people who are unfamiliar with their types and functions. The method used is the Convolutional Neural Network (CNN), which recognizes and classifies images. CNN&#39;s main feature is that it detects feature patches at locations in the input matrix and assembles them into high-level representations. The Modular Neural Network (MNN) is then used to break down large and complex computational processes into smaller components, reducing complexity while still producing the desired output. 80% of the data is used for training and 20% for testing. The accuracy achieved is 97.83%, the loss is 0.0793, the validation loss is 2.1885, and the validation accuracy is 0.7429.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_69-An_Effectivity_Deep_Learning_Optimization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Methodology for Information Security Risk Management using ISO 27005:2018 and NIST SP 800-30 for Insurance Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140468</link>
        <id>10.14569/IJACSA.2023.0140468</id>
        <doi>10.14569/IJACSA.2023.0140468</doi>
        <lastModDate>2023-04-29T12:39:00.4630000+00:00</lastModDate>
        
        <creator>Arief Prabawa Putra</creator>
        
        <creator>Benfano Soewito</creator>
        
        <subject>Risk management; information security; ISO/IEC 27005; NIST SP 800-30; ISO/IEC 27002</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The development of Information and Communication Technology (ICT) in the Industrial Revolution 4.0 era shows very fast and disruptive change that encourages increased use of Information Technology (IT) services within organizations. However, this carries the risk of creating vulnerabilities and threats to the organization&#39;s information systems. Plans and strategies are required to implement information security risk management that addresses vulnerabilities in threat events. This research is a case study of an Enterprise Resource Planning system in the insurance sector. It proposes a methodology that integrates information security risk management using ISO/IEC 27005:2018 as the risk management framework and NIST SP 800-30 Rev. 1 as guidance for risk assessment. The risk evaluation stage compares the results of the risk analysis with the risk criteria to determine whether the risk rating is acceptable or tolerable. Risk treatment and controls are then addressed using the ISO/IEC 27002:2022 framework.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_68-Integrated_Methodology_for_Information_Security_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Patient Health Monitoring System Development using ESP8266 and Arduino with IoT Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140467</link>
        <id>10.14569/IJACSA.2023.0140467</id>
        <doi>10.14569/IJACSA.2023.0140467</doi>
        <lastModDate>2023-04-29T12:39:00.4470000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohd Faizal bin Yusof</creator>
        
        <creator>Muhammad Zulhakim Bin Abdul Halim</creator>
        
        <creator>Muhammad Noorazlan Shah Zainudin</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Patient health monitoring; Internet of Things (IoT); Arduino UNO; Nodemcu ESP8266; thingspeak; wearable device; temperature value; heartbeat value; remotely</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The Internet of Things (IoT) has emerged as a transformative technology that has revolutionized the field of healthcare. One of the most promising applications of Internet of Things (IoT) in healthcare is patient health monitoring, which allows healthcare providers to remotely monitor patients&#39; health and provide prompt medical attention when needed. This research work focuses on developing an Internet of Things (IoT)-based patient health monitoring system aimed at providing a solution for patients, particularly the elderly, who face the risk of unexpected death due to the lack of medical attention. The proposed system utilizes a heartbeat sensor and an Infrared IR temperature sensor connected to Arduino UNO and Nodemcu, respectively, to monitor the patient&#39;s vital signs. The sensors collect the data, which is then sent to an Internet of Things (IoT) web platform via a Wi-Fi connection. The Internet of Things (IoT) platform displays the real-time data of the patient&#39;s health status, including the temperature and heartbeat rate, which can be monitored by doctors and nurses. The system is designed to send alerts to healthcare providers in the event of any medical emergency, ensuring that prompt medical attention can be provided to the patient. The significance of this research work lies in its potential to revolutionize the healthcare industry by providing a more efficient and effective means of patient health monitoring. The system can be used to monitor a large number of patients simultaneously, which is particularly beneficial in hospitals with a large patient load. Moreover, it can reduce the workload of healthcare providers, allowing them to focus on other critical tasks. This innovative system has the potential to improve the overall quality of healthcare services and lead to better health outcomes for the society.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_67-Patient_Health_Monitoring_System_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECAH: A New Energy-Aware Coverage Method for Wireless Sensor Networks using Artificial Bee Colony and Harmony Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140466</link>
        <id>10.14569/IJACSA.2023.0140466</id>
        <doi>10.14569/IJACSA.2023.0140466</doi>
        <lastModDate>2023-04-29T12:39:00.4300000+00:00</lastModDate>
        
        <creator>ZHOU Bing</creator>
        
        <creator>ZHANG Zhigang</creator>
        
        <subject>Artificial bee colony; coverage; deployment; harmony search; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Wireless Sensor Networks (WSNs) offer diverse applications in the research and commercial fields, such as military applications, medical science, waste management, home automation, habitat monitoring, and environmental observation. WSNs are generally composed of a large number of low-cost, low-power, and multifunctional sensor nodes that sense, process, and communicate data. These nodes are connected by a wireless medium, allowing them to collect and share data with each other. To achieve network coverage in a WSN, a few to thousands of tiny and low-power sensor nodes should be placed in an interconnected manner. Over the last decade, deploying sensor nodes in a WSN to cover a large area has received much attention. Coverage, regarded as an NP-hard problem, is an essential parameter for WSNs that determines how the deployed sensor nodes handle each point of interest. Various algorithms have been proposed to tackle this problem. However, they often come with a trade-off between energy efficiency and coverage rate. Moreover, the scalability of the algorithms needs to be considered for large-scale networks. This paper proposes a novel energy-aware method combining Artificial Bee Colony (ABC) and Harmony Search (HS) algorithms to address the coverage problem in WSN, called ECAH. The proposed ECAH algorithm has been tested with various network scenarios and compared with other existing algorithms. The results show that ECAH outperforms the existing methods in terms of network lifetime, coverage rate, and energy consumption. Additionally, the proposed algorithm is also more robust and efficient as it can adjust to dynamic network environment changes, making it suitable for various network scenarios.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_66-ECAH_A_New_Energy_Aware_Coverage_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employee Information Security Awareness in the Power Generation Sector of PT ABC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140465</link>
        <id>10.14569/IJACSA.2023.0140465</id>
        <doi>10.14569/IJACSA.2023.0140465</doi>
        <lastModDate>2023-04-29T12:39:00.4170000+00:00</lastModDate>
        
        <creator>Ridwan Fadlika</creator>
        
        <creator>Yova Ruldeviyani</creator>
        
        <creator>Zenfrison Tuah Butarbutar</creator>
        
        <creator>Relaci Aprilia Istiqomah</creator>
        
        <creator>Achmad Arzal Fariz</creator>
        
        <subject>Security awareness; data; information; ISO/IEC 27001:2013</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Presidential Regulation No. 82 of 2022 demonstrates the Indonesian government&#39;s dedication to protecting Vital Information Infrastructure, which has become increasingly susceptible to cyber attacks. Intrusion detections at PT ABC reached 79,575 in 2021, and malware, botnets, targeted attacks, malicious websites/domains, and ransomware attacks may cause considerable financial losses. The implication of these incidents is that employees&#39; awareness of information security is critical, in addition to security technologies like firewalls and monitoring tools. To enhance employees&#39; knowledge of information security, this study aims to evaluate the information security awareness among PT ABC personnel using the HAIS-Q survey instrument alongside ISO/IEC 27001:2013 criteria. The study will provide valuable recommendations to improve the organization&#39;s security protocols. This research intends to investigate the correlation between employees&#39; knowledge, attitude, and behavior towards information security. Data was collected through a questionnaire and analyzed using the Pearson Correlation, Cronbach&#39;s Alpha, descriptive statistics, linear regression, and Kruskal-Wallis test method. The study findings suggest that the overall information security awareness level among employees is &quot;Good&quot;. However, certain areas like internet usage, information handling, asset management, incident reporting, and the use of mobile devices need improvement. To address these areas, the study recommends promoting information security awareness according to employee categories.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_65-Employee_Information_Security_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Recommendation Model of College English MOOC based on Hybrid Recommendation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140464</link>
        <id>10.14569/IJACSA.2023.0140464</id>
        <doi>10.14569/IJACSA.2023.0140464</doi>
        <lastModDate>2023-04-29T12:39:00.4000000+00:00</lastModDate>
        
        <creator>Yifang Ding</creator>
        
        <creator>Jingbo Hao</creator>
        
        <subject>Genetic algorithm; education quality assessment; BP neural network; college English MOOC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Establishing a reasonable and efficient compulsory education balance index system is very important for promoting the all-around development of compulsory education and, in turn, realizing course recommendation for students with different attributes. On this basis, the research addresses the problems in college English education and evaluation and aims to establish a college English MOOC education and evaluation system based on an improved neural network recommendation algorithm. The research first constructed the data elements for college English MOOC education and evaluation, then established a genetic-algorithm-improved neural network (a BP neural network optimized by a genetic algorithm, GA-BP), and finally analyzed the performance of the assembled model. The results show that the fitness of the GA-BP model reaches the set expectation when evolution reaches 10 generations, with a fitness of 0.6. The corresponding thresholds and weights are obtained and substituted into the model. After repeated iterative training, the model reached an error of 10^-3 at the 12th training iteration, achieving the expected accuracy. The R value of each set hovered around 0.97, indicating a high degree of fit, which shows that the GA-BP model proposed in the study fits the data well. The difference between the expected and output values is mainly distributed in the [-0.08083, 0.06481] interval. In summary, the GA-BP model proposed in the study performs excellently in college English education and evaluation, with a faster learning rate, higher prediction accuracy, and more stable performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_64-Research_on_Recommendation_Model_of_College_English_MOOC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Tomato Disease Classification based on Leaf Image Recognition based on Deep Learning Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140463</link>
        <id>10.14569/IJACSA.2023.0140463</id>
        <doi>10.14569/IJACSA.2023.0140463</doi>
        <lastModDate>2023-04-29T12:39:00.3670000+00:00</lastModDate>
        
        <creator>Ji Zheng</creator>
        
        <creator>Minjie Du</creator>
        
        <subject>Deep learning; convolutional neural network; image recognition; plant diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The utilization of computer vision technology is of the utmost significance in the examination of plant diseases. Research utilizing image processing to investigate plant diseases necessitates the analysis of discernible patterns on plants. Recently, numerous image processing and pattern classification techniques have been employed in the construction of digital vision systems capable of recognizing and categorizing the visual manifestations of plant diseases. Given the abundance of algorithms formulated for plant leaf image classification for the detection of plant diseases, it is imperative to assess the accuracy of each algorithm, as well as its potential to identify diverse disease types. The main objective of this study is to explore accurate deep learning architectures that are more effective for deploying and detecting tomato diseases, thereby eliminating the human error of identifying tomato diseases through visual observation and enabling more widespread use. An initial model was constructed from the ground up using a convolutional neural network (CNN), trained with 22930 tomato leaf images, and then compared to the VGG16, MobileNet, and Inceptionv3 architectures through a fine-tuning process. The basic CNN model achieved a training accuracy of 90%, whereas the training accuracies of VGG16, MobileNet, and Inceptionv3 were observed to be 89%, 91%, and 87%, respectively. The VGG16 model has a greater computational complexity than the other approaches due to its considerable quantity of predefined parameters. Despite being simpler, MobileNet proved to be the most efficient in terms of accuracy and is thus the most suitable for this research, owing to its lightweight structure, fast operation, and adaptability to mobile devices. In contrast to the other architectures, the suggested CNN architecture is shallower, facilitating faster training on the same dataset. This research will provide a solid foundation for future scholars to improve the categorization of plant diseases, that is, to develop algorithms that are lighter, faster, easier to run, and more accurate.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_63-Study_on_Tomato_Disease_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Optimization with Recurrent Neural Network-based Medical Image Processing for Predicting Interstitial Lung Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140462</link>
        <id>10.14569/IJACSA.2023.0140462</id>
        <doi>10.14569/IJACSA.2023.0140462</doi>
        <lastModDate>2023-04-29T12:39:00.3530000+00:00</lastModDate>
        
        <creator>K. Sundaramoorthy</creator>
        
        <creator>R. Anitha</creator>
        
        <creator>S. Kayalvili</creator>
        
        <creator>Ayat Fawzy Ahmed Ghazala</creator>
        
        <creator>Yousef A.Baker El-Ebiary</creator>
        
        <creator>Sameh Al-Ashmawy</creator>
        
        <subject>Interstitial lung disease; spider monkey lion optimization; recurrent neural network; medical image processing; diagnosis and identification; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>One of the dreadful diseases that shortens people&#39;s lives is lung disease. Interstitial lung disease can give rise to numerous potentially fatal complications, such as pulmonary hypertension. This condition does not affect overall blood pressure; it affects only the arteries in the lungs. To prevent mortality, it is essential to accurately diagnose pulmonary illness in patients. Various classifiers, including SVM, RF, MLP, and others, have been applied to identify lung disorders. However, these algorithms cannot process large datasets, which causes false lung disease identification. A combined new Spider Monkey and Lion algorithm is suggested as a solution to overcome these limitations. Images of interstitial lung disease (ILD) were taken for the study from the publicly accessible MedGIFT database. A median filter is employed during the pre-processing step of the ILD images to reduce noise and remove undesirable objects. The features are extracted using the hybrid Spider Monkey and Lion algorithm. The affected and unaffected regions of the lungs are then classified using recurrent neural networks. Several metrics, such as accuracy, precision, recall, and F1-score, are used to evaluate the performance of the proposed system. The results demonstrate that this technique offers greater precision and accuracy and a higher rate of lung illness detection by quickly processing a large number of computed tomography images. When compared to other strategies already in use, the proposed model&#39;s accuracy is higher at 99.8%. This method could be beneficial for staging the severity of interstitial lung illness, prognosticating and forecasting treatment outcomes and survival, determining risk control, and allocating resources.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_62-Hybrid_Optimization_with_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Identity Authentication Protocol of Smart Home IoT based on Chebyshev Chaotic Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140461</link>
        <id>10.14569/IJACSA.2023.0140461</id>
        <doi>10.14569/IJACSA.2023.0140461</doi>
        <lastModDate>2023-04-29T12:39:00.3370000+00:00</lastModDate>
        
        <creator>Jingjing Sun</creator>
        
        <creator>Peng Zhang</creator>
        
        <creator>Xiaohong Kong</creator>
        
        <subject>Chebyshev; chaotic map; internet of things; identity authentication; LAoCCM protocol; key agreement; EDHOC protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA&#39;s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2023.0140461.retraction</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_61-Identity_Authentication_Protocol_of_Smart_Home_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion Privacy Protection of Graph Neural Network Points of Interest Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140460</link>
        <id>10.14569/IJACSA.2023.0140460</id>
        <doi>10.14569/IJACSA.2023.0140460</doi>
        <lastModDate>2023-04-29T12:39:00.3070000+00:00</lastModDate>
        
        <creator>Yong Gan</creator>
        
        <creator>ZhenYu Hu</creator>
        
        <subject>Recommendation algorithms; location protection; graph convolutional neural networks; k-anonymity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>For the rapidly developing location-based web recommendation services, traditional point-of-interest (POI) recommendation methods not only fail to utilize user information efficiently but also face the problem of privacy leakage. Therefore, this paper proposes a privacy-preserving POI recommendation system that fuses location and user interaction information. The geolocation-based recommendation system uses convolutional neural networks (CNN) to extract the correlations between user and POI interactions, fuses text features, and then combines the location check-in probability to recommend POIs to users. To address the geolocation leakage problem, this paper proposes an algorithm that integrates k-anonymization techniques with homogenized coordinates (KMG) to generalize the real locations of users. Finally, this paper integrates the location-preserving algorithm and the recommendation algorithm to build a privacy-preserving recommendation system. Analysis based on information entropy theory shows that the system achieves a strong privacy-preserving effect. The experimental results show that the proposed recommendation system delivers better recommendation performance on the basis of privacy protection compared with other recommendation algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_60-Fusion_Privacy_Protection_of_Graph_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validate the Users’ Comfortable Level in the Virtual Reality Walkthrough Environment for Minimizing Motion Sickness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140459</link>
        <id>10.14569/IJACSA.2023.0140459</id>
        <doi>10.14569/IJACSA.2023.0140459</doi>
        <lastModDate>2023-04-29T12:39:00.2900000+00:00</lastModDate>
        
        <creator>Muhammad Danish Affan Anua</creator>
        
        <creator>Ismahafezi Ismail</creator>
        
        <creator>Nur Saadah Mohd Shapri</creator>
        
        <creator>Wan Mohd Amir Fazamin Wan Hamzah</creator>
        
        <creator>Maizan Mat Amin</creator>
        
        <creator>Fazida Karim</creator>
        
        <subject>Virtual reality; motion sickness; head-mounted display; head lean movement; mobile VR; walkthrough technique; UTAUT; frame rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Motion sickness is a common scenario for users exposed to a virtual reality (VR) environment. It is due to a conflict in the brain: the environment tells users that they are moving, while their bodies are in fact sitting still, causing symptoms of motion sickness such as nausea and dizziness. Motion sickness has therefore become one of the main reasons why users still do not prefer to use VR to enhance their productivity. It can be mitigated by increasing the user’s comfort level during walkthroughs in the VR environment. Meanwhile, a popular VR simulation widely used in many industries is a walkthrough of a VR environment at a certain speed. This paper presents the results of walkthroughs in a VR environment at various movement speeds, based on frame-rate performance, and adopts the unified theory of acceptance and use of technology (UTAUT) model construct variables, namely performance expectancy (PE) and effort expectancy (EE), to measure the user’s comfort level. A mobile VR application, ‘VR Terrain’, was developed based on the proposed framework. The application was tested by 30 users, who moved around a VR environment at four different movement speeds implemented as four colored gates, using a head-mounted display (HMD). Descriptive and coefficient analyses were used to analyze the data. The blue gate proved the most comfortable, outperforming the other three gates. Overall, the most suitable speed for a VR walkthrough is 4.0 km/h. The experimental results may be used to derive parameters that help VR developers reduce the VR motion sickness effect in the future.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_59-Validate_the_Users_Comfortable_Level_in_the_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach for Sentiment Classification of COVID-19 Vaccination Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140458</link>
        <id>10.14569/IJACSA.2023.0140458</id>
        <doi>10.14569/IJACSA.2023.0140458</doi>
        <lastModDate>2023-04-29T12:39:00.2600000+00:00</lastModDate>
        
        <creator>Haidi Said</creator>
        
        <creator>BenBella S. Tawfik</creator>
        
        <creator>Mohamed A. Makhlouf</creator>
        
        <subject>COVID-19 vaccination; sentiment analysis; Twitter; machine learning; deep learning; natural language processing (NLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Nowadays, social media platforms enable people to continuously express their opinions and thoughts on different topics. Monitoring and analyzing people’s sentiments is essential for governments and business organizations to better understand their feelings and thoughts. The Coronavirus disease 2019 (COVID-19) has been one of the most trending topics on social media over the last two years. One of the preventative measures to control and prevent the spread of the virus was vaccination. A dataset was formed by collecting tweets from Twitter over more than a month, from November 13th to December 31st, 2021. After data cleaning, the tweets were assigned a positive, negative, or neutral label using a natural language processing (NLP) sentiment analysis tool. This study aims to analyze public opinion towards the vaccination process against COVID-19. To fulfil this goal, an ensemble model based on deep learning (LSTM-2BiGRU) is proposed that combines long short-term memory (LSTM) and bidirectional gated recurrent unit (BiGRU). The performance of the proposed model is compared to five traditional machine learning models and two deep learning models, in addition to state-of-the-art models. The comparison reveals that the proposed model outperforms all the machine and deep learning models employed in this work, with a 92.46% accuracy score. This study also shows that the numbers of tweets carrying neutral, positive, and negative sentiments are 517496 (37%), 484258 (34%), and 409570 (29%), respectively. The findings indicate that the number of people holding neutral sentiments towards COVID-19 immunization through vaccines is the highest.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_58-A_Deep_Learning_Approach_for_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cloud Native Framework for Real-time Pricing in e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140457</link>
        <id>10.14569/IJACSA.2023.0140457</id>
        <doi>10.14569/IJACSA.2023.0140457</doi>
        <lastModDate>2023-04-29T12:39:00.2430000+00:00</lastModDate>
        
        <creator>Archana Kumari</creator>
        
        <creator>Mohan Kumar. S</creator>
        
        <subject>Real-time pricing; cloud-native design; system-design; pricing-framework; Amazon web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Real-time pricing is a form of &#39;dynamic pricing&#39;; it enables online sellers to adjust prices in real time in response to variations in demand and competition, to achieve higher revenue or improve customer satisfaction. As modern e-commerce implementations become more cloud-based, this paper proposes a cloud-native framework for a real-time pricing system. We take a requirement-driven approach to arrive at a modular architecture and a set of reusable components for real-time pricing. Following the DSRM methodology, during the design phase we identify and develop the theoretical foundations for key parts of the system, such as pricing models, competition and demand watchers, and other analytics components that fulfill the functional requirements of the system. At the implementation stage, we describe how each of these components and the entire cloud application can be configured using an AWS cloud-native implementation. As a framework, this work can support a variety of pricing models, as demonstrated by the multiple pricing models discussed. Other low-latency, reusable components described in this work provide the ability to react quickly to changes in demand and competition. We also provide a price cache that decouples pricing-model calculation from end-user price requests and keeps price-query latency to a minimum. For a real-time system, where latency is the most critical non-functional requirement (NFR), we validate the system for price-request latency (found to be in the single-digit millisecond range) and market-reaction latency (less than a second). Overall, our proposed framework provides a comprehensive solution for real-time pricing that can be adapted to different business needs and can help online sellers optimize their pricing strategies.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_57-A_Cloud_Native_Framework_for_Real_time_Pricing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing WhisperGate and BlackCat Malware: Methodology and Threat Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140456</link>
        <id>10.14569/IJACSA.2023.0140456</id>
        <doi>10.14569/IJACSA.2023.0140456</doi>
        <lastModDate>2023-04-29T12:39:00.2270000+00:00</lastModDate>
        
        <creator>Mathew Nicho</creator>
        
        <creator>Rajesh Yadav</creator>
        
        <creator>Digvijay Singh</creator>
        
        <subject>Malware analysis; WhisperGate; BlackCat; malware sample; ransomware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The increasing use of powerful evasive ransomware malware in cyber warfare and targeted attacks is a persistent and growing challenge for nations, corporations, and small and medium-sized enterprises. This threat is evidenced by the emergence of the WhisperGate malware in cyber warfare, which targets organizations in Ukraine to render targeted devices inoperable, and the BlackCat malware, which targets large organizations by encrypting files. This paper outlines a practical approach to malware analysis using WhisperGate and BlackCat malware as samples. It subjects them to heuristic-based analysis techniques, including a combination of static, dynamic, hybrid, and memory analysis. Specifically, 12 tools and techniques were selected and deployed to reveal the malware’s innovative stealth and evasion capabilities. This methodology shows what techniques can be applied to analyze critical malware and differentiate samples that are variations of known threats. The paper presents currently available tools and their underlying approaches to performing automated dynamic analysis on potentially malicious software. The study thus demonstrates a practical approach to carrying out malware analysis to understand cybercriminals’ behavior, techniques, and tactics.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_56-Analyzing_WhisperGate_and_BlackCat_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Predictive Machine Learning Models to Predict the Level of Adaptability of Students in Online Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140455</link>
        <id>10.14569/IJACSA.2023.0140455</id>
        <doi>10.14569/IJACSA.2023.0140455</doi>
        <lastModDate>2023-04-29T12:39:00.1970000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Carmen Torres-Cecl&#233;n</creator>
        
        <creator>Andr&#233;s Epifan&#237;a-Huerta</creator>
        
        <creator>Gloria Castro-Leon</creator>
        
        <creator>Melquiades Melgarejo-Graciano</creator>
        
        <creator>Joselyn Zapata-Paulini</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Machine learning; adaptability; students; online education; prediction; models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>With the onset of the COVID-19 pandemic, online education has become one of the most important options available to students around the world. Although online education has been widely accepted in recent years, the sudden shift from face-to-face education has resulted in several obstacles for students. This paper aims to predict the level of adaptability that students have towards online education by using predictive machine learning (ML) models such as Random Forest (RF), K-Nearest-Neighbor (KNN), Support Vector Machine (SVM), Logistic Regression (LR), and XGBClassifier (XGB). The dataset used in this paper was obtained from Kaggle and comprises a population of 1205 high school to college students. Various stages of data analysis were performed, including data understanding and cleaning, exploratory analysis, training, testing, and validation. Multiple parameters, such as accuracy, specificity, sensitivity, F1 score, and precision, were used to evaluate the performance of each model. The results show that all five models can provide optimal results in terms of prediction. For example, the RF and XGB models presented the best performance, with an accuracy rate of 92%, outperforming the other models. Consequently, the RF and XGB models are recommended for predicting students&#39; adaptability level in online education due to their higher prediction efficiency. The KNN, SVM, and LR models achieved performances of 85%, 76%, and 67%, respectively. In conclusion, the results show that the RF and XGB models have a clear advantage in achieving higher prediction accuracy. These results are in line with other similar works that used ML techniques to predict adaptability levels.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_55-Comparison_of_Predictive_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Semi-supervised Multiple-label Image Classification using Multi-stage CNN and Visual Attention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140454</link>
        <id>10.14569/IJACSA.2023.0140454</id>
        <doi>10.14569/IJACSA.2023.0140454</doi>
        <lastModDate>2023-04-29T12:39:00.1970000+00:00</lastModDate>
        
        <creator>Joseph James S</creator>
        
        <creator>Lakshmi C</creator>
        
        <subject>Semi-supervised; visual attention; multi-label; image classification; label propagation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>To train deep neural networks effectively, a lot of labeled data is typically needed. However, in real-time applications it is difficult and expensive to acquire high-quality labels for the data, because accurately annotating multi-label images takes skill and knowledge. To enhance classification performance, it is also crucial to extract image features from all potential objects of various sizes, as well as the relationships between the labels of multi-label images. The current approaches fall short in their ability to map label dependencies and effectively classify labels. They also perform poorly at labeling unlabeled images when only a small number of labeled images are available for classification. To solve these issues, we propose a new framework for semi-supervised multiple-object label classification using multi-stage convolutional neural networks with visual attention (MSCNN) and GCN for label co-occurrence embedding (LCE), called MSCNN-LCE-MIC, which combines GCN and an attention mechanism to concurrently capture local and global label dependencies throughout the entire image classification process. Four main modules make up MSCNN-LCE-MIC: (1) an improved multi-label propagation method for labeling the largely available unlabeled images; (2) a feature extraction module using a multi-stage CNN with a visual attention mechanism that focuses on the connections between labels and target regions to extract accurate features from each input image; (3) a label co-occurrence learning module that applies GCN to discover the associations between different items and create label co-occurrence embeddings; and (4) an integrated multi-modal fusion module. Numerous tests on MS-COCO and PASCAL VOC2007 show that MSCNN-LCE-MIC significantly improves classification performance over the most recent existing methods, achieving mAP of 84.3% and 95.8%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_54-A_Novel_Framework_for_Semi_supervised_Multiple_label_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insights on Data Security Schemes and Authentication Adopted in Safeguarding Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140453</link>
        <id>10.14569/IJACSA.2023.0140453</id>
        <doi>10.14569/IJACSA.2023.0140453</doi>
        <lastModDate>2023-04-29T12:39:00.1670000+00:00</lastModDate>
        
        <creator>Nithya S</creator>
        
        <creator>Rekha B</creator>
        
        <subject>Social network; security threat; authentication; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>With increased social network usage, there is rising concern about potential security and privacy risks related to digital information data. Although there have been numerous studies in this area, a summary is necessary to understand the effectiveness of existing security approaches. The ultimate goal is to provide valuable insights into the effectiveness of existing security schemes in the social network ecosystem. Therefore, this paper discusses the existing research on authentication and data security measures, including methodologies, issues, benefits, and drawbacks. The inquiry further contributes by highlighting existing research trends and identifying the gaps. The paper concludes by stating learning outcomes that open up possible insights into the effectiveness of existing security schemes in social networks. Furthermore, blockchain has attracted increased interest for distributed security over large data. The paper&#39;s outcome states that blockchain-based authentication systems have better scope if their inherent shortcomings are amended. The findings of this paper emphasize the importance of continuous innovation in data security to ensure the safety and privacy of user data in an ever-evolving digital landscape. This paper offers a foundation for future research toward developing more secure, privacy-preserving solutions for social network users.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_53-Insights_on_Data_Security_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multistage End-to-End Driver Drowsiness Alerting System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140452</link>
        <id>10.14569/IJACSA.2023.0140452</id>
        <doi>10.14569/IJACSA.2023.0140452</doi>
        <lastModDate>2023-04-29T12:39:00.1500000+00:00</lastModDate>
        
        <creator>Sowmyashree P</creator>
        
        <creator>Sangeetha J</creator>
        
        <subject>Driver drowsiness detection; internet of things; voice alert; seat-vibration alert; physical alert; driver rating; haar cascade classifier; eye aspect ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Drowsiness in drivers is a major cause of fatal road accidents. Hence, detecting drowsiness in drivers and alerting them in time is very important to avoid accidents. Researchers have developed several techniques to detect drowsiness and warn the driver. However, no previous work has addressed an end-to-end driver drowsiness alerting system. The proposed system therefore ensures that the driver is awake through its end-to-end multi-stage (i.e., three-stage) alerting pipeline. The proposed system first performs driver authentication. Next, it detects the driver’s face and checks whether he/she has consumed alcohol; in either failure case the car engine will not start, and a warning mail is sent. The system then performs drowsiness detection. If the driver is found drowsy, a multi-stage alerting process (i.e., voice alert, seat vibration alert, and physical alert) is performed to wake him/her. After the voice alert, the driver has to give his/her fingerprint as proof of not being drowsy. If the system fails to get a fingerprint, it starts the vibration alert. The system then asks for the driver’s fingerprint once again, without which it starts a physical alert through a robot arm, performed at three different frequencies (i.e., low, medium, and high), with three questions asked after each frequency to make sure the driver is alert. In the process, the system creates a log file containing the driver’s drowsiness details; after analyzing it, the system rates the driver and mails this rating to the concerned person. This rating can be used to choose a driver for a safe and comfortable journey. Thus, the system ensures that the driver is alert and helps avoid road accidents.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_52-Multistage_End_to_End_Driver_Drowsiness_Alerting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context Aware Automatic Subjective and Objective Question Generation using Fast Text to Text Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140451</link>
        <id>10.14569/IJACSA.2023.0140451</id>
        <doi>10.14569/IJACSA.2023.0140451</doi>
        <lastModDate>2023-04-29T12:39:00.1330000+00:00</lastModDate>
        
        <creator>Arpit Agrawal</creator>
        
        <creator>Pragya Shukla</creator>
        
        <subject>Automatic question generation; Text-to-Text-transfer-transformer (T5); natural language processing; word sense disambiguation (WSD); domain adaptation; multipartite graphs; beam-search decoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Online learning has gained tremendous popularity in the last decade due to the ability to learn anytime, anywhere, from the ocean of web resources available. In particular, the worldwide lockdowns during the Covid-19 pandemic brought enormous attention to online learning for value addition and skills development, not only for school and college students but also for working professionals. This massive growth in online learning has made the task of assessment very tedious, demanding training, experience, and resources. Automatic Question Generation (AQG) techniques have been introduced to resolve this problem by deriving a question bank from text documents. However, the performance of conventional AQG techniques is subject to the availability of a large labelled training dataset. The requirement of deep linguistic knowledge for the generation of heuristic and hand-crafted rules to transform a declarative sentence into an interrogative sentence makes the problem further complicated. This paper presents a transfer learning-based text-to-text transformation model to generate subjective and objective questions automatically from a text document. The proposed AQG model utilizes the Text-to-Text-Transfer-Transformer (T5), which reframes natural language processing tasks into a unified text-to-text format, and augments it with word sense disambiguation (WSD), ConceptNet, and a domain adaptation framework to improve the meaningfulness of the questions. The Fast T5 library with the beam-search decoding algorithm has been used to reduce the model size and increase its speed through quantization of the whole model with the Open Neural Network Exchange (ONNX) framework. Keyword extraction in the proposed framework is performed using multipartite graphs to enhance context awareness. The qualitative and quantitative performance of the proposed AQG model is evaluated through a comprehensive experimental analysis over the publicly available SQuAD dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_51-Context_Aware_Automatic_Subjective_and_Objective_Question.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Combined Water Quality Testing and Crop Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140450</link>
        <id>10.14569/IJACSA.2023.0140450</id>
        <doi>10.14569/IJACSA.2023.0140450</doi>
        <lastModDate>2023-04-29T12:39:00.1030000+00:00</lastModDate>
        
        <creator>Tahani Alkhudaydi</creator>
        
        <creator>Maram Qasem Albalawi</creator>
        
        <creator>Jamelah Sanad Alanazi</creator>
        
        <creator>Wejdan Al-Anazi</creator>
        
        <creator>Rahaf Mansour Alfarshouti</creator>
        
        <subject>Deep learning; irrigation; artificial intelligence; soil moisture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The field of agriculture has been gaining more attention nowadays due to limited resources and the continuously increasing need for food. In fact, agriculture has benefited greatly from advances in artificial intelligence, namely Machine Learning (ML). In order to make the most of a crop field, one must initially plan which crop is best for planting in that particular field and whether it will provide the necessary yield. Additionally, it is very important to constantly monitor the quality of the soil and of the water used for irrigating the selected crop. In this paper, we rely on Machine Learning and data analysis to decide the type of crop to plant and to assess soil and water quality. To do so, certain parameters must be taken into consideration. For choosing the crop, parameters such as sun exposure, humidity, soil pH, and soil moisture are considered. On the other hand, water pH, electric conductivity, and the content of minerals such as chloride, calcium, and magnesium are among the parameters considered for water quality classification. After acquiring datasets for crop recommendation and water potability, we implemented a deep learning model to predict these two targets. Upon training, our neural network model achieved 97% accuracy for crop recommendation and 96% for water quality prediction, whereas the SVM model achieved 96% for crop recommendation and 92% for water quality prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_50-Deep_Learning_for_Combined_Water_Quality_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Performance Evaluation of Routing Protocols for Mobile Ad-hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140449</link>
        <id>10.14569/IJACSA.2023.0140449</id>
        <doi>10.14569/IJACSA.2023.0140449</doi>
        <lastModDate>2023-04-29T12:39:00.0870000+00:00</lastModDate>
        
        <creator>Baidaa Hamza Khudayer</creator>
        
        <creator>Lial Raja Alzabin</creator>
        
        <creator>Mohammed Anbar</creator>
        
        <creator>Ragad M Tawafak</creator>
        
        <creator>Tat-Chee Wan</creator>
        
        <creator>Abir AlSideiri</creator>
        
        <creator>Sohail Iqbal Malik</creator>
        
        <creator>Taief Alaa Al-Amiedy</creator>
        
        <subject>MANET; routing protocols; proactive protocols; reactive protocols; hybrid protocols; Ad Hoc Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>A Mobile Ad Hoc Network (MANET) is a group of wireless mobile nodes that can connect with each other over a number of hops without the need for centralized management or pre-existing infrastructure. MANETs have been used in several commercial areas, such as intelligent shipping systems, ad hoc gaming, and smart agriculture, and in non-commercial areas such as military applications, disaster rescue, and wildlife observation. One of the main challenges in MANETs is routing mobility management, which seriously affects MANET performance. Routing protocols have been functionally classified into proactive routing protocols, reactive routing protocols, and hybrid routing protocols. The objective of this paper is to present observations about the advantages and disadvantages of these protocols. Thus, this paper conducts a comparative analysis of the three groups of MANET routing protocols by comparing their features and methods in terms of routing overhead, scalability, delay, and other factors. It is shown that proactive protocols guarantee the availability of routes; however, they suffer from scalability and overhead issues. Reactive protocols, in contrast, initiate route discovery only when data needs to be sent; however, they introduce an undesirable delay due to route establishment, which affects network performance. Hybrid protocols attempt to utilize the beneficial features of both reactive and proactive protocols; they are suitable for large networks and keep information up to date, but they increase operational complexity. It is concluded that MANET routing needs enhancement in order to meet the required performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_49-A_Comparative_Performance_Evaluation_of_Routing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving QoS in Internet of Vehicles Integrating Swarm Intelligence Guided Topology Adaptive Routing and Service Differentiated Flow Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140448</link>
        <id>10.14569/IJACSA.2023.0140448</id>
        <doi>10.14569/IJACSA.2023.0140448</doi>
        <lastModDate>2023-04-29T12:39:00.0700000+00:00</lastModDate>
        
        <creator>Tanuja Kayarga</creator>
        
        <creator>Ananda Kumar S</creator>
        
        <subject>Particle swarm optimization (PSO); QoS; bio inspired technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The Internet of Vehicles (IoV) is an evolution of the vehicular ad-hoc network incorporating concepts from the Internet of Things (IoT). Each vehicle in the IoV is an intelligent object with various capabilities, such as sensing, computation, storage, and control. Vehicles can connect to any other entity in the network using various services, such as DSRC and C2C-CC. Ensuring QoS for vehicle-to-everything (V2X) communication is a major challenge in the IoV. This work applies an integration of hybrid metaheuristics-guided routing and service-differentiated flow control to ensure QoS in the Internet of Vehicles. A clustering-based network topology is adopted, with clustering based on hybrid metaheuristics integrating particle swarm optimization with the firefly algorithm. Over the established clusters, routing decisions are made using swarm intelligence. Packet flows in the network are service-differentiated, and flow control is performed at cluster heads to reduce congestion in the network. High congestion on routes is mitigated with backup paths satisfying the QoS constraints. Due to optimization of the clustering, routing, and data forwarding processes, the proposed solution is able to achieve higher QoS. Through simulation analysis, the proposed solution achieves a 2% higher packet delivery ratio and 9.67% lower end-to-end packet latency compared to existing works.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_48-Improving_QoS_in_Internet_of_Vehicles_Integrating_Swarm_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning based Approach for Recognition of Arabic Sign Language Letters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140447</link>
        <id>10.14569/IJACSA.2023.0140447</id>
        <doi>10.14569/IJACSA.2023.0140447</doi>
        <lastModDate>2023-04-29T12:38:59.9770000+00:00</lastModDate>
        
        <creator>Boutaina Hdioud</creator>
        
        <creator>Mohammed El Haj Tirari</creator>
        
        <subject>Deep learning; hand landmark model; convolutional neural network; Arabic sign language recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>No one can deny that the deaf-mute community faces communication problems in daily life. Advances in artificial intelligence over the past few years have broken through this communication barrier. The principal objective of this work is to create an Arabic Sign Language Recognition (ArSLR) system for recognizing Arabic letters. The ArSLR system is developed using our image pre-processing method to extract the exact position of the hand, and we propose a deep Convolutional Neural Network (CNN) architecture using depth data. The goal is to make it easier for people who have hearing problems to interact with hearing people. Based on user input, our method detects and recognizes hand-sign letters of the Arabic alphabet automatically. The suggested model delivers encouraging results in the recognition of Arabic sign language, with an accuracy score of 97.07%. We conducted a comparison study in order to evaluate the proposed system; the obtained results demonstrate that this method is able to recognize static signs with greater accuracy than that obtained by other similar studies on the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_47-A_Deep_Learning_based_Approach_for_Recognition_of_Arabic_Sign_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Listening to the Voice of People with Vision Impairment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140446</link>
        <id>10.14569/IJACSA.2023.0140446</id>
        <doi>10.14569/IJACSA.2023.0140446</doi>
        <lastModDate>2023-04-29T12:38:59.9630000+00:00</lastModDate>
        
        <creator>Abeer Malkawi</creator>
        
        <creator>Azrina Kamaruddin</creator>
        
        <creator>Alfian Abdul Halin</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <subject>People with vision impairment; assistive technology; outdoor mobility; behaviors; challenges; requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Extensive research has developed assistive technologies (ATs) to improve mobility for people with vision impairment (PVI). However, a limited number of PVI rely on ATs for mobility. One of the factors contributing to the limited reliability and low acceptance of ATs is the developers’ failure to consider PVI mobility traits from the target group’s perspective. Many developers and researchers have proposed solutions based on their own knowledge and experience, with PVI excluded from several studies except for limited involvement in testing phases. Accordingly, this study aims to bridge this gap by providing comprehensive information on PVIs’ behaviors, challenges, and requirements for safe and independent outdoor mobility. To this end, a total of 15 participants with vision impairment were involved in semi-structured interviews and two observation sessions. One key finding highlights the need for an AT that complements the conventional cane and overcomes its limitations rather than substituting for it. Moreover, the proposed AT should address both immediate mobility and future needs simultaneously. Overall, the study contributes comprehensive knowledge on PVI safe and independent mobility traits, which assists AT developers in exploring the potential barriers to and facilitators of AT adoption among PVI and leads to the development of effective and reliable ATs that meet their needs. For future work, the researchers will develop an AT that complements the conventional cane, supports instant mobility, and enhances cognitive map formation.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_46-Listening_to_the_Voice_of_People_with_Vision_Impairment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Design-level Class Decomposition on the Software Maintainability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140445</link>
        <id>10.14569/IJACSA.2023.0140445</id>
        <doi>10.14569/IJACSA.2023.0140445</doi>
        <lastModDate>2023-04-29T12:38:59.9300000+00:00</lastModDate>
        
        <creator>Bayu Priyambadha</creator>
        
        <creator>Tetsuro Katayama</creator>
        
        <subject>Design level refactoring; class decomposition; class diagram decomposition; software quality; software internal quality; software maintainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The quality of the software&#39;s internal structure tends to decay due to adaptation to environmental changes. Therefore, it is beneficial to maintain the internal structure of the software to benefit future phases of the software life cycle. A common correlation exists between decaying internal structures and problems such as software smells and maintenance costs. Refactoring is a process for maintaining the internal structure of software artifacts based on smells. Decomposition of classes is one of the most common refactoring actions based on the Blob smell, performed at the source code level. Moving the class decomposition process to the design artifact appears to positively affect the quality and maintainability of the source code. Therefore, studying the impact of design-level class decomposition on source code quality and software maintainability is essential to ascertain its benefits. The metrics-based evaluation shows that design-level class decomposition positively impacts source code quality and maintainability, with a rank-biserial value of 0.69.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_45-The_Impact_of_Design_level_Class_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solar Energy Forecasting Based on Complex Valued Auto-encoder and Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140443</link>
        <id>10.14569/IJACSA.2023.0140443</id>
        <doi>10.14569/IJACSA.2023.0140443</doi>
        <lastModDate>2023-04-29T12:38:59.9170000+00:00</lastModDate>
        
        <creator>Aymen Rhouma</creator>
        
        <creator>Yahia Said</creator>
        
        <subject>Solar energy forecasting; artificial intelligence; complex-valued autoencoder; long-short term memory; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Renewable energy is becoming a trusted power source. Energy forecasting is an important research field that provides information about the future power generation of renewable energy plants. Energy forecasting helps to safely manage the power grid by minimizing the operational cost of energy production. Recent advances in energy forecasting based on deep learning techniques have shown great success, but the achieved results are still far from the target results. Ordinary deep learning models have been used for time series processing. In this paper, a complex-valued autoencoder is coupled with an LSTM neural network for solar energy forecasting. The complex-valued autoencoder is used to process the time series, with the advantage of handling more complex data with more input arguments. The energy value is used as the real part, and the weather condition is treated as the imaginary part. Taking the weather condition into account helps to better predict power generation. The proposed approach was evaluated on the Fingrid open data dataset. The mean absolute error (MAE), root-mean-square error (RMSE), and mean absolute percentage error (MAPE) were used to evaluate the performance of the proposed method. A comparison study was performed, and the reported results demonstrate the efficiency of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_43-Solar_Energy_Forecasting_Based_on_Complex_Valued_Auto_encoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification with K-Nearest Neighbors Algorithm: Comparative Analysis between the Manual and Automatic Methods for K-Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140444</link>
        <id>10.14569/IJACSA.2023.0140444</id>
        <doi>10.14569/IJACSA.2023.0140444</doi>
        <lastModDate>2023-04-29T12:38:59.9170000+00:00</lastModDate>
        
        <creator>Tsvetelina Mladenova</creator>
        
        <creator>Irena Valova</creator>
        
        <subject>Machine Learning; KNN; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Machine learning and its algorithms have been the subject of many and varied studies alongside the development of artificial intelligence in recent years. One of the popular and widely used classification algorithms is the nearest neighbors algorithm, in particular k-nearest neighbors (KNN). This algorithm has three important steps: calculation of distances, selection of the number of neighbors, and the classification itself. The choice of the value of the k parameter determines the number of neighbors and has a significant impact on the efficiency of the created model. This article describes a study of the influence of the way the k parameter is chosen, manually or automatically. The data sets used for the study were selected to be as close as possible in their features to the data generated and used by small businesses: heterogeneous, unbalanced, with relatively small volumes and small training sets. From the obtained results, it can be concluded that automatic determination of the value of k can give results close to the optimal ones. Deviations are observed in the accuracy rate and in the behavior of well-known KNN modifications with increasing neighborhood size for some of the training data sets tested, but one cannot expect that the same model&#39;s parameter values (e.g. for k) will be optimally applicable to all data sets.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_44-Classification_with_K_Nearest_Neighbors_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT-based Framework for Detecting Heart Conditions using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140442</link>
        <id>10.14569/IJACSA.2023.0140442</id>
        <doi>10.14569/IJACSA.2023.0140442</doi>
        <lastModDate>2023-04-29T12:38:59.8830000+00:00</lastModDate>
        
        <creator>Mona Alnaggar</creator>
        
        <creator>Mohamed Handosa</creator>
        
        <creator>T. Medhat</creator>
        
        <creator>M. Z. Rashad</creator>
        
        <subject>Medical healthcare; medical support system; real monitoring; Internet of Things (IoT); ECG signals; Gridsearch; hyperparameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Many diseases may be preventable if they can be analyzed or predicted from patient historical and family data. Predicting a diagnosis depends on the gathered clinical and physiological data of patients. The more clinical and medical healthcare data is collected, the more knowledge a medical support system can provide. Hence, real-time monitoring of clinical and healthcare data for patients, based on Internet of Things (IoT) technologies, is the trend of this decade. IoT models facilitate human life by easily collecting clinical data remotely for recognizing diseases that are easily treatable if diagnosed early. This paper proposes a framework consisting of two models: (i) a heart attack detection model (HADM); and (ii) an electrocardiogram (ECG) heartbeat multi-class classification model (ECG-HMCM). Grid search is used for hyperparameter optimization of different machine learning (ML) techniques. The dataset used in HADM consists of 1190 patients and 14 features. As arrhythmia detection is the foundation of diagnosing cardiovascular disease, we propose an ECG heartbeat multi-class classification model using the MIT-BIH Arrhythmia and PTB Diagnostic ECG signal datasets, which contain five categories with 109,446 samples. The K-Nearest Neighbor (KNN) technique is applied to build ECG-HMCM, in addition to the use of the grid search algorithm for hyperparameter optimization, aiming to improve the classification accuracy, which reached 97.5%. The proposed framework aims to facilitate human life by easily collecting clinical data remotely. The outcomes of the experiments show that the suggested framework works well in a practical setting.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_42-An_IoT_based_Framework_for_Detecting_Heart_Conditions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of CNN for Plant Identification using UAV Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140441</link>
        <id>10.14569/IJACSA.2023.0140441</id>
        <doi>10.14569/IJACSA.2023.0140441</doi>
        <lastModDate>2023-04-29T12:38:59.8670000+00:00</lastModDate>
        
        <creator>Mohd Anul Haq</creator>
        
        <creator>Ahsan Ahmed</creator>
        
        <creator>Jayadev Gyani</creator>
        
        <subject>Convolutional Neural Networks (CNN); Machine Learning (ML) algorithms; plant image identification; plant image dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Plants are the world&#39;s most significant resource since they are the only natural source of oxygen. Additionally, plants are considered crucial since they are the major source of energy for humanity and have nutritional, therapeutic, and other benefits. Image identification has become more prominent in this technology-driven world, where many innovations are happening in this sphere. Image processing techniques are increasingly being used by researchers to identify plants. The capacity of Convolutional Neural Networks (CNN) to transfer weights learned on huge standard datasets to tasks with smaller collections or more particular data has improved over time. Several applications have been built for image identification using deep learning and Machine Learning (ML) algorithms, and plant image identification is a prominent example. A plant image dataset of about 300 images, collected by mobile phone and camera from different places in natural scenes and covering nine plant species, is deployed for training. A five-layered convolutional neural network (CNN) is applied for large-scale plant classification in a natural environment. The proposed work claims higher accuracy in plant identification based on experimental data. The model achieves the highest recognition rate of 96% on the NU108 dataset, and UAV images of NU101 achieve an accuracy of 97.8%.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_41-Implementation_of_CNN_for_Plant_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multifaceted Sentiment Detection System (MSDS) to Avoid Dropout in Virtual Learning Environment using Multi-class Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140440</link>
        <id>10.14569/IJACSA.2023.0140440</id>
        <doi>10.14569/IJACSA.2023.0140440</doi>
        <lastModDate>2023-04-29T12:38:59.8370000+00:00</lastModDate>
        
        <creator>Ananthi Claral Mary. T</creator>
        
        <creator>Arul Leena Rose. P. J</creator>
        
        <subject>Sentiment analysis; opinions; TF-IDF; n-gram; virtual learning; machine learning; NLTK; text pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Sentiment analysis with machine learning plays a vital role in decision making at Higher Educational Institutions (HEI). Technology-enabled interactions can only be successful when a strong student-teacher link is established and the emotions of students are clearly comprehended. This paper proposes a Multifaceted Sentiment Detection System (MSDS) for detecting the sentiments of higher education students participating in virtual learning and for classifying the comments posted by them using Machine Learning (ML) algorithms. The present research evaluated a total of n=1590 student comments across three specific multifaceted characteristics, each providing 530 comments, to perform Sentiment Analysis (SA) for monitoring sentiments and opinions that facilitate predicting dropout in a virtual learning environment (VLE). This begins with phrase extraction; then data pre-processing techniques, namely removal of digits, punctuation marks, and stop-words, spelling correction, tokenization, lemmatization, n-grams, and POS (Part of Speech) tagging, are applied. Texts are vectorized using two feature extraction techniques, count vectorization and TF-IDF, and classified with four multi-class supervised ML techniques, namely Random Forest, Linear SVC, Multinomial Naive Bayes, and Logistic Regression, for multifaceted sentiment classification. Analyzing students’ feedback using sentiment analysis techniques classifies their positive, negative, or even more refined emotions, which enables dropout prediction. Experimental results reveal that the highest mean accuracy results for device efficiency, cognitive behavior, and technological expertise with cloud learning platform usage were achieved by Logistic Regression with 98.49%, Linear SVC with 93.58%, and Linear SVC with 92.08%, respectively. Practically, the results confirm the feasibility of detecting students’ multifaceted behavioral patterns and risk of dropout in a VLE.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_40-Multifaceted_Sentiment_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Single-valued Pentagonal Neutrosophic Geometric Programming Approach to Optimize Decision Maker’s Satisfaction Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140439</link>
        <id>10.14569/IJACSA.2023.0140439</id>
        <doi>10.14569/IJACSA.2023.0140439</doi>
        <lastModDate>2023-04-29T12:38:59.8200000+00:00</lastModDate>
        
        <creator>Satyabrata Nath</creator>
        
        <creator>Purnendu Das</creator>
        
        <creator>Pradip Debnath</creator>
        
        <subject>Decision making; pentagonal neutrosophic numbers; single-valued neutrosophic geometric programming; multi-parametric programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Achieving the desired level of satisfaction for a decision-maker in any decision-making scenario is considered a challenging endeavor because minor modifications in the process might lead to incorrect findings and inaccurate decisions. In order to maximize the decision-maker’s satisfaction, this paper proposes a Single-valued Neutrosophic Geometric Programming model based on pentagonal fuzzy numbers. The decision-maker is typically assumed to be certain of the parameters, but in reality this is not the case; hence, the parameters are presented as neutrosophic fuzzy values. With this strategy, the decision-maker is able to achieve varying levels of satisfaction and dissatisfaction for each constraint, and even complete satisfaction for certain constraints. Here the decision-maker aims to achieve the maximum level of satisfaction while maintaining the level of hesitation and minimizing dissatisfaction in order to retain an optimum solution. Furthermore, transforming the objective function into a constraint adds one more layer to the N-dimensional multi-parameters α, β, and γ. The advantages of this multi-parametrized proposed method over the existing ones are proven using numerical examples.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_39-A_Single_valued_Pentagonal_Neutrosophic_Geometric_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Fraud Transaction using Ripper Algorithm Combines with Ensemble Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140438</link>
        <id>10.14569/IJACSA.2023.0140438</id>
        <doi>10.14569/IJACSA.2023.0140438</doi>
        <lastModDate>2023-04-29T12:38:59.7900000+00:00</lastModDate>
        
        <creator>Vo Hoang Khang</creator>
        
        <creator>Cao Tung Anh</creator>
        
        <creator>Nguyen Dinh Thuan</creator>
        
        <subject>Financial fraud; data mining; credit card fraud; transaction; ensemble methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In the context of the 4.0 technology revolution, which is developing and being applied strongly in many fields, among which the banking sector is considered a leading one, the application of algorithms to detect fraud is extremely necessary. In recent years, credit card transactions, including physical credit card payments and online payments, have become increasingly popular in many countries around the world. This convenient payment method attracts more and more criminals, especially credit card fraudsters. As a result, many banks around the world have developed fraud detection and prevention systems for each credit card transaction. Data mining is one of the techniques applied in these systems. This study uses the Ripper algorithm to detect fraudulent transactions on large data sets, obtaining results with accuracy, recall, and F1 measures of more than 97%. The research then combined the Ripper algorithm with Ensemble Learning models to detect fraudulent transactions, yielding results that are more than 99% reliable. Specifically, the model using the Ripper algorithm combined with the Gradient Boosting method improved predictive ability and obtained very reliable results. The use of algorithms combined with machine learning models is expected to be a new topic that will be widely applied to banks’ or organizations’ activities related to e-commerce.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_38-Detecting_Fraud_Transaction_using_Ripper_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anchor-free Proposal Generation Network for Efficient Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140437</link>
        <id>10.14569/IJACSA.2023.0140437</id>
        <doi>10.14569/IJACSA.2023.0140437</doi>
        <lastModDate>2023-04-29T12:38:59.7730000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <subject>Object detection; deep learning; convolutional neural network; proposal generation network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Deep learning object detection methods are usually based on anchor-free or anchor-based scheme for extracting object proposals and one-stage or two-stage structure for producing final predictions. As each scheme or structure has its own strength and weakness, combining their strength in a unified framework is an interesting research topic. However, this topic has not attracted much attention in recent years. This paper presents a two-stage object detection method that utilizes an anchor-free scheme for generating object proposals in the initial stage. For proposal generation, this paper employs an efficient anchor-free network for predicting object corners and assigns object proposals based on detected corners. For object prediction, an efficient detection network is designed to enhance both detection accuracy and speed. The detection network includes a lightweight binary classification subnetwork for removing most false positive object candidates and a light-head detection subnetwork for generating final predictions. Experimental results on the MS-COCO dataset demonstrate that the proposed method outperforms both anchor-free and two-stage object detection baselines in terms of detection performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_37-Anchor_free_Proposal_Generation_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Image Authentication Algorithm using Redundant Wavelet Transform Based Sift Descriptors and Complex Zernike Moments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140436</link>
        <id>10.14569/IJACSA.2023.0140436</id>
        <doi>10.14569/IJACSA.2023.0140436</doi>
        <lastModDate>2023-04-29T12:38:59.7600000+00:00</lastModDate>
        
        <creator>Pooja Vijayakumaran Kallath</creator>
        
        <creator>Kondaka Lakshmisudha</creator>
        
        <subject>Forgery detection; scale invariant feature transform; key point operation; block matching; agglomerative hierarchical clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Due to advanced multimedia editing tools supported by sophisticated hardware, the creation of image/video manipulations for malicious purposes is increasing, and such manipulations are almost impossible to detect manually. Moreover, to conceal the traces, different post-processing operations are performed. Therefore, authenticity is a growing concern and is important for distinguishing original from forged images. One of the most popular image manipulations is copy-move forgery, in which one or more regions in the image are duplicated to create a malicious effect within an image. The work in this article presents a redundant wavelet transform based complex Zernike moment and Scale Invariant Feature Transform (SIFT) keypoint matching technique for detecting copy-move image forgery operations. SIFT is robust against scale and rotation and identifies similarity using an exhaustive search of SIFT features. After extracting SIFT keypoint features, agglomerative hierarchical clustering is employed for grouping, and a keypoint matching operation is performed. Finally, a block matching operation is evaluated, and forged regions in the manipulated images are marked and displayed. This work also presents optimized SIFT keypoint feature computations resulting in lower computation time, often one of the requirements for real-time deployment. The proposed algorithm’s performance is evaluated using Precision, Recall, F-Measure, and average detection accuracy on the popular and publicly available MICC-220 database. The proposed technique demonstrates improved speed-up and detection rate compared to existing approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_36-Optimized_Image_Authentication_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hand Gesture Recognition Based on Various Deep Learning YOLO Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140435</link>
        <id>10.14569/IJACSA.2023.0140435</id>
        <doi>10.14569/IJACSA.2023.0140435</doi>
        <lastModDate>2023-04-29T12:38:59.7430000+00:00</lastModDate>
        
        <creator>Soukaina Chraa Mesbahi</creator>
        
        <creator>Mohamed Adnane Mahraz</creator>
        
        <creator>Jamal Riffi</creator>
        
        <creator>Hamid Tairi</creator>
        
        <subject>Neural network; deep learning; YOLO; object detection; hand gesture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Various sign languages are used by deaf or hard-of-hearing people worldwide to interact with others more effectively; consequently, the automatic translation of sign language is expressive and important. Significant improvements in computer vision have been made recently, notably in tasks based on object detection using deep learning. By locating objects in visual photos or videos, current state-of-the-art one-stage object detection approaches provide exceptional detection accuracy. With the help of messaging or video calling, this study suggests a technique to overcome these obstacles and enhance communication for such persons, regardless of their disability. To recognize gestures and classes, we provide an enhanced model based on YOLO (You Only Look Once) V3, V4, V4-tiny, and V5. The dataset is clustered using the suggested algorithm, requiring only manual annotation of a reduced number of classes and analysis of patterns that aid in target prediction. According to experimental results, the suggested method outperforms existing object detection approaches based on the YOLO model.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_35-Hand_Gesture_Recognition_Based_on_Various_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Models for Crime Intention Detection Using Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140434</link>
        <id>10.14569/IJACSA.2023.0140434</id>
        <doi>10.14569/IJACSA.2023.0140434</doi>
        <lastModDate>2023-04-29T12:38:59.7130000+00:00</lastModDate>
        
        <creator>Abdirahman Osman Hashi</creator>
        
        <creator>Abdullahi Ahmed Abdirahman</creator>
        
        <creator>Mohamed Abdirahman Elmi</creator>
        
        <creator>Octavio Ernest Romo Rodriguez</creator>
        
        <subject>Object detection; deep learning; crime scenes; video surveillance; convolutional neural network; YOLOv6</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The majority of visual-based surveillance applications and security systems heavily rely on object detection, which serves as a critical module. In the context of crime scene analysis, images and videos play an essential role in capturing visual documentation of a particular scene. By detecting objects associated with a specific crime, police officers are able to reconstruct a scene for subsequent analysis. Nevertheless, the task of identifying objects of interest can be highly arduous for law enforcement agencies, mainly because of the massive amount of data that must be processed. Hence, the main objective of this paper is to propose a DL-based model for detecting tracked objects such as handheld firearms and informing the authorities about the threat before the incident happens. We applied VGG-19, ResNet, and GoogleNet as our deep learning models. The experimental results show that ResNet50 achieved the highest average accuracy of 0.92, compared to VGG19 and GoogleNet, which achieved 0.91 and 0.89, respectively. Also, YOLOv6 achieved the highest mAP and inference speed compared to Faster R-CNN.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_34-Deep_Learning_Models_for_Crime_Intention_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Analytics for Accurate Person Recognition: A Multimodal Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140433</link>
        <id>10.14569/IJACSA.2023.0140433</id>
        <doi>10.14569/IJACSA.2023.0140433</doi>
        <lastModDate>2023-04-29T12:38:59.6800000+00:00</lastModDate>
        
        <creator>Suchetha N V</creator>
        
        <creator>Sharmila Kumari M</creator>
        
        <subject>Unimodal; Multi-modal; LBP; LTP; intra-class; inter-class; spoofing attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Securing resources is one of the most challenging tasks in the digital era. Traditionally, password and ID card systems were used to provide security. Passwords and ID cards can be stolen or hacked; to overcome this drawback, biometric systems are used to authenticate the user before granting access to data or resources. A biometric system uses the physical and behavioral characteristics of the user. Biological characteristics of a person, such as the face, fingerprint, iris, palm print, voice, and hand geometry, cannot be stolen and misused. Even though a unimodal biometric system is more secure compared to the traditional approach, it is not able to handle intra-class and inter-class variations, noisy data, and spoofing attacks. These problems can be solved using multimodal biometrics. In this paper, we discuss unimodal biometric systems using the Local Binary Pattern (LBP) and Local Ternary Pattern (LTP). We propose a feature-level fusion of face and fingerprint biometric traits using LTP. The implementation of the introduced system is compared with the unimodal LBP and LTP face and fingerprint systems. The system is tested on the ORL, UMIST, and VISA face datasets and the FVC fingerprint dataset. Experimental results show that the multimodal biometric system using LTP gives better accuracy than the unimodal biometric system.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_33-Texture_Analytics_for_Accurate_Person_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reversible De-identification of Specific Regions in Biomedical Images and Secured Storage by Randomized Joint Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140432</link>
        <id>10.14569/IJACSA.2023.0140432</id>
        <doi>10.14569/IJACSA.2023.0140432</doi>
        <lastModDate>2023-04-29T12:38:59.6670000+00:00</lastModDate>
        
        <creator>Prabhavathi K</creator>
        
        <creator>Anandaraju M. B</creator>
        
        <subject>Region identification mask; modular matrix inverse; selective image encryption; image de-identification; randomized joint encryption; image authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In many circumstances, de-identification of a specific region of a biomedical image is necessary. De-identification is used to hide the subject’s identity or to prevent the display of the objectionable or offensive region(s) of the image. The concerned region can be blurred (de-identified) by using a suitable image processing technique guided by the region-defining mask. The proposed method provides lossless blurring, which means the original image can be recovered fully with zero loss. The blurred image and the region-defining mask, along with the digital signature, are jointly encrypted to form the composite cipher matrix, and it is stored in the cloud for further distribution. The composite cipher matrix is decrypted to recover the blurred image by the conventional end user. Further, using the deblur key, the original image can be recovered with zero loss by the fully authorized special end users. On decryption, the digital signature is available for both types of end users. The proposed method uses randomized joint encryption using integer matrix keys in a finite field. The experimental results show that the proposed method achieves a reduction in the average execution time of encryption by 30 to 40 percent compared to its nearest competitor. Additionally, the proposed scheme achieves very nearly ideal performance with reference to the correlation coefficient, entropy, pixel change rate, and structural similarity index. Overall, the proposed algorithm performs substantially better than the other similar existing schemes for large-sized images.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_32-Reversible_De_identification_of_Specific_Regions_in_Biomedical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Deep CNN-RNN Approach for Real-time Impulsive Sound Detection to Detect Dangerous Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140431</link>
        <id>10.14569/IJACSA.2023.0140431</id>
        <doi>10.14569/IJACSA.2023.0140431</doi>
        <lastModDate>2023-04-29T12:38:59.6330000+00:00</lastModDate>
        
        <creator>Nurzhigit Smailov</creator>
        
        <creator>Zhandos Dosbayev</creator>
        
        <creator>Nurzhan Omarov</creator>
        
        <creator>Bibigul Sadykova</creator>
        
        <creator>Maigul Zhekambayeva</creator>
        
        <creator>Dusmat Zhamangarin</creator>
        
        <creator>Assem Ayapbergenova</creator>
        
        <subject>CNN; RNN; deep learning; impulsive sound; dangerous sound; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In this research paper, we presented a novel approach to detect impulsive sounds in real-time using a combination of Deep CNN and RNN architectures. The proposed approach was evaluated using our collected dataset of impulsive sounds, and the results showed that it outperformed traditional audio signal processing methods in terms of accuracy and F1-score. The proposed approach has several advantages over traditional methods, including the ability to handle complex audio patterns, detect impulsive sounds in real-time, and improve its performance with a large dataset of labeled impulsive sounds. However, there are some limitations to the proposed approach, including the requirement for a large amount of labeled data to train effectively, environmental factors that may impact the accuracy of the detection, and high computational requirements. Overall, the proposed approach demonstrates the effectiveness of using a combination of Deep CNN and RNN architectures for impulsive sound detection, with potential applications in various fields such as public safety, industrial settings, and home security systems. The proposed approach is a significant step towards developing automated systems for detecting dangerous events and improving public safety.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_31-A_Novel_Deep_CNN_RNN_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Rank-Based Ensemble Model for Accurate Diagnosis of Osteoporosis in Knee Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140430</link>
        <id>10.14569/IJACSA.2023.0140430</id>
        <doi>10.14569/IJACSA.2023.0140430</doi>
        <lastModDate>2023-04-29T12:38:59.6170000+00:00</lastModDate>
        
        <creator>Saumya Kumar</creator>
        
        <creator>Puneet Goswami</creator>
        
        <creator>Shivani Batra</creator>
        
        <subject>Convolutional Neural Network; diagnosis; knee; osteoporosis; transfer learning models; X-rays</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The main factor in fractures among seniors and women post-menopausal is osteoporosis, which decreases the density of bones. Finding a low-cost diagnostic technology to identify osteoporosis in its initial stages is imperative considering the substantial expenses of diagnosis and therapy. The simplest and most widely used imaging method for detecting bone diseases is X-ray radiography, however, it is problematic to manually examine X-rays for osteoporosis as well as to identify the essential components and choose elevated classifiers. To categorize x-ray pictures of knee joints into normal, osteopenia, and osteoporosis condition categories, authors present a process in this investigation that uses three convolutional neural networks (CNN) architectures, i.e., Inception v3, Xception, and ResNet 18, to create an ensemble-based classifier model. The suggested ensemble approach employs a fuzzy rank-based unification of classifiers by taking into account two distinct parameters on the decision scores produced by the aforementioned base classifiers. Contrary to the straightforward fusion strategies that have been mentioned in the literature, the suggested ensemble methodology finalizes predictions on the test specimens by considering the confidence in the recommendations of the base learners. A 5-fold cross-validation approach has been employed to assess the developed framework using a benchmark dataset that has been made accessible to the general population. The suggested model yields an accuracy rate of 93.5% with a loss of 0.082. Further, the AUC is observed to be 98.1, 97.9 and 97.3 for normal, osteopenia and osteoporosis, respectively. The results demonstrate the model’s usefulness by outperforming various state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_30-Fuzzy_Rank_Based_Ensemble_Model_for_Accurate_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opposition Learning Based Improved Bee Colony Optimization (OLIBCO) Algorithm for Data Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140429</link>
        <id>10.14569/IJACSA.2023.0140429</id>
        <doi>10.14569/IJACSA.2023.0140429</doi>
        <lastModDate>2023-04-29T12:38:59.5870000+00:00</lastModDate>
        
        <creator>Srikanta Kumar Sahoo</creator>
        
        <creator>Priyabrata Pattanaik</creator>
        
        <creator>Mihir Narayan Mohanty</creator>
        
        <creator>Dilip Kumar Mishra</creator>
        
        <subject>Bee colony optimization; BCO based clustering; data clustering; partitional clustering; meta-heuristic search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Clustering of data plays a major role in data mining, both in recent research and in data engineering. It supports classification and regression types of problems, for which optimized clusters need to be obtained. Partitional clustering and meta-heuristic search techniques are two helpful tools for this task. However, the convergence rate is one of the important factors during optimization. In this paper, the authors take a data clustering approach with an improved bee colony algorithm and opposition-based learning to improve the rate of convergence and the quality of clustering. It introduces opposite bees, created using opposition-based learning, to achieve better exploration. These opposite bees occupy exactly the opposite positions of the mainstream bees in the solution space. Both the mainstream and opposite bees explore the solution space together with the help of the Bee Colony Optimization based clustering algorithm. This boosts the explorative power of the algorithm and hence the convergence rate. The algorithm uses a steady-state selection procedure as a tool for exploration. Crossover and mutation operations are used to obtain balanced exploitation, which enables the algorithm to avoid getting stuck in local optima. To justify the effectiveness of the algorithm, it is verified with open datasets from the UCI machine learning repository as benchmarks. The simulation results show that it performs better than some benchmark as well as recently proposed algorithms in terms of convergence rate, clustering quality, and exploration and exploitation capability.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_29-Opposition_Learning_Based_Improved_Bee_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Computer Vision-enabled Augmented Reality Games to Increase Motivation for Sports</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140428</link>
        <id>10.14569/IJACSA.2023.0140428</id>
        <doi>10.14569/IJACSA.2023.0140428</doi>
        <lastModDate>2023-04-29T12:38:59.5700000+00:00</lastModDate>
        
        <creator>Bauyrzhan Doskarayev</creator>
        
        <creator>Nurlan Omarov</creator>
        
        <creator>Bakhytzhan Omarov</creator>
        
        <creator>Zhuldyz Ismagulova</creator>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Elmira Nurlybaeva</creator>
        
        <creator>Galiya Kasimova</creator>
        
        <subject>Augmented reality; computer vision; action detection; action classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>This research paper presents the development of computer vision-enabled augmented reality games based on action detection to increase motivation for sports. With the increasing popularity of digital games, physical activity and sports participation have been declining, especially among the younger generation. To address this issue, we developed a series of augmented reality games that require players to perform physical actions to progress and succeed in the game. These games were developed using computer vision technology to detect the players&#39; movements and provide real-time feedback, enhancing the gaming experience and promoting physical activity. The results of our user study showed that participants who played the augmented reality games reported higher levels of motivation to engage in physical activity and sports. The findings suggest that computer vision-enabled augmented reality games can be an effective tool to promote physical activity and sports participation, especially among younger generations.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_28-Development_of_Computer_Vision_enabled_Augmented_Reality_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Customer Retention Prediction Model of VOD Platform Based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140427</link>
        <id>10.14569/IJACSA.2023.0140427</id>
        <doi>10.14569/IJACSA.2023.0140427</doi>
        <lastModDate>2023-04-29T12:38:59.5400000+00:00</lastModDate>
        
        <creator>Quansheng Zhao</creator>
        
        <creator>Zhijie Zhao</creator>
        
        <creator>Liu Yang</creator>
        
        <creator>Lan Hong</creator>
        
        <creator>Wu Han</creator>
        
        <subject>Video-on-demand platform; Customer Retention Forecast; RFM Model; Machine Learning; SHAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Advanced wireless technology and smart mobile devices allow users to watch Internet video from almost anywhere. The major VOD platforms are competing with each other for customers, slowly shifting from a &quot;product-centric&quot; strategic goal to a &quot;customer-centric&quot; one. At present, existing research is limited to platform business model and development strategy as well as user behavior research, but there is less research on customer retention prediction. In order to effectively solve the customer retention prediction problem, this study applies machine learning methods to video-on-demand platform customer retention prediction, improves the traditional RFM model to establish the RFLH theoretical model for video-on-demand platform customer retention prediction, and uses machine learning methods to predict the number of customer retention days. The Optuna algorithm is used to determine the model hyperparameters, and the SHAP framework is integrated to analyze the important factors affecting customer retention. The experimental results show that the comprehensive performance of the LightGBM model is better than other models. The total number of user logins in the past week, the length of video playback in the same day, and the time difference between the last login and the present are important features that affect customer retention prediction. This study can help companies develop effective customer management strategies to maximize potential customer acquisition and existing customer retention for maximum market advantage.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_27-Research_on_Customer_Retention_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Music Recommendation Based on Interest and Emotion: A Comparison of Multiple Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140426</link>
        <id>10.14569/IJACSA.2023.0140426</id>
        <doi>10.14569/IJACSA.2023.0140426</doi>
        <lastModDate>2023-04-29T12:38:59.5100000+00:00</lastModDate>
        
        <creator>Xiuli Yan</creator>
        
        <subject>Interest and emotion; recommendation algorithm; music; personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Recommendation algorithms can greatly improve the efficiency of information retrieval for users. This article briefly introduced recommendation algorithms based on association rules and algorithms based on interest and emotion analysis. After crawling music and comment data from the NetEase Cloud platform, a simulation experiment was conducted. Firstly, the performance of the Back-Propagation Neural Network (BPNN) in the interest and emotion-based algorithm for recommending music was tested, and then the impact of the proportion of emotion weight between comments and music on the emotion analysis-based algorithm was tested. Finally, the three recommendation algorithms based on association rules, user ratings, and interest and emotion analysis were compared. The results showed that when the BPNN used the dominant interest and emotion and secondary interest and emotion as judgment criteria, the accuracy of interest and emotion recognition for music and comments was higher. When the proportion of interest and emotion weight between comments and music was 6:4, the interest and emotion analysis-based recommendation algorithm had the highest accuracy. The interest and emotion-based recommendation algorithm had higher recommendation accuracy than the association rule-based and user rating-based algorithms, and could provide users with more personalized and emotional music recommendations.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_26-Personalized_Music_Recommendation_Based_on_Interest_and_Emotion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iris Recognition Through Edge Detection Methods: Application in Flight Simulator User Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140425</link>
        <id>10.14569/IJACSA.2023.0140425</id>
        <doi>10.14569/IJACSA.2023.0140425</doi>
        <lastModDate>2023-04-29T12:38:59.4930000+00:00</lastModDate>
        
        <creator>Sundas Naqeeb Khan</creator>
        
        <creator>Samra Urooj Khan</creator>
        
        <creator>Onyeka Josephine Nwobodo</creator>
        
        <creator>Krzysztof Adam Cyran</creator>
        
        <subject>Identification; authentication; detection; canny; Sobel; Prewitt; PSNR; SNR; SSIM; MSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>To meet the increasing security requirements of authorized users of flight simulators, personal identification is becoming more and more important, and iris recognition stands out as one of the most accurate biometric methods in use today. Iris recognition is performed through different edge detection methods, so it is important to understand the edge detection methods in use these days. Biomedical research shows that irises are as distinctive as fingerprints or other recognition patterns. Furthermore, because the iris is a visible organ, its exterior appearance can be examined remotely using a machine vision system. The main part of this paper addresses the selection of the edge detection method that gives the best recognition results. Three edge detection methods, namely Canny, Sobel and Prewitt, are applied to images of the eye (iris) and their comparative analysis is discussed. These methods are implemented in MATLAB, and the datasets used are CASIA and MMU. The results indicate that the Canny edge detection method performs best compared to Sobel and Prewitt. Image quality is a key requirement in image-based object recognition, so this paper also provides a quality evaluation of the images using metrics such as PSNR, SNR, MSE and SSIM; among these, SSIM is considered the best image quality metric.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_25-Iris_Recognition_Through_Edge_Detection_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Network Intrusion Detection System Based on Semi-Supervised Approach for IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140424</link>
        <id>10.14569/IJACSA.2023.0140424</id>
        <doi>10.14569/IJACSA.2023.0140424</doi>
        <lastModDate>2023-04-29T12:38:59.4630000+00:00</lastModDate>
        
        <creator>Durga Bhavani A</creator>
        
        <creator>Neha Mangla</creator>
        
        <subject>IoT; security; intrusion detection system; semi-supervised; autoencoder; clustering; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>An intrusion detection system (IDS) is one of the most effective ways to secure a network and prevent unauthorized access and security attacks. But due to the lack of adequately labeled network traffic data, researchers have proposed several feature representations models over the past three years. However, these models do not account for feature generalization errors when learning semantic similarity from the data distribution and may degrade the performance of the predictive IDS model. In order to improve the capabilities of IDS in the era of Big Data, there is a constant need to extract the most important features from large-scale and balanced network traffic data. This paper proposes a semi-supervised IDS model that leverages the power of untrained autoencoders to learn latent feature representations from a distribution of input data samples. Further, distance function-based clustering is used to find more compact code vectors to capture the semantic similarity between learned feature sets to minimize reconstruction loss. The proposed scheme provides an optimal feature vector and reduces the dimensionality of features, reducing memory requirements significantly. Multiple test cases on the IoT dataset MQTTIOT2020 are conducted to demonstrate the potential of the proposed model. Supervised machine learning classifiers are implemented using a proposed feature representation mechanism and are compared with shallow classifiers. Finally, the comparative evaluation confirms the efficacy of the proposed model with low false positive rates, indicating that the proposed feature representation scheme positively impacts IDS performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_24-A_Novel_Network_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Event Feature Pre-training Model Based on Public Opinion Evolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140423</link>
        <id>10.14569/IJACSA.2023.0140423</id>
        <doi>10.14569/IJACSA.2023.0140423</doi>
        <lastModDate>2023-04-29T12:38:59.4470000+00:00</lastModDate>
        
        <creator>WANG Nan</creator>
        
        <creator>TAN Shu-Ru</creator>
        
        <creator>XIE Xiao-Lan</creator>
        
        <creator>LI Hai-Rong</creator>
        
        <creator>JIANG Jia-Hui</creator>
        
        <subject>Event vectorization; NL2ER-transformer model; public opinion reversal prediction; evolution of public opinion event; multi label semi supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The comments in the evolution of network public opinion events not only reflect the attitude of netizens towards the event itself, but are also a key basis for mastering the dynamics of public opinion. Based on the comment data generated during event evolution, an event feature vector pre-training model, NL2ER-Transformer, is constructed to realize real-time automatic extraction of event features. Firstly, a semi-supervised multi-label curriculum learning model is proposed to generate comment words, event word vectors, event words, and event sentences, so that a public opinion event is mapped into a sequence similar to vectorized natural language. Secondly, based on the Transformer structure, a training method is proposed to simulate the evolution process of events, so that the event vector generation model can learn the laws of evolution and the characteristics of reversal events. Finally, the event vectors generated by the presented NL2ER-Transformer model are compared with event vectors generated by current mainstream models such as XLNet and RoBERTa. This paper tests the pre-trained NL2ER-Transformer model and three pre-trained benchmark models on four downstream classification models. The experimental results show that downstream models trained on vectors generated by NL2ER-Transformer achieve accuracy, recall, and F1 values 16.66%, 44.44%, and 19% higher, respectively, than the best downstream model trained on vectors from the other pre-trained benchmark models. At the same time, in the evolutionary capability analysis test, only four events show partial errors. In terms of semi-supervised performance, the proposed semi-supervised multi-label curriculum learning model outperforms mainstream models on four indicators by 6%, 23%, 8%, and 15%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_23-Event_Feature_Pre_training_Model_Based_on_Public_Opinion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Revised Heuristic Knowledge in Average-based Interval for Fuzzy Time Series Forecasting of Tuberculosis Cases in Sabah</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140422</link>
        <id>10.14569/IJACSA.2023.0140422</id>
        <doi>10.14569/IJACSA.2023.0140422</doi>
        <lastModDate>2023-04-29T12:38:59.4170000+00:00</lastModDate>
        
        <creator>Suriana Lasaraiya</creator>
        
        <creator>Suzelawati Zenian</creator>
        
        <creator>Risman Mat Hasim</creator>
        
        <creator>Azmirul Ashaari</creator>
        
        <subject>Fuzzy time series; forecasting; length of interval; average-based interval; heuristic knowledge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Fuzzy time series forecasting is one method used for forecasting in certain real-world problems, and research on it has increased owing to its capability in dealing with vagueness and uncertainty. In this paper, we implement revised heuristic knowledge in the basic average-based interval and show that the resulting models forecast better than the basic one. We use three different lengths of interval (size 5, size 10, and size 20) to compare three models: the average-based interval, the average-based interval with heuristic knowledge, and the average-based interval with revised heuristic knowledge. These models are applied to forecast the number of tuberculosis cases reported monthly in Sabah from January 2012 until December 2019, and a few numerical examples are shown. Performance is evaluated by comparing the values of Mean Square Error (MSE) and Root Mean Square Error (RMSE).</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_22-Implementation_of_Revised_Heuristic_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence Based Modelling for Predicting CO2 Emission for Climate Change Mitigation in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140421</link>
        <id>10.14569/IJACSA.2023.0140421</id>
        <doi>10.14569/IJACSA.2023.0140421</doi>
        <lastModDate>2023-04-29T12:38:59.4000000+00:00</lastModDate>
        
        <creator>Sultan Alamri</creator>
        
        <creator>Shahnawaz Khan</creator>
        
        <subject>Exponential smoothing; transformers; temporal convolutional network; neural basis expansion analysis; climate change</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Climate change, such as global warming, poses a barrier to attaining the sustainable development goals. Emissions of greenhouse gases (primarily carbon dioxide, CO2) are the root cause of global warming. This research analyses and investigates CO2 emissions and attempts to develop an optimal model to forecast them. Several machine learning and statistical modeling techniques have been implemented and evaluated to explore the patterns and trends of CO2 emissions and to develop an optimal model for forecasting future emissions. The implemented methods include Exponential Smoothing, Transformers, Temporal Convolutional Network (TCN), and neural basis expansion analysis for interpretable time series. The data for training these models have been collected and synthesized from various sources using a web crawler. The performance of the models has been evaluated using various measurement metrics such as RMSE, R2 score, MAE, MAPE and OPE. The N-BEATS model demonstrated overall better performance for forecasting CO2 emissions in Saudi Arabia than the other models. In addition, this paper provides recommendations and strategies for mitigating climate change by reducing CO2 emissions.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_21-Artificial_Intelligence_Based_Modelling_for_Predicting_CO2_Emission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scrum: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140420</link>
        <id>10.14569/IJACSA.2023.0140420</id>
        <doi>10.14569/IJACSA.2023.0140420</doi>
        <lastModDate>2023-04-29T12:38:59.3830000+00:00</lastModDate>
        
        <creator>Adrielle Cristina Sassa</creator>
        
        <creator>Isabela Alves de Almeida</creator>
        
        <creator>T&#225;bata Nakagomi Fernandes Pereira</creator>
        
        <creator>Milena Silva de Oliveira</creator>
        
        <subject>Project management; agile methods; agile manifesto; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>This study presents a Systematic Literature Review on an agile project management tool. The study offers a brief comparison between traditional and agile project management methodologies; their respective concepts and characteristics are laid out to highlight and explain their main differences. The review includes quantitative and qualitative data showing the characteristics of the Scrum framework. This study highlights the importance of project management, given its emergence as a response to problems encountered in improperly conducted projects. Furthermore, this study provides relevant information for professionals in Industrial Engineering and computer science. The results allowed us to conclude that Scrum is an agile framework for empirically based project development; it was developed in the 1990s by Jeff Sutherland, and it is a flexible and adaptable methodology. Scrum research peaked in 2020 and continues to be studied, mainly in the field of computer science. Finally, Brazil is well-positioned in third place for works published.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_20-Scrum_A_Systematic_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SuffixAligner: A Python-based Aligner for Long Noisy Reads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140419</link>
        <id>10.14569/IJACSA.2023.0140419</id>
        <doi>10.14569/IJACSA.2023.0140419</doi>
        <lastModDate>2023-04-29T12:38:59.3530000+00:00</lastModDate>
        
        <creator>Zeinab Rabea</creator>
        
        <creator>Sara El-Metwally</creator>
        
        <creator>Samir Elmougy</creator>
        
        <creator>M. Z. Rashad</creator>
        
        <subject>Long reads sequencing; reads mapping; suffix array; alignment; seed extending; LF mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Third-generation sequencing technologies have revolutionized genomics research by generating long reads that resolve many computational challenges such as long genomic variations and repeats. Mapping a set of sequencing reads against a reference genome is the first step of many genomic data analysis pipelines. Many mapping/alignment tools have been introduced, each making different compromises between alignment accuracy and resource usage in terms of memory space and processor speed. SuffixAligner is a Python-based aligner for long noisy reads generated by third-generation sequencing machines. SuffixAligner follows the seed-extending approach and exploits the nature of the biological alphabet, which has a fixed size and a predefined lexical ordering, to construct a suffix array for indexing a reference genome. The suffix array is used to efficiently search the indexed reference and locate exactly matched seeds between the reads and the reference. The matched seeds are arranged into windows/clusters, and the ones with the maximum number of seeds are reported as candidate mapping positions. Using real data sets from third-generation sequencing experiments, we evaluated SuffixAligner against lordFAST, BWA, GEM3, and Minimap2; the results showed that SuffixAligner mapped more reads than the other tools. The source code of SuffixAligner is available at: https://github.com/ZeinabRabea/SuffixAligner.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_19-SuffixAligner_A_Python_based_Aligner.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges of Digital Twin Technologies Integration in Modular Construction: A Case from a Manufacturer’s Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140418</link>
        <id>10.14569/IJACSA.2023.0140418</id>
        <doi>10.14569/IJACSA.2023.0140418</doi>
        <lastModDate>2023-04-29T12:38:59.3370000+00:00</lastModDate>
        
        <creator>Laith Jamal Aldabbas</creator>
        
        <subject>Digital Twin; DTs; enabling technologies; digital twin model; applications; challenges; literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Digital twins (DTs) are automated models of physical objects that let one outline, test, deploy, monitor, and manage robotics in the real world. Cyber-physical system (CPS) data must be assembled from real-life processes to form a real-time monitoring cyber model and thereby produce a digital twin; modifications in the cyber model are then reflected in the real-life system for prediction or control. As a result of digitalization and progress in ICT, the manufacturing and aerospace industries now utilize digital twins. Nonetheless, the use of DTs in many production firms has not been researched extensively, and studies are especially sparse in construction, where structures are often built without superstructures, a practice that has sparked global concern. Herein, DT applications in the building/manufacturing sector and in various firms are reviewed first; thereafter, publications concerning DT applications in industry are examined through a systematic search via Scopus. The selected publications are assessed to investigate the potential of digital twins in Modular Integrated Construction (MiC), the restrictions of digital twins in MiC, the impact of digital twins on industry, and the time cost appropriate for model development. The analysis demonstrates that DT is most often considered in combination with other digital technologies. Moreover, a theoretical framework is formed for applying DT to module installation in MiC under the circumstances of Hong Kong, a typical high-density city. The implementation of digital twins in Modular Integrated Construction is expected to provide promising potential with significant benefits, such as improved logistics and manufacturing management by employing digital twins to track on-site progress during module installation.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_18-Challenges_of_Digital_Twin_Technologies_Integration_in_Modular_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Segmentation of Personal Credit using Recency, Frequency, Monetary (RFM) and K-means on Financial Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140417</link>
        <id>10.14569/IJACSA.2023.0140417</id>
        <doi>10.14569/IJACSA.2023.0140417</doi>
        <lastModDate>2023-04-29T12:38:59.3070000+00:00</lastModDate>
        
        <creator>Hafidh Rizkyanto</creator>
        
        <creator>Ford Lumban Gaol</creator>
        
        <subject>Credit; credit risk; recency; frequency; monetary; K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>This research focuses on building a segmentation model for credit customers to identify potentially defaulting customers based on their transaction history; currently, no segmentation is available for this possibility of payment failure. Credit scoring helps minimize credit risk when customers apply for credit, while the RFM (Recency, Frequency, Monetary) model scores each transaction variable of the customer&#39;s financial activity. K-means then segments the results of the RFM model scoring, which occurs in the middle of the customer&#39;s repayment schedule. The challenges are deciding which variables can be used in the RFM model, interpreting the clusters that are formed, and applying them to actual customers. The bank can identify the clusters whose customers have a high possibility of payment failure, so that it can take preventive actions and provide information to the collection system for making payment withdrawals or billing.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_17-Customer_Segmentation_of_Personal_Credit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Speaker Recognition for Degraded Human Voice using Modified-MFCC and LPC with CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140416</link>
        <id>10.14569/IJACSA.2023.0140416</id>
        <doi>10.14569/IJACSA.2023.0140416</doi>
        <lastModDate>2023-04-29T12:38:59.2730000+00:00</lastModDate>
        
        <creator>Amit Moondra</creator>
        
        <creator>Poonam Chahal</creator>
        
        <subject>Data science; artificial intelligence; MFCC; LPC; CNN; mel-spectrum; speaker recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>An economical speaker recognition solution for degraded human voice signals is still a challenge. This article covers the results of an experiment that aims to improve the feature extraction method for effective speaker identification from degraded audio signals with the help of data science. Every speaker&#39;s voice has distinctive characteristics; human ears can easily identify these characteristics and recognize the speaker from the audio, and the Mel-Frequency Cepstral Coefficient (MFCC) helps machines acquire the same ability. MFCC is extensively used for human voice feature extraction: it first divides the signal into frames and then finds the cepstral coefficients for each frame, converting the audio signal into numerical feature values that an Artificial Intelligence (AI) based speaker recognition system can use to recognize speakers efficiently. In our experiment we effectively combined MFCC and Linear Predictive Coding (LPC) for better speaker recognition accuracy. This article covers how audio features can be extracted effectively from degraded voice signals. We observed improved Equal Error Rate (EER) and True Match Rate (TMR) due to a high sampling rate and a low frequency range for the mel-scale triangular filter. The article also covers the effect of pre-emphasis on speaker recognition when the audio signal carries high background noise.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_16-Improved_Speaker_Recognition_for_Degraded_Human_Voice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Balance Optimizer: A New Adaptive Metaheuristic and its Application in Solving Optimization Problem in Finance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140415</link>
        <id>10.14569/IJACSA.2023.0140415</id>
        <doi>10.14569/IJACSA.2023.0140415</doi>
        <lastModDate>2023-04-29T12:38:59.2430000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Ashri Dinimaharawati</creator>
        
        <subject>Optimization; metaheuristic; adaptability; portfolio optimization; IDX30</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Adaptability is important in developing metaheuristic algorithms, especially in tackling stagnation. Unfortunately, almost no metaheuristic is equipped with an adaptive approach that changes its strategy when stagnation occurs during iteration. Based on this consideration, a new metaheuristic, called the adaptive balance optimizer (ABO), is proposed in this paper. ABO&#39;s unique strategy focuses on exploitation when improvement happens and switches to exploration during stagnation. ABO also uses a balanced strategy between exploration and exploitation by performing two sequential searches, one guided search and one random search, in whatever circumstance it faces. Moreover, ABO deploys both a strict acceptance approach and a non-strict acceptance approach. In this work, ABO is challenged to solve a set of 23 classic functions as a theoretical optimization problem and a portfolio optimization problem as a practical use case. In the portfolio optimization problem, ABO must optimize the quantities of ten stocks in the energy and mining sectors listed in the IDX30 index. In this evaluation, ABO is compared with five other metaheuristics: the marine predator algorithm (MPA), golden search optimizer (GSO), slime mold algorithm (SMA), northern goshawk optimizer (NGO), and zebra optimization algorithm (ZOA). The simulation results show that ABO outperforms MPA, GSO, SMA, NGO, and ZOA on 21, 18, 16, 11, and 8 of the 23 functions, respectively. Meanwhile, ABO is the third-best performer on the portfolio optimization problem.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_15-Adaptive_Balance_Optimizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EMOCASH: An Intelligent Virtual-Agent Based Multiplayer Online Serious Game for Promoting Money and Emotion Recognition Skills in Egyptian Children with Autism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140414</link>
        <id>10.14569/IJACSA.2023.0140414</id>
        <doi>10.14569/IJACSA.2023.0140414</doi>
        <lastModDate>2023-04-29T12:38:59.2270000+00:00</lastModDate>
        
        <creator>Hussein Karam Hussein Abd El-Sattar</creator>
        
        <subject>Autism; virtual agents; serious games; digital technology; AI; online gaming; usability and accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Autism, often known as &quot;autism spectrum disorders (ASD),&quot; is one of the most common developmental disabilities that affect how people learn, behave, communicate, and interact with others. Two crucial everyday tasks that people with ASD typically struggle with are managing finances and recognizing emotions. As the online gaming sector grows and develops, the question of why this type of media can&#39;t be used as a useful educational tool for those with ASD arises. This paper discusses this issue via a novel virtual agent-based multiplayer online serious game referred to as &quot;EMOCASH,&quot; which aims to improve these important tasks for Egyptian children with ASD and achieve transfer of acquired knowledge to real-world situations via a 3D virtual shop scenario that was designed using the Autism ASPECTSSTM Design Index. EMOCASH served as an instrument for investigating the following research question: What role does technology play in the education of those with ASD? Numerous sub-questions that were related to the primary question were also addressed. A variety of usability metrics were used to assess effectiveness, efficiency and satisfaction aspects.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_14-EMOCASH_An_Intelligent_Virtual_Agent_Based_Multiplayer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Plant Disease Classification and Adversarial Attack based CL-CondenseNetV2 and WT-MI-FGSM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140413</link>
        <id>10.14569/IJACSA.2023.0140413</id>
        <doi>10.14569/IJACSA.2023.0140413</doi>
        <lastModDate>2023-04-29T12:38:59.2130000+00:00</lastModDate>
        
        <creator>Yong Li</creator>
        
        <creator>Yufang Lu</creator>
        
        <subject>Adversarial examples; FGSM; plant diseases and pests; attention mechanism; CondenseNetV2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In recent years, deep learning has been increasingly applied to the detection of pests and diseases. Unfortunately, deep neural networks are particularly vulnerable to adversarial examples. Hence it is vital to explore the creation of highly aggressive adversarial examples to increase neural network robustness. This paper proposes a wavelet transform and histogram equalization-based adversarial attack algorithm: WT-MI-FGSM. In order to verify the performance of WT-MI-FGSM, we propose a plant pest and disease identification method based on the coordinate attention mechanism and CondenseNetV2: CL-CondenseNetV2. The accuracy of CL-CondenseNetV2 on the PlantVillage dataset is 99.45%, which indicates that the improved CondenseNetV2 model has significantly better classification performance. In adversarial example experiments using WT-MI-FGSM and CL-CondenseNetV2, the results show that when CL-CondenseNetV2 is attacked by the adversarial algorithm WT-MI-FGSM, the error rate reaches 89.8%, a higher attack success rate than existing adversarial attack algorithms. In addition, the accuracy of CL-CondenseNetV2 is improved to 99.71% by adding the adversarial examples generated by WT-MI-FGSM to the training set and performing adversarial training. The experiments demonstrate that the adversarial examples generated by WT-MI-FGSM can improve the model&#39;s performance.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_13-Plant_Disease_Classification_and_Adversarial_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Machine Learning-based Hybrid Intrusion Detection System and Intelligent Routing Algorithm for MPLS Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140412</link>
        <id>10.14569/IJACSA.2023.0140412</id>
        <doi>10.14569/IJACSA.2023.0140412</doi>
        <lastModDate>2023-04-29T12:38:59.1970000+00:00</lastModDate>
        
        <creator>Mohammad Azmi Ridwan</creator>
        
        <creator>Nurul Asyikin Mohamed Radzi</creator>
        
        <creator>Kaiyisah Hanis Mohd Azmi</creator>
        
        <creator>Fairuz Abdullah</creator>
        
        <creator>Wan Siti Halimatul Munirah Wan Ahmad</creator>
        
        <subject>Machine learning; intrusion detection system; routing algorithm; quality of service; communication system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Machine Learning (ML) is seen as a promising application that offers autonomous learning and provides optimized solutions to complex problems. The current Multiprotocol Label Switching (MPLS)-based communication system is packed with exponentially increasing applications and different Quality-of-Service (QoS) requirements. As the network grows more complex and congested, it becomes challenging to satisfy the QoS requirements in the MPLS network. This study proposes a hybrid ML-based intrusion detection system (ML-IDS) and ML-based intelligent routing algorithm (ML-RA) for MPLS networks. The research is divided into three parts: (1) dataset development, (2) algorithm development, and (3) algorithm performance evaluation. The dataset development for both algorithms is carried out via simulations in Graphical Network Simulator 3 (GNS3). The datasets are then fed into MATLAB to train ML classifiers and regression models to classify the incoming traffic as normal or attack and to predict traffic delays for all available routes, respectively. Only the traffic classified as normal by the ML-IDS algorithm is allowed to enter the network domain, and the route with the shortest predicted delay from the ML-RA is assigned for routing. The ML-based routing algorithm is compared to the conventional routing algorithm, Routing Information Protocol version 2 (RIPv2). From the performance evaluations, the ML-RA shows 100 percent accuracy in predicting the fastest route in the network. During network congestion, the proposed ML-RA outperforms RIPv2 in terms of delay and throughput on average by 57.61 percent and 46.57 percent, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_12-A_New_Machine_Learning_based_Hybrid_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Data Aggregation Method for Underwater Wireless Sensor Networks using Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140411</link>
        <id>10.14569/IJACSA.2023.0140411</id>
        <doi>10.14569/IJACSA.2023.0140411</doi>
        <lastModDate>2023-04-29T12:38:59.1670000+00:00</lastModDate>
        
        <creator>Lianchao Zhang</creator>
        
        <creator>Jianwei Qi</creator>
        
        <creator>Hao Wu</creator>
        
        <subject>UWSNs; routing; data aggregation; energy efficiency; ant colony optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Underwater Wireless Sensor Networks (UWSNs) have a wide range of applications for monitoring the ocean and exploring the offshore environment. In these networks, sensor nodes are typically dispersed throughout the area of interest at different depths. Sensor nodes on the seabed must use a routing protocol in order to communicate with surface-level nodes. The suitability assessment considers network resources, application requirements, and environmental factors. By combining these factors, a platform for resource-aware routing strategies can be created that meets the needs of different applications in dynamic environments. Numerous challenges and problems are associated with UWSNs, including limited battery power, unstable topologies, limited bandwidth, long propagation times, and interference from the ocean. These problems can be addressed through the design of routing protocols. The routing protocol facilitates the transfer of data between source and destination nodes. Data aggregation and UWSN protocols are widely used to achieve better outcomes. This paper describes an energy-aware algorithm for data aggregation in UWSNs that uses an improved ACO (Ant Colony Optimization) algorithm to maximize the packet delivery ratio, improve the network lifetime, decrease end-to-end delay, and reduce energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_11-A_Novel_Data_Aggregation_Method_for_Underwater_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of You Only Look Once Networks for Vision-based Small Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140410</link>
        <id>10.14569/IJACSA.2023.0140410</id>
        <doi>10.14569/IJACSA.2023.0140410</doi>
        <lastModDate>2023-04-29T12:38:59.1500000+00:00</lastModDate>
        
        <creator>Li YANG</creator>
        
        <subject>YOLOv7; YOLOv6; YOLOv5; computer vision; jewellery detection; small object detection; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Small object detection is a challenging issue in computer vision-based algorithms. Although various methods have been investigated for common objects such as persons and cars, small objects remain insufficiently addressed. Therefore, it is necessary to conduct more research on them. This paper focuses on small object detection, especially jewellery, as current object detection methods suffer from low accuracy in this domain. This paper introduces a new dataset whose images were taken with a web camera in a jewellery store and expanded through a data augmentation procedure. It comprises three classes, namely ring, earrings, and pendant. In view of the small size of jewellery targets and the need for real-time detection, this study adopted the You Only Look Once (YOLO) algorithms. Eight different YOLO-based model versions are implemented and trained on our dataset to identify the most effective one. Evaluation criteria, including accuracy, F1 score, recall, and mAP, are used to evaluate the performance of the various YOLOv5, YOLOv6, and YOLOv7 versions. According to the experimental findings, YOLOv6 is significantly superior to YOLOv7 and marginally superior to YOLOv5.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_10-Investigation_of_You_Only_Look_Once_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Fall Detection for Smart Home Caring using Yolo Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140409</link>
        <id>10.14569/IJACSA.2023.0140409</id>
        <doi>10.14569/IJACSA.2023.0140409</doi>
        <lastModDate>2023-04-29T12:38:59.1330000+00:00</lastModDate>
        
        <creator>Bo LUO</creator>
        
        <subject>YOLO; computer vision; fall detection; smart home; caring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In order to help the elderly and limit the incidence of falls that result in injuries, effective fall detection in smart home applications is a challenging topic. Many techniques have been created employing both vision and non-vision-based technologies. Among them, the vision-based technique has drawn many researchers because of its viability and applicability. However, there is still room for improvement in the effectiveness of fall detection, given the low accuracy and high computational cost of current vision-based techniques. This study introduces a new dataset for posture and fall detection, whose images were gathered from Internet resources and expanded through data augmentation. It employs YOLO networks for the fall detection task. Furthermore, different YOLO networks are implemented on our dataset to identify the most accurate and effective model. Based on assessment parameters including accuracy, F1 score, recall, and mAP, the performance of the various YOLOv5n, YOLOv5s, and YOLOv6s versions is compared. As the experimental results showed, YOLOv5s performed better than the others.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_9-Human_Fall_Detection_for_Smart_Home_Caring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gradually Generative Adversarial Networks Method for Imbalanced Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140408</link>
        <id>10.14569/IJACSA.2023.0140408</id>
        <doi>10.14569/IJACSA.2023.0140408</doi>
        <lastModDate>2023-04-29T12:38:59.1170000+00:00</lastModDate>
        
        <creator>Muhammad Misdram</creator>
        
        <creator>Muljono</creator>
        
        <creator>Purwanto</creator>
        
        <creator>Edi Noersasongko</creator>
        
        <subject>Classification; imbalance; GAN model; GradGAN model; significant oversampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Imbalanced datasets can hinder classification and result in decreased classification performance. There are several methods that can be used to deal with data imbalance, such as methods based on SMOTE and Generative Adversarial Networks (GAN). These methods oversample the minority data so that its amount can increase and reach a balance with the majority data. In this research, the selected datasets are small imbalanced datasets of fewer than 200 records. The proposed method is the Gradually Generative Adversarial Network (GradGAN) model, which aims to handle data imbalance gradually. The GradGAN model augments the original minority dataset in stages, creating new minority datasets until a balance of data is reached. Following the described algorithm flow, the minority data is repeatedly multiplied by a predetermined variable value to produce new, balanced minority data. Classification tests on datasets processed with the GradGAN model show an accuracy gain of 8.3% compared to classification without GradGAN.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_8-Gradually_Generative_Adversarial_Networks_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Analysis of WebHDFS API Throughput</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140407</link>
        <id>10.14569/IJACSA.2023.0140407</id>
        <doi>10.14569/IJACSA.2023.0140407</doi>
        <lastModDate>2023-04-29T12:38:59.0870000+00:00</lastModDate>
        
        <creator>Yordan Kalmukov</creator>
        
        <creator>Milko Marinov</creator>
        
        <subject>WebHDFS API; throughput analysis; data analytical tools; Hadoop Distributed File System (HDFS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Data analysis is very important for the success of any business today. It helps to optimize business processes and to analyze users’ behavior, demands, etc. There are powerful data analytics tools, such as those of the Hadoop ecosystem, but they require multiple high-performance servers to run and highly qualified experts to install, configure, and support them. In most cases, small companies and start-ups cannot afford such expenses. However, they can use these tools as web services, on demand, and pay much lower fees per request. To do that, companies should somehow share their data with an existing, already deployed Hadoop cluster. The most common way of uploading their files to the Hadoop Distributed File System (HDFS) is through the WebHDFS API (Application Programming Interface), which allows remote access to HDFS. For that reason, the API’s throughput is very important for the efficient integration of a company’s data into the Hadoop cluster. This paper performs a series of experimental analyses aiming to determine the WebHDFS API’s throughput, to establish whether it is a bottleneck in integrating a company’s data into an existing Hadoop infrastructure, and to detect all possible factors that influence the speed of data transmission between the clients’ software and the Hadoop file system.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_7-Experimental_Analysis_of_WebHDFS_API_Throughput.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Automatic Intrusion Detection Method of Software-Defined Security Services in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140406</link>
        <id>10.14569/IJACSA.2023.0140406</id>
        <doi>10.14569/IJACSA.2023.0140406</doi>
        <lastModDate>2023-04-29T12:38:59.0700000+00:00</lastModDate>
        
        <creator>Xingjie Huang</creator>
        
        <creator>Jing Li</creator>
        
        <creator>Jinmeng Zhao</creator>
        
        <creator>Beibei Su</creator>
        
        <creator>Zixian Dong</creator>
        
        <creator>Jing Zhang</creator>
        
        <subject>Cloud environment; software; security services; invasion; detection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>In a cloud environment, software-defined security services are highly vulnerable to malicious virus attacks. In response to these software security issues, this project uses machine learning technology to achieve automated detection for software security services in a cloud environment. Firstly, the intrusion characteristics of software-defined security services in cloud environments are studied based on piecewise sample regression, and their statistical feature quantities are established. Then, fixed identification is achieved using the method of decision statistical analysis. Finally, the intrusion characteristics of software-defined security services in the cloud environment are studied and compared with the data in the cloud environment to obtain the power spectral density. On this basis, machine learning methods are used to extract features from software security services in the cloud environment, in order to achieve automatic extraction and optimization of software security services in the cloud environment. Through simulation experiments, the credibility of the proposed algorithm for software-defined security services in the cloud environment was verified, and the attack vulnerabilities of software-defined security services in the cloud environment were effectively patched.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_6-Research_on_Automatic_Intrusion_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Hand Movements Based on EMG Signals using Topological Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140405</link>
        <id>10.14569/IJACSA.2023.0140405</id>
        <doi>10.14569/IJACSA.2023.0140405</doi>
        <lastModDate>2023-04-29T12:38:59.0570000+00:00</lastModDate>
        
        <creator>Jianyang Li</creator>
        
        <creator>Lei Yang</creator>
        
        <creator>Yunan He</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <subject>EMG classification; persistent homology; topological features; betti curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Hand movement classification based on Electromyography (EMG) signals has been extensively investigated in the past decades as a promising approach for controlling upper-limb prosthetics or robotics. Topological data analysis (TDA) is a relatively new and increasingly popular tool in data science that uses mathematical techniques from topology to analyze and understand complex data sets. This paper proposes a method for classifying hand movements based on EMG signals using topological features crafted with the tools of TDA. The main findings of this work on hand movement EMG classification are as follows: (1) topological features are effective in classifying EMG signals and outperform the other time-domain features tested in the experiments; (2) the 0-th Betti numbers are more effective than the 1-st Betti numbers; (3) Betti amplitude is a more stable and powerful feature than the other topological features discussed in this paper. Additionally, Betti curves were used to visualize topological patterns for hand movement EMG.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_5-Classification_of_Hand_Movements_Based_on_EMG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Radial Basis Network-based Early Warning Algorithm for Physical Injuries in Marathon Athletes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140404</link>
        <id>10.14569/IJACSA.2023.0140404</id>
        <doi>10.14569/IJACSA.2023.0140404</doi>
        <lastModDate>2023-04-29T12:38:59.0230000+00:00</lastModDate>
        
        <creator>Ruisheng Jiao</creator>
        
        <creator>Juan Luo</creator>
        
        <subject>Radial basis neural network; exponentially decreasing inertia weights; early warning algorithm; sports injury; marathon; particle swarm; model building</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>For marathon runners, a single injury may affect their lifelong athletic career, so their injury management is very important. Current injury management for marathon runners suffers from a certain lag, and current injury warning relies mainly on manual teams, which is costly and poorly automated. To solve these problems, the study proposes a physical injury warning algorithm for marathon athletes based on a radial basis network optimized through inertia weight adjustment. Particle swarm optimization technology has also been incorporated into the early warning algorithm. Finally, an athlete injury and disease early warning model is constructed based on the algorithm. The results of performance tests show that the algorithm has a minimum fitness function value of 0.13, which is significantly lower than that of the algorithms used for comparison. In the test with real data, the MAPE of the proposed algorithm was as low as 7.598%, and the agreement of the hazard score results with the expert human assessment reached 100%. The results of the study indicate the practicality of the algorithm in assisting work teams and providing early warning of physical injuries in athletes. However, the high number of iterations required is a limitation awaiting resolution.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_4-A_Radial_Basis_Network_based_Early_Warning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolution Neural Networks for Phishing Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140403</link>
        <id>10.14569/IJACSA.2023.0140403</id>
        <doi>10.14569/IJACSA.2023.0140403</doi>
        <lastModDate>2023-04-29T12:38:59.0100000+00:00</lastModDate>
        
        <creator>Arun D. Kulkarni</creator>
        
        <subject>Classification; convolution neural networks; machine learning; phishing URLs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Phishing is one of the significant threats in cyber security. Phishing is a form of social engineering that uses e-mails linking to malicious websites to solicit personal information. Phishing e-mails are growing at an alarming rate. In this paper we propose a novel machine learning approach to classify phishing websites using Convolution Neural Networks (CNNs) with URL-based features. CNNs consist of a stack of convolution layers, pooling layers, and a fully connected layer. CNNs accept images as input and perform feature extraction and classification. Many CNN models are available today. To avoid the vanishing gradient problem, recent CNNs use an entropy loss function with Rectified Linear Units (ReLU). To use a CNN, we convert feature vectors into images. To evaluate our approach, we use a dataset consisting of 1,353 real-world URLs classified into three categories: legitimate, suspicious, and phishing. The images representing feature vectors are classified using a simple CNN. We developed MATLAB scripts to convert vectors into images and to implement a simple CNN model. The classification accuracy obtained was 86.5 percent.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_3-Convolution_Neural_Networks_for_Phishing_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Framework for Number Plate Detection using OCR and Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140402</link>
        <id>10.14569/IJACSA.2023.0140402</id>
        <doi>10.14569/IJACSA.2023.0140402</doi>
        <lastModDate>2023-04-29T12:38:58.9770000+00:00</lastModDate>
        
        <creator>Yash Shambharkar</creator>
        
        <creator>Shailaja Salagrama</creator>
        
        <creator>Kanhaiya Sharma</creator>
        
        <creator>Om Mishra</creator>
        
        <creator>Deepak Parashar</creator>
        
        <subject>Number plate detection; recognition; deep learning; OCR; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>The use of automatic number plate detection devices in safety, commercial, and security applications has increased over the past few years. Number plate detection using computer vision provides fast and accurate detection and recognition. Lately, many computerized approaches have been developed for identifying vehicle registration details from license plate numbers using Deep Learning (DL) methodologies. In the proposed framework, we use Optical Character Recognition (OCR) and a new deep learning-based approach for automatic number plate detection and recognition. A deep learning approach trains the model to recognize the vehicle. The vehicle registration plate area is cropped adequately from the image, and a Convolution Neural Network (CNN) uses OCR to identify numbers and letters. The NVIDIA Jetson TX2 target served as the model&#39;s training platform, and its performance has been tested on a public dataset from the Kaggle database. We obtained the highest accuracy of 96.23%. The proposed system can recognize vehicle license plate numbers in real-world images. The system can be implemented at security checkpoint entrances in highly restricted areas such as military areas or areas surrounding high-level government agencies.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_2-An_Automatic_Framework_for_Number_Plate_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An End-to-End Deep Learning System for Recommending Healthy Recipes Based on Food Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140401</link>
        <id>10.14569/IJACSA.2023.0140401</id>
        <doi>10.14569/IJACSA.2023.0140401</doi>
        <lastModDate>2023-04-29T12:38:58.9300000+00:00</lastModDate>
        
        <creator>Ledion Lico</creator>
        
        <creator>Indrit Enesi</creator>
        
        <creator>Sai Jawahar Reddy Meka</creator>
        
        <subject>Deep learning; nutri-score; new dataset; healthy food; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(4), 2023</description>
        <description>Healthy food leads to healthy living, and it is a major concern these days. Nutri-Score is a nutrition label that can be calculated from the nutritional values of a food and helps evaluate how healthy it is. Nevertheless, the nutritional values of a food are not always available, so identifying this label is not always easy. Likewise, it is not easy to find a healthier alternative to a favorite food. In this paper an end-to-end deep learning system is proposed to identify the Nutri-Score label and recommend similar but healthier recipes based on food images. A new dataset of images is extracted from the Recipe 1M dataset and labeled with the Nutri-Score value calculated for each image. The pretrained models ResNet50, ResNet101, EfficientNetB2, and DenseNet121 are fine-tuned on this dataset. The embeddings from the last convolutional layer of the input image are used to find its most similar neighbor with the KNN algorithm. The proposed system suggests recipes with the lowest Nutri-Score similar to the input image. Experiments show that ResNet50 provides the highest prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No4/Paper_1-An_End_to_End_Deep_Learning_System_for_Recommending_Healthy_Recipes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bisayan Dialect Short-time Fourier Transform Audio Recognition System using Convolutional and Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403111</link>
        <id>10.14569/IJACSA.2023.01403111</id>
        <doi>10.14569/IJACSA.2023.01403111</doi>
        <lastModDate>2023-03-31T11:54:33.4970000+00:00</lastModDate>
        
        <creator>Patrick D. Cerna</creator>
        
        <creator>Rhodessa J. Cascaro</creator>
        
        <creator>Khian Orland S. Juan</creator>
        
        <creator>Bon Jovi C. Montes</creator>
        
        <creator>Aldrei O. Caballero</creator>
        
        <subject>Bisayan dialect; speech recognition; dense layer; CNN; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Speech is a form of oral communication that conveys thoughts and ideas with general purpose and meaning. In the Philippines, Filipinos can speak at least three languages: English, Filipino, and a native language. According to the Philippine government, the country has more than 150 regional native languages, one of which is Cebuano. This research aims to implement automatic speech recognition (ASR) specifically for the Bisayan dialect, using machine learning techniques to create and operate the system. In recent years, ASR has served its purpose not only for the official language of the Philippines but also for various foreign languages. The required datasets were collected throughout the study to train and build the models selected for the speech recognition engine. Audio files containing Visayan phrases and sentences were recorded in waveform file format. Hours of recorded audio were captured and processed using the TensorFlow short-time Fourier transform (STFT) algorithm to ensure an accurate representation. To analyze the audio data, the recordings were converted to digital format, specifically .wav, ensuring that all records were uncorrupted, contained only one channel, and had a sample rate of 22050 Hz. A data mining process was carried out by integrating CNN layers, dense layers, and RNNs to predict the transcription of speech input, using multiple layers that determine the output of the speech data. The researchers used the JiWER Python library to evaluate the word error rate (WER). The trained scripted dataset contains at least 500 recordings totaling 61.78 minutes. Overall, the best WER output is 99.53%, and the percentage of records used is acceptable.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_111-Bisayan_Dialect_Short_time_Fourier_Transform_Audio_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Task Scheduling using the Squirrel Search Algorithm and Improved Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403110</link>
        <id>10.14569/IJACSA.2023.01403110</id>
        <doi>10.14569/IJACSA.2023.01403110</doi>
        <lastModDate>2023-03-30T12:42:49.2930000+00:00</lastModDate>
        
        <creator>Qiuju DENG</creator>
        
        <creator>Ning WANG</creator>
        
        <creator>Yang LU</creator>
        
        <subject>Cloud computing; energy efficiency; task scheduling; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With cloud computing, resources can be networked globally and shared easily between users. A range of heterogeneous needs are met on demand by software, hardware, storage, and networking. Dynamic resource allocation and load distribution pose challenges for cloud servers. In this regard, task scheduling plays a significant role in enhancing the performance of cloud computing. With the increase in the number of users and the capability of cloud computing, cloud data centers are experiencing concerns regarding energy consumption. To leverage cloud resources energy efficiently and provide real-time services to users, a viable cloud task scheduling solution is required. To address these problems, this paper proposes a new hybrid task scheduling algorithm based on squirrel search and improved genetic algorithms for cloud environments. The proposed scheduling algorithm surpasses existing scheduling algorithms across multiple parameters, including makespan, energy consumption, and execution time.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_110-Cloud_Task_Scheduling_using_the_Squirrel_Search_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Identifying Stock Manipulation using GARCH Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403109</link>
        <id>10.14569/IJACSA.2023.01403109</id>
        <doi>10.14569/IJACSA.2023.01403109</doi>
        <lastModDate>2023-03-30T12:42:49.2770000+00:00</lastModDate>
        
        <creator>Wen-Tsao Pan</creator>
        
        <creator>Wen-Bin Qian</creator>
        
        <creator>Ying He</creator>
        
        <creator>Zhi-Xiu Wang</creator>
        
        <creator>Wei Liu</creator>
        
        <subject>Stock prices; market; manipulation; GARCH model; stock exchange</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The continuous growth of the economy and investors’ demand for funds open a window to easier market manipulation, which includes abusing one’s power to raise or lower the price of securities and colluding to affect the price or volume of securities transactions at a pre-agreed time, price, and method. This study aims to create a sound investment environment, detect abnormal behaviors in stocks, and avoid the risks of intentional manipulation. It identifies market manipulation and summarizes the accuracy of GARCH model analysis with the help of a fluctuation forecast trend chart and the construction of a GARCH model that calculates the sum of the GARCH-α and GARCH-β parameters for the turnover rate, logarithmic return rate, and trading volume fluctuation. This paper finds that stock market manipulation has the following characteristics: the participants are complex and diverse, the manipulation is opaque and has serious consequences, and it involves a wide range of aspects.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_109-Research_on_Identifying_Stock_Manipulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Retrieval Method of Natural Resources Data based on Hash Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403108</link>
        <id>10.14569/IJACSA.2023.01403108</id>
        <doi>10.14569/IJACSA.2023.01403108</doi>
        <lastModDate>2023-03-30T12:42:49.2470000+00:00</lastModDate>
        
        <creator>Qian Li</creator>
        
        <subject>Hash algorithm; natural resource data; information structure reorganization; search; data encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>To improve the ability to search and identify information in natural resources data, this paper puts forward an information retrieval method for natural resources data based on a hash algorithm. Through data center technology, the problems of information source positioning, data directory organization, data semantic definition and expression, and data entity relationship construction in the natural resource data center are solved. Combined with the distribution of the resource data stream, information structure reorganization and data encryption in the natural resource data center are realized using the hash algorithm, and the parameters of the information quality control model of the natural resource data center are established. Through natural resource data governance and semantic reconstruction, feature detection and redundancy arrangement of information in the natural resource data center are realized by standardizing data collection rules, and the results are stored in an intermediate database. Through data governance rules, the information in the natural resources data is structured, managed, and stored in a publishing library. Various data processing tools are used to process, clean, and reconstruct the data, and through the hash algorithm and data aggregation processing, information detection in natural resources data is realized. The simulation results show that the precision rate of natural resource data retrieval by this method is high.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_108-Information_Retrieval_Method_of_Natural_Resources_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mammography Image Abnormalities Detection and Classification by Deep Learning with Extreme Learner</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403107</link>
        <id>10.14569/IJACSA.2023.01403107</id>
        <doi>10.14569/IJACSA.2023.01403107</doi>
        <lastModDate>2023-03-30T12:42:49.2300000+00:00</lastModDate>
        
        <creator>Saruchi </creator>
        
        <creator>Jaspreet Singh</creator>
        
        <subject>Breast cancer; mammography; deep learning; CNN; extreme learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Breast cancer has emerged as a leading killer of women worldwide in recent decades. Mammography is a useful tool for detecting abnormalities and conducting screenings. The primary factors in the early identification of breast cancer are the quality of the mammogram image and the radiologist’s appraisal of the mammography. The extensive use of deep learning (DL) and other image-processing technologies in recent times has tremendously aided the categorization of breast cancer images. Image processing and classification methods may help detect breast cancer earlier, increasing the likelihood of a positive outcome from therapy and the likelihood of survival. Image segmentation methods are employed on the datasets to draw attention to the area of interest, and the findings are then classified as malignant or benign. In an effort to minimize the mortality rate from breast cancer among females, this research seeks to discover novel approaches to disease classification and detection, as well as new strategies for preventing the disease. To categorize the results correctly, feature optimization is carried out utilizing deep learning technology. The proposed deep CNN (Convolutional Neural Network) is improved using two classification models, SVM (Support Vector Machine) and ELM (Extreme Learning Machine). In the proposed deep learning model, feature extraction with AlexNet is accomplished using the deep CNN. Subsequently, different parameters are fine-tuned to enhance accuracy with various optimizers and learning rates.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_107-Mammography_Image_Abnormalities_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chicken Behavior Analysis for Surveillance in Poultry Farms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403106</link>
        <id>10.14569/IJACSA.2023.01403106</id>
        <doi>10.14569/IJACSA.2023.01403106</doi>
        <lastModDate>2023-03-30T12:42:49.1970000+00:00</lastModDate>
        
        <creator>Abdallah Mohamed Mohialdin</creator>
        
        <creator>Abdullah Magdy Elbarrany</creator>
        
        <creator>Ayman Atia</creator>
        
        <subject>Chicken; poultry; abnormal; behavior; birds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Poultry farming is an important industry that provides food for a growing population. However, the welfare of the birds is a major concern, as poor living conditions lead to abnormal behavior that affects the health and productivity of the flock. To monitor and improve the welfare of the birds, it is important to have a surveillance system in place that monitors the behavior of the chickens and alerts farmers to potential issues. This paper reviews the current state of the art in behavior analysis for surveillance in poultry farms and discusses potential future directions for research in this area. It presents a computer-vision-based system that detects and monitors the behaviors of chickens in poultry farms. The system classifies three behaviors: eating, walking, and sleeping. It takes videos as input and then classifies the behavior of the chicken. The proposed system achieves an accuracy of 94.7% using a Light Gradient Boosting Machine on a collected dataset of chickens, and 98.4% accuracy on a benchmark Human Activity Recognition dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_106-Chicken_Behavior_Analysis_for_Surveillance_in_Poultry_Farms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Dataset Clustering based on K-Means and EM Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403105</link>
        <id>10.14569/IJACSA.2023.01403105</id>
        <doi>10.14569/IJACSA.2023.01403105</doi>
        <lastModDate>2023-03-30T12:42:49.1830000+00:00</lastModDate>
        
        <creator>Youssef Boutazart</creator>
        
        <creator>Hassan Satori</creator>
        
        <creator>Anselme R. Affane M</creator>
        
        <creator>Mohamed Hamidi</creator>
        
        <creator>Khaled Satori</creator>
        
        <subject>COVID-19; clustering; k-means; EM algorithm; GMM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In this paper, a COVID-19 dataset is analyzed using a combination of K-Means and Expectation-Maximization (EM) algorithms to cluster the data. The purpose of this method is to gain insight into and interpret the various components of the data. The study focuses on tracking the evolution of confirmed, death, and recovered cases from March to October 2020, using a two-dimensional dataset approach. K-Means is used to group the data into three categories: “Confirmed-Recovered”, “Confirmed-Death”, and “Recovered-Death”, and each category is modeled using a bivariate Gaussian density. The optimal value for k, which represents the number of groups, is determined using the Elbow method. The results indicate that the clusters generated by K-Means provide limited information, whereas the EM algorithm reveals the correlation between “Confirmed-Recovered”, “Confirmed-Death”, and “Recovered-Death”. The advantages of using the EM algorithm include stability in computation and improved clustering through the Gaussian Mixture Model (GMM).</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_105-COVID_19_Dataset_Clustering_based_on_K_Means_and_EM_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SSEC: Semantic Segmentation and Ensemble Classification Framework for Static Hand Gesture Recognition using RGB-D Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403104</link>
        <id>10.14569/IJACSA.2023.01403104</id>
        <doi>10.14569/IJACSA.2023.01403104</doi>
        <lastModDate>2023-03-30T12:42:49.1500000+00:00</lastModDate>
        
        <creator>Dayananda Kumar NC</creator>
        
        <creator>K. V Suresh</creator>
        
        <creator>Chandrasekhar V</creator>
        
        <creator>Dinesh R</creator>
        
        <subject>Hand gesture recognition; semantic segmentation; ensemble classification; score fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Hand Gesture Recognition (HGR) refers to identifying various hand postures used in Sign Language Recognition (SLR) and Human-Computer Interaction (HCI) applications. Complex backgrounds under uncontrolled environmental conditions are the major challenge affecting the recognition accuracy of HGR systems. This can be effectively addressed by discarding the background using a suitable semantic segmentation method, which predicts the hand region pixels as foreground and the rest of the pixels as background. In this paper, we analyze and evaluate well-known semantic segmentation architectures for hand region segmentation using both RGB and depth data. Further, an ensemble of the segmented RGB and depth streams is used for hand gesture classification through probability score fusion. Experimental results show that the proposed novel framework of Semantic Segmentation and Ensemble Classification (SSEC) is suitable for static hand gesture recognition and achieves an F1-score of 88.91% on the OUHANDS test dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_104-SSEC_Semantic_Segmentation_and_Ensemble_Classification_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart Disease Classification and Recommendation by Optimized Features and Adaptive Boost Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403103</link>
        <id>10.14569/IJACSA.2023.01403103</id>
        <doi>10.14569/IJACSA.2023.01403103</doi>
        <lastModDate>2023-03-30T12:42:49.1370000+00:00</lastModDate>
        
        <creator>Pardeep Kumar</creator>
        
        <creator>Ankit Kumar</creator>
        
        <subject>Heart disease prediction; heart disease; machine learning; optimization; multi-objective features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent decades, cardiovascular diseases have eclipsed all others as the main cause of death in both low- and middle-income countries. Early identification and continuous clinical monitoring can reduce the death rate associated with heart disorders. Neither service is yet widely accessible, as it requires considerable intellect, time, and skill to detect cardiac disorders effectively in all circumstances and to advise a patient around the clock. In this study, researchers propose a machine learning-based approach to forecast the development of cardiac disease. For precise identification of cardiac disease, an efficient ML technique is required. The proposed method works on five classes, one normal and four diseases. In the research, all classes were assigned a primary task, and recommendations were made based on that. The proposed method optimizes feature weighting and selects efficient features. Following feature optimization, adaptive boost learning with tree and KNN base learners is used. In the trial, sensitivity improved by 3-4%, specificity by 4-5%, and accuracy by 3-4% compared to the previous approach.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_103-Heart_Disease_Classification_and_Recommendation_by_Optimized_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Inferring the Optimal Number of Clusters with Subsequent Automatic Data Labeling based on Standard Deviation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403102</link>
        <id>10.14569/IJACSA.2023.01403102</id>
        <doi>10.14569/IJACSA.2023.01403102</doi>
        <lastModDate>2023-03-30T12:42:49.1030000+00:00</lastModDate>
        
        <creator>Aline Montenegro Leal Silva</creator>
        
        <creator>Francisco Alysson da Silva Sousa</creator>
        
        <creator>Alysson Ramires de Freitas Santos</creator>
        
        <creator>Vinicius Ponte Machado</creator>
        
        <creator>Andre Macedo Santana</creator>
        
        <subject>Inference approach; range of attribute values; labeling; standard deviation; interpretation of the groups</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Machine learning is a suitable pattern recognition technique for detecting correlations between data. In the case of unsupervised learning, the groups formed from these correlations can receive a label, which consists of describing them in terms of their most relevant attributes and their respective ranges of values so that they are understood automatically. In this research work, this process is called labeling. However, a challenge for researchers is establishing the optimal number of clusters that best represent the underlying structure of the data subjected to clustering. This optimal number may vary depending on the data set and the grouping method used and influences the data clustering process and, consequently, the interpretability of the generated groups. Therefore, this research aims to provide an inference approach to the number of clusters to be used in the grouping based on the range of attribute values, followed by automatic data labeling based on the standard deviation to maximize the understanding of the groups obtained. This methodology was applied to four databases. The results show that it contributes to the interpretation of the groups since it generates more accurate labels without any overlap between ranges of values, considering the same attribute in different groups.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_102-Method_for_Inferring_the_Optimal_Number_of_Clusters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Demand Forecasting Models for Food Industry by Utilizing Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403101</link>
        <id>10.14569/IJACSA.2023.01403101</id>
        <doi>10.14569/IJACSA.2023.01403101</doi>
        <lastModDate>2023-03-30T12:42:49.0900000+00:00</lastModDate>
        
        <creator>Nouran Nassibi</creator>
        
        <creator>Heba Fasihuddin</creator>
        
        <creator>Lobna Hsairi</creator>
        
        <subject>Machine learning; long short-term memory; support vector machine; food industry; supply chain management; demand forecasting; product sales</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Continued global economic instability and uncertainty are causing difficulties in predicting sales. As a result, many sectors and decision-makers are facing new, pressing challenges. In supply chain management, the food industry is a key sector in which sales movement and demand forecasting for food products are more difficult to predict. Accurate sales forecasting helps to minimize stored and expired items across individual stores and thus reduces the potential loss from these expired products. To help food companies adapt to rapid changes and manage their supply chains more effectively, it is necessary to utilize machine learning (ML) approaches because of ML’s ability to process and evaluate large amounts of data efficiently. This research compares two forecasting models for confectionery products from one of the largest distribution companies in Saudi Arabia in order to improve the company’s ability to predict demand for its products using machine learning algorithms. To achieve this goal, Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) algorithms were utilized, and the models were evaluated based on their performance in forecasting quarterly time series. Both algorithms provided strong results when measured against the demand forecasting model, but overall the LSTM outperformed the SVM.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_101-Demand_Forecasting_Models_for_Food_Industry_by_Utilizing_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incremental Diversity: An Efficient Anonymization Technique for PPDP of Multiple Sensitive Attributes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01403100</link>
        <id>10.14569/IJACSA.2023.01403100</id>
        <doi>10.14569/IJACSA.2023.01403100</doi>
        <lastModDate>2023-03-30T12:42:49.0730000+00:00</lastModDate>
        
        <creator>Veena Gadad</creator>
        
        <creator>Sowmyarani C N</creator>
        
        <subject>Data management; privacy preserving data publishing; data privacy; multiple sensitive attributes; data anonymization; privacy attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Data collected at organizations such as schools, offices, healthcare centers, and e-commerce websites contain multiple sensitive attributes. Sensitive information held by these organizations, such as marks obtained, salary, disease, treatment, and travel history, is personal information that individuals are reluctant to disclose to the public, as it may lead to privacy threats. Therefore, it is necessary to preserve the privacy of the data before publishing. Privacy Preserving Data Publishing (PPDP) algorithms aim to publish the data without compromising the privacy of individuals. In recent years, several algorithms have been designed for PPDP with multiple sensitive attributes. Their major limitations are as follows. Firstly, among several sensitive attributes, these algorithms consider one as the primary sensitive attribute and anonymize the data accordingly, although there may be other dominant sensitive attributes that need to be preserved. Secondly, there is no consistent way to categorize multiple sensitive attributes. Lastly, an increased proportion of records is generated due to the use of generalization and suppression techniques. To overcome these limitations, the current work proposes an efficient approach that categorizes the sensitive attributes based on their semantics and anonymizes the data using an anatomy technique. This reduces the residual records as well as categorizing the attributes. The results are compared with popular techniques such as Simple Distribution of Sensitive Values (SDSV) and (l, e)-diversity. Experiments prove that our method outperforms the existing methods in terms of categorization of multiple sensitive attributes, reduction of the percentage of residual records, and prevention of existing privacy threats.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_100-Incremental_Diversity_An_Efficient_Anonymization_Technique_for_PPDP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rapidly Exploring Random Trees for Autonomous Navigation in Observable and Uncertain Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140399</link>
        <id>10.14569/IJACSA.2023.0140399</id>
        <doi>10.14569/IJACSA.2023.0140399</doi>
        <lastModDate>2023-03-30T12:42:49.0430000+00:00</lastModDate>
        
        <creator>Fredy Martinez</creator>
        
        <creator>Edwar Jacinto</creator>
        
        <creator>Holman Montiel</creator>
        
        <subject>Autonomous navigation; differential robot; esp32 microcontroller; low-cost; rapidly exploring random trees algorithm; versatile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>This paper proposes the use of a small differential robot with two DC motors, controlled by an ESP32 microcontroller, that implements the Rapidly Exploring Random Trees algorithm to navigate from an origin point to a destination point in an unknown but observable environment. The motivation behind this research is to explore the use of a low-cost, versatile and efficient robotic platform for autonomous navigation in complex environments. This work presents a practical and cost-effective solution that can be easily replicated and implemented in various scenarios such as search and rescue, surveillance, and industrial automation. The proposed robotic platform is equipped with a set of sensors and actuators that allow it to observe the environment, estimate its position, and move through it. The Rapidly Exploring Random Trees algorithm is implemented to generate a path from an origin to a destination point, avoiding obstacles and adjusting the robot’s motion accordingly. The implementation of this algorithm enables the robot to navigate through complex environments with high efficiency and reliability, making it a suitable solution for a wide range of applications. The results obtained through simulations and experiments show that the proposed robotic platform and algorithm achieve high performance and accuracy in autonomous navigation, even in complex environments.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_99-Rapidly_Exploring_Random_Trees_for_Autonomous_Navigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Framework to Detect Emotions from Contextual Corpus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140398</link>
        <id>10.14569/IJACSA.2023.0140398</id>
        <doi>10.14569/IJACSA.2023.0140398</doi>
        <lastModDate>2023-03-30T12:42:49.0270000+00:00</lastModDate>
        
        <creator>Ravikumar Thallapalli</creator>
        
        <creator>G. Narsimha</creator>
        
        <subject>Emoji translation; weighted annotation; text translation; reduced unicode based dictionary; relative sentiment score building; mean scoring technique; collaborative sentiment score building</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Emotion extraction, or opinion mining, is one of the key tasks in any text-processing framework. In recent times, opinion mining has gained considerable attention owing to its application in customized consumer-relations systems and other personalized applications. However, sentiment analysis is highly challenging because its accuracy depends on the input text corpus, which can fluctuate widely due to the inclusion of emojis, local language influences, and the use of a wide variety of regional languages. A good number of parallel research efforts have aimed to solve these challenges in recent times; however, most leave three challenges unsolved: firstly, emojis in the text corpus are typically removed rather than translated into sentiment scores; secondly, texts from various regional languages are given literal rather than contextual translations; and finally, the dictionaries used in the translation task take a long time to process and must be reduced. Hence, to address these challenges, this work proposes a framework that automates weighted emoji-based sentiment analysis, applies a Unicode-based translation process to reduce the time complexity, and uses collaborative sentiment analysis scores to build the final sentiment models. The proposed framework achieves nearly 97% accuracy and nearly a 50% reduction in time complexity.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_98-An_Automated_Framework_to_Detect_Emotions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bird Image Classification using Convolutional Neural Network Transfer Learning Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140397</link>
        <id>10.14569/IJACSA.2023.0140397</id>
        <doi>10.14569/IJACSA.2023.0140397</doi>
        <lastModDate>2023-03-30T12:42:49.0100000+00:00</lastModDate>
        
        <creator>Asmita Manna</creator>
        
        <creator>Nilam Upasani</creator>
        
        <creator>Shubham Jadhav</creator>
        
        <creator>Ruturaj Mane</creator>
        
        <creator>Rutuja Chaudhari</creator>
        
        <creator>Vishal Chatre</creator>
        
        <subject>Deep learning; CNN; Image classification; DenseNet201; ResNet152V2; InceptionV3; MobileNetV2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With the technological progress of human civilization, more and more animal and bird species are becoming endangered, and some are even on the verge of extinction. However, the existence of birds is highly beneficial to human civilization, as birds help in pollination, destroy insects harmful to crops, etc. To ensure the healthy co-existence of all species along with human beings, almost all advanced countries have taken up conservation measures for endangered species. The first step toward conservation is to identify the species of birds found in different locations. Deep learning-based techniques are best suited for the automated identification of bird species from captured images. In this paper, a Convolutional Neural Network-based bird image identification methodology is proposed. Four different transfer learning-based architectures, namely Resnet152V2, InceptionV3, DenseNet201, and MobileNetV2, have been used for bird image classification and identification. The models were trained using 58388 images belonging to 400 species of birds and tested using 2000 images of the same 400 species. Of these four models, Resnet152V2 and DenseNet201 performed comparatively well. Resnet152V2 achieved the highest accuracy at 95.45% but incurred a large loss of 0.8835, whereas DenseNet201, with an accuracy of 95.05%, incurred a lower loss of 0.6854. The results show that the DenseNet201 model can further be used for real-life bird image classification.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_97-Bird_Image_Classification_using_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-time ECG CTG based Ensemble Feature Extraction and Unsupervised Learning based Classification Framework for Multi-class Abnormality Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140396</link>
        <id>10.14569/IJACSA.2023.0140396</id>
        <doi>10.14569/IJACSA.2023.0140396</doi>
        <lastModDate>2023-03-30T12:42:48.9970000+00:00</lastModDate>
        
        <creator>Y. Aditya</creator>
        
        <creator>S. Suganthi Devi</creator>
        
        <creator>B. D. C. N Prasad</creator>
        
        <subject>Ensemble; feature ranking; improved inter quartile range; outlier detection; heterogeneous optimized k-nearest neighbor; unsupervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cardiovascular diseases (CVDs) are a leading cause of death worldwide. Early detection and diagnosis of these diseases can greatly reduce complications and improve outcomes for high-risk individuals. One method for detecting CVDs is the use of electrocardiogram (ECG) monitoring systems, which use technologies such as the Internet of Things (IoT), mobile applications, wireless sensor networks (WSN), and wearable devices to acquire and analyze ECG data for early diagnosis. However, despite the prevalence of these systems in the literature, there is a need for further optimization and improvement of their classification accuracy. To address this challenge, a novel heterogeneous unsupervised learning model for real-time ECG classification is proposed, with the main goals of reducing the error rate and improving the classification accuracy of the system. This study presents a framework for the classification of multi-class abnormalities in electrocardiograms (ECGs) using an ensemble feature extraction technique and unsupervised learning. The framework utilizes a real-time electrocardiogram-cardiotocography (ECG-CTG) system to extract features from the ECG signal and then employs an ensemble of feature extraction techniques to enhance the discrimination of the extracted features. The extracted features are then used in an unsupervised learning-based classification algorithm to classify the ECG signals into different classes of abnormalities. The proposed framework is evaluated on a dataset of ECG signals, and the results show that it can effectively classify ECG signals with high accuracy and low computational complexity.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_96-A_Real_time_ECG_CTG_based_Ensemble_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-String Missing Characters Restoration for Automatic License Plate Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140395</link>
        <id>10.14569/IJACSA.2023.0140395</id>
        <doi>10.14569/IJACSA.2023.0140395</doi>
        <lastModDate>2023-03-30T12:42:48.9630000+00:00</lastModDate>
        
        <creator>Ishtiaq Rasool KHAN</creator>
        
        <creator>Syed Talha Abid ALI</creator>
        
        <creator>Asif SIDDIQ</creator>
        
        <creator>Seong-O SHIM</creator>
        
        <subject>Automatic License Plate Recognition (ALPR); Intelligent Transportation System (ITS); Optical Character Recognition (OCR); Deep Convolutional Neural Network; You Only Look Once (YOLO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Developing a license plate recognition system that can cope with unconstrained real-time scenarios is very challenging. Additional cues, such as the color and dimensions of the plate, and font of the text, can be useful in improving the system&#39;s accuracy. This paper presents a deep learning-based plate recognition system that can take advantage of the bilingual text in the license plates, as used in many countries, including Saudi Arabia. We train and test the model using a custom dataset generated from real-time traffic videos in Saudi Arabia. Using the English alphanumeric alone, the accuracy of our system was on par with the existing state-of-the-art algorithms. However, it increased significantly when the additional information from the detection of Arabic text was utilized. We propose a new algorithm to restore noise-affected missing or misidentified characters in the plate. We generated a new test dataset of license plates to test how the proposed system performs in challenging scenarios. The results show a clear advantage of the proposed system over several commercially available solutions, including Open ALPR, Plate Recognizer, and Sighthound.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_95-Multi_String_Missing_Characters_Restoration_for_Automatic_License.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cloud and IoT-enabled Workload-aware Healthcare Framework using Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140394</link>
        <id>10.14569/IJACSA.2023.0140394</id>
        <doi>10.14569/IJACSA.2023.0140394</doi>
        <lastModDate>2023-03-30T12:42:48.9470000+00:00</lastModDate>
        
        <creator>Lu Zhong</creator>
        
        <creator>Xiaoke Deng</creator>
        
        <subject>Internet of things; cloud computing; healthcare; load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, smart cities have gained in popularity due to their potential to improve the quality of life of urban residents. In many smart city services, particularly those in the field of smart healthcare, big healthcare data is analyzed, processed, and shared in real time. Healthcare-related products and services are essential to the industry&#39;s current state, which increases its viability for all parties involved. With the increasing popularity of cloud-based services, it is imperative to develop new approaches for discovering and selecting these services. This paper follows a two-stage process. The first stage involves designing and implementing an Internet-enabled healthcare system incorporating wearable devices. In the second stage, a new load-balancing algorithm based on Ant Colony Optimization (ACO) is presented; ACO distributes tasks across virtual machines to maximize resource utilization and minimize makespan time. Statistical analysis indicates that the proposed method is more efficient than previous approaches in terms of both makespan time and processing time.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_94-A_Cloud_and_IoT_enabled_Workload_aware_Healthcare_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI in Tourism: Leveraging Machine Learning in Predicting Tourist Arrivals in Philippines using Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140393</link>
        <id>10.14569/IJACSA.2023.0140393</id>
        <doi>10.14569/IJACSA.2023.0140393</doi>
        <lastModDate>2023-03-30T12:42:48.9330000+00:00</lastModDate>
        
        <creator>Noelyn M. De Jesus</creator>
        
        <creator>Benjie R. Samonte</creator>
        
        <subject>Tourist arrivals; machine learning; predictive analytics; artificial neural network; ANN; time series prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Tourism is one of the most prominent and rapidly expanding sectors contributing significantly to the growth of a country’s economy. However, the tourism industry was among those most adversely affected during the coronavirus pandemic. Thus, a reliable and accurate time series prediction of tourist arrivals is necessary for making decisions and strategies to develop the competitiveness and economic growth of the tourism industry. In this sense, this research aims to examine the predictive capability of the artificial neural network (ANN) model, a popular machine learning technique, using the actual tourism statistics of the Philippines from 2008-2022. The model was trained using three distinct data compositions and evaluated using different time series evaluation metrics to identify the factors affecting model performance and determine its accuracy in predicting arrivals. The findings revealed that the ANN model is reliable in predicting tourist arrivals, with an R-squared value and MAPE of 0.926 and 13.9%, respectively. Furthermore, it was determined that adding training sets that contain an unexpected phenomenon, like the COVID-19 pandemic, improved the prediction model&#39;s accuracy and learning process. Given its demonstrated prediction accuracy, the technique would be a useful tool for the government, tourism stakeholders, and investors, among others, to enhance strategic and investment decisions.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_93-AI_in_Tourism_Leveraging_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Finding the Impact of Deep Learning in Educational Time Series Datasets – A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140392</link>
        <id>10.14569/IJACSA.2023.0140392</id>
        <doi>10.14569/IJACSA.2023.0140392</doi>
        <lastModDate>2023-03-30T12:42:48.9170000+00:00</lastModDate>
        
        <creator>Vanitha S</creator>
        
        <creator>Jayashree. R</creator>
        
        <subject>Deep learning; education; gated recurrent unit; long-short term memory; recurrent neural network; time series</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Besides teaching, instructors in the education system handle numerous background processes such as preparing study material, setting question papers, managing attendance, keeping log book entries, assessing students, and analyzing the results of the class. Moreover, a Learning Management System (LMS) is mandatory if the course is online; the Massive Open Online Course (MOOC) is an example of a worldwide online education system. Nowadays, educators use Google to efficiently formulate study material and question papers, and especially for self-preparation. Student assessment and result analysis tools are also available that return instant results when fed student data. Artificial Intelligence (AI) drives these applications to deliver the most precise outcomes. To accomplish that, AI requires historical data to train the model, and this sequential (year-wise, month-wise, etc.) information is called time series data. This Systematic Literature Review (SLR) was conducted to find the contribution of time series algorithms in education. There have been enormous changes in algorithm architecture compared to the traditional neural network in order to handle all kinds of data. Although this significantly raises performance, it also increases complexity, resource usage, and execution time. Consequently, comprehending the algorithm architecture and its execution process is a challenging phase before creating a model, but such knowledge is essential for selecting the suitable technique for the right solution. The first part reviews time series problems in educational datasets using Deep Learning (DL). The second part describes the architecture of time series models such as the Recurrent Neural Network (RNN) and its variants, Long-Short Term Memory (LSTM) and Gated Recurrent Unit (GRU), the differences between them, and the classification of performance metrics. Finally, the factors affecting time series model accuracy and the significance of this work are summarized to encourage those who wish to initiate research on educational time series problems.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_92-Towards_Finding_the_Impact_of_Deep_Learning_in_Educational_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conjugate Symmetric Data Transmission Control Method based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140391</link>
        <id>10.14569/IJACSA.2023.0140391</id>
        <doi>10.14569/IJACSA.2023.0140391</doi>
        <lastModDate>2023-03-30T12:42:48.9000000+00:00</lastModDate>
        
        <creator>Yao Wang</creator>
        
        <subject>Machine learning; conjugate symmetric data; transmission control; data coding; transmission delivery rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In conjugate symmetric data transmission, insufficient judgment of congestion during transmission leads to large data volumes and a low transmission rate. In order to improve the data transmission rate, a conjugate symmetric data transmission control method based on machine learning is designed. Firstly, the data to be transmitted are tracked and determined, and conjugate symmetric data fusion is completed according to the calculation result of the best tracking signal. Based on the fusion results, the framework of the conjugate symmetric data coding system is established and the data coding is completed. The average congestion mark value is calculated by the machine learning method, completing the congestion judgment for data transmission. On the basis of congestion determination, efficient transmission control of conjugate symmetric data is realized by specifying the conjugate symmetric data transmission protocol. Experimental results show that, compared with traditional control methods, this method offers a high delivery rate, low message transmission overhead, and low data transmission delay. Compared with the traditional two-way path model, the scheduling method proposed in this study increases the transmission delivery rate by 5%, while reducing the transmission cost index by 0.7 and the delay by 1.1 min. In comparison with accurate error tracking equalization, the transmission delivery rate of the proposed method increased by 21%, while the transmission cost index decreased by 3.1 and the delay by 2.9 minutes. Based on the above performance comparison, it can be concluded that the machine learning method has superior transmission control performance.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_91-Conjugate_Symmetric_Data_Transmission_Control_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Model of Preventing Corporate Financial Fraud under the Combination of Deep Learning and SHAP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140390</link>
        <id>10.14569/IJACSA.2023.0140390</id>
        <doi>10.14569/IJACSA.2023.0140390</doi>
        <lastModDate>2023-03-30T12:42:48.8700000+00:00</lastModDate>
        
        <creator>Yanzhao Wang</creator>
        
        <subject>Financial fraud; deep learning; ensemble algorithm; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Preventing financial fraud in listed companies is conducive to improving the healthy development of China’s accounting industry and securities market, to promoting the improvement of the internal control systems of Chinese enterprises, and to promoting stability. Based on the combination of deep learning and SHAP (SHapley Additive exPlanations), a prediction and identification model is built to determine the possibility of financial fraud and the fraud risk for a company. The research model effectively improves the identification accuracy of financial fraud in listed companies and, through the LOF and IF algorithms, effectively deals with the gray sample problem common in forecasting models. In comparative experiments, the overall accuracy rate of the research model is over 85%, the recall rate is 78.5%, the precision rate is 42%, the AUC reaches 0.896, the discrimination degree KS reaches 0.652, and the model stability PSI is 0.088; compared with the traditional financial fraud forecasting FS and CS models, it has a higher predictive effect. In the empirical analysis, a company’s fraud cases in 2020 are selected to effectively analyze the feature contributions in the fraud process and the key fraud risks. The established model can effectively monitor a company’s finances and prevent fraud.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_90-Research_on_the_Model_of_Preventing_Corporate_Financial_Fraud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Legal Entity Extraction: An Experimental Study of NER Approach for Legal Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140389</link>
        <id>10.14569/IJACSA.2023.0140389</id>
        <doi>10.14569/IJACSA.2023.0140389</doi>
        <lastModDate>2023-03-30T12:42:48.8530000+00:00</lastModDate>
        
        <creator>Varsha Naik</creator>
        
        <creator>Purvang Patel</creator>
        
        <creator>Rajeswari Kannan</creator>
        
        <subject>Named Entity Recognition; NER; legal domain; text annotation; annotation tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In the legal domain, Named Entity Recognition (NER) serves as the basis for subsequent stages of legal artificial intelligence. In this paper, the authors have developed a dataset for training NER in the Indian legal domain. As a first step of the research methodology, a study was done to identify and establish more legal entities than the commonly used named entities such as person, organization, location, and so on. Annotators can use these entities to annotate different types of legal documents. A variety of text annotation tools exist, and finding the best one is a difficult task, so the authors experimented with various tools before settling on the best one for this research work. The resulting annotations from unstructured text can be stored in JavaScript Object Notation (JSON) format, which improves data readability and makes manipulation simple. After annotation, the resulting dataset contains approximately 30 documents and approximately 5000 sentences. This data is further used to train a spaCy pre-trained pipeline to predict accurate legal named entities. The accuracy of legal named entities can be increased further if the pre-trained models are fine-tuned using legal texts.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_89-Legal_Entity_Extraction_An_Experimental_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meta Heuristic Fusion Model for Classification with Modified U-Net-based Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140388</link>
        <id>10.14569/IJACSA.2023.0140388</id>
        <doi>10.14569/IJACSA.2023.0140388</doi>
        <lastModDate>2023-03-30T12:42:48.8400000+00:00</lastModDate>
        
        <creator>Sri Laxmi Kuna</creator>
        
        <creator>A. V. Krishna Prasad</creator>
        
        <creator>Suneetha Bulla</creator>
        
        <subject>Diabetic retinopathy segmentation and classification model; multi-armed bandits groundwater flow algorithm; hybrid soft attention-based DenseNet with multi-scale gated ResNet; modified U-Net-based segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Diabetic Retinopathy (DR) is a common complication of diabetes mellitus that results in lesions on the retina which impair vision. If it is not detected in time, it can lead to severe blindness. Regrettably, there is no cure for DR, but early diagnosis and treatment can greatly lower the risk of visual loss. In contrast to computer-aided diagnosis methods, the manual diagnosis of DR using retina fundus images is more time-consuming, incurs higher cost, and is highly prone to error. Deep learning has emerged as one of the most popular methods for improving performance, particularly in the classification and analysis of medical images. Therefore, a deep structure-based DR detection and severity classification is demonstrated for treating DR with the usage of fundus images. The major aim of this developed technique is to classify the severity level of the retinal region of the human eye from the fundus images. At first, the required retinal fundus images are collected from standard benchmark data sources. Secondly, image enhancement techniques are applied to the collected fundus images to improve their quality. Thirdly, the abnormality segmentations are carried out using an optic disc removal process with an active contouring model, and then the regional segmentation is done via the Modified U-Net method. Finally, the segmented image is fed to a hybrid classifier network named the Hybrid Soft Attention-based DenseNet with Multi-Scale Gated ResNet (HSADMGR Net) for classifying the retinal fundus images and finding the severity level of the retinal images with higher accuracy. Furthermore, the parameters of the hybrid classifier network are optimized with the help of the implemented Multi-Armed Bandits Groundwater Flow Algorithm (MABGFA). The test results of the developed deep structure-based DR model are validated against existing DR detection and classification approaches using different performance measures.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_88-Meta_Heuristic_Fusion_Model_for_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Cross-Cultural Teaching Model of Foreign Literature Under the Application of Machine Learning Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140387</link>
        <id>10.14569/IJACSA.2023.0140387</id>
        <doi>10.14569/IJACSA.2023.0140387</doi>
        <lastModDate>2023-03-30T12:42:48.8230000+00:00</lastModDate>
        
        <creator>Jing Lv</creator>
        
        <subject>Cross cultural education; foreign literature; extreme gradient boosting (XGBoost) algorithm; flower pollination algorithm (FPA); control class (CC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>As globalization spreads, so does the world&#39;s ethnic makeup, leading to a surge in cultural diversity that has become a major issue for the educational systems of all countries. Many western countries advocate for cross-cultural education (CCE) as a means of dealing with cultural variety and promoting trust, tolerance, and interaction between individuals of different backgrounds. The way to achieve this goal is to work toward solving the issue while also fostering greater national unity. One of the newest developments in international education is the concept of CCE, which has also given rise to a whole new area of research within the subject of education. Traditional approaches to teaching foreign literature have long failed to actively engage students in the learning process, and they have been seriously challenged by the new curriculum reform. As a result, the proposed research has shown that around half of all education is dedicated to the foreign literature (FL) cross-cultural teaching paradigm. Chinese students&#39; data were first gathered for this study and divided into two groups: Control Class (CC) and Experimental Class (EC). The performance of the students in both groups is then forecasted using the extreme gradient boosting (XGBoost) technique, which is based on machine learning. Then, an optimization method known as the Flower Pollination Algorithm (FPA) is used to improve XGBoost&#39;s prediction performance. According to the descriptive findings, students who follow the suggested teaching strategy show more learning interest than those who follow existing strategies.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_87-The_Cross_Cultural_Teaching_Model_of_Foreign_Literature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Source Printer Identification Model using Convolution Neural Network (SPI-CNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140386</link>
        <id>10.14569/IJACSA.2023.0140386</id>
        <doi>10.14569/IJACSA.2023.0140386</doi>
        <lastModDate>2023-03-30T12:42:48.7930000+00:00</lastModDate>
        
        <creator>Naglaa F. El Abady</creator>
        
        <creator>Hala H. Zayed</creator>
        
        <creator>Mohamed Taha</creator>
        
        <subject>Document forgery; source printer identification (SPI); convolution neural network (CNN); transfer learning (TL); support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Document forgery detection is becoming increasingly important in the current era, as forgery techniques are available to even inexperienced users. Source printer identification is a method for identifying the source printer and classifying a questioned document into one of the printer classes. To the best of our knowledge, most earlier studies segmented documents into characters, words, and patches, or cropped them to obtain large datasets. In contrast, in this paper we work with the document as a whole and with a small dataset. This paper uses three CNN-based techniques to identify a document&#39;s source printer without segmenting the document into characters, words, or patches, and with small datasets. Three separate datasets of 1185, 1200, and 2385 documents are used to estimate the performance of the suggested techniques. In the first technique, 13 pre-trained CNNs were tested; they were used only for feature extraction, while an SVM was used for classification. In the second technique, a pre-trained neural network is retrained using transfer learning for feature extraction and classification. In the third technique, a CNN is trained from scratch and then used for feature extraction, with an SVM for classification. Many experiments were conducted with the three techniques, showing that the third technique gives the best results, achieving 99.16%, 99.58%, and 98.3% accuracy on datasets 1, 2, and 3, respectively. The three techniques were compared with previously published work, and the third technique was found to give better results.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_86-An_Efficient_Source_Printer_Identification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Question Classification in Albanian Through Deep Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140385</link>
        <id>10.14569/IJACSA.2023.0140385</id>
        <doi>10.14569/IJACSA.2023.0140385</doi>
        <lastModDate>2023-03-30T12:42:48.7770000+00:00</lastModDate>
        
        <creator>Evis Trandafili</creator>
        
        <creator>Nelda Kote</creator>
        
        <creator>Gjergj Plepi</creator>
        
        <subject>Question classification; deep learning; BiLSTM; transformer; RoBERTa; Albanian corpus; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, there has been growing interest in intelligent conversation systems. In this context, Question Classification is an essential subtask of Question Answering systems that determines the question type and, therefore, also the type of the answer. However, while there is abundant research for English, little research work has been carried out for other languages. In this paper we deal with the classification of questions in the Albanian language, which is considered a complex Indo-European language. We employ both machine learning and deep learning approaches on a large Albanian corpus based on the six-class TREC dataset, with approximately 5000 questions. Experiments with and without stop-words show that stop-words have a significant impact on classifier accuracy. An extensive comparison of algorithms for question classification in Albanian shows that deep learning algorithms outperform conventional machine learning approaches. To the best of our knowledge this is the first approach in the literature for classifying questions in Albanian, and the results are highly comparable to those for English.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_85-Question_Classification_in_Albanian_Through_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Deep Learning Techniques for Lung Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140384</link>
        <id>10.14569/IJACSA.2023.0140384</id>
        <doi>10.14569/IJACSA.2023.0140384</doi>
        <lastModDate>2023-03-30T12:42:48.7600000+00:00</lastModDate>
        
        <creator>Mattakoyya Aharonu</creator>
        
        <creator>R Lokesh Kumar</creator>
        
        <subject>Artificial intelligence; deep learning; lung cancer; lung cancer detection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cancer is the leading cause of death across the globe: according to the WHO, about 10 million people died of cancer in 2020, with lung cancer alone accounting for 2.21 million new cases and 1.80 million deaths. Lung cancer is caused by the uncontrolled multiplication and growth of lung cells. In this context, exploiting technological innovations for the early automatic detection of lung cancer is of paramount importance. Significant progress has been made toward this end, and deep learning models such as the Convolutional Neural Network (CNN) have proved superior at processing lung CT or MRI images for disease diagnosis. Detecting lung cancer in the early stages of the disease enables better treatment and cure. In this paper, we present a systematic review of deep learning methods for the detection of lung cancer, covering peer-reviewed journal and conference papers from 2012 to 2021. The literature review synthesizes different existing methods spanning machine learning (ML), deep learning, and artificial intelligence (AI), provides insights into the pros and cons of different deep learning methods, and identifies possible research gaps. This paper equips the reader with knowledge of different aspects of lung cancer detection that can trigger further research toward models usable in the Clinical Decision Support Systems (CDSSs) required by healthcare units.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_84-Systematic_Review_of_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Service Composition using Firefly Optimization Algorithm and Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140383</link>
        <id>10.14569/IJACSA.2023.0140383</id>
        <doi>10.14569/IJACSA.2023.0140383</doi>
        <lastModDate>2023-03-30T12:42:48.7300000+00:00</lastModDate>
        
        <creator>Wenzhi Wang</creator>
        
        <creator>Zhanqiao Liu</creator>
        
        <subject>Cloud computing; service composition; QoS; firefly algorithm; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cloud computing involves the dynamic provision of virtualized and scalable resources over the Internet as services. In a cloud environment, different services with the same functionality but different non-functional features may be delivered in response to customer requests, and these may need to be combined to satisfy a customer&#39;s complex requirements. Recent research has therefore focused on combining unique, loosely-coupled services into a preferred system: an optimized composite service assembles previously existing single, simple services into an optimal whole, thereby improving the quality of service (QoS). In recent years, cloud computing has driven the rapid proliferation of multi-provision cloud service compositions, in which cloud service providers can provide multiple services simultaneously. Service composition fulfils a variety of user needs in a variety of scenarios: a composite request (service request) in a multi-cloud environment requires atomic services (service candidates) located in multiple clouds, and service composition combines these atomic services into a single service. Since cloud services are growing rapidly and their QoS varies widely, finding the necessary services and composing them with quality assurances is an increasingly challenging technical task. This paper presents a method that uses the firefly optimization algorithm (FOA) and fuzzy logic to balance multiple QoS factors and satisfy service composition constraints. Experimental results show that the proposed method outperforms previous ones in terms of response time, availability, and energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_83-Cloud_Service_Composition_using_Firefly_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Univariate and Multivariate Gaussian Models for Anomaly Detection in Multi Tenant Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140382</link>
        <id>10.14569/IJACSA.2023.0140382</id>
        <doi>10.14569/IJACSA.2023.0140382</doi>
        <lastModDate>2023-03-30T12:42:48.7130000+00:00</lastModDate>
        
        <creator>Pravin Ramdas Patil</creator>
        
        <creator>Geetanjali Kale</creator>
        
        <subject>Multi-tenant distributed system; anomaly detection; outlier detection; machine learning; Gaussian model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Due to flaws in shared memory, settings, and network access, distributed systems on a network have always been susceptible to cyber intrusions. Co-users on the same server give attackers the chance to monitor the activity of many other users and launch an attack when those users&#39; security is at risk. Building completely secure network topologies immune to risks and assaults has traditionally been the goal, but it is hard to create an architecture that is 100 percent safe due to its open-ended nature. However, regardless of the sort of attack, the precise parameters and infrastructure conditions under which the attack is instantiated can always be detected. Thanks to the increased use of machine learning algorithms and data-gathering tools, it is now possible to model anomalies and subsequent attack possibilities using network parameter values. This work proposes a Gaussian model to forecast the likelihood of an attack occurring depending on certain system parameters. Both a univariate and a multivariate Gaussian model are fitted on the training dataset, and various threshold values are used to predict whether a data point is an inlier or an outlier; the accuracies obtained for these threshold values are examined. An important challenge in anomaly detection is class imbalance, but as long as only training data are used, class imbalance is not a problem. Our data-driven results show that combining machine learning with Gaussian-based models can be a useful tool for analyzing network intrusions. Although more steps are being taken to boost digital security, machine learning algorithms may be utilized to examine any abnormal behavior that would otherwise go unchecked.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_82-Univariate_and_Multivariate_Gaussian_Models_for_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An In-depth Analysis of Uneven Clustering Techniques in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140381</link>
        <id>10.14569/IJACSA.2023.0140381</id>
        <doi>10.14569/IJACSA.2023.0140381</doi>
        <lastModDate>2023-03-30T12:42:48.6830000+00:00</lastModDate>
        
        <creator>Hai-yu Zhang</creator>
        
        <subject>Wireless sensor networks; data aggregation; uneven clustering; energy-efficient; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The low cost and convenience of Wireless Sensor Networks (WSNs) have made them popular in many sectors over the last decade, and recent advancements in low-power, energy-efficient communication have led to their widespread use. WSNs typically use batteries to power sensor nodes; the finite energy stored in batteries and the hassle of battery replacement have made energy efficiency a critical focus for WSNs. Clustering and data aggregation are the most efficient methods to address these energy concerns. This paper comprehensively reviews several uneven clustering methods and compares the various uneven clustering algorithms, describing each in terms of its goals, attributes, category, advantages, and disadvantages. Probabilistic clustering is used when simplicity and speed are needed. This study compares all these types of protocols based on their clustering properties, cluster head (CH) properties, and the type of clustering process; current research gaps and effective techniques are also addressed.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_81-An_In_depth_Analysis_of_Uneven_Clustering_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polarimetric SAR Characterization of Mangrove Forest Environment in the United Arab Emirates (UAE)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140380</link>
        <id>10.14569/IJACSA.2023.0140380</id>
        <doi>10.14569/IJACSA.2023.0140380</doi>
        <lastModDate>2023-03-30T12:42:48.6370000+00:00</lastModDate>
        
        <creator>Soumaya Fatnassi</creator>
        
        <creator>Mohamed Yahia</creator>
        
        <creator>Tarig Ali</creator>
        
        <creator>Maruf Mortula</creator>
        
        <subject>Mangrove forests; dual-PolSAR; sentinel 1; United Arab Emirates; entropy/alpha decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Mangrove forests in the United Arab Emirates (UAE) provide valuable ecosystem services such as coastal erosion protection, water purification, and refuge for a wide variety of plants and animals. The first step toward understanding mangrove forests is therefore the monitoring of this important ecological system. This paper proposes an original study to characterize the mangrove forest environment in the UAE using polarimetric synthetic aperture radar (PolSAR) remote sensing. Freely accessible C-band dual-PolSAR Sentinel 1 data have been exploited, and the elements of the covariance matrix as well as the entropy/alpha decomposition parameters have been studied. Results show that the VH intensity, the coherence between the VV and VH polarimetric channels, the entropy, and the alpha angle provide the most pronounced signatures for discerning mangrove forests. These parameters could thus be exploited to improve the accuracy of remote sensing monitoring and mapping techniques for mangrove forests in the UAE.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_80-Polarimetric_SAR_Characterization_of_Mangrove_Forest_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marigold Flower Blooming Stage Detection in Complex Scene Environment using Faster RCNN with Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140379</link>
        <id>10.14569/IJACSA.2023.0140379</id>
        <doi>10.14569/IJACSA.2023.0140379</doi>
        <lastModDate>2023-03-30T12:42:48.6200000+00:00</lastModDate>
        
        <creator>Sanskruti Patel</creator>
        
        <subject>Deep learning; convolutional neural networks; object detection; marigold flower blooming stage detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, flower growing has developed into a lucrative agricultural sector that provides employment and business opportunities for small and marginal growers in both urban and rural locations in India. The Marigold is one of the most often cultivated flowers for landscaping design, and its loose flowers are widely used to create garlands for ceremonial and social occasions. Understanding the appropriate harvesting stage for each plant species is essential to ensuring the quality of the flowers after they have been picked, and human assessors have been shown to consistently use a category scoring system to evaluate the various flowering stages. Deep learning and convolutional neural networks have the potential to revolutionize agriculture by enabling efficient analysis of large-scale data. To address the problem of Marigold flower stage detection and classification in complex real-time field scenarios, this study proposes a fine-tuned Faster RCNN with a ResNet50 network coupled with data augmentation. Faster RCNN is a popular deep learning framework for object detection that uses a region proposal network to efficiently identify object locations and features in an image. The Marigold flower dataset was collected from three different Marigold fields in the Anand District of Gujarat State, India; the collection includes photos taken outdoors in natural light at various heights, angles, and distances. We developed and fine-tuned a Faster RCNN detection and classification model to be particularly sensitive to Marigold flowers and compared its performance to that of other cutting-edge models to determine its accuracy and effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_79-Marigold_Flower_Blooming_Stage_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Energy‐Aware Technique for Task Allocation in the Internet of Things using Krill Herd Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140378</link>
        <id>10.14569/IJACSA.2023.0140378</id>
        <doi>10.14569/IJACSA.2023.0140378</doi>
        <lastModDate>2023-03-30T12:42:48.6030000+00:00</lastModDate>
        
        <creator>Dejun Miao</creator>
        
        <creator>Rongyan Xu</creator>
        
        <creator>Jiusong Chen</creator>
        
        <creator>Yizong Dai</creator>
        
        <subject>Internet of things; resource allocation; task scheduling; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The Internet of Things (IoT) is an innovative technology that connects the digital and physical worlds and allows physical devices with different capacities to share resources to accomplish tasks. Most IoT objects are heterogeneous and have limited battery life, which makes assigning tasks to these objects extremely challenging. Energy consumption and reliability are the primary objectives of task allocation algorithms. We present an optimization solution to the IoT task allocation problem based on the krill herd algorithm. The algorithm increases the energy efficiency and stability of the network while providing a reliable task allocation solution. The proposed algorithm has been tested extensively using the MATLAB simulator; compared to the most relevant method in the literature, it provides a higher level of energy efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_78-An_Energy_Aware_Technique_for_Task_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auto JSON: An Automatic Transformation Model for Converting Relational Database to Non-relational Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140377</link>
        <id>10.14569/IJACSA.2023.0140377</id>
        <doi>10.14569/IJACSA.2023.0140377</doi>
        <lastModDate>2023-03-30T12:42:48.5900000+00:00</lastModDate>
        
        <creator>K. Revathi</creator>
        
        <creator>T. Tamilselvi</creator>
        
        <creator>Batini Dhanwanth</creator>
        
        <creator>M. Dhivya</creator>
        
        <subject>Distributed data; document oriented NOSQL; hospital management system; Mongo DB; my SQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, the demand for handling large sets of distributed data has been rendering relational databases and their structured query language (SQL) solutions obsolete in practice, paving the way for novel solutions in the form of non-relational, not-only-SQL (NoSQL) databases. NoSQL offers dynamic, flexible, scalable, highly available, high-performance, and near real-time access to the distributed, voluminous data used by current industrial applications. Despite these major features of NoSQL, SQL remains in operation because of its popularity and standardization. This paper presents an algorithm to convert relational MySQL documents into any document-oriented NoSQL database automatically, without disturbing the existing relational database setup or installing NoSQL from scratch on the core machines. JavaScript Object Notation (JSON) is a human-readable data interchange format used in web development, and its characteristics have widened its use cases from web development to database storage; MongoDB, one of the most popular document-oriented NoSQL databases, adopts the JSON format for its storage. The proposed algorithm is built on the schema definition, and its performance is evaluated against a sample database from a hospital management system. The findings are discussed with an emphasis on addressing the challenges and revealing the scope for improvement.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_77-Auto_JSONAn_Automatic_Transformation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Signature Algorithm: A Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140376</link>
        <id>10.14569/IJACSA.2023.0140376</id>
        <doi>10.14569/IJACSA.2023.0140376</doi>
        <lastModDate>2023-03-30T12:42:48.5730000+00:00</lastModDate>
        
        <creator>Prajwal Hegde N</creator>
        
        <creator>Veena Devi Shastrimath V</creator>
        
        <subject>DSA; digital signature algorithm; hash function; public key; private key; RSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Security is one of the most important issues in the design of a digital system. Communication these days is digital; consequently, utmost care must be taken to secure the information. This paper focuses on techniques used to protect data from theft and hacking using end-to-end encryption and decryption. Cryptography is the key technique for encrypting and decrypting messages. We use the Digital Signature Standard (DSS) and the Digital Signature Algorithm (DSA), with the algorithm implemented in MATLAB. The DSA is commonly used in cryptographic applications to provide services such as entity authentication, key transit, and key agreement in an authenticated environment. The scheme combines a secure hash function with a cryptographic algorithm adopted by government agencies in the USA, as it is considered one of the safest approaches to securing a system. This standard could have a great effect on government agencies and banks in protecting their data.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_76-Digital_Signature_Algorithm_A_Hybrid_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Smart Sensor Array for Adulteration Detection in Black Pepper Seeds using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140375</link>
        <id>10.14569/IJACSA.2023.0140375</id>
        <doi>10.14569/IJACSA.2023.0140375</doi>
        <lastModDate>2023-03-30T12:42:48.5430000+00:00</lastModDate>
        
        <creator>Sowmya Natarajan</creator>
        
        <creator>Vijayakumar Ponnusamy</creator>
        
        <subject>Gas sensor system; volatile organic compounds; pepper seeds; papaya seeds; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Black pepper is an expensive commodity with a high risk of adulteration. Ground papaya seed is the main adulterant in pepper because it cannot be discriminated visually. A few destructive detection methods exist, but since pepper is costly, a non-destructive method of adulteration detection is essential, though challenging. Existing non-destructive methods use costly, bulky equipment and involve time-consuming, laboratory-based testing. To overcome these issues, this article presents a non-destructive electronic-nose (E-nose) gas sensor system for pepper adulteration detection, which determines volatile organic compounds (VOCs) in a controlled environment. The proposed system utilizes MQ2 and MQ3 gas sensor arrays to identify the VOCs present in pepper seeds and discriminate adulterated from non-adulterated samples. The sensor data are used for a qualitative analysis of adulteration with a support vector machine (SVM) learning algorithm; the proposed sensor system with the SVM algorithm outperforms existing methods with 100% classification accuracy. The developed gas sensor system is connected to the internet via an IoT application so that results are shown on web pages and can be accessed by authenticated users from anywhere; a client-server model with the MQTT protocol is used for the IoT application.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_75-Development_of_a_Smart_Sensor_Array.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drug Repositioning for Coronavirus (COVID-19) using Different Deep Learning Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140374</link>
        <id>10.14569/IJACSA.2023.0140374</id>
        <doi>10.14569/IJACSA.2023.0140374</doi>
        <lastModDate>2023-03-30T12:42:48.5270000+00:00</lastModDate>
        
        <creator>Afaf A. Morad</creator>
        
        <creator>Mohamed A. Aborizka</creator>
        
        <creator>Fahima A. Maghraby</creator>
        
        <subject>Antiviral drugs; computational drug repositioning; coronavirus; deep learning; drug-target interactions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In December 2019, the COVID-19 epidemic emerged in Wuhan, China, and soon hundreds of millions were infected. Several efforts were therefore made to identify commercially available drugs that could be repurposed against COVID-19. Inferring potential drug indications through computational drug repositioning is an efficient approach: the drug repositioning problem is a top-K recommendation function that presents the most likely drugs for specific diseases based on drug- and disease-related data. Accurate prediction of drug-target interactions (DTI) is very important for drug repositioning, and deep learning (DL) models have recently been exploited for promising DTI prediction performance; encoder-decoder architectures can be utilized to build such models. In this paper, a deep learning-based drug repositioning approach is proposed, composed of two experimental phases. First, different deep learning encoder-decoder architecture models are trained and evaluated on the benchmark DAVIS dataset, using two evaluation metrics: mean square error and the concordance index. Second, antiviral drugs for COVID-19 are predicted using the deep learning models trained in the first phase; the predicted antiviral drug lists are compared with a recently published antiviral drug list for COVID-19 using the concordance index metric. The overall experimental results of both phases show that the three most accurate deep learning compound-encoder/protein-encoder architectures are Morgan/AAC, CNN/AAC, and CNN/CNN, with the best values for the mean square error, the first-phase concordance index, and the second-phase concordance index.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_74-Drug_Repositioning_for_Coronavirus_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Big Data Architecture for Analysis: The Challenges on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140373</link>
        <id>10.14569/IJACSA.2023.0140373</id>
        <doi>10.14569/IJACSA.2023.0140373</doi>
        <lastModDate>2023-03-30T12:42:48.5100000+00:00</lastModDate>
        
        <creator>Abdessamad Essaidi</creator>
        
        <creator>Mostafa Bellafkih</creator>
        
        <subject>Social media; useful information; big data analysis; stream processing; classification tweets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Streams of social media big data have become an important issue, but existing analytics methods and tools may not be able to extract useful information from this massive amount of data. The questions then become: how do we create a high-performance platform and method to efficiently analyse social networks&#39; big data, and how do we develop a suitable mining algorithm for finding useful information in it? In this work, we propose a new hierarchical big data analysis architecture for understanding human interaction, and we present a new method to measure the usefulness of tweets from Twitter users based on three factors of the tweet text. Finally, we use the resulting score to detect and classify useful tweets by degree of interest.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_73-A_New_Big_Data_Architecture_for_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model for Covid-19 Detection using CT-Scans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140372</link>
        <id>10.14569/IJACSA.2023.0140372</id>
        <doi>10.14569/IJACSA.2023.0140372</doi>
        <lastModDate>2023-03-30T12:42:48.4800000+00:00</lastModDate>
        
        <creator>Nagwa G. Ali</creator>
        
        <creator>Fahad K. El Sheref</creator>
        
        <creator>Mahmoud M. El khouly</creator>
        
        <subject>Covid-19; coronavirus; transfer-learning; CT-scan and ensemble method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Although some believe it has been wiped out, the coronavirus is striking again, and controlling this epidemic necessitates early detection of coronavirus disease. Computed tomography (CT) scan images allow fast and accurate screening for COVID-19. This study seeks to develop the most precise model for identifying and classifying COVID-19 by building an automated approach on top of transfer-learning CNN models. Transfer learning models such as VGG16, ResNet50, and Xception are employed: VGG16 achieves 98.39% accuracy, ResNet50 achieves 97.27%, and Xception achieves 96.6%, while a hybrid model built using the stacking ensemble method achieves 98.71% accuracy. According to these findings, the hybrid architecture offers greater accuracy than any single architecture.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_72-A_Hybrid_Model_for_Covid_19_Detection_using_CT_Scans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network Model based Students’ Engagement Detection in Imbalanced DAiSEE Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140371</link>
        <id>10.14569/IJACSA.2023.0140371</id>
        <doi>10.14569/IJACSA.2023.0140371</doi>
        <lastModDate>2023-03-30T12:42:48.4470000+00:00</lastModDate>
        
        <creator>Mayanda Mega Santoni</creator>
        
        <creator>T. Basaruddin</creator>
        
        <creator>Kasiyah Junus</creator>
        
        <subject>Convolutional neural networks; imbalanced data; deep learning; PCA; COVID-19; online learning; students’ engagement; SVD; SMOTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The COVID-19 pandemic has significantly changed learning processes. Learning, which had generally been carried out face-to-face, has now turned online. This learning strategy has both advantages and challenges. On the one hand, online learning is unbound by space and time, allowing it to take place anywhere and anytime. On the other hand, it faces a common challenge in the lack of direct interaction between educators and students, making it difficult to assess students&#39; engagement during an online learning process. Therefore, it is necessary to conduct research aimed at automatically detecting students&#39; engagement during online learning. The data used in this research were derived from the DAiSEE dataset (Dataset for Affective States in E-Environments), which comprises ten-second video recordings of students. This dataset classifies engagement levels into four categories: very low, low, high, and very high. However, the issue of imbalanced data in the DAiSEE dataset had yet to be addressed in previous research. This data imbalance can cause errors in the classification model, resulting in overfitting or underfitting. In this study, a Convolutional Neural Network, a deep learning model, was utilized for feature extraction on the DAiSEE dataset. The OpenFace library was used to perform facial landmark detection, head pose estimation, facial action unit recognition, and eye gaze estimation. The pre-processing stages included data selection, dimensionality reduction, and normalization. The PCA and SVD techniques were used for dimensionality reduction, and the data were then oversampled using the SMOTE algorithm. The training and testing data were split at an 80:20 ratio. The results obtained from this experiment exceeded the benchmark evaluation values on the DAiSEE dataset, achieving a best accuracy of 77.97% using the SVD dimensionality reduction technique.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_71-Convolutional_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Focused Web Crawling for Context Aware Recommender System using Machine Learning and Text Mining Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140370</link>
        <id>10.14569/IJACSA.2023.0140370</id>
        <doi>10.14569/IJACSA.2023.0140370</doi>
        <lastModDate>2023-03-30T12:42:48.4330000+00:00</lastModDate>
        
        <creator>Venugopal Boppana</creator>
        
        <creator>P. Sandhya</creator>
        
        <subject>Recommender Systems (RSs); context-aware; softmax regression; deep autoencoder; multimedia information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In today’s world, the Recommender System (RS) is the most effective means of managing the huge amount of multimedia content available on the internet. An RS learns user preferences and the relationships among users and items, helping users discover new interesting items across different media types such as text, audio, video, and images. An RS can act as an information filtering model that overcomes issues related to over-fitting and excess information. In this work, a new distributed framework named DAE-SR (Deep AutoEncoder based Softmax Regression) is introduced for context-aware recommender systems; it focuses on user-item interaction and offers personalized recommendations. The proposed model is implemented in Python, and the Foursquare dataset is used for experimentation. The performance of the proposed context-aware RS benefits both users and service providers: it supports the decision-making process and offers relevant recommendations to users. Performance is evaluated in terms of various metrics such as accuracy, recall, and precision. In the implementation outcomes, the proposed strategy achieved good accuracy (98.33%), precision (98%), run time (1.43 ms), and recall (98.1%). Thus, the proposed DAE-SR classifier performs better than other models and offers dependable and relevant recommendations to users.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_70-Distributed_Focused_Web_Crawling_for_Context_Aware_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Name Entity Recognition Based on Lexical Enhancement and Global Pointer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140369</link>
        <id>10.14569/IJACSA.2023.0140369</id>
        <doi>10.14569/IJACSA.2023.0140369</doi>
        <lastModDate>2023-03-30T12:42:48.4000000+00:00</lastModDate>
        
        <creator>Pu Zhang</creator>
        
        <creator>Wentao Liang</creator>
        
        <subject>MNER; nested NER; Global Pointer; lexical enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Named entity recognition (NER) in biological sources, also called medical named entity recognition (MNER), attempts to identify and categorize medical terminology in electronic records. Deep neural networks have recently demonstrated substantial effectiveness in MNER. However, Chinese MNER faces two issues: existing models cannot exploit lexical information, and entities may be nested. To address these problems, we propose a model that can handle both nested and non-nested entities. The model uses a simple lexical enhancement method to merge lexical information into each character&#39;s vector representation, and then applies the Global Pointer approach for entity recognition. Furthermore, we retrain a pre-trained model on a Chinese medical corpus to incorporate medical knowledge, resulting in an F1 score of 68.13% on the nested dataset CMeEE, 95.56% on the non-nested dataset CCKS2017, 85.89% on CCKS2019, and 92.08% on CCKS2020. These results demonstrate the efficacy of our proposed model.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_69-Medical_Name_Entity_Recognition_Based_on_Lexical_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Hypertension using Machine Learning: A Case Study at Petra University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140368</link>
        <id>10.14569/IJACSA.2023.0140368</id>
        <doi>10.14569/IJACSA.2023.0140368</doi>
        <lastModDate>2023-03-30T12:42:48.3870000+00:00</lastModDate>
        
        <creator>Yasmin Sakka</creator>
        
        <creator>Dina Qarashai</creator>
        
        <creator>Ahmad Altarawneh</creator>
        
        <subject>Hypertension; machine learning; medical records; sensitivity; specificity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Hypertension is a key risk factor for cardiovascular disease (CVD). Identifying high-risk individuals is crucial, since it saves time and money before any sophisticated, invasive, or costly diagnostic technique is used. This task can be accomplished in part with modern machine learning techniques: a prediction model may be created from several easily obtained, non-invasive, and inexpensive indicator characteristics of high-risk individuals. This research is an effort to forecast hypertension risk in Petra University’s population, based on a case study conducted at the university between 2019 and 2020. The model was developed from the medical records of patients who visited the health center. The research comprised a comprehensive dataset of 31,500 patients: 12,658 hypertension cases and 18,842 non-hypertensive cases. SMOTE was applied to the dataset for hypertension classification. The SMOTE-k-nearest-neighbour prediction model performs exceptionally well (83.9% classification accuracy, 85.1% specificity, 83.3% sensitivity, and 89.6% AUC) when compared to other classifiers using 10-fold cross-validation with full features and no oversampling on the hypertension dataset. The data extracted from the Petra University Health Center are considered very helpful for ML and were used to produce a decision tree identifying the data related to hypertension.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_68-Predicting_Hypertension_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Organic Waste Classification as Raw Materials for Briquettes using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140367</link>
        <id>10.14569/IJACSA.2023.0140367</id>
        <doi>10.14569/IJACSA.2023.0140367</doi>
        <lastModDate>2023-03-30T12:42:48.3530000+00:00</lastModDate>
        
        <creator>Norbertus Tri Suswanto Saptadi</creator>
        
        <creator>Ansar Suyuti</creator>
        
        <creator>Amil Ahmad Ilham</creator>
        
        <creator>Ingrid Nurtanio</creator>
        
        <subject>Classification; organic waste; raw material; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Organic waste should be utilized by the community so that it does not simply end up in landfills but is instead processed into something constructive, useful, and of high economic value; in particular, it can be converted into raw material for the manufacture of biomass briquettes. Machine learning techniques have been developed for technological applications, object detection, and categorization; artificial neural network methods combined with algorithms such as the Naive Bayes classifier work together to determine and identify certain characteristics in a digital dataset. The manufacturing method goes through several processes, with a waste classification model as the source of learning data. The image data cover five types: coconut shells, sawdust, corn cobs, rice husks, and plant leaves. The research aims to identify and classify waste as organic or non-organic, making it easier to sort. Testing the organic waste application on digital images yields an accuracy rate of 97%. The model design carried out on the training data is useful for producing a data model.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_67-Modeling_of_Organic_Waste_Classification_as_Raw_Materials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art of the Swarm Ship Technology for Alga Bloom Rapid Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140366</link>
        <id>10.14569/IJACSA.2023.0140366</id>
        <doi>10.14569/IJACSA.2023.0140366</doi>
        <lastModDate>2023-03-30T12:42:48.3400000+00:00</lastModDate>
        
        <creator>Denny Darlis</creator>
        
        <creator>Indra Jaya</creator>
        
        <creator>Karlisa Priandana</creator>
        
        <creator>Yopi Novita</creator>
        
        <creator>Ayi Rachmat</creator>
        
        <subject>Swarm ship; algal bloom; cyber physical system; rapid monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Swarm intelligence has become an interesting topic for employing multi-agent robotics for specific purposes. Its capabilities of multi-agent coordination, scalability, and goal-oriented control in spatial and temporal environments have already been demonstrated in several applications, such as military patrol and leader-follower coordination of drones. In marine environments, swarm intelligence adopted by ASVs or ROVs has been used for water quality and environmental monitoring with sufficiently optimized results, making it convenient for rapid assessments. This paper explains, from a state-of-the-art perspective, the arrangement for building a trusted cyber-physical system for algal bloom rapid assessment using swarm ship technology. The minimum requirements for sensing, vehicle control, and communication with other systems are explored, along with the algorithms best suited to monitor algal bloom events before they spread to larger areas. Several models are presented to show the robustness of autonomous unmanned ship control. From this point of view, we conclude that swarm ship technology has become an important potential implementation for near-real-time in situ monitoring, compared to other decision-making methods such as laboratory examination or remote sensing. The results of this review open the way to realizing swarm ship technology in a cyber-physical system for efficiently monitoring algal blooms in specific areas in near real time.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_66-State_of_the_Art_of_Swarm_Ship_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Image Encryption using Composition of RaMSH-1 Map Transposition and Logistic Map Keystream Substitution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140365</link>
        <id>10.14569/IJACSA.2023.0140365</id>
        <doi>10.14569/IJACSA.2023.0140365</doi>
        <lastModDate>2023-03-30T12:42:48.3230000+00:00</lastModDate>
        
        <creator>Rama Dian Syah</creator>
        
        <creator>Sarifuddin Madenda</creator>
        
        <creator>Ruddy J. Suhatril</creator>
        
        <creator>Suryadi Harmanto</creator>
        
        <subject>Encryption; Decryption; Digital Image; RaMSH-1 Map Transposition; Logistic Map Substitution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Digital communication of multimedia data (text, signal/audio, image, and video) over the internet plays an important role in the era of Industrial Revolution 4.0 and Society 5.0. However, the ease of exchanging personal and confidential digital information carries a high risk of hijacking by irresponsible parties. The development of reliable and robust data encryption methods is a solution to this risk. This paper proposes a novel data encryption method (called RaMSH-1) that combines modified Henon Map transposition encryption with Logistic Map keystream substitution encryption. The proposed algorithm simultaneously transposes data positions and substitutes data values randomly, and has an encryption key combination, or key space, of 1.05 &#215; 10^670. Images of various sizes, with varied color features, object shapes, and textures, have been tested. Based on the analysis of randomness, key sensitivity, and visual quality, the proposed encryption algorithm is shown to be resistant to differential, entropy, and brute-force attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_65-Digital_Image_Encryption_using_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Hybrid Deep Neural Network for Diagnosis of COVID-19 using Chest X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140364</link>
        <id>10.14569/IJACSA.2023.0140364</id>
        <doi>10.14569/IJACSA.2023.0140364</doi>
        <lastModDate>2023-03-30T12:42:48.2930000+00:00</lastModDate>
        
        <creator>Hussein Ahmed Ali</creator>
        
        <creator>Nadia Smaoui Zghal</creator>
        
        <creator>Walid Hariri</creator>
        
        <creator>Dalenda Ben Aissa</creator>
        
        <subject>COVID-19; Chest X-ray (CXR); Deep Learning (DL); Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In the last three years, the coronavirus (COVID-19) pandemic put healthcare systems worldwide under tremendous pressure. Imaging techniques, such as Chest X-Ray (CXR) images, play an essential role in diagnosing many diseases, including COVID-19. Recently, intelligent systems based on Machine Learning (ML) and Deep Learning (DL) have been widely utilized to distinguish COVID-19 from other upper respiratory diseases (such as viral pneumonia and lung opacity). Nevertheless, identifying COVID-19 from CXR images is challenging due to similar symptoms. To improve the diagnosis of COVID-19 using CXR images, this article proposes a new deep neural network model called Fast Hybrid Deep Neural Network (FHDNN), which consists of several convolutional layers and several dense layers. First, the dataset is preprocessed, the best features are extracted, and the feature set is expanded; it is then converted from two dimensions to one dimension to reduce training time and hardware requirements. The experimental results demonstrate that preprocessing and feature expansion before applying FHDNN lead to better detection accuracy and faster execution. Furthermore, FHDNN outperformed its counterparts, achieving an accuracy of 99.9%, recall of 99.9%, F1-score of 99.9%, and precision of 99.9% for the detection and classification of COVID-19. Accordingly, FHDNN is more reliable and can be considered a robust and faster model for COVID-19 detection.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_64-Fast_Hybrid_Deep_Neural_Network_for_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Consolidated Definition of Digital Transformation by using Text Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140363</link>
        <id>10.14569/IJACSA.2023.0140363</id>
        <doi>10.14569/IJACSA.2023.0140363</doi>
        <lastModDate>2023-03-30T12:42:48.2770000+00:00</lastModDate>
        
        <creator>Mohammed Hitham M. H</creator>
        
        <creator>Hatem Elkadi</creator>
        
        <creator>Neamat El Tazi</creator>
        
        <subject>Digital transformation; text mining; association rules; FP tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Digital transformation (DT) has become essential for the majority of organizations in both the public and private sectors. The term &quot;digital transformation&quot; has been used (and misused) so frequently that it is now somewhat ambiguous, and it has become imperative to give it some conceptual rigor. The objective of this study is to identify the major elements of digital transformation and to develop a proper definition of DT for the public and private sectors. For this purpose, 56 different definitions of DT collected from the available literature were analyzed; we found that previous work extracted elements from DT definitions manually, so text mining techniques (TF-IDF and FP-tree) are used here to identify the major constituents and consolidate them into generic DT definitions. The approach consists of five phases: 1) collecting and classifying DT definitions; 2) detecting synonyms; 3) extracting major elements (terms); 4) discussing and comparing DT elements; and 5) formulating DT definitions for different business categories. An evaluation tool was also developed to assess the level of DT-element coverage in the various definitions found in the literature and, as a validation, was applied to the formulated definitions.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_63-Consolidated_Definition_of_Digital_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Oil Palm Growth Rate Status with YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140361</link>
        <id>10.14569/IJACSA.2023.0140361</id>
        <doi>10.14569/IJACSA.2023.0140361</doi>
        <lastModDate>2023-03-30T12:42:48.2470000+00:00</lastModDate>
        
        <creator>Desta Sandya Prasvita</creator>
        
        <creator>Dina Chahyati</creator>
        
        <creator>Aniati Murni Arymurthy</creator>
        
        <subject>Automatic detection; deep learning; oil palm; YOLOv5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Oil palm plantations are essential to Indonesia as a source of foreign exchange and a provider of employment. However, large-scale land clearing is considered a cause of deforestation, which harms the environment and society, so plantations must be managed sustainably while preserving forests and biodiversity. One solution is to apply remote sensing technology. This research develops a multi-class detection method for the growth rate of oil palm trees, with five categories: healthy palm, dead palm, yellowish palm, mismanaged palm, and smallish palm. The deep learning-based object detection method YOLO version 5 (YOLOv5) is used, and the YOLOv5 network models YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x are compared. Parameter tuning is also carried out on the BCE (Binary Cross Entropy) with Logits loss function to handle the unbalanced data distribution across classes. The models with the highest mAP values are YOLOv5l and YOLOv5x, although YOLOv5x requires a longer training time. Hyperparameter optimization using hyperparameter evolution techniques was also carried out; however, it has not yet yielded improved results because the experiments conducted in this study are still limited.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_61-Automatic_Detection_of_Oil_Palm_Growth_Rate_Status.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach Used to Analyze the Sentiments of Romanized Text (Sindhi)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140362</link>
        <id>10.14569/IJACSA.2023.0140362</id>
        <doi>10.14569/IJACSA.2023.0140362</doi>
        <lastModDate>2023-03-30T12:42:48.2470000+00:00</lastModDate>
        
        <creator>Irum Naz Sodhar</creator>
        
        <creator>Suriani Sulaiman</creator>
        
        <creator>Abdul Hafeez Buller</creator>
        
        <creator>Anam Naz Sodhar</creator>
        
        <subject>Sentiment analysis; natural language processing; hybrid approach; python tool; Romanized Sindhi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Sentiment analysis is an important part of natural language processing (NLP). This study evaluated the sentiment of Romanized Sindhi Text (RST) using a hybrid approach and ground-truth values. The sentiment analysis methodology involves three major steps: data input, processing with the tool, and analysis of the data with evaluation of the results. One hundred RST sentences, each of which can be positive, neutral, or negative, were used in this study&#39;s sentiment analysis. The statements in the corpus are simple to understand and are used in everyday life. An online Python tool was used to process the text and obtain the results. The results showed that 86% of the sentences have neutral sentiment, 9% have negative sentiment, and only 5% have positive sentiment. The accuracy of the RST analysis, measured with an online calculator on the basis of the ground-truth values, was 87.02%; an error ratio of 12.98% was calculated from the accuracy obtained via the online confusion-matrix calculator.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_62-Hybrid_Approach_Used_to_Analyze_the_Sentiments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Suppressing Chest Radiograph Ribs for Improving Lung Nodule Visibility by using Circular Window Adaptive Median Outlier (CWAMO)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140359</link>
        <id>10.14569/IJACSA.2023.0140359</id>
        <doi>10.14569/IJACSA.2023.0140359</doi>
        <lastModDate>2023-03-30T12:42:48.2130000+00:00</lastModDate>
        
        <creator>Dnyaneshwar Kanade</creator>
        
        <creator>Jagdish Helonde</creator>
        
        <subject>Lung cancer; chest radiograph; contrast limited adaptive histogram equalization; median outlier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Chest radiograph ribs obstruct lung nodules. To see a nodule beneath the chest radiograph ribs, the ribs must be removed or suppressed. The paper describes a circular median filter approach for finding outliers in chest radiographs. The method uses 147 x-ray pictures from the Japanese Society of Radiological Technology (JSRT). Pixels with intensities two standard deviations above the median are median outliers. Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhances nodule visibility. The method is tested on modest chest radiographs and compared to the Budapest University Bone Shadow Eliminated X-Ray Dataset methodology. The initial test uses 50 modest chest radiographs (Test 1). The proposed approach is applied after active shape modelling (ASM) lung segmentation. True positive nodules are seen on 89% of chest radiographs of various subtleties. Test-2 and Test-3 used 20 subtlety-level photos. In Test-2, the peak signal-to-noise ratio (PSNR), mean-to-standard deviation ratio (MSR), and universal image quality index (IQI) are evaluated for the full image and compared to the existing algorithm. For all three parameters, the suggested technique outperforms the existing algorithm. Test-3 computes nodule MSR and compares it to Budapest University&#39;s Bone Shadow Eliminated Dataset and the original chest radiographs. The new algorithm improved nodule area contrast by 3.83% and 23.94% over these two references, respectively. This approach improves chest radiograph nodule visualization.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_59-Suppressing_Chest_Radiograph_Ribs_for_Improving_Lung_Nodule_Visibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disease Identification in Crop Plants based on Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140360</link>
        <id>10.14569/IJACSA.2023.0140360</id>
        <doi>10.14569/IJACSA.2023.0140360</doi>
        <lastModDate>2023-03-30T12:42:48.2130000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Victor Guevara-Ponce</creator>
        
        <creator>Carmen Torres-Cecl&#233;n</creator>
        
        <creator>John Ruiz-Alvarado</creator>
        
        <creator>Gloria Castro-Leon</creator>
        
        <creator>Ofelia Roque-Paredes</creator>
        
        <creator>Joselyn Zapata-Paulini</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>CNN; identification; models; pathogen; plant; classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The identification, classification and treatment of crop plant diseases are essential for agricultural production. Some of the most common diseases include root rot, powdery mildew, mosaic, leaf spot and fruit rot. Machine learning (ML) technology and convolutional neural networks (CNN) have proven to be very useful in this field. This work aims to identify and classify diseases in crop plants using CNNs with transfer learning, based on the Plant Village dataset of images of diseased plant leaves and their corresponding tags. For processing, the dataset, comprising more than 87 thousand images divided into 38 classes covering 26 disease types, was used. Three CNN models (DenseNet-201, ResNet-50 and Inception-v3) were used to identify and classify the images. The results showed that the DenseNet-201 and Inception-v3 models achieved an accuracy of 98% in plant disease identification and classification, slightly higher than the ResNet-50 model, which achieved an accuracy of 97%, demonstrating an effective and promising approach able to learn relevant features from the images and classify them accurately. Overall, ML in conjunction with CNNs proved to be an effective tool for identifying and classifying diseases in crop plants. The CNN models used in this work are a very good choice for this type of task, since they proved to have very high performance in classification tasks. In terms of accuracy, all three models are very accurate in image classification, achieving over 96% accuracy with large data sets.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_60-Disease_Identification_in_Crop_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Smarter Herbal Medication Delivery System Employing an AI-Powered Chatbot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140358</link>
        <id>10.14569/IJACSA.2023.0140358</id>
        <doi>10.14569/IJACSA.2023.0140358</doi>
        <lastModDate>2023-03-30T12:42:48.1830000+00:00</lastModDate>
        
        <creator>Maria Concepcion S. Vera</creator>
        
        <creator>Thelma D. Palaoag</creator>
        
        <subject>Chatbot; artificial intelligence; intelligent interactive system; applied computing; consumer health; medicinal plants; traditional medicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Medicinal plants are a practical and cost-effective alternative for treating common ailments, especially in areas with limited access to public healthcare systems. This paper introduces a prototype of an intelligent interactive system that merges chatbot technology with artificial intelligence (AI) to address inquiries related to treatment alternatives and the application of different medicinal plants for prevalent health conditions, thereby promoting and advancing alternative healing practices in the locality. The platform is a hybrid online chat service that prioritizes consumer health and encourages the responsible use of medicinal plants. This study used a survey questionnaire to gather information from traditional healers, users, and concerned government agencies about how well the system prototype performed. The system&#39;s performance was assessed in terms of effectiveness, efficiency, and customer satisfaction, with respondents providing an aggregate rating of &quot;Strongly Agree&quot;. Significantly, this study lays the groundwork for education on the use of local medicinal plants to cure illnesses and highlights the importance of providing users with accurate and reliable information on the safe use of medicinal plants. This approach empowers users to make informed decisions about the plants they use, reducing the likelihood of harmful effects and optimizing the potential benefits of medicinal plants. By supporting this effort, this study contributes to the achievement of the third Sustainable Development Goal of the UN, which aims to promote health and well-being by offering the local populace a low-cost option as a first line of defense for improving their health and wellness.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_58-Implementation_of_a_Smarter_Herbal_Medication_Delivery_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Detections of Norway Lobster (Nephrops Norvegicus) Burrows using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140357</link>
        <id>10.14569/IJACSA.2023.0140357</id>
        <doi>10.14569/IJACSA.2023.0140357</doi>
        <lastModDate>2023-03-30T12:42:48.1670000+00:00</lastModDate>
        
        <creator>Atif Naseer</creator>
        
        <subject>Nephrops norvegicus; deep learning; stock assessment; faster RCNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Marine experts face many challenges in habitat monitoring of marine species. One of the biggest challenges is the underwater environment and species movement; another is the collection of data on marine species. In the past, people used camera sensors and satellite data for data collection, but today scientists use Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), and sledges equipped with high-definition still and video cameras to record underwater footage. The ocean is home to thousands of species, which makes it challenging to monitor any specific species. This work focuses on one species, the Norway lobster (Nephrops norvegicus). Nephrops norvegicus is a commercial species in Europe that generates millions of dollars yearly. It lives under the seabed, leaving behind burrow structures on the sea floor, and spends most of its time burrowed. Scientists currently monitor the habitat of Nephrops norvegicus through underwater television (UWTV) surveys collected yearly on many European grounds. The collected data is reviewed manually by experts, who count the burrows on a sheet. This work focuses on the automatic detection of Nephrops burrows from underwater videos using deep learning techniques. We trained the Faster R-CNN models Inceptionv2, MobileNetv2, ResNet50, and ResNet101. Instead of training the models from scratch, we used transfer learning to fine-tune these networks. The data was obtained from the Gulf of Cadiz (FU30) station. Twenty-eight different sets of experiments were performed. The models are evaluated quantitatively using mean Average Precision (mAP) and precision and recall curves, and qualitatively by visually presenting the output. The results show that deep learning techniques are very helpful for marine scientists assessing Nephrops norvegicus abundance.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_57-Advancing_Detections_of_Norway_Lobster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Denoising using Wavelet Cycle Spinning and Non-local Means Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140356</link>
        <id>10.14569/IJACSA.2023.0140356</id>
        <doi>10.14569/IJACSA.2023.0140356</doi>
        <lastModDate>2023-03-30T12:42:48.1500000+00:00</lastModDate>
        
        <creator>Giat Karyono</creator>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Siti Azirah Asmai</creator>
        
        <subject>Image denoising; discrete wavelet transform (DWT); dual-tree complex wavelet transform (DT-CWT); non-local means (NLM); cycle spinning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Removing as much noise as possible from an image while preserving its fine details is a complex and challenging task. We propose a wavelet-based and non-local means (NLM) denoising method to overcome this problem. Two well-known wavelet transforms, the dual-tree complex wavelet transform (DT-CWT) and the discrete wavelet transform (DWT), have been used to decompose the noisy image into several wavelet coefficients sequentially. NLM filtering and universal hard thresholding with cycle spinning have been applied to the approximation and detail coefficients, respectively. The inverse two-dimensional DWT was applied to the modified wavelet coefficients to obtain the denoised image. We conducted experiments with the twelve test images of the Set12 dataset, adding additive white Gaussian noise with variances of 10 to 90 in increments of 10. Three evaluation metrics, namely the peak signal-to-noise ratio (PSNR), the structural similarity index metric (SSIM), and the mean square error (MSE), have been used to evaluate the effectiveness of the proposed denoising method. From these measurements, the proposed denoising method outperforms DT-CWT, DWT, and NLM at almost all noise levels except the noise level of 10. At that noise level, the proposed denoising method is lower than NLM but better than DT-CWT and DWT.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_56-Image_Denoising_using_Wavelet_Cycle_Spinning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Signature Verification for Forgery Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140355</link>
        <id>10.14569/IJACSA.2023.0140355</id>
        <doi>10.14569/IJACSA.2023.0140355</doi>
        <lastModDate>2023-03-30T12:42:48.1370000+00:00</lastModDate>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Farhan Aadil</creator>
        
        <creator>Mehr Yahya Durrani</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Fast Euclidean distance; m-mediod; local fisher discriminant analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The increasing trend of using e-versions of documents for transmission and storage requires electronic verification of the sender/author. This research presents an efficient and robust online handwritten signature verification system targeting verification rates better than available state-of-the-art systems in the presence of skilled forgeries. Fourier analysis is employed on the signatures to represent feature vectors in a higher-dimensional space, followed by Local Fisher Discriminant Analysis to obtain a compressed representation while enhancing inter-class scatter between signature patterns. Signature modeling is performed using an m-mediod-based modeling approach, where m mediods are placed to represent the data distribution in each class. Connected component labeling is applied to binarized images of Urdu text to extract ligatures, which are separated into primary ligatures and diacritics. Fast Euclidean Distance is used as the (dis)similarity measure. A total of 2414 signature samples, including skilled forgeries, are considered in our study. The evaluation of the proposed system on the Japanese signature dataset provided by SigWiComp2013 yielded more promising results than the competitors.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_55-Online_Signature_Verification_for_Forgery_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validating the Usability Evaluation Model for Hearing Impaired Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140354</link>
        <id>10.14569/IJACSA.2023.0140354</id>
        <doi>10.14569/IJACSA.2023.0140354</doi>
        <lastModDate>2023-03-30T12:42:48.1030000+00:00</lastModDate>
        
        <creator>Shelena Soosay Nathan</creator>
        
        <creator>Nor Laily Hashim</creator>
        
        <creator>Azham Hussain</creator>
        
        <creator>Ashok Sivaji</creator>
        
        <creator>Mohd Affendi Ahmad Pozin</creator>
        
        <subject>Usability; hearing impaired; validation; evaluation; MAEHI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Usability is an important element that enables identification of the efficiency of an application or product. However, many applications have been developed for general users’ needs and are unable to provide adequate application usage for disabled people. This study focuses on the development of a usability evaluation model and the validation of the proposed model by experts. The developed model was later evaluated by a group of experts through the focus group method, which made it possible to confirm that the 13 variables derived to develop the model are appropriately placed and useful in the evaluation process. The results show that the selected variables are appropriate for identifying the usability of mobile applications for the hearing impaired, through three variables tested, namely gain satisfaction with the model, satisfaction with the model presentation, and support for tasks. Conclusively, the developed model can identify the usability of mobile applications for the hearing impaired and helps identify useful criteria to include during the application development process in real life. As a future study, the model can be tested among hearing impaired people and practitioners to consolidate the results obtained, which contributes to usability practitioners and application developers for the disabled.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_54-Validating_the_Usability_Evaluation_Model_for_Hearing_Impaired.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Network Forensic Analysis: Combining Packet Capture Data and Social Network Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140353</link>
        <id>10.14569/IJACSA.2023.0140353</id>
        <doi>10.14569/IJACSA.2023.0140353</doi>
        <lastModDate>2023-03-30T12:42:48.0730000+00:00</lastModDate>
        
        <creator>Irwan Sembiring</creator>
        
        <creator>Suharyadi</creator>
        
        <creator>Ade Iriani</creator>
        
        <creator>Jenni Veronika Br Ginting</creator>
        
        <creator>Jusia Amanda Ginting</creator>
        
        <subject>PCAP analysis; social network analysis; network forensic; network communication pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Log data from computers used for network forensic analysis is ineffective at identifying specific security threats. Log data limitations include the difficulty of reconstructing communication patterns between nodes and the inability to identify more advanced security threats. By combining traditional log data analysis methods with a more effective combination of approaches, a more comprehensive view of communication patterns can be achieved, which can then help identify potential security threats more effectively. It is difficult to determine the specific benefits of combining Packet Capture (PCAP) and Social Network Analysis (SNA) when performing forensics. This article proposes a new approach to forensic analysis that combines PCAP and SNA to overcome some of the limitations of traditional methods. The purpose of this work is to improve the accuracy of network forensic analysis by combining PCAP and SNA to provide a more comprehensive view of network communication patterns. Network forensics combining PCAP analysis and SNA provides more comprehensive results: PCAP analysis is used to examine network traffic, conversation statistics, protocol distribution, packet content and round-trip times, while SNA maps communication patterns between nodes and identifies the most influential key players within the network. PCAP analysis efficiently captures and analyzes network packets, and SNA provides insight into the relationships and communication patterns between devices on the network.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_53-A_Novel_Approach_to_Network_Forensic_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing Bisection Method on Forex Trading Database for Early Diagnosis of Inflection Point</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140352</link>
        <id>10.14569/IJACSA.2023.0140352</id>
        <doi>10.14569/IJACSA.2023.0140352</doi>
        <lastModDate>2023-03-30T12:42:48.0570000+00:00</lastModDate>
        
        <creator>Agustinus Noertjahyana</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Zeratul Izzah Mohd. Yusoh</creator>
        
        <creator>M. Zainal Arifin</creator>
        
        <subject>Bisection method; moving average; inflection point; forex trading; decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Many people traded in the forex market during the COVID-19 pandemic in the hope of earning money, but they experienced losses due to the lack of information- and technology-based tools for the available daily data. Traders often rely only on moving averages of trading data, even though this information needs further processing to find the right inflection point. The objective of this research is to find inflection points in a Forex trading database. The inflection point between two points on a moving average can be determined with the Bisection method, which is used here because it guarantees convergence. The results show that the points obtained from the bisection calculation on the moving average provide fairly accurate decision support for the location of the inflection point. Over 10,000 data points, the standard deviation is 0.71 points, which is very small compared to an average of 20 pips (the points used as the difference in price values in forex). The bisection method locates the inflection point with an accuracy of 87%.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_52-Implementing_Bisection_Method_on_Forex_Trading_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scouting Firefly Algorithm and its Performance on Global Optimization Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140350</link>
        <id>10.14569/IJACSA.2023.0140350</id>
        <doi>10.14569/IJACSA.2023.0140350</doi>
        <lastModDate>2023-03-30T12:42:48.0270000+00:00</lastModDate>
        
        <creator>Jolitte A. Villaruz</creator>
        
        <creator>Bobby D. Gerardo</creator>
        
        <creator>Ariel O. Gamao</creator>
        
        <creator>Ruji P. Medina</creator>
        
        <subject>Metaheuristics; firefly algorithm; modified firefly algorithm; global optimization; scout bee; exploitation and exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>For effective optimization, metaheuristics should maintain a proper balance between exploration and exploitation. However, the standard firefly algorithm (FA) exhibits some limitations in its exploration process that can eventually lead to premature convergence, affecting its performance and adding uncertainty to the optimization results. To address these constraints, this study introduces an additional novel search mechanism for the standard FA inspired by the behavior of the scout bee in the artificial bee colony (ABC) algorithm, termed the &quot;Scouting FA&quot;. Specifically, fireflies stuck in local optima take directed extra random walks to escape toward the region of the optimum solution, thus improving convergence accuracy. Empirical findings on five standard benchmark functions validate the effects of this modification and reveal that Scouting FA is superior to its original version.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_50-Scouting_Firefly_Algorithm_and_its_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Steganographic Algorithm based on Linear Fractional Transformation and Chaotic Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140351</link>
        <id>10.14569/IJACSA.2023.0140351</id>
        <doi>10.14569/IJACSA.2023.0140351</doi>
        <lastModDate>2023-03-30T12:42:48.0270000+00:00</lastModDate>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Muhammad Fahad Khan</creator>
        
        <subject>Steganography; information security; chaotic map vulnerabilities; enhanced chaotic maps; S-box Design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The fundamental objectives of a steganographic technique are to achieve both robustness and high-capacity for the hidden information. This paper proposes a steganographic algorithm that satisfies both of these objectives, based on enhanced chaotic maps. The algorithm consists of two phases. In the first phase, a cryptographic substitution box is constructed using a novel fusion technique based on logistic and sine maps. This technique overcomes existing vulnerabilities of chaotic maps, such as frail chaos, finite precision effects, dynamical degradation, and limited control parameters. In the second phase, a frequency-domain-based embedding scheme is used to transform the secret information into ciphertext by employing the substitution boxes. The statistical strength of the algorithm is assessed through several tests, including measures of homogeneity, correlation, mean squared error, information entropy, contrast, peak signal-to-noise ratio, energy, as well as evaluations of the algorithm&#39;s performance under JPEG compression and image degradation. The results of these tests demonstrate the algorithm&#39;s robustness against various attacks and provide evidence of its high-capacity for securely embedding secret information with good visual quality.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_51-A_Robust_Steganographic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNN-BiLSTM Hybrid Model for Network Anomaly Detection in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140349</link>
        <id>10.14569/IJACSA.2023.0140349</id>
        <doi>10.14569/IJACSA.2023.0140349</doi>
        <lastModDate>2023-03-30T12:42:47.9970000+00:00</lastModDate>
        
        <creator>Bauyrzhan Omarov</creator>
        
        <creator>Omirlan Auelbekov</creator>
        
        <creator>Azizah Suliman</creator>
        
        <creator>Ainur Zhaxanova</creator>
        
        <subject>IoT; internet of things; network anomalies; network security; anomaly attack; machine learning; supervised learning; UNSW-NB15</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Anomaly detection in Internet of Things network traffic is a critical aspect of intrusion and attack detection, in which a deviation from typical behavior signals the existence of malicious or inadvertent assaults, faults, flaws, and other issues. The necessity of examining a large number of security events to identify anomalous behavior of smart devices adds to the urgency of choosing machine-learning and deep-learning models for identifying anomalies in network traffic. For the challenge of binary data categorization, a software implementation of an intrusion detection system based on supervised-learning algorithms has been completed. The UNSW-NB15 open dataset, which contains 2,540,044 records (vectors of TCP/IP network connection signals and their associated class labels), is used to train and test the system. This research compares different machine-learning models and proposes a CNN-BiLSTM hybrid model for IoT network intrusion detection. Testing of the built framework yields metrics for measuring classification quality and the running duration of the algorithms for different ratios of training and test samples.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_49-CNN_BiLSTM_Hybrid_Model_for_Network_Anomaly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>1D Convolutional Neural Network for Detecting Heart Diseases using Phonocardiograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140348</link>
        <id>10.14569/IJACSA.2023.0140348</id>
        <doi>10.14569/IJACSA.2023.0140348</doi>
        <lastModDate>2023-03-30T12:42:47.9800000+00:00</lastModDate>
        
        <creator>Meirzhan Baikuvekov</creator>
        
        <creator>Abdimukhan Tolep</creator>
        
        <creator>Daniyar Sultan</creator>
        
        <creator>Dinara Kassymova</creator>
        
        <creator>Leilya Kuntunova</creator>
        
        <creator>Kanat Aidarov</creator>
        
        <subject>Deep learning; CNN; heart disease; phonocardiogram; classification; detection; PCG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>According to estimates by the World Health Organization, heart disease is the largest cause of mortality throughout the globe, so diagnosing heart diseases in their earliest stages is very essential. Diagnosis of cardiovascular disease may be carried out by detecting disturbances in cardiac signals, one recording of which is the phonocardiogram, and this can be accomplished in a number of ways. Using phonocardiogram (PCG) inputs and deep learning, the researchers aim to develop a classification system for different types of heart illness. Slicing and normalization of the signal served as the first preprocessing step of the study, followed by a wavelet-based transformation method that employs the analytic Morlet mother wavelet. The results of the decomposition are first visualized as a scalogram and afterwards used as input for the deep CNN. In this investigation, the analyzed PCG signals were separated into categories denoting normal and pathological heart sounds. The data was divided into training and test sets in an 80% to 20% ratio. The developed model is assessed in terms of clinical diagnostic quality, sensitivity, specificity and AUC-ROC value. As a result, the proposed method was determined to be superior to other mother wavelets as well as other classifier approaches. Consequently, we were able to obtain an electronic stethoscope that has a diagnostic accuracy of more than 90% in identifying cardiac problems. To be more specific, the proposed deep CNN model has an accuracy of 93.25% in identifying aberrant heart sounds and 93.50% in identifying regular heartbeats. In addition, given that an examination can be completed in only 15 seconds, speed is the primary advantage of the suggested stethoscope.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_48-1D_Convolutional_Neural_Network_for_Detecting_Heart_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Moroccan Dialect based on ML and Social Media Content Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140347</link>
        <id>10.14569/IJACSA.2023.0140347</id>
        <doi>10.14569/IJACSA.2023.0140347</doi>
        <lastModDate>2023-03-30T12:42:47.9470000+00:00</lastModDate>
        
        <creator>Mouaad Errami</creator>
        
        <creator>Mohamed Amine Ouassil</creator>
        
        <creator>Rabia Rachidi</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Soufiane Hamida</creator>
        
        <creator>Abdelhadi Raihani</creator>
        
        <subject>Sentiment analysis; Arabic Moroccan dialect; tweets; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>As technology continues to evolve, humans tend to follow suit, and social media has now become the de facto method of communication. As tends to happen with verbal communication, people express their opinions in written form, and through an analysis of their words one can extract what an individual wants from a product, a topic, or an event. By looking at the emotions expressed in such content, governments, businesses, and people can learn a lot that can help them improve their strategies. Therefore, in this study, we use different algorithms to improve Moroccan sentiment classification. The first step is to gather and prepare Moroccan Dialectal Arabic Twitter comments. Then, many different combinations of feature extraction (n-grams), weighting schemes (BOW/TF-IDF), and word embeddings for feature construction are applied to obtain the best classification models. We used Naive Bayes, Random Forests, Support Vector Machines, Logistic Regression, and LSTM to classify the data we prepared. Our machine learning approach, which incorporates sentiment analysis, was designed to analyze Twitter comments written in Modern Standard Arabic or Moroccan Dialectal Arabic. As a final benchmark of our paper, we were a sliver shy of the 70% mark in accuracy, relying on the SVM algorithm. Although not a game-changing result, this was enough to encourage us to continue developing our model further.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_47-Sentiment_Analysis_on_Moroccan_Dialect_based_on_ML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Multiclass Brain Tumor Detection using Convolutional Neural Networks and Magnetic Resonance Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140346</link>
        <id>10.14569/IJACSA.2023.0140346</id>
        <doi>10.14569/IJACSA.2023.0140346</doi>
        <lastModDate>2023-03-30T12:42:47.9170000+00:00</lastModDate>
        
        <creator>Mohamed Amine Mahjoubi</creator>
        
        <creator>Soufiane Hamida</creator>
        
        <creator>Oussama El Gannour</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Ahmed El Abbassi</creator>
        
        <creator>Abdelhadi Raihani</creator>
        
        <subject>Deep learning; convolutional neural networks; brain tumor; classification; magnetic resonance imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Recently, deep learning algorithms, particularly Convolutional Neural Networks (CNNs), have been applied extensively for image recognition and classification tasks, with successful results in the field of medicine, such as medical image analysis. Radiologists have a hard time categorizing this lethal illness since brain tumors include a variety of tumor cells. Lately, computer-aided diagnostic methods have employed magnetic resonance imaging (MRI) to help with the diagnosis of brain cancers. CNNs are often used in medical image analysis, including the detection of brain cancers. This effort was motivated by the difficulty that physicians have in correctly detecting brain tumors, particularly when they are in the early stages of brain bleeding. The proposed model categorizes brain images into four distinct classes: Normal, Glioma, Meningioma, and Pituitary. The proposed CNN network reaches 95% recall, 95.44% accuracy, and a 95.36% F1-score.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_46-Improved_Multiclass_Brain_Tumor_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elitist Animal Migration Optimization for Protein Structure Prediction based on 3D Off-Lattice Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140345</link>
        <id>10.14569/IJACSA.2023.0140345</id>
        <doi>10.14569/IJACSA.2023.0140345</doi>
        <lastModDate>2023-03-30T12:42:47.9000000+00:00</lastModDate>
        
        <creator>Ezgi Deniz &#220;lker</creator>
        
        <subject>Animal migration optimization; bioinformatics; elitism; metaheuristics; protein structure prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Predicting the structure of proteins has long been a center of attention for researchers. The aim is to make a reliable prediction of the protein structure by obtaining the minimum energy values among amino acid interactions. From the generated shape of the amino acids, the functionality of the protein can be determined. However, this is known as one of the most challenging tasks in the field of bioinformatics considering its high computational complexity. Metaheuristic algorithms are widely preferred by researchers from various fields, since their performance is quite satisfactory in solving such complex problems. The Animal Migration Optimization (AMO) algorithm is a metaheuristic approach which mimics the behavior of animals during the migration process. In this research, to reach a high solution quality, an elitist version of the Animal Migration Optimization (ELAMO) algorithm is considered and, in particular, applied to the Protein Structure Prediction (PSP) problem. The performance of ELAMO is tested on several well-studied artificial and real protein sequences, and then compared with powerful optimization algorithms which are specially designed for solving the PSP problem. The results show that ELAMO is quite capable of solving this problem. Hence, it can be used as an efficient optimizer for solving complex problems that require better solution quality in the field of bioinformatics.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_45-Elitist_Animal_Migration_Optimization_for_Protein_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Twofish, Blowfish, and Advanced Encryption Standard for Secured Data Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140344</link>
        <id>10.14569/IJACSA.2023.0140344</id>
        <doi>10.14569/IJACSA.2023.0140344</doi>
        <lastModDate>2023-03-30T12:42:47.8700000+00:00</lastModDate>
        
        <creator>Kwame Assa-Agyei</creator>
        
        <creator>Funminiyi Olajide</creator>
        
        <subject>Cryptography; twofish; blowfish; advanced encryption standard; throughput; data encryption; decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Nowadays, network security is becoming an increasingly significant and demanding research area. Threats and attacks on information and Internet security are getting increasingly difficult to detect. As a result, encryption has emerged as a solution and now plays a critical role in information security systems. Many techniques are required to safeguard shared data. In this work, the encryption and decryption times and throughput (speed) of the three most commonly used block cipher algorithms, Twofish, Blowfish, and AES, were investigated using different file types. Experimental comparison of such symmetric encryption algorithms consumes considerable computer resources, including CPU time, memory, and battery power, and previous research has yielded diverse results in terms of time complexity, speed, space complexity, power consumption, and security. This research, however, evaluated the effectiveness of each algorithm based on the following parameters: process time and speed. An application was developed in Python 3.10 for data simulation to test different file formats and to measure encryption process time and speed.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_44-A_Comparative_Study_of_Twofish_Blowfish_and_Advanced_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Frequent High Resolution of Optical Sensor Image Acquisition using Satellite-Based SAR Image for Disaster Mitigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140343</link>
        <id>10.14569/IJACSA.2023.0140343</id>
        <doi>10.14569/IJACSA.2023.0140343</doi>
        <lastModDate>2023-03-30T12:42:47.8530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yushin Nakaoka</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Nobuhiko Yamaguchi</creator>
        
        <creator>Wen Liang Yeoh</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Frequent observation; Synthetic Aperture Radar: SAR; super resolution; Generative Adversarial Network: GAN; GAN-based conversion of images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>A method for frequent acquisition of high-resolution optical sensor imagery from satellite-based SAR (Synthetic Aperture Radar) images for disaster mitigation is proposed. The proposed method is based on Generative Adversarial Network (GAN)-based super resolution and conversion from SAR imagery to the corresponding optical sensor imagery in order to increase observation frequency. Through experiments, it is found that it is possible to convert SAR imagery to the corresponding optical sensor imagery, and also that the spatial resolution of the SAR imagery is improved remarkably. Thus, the initial stage of a disaster (a small-scale disaster) can be detected with resolution-enhanced optical sensor imagery derived from the corresponding SAR imagery, which helps prevent the secondary occurrence of a relatively large-scale disaster. It is also found that optical sensor imagery with 2.5 m spatial resolution can be acquired every 2.5 days in the case that only Sentinel-1/SAR and Sentinel-2/MSI (Multi Spectral Imager) are used, for instance.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_43-Method_for_Frequent_High_Resolution_of_Optical_Sensor_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Combining Deep Learning Object Recognition with Drones for Forest Fire Detection and Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140342</link>
        <id>10.14569/IJACSA.2023.0140342</id>
        <doi>10.14569/IJACSA.2023.0140342</doi>
        <lastModDate>2023-03-30T12:42:47.8400000+00:00</lastModDate>
        
        <creator>Mimoun YANDOUZI</creator>
        
        <creator>Mounir GRARI</creator>
        
        <creator>Mohammed BERRAHAL</creator>
        
        <creator>Idriss IDRISSI</creator>
        
        <creator>Omar MOUSSAOUI</creator>
        
        <creator>Mostafa AZIZI</creator>
        
        <creator>Kamal GHOUMID</creator>
        
        <creator>Aissa KERKOUR ELMIAD</creator>
        
        <subject>Forest fire; deep learning; drones; unmanned aerial vehicles; object detection; YOLO; Faster R-CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Forest fires are a global environmental problem that can cause significant damage to natural resources and human lives. The increasing frequency and severity of forest fires have resulted in substantial losses of natural resources. To mitigate this, an effective fire detection and monitoring system is crucial. This work aims to explore and review current advances in forest fire detection and monitoring using drones, or unmanned aerial vehicles (UAVs), combined with deep learning techniques. The utilization of drones fully equipped with specific sensors and cameras provides a cost-effective and efficient solution for real-time monitoring and early fire detection. In this paper, we conduct a comprehensive analysis of the latest developments in deep learning object detection, such as YOLO (You Only Look Once), R-CNN (Region-based Convolutional Neural Network), and their variants, with a focus on their potential application in the field of forest fire monitoring. The performed experiments show promising results across multiple metrics, making this approach a valuable tool for fire detection and monitoring.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_42-Investigation_of_Combining_Deep_Learning_Object_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balancing Technological Advances with User Needs: User-centered Principles for AI-Driven Smart City Healthcare Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140341</link>
        <id>10.14569/IJACSA.2023.0140341</id>
        <doi>10.14569/IJACSA.2023.0140341</doi>
        <lastModDate>2023-03-30T12:42:47.8230000+00:00</lastModDate>
        
        <creator>Ali H. Hassan</creator>
        
        <creator>Riza bin Sulaiman</creator>
        
        <creator>Mansoor A. Abdulgabber</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <subject>Smart healthcare; patient monitoring; smart city; artificial intelligence; user-centered</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, the integration of artificial intelligence (AI) technologies has greatly benefited smart city healthcare, meeting the growing demand for affordable, efficient, and real-time healthcare services. Patient monitoring is one area where artificial intelligence has shown great promise. Improved health outcomes have been made possible by the advancement of AI-based monitoring systems, which enable more personalized and continuous patient monitoring. However, to fully maximize the benefits of these systems, a user-centered approach is essential, which prioritizes patients&#39; needs and experiences while ensuring their privacy and autonomy are respected. This study focuses on the application of user-centered design principles in the development and deployment of AI-driven monitoring systems in smart city healthcare. Addressing the challenges and opportunities of AI-driven monitoring systems, the article considers issues such as privacy and security concerns, data accuracy, and user acceptance. Finally, some possible future directions to the challenges are suggested. A user-centered approach to AI monitoring systems is recommended for healthcare providers to enhance patient experience in smart city healthcare.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_41-Balancing_Technological_Advances_with_User_Needs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Software Defects based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140340</link>
        <id>10.14569/IJACSA.2023.0140340</id>
        <doi>10.14569/IJACSA.2023.0140340</doi>
        <lastModDate>2023-03-30T12:42:47.8070000+00:00</lastModDate>
        
        <creator>Nawal Elshamy</creator>
        
        <creator>Amal AbouElenen</creator>
        
        <creator>Samir Elmougy</creator>
        
        <subject>Software defect detection; NDSGA-II; hyperband; imbalance dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Defects in software are one of the critical problems in the software engineering community because they produce inaccurate results and negatively affect the quality and reliability of the software. These defects must be detected in the early stages of software development. Researchers have used Software Defect Detection (SDD) techniques to predict module fault-proneness. By applying hyperparameter optimization techniques and addressing data imbalance in defect prediction, this paper proposes and develops an SDD model with high performance and generalization capability. To classify defects in software modules, machine learning algorithms and ensemble learning techniques are used on balanced datasets. The balanced datasets are obtained through a hybrid of the Synthetic Minority Oversampling Technique (SMOTE) and Support Vector Machine (SVM). To obtain the optimal hyperparameters needed for the used classifiers and for the dataset balancing algorithms, the Non-dominated Sorting Genetic Algorithm II (NDSGA-II) is used. To reduce time and save other resources, the Hyperband technique, which is a multi-fidelity optimization method, is used within NDSGA-II. A 10-fold Cross Validation (CV) is applied to overcome the overfitting and underfitting problems. The accuracy, recall, F-measure, and ROC AUC metrics are used to evaluate the SDD model. The results show that the proposed model predicts defects more accurately than the compared studies.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_40-Automatic_Detection_of_Software_Defects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proxy Re-encryption Scheme based on the Timed-release in Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140339</link>
        <id>10.14569/IJACSA.2023.0140339</id>
        <doi>10.14569/IJACSA.2023.0140339</doi>
        <lastModDate>2023-03-30T12:42:47.7930000+00:00</lastModDate>
        
        <creator>Yifeng Yin</creator>
        
        <creator>Wanyi Zhou</creator>
        
        <creator>Zhaobo Wang</creator>
        
        <creator>Yong Gan</creator>
        
        <creator>Yanhua Zhang</creator>
        
        <subject>Timed-release; edge-computing; multi-dimensional virtual permutation; proxy re-encryption; symmetric encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With the growth of the Industrial Internet, various types of data show explosive growth. Data is being moved from the cloud to the edge for computing more frequently, and edge computing has become an important factor affecting the deep application of Industrial Internet platforms. However, the security of data transmission and sharing is not addressed. Therefore, this paper proposes a security scheme based on timed-release encryption, multi-dimensional virtual permutation, and proxy re-encryption (PRE) to protect the confidentiality of data during transmission; symmetric cryptography is employed to encrypt the transmitted data. At the same time, a time server is used, making it impossible for data receivers to obtain information about the data before the specified time arrives. This addresses application scenarios of data transmission and sharing with timed-release requirements, improves efficiency for large amounts of data, and strengthens data security. Finally, the security of the scheme was proved theoretically. Compared to existing PRE schemes, this scheme adds timed-release controlled access, offers resistance to ciphertext attacks and end-to-end security features, and uses fewer bilinear pairing operations in the algorithm. The performance was tested experimentally, and the results show that the scheme improves efficiency while ensuring security and has significant advantages in terms of data security and private data protection.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_39-Proxy_Re_encryption_Scheme_based_on_the_Timed_Release.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A High-Performance Approach for Irregular License Plate Recognition in Unconstrained Scenarios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140338</link>
        <id>10.14569/IJACSA.2023.0140338</id>
        <doi>10.14569/IJACSA.2023.0140338</doi>
        <lastModDate>2023-03-30T12:42:47.7600000+00:00</lastModDate>
        
        <creator>Hoanh Nguyen</creator>
        
        <subject>License plate recognition; deep learning; convolutional neural network; keypoint detector; YOLO detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>This paper proposes a novel framework for locating and recognizing irregular license plates in real-world complex scene images. In the proposed framework, an efficient deep convolutional neural network (CNN) structure specially designed for keypoint estimation is first employed to predict the corner points of license plates. Then, based on the predicted corner points, perspective transformation is performed to align the detected license plates. Finally, a lightweight deep CNN structure based on the YOLO detector is designed to predict license plate characters. The character recognition network can predict license plate characters without depending on license plate layouts (i.e., license plates of single-line or double-line text). Experiment results on CCPD and AOLP datasets demonstrate that the proposed method obtains better recognition accuracy compared with previous methods. The proposed model also achieves impressive inference speed and can be deployed in real-time applications.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_38-A_High_Performance_Approach_for_Irregular_License_Plate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Current Development, Challenges, and Future Trends in Cloud Computing: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140337</link>
        <id>10.14569/IJACSA.2023.0140337</id>
        <doi>10.14569/IJACSA.2023.0140337</doi>
        <lastModDate>2023-03-30T12:42:47.7470000+00:00</lastModDate>
        
        <creator>Hazzaa N. Alshareef</creator>
        
        <subject>Cloud computing; security challenges; machine learning; resource scheduling; information and communication technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cloud computing is a new paradigm in information and communication technologies (ICTs) that provides the ability to access shared pools of different computing resources available to many cloud users on a pay-per-use or on-demand basis. It has transformed the delivery model of ICT from a product to a service. This provides several advantages for institutions, companies, and users through savings and reduced capital expenditure via lower operating expenses. This paper provides a comprehensive survey of cloud computing. It first develops an understanding of cloud computing in general and discusses its advantages, current development, challenges, and future trends. Subsequently, a detailed discussion of cloud computing architectures, service models, fault tolerance mechanisms, service selection methods, adoption by industry, and scheduling of cloud-based resources is presented. Nonetheless, cloud computing faces many obstacles which expose it to a number of limitations. Some of these challenges include security of data, fault tolerance, and load balancing. A number of techniques proposed in the literature to cope with these challenges are discussed and analyzed. Experimental data and usage trends validate the popularity of cloud computing and its adoption in recent years. Future trends in cloud computing support the use of intelligent machine learning (ML) techniques and new technologies to cope with some of the challenges and make cloud computing more efficient, secure, and commercially viable for wide acceptance.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_37-Current_Development_Challenges_and_Future_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text-based Sarcasm Detection on Social Networks: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140336</link>
        <id>10.14569/IJACSA.2023.0140336</id>
        <doi>10.14569/IJACSA.2023.0140336</doi>
        <lastModDate>2023-03-30T12:42:47.7300000+00:00</lastModDate>
        
        <creator>Amal Alqahtani</creator>
        
        <creator>Lubna Alhenaki</creator>
        
        <creator>Abeer Alsheddi</creator>
        
        <subject>Sentiment analysis; figurative language; sarcasm detection; irony; machine learning; deep learning; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Sarcasm is a sophisticated phenomenon used for conveying a meaning that differs from what is being said, and it is usually used to express displeasure or ridicule others. Sentiment analysis is a process of uncovering the subjective information from a text. Detecting figurative language such as irony or sarcasm, is a focused challenging research field of sentiment analysis. Detecting and understanding the use of sarcasm in social networks could provide businesses and politicians with significant insight, since it reflects people’s opinions about certain topics, news, and products. This has especially become relevant recently because sarcastic texts have been trending on social networks and are being posted by millions of active users. As a result of this situation, there is now an increasing amount of research on the detection of sarcasm in social network posts. Many works have been published on sarcasm detection, and they include a wide variety of techniques based on rules, lexicons, traditional machine learning, deep learning, and transformers. However, sarcasm detection is a challenging task due to the ambiguity and non-straightforward nature of sarcastic text. In addition, very few reviews have been conducted on the research in this area. Therefore, this systematic review mainly aims at exploring the newly published sarcasm detection articles on social networks in the years between 2019 and 2022. Several databases were extensively searched, and 30 articles that met the criteria were included. The selected articles were reviewed based on their approaches, datasets, and evaluation metrics. The findings emphasized that deep learning is the most commonly used technique for sarcasm detection in recent literature, and Twitter and F-measure are the most used source and performance metric, respectively. Finally, this article presents a brief discussion regarding the challenges in sarcasm detection and future research directions.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_36-Text_based_Sarcasm_Detection_on_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Support Vector Regression based Localization Approach using LoRaWAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140335</link>
        <id>10.14569/IJACSA.2023.0140335</id>
        <doi>10.14569/IJACSA.2023.0140335</doi>
        <lastModDate>2023-03-30T12:42:47.7130000+00:00</lastModDate>
        
        <creator>Saeed Ahmed Magsi</creator>
        
        <creator>Mohd Haris Bin Md Khir</creator>
        
        <creator>Illani Bt Mohd Nawi</creator>
        
        <creator>Abdul Saboor</creator>
        
        <creator>Muhammad Aadil Siddiqui</creator>
        
        <subject>LoRaWAN; localization; RSSI; fingerprinting; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The Internet of Things (IoT) domain has experienced significant growth in recent times. There has been extensive research conducted in various areas of IoT, including localization. Localization of Long Range (LoRa) nodes in outdoor environments is an important task for various applications, including asset tracking and precision agriculture. In this research article, a localization approach using Support Vector Regression (SVR) has been implemented to predict the location of the end node using LoRaWAN. The experiments are conducted in the outdoor campus environment. The SVR used the Received Signal Strength Indicator (RSSI) fingerprints to locate the end nodes. The results show that the proposed method can locate the end node with a minimum error of 36.26 meters and a mean error of 171.59 meters.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_35-Support_Vector_Regression_based_Localization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake News Classification Web Service for Spanish News by using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140334</link>
        <id>10.14569/IJACSA.2023.0140334</id>
        <doi>10.14569/IJACSA.2023.0140334</doi>
        <lastModDate>2023-03-30T12:42:47.6830000+00:00</lastModDate>
        
        <creator>Patricio Xavier Moreno-Vallejo</creator>
        
        <creator>Gisel Katerine Bastidas-Guacho</creator>
        
        <creator>Patricio Rene Moreno-Costales</creator>
        
        <creator>Jefferson Jose Chariguaman-Cuji</creator>
        
        <subject>Fake news; LSTM; classification; web service; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The use of digital media, such as social networks, has promoted the spreading of fake news on a large scale. Therefore, several Machine Learning techniques, such as artificial neural networks, have been used for fake news detection and classification. These techniques are widely used due to their learning capabilities. Besides, models based on artificial neural networks can be easily integrated into social media and websites to spot fake news early and avoid their propagation. Nevertheless, most fake news classification models are available only for English news, limiting the possibility of detecting fake news in other languages, such as Spanish. For this reason, this study proposes implementing a web service that integrates a deep learning model for the classification of fake news in Spanish. To determine the best model, the performance of several neural network architectures, including MLP, CNN, and LSTM, was evaluated using the F1 score. The LSTM architecture performed best, with an F1 score of 0.746. Finally, the efficiency of the web service was evaluated using temporal behavior as a metric, resulting in an average response time of 1.08 seconds.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_34-Fake_News_Classification_Web_Service_for_Spanish_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Evaluation of Genetic Algorithms to Solve the DNA Assembly Optimization Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140333</link>
        <id>10.14569/IJACSA.2023.0140333</id>
        <doi>10.14569/IJACSA.2023.0140333</doi>
        <lastModDate>2023-03-30T12:42:47.6670000+00:00</lastModDate>
        
        <creator>Hachemi Bennaceur</creator>
        
        <creator>Meznah Almutairy</creator>
        
        <creator>Nora Alqhtani</creator>
        
        <subject>Genetic algorithms; traveling salesman problem; quadratic assignment problem; DNA fragments assembly problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>This paper aims to highlight the motivations for investigating genetic algorithms (GAs) to solve the DNA Fragment Assembly (DNAFA) problem. The DNAFA problem is an optimization problem that attempts to reconstruct an original DNA sequence by finding the shortest DNA sequence from a given set of fragments. This paper is a continuation of our previous research paper, in which the existence of a polynomial-time reduction of DNAFA into the Traveling Salesman Problem (TSP) and the Quadratic Assignment Problem (QAP) was discussed. Taking advantage of this reduction, this work conceptually designed a genetic algorithm (GA) platform to solve the DNAFA problem. This platform offers several ingredients enabling us to create several variants of GA solvers for the DNAFA optimization problem. The main contribution of this paper is the design of an efficient GA variant obtained by carefully integrating different GA operators of the platform. To that end, this work individually studied the effects of different GA operators on the performance of solving the DNAFA problem. This study has the advantage of benefiting from prior knowledge of the performance of these operators in the contexts of the TSP and QAP problems. The best designed GA variant shows a significant improvement in accuracy (overlap score), reaching more than 172% of what is reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_33-Experimental_Evaluation_of_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Text Document Classification Framework using BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140332</link>
        <id>10.14569/IJACSA.2023.0140332</id>
        <doi>10.14569/IJACSA.2023.0140332</doi>
        <lastModDate>2023-03-30T12:42:47.6500000+00:00</lastModDate>
        
        <creator>Momna Ali Shah</creator>
        
        <creator>Muhammad Javed Iqbal</creator>
        
        <creator>Neelum Noreen</creator>
        
        <creator>Iftikhar Ahmed</creator>
        
        <subject>Deep learning; text classification; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Due to the rapid advancement of technology, the volume of online text data from numerous disciplines is increasing significantly over time. Therefore, more work is needed to create systems that can effectively classify text data according to its content, facilitating processing and the extraction of crucial information. Traditional, non-automated systems rely on manual feature extraction and classification, which is error-prone and time-consuming because the most appropriate algorithms for feature extraction and classification must be selected by hand; such procedures are typically resource-intensive (computational, human, etc.) and are therefore not a viable solution. To address the shortcomings of traditional approaches, we offer a unique text categorization strategy based on a well-known DL model called BERT. The proposed framework is trained and tested using benchmark text datasets, such as the UCI email dataset, which includes spam and non-spam emails, and the BBC News dataset, which covers multiple categories such as tech, sports, politics, business, and entertainment. The system achieved the highest accuracy of 91.4% and can be used by different organizations to classify text-based data with high performance. The effectiveness of the proposed framework is evaluated using multiple evaluation metrics, such as Accuracy, Precision, and Recall.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_32-An_Automated_Text_Document_Classification_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm Transform DNA Sequences to Improve Accuracy in Similarity Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140331</link>
        <id>10.14569/IJACSA.2023.0140331</id>
        <doi>10.14569/IJACSA.2023.0140331</doi>
        <lastModDate>2023-03-30T12:42:47.6200000+00:00</lastModDate>
        
        <creator>Hoang Do Thanh Tung</creator>
        
        <creator>Phuong Vuong Quang</creator>
        
        <subject>Similarity search; data transformation; DNA sequence; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Similarity search of DNA sequences is a fundamental problem in bioinformatics, serving as the basis for many other problems. At its core is the calculation of the similarity value between sequences, for which the Edit distance (ED) is commonly used due to its high accuracy, despite its slow speed. Transforming the original DNA sequences into numerical vectors that retain unique property-based features makes computation on the transformed data much faster, many times faster than a direct comparison of the original sequences. Additionally, a long DNA sequence typically requires less storage after transformation, yielding good data compression. The challenge is to develop algorithms based on features that maintain biological significance while ensuring search accuracy, which is the problem to be solved here. Previous methods often used pure mathematical statistics, such as frequency statistics and matrix transformations, to construct features. In this paper, an improved algorithm is proposed based on both biological significance and mathematical statistics for transforming gene data into numerical vectors for ease of storage and improved accuracy in similarity search between DNA sequences. Based on the experimental results, the new algorithm improves the accuracy of similarity calculations while maintaining good performance.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_31-An_Algorithm_Transform_DNA_Sequences_to_Improve_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning CNN Model-Based Anomaly Detection in 3D Brain MRI Images using Feature Distribution Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140330</link>
        <id>10.14569/IJACSA.2023.0140330</id>
        <doi>10.14569/IJACSA.2023.0140330</doi>
        <lastModDate>2023-03-30T12:42:47.6030000+00:00</lastModDate>
        
        <creator>Amarendra Reddy Panyala</creator>
        
        <creator>M. Baskar</creator>
        
        <subject>Deep learning; brain tumor; disease prediction; anomaly detection; CNN; FDS; ACW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Towards detecting anomalies in brain images, different approaches are discussed in the literature. Features such as white mass values and shape features have been used to identify the presence of brain tumors. Various deep learning models, such as neural networks, have been adapted to the tumor detection problem but fail to reach maximum accuracy in detecting brain tumors. To handle this problem, an Adaptive Feature Centric Distribution Similarity Based Anomaly Detection Model with Convolution Neural Network (AFCD-CNN) is designed for the disease prediction problem. The model considers black-and-white mass features along with the distribution of features. First, the method applies the Multi-Hop Neighbor Analysis (MHNA) algorithm to normalize the brain image. Next, it uses the Adaptive Mass Determined Segmentation (AMDS) algorithm, which groups the pixels of the MRI according to white and black mass values. The method extracts the ROI from the segmented image and convolves the features with a CNN in the training phase. The CNN is designed to convolve the features into one dimension. The output layer neurons estimate different Feature Distribution Similarity (FDS) values against various features to compute the Anomaly Class Weight (ACW). According to the ACW value, anomaly detection is performed with accuracy of up to 97%, while the time complexity is reduced to 32 seconds.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_30-Deep_Learning_CNN_Model_Based_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Graph based Representation to Extract Value from Open Government Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140329</link>
        <id>10.14569/IJACSA.2023.0140329</id>
        <doi>10.14569/IJACSA.2023.0140329</doi>
        <lastModDate>2023-03-30T12:42:47.5730000+00:00</lastModDate>
        
        <creator>Kawtar YOUNSI DAHBI</creator>
        
        <creator>Dalila CHIADMI</creator>
        
        <creator>Hind LAMHARHAR</creator>
        
        <subject>Knowledge graph; open government data; knowledge graph construction; public procurement; fraud detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Open government data refers to data that is made available by government entities to be freely reused by anyone and for any purpose. The potential benefits of open government data are numerous and include increasing transparency and accountability, enhancing citizens&#39; quality of life, and boosting innovation. However, realizing these benefits is not always straightforward, as the usage of this raw data often faces challenges related to its format, structure, and heterogeneity which hinder its processability and integration. In response to these challenges, we propose an approach to maximize the usage of open government data and achieve its potential benefits. This approach leverages knowledge graphs to extract value from open government data and drive the construction of a knowledge graph from structured, semi-structured, and non-structured formats. It involves the extraction, transformation, semantic enrichment, and integration of heterogeneous open government data sources into an integrated and semantically enhanced knowledge graph. Learning mechanisms and ontologies are used to efficiently construct the knowledge graph. We evaluate the effectiveness of the approach using real-world public procurement data and show that it can detect potential fraud such as favoritism.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_29-Knowledge_Graph_based_Representation_to_Extract_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-Based Image Retrieval using Encoder based RGB and Texture Feature Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140328</link>
        <id>10.14569/IJACSA.2023.0140328</id>
        <doi>10.14569/IJACSA.2023.0140328</doi>
        <lastModDate>2023-03-30T12:42:47.5570000+00:00</lastModDate>
        
        <creator>Charulata Palai</creator>
        
        <creator>Pradeep Kumar Jena</creator>
        
        <creator>Satya Ranjan Pattanaik</creator>
        
        <creator>Trilochan Panigrahi</creator>
        
        <creator>Tapas Kumar Mishra</creator>
        
        <subject>CBIR; CNN Encoded Feature; LBP; CSLBP; LDP; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Recent developments in digital photography and the use of social media on smartphones have boosted the demand for querying images by their visual semantics. Content-Based Image Retrieval (CBIR) is a well-identified research area in the domain of image and video data analysis. The major challenges of a CBIR system are (a) deriving the visual semantics of the query image and (b) finding all similar images in the repository. The objective of this paper is to precisely define the visual semantics using hybrid feature vectors. In this paper, a CBIR system using encoder-based feature fusion is proposed. The CNN encoding features of the RGB channels are fused with the encoded texture features of LBP, CSLBP, and LDP separately. The retrieval performance of the different fused features is tested using three public datasets, i.e., Corel-1K, Caltech, and 102flower. The results show that class properties are better retained using the LDP with RGB encoded features, which helps to enhance the classification and retrieval performance for all three datasets. The average precision is 94.5% for Corel-1K, 89.7% for Caltech, and 88.7% for 102flower. The average f1-score is 89.5% for Caltech and 88.5% for 102flower. The improvement in the f1-score value implies the proposed fused feature is more robust in handling the class imbalance problem.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_28-Content_Based_Image_Retrieval_using_Encoder_based_RGB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative based Vehicular Ad-hoc Network Intrusion Detection System using Optimized Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140327</link>
        <id>10.14569/IJACSA.2023.0140327</id>
        <doi>10.14569/IJACSA.2023.0140327</doi>
        <lastModDate>2023-03-30T12:42:47.5270000+00:00</lastModDate>
        
        <creator>Azath M</creator>
        
        <creator>Vaishali Singh</creator>
        
        <subject>Vehicular Ad hoc network; intrusion detection; tabu search based particle swarm optimization; war strategy optimization; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The Vehicular Ad hoc Network (VANET) can be used to provide secured information to user vehicles. However, safeguarding this information from vulnerabilities and threats is a great challenge these days. Therefore, it is necessary to provide a secure solution that improves security through the deployment of advanced technology. In this context, a blockchain-based VANET structure for secured communication is planned, incorporating enhanced confidentiality, scalability, and privacy. Clusters are formed using the k-means clustering model, and cluster head selection is carried out with the Tabu Search-based Particle Swarm Optimization (TS-PSO) algorithm. The proposed approach aims to mitigate delay while enhancing throughput and energy efficiency. Meanwhile, the deployed blockchain enhances reliability and security. Moreover, the novel War Strategy Optimization (WSO) based Support Vector Machine (SVM) model (Optimized SVM) is used for trust-based collaborative intrusion detection in the VANET. Our work aims to detect intrusion and non-intrusion classes. Our proposed work also prevents repetitive detection processes, thereby enhancing security by rewarding the vehicles. An experimental analysis is carried out to ensure its usage in detecting malicious nodes among resource-constrained vehicles and to achieve better security, energy utilization, and end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_27-Collaborative_based_Vehicular_Ad_hoc_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Deep Learning based Hybrid Model for Image Caption Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140326</link>
        <id>10.14569/IJACSA.2023.0140326</id>
        <doi>10.14569/IJACSA.2023.0140326</doi>
        <lastModDate>2023-03-30T12:42:47.5100000+00:00</lastModDate>
        
        <creator>Mehzabeen Kaur</creator>
        
        <creator>Harpreet Kaur</creator>
        
        <subject>CNN; RNN; LSTM; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In recent years, with the increase in the use of different social media platforms, image captioning approaches play a major role in automatically describing a whole image in a natural language sentence. Image captioning plays a significant role in a computer-based society. Image captioning is the process of automatically generating a natural language textual description of an image using artificial intelligence techniques. Computer vision and natural language processing are the key aspects of an image captioning system. A Convolutional Neural Network (CNN) belongs to computer vision and is used for object detection and feature extraction, while Natural Language Processing (NLP) techniques help generate the textual caption of the image. Generating a suitable image description by machine is a challenging task, as it is based on object detection, location, and their semantic relationships in a human-understandable language such as English. In this paper, our aim is to develop an encoder-decoder based hybrid image captioning approach using VGG16, ResNet50, and YOLO. VGG16 and ResNet50 are pre-trained feature extraction models trained on millions of images. YOLO is used for real-time object detection. The approach first extracts image features using VGG16, ResNet50, and YOLO and concatenates the results into a single file. Finally, LSTM and BiGRU are used for the textual description of the image. The proposed model is evaluated using BLEU, METEOR, and ROUGE scores.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_26-An_Efficient_Deep_Learning_based_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Autonomous Multi-Agent Framework using Quality of Service to prevent Service Level Agreement Violations in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140325</link>
        <id>10.14569/IJACSA.2023.0140325</id>
        <doi>10.14569/IJACSA.2023.0140325</doi>
        <lastModDate>2023-03-30T12:42:47.4970000+00:00</lastModDate>
        
        <creator>Jaspal Singh</creator>
        
        <creator>Major Singh Goraya</creator>
        
        <subject>Cloud computing; multi-agent framework; SLA violations; energy consumption; history agent; Poisson distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cloud is a specialized computing technology accommodating several million users to provide seamless services via the internet. The adoption of this valued technology is growing rapidly with the increase in the number of users. One of the major issues with the cloud is that it receives a huge volume of workloads requesting resources to complete their execution. While executing these workloads, the cloud suffers from service level agreement (SLA) violations, which impact the performance and reputation of the cloud. Therefore, an effective design is required that supports faster and optimal execution of workloads without any violation of the SLA. To fill this gap, this article proposes an autonomous multi-agent framework that minimizes the SLA violation rate in workload execution. The proposed framework includes seven major agents: the user agent, system agent, negotiator agent, coordinator agent, monitoring agent, arbitrator agent, and history agent. All these agents work cooperatively to enable the effective execution of workloads irrespective of their dynamic nature. Along with effective execution of workloads, the proposed model also achieves minimized energy consumption in data centres. The inclusion of a history agent within the framework enables the model to predict future requirements based on records of resource utilization. The proposed model follows the Poisson distribution to generate random numbers that are further used for evaluation purposes. Simulations prove that the model is more reliable in reducing SLA violations compared to existing works. The proposed method resulted in an average SLA violation rate of 55.71% for 1200 workloads and an average energy consumption of 47.84 kWh for 1500 workloads.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_25-An_Autonomous_Multi_Agent_Framework_using_Quality_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Frequency Domain Improvements to Texture Discrimination Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140324</link>
        <id>10.14569/IJACSA.2023.0140324</id>
        <doi>10.14569/IJACSA.2023.0140324</doi>
        <lastModDate>2023-03-30T12:42:47.4630000+00:00</lastModDate>
        
        <creator>Ibrahim Cem Baykal</creator>
        
        <subject>Machine vision; ANN; SVM; pattern recognition; co-occurrence; texture feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>As the production speeds of factories increase, it becomes more and more challenging to inspect products in real time. The goal of this article is to devise a computationally efficient texture discrimination algorithm by first testing candidate algorithms' ability to localize defects and then increasing their efficiency by removing their less effective parts. Accordingly, the abilities of the most popular texture classification algorithms, such as the GLCM, the LBP, and the SDH, to localize defects are tested on different datasets. These tests reveal that GLCM and SDH perform better on small windows. Frequency properties of the textures are used to fine-tune the parameters of these algorithms. Further experiments on three different datasets prove that the accuracy of the algorithms is nearly doubled while the processing time is decreased considerably.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_24-Frequency_Domain_Improvements_to_Texture_Discrimination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Research on the Motion Control of the Sorting Manipulator based on Machine Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140323</link>
        <id>10.14569/IJACSA.2023.0140323</id>
        <doi>10.14569/IJACSA.2023.0140323</doi>
        <lastModDate>2023-03-30T12:42:47.4470000+00:00</lastModDate>
        
        <creator>Kuandong Peng</creator>
        
        <creator>Zufeng Wang</creator>
        
        <subject>Machine vision; manipulator; motion control; camera calibration; item sorting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With the development of production technology, manipulators are gradually introduced in advanced production manufacturing industries to complete some tasks such as picking and sorting. However, the traditional manipulator has a complicated sorting process and low production efficiency. In order to improve the accuracy of sorting and reduce the labor intensity of workers, this paper studied the motion control of the sorting manipulator with machine vision. After placing four kinds of objects of different shapes on the conveyor belt, experiments were conducted on the catching and sorting process of the manipulator under different experimental environments, different conveyor belt speeds, and with or without machine vision. It was found that the overall success rate of the sorting robotic arm using machine vision for catching objects of different shapes was as high as 96%, and the sorting accuracy was as high as 97.91%. Therefore, it is concluded that the manipulator can achieve high accuracy in catching and sorting objects with the guidance of machine vision, and the adoption of machine vision has a positive impact on the motion control of the sorting manipulator.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_23-The_Research_on_the_Motion_Control_of_the_Sorting_Manipulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Predictive Approach to Improving Agricultural Productivity in Morocco through Crop Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140322</link>
        <id>10.14569/IJACSA.2023.0140322</id>
        <doi>10.14569/IJACSA.2023.0140322</doi>
        <lastModDate>2023-03-30T12:42:47.4170000+00:00</lastModDate>
        
        <creator>Rachid Ed-daoudi</creator>
        
        <creator>Altaf Alaoui</creator>
        
        <creator>Badia Ettaki</creator>
        
        <creator>Jamal Zerouaoui</creator>
        
        <subject>Precision agriculture; artificial intelligence; machine learning; crop recommendation; Morocco</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Agricultural productivity is a critical component of sustainable economic growth, particularly in developing countries. Morocco, with its vast agricultural potential, is in need of advanced technologies to optimize crop productivity. Precision farming is one such technology, which incorporates the use of artificial intelligence and machine learning to analyze data from various sources and make informed decisions about crop management. In this study, we propose a web-based crop recommendation system that leverages ML algorithms to predict the most suitable crop to harvest based on environmental factors such as soil nutrient levels, temperature, and precipitation. We evaluated the performance of five ML algorithms (Decision Tree, Na&#239;ve Bayes, Random Forest, Logistic Regression, and Support Vector Machine) and identified Random Forest as the best-performing algorithm. Despite the promising results, we faced several challenges, including the limited availability of data and the need for field validation of the results. Nonetheless, our platform aims to provide free and open-source precision farming solutions to Moroccan farmers to improve agricultural productivity and contribute to sustainable economic growth in the country.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_22-A_Predictive_Approach_To_Improving_Agricultural_Productivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Programming Approach in Aggregate Production Planning Model under Uncertainty</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140321</link>
        <id>10.14569/IJACSA.2023.0140321</id>
        <doi>10.14569/IJACSA.2023.0140321</doi>
        <lastModDate>2023-03-30T12:42:47.4000000+00:00</lastModDate>
        
        <creator>Umi Marfuah</creator>
        
        <creator>Mutmainah</creator>
        
        <creator>Andreas Tri Panudju</creator>
        
        <creator>Umar Mansyuri</creator>
        
        <subject>Aggregate production planning; artificial neural network; dynamic programming; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In order to achieve a competitive edge in the market, one of the most essential components of effective operations management is aggregate production planning, abbreviated as APP. The sources of uncertainty discussed in the APP model include uncertainty in demand, uncertainty of production costs, and uncertainty of storage costs. The problem of APP usually involves many imprecise, conflicting and incommensurable objective functions. The application of APP in real conditions is often inaccurate, because some information is incomplete or cannot be obtained. The aim of this study is to develop an APP model under uncertainty with a dynamic programming (DP) approach to meet consumer demand and minimize total costs during the planning period. The APP model includes several parameters, including market demand, production costs, inventory costs, production levels and production capacity. After describing the problem, the optimal APP model is formulated using artificial neural network (ANN) techniques in the demand forecasting process and fuzzy logic (FL) in the DP framework. The ANN technique forecasts the input demand for APP, while the FL technique within the DP framework minimizes the total cost during the planning period and accommodates uncertainties. The model input is historical data obtained through interviews. A case study was conducted on the need for aluminum plates for the automotive industry. The results show that the ANN technique proposed for demand projection has a low error value in forecasting demand, and FL in the DP framework is able to find minimal production costs in the APP model.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_21-Dynamic_Programming_Approach_in_Aggregate_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Multi-Layered Sentiment Analysis Model (EMLSA) for Classifying the Complex Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140320</link>
        <id>10.14569/IJACSA.2023.0140320</id>
        <doi>10.14569/IJACSA.2023.0140320</doi>
        <lastModDate>2023-03-30T12:42:47.3700000+00:00</lastModDate>
        
        <creator>Penubaka Balaji</creator>
        
        <creator>D. Haritha</creator>
        
        <subject>Sentiment analysis; online social media; social networking sites; VADER; recurrent neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Sentiment analysis is a domain that analyzes the feelings and emotions of users based on their text messages. Sentiment analysis of short messages, reviews in online social media (OSM), and social networking sites (SNS) messages provides analysis of the given text data. Processing short text and SNS messages is a very tedious task because of the limited detail they generally contain. Solving this issue requires advanced techniques that are combined to give accurate results. This paper develops an Ensemble Multi-Layered Sentiment Analysis Model (EMLSA) that exploits trust-based sentiment analysis on various real-time datasets. EMLSA combines VADER (Valence Aware Dictionary and sEntiment Reasoner) with Recurrent Neural Networks (RNNs). VADER is a lexicon- and rule-based sentiment analysis model that predicts the sentiments extracted from input datasets and is used for training. The feature extraction technique is term frequency-inverse document frequency (TF-IDF). Word-Level Embeddings (WLE) and Character-Level Embeddings (CLE) are the two models that improve short-text and single-word analysis. The proposed model was applied to four real-time datasets: Amazon, eBay, Trip-advisor, and IMDB Movie Reviews. The performance is analyzed using various parameters such as sensitivity, specificity, precision, accuracy, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_20-An_Ensemble_Multi_Layered_Sentiment_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study on Medical Image Segmentation using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140319</link>
        <id>10.14569/IJACSA.2023.0140319</id>
        <doi>10.14569/IJACSA.2023.0140319</doi>
        <lastModDate>2023-03-30T12:42:47.3530000+00:00</lastModDate>
        
        <creator>Loan Dao</creator>
        
        <creator>Ngoc Quoc Ly</creator>
        
        <subject>Medical image segmentation (MIS); SOTA solutions in MIS; XAI; early disease diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Over the past decade, Medical Image Segmentation (MIS) using Deep Neural Networks (DNNs) has achieved significant performance improvements and holds great promise for future developments. This paper presents a comprehensive study on MIS based on DNNs. Intelligent Vision Systems are often evaluated based on their output levels, such as Data, Information, Knowledge, Intelligence, and Wisdom (DIKIW), and the state-of-the-art solutions in MIS at these levels are the focus of research. Additionally, Explainable Artificial Intelligence (XAI) has become an important research direction, as it aims to uncover the &quot;black box&quot; nature of previous DNN architectures to meet the requirements of transparency and ethics. The study emphasizes the importance of MIS in disease diagnosis and early detection, particularly for increasing the survival rate of cancer patients through timely diagnosis. XAI and early prediction are considered two important steps in the journey from &quot;intelligence&quot; to &quot;wisdom.&quot; Additionally, the paper addresses existing challenges and proposes potential solutions to enhance the efficiency of implementing DNN-based MIS.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_19-A_Comprehensive_Study_on_Medical_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DMobile-ELA: Digital Image Forgery Detection via cascaded Atrous MobileNet and Error Level Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140318</link>
        <id>10.14569/IJACSA.2023.0140318</id>
        <doi>10.14569/IJACSA.2023.0140318</doi>
        <lastModDate>2023-03-30T12:42:47.3230000+00:00</lastModDate>
        
        <creator>Karma M. Fathalla</creator>
        
        <creator>Malak Sowelem</creator>
        
        <creator>Radwa Fathalla</creator>
        
        <subject>Tampering detection; MobileNet; error level analysis; CASIAv2.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With the current developments in technology, not only has digital media become widely available, but the editing and manipulation of digital media has become equally available to everyone without any prior experience. The need for detecting manipulated images has grown immensely, as they can now cause false information in news media, forensics, and the daily life of common users. In this work, a cascaded approach, DMobile-ELA, is presented to ensure an image&#8217;s credibility and that the data it contains has not been compromised. DMobile-ELA integrates Error Level Analysis and MobileNet-based classification for tampering detection. It was able to achieve promising results compared to the state of the art on the CASIAv2.0 dataset. DMobile-ELA has successfully reached a training accuracy of 99.79% and a validation accuracy of 98.48% in detecting image manipulation.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_18-DMobile_ELA_Digital_Image_Forgery_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AHP based Task Scheduling and Optimal Resource Allocation in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140317</link>
        <id>10.14569/IJACSA.2023.0140317</id>
        <doi>10.14569/IJACSA.2023.0140317</doi>
        <lastModDate>2023-03-30T12:42:47.3070000+00:00</lastModDate>
        
        <creator>Syed Karimunnisa</creator>
        
        <creator>Yellamma Pachipala</creator>
        
        <subject>Task scheduling; AHP; TS; QoS; optimization; CUCMCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Cloud systems inherently strive for maximal resource utilization while adapting to ever-evolving user requirements. Among the numerous factors considered for tuning to enhance the QoS needs of user applications, task scheduling deserves particular focus. The task scheduling mechanism achieves improvement by distributing subtasks to specific sets of resources according to prevailing quality models. This work emphasizes the need for effective task scheduling and optimized resource allocation by modelling a modified AHP (Analytical Hierarchy Process) driven approach. The proposed method operates in two phases, task ranking pipelined with optimized scheduling algorithms, resulting in maximized resource utilization. The former phase of task ranking is aided by an improved AHP with substantial use of fuzzy clustering, followed by an enhanced CUCMCA (Chimp Updated and Cauchy Mutated Coot Algorithm) for optimal resource allocation of cloud applications. The contributed model achieves performance gains of 32% in memory usage, 33.5% in execution time, 29% in makespan, and 18% in communication cost over the pre-existing conventional models considered.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_17-An_AHP_based_Task_Scheduling_and_Optimal_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Institution Improvement Plans for the National Supercomputing Joint Utilization System in South Korea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140315</link>
        <id>10.14569/IJACSA.2023.0140315</id>
        <doi>10.14569/IJACSA.2023.0140315</doi>
        <lastModDate>2023-03-30T12:42:47.2770000+00:00</lastModDate>
        
        <creator>Hyungwook Shim</creator>
        
        <creator>Myungju Ko</creator>
        
        <creator>Yungeun Choe</creator>
        
        <creator>Jaegyoon Hahm</creator>
        
        <subject>Component; supercomputer; joint utilization system; institutional gap; national supercomputing center; specialized center</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The purpose of this paper is to discover institutional gaps in the supercomputing joint utilization system that the government is actively promoting as a response to the shortage of domestic supercomputing resources. The institutional gaps were discovered by examining the current status of laws, top-level plans, and operating guidelines related to the current joint utilization system and matching them with problems or issues that need to be resolved socially. The improvement plans for the institutional gaps were derived at a level that can be reflected in the operating guidelines of the Specialized Center and Unit Center, so that the performing entities constituting the joint utilization system can directly participate in resolving them. In the future, for the effective operation of the joint utilization system, we plan to promote the domestic market through the diffusion of research results and secure external technological competitiveness by reflecting the contents of institution improvement.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_15-A_Study_on_Institution_Improvement_Plans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Handwritten Signatures Identification using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140316</link>
        <id>10.14569/IJACSA.2023.0140316</id>
        <doi>10.14569/IJACSA.2023.0140316</doi>
        <lastModDate>2023-03-30T12:42:47.2770000+00:00</lastModDate>
        
        <creator>Ibraheem M. Alharbi</creator>
        
        <subject>K-nearest neighbor; histogram of oriented gradients; local binary patterns; false acceptance rate; Fourier descriptors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Any agreement or contract between two or more parties requires at least one party to employ a signature as evidence of the other parties&#39; identities and as a means of establishing the parties&#39; intent. As a result, more people are curious about signature recognition than other biometric methods like fingerprint scanning. Utilizing both Fourier Descriptors and histogram of oriented gradients (HOG) features, this paper presents an efficient algorithm for signature recognition. The use of Local Binary Patterns (LBP) features in a signature verification technique is also proposed. Using morphological techniques, the signature is encapsulated within a curve that is both symmetrical and a good match. Measured by the frequency with which incorrect patterns are confirmed by a given system, the false acceptance rate (FAR) provides an indication of the effectiveness and precision of the proposed system. Using a local dataset of 60 test signature patterns, this investigation found that 10% were incorrectly accepted, for a FAR of 0.169. Experiments are conducted on signature images from a local dataset. Verification of signatures has previously made use of the KNN classifier. The KNN classifier produced higher FARs and recognition accuracies than prior techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_16-Efficient_Handwritten_Signatures_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brightness and Contrast Enhancement Method for Color Images Via Pairing Adaptive Gamma Correction and Histogram Equalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140314</link>
        <id>10.14569/IJACSA.2023.0140314</id>
        <doi>10.14569/IJACSA.2023.0140314</doi>
        <lastModDate>2023-03-30T12:42:47.2600000+00:00</lastModDate>
        
        <creator>Bilal Bataineh</creator>
        
        <subject>Color image; gamma correction; histogram equalization; image contrast; image enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>For enhanced adaptability to poor illumination whilst achieving high image contrast, a new method for color image correction based on the advantages of non-linear functions in grey transformation and histogram equalization techniques is proposed in this work. Firstly, the original red, green and blue (RGB) image is converted into the HSV color space, and the V channel is used for enhancement. An adaptive gamma generator is proposed to adaptively calculate gamma parameters in accordance with dark, medium, or bright image conditions. The computed gamma parameters are used to propose a cumulative distribution function that produces an optimized curve for illumination values. Next, a second modified equalization is performed to evenly correct the offset of the illumination curve values on the basis of the equal probability of the available values only. Finally, the processed V channel replaces the original V channel, and the new HSV model returns to the RGB color space. Experiments show that the proposed method can significantly improve the low contrast and poor illumination of the color image whilst preserving the color and details of the original image. Results from benchmark data sets and measurements indicate that the proposed method outperforms other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_14-Brightness_and_Contrast_Enhancement_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Add-on CNN based Model for the Detection of Tuberculosis using Chest X-ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140313</link>
        <id>10.14569/IJACSA.2023.0140313</id>
        <doi>10.14569/IJACSA.2023.0140313</doi>
        <lastModDate>2023-03-30T12:42:47.2300000+00:00</lastModDate>
        
        <creator>Roopa N K</creator>
        
        <creator>Mamatha G S</creator>
        
        <subject>Chest X-Ray; machine learning; convolution neural network; segmentation; detection; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Machine learning has been contributing towards smart diagnosis in the medical domain for more than a decade, with a target of achieving higher accuracy in detection and classification. However, from the perspective of medical image processing, the contribution of machine learning towards segmentation has been limited in recent times. The proposed study considers a use case of tuberculosis detection and classification from chest X-rays, where a unique machine learning approach based on a Convolutional Neural Network is adopted for segmentation of lung images from CXRs. A computational framework is developed that performs segmentation, feature extraction, detection, and classification. The proposed system&#39;s outcome is analyzed with and without segmentation against existing machine learning models, exhibiting 99.85% accuracy, the highest score to date in contrast to existing approaches found in the literature. The study outcome based on the comparative analysis exhibits the effectiveness of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_13-An_Add_on_CNN_based_Model_for_the_Detection_of_Tuberculosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Land-cover Classification Feature Selection in Arid Areas based on Sentinel-2 Imagery and Spectral Indices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140312</link>
        <id>10.14569/IJACSA.2023.0140312</id>
        <doi>10.14569/IJACSA.2023.0140312</doi>
        <lastModDate>2023-03-30T12:42:47.2130000+00:00</lastModDate>
        
        <creator>Mohammed Saeed</creator>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Othman Mohd</creator>
        
        <subject>Feature selection; land cover; sentinel-2; arid areas; random forest; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Adding spectral indices to Sentinel-2 spectral bands to improve land-cover (LC) classification with a limited sample size can affect the accuracy due to the curse of dimensionality. In this study, we compared the performance metrics of the Random Forest (RF) classifier with three different combinations of features for land cover classification in an urban arid area. The first combination used the ten Sentinel-2 bands with 10 and 20 m spatial resolution. The second combination consisted of the first combination in addition to five common spectral indices (15 features). The third combination represented the best output of features in terms of performance metrics after applying recursive feature elimination (RFE) to the second combination. The results showed that applying RFE reduced the number of features in combination 2 from 15 to 8, and the average F1-score increased by nearly 8 and 6 percent in comparison with the other two combinations, respectively. The findings of this study confirm the importance of feature selection in improving LC classification accuracy in arid areas by removing redundant variables when the sample size is limited and when spectral indices are used alongside spectral bands.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_12-Optimal_Land_cover_Classification_Feature_Selection_in_Arid_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Training Ensemble of Classifiers for Classification of Rice Leaf Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140311</link>
        <id>10.14569/IJACSA.2023.0140311</id>
        <doi>10.14569/IJACSA.2023.0140311</doi>
        <lastModDate>2023-03-30T12:42:47.1970000+00:00</lastModDate>
        
        <creator>Sridevi Sakhamuri</creator>
        
        <creator>K Kiran Kumar</creator>
        
        <subject>Rice Leaf; Modified MBP; Bi-GRU; Improved BIRCH; OLIHFA-BA Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Rice is one of the most extensively cultivated crops in India. Leaf diseases can have a significant impact on the productivity and quality of a rice crop. Since it has a direct impact on the economy and food security, the detection of rice leaf diseases is the most important factor. The most prevalent diseases affecting rice leaves are leaf blast, brown spots, and hispa. To address this issue, this research builds a new classification model for rice leaf diseases. The model begins with a preprocessing step that employs the Median Filter (MF) process. Improved BIRCH is then utilized for image segmentation. Features such as LBP, GLCM, color, shape, and the modified Median Binary Pattern (MBP) are extracted from the segmented images. Then, an ensemble of three classification models, including Bi-GRU, Convolutional Neural Network (CNN), and Deep Maxout (DMN), is utilized. By adjusting the model weights, the proposed Opposition Learning Integrated Hybrid Feedback Artificial and Butterfly algorithm (OLIHFA-BA) trains the model to improve the performance of the proposed work.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_11-Optimal_Training_Ensemble_of_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Adapting Security Monitoring in Eucalyptus Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140310</link>
        <id>10.14569/IJACSA.2023.0140310</id>
        <doi>10.14569/IJACSA.2023.0140310</doi>
        <lastModDate>2023-03-30T12:42:47.1670000+00:00</lastModDate>
        
        <creator>Salman Mahmood</creator>
        
        <creator>Nor Adnan Yahaya</creator>
        
        <creator>Raza Hasan</creator>
        
        <creator>Saqib Hussain</creator>
        
        <creator>Mazhar Hussain Malik</creator>
        
        <creator>Kamal Uddin Sarker</creator>
        
        <subject>Component; VM scheduling; cloud computing; Eucalyptus; virtualization; power efficiency; self-adapting security monitoring system; tenant-driven customization; dynamic events; adaptation manager; master adaptation drivers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>This paper discusses the importance of virtual machine (VM) scheduling strategies in cloud computing environments for handling the increasing number of tasks due to virtualization and cloud computing technology adoption. The paper evaluates legacy methods and specific VM scheduling algorithms for the Eucalyptus cloud environment and compares existing algorithms using QoS. The paper also presents a self-adapting security monitoring system for cloud infrastructure that takes into account the specific monitoring requirements of each tenant. The system uses Master Adaptation Drivers to convert tenant requirements into configuration settings and the Adaptation Manager to coordinate the adaptation process. The framework ensures security, cost efficiency, and responsiveness to dynamic events in the cloud environment. The paper also highlights the need for improvement in the current security monitoring platform to support more types of monitoring devices and to cover the consequences of multi-tenant setups. Future work includes incorporating log collectors and aggregators and addressing the needs of a super-tenant in the security monitoring architecture. The equitable sharing of monitoring resources between tenants and the provider should be established with an adjustable threshold specified in the SLA. The results of experiments show that Enhanced Round-Robin uses less energy compared to other methods, and the Fusion Method outperforms other techniques by reducing the number of Physical Machines turned on and increasing power efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_10-Self_Adapting_Security_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-based Three-factor Mutual Authentication System for IoT using PUFs and Group Signatures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140309</link>
        <id>10.14569/IJACSA.2023.0140309</id>
        <doi>10.14569/IJACSA.2023.0140309</doi>
        <lastModDate>2023-03-30T12:42:47.1500000+00:00</lastModDate>
        
        <creator>Meriam Fariss</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <subject>Internet of Things; blockchain; mutual authentication; physical unclonable functions; biometrics; group signatures; elliptic curve cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The widespread adoption of the Internet of Things has brought many benefits to society, such as increased efficiency and convenience in various aspects of daily life. However, this has also led to a rise in security threats. Moreover, the resource-constrained nature of IoT devices makes them vulnerable to various attacks that compromise users&#39; privacy and the confidentiality of sensitive information. It is therefore essential to address the security concerns of IoT devices to ensure their reliable and secure operation. This paper proposes a blockchain-based three-factor mutual authentication system for IoT using Elliptic Curve Cryptography, physical unclonable functions and group signatures. The main purpose is to achieve secure mutual authentication among the different involved entities while providing anonymous group member authentication and reliable auditing. The AVISPA tool is utilized in the paper to formally prove that the proposed system satisfies the security and privacy requirements.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_9-A_Blockchain_based_Three_factor_Mutual_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Task Scheduling Framework for Internet of Things based on Agile VNFs On-demand Service Model and Deep Reinforcement Learning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140308</link>
        <id>10.14569/IJACSA.2023.0140308</id>
        <doi>10.14569/IJACSA.2023.0140308</doi>
        <lastModDate>2023-03-30T12:42:47.1370000+00:00</lastModDate>
        
        <creator>Li YANG</creator>
        
        <subject>Internet of things; task scheduling; edge computing; resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Recent innovations in the Internet of Things (IoT) have given rise to IoT applications that require quick response times and low latency. Fog computing has proven to be an effective platform for handling IoT applications. It is a significant challenge to deploy fog computing resources effectively because of the heterogeneity of IoT tasks and their delay sensitivity. To take advantage of idle resources in IoT devices, this paper presents an edge computing concept that offloads edge tasks to nearby IoT devices. IoT-assisted edge computing should meet two conditions: edge services should exploit the computing resources of IoT devices effectively, and edge tasks offloaded to IoT devices should not interfere with local IoT tasks. Two main phases are included in the proposed method: virtualization of edge nodes, and task scheduling based on deep reinforcement learning. The first phase offers a layered edge framework. In the second phase, we applied deep reinforcement learning (DRL) to schedule tasks, taking into account the diversity of tasks and the heterogeneity of available resources. According to simulation results, our proposed task scheduling method achieves higher levels of task satisfaction and success than existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_8-A_New_Task_Scheduling_Framework_for_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pancreatic Cancer Segmentation and Classification in CT Imaging using Antlion Optimization and Deep Learning Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140307</link>
        <id>10.14569/IJACSA.2023.0140307</id>
        <doi>10.14569/IJACSA.2023.0140307</doi>
        <lastModDate>2023-03-30T12:42:47.1030000+00:00</lastModDate>
        
        <creator>Radhia Khdhir</creator>
        
        <creator>Aymen Belghith</creator>
        
        <creator>Salwa Othmen</creator>
        
        <subject>Pancreatic cancer; antlion optimization; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Pancreatic cancer, a fatal type of cancer, has a very poor prognosis. To monitor, forecast, and categorize the presence of cancer, automated pancreatic cancer segmentation and classification using a Computer-Aided Diagnostic (CAD) model can be employed. Furthermore, deep learning algorithms can provide in-depth diagnostic knowledge and precise image analysis for therapeutic use. In this context, our study aims to develop an Antlion Optimization-Convolutional Neural Network-Gated Recurrent Unit (ALO-CNN-GRU) model for pancreatic tumor segmentation and classification based on deep learning and CT scans. The objective of the ALO-CNN-GRU technique is to segment and categorize cancerous tissue. The technique consists of pre-processing, segmentation, feature extraction, and classification phases. The images first go through a pre-processing stage, in which a hybrid Gaussian and median filter reduces noise in the acquired dataset. To identify the affected pancreatic area, segmentation is performed using the Antlion optimization method. The categorization of pancreatic cancer as benign or malignant is then done by Convolutional Neural Network and Gated Recurrent Unit classifiers. The suggested model offers improved precision and a better rate of pancreatic cancer diagnosis, with an accuracy of 99.92%.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_7-Pancreatic_Cancer_Segmentation_and_Classification_in_CT_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye Contact as a New Modality for Man-machine Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140306</link>
        <id>10.14569/IJACSA.2023.0140306</id>
        <doi>10.14569/IJACSA.2023.0140306</doi>
        <lastModDate>2023-03-30T12:42:47.0900000+00:00</lastModDate>
        
        <creator>Syusuke Kobayashi</creator>
        
        <creator>Pitoyo Hartono</creator>
        
        <subject>Eye contact; man-machine interface; non-verbal communication; object detection; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>In daily life, people use many appliances, and each machine or tool must be operated through its own specialized interface. These specialized interfaces are often not intuitive and thus require considerable time and effort to master. Human communication, on the other hand, is rich in modalities and mostly intuitive; one of these modalities is eye contact. This study proposes eye contact as an additional modality for the human-machine interface. The proposed interface modality, based on a neural network for object detection, allows humans to initiate machine operations by looking at them. In this paper, the hardware framework for building this interface is elaborated, and the results of a usability assessment through user experiments are reported.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_6-Eye_Contact_as_a_New_Modality_for_Man_machine_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Hardware Redundancy Approaches Towards Improving Service Availability in Fog Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140305</link>
        <id>10.14569/IJACSA.2023.0140305</id>
        <doi>10.14569/IJACSA.2023.0140305</doi>
        <lastModDate>2023-03-30T12:42:47.0730000+00:00</lastModDate>
        
        <creator>Sara Alraddady</creator>
        
        <creator>Ben Soh</creator>
        
        <creator>Mohammed AlZain</creator>
        
        <creator>Alice Li</creator>
        
        <subject>Fog computing; fault tolerance; Markov chain; hardware redundancy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>The distributed nature of fog computing is designed to alleviate the bottleneck congestion that occurs when a massive number of devices try to connect to more powerful computing resources simultaneously. Fog computing focuses on bringing data processing geographically closer to the data source by utilizing existing computing resources such as routers and switches. This heterogeneous nature of fog computing is an important feature and a challenge at the same time. To enhance fog computing availability under such conditions, several studies have been conducted using different methods such as placement policies and scheduling algorithms. This paper proposes a fog computing model that includes an extra layer consisting of a duplex management system. This layer is designated for operating fog managers and warm spares to ensure higher availability for such a geographically disseminated paradigm. A Markov chain is utilized to calculate the probability of each possible state in the proposed model, along with an availability analysis. By utilizing the standby system, we were able to increase the availability to 93%.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_5-Dynamic_Hardware_Redundancy_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Algorithm based Wearable Device for Basketball Stance Recognition in Basketball</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140304</link>
        <id>10.14569/IJACSA.2023.0140304</id>
        <doi>10.14569/IJACSA.2023.0140304</doi>
        <lastModDate>2023-03-30T12:42:47.0430000+00:00</lastModDate>
        
        <creator>Lan Jiang</creator>
        
        <creator>Dongxu Zhang</creator>
        
        <subject>Deep learning; wearable devices; basketball; sports pose; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>With the continuous improvement of technology, modern sports training is gradually developing towards precision and efficiency, which requires more accurate identification of athletes&#39; sports stances. The study first establishes a classification structure for basketball stances, then designs a hardware module that collects data on different stances using inertial sensors, thus extracting multidimensional motion stance features. The traditional convolutional neural network (CNN) is then improved with principal component analysis (PCA) to form the PCA+CNN algorithm. Finally, the algorithm is simulated and tested. The outcomes demonstrated that the average discrimination error rate of the improved PCA+CNN algorithm on the Human 3.6M dataset was 3.15%, a low error rate. In recognizing basketball sports poses, the wearable device based on the improved algorithm achieved the highest accuracy of 99.4% and the quickest time of 18 s, outperforming the other three methods. This demonstrates that the method has high discrimination precision and recognition efficiency, and could provide a reliable technical means of improving the scientific rigor of basketball training plans and their training effect.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_4-Deep_Learning_Algorithm_based_Wearable_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Monitoring for Efficient Detection of Simultaneous APT Attacks with Limited Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140303</link>
        <id>10.14569/IJACSA.2023.0140303</id>
        <doi>10.14569/IJACSA.2023.0140303</doi>
        <lastModDate>2023-03-30T12:42:47.0270000+00:00</lastModDate>
        
        <creator>Fan Shen</creator>
        
        <creator>Zhiyuan Liu</creator>
        
        <creator>Levi Perigo</creator>
        
        <subject>Advanced persistent threats; intrusion detection; LSTM; multi-armed bandit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Advanced Persistent Threats (APT) are a type of sophisticated multistage cyber attack, and the defense against APT is challenging. Existing studies apply signature-based or behavior-based methods to analyze monitoring data to detect APT, but little research has been dedicated to the important problem of addressing APT detection with limited resources. In order to maintain the primary functionality of a system, the resources allocated for security purposes, for example logging and examining the behavior of a system, are usually constrained. Therefore, when facing multiple simultaneous powerful cyber attacks like APT, the allocation of limited security resources becomes critical. The research in this paper focuses on the threat model where multiple simultaneous APT attacks exist in the defender’s system, but the defender does not have sufficient monitoring resources to check every running process. To capture the footprint of multistage activities, including APT attacks and benign activities, this work leverages the provenance graph, which is constructed based on dependencies between processes. Furthermore, this work studies a monitoring strategy to efficiently detect APT attacks from incomplete information about paths on the provenance graph, by considering both the “exploitation” effect and the “exploration” effect. The contributions of this work are two-fold. First, it extends the classic UCB algorithm from the domain of the multi-armed bandit problem to solve cyber security problems, and proposes to use the malevolence value of a path, generated by a novel LSTM neural network, as the exploitation term. Second, the consideration of “exploration” is innovative in the detection of APT attacks with limited monitoring resources. The experimental results show that the use of the LSTM neural network is beneficial in enforcing the exploitation effect, as it satisfies the same property as the exploitation term in the classic UCB algorithm, and that the proposed monitoring strategy detects multiple simultaneous APT attacks more efficiently than the random and greedy strategies, in terms of the time needed to detect the same number of APT attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_3-Strategic_Monitoring_for_Efficient_Detection_of_Simultaneous_APT_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Illicit Activity Detection in Bitcoin Transactions using Timeseries Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140302</link>
        <id>10.14569/IJACSA.2023.0140302</id>
        <doi>10.14569/IJACSA.2023.0140302</doi>
        <lastModDate>2023-03-30T12:42:46.9970000+00:00</lastModDate>
        
        <creator>Rohan Maheshwari</creator>
        
        <creator>Sriram Praveen V A</creator>
        
        <creator>Shobha G</creator>
        
        <creator>Jyoti Shetty</creator>
        
        <creator>Arjuna Chala</creator>
        
        <creator>Hugo Watanuki</creator>
        
        <subject>Bitcoin; time-series analysis; HPCC systems; random time interval; illicit activity detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>A key motivator for the use of cryptocurrencies such as Bitcoin in illicit activity is the degree of anonymity provided by the alphanumeric addresses used in transactions. This does not mean, however, that anonymity is built into the system, as the transactions being made are still subject to the human element. Additionally, there are around 400 gigabytes of raw data available in the Bitcoin blockchain, making this a big data problem. This research uses HPCC Systems, a data-intensive, open-source big data platform. This paper attempts to use timing data, produced by taking the time intervals between consecutive transactions performed by an address, to identify the nature of the address (illegal or legal). Using three different goodness-of-fit tests, namely the Kolmogorov–Smirnov test, the Anderson–Darling test, and the Cramér–von Mises criterion, two addresses are compared to determine whether they are from the same source. The BABD-13 dataset was used as a source of illegal addresses, providing both references and test data points. The research shows that time-series data can be used to represent the transactional behaviour of a user, and that the proposed algorithm is able to identify different addresses originating from the same user, or users engaging in similar activity.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_2-Illicit_Activity_Detection_in_Bitcoin_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology-driven DBpedia Quality Enhancement to Support Entity Annotation for Arabic Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140301</link>
        <id>10.14569/IJACSA.2023.0140301</id>
        <doi>10.14569/IJACSA.2023.0140301</doi>
        <lastModDate>2023-03-30T12:42:46.9000000+00:00</lastModDate>
        
        <creator>Adham Kahlawi</creator>
        
        <subject>Entity annotation; semantics annotation; DBpedia; Arabic language; ontology; semantic web; linked open data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(3), 2023</description>
        <description>Improving NLP outputs by extracting structured data from unstructured data is crucial, and several tools are available for the English language to achieve this objective. However, little attention has been paid to the Arabic language. This research aims to address this issue by enhancing the quality of DBpedia data. One limitation of DBpedia is that each resource can belong to multiple types and may not represent the intended concept. Additionally, some resources may be assigned incorrect types. To overcome these limitations, this study proposes creating a new ontology to represent Arabic data using the DBpedia ontology, followed by an algorithm to verify type assignments using the resource&#39;s title metadata and similarity between resources&#39; descriptions. Finally, the research builds an entity annotation tool for Arabic using the verified dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No3/Paper_1-An_Ontology_driven_DBpedia_Quality_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Pollutant Classification Modeling using Relevant Sensors under Thermodynamic Conditions with Multilayer Perceptron Hyperparameter Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01402103</link>
        <id>10.14569/IJACSA.2023.01402103</id>
        <doi>10.14569/IJACSA.2023.01402103</doi>
        <lastModDate>2023-02-28T10:12:19.7530000+00:00</lastModDate>
        
        <creator>Percival J. Forcadilla</creator>
        
        <subject>Indoor air pollutants; pollutant sources; indoor air quality; IAQ; sensors; multilayer perceptron; classification modeling; gridSearchCV; hyperparameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Air pollutants that are generated from indoor sources such as cigarettes, cleaning products, and air fresheners impact human health. These sources are usually safe, but exposure beyond the recommended standards can be hazardous to health. Because of this, people have started using technology to monitor indoor air quality (IAQ), but such systems have no capability of recognizing pollutant sources. This research improves on prior work in building a classification model for recognizing pollutant sources using the multilayer perceptron. The current research model receives four data parameters under warm &amp; humid and cool &amp; dry conditions, compared to nine parameters in the previous literature, in detecting five pollutant sources. The classification model was optimized using GridSearchCV to obtain the best combination of hyperparameters while giving the best-fit model accuracy, loss, and computational time. The tuned classification model gives an accuracy of 98.9% and a loss function value of 0.0986 with the number of epochs equal to 50. In comparison, the previous research reported an accuracy of 100% with the number of epochs equal to 1000. Computational time was greatly reduced while still giving the best-fit accuracy and loss function values without incurring the problem of overfitting.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_103-Indoor_Pollutant_Classification_Modeling_using_Relevant_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Feature Learning Methodology for Tree based Classifier and SVM to Classify Encrypted Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01402102</link>
        <id>10.14569/IJACSA.2023.01402102</id>
        <doi>10.14569/IJACSA.2023.01402102</doi>
        <lastModDate>2023-02-28T10:12:19.7530000+00:00</lastModDate>
        
        <creator>RAMRAJ S</creator>
        
        <creator>Usha G</creator>
        
        <subject>Network traffic; encrypted network traffic; tree based classifiers; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Presently, numerous social applications have emerged, each trying to outdo the others. They expand their reach by bringing novelty to the market, being ingenious, and providing advanced levels of security in the form of encryption. It has become important to manage and analyze network traffic; hence we perform binary classification of network traffic for one of the most globally used applications, WhatsApp. This is also helpful for evaluating the sender-receiver system of the application and for stipulating the properties of the network traces. By analyzing the behavior of network traces, we can scrutinize the type and nature of traffic for future maintenance of the network. In this study, we carried out three different objectives. First, we classified WhatsApp network packets against those of other applications using different ML classifiers; second, we segmented WhatsApp application files into image and text; and third, we incorporated a deep learning module with the same ML classifiers to understand and boost the performance of the previous experiments. Following the experiments, we also highlight the difference in performance between the tree-based and vector-based Machine Learning classifiers. Based on our findings, the XGBoost classifier is a pre-eminent algorithm for identifying WhatsApp network traces in the dataset, whereas in the WhatsApp media segmentation experiment, Random Forest outperformed the other ML algorithms. Similarly, SVM, when combined with a deep learning autoencoder, boosts the performance of this vector-based classifier in the binary classification task.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_102-Unsupervised_Feature_Learning_Methodology_for_Tree_based_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Transformer Seq2Seq Model with Fast Fourier Transform Layers for Rephrasing and Simplifying Complex Arabic Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01402101</link>
        <id>10.14569/IJACSA.2023.01402101</id>
        <doi>10.14569/IJACSA.2023.01402101</doi>
        <lastModDate>2023-02-28T10:12:19.7370000+00:00</lastModDate>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Ahmad Alkhodre</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Sami Albouq</creator>
        
        <creator>Emad Nabil</creator>
        
        <subject>Text simplification; sequence-to-sequence; split-and-rephrase; natural language understanding; NLP; TSimAr; ATSC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Text simplification is a fundamental unsolved problem for Natural Language Understanding (NLU) models and is deemed a hard-to-solve task. Recent work on this task aims to simplify texts with complex linguistic structures and improve their readability, not only for human readers but also to boost the performance of many natural language processing (NLP) applications. Towards tackling this hard task for low-resource Arabic NLP, this paper presents a text split-and-rephrase strategy for simplifying complex texts, which depends principally on a sequence-to-sequence Transformer-based architecture (which we call TSimAr). For evaluation, we created a new benchmarking corpus for Arabic text simplification (so-called ATSC) containing 500 articles together with their corresponding simplifications. Through our automatic and manual analyses, experimental results show that TSimAr clearly outperforms all publicly accessible state-of-the-art text-to-text generation models for the Arabic language, achieving the best scores on the SARI, BLEU, and METEOR metrics of about 0.73, 0.65, and 0.68, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_101-A_Transformer_Seq2Seq_Model_with_Fast_Fourier_Transform_Layers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Super-Resolution using Generative Adversarial Networks with EfficientNetV2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01402100</link>
        <id>10.14569/IJACSA.2023.01402100</id>
        <doi>10.14569/IJACSA.2023.01402100</doi>
        <lastModDate>2023-02-28T10:12:19.7200000+00:00</lastModDate>
        
        <creator>Saleh AlTakrouri</creator>
        
        <creator>Norliza Mohd Noor</creator>
        
        <creator>Norulhusna Ahmad</creator>
        
        <creator>Taghreed Justinia</creator>
        
        <creator>Sahnius Usman</creator>
        
        <subject>Single image super-resolution (SISR); generative adversarial networks (GAN); convolutional neural networks (CNN); EfficientNetv2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Image super-resolution transforms images from low resolution to higher resolution to obtain more detailed information for identifying targets. Super-resolution has potential applications in various domains, such as medical image processing, crime investigation, remote sensing, and other image-processing domains. The goal of super-resolution is to obtain an image with minimal mean square error and improved perceptual quality. Therefore, this study introduces a perceptual loss minimization technique with efficient learning criteria. The proposed image reconstruction technique uses the image super-resolution generative adversarial network (ISRGAN), in which the discriminator in the ISRGAN is trained using EfficientNet-v2 to obtain better image quality. The proposed ISRGAN with EfficientNet-v2 achieved minimal losses of 0.02, 0.1, and 0.015 for the generator, discriminator, and self-supervised learning, respectively, with a batch size of 32. The minimal mean square error and mean absolute error are 0.001025 and 0.00225, and the maximal peak signal-to-noise ratio and structural similarity index measure obtained are 45.56985 and 0.9997, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_100-Image_Super_Resolution_using_Generative_Adversarial_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-supervised Method to Detect Fraudulent Transactions and Identify Fraud Types while Minimizing Mounting Costs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140298</link>
        <id>10.14569/IJACSA.2023.0140298</id>
        <doi>10.14569/IJACSA.2023.0140298</doi>
        <lastModDate>2023-02-28T10:12:19.7030000+00:00</lastModDate>
        
        <creator>Chergui Hamza</creator>
        
        <creator>Abrouk Lylia</creator>
        
        <creator>Cullot Nadine</creator>
        
        <creator>Cabioch Nicolas</creator>
        
        <subject>Machine learning; semi-supervised learning; fraud; finance; cost analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Financial fraud is a complex problem faced by financial institutions, and existing fraud detection systems are often insufficient, resulting in significant financial losses. Researchers have proposed various machine learning-based techniques to enhance the performance of these systems. In this work, we present a semi-supervised approach to detect fraudulent transactions. First, we extract and select features, followed by the training of a binary classification model. Second, we apply a clustering algorithm to the fraudulent transactions and use the binary classification model with the SHAP framework to analyze the clusters and associate them with a particular fraud type. Finally, we present an algorithm to detect and assign a fraud type by leveraging a multi-fraud classification model. To minimize the mounting cost of the model, we propose an algorithm to choose an optimal threshold for detecting fraudulent transactions. We worked with experts to adapt a risk cost matrix to estimate the mounting cost of the model; this matrix takes into account the cost of missing fraudulent transactions and the cost of incorrectly flagging a legitimate transaction as fraudulent. In our experiments on a real dataset, our approach achieved high accuracy in detecting fraudulent transactions, with the added benefit of identifying the fraud type, which can help financial institutions better understand and combat fraudulent activities. Overall, our approach offers a comprehensive and efficient solution to financial fraud detection, and our results demonstrate its effectiveness in reducing financial losses for financial institutions.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_98-Semi_Supervised_Method_to_Detect_Fraudulent_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Liver Disease Prediction and Classification using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140299</link>
        <id>10.14569/IJACSA.2023.0140299</id>
        <doi>10.14569/IJACSA.2023.0140299</doi>
        <lastModDate>2023-02-28T10:12:19.7030000+00:00</lastModDate>
        
        <creator>Srilatha Tokala</creator>
        
        <creator>Koduru Hajarathaiah</creator>
        
        <creator>Sai Ram Praneeth Gunda</creator>
        
        <creator>Srinivasrao Botla</creator>
        
        <creator>Lakshmikanth Nalluri</creator>
        
        <creator>Pathipati Nagamanohar</creator>
        
        <creator>Satish Anamalamudi</creator>
        
        <creator>Murali Krishna Enduri</creator>
        
        <subject>Machine learning algorithms; classification model; classifier; liver disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Recently, liver diseases have become among the most lethal disorders in a number of countries. The number of patients with liver disorders has been rising because of alcohol intake, inhalation of harmful gases, and consumption of spoiled food and drugs. Liver patient data sets are being studied for the purpose of developing classification models to predict liver disorders. This data set was used to implement prediction and classification algorithms, which in turn reduces the workload on doctors. In this work, we propose applying machine learning algorithms to screen each patient for liver disorder. Chronic liver disorder is defined as a liver disorder that lasts for at least six months. As a result, we use the percentage of patients who contract the disease as both positive and negative information. We process liver disease percentages with classifiers, and the results are displayed as a confusion matrix. We propose several classification schemes that can effectively improve classification performance when a training data set is available. Then, using a machine learning classifier, good and bad values are classified. Thus, the outputs of the proposed classification model show accuracy in predicting the result.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_99-Liver_Disease_Prediction_and_Classification_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Autonomous Role and Consideration of Electronic Health Systems with Access Control in Developed Countries: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140297</link>
        <id>10.14569/IJACSA.2023.0140297</id>
        <doi>10.14569/IJACSA.2023.0140297</doi>
        <lastModDate>2023-02-28T10:12:19.6900000+00:00</lastModDate>
        
        <creator>Mohd Rafiz Salji</creator>
        
        <creator>Nur Izura Udzir</creator>
        
        <subject>Access control; security; privacy; electronic health-care system; electronic health record; developed countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The electronic healthcare system (EHS) is nowadays essential to access, maintain, store, and share the electronic health records (EHR) of patients. It should provide safer, more efficient, and more cost-effective healthcare. There are several challenges with EHS, notably in terms of security and privacy. Nonetheless, many approaches can be utilized to tackle them, and one of them is access control. Even though numerous access control models have been presented, traditional methods of access control, such as role-based access control (RBAC), were extensively employed and are still in use today. Currently, the number of EHSs equipped with access control keeps growing, and some previous works utilize RBAC only or an autonomous role. However, relying only on a role in today’s advanced technology may jeopardize security and privacy. The previous work also has flaws because it uses an ineffective instrument that is costly to maintain and will burden organizations, particularly in developed countries. In this paper, the background of and the challenges associated with an autonomous role in the EHS are discussed. Following that, this paper provides recommendations and an analytical discussion of existing EHSs with access control mechanisms for securing and protecting EHR in developed countries. Finally, instrument information in the form of a SWOT analysis is recommended to replace the instrument utilized by the previous work, giving organizations in developed countries a basis for selecting the best environment for their future or upgraded EHS.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_97-An_Autonomous_Role_and_Consideration_of_Electronic_Health_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Blockchain Technology Concepts, Applications and Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140296</link>
        <id>10.14569/IJACSA.2023.0140296</id>
        <doi>10.14569/IJACSA.2023.0140296</doi>
        <lastModDate>2023-02-28T10:12:19.6730000+00:00</lastModDate>
        
        <creator>Asma Mubark Alqahtani</creator>
        
        <creator>Abdulmohsen Algarni</creator>
        
        <subject>Blockchain; cryptography; cryptocurrency; consensus algorithms; blockchain security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In the past decade, blockchain technology has become increasingly prevalent in our daily lives. This technology consists of a chain of blocks that contains the history of transactions and information about its users. Blockchain uses distributed digital ledgers. A transparent environment is created by using this technology, allowing encrypted, secure transactions to be verified and approved by all users. As a powerful tool, blockchain can be utilized for a wide range of useful purposes in everyday life, including cryptocurrency, the Internet-of-Things (IoT), finance, reputation systems, and healthcare. This paper aims to provide an overview of blockchain technology and its security issues for users and researchers, in particular those who conduct their business using blockchain technology. This paper includes a comparison of consensus algorithms and a description of cryptography. Further, this paper also focuses on the main applications of blockchain, analyzes real attacks, and summarizes security measures in blockchain. Even though blockchain holds a promising scope of development in several sectors, it is prone to several security and vulnerability issues that arise from different types of blockchain networks, which represent a challenge in dealing with blockchain. Finally, as a research community, we highlight future research challenges that can be addressed to improve security in blockchain systems.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_96-A_Survey_on_Blockchain_Technology_Concepts_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An OCR Engine for Printed Receipt Images using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140295</link>
        <id>10.14569/IJACSA.2023.0140295</id>
        <doi>10.14569/IJACSA.2023.0140295</doi>
        <lastModDate>2023-02-28T10:12:19.6570000+00:00</lastModDate>
        
        <creator>Cagri Sayallar</creator>
        
        <creator>Ahmet Sayar</creator>
        
        <creator>Nurcan Babalik</creator>
        
        <subject>Optical Character Recognition (OCR); image processing; deep learning; benchmarking; receipt</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The digitization of receipts and invoices and the recording of expenses in industry and accounting have begun to be used in the field of finance tracking. However, 100% success in character recognition for document digitization has not yet been achieved. In this study, a new Optical Character Recognition (OCR) engine called Nacsoft OCR was developed on Turkish receipt data using artificial intelligence methods. The proposed OCR engine has been compared with widely used engines: Easy OCR, Tesseract OCR, and the Google Vision API. The benchmarking was performed on English and Turkish receipts, and the accuracies of the OCR engines in terms of character recognition, as well as their speeds, are presented. It is known that OCR character recognition engines perform better at word recognition when provided with word position information. Therefore, the performance of the Nacsoft OCR engine in determining word positions was also compared with that of the other OCR engines, and the results are presented.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_95-An_OCR_Engine_for_Printed_Receipt_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving and Trustless Verifiable Fairness Audit of Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140294</link>
        <id>10.14569/IJACSA.2023.0140294</id>
        <doi>10.14569/IJACSA.2023.0140294</doi>
        <lastModDate>2023-02-28T10:12:19.6570000+00:00</lastModDate>
        
        <creator>Gui Tang</creator>
        
        <creator>Wuzheng Tan</creator>
        
        <creator>Mei Cai</creator>
        
        <subject>Security and privacy; machine learning; fairness; cryptography; zero knowledge proof</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In the big data era, machine learning has developed prominently and is widely used in real-world systems. Yet, machine learning raises fairness concerns, as it can incur discrimination against groups determined by sensitive attributes such as gender and race. Many researchers have focused on developing fairness audit techniques for machine learning models that enable users to protect themselves from discrimination. Existing solutions, however, rely on additional external trust assumptions, either on third-party entities or external components, which significantly lowers security. In this study, we propose a trustless verifiable fairness audit framework that assesses the fairness of ML algorithms while addressing potential security issues such as data privacy, model secrecy, and trustworthiness. With the succinctness and non-interactivity of zero-knowledge proofs, our framework not only guarantees audit integrity but also clearly enhances security, enabling fair ML models to be publicly auditable and any client to verify audit results without extra trust assumptions. Our evaluation on various machine learning models and real-world datasets shows that our framework achieves practical performance.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_94-Privacy_Preserving_and_Trustless_Verifiable_Fairness_Audit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BERT Model-based Natural Language to NoSQL Query Conversion using Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140293</link>
        <id>10.14569/IJACSA.2023.0140293</id>
        <doi>10.14569/IJACSA.2023.0140293</doi>
        <lastModDate>2023-02-28T10:12:19.6430000+00:00</lastModDate>
        
        <creator>Kazi Mojammel Hossen</creator>
        
        <creator>Mohammed Nasir Uddin</creator>
        
        <creator>Minhazul Arefin</creator>
        
        <creator>Md Ashraf Uddin</creator>
        
        <subject>Natural language processing; NoSQL query; BERT model; Levenshtein distance algorithm; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Databases are commonly used to store complex and distinct information. With the advancement of database systems, non-relational databases have been used to store vast amounts of data, as traditional databases are not sufficient for querying a wide range of massive data. However, storing data in a database is always challenging for non-expert users. We propose a conversion technique that enables non-expert users to access and filter data from a NoSQL database in language as close to human language as possible. Researchers have already explored a variety of technologies in order to develop more precise conversion procedures. This paper proposes a generic NoSQL query conversion learning method to generate NoSQL queries from natural language. The proposed system includes natural language processing-based text preprocessing and the Levenshtein distance algorithm to extract the collection and attributes even when there are spelling errors. The analysis of the results shows that our suggested approach is more efficient and accurate than other state-of-the-art methods in terms of bilingual evaluation understudy scoring on the WikiSQL dataset. Additionally, the proposed method outperforms existing approaches because it utilizes a bidirectional encoder representations from Transformers multi-text classifier. The classification process extracts database operations, which might increase the accuracy. The model achieves state-of-the-art performance on WikiSQL, obtaining 88.76% average accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_93-BERT_Model_based_Natural_Language_to_NoSQL_Query_Conversion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictions of Cybersecurity Experts on Future Cyber-Attacks and Related Cybersecurity Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140292</link>
        <id>10.14569/IJACSA.2023.0140292</id>
        <doi>10.14569/IJACSA.2023.0140292</doi>
        <lastModDate>2023-02-28T10:12:19.6270000+00:00</lastModDate>
        
        <creator>Ahmad Mtair AL-Hawamleh</creator>
        
        <subject>Cybersecurity; cybercriminals; cyber-attacks; cyber-security techniques; security precautions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The exponential growth of Internet interconnections has resulted in an increase in cyber-attack occurrences, mostly with devastating consequences. Malware is a common tool for performing these attacks in cyberspace. Malefactors either exploit present weaknesses or employ the distinctive characteristics of developing technologies. The cybersecurity community should increase its knowledge of the types and arsenals of cyber-attacks, and security measures against cyber-attacks should be in place as well. Also, advanced and effective malware defense mechanisms should be established. Hence, this study reviews cyber-attack types, measures and security precautions, and professional extrapolations on the future of cyber-attacks and the associated security measures. Semi-structured interviews were performed, involving five IT managers and nine cybersecurity consultants, to obtain the data. The study findings demonstrate that prevention is key to mitigating data breach risks. Knowledge of common attack methods and the use of cybersecurity software can help individuals and organizations thwart hackers and preserve their data privacy. Two-factor authorization by consumers and new back-end security protocols and security methods, including the application of artificial intelligence (AI), will encumber hacking attempts.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_92-Predictions_of_Cybersecurity_Experts_on_Future_Cyber_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WEB-based Collaborative Platform for College English Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140291</link>
        <id>10.14569/IJACSA.2023.0140291</id>
        <doi>10.14569/IJACSA.2023.0140291</doi>
        <lastModDate>2023-02-28T10:12:19.6270000+00:00</lastModDate>
        
        <creator>Yuwan Zhang</creator>
        
        <subject>Web; text similarity; KNN algorithm; vocabulary matching; network teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>At present, colleges and universities are trying to apply online education. The online college English course teaching cooperation platform is an important part of college English teaching. Currently, teachers’ scoring of students’ online examinations on such platforms is mainly manual, which is inefficient. In view of this, based on the characteristics of the web, this paper constructs an English test paper scoring algorithm based on a text matching degree algorithm and an improved KNN algorithm. The data analysis type of the algorithm is mainly prescriptive analysis, that is, judging whether to award points according to the characteristics of the data. The automation and high efficiency of the algorithm can save considerable human cost in the field of online education. The experimental results show that the recall rate of the improved KNN scoring algorithm for specific semantic topics is up to 0.9, and only 7.3% of students report that the algorithm misjudges their grades. The results indicate that the algorithm has the potential to be applied to the web-based college English course teaching collaboration platform, reducing the workload of teachers and improving their efficiency.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_91-Web_based_Collaborative_Platform_for_College_English_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building a Machine Learning Powered Chatbot for KSU Blackboard Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140290</link>
        <id>10.14569/IJACSA.2023.0140290</id>
        <doi>10.14569/IJACSA.2023.0140290</doi>
        <lastModDate>2023-02-28T10:12:19.6100000+00:00</lastModDate>
        
        <creator>Qubayl Alqahtani</creator>
        
        <creator>Omer Alrwais</creator>
        
        <subject>Chatbot; RASA; conversational agents; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Chatbots have attracted the interest of many entities within the public and private sectors, both locally within Saudi Arabia and globally. Chatbots have many implementations in the education field, ranging from enhancing the e-learning experience to answering students&#39; inquiries about course schedules and grades, and tracking prerequisite information and elective courses. This work aims to develop a chatbot engine that helps with frequently asked questions about the Blackboard system and could be embedded into the Blackboard website. It contains a machine-learning model trained on Arabic datasets. The engine accepts both Arabic textual content and, if needed, English textual content for commonly used English terminologies. The Rasa framework was chosen as the main tool for developing the Blackboard chatbot. The dataset to serve the current need (i.e., the Blackboard system) was requested from Blackboard support staff to build the initial dataset and get a sense of the questions frequently asked by KSU Blackboard student users. The dataset is designed to account for as many KSU Blackboard-related inquiries as possible, to provide the appropriate answers and reduce the workload of Blackboard system support staff. Testing and evaluating the model was a continuous process before and after model deployment. The model&#39;s post-tuning metrics were 93.4%, 92.5%, and 92.49% for test accuracy, F1-score, and precision, respectively. The average reported accuracy in similar studies was near 90%, as opposed to the results reported here.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_90-Building_a_Machine_Learning_Powered_Chatbot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Visual Target Representation using Saliency Detection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140289</link>
        <id>10.14569/IJACSA.2023.0140289</id>
        <doi>10.14569/IJACSA.2023.0140289</doi>
        <lastModDate>2023-02-28T10:12:19.5970000+00:00</lastModDate>
        
        <creator>Shekun Tong</creator>
        
        <creator>Chunmeng Lu</creator>
        
        <subject>Saliency detection; target representation; vision system; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The task of saliency detection is to identify the most important and informative part of a scene. Saliency detection is broadly applied to numerous vision problems, including image segmentation, object recognition, image compression, content-based image retrieval, and moving object detection. Existing saliency detection methods suffer a low accuracy rate because of missing components of saliency regions. This study proposes a visual saliency detection method for target representation that represents targets more accurately. The proposed method consists of five modules. In the first module, the salient region is extracted through manifold ranking on a graph, which incorporates local grouping cues and boundary priors. Secondly, using a region of interest (ROI) algorithm and the subtraction of the salient region from the original image, other parts of the image, either related or non-related to the target of interest, are segmented. Lastly, those related and non-related regions are classified and distinguished using our proposed algorithm. Experimental results show that the proposed salient region accurately represents the target of interest, which can be used for object detection and tracking applications.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_89-A_Visual_Target_Representation_using_Saliency_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Distance Personalized English Teaching Based on Deep Directed Graph Knowledge Tracking Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140288</link>
        <id>10.14569/IJACSA.2023.0140288</id>
        <doi>10.14569/IJACSA.2023.0140288</doi>
        <lastModDate>2023-02-28T10:12:19.5800000+00:00</lastModDate>
        
        <creator>Lianmei Deng</creator>
        
        <subject>Distanced personalized English teaching model; knowledge tracking; deep learning; graph structure rules; DKVMN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Despite the continuous development of online education models, the effectiveness of online distance education has never been able to meet people’s expectations due to individual differences among learners. How to personalize teaching in a targeted manner and stimulate learners’ independent learning ability has become a key issue. In this study, the multidimensional features of the learning process are mined with the help of the BORUTA feature selection model, and a DKVMN-BORUTA model incorporating multidimensional features is established. This optimized deep knowledge tracking method is combined with graph structure rules. Then, an intelligent knowledge recommendation algorithm based on reinforcement learning is used to construct a fusion-based model for distance personalized teaching and learning of English. The results show that the proposed model, which fuses deep directed graph knowledge tracking with graph structure rules for distance personalized English teaching, has a lowest AUC value of 0.893 and a highest AUC value of 0.921 across the datasets. The prediction accuracy of the model is 94.3% and the F1 score is 0.92, the highest among the studied models, indicating that the proposed model has strong performance. The fusion model proposed in the study has a higher accuracy rate of personalized knowledge recommendation than the traditional deep knowledge tracking model, and it can help learners save revision time effectively and improve their overall English performance.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_88-A_Study_on_Distance_Personalized_English_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Biologically Inspired Appearance Modeling and Sample Feature-based Approach for Visual Target Tracking in Aerial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140287</link>
        <id>10.14569/IJACSA.2023.0140287</id>
        <doi>10.14569/IJACSA.2023.0140287</doi>
        <lastModDate>2023-02-28T10:12:19.5800000+00:00</lastModDate>
        
        <creator>Lili Pei</creator>
        
        <creator>Xiaohui Zhang</creator>
        
        <subject>Visual tracking; biologically inspired; visual saliency detection; appearance modeling; attention region; spatiotemporal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Visual tracking in uncrewed aerial vehicles is challenging because of variations in target appearance. Various studies have been conducted to overcome appearance variations and unpredictable moving-target issues. Visual saliency-based approaches have been widely studied in biologically inspired algorithms to detect moving targets based on attentional region (AR) extraction. This paper proposes a novel visual tracking method to deal with these issues. It consists of two main phases: spatiotemporal saliency-based appearance modeling (SSAM) and sample feature-based target detection (SFTD). The proposed method is based on a tracking-by-detection approach to provide a robust visual tracking system under appearance variation and unpredictable moving-target conditions. Correspondingly, a semi-automatic trigger-based algorithm is proposed to handle the phases&#39; operation, and a discriminative-based method is utilized for appearance modeling. In the SSAM phase, temporal saliency extracts the ARs and a coarse segmentation. Spatial saliency is utilized for the object’s appearance modeling and spatial saliency detection. Because the spatial saliency detection process is time-consuming under multiple-target tracking conditions, an automatic algorithm is proposed to detect the region saliencies in a multithreaded implementation, which leads to low processing time. Consequently, the temporal and spatial saliencies are integrated to generate the final saliency and sample features. The generated sample features are transferred to the SFTD phase to detect the target in different images based on samples. Experimental results demonstrate that the proposed method is effective and presents promising results compared to other existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_87-A_Biologically_Inspired_Appearance_Modeling_and_Sample_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Group Distributionally Robust Optimization for Deep Imbalanced Learning: A Case Study of Binary Tabular Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140286</link>
        <id>10.14569/IJACSA.2023.0140286</id>
        <doi>10.14569/IJACSA.2023.0140286</doi>
        <lastModDate>2023-02-28T10:12:19.5630000+00:00</lastModDate>
        
        <creator>Ismail. B. Mustapha</creator>
        
        <creator>Shafaatunnur Hasan</creator>
        
        <creator>Hatem S Y Nabbus</creator>
        
        <creator>Mohamed Mostafa Ali Montaser</creator>
        
        <creator>Sunday Olusanya Olatunji</creator>
        
        <creator>Siti Maryam Shamsuddin</creator>
        
        <subject>Class imbalance; deep neural networks; tabular data; empirical risk minimization; group distributionally robust optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The class imbalance problem is one of the most studied machine learning challenges, and recent studies have shown the susceptibility of deep neural networks to it. While concerted research efforts in this direction have been notable in recent years, findings have shown that the canonical learning objective, empirical risk minimization (ERM), is unable to achieve optimal imbalance learning in deep neural networks, given its bias toward the majority class. An alternative learning objective, group distributionally robust optimization (gDRO), is investigated in this study for imbalance learning, focusing on imbalanced tabular data as opposed to the image data that has dominated deep imbalance learning research. Contrary to minimizing the average per-instance loss as in ERM, gDRO seeks to minimize the worst group loss over the training data. Experimental findings, in comparison with ERM and classical imbalance methods using four popular evaluation metrics in imbalance learning across several benchmark imbalanced binary tabular datasets of varying imbalance ratios, reveal impressive performance of gDRO, which outperforms the other compared methods in terms of g-mean and ROC-AUC.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_86-Investigating_Group_Distributionally_Robust_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fall Detection and Monitoring using Machine Learning: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140284</link>
        <id>10.14569/IJACSA.2023.0140284</id>
        <doi>10.14569/IJACSA.2023.0140284</doi>
        <lastModDate>2023-02-28T10:12:19.5500000+00:00</lastModDate>
        
        <creator>Shaima R. M Edeib</creator>
        
        <creator>Rudzidatul Akmam Dziyauddin</creator>
        
        <creator>Nur Izdihar Muhd Amir</creator>
        
        <subject>Fall detection; machine learning; acceleration data; SVM; decision tree; Na&#239;ve Bayes; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The detection of falls has emerged as an important topic of public discussion because of the prevalence and severity of unintentional falls, particularly among the elderly. A Fall Detection System (FDS) is a system that gathers data from a wearable Internet-of-Things (IoT) device and classifies the outcomes to distinguish falls from other activities and to call for prompt medical aid in the event of a fall. In this paper, we determine whether a fall has occurred using machine learning on our fall dataset collected from an accelerometer sensor. From the acceleration data, the input features are extracted and fed to supervised machine learning (ML) algorithms, namely Support Vector Machine (SVM), Decision Tree, and Na&#239;ve Bayes. The results show that the accuracy of fall detection reaches 95%, 97%, and 91% without any false alarms for SVM, Decision Tree, and Na&#239;ve Bayes, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_84-Fall_Detection_and_Monitoring_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Teaching Design and Evaluation of Innovation and Entrepreneurship Courses in the Context of Education Internationalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140285</link>
        <id>10.14569/IJACSA.2023.0140285</id>
        <doi>10.14569/IJACSA.2023.0140285</doi>
        <lastModDate>2023-02-28T10:12:19.5500000+00:00</lastModDate>
        
        <creator>Chengshe Xing</creator>
        
        <subject>Online course; Collaborative filtering algorithm; Ranking model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In the context of the internationalization of education, innovation and entrepreneurship courses have been strongly promoted, and the content and number of topics in such courses are climbing rapidly. To enable target users to quickly select courses they may be interested in, a modified collaborative filtering algorithm based on a multi-feature ranking model is used to extract and rank the features of online courses according to several factors, which are then combined with collaborative filtering to recommend courses to users. The experimental results show that the accuracy and recall of the improved algorithm exceed those of the baseline algorithm under different conditions, and in most cases exceed those of the LDA algorithm. User evaluations of the recommendation effect also rate the improved algorithm highest, with ratings of 4.3, 4.7, and 4.4 in the three groups and an overall average score of 4.47, indicating that the improved algorithm offers significant optimization performance and is suitable for teaching innovation and entrepreneurship in online courses.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_85-Online_Teaching_Design_and_Evaluation_of_Innovation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Image Sharpness Enhancement Technology based on Depth Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140283</link>
        <id>10.14569/IJACSA.2023.0140283</id>
        <doi>10.14569/IJACSA.2023.0140283</doi>
        <lastModDate>2023-02-28T10:12:19.5330000+00:00</lastModDate>
        
        <creator>Wenbao Lan</creator>
        
        <creator>Chang Che</creator>
        
        <subject>Super resolution convolutional network; fast super resolution convolutional neural network; dual super-resolution convolutional network; deep learning; image definition; super resolution; image enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Image technology is widely used in security, traffic, monitoring, and other social activities. However, images carrying detailed information suffer feature distortion due to various external physical factors during capture, transmission, and storage, resulting in poor image quality and clarity. Resolution determines the definition of an image, and super-resolution reconstruction is the process of transforming a low-resolution picture into a high-resolution image. To enhance image clarity, this work analyzes the advantages and disadvantages of the Super Resolution Convolutional Network (SRCNN) and the Fast Super Resolution Convolutional Neural Network (FSRCNN) models and then constructs an image super-resolution method based on the Dual Super-Resolution Convolutional Network (DSRCNN). The algorithm consists of two sub-network blocks, an enhancement block and a purification block. The model first uses two Convolutional Neural Networks (CNN) to obtain complementary low-frequency information that improves the model&#39;s learning ability; next, it employs the enhancement block to fuse the image features of the two paths via residual operations and sub-pixel convolution to prevent the loss of low-resolution image information; finally, it employs a feature purification block to refine the high-frequency information that more accurately represents the predicted high-quality image. Experiments show that the PSNR and SSIM of the DSRCNN model reach 33.43 dB and 0.9157, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_83-Research_on_Image_Sharpness_Enhancement_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Human Sperms using ResNet-50 Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140282</link>
        <id>10.14569/IJACSA.2023.0140282</id>
        <doi>10.14569/IJACSA.2023.0140282</doi>
        <lastModDate>2023-02-28T10:12:19.5170000+00:00</lastModDate>
        
        <creator>Ahmad Abdelaziz Mashaal</creator>
        
        <creator>Mohamed A. A. Eldosoky</creator>
        
        <creator>Lamia Nabil Mahdy</creator>
        
        <creator>Kadry Ali Ezzat</creator>
        
        <subject>Healthy sperms; sperm heads; infertility; classification; convolution; ResNet-50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Infertility is a disease of concern to scientists around the world and a health concern for many people in the community. Andrologists are continually searching for improved techniques for related problems. The intracytoplasmic sperm injection (ICSI) method is a widely recognized strategy for accomplishing pregnancy and is considered one of the best methods for infertility treatment worldwide. Choosing the best sperms is done by visual inspection of the specimen, which is reliant on the abilities and judgment of the embryologists and as such is prone to human error. Consequently, a system that detects normal sperms automatically is required for speedier and more precise outcomes. Deep learning approaches are usually effective for classification and detection purposes. This paper uses the Residual Network (ResNet-50) deep learning architecture to recognize human sperms after classification of human sperm heads. The proposed ResNet-50 model achieved an accuracy of 96.66% and demonstrated its efficiency at detecting healthy sperms. The healthy sperms are used for injection into eggs by andrologists, who always look for easier and more advanced methods in order to increase the success rate of the ICSI process.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_82-Classification_of_Human_Sperms_using_ResNet_50.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Early Warning Model for Intelligent Operation of Power Engineering based on Kalman Filter Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140280</link>
        <id>10.14569/IJACSA.2023.0140280</id>
        <doi>10.14569/IJACSA.2023.0140280</doi>
        <lastModDate>2023-02-28T10:12:19.5030000+00:00</lastModDate>
        
        <creator>Haopeng Shi</creator>
        
        <creator>Xiang Li</creator>
        
        <creator>Pei Sun</creator>
        
        <creator>Najuan Jia</creator>
        
        <creator>Qiyan Dou</creator>
        
        <subject>Kalman filter; power engineering; intelligent operation; early warning model; image denoising; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Accurate early warning for the intelligent operation of power engineering can detect abnormal operation of substation equipment in time and ensure its safe operation. Thus, an early warning model for intelligent operation of power engineering based on the Kalman filter algorithm is constructed. In this model, a noise elimination method for substation equipment inspection images based on a particle resampling filter algorithm is introduced. After removing noise from the operation situation inspection images of substation equipment, the gradient direction histogram feature, the Lab color space feature, and the edge contour feature are extracted by a multi-feature extraction method based on multi-feature fusion. These features are combined to form a feature description set of the equipment operation situation, which serves as the identification attribute set of the Kalman-filter-based anomaly identification and early warning model for intelligent operation of electric power engineering. Tests show that when the model is used to observe the temperature trend of the top layer of a transformer, the temperature error is very small and the early warning accuracy for abnormal top-layer temperature is very high, so abnormal operation of substation equipment can be found in time.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_80-An_Early_Warning_Model_for_Intelligent_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Pneumonia Diagnosis using a 2D Deep Convolutional Neural Network with Chest X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140281</link>
        <id>10.14569/IJACSA.2023.0140281</id>
        <doi>10.14569/IJACSA.2023.0140281</doi>
        <lastModDate>2023-02-28T10:12:19.5030000+00:00</lastModDate>
        
        <creator>Kamila Kassylkassova</creator>
        
        <creator>Batyrkhan Omarov</creator>
        
        <creator>Gulnur Kazbekova</creator>
        
        <creator>Zhadra Kozhamkulova</creator>
        
        <creator>Mukhit Maikotov</creator>
        
        <creator>Zhanar Bidakhmet</creator>
        
        <subject>Pneumonia; deep learning; CNN; chest X-rays; radiology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Tiny air sacs in one or both lungs become inflamed as a result of the lung infection known as pneumonia. In order to provide the best possible treatment plan, pneumonia must be diagnosed accurately and quickly at an early stage. Nowadays, a chest X-ray is regarded as the most effective imaging technique for detecting pneumonia. However, chest X-ray analysis can be quite difficult and laborious. For this purpose, this study proposes a deep convolutional neural network (CNN) with 24 hidden layers to identify pneumonia from chest X-ray images. To achieve high accuracy with the proposed deep CNN, we applied an image processing method as well as rescaling and data augmentation methods such as shear_range, rotation, zooming, CLAHE, and vertical_flip. The proposed approach has been evaluated using different criteria and demonstrated 97.2%, 97.1%, 97.43%, 96%, and 98.8% performance in terms of accuracy, precision, recall, F-score, and the AUC-ROC curve, respectively. Thus, the applied deep CNN obtains a high level of performance in pneumonia detection. In general, the provided approach is intended to aid radiologists in making an accurate pneumonia diagnosis. Additionally, our suggested models could be helpful in the early detection of other chest-related illnesses such as COVID-19.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_81-Automated_Pneumonia_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Mobile Application to Reduce the Rate of People with Text Neck Syndrome</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140279</link>
        <id>10.14569/IJACSA.2023.0140279</id>
        <doi>10.14569/IJACSA.2023.0140279</doi>
        <lastModDate>2023-02-28T10:12:19.4870000+00:00</lastModDate>
        
        <creator>Rosa Perez-Siguas</creator>
        
        <creator>Hernan Matta-Solis</creator>
        
        <creator>Eduardo Matta-Solis</creator>
        
        <creator>Hernan Matta-Perez</creator>
        
        <creator>Luis Perez-Siguas</creator>
        
        <creator>Randall Seminario Unzueta</creator>
        
        <creator>Victoria Tacas-Yarcuri</creator>
        
        <subject>Accelerometer; android; firebase; gyroscope; mobile devices; sensors; text neck</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Nowadays, it is no surprise that mobile devices have become a very useful tool in the daily tasks of many people worldwide, thanks to features such as portability, connectivity, entertainment, and use as a work tool. However, the bad posture users adopt when using them produces a syndrome called &quot;Text Neck&quot;, caused by prolonged use of the devices while looking down and tilting the head at different angles. The inclination of the head has a detrimental effect on the neck joints: the greater the degree of inclination, the greater the detrimental effect of the weight of the head on the neck. Mobile devices, however, now include sensors that help monitor users&#39; activities: the gyroscope, which allows the position of the device to be determined, and the accelerometer, which measures the amount of movement of the device. Accordingly, a mobile application has been developed that monitors the angle of inclination of the device and the time it remains at that angle, and notifies users to adopt a proper position. The aim is to reduce the number of people affected by text neck syndrome.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_79-Development_of_a_Mobile_Application_to_Reduce_the_Rate_of_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EFASFMM: A Unique Approach for Early Prediction of Type II Diabetics using Fire Fly and Semi-supervised Min-Max Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140278</link>
        <id>10.14569/IJACSA.2023.0140278</id>
        <doi>10.14569/IJACSA.2023.0140278</doi>
        <lastModDate>2023-02-28T10:12:19.4700000+00:00</lastModDate>
        
        <creator>B. Manikyala Rao</creator>
        
        <creator>Mohammed Ali Hussain</creator>
        
        <subject>Fire Fly Algorithm (FFA); machine learning (ML); Semi-supervised Min-Max (SSMM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Diabetes mellitus, often known as type 2 diabetes, is a non-insulin-dependent condition and one of the most serious illnesses, affecting a large number of people. Between 2 and 5 million individuals worldwide die from diabetes each year. If diabetes is identified sooner, it can be managed, and catastrophic risks linked to it, including nephropathy and heart stroke, can be avoided. Therefore, early diabetes diagnosis helps preserve good health. Machine learning (ML), which has recently made strides, is now being used in a number of medical health-related fields. The innovative, nature-inspired Firefly algorithm has been shown to be effective at solving a range of numerical optimization problems; over its iterations, the traditional firefly method employs a fixed step size in models for semi-supervised learning (SSL). The firefly algorithm is effective for solving classification problems involving both a sizable amount of unlabelled data and a limited number of labelled samples. The fuzzy min-max (FMM) family of neural networks, in this regard, provides online learning for tackling both supervised and unsupervised situations. A special combination of the two algorithms is proposed, one utilised for optimization and the other for making early predictions of type 2 diabetes. The findings for the training and testing phases for accuracy, precision, sensitivity, specificity, and F-score are reported as 97.96%, 97.82%, 98.10%, 97.82%, and 97.95%, respectively, which are finer than current state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_78-EFASFMM_ A_Unique_Approach_for_Early_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Multi-Verse Optimizer (TMVO) and Applying it in Test Data Generation for Path Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140277</link>
        <id>10.14569/IJACSA.2023.0140277</id>
        <doi>10.14569/IJACSA.2023.0140277</doi>
        <lastModDate>2023-02-28T10:12:19.4700000+00:00</lastModDate>
        
        <creator>Mohammad Hashem Ryalat</creator>
        
        <creator>Hussam N. Fakhouri</creator>
        
        <creator>Jamal Zraqou</creator>
        
        <creator>Faten Hamad</creator>
        
        <creator>Mamon S. Alzboun</creator>
        
        <creator>Ahmad K. Al hwaitat</creator>
        
        <subject>MVO; optimization; testing; swarm intelligence; multi-verse optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Data testing is a vital part of the software development process, and there are various approaches available to improve the exploration of all possible software code paths. This study introduces two contributions. Firstly, an improved version of the Multi-verse Optimizer called Testing Multi-Verse Optimizer (TMVO) is proposed, which takes into account the movement of the swarm and the mean of the two best solutions in the universe. The particles move towards the optimal solution by using a mean-based algorithm model, which guarantees efficient exploration and exploitation. Secondly, TMVO is applied to automatically develop test cases for structural data testing, particularly path testing. Instead of automating the entire testing process, the focus is on centralizing automated procedures for collecting testing data. Automation for generating testing data is becoming increasingly popular due to the high cost of manual data generation. To evaluate the effectiveness of TMVO, it was tested on various well-known functions as well as five programs that presented unique challenges in testing. The test results indicated that TMVO performed better than the original MVO algorithm on the majority of the tested functions.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_77-Enhanced_Multi_Verse_Optimizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Automatic Detection Algorithm for Pedestrians on the Road Based on Image Processing Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140276</link>
        <id>10.14569/IJACSA.2023.0140276</id>
        <doi>10.14569/IJACSA.2023.0140276</doi>
        <lastModDate>2023-02-28T10:12:19.4530000+00:00</lastModDate>
        
        <creator>Qing Zhang</creator>
        
        <subject>Pedestrian detection; region-based convolutional neural network; scale-invariant feature transform; support vector machine; characteristic scale; Difference of Gaussians operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Accurate detection of pedestrian targets can effectively improve the performance of intelligent transportation and surveillance projects. In order to enhance the accuracy of detecting pedestrian targets on the road, this paper first introduces the traditional pedestrian target detection algorithm, then proposes the Faster Region-based Convolutional Neural Network (Faster RCNN) algorithm to detect pedestrian targets and improves it to make good use of convolutional features at different scales. Finally, the support vector machine (SVM), traditional Faster RCNN, and optimized Faster RCNN algorithms were compared in simulation experiments. The results showed that the optimized Faster RCNN algorithm had higher detection accuracy and recall, obtained a more accurate target localization frame, and detected faster than the SVM and traditional Faster RCNN algorithms; the traditional Faster RCNN algorithm had higher detection accuracy and target frame localization accuracy than the SVM algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_76-Research_on_Automatic_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Realizing the Quantum Relative Entropy of Two Noisy States using the Hudson-Parthasarathy Equations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140275</link>
        <id>10.14569/IJACSA.2023.0140275</id>
        <doi>10.14569/IJACSA.2023.0140275</doi>
        <lastModDate>2023-02-28T10:12:19.4400000+00:00</lastModDate>
        
        <creator>Bhaveshkumar B. Prajapati</creator>
        
        <creator>Nirbhay Kumar Chaubey</creator>
        
        <subject>Schr&#246;dinger equation; Ito calculus; quantum relative entropy; Hudson-Parthasarathy equation; quantum noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The idea of noisy states can be developed through the quantum relative entropy over a given time period, constructing the average value of an observable X at a given time from the system variables. A random Hermitian matrix is used to represent the quantum system observables with BATH states. The Hudson-Parthasarathy (HP) equation framework for stochastic processes allows us to simulate quantum relative entropy using quantum Brownian motion. The Sudarshan-Lindblad density evolution matrix equation was derived in generalized form in our previous work. This paper&#39;s goal is to illustrate how the HP equation may be used to estimate the density matrix for noise in a perturbed quantum system of a stochastic process. The last stage involves using MATLAB to estimate and simulate a random density matrix and measure the quantum average Tr(ρ(t)X) at various times. These formulas would be helpful in determining how sensitive the evolving/evolved states are to changes in the Hamiltonian of the noise operators in a sensitivity/robustness study of quantum systems.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_75-Realizing_the_Quantum_Relative_Entropy_of_Two_Noisy_States.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Placement of Edge Servers in Mobile Cloud Computing using Artificial Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140273</link>
        <id>10.14569/IJACSA.2023.0140273</id>
        <doi>10.14569/IJACSA.2023.0140273</doi>
        <lastModDate>2023-02-28T10:12:19.4230000+00:00</lastModDate>
        
        <creator>Bing Zhou</creator>
        
        <creator>Bei Lu</creator>
        
        <creator>Zhigang Zhang</creator>
        
        <subject>Artificial bee colony algorithm; k-means; server placement; mobile cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Utilizing smart mobile devices for entertainment, education, and social networking has grown recently. Even though mobile applications are getting more sophisticated and resource-intensive, the computing power of mobile devices is still constrained. Mobile applications can perform better by shifting parts of their functions to cloud servers. However, because the cloud is frequently positioned far from mobile phone users, there may be a significant and unpredictable delay in the data transfer between users and the cloud. This matters for mobile applications, since customers greatly value rapid responses. Mobile edge computing gives mobile phone users close-up access to information technology and cloud computing services. In this article, the main goal is to use the artificial bee colony meta-heuristic algorithm to solve the problem of placing edge servers in mobile edge computing. Load balancing between servers is another challenge discussed in this article; to address it, this study determines the locations of the servers by treating the distribution of workload between servers as a cost function in the artificial bee colony algorithm. The results of the proposed method are compared against the load-balancing criteria of K-means clustering and show the superiority of the proposed method with regard to the loading criteria compared to this clustering method.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_73-Placement_of_Edge_Servers_in_Mobile_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Erythemato-Squamous Disease Detection using Best Optimized Estimators of ANN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140274</link>
        <id>10.14569/IJACSA.2023.0140274</id>
        <doi>10.14569/IJACSA.2023.0140274</doi>
        <lastModDate>2023-02-28T10:12:19.4230000+00:00</lastModDate>
        
        <creator>Rajashekar Deva</creator>
        
        <creator>G .Narsimha</creator>
        
        <subject>Conditional probability; naive Bayesian; bayesian optimization; grid search; optimization techniques; estimators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The medical field has focused on automating skin cancer detection since the &quot;Monkey Pox&quot; pandemic era. Previous works proposed ANN mechanisms to classify the type of skin cancer. However, those models implement ANN layers with standard estimator components, such as hidden layers using the ReLU activation function and a number of neurons that is generally a power of two, but these values are not always ideal. A few researchers implemented optimization techniques for tuning the estimators of AI algorithms, but those mechanisms require more resources and don&#39;t guarantee the best values for each estimator. The proposed method analyzes all the essential estimators of every possible neural network layer. It then applies a modified version of Bayesian optimization, which avoids the disadvantages of Grid and Random optimization techniques, and picks the best estimator by using the conditional probability of naive Bayes for every combination.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_74-Erythemato_Squamous_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Prediction using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140272</link>
        <id>10.14569/IJACSA.2023.0140272</id>
        <doi>10.14569/IJACSA.2023.0140272</doi>
        <lastModDate>2023-02-28T10:12:19.4070000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Andr&#233;s Epifan&#237;a-Huerta</creator>
        
        <creator>Carmen Torres-Cecl&#233;n</creator>
        
        <creator>John Ruiz-Alvarado</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Prediction; models; machine learning; cells; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Breast cancer is a type of cancer that develops in the cells of the breast. Treatment for breast cancer usually involves X-ray therapy, chemotherapy, or a combination of both. Detecting cancer at an early stage can save a person&#39;s life, and artificial intelligence (AI) plays a very important role in this area. Predicting breast cancer therefore remains a very challenging issue for clinicians and researchers. This work aims to predict the probability of breast cancer in patients using machine learning (ML) models such as Multilayer Perceptron (MLP), K-Nearest Neighbor (KNN), AdaBoost (AB), Bagging, Gradient Boosting (GB), and Random Forest (RF). The breast cancer diagnostic medical dataset from the Wisconsin repository has been used; it includes 569 observations and 32 features. Following the data analysis methodology, data cleaning, exploratory analysis, training, testing, and validation were performed. The performance of the models was evaluated with the following parameters: classification accuracy, specificity, sensitivity, F1 score, and precision. The training and results indicate that the six trained models can provide optimal classification and prediction results. The RF, GB, and AB models achieved 100% accuracy, outperforming the other models; they are therefore the suggested models for breast cancer identification, classification, and prediction. Likewise, the Bagging, KNN, and MLP models achieved performances of 99.56%, 95.82%, and 96.92%, respectively, also approaching optimal yield. Finally, the results show a clear advantage of the RF, GB, and AB models, as they achieve more accurate results in breast cancer prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_72-Breast_Cancer_Prediction_using_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Twins for Smart Home Gadget Threat Prediction using Deep Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140270</link>
        <id>10.14569/IJACSA.2023.0140270</id>
        <doi>10.14569/IJACSA.2023.0140270</doi>
        <lastModDate>2023-02-28T10:12:19.3930000+00:00</lastModDate>
        
        <creator>Valluri Padmapriya</creator>
        
        <creator>Muktevi Srivenkatesh</creator>
        
        <subject>Digital twins; deep learning; convolution neural network; logistic regression; internet of things; smart home; IoT gadget functionality; threat prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Digital twin is one of the most important innovations in the Internet of Things (IoT) era and business disruption. Digital twins are a growing technology that bridges the gap between the real and the digital. Home automation in the IoT refers to the practice of automatically managing and monitoring smart home electronics by use of a variety of control system methods. The geysers, refrigerators, fans, lighting, fire alarms, kitchen timers, and other electrical and electronic items in the home can all be managed and monitored with the help of a variety of control methods. Digital twins replicate the physical machine in real time and produce data, such as asset degradation and product performance level, that may be used by the predictive maintenance algorithm to identify product functionality levels. The purpose of this research is to design a Digital Twin framework using machine learning and state estimation algorithms to assess and predict home appliance status based on the functionality probability of smart home gadgets. The main goal of this research is to create a digital twin for smart home gadgets that monitors the health status of these devices to increase their lifetime and reduce maintenance costs. This research presents a Deep Convolution Neural Network based Logistic Regression Model with Digital Twins (DCNN-LR-DT) for accurate prediction of smart home gadget functionality levels and for predicting threats in advance. The proposed model is compared with traditional models, and the results show that the proposed model performs better than the traditional models.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_70-Digital_Twins_for_Smart_Home_Gadget_Threat_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Privacy-Preserving Protocol for Academic Certificates on Hyperledger Fabric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140271</link>
        <id>10.14569/IJACSA.2023.0140271</id>
        <doi>10.14569/IJACSA.2023.0140271</doi>
        <lastModDate>2023-02-28T10:12:19.3930000+00:00</lastModDate>
        
        <creator>Omar S. Saleh</creator>
        
        <creator>Osman Ghazali</creator>
        
        <creator>Norbik Bashah Idris</creator>
        
        <subject>Blockchain technology; hyperledger fabric blockchain; privacy preservation; decentralized control verification privacy-centered (DCVPC) protocol; academic certificates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Academic certificates are integral to an individual&#39;s education and career prospects, yet conventional paper-based certificates pose challenges with their transport and vulnerability to forgery. In response to this predicament, institutions have taken measures to release e-certificates, though ensuring authenticity remains a pressing concern. Blockchain technology, recognised for its attributes of security, transparency, and decentralisation, presents a resolution to this problem and has garnered attention from various sectors. While blockchain-based academic certificate management systems have been proposed, current systems exhibit some security and privacy limitations. To address these issues, this research proposes a new Decentralised Control Verification Privacy-Centered (DCVPC) protocol based on Hyperledger Fabric blockchain for preserving the privacy of academic certificates. The proposed protocol aims to protect academic certificates&#39; privacy by granting complete authority over all network nodes, creating channels for universities to have their private environment, and limiting access to the ledger. The protocol is highly secure, resistant to attacks, and allows improved interoperability and automation of the certificate verification process. A proof-of-concept was developed to demonstrate the protocol&#39;s functionality and performance. The proposed protocol presents a promising solution for enhancing security, transparency, and privacy of academic certificates. It guarantees that the certificate&#39;s rightful owner is correctly identified, and the issuer is widely recognised. This research makes a valuable contribution to the area of blockchain-based academic certificate management systems by introducing a new protocol that addresses the present security and privacy limitations.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_71-A_New_Privacy_Preserving_Protocol_for_Academic_Certificates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Encryption for Multimedia Digital Audio Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140269</link>
        <id>10.14569/IJACSA.2023.0140269</id>
        <doi>10.14569/IJACSA.2023.0140269</doi>
        <lastModDate>2023-02-28T10:12:19.3770000+00:00</lastModDate>
        
        <creator>Xiaodong Zhou</creator>
        
        <creator>Chao Wei</creator>
        
        <creator>Xiaotang Shao</creator>
        
        <subject>Multimedia digital audio; chaotic theory; encryption; logistic mapping; sine mapping; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Driven by the development of multimedia, the encryption of multimedia digital audio has received more attention; however, cryptography-based encryption methods have many shortcomings in the encryption of multimedia information, and new encryption methods are urgently needed. This paper briefly introduced cryptography and chaos theory, designed a chaos-based encryption algorithm that combined Logistic mapping and Sine mapping for confusion and used a Hopfield chaos neural network for diffusion, explained the encryption and decryption process of the algorithm, and tested the algorithm. It was found that the keys obtained by the proposed algorithm passed the SP800-22 test, and the correlation coefficients between the three encrypted audio files and the original audio were 0.0261, -0.0536, and 0.0237, respectively, all of which were small; the peak signal-to-noise ratio (PSNR) values were -0.348 dB, -7.645 dB, and -3.636 dB, respectively, which were significantly different from the original audio. The NSCR and UACI were also closer to the original values. The results prove that the proposed algorithm has good security and can encrypt actual multimedia digital audio.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_69-A_Study_of_Encryption_for_Multimedia_Digital_Audio_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an English Web-based Teaching Resource Sharing Platform based on Mobile Web Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140268</link>
        <id>10.14569/IJACSA.2023.0140268</id>
        <doi>10.14569/IJACSA.2023.0140268</doi>
        <lastModDate>2023-02-28T10:12:19.3600000+00:00</lastModDate>
        
        <creator>Yan Zhang</creator>
        
        <subject>Web service; SOAP; English teaching resources; sharing platform; personalised recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Thanks to the booming technology of computers and multimedia, student-centered online teaching resource platforms have become an important way for students to learn. However, English teaching resource platforms at the present stage fail to effectively integrate the massive and scattered learning resources. Based on this, the study proposes an English online teaching resource sharing platform based on mobile Web technology, using the SOAP protocol to deploy heterogeneous data resources as Web services to achieve interchangeability between heterogeneous resources. In addition, to enhance the efficient use of learning resources by students, the study proposes a hybrid algorithm combining a collaborative filtering algorithm and a sequential pattern mining algorithm to achieve personalized sequential recommendation for students. The results show that the platform created by the study exhibits excellent performance in terms of resource transfer capability, achieves efficient teaching resource sharing with a short response time, and that the proposed recommendation algorithm is highly accurate.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_68-Design_of_an_English_Web_based_Teaching_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sobel Edge Detection Algorithm with Adaptive Threshold based on Improved Genetic Algorithm for Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140266</link>
        <id>10.14569/IJACSA.2023.0140266</id>
        <doi>10.14569/IJACSA.2023.0140266</doi>
        <lastModDate>2023-02-28T10:12:19.3470000+00:00</lastModDate>
        
        <creator>Weibin Kong</creator>
        
        <creator>Jianzhao Chen</creator>
        
        <creator>Yubin Song</creator>
        
        <creator>Zhongqing Fang</creator>
        
        <creator>Xiaofang Yang</creator>
        
        <creator>Hongyan Zhang</creator>
        
        <subject>Genetic algorithm; Sobel operator; edge detection; adaptive threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In this paper, a novel adaptive threshold Sobel edge detection algorithm based on an improved genetic algorithm is proposed to detect edges. Because of the influence of external factors in the actual detection process, the detection result is often not accurate enough when the configured threshold of the target image is far away from the real threshold. Different thresholds are calculated by the improved genetic algorithm for different images, and the calculated threshold is used in edge detection. The experimental results show that the image processed by the improved algorithm has stronger edge continuity. It is shown that the proposed algorithm has a better detection effect and applicability than the traditional Sobel algorithm.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_66-Sobel_Edge_Detection_Algorithm_with_Adaptive_Threshold.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compiler Optimization Prediction with New Self-Improved Optimization Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140267</link>
        <id>10.14569/IJACSA.2023.0140267</id>
        <doi>10.14569/IJACSA.2023.0140267</doi>
        <lastModDate>2023-02-28T10:12:19.3470000+00:00</lastModDate>
        
        <creator>Chaitali Shewale</creator>
        
        <creator>Sagar B. Shinde</creator>
        
        <creator>Yogesh B. Gurav</creator>
        
        <creator>Rupesh J. Partil</creator>
        
        <creator>Sandeep U. Kadam</creator>
        
        <subject>Compiler optimization prediction; feature extraction; feature exploitation; improved hunger games search algorithm; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Users may now choose from a vast range of compiler optimizations. These optimizations interact in a variety of sophisticated ways with one another and with the source code. The order in which optimization steps are applied can have a considerable influence on the performance obtained. As a result, we created a novel compiler optimization prediction model. Our model comprises three operational phases: model training, feature extraction, and feature exploitation. The model training step includes initialization as well as the formation of candidate sample sets. The inputs were then sent to the feature extraction phase, which retrieved static, dynamic, and improved entropy features. These extracted features were then optimized by the feature exploitation phase, which employs an improved hunger games search algorithm to choose the best features. In this work, we used a Convolutional Neural Network to predict compiler optimization based on these selected characteristics, and the findings show that our compiler optimization model surpasses previous approaches.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_67-Compiler_Optimization_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Real-Time Weed Detection Technique using YOLOv7</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140265</link>
        <id>10.14569/IJACSA.2023.0140265</id>
        <doi>10.14569/IJACSA.2023.0140265</doi>
        <lastModDate>2023-02-28T10:12:19.3300000+00:00</lastModDate>
        
        <creator>Ch. Lakshmi Narayana</creator>
        
        <creator>Kondapalli Venkata Ramana</creator>
        
        <subject>Weed detection; YOLOv7; early crop weed; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Since farming is becoming increasingly expensive, efficient farming entails operating without suffering losses, which is what the current situation demands. Weeds are a key issue in agriculture since they contribute significantly to agricultural losses. To control weeds, pesticides are now evenly applied across the entire area. This approach not only costs a lot of money but also harms the environment and people&#39;s health. Therefore, spot spraying requires an automatic system. When a deep learning embedded system is used to operate a drone, herbicides can be sprayed in the desired location. With the continuous advancement of object identification technology, the YOLO family of algorithms, with extremely high precision and speed, has been applied in a variety of scene detection applications. We propose a YOLOv7-based object detection approach for creating a weed detection system. Finally, we trained and tested the YOLOv7 model with different parameters on the early crop weed dataset and the 4weed dataset. Experimental results revealed that the YOLOv7 model achieved mAP@0.50, F1-score, Precision, and Recall values for the bounding boxes of 99.6, 97.6, 99.8, and 95.5, respectively, on the early crop weed dataset, and 78.53, 79.83, 86.34, and 74.24 on the 4weed dataset. The agriculture industry can benefit from using the suggested YOLOv7 model with high accuracy in terms of productivity, efficiency, and time.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_65-An_Efficient_Real_Time_Weed_Detection_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach: Tokenization Framework based on Sentence Structure in Indonesian Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140264</link>
        <id>10.14569/IJACSA.2023.0140264</id>
        <doi>10.14569/IJACSA.2023.0140264</doi>
        <lastModDate>2023-02-28T10:12:19.3130000+00:00</lastModDate>
        
        <creator>Johannes Petrus</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Sukemi</creator>
        
        <creator>Erwin</creator>
        
        <subject>Token; tokenization; multi-word; sentence structure; sentence elements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>This study proposes a new approach to the sentence tokenization process. Sentence tokenization, as commonly known, is the process of breaking sentences apart using spaces as separators. Space-based sentence tokenization only generates single-word tokens: in a sentence consisting of five words, tokenization will produce five tokens, one word each. This process ignores the loss of the original meaning of the separated words. Our proposed tokenization framework can generate one-word tokens and multi-word tokens at the same time. The process is carried out by extracting the sentence structure to obtain sentence elements, where each sentence element is a token. There are five sentence elements: Subject, Predicate, Object, Complement, and Adverb. We extract sentence structures using deep learning methods, where models are built by training datasets that have been prepared beforehand. The training results are quite good, with an F1 score of 0.7, and it is still possible to improve. Sentence similarity is used to measure the performance of one-word tokens compared to multi-word tokens; in this case, the multi-word tokens achieve better accuracy. This framework was created for the Indonesian language but can also be applied to other languages with dataset adjustments.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_64-A_Novel_Approach_Tokenization_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Predictors of Mobile Banking Usage: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140263</link>
        <id>10.14569/IJACSA.2023.0140263</id>
        <doi>10.14569/IJACSA.2023.0140263</doi>
        <lastModDate>2023-02-28T10:12:19.3130000+00:00</lastModDate>
        
        <creator>Mohammed Abd Al-Munaf Hashim</creator>
        
        <creator>Zainuddin Bin Hassan</creator>
        
        <subject>M-banking; TAM; Service quality; Loyalty; UTAUT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Mobile banking has become an essential method of conducting banking transactions. However, the number of users worldwide is still limited. The purpose of this study is to review the literature and understand the status of m-banking adoption, usage, and loyalty. Keywords were used to search for related articles in three databases, namely Web of Science (WoS), Scopus, and Google Scholar. A filtering process was conducted to select the most related articles, resulting in 45 reviewed articles. The findings showed that the number of articles pertaining to m-banking is increasing. Malaysia and Indonesia have the largest number of articles. The technology acceptance model (TAM) is used widely in the m-banking literature, and most of the reviewed studies are empirical with adequate sample sizes. This explains the increased usage of structural equation modeling (SEM). The most critical factors for m-banking adoption, usage, and loyalty are service quality, trust, perceived usefulness, perceived ease of use, security, risk, privacy, and social influence. Future research is suggested to examine m-banking in different regions and to use mediating and moderating variables to explain the variation in adoption.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_63-The_Predictors_of_Mobile_Banking_Usage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Feature DCR based Drug Compound Selection and Recommendation System for Efficient Decision-Making using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140262</link>
        <id>10.14569/IJACSA.2023.0140262</id>
        <doi>10.14569/IJACSA.2023.0140262</doi>
        <lastModDate>2023-02-28T10:12:19.3000000+00:00</lastModDate>
        
        <creator>ST. Aarthy</creator>
        
        <creator>J. L. Mazher Iqbal</creator>
        
        <subject>Decisive support Systems; GA; DCR; drug selection; compound selection; fitness; recommendation system; cardiac disease; RMDCRSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The performance of treating cardiac diseases depends on the kind of drug being selected. There exist numerous decision support systems which work according to certain characteristics and factors, such as drug availability and popularity. Still, they struggle to achieve the expected performance in supporting the medical practitioner. To handle this issue, a multi-feature drug curing rate based drug compound selection and recommendation system (MDCRSR) is presented. The method utilizes medical histories and datasets of various medical organizations related to the disease considered. Using the traces, the method identifies the drug compounds and features and performs preprocessing, which eliminates noisy data points. Further, the features of the traces are extracted to perform training with a genetic algorithm. At the test phase, the method estimates the fitness measure for different drug combinations and compounds by measuring their Drug Curing Rate (DCR). The method performs crossover and mutation to produce various populations of drug compounds. According to the curing rate, the drug compound pattern or population is selected and ranked. The ranked results are presented to the medical practitioner. The method improves the performance of the recommendation system as well as drug compound selection.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_62-Multi_Feature_DCR_based_Drug_Compound_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Deep Learning Framework for Detection and Categorization of Brain Tumor from Magnetic Resonance Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140261</link>
        <id>10.14569/IJACSA.2023.0140261</id>
        <doi>10.14569/IJACSA.2023.0140261</doi>
        <lastModDate>2023-02-28T10:12:19.2830000+00:00</lastModDate>
        
        <creator>Yousef Methkal Abd Algani</creator>
        
        <creator>B. Nageswara Rao</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>B. Ashreetha</creator>
        
        <creator>K. V. Daya Sagar</creator>
        
        <creator>Yousef A. Baker El-Ebiary</creator>
        
        <subject>Brain tumour; BGWO; LSTM; CNN; MRI dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Cellular abnormality leads to brain tumour formation, which is one of the foremost causes of adult death all over the world. The typical size of a brain tumour increases within 25 days due to its rapid growth. Early brain tumour diagnosis can save millions of lives, so an automatic method for early brain tumour identification is necessary. MRI-based brain tumour detection improves the survival of patients. Tumour visibility is improved in MRI, which facilitates subsequent treatment. This paper suggests a method to distinguish between brain MRI images with a tumour and images without a tumour. Many approaches in the field of machine learning, including Support Vector Machine, Artificial Neural Networks, and the KNN classifier, have been developed for solving these issues, but these methods are time consuming, inefficient, and require complex procedures. For a Computer Assisted Diagnosis system to aid physicians and radiologists in the identification and categorization of tumours, artificial intelligence is used. Deep learning has demonstrated encouraging efficiency in computer vision systems over the past decade. In this paper, identification and classification of brain tumours from MR images employing the BGWO-CNN-LSTM method is proposed. The proposed method is evaluated on a testing set with 6100 MRI images of four different kinds of brain tumours. In comparison to earlier research on the same data set, the suggested approach achieved 99.74% accuracy, 99.23% recall, and 99.54% specificity, which are greater than those of the other techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_61-A_Novel_Hybrid_Deep_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Tolerance Smart Egg Incubation System with Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140260</link>
        <id>10.14569/IJACSA.2023.0140260</id>
        <doi>10.14569/IJACSA.2023.0140260</doi>
        <lastModDate>2023-02-28T10:12:19.2670000+00:00</lastModDate>
        
        <creator>Emiliyan Petkov</creator>
        
        <creator>Teodor Kalushkov</creator>
        
        <creator>Donika Valcheva</creator>
        
        <creator>Georgi Shipkovenski</creator>
        
        <subject>Hatching; incubation; computer vision; cloud architecture; sending alerts; smart farming; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Reliability of incubators is one of their most important specifications. The development of wireless, cloud, and computer vision based technologies gives new possibilities for work process control and increasing fault tolerance. Regardless of whether the hatching is in the field of mass production or in the breeding of rare species of birds, detecting a critical situation and sending timely notifications can prevent serious losses. Experience shows that network-isolated solutions are not reliable enough, and good management requires complex algorithms that are beyond the capabilities of a local, single controller. Even with the duplication of some sensors and actuators, incubators without an external connection pose a high risk because their controller is a central point in the architecture and can fail, leaving the farmer with no alert about the accident. The report presents a solution that uses periodic checks from cloud structures on the condition and operability of the incubator. In parallel, a video surveillance system analyzes the internal environment and the condition of hatching chicks. When potential or real risks occur, the system sends notifications to the responsible persons, even to their wrists. Additionally, the proposed smart egg incubation methodology has been found to reduce the amount of time required for farmers to oversee the incubation process by up to 50%, allowing them to focus on other important tasks while still ensuring optimal hatching conditions for their eggs. Overall, the proposed methodology offers a significant improvement in egg incubation efficiency and reliability, with potential applications in both commercial and personal settings.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_60-Fault_Tolerance_Smart_Egg_Incubation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Warning for Sugarcane Growth using Phenology-Based Remote Sensing by Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140259</link>
        <id>10.14569/IJACSA.2023.0140259</id>
        <doi>10.14569/IJACSA.2023.0140259</doi>
        <lastModDate>2023-02-28T10:12:19.2670000+00:00</lastModDate>
        
        <creator>Sudianto Sudianto</creator>
        
        <creator>Yeni Herdiyeni</creator>
        
        <creator>Lilik Budi Prasetyo</creator>
        
        <subject>Google earth engine; landsat 8; monitoring and assessment; sugarcane health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>It is crucial to monitor crop growth in order to increase agricultural productivity. In sugarcane&#39;s case, growth monitoring can be supported by remote sensing. This research aimed to develop an early warning for sugarcane growth using remote sensing with the Landsat 8 satellite at a crucial phenological time. The early warning was developed by identifying regional sugarcane growth patterns and analyzing seasonal trends using linear and harmonic regression models. Identification of growth patterns aims to determine the crucial phenological time by calculating the statistical value of the NDVI spectral index. Finally, the sugarcane growth conditions were monitored with various spectral indices for verification: NDVI, NDBaI, NDWI, and NDDI. All processes used Google Earth Engine (GEE) as a cloud-based platform. The results showed that sugarcane phenology from January to June is crucial for monitoring and assessment. The values of the four corresponding indices indicated the importance of monitoring conditions to ensure a healthy sugarcane region. The results showed that two of the four regions were unhealthy during particular periods (unhealthy vegetation values were below 0.489 and vice versa), one due to excess water and the other due to drought.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_59-Early_Warning_for_Sugarcane_Growth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Derivative Rule and Estimation Methods of Intelligent High-Speed Railway Investment Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140258</link>
        <id>10.14569/IJACSA.2023.0140258</id>
        <doi>10.14569/IJACSA.2023.0140258</doi>
        <lastModDate>2023-02-28T10:12:19.2530000+00:00</lastModDate>
        
        <creator>Yang Meng</creator>
        
        <creator>Chuncheng Meng</creator>
        
        <creator>Xiaochen Duan</creator>
        
        <subject>High-speed railway; intelligent construction; investment estimation; interpretative structural model; system dynamics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Taking the investment estimation of intelligent construction of high-speed railway as the research object, and based on historical investment data from similar high-speed railway projects, this paper builds an interpretative structural model, establishes a system dynamics (SD) model for investment estimation of intelligent high-speed railway construction, and puts forward suggestions for supplementing the labor value theory and improving the value-added tax. The paper carries out in-depth research and analysis in the following aspects: 1) a list of influencing factors for investment estimation of intelligent high-speed railway construction in the feasibility study stage is constructed, and the interpretative structural model (ISM) is built to sort out the relationships between the influencing factors; 2) the SD model of intelligent high-speed railway construction cost estimation is established to improve the accuracy of investment estimation; 3) suggestions and schemes are put forward for improving the content of investment estimation for intelligent high-speed railway construction under high intelligence; 4) the labor value theory and the value-added tax base are improved and supplemented.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_58-Research_on_the_Derivative_Rule_and_Estimation_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Equally Spread Current Execution Load Modelling with Optimize Response Time Brokerage Policy for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140257</link>
        <id>10.14569/IJACSA.2023.0140257</id>
        <doi>10.14569/IJACSA.2023.0140257</doi>
        <lastModDate>2023-02-28T10:12:19.2370000+00:00</lastModDate>
        
        <creator>Anisah Hamimi Zamri</creator>
        
        <creator>Nor Syazwani Mohd Pakhrudin</creator>
        
        <creator>Shuria Saaidin</creator>
        
        <creator>Murizah Kassim</creator>
        
        <subject>Equally spread current execution (ESCE); optimize response time brokerage; cloud computing; load balancer; data modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Cloud computing is one of the significant technologies used to provide seamless internet access for large-scale applications and data storage purposes. The cloud is described as a large platform that enables users to access data from the internet without needing to buy storage space on their own equipment, such as a computer. Many studies have analysed load-balancing techniques on the cloud that distribute tasks equally between servers using the Equally Spread Current Execution (ESCE) algorithm. ESCE, a dynamic load balancer, has several problems, such as average-level performance and excessively long response times, which affect the Quality of Service. This research simulated a cloud computing concept using the ESCE load modelling technique with the CloudAnalyst simulator for three Data Center (DC) locations. The ESCE algorithm was simulated to enhance its performance as a load balancer and achieve higher throughput in the cloud environment. The results show that the ESCE average overall response time is shortest when the DC is located at R0, with response times of 15.05s, 13.05s with 10 VMs, and 8.631s with the Optimize Response Time brokerage policy. This research is significant for promoting load-balancing technique testing for virtualized cloud machine data centers on Quality of Service (QoS)-aware tasks for Internet of Things (IoT) services.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_57-Equally_Spread_Current_Execution_Load_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Hybrid Recommendation Algorithm based on Multi-objective Collaborative Filtering for Massive Cloud Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140256</link>
        <id>10.14569/IJACSA.2023.0140256</id>
        <doi>10.14569/IJACSA.2023.0140256</doi>
        <lastModDate>2023-02-28T10:12:19.2200000+00:00</lastModDate>
        
        <creator>Xiaoli Zhou</creator>
        
        <subject>Self preference; collaborative filtering; potential preferences; mixed recommendation; multi-objective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The current recommendation technology has some problems, such as a lack of timeliness and the contradiction between recommendation diversity and accuracy. To address the lack of timeliness, a time factor is introduced when constructing the self-preference model. The cold start problem in the collaborative filtering algorithm is solved by a hybrid similarity calculation method, and a potential preference model is constructed. The two models are fused to obtain a hybrid recommendation algorithm that improves recommendation performance. For the multi-objective contradiction, the NNIA algorithm is used to further optimize the candidate results of the hybrid recommendation, and the final recommendation list is obtained. Verification experiments show that the recall rate and accuracy of the fused preference model are better than those of the non-fused models: the accuracy is 9.57% and 8.23% higher than that of SPM and PPM, respectively, and the recall rate is 9.97% and 7.65% higher, respectively. The CBCF-NNIA algorithm achieves high recommendation accuracy and diversity, and can provide users with rich and diverse text content that meets their needs.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_56-Design_of_a_Hybrid_Recommendation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy and Integrity Verification Model with Decentralized ABE and Double Encryption Storage Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140255</link>
        <id>10.14569/IJACSA.2023.0140255</id>
        <doi>10.14569/IJACSA.2023.0140255</doi>
        <lastModDate>2023-02-28T10:12:19.2030000+00:00</lastModDate>
        
        <creator>Amrutha Muralidharan Nair</creator>
        
        <creator>R Santhosh</creator>
        
        <subject>Cloud computing; data integrity; ABK-XE; ECEA Algorithm; SHA1</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>To support a wide range of applications, cloud computing offers a variety of services. It has many positive adoption stories as well as a few negative ones, including security breaches. Many organizations hesitate to use cloud services to store sensitive and personal data because of security issues. A new model of relying on a third-party auditor (TPA) has been adopted to improve trust and encourage adoption among cloud clients (CC). Hence, a dynamic approach is required to control the privacy and integrity problems that occur across cloud computing. Decentralized attribute-based encryption techniques and the FHE approach are used to overcome these issues. In the proposed scheme, integrity checking is verified and audited by the TPA without any knowledge of the data content, and double encryption is performed on the data stored in the cloud. The data owner encrypts the data using the ABK-XE (Attribute Based Key generation with XOR encryption) technique and sends it to a tag server, which encrypts the data again using the ECEA (Elliptical Curve Elgamal) algorithm, generates the signature and a unique ID using the SHA-1 algorithm, and then stores the data in the cloud environment. The proposed algorithm integrates an auditing scheme with symmetric key encryption and homomorphic encryption.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_55-Privacy_and_Integrity_Verification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review on AI Algorithms and Techniques Adopted by e-Learning Platforms for Psychological and Emotional States</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140254</link>
        <id>10.14569/IJACSA.2023.0140254</id>
        <doi>10.14569/IJACSA.2023.0140254</doi>
        <lastModDate>2023-02-28T10:12:19.2030000+00:00</lastModDate>
        
        <creator>Lubna A. Alharbi</creator>
        
        <subject>Psychological states; emotional states; e-learning; online platforms; solutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Computers are becoming increasingly commonplace in educational settings. As a result of these advancements, a new field known as CEHL (Computing Environment for Human Learning), or e-learning, has emerged, where students have access to a variety of services at their convenience. Using an e-learning platform facilitates more efficient, optimized, and successful education. E-learning platforms allow for personalized instruction and on-demand access to relevant, up-to-date material. These e-learning strategies significantly impact learners&#39; emotional and psychological states, which in turn affect their abilities and motivations. Because of the learner&#39;s physical and temporal detachment from their tutor, encouraging learners can be challenging, leading to frustration, doubt, and ambivalence. The learner&#39;s drive to learn will be weakened, and their emotional and psychological state will be badly impacted as a result, both during and after the learning session. This research aimed to learn about the methods currently used by research facilities to analyze human emotions and mental states. The findings reveal the fundamental technologies that have been used in e-learning for education, including machine learning, deep learning, signal processing, and mathematical approaches. A wide variety of e-learning-focused real-world applications make use of these methods. Each study subject is explained in depth, and the most frequently used methods are also examined. Finally, we provide a comprehensive analysis of the prior art, our contributions, and their ramifications, together with a discussion of our shortcomings and suggestions for future research.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_54-A_Systematic_Literature_Review_on_AI_Algorithms_and_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Privacy-Centered Protocol for Enhancing Security and Authentication of Academic Certificates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140253</link>
        <id>10.14569/IJACSA.2023.0140253</id>
        <doi>10.14569/IJACSA.2023.0140253</doi>
        <lastModDate>2023-02-28T10:12:19.1900000+00:00</lastModDate>
        
        <creator>Omar S. Saleh</creator>
        
        <creator>Osman Ghazali</creator>
        
        <creator>Norbik Bashah Idris</creator>
        
        <subject>Academic certificates; privacy-centered protocol; privacy preservation; challenge handshake authentication protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Academic certificate authentication is crucial in safeguarding the rights and opportunities of individuals who have earned academic credentials. This authentication helps prevent fraud and forgery, ensuring that only those who have genuinely earned certificates can use them for education and career opportunities. With the increased use of online education and digital credentials in the digital age, the importance of academic certificate authentication has significantly grown. However, traditional techniques for authentication, such as QR code, barcode, and watermarking, have limitations regarding security and privacy. Therefore, proposing a privacy-centred protocol to enhance the security and authentication of academic certificates is vital to improve the trust and credibility of digital academic certificates, ensuring that individuals&#39; rights and opportunities are protected. In this context, we adopted the Challenge Handshake Authentication (CHA) protocol to propose the Certificate Verification Privacy Control Protocol (CVPC). We implemented it using Python and Flask with a Postgres database and an MVT structure for the application. The results of the implementation demonstrate that the proposed protocol effectively preserves privacy during the academic certificate issuance and verification process. Additionally, we developed a proof of concept to evaluate the proposed protocol, demonstrating its functionality and performance. The PoC provided insights into the strengths and weaknesses of the proposed protocol and highlighted its potential to prevent forgery and unauthorised access to academic certificates. Overall, the proposed protocol has the potential to significantly enhance the security and authenticity of academic certificates, improving the overall trust and credibility of the academic credentialing system.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_53-A_Privacy_Centered_Protocol_for_Enhancing_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Logic based Solution for Network Traffic Problems in Migrating Parallel Crawlers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140252</link>
        <id>10.14569/IJACSA.2023.0140252</id>
        <doi>10.14569/IJACSA.2023.0140252</doi>
        <lastModDate>2023-02-28T10:12:19.1730000+00:00</lastModDate>
        
        <creator>Mohammed Faizan Farooqui</creator>
        
        <creator>Mohammad Muqeem</creator>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Jabeen Nazeer</creator>
        
        <creator>Hikmat A. M. Abdeljaber</creator>
        
        <subject>Web crawler; incremental crawling; fuzzy logic-based system; fuzzy logic toolbox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Search engines are the instruments for website navigation and search, because the Internet is big and has expanded greatly. By continuously downloading web pages for processing, search engines provide search facilities and maintain indices for web documents. Web crawling is the term for this process of downloading web pages. This paper proposes a solution to the network traffic problem in migrating parallel web crawlers. The primary benefit of a parallel web crawler is that it performs local analysis at the data&#39;s residence rather than inside the web search engine repository. As a result, network load and traffic are greatly reduced, which enhances the performance, efficacy, and efficiency of the crawling process. Another benefit of migrating to a parallel crawler is that, as the web grows, it becomes important to parallelize crawling operations in order to retrieve web pages more quickly. A web crawler should produce pages of excellent quality. When the crawling process moves to a host or server with a specific domain, it begins downloading pages from that domain. Incremental crawling maintains the quality of downloaded pages and keeps the pages in the local database updated. Java is used to implement the crawler. The implemented model supports all aspects of a three-tier, real-time architecture. This paper presents an implementation of migrating parallel web crawlers. The method for efficient parallel web migration detects changes in page content and structure using neural network-based change detection techniques. This produces high-quality pages, and change detection ensures that new pages are always downloaded. The crawling process is carried out using one of the following strategies: either crawlers are given generous permission to communicate with one another, or they are not permitted to communicate with one another at all. Both strategies increase network traffic. Here, a fuzzy logic-based system that predicts the load at a specific node and the path of network traffic is presented and implemented in MATLAB using the fuzzy logic toolbox.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_52-A_Fuzzy_Logic_based_Solution_for_Network_Traffic_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graphical User Interfaces Generation from BPMN (Business Process Model and Notation) via IFML (Interaction Flow Modeling Language) up to PSM (Platform Specific Model) Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140251</link>
        <id>10.14569/IJACSA.2023.0140251</id>
        <doi>10.14569/IJACSA.2023.0140251</doi>
        <lastModDate>2023-02-28T10:12:19.1570000+00:00</lastModDate>
        
        <creator>Abir Sajji</creator>
        
        <creator>Yassine Rhazali</creator>
        
        <creator>Youssef Hadi</creator>
        
        <subject>MDA (Model Driven Architecture); CIM (Computation Independent Model); PIM (Platform Independent Model); PSM (Platform Specific Model); Model transformations; Graphical User Interfaces; BPMN (Business Process Model and Notation); IFML (Interaction Flow Modeling Language); Webratio tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The fundamental concept behind the MDA (Model Driven Architecture) approach is the development of several models: first the Computation Independent Model (CIM), then the Platform Independent Model (PIM), and lastly the Platform Specific Model (PSM) for the concrete implementation of the system. Web applications are just one example of customized software that is now being developed at an increasing rate. The Interaction Flow Modeling Language (IFML) was developed to represent the front end of any program that requires rich interaction with a user through an interface, regardless of the technical details of its implementation. There are various modeling tools for IFML; the Webratio tool is one example that facilitates the generation of an entire web application. This article discusses the model transformations in the MDA approach, starting from the CIM level up to the PSM level through the PIM level. First, we created the Business Process Model and Notation (BPMN) and IFML metamodels in the Eclipse tool, created the BPMN model, and obtained the IFML model by applying the transformation rules written in the Atlas Transformation Language (ATL). Finally, we generated the application using Webratio, a standard tool that implements IFML. CRUD (Create, Read, Update, and Delete) features for an after-sales service case study were provided to illustrate the conversion strategy from the CIM level via the PIM level to the PSM level.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_51-Graphical_user_Interfaces_Generation_from_BPMN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Automatic Speech-to-Text Transcription System: Amazigh Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140250</link>
        <id>10.14569/IJACSA.2023.0140250</id>
        <doi>10.14569/IJACSA.2023.0140250</doi>
        <lastModDate>2023-02-28T10:12:19.1570000+00:00</lastModDate>
        
        <creator>Ahmed Ouhnini</creator>
        
        <creator>Brahim Aksasse</creator>
        
        <creator>Mohammed Ouanan</creator>
        
        <subject>Speech recognition system; Amazigh language; analyzing formants and pitch; speech corpus; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Studies on the research and development of automatic speech recognition (ASR) technologies for several languages have not yet been published or thoroughly investigated. Moreover, the unique acoustic features of the Amazigh language, for example, Amazigh&#39;s consonant emphasis, pose many obstacles to the development of automatic speech recognition systems. In this study, we examine voice recognition for the Amazigh language. We treat the problem by focusing on transitions in vowel and consonant sounds and on the formant frequencies of phonemes. We present a hybrid strategy for phoneme separation based on energy differences, which includes analysis of consonant and vowel features and identification methods based on formant analysis.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_50-Towards_an_Automatic_Speech_to_Text_Transcription_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Attention-Based Models for Image Captioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140249</link>
        <id>10.14569/IJACSA.2023.0140249</id>
        <doi>10.14569/IJACSA.2023.0140249</doi>
        <lastModDate>2023-02-28T10:12:19.1430000+00:00</lastModDate>
        
        <creator>Asmaa A. E. Osman</creator>
        
        <creator>Mohamed A. Wahby Shalaby</creator>
        
        <creator>Mona M. Soliman</creator>
        
        <creator>Khaled M. Elsayed</creator>
        
        <subject>Image captioning; attention model; deep learning; computer vision; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The image captioning task is widely used in many real-world applications. The captioning task is concerned with understanding the image using computer vision methods; natural language processing methods are then used to produce a description of the image. Different approaches have been proposed to solve this task, and deep learning attention-based models have been proven to be the state-of-the-art. A survey of attention-based models for image captioning is presented in this paper, including new categories that were not covered in other survey papers. The attention-based approaches are classified into four main categories, which are further divided into subcategories. All categories and subcategories of the attention-based approaches are discussed in detail. Furthermore, the state-of-the-art approaches are compared and the accuracy improvements are stated, especially for the transformer-based models, and a summary of the benchmark datasets and the main performance metrics is presented.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_49-A_Survey_on_Attention_Based_Models_for_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Pretrained Deep Learning Features for the Breast Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140248</link>
        <id>10.14569/IJACSA.2023.0140248</id>
        <doi>10.14569/IJACSA.2023.0140248</doi>
        <lastModDate>2023-02-28T10:12:19.1270000+00:00</lastModDate>
        
        <creator>Abeer S. Alsheddi</creator>
        
        <subject>Feature extraction; CNN models; Pretrained models; breast cancer classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Breast cancer is a common and fatal disease among women worldwide. Accurate and early diagnosis of breast cancer plays a pivotal role in improving the prognosis of patients. Recently, advanced techniques from artificial intelligence and natural image classification have been applied to the breast cancer image classification task, which has become a hot research topic in machine learning. This paper proposes a fully automatic computerized method for breast cancer classification using two well-established pretrained CNN models, namely VGG16 and ResNet50. The feature extraction process then extracts features in a hierarchical manner to train a support vector machine classifier. Evaluation of the proposed model shows that it achieves 92% accuracy. In addition, this paper investigates the effect of different factors, highlights its findings, and provides future directions for research to develop more advanced models.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_48-Hierarchical_Pretrained_Deep_Learning_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of the Healthcare Administration of Senior Citizens from Survey Data using Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140247</link>
        <id>10.14569/IJACSA.2023.0140247</id>
        <doi>10.14569/IJACSA.2023.0140247</doi>
        <lastModDate>2023-02-28T10:12:19.1270000+00:00</lastModDate>
        
        <creator>Ramona Michelle M. Magtangob</creator>
        
        <creator>Thelma D. Palaoag</creator>
        
        <subject>Sentiment analysis; opinion mining; senior citizen; healthcare services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Healthcare is most frequently used by older people, and understanding how they feel about the attention and support healthcare administration gives them is crucial to building a healthcare system that effectively meets their needs. This study determined seniors&#39; opinions on healthcare administration by employing SurveyMonkey, a robust online survey tool, as an opinion miner. The study used the Orange application, which made data processing simple, to gauge seniors&#39; opinions toward healthcare administration by analyzing text sentiment with VADER Sentiment Analysis, which can distinguish between the polarity of positive, negative, or neutral emotions as well as their intensity. Results showed that the majority of seniors (51.1%) had a negative response to healthcare administration, whereas 47.9% had a neutral response and 1.0% had a positive response. Based on the study, the government should enhance its healthcare services for senior citizens to better satisfy their demands and ensure their happiness. This is clear from the respondents&#39; feedback regarding the services they would like to utilize and how they believe these may be improved. Additionally, the findings provided sufficient information for future consideration to enhance seniors&#39; satisfaction with developmental activities and programs and to improve healthcare administration.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_47-Assessment_of_the_Healthcare_Administration_of_Senior_Citizens.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Long Short-Term Memory for Non-Factoid Answer Selection in Indonesian Question Answering System for Health Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140246</link>
        <id>10.14569/IJACSA.2023.0140246</id>
        <doi>10.14569/IJACSA.2023.0140246</doi>
        <lastModDate>2023-02-28T10:12:19.1100000+00:00</lastModDate>
        
        <creator>Retno Kusumaningrum</creator>
        
        <creator>Alfi F. Hanifah</creator>
        
        <creator>Khadijah Khadijah</creator>
        
        <creator>Sukmawati N. Endah</creator>
        
        <creator>Priyo S. Sasongko</creator>
        
        <subject>Answer selection; health information; long short-term memory; LSTM; question answering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Providing reliable health information to a community can help raise awareness of the dangers of diseases, their causes, methods of prevention, and treatment. Indonesians face various health problems partly due to the lack of health information; hence, the community needs media that can effectively provide reliable health information, namely a question answering (QA) system. The frequently asked questions are non-factoid questions. The development of answer selection based on the classical approach requires distinctive engineered features, linguistic tools, or external resources. It can instead be solved using a deep learning approach such as Convolutional Neural Networks (CNN). However, this model cannot capture the sequence of words in questions and answers. Therefore, this study aims to implement a long short-term memory (LSTM) model to effectively exploit long-range sequential context information for the answer selection task. In addition, this study analyses various hyper-parameters of Word2Vec and LSTM, such as the dimension, context window, dropout, hidden units, learning rate, and margin; the corresponding values that yield the best mean reciprocal rank (MRR) and mean average precision (MAP) are found to be 300, 15, 0.25, 100, 0.01, and 0.1, respectively. The best model yields MAP and MRR values of 82.05% and 91.58%, respectively. These results represent increases in MAP and MRR of 18.68% and 46.11%, respectively, compared with CNN as the baseline model.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_46-Long_Short_Term_Memory_for_Non_Factoid_Answer_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Image for CNN-based Diagnostic of Pediatric Pneumonia through Chest Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140245</link>
        <id>10.14569/IJACSA.2023.0140245</id>
        <doi>10.14569/IJACSA.2023.0140245</doi>
        <lastModDate>2023-02-28T10:12:19.0970000+00:00</lastModDate>
        
        <creator>Vaishali Arya</creator>
        
        <creator>Tapas Kumar</creator>
        
        <subject>Contrast enhancement; convolution neural network; pediatric pneumonia; radiography; white balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In underdeveloped nations, severe lower respiratory infections are among the principal causes of infant mortality. The best treatments and early diagnosis are now being used to alleviate this issue. In developing nations, better treatment and prevention approaches are still required. Clinical, microbial, and radiographic clinical studies have a broad range of applicability within and across populations, and much depends on the knowledge and resources accessible in different situations. The most appropriate procedure is a chest radiograph (CXR), although pediatric chest X-ray techniques using machine intelligence are uncommon. A robust system is required to diagnose pediatric pneumonia. The authors provide a computer-aided diagnosis plan for chest X-ray scans to address this. This investigation provides deep learning-based intelligent healthcare that can reliably diagnose pediatric pneumonia. To improve the appearance of CXR images, the proposed technique also employs white balancing accompanied by contrast enhancement as a preliminary step. With an AUC of 99.1 on the testing dataset, the proposed approach outperformed other state-of-the-art approaches and produced impressive results. Additionally, the proposed approach correctly classified chest X-ray scans as normal or pediatric pneumonia with a classification accuracy of 98.4%.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_45-Enhancing_Image_for_CNN_based_Diagnostic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leaf Diseases Identification and Classification of Self-Collected Dataset on Groundnut Crop using Progressive Convolutional Neural Network (PGCNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140244</link>
        <id>10.14569/IJACSA.2023.0140244</id>
        <doi>10.14569/IJACSA.2023.0140244</doi>
        <lastModDate>2023-02-28T10:12:19.0800000+00:00</lastModDate>
        
        <creator>Anna Anbumozhi</creator>
        
        <creator>Shanthini A</creator>
        
        <subject>Leaf Diseases Identification (LDI); Progressive Groundnut Convolutional Neural Networks (PGCNN); Self-Collected Dataset; AlexNet; VGG Models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>A healthy crop is required to provide high-quality food for daily consumption. Crop leaf diseases strongly influence agronomic production. Earlier, many scholars relied on traditional techniques to detect and classify leaf diseases. Furthermore, classification at an early stage is impossible when there are not enough experts and research facilities are inadequate. As technology progresses into our day-to-day life, Deep Learning (DL) models, a subset of Artificial Intelligence, play a vital role in the automatic identification of groundnut leaf diseases. Controlling the spread of these diseases is essential for the healthy development of groundnut farming. Deep Learning can resolve the issues of finding leaf diseases early and effectively. Most researchers concentrate on detecting leaf diseases using Machine Learning (ML) approaches, which leads to low accuracy and high loss. To achieve better accuracy and lower loss in a DL model by identifying the leaf diseases of groundnut crops at an early stage, we propose the Progressive Groundnut Convolutional Neural Network (PGCNN) model. This paper mainly focuses on identifying and classifying groundnut leaf diseases with a self-collected dataset gathered under various climatic conditions around villages near Pudukkottai district, Tamil Nadu, India. The common diseases occurring in those areas were gathered, namely early spot, late spot, rust, and rosette. Performance metrics analysis was done to evaluate the model, which was also compared with various DL architectures such as AlexNet, VGG11, VGG13, VGG16, and VGG19. The proposed model achieved a training accuracy of 99.39% and a validation accuracy of 97.58%, with an overall accuracy of 97.58%.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_44-Leaf_Diseases_Identification_and_Classification_of_Self_Collected_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fully Immersive Virtual Reality Cycling Training (vProCycle) and its Findings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140242</link>
        <id>10.14569/IJACSA.2023.0140242</id>
        <doi>10.14569/IJACSA.2023.0140242</doi>
        <lastModDate>2023-02-28T10:12:19.0630000+00:00</lastModDate>
        
        <creator>Imran Bin Mahalil</creator>
        
        <creator>Azmi Bin Mohd Yusof</creator>
        
        <creator>Nazrita Binti Ibrahim</creator>
        
        <creator>Eze Manzura Binti Mohd Mahidin</creator>
        
        <creator>Ng Hui Hwa</creator>
        
        <subject>Virtual Reality (VR); presence level; technology acceptance; cycling performance; VR cycling training; vProCycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Virtual reality (VR) technology is popularly applied in various sports training activities such as cycling, rowing, soccer, tennis, and many more. In VR cycling, however, cyclists are not able to fully immerse themselves during training due to the hardware and application limitations of the setup. To be fully immersed during training, cyclists need effects similar to outdoor training, where they experience cycling resistance, temperature, altitude, visual, and audio effects. For this reason, dedicated stimulus effectors or hardware are required to create these expected effects. For cycling resistance, a realistic cycling experience can be simulated using a special device that applies resistance to the back wheel when cycling uphill in the VR simulation. In addition, the back-wheel resistance needs to match the view displayed while pedaling on an elevated slope. For higher immersion, the effect of temperature must also be created to match the view visible in the display. For example, while on top of a virtual mountain, the cyclist would want to feel the effects of high altitude and low temperature. These stimulus effectors affect the realism of the experience while cycling in the VR simulation training. In the authors’ previous papers, a setup combining stimulus effectors including uphill elevation climb, altitude, temperature, interaction, visual, and audio was integrated into a product called vProCycle. The study of vProCycle was conducted with the assumption that virtual reality can enhance the experience of physical cycling training. The objective of this study is to determine whether or not vProCycle may improve cyclists’ performance. This paper discusses in detail the findings from data gathered during the experiment using vProCycle. More specifically, the findings focus on speed and heart rate (beats per minute), which determine performance improvement.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_42-A_Fully_Immersive_Virtual_Reality_Cycling_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>First Responders Space Subdivision Framework for Indoor Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140243</link>
        <id>10.14569/IJACSA.2023.0140243</id>
        <doi>10.14569/IJACSA.2023.0140243</doi>
        <lastModDate>2023-02-28T10:12:19.0630000+00:00</lastModDate>
        
        <creator>Asep Id Hadiana</creator>
        
        <creator>Safiza Suhana Kamal Baharin</creator>
        
        <creator>Zahriah Othman</creator>
        
        <subject>Space subdivision; indoor navigation; first responders; indoor disaster</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Indoor navigation is crucial, particularly during indoor disasters such as fires. However, current spatial subdivision models struggle to adapt to the dynamic changes that occur in such situations, making it difficult to identify the appropriate navigation space, and thus reducing the accuracy and efficiency of indoor navigation. This study presents a new framework for indoor navigation that is specifically designed for first responders, with a focus on improving their response time and safety during rescue operations in buildings. The framework is an extension of previous research and incorporates the combustibility factor as a critical variable to consider during fire disasters, along with definitions of safe and unsafe areas for first responders. An algorithm was developed to accommodate the framework and was evaluated using Pyrosim and Pathfinder software. The framework calculates walking speed factors that affect the path and walking speed of first responders, enhancing their chances of successful evacuation. The framework captures dynamic changes, such as smoke levels, that may impact the navigation path and walking speed of first responders, which were not accounted for in previous studies. The experimental results demonstrate that the framework can identify suitable navigation paths and safe areas for first responders, leading to successful evacuation in as little as 148 to 239 seconds. The proposed framework represents a significant improvement over previous studies and has the potential to enhance the safety and effectiveness of first responders during emergency situations.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_43-First_Responders_Space_Subdivision_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>R-Diffset vs. IR-Diffset: Comparison Analysis in Dense and Sparse Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140241</link>
        <id>10.14569/IJACSA.2023.0140241</id>
        <doi>10.14569/IJACSA.2023.0140241</doi>
        <lastModDate>2023-02-28T10:12:19.0500000+00:00</lastModDate>
        
        <creator>Julaily Aida Jusoh</creator>
        
        <creator>Sharifah Zulaikha Tengku Hassan</creator>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Syarilla Iryani Ahmad Saany</creator>
        
        <creator>Mohd Khalid Awang</creator>
        
        <creator>Norlina Udin @ Kamaruddin</creator>
        
        <subject>R-Diffset; IR-Diffset; dense data; sparse data; comparison analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The mining of concealed information from databases using Association Rule Mining (ARM) seems promising. The successful extraction of this information will benefit many areas by aiding the process of finding solutions, economic projecting, commercialization policies, medical inspections, and a number of other problems. ARM is the most outstanding method for mining remarkable related configurations from any group of information. The important patterns encountered are categorized as recurrent/frequent and non-recurrent/infrequent. Most previous data mining methods concentrated on horizontal data formats. Nevertheless, recent studies have shown that vertical data formats are becoming the main concern. One example of a vertical data format is Rare Equivalence Class Transformation (R-Eclat). Due to its efficacy, R-Eclat algorithms have been commonly applied for the processing of large datasets. The R-Eclat algorithm comprises four variants. However, our work focuses only on the R-Diffset variant and Incremental R-Diffset (IR-Diffset). The performance of the R-Diffset and IR-Diffset algorithms in the mining of sparse and dense data is compared. The processing time of the R-Diffset algorithm, especially for sequential processing, is very long. Thus, Incremental R-Diffset (IR-Diffset) has been established to solve this problem. While R-Diffset may only process the non-recurrent itemset mining process in sequential form, IR-Diffset has the capability to execute sequential data that have been fractionated. These advantages make the newly developed IR-Diffset a potential candidate for providing a time-efficient data mining process, especially for large sets of data.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_41-R_Diffset_vs_IR_Diffset_Comparison_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Optimized Classification Model of Chronic Kidney Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140239</link>
        <id>10.14569/IJACSA.2023.0140239</id>
        <doi>10.14569/IJACSA.2023.0140239</doi>
        <lastModDate>2023-02-28T10:12:19.0330000+00:00</lastModDate>
        
        <creator>Shahinda Elkholy</creator>
        
        <creator>Amira Rezk</creator>
        
        <creator>Ahmed Abo El Fetoh Saleh</creator>
        
        <subject>Machine learning (ML); feature selection (FS); chronic kidney disease (CKD); deep belief network (DBN); grasshopper&#39;s optimization algorithm (GOA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Chronic kidney disease (CKD) is one of the leading causes of death across the globe, affecting about 10% of the world&#39;s adult population. Kidney disease affects the proper function of the kidneys. As the number of people with CKD rises, it is becoming increasingly important to have accurate methods for detecting it at an early stage. Developing a mechanism for detecting chronic kidney disease is the study&#39;s main contribution to knowledge. In this study, preventive interventions for CKD are explored using machine learning (ML) techniques. An optimized deep belief network (DBN) classifier based on the Grasshopper&#39;s Optimization Algorithm (GOA) with a prior Density-based Feature Selection (DFS) algorithm for chronic kidney disease is described in this study, called &quot;DFS-ODBN.&quot; Prior to the DBN classifier, whose parameters are optimized using GOA, the proposed method eliminates redundant or irrelevant dimensions using DFS. The proposed DFS-ODBN framework consists of three phases: preprocessing, feature selection, and classification. The suggested approach is tested using CKD datasets, and the performance is evaluated using several assessment metrics. The optimized DBN achieves its maximum performance in terms of sensitivity, accuracy, and specificity; the proposed DFS-ODBN demonstrated an accuracy of 99.75 percent using fewer features compared with other techniques.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_39-Enhanced_Optimized_Classification_Model_of_Chronic_Kidney_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Categorization of Research Papers with MONO Supervised Term Weighting in RECApp</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140240</link>
        <id>10.14569/IJACSA.2023.0140240</id>
        <doi>10.14569/IJACSA.2023.0140240</doi>
        <lastModDate>2023-02-28T10:12:19.0330000+00:00</lastModDate>
        
        <creator>Ivic Jan A. Biol</creator>
        
        <creator>Rhey Marc A. Depositario</creator>
        
        <creator>Glenn Geo T. Noangay</creator>
        
        <creator>Julian Michael F. Melchor</creator>
        
        <creator>Cristopher C. Abalorio</creator>
        
        <creator>James Cloyd M. Bustillo</creator>
        
        <subject>Text classification; supervised term weighting schemes; optical character recognition; machine learning algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Natural Language Processing, specifically text classification or text categorization, has become a trend in computer science. Commonly, text classification is used to categorize large amounts of data so that less time is needed to retrieve information. Students, as well as research advisers and panelists, take extra effort and time in classifying research documents. To solve this problem, the researchers used state-of-the-art supervised term weighting schemes, namely TF-MONO and SQRTF-MONO, and applied them to machine learning algorithms: K-Nearest Neighbor, Linear Support Vector, and Naive Bayes classifiers, creating a total of six classifier models to ascertain which of them performs optimally in classifying research documents while utilizing Optical Character Recognition for text extraction. The results showed that among all classification models trained, SQRTF-MONO with Linear SVC outperformed all other models, with an F1 score of 0.94 on both the abstract and the background-of-the-study datasets. In conclusion, the developed classification model and application prototype can be a tool to help researchers, advisers, and panelists lessen the time spent classifying research documents.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_40-Automated_Categorization_of_Research_Papers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Deep Learning Algorithms to Diagnose Coronavirus Disease (COVID-19)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140238</link>
        <id>10.14569/IJACSA.2023.0140238</id>
        <doi>10.14569/IJACSA.2023.0140238</doi>
        <lastModDate>2023-02-28T10:12:19.0170000+00:00</lastModDate>
        
        <creator>Nfayel Alanazi</creator>
        
        <creator>Yasser Kotb</creator>
        
        <subject>Component; COVID-19; transfer learning; deep learning; ResNet50; VGG16; GoogleNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>With the rapid development in the areas of Machine Learning (ML) and Deep Learning, it is important to exploit these tools to help mitigate the effects of the coronavirus pandemic. Early diagnosis of the presence of this virus in the human body can be crucially helpful to healthcare professionals. In this paper, three well-known Convolutional Neural Network deep learning algorithms (VGGNet 16, GoogleNet, and ResNet50) are applied to measure their ability to distinguish COVID-19 patients from other patients and to evaluate which performs best on a large dataset. Two stages are conducted: the first with 14994 X-ray images and the second with 33178. Each model is applied with batch sizes of 16, 32, and 64 in each stage to measure the impact of data size and batch size on accuracy. The second stage achieved better accuracy than the first, and the batch size of 64 gave better results than 16 and 32. ResNet50 achieves a high accuracy of 99.31, GoogleNet achieves 95.55, while VGG16 achieves 96.5. Ultimately, these results help expedite the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment and resulting in improved clinical outcomes.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_38-Using_Deep_Learning_Algorithms_to_Diagnose_Coronavirus_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of QoS over IEEE 802.11 Wireless Network in the Implementation of Internet Protocols Mobility Supporting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140237</link>
        <id>10.14569/IJACSA.2023.0140237</id>
        <doi>10.14569/IJACSA.2023.0140237</doi>
        <lastModDate>2023-02-28T10:12:19.0030000+00:00</lastModDate>
        
        <creator>Narimane Elhilali</creator>
        
        <creator>Mostapha Badri</creator>
        
        <creator>Mouncef Filali Bouami</creator>
        
        <subject>QoS; MIPv4; MIPv6; handover; mobility; priority</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Nowadays, the internet is an essential part of our digital lives. With the growing number of users, the ultimate goal is to enable all users to stay connected to the internet at any time and anywhere, regardless of their mobility. Any delay or jitter in the system can cause a deterioration in the performance of multimedia services, such as video streaming, or cause websites to load only partially. The current Internet Protocol version 4 (IPv4) cannot handle all IP addressing requirements, while the next-generation Internet Protocol version 6 (IPv6) has been developed to solve some of these problems by improving quality of service and providing many other features. The primary contribution of this paper is to evaluate Quality of Service (QoS) functionality, including end-to-end delay, throughput, jitter, and packet loss, in WLAN mobility environments for MIPv4 and MIPv6 using the OMNeT++ simulator.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_37-Evaluation_of_QoS_over_IEEE_802_11_Wireless_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Psychological Disorders by Feature Ranking and Fusion using Gradient Boosting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140235</link>
        <id>10.14569/IJACSA.2023.0140235</id>
        <doi>10.14569/IJACSA.2023.0140235</doi>
        <lastModDate>2023-02-28T10:12:18.9870000+00:00</lastModDate>
        
        <creator>Saba Tahseen</creator>
        
        <creator>Ajit Danti</creator>
        
        <subject>Electroencephalograph (EEG); psychological disorders; negative state emotions; gridSearchCV; gradient boosting classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Negative emotional regulation is a defining element of psychological disorders. Our goal was to create a machine-learning model to classify psychological disorders based on negative emotions. We used an EEG brainwave dataset displaying positive, negative, and neutral emotions; however, negative emotions in particular bear on psychological health. In this paper, the research focused solely on negative emotional state characteristics, for which a divide-and-conquer approach has been applied to the feature extraction process. Features are grouped into four equal subsets, and feature selection is done for each subset by a feature ranking approach based on feature importance determined by the Random Forest-Recursive Feature Elimination with Cross-validation (RF-RFECV) method. After feature ranking, the fusion of the feature subsets is employed to obtain a new potential dataset. 10-fold cross-validation is performed with a grid search over a set of predetermined model parameters that are important to achieving the greatest possible accuracy. Experimental results demonstrated that the proposed model achieved 97.71% accuracy in predicting psychological disorders.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_35-Classification_of_Psychological_Disorders_by_Feature_Ranking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Public Response to the Legalization of The Criminal Code Bill with Twitter Data Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140236</link>
        <id>10.14569/IJACSA.2023.0140236</id>
        <doi>10.14569/IJACSA.2023.0140236</doi>
        <lastModDate>2023-02-28T10:12:18.9870000+00:00</lastModDate>
        
        <creator>Deny Irawan</creator>
        
        <creator>Dana Indra Sensuse</creator>
        
        <creator>Prasetyo Adi Wibowo Putro</creator>
        
        <creator>Aji Prasetyo</creator>
        
        <subject>Sentiment analysis; RKUHP; support vector Machine (SVM); Na&#239;ve Bayes; classification and regression tree (CART)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The Criminal Code Bill, also known as Rancangan Kitab Undang-undang Hukum Pidana (RKUHP), passed in the House of Representatives (DPR) on December 6, 2022, is being debated because several issues need to be fixed. Therefore, research was conducted to determine the public&#39;s reaction to the ratification of the Criminal Code Bill by analyzing Twitter data. This study aims to obtain a general response to the legalized RKUHP. We use sentiment analysis, a text-processing method, to get data from the public. To do this, we used N-grams (unigrams, bigrams, and trigrams) along with three algorithms: Na&#239;ve Bayes, Classification and Regression Tree (CART), and Support Vector Machine (SVM). The result of sentiment analysis found that 51% of tweets were positive about the ratification of the RKUHP, and 49% were negative. In addition, it was also found that SVM has the best accuracy compared to other algorithms, with an accuracy value of 0.81 on the unigram combination.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_36-Public_Response_to_the_Legalization_of_The_Criminal_Code_Bill.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Analysis of MRI Images for Brain Tumor Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140234</link>
        <id>10.14569/IJACSA.2023.0140234</id>
        <doi>10.14569/IJACSA.2023.0140234</doi>
        <lastModDate>2023-02-28T10:12:18.9700000+00:00</lastModDate>
        
        <creator>Srinivasarao Gajula</creator>
        
        <creator>V. Rajesh</creator>
        
        <subject>Convolutional neural networks (CNN); magnetic resonance imaging (MRI); principal components analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Identification and examination of brain tumours are critical components of any indication system, as evidenced by extensive research and methodological advancement over the years. As part of this approach, an efficient automated system must be put in place to enhance the rate of tumour identification. Today, manually examining thousands of MRI images to locate a brain tumour is arduous and imprecise, and it may impair patient care. Since it incorporates several picture datasets, it can be time-consuming. Tumour cells present in the brain look much like healthy tissue, making it hard to distinguish between the two during segmentation. In this study, we present an approach for the classification and prediction of MRI images of the brain using a convolutional neural network, conventional classifiers, and deep learning. We propose a new method for the automatic and exact categorization of brain tumours utilizing a two-stage feature composition of deep convolutional neural networks (CNNs). We used a deep learning approach to categorize MRI scans into several pathologies, including gliomas, meningiomas, benign lesions, and pituitary tumours, after first extracting characteristics from the scans. Additionally, the most accurate classifier is selected from a pool of five possible classifiers. Principal components analysis (PCA) is used to identify the most important characteristics from the retrieved features, which are then used to train the classifier. We develop our proposed model in Python, utilizing TensorFlow and Keras, since it is an effective language for programming and performing work quickly. In our work, the CNN achieved a 98.6% accuracy rate, which is better than what has been done so far.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_34-Deep_Learning_based_Analysis_of_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Strategy for Inter-Service Communication in Microservices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140233</link>
        <id>10.14569/IJACSA.2023.0140233</id>
        <doi>10.14569/IJACSA.2023.0140233</doi>
        <lastModDate>2023-02-28T10:12:18.9530000+00:00</lastModDate>
        
        <creator>Sidath Weerasinghe</creator>
        
        <creator>Indika Perera</creator>
        
        <subject>Microservices; software architecture; inter-service communication; performance; streams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In the last decade, many enterprises have moved their software deployments to the cloud. As a result of this transition, cloud providers stepped ahead and introduced various new technologies in their offerings. Enterprises cannot gain the expected advantages of cloud-based solutions merely by transferring monolithic-architecture software to the cloud, since the cloud is natively designed for lightweight artifacts. Nowadays, end-user requirements change rapidly, and software should accommodate them accordingly. With a monolithic architecture, however, meeting such changing requirements in terms of extensibility, scalability, and modern software quality attributes is quite challenging. The software industry introduced the microservice architecture to overcome these challenges, and most backend systems are now designed using this architectural pattern. Microservices are designed as small services deployed in a distributed environment. The main drawback of this architecture is the additional latency introduced by inter-service communication in the distributed environment. In this research, we developed a solution to reduce inter-service communication latency and enhance overall application performance in terms of throughput and response time. The developed solution uses an asynchronous communication pattern based on the Redis Stream data structure to enable pub-sub communication between the services. This solution demonstrates that even a straightforward implementation can enhance overall application performance.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_33-Optimized_Strategy_for_Inter_Service_Communication_in_Microservices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tamper Proof Air Quality Management System using Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140232</link>
        <id>10.14569/IJACSA.2023.0140232</id>
        <doi>10.14569/IJACSA.2023.0140232</doi>
        <lastModDate>2023-02-28T10:12:18.9530000+00:00</lastModDate>
        
        <creator>Vaneeta M</creator>
        
        <creator>Deepa S R</creator>
        
        <creator>Sangeetha V</creator>
        
        <creator>Kamalakshi Naganna</creator>
        
        <creator>Kruthika S Vasisht</creator>
        
        <creator>Ashwini J</creator>
        
        <creator>Nikitha M</creator>
        
        <creator>Srividya H. R</creator>
        
        <subject>Air pollution; air quality index; machine learning; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>One of the most important concerns facing urban regions across the world is air pollution. As a result, it&#39;s critical to monitor pollution levels and inform the public about the state of the air. The Air Quality Index (AQI) does this by mapping the concentrations of different contaminants into a single number. Because the examination of pollutant data is frequently opaque to outsiders, poor environmental control judgments may result. This has led to a need for a tamper-proof pollution management system for use by authorities such as the state and central pollution boards. To address these challenges, a model is proposed that uses machine learning algorithms to predict air quality and stores that information in a blockchain. Machine learning algorithms categorize the air quality, while blockchain technology guarantees a permanent, tamper-proof record of all air quality data. Such a system can address the persistent issues of data dependability, immutability, and trust in pollution control. The execution times of the two main blockchain operations were measured: the put operation executes in 10 ms, and the get operation, which fetches data from the blockchain ledger, executes in 1 ms.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_32-Tamper_Proof_Air_Quality_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Automatic Garbage Detection Framework Designing using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140231</link>
        <id>10.14569/IJACSA.2023.0140231</id>
        <doi>10.14569/IJACSA.2023.0140231</doi>
        <lastModDate>2023-02-28T10:12:18.9400000+00:00</lastModDate>
        
        <creator>Akhilesh Kumar Sharma</creator>
        
        <creator>Antima Jain</creator>
        
        <creator>Deevesh Chaudhary</creator>
        
        <creator>Shamik Tiwari</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Zirawani Baharum</creator>
        
        <creator>Shazlyn Milleana Shaharudin</creator>
        
        <creator>Ruhaila Maskat</creator>
        
        <creator>Mohammad Syafwan Arshad</creator>
        
        <subject>Garbage detection; resnet; tensorflow; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>This paper proposes a system for the automatic detection of litter and garbage dumps in CCTV feeds using deep learning. The designed system, named Greenlock, scans for and identifies entities that resemble an accumulation of garbage or a garbage dump in real time and alerts the respective authorities to deal with the issue by locating the point of origin. An entity is labelled as garbage if it passes a certain similarity threshold. ResNet-50 was used for training, alongside TensorFlow for the mathematical operations of the neural network. Combined with a pre-existing CCTV surveillance system, this system can greatly reduce garbage management costs by preventing the formation of large dumps. Automatic detection also saves the manpower required for manual surveillance and contributes towards healthy neighborhoods and cleaner cities. The article also compares several algorithms, including standard TensorFlow models, Inception, Faster R-CNN, and ResNet-50, and observes that ResNet-50 achieved the best accuracy. The approach can ease the burden of garbage identification and dump management for any country; a comparison chart is presented at the end of the article.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_31-An_Approach_to_Automatic_Garbage_Detection_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Forensic Analysis and Information Visualization Approach for Instant Messaging Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140229</link>
        <id>10.14569/IJACSA.2023.0140229</id>
        <doi>10.14569/IJACSA.2023.0140229</doi>
        <lastModDate>2023-02-28T10:12:18.9230000+00:00</lastModDate>
        
        <creator>Shahnaz Pirzada</creator>
        
        <creator>Nurul Hidayah Ab Rahman</creator>
        
        <creator>Niken Dwi Wahyu Cahyani</creator>
        
        <creator>Muhammad Fakri Othman</creator>
        
        <subject>Forensic analysis; forensic visualization; instant messaging apps; mobile forensics; and mobile communication apps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Instant messaging (IM) applications, including WhatsApp, Viber, and WeChat, have moved beyond text messages to videos and voice calls, along with the sharing of media, files, and locations. In this study, we surveyed existing forensic visualization and forensic analysis techniques for instant messaging applications, with the aim of contributing to the knowledge in the discussion of these research issues. A total of 61 publications from the last five years were reviewed after searching various academic databases, including IEEE, the ACM Digital Library, Google Scholar, and Science Direct. Our observation of research trends indicates that both forensic analysis and information visualization are relatively mature research areas; however, there is growing interest in forensic visualization and automated IM forensic analysis. We also identified the lack of discussion of forensic selection criteria in existing forensic visualization works and the need to benchmark the evaluation methods of automated forensic analysis tools.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_29-A_Survey_of_Forensic_Analysis_and_Information_Visualization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Driving Maneuvers Recognition and Classification Using A Hybrid Pattern Matching and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140230</link>
        <id>10.14569/IJACSA.2023.0140230</id>
        <doi>10.14569/IJACSA.2023.0140230</doi>
        <lastModDate>2023-02-28T10:12:18.9230000+00:00</lastModDate>
        
        <creator>Munaf Salim Najim Al-Din</creator>
        
        <subject>Driving behavior classification; driving maneuvers; pattern matching; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Since most road and traffic accidents are related to human error or distraction, the study of irregular driving behaviors is considered one of the most important research topics in this field. To prevent road accidents and assess driving competencies, there is an urgent need to evaluate driving behavior through the design of a driving maneuver assessment system. In this study, the recognition and classification of highway driving maneuvers using smartphones’ built-in sensors are presented. The paper examines the performance of three classical machine learning techniques and a novel hybrid system. The proposed hybrid system combines the pattern matching Dynamic Time Warping (DTW) technique for recognizing driving maneuvers with machine learning techniques for classification. Results obtained from both approaches show that the performance of the hybrid system is superior to that of the classical machine learning techniques alone. This enhancement is due to the elimination of overlap among the target classes, achieved by separating the recognition and classification processes.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_30-Driving_Maneuvers_Recognition_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Paw Search – A Searching Approach for Unsorted Data Combining with Binary Search and Merge Sort Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140228</link>
        <id>10.14569/IJACSA.2023.0140228</id>
        <doi>10.14569/IJACSA.2023.0140228</doi>
        <lastModDate>2023-02-28T10:12:18.9070000+00:00</lastModDate>
        
        <creator>Md. Harun Or Rashid</creator>
        
        <creator>Ahmed Imtiaz</creator>
        
        <subject>Paw; search; unsorted; data; blocks; square root</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Searching is one of the oldest core mechanisms in nature, and search approaches have evolved gradually alongside it. Data mining is one of the most important industrial topics nowadays. In this area, social networks, governmental and non-governmental institutions, and e-commerce industries produce huge amounts of unsorted data that they need to utilize. Utilizing this unsorted data requires data-structure tools, such as searching algorithms, designed for the specific features of unsorted data. At present there are several searching algorithms, such as Binary Search, Linear Search, Jump Search, and Interpolation Search, most of which require sorted data. This paper presents the Paw Search algorithm, a new searching approach that works on unsorted data by merging several searching and sorting techniques. The algorithm begins by breaking the given unsorted array into several blocks, each of size equal to the square root of the array length. These blocks are then searched according to a specific formula until the target data is found or the blocks are exhausted; inside each block, Merge Sort and then Binary Search are applied. The time and space complexity of the Paw Search algorithm is comparatively optimal.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_28-Paw_Search_A_Searching_Approach_for_Unsorted_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Study of CRF Models for Speech understanding in Limited Task</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140227</link>
        <id>10.14569/IJACSA.2023.0140227</id>
        <doi>10.14569/IJACSA.2023.0140227</doi>
        <lastModDate>2023-02-28T10:12:18.8930000+00:00</lastModDate>
        
        <creator>Marwa Graja</creator>
        
        <subject>Speech understanding; Arabic dialect; CRF models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In this paper, we propose an in-depth evaluation of CRF (Conditional Random Fields) models for speech understanding in a limited task. To evaluate these models, we design several variants that differ in the level of integration of local dependencies within the same turn. We also evaluate these models on different types of processed data. Our study is performed on a corpus in which turns are not segmented into utterances; in fact, we use the whole turn as one unit during the training and testing of the CRF models, which reflects the natural flow of conversation. The language used in this work is the Tunisian Arabic dialect. The obtained results demonstrate the robustness of CRF models when dealing with raw data: they are able to detect semantic dependencies between words in the same speech turn. Results are strongest when the CRF models are designed to take into account words with deep dependencies in the same turn and are combined with advanced preprocessed data.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_27-Deep_Study_of_CRF_Models_for_Speech_Understanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of the Kernels of Support Vector Machine Algorithm for Diabetes Mellitus Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140226</link>
        <id>10.14569/IJACSA.2023.0140226</id>
        <doi>10.14569/IJACSA.2023.0140226</doi>
        <lastModDate>2023-02-28T10:12:18.8770000+00:00</lastModDate>
        
        <creator>Dimas Aryo Anggoro</creator>
        
        <creator>Dian Permatasari</creator>
        
        <subject>Diabetes mellitus; kernel; normalization; oversampling; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Diabetes Mellitus is a disease in which the body cannot use insulin properly, making it a major health problem in many countries. Diabetes Mellitus can be fatal, cause other diseases, and even lead to death, so prediction activities for detecting the disease are essential. The SVM algorithm is used to classify Diabetes Mellitus. This study aimed to compare the accuracy, precision, recall, and F1-score values of the SVM algorithm with various kernels and data preprocessing. Data preprocessing included data splitting, normalization, and data oversampling. This research can help address health problems related to the prevalence of Diabetes Mellitus and serve as a source of accurate information. The results show that the polynomial kernel obtained the highest accuracy (80%) and the highest precision (65%), while the RBF kernel obtained the highest recall (79%) and the highest F1-score (70%).</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_26-Performance_Comparison_of_the_Kernels_of_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Landmark Recognition Model for Smart Tourism using Lightweight Deep Learning and Linear Discriminant Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140225</link>
        <id>10.14569/IJACSA.2023.0140225</id>
        <doi>10.14569/IJACSA.2023.0140225</doi>
        <lastModDate>2023-02-28T10:12:18.8770000+00:00</lastModDate>
        
        <creator>Mohd Norhisham Razali</creator>
        
        <creator>Enurt Owens Nixon Tony</creator>
        
        <creator>Ag Asri Ag Ibrahim</creator>
        
        <creator>Rozita Hanapi</creator>
        
        <creator>Zamhar Iswandono</creator>
        
        <subject>Scene recognition; convolutional neural network; smart tourism; feature selections</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>A scene recognition algorithm is crucial for landmark recognition model development. The landmark recognition model is one of the main modules in the intelligent tour guide system architecture for the smart tourism industry. However, recognizing tourist landmarks in public places is challenging due to the common structure and complexity of scene objects such as buildings, monuments, and parks. Hence, this study proposes a super-lightweight and robust landmark recognition model using a combination of Convolutional Neural Network (CNN) and Linear Discriminant Analysis (LDA) approaches. The landmark recognition model was evaluated using several pretrained CNN architectures for feature extraction; several feature selection and machine learning algorithms were then evaluated to produce a super-lightweight and robust landmark recognition model. The evaluations were performed on the UMS landmark dataset and the Scene-15 dataset. The experiments found that EfficientNet (EFFNET) with a CNN classifier is the best feature extractor and classifier combination. EFFNET-CNN achieved 100% and 94.26% classification accuracy on the UMS-Scene and Scene-15 datasets, respectively. Moreover, the feature dimensions produced by EFFNET are more compact than the other features and were further reduced by more than 90% using Linear Discriminant Analysis (LDA) without jeopardizing classification performance; indeed, performance improved.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_25-Landmark_Recognition_Model_for_Smart_Tourism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explaining the Outputs of Convolutional Neural Network - Recurrent Neural Network (CNN-RNN) based Apparent Personality Detection Models using the Class Activation Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140224</link>
        <id>10.14569/IJACSA.2023.0140224</id>
        <doi>10.14569/IJACSA.2023.0140224</doi>
        <lastModDate>2023-02-28T10:12:18.8600000+00:00</lastModDate>
        
        <creator>WMKS Ilmini</creator>
        
        <creator>TGI Fernando</creator>
        
        <subject>Apparent personality detection (APD); convolutional neural network based recurrent neural network (CNN-RNN); class activation map (CAM); explainable AI (XAI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>This study aims to use the Class Activation Map (CAM) visualisation technique to understand the outputs of apparent personality detection models based on a combination of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). The ChaLearn Looking at People First Impression (CVPR&#39;17) dataset, consisting of short video clips labelled with the Big Five personality traits, is used for experimentation. Two deep learning models were designed to predict apparent personality, with VGG19 and ResNet152 as base models, and were trained on the raw frames extracted from the videos. The most accurate model from each architecture was chosen for feature visualisation, which was performed on the test portion of the CVPR&#39;17 dataset. To identify each feature&#39;s contribution to the network&#39;s output, the CAM XAI technique was applied to the test dataset to compute heatmaps. Next, the bitwise intersection between the heatmaps and the background-removed frames was measured to quantify how much the features from the human body (including facial and non-facial data) affected the network output. The findings revealed that roughly 35%-40% of the human data contributed to the output of both models. Additionally, after analysing the heatmaps with high-intensity pixels, the ResNet152 model was found to identify more human-related data than the VGG19 model, achieving scores of 46%-51%. The two models thus behave differently in identifying the key input features that influence their outputs.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_24-Explaining_the_Outputs_of_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Heart Disease Prediction Framework based on Ensemble Techniques in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140223</link>
        <id>10.14569/IJACSA.2023.0140223</id>
        <doi>10.14569/IJACSA.2023.0140223</doi>
        <lastModDate>2023-02-28T10:12:18.8470000+00:00</lastModDate>
        
        <creator>Deepali Yewale</creator>
        
        <creator>S. P. Vijayaragavan</creator>
        
        <creator>V. K. Bairagi</creator>
        
        <subject>Machine learning; heart disease; ensemble techniques; random over sampling; isolation forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The aim is to design a framework for the effective prediction of heart disease based on ensemble techniques, without the need for feature selection, incorporating data balancing and outlier detection and removal, with results that remain on par with cutting-edge research. In this study, the Cleveland dataset from the UCI repository, which has 303 instances, is used. The dataset comprises 76 raw attributes, but only 14 of them are listed by the UCI repository as significant risk factors for heart disease in the published open-source dataset. Data balancing strategies, such as random over sampling, are used to address the issue of unbalanced data. Additionally, an isolation forest is used to find outliers in the multivariate data, which has not been explored in previous research. After eliminating anomalies from the data, ensemble techniques such as bagging, boosting, voting, and stacking are employed to create the prediction model. The proposed model is assessed for accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, ROC-AUC, and model training time. For the Cleveland dataset, the performance of the suggested methodology is superior, with 98.73% accuracy, 98% sensitivity, 100% specificity, 100% PPV, 97% NPV, an F1 score of 1, and an AUC of 1, with comparatively very little training time. The results of this study demonstrate that our proposed approach significantly outperforms existing scholarly work in terms of accuracy and all the stated performance metrics; no earlier research has covered this many performance parameters.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_23-An_Effective_Heart_Disease_Prediction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Extraction of Indonesian Stopwords</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140221</link>
        <id>10.14569/IJACSA.2023.0140221</id>
        <doi>10.14569/IJACSA.2023.0140221</doi>
        <lastModDate>2023-02-28T10:12:18.8300000+00:00</lastModDate>
        
        <creator>Harry Tursulistyono Yani Achsan</creator>
        
        <creator>Heru Suhartanto</creator>
        
        <creator>Wahyu Catur Wibowo</creator>
        
        <creator>Deshinta A. Dewi</creator>
        
        <creator>Khairul Ismed</creator>
        
        <subject>Stopwords extraction; attributes reduction; TF-IDF; large corpus; Indonesian stopwords; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The rapid growth of Indonesian-language content on the Internet has drawn researchers’ attention. Using natural language processing, they can extract high-value information from such content and documents. However, processing large and numerous documents is very time-consuming and computationally expensive. Reducing these computational costs requires attribute reduction by removing common words, or stopwords. This research aims to extract stopwords automatically from a large corpus of about seven million words in the Indonesian language downloaded from the web. The problem is that Indonesian is a low-resource language, making it challenging to develop an automatic stopword extractor. The method used is Term Frequency – Inverse Document Frequency (TF-IDF); we present a methodology for ranking stopwords using TFs and IDFs that is applicable even to a small corpus (as small as one document). It is an automatic method that can be applied to many different languages with no prior linguistic knowledge required. This method makes two contributions: it can show all words found in all documents, and it has an automatic cut-off technique for selecting the top-ranked stopword candidates in the Indonesian language, overcoming one of the most challenging aspects of stopword extraction.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_21-Automatic_Extraction_of_Indonesian_Stopwords.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Effort Estimation through Ensembling of Base Models in Machine Learning using a Voting Estimator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140222</link>
        <id>10.14569/IJACSA.2023.0140222</id>
        <doi>10.14569/IJACSA.2023.0140222</doi>
        <lastModDate>2023-02-28T10:12:18.8300000+00:00</lastModDate>
        
        <creator>Beesetti Kiran Kumar</creator>
        
        <creator>Saurabh Bilgaiyan</creator>
        
        <creator>Bhabani Shankar Prasad Mishra</creator>
        
        <subject>Machine learning; software effort estimation; voting; regression; evolutionary algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>For a long time, researchers have been working to predict software development effort with the help of various machine learning algorithms. These algorithms are known for better capturing the underlying patterns in the data and improving the prediction rate compared with conventional approaches such as lines of code and function point analysis. According to the no-free-lunch theorem, no single algorithm gives better predictions on all datasets. To remove this bias, our work aims to provide a better model for software effort estimation and thereby reduce the distance between the actual and predicted effort for future projects. The authors propose an ensemble of regressor models using a voting estimator, reducing the error rate and overcoming the bias introduced by any single machine learning algorithm. The results show that the ensemble models performed better than the single models on different datasets.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_22-Software_Effort_Estimation_through_Ensembling_of_Base_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Feature Selection Algorithm and Ensemble Stacking for Heart Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140220</link>
        <id>10.14569/IJACSA.2023.0140220</id>
        <doi>10.14569/IJACSA.2023.0140220</doi>
        <lastModDate>2023-02-28T10:12:18.8130000+00:00</lastModDate>
        
        <creator>Nureen Afiqah Mohd Zaini</creator>
        
        <creator>Mohd Khalid Awang</creator>
        
        <subject>Heart disease prediction; feature selection; stacking; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In cardiology, as in other medical specialties, early and accurate diagnosis of heart disease is crucial, as it has been the leading cause of death over the past few decades. Early prediction of heart disease is now more crucial than ever. However, the state-of-the-art heart disease prediction strategies put more emphasis on classifier selection to enhance the accuracy and performance of heart disease prediction, and seldom consider feature reduction techniques. Furthermore, several factors lead to heart disease, and it is critical to identify the most significant characteristics in order to achieve the best prediction accuracy and increase prediction performance. Feature reduction reduces the dimensionality of the data, which may allow learning algorithms to work faster and more efficiently, producing predictive models with the best accuracy rate. In this study, we explored and proposed a hybrid of two distinct feature reduction techniques, chi-squared and analysis of variance (ANOVA). In addition, classification is performed on the selected features using the ensemble stacking method. Using the optimal features from the hybrid feature combination, a stacking ensemble based on logistic regression yields the best result, with 93.44% accuracy. In summary, feature selection can be considered an effective method for the prediction of heart disease.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_20-Hybrid_Feature_Selection_Algorithm_and_Ensemble_Stacking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Analysis and Monitoring of Photovoltaic Panel Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140219</link>
        <id>10.14569/IJACSA.2023.0140219</id>
        <doi>10.14569/IJACSA.2023.0140219</doi>
        <lastModDate>2023-02-28T10:12:18.8000000+00:00</lastModDate>
        
        <creator>Zaidan Didi</creator>
        
        <creator>Ikram El Azami</creator>
        
        <subject>Current sensor; bluetooth low consumption; photovoltaic panel; Esp32 microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In this article, we establish an Internet of Things (IoT) based technique to simultaneously monitor the main quantities that characterize a photovoltaic solar panel. This technique makes it possible to detect problems and anomalies during operation, and to collect the measured parameters and quantities for analysis. The method exploits the advantages of IoT technology, for which the Esp32 microcontroller is a good choice because both WiFi and Bluetooth modules are integrated. The design process began by creating a system to measure the intensity of the electric current delivered by the photovoltaic panel; a current sensor was implemented for this purpose. To prevent damage to the microcontroller, a voltage divider was used to decrease the voltage at the Esp32 pin level for measurement. Next, the power and energy values were calculated to estimate the production capacity. In the final stage, a Bluetooth Low Energy link was created to transmit the four quantities to a smartphone or other compatible device. Real-time values were presented as graphs on the free ThingSpeak platform and displayed both on an LCD screen and on the serial monitor of the Esp32 microcontroller. The system was tested without any problems or errors.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_19-Experimental_Analysis_and_Monitoring_of_Photovoltaic_Panel_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Music Note Feature Recognition Method based on Hilbert Space Method Fused with Partial Differential Equations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140217</link>
        <id>10.14569/IJACSA.2023.0140217</id>
        <doi>10.14569/IJACSA.2023.0140217</doi>
        <lastModDate>2023-02-28T10:12:18.7830000+00:00</lastModDate>
        
        <creator>Liqin Liu</creator>
        
        <subject>Partial differential equation; Hilbert space method; musical note feature recognition method; cepstral coefficients; empirical modal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The Hilbert space method is a classical mathematical model developed on the basis of linear algebra, with high theoretical value and practical applicability. Its basic idea is to use the existence of stable relationships between variables, and the dynamic dependence between them, to construct solutions of differential equations, thus transforming mathematical problems into algebraic ones. This paper first studies a denoising model for music note feature recognition based on partial differential equations, analyzes the corresponding denoising method, and gives an algorithm for music note feature recognition fused in Hilbert space. Second, it studies commonly used music note feature recognition methods, including linear predictive cepstral coefficients, Mel frequency cepstral coefficients, wavelet transform-based feature extraction, and Hilbert space-based feature extraction, and gives their corresponding feature extraction processes.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_17-Music_Note_Feature_Recognition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperparameter Optimization of Support Vector Regression Algorithm using Metaheuristic Algorithm for Student Performance Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140218</link>
        <id>10.14569/IJACSA.2023.0140218</id>
        <doi>10.14569/IJACSA.2023.0140218</doi>
        <lastModDate>2023-02-28T10:12:18.7830000+00:00</lastModDate>
        
        <creator>M. Riki Apriyadi</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Dian Palupi Rini</creator>
        
        <subject>Student performance; feature selection; particle swarm optimization; genetic algorithm; support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Improving student learning performance requires proper preparation and strategy so that it has an impact on improving the quality of education. One of the preparatory steps is to make a prediction modeling of student performance. Accurate student performance prediction models are needed to help teachers develop the potential of diverse students. This research aims to create a predictive model of student performance with hyperparameter optimization in the Support Vector Regression Algorithm. The hyperparameter optimization method used is the Metaheuristic Algorithm. The Metaheuristic Algorithms used in this study are Particle Swarm Optimization (PSO) and Genetic Algorithm (GA). After obtaining the best SVR hyperparameter, the next step is to model student performance predictions, which in this study produced two models, namely PSVR Modeling and GSVR Modeling. The resulting predictive modeling will also be compared with previous researchers&#39; prediction modeling of student performance using five models: Support Vector Regression, Na&#239;ve Bayes, Neural Network, Decision Tree, and Random Forest. The regression performance metric parameter, Root Mean Square Error (RMSE), evaluates modeling results. The test results show that predictive student performance using PSVR Modeling produces the smallest RMSE value of 1.608 compared to predictions of student performance by previous researchers so that the proposed prediction model can be used to predict student performance in the future.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_18-Hyperparameter_Optimization_of_Support_Vector_Regression_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Preservation Modelling for Securing Image Data using Novel Ethereum-based Ecosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140216</link>
        <id>10.14569/IJACSA.2023.0140216</id>
        <doi>10.14569/IJACSA.2023.0140216</doi>
        <lastModDate>2023-02-28T10:12:18.7670000+00:00</lastModDate>
        
        <creator>Chhaya S Dule</creator>
        
        <creator>Roopashree H. R</creator>
        
        <subject>Blockchain; data security; Ethereum; image data; privacy; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The broad usage of images in real-time applications demands a cloud infrastructure because of its advantages. Many use cases are built around shared image data, where sharing becomes the core function; the medical domain benefits particularly from this. The cloud is a centralized infrastructure for all its operations and depends mainly on a trusted third party to handle security concerns; the privacy preservation of image data, or any data, therefore becomes an issue of concern. The advantages of a trustless, decentralized system can be achieved using blockchain technology to address image data security and privacy. Traditional security and privacy models raise many apprehensions because they are designed around centralized data sharing mechanisms. It is also observed that large data files are not handled wisely, which calls for a framework model that accepts image data, or any other data of any size, to ensure a dependable and optimal security system. This paper presents a framework model that achieves optimal time complexity for securing the privacy of image data, or any other data, using a space-optimal file system and a distributed security mechanism for both the storage and sharing of data. The proposed framework uses a duplication algorithm with stakeholder agreement to ensure efficient access control to resources through a cryptographic approach to the Ethereum ecosystem. The performance metrics used in the model evaluation include the degree of availability and efficiency; on benchmarks, it performs well compared with traditional cloud-built distributed systems. The quantified outcomes of the proposed scheme exhibit a 42.5% reduction in time for data repositioning, a 41.1% reduction in time for data retrieval, a 34.8% reduction in operational cost, a 73.9% reduction in delay, and a 61% faster algorithm execution time in contrast to the conventional blockchain method.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_16-Privacy_Preservation_Modelling_for_Securing_Image_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Big Data Privacy Preservation Technique for Electronic Health Records in Multivendor Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140214</link>
        <id>10.14569/IJACSA.2023.0140214</id>
        <doi>10.14569/IJACSA.2023.0140214</doi>
        <lastModDate>2023-02-28T10:12:18.7530000+00:00</lastModDate>
        
        <creator>Ganesh Dagadu Puri</creator>
        
        <creator>D. Haritha</creator>
        
        <subject>Privacy preservation; data privacy; data distribution; anonymization; slicing; privacy attacks; HDFS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Various diagnostic health data formats and standards include both structured and unstructured data. Sensitive information contained in such metadata requires the development of specific approaches that can combine methods and techniques to extract and reconcile the information hidden in such data. However, when this data needs to be processed and used for other purposes, there are still many obstacles and concerns to overcome. Modern approaches based on machine learning, including big data analytics, assist in the information refinement process for later use as clinical evidence. These strategies consist of transforming various data into standard formats in specific scenarios. In fact, in order to conform to these rules, only de-identified diagnostic and personal data may be handled for secondary analysis, especially when information is distributed or transferred across institutions. This paper proposes big data privacy preservation techniques using various privacy functions. This research focuses on secure data distribution as well as access control security to prevent malicious activity and similarity attacks by end-users. Various privacy preservation techniques, such as data anonymization, generalization, random permutation, k-anonymity, bucketization, and l-diversity with a slicing approach, are applied during data distribution. The efficiency of the system has been evaluated on the Hadoop distributed file system (HDFS) through numerous experiments. The results obtained from the different experiments show that the computational cost changes as the k-anonymity and l-diversity parameters change. As a result, the proposed system offers greater efficiency in Hadoop environments, reducing execution time by 15% to 18%, and provides a higher level of access control security than other security algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_14-Implementation_of_Big_Data_Privacy_Preservation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Churn Customer Estimation Method based on LightGBM for Improving Sales</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140215</link>
        <id>10.14569/IJACSA.2023.0140215</id>
        <doi>10.14569/IJACSA.2023.0140215</doi>
        <lastModDate>2023-02-28T10:12:18.7530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Yusuke Nakagawa</creator>
        
        <creator>Ryuya Momozaki</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>LightGBM (light gradient boosting machine); EDA (exploratory data analysis); churn prediction; linear regression; gradient boosting method; GradientBoostingClassifier: GBC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>A churn customer estimation method is proposed for improving sales. By analyzing the differences between customers who churn and customers who return, we conduct a customer churn analysis to reduce churn and slow the loss of unique customers. By predicting customers who are likely to defect with decision tree models such as LightGBM, a machine learning method, together with logistic regression, we identify the feature values important for prediction and utilize the knowledge obtained through Exploratory Data Analysis (EDA). Experimental results show that the proposed method allows estimation and prediction of churn customers as well as of their characteristics and behavior. It is also found that the proposed method outperforms the conventional method, GradientBoostingClassifier (GBC), by around 10%.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_15-Churn_Customer_Estimation_Method_based_on_LightGBM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Espousing AI to Enhance Cost-Efficient and Immersive Experience for Human Computer Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140213</link>
        <id>10.14569/IJACSA.2023.0140213</id>
        <doi>10.14569/IJACSA.2023.0140213</doi>
        <lastModDate>2023-02-28T10:12:18.7370000+00:00</lastModDate>
        
        <creator>Deepak Chaturvedi</creator>
        
        <creator>Ashima Arya</creator>
        
        <creator>Mohammad Zubair Khan</creator>
        
        <creator>Eman Aljohani</creator>
        
        <creator>Liyakathunisa</creator>
        
        <creator>Vaishali Arya</creator>
        
        <creator>Namrata Sukhija</creator>
        
        <creator>Prakash Srivastava</creator>
        
        <subject>Artificial intelligence; dizziness; gestures; human computer interaction; user experience; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Because of recent technological and interface advancements in the field, the virtual reality (VR) movement has entered a new era. Mobility is one of the most crucial behaviours in virtual reality. In this research, popular virtual reality mobility systems are compared, and it is shown that gesture control is a key technology for allowing distinctive virtual world communication paradigms. Gesture based movements are very beneficial when there are a lot of spatial restrictions. With a focus on cost-effectiveness, the current study introduces a gesture-based virtual movement (GVM) system that eradicates the obligation for expensive hardware/controllers for virtual world mobility (i.e., walk/ jump/ hold for this research) using artificial intelligence (AI). Additionally, the GVM aims to prevent users from becoming dizzy by allowing them to change the trajectory by simply turning their head in the intended direction. The GVM was assessed on its interpreted realism, presence, and spatial drift in the actual environment in comparison to the state-of-the-art techniques. The results demonstrated how the GVM outperformed the prevailing methodologies in a number of common interaction components. Additionally, the empirical analysis showed that GVM offers customers a real-time experience with a latency of ~65 milliseconds.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_13-Espousing_AI_to_Enhance_Cost_Efficient_and_Immersive_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insights into Search Engine Optimization using Natural Language Processing and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140211</link>
        <id>10.14569/IJACSA.2023.0140211</id>
        <doi>10.14569/IJACSA.2023.0140211</doi>
        <lastModDate>2023-02-28T10:12:18.7200000+00:00</lastModDate>
        
        <creator>Vinutha M S</creator>
        
        <creator>M C Padma</creator>
        
        <subject>Search engine optimization; google search; natural language processing; machine learning; recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Among the potential tools in digital marketing, Search Engine Optimization (SEO) facilitates the use of appropriate data by providing appropriate results according to the search priority of the user. Various research-based approaches have been developed to improve the optimization performance of search engines over the past decade; however, it is still unclear what the strengths and weaknesses of these methods are. As a result of the increased proliferation of Machine Learning (ML) and Natural Language Processing (NLP) in complex content management, there is potential to achieve successful SEO results. Therefore, the purpose of this paper is to contribute towards performing an exhaustive study on the respective NLP and ML methodologies to explore their strengths and weaknesses. Additionally, the paper highlights distinct learning outcomes and a specific research gap intended to assist future research work with a guideline necessary for optimizing search engine performance.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_11-Insights_into_Search_Engine_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Rectified Linear Unit (Arelu) for Classification Problems to Solve Dying Problem in Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140212</link>
        <id>10.14569/IJACSA.2023.0140212</id>
        <doi>10.14569/IJACSA.2023.0140212</doi>
        <lastModDate>2023-02-28T10:12:18.7200000+00:00</lastModDate>
        
        <creator>Ibrahim A. Atoum</creator>
        
        <subject>Rectified Linear Unit (ReLU); Convolutional Neural Network; activation function; deep learning; MNIST dataset; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>A convolutional neural network (CNN) is a subset of machine learning and one of several types of artificial neural networks used for different applications and data types. Activation functions (AFs) are used in this type of network to determine whether or not its neurons are activated. One non-linear AF, the Rectified Linear Unit (ReLU), involves only simple mathematical operations and gives better performance. It avoids the vanishing gradient problem inherent in older AFs such as tanh and sigmoid, and it has a lower computational cost. Despite these advantages, it suffers from the so-called dying problem. Several modifications have appeared to address this problem, for example Leaky ReLU (LReLU). The main concept of our algorithm is to improve the current LReLU activation function in mitigating the dying problem in deep learning by readjusting (changing and decreasing) the value of the loss, or cost, function as the number of epochs increases. The model was trained on the MNIST dataset for 20 epochs and achieved the lowest misclassification rate, 1.2%. While optimizing our proposed methods, we obtained comparatively better results in terms of simplicity and low computational cost, with no hyperparameters.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_12-Adaptive_Rectified_Linear_Unit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Software Architecture Design for Virtual Rehabilitation System for Manual Motor Dexterity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140210</link>
        <id>10.14569/IJACSA.2023.0140210</id>
        <doi>10.14569/IJACSA.2023.0140210</doi>
        <lastModDate>2023-02-28T10:12:18.7030000+00:00</lastModDate>
        
        <creator>Edwin Enrique Saavedra Parisaca</creator>
        
        <creator>Solansh Jaqueline Montoya Mu&#241;oz</creator>
        
        <creator>Elizabeth Vidal Duarte</creator>
        
        <creator>Eveling Gloria Castro Gutierrez</creator>
        
        <creator>Angel Yvan Choquehuanca Peraltilla</creator>
        
        <creator>Sergio Albiol P&#233;rez</creator>
        
        <subject>Software architecture; dynamic architecture; virtual rehabilitation systems; motor dexterity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The architectural design is fundamental in the construction process of a virtual rehabilitation system, since it allows the components and their interaction to be understood and serves as a guide for developing the software. This article proposes a dynamic architecture design that could be used independently of software and hardware in a virtual rehabilitation system for motor dexterity. This proposal contributes to the software engineering area by providing a starting point for the development of virtual rehabilitation systems. The system implementation was done with two tracking devices (hardware) and two rehabilitation games (software), and was validated with the User Experience Questionnaire (UEQ). Results show that the use of a dynamic architecture allowed different devices to be used efficiently and quickly, regardless of the game, preventing the user from noticing a change or experiencing difficulty in carrying out the tasks.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_10-Dynamic_Software_Architecture_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced MCDM Model for Cloud Service Provider Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140209</link>
        <id>10.14569/IJACSA.2023.0140209</id>
        <doi>10.14569/IJACSA.2023.0140209</doi>
        <lastModDate>2023-02-28T10:12:18.6900000+00:00</lastModDate>
        
        <creator>Ayman S. Abdelaziz</creator>
        
        <creator>Hany Harb</creator>
        
        <creator>Alaa Zaghloul</creator>
        
        <creator>Ahmed Salem</creator>
        
        <subject>MCDM; TOPSIS; neutrosophic set; single valued neutrosophic; cloud services provider; quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Multi-Criteria Decision-Making (MCDM) techniques are often used to aid decision-makers in selecting the best alternative among several options. However, these systems have issues, including the Rank Reversal Problem (RRP) and decision-making ambiguity. This study aimed to propose a selection model for a Cloud Service Provider (CSP) that addresses these issues. This research used the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to rank the alternatives. The entropy technique is utilized to determine the weight of the criteria, and Single Valued Neutrosophic (SVN) is employed to address uncertainty. To select the best cloud provider based on Quality of Service (QoS) criteria, we used a dataset from Cloud Harmony for this study. The results indicated that the suggested model could effectively resolve the RRP under conditions of uncertainty. This research is novel and is the first to address both the problem of uncertainty in decision-making and RRP in MCDM.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_9-An_Enhanced_MCDM_Model_for_Cloud_Service_Provider_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supply Chain Network Model using Multi-Agent Reinforcement Learning for COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140208</link>
        <id>10.14569/IJACSA.2023.0140208</id>
        <doi>10.14569/IJACSA.2023.0140208</doi>
        <lastModDate>2023-02-28T10:12:18.6730000+00:00</lastModDate>
        
        <creator>Tomohito Okada</creator>
        
        <creator>Hiroshi Sato</creator>
        
        <creator>Masao Kubo</creator>
        
        <subject>Supply chain management; agent based model; multi-agent reinforcement learning; COVID-19 vaccination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>The COVID-19 vaccination management in Japan has revealed many problems. The number of vaccines available was clearly less than the number of people who wanted to be vaccinated. Initially, the system was managed by making reservations by age group using vaccination coupons. After the second round of vaccinations, only appointments for vaccination dates were coordinated, and vaccination sites where the vaccine could be received freely were set up in Shibuya Ward. Under a shortage of vaccine supply, the inability to make appointments arose from a failure to properly estimate demand. In addition, vaccine expired due to inadequate inventory management and had to be discarded. This can be considered a supply chain problem in which an appropriate supply could not be provided in response to demand. In response, this paper examines whether shortages and stock discards can be avoided in the real world by a decentralized management system enabling easy on-site inventory control, instead of a centralized management system. Based on a multi-agent model, a model was created to redistribute inventory to clients by predicting future shortages from demand fluctuations and past inventory levels. The model was constructed for the Kanto region. The validation results showed that, as a result of learning the decentralized management and out-of-stock forecasting, the number of discards was reduced by about 70% and out-of-stocks by about 12%.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_8-Supply_Chain_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing Overhead Aware Optimal Cluster based Routing Algorithm for IoT Network using Heuristic Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140207</link>
        <id>10.14569/IJACSA.2023.0140207</id>
        <doi>10.14569/IJACSA.2023.0140207</doi>
        <lastModDate>2023-02-28T10:12:18.6730000+00:00</lastModDate>
        
        <creator>Srinivasulu M</creator>
        
        <creator>Shiva Murthy G</creator>
        
        <subject>Internet-of-things; communication overhead; cluster based routing; multipath routing; cluster head</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Globally, billions of devices in heterogeneous networks are interconnected by the Internet of Things (IoT). Due to failure-prone connectivity and high latency, IoT applications cannot rely on a centralized decision-making system. Low-latency communication is a primary characteristic of these applications. Since IoT applications usually have small payload sizes, reducing communication overhead is crucial to improving energy efficiency. Researchers have proposed several methods to resolve the load balancing issue of IoT networks and reduce communication overhead. However, these techniques are not effective: high communication costs, end-to-end delay, packet loss ratio, and reduced throughput and node lifetimes negatively impact network performance. In this paper, we propose a communication overhead aware optimal cluster-based (COOC) routing algorithm for IoT networks based on a hybrid heuristic technique. We form load-balanced clusters using three benchmark algorithms: k-means clustering, fuzzy logic, and a genetic algorithm. In the next step, we compute the rank of each node in a cluster using multiple design constraints, which are optimized by the improved COOT bird optimum search algorithm (I-COOT). After that, we choose the cluster head (CH) according to the rank condition, thereby reducing the communication overhead in IoT networks. Additionally, we design a chaotic golden search optimization algorithm (CGSO) to choose the optimal path between IoT nodes among multiple paths, ensuring optimal data transfer from CHs. Finally, we validate our proposed COOC routing algorithm against different simulation scenarios and compare the results with existing state-of-the-art routing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_7-Routing_Overhead_Aware_Optimal_Cluster_based_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sequence Recommendation based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140206</link>
        <id>10.14569/IJACSA.2023.0140206</id>
        <doi>10.14569/IJACSA.2023.0140206</doi>
        <lastModDate>2023-02-28T10:12:18.6570000+00:00</lastModDate>
        
        <creator>Gulsim Rysbayeva</creator>
        
        <creator>Jingwei Zhang</creator>
        
        <subject>Long short-term memory (LSTM) and gated recurrent unit (GRU); RNN; deep learning; recommendation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Sequence recommendation systems have become increasingly popular in various fields such as movies and social media. These systems aim to predict a user&#39;s preferences and interests based on their past behavior and provide them with personalized recommendations. Deep learning, particularly Recurrent Neural Networks (RNNs), has emerged as a powerful tool for sequence recommendation. In this research, we explore the effectiveness of RNNs in movie and Instagram recommendation systems. We investigate and compare the performance of different types of RNNs, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), in recommending movies and Instagram posts to users based on their browsing history. Additionally, we study the impact of incorporating additional information, such as a user&#39;s demographics and Instagram hashtags, on the performance of the recommendation system. We also evaluate the performance of RNN-based movie and Instagram recommendation systems in comparison to traditional approaches, such as collaborative filtering and content-based filtering, in terms of accuracy and personalization. The findings of this research provide insights into the effectiveness of RNNs in movie and Instagram recommendation systems and contribute to the development of more accurate and personalized recommendations for users.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_6-Sequence_Recommendation_based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Enabled Hall-Effect IoT-System for Monitoring Building Vibrations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140205</link>
        <id>10.14569/IJACSA.2023.0140205</id>
        <doi>10.14569/IJACSA.2023.0140205</doi>
        <lastModDate>2023-02-28T10:12:18.6430000+00:00</lastModDate>
        
        <creator>Emanuele Lattanzi</creator>
        
        <creator>Paolo Capellacci</creator>
        
        <creator>Valerio Freschi</creator>
        
        <subject>Vibration sensor; machine learning; hall-effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Vibration monitoring of civil infrastructures is a fundamental task in assessing their structural health, which can nowadays be carried out at reduced cost thanks to new sensing devices and embedded hardware platforms. In this work, we present a system for monitoring vibrations in buildings based on a novel, cheap, Hall-effect vibration sensor interfaced with a commercially available embedded hardware platform, in order to support communication toward cloud-based services by means of IoT communication protocols. Two deep learning neural networks have been implemented and tested to demonstrate the capability of performing nontrivial prediction tasks directly on board the embedded platform, an important feature for conceiving dynamic policies that decide whether to perform a recognition task on the final (resource-constrained) device or delegate it to the cloud according to specific energy, latency, and accuracy requirements. Experimental evaluation on two use cases, namely the detection of a seismic event and the counting of steps made by people transiting in a public building, highlights the potential of the adopted solution; for instance, recognition of walking-induced vibrations can be achieved with an accuracy of 96% in real time within time windows of 500 ms. Overall, the results of the empirical investigation show the flexibility of the proposed solution as a promising alternative for the design of vibration monitoring systems in built environments.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_5-A_Machine_Learning_Enabled_Hall_Effect_IoT_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Artificial Intelligence in the Context of Industry 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140204</link>
        <id>10.14569/IJACSA.2023.0140204</id>
        <doi>10.14569/IJACSA.2023.0140204</doi>
        <lastModDate>2023-02-28T10:12:18.6270000+00:00</lastModDate>
        
        <creator>Shadi Banitaan</creator>
        
        <creator>Ghaith Al-refai</creator>
        
        <creator>Sattam Almatarneh</creator>
        
        <creator>Hebah Alquran</creator>
        
        <subject>Artificial intelligence; Industry 4.0; intelligent manufacturing; industry analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Artificial Intelligence (AI) is seen as the most promising among Industry 4.0 advancements for businesses. Artificial intelligence, defined as computer models that mimic intelligent behavior, is poised to unleash the next wave of digital disruption and bring a competitive advantage to the industry. The value of AI lies not in its models, but in the ways in which we can harness them. It is becoming more common for industry objects to be converted into intelligent objects that can sense, act, adapt, and behave in a given environment. Leaders in the industry will need to make deliberate choices about how, when, and where to deploy these technologies. Our work highlights some of the primary AI emerging trends in Industry 4.0. We also discuss the advantages, challenges, and applications of AI in Industry 4.0.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_4-A_Review_on_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text2Simulate: A Scientific Knowledge Visualization Technique for Generating Visual Simulations from Textual Knowledge</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140203</link>
        <id>10.14569/IJACSA.2023.0140203</id>
        <doi>10.14569/IJACSA.2023.0140203</doi>
        <lastModDate>2023-02-28T10:12:18.6270000+00:00</lastModDate>
        
        <creator>Ifeoluwatayo A. Ige</creator>
        
        <creator>Bolanle F. Oladejo</creator>
        
        <subject>Knowledge visualization; visual simulation; text-to-simulation knowledge visualization technique; natural language processing; electronic learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>Recent research has developed knowledge visualization techniques for generating interactive visualizations from textual knowledge. However, when applied, these techniques do not generate precise semantic visual representations, which is imperative for domains that require an accurate visual representation of the spatial attributes and relationships between objects of discourse in explicit knowledge. Therefore, this work presents a Text-to-Simulation Knowledge Visualization (TSKV) technique for generating visual simulations from domain knowledge by developing a rule-based classifier to improve natural language processing and a Spatial Ordering (SO) algorithm to solve the identified challenge. A system architecture was developed to structurally model the components of the TSKV technique and implemented in a knowledge visualization application called ‘Text2Simulate’. A quantitative evaluation of the application was carried out to test for accuracy using modified existing information visualization evaluation criteria. The Object Inclusion (OI), Object-Attribute Visibility (OAV), Relative Positioning (RP), and Exact Visual Representation (EVR) criteria were modified to include an Object’s Motion (OM) metric for quantitative evaluation of generated visual simulations. Accuracy on the generated simulations was 90.1%, 84.0%, 90.1%, 90.0%, and 96.0% for the OI, OAV, OM, RP, and EVR criteria respectively. A user evaluation was conducted to measure system effectiveness and user satisfaction, which showed that all participants were satisfied well above average. These results show an improved semantic quality of visualized knowledge due to the improved classification of spatial attributes and relationships from textual knowledge. This technique could be adopted during the development of electronic learning applications for improved understanding and desirable actions.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_3-Text2Simulate_A_Scientific_Knowledge_Visualization_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adversarial Sampling for Fairness Testing in Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140202</link>
        <id>10.14569/IJACSA.2023.0140202</id>
        <doi>10.14569/IJACSA.2023.0140202</doi>
        <lastModDate>2023-02-28T10:12:18.6100000+00:00</lastModDate>
        
        <creator>Tosin Ige</creator>
        
        <creator>William Marfo</creator>
        
        <creator>Justin Tonkinson</creator>
        
        <creator>Sikiru Adewale</creator>
        
        <creator>Bolanle Hafiz Matti</creator>
        
        <subject>Adversarial machine learning; adversarial attack; adversarial defense; machine learning fairness; fairness testing; adversarial sampling; deep neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attack, some of which include adversarial training algorithms, there is still the pitfall that adversarial training tends to cause disparity in accuracy and robustness among different groups. Our research aims to use adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes or categories of images in a given dataset. We successfully demonstrate a new method of ensuring fairness across various groups of input in a deep neural network classifier. We trained our neural network model on the original images only, without training it on the perturbed or attacked images. When we fed the adversarial samples to our model, it was able to predict the original category/class of the image each adversarial sample belongs to. We also introduced the separation-of-concerns concept from software engineering, whereby an additional standalone filter layer heavily removes the noise or attack from a perturbed image before automatically passing it to the network for classification; with this, we achieved an accuracy of 93.3%. The CIFAR-10 dataset has ten categories, so, in order to account for fairness, we applied our hypothesis across each category and obtained consistent results and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_2-Adversarial_Sampling_for_Fairness_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Organizational Digital Transformations and the Importance of Assessing Theoretical Frameworks such as TAM, TTF, and UTAUT: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140201</link>
        <id>10.14569/IJACSA.2023.0140201</id>
        <doi>10.14569/IJACSA.2023.0140201</doi>
        <lastModDate>2023-02-28T10:12:18.5970000+00:00</lastModDate>
        
        <creator>Bibhu Dash</creator>
        
        <creator>Pawankumar Sharma</creator>
        
        <creator>Swati Swayamsiddha</creator>
        
        <subject>Data growth; digital transformations; TAM; TTF; UTAUT; sustainability; FTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(2), 2023</description>
        <description>In this era of Industry 5.0, businesses worldwide are attempting to gain competitive advantages, increase profits, and improve consumer engagement. To achieve their goals, businesses undergo extensive digital transformations (DT) by implementing cutting-edge technologies such as cloud computing, artificial intelligence (AI), the Internet of Things, and blockchain, among others. DT is a costly journey involving strategy, people, and technology. At the same time, many digitization efforts are failing miserably, resulting in project abandonment, loss of critical stakeholder trust, and the dismissal of important staff. Poor strategy, which may have failed to pre-evaluate organizational flexibility and cultural misfits, is often blamed. As a result, it is critical to thoroughly investigate theoretical frameworks such as the Technology Acceptance Model (TAM), Task Technology Fit (TTF), and Unified Theory of Acceptance and Use of Technology (UTAUT), which were developed through significant research into various kinds of organizations. All of these aspects are covered in this work by evaluating academic papers from the IEEE, Scopus, and Web of Science databases and drawing conclusions in the following sections.</description>
        <description>http://thesai.org/Downloads/Volume14No2/Paper_1-Organizational_Digital_Transformations_and_the_Importance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques to Enhance the Mental Age of Down Syndrome Individuals: A Detailed Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401107</link>
        <id>10.14569/IJACSA.2023.01401107</id>
        <doi>10.14569/IJACSA.2023.01401107</doi>
        <lastModDate>2023-01-31T12:17:57.0470000+00:00</lastModDate>
        
        <creator>Irfan M. Leghari</creator>
        
        <creator>Hamimah Ujir</creator>
        
        <creator>SA Ali</creator>
        
        <creator>Irwandi Hipiny</creator>
        
        <subject>Artificial Intelligence; Artificial Neural Network (ANN); Down Syndrome Individuals (DSI); Interactive Mental Learning Software (IMLS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Down syndrome individuals are known as intellectually disabled people. Their intellectual ability is classified into four categories: mild, moderate, severe, and profound. These individuals have significant limitations in learning and adaptive skills. Psychologists evaluate the mental capability of such individuals using the conventional intelligence quotient (IQ) method rather than any technology. The research matrix shows that most research has been carried out on analyzing neuroimaging, antenatal screening, and hearing impairment of individuals, but there is still an obvious gap in evaluating mental age using artificial intelligence. We propose an artificial neural network model that supervises how software is used to obtain a dataset via a Knowledge Base Decision Support System. In a survey, N = 120 individuals were examined by a psychiatrist, a medical expert, and a teacher to assess the presence of Down syndrome by analyzing their physical and facial appearance and communication skills; only N = 62 individuals were confirmed as having Down syndrome. The selected individuals were invited to perform a mental ability assessment using the Interactive Mental Learning Software. The results showed a rise in IQ from severe to moderate (20% to 35%) and from moderate to mild (35% to 75%) severity, assessed through an interactive series of software opinion polls based on comparison, logic, and basic mathematical operations, with initial IQ (iIQ) and enhanced IQ (eIQ) as input and output parameters.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_107-Machine_Learning_Techniques_to_Enhance_the_Mental_Age.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AMIM: An Adaptive Weighted Multimodal Integration Model for Alzheimer’s Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401108</link>
        <id>10.14569/IJACSA.2023.01401108</id>
        <doi>10.14569/IJACSA.2023.01401108</doi>
        <lastModDate>2023-01-31T12:17:57.0470000+00:00</lastModDate>
        
        <creator>Dewen Ding</creator>
        
        <creator>Xianhua Zeng</creator>
        
        <creator>Xinyu Wang</creator>
        
        <creator>Jian Zhang</creator>
        
        <subject>MRI; global information images; maximum information slices; adaptive weights; integration method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Alzheimer’s disease (AD) is an irreversible neurological disorder, so early medical diagnosis is extremely important. Magnetic resonance imaging (MRI) is one of the main medical imaging methods used clinically to detect and diagnose AD. However, most existing computer-aided diagnostic methods only use MRI slices for model architecture design, ignoring the informational differences between slices. In addition, physicians often use multimodal data, such as medical images and clinical information, to diagnose patients; this approach helps physicians make more accurate judgments. Therefore, we propose an adaptive weighted multimodal integration model (AMIM) for AD classification. The model uses global information images, maximum information slices, and clinical information as data inputs for the first time, and adopts an adaptive weighted integration method for classification. Experimental results show that our model achieves an accuracy of 99.00% for AD versus normal controls (NC), and 82.86% for mild cognitive impairment (MCI) versus NC. The proposed model achieves the best classification performance in terms of accuracy compared with most state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_108-AMIM_An_Adaptive_Weighted_Multimodal_Integration_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust Management for Deep Autoencoder based Anomaly Detection in Social IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401106</link>
        <id>10.14569/IJACSA.2023.01401106</id>
        <doi>10.14569/IJACSA.2023.01401106</doi>
        <lastModDate>2023-01-31T12:17:57.0330000+00:00</lastModDate>
        
        <creator>Rashmi M R</creator>
        
        <creator>C Vidya Raj</creator>
        
        <subject>Social IoT; trust management; anomaly detection; DDoS; deep autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Social IoT has gained huge traction with the advent of 5G and beyond communication. In this connected world of devices, trust management is crucial for protecting data. Among the many attacks, DDoS is the most prevalent botnet attack. Infected devices urgently require anomaly detection to learn about and curb malware early. This paper considers nine IoT devices deployed in a Social IoT environment. We introduce attacks such as BASHLITE and Mirai by compromising a network node, and then look for traces of malicious behavior using AI algorithms. The investigation starts with a simple network approach, the Multi-Layer Perceptron (MLP), and then proceeds to a machine learning approach, Random Forest (RF). While MLP detected the malicious node with an accuracy of 89.39%, RF proved 90.0% accurate. Motivated by these results, a deep learning approach, the deep autoencoder, was employed and found to be more accurate than MLP and RF. The results are encouraging and were verified for scalability, efficiency, and reliability.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_106_Trust_Management_for_Deep_Autoencoder_based_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Impact Analysis Approach for Test Cases based on Changes of Use Case based Requirement Specifications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401105</link>
        <id>10.14569/IJACSA.2023.01401105</id>
        <doi>10.14569/IJACSA.2023.01401105</doi>
        <lastModDate>2023-01-31T12:17:57.0170000+00:00</lastModDate>
        
        <creator>Adisak Intana</creator>
        
        <creator>Kanjana Laosen</creator>
        
        <creator>Thiwatip Sriraksa</creator>
        
        <subject>Change impact analysis approach; test case; black-box testing; use case based requirement specification; combination of equivalence and classification tree method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Change Impact Analysis (CIA) is essential to the software development process, identifying the potential effects of changes during development. Changing requirements always impacts software testing because some of the existing test cases may no longer be usable to test the software. This forces new test cases to be generated entirely from the changed version of the software requirements specification, which takes a considerable amount of time and effort to re-test the modified system. Therefore, this paper proposes a novel automated impact analysis approach for test cases based on changes of use case based requirement specifications. The approach provides a framework and a CIA algorithm in which the impact on test cases is analysed when the requirement specification is changed. To detect a change, two versions of the use case model, before-change and after-change, are compared. The patterns representing the causes of variable changes are then classified and analysed. As a result, the existing test cases are analysed to determine whether they can be completely reused, partly updated, or must be additionally generated. New test cases are generated automatically by using the Combination of Equivalence and Classification Tree Method (CCTM). This improves testing coverage while minimising the number of test cases and eliminating redundant ones. The automation of this approach is demonstrated with the developed prototype tool. The validation and evaluation results on two real case studies from a Hospital Information System (HIS), together with the perspectives of practical specialists, confirm the contribution of this tool.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_105-An_Automated_Impact_Analysis_Approach_for_Test_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low-Cost Wearable Autonomous System for the Protection of Bicycle Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401104</link>
        <id>10.14569/IJACSA.2023.01401104</id>
        <doi>10.14569/IJACSA.2023.01401104</doi>
        <lastModDate>2023-01-31T12:17:57.0000000+00:00</lastModDate>
        
        <creator>Daniel Mejia</creator>
        
        <creator>Sergio Gomez</creator>
        
        <creator>Fredy Martinez</creator>
        
        <subject>Autonomous system; bicycle users; embedded system; protection; wearable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>A bicycle is a form of transport that not only positively impacts the health of its users and, by reducing pollution levels, the general population, but also constitutes an accessible and affordable means of transport for developing societies. However, when coexisting with other forms of transport, the accident rate is elevated and the risk is high. Among the factors contributing to accidents involving bicycles are collisions with motor vehicles. These accidents can occur when a motor vehicle maneuvers without seeing the bicycle or when a motorist drives distracted. These types of accidents can be avoided if cyclists and motorists are aware of their environment and respect traffic laws and safety regulations. This research aims to develop a low-cost autonomous electronic system that provides extra protection to bicycle users, particularly by making them visible to other road users on cloudy days or at night. The system uses a 32-bit processor with brightness and acceleration sensors that trigger visual alerts to both the bicycle user and possible nearby vehicles. It also monitors and logs the signals on a server for route evaluation. The prototype was successfully evaluated in the laboratory, demonstrating its autonomy and performance. The test results demonstrate the system’s capacity to provide extra protection, in addition to its robustness and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_104-A_Low_Cost_Wearable_Autonomous_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Navigation of Autonomous Vehicles using Reinforcement Learning with Generalized Advantage Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401103</link>
        <id>10.14569/IJACSA.2023.01401103</id>
        <doi>10.14569/IJACSA.2023.01401103</doi>
        <lastModDate>2023-01-31T12:17:57.0000000+00:00</lastModDate>
        
        <creator>Edwar Jacinto</creator>
        
        <creator>Fernando Martinez</creator>
        
        <creator>Fredy Martinez</creator>
        
        <subject>Actor-critic; autonomous vehicles; generalized advantage estimation; navigation; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This study proposes a reinforcement learning approach using Generalized Advantage Estimation (GAE) for autonomous vehicle navigation in complex environments. The method is based on the actor-critic framework, where the actor network predicts actions and the critic network estimates state values. GAE is used to compute the advantage of each action, which is then used to update the actor and critic networks. The approach was evaluated in a simulation of an autonomous vehicle navigating through challenging environments and it was found to effectively learn and improve navigation performance over time. The results suggest GAE as a promising direction for further research in autonomous vehicle navigation in complex environments.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_103-Navigation_of_Autonomous_Vehicles_using_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effect Assessment System for Curriculum Ideology and Politics based on Students’ Achievements in Chinese Engineering Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401102</link>
        <id>10.14569/IJACSA.2023.01401102</id>
        <doi>10.14569/IJACSA.2023.01401102</doi>
        <lastModDate>2023-01-31T12:17:56.9870000+00:00</lastModDate>
        
        <creator>Bo Wang</creator>
        
        <creator>Hailuo Yu</creator>
        
        <creator>Yusheng Sun</creator>
        
        <creator>Zhifeng Zhang</creator>
        
        <creator>Xiaoyun Qin</creator>
        
        <subject>Curriculum ideology and politics; assessment; engineering education; ideological and political education; outcomes-based education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Curriculum ideological and political education (CIPE) has attracted the attention of China’s leaders and state departments, but its effect assessment remains an open issue that must be addressed for the efficient and effective implementation of CIPE. In recent years, the engineering education conception has been widely adopted in Chinese higher education due to its effectiveness. Therefore, in this paper, we study the quantification of the CIPE effect against the background of Chinese engineering education. We propose a CIPE effect assessment system for higher education and a quantitative method for the CIPE effect based on each student’s achievement of graduation requirements. The proposed system provides visualized information on achievements and the CIPE effect for students and teachers, helping students locate themselves within their major studies and teachers continuously improve their teaching methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_102-An_Effect_Assessment_System_for_Curriculum_Ideology_and_Politics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Assessment of Teaching Efficacy: A Natural Language Processing Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401101</link>
        <id>10.14569/IJACSA.2023.01401101</id>
        <doi>10.14569/IJACSA.2023.01401101</doi>
        <lastModDate>2023-01-31T12:17:56.9700000+00:00</lastModDate>
        
        <creator>Lalitha Manasa Chandrapati</creator>
        
        <creator>Ch. Koteswara Rao</creator>
        
        <subject>Teacher evaluation; topic modeling; clustering; Latent Dirichlet Allocation (LDA); K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The most significant component in the education domain is evaluation. Apart from student evaluation, teacher evaluation plays a vital role in colleges and universities. Implementing a scientific and appropriate assessment method for enhancing teaching standards in educational institutions is absolutely essential. Conventional teacher assessment techniques have always been prone to bias and injustice due to single-dimensional assessment criteria, biased scoring, and ineffective integration. In this regard, it is crucial to develop a specialized teacher evaluation assistant (TEA) system that integrates computational intelligence algorithms. This research concentrates on using Natural Language Processing (NLP) based techniques for empirically analysing teaching effectiveness. We develop a model in which a teacher is evaluated based on the content they deliver during a lecture. Two techniques are employed to evaluate teacher effectiveness: topic modelling and text clustering. Topic modelling achieved an accuracy of 75%, and text clustering achieved an accuracy of 80%. Thus, the method can effectively be deployed to assess and predict the effectiveness of a teacher&#39;s teaching.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_101-Integrated_Assessment_of_Teaching_Efficacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Collaborative Interaction with the Augmentation of Sign Language for the Vocally Challenged</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140199</link>
        <id>10.14569/IJACSA.2023.0140199</id>
        <doi>10.14569/IJACSA.2023.0140199</doi>
        <lastModDate>2023-01-31T12:17:56.9530000+00:00</lastModDate>
        
        <creator>Sukruth G L</creator>
        
        <creator>Vijaya Kumar B P</creator>
        
        <creator>Tejas M R</creator>
        
        <creator>Rithvik K</creator>
        
        <creator>Trisha Ann Tharakan</creator>
        
        <subject>Hand Gesture Recognition (HGR); wearable sensors; Long-Short Term Memory (LSTM); Natural Language Processing (NLP); Dynamic Spatio-Temporal Warping (DSTW); Indian Sign Language (ISL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>As per Census 2011, India had 26.8 million differently abled people, of whom more than 25% faced difficulty in vocal communication. They use Indian Sign Language (ISL) to communicate with others. The proposed solution is a sensor-based Hand Gesture Recognition (HGR) wearable device capable of translating and conveying messages from the vocally challenged community. The proposed method involves designing a hand glove by integrating flex and Inertial Measurement Unit (IMU) sensors within the HGR wearable device, wherein hand and finger movements are captured as gestures. These are mapped to the ISL dictionary using machine learning techniques that learn the spatio-temporal variations in the gestures for classification. The novelty of the work is to enhance the capability of HGR by extracting the spatio-temporal variations of an individual’s gestures and adapting to their dynamics with aging and context factors, by proposing a Dynamic Spatio-Temporal Warping (DSTW) technique along with a long short-term memory based learning model. Using the sequence of identified gestures along with their ISL mapping, grammatically correct sentences are constructed using transformer-based Natural Language Processing (NLP) models. The sentences are then conveyed to the user through a suitable communication medium, such as text-to-voice, text-image, etc. The proposed HGR device, together with Bidirectional Long Short-Term Memory (BiLSTM) and DSTW techniques, is implemented to evaluate performance with respect to accuracy, precision and reliability of gesture recognition. Experiments were carried out to capture varied gestures and their recognition, and an accuracy of 98.91% was observed.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_99-Enhancing_Collaborative_Interaction_with_the_Augmentation_of_Sign_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Delivery Management System based on Blockchain, Smart Contracts and NFT: A Case Study in Vietnam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.01401100</link>
        <id>10.14569/IJACSA.2023.01401100</id>
        <doi>10.14569/IJACSA.2023.01401100</doi>
        <lastModDate>2023-01-31T12:17:56.9530000+00:00</lastModDate>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Nguyen Huyen Tran</creator>
        
        <creator>Anh Nguyen The</creator>
        
        <creator>Huynh Trong Nghia</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <creator>Nguyen Thi Kim Ngan</creator>
        
        <subject>Letter-of-Credit; cash-on-delivery; blockchain; smart contract; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Traditional shipping models increasingly reveal many shortcomings and affect the interests of sellers and buyers due to their dependence on trusted third parties. For example, the Cash-on-Delivery (CoD) model must depend on the carrier/shipper, while the Letter-of-Credit (LoC) model depends on the party certifying the letter (i.e., a bank). There have been many examples demonstrating the riskiness of the two models above. Specifically, in developing countries (e.g., Vietnam), trade between sellers and buyers and the demand for exporting goods have not yet applied the benefits of current technology to improve traditional shipping models. Two typical examples in the last five years that demonstrate the risks to both sellers and buyers under the CoD and LoC models are GNN Express withholding sellers’ money (2017) and the loss of control of four containers of cashew nuts exported from Vietnam to Italy (2021). A series of studies have proposed solutions based on distributed storage, blockchain, and smart contracts to solve the above problems. However, some approaches do not consider the role of the shipper or are not suitable for deployment in a developing country (i.e., Vietnam). In this paper, we propose a model combining the traditional CoD model with blockchain technology, smart contracts, and NFTs to solve the above problems. Specifically, our contribution includes four aspects: a) proposing a shipping model based on blockchain technology and smart contracts; b) proposing a model for storing package information based on Ethereum’s NFT technology (i.e., ERC721); c) implementing the proposed model by designing smart contracts that support the creation and transfer of NFTs between sellers and buyers; d) deploying smart contracts on four EVM-enabled platforms, including BNB Smart Chain, Fantom, Celo, and Polygon, to find a suitable platform for the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_100-Delivery_Management_System_based_on_Blockchain_Smart_Contracts_and_NFT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Hybrid Approach for Diagnosing Plants Bacterial and Fungal Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140198</link>
        <id>10.14569/IJACSA.2023.0140198</id>
        <doi>10.14569/IJACSA.2023.0140198</doi>
        <lastModDate>2023-01-31T12:17:56.9400000+00:00</lastModDate>
        
        <creator>Ahmed BaniMustafa</creator>
        
        <creator>Hazem Qattous</creator>
        
        <creator>Ihab Ghabeish</creator>
        
        <creator>Muwaffaq Karajeh</creator>
        
        <subject>Deep learning; machine learning; classification; plant diseases; disease diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Bacterial and fungal diseases may affect the yield of stone fruit and damage the chlorophyll synthesis process, which is crucial for tree growth and fruiting. However, due to their similar visual shot-hole symptoms, novice agriculturalists and ordinary farmers usually cannot identify and differentiate these two diseases. This work investigates and evaluates the use of machine learning for diagnosing them. It aims at paving the way toward a generic deep learning-based model that can be embedded in a mobile phone application or a web service to provide a fast, reliable, and cheap diagnosis of plant diseases, which helps reduce the excessive, unnecessary, or improper use of pesticides that can harm public health and the environment. The dataset consists of hundreds of samples collected from stone fruit farms in the north of Jordan under normal field conditions. The image features were extracted using a CNN pre-trained on millions of images, and the diseases were identified using three machine learning classification algorithms: 1) K-nearest neighbour (KNN); 2) Stochastic Gradient Descent (SGD); and 3) Random Forests (RF). The resulting models were evaluated using 10-fold cross-validation, with CNN-KNN achieving the best AUC performance with a score of 98.5%. On the other hand, the CNN-SGD model performed best in Classification Accuracy (CA) with a score of 93.7%. The results shown in the Confusion Matrix, ROC, Lift, and Calibration curves also confirmed the validity and robustness of the constructed models.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_98-A_Machine_Learning_Hybrid_Approach_for_Diagnosing_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stacking Deep-Learning Model, Stories and Drawing Properties for Automatic Scene Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140197</link>
        <id>10.14569/IJACSA.2023.0140197</id>
        <doi>10.14569/IJACSA.2023.0140197</doi>
        <lastModDate>2023-01-31T12:17:56.9230000+00:00</lastModDate>
        
        <creator>Samir Elloumi</creator>
        
        <creator>Nzamba Bignoumba</creator>
        
        <subject>Text to image conversion; elementary image; image composition; deep-learning; drawing properties</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Text-image mapping is of great interest to the scientific community, especially for educational purposes. It helps young learners, mainly those with learning difficulties, to better understand the content of stories. In this paper, we propose to capture the teacher’s experience in manually building relevant scenes for animal behavior stories. This manual work, which consists of a pair of texts and a set of elementary images, is fed into a Long Short-Term Memory (LSTM) network followed by a Conditional Random Field (CRF) that aims to associate the relevant words in the text with their corresponding elementary image while preserving the drawing properties. This association is then used for scene construction. Several experiments were conducted to show how much better the constructed scenes convey textual information than scenes constructed by competing models.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_97-Stacking_Deep_Learning_Model_Stories_and_Drawing_Properties.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Perceive Realism of Machine Learning-based Drone Dynamic Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140196</link>
        <id>10.14569/IJACSA.2023.0140196</id>
        <doi>10.14569/IJACSA.2023.0140196</doi>
        <lastModDate>2023-01-31T12:17:56.9070000+00:00</lastModDate>
        
        <creator>Damitha Sandaruwan</creator>
        
        <creator>Nihal Kodikara</creator>
        
        <creator>Piyumi Radeeshani</creator>
        
        <creator>K.T.Y. Mahima</creator>
        
        <creator>Chathura Suduwella</creator>
        
        <creator>Sachintha Pitigala</creator>
        
        <creator>Mangalika Jayasundara</creator>
        
        <subject>Drone; simulation; machine learning; drone dynamics; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The drone will be a commonly used technology for a significant portion of society, and simulating a given drone dynamic will be an essential requirement. There are drone dynamic simulation models for popular commercial drones, as well as many Newtonian and fluid dynamics-based generic drone dynamic models. However, these models consist of many model parameters, and it is impracticable to evaluate the model parameters required to simulate a custom-made drone. A simple method to develop a machine learning-based dynamic drone simulation model for custom-made drones mitigates the issues mentioned above. Specifically, the authors’ research covers the development of a machine learning-based drone dynamic model integrated with a virtual reality environment and validation of the user-perceived physical and behavioural realism of the entire solution. A figure-of-eight manoeuvring pattern was used to collect data on drone behaviour and drone pilot inputs. A neural network-based approach was employed to develop the machine learning-based drone dynamic model. Validations were done against real-world drone manoeuvres and user tests. Validation results show that the simulations provided by machine learning are accurate at the beginning, and accuracy decreases over time. However, users also make mistakes/misjudgments while perceiving the real or virtual world. Hence, we explored the user-perceived motion prediction accuracy of the simulation environment, which is associated with its behavioural realism. User tests show that the entire simulation environment maintains substantial physical realism.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_96-User_Perceive_Realism_of_Machine_Learning_based_Drone_Dynamic_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metaphor Recognition Method based on Graph Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140195</link>
        <id>10.14569/IJACSA.2023.0140195</id>
        <doi>10.14569/IJACSA.2023.0140195</doi>
        <lastModDate>2023-01-31T12:17:56.9070000+00:00</lastModDate>
        
        <creator>Zhou Chuwei</creator>
        
        <creator>SHI Yunmei</creator>
        
        <subject>Sentiment analysis; metaphor recognition; graph neural network; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Metaphor is a very common language phenomenon. Human language often uses metaphor to express emotion, and metaphor recognition is an important research topic in the field of NLP. Official documents are a serious style and do not usually use rhetorical sentences. This paper aims to identify rhetorical metaphorical sentences in official documents. The use of metaphor in metaphorical sentences depends on the context. Based on this linguistic feature, this paper proposes a BertGAT model, which uses Bert to extract the semantic features of sentences and transforms the dependency relationships within Chinese sentences into connected graphs. Finally, a graph attention neural network is used to learn semantic features and syntactic structure information to complete sentence metaphor recognition. The proposed model is tested on a constructed domain dataset and a public sentiment dataset. Experimental results show that the proposed method can effectively improve the recognition of metaphorical emotional sentences.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_95-Metaphor_Recognition_Method_based_on_Graph_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Business Intelligence Insights into Aviation Accidents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140194</link>
        <id>10.14569/IJACSA.2023.0140194</id>
        <doi>10.14569/IJACSA.2023.0140194</doi>
        <lastModDate>2023-01-31T12:17:56.8930000+00:00</lastModDate>
        
        <creator>Loe Piin Piin</creator>
        
        <creator>Sarasvathi Nagalingham</creator>
        
        <subject>Aviation; accidents; business intelligence; prediction; dashboard visualization; data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Despite recent tragic losses, flying is often said to be the safest form of transport, and this is at least true in terms of fatalities per distance travelled. The Civil Aviation Authority reports that the death rate per billion kilometers travelled by aircraft is 0.003, much lower than the rates of 0.27 for train travel and 2.57 for vehicle travel. Although safety has been the aviation industry&#39;s top focus for over a century, accidents involving aircraft continue to be a source of horror even in the present day. Hence, the aim of this project is to identify the major causes and reasons that led to accidents in the aviation industry and to carry out research and to design, build and suggest a Business Intelligence (BI) solution to the problem. Throughout the project, problems both elementary and critical will be discovered that need to be corrected or changed in order to prevent major negative events and improve the current situation. Tableau will be the primary BI tool used in this process. Data visualization is the graphic depiction of information and data; data visualization tools offer an easy approach to data analysis, revealing trends, outliers, and patterns in data through visual elements like charts, graphs, and maps. The project also covers the stages from initial analysis through to building and deploying the BI solution to improve safety and prevent further accidents.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_94-Visualization_of_Business_Intelligence_Insights_into_Aviation_Accidents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Business Intelligence Solution for United Airlines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140192</link>
        <id>10.14569/IJACSA.2023.0140192</id>
        <doi>10.14569/IJACSA.2023.0140192</doi>
        <lastModDate>2023-01-31T12:17:56.8770000+00:00</lastModDate>
        
        <creator>Ng Iris</creator>
        
        <creator>Sarasvathi Nagalingham</creator>
        
        <subject>Business intelligence; aviation industry; dashboard visualization; tableau; data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The US airline industry is recognized as the world&#39;s largest, with a massive number of daily departures and a combined fleet of over 2,700 aircraft. It comprises 18 major airlines, categorized as mainline, regional, and freight carriers. United Airlines is one of the largest airlines in the world, after American Airlines and Delta Air Lines. Today, companies receive more feedback from their customers than ever. Customers can share their opinions and emotions through social media platforms such as Twitter. Thus, collecting and understanding customer opinions becomes a key benefit for the aviation industry, yielding actionable insights while increasing competitiveness. Such insights are useful in planning and execution to strengthen relationships with customers. This study was therefore conducted to analyze customer feedback across different airlines to discover actionable insights that increase the competitiveness of United Airlines. The analysis results are visualized on Tableau dashboards, and BI solutions are provided. By implementing the BI solutions, United Airlines can make accurate decisions and define its next strategies by identifying positive and negative references. Thus, United Airlines can improve its service quality, enhance customer loyalty, and boost business profitability.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_92-Implementation_of_Business_Intelligence_Solutions_for_US_Airlines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Predictive Controlled Quasi Z Source Inverter Fed Induction Motor Drive System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140193</link>
        <id>10.14569/IJACSA.2023.0140193</id>
        <doi>10.14569/IJACSA.2023.0140193</doi>
        <lastModDate>2023-01-31T12:17:56.8770000+00:00</lastModDate>
        
        <creator>D. Himabindu</creator>
        
        <creator>G. Sreenivasan</creator>
        
        <creator>R. Kiranmayi</creator>
        
        <subject>QZSIC; TPIML; CLSC; SMC; MPC; IMDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Ongoing advancements in inverters have paved the way for the high-gain quasi Z source inverter circuit (QZSIC). The high-gain QZSIC is placed between a semi converter (SC) and three-phase induction motor loads (TPIML). This paper proposes a suitable controller for the closed-loop controlled QZSIC-TPIML and aims to improve the time response of the QZSIC-fed induction motor system. The objective of this effort is to design a closed-loop controlled QZSI-fed induction motor framework that provides a stable rotor speed. The QZSIC output is switched to three-phase AC, and the inverter output is filtered before being applied to the three-phase induction motor. Closed-loop control of the QZSIC-TPIML using SMC and MPC is simulated, and their responses are compared. The Model Predictive Controller (MPC) is found to maintain a constant speed. The results obtained with the MPC-controlled QZSI-fed induction motor drive system are compared with those of the sliding mode-controlled (SMC) system for a change in input voltage. The proposed MPC-controlled method offers benefits such as fast settling time and low steady-state speed error. PIC16F84-based hardware for a 0.5 HP QZSIC-IMDS is implemented.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_93-Model_Predictive_Controlled_Quasi_Z_Source.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented, Virtual and Mixed Reality Research in Cultural Heritage: A Bibliometric Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140191</link>
        <id>10.14569/IJACSA.2023.0140191</id>
        <doi>10.14569/IJACSA.2023.0140191</doi>
        <lastModDate>2023-01-31T12:17:56.8600000+00:00</lastModDate>
        
        <creator>Nilam Upasani</creator>
        
        <creator>Asmita Manna</creator>
        
        <creator>Manjiri Ranjanikar</creator>
        
        <subject>Augmented reality; bibliometric analysis; cultural heritage; information science; mixed reality; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Heritage allows us to learn about monuments of importance and the traditions inherited from our ancestors. However, monuments often become partly ruined due to natural wear and tear, and sometimes due to attacks by invaders. To preserve cultural heritage virtually, many researchers have used augmented, virtual and mixed reality to bring ancient environments to life at heritage sites. This study aims to identify publications related to virtual, augmented and mixed reality in cultural heritage and to present a bibliometric analysis of these studies. The research articles are retrieved using the Scopus database. The analysis is performed using VOSviewer, covering parameters such as bibliographic coupling of countries, publications, journals, and authors, and co-occurrences of author keywords. The analysis shows that augmented, virtual and mixed reality research in the domain of cultural heritage is mostly concentrated in Italy and surrounding European countries. However, research in this domain is lagging in many countries even though they are home to various heritage sites. This study provides an extensive analysis of the recent literature on augmented, virtual and mixed reality research in cultural heritage. This information science based analysis will help researchers identify the prominent journals in this domain, recognize stalwarts in the field and follow their works, find path-breaking publications to refer to, and predict the direction of future studies.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_91-Augmented_Virtual_and_Mixed_Reality_Research_in_Cultural_Heritage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Business Intelligence Data Visualization for Diabetes Health Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140190</link>
        <id>10.14569/IJACSA.2023.0140190</id>
        <doi>10.14569/IJACSA.2023.0140190</doi>
        <lastModDate>2023-01-31T12:17:56.8430000+00:00</lastModDate>
        
        <creator>Samantha Siow Jia Qi</creator>
        
        <creator>Sarasvathi Nagalingham</creator>
        
        <subject>Diabetes; business intelligence; prediction; dashboard visualization; data analysis; centers for disease control and prevention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In today&#39;s environment, Business Intelligence (BI) is transforming the world at a rapid pace across domains. Business intelligence has been around for a long time, but when combined with modern technology, the results are astounding. BI also plays an important role in the healthcare domain. The Centers for Disease Control and Prevention (CDC) is the largest science-based, data-driven service provider for public health protection in the United States. For over 70 years, the CDC has been using science to fight disease and keep families, businesses, and communities healthy. However, research indicates that the prevalence of diabetes in the US is rising alarmingly. If left untreated, diabetes can lead to life-threatening complications such as heart disease, loss of feeling, blindness, kidney failure, and amputations. This study was therefore conducted to analyze people&#39;s health conditions and daily lifestyles in order to predict which type of diabetes they would most likely be diagnosed with, through the implementation of business intelligence using a Tableau dashboard. Furthermore, background research is conducted on the CDC to understand its work, challenges, and opportunities. By the end of the project, the information obtained and visualized should enhance business choices and support better decisions on controlling diabetes in the future.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_90-Business_Intelligence_Data_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interventional Teleoperation Protocol that Considers Stair Climbing or Descending of Crawler Robots in Low Bit-rate Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140189</link>
        <id>10.14569/IJACSA.2023.0140189</id>
        <doi>10.14569/IJACSA.2023.0140189</doi>
        <lastModDate>2023-01-31T12:17:56.8300000+00:00</lastModDate>
        
        <creator>Tsubasa Sakaki</creator>
        
        <creator>Kei Sawai</creator>
        
        <subject>LoRaWAN; teleoperation; crawler robot; disaster-reduction activity; teleoperation protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In the teleoperation of a crawler robot in a disaster-stricken enclosed space, loss of the robot due to communication breakdown is a problem. We present a robot teleoperation system that uses LoRaWAN as a subcommunication infrastructure to solve this problem. In this system, the crawler robot is teleoperated over the subcommunication infrastructure until it can evacuate to a place where wireless local area network (LAN) communication is possible. In this study, we assume an environment in which the crawler robot must ascend and descend stairs to evacuate to a place with wireless LAN coverage. In addition, the disaster-stricken environment is assumed to be one in which obstacles may suddenly appear, and the crawler robot has difficulty avoiding obstacles on the stairs. In this paper, we propose a teleoperation communication protocol that considers the risk of the sudden appearance of obstacles and confirm its effectiveness through evaluation experiments in a real environment.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_89-Interventional_Teleoperation_Protocol_that_Considers_Stair_Climbing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-Centered Design (UCD) of Time-Critical Weather Alert Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140188</link>
        <id>10.14569/IJACSA.2023.0140188</id>
        <doi>10.14569/IJACSA.2023.0140188</doi>
        <lastModDate>2023-01-31T12:17:56.8300000+00:00</lastModDate>
        
        <creator>Abdulelah M. Ali</creator>
        
        <creator>Abdulrahman Khamaj</creator>
        
        <creator>Ziho Kang</creator>
        
        <creator>Majed Moosa</creator>
        
        <creator>Mohd Mukhtar Alam</creator>
        
        <subject>User-centered design; time-critical weather alert apps; weather forecasts; map settings; message alert</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Weather alert applications can save precious lives in time-critical risk situations; however, even the most widely used applications may fall short in intuitive interface and content design, possibly due to limited user participation in the design process and the narrow range of users considered. The objective of this study was to investigate whether applying UCD principles and usability guidelines can improve the usability of, and satisfaction with, time-critical weather alert apps for both public and expert users. A prototype of a UCD-based weather alert application was developed and evaluated. Initially, thirty-two volunteers participated in identifying the important features that led to the development of the prototype, and the prototype was then tested with another eighty participants (40 young and 40 elderly). The prototype includes five enhancements: auto-suggested location search, an all-inclusive interface for weather forecasts, message alerts, visual and intuitive map settings, and minimalism-oriented alert settings. The enhanced functionality was compared to similar functionality in existing commercial weather applications. Effectiveness (completion rate, error count, error severity, and error cause), efficiency (time to completion), and satisfaction (post-task and post-test surveys) were measured. The results showed that the enhancements significantly improved performance and satisfaction across both age groups compared to equivalent functionality in the existing app. The Mann-Whitney U test showed a statistically significant difference (p&lt;0.001) in task satisfaction and number of errors between the two apps for all tasks. Also, overall, young participants outperformed elderly participants with the existing apps, while both young and elderly participants performed at a very high level with the enhanced app. Therefore, the enhancements implemented through the UCD process and usability guidelines significantly improved performance and satisfaction across both age groups, facilitating the timely action necessary during a crisis.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_88-User_Centered_Design_UCD_of_Time_Critical_Weather_Alert.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2-D Deep Convolutional Neural Network for Predicting the Intensity of Seismic Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140187</link>
        <id>10.14569/IJACSA.2023.0140187</id>
        <doi>10.14569/IJACSA.2023.0140187</doi>
        <lastModDate>2023-01-31T12:17:56.8130000+00:00</lastModDate>
        
        <creator>Assem Turarbek</creator>
        
        <creator>Yeldos Adetbekov</creator>
        
        <creator>Maktagali Bektemesov</creator>
        
        <subject>Earthquake; prediction; deep learning; machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Machine learning has advanced rapidly in the last decade, promising to significantly change and improve the function of big data analysis in a variety of fields. Compared to traditional methods, machine learning provides significant advantages in complex problem solving, computing performance, uncertainty propagation and handling, and decision support. In this paper, we present a novel end-to-end strategy for improving the overall accuracy of earthquake detection by simultaneously improving each step of the detection pipeline. In addition, we propose a Conv2D convolutional neural network (CNN) architecture for processing seismic waveforms collected across a geophysical system. The proposed Conv2D method for earthquake detection was compared to various machine-learning approaches and state-of-the-art methods. All of the methods were trained and tested on real data collected in Kazakhstan from 1906 to 2022. The proposed model outperformed the other models with accuracy, precision, recall, and f-score values of 63%, 82.4%, 62.7%, and 83%, respectively. Based on these results, we conclude that the proposed Conv2D model is useful for predicting real-world earthquakes in seismic zones.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_87-2_D_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Machine Learning-based Model for Automated Crop Type Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140185</link>
        <id>10.14569/IJACSA.2023.0140185</id>
        <doi>10.14569/IJACSA.2023.0140185</doi>
        <lastModDate>2023-01-31T12:17:56.7970000+00:00</lastModDate>
        
        <creator>Asmae DAKIR</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <creator>Omar Bachir ALAMI</creator>
        
        <subject>Smart farming; artificial intelligence; machine learning; precision agriculture; random forest; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In the field of smart farming, automated crop type mapping is a challenging task that is essential for fast, automatic management of the agricultural sector. With the emergence of advanced technologies such as artificial intelligence and geospatial technologies, new concepts have been developed to provide realistic solutions for precision agriculture. The present study aims to present a machine learning-based model for automated crop-type mapping with high accuracy. The proposed model uses both optical and radar satellite images to classify crop types with machine learning algorithms: Random Forest and Support Vector Machine were employed to classify time series of vegetation indices. Several indices extracted from both optical and radar data were calculated. Harmonic modelization was also applied to the optical indices, which were decomposed into harmonic terms to calculate the fitted values of the time series. The proposed model was implemented using the geospatial processing services of Google Earth Engine and tested on a case study with about 147 satellite images. The results show the annual variability of crops and enable classification and crop type mapping with an accuracy that exceeds the performance of other existing models.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_85-Towards_a_Machine_Learning_based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model by Combining Discrete Cosine Transform and Deep Learning for Children Fingerprint Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140186</link>
        <id>10.14569/IJACSA.2023.0140186</id>
        <doi>10.14569/IJACSA.2023.0140186</doi>
        <lastModDate>2023-01-31T12:17:56.7970000+00:00</lastModDate>
        
        <creator>Vaishali Kamble</creator>
        
        <creator>Manisha Dale</creator>
        
        <creator>Vinayak Bairagi</creator>
        
        <subject>Discrete Cosine Transform (DCT); Curve DCT; biometric recognition; machine learning; convolutional neural network; AlexNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Fingerprint biometrics as an identification tool for recognizing children was pioneered in the late 19th century by Sir Francis Galton. However, even today, fingerprint identification for children is not as mature as it is for adults. There is an increasing need for biometric identification of children because more than one million children go missing every year, according to the report of the International Centre for Missing &amp; Exploited Children. This paper presents a robust method of child identification that combines Discrete Cosine Transform (DCT) features and machine learning classifiers with deep learning algorithms. The handcrafted fingerprint features are extracted from the mid- and high-frequency bands of the DCT coefficients. The Gaussian Na&#239;ve Bayes (GNB) classifier is the best fit among the machine learning classifiers for computing the match score between training and testing images. Further, a transfer learning model is used to extract deep features and obtain an identification score. To make the model robust and accurate, score-level fusion of both models is performed. The proposed model is validated on two publicly available children&#39;s fingerprint databases, named CMBD and NITG, and compared with state-of-the-art methods. The rank-1 identification accuracy obtained with the proposed method is 99%, which is remarkable compared to the literature.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_86-A_Hybrid_Model_by_Combining_Discrete_Cosine_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of ICT Continuity Plan (ICTCP) in the Higher Education Institutions (HEI’S): SUC’S Awareness and its Status</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140184</link>
        <id>10.14569/IJACSA.2023.0140184</id>
        <doi>10.14569/IJACSA.2023.0140184</doi>
        <lastModDate>2023-01-31T12:17:56.7830000+00:00</lastModDate>
        
        <creator>Chester L. Cofino</creator>
        
        <creator>Ken M. Balogo</creator>
        
        <creator>Jefrey G. Alegia</creator>
        
        <creator>Michael Marvin P. Cruz</creator>
        
        <creator>Benjamin B. Alejado Jr</creator>
        
        <creator>Felicisimo V. Wenceslao Jr</creator>
        
        <subject>Business Continuity Plan (BCP); Information, Communication, and Technology Continuity Plan (ICTCP); State Universities and Colleges (SUCs); Business Continuity Management (BCM) Framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The purpose of this study was to assess the level of awareness of the management and personnel within academic institutions and to identify the implementation status of the ICTCP in the implementing SUCs. The BCM Framework was utilized in this study as the model for identifying the level of awareness of personnel within the institution about the ICTCP. The research respondents were personnel employed in the different State Universities and Colleges (SUCs) within the province of Negros Occidental. The respondents were selected through random sampling and were provided with a Google Form link to answer the survey questionnaire. A total of thirty-five (35) IT personnel were included in the study&#39;s sample. It was found that most SUCs have consistent ICT system uptime because they can continuously provide services; surprisingly, this is independent of an ICT business continuity plan. Most SUCs do not fully implement their ICT business continuity plans. Lastly, it is recommended that SUCs take ICT business continuity planning seriously and adopt and fully carry it out, as doing so can significantly enhance service delivery.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_84-Implementation_of_ICT_Continuity_Plan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Learning-based New Seed-Expanding Approach using Influential Nodes for Community Detection in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140183</link>
        <id>10.14569/IJACSA.2023.0140183</id>
        <doi>10.14569/IJACSA.2023.0140183</doi>
        <lastModDate>2023-01-31T12:17:56.7670000+00:00</lastModDate>
        
        <creator>Khaoula AIT RAI</creator>
        
        <creator>Mustapha MACHKOUR</creator>
        
        <creator>Jilali ANTARI</creator>
        
        <subject>Complex network; community detection; TOPSIS; seed-centric approach; ground-truth; k-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Several recent studies focus on community structure due to its importance in analyzing and understanding complex networks. Communities are groups of nodes that are densely connected internally and sparsely connected to the rest of the network. Community detection helps us understand the properties of dynamic processes within a network. In this paper, we propose a novel seed-centric approach based on TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) and the k-means algorithm to find communities in a social network. TOPSIS is used to find the seeds within the network by exploiting the benefits of multiple centrality measures. Using a single centrality to determine seeds within a network, as in classical community detection algorithms, fails in most cases to reach the best selection of seeds. Therefore, we treat all centrality metrics as multiple attributes in TOPSIS and rank nodes based on TOPSIS&#39; relative closeness. The top-k nodes extracted by TOPSIS are considered seeds in the proposed approach. Afterwards, we apply the k-means algorithm using these seeds as starting centroids to detect and construct communities within the social network. The proposed approach is tested on the Facebook ego network and validated on the well-known Zachary karate club dataset, which has a ground-truth community structure. Experimental results on the Facebook ego network show that the dynamic k-means provides reasonable communities in terms of the distribution of nodes. These results are confirmed on the Zachary karate club: two communities are detected with higher normalized mutual information (NMI) and Adjusted Rand Index (ARI) than other seed-centric algorithms such as Yasca and LICOD. The proposed method is effective, feasible, and provides better results than other available state-of-the-art community detection algorithms.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_83-Unsupervised_Learning_based_New_Seed_Expanding_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing User Interest in Web API Recommendation using Deep Learning Probabilistic Matrix Factorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140182</link>
        <id>10.14569/IJACSA.2023.0140182</id>
        <doi>10.14569/IJACSA.2023.0140182</doi>
        <lastModDate>2023-01-31T12:17:56.7500000+00:00</lastModDate>
        
        <creator>T. Ramathulasi</creator>
        
        <creator>M. Rajasekhara Babu</creator>
        
        <subject>Implicit feature; API’s recommendation; IoT; collaborative filtering; matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Things connected to the Internet in the Web 2.0 era not only manage the supply of data through devices but also control the commands that flow through them. The communication technology created by the deployed sensors is used by a new computing model so that the collected data appears on the Web for management. In addition to enhancing sensing efficiency through the simple IoT computing process, it is used in many cases, for example, video surveillance and improved, intelligent manufacturing. Every fragment of the system is carefully maintained and supervised in this process by a collection of software components. An important step in this process is to access web APIs from various public platforms in an efficient way. Developers use different APIs to integrate different IoT devices, and the deployment process this requires is laborious. Obtaining well-configured target APIs makes it easy to know where and how to get started with the workflow approach, and rapid industrial development can be achieved through this powerful API approach. However, due to the massive growth in the number of APIs, finding adequately powerful APIs and combining them has become a major challenge. Existing methods consider only the relationships between users and APIs; consequently, they face difficulties in extracting contextual value from API descriptions, and better accuracy could not be obtained. The effect of the user&#39;s temporal behavior on the implicit features derived from the information collected in API contextual descriptions can be captured by the Deep Learning Probabilistic Matrix Factorization (DL-PMF) method, which improves the accuracy of API recommendation by considering the implicit features of the user. In this paper, we use a CNN (Convolutional Neural Network) for web elements such as APIs, and an LSTM (Long Short-Term Memory) network with an attention mechanism to find hidden features that suit the tastes of users. Finally, PMF (Probabilistic Matrix Factorization) is used to evaluate the recommendation results obtained as described above. Experimental results show that the DL-PMF method performs better than the previous PMF, ConvMF, and other methods, thus improving recommendation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_82-Assessing_User_Interest_in_Web_API_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Transformer based Local and Global Feature Learning for Speech Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140181</link>
        <id>10.14569/IJACSA.2023.0140181</id>
        <doi>10.14569/IJACSA.2023.0140181</doi>
        <lastModDate>2023-01-31T12:17:56.7370000+00:00</lastModDate>
        
        <creator>Chaitanya Jannu</creator>
        
        <creator>Sunny Dayal Vanambathina</creator>
        
        <subject>Convolutional neural network; recurrent neural network; speech enhancement; multi-head attention; two-stage convolutional transformer; feed-forward network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Speech enhancement (SE) is an important method for improving speech quality and intelligibility in noisy environments where the received speech is severely distorted by noise. An efficient speech enhancement system relies on accurately modelling the long-term dependencies of noisy speech. Deep learning has greatly benefited from the use of transformers, where long-term dependencies can be modelled more efficiently with multi-head attention (MHA) by exploiting sequence similarity. Transformers frequently outperform recurrent neural network (RNN) and convolutional neural network (CNN) models in many tasks while utilizing parallel processing. In this paper, we propose a two-stage convolutional transformer for speech enhancement in the time domain. The transformer considers global information as well as parallel computing, resulting in a reduction of long-term noise. In the proposed work, unlike the two-stage transformer neural network (TSTNN), different transformer structures are used for the intra- and inter-transformers to extract the local as well as global features of noisy speech. Moreover, a CNN module is added to the transformer so that short-term noise can be reduced more effectively, based on the ability of CNNs to extract local information. The experimental findings demonstrate that the proposed model outperforms existing models in terms of STOI (short-time objective intelligibility) and PESQ (perceptual evaluation of speech quality).</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_81-Convolutional_Transformer_based_Local_and_Global_Feature_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deca Convolutional Layer Neural Network (DCL-NN) Method for Categorizing Concrete Cracks in Heritage Building</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140180</link>
        <id>10.14569/IJACSA.2023.0140180</id>
        <doi>10.14569/IJACSA.2023.0140180</doi>
        <lastModDate>2023-01-31T12:17:56.7200000+00:00</lastModDate>
        
        <creator>Dinar Mutiara Kusumo Nugraheni</creator>
        
        <creator>Andi Kurniawan Nugroho</creator>
        
        <creator>Diah Intan Kusumo Dewi</creator>
        
        <creator>Beta Noranita</creator>
        
        <subject>Cracks; concrete; Deca-CNN; features mapping; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>It is critical to develop a method for detecting cracks in the concrete structures of historic buildings, both to preserve the buildings and to protect visitors from the collapse of a historic structure. The purpose of this research is to determine the best method for identifying cracks in the concrete surfaces of old buildings using crack images of those buildings. The varied surface textures, crack irregularities, and background complexity that distinguish crack detection from other forms of image detection research present challenges in crack detection for old buildings. This study presents a framework for detecting concrete cracks in old buildings in Semarang&#39;s old town using a modified Convolutional Neural Network with a combination of several convolutional layers. This study employs ten convolutional layers (the Deca Convolutional Layer Neural Network, DCL-NN) to provide feature mapping for images of concrete cracks in old buildings in a preservation area. This study also compares commonly used machine learning models such as KNeighbors (n_neighbors=3), Random Forest, Support Vector Machine (SVM), and ExtraTrees (n_estimators=10), as well as pretrained CNN models such as VGG19, Xception, and MobileNet. Six performance indicators are used to validate each model&#39;s performance: accuracy, recall, precision, F1-score, Matthews Correlation Coefficient (MCC), and Cohen&#39;s Kappa (CK). This study&#39;s dataset comprises primary data obtained from cracked and normal images of several buildings in Semarang&#39;s old town. For the crack class, DCL-NN achieves an accuracy of 98.87%, recall of 99.40%, precision of 98.33%, F1-score of 98.86%, MCC of 97.74%, and CK of 98.86%. From this study, it was found that the ten convolutional layers yield higher classification performance than the comparison models, including the machine learning and other CNN models, and are more effective in detecting cracks in concrete structures.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_80-Deca_Convolutional_Layer_Neural_Network_DCL_NN_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Artificial Neural Network Approach in the Extreme Learning Machine Method for Mining Sales Forecasting Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140179</link>
        <id>10.14569/IJACSA.2023.0140179</id>
        <doi>10.14569/IJACSA.2023.0140179</doi>
        <lastModDate>2023-01-31T12:17:56.7200000+00:00</lastModDate>
        
        <creator>Hendra Kurniawan</creator>
        
        <creator>Joko Triloka</creator>
        
        <creator>Yunus Ardhan</creator>
        
        <subject>Artificial neural network; business warehouse; extreme learning machine; mining sales forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Forecasting is an accurate indicator to support management decisions. This study aimed to develop mining sales forecasting for Indonesian consumer goods companies whose business warehouses are engaged in the dynamic movement of large data, using the Artificial Neural Network method. Previously, sales forecasting used a traditional method of inputting data and improvising simple patterns from collected historical sales and remaining stock. In this study, several data variables in business warehouses were employed for sales forecasting. The study also used a qualitative method to investigate the quality of data that cannot be measured quantitatively. The results showed a Mean Square Error of 0.02716 in forecasting sales. The average accuracy generated by the Extreme Learning Machine after nine data tests is 111%. This result shows an opportunity for the company to further analyze the potential growth of sales profit. The predicted value generated by the Extreme Learning Machine for the last three months reaches 132%. The company&#39;s improved decision-making and enlarged potential production line demonstrate the usefulness of this study.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_79-Analysis_of_the_Artificial_Neural_Network_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation Failure Recovery Mechanism using VLAN ID in Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140178</link>
        <id>10.14569/IJACSA.2023.0140178</id>
        <doi>10.14569/IJACSA.2023.0140178</doi>
        <lastModDate>2023-01-31T12:17:56.7030000+00:00</lastModDate>
        
        <creator>Heru Nurwarsito</creator>
        
        <creator>Galih Prasetyo</creator>
        
        <subject>Software-defined networks; openflow; link failure; failure recovery; VLAN ID; fast failover</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Link failure is a common problem in software-defined networks. The most commonly proposed approach to failure recovery is to use pre-configured backup paths in the switch. However, this may increase the number of traffic packets after traffic is rerouted through the backup path. In this research, the proposed method implements a failure recovery mechanism that utilizes the fast failover group feature in OpenFlow to store pre-configured backup paths in the switch. Disrupted traffic packets are labeled with a VLAN ID, which can be used as a matching field. Owing to this capability, the VLAN ID can aggregate traffic packets into one table entry as a match field in the forwarding rules. Implementation and evaluation show that the system can build a backup path in the switch and reroute the disrupted traffic to the backup path. Based on the parameters used, the results show that the proposed approach achieves a recovery time of around 1.02-1.26 ms. Additionally, it reduces the number of traffic packets and has lower packet loss compared to previous methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_78-Implementation_Failure_Recovery_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expanding Louvain Algorithm for Clustering Relationship Formation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140177</link>
        <id>10.14569/IJACSA.2023.0140177</id>
        <doi>10.14569/IJACSA.2023.0140177</doi>
        <lastModDate>2023-01-31T12:17:56.6900000+00:00</lastModDate>
        
        <creator>Murniyati</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Setia Wirawan</creator>
        
        <creator>Tristyanti Yusnitasari</creator>
        
        <creator>Dyah Anggraini</creator>
        
        <subject>Community detection; Louvain algorithm; modularity; network clustering relationship</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Community detection is a method to determine and discover clusters or groups that share the same interests, hobbies, purposes, projects, lifestyles, locations, or professions. Several community detection algorithms have been developed, such as the strongly connected components algorithm, weakly connected components, label propagation, triangle count and average clustering coefficient, spectral optimization, and the Newman and Louvain modularity algorithms. The Louvain method is the most efficient algorithm for detecting communities in large-scale networks. The Louvain algorithm is expanded by forming a community based on connections between nodes (users), developed by adding weights to nodes to form clusters, referred to as clustering relationships. The next step is to perform weighting based on user relationships using a weighting algorithm that considers user account activity, such as exchanging recommendation comments, or deciding whether a relationship between a follower and a followed account exists. The results of this study are a best modularity value of 0.879 and a cluster test value of 0.776.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_77-Expanding_Louvain_Algorithm_for_Clustering_Relationship.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Artificial Neural Network Towards the Number of Particles of Rao-Blackwellized Particle Filter using Laser Distance Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140176</link>
        <id>10.14569/IJACSA.2023.0140176</id>
        <doi>10.14569/IJACSA.2023.0140176</doi>
        <lastModDate>2023-01-31T12:17:56.6730000+00:00</lastModDate>
        
        <creator>Amirul Jamaludin</creator>
        
        <creator>Norhidayah Mohamad Yatim</creator>
        
        <creator>Zarina Mohd Noh</creator>
        
        <subject>SLAM; occupancy grid map; Rao-Blackwellized particle filter; artificial neural network; laser distance sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The Rao-Blackwellized particle filter (RBPF) algorithm aims to solve the Simultaneous Localization and Mapping (SLAM) problem. The performance of RBPF depends on the number of particles: the higher the number of particles, the better the performance. However, a higher number of particles requires more memory and computational cost. The number of particles can be reduced by using a high-end sensor, which allows high RBPF performance with fewer particles, but such sensors make the robot expensive to build. A robot can instead be equipped with a low-cost sensor to reduce its overall cost. However, low-cost sensors make it challenging to create accurate maps due to the low accuracy of their measurements. For that reason, RBPF is integrated with an artificial neural network (ANN) to interpret noisy sensor measurements and achieve better SLAM accuracy. In this paper, RBPF integrated with ANN is tested using a Turtlebot3 in a real-world experiment. The experiment is evaluated by comparing the resulting maps estimated by RBPF with ANN and RBPF without ANN. The results show that RBPF with ANN increased SLAM performance by 25.17% and achieved 10 out of 10 trials of closed-loop maps using only 30 particles, whereas RBPF without ANN needed 400 particles to achieve a closed-loop map. In conclusion, SLAM performance can be improved by integrating the RBPF algorithm with an ANN, which also reduces the number of particles required.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_76-The_Effect_of_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Government Usability Evaluation: A Comparison between Algeria and the UK</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140175</link>
        <id>10.14569/IJACSA.2023.0140175</id>
        <doi>10.14569/IJACSA.2023.0140175</doi>
        <lastModDate>2023-01-31T12:17:56.6730000+00:00</lastModDate>
        
        <creator>Mohamed Benaida</creator>
        
        <subject>Human computer interaction; usability evaluation; web design; e-Government; user satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>e-Government holds the keys to improving government services provided to citizens and the private sector. Although Algeria is the largest country in Africa and has one of the most thriving economies on the continent, its EGDI ranking was a remarkable 120th according to the latest UN e-government survey. This inspired the researcher to investigate the relationship between the success factors of e-services in developed countries and their counterparts in developing countries. The main aim of this study is to explore the factors that influence the usability of e-government services in developing and developed countries against a set of specific guidelines, in order to provide means for improving these services in developing countries. The researcher selectively extracted three guideline categories from the Research-Based Web Design and Usability Guidelines as a means for expert evaluation of 10 Algerian e-government services compared to British e-government services. Our results show that Algerian e-services are most lacking in Use Frames when Functions Must Remain Accessible, Highlighting Information, and Graphics Should Not Look like Banner Ads (belonging to Page Layout, Text Appearance, and Graphics, Images &amp; Multimedia respectively), whereas UK e-services scored highly across all three categories. These findings complement the UN e-government survey and identify the sub-categories that developing countries need to pay more attention to in order to provide more reliable and robust e-services to their users and citizens. Furthermore, this study proposes that the Research-Based Web Design &amp; Usability Guidelines can be converted into an evaluation tool that evaluators can use to easily assess the usability of a website. The combination of relative importance, chapters of the guidelines, and their respective guidelines gathered from the Research-Based Web Design &amp; Usability Guidelines, along with the evaluation of these individual guidelines by evaluators, will serve as an integral tool for developers when developing e-government services that satisfy users.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_75-e_Government_Usability_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Stock-News Sentiments and Economic Aspects using BERT Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140174</link>
        <id>10.14569/IJACSA.2023.0140174</id>
        <doi>10.14569/IJACSA.2023.0140174</doi>
        <lastModDate>2023-01-31T12:17:56.6570000+00:00</lastModDate>
        
        <creator>Eman Alasmari</creator>
        
        <creator>Mohamed Hamdy</creator>
        
        <creator>Khaled H. Alyoubi</creator>
        
        <creator>Fahd Saleh Alotaibi</creator>
        
        <subject>Machine learning; deep learning; classification; prediction; statements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Stock-market news sentiment analysis (SA) aims to identify the attitudes of stock news on official platforms toward companies’ stocks. It supports sound investment decisions and analysts’ evaluations. However, research on Arabic SA is limited compared to English SA due to the complexity and limited corpora of the Arabic language. This paper develops a sentiment model to predict the polarity of Arabic stock news in microblogs based on Machine Learning and Deep Learning approaches. It also aims to extract the reasons that lead to polarity categorization as the main economic causes or aspects based on semantic unity. Therefore, this paper presents an Arabic SA approach based on a logistic regression model and the Bidirectional Encoder Representations from Transformers (BERT) model. The proposed model is used to classify articles as positive, negative, or neutral. It was trained on data collected from an official Saudi stock-market article platform, which was later preprocessed and labeled. Moreover, the economic reasons for the articles, based on semantic units divided into seven economic aspects to highlight the polarity, were investigated. The supervised BERT model obtained 88% article classification accuracy based on SA, and the unsupervised mean Word2Vec encoder obtained 80% economic-aspect clustering accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_74-Arabic_Stock_News_Sentiments_and_Economic_Aspects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Descriptive Analytics and Interactive Visualizations for Performance Monitoring of Extension Services Programs, Projects, and Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140173</link>
        <id>10.14569/IJACSA.2023.0140173</id>
        <doi>10.14569/IJACSA.2023.0140173</doi>
        <lastModDate>2023-01-31T12:17:56.6430000+00:00</lastModDate>
        
        <creator>Noelyn M. De Jesus</creator>
        
        <creator>Lorissa Joana E. Buenas</creator>
        
        <subject>Business analytics; descriptive analytics; dashboards; interactive visualizations; extension services; monitoring; key performance indicators; KPIs; community</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Providing universities with high-technology automation tools to support administrative decision-making processes will enable them to achieve their objectives. For an institution to succeed in its everyday tasks, it should adopt emerging and modernized management services built, among others, on cloud, mobile, and business analytics technology, thereby ensuring the efficiency of its operations and management. This study aims to develop a system with descriptive analytics, named MET Online Services, that automates and optimizes the monitoring of extension services key performance indicators (KPIs) in order to help the institution make better, data-driven decisions. The dashboards and interactive visualizations of the developed system provide quick access to the real-time progress of extension services programs, projects, and activities. The results indicate that the developed system is feasible for implementation, fully functional, and compliant with the quality software standards of a Certified Software Quality Assurance Specialist. As the developed system satisfied the users’ expectations and requirements, it would be an effective tool for the institution, the extension services unit, and the community to make better strategic decisions and continuously deliver quality services to the community.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_73-Descriptive_Analytics_and_Interactive_Visualizations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Upgraded Very Fast Decision Tree: Energy Conservative Algorithm for Data Stream Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140171</link>
        <id>10.14569/IJACSA.2023.0140171</id>
        <doi>10.14569/IJACSA.2023.0140171</doi>
        <lastModDate>2023-01-31T12:17:56.6270000+00:00</lastModDate>
        
        <creator>Mai Lefa</creator>
        
        <creator>Hatem Abd-Elkader</creator>
        
        <creator>Rashed Salem</creator>
        
        <subject>Classification; energy consumption; Hoeffding bound; Information gain; massive online analysis; stream data; very fast decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Traditional machine learning (ML) techniques model knowledge using static datasets. With the increased use of the Internet in today&#39;s digital world, a massive amount of data is generated at an accelerated rate and must be handled as soon as it arrives, because it is continuous and cannot be kept for a long period of time. Various methods exist for mining data from streams. When developing such methods, the machine learning community put accuracy and execution time first, while numerous studies take energy consumption into consideration when evaluating data mining methods. This work concentrates on the Very Fast Decision Tree, the most often used technique in data stream classification, despite the fact that it wastes a huge amount of energy on trivial calculations. The research presents a mechanism for improving the algorithm&#39;s energy usage and restricting its computational resources without compromising its efficiency. The mechanism has two stages: the first eliminates a set of bad features that increase computational complexity and waste energy, and the second groups the good features into a candidate set that is used instead of all attributes in the next iteration. Experiments were conducted on real-world benchmark and synthetic datasets to compare the proposed method to state-of-the-art algorithms from previous works. The proposed algorithm works considerably better and faster with less energy while maintaining accuracy.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_71-Upgraded_Very_Fast_Decision_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Model for Diabetes Mellitus Diagnosis based on K-Means Clustering Algorithm Optimized with Bat Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140172</link>
        <id>10.14569/IJACSA.2023.0140172</id>
        <doi>10.14569/IJACSA.2023.0140172</doi>
        <lastModDate>2023-01-31T12:17:56.6270000+00:00</lastModDate>
        
        <creator>Syaiful Anam</creator>
        
        <creator>Zuraidah Fitriah</creator>
        
        <creator>Noor Hidayat</creator>
        
        <creator>Mochamad Hakim Akbar Assidiq Maulana</creator>
        
        <subject>Diabetes mellitus; disease diagnosis methods; k-means clustering algorithm; optimization; bat algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Diabetes mellitus is a disease characterized by abnormal glucose homeostasis resulting in increased blood sugar. According to data from the International Diabetes Federation (IDF), Indonesia ranks 7th out of the 10 countries with the highest number of diabetes mellitus patients in the world. The prevalence of diabetes mellitus in Indonesia reached 11.3 percent, or 10.7 million sufferers, in 2019. Prevention, risk analysis, and early diagnosis of diabetes mellitus are necessary to reduce its impact and complications. Clustering is one of the methods that can be used to diagnose and analyze the risk of diabetes mellitus. The K-means clustering algorithm is the most commonly used clustering algorithm because it is easy to implement and run, computationally fast, and easy to adapt. However, this method often gets stuck in local optima. This problem can be solved by combining the K-means clustering algorithm with a global optimization algorithm, which can find the global optimum among many local optima, does not require derivatives, is robust, and is easy to implement. The Bat Algorithm (BA) is a global optimization method in the swarm intelligence class. BA automatically zooms in on a promising solution, accompanied by a shift from exploration mode to intensive local exploitation. Against this background, this article proposes a classification model for diagnosing diabetes mellitus based on the K-means clustering algorithm optimized with BA. The experimental results show that K-means clustering optimized by BA performs better than plain K-means clustering on all evaluation metrics, but its computational time is higher.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_72-Classification_Model_for_Diabetes_Mellitus_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Models for the Detection of Monkeypox Skin Lesion on Digital Skin Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140170</link>
        <id>10.14569/IJACSA.2023.0140170</id>
        <doi>10.14569/IJACSA.2023.0140170</doi>
        <lastModDate>2023-01-31T12:17:56.6100000+00:00</lastModDate>
        
        <creator>Othman A. Alrusaini</creator>
        
        <subject>Monkeypox; digital skin images; artificial intelligence; deep learning; convoluted neural networks; VGG-16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This study investigates the accuracy of deep learning models in the detection of Monkeypox. The disease is relatively new and difficult for physicians to detect. Skin image data were obtained from Google via web scraping with Python’s BeautifulSoup, SERP API, and requests libraries. The images were scrutinized by professional physicians to determine their validity and classification. The researcher extracted the images’ features using two CNN models, GoogLeNet and ResNet50. Feature selection involved conducting principal component analysis. Classification employed Support Vector Machines, ResNet50, VGG-16, SqueezeNet, and InceptionV3 models. The results showed that all the models performed similarly; however, the most effective model was VGG-16 (accuracy = 0.96, F1-score = 0.92). This affirms the usefulness of artificial intelligence in detecting Monkeypox. Subject to the approval of national health authorities, the technology can be used to help detect the disease faster and more conveniently. If integrated into a mobile application, it can enable members of the public to self-diagnose before seeking official diagnoses from approved hospitals. The researcher recommends further research into the models and building bigger image databases that will power more reliable analyses.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_70-Deep_Learning_Models_for_the_Detection_of_Monkeypox_Skin_Lesion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Detecting Fungal Diseases in Cotton Cultivation using Segmentation and Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140169</link>
        <id>10.14569/IJACSA.2023.0140169</id>
        <doi>10.14569/IJACSA.2023.0140169</doi>
        <lastModDate>2023-01-31T12:17:56.5930000+00:00</lastModDate>
        
        <creator>Odukoya O. H</creator>
        
        <creator>Aina S</creator>
        
        <creator>D&#233;gb&#233;ss&#233; F. W</creator>
        
        <subject>Fungal diseases; watershed segmentation; SVM; K-means; Edge Detection algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This research details a model for detecting fungal diseases using techniques for processing images of cotton leaves. The work involved developing a model based on a set of preprocessed data, formulating the model, and simulating and evaluating it, with the goal of detecting fungal diseases in cotton cultivation. The image data were collected from an online data repository consisting of images of cotton leaves infected with fungal diseases and images of normal leaves. In addition, further images of infected and uninfected cotton leaves were collected in cotton production fields in the S&#233;gbana region of the Benin Republic. The model was formulated using a watershed segmentation technique with an Edge Detection algorithm and K-Means clustering, and a Support Vector Machine (SVM) for classification. The simulation was done in MATLAB with Image Processing Toolbox 9.4. The results gave an accuracy of 99.05%, specificity of 90%, misclassification rate of 0.95%, recall of 99.5%, and precision of 99.5%. In addition, the best results were obtained with little computational effort and in less than a minute, showing the efficiency of the image processing technique for detecting and classifying infected and uninfected leaves. It was concluded that this approach can be applied to detect fungal diseases on cotton leaves, promoting the production and harvest of good-quality cotton and valuable cotton products.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_69-A_Model_for_Detecting_Fungal_Diseases_in_Cotton_Cultivation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Medical Slide Images Processing using Depth Learning in Histopathological Studies of Cerebellar Cortex Tissue</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140167</link>
        <id>10.14569/IJACSA.2023.0140167</id>
        <doi>10.14569/IJACSA.2023.0140167</doi>
        <lastModDate>2023-01-31T12:17:56.5800000+00:00</lastModDate>
        
        <creator>Xiang-yu Zhang</creator>
        
        <creator>Xiao-wen Shi</creator>
        
        <creator>Xing-bo Zhang</creator>
        
        <subject>Image processing; fragmentation of images; machine learning; image classification; stereological; histopathology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Today, with the advancement of science and technology, artificial intelligence evolves and grows alongside human beings. Clinical specialists rely only on their knowledge and experience, as well as the results of complex and time-consuming clinical trials, despite the inevitable human errors of diagnostic work. For malignant and dangerous diseases, machine learning techniques have shown the ability and capacity to help correctly diagnose diseases, reduce human error, improve diagnosis, and allow treatment to start as soon as possible. Image processing and artificial intelligence are widely used in medicine, including in stereology and histopathology. Essential activities for diagnosing disease using artificial intelligence and machine learning include the segmentation and classification of medical images obtained from medical devices. In this article, we work on classifying medical histopathological images of brain tissue. Because sampling was performed with standard equipment, the images are not of good quality, and an attempt is made to improve their quality through preprocessing. All images are segmented using the U-NET algorithm. To improve classification performance, the segmented images, rather than the raw images, are classified into two classes, normal and abnormal. The dataset used in this study contains only a small number of images, while the convolutional neural network algorithm used for feature extraction and classification requires more; therefore, a data augmentation technique is used to overcome this problem. Finally, the convolutional neural network is used to extract features from the images and classify the segmented images. Experimental results show that the proposed method performs better than other existing methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_67-Analysis_of_Medical_Slide_Images_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Cloud-powered Hybrid Learning Process to Enhance Digital Natives’ Analytical Reading Skills</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140168</link>
        <id>10.14569/IJACSA.2023.0140168</id>
        <doi>10.14569/IJACSA.2023.0140168</doi>
        <lastModDate>2023-01-31T12:17:56.5800000+00:00</lastModDate>
        
        <creator>Sakolwan Napaporn</creator>
        
        <creator>Sorakrich Maneewan</creator>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Vitsanu Nittayathammakul</creator>
        
        <subject>Hybrid learning; cloud-powered learning tools; learning process; analytical reading skills; digital natives</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Analytical reading is a necessary cognitive skill for advancing to other skills required in the digital age. Thailand is focused on instructional development and the use of digital media to enhance digital natives&#39; analytical reading skills, which will assist learners of all ages in adapting effectively and quickly to changes in the digital environment. Since the COVID-19 pandemic, educational institutions in Thailand have embraced a hybrid learning approach like never before. The limitations of existing learning processes for boosting digital natives’ analytical reading skills lie in the lack of integration between reading techniques, hybrid pedagogies, and emerging learning technologies to enhance learners&#39; seamless learning experiences. Thus, this study aims to propose the Cloud-powered Hybrid Learning process (Cp-HL process) to enhance digital natives’ analytical reading skills. The research methodology consisted of two main stages: 1) learning process development; and 2) learning process evaluation. The developed Cp-HL process had four main learning phases: (1) preparation for hybrid learning; (2) presentation for interactive learning; (3) practice with analytical reading; and (4) progress reports on analytical reading skills. All the experts agreed that the newly developed Cp-HL process performed extremely well in terms of overall suitability.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_68-The_Cloud_powered_Hybrid_Learning_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Cryptography Experiment using Optical Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140166</link>
        <id>10.14569/IJACSA.2023.0140166</id>
        <doi>10.14569/IJACSA.2023.0140166</doi>
        <lastModDate>2023-01-31T12:17:56.5630000+00:00</lastModDate>
        
        <creator>Nur Shahirah Binti Azahari</creator>
        
        <creator>Nur Ziadah Binti Harun</creator>
        
        <subject>Half-wave plate; polarizer; photon beam splitter; Stokes vector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The study of quantum cryptography is one of great interest. A straightforward and reliable quantum experiment is presented in this paper. A half-wave plate in linearly polarized light makes up a simplified polarization rotator. When the half-wave plate is rotated, the polarization rotates by twice the angle between the plate&#39;s fast axis and the polarization plane. Here, a message-sharing experiment is conducted to demonstrate quantum communication between parties. The unitary transformation is performed step by step using half-wave plates represented by the Mueller matrix. A simulation created using Python programming has been used to test the proposed protocol&#39;s implementation. Python was chosen because it can mathematically imitate the quantum state of superposition.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_66-Quantum_Cryptography_Experiment_using_Optical_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid DL Model for Printed Arabic Word Recognition based on GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140165</link>
        <id>10.14569/IJACSA.2023.0140165</id>
        <doi>10.14569/IJACSA.2023.0140165</doi>
        <lastModDate>2023-01-31T12:17:56.5470000+00:00</lastModDate>
        
        <creator>Yazan M. Alwaqfi</creator>
        
        <creator>Mumtazimah Mohamad</creator>
        
        <creator>Ahmad T. Al-Taani</creator>
        
        <creator>Nazirah Abd Hamid</creator>
        
        <subject>Deep learning; convolutional neural network; generative adversarial network; Arabic recognition; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The recognition of printed Arabic words remains an open area for research since Arabic is among the most complex languages. Prior research has shown that few efforts have been made to develop accurate Arabic recognition models, as most of these models have faced increasing performance complexity and a lack of benchmark Arabic datasets. Meanwhile, deep learning models, such as Convolutional Neural Networks (CNNs), have been shown to be beneficial in reducing the error rate and enhancing accuracy in Arabic character recognition systems. The reliability of these models increases with the depth of layers, but the essential condition for more layers is an extensive amount of data. Since a CNN generates features by analysing large amounts of data, its performance is directly proportional to the volume of data, as DL models are considered data-hungry algorithms. Nevertheless, this technique suffers from poor generalisation ability and overfitting issues, which affect the accuracy of Arabic recognition models. These issues are due to the limited availability of Arabic databases in terms of accessibility and size, which is a central problem facing the Arabic language nowadays. Therefore, Arabic character recognition models still have gaps that need to be bridged, and deep learning techniques need to be improved to increase accuracy by handling the lack of datasets and the generalisation ability of the neural network in model building. To solve these problems, this study proposes a hybrid model for Arabic word recognition by adapting a deep convolutional neural network (DCNN) to work as a classifier, with a generative adversarial network (GAN) working as a data augmentation technique, to develop a robust hybrid model with improved accuracy and generalisation ability. Each proposed model is separately evaluated and compared with other state-of-the-art models on the Arabic printed text image dataset (APTI). The proposed hybrid deep learning model shows excellent performance regarding accuracy, with a score of 99.76% compared to 94.81% for the proposed DCNN model on the APTI dataset. The proposed model indicates highly competitive performance and enhanced accuracy compared to the existing state-of-the-art Arabic printed word recognition models. The results demonstrate that the generalisation of networks and the handling of overfitting have also improved. This study&#39;s output is comparable to other competitive models and contributes an enhanced Arabic recognition model to the body of knowledge.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_65-A_Novel_Hybrid_DL_Model_for_Printed_Arabic_Word.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Segmentation of Intestinal Polyps using Attention Mechanism based on Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140164</link>
        <id>10.14569/IJACSA.2023.0140164</id>
        <doi>10.14569/IJACSA.2023.0140164</doi>
        <lastModDate>2023-01-31T12:17:56.5330000+00:00</lastModDate>
        
        <creator>Xinyi Zheng</creator>
        
        <creator>Wanru Gong</creator>
        
        <creator>Ruijia Yang</creator>
        
        <creator>Guoyu Zuo</creator>
        
        <subject>Image segmentation; intestinal polyps; block convolutional attention mechanism; HarDNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The intestinal polyp is one of the common intestinal diseases, characterized by protruding lining tissue of the colon or rectum. Considering that polyps may become cancerous, they should be removed by surgery as soon as possible. In the past, identifying and diagnosing intestinal polyps took a great deal of manpower and time, which greatly affected the treatment efficiency of medical staff. Because the polyp region looks similar to the normal structure of the human body, the probability of human-eye misjudgment is high. Therefore, it is necessary to use advanced computer technology to segment intestinal polyp images. This paper proposes an image segmentation method based on a convolutional neural network. The HarDNet backbone network is used as the encoder in the model, and its feature processing results are converted into three feature images of different sizes, which are input to the decoding module. In the decoding process, each output first passes through a receptive field expansion module and is then fused with the feature image processed by the attention mechanism. The fusion results are input to the density aggregation module for processing to improve the operation efficiency and accuracy of the model. The experimental results show that, compared with the previous PraNet and HarDNet-MSEG models, the accuracy and precision of this method are greatly improved, and it can be applied to the actual medical image recognition process, thus improving the treatment efficiency of patients.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_64-Image_Segmentation_of_Intestinal_Polyps_using_Attention_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of the Kernels of Support Vector Machine Algorithm for Diabetes Mellitus Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140163</link>
        <id>10.14569/IJACSA.2023.0140163</id>
        <doi>10.14569/IJACSA.2023.0140163</doi>
        <lastModDate>2023-01-31T12:17:56.5330000+00:00</lastModDate>
        
        <creator>Dimas Aryo Anggoro</creator>
        
        <creator>Dian Permatasari</creator>
        
        <subject>Diabetes mellitus; kernel; normalization; oversampling; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Diabetes Mellitus is a disease in which the body cannot use insulin properly, making it one of the major health problems in various countries. Diabetes Mellitus can be fatal, can cause other diseases, and can even lead to death. It is therefore important to have prediction activities to detect the disease. The SVM algorithm is used for classifying Diabetes Mellitus. The purpose of this study was to compare the accuracy, precision, recall, and F1-score values of the SVM algorithm with various kernels and data preprocessing steps. The data preprocessing included data splitting, data normalization, and data oversampling. This research has the benefit of addressing health problems based on the percentage of Diabetes Mellitus and can be used as a source of accurate information. The results show that the highest accuracy (80%) and the highest precision (65%) were obtained with the polynomial kernel, while the highest recall (79%) and the highest F1-score (70%) were obtained with the RBF kernel.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_63-Performance_Comparison_of_the_Kernels_of_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Sentiment Analysis in Hotel Reviews Through Natural Language Processing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140162</link>
        <id>10.14569/IJACSA.2023.0140162</id>
        <doi>10.14569/IJACSA.2023.0140162</doi>
        <lastModDate>2023-01-31T12:17:56.5170000+00:00</lastModDate>
        
        <creator>Soumaya Ounacer</creator>
        
        <creator>Driss Mhamdi</creator>
        
        <creator>Soufiane Ardchir</creator>
        
        <creator>Abderrahmane Daif</creator>
        
        <creator>Mohamed Azzouazi</creator>
        
        <subject>Topic modeling; aspect-based sentiments analysis; aspect extraction; sentiment classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Customer reviews of products and services play a key role in customers&#39; decisions to buy a product or use a service. Customers&#39; preferences and choices are influenced by the opinions of others online, on blogs or social networks. New customers are faced with many views on the web, making it difficult to reach the right decision. Hence the need for sentiment analysis, which clarifies whether opinions are positive, negative, or neutral. This paper suggests using the Aspect-Based Sentiment Analysis approach on reviews extracted from tourism websites such as TripAdvisor and Booking. This approach is based on two main steps, namely aspect extraction and sentiment classification for each aspect. For aspect extraction, an approach based on topic modeling is proposed using the semi-supervised CorEx (Correlation Explanation) method for labeling word sequences into entities. As for sentiment classification, various supervised machine learning techniques are used to associate a sentiment (positive, negative, or neutral) with a given aspect expression. Experiments on opinion corpora have shown very encouraging performance.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_62-Customer_Sentiment_Analysis_in_Hotel_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigate Volumetric DDoS Attack using Machine Learning Algorithm in SDN based IoT Network Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140161</link>
        <id>10.14569/IJACSA.2023.0140161</id>
        <doi>10.14569/IJACSA.2023.0140161</doi>
        <lastModDate>2023-01-31T12:17:56.5000000+00:00</lastModDate>
        
        <creator>Kumar J</creator>
        
        <creator>Arul Leena Rose P J</creator>
        
        <subject>DDoS; SDN; IoT; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Software-Defined Networking (SDN) is a recent trend that is combined with the Internet of Things (IoT) in wireless network applications. SDN focuses entirely on upper-level network management, while IoT enables monitoring the physical activity of the real-time environment via internet network connectivity. IoT clusters with SDN often face network security challenges such as Distributed Denial of Service (DDoS) attacks. The mitigation of network management issues is carried out by frequent software updates of the SDN. On the other hand, security enhancement is needed to alleviate security attacks in the network. With such motivation, this research uses a machine learning-based intrusion detection system to mitigate DDoS attacks in the SDN-IoT network. The control layer in the SDN is responsible for the prevention of attacks in the IoT network using a strong Intrusion Detection System (IDS) framework. The IDS enables higher-level resistance to DDoS attacks, as the framework involves a feature selection-based classification model. A simulation is conducted to test the efficacy of the model against various levels of DDoS attacks. The simulation results show that the proposed method achieves better classification of attacks in the network than other methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_61-Mitigate_Volumetric_DDoS_Attack_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Analysis of Urban Water Infrastructure Systems in Cauayan City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140160</link>
        <id>10.14569/IJACSA.2023.0140160</id>
        <doi>10.14569/IJACSA.2023.0140160</doi>
        <lastModDate>2023-01-31T12:17:56.4870000+00:00</lastModDate>
        
        <creator>Rafael J. Padre</creator>
        
        <creator>Melanie A. Baguio</creator>
        
        <creator>Edward B. Panganiban</creator>
        
        <creator>Rudy U. Panganiban</creator>
        
        <creator>Carluz R. Bautista</creator>
        
        <creator>Justine Ryan L. Rigates</creator>
        
        <creator>Allisandra Pauline Mariano</creator>
        
        <subject>Water infrastructure; risk analysis; geographic information systems; decision support systems; storm water</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The City of Cauayan, Isabela is known as one of the first smart cities and leading agro-industrial centers in the Philippines. Since the center of the economy is in urban areas like Cauayan City, people and businesses tend to converge where development and activity take place; a risk analysis was therefore done to analyze hazards for urban water infrastructures in the City of Cauayan. This paper includes an inventory of the existing urban water infrastructure. With the aid of Geographic Information System software and the gathered data, maps were generated for flood hazards with 5-, 25-, and 100-year return periods, as well as liquefaction, ground shaking, and drought, for urban water infrastructures. These maps were generated to help the people of Cauayan City, Isabela. The main goal of the paper is to assess the potential hazard-prone areas where water infrastructures are located and to identify areas that are suitable for building such water infrastructures. Problems encountered by the people in utilizing urban water infrastructure can be minimized by proper installation of water infrastructures in suitable places, which can help the people of the city in water utilization. Since storm water can cause wide flooding in low-elevation areas, an urban water infrastructure with decision support system intervention can help the city utilize storm water, address such problems, and cope in times of water scarcity. In addition, the analysis can be used by the local government of the city for proper planning and to project the extent of the hazards.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_60-Risk_Analysis_of_Urban_Water_Infrastructure_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Water Tank Wudhu and Monitoring System Design using Arduino and Telegram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140159</link>
        <id>10.14569/IJACSA.2023.0140159</id>
        <doi>10.14569/IJACSA.2023.0140159</doi>
        <lastModDate>2023-01-31T12:17:56.4870000+00:00</lastModDate>
        
        <creator>Ritzkal </creator>
        
        <creator>Yuggo Afrianto</creator>
        
        <creator>Indra Riawan</creator>
        
        <creator>Fitrah Satrya Fajar Kusumah</creator>
        
        <creator>Dwi Remawati</creator>
        
        <subject>Arduino; solenoid valves; ultrasonic sensor; pump water</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Manual water faucets, which are commonly used in mosques and homes, cannot control water use, resulting in a variety of issues, including water waste when the user forgets to close the faucet and water continuously comes out. In addition, filling the water tank is also an important factor in saving water: the water reserve in the tank must be properly controlled so that its availability is maintained. Based on these problems, a water faucet system was made for ablution, with monitoring of the water tank, using Arduino and Telegram. The automatic ablution water faucet system can drain water automatically, with an ultrasonic sensor as a body movement reader and a solenoid valve as a substitute for a faucet. The water pump can help fill the water tank automatically, and the amount of water in the tank is measured using an ultrasonic sensor, with a liquid crystal display and Telegram as recipients of text messages reporting the condition of the water faucets, water pumps, and water levels.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_59-Water_Tank_Wudhu_and_Monitoring_System_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proof-of-Work for Merkle based Access Tree in Patient Centric Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140158</link>
        <id>10.14569/IJACSA.2023.0140158</id>
        <doi>10.14569/IJACSA.2023.0140158</doi>
        <lastModDate>2023-01-31T12:17:56.4700000+00:00</lastModDate>
        
        <creator>B Ravinder Reddy</creator>
        
        <creator>T Adilakshmi</creator>
        
        <subject>Merkle tree; hashing; CP-ABE; access policy; PCD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>With the advent of wearable devices and smart health care, wearable health care technology for obtaining Patient Centric Data (PCD) has gained popularity in recent years. To establish access control over encrypted data in health records, Ciphertext Policy-Attribute Based Encryption (CP-ABE) is used. The most critical element is granting secure access to the generated information. However, with the growing complexity of the access policy, the computational overhead of the encryption and decryption process also increases. As a result, ensuring data access control as well as efficiency in PCD collected by wearables is crucial and challenging. This paper proposes and demonstrates a proof-of-work for the Merkle-based access tree using the notion of hiding the sensitive access policy attributes.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_58-Proof_of_Work_for_Merkle_based_Access_Tree_in_Patient_Centric_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Cloudlet Computation Optimization in the Mobile Edge Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140157</link>
        <id>10.14569/IJACSA.2023.0140157</id>
        <doi>10.14569/IJACSA.2023.0140157</doi>
        <lastModDate>2023-01-31T12:17:56.4530000+00:00</lastModDate>
        
        <creator>Layth Muwafaq</creator>
        
        <creator>Nor K. Noordin</creator>
        
        <creator>Mohamed Othman</creator>
        
        <creator>Alyani Ismail</creator>
        
        <creator>Fazirulhisyam Hashim</creator>
        
        <subject>Mobile edge computing; cloudlet deployment; task offloading; mobile device; multi-objective optimization; meta-heuristics; variable-length</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Mobile Edge Computing (MEC) is used to perform computation operations at the edge of a network for mobile devices. This allows the deployment of more powerful and efficient computing resources in a cost-effective, lightweight, and scalable manner. MEC can optimize mobile device performance, enhance security and privacy, improve battery life, provide increased bandwidth, and reduce latency across wireless networks. Cloudlets are a new computation concept that can operate at the edge of the network. The service provider can deploy cloudlet services in a MEC environment, giving mobile devices the ability to offload their tasks to cloudlets. In the MEC environment, the offloading problem depends on the cloudlets&#39; availability of computation resources. Also, the deployment method of cloudlets in the environment affects task offloading. This paper investigates approaches to the cloudlet deployment and task offloading problem in the MEC environment. We first demonstrate that the problem has to be considered a multi-objective optimization problem, since more than one objective needs to be optimized. We then prove that the problem is NP-complete, give an overview of existing solutions using meta-heuristic algorithms, and suggest future solutions for this problem. Finally, we explain the advantages of using a variable-length solution space with meta-heuristic algorithms for this problem.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_57-A_Survey_on_Cloudlet_Computation_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of COVID-19 on Digital Competence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140156</link>
        <id>10.14569/IJACSA.2023.0140156</id>
        <doi>10.14569/IJACSA.2023.0140156</doi>
        <lastModDate>2023-01-31T12:17:56.4400000+00:00</lastModDate>
        
        <creator>Syerina Syahrin</creator>
        
        <creator>Khalid Almashiki</creator>
        
        <creator>Eman Alzaanin</creator>
        
        <subject>Digital competence; digital skills; digital profile; ICT skills; preservice teacher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The study looked into how COVID-19 affected the digital competence of a group of preservice teacher education students at a higher education institution in the Sultanate of Oman. The paper examined students’ digital profiles in five areas, namely information and data literacy, communication and collaboration, digital content creation, safety, and problem solving. Data from 32 undergraduate students was collected by utilizing DigComp, a European Commission digital skills self-assessment tool, and findings from a survey. The digital competence framework measures the set of skills, knowledge, and attitudes that describes what it means to be digitally competent. These skills are important for students to be effective global citizens in the 21st century. The results of the study revealed that the majority of the students scored Level 3 (Intermediate) in their self-assessment competency test. The majority of the students perceived that their digital competence improved significantly as a result of online learning, which was accelerated by the COVID-19 pandemic. The rationale of this investigation is that it helps educators understand the students’ level of digital competence and the students’ perspectives on ICT skills. In turn, it informs us of ways to monitor the students’ digital progress and the next steps in developing their digital competency.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_56-The_Impact_of_COVID_19_on_Digital_Competence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generalized Epileptic Seizure Prediction using Machine Learning Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140155</link>
        <id>10.14569/IJACSA.2023.0140155</id>
        <doi>10.14569/IJACSA.2023.0140155</doi>
        <lastModDate>2023-01-31T12:17:56.4400000+00:00</lastModDate>
        
        <creator>Zarqa Altaf</creator>
        
        <creator>Mukhtiar Ali Unar</creator>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Muhammad Ahmed Zaki</creator>
        
        <creator>Naseer-u-Din</creator>
        
        <subject>Epilepsy; electroencephalogram; artificial intelligence; machine learning; CHB-MIT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In recent years, electroencephalography (EEG) signal identification of epileptic seizures has developed into a routine procedure to determine epilepsy, since manually identifying epileptic seizures by expert neurologists is a labor-intensive, time-consuming procedure that also produces several errors. Thus, efficient and computerized detection of epileptic seizures is required. The disordered brain function that causes epileptic seizures can have an impact on a patient&#39;s condition. Epileptic seizures can be prevented by medicine with great success if they are predicted before they start. Electroencephalogram (EEG) signals are utilized to predict epileptic seizures by using machine learning algorithms and complex computational methodologies. Furthermore, two significant challenges that affect both expectancy time and the true positive forecast rate are feature extraction from EEG signals and noise removal from EEG signals. As a result, we suggest a model that offers trustworthy preprocessing and feature extraction techniques. To automatically identify epileptic seizures, a variety of ensemble learning-based classifiers were utilized with frequency-based features extracted from the EEG signal. Our algorithm offers a higher true positive rate and diagnoses epileptic episodes with enough foresight before they begin. On the scalp EEG CHB-MIT dataset of 24 subjects, this suggested framework detects the beginning of the preictal state, the state that occurs a few minutes before the onset of the seizure, resulting in an elevated true positive rate (91%) compared with conventional methods, an optimum estimation time of 33 minutes, and an average prediction time of 23 minutes and 36 seconds. According to the experimental findings, the maximum accuracy, sensitivity, and specificity rates in this research were 91%, 98%, and 84%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_55-Generalized_Epileptic_Seizure_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Light-weight Authentication Scheme in the Internet of Things using the Enhanced Bloom Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140154</link>
        <id>10.14569/IJACSA.2023.0140154</id>
        <doi>10.14569/IJACSA.2023.0140154</doi>
        <lastModDate>2023-01-31T12:17:56.4230000+00:00</lastModDate>
        
        <creator>Xiaoyan Huo</creator>
        
        <subject>En-route authentication bitmap; message authentication codes; Internet of Things; bloom filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Authenticated key exchange mechanisms are critical for security-sensitive Internet of Things (IoT) and Wireless Sensor Networks (WSNs). In this area, the Bloom Filter (BF) plays a crucial role, directly and indirectly, as it has a significant advantage in space and time. Light-weight input authentication is one of the most challenging tasks in IoT. Weak or inefficient defense algorithms can allow fake information to enter the system, be shared, generate unnecessary messages, and reduce network efficiency. Utilizing an augmented Bloom filter to create an authentication structure called the En-route Authentication Bitmap (EAB) has a substantial advantage over traditional methods that involve direct use of Message Authentication Codes (MAC). This effective EAB method picks out fake information almost exactly, thereby curbing feeding attacks within no more than two steps taken by the attacker. The EAB needs only a few bytes of bandwidth for efficient defense against at least ten forward steps of the adversary. Without hesitation, the augmented Bloom filter and its components are becoming more common in network defense mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_54-A_Light_weight_Authentication_Scheme_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Thermal and Electrical Conductivities on the Ablation Volume during Radiofrequency Ablation Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140153</link>
        <id>10.14569/IJACSA.2023.0140153</id>
        <doi>10.14569/IJACSA.2023.0140153</doi>
        <lastModDate>2023-01-31T12:17:56.4070000+00:00</lastModDate>
        
        <creator>Mohammed S. Ahmed</creator>
        
        <creator>Mohamed Tarek El-Wakad</creator>
        
        <creator>Mohammed A. Hassan</creator>
        
        <subject>Radiofrequency ablation (RFA); finite element method (FEM); COMSOL; Cool-tip RF electrode; multi-hooks electrode; large tumor ablation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Radiofrequency ablation (RFA) is the treatment of choice for certain types of cancer, especially liver cancer. The main issue with RFA is that the larger the tumor volume, the longer the ablation period, which causes more pain for the patient and forces surgeons to perform a larger number of ablation sessions or surgeries. The electrode material currently in common use for RFA, nickel-titanium alloy, is characterized by low thermal and electrical conductivities; an electrode material with higher thermal and electrical conductivities delivers more thermal energy to the tumor. In this paper, we design two models, a cool-tip RF electrode and a multi-hook RF electrode, to study the effect of the electrode material’s thermal and electrical conductivities on ablation volume. Gold, silver, and platinum have higher thermal and electrical conductivities than nickel-titanium alloy, so we studied the effect of these materials on ablation volume using the two designs. The proposed model reduces the ablation time and the damage to healthy tissue while increasing the ablation volume, with values ranging from 2.6 cm3 to 15.4 cm3. The results show that ablation volume increases with materials of higher thermal and electrical conductivities, thereby reducing patient pain.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_53-The_Effect_of_Thermal_and_Electrical_Conductivities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Breast Cancer Classification Method Using an Enhanced AdaBoost Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140151</link>
        <id>10.14569/IJACSA.2023.0140151</id>
        <doi>10.14569/IJACSA.2023.0140151</doi>
        <lastModDate>2023-01-31T12:17:56.3930000+00:00</lastModDate>
        
        <creator>Yousef K. Qawqzeh</creator>
        
        <creator>Abdullah Alourani</creator>
        
        <creator>Sameh Ghwanmeh</creator>
        
        <subject>Breast cancer; diagnosis; prediction; AdaBoost; RandomForest; PCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The goal of this research is to create a machine learning (ML) classifier that can improve breast cancer (BC) diagnosis and prediction. The principal component analysis (PCA) technique is used in this work to reduce the dimensions of the BC dataset and achieve better classification metrics. The developed classifier outperformed others in terms of F1 score and accuracy score. Using the original BC dataset, four different classifiers were applied to determine the best classifier in terms of performance metrics: RandomForest, DecisionTree, AdaBoost, and GradientBoosting. The RandomForest classifier obtained a (95.7%) F1 score and a (94.5%) accuracy score, the DecisionTree classifier a (93%) F1 score and a (91%) accuracy score, the GradientBoosting classifier a (95%) F1 score and a (93.5%) accuracy score, and the AdaBoost classifier a (95.8%) F1 score and a (94.5%) accuracy score. Because it scored the highest performance metrics, the AdaBoost classifier was used to create the final model on the reduced PCA dataset; the developed classifier is named “pcaAdaBoost”. The optimized pcaAdaBoost achieved higher performance metrics, with an F1 score of (99%) and an accuracy score of (98.8%). The results reveal that the optimized pcaAdaBoost scored the highest performance measures in both cross-validation and testing, with an overall accuracy of (99%). The improved results justify the use of dimensionality reduction in high-dimensional datasets to reduce complexity and improve performance measures.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_51-An_Improved_Breast_Cancer_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Multimedia Content Transmission Model for Disaster Management using Delay Tolerant Mobile Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140152</link>
        <id>10.14569/IJACSA.2023.0140152</id>
        <doi>10.14569/IJACSA.2023.0140152</doi>
        <lastModDate>2023-01-31T12:17:56.3930000+00:00</lastModDate>
        
        <creator>Sushant Mangasuli</creator>
        
        <creator>Mahesh Kaluti</creator>
        
        <subject>Buffer management; disaster management; Mobile Adhoc Network; opportunistic routing; Quality of Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Natural and man-made disasters such as earthquakes, floods, and unprecedented rainfall pose several threats to our society. Citizens upload disaster information in the form of multimedia content such as pictures, audio, and video, so an efficient information and communication framework is critical for disaster management. Mobile Adhoc Networks (MANET) have been used effectively for this purpose. However, disaster management imposes Quality of Service (QoS) requirements such as bandwidth, a high delivery ratio, low overhead, and minimal latency, whereas existing data transmission schemes induce high latency and overhead among intermediate devices. To meet the QoS requirements of disaster management applications, this paper presents a High Delivery Efficiency and Low Latency Multimedia Content Transmission (HDELL-MCT) scheme for MANETs, together with an improved buffer management scheme for meeting the performance and latency prerequisites of disaster management. Experiments conducted using the ONE simulator show that the HDELL-MCT scheme performs very well on different QoS metrics, improving the delivery ratio by 38.02%, reducing latency by 7.53%, and lowering hop communication overhead by 65.1% in comparison with an existing multimedia content transmission model.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_52-Efficient_Multimedia_Content_Transmission_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Queueing Model based Dynamic Scalability for Containerized Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140150</link>
        <id>10.14569/IJACSA.2023.0140150</id>
        <doi>10.14569/IJACSA.2023.0140150</doi>
        <lastModDate>2023-01-31T12:17:56.3770000+00:00</lastModDate>
        
        <creator>Ankita Srivastava</creator>
        
        <creator>Narander Kumar</creator>
        
        <subject>Cloud computing; scalability; containers; containerized cloud models; queueing model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Cloud computing has become a growing technology and has received wide acceptance in the scientific community and in large organizations such as government and industry. Owing to the highly complex nature of VM virtualization, lightweight containers have gained wide popularity, and techniques for provisioning resources to these containers have drawn researchers’ attention. Models and algorithms that provide dynamic scalability for the containerized cloud, meeting the demands of high performance and QoS with a minimum number of resources, have been lacking in the literature. Dynamic scalability allows cloud services to offer timely, on-demand computing resources that adjust dynamically to end users. This manuscript presents a technique that exploits a queueing model to dynamically scale the virtual resources of containers while reducing cost and meeting the user’s Service Level Agreement (SLA). The paper aims to improve the usage of virtual resources and satisfy SLA requirements in terms of response time, drop rate, system throughput, and the number of containers. The work has been simulated using CloudSim and compared with existing work; the analysis shows that the proposed approach performs better.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_50-Queueing_Model_based_Dynamic_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Practices of Online Assessment in a Digital Device in the Context of University Training: The Case of Hassan II University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140148</link>
        <id>10.14569/IJACSA.2023.0140148</id>
        <doi>10.14569/IJACSA.2023.0140148</doi>
        <lastModDate>2023-01-31T12:17:56.3600000+00:00</lastModDate>
        
        <creator>Fatima-ezzahra Mrisse</creator>
        
        <creator>Nadia Chafiq</creator>
        
        <creator>Mohammed Talbi</creator>
        
        <creator>Kamal Moundy</creator>
        
        <subject>Online assessment; digital device; Information and Communication Technologies (ICT); biometrics; identify digital; student</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This research examines online assessment within a digital device in the context of university training, aiming to improve assessment practices with emerging technologies, based on an experiment with students from Hassan II University. Online assessment is a systematic process that measures learners’ knowledge and skills through multiple technological tools in a digital device. Digital devices intend to revolutionize higher education through the use of Information and Communication Technologies (ICT). Nevertheless, they pose the problem of verifying student identity during online assessment; in practice, automated online assessment systems are extremely vulnerable to cheating. The aim of this research is therefore to explore, first, the types of online assessment that could be implemented in a digital device and, second, how to verify the identity of the student during an online course on a digital device. The experimental sample consists of (N = 108) students from the Hassan II University of Casablanca, divided into two classes of the ITEF and MIMPA Masters, and the study was also based on an online questionnaire answered by (N = 37) teachers at the Hassan II University in Casablanca. The results suggest putting into practice, in digital devices, diagnostic and formative evaluations that use biometric methods for identity verification with a limited number of students. Biometrics remains inapplicable in summative assessments, however, because of the problem of massiveness and the hindrances it creates in online exams. For this reason, measures must be put in place to promote the smooth running of online assessment.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_48-The_Practices_of_Online_Assessment_in_a_Digital_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative and Evaluation of Anomaly Recognition by Employing Statistic Techniques on Humanoid Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140149</link>
        <id>10.14569/IJACSA.2023.0140149</id>
        <doi>10.14569/IJACSA.2023.0140149</doi>
        <lastModDate>2023-01-31T12:17:56.3600000+00:00</lastModDate>
        
        <creator>Nuratiqa Natrah Mansor</creator>
        
        <creator>Muhammad Herman Jamaluddin</creator>
        
        <creator>Ahmad Zaki Shukor</creator>
        
        <creator>Muhammad Sufyan Basri</creator>
        
        <subject>Anomaly detection; humanoid robot; vision system; statistical analysis; robot recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This paper presents a study differentiating between normal and anomalous conditions detected by humanoid robots using comparative statistics. The study was conducted on a robotic software platform to examine the scenario and evaluate anomalous versus normal behaviour under different conditions. A machine vision technique was employed to run an image segmentation process and carry out semi-supervised object training within a controlled environment. The robot is trained by differentiating the measured size of the target object, its location, and the object’s visibility within three different frames. The effect is measured by extracting the positive predictive value (PPV), mean, and standard deviation from the captured image using statistical techniques in machine vision. The results showed that the mean value decreased by around 50% from the normal scenario when an anomaly occurred. Aside from that, the standard deviation values were more than twofold those of the common scenario, especially after the object’s size grew. In contrast, the deviation value is remarkably small when the target is situated in the middle of adjacent frames, compared with when the entire shape is positioned within one frame. At the same time, the mean values from the processed images showed only a minor difference.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_49-Comparative_and_Evaluation_of_Anomaly_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Learning-based Correlated Graph Model for Spinal Cord Injury Prediction from Magnetic Resonance Spinal Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140147</link>
        <id>10.14569/IJACSA.2023.0140147</id>
        <doi>10.14569/IJACSA.2023.0140147</doi>
        <lastModDate>2023-01-31T12:17:56.3430000+00:00</lastModDate>
        
        <creator>P. R. S. S. V Raju</creator>
        
        <creator>V. Asanambigai</creator>
        
        <creator>Suresh Babu Mudunuri</creator>
        
        <subject>Spinal cord injury; regression; machine learning; graph model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In epidemiological research on spine surgery, machine learning represents a promising new area. It is made up of several algorithms that work together to identify patterns in the data. Machine learning provides many benefits over traditional regression techniques, including a lower necessity for a priori predictor information and a higher capacity for managing huge datasets. Recent research has made significant progress toward using machine learning more effectively in spinal cord injury (SCI). Machine learning algorithms are employed to analyze non-traumatic and traumatic spinal cord injuries. Non-traumatic spinal cord injuries often reflect degenerative spine conditions that cause spinal cord compression, such as degenerative cervical myelopathy. This article proposes a novel correlated graph model (CGM) that adopts correlated learning to predict various outcomes published in traumatic and non-traumatic SCI. In the studies mentioned, machine learning is used for several purposes, including imaging analysis and epidemiological data set prediction. We discuss how these clinical predictive models are based on machine learning compared to traditional statistical prediction models. Finally, we outline the actions that must be taken in the future for machine learning to be a more prevalent statistical analysis method in SCI.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_47-A_Learning_based_Correlated_Graph_Model_for_Spinal_Cord_Injury.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study on the Affecting Factors of Cloud-based ERP System Adoption in Iraqi SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140146</link>
        <id>10.14569/IJACSA.2023.0140146</id>
        <doi>10.14569/IJACSA.2023.0140146</doi>
        <lastModDate>2023-01-31T12:17:56.3300000+00:00</lastModDate>
        
        <creator>Mohammed G. J</creator>
        
        <creator>MA Burhanuddin</creator>
        
        <creator>Dawood F. A. A</creator>
        
        <creator>Alyousif S</creator>
        
        <creator>Alkhayyat A</creator>
        
        <creator>Ali M. H</creator>
        
        <creator>R. Q. Malik</creator>
        
        <creator>Jaber M. M</creator>
        
        <subject>SMEs; TOE; DOI and HOT-fit frameworks; Cloud-based ERP; ICT; SmartPLS; SPSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This paper investigates the main factors that affect the adoption of cloud-based enterprise resource planning (ERP) among small- and medium-sized enterprises (SMEs) in the Republic of Iraq, using TOE, DOI, and HOT-fit as the theoretical framework. Data was collected from 136 decision-makers, senior executives, and IT managers in SMEs in the Republic of Iraq through a web-based survey questionnaire. The research model and the derived hypotheses were tested using SPSS and SmartPLS. The findings indicate that several specific factors have a significant effect on the adoption of cloud-based ERP. This conclusion can be used to enhance strategies for approaching cloud-based ERP by pinpointing why some SMEs choose to adopt this technology and succeed during the adoption phase, while others still do not go forward with adoption. The study provides an overview and empirically identifies the main determinant logistical factors that SMEs in the Republic of Iraq might face. The findings also help SMEs weigh their information technology investments when considering the adoption of cloud-based ERP.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_46-An_Empirical_Study_on_the_Affecting_factors_of_Cloud_based_ERP_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Smart Deepfake Video Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140144</link>
        <id>10.14569/IJACSA.2023.0140144</id>
        <doi>10.14569/IJACSA.2023.0140144</doi>
        <lastModDate>2023-01-31T12:17:56.3130000+00:00</lastModDate>
        
        <creator>Marwa Elpeltagy</creator>
        
        <creator>Aya Ismail</creator>
        
        <creator>Mervat S. Zaki</creator>
        
        <creator>Kamal Eldahshan</creator>
        
        <subject>Deepfake; deepfake detection; bimodal; XceptionNet; InceptionResNetV2; constant-Q transform; CQT; Gated Recurrent Unit; GRU; video authenticity; deep learning; multimodal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Rapid advancements in deep learning-based technologies have produced several synthetic video and audio generation methods capable of creating incredibly hyper-realistic deepfakes. These deepfakes can be employed to impersonate the identity of a source person in videos by swapping the source&#39;s face with the target one. Deepfakes can also be used to clone the voice of a target person from audio samples. Such deepfakes may pose a threat to societies if used maliciously. Consequently, distinguishing deepfake video frames, cloned voices, or both from genuine ones has become an urgent issue. This work presents a novel smart deepfake video detection system. The video frames and audio are extracted from given videos, and two feature extraction methods are proposed, one for each modality: visual video frames and audio. The first method is an upgraded XceptionNet model, which extracts spatial features from video frames and produces a feature representation for the visual video frames. The second is a modified InceptionResNetV2 model based on the Constant-Q Transform (CQT) method, employed to extract deep time-frequency features from the audio modality and produce a feature representation for the audio. The extracted features of both modalities are fused at a mid-layer to produce a bimodal information-based feature representation for the whole video. These three representation levels are independently fed into a Gated Recurrent Unit (GRU) based attention mechanism that learns and extracts deep, important temporal information per level. The system then checks whether the forgery is applied to the video frames, the audio, or both, and produces the final decision about video authenticity. The newly suggested method has been evaluated on the FakeAVCeleb multimodal video dataset. The analysis of the experimental results confirms the superiority of the new method over current state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_44-A_Novel_Smart_Deepfake_Video_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three on Three Optimizer: A New Metaheuristic with Three Guided Searches and Three Random Searches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140145</link>
        <id>10.14569/IJACSA.2023.0140145</id>
        <doi>10.14569/IJACSA.2023.0140145</doi>
        <lastModDate>2023-01-31T12:17:56.3130000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Ashri Dinimaharawati</creator>
        
        <subject>Optimization; metaheuristic; swarm intelligence; portfolio optimization; Kompas 100; bank</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This paper presents a new swarm intelligence-based metaheuristic called the three-on-three optimizer (TOTO). The name reflects its novel mechanism of adopting multiple searches in a single metaheuristic: three guided searches and three random searches. The three guided searches are searching toward the global best solution, having the global best solution move away from the corresponding agent, and searching based on the interaction between the corresponding agent and a randomly selected agent. The three random searches are the local search of the corresponding agent, the local search of the global best solution, and the global search within the entire search space. TOTO is challenged to solve the classic 23 functions as a theoretical optimization problem and the portfolio optimization problem as a real-world optimization problem, where 13 bank stocks from the Kompas 100 index are to be optimized. The results indicate that TOTO performs well in solving the classic 23 functions: it finds the global optimal solution of eleven functions and is superior to five recent metaheuristics in solving 17 functions. These metaheuristics are the grey wolf optimizer (GWO), marine predator algorithm (MPA), mixed leader-based optimizer (MLBO), golden search optimizer (GSO), and guided pelican algorithm (GPA). TOTO is better than GWO, MPA, MLBO, GSO, and GPA in solving 22, 21, 21, 19, and 15 functions, respectively, which means TOTO is powerful in solving high-dimension unimodal, multimodal, and fixed-dimension multimodal problems. TOTO performs as the second-best metaheuristic in solving the portfolio optimization problem.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_45-Three_on_Three_Optimizer_A_New_Metaheuristic_with_Three_Guided_Searches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Flood Emergency Response System with Face Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140143</link>
        <id>10.14569/IJACSA.2023.0140143</id>
        <doi>10.14569/IJACSA.2023.0140143</doi>
        <lastModDate>2023-01-31T12:17:56.2970000+00:00</lastModDate>
        
        <creator>E. Mardaid</creator>
        
        <creator>Z. Zainal Abidin</creator>
        
        <creator>S. A. Asmai</creator>
        
        <creator>Z. Abal Abas</creator>
        
        <subject>Disaster management system; flood emergency response system; flood emergency response system with face analytics; flood emergency response system with face analytics using HOG algorithm; flood emergency response system dashboard</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Disaster management systems are developed to monitor floods, tsunamis, and earthquakes, supporting effective preparation for and response to disasters. Malaysia is building an emergency response system for managing flood disasters, since floods occur frequently there. However, the current flood emergency response system has limitations: there is no integration, as data is entered into spreadsheets and transferred to different storages, and there are no data analytics to assist data collection and decision-making. Although flood emergency response systems have been improved by introducing sirens and loudspeakers to reach flood victims, it remains difficult to access basic facilities and receive flood responses. Therefore, this study implements a flood emergency response system with face analytics to assist data acquisition, analyzing flood victims’ faces through the CCTV infrastructure with the HOG algorithm and incorporating a dashboard. The dashboard categorizes the number of flood occurrences, the maximum flood period (days), the number of displaced flood victims, and the loss assessment. Findings show that the dashboard helps enforcement agencies obtain real-time information about flood victims. Based on the flood frequency in Malaysia from 2017 to 2021, the percentages produced were 27%, 19%, 12%, 19%, and 23%. Moreover, the duration of floods decreased from 30% to 17% over five years, showing that the flood emergency response helps the Malaysian government improve its infrastructure. This study is beneficial to local enforcement units and evacuation centers in identifying flood victims.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_43-Implementation_of_Flood_Emergency_Response_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agro-Food Supply Chain Traceability using Blockchain and IPFS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140142</link>
        <id>10.14569/IJACSA.2023.0140142</id>
        <doi>10.14569/IJACSA.2023.0140142</doi>
        <lastModDate>2023-01-31T12:17:56.2830000+00:00</lastModDate>
        
        <creator>Subashini Babu</creator>
        
        <creator>Hemavathi Devarajan</creator>
        
        <subject>Blockchain; IPFS; supply chain management; traceability; Ethereum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Many blockchain initiatives make significant use of the InterPlanetary File System (IPFS) to store user data off-chain. Centralized administration, ambiguous data, unreliable data, and the ease of creating information islands are all issues with traditional traceability systems. This study develops a monitoring system using blockchain technology to record and query product information in the supply network of Non-Perishable (NP) agro goods to address the above issues. The transparency and trustworthiness of traceability data are considerably improved by employing blockchain technology&#39;s distributed, tamper-proof, and traceable properties. To alleviate the strain on the blockchain and enable efficient information inquiry, a storage structure is built in which both public and private data are stored in the blockchain and in IPFS using cryptography. Because of its ability to trace the origin of food, blockchain technology contributes to the development of reliable food supply chains and the establishment of rapport between farmers and their customers. Since it provides a secure location for data to be kept, it can pave the way for implementing data-driven farming techniques. In addition to improving data security, recording farm data in IPFS and storing encrypted IPFS file hashes in smart contracts solves the issue of blockchain storage explosion. Used in tandem with smart contracts, it also enables instantaneous outflows between parties in response to changes in data stored in the blockchain. The paper also offers simulations of the implementation and an analysis of the performance. The findings validate that our system improves security for sensitive information, safeguards supply chain data, and meets the needs of real-world applications. Furthermore, it boosts throughput efficiency while reducing latency.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_42-Agro_Food_Supply_Chain_Traceability_using_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Filtering Technique of Digital Images in Multimedia Data Warehouses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140141</link>
        <id>10.14569/IJACSA.2023.0140141</id>
        <doi>10.14569/IJACSA.2023.0140141</doi>
        <lastModDate>2023-01-31T12:17:56.2670000+00:00</lastModDate>
        
        <creator>Nermin Abdelhakim Othman</creator>
        
        <creator>Ahmed Ayman Saad</creator>
        
        <creator>Ahmed Sharaf Eldin</creator>
        
        <subject>Data Warehouse (DW); Content-Based Image Retrieval (CBIR); Color Edge Detection (CED); Genetic Algorithm (GA); Simulated Annealing (SA); Memetic Algorithm (MA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The similarity search approach used for an image Data Warehouse (DW) can provide better insight into discovering the images most similar to an input query. As technology has advanced, multimedia content has grown noticeably more complex, opening new research areas that depend on similarity-based multimedia content retrieval. Content-Based Image Retrieval (CBIR) algorithms are utilized to retrieve images related to a query image from huge databases or DWs. The queries used for a DW are complex, take a long time to process, and often give less accurate results. For these reasons, an effective technique is needed to improve the similarity search query process so that it yields better results. This paper shows how to extract features from a set of images (color, shape, and texture features) by using a CBIR algorithm with the Color Edge Detection (CED) method. Once these features are extracted, the proposed method minimizes the distance between these feature vectors and that of the query image using a Genetic Algorithm (GA). The paper illustrates the extraction of numerous strong and important features from the image database and their storage in the form of feature vectors. Accordingly, a novel similarity evaluation with a metaheuristic algorithm (a Genetic Algorithm (GA) combined with Simulated Annealing (SA)) is performed between the query image features and those belonging to the database images. This paper introduces a new algorithm, CEDF (Color Edge Detection with Gaussian Blur Filter), which applies the Gaussian Blur Filter after using the CED method for feature detection. Experimental results show that the CEDF method gives better results than other already-known methods.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_41-A_Hybrid_Filtering_Technique_of_Digital_Images_in_Multimedia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework of Outcome-based Assessment and Evaluation for Computing Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140140</link>
        <id>10.14569/IJACSA.2023.0140140</id>
        <doi>10.14569/IJACSA.2023.0140140</doi>
        <lastModDate>2023-01-31T12:17:56.2670000+00:00</lastModDate>
        
        <creator>Wasan S. Awad</creator>
        
        <creator>Khadija A. Almhosen</creator>
        
        <subject>Student outcomes; program assessment; program evaluation; program accreditation; ABET accreditation; continuous improvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This paper presents a framework for student outcome-based assessment and evaluation, including the process and detailed activities leading to continuous assessment of the success of an academic program, which is essential to its sustainability. Moreover, the paper surveys the literature on the different means of assessing and evaluating an academic program, together with the critical performance metrics that help quantify such evaluation. The presented framework was implemented in the Information Technology program over a course of five years. The paper provides empirical insights into how careful implementation of the framework enabled the College of Information Technology at Ahlia University to achieve outstanding results in quality assurance and to become ABET accredited. The results of the implementation demonstrate the effectiveness of the framework in improving student performance and the program. The paper fulfils an identified need to study how a student outcome-based assessment and evaluation model enables an academic institution to foster quality assurance instead of relying on ad hoc practices that might lead to a trial-and-error approach. The presented framework could be followed by other institutions aiming for international accreditation.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_40-A_Framework_of_Outcome_based_Assessment_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Cheating in Online Exams Through the Proctor Test Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140139</link>
        <id>10.14569/IJACSA.2023.0140139</id>
        <doi>10.14569/IJACSA.2023.0140139</doi>
        <lastModDate>2023-01-31T12:17:56.2500000+00:00</lastModDate>
        
        <creator>Yusring Sanusi Baso</creator>
        
        <creator>Nurul Murtadho</creator>
        
        <creator>Syihabuddin</creator>
        
        <creator>Hikmah Maulani</creator>
        
        <creator>Andi Agussalim</creator>
        
        <creator>Haeruddin</creator>
        
        <creator>Ahmad Fadlan</creator>
        
        <creator>Ilham Ramadhan</creator>
        
        <subject>Reducing cheating; online exam; proctor test models; Indonesian learners of Arabic; Silsilat Al-Lisan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The World Health Organization (WHO) officially declared coronavirus (COVID-19) a pandemic on March 11, 2020. Educational institutions had to move most face-to-face classroom learning activities online. This situation also forced academic institutions to change the format of assessing student learning outcomes. Online exam surveillance applications utilizing cameras and browser-blocking tools (proctors) are becoming popular. However, the appearance of proctor-supervised exam systems has also raised controversy. The main discussion regarding the proctor system concerns the integrity of assessment and the capacity of students to adapt to this new method of supervision. The main question is whether students feel comfortable using the proctor system in exams and whether the system affects students&#39; scores. To answer this question, we analyzed 152 scores obtained from a trial with students learning Arabic at Hasanuddin University Makassar, Indonesia. The experiment involved three exam models: an online exam taken from home using the Sikola Learning Management System (Modality 1), an online exam using the Proctor System within the Sikola Learning Management System (Modality 2), and an in-person paper exam under the supervision of a lecturer (Modality 3). The results show that students prefer Modality 1 (online at home with the Sikola LMS). There is a statistical difference between the scores obtained by students across the three modalities analyzed: student scores under Modality 1 are higher than under the other two modalities. On the other hand, there was no difference in scores between Modalities 2 and 3. The online exam system (Modality 2) can therefore be applied to online exams in higher education institutions because it can reduce or even prevent student cheating.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_39-Reducing_Cheating_in_Online_Exams_Through_the_Proctor_Test_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI based Dynamic Prediction Model for Mobile Health Application System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140138</link>
        <id>10.14569/IJACSA.2023.0140138</id>
        <doi>10.14569/IJACSA.2023.0140138</doi>
        <lastModDate>2023-01-31T12:17:56.2370000+00:00</lastModDate>
        
        <creator>Adari Ramesh</creator>
        
        <creator>C K Subbaraya</creator>
        
        <creator>G K Ravi Kumar</creator>
        
        <subject>Artificial Intelligence (AI); M-Health System; Data Collection; Cloud Storage; Gorilla Troop Optimization (GTO); Bi-directional Long Short-Term Memory (Bi-LSTM); dynamic prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In recent decades, mobile health (m-health) applications have gained significant attention in the healthcare sector due to their increased support in critical cases like cardiac disease, spinal cord problems, and brain injuries. M-health services are also considered especially valuable where healthcare facilities are deficient. In addition, m-health supports wired and advanced wireless technologies for data transmission and communication. In this work, an Artificial Intelligence (AI)-based deep learning model is implemented to predict healthcare data, where data handling is performed to improve the dynamic prediction performance. It includes working modules for data collection, normalization, AI-based classification, and decision-making. Here, the m-health data are obtained from smart devices through the service providers and comprise health information related to blood pressure, heart rate, glucose level, etc. The main contribution of this paper is to accurately predict Cardio Vascular Disease (CVD) from the patient dataset stored in the cloud using the AI-based m-health system. After the data are obtained, preprocessing is performed for noise reduction and normalization, because prediction performance depends highly on data quality. Consequently, we use the Gorilla Troop Optimization Algorithm (GTOA) to select the most relevant features for classifier training and testing. The CVD type is then classified according to the selected feature set using Bi-directional Long Short-Term Memory (Bi-LSTM). Moreover, the proposed AI-based prediction model&#39;s performance is validated and compared using different measures.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_38-AI_based_Dynamic_Prediction_Model_for_Mobile_Health_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring College Academic Performance of K to12 IT and Non-IT-related Strands to Reduce Academic Deficiencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140137</link>
        <id>10.14569/IJACSA.2023.0140137</id>
        <doi>10.14569/IJACSA.2023.0140137</doi>
        <lastModDate>2023-01-31T12:17:56.2200000+00:00</lastModDate>
        
        <creator>Marilou S. Benoza</creator>
        
        <creator>Thelma Palaoag</creator>
        
        <subject>Academic performance; exploratory research; reduce academic deficiencies; thematic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Improving students&#39; academic performance is a significant concern among academics. Despite various strategies to improve academic performance, a significant number of students still fail academically. This study sought to investigate the possible reasons for academic deficiencies among Information Technology (IT) students of Infotech Development Systems Colleges, Inc. from IT- and Non-IT-related strands, to determine whether their strand is significant to their academic performance in college, and to formulate a solution based on the identified reasons and recommendations to reduce negative academic remarks. The researchers employed survey questionnaires and interviews to conduct exploratory data analysis. Similarly, the Senior High School academic performance and actual grades of the respondents from AY 2018-2019 to the First Semester of AY 2021-2022 were examined. The tested hypothesis showed statistical significance (p &lt; 0.05) for the IT-related strand. The study further reveals that the Non-IT-related strand has more students with academic deficiencies than the IT-related strand, and highlights a variety of cited reasons. Respondents cited misalignment of their strand with their current program, instructors not speaking clearly, unreliable internet connections, and failure to complete and submit academic tasks as reasons for academic deficiencies. The researchers designed a model that can potentially eliminate academic inadequacies. The model takes into account both internal and external factors: the internal factors include effective time management, a positive attitude and mindset, prompt and punctual completion of requirements, and good study habits. The recommended external factors include competent and student-friendly instructors; a stable, strong, and accessible internet connection; a conducive learning environment; relevant available resources and facilities; adaptation of limited face-to-face or hybrid classes; and alignment of the SHS strand with the college program of choice.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_37-Exploring_College_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Weather Images using Deep Neural Networks for Large Scale Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140136</link>
        <id>10.14569/IJACSA.2023.0140136</id>
        <doi>10.14569/IJACSA.2023.0140136</doi>
        <lastModDate>2023-01-31T12:17:56.2200000+00:00</lastModDate>
        
        <creator>Shweta Mittal</creator>
        
        <creator>Om Prakash Sangwan</creator>
        
        <subject>Weather classification; big data; transfer learning; deep learning; Sparkdl; convolutional networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Classifying weather from outdoor images helps prevent road accidents, schedule outdoor activities, and improve the reliability of vehicle assistant driving and outdoor video surveillance systems. Weather classification has applications in various fields such as agriculture, aquaculture, transportation, and tourism. Earlier, expensive sensors and huge manpower were used for weather classification, making it very tedious and time-consuming. Automating the task of classifying weather conditions from images will save considerable time and resources. In this paper, a framework based on the transfer learning technique is proposed for classifying weather images with features learned from pre-trained deep CNN models in much less time. Further, the size of the training data affects the efficiency of the model: larger amounts of high-quality data often lead to more accurate results. Hence, we have implemented the proposed framework on the Spark platform, making it scalable to big datasets. Extensive experiments have been performed on a weather image dataset, and the results prove that the proposed framework is reliable. From the results, it can be concluded that weather classification with the InceptionV3 model and a Logistic Regression classifier yields the best results, with a maximum accuracy of 97.77%.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_36-Classifying_Weather_Images_using_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Extraction and Performance Analysis of 3D Surface Reconstruction Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140135</link>
        <id>10.14569/IJACSA.2023.0140135</id>
        <doi>10.14569/IJACSA.2023.0140135</doi>
        <lastModDate>2023-01-31T12:17:56.2030000+00:00</lastModDate>
        
        <creator>Richha Sharma</creator>
        
        <creator>Pawanesh Abrol</creator>
        
        <subject>3D reconstruction; point cloud; feature detection; feature matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Digital image-based 3D surface reconstruction is a streamlined and proper means of studying the features of the object being modelled. The generation of true 3D content is a very crucial step in any 3D system. A methodology to reconstruct a 3D surface of objects from a set of digital images is presented in this paper. It is simple, robust, and can be freely used for the construction of 3D surfaces from images. Digital images are taken as input to generate sparse and dense point clouds in 3D space from the detected and matched features. Poisson Surface, Ball Pivoting, and Alpha shape reconstruction algorithms have been used to reconstruct photo-realistic surfaces. Various parameters of these algorithms that are critical to the quality of reconstruction are identified and the effect of these parameters with varying values is studied. The results presented in this study will give readers an insight into the behaviour of various algorithmic parameters with computation time and fineness of reconstruction.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_35-Parameter_Extraction_and_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Intrusion Detection Efficiency using a Partitioning-based Recursive Feature Elimination in Big Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140134</link>
        <id>10.14569/IJACSA.2023.0140134</id>
        <doi>10.14569/IJACSA.2023.0140134</doi>
        <lastModDate>2023-01-31T12:17:56.1900000+00:00</lastModDate>
        
        <creator>Hesham M. Elmasry</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <creator>Hatem M. Abdelkader</creator>
        
        <subject>Machine learning models; big cloud environment; intrusion detection system (IDS); Jupyter Notebook; feature selection; ISOT-CID introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In the era of cloud computing, the effectiveness of supervised machine-learning-based intrusion detection models for categorizing and detecting malicious network attacks depends on the preparation, extraction, and selection of the optimal subset of features from the dataset. Therefore, before beginning the training phase of the machine learning classifier models, it is required to remove redundant data, handle missing values, extract statistical features from the dataset, and choose the most valuable and appropriate attributes, using the Python Jupyter Notebook. In this study, a partitioning-based recursive feature elimination (PRFE) method is proposed to decrease the complexity space and training time of machine learning models while increasing the accuracy of detecting malicious attacks. On the Information Security and Object Technology Cloud Intrusion Dataset (ISOT-CID), some of the most popular supervised machine learning classification techniques, including support vector machines (SVM) and decision trees (DT), have been assessed using the suggested PRFE technique. In comparison to some of the most popular filter- and wrapper-based feature selection strategies, the results of the practical experiments demonstrated an improvement in accuracy, recall, F-score, and precision after applying the PRFE technique on the ISOT-CID dataset. Additionally, the time required to train the machine learning models was reduced.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_34-Enhancing_the_Intrusion_Detection_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Series Forecasting using LSTM and ARIMA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140133</link>
        <id>10.14569/IJACSA.2023.0140133</id>
        <doi>10.14569/IJACSA.2023.0140133</doi>
        <lastModDate>2023-01-31T12:17:56.1730000+00:00</lastModDate>
        
        <creator>Khulood Albeladi</creator>
        
        <creator>Bassam Zafar</creator>
        
        <creator>Ahmed Mueen</creator>
        
        <subject>Stock analysis; machine learning; deep learning; time series; ARIMA; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Time series analysis is the process of evaluating sequential data to extract meaningful statistics. In the current era, organizations rely greatly on data analysis to solve and predict possible answers to a specific problem, and these predictions help greatly in decision-making. In time series problems, the data are used to train different machine and deep learning models; the trained models produce outcomes that anticipate possible solutions. In this paper, the two most effective Python models, LSTM (Long Short-Term Memory) and ARIMA (Autoregressive Integrated Moving Average), are used; these are the two most recommended models for time series forecasting. The selected dataset is from Mulkia Gulf Real Estate, available at MarketWatch. The main objective of this research paper is to study and compare the results of the two models and determine which one is best suited for this particular type of prediction. Although both models are widely used, the focus of this research is the performance variance between them. LSTM became famous in 1997 as a training model that can remember patterns based on previous data, while ARIMA is famous for forecasting a variable of interest using a linear combination of its previous values. The findings state that ARIMA is better for time series forecasting than LSTM based on the mean average of the basic evaluation parameters.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_33-Time_Series_Forecasting_using_LSTM_and_ARIMA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Whale Optimization with Deep Learning based Brain Disorder Detection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140131</link>
        <id>10.14569/IJACSA.2023.0140131</id>
        <doi>10.14569/IJACSA.2023.0140131</doi>
        <lastModDate>2023-01-31T12:17:56.1570000+00:00</lastModDate>
        
        <creator>Uvaneshwari M</creator>
        
        <creator>M. Baskar</creator>
        
        <subject>Brain disorder detection; magnetic resonance imaging; deep learning; convolutional recurrent neural network; whale optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Brain disorders are a significant source of economic strain and unfathomable suffering in modern society. Imaging techniques help diagnose, monitor, and treat mental health, neurological, and developmental disorders. To aid in the Computer-Aided Diagnosis of brain diseases, deep learning (DL) has been used for the analysis of neuroimages from modalities including Positron Emission Tomography (PET), Structural Magnetic Resonance Imaging (SMRI), and functional MRI. In this study, a Whale Optimization Algorithm is used with Deep Learning (WOADL-BDDC) to analyse MRI scans for signs of neurological disease. WOADL-BDDC can detect and label abnormalities in the brain based on an MRI scan. It uses a two-step pre-processing procedure, first applying guided filtering to remove background noise and then U-Net segmentation to remove the skull region. QuickNAT, along with RMSProp, is used to segment the brain. When analysing data, WOADL-BDDC uses radiomics to collect information from every layer. When used in a convolutional recurrent neural network model, the Whale Optimization Algorithm can accurately categorize mental illness. WOADL-BDDC is evaluated on the ADNI 3D dataset. Compared to state-of-the-art classification results from Vgg16, Graph CNN, Modified ResNet18, Non-linear SVM, ResNet50-SVM, and ResNet50-RF, the suggested technique achieved the greatest accuracy, demonstrating that the suggested model is superior to other models for classification from MRI images. In simulations, the proposed approach is shown to be effective in optimizing hyperparameters, with an accuracy of 94.38% on the TR set and 94.87% on the TS set, a precision of 96.43% on the TR set and 97.62% on the TS set, and an F1-score of 89.35% and 92.10% on the TR and TS sets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_31-Modeling_of_Whale_Optimization_with_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on the Designation Institution for Supercomputer Specialized Centers in Republic of Korea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140132</link>
        <id>10.14569/IJACSA.2023.0140132</id>
        <doi>10.14569/IJACSA.2023.0140132</doi>
        <lastModDate>2023-01-31T12:17:56.1570000+00:00</lastModDate>
        
        <creator>Hyungwook Shim</creator>
        
        <creator>Yonghwan Jung</creator>
        
        <creator>Jaegyoon Hahm</creator>
        
        <subject>Supercomputer; specialized center; evaluation system; logistic regression model; network centrality analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In Korea, specialized centers are designated for 10 strategic fields for the purpose of jointly utilizing supercomputer resources at the national level. Based on the “National Supercomputing Innovation Strategy,” Korea plans to select 10 centers in three stages by 2030, and the designation of the first-stage specialized centers was completed in 2022. With the second designation in 2024 ahead, it is urgent to review and improve the existing designation institution for a fairer and more effective selection of specialized centers. Therefore, this paper analyzed the influence of evaluation items on evaluation results using logistic regression analysis and network centrality analysis, in order to prepare improvement plans for the existing evaluation model. As a result of the analysis, improvement measures were derived, such as subdividing evaluation items with low impact, expanding the items, and lowering the allotment of evaluation items with low impact.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_32-A_Study_on_the_Designation_Institution_for_Supercomputer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ọdịgbo Metaheuristic Optimization Algorithm for Computation of Real-Parameters and Engineering Design Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140130</link>
        <id>10.14569/IJACSA.2023.0140130</id>
        <doi>10.14569/IJACSA.2023.0140130</doi>
        <lastModDate>2023-01-31T12:17:56.1430000+00:00</lastModDate>
        
        <creator>Ikpo C V</creator>
        
        <creator>Akowuah E K</creator>
        
        <creator>Kponyo J J</creator>
        
        <creator>Boateng K O</creator>
        
        <subject>Human socio-cultural; nature-inspired; informal-learning; global optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This paper proposes a new population-based global optimization algorithm, the Ọdịgbo Metaheuristic Optimization Algorithm (ỌMOA), for solving complex bounded-constraint, single-objective, real-parameter problems found in most engineering and scientific applications. It is inspired by the human socio-cultural informal discipleship learning pattern inherent in the behavior of the Ndịgbo people: the subject, a primary (Nwa-ahịa), grows through the mercantile cycle into a secondary (Mazi) owing to the intuitive stratagem (dialect: Ịgba) embedded in the age-old cultural model “Ịgba-ọsọ-ahịa” (meaning strategic marketing skills and practice). The model mimics the search routine for satisfying a customer’s need in the market, built into the exploration and exploitation applied in the mathematical model. About 30 complex classical unconstrained functions are tested, comparing results with those of five similar state-of-the-art algorithms. Also, 29 CEC-2017 single-objective real-parameter constrained benchmark problems of serious dimensionality were simulated and compared against the winners of that competition. Validation includes statistical comparison (t-test, p-value), and on 50-dimension constrained problems ỌMOA demonstrated superior performance. TCS (9.18%), WBP (6.3%), PVDP (601%), RGP (319%), RBP (760%), GTCD (202%), HIMMELBLAU (4%), and CDP (88.12%) are the improvements made on 8 CEC-2020 engineering real design problems against the former best performances. ỌMOA is simple to implement and replicate, and is applicable across domains. Also, new, improved optima were obtained for the Shubert and Schaffer 4 functions compared to the known global optima.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_30-Odigbo_Metaheuristic_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Ant Colony Algorithm for Virtual Resource Scheduling in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140128</link>
        <id>10.14569/IJACSA.2023.0140128</id>
        <doi>10.14569/IJACSA.2023.0140128</doi>
        <lastModDate>2023-01-31T12:17:56.1270000+00:00</lastModDate>
        
        <creator>Chunlei Zhong</creator>
        
        <creator>Gang Yang</creator>
        
        <subject>Improved ant colony algorithm; cloud computing; virtual resources; intelligent scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In order to solve the problems of uneven spatial distribution of data nodes and the unclear weight relationship of virtual scheduling features in cloud computing platforms, a virtual resource scheduling method based on an improved ant colony algorithm is designed to improve the performance of virtual resource scheduling in the cloud computing platform. After analyzing changes in the information resource sequence of the cloud computing platform, and according to the STR-Tree partition graph, a simulated annealing-based algorithm is employed to classify the resource types after optimal scheduling into I/O, middle and CPU types, with time span and load balance set as the measurement indexes. The simulation results show that after applying this method, the resources occupied on the main platform are 535 MB, which is much lower than for the other two comparison algorithms, and the method improves allocation rationality, resource balance, maximum queue length and energy consumption. This result indicates that the proposed virtual resource scheduling method can effectively improve the intelligent scheduling of virtual resources in the cloud computing platform.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_28-An_Improved_Ant_Colony_Algorithm_for_Virtual_Resource_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Analysis of Risks and Recent Trends Towards Network Intrusion Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140129</link>
        <id>10.14569/IJACSA.2023.0140129</id>
        <doi>10.14569/IJACSA.2023.0140129</doi>
        <lastModDate>2023-01-31T12:17:56.1270000+00:00</lastModDate>
        
        <creator>D. Shankar</creator>
        
        <creator>G. Victo Sudha George</creator>
        
        <creator>Janardhana Naidu J N S S</creator>
        
        <creator>P Shyamala Madhuri</creator>
        
        <subject>Network; dataset; communication; intrusion detection system; attacks; deep learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In the modern world, information security and communications concerns are growing due to increasing attacks and abnormalities. The presence of attacks and intrusions in the network may affect various fields such as social welfare, economic issues and data storage. Thus, intrusion detection (ID) is a broad research area, and various methods have emerged over the years. Detecting and classifying new attacks among many attack types are complicated tasks in the network. This review categorizes the security threats and challenges in the network by assessing present ID techniques. The major objective of this study is to review conventional tools and datasets for implementing network intrusion detection systems (NIDS) with open source malware scanning software. Furthermore, it examines and compares state-of-the-art NIDS approaches with regard to construction, deployment, detection, attack and validation parameters. This review deals with machine learning (ML) based and deep learning (DL) based NIDS techniques and then deliberates on future research on unknown and known attacks.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_29-Deep_Analysis_of_Risks_and_Recent_Trends_Towards_Network_Intrusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Augmentation for Deep Learning Algorithms that Perform Driver Drowsiness Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140127</link>
        <id>10.14569/IJACSA.2023.0140127</id>
        <doi>10.14569/IJACSA.2023.0140127</doi>
        <lastModDate>2023-01-31T12:17:56.1100000+00:00</lastModDate>
        
        <creator>Ghulam Masudh Mohamed</creator>
        
        <creator>Sulaiman Saleem Patel</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <subject>Data augmentation; deep learning; computer vision; drowsiness detection; road safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Driver drowsiness is one of the main causes of driver-related motor vehicle collisions, as it impairs a person’s concentration whilst driving. With the enhancements of computer vision and deep learning (DL), driver drowsiness detection systems have been developed previously in an attempt to improve road safety. These systems experienced performance degradation under real-world testing due to factors such as driver movement and poor lighting. This study proposes to improve the training of DL models for driver drowsiness detection by applying data augmentation (DA) techniques that model these real-world scenarios. This paper studies six DL models for driver drowsiness detection: four configurations of a Convolutional Neural Network (CNN), namely two custom configurations and the Visual Geometry Group (VGG) architectures VGG16 and VGG19; a Generative Adversarial Network (GAN); and a Multi-Layer Perceptron (MLP). These DL models were trained using two datasets of eye images, where the state of the eye (open or closed) is used to determine driver drowsiness. The performance of the DL models was measured with respect to accuracy, F1-Score, precision, negative class precision, recall and specificity. When comparing the performance of DL models trained on datasets with and without DA in aggregation, it was found that all metrics improved. After removing outliers from the results, it was found that the average improvement in both accuracy and F1-Score due to DA was +4.3%. Furthermore, it is shown that the extent to which the DA techniques improve DL model performance is correlated with the inherent model performance. For DL models with accuracy and F1-Score ≤ 90%, the results show that the DA techniques studied should improve performance by at least +5%.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_27-Data_Augmentation_for_Deep_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Poisson Surface Reconstruction Algorithm based on the Boundary Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140126</link>
        <id>10.14569/IJACSA.2023.0140126</id>
        <doi>10.14569/IJACSA.2023.0140126</doi>
        <lastModDate>2023-01-31T12:17:56.0930000+00:00</lastModDate>
        
        <creator>Zhouqi Liu</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Jin Huang</creator>
        
        <creator>Tianqi Cheng</creator>
        
        <creator>Xinping Guo</creator>
        
        <creator>Yuwei Wang</creator>
        
        <creator>ChunXiang Liu</creator>
        
        <subject>Point cloud; moving least squares; Poisson reconstruction; Neumann boundary constraints</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The usage of point cloud surface reconstruction to generate high-precision 3D models has been widely applied in various fields. In order to deal with the problems of insufficient accuracy, pseudo-surfaces and high time cost caused by traditional surface reconstruction algorithms for point cloud data, this paper proposes an improved Poisson surface reconstruction algorithm based on boundary constraints. For large-density point cloud data obtained from 3D laser scanning, the proposed method first uses an octree instead of the KD-tree to search the near neighborhood; then, it uses Open Multi-Processing (OpenMP) to accelerate the normal estimation based on the moving least squares algorithm; meanwhile, the least-cost spanning tree is employed to adjust the consistency of the normal direction; and finally a screened Poisson algorithm with Neumann boundary constraints is proposed to reconstruct the point cloud. Compared with the traditional methods, the experiments on three open datasets demonstrated that the proposed method can effectively reduce the generation of pseudo-surfaces. The reconstruction time of the proposed algorithm is about 16% shorter than that of the traditional Poisson reconstruction algorithm, and it produces better reconstruction results in terms of quantitative analysis and visual comparison.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_26-An_Improved_Poisson_Surface_Reconstruction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ransomware: Analysis of Encrypted Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140124</link>
        <id>10.14569/IJACSA.2023.0140124</id>
        <doi>10.14569/IJACSA.2023.0140124</doi>
        <lastModDate>2023-01-31T12:17:56.0800000+00:00</lastModDate>
        
        <creator>Houria MADANI</creator>
        
        <creator>Noura OUERDI</creator>
        
        <creator>Abdelmalek Azizi</creator>
        
        <subject>Ransomware; encrypted files; signature; file format; static analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Ransomware is a type of malware that damages a system by encrypting the files on the computer. To regain access, the victim has to pay a ransom for a key to decrypt the data. Once the virus is running on a machine, the user cannot stop it on the first try and may therefore lose all of their files. One of the goals of this work is to detect ransomware based on encrypted files in real time and to minimize the cost of losing files. We attempt to analyze a received file without opening it and viewing its contents; this scanning action can prevent ransomware from spreading through the system. Most ransomware files are sent in the “.exe” format, but in this work we also consider other file formats that can carry malware, for example .doc or .docx, .xls or .xlsx, .ppt or .pptx, .jpg, etc. In fact, an attacker can focus only on the files that contain useful data. In this paper, we identify whether files are suspicious or normal (without opening them) from their headers. To that end, we first analyze each extension separately (.docx, .exe, .pptx, .xlsx, .jpg, etc.) by identifying its header and signature. We then take several files with different extensions and analyze them with a program that detects whether a file is benign or suspicious.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_24-Ransomware_Analysis_of_Encrypted_Files.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MMZ: A Study on the Implementation of Mathematical Game-based Learning Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140125</link>
        <id>10.14569/IJACSA.2023.0140125</id>
        <doi>10.14569/IJACSA.2023.0140125</doi>
        <lastModDate>2023-01-31T12:17:56.0800000+00:00</lastModDate>
        
        <creator>Nur Syaheera Binti Sulaiman</creator>
        
        <creator>Hamzah Asyrani Bin Sulaiman</creator>
        
        <creator>Nor Saradatul Akmar Binti Zulkifli</creator>
        
        <creator>Tuty Asmawanty Binti Abdul Kadir</creator>
        
        <subject>Primary school; educational mathematic game; game components</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Mathematics has always been one of the hardest subjects to learn in school. Students face this same issue regardless of their level of education, and this is the reason why Math Maze Zone (MMZ) was developed. MMZ is a mathematical game-based learning tool for primary school students to help them prepare for their final mathematics examination. The proposed game consists of mathematical questions, and the game design focuses on a maze in which the user has to search for a way out while passing through checkpoints within the maze. Each checkpoint poses a mathematical question, and when the user answers correctly, they can continue their journey to explore the maze. For now, MMZ focuses on one chapter only, Chapter 8: Space and Shape. Although the game covers a single chapter, primary school students can play it to enhance their knowledge and become more engaged with the mathematics subject. The results are taken from the perspective of adults who have a close relationship with standard six students. Five main sections are also identified along with MMZ to ensure a good result.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_25-MMZ_A_Study_on_the_Implementation_of_Mathematical_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Photovoltaic Projects in Latin America</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140123</link>
        <id>10.14569/IJACSA.2023.0140123</id>
        <doi>10.14569/IJACSA.2023.0140123</doi>
        <lastModDate>2023-01-31T12:17:56.0630000+00:00</lastModDate>
        
        <creator>Cristian Le&#243;n-Ospina</creator>
        
        <creator>Heyner Arias-Zarate</creator>
        
        <creator>Cesar Hernandez</creator>
        
        <subject>Evaluation; Latin America; photovoltaic; project</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Photovoltaic solar energy has been booming worldwide due to the scarcity of non-renewable resources; from there arises the need to modernize and innovate the methodologies for the use of energy resources, as well as the correct installation of such systems at an urban or rural level. In recent decades, Latin America has seen advances in the implementation of photovoltaic projects, which is why this document aims to evaluate their feasibility at a technical and technological level, so that they are in accordance with the systems implemented in Asia, Europe, and North America. The analysis determined that the main factors affecting the feasibility of a photovoltaic project are economic and technological, in addition to the adverse impacts found on the ecosystem and the local population. In general, however, it was observed that these are weaknesses that can be corrected, since different countries are working on establishing strategies to educate their communities, so that there is an improvement in the quality of life in sectors with high CO₂ pollution and a lack of fossil fuels.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_23-Performance_Evaluation_of_Photovoltaic_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VHDL based Design of an Efficient Hearing Aid Filter using an Intelligent Variable-Bandwidth-Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140122</link>
        <id>10.14569/IJACSA.2023.0140122</id>
        <doi>10.14569/IJACSA.2023.0140122</doi>
        <lastModDate>2023-01-31T12:17:56.0470000+00:00</lastModDate>
        
        <creator>Ujjwala S Rawandale</creator>
        
        <creator>Sanjay R. Ganorkar</creator>
        
        <creator>Mahesh T. Kolte</creator>
        
        <subject>Hearing aid system; variable bandwidth filter; audiograms; matching error; power consumption; signal filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Filtering techniques have been developed in the hearing aid (HA) field to improve signal clarity and enhance the hearing capacity of deaf people. However, public sounds are highly noisy, so filtering those signals is not an easy task. Hence, the present article aims to develop a novel Ant Lion based Power Noise-Variable Bandwidth Filter (ALPN-VBF) for HA applications. The proposed optimized, power-efficient filter incorporates several functions, such as de-noising and frequency tuning based on the word features. The signal’s noise is removed to the maximum possible extent with the help of a high-pass filter (HPF) and a low-pass filter (LPF). Finally, the developed model is tested with a few audiograms, and the filter parameters are analyzed and compared with other models. The testing results prove that the designed filter is better in frequency tuning and signal transmission than previous approaches, attaining lower delay and a reduced power consumption rate.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_22-VHDL_based_Design_of_an_Efficient_Hearing_Aid_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tree-based Machine Learning and Deep Learning in Predicting Investor Intention to Public Private Partnership</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140121</link>
        <id>10.14569/IJACSA.2023.0140121</id>
        <doi>10.14569/IJACSA.2023.0140121</doi>
        <lastModDate>2023-01-31T12:17:56.0330000+00:00</lastModDate>
        
        <creator>Ahmad Amin</creator>
        
        <creator>Rahmawaty</creator>
        
        <creator>Maya Febrianty Lautania</creator>
        
        <creator>Suraya Masrom</creator>
        
        <creator>Rahayu Abdul Rahman</creator>
        
        <subject>Tree-based machine learning; deep learning; prediction; investor intention; public private partnership</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Public private partnership (PPP) is a government initiative to accelerate the growth of public infrastructure development. However, the scheme exposes the private sector to various risks, including political risk, which in turn affect the financial performance and reporting of participating firms. One of the issues facing the government is the resulting lack of participation from the private sector in such arrangements. Thus, the main objective of this study is to examine machine learning models for predicting private investor intention to participate in the PPP program. Tree-based machine learning and deep learning are two different types of promising algorithms that have proven useful across a wide range of prediction problems but have never been tested on the problem considered in this study. Based on real data on investors in Indonesian listed firms, this paper presents the ability of the selected machine learning algorithms from different assessment points of view. The first assessment concerns the algorithms’ performance in producing accurate predictions. The second assessment identifies the variance of the PPP attributes in each prediction model built with the machine learning algorithms. The performance results show that all the prediction models with the machine learning algorithms and the PPP attributes were well fitted, with R squared above 80%. The findings contribute significant knowledge to scholars in various fields for implementing a more in-depth analysis of machine learning methods and investor prediction.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_21-Tree_based_Machine_Learning_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Dialogue Processing and Act Classification using Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140120</link>
        <id>10.14569/IJACSA.2023.0140120</id>
        <doi>10.14569/IJACSA.2023.0140120</doi>
        <lastModDate>2023-01-31T12:17:56.0170000+00:00</lastModDate>
        
        <creator>Abraheem Mohammed Sulayman Alsubayhay</creator>
        
        <creator>Md Sah Hj Salam</creator>
        
        <creator>Farhan Bin Mohamed</creator>
        
        <subject>Dialogue processing; Act; Arabic language; linear support vector machine; without cue</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Text classification is the technique of grouping documents into classes and groups according to their content. As a result of the vast amount of textual material available online, this procedure is becoming increasingly crucial. The primary challenge in text categorization is enhancing classification accuracy. This task is receiving more attention due to its importance in the development of these systems and in the categorization of Arabic dialogue processing. In this research, attempts were made to define dialogue processing, concentrating on classifying words that are used in dialogue. There are various types of dialogue acts, including hello, farewell, thank you, confirm, and apologies. The words are used in the study without context. The proposed approach recovers the properties of function words by replacing collocations with standard number tokens and each substantive keyword with a numerical approximation token. The classification method for this study uses the linear support vector machine (SVM) technique. The act is classified using the linear SVM technique, and the resulting accuracy is evaluated against that of alternative algorithms. This study encompasses Arabic dialogue act corpora, annotation schemata, and classification problems, and describes the outcomes of contemporary approaches to classifying Arabic dialogue acts. A custom database in the domains of banks, chat, and airline tickets is used to assess the effectiveness of the suggested solutions. The linear SVM approach produced the best results.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_20-Arabic_Dialogue_Processing_and_Act_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Current Multi-factor of Authentication: Approaches, Requirements, Attacks and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140119</link>
        <id>10.14569/IJACSA.2023.0140119</id>
        <doi>10.14569/IJACSA.2023.0140119</doi>
        <lastModDate>2023-01-31T12:17:56.0000000+00:00</lastModDate>
        
        <creator>Ali Hameed Yassir Mohammed</creator>
        
        <creator>Rudzidatul Akmam Dziyauddin</creator>
        
        <creator>Liza Abdul Latiff</creator>
        
        <subject>MFA; authentication; OTP; cancelable; biometrics; security; identity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Nowadays, local and remote access to services on the internet is emerging rapidly and broadly. Authentication represents an important security control requirement, and multi-factor authentication (MFA) is recommended to mitigate the weaknesses of single-factor authentication (SFA). MFA techniques can be classified into two main approaches: biometric-based and non-biometric approaches. However, there is a problem in maintaining the tradeoff between security and accuracy. The reviewed studies on the two authentication mechanisms are found to pull in contradictory directions: in biometric-based authentication, researchers tend to increase recognition accuracy, while in the other direction, researchers propose combining many authentication factors to increase the security layers. The main contribution of this survey is to review and spotlight the current state of the art in both authentication mechanisms for achieving a secure user identity. This paper provides a review of authentication protocols and security requirements, a detailed review and comparison of secure one-time passcode generation and distribution, and a comprehensive review of cancelable biometrics techniques, attacks, and requirements. Finally, it provides a summary of key challenges and future research directions.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_19-Current_Multi_factor_of_Authentication_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Method for Polar Code Construction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140118</link>
        <id>10.14569/IJACSA.2023.0140118</id>
        <doi>10.14569/IJACSA.2023.0140118</doi>
        <lastModDate>2023-01-31T12:17:56.0000000+00:00</lastModDate>
        
        <creator>Issame El Kaime</creator>
        
        <creator>Reda Benkhouya</creator>
        
        <creator>Abdessalam Ait Madi</creator>
        
        <creator>Hassane Erguig</creator>
        
        <subject>Polar codes; SNR; successive cancellation decoding; error correction; low-complexity; code construction; additive white Gaussian noise; Bhattacharya parameter; density evolution with Gaussian approximation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Polar codes are traditionally constructed by calculating the reliability of the channels and then sorting them through intensive calculations to select the most reliable ones. However, these operations can be complicated, especially when the polar code length N becomes large. This paper proposes a new low-complexity procedure for polar code construction over binary erasure and additive white Gaussian noise (AWGN) channels. Using the proposed algorithm, the code construction complexity is reduced from O(N log N) to O(N), where N=2^n (n≥1). The proposed approach involves storing the classification of channels by reliability in a vector of length L, and then deriving the classification of M channels for every M where M&lt;=L. The proposed method is consistent with the Bhattacharya-parameter-based construction and the Density Evolution with Gaussian Approximation (DEGA) based construction. In this paper, the Successive Cancellation Decoding algorithm (SCDA) is used, thanks to its low complexity and high error-correction capability.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_18-An_Optimized_Method_for_Polar_Code_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Input Validation Vulnerabilities in C Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140117</link>
        <id>10.14569/IJACSA.2023.0140117</id>
        <doi>10.14569/IJACSA.2023.0140117</doi>
        <lastModDate>2023-01-31T12:17:55.9870000+00:00</lastModDate>
        
        <creator>Shouki A. Ebad</creator>
        
        <subject>Input validation; buffer overflow; memory mismanagement; safe C functions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Input validation is a fairly universal programming practice that helps reduce the chances of producing protection-related vulnerabilities in software. In this paper, an experiment is conducted to specifically determine the input validation issues found in programs and the problematic functions that lead to such issues. The experiment evaluated 12 arbitrarily selected open source C projects written by different programmers. The top two most common input validation problems are buffer overflow/XSS and potential memory mismanagement. In addition, the functions that caused the first problem are (a) strings/text functions (e.g., strcpy and strcmp), and (b) functions that read from standard input, STDIN (e.g., scanf and gets). In contrast, the functions that caused the second problem are (a) memory allocation/deallocation functions (e.g., memmove and malloc), and (b) file manipulation functions (e.g., fopen and fseek). Furthermore, the goto construct—to a small extent—plays a role. The recommendations are that (a) developers are encouraged to use memory-safe programming languages, otherwise, they should perform different types of checks for the validity of inputs as soon as they are entered, and (b) they should have the required knowledge of secure source code and use tools/suites to manage malformed strings.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_17-Investigating_the_Input_Validation_Vulnerabilities_in_C_Programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DataOps Lifecycle with a Case Study in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140115</link>
        <id>10.14569/IJACSA.2023.0140115</id>
        <doi>10.14569/IJACSA.2023.0140115</doi>
        <lastModDate>2023-01-31T12:17:55.9700000+00:00</lastModDate>
        
        <creator>Shaimaa Bahaa</creator>
        
        <creator>Atef Z. Ghalwash</creator>
        
        <creator>Hany Harb</creator>
        
        <subject>DataOps lifecycle; DataOps in machine learning; DataOps in healthcare; DataOps in data science; feature extraction; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The DataOps methodology has become a solution to many of the difficulties faced by data science and analytics projects. This research introduces a novel DataOps lifecycle along with a detailed description of each phase. The proposed cycle enhances the implementation of data science and analytics projects for achieving business value. As a proof of concept, the new cycle phases are applied in a healthcare case study using the UCI Heart Disease dataset. Two goals are achieved. First, a dataset reduction through feature analytics, in which the four most effective features are selected. Second, different machine learning algorithms are applied to the dataset. The recorded results show that using the four most effective features is comparable with using the full set of thirteen features, and both approaches show high accuracy and sensitivity. The average accuracy with the top four features is 82.32%, versus 84.28% with the thirteen features, meaning the four selected features retain 97.67% of the full-feature accuracy. Likewise, the average sensitivity with the top four features is 87.94%, while that with the thirteen features is 87.12%. The study shows an interesting and significant result: data modeling need not use the full feature set in every data science project, which reduces the dataset.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_15-DataOps_Lifecycle_with_a_Case_Study_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of e-Service Quality Impacts Customer Satisfaction: One-Gate Integrated Service Application in Indonesian Weather Agency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140116</link>
        <id>10.14569/IJACSA.2023.0140116</id>
        <doi>10.14569/IJACSA.2023.0140116</doi>
        <lastModDate>2023-01-31T12:17:55.9700000+00:00</lastModDate>
        
        <creator>Aji Prasetyo</creator>
        
        <creator>Deny Irawan</creator>
        
        <creator>Dana Indra Sensuse</creator>
        
        <creator>Sofian Lusa</creator>
        
        <creator>Prasetyo Adi Wibowo</creator>
        
        <creator>Alivia Yulfitri</creator>
        
        <subject>e-Service quality; one-gate integrated service; e-government; multivariate analysis; Partial Least Square (PLS); Structural Equation Model (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Badan Meteorologi, Klimatologi, dan Geofisika (BMKG) is Indonesia&#39;s weather agency. It operates the One-Gate Integrated Service Application, also known as the Pelayanan Terpadu Satu Pintu (PTSP) BMKG Application, a web-based application built on an e-commerce concept. Its goal is to provide users with information and services related to Meteorology, Climatology, and Geophysics (MCG) using information and communication technology, as part of Indonesia&#39;s move toward e-government. Since January 2020, all MCG service and information activities through PTSP BMKG must be carried out using the application. Using a questionnaire and multivariate analysis, this study aims to determine how service quality affects customer satisfaction with the PTSP BMKG application. The E-S-Qual scale is used to validate the measurement model and confirm it as a sound measure. The results of this study show that customer satisfaction is affected positively and significantly by efficiency, fulfillment, system availability, and privacy simultaneously. Partially, customer satisfaction with the PTSP BMKG application is positively and considerably affected by how well the application works and how well it meets customers&#39; needs. This has implications for the evaluation that BMKG needs to conduct.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_16-Evaluation_of_e_Service_Quality_Impacts_Customer_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bidirectional Recurrent Neural Network based on Multi-Kernel Learning Support Vector Machine for Transformer Fault Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140114</link>
        <id>10.14569/IJACSA.2023.0140114</id>
        <doi>10.14569/IJACSA.2023.0140114</doi>
        <lastModDate>2023-01-31T12:17:55.9530000+00:00</lastModDate>
        
        <creator>Xun Zhao</creator>
        
        <creator>Shuai Chen</creator>
        
        <creator>Ke Gao</creator>
        
        <creator>Lin Luo</creator>
        
        <subject>Multi-kernel learning; support vector machine; bidirectional recurrent neural network; fault diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Traditional neural networks have several weaknesses, such as a failure to mine transformer timing relations, poor generalization in classification, and low classification accuracy on heterogeneous data. To address these issues, this paper proposes a bidirectional recurrent neural network model based on a multi-kernel learning support vector machine. A bidirectional recurrent neural network performs feature extraction, fusing features from preceding and subsequent time steps and outputting salient data. A multi-kernel learning support vector machine then classifies the extracted features. Fusing kernels through a weighted average in the multi-kernel support vector machine improves the accuracy of feature classification. Numerical simulations analyzing the effect of temporal channel length on sequential network diagnostic performance, the effect of multi-kernel learning on the generalization ability of the support vector machine, its influence on heterogeneous data processing capability, and a transformer fault data classification experiment verify the correctness and effectiveness of the bidirectional recurrent neural network based on the multi-kernel learning support vector machine model. The experimental results show that the diagnostic performance of the bidirectional recurrent network based on a multi-kernel learning support vector machine is superior, and the prediction accuracy of the model is improved by more than 1.78% compared with several commonly used neural networks.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_14-Bidirectional_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved SVM Method for Movement Recognition of Lower Limbs by MIMU and sEMG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140113</link>
        <id>10.14569/IJACSA.2023.0140113</id>
        <doi>10.14569/IJACSA.2023.0140113</doi>
        <lastModDate>2023-01-31T12:17:55.9400000+00:00</lastModDate>
        
        <creator>Xu Yun</creator>
        
        <creator>Xu Ling</creator>
        
        <creator>Gao Lei</creator>
        
        <creator>Liu Zhanhao</creator>
        
        <creator>Shen Bohan</creator>
        
        <subject>Surface electromyography; micro inertial measurement unit; support vector machine; voting mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>To improve the movement recognition accuracy of the lower limbs, an optimized SVM recognition method using a voting mechanism is proposed in this paper. First, the CS algorithm is applied to optimize the kernel function parameter and the penalty factor of the SVM model. Then, a voting mechanism is used to ensure the recognition accuracy of the SVM classification algorithm. Finally, experiments were implemented and different classification algorithms were compared. The recognition results show that the lower-limb movement recognition accuracy of the optimized SVM algorithm with the voting mechanism is about 98.78%, which is higher than that of other commonly used classification algorithms with or without a voting mechanism. The lower-limb recognition method proposed in this paper can be applied in fields such as rehabilitation training and smart healthcare.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_13-An_Improved_SVM_Method_for_Movement_Recognition_of_Lower_Limbs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autism Spectrum Disorder Detection: Video Games based Facial Expression Diagnosis using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140112</link>
        <id>10.14569/IJACSA.2023.0140112</id>
        <doi>10.14569/IJACSA.2023.0140112</doi>
        <lastModDate>2023-01-31T12:17:55.9230000+00:00</lastModDate>
        
        <creator>Morched Derbali</creator>
        
        <creator>Mutasem Jarrah</creator>
        
        <creator>Princy Randhawa</creator>
        
        <subject>Autism in children; machine learning; deep learning; convolution neural network (CNN); video games; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In this study, a novel method is proposed for determining whether a child between the ages of 3 and 10 has autism spectrum disorder. Video games have the ability to place a child in an intense, immersive environment. With the expansion of the gaming industry over the past decade, the availability and customization of games for children has increased dramatically. When children play video games, they may display a variety of facial expressions and emotions, and these facial expressions can aid in the diagnosis of autism. Footage of children playing a game may yield a wealth of information regarding behavioral patterns, especially autistic behavior. Any video of a child playing a game can be submitted to the interface, which is powered by the algorithm presented in this work. A dataset of 2,536 facial images of autistic and typically developing children was utilized for this purpose. The accuracy and loss function are presented to examine the 92.3% accurate prediction outcomes generated by the CNN model and deep learning.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_12-Autism_Spectrum_Disorder_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pinpointing Factors in the Success of Integrated Information System Toward Open Government Data Initiative: A Perspective from Employees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140111</link>
        <id>10.14569/IJACSA.2023.0140111</id>
        <doi>10.14569/IJACSA.2023.0140111</doi>
        <lastModDate>2023-01-31T12:17:55.9230000+00:00</lastModDate>
        
        <creator>Wahyu Setiawan Wibowo</creator>
        
        <creator>Ahmad Fadhil</creator>
        
        <creator>Dana Indra Sensuse</creator>
        
        <creator>Sofian Lusa</creator>
        
        <creator>Prasetyo Adi Wibowo Putro</creator>
        
        <creator>Alivia Yulfitri</creator>
        
        <subject>Open data; open government data; OGD; employees’ perspective; ISSM; UTAUT; success factors; acceptance; impact; integrated IS; One Data Policy; BPS; SEM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>As the Supervisory Institution in Statistics, Badan Pusat Statistik (BPS) launched an integrated information system (IS) to exercise the Open Government Data (OGD) initiative and to implement the One Data Policy Act. Although challenges arise, BPS manages to provide more than 120 thousand publicly accessible datasets. Given the success of OGD, many scholars have opted to examine similar issues from the perspective of users/citizens. However, the employees’ perspective remains substantial, as employees are the OGD providers. This research draws on employees’ views to pinpoint influencing factors in the success of OGD adoption through an IS. The authors seek to comprehend the factors from both IS and acceptance angles, thus integrating the Information System Success Model (ISSM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) as the measurement model. This study administers a cross-sectional questionnaire with close-ended questions to obtain data from 253 IS users in BPS. Using structural equation modelling (SEM), the authors find that all ISSM constructs influence the success of the IS, while only one construct from UTAUT plays a pivotal role in defining that success. Information Quality, System Quality, Service Quality, User Satisfaction, and System Use remain paramount to successful implementation, while Performance Expectancy is the sole influencing UTAUT factor affecting success. This study therefore offers substantial benefits by aiding other researchers in OGD-related areas and providing in-depth evidence for practitioners implementing IS for OGD initiatives.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_11-Pinpointing_Factors_in_the_Success_of_Integrated_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Computer Simulation to Study the Behavior of Factors Affecting the Flooding of the Gash River</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140110</link>
        <id>10.14569/IJACSA.2023.0140110</id>
        <doi>10.14569/IJACSA.2023.0140110</doi>
        <lastModDate>2023-01-31T12:17:55.9070000+00:00</lastModDate>
        
        <creator>Abdalilah G. I. Alhalangy</creator>
        
        <subject>Gash river; flood simulation; influencing factors to flood; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>In recent years, the city of Kassala has suffered frequent flooding disasters on the Gash River, the city&#39;s lifeblood; the river&#39;s recurrent flooding has made it a life-threatening nightmare. The importance of this research lies in its being one of the few attempts to discuss and study the causes and effects of the Gash River floods. It aims to identify the factors affecting river floods and proposes an algorithm to simulate flooding by randomly generating the different factors that effectively influence it. The descriptive analytical, analytical-inductive, and analytical-deductive desk-research approaches were used, drawing on the primary statistical method in observation and evaluation, which relies on primary and secondary information to help reach scientific, practical, and objective conclusions. The research produced significant results concerning the problems that the frequent floods of the Gash River pose to the town of Kassala. The study&#39;s results showed a deviation and discrepancy in flood rates during the year, which is a negative indication, and that deposited quantities vary in different proportions from one period to another, posing a significant future threat. The research suggests further solutions that help reduce the problems and their effects. In addition, the study proposes various recommendations that can serve as the basis for future studies to reach the required solutions and goals.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_10-Developing_a_Computer_Simulation_to_Study_the_Behavior_of_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Augmented Reality Mobile Application on Visitor Impact Mediated by Rational Hedonism: Evidence from Subak Museum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140109</link>
        <id>10.14569/IJACSA.2023.0140109</id>
        <doi>10.14569/IJACSA.2023.0140109</doi>
        <lastModDate>2023-01-31T12:17:55.8930000+00:00</lastModDate>
        
        <creator>Ketut Agustini</creator>
        
        <creator>Dessy Seri Wahyuni</creator>
        
        <creator>I Nengah Eka Mertayasa</creator>
        
        <creator>Ni Made Ratminingsih</creator>
        
        <creator>Gede Ariadi</creator>
        
        <subject>System quality; information quality; augmented reality media content quality; rational hedonism; satisfaction experienced</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This study expands our comprehension of museum visitor impact through system quality, information quality, and augmented reality (AR) media content quality in mobile applications. Museums face new challenges from the escalating expectations of their visitors. With the ubiquity of mobile phones, AR has emerged as the latest technology offered to museums to increase their visitor numbers, and these expectations are fostered by modern technologies like AR in mobile apps. Through an online survey of 241 visitors, the study determines the constructs affecting visitor impact within museums&#39; mobile apps and the consequential results of AR-linked visitor impact. The study proposes a new set of AR features, namely system quality, information quality, and AR media content quality, and establishes their influence on rational hedonism and satisfaction experienced, thus enhancing visitor impact. The findings also show that rational hedonism and satisfaction experienced act as full mediators of the relationship between system quality &amp; information quality and visitor impact, while only partially mediating the indirect relationship between AR media content quality and visitor impact. Moreover, the results affirm that AR media content quality within the mobile application is the most critical construct for directly enhancing visitor impact, whereas system quality and information quality have no direct influence. From a practical point of view, AR technology can help museums entice new visitors and generate more income.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_9-The_Effect_of_Augmented_Reality_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Technology for Intelligent Management of Energy, Equipment and Security in Smart House</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140108</link>
        <id>10.14569/IJACSA.2023.0140108</id>
        <doi>10.14569/IJACSA.2023.0140108</doi>
        <lastModDate>2023-01-31T12:17:55.8770000+00:00</lastModDate>
        
        <creator>Fangmin Yuan</creator>
        
        <creator>Yan Zhang</creator>
        
        <creator>Junchao Zhang</creator>
        
        <subject>Internet of things technology; smart homes; intelligent energy management; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The Internet of Things means that many of the everyday devices used by humans share their functions and information with each other, or with humans, by connecting to the Internet. The most important aspect of the Internet of Things is the integration of several technologies and communication solutions: identification and tracking technologies, wired and wireless sensors and active networks, and protocols for increasing the communication and intelligence of objects are its most important parts. This article attempts to determine, among the concepts and technologies of web-based programs built on Internet of Things technology, the parts that can be used to make a house smart. Since investigating the effect of all Internet of Things technologies in smart homes is very time-consuming, after studying and examining various lines of research, the web-based Internet of Things program is selected as the independent variable and its effect on smart home management is investigated. For this purpose, a web-based Internet of Things program for intelligent building energy management, intelligent equipment management, and intelligent security has been designed and implemented. Experimental results show that the proposed method achieves better results than other existing methods, reducing energy usage by 33.8%.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_8-IoT_Technology_for_Intelligent_Management_of_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Patent Text Classification based on Deep Learning and Vocabulary Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140107</link>
        <id>10.14569/IJACSA.2023.0140107</id>
        <doi>10.14569/IJACSA.2023.0140107</doi>
        <lastModDate>2023-01-31T12:17:55.8770000+00:00</lastModDate>
        
        <creator>Ran Li</creator>
        
        <creator>Wangke Yu</creator>
        
        <creator>Qianliang Huang</creator>
        
        <creator>Yuying Liu</creator>
        
        <subject>Text classification; deep learning; network vocabulary; patent; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Patent documents are a special long-text format, and traditional deep learning methods have insufficient feature extraction ability for them, resulting in a weaker classification effect than on ordinary text. This paper therefore constructs a text feature extraction method based on a lexical network, according to the inner relationship between words and classes. Firstly, the inner relationship between words and classes is obtained along linear and probability dimensions, and the lexical network is constructed. Secondly, the lexical network is fused with the features extracted by the deep learning model. Finally, the fused features are trained in the original model to obtain the final classification result. This method is a classification enhancement method that can classify patent text on its own or improve the accuracy of various types of neural networks in patent text classification. Experimental results demonstrate that the accuracy of BERT combined with the lexical network method reaches 82.73%, and the accuracy of the lexical network method combined with CNN and LSTM increases by 2.19% and 2.25%, respectively. In addition, the lexical network feature extraction method accelerated the convergence of the model during training and improved its classification ability on Chinese patent texts.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_7-Patent_Text_Classification_based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation of Cybersecurity Issues of Remote Work during the COVID-19 Pandemic in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140106</link>
        <id>10.14569/IJACSA.2023.0140106</id>
        <doi>10.14569/IJACSA.2023.0140106</doi>
        <lastModDate>2023-01-31T12:17:55.8600000+00:00</lastModDate>
        
        <creator>Gaseb N Alotibi</creator>
        
        <creator>Abdulwahid Al Abdulwahid</creator>
        
        <subject>Cybersecurity issues; investigative survey; remote work; COVID-19 Pandemic; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The COVID-19 pandemic has dramatically changed public life as well as daily work activities across the world. This has led both public and private sectors to adapt by shifting to remote work and adopting new technologies and online services in order to sustain their businesses while saving lives. Unfortunately, a considerable number of those endeavors were undertaken hastily, without due diligence on all relevant aspects, including cybersecurity and privacy. This survey explores the state of practice during the COVID-19 pandemic lockdown and the attendant challenges of using and publishing online services in Saudi Arabia. It also investigates the need for investment in the cybersecurity field, which would increase trust in and the reliability of online services, thus encouraging organizations to move confidently toward real digital transformation.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_6-An_Investigation_of_Cybersecurity_Issues_of_Remote_Work.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Deep-learning based Approach for Automatic Diacritization of Arabic Poems using Sequence-to-Sequence Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140105</link>
        <id>10.14569/IJACSA.2023.0140105</id>
        <doi>10.14569/IJACSA.2023.0140105</doi>
        <lastModDate>2023-01-31T12:17:55.8430000+00:00</lastModDate>
        
        <creator>Mohamed Saleh Mahmoud</creator>
        
        <creator>Nermin Negied</creator>
        
        <subject>Text diacritization; deep learning; sequence-to-sequence; regex; tokenization; ANLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Over the last 10 years, the Arabic language has attracted researchers in the area of Natural Language Processing (NLP). Many research papers have emerged whose main focus is the processing of the Arabic language and its dialects, and Arabic language processing has been given a special name, ANLP (Arabic Natural Language Processing). A lot of ANLP work can be found in the literature, covering almost all NLP applications. Many researchers have also been attracted to Arabic linguistic knowledge, with work extending from basic language analysis to semantic-level analysis. However, Arabic text semantic analysis cannot be performed without considering diacritization, which can greatly affect meaning. Many Arabic texts are written without diacritics, and diacritizing them manually is a very tiresome process that may require an expert. Automatic diacritization systems have become a necessity as an initial step in processing Arabic text for any Arabic language processing application, as diacritization is essential to obtaining readable and understandable Arabic text. For this reason, many researchers have recently worked on building systems and tools that automatically diacritize undiacritized Arabic texts. This work presents a novel deep learning-based sequence-to-sequence model to diacritize undiacritized Arabic poems. The proposed model was tested and achieved a high diacritization accuracy rate.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_5-A_Novel_Deep_Learning_based_Approach_for_Automatic_Diacritization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leaf-based Classification of Important Indigenous Tree Species by Different Feature Extraction Techniques and Selected Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140104</link>
        <id>10.14569/IJACSA.2023.0140104</id>
        <doi>10.14569/IJACSA.2023.0140104</doi>
        <lastModDate>2023-01-31T12:17:55.8300000+00:00</lastModDate>
        
        <creator>Eugene Val D. Mangaoang</creator>
        
        <creator>Jaime M. Samaniego</creator>
        
        <subject>Machine learning; feature extraction; convolutional neural network; leaf classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>The machine learning algorithms k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Back-Propagation (BP) networks, and Convolutional Neural Networks (CNN) are four of the most widely used classifiers. Different sets of features are required as input in different application domains. In this paper, a set of significant leaf features and a classification model were determined that classify important indigenous tree species with high accuracy. Leaf images were acquired using a scanner to control image quality. The image dataset was then duplicated into two sets. The first set was labeled with the correct classes, preprocessed, and segmented in preparation for feature extraction. The extracted leaf features were leaf shape, leaf color, and leaf texture. Training and classification were then performed by KNN, SVM, and BP networks. The second set, in contrast, was left unlabeled for training and classification by CNN. A CNN model was built and chosen with the best training and validation accuracy and the lowest training and validation loss. The study concluded that using all three leaf features for classification by BP networks resulted in 93.48% accuracy with supervised training. However, the CNN achieved a higher accuracy of 98.5%, making it the best approach for classifying tree species from digital leaf images in the context of this study.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_4-Leaf_based_Classification_of_Important_Indigenous_Tree_Species.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Safe Drinking Water and Predicting Water Quality Index using Machine Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140103</link>
        <id>10.14569/IJACSA.2023.0140103</id>
        <doi>10.14569/IJACSA.2023.0140103</doi>
        <lastModDate>2023-01-31T12:17:55.8130000+00:00</lastModDate>
        
        <creator>Mohamed Torky</creator>
        
        <creator>Ali Bakhiet</creator>
        
        <creator>Mohamed Bakrey</creator>
        
        <creator>Ahmed Adel Ismail</creator>
        
        <creator>Ahmed I. B. EL Seddawy</creator>
        
        <subject>Water quality; artificial intelligence; machine learning; deep learning; classification analysis; regression analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Water quality monitoring, analysis, and prediction have emerged as important challenges across the many uses of water in our lives. Recent water quality problems have raised the need for artificial intelligence (AI) models for analyzing water quality, classifying water samples, and predicting the water quality index (WQI). In this paper, a machine learning framework is proposed for classifying drinking water samples as safe or unsafe and for predicting the WQI. The classification tier of the proposed framework consists of nine machine learning models, which have been applied, tested, validated, and compared for classifying drinking water samples into two classes (safe/unsafe) on a benchmark dataset. The regression tier consists of six regression models applied to the same dataset for predicting the WQI. The experimental results showed good classification performance for the nine models, with an average accuracy of 94.7%. However, the results also showed the superiority of the Random Forest (RF) and Light Gradient Boosting Machine (LightGBM) models in recognizing safe drinking water samples in terms of training and testing accuracy compared to the other models in the proposed framework. Moreover, the regression analysis proved the superiority of the LightGBM regression and Extra Trees regression models in predicting the WQI, with training and testing accuracy of 0.99% and 0.95%, respectively, and the mean absolute error (MAE) results proved that the same models achieved a lower error rate, 10%, than the other applied regression models. These findings have significant implications for understanding how novel deep learning models can be developed for predicting water quality, which is also suitable for other environmental and industrial purposes.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_3-Recognizing_Safe_Drinking_Water_and_Predicting_Water_Quality_Index.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving MapReduce Speculative Executions with Global Snapshots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140102</link>
        <id>10.14569/IJACSA.2023.0140102</id>
        <doi>10.14569/IJACSA.2023.0140102</doi>
        <lastModDate>2023-01-31T12:17:55.8130000+00:00</lastModDate>
        
        <creator>Ebenezer Komla Gavua</creator>
        
        <creator>Gabor Kecskemeti</creator>
        
        <subject>MapReduce; Hadoop; speculative executions; stragglers; consistent global snapshots; K-means algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>Hadoop’s MapReduce implementation has been employed for distributed storage and computation. Although efficient for parallelizing large-scale data processing, the challenge of handling poor-performing jobs persists. Hadoop does not fix straggler tasks but instead launches equivalent tasks (also called backup tasks), a process known as speculative execution. Current speculative execution approaches face challenges such as incorrect estimation of task run times, high consumption of system resources, and inappropriate selection of backup tasks. In this paper, we propose a new speculative execution approach that determines task run times with consistent global snapshots and K-means clustering. Task run times are captured during data processing, and two categories of tasks (fast and stragglers) are detected with K-means clustering. A silhouette score is applied as a decision tool to determine when to process backup tasks and to prevent extra iterations of K-means, which reduces the overhead incurred in applying our approach. We evaluated our approach on different data centre configurations with two objectives: i) the overheads caused by implementing our approach and ii) job performance improvements. Our results showed that i) the overheads caused by applying our approach become more negligible as data centre sizes increase, reducing by 1.9%, 1.5% and 1.3% (comparatively) as the size of the data centre and the task run times increased, and ii) longer mapper task runs have better chances for improvement, regardless of the amount of straggler tasks. The graphs of the longer mappers stayed below 10% relative to the disruptions introduced, showing that the effects of the disruptions were reduced and became more negligible while job performance improved.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_2-Improving_MapReduce_Speculative_Executions_with_Global_Snapshots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye-tracking Analysis: College Website Visual Impact on Emotional Responses Reflected on Subconscious Preferences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2023</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2023.0140101</link>
        <id>10.14569/IJACSA.2023.0140101</id>
        <doi>10.14569/IJACSA.2023.0140101</doi>
        <lastModDate>2023-01-31T12:17:55.7970000+00:00</lastModDate>
        
        <creator>Hedda Martina Šola</creator>
        
        <creator>Fayyaz Hussain Qureshi</creator>
        
        <creator>Sarwar Khawaja</creator>
        
        <subject>Neuromarketing; eye-tracking; student behavior; college website analyses; mood intensities; visual impact; website conversions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 14(1), 2023</description>
        <description>This study examined students’ behaviour on the college website and the information they were able to obtain from it. Using an eye-tracking sensor, the study investigates the effectiveness, satisfaction, and efficiency of the university website and collects data regarding users’ visual impacts. The research was carried out using the mobile-phone neuromarketing tools of eye-tracking and facial coding, supplemented by a short memory post-survey. The study focused on two web pages, the homepage and the CARE page, and the analysis results from both were compared and discussed. The results suggest that participants mostly elicited sadness (29.55%), neutrality (33.19%), and puzzlement (13.60%) while browsing the homepage, regardless of the areas of interest (AOI); they also elicited slight disgust (4.33%), fear (3.51%), joy (5.21%), and surprise (29.55%). The heat map for the CARE page reveals that its top was a point of attraction for participants. The study found that participants’ negative feelings were more intense than positive ones when scrolling the homepage. Their pleasant mood intensity increased moderately when they looked at regions containing only photos in a subdued color scheme, or where brighter colors were used to emphasize essential textual information such as upcoming events and student blogs. This reveals that the website’s complexity affects the cognitive load, so making it more accessible would benefit students. According to the students’ responses, changes to the page’s design, color, and text could be implemented.</description>
        <description>http://thesai.org/Downloads/Volume14No1/Paper_1-Eye_Tracking_Analysis_College_Website_Visual_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Improved Xgboost Algorithm for Big Data Analysis of e-Commerce Customer Churn</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312124</link>
        <id>10.14569/IJACSA.2022.01312124</id>
        <doi>10.14569/IJACSA.2022.01312124</doi>
        <lastModDate>2022-12-30T12:34:01.2000000+00:00</lastModDate>
        
        <creator>Li Li</creator>
        
        <subject>E-commerce; customer churn; random forest; XGBoost; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With the increasing cost of acquiring new users for e-commerce enterprises, actively managing customer churn has become an important task. Therefore, based on the distributed gradient boosting library (XGBoost), this research proposes a big data analysis study of e-commerce customer churn. First, it conducts an evaluation analysis of e-commerce customer segmentation and combines the random forest (RF) algorithm to build an RF-XGBoost prediction model for customer churn. Finally, it verifies the performance of the prediction model. The results show that the area under the receiver operating characteristic curve (AUC) value, prediction accuracy, recall rate, and F1 value of the RF-XGBoost model are significantly better than those of the RF, XGBoost, and ID3 decision tree churn prediction models. The average output error of the RF-XGBoost model is 0.42, indicating that the proposed model has a smaller error and higher accuracy. It can make a general assessment of the customer churn of e-commerce enterprises and provide data support for their customer retention work. It is helpful for analyzing the relevant factors affecting customer churn and for formulating targeted customer service programs, thus improving the economic benefits of e-commerce enterprises.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_124-Research_on_Improved_Xgboost_Algorithm_for_Big_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Face Recognition Technology of Subway Automatic Ticketing System based on Neural Network and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312123</link>
        <id>10.14569/IJACSA.2022.01312123</id>
        <doi>10.14569/IJACSA.2022.01312123</doi>
        <lastModDate>2022-12-30T12:34:01.1830000+00:00</lastModDate>
        
        <creator>Shuang Wu</creator>
        
        <creator>Xin Lin</creator>
        
        <creator>Tong Yao</creator>
        
        <subject>Automatic ticketing system; BP; CNN; deep learning; face recognition; SphereFace; SoftMax classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Face recognition technology is the core technology of the subway ticketing system and directly affects how efficiently people can purchase tickets. In order to improve people&#39;s experience of taking public transport, it is necessary to improve the performance of face recognition technology. In this study, the Back-Propagation (BP) algorithm is used to optimize the parameters of the SoftMax classifier of the convolutional neural network, and a branch structure is added to the SphereFace-36 convolutional neural network to extract the local features of the face. Based on the improved neural network, the face recognition system of the subway automatic ticketing system is established. The results show that the optimized model achieves the highest area under the ROC curve for validation and identification; the recognition accuracy of the optimized model on different datasets is 1.0%, 0.7%, 1.1%, 0.9%, and 0.6% higher than that of SphereFace-36, respectively, and its specificity is higher than that of SphereFace-36, with a maximum difference of 9%. The average accuracy of global feature extraction and recognition of the optimized network model is 83.01%. In the simulation experiment, the optimized model can accurately recognize facial features, which gives it high practical value for application in the automatic ticketing system.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_123-Research_on_Face_Recognition_Technology_of_Subway_Automatic_Ticketing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Key Technologies of Smart City Building Interior Decoration Construction based on In-Depth Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312122</link>
        <id>10.14569/IJACSA.2022.01312122</id>
        <doi>10.14569/IJACSA.2022.01312122</doi>
        <lastModDate>2022-12-30T12:34:01.1670000+00:00</lastModDate>
        
        <creator>Li Zhang</creator>
        
        <creator>Aimin Qin</creator>
        
        <subject>Interior decoration; path planning; deep reinforcement learning; reward function; credit allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The intelligentization of building interior decoration construction is of great significance to the construction of smart cities, and robot automation has brought an opportunity for this. Robot self-decoration is the development trend of the future, and one of the key issues involved is the self-planning of the robot’s mobile path. In this regard, the research adopts the proximal policy optimization (PPO) algorithm to improve the self-planning path ability of the decoration robot. A fully connected neural network (FCNN) is used to process the lidar and robot status information. In addition, the reward function and the corresponding Credit Assignment Problem (CAP) model are designed to accelerate the learning process of path planning. Aiming at the dynamic uncertainty of the actual environment, an adaptive loss function is used to build an auxiliary model that predicts environmental changes. The simulation results show that the designed strategy significantly improves the learning efficiency and path-planning success rate of the decoration robot and shows good adaptability to the dynamic environment, which has important reference significance for the practical application of decoration robots.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_122-Research_on_Key_Technologies_of_Smart_City_Building_Interior_Decoration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Intelligent Management Platform for High-Rise Building Construction Based on BIM Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312121</link>
        <id>10.14569/IJACSA.2022.01312121</id>
        <doi>10.14569/IJACSA.2022.01312121</doi>
        <lastModDate>2022-12-30T12:34:01.1530000+00:00</lastModDate>
        
        <creator>Rui Deng</creator>
        
        <creator>Chun’e Li</creator>
        
        <subject>BIM technology; high-rise building construction; digitization; intelligent management; BIM model; RFID technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In this study, a digital intelligent management platform for high-rise building construction based on BIM technology is used for real-time monitoring and management of construction progress and quality. In the data acquisition and processing layer, construction site data are obtained through RFID technology; after cleaning, integration, and other processing, they are input to the BIM model layer to dynamically generate various real-time BIM models, and this real-time BIM model information is passed to the application layer to query, monitor, and correct the construction progress and quality. The results are presented by the display layer. Practical application results show that the real-time BIM models generated by the platform have clear details and can realize the query, monitoring, and correction functions for the construction progress of high-rise buildings, effectively correcting the construction progress according to the monitoring query results to bring it in line with the planned progress. The platform can also effectively realize the visual measurement of the size of each component in construction and monitor the construction quality in real time.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_121-Digital_Intelligent_Management_Platform_for_High_Rise_Building_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Filtering and Enhancement Method of Ancient Architectural Decoration Image based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312119</link>
        <id>10.14569/IJACSA.2022.01312119</id>
        <doi>10.14569/IJACSA.2022.01312119</doi>
        <lastModDate>2022-12-30T12:34:01.1370000+00:00</lastModDate>
        
        <creator>Yanan Wang</creator>
        
        <subject>Neural network; decorative images of ancient buildings; filter enhancement method; encoder; decoder; pixel gray value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Due to poor or uneven ambient lighting, existing decoration image acquisition methods easily produce blurred images. To solve this problem, this paper proposes a neural-network-based filtering enhancement method for ancient architectural decoration images, which preserves image details through contrast enhancement, smoothing noise reduction, and edge sharpening. Based on a convolutional neural network composed of an encoder, a decoder, and skip connections, the residual network and dilated convolution are introduced, and a dilated U-Net neural network is constructed to fuse pixel feature blocks of different levels. The method enhances image contrast according to the gray-level frequency histogram, and, for each pixel to be processed, the median of the gray values of its neighboring pixels is used to filter noise from the ancient building decoration image. The paper also analyzes the joint strength of beams and columns in ancient buildings, calculating their elastic constants and the stress at their joints, taking into account the image texture characteristics of the wood in ancient buildings with mortise-and-tenon beam-column connections. Experimental results show that the proposed method has good noise suppression performance, can effectively capture image detail features, and significantly improves the subjective visual quality of ancient architectural decoration images.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_119-Filtering_and_Enhancement_Method_of_Ancient_Architectural_Decoration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Neural Network-Based Algorithm for Weak Signal Enhancement in Low Illumination Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312120</link>
        <id>10.14569/IJACSA.2022.01312120</id>
        <doi>10.14569/IJACSA.2022.01312120</doi>
        <lastModDate>2022-12-30T12:34:01.1370000+00:00</lastModDate>
        
        <creator>Dawei Yin</creator>
        
        <creator>Jianwei Li</creator>
        
        <subject>Artificial neural network; GAN neural network; low-light image; weak signal enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>There is noise interference in low-illumination images, which makes it difficult to extract weak signals. For this reason, this paper proposes a neural-network-based weak signal enhancement algorithm for low-illumination images. Multi-scale normalization is performed on low-light images, and multi-scale Retinex is used to enhance their weak signals. On this basis, a GAN artificial neural network is used to detect the weak signal in the image, normalization of the weak signal is completed based on the residual network, the self-encoding parameters of the deep residual are generated, and the weak signal enhancement result of the low-illumination image is output. The experimental results show that the proposed method achieves better enhancement of low-illumination images and better image denoising. When the scale value is large, the low-contrast areas of a low-illumination image are enhanced more effectively, as are its saturated areas.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_120-A_Neural_Network_Based_Algorithm_for_Weak_Signal_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of CAD Aided Intelligent Technology in Landscape Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312118</link>
        <id>10.14569/IJACSA.2022.01312118</id>
        <doi>10.14569/IJACSA.2022.01312118</doi>
        <lastModDate>2022-12-30T12:34:01.1200000+00:00</lastModDate>
        
        <creator>Juan Du</creator>
        
        <subject>Landscape design; CAD intelligent technology; engine rendering; image library function; 3D visual reconstruction; digital design; landscape architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Current landscape design methods ignore the depth rendering of scene elements, resulting in a low plant diversity index and low spatial utilization of the landscape spatial pattern. Therefore, this study explores the application of CAD in landscape design. AutoCAD-aided intelligent technology is adopted to display the scene from multiple directions and all angles, with terrain design, planning design, and planting design as the main contents, and a 3D graphics engine is used to render the landscape elements. On this basis, a spatial coordination planning model of the plant landscape is established; the color attribute of the landscape-space staggered pattern is added to the 3D visual reconstruction model through the image library function, and the CAD intelligent technology is applied in landscape design. The results show that the proposed method scored higher in graphic refresh rate, visual brightness, and visual contrast, achieved a higher plant diversity coefficient over multiple iterations, and attained a higher spatial utilization ratio of the landscape pattern than the two reference design methods.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_118-Application_of_CAD_Aided_Intelligent_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Denoising Method of Interior Design Image based on Median Filtering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312117</link>
        <id>10.14569/IJACSA.2022.01312117</id>
        <doi>10.14569/IJACSA.2022.01312117</doi>
        <lastModDate>2022-12-30T12:34:01.1030000+00:00</lastModDate>
        
        <creator>Tao Li</creator>
        
        <subject>Median filtering algorithm; interior design; image denoising; image acquisition architecture; the rough set</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The interior design image generation process is prone to interference from many factors, which reduces the denoising effect and increases the denoising time; therefore, an interior design image denoising method based on the median filtering algorithm is proposed. The architecture for interior design image collection is set up, including a video signal conversion module, a compression coding module, a programmable logic chip module, and a power module; image collection is realized by using sensors to capture interior-design-related video and converting the video signals. Based on the acquisition results, a median filtering algorithm based on rough set theory is used to denoise the interior design images. Experimental results show that the proposed method achieves a better denoising effect: the average signal-to-noise ratio of the interior design images is 54.6 dB and the denoising time is always below 0.3 s, so the method can be widely used in practice.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_117-Denoising_Method_of_Interior_Design_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encrypted Storage Method of Oral English Teaching Resources based on Cloud Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312116</link>
        <id>10.14569/IJACSA.2022.01312116</id>
        <doi>10.14569/IJACSA.2022.01312116</doi>
        <lastModDate>2022-12-30T12:34:01.0900000+00:00</lastModDate>
        
        <creator>Tongsheng Si</creator>
        
        <subject>Cloud platform; oral language; encryption; resource storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With the development of the times, the secure storage of educational resources has become one of the key security problems faced by colleges and universities: traditional resource storage is too expensive, and its encryption and access efficiency are low. To solve this problem, this research takes the cloud platform as the main carrier for the encrypted storage of school teaching resources. On this basis, the convolutional neural network is encrypted and optimized, and the argmax algorithm is improved to increase the access efficiency of encrypted data. Finally, the effectiveness and superiority of the designed method are compared and analyzed through performance testing. The results show that the maximum encryption and decryption time of the encrypted storage model is no more than 20000 ms, significantly less than that of the traditional model; the running time of the argmax output encryption module is 1.76 ms and the running loss is 0.26 MB, which is also less than that of the traditional model. The encrypted storage model therefore has stronger encryption and access performance and is well suited to the encrypted storage of oral English teaching resources, where access volumes are large and updates are frequent.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_116-Encrypted_Storage_Method_of_Oral_English_Teaching_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measurement Tool for Exposure Techniques in X-ray Ionizing Radiation Equipment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312115</link>
        <id>10.14569/IJACSA.2022.01312115</id>
        <doi>10.14569/IJACSA.2022.01312115</doi>
        <lastModDate>2022-12-30T12:34:01.0730000+00:00</lastModDate>
        
        <creator>Edwin Arley Cortes Puentes</creator>
        
        <creator>Andres Gomez Rodriguez</creator>
        
        <creator>Fernando Martinez Santa</creator>
        
        <subject>Voltage; current; X-Ray; kVp; mA; mAs radiation; ESP32; OLED microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This article shows the development of an instrument for measuring the exposure parameters used in radiographic studies of living beings, such as kilovoltage, current, and time. Since radiation protection is a fundamental pillar in the care of patients and of operators of ionizing radiation equipment, it is necessary to calibrate these parameters in equipment that produces X-rays. The measuring instrument is built around an ESP32 microcontroller programmed in Python using the MicroPython project, together with current, distance, and light sensors. The measurement results are displayed through output devices such as organic light-emitting diode (OLED) displays, liquid crystal displays (LCDs), and a web server, so that measurements can be performed safely from the control room, avoiding radiation exposure as much as possible. The kVp measurement performed in this article is for equipment operating at 60 Hz; for high-frequency equipment a new parameterization must be performed to obtain results as close to reality as possible. By using the web server for the transmission of measurement data, radiation exposure was reduced and equipment calibration times were improved. This article presents the measurements and the calculation of the error of each of the exposure parameters of conventional X-ray equipment, such as kVp, mA, mAs, and time. The errors were computed assuming that the X-ray equipment used has zero error, i.e. that it is calibrated and serves as a standard reference.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_115-Measurement_Tool_for_Exposure_Techniques_in_X_ray_Ionizing_Radiation_Equipment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Monitoring System using Internet of Things: Application for Agricultural Management in Benin</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312114</link>
        <id>10.14569/IJACSA.2022.01312114</id>
        <doi>10.14569/IJACSA.2022.01312114</doi>
        <lastModDate>2022-12-30T12:34:01.0730000+00:00</lastModDate>
        
        <creator>Pélagie HOUNGUE</creator>
        
        <creator>Romaric SAGBO</creator>
        
        <creator>Gilles DAHOUE</creator>
        
        <creator>Julien KOMACLO</creator>
        
        <subject>Monitoring system; agricultural space; machine learning; herds passage; motion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>One of the major tools of the new era of digital transformation is the Internet of Things (IoT), through which one can look forward to exploring new technologies in the digital world as well as how they help in improving the real world. This work provides an overview of the approach used to deploy a surveillance system for monitoring any indoor space in general, and agricultural spaces in particular. The entire process starts after motion detection by motion sensors using Machine Learning techniques. This requires coverage and response processing algorithms implemented in the electronic chain. The electronic part of the system relies on the microcontrollers, sensors and communications between them. A mobile application has been developed to allow competent authorities to receive alerts for real-time intervention, with the aim of preventing the destruction of crops by passing herds. The monitoring system’s synoptic diagram and its operation, along with the description of the power modules, are introduced. A prototype has been designed and a performance evaluation performed to show the system’s responsiveness.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_114-Smart_Monitoring_System_using_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drought Forecasting in Alibori Department in Benin using the Standardized Precipitation Index and Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312113</link>
        <id>10.14569/IJACSA.2022.01312113</id>
        <doi>10.14569/IJACSA.2022.01312113</doi>
        <lastModDate>2022-12-30T12:34:01.0570000+00:00</lastModDate>
        
        <creator>Rodrigue B. W. VODOUNON</creator>
        
        <creator>Henoc SOUDE</creator>
        
        <creator>Ossénatou MAMADOU</creator>
        
        <subject>Droughts; forecasting; machine learning; SPI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Drought forecasting provides an early warning for the effective management of water resources to avoid or mitigate drought damage. In this study, drought prediction is carried out in the department of Alibori in the Republic of Benin using the standardized precipitation index (SPI). Two Machine Learning approaches were used to set up the drought prediction models: Random Forest (RF) and Extreme Gradient Boosting (XGBOOST). The performance of these models was reported using metrics such as the coefficient of determination (R2), root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE). The results revealed that the XGBOOST models gave better prediction performance for SPI 3, 6 and 12, with coefficients of determination of 0.89, 0.83 and 0.99, respectively. The root mean square error (RMSE) of the models was 0.29, 0.40 and 0.07, respectively. This work demonstrated the potential of artificial intelligence approaches in the prediction of droughts in the Republic of Benin.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_113-Drought_Forecasting_in_Alibori_Department_in_Benin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aspect-based Sentiment Analysis for Bengali Text using Bidirectional Encoder Representations from Transformers (BERT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312112</link>
        <id>10.14569/IJACSA.2022.01312112</id>
        <doi>10.14569/IJACSA.2022.01312112</doi>
        <lastModDate>2022-12-30T12:34:01.0430000+00:00</lastModDate>
        
        <creator>Moythry Manir Samia</creator>
        
        <creator>Alimul Rajee</creator>
        
        <creator>Md. Rakib Hasan</creator>
        
        <creator>Mohammad Omar Faruq</creator>
        
        <creator>Pintu Chandra Paul</creator>
        
        <subject>Sentiment analysis; Bengali sentiment analysis; Aspect-Based Sentiment Analysis (ABSA); Bengali ABSA; deep learning; Bidirectional Encoder Representations from Transformers (BERT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Public opinion is important for decision-making on numerous occasions for national growth in democratic countries like Bangladesh, the USA, and India. Sentiment analysis is a technique used to determine the polarity of opinions expressed in a text. The more complex stage of sentiment analysis is known as Aspect-Based Sentiment Analysis (ABSA), where it is possible to ascertain both the actual topics being discussed by the speakers as well as the polarity of each opinion. Nowadays, people leave comments on a variety of websites, including social networking sites, online news sources, and even YouTube video comment sections, on a wide range of topics. ABSA can play a significant role in utilizing these comments for a variety of objectives, including academic, commercial, and socioeconomic development. In English and many other popular European languages, there are many datasets for ABSA, but the Bengali language has very few of them. As a result, ABSA research on Bengali is relatively rare. In this paper, we present a Bengali dataset that has been manually annotated with five aspects and their corresponding sentiment. A baseline evaluation was also carried out using the Bidirectional Encoder Representations from Transformers (BERT) model, with 97% aspect detection accuracy and 77% sentiment classification accuracy. For aspect detection, the F1-score was 0.97 and for sentiment classification, it was 0.77.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_112-Aspect_based_Sentiment_Analysis_for_Bengali_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bi-LSTM Model to Recognize Human Activities in UAV Videos using Inflated I3D-ConvNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312111</link>
        <id>10.14569/IJACSA.2022.01312111</id>
        <doi>10.14569/IJACSA.2022.01312111</doi>
        <lastModDate>2022-12-30T12:34:01.0270000+00:00</lastModDate>
        
        <creator>Sireesha Gundu</creator>
        
        <creator>Hussain Syed</creator>
        
        <subject>2D-ConvNet; Bi-LSTM; drone-action; inception-V1; inflated I3D-ConvNet; Kinetics-400</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Human activity recognition in aerial videos is an emerging research area. In this paper, an Inflated I3D-ConvNet (Inflated I3D) and Bidirectional Long Short-Term Memory (Bi-LSTM) based human action recognition model for UAV videos has been proposed. The initial module was pre-trained using the Kinetics-400 video dataset, which consists of 400 classes of human activities and around 400 video clips for each class, culled from real-world and challenging YouTube videos. The proposed Inflated I3D-ConvNet, built on 2D-ConvNet inflation, learns and extracts spatio-temporal features from aerial video while leveraging the architectural design of Inception-V1. The proposed model employs the Bi-LSTM architecture for human action classification on the Drone-Action dataset, a smaller benchmark of UAV-captured videos. This model considerably improves the state-of-the-art results in activity classification using the SoftMax classifier and attains an accuracy of about 98.4%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_111-Bi_LSTM_Model_to_Recognize_Human_Activities_in_UAV_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Multiple Neural Networks for Fault Detection of Sensors Array in Oil Heating Reactor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312109</link>
        <id>10.14569/IJACSA.2022.01312109</id>
        <doi>10.14569/IJACSA.2022.01312109</doi>
        <lastModDate>2022-12-30T12:34:01.0100000+00:00</lastModDate>
        
        <creator>Mai Mustafa</creator>
        
        <creator>Sawsan Morkos Gharghory</creator>
        
        <creator>Hanan Ahmed Kamal</creator>
        
        <subject>Fault detection; sensor array; oil heater reactor; confusion matrix; neural network; recurrent neural network; convolution neural network; bidirectional short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Fault detection is an important issue for revealing failures early and preserving machine components before damage occurs. The processes of fault detection, diagnosis and correction, especially in oil heating reactor sensors, are among the most crucial steps for reliable and proper operation inside the reactor. Fault detection in the sensor array of a heating reactor is considered an important tool to guarantee that the controller can take the best possible action to ensure the quality of the output. In this paper, fault detection for the temperature sensor in an oil heating reactor using different types of faults with different levels is addressed. Multiple approaches based on Neural Networks (NNs), such as the classical Fully Connected Neural Network (FCNN), the Bidirectional Long Short Term Memory network (BiLSTM) based on the Recurrent Neural Network (RNN), and the Convolutional Neural Network (CNN), are suggested for this purpose. The suggested networks are trained and tested on real dataset sequences taken from the sensor array readings of a real heating reactor in Egypt. The performance of the suggested networks is evaluated using different metrics such as the “confusion matrix”, accuracy, precision, etc. The various NNs are simulated, trained and tested in this paper using MATLAB software 2021 and the advanced “DeepNetworkDesigner” tool. The simulation results prove that CNN outperforms the other comparative networks, with classification accuracy reaching 100% for different levels and different types of faults.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_109-Performance_Comparison_of_Multiple_Neural_Networks_for_Fault_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Peer Code Review on Software Maintainability in Open-Source Software: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312110</link>
        <id>10.14569/IJACSA.2022.01312110</id>
        <doi>10.14569/IJACSA.2022.01312110</doi>
        <lastModDate>2022-12-30T12:34:01.0100000+00:00</lastModDate>
        
        <creator>Aziz Nanthaamornphong</creator>
        
        <creator>Thanyarat Kitpanich</creator>
        
        <subject>Open-source software; software maintainability; code review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Recently, open-source software (OSS) has become a considerably popular and reliable source of functionality corrections. OSS also allows software developers to reduce technical debt in software development. However, previous studies have shown that the main problem within OSS development is the lack of systematic processes and formal documents related to system development, such as requirements, designs, and testing. This feature of OSS development causes problems in software quality, such as those related to security and maintainability. In this research, the authors focused on the software’s maintainability because this attribute has the greatest potential to reduce the cost and increase the productivity of the software development process. There is currently no existing research that examines whether OSS developers pay attention to software maintainability. To better understand how OSS developers improve software maintainability, this research aims to answer the question: “Are developers interested in software maintainability under the modern code review of open-source software projects?” To answer the research question, the authors investigated the code review process in which the OSS developers changed the code based on a review of code comments related to maintenance, and collected the sub-characteristics associated with software maintainability from the existing literature. The authors examined the review comments from two OSS projects: Eclipse and Qt. The results suggest that the number of code revisions due to maintenance issues was moderate and that the OSS developers tend to improve source code quality. This direction could be observed from the increasing number of modifications made in response to maintenance-based comments over the years. One implication, therefore, is that OSS project developers may be interested in software maintainability.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_110-The_Impact_of_Peer_Code_Review_on_Software_Maintainability_in_Open_Source_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Task Multi-User Offloading in Mobile Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312108</link>
        <id>10.14569/IJACSA.2022.01312108</id>
        <doi>10.14569/IJACSA.2022.01312108</doi>
        <lastModDate>2022-12-30T12:34:00.9970000+00:00</lastModDate>
        
        <creator>Nouhaila Moussammi</creator>
        
        <creator>Mohamed El Ghmary</creator>
        
        <creator>Abdellah Idrissi</creator>
        
        <subject>Time execution; energy consumption; computation offloading; mobile edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Mobile Edge Computing (MEC) is a new method to overcome the resource limitations of mobile devices by enabling Computation Offloading (CO) with low latency. This paper proposes an effective multi-user multi-task system for offloading computations in MEC that provides guarantees in terms of energy and latency. To begin, radio and computation resources are integrated to ensure the efficient utilization of shared resources when there are multiple users. The energy consumed is positively correlated with the transmission power and the local CPU frequency. These values can be adjusted to accommodate multi-tasking in order to minimize the amount of energy consumed. The current offloading methods are not appropriate when multiple tasks and multiple users have high computing density. Accordingly, this paper proposes an efficient multi-user system that handles multiple tasks and high-density computing. Simulations have confirmed the Multi-User Multi-Task Offloading Algorithm (MUMTOD). The results in terms of execution time and energy consumption are extremely positive, improving the effectiveness of offloading as well as reducing energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_108-Multi_Task_Multi_User_Offloading_in_Mobile_Edge_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Light Settings as Data Augmentations for Automated Scratch Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312107</link>
        <id>10.14569/IJACSA.2022.01312107</id>
        <doi>10.14569/IJACSA.2022.01312107</doi>
        <lastModDate>2022-12-30T12:34:00.9800000+00:00</lastModDate>
        
        <creator>GRAVE Valentin</creator>
        
        <creator>FUKUDA Osamu</creator>
        
        <creator>YEOH Wen Liang</creator>
        
        <creator>OKUMURA Hiroshi</creator>
        
        <creator>YAMAGUCHI Nobuhiko</creator>
        
        <subject>Augmentation technique; deep neural network; image processing; light emission; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The manufacture of plastic parts requires a rigorous visual examination of the production to avoid shipping defective parts to customers. In an attempt to ease the detection of scratches on plastic parts, the prototype of a computer-assisted visual inspection system was developed. The aim of this paper is to introduce how we explored ways to design a semi-automatic system comprising a lamp whose orientations and intensities help in revealing irregularities on subjects that would have been missed with a single light configuration. This process was qualified as “hardware data augmentation”. The pictures collected by our system were then used to train several convolutional neural networks (YOLOv4 algorithm/architecture). Finally, the performances of their models were compared to evaluate the effects of the different light settings and to deduce which parameters are favourable for capturing datasets leading to robust defect detection systems.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_107-Dynamic_Light_Settings_as_Data_Augmentations_for_Automated_Scratch_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of EEG Signals in a Patient with Spastic Cerebral Palsy Undergone Dolphin-Assisted Therapies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312106</link>
        <id>10.14569/IJACSA.2022.01312106</id>
        <doi>10.14569/IJACSA.2022.01312106</doi>
        <lastModDate>2022-12-30T12:34:00.9630000+00:00</lastModDate>
        
        <creator>Oswaldo Morales Matamoros</creator>
        
        <creator>Erika Yolanda Aguilar del Villar</creator>
        
        <creator>Abril Pérez Sánchez</creator>
        
        <creator>Jesús Jaime Moreno Escobar</creator>
        
        <creator>Ricardo Tejeida Padilla</creator>
        
        <subject>Brain–computer interfaces; healthcare system; non-linear dynamics; assisted therapies; infantile spastic cerebral palsy; analysis of EEG signals; power spectrum; self-affine analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Cerebral palsy is a group of developmental disorders that affects a certain percentage of the population, motivating the development of several types of therapies, ranging from conventional physical therapies to alternative therapies such as dolphin-assisted therapies (DAT), in order to improve the quality of life of patients suffering from these disorders. To find scientific evidence of DAT effectiveness, this work develops a four-stage first-order cybernetic model: i) Signal Acquisition, ii) EEG Processing, iii) EEG Exploring and iv) Healthcare Informatics System (HIS-DAT), in order to explore the behavior of electroencephalographic signals from a patient with Infantile Spastic Cerebral Palsy undergoing DAT, as well as bioacoustic signals emitted by a female bottlenose dolphin via specialized transducers or passive sensors for aquatic environments, by using nonlinear mathematical tools. We found that the Power Spectrum of the EEG and hydrophone signals yields similar densities throughout the DAT, and the child’s brain activity increases 3-fold in the higher frequencies when the therapist-dolphin pair interacts with the patient. These findings are supported by the Self-Affine Analysis outcomes, pointing out the emergence of negative correlations in the patient’s brain activity during the whole DAT session, with the greatest changes occurring during DAT.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_106-Analysis_of_EEG_Signals_in_a_Patient_with_Spastic_Cerebral_Palsy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Open Public Sources Text Analysis System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312105</link>
        <id>10.14569/IJACSA.2022.01312105</id>
        <doi>10.14569/IJACSA.2022.01312105</doi>
        <lastModDate>2022-12-30T12:34:00.9500000+00:00</lastModDate>
        
        <creator>Chi Mai Nguyen</creator>
        
        <creator>Phat Trien Thai</creator>
        
        <creator>Van Tuan Nguyen</creator>
        
        <creator>Duy Khang Lam</creator>
        
        <subject>Named entity recognition; entity linking; text analysis system; data mining; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With the emergence of digital newspapers and social media, one can easily suffer from information overload. The enormous amount of data they provide has created several new challenges for computational and data mining, especially in the natural language processing field. Many pieces of research focusing on the information extraction process, such as named entity recognition, entity linking, and text analysis methodologies, are available. However, there is a lack of development for a system to unify all these advanced techniques. The current state-of-the-art systems are either semi-automatic or can only handle short-text documents. Most of them are not real-time or have a long lag. Some of them are domain restricted. Many of them only focus on a single source: Twitter. In this work, we proposed a system that can automatically collect, extract, and analyze information from public source text documents, like news and tweets. The system can be used in different domains, such as scientific research, marketing, and security-related domains.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_105-A_Real_Time_Open_Public_Sources_Text_Analysis_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Learning Architecture for Land Use: Land Cover Images Classification with a Comparative and Experimental Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312104</link>
        <id>10.14569/IJACSA.2022.01312104</id>
        <doi>10.14569/IJACSA.2022.01312104</doi>
        <lastModDate>2022-12-30T12:34:00.9500000+00:00</lastModDate>
        
        <creator>Salhi Wiam</creator>
        
        <creator>Tabiti Khouloud</creator>
        
        <creator>Honnit Bouchra</creator>
        
        <creator>SAIDI Mohamed Nabil</creator>
        
        <creator>KABBAJ Adil</creator>
        
        <subject>Deep Learning; image classification; land use-land cover; MobileNet; ResNet; satellite images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Deep Learning algorithms have become more popular in computer vision, especially in the image classification field. The latter has many applications, such as moving object detection, cancer detection, and the classification of satellite images, also called land use-land cover (LULC) images, which are the scope of this paper. LULC classification represents the most commonly used method for decision making in the sustainable management of natural resources at various geographical levels. However, methods of satellite image analysis are computationally expensive and have not shown good performance. Therefore, this paper, on the one hand, proposes a new CNN architecture called Modified MobileNet V1 (MMN), based on the fusion of MobileNet V1 and ResNet50. On the other hand, it presents a comparative study of the proposed model and the most used transfer learning models, i.e. MobileNet V1, VGG16, DenseNet201, and ResNet50. The experiments were conducted on the Eurosat dataset, and they show that the results of ResNet50 rival those of the other models.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_104-Hybrid_Deep_Learning_Architecture_for_Land_Use.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BBVD: A BERT-based Method for Vulnerability Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312103</link>
        <id>10.14569/IJACSA.2022.01312103</id>
        <doi>10.14569/IJACSA.2022.01312103</doi>
        <lastModDate>2022-12-30T12:34:00.9330000+00:00</lastModDate>
        
        <creator>Weichang Huang</creator>
        
        <creator>Shuyuan Lin</creator>
        
        <creator>Chen Li</creator>
        
        <subject>Vulnerability detection; BERT; software security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Software vulnerability detection is one of the key tasks in the field of software security. Detecting vulnerability in the source code in advance can effectively prevent malicious attacks. Traditional vulnerability detection methods are often ineffective and inefficient when dealing with large amounts of source code. In this paper, we present the BBVD approach, which treats high-level programming languages as another natural language and uses BERT-based models in the natural language processing domain to automate vulnerability detection. Our experimental results on both SARD and Big-Vul datasets demonstrate the good performance of the proposed BBVD in detecting software vulnerability.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_103-BBVD_A_BERT_based_Method_for_Vulnerability_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analytical Model of Induction Motors for Rotor Slot Parametric Design Performance Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312102</link>
        <id>10.14569/IJACSA.2022.01312102</id>
        <doi>10.14569/IJACSA.2022.01312102</doi>
        <lastModDate>2022-12-30T12:34:00.9170000+00:00</lastModDate>
        
        <creator>Ahamed Ibrahim Sithy Juhaniya</creator>
        
        <creator>Ahmad Asrul Ibrahim</creator>
        
        <creator>Muhammad Ammirrul Atiqi Mohd Zainuri</creator>
        
        <creator>Mohd Asyraf Zulkifley</creator>
        
        <subject>Analytical model; efficiency; induction motor; rotor slot parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Induction motors are commonly used in most electricity generation due to their low investment cost. However, the performance of induction motors for different applications highly depends on the rotor design and machine geometry. For example, changing the rotor bar height and width varies the rotor resistance and reactance, thereby leading to variation in motor efficiency. A parametric study on rotor slot geometry parameters, such as opening height, rotor slot depth, and rotor slot width, is carried out to investigate the effect of these parameters on the efficiency of a squirrel cage induction motor. The study is based on an analytical model of a general-purpose squirrel cage induction motor with a specification of 5.5 kW, 60 Hz, and 460 V. The analytical model is developed and simulated within the MATLAB software environment. The effects of each parameter variation on the efficiency of the induction motor are investigated individually as well as all together using a 4D scatter plot. Results show that the efficiency can be improved by up to 0.1% by choosing a suitable setting of the rotor slot parameters relative to the initial settings.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_102-An_Analytical_Model_of_Induction_Motors_for_Rotor_Slot_Parametric_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Detection from Text and Sentiment Analysis of Ukraine Russia War using Machine Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312101</link>
        <id>10.14569/IJACSA.2022.01312101</id>
        <doi>10.14569/IJACSA.2022.01312101</doi>
        <lastModDate>2022-12-30T12:34:00.9030000+00:00</lastModDate>
        
        <creator>Abdullah Al Maruf</creator>
        
        <creator>Zakaria Masud Ziyad</creator>
        
        <creator>Md. Mahmudul Haque</creator>
        
        <creator>Fahima Khanam</creator>
        
        <subject>Emotion detection; racism; sentiment analysis; social media; machine learning; ensemble; Ukraine-Russia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In the human body, emotion plays a critical function. Emotion is the most significant subject in human-machine interaction. In economic contexts, emotion detection is equally essential, and it is crucial in making any decision. Several approaches have been explored to determine emotion in text. People increasingly use social media to share their views, and researchers strive to decipher emotions from this medium. There has been some work on emotion detection from text and on sentiment analysis. Although some work has been done in which emotion has been recognized, there are many things to improve. There is not much work on detecting racism and analyzing sentiment on the Ukraine-Russia war. We suggest a unique technique in which emotion is identified and the sentiment is analyzed. We utilized Twitter data to analyze the sentiment of the Ukraine-Russia war. Our system performs better than prior work, and the study increases the accuracy of detecting emotion. To identify emotion and racism, we used classical machine learning and the ensemble method. An unsupervised approach and NLP modules were used to analyze sentiment. The goal of the study is to detect emotion and racism and also to analyze the sentiment.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_101-Emotion_Detection_from_Text_and_Sentiment_Analysis_of_Ukraine_Russia_War.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of an Unreal Engine 4-Based Smart Traffic Control System for Smart City Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01312100</link>
        <id>10.14569/IJACSA.2022.01312100</id>
        <doi>10.14569/IJACSA.2022.01312100</doi>
        <lastModDate>2022-12-30T12:34:00.8870000+00:00</lastModDate>
        
        <creator>Md. Imtiaz Hossain Subree</creator>
        
        <creator>Md. Rakib Hasan</creator>
        
        <creator>Maksuda Haider Sayma</creator>
        
        <subject>Traffic control system; traffic congestion; CCTV; image processing; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Traffic congestion is a serious problem nowadays, especially in Dhaka city. With the increasing population and automation, it has become one of the most critical issues in our country. There can be many causes of traffic congestion, such as insufficient capacity, large red signal delay, and unrestrained demand, which cause extra time delay, extra fuel consumption, reduced vehicle speed, and financial loss. The traffic control system is one of the most important factors affecting traffic flow. Poor traffic management around congestion hotspots can result in prolonged traffic jams, and small critical locations that are frequent hotspots for congestion are a common byproduct of poorly constructed road networks in many developing countries. In this research, we first offer a straightforward automated image processing method for analyzing CCTV camera image feeds to determine the level of traffic congestion. Our system’s design seeks to use real-time photos from the cameras at traffic intersections to calculate traffic density using image processing, and to adjust the traffic signal based on the current traffic congestion on the road. We suggest tailoring our system to erratic traffic feeds with poor visual quality. Using live surveillance camera feeds from multiple traffic signals in Dhaka city, we demonstrate evidence of this bottleneck breakdown tendency persisting over prolonged time frames across multiple locations. To partially address this problem, we offer a locally adaptive algorithm that coordinates signal timing behavior in a restricted area and can locally minimize congestion collapse by handling time-variant traffic surges. Using simulation-based research on basic network topologies, we show how our local decongestion protocol may boost road capacity and avoid traffic collapse in limited scenarios.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_100-Design_and_Implementation_of_an_Unreal_Engine_4_Based_Smart_Traffic_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synthesis of Comments to Social Media Posts for Business Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131298</link>
        <id>10.14569/IJACSA.2022.0131298</id>
        <doi>10.14569/IJACSA.2022.0131298</doi>
        <lastModDate>2022-12-30T12:34:00.8700000+00:00</lastModDate>
        
        <creator>Peter Adebowale Olujimi</creator>
        
        <creator>Abejide Ade-Ibijola</creator>
        
        <subject>Natural language comprehension; social media; natural language processing; customer engagements; artificial intelligence; comment generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Responding to enormous numbers of comments on social media platforms is one major challenge facing businesses in recent times, especially when dealing with irate consumers. Customers have increasingly adopted social networks as a platform for expressing their concerns and posting comments on business pages, posing a great challenge for customer support agents and digital marketers alike. Analyzing and responding manually to these enormous comments is a time-consuming task, necessitating the adoption of an Artificial Intelligence (AI) tool that can complete the task swiftly: automatic comprehension of social media posts for comment generation. In this paper, we present algorithms and a tool for the automatic comprehension of customer tweets and the generation of responses to these tweets. This was done in two stages: first, using existing Natural Language Processing (NLP) libraries to preprocess and tokenize these tweets, and second, using rule-based algorithms to find a matching response for each customer based on the array of tokens extracted from the customer’s tweet. This was built into a tool called Comment-Synthesizer. This tool takes unfiltered tweets as input, preprocesses the tweets, and matches each tweet with predefined responses using a rule-based algorithm, with a success rate of 76%. This tool, if implemented in a desktop automation application, can be used to respond automatically to a large volume of customers’ social media comments/posts.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_98-Synthesis_of_Comments_to_Social_Media_Posts_for_Business_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PDE: A Real-Time Object Detection and Enhancing Model under Low Visibility Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131299</link>
        <id>10.14569/IJACSA.2022.0131299</id>
        <doi>10.14569/IJACSA.2022.0131299</doi>
        <lastModDate>2022-12-30T12:34:00.8700000+00:00</lastModDate>
        
        <creator>Zhiying Li</creator>
        
        <creator>Shuyuan Lin</creator>
        
        <creator>Zhongming Liang</creator>
        
        <creator>Yongjia Lei</creator>
        
        <creator>Zefan Wang</creator>
        
        <creator>Hao Chen</creator>
        
        <subject>Low-visibility conditions; image enhancement; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Deep object detection models are important tools that can accurately detect objects and frame them for the user in real time. However, in low-visibility conditions, such as fog or low light, the captured images are underexposed and blurred, which negatively affects recognition accuracy and makes the images hard for humans to see. In addition, image enhancement models are complex and time-consuming, so running an image enhancement model before the object recognition model cannot meet real-time requirements. Therefore, we propose the Parallel Detection and Enhancement model (PDE), which detects objects and enhances poorly visible images in parallel and in real time. Specifically, we introduce a specially designed tiny prediction head along with coordinated attention and multi-stage concatenation modules to better detect underexposed and blurred objects. For the parallel image enhancement model, we adaptively develop improved weighting evaluation models for each “3D Lookup Table” module. As a result, PDE achieves better detection accuracy for poorly visible objects and a more user-friendly view in real time. Experimental results show that PDE has significantly better object recognition performance than the state-of-the-art on real foggy (8.9%) and low-light (20.6%) datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_99-PDE_A_Real_Time_Object_Detection_and_Enhancing_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Deep Learning in Arabic Text Classification Sentiment Analysis of Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131297</link>
        <id>10.14569/IJACSA.2022.0131297</id>
        <doi>10.14569/IJACSA.2022.0131297</doi>
        <lastModDate>2022-12-30T12:34:00.8530000+00:00</lastModDate>
        
        <creator>Nehad M. Ibrahim</creator>
        
        <creator>Wael M. S. Yafooz</creator>
        
        <creator>Abdel-Hamid M. Emara</creator>
        
        <creator>Ahmed Abdel-Wahab</creator>
        
        <subject>Arabic sentiment analysis; machine learning; convolutional neural networks; word embedding; Arabic word2Vec; long short-term memory; AraBERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The number of social media users has increased. These users share and reshare their ideas in posts, and this information can be mined and used by decision-makers in different domains, who analyse and study user opinions on social media networks to improve the quality of products or to study specific phenomena. During the COVID-19 pandemic, social media was used to make decisions to limit the spread of the disease using sentiment analysis. Substantial research on this topic has been done; however, there are limited Arabic textual resources on social media, which has resulted in fewer quality sentiment analyses of Arabic texts. This study proposes a model for Arabic sentiment analysis using a Twitter dataset and deep learning models with Arabic word embedding. It applies supervised deep learning algorithms to the proposed dataset. The dataset contains 51,000 tweets, of which 8,820 are classified as positive, 37,360 as neutral, and 8,820 as negative; after cleaning, 31,413 tweets remain. The experiment was carried out by applying the deep learning models Convolutional Neural Network and Long Short-Term Memory, while comparing the results with those of machine learning techniques such as Naive Bayes and Support Vector Machine. The accuracy of the AraBERT model is 0.92 when tested on 3,505 tweets.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_97-Utilizing_Deep_Learning_in_Arabic_Text_Classification_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Optimization Problem of Agricultural Product Logistics based on Genetic Algorithm under the Background of Sharing Economy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131295</link>
        <id>10.14569/IJACSA.2022.0131295</id>
        <doi>10.14569/IJACSA.2022.0131295</doi>
        <lastModDate>2022-12-30T12:34:00.8400000+00:00</lastModDate>
        
        <creator>Na Wang</creator>
        
        <subject>Sharing economy; two-level programming; genetic algorithm; path optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>China’s national development and reform commission issued the “logistics industry adjustment and revitalization plan” in 2009 to support the development of agricultural product logistics and distribution centers, and China’s agricultural product logistics and distribution have entered a stage of rapid development. With the rise of the sharing economy, logistics has become a bottleneck restricting the further development of agricultural product distribution. In order to realize effective cooperation among the main bodies of agricultural product logistics distribution, improve distribution efficiency, and reduce distribution cost, a logistics distribution optimization model based on two-level programming and a genetic algorithm is proposed. A two-level programming model is constructed by combining qualitative and quantitative methods, theory and examples, and insertion and deletion operators are introduced to optimize the genetic algorithm. The research results show that, compared to the benchmark algorithm, the optimized genetic algorithm achieves a 54.55% increase in convergence speed, a 1.08% increase in performance, and a 54.231% reduction in path length. It effectively improves the efficiency of path planning and saves planning cost, and the final target value is reduced by 48.19%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_95-Research_on_the_Optimization_Problem_of_Agricultural_Product_Logistics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Research of Trademark Recognition Technology based on SIFT Feature Recognition Algorithm in Advertising Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131296</link>
        <id>10.14569/IJACSA.2022.0131296</id>
        <doi>10.14569/IJACSA.2022.0131296</doi>
        <lastModDate>2022-12-30T12:34:00.8400000+00:00</lastModDate>
        
        <creator>Weina Zhang</creator>
        
        <subject>SIFT algorithm; trademark recognition; advertising design; support vector machine; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Nowadays, due to the sharp increase in the number of advertising designs, creative duplication can easily occur in advertising design. If this situation is not discovered in time, it may cause legal disputes and damage the reputation and property of the enterprise. In view of this situation, this paper proposes a trademark recognition technology based on the SIFT feature recognition algorithm to avoid duplication of advertisement designs and the resulting copyright disputes. To address the defect that the dimension of the image feature vector extracted by the SIFT algorithm is too high, the principal component analysis method is used to reduce its dimension. To address the unsatisfactory image recognition accuracy of the SIFT algorithm, a support vector machine is used to classify the extracted feature vectors, thereby improving the image recognition rate. Based on the above, a trademark recognition model is built. The research results show that the recognition accuracy of the model reaches 98.82%, which is 0.66% and 0.58% higher than that of Model 1 and Model 2, respectively; the AUC value of Model 3 is 0.962, 0.039 higher than Model 2 and 0.107 higher than Model 1. These results show that the proposed trademark recognition model can better identify similar advertising designs, thereby avoiding design duplication and legal disputes.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_96-Application_Research_of_Trademark_Recognition_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Clutter Reduction in Sampling Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131294</link>
        <id>10.14569/IJACSA.2022.0131294</id>
        <doi>10.14569/IJACSA.2022.0131294</doi>
        <lastModDate>2022-12-30T12:34:00.8230000+00:00</lastModDate>
        
        <creator>Nur Nina Manarina Jamalludin</creator>
        
        <creator>Zainura Idrus</creator>
        
        <creator>Zanariah Idrus</creator>
        
        <creator>Ahmad Afif Ahmarofi</creator>
        
        <creator>Jahaya Abdul Hamid</creator>
        
        <creator>Nurul Husna Mahadzir</creator>
        
        <subject>Sampling technique; probability sampling; non-probability sampling; data clutter; big data; data visualization; data reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Visualization is a process of converting data into visual form such that data patterns can be extracted from the data. Data patterns are knowledge hidden behind the data. However, when data is big, it tends to overlap and clutter on visualizations, which distorts the data patterns. Data is overly crowded on the visualization, so it becomes a challenge to extract knowledge patterns. Besides, big data is costly to visualize because it requires expensive hardware facilities due to its size. Moreover, it is time-consuming to plot the data, since it takes time for the data to render on visualizations. For these reasons, there is a need to reduce the size of big datasets while maintaining the data patterns. There are many methods of data reduction: preprocessing operations, dimension reduction, compression, network theory, redundancy elimination, data mining, machine learning, data filtering, and sampling techniques. However, the most commonly used data reduction technique is the sampling technique, which derives samples from data populations. Thus, the sampling technique is chosen as the subject of study for data reduction in this paper. However, the existing studies are scattered and have not been discussed in a single paper. Consequently, the objective of this paper is to collect them in a single paper for further analysis, in order to understand them in detail. To achieve this objective, three interdisciplinary databases, ACM Digital Library, IEEE Xplore, and ScienceDirect, were selected. From these databases, a total of 48 studies from the years 2017 to 2021 were extracted. In addition to sampling techniques, this paper also covers big data, data visualization, data clutter, and data reduction.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_94-Data_Clutter_Reduction_in_Sampling_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Multi-Scale Convolution Neural Network Optimization Image Defogging Algorithm in Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131293</link>
        <id>10.14569/IJACSA.2022.0131293</id>
        <doi>10.14569/IJACSA.2022.0131293</doi>
        <lastModDate>2022-12-30T12:34:00.8070000+00:00</lastModDate>
        
        <creator>Weihan Zhu</creator>
        
        <subject>Multi-scale convolution neural network; complex road traffic scene; image defogging; dark channel; gray enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>To improve the ability to detect and identify smog images in complex road traffic scenes, smog images need to be defogged, and an optimized image defogging algorithm based on a multi-scale convolutional neural network (MCNN) is proposed. A physical model of road traffic scene smog scattering is constructed, and the image is divided into a sky area, a road surface area, and a road-sky boundary area, where the road-sky boundary line is the boundary between the road surface and the sky area. The dark channel of the traffic scene smog image is established by Canny edge detection and MCNN optimization, and the smog image undergoes detail compensation and gray enhancement processing based on prior knowledge. After substituting the atmospheric light value and the transmittance map into the atmospheric scattering model, the MCNN learning model is combined to realize the filtering and defogging optimization of smog images in complex road traffic scenes. The color saturation, degree of defogging, peak signal-to-noise ratio (PSNR), texture effect, and other aspects of the image are taken as test indexes for the simulation experiment. The simulation results show that the color saturation, degree of defogging, and image definition of the defogged haze images in complex road traffic scenes are higher using this method, which improves the output PSNR of the defogged haze images and has good application value in image defogging.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_93-Application_of_Multi_Scale_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Intellectual Dichotomiser 3 Decision Tree Algorithm Model for Financial Analysis of Colleges and Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131291</link>
        <id>10.14569/IJACSA.2022.0131291</id>
        <doi>10.14569/IJACSA.2022.0131291</doi>
        <lastModDate>2022-12-30T12:34:00.7930000+00:00</lastModDate>
        
        <creator>Sujuan Guo</creator>
        
        <subject>Financial analysis; ID3; Decision tree; model; colleges and universities; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The rapid development of information construction in colleges has promoted the processing and analysis of large amounts of data in college systems. The decision tree algorithm is often used in the field of financial data analysis, but it has a bias in attribute selection. To address this defect, the ID3 decision tree algorithm is selected for weighted improvement and is optimized with the Synthetic Minority Oversampling Technique (SMOTE) algorithm and the Bagging algorithm to balance the positive and negative training samples, thus obtaining the DSB-ID3 financial analysis model. Using this model to analyze the financial data of a university, its G value and F value are both about 0.78, the recognition accuracy for normal samples is 0.7345, and the total recognition accuracy is 0.7893, which are the highest among the four models. Compared with other models, the model designed in this study has significantly improved classification performance, and its distribution is the most centralized, showing superior stability. The experimental results show that the classification effect of the model designed in this study is the best, and it shows superior accuracy and stability in the analysis of financial data. Its superior classification performance shows the potential of decision tree algorithm optimization and the feasibility and necessity of improving it. From the experimental data, it can be seen that the service life and parameters of the model designed in this study are clearly better than those commonly used in the college and university financial analysis industry. Overall, this model provides a practical reference for the application of decision tree optimization in college financial analysis and greatly improves the accuracy of financial system data analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_91-Research_on_Intellectual_Dichotomiser_3_Decision_Tree_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering-based Automated Requirement Trace Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131292</link>
        <id>10.14569/IJACSA.2022.0131292</id>
        <doi>10.14569/IJACSA.2022.0131292</doi>
        <lastModDate>2022-12-30T12:34:00.7930000+00:00</lastModDate>
        
        <creator>Nejood Hashim Al-walidi</creator>
        
        <creator>Shahira Shaaban Azab</creator>
        
        <creator>Abdelaziz Khamis</creator>
        
        <creator>Nagy Ramadan Darwish</creator>
        
        <subject>Requirements traceability; information retrieval; term mismatch problem; trace retrieval; TraceLab; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The benefits of requirement traceability are well known and documented. The traceability links between requirements and code are fundamental in supporting different activities in the software development process, including change management and software maintenance. These links can be obtained by manual or automatic means. Manual trace retrieval is a time-consuming task, whereas automatic trace retrieval can be performed via various tools such as information retrieval or machine learning techniques. However, a major concern associated with automated trace retrieval is the low precision problem, primarily caused by term mismatches across the documents to be traced. This study proposes an approach that addresses the term mismatch problem to improve trace retrieval accuracy. The proposed approach uses clustering in the automated trace retrieval process and is experimentally evaluated against previous benchmarks. The results show that the proposed approach improves trace retrieval precision.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_92-Clustering_based_Automated_Requirement_Trace_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Machine Learning in Remote Sensing for Agriculture Drought Monitoring: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131290</link>
        <id>10.14569/IJACSA.2022.0131290</id>
        <doi>10.14569/IJACSA.2022.0131290</doi>
        <lastModDate>2022-12-30T12:34:00.7770000+00:00</lastModDate>
        
        <creator>Aries Suharso</creator>
        
        <creator>Yeni Hediyeni</creator>
        
        <creator>Suria Darma Tarigan</creator>
        
        <creator>Yandra Arkeman</creator>
        
        <subject>Drought monitoring; exploration of the use of machine learning; Landsat imagery; remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Agricultural drought is still difficult to anticipate, even though there have been developments in remote sensing technology, especially satellite imagery, which is useful for farmers in monitoring crop conditions. Openly and freely available satellite imagery still has weaknesses: its resolution is low and coarse, it suffers atmospheric disturbances in the form of cloud cover, and the locations and periods of image acquisition differ from those of weather stations on Earth. This problem is a challenge for researchers trying to monitor agricultural drought conditions through satellite imagery. One approach that has recently been used is high-computation techniques through machine learning, which can predict satellite image data according to the mapped conditions of land types and crops in the field. Furthermore, using time series data from satellite imagery, a predictive model of crop cycles can be developed regarding future crop drought conditions. Through this technology, we can encourage farmers to make decisions to anticipate the dangers of agricultural drought. Unfortunately, exploration of the use of machine learning for the classification and prediction of agricultural drought conditions has not been fully conducted, and the existing methods can still be improved. This review aims to present a comprehensive overview of the methods used to monitor agricultural drought using remote sensing and machine learning, which are the subjects of future research.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_90-The_Role_of_Machine_Learning_in_Remote_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Force-directed Graph with Weighted Nodes for Scholar Network Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131289</link>
        <id>10.14569/IJACSA.2022.0131289</id>
        <doi>10.14569/IJACSA.2022.0131289</doi>
        <lastModDate>2022-12-30T12:34:00.7600000+00:00</lastModDate>
        
        <creator>Khalid Al-Walid Mohd. Aris</creator>
        
        <creator>Chitra Ramasamy</creator>
        
        <creator>Teh Noranis Mohd. Aris</creator>
        
        <creator>Maslina Zolkepli</creator>
        
        <subject>Force-directed graph; weighted network; citation network; D3 algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With the growth of portals and venues for publishing academic work, the number of academic publications has grown exponentially in recent years. Effective exploration and fast navigation in collections of academic publications have become an urgent need, helping academic researchers find publications related to their research and the surrounding community. A scholar network visualization approach is proposed to help users explore a large number of academic publications with respect to the strength of the relationship between publications. The approach is realized by creating a web-based interface using the D3 JavaScript library, which allows the visualization to show how data are connected to each other more accurately than the conventional lines of data seen in traditional data representations. The proposed approach visualizes data by incorporating a force-directed graph with weighted nodes and vertices to give more descriptive information about millions of raw data items, such as author names, publication title, publication year, publication venue and number of citations from the scholar network dataset. By introducing a weighted relationship into the network visualization, the proposed approach can give more insightful detail on each publication, such as identifying a highly cited publication, by looking at and exploring the generated interactive graph. The proposal is targeted to be incorporated into a larger-scale scholar network analytical dashboard that can offer various visualization approaches under one flagship application.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_89-Dynamic_Force_directed_Graph_with_Weighted_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded Monitoring Method of Greenhouse Environment based on Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131287</link>
        <id>10.14569/IJACSA.2022.0131287</id>
        <doi>10.14569/IJACSA.2022.0131287</doi>
        <lastModDate>2022-12-30T12:34:00.7470000+00:00</lastModDate>
        
        <creator>Weixue Liu</creator>
        
        <subject>Control chip; Detection node; embedded; greenhouse environmental monitoring; missing data; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>To address the low accuracy of greenhouse environmental monitoring, an embedded monitoring method based on a wireless sensor network is studied. The embedded microprocessor S3C2410 is selected as the control chip for greenhouse environmental monitoring. The wireless sensor network is composed of wireless detection nodes, a sink node and a remote-control terminal. Temperature and humidity sensors, a CO2 concentration sensor and a light intensity sensor are set as the detection nodes of the wireless sensor network. Each detection sensor node is configured with a wireless communication module to form a wireless sensor network that realizes the data communication of environmental monitoring. The distribution map method is selected to eliminate missing data from the greenhouse environmental monitoring data collected by the wireless sensor network, and the weighted average method is used to fuse the monitoring data after the missing data are eliminated, obtaining the final greenhouse environmental monitoring results. The experimental results show that this method can effectively detect environmental data such as the temperature, humidity and light intensity of a greenhouse, and the relative error of each measurement is less than 1%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_87-Embedded_Monitoring_Method_of_Greenhouse_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Application of Virtual Technology-based Posture Detection Device in Swimming Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131288</link>
        <id>10.14569/IJACSA.2022.0131288</id>
        <doi>10.14569/IJACSA.2022.0131288</doi>
        <lastModDate>2022-12-30T12:34:00.7470000+00:00</lastModDate>
        
        <creator>Hongming Guo</creator>
        
        <creator>Jingang Fan</creator>
        
        <subject>Virtual reality; pose recognition; swimming; neural network; sliding window</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With socio-economic development, the national demand for leisure sports has increased, and swimming is one of the popular choices. To help swimming beginners understand correct swimming posture more quickly and directly, hybrid neural network algorithms based on sliding window detection and deep residual networks are designed in this study, and two corresponding virtual image classification models of the swimmer’s posture are built on these algorithms. To reduce the noise of the input data and the cost of data collection, virtual reality technology is used to convert the swimmer’s swimming pose image into an image model in virtual reality space as the input data of the algorithm. The performance test results show that the classification accuracies of the swimmer pose recognition models based on the PTP-CNN and SW-CNN algorithms designed in this research are 97.48% and 96.72% respectively on the test set, much higher than the other comparison models, and the model built on the PTP-CNN algorithm has the fastest computation speed. The results of this research can be applied to assist swimming pose recognition when teaching beginner swimmers.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_88-Research_on_the_Application_of_Virtual_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Application in Forecasting Financial Investment of e-Commerce Industry for Sustainability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131286</link>
        <id>10.14569/IJACSA.2022.0131286</id>
        <doi>10.14569/IJACSA.2022.0131286</doi>
        <lastModDate>2022-12-30T12:34:00.7300000+00:00</lastModDate>
        
        <creator>Yanfeng Zhang</creator>
        
        <subject>Sustainable development; big data; E-commerce industry; financial investment forecast; deep belief network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>With the rapid development of e-commerce, financial investment forecasting in the e-commerce industry has gradually become a concern of relevant personnel. Based on a deep belief network (DBN), the study proposes a PVD prediction model. To construct the training and test sample sets, the PLR_VIP algorithm is applied and min-max normalization is performed on the original financial time series. To determine appropriate network parameters, the DBN is trained and tested, and Elliott wave patterns are then predicted from the financial time series. The experimental results show that the MSE of the PVD model is 0.4015 and the prediction accuracy is 70.21%, indicating that it can efficiently and accurately identify the Elliott wave pattern of financial time series. When the prediction results of the PVD model are compared with those of five other models, the values of the four evaluation indicators of PVD are the lowest among all models, at 0.6336, 0.4015, 0.9052, and 29.79%, respectively. Compared with the training error curves of the other models, the error curve of the DBN is smoother and the training error is smaller, showing higher stability, faster convergence, and higher reliability and accuracy, with prediction performance significantly better than the other models. The experiments show that, in the context of sustainable development, the PVD forecasting model proposed in the study performs well in financial investment forecasting, providing a reference for the development of financial investment forecasting in the e-commerce industry.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_86-Big_Data_Application_in_Forecasting_Financial_Investment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic NoSQL Application Program Interface for Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131285</link>
        <id>10.14569/IJACSA.2022.0131285</id>
        <doi>10.14569/IJACSA.2022.0131285</doi>
        <lastModDate>2022-12-30T12:34:00.7130000+00:00</lastModDate>
        
        <creator>K. ElDahshan</creator>
        
        <creator>E. K. Elsayed</creator>
        
        <creator>H. Mancy</creator>
        
        <creator>A. AbuBakr</creator>
        
        <subject>NoSQL database; formatting; semantic technology; data integration; pandemic prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Complexity, heterogeneity, schemalessness, data visualization, and the extraction of consistent knowledge from Big Data are the biggest challenges in NoSQL databases. This paper presents a general semantic NoSQL Application Program Interface that integrates NoSQL databases and converts them to a semantic representation. The generated knowledge base is suitable for visualization and knowledge extraction from different Big Data sources. The authors use a case study of COVID-19 pandemic prediction and weather occurrences in various parts of the world to illustrate the suggested API, and find a correlation between COVID-19 spread and deteriorating weather. According to the experimental findings, the API&#39;s performance is sufficient for heterogeneous Big Data.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_85-A_Semantic_NoSQL_Application_Program_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on High Voltage Cable Condition Detection Technology based on Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131283</link>
        <id>10.14569/IJACSA.2022.0131283</id>
        <doi>10.14569/IJACSA.2022.0131283</doi>
        <lastModDate>2022-12-30T12:34:00.7000000+00:00</lastModDate>
        
        <creator>Yang Zhao</creator>
        
        <creator>Qing Liu</creator>
        
        <creator>Tong Shang</creator>
        
        <creator>Yingqiang Shang</creator>
        
        <creator>Rong Xia</creator>
        
        <creator>Shuai Shao</creator>
        
        <subject>Wireless sensor; high-voltage cable; fault diagnosis; particle swarm algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The development and progress of modern society cannot be achieved without the support of electric power resources; at present, electric power is the most important energy source for promoting social development and maintaining human life. As a key unit of the power distribution and transmission system, the high-voltage cable of the power grid undertakes the task of supplying power resources to the whole grid. Therefore, based on a transmission line fault diagnosis framework for Wireless Sensor Networks (WSN), a high-voltage cable path condition monitoring scheme using LoRa technology is proposed. Three high-voltage cable condition monitoring periods are proposed according to differences in the high-voltage cable fault rate, and the delay and energy consumption of the monitoring system are optimized by a multi-objective particle swarm optimization algorithm. The experimental results show that the proposed high-voltage cable detection technology can switch its working mode according to different environments, with a data communication packet loss rate of less than 5%, while the detection platform has excellent delay performance and energy-saving effect. The high-voltage cable status detection solution can effectively solve the problem of blind high-voltage cable channels in the high mountain areas of China. The research content has important reference value for the detection of China’s power grid circuit system.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_83-Research_on_High_Voltage_Cable_Condition_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Application of Improved Decision Tree Algorithm based on Information Entropy in the Financial Management of Colleges and Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131284</link>
        <id>10.14569/IJACSA.2022.0131284</id>
        <doi>10.14569/IJACSA.2022.0131284</doi>
        <lastModDate>2022-12-30T12:34:00.7000000+00:00</lastModDate>
        
        <creator>Huirong Zhao</creator>
        
        <subject>Information entropy; financial management; decision tree; information gain rate; C4.5 algorithm; early warning structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In the era of information technology, work relies on information technology and generates a huge amount of data and information. Among this, the financial data of universities is growing exponentially, and manual methods of organizing data and extracting key information can no longer meet the requirements of university financial data management. Taking the financial management of higher education institutions as an example, it is difficult to track the progress of financial budget execution amid frequent and complicated daily expenditure and income problems, and therefore difficult to execute correct management decisions. The study uses information entropy as the decision basis of the decision tree in the financial management of higher education institutions: the higher the information entropy generated in financial management, the higher the prediction accuracy of the decision tree. A metric calculation method is introduced to obtain the information entropy and the information gain rate to predict the likelihood of problematic events. The study validates the performance of the improved decision tree on a dataset, achieving a maximum accuracy of 95% in the experiment. Building on this prediction accuracy, a decision tree for financial warning is established for the university financial management system, and the link between the current month&#39;s financial expenditure and the warning mechanism is analyzed. Finally, two common decision tree algorithms, ID3 (Iterative Dichotomiser 3) and CART (Classification and Regression Tree), are compared with the algorithm proposed in the study; based on the mean square error and the sum of squared errors, the proposed algorithm has better performance. By improving the existing decision tree algorithm, the study proposes a decision tree model based on information entropy, which aims to help decision makers quickly and accurately distill relevant data and make correct decisions from a large amount of information for more rational financial management.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_84-Research_on_the_Application_of_Improved_Decision_Tree_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Primary and Secondary Fusion Transformer Based on Internet of Things Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131282</link>
        <id>10.14569/IJACSA.2022.0131282</id>
        <doi>10.14569/IJACSA.2022.0131282</doi>
        <lastModDate>2022-12-30T12:34:00.6830000+00:00</lastModDate>
        
        <creator>Xiaohua Zhang</creator>
        
        <creator>Yuping Wu</creator>
        
        <creator>Jianjun Chen</creator>
        
        <creator>Jie Dong</creator>
        
        <creator>Yu Yue</creator>
        
        <subject>Internet of things technology; primary and secondary integration; smart transformer; fault detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>As one of the core pieces of equipment in a power system, the power transformer has far-reaching fault impacts and complex fault causes. To ensure the safe and stable operation of the power system, its operating state must be monitored and judged. With the increasing maturity of the Internet industry and the continuous development of sensor technology, the emergence of the smart grid has contributed to the realization of intelligent transformer on-line monitoring, condition evaluation and fault diagnosis. This paper studies the deep primary and secondary fusion transformer based on Internet of Things technology, summarizes the development status of the intelligent transformer and its existing problems on the basis of the relevant literature, proposes an optimization technology for the deep primary and secondary fusion transformer based on Internet of Things technology to solve these problems, and conducts related experiments and fault detection research on the proposed technology. The experimental results show that this optimization technology is feasible and provides good self-state evaluation and fault diagnosis functions. The deep primary and secondary fusion based on Internet of Things technology proposed in this study can increase the reliability monitoring of intelligent transformer operation, provide a strong technical guarantee for the normal operation of the distribution network, and also provide important technical support for further research on distribution network technology.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_82-Deep_Primary_and_Secondary_Fusion_Transformer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Texture Enhancement Algorithm for AR Live Screen Based on Approximate Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131281</link>
        <id>10.14569/IJACSA.2022.0131281</id>
        <doi>10.14569/IJACSA.2022.0131281</doi>
        <lastModDate>2022-12-30T12:34:00.6670000+00:00</lastModDate>
        
        <creator>Panpan Yang</creator>
        
        <creator>Lingfei Ma</creator>
        
        <creator>Chao Yin</creator>
        
        <creator>Yang Ping</creator>
        
        <subject>Approximate matching; AR live; picture details; adaptive; texture enhancement; matrix reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>To improve the visual effect of detail textures in AR live video, an adaptive enhancement algorithm based on approximate matching is proposed. According to the local self-similarity of the original image, the best matching block of the initial super-resolution image block is obtained. After its high-frequency information is extracted, an improved singular value decomposition (SVD) is used to embed a watermark into the original super-resolution gray image; effective high-frequency detail texture information is then obtained through watermark extraction and matrix reconstruction, yielding the final super-resolution image and completing the detail texture enhancement of the AR live video. The experimental results show that the peak signal-to-noise ratio, structural similarity and feature similarity of the algorithm are high and the reconstruction effect is good; after detail texture enhancement, the detail texture clarity and edge sharpness of the AR live picture are improved and the best visual effect is achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_81-An_Adaptive_Texture_Enhancement_Algorithm_for_AR_Live_Screen.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Diagnosis Technology of Railway Signal Equipment based on Improved FP-Growth Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131279</link>
        <id>10.14569/IJACSA.2022.0131279</id>
        <doi>10.14569/IJACSA.2022.0131279</doi>
        <lastModDate>2022-12-30T12:34:00.6530000+00:00</lastModDate>
        
        <creator>Yueqin Yang</creator>
        
        <subject>FP-growth algorithm; railway; signal equipment; fault diagnosis; relevance; frequent itemsets; vector space model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The rapid development of computer information technology has produced an endless stream of fault diagnosis and detection technologies. As railways are one of the main modes of transportation, the efficiency of fault diagnosis for railway signal equipment has important practical significance for maintaining the overall safe operation of railways. On the basis of the traditional FP-Growth algorithm, the TF-IDF algorithm is improved to realize weight discretization of text features, and the FP-Growth algorithm is improved by adjusting adaptive confidence and support. The improved FP-Growth algorithm is then used for performance tests and applications. The results show that the proposed algorithm saves at least 1500 ms of running time, and its average P@N accuracy exceeds 85%, higher than that of the FP-Growth algorithm (81.4%) and the VSM algorithm (82.1%). The PR curve of the improved algorithm is closer to the upper right, which effectively ensures the processing of correlated data, and the overall average precision under the influence of positive and negative signal-to-noise ratio values exceeds 95%. For the signal curves generated by the algorithm, the error range of the data under the four fault types of track circuit, turnout, signal, and connecting line floats between 1% and 5%. The improved FP-Growth algorithm can thus effectively analyze railway fault types and perform data processing to minimize diagnostic errors.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_79-Fault_Diagnosis_Technology_of_Railway_Signal_Equipment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review: Internet of Things on Smart Greenhouse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131280</link>
        <id>10.14569/IJACSA.2022.0131280</id>
        <doi>10.14569/IJACSA.2022.0131280</doi>
        <lastModDate>2022-12-30T12:34:00.6530000+00:00</lastModDate>
        
        <creator>Dodi Yudo Setyawan</creator>
        
        <creator>Warsito</creator>
        
        <creator>Roniyus Marjunus</creator>
        
        <creator>Nurfiana</creator>
        
        <creator>Rahmalia Syahputri</creator>
        
        <subject>IoT; SLR; Smart greenhouse; agriculture; research gap</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Increasing food needs and climate instability require researchers to innovate in agriculture using smart greenhouses integrated with the Internet of Things (IoT). The Systematic Literature Review (SLR) begins by determining the topic keywords, followed by searching the publisher links. It obtains 301 publications to be reviewed, 58 of which address the research questions posed. This study aims to collect and analyze in depth various knowledge about the Internet of Things smart greenhouse regarding the sensors and methods used, the publishers who publish the most on related topics, and possible research gaps. The findings show that as many as 12 publications use temperature and humidity sensors and research and development methods integrated with artificial intelligence, of which 62.1% do not use datasets and 37.9% do. Two possible research gaps are obtained, namely improving the algorithms and datasets used, and placing full control on the microcontroller development board while making IoT a supporting tool.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_80-A_Systematic_Literature_Review_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Recognition on Multimodal with Deep Learning and Ensemble</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131278</link>
        <id>10.14569/IJACSA.2022.0131278</id>
        <doi>10.14569/IJACSA.2022.0131278</doi>
        <lastModDate>2022-12-30T12:34:00.6370000+00:00</lastModDate>
        
        <creator>David Adi Dharma</creator>
        
        <creator>Amalia Zahra</creator>
        
        <subject>Emotion recognition; deep learning; ensemble method; transformer; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Emotion recognition on a multimodal dataset is a difficult task, and one of the most important tasks in topics like Human Computer Interaction (HCI). This paper presents a multimodal approach to emotion recognition on the MELD dataset. The dataset contains three modalities: audio, text, and facial features. In this research, only the audio and text features are experimented on. For the audio data, the raw audio is converted into MFCCs as input to a bidirectional LSTM built to perform emotion classification. For the text data, BERT is used to tokenize the text as input to the text model, and a bidirectional LSTM is built to classify the emotion. Finally, a voting ensemble method is implemented to combine the results from the two modalities. The models are evaluated using the F1-score and a confusion matrix. The unimodal audio model achieved an F1-score of 41.69%, the unimodal text model achieved 47.29%, and the voting ensemble model achieved 47.47%. To conclude, this paper also discusses future work, which involves how to build and improve deep learning models and combine them with an ensemble model for better performance in emotion recognition tasks on multimodal datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_78-Emotion_Recognition_on_Multimodal_with_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Content Based Image Retrieval using Deep Feature Extraction and Similarity Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131277</link>
        <id>10.14569/IJACSA.2022.0131277</id>
        <doi>10.14569/IJACSA.2022.0131277</doi>
        <lastModDate>2022-12-30T12:34:00.6200000+00:00</lastModDate>
        
        <creator>Anu Mathews</creator>
        
        <creator>Sejal N</creator>
        
        <creator>Venugopal K R</creator>
        
        <subject>Convolutional neural network; deep learning; feature extraction; accuracy; similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Image retrieval using a textual query is a major challenge, mainly due to the subjectivity of human perception and the impreciseness of image annotations. These drawbacks can be overcome by focusing on the content of images rather than on their textual descriptions. Traditional feature extraction techniques demand expert knowledge to select the limited feature types and are also sensitive to changing imaging conditions. Deep feature extraction using a Convolutional Neural Network (CNN) is a solution to these drawbacks, as CNNs can learn feature representations automatically. This work carries out a detailed performance comparison of various pre-trained CNN models in feature extraction. Features are extracted from men’s footwear and women’s clothing datasets using the VGG16, VGG19, InceptionV3, Xception and ResNet50 models. These extracted features are then used for classification with SVM, Random Forest and K-Nearest Neighbors classifiers. The results of feature extraction and image retrieval show that the VGG19, Inception and Xception features perform well, achieving a good image classification accuracy of 97.5%. These results are further justified by comparing image retrieval efficiency using the extracted features and similarity metrics. This work also compares the accuracy obtained with features extracted by the selected pre-trained CNN models against results obtained using conventional classification techniques on the CIFAR-10 dataset. The features extracted using a CNN can be used in image-based systems such as recommender systems, where images have to be analyzed to generate item profiles.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_77-Analysis_of_Content_Based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Asymmetry of Two-Queue Cycle Query Threshold Service Polling System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131275</link>
        <id>10.14569/IJACSA.2022.0131275</id>
        <doi>10.14569/IJACSA.2022.0131275</doi>
        <lastModDate>2022-12-30T12:34:00.6030000+00:00</lastModDate>
        
        <creator>Man Cheng</creator>
        
        <creator>Dedu Yin</creator>
        
        <creator>Xinchun Wang</creator>
        
        <subject>Discrete time query; asymmetric two-queue; threshold service; second-order characteristic quantity; information packets average delay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Based on a mathematical model of the system with given parameters, and using a discrete-time Markov chain together with a function of a non-negative integer random variable as probabilistic methods, a discrete-time, asymmetric two-queue gated polling system is fully analyzed. The low- and higher-order characteristic quantities and the cycle period of the system are derived, and the average queue length and the average waiting delay of information packets are calculated accurately. The simulation experiments agree well with the theoretical calculations. The analysis further deepens the understanding of the asymmetric threshold polling system, lays a foundation for further research on such systems, and supports better and more flexible control of periodic query polling systems.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_75-Research_on_Asymmetry_of_Two_queue_Cycle_Query_Threshold_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-based Cross-Lingual Summarization using Multilingual Word Embeddings for English - Bahasa Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131276</link>
        <id>10.14569/IJACSA.2022.0131276</id>
        <doi>10.14569/IJACSA.2022.0131276</doi>
        <lastModDate>2022-12-30T12:34:00.6030000+00:00</lastModDate>
        
        <creator>Achmad F. Abka</creator>
        
        <creator>Kurniawati Azizah</creator>
        
        <creator>Wisnu Jatmiko</creator>
        
        <subject>Cross-lingual summarization; multilingual word embeddings; transformer; automatic summarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Cross-lingual summarization (CLS) is the process of generating a summary in a target language from a source document in another language. CLS is a challenging task because it involves two different languages. Traditionally, CLS is carried out in a pipeline scheme involving two steps, summarization and translation, but this approach introduces error propagation. To address this problem, we present a novel end-to-end abstractive CLS approach without the explicit use of machine translation. The CLS architecture is based on the Transformer, which has proven capable of high-quality text generation. The CLS model is jointly trained on the CLS task and a monolingual summarization (MS) task. This is accomplished by adding a second decoder to handle the MS task, while the first decoder handles the CLS task. We also incorporate multilingual word embeddings (MWE) into the architecture to further improve the performance of the CLS models. Both English and Bahasa Indonesia are represented by MWE whose embeddings have already been mapped into the same vector space. MWE help to better map the relation between inputs and outputs in different languages. Experiments show that the proposed model achieves improvements of up to +0.2981 ROUGE-1, +0.2084 ROUGE-2, and +0.2771 ROUGE-L over the pipeline baselines, and up to +0.1288 ROUGE-1, +0.1185 ROUGE-2, and +0.1413 ROUGE-L over the end-to-end baselines.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_76-Transformer_based_Cross_Lingual_Summarization_using_Multilingual_Word.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hierarchical ST-DBSCAN with Three Neighborhood Boundary Clustering Algorithm for Clustering Spatio–temporal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131274</link>
        <id>10.14569/IJACSA.2022.0131274</id>
        <doi>10.14569/IJACSA.2022.0131274</doi>
        <lastModDate>2022-12-30T12:34:00.5900000+00:00</lastModDate>
        
        <creator>Amalia Mabrina Masbar Rus</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Suhaila Zainudin</creator>
        
        <subject>Data mining; hierarchical clustering; density-based clustering; spatio–temporal clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Clustering spatio-temporal data is challenging because of the complexity of processing both the spatial and temporal aspects. Various enhanced clustering approaches, such as partition-based and hierarchical algorithms, have been proposed, but the density-based ST-DBSCAN algorithm is commonly used to process irregularly shaped clusters. ST-DBSCAN treats the neighborhood parameters as spatial and non-spatial. Preliminary results from our experiments indicate that the ST-DBSCAN algorithm addresses the temporal element less effectively. Therefore, an improvement to the ST-DBSCAN algorithm is proposed that considers three neighborhood boundaries in the neighborhood function. The experiments used the El Ni&#241;o dataset from the UCI repository. The experimental results show that the proposed algorithm increased the performance indices by 27% compared to existing approaches. Further improvement using the hierarchical Ward’s method (with thresholds of 0.3 and 0.1) reduced the number of clusters from 240 to 6 and increased the performance indices by up to 73%. It can be concluded that ST-HDBSCAN is a suitable clustering algorithm for spatio-temporal data.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_74-A_Hierarchical_ST_DBSCAN_with_Three_Neighborhood_Boundary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning-based Model for Evaluating the Sustainability Performance of Accounting Firms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131273</link>
        <id>10.14569/IJACSA.2022.0131273</id>
        <doi>10.14569/IJACSA.2022.0131273</doi>
        <lastModDate>2022-12-30T12:34:00.5730000+00:00</lastModDate>
        
        <creator>Cui Hu</creator>
        
        <subject>Deep learning; RBM; performance evaluation; classification accuracy; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The harmonious and stable development of society is strongly related to the sustainable development of enterprises. To better face the challenges of environmental resources, sustainable development must be included in the development focus of accounting firms. This research proposes a performance evaluation model based on deep learning: the restricted Boltzmann machine (RBM) model is improved on the basis of a deep belief network (DBN), the accuracy of the model is improved through reverse fine-tuning, and multiple RBMs are effectively combined with a Softmax classifier to build a modular multi-classification model for evaluating the sustainable development performance of accounting firms. The performance of the fine-tuned RBM classifier is higher than that of the plain RBM representation and the PCA (Principal Component Analysis) representation, which mainly demonstrates the effectiveness and stability of the feature extraction. The network outputs for the test samples are converted into predicted performance evaluations, and the model is evaluated by average precision (AP), average recall (AR), and prediction accuracy. The AP, AR, and prediction accuracy of the proposed method are 86.95%, 89.74%, and 88.29% respectively, higher than those of Softmax classifiers, Back Propagation (BP) neural networks, and DBN-based Softmax methods. This shows that the method outperforms the other algorithms when applied to performance evaluation for the sustainable development of accounting firms; it is feasible and effective, and is of great significance for establishing a performance evaluation model for the accounting industry.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_73-A_Deep_Learning_Based_Model_for_Evaluating_the_Sustainability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Accurate Breast Cancer Classification Model based on Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131272</link>
        <id>10.14569/IJACSA.2022.0131272</id>
        <doi>10.14569/IJACSA.2022.0131272</doi>
        <lastModDate>2022-12-30T12:34:00.5570000+00:00</lastModDate>
        
        <creator>Aya Hesham</creator>
        
        <creator>Nora El-Rashidy</creator>
        
        <creator>Amira Rezk</creator>
        
        <creator>Noha A. Hikal</creator>
        
        <subject>Breast cancer; feature selection; classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Breast cancer (BC) is considered the most common cancer among women and a major reason for the increased death rate. This condition begins in breast cells and may spread to the rest of the body&#39;s tissues. Early detection and prediction of BC can help save a patient&#39;s life. In recent decades, machine learning (ML) has played a significant role in developing models that can detect and predict various diseases at an early stage, which can greatly increase the survival rate of patients. The importance of ML classification is attributed to its capability to learn from previous datasets, detect patterns that are difficult to comprehend in massive datasets, predict a categorical variable from predefined examples, and provide accurate results in a short amount of time. A feature selection (FS) method was used to reduce the data dimensionality and choose an optimal feature set. In this paper, we propose a stacking ensemble model that can differentiate between malignant and benign BC cells. A total of 25 different experiments have been conducted using several classifiers, including logistic regression (LR), decision tree (DT), linear discriminant analysis (LDA), K-nearest neighbor (KNN), naive Bayes (NB), and support vector machine (SVM), in addition to several ensembles: random forest (RF), bagging, AdaBoost, voting, and stacking. The results indicate that our ensemble model outperformed other state-of-the-art models in terms of accuracy (98.6%), precision (89.7%), recall, and F1 score (93.33%). The results show that ensemble methods with FS improve classification accuracy in detecting BC more than a single method does.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_72-Towards_an_Accurate_Breast_Cancer_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Real-time Monitoring of Video Images of Traffic Vehicles and Pedestrian Flow using Intelligent Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131271</link>
        <id>10.14569/IJACSA.2022.0131271</id>
        <doi>10.14569/IJACSA.2022.0131271</doi>
        <lastModDate>2022-12-30T12:34:00.5570000+00:00</lastModDate>
        
        <creator>Xiujuan Dong</creator>
        
        <creator>Jianping Lan</creator>
        
        <creator>Wenhuan Wu</creator>
        
        <subject>Urban development; object detection; traffic video; Vibe algorithm; visitors flowrate; image filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The development of urbanization has brought many traffic problems, among which delayed feedback on traffic and pedestrian flow leads to congestion. To effectively analyze the traffic and pedestrian flow on roads, this research proposes a traffic surveillance video object detection model based on an improved Vibe algorithm and uses motion history images to track traffic and pedestrian flow. Performance analysis shows that the loss rate of the proposed improved Vibe algorithm is only 0.25%, and its detection accuracy reaches 91.25%. These results show that the Vibe-based intelligent algorithm can significantly improve the detection of traffic and pedestrian flow in traffic surveillance video, helping to improve urban traffic management capability and promote urban modernization.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_71-Research_on_Real-time_Monitoring_of_Video_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BrainNet-7: A CNN Model for Diagnosing Brain Tumors from MRI Images based on an Ablation Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131270</link>
        <id>10.14569/IJACSA.2022.0131270</id>
        <doi>10.14569/IJACSA.2022.0131270</doi>
        <lastModDate>2022-12-30T12:34:00.5430000+00:00</lastModDate>
        
        <creator>Md Harun or Rashid</creator>
        
        <creator>Salma Akter</creator>
        
        <creator>Amatul Bushra Akhi</creator>
        
        <subject>MRI image; image pre-processing; transfer-learning; CNN; brainnet-7</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Tumors in the brain are masses or clusters of abnormal cells that may spread to nearby tissues and pose a danger to the patient. The main imaging technique used to determine the extent of brain tumors is magnetic resonance imaging (MRI), which ensures an accurate diagnosis. A sizable amount of training data, together with advances in model designs that provide better approximations in a supervised setting, likely accounts for most of the growth of deep learning techniques in computer vision applications. Deep learning approaches have shown promising results in increasing the precision of brain tumor identification and classification using MRI. The purpose of this study is to describe a robust deep learning model, based on a convolutional neural network (CNN), that categorizes brain tumors in MRI images into four classes. By removing artefacts, reducing noise, and enhancing the image, unwanted regions are deleted, quality is improved, and the tumor is highlighted. Several CNN architectures, including VGG16, VGG19, MobileNet, MobileNetV2, and InceptionV3, are investigated to find the best model, after which a hyperparameter ablation study is performed on it. The proposed BrainNet-7 achieved the best results, with 99.01% test accuracy and 99.21% validation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_70-BrainNet_7_A_CNN_Model_for_Diagnosing_Brain_Tumors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Compound Feature based Driver Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131269</link>
        <id>10.14569/IJACSA.2022.0131269</id>
        <doi>10.14569/IJACSA.2022.0131269</doi>
        <lastModDate>2022-12-30T12:34:00.5270000+00:00</lastModDate>
        
        <creator>Md. Abbas Ali Khan</creator>
        
        <creator>Mohammad Hanif Ali</creator>
        
        <creator>AKM Fazlul Haque</creator>
        
        <creator>Md. Iktidar Islam</creator>
        
        <creator>Mohammad Monirul Islam</creator>
        
        <subject>Compound feature; driver behavior identification; engine speed; fuel consumption; vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Today it is possible to identify a driver through technology. The driving style of a driver can now be determined for any car from controller area network (CAN-BUS) sensor data, which was not possible with conventional cars. Many researchers have aimed to determine driving style through end-to-end analysis of CAN-BUS sensor data, so it is possible to identify each driver individually based on driving style. We propose a novel compound-feature-based driver identification method that reduces the number of input attributes through mathematical operations on the raw features. The role of machine learning in data analysis of any kind is now significant: state-of-the-art algorithms have been applied in many fields, though they are only occasionally tested in similar domains. We therefore apply several prominent machine learning algorithms, which yield different results for the proposed model. A further goal of this study is to compare well-known classification algorithms on performance metrics for driver behavior identification. Hence, we compare the performance of SVM, Na&#239;ve Bayes, Logistic Regression, k-NN, Random Forest, Decision Tree, and Gradient Boosting.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_69-A_Novel_Compound_Feature_Based_Driver_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web based Mitosis Detection on Breast Cancer Whole Slide Images using Faster R-CNN and YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131268</link>
        <id>10.14569/IJACSA.2022.0131268</id>
        <doi>10.14569/IJACSA.2022.0131268</doi>
        <lastModDate>2022-12-30T12:34:00.5100000+00:00</lastModDate>
        
        <creator>Rajasekaran Subramanian</creator>
        
        <creator>R. Devika Rubi</creator>
        
        <creator>Rohit Tapadia</creator>
        
        <creator>Katakam Karthik</creator>
        
        <creator>Mohammad Faseeh Ahmed</creator>
        
        <creator>Allam Manudeep</creator>
        
        <subject>Nottingham grading system; breast cancer biomarker; whole slide image; mitosis; faster R-CNN; YOLOv5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Histological grading quantifies the tumor architecture and cytological deviation of breast cancer against normal tissue. The Nottingham Grading System grades breast cancer and allots tumor scores, and mitosis detection is one of its major components. Using a conventional microscope is time-consuming, semi-quantitative, and limited in histological parameters. Digital scanners scan a tissue slice into a high-resolution virtual image called a whole slide image (WSI). Deep learning models on whole slide images provide a fast and accurate quantitative diagnosis. This paper proposes two deep learning models, Faster R-CNN and YOLOv5, to detect mitosis on WSIs. The proposed models use 56258 annotated tiles for training and testing and achieve an F1 score of 84%. The models are served through a web-based imaging analysis and diagnosis platform called CADD4MBC for image uploading, annotation, and visualization. This paper thus proposes an end-to-end web-based deep learning detection system for breast cancer mitosis.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_68-Web_based_Mitosis_Detection_on_Breast_Cancer_Whole_Slide_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Modeling to Classify and Detect Outliers on Multilabel Dataset based on Content and Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131267</link>
        <id>10.14569/IJACSA.2022.0131267</id>
        <doi>10.14569/IJACSA.2022.0131267</doi>
        <lastModDate>2022-12-30T12:34:00.4970000+00:00</lastModDate>
        
        <creator>Lusiana Efrizoni</creator>
        
        <creator>Sarjon Defit</creator>
        
        <creator>Muhammad Tajuddin</creator>
        
        <subject>News article classification; machine learning; outlier detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Because articles are linked to multiple related categories, news article categorization is a rapidly growing field of interest in text classification. However, the low reliability indices and ambiguities of frequently used classifiers limit success in this field. Most existing research uses traditional machine learning algorithms, which are weak at training on large-scale datasets, and data sparseness often arises from short texts. Therefore, this study proposes a hybrid model consisting of two parts: a news article classification model and an outlier detection model. The news article classification model combines two deep learning algorithms (Long Short-Term Memory and Convolutional Neural Network), while the outlier classifier model predicts outlier news using a decision tree algorithm. The proposed model&#39;s performance was evaluated on two widely used datasets. The experimental results provide useful insights that open the way for a number of future initiatives.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_67-Hybrid_Modeling_to_Classify_and_Detect_Outliers_on_Multilabel_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rapid Modelling of Machine Learning in Predicting Office Rental Price</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131266</link>
        <id>10.14569/IJACSA.2022.0131266</id>
        <doi>10.14569/IJACSA.2022.0131266</doi>
        <lastModDate>2022-12-30T12:34:00.4970000+00:00</lastModDate>
        
        <creator>Thuraiya Mohd</creator>
        
        <creator>Muhamad Harussani</creator>
        
        <creator>Suraya Masrom</creator>
        
        <subject>Random forest; decision tree; support vector machine; rapid prediction modelling; office rental price</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This study demonstrates the use of rapid machine learning modelling in an essential case in the real estate industry. Predicting office rental price is highly important for the industry, yet the study of machine learning there is still in its infancy. Despite the renowned advantages of machine learning, its difficulties have kept inexpert researchers from embarking on this prominent artificial intelligence approach. This paper presents empirical results for three machine learning algorithms, namely Random Forest, Decision Tree, and Support Vector Machine, compared across two training approaches: split and cross-validation. AutoModel machine learning accelerated the modelling tasks and is useful for inexperienced machine learning researchers in any domain. Based on real cases of office rentals in the large city of Kuala Lumpur, Malaysia, the evaluation results indicated that Random Forest with cross-validation was the most promising algorithm, with an R-squared value of 0.9. This research is significant for the real estate domain in the near future, through more in-depth analysis, particularly of the relevant variables of building pricing as well as of the machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_66-Rapid_Modelling_of_Machine_Learning_in_Predicting_Office.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual U-Net with Resnet Encoder for Segmentation of Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131265</link>
        <id>10.14569/IJACSA.2022.0131265</id>
        <doi>10.14569/IJACSA.2022.0131265</doi>
        <lastModDate>2022-12-30T12:34:00.4800000+00:00</lastModDate>
        
        <creator>Syed Qamrun Nisa</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <subject>Segmentation; Medical Images; Deep Convolutional Neural Network; FCN; U-net; Unet_Resnet; Dual U-net with Resnet Encoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Segmentation of medical images is currently the most demanding and fastest-growing area of medical image analysis. Segmenting polyp images is a huge challenge because of the variability in color depth and morphology of polyps throughout colonoscopy imaging. For segmentation, this work uses a dataset of gastrointestinal polyp images. The algorithms used here for segmentation of gastrointestinal polyp images are based on deep convolutional neural network architectures: FCN, Dual U-net with Resnet Encoder, U-net, and Unet_Resnet. To improve performance, data augmentation is applied to the dataset. The efficiency of the algorithms is measured with metrics such as the Dice Similarity Coefficient (DSC) and Intersection Over Union (IOU). The Dual U-net with Resnet Encoder obtains the highest DSC of 0.87 and IOU of 0.80, beating the other algorithms (U-net, FCN, and Unet_Resnet) in segmentation of gastrointestinal polyp images.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_65-Dual_U_Net_with_Resnet_Encoder_for_Segmentation_of_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Fuzzy Expert System on Skin Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131264</link>
        <id>10.14569/IJACSA.2022.0131264</id>
        <doi>10.14569/IJACSA.2022.0131264</doi>
        <lastModDate>2022-12-30T12:34:00.4630000+00:00</lastModDate>
        
        <creator>Admi Syarif</creator>
        
        <creator>Mayda B Fauzi</creator>
        
        <creator>Aristoteles</creator>
        
        <creator>Agus Wantoro</creator>
        
        <subject>Artificial intelligence; expert system; fuzzy logic; skin disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Skin diseases are a group of diseases affecting people of all ages, commonly caused by fungi, bacteria, parasites, viruses, and infections. Their main symptom is usually itching all over the skin. Many patients underestimate the condition or are embarrassed to consult doctors directly, and in the end ignore the symptoms of skin disease. Since the symptoms are usually imprecise, examining skin diseases is complex and challenging. Recently, many efforts have been made to utilize artificial intelligence approaches for diagnosing various diseases based on the patient&#39;s condition. This paper aims to develop a novel fuzzy-based medical expert system built on imprecise existing symptoms. The system uses a specialist doctor&#39;s (dermatologist&#39;s) knowledge to diagnose the disease and provide the patient&#39;s severity level. We have conducted numerical experiments on 100 (one hundred) test problems to evaluate the performance of the developed system, comparing its results with the recommendations of doctors (dermatologists). The system succeeds in all tests with an accuracy of 95.6%. Thus, the system is very beneficial for supporting doctors in the assessment of skin diseases.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_64-Implementation_of_Fuzzy_Expert_System_on_Skin_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Machine Learning-based Detection of Sinkhole Network Layer Attack in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131262</link>
        <id>10.14569/IJACSA.2022.0131262</id>
        <doi>10.14569/IJACSA.2022.0131262</doi>
        <lastModDate>2022-12-30T12:34:00.4500000+00:00</lastModDate>
        
        <creator>Sivanesan N</creator>
        
        <creator>K. S. Archana</creator>
        
        <subject>Sinkhole; machine learning; MANET; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper proposes an Intrusion Detection System (IDS) against sinkhole attacks in Mobile Ad-hoc Networks (MANET) with mobile sinks. A sinkhole attack is one in which a compromised node advertises a false routing update to draw network traffic. One effect of a sinkhole attack is that it may be used to launch further attacks, such as packet drops or modified routing information. Sinkhole nodes attempt to forge the source–destination routes to attract the surrounding network traffic. For this purpose, they modify routing control packets to publish fake routing information that makes sinkhole nodes appear to be the best path to some destinations. Several machine learning techniques, including Decision Tree (DT), K-Nearest Neighbor (KNN), Convolutional Neural Network (CNN), and Support Vector Machine (SVM), are used to perform the classification. Furthermore, the characteristics of MANET nodes, particularly speed, are used for feature extraction. In total, 3997 unique samples are collected, including 256 malicious samples and 3604 normal samples. The classification results demonstrate accuracies for DT, KNN, CNN, and SVM of 98.4%, 96.7%, 98.6%, and 97.8%, respectively. Based on these results, the CNN approach is the most accurate at 98.6%, followed by DT, SVM, and KNN.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_62-Performance_Analysis_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Friendly Group Architecture for Securely Promoting Selfish Node Cooperation in Wireless Ad-hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131263</link>
        <id>10.14569/IJACSA.2022.0131263</id>
        <doi>10.14569/IJACSA.2022.0131263</doi>
        <lastModDate>2022-12-30T12:34:00.4500000+00:00</lastModDate>
        
        <creator>Rajani K C</creator>
        
        <creator>Aishwarya P</creator>
        
        <creator>Manjunath S</creator>
        
        <subject>Wireless Adhoc network; selfish node; DSR; reactive; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>A Wireless Ad-hoc Network is characterized by a decentralized communication scheme with self-configuring nodes and has witnessed a wide range of practical wireless applications. However, this characteristic also results in various security threats in the vulnerable wireless environment, irrespective of the presence of various routing protocols. A review of the existing literature shows that there is very little emphasis on securing Dynamic Source Routing (DSR), and the majority of solutions use encryption-based operations. Therefore, this manuscript introduces a novel non-encryption-based scheme called the Friendly Group Architecture, which identifies the presence of selfish nodes and then presents a method to promote their secure cooperation. The complete model is analytically designed using probability-based computation and dynamic thresholding. Simulation outcomes in MATLAB show that it outperforms existing systems with respect to energy, overhead, and security.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_63-Friendly_Group_Architecture_for_Securely_Promoting_Selfish_Node.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Face Detection System using A Novel FIS-CDNN Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131261</link>
        <id>10.14569/IJACSA.2022.0131261</id>
        <doi>10.14569/IJACSA.2022.0131261</doi>
        <lastModDate>2022-12-30T12:34:00.4330000+00:00</lastModDate>
        
        <creator>Santhosh S</creator>
        
        <creator>S. V. Rajashekararadhya</creator>
        
        <subject>Gamma correction - based histogram equalization (GBHE) technique; modified tiny face detection method (MTFD); Improved Chehra (IC) landmark extraction method; Gaussian-based Spider Monkey Optimization (GSMO) Algorithm; fuzzy interference system-convolutional deep neural network (FIS-CDNN) classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Face Recognition (FR) is a computer application that can detect, track, identify, or verify human faces from an image or video captured with a digital camera. Challenges such as low-resolution images, aging, uncontrolled pose, illumination changes, and poor lighting conditions remain unaddressed even though huge advances have been made in the Face Detection and Recognition (FDR) domain. Utilizing the Modified Tiny Face Detection (MTFD) method and the Fuzzy Interference System - Convolutional Deep Neural Network (FIS-CDNN) classifier, a Face Recognition System (FRS) is proposed here to tackle these complications. First, the Gamma correction - Based Histogram Equalization (GBHE) technique is utilized to enhance the input image in the pre-processing phase. The MTFD method is employed to detect the face. Following that, the features are extracted: the Improved Chehra (IC) landmark extraction method is employed to retrieve the landmark features, and finally the Geometric Features (GFs) are extracted. Then, the Gaussian-based Spider Monkey Optimization (GSMO) Algorithm is employed to choose the vital features. To recognize the face, the chosen features are fed into the FIS-CDNN classifier. Compared with prevailing models, the experimental outcomes show that the proposed method attains higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_61-An_Enhanced_Face_Detection_System_using_A_Novel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Optimal Route Selection Model of Fresh Agricultural Products Transportation Based on Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131260</link>
        <id>10.14569/IJACSA.2022.0131260</id>
        <doi>10.14569/IJACSA.2022.0131260</doi>
        <lastModDate>2022-12-30T12:34:00.4170000+00:00</lastModDate>
        
        <creator>Qingqing Ren</creator>
        
        <subject>Bee colony algorithm; fresh agricultural products; transportation; optimal route selection; road network model; spectral clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In order to optimize the distribution routes of fresh agricultural products and reduce distribution costs, an optimal route selection model for fresh agricultural product transportation based on the bee colony algorithm is constructed. After establishing the transportation road network model for fresh agricultural products and analyzing the road information, the network is partitioned by a dynamic zoning method based on the spectral clustering algorithm, with unblocked and congested areas as the zoning criterion. Based on the traffic zoning information, a transportation route optimization model for fresh agricultural products with time windows is constructed and solved with the bee colony algorithm to obtain the distribution route with the lowest distribution cost and the highest customer satisfaction. The experimental results show that the model can choose the fresh agricultural product distribution route with the lowest distribution cost and the highest customer satisfaction under both smooth-traffic and congested conditions.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_60-The_Optimal_Route_Selection_Model_of_Fresh_Agricultural_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Modal Medical Image Fusion Using Transfer Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131259</link>
        <id>10.14569/IJACSA.2022.0131259</id>
        <doi>10.14569/IJACSA.2022.0131259</doi>
        <lastModDate>2022-12-30T12:34:00.4030000+00:00</lastModDate>
        
        <creator>Shrida Kalamkar</creator>
        
        <creator>Geetha Mary A</creator>
        
        <subject>Image fusion; discrete wavelet transform; computer vision; inverse wavelet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Multimodal imaging techniques of the same organ help in obtaining anatomical as well as functional details of a particular body part, and can help doctors diagnose a disease cost-effectively. In this paper, a hybrid approach using transfer learning and the discrete wavelet transform is used to fuse multimodal medical images. As access to medical data is limited, transfer learning is used for feature extraction and to save training time. The features are fused with a pre-trained VGG19 model. The Discrete Wavelet Transform is used to decompose the multimodal images into different sub-bands. In the last phase, the Inverse Wavelet Transform is applied to the four generated bands to obtain the fused image. The proposed model is executed on Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) datasets. The experimental results show that the proposed approach performs better than other approaches, and the significance of the obtained fused image is measured using qualitative metrics.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_59-Multi_Modal_Medical_Image_Fusion_Using_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Alzheimer Disease from 3D MRI Images using Deep CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131258</link>
        <id>10.14569/IJACSA.2022.0131258</id>
        <doi>10.14569/IJACSA.2022.0131258</doi>
        <lastModDate>2022-12-30T12:34:00.4030000+00:00</lastModDate>
        
        <creator>Nermin Negied</creator>
        
        <creator>Ahmed SeragEldin</creator>
        
        <subject>Alzheimer detection; brain scanning techniques; MRI scanning; image processing; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Alzheimer&#39;s disease (AD), also referred to simply as Alzheimer&#39;s, is a chronic neurodegenerative disease that usually starts slowly and worsens over time. It is the cause of 60% to 70% of cases of dementia. In 2015, there were approximately 29.8 million people worldwide with AD, and dementia resulted in about 1.9 million deaths. AD most often begins in people over 65 years of age, affecting about 6% of people 65 and older, although 4% to 5% of cases are early-onset Alzheimer&#39;s, which begins before this age. Continuous efforts are being made to cure the disease or to delay its progression. Brain imaging is one of the hottest areas in AD research. Techniques such as CT, MRI, SPECT, and PET assist in disease detection and help in excluding other probable causes of dementia. Imaging helps to perceive the likely cause of the disease as well as track the disease through its course. This paper applies image processing and machine learning techniques, combined, to MRI brain images to help detect AD and classify the case as either MDI or dementia.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_58-Automatic_Detection_of_Alzheimer_Disease_from_3D_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Artificial Intelligence-Genetic Algorithms to Select Stock Portfolios in the Asian Markets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131257</link>
        <id>10.14569/IJACSA.2022.0131257</id>
        <doi>10.14569/IJACSA.2022.0131257</doi>
        <lastModDate>2022-12-30T12:34:00.3870000+00:00</lastModDate>
        
        <creator>Luu Thu Quang</creator>
        
        <subject>Artificial intelligence; genetic algorithms; optimal portfolio; Sharpe ratio; Asian stock market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The paper&#39;s main goal is to use a genetic algorithm to find the optimal stock portfolio that meets the criteria of high return and low risk, allowing investors to adjust the appropriate proportion for each share. Using the Python programming language on the Jupyter Notebook engine, this paper introduces a model of six stock portfolios, each of 30 stocks selected with market capitalization and high liquidity criteria from six markets in the Asian region. The results show that the four portfolios created from the markets of Vietnam, Thailand, the Philippines, and Singapore meet both the return and risk objectives. The Malaysian market only meets the risk target, but the portfolio&#39;s return does not reach the expected ratio. Meanwhile, the Indonesian market outperformed expectations in terms of profits, but high profits come with high risks, so this market carries a concerning level of risk when compared to the profit and loss of other markets. The suggested stock allocation levels for each portfolio are based on the above results. Finally, the author proposes several policy implications related to the management and operation of the market to limit unnecessary price fluctuations of stocks and their effects on the business models of companies.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_57-Application_of_Artificial_Intelligence_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Principal Component Analysis Based Hybrid Speckle Noise Reduction Technique for Medical Ultrasound Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131256</link>
        <id>10.14569/IJACSA.2022.0131256</id>
        <doi>10.14569/IJACSA.2022.0131256</doi>
        <lastModDate>2022-12-30T12:34:00.3700000+00:00</lastModDate>
        
        <creator>Yasser M. Kadah</creator>
        
        <creator>Ahmed F. Elnokrashy</creator>
        
        <creator>Ubaid M. Alsaggaf</creator>
        
        <creator>Abou-Bakr M. Youssef</creator>
        
        <subject>Image quality metrics; principal component analysis; speckle reduction; ultrasound imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Ultrasound imaging is the safest and most widely used medical imaging technique available today. Its main disadvantage is the presence of speckle noise, which may obscure pathological changes in the body and makes diagnosis more challenging. Therefore, many techniques have been proposed to reduce speckle and improve image quality. Unfortunately, variations in their performance with different scan parameters and methodologies make it hard to choose which one to adopt in clinical practice. In this work, we consider the problem of combining the information from multiple speckle filters and propose the use of principal component analysis to find the optimal set of weights that retains the most information and hence better represents the data in the final image. The new technique is implemented to process ultrasound images collected from a research system, and the outcomes are compared to the individual techniques and their average using quantitative image quality metrics. The proposed technique has potential for utilization in clinical settings to provide consistently better-quality combined images that may help improve diagnostic accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_56-Principal_Component_Analysis_Based_Hybrid_Speckle_Noise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Prediction of Pediatric Outpatient No-Show Visits by Employing Machine Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131255</link>
        <id>10.14569/IJACSA.2022.0131255</id>
        <doi>10.14569/IJACSA.2022.0131255</doi>
        <lastModDate>2022-12-30T12:34:00.3530000+00:00</lastModDate>
        
        <creator>Abdulwahhab Alshammari</creator>
        
        <creator>Hanoof Alaboodi</creator>
        
        <creator>Riyad Alshammari</creator>
        
        <subject>No-show; machine learning; healthcare medical appointments; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Patient no-shows for booked medical appointments are a significant problem that negatively impacts healthcare resource utilization, cost, efficiency, quality, and patient outcomes. This paper developed a machine learning framework to accurately predict pediatric patients&#39; no-shows to medical appointments. Thirty months of outpatient visit data, from January 2017 to July 2019, were extracted from the data warehouse of the Ministry of National Guard Health Affairs (MNGHA), Saudi Arabia. The researchers retrieved the data from all healthcare facilities in the central region, and more than 100 attributes were generated. The data include over 100,000 pediatric patients and more than 3.7 million visits. Five machine learning algorithms were deployed, and the Gradient Boosting (GB) algorithm outperformed the other four: decision tree, random forest, logistic regression, and neural network. The study evaluated and compared the performance of the five models based on five evaluation criteria. GB achieved a Receiver Operating Characteristic (ROC) score of 97.1%. Furthermore, this research paper identified the factors with the greatest potential for affecting patients&#39; adherence to scheduled appointments.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_55-The_Prediction_of_Pediatric_Outpatient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritizing the Factors Affecting the Application of Industry 4.0 Technology in Electrical Appliance Manufacturing using a Fuzzy Analytical Network Process Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131254</link>
        <id>10.14569/IJACSA.2022.0131254</id>
        <doi>10.14569/IJACSA.2022.0131254</doi>
        <lastModDate>2022-12-30T12:34:00.3400000+00:00</lastModDate>
        
        <creator>Apiwat Krommuang</creator>
        
        <creator>Atchari Krommuang</creator>
        
        <subject>Industry 4.0; big data analytics; internet of things; cloud manufacturing; additive manufacturing; cyber-physical systems; fuzzy analytic network process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The fourth industrial revolution is a technological advancement that is posing new challenges in manufacturing and services. Industries must adopt innovations that add value to their products and services to gain a competitive advantage and increase production efficiency. Therefore, this research aims to study the factors that influence the application of Industry 4.0 technology for managing electrical appliance production, focusing on five major factors: the internet of things, cloud manufacturing, big data analytics, additive manufacturing, and cyber-physical systems, which are further subdivided into 23 sub-factors. The fuzzy analytic network process (FANP) technique is used to prioritize the factors and develop criteria for selecting appropriate applications of Industry 4.0 technology in manufacturing. A questionnaire based on the FANP approach is used to collect data from 82 electrical appliance manufacturers to calculate the weight of each factor. Consequently, the internet of things is ranked first, followed by big data analytics and additive manufacturing, while the most important sub-factors are data-driven operation, data collection, tracking, monitoring, and automation, respectively. The benefit of this research is that manufacturers of electrical appliances can use it as a criterion for implementing Industry 4.0 technology for long-term effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_54-Prioritizing_the_Factors_Affecting_the_Application_of_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fish Detection in Seagrass Ecosystem using Masked-Otsu in HSV Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131253</link>
        <id>10.14569/IJACSA.2022.0131253</id>
        <doi>10.14569/IJACSA.2022.0131253</doi>
        <lastModDate>2022-12-30T12:34:00.3400000+00:00</lastModDate>
        
        <creator>Sri Dianing Asri</creator>
        
        <creator>Indra Jaya</creator>
        
        <creator>Agus Buono</creator>
        
        <creator>Sony Hartono Wijaya</creator>
        
        <subject>Fish Detection; HSV color space; masked-otsu thresholding; morphological operation; seagrass ecosystem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Seagrass ecosystems are coastal ecosystems with high species diversity, especially of fish. Fish diversity determines the abundance of communities based on the number of species. Detecting fish directly (in-situ) and conventionally by catching them requires more energy and cost and is relatively time-consuming. Therefore, a computer vision method is needed that can detect fish well using underwater images. The fish detection model uses Masked-Otsu thresholding in the HSV color space with closing techniques from morphological operations. The dataset consists of 130 underwater images, divided into 80% training data and 20% testing data. The test results show a model accuracy of 0.92, a precision of 0.84, a sensitivity of 0.93, and an F1 score of 0.88. With these results, the model can detect fish in the seagrass ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_53-Fish_Detection_in_Seagrass_Ecosystem_using_Masked-Otsu.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Securing Traffic in Computer Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131252</link>
        <id>10.14569/IJACSA.2022.0131252</id>
        <doi>10.14569/IJACSA.2022.0131252</doi>
        <lastModDate>2022-12-30T12:34:00.3230000+00:00</lastModDate>
        
        <creator>Ahmed BaniMustafa</creator>
        
        <creator>Mahmoud Baklizi</creator>
        
        <creator>Khalaf Khatatneh</creator>
        
        <subject>Machine learning; data mining; cyber security; computer networks; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Computer network attacks are among the most significant and common threats against wired and wireless computer communications. Intrusion detection technology is used to secure computer networks by monitoring network traffic and identifying attacks. In this paper, we investigate and evaluate the application of four machine learning classification algorithms for identifying attacks that target computer networks: DDoS, Brute Force Web, and SQL Injection attacks, in addition to benign traffic. A public dataset of 80 features was used to build four machine learning models using Random Forest, Logistic Regression, CN2, and Neural Networks. The constructed models were evaluated based on 10-fold cross-validation using Classification Accuracy (CA), Area under the Curve (AUC), F1, Recall, Specificity, and Sensitivity metrics, in addition to Confusion Matrix, Calibration, Lift, and ROC plots. The Random Forest model achieved 98% in the CA score and 99% in the AUC score, while Logistic Regression achieved 90% in the CA score and 98% in the AUC score.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_52-Machine_Learning_for_Securing_Traffic_in_Computer_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Detection of Tomatoes Quality using Machine Learning Algorithm and Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131250</link>
        <id>10.14569/IJACSA.2022.0131250</id>
        <doi>10.14569/IJACSA.2022.0131250</doi>
        <lastModDate>2022-12-30T12:34:00.3070000+00:00</lastModDate>
        
        <creator>Haichun Zuo</creator>
        
        <subject>Machine learning; image processing; product category; tomato quality rating</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Methods for grading agricultural products based on artificial intelligence are increasingly important because these methods have the ability to learn and thus increase the flexibility of the system. In this paper, image processing systems, detection analysis methods, and artificial intelligence are used to grade tomatoes, and the grading success rates of these methods are compared with each other. The purpose of this study is to obtain a solution to detect appearance defects, grade and sort the tomato crop, and provide an efficient system in this field. A visual dataset is created to investigate the image processing and machine learning approach based on tomato images. Tomato samples are placed individually under the camera and classified in a lighting box away from the effects of ambient light. The dataset comprises three quality categories: category one has the best quality, category two medium quality, and category three the worst quality; each class contains 80 samples. Image processing is performed to extract features of tomato appearance such as size, texture, color, and shape, and the tomato images are pre-processed for optimization. Then, to prepare for classification, the dimensionality of the images is reduced by principal component analysis (PCA). Three classifiers, an artificial neural network, a support vector machine, and a decision tree, are compared to identify the most efficient one. The analysis is examined for two classes and for three classes. The support vector machine has the best accuracy compared to the other methods: 99.9% for two classes and 99.79% for three classes.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_50-Analysis_and_Detection_of_Tomatoes_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electricity Theft Detection using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131251</link>
        <id>10.14569/IJACSA.2022.0131251</id>
        <doi>10.14569/IJACSA.2022.0131251</doi>
        <lastModDate>2022-12-30T12:34:00.3070000+00:00</lastModDate>
        
        <creator>Ivan Petrlik</creator>
        
        <creator>Pedro Lezama</creator>
        
        <creator>Ciro Rodriguez</creator>
        
        <creator>Ricardo Inquilla</creator>
        
        <creator>Julissa Elizabeth Reyna-Gonz&#225;lez</creator>
        
        <creator>Roberto Esparza</creator>
        
        <subject>Energy theft; non-technical losses; machine learning; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This research work dealt with the indiscriminate theft of electric power, reported as a non-technical loss, which affects electric distribution companies and customers and triggers serious consequences including fires and blackouts. The research focused on recommending the best Machine Learning prediction model for electrical energy theft. The source of the information on the electricity consumption of 42372 consumers was a dataset published by the State Grid Corporation of China. The method used data imputation, data balancing (oversampling and undersampling), and feature extraction to improve energy theft detection. Five Machine Learning models were tested. As a result, the accuracy of the SVM model was 81%, K-Nearest Neighbors 79%, Random Forest 80%, Logistic Regression 69%, and Naive Bayes 68%. It is concluded that the best performance, with an accuracy of 81%, is obtained by using the SVM model.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_51-Electricity_Theft_Detection_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Quantitative Security Protection Technology of Distribution Automation Nodes based on Attack Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131249</link>
        <id>10.14569/IJACSA.2022.0131249</id>
        <doi>10.14569/IJACSA.2022.0131249</doi>
        <lastModDate>2022-12-30T12:34:00.2770000+00:00</lastModDate>
        
        <creator>Yinfeng Han</creator>
        
        <creator>Yong Tang</creator>
        
        <creator>Xiaoping Kang</creator>
        
        <creator>Hao Jiang</creator>
        
        <creator>Xiaoyang Song</creator>
        
        <subject>Attack tree; security protection; risk assessment; fault node location; trusted computing; SM4 encryption algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In order to improve the security of distribution automation system nodes and ensure the safe operation of the distribution network, a quantitative security protection technology for distribution automation nodes based on the attack tree is proposed. This paper analyzes the factors of node risk assessment in the distribution automation system and, through the evaluation and analysis of node vulnerabilities, discovers faulty nodes in the distribution automation system in advance. Based on the node vulnerability evaluation results, the risk of the distribution automation system is comprehensively evaluated using the attack tree. A distribution network control model under network attack is established, fault nodes are located, and trusted computing technology is used to design trusted distribution terminals. When the amount of data is large, the more efficient symmetric encryption algorithm SM4 is required to achieve node security protection in the distribution network automation system. The experimental results show that the method has high fault node location accuracy and short reliability calculation time, and that the distribution automation system network exhibits a certain robustness, which fully verifies the application effect of the method.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_49-Research_on_Quantitative_Security_Protection_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating User Reviews and Issue Reports of Mobile Apps for Change Requests Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131248</link>
        <id>10.14569/IJACSA.2022.0131248</id>
        <doi>10.14569/IJACSA.2022.0131248</doi>
        <lastModDate>2022-12-30T12:34:00.2600000+00:00</lastModDate>
        
        <creator>Laila Al-Safoury</creator>
        
        <creator>Akram Salah</creator>
        
        <creator>Soha Makady</creator>
        
        <subject>User review; feedback analysis; mobile app maintenance; text similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>There is an abundance of mobile apps released continuously on app stores, and developers are required to maintain these apps to attain user satisfaction. Developers should consider all user feedback, as it is an important resource for planning the next app release. Many platforms host mobile apps and allow users to submit their opinions, such as the Google Play app store and the GitHub open-source development platform. Automatically consolidating user feedback from such platforms and transforming it into a list of change requests would satisfy users across different platforms, and its analysis helps developers reduce the time and effort needed to plan the new release of a mobile app. In this paper, a framework is proposed that integrates user feedback from different sources and analyzes it using a state-of-the-art user review analysis tool to obtain a list of change requests; this list is further examined for similarity to remove duplicates and prioritize the identified change requests. A prototype is designed to implement the proposed framework and applied to AntennaPod. The experimental results show that reviews and issue reports can be analyzed almost equally well despite the differing nature of their text.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_48-Integrating_User_Reviews_and_Issue_Reports_of_Mobile_Apps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Pathfinding Algorithm for Code Optimization on Grid Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131247</link>
        <id>10.14569/IJACSA.2022.0131247</id>
        <doi>10.14569/IJACSA.2022.0131247</doi>
        <lastModDate>2022-12-30T12:34:00.2470000+00:00</lastModDate>
        
        <creator>Azyan Yusra Kapi</creator>
        
        <creator>Mohd Shahrizal Sunar</creator>
        
        <creator>Zeyad Abd Algfoor</creator>
        
        <subject>Comparative; jump point search; optimization; pathfinding; path planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Various pathfinding algorithms have been created and developed over the past few decades to assist in finding the best path between two points. This paper presents a comparison of several algorithms for pathfinding on 2D grid maps. The study identified Jump Point Search Block-Based Jumping (JPS(B)) as a promising algorithm in terms of five evaluation metrics, including search time. Based on this comparison, code optimization was performed on the selected JPS(B) algorithm, and the result was named JPS(BCO). This paper also explores issues with JPS(B) and how to resolve them in order to optimize access to the index pointer. The enhanced JPS(BCO) can find the optimal path more quickly than the original JPS(B), as demonstrated by the experimental findings. An experiment on grid maps of different sizes was conducted to validate the proposed algorithm in terms of search time. The comparative study with the original JPS(B) shows that the enhancement yields greater benefits on grid maps of different sizes in terms of search time.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_47-A_Comparison_of_Pathfinding_Algorithm_for_Code_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Network Training and Testing Datasets for License Plate Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131245</link>
        <id>10.14569/IJACSA.2022.0131245</id>
        <doi>10.14569/IJACSA.2022.0131245</doi>
        <lastModDate>2022-12-30T12:34:00.2300000+00:00</lastModDate>
        
        <creator>Ishtiaq Rasool Khan</creator>
        
        <creator>Saleh M. Alshomrani</creator>
        
        <creator>Muhammad Murtaza Khan</creator>
        
        <creator>Susanto Rahardja</creator>
        
        <subject>License plate recognition; deep neural networks; public datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Modern society has made tremendous progress towards automation to increase the quality of life and reduce the margin of human error. Intelligent transportation systems are a critical aspect of this evolution. The core technology of these systems is the automatic identification of vehicles&#39; license plates to monitor safety and control violations of traffic rules and other crimes. The research on license plate detection and recognition has come a long way, from traditional computer vision techniques to feature-based (color, shape, text, etc.) classification and finally to modern deep learning structures. Deep networks comprising hundreds of layers require enormous amounts of training data. The training dataset should contain plates from different countries; otherwise, the system will be specific to only certain types of plates (from one country or province). There are several datasets collected by researchers containing large numbers of license plates from different countries. This paper provides a detailed survey of such datasets available in the public domain. Sample images from each dataset are shown, and details such as the dataset size, size of images, download link, and country of origin are provided. This survey will be a helpful reference for new researchers in the field for the tasks of training new networks and benchmarking their performances.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_45-Deep_Neural_Network_Training_and_Testing_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Oil Production through Linear Regression Model and Big Data Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131246</link>
        <id>10.14569/IJACSA.2022.0131246</id>
        <doi>10.14569/IJACSA.2022.0131246</doi>
        <lastModDate>2022-12-30T12:34:00.2300000+00:00</lastModDate>
        
        <creator>Rehab Alharbi</creator>
        
        <creator>Nojood Alageel</creator>
        
        <creator>Maryam Alsayil</creator>
        
        <creator>Rahaf Alharbi</creator>
        
        <creator>A’aeshah Alhakamy</creator>
        
        <subject>Big data; machine learning; oil production; regression model; features; prediction; PySpark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Fossil fuels, including oil, are the most important sources of energy. They are commonly used in various forms of commercial and industrial consumption. Producing oil is a complex task that requires special management and planning, and a serious problem can result if an oil well is not operated properly. Oil engineers must have the necessary knowledge about the well’s status to perform their duties properly. This study proposes a linear regression method to predict the oil production value. It takes into account various independent variables, such as the pressure, downhole temperature, and tubing pressure. The proposed method reaches a prediction very close to the actual production value, achieving promising results at the end of this study.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_46-Prediction_of_Oil_Production_through_Linear_Regression_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Cashew Nut Detection in Packaging and Quality Inspection Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131243</link>
        <id>10.14569/IJACSA.2022.0131243</id>
        <doi>10.14569/IJACSA.2022.0131243</doi>
        <lastModDate>2022-12-30T12:34:00.2000000+00:00</lastModDate>
        
        <creator>Van-Hung Pham</creator>
        
        <creator>Ngoc-Khoat Nguyen</creator>
        
        <creator>Van-Minh Pham</creator>
        
        <subject>Cashew; CNN; cashew detection; YOLOv7; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>YOLO, standing for You Only Look Once, is one of the most famous computer vision algorithms for detecting objects in a real-time environment. The newest version of this algorithm, YOLO version seven or YOLOv7, is proposed in the present study for cashew nut detection (good, broken, and not peeled) in packaging and quality inspection lines. This algorithm uses an efficient convolutional neural network (CNN) to successfully detect and identify unsatisfactory cashew nuts, such as chipped or burnt cashews. To support the quality inspection process, a new dataset called the CASHEW dataset was first built by collecting cashew images in environments with different brightness levels and camera angles to ensure the model&#39;s effectiveness. The quality inspection of cashew nuts is tested with a range of YOLOv7 models, and their effectiveness is also evaluated. The experimental results show that all models are able to obtain high accuracy. Among them, the YOLOv7-tiny model employs the fewest parameters, i.e., 6.2M, yet achieves higher accuracy than some other YOLO models. As a result, the proposed approach should clearly be one of the most feasible solutions for cashew quality inspection.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_43-A_Novel_Approach_to_Cashew_Nut_Detection_in_Packaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement Classification Approach in Tomato Leaf Disease using Modified Visual Geometry Group (VGG)-InceptionV3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131244</link>
        <id>10.14569/IJACSA.2022.0131244</id>
        <doi>10.14569/IJACSA.2022.0131244</doi>
        <lastModDate>2022-12-30T12:34:00.2000000+00:00</lastModDate>
        
        <creator>Jiraporn Thomkaew</creator>
        
        <creator>Sarun Intakosum</creator>
        
        <subject>Modified VGG-InceptionV3; InceptionV3; VGG-16; tomato leaf disease classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper presents a new method for optimizing tomato leaf disease classification using a Modified Visual Geometry Group (VGG)-InceptionV3. The VGG-16 model, used as the base model, was improved with an InceptionV3 block: the number of convolution layers of VGG-16 was reduced from 16 to 10, and the InceptionV3 block was extended from 3 to 4 convolution layers to increase the accuracy of tomato leaf disease classification while reducing the number of parameters and the computation time of the model. The experiments were performed on tomato leaves from the PlantVillage dataset with 10 classes, consisting of nine classes of diseased leaves and one class of healthy leaves. The results showed that the proposed method reduced the number of parameters and computation time, while the accuracy of tomato leaf disease classification was 99.27%. Additionally, the proposed approach was compared with state-of-the-art Convolutional Neural Network (CNN) models such as VGG16, InceptionV3, DenseNet121, MobileNetV2, and ResNet50. Comparative results showed that the proposed method had the highest accuracy in tomato leaf disease classification and required a smaller number of parameters and less computational time.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_44-Improvement_Classification_Approach_in_Tomato_Leaf_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FDeep: A Fog-based Intrusion Detection System for Smart Home using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131242</link>
        <id>10.14569/IJACSA.2022.0131242</id>
        <doi>10.14569/IJACSA.2022.0131242</doi>
        <lastModDate>2022-12-30T12:34:00.1830000+00:00</lastModDate>
        
        <creator>Tahani Gazdar</creator>
        
        <subject>Fog computing; smart home; deep learning; IDS; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Smart Home is an application of the Internet of Things (IoT) that connects smart appliances to the Internet. The emergence of the Smart Home has introduced many security and privacy risks that can lead to fatal damage to users and their property. Unfortunately, intrusion detection systems designed for conventional networks have shown their inefficiency when deployed in Smart Home environments, mainly because of resource-constrained devices and their inherent intermittent connectivity. An intrusion detection system designed for IoT, and particularly the Smart Home, is therefore mandatory. On the other hand, Deep Learning has shown its potential in enhancing the performance of intrusion detection systems. According to recent studies, Deep Learning-based intrusion detection systems are deployed either on the devices or in the Cloud. However, Deep Learning models are greedy in terms of resources, which makes it challenging to deploy them on Smart Home devices. Besides, in the IoT architecture, the IoT layer is far from the Cloud layer, which may cause additional latency and jitter. To overcome these challenges, a new intrusion detection system for the Smart Home deployed in the Fog layer, called FDeep, is proposed. FDeep inspects the traffic using a Deep Learning model. To select the most accurate model, three Deep Learning models are trained using an IoT dataset named TON/IIOT, and the proposed models are compared to an existing one. The obtained results show that the long short-term memory model combined with convolutional neural networks outperforms the other three models, achieving the best detection accuracy among the compared Deep Learning models.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_42-FDeep_A_Fog_based_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Optimization Approach with Deep Learning Technique for the Classification of Dental Caries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131241</link>
        <id>10.14569/IJACSA.2022.0131241</id>
        <doi>10.14569/IJACSA.2022.0131241</doi>
        <lastModDate>2022-12-30T12:34:00.1670000+00:00</lastModDate>
        
        <creator>Riddhi Chawla</creator>
        
        <creator>Konda Hari Krishna</creator>
        
        <creator>Araddhana Arvind Deshmukh</creator>
        
        <creator>K. V. Daya Sagar</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Dental caries; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Due to the wealth of data available from different radiographic images, detecting dental caries has traditionally been a difficult undertaking. Numerous techniques have been developed to enhance image quality for quicker caries detection. For the investigation of medical images, deep learning has emerged as the preferred methodology. This study provides a thorough examination of the application of deep learning to object detection, segmentation, and classification. It also examines the literature on deep learning-based segmentation and identification techniques for dental images. To identify dental caries, several techniques have been used to date. However, these techniques are inefficient, inaccurate, and unable to handle sizable datasets, so an approach that can overcome these issues is needed. In the domains of medicine and radiology, deep convolutional neural networks (CNN) have produced remarkable results in predicting and diagnosing diseases, and this field of healthcare research is developing quickly. The current study&#39;s objective was to assess the effectiveness of deep CNN algorithms for dental caries detection and diagnosis on radiographic images. The Convolutional Neural Network (CNN) method, which is based on artificial intelligence, is used in this study to introduce hybrid optimal deep learning, which offers superior performance.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_41-A_Hybrid_Optimization_Approach_with_Deep_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking The Sensitivity of The Learning Models Toward Exact and Near Duplicates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131240</link>
        <id>10.14569/IJACSA.2022.0131240</id>
        <doi>10.14569/IJACSA.2022.0131240</doi>
        <lastModDate>2022-12-30T12:34:00.1530000+00:00</lastModDate>
        
        <creator>Menna Ibrahim Gabr</creator>
        
        <creator>Yehia Helmy</creator>
        
        <creator>Doaa S. Elzanfaly</creator>
        
        <subject>Deduplication; deterministic duplicates; probabilistic duplicates; supervised learning models; unsupervised learning models; evaluation metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Most real-world datasets are contaminated by quality issues that have a severe effect on analysis results. Duplication is one of the main quality issues that hinder these results. Different studies have tackled the duplication issue from different perspectives. However, the sensitivity of supervised and unsupervised learning models in the presence of different types of duplicates, deterministic and probabilistic, has not been broadly addressed. Furthermore, a simple metric is used to estimate the ratio of both types of duplicates regardless of the probability by which a record is considered a duplicate. In this paper, the sensitivity of five classifiers and four clustering algorithms toward deterministic and probabilistic duplicates with different ratios (0% - 15%) is tracked. Five evaluation metrics are used to accurately track the changes in the sensitivity of each learning model: MCC, F1-Score, Accuracy, Average Silhouette Coefficient, and DUNN Index. Also, a metric to measure the ratio of probabilistic duplicates within a dataset is introduced. The results revealed the effectiveness of the proposed metric in reflecting the ratio of probabilistic duplicates within the dataset. All learning models, classification and clustering alike, are sensitive in different ways to the existence of duplicates. RF and Kmeans are positively affected by the existence of duplicates, meaning that their performance increases as the percentage of duplicates increases. The rest of the classifiers and clustering algorithms are sensitive to the existence of duplicates, especially at high percentages, which negatively affect their performance.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_40-Tracking_The_Sensitivity_of_The_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Transmission Rate and Recovery Rate of SIR Pandemic Model Using Kalman Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131239</link>
        <id>10.14569/IJACSA.2022.0131239</id>
        <doi>10.14569/IJACSA.2022.0131239</doi>
        <lastModDate>2022-12-30T12:34:00.1370000+00:00</lastModDate>
        
        <creator>Wahyu Sukestyastama Putra</creator>
        
        <creator>Afrig Aminuddin</creator>
        
        <creator>Ibnu Hadi Purwanto</creator>
        
        <creator>Rakhma Shafrida Kurnia</creator>
        
        <creator>Ika Asti Astuti</creator>
        
        <subject>Kalman filter; pandemic; SIR model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>COVID-19 is a global pandemic that significantly impacts all aspects of life. The number of victims who died makes this disease so terrible, and various policies continue to be pursued to reduce the spread and impact of COVID-19. The spread of a disease can be modeled with differential equations, a formulation known as the SIR model. A differential equation can be expressed as a state-space model, which is widely used to design modern control systems. This research estimates the transmission rate and recovery rate in the SIR pandemic model. Estimating the transmission rate and recovery rate poses a challenge given the reported numbers of people confirmed as infected. The experimental results show that the transmission and recovery rates can be estimated using the data for infected and recovered persons. Estimates of infected and recovered people were obtained using the Kalman Filter.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_39-Estimation_of_Transmission_Rate_and_Recovery_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Event Detection and Classification Using Deep Compressed Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131238</link>
        <id>10.14569/IJACSA.2022.0131238</id>
        <doi>10.14569/IJACSA.2022.0131238</doi>
        <lastModDate>2022-12-30T12:34:00.1200000+00:00</lastModDate>
        
        <creator>K. Swapnika</creator>
        
        <creator>D. Vasumathi</creator>
        
        <subject>Event detection; erosion; dilation; deep learning; deep compressed convolutional neural network; hashing; median filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Recently, the number of different kinds of events on social media platforms has shown a tremendous increase every second, so event detection holds a very important role in the current scenario. However, event detection is challenging in information technology (IT). Several machine learning-based approaches have been established for the event detection process, but they generate high error and various information losses, affecting the system’s performance. Thus, the proposed work introduces a new detection strategy based on a deep learning architecture in which both text and image data are utilized for event detection. The procedures applied to the image and text databases are pre-processing, feature extraction, and classification. The text data is pre-processed using four methods: lower-case filtering, tokenization, stemming, and stop word filtering. An adaptive median filter (AMF) is utilized for pre-processing the image data. After the pre-processing stage, feature extraction is performed for the text- and image-based data, in which the most useful features are extracted. Finally, the varied events are detected and classified using the proposed Deep Compressed Convolutional Neural Network (DCCNN). The entire work is implemented using the Python platform. The efficiency of the proposed model is measured by evaluating performance metrics such as accuracy, recall, precision, and F-measure. The simulation validation exhibits that the proposed classification method attains an improved accuracy of 97.1%, a precision of about 95.06%, a recall of 91.69%, and an F-measure of 93.35%. The efficacy of the proposed deep learning method is proved by comparing the attained results with various state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_38-Event_Detection_and_Classification_Using_Deep_Compressed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Copy Move Forgeries in Digital Images using Hybrid Optimization and Convolutional Neural Network Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131237</link>
        <id>10.14569/IJACSA.2022.0131237</id>
        <doi>10.14569/IJACSA.2022.0131237</doi>
        <lastModDate>2022-12-30T12:34:00.1030000+00:00</lastModDate>
        
        <creator>Anna Gustina Zainal</creator>
        
        <creator>Chamandeep Kaur</creator>
        
        <creator>Mohammed Saleh Al Ansari</creator>
        
        <creator>Ricardo Fernando Cosio Borda</creator>
        
        <creator>A. Nageswaran</creator>
        
        <creator>Rasha M. Abd El-Aziz</creator>
        
        <subject>Copy move forgery; convolutional neural network; image authentication; deep learning; tampered images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In the modern day, protecting data against tampering is a significant task. Digital photographs have become one of the most common forms of information display. Images may be exploited in a variety of contexts, including the military, security applications, intelligence areas, legal evidence, social media, and journalism. Digital picture forgeries involve altering the original images with strange patterns, which results in variability in the image&#39;s characteristics. Among the most challenging forms of image forgery to identify is Copy Move Forgery (CMF), which is carried out by copying a portion or piece of the picture and then inserting it again in a different place. When the actual content is unavailable, techniques for detecting fake content are utilised in image security. This study presents a novel method for Copy Move Forgery Recognition (CMFR), based mostly on deep learning (DL) and hybrid optimization. The hybrid Grey Wolf Optimization and African Buffalo Optimization (GWO-ABO) with a Convolutional Neural Network (CNN), i.e., GWO-ABO-CNN, is the foundation of the suggested model. The developed model extracts image features with convolution and pooling layers; thereafter, the features are matched to detect CMF. The MICC-F220, SATs-130, and MICC-F600 datasets are three publicly accessible datasets to which this methodology has been applied. To assess the model&#39;s efficacy, the outcomes of the GWO-ABO-CNN model were contrasted with those of other approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_37-Recognition_of_Copy_Move_Forgeries_in_Digital_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data and Internet of Things Web Service Management to Support Salt Agriculture Automation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131236</link>
        <id>10.14569/IJACSA.2022.0131236</id>
        <doi>10.14569/IJACSA.2022.0131236</doi>
        <lastModDate>2022-12-30T12:34:00.0900000+00:00</lastModDate>
        
        <creator>Muhammad Choirul Imam</creator>
        
        <creator>Dedi Trisnawarman</creator>
        
        <creator>Hugeng</creator>
        
        <creator>Hetty Karunia Tunjung Sari</creator>
        
        <subject>Web service; big data; salt; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The integration of the Internet of Things into web-based information system services implemented in the supply chain has given rise to new formats and models, which are important manifestations of industry transformation and improvement. In the context of implementing long-term rural development plans, deep integration of information technology with rural revitalization will act as a trigger that drives productivity and the development of other village business industries. The purpose of this research is to build a web service management model that can be used to manage and help optimize IoT-based salt farming production. The model consists of software and hardware architectures and the interconnections between tools. This research is divided into three stages: the first stage is to identify the data sources needed for big data, the second stage is to build a big data microservices model and the IoT model, and the third stage is to integrate the IoT data with the big data microservices model. The result of this study is an IoT device that runs with big data microservices and can be used to automate water distribution based on the salinity value measured using a sensor.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_36-Big_Data_and_Internet_of_Things_Web_Service_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cascaded Feature Extraction for Diagnosis of Ovarian Cancer in CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131235</link>
        <id>10.14569/IJACSA.2022.0131235</id>
        <doi>10.14569/IJACSA.2022.0131235</doi>
        <lastModDate>2022-12-30T12:34:00.0730000+00:00</lastModDate>
        
        <creator>Arathi B</creator>
        
        <creator>Shanthini A</creator>
        
        <subject>Ovarian cancer; deep convolutional neural network rat swarm optimization; CT image; joint feature; efficient net; improved non-local means; interpolated local binary pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper proposes ovarian cancer detection in ovarian images using joint feature extraction and an EfficientNet model. The noise in the input image is filtered using Improved NLM (Improved Non-Local Means) filtering. The deep features are extracted using Deep CNN_RSO (Deep Convolutional Neural Network with Rat Swarm Optimization), and the low-level texture features are extracted using ILBP (Interpolated Local Binary Pattern). To improve feature extraction and reduce error, a cascading technique is used; RSO also helps to efficiently optimize the DCNN features extracted from the images. Finally, the extracted features are classified using the EfficientNet classifier, which performs global average pooling and classifies ovarian cancer as normal or abnormal. The system is evaluated on The Cancer Genome Atlas Ovarian Cancer (TCGA-OV) dataset. The system’s performance in terms of sensitivity, specificity, accuracy, and error rate is better than that of other techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_35-A_Cascaded_Feature_Extraction_for_Diagnosis_of_Ovarian_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning for Closed Domain Question Answering in COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131234</link>
        <id>10.14569/IJACSA.2022.0131234</id>
        <doi>10.14569/IJACSA.2022.0131234</doi>
        <lastModDate>2022-12-30T12:34:00.0570000+00:00</lastModDate>
        
        <creator>Nur Rachmawati</creator>
        
        <creator>Evi Yulianti</creator>
        
        <subject>COVID-19; closed domain question answering; sequential dependence model; transfer learning; BERT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>COVID-19 has been a prominent topic from 2019 until today, and much research has recently been conducted to utilize the large amount of data discussing COVID-19. In this work, we conduct a closed domain question answering (CDQA) task on COVID-19 using a transfer learning technique. Transfer learning is adopted because a large benchmark for question answering about COVID-19 is still unavailable; therefore, rich knowledge learned from a large benchmark of open domain QA is utilized via transfer learning to improve the performance of our CDQA system. We use a retriever-reader framework for our CDQA system and propose to use the Sequential Dependence Model (SDM) as our retriever component to enhance the effectiveness of the system. Our results show that the SDM retriever can improve the F1 score of the state-of-the-art baseline CDQA systems using BM25 and TF-IDF+cosine similarity retrievers by 3.26% and 32.62%, respectively. The optimal parameter settings for our CDQA system are found to be as follows: using the 20 top-ranked documents as the retriever’s output, five sentences as the passage length, and the BERT-Large-Uncased model as the reader. In this optimal parameter setting, the SDM retriever improves the F1 score of the baseline CDQA system using BM25 by 5.06% and of the TF-IDF+cosine similarity retriever by 24.94%. Our last experiment confirms the merit of transfer learning, since our best-performing model (double fine-tuned on SQuAD and COVID-QA) is shown to gain eight times higher accuracy than the baseline method without transfer learning. Further fine-tuning the transfer learning model on the closed domain dataset (COVID-QA) increases the accuracy over the model fine-tuned only on SQuAD by 27.26%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_34-Transfer_Learning_for_Closed_Domain_Question_Answering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feedforward Deep Learning Optimizer-based RNA-Seq Women&#39;s cancers Detection with a hybrid Classification Models for Biomarker Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131233</link>
        <id>10.14569/IJACSA.2022.0131233</id>
        <doi>10.14569/IJACSA.2022.0131233</doi>
        <lastModDate>2022-12-30T12:33:59.9970000+00:00</lastModDate>
        
        <creator>Waleed Mahmoud Ead</creator>
        
        <creator>Marwa Abouelkhir Abdelazim</creator>
        
        <creator>Mona Mohamed Nasr</creator>
        
        <subject>Women&#39;s cancers; RNA-Seq; deep learning; molecular tumor; hybrid classification models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Women&#39;s cancers, exemplified by breast adenocarcinoma and non-small-cell lung cancer, are a significant threat to women&#39;s health. Across the globe, the leading cause of death for women is a group of tumors referred to as &quot;female-oriented cancers&quot;. The most recent research in molecular tumor classification analyzes women&#39;s cancers using RNA-Seq data for precision cancer diagnosis. Furthermore, discovering the behavior patterns of different genes makes it possible to predict cancer-specific biomarkers for the early diagnosis and detection of cancers specific to women. However, the high dimensionality of RNA-Seq data combined with small sample sizes results in overfit models. In this work, we propose a filter-based selection approach for a deep learning-based classification model. In addition, hybrid classification models are proposed for comparison with the new modified deep learning model. Experimental analysis showed that the proposed filter-based selection approach for a deep learning-based classification model performed better than the other hybrid models in terms of performance evaluation metrics, with an accuracy of 96.7% on RNA-Seq breast adenocarcinoma data and 95.5% on RNA-Seq non-small-cell lung cancer data.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_33-Feedforward_Deep_Learning_Optimizer_based_RNA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Ensemble Classifier for Prediction of Brain Strokes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131232</link>
        <id>10.14569/IJACSA.2022.0131232</id>
        <doi>10.14569/IJACSA.2022.0131232</doi>
        <lastModDate>2022-12-30T12:33:59.9800000+00:00</lastModDate>
        
        <creator>Samaa A. Mostafa</creator>
        
        <creator>Doaa S. Elzanfaly</creator>
        
        <creator>Ahmed E. Yakoub</creator>
        
        <subject>Stroke disease; prediction model; ensemble methods; stacking classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Brain strokes are considered one of the deadliest brain diseases due to their sudden occurrence, so predicting their occurrence and treating the contributing factors may reduce their risk. This paper proposes a brain stroke prediction model using machine learning classifiers and a stacking ensemble classifier. The SMOTE technique was employed for data balancing, and the standardization technique was used for data scaling. The classifiers’ best parameters were chosen using hyperparameter tuning. The proposed stacking prediction model was created by combining Random Forest (RF), K-Nearest Neighbors (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Naive Bayes (NB) as base classifiers, with Random Forest chosen as the meta-learner. The performance of the proposed stacking model has been evaluated using accuracy, precision, recall, and F1 score. In addition, the Matthews Correlation Coefficient (MCC) has also been used for more reliable evaluation on unbalanced datasets, which is the case for most medical datasets. The results demonstrate that the proposed stacking model outperforms the standalone classifiers, achieving an accuracy of 97% and an MCC value of 94%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_32-A_Machine_Learning_Ensemble_Classifier_for_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dilated Multi-Activation Autoencoder to Improve the Performance of Sound Separation Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131231</link>
        <id>10.14569/IJACSA.2022.0131231</id>
        <doi>10.14569/IJACSA.2022.0131231</doi>
        <lastModDate>2022-12-30T12:33:59.9630000+00:00</lastModDate>
        
        <creator>Ghada Dahy</creator>
        
        <creator>Mohammed A. A. Refaey</creator>
        
        <creator>Reda Alkhoribi</creator>
        
        <creator>M. Shoman</creator>
        
        <subject>Speech de-noising; speech enhancement; speech separation; short time Fourier transform (STFT); autoencoder; dilated Convolution neural network; multi-activation functions; convolution neural network (CNN); bidirectional long short memory (BLSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Speech enhancement is the process of improving the quality of audio for a target speaker while suppressing other sounds. It can be used in many applications such as speech recognition, mobile phones, and hearing aids, and also to enhance audio files produced by separation models. In this paper, a convolutional neural network (CNN) architecture is proposed to improve the quality of the target speaker’s audio produced by speech separation models, without any prior information about the background sounds. The proposed model consists of three main phases: a pre-processing phase, an autoencoder phase, and an audio retrieving phase. The pre-processing phase converts audio to the short time Fourier transform (STFT) domain. The autoencoder phase consists of two main modules: a dilated multi-activation encoder and a dilated multi-activation decoder. The dilated multi-activation encoder module has six blocks with different dilation factors, and each block consists of three CNN layers where each layer has a different activation function; the encoder’s blocks are then arranged in reverse order to construct the dilated multi-activation decoder. The audio retrieving phase reconstructs the audio from the features produced by the second phase. Audio files produced by separation models are used to build our dataset, which consists of 31250 files. The proposed dilated multi-activation autoencoder improved the separated audios’ Segmental Signal-to-Noise Ratio (SNRseg) by 33.9% and Short-Time Objective Intelligibility (STOI) by 1.3%, and reduced Bark Spectral Distortion (BSD) by 97%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_31-Dilated_Multi_Activation_Autoencoder_to_Improve_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Oral Disease Classification from Panoramic Radiograph using Transfer Learning and XGBoost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131230</link>
        <id>10.14569/IJACSA.2022.0131230</id>
        <doi>10.14569/IJACSA.2022.0131230</doi>
        <lastModDate>2022-12-30T12:33:59.9500000+00:00</lastModDate>
        
        <creator>Priyanka Jaiswal</creator>
        
        <creator>Vijay Katkar</creator>
        
        <creator>S. G. Bhirud</creator>
        
        <subject>Panoramic radiograph; dentistry; deep learning; ensemble classifier; multi-disease classification and prediction; oral diseases; weighted ensemble module; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The subject of oral healthcare is a crucial research field with significant technological development. This research examines dentistry, a branch of medicine concerned with the anatomy, development, and disorders of the teeth. Good oral health is essential for speaking, smiling, tasting, touching, digesting food, swallowing, and many other aspects, such as expressing a variety of emotions through facial expressions. Comfort in doing all these activities contributes to a person&#39;s self-confidence. A panoramic radiograph is used to diagnose multiple oral diseases at a time, and oral healthcare experts are needed to appropriately detect and classify disorders. This automated approach was developed to eliminate the overhead on experts and the time required for diagnosis. This research is based on a self-created dataset of 500 images representing six distinct diseases in 46 possible combinations: tooth wear, periapical disease, periodontitis, tooth decay, missing tooth, and impacted tooth. The system is developed using transfer learning with different pre-trained networks, namely “ResNet50V2”, “ResNet101V2”, “MobileNetV3Large”, “MobileNetV3Small”, “MobileNet”, “EfficientNetB0”, “EfficientNetB1”, and “EfficientNetB2”, combined with XGBoost to obtain the final prediction. The images in the dataset were divided into 80% for training and 20% for testing. Various metrics are used to assess the performance of the system. Experiments revealed that the proposed model detected tooth wear, periapical disease, periodontitis, tooth decay, missing tooth, and impacted tooth with an accuracy of 91.8%, 92.2%, 92.4%, 93.2%, 91.6%, and 90.8%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_30-Multi_Oral_Disease_Classification_from_Panoramic_Radiograph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Blockchain using Big data and the Internet of Things in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131229</link>
        <id>10.14569/IJACSA.2022.0131229</id>
        <doi>10.14569/IJACSA.2022.0131229</doi>
        <lastModDate>2022-12-30T12:33:59.9330000+00:00</lastModDate>
        
        <creator>Bassant Nabil Mohamed</creator>
        
        <creator>Hatem Abdelkader</creator>
        
        <subject>Big data; blockchain; internet of things; data security; healthcare; data processing cost; IOMCT; climate change and global warming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Modern organizations of all sizes emphasize safeguarding sensitive consumer information. Regardless of the limits given by the degrees they choose to pursue, people are nevertheless required to possess data management skills. They must also determine whether data should be centralized or decentralized to meet the objective of enhancing accessibility, and be able to monitor and regulate who has access to their data.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_29-The_Effect_of_Blockchain_using_Big_data_and_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Comprehensive Secret Sharing using Naive Image Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131228</link>
        <id>10.14569/IJACSA.2022.0131228</id>
        <doi>10.14569/IJACSA.2022.0131228</doi>
        <lastModDate>2022-12-30T12:33:59.9170000+00:00</lastModDate>
        
        <creator>Heri Prasetyo</creator>
        
        <creator>Kukuh Caezaocta Prayuda</creator>
        
        <subject>Comprehensive; image compression; na&#239;ve; polynomial; secret sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper presents a simple method for performing (k,n)-Secret Sharing (SS) with fast computation. It aims to reduce the computational time of the former scheme in the shadow generation process. The former scheme performs SS with polynomial function computation involving a color palette, which transforms a noisy-like shadow image into a more meaningful appearance; however, this transformation imposes a high computational burden. The proposed method exploits naive image compression to decrease the bits required to represent the secret and cover images, effectively avoiding the color palette used by the former scheme. The proposed method produces a set of shadow images with a cover image-like appearance. In addition, the secret and cover images can be reconstructed by gathering at least k shadow images. As documented in the Experimental Results section, the proposed method yields a promising result in (k,n)-SS with reduced computational time compared to that of the former scheme.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_28-Fast_Comprehensive_Secret_Sharing_Using_Naive_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Optimal Path Planning for Autonomous Robots with Moving Obstacles Avoidance in Dynamic Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131226</link>
        <id>10.14569/IJACSA.2022.0131226</id>
        <doi>10.14569/IJACSA.2022.0131226</doi>
        <lastModDate>2022-12-30T12:33:59.9030000+00:00</lastModDate>
        
        <creator>Kadari Neeraja</creator>
        
        <creator>G Narsimha</creator>
        
        <subject>Autonomous mobile robots; dynamic environment; planning; collision-free; time-efficient paths</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Path planning is vital for robust autonomous robot navigation, and driving in dynamic environments is particularly difficult. The majority of prior work is based on the premise that a robot possesses a comprehensive and precise representation of its surroundings before it starts; the problem of partially known and dynamic environments has received little attention. This circumstance occurs when an exploratory robot, or a robot without a floor plan or terrain map, must move to its destination. Existing approaches for dynamic path planning design a preliminary path based on known knowledge of the environment, then adjust locally by replanning the total path as obstacles are discovered by the robot&#39;s sensors, thereby sacrificing either optimality or computational efficacy. This paper presents a novel algorithm, the Near-Optimal Multi-Objective Path Planner (NO-MOPP), capable of planning time-efficient, near-optimal, and drivable paths in partially known and dynamic environments. It is an expansion of our earlier research contributions, &quot;A Multi-Objective Hybrid Collision-free Optimal Path Finder (MOHC-OPF) for Autonomous Robots in known static environments&quot; and &quot;A Multi-Objective Hybrid Collision-free Near-Optimal Path Planner (MOHC-NODPP) for Autonomous Robots in Dynamic environments&quot;. The environment contains a mix of static and moving dynamic obstacles, both expressed in a hybrid, discrete configuration space on an occupancy-grid map. The proposed approach is executed at two distinct levels. At the global path planning level, our earlier method, the Multi-Objective Hybrid Collision-free Optimal Path Finder (MOHC-OPF), finds the initial optimal path in an environment that includes only known stationary obstacles. At the second level, known as local re-planning, this optimal path is continuously refined by online re-planning to account for the movement of obstacles in the environment. NO-MOPP keeps the robot&#39;s sub-paths optimal while avoiding dynamic obstacles, all while obeying the robot&#39;s non-holonomic restrictions. The proposed technique is tested in simulation using a collection of standard maps. The simulation findings demonstrate the proposed method&#39;s ability to avoid both static and dynamic obstacles, as well as its capacity to find a near-optimal path to a goal location, without collision, in environments that are constantly changing. The optimal path is determined by taking into account several performance measures, including path length, collision-free travel, execution time, and path smoothness. In 90% of the studies utilizing the proposed method, it is more effective than other methods at determining the shortest, time-efficient, smooth drivable paths, reducing path length and execution time by an average of 15% compared to the existing methods.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_26-Multi_Objective_Optimal_Path_Planning_for_Autonomous_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye-Vision Net: Cataract Detection and Classification in Retinal and Slit Lamp Images using Deep Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131227</link>
        <id>10.14569/IJACSA.2022.0131227</id>
        <doi>10.14569/IJACSA.2022.0131227</doi>
        <lastModDate>2022-12-30T12:33:59.9030000+00:00</lastModDate>
        
        <creator>Binju Saju</creator>
        
        <creator>Rajesh R</creator>
        
        <subject>Cataract detection; grade classification; CRNN; dense CNN; Aquila optimization; BE-ResNet101</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In the modern world, cataracts are the predominant cause of blindness. Early detection and treatment can reduce the number of cataract patients and prevent surgery; however, cataract grade classification is necessary to control risk and avoid blindness. Various previous studies focused on developing systems to detect cataract type and grade, but existing works on cataract detection do not provide optimal results because of high detection error, lack of learning ability, computational complexity issues, etc. Therefore, the proposed work aims to develop effective deep learning techniques for detecting and classifying cataracts from the given input samples. Here, cataract detection and classification are performed in two phases. To provide accurate cataract detection, the proposed study introduces the Deep Optimized Convolutional Recurrent Network_Improved Aquila Optimization (Deep OCRN_IAO) model in phase I. Both retinal and slit lamp images are utilized for cataract detection; the performance on these two image datasets is analysed, and the best one is chosen for cataract type and grade classification. The slit lamp images attain higher results, so phase II uses slit lamp images and detects the type and grade of cataracts through the proposed Batch Equivalence ResNet-101 (BE_ResNet101) model. The proposed classification model is highly efficient at classifying the types and grades of cataracts. The experimental setup is implemented in MATLAB, and the datasets used for simulation are DRIMDB (Diabetic Retinopathy Images Database) and real-time slit lamp images. The proposed type and grade detection model has an accuracy of 98.87%, specificity of 99.66%, sensitivity of 98.28%, Youden index of 95.04%, Kappa of 97.83%, and F1-score of 95.68%. The obtained results and comparative analysis prove that the proposed model is highly suitable for cataract detection and classification.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_27-Eye_Vision_Net_Cataract_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardware Trojan Detection based on Testability Measures in Gate Level Netlists using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131225</link>
        <id>10.14569/IJACSA.2022.0131225</id>
        <doi>10.14569/IJACSA.2022.0131225</doi>
        <lastModDate>2022-12-30T12:33:59.8870000+00:00</lastModDate>
        
        <creator>Thejaswini P</creator>
        
        <creator>Anu H</creator>
        
        <creator>Aravind H S</creator>
        
        <creator>D Mahesh Kumar</creator>
        
        <creator>Syed Asif</creator>
        
        <creator>Thirumalesh B</creator>
        
        <creator>Pooja C A</creator>
        
        <creator>Pavan G R</creator>
        
        <subject>Hardware trojan; machine learning; controllability; observability; detection and mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Modern integrated circuit design and manufacturing involves outsourcing intellectual property to third-party vendors to cut down on overall cost. Since this entails a partial surrender of control, these third-party vendors may introduce a malicious circuit, commonly known as a Hardware Trojan, into the system in such a way that it goes undetected by the end-users’ default security measures. Therefore, to mitigate the threat of functionality change caused by the Trojan, a technique is proposed based on testability measures in gate-level netlists using machine learning. The proposed technique detects the presence of a Trojan from the gate-level description of nodes using controllability and observability values. Various machine learning models are implemented to classify the nodes as Trojan-infected or non-infected. Linear discriminant analysis obtains an accuracy of 92.85%, precision of 99.9%, recall of 80%, and F1 score of 88.8%, with a latency of around 0.9 ms.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_25-Hardware_Trojan_Detection_based_on_Testability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining the Characteristics of the Buddha Statue from Photogrammetry and 3D Creation to Simulate the Combination of the Art of Creating Buddha Statue</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131224</link>
        <id>10.14569/IJACSA.2022.0131224</id>
        <doi>10.14569/IJACSA.2022.0131224</doi>
        <lastModDate>2022-12-30T12:33:59.8700000+00:00</lastModDate>
        
        <creator>Jirawat Sookkaew</creator>
        
        <creator>Nakarin Chaikaew</creator>
        
        <subject>3D art; 3D artifacts; creation; blending art; reconstruction artifacts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This research was born out of interest in the art of sandstone Buddha statues carved in the area of Phayao Province, which was part of the Lanna Kingdom that prospered during the 19th-23rd Buddhist centuries. The sandstone Buddha statues created in Phayao Province are valuable works of art whose distinctive features are still visible today. Five categories of Phayao sandstone Buddha style have been studied and classified, each with its own distinctive characteristics. Nowadays, traditional techniques for making Buddha statues are becoming less and less common, as sandstone statues are no longer as popular as before: carving a Buddha statue from stone is a difficult and laborious process, and the number of skilled carvers is decreasing. In this research, photogrammetry was used to collect data on the statues and process them into 3D objects. 3D creation techniques were then applied to simulate a new Buddha statue: the outstanding features of the collected statues were used as the main basis for selecting proportions, which were combined to form a new Buddha statue in 3D format. This method serves as a prototype for reproducing the appearance of Buddha statues for use in the creation of works of art that are an important part of history. It can also be used to study the combined characteristics of Buddha statues, and it offers a new way to preserve these valuable works of art by using technology to transfer and conserve them.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_24-Combining_the_Characteristics_of_the_Buddha_Statue.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Palmprint Recognition based on Multispectral Sequential Capture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131223</link>
        <id>10.14569/IJACSA.2022.0131223</id>
        <doi>10.14569/IJACSA.2022.0131223</doi>
        <lastModDate>2022-12-30T12:33:59.8530000+00:00</lastModDate>
        
        <creator>Amine AMRAOUI</creator>
        
        <creator>Mounir AIT KERROUM</creator>
        
        <creator>Youssef FAKHRI</creator>
        
        <subject>Biometrics; multispectral palmprint; local features; fusion; compound local binary pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The security of personal identities is a serious challenge in today&#39;s digital world, with many daily transactions requiring secure solutions. The use of a person&#39;s biometric characteristics is presented as a reliable solution to this problem. This solution is effective, but it has a weak point: certain biometric characteristics can be reproduced for fraud. To overcome this weakness, we propose a secure approach for palmprints that relies on fusing multiple features. These features are extracted from multispectral images captured under two different spectra, acquired sequentially at two different times (T1, T2) but nearly instantaneously; the instant fusion of these characteristics is practically impossible to replicate. The images used are grayscale. To build a reliable and secure system for this kind of pattern (palmprints), we use the Compound Local Binary Pattern method, which adds an additional bit for each of the P bits coded by LBP, corresponding to a neighbor of the local neighborhood, in order to build a robust system. This feature descriptor uses both the sign and the magnitude of the differences between the central and neighboring gray values. The reliability of the proposed approach has been demonstrated on the CASIA Multi-Spectral database. The final experimental results show reliable recognition rates, varying between 99% and 100% for the left and right palms.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_23-Secure_Palmprint_Recognition_based_on_Multispectral.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Text Steganography Technique based on Multilayer Encoding with Format-Preserving Encryption and Huffman Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131222</link>
        <id>10.14569/IJACSA.2022.0131222</id>
        <doi>10.14569/IJACSA.2022.0131222</doi>
        <lastModDate>2022-12-30T12:33:59.8400000+00:00</lastModDate>
        
        <creator>Mohammed Abdul Majeed</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>Text steganography; format-preserving encryption; Huffman coding; unicode characters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Steganography is the process of hiding secret data inside other media, called cover media. Balancing the requirements for capacity, security, and imperceptibility is the main challenge for any successful steganography system. In text steganography, the data hiding capacity is limited because of the lack of redundant data compared to other digital media, such as images, video, or audio. Other challenges in text steganography are imperceptibility and security. Poor imperceptibility results from the structure of the text file, where changes are more visually apparent in terms of syntax and grammar than in other media. A low level of security results from the sequential selection of positions for embedding secret data, owing to insufficient redundant data in a text file; an attacker or a third party would therefore notice slight changes in the text file. This paper proposes a new text steganography method that combines cryptography and compression techniques to deal with these issues. The technique conceals secret data to achieve high data hiding capacity in the cover text while maintaining security and imperceptibility. Multilayer encoding and Format-Preserving Encryption (FPE) with Huffman Coding are applied to the secret data before embedding. Invisible Unicode characters are then employed to embed the secret data into English text files to generate stego files. Results show that, compared with previously developed methods, the proposed method satisfies the capacity and imperceptibility requirements in the cover file.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_22-New_Text_Steganography_Technique_based_on_Multilayer_Encoding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Matting using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131221</link>
        <id>10.14569/IJACSA.2022.0131221</id>
        <doi>10.14569/IJACSA.2022.0131221</doi>
        <lastModDate>2022-12-30T12:33:59.8230000+00:00</lastModDate>
        
        <creator>Nrupatunga J</creator>
        
        <creator>Swarnalatha K S</creator>
        
        <subject>Picture matting; RGB picture; Blue/Green screen; foreground; background</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Image matting, also referred to as picture matting in this article, is the task of extracting appealing targets from a picture or a sequence of pictures (i.e., video), and it has been used extensively in many photo and video editing applications. Image composition is the process of extracting an eye-catching subject from a photograph and blending it with a different background. Two techniques of picture matting are currently known: a) Blue/Green screen (curtain) matting, where the backdrop is uniform and the foreground (frontal area) and background (foundation) portions are readily distinguishable; this approach is currently the most used type of image matting. b) Natural picture matting, where the photos are taken naturally with cameras or cell phones during everyday activities; here it is difficult to discern the distinction between the foreground and the background at their boundaries. Current frameworks require both an RGB image and a trimap as inputs for natural picture matting, and the trimap is difficult to compute since an additional framework is required to obtain it. To overcome these drawbacks, this study introduces the Picture Matting Neural Net (PMNN) framework, which takes a single RGB image as input and creates the alpha matte without any human involvement between the framework and the user. The created alpha matte is tested against the alpha matte from the PPM-100 data set, and the PSNR and SSIM measurement indices are utilized to compare the two. The framework works well and can be fed with regular pictures taken with cameras or mobile phones without reducing the clarity of the image.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_21-Image_Matting_using_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Exposure Image Fusion based on Window Segmentation and a Laplacian Pyramid for Chip Package Appearance Quality Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131220</link>
        <id>10.14569/IJACSA.2022.0131220</id>
        <doi>10.14569/IJACSA.2022.0131220</doi>
        <lastModDate>2022-12-30T12:33:59.8070000+00:00</lastModDate>
        
        <creator>Fei Hao</creator>
        
        <creator>Jiatong Song</creator>
        
        <creator>Jiahao Sun</creator>
        
        <creator>Yang Fu</creator>
        
        <subject>Image fusion; multi-exposure; Laplacian pyramid; window segmentation; chip package</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>A heterogeneous material image enhancement method based on multi-exposure image fusion is proposed to address the problem of obtaining high-quality images from a single imaging of chips containing two materials with extremely different reflectivity. First, a multi-exposure image fusion algorithm based on window segmentation and Laplacian pyramid fusion is proposed. Then, orthogonal experiments are used to optimize the parameters of the imaging system. Next, a method based on information entropy and average gray intensity is utilized to calculate the imaging exposure times of the two heterogeneous materials, and two exposure time ranges are obtained that are appropriate for regions with high and low reflectivity. Finally, subjective and objective experimental evaluations are conducted after the multi-exposure image set has been established. The results show that the fused image has a good visual effect, with an information entropy of 6.29 and an average gray intensity of 131.56. In addition, time consumption is reduced by an average of 20.3% compared to the standard Laplacian pyramid strategy. The heterogeneous material enhancement method based on multi-exposure image fusion proposed in this paper is effective and deserving of further research and application.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_20-Multi_Exposure_Image_Fusion_Based_on_Window_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Footwear Sketches Colorization Method based on Generative Adversarial Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131219</link>
        <id>10.14569/IJACSA.2022.0131219</id>
        <doi>10.14569/IJACSA.2022.0131219</doi>
        <lastModDate>2022-12-30T12:33:59.7930000+00:00</lastModDate>
        
        <creator>Xin Li</creator>
        
        <creator>Yihang Zhang</creator>
        
        <subject>Footwear sketch; generative adversarial network; image to image translation; colorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The coloring of sketches has constant market demand and remains an active area of research. The difficulty of coloring a sketch outline is its lack of texture and color. Take footwear design as an example: it is difficult for designers to complete a colorful sketch in limited time, so an artificial intelligence technique for coloring shoes is required. We do not build a new GAN; our work is based on pix2pix. We integrate the existing model in four respects: the generator, the discriminator, the loss function, and the comparison scheme. In this paper, given an edges-to-shoes set of 50,025 shoe images, our approach produces vivid shoe images. Unlike recent research, our approach is not based on a single adversarial training setup. We show that shoe sketches can be synthesized by a GAN from simple lines into high-resolution pictures. In particular, we offer a new model to synthesize high-resolution photo-realistic images of shoes and apply a multi-discriminator to train on and distinguish the generated images. Our model enables shoe designers to benefit from colorization design.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_19-Footwear_Sketches_Colorization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast and Effective Method for Intrusion Detection using Multi-Layered Deep Learning Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131218</link>
        <id>10.14569/IJACSA.2022.0131218</id>
        <doi>10.14569/IJACSA.2022.0131218</doi>
        <lastModDate>2022-12-30T12:33:59.7930000+00:00</lastModDate>
        
        <creator>A. Srikrishnan</creator>
        
        <creator>Arun Raaza</creator>
        
        <creator>Ebenezer Abishek. B</creator>
        
        <creator>V. Rajendran</creator>
        
        <creator>M. Anand</creator>
        
        <creator>S. Gopalakrishnan</creator>
        
        <creator>Meena. M</creator>
        
        <subject>Intrusion detection system; knowledge discovery and data mining; transmission control protocol; adaptive parallelized intrusion detection; constrained-optimization-based extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The practice of recognising unauthorised abnormal actions on computer systems is referred to as intrusion detection. The primary goal of an Intrusion Detection System (IDS) is to identify user behaviours as normal or abnormal based on the data they communicate. Firewalls, data encryption, and authentication techniques were all employed in traditional security systems. However, current intrusion scenarios are highly sophisticated and are capable of easily breaking the security mechanisms imposed by traditional protection systems. Detecting intrusions is especially challenging in networked environments, as a system designed for such a scenario must be able to handle the huge data volume and velocity associated with the domain. This research presents three models, APID (Adaptive Parallelized Intrusion Detection), HBM (Heterogeneous Bagging Model) and MLDN (Multi Layered Deep learning Network), that can be used for fast and efficient detection of intrusions in networked environments. The deep learning model has been constructed using the Keras library. The training data is preprocessed and segregated to fit the processing architecture of neural networks. The network is constructed with multiple layers, and the other required parameters are set in accordance with the input data. The trained model is validated using validation data that has been specifically segregated for this purpose.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_18-A_Fast_and_Effective_Method_for_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Age Estimation on Human Face Image Using Support Vector Regression and Texture-Based Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131217</link>
        <id>10.14569/IJACSA.2022.0131217</id>
        <doi>10.14569/IJACSA.2022.0131217</doi>
        <lastModDate>2022-12-30T12:33:59.7770000+00:00</lastModDate>
        
        <creator>Jesy S Amelia</creator>
        
        <creator>Wahyono</creator>
        
        <subject>Age estimation; LBP; BSIF; LPQ; MAE; PCA; preprocessing; feature extraction; Support Vector Regression (SVR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper proposes a framework for estimating human age using facial features. These features exploit facial region information, such as wrinkles around the eyes and cheeks, which are represented as texture-based features. Our proposed framework has several steps: preprocessing, feature extraction, and age estimation. In this research, three feature extraction methods and their combinations are evaluated: Local Binary Pattern (LBP), Local Phase Quantization (LPQ), and Binarized Statistical Image Feature (BSIF). After feature extraction, Principal Component Analysis (PCA) is performed to reduce the feature size. Finally, the Support Vector Regression (SVR) method is used to predict age. In the evaluation, the estimation error is measured by the mean absolute error (MAE). In the experiment, we utilized the well-known public datasets face-age.zip and UTK Face, containing 15,202 facial images in total. The data were divided into 12,162 training images and 3,040 testing images. Our experiments found that combining BSIF and LPQ with PCA achieved the lowest MAEs of 9.766 and 9.754. The results show that texture-based features can be utilized for estimating age from facial images.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_17-Age_Estimation_on_Human_Face_Image_Using_Support_Vector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Human-Computer Interaction Product Interface of Intelligent Service System based on User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131216</link>
        <id>10.14569/IJACSA.2022.0131216</id>
        <doi>10.14569/IJACSA.2022.0131216</doi>
        <lastModDate>2022-12-30T12:33:59.7600000+00:00</lastModDate>
        
        <creator>Xiaoli Xiong</creator>
        
        <creator>Yongguang Hou</creator>
        
        <subject>User experience; intelligent service system; human-computer interaction; interface design; ARM processor; gateway module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Current intelligent service platforms do not consider user experience comprehensively enough in human-computer interaction products and services, so user satisfaction cannot reach the ideal level. Therefore, a new human-computer interaction interface for an intelligent service system based on user experience is designed. A hardware platform for the user-experience-based intelligent service system is built, and hypertext markup language is introduced so that more personalized design requirements can be met. A user-experience PC terminal and a Bluetooth/RS-485 gateway module are designed to achieve two-way signal conversion between Bluetooth and RS-485. Based on an ARM processor, speech recognition for human-computer interaction is completed, the features of the collected data are extracted, and hand gesture recognition is performed. To optimize the human-computer interaction effect, Kinect is used to track and identify moving objects, and the 3D interactive image is simulated with fused textures. Experimental results show that the proposed method has a higher probability of receiving data, and the recognition rate and recognition accuracy for gesture features can reach more than 90%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_16-Design_of_Human_Computer_Interaction_Product_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Media Multimodal Information Analysis based on the BiLSTM-Attention-CNN-XGBoost Ensemble Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131215</link>
        <id>10.14569/IJACSA.2022.0131215</id>
        <doi>10.14569/IJACSA.2022.0131215</doi>
        <lastModDate>2022-12-30T12:33:59.7470000+00:00</lastModDate>
        
        <creator>Ling Jixian</creator>
        
        <creator>An Gang</creator>
        
        <creator>Su Zhihao</creator>
        
        <creator>Song Xiaoqiang</creator>
        
        <subject>Natural language processing; social media sentiment analysis; multimodal information processing; ensemble neural network; emergency management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Social media users internalise information in a multimodal context. Social media functions as a primary information source for disaster situational awareness, encompassing texts, photographs, videos, and other multimodal information widely used in emergency management. Applying ensemble learning to social media sentiment analysis has garnered much scholarly attention, albeit with limited research on rescue and its sub-domains, which remain a major source of complexity. A multimodal information categorisation model based on hierarchical feature extraction is proposed in this study. The information of multiple modes is first mapped to a unified text vector space to model the semantic content at the sentence and multimodal information levels. Multiple deep learning (DL) models are subsequently applied to model the semantic content at these levels. This study offers a BiLSTM-Attention-CNN-XGBoost ensemble neural network model to acquire extensive multimodal information characteristics. Based on the empirical outcomes, this method precisely extracted multimodal information features, with an accuracy exceeding 85% and 95% for the Chinese- and English-language datasets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_15-Social_Media_Multimodal_Information_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection System using Long Short Term Memory Classification, Artificial Raindrop Algorithm and Harmony Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131214</link>
        <id>10.14569/IJACSA.2022.0131214</id>
        <doi>10.14569/IJACSA.2022.0131214</doi>
        <lastModDate>2022-12-30T12:33:59.7300000+00:00</lastModDate>
        
        <creator>Meghana G Raj</creator>
        
        <creator>Santosh Kumar Pani</creator>
        
        <subject>Artificial raindrop algorithm; cloud computing; harmony search algorithm; hybrid meta-heuristic algorithms; hyper-parameter tuning; intrusion detection system; long short term memory classification model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Nowadays, various technological advancements in Intrusion Detection Systems (IDS) detect malicious attacks and reinstate network security in the cloud platform. Cloud-based IDS designed with hybrid elements combining Machine Learning and Computational Intelligence algorithms have been shown to perform better on parameters such as Detection Rate, Accuracy, and False Positive Rate. Machine Learning algorithms provide effective techniques for classification and prediction of network attacks by analyzing existing IDS datasets; the main challenge is selecting the appropriate data dimensions to be used for attack detection out of the high number of dimensions available. For the selected data dimensions, Computational Intelligence algorithms provide effective techniques for hyper-parameter tuning by optimizing on a reiterative basis; here, the main challenge is selecting the algorithm that offers optimal performance. In this research, a Hybrid Meta-heuristic approach is proposed that combines a Long Short Term Memory (LSTM) classification model for dimension selection with the Artificial Raindrop Algorithm-Harmony Search Algorithm (ARA-HSA) for hyper-parameter tuning, in order to achieve a high-performance IDS in a cloud environment. The performance validation of the hybrid LSTM-ARA-HSA algorithm has been carried out using a benchmark IDS dataset, and comparative results for this algorithm along with other recent hybrid approaches are presented.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_14-Intrusion_Detection_System_using_Long_Short_Term_Memory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Complexity Classification of Thermophilic Protein using One Hot Encoding as Protein Representation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131212</link>
        <id>10.14569/IJACSA.2022.0131212</id>
        <doi>10.14569/IJACSA.2022.0131212</doi>
        <lastModDate>2022-12-30T12:33:59.7130000+00:00</lastModDate>
        
        <creator>Meredita Susanty</creator>
        
        <creator>Rukman Hertadi</creator>
        
        <creator>Ayu Purwarianti</creator>
        
        <creator>Tati Latifah Erawati Rajab</creator>
        
        <subject>Thermophilic; classification; one-hot encoding; BiLSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The laborious and cost-inefficient biochemical methods for identifying thermophilic proteins make a rapid and accurate computational method necessary. Recently, machine learning has become an effective approach for identifying specific classes of extremophiles. Although studies employing machine learning have yielded results superior to conventional methods, there is still a need for a low-cost method for identifying thermophilic proteins. Here, we avoid the problem of manually crafted features, in which experts must define and extract a set of features for various computational methods, by using only protein sequences as input. This study classifies thermophilic proteins and their counterparts using only protein sequences in a one-hot encoding representation and a bidirectional long short-term memory (BiLSTM) model. The model achieved an accuracy of 92.34 percent, a specificity of 91 percent, and a sensitivity of 93.77 percent, which is superior to other models reported elsewhere that rely on a number of manually crafted features. In addition, the more trustworthy and objective data set and the independent data set used for evaluation make this model competitive with other, more accurate models.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_12-Low_Complexity_Classification_of_Thermophilic_Protein.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Naive Bayes and SVM Classification in Grid-Search Hyperparameter Tuned and Non-Hyperparameter Tuned Healthcare Stock Market Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131213</link>
        <id>10.14569/IJACSA.2022.0131213</id>
        <doi>10.14569/IJACSA.2022.0131213</doi>
        <lastModDate>2022-12-30T12:33:59.7130000+00:00</lastModDate>
        
        <creator>KaiSiang Chong</creator>
        
        <creator>Nathar Shah</creator>
        
        <subject>Machine learning; sentiment analysis; opinion mining; Na&#239;ve Bayes; SVM Classifier; grid search technique; hyperparameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper compares the performance of Naive Bayes and SVM classifiers in sentiment analysis of healthcare companies&#39; stock comments on Bursa Malaysia. Differing from other studies, which focus on the performance of the classifier models, this paper focuses on identifying the hyperparameters of the classifier models that are significant for sentiment analysis and on the optimization potential of the models. The Grid Search technique is used for hyperparameter tuning. The precision, recall, f1-score, and accuracy of Naive Bayes and SVM before and after hyperparameter tuning are compared. The results show that the important hyperparameters for Naive Bayes are alpha and fit_prior, while the important hyperparameters for SVM are C, kernel, and gamma. After hyperparameter tuning, SVM gave a better performance, with an accuracy of 85.65%, than Naive Bayes, with an accuracy of 68.70%. This also shows that hyperparameter tuning is able to improve the performance of both models and that SVM has better optimization potential than Naive Bayes.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_13-Comparison_of_Naive_Bayes_and_SVM_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Location Named Entity Recognition for Tweets using a Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131211</link>
        <id>10.14569/IJACSA.2022.0131211</id>
        <doi>10.14569/IJACSA.2022.0131211</doi>
        <lastModDate>2022-12-30T12:33:59.7000000+00:00</lastModDate>
        
        <creator>Bedour Swayelh Alzaidi</creator>
        
        <creator>Yoosef Abushark</creator>
        
        <creator>Asif Irshad Khan</creator>
        
        <subject>NER; Named Entity Recognition; NEL; named entity linking; event; location; deep learning; Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Social media sites like Twitter have emerged in recent years as a major data source utilized in a variety of disciplines, including economics, politics, and scientific study. Twitter data can be used to extract pertinent information for decision-making and behavioral analysis. This study proposed Named Entity Recognition (NER) and Named Entity Linking (NEL) models to extract event location names and entities from colloquial Arabic texts using deep learning techniques. Google Maps was also used to obtain up-to-date details for each extracted site and to link it to its geographical coordinates. Our method predicted 40% and 48% of tweet locations at the regional and city levels, respectively, and, by F-measure, reliably identified 63% of tweet locations at a single Point of Interest.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_11-Arabic_Location_Named_Entity_Recognition_for_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Employee Turnover in IT Industries using Correlation and Chi-Square Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131210</link>
        <id>10.14569/IJACSA.2022.0131210</id>
        <doi>10.14569/IJACSA.2022.0131210</doi>
        <lastModDate>2022-12-30T12:33:59.6830000+00:00</lastModDate>
        
        <creator>Bagus Priambodo</creator>
        
        <creator>Yuwan Jumaryadi</creator>
        
        <creator>Sarwati Rahayu</creator>
        
        <creator>Nur Ani</creator>
        
        <creator>Anita Ratnasari</creator>
        
        <creator>Umniy Salamah</creator>
        
        <creator>Zico Pratama Putra</creator>
        
        <creator>Muhamad Otong</creator>
        
        <subject>Employee turnover; turnover factors; chi-square; classification algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Employee turnover in the IT industry is among the highest compared to other industries. Knowing the factors that influence turnover may help reduce this issue in the future. One of these factors is job satisfaction, which is composed of two important factors: status and seniority. In this study, correlation and chi-square visualization are utilized to determine the factors that affect employee turnover. An experiment was carried out to predict turnover using a private IT consultant dataset, comparing three classification algorithms (decision tree, Na&#239;ve Bayes, and Random Forest). The results show that job duration and position are factors that influence employee turnover in a software company.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_10-Predicting_Employee_Turnover_in_IT_Industries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Global Pattern Feedforward Neural Network Structure with Bacterial Foraging Optimization towards Medicinal Plant Leaf Identification and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131209</link>
        <id>10.14569/IJACSA.2022.0131209</id>
        <doi>10.14569/IJACSA.2022.0131209</doi>
        <lastModDate>2022-12-30T12:33:59.6670000+00:00</lastModDate>
        
        <creator>Sapna R</creator>
        
        <creator>S N Sheshappa</creator>
        
        <creator>P Vijayakarthik</creator>
        
        <creator>S Pravinth Raja</creator>
        
        <subject>Medicinal plant; feed-forward neural networks; linear discriminant analysis; bacterial foraging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Medicinal plant species help to cure various diseases across the world. Automated identification of medicinal plant species based on their structure is much needed in pharmaceutical laboratories. Plant species with complex backgrounds in the field make detection and classification more difficult. In this paper, the bacterial foraging optimization technique is employed in a medicinal plant prediction and classification architecture based on a feed-forward neural network, which is capable of identifying even complex structures of medicinal plants. Feed-forward neural networks are considered to have good recognition accuracy compared to other machine learning approaches. Bacterial foraging is further implemented to minimize the feature search space for the classifier and to provide optimal features for plant classification. The experimental outcomes of the proposed approach have been analysed on the medley dataset, evaluating its performance with respect to dice similarity coefficient, specificity, and sensitivity for medicinal plant classification. The findings are very positive, and further research will focus on using a larger dataset and increased computing resources to examine how well deep-learning neural networks function in identifying medicinal plants for use in health care.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_9-Global_Pattern_Feedforward_Neural_Network_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Recognizing Traffic Signs using Color and Texture Properties using the ELM Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131208</link>
        <id>10.14569/IJACSA.2022.0131208</id>
        <doi>10.14569/IJACSA.2022.0131208</doi>
        <lastModDate>2022-12-30T12:33:59.6530000+00:00</lastModDate>
        
        <creator>Xiaoda Cao</creator>
        
        <subject>Traffic sign recognition; convolutional neural network; HOG feature; LBP feature; ELM algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Road accidents cause significant financial and human losses every year. One cause of these accidents is human error, where the driver ignores traffic signs. Therefore, accurate detection of these signs helps increase the safety of drivers and pedestrians and reduce accidents. In recent years, much research has been done to increase the accuracy of sign recognition, most of it addressing problems that affect detection, such as adverse weather conditions, light reflection, and complex backgrounds. In the present study, considering the diversity of traffic signs&#39; geometric shapes, sign detection is performed using a convolutional neural network. In the feature extraction stage, LBP and HOG techniques are used, and finally the signs are identified and classified using the ELM algorithm. The results, obtained on 12569 images of which 75% were used for training and 25% for testing, show that this approach reaches an accuracy of 95%, an improvement over the 93% of the baseline work.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_8-A_Novel_Method_for_Recognizing_Traffic_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Time Warping Features Extraction Design for Quranic Syllable-based Harakaat Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131207</link>
        <id>10.14569/IJACSA.2022.0131207</id>
        <doi>10.14569/IJACSA.2022.0131207</doi>
        <lastModDate>2022-12-30T12:33:59.6370000+00:00</lastModDate>
        
        <creator>Noraimi Shafie</creator>
        
        <creator>Azizul Azizan</creator>
        
        <creator>Mohamad Zulkefli Adam</creator>
        
        <creator>Hafiza Abas</creator>
        
        <creator>Yusnaidi Md Yusof</creator>
        
        <creator>Nor Azurati Ahmad</creator>
        
        <subject>Speech processing; short time frequency transform; dynamic time warping; human-guided threshold classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The use of speech recognition systems, with a variety of approaches and techniques, has grown exponentially across human-machine interaction applications. The assessment of Qur&#39;anic recitation errors based on syllable utterance is used to verify compliance with the Tajweed rules, which generally include Harakaat (prolonging). The digital transformation of Quranic voice signals, with identification of Tajweed-based recitation errors of Harakaat, is the main research work in this paper. The study focuses on speech processing implemented using the representation of Quranic Recitation Speech Signals (QRSS) in a suitable digital format based on Al-Quran syllables, and on feature extraction design to reveal similarities or differences in recitation (based on Al-Quran syllables) between experts and students. Dynamic Time Warping (DTW) is applied to Short Time Frequency Transform (STFT) features of QRSS syllables for Harakaat measurement. The findings of this paper include a human-guided threshold classification approach used specifically to evaluate Harakaat based on the syllables of the Qur&#39;an. The threshold classification performance obtained for Harakaat is above 80% in both the training and testing stages. The analysis at the end of the experiment concludes that the threshold classification method over Minimum Path Cost (MPC) feature parameters can serve as an important feature to evaluate the Tajweed rules of Harakaat embedded in syllables.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_7-Dynamic_Time_Warping_Features_Extraction_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advantages of Digital Transformation Models and Frameworks for Business: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131206</link>
        <id>10.14569/IJACSA.2022.0131206</id>
        <doi>10.14569/IJACSA.2022.0131206</doi>
        <lastModDate>2022-12-30T12:33:59.6200000+00:00</lastModDate>
        
        <creator>Seyedali Aghamiri</creator>
        
        <creator>John Karima</creator>
        
        <creator>Nadire Cavus</creator>
        
        <subject>Digital transformation; digitalization; model; framework; business; SMEs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Digital Transformation (DT) is a vital change in the way an organization utilizes processes, people, and technology to provide value in response to ever-changing customer expectations for products and services. Researchers have developed models and frameworks to tackle concerns in this area, and the existing literature has improved our understanding of digital transformation. However, there are not enough comprehensive systematic literature reviews to paint a clear portrait of the advantages of related works and point out the major gaps for future studies. This study aims to evaluate how these models and frameworks affect business, highlighting their advantages and pointing out their gaps for future improvements and studies. A Systematic Literature Review (SLR) was applied to collect and review seven models and nine frameworks over the five years between 2017 and 2021 from four databases: IEEE, Web of Science, Scopus, and Science Direct. These models and frameworks were reviewed, and their advantages for researchers and practitioners were pointed out, giving a clear vision of what has been done in the development of Digital Transformation models and frameworks. The findings of this SLR indicate that the number of DT studies rose by 275% from 2020 to 2021 alone, with 62% of those studies conducted in Europe.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_6-Advantages_of_Digital_Transformation_Models_and_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microcontrollers Programming Framework based on a V-like Programming Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131205</link>
        <id>10.14569/IJACSA.2022.0131205</id>
        <doi>10.14569/IJACSA.2022.0131205</doi>
        <lastModDate>2022-12-30T12:33:59.6200000+00:00</lastModDate>
        
        <creator>Fernando Mart&#237;nez Santa</creator>
        
        <creator>Santiago Orjuela Rivera</creator>
        
        <creator>Fredy H. Mart&#237;nez Sarmiento</creator>
        
        <subject>Microcontroller; transpiler; API; programming language V; V-lang; Aixt project</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>This paper describes the design of a programming framework for microcontrollers, especially those with low program and data memory, based on a programming language with modern features. The proposed programming framework is named the Aixt Project and takes inspiration from similar projects such as Arduino, MicroPython, and TinyGo, among others. The project&#39;s name is inspired by the weasel mascot of the V programming language, and at the same time it is a tribute to the Ticuna people who live in the Amazon rain forest, between Colombia, Per&#250;, and Brazil. Aixt comes from Aixt&#252; or Ait&#252; r&#252;, which means otter in the Ticuna language. The proposed programming framework has three main components: the Aixt language, based on the V syntax; a transpiler that turns the defined V-like source code into C; and a generic cross-platform Application Programming Interface (API). The goal of this project is to obtain a cross-platform programming framework, with the same modern language and the same API, for programming different microcontrollers, especially those with low memory resources. The Aixt language is based on the syntax of the V programming language, but it uses mutable variables by default. V was selected as the base of this project because it is a new compiled programming language with interesting modern features. To turn Aixt source code into C, a transpiler is implemented in Python using specialized libraries to build each part of the translation process. The transpiled code is compiled by the native C compiler of each microcontroller to obtain the final binary file, which is why the API has to be adapted for each native C compiler. The complete project is released as a free and open-source project. Finally, different application tests were performed with the XC8 and XC16 compilers for the PIC16, PIC18, PIC24, and dsPIC33 microcontroller families, demonstrating the correct working of the overall framework. These tests show that using a modern-language framework to program microcontrollers is perfectly feasible with the proposed programming framework.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_5-Microcontrollers_Programming_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Simulation of a Blockchain Consensus for IoT Node Data Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131204</link>
        <id>10.14569/IJACSA.2022.0131204</id>
        <doi>10.14569/IJACSA.2022.0131204</doi>
        <lastModDate>2022-12-30T12:33:59.6030000+00:00</lastModDate>
        
        <creator>Bismark Tei Asare</creator>
        
        <creator>Laurent Nana</creator>
        
        <creator>Kester Quist-Aphetsi</creator>
        
        <subject>Blockchain consensus; ripple consensus algorithm; coloured petri net; cyber-physical system; IoT architecture; node data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>The classical blockchain developed for the Bitcoin cryptocurrency has evolved since its introduction more than a decade ago. Blockchain exists in different forms for different purposes and operational contexts. There has been significant growth in the business use cases of blockchain, based on the unique attributes of distributed ledger technology. Blockchain provides peer-to-peer distribution of data in a traceable and decentralized architecture that attains data authentication using consensus protocols. Blockchain as a distributed ledger is the fusion of cryptography, peer-to-peer networking technology, distributed system technology, and a consensus mechanism to assure information security and digital asset management. Consensus mechanisms are applied to the distributed ledger, which operates in a peer-to-peer network where message transmissions between peers are validated and stored across all active peers. Reaching an agreement to validate message transmission and maintaining the correctness of the state of data have become necessary requirements for critical wireless sensor networks that span several subsystems covering a large operational area. Due to the resource-constrained nature of the active actors of wireless sensor networks, any cryptographic solution to be adopted must be lightweight and efficient as well. This paper proposes a blockchain-based decentralized mechanism for the authentication of node data for storage on a distributed ledger. A coloured Petri net was used to model and simulate the system, detailing the critical attributes of its operation, which is based on a cyber-physical IoT architecture.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_4-Modeling_and_Simulation_of_a_Blockchain_Consensus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Personalized VR Short Video Content Distribution Algorithm based on Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131203</link>
        <id>10.14569/IJACSA.2022.0131203</id>
        <doi>10.14569/IJACSA.2022.0131203</doi>
        <lastModDate>2022-12-30T12:33:59.5900000+00:00</lastModDate>
        
        <creator>Han Zhong</creator>
        
        <creator>Donghyuk Choi</creator>
        
        <creator>Wonho Choi</creator>
        
        <creator>Yu Zheng</creator>
        
        <subject>Artificial neural network; VR short video; BP neural network; video cache; cost optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>In order to improve video quality, reduce the number of video stalls, and improve the video transcoding rate, a new personalized distribution algorithm for VR short video content based on an artificial neural network is proposed. A BP neural network is used to compress the original video and determine the execution mode and cache location of the VR short video cache file. Cached high-bit-rate VR short video streams are transcoded into low-bit-rate video streams to meet the needs of different network bandwidths. A video distribution model is built, and a multicast distribution tree is constructed based on this model; that is, relay servers are added to minimize the video distribution cost. Finally, through an algorithm that minimizes the distribution cost of VR short video, bandwidth loss and response delay are effectively reduced, achieving the goal of minimizing the distribution cost. The experimental results show that the number of video stalls with this method is always below six, effectively reducing video stalling. The PSNR value is high, with an increase of up to 0.5, and the video transcoding rate is improved, reaching up to 92%.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_3-Design_of_Personalized_VR_Short_Video_Content_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperspectral Image Segmentation using End-to-End CNN Architecture with built-in Feature Compressor for UAV Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131202</link>
        <id>10.14569/IJACSA.2022.0131202</id>
        <doi>10.14569/IJACSA.2022.0131202</doi>
        <lastModDate>2022-12-30T12:33:59.5730000+00:00</lastModDate>
        
        <creator>Muhammad Bilal</creator>
        
        <creator>Khalid Munawar</creator>
        
        <creator>Muhammad Shafique Shaikh</creator>
        
        <creator>Ubaid M. Al-Saggaf</creator>
        
        <creator>Belkacem Kada</creator>
        
        <subject>Hyperspectral images; CNN; dimensionality reduction; segmentation; PCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Hyperspectral image segmentation is an important task for geographical surveying. Real-time processing of this operation is especially important for sensors mounted on-board Unmanned Aerial Vehicles in the context of visual servoing, landmark recognition, and data compression for efficient storage and transmission. To this end, this paper proposes a machine learning approach to segmentation using an efficient Convolutional Neural Network (CNN) which incorporates a feature compressor and a subsequent segmentation module based on 3D convolution operations. The experimental results demonstrate that the proposed approach gives segmentation accuracy on par with conventional approaches that use Principal Component Analysis (PCA) to reduce the feature dimensionality. Moreover, the proposed network is at least 35% faster than conventional CNN-based approaches using 3D convolutions.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_2-Hyperspectral_Image_Segmentation_using_End_to_End_CNN_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Step Approach to Weighted Bipartite Link Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131201</link>
        <id>10.14569/IJACSA.2022.0131201</id>
        <doi>10.14569/IJACSA.2022.0131201</doi>
        <lastModDate>2022-12-30T12:33:59.5570000+00:00</lastModDate>
        
        <creator>Nathan Ma</creator>
        
        <subject>Bipartite graph; weighted graph; link prediction; two-step algorithm; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(12), 2022</description>
        <description>Many real-world person-person or person-product relationships can be modeled graphically. Specifically, bipartite graphs are especially useful when modeling scenarios involving two disjoint groups. As a result, existing papers have utilized bipartite graphs to address the classical link recommendation problem. Applying the principle of bipartite graphs, this research presents a modified approach to this problem which employs a two-step algorithm for making recommendations that accounts for the frequency of and similarity between common edges. Implemented in Python, the new approach was tested using bipartite data from the Epinions and MovieLens data sources. The findings showed that it improved on the baseline results, performing within an estimated error of 14 percent. This two-step algorithm produced promising findings and can be refined to generate recommendations with even greater accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No12/Paper_1-A_Two_Step_Approach_to_Weighted_Bipartite_Link.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Blockchain-based Medical Test Results Management System: A Case Study in Vietnam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01311103</link>
        <id>10.14569/IJACSA.2022.01311103</id>
        <doi>10.14569/IJACSA.2022.01311103</doi>
        <lastModDate>2022-12-02T12:41:29.2600000+00:00</lastModDate>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Nghia Huynh Huu</creator>
        
        <creator>Tran Nguyen Huyen</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Blockchain-based system; hyperledger fabric; medical test results; medical institutions in developing countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The role of the testing process in the diagnosis and treatment of patients&#39; diseases in medical facilities today cannot be denied. The results of this process help doctors and nurses in medical centers make preliminary and detailed assessments of symptoms and provide a specific course of treatment for their patients. In addition, these results are stored in a patient&#39;s medical record that serves as a reference for subsequent therapies. However, both approaches to storing this information (i.e., paper-based and electronic-based) face difficulties. Especially in developing countries (e.g., Vietnam), this process encounters major obstacles at health centers in rural areas. Many centralized/decentralized storage methods have been proposed to solve the above problem. Moreover, the currently popular patient-centered method (in which all information sharing is decided by the patient) can solve the above problems and has been adopted by many research directions. However, these methods require the user (i.e., the patient) to have a background in security and privacy, as well as cutting-edge technologies installed on their phones. This is extremely difficult to apply in rural areas of developing countries, where people are not yet conscious of protecting their personal information. This paper proposes a mechanism for storing and managing the test results of patients at medical centers based on blockchain technology, applicable to developing countries. We build a proof-of-concept based on the Hyperledger Fabric platform and exploit Hyperledger Caliper to evaluate a variety of scenarios related to system performance (i.e., create, query, and update).</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_103-Towards_a_Blockchain_based_Medical_Test_Results_Management_System_A_Case_Study_in_Vietnam.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing the Acceptance of Online Mobile Auctions using User-Centered Agile Software Development: An Early Technology Acceptance Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01311102</link>
        <id>10.14569/IJACSA.2022.01311102</id>
        <doi>10.14569/IJACSA.2022.01311102</doi>
        <lastModDate>2022-12-01T12:26:14.2370000+00:00</lastModDate>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ahmed Alrehaili</creator>
        
        <creator>Ali Tufail</creator>
        
        <creator>Aseel Natour</creator>
        
        <creator>Yaman Husari</creator>
        
        <creator>Mohammed A. Al-Sharafi</creator>
        
        <creator>Albaraa M. Alsaadi</creator>
        
        <creator>Hani Almoamari</creator>
        
        <subject>Online auction; mobile auction; technology acceptance model; eBay; human-centered design; agile software development; factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>e-Commerce is booming everywhere, and Saudi Arabia is no exception. However, the adoption and prevalence of online mobile auctions (aka m-auctions) remains unsatisfactory in Saudi Arabia and the MENA region. This paper uncovers the enabling factors and the barriers hindering the use of mobile auctions by online consumers. To this end, a multiphase mixed-methods design is applied to acquire an in-depth understanding of the online mobile bidding and auctioning attitudes and practices of Saudi auctioneers and bidders. Initially, an interactive mobile auction app was developed by applying the principles of the user-centered agile software development (UCASD) methodology, which incorporated several design iterations based on feedback from 454 real users. The mobile auction requirements were collected using a mix of research methods, including a survey, focus groups, prototyping, and user testing. The UCASD methodology positively influenced the early evidence-based adoption and use of mobile auctions in the Saudi market. Subsequently, three consecutive focus groups were conducted with another 22 participants to elicit further insights regarding the antecedents impacting the intention to embrace online auctions using mobile phones. A taxonomy of requirements, coupled with thematic analysis of the discussions, gave rise to 13 influential factors for mobile auctions, namely risk, quality of products, trust, ubiquity, usefulness, access to valuable products, ease of use, age, social influence, monetary costs, enjoyment, past experience, and facilitating conditions. Our inductive approach resulted in an early technology acceptance model of mobile auctions. We conclude by reflecting on the challenges observed and suggest practical guidelines to pave the way for other researchers in this promising area to carry out experimental studies to refine the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_102-Factors_Influencing_the_Acceptance_of_Online_Mobile_Auctions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student Acceptance Towards Online Learning Management System based on UTAUT2 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131115</link>
        <id>10.14569/IJACSA.2022.0131115</id>
        <doi>10.14569/IJACSA.2022.0131115</doi>
        <lastModDate>2022-11-30T10:34:34.7130000+00:00</lastModDate>
        
        <creator>Masitah Musa</creator>
        
        <creator>Mohd. Norasri Ismail</creator>
        
        <creator>Suhaidah Tahir</creator>
        
        <creator>Mohd. Farhan Md. Fudzee</creator>
        
        <creator>Muhamad Hanif Jofri</creator>
        
        <subject>Online learning management system; technology acceptance; unified theory of acceptance and usage of technology 2; online learning value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Recently, education has shifted from physical learning to online and hybrid learning, and the outbreak of COVID-19 has made these modes even more significant. An online learning management system (LMS) is one of the most prevalent approaches to online and distance learning. Students’ acceptance of the LMS is significant, as their responses, whether positive or negative, determine its success. However, Universiti Tun Hussein Onn Malaysia (UTHM) has not yet conducted any study to examine its LMS. The Unified Theory of Acceptance and Usage of Technology (UTAUT2) model is used in this study to investigate students’ Behavioral Intention and Use Behavior when using the LMS at UTHM. This study also introduces a new construct in UTAUT2 named Online Learning Value. 376 respondents took part in the survey. Descriptive Statistics, Reliability Analysis, Pearson Correlation Coefficient, and Multiple Linear Regression analysis were used to analyze the survey data. The outcome of this research is that Performance Expectancy (β=0.129, p=0.014), Hedonic Motivation (β=0.221, p=0.000), Online Learning Value (β=0.109, p=0.036), and Habit (β=0.513, p=0.000) have an influence on students’ intention to use the LMS. Besides that, Facilitating Conditions (β=0.481, p=0.000) are the most important factor in students’ use behavior toward the LMS, followed by Habit (β=0.343, p=0.000) and Behavioral Intention (β=0.239, p=0.000). By utilizing the UTAUT2 model, the constructs of technology acceptance related to students&#39; adoption of the LMS have been identified and may serve as a reference for stakeholders in future enhancements.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_15-Student_Acceptance_Towards_Online_Learning_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Insight into Blockchain Technology: Past Development, Present Impact and Future Considerations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01311101</link>
        <id>10.14569/IJACSA.2022.01311101</id>
        <doi>10.14569/IJACSA.2022.01311101</doi>
        <lastModDate>2022-11-30T10:26:21.1730000+00:00</lastModDate>
        
        <creator>Farhat Anwar</creator>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Miss Laiha Mat Kiah</creator>
        
        <creator>Nor Aniza Abdullah</creator>
        
        <creator>Khang Wen Goh</creator>
        
        <subject>Blockchain; blockchain applications; consensus algorithms; distributed ledger; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Blockchain technology is based on the idea of a distributed, consensus ledger, which it employs to create a secure, immutable data storage and management system. It is a publicly accessible and collectively managed ledger enabling unprecedented levels of trust and transparency between business and individual collaborations. It has both robust cryptographic security and a transparent design. The immutability feature of blockchain data has the potential to transform numerous industries. People have begun to view blockchain as a revolutionary technology capable of identifying &quot;The Best Possible Solution&quot; in various real-world scenarios. This paper provides a comprehensive insight into blockchains, fostering an objective understanding of this cutting-edge technology by focusing on the theoretical fundamentals, operating principles, evolution, architecture, taxonomy, and diverse application-based manifestations. It investigates the need for decentralisation, smart contracts, permissioned and permissionless consensus mechanisms, and numerous blockchain development frameworks, tools, and platforms. Furthermore, the paper presents a novel compendium of existing and emerging blockchain technologies by examining the most recent advancements and challenges in blockchain-enabled solutions for a variety of application domains. This survey bridges multiple domains and blockchain technology, discussing how embracing blockchain technology is reshaping society&#39;s most important sectors. Finally, the paper delves into potential future blockchain ecosystems, providing a clear picture of open research challenges and opportunities for academics, researchers, and companies with a strong fundamental and technical grounding.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_101-A_Comprehensive_Insight_into_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Meta-Heuristic-Feature Fusion Model using Deep Neuro-Fuzzy Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01311100</link>
        <id>10.14569/IJACSA.2022.01311100</id>
        <doi>10.14569/IJACSA.2022.01311100</doi>
        <lastModDate>2022-11-30T10:26:21.1570000+00:00</lastModDate>
        
        <creator>Sri Laxmi Kuna</creator>
        
        <creator>A. V. Krishna Prasad</creator>
        
        <subject>Multi-class severity classification; diabetic retinopathy; modified mating probability-based water strider algorithm; optimized deep neuro-fuzzy classifier; fuzzy clustering model; adaptive thresholding; optic disc removal; image enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Diabetic Retinopathy (DR) is the major cause of vision loss among adults worldwide. DR patients generally do not have any symptoms until they reach the final stage. The categorization of retinal images is a remarkable application in detecting DR. Due to the level of sugar in the blood, categorizing DR severity becomes complicated, making it difficult to determine the grading level of the damage caused in the retina. To address these challenges, a new DR severity classification model is proposed for detecting and treating DR. The main objective of the proposed model is to classify the severity grades occurring in the retinal region of the human eye. Initially, the gathered retinal images are enhanced, and blood vessel segmentation is performed using optic disc removal and an active contouring model. Abnormalities such as “microaneurysms, hemorrhages, and exudates” are segmented using Fuzzy C-Means Clustering (FCM) and adaptive thresholding. The segmented images are then fed to “VGG16 and ResNet”, from which two different feature sets are acquired and added to obtain the second feature set, F2. The enhanced images are likewise fed to “VGG16 and ResNet” to obtain the first feature set, F1. In the feature concatenation phase, the two feature sets are fused with the aid of a weight parameter optimized by the Modified Mating Probability-based Water Strider Algorithm (MMP-WSA). Finally, multi-class severity classification is performed using the Optimized Deep Neuro-Fuzzy Classifier (ODNFC), whose hyper-parameters are also optimized by the proposed MMP-WSA. The experimental results of the proposed model show precise segmentation of the abnormalities and better classification results regarding the grade level.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_100-An_Efficient_Meta_Heuristic_Feature_Fusion_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NGram Approach for Semantic Similarity on Arabic Short Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131199</link>
        <id>10.14569/IJACSA.2022.0131199</id>
        <doi>10.14569/IJACSA.2022.0131199</doi>
        <lastModDate>2022-11-30T10:26:21.1400000+00:00</lastModDate>
        
        <creator>Rana Husni Al-Mahmoud</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <subject>Arabic text; Ngram; semantic sentences similarity; short text; ALMaany; natural language; semantic similarity of words; corpus-based measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Measuring the semantic similarity between words requires a method that can simulate human thought. The use of computers to quantify and compare semantic similarities has become an important research area in various fields, including artificial intelligence, knowledge management, information retrieval, and natural language processing. Computational semantics require efficient measures for computing concept similarity, which still need to be developed. Several computational measures quantify semantic similarity based on knowledge resources such as the WordNet taxonomy. Several measures based on taxonomical parameters have been applied to optimize the expression for content semantics. This paper presents a new similarity measure for quantifying the semantic similarity between concepts, words, sentences, short text, and long text based on NGram features and Synonyms of NGram related to the same domain. The proposed algorithm was tested on 700 tweets, and the semantic similarity values were compared with cosine similarity on the same dataset. The results were analyzed manually by a domain expert who concluded that the values provided by the proposed algorithm were better than the cosine similarity values within the selected domain regarding the semantic similarity between the datasets’ short texts.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_99-NGram_Approach_for_Semantic_Similarity_on_Arabic_Short_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Scale ConvLSTM Attention-Based Brain Tumor Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131198</link>
        <id>10.14569/IJACSA.2022.0131198</id>
        <doi>10.14569/IJACSA.2022.0131198</doi>
        <lastModDate>2022-11-30T10:26:21.1270000+00:00</lastModDate>
        
        <creator>Brahim AIT SKOURT</creator>
        
        <creator>Aicha MAJDA</creator>
        
        <creator>Nikola S. Nikolov</creator>
        
        <creator>Ahlame BEGDOURI</creator>
        
        <subject>Convolutional neural networks; image processing; semantic brain tumor segmentation; convolutional long short term memory; inception; squeeze-excitation; residual-network; attention units</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>In computer vision, various machine learning algorithms have proven to be very effective. Convolutional Neural Networks (CNNs) are a kind of deep learning algorithm that has become widely used in image processing, with a remarkable success rate compared to conventional machine learning algorithms. CNNs are used across different computer vision fields, especially in the medical domain. In this study, we perform semantic brain tumor segmentation using a novel deep learning architecture we call the multi-scale ConvLSTM Attention Neural Network, which combines Convolutional Long Short-Term Memory (ConvLSTM) and Attention units with multiple feature extraction blocks such as Inception, Squeeze-Excitation, and Residual Network blocks. The use of such blocks separately is known to boost model performance; in our case, we show that their combination also has a beneficial effect on accuracy. Experimental results show that our model performs brain tumor segmentation effectively compared to standard U-Net, Attention U-net, and a Fully Connected Network (FCN), achieving a Dice score of 79.78 with our method compared to 78.61, 73.65, and 72.89 for Attention U-net, standard U-net, and FCN, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_98-Multi_Scale_ConvLSTM_Attention_Based_Brain_Tumor_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rao-Blackwellized Particle Filter with Neural Network using Low-Cost Range Sensor in Indoor Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131197</link>
        <id>10.14569/IJACSA.2022.0131197</id>
        <doi>10.14569/IJACSA.2022.0131197</doi>
        <lastModDate>2022-11-30T10:26:21.1270000+00:00</lastModDate>
        
        <creator>Norhidayah Mohamad Yatim</creator>
        
        <creator>Amirul Jamaludin</creator>
        
        <creator>Zarina Mohd Noh</creator>
        
        <creator>Norlida Buniyamin</creator>
        
        <subject>Simultaneous localization and mapping (SLAM); occupancy grid map; neural network; Rao-Blackwellized Particle Filter; infrared sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Implementation of the Rao-Blackwellized Particle Filter (RBPF) in grid-based simultaneous localization and mapping (SLAM) algorithms with range sensors is commonly developed using sensors with dense measurements, such as laser rangefinders. In this paper, a more cost-effective solution was explored, in which an array of infrared sensors mounted on a mobile robot platform was used. The observations from the array of infrared sensors are noisy and sparse, which adds uncertainty to the implementation of the SLAM algorithm. To compensate for the high uncertainty in the robot’s observations, a neural network was integrated with the grid-based SLAM algorithm. The results show that the grid-based SLAM algorithm with the neural network has better accuracy than the grid-based SLAM algorithm without it for the aforementioned mobile robot implementation. The algorithm improves map accuracy by 21% and significantly reduces the robot’s state-estimate error. The better performance is due to the improved accuracy of the grid cells’ occupancy values, which affects the importance weight computation in the RBPF algorithm, resulting in better map accuracy and robot state estimates. This finding shows that a promising grid-based SLAM algorithm can be obtained using merely an array of infrared sensors as the robot’s observations.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_97-Rao_Blackwellized_Particle_Filter_with_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVIDnet: An Efficient Deep Learning Model for COVID-19 Diagnosis on Chest CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131196</link>
        <id>10.14569/IJACSA.2022.0131196</id>
        <doi>10.14569/IJACSA.2022.0131196</doi>
        <lastModDate>2022-11-30T10:26:21.1100000+00:00</lastModDate>
        
        <creator>Briskline Kiruba S</creator>
        
        <creator>Murugan D</creator>
        
        <creator>Petchiammal A</creator>
        
        <subject>Coronavirus disease; reverse transcription polymerase chain reaction; computed tomography; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The novel coronavirus disease (COVID-19) has been a severe worldwide threat to humans since December 2019. The virus mainly affects the human respiratory system, making breathing difficult. Early detection and diagnosis are essential to controlling the disease. Radiological imaging, like Computed Tomography (CT) scans, produces clear, high-quality chest images and helps quickly diagnose lung abnormalities. Recent advancements in artificial intelligence enable accurate and fast detection of COVID-19 symptoms on chest CT images. This paper presents COVIDnet, an improved and efficient deep learning model for COVID-19 diagnosis on chest CT images. We developed a chest CT dataset from 220 CT studies from Tamil Nadu, India, to evaluate the proposed model. The final dataset contains 5191 CT images (3820 COVID-infected and 1371 normal CT images). The proposed COVIDnet model aims to produce accurate diagnostics for classifying these two classes. Our experimental results show that COVIDnet achieved a superior accuracy of 98.98% compared with three contemporary deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_96-COVIDnet_An_Efficient_Deep_Learning_Model_for_COVID_19_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm Intelligence-based Hierarchical Clustering for Identification of ncRNA using Covariance Search Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131195</link>
        <id>10.14569/IJACSA.2022.0131195</id>
        <doi>10.14569/IJACSA.2022.0131195</doi>
        <lastModDate>2022-11-30T10:26:21.0930000+00:00</lastModDate>
        
        <creator>Lustiana Pratiwi</creator>
        
        <creator>Yun-Huoy Choo</creator>
        
        <creator>Azah Kamilah Muda</creator>
        
        <creator>Satrya Fajri Pratama</creator>
        
        <subject>Covariance model; ncRNA identification; swarm intelligence; hierarchical clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The Covariance Model (CM) has been quite effective in finding potential members of existing families in non-coding Ribonucleic Acid (ncRNA) identification and has provided excellent accuracy on genome sequence databases. However, it has significant drawbacks for family-specific search. An existing Hierarchical Agglomerative Clustering (HAC) technique merges overlapping sequences into what is known as a combined CM (CCM). However, the structural information is discarded, and the sequence features of each family are significantly diluted as the number of original structures increases. Additionally, it can only find members of existing families and is not useful for finding potential members of novel ncRNA families. Furthermore, it is also important to construct generic sequence models that can be used to recognize new potential members of novel ncRNA families and to identify unknown ncRNA sequences as potential members of known families. To achieve these objectives, this study proposes to implement Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) to ensure the CCMs have the best quality at every level of the dendrogram hierarchy. This study will also apply a distance matrix as the criterion to measure the compatibility between two CMs. The proposed techniques will use five gene families, with fifty sequences from each family, from the Rfam database, divided into training and testing datasets to test the CM combination method. The proposed techniques will be compared to the existing HAC in terms of identification accuracy, sum of bit-scores, and processing time, with each of these performance measurements statistically validated.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_95-Swarm_Intelligence_based_Hierarchical_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Protection Method to Enhance Data Utility while Preserving the Privacy of Medical Patients Data Publishing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131194</link>
        <id>10.14569/IJACSA.2022.0131194</id>
        <doi>10.14569/IJACSA.2022.0131194</doi>
        <lastModDate>2022-11-30T10:26:21.0930000+00:00</lastModDate>
        
        <creator>Shermina Jeba</creator>
        
        <creator>Mohammed BinJubier</creator>
        
        <creator>Mohd Arfian Ismail</creator>
        
        <creator>Reshmy Krishnan</creator>
        
        <creator>Sarachandran Nair</creator>
        
        <creator>Girija Narasimhan</creator>
        
        <subject>Medical patients data publishing; anonymization; protection method for preserving the privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Medical patient data need to be published and made available to researchers so that they can use, analyse, and evaluate the data effectively. However, publishing medical patient data raises privacy concerns regarding protecting sensitive data while preserving the utility of the released data. The privacy-preserving data publishing (PPDP) process attempts to keep public data useful without risking the medical patients’ privacy. Through protection methods like perturbing, suppressing, or generalizing values, which lead to uncertainty in identity inference or sensitive value estimation, PPDP aims to reduce the risk of patient data being disclosed while preserving the potential use of published data. Although this approach is helpful, information loss is inevitable when attempting to achieve a high level of privacy using protection methods. In addition, privacy-preserving techniques may affect the use of data, resulting in imprecise or even impractical knowledge extraction. Thus, balancing privacy and utility in medical patient data is essential. This study proposes an innovative technique that uses a hybrid protection method to enhance utility while preserving medical patients’ data privacy. The technique partitions information horizontally and vertically, grouping data into columns and equivalence classes. The attributes assumed to be easily known by any attacker are then determined by upper and lower protection levels (UPL and LPL). This work also relies on false matches and value swapping to reduce the likelihood of attribute disclosure. According to the results, the innovative technique delivers about 93.4% data utility using LPL and 95% using UPL when the exchange level is 5%, on a 4.5K medical patient dataset. In conclusion, the innovative technique minimizes risk disclosure compared to other existing works.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_94-A_Hybrid_Protection_Method_to_Enhance_Data_Utility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallelizing Image Processing Algorithms for Face Recognition on Multicore Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131193</link>
        <id>10.14569/IJACSA.2022.0131193</id>
        <doi>10.14569/IJACSA.2022.0131193</doi>
        <lastModDate>2022-11-30T10:26:21.0770000+00:00</lastModDate>
        
        <creator>Kausar Mia</creator>
        
        <creator>Tariqul Islam</creator>
        
        <creator>Md Assaduzzaman</creator>
        
        <creator>Tajim Md. Niamat Ullah Akhund</creator>
        
        <creator>Arnab Saha</creator>
        
        <creator>Sonjoy Prosad Shaha</creator>
        
        <creator>Md. Abdur Razzak</creator>
        
        <creator>Angkur Dhar</creator>
        
        <subject>Image processing; multi-core platforms; machine learning; face recognition; parallelizing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A good face detection system should be able to identify objects under varying degrees of illumination and orientation, and to respond to all possible variations in the image. The appearance of a face in an image depends on the relative camera-face pose, so features such as the nose or an eye may be only partially visible. The appearance of a face is also directly influenced by the person’s facial expression and may be partially occluded by surrounding objects. One of the most important and necessary conditions for reliable face classification techniques is excluding the background. However, faces can appear in complex backgrounds and in different positions, and a face recognition system can mistake some areas of the background for faces. This paper addresses several face recognition problems, including segmenting, extracting, and identifying facial features that may be confused with the background.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_93-Parallelizing_Image_Processing_Algorithms_for_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Logarithm and Russian Multiplication Protocols to Improve Paillier’s Cryptosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131192</link>
        <id>10.14569/IJACSA.2022.0131192</id>
        <doi>10.14569/IJACSA.2022.0131192</doi>
        <lastModDate>2022-11-30T10:26:21.0630000+00:00</lastModDate>
        
        <creator>Hamid El Bouabidi</creator>
        
        <creator>Mohamed EL Ghmary</creator>
        
        <creator>Sara Maftah</creator>
        
        <creator>Mohamed Amnai</creator>
        
        <creator>Ali Ouacha</creator>
        
        <subject>Cloud computing; cloud security; homomorphic encryption; Paillier cryptosystem; sockets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Cloud computing provides on-demand access to a diverse set of remote IT services. It offers a number of advantages over traditional computing methods. These advantages include pay-as-you-go pricing, increased agility and on-demand scalability. It also reduces costs due to increased efficiency and better business continuity. The most significant barrier preventing many businesses from moving to the cloud is the security of crucial data maintained by the cloud provider. The cloud server must have complete access to the data to respond to a client request. That implies the decryption key must be sent to the cloud by the client, which may compromise the confidentiality of data stored in the cloud. One way to allow the cloud to use encrypted data without knowing or decrypting it is homomorphic encryption. In this paper, we focus on improving the Paillier cryptosystem, first by using two protocols that allow the cloud to perform the multiplication of encrypted data and then comparing the two protocols in terms of key size and time.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_92-Applying_Logarithm_and_Russian_Multiplication_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure and Lightweight Authentication Protocol for Smart Metering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131191</link>
        <id>10.14569/IJACSA.2022.0131191</id>
        <doi>10.14569/IJACSA.2022.0131191</doi>
        <lastModDate>2022-11-30T10:26:21.0470000+00:00</lastModDate>
        
        <creator>Hind El Makhtoum</creator>
        
        <creator>Youssef Bentaleb</creator>
        
        <subject>Internet of things; confidentiality; integrity; authentication; Ascon; Blake2; AVISPA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>One of the main advantages of the new power grid over the traditional grid is intelligent energy management by the customer and the operator. Energy supply, demand response management, and consumption regulation are only possible with the smart metering system. Smart meters are the main component of that system. Hence, a compromised smart meter or a successful attack against this entity may cause data theft, data falsification, and server/device manipulation. Therefore, the development of smart grids and the guarantee of their services depend on the ability to avoid attacks and disasters by ensuring high security. This paper aims to provide a secure and lightweight security protocol that respects IoT device constraints. The proposal deploys distributed OTP calculations combined with the Blake2s hash function and the Ascon AEAD cipher to ensure authentication, confidentiality, and integrity. We provide a performance analysis along with informal and formal security evaluations using the AVISPA-SPAN tool, and we compare the proposed protocol to other similar works. The assessment proves that the proposed protocol is light, valid, secure, and robust against many attacks that threaten the NAN area of the smart metering system, namely MITM and replay attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_91-Secure_and_Lightweight_Authentication_Protocol_for_Smart_Metering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contactless Surveillance for Preventing Wind-Borne Disease using Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131190</link>
        <id>10.14569/IJACSA.2022.0131190</id>
        <doi>10.14569/IJACSA.2022.0131190</doi>
        <lastModDate>2022-11-30T10:26:21.0470000+00:00</lastModDate>
        
        <creator>Md Mania Ahmed Joy</creator>
        
        <creator>Israt Jaben Bushra</creator>
        
        <creator>Razoana Ayshee</creator>
        
        <creator>Samira Hasan</creator>
        
        <creator>Samia Binta Hassan</creator>
        
        <creator>Md. Sawkat Ali</creator>
        
        <creator>Omar Farrok</creator>
        
        <creator>Mohammad Rifat Ahmmad Rashid</creator>
        
        <creator>Maheen Islam</creator>
        
        <subject>Computer vision; convolution neural network; COVID-19; deep learning; face mask detection; identity detection; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>COVID-19 has been marked as a worldwide pandemic caused by the SARS-CoV-2 virus. Different studies are being conducted with a view to preventing and lessening the infections caused by COVID-19. In the future, many other wind-borne diseases may also appear and even emerge as pandemics. To prevent this, various measures, such as wearing face masks, should be an integral part of our daily life. It is tough to manually ensure individuals’ safety. The goal of this paper is to automate the process of contactless surveillance so that substantial prevention can be ensured against all kinds of wind-borne diseases. For automating the process, real-time analysis and object detection are a must, for which deep learning is the most efficient approach. In this paper, a deep learning model is used to check whether a person takes any preventive measures. In our experimental analysis, we considered real-time face mask detection as a preventive measure. We propose a new face mask detection dataset. Detecting a face mask along with the identity of a person achieved an accuracy of 99.5%. The proposed model decreases time consumption, as no human intervention is needed to check each individual person. This model helps to decrease infection risk by using a contactless automation system.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_90-Contactless_Surveillance_for_Preventing_Wind_Borne_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aspect based Sentiment &amp; Emotion Analysis with ROBERTa, LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131189</link>
        <id>10.14569/IJACSA.2022.0131189</id>
        <doi>10.14569/IJACSA.2022.0131189</doi>
        <lastModDate>2022-11-30T10:26:21.0300000+00:00</lastModDate>
        
        <creator>Uddagiri Sirisha</creator>
        
        <creator>Bolem Sai Chandana</creator>
        
        <subject>Aspect based sentiment analysis; Twitter; LSTM; emotion analysis; Russia-Ukraine war; online social networks; RoBERTa model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Internet usage has increased social media use over the past few years, significantly impacting public opinion on online social networks. Nowadays, these websites are considered the most appropriate place to express feelings and opinions. The popular social media site Twitter offers valuable insight into people’s thoughts. Throughout the conflict between Russia and Ukraine, people from all over the world have expressed their opinions. In this study, machine-learning and deep-learning techniques are used to understand people’s emotions and reveal their views about this war. This study unveils a novel deep-learning approach that merges the best features of the sequence and transformer models while fixing their respective flaws. The model combines RoBERTa with ABSA (Aspect-Based Sentiment Analysis) and Long Short-Term Memory for sentiment analysis. A large dataset of geographically tagged tweets related to the Ukraine-Russia war was collected from Twitter. We analyzed this dataset using the RoBERTa-based sentiment model, while the Long Short-Term Memory model effectively captures long-distance contextual semantics. The Robustly Optimized BERT with ABSA approach maps words into a compact, meaningful word-embedding space. The accuracy of the suggested hybrid model is 94.7%, which is higher than the accuracy of state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_89-Aspect_based_Sentiment_Emotion_Analysis_with_ROBERTa_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Genetic Algorithm for Service Caching and Task Offloading in Edge-Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131188</link>
        <id>10.14569/IJACSA.2022.0131188</id>
        <doi>10.14569/IJACSA.2022.0131188</doi>
        <lastModDate>2022-11-30T10:26:21.0170000+00:00</lastModDate>
        
        <creator>Li Li</creator>
        
        <creator>Yusheng Sun</creator>
        
        <creator>Bo Wang</creator>
        
        <subject>Cloud computing; edge computing; genetic algorithm; service caching; task offloading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Edge-cloud computing is increasingly prevalent for Internet-of-Things (IoT) service provisioning by combining the benefits of both edge and cloud computing. In this paper, we aim to improve user satisfaction and resource efficiency through service caching and task offloading for edge-cloud computing. We propose a hybrid heuristic method that combines the global search ability of the genetic algorithm (GA) with heuristic local search, to improve the number of satisfied requests and the resource utilization. The proposed method encodes the service caching strategies into chromosomes and evolves the population by GA. Given a caching strategy from a chromosome, our method exploits a dual-stage heuristic method for the task offloading. In the first stage, the dual-stage heuristic method pre-offloads tasks to the cloud, and offloads tasks whose requirements cannot be satisfied by the cloud to edge servers, aiming at satisfying as many tasks’ requirements as possible. The second stage re-offloads tasks from the cloud to edge servers, to get the utmost out of limited edge resources. Experimental results demonstrate the competitive edge of the proposed method over multiple classical and state-of-the-art techniques. Compared with five existing scheduling algorithms, our method achieves 11.3%–23.7% more accepted tasks and 1.86%–18.9% higher resource utilization.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_88-A_Hybrid_Genetic_Algorithm_for_Service_Caching_and_Task_Offloading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>360&#176; Virtual Reality Video Tours Generation Model for Hostelry and Tourism based on the Analysis of User Profiles and Case-Based Reasoning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131187</link>
        <id>10.14569/IJACSA.2022.0131187</id>
        <doi>10.14569/IJACSA.2022.0131187</doi>
        <lastModDate>2022-11-30T10:26:21.0000000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Ernesto Suarez</creator>
        
        <creator>Alberto Raposo</creator>
        
        <subject>Immersive virtual reality; adaptive software architecture; case-based reasoning; user profiles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>This paper proposes an adaptive software architecture focused on hotel marketing based on immersive virtual reality (VRI) with 360&#176; videos, which includes a component based on Case-Based Reasoning (CBR) to provide experiences that correspond to the analysis of user profiles. For the validation of the system, considering that the use of VR can trigger experiences in several dimensions, affective, attitudinal, and behavioral responses, as well as the cognitive load, were evaluated using visualizations of 2D photographs contained in hotel websites, which were compared with 360&#176; videos in a VRI environment. To test the hypotheses, a quasi-experimental study was conducted with an independent sample group, in which subjects were randomly assigned to the two types of visualizations. The contribution of the article lies in the incorporation of marketing concepts and approaches in VRI experiences with 360&#176; videos through virtual objects that are used by the software architecture, as well as in the proposed validation of the effectiveness of the proposal.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_87-360&#176;_Virtual_Reality_Video_Tours_Generation_Model_for_Hostelry_and_Tourism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blood and Product-Chain: Blood and its Products Supply Chain Management based on Blockchain Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131186</link>
        <id>10.14569/IJACSA.2022.0131186</id>
        <doi>10.14569/IJACSA.2022.0131186</doi>
        <lastModDate>2022-11-30T10:26:20.9830000+00:00</lastModDate>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Nghia Huynh Huu</creator>
        
        <creator>Tran Nguyen Huyen</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Blood donation; blockchain; hyperledger fabric; blood products supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>This paper provides a novel implementation of blockchain technology in which data is stored in a decentralized distributed ledger to assist information protection in blood supply chain management and prevent data loss or identity theft. The present blood supply comes exclusively from the blood of volunteers (known as donors), making blood and its derivatives play a significant role in treating diseases today. In particular, depending on the type of product extracted from the blood (e.g., red blood cells, white blood cells, platelets, plasma), different procedures and storage environments (e.g., time, temperature, humidity) are required. However, the current blood management processes are done manually, with medical staff performing all data entry. Additionally, data about the complete blood donation process (e.g., blood donors, blood recipients, blood inventories) is held centrally and is challenging to examine accurately. Therefore, ensuring centralized data security is extremely difficult because of the risks of personal information theft or data loss. In this study, we present a blockchain technology-based blood management process and offer Blood and Product-Chain, a decentralized distributed ledger that stores data to address these restrictions. Specifically, we target two main contributions: i) we design the Blood and Product-Chain model to manage all relevant information about blood and its products based on blockchain technology, and ii) we implement the proof-of-concept of Blood and Product-Chain on Hyperledger Fabric and evaluate it in two scenarios (i.e., data creation and data access).</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_86-Blood_and_Product_Chain_Blood_and_its_Products_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an YouTube Verified Content System based on Blockchain Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131185</link>
        <id>10.14569/IJACSA.2022.0131185</id>
        <doi>10.14569/IJACSA.2022.0131185</doi>
        <lastModDate>2022-11-30T10:26:20.9700000+00:00</lastModDate>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Nghia Huynh Huu</creator>
        
        <creator>Tran Nguyen Huyen</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Blockchain technology; public authentication; Ethereum; Fantom; Binance smart chain platform; social media platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>YouTube connects people with each other through an online video-sharing service platform. With the great development of the entertainment industry, content on YouTube is accessible to many people of different ages. However, verifying whether the content posted on YouTube is clean is a difficult problem. Dirty content, i.e., violent, pornographic, and vulgar content, causes serious psychological harm to users under the age of 18, especially those not yet old enough to be aware of the harmful effects such toxic content can have on a child’s behavior. Google (i.e., YouTube) has developed the YouTube Kids application, where the videos are intended only for children under the age of 13. However, cultural and educational differences between regions strongly influence the selection of content for children. Therefore, the content restrictions of the YouTube Kids application have not yet met all the requirements of parents around the world. There have been many efforts to identify videos containing malicious content based on deep learning. However, there is no method to build a tool that helps parents share and identify videos with objectionable content (e.g., violence, pornography, obscene words) on the YouTube platform. In this research paper, we introduce YVC, a YouTube-verified content platform that applies blockchain’s distributed, public validation. This tool helps parents validate YouTube content and issue reports to reduce dirty content on YouTube. To demonstrate the effectiveness of our approach, we implement the proof-of-concept on the three most popular EVM platforms: Ethereum, Fantom, and the Binance smart chain. Compared to YouTube Kids (i.e., the most common shared-video platform for kids under 13), our approach is able to capture the video preferences of parents across different areas/countries.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_85-Towards_an_YouTube_Verified_Content_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Decision-Making Support for Student Academic Path Selection using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131184</link>
        <id>10.14569/IJACSA.2022.0131184</id>
        <doi>10.14569/IJACSA.2022.0131184</doi>
        <lastModDate>2022-11-30T10:26:20.9530000+00:00</lastModDate>
        
        <creator>Pelagie HOUNGUE</creator>
        
        <creator>Michel HOUNTONDJI</creator>
        
        <creator>Theophile DAGBA</creator>
        
        <subject>Academic path; academic performance; machine learning; educational data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>In Benin, after the GCSE (General Certificate of Secondary Education), learners can either enroll in a Technical and Vocational Education and Training (TVET) program or further their studies in general education. The majority of those who take the latter path enroll in Senior High School by choosing the Biology stream or field of study. However, most of them do not have the abilities required to succeed in this field. For instance, for the last edition of the Senior Secondary Education Certificate (French baccalaureate) held in June 2022 in Benin, the Biology field of study had a low success rate of 42%. Therefore, one may consider that there is a problem in the orientation of the students. In recent years, Machine Learning has been used in almost every field to optimize processes or to assist in decision-making. Improving academic performance has always been of general interest, and good academic performance implies good academic orientation. The goal of this study is to optimally help learners who have just obtained their GCSE to select their field of study. For this purpose, two major elements are predicted: i) the Scientific or Literary ability of students, and ii) the Literature, Mathematics and Physical Sciences (MPS), or Biology stream of learners. More precisely, the average marks in Mathematics, Physics and Chemistry Technology (PCT), and Biology from 6th to 9th grade for 325 students are used. Machine Learning algorithms such as Decision Tree, Random Forest, Linear Support Vector Classifier (SVC), K-Nearest Neighbors (KNN), and Logistic Regression are used to predict learners’ ability and stream. As a result, for learners’ ability prediction, we obtained the best accuracy of 99% with the Random Forest algorithm for a split that reserved around 21% of the dataset for testing. For learners’ stream prediction, we obtained the best accuracy of 95% with the Linear SVC algorithm for a split that reserved around 20% of the dataset for testing. This study contributes to Educational Data Mining (EDM) by performing academic data exploration using numerous methods. Furthermore, it provides a tool to ease students’ academic path selection, which may be used by educational institutes to ensure student performance. This paper presents the steps and outputs of the study we performed, along with some recommendations for future research.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_84-An_Effective_Decision_Making_Support_for_Student_Academic_Path_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wheat Diseases Detection and Classification using Convolutional Neural Network (CNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131183</link>
        <id>10.14569/IJACSA.2022.0131183</id>
        <doi>10.14569/IJACSA.2022.0131183</doi>
        <lastModDate>2022-11-30T10:26:20.9530000+00:00</lastModDate>
        
        <creator>Md Helal Hossen</creator>
        
        <creator>Md Mohibullah</creator>
        
        <creator>Chowdhury Shahriar Muzammel</creator>
        
        <creator>Tasniya Ahmed</creator>
        
        <creator>Shuvra Acharjee</creator>
        
        <creator>Momotaz Begum Panna</creator>
        
        <subject>Wheat crop diseases; artificial intelligence; convolution neural networks; image processing; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Agriculture has been a central focus of human activity since the medieval era and is widely recognized as one of the vital aspects of the economy in contemporary society. Wheat, like other harvested grains, satisfies the need for the essential nutrients required for our bodies to function correctly. However, the supply of this crop is limited by a variety of rather common diseases, making it difficult to meet demand. The vast majority of people who work in agriculture are illiterate, which hinders them from taking appropriate preventative measures whenever necessary, and as a direct consequence there has been a reduction in the total amount of wheat produced. Diagnosing wheat diseases in their early stages can be quite difficult because of the many environmental variables and other factors involved, including the numerous distinct sorts of agricultural products and the illiteracy of agricultural workers. In the past, a variety of distinct models have been proposed as potential solutions for identifying diseases in wheat harvests. This study demonstrates a two-dimensional CNN model that can identify and categorize diseases that affect wheat crops. Pre-trained models are employed to identify significant features of the images, and using these features the proposed method can identify and categorize disease-affected wheat crops as distinct from healthy wheat crops. A total of 4800 images were collected for this study, comprising eleven image classes depicting diseased crops and one image class depicting healthy crops, and the reliability of the findings was assessed to be 98.84 percent. To offer the suggested model the capability to identify and classify diseases from a variety of angles, the images in the collection were flipped at a variety of different perspectives. These findings provide evidence that CNN can be applied to increase the precision with which diseases in wheat crops are identified.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_83-Wheat_Diseases_Detection_and_Classification_using_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Blockchain-based Medical Test Results Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131182</link>
        <id>10.14569/IJACSA.2022.0131182</id>
        <doi>10.14569/IJACSA.2022.0131182</doi>
        <lastModDate>2022-11-30T10:26:20.9370000+00:00</lastModDate>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Nghia Huynh Huu</creator>
        
        <creator>Tran Nguyen Huyen</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Blood donation; blockchain; hyperledger fabric; blood products supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The role of test results in the diagnosis and treatment of patients’ diseases at medical facilities cannot be ignored. Patients must have a series of tests related to their symptoms, which can be repeated many times depending on the type of disease and treatment. Critically, in cases where patients lose their medical test records (i.e., the patient’s medical history), diagnosis is difficult due to the lack of information about the medical history as well as the symptoms/complications in previous treatments. Storing this treatment information in medical centers can address risks related to user failure (e.g., loss of medical test records and water- or fire-damaged documents). However, users face difficulty when they move to other medical centers for examination and treatment, since the data is stored locally and is difficult to share with others. Current solutions focus on empowering users (i.e., patients) to share medical information related to disease treatment. However, the main barrier to these approaches is the knowledge of the users: they must have some background in the technologies, risks, and rights they may share with treatment facilities. To solve this problem, we propose a blockchain-based medical test result management system where all information is stored and verified by the stakeholders. The data is stored in a decentralized manner and updated throughout the treatment process. We implement a proof-of-concept based on the Hyperledger Fabric platform. To demonstrate the effectiveness of the proposed system, we conduct evaluations based on three main tasks of the system: initializing, accessing, and updating data, across six different scenarios (i.e., increasing sizes of processing requests). The evaluation, based on Hyperledger Caliper, allowed a deeper analysis of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_82-Towards_a_Blockchain_based_Medical_Test_Results_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Providing a Framework for Security Management in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131180</link>
        <id>10.14569/IJACSA.2022.0131180</id>
        <doi>10.14569/IJACSA.2022.0131180</doi>
        <lastModDate>2022-11-30T10:26:20.9230000+00:00</lastModDate>
        
        <creator>XUE Zhen</creator>
        
        <creator>LIU Xingyue</creator>
        
        <subject>Internet of things (IoT); security management; security requirement; security model; security architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>With the advent of Internet of Things technology, tremendous changes are taking place. Perhaps what humans never even imagined will come in the near future, and just as the Internet surrounds all aspects of people&#39;s daily lives, intelligent objects will autonomously take over all aspects of people&#39;s lives. So far, a lot of research and development has been done in the field of the Internet of Things, but there are still many challenges in this field. One of the most important challenges is the issue of security in the Internet of Things. Therefore, in this paper, while reviewing the requirements, models, and security architectures of the Internet of Things, a framework for security management in the Internet of Things is proposed, which takes into account various aspects and requirements. The proposed framework uses various techniques such as cryptography, encryption, anomaly detection, intrusion detection, and behavior pattern analysis, and can be considered a basis for future research. The purpose of this research is to determine security requirements and provide a method to improve security management in the Internet of Things. Based on the tests, the proposed method is 100% resistant against data modification attacks, and achieves resistant detection accuracies of up to 97% against impersonation attacks and up to 89% against denial-of-service attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_80-Providing_a_Framework_for_Security_Management_in_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Strategies Employing Deep Learning Techniques for Classifying Pathological Brain from MR Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131181</link>
        <id>10.14569/IJACSA.2022.0131181</id>
        <doi>10.14569/IJACSA.2022.0131181</doi>
        <lastModDate>2022-11-30T10:26:20.9230000+00:00</lastModDate>
        
        <creator>Mitrabinda Khuntia</creator>
        
        <creator>Prabhat Kumar Sahu</creator>
        
        <creator>Swagatika Devi</creator>
        
        <subject>Brain tumor; CT; MRI; CNN; MobileNet V2; ResNet101; DenseNet121</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Brain tumors are among the most widespread and distressing illnesses, with a very limited life expectancy in their severe form. As a consequence, therapy planning is a critical component in enhancing the quality of the patient’s life. Image modalities like computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images are commonly used to assess malignancies in the brain, breast, etc. MRI scans, accordingly, are employed in this study to identify brain tumors. The application of excellent categorization systems to Magnetic Resonance Imaging (MRI) aids in the accurate detection of brain malignancies. The large quantity of data produced by an MRI scan, on the other hand, renders manual separation of tumor and non-tumor within a given time period impossible, and comes with major obstacles. As a consequence, in order to decrease human mortality, a dependable and automated categorization approach is necessary. The enormous geological and anatomical heterogeneity of the environment surrounding the brain tumor makes automated classification of brain tumors a difficult undertaking. This paper proposes a Convolutional Neural Network (CNN) classifier for automated brain tumor diagnosis. To study and compare the findings, other convolutional neural network designs such as MobileNet V2, ResNet101, and DenseNet121 are used. Small kernels are employed to carry out the more intricate architectural design. This experiment was carried out using Python and Google Colab. The weight of a neuron is characterized as minute.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_81-Novel_Strategies_Employing_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Neural Network based Power Control in D2D Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131179</link>
        <id>10.14569/IJACSA.2022.0131179</id>
        <doi>10.14569/IJACSA.2022.0131179</doi>
        <lastModDate>2022-11-30T10:26:20.9070000+00:00</lastModDate>
        
        <creator>Nethravathi H M</creator>
        
        <creator>S Akhila</creator>
        
        <subject>Artificial Neural Network (ANN); Base Stations (BS); CUE; Device-to-Device (D2D); DUE; ML; User Equipment (UE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>As a viable technique for next-generation wireless networks, Device-to-Device (D2D) communication has attracted interest because it encourages point-to-point communication between User Equipment (UE) without passing through base stations (BS). D2D communication has been proposed in cellular networks as a supplementary paradigm, primarily to increase network connectivity. This research considers a cellular network in which users attempt D2D connections. A D2D pair is composed of two D2D users (DUEs), a transmitter and a receiver. To improve spectral efficiency, we assume that each D2D pair employs only one communication channel. In order to minimize interference between D2D pairs and increase capacity, power control is required. We address the D2D power control problem in the scenario where only the typical cellular channel gains between base stations and DUEs are known and the channel gains among DUEs are completely inaccessible. For each individual D2D pair, we use an artificial neural network (ANN) to calculate the transmission power. We show that the maximum aggregate capacity for the D2D pairs can be reached while predicting the transmission power setting for D2D pairs from cellular channel gains.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_79-Artificial_Neural_Network_based_Power_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Framework for Accelerating Magnetic Resonance Imaging using Deep Learning along with HPC Parallel Computing Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131178</link>
        <id>10.14569/IJACSA.2022.0131178</id>
        <doi>10.14569/IJACSA.2022.0131178</doi>
        <lastModDate>2022-11-30T10:26:20.8900000+00:00</lastModDate>
        
        <creator>Hani Moaiteq Aljahdali</creator>
        
        <subject>Magnetic resonance imaging (MRI); segmentation; classification; acceleration; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Magnetic resonance imaging (MRI) has played a vital role in emerging technologies because of its non-invasive principle, and MR equipment is the traditional means of imaging biological structures. In the medical domain, MRI is one of the most important tools used for staging in clinical diagnosis, owing to its ability to furnish rich physiological and functional information and its non-ionizing, radiation-free nature. However, MRI is highly demanding in several clinical applications. In this paper, we propose a novel deep learning based method that accelerates MRI using a huge number of MR images. The proposed method uses a supervised learning approach that trains a network on the given datasets. It determines the network parameters required to afford an accurate reconstruction of under-sampled acquisitions. We also designed an offline neural network (NN) that was trained to discover the relationship between MR images and k-space. All experiments were performed on computers with advanced NVIDIA GPUs (Tesla K80 and GTX Titan). The proposed model outperformed existing approaches and attained a &lt;0.2% error rate. To the best of our knowledge, our method is the leading approach of its kind.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_78-A_New_Framework_for_Accelerating_Magnetic_Resonance_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Best Techniques to Deal with Unbalanced Sequential Text Data in Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131177</link>
        <id>10.14569/IJACSA.2022.0131177</id>
        <doi>10.14569/IJACSA.2022.0131177</doi>
        <lastModDate>2022-11-30T10:26:20.8770000+00:00</lastModDate>
        
        <creator>Sumarni Adi</creator>
        
        <creator>Awaliyatul Hikmah</creator>
        
        <creator>Bety Wulan Sari</creator>
        
        <creator>Andi Sunyoto</creator>
        
        <creator>Ainul Yaqin</creator>
        
        <creator>Mardhiya Hayaty</creator>
        
        <subject>Imbalanced text classification; deep learning; resampling technique; weighted cross-entropy loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Datasets with a balanced distribution of data are often difficult to find in real life. Although various methods have been developed and proven successful with shallow learning algorithms, handling unbalanced classes with deep learning approaches is still limited, and most such studies focus only on image data using the Convolutional Neural Network (CNN) architecture. In this study, we apply several class-imbalance handling techniques to three datasets of unbalanced text data: a data-level approach using resampling techniques on word vectors, and an algorithm-level approach using Weighted Cross-Entropy Loss (WCEL), both with a Bidirectional Long-Short Term Memory (BiLSTM) architecture. We tested each method on three datasets with different characteristics and levels of imbalance. The experiments show that each technique performs differently on each dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_77-The_Best_Techniques_to_Deal_with_Unbalanced_Sequential_Text_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early-Warning Dropout Visualization Tool for Secondary Schools: Using Machine Learning, QR Code, GIS and Mobile Application Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131176</link>
        <id>10.14569/IJACSA.2022.0131176</id>
        <doi>10.14569/IJACSA.2022.0131176</doi>
        <lastModDate>2022-11-30T10:26:20.8770000+00:00</lastModDate>
        
        <creator>Judith Leo</creator>
        
        <subject>Dropout; education; girls; machine learning; students; QR code; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Investment in education through the provision of secondary schooling to the community is geared toward developing human capital in Tanzania. However, these investments have been hampered by unacceptably high rates of school dropout, which seriously affect female students, since most schools do not have effective mechanisms for quality data management that enable immediate and effective decision making. This study therefore aims to solve the problem of data management from the school level, so that higher levels receive appropriate and effective data on time, through the use of emerging technologies such as machine learning, QR codes, and mobile applications. To implement this solution, the study explored the predictors of school dropout using a mixed approach with questionnaires and interview discussions; 600 participants took part in problem identification in the Arusha region. Using the design science research methodology, Unified Modeling Language, MySQL, QR codes, and mobile application techniques were integrated with a Support Vector Machine to develop the proposed solution. Finally, the evaluation process involved 100 participants, and on average 89% of them provided positive feedback on the functionalities of the developed tool for preventing dropout in secondary schools in Africa at large.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_76-Early_Warning_Dropout_Visualization_Tool_for_Secondary_Schools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Design of Online Teaching Platform of College Dance Course based on IGA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131175</link>
        <id>10.14569/IJACSA.2022.0131175</id>
        <doi>10.14569/IJACSA.2022.0131175</doi>
        <lastModDate>2022-11-30T10:26:20.8600000+00:00</lastModDate>
        
        <creator>Yunyun Xu</creator>
        
        <subject>Online teaching; improved genetic algorithm; college dance course; intelligent test set; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>As a comprehensive art form, dance plays an integral role in developing the overall quality of students. However, with the continuing progress of IT, the traditional classroom-based teaching mode for dance courses can no longer adapt to the current educational environment. This research designs an e-teaching platform system for college dance lessons based on an Improved Genetic Algorithm (IGA). First, it establishes a mathematical model of intelligent test questions, and then proposes the functional design of an online teaching platform for college dance courses based on the IGA. The feasibility of the proposed platform is validated through a number of tests. When the iteration count is 500, the success rate of the IGA reaches 99%; when the iteration count is 100, the average fitness is 0.929 and the computed fitness value is 0.936. For the traditional Genetic Algorithm (GA), the corresponding results are 88.6%, 0.73, and 0.752. Compared with the traditional teaching mode based on the GA, the proposed IGA-based method is clearly superior in many aspects.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_75-Research_on_the_Design_of_Online_Teaching_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deeply Learned Invariant Features for Component-based Facial Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131174</link>
        <id>10.14569/IJACSA.2022.0131174</id>
        <doi>10.14569/IJACSA.2022.0131174</doi>
        <lastModDate>2022-11-30T10:26:20.8430000+00:00</lastModDate>
        
        <creator>Adam Hassan</creator>
        
        <creator>Serestina Viriri</creator>
        
        <subject>Invariant features; facial components; facial recognition; convolutional neural network; weighted fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Face recognition under age variation is a challenging problem. It is a difficult task because ageing is an intrinsic variation, unlike pose and illumination, which can be controlled. We propose an approach that extracts invariant features to improve facial recognition using facial components. Can facial recognition over age progression be improved by independently resizing each facial component? The individual facial components, the eyes, mouth, and nose, were extracted using the Viola-Jones algorithm. We then use the upper coordinates of the eye-region rectangle to detect the forehead and the lower coordinates together with the nose rectangle to detect the cheeks. The proposed work uses a Convolutional Neural Network with an ideal input image size for each facial component, determined through many experiments. We combine the component scores by weighted fusion for a final decision. The experiments show that the nose component provides the highest score contribution among all components, and the cheeks the lowest. The experiments were conducted on two facial databases, MORPH and FG-NET. The proposed work achieves state-of-the-art accuracy, reaching 100% on the FG-NET dataset, and the results obtained on the MORPH dataset outperform the accuracy results of related works in the literature.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_74-Deeply_Learned_Invariant_Features_for_Component_based_Facial_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG-Based Silent Speech Interface and its Challenges: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131173</link>
        <id>10.14569/IJACSA.2022.0131173</id>
        <doi>10.14569/IJACSA.2022.0131173</doi>
        <lastModDate>2022-11-30T10:26:20.8270000+00:00</lastModDate>
        
        <creator>Nilam Fitriah</creator>
        
        <creator>Hasballah Zakaria</creator>
        
        <creator>Tati Latifah Erawati Rajab</creator>
        
        <subject>Imagined speech; silent speech interface; electroencephalograph (EEG); speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>People with speech disorders can face social and welfare difficulties. A silent speech interface (SSI) is therefore needed to help them communicate. Such an interface decodes speech from a human biosignal. Brain signals contain information from speech production and thus cover people with numerous speech disorders. Brain signals can be acquired non-invasively by electroencephalograph (EEG) and later transformed into features for the input of speech pattern recognition. This review discusses the advancement of EEG-based SSI research and its current challenges. It mainly discusses the acquisition protocol, the spectral-spatial-temporal characterization of EEG-based imagined speech, classification techniques with leave-one-subject-out or leave-one-session-out cross-validation, and related real-world environmental issues. It aims to aid future imagined speech decoding research in finding proper methods to overcome these problems.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_73-EEG_Based_Silent_Speech_Interface_and_its_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Robust Quasi Decentralized Type-2 Fuzzy Load Frequency Controller for Multi Area Power System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131172</link>
        <id>10.14569/IJACSA.2022.0131172</id>
        <doi>10.14569/IJACSA.2022.0131172</doi>
        <lastModDate>2022-11-30T10:26:20.8270000+00:00</lastModDate>
        
        <creator>Jesraj Tataji Dundi</creator>
        
        <creator>Anand Gondesi</creator>
        
        <creator>Rama Sudha Kasibhatla</creator>
        
        <creator>A. Chandrasekhar</creator>
        
        <subject>Load frequency control; type-2 fuzzy quasi-decentralized functional observer; fuzzy quasi-decentralized functional observers; state observer; type-2 fuzzy controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Interconnected power systems exchange power through tie lines. A sudden perturbation in load causes uneven power distribution, resulting in sudden changes in the voltage and frequency of the system (tie-line power exchange error). A Load Frequency Controller (LFC) can stabilize the system against such disturbances. In this paper, a novel load frequency controller based on a Type-2 Fuzzy Quasi-Decentralized Functional Observer (T2FQFO) is proposed. In the proposed methodology, the observer gains are obtained mathematically, which guarantees the stability of the system. The efficacy of the proposed technique has been tested on IEEE standard test systems. The results show that the proposed T2FQFO performs better than the Fuzzy Quasi-Decentralized Functional Observer, the Quasi-Decentralized Functional Observer, and the classical state observer, improving peak overshoot and settling time by approximately 25% relative to the other observers.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_72-Design_of_Robust_Quasi_Decentralized_Type_2_Fuzzy_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visually Impaired Person Assistance Based on Tensor FlowLite Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131171</link>
        <id>10.14569/IJACSA.2022.0131171</id>
        <doi>10.14569/IJACSA.2022.0131171</doi>
        <lastModDate>2022-11-30T10:26:20.8130000+00:00</lastModDate>
        
        <creator>Nethravathi B</creator>
        
        <creator>Srinivasa H P</creator>
        
        <creator>Hithesh Kumar P</creator>
        
        <creator>Amulya S</creator>
        
        <creator>Bhoomika S</creator>
        
        <creator>Banashree S Dalawai</creator>
        
        <creator>Chakshu Manjunath</creator>
        
        <subject>Tensor flow; SSD; Yolo; Yolo_v3; gifts; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Real-time object detection is one of the most exciting applications of computer vision and is used abundantly in many areas. With the growing development of deep learning for applications such as self-driving cars, robots, safety tracking, and guiding visually impaired people, many algorithms have been improved to relate video analysis and image analysis. Each algorithm behaves uniquely in its network architecture, but all share the goal of detecting numerous objects in a composite image. It is very important to use this technology to assist visually impaired people whenever they need it, as their impairment limits their movement in unknown places. This paper offers an application system that identifies the everyday objects in the user’s surroundings and provides speech feedback about both near and far objects around them. The system was developed using two different algorithms, Yolo and Yolo_v3, tested under the same criteria to measure accuracy and performance. The SSD_MobileNet model is used with Yolo in TensorFlow, and the Darknet model is used with Yolo_v3. For speech feedback, a Python text-to-speech library is incorporated to convert statements to speech. Both algorithms are analyzed using a web camera in a variety of circumstances to measure their correctness in every aspect.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_71-Visually_Impaired_Person_Assistance_Based_on_Tensor_FlowLite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KMIT-Pathology: Digital Pathology AI Platform for Cancer Biomarkers Identification on Whole Slide Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131170</link>
        <id>10.14569/IJACSA.2022.0131170</id>
        <doi>10.14569/IJACSA.2022.0131170</doi>
        <lastModDate>2022-11-30T10:26:20.7970000+00:00</lastModDate>
        
        <creator>Rajasekaran Subramanian</creator>
        
        <creator>R. Devika Rubi</creator>
        
        <creator>Rohit Tapadia</creator>
        
        <creator>Rochan Singh</creator>
        
        <subject>AI for cancer prediction and diagnostics; deep learning for WSI analysis; tubule prediction on breast cancer; ISUP grading for prostate cancer; WSI imaging platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Analysis and identification of cancer imaging biomarkers on biopsy tissues are traditionally done through an optical microscope. Digital tissue scanners and deep learning models automate this task and produce unbiased diagnostics. The digital tissue scanner, known as virtual microscopy, digitizes the glass-slide tissues; the digitized images are called Whole Slide Images (WSI). These are multi-layered (multi-level) images of high resolution and huge size, stored as pyramidal TIFF files. As normal web browsers are unable to handle WSI, a special web imaging platform is needed to obtain, store, visualize, and process them. This platform must provide basic facilities for uploading, viewing, and annotating WSI, which are the inputs to the deep learning models. Integrating the deep learning models with the platform and the WSI database provides a complete solution for cancer diagnostics and detection. This paper proposes two deep learning models for the diagnosis and detection of cancer imaging biomarkers on breast cancer and prostate cancer WSI. An EfficientNet deep learning model is used to determine the ISUP (International Society of Urologic Pathologists) grade for prostate cancer; trained and tested on 5000 prostate WSI, it produces 80% accuracy with a 0.6898 quadratic weighted kappa (QWK) score. An R2Unet model is used to identify tubule structures for breast cancer, a morphological component used to grade breast cancer; trained and tested on 17432 WSI tiles, it achieves an F1 score of 0.9961 with a mean IoU of 0.8612. The paper also shows the complete execution of these two deep learning models (from uploading a WSI to visualizing the AI-detected results) on the newly developed WSI imaging web platform.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_70-KMIT_Pathology_Digital_Pathology_AI_Platform_for_Cancer_Biomarkers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosis of Carcinoma from Histopathology Images using DA-Deep Convnets Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131169</link>
        <id>10.14569/IJACSA.2022.0131169</id>
        <doi>10.14569/IJACSA.2022.0131169</doi>
        <lastModDate>2022-11-30T10:26:20.7800000+00:00</lastModDate>
        
        <creator>K. Abinaya</creator>
        
        <creator>B. Sivakumar</creator>
        
        <subject>Cancer; cervical cancer; convolutional neural network; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Cancer is a major cause of mortality around the globe, responsible for nearly one in six deaths in 2020. Cervical, lung, and breast cancers are the most common types, and cervical cancer is the fourth most common cancer in women worldwide, killing approximately 4,280 women. Cancer-causing infections, such as human papillomavirus (HPV) and hepatitis, account for approximately 30% of cases in low- and lower-middle-income countries. Many cancers are curable if detected as early as possible. In this work, we developed the DA-Deep convnets model (Data Augmentation with a deep Convolutional Neural Network) for the detection of cervical cancer from biopsy images. The deep convolutional neural network is one of the most widely applied deep learning approaches in medical imaging. Today, enhancements in image analysis and processing, particularly in medical imaging, have become a major factor in the improvement of systems for medical prognosis, treatment, and diagnosis. The proposed model achieves 99.2% accuracy in detecting whether an input image contains cancer.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_69-Diagnosis_of_Carcinoma_from_Histopathology_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Perspective on Sustaining Education and Promoting Humanising Education through e-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131168</link>
        <id>10.14569/IJACSA.2022.0131168</id>
        <doi>10.14569/IJACSA.2022.0131168</doi>
        <lastModDate>2022-11-30T10:26:20.7800000+00:00</lastModDate>
        
        <creator>Aidrina Sofiadin</creator>
        
        <subject>Education; sustainability; humanizing education; sustainable e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The COVID-19 pandemic has shifted the education sector towards an e-learning approach to sustain education. Sustaining student education through e-learning significantly impacts student learning experiences and outcomes, which can be influenced by e-learning infrastructure, e-learning materials, and learning engagement. The concept of humanising education refers to a learning process that reflects students&#39; moralities and values. Focus group discussion is a practical approach to assessing students’ perspectives on sustaining and humanising their education through e-learning. This qualitative focus group study aimed to discuss the e-learning factors that sustain education and promote humanising education in the virtual learning environment. Thirty students from the Information Technology (IT) and business fields participated and provided different views on sustaining education through e-learning. Thematic analysis was used to analyse the focus group data, and five themes were identified: (1) e-learning technologies and infrastructure; (2) e-learning principles: pedagogy and materials; (3) health and wellness; (4) equality; and (5) engagement: communication and collaboration. The analysis informs the design of sustainable e-learning. In addition, this paper outlines how the findings support the sustainable development goals.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_68-Students_Perspective_on_Sustaining_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-level Video Captioning based on Label Classification using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131167</link>
        <id>10.14569/IJACSA.2022.0131167</id>
        <doi>10.14569/IJACSA.2022.0131167</doi>
        <lastModDate>2022-11-30T10:26:20.7500000+00:00</lastModDate>
        
        <creator>J. Vaishnavi</creator>
        
        <creator>V. Narmatha</creator>
        
        <subject>Video captioning; label classification; Hu moments; GLCM; statistical features; SVM; Naive Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Video captioning, narrating the events happening in a video in natural-language sentences, is an essential task in the current world: it saves time by converting long, content-rich videos into simple, readable reports in text form. Through labels, tags, and terms, it opens the way to many further tasks such as video content retrieval, video search, and video tagging. Video captioning is currently being attempted by many researchers using exciting deep learning techniques, but this approach instead seeks the best of machine learning for captioning videos in a different way. The novel part of the proposed approach is classifying videos using the labels present in video frames, which belong to various categories, and producing consecutive multi-level captions that describe the entire video in a round-robin fashion. Informative features such as Gray Level Co-occurrence Matrix (GLCM) features, Hu moments, and statistical features are extracted from the video frames to provide optimal results. The model is designed with two strong classifiers, Support Vector Machine (SVM) and Naive Bayes, applied separately. The models are demonstrated on the standard Microsoft Research Video Description corpus (MSVD) and evaluated with the benchmark classification metrics of accuracy, precision, recall, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_67-Multi_level_Video_Captioning_based_on_Label_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speckle Reduction in Medical Ultrasound Imaging based on Visual Perception Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131166</link>
        <id>10.14569/IJACSA.2022.0131166</id>
        <doi>10.14569/IJACSA.2022.0131166</doi>
        <lastModDate>2022-11-30T10:26:20.7500000+00:00</lastModDate>
        
        <creator>Yasser M. Kadah</creator>
        
        <creator>Ahmed F. Elnokrashy</creator>
        
        <creator>Ubaid M. Alsaggaf</creator>
        
        <creator>Abou-Bakr M. Youssef</creator>
        
        <subject>Contrast sensitivity function; image quality metrics; speckle reduction; ultrasound imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Ultrasound imaging technology is one of the most important clinical imaging modalities due to its safety and low cost, in addition to its versatile applications. The main technical problem in this technology is its noisy appearance due to the presence of speckle, which makes reading images more difficult. In this study, a new method of speckle reduction in medical ultrasound images is proposed based on adaptive shifting of the contrast sensitivity function of human vision using a bias field map estimated from the original image. The aim of this work is an effective image enhancement strategy that reduces speckle while preserving diagnostically useful image features and allowing practical real-time implementation for medical ultrasound imaging applications. The new method improves the visual perception of ultrasound image quality by adding a local brightness bias to areas with speckle noise. This allows the variations in image pixels due to speckle noise to be better perceived by the human observer because of the visual perception model. The performance of the proposed method is objectively assessed using quantitative image quality metrics and compared to previous methods. Furthermore, given that image quality perception is subjective, the level of added bias is controlled by a single parameter that accommodates the different needs of different users and applications. This method has the potential to offer better viewing conditions of ultrasound images, which translates into higher diagnostic accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_66-Speckle_Reduction_in_Medical_Ultrasound_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformation Model of Smallholder Oil Palm Supply Chain Ecosystem using Blockchain-Smart Contract</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131165</link>
        <id>10.14569/IJACSA.2022.0131165</id>
        <doi>10.14569/IJACSA.2022.0131165</doi>
        <lastModDate>2022-11-30T10:26:20.7330000+00:00</lastModDate>
        
        <creator>Irawan Afrianto</creator>
        
        <creator>Taufik Djatna</creator>
        
        <creator>Yandra Arkeman</creator>
        
        <creator>Irman Hermadi</creator>
        
        <subject>Transformation model; smallholder oil palm supply chain; blockchain; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The development of new technology has the potential to disrupt and transform an existing system and its information technology. This study aims to build a proposed model for transforming an old system into a blockchain-based one. The smallholder oil palm supply chain currently uses traditional information systems and technology; hence its integrity, transparency, and security are vulnerable. To solve this problem, a frontier technology is needed, such as blockchain, which has trusted, transparent, and traceable characteristics that improve performance and quality. The method used in this study was digital transformation in the context of operational processes and technology. In addition, the As-Is/To-Be model was used as a mechanism to develop a transformation model for the smallholder oil palm supply chain system. Specifically, the As-Is model was used to identify and analyze the information system and technology of the existing system, while the To-Be model was used to determine the potential and characteristics of blockchain. This becomes the proposed model for the transformation of the old system into a blockchain technology-based one. Prospects and mechanisms for system transformation were also produced in the aspects of transactions, data, and architecture, as well as the flow of change strategies needed in the transformation of the blockchain-based smallholder oil palm supply chain system.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_65-Transformation_Model_of_Smallholder_Oil_Palm_Supply_Chain_Ecosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Faculty Workloads and Room Utilization using Heuristically Enhanced WOA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131164</link>
        <id>10.14569/IJACSA.2022.0131164</id>
        <doi>10.14569/IJACSA.2022.0131164</doi>
        <lastModDate>2022-11-30T10:26:20.7200000+00:00</lastModDate>
        
        <creator>Lea D. Austero</creator>
        
        <creator>Ariel M. Sison</creator>
        
        <creator>Junrie B. Matias</creator>
        
        <creator>Ruji P. Medina</creator>
        
        <subject>Heuristics; mutation; optimization; swarm; timetabling; whale optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The manual creation and generation of conflict-free schedules every academic semester presents higher education institutions with a laborious and resource-demanding duty. Course timetabling optimization, an educational timetabling problem, is a popular example of an NP-hard combinatorial problem. Numerous attempts have been made over the past few decades to solve this problem, but no one has yet developed a foolproof approach that can examine all alternatives to find the best method. In the present study, the promising swarm-based Whale Optimization Algorithm was heuristically enhanced, yielding HEWOA, which was designed as a solution to the course timetabling problem discussed here. HEWOA was able to generate an efficient timetable for a large dataset of 1700 events in an average time of only 14.92 seconds, with an average of 7.2 generations and a best time of 8.38 seconds. These results reveal that HEWOA performed better than the various hybrids of the Genetic Algorithm that were compared in the present study.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_64-Optimizing_Faculty_Workloads_and_Room_Utilization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Influence of Virtual Secure Mode (VSM) on Memory Acquisition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131163</link>
        <id>10.14569/IJACSA.2022.0131163</id>
        <doi>10.14569/IJACSA.2022.0131163</doi>
        <lastModDate>2022-11-30T10:26:20.7200000+00:00</lastModDate>
        
        <creator>Niken Dwi Wahyu Cahyani</creator>
        
        <creator>Erwid M Jadied</creator>
        
        <creator>Nurul Hidayah Ab Rahman</creator>
        
        <creator>Endro Ariyanto</creator>
        
        <subject>Live forensics; memory acquisition; virtualization; virtual secure mode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Recently, acquiring full Random Access Memory (RAM) content and access data has gained significant interest in digital forensics. However, a security feature of the Windows operating system, Virtual Secure Mode (VSM), presents challenges to the acquisition process by causing a system crash known as a Blue Screen of Death (BSoD). The crash is likely to occur when memory acquisition tools are being used. Subsequently, it disrupts the goal of memory acquisition, since the system must be restarted and the RAM content is no longer available. This study analyzes the implications of VSM for memory acquisition tools and examines the extent of its impact on the acquisition process. Two memory acquisition tools, namely FTK Imager and Belkasoft RAM Capturer, were used to conduct the acquisition process. Static and dynamic code analyses were performed using reverse engineering techniques, namely a disassembler and a debugger. The results were compared based on the percentage of unreadable memory between active and inactive VSM. Static analysis showed that there is no difference between the applications&#39; functions for both active and inactive VSM. Further Bugcheck analysis of MEMORY.DMP pointed to the ad_driver.sys module in FTK Imager as the cause of the system crash. The percentage of unreadable memory while running on active VSM and inactive VSM for Belkasoft is about 0.6% and 0.0021%, respectively. These results are significant as a reference for digital investigators, consistent with the importance of RAM dumps in live forensics.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_63-The_Influence_of_Virtual_Secure_Mode_VSM_on_Memory_Acquisition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Lightweight Object Detection Algorithms for Mobile Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131162</link>
        <id>10.14569/IJACSA.2022.0131162</id>
        <doi>10.14569/IJACSA.2022.0131162</doi>
        <lastModDate>2022-11-30T10:26:20.7030000+00:00</lastModDate>
        
        <creator>Mohammed Mansoor Nafea</creator>
        
        <creator>Siok Yee Tan</creator>
        
        <creator>Mohammed Ahmed Jubair</creator>
        
        <creator>Mustafa Tareq Abd</creator>
        
        <subject>Augmented reality (AR); object detection; computer vision (CV); non-graphics processing unit (Non-GPU); real time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Augmented Reality (AR) has placed several technologies at the forefront of innovation and change in every sector and industry. Accelerated advances in Computer Vision (CV), AR, and object detection have refined the process of analyzing and comprehending the environment. Object detection has recently drawn a lot of attention as one of the most fundamental and difficult computer vision topics. Traditional object detection techniques are fully computer-based, typically need massive Graphics Processing Unit (GPU) power, and are not usually real-time. However, an AR application requires real-time superimposed digital data to enable users to improve their field of view. This paper provides a comprehensive review of most of the recent lightweight object detection algorithms that are suitable for use in AR applications. Four sources, including Web of Science, Scopus, IEEE Xplore, and ScienceDirect, were included in this review study. A total of ten papers were discussed and analyzed from four perspectives: accuracy, speed, small object detection, and model size. Several interesting challenges are discussed as recommendations for future work in the object detection field.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_62-A_Review_of_Lightweight_Object_Detection_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Consumption Reduction Strategy and a Load Balancing Mechanism for Cloud Computing in IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131161</link>
        <id>10.14569/IJACSA.2022.0131161</id>
        <doi>10.14569/IJACSA.2022.0131161</doi>
        <lastModDate>2022-11-30T10:26:20.6870000+00:00</lastModDate>
        
        <creator>Tai Zhang</creator>
        
        <creator>Huigang Li</creator>
        
        <subject>Internet of things; load balancing; cloud computing; virtual machine migration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Modern networks are built to be linked, agile, programmable, and load-efficient in order to overcome the drawbacks of an unbalanced network, such as network congestion, elevated transmission costs, and low reliability. The many technological devices in our environment have considerable potential to make the connected-world concept a reality. The Internet of Things (IoT) is a research community initiative to bring this idea to life, and cloud computing is crucial to making it happen. Load balancing and scheduling significantly increase the possibility of using resources and provide the grounds for reliability. Whether the intended node is under low or high load, load balancing techniques can increase its efficiency. This paper presents a scheduling technique for optimal resource allocation with enhanced particle swarm optimization and a virtual machine live migration technique. The proposed technique prevents excessive or low server loads through optimal allocation and scheduling of tasks to physical servers. The proposed strategy was implemented in the CloudSim simulator environment, and comparison showed that the proposed method is more effective and well suited to decreasing execution time and energy consumption. This solution provides grounds to reduce energy consumption in the cloud environment while decreasing execution time. The simulation results showed that energy consumption decreased by 10% compared to particle crowding and by more than 8% compared to PSO (Particle Swarm Optimization) scheduling. Also, execution time was reduced by 18% compared to particle swarm scheduling and by 8% compared to PSO.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_61-Energy_Consumption_Reduction_Strategy_and_a_Load_Balancing_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Automatic Segmentation Techniques using Convolutional Neural Networks to Differentiate Diabetic Foot Ulcers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131160</link>
        <id>10.14569/IJACSA.2022.0131160</id>
        <doi>10.14569/IJACSA.2022.0131160</doi>
        <lastModDate>2022-11-30T10:26:20.6730000+00:00</lastModDate>
        
        <creator>R V Prakash</creator>
        
        <creator>K Sundeep Kumar</creator>
        
        <subject>Magnetic resonance imaging (MRI); diabetic foot ulcers (DFU); convolutional neural networks; ischemia&amp; machine learning algorithms &amp; dual-energy x-ray absorptiometry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The quality of computer vision systems for detecting abnormalities in various medical imaging processes, such as dual-energy X-ray absorptiometry, magnetic resonance imaging (MRI), ultrasonography, and computed tomography, has significantly improved as a result of recent developments in deep learning. Current techniques and algorithms for identifying, categorizing, and detecting diabetic foot ulcers (DFU) are discussed. On small datasets, a variety of techniques based on traditional machine learning and image processing are utilized to find DFU, but these works have kept their datasets and algorithms private. Therefore, the need for end-to-end automated systems that can identify DFU of all grades and stages is critical. The goals of this study were to create new CNN-based automatic segmentation techniques to separate surrounding skin from DFU on full-foot images, because surrounding skin serves as a critical visual cue for evaluating the progression of DFU, and to create reliable and portable deep learning techniques for localizing DFU that can be deployed on mobile devices for remote monitoring. The second goal was to examine the various diabetic foot diseases in accordance with well-known medical categorization schemes. From a computer vision viewpoint, the authors examined the various DFU circumstances, including site, infection, neuropathy, bacterial infection, area, and depth. Machine learning techniques have been utilized in this study to identify key DFU conditions such as ischemia and bacterial infection.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_60-Development_of_Automatic_Segmentation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hierarchical Shape Analysis based on Sampling Point-Line Distance for Regular and Symmetry Shape Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131159</link>
        <id>10.14569/IJACSA.2022.0131159</id>
        <doi>10.14569/IJACSA.2022.0131159</doi>
        <lastModDate>2022-11-30T10:26:20.6730000+00:00</lastModDate>
        
        <creator>Kehua Xian</creator>
        
        <subject>Shape recognition; hierarchical shape detection; sampling point-line distance distribution; regular and symmetry shape detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Regular and symmetric shapes occur in natural and manufactured objects. Detecting these shapes is an essential and still tricky task in computer vision. This paper proposes a novel hierarchical shape detection (HiSD) method, which consists of a circularity and roundness detection phase and a regularity and symmetry detection phase. The first phase recognizes circular and elliptical shapes using aspect ratio and roundness measurements. The second phase, the main phase of HiSD, recognizes regular and symmetric shapes using a density distribution measurement (DDM) and the proposed sampling point-line distance distribution (SPLDD) algorithm. The proposed method presents an effective, low-computation-cost shape detection approach that is not sensitive to a specific category of objects; it is able to detect different types of objects with arbitrary, regular, and symmetric shapes. Experimental results show that the proposed method performs well compared to existing state-of-the-art algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_59-A_Novel_Hierarchical_Shape_Analysis_based_on_Sampling_Point_Line_Distance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Adaptive Case-based Reasoning System for Depression Remedy Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131158</link>
        <id>10.14569/IJACSA.2022.0131158</id>
        <doi>10.14569/IJACSA.2022.0131158</doi>
        <lastModDate>2022-11-30T10:26:20.6570000+00:00</lastModDate>
        
        <creator>Hatoon S. AlSagri</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <creator>Mirvat Al-Qutt</creator>
        
        <creator>Abeer Abdulaziz AlSanad</creator>
        
        <creator>Lulwah AlSuwaidan</creator>
        
        <creator>Halah Abdulaziz Al-Alshaikh</creator>
        
        <subject>Case-based reasoning (CBR); depression; remedy; adaptation; similarity; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Social media data represents the fuel for advanced analytics concerning people’s behaviors and physiological and health status. These analytics include identifying users’ depression levels via Twitter and then recommending remedies. Remedies come in the form of suggesting accounts to follow, displaying motivational quotes, or even recommending a visit to a psychiatrist. This paper proposes a remedy recommendation system that exploits case-based reasoning (CBR) with random forest. The system recommends the appropriate remedy for a person. The main contribution of this work is the creation of an automated, data-driven, and scalable adaptation module without human interference. The results of every stage of the system were verified by a certified psychiatrist. Another contribution of this work is setting the weights in case similarity measurement by the features’ importance, extracted from the depression identification system. CBR retrieval accuracy (exact hit) reached 82%, while automatic adaptation accuracy (exact remedy) reached 88%. The adaptation presented an error-tolerance advantage that enhances the overall accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_58-An_Automatic_Adaptive_Case_based_Reasoning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Mobile Application Auction for Ornamental Fish to Increase Farmer Sales Transactions in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131157</link>
        <id>10.14569/IJACSA.2022.0131157</id>
        <doi>10.14569/IJACSA.2022.0131157</doi>
        <lastModDate>2022-11-30T10:26:20.6400000+00:00</lastModDate>
        
        <creator>Henry Antonius Eka Widjaja</creator>
        
        <creator>Meyliana</creator>
        
        <creator>Erick Fernando</creator>
        
        <creator>Stephen Wahyudi Santoso</creator>
        
        <creator>Surjandy</creator>
        
        <creator>A.Raharto Condrobimo</creator>
        
        <subject>Mobile application; auction; ornamental fish; prototyping model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>This article focuses on designing a mobile application for ornamental fish auction transactions for fish cultivators in order to increase their sales. The mobile app was created using a prototyping methodology with a four-step process: the first is communication, the second is quick planning and design, the third is construction of the prototype, which develops the auction application, and the last is deployment, delivery, and feedback. Data validation is carried out with users such as farmers, bidders, and buyers in developing the application. As a result, this paper proposes a mobile auction application that provides auction information and bidding by bidders and sellers. The results show that the application is validated and declared usable and feasible for conducting auctions and bids as needed. This application can increase sales and improve the economic life of ornamental fish farmers in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_57-Design_of_Mobile_Application_Auction_for_Ornamental_Fish.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parkinson’s Disease Identification using Deep Neural Network with RESNET50</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131156</link>
        <id>10.14569/IJACSA.2022.0131156</id>
        <doi>10.14569/IJACSA.2022.0131156</doi>
        <lastModDate>2022-11-30T10:26:20.6230000+00:00</lastModDate>
        
        <creator>Anila M</creator>
        
        <creator>Pradeepini Gera</creator>
        
        <subject>Parkinson’s disease; speech impairment; artificial intelligence; RESNET50; deep learning; ht/wigner-ville distribution; 2D time-frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Recent Parkinson&#39;s disease (PD) research has focused on recognizing vocal defects from prolonged vowel phonations or running speech, since 90% of Parkinson&#39;s patients demonstrate vocal dysfunction in the early stages of the illness. This research provides a hybrid time-frequency analysis and deep learning technique for PD signal categorization based on ResNet50. The recommended strategy eliminates the manual feature extraction procedures of machine learning. 2D time-frequency diagrams give frequency and energy information while retaining PD morphology. The method transforms 1D PD recordings into 2D time-frequency diagrams using a hybrid HT/Wigner-Ville distribution (WVD). We obtained 91.04% accuracy in five-fold cross-validation and 86.86% in testing using ResNet50, with an F1-score of 0.89186. The suggested approach is more accurate than state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_56-Parkinsons_Disease_Identification_using_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Verification and Emotion Detection using Effective Modelling Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131154</link>
        <id>10.14569/IJACSA.2022.0131154</id>
        <doi>10.14569/IJACSA.2022.0131154</doi>
        <lastModDate>2022-11-30T10:26:20.6100000+00:00</lastModDate>
        
        <creator>Sumana Maradithaya</creator>
        
        <creator>Vaishnavi S</creator>
        
        <subject>Face detection; face recognition; emotion detection; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The feelings expressed on the face reflect the manner of thinking and provide useful insights into happenings inside the brain. Face detection enables us to identify a face. Recognizing facial expressions for different emotions aims to give the machine a human-like capacity to perceive and identify human feelings; it involves classifying given input face images into one of seven classes, which is achieved by building a multi-class classifier. The proposed methodology is based on convolutional neural networks and works on 48x48 grayscale images. The proposed model is tested on various images and gives the best accuracy when compared with existing approaches. It detects faces in images, recognizes them, identifies emotions, and shows improved performance because of data augmentation. The model is experimented with varying depths and pooling layers. The best results are obtained with a sequential model of six Convolutional Neural Network layers and a softmax activation function applied to the last layer. The approach works for real-time data taken from videos or photos.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_54-Image_Verification_and_Emotion_Detection_using_Effective_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Analytics Quality in Enhancing Healthcare Organizational Performance: A Conceptual Model Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131155</link>
        <id>10.14569/IJACSA.2022.0131155</id>
        <doi>10.14569/IJACSA.2022.0131155</doi>
        <lastModDate>2022-11-30T10:26:20.6100000+00:00</lastModDate>
        
        <creator>Wan Mohd Haffiz Mohd Nasir</creator>
        
        <creator>Rusli Abdullah</creator>
        
        <creator>Yusmadi Yah Jusoh</creator>
        
        <creator>Salfarina Abdullah</creator>
        
        <subject>Big data analytics; BDA quality factors; BDA quality assessment; organizational performance; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The advancement of Big Data Analytics (BDA) has aided numerous organizations in effectively and efficiently adopting BDA as a holistic solution. However, BDA quality assessment has not yet been fully addressed; therefore, it is necessary to identify essential BDA quality factors to ensure the enhancement of organizational performance, particularly in the healthcare sector. Hence, the goals of this study are to recognize and analyse the determining factors of BDA quality and to suggest a conceptual model for enhancing the performance of healthcare organizations via BDA quality assessment. The proposed conceptual model is based on a related theoretical model and previous research on BDA quality. The essential BDA quality factors selected as determinants consist of reliability, completeness, accuracy, timeliness, format, accessibility, usability, maintainability, and portability. The findings of this ongoing study are used to develop a conceptual model that is proposed in line with ten research hypotheses and may offer a better quality assessment model to improve the performance of healthcare organizations.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_55-Big_Data_Analytics_Quality_in_Enhancing_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for 1/f Fluctuation Component Extraction from Images and Its Application to Improve Kurume Kasuri Quality Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131153</link>
        <id>10.14569/IJACSA.2022.0131153</id>
        <doi>10.14569/IJACSA.2022.0131153</doi>
        <lastModDate>2022-11-30T10:26:20.5930000+00:00</lastModDate>
        
        <creator>Jin Shimazoe</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Mariko Oda</creator>
        
        <creator>Jewon Oh</creator>
        
        <subject>1/f fluctuation component extraction; Kurume Kasuri textile quality; FLANN; OpenCV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A method for 1/f fluctuation component extraction from images is proposed. As an application of the proposed method, Kurume Kasuri textile quality evaluation is also proposed. Frequency component analysis is used for 1/f fluctuation component extraction. In addition, an attempt is made to discriminate the typical Kurume Kasuri textile qualities: (1) relatively smooth edge lines are included in the Kurume Kasuri textile patterns, (2) relatively non-smooth edge lines are included in the patterns, and (3) patterns between both (1) and (2), by using the template matching method of FLANN in OpenCV. Through experiments, it is found that the proposed method works for the extraction of the 1/f fluctuation component, and also that Kurume Kasuri textile quality evaluation can be performed with the result of 1/f fluctuation component extraction.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_53-Method_for_1f_Fluctuation_Component_Extraction_from_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Underwater Pipe Crack Detection System for Low-Cost Underwater Vehicle using Raspberry Pi and Canny Edge Detection Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131152</link>
        <id>10.14569/IJACSA.2022.0131152</id>
        <doi>10.14569/IJACSA.2022.0131152</doi>
        <lastModDate>2022-11-30T10:26:20.5770000+00:00</lastModDate>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Nur Farah Hanisah</creator>
        
        <creator>Muhammad Shafique Ashroff</creator>
        
        <creator>Sallaudin Hassan</creator>
        
        <creator>Siti Fairuz Nurr</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <subject>Crack detection; pipeline; underwater vehicle; image processing; Raspberry Pi; canny edge detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The effective loading area decreases because of cracking, leading to a rise in stress and eventual structural failure. Monitoring for cracks is an important part of keeping any pipeline or building in excellent working order. Several obstacles make manual inspection and monitoring of subsea pipes challenging. The fundamental objective of this study is to create a relatively inexpensive underwater vehicle that can use an image processing technique to reliably spot cracks on the exteriors of industrial pipes. The tasks involved in this project include the planning, development, and testing of an underwater vehicle that can approach circular pipes, take pictures, and determine whether there are fractures. In this project, the Canny edge detection technique is utilized to identify the cracks. The system can function in either an online or offline mode. Using a Raspberry Pi and a camera, the paper discusses the procedures followed to locate the pipe cracks that activate the underwater vehicle. While Python is used for image processing to capture photographs, analyze images, and expose flaws in particular images, the underwater vehicle&#39;s movement is controlled via a connected remote control. After the physical model has been built and tested, the results are recorded, and the system&#39;s benefits and shortcomings are discussed.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_52-Development_of_Underwater_Pipe_Crack_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Speaking Training System for English Speech Education using Speech Recognition Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131151</link>
        <id>10.14569/IJACSA.2022.0131151</id>
        <doi>10.14569/IJACSA.2022.0131151</doi>
        <lastModDate>2022-11-30T10:26:20.5630000+00:00</lastModDate>
        
        <creator>Hengheng He</creator>
        
        <subject>English speech; long short-term memory; speaking training; speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A good English speaking training system can aid the learning of English. This paper briefly introduces an English speaking training system and describes its speaking training scoring and pronunciation resonance peak display modules. The speaking training scoring module scored pronunciation with Long Short-Term Memory (LSTM). The pronunciation resonance peak display module extracted the resonance peak with the Fourier transform and visualized it. Finally, the speaking scoring module, the pronunciation resonance peak display module, and the effect of the whole system in improving students’ speaking pronunciation were tested. The results showed that the LSTM-based speaking scoring algorithm had higher scoring accuracy than the pattern matching and recurrent neural network (RNN) algorithms: its accuracy was 95.21% when scoring the LibriSpeech dataset and 90.12% when scoring the local English dataset. The pronunciation resonance peak display module displayed the change of mouth shape before and after training, and the pronunciation after training was closer to the standard pronunciation. The P value in the comparison of the speaking level before and after training with the system was 0.001, i.e., the difference was significant, which indicated that the students’ English speaking proficiency significantly improved.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_51-Design_of_a_Speaking_Training_System_for_English_Speech_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constraints on Hyper-parameters in Deep Learning Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131150</link>
        <id>10.14569/IJACSA.2022.0131150</id>
        <doi>10.14569/IJACSA.2022.0131150</doi>
        <lastModDate>2022-11-30T10:26:20.5630000+00:00</lastModDate>
        
        <creator>Ubaid M. Al-Saggaf</creator>
        
        <creator>Abdelaziz Botalb</creator>
        
        <creator>Muhammad Faisal</creator>
        
        <creator>Muhammad Moinuddin</creator>
        
        <creator>Abdulrahman U. Alsaggaf</creator>
        
        <creator>Sulhi Ali Alfakeh</creator>
        
        <subject>Neural networks; convolution; pooling; hyper-parameters; CNN; deep learning; zero-padding; stride; back-propagation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The Convolutional Neural Network (CNN), a type of deep learning model, has a very large number of hyper-parameters in contrast to the Artificial Neural Network (ANN), which makes the task of CNN training more demanding. The reason hyper-parameter optimization is difficult in the CNN is the existence of a huge optimization space comprising a large number of hyper-parameters such as the number of layers, number of neurons, number of kernels, stride, padding, row or column truncation, parameters of the backpropagation algorithm, etc. Moreover, most of the existing techniques in the literature for the selection of these parameters are based on random practice developed for some specific datasets. In this work, we empirically investigated and showed that CNN performance is linked not only to choosing the right hyper-parameters but also to its implementation. More specifically, it is found that performance also depends on how the implementation behaves when the CNN operations require settings of hyper-parameters that do not symmetrically fit the input volume. We demonstrated two different implementations: cropping or padding the input volume to make it fit. Our analysis shows that padding performs better than cropping in terms of prediction accuracy (85.58% in contrast to 82.62%) while taking less training time (8 minutes less).</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_50-Constraints_on_Hyper_parameters_in_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Electromyography Signal of Diabetes using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131149</link>
        <id>10.14569/IJACSA.2022.0131149</id>
        <doi>10.14569/IJACSA.2022.0131149</doi>
        <lastModDate>2022-11-30T10:26:20.5470000+00:00</lastModDate>
        
        <creator>Muhammad Fathi Yakan Zulkifli</creator>
        
        <creator>Noorhamizah Mohamed Nasir</creator>
        
        <subject>Electromyography; diabetic neuropathy; classification; machine learning; artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Diabetes is one of the most prevalent chronic diseases, with an increasing number of sufferers yearly. It can lead to several serious complications, including diabetic peripheral neuropathy (DPN). DPN must be recognized early so that patients can receive appropriate treatment and prevent disease exacerbation. Owing to the rapid development of machine learning classification, such as in the health science sector, it has become much easier to identify DPN in its early stages. Therefore, the aim of this study is to develop a new low-cost method for detecting neuropathy based on the myoelectric signal among diabetes patients, utilizing one of the machine learning techniques, the artificial neural network (ANN). To that aim, the muscle sensor V3 is used to record the activity of the anterior tibialis muscle. Then, the representative time domain features, which are mean absolute value (MAV), root mean square (RMS), variance (VAR), and standard deviation (SD), were used to evaluate fatigue. During neural network training, different numbers of hidden neurons were used, and it was found that using seven hidden neurons yielded a high accuracy of 98.6%. Thus, this work indicates the potential of a low-cost system for classifying healthy and diabetic individuals using an ANN algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_49-Classification_of_Electromyography_Signal_of_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining AHP and Topsis to Select Eligible Social and Solidarity Economy Actors for a Call for Grants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131148</link>
        <id>10.14569/IJACSA.2022.0131148</id>
        <doi>10.14569/IJACSA.2022.0131148</doi>
        <lastModDate>2022-11-30T10:26:20.5300000+00:00</lastModDate>
        
        <creator>Salma Chrit</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Monir Azmani</creator>
        
        <subject>AHP; TOPSIS; project selection; decision making; multi criteria decision method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The procedure for selecting projects in order to offer a grant to actors of the social and solidarity economy can be a delicate task for decision-makers (public or private establishments). It is based on several eligibility and refusal criteria (economic, social, and environmental), and can sometimes take several months before returning results. This study proposes an integrated framework based on two multi-criteria decision methods, the analytical hierarchy process (AHP) and the technique for order performance by similarity to an ideal solution (TOPSIS), to select and rank viable projects for a grant from the INDH (National Initiative for Human Development). Initially, the projects were randomly selected from a list of projects submitted to receive a grant. Then, AHP obtains the weights of the various criteria through pairwise comparison, and the projects are ranked using TOPSIS. The proposed methodology is empirically applied to the social and solidarity economy sector and provides a detailed and effective decision-making tool for selecting suitable actors to obtain a grant. The results indicate that the conservation of natural resources and the rate of job creation are the essential criteria in the project selection process.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_48-Combining_AHP_and_TOPSIS_to_Select_Eligible_Social_and_Solidarity_Economy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Brain Diseases using Hyper Integral Segmentation Approach (HISA) and Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131147</link>
        <id>10.14569/IJACSA.2022.0131147</id>
        <doi>10.14569/IJACSA.2022.0131147</doi>
        <lastModDate>2022-11-30T10:26:20.5170000+00:00</lastModDate>
        
        <creator>M. Praveena</creator>
        
        <creator>M. Kameswara Rao</creator>
        
        <subject>Benign; malignant; magnetic resonance imaging; Alzheimer&#39;s disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Medical images are most widely analyzed using various image processing approaches. Image processing is used to analyze abnormal tissues based on given input images. Deep learning (DL) is one of the fastest-growing fields in computer science, and specifically in medical imaging analysis. A tumor is a mass of tissue that contains abnormal cells. Normal tumor tissues may not grow in other places, but if they contain cancerous (malignant) cells these tissues may grow rapidly. It is very important to know the cause of brain tumors in humans, and they should be detected in the early stages. Magnetic Resonance Imaging (MRI) images are most widely used to detect tumors in the brain, and they are also used to detect tumors all over the body. Tumors are of various types, such as noncancerous (benign) and cancerous (malignant). Sometimes tumors may convert into cancer cells depending on the stage of the tumor. In this paper, a hyper integral segmentation approach (HISA) is introduced to detect cancerous and non-cancerous tumors. Detecting cancerous cells in tumors may reduce the life threat to affected persons. Agent-based reinforcement classification (ABRC) is used to classify Alzheimer&#39;s disease (AD) and cancerous and non-cancerous cells based on the abnormalities present in the MRI images. Two publicly available datasets are selected: MRI images and AD-affected MRI images. Performance is analyzed by showing improved metrics such as accuracy, f1-score, sensitivity, dice similarity score, and specificity.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_47-Detecting_Brain_Diseases_using_Hyper_Integral_Segmentation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward an Ontological Cyberattack Framework to Secure Smart Cities with Machine Learning Support</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131145</link>
        <id>10.14569/IJACSA.2022.0131145</id>
        <doi>10.14569/IJACSA.2022.0131145</doi>
        <lastModDate>2022-11-30T10:26:20.5000000+00:00</lastModDate>
        
        <creator>Ola Malkawi</creator>
        
        <creator>Nadim Obaid</creator>
        
        <creator>Wesam Almobaideen</creator>
        
        <subject>Cyberattack; Internet of Things (IoT); ontology; machine learning; intrusion detection system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>With the emergence of and movement toward the Internet of Things (IoT), one of the most significant applications that has gained a great deal of attention is smart cities. In smart cities, IoT is leveraged to manage life and services with minimal, or even no, human intervention. The IoT paradigm has created opportunities for a wide variety of cyberattacks to threaten systems and users. Many challenges have been faced in countering IoT cyberattacks, such as the diversity of attacks and the frequent appearance of new attacks. This raises the need for a general and uniform representation of cyberattacks. The ontology proposed in this paper can be used to develop a generalized framework and to provide a comprehensive study of potential cyberattacks in a smart city system. Ontology can serve in building this intended general framework by developing a description and a knowledge base for cyberattacks as a set of concepts and the relations between them. In this article, we have proposed an ontology to describe cyberattacks, identified the benefits of such an ontology, and discussed a case study showing how we can utilize the proposed ontology to implement a simple intrusion detection system with the assistance of Machine Learning (ML). The ontology is implemented using the Prot&#233;g&#233; ontology editor and framework, and WEKA is utilized to construct the inference rules of the proposed ontology. Results show that the intrusion detection system developed using the ontology performs well in revealing the occurrence of different cyber-attacks, with accuracy reaching 97% in detecting cyber-attacks in a smart city system.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_45-Toward_an_Ontological_Cyberattack_Framework_to_Secure_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Annotation Scheme to Generate Hate Speech Corpus through Crowdsourcing and Active Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131146</link>
        <id>10.14569/IJACSA.2022.0131146</id>
        <doi>10.14569/IJACSA.2022.0131146</doi>
        <lastModDate>2022-11-30T10:26:20.5000000+00:00</lastModDate>
        
        <creator>Nadeera Meedin</creator>
        
        <creator>Maneesha Caldera</creator>
        
        <creator>Suresha Perera</creator>
        
        <creator>Indika Perera</creator>
        
        <subject>Annotation; crowdsourcing; hate speech detection; social media data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The number of user-generated posts is growing exponentially with the growth of social media usage. Promoting violence against, or inciting hatred of, individuals or groups based on specific attributes via social media posts is a daunting problem. As posts are published in multiple languages with different forms of multimedia, social media platforms find it challenging to moderate them before they reach the audience, and assessing posts as hate speech is sophisticated due to subjectivity. Social media platforms lack the contextual and linguistic expertise and the social and cultural insights to identify hate speech accurately. Research is being carried out to detect hate speech in English social media content using machine learning algorithms, etc., via different crowdsourcing platforms. However, these platforms&#39; workers are unavailable from countries such as Sri Lanka. The lack of a workforce with the necessary skill set and of suitable annotation schemes signals the need for further research in low-resource language annotation. This research proposes a suitable crowdsourcing approach to label and annotate social media content to generate corpora with words and phrases for identifying hate speech using machine learning algorithms in Sri Lanka. Using the proposed annotation scheme, this paper summarizes 52,646 instances of annotated Facebook posts, comments, and replies to comments from public Sri Lankan Facebook user profiles, pages, and groups; 45,000 instances of unlabeled tweets based on 996 Twitter search keywords; and 45,000 instances of YouTube videos. 9%, 21%, and 14% of Facebook, Twitter, and YouTube posts, respectively, were identified as containing hate content. In addition, the posts were categorized as offensive and non-offensive, and hate targets and the corpus associated with hate targets focusing on an individual or group were identified and presented in this paper.
The proposed annotation scheme could be extended to other low-resource languages to identify hate speech corpora. With a well-implemented crowdsourcing platform and the proposed novel annotation scheme, it will be possible to find more subtle patterns with human judgment and filtering and take preventive measures to create a better cyberspace.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_46-A_Novel_Annotation_Scheme_to_Generate_Hate_Speech_Corpus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Warehouse Analysis and Design based on Research and Service Standards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131144</link>
        <id>10.14569/IJACSA.2022.0131144</id>
        <doi>10.14569/IJACSA.2022.0131144</doi>
        <lastModDate>2022-11-30T10:26:20.4830000+00:00</lastModDate>
        
        <creator>Lasmedi Afuan</creator>
        
        <creator>Nurul Hidayat</creator>
        
        <creator>Dadang Iskandar</creator>
        
        <creator>Arief Kelik Nugroho</creator>
        
        <creator>Bangun Wijayanto</creator>
        
        <creator>Ana Romadhona Yasifa</creator>
        
        <subject>Data warehouse; knowage; SNDIKTI; UNSOED</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Data are not easy to organize, especially if the data are large in quantity and stored manually in a non-computerized way. Therefore, in the last few years, many organizations or companies have used information systems to help organize and manage their data. Universitas Jenderal Soedirman (UNSOED) is a state college that has existed for a long time and has many study programs and faculties, including the Faculty of Engineering. Data organization in UNSOED is mainly performed through computerization. However, data retrieval needs to be improved because UNSOED has various information systems, and the data produced keeps increasing over time. The data have yet to be evaluated following the needs of Tri Dharma, with indicators of achievement as expressed in the Regulation of the Minister of Education and Culture (PERMENDIKBUD) on the National Standard of Higher Education (SNDIKTI). Data warehouse technology can be applied as a medium for storing, collecting, and processing data within a specific time from various data sources. The data processing results in the data warehouse are then displayed using the Knowage tool, which may help the executives of the Faculty of Engineering make decisions and regularly monitor activities, mainly research and service, carried out by the society of academicians in the Faculty of Engineering.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_44-Data_Warehouse_Analysis_and_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision-based Human Detection by Fine-Tuned SSD Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131143</link>
        <id>10.14569/IJACSA.2022.0131143</id>
        <doi>10.14569/IJACSA.2022.0131143</doi>
        <lastModDate>2022-11-30T10:26:20.4700000+00:00</lastModDate>
        
        <creator>Tang Jin Cheng</creator>
        
        <creator>Ahmad Fakhri Ab. Nasir</creator>
        
        <creator>Anwar P. P. Abdul Majeed</creator>
        
        <creator>Mohd Azraai Mohd Razman</creator>
        
        <creator>Thai Li Lim</creator>
        
        <subject>Human detection; deep learning; transfer learning; SSD; fine-tuning; human-robot interactions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Human-robot interaction (HRI) and human-robot collaboration (HRC) have become more popular as industries take the initiative to realize the era of automation and digitalization. The introduction of robots is often considered a risk due to the fact that robots do not possess intelligence as humans do. However, the literature that uses deep learning technologies as the basis to improve HRI safety is limited, not to mention the transfer learning approach. Hence, this study intended to empirically examine the efficacy of the transfer learning approach in the human detection task by fine-tuning SSD models. A custom image dataset was developed using the surveillance system in TT Vision Holdings Berhad and annotated accordingly. Thereafter, the dataset was partitioned into train, validation, and test sets in a ratio of 70:20:10. The learning behaviour of the models was monitored throughout the fine-tuning process via the total loss graph. The results reveal that the SSD fine-tuned model with MobileNetV1 achieved 87.20% test AP, which is 6.1% higher than the SSD fine-tuned model with MobileNetV2. As a trade-off, the SSD fine-tuned model with MobileNetV1 attained a 46.2 ms inference time on an RTX 3070, which is 9.6 ms slower than the SSD fine-tuned model with MobileNetV2. Taking test AP as the key metric, the SSD fine-tuned model with MobileNetV1 is considered the best fine-tuned model in this study. In conclusion, it has been shown that the transfer learning approach within the deep learning domain can help to protect humans from risk by detecting them in the first place.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_43-Vision_based_Human_Detection_by_Fine_Tuned_SSD_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stock Price Forecasting using Convolutional Neural Networks and Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131142</link>
        <id>10.14569/IJACSA.2022.0131142</id>
        <doi>10.14569/IJACSA.2022.0131142</doi>
        <lastModDate>2022-11-30T10:26:20.4530000+00:00</lastModDate>
        
        <creator>Nilesh B. Korade</creator>
        
        <creator>Mohd. Zuber</creator>
        
        <subject>Convolutional neural networks; swarm intelligence; random search; particle swarm optimization; firefly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Forecasting the correct stock price is intriguing and difficult for investors due to its irregular, inherently dynamic, and tricky nature. Convolutional neural networks (CNN) have shown impressive performance in forecasting stock prices. One of the most crucial tasks when training a CNN on a stock dataset is identifying the optimal hyperparameters that increase accuracy. In this research, we propose the use of the Firefly algorithm to optimize CNN hyperparameters. The hyperparameters for the CNN were tuned with the help of the Random Search (RS), Particle Swarm Optimization (PSO), and Firefly (FF) algorithms over different numbers of epochs, and the CNN is trained on the selected hyperparameters. Different evaluation metrics are calculated for the training and testing datasets. The experimental findings demonstrate that the FF method finds the ideal parameters with a minimal number of fireflies and epochs. The objective function of the optimization technique is to reduce the MSE. The PSO method delivers good results with increasing particle counts, while the FF method gives good results with fewer fireflies. In comparison with PSO, the MSE of the FF approach converges with increasing epochs.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_42-Stock_Price_Forecasting_using_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low-rate DDoS attack Detection using Deep Learning for SDN-enabled IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131141</link>
        <id>10.14569/IJACSA.2022.0131141</id>
        <doi>10.14569/IJACSA.2022.0131141</doi>
        <lastModDate>2022-11-30T10:26:20.4530000+00:00</lastModDate>
        
        <creator>Abdussalam Ahmed Alashhab</creator>
        
        <creator>Mohd Soperi Mohd Zahid</creator>
        
        <creator>Amgad Muneer</creator>
        
        <creator>Mujaheed Abdullahi</creator>
        
        <subject>SDN; LDDoS attack; OpenFlow; Deep Learning; Long-Short Term Memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Software Defined Networks (SDN) can logically route traffic and utilize underutilized network resources, which has enabled the deployment of SDN-enabled Internet of Things (IoT) architecture in many industrial systems. SDN also removes bottlenecks and helps process IoT data efficiently without overloading the network. An SDN-based IoT in an evolving environment is vulnerable to various types of distributed denial of service (DDoS) attacks. Many research papers focus on high-rate DDoS attacks, while few address low-rate DDoS (LDDoS) attacks in SDN-based IoT networks. There is a need to enhance the accuracy of LDDoS attack detection in SDN-based IoT networks and the OpenFlow communication channel. In this paper, we propose an LDDoS attack detection approach based on a deep learning (DL) model that consists of an activation function of the Long-Short Term Memory (LSTM) to detect different types of LDDoS attacks in IoT networks by analyzing the characteristic values of different types of LDDoS attacks and natural traffic, improve the accuracy of LDDoS attack detection, and reduce malicious traffic flow. The experimental results show that the model achieved an accuracy of 98.88%. In addition, the model has been tested and validated using the benchmark Edge IIoTset dataset, which consists of cybersecurity attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_41-Low_rate_DDoS_attack_Detection_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BCT-CS: Blockchain Technology Applications for Cyber Defense and Cybersecurity: A Survey and Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131140</link>
        <id>10.14569/IJACSA.2022.0131140</id>
        <doi>10.14569/IJACSA.2022.0131140</doi>
        <lastModDate>2022-11-30T10:26:20.4370000+00:00</lastModDate>
        
        <creator>Naresh Kshetri</creator>
        
        <creator>Chandra Sekhar Bhushal</creator>
        
        <creator>Purnendu Shekhar Pandey</creator>
        
        <creator>Vasudha</creator>
        
        <subject>Applications; blockchain technology; blockchain solutions; countermeasures; cyber-attacks; cyber defense; cybersecurity; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Blockchain technology has now emerged as a ground-breaking technology with possible solutions to applications from securing smart cities to e-voting systems. Although it started as the digital currency or cryptocurrency Bitcoin, there is no doubt that blockchain is influencing, and will increasingly influence, business and society in the near future. We present a comprehensive survey of how blockchain technology is applied to provide security over the web and to counter ongoing threats as well as increasing cybercrimes and cyber-attacks. During the review, we also investigate how blockchain can affect cyber data and information over the web. Our contributions include the following: (i) summarizing the blockchain architecture and models for cybersecurity; (ii) classifying and discussing recent and relevant works on cyber countermeasures using blockchain; (iii) analyzing the main challenges and obstacles of blockchain technology in response to cyber defense and cybersecurity; and (iv) recommendations for improvement and future research on the integration of blockchain with cyber defense.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_40-BCT_CS_Blockchain_Technology_Applications_for_Cyber_Defense.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-based Neural Network for Electrocardiogram Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131139</link>
        <id>10.14569/IJACSA.2022.0131139</id>
        <doi>10.14569/IJACSA.2022.0131139</doi>
        <lastModDate>2022-11-30T10:26:20.4230000+00:00</lastModDate>
        
        <creator>Mohammed A. Atiea</creator>
        
        <creator>Mark Adel</creator>
        
        <subject>Electrocardiogram classification; transformer neural network; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A transformer neural network is a powerful method that is used for sequence modeling and classification. In this paper, the transformer neural network was combined with a convolutional neural network (CNN) that performs feature embedding to provide the transformer inputs. The proposed model accepts the raw electrocardiogram (ECG) signals side by side with extracted morphological ECG features to boost the classification performance. The raw ECG signal and the morphological features of the ECG signal pass through two independent paths with the same model architecture, where the outputs of the two transformer decoders are concatenated and passed through the final linear classifier to give the predicted class. The experiments and results on the PTB-XL dataset with 7-fold cross-validation have shown that the proposed model achieves high accuracy and F-score, with averages of 99.86% and 99.85% respectively, which demonstrates the robustness of the model and its feasibility for industrial applications.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_39-Transformer_based_Neural_Network_for_Electrocardiogram_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Abnormal Human Behavior in Video Images based on a Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131138</link>
        <id>10.14569/IJACSA.2022.0131138</id>
        <doi>10.14569/IJACSA.2022.0131138</doi>
        <lastModDate>2022-11-30T10:26:20.4230000+00:00</lastModDate>
        
        <creator>BAI Ya-meng</creator>
        
        <creator>WANG Yang</creator>
        
        <creator>WU Shen-shen</creator>
        
        <subject>Cellular neural network; detection of abnormalities; differential evolution algorithm; video images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The analysis of human movement has attracted the attention of many scholars of various disciplines today. The purpose of such systems is to perceive human behavior from a sequence of video images. They monitor the population to find common properties among pedestrians on the scene. In video surveillance, the main purpose of detecting specific or malicious events is to help security personnel. Different methods have been used to detect human behavior from images. This paper uses an efficient computational algorithm for detecting anomalies in video images based on a combined approach of the differential evolution algorithm and a cellular neural network. In this method, a gray-level version of the input image is first generated. Because several large areas may be identified in the image after thresholding, the largest white area is selected as the target area. The images are then processed to remove noise, smooth the image, and apply morphological fading. The results showed that the proposed method has higher speed and accuracy than other methods. The advantage of the algorithm is that it has a runtime of three seconds on a home computer, and the average sensitivity criterion is 98.6% (97.2%).</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_38-Detection_of_Abnormal_Human_Behavior_in_Video_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Tree Classifier based Analysis of Water Quality for Layer Poultry Farm: A Study on Cauvery River</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131137</link>
        <id>10.14569/IJACSA.2022.0131137</id>
        <doi>10.14569/IJACSA.2022.0131137</doi>
        <lastModDate>2022-11-30T10:26:20.4070000+00:00</lastModDate>
        
        <creator>Deepika </creator>
        
        <creator>Nagarathna</creator>
        
        <creator>Channegowda</creator>
        
        <subject>Water quality; poultry; machine learning; Cauvery river; feature ranking; ensemble tree classifier; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The Indian poultry industry has evolved from a simple backyard occupation to a large commercial agri-based enterprise. Chicken dominates poultry production in India, accounting for almost 95% of total egg production. Several factors affect egg production, such as feed material, drinking water, and environmental conditions. Analyzing the water quality is one of the important tasks. The Cauvery River is considered as the study area because of its importance to several states of South India that contribute significantly to poultry farming. The aim of the proposed study is to develop an automated approach to water quality analysis and present a novel machine learning approach that combines an improved feature ranking method with an ensemble tree classifier using majority voting. The experimental results show that the proposed approach performs better, with an accuracy of 95.12%.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_37-Ensemble_Tree_Classifier_based_Analysis_of_Water_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Fair Evaluation of Feature Extraction Algorithms Robustness in Structure from Motion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131136</link>
        <id>10.14569/IJACSA.2022.0131136</id>
        <doi>10.14569/IJACSA.2022.0131136</doi>
        <lastModDate>2022-11-30T10:26:20.3900000+00:00</lastModDate>
        
        <creator>Dina M. Taha</creator>
        
        <creator>Hala H. Zayed</creator>
        
        <creator>Shady Y. El-Mashad</creator>
        
        <subject>Feature extraction; feature matching; structure from motion; 3D reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Structure from Motion is a pipeline for 3D reconstruction in which the true geometry of an object or a scene is inferred from a sequence of 2D images. As feature extraction is usually the first phase in the pipeline, the reconstruction quality depends on the accuracy of the feature extraction algorithm. Fairly evaluating the robustness of feature extraction algorithms in the absence of reconstruction ground truth is challenging due to the considerable number of parameters that affect the algorithms&#39; sensitivity and the tradeoff between reconstruction size and error. The evaluation methodology proposed in this paper is based on two elements. The first is using constrained 3D reconstruction, in which only fixed numbers of extracted and matched features are passed to subsequent phases. The second is comparing the 3D reconstructions using size-error curves (introduced in this paper) rather than the value of reconstruction size, error, or both. The experimental results show that the proposed methodology is more transparent.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_36-Towards_a_Fair_Evaluation_of_Feature_Extraction_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Ensemble-based Framework for Outlier Detection in Evolving Data Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131135</link>
        <id>10.14569/IJACSA.2022.0131135</id>
        <doi>10.14569/IJACSA.2022.0131135</doi>
        <lastModDate>2022-11-30T10:26:20.3730000+00:00</lastModDate>
        
        <creator>Asmaa F. Hassan</creator>
        
        <creator>Sherif Barakat</creator>
        
        <creator>Amira Rezk</creator>
        
        <subject>Outlier detection; data streams; data stream mining; ensemble learning; concept evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>In the last few years, data streams have drawn lots of researchers’ attention due to their various applications, such as healthcare monitoring systems, fraud and intrusion detection, the internet of things (IoT), and financial market applications. A data stream is an unbounded sequence of data continually generated over time and is prone to evolution. Outliers in streaming data are the elements that significantly deviate from the majority of elements and have to be detected, as they may be error values or events of interest. Detection of outliers is a challenging issue in streaming data and is one of the most crucial tasks in data stream mining. Existing outlier detection methods for static data are unsuitable for use in data stream settings due to the unique characteristics of streaming data such as unpredictability, uncertainty, high dimensionality, and changes in data distribution. Thus, in this paper, a novel ensemble learning framework called Ensemble-based Streaming Outlier Detection (ESOD) is presented to effectively detect outliers over streaming data using a sliding window technique that is updated in response to the incoming events from the data streaming environment, overcoming the concept-evolution nature of streaming data. The proposed framework has three phases, namely the training phase, the testing/offline phase, and the outlier detection/online phase. A detection weighted vote technique is used to determine the final decisions for potential outliers. In the extensive experimental study, which was conducted on 11 real-world benchmark datasets, the proposed framework was assessed using many accuracy metrics. The experimental results showed that the proposed framework outperforms many other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_35-An_Effective_Ensemble_based_Framework_for_Outlier_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Support Vector Machine based Fall Detection Method for Traumatic Brain Injuries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131134</link>
        <id>10.14569/IJACSA.2022.0131134</id>
        <doi>10.14569/IJACSA.2022.0131134</doi>
        <lastModDate>2022-11-30T10:26:20.3730000+00:00</lastModDate>
        
        <creator>Mohammad Kchouri</creator>
        
        <creator>Norharyati Harum</creator>
        
        <creator>Ali Obeid</creator>
        
        <creator>Hussein Hazimeh</creator>
        
        <subject>Fall detection; fuzzy logic; SVM; traumatic brain injuries; wearable sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Falling is a major health issue that can lead to both physical and mental injuries. Detecting falls accurately can reduce the severe effects and improve the quality of life for disabled people. Therefore, it is critical to develop a smart fall detection system. Many approaches have been proposed in wearable-based systems. In these approaches, machine learning techniques have been used to provide automatic classification and to improve accuracy. One of the most commonly used algorithms is the Support Vector Machine (SVM). However, classical SVM can neither use prior knowledge to produce accurate classifications nor solve problems characterized by ambiguity. More specifically, some values of falls are inaccurate and similar to the features of normal activities, which can also greatly impact the learning performance of SVMs. Hence, an effective fall detection method based on a combination of Fuzzy Logic (FL) and SVM algorithms is needed to reduce false positive alarms and improve accuracy. In this paper, various training data are assigned corresponding membership degrees. Data points with a high chance of representing a fall are assigned a high degree of membership, yielding a high contribution to SVM decision-making. This not only achieves accurate fall detection, but also reduces the hesitation in labeling the outcomes and improves the heuristic transparency of the SVM. The experimental results achieved 100% specificity and precision, with an overall accuracy of 99.96%. Consequently, the experiment proved to be effective and yielded better results than the conventional approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_34-Fuzzy_Support_Vector_Machine_based_Fall_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Guideline for Designing Mobile Applications for Children with Autism within Religious Boundaries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131133</link>
        <id>10.14569/IJACSA.2022.0131133</id>
        <doi>10.14569/IJACSA.2022.0131133</doi>
        <lastModDate>2022-11-30T10:26:20.3600000+00:00</lastModDate>
        
        <creator>Ajrun Azhim Zamry</creator>
        
        <creator>Muhammad Haziq Lim Abdullah</creator>
        
        <creator>Mohd Hafiz Zakaria</creator>
        
        <subject>Autism Spectrum Disorder (ASD); guidelines; mobile applications; religion; assistive technology; communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Autism spectrum disorder is a condition related to brain development that impacts how a person perceives and socialises with others, causing problems in social interaction and communication. The disorder also includes limited and repetitive patterns of behavior. Children with autism spectrum disorder (ASD) develop at a different rate and do not necessarily develop skills in the same order as typically developing children. Nowadays, children with ASD have difficulty gaining religious skills. This is due to the lack of schools that provide special religious education for disabled children. Many technologies have been developed to help children with autism in education, and mobile applications have been used extensively to enhance their daily learning. Researchers are actively trialling their applications, but not many applications are able to meet the requirements and needs of children with autism, especially in the religious context. The lack of guidelines for religious mobile applications is a critical gap for researchers. This paper aims to propose a guideline for designing mobile applications for children with autism in a religious context. A systematic review of previous literature on mobile application guidelines for autism and religious mobile application guidelines was conducted. This study resulted in two key findings: (1) elements of multimedia consist of text, images and sounds; and (2) features of the application consist of interface, navigation, customisation and interaction. The proposed guidelines can potentially be used by researchers who are interested in designing religious mobile applications for children with autism.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_33-A_Guideline_for_Designing_Mobile_Applications_for_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issues in Requirements Specification in Malaysia’s Public Sector: An Evidence from a Semi-Structured Survey and a Static Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131132</link>
        <id>10.14569/IJACSA.2022.0131132</id>
        <doi>10.14569/IJACSA.2022.0131132</doi>
        <lastModDate>2022-11-30T10:26:20.3430000+00:00</lastModDate>
        
        <creator>Mohd Firdaus Zahrin</creator>
        
        <creator>Mohd Hafeez Osman</creator>
        
        <creator>Alfian Abdul Halin</creator>
        
        <creator>Sa&#39;adah Hassan</creator>
        
        <creator>Azlena Haron</creator>
        
        <subject>Ambiguity; requirements engineering; requirement smell; requirement specification; semi-structured interview</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Requirement specifications (RS) are essential and fundamental artefacts in system development. The RS is the primary reference in software development and is commonly written in natural language. Poor requirement quality, such as requirement smells, may lead to project delay, cost overrun, and failure. Focusing on requirement quality in the Malaysian government, this paper investigates the methods for preparing Malay RS and personnel competencies to identify the root cause of this issue. We conducted semi-structured interviews that involved 17 respondents from eight critical Malaysian public sector agencies. This study found that ambiguity, incompleteness, and inconsistency are the top three requirement smells that cause project delays and failures. Furthermore, based on our static analysis of initial Malay RS documents collected from various Malaysian public sector agencies, we found that 30% of the RS were ambiguous. Our analysis also found that respondents with more than 10 years of experience could manually identify the smells in RS. Most respondents chose the Public Sector Application Systems Engineering (KRISA) handbook as a guideline for preparing Malay RS documents. Respondents acknowledged a correlation between the quality of RS and project delays and failures.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_32-Issues_in_Requirements_Specification_in_Malaysias_Public_Sector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Modelling IoT Security Systems with Unified Modelling Language (UML)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131130</link>
        <id>10.14569/IJACSA.2022.0131130</id>
        <doi>10.14569/IJACSA.2022.0131130</doi>
        <lastModDate>2022-11-30T10:26:20.3270000+00:00</lastModDate>
        
        <creator>Hind Meziane</creator>
        
        <creator>Noura Ouerdi</creator>
        
        <subject>Internet of things (IoT); IoT systems; IoT security; modelling; Unified Modelling Language (UML); UML extensions; IoT applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The Internet of Things (IoT) has emerged as a technology with applications in different areas. However, security is one of the major challenges that has the potential to stifle the growth of the IoT. In fact, IoT is vulnerable to several cyber attacks and requires challenging techniques to achieve its security. In this paper, UML (Unified Modelling Language) is used to model IoT systems in various views. The purpose of this study is to discuss the need for more modeling in terms of security; for this reason, the paper focuses on modeling the security of IoT systems. The objective is to make a comparison in terms of layers by describing the IoT architecture and presenting its components. In other words, the research question concerns how security is modeled across the IoT layers. There is no standard that takes into account the security of the IoT architecture; there are different proposals for IoT layers, which means that each author has his own vision and proposition. Moreover, there is a lack of modelling languages for IoT security systems. The main interest of this study is to choose the layer on which we should focus. The question then is as follows: “which is the layer whose modeling is relevant?” The obtained results were conclusive and provided the best insight into all the specifications of each layer of the IoT architecture studied.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_30-A_Study_of_Modelling_IoT_Security_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DevOps Enabled Agile: Combining Agile and DevOps Methodologies for Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131131</link>
        <id>10.14569/IJACSA.2022.0131131</id>
        <doi>10.14569/IJACSA.2022.0131131</doi>
        <lastModDate>2022-11-30T10:26:20.3270000+00:00</lastModDate>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Md. Masnun</creator>
        
        <creator>Afia Sultana</creator>
        
        <creator>Anamika Sultana</creator>
        
        <creator>Fahad Ahmed</creator>
        
        <creator>Nasima Begum</creator>
        
        <subject>Agile; DevOps; gaps; collaboration; skill; DevOps Enabled Agile; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The Agile and DevOps software development methodologies have made revolutionary advancements in software engineering. These methodologies vastly improve software quality and also speed up the process of developing software products. However, several limitations have been discovered in the practical implementation of Agile and DevOps, including the lack of collaboration between the development, testing and delivery sectors of different software projects and high skill requirements. This paper presents a solution to bridge the existing gaps between Agile and DevOps methodologies by integrating DevOps principles into Agile to devise a hybrid DevOps Enabled Agile for software development. This study includes the development of a small-scale, experimental pilot project to demonstrate how software development teams can combine the advantages of Agile and DevOps methodologies to bridge the gaps and further improve the speed and quality of the software development process while maintaining feasible skill requirements.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_31-DevOps_Enabled_Agile_Combining_Agile_and_DevOps_Methodologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of ICTs in the Digital Culture for Virtual Learning of University Students Applying an Artificial Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131129</link>
        <id>10.14569/IJACSA.2022.0131129</id>
        <doi>10.14569/IJACSA.2022.0131129</doi>
        <lastModDate>2022-11-30T10:26:20.3130000+00:00</lastModDate>
        
        <creator>Jos&#233; Luis Morales Rocha</creator>
        
        <creator>Mario Aurelio Coyla Zela</creator>
        
        <creator>Nakaday Irazema Vargas Torres</creator>
        
        <creator>Helen Gaite Trujillo</creator>
        
        <subject>Artificial neural network; digital culture; ICT; virtual learning; COVID 19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Artificial neural networks are mathematical models of artificial intelligence that aim to reproduce the behavior of the human brain, with the main objective of constructing systems capable of demonstrating certain intelligent behavior. The purpose of the investigation is to determine the influence of the use of Information and Communication Technologies (ICTs) in digital culture on the learning process of university students in Peru and Bolivia in the context of the COVID-19 health emergency, through the application of artificial neural network models. The investigation follows a quantitative, applied approach at a correlational level with a non-experimental design. Data were collected by means of a digital questionnaire applied to students of two universities. The population is composed of 3980 students of the Universidad Privada Domingo Savio (UPDS, Tarija, Bolivia) and 1506 of the Universidad Nacional de Moquegua (UNAM, Moquegua, Peru). The sample consists of 496 students. The hypothetical-deductive and artificial intelligence methods were used. It was determined that the ability to install software and data protection programs, the use of mobile devices for academic purposes, and the command of specialized software are the most influential factors in the digital culture of the students at UNAM and UPDS.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_29-The_Use_of_ICTs_in_the_Digital_Culture_for_Virtual_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing with Multi-Criteria QoS for Flying Ad-hoc Networks (FANETs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131128</link>
        <id>10.14569/IJACSA.2022.0131128</id>
        <doi>10.14569/IJACSA.2022.0131128</doi>
        <lastModDate>2022-11-30T10:26:20.2970000+00:00</lastModDate>
        
        <creator>Ch Naveen Kumar Reddy</creator>
        
        <creator>Krovi Raja Sekhar</creator>
        
        <subject>Flying ad-hoc Network; multi-criteria QoS; unmanned aerial vehicles; joint optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A Flying Ad-hoc Network (FANET) is a type of ad-hoc network built on a backbone of Unmanned Aerial Vehicles (UAVs). These networks are used to provide communication services in case of natural disasters. Dynamic changes in link quality and mobility degrade the Quality of Service (QoS) of routing in FANETs. This work proposes a Multi-Criteria QoS Optimal Routing (MCQOR) protocol guided by prediction of link quality and the three-dimensional (3D) movement of FANET nodes. The network is clustered based on the predicted movement of nodes. Over the clustered topology, a routing path is selected in a reactive manner with joint optimization of packet delivery ratio, delay, and network overhead. In addition, cross-layer feedback is used to reduce the packet generation rate and congestion in the network. Through simulation analysis, the proposed routing protocol is found to have a 3.8% higher packet delivery ratio, 26% lower delay, and 14% lower network overhead compared to existing works.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_28-Routing_with_Multi_Criteria_QoS_for_Flying_Ad_hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permission and Usage Control for Virtual Tourism using Blockchain-based Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131126</link>
        <id>10.14569/IJACSA.2022.0131126</id>
        <doi>10.14569/IJACSA.2022.0131126</doi>
        <lastModDate>2022-11-30T10:26:20.2800000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib Siddiqui</creator>
        
        <creator>Toqeer Ali Syed</creator>
        
        <creator>Adnan Nadeem</creator>
        
        <creator>Waqas Nawaz</creator>
        
        <creator>Ahmad Alkhodre</creator>
        
        <subject>Virtual tourism; permission control; usage control; access control; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Virtual Tourism (VT) is a booming business with promising prospects in the entertainment and financial industries. Due to travel restrictions, safety concerns, and expensive travelling, the younger generation is showing interest in virtual tourism instead of traditional tourism. However, virtual tourism does not financially benefit the service providers as much as traditional tourism benefits its stakeholders. An online system is essential to provide a central point of access to various tourism sites along with usage, permission, and payment control. In this paper, a secure blockchain-based broker service for users and content providers is proposed, which allows tourism sites to announce their virtual tours and provides accessibility and accountability. Meanwhile, it enables users to register, subscribe, access, and be billed according to their usage. The permission control module ensures authentication and authorization, while the usage control provides accountability to the predefined service level agreement. The transactions are stored on the blockchain to ensure the integrity of data, and smart contracts are used to ensure automatic usage and permission control. An implementation on Hyperledger Fabric is provided as a proof of concept, with performance measurements as a case study.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_26-Permission_and_Usage_Control_for_Virtual_Tourism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast Multicore-based Window Entropy Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131127</link>
        <id>10.14569/IJACSA.2022.0131127</id>
        <doi>10.14569/IJACSA.2022.0131127</doi>
        <lastModDate>2022-11-30T10:26:20.2800000+00:00</lastModDate>
        
        <creator>Suha S.A. Shokr</creator>
        
        <creator>Hazem M. Bahig</creator>
        
        <subject>Entropy; window method; malware analysis; parallel algorithm; multicore</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Malware analysis is a major challenge in cybersecurity due to the regular appearance of new malware and its effect on cyberspace. The existing tools for malware analysis enable reverse engineering to understand the origin, purpose, attributes, and potential consequences of malicious software. The entropy method is one of the techniques used to analyze and detect malware; entropy is defined as a measure of the information encoded in a series of values based upon the probability of those values appearing. The window entropy algorithm is one of the methods that can be applied to calculate entropy values effectively. However, it requires a significant amount of time when the file is large. In this paper, we address this problem in two ways. The first improvement is determining the best window size, which minimizes the running time of the window entropy algorithm. The second improvement is parallelizing the window entropy algorithm on a multicore system. Experimental studies using artificial data show that the improved sequential algorithm reduces the running time of the window entropy method by 79% on average. Also, the proposed parallel algorithm outperforms the modified sequential algorithm by 77% and achieves super-linear speedup.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_27-A_Fast_Multicore_based_Window_Entropy_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity in Deep Learning Techniques: Detecting Network Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131125</link>
        <id>10.14569/IJACSA.2022.0131125</id>
        <doi>10.14569/IJACSA.2022.0131125</doi>
        <lastModDate>2022-11-30T10:26:20.2670000+00:00</lastModDate>
        
        <creator>Shatha Fawaz Ghazal</creator>
        
        <creator>Salameh A. Mjlae</creator>
        
        <subject>HTTP DATASET CSIC 2010; deep learning; cybersecurity attacks; detection attacks; network attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Deep learning techniques have been found to be useful in a variety of fields. Cybersecurity is one such area. In cybersecurity, both machine learning and deep learning classification algorithms can be used to monitor and prevent network attacks. Additionally, they can be utilized to identify system irregularities that may signal an ongoing attack, helping cybersecurity experts make systems safer. Eleven classification techniques were employed to examine the popular HTTP DATASET CSIC 2010: machine learning algorithms (Decision Tree, Random Forest, Gradient Boosting, XGBoost, AdaBoost, Multilayer Perceptrons, and Voting), one statistical technique (K-Means), and three deep learning algorithms, namely Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and LSTM plus CNN. To evaluate the performance of these models, the commonly used metrics of precision, accuracy, F1-score, and recall were applied. The results showed that, when comparing the three deep learning algorithms by the aforementioned metrics, the LSTM with CNN produced the best performance outcomes in this paper. These findings show that using this algorithm allows multiple attacks to be detected and defends against external and internal threats to the network.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_25-Cybersecurity_in_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Estimation Method with Mel-frequency Spectrum, Voice Power Level and Pitch Frequency of Human Voices through CNN Learning Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131124</link>
        <id>10.14569/IJACSA.2022.0131124</id>
        <doi>10.14569/IJACSA.2022.0131124</doi>
        <lastModDate>2022-11-30T10:26:20.2500000+00:00</lastModDate>
        
        <creator>Taiga Haruta</creator>
        
        <creator>Mariko Oda</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>e-Learning; emotion estimation; Mel-frequency spectrum; fundamental frequency (pitch frequency); sound pressure level (voice power level)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>An emotion estimation method with Mel-frequency spectrum, voice power level and pitch frequency of human voices through CNN (Convolutional Neural Network) learning processes is proposed. Usually, frequency spectra are used for emotion estimation. The proposed method utilizes not only the Mel-frequency spectrum but also the sound pressure level (voice power level) and pitch frequency to improve emotion estimation accuracy. These components are used through CNN learning processes with training samples provided by Keio University (emotional speech corpus), together with our own training samples collected by our students in emotion estimation processes. In these processes, the target emotion is divided into two categories, confident and non-confident. Through experiments, it is found that the proposed method outperforms the traditional method, which uses only the Mel-frequency spectrum, by 15%.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_24-Emotion_Estimation_Method_with_Mel_frequency_Spectrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision based 3D Object Detection using Deep Learning: Methods with Challenges and Applications towards Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131123</link>
        <id>10.14569/IJACSA.2022.0131123</id>
        <doi>10.14569/IJACSA.2022.0131123</doi>
        <lastModDate>2022-11-30T10:26:20.2330000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <subject>3D object detection; deep learning; vision; depth map; point cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>For autonomous intelligent systems, 3D object detection can act as a basis for decision making by providing information such as an object’s size, position, and direction to perceive the surrounding environment. Successful applications using robust 3D object detection can hugely impact the robotics industry and the augmented and virtual reality sectors in the context of the Fourth Industrial Revolution (IR4.0). Recently, deep learning has become a potential approach for 3D object detection, learning powerful semantic object features for various tasks, i.e., depth map construction, segmentation, and classification. As a result, exponential growth in potential methods has been observed in recent years. Although a good number of efforts have been made to address 3D object detection, an in-depth and critical review from different viewpoints is still lacking. Consequently, comparison among various methods remains challenging, which is important when selecting a method for a particular application. Given the strong heterogeneity of previous methods, this research aims to analyze and systematize related existing research based on challenges and methodologies from different viewpoints, guiding future development and evaluation by bridging the gaps across various sensors, i.e., cameras, LiDAR, and Pseudo-LiDAR. First, this research presents a critical analysis of existing sophisticated methods by identifying six significant key areas based on current scenarios, challenges, and significant problems to be addressed. Next, this research presents a strict, comprehensive analysis for validating 3D object detection methods based on eight authoritative 3D detection benchmark datasets, according to dataset size, and eight validation metrics. Finally, valuable insights into existing challenges are presented for future directions. Overall, the extensive review proposed in this research can contribute significantly to further investigation in multimodal 3D object detection.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_23-Vision_based_3D_Object_Detection_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Devices Supporting People with Special Needs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131122</link>
        <id>10.14569/IJACSA.2022.0131122</id>
        <doi>10.14569/IJACSA.2022.0131122</doi>
        <lastModDate>2022-11-30T10:26:20.2200000+00:00</lastModDate>
        
        <creator>Tihomir Stefanov</creator>
        
        <creator>Silviya Varbanova</creator>
        
        <creator>Milena Stefanova</creator>
        
        <subject>Mobile devices; mobile operating systems; Android; iOS; special needs; visually impaired; hearing impaired; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Over the years, various devices designed for people with special needs have been used for a time and then replaced with more modern devices to make everyday life easier. The development of mobile devices is especially advanced today, and they facilitate many everyday activities, not only for people with special needs. The purpose of this paper is to present some modern mobile devices with an analysis of their operating systems, functionalities, applications, and design. Based on the research, their usability for sighted users as well as visually and hearing-impaired users is described. Attention is paid to the preferences users form when using specialized applications developed for mobile devices. Based on a survey of specific target user groups, the paper provides summary results to support the thesis on the importance of the facilities offered by modern mobile devices.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_22-Mobile_Devices_Supporting_People_with_Special_Needs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Visuospatial Ability on E-learning for Pupils of Physics Option in Scientific Common Trunk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131121</link>
        <id>10.14569/IJACSA.2022.0131121</id>
        <doi>10.14569/IJACSA.2022.0131121</doi>
        <lastModDate>2022-11-30T10:26:20.2200000+00:00</lastModDate>
        
        <creator>Khalid Marnoufi</creator>
        
        <creator>Imane Ghazlane</creator>
        
        <creator>Fatima Zahra Soubhi</creator>
        
        <creator>Bouzekri Touri</creator>
        
        <creator>Elhassan Aamro</creator>
        
        <subject>Visuospatial; e-learning; physics; intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>This study aims to reveal the existence of a relationship between the visuospatial capacity of pupils with a specialization in physics and high educational performance, and their capacity for e-learning. To carry out the study, we used the Wechsler intelligence test of cognitive ability. Our sample is composed of 204 adolescents, with an average age of 15 years, 12 months, and 11 days and a standard deviation of 0 years, 1 month, and 19 days. The selection criterion was based on the general results and specifically the physics science mark. The results of the study showed the existence of a significant relationship between visuospatial ability and scientific thinking, and statistically significant homogeneity attributed to specialization in visuospatial ability and creative thinking.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_21-Effect_of_Visuospatial_Ability_on_e_Learning_for_Pupils_of_Physics_Option.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Why do Women Volunteer More than Men? Gender and its Role in Voluntary Citizen Reporting Applications Usage and Adoption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131120</link>
        <id>10.14569/IJACSA.2022.0131120</id>
        <doi>10.14569/IJACSA.2022.0131120</doi>
        <lastModDate>2022-11-30T10:26:20.2030000+00:00</lastModDate>
        
        <creator>Muna M. Alhammad</creator>
        
        <subject>Self-determination theory; gender role theory; social role theory; motivation; amotivation; gender diversity; social responsibility; citizens reporting application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>By researching why citizens are eager to participate in citizen reporting applications, this study contributes to the understanding of citizen-government interaction in open government. Self-determination theory, gender role theory, and social role theory were employed to evaluate the impact of various motivational factors on individual behavioural intentions to participate in citizen reporting applications, as well as the role of gender in moderating their effects. The model was quantitatively tested by collecting 499 responses through a questionnaire from citizens who had previously utilized citizen reporting applications. The model was validated using partial least squares. The findings reveal that social responsibility, output quality, self-concern, and revenge are the motivational antecedents that have the most influence on individuals&#39; motivation to participate in citizen reporting applications, together explaining 65.9% of the variance in behavioural intention. Social responsibility is the most significant driver when compared to the others. The study also revealed that gender differences moderate the impact of social responsibility and revenge on user involvement in citizen reporting apps. The current study adds to the existing literature on citizen reporting adoption and usage by examining the motivational factors that affect citizens&#39; engagement across multiple contexts and evaluating the effect of gender in moderating the influence of social responsibility and revenge. Government institutions need to consider gender differences when designing their citizen reporting applications and the associated marketing campaigns.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_20-Why_do_Women_Volunteer_More_than_Men.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Sentiment Analysis Algorithm for Comments on Online Ideological and Political Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131119</link>
        <id>10.14569/IJACSA.2022.0131119</id>
        <doi>10.14569/IJACSA.2022.0131119</doi>
        <lastModDate>2022-11-30T10:26:20.1870000+00:00</lastModDate>
        
        <creator>Xiang Zhang</creator>
        
        <creator>Xiaobo Qin</creator>
        
        <subject>Online courses; comment; sentiment tendency; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The online course teaching platform provides a more accessible and open teaching environment for teachers and students. The sentiment tendency reflected in online course comments becomes an essential basis for teachers to adjust a course and for students to choose one. This paper combined two deep learning algorithms, a convolutional neural network (CNN) and a long short-term memory (LSTM) network, to identify and analyze the emotional tendency of comments on online ideological and political courses. The CNN+LSTM-based sentiment analysis algorithm was simulated in MATLAB software. The influence of the text vectorization method on the recognition performance of the CNN+LSTM algorithm was tested; the algorithm was then compared with support vector machine (SVM) and LSTM algorithms, and the comments on online ideological and political courses were analyzed. The results showed that the recognition performance of the CNN+LSTM-based sentiment analysis algorithm adopting the Word2vec text vectorization method was better than that adopting the one-hot text vectorization method; in recognizing the sentiment of comment texts, the CNN+LSTM algorithm performed best, the LSTM algorithm second, and the SVM algorithm worst; 86.36% of the selected comments on ideological and political courses contained positive sentiment, and 13.64% contained negative sentiment. Relevant suggestions were given based on the negative comments.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_19-Research_on_Sentiment_Analysis_Algorithm_for_Comments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Approaches in Arabic Chatbot for Open and Closed Domain Dialog</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131117</link>
        <id>10.14569/IJACSA.2022.0131117</id>
        <doi>10.14569/IJACSA.2022.0131117</doi>
        <lastModDate>2022-11-30T10:26:20.1730000+00:00</lastModDate>
        
        <creator>Abraheem Mohammed Sulayman Alsubayhay</creator>
        
        <creator>Md Sah Hj Salam</creator>
        
        <creator>Farhan Bin Mohamed</creator>
        
        <subject>Arabic chatbot; artificial intelligence; arabchat; human-machine interaction; conversational agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A chatbot is a computer program which facilitates communication between an artificial agent and humans. Unlike many other languages, Arabic has been used in relatively few Natural Language Processing works, owing to the lack of corpora and the complexity of a language whose many dialects extend across countries around the world. To date, little research has been conducted on Arabic chatbots. This study presents a review of the existing literature on Arabic chatbot studies to determine knowledge gaps and suggests areas that require additional study and research. Additionally, this research observes that all relevant research relies on pattern matching or AIML techniques. The search process was conducted utilizing keywords such as ‘utterance’, ‘chatbot’, ‘ArabChat’, ‘chat agent’, ‘dialogue’, ‘interactive agent’, ‘chatterbot’, ‘conversational robot’, ‘artificial conversational’, and ‘conversational agent’. Further, the study examines the existing studies and the various approaches in open- and closed-domain dialog systems and how they work in the case of Arabic chatbots. The study identified a severe lack of studies on Arabic chatbots and observed that the majority of those studies were retrieval-based or rule-based.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_17-A_Review_on_Approaches_in_Arabic_Chatbot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Emotion Detection using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131118</link>
        <id>10.14569/IJACSA.2022.0131118</id>
        <doi>10.14569/IJACSA.2022.0131118</doi>
        <lastModDate>2022-11-30T10:26:20.1730000+00:00</lastModDate>
        
        <creator>Pooja Bagane</creator>
        
        <creator>Shaasvata Vishal</creator>
        
        <creator>Rohit Raj</creator>
        
        <creator>Tanushree Ganorkar</creator>
        
        <creator>Riya</creator>
        
        <subject>Feature extraction; convolutional neural network; resnet; emotion recognition; emotion detection; facial recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Non-verbal communication strategies, e.g., facial expressions, eye movement, and gestures, are utilized in numerous human-computer interaction applications; among them, facial emotion is most widely used, as it conveys the emotional states and feelings of people. In machine learning algorithms, several significant extracted features are used for modeling the face. As a result, they do not achieve a high accuracy rate of recognition because the features depend on prior knowledge. This work developed a Convolutional Neural Network (CNN) for recognition of facial emotion expression. Facial expressions play an essential part in nonverbal communication, appearing as a result of the internal feelings of a person reflected on the face. This paper utilized the algorithm to distinguish features of a face such as the eyes, nose, etc., and identified feelings from the mouth and eyes. This paper proposes an effective method for distinguishing anger, contempt, disgust, fear, happiness, sadness, and surprise, the seven emotions, from frontal facial images of people. The final result gives an accuracy of 63% on the CNN model and 85% on the ResNet model.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_18-Facial_Emotion_Detection_using_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Pre-processing using Motion Stabilization and Key Frame Extraction with Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131116</link>
        <id>10.14569/IJACSA.2022.0131116</id>
        <doi>10.14569/IJACSA.2022.0131116</doi>
        <lastModDate>2022-11-30T10:26:20.1570000+00:00</lastModDate>
        
        <creator>Kande Archana</creator>
        
        <creator>V Kamakshi Prasad</creator>
        
        <subject>Information loss preventive; mean angle measure; key frame extraction; moving average; dynamic thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Video information processing is one of the most important application areas in research, with various pre-processing issues to solve. Pre-processing issues such as unstable video frame rates or capture angles, noisy data, and the large size of video data prevent researchers from applying information retrieval or categorization algorithms. The video data itself plays a vital role in various areas. This work aims to solve motion stabilization, noise reduction, and key frame extraction without losing information and in reduced time. The work results in a 66% reduction in extracted key frames and nearly 6 ns of time for complete video data processing.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_16-Object_Pre_processing_using_Motion_Stabilization_and_Key_Frame_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Feature Extraction Method of Power Customer’s Portrait based on Knowledge Map and Label Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131114</link>
        <id>10.14569/IJACSA.2022.0131114</id>
        <doi>10.14569/IJACSA.2022.0131114</doi>
        <lastModDate>2022-11-30T10:26:20.1400000+00:00</lastModDate>
        
        <creator>Wentao Liu</creator>
        
        <creator>Liang Ji</creator>
        
        <subject>Knowledge map; label extraction; power customer’s portrait; multi-feature extraction; natural language processing; feature visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>In order to realize the visualization of power customer characteristics and better provide power services to power customers, a multi-feature extraction method for the power customer’s portrait based on a knowledge map and label extraction is studied. A power customer portrait construction model is designed: the knowledge map construction step collects power-customer-related data from the power system official website and database, then cleans and converts the data; in the multi-feature analysis step, natural language processing technology is used to analyze the characteristics of power customers through Chinese word segmentation, vocabulary weight determination, and emotion calculation; based on the feature analysis results, portrait labels are extracted to generate the power customer’s portrait. The power customer’s portrait is used to realize applications such as power customer feature visualization, power customer recommendation, and power customer evaluation. The experimental results show that this method can effectively construct the knowledge map of power customers, accurately extract their characteristics, generate labels, and realize the visualization of power customer portraits.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_14-Multi_Feature_Extraction_Method_of_Power_Customers_Portrait.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ransomware Detection using Machine and Deep Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131112</link>
        <id>10.14569/IJACSA.2022.0131112</id>
        <doi>10.14569/IJACSA.2022.0131112</doi>
        <lastModDate>2022-11-30T10:26:20.1230000+00:00</lastModDate>
        
        <creator>Ramadhan A. M. Alsaidi</creator>
        
        <creator>Wael M.S. Yafooz</creator>
        
        <creator>Hashem Alolofi</creator>
        
        <creator>Ghilan Al-Madhagy Taufiq-Hail</creator>
        
        <creator>Abdel-Hamid M. Emara</creator>
        
        <creator>Ahmed Abdel-Wahab</creator>
        
        <subject>Machine learning; ransomware; URL classification; malicious URLs; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Due to the advancement of and easy access to computer and internet technology, network security has become vulnerable to hacker threats. Ransomware is malware frequently used in cyber-attacks to trick victim users into exposing sensitive and private information to attackers. Consequently, victims may no longer be able to access their data until they pay a ransom for the stolen files. Different methods have been introduced to overcome these issues, but it is evident through an extensive literature review that lexical features alone are not always sufficient to detect categories of malicious URLs. This paper proposes a model to detect ransomware using machine and deep learning approaches. The model introduces a novel classification feature based on whether a URL starts with “https://www.”, a feature not considered in earlier papers on malicious URL identification. In addition, this paper introduces a novel dataset consisting of 405,836 records. Two main experiments were carried out on the proposed dataset, utilizing malicious URL features to detect ransomware. Moreover, to enhance and optimize experimental accuracy, various hyper-parameters were tested on the same dataset to define the optimal factors for each method. According to the comparative and experimental results of the applied classification techniques, the proposed model achieved the best performance, with a 99.8% accuracy rate for detecting malicious URLs using machine and deep learning.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_12-Ransomware_Detection_Using_Machine_and_Deep_Learning_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Online Teaching in the Covid Period using Learning Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131113</link>
        <id>10.14569/IJACSA.2022.0131113</id>
        <doi>10.14569/IJACSA.2022.0131113</doi>
        <lastModDate>2022-11-30T10:26:20.1230000+00:00</lastModDate>
        
        <creator>Jolana Gubalova</creator>
        
        <subject>Distance online learning; learning management system; moodle; collaboration platform microsoft teams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The article compares education at the Faculty of Economics, Matej Bel University, before and during the coronavirus pandemic. At the same time, it tries to outline what education will look like after this situation is over. It examines how the pandemic affected the education of economists and to what extent the changes it brought will be preserved in the future. A comparison of face-to-face and distance learning in 2019 and 2020 was made: teaching in 2019 was carried out in the &quot;classic&quot;, face-to-face manner, whereas in 2020, after the closure of schools in March 2020, teaching at Matej Bel University was carried out only via the distance online method. To get the best possible view of the researched topic, several research methods were used: examination of LMS Moodle with various Learning Analytics tools, and questionnaire research. The results showed that face-to-face education will no longer be the same after the Covid pandemic, because distance online education will also cause changes in face-to-face education in the post-pandemic period. The questionnaire research showed that up to 78% of part-time students and 61% of full-time students would like their study program to use elements of distance education in full-time study as well. Since this is a large group of students, their opinion will be considered in the future when fully returning to face-to-face teaching.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_13-Evaluation_of_Online_Teaching_in_the_Covid_Period.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cedarwood Quality Classification using SVM Classifier and Convolutional Neural Network (CNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131111</link>
        <id>10.14569/IJACSA.2022.0131111</id>
        <doi>10.14569/IJACSA.2022.0131111</doi>
        <lastModDate>2022-11-30T10:26:20.1100000+00:00</lastModDate>
        
        <creator>Muhammad Ary Murti</creator>
        
        <creator>Casi Setianingsih</creator>
        
        <creator>Eka Kusumawardhani</creator>
        
        <creator>Renal Farhan</creator>
        
        <subject>Cedarwood classification; convolutional neural network (CNN); HoG feature; SVM classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Cedarwood is one of the most sought-after materials since it can be used to create a wide variety of household appliances. Besides its unique aroma, the product&#39;s quality is its most important selling attribute. Fiber patterns allow a qualitative categorization of this wood. Traditionally, workers in the wood-processing business have relied solely on their eyesight to sort materials into categories. As a result, there are discrepancies in precision and efficiency, which hurt the reputation of the regional wood sector. Machine learning offers an answer to this issue. In this study, we compare the performance of two cedarwood quality classification systems, each using a different machine learning method: Support Vector Machine (SVM) and Convolutional Neural Network (CNN). Each system is given images captured with a Logitech Brio 4K equipped with a joystick and ultrasonic sensors, labeled as belonging to one of five cedar classes (A, B, C, D, or E). In the first system, the Histogram of Oriented Gradients (HOG) is used to learn the wood&#39;s pattern and texture, and an SVM performs the classification; the systems are compared to find the best accuracy and computation time. The first system&#39;s experiment achieves 90 percent accuracy with a computation time of 1.40 seconds. For the second, we use a Convolutional Neural Network (CNN), a deep learning technique, to classify the cedarwood; feature extraction occurs in the convolution, activation, and pooling layers. Experimental results demonstrated a considerable enhancement, with an accuracy of 97% and a prediction speed of 0.56 seconds.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_11-Cedarwood_Quality_Classification_using_SVM_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SPAMID-PAIR: A Novel Indonesian Post–Comment Pairs Dataset Containing Emoji</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131110</link>
        <id>10.14569/IJACSA.2022.0131110</id>
        <doi>10.14569/IJACSA.2022.0131110</doi>
        <lastModDate>2022-11-30T10:26:20.0930000+00:00</lastModDate>
        
        <creator>Antonius Rachmat Chrismanto</creator>
        
        <creator>Anny Kartika Sari</creator>
        
        <creator>Yohanes Suyanto</creator>
        
        <subject>Dataset; natural language processing; spam detection; spamid-pair; post-comment pairs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The detection of spam content is an important task, especially in social media, and has been a topic of continual study in the Natural Language Processing (NLP) area over the last few years. However, limited data sets are available for this research topic because most researchers collect the data themselves and keep it private. Moreover, most available data sets provide only the post content without the comment content. This is a limitation because the post-comment pair is needed to determine the context of a comment on a particular post, and that context may contribute to the decision of whether a comment is spam. The scarcity of non-English data sets, including Indonesian, is another issue. To solve these problems, the authors introduce SPAMID-PAIR, a novel post-comment pair data set in Indonesian collected from Instagram (IG). It was collected from 13 selected Indonesian actress/actor accounts, each of which has more than 15 million followers, and contains 72874 pairs of data. The data set has been annotated with spam/non-spam labels in Unicode (UTF-8) text format, and it includes many emojis/emoticons from IG. To establish baseline performance, the data was tested with several machine learning methods under several scenarios and achieved good results. This dataset is intended for replicable experiments in spam content detection on social media and other tasks in the NLP area.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_10-SPAMID_PAIR_A_Novel_Indonesian_Post_Comment_Pairs_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobility Management Algorithm in the Internet of Things (IoT) for Smart Objects based on Software-Defined Networking (SDN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131109</link>
        <id>10.14569/IJACSA.2022.0131109</id>
        <doi>10.14569/IJACSA.2022.0131109</doi>
        <lastModDate>2022-11-30T10:26:20.0930000+00:00</lastModDate>
        
        <creator>Lili Pei</creator>
        
        <subject>Internet of things (IoT); mobility management; software-defined networking (SDN); CoAP protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>In recent decades, technological advancements have significantly improved people&#39;s living standards and given rise to the rapid development of intelligent technologies. The Internet of Things (IoT) is one of the most important research topics worldwide. However, IoT is often comprised of unreliable wireless networks, with hundreds of mobile sensors interconnected. A traditional sensor network typically consists of fixed sensor nodes periodically transmitting data to a pre-determined router. Current applications, however, require sensing devices to be mobile between networks. We need mobility management protocols to manage these mobile nodes to provide uninterrupted service to users. The interactions between the mobile nodes are affected by the loss of signaling messages, increased latency, signaling costs, and energy consumption because of the characteristics of these networks, including constrained memory, processing power, and limited energy source. Hence, developing an algorithm for managing smart devices&#39; mobility on the Internet is necessary. This study proposes an efficient and effective distributed mechanism to manage mobility in IoT devices. Using Software-Defined Networking (SDN) based on the CoAP protocol, the proposed method is intended not only to reduce the signaling cost of messages but also to make mobility management more reliable and simpler.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_9-A_Mobility_Management_Algorithm_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protein Secondary Structure Prediction based on CNN and Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131108</link>
        <id>10.14569/IJACSA.2022.0131108</id>
        <doi>10.14569/IJACSA.2022.0131108</doi>
        <lastModDate>2022-11-30T10:26:20.0770000+00:00</lastModDate>
        
        <creator>Romana Rahman Ema</creator>
        
        <creator>Mt. Akhi Khatun</creator>
        
        <creator>Md. Nasim Adnan</creator>
        
        <creator>Sk. Shalauddin Kabir</creator>
        
        <creator>Syed Md. Galib</creator>
        
        <creator>Md. Alam Hossain</creator>
        
        <subject>Protein Secondary Structure Prediction (PSSP); Support Vector Machine (SVM); Naive Bayes (NB); Random Forest (RF); Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>One of the most important topics in computational biology is protein secondary structure prediction. Primary, secondary, tertiary, and quaternary structure are the four levels of complexity that can be used to characterize the entire structure of a protein, which is fully determined by its amino acid sequence. The local configuration of a protein&#39;s polypeptide backbone is referred to as its secondary structure. In this paper, three prediction algorithms are proposed to predict protein secondary structure based on machine learning. These prediction methods are built on the model structure of convolutional neural networks (CNN), with the Rectified Linear Unit (ReLU) as the activation function. The 2D CNN is combined with machine learning algorithms, including Support Vector Machine, Naive Bayes, and Random Forest. The SVM is used to correctly classify unseen data; Naïve Bayes (NB) and Random Forest (RF) are also applied because they can solve not only classification problems but also regression problems. The 2D CNN and hybrids of 2D CNN-SVM, CNN-RF, and CNN-NB are proposed in this experiment. These methods are implemented on the RS126, 25PDB, and CB513 datasets, and the Q3 prediction accuracies are compared and improved across these datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_8-Protein_Secondary_Structure_Prediction_based_on_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IRemember: Memorable CAPTCHA Method for Sighted and Visually Impaired Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131107</link>
        <id>10.14569/IJACSA.2022.0131107</id>
        <doi>10.14569/IJACSA.2022.0131107</doi>
        <lastModDate>2022-11-30T10:26:20.0630000+00:00</lastModDate>
        
        <creator>Mrim Alnfiai</creator>
        
        <creator>Sahar Altalhi</creator>
        
        <creator>Duaa Alawfi</creator>
        
        <subject>CAPTCHA; blind users; visually impaired users; memorability; accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A CAPTCHA is used to automatically differentiate between human users and automated software, to prevent bots from accessing websites without authorization. Most proposed CAPTCHAs are not accessible to visually impaired users because of the difficulty of memorizing the CAPTCHA’s numerical digits: recalling six random spoken digits is a difficult task for any human, and visually impaired users must typically play the audio several times to memorize the spoken digits in the correct order. The authors reviewed existing CAPTCHAs for visually impaired users and concluded that the high cognitive load of the long digit challenges designed for sighted users makes them prone to response errors. Thus, the authors propose a novel method that improves current audio CAPTCHAs by enhancing the display of the challenge and improving the memorability of its phraseology. The proposed CAPTCHA presents short common phrases, such as “piece of cake.” After hearing or seeing a phrase, users are required to type the first letter of each word, such as POC for “piece of cake.” A study of 11 visually impaired users found that the memorability and success rate of the IRemember CAPTCHA was 82.72%, compared to only 48.18% for the audio CAPTCHA. It also demonstrated higher memorability and lower workload than the traditional audio method. This research indicates that using common knowledge and experience in the design process of a CAPTCHA method for these users can enhance performance and minimize workload and, hence, error rates.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_7-IRemember_Memorable_CAPTCHA_Method_for_Sighted_and_Visually_Impaired.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain based Framework for Efficient Student Performance Tracking (BloSPer)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131105</link>
        <id>10.14569/IJACSA.2022.0131105</id>
        <doi>10.14569/IJACSA.2022.0131105</doi>
        <lastModDate>2022-11-30T10:26:20.0470000+00:00</lastModDate>
        
        <creator>Aisha Zahid Junejo</creator>
        
        <creator>Anton Dziatkovskii</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Uladzimir Hryneuski</creator>
        
        <creator>Ekaterina Ovechkina</creator>
        
        <subject>Blockchain; education; performance tracking; trackability; student data analytics; student monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>To maintain a sustainable economy, the government of Malaysia is working to improve the standards of education in higher education institutes. According to reports, around 32% of enrolled students in public universities of Malaysia are unable to graduate on time, for unknown reasons. To ensure more students graduate on time with a high quality of education, continuous monitoring of students is essential. Continual tracking allows both the student and the educator to identify weak performers at an early stage. Tracking student performance manually is challenging, but with advancements in information technology, keeping track of student performance has become relatively easy. Therefore, the fundamental aim of this paper is to present a novel blockchain framework for record keeping and student performance tracking, which we name BloSPer (Blockchain Student Performance Tracking System). BloSPer has an edge over existing systems, which face the problems of a single point of failure and unreliable data. The proposed framework enables students and educators to track student performance in a more convenient and transparent manner, making it simpler to analyze the reasons for a student’s poor performance. Moreover, the data gathered through the system is more reliable and suitable for data analytics because of the tamper resistance provided by blockchain. This will result in more knowledgeable decisions by institutions about improving the performance of each individual candidate.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_5-Blockchain_based_Framework_for_Efficient_Student_Performance_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Mobile Technology Solution on Self-Management in Patients with Hypertension: Advantages and Barriers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131106</link>
        <id>10.14569/IJACSA.2022.0131106</id>
        <doi>10.14569/IJACSA.2022.0131106</doi>
        <lastModDate>2022-11-30T10:26:20.0470000+00:00</lastModDate>
        
        <creator>Adel Alzahrani</creator>
        
        <creator>Valerie Gay</creator>
        
        <creator>Ryan Alturki</creator>
        
        <subject>Impact; self-management; mHealth; hypertension</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Hypertension is a major risk factor for cardiovascular morbidity and mortality. It is a condition that greatly increases the risk of heart, liver, and other diseases. Since hypertension is one of the biggest global public health issues, patients require more interventions to manage their blood pressure. The widespread use of mobile phones and applications with medication features has turned the smartphone into a medical device, and these tools are helpful to physicians in the treatment of hypertension. Mobile health applications are currently used to manage hypertension; however, there is a lack of information regarding their efficacy. Smartphones and their applications are evolving quickly, hence the rise in innovation of mobile health applications. Mobile-based applications are helpful in patient education and reinforce behaviour through constant reminders and medication and appointment alarms. The main objective of this study is to determine the impact of mobile health applications on self-management in patients with hypertension, along with their advantages and disadvantages. We used publications from 2015 and later as a time frame and searched the first five pages of Google Scholar, JSTOR, Hindawi, PubMed, and ResearchGate, grouping all associated terms that might turn up articles on this subject. We identified 213 database records in total; 117 duplicates were identified and removed, leaving 96 screened records. Thirty-one reports were excluded based on abstract and title, and 65 full-text articles were assessed for final inclusion. Of these, 51 articles were excluded, and 14 studies were included in the qualitative analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_6-Impact_of_Mobile_Technology_Solution_on_Self_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Incremental Ensemble Learning Techniques to Design Portable Intrusion Detection for Computationally Constraint Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131104</link>
        <id>10.14569/IJACSA.2022.0131104</id>
        <doi>10.14569/IJACSA.2022.0131104</doi>
        <lastModDate>2022-11-30T10:26:20.0300000+00:00</lastModDate>
        
        <creator>Promise R. Agbedanu</creator>
        
        <creator>Richard Musabe</creator>
        
        <creator>James Rwigema</creator>
        
        <creator>Ignace Gatare</creator>
        
        <subject>Cyber-security; ensemble machine learning; incremental machine learning; Internet of Things; intrusion detection; online machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Computers have evolved over the years, and as the evolution continues, we have been ushered into an era in which high-speed internet makes it possible for devices in our homes, hospitals, energy systems, and industry to communicate with each other. This era is known as the Internet of Things (IoT). IoT has several benefits in a country’s health, energy, transportation, and agriculture sectors. These enormous benefits, coupled with the computational constraints of IoT devices, make it challenging to deploy enhanced security protocols on them, making IoT devices a target of cyber-attacks. One approach that has been used in traditional computing over the years to fight cyber-attacks is the Intrusion Detection System (IDS). However, it is practically impossible to deploy IDS meant for traditional computers in IoT environments because of the computational constraints of these devices. This study proposes a lightweight IDS for IoT devices using an incremental ensemble learning technique. We used Gaussian Naive Bayes and Hoeffding trees to build our incremental ensemble model, which was then evaluated on the TON IoT dataset. Our proposed model was compared with other state-of-the-art methods evaluated on the same dataset. The experimental results show that the proposed model achieved an average accuracy of 99.98%. We also evaluated the memory consumption of our model, which achieved lightweight status with a maximum memory consumption of 650.11KB and a minimum of 122.38KB.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_4-Using_Incremental_Ensemble_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Micro Vascular and Macro Vascular Complications in Type-2 Diabetic Patients using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131103</link>
        <id>10.14569/IJACSA.2022.0131103</id>
        <doi>10.14569/IJACSA.2022.0131103</doi>
        <lastModDate>2022-11-30T10:26:20.0170000+00:00</lastModDate>
        
        <creator>Bandi Vamsi</creator>
        
        <creator>Ali Al Bataineh</creator>
        
        <creator>Bhanu Prakash Doppala</creator>
        
        <subject>Diabetes mellitus; micro vascular; macro vascular; machine learning; type-2 complications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>A collection of metabolic conditions known as diabetes mellitus is defined by hyperglycemia brought on by deficiencies in insulin secretion, insulin action, or both. In terms of mortality rate, type-2 diabetes is 20 times higher than type-1. Based on earlier research, there is still scope to identify different risk levels of type-2 diabetes complications. To achieve this, we propose T2DC, a machine learning-based prediction system using a decision tree as a base estimator with random forest, to identify the severity of T2-DM complications at an early stage. Our proposed model achieved accuracies of 95.43%, 94.62%, 96.25%, 97.55%, and 97.83% for Nephropathy, Neuropathy, Retinopathy, Cardiovascular, and Peripheral Vascular complications in T2-DM patients. The proposed model has the potential to improve clinical outcomes by promoting the delivery of early and personalized care to T2-DM patients.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_3-Prediction_of_Micro_Vascular_and_Macro_Vascular_Complications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Character Level Segmentation and Recognition using CNN Followed Random Forest Classifier for NPR System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131102</link>
        <id>10.14569/IJACSA.2022.0131102</id>
        <doi>10.14569/IJACSA.2022.0131102</doi>
        <lastModDate>2022-11-30T10:26:20.0000000+00:00</lastModDate>
        
        <creator>U. Ganesh Naidu</creator>
        
        <creator>R. Thiruvengatanadhan</creator>
        
        <creator>S. Narayana</creator>
        
        <creator>P. Dhanalakshmi</creator>
        
        <subject>Character segmentation; convolutional neural networks; bilateral filter; character recognition; SVM classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>The number plate recognition system must be able to identify the plate quickly and accurately, in both low-light and noisy conditions, within the specified time limit. This study proposes automated authentication, which would reduce security and individual workload by eliminating the requirement for manual credential verification. The four processes that follow the acquisition of an image are pre-processing, number plate localization, character segmentation, and character identification. Since affirmation and enrolment are manual processes, human error is a distinct possibility: personnel at the selected location may find it difficult and time-consuming to register and record information manually, and the hard-copy format makes the information difficult to share. Character segmentation breaks down the number plate region into individual characters, and character recognition detects the optical characters. Our approach was tested using genuine license plate images under various environmental circumstances and achieved an overall recognition accuracy of 91.54% with a single license plate in an average duration of 2.63 seconds.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_2-Character_Level_Segmentation_and_Recognition_using_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the User Experience of Mind Map Software: A Comparative Study based on Eye Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131101</link>
        <id>10.14569/IJACSA.2022.0131101</id>
        <doi>10.14569/IJACSA.2022.0131101</doi>
        <lastModDate>2022-11-30T10:26:19.9830000+00:00</lastModDate>
        
        <creator>Junfeng Wang</creator>
        
        <creator>Xi Wang</creator>
        
        <creator>Jingjing Lu</creator>
        
        <creator>Zhiyu Xu</creator>
        
        <subject>Usability; mind map software; comparative research; eye tracking; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(11), 2022</description>
        <description>Software for creating mind maps is currently prevalent, and it should have strong usability and create a good user experience. Usability testing can help uncover flaws in software&#39;s usability and support its optimization. This paper took the mind map applications &quot;Xmind&quot; and &quot;MindMaster&quot; as study cases and conducted comparative research on three aspects: effectiveness, efficiency, and satisfaction. The research investigated 20 participants&#39; interactions with the two applications. Task completion rate, number of errors, and number of requests for help were collected to evaluate effectiveness. Eye tracking data and task completion time were collected to evaluate efficiency. System usability, interface quality, and emotional dimensions were measured with subjective scales to assess user satisfaction. Together, the data led to a conclusion: each application has a few usability issues. The use of jargon to explain functions was costly to learn and quickly undermined users&#39; confidence in the software, and the interface&#39;s simplicity affected satisfaction, although users tended to evaluate utility tools in terms of their ease of use and ease of learning. These findings could be used to optimize utility software.</description>
        <description>http://thesai.org/Downloads/Volume13No11/Paper_1-Investigating_the_User_Experience_of_Mind_Map_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Event User Reaction Prediction on a Social Network Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310116</link>
        <id>10.14569/IJACSA.2022.01310116</id>
        <doi>10.14569/IJACSA.2022.01310116</doi>
        <lastModDate>2022-11-01T10:11:43.3200000+00:00</lastModDate>
        
        <creator>Pramod Bide</creator>
        
        <creator>Sudhir Dhage</creator>
        
        <subject>Twitter; cross events; collaborative filtering; logistic regression; social and topical context</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Social networks surge with tweets carrying a mixture of emotions from many users when events like rape, robbery, war, and murder occur. We use this user data to analyze user emotions across cross-events and to predict user reactions to the next possible such event. Cross-events are a series of events that fall under the same umbrella of topics and are related to the events occurring before them. The proposed system solves this problem using collaborative filtering with topical and social context. The TextRank algorithm, an unsupervised algorithm, is used for keyword extraction. A count vectorizer is applied to the preprocessed text to obtain the frequency of words throughout the text, which is used as training data to estimate the probability of an emotion with a logistic regression model. We incorporated social context along with topical context to account for homophily and used the low-rank matrix factorization method for user-topic prediction. The model outputs a total of 8 emotions: Shame, Disgust, Anger, Fear, Sadness, Neutral, Surprise, and Joy. Finally, the model is able to predict emotions with an accuracy of 95% considering cross-events.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_116-Cross_Event_User_Reaction_Prediction_on_a_Social_Network_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Slope Stability in Open Cast Mines via Machine Learning based IoT Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310115</link>
        <id>10.14569/IJACSA.2022.01310115</id>
        <doi>10.14569/IJACSA.2022.01310115</doi>
        <lastModDate>2022-11-01T10:11:43.2730000+00:00</lastModDate>
        
        <creator>Sameer Kumar Das</creator>
        
        <creator>Subhendu Kumar Pani</creator>
        
        <creator>Abhaya Kumar Samal</creator>
        
        <creator>Sasmita Padhy</creator>
        
        <creator>Sachikanta Dash</creator>
        
        <creator>Singam Jayanthu</creator>
        
        <subject>Opencast; mining; slope; IoT; stability; machine learning; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Slope stability has been a matter of concern for most geologists, mainly because unstable slopes cause a greater number of accidents, which in turn reduces the efficiency of mining operations. To reduce the probability of these slope instabilities, methods like tension crack mapping, inclinometer measurements, time domain reflectometry, borehole extensometers, piezometers, radar systems, and image processing systems are deployed. These systems work efficiently for single-site slope failures, but as the number of mining sites increases, the dependency of one site&#39;s slope failure on nearby sites also increases. Current systems are not able to capture this data, which increases the probability of accidents at open cast mines. To reduce this probability, a high-efficiency Internet of Things (IoT) based continuous slope monitoring and control system is designed. This system improves the efficiency of real-time slope monitoring via a sensor array consisting of radar, reflectometer, inclinometer, piezometer, and borehole extensometer. All these measurements are fed to a high-efficiency machine learning classifier that uses data mining, and based on its output suitable actions are taken to reduce accidents during mining. This information is disseminated to nearby mining sites to inform them about any inconsistencies which might occur due to the slope changes on the current site. Results were simulated using the HIgh REsolution Slope Stability Simulator (HIRESSS): an efficiency improvement of 6% is achieved for slope analysis in open cast mines, while the probability of accident reduction is increased by 35% when compared to a traditional non-IoT based approach.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_115-Improving_Slope_Stability_in_Open_Cast_Mines_Via_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Address Pattern Recognition Flash Translation Layer for Quadruple-level cell NAND-based Smart Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310113</link>
        <id>10.14569/IJACSA.2022.01310113</id>
        <doi>10.14569/IJACSA.2022.01310113</doi>
        <lastModDate>2022-10-31T13:06:22.4630000+00:00</lastModDate>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Memory management; nonvolatile memory; smart devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The price of solid-state drives has become a major factor in the development of flash memory technology. Major semiconductor companies are developing quadruple-level cell NAND-based SSDs for smart devices. Unfortunately, SSDs composed of quadruple-level cell (QLC) flash memory may suffer from low performance. In addition, few studies on internal page buffering mechanisms have been conducted. As a solution to these problems, an address pattern recognition flash translation layer (APR-FTL) is proposed in this study. APR-FTL gathers data in page units and separates random data from sequential data. Furthermore, APR-FTL provides an address mapping algorithm that is compatible with the page buffering algorithm. Experimental results show that APR-FTL generates a lower number of write and erase operations compared to previous FTL algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_113-Address_Pattern_Recognition_Flash_Translation_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Learning Signature based Correlation Filter for Vehicle Tracking in Presence of Clutters and Occlusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310114</link>
        <id>10.14569/IJACSA.2022.01310114</id>
        <doi>10.14569/IJACSA.2022.01310114</doi>
        <lastModDate>2022-10-31T11:46:20.2970000+00:00</lastModDate>
        
        <creator>Shobha B. S</creator>
        
        <creator>Deepu. R</creator>
        
        <subject>Smart traffic management; background subtraction; vehicle detection; aggregation signature; hybrid tracker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Vehicle tracking is an important task in smart traffic management. Tracking is very challenging in the presence of occlusions, clutter, variation in real-world lighting, scene conditions, and camera vantage. The joint distribution of vehicle movement, clutter, and occlusions introduces larger errors in particle-tracking-based approaches. This work proposes a hybrid tracker that adapts kernel- and particle-based filters with an aggregation signature and fuses the results of both to obtain an accurate estimate of the target vehicle in video frames. The aggregation signature of the object to be tracked is constructed using a probabilistic distribution function of lighting variation, clutter, and occlusions with a deep learning model in the frequency domain. The work also proposes a fuzzy adaptive background modeling and subtraction algorithm to remove the backgrounds and clutter affecting tracking performance. This hybrid tracker improves tracking accuracy even in the presence of larger disturbances in the environment. The proposed solution is able to track objects with 3% higher precision compared to existing works, even in the presence of clutter.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_114-Hybrid_Deep_Learning_Signature_based_Correlation_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection in Video Surveillance using SlowFast Resnet-50</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310112</link>
        <id>10.14569/IJACSA.2022.01310112</id>
        <doi>10.14569/IJACSA.2022.01310112</doi>
        <lastModDate>2022-10-31T11:46:20.2970000+00:00</lastModDate>
        
        <creator>Mahasweta Joshi</creator>
        
        <creator>Jitendra Chaudhari</creator>
        
        <subject>Accuracy; GPU (Graphics Processing Unit); SlowFast Resnet50; Softmax; UCF-Crime dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Surveillance systems are widely used in malls, colleges, schools, shopping centers, airports, etc., likely due to the increasing crime rate in daily life. It is a very tedious task to monitor and detect abnormal activities 24x7 from a surveillance system, so the detection of abnormal events from videos is a hugely demanding area of research. In this paper, the proposed framework is based on deep learning concepts. SlowFast Resnet50 is used to extract and process the features. After that, a deep neural network is applied to generate a class using the Softmax function. The proposed framework has been applied to the UCF-Crime dataset using a Graphics Processing Unit (GPU). The dataset includes 1900 videos with 13 classes. Our proposed algorithm is evaluated by accuracy and works better than existing algorithms: it achieves 47.8% more accuracy than the state-of-the-art method and also achieves good accuracy compared to other approaches used for detecting abnormal activity on the UCF-Crime dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_112-Anomaly_Detection_in_Video_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Monitoring Solution for Cardiovascular Diseases based on Internet of Things and PLX-DAQ add-in</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310111</link>
        <id>10.14569/IJACSA.2022.01310111</id>
        <doi>10.14569/IJACSA.2022.01310111</doi>
        <lastModDate>2022-10-31T11:46:20.2800000+00:00</lastModDate>
        
        <creator>Jeanne Roux NGO BILONG</creator>
        
        <creator>Yao Gaspard Magnificat BOSSOU</creator>
        
        <creator>Adam Ismael Paco SIE</creator>
        
        <creator>Gervais MENDY</creator>
        
        <creator>Cheikhane SEYED</creator>
        
        <subject>Cardiovascular diseases; microcontroller arduino; esp8086; AD8232; Plx-DAQ; IoT; ECG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Access to healthcare remains a real problem in Africa, especially for the follow-up of patients with chronic diseases. Many heart attack deaths are still recorded before victims can access treatment. This is due to several factors, namely the insufficient number of cardiologists, the inaccessibility of hospitals with adequate infrastructure, and people’s carelessness and ignorance about their health. In response to these limitations, the Internet of Things, thanks to its remarkable technological contribution, allows a patient’s condition to be followed remotely and easily. In this paper, we offer a ubiquitous remote monitoring solution for patients with cardiovascular disease in order to minimize or eliminate the risk of heart attacks. The proposed solution is based on a micro-service architecture and consists of two essential parts: data acquisition and data transfer. It allows patients to access their physical data and submit it in real time to the doctor through a dedicated medical application. The doctor can then analyse the data obtained and return a prescription to the patient in case of abnormality. We used the Arduino ESP8266 microcontroller, the AD8232 ECG (electrocardiogram) heart rate monitor to measure the electrical activity of the heart that can be traced as an ECG, a pulse meter, a photoresistor (LDR), and a potentiometer to regulate and modify the current flow in the circuit. We also used the PLX-DAQ add-in for data acquisition and Jira software for data transfer to the doctor. Our solution is inexpensive and allows people not yet suffering from cardiovascular disease to prevent it.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_111-Remote_Monitoring_Solution_for_Cardiovascular_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skin Melanoma Classification from Dermoscopy Images using ANU-Net Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310109</link>
        <id>10.14569/IJACSA.2022.01310109</id>
        <doi>10.14569/IJACSA.2022.01310109</doi>
        <lastModDate>2022-10-31T11:46:20.2630000+00:00</lastModDate>
        
        <creator>Vankayalapati Radhika</creator>
        
        <creator>B. Sai Chandana</creator>
        
        <subject>Melanoma; LeNet-5; ANU-Net; dermoscopy images; benign; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Cells in any area of the body can develop cancer when they begin to grow uncontrollably, and the disease may then affect other body regions. The skin cancer known as melanoma develops when melanocytes, the cells that create melanin (the pigment that gives skin its color), start to grow out of control. Melanoma is deadly because, if not caught early and addressed, it has a high propensity to spread to other regions of the body. By analyzing digital dermoscopy images, we create a unique approach to categorizing melanocytic tumors as malignant or benign. Every newly formed mole has a unique shape and colour compared to pre-existing moles, which raises further issues in classifying melanoma. To overcome all of these issues, this paper uses deep learning techniques. A four-step system for classifying melanoma is described. The first stage is pre-processing: hair is removed from the dermoscopic images using a Laplacian-based algorithm, and noise is then removed from the images using a median filter. The second stage is feature extraction from the pre-processed images, extracting features including texture, shape, and color using the Principal Component Analysis (PCA) technique. Thirdly, the LeNet-5 approach is utilized to locate the lesion and segment the skin lesion. Fourth, the ANU-Net technique is used to categorize the lesion as cancerous (melanoma) or non-cancerous (non-melanoma). The system is evaluated on performance parameters such as precision, sensitivity, accuracy, and specificity. Results are compared to those of current systems and show higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_109-Skin_Melanoma_Classification_from_Dermoscopy_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Determination of Tealeaf Plucking Date with Cumulative Air Temperature: CAT and Photosynthetically Active Radiation: PAR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310110</link>
        <id>10.14569/IJACSA.2022.01310110</id>
        <doi>10.14569/IJACSA.2022.01310110</doi>
        <lastModDate>2022-10-31T11:46:20.2630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoshiko Hokazono</creator>
        
        <subject>Plucking date; elapsed days after sprouting; cumulative air temperature; Landsat-9 TIR; theanine; regressive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>A method for determining the tealeaf plucking date using cumulative air temperature and Photosynthetically Active Radiation (PAR), provided by the remote sensing satellites Terra/MODIS and Aqua/MODIS, is proposed. A confirmation of the thermal environment at the intensive study tea farm areas with a Landsat-9 TIR (Thermal Infrared) image is also conducted. Through a regression analysis between the harvested tealeaf quality and the cumulative air temperature and PAR at the intensive study areas, a highly reliable relation is found between the two. The importance of the air temperature environment at the sites is also confirmed with the Landsat-9 TIR image.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_110-Method_for_Determination_of_Tealeaf_Plucking_Date.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SDN Architecture for Smart Homes Security with Machine Learning and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310108</link>
        <id>10.14569/IJACSA.2022.01310108</id>
        <doi>10.14569/IJACSA.2022.01310108</doi>
        <lastModDate>2022-10-31T11:46:20.2500000+00:00</lastModDate>
        
        <creator>Wesam Abdulrhman Alonazi</creator>
        
        <creator>Hedi HAMDI</creator>
        
        <creator>Nesrine A. Azim</creator>
        
        <creator>A. A. Abd El-Aziz</creator>
        
        <subject>SDN; smart home; security; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In recent decades, intelligent home systems have become popular because they improve comfort and quality of life. A growing number of homes are becoming &quot;smarter&quot; by incorporating Internet of Things (IoT) technology to improve comfort, energy efficiency, and safety. The increase in resource-constrained IoT devices heightens the security threats and vulnerabilities connected with them. Using SDN and virtualization, the IoT&#39;s size and adaptability can be managed at a lower cost than ever before. Using intelligent security solutions, we can achieve real-time detection and automation for attack detection and prevention using artificial intelligence. Consequently, a large variety of solutions utilizing machine learning and deep learning have been developed to mitigate attacks on the IoT. Thus, the goal of this work is to use machine learning and deep learning to defend SDN-based smart homes. We designed smart home environments using Software-Defined Networking and Mininet, which provide instant virtual networks for IoT in smart homes. Two datasets were used in this work: the first is an SDN dataset that we acquired from smart homes by launching real attacks and creating normal traffic, and the second is the IoTID20 dataset, which is publicly available online. We conducted ML and DL experiments on both datasets. The best accuracy on the SDN dataset was 99.9% using the XGBoost classifier; on IoTID20, LSTM achieved 98.9% in binary classification and ANN achieved 85.7% in multiclass classification.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_108-SDN_Architecture_for_Smart_Homes_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Monolith to Microservices: A Semi-Automated Approach for Legacy to Modern Architecture Transition using Static Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310107</link>
        <id>10.14569/IJACSA.2022.01310107</id>
        <doi>10.14569/IJACSA.2022.01310107</doi>
        <lastModDate>2022-10-31T11:46:20.2330000+00:00</lastModDate>
        
        <creator>Mohd Hafeez Osman</creator>
        
        <creator>Cheikh Saadbouh</creator>
        
        <creator>Khaironi Yatim Sharif</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <subject>Static analysis; software architecture; software modernisation; microservices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>A modern system architecture may increase the maintainability of a system and promote its sustainability. Nowadays, more and more organizations are looking towards microservices due to their positive impact on the business, which can be translated into delivering quality products to the market faster than ever before. On top of that, native support for DevOps is also desirable. However, transforming a legacy system architecture into a modern architecture is challenging. As manual modernization is inefficient due to the time and significant effort required, software architects are looking for an automated or semi-automated approach for an easy and smooth transformation. Hence, this work proposes a semi-automated approach to transform legacy architecture into modern system architecture based on static analysis techniques. This bottom-up approach utilizes legacy source code to adhere to the modern architecture framework. We studied the manual transformation pattern for architectural conversion and explored the possibility of providing transformation rules and guidelines. A task-based experiment was conducted to evaluate the correctness and efficiency of the approach. Two open-source projects were selected, and several software architects participated in an architectural transformation task as well as in the survey. We found that the new approach promotes an efficient migration process and produces correct software artifacts with minimal error rates.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_107_From_Monolith_to_Microservices_A_Semi_Automated_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptocurrency Price Prediction using Forecasting and Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310105</link>
        <id>10.14569/IJACSA.2022.01310105</id>
        <doi>10.14569/IJACSA.2022.01310105</doi>
        <lastModDate>2022-10-31T11:46:20.2170000+00:00</lastModDate>
        
        <creator>Shaimaa Alghamdi</creator>
        
        <creator>Sara Alqethami</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <subject>Sentiment analysis; cryptocurrencies; forecasting; bitcoin; ethereum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In recent years, many investors have used cryptocurrencies, prompting specialists to find out the factors that affect cryptocurrencies’ prices. One of the most popular methods used to predict cryptocurrency prices is sentiment analysis, a widespread technique utilized by many researchers on social media platforms, particularly on Twitter. Thus, to determine the relationship between investors’ sentiment and the volatility of cryptocurrency prices, this study forecasts cryptocurrency prices using the Long Short-Term Memory (LSTM) deep learning algorithm. In addition, Twitter users’ sentiments are analyzed using Support Vector Machine (SVM) and Naive Bayes (NB) machine learning approaches. In classifying the Bitcoin (BTC) and Ethereum (ETH) datasets of investors’ sentiments into Positive, Negative, and Neutral, the SVM algorithm outperformed the NB algorithm with accuracies of 93.95% and 95.59%, respectively. Furthermore, the forecasting regression model achieves an error rate of 0.2545 for MAE, 0.2528 for MSE, and 0.5028 for RMSE.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_105_Cryptocurrency_Price_Prediction_using_Forecasting_and_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Multi-Objective Design of Laminated Structure with Non-Dominated Sorting Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310106</link>
        <id>10.14569/IJACSA.2022.01310106</id>
        <doi>10.14569/IJACSA.2022.01310106</doi>
        <lastModDate>2022-10-31T11:46:20.2170000+00:00</lastModDate>
        
        <creator>Huiyao Zhang</creator>
        
        <creator>Yuxiao Wang</creator>
        
        <creator>Fangmeng Zeng</creator>
        
        <subject>Non-dominated sorting genetic algorithm; optimization; failure theory; laminated composite material; classical lamination theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The non-dominated sorting genetic algorithm has shown excellent advantages in solving complicated optimization problems with discrete variables in a variety of domains. In this paper, we implement a multi-objective genetic algorithm to guide the design of a laminated structure with two objectives: minimizing the mass and maximizing the strength of a specified structure simultaneously. Classical lamination theory and failure theory are adopted to compute the strength of a laminate. The simulation results show that the non-dominated sorting genetic algorithm has great advantages in the design of laminated composite materials. Experimental results also suggest that the optimal number of runs is from 16 to 32 for the design of glass-epoxy laminates with the non-dominated sorting genetic algorithm. We also observed that the optimization process involves two stages, in which the number of individuals in the first frontier first increases and then decreases. These simulation results are helpful for deciding the proper number of genetic algorithm runs for glass-epoxy design and for reducing computation costs.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_106_The_Multi_Objective_Design_of_Laminated_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Automatic Course Timetabling Service Architecture for Integration with Vendor Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310104</link>
        <id>10.14569/IJACSA.2022.01310104</id>
        <doi>10.14569/IJACSA.2022.01310104</doi>
        <lastModDate>2022-10-31T11:46:20.2030000+00:00</lastModDate>
        
        <creator>Marwah M. Alansari</creator>
        
        <subject>Courses timetable generation; genetic algorithm; course scheduling; service-based system; service-oriented architecture; optimization; web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Generating university course timetables is a complex problem, especially in large environments such as institutions. Currently, some universities in Saudi Arabia manually generate class timetables because they use Vendor Management Systems (VMS) for registration and management. Manually generating course timetables is time-consuming and laborious for the academic staff. Although various methods have been proposed to generate timetables, they address specific environments or systems that can be extended to, or work as, separate components of the university management system. In this paper, we propose a service-based system with a decentralized architecture that can fully automate the process of course timetable generation and can be easily integrated into a VMS. The proposed service-based system employs a genetic algorithm to optimize the process of scheduling courses and generating timetables. The system was implemented using Java RESTful web services, and the algorithm was tested by generating various course timetables with various constraints. The results showed that the proposed decentralized architecture is applicable to and can be fully integrated with any VMS. Furthermore, the genetic algorithm, set to 200 generations and iterated 1000 times, produces acceptable timetables without violating any of the defined constraints.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_104_Optimized_Automatic_Course_Timetabling_Service_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triple SVM Integrated with Enhanced Random Region Segmentation for Classification of Lung Tumors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310103</link>
        <id>10.14569/IJACSA.2022.01310103</id>
        <doi>10.14569/IJACSA.2022.01310103</doi>
        <lastModDate>2022-10-31T11:46:20.1870000+00:00</lastModDate>
        
        <creator>Sukruth Gowda M A</creator>
        
        <creator>A Jayachandran</creator>
        
        <subject>Benign; computed tomography; malignant; lung cancer; radiation; triple support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The rapid growth of computer vision and machine learning applications, especially in healthcare systems, promises a secure, innovative lifestyle for society. Applying these technologies to the early diagnosis of lung tumors aids lung cancer detection and improves the survival rate of patients. The existing general diagnostic method for lung radiotherapy, Computed Tomography (CT) imaging, does not precisely locate the affected regions of lung malignancy. Herein, we propose a computer vision-based diagnostic method empowered with machine learning algorithms to detect lung tumors. The primary objective of the proposed method is to develop an efficient segmentation method that enhances the classification accuracy of lung tumors by implementing a Triple Support Vector Machine (SVM) to classify data samples as normal, malignant, or benign; Random Region Segmentation (RSS) for image segmentation; and the SIFT and GLCM algorithms for feature extraction. The model is trained on the IQ-OTH/NCCD dataset for 300 epochs, achieving an accuracy of 96.5% under 200 cluster formations.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_103_Triple_SVM_Integrated_with_Enhanced_Random_Region_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient HPC and Energy-Aware Proactive Dynamic VM Consolidation in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310102</link>
        <id>10.14569/IJACSA.2022.01310102</id>
        <doi>10.14569/IJACSA.2022.01310102</doi>
        <lastModDate>2022-10-31T11:46:20.1870000+00:00</lastModDate>
        
        <creator>Rukshanda Kamran</creator>
        
        <creator>Ali A. El-Moursy</creator>
        
        <creator>Amany Abdelsamea</creator>
        
        <subject>Cloud computing; HPC (High-Performance Computing); virtual machine consolidation; placement; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The adoption of High-Performance Computing (HPC) applications has gained extensive interest in cloud computing. Current cloud vendors utilize separate management tools for HPC and non-HPC applications, missing out on the consolidation benefits of virtualization. Non-HPC applications executed in the cloud may interfere with resource-hungry HPC applications, which is a key performance challenge. Furthermore, correlating applications&#8217; major performance indicators, such as response time and throughput, with resource capacities reveals that conventional placement strategies impair virtual machine efficiency, resulting in poor resource optimization, increased operating expenses, and longer wait times. Since applications often underutilize the hardware, smart execution of HPC and non-HPC applications on the same node can boost system and energy efficiency. This research incorporates proactive dynamic VM consolidation to enhance resource usage and performance while maintaining energy efficiency. The proposed algorithm generates a workload-aware fine-grained classification by employing machine learning techniques to build complementary profiles that alleviate cross-application interference by intelligently co-locating non-HPC and HPC applications. The research used CloudSim to simulate real HPC workloads. The results verified that the proposed algorithm outperforms all heuristic methods with respect to the metrics in key areas.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_102_Efficient_HPC_and_Energy_Aware_Proactive_Dynamic_VM_Consolidation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decentralized Payment Aggregator: Hyperledger Fabric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310101</link>
        <id>10.14569/IJACSA.2022.01310101</id>
        <doi>10.14569/IJACSA.2022.01310101</doi>
        <lastModDate>2022-10-31T11:46:20.1700000+00:00</lastModDate>
        
        <creator>Md. Al-Amin</creator>
        
        <creator>Khondoker Shahrina</creator>
        
        <creator>Rubyet Hossain</creator>
        
        <creator>Debashish Sarker</creator>
        
        <creator>Sumya Sultana Meem</creator>
        
        <subject>Blockchain; decentralized; hyperledger fabric; bit-coin; payment system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Blockchain has become a major trend and is very popular in the present era. There are two types of blockchain technology: centralized and decentralized. The main concern of this research is the decentralized payment gateway, which is a trustworthy architecture that does not depend on third parties. Decentralized payment systems use a distributed ledger to record transactions. Previously, the Bitcoin and Ethereum payment systems were used to verify the consistency of the blockchain ledger as well as the transaction data, including the sender and receiver addresses and the transaction value. However, because these payment systems are public, the transactions are also public, which raises concerns about privacy and security: since anyone can easily access the network, an attacker can target the network, user identities, transaction records, and user addresses, which is a privacy challenge. To overcome this challenge, this research incorporates Hyperledger Fabric, which is private and cannot be accessed from outside the network, offers low transaction costs, and processes transactions quickly. Considering the above scenario, this research proposes a decentralized payment system architecture using Hyperledger Fabric.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_101_Decentralized_Payment_Aggregator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Raspberry Pi as an IoT Edge Signal Processing Device for a Real-time Flash Flood Forecasting System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01310100</link>
        <id>10.14569/IJACSA.2022.01310100</id>
        <doi>10.14569/IJACSA.2022.01310100</doi>
        <lastModDate>2022-10-31T11:46:20.1570000+00:00</lastModDate>
        
        <creator>Aslinda Hassan</creator>
        
        <creator>Haniza Nahar</creator>
        
        <creator>Wahidah Md Shah</creator>
        
        <creator>Azlianor Abd-Aziz</creator>
        
        <creator>Sarah Afiqah Sahiran</creator>
        
        <creator>Nazrulazhar Bahaman</creator>
        
        <creator>Mohd Riduan Ahmad</creator>
        
        <creator>Isredza Rahmi A. Hamid</creator>
        
        <creator>Muhammad Abu Bakar Sidik</creator>
        
        <subject>Raspberry Pi; IoT; edge; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The Raspberry Pi has evolved in recent years into a popular, low-cost, tiny computer for a wide range of IoT applications. The Raspberry Pi is successful not only for data collection but also for data processing, including data storage and analysis. Thus, this study investigates the capability of the Raspberry Pi as an edge processing device for capturing lightning strike signals to predict flash flood locations. In the experimental setup, an electric and magnetic sensor (EMS) is connected to a Raspberry Pi, which is then used to process digitised lightning signals. The Raspberry Pi&#8217;s performance is measured using two metrics: central processing unit (CPU) usage and temperature. The results revealed that the Raspberry Pi could handle the real-time collection and processing of lightning signals from the EMSs without exceeding the hardware&#8217;s capability.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_100_Performance_Evaluation_of_Raspberry_Pi_as_an_IoT_Edge_Signal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arduino for Developing Problem-Solving and Computing Competencies in Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131098</link>
        <id>10.14569/IJACSA.2022.0131098</id>
        <doi>10.14569/IJACSA.2022.0131098</doi>
        <lastModDate>2022-10-31T11:46:20.1400000+00:00</lastModDate>
        
        <creator>Cristian Vidal-Silva</creator>
        
        <creator>Claudia Jimenez-Quintana</creator>
        
        <creator>Erika Madariaga-Garcia</creator>
        
        <subject>Arduino; competencies; programming; problem-solving; children</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Fostering children&#8217;s problem-solving and computational programming competencies is crucial at the current time. As in other developing nations, children in Chile grow up with technology. Developing programming and problem-solving competencies in children seems a reachable task using high-level block-based programming languages. However, programming and electronics competencies often emerge only at higher educational levels. This article shows that using Arduino can enhance the development of programming and problem-solving competencies in children and encourages them to think in new ways. This article uses TinkerCAD, an online emulator of Arduino, to teach fundamental electronic circuits and computer programming components. Using TinkerCAD effectively addresses various computing and electrical challenges, such as turning a group of lights on and off and reading sensors to respond to the acquired values. This article seeks to develop problem-solving and computer programming competencies in primary school students, given the significance of both competencies, the open nature of Arduino, and the applicability of TinkerCAD, which permits using a block-based programming language. Children who took part in the trial saw an increase in their academic performance on average, which is a critical concomitant finding. The essential drawbacks of this project were the children&#8217;s lack of knowledge of electronics and programming principles and the need to use a computer with an internet connection.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_98_Arduino_for_Developing_Problem_Solving_and_Computing_Competencies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Hardware Prototype for Monitoring Gas leaks, Fires, and Remote control via Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131099</link>
        <id>10.14569/IJACSA.2022.0131099</id>
        <doi>10.14569/IJACSA.2022.0131099</doi>
        <lastModDate>2022-10-31T11:46:20.1400000+00:00</lastModDate>
        
        <creator>Md. Ashiqur Rahman</creator>
        
        <creator>Humayra Ahmed</creator>
        
        <creator>Md. Mamun Hossain</creator>
        
        <subject>Gas leakage; infrared flame detection; IoT; android; Arduino UNO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Liquefied petroleum gas (LPG) is used in a wide range of applications such as home and industrial appliances, vehicles, and refrigerators. However, gas leakage can have a dangerous and toxic effect on humans and other living organisms. In this paper, an IoT-based system is employed to monitor gas leakage, detect flames, and alert users. The MQ-5 gas sensor was used to measure the gas concentration level in a closed volume, while the infrared flame sensor was used to detect the spread of fire. The proposed system can detect fires and gas leaks and take additional action: lowering the gas concentration by ventilating with an exhaust fan and putting out fires with a fire extinguisher. The suggested approach will contribute to increasing safety, lowering the death toll, and minimizing harm to the environment. The overall system is implemented with IoT cloud-based remote controls to prevent gas leakage through an Android application, in response to individual feedback or feed-forward commands. The controller used here is the Arduino Uno Rev3 SMD. This study provides design approaches for both software and hardware.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_99_An_Integrated_Hardware_Prototype_for_Monitoring_Gas_Leaks_Fires.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence for Automated Plant Species Identification: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131097</link>
        <id>10.14569/IJACSA.2022.0131097</id>
        <doi>10.14569/IJACSA.2022.0131097</doi>
        <lastModDate>2022-10-31T11:46:20.1230000+00:00</lastModDate>
        
        <creator>Khaoula Labrighli</creator>
        
        <creator>Chouaib Moujahdi</creator>
        
        <creator>Jalal El Oualidi</creator>
        
        <creator>Laila Rhazi</creator>
        
        <subject>Plants identification; species; artificial intelligence; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Plants are very important for life on Earth. There is a wide variety of plant species, and their number increases each year. Identifying plants using conventional keys is complex and time-consuming, and it is frustrating for non-experts because of the specific botanical terms and techniques involved. This is a difficult obstacle for novices interested in acquiring knowledge about species, which is essential for any environmental study, such as climate change anticipation models. Today, there is increasing interest in automating the species identification process. The availability and omnipresence of relevant technologies, such as digital cameras, mobile devices, and pattern recognition and artificial intelligence techniques in general, have allowed the idea of automated species identification to become a reality. In this paper, we present a review of automated plant identification covering all significant studies available in the literature. The main result of this synthesis is that the performance of advanced deep learning models, despite several remaining challenges, is approaching the most advanced human expertise.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_97_Artificial_Intelligence_for_Automated_Plant_Species_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Online Machine Learning Algorithms for Electricity Theft Detection in Smart Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131096</link>
        <id>10.14569/IJACSA.2022.0131096</id>
        <doi>10.14569/IJACSA.2022.0131096</doi>
        <lastModDate>2022-10-31T11:46:20.1100000+00:00</lastModDate>
        
        <creator>Ashraf Alkhresheh</creator>
        
        <creator>Mutaz A. B. Al-Tarawneh</creator>
        
        <creator>Mohammad Alnawayseh</creator>
        
        <subject>Smart grid; power loss; electricity theft; online machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Electricity theft-induced power loss is a pressing issue in both traditional and smart grid environments. In smart grids, smart meters can be used to track power consumption behaviour and detect any suspicious activity. However, smart meter readings can be compromised by deploying intrusion tactics or launching cyber attacks. In this regard, machine learning models can be used to assess the daily consumption patterns of customers and detect potential electricity theft incidents. Whilst existing research efforts have extensively focused on batch learning algorithms, this paper investigates the use of online machine learning algorithms for electricity theft detection in smart grid environments, based on a recently proposed dataset. Several algorithms including Naive Bayes, K-nearest Neighbours, K-nearest Neighbours with self-adjusting memory, Hoeffding Tree, Extremely Fast Decision Tree, Adaptive Random Forest and Leveraging Bagging are considered. These algorithms are evaluated using an online machine learning platform considering both binary and multi-class theft detection scenarios. Evaluation metrics include prediction accuracy, precision, recall, F-1 score and kappa statistic. Evaluation results demonstrate the ability of the Leveraging Bagging algorithm with an Adaptive Random Forest base classifier to surpass all other algorithms in terms of all the considered metrics, for both binary and multi-class theft detection. Hence, it can be considered as a viable option for electricity theft detection in smart grid environments.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_96_Evaluation_of_Online_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CertOracle: Enabling Long-term Self-Sovereign Certificates with Blockchain Oracles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131095</link>
        <id>10.14569/IJACSA.2022.0131095</id>
        <doi>10.14569/IJACSA.2022.0131095</doi>
        <lastModDate>2022-10-31T11:46:20.1100000+00:00</lastModDate>
        
        <creator>Shaoxi Zou</creator>
        
        <creator>Fa Jin</creator>
        
        <creator>Yongdong Wu</creator>
        
        <subject>Digital certificate; blockchain oracle; fully homomorphic encryption; secure two-party computation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>An identity certificate is an endorsement of identity attributes by an authoritative issuer and plays a critical role in many digital applications such as electronic banking. However, existing certificate schemes have two weaknesses: (1) a certificate is valid only for a short period due to the expiry of the issuer&#8217;s private key, and (2) privacy leaks occur because all attributes must be disclosed during attribute verification. To overcome these weaknesses, this paper proposes a blockchain-based certificate scheme called CertOracle. Specifically, CertOracle allows a traditional certificate owner to encrypt the off-chain certificate attributes with fully homomorphic encryption algorithms. Then, the uploading protocol in CertOracle posts the encrypted off-chain attributes to the blockchain via a blockchain oracle in an authenticated way, i.e., the off-chain attributes and on-chain encrypted attributes are consistent. Finally, the attribute verification protocol in CertOracle enables anyone to verify any set of on-chain attributes under the control of the attribute owner. As the on-chain certificate attributes are immutable forever, a traditional short-term certificate is transformed into a long-term one. Besides, the owner of the on-chain certificate attributes can arbitrarily select his/her attributes to meet the requirements of target applications, i.e., the on-chain certificate has the self-sovereign merit. Moreover, the proposed scheme is implemented with fully homomorphic encryption and secure two-party computation algorithms, and experiments show that it is viable in terms of computation time and communication overhead.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_95_CertOracle_Enabling_Long_term_Self_Sovereign_Certificates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Power Advantage of Binary Search: An Experimental Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131094</link>
        <id>10.14569/IJACSA.2022.0131094</id>
        <doi>10.14569/IJACSA.2022.0131094</doi>
        <lastModDate>2022-10-31T11:46:20.0930000+00:00</lastModDate>
        
        <creator>Muhammad Al-Hashimi</creator>
        
        <creator>Naif Aljabri</creator>
        
        <subject>Binary search; ternary search; time-power tradeoff; exascale computing; barrel shifter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>As exascale systems come online, more ways are needed to keep them within reasonable power budgets. This study aims to help uncover power advantages in algorithms likely ubiquitous in high-performance workloads such as searching. This study explored the power efficiency of binary search and its ternary variant, comparing consumption under different scenarios and workloads. Accurate modern on-chip integrated voltage regulators were used to get reliable power measurements. Results showed the binary version of the algorithm, which runs slower but relies on a barrel-shifter circuit, to be more power efficient in all studied scenarios offering an attractive time-power tradeoff. The cumulative savings were significant and will likely be valuable where the search may be a substantial fraction of workloads, especially massive ones.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_94_Exploring_Power_Advantage_of_Binary_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Intuitive Teleoperated System of the TxRob Multimodal Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131093</link>
        <id>10.14569/IJACSA.2022.0131093</id>
        <doi>10.14569/IJACSA.2022.0131093</doi>
        <lastModDate>2022-10-31T11:46:20.0770000+00:00</lastModDate>
        
        <creator>Jeyson Carpio A</creator>
        
        <creator>Samuel Luque C</creator>
        
        <creator>Juan Chambi C</creator>
        
        <creator>Jesus Talavera S</creator>
        
        <subject>Multimodal interface; Immersive teleoperation; exploration robot; Gyroscope and subjective measurements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Natural disasters such as earthquakes, avalanches, and landslides leave behind people who may be trapped in rubble and are hard for rescue agents to find, so a reliable system for operating an exploration and rescue robot is essential. This paper evaluates the systems proposed for operating the TxRob exploration robot. The teleoperated control systems developed for manipulating the robot are a multimodal system that feeds back information from different sensors, and a GUI control system using joystick buttons. These systems were analyzed using subjective metrics such as NASA-TLX, the System Usability Scale (SUS), and Microsoft Reaction Cards, which provide interesting data when evaluating the performance of an interface, as well as workload, user satisfaction, and usability; these aspects are used to conclude which system is the most intuitive for performing rescue operations in case of a disaster. To validate the system, 15 operators were evaluated; their ages ranged between 20 and 43 years, and 20% of them had previously used VR headsets. Priority is given to the most immersive, easiest to use, and most efficient system for handling the robot.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_93_Analysis_of_the_Intuitive_Teleoperated_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Concept to Support House Hunting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131091</link>
        <id>10.14569/IJACSA.2022.0131091</id>
        <doi>10.14569/IJACSA.2022.0131091</doi>
        <lastModDate>2022-10-31T11:46:20.0600000+00:00</lastModDate>
        
        <creator>Tanjim Mahmud</creator>
        
        <creator>Dilshad Islam</creator>
        
        <creator>Manoara Begum</creator>
        
        <creator>Sudhakar Das</creator>
        
        <creator>Lily Dey</creator>
        
        <creator>Koushick Barua</creator>
        
        <subject>AHP; multiple criteria decision Making (MCDM); uncertainty; evidential reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>House hunting, the act of seeking a place to live, is one of the most significant responsibilities for many families around the world. There are numerous criteria and factors that must be evaluated and investigated. These attributes can be quantified and expressed both quantitatively and qualitatively, and there is a hierarchical relationship among them. Furthermore, assessing qualitative characteristics objectively is difficult, resulting in data inconsistency and, consequently, uncertainty. This uncertainty must be handled with appropriate methods; otherwise, the decision to live in a particular property may be incorrect. To compare criteria, the Analytic Hierarchy Process (AHP) is employed; evidential reasoning is used to evaluate houses against each criterion; and TOPSIS is used to rank house sites for selection. Arriving at the final ranking of houses required analyzing qualitative and quantitative elements, as well as the economic and social features of the residences, which was not an easy process. The authors therefore developed a decision support model to aid decision makers in managing activities related to finding a suitable dwelling. This study describes the development of a decision support system (DSS) capable of providing an overall judgment on the location of a house to live in while taking into account both qualitative and quantitative factors.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_91_A_Decision_Concept_to_Support_House_Hunting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Drone System with an Object Identification Algorithm for Tracking Dengue Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131092</link>
        <id>10.14569/IJACSA.2022.0131092</id>
        <doi>10.14569/IJACSA.2022.0131092</doi>
        <lastModDate>2022-10-31T11:46:20.0600000+00:00</lastModDate>
        
        <creator>Diego Moran-Landa</creator>
        
        <creator>Maria del Rosario Damian</creator>
        
        <creator>Pedro Miguel Portillo Mendoza</creator>
        
        <creator>Carlos Sotomayor-Beltran</creator>
        
        <subject>Epidemiological surveillance; drones; neural networks; recognition algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In recent decades, epidemiological surveillance has been shown to be one of the most valuable tools available to public health, since it provides an overview of the general health of the population, making it possible to anticipate epidemic outbreaks and support timely interventions. Currently, there is an increase in cases of dengue disease in several regions of Peru. Therefore, to control this outbreak and to help population centers and human settlements that are far from the city, this work puts forward a drone system with an object recognition algorithm. Drones are very efficient for surveillance, allowing easy access to places that are difficult for humans to reach. In this way, drones can carry out the field work required in epidemiological surveillance, capturing photographs or video in real time and thus identifying infectious foci of diverse diseases. In this work, an object detection algorithm using convolutional neural networks and a stable detection model is designed; this allows the detection of water reservoirs that are possible infectious sources of dengue. In addition, the efficiency of the algorithm is evaluated through the statistical precision and sensitivity curves that result from training the neural network. To validate the efficiency obtained, the model was applied to test images related to dengue, achieving an efficiency of 99.2%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_92_A_Drone_System_with_an_Object_Identification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decode and Forward Coding Scheme for Cooperative Relay NOMA System with Cylindrical Array Transmitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131090</link>
        <id>10.14569/IJACSA.2022.0131090</id>
        <doi>10.14569/IJACSA.2022.0131090</doi>
        <lastModDate>2022-10-31T11:46:20.0470000+00:00</lastModDate>
        
        <creator>Samuel Tweneboah-Koduah</creator>
        
        <creator>Emmanuel Ampoma Affum</creator>
        
        <creator>Kingsford Sarkodie Obeng Kwakye</creator>
        
        <creator>Owusu Agyeman Antwi</creator>
        
        <subject>CR-NOMA; 3D GBSM; DF coding scheme; Cylindrical Array (CA); cooperative relay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The Non-Orthogonal Multiple Access (NOMA) technique has enormous potential for wireless communications in the fifth generation (5G) and beyond. Researchers have recently become interested in the combination of NOMA and cooperative relay. Even though geometric-based stochastic channel models (GBSM) have been found to provide better, practical, and realistic channel properties of massive multiple-input multiple-output (mMIMO) systems, the assessment of Cooperative Relay NOMA (CR-NOMA) with mMIMO systems is largely based on correlated-based stochastic channel models (CBSM). We believe that this is a result of computational difficulties. Again, little discussion has taken place in academia about how well CR-NOMA systems perform when large antenna transmitters with the GBSM channel model are used. As a result, it is critical to investigate the mMIMO CR-NOMA system with a GBSM channel model that takes into account channel parameters such as path loss, delay profile, and tilt angle. Moreover, the coexistence of large antenna transmitters and coding methods requires additional research. In this research, we propose a two-stage, three-dimensional (3D) GBSM mMIMO channel model from the 3GPP, in which the transmitter is modelled as a cylindrical array (CA), to investigate the efficiency of CR-NOMA. By defining antenna element placement vectors using the actual dimensions of the antenna array and incorporating them into the 3D channel model, we were able to increase the analytical tractability of the 3D GBSM. Bit-error rates, achievable rates, and outage probabilities (OP) are investigated utilizing the decode-and-forward (DF) coding method, and the results are compared with those of a system using the CBSM channel model. Despite the computational difficulties of the proposed GBSM system, there is no difference in performance between CBSM and GBSM.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_90_Decode_and_Forward_Coding_Scheme_for_Cooperative_Relay_NOMA_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Input Data Structure on Convolutional Neural Network Energy Prediction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131089</link>
        <id>10.14569/IJACSA.2022.0131089</id>
        <doi>10.14569/IJACSA.2022.0131089</doi>
        <lastModDate>2022-10-31T11:46:20.0300000+00:00</lastModDate>
        
        <creator>Imen Toumia</creator>
        
        <creator>Ahlem Ben Hassine</creator>
        
        <subject>Deep learning; convolutional neural network; energy consumption; energy prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Energy demand continues to increase with no prospect of slowing down in the future. This increase is caused by several sociological and economic factors such as population growth, urbanization, and technological development. In view of this growth, it becomes crucial to predict energy consumption for more accurate management and optimization. Nevertheless, consumption estimation is a complex task due to fluctuating consumer behaviour and weather alterations. Several efforts have been proposed in the literature; almost all of them focus on improving the prediction model to increase the accuracy of the results. They use the LSTM (Long Short-Term Memory) model to reflect the temporal dependencies in historical data despite its spatial and temporal complexity. The main contribution of this paper is a novel and simple Convolutional Neural Network energy prediction model based on input data structure enhancement. The main idea is to adjust the structure of the input data instead of using a more complicated deep learning model for better performance. The proposed model was implemented, tested using real data, and compared to existing ones. The obtained results show that the proposed data structure has a great influence on model performance.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_89_Impact_of_Input_Data_Structure_on_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Channel Speech Enhancement using a Minimum Variance Distortionless Response Beamformer based on Graph Convolutional Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131088</link>
        <id>10.14569/IJACSA.2022.0131088</id>
        <doi>10.14569/IJACSA.2022.0131088</doi>
        <lastModDate>2022-10-31T11:46:20.0300000+00:00</lastModDate>
        
        <creator>Nguyen Huu Binh</creator>
        
        <creator>Duong Van Hai</creator>
        
        <creator>Bui Tien Dat</creator>
        
        <creator>Hoang Ngoc Chau</creator>
        
        <creator>Nguyen Quoc Cuong</creator>
        
        <subject>Multi-Channel Speech Enhancement; Graph Convolutional Networks; Minimum Variance Distortionless Response Beamformer; Complex Ideal Ratio Mask</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The Minimum Variance Distortionless Response (MVDR) beamforming algorithm is frequently utilized to extract speech and noise from noisy signals captured from multiple microphones. A time-frequency mask should be employed to compute the Power Spectral Density (PSD) matrices of the noise and the speech signal of interest in order to obtain the optimal weights for the beamformer. Deep Neural Networks (DNNs) are widely used for estimating time-frequency masks. This paper adopts a novel method using Graph Convolutional Networks (GCNs) to learn spatial correlations among the different channels. GCNs are integrated into the embedding space of a U-Net architecture to estimate a Complex Ideal Ratio Mask (cIRM). We use the cIRM in an MVDR beamformer to further improve the enhancement system. We simulate room acoustics data to experiment extensively with our approach using different types of microphone arrays. Results indicate the superiority of our approach compared to current state-of-the-art methods. The metrics obtained by the proposed method are significantly improved, except for the Scale-Invariant Source-to-Distortion Ratio (SI-SDR) score. The Perceptual Evaluation of Speech Quality (PESQ) score shows a noticeable improvement over the baseline models (i.e., 2.207 vs. 2.104 and 2.076). Our implementation of the proposed method can be found at the following link: https://github.com/3i-hust-asr/gnn-mvdr-final.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_88_Multi_Channel_Speech_Enhancement_using_a_Minimum_Variance_Distortionless_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Money Laundering Detection using Machine Learning and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131087</link>
        <id>10.14569/IJACSA.2022.0131087</id>
        <doi>10.14569/IJACSA.2022.0131087</doi>
        <lastModDate>2022-10-31T11:46:20.0130000+00:00</lastModDate>
        
        <creator>Johrha Alotibi</creator>
        
        <creator>Badriah Almutanni</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Abdullah Baz</creator>
        
        <subject>Anti-money laundering; machine learning; supervised learning; cryptocurrency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In recent years, money laundering activities have shown rapid growth and have become a main concern for governments and financial institutions all over the world. As per recent statistics, $800 billion to $2 trillion is the estimated value of money laundered annually, of which $5 billion is obtained from cryptocurrency money laundering. As per the Financial Action Task Force (FATF), criminals may trade illegally obtained fiat money for cryptocurrency. Accordingly, detecting and preventing illegal transactions has become a serious challenge for governments. To combat money laundering, especially in cryptocurrency, effective techniques for detecting suspicious transactions must be developed, since current preventive efforts are outdated. In fact, deep learning and machine learning techniques may provide novel methods to detect suspect currency movements. This study investigates the applicability of deep learning and machine learning techniques for anti-money laundering in cryptocurrency. The techniques employed in this study are a Deep Neural Network (DNN), random forest (RF), K-Nearest Neighbors (KNN), and Naive Bayes (NB) with the Bitcoin Elliptic dataset. It was observed that the DNN and random forest classifiers achieved the highest accuracy rates, with promising findings in decreasing false positives compared to the other classifiers. In particular, the random forest classifier outperforms the DNN and achieves an F1-score of 0.99.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_87_Money_Laundering_Detection_using_Machine_Learning_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Lane Keeping Assist for an Autonomous Vehicle based on Steering Fuzzy-PID Control in ROS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131086</link>
        <id>10.14569/IJACSA.2022.0131086</id>
        <doi>10.14569/IJACSA.2022.0131086</doi>
        <lastModDate>2022-10-31T11:46:20.0000000+00:00</lastModDate>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <subject>Autonomous vehicles; automated steering; lane detection; fuzzy PID control; ROS; Gazebo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>An autonomous vehicle is a vehicle that can drive itself using a control system. Two modern autonomous assistance systems are proposed in this research. First, we introduce a real-time approach to detect street lanes: based on a series of multi-step image processing operations on input data from the camera, the vehicle’s steering angle is estimated for lane keeping. Second, a steering control system ensures that autonomous vehicles can operate stably and smoothly and adapt to various road conditions. The steering controller consists of a PID controller and a fuzzy logic control strategy to adjust the controller parameters. Simulation experiments in the Gazebo simulator of the Robot Operating System (ROS) not only indicate that the vehicle can keep the lane safely, but also demonstrate that the proposed steering angle controller is more stable and adaptive than the conventional PID controller.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_86-Adaptive_Lane_Keeping_Assist_for_an_Autonomous_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Hough Transform based on Object Dual and Pymp Library</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131084</link>
        <id>10.14569/IJACSA.2022.0131084</id>
        <doi>10.14569/IJACSA.2022.0131084</doi>
        <lastModDate>2022-10-31T11:46:19.9830000+00:00</lastModDate>
        
        <creator>Abdoulaye SERE</creator>
        
        <creator>Moise OUEDRAOGO</creator>
        
        <creator>Armand Kodjo ATIAMPO</creator>
        
        <subject>Hough transform; parallel computing; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Geometric shape detection in an image is a classical problem that leads to many applications: in cartography, to highlight roads in a noisy image; in medical imaging, to localize disease in a region; and in agronomy, to fight weeds with pesticides. The Hough Transform method contributes effectively to the recognition of digital objects such as straight lines, circles, and arbitrary objects. This paper deals with theoretical comparisons of object duals based on the definition of the Standard Hough Transform. It also focuses on the parallelism of the Hough Transform. A generic pseudo-code algorithm using the OpenMP library for the parallel computation of object duals is proposed in order to improve the execution time. In the simulation, a triangular mesh superimposed on the image is implemented with the pymp library in Python, with threads taken as inputs to read the image and to update the accumulator. The parallel computation reduces the execution time according to the rate of lit pixels in each virtual object and the number of threads. In future work, it will contribute to strengthening the development of a toolkit for the Hough Transform method.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_84-Parallel_Hough_Transform_based_on_Object_Dual_and_Pymp_Library.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototyping a Mobile Application for Children with Dyscalculia in Primary Education using Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131085</link>
        <id>10.14569/IJACSA.2022.0131085</id>
        <doi>10.14569/IJACSA.2022.0131085</doi>
        <lastModDate>2022-10-31T11:46:19.9830000+00:00</lastModDate>
        
        <creator>Misael Lazo-Amado</creator>
        
        <creator>Leoncio Cueva-Ruiz</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>App augmented class; design thinking; dyscalculia; miro app; TinkerCad</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Dyscalculia is a disorder involving difficulty in understanding numbers and mathematical operations, which causes a child greater stress when unable to solve the exercises proposed by the teacher. The objective of this research is therefore to create an innovation plan for a mobile prototype with augmented reality for children with dyscalculia in primary education. Design Thinking was used as the methodology, which allows us to understand users’ needs and implement new solutions to their problems, so that the project team can select the best proposed idea; this idea must then be applied to a model or design. The Miro application was used for the mobile prototype, TinkerCad was used for the 3D design of the educational games, and the App Augmented Class application was responsible for the visualization of augmented reality. The results were obtained through interviews with parents, indicating that the mobile prototype with augmented reality is a valuable contribution and should be applied for children. Finally, the prototype was validated by five experts, who gave the final prototype an acceptance rating of 86%. The conclusion of this research is an innovation model that addresses the problems of dyscalculia, improving understanding and comprehension in mathematics.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_85-Prototyping_a_Mobile_Application_for_Children_with_Dyscalculia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CoSiT: An Agent-based Tool for Training and Awareness to Fight the Covid-19 Spread</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131083</link>
        <id>10.14569/IJACSA.2022.0131083</id>
        <doi>10.14569/IJACSA.2022.0131083</doi>
        <lastModDate>2022-10-31T11:46:19.9670000+00:00</lastModDate>
        
        <creator>Henri-Joel Azemena</creator>
        
        <creator>Franck-Anael K. Mbiaya</creator>
        
        <creator>Selain K. Kasereka</creator>
        
        <creator>Ho Tuong Vinh</creator>
        
        <subject>Multi-agent system; covid-19; CoSiT; modeling-simulation; barrier measures; complex systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Since the beginning of 2020, following the recommendation of the Emergency Committee, the WHO (World Health Organization) Director-General has declared that the Covid-19 outbreak constitutes a Public Health Emergency of International Concern. Given the urgency of this outbreak, the international community is mobilizing to find ways to significantly accelerate the development of interventions. These interventions include raising awareness of ethical solutions such as wearing a face mask and respecting social distancing. Unfortunately, these solutions have been criticized, and the number of Covid-19 infections and deaths has only increased, because of the lack of respect for these gestures on the one hand, and the lack of awareness and training tools on the spread of this disease through simulation packages on the other. To encourage respect for these measures, the WHO intends to propose to its member states training and sensitization campaigns on the coronavirus through simulation packages, so that the right decisions are taken in time to save lives. A rigorous analysis of this problem has enabled us to identify three directions for reflection. First, how can we propose an IT tool based on these constraints in order to generalize training and awareness for all? Secondly, how can we model and simulate these prescribed measures in our current reality? Thirdly, how can we make it playful, interactive, and participative so that it is flexible according to the user’s needs? To address these questions, this paper proposes an interactive Agent-Based Model (ABM) describing a pedagogical (training and educational) tool that can help in understanding the spread of Covid-19 and then show the impact of the barrier measures recommended by the WHO. The implemented tool is quite simple to use and can help make appropriate and timely decisions to limit the spread of Covid-19 in the population.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_83-CoSiT_An_Agent_based_Tool_for_Training_and_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of an Ontology for Information Retrieval about Ethnic Groups in Chiang Mai Province</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131082</link>
        <id>10.14569/IJACSA.2022.0131082</id>
        <doi>10.14569/IJACSA.2022.0131082</doi>
        <lastModDate>2022-10-31T11:46:19.9530000+00:00</lastModDate>
        
        <creator>Phichete Julrode</creator>
        
        <creator>Thepchai Supnithi</creator>
        
        <subject>Information retrieval; ontology development; ethnic groups; knowledge organization; chiang mai</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This study aims to develop a semantic ontology of knowledge about ethnic groups by analyzing information from a collection of documentary sources from libraries, research, and the museum for learning about people of the highlands located in Chiang Mai Province. The study is based on the classification theory of ethnic groups in Chiang Mai Province, with the intention of establishing relationships within the knowledge structure regarding ethnic groups. The study procedure consists of three stages: 1) Establishing ontology requirements from online data by analyzing keyword data from the research database of Chiang Mai University Library&#39;s Online Information Resource Database (OPAC) and the Ratchamangkhalaphisek National Library, Chiang Mai, to group the words by studying information resources in the Thai language, such as books, textbooks, research papers, theses, research articles, academic articles, and reference books related to ethnic groups. 2) Designing classes, defining main classes, subclasses, hierarchies, and properties in order to establish the relationships of the data in each class using the Prot&#233;g&#233; program. 3) Ontology evaluation, which is divided into two parts: an expert&#39;s evaluation of the suitability of the ontology structure using the Inter-Class Relational Accuracy Assessment Scale, and an examination of the ethnic grouping data. The findings reveal that the specification, definition, scope, and objectives of the development are appropriate (average score = 0.97) in three areas: grouping and ordering of classes within the ontology (score = 0.98), defining affinity names and class properties (score = 0.96), and suitability of the overall ontology content (score = 0.97).</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_82-The_Development_of_an_Ontology_for_Information_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Personalized Recommendation of High-Quality Academic Resources based on user Portrait</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131080</link>
        <id>10.14569/IJACSA.2022.0131080</id>
        <doi>10.14569/IJACSA.2022.0131080</doi>
        <lastModDate>2022-10-31T11:46:19.9370000+00:00</lastModDate>
        
        <creator>Jianhui Xu</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Ily Amalina Ahmad Sabri</creator>
        
        <creator>Guoyi Li</creator>
        
        <creator>Chao Yang</creator>
        
        <creator>Mingxue Jin</creator>
        
        <subject>Personalized recommendation system; user portrait; academic resources; collaborative filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the advent of the era of big data, the phenomenon of information overload is becoming increasingly serious. It is difficult for academic users to obtain the information they want quickly and accurately when faced with massive academic resources. Aiming at optimizing academic resource recommendation services, this paper constructs a multi-dimensional academic user portrait model and proposes an academic resource recommendation algorithm based on the user portrait. This paper first reviews the relevant literature and information. Secondly, to obtain the attribute tags of multi-dimensional user portraits, a set of questionnaires is designed to collect the real information of academic users, and the corresponding academic user portrait model is constructed. Then, the collected data is processed through certain rules, and the user is quantitatively modeled based on the data through mathematical means. Finally, the completed academic user portrait model, combined with a collaborative filtering algorithm, provides personalized academic resource recommendation services for academic users. Verification and analysis through simulation experiments show that the proposed academic resource recommendation algorithm based on the user portrait plays a great role in expanding users&#39; fields of interest and discovering new hobbies across fields and disciplines.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_80-Research_on_Personalized_Recommendation_of_High_Quality_Academic_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Inspection of Learning Management Systems on Persuasiveness of Interfaces and Persuasive Design: A Case in a Higher Learning Institution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131081</link>
        <id>10.14569/IJACSA.2022.0131081</id>
        <doi>10.14569/IJACSA.2022.0131081</doi>
        <lastModDate>2022-10-31T11:46:19.9370000+00:00</lastModDate>
        
        <creator>Wan Nooraishya Wan Ahmad</creator>
        
        <creator>Mohamad Hidir Mhd Salim</creator>
        
        <creator>Ahmad Rizal Ahmad Rodzuan</creator>
        
        <subject>Learning management system; e-learning; persuasive design; persuasiveness; interface design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>An effective Learning Management System (LMS) is an essential factor that can increase e-learning persuasiveness. One of the components that needs to be addressed to design an effective LMS is the interface design. Instead of developing a new LMS, which requires a high cost, evaluating and improving the existing LMS is the best option. Issues like low completion rates and procrastination are common problems related to e-learning usage. These issues can be solved if academic institutions provide a proper LMS that helps students change their learning behaviors positively. Many previous studies claimed that they managed to implement persuasive technology into e-learning platforms to encourage positive learning behaviors. However, such claims can be questionable if the persuasive e-learning systems have not gone through a proper evaluation phase. This study uses the heuristic evaluation method to assess the persuasiveness level of LMS interfaces. The Persuasive Systems Design (PSD) model, on the other hand, is used to evaluate persuasive strategies in the LMS. The assessment involves students’ perspectives as the primary users to identify potential behavior change factors, especially regarding engagement. Thus, the objectives of this study are i) to investigate the persuasiveness of LMS interfaces and ii) to identify persuasive strategies in the LMS design. Apart from that, this study also produces a) recommendations on design examples to increase the persuasiveness of LMS interfaces and b) a mapping of LMS interfaces to the PSD framework that can be utilized by higher learning institutions.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_81-An_Inspection_of_Learning_Management_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture for Community-Sustained Cultural Mapping System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131079</link>
        <id>10.14569/IJACSA.2022.0131079</id>
        <doi>10.14569/IJACSA.2022.0131079</doi>
        <lastModDate>2022-10-31T11:46:19.9200000+00:00</lastModDate>
        
        <creator>Chong Eng Tan</creator>
        
        <creator>Sei Ping Lau</creator>
        
        <creator>Siew Mooi Wong</creator>
        
        <subject>Rural system architecture; telecentre; cultural mapping; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This paper presents a novel system architecture for implementing a cultural mapping system for the community of Buayan, a remote rural village in Sabah, East Malaysia. Considering various shortcomings of the local environment and the need for a community-sustained system, the cultural mapping system was designed around a new architecture to achieve minimal implementation cost and the higher reliability needed to survive the rural environment. The new architecture evolves from design and implementation experience with previous Telecentres, which targeted larger-scale ICT systems. This paper also highlights the critical influence of power provision on digital system implementation in rural areas, which always incurs a significant share of the overall implementation cost; an efficient ICT system architecture will significantly reduce the cost of its associated power provision. The implementation of the cultural mapping system using the new ICT architecture at Buayan is also described.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_79-A_Novel_Architecture_for_Community_Sustained_Cultural_Mapping_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Multi-stage Reverse Osmosis Desalination Using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131077</link>
        <id>10.14569/IJACSA.2022.0131077</id>
        <doi>10.14569/IJACSA.2022.0131077</doi>
        <lastModDate>2022-10-31T11:46:19.9070000+00:00</lastModDate>
        
        <creator>Batiseba Tekle</creator>
        
        <creator>Azmi Alazzam</creator>
        
        <creator>Abdulwehab Ibrahim</creator>
        
        <creator>Ghassan Malkawi</creator>
        
        <creator>Abdulaziz Fares NajiMoqbel</creator>
        
        <creator>Nissar Qureshi</creator>
        
        <creator>Ahmed Hamadat</creator>
        
        <creator>Filomento O. Corona Jr</creator>
        
        <subject>Artificial intelligence; artificial neural network; desalination; regression; reverse osmosis; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Population growth has resulted in a decrease in readily available sources of potable water. Desalination is one of many approaches that have been studied and proposed as a way out of this predicament. In this study, the multistage Reverse Osmosis (RO) desalination process is modelled, since it has the potential to achieve a higher purity percentage than the single-stage RO desalination process. Some researchers have studied distinctive AI tools, specifically the Artificial Neural Network as a regression model and genetic algorithms as an optimization technique, in desalination and water treatment processes. This paper aims to examine multistage RO desalination by employing various artificial intelligence (AI) techniques, including the Artificial Neural Network (ANN) and the Support Vector Machine (SVM). Both training methods used in this research fall under the category of regression algorithms, which establish a predictive link between variables and labels. The main finding of this study was a noticeable decrease in Mean Square Error (MSE) in the second stage when the data was trained using the ANN, whereas the MSE increased in the second stage when the data was trained using the SVM. The results indicate that applying ANN and SVM to RO desalination process modelling would yield substantial improvements. Future work will focus on predicting and improving the performance of ANN and SVM prediction with other function variables.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_77-Analyzing_Multi_stage_Reverse_Osmosis_Desalination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Platform based on Smartphone to Increase Students’ Interaction in Classroom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131078</link>
        <id>10.14569/IJACSA.2022.0131078</id>
        <doi>10.14569/IJACSA.2022.0131078</doi>
        <lastModDate>2022-10-31T11:46:19.9070000+00:00</lastModDate>
        
        <creator>Mohamed Naser AlSubie</creator>
        
        <creator>Omar Ben Bahri</creator>
        
        <subject>Android; classroom; e-learning; data management; IoT; smartphone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Current smartphones meet all the criteria for university application. This technology opens the door to developing new techniques that enhance teaching methods, and it presents an interesting solution for guiding and helping students. Thus, this proposal aims to provide an Android-based smartphone platform to help students and teachers manage their courses. It relies on the Internet of Things to increase digital interaction and improve the teaching process while delivering traditional lectures. The system encompasses three main parts. The first guides students to find their classroom and the teacher’s desk. The second helps teachers monitor student attendance. The third is dedicated to improving e-learning in the classroom by managing the educational process, with the purpose of providing an adequate platform for data management. The platform therefore succeeded in providing an adequate solution that prevents the misuse of smartphones in the classroom and enhances learning methods using smart technologies.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_78-Educational_Platform_based_on_Smartphone_to_Increase_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Training Load Prediction Model based on Improved BP Neural Network in Sports Training of Athletes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131076</link>
        <id>10.14569/IJACSA.2022.0131076</id>
        <doi>10.14569/IJACSA.2022.0131076</doi>
        <lastModDate>2022-10-31T11:46:19.8900000+00:00</lastModDate>
        
        <creator>Lin Liu</creator>
        
        <creator>Guannan Sheng</creator>
        
        <subject>BP neural network; adaptive genetic algorithm; selection operator; training load</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the enhancement of data mining technology, the informatization of competitive sports has become an inevitable trend. Using data mining to help athletes train scientifically, assist coaches in rational decision-making, and improve team competitiveness has become common practice. In competitive sports, cyclists&#39; adaptation to training has a complex relationship with their physical performance. To explore the correlations in the data and provide better training data for athletes, this study proposes a load prediction model based on the Back Propagation (BP) neural network. Considering the local convergence and random initialization of the traditional BP model, an adaptive genetic algorithm with an improved selection operator is used to determine the initial weights and thresholds of the BP neural network and improve the accuracy of the prediction model. The experimental results show that the improved adaptive genetic algorithm improves the overall optimization ability of the BP neural network, that the improved BP neural network model is stable during convergence, and that the algorithm can search for better weights and thresholds. Compared with the basic BP neural network prediction model, the accuracy of the optimized prediction model is increased by 11.86% and the average error is reduced by 26.21%, which provides guidance for improving the training effect of the cycling team&#39;s competitive sports.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_76-Application_of_Training_Load_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Academic Early Warning Model of Distance Education based on Student Behavior Data in the Context of COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131074</link>
        <id>10.14569/IJACSA.2022.0131074</id>
        <doi>10.14569/IJACSA.2022.0131074</doi>
        <lastModDate>2022-10-31T11:46:19.8730000+00:00</lastModDate>
        
        <creator>Yi Qu</creator>
        
        <creator>Zhiyuan Sun</creator>
        
        <creator>Libin Liu</creator>
        
        <subject>COVID-19; Student behavior data; Distance education; Academic early warning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The COVID-19 epidemic has had a great impact on the entire society, and the spread of the novel coronavirus has brought considerable inconvenience to the education industry. To ensure the sustainability of education, distance education plays a significant role, and during distance education it is necessary to examine the learning situation of students. This study proposes an academic early warning model based on long short-term memory (LSTM), which first extracts and classifies students’ behavior data and then uses an optimized LSTM to establish the early warning model. The precision of the optimized LSTM algorithm is 0.929, the recall is 0.917, and the F value is 0.923, showing a higher degree of convergence than the basic LSTM algorithm. In an actual case analysis, the accuracy of the academic early warning system is 92.5%. The LSTM neural network shows high performance after parameter optimization, and the LSTM-based academic early warning model also achieves high accuracy in the case analysis, which proves the feasibility of the established model.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_74-Research_on_the_Academic_Early_Warning_Model_of_Distance_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Polymorphism without Inheritance: Implications for Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131075</link>
        <id>10.14569/IJACSA.2022.0131075</id>
        <doi>10.14569/IJACSA.2022.0131075</doi>
        <lastModDate>2022-10-31T11:46:19.8730000+00:00</lastModDate>
        
        <creator>Ivaylo Donchev</creator>
        
        <creator>Emilia Todorova</creator>
        
        <subject>Inheritance; polymorphism; object-oriented; C++; type erasure; pointers; templates; lambda expressions; teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Polymorphism is a core OO concept. Despite the rich pedagogical experience in teaching it, students still have difficulty perceiving it correctly and in its many facets. This article offers a method for deeper study of the concept of polymorphism by extending the learning content of the CS2 C++ Programming course with an implementation variant of dynamic polymorphism via type erasure, without using inheritance. The research is based on an inductive approach with a gradual expansion of functionality as new concepts are introduced. The stages of developing such a project and the implementation details of each functionality are traced. The results of experimental training showed higher scores for the experimental group in mastering the topics related to polymorphism. Based on these findings, recommendations for constructing the lecture course and organizing the laboratory work are suggested.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_75-Dynamic_Polymorphism_without_Inheritance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Jaya Algorithm for Multi-objective Optimisation Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131073</link>
        <id>10.14569/IJACSA.2022.0131073</id>
        <doi>10.14569/IJACSA.2022.0131073</doi>
        <lastModDate>2022-10-31T11:46:19.8600000+00:00</lastModDate>
        
        <creator>Rahaini Mohd Said</creator>
        
        <creator>Roselina Sallehuddin</creator>
        
        <creator>Nor Haizan Mohd Radzi</creator>
        
        <creator>Wan Fahmn Faiz Wan Ali</creator>
        
        <subject>MOJaya; chaotic inertia weight; ZDT benchmark function; convergence metric; diversity metric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Evolutionary algorithms are suitable techniques for solving complex problems, and many improvements have been made to the original algorithm structures in order to obtain more desirable solutions. The current study intends to enhance multi-objective performance on benchmark optimisation problems by incorporating a chaotic inertia weight into the existing multi-objective Jaya (MOJaya) algorithm. Jaya is a recently established population-oriented algorithm in which exploitation proves dominant, owing to its propensity to become trapped in local optima. This research addressed that shortcoming by refining the MOJaya solution-update equation for exploration-exploitation balance, enhancing diversity, and deterring premature convergence, while retaining the fundamentals of the algorithm and sustaining its parameter-free nature. The recommended chaotic inertia weight multi-objective Jaya (MOiJaya) algorithm was assessed on the well-known ZDT benchmark functions with 30 variables, and its performance was analysed using the convergence metric (CM) and diversity metric (DM). The algorithm enhanced the exploration-exploitation balance and substantially prevented premature convergence. The proposed algorithm was then compared with several other algorithms; based on the convergence metric and diversity metric results, the recommended MOiJaya algorithm resolved multi-objective optimisation problems better than the others.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_73-Enhanced_Jaya_Algorithm_for_Multi_objective_Optimisation_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exponential Decay Function-Based Time-Aware Recommender System for e-Commerce Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131071</link>
        <id>10.14569/IJACSA.2022.0131071</id>
        <doi>10.14569/IJACSA.2022.0131071</doi>
        <lastModDate>2022-10-31T11:46:19.8430000+00:00</lastModDate>
        
        <creator>Ayat Yehia Hassan</creator>
        
        <creator>Etimad Fadel</creator>
        
        <creator>Nadine Akkari</creator>
        
        <subject>Time-aware recommender system; context-aware recommender system; matrix factorization; K-Nearest Neighbor (KNN); and Sparse Linear Method (SLIM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Unlike traditional recommendation systems that rely only on the user&#39;s preferences, context-aware recommendation systems (CARS) consider the user&#39;s contextual information, such as time, weather, and geographical location. These data are used to create more intelligent and effective recommendation systems. Time is one of the most important and influential factors affecting users’ preferences and purchasing behavior. Thus, in this paper, time-aware recommendation systems are investigated using two common methods (Bias and Decay) to incorporate the time parameter into three different recommendation algorithms: Matrix Factorization, K-Nearest Neighbor (KNN), and the Sparse Linear Method (SLIM). The performance study is based on an e-commerce database that includes basic user purchasing actions such as add-to-cart and buy. Results are compared in terms of precision, recall, and Mean Average Precision (MAP). The results show that Decay-MF and Decay-SLIM outperform the Bias-based CAMF and CA-SLIM. On the other hand, Decay-KNN reduced the accuracy of the recommender system compared to the context-unaware KNN.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_71-Exponential_Decay_Function_Based_Time_Aware_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Assessment Framework for Evaluating Adaptive Security and Privacy Solutions for IoT e-Health Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131072</link>
        <id>10.14569/IJACSA.2022.0131072</id>
        <doi>10.14569/IJACSA.2022.0131072</doi>
        <lastModDate>2022-10-31T11:46:19.8430000+00:00</lastModDate>
        
        <creator>Waqas Aman</creator>
        
        <creator>Fatima Najla Mohammed</creator>
        
        <subject>Internet of Things; Adaptive Security; IoT Architecture; e-Health; Effectiveness; Privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>There exist numerous adaptive security and privacy (S&amp;P) solutions to manage potential threats at runtime. However, there is a lack of a comprehensive assessment framework that can holistically validate their effectiveness. Existing adaptive S&amp;P assessment efforts either focus on privacy or security in general, or on specific adaptive S&amp;P attributes, e.g. authentication, and at times disregard the architecture in which they should be comprehended. In this paper, we propose a holistic assessment framework for evaluating adaptive S&amp;P solutions for IoT e-health. The framework utilizes a proposed classification of the essential attributes that must be recognized, evaluated, and incorporated for adaptive S&amp;P solutions to be effective in the most common IoT architectures, fog-based and cloud/server-based. As opposed to the existing related work, the classification comprehensively covers all the major classes of essential attributes, such as S&amp;P objectives, contextual factors, adaptation action aptitude, and the system’s self-* properties. Using this classification, the framework assists in evaluating whether a given attribute exists with respect to the adaptation process and in the context of the architectural layers. It thus stresses where an essential attribute should be realized, in the adaptation phases and in the architecture, for an adaptive S&amp;P solution to be effective. We also compare the proposed assessment framework with existing related frameworks and show that it exhibits substantial completeness over existing works in assessing the feasibility of a given adaptive S&amp;P solution.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_72-A_Comprehensive_Assessment_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Land Use/Land Cover Classification based on Different Bands of Sentinel-2 Satellite Imagery using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131070</link>
        <id>10.14569/IJACSA.2022.0131070</id>
        <doi>10.14569/IJACSA.2022.0131070</doi>
        <lastModDate>2022-10-31T11:46:19.8270000+00:00</lastModDate>
        
        <creator>Pallavi M</creator>
        
        <creator>Thivakaran T K</creator>
        
        <creator>Chandankeri Ganapathi</creator>
        
        <subject>Sentinel-2; neural networks; convolutional neural networks; remote sensing data; land use land cover maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Spatial data analytics is an emerging technology, and artificial neural network techniques play a major role in analysing critical datasets. Integrating remote sensing data with deep neural networks has opened the way to several research problems. This paper aims at producing a land use land cover (LULC) map of the Bangalore region, Karnataka, India, with various band combinations of Sentinel-2 satellite imagery obtained from Google Earth Engine. The LULC map classes include water, urban, forest, vegetation, and open land. Band combinations of satellite images represent different characteristics of spatial data; hence, several band combinations were used to build the LULC maps. Classified maps were also generated using different neural networks with a pixel-based classification approach. Appropriate performance metrics were identified to evaluate the classification results: accuracy, precision, recall, F1-score, and the confusion matrix. Among the neural networks, the Convolutional Neural Network outperformed the rest, with 98.1% accuracy and lower error rates in the confusion matrix for the RGBNIR (4328) band combination of the satellite imagery.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_70-Evaluation_of_Land_Use_Land_Cover_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning and Machine Learning Approach for Image Classification of Tempered Images in Digital Forensic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131069</link>
        <id>10.14569/IJACSA.2022.0131069</id>
        <doi>10.14569/IJACSA.2022.0131069</doi>
        <lastModDate>2022-10-31T11:46:19.8100000+00:00</lastModDate>
        
        <creator>Praveen Chitti</creator>
        
        <creator>K. Prabhushetty</creator>
        
        <creator>Shridhar Allagi</creator>
        
        <subject>Support Vector Machine (SVM); Discrete Cosine Transform (DCT); Convolution Neural Network (CNN); Image Forensics and Image Forgery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Multimedia images are a primary means of communication across social media and other websites. Multimedia security has gained the attention of modern researchers and poses dynamic challenges such as image forensics, image tampering, and deep fakes. Malicious users tamper with images by embedding noise, leading to misinterpretation of the content. It is essential to identify and authenticate an image by detecting the forgery operations performed on it. In the proposed model, we detect the forged region using the machine learning model SVM in the first iteration and a Convolutional Neural Network in the second iteration, with the Discrete Cosine Transform (DCT) used for feature extraction. The proposed model is tested on the Corel 10K dataset, and an average accuracy of 98% is obtained across all kinds of image operations, including scaling, rotation, and augmentation.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_69-A_Deep_Learning_and_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Prediction Model for Compiler Optimization with Hybrid Meta-Heuristic Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131068</link>
        <id>10.14569/IJACSA.2022.0131068</id>
        <doi>10.14569/IJACSA.2022.0131068</doi>
        <lastModDate>2022-10-31T11:46:19.8100000+00:00</lastModDate>
        
        <creator>Sandeep U. Kadam</creator>
        
        <creator>Sagar B. Shinde</creator>
        
        <creator>Yogesh B. Gurav</creator>
        
        <creator>Sunil B Dambhare</creator>
        
        <creator>Chaitali R Shewale</creator>
        
        <subject>Compiler; prediction; improved relief; HBA-BEO model; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Compiler designers need months, or sometimes years, to construct heuristic optimization rules for a given compiler. For every new processor, the modelers must readjust the heuristics to obtain the expected processor performance. The main purpose of the developed approach is to build a prediction model with optimization constraints for transforming programs at a lower training overhead. This optimization problem needs to be addressed with a novel prediction model built on derived features and a neural network. Here, a novel Compiler Optimization Prediction Model is developed. Static and dynamic features, as well as improved Relief-based features, are derived and provided as input to a Neural Network (NN) scheme, in which the weights are tuned via the Honey Badger Adopted BES (HBA-BEO) model. Finally, the NN offers the final predicted output. The analysis outcomes prove the superiority of the HBA-BEO model.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_68-A_Novel_Prediction_Model_for_Compiler_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Model of Augmented Reality Mobile Application Design (ARMAD) to Enhance user Experience: An Expert Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131067</link>
        <id>10.14569/IJACSA.2022.0131067</id>
        <doi>10.14569/IJACSA.2022.0131067</doi>
        <lastModDate>2022-10-31T11:46:19.7800000+00:00</lastModDate>
        
        <creator>Nik Azlina Nik Ahmad</creator>
        
        <creator>Ahmad Iqbal Hakim Suhaimi</creator>
        
        <creator>Anitawati Mohd Lokman</creator>
        
        <subject>Augmented reality; conceptual design model; emotional UX; Kansei Engineering; mobile application; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Rapid technological advancement has altered the demands of user experience (UX) in product design. However, research has shown a gap and paucity of conceptual and practical models in this field that may serve as guidelines for the design of immersive technologies such as augmented reality (AR) applications. Identifying the variables and components that influence the enhancement of AR design is critical for creating a great UX. The literature indicates that emotion is the primary driver of UX. Therefore, this study proposes a conceptual design model for AR mobile applications that incorporates user interface, interaction, and content design elements while taking user emotions into consideration in order to improve the UX. The focus of this study is to evaluate the proposed conceptual design model of augmented reality mobile application design (ARMAD) through expert reviews. Feedback from the expert reviewers was compiled and illustrated in order to refine the initial ARMAD model. The findings showed that the majority of the expert reviewers agreed that the conceptual design model is suitable as a guideline for designing AR applications that enhance the UX through emotional elicitation. Accordingly, the ARMAD model was refined according to the feedback and suggestions from the expert reviewers. This model will provide researchers and practitioners insight into the essence of the AR design elements that influence the user experience.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_67-Conceptual_Model_of_Augmented_Reality_Mobile_Application_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Architecture based on DenseNet-121 Model for Weather Image Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131065</link>
        <id>10.14569/IJACSA.2022.0131065</id>
        <doi>10.14569/IJACSA.2022.0131065</doi>
        <lastModDate>2022-10-31T11:46:19.7500000+00:00</lastModDate>
        
        <creator>Saleh A. Albelwi</creator>
        
        <subject>Weather recognition; DenseNet-121; deep learning; data augmentation; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Weather conditions have a significant effect on humans’ daily lives and production, ranging from clothing choices to travel, outdoor sports, and solar energy systems. Recent advances in computer vision based on deep learning methods have shown notable progress in both scene awareness and image processing problems. These results have highlighted network depth as a critical factor, as deeper networks achieve better outcomes. This paper proposes a deep learning model based on DenseNet-121 to effectively recognize weather conditions from images. DenseNet performs significantly better than previous models; it also uses less processing power and memory to further increase its efficiency. Since this field currently lacks adequate labeled images for training in weather image recognition, transfer learning and data augmentation techniques were applied. Using the ImageNet dataset, these techniques fine-tuned pre-trained models to speed up training and achieve better end results. Because DenseNet-121 requires sufficient data and is architecturally complex, the expansion of data via geometric augmentation—such as rotation, translation, flipping, and scaling—was critical in decreasing overfitting and increasing the effectiveness of fine-tuning. These experiments were conducted on the RFS dataset, and the results demonstrate both the efficiency and advantages of the proposed method, which achieved an accuracy rate of 95.9%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_65-Deep_Architecture_based_on_DenseNet_121_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Improved Shallow Neural Network Big Data Processing Model based on Gaussian Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131066</link>
        <id>10.14569/IJACSA.2022.0131066</id>
        <doi>10.14569/IJACSA.2022.0131066</doi>
        <lastModDate>2022-10-31T11:46:19.7500000+00:00</lastModDate>
        
        <creator>Lifang Fu</creator>
        
        <subject>Gaussian function; RBF; big data processing; incremental learning; sliding window</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The applications of the new generation of communication technology are gradually diversifying, and the number of Internet users worldwide is increasing, leading some large enterprises to rely increasingly on faster and more efficient big data processing technology. To address the shortcomings of current big data processing algorithms, such as slow computing speed, insufficient computing accuracy, and poor online real-time learning ability, this research combines incremental learning and sliding window ideas to design two improved radial basis function (RBF) neural network algorithms with the Gaussian function as the kernel. The Duffing equation example and the data of &quot;Top 100 single products for Taobao search glasses sales&quot; were used to verify the performance of the designed algorithms. The Duffing equation experiments show that when the total sample size is 100,000, the mean square errors of the IOL, SWOL, SVM, and ResNet50 algorithms are 1.86e-07, 1.59e-07, 3.37e-07, and 2.67e-07, respectively. The experiments on the &quot;Top 100 SKUs for Taobao Search Glasses Sales&quot; dataset show that when the test set contains 800 samples, the root mean square errors of the IOL, SWOL, SVM, and ResNet50 algorithms are 0.0060, 0.0056, 0.0069, and 0.0073, respectively. This shows that the RBF online learning algorithm designed in this study, which integrates sliding windows, has a stronger comprehensive ability to process big data and has application value for improving the accuracy of online data-based commodity recommendation in e-commerce and other industries.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_66-Research_on_Improved_Shallow_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on the Technical Characteristics of Badminton Players in Different Stages through Video Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131064</link>
        <id>10.14569/IJACSA.2022.0131064</id>
        <doi>10.14569/IJACSA.2022.0131064</doi>
        <lastModDate>2022-10-31T11:46:19.7330000+00:00</lastModDate>
        
        <creator>Jin Qiu</creator>
        
        <subject>Women’s singles; video analysis; athletes; badminton; technical characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Through video analysis, Tai Tzu Ying, an excellent athlete, and badminton player A from Chongqing Institute of Engineering were studied in this paper. The videos of the two athletes were organized and recorded, and their use of techniques in different stages was compared. The results found that Tai Tzu Ying’s serve technique was flexible, with few errors, while Player A’s serve technique was monotonous and had many errors; in terms of serve receiving, Tai Tzu Ying was more aggressive, mainly using the rush shot and the spinning net shot, while A mainly used the spinning net shot and the lift shot. The comparison of techniques in the front, middle and back courts showed that Tai Tzu Ying’s playing style was more aggressive, while A’s was more conservative. This paper compared the two athletes to understand the technical characteristics of excellent athletes and gave some suggestions for the training of school badminton players.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_64-Study_on_the_Technical_Characteristics_of_Badminton_Players.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Method for Power Load Data of New Energy Grid based on Improved OTSU Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131063</link>
        <id>10.14569/IJACSA.2022.0131063</id>
        <doi>10.14569/IJACSA.2022.0131063</doi>
        <lastModDate>2022-10-31T11:46:19.7170000+00:00</lastModDate>
        
        <creator>Xun Ma</creator>
        
        <creator>Kai Liu</creator>
        
        <creator>Anlei Liu</creator>
        
        <creator>Xuchao Jia</creator>
        
        <creator>Yong Wang</creator>
        
        <subject>Improved OTSU algorithm; new energy grid; power load; classification method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The classification method for power load data of new energy grids based on an improved OTSU algorithm is studied to improve the classification accuracy of power load data. Following the idea of two-dimensional visualization of time series, the Gramian Angular Field (GAF) method is used to transform the current data of the power load in a new energy grid into a two-dimensional image of the power load. Intra-class dispersion is introduced, and the improved OTSU algorithm is used to segment the foreground and background of the two-dimensional image according to the pixel gray values of the image and the one-dimensional inter-class variance corresponding to the gray values of each pixel neighborhood. The two-dimensional foreground image of the power load is taken as the input sample of a convolutional neural network, which extracts the features of the foreground image through its convolution layers. According to the extracted features, the classification results of the power load data of the new energy grid are output through three steps: nonlinear processing, pooling, and fully connected layer classification. The experimental results show that this method can accurately classify the power load data of new energy grids, with a classification accuracy higher than 97%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_63-Classification_Method_for_Power_Load_Data_of_New_Energy_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Blind Obstacle Ranging based on Improved YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131062</link>
        <id>10.14569/IJACSA.2022.0131062</id>
        <doi>10.14569/IJACSA.2022.0131062</doi>
        <lastModDate>2022-10-31T11:46:19.7170000+00:00</lastModDate>
        
        <creator>Yongquan Xia</creator>
        
        <creator>Yiqing Li</creator>
        
        <creator>Jianhua Dong</creator>
        
        <creator>Shiyu Ma</creator>
        
        <subject>Binocular ranging; object detection; attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>An improved model based on YOLOv5s is proposed to address the YOLOv5 network model’s low localization accuracy when detecting and identifying obstacles at different distances from the blind and of different sizes, which in turn leads to low distance-measurement accuracy. There are two main core ideas: first, a feature scale and a corresponding prediction head are added to YOLOv5 to improve the detection accuracy of small objects on blind paths. Second, the SK attention mechanism is introduced in the feature fusion part. It can adaptively adjust the receptive field for feature maps of different scales and more accurately extract objects of different distances and sizes on the blind path, improving both detection accuracy and the accuracy of the subsequent distance measurement. Experiments demonstrated that the improved YOLOv5 model improved mAP by 6.29% compared to the original YOLOv5 model, with only a small difference in time consumption, and the per-category AP improvements ranged from 2.13% to 8.19%. The average accuracy of the distance measured to obstacles 1.5 m to 3.5 m from the camera is 98.20%. This shows that the improved YOLOv5 algorithm has good real-time performance and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_62-Research_on_Blind_Obstacle_Ranging_based_on_Improved_YOLOv5.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fake News Detection System based on Combination of Word Embedded Techniques and Hybrid Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131061</link>
        <id>10.14569/IJACSA.2022.0131061</id>
        <doi>10.14569/IJACSA.2022.0131061</doi>
        <lastModDate>2022-10-31T11:46:19.7030000+00:00</lastModDate>
        
        <creator>Mohamed-Amine OUASSIL</creator>
        
        <creator>Bouchaib CHERRADI</creator>
        
        <creator>Soufiane HAMIDA</creator>
        
        <creator>Mouaad ERRAMI</creator>
        
        <creator>Oussama EL GANNOUR</creator>
        
        <creator>Abdelhadi RAIHANI</creator>
        
        <subject>Deep learning (DL); Bidirectional Long Short-Term Memory (BILSTM); Convolutional Neural Network (CNN); Natural Language Processing (NLP); fake news</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>At present, most people prefer using different online sources for reading news. These sources can easily spread fake news for several malicious reasons. Detecting this unreliable news is an important task in the Natural Language Processing (NLP) field. Many governments and technology companies are engaged in this research field to prevent the manipulation of public opinion and spare people and society the huge damage that can result from the spreading of misleading information on online social media. In this paper, we present a new deep learning method to detect fake news based on a combination of different word embedding techniques and a hybrid Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) model. We trained the classification model on the unbiased dataset WELFake. The best results were obtained by combining a pre-trained Word2Vec CBOW model and a Word2Vec Skip-Gram model with a CNN on top of BiLSTM layers, yielding an accuracy of up to 97%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_61-A_Fake_News_Detection_System_based_on_Combination_of_Word_Embedded_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Learning Approach for Measuring the Response of Brain Tumors to Chemotherapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131060</link>
        <id>10.14569/IJACSA.2022.0131060</id>
        <doi>10.14569/IJACSA.2022.0131060</doi>
        <lastModDate>2022-10-31T11:46:19.6870000+00:00</lastModDate>
        
        <creator>Omneya Atef</creator>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <creator>Hisham Abdelsalam</creator>
        
        <subject>Federated Learning (FL); BraTS2021; Data Privacy; O6-Methylguanine-DNA Methyltransferase (MGMT); OpenFL; EfficientNet-B3; brain tumors; Deep Learning (DL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Brain tumor is a fatal disease and one of the major causes of rising death rates in adults. Predicting the methylation status of the O6-Methylguanine-DNA Methyltransferase (MGMT) gene using Magnetic Resonance Imaging (MRI) is highly important, since it is a predictor of brain tumor response to chemotherapy, which reduces the number of needed surgeries. Deep Learning (DL) approaches have become powerful at extracting meaningful relationships and making accurate predictions. DL-based models require a large database and access to, or transfer of, patient data to train the model. Federated machine learning has recently gained popularity, as it offers practical solutions for data privacy, centralized computation, and high computing power. This study investigates the feasibility of federated learning (FL) by developing an FL-based approach to predict MGMT promoter methylation status using the BraTS2021 dataset for the four MRI sequence types: Fluid Attenuated Inversion Recovery (FLAIR), T1-weighted (T1w), T1-weighted Gadolinium Post Contrast (T1wCE/T1Gd), and T2-weighted (T2w). The FL model was compared to the DL-based one, and the experimental results show that, even with imbalanced and heterogeneous datasets, the FL approach reached 99.99% of the model quality achieved with centralized data after 300 communication rounds among 10 institutions, using the OpenFL framework and an improved EfficientNet-B3 neural network architecture.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_60-Federated_Learning_Approach_for_Measuring_the_Response_of_Brain_Tumors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigation of DDoS Attack in Cloud Computing Domain by Integrating the DCLB Algorithm with Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131059</link>
        <id>10.14569/IJACSA.2022.0131059</id>
        <doi>10.14569/IJACSA.2022.0131059</doi>
        <lastModDate>2022-10-31T11:46:19.6700000+00:00</lastModDate>
        
        <creator>Amrutha Muralidharan Nair</creator>
        
        <creator>R Santhosh</creator>
        
        <subject>DDoS attack; resource scaling; DCLB; fuzzy logic; traffic parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Cloud computing is an easy way to obtain services, resources and applications from any location on the internet, and it is an inevitable choice for the future of data generation. Despite its many attractive properties, the cloud is vulnerable to a variety of attacks. One well-known attack that targets the availability of amenities is the Distributed Denial of Service (DDoS). A DDoS attack overwhelms the server with massive quantities of regular or intermittent traffic. It compromises the cloud servers’ services and makes it harder to respond to legitimate users of the cloud. A monitoring system with a correct resource scaling approach should be created to regulate and monitor DDoS attacks. During an attack, the network is overwhelmed with excessive traffic of resource-intensive requests, resulting in the denial of needed services to genuine users. In this research, a unique way to analyze the resources used by cloud users is proposed: the resources consumed are lowered when the network is overburdened with excessive traffic, and the Dynamic Cloud Load Balancing (DCLB) algorithm is used to balance the overhead towards the server. The core premise is to monitor traffic using the fuzzy logic approach, which employs different traffic parameters in conjunction with various built-in measures to recognize DDoS attack traffic in the network. Finally, the proposed method shows a 93% average detection rate when compared to the existing model. This method is a unique attempt to comprehend the importance of DDoS mitigation techniques as well as good resource management during an attack.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_59-Mitigation_of_DDoS_Attack_in_Cloud_Computing_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of New Feature based on Ordinal Pattern Analysis for Iris Biometric Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131058</link>
        <id>10.14569/IJACSA.2022.0131058</id>
        <doi>10.14569/IJACSA.2022.0131058</doi>
        <lastModDate>2022-10-31T11:46:19.6700000+00:00</lastModDate>
        
        <creator>Sheena S</creator>
        
        <creator>Sheena Mathew</creator>
        
        <creator>Bindu M Krishna</creator>
        
        <subject>Dispersion entropy; iris recognition; rubber-sheet data; ordinal patterns; correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The iris recognition technique is currently the most efficient biometric identification system and is common on the practical front. Though most commercial systems use the patented Daugman’s algorithm, which mainly uses wavelet-based features, research is still active in identifying novel features that can provide personal identification. Here, the first novel proposal of using an ordinal pattern measure based on nonlinear time series analysis is put forth to characterize the unique pattern of an individual’s iris and thereby perform personal identification. Dispersion entropy is a nonlinear time-series analysis method that is highly efficient in characterizing the complexity of any data series, with proven effectiveness in characterizing model system dynamics as well as real-world data series. The results show that dispersion entropy can be used to identify iris images of specific individuals. The efficiency of this method is evaluated by computing the correlation and RMSE between dispersion entropy values of normalized iris image rubber-sheet data. The experimental results on the popular iris database CASIA v1 demonstrate that the proposed method can effectively perform differential identification of iris images from different individuals. The results specifically indicate that the density of information is higher along the angular direction of iris images, which falls along the rows of the rubber-sheet data. This can be efficiently utilized with the method of ordinal pattern characterization, which proves to have promising potential for incorporation into biometric personal identification systems.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_58-Performance_Evaluation_of_New_Feature_based_on_Ordinal_Pattern_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Cyber-Physical Attacks using Physical Model with Nonparametric EWMA Detector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131057</link>
        <id>10.14569/IJACSA.2022.0131057</id>
        <doi>10.14569/IJACSA.2022.0131057</doi>
        <lastModDate>2022-10-31T11:46:19.6570000+00:00</lastModDate>
        
        <creator>Joko Supriyadi</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>Industrial control systems; cyber-physical attacks; physical model; dynamic mode decomposition method with control (DMDc); nonparametric exponentially weighted moving average (EWMA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>An Industrial Control System (ICS) can suffer from cyber-physical attacks resulting in accidents, damage, or financial loss. The attacks can be detected either in the physical space or in the cyberspace of the ICS. Detection in the physical space can be based on physical models of the system. To model the physical system, this study uses a data-driven modeling approach as an alternative to the analytic one. The system is modeled using the dynamic mode decomposition method with control (DMDc), assuming a full state measurement. The attack detector used in some studies with predictive physical models is the cumulative sum (CUSUM), which only applies to normally distributed residual data. To detect cyber-physical attacks, this research instead uses a nonparametric exponentially weighted moving average (EWMA) detector. The study uses a data set from a testbed of Secure Water Treatment (SWaT). The approach was successful in detecting 8 out of 10 attacks on the first SWaT subsystem. This study demonstrates that the DMDc model yields a better goodness of fit and that the nonparametric EWMA can be used as an alternative detector when residual data do not follow a normal distribution.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_57-Detection_of_Cyber_Physical_Attacks_using_Physical_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm for Shrinking Blood Receptacles using Retinal Internal Pictures for Clinical Characteristics Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131056</link>
        <id>10.14569/IJACSA.2022.0131056</id>
        <doi>10.14569/IJACSA.2022.0131056</doi>
        <lastModDate>2022-10-31T11:46:19.6400000+00:00</lastModDate>
        
        <creator>Aws A. Abdulsahib</creator>
        
        <creator>Moamin A Mahmoud</creator>
        
        <creator>Sally A. Al-Hasnawi</creator>
        
        <subject>Segmentation vessels / shrinking blood receptacles; clinical characteristics measurement; internal pictures for retinal; morphological filtering algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The manual technique that might be used for segmenting (shrinking) blood vessels in retinal fundus images has significant limitations, such as high time consumption and the possibility of human error, which appear precisely with the sophisticated structure of the blood receptacles and the huge number of retinal fundus photographs that need to be analyzed. The proposed automatic algorithm performs shrinking and extracts helpful clinical characteristics from retinal fundus photographs in order to lead the eye caregiver to early diagnosis of various retinal disorders and therapy evaluations. A precise, quick, and fully automatic algorithm for shrinking blood receptacles and a clinical characteristics measuring technique for internal retinal pictures are suggested in order to increase diagnostic accuracy and reduce the ophthalmologist&#39;s burden. The proposed algorithm&#39;s main pipeline consists of two fundamental stages: picture shrinkage and medical feature elicitation. Many exhaustive experiments were conducted to evaluate the efficacy of the fully automated shrinkage system in figuring out retinal blood receptacles using the DRIVE and HRF datasets of exceedingly demanding fundus images. Initially, the accuracy of the created algorithm was tested based on its ability to accurately recognize the retinal structure of blood receptacles. In these attempts, five quantitative performance measures were computed to validate the efficacy of the algorithm: accuracy (Acc.), sensitivity (Sen.), specificity (Spe.), positive prediction value (PPV), and negative prediction value (NPV). When contrasted with modern receptacle-shrinking approaches on the DRIVE dataset, the produced results are enormously improved, obtaining accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 98.78%, 98.32%, 97.23%, and 90. Based on the five quantitative performance indicators, the HRF dataset led to the following results: 98.76%, 98.87%, 99.17%, 96.88%, and 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_56-An_Algorithm_for_Shrinking_Blood_Receptacles_using_Retinal_Internal_Pictures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teachers’ Attitudes Towards the Use of Augmented Reality Technology in Teaching Arabic in Primary School Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131055</link>
        <id>10.14569/IJACSA.2022.0131055</id>
        <doi>10.14569/IJACSA.2022.0131055</doi>
        <lastModDate>2022-10-31T11:46:19.6400000+00:00</lastModDate>
        
        <creator>Lily Hanefarezan Asbulah</creator>
        
        <creator>Mus’ab Sahrim</creator>
        
        <creator>Nor Fatini Aqilah Mohd Soad</creator>
        
        <creator>Nur Afiqah Athirah Mohd Rushdi</creator>
        
        <creator>Muhammad Afiq Hilmi Mhd Deris</creator>
        
        <subject>Attitude; teachers; augmented reality; Arabic language; primary school</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The era of Industrial Revolution 4.0 has brought debate about teachers&#39; willingness to use information technology in teaching Arabic. New technologies with a positive effect on teaching have emerged, such as augmented reality technology applied in the education system. However, research on its use in foreign language teaching is still limited. Therefore, the present study discusses the readiness of teachers, in terms of knowledge and attitude, towards the use of augmented reality technology in the teaching of Arabic in Malaysia. The study was carried out using a quantitative methodology, with a survey questionnaire distributed to 36 Arabic language teachers as respondents. The questionnaire forms the basis for data collection to identify the respondents’ level of readiness. Data analysis was then carried out using the Statistical Package for the Social Sciences version 26 (SPSS). The results show that the teachers’ readiness, in terms of their attitude towards the use of augmented reality technology in the teaching of Arabic in Malaysia, is at a moderate level. Nevertheless, teachers&#39; attitudes and knowledge are still found to be at a low level, especially among veteran teachers with no experience in information technology, which influences their enthusiasm towards the use of technology in their teaching. The implications of the current study are hoped to be useful as a guide to stakeholders responsible for ensuring that the teaching and learning of Arabic based on augmented reality technology can be implemented in a meaningful way, thereby improving the performance of students in mastering the Arabic language.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_55-Teachers_Attitudes_Towards_the_Use_of_Augmented_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unethical Internet Behaviour among Students in High Education Institutions: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131054</link>
        <id>10.14569/IJACSA.2022.0131054</id>
        <doi>10.14569/IJACSA.2022.0131054</doi>
        <lastModDate>2022-10-31T11:46:19.6230000+00:00</lastModDate>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Aslinda Hassan</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Nur Fadzilah Othman</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <creator>Nor Raihana Mohd Ali</creator>
        
        <creator>Maslin Masrom</creator>
        
        <subject>Systematic literature review; unethical Internet behavior; university student; Internet; ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The modern internet era has several advantages and disadvantages, including better, quicker, and increased working capacity in less time, but also the advent of immoral Internet conduct. Even though the area of study on unethical Internet activity has advanced, systematic literature reviews from a comprehensive perspective on unethical Internet behaviour among university students are still lacking. As a result, this systematic literature review provides a theoretical foundation that addresses the following research questions: RQ1-How are unethical Internet behaviours among university students classified; RQ2-What are the various theoretical lenses that are used in unethical Internet behaviour research; RQ3-What demographic and risk factors are involved in unethical Internet behaviour research; and RQ4-What are the challenges and research opportunities for unethical Internet behaviour research within university settings? To respond to this formulated set of research questions, a total of 64 publications that were published between 2010 and 2020 underwent a systematic review. The study illustrates how university students’ unethical Internet activity is categorised. This study offers a comprehensive grasp of the factors that affect unethical Internet behaviour and an overview of the theories that have been utilised to explain and forecast unethical Internet behaviours in this sector. This study discusses literature gaps for future research to contribute to human ethical behavioural studies.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_54-Unethical_Internet_Behaviour_among_Students_in_High_Education_Institutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Texture Representation for Timber Defect Recognition based on Variation of LBP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131053</link>
        <id>10.14569/IJACSA.2022.0131053</id>
        <doi>10.14569/IJACSA.2022.0131053</doi>
        <lastModDate>2022-10-31T11:46:19.6100000+00:00</lastModDate>
        
        <creator>Rahillda Nadhirah Norizzaty Rahiddin</creator>
        
        <creator>Ummi Raba’ah Hashim</creator>
        
        <creator>Lizawati Salahuddin</creator>
        
        <creator>Kasturi Kanchymalay</creator>
        
        <creator>Aji Prasetya Wibawa</creator>
        
        <creator>Teo Hong Chun</creator>
        
        <subject>Automated visual inspection; local binary pattern; timber defect classification; texture feature; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This paper evaluates timber defect classification performance across four variations of the Local Binary Pattern (LBP). The light and heavy timbers used in the study are Rubberwood, KSK, Merbau, and Meranti, and eight natural timber defects are involved: bark pocket, blue stain, borer holes, brown stain, knot, rot, split, and wane. A series of LBP feature sets was created by employing the Basic LBP, Rotation Invariant LBP, Uniform LBP, and Rotation Invariant Uniform LBP during the feature extraction phase. Several common classifiers were used to further separate the timber defect classes: Artificial Neural Network (ANN), J48 Decision Tree (J48), and K-Nearest Neighbor (KNN). Uniform LBP with the ANN classifier provides the best performance at 63.4%, superior to all other LBP types. Features from Merbau provide the greatest F-measure when comparing the performance of the ANN classifier with Uniform LBP across timber defect classes and clean wood, surpassing other feature sets.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_53-Local_Texture_Representation_for_Timber_Defect_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BiLSTM and Multiple Linear Regression based Sentiment Analysis Model using Polarity and Subjectivity of a Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131052</link>
        <id>10.14569/IJACSA.2022.0131052</id>
        <doi>10.14569/IJACSA.2022.0131052</doi>
        <lastModDate>2022-10-31T11:46:19.6100000+00:00</lastModDate>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Mohamed CHINY</creator>
        
        <creator>Nabil Mabrouk</creator>
        
        <creator>Hicham BOUSSATTA</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <creator>Moulay Youssef HADI</creator>
        
        <subject>Sentiment analysis; textblob; long short term memory; logistic regression; k-nearest neighbors; random forest; support vector machine; k-means; naive bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Sentiment analysis is increasingly requested by companies seeking to improve their services. The main contribution of this paper is to present the results of a study proposing a combined sentiment analysis model that is able to find the binary polarity of the analyzed text. The proposed model is based on a Bidirectional Long Short-Term Memory recurrent neural network and the TextBlob model, which computes both the polarity and the subjectivity of the input text. These two models are combined in a classification model that implements each of the Logistic Regression, k-Nearest Neighbors, Random Forest, Support Vector Machine, K-means and Naive Bayes algorithms. The training and test data come from the Twitter Airlines Sentiment dataset. Experimental results show that the proposed system gives better performance metrics (accuracy and F1 score) than those found with the BiLSTM and TextBlob models used separately. The obtained results serve organizations, companies and brands in obtaining useful information that helps them understand a customer&#39;s opinion of a particular product or service.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_52-BiLSTM_and_Multiple_Linear_Regression_based_Sentiment_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Ontology Construction Framework: An Attempt to Rationalize Effective Knowledge Dissemination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131051</link>
        <id>10.14569/IJACSA.2022.0131051</id>
        <doi>10.14569/IJACSA.2022.0131051</doi>
        <lastModDate>2022-10-31T11:46:19.5930000+00:00</lastModDate>
        
        <creator>Kaneeka Vidanage</creator>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Zuriana Abu Bakar</creator>
        
        <subject>Collaborative; domain-specialists; framework; methodology; ontologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Ontologies are domain-rich conceptualizations that can be utilized for effective knowledge dissemination strategies. Knowledge dissemination plays a vital role in any industry. In this research, a novel framework for collaborative ontology construction is designed and evaluated. A rational process for collaborative ontology construction has been discussed and planned around the iterative and incremental involvement of domain specialists and ontologists. Additionally, the current ontology construction methodologies and frameworks have been rigorously reviewed to identify their shortcomings. The responses received from the domain specialists and ontologists, along with the gaps located in the literature, have been utilized as the backbone in designing this novel framework. The designed ontology increments have the potential for effective knowledge distribution once coupled with technologies like chatbots. In this research, the proposed framework has been deployed in three different domains, and three ontology increments have been created for each domain. Their efficacy has then been tested with the involvement of domain-specific stakeholders. Overall, the results have yielded an 82% acceptance rate from the stakeholders.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_51-Collaborative_Ontology_Construction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Delay-Aware and Profit-Maximizing Task Migration for the Cloudlet Federation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131050</link>
        <id>10.14569/IJACSA.2022.0131050</id>
        <doi>10.14569/IJACSA.2022.0131050</doi>
        <lastModDate>2022-10-31T11:46:19.5770000+00:00</lastModDate>
        
        <creator>Hengzhou Ye</creator>
        
        <creator>Junhao Guo</creator>
        
        <creator>Xinxiao Li</creator>
        
        <subject>Cloudlet federation; task migration; delay-aware; dynamic pricing; profit-maximizing; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>As a result of the Open Edge Computing (OEC) project, the cloudlet embodies the middle layer of the edge computing architecture. Due to its deployment close to the user side, responding to user requests through a cloudlet can reduce delay, improve security, and reduce bandwidth occupancy. To improve the quality of the user experience, more and more cloudlets need to be deployed, which significantly increases the deployment and management costs of Cloudlet service Providers (CLPs). Therefore, the cloudlet federation has appeared as a new paradigm that can reduce the cost of cloudlet deployment and management by sharing cloudlet resources among CLPs. In the cloudlet federation scenario, attention still needs to be paid to the heterogeneity of resource prices, the balance of benefits among CLPs, and the more complex delay computation when exploring task migration strategies. For delay-sensitive and delay-tolerant tasks, a delay-aware and profit-maximizing task migration strategy is proposed, considering the relationship between the delay and the quotation of different tasks, as well as the dynamic pricing mechanism when resources are shared among CLPs. Experimental results show that the proposed algorithm outperforms the baseline algorithm in terms of revenue and response delay.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_50-Delay_Aware_and_Profit_Maximizing_Task_Migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ACT on Monte Carlo FogRA for Time-Critical Applications of IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131049</link>
        <id>10.14569/IJACSA.2022.0131049</id>
        <doi>10.14569/IJACSA.2022.0131049</doi>
        <lastModDate>2022-10-31T11:46:19.5770000+00:00</lastModDate>
        
        <creator>A. S. Gowri</creator>
        
        <creator>P. Shanthi Bala</creator>
        
        <creator>Zion Ramdinthara</creator>
        
        <creator>T. Siva Kumar</creator>
        
        <subject>Cloud; edge; fog; Internet of Things (IoT); Reinforcement Learning (RL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The need for instantaneous processing in the Internet of Things (IoT) has led to the notion of fog computing, where computation is performed in the proximity of the data source. Though fog computing reduces latency and bandwidth bottlenecks, the scarcity of fog nodes hampers its efficiency. Also, due to the heterogeneity and stochastic behavior of IoT, traditional resource allocation techniques cannot satisfy the time-sensitiveness of the applications. Therefore, an Artificial Intelligence (AI) based Reinforcement Learning approach, which has the ability to self-learn and adapt to a dynamic environment, is adopted. The purpose of this work is to propose an Auto Centric Threshold (ACT) enabled Monte Carlo FogRA system that maximizes the utilization of the fog’s limited resources with minimum termination time for time-critical IoT requests. FogRA is devised as a Reinforcement Learning (RL) problem that obtains optimal solutions through continuous interaction with the uncertain environment. Experimental results show that the optimal value achieved by the proposed system is 41% higher than that of the baseline adaptive RA model. The efficiency of FogRA is evaluated under different performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_49-ACT_on_Monte_Carlo_FogRA_for_Time_Critical_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing LSTM and CNN Methods in Case Study on Public Discussion about Covid-19 in Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131048</link>
        <id>10.14569/IJACSA.2022.0131048</id>
        <doi>10.14569/IJACSA.2022.0131048</doi>
        <lastModDate>2022-10-31T11:46:19.5600000+00:00</lastModDate>
        
        <creator>Fachrul Kurniawan</creator>
        
        <creator>Yuliana Romadhoni</creator>
        
        <creator>Laila Zahrona</creator>
        
        <creator>Jehad Hammad</creator>
        
        <subject>COVID-19; LSTM; CNN; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This study compares two Deep Learning methods: the Long Short-Term Memory (LSTM) method and the Convolutional Neural Network (CNN) method. The aim of the comparison is to discover the performance of two different fundamental deep learning approaches, one based on convolutional theory (CNN) and one that deals with the vanishing gradient problem (LSTM). The purpose of this study is to compare the accuracy of the two methods using a dataset of 4169 tweets obtained by crawling social media using the Twitter API. The tweet data were obtained based on a specific hashtag keyword, namely &quot;covid-19 pandemic&quot;. This study attempts to assess the sentiment of all tweets about the Covid-19 viral epidemic to determine whether tweets about Covid-19 contain positive or negative thoughts. Before classification, the preprocessing and word embedding steps are completed; this study set the number of epochs to 20 and the hidden layer size to 64. Following the classification process, this study concludes that both methods are appropriate for classifying public discussion sentences about Covid-19. According to this study, the LSTM method is superior, with an accuracy of 83.3%, a precision of 85.6%, a recall of 90.6%, and an F1-score of 88.5%, while the CNN method achieved an accuracy of 81%, a precision of 71.7%, a recall of 72%, and an F1-score of 72%.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_48-Comparing_LSTM_and_CNN_Methods_in_Case_Study_on_Public_Discussion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Experimental Study with Fuzzy-Wuzzy (Partial Ratio) for Identifying the Similarity between English and French Languages for Plagiarism Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131047</link>
        <id>10.14569/IJACSA.2022.0131047</id>
        <doi>10.14569/IJACSA.2022.0131047</doi>
        <lastModDate>2022-10-31T11:46:19.5470000+00:00</lastModDate>
        
        <creator>Peluru Janardhana Rao</creator>
        
        <creator>Kunjam Nageswara Rao</creator>
        
        <creator>Sitaratnam Gokuruboyina</creator>
        
        <subject>Plagiarism; natural language processing; string similarity; levenshtein distance; fuzzy-wuzzy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the rapid growth of digital libraries and language translation tools, it is easy to translate text documents from one language to another, which results in cross-language plagiarism. It is more challenging to identify plagiarism among documents in different languages. The main aim of this paper is to translate French documents into English to detect plagiarism and to extract bilingual lexicons. A parallel corpus, a collection of similar sentences and sentences that complement each other, is used to compare multilingual text. A comparative study is presented in this paper: sentence similarity in bilingual content is found using the proposed Fuzzy-Wuzzy (Partial Ratio) based string similarity technique and three techniques from the literature, namely the Levenshtein Distance, Spacy and Fuzzy-Wuzzy (Ratio) similarity techniques. The string similarity method based on Fuzzy-Wuzzy (Partial Ratio) outperforms the Spacy and Fuzzy-Wuzzy (Ratio) techniques in terms of accuracy for identifying language similarity.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_47-An_Experimental_Study_with_Fuzzy_Wuzzy_Partial_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer Vision System for Street Sweeper Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131046</link>
        <id>10.14569/IJACSA.2022.0131046</id>
        <doi>10.14569/IJACSA.2022.0131046</doi>
        <lastModDate>2022-10-31T11:46:19.5300000+00:00</lastModDate>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Sultana Almasoud</creator>
        
        <creator>Lina Alyahya</creator>
        
        <creator>Renad Aldhalaan</creator>
        
        <creator>Lina Alsaeed</creator>
        
        <creator>Nada Aldalbahi</creator>
        
        <subject>Covid-19; street sweeper robot; personal protective equipment (PPE); computer vision; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the spread of Covid-19, more people wear personal protective equipment (PPE) such as gloves and masks. However, they litter them all over streets, parking lots and parks. This harms the environment and especially damages the marine ecosystem. Thus, this waste should not be discarded in the environment. Moreover, it should not be recycled with other plastic materials; it has to be separated from regular trash collection. Furthermore, littered gloves and masks yield more workload for street cleaners and present a potential harm to them. In this paper, we design a computer vision system for a street sweeper robot that picks up masks and gloves and disposes of them safely in garbage containers. The system relies on Deep Learning techniques for object recognition. In particular, three Deep Learning models are investigated: the You Only Look Once (YOLO) model, the Faster Region-based Convolutional Neural Network (Faster R-CNN) and DeepLab v3+. The experimental results showed that YOLO is the most suitable approach for the proposed system, with a performance of 0.94 F1 measure, 0.79 IoU, 0.94 mAP, and 0.41 s to process one image.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_46-A_Computer_Vision_System_for_Street_Sweeper_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Software Test Effort Estimation using Genetic Algorithm and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131045</link>
        <id>10.14569/IJACSA.2022.0131045</id>
        <doi>10.14569/IJACSA.2022.0131045</doi>
        <lastModDate>2022-10-31T11:46:19.5130000+00:00</lastModDate>
        
        <creator>Vikas Chahar</creator>
        
        <creator>Pradeep Kumar Bhatia</creator>
        
        <subject>Test effort estimation; software testing; machine learning; computational intelligence; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In the present scenario, software companies frequently rely on software test effort estimation to allocate resources efficiently during the software development process. Different machine learning models have been developed to estimate the total effort that would be required before the software product can be delivered. These computational models use past data to estimate the effort. In the current study, test effort estimation for software is predicted using a Genetic Algorithm and a Neural Network. The attributes are selected using the Genetic Algorithm, and the similarity between the attribute values is computed using the Cosine Similarity measure. The simulation experiments were performed using the PROMISE and Kaggle repositories, and the implementation was done using the MATLAB software. The performance metrics, namely precision, recall, and accuracy, are computed to evaluate against the existing techniques. The accuracy of the proposed model is 91.3%, an improvement of 8.9% over the existing technique, demonstrating its superiority in predicting the test effort for software development.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_45-Performance_Analysis_of_Software_Test_Effort_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Otsu’s Thresholding for Semi-Automatic Segmentation of Breast Lesions in Digital Mammograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131044</link>
        <id>10.14569/IJACSA.2022.0131044</id>
        <doi>10.14569/IJACSA.2022.0131044</doi>
        <lastModDate>2022-10-31T11:46:19.5130000+00:00</lastModDate>
        
        <creator>Moustapha Mohamed Saleck</creator>
        
        <creator>M. H. Ould Mohamed Dyla</creator>
        
        <creator>EL Benany Mohamed Mahmoud</creator>
        
        <creator>Mohamed Rmili</creator>
        
        <subject>Tumor detection; lesion segmentation; mammogram images; Otsu’s thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In Maghreb countries, breast cancer is considered one of the leading causes of mortality among females. A screening mammogram is a method of taking low-energy x-ray images of the human breast to identify the early symptoms of breast cancer. The shape and contour of a lesion in digitized mammograms are two effective features that allow radiologists to distinguish between benign and malignant tumors. We propose in this paper a new approach based on Otsu’s thresholding method for semi-automatic extraction of lesion boundaries from mammogram images. This approach attempts to find the best threshold value, at which the weighted variance between the lesion and normal tissue pixels is the least. In the first step, a median filter is used to remove noise within the region of interest (ROI). In the second step, a global threshold is decremented in order to obtain the proper range of pixels in which the breast lesion can be segmented by Otsu’s thresholding method with high accuracy. The technique of mathematical morphology, especially the opening operation, is used in this work to remove small objects from the ROI while preserving the shape and size of the larger objects that represent tumors. We evaluated our proposal using 21 images obtained from the Mini-MIAS database. Experimental results show that our proposal achieves better results than many previously explored methods.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_44-Otsus_Thresholding_for_Semi_Automatic_Segmentation_of_Breast_Lesions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employability Prediction of Information Technology Graduates using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131043</link>
        <id>10.14569/IJACSA.2022.0131043</id>
        <doi>10.14569/IJACSA.2022.0131043</doi>
        <lastModDate>2022-10-31T11:46:19.5000000+00:00</lastModDate>
        
        <creator>Gehad ElSharkawy</creator>
        
        <creator>Yehia Helmy</creator>
        
        <creator>Engy Yehia</creator>
        
        <subject>Machine learning; IT graduates; higher education; employability; labor market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The ability to predict graduates’ employability to match labor market demands is crucial for any educational institution aiming to enhance students&#39; performance and learning process, as graduates’ employability is the metric of success for any higher education institution (HEI). This is especially true for information technology (IT) graduates, due to the evolving demand for IT professionals in the current era. Job mismatch and unemployment remain major challenges for educational institutions due to the various factors that influence graduates&#39; employability with respect to labor market needs. Therefore, this paper aims to introduce a predictive model using machine learning (ML) algorithms to predict information technology graduates’ employability to match labor market demands. Five machine learning classification algorithms were applied, namely Decision Tree (DT), Gaussian Na&#239;ve Bayes (Gaussian NB), Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM). The dataset used in this study was collected through a survey given to IT graduates and employers. The performance of the study is evaluated in terms of accuracy, precision, recall, and F1 score. The results showed that DT achieved the highest accuracy, with the second highest accuracy achieved by LR and SVM.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_43-Employability_Prediction_of_Information_Technology_Graduates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Path Loss Prediction Model using Feature Selection-Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131042</link>
        <id>10.14569/IJACSA.2022.0131042</id>
        <doi>10.14569/IJACSA.2022.0131042</doi>
        <lastModDate>2022-10-31T11:46:19.4830000+00:00</lastModDate>
        
        <creator>Bengawan Alfaresi</creator>
        
        <creator>Zainuddin Nawawi</creator>
        
        <creator>Bhakti Yudho Suprapto</creator>
        
        <subject>Path loss; feature selection; machine learning; mixed land-water; Cost-Hatta</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Wireless network planning requires accurate coverage predictions to achieve good quality. An accurate path loss model must be flexible for each type of area, including land and water. The purpose of this research is to develop a Cost-Hatta model that can be applied to mixed land-water areas. The approach used in this research comprises three machine learning feature selection methods. The first stage of the research was the collection of field data. The measurement data included system, weather, and geographical parameters. The next stage was feature selection to obtain the best composition of features for the development of the model. The feature selection methods used were Univariate FS, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). After obtaining the best features from each method, the next stage was to form a model using four machine learning algorithms, namely Random Forest Regression (RF), Deep Neural Network (DNN), K-Nearest Neighbor Regression (KNN), and Support Vector Regression (SVR). The improvements to the path loss prediction model were tested using the evaluation parameters of Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Percentage Error (MAPE). The testing showed that the improved Cost-Hatta model using the proposed Univariate-RF combination produced a very small RMSE value of 1.52. This indicates that the proposed model framework is highly suitable for use in mixed land-water areas.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_42-Development_of_Path_Loss_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Alz-SAENet: A Deep Sparse Autoencoder based Model for Alzheimer’s Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131041</link>
        <id>10.14569/IJACSA.2022.0131041</id>
        <doi>10.14569/IJACSA.2022.0131041</doi>
        <lastModDate>2022-10-31T11:46:19.4830000+00:00</lastModDate>
        
        <creator>G Nagarjuna Reddy</creator>
        
        <creator>K Nagi Reddy</creator>
        
        <subject>Alzheimer’s disease; MRI; CNN; sparse autoencoder; DNN; mild cognitive impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Precise identification of Alzheimer&#39;s Disease (AD) is vital in health care, especially at an early stage, since recognizing the likelihood of incidence and progression allows patients to adopt preventive measures before irreparable brain damage occurs. Magnetic Resonance Imaging (MRI) is an effective and common clinical strategy to diagnose AD due to its structural details. We built an advanced deep sparse autoencoder-based architecture, named Alz-SAENet, to identify diseased subjects from typical control subjects using MRI volumes. We focused on a novel optimal feature extraction procedure using the combination of a 3D Convolutional Neural Network (CNN) and a deep sparse autoencoder (SAE). Optimal features derived from the bottleneck layer of the hyper-tuned SAE network are subsequently passed through a deep neural network (DNN). This approach results in an improved four-way categorization of AD-prone 3D MRI brain images, demonstrating the capability of this network in AD prognosis to support the adoption of preventive measures. The model was further evaluated on ADNI and Kaggle data, achieved 98.9% and 98.215% accuracy, respectively, and showed a tremendous response in distinguishing MRI volumes that are in a transitional phase of AD.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_41-Alz_SAENet_A_Deep_Sparse_Autoencoder_based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Severity-based Email SPAM Messages using Adaptive Threshold Driven Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131040</link>
        <id>10.14569/IJACSA.2022.0131040</id>
        <doi>10.14569/IJACSA.2022.0131040</doi>
        <lastModDate>2022-10-31T11:46:19.4670000+00:00</lastModDate>
        
        <creator>I V S Venugopal</creator>
        
        <creator>D Lalitha Bhaskari</creator>
        
        <creator>M N Seetaramanath</creator>
        
        <subject>BoW collection; web crawler; email text extraction; subsetting method; email class detection; ranking method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The classification of emails is one crucial part of the email filtering process, as email has become one of the key methods of communication. The process of identifying safe or unsafe emails is complex due to the diversified use of language. Nonetheless, most parallel research outcomes have demonstrated significant benchmarks in identifying email spam. However, the standard processes can only identify emails as spam or ham; a more detailed classification of emails has not been achieved. Thus, this work proposes a novel method for classifying emails into various classes using the proposed deep clustering process, aided by ranking words by severity. The proposed work demonstrates nearly 99.4% accuracy in detecting and classifying emails into a total of five classes.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_40-Detection_of_Severity_based_Email_SPAM_Messages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between Meta-classifier Algorithms for Heart Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131039</link>
        <id>10.14569/IJACSA.2022.0131039</id>
        <doi>10.14569/IJACSA.2022.0131039</doi>
        <lastModDate>2022-10-31T11:46:19.4530000+00:00</lastModDate>
        
        <creator>Nureen Afiqah Mohd Zaini</creator>
        
        <creator>Mohd Khalid Awang</creator>
        
        <subject>Heart disease prediction; ensemble stacking; multi classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The rise in heart disease among the general population is alarming. This is because cardiovascular disease is the leading cause of death, and several studies have been conducted to assist cardiologists in identifying the primary cause of heart disease. The classification accuracy of the single classifiers utilised in most recent studies to predict heart disease is quite low. Classification accuracy can be enhanced by integrating the output of multiple classifiers in an ensemble technique. Even though they can deliver the best classification accuracy, the existing ensemble approaches that integrate all classifiers are quite resource-intensive. This study thus proposes a stacking ensemble that selects the optimal subset of classifiers to produce meta-classifiers. In addition, the research compares the effectiveness of several meta-classifiers to further enhance classification. Ten algorithms, including logistic regression (LR), support vector classifier (SVC), random forest (RF), extra tree classifier (ETC), na&#239;ve bayes (NB), extreme gradient boosting (XGB), decision tree (DT), k-nearest neighbor (KNN), multilayer perceptron (MLP), and stochastic gradient descent (SGD), are used as base classifiers. The meta-classifier was constructed using three different algorithms: LR, MLP, and SVC. The prediction results from the base classifiers are then used as input for the stacking ensemble. The study demonstrates that the stacking ensemble performs better than any single algorithm among the base classifiers. The logistic regression meta-classifier yielded 90.16% accuracy, which is better than any base classifier. In conclusion, the ensemble stacking approach can be considered an additional means of achieving better accuracy and improved classification performance.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_39-Performance_Comparison_between_Meta_classifier_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Intrusion Detection Tree with Hybrid Deep Learning Framework based Cyber Security Intrusion Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131038</link>
        <id>10.14569/IJACSA.2022.0131038</id>
        <doi>10.14569/IJACSA.2022.0131038</doi>
        <lastModDate>2022-10-31T11:46:19.4370000+00:00</lastModDate>
        
        <creator>Majed Alowaidi</creator>
        
        <subject>Cybersecurity; IntruDTree model; convolution recurrent neural network (CRNN); MIntruDTree-HDL; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In the modern era, the most pressing issue facing society is protection against cyberattacks on networks. The frequency of cyber-attacks in the present world makes the problem of providing feasible security to computer systems against potential risks important and crucial. Network security cannot be effectively monitored and protected without the use of intrusion detection systems (IDSs). DLTs (deep learning techniques) and MLTs (machine learning techniques) are being employed in information security domains for effectively building IDSs. These IDSs are capable of automatically and promptly identifying harmful attacks. IntruDTree (Intrusion Detection Tree), a security model based on MLTs that detects attacks effectively, was presented in existing research. This model, however, suffers from an overfitting problem, which occurs when the learning method perfectly matches the training data but fails to generalize to new data. To address the issue, this study introduces the MIntruDTree-HDL (Modified IntruDTree with Hybrid Deep Learning) framework, which improves the performance and prediction of IDSs. The MIntruDTree-HDL framework predicts and classifies harmful cyber assaults in the network using an M-IntruDTree (Modified IDS Tree) with CRNNs (convolution recurrent neural networks). First, a modified tree-based generalized IDS, M-IntruDTree, is created to rank the key characteristics. CNNs (convolution neural networks) then use convolution to collect local information, while RNNs (recurrent neural networks) capture temporal features to increase IDS performance and prediction. This model is not only accurate in predicting unknown test scenarios, but also results in reduced computational costs due to its dimensionality reductions. The efficacy of the suggested MIntruDTree-HDL scheme was benchmarked on cybersecurity datasets in terms of precision, recall, f-score, accuracy, and ROC. The simulation results show that the proposed MIntruDTree-HDL outperforms current IDS approaches, with a high rate of malicious attack detection accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_38-Modified_Intrusion_Detection_Tree_with_Hybrid_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining Innovative Technology and Context based Approaches in Teaching Software Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131037</link>
        <id>10.14569/IJACSA.2022.0131037</id>
        <doi>10.14569/IJACSA.2022.0131037</doi>
        <lastModDate>2022-10-31T11:46:19.4370000+00:00</lastModDate>
        
        <creator>Shamsul Huda</creator>
        
        <creator>Sultan Alyahya</creator>
        
        <creator>Lei Pan</creator>
        
        <creator>Hmood Al-Dossari</creator>
        
        <subject>Sustainable learning and education; context-base teaching; work integrated learning; hands on experience; graduate outcome rate; positive attitude and engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Sustainability in learning is essential for a sustainable future, which largely depends on education. Sustainable learning requires learners to increase and rebuild base knowledge as circumstances change and become more complex. This is particularly obvious for the information technology (IT) discipline, where technology is rapidly changing and practice is getting more complicated. Sustainability enables students to put their learning from formal education into practice, provides hands-on experience (HOE), and helps them rebuild their knowledge base in complex situations. This is also essential to achieve a high graduate outcome rate (GOR), which helps the education sector become sustainable. Under existing policies and frameworks, institutions are moving towards more off-campus learning and less face-to-face learning. As a result, a downward trend has been experienced in students’ engagement across the IT discipline. This affects students’ ability to achieve HOE and appears to be one of the reasons for the low GOR, which poses a threat to sustainability in the education sector from both stakeholder and learner perspectives. This paper presents a combined approach of context-based teaching with the incorporation of innovative technology to engage students and achieve better HOE towards sustainability in learning. The proposed approach was adopted in a software engineering course taught at the School of IT at Deakin University, Australia. Students were provided context-based teaching material and industry-standard software engineering tools for practice to achieve HOE. Student evaluation and assessment results report that the proposed approach had a significant positive impact on engaging students in classes towards improved sustainable learning.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_37-Combining_Innovative_Technology_and_Context_based_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Predictions of Students’ Performances in Offline and Online Modes using Adaptive Neuro-Fuzzy Inference System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131036</link>
        <id>10.14569/IJACSA.2022.0131036</id>
        <doi>10.14569/IJACSA.2022.0131036</doi>
        <lastModDate>2022-10-31T11:46:19.4200000+00:00</lastModDate>
        
        <creator>Sudhindra B. Deshpande</creator>
        
        <creator>Kiran K.Tangod</creator>
        
        <creator>Neelkanth V.Karekar</creator>
        
        <creator>Pandurang S.Upparmani</creator>
        
        <subject>ANFIS; fuzzy systems; online learning; e-learning; classroom learning; fuzzy rules; predictions; adaptive neuro-fuzzy inference system; education technology; distance education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Predicting a student&#39;s performance can help educational institutions support students in improving their academic performance and provide high-quality education. Creating a model that accurately predicts a student&#39;s performance is a difficult and challenging task. Before the pandemic, students were more accustomed to the offline, i.e., physical, mode of learning. As Covid-19 took over the world, the offline mode of education was totally disrupted. This situation resulted in a new beginning for the online mode of teaching over the Internet. In this article, these two modes are analysed and compared with reference to students’ academic performances. The article models the prediction of students’ academic performance before Covid, i.e., in physical mode, and during Covid, i.e., in online mode, to help students improve their performance. The proposed model works in two steps. First, two sets of students’ previous semester end results (SEE), i.e., after offline mode and after online mode, are collected and pre-processed by normalizing the performances in order to improve efficiency and accuracy. Secondly, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is applied to predict academic result performance in both learning modes. Three ANFIS membership functions, gaussian (Gausmf), triangular (Trimf) and gaussian-bell (Gbellmf), are used to generate the fuzzy rules for the prediction process proposed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_36-A_Comparative_Study_of_Predictions_of_Students_Performances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review on Multimodal Fusion Techniques for Human Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131035</link>
        <id>10.14569/IJACSA.2022.0131035</id>
        <doi>10.14569/IJACSA.2022.0131035</doi>
        <lastModDate>2022-10-31T11:46:19.4070000+00:00</lastModDate>
        
        <creator>Ruhina Karani</creator>
        
        <creator>Sharmishta Desai</creator>
        
        <subject>Feature-level fusion; decision-level fusion; hybrid fusion; artificial intelligence; EEG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Emotions play an essential role in human life for planning and decision making. Emotion identification and recognition is a widely explored field in the area of artificial intelligence and affective computing as a means of empathizing with humans and thereby improving human-machine interaction. Though audio-visual cues are vital for recognizing human emotions, they are sometimes insufficient for identifying the emotions of people who are good at hiding emotions or people suffering from alexithymia. Considering other dimensions such as Electroencephalogram (EEG) signals or text, along with audio-visual cues, can help improve the results in such situations. Taking advantage of the complementarity of multiple modalities normally helps capture emotions more accurately compared to a single modality. However, to achieve precise and accurate results, correct fusion of these multimodal signals is required. This study provides a detailed review of different multimodal fusion techniques that can be used for emotion recognition. This paper presents an in-depth study of feature-level fusion, decision-level fusion and hybrid fusion techniques for identifying human emotions based on multimodal inputs and compares the results. The study concentrates on three different modalities, i.e., facial images, audio and text, for experimentation, at least one of which differs in temporal characteristics. The results suggest that hybrid fusion works best at combining multiple modalities that differ in time synchronicity.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_35-Review_on_Multimodal_Fusion_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Optimization of Features using Gorilla Troop Optimizer for Classification of Melanoma</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131034</link>
        <id>10.14569/IJACSA.2022.0131034</id>
        <doi>10.14569/IJACSA.2022.0131034</doi>
        <lastModDate>2022-10-31T11:46:19.3900000+00:00</lastModDate>
        
        <creator>Anupama Damarla</creator>
        
        <creator>Sumathi D</creator>
        
        <subject>Skin cancer; image enhancement; deep learning; evolutionary algorithms; Particle Swarm Optimization; Ant Colony Optimization; Gorilla Troop Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The diagnosis and categorization of skin cancer, given the variation in skin textures and injuries, is a tough undertaking. Manually detecting skin lesions from dermoscopy images is a difficult and cumbersome challenge. Recent advancements in the internet of things (IoT) and artificial intelligence for clinical applications have shown a significant increase in precision and processing time. A lot of attention is given to deep learning models because they are effective at identifying cancer cells. Diagnosis and accuracy levels can be greatly increased by categorizing benign and malignant dermoscopy images. This work suggests an automated classification system based on a deep convolutional neural network (DCNN) in order to precisely perform multi-classification. The DCNN&#39;s structure was thoughtfully created by arranging a number of layers that are in charge of uniquely extracting different features from skin lesions. In this paper, we proposed a deep learning approach to tackle three main tasks: deep feature extraction (task 1) using transfer learning; feature selection (task 2) using metaheuristic algorithms such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Gorilla Troop Optimization (GTO) as feature selectors, by which the extensive feature set is optimized and the number of features is reduced; and a two-level classification (task 3), an emerging approach in the field of skin lesion image processing. The proposed deep learning frameworks were assessed on the HAM10000 dataset. The accuracy achieved on the dataset is 93.58 percent. The proposed method outperforms state-of-the-art (SOTA) techniques in terms of accuracy. Moreover, the suggested technique is highly scalable.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_34-An_Approach_for_Optimization_of_Features_using_Gorilla_Troop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Metaheuristic Techniques for Parcel Delivery Problem: Malaysian Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131033</link>
        <id>10.14569/IJACSA.2022.0131033</id>
        <doi>10.14569/IJACSA.2022.0131033</doi>
        <lastModDate>2022-10-31T11:46:19.3900000+00:00</lastModDate>
        
        <creator>Shamine A/P S. Moganathan</creator>
        
        <creator>Siti Noor Asyikin binti Mohd Razali</creator>
        
        <creator>Aida Mustapha</creator>
        
        <creator>Safra Liyana binti Sukiman</creator>
        
        <creator>Rosshairy Abd Rahman</creator>
        
        <creator>Muhammad Ammar Shafi</creator>
        
        <subject>Ant-colony optimization; genetic algorithm; delivery problem; comparison; cost; runtime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Most people preferred e-commerce following the Coronavirus Disease 2019 (COVID-19) outbreak, resulting in delivery companies receiving large quantities of parcels to be delivered to clients. A hurdle emerges when a delivery person needs to convey items to a large number of households in a single journey, a situation they have never faced before. As a result, they seek the quickest route for the trip to reduce delivery costs and time. Since the delivery challenge has been classified as an NP-hard (non-deterministic polynomial-time hard) problem, this study aims to find the shortest distance, including the runtime, for a real case study located in Melaka, Malaysia. Hence, two metaheuristic approaches are compared in this study, namely Ant Colony Optimization (ACO) and Genetic Algorithm (GA). The results show that the GA strategy outperforms the ACO technique in terms of distance, price, and runtime for moderate data sizes, i.e., fewer than 90 locations.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_33-Comparison_of_Metaheuristic_Techniques_for_Parcel_Delivery_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Module Partition Method of Embedded Multitasking Software Based on Fuzzy Set Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131032</link>
        <id>10.14569/IJACSA.2022.0131032</id>
        <doi>10.14569/IJACSA.2022.0131032</doi>
        <lastModDate>2022-10-31T11:46:19.3730000+00:00</lastModDate>
        
        <creator>Yunpeng Gu</creator>
        
        <subject>Multitasking; embedded; modular; fuzzy set theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In order to improve the reliability of embedded multitask software, a module partition method based on fuzzy set theory is proposed to solve the problem of a large number of software failures and high failure frequency. Firstly, the characteristics of embedded multitask software are analyzed, and a constraint parameter distribution model is constructed. Then a reliability parameter analysis model of embedded multitask software is constructed, and a multilevel fuzzy metric structure combined with a quantitative recursive analysis method is used to complete the division of embedded multitask software modules. The simulation results show that the accuracy of the proposed method exceeds 99% after the number of iterations exceeds 50. The experimental results show that the method has high reliability, high precision and high practicability.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_32-Module_Partition_Method_of_Embedded_Multitasking_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Observe-Orient-Decide-Act (OODA) for Cyber Security Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131031</link>
        <id>10.14569/IJACSA.2022.0131031</id>
        <doi>10.14569/IJACSA.2022.0131031</doi>
        <lastModDate>2022-10-31T11:46:19.3600000+00:00</lastModDate>
        
        <creator>Dimas Febriyan Priambodo</creator>
        
        <creator>Yogha Restu Pramadi</creator>
        
        <creator>Obrina Candra Briliyant</creator>
        
        <creator>Muhammad Hasbi</creator>
        
        <creator>Muhammad Adi Yahya</creator>
        
        <subject>Cyber security; cybersecurity education; cyber range; OODA loop; play-role scenario</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>A cyber range is a term for an isolated simulation environment that can be used for cybersecurity training. As a training tool, the cyber range has a crucial role in improving the competence of its users. Isolated environmental conditions allow users to increase competence through cybersecurity training based on predetermined scenarios. There is no standard scenario for training; most use common cases. In this research, the cyber range is built based on the cyber range taxonomy and uses the observe-orient-decide-act (OODA) loop that has been proven in military education. The OODA loop is implemented and helps guide each step of the attack and its handling in the built scenario. The scenario chosen is a case of data theft, since data theft incidents have occurred often, making it easier for users to understand. The OODA loop for the cyber range meets 16 of the 17 characteristics in the cyber range taxonomy. The final cyber range acceptance rate was 81.82%. The results of this acceptance give confidence that this new method can be used as an alternative for learning cybersecurity.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_31-Observe_Orient_Decide_Act_OODA_for_Cyber_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Single Layer Perceptron-based Approach for Cardiotocography Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131030</link>
        <id>10.14569/IJACSA.2022.0131030</id>
        <doi>10.14569/IJACSA.2022.0131030</doi>
        <lastModDate>2022-10-31T11:46:19.3600000+00:00</lastModDate>
        
        <creator>Bader Fahad Alkhamees</creator>
        
        <subject>Cardiotocography; machine learning; artificial neural network (ANN); learning rate; grid search; 10-fold cross-validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Uterine Contractions (UC) and Fetal Heart Rate (FHR) are the most common techniques for fetal and maternal assessment during pregnancy and for detecting the changes in fetal oxygenation that occur throughout labor. By monitoring Cardiotocography (CTG) patterns, doctors can measure the fetus&#39;s state, accelerations, heart rate, and uterine contractions. Several computational and machine learning (ML) methods have been applied to CTG recordings to improve the effectiveness of fetal analysis and aid doctors in understanding the variations in their interpretation. However, obtaining an optimal solution and the best accuracy remains an important concern. Among the various ML approaches, artificial neural network (ANN)-based approaches have achieved high performance in several applications. In this paper, an optimized Single Layer Perceptron (SLP)-based approach is proposed to classify the CTG data accurately and predict the fetal state. The approach is able to exploit the advantages of the SLP model and optimize the learning rate using a grid search method, by which we can arrive at the best accuracy and converge to a local minimum. The approach is evaluated on the CTG dataset of the University of California, Irvine (UCI). The optimized SLP model is trained and tested on the dataset using a 10-fold cross-validation technique to classify the CTG patterns as normal, suspect or pathologic. The experimental results show that the proposed approach achieved 99.20% accuracy compared with the state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_30-An_Optimized_Single_Layer_Perceptron_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Customer Satisfaction Analysis using Facial Expressions and Headpose Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131029</link>
        <id>10.14569/IJACSA.2022.0131029</id>
        <doi>10.14569/IJACSA.2022.0131029</doi>
        <lastModDate>2022-10-31T11:46:19.3430000+00:00</lastModDate>
        
        <creator>Nethravathi P. S</creator>
        
        <creator>Manjula Sanjay Koti</creator>
        
        <creator>Taramol. K.G</creator>
        
        <creator>Soofi Anwar</creator>
        
        <creator>Gayathri Babu J</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <subject>Customer monitoring; convolutional neural network; facial expression recognition; facial analysis; head pose estimations component; CNN Model; object localization; face boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>One of the most exciting, innovative, and promising topics in marketing research is the quantification of customer interest. This work focuses on interest detection and provides a deep learning-based system that monitors client behaviour. By assessing head position, the recommended method evaluates customer attentiveness. Customers whose heads are directed toward the promotion or item of curiosity are identified by the system, which analyses facial expressions and records client interest. An exclusive method is recommended to first recognize frontal face postures and then split facial components that are critical for detecting facial expressions into iconized face pictures; in this way, consumer interest monitoring is executed. Finally, the raw facial images are combined with the iconized face image&#39;s confidence ratings to estimate facial emotions. This technique combines local part-based characteristics with holistic face data for precise facial emotion identification. The new method provides the required marketing and product dimension, and the findings indicate that the suggested architecture has the potential to be implemented because it is efficient and operates in real time.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_29-Real_Time_Customer_Satisfaction_Analysis_using_Facial_Expressions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of the Aesthetically Mobile Interfaces on Students’ Learning Experience for Primary Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131028</link>
        <id>10.14569/IJACSA.2022.0131028</id>
        <doi>10.14569/IJACSA.2022.0131028</doi>
        <lastModDate>2022-10-31T11:46:19.3270000+00:00</lastModDate>
        
        <creator>Nor Fatin Farzana Binti Zainuddin</creator>
        
        <creator>Zuriana Binti Abu Bakar</creator>
        
        <creator>Noor Maizura Binti Mohammad</creator>
        
        <creator>Rosmayati Binti Mohamed</creator>
        
        <subject>Aesthetic; non-aesthetic; mobile interfaces; primary education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Mobile devices such as mobile phones are becoming more important to school students today. Due to the COVID-19 pandemic, most traditional face-to-face learning has shifted to online learning, such as learning via a mobile platform. Mobile learning, also known as m-learning, is defined as learning in numerous situations through social and content interaction utilizing personal electronic devices. M-learning applications not only need to have efficient functions, but also have to attract students to learn by providing an attractive interface. The aesthetics of a mobile interface are essential, since they can influence the user&#39;s learning experience, whereas non-aesthetic interfaces can do the opposite. User experience (UX) encompasses an extensive range of outcomes of user-device interaction, including cognitions, attitudes, beliefs, behaviour, behavioural intentions, and affect. However, this study focuses on UX in terms of learnability, satisfaction, and efficiency, since most previous studies did not explicitly focus on examining these three (3) UX components. Thus, this study aims to investigate the effect of aesthetic mobile interfaces on the learnability, satisfaction, and efficiency of primary school students, specifically Kelas Al-Quran and Fardu Ain (KAFA) students. This study found that aesthetic mobile interfaces significantly affected students’ learning experiences regarding learnability, satisfaction, and efficiency. In conclusion, the findings of this study could serve as guidelines for future research in the field of mobile interface design.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_28-The_Effect_of_the_Aesthetically_Mobile_Interfaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Covid-19 Time Series Data using the Long Short-Term Memory (LSTM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131026</link>
        <id>10.14569/IJACSA.2022.0131026</id>
        <doi>10.14569/IJACSA.2022.0131026</doi>
        <lastModDate>2022-10-31T11:46:19.3100000+00:00</lastModDate>
        
        <creator>Harun Mukhtar</creator>
        
        <creator>Reny Medikawati Taufiq</creator>
        
        <creator>Ilham Herwinanda</creator>
        
        <creator>Doni Winarso</creator>
        
        <creator>Regiolina Hayami</creator>
        
        <subject>Time series prediction; forecasting; recurrent neural network; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>According to confirmed statistical data on accumulated Covid-19 cases in Riau Province, sourced from (https://corona.riau.go.id/data-statistik/), there were 63441 cases on June 7, 2021; 65883 cases on June 14, 2021; 67910 cases on June 21, 2021; and 69830 cases on June 28, 2021. Since the beginning of the pandemic outbreak, the case data was observed to increase every week through that July. This study predicts Covid-19 time series data in Riau Province using the LSTM algorithm, with a dataset of 64 lines. Long Short-Term Memory has the ability to retain information about patterns in the data over long periods. Tests predicting historical data for Covid-19 cases in Riau Province resulted in the lowest RMSE values in the death column: 8.87 on the training data and 13.00 on the test data. The best MAPE value on the training data, 0.23%, is in the recovered column, and the best MAPE value on the test data, 0.27%, is in the positive_number column. In the test predicting the next 30 days using the trained LSTM model, the performance of the predictions for the positive_number and death columns was very good, the recovered column was categorized as good, and the independent_isolation and care_rs columns were categorized as poor.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_26-Forecasting_Covid_19_Time_Series_Data_using_the_Long_Short_Term.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Dense Layered Network Model for Epileptic Seizures Prediction with Feature Representation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131027</link>
        <id>10.14569/IJACSA.2022.0131027</id>
        <doi>10.14569/IJACSA.2022.0131027</doi>
        <lastModDate>2022-10-31T11:46:19.3100000+00:00</lastModDate>
        
        <creator>Summia Parveen</creator>
        
        <creator>S. A. Siva Kumar</creator>
        
        <creator>P. MohanRaj</creator>
        
        <creator>Kingsly Jabakumar</creator>
        
        <creator>R. Senthil Ganesh</creator>
        
        <subject>Epilepsy seizure; pre-ictal state; deep learning; feature representation; vector model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Epilepsy is a neurological disorder that affects about 60 million people all over the world. Of these, about 30% cannot be cured with surgery or medications. Seizure prediction at an earlier stage helps in disease prevention through therapeutic interventions. Certain studies have observed that abnormal brain activity occurs before the initiation of a seizure, a period medically termed the pre-ictal state. Various investigators aim to predict the baseline for treating the pre-ictal seizure stage; however, an effective prediction model with high specificity and sensitivity is still a challenging task. This work concentrates on modelling an efficient dense layered network model (DLNM) for seizure prediction using a deep learning (DL) approach. The proposed framework is composed of pre-processing, feature representation, and classification with a support-vector-based layered model (dense layered model). The model is tested on roughly 24 subjects from the CHB-MIT dataset, attaining an average accuracy of 96%. The purpose of the research is to enable earlier seizure prediction, reducing the mortality rate and the severity of the disease to help the people suffering from it.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_27-Design_of_a_Dense_Layered_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Multilayer Perceptron Hyperparameter in Classifying Pneumonia Disease Through X-Ray Images with Speeded-Up Robust Features Extraction Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131025</link>
        <id>10.14569/IJACSA.2022.0131025</id>
        <doi>10.14569/IJACSA.2022.0131025</doi>
        <lastModDate>2022-10-31T11:46:19.2970000+00:00</lastModDate>
        
        <creator>Mutammimul Ula</creator>
        
        <creator>Muhathir</creator>
        
        <creator>Ilham Sahputra</creator>
        
        <subject>Multilayer perceptron; SURF; pneumonia; hyperparameter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Pneumonia is an illness that can affect practically anyone, from children to the elderly. It is an infectious disease caused by viruses, bacteria, or fungi that affects the lungs. Recognizing someone who has pneumonia is quite difficult because pneumonia has multiple levels of classification, and the symptoms experienced may vary. This study uses the multilayer perceptron approach to categorize pneumonia and determine the level of accuracy, contributing to scientific development. The multilayer perceptron is employed as the classification method with the learning rate and momentum hyperparameters, while SURF is used to extract the features for classification. Based on the experiments carried out, the learning rate value is generally not very influential in the learning process at momentum values of 0.1, 0.3, 0.5, 0.7, and 0.9. The best accuracy for momentum 0.1 is obtained at a learning rate of 0.05; for momentum 0.3, at a learning rate of 0.09; for momentum 0.5, at learning rates of 0.05 and 0.07; for momentum 0.7, at a learning rate of 0.03; and for momentum 0.9, at a learning rate of 0.09. Future work should repeat this research while varying the number of hidden layers and the number of nodes in each hidden layer, as these will affect computation time and may yield more optimal accuracy results.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_25-Optimization_of_Multilayer_Perceptron_Hyperparameter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Model for Personalized Tariff Plan based on Customer’s Behavior in the Telecom Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131023</link>
        <id>10.14569/IJACSA.2022.0131023</id>
        <doi>10.14569/IJACSA.2022.0131023</doi>
        <lastModDate>2022-10-31T11:46:19.2800000+00:00</lastModDate>
        
        <creator>Lewlisa Saha</creator>
        
        <creator>Hrudaya Kumar Tripathy</creator>
        
        <creator>Fatma Masmoudi</creator>
        
        <creator>Tarek Gaber</creator>
        
        <subject>Customer behavior; data analytics; ensemble learning; machine learning; telecommunication industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>In the telecommunication industry, the ultimate target is to predict customers&#39; behavioral patterns in order to successfully design and recommend suitable tariff plans. Behavioral patterns have a vital connection with customers&#39; demographic background. Various studies based on hypothesis testing, regression analysis, and conjoint analysis have examined the interdependencies among these factors and their effects on customers&#39; behavioral needs, presenting ample scope for research using numerous classification-based techniques. This work proposes a model to predict customers&#39; behavioral patterns from their demographic data. The model was built after investigating various classification-based machine learning techniques, including traditional ones such as decision tree, k-nearest neighbor, logistic regression, and artificial neural networks, along with ensemble techniques such as random forest, AdaBoost, gradient boosting machine, extreme gradient boosting, bagging, and stacking. These were applied to a dataset collected using a questionnaire in India. Among the traditional classifiers, decision tree gave the best result with 81% accuracy, and random forest showed the best result among the ensemble learning techniques with an accuracy of 83%. The proposed model has shown a very positive outcome in predicting customers&#39; behavioral patterns.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_23-A_Machine_Learning_Model_for_Personalized_Tariff_Plan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review of Deep Learning-Based Detection and Classification Methods for Bacterial Colonies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131024</link>
        <id>10.14569/IJACSA.2022.0131024</id>
        <doi>10.14569/IJACSA.2022.0131024</doi>
        <lastModDate>2022-10-31T11:46:19.2800000+00:00</lastModDate>
        
        <creator>Shimaa A. Nagro</creator>
        
        <subject>AI; bacterial-colonies; classification; deep learning; detection; literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Deep learning is an area of machine learning that has substantial potential in various fields of study such as image processing and computer vision. A large number of studies are published annually on deep learning techniques. The focus of this paper is on bacteria detection, identification, and classification. This paper presents a systematic literature review that synthesizes the evidence related to bacteria colony identification and detection published in the year 2021. The aim is to aggregate, analyse, and summarize the evidence related to deep learning detection, identification, and classification of bacteria and bacteria colonies. The significance is that the review will help experts and technicians understand how deep learning techniques can be applied in this regard and potentially support more accurate detection of bacteria types. A total of 38 studies are analysed. The majority of the published studies focus on supervised-learning-based convolutional neural networks. Furthermore, a large number of studies make use of laboratory-prepared datasets rather than open-source and industrial datasets. The results also indicate a lack of tools, which is a barrier to adopting academic research in industrial settings.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_24-A_Systematic_Literature_Review_of_Deep_Learning_Based_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Model for Predicting Consumers’ Interests of IoT Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131022</link>
        <id>10.14569/IJACSA.2022.0131022</id>
        <doi>10.14569/IJACSA.2022.0131022</doi>
        <lastModDate>2022-10-31T11:46:19.2630000+00:00</lastModDate>
        
        <creator>Talal H. Noor</creator>
        
        <creator>Abdulqader M. Almars</creator>
        
        <creator>El-Sayed Atlam</creator>
        
        <creator>Ayman Noor</creator>
        
        <subject>Internet of things; IoT; knowledge-based; recommendation system; service-oriented architecture; SOA; long short-term memory; LSTM; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The Internet of Things (IoT) has contributed to several domains such as health, energy, education, transportation, and industry. However, with the increased number of IoT solutions worldwide, IoT consumers find it difficult to choose the technology that suits their needs. This article describes the design and implementation of an IoT recommendation system based on consumer interests. In particular, the knowledge-based IoT recommendation system exploits a Service Oriented Architecture (SOA) in which IoT device and service providers use a registry to advertise their products. Moreover, the proposed model uses a Long Short-Term Memory (LSTM) deep learning technique to predict the consumer&#39;s interest based on the consumer&#39;s data. The recommendation system then maps consumers to related IoT devices based on those interests. The proposed knowledge-based IoT recommendation system has been validated using a real-world IoT dataset collected from the Twitter Application Programming Interface (API) that includes more than 15,791 tweets. Overall, the results of our experiment are promising in terms of precision and recall. Furthermore, the proposed model achieved the highest accuracy score compared with other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_22-Deep_Learning_Model_for_Predicting_Consumers_Interests_of_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-supervised Text Annotation for Hate Speech Detection using K-Nearest Neighbors and Term Frequency-Inverse Document Frequency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131020</link>
        <id>10.14569/IJACSA.2022.0131020</id>
        <doi>10.14569/IJACSA.2022.0131020</doi>
        <lastModDate>2022-10-31T11:46:19.2500000+00:00</lastModDate>
        
        <creator>Nur Heri Cahyana</creator>
        
        <creator>Shoffan Saifullah</creator>
        
        <creator>Yuli Fauziah</creator>
        
        <creator>Agus Sasmito Aribowo</creator>
        
        <creator>Rafal Drezewski</creator>
        
        <subject>Natural language processing; text annotation; semi-supervised learning; TF-IDF; K-NN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Sentiment analysis can detect hate speech using Natural Language Processing (NLP) concepts. This process requires annotation of the text during labeling. However, when carried out manually, the process requires experts in the field of hate speech to avoid subjectivity; moreover, manual annotation of extensive data takes a long time and allows errors. To solve this problem, we propose an automatic annotation process based on semi-supervised learning using the K-Nearest Neighbor (KNN) algorithm. The process uses term frequency-inverse document frequency (TF-IDF) feature extraction to obtain optimal results. KNN with TF-IDF was able to annotate the data and increase accuracy in detecting hate speech by &lt; 2%, from 57.25% at the initial iteration to 59.68%. The process annotated an initial dataset of 13169 records with an 80:20 split of training and testing data: 2370 labeled records, 1317 unannotated records for testing, and 9482 records remaining after preprocessing. The final result of the KNN and TF-IDF annotation process is 11235 annotated records.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_20-Semi_supervised_Text_Annotation_for_Hate_Speech_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Edge Detection Algorithms for Texture Analysis on Copy-Move Forgery Detection Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131021</link>
        <id>10.14569/IJACSA.2022.0131021</id>
        <doi>10.14569/IJACSA.2022.0131021</doi>
        <lastModDate>2022-10-31T11:46:19.2500000+00:00</lastModDate>
        
        <creator>Bashir Idris</creator>
        
        <creator>Lili N. Abdullah</creator>
        
        <creator>Alfian Abdul Halim</creator>
        
        <creator>Mohd Taufik Abdullah Selimun</creator>
        
        <subject>Edge detection; first derivative; second derivatives; robert; sobel; prewitt; laplacian; canny edge detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Feature extraction in Copy-Move Forgery Detection (CMFD) is crucial to facilitate image forgery analysis. Edge detection is one of the processes used to extract specific information from Copy-Move Forgery (CMF) images. It reduces the amount of information in the image and filters out useless details while preserving the important structural properties of the image. This paper compares five edge detection methods: the Roberts, Sobel, and Prewitt (first-derivative) and the Laplacian and Canny (second-derivative) edge detectors. Images from the CMFD evaluation dataset (MICC-F220) are tested with all five methods to facilitate comparison. The edge detection operators were implemented with their respective convolution masks: Roberts with a 2x2 mask, Prewitt and Sobel with 3x3 masks, and Laplacian and Canny with adjustable masks. These masks determine the quality of the detected edges. Edges reflect a great intensity contrast, being either darker or brighter than their surroundings.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_21-Comparison_of_Edge_Detection_Algorithms_for_Texture_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Home-based Therapy: The Development of a Low-cost IoT-based Transcranial Direct Current Stimulation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131019</link>
        <id>10.14569/IJACSA.2022.0131019</id>
        <doi>10.14569/IJACSA.2022.0131019</doi>
        <lastModDate>2022-10-31T11:46:19.2330000+00:00</lastModDate>
        
        <creator>Ahmad O. Alokaily</creator>
        
        <creator>Ghala Almeteb</creator>
        
        <creator>Raghad Althabiti</creator>
        
        <creator>Suhail S. Alshahrani</creator>
        
        <subject>IoT; Internet of medical things; tDCS; home-based; brain stimulation; cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Transcranial direct current stimulation (tDCS), a neuromodulation technique that is painless and noninvasive, has shown promising results in assisting patients suffering from brain injuries and psychiatric conditions. Recently, there has been an increased interest in home-based therapeutic applications in various areas. This study proposes a low-cost, internet of things (IoT)-based tDCS prototype that provides the basic tDCS features with internet connectivity to enable remote monitoring of the system&#39;s usage and adherence. An IoT-enabled microcontroller was programmed with C++ to supply a specific dose of direct current between the anode and cathode electrodes for a predefined duration. Each tDCS session&#39;s information was successfully synchronized with an IoT cloud server to be remotely monitored. The accuracy of the resulting stimulation currents was close to the expected values with an acceptable error range. The proposed IoT-based tDCS system has the potential to be used as a telerehabilitation approach to enhance safety and adherence to home-based noninvasive brain stimulation techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_19-Towards_Home_based_Therapy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Detection using Integrated Learning Process Detection (ILPD)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131018</link>
        <id>10.14569/IJACSA.2022.0131018</id>
        <doi>10.14569/IJACSA.2022.0131018</doi>
        <lastModDate>2022-10-31T11:46:19.2170000+00:00</lastModDate>
        
        <creator>M. Praveena</creator>
        
        <creator>M. Kameswara Rao</creator>
        
        <subject>Machine learning (ML); deep convolutional neural network (D-CNN); brain-tumor-detection; integrated learning process detection (ILPD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Brain tumor detection is a complicated process in medical image processing. Analyzing brain tumors is a very difficult task because of their unstructured shapes. Generally, tumors are of two types: cancerous and non-cancerous. Cancerous tumors are called malignant and non-cancerous tumors are called benign. Malignant tumors pose greater risks to patients if they are not detected in the early stages. Precancerous tumors are a further type that may become cancerous if treatment is not given in the early stages. Machine Learning (ML) approaches are widely used to detect complex patterns, but ML has various disadvantages, such as the time-consuming process of detecting brain tumors. In this paper, integrated learning process detection (ILPD) is introduced to detect tumors in the brain, analyze their shape and size, and find the stage of the tumor in the given input image. To increase the tumor detection rate, advanced image filters are adopted with Deep Convolutional Neural Networks (D-CNN). A pre-trained model called VGG19 is applied to train on the MRI brain images for effective detection of tumors. Two benchmark datasets containing MRI brain scan images are collected from Kaggle and BraTS 2019. The performance of the proposed approach is analyzed in terms of accuracy, F1-score, sensitivity, dice similarity score, and specificity.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_18-Brain_Tumor_Detection_using_Integrated_Learning_Process_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complexity of Web-based Application for Research and Community Service in Academic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131017</link>
        <id>10.14569/IJACSA.2022.0131017</id>
        <doi>10.14569/IJACSA.2022.0131017</doi>
        <lastModDate>2022-10-31T11:46:19.2170000+00:00</lastModDate>
        
        <creator>Fitriana Fitriana</creator>
        
        <creator>Sukarni Sukarni</creator>
        
        <creator>Zulkifli Zulkifli</creator>
        
        <subject>Application complexity; program scale; software size; function point analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Research and community service data in an academic environment are very important assets that must be managed properly. They have to be managed synergistically in order to meet the quality standards of higher education. A centralized web-based application designed for managing research and community service data has been applied to support the management of these activities. To make the application suitable for users, it is necessary to estimate the size of the software built. This study aimed at measuring the consistency of the application owned by research and community service in Indonesia using the function point analysis (FPA) method. Fourteen Modification Complexity Adjustment Factors (MCAF) were used to calculate the program scale with adequate precision. The main task is determining the quality of the application sequentially, which includes measuring the weighted value of the function point components, namely the Crude Function Points (CFP), calculating the Relative Complexity Adjustment Factor (RCAF), and estimating the Function Points (FP) using the corresponding formula. The results show that the size of the application was estimated at about 18381 lines using the FPA method, achieving a successful estimation with a deviation of 2.2 percent.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_17-Complexity_of_Web_based_Application_for_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Unsupervised Machine Learning Techniques for an Efficient Customer Segmentation using Clustering Ensemble and Spectral Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131016</link>
        <id>10.14569/IJACSA.2022.0131016</id>
        <doi>10.14569/IJACSA.2022.0131016</doi>
        <lastModDate>2022-10-31T11:46:19.2030000+00:00</lastModDate>
        
        <creator>Nouri Hicham</creator>
        
        <creator>Sabri Karim</creator>
        
        <subject>Machine learning; customer segmentation; marketing; clustering ensemble; spectral clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Customer segmentation is key to a corporate decision support system. It is an important marketing technique that can target specific client categories. We create a novel customer segmentation technique based on a clustering ensemble: we ensemble four fundamental clustering models, DBSCAN, K-means, Mini Batch K-means, and Mean Shift, to deliver a consistent and high-quality result. Then, we use spectral clustering to integrate the numerous clustering findings and increase clustering quality. The new technique is more flexible with client data. Feature engineering cleans, processes, and transforms the raw data into features, which are then used to form clusters. The Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Dunn&#39;s Index (DI), and Silhouette Coefficient (SC) were utilized to compare our model&#39;s performance with individual clustering approaches. The experimental analysis found that our model has the best ARI (70.14%), NMI (71.75), DI (75.15), and SC (72.89%). After obtaining these results, we applied our model to an actual dataset collected from Moroccan citizens via social networks and email between 03/06/2022 and 19/08/2022.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_16-Analysis_of_Unsupervised_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Agriculture Area based on Superior Commodities in Geographic Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131015</link>
        <id>10.14569/IJACSA.2022.0131015</id>
        <doi>10.14569/IJACSA.2022.0131015</doi>
        <lastModDate>2022-10-31T11:46:19.1870000+00:00</lastModDate>
        
        <creator>Lilik Sumaryanti</creator>
        
        <creator>Rosmala Widjastuti</creator>
        
        <creator>Firman Tempola</creator>
        
        <creator>Heru Ismanto</creator>
        
        <subject>Classification; agriculture; location quotient; single linkage; geographic information system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This research presents a classification model that combines LQ (location quotient) analysis and hierarchical clustering using single linkage. The classification results are a basis for mapping the potential of agricultural areas based on superior food commodities in Merauke Regency, Indonesia. LQ analysis is used to select food commodities, while single linkage uses the production of three features, rice, corn, and peanuts, which have an LQ value&gt;1, to group sub-districts based on agricultural potential. Intelligent mapping is represented by mapping the sub-districts&#39; agricultural areas according to their clusters. The classification results show that the first cluster has sixteen sub-district members, the second consists of three sub-districts, and the third consists of one sub-district. The members of each cluster are similar according to the smallest Euclidean distance measurement. The proposed classification model is a creative idea for mapping agricultural areas, presenting information on regional potential based on superior food crop commodities.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_15-Classification_of_Agriculture_Area_based_on_Superior_Commodities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Computer-aided Argument Mapping into EFL Learners’ Argumentative Writing: Evidence from Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131013</link>
        <id>10.14569/IJACSA.2022.0131013</id>
        <doi>10.14569/IJACSA.2022.0131013</doi>
        <lastModDate>2022-10-31T11:46:19.1700000+00:00</lastModDate>
        
        <creator>Nuha Abdullah Alsmari</creator>
        
        <subject>Argumentative writing; argument mapping; computer-aided argument mapping; self-regulated learning; Saudi EFL learners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This paper aims to examine the effects of Computer-Aided Argument Mapping (CAAM) on Saudi EFL learners’ argumentative writing performance across the development of writing content and coherence and their self-regulated learning skills. A total of 40 second-year university EFL learners were purposively selected for a one-group pre- and post-test design. Using a mixed-method approach, three research tools were utilized: pre- and post-writing tests, a Self-regulated Learning Scale (SRLS), and semi-structured interviews. Quantitative results demonstrated that EFL learners’ argumentative writing performance made noteworthy gains, as manifested by the statistically significant differences between their pre- and post-test scores. Significant positive correlations were also found between the EFL learners’ overall argumentative writing performance and the SRL factor subscales, indicating an increase in the self-regulation mechanism relative to planning, self-monitoring, evaluation, effort, and self-efficacy. Qualitative results indicate that the participants positively embraced the integration of CAAM to improve their writing skills and self-regulation processes. Recommendations for implementing digital mapping to revolutionize EFL learning classrooms in this digital era are provided.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_13-Integrating_Computer_aided_Argument_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Computational Method of Motif Finding in the Zika Virus Genome</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131014</link>
        <id>10.14569/IJACSA.2022.0131014</id>
        <doi>10.14569/IJACSA.2022.0131014</doi>
        <lastModDate>2022-10-31T11:46:19.1700000+00:00</lastModDate>
        
        <creator>Pushpa Susant Mahapatro</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Consensus string; genome study; greedy search technique; motif search; pseudocount; regulatory proteins; ZIKV; Zika virus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The Zika virus (ZIKV) outbreak and spread is a global health emergency declared by the World Health Organization. ZIKV rapidly spread across the world, causing neurological disorders, and is gaining public and scientific consideration. ZIKV genome biology and molecular structure are better understood thanks to published papers. Genetic regulation is better understood by finding motifs in the DNA genome sequence, and transcription factor binding sites need to be identified to understand the genetic code. There is diversity in gene expression. Motif-finding methods work towards efficiently identifying repeated patterns in the genome. The ZIKV genome sequence is used in this study. Identifying motifs is still a difficult task: there is a low probability of identifying the binding sites, and finding all possible solutions is challenging because it requires a lot of time and has high space complexity for long motifs. The Greedy search technique with pseudocount finds the motif in real-time. The count matrix is computed, and the profile matrix is constructed from the Zika virus genome. The calculated consensus string helps in calculating the score of the motif. The Greedy motif search technique is applied in this paper to find motifs in the Zika virus genome; this technique has not been applied earlier to find motifs in the Zika virus. The motifs are identified using Greedy motif search both without and with pseudocount.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_14-An_Efficient_Computational_Method_of_Motif_Finding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Associating User’s Preference and Satisfaction into Quality of Experience: A Shoulder-surfing Resistant Authentication Scheme by Visual Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131012</link>
        <id>10.14569/IJACSA.2022.0131012</id>
        <doi>10.14569/IJACSA.2022.0131012</doi>
        <lastModDate>2022-10-31T11:46:19.1570000+00:00</lastModDate>
        
        <creator>Juliana Mohamed</creator>
        
        <creator>Mohd Farhan Md Fudzee</creator>
        
        <creator>Sofia Najwa Ramli</creator>
        
        <creator>Mohd Norasri Ismail</creator>
        
        <creator>Muhamad Hanif Jofri</creator>
        
        <subject>Authentication; usability; algorithm; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Authentication acts as a secured method of applying usability concepts to certain transactions, especially online banking transactions. Existing methods are lacking in terms of usability, thus making the goal of usability for authentication activities unsuccessful. A study has discovered some key concepts of usability in terms of Human Computer Interaction (HCI) by comparing two existing models of two different factors: environmental factors and display factors. An algorithm shows the authentication steps during the online transaction activity. This paper aims to prove that a shoulder-surfing resistant authentication scheme that uses a visual colour-blind mode-based model meets all the requirements of usability and hence achieves the goal of usable authentication. This study brings forward an algorithm that examines the stated authentication scheme with the two factors, i.e., environmental and display, during the authentication activity.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_12-Associating_Users_Preference_and_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Applications for Cybercrime Prevention: A Comprehensive Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131010</link>
        <id>10.14569/IJACSA.2022.0131010</id>
        <doi>10.14569/IJACSA.2022.0131010</doi>
        <lastModDate>2022-10-31T11:46:19.1400000+00:00</lastModDate>
        
        <creator>Irma Huaman&#241;ahui Chipa</creator>
        
        <creator>Javier Gamboa-Cruzado</creator>
        
        <creator>Jimmy Ramirez Villacorta</creator>
        
        <subject>Computer crimes; cyberattacks; cyber security; mobile apps; phishing; machine learning; malware; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Nowadays, cybercrime, cyberattacks, cyber security, phishing and malware are taking a more notorious role in people&#39;s daily lives, not only at the international level. The great technological leaps brought with them new modalities of cybercrime, and the number of victims of cybercriminals has increased considerably. The objective of this study is to determine the state of the art of mobile applications and their impact on computer crime prevention. Therefore, it has become necessary to know what preventive measures are being taken, such as techniques for detecting computer crimes, their modalities and their classification. To close this knowledge gap, a systematic literature review (SLR), following the methodology proposed by Kitchenham &amp; Charters, was conducted to obtain the detection techniques and classification of computer crimes based on the review of 68 papers published between 2017 and 2022. Likewise, different tables and graphs of the selected studies are provided, which offer additional information such as the most used keywords per paper, bibliometric networks, among others.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_10-Mobile_Applications_for_Cybercrime_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Efficiency of the Optimization Algorithms for Transfer Learning on the Rice Leaf Disease Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131011</link>
        <id>10.14569/IJACSA.2022.0131011</id>
        <doi>10.14569/IJACSA.2022.0131011</doi>
        <lastModDate>2022-10-31T11:46:19.1400000+00:00</lastModDate>
        
        <creator>Luyl-Da Quach</creator>
        
        <creator>Khang Nguyen Quoc</creator>
        
        <creator>Anh Nguyen Quynh</creator>
        
        <creator>Hoang Tran Ngoc</creator>
        
        <subject>Optimization algorithm; transfer learning; RMSprop; rice leaf disease; Adam</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>To improve model efficiency, many different methods are used, including transfer learning, to improve the recognition and classification of image data. This study combines optimization algorithms with transfer learning based on the MobileNet, MobileNetV2, InceptionV3, Xception, ResNet50V2, and DenseNet201 models, testing on a rice leaf disease dataset of 13,186 images with backgrounds removed. The best result was obtained with the RMSprop algorithm, reaching an accuracy of 88% when combined with the Xception model; similarly, the Xception and ResNet50V2 models achieved an F1 score of 87% when combined with the Adam algorithm. This shows the effect of the optimization gradients on the transfer learning model. The study evaluates and selects the optimal model to build a website for identifying diseases on rice leaves, with main functions including images and the recording of disease identification points for better management of diseased rice areas.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_11-Evaluation_of_the_Efficiency_of_the_Optimization_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sequence-Aware Recommendation Method based on Complex Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131009</link>
        <id>10.14569/IJACSA.2022.0131009</id>
        <doi>10.14569/IJACSA.2022.0131009</doi>
        <lastModDate>2022-10-31T11:46:19.1230000+00:00</lastModDate>
        
        <creator>Abdullah Alhadlaq</creator>
        
        <creator>Said Kerrache</creator>
        
        <creator>Hatim Aboalsamh</creator>
        
        <subject>Sequence-aware recommender systems; complex networks; similarity-popularity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Online stores and service providers rely heavily on recommendation software to guide users through the vast number of available products. Consequently, the field of recommender systems has attracted increased attention from the industry and academia alike, but despite this joint effort, the field still faces several challenges. For instance, most existing work models the recommendation problem as a matrix completion problem to predict the user preference for an item. This abstraction prevents the system from utilizing the rich information from the ordered sequence of user actions logged in online sessions. To address this limitation, researchers have recently developed a promising new breed of algorithms called sequence-aware recommender systems to predict the user’s next action by utilizing the time series composed of the sequence of actions in an ongoing user session. This paper proposes a novel sequence-aware recommendation approach based on a complex network generated by the hidden metric space model, which combines node similarity and popularity to generate links. We build a network model from data and then use it to predict the user’s subsequent actions. The network model provides an additional information source that improves the recommendations’ accuracy. The proposed method is implemented and tested experimentally on a large dataset. The results prove that the proposed approach performs better than state-of-the-art recommendation methods.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_9-A_Sequence_Aware_Recommendation_Method_based_on_Complex_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Precision Marketing based on Big Data Analysis and Machine Learning: Case Study of Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131008</link>
        <id>10.14569/IJACSA.2022.0131008</id>
        <doi>10.14569/IJACSA.2022.0131008</doi>
        <lastModDate>2022-10-31T11:46:19.1100000+00:00</lastModDate>
        
        <creator>Nouhaila El Koufi</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <creator>Mounir Sdiq</creator>
        
        <subject>Precision marketing; big data analysis; machine learning; potential customers prediction algorithm (PCPA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the growth of the Internet industry and the informatization of services, online services and transactions have become the mainstream method used by clients and companies. Attracting potential customers and keeping up with the Big Data era are important challenges for the banking sector. With the development of artificial intelligence and machine learning, it has become possible to identify potential customers and provide personalized recommendations based on transactional data to realize precision marketing in banking. The current study aims to provide a potential customers prediction algorithm (PCPA) to predict potential clients using big data analysis and machine learning techniques. Our proposed methodology consists of five stages: data preprocessing, feature selection using the grid search algorithm, data splitting into train and test sets with a ratio of 80% and 20% respectively, modeling, and evaluation of results using a confusion matrix. According to the obtained results, the accuracy of the final model is the highest (98.9%). The dataset used in this research on banking customers has been collected from a Moroccan bank. It contains 6000 records, 14 predictor variables, and one outcome variable.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_8-Research_on_Precision_Marketing_based_on_Big_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Stacking Ensemble Machine in Big Data: Analyze the Determinants for Vitalization of the Multicultural Support Center</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131007</link>
        <id>10.14569/IJACSA.2022.0131007</id>
        <doi>10.14569/IJACSA.2022.0131007</doi>
        <lastModDate>2022-10-31T11:46:19.0930000+00:00</lastModDate>
        
        <creator>Raeho Lee</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Stacking ensemble machine; radial basis function neural network; random forest; multicultural family support centers; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>For multicultural families to successfully promote social adaptation and achieve desirable social integration, the role of the multicultural family support center (Multi-FSC) is crucial. In addition, it&#39;s important to examine the factors that will contribute to the multicultural support center&#39;s vitality from the standpoint of the customers. In this study, machine learning models based on a single machine learning model and a stacking ensemble, using survey data from all multicultural families, were used to examine the determinants of the utilization of multicultural family support centers. Additionally, based on the constructed prediction model, this study offers foundational data for the revitalization of the multicultural support center. In this study, 281,606 adults (19 years or older), 56,273 of whom were married immigrants or naturalized citizens as of 2012, were examined. The stacking ensemble method was employed in this work to forecast the use of multicultural family support centers. In the base stage of this model, logistic regression was employed, along with Classification and Regression Tree (CART), Radial Basis Function Neural Network (RBF-NN), and Random Forest (RF) models. The RBF-NN-Logit reg model had the best prediction performance, according to the study&#39;s findings (RMSE = 0.20, Ev = 0.45, and IA = 0.68). The findings of this study suggest that the prediction performance of the stacking ensemble can be improved when creating classification or prediction models using epidemiological data from a community.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_7-Application_of_Stacking_Ensemble_Machine_in_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Automatic Question Generation in Teaching Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131006</link>
        <id>10.14569/IJACSA.2022.0131006</id>
        <doi>10.14569/IJACSA.2022.0131006</doi>
        <lastModDate>2022-10-31T11:46:19.0930000+00:00</lastModDate>
        
        <creator>Jawad Alshboul</creator>
        
        <creator>Erika Baksa-Varga</creator>
        
        <subject>Question generation; question generation techniques; automatic question generation; teaching programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Computer programming is a complex field that requires rigorous practice in programming code writing and learning skills, which can be one of the critical challenges in learning and teaching programming. The complicated nature of computer programming requires an instructor to manage its learning resources and diligently generate programming-related questions for students that need conceptual programming and procedural programming rules. In this regard, automatic question generation techniques help teachers carefully align their learning objectives with the question designs in terms of relevancy and complexity. This also helps in reducing the costs linked with the manual generation of questions and fulfills the need to supply new questions through automatic question techniques. This paper presents a theoretical review of automatic question generation (AQG) techniques, particularly related to computer programming languages, from 2017 to 2022. A total of 18 papers are included in this study. One of the goals is to analyze and compare the state of the field in question generation before and after the COVID-19 period, and to summarize the challenges and future directions in the field. In congruence with previous studies, there is little focus given in the existing literature on generating questions related to learning programming languages through different techniques. Our findings show that there is a need to further enhance experimental studies in implementing automatic question generation, especially in the field of programming. Also, there is a need to implement an authoring tool that can demonstrate designing more practical evaluation metrics for students.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_6-A_Review_of_Automatic_Question_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart System for Emergency Traffic Recommendations: Urban Ambulance Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131005</link>
        <id>10.14569/IJACSA.2022.0131005</id>
        <doi>10.14569/IJACSA.2022.0131005</doi>
        <lastModDate>2022-10-31T11:46:19.0770000+00:00</lastModDate>
        
        <creator>Ayoub Charef</creator>
        
        <creator>Zahi Jarir</creator>
        
        <creator>Mohamed Quafafou</creator>
        
        <subject>Recommendation systems; emergency urban traffic; ambulance mobility; emergency navigation services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>With the increasing evolution of advanced technologies and techniques such as the Internet of Things, Artificial Intelligence and Big Data, the traffic management systems industry has acquired new methodologies for creating advanced and intelligent services and applications for traffic management and safety. The current contribution focuses on the implementation of a path recommendation service for paramedics in emergency situations, which is one of the most critical and complex issues in traffic management for the survival of individuals involved in emergency incidents. This work mainly focuses on the response time to life-threatening incidents, which is an indicator for emergency ambulance services, and on recommending the fastest ambulance route. To this end, we propose a hybrid approach consisting of a local approach using machine learning techniques to predict the congestion of different sections of a map from an origin to a destination, and a global approach to suggest the fastest path to ambulance drivers in real time as they move in OpenStreetMap.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_5-Smart_System_for_Emergency_Traffic_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>English and Romanian Brain-to-Text Brain-Computer Interface Word Prediction System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131003</link>
        <id>10.14569/IJACSA.2022.0131003</id>
        <doi>10.14569/IJACSA.2022.0131003</doi>
        <lastModDate>2022-10-31T11:46:19.0600000+00:00</lastModDate>
        
        <creator>Haider Abdullah Ali</creator>
        
        <creator>Nicolae Goga</creator>
        
        <creator>Andrei Vasilateanu</creator>
        
        <creator>Ali M. Muslim</creator>
        
        <creator>Khalid Abdullah Ali</creator>
        
        <creator>Marius-Valentin Dragoi</creator>
        
        <subject>Brain-to-text; Brain-Computer Interface (BCI); Electroencephalography (EEG); Natural Language Processing (NLP); English language; Romanian language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Brain-Computer Interface (BCI) can recognise the thoughts of a human through various electrophysiological signals. Electrodes (sensors) placed on the scalp are used to detect these signals, or by using electrodes implanted inside the brain. Usually, BCI can detect brain activity through different neuroimage methods, but the most preferred is Electroencephalography (EEG) because it is a non-invasive and non-critical method. BCI systems applications are very helpful in restoring functionalities to people suffering from disabilities due to different reasons. In this study, a novel brain-to-text BCI system is presented to predict the word that the subject is thinking. This brain-to-text can assist mute people or those who cannot communicate with others due to different diseases to restore some of their abilities to interact with the surrounding environment and express themselves. In addition, brain-to-text may be used in different control or entertainment applications. EMOTIV™ Insight headset has been used to collect EEG signals from the subject’s brain. Feature extraction of EEG signals for BCI systems is very important to classification performance. Statistical-based feature extraction has been used in this system to extract valuable features to be used for classification. The datasets are sentences involving some commonly used words in English and Romanian languages. The results of the English language elucidated that K-Nearest neighbour (KNN) has a prediction accuracy of 86.7%, 86.1% for Support Vector Machine (SVM), and 79.2% for Linear discriminant analysis (LDA), while the Romanian language has a prediction accuracy of 96.1%, 97.1%, and 94.8% for SVM, LDA, and KNN respectively. This system is a step forward in developing advanced brain-to-text BCI prediction systems.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_3-English_and_Romanian_Brain_to_Text_Brain_Computer_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-grained Access Control Method for Blockchain Data Sharing based on Cloud Platform Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131004</link>
        <id>10.14569/IJACSA.2022.0131004</id>
        <doi>10.14569/IJACSA.2022.0131004</doi>
        <lastModDate>2022-10-31T11:46:19.0600000+00:00</lastModDate>
        
        <creator>Yu Qiu</creator>
        
        <creator>Biying Sun</creator>
        
        <creator>Qian Dang</creator>
        
        <creator>Chunhui Du</creator>
        
        <creator>Na Li</creator>
        
        <subject>Power grid data; blockchain technology; data sharing; fine-grained access control; game strategy; ciphertext key</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>Blockchain technology has the advantages of decentralization, de-trust, and non-tampering, which break through the limitations of traditional centralized technology, so it has gradually become the key technology for power data security storage and privacy protection. In the existing smart grid framework, the grid operator is a centralized key distribution organization responsible for sending all the secret credentials, making it prone to a single point of failure that can result in large-scale loss of personal information. To solve the problem of inflexible access control in the smart grid data-sharing framework, and considering the limitations of multi-party cooperation among grid operators and efficiency, an attribute-based access control scheme supporting privacy preservation in the smart grid is constructed in this paper. A fine-grained access control scheme supporting privacy protection is designed and extended to the smart grid system, which enables the system to achieve fine-grained access control of power data. A decryption test algorithm is added before the decryption algorithm. Finally, through performance analysis and comparison with other schemes, it is verified that the performance of this system is 7% higher than the traditional method and the storage cost is 9.5% lower, which reflects the superiority of the system. Full optimization of the access policy is achieved, and the scheme is proved to implement the coordination and cooperation of multiple authorized agencies more efficiently during system initialization.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_4-Fine_grained_Access_Control_Method_for_Blockchain_Data_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoTCID: A Dynamic Detection Technology for Command Injection Vulnerabilities in IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131002</link>
        <id>10.14569/IJACSA.2022.0131002</id>
        <doi>10.14569/IJACSA.2022.0131002</doi>
        <lastModDate>2022-10-31T11:46:19.0470000+00:00</lastModDate>
        
        <creator>Hao Chen</creator>
        
        <creator>Jinxin Ma</creator>
        
        <creator>Baojiang Cui</creator>
        
        <creator>Junsong Fu</creator>
        
        <subject>Firmware vulnerability mining; command injection; dynamic detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>The pervasiveness of IoT devices has brought us convenience as well as the risks of security vulnerabilities. However, traditional device vulnerability detection methods cannot efficiently detect command injection vulnerabilities due to heavy execution overheads or false positives and false negatives. Therefore, we propose a novel dynamic detection solution, IoTCID. First, it generates constrained models by parsing the front-end files of the IoT device, and a static binary analysis is performed on the back-end programs to locate the interface processing function. Then, it utilizes a fuzzing method based on feedback from a Distance Function, which selects high-quality samples through various scheduling strategies. Finally, with the help of probe code, it compares the parameters of potentially risky functions with samples to confirm the command injection vulnerabilities. We implement a prototype of IoTCID, evaluate it on real-world IoT devices from three vendors, and confirm six vulnerabilities. This shows that IoTCID is effective in discovering command injection vulnerabilities in IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_2-IoTCID_A_Dynamic_Detection_Technology_for_Command_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hand Motion Estimation using Super-Resolution of Multipoint Surface Electromyogram by Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0131001</link>
        <id>10.14569/IJACSA.2022.0131001</id>
        <doi>10.14569/IJACSA.2022.0131001</doi>
        <lastModDate>2022-10-31T11:46:19.0300000+00:00</lastModDate>
        
        <creator>Keigo FUKUSHIMA</creator>
        
        <creator>Yoshiaki YASUMURA</creator>
        
        <subject>Hand motion estimation; super-resolution; deep neural network; prosthetic hand; electromyography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(10), 2022</description>
        <description>This paper proposes a method for hand motion estimation for prosthetic hands using super-resolution of multipoint surface electromyograms. In general, obtaining more EMG (electromyography) signals improves the accuracy of hand motion estimation, but doing so is costly and impractical. Therefore, this method improves the accuracy of hand motion estimation by estimating a large number of EMG signals from a small number of EMG signals using super-resolution. This super-resolution is achieved by learning the relationship between a small number and a large number of myoelectric signals using a deep neural network. Hand motions are then estimated from the high-resolution signals using a deep neural network. Experiments using actual EMG signals show that the proposed method improves the accuracy of hand motion estimation.</description>
        <description>http://thesai.org/Downloads/Volume13No10/Paper_1-Hand_Motion_Estimation_using_Super_Resolution_of_Multipoint_Surface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Position and Object Motion based Spatio-Temporal Analysis for the Recognition of Human Shopping Actions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309121</link>
        <id>10.14569/IJACSA.2022.01309121</id>
        <doi>10.14569/IJACSA.2022.01309121</doi>
        <lastModDate>2022-09-30T12:12:40.5500000+00:00</lastModDate>
        
        <creator>Nethravathi P. S</creator>
        
        <creator>Karuna Pandith</creator>
        
        <creator>Manjula Sanjay Koti</creator>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Sumathi Pawar</creator>
        
        <subject>Deep convolutional neural networks; computer vision; object detection; object localization; temporal analysis; human shopping actions component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Retailers have long sought ways to better understand their consumers&#39; behavior in order to deliver a smooth and enjoyable shopping experience that draws more customers every day and, as a result, optimizes income. By combining various visual cues such as activities, gestures, and facial expressions, humans can fully grasp the behavior of others. However, due to intrinsic difficulties as well as extrinsic issues such as a shortage of publicly available data and unconstrained, in-the-wild environmental conditions, enabling computer vision systems to do the same remains an open problem. In this paper, the authors focus on human activity recognition in computer vision, which is the first and by far the most important cue in behavior analysis. To accomplish this, the authors present an approach that integrates human position and object motion to detect and classify actions through both temporal and spatial analysis. On the MERL shopping dataset, the authors obtain state-of-the-art results and demonstrate the capabilities of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_121-Human_Position_and_Object_Motion_based_Spatio_Temporal_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifiers Combination for Efficient Masked Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309120</link>
        <id>10.14569/IJACSA.2022.01309120</id>
        <doi>10.14569/IJACSA.2022.01309120</doi>
        <lastModDate>2022-09-30T12:12:40.5330000+00:00</lastModDate>
        
        <creator>Kebir Marwa</creator>
        
        <creator>Ouni Kais</creator>
        
        <subject>Masked faces; deep learning; AlexNet; ResNet50; FaceNet; classifiers combination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This study was developed following the upheaval caused by the spread of the Coronavirus around the world. This global crisis greatly affects security systems based on facial recognition, given the obligation to wear a mask. The mask camouflages the entire lower part of the face, which is a great source of information for the recognition operation. In this article, we implement three different pre-trained feature extractor models. These models are improved by adding the well-known Support Vector Machine (SVM) to reinforce the classification task. Among the investigated architectures, the FaceNet feature extraction model shows remarkable results on both databases, with a recognition rate of 90% on RMFD and a slightly lower 88.57% on SMFD. Following these simulations, we propose a combination of classifiers (SVM-KNN) that yields a remarkable improvement, increasing the accuracy of the selected model by almost 4%.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_120-Classifiers_Combination_for_Efficient_Masked_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm for Providing Adaptive Behavior to Humanoid Robot in Oral Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309119</link>
        <id>10.14569/IJACSA.2022.01309119</id>
        <doi>10.14569/IJACSA.2022.01309119</doi>
        <lastModDate>2022-09-30T12:12:40.5170000+00:00</lastModDate>
        
        <creator>Dalia khairy</creator>
        
        <creator>Salem Alkhalaf</creator>
        
        <creator>M. F. Areed</creator>
        
        <creator>Mohamed A. Amasha</creator>
        
        <creator>Rania A. Abougalala</creator>
        
        <subject>Algorithm; humanoid robot; social robots; oral assessment; assistance robots; higher education; adaptive behavior robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Assistance humanoid robots (AHR) are the category of robotics used to offer social interaction to humans. In higher education, teaching staff support the acceptance of AHRs as a social assistance tool during learning activities, taking full responsibility for the correct operation of the device and providing a more comprehensive view of the objectives and significance of AHR use. Students, on the other hand, treat AHRs either as a friend or as an authority figure such as a teacher. This paper presents an algorithm for AHRs in oral assessments. The proposed algorithm focuses on four characteristics: adaptive occurrence, friendly existence, persuasion, and external appearance. This paper integrates AHRs into higher education to improve the quality of psychological and social communication during oral assessment, where they can help students deal with challenges such as shyness, dissatisfaction, hesitation, and lack of confidence better than a human teacher can. Thus, AHRs can increase students’ self-confidence and enrich active learning.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_119-An_Algorithm_for_Providing_Adaptive_Behavior_to_Humanoid_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generation and Assessment of Intellectual and Informational Capital as a Foundation for Corporations’ Digital Innovations in the “Open Innovation” System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309118</link>
        <id>10.14569/IJACSA.2022.01309118</id>
        <doi>10.14569/IJACSA.2022.01309118</doi>
        <lastModDate>2022-09-30T12:12:40.5030000+00:00</lastModDate>
        
        <creator>Viktoriya Valeryevna Manuylenko</creator>
        
        <creator>Galina Alexandrovna Ermakova</creator>
        
        <creator>Natalia Vladimirovna Gryzunova</creator>
        
        <creator>Mariya Nikolaevna Koniagina</creator>
        
        <creator>Alexander Vladimirovich Milenkov</creator>
        
        <creator>Liubov Alexandrovna Setchenkova</creator>
        
        <creator>Irina Ivanovna Ochkolda</creator>
        
        <subject>Informational and digital capitals; informational security risks; synergy; open innovation; transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This research pursues the development of a scientifically based toolkit to create and assess promising types of intellectual capital transformed into digital innovations for the “open innovation” system. It is determined that, in theoretical terms, intellectual, informational, and digital capitals are interrelated categories; an efficient merger of informational and digital capitals minimizes information security risks; and the merger of informational and digital capitals provides a long-term multiplicative synergetic effect, demonstrating the constant transformation of innovative ideas into digital innovations. The following are suggested: a structural-logical scheme method for the creation and assessment of informational capital, and scenarios for the synergetic development of informational and digital capitals.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_118-Generation_and_Assessment_of_Intellectual_and_Informational_Capital.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attractiveness of the Megaproject Labor Market for Metropolitan Residents in the Context of Digitalization and the Long-Lasting COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309117</link>
        <id>10.14569/IJACSA.2022.01309117</id>
        <doi>10.14569/IJACSA.2022.01309117</doi>
        <lastModDate>2022-09-30T12:12:40.0170000+00:00</lastModDate>
        
        <creator>Mikhail Vinichenko</creator>
        
        <creator>Sergey Barkov</creator>
        
        <creator>Aleksander Oseev</creator>
        
        <creator>Sergey Makushkin</creator>
        
        <creator>Larisa Amozova</creator>
        
        <subject>Megaproject labor market; metropolitan residents; digitalization; COVID-19 pandemic; attractiveness factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The article aims to determine the nature of changes in the attractiveness of the megaproject labor market from the perspective of metropolis residents under the conditions of digitalization and the long-lasting COVID-19 pandemic. The paper develops a scientific-methodological and categorical-conceptual apparatus supported by empirical methods, including remote ones. The study shows that, under current conditions, the attractiveness of the megaproject labor market has undergone certain changes for metropolis residents. The factors of the attractiveness of the megaproject labor market are stable in the minds of metropolis residents. The main contribution of the work is the identification of trends in the changes of the megaproject labor market and the relationships among them. The study reveals both general and specific trends. The obtained results can be used for further study of the megaproject labor market and the improvement of the social policy of the state and of metropolises under the conditions of digitalization and the prolonged pandemic.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_117-Attractiveness_of_the_Megaproject_Labor_Market_for_Metropolitan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Taxation Transformation under the Influence of Industry 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309116</link>
        <id>10.14569/IJACSA.2022.01309116</id>
        <doi>10.14569/IJACSA.2022.01309116</doi>
        <lastModDate>2022-09-30T12:12:40.0030000+00:00</lastModDate>
        
        <creator>Pavel Victorovich Stroev</creator>
        
        <creator>Rafael Valiakhmetovich Fattakhov</creator>
        
        <creator>Olga Vladimirovna Pivovarova</creator>
        
        <creator>Sergey Leonidovich Orlov</creator>
        
        <creator>Alena Stanislavovna Advokatova</creator>
        
        <subject>Digitalization; cryptocurrency; blockchain; robotics; automation; innovation; tax</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Today the growing level of automation and the new concept of online technologies are transforming the traditional industry. Value is generated with the help of Industry 4.0 technologies that not only increase the efficiency and agility of supply chains, create new products and offer new ways of connecting businesses and consumers but also have a major impact on the traditional tax system. This study aims at determining changes in the modern industrial economy and substantiating possible directions for transforming the tax system to adapt it to the requirements of Industry 4.0. The objective is to identify the relationship between the digitalization of the economy, the use of blockchain technologies, robotics, automation, M2M technologies offered by Industry 4.0, and taxation. The article demonstrates how these technologies influence taxes and proposes measures to address possible tax issues. The authors of the article have concluded that the reasons (and goals) for transforming the current tax system as a result of the development of Industry 4.0 technologies are as follows: 1) to increase or stabilize tax revenues to compensate for tax losses and finance new education needs; 2) to introduce innovations for the development of Industry 4.0 and further digitalization of the economy; 3) to create an automatic tax administration system.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_116-Taxation_Transformation_under_the_Influence_of_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swine flu Detection and Location using Machine Learning Techniques and GIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309115</link>
        <id>10.14569/IJACSA.2022.01309115</id>
        <doi>10.14569/IJACSA.2022.01309115</doi>
        <lastModDate>2022-09-30T12:12:39.9870000+00:00</lastModDate>
        
        <creator>P. Nagaraj</creator>
        
        <creator>A. V. Krishna Prasad</creator>
        
        <creator>V. B. Narsimha</creator>
        
        <creator>B. Sujatha</creator>
        
        <subject>Swine Flu; influenza; machine learning; GIS; classifiers; ANN; virus; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The H1N1 virus, more commonly referred to as swine flu, is an illness that is extremely infectious and can in some cases be fatal; it has claimed many lives. The disease can be transmitted from pigs to people. This research presents an artificial neural network (ANN) classifier for disease forecasting, as well as a technique for detecting infected people based on the geographic region in which they are found. The source code for these two algorithms is provided. GIS coordinates serve as the foundation of the method for assessing the extent to which the illness has spread. The ICMR and NCDC datasets were utilized in the study. Using the Dynamic Boundary Location algorithm to detect the locations of swine flu-affected persons, the researchers found that the accuracy of the proposed classifier was 96%, outperforming standard classifiers.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_115-Swine_Flu_Detection_and_Location_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Classification Algorithm based on Project-based Learning Data-driven and Stochastic Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309114</link>
        <id>10.14569/IJACSA.2022.01309114</id>
        <doi>10.14569/IJACSA.2022.01309114</doi>
        <lastModDate>2022-09-30T12:12:39.9700000+00:00</lastModDate>
        
        <creator>Xiaomei Qin</creator>
        
        <creator>Wenlan Zhang</creator>
        
        <subject>Adaptive partitioning; data driven; information set; project-based learning; random grid; simulation laboratory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>An adaptive partitioning algorithm for the information set of a simulation laboratory, based on project-based learning data-driven methods and random grids, is studied to effectively preprocess the information set and improve the adaptive partitioning effect. First, an improved fuzzy C-means clustering algorithm driven by project-based learning data performs fuzzy partitioning of the simulation laboratory information set to complete the preprocessing. Next, the preprocessed information set space is coarsely divided by a grid partitioning algorithm based on the data histogram, and a uniformity-based random mesh generation algorithm finely divides the coarse mesh cells. Finally, taking the representative points of the grid cells as cluster centers, the preprocessed information set is clustered by the density peak clustering algorithm to complete the adaptive partitioning of the simulation laboratory information set. Experimental results show that the algorithm can effectively preprocess and adaptively partition the simulation laboratory information set. For information sets of different dimensions, the algorithm&#39;s Rand index, Purity, normalized mutual information, separation, and Dunn index values are all high, while its compactness and Davies-Bouldin index values are all low, so the algorithm achieves high accuracy in the adaptive partitioning of information sets.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_114-Information_Classification_Algorithm_based_on_Project_based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition Method of Dim and Small Targets in SAR Images based on Machine Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309113</link>
        <id>10.14569/IJACSA.2022.01309113</id>
        <doi>10.14569/IJACSA.2022.01309113</doi>
        <lastModDate>2022-09-30T12:12:39.9700000+00:00</lastModDate>
        
        <creator>Qin Dong</creator>
        
        <subject>Machine vision; SAR image; Weak target; PCA linear dimensionality reduction method; key frame frequency band</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Aiming at the problems of long recognition time and low recognition accuracy in traditional methods for recognizing dim targets in SAR images, a machine vision-based method for dim target recognition in SAR images is proposed. SAR images are collected and preprocessed by machine vision, and the image information is reduced in dimensionality by PCA, which exploits the linear characteristics of the data, to extract image features. Then, the key frame frequency band of the SAR image target features is divided according to the segmentation results, and a recognition model is established based on image trajectory tracking and target analysis. The proposed algorithm is applied and analyzed. Simulation results show that the proposed algorithm achieves good recognition performance, with an average recognition rate of 99% and a false detection rate of 0.9%, and can effectively ensure data processing performance.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_113-Recognition_Method_of_Dim_and_Small_Targets_in_SAR_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flood Prediction using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309112</link>
        <id>10.14569/IJACSA.2022.01309112</id>
        <doi>10.14569/IJACSA.2022.01309112</doi>
        <lastModDate>2022-09-30T12:12:39.9570000+00:00</lastModDate>
        
        <creator>Muhammad Hafizi Mohd Ali</creator>
        
        <creator>Siti Azirah Asmai</creator>
        
        <creator>Z. Zainal Abidin</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <subject>Deep learning; recurrent neural network; long short-term memory; flood prediction; layer normalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Deep learning has recently emerged as one of the most reliable approaches for forecasting time series. Even though there are numerous data-driven models for flood prediction, most studies focus on prediction using a single flood variable. Creating separate data-driven models may require unfeasible computing resources when estimating multiple flood variables. Furthermore, the trends of several flood variables can only be revealed by analysing long-term historical observations, which conventional data-driven models do not adequately support. This study proposes time series models with layer normalization and the Leaky ReLU activation function in multivariable long short-term memory (LSTM), bidirectional long short-term memory (BI-LSTM), and deep recurrent neural network (DRNN) architectures. The proposed models were trained and evaluated using sensory historical data of river water level and rainfall in the east coast state of Malaysia. They were then compared to six other deep learning models. In terms of prediction accuracy, the experimental results demonstrated that the deep recurrent neural network model with layer normalization and the Leaky ReLU activation function performed better than the other models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_112-Flood_Prediction_using_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Authorship Attribution on Kannada Text using Bi-Directional LSTM Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309111</link>
        <id>10.14569/IJACSA.2022.01309111</id>
        <doi>10.14569/IJACSA.2022.01309111</doi>
        <lastModDate>2022-09-30T12:12:39.9400000+00:00</lastModDate>
        
        <creator>Chandrika C P</creator>
        
        <creator>Jagadish S Kallimani</creator>
        
        <subject>Authorship attribution; Bi-Directional Long Short Term Memory; machine learning algorithms; parts of speech; stylometry features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Authorship attribution is the task of deducing the author of an unknown textual source based on characteristics inherent in the author’s style of writing. It has many useful applications that help automate manual tasks. The proposed model is designed to predict the authorship of Kannada text using a sequential neural network with Bi-Directional Long Short Term Memory layers, Dense layers, activation functions, and Dropout layers. Based on the nature of the data, we used stochastic gradient descent as the optimizer, which improves the learning of the proposed model. The model extracts Part-of-Speech (POS) tags as one of the semantic features using the N-gram technique. A Conditional Random Fields model is developed to assign POS tags to Kannada text tokens, which forms the base of the proposed model. The POS model achieves an overall F1 score of 90% and an accuracy of 91%. There is no state-of-the-art model for the Kannada language against which to compare the performance of our model. The proposed model is evaluated using the One Versus Five (1 vs 5) method, and an overall accuracy of 77.8% is achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_111-Authorship_Attribution_on_Kannada_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward A Holistic, Efficient, Stacking Ensemble Intrusion Detection System using a Real Cloud-based Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309110</link>
        <id>10.14569/IJACSA.2022.01309110</id>
        <doi>10.14569/IJACSA.2022.01309110</doi>
        <lastModDate>2022-09-30T12:12:39.9230000+00:00</lastModDate>
        
        <creator>Ahmed M. Mahfouz</creator>
        
        <creator>Abdullah Abuhussein</creator>
        
        <creator>Faisal S. Alsubaei</creator>
        
        <creator>Sajjan G. Shiva</creator>
        
        <subject>Intrusion detection system; IDS dataset; stacking ensemble ids; stacking; security; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Network intrusion detection is a key step in securing today’s constantly evolving networks. Various methods have been proposed for resisting harmful cyber behaviors. However, as cyber-attacks become more complex, present methodologies fail to adequately solve the problem. Thus, network intrusion detection is now a significant decision-making challenge that requires an effective and intelligent approach. Various machine learning algorithms such as decision trees, neural networks, K-nearest neighbors, logistic regression, support vector machines, and Naive Bayes have been utilized to detect anomalies in network traffic. However, such algorithms require adequate datasets to train and evaluate anomaly-based network intrusion detection systems. This paper presents a testbed that could serve as a model for building real-world datasets, as well as a newly generated dataset, derived from real network traffic, for intrusion detection. To utilize this real dataset, the paper also presents an ensemble intrusion detection model using a meta-classification approach enabled by stacked generalization, addressing detection accuracy and the false alarm rate in intrusion detection systems.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_110-Toward_A_Holistic_Efficient_Stacking_Ensemble_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Brain Disease Classification using Transfer Learning based Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309109</link>
        <id>10.14569/IJACSA.2022.01309109</id>
        <doi>10.14569/IJACSA.2022.01309109</doi>
        <lastModDate>2022-09-30T12:12:39.9100000+00:00</lastModDate>
        
        <creator>Farhana Alam</creator>
        
        <creator>Farhana Chowdhury Tisha</creator>
        
        <creator>Sara Anisa Rahman</creator>
        
        <creator>Samia Sultana</creator>
        
        <creator>Md. Ahied Mahi Chowdhury</creator>
        
        <creator>Ahmed Wasif Reza</creator>
        
        <creator>Mohammad Shamsul Arefin</creator>
        
        <subject>Brain MRI; tumor; deep learning; classification; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Brain MRI (Magnetic Resonance Imaging) classification is one of the most significant areas of medical imaging. Among different types of procedures, MRI is the most trusted one for detecting brain diseases. Manual and semi-automated segmentation requires highly experienced radiologists and much time to detect the problem. Recently, deep learning methods have attracted attention due to their automation and self-learning techniques. To obtain faster results, we used different Convolutional Neural Network (CNN) algorithms with transfer learning for classification to detect diseases. This procedure is fully automated, requires less involvement of highly experienced radiologists, and does not take much time to provide the result. We implemented six deep learning models, namely InceptionV3, ResNet152V2, MobileNetV2, ResNet50, EfficientNetB0, and DenseNet201, on two brain tumor datasets (both individually and manually combined) and one Alzheimer’s dataset. Our first brain tumor dataset (7,023 images in total: 5,712 for training, 1,311 for testing) yields 99-100 percent training accuracy and 98-99 percent testing accuracy. Our second tumor dataset (3,264 images in total: 2,870 for training, 394 for testing) yields 100 percent training accuracy and 69-81 percent testing accuracy. The combined dataset (10,000 images in total: 8,000 for training, 2,000 for testing) yields 99-100 percent training accuracy and 98-99 percent testing accuracy. The Alzheimer’s dataset (6,400 images in total: 5,121 for training, 1,279 for testing, across 4 classes) yields 99-100 percent training accuracy and 71-78 percent testing accuracy. CNN models are renowned for showing the best accuracy on limited datasets, which we have observed in our models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_109-Automated_Brain_Disease_Classification_using_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Unsupervised Anomaly Detection Algorithms used in a Small and Medium-Sized Enterprise</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309108</link>
        <id>10.14569/IJACSA.2022.01309108</id>
        <doi>10.14569/IJACSA.2022.01309108</doi>
        <lastModDate>2022-09-30T12:12:39.9100000+00:00</lastModDate>
        
        <creator>Irina Petrariu</creator>
        
        <creator>Adrian Moscaliuc</creator>
        
        <creator>Cristina Elena Turcu</creator>
        
        <creator>Ovidiu Gherman</creator>
        
        <subject>Unsupervised anomaly detection algorithms; small and medium-sized enterprise; traceability; open-source libraries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Anomaly detection finds application in several industries and domains, and the anomaly detection market is growing, driven by the increasing development and dynamic adoption of emerging technologies. Depending on the type of supervision, there are three main types of anomaly detection techniques: unsupervised, semi-supervised, and supervised. Given the wide variety of available anomaly detection algorithms, how can one choose the approach most appropriate for a particular application? The purpose of this evaluation is to compare the performance of five unsupervised anomaly detection algorithms applied to a specific dataset from a small and medium-sized software enterprise, presented in this paper. To reduce the cost and complexity of a system developed to solve the anomaly detection problem, one solution is to use machine learning (ML) algorithms available in open-source libraries, such as the scikit-learn library or the PyOD library. These algorithms can be easily and quickly integrated into a low-cost software application developed to meet the needs of a small and medium-sized enterprise (SME). In our experiments, we considered several unsupervised algorithms available in the PyOD library. The obtained results are presented, alongside the limitations of the research.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_108-A_Comparative_Study_of_Unsupervised_Anomaly_Detection_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Diabetes Diagnosis Prediction Rate Using Data Preprocessing, Data Augmentation and Recursive Feature Elimination Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309107</link>
        <id>10.14569/IJACSA.2022.01309107</id>
        <doi>10.14569/IJACSA.2022.01309107</doi>
        <lastModDate>2022-09-30T12:12:39.8930000+00:00</lastModDate>
        
        <creator>E. Sabitha</creator>
        
        <creator>M. Durgadevi</creator>
        
        <subject>Artificial Intelligence (AI); Machine Learning (ML); Deep Learning (DL); Neural Network; Diabetes Mellitus; Recursive Feature Elimination (RFE); Synthetic Minority Over-sampling Technique (SMOTE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Hyperglycemia is a symptom of diabetes mellitus, a metabolic condition brought on by the body&#39;s inability to produce enough insulin and respond to it. Diabetes can damage body organs if it is not adequately managed or detected in a timely manner. Many years of research into diabetes diagnosis have led to suitable methods for diabetes prediction; however, there is still scope for improvement in precision. The paper&#39;s primary objective is to emphasize the value of data preprocessing, feature selection, and data augmentation in disease prediction. These techniques can help classification algorithms perform more effectively in the diagnosis and prediction of diabetes. The proposed method is employed for diabetes diagnosis and prediction using the PIMA Indian dataset. This study provides a systematic framework for a comparative analysis based on the effectiveness of a three-category classification model. The first category compares the model&#39;s performance with and without data preprocessing. The second category compares the performance of five alternative algorithms employing the Recursive Feature Elimination (RFE) feature selection method. The third category is data augmentation, which is performed with SMOTE oversampling, and comparisons are made with and without it. On the PIMA Indian Diabetes dataset, experiments showed that data preprocessing, RFE with Random Forest regression feature selection, and SMOTE oversampling augmentation can produce accuracy scores of 81.25% with RF, 81.16% with DT, and 82.5% with SVC. Of the six classifiers LR, RF, DT, SVC, GNB, and KNN, it is observed that RF, DT, and SVC achieved the best accuracy. The comparative study enables us to understand the value of data preprocessing, feature selection, and data augmentation in the disease prediction process, as well as how they affect performance.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_107-Improving_the_Diabetes_Diagnosis_Prediction_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building an Intelligent Tutoring System for Learning Polysemous Words in Moore</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309106</link>
        <id>10.14569/IJACSA.2022.01309106</id>
        <doi>10.14569/IJACSA.2022.01309106</doi>
        <lastModDate>2022-09-30T12:12:39.8770000+00:00</lastModDate>
        
        <creator>Pengwende ZONGO</creator>
        
        <creator>Tounwendyam Frederic OUEDRAOGO</creator>
        
        <subject>Intelligent tutoring system; Petri net; evaluation; Moor&#233; language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This paper presents the results of our research carried out as part of building an Intelligent Tutoring System (ITS) to learn Moor&#233;, a tone language. A word in a tone language may have many meanings according to the pitch. The system has an intelligent tutor to personalize and guide the learning of the transcription of polysemous words in Moor&#233;. This learning activity aims both to master the transcription and to distinguish the lexical meaning of words according to the pitch used. The first step of this research was the specification of the processes, inference, and knowledge of the system. In this work we present the implementation and pedagogical assessment of the system. We designed the architecture of the ITS, the diagnosis of transcription errors, and the remediation approach. We then used the Petri net formalism to model the system dynamics in order to analyze its states and fix deadlocks. We developed the system in Java and evaluated its educational value through an experiment with learners, which shows that the learning objectives can be achieved with this system.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_106-Building_an_Intelligent_Tutoring_System_for_Learning_Polysemous_Words.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud based Forecast of Municipal Solid Waste Growth using AutoRegressive Integrated Moving Average Model: A Case Study for Bengaluru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309105</link>
        <id>10.14569/IJACSA.2022.01309105</id>
        <doi>10.14569/IJACSA.2022.01309105</doi>
        <lastModDate>2022-09-30T12:12:39.8770000+00:00</lastModDate>
        
        <creator>Rashmi G</creator>
        
        <creator>S Sathish Kumar K</creator>
        
        <subject>Cloud Computing; Machine Learning; Time Series Forecasting; Waste Management System; ARIMA; Predictive Modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Forecasting the quantity of waste growth in upcoming years is essential for assessing an existing waste management system. In this research work, a time series forecast model, ARIMA (Autoregressive Integrated Moving Average), is used to predict future waste growth from 2021 to 2028 for Bengaluru, the largest city in Karnataka. A historical solid waste dataset covering 2012 to 2020 is used to make the predictions. This dataset is preprocessed, and only time-bounded variables such as day, month, year, and waste quantity in tons are used in this research work to obtain accurate predictions. The model is implemented in Python in a Jupyter notebook on Google Colab’s free cloud. As ARIMA is time-bounded, the forecast made by the model is accurate, and the performance of the model is evaluated using metrics such as Mean Absolute Deviation (MAD), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and Coefficient of Determination (R2). The outcomes revealed that the ARIMA (0, 1, 2) model, with the lowest RMSE (753.5742), MAD (577.4601), and MAPE (11.6484) values and the highest R2 (0.9788) value, has the best forecast performance. The outcomes obtained from the model also showed that the total volume of yearly solid waste produced will rise from about 50,300 tons in 2021 to 75,600 tons in 2028.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_105-Cloud_based_Forecast_of_Municipal_Solid_Waste_Growth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Cervical Cancer Classification and Segmentation from Pap Smears Images using an EfficientNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309104</link>
        <id>10.14569/IJACSA.2022.01309104</id>
        <doi>10.14569/IJACSA.2022.01309104</doi>
        <lastModDate>2022-09-30T12:12:39.8600000+00:00</lastModDate>
        
        <creator>Krishna Prasad Battula</creator>
        
        <creator>B. Sai Chandana</creator>
        
        <subject>Cervical cancer; pap smear; time-consuming; contrast-limited adaptive histogram equalization (CLAHE); Grey Level Co-occurrence Matrix (GLCM); morphological features; wavelet; SegNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>One of the most prevalent cancers in the world, cervical cancer claims many lives every year. Since early cancer diagnosis makes it easier for patients to benefit from clinical applications, cancer research is crucial. The Pap smear is a useful tool for early cervical cancer detection, although human error is always a risk; additionally, the procedure is laborious and time-consuming. By automatically classifying cervical cancer from Pap smear images, this study aims to reduce the risk of misdiagnosis. For image enhancement in this study, contrast-limited adaptive histogram equalization (CLAHE) was employed. Then, features including wavelet, morphological features, and the Grey Level Co-occurrence Matrix (GLCM) are extracted from the cervical image. EfficientNet trains and tests on these derived features to distinguish between normal and abnormal cervical images. On the abnormal cervical image, the SegNet method is used to identify and segment the cancer zone. Specificity, accuracy, positive predictive value, sensitivity, and negative predictive value are all utilized to analyze the performance of the suggested cervical cancer detection system. When used on the Herlev benchmark Pap smear dataset, results demonstrate that the approach performs better than many of the existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_104-Deep_Learning_based_Cervical_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Disease Detection based on X-Ray Image Classification using CNN with GEV Activation Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309103</link>
        <id>10.14569/IJACSA.2022.01309103</id>
        <doi>10.14569/IJACSA.2022.01309103</doi>
        <lastModDate>2022-09-30T12:12:39.8470000+00:00</lastModDate>
        
        <creator>Karim Ali Mohamed</creator>
        
        <creator>Emad Elsamahy</creator>
        
        <creator>Ahmed Salem</creator>
        
        <subject>COVID-19; CNN; GEV function; image augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The globe was rocked by unprecedented levels of disruption, which had devastating effects on daily life, global health, and the global economy. Since the start of the COVID-19 epidemic, accurate diagnosis has been critical; in this work, methods are proposed for delivering accurate multi-category classification (COVID vs. normal vs. pneumonia). Two pre-trained transfer learning networks, XceptionNet and DenseNet, are employed in our CNN model. The low-level features of the two DCNN structures were combined and fed to a classifier for the final prediction. To obtain better results with unbalanced data, we used the generalized extreme value (GEV) activation function at the output classifier and enlarged the training dataset with data augmentation while maintaining validation accuracy. The model has been tested in two distinct scenarios. In the first instance, the model was tested using image augmentation on the training data and the GEV function for the output class, achieving 94% accuracy; in the second instance, evaluations were conducted without data augmentation and yielded an accuracy of 95% for the output class.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_103-COVID_19_Disease_Detection_based_on_X_Ray_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-method Approach for User Experience of Selfie-taking Mobile Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309101</link>
        <id>10.14569/IJACSA.2022.01309101</id>
        <doi>10.14569/IJACSA.2022.01309101</doi>
        <lastModDate>2022-09-30T12:12:39.8300000+00:00</lastModDate>
        
        <creator>Shahad Aldahri</creator>
        
        <creator>Reem Alnanih</creator>
        
        <subject>Filters; lenses; multi-method; research methods; selfie taking; user experience research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Taking selfies is a popular activity in most social media applications, and applying filters/lenses to these selfies has become one of the most demanded features of such applications. This paper aims to design a selfie-taking application that minimizes the heavy use of beautifying filters. To understand the current user experience of selfie-taking and filter features, multiple user experience research methods were applied in two steps. In the first step, interviews were conducted with 10 participants to collect data. The key findings of the interviews were (i) the need for saving memories as users’ primary goal of using the applications, (ii) the need for slightly beautifying filters as their preferred filter type, (iii) the need for a favorite-filters list, and (iv) the need for the opportunity to edit selfies after they are taken. The output of the interviews was used as input for determining the survey questions in the second step. A total of 340 respondents completed the survey, and the findings were consistent with those of the interviews, further pointing to the rising opportunity for a new selfie-taking application designed to save selfies quickly without sharing and to apply only slightly beautifying filters. Future studies should focus on increasing engagement and including a saved-selfie categorization feature in the design.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_101-Multi_Method_Approach_for_User_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Academic Performance using a Multiclassification Model: Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309102</link>
        <id>10.14569/IJACSA.2022.01309102</id>
        <doi>10.14569/IJACSA.2022.01309102</doi>
        <lastModDate>2022-09-30T12:12:39.8300000+00:00</lastModDate>
        
        <creator>Alfredo Daza Vergaray</creator>
        
        <creator>Carlos Guerra</creator>
        
        <creator>Noemi Cervera</creator>
        
        <creator>Erwin Burgos</creator>
        
        <subject>Learning machine; prediction; academic performance; hybrid model; classification techniques; multiclassification; python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Nowadays, predicting the academic performance of students is increasingly possible thanks to the widespread use of computer systems that store large amounts of student information. Machine learning uses this information to achieve important goals, such as predicting whether or not a student will pass a course. The main purpose of this work was to build a multiclassifier model that exceeds the results obtained from the machine learning models used independently. For the development of our proposed predictive model, a methodology consisting of several phases was used. In the first step, 557 records with 25 characteristics related to academic performance were selected; preprocessing was then applied to this dataset, eliminating the attributes with the lowest correlation and the records with inconsistencies, leaving 500 records and 9 attributes. For the transformation, it was necessary to convert four attributes from categorical to numerical data: SEX, ESTATUS_lab_padre, ESTATUS_lab_madre, and CONDITION. With the dataset cleaned, we proceeded to balance the data, generating 1,167 records, using 2/3 for training and the remaining 1/3 for validation. The following techniques were then applied: Extra Tree, Random Forest, Decision Tree, AdaBoost, and XGBoost, which obtained accuracies of 57.41%, 61.96%, 91.44%, 59.65%, and 83.3%, respectively. The proposed model, combining the five algorithms mentioned above, reached an accuracy of 92.86%, concluding that the proposed model provides better accuracy than the models used independently.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_102-Predicting_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Deep Learning YOLO models for South Asian Regional Vehicle Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01309100</link>
        <id>10.14569/IJACSA.2022.01309100</id>
        <doi>10.14569/IJACSA.2022.01309100</doi>
        <lastModDate>2022-09-30T12:12:39.8130000+00:00</lastModDate>
        
        <creator>Minar Mahmud Rafi</creator>
        
        <creator>Siddharth Chakma</creator>
        
        <creator>Asif Mahmud</creator>
        
        <creator>Raj Xavier Rozario</creator>
        
        <creator>Rukon Uddin Munna</creator>
        
        <creator>Md. Abrar Abedin Wohra</creator>
        
        <creator>Rakibul Haque Joy</creator>
        
        <creator>Khan Raqib Mahmud</creator>
        
        <creator>Bijan Paul</creator>
        
        <subject>You Only Look Once (YOLOv5); vehicle detection; neural network; deep learning; vehicle tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>For years, humans have pondered the possibility of combining human and machine intelligence. The purpose of this research is to recognize vehicles from media; while multiple models exist for this task, models that can detect vehicles commonly used in developing countries such as Bangladesh and India are scarce. Our focus was to assemble the largest dataset of vehicles exclusive to South Asia, in addition to the more common universal vehicles, and to apply it to track and recognize these vehicles, even in motion. To develop this, we increased the class variations and quantity of the data and used multiple variants of the YOLOv5 model. We trained different versions of the model with our dataset to properly measure the difference in accuracy between the models in detecting the more unique vehicles. If vehicle detection and tracking are adopted and implemented in live traffic camera feeds, the information can be used to create smart traffic systems that regulate congestion and routing by identifying and separating fast and slow-moving vehicles on the road. The comparison between the three YOLOv5 variants indicates that the large variant of the YOLOv5 architecture outperforms the rest.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_100-Performance_Analysis_of_Deep_Learning_YOLO_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Hybrid LSTM-CNN and CNN-LSTM with GloVe for Text Multi-class Sentiment Classification in Gender Violence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130999</link>
        <id>10.14569/IJACSA.2022.0130999</id>
        <doi>10.14569/IJACSA.2022.0130999</doi>
        <lastModDate>2022-09-30T12:12:39.8000000+00:00</lastModDate>
        
        <creator>Abdul Azim Ismail</creator>
        
        <creator>Marina Yusoff</creator>
        
        <subject>Gender-based violence; deep learning; convolution neural network; long short-term memory; convolution neural network - long short-term memory; long short-term memory - convolution neural network; global vector; multi-class text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Gender-based violence is a public health issue that demands high concern to eliminate discrimination and violence against women and girls. Some cases are handled through offline organizations and their respective online platforms; however, many victims share their experiences and stories on social media platforms. Twitter is one avenue for locating and identifying gender-based violence by type. This paper proposes a hybrid Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) with GloVe to perform multi-class classification of gender violence. Intimate partner violence, harassment, rape, femicide, sex trafficking, forced marriage, forced abortion, and online violence against women are the eight gender-violence keywords used for data extraction from Twitter text data. Data cleaning then removes unnecessary information, and normalization converts the data into a structure the machine can recognize as model input. The evaluation considers cross-entropy loss, learning rate, optimizer, and number of epochs. LSTM+GloVe vector embedding outperforms all other methods; CNN-LSTM+GloVe and LSTM-CNN+GloVe achieved 0.98 test accuracy, 0.95 precision, 0.94 recall, and 0.95 f1-score. The findings can help the public and relevant agencies differentiate and categorize different types of gender violence through text. With this effort, the government can use the model as one of the mechanisms that indirectly support monitoring of the current gender violence situation.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_99-An_Efficient_Hybrid_LSTM_CNN_and_CNN_LSTM_with_Glove.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extractive Multi-document Text Summarization Leveraging Hybrid Semantic Similarity Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130998</link>
        <id>10.14569/IJACSA.2022.0130998</id>
        <doi>10.14569/IJACSA.2022.0130998</doi>
        <lastModDate>2022-09-30T12:12:39.7830000+00:00</lastModDate>
        
        <creator>Rajesh Bandaru</creator>
        
        <creator>Y. Radhika</creator>
        
        <subject>Extractive text summarization; semantic similarity; sentence scoring; summary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Because of the massive amount of textual information accessible today, automated extractive text summarization is one of the most extensively used ways to organize information. Summarization mechanisms help extract the important topics from a given set of documents. Extractive summarization is one method for providing a representative summary of a text by choosing the most pertinent sentences from the original. The primary goal of extractive multi-document text summarization systems is to reduce the quantity of textual information in a document collection by concentrating on the most crucial subjects and removing irrelevant material. Previous research offers several methods, such as term-weighting schemes and similarity metrics, for constructing an automated summary system, but few studies examine the performance of combining various semantic similarity and word-weighting techniques in automatic text summarization. In this research, we evaluated numerous semantic similarity metrics in extractive multi-document text summarization. ROUGE metrics have been used to evaluate model performance in experiments on DUC datasets. Moreover, the combination formed by different semantic similarity measures obtained the highest results in comparison with the other models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_98-Extractive_Multi_Document_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Environmental Noise Pollution Forecasting using Fuzzy-autoregressive Integrated Moving Average Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130997</link>
        <id>10.14569/IJACSA.2022.0130997</id>
        <doi>10.14569/IJACSA.2022.0130997</doi>
        <lastModDate>2022-09-30T12:12:39.7830000+00:00</lastModDate>
        
        <creator>Muhammad Shukri Che Lah</creator>
        
        <creator>Nureize Arbaiy</creator>
        
        <creator>Syahir Ajwad Sapuan</creator>
        
        <creator>Pei-Chun Lin</creator>
        
        <subject>Noise pollution; forecasting; ARIMA; uncertainty; standard deviation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Predicting noise pollution from building sites is important for taking precautions to avoid pollution that harms the public. High prediction accuracy is required so that the model&#39;s forecasts can approach the true values. Forecasting models must be built on solid historical data to achieve high forecasting accuracy. However, data collected through various approaches are subject to ambiguity and uncertainty, resulting in less reliable predictive models. Therefore, the data must be handled carefully to reduce this uncertainty. Standard data processing procedures are easy to use but do not provide a consistent method for dealing with such ambiguous data. This paper therefore presents a method for handling data containing uncertainty for forecasting purposes. A new uncertainty-based data preparation technique has been employed to develop an ARIMA-based model of environmental noise pollution. During the data preparation stage, the standard deviation approach was used. Prior to the development of the prediction model, it is crucial to manage the fuzzy data to minimize errors. The experimental findings show that the suggested data preparation strategy can increase the model&#39;s accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_97-Environmental_Noise_Pollution_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fish Species Classification using Optimized Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130996</link>
        <id>10.14569/IJACSA.2022.0130996</id>
        <doi>10.14569/IJACSA.2022.0130996</doi>
        <lastModDate>2022-09-30T12:12:39.7670000+00:00</lastModDate>
        
        <creator>J. M. Jini Mol</creator>
        
        <creator>S. Albin Jose</creator>
        
        <subject>Fish species classification; deep learning; GW optimization; auto encoder decoder; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Classification of fish species in aquatic images is a growing field of research for researchers and image processing experts. It is critical for fish analytical purposes, such as auditing ecological balance, observing fish populations, and saving threatened animals. However, scattering and absorption of light in ocean water result in dim, low-contrast pictures, making fish classification laborious and challenging. This paper presents an efficient fish classification scheme, which helps biologists understand varieties of fish and their surroundings. The proposed system uses an improved deep learning-based autoencoder-decoder method for fish classification. Optimal feature selection is a major issue with deep learning models in general. To solve this problem efficiently, an enhanced grey wolf optimization technique (EGWO) is introduced in this study. The accuracy of the classification system for aquatic fish species depends on the essential texture features. Accordingly, the proposed EGWO selects the most optimal texture features from those extracted by the autoencoder. Finally, to prove the efficacy of the proposed method, it is compared to existing deep learning models such as AlexNet, ResNet, VGGNet, and CNN. The proposed method is analysed by varying iterations, batches, and fully connected layers. The analysis of performance criteria such as accuracy, sensitivity, specificity, precision, and F1 score reveals that AED-EGWO gives superior performance.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_96-Fish_Species_Classification_Using_Optimized_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Intelligent Control System of Air Conditioning Based on Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130994</link>
        <id>10.14569/IJACSA.2022.0130994</id>
        <doi>10.14569/IJACSA.2022.0130994</doi>
        <lastModDate>2022-09-30T12:12:39.7530000+00:00</lastModDate>
        
        <creator>Binfang Zhang</creator>
        
        <subject>Internet of things technology; intelligent control of air conditioning; system design; double closed-loop load; virtual synchronizer of air conditioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Current intelligent air-conditioning control systems cannot achieve the ideal energy-saving effect, and their control of indoor temperature and humidity is not good enough either. Therefore, an intelligent air-conditioning control system based on Internet of Things technology is designed. The hardware part of the system includes the system control motherboard, sensor module, execution control structure, wireless communication module and access layer. The software includes the design of the communication layer, the monitoring management, and the remote control algorithm for intelligent indoor air-conditioning temperature. The experimental results show that the control effect of the intelligent air conditioner is more accurate and energy-saving, the opening degree of the air-conditioning valve is larger, and comfort is improved. The indoor temperature and humidity achieved by the proposed system are both closer to the ideal.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_94-Research_on_Intelligent_Control_System_of_Air_Conditioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Short Review on the Role of Various Deep Learning Techniques for Segmenting and Classifying Brain Tumours from MRI Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130995</link>
        <id>10.14569/IJACSA.2022.0130995</id>
        <doi>10.14569/IJACSA.2022.0130995</doi>
        <lastModDate>2022-09-30T12:12:39.7530000+00:00</lastModDate>
        
        <creator>Kumari Kavitha. D</creator>
        
        <creator>E. Kiran Kumar</creator>
        
        <subject>Medical image segmentation; convolutional neural networks (CNN); deep-CNN; feed forward neural networks; brain tumor segmentation (BraTS) and U-net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The past few years have seen substantial growth in death rates associated with brain tumors, which are the second leading cause of cancer-related deaths. However, it is possible to increase the chance of survival if tumors are identified at an early stage by employing various deep learning techniques, which assist doctors during the diagnosis process. Magnetic resonance imaging (MRI) is a non-invasive, low-ionization-radiation diagnostic tool for evaluating an abnormality as it evolves in the shape, location or position, size and texture of the tumour. This paper presents a systematic literature survey of numerous deep learning methods, with suitable approaches, for tumour segmentation and classification (normal or abnormal) from MRI images. Furthermore, it highlights new research directions and clinical solutions for brain tumor patients. It incorporates deep learning applications for accurate tumor detection and a quantitative investigation of different tumor segmentation techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_95-A_Short_Review_on_the_Role_of_Various_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decentralized Access Control using Blockchain Technology for Application in Smart Farming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130993</link>
        <id>10.14569/IJACSA.2022.0130993</id>
        <doi>10.14569/IJACSA.2022.0130993</doi>
        <lastModDate>2022-09-30T12:12:39.7370000+00:00</lastModDate>
        
        <creator>Normaizeerah Mohd Noor</creator>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Nur Atiqah Malizan</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Nor Asiakin Hasbullah</creator>
        
        <subject>Blockchain; access control; smart contract; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The application of the Internet of Things (IoT) plays a crucial role in the fourth industrial revolution. The sophistication of technology due to the integration of heterogeneous smart devices opens new threats from various aspects. Access control is the first line of defence for securing IoT resources, preventing illegitimate users from gaining access to them. However, access control mechanisms face technological limitations in large-scale IoT deployments since they are based on a centralized architecture. Research concerning decentralized access control solutions for securing IoT resources using combined techniques, such as blockchain, has attracted significant attention in recent years. Nevertheless, research on decentralized access control for application in the smart farming domain remains a gap. Thus, this study presents a structured literature review of 81 articles related to access control in IoT and blockchain technology to understand the challenges of centralized access control in securing IoT resources. The study serves as a foundation for decentralized access control using blockchain technology and its application to securing IoT actuators and sensors, with the aim of being applied in smart farming. The review was conducted as a systematic literature review across four different database platforms, covering publications between 2018 and 2021. It mostly addresses the relevant techniques and approaches, including blockchain technology, access control models, key management mechanisms, and the combination of all three. The possible impacts, gaps, procedures and evaluation of decentralized access control are highlighted along with major trends and challenges.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_93-Decentralized_Access_Control_using_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Multioutput Response Uses Ridge Regression and MLP Neural Network with Tuning Hyperparameter through Cross Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130992</link>
        <id>10.14569/IJACSA.2022.0130992</id>
        <doi>10.14569/IJACSA.2022.0130992</doi>
        <lastModDate>2022-09-30T12:12:39.7200000+00:00</lastModDate>
        
        <creator>Waego Hadi Nugroho</creator>
        
        <creator>Samingun Handoyo</creator>
        
        <creator>Hsing-Chuan Hsieh</creator>
        
        <creator>Yusnita Julyarni Akri</creator>
        
        <creator>Zuraidah</creator>
        
        <creator>Donna Dwinita Adelia</creator>
        
        <subject>Filter approach; hyperparameter tuning; multi-response; neural network; ridge regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The multiple regression model is very popular among researchers in both the social and natural sciences because it is easy to interpret and has a well-established theoretical framework. The multioutput multiple regression model, however, is widely applied in the engineering field because the industrial world contains many systems with multiple outputs. The ridge regression model and the Multi-Layer Perceptron (MLP) neural network are representative predictive linear and non-linear regression models that are widely applied in practice. This study aims to build multi-output versions of a ridge regression model and an MLP neural network whose hyperparameters are determined by a grid search algorithm through cross-validation. The hyperparameter setting that produces the smallest RMSE value on the validation data is chosen to train both models on the training data. The hyperparameters in question are a combination of learning algorithm and alpha value (ridge regression), and a combination of the number of hidden nodes and gamma value (MLP neural network). In the ridge regression model, alpha values in the range 0.1 to 0.7 yield the smallest RMSE for all learning algorithms used, while the MLP neural network obtains its smallest RMSE with 18 hidden nodes and gamma = 0.1. The ridge regression model with the selected hyperparameters performs better (in RMSE and R2 value) than the MLP neural network model with its selected hyperparameters, on both training and testing data.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_92-Modeling_Multioutput_Response_Uses_Ridge_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Machine Learning-based Framework for Detecting Religious Arabic Hatred Speech in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130991</link>
        <id>10.14569/IJACSA.2022.0130991</id>
        <doi>10.14569/IJACSA.2022.0130991</doi>
        <lastModDate>2022-09-30T12:12:39.7070000+00:00</lastModDate>
        
        <creator>Mahmoud Masadeh</creator>
        
        <creator>Hanumanthappa Jayappa Davanager</creator>
        
        <creator>Abdullah Y. Muaad</creator>
        
        <subject>Machine learning; Arabic language; hatred detection; social network; classification algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Social media platforms generate a huge amount of data every day. However, liberty of speech on these networks can easily help spread hatred. Hate speech is a severe concern endangering the cohesion and structure of civil societies. With the increase in hate and sarcasm among people interacting over the internet, there is a dire need for artificial intelligence (AI) technology that confronts this problem. The rampant spread of hate can dangerously fracture society and severely damage marginalized people or groups. Thus, the identification of hate speech is essential and increasingly challenging, and recognizing hate speech in time is crucial to stopping its dissemination. The complexity of Arabic morphology and the scarcity of resources for the Arabic language make the task of distinguishing hate speech even more demanding. For fast identification of Arabic hate speech in social network comments, this work presents a comprehensive framework in which eight machine learning (ML) and deep learning (DL) algorithms, namely Gradient Boosting (GB), K-Nearest Neighbor (K-NN), Logistic Regression (LR), Naive Bayes (NB), Passive Aggressive Classifier (PAC), Support Vector Machine (SVM), Ara-BERT, and BERT-AJGT, are implemented. Two representation techniques are used in the proposed framework to extract features: a bag of words, followed by BERT-based contextual text representations. The results show that the contextual text representation techniques with Ara-BERT and BERT-AJGT outperform all other ML models and related work, with an accuracy of 79% for both models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_91-A_Novel_Machine_Learning_based_Framework_for_Detecting_Religious.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Hybrid Framework of VASNET and IoT in Disaster Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130990</link>
        <id>10.14569/IJACSA.2022.0130990</id>
        <doi>10.14569/IJACSA.2022.0130990</doi>
        <lastModDate>2022-09-30T12:12:39.6900000+00:00</lastModDate>
        
        <creator>Sia Chiu Shoon</creator>
        
        <creator>Mohammad Nazim Jambli</creator>
        
        <creator>Sinarwati Mohamad Suhaili</creator>
        
        <creator>Nur Haryani Zakaria</creator>
        
        <subject>Energy consumption; packet loss; LTE-A; VASNET; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>During emergency operations in a Disaster Management System (DMS) for natural and man-made disasters, any breakdown in the existing information and communication technology affects the effectiveness and efficiency of emergency response tasks. For a Vehicular Ad-hoc Sensor Network (VASNET), the infrastructure, which consists of RSUs (Roadside Sensor Units), may be partially or fully destroyed in a post-disaster scenario. Such performance degradation of VASNET burdens the network infrastructure with high packet loss, delay, and a huge amount of energy consumption in the DMS. Thus, modifying VASNET and integrating it with Internet of Things (IoT) technology is necessary to solve the current problems of VASNET technology. The main objective of this study was therefore to investigate the performance of the proposed modified VASNET framework integrated with IoT in a DMS in terms of energy consumption and packet loss. A suggested node in the proposed framework was introduced to implement low and high data rates in evaluating the framework using the LTE and LTE-A transmission protocols. It was found that at a high data rate LTE-A consumes more energy, 25.33 mJ/Byte, compared to 20 mJ/Byte for LTE. At a low data rate, LTE-A again consumes the most, recording 19.82 mJ/Byte against 19.33 mJ/Byte for LTE. For packet loss, LTE shows a higher rate, contributing 11.39% compared to 8.0% for LTE-A at a low data rate, and 14.80% compared to 11.97% for LTE-A at a high data rate. Consequently, LTE-A consumes more energy at a high data rate, while LTE suffers more packet loss at the same data rates.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_90-Evaluating_Hybrid_Framework_of_VASNET_and_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On-Device Major Indian Language Identification Classifiers to Run on Low Resource Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130989</link>
        <id>10.14569/IJACSA.2022.0130989</id>
        <doi>10.14569/IJACSA.2022.0130989</doi>
        <lastModDate>2022-09-30T12:12:39.6900000+00:00</lastModDate>
        
        <creator>Yashwanth Y S</creator>
        
        <subject>Character grams; code-mixed; deep learning; Indian languages; language identification; NLP; social media text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Language identification is a first and necessary step in building intelligent Natural Language Processing (NLP) systems that handle code-mixed data. There is much work around this problem, but there is still scope for improvement, especially for local Indian languages. Earlier works also mostly concentrate on the accuracy of the model alone, neglecting whether the models can be used on low-resource devices, such as mobiles and wearables like smart watches, with reasonable latency. This paper discusses both binary and multiclass classification using character grams as features. A total of nine languages are considered: eight Indian languages code-mixed with English (Hindi, Bengali, Kannada, Tamil, Telugu, Gujarati, Marathi, Malayalam) and standard English. The binary classifier discussed in this paper distinguishes Hinglish (Hindi written in English script) from the seven other code-mixed Indian languages and standard English, while the multiclass classifier classifies all of the previously mentioned languages. The binary classifier achieved an accuracy of 96% on the test data with a model size of 1.4 MB, and the multiclass classifier achieved an accuracy of 87% on the same test set with a model size of 3.6 MB.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_89-On_Device_Major_Indian_Language_Identification_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TextBrew: Automated Model Selection and Hyperparameter Optimization for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130988</link>
        <id>10.14569/IJACSA.2022.0130988</id>
        <doi>10.14569/IJACSA.2022.0130988</doi>
        <lastModDate>2022-09-30T12:12:39.6730000+00:00</lastModDate>
        
        <creator>Rushil Desai</creator>
        
        <creator>Aditya Shah</creator>
        
        <creator>Shourya Kothari</creator>
        
        <creator>Aishwarya Surve</creator>
        
        <creator>Narendra Shekokar</creator>
        
        <subject>Automated machine learning; AutoML; NLP; transformer models; hyperparameter optimization; CopulaGAN; generative model; meta-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In building a machine learning solution, algorithm selection and hyperparameter tuning are the most time-consuming tasks. Automated Machine Learning is a solution that fully automates the process of finding the best model for a given task without having to manually try various models. This paper introduces a new AutoML system, TextBrew, built explicitly for the NLP task of text classification. Our system provides an automated method for selecting transformer models, tuning hyperparameters, and combining the best models into one by ensembling. Keeping in mind that new state-of-the-art models are constantly being introduced, TextBrew has been designed to be highly flexible and can thus support additional models easily. In our work, we experiment with multiple transformer models, each with numerous different hyperparameter settings, and select the most robust models. These models are then trained on multiple datasets to obtain accuracy scores, which are used to build the meta-dataset to train the meta-model. Since text classification datasets are not abundant, our system generates synthetic data to augment the meta-dataset using CopulaGAN, a deep generative model. The meta-model is an ensemble of five models, which predicts the best candidate model with an accuracy of 78.75%. The final model returned to the user is an ensemble of all the best models that can be trained under the given time constraint. Experiments on various datasets and comparisons with existing systems demonstrate the effectiveness of our system.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_88-TextBrew_Automated_Model_Selection_and_Hyperparameter_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rethinking Classification of Oriented Object Detection in Aerial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130987</link>
        <id>10.14569/IJACSA.2022.0130987</id>
        <doi>10.14569/IJACSA.2022.0130987</doi>
        <lastModDate>2022-09-30T12:12:39.6600000+00:00</lastModDate>
        
        <creator>Phuc Nguyen</creator>
        
        <creator>Thang Truong</creator>
        
        <creator>Nguyen D. Vo</creator>
        
        <creator>Khang Nguyen</creator>
        
        <subject>Aerial images; classification; convex-hull transformation; data processing; oriented object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>With the rapid development of technology, especially the prevalence of UAVs (unmanned aerial vehicles), object detection in aerial images has gained much attention in computer vision and deep learning. However, traditional methods use horizontal bounding boxes for object representation, leading to inconsistency between objects and features. Many detectors have therefore been built to tackle this problem, normally using conventional training and testing approaches to achieve their results. Our proposed pipeline strengthens not only classification but also localization via independent training processes, using convex-hull transformation in the data pre-processing phase. We experimented with the well-designed S2ANet, R3Det, ReDet, RoI Transformer and Oriented R-CNN on the well-established oriented object detection dataset DOTA. We then adopt the best detectors, together with the well-known classification network EfficientNet, in our proposed pipeline and achieve promising results on the DOTA dataset. Moreover, our pipeline can be flexibly adapted to various oriented object detection baselines, improving their classification results via independent extensive training cycles.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_87-Rethinking_Classification_of_Oriented_Object_Detection_in_Aerial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Food Journalling Application with Convolutional Neural Network and Transfer Learning: A Case for Diabetes Management in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130986</link>
        <id>10.14569/IJACSA.2022.0130986</id>
        <doi>10.14569/IJACSA.2022.0130986</doi>
        <lastModDate>2022-09-30T12:12:39.6430000+00:00</lastModDate>
        
        <creator>Jason Thomas Chew</creator>
        
        <creator>Yakub Sebastian</creator>
        
        <creator>Valliapan Raman</creator>
        
        <creator>Patrick Hang Hui Then</creator>
        
        <subject>Convolutional neural network; deep learning; diabetes; food journal; mobile application; nutritional tracking; Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Diabetes is an ever-worsening problem in modern society, placing a heavy burden on healthcare systems. Due to the association between obesity and diabetes, food journaling mobile applications are an effective approach for managing and improving outcomes for diabetics. Given the efficacy of nutritional tracking and management in managing diabetes, we implemented a deep learning-based Convolutional Neural Network food classification model to aid food logging. The model is trained on a subset of the Food-101 and Malaysian Food 11 datasets, including web-scraped images, with a focus on food items found locally in Malaysia. In our experiments, we explore how fine-tuning on the image dataset improves the performance of the model.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_86-Mobile_Food_Journalling_Application_with_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Varying Reaction Times with RNN and Application to Human-like Autonomous Car-following Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130985</link>
        <id>10.14569/IJACSA.2022.0130985</id>
        <doi>10.14569/IJACSA.2022.0130985</doi>
        <lastModDate>2022-09-30T12:12:39.6430000+00:00</lastModDate>
        
        <creator>Lijing Ma</creator>
        
        <creator>Shiru Qu</creator>
        
        <creator>Junxi Zhang</creator>
        
        <creator>Xiangzhou Zhang</creator>
        
        <subject>Car-following model; intelligent driver model; human-driven vehicle; autonomous vehicle; varying reaction time; string stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The interaction between human-driven vehicles and autonomous vehicles has become a vital issue in micro-transportation science. Compared to autonomous vehicles, human-driven vehicles have varying reaction times that can compromise traffic efficiency and stability. However, human drivers can subconsciously anticipate future traffic conditions, which guarantees qualified performance. This paper proposes an estimation method for varying reaction times and a human-like autonomous car-following model. The varying reaction times are estimated with recurrent neural networks (RNNs) after a cross-correlation analysis of human-driven vehicles’ trajectory profiles. A human-like autonomous car-following model, IDM RTTA for short, is established based on the Intelligent Driver Model (IDM), considering both varying reaction times and temporal anticipation. The analytical string stability of IDM RTTA is deduced and illustrated. The trajectory simulation results show that the proposed model yields increased trajectory prediction accuracy, which will benefit the interaction between human-driven vehicles and autonomous vehicles.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_85-Estimation_of_Varying_Reaction_Times_with_RNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Decision Support Ensemble Voting Model for Coronary Artery Disease Prediction in Smart Healthcare Monitoring Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130984</link>
        <id>10.14569/IJACSA.2022.0130984</id>
        <doi>10.14569/IJACSA.2022.0130984</doi>
        <lastModDate>2022-09-30T12:12:39.6270000+00:00</lastModDate>
        
        <creator>Anas Maach</creator>
        
        <creator>Jamila Elalami</creator>
        
        <creator>Noureddine Elalami</creator>
        
        <creator>El Houssine El Mazoudi</creator>
        
        <subject>Machine learning; smart healthcare; coronary artery disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Coronary Artery Disease (CAD) is one of the most common cardiac diseases worldwide and causes disability and economic burden. It is the world’s leading and most serious cause of mortality, with approximately 80% of deaths reported in low- and middle-income countries. The preferred and most precise diagnostic tool for CAD is angiography, but it is invasive, expensive, and technically demanding. The research community is therefore increasingly interested in computer-aided diagnosis of CAD via machine learning (ML) methods. The purpose of this work is to present an e-diagnosis tool based on ML algorithms that can be used in a smart healthcare monitoring system. We applied the machine learning methods that have shown superior results on various medical datasets in the literature: RandomForest, XGBoost, MultilayerPerceptron, J48, AdaBoost, NaiveBayes, LogitBoost, and KNN. Any single classifier can be efficient on a particular dataset, so an ensemble model using majority voting was designed to take advantage of the well-performing single classifiers. Ensemble learning aims to combine the forecasts of multiple individual classifiers to achieve higher precision, specificity, sensitivity, and accuracy than any individual classifier. Furthermore, we benchmarked our proposed model against the most efficient and well-known ensemble methods, such as Bagging and Stacking, using the cross-validation technique. The experimental results confirm that the ensemble majority voting approach based on the top three classifiers, MultilayerPerceptron, RandomForest, and AdaBoost, achieves the highest accuracy of 88.12% and outperforms all other classifiers. This study demonstrates that the proposed majority voting ensemble approach is the most accurate machine learning classification approach for the prediction and detection of coronary artery disease.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_84-An_Intelligent_Decision_Support_Ensemble_Voting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for Personalised Interventions and Early Drop-out Prediction in MOOCs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130983</link>
        <id>10.14569/IJACSA.2022.0130983</id>
        <doi>10.14569/IJACSA.2022.0130983</doi>
        <lastModDate>2022-09-30T12:12:39.6100000+00:00</lastModDate>
        
        <creator>ALJ Zakaria</creator>
        
        <creator>BOUAYAD Anas</creator>
        
        <creator>Cherkaoui Malki Mohammed Oucamah</creator>
        
        <subject>MOOC; drop-out; dimensionality reduction; features selection; personalised intervention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In this paper, we propose an approach to the early detection of students at high risk of dropping out of a MOOC (Massive Open Online Course), and we design personalised interventions to mitigate that risk. We apply Machine Learning (ML) algorithms and data mining techniques to a dataset extracted from the XuetangX MOOC learning platform and sourced from the KDD Cup 2015. Since this dataset contains only raw student activity log records, we apply hybrid feature selection and dimensionality reduction techniques to extract relevant features and to reduce model complexity and computation time. We then build two models based on Genetic Algorithms (GA) and Deep Learning (DL) with supervised learning methods. The obtained results, according to the accuracy and AUC (Area Under Curve)-ROC (Receiver Operating Characteristic) metrics, prove the pertinence of the extracted features and encourage the use of hybrid feature selection. They also show that GA and DL outperform the baseline algorithms used in related works. To assess the generalisability of the approach, the same process is applied to a second benchmark dataset extracted from the university MOOC. A single web application hosted on the university server then produces an individual weekly drop-out probability using time-series data. It also proposes an approach to personalise and prioritise interventions for at-risk students according to their drop-out patterns.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_83-Intelligent_System_for_Personalised_Interventions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Navigation System for Autonomous Drone using Fiducial Marker Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130981</link>
        <id>10.14569/IJACSA.2022.0130981</id>
        <doi>10.14569/IJACSA.2022.0130981</doi>
        <lastModDate>2022-09-30T12:12:39.5970000+00:00</lastModDate>
        
        <creator>Mohammad Soleimani Amiri</creator>
        
        <creator>Rizauddin Ramli</creator>
        
        <subject>Proportional-Integral-Derivative (PID) controller; AprilTag detection system; autonomous navigation; fiducial marker detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Drones have been developing quickly for civilian applications in recent years. Because of the nonlinearity of the mathematical drone model, and the importance of precise navigation to avoid possible dangers, it is necessary to establish an algorithm that localizes the drone and simultaneously maneuvers it to the desired destination. This paper presents a visual, multi-stage, error-tolerant navigation algorithm for an autonomous drone that finds its target via tag-based fiducial marker detection. Dynamic and kinematic models of the drone were developed using the Newton-Euler method. The position and orientation of the drone relative to the tag are determined by AprilTag, which is used as feedback in a closed-loop control system with an Adjustable Proportional-Integral-Derivative (APID) controller. The parameters of the controller are tuned based on the steady-state error, defined as the distance of the drone from the desired point. The sequence of path trajectories that the drone follows to reach the desired point is defined as the navigation algorithm. A model of the drone was simulated in a virtual outdoor environment to mimic hovering among complex obstacles. The results show the satisfactory performance of the navigation system with the APID controller in comparison with the conventional Proportional-Integral-Derivative (PID) controller. It can be ascertained that the proposed tag-marker-based navigation system in the closed-loop control system is applicable to maneuvering the drone autonomously and useful for various industrial tasks in indoor and outdoor environments.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_81-Visual_Navigation_System_for_Autonomous_Drone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Mobile Application for the Logistics Process of a Fire Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130982</link>
        <id>10.14569/IJACSA.2022.0130982</id>
        <doi>10.14569/IJACSA.2022.0130982</doi>
        <lastModDate>2022-09-30T12:12:39.5970000+00:00</lastModDate>
        
        <creator>Luis Enrique Parra Aquije</creator>
        
        <creator>Luis Gustavo Vasquez Carranza</creator>
        
        <creator>Gustavo Bernnet Alfaro Pena</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Fire company; logistics process; mobile application; RUP methodology; expert survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Currently, the logistics process is an important part of any company because it helps to manage the assets and products that enter and leave it. Some companies carry out this process physically, saving the information on sheets of paper or in Excel files, which takes longer and lags behind current practice, namely using mobile applications to improve this process. Accordingly, it was decided to implement a mobile application with the aim of improving the logistics process in the Callao No. 15 fire company. The RUP methodology was used to develop the application in an optimal way. Finally, a survey of 10 experts was conducted in Google Forms to evaluate the mobile application, and a favorable result was obtained from their opinions: 70% of the respondents indicate that the usability of the mobile application has a “Very high” level; 80% indicate that its presentation has a “Very high” level; 90% indicate that its functionality has a “Very high” level; and 80% indicate that its security has a “Very high” level.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_82-Design_of_a_Mobile_Application_for_the_Logistics_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach of Wavelet Transform, Convolutional Neural Networks and Gated Recurrent Units for Stock Liquidity Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130980</link>
        <id>10.14569/IJACSA.2022.0130980</id>
        <doi>10.14569/IJACSA.2022.0130980</doi>
        <lastModDate>2022-09-30T12:12:39.5800000+00:00</lastModDate>
        
        <creator>Mohamed Ben Houad</creator>
        
        <creator>Mohammed Mestari</creator>
        
        <creator>Khalid Bentaleb</creator>
        
        <creator>Adnane El Mansouri</creator>
        
        <creator>Salma El Aidouni</creator>
        
        <subject>Stock liquidity; wavelet transform; convolutional neural networks; GRU cell; Casablanca stock exchange</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Stock liquidity forecasting is critical for investors, issuers, and financial market regulators. The objective of this study is to propose a method capable of accurately predicting the liquidity of stocks. The few studies on stock liquidity forecasting have focused on single models such as the Seasonal Auto-Regressive Integrated Moving Average with eXogenous factors, the nonlinear autoregressive network with exogenous input, and Deep Learning. A new trend in forecasting, which attempts to combine several approaches, is currently emerging. Inspired by this trend, we propose a hybrid approach of Wavelet Transform, Convolutional Neural Networks, and Gated Recurrent Units to predict stock liquidity. Our model is tested on daily data of companies listed on the Casablanca Stock Exchange from 2000 to 2021. Its forecasting performance is evaluated based on the Mean Absolute Error, the Root Mean Square Error, the Mean Absolute Percentage Error, Theil’s U statistic, and the correlation coefficient. Finally, the outperformance of the proposed model is confirmed by comparison with other reference forecasting models. This study contributes to the enrichment of the field of financial risk prediction and can constitute an analysis framework to help financial market stakeholders forecast stock liquidity.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_80-A_Hybrid_Approach_of_Wavelet_Transform_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-grained Access Control in Distributed Cloud Environment using Trust Valuation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130978</link>
        <id>10.14569/IJACSA.2022.0130978</id>
        <doi>10.14569/IJACSA.2022.0130978</doi>
        <lastModDate>2022-09-30T12:12:39.5630000+00:00</lastModDate>
        
        <creator>Aparna Manikonda</creator>
        
        <creator>Nalini N</creator>
        
        <subject>Fine-grained; distributed; access control; trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Cloud computing has existed as an adaptable technology that integrates with IoT, Big Data, and WSN to provide reliable, scalable, and mesh-free services. However, because of its open nature, the privacy of the cloud is an important parameter for today’s research. The most important privacy factors in the cloud are access control and user trust. Many articles related to access control and trust management have been presented, but most of them involve highly complex algorithms that result in network overhead. The proposed security framework aims at a better and more effective system wherein multiple distributed centers are created with trust-based computing for the authentication and validation of requests from users and their resources. The idea of trust here is to enable efficient decision-making and establish reliable relationships among users and resources using the least computation. Each user has different permissions for each file present on the cloud server. The simulation results show improvements in the rate of successful transactions, time cost, and network overhead.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_78-Fine_grained_Access_Control_in_Distributed_Cloud_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BERT-Based Hybrid RNN Model for Multi-class Text Classification to Study the Effect of Pre-trained Word Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130979</link>
        <id>10.14569/IJACSA.2022.0130979</id>
        <doi>10.14569/IJACSA.2022.0130979</doi>
        <lastModDate>2022-09-30T12:12:39.5630000+00:00</lastModDate>
        
        <creator>Shreyashree S</creator>
        
        <creator>Pramod Sunagar</creator>
        
        <creator>S Rajarajeswari</creator>
        
        <creator>Anita Kanavalli</creator>
        
        <subject>Multi-class text classification; transfer learning; pre-training; word embeddings; GloVe; bidirectional encoder representations from transformers; long short-term memory; gated recurrent units; hybrid model; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Due to the COVID-19 pandemic, which started in the year 2020, many nations imposed lockdowns to curb the spread of the virus. People have been sharing their experiences of and perspectives on the lockdown situation on social media, giving rise to an increased number of tweets and posts. Multi-class text classification, a method of classifying a text into one of several pre-defined categories, is one of the effective ways to analyze such data and is implemented in this paper. A COVID-19 dataset consisting of fifteen pre-defined categories is used in this work. This paper presents a multi-layered hybrid model, an LSTM followed by a GRU, to integrate the benefits of both techniques. The word embedding techniques GloVe and BERT are employed, and it is found that, over three epochs, the transfer-learning-based pre-trained BERT-hybrid model performs one percent better than the GloVe-hybrid model, but the state-of-the-art fine-tuned BERT-base model outperforms the BERT-hybrid model by three percent in terms of validation loss. It is expected that, over a larger number of epochs, the hybrid model might outperform the fine-tuned model.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_79-BERT_Based_Hybrid_RNN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Node Deployment Technique for Heterogeneous Wireless Sensor Network based Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130977</link>
        <id>10.14569/IJACSA.2022.0130977</id>
        <doi>10.14569/IJACSA.2022.0130977</doi>
        <lastModDate>2022-09-30T12:12:39.5500000+00:00</lastModDate>
        
        <creator>Jayashree Dev</creator>
        
        <creator>Jibitesh Mishra</creator>
        
        <subject>Heterogeneous wireless sensor network; energy efficiency; node deployment; object detection network; particle swarm optimization; harmony search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The lifetime of the network and the quality of operation are two important issues in a wireless sensor network system meant for object detection and tracking applications. At the same time, there should be a tradeoff between network cost and quality of operation, as a high network cost limits real-time usability. Heterogeneous wireless sensor networks promise prolonged network lifetime and enhanced network reliability, as they contain a mixture of nodes with different characteristics. Further prolongation of network lifetime can be achieved by managing the available node energy properly, i.e., by minimizing the number of communications, minimizing node density, minimizing the overhead information generated during operation, etc. A proper node deployment scheme not only helps to enhance the lifetime of the network but also helps to reduce deployment cost while maintaining the quality of operation in terms of object detection accuracy. This paper focuses on energy-efficient node deployment in a heterogeneous wireless sensor network system with the features of maximum network coverage, optimum node density, and optimum network cost. It proposes a novel energy-efficient node deployment algorithm that determines the number of static and mobile nodes required for deployment and then relocates the mobile nodes to cover coverage holes using the 8-neighbourhood and Particle Swarm Optimization (PSO) algorithms. The performance of the proposed algorithm is compared with a corresponding Harmony Search Algorithm (HSA)- and PSO-based node deployment model, and the proposed model outperforms both.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_77-Energy_Efficient_Node_Deployment_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Persistent Threat Attack Detection using Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130976</link>
        <id>10.14569/IJACSA.2022.0130976</id>
        <doi>10.14569/IJACSA.2022.0130976</doi>
        <lastModDate>2022-09-30T12:12:39.5330000+00:00</lastModDate>
        
        <creator>Ahmed Alsanad</creator>
        
        <creator>Sara Altuwaijri</creator>
        
        <subject>APT Attack detection; DNS; network; cybersecurity; clustering algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The Advanced Persistent Threat (APT) attack has become one of the most complex attacks and targets sensitive information. Many cybersecurity systems have been developed to detect the APT attack from network data traffic and requests. However, they still need to be improved to identify this attack effectively due to its complexity and slow movement. It gains access to organizations either from an active directory, by obtaining remote access, or even by targeting the Domain Name Server (DNS). Nowadays, many machine learning (ML) techniques have been implemented to detect APT attacks using tools available in the market. However, there are still limitations in terms of accuracy, efficiency, and effectiveness, especially the lack of labeled data to train ML methods. This paper proposes a framework to detect APT attacks using the most applicable clustering algorithms, such as APRIORI, K-means, and Hunt’s algorithm. To evaluate and compare the performance of the proposed framework, several experiments are conducted on a public dataset. The experimental results show that the Support Vector Machine with Radial Basis Function (SVM-RBF) achieves the highest accuracy rate, reaching about 99.2%. This result confirms the effectiveness of the developed framework for detecting attacks from network data traffic.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_76-Advanced_Persistent_Threat_Attack_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Recovery Comparative Analysis using Open-based Forensic Tools Source on Linux</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130975</link>
        <id>10.14569/IJACSA.2022.0130975</id>
        <doi>10.14569/IJACSA.2022.0130975</doi>
        <lastModDate>2022-09-30T12:12:39.5330000+00:00</lastModDate>
        
        <creator>Muhammad Fahmi Abdillah</creator>
        
        <creator>Yudi Prayudi</creator>
        
        <subject>Recovery; tools; FTK imager; foremost; Testdisk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Data recovery is one of the forensic techniques used to recover data that has been lost or deleted. Data recovery is carried out when data has been deleted or damaged. If data has been lost, deleted, or even tampered with, a forensic expert has several ways to restore it. One of them is to use a complete data recovery method with forensic tools, namely TSK Recover, FTK Imager, Foremost Recover, and Testdisk Recover. Unfortunately, tools such as FTK Imager and TSK Recover have a weakness: some damaged or corrupted data files cannot be restored in their entirety; they can be recovered but not opened. This study uses a tool comparison approach based on Foremost Recover and Testdisk Recover. However, this method cannot be used through a graphical user interface (GUI); it must be run from the command-line interface (CLI) on the Linux operating system. Files recovered in this way are fully restored.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_75-Data_Recovery_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Extraction of Faces and Text Lower Third Techniques for an Audiovisual Archive System using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130974</link>
        <id>10.14569/IJACSA.2022.0130974</id>
        <doi>10.14569/IJACSA.2022.0130974</doi>
        <lastModDate>2022-09-30T12:12:39.5170000+00:00</lastModDate>
        
        <creator>Khalid El Fayq</creator>
        
        <creator>Said Tkatek</creator>
        
        <creator>Lahcen Idouglid</creator>
        
        <creator>Jaafar Abouchabaka</creator>
        
        <subject>Image processing; OpenCV; Tesseract; video OCR; face detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The digitization of audiovisual archives has become a complex field that requires human and material resources; its automation and optimization have long been a center of interest for researchers and media manufacturers, in particular those concerned with the integration of artificial intelligence tools in the industry. As part of this digitization effort, this work develops an optical character recognition and face recognition model to automate the tasks of the audiovisual archivist, replacing the manual method, from TV news video. This article presents an approach that handles both lower-third text in the Arabic language and facial detection and recognition of news presenters, providing accurate classification results, together with a presentation of different methods and algorithms for Arabic characters. Many studies have been presented in this area, yet a satisfactory classification accuracy has not been achieved. The comparative state-of-the-art results adopt the latest approaches to study either face recognition or OCR, but this model supports both at the same time. The article presents the context of the work, the method proposed to extract the texts in the video using machine learning, the specificities of the Arabic language, and finally the reasons that govern the decisions taken in the realization steps. The best result of this approach in a real project at the media station was 90.60%. The presenter dataset was collected from presenter images, and the character dataset was built using the Pytesseract library.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_74-Detection_and_Extraction_of_Faces_and_Text_Lower_Third_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Online Movie Reviews using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130973</link>
        <id>10.14569/IJACSA.2022.0130973</id>
        <doi>10.14569/IJACSA.2022.0130973</doi>
        <lastModDate>2022-09-30T12:12:39.4870000+00:00</lastModDate>
        
        <creator>Isaiah Steinke</creator>
        
        <creator>Justin Wier</creator>
        
        <creator>Lindsay Simon</creator>
        
        <creator>Raed Seetan</creator>
        
        <subject>Decision tree; machine learning (ML); natural language processing (NLP); random forests; sentiment analysis; support vector machine (SVM); reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Many websites encourage their users to write reviews for a wide variety of products and services. In particular, movie reviews may influence the decisions of potential viewers. However, users face the arduous tasks of summarizing the information in multiple reviews and determining the useful and relevant reviews among a very large number of reviews. Therefore, we developed machine learning (ML) models to classify whether an online movie review has positive or negative sentiment. We utilized the Stanford Large Movie Review Dataset to build models using decision trees, random forests, and support vector machines (SVMs). Further, we compiled a new dataset comprising reviews from IMDb posted in 2019 and 2020 to assess whether sentiment changed owing to the coronavirus disease 2019 (COVID-19) pandemic. Our results show that the random forests and SVM models provide the best classification accuracies of 85.27% and 86.18%, respectively. Further, we find that movie reviews became more negative in 2020. However, statistical tests show that this change in sentiment cannot be discerned from our model predictions.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_73-Sentiment_Analysis_of_Online_Movie_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MOOC Dropout Prediction using FIAR-ANN Model based on Learner Behavioral Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130972</link>
        <id>10.14569/IJACSA.2022.0130972</id>
        <doi>10.14569/IJACSA.2022.0130972</doi>
        <lastModDate>2022-09-30T12:12:39.4400000+00:00</lastModDate>
        
        <creator>S. Nithya</creator>
        
        <creator>S.Umarani</creator>
        
        <subject>Dropout prediction; data analytics; association rule mining; machine learning; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Massive Open Online Courses (MOOCs) are a transformative technology in digital learning that incorporates new techniques through video sessions, exams, activities, and conversations. During COVID-19, learners turned to such courses to develop their professional and personal skills. Research has concentrated on employing video interaction analysis to characterize crucial MOOC tasks, including predicting dropouts and student achievement. Our work consists of generating and selecting the best features, based on learner behavior, for evaluating the dropout measure. To locate the frequent itemsets for feature creation, an association rule FP-growth approach is applied. The neural network is implemented using frequent itemset-3, which is used for feature selection. The evaluation metrics are calculated using the Multilayer Perceptron (MLP) method. The metric values are then compared between the proposed model and several baseline supervised machine learning models, namely Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB). The FIAR (Feature Importance Association Rule)-ANN (Artificial Neural Network) dropout prediction model was tested on the KDD Cup 2015 dataset and achieved a high accuracy of over 92.42%, which is approximately 18% better than the MLP-NN model. With the optimized parameters, we focus on lowering dropout rates and increasing learner retention.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_72-MOOC_Dropout_Prediction_using_FIAR_ANN_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Accounting Information System in Data Processing: Case Study in Indonesia Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130971</link>
        <id>10.14569/IJACSA.2022.0130971</id>
        <doi>10.14569/IJACSA.2022.0130971</doi>
        <lastModDate>2022-09-30T12:12:39.4230000+00:00</lastModDate>
        
        <creator>Meiryani </creator>
        
        <creator>Dezie Leonarda Warganegara</creator>
        
        <creator>Agustinus Winoto</creator>
        
        <creator>Gabrielle Beatrice Hudayat</creator>
        
        <creator>Erna Bernadetta Sitanggang</creator>
        
        <creator>Ka Tiong</creator>
        
        <creator>Jessica Paulina Sidauruk</creator>
        
        <creator>Mochammad Fahlevi</creator>
        
        <creator>Gredion Prajena</creator>
        
        <subject>Accounting; information systems; SAP; ERP; implementation of SAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This study aims to examine the implementation of System Application and Product in Data Processing (SAP) in a company, in order to provide solutions that help companies obtain reliable reports and improve performance. The study uses a mixed method, including interviews with resource persons who work in a well-known company. The data obtained were analyzed together with a literature study of data from the internet. The results of this study indicate that many companies still apply manual systems in reporting; one reason is the lack of adequate technological facilities within the company, so that companies cannot carry out their business processes optimally due to the absence of an integrated, interconnected system.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_71-Design_of_Accounting_Information_System_in_Data_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Greenhouse Monitoring and Controlling based on NodeMCU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130970</link>
        <id>10.14569/IJACSA.2022.0130970</id>
        <doi>10.14569/IJACSA.2022.0130970</doi>
        <lastModDate>2022-09-30T12:12:39.4100000+00:00</lastModDate>
        
        <creator>Yajie Liu</creator>
        
        <subject>Smart greenhouse; ThingSpeak cloud; NodeMCU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Food security is a major and growing issue as the human population becomes larger and the land available for cultivation becomes smaller, a situation aggravated by disruptive events in society, especially during the rapid spread of COVID-19. To mitigate this condition and further improve the yield and quality of food, this paper proposes a smart, low-cost greenhouse monitoring and control system, which mainly consists of sensors, actuators, an LCD display and microcontrollers. A DHT22 sensor is used to measure the surrounding temperature and humidity in the greenhouse, and a NodeMCU is used as the main microcontroller. Other facilities, such as a fan and a heater, are used to adjust the inside environment. The system monitors the growth environment continuously over an Internet connection; the monitoring data are transmitted to and stored in the ThingSpeak cloud, and users can visualize the live data in real time through a webpage or phone app. Because the environment is monitored continuously, the system can adjust it automatically whenever conditions fall outside the predefined levels. The system can be deployed in a greenhouse simply, and it maintains the greenhouse environment within a normal range dynamically and continuously.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_70-Smart_Greenhouse_Monitoring_and_Controlling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Acceptance of New Normal in COVID-19 Pandemic using Na&#239;ve Bayes Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130968</link>
        <id>10.14569/IJACSA.2022.0130968</id>
        <doi>10.14569/IJACSA.2022.0130968</doi>
        <lastModDate>2022-09-30T12:12:39.3930000+00:00</lastModDate>
        
        <creator>Siti Hajar Aishah Samsudin</creator>
        
        <creator>Norlina Mohd Sabri</creator>
        
        <creator>Norulhidayah Isa</creator>
        
        <creator>Ummu Fatihah Mohd Bahrin</creator>
        
        <subject>Sentiment analysis; COVID-19; new normal; acceptance; na&#239;ve bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The COVID-19 pandemic has had such a significant impact, and has caused difficulties in so many aspects of life, that new normal rules have had to be implemented to reduce its effects. New normal rules have been implemented by governments worldwide to break the virus chain and stop its transmission in society. Even if the COVID-19 outbreak is under control, governments still need to know whether society can adapt and adjust to new daily lifestyles. Many precautions must still be taken, as the transition to endemic status does not mean that COVID-19 will eventually disappear naturally. The World Health Organization has also warned that it is too early to treat COVID-19 as an endemic disease. Since the pandemic, many interactions have moved online, leading to increasing social media usage to express opinions about COVID-19. The objective of this study is to explore the capability of the Na&#239;ve Bayes algorithm in the sentiment classification of the public’s acceptance of the new normal in the COVID-19 pandemic. Na&#239;ve Bayes was chosen for its good performance in solving various other classification problems. In this study, Twitter data collected between March and June 2022 were used for the analysis. The evaluation results show that Na&#239;ve Bayes can deliver excellent and acceptable classification performance, with an accuracy of 83%. According to the findings of this research, many people have accepted the new normal in their daily lives. Future work would include scraping more data based on geolocation, improving the feature extraction technique, balancing the dataset and comparing Na&#239;ve Bayes performance with other well-known classifiers. Subsequent studies could also focus on detecting the emotions of the public and processing non-English tweets.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_68-Sentiment_Analysis_on_Acceptance_of_New_Normal_in_COVID_19_Pandemic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Partial Differential Equation (PDE) based Hybrid Diffusion Filters for Enhancing Noise Performance of Point of Care Ultrasound (POCUS) Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130969</link>
        <id>10.14569/IJACSA.2022.0130969</id>
        <doi>10.14569/IJACSA.2022.0130969</doi>
        <lastModDate>2022-09-30T12:12:39.3930000+00:00</lastModDate>
        
        <creator>Deepa V S</creator>
        
        <creator>Jagathyraj V P</creator>
        
        <creator>Gopikakumari R</creator>
        
        <subject>Anisotropic diffusion filter; POCUS; mixed Gaussian impulse noise; speckle noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>A hybrid filter is developed by combining the smoothing and edge-preservation properties of anisotropic diffusion (AD) filters with the noise-reduction features of median filtering. Mixed Gaussian impulse noise and speckle noise are considered for analysis. The performance of this hybrid filter is verified using ultrasound images. The effectiveness of the filter is then assessed with Point of Care Ultrasound (POCUS) images to verify whether the developed algorithm is applicable to them. POCUS refers to a handheld portable ultrasound instrument that can be used at the patient&#39;s bedside. Quantitative analysis with COVID-19 POCUS images, in terms of SNR, SSIM and MSE, is performed. Results demonstrate that for all test images, the proposed filter has the best SNR, the least MSE, and the highest SSIM. Significant improvement in image quality is thus observed both qualitatively and quantitatively. The novelty of the suggested technique is its effectiveness in reducing both mixed Gaussian impulse noise and speckle noise in ultrasound as well as POCUS images without the need for separate filters. POCUS has played a significant role in the diagnosis and management of pulmonary, cardiac and vascular pathologies associated with COVID-19. Automatic segmentation of these images and subsequent automatic detection and diagnosis are becoming increasingly popular due to the rapid development of artificial intelligence technologies. These results are useful in implementing better pre-processing prior to segmentation of ultrasound images, facilitating improved patient care.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_69-Partial_Differential_Equation_PDE_based_Hybrid_Diffusion_Filters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Electromigration-aware Scheduler for Multi-core Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130967</link>
        <id>10.14569/IJACSA.2022.0130967</id>
        <doi>10.14569/IJACSA.2022.0130967</doi>
        <lastModDate>2022-09-30T12:12:39.3770000+00:00</lastModDate>
        
        <creator>Jagadeesh Kumar P</creator>
        
        <creator>Mini M G</creator>
        
        <subject>Electromigration aware scheduler; useful lifetime; multi-core processor reliability; machine learning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The rising performance demands of modern technology devices create the need to pack more functionality per unit area, which is made possible by technology scaling. The extremely down-scaled, high-density processors used in such devices, operating at high frequencies and elevated temperatures, expedite various aging effects that impact the reliable lifetime of computing systems. Electromigration is considered an important intrinsic aging effect that reduces the useful lifetime of modern microprocessors. The objective of this work is to use machine learning methods to develop an electromigration-aware scheduler that assigns workloads to cores based on reliability and performance requirements. Aging estimation of the processor cores is performed using the proposed computationally efficient and accurate regression-based thermal prediction models. According to the experimental findings, the suggested technique can significantly extend the lifetime of multi-core architectures while allowing performance to degrade gracefully. The maximum error in the prediction of core lifetime using the proposed methodology is estimated to be 2.85%.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_67-Machine_Learning_based_Electromigration_aware_Scheduler.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Approach in Classification and Prediction of COVID-19 from Radiograph Images using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130966</link>
        <id>10.14569/IJACSA.2022.0130966</id>
        <doi>10.14569/IJACSA.2022.0130966</doi>
        <lastModDate>2022-09-30T12:12:39.3600000+00:00</lastModDate>
        
        <creator>Chalapathiraju Kanumuri</creator>
        
        <creator>CH. Renu Madhavi</creator>
        
        <creator>Torthi Ravichandra</creator>
        
        <subject>COVID-19; x-ray images; deep learning technique; CNN; efficient net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Effective screening and early detection of COVID-19 patients are crucial to slowing and stopping the disease&#39;s rapid spread. Currently, RT-PCR, CT scanning and chest X-ray (CXR) imaging are the diagnostic mechanisms for COVID-19 detection. In the proposed work, radiological examination using CXR images is adopted for COVID-19 detection, owing to the dearth of CT scanners and RT-PCR testing centers. Researchers have developed various deep learning and machine learning systems that can predict COVID-19 from CXR images, of which only a few have exhibited good prediction results. Most of the models suffer from overfitting, high variance, and memory and generalization errors, caused by noise and by the limited size of the datasets. Therefore, a Convolutional Neural Network (CNN) leveraging the EfficientNet architecture is proposed for COVID-19 case detection. The proposed method achieves an accuracy of 99%, which is better than the presently available methods. The proposed model can therefore be used in real-time COVID-19 classification systems.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_66-Novel_Approach_in_Classification_and_Prediction_of_Covid_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Model for Predicting Heart Disease using Ensemble Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130965</link>
        <id>10.14569/IJACSA.2022.0130965</id>
        <doi>10.14569/IJACSA.2022.0130965</doi>
        <lastModDate>2022-09-30T12:12:39.3600000+00:00</lastModDate>
        
        <creator>Jasjit Singh Samagh</creator>
        
        <creator>Dilbag Singh</creator>
        
        <subject>Heart disease; diagnosis; ensemble; optimization; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The death rate related to cardiac disease is continuously increasing across the world. Predicting heart disease in advance may help experts suggest pre-emptive measures to minimize the risk of death. Machine learning technologies make early diagnosis of heart disease symptoms possible. The existing machine learning models are inefficient in terms of simulation error, accuracy and timing for heart disease prediction; hence, an efficient approach is needed. In this paper, a model based on machine learning techniques is proposed for early and accurate prediction of heart disease. The proposed model is based on feature optimisation, feature selection, and ensemble learning. Using the WEKA 3.8.3 tool, feature selection and feature optimisation techniques are applied to eliminate irrelevant features, and the pragmatic features are then tested using ensemble techniques. The proposed model is further compared, in terms of heart disease prediction effectiveness, with an existing model that uses neither feature selection nor feature optimisation. The results show that the proposed model gives better performance in terms of simulation error, response time and accuracy in heart disease prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_65-A_Machine_Learning_Model_for_Predicting_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Monadic Co-simulation Model for Cyber-physical Production Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130964</link>
        <id>10.14569/IJACSA.2022.0130964</id>
        <doi>10.14569/IJACSA.2022.0130964</doi>
        <lastModDate>2022-09-30T12:12:39.3470000+00:00</lastModDate>
        
        <creator>Daniel-Cristian Craciunean</creator>
        
        <subject>Models; metamodels; co-simulation; adjoint functors; monads; cyber-physical production systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The production flexibility required by the new industrial revolutions is largely based on heterogeneous Cyber-Physical Production System (CPPS) models that cooperate with each other to perform complex tasks. To accomplish tasks at an acceptable pace, CPPSs should be based on appropriate cooperation mechanisms. To this end, a CPPS must be able to provide services in the form of functionalities to other CPPSs, and also to use the functionalities of other CPPSs. The cooperation of two CPPSs is achieved by co-simulating the two models, which allows partial or total access to the functionalities of one system by the other. Requests from one CPPS to another create connection moments between the two models that can only occur in certain states of the two models; likewise, the answers to these requests create connections between the two models in other, subsequent states. Optimal aggregation of the behaviors of the two models by co-simulation is essential, because doing it incorrectly can lead to very long waiting times and cause major problems. We show in this paper that the behavior of such a simulation model can be represented by a category, and that the co-simulation of two models can be defined by a monad determined by two adjoint functors between the simulation categories of the two models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_64-A_Monadic_Co_simulation_Model_for_Cyber_physical_Production_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Networks with Transfer Learning for Pneumonia Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130963</link>
        <id>10.14569/IJACSA.2022.0130963</id>
        <doi>10.14569/IJACSA.2022.0130963</doi>
        <lastModDate>2022-09-30T12:12:39.3300000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Victor Guevara-Ponce</creator>
        
        <creator>Ofelia Roque Paredes</creator>
        
        <creator>Fernando Sierra-Li&#241;an</creator>
        
        <creator>Joselyn Zapata-Paulini</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Neural networks; transfer learning; pneumonia; detection; Convolutional</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Pneumonia is a type of acute respiratory infection, caused by microbes and viruses, that affects the lungs. Pneumonia is the leading cause of infant mortality in the world, accounting for 81% of deaths in children under five years of age. There were approximately 1.2 million cases of pneumonia in children under five years of age, and 180,000 deaths, in 2016. Early detection of pneumonia can help reduce mortality rates. Therefore, this paper presents four convolutional neural network (CNN) models to detect pneumonia from chest X-ray images. The CNNs were trained to classify X-ray images into two types, normal and pneumonia, using several convolutional layers. The four models used in this work are pre-trained: VGG16, VGG19, ResNet50, and InceptionV3. The measures used for evaluating the results are accuracy, recall, and F1-score. The models were trained and validated on the dataset. The results showed that the InceptionV3 model achieved the best performance, with 72.9% accuracy, 93.7% recall, and an F1-score of 82%. This indicates that CNN models are suitable for detecting pneumonia with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_63-Convolutional_Neural_Networks_with_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Q-learning Approach based on CNN and XGBoost for Traffic Signal Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130961</link>
        <id>10.14569/IJACSA.2022.0130961</id>
        <doi>10.14569/IJACSA.2022.0130961</doi>
        <lastModDate>2022-09-30T12:12:39.3130000+00:00</lastModDate>
        
        <creator>Nada Faqir</creator>
        
        <creator>Chakir Loqman</creator>
        
        <creator>Jaouad Boumhidi</creator>
        
        <subject>Convolutional neural network; extreme gradient; traffic control; traffic optimization; urban mobility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Traffic signal control is a way to reduce traffic jams in urban areas and to optimize the flow of vehicles by minimizing total waiting times. Several intelligent methods have been proposed for controlling traffic signals. However, these methods use a less efficient road-feature vector, which can lead to suboptimal control. The objective of this paper is to propose a deep reinforcement learning approach, a hybrid model that combines a convolutional neural network with eXtreme Gradient Boosting, for traffic light optimization. We first introduce a deep convolutional neural network architecture to extract the best features from all available traffic data, and then integrate the extracted features into the eXtreme Gradient Boosting model to improve prediction accuracy. In our approach, cross-validated grid search is used for hyper-parameter tuning during the training of the eXtreme Gradient Boosting model, which attempts to optimize the traffic signal control. Our system is coupled to a microscopic agent-based simulator (Simulation of Urban MObility). Simulation results show that the proposed approach significantly improves the average waiting time when compared to other well-known traffic signal control algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_61-Deep_Q_learning_Approach_based_on_CNN_and_XGBoost.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Text Summarization using Document Clustering Named Entity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130962</link>
        <id>10.14569/IJACSA.2022.0130962</id>
        <doi>10.14569/IJACSA.2022.0130962</doi>
        <lastModDate>2022-09-30T12:12:39.3130000+00:00</lastModDate>
        
        <creator>Senthamizh Selvan. R</creator>
        
        <creator>K. Arutchelvan</creator>
        
        <subject>Named entity recognition; text summarization; k-means clustering; Zipf’s law</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Due to the rapid development of internet technology, social media and popular research article databases have generated a large amount of open text information, leading to &#39;Big Data&#39;. Textual information about an event or topic may be recorded repeatedly on different websites. Text summarization (TS) is an emerging research field that helps to produce a summary from a single document or from multiple documents. Because documents contain redundant information, part or all of some sentences may be omitted without changing the gist of the document. TS can be organized as an extractive process that collects salient sentences based on their position in the text, rather than being semantic in nature. Handling non-ASCII characters and punctuation, including tokenization and lemmatization, is involved in generating a summary. This research work proposes an Entity-Aware Text Summarization using Document Clustering (EASDC) technique to extract a summary from multiple documents. Named Entity Recognition (NER) plays a vital part in the proposed work: topics and key terms are identified using the NER technique. Extracted entities are ranked with Zipf’s law, and sentence clusters are formed using k-means clustering. A cosine-similarity-based technique is used to eliminate similar sentences across the documents and produce a unique summary. The proposed EASDC technique is evaluated on the CNN dataset, and it shows an improvement of 1.6 percentage points when compared with the baseline methods TextRank and LexRank.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_62-Automatic_Text_Summarization_using_Document_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Communities of Practice to Promote Digital Agriculturists’ Learning Competencies and Learning Engagement: Conceptual Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130960</link>
        <id>10.14569/IJACSA.2022.0130960</id>
        <doi>10.14569/IJACSA.2022.0130960</doi>
        <lastModDate>2022-09-30T12:12:39.3000000+00:00</lastModDate>
        
        <creator>Maneerat Manyuen</creator>
        
        <creator>Surapon Boonlue</creator>
        
        <creator>Jariya Neanchaleay</creator>
        
        <creator>Vitsanu Nittayathammakul</creator>
        
        <subject>VCoPs; digital inquiry; digital agriculture; learning competencies; learning engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Virtual Communities of Practice (VCoPs) are networks of people who share a common interest and a desire to learn together in the same domain via ICT. A limitation of the existing concepts for developing VCoPs in general contexts is that they do not explain how virtual learning technologies and digital learning strategies can be integrated to promote the expected learning outcomes in the agricultural sector. This research aims to propose a conceptual framework for developing Virtual Communities of Practice through Digital Inquiry (the VCoPs-DI Model) to promote digital agriculturists’ learning competencies and engagement. The research methodology was divided into three stages: the first stage involves a literature review for document analysis and synthesis, the second stage involves constructing the conceptual framework, and the third stage involves evaluating the content validity index. The key results showed that the developed conceptual framework has three parts: (1) The fundamentals of concept formation were divided into four concept bases: (1.1) Communities of Practice (CoPs), (1.2) Virtual Learning Environments (VLEs), (1.3) Digital Learning Resources (DLRs), and (1.4) the Critical Inquiry Method; (2) The identification of the manipulated variable was divided into two compositions: (2.1) VCoPs and (2.2) Digital Inquiry (DI); (3) The identification of the dependent variable was divided into two compositions: (3.1) digital agriculturists&#39; learning competencies, and (3.2) learning engagement. Findings from the experts’ review show that the scale-level content validity index (S-CVI) was 0.958. We anticipate that our conceptual framework could be used as a reference in the design and development of VCoPs models to promote learning in the agricultural sector.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_60-Virtual_Communities_of_Practice_to_Promote_Digital_Agriculturists.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Decentralized Sharing Economy Model based on Blockchain Technology: A Case Study of Najm for Insurance Services Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130959</link>
        <id>10.14569/IJACSA.2022.0130959</id>
        <doi>10.14569/IJACSA.2022.0130959</doi>
        <lastModDate>2022-09-30T12:12:39.2830000+00:00</lastModDate>
        
        <creator>Atheer Alkhammash</creator>
        
        <creator>Kawther Saeedi</creator>
        
        <creator>Fatmah Baothman</creator>
        
        <creator>Rania Anwar Aboalela</creator>
        
        <creator>Amal Babour</creator>
        
        <subject>Blockchain; decentralized; Ethereum; multichain; Najm; sharing economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Blockchain is an emerging technology that is used to address ownership, centrality, and security issues in different fields. Blockchain technology has converted centralized applications into decentralized and distributed ones. Existing sharing economy applications suffer from low efficiency and high complexity of services. Blockchain technology can be adopted to overcome these issues by effectively opening up secure information channels between the sharing economy industry and other related parties, encouraging industry integration and improving the ability of sharing economy organizations to readily obtain the information they require. This paper discusses how blockchain technology can enhance the development of insurance services by proposing a five-layer decentralized model based on the Ethereum platform. The Najm for Insurance Services Company in Saudi Arabia was employed in a case study applying the proposed model to effectively solve the issue of online underwriting, and to securely and efficiently enhance the verification and validation of transactions. The paper concludes with a review of the lessons learned and provides suggestions for the blockchain application development process.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_59-Efficient_Decentralized_Sharing_Economy_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gaussian Projection Deep Extreme Clustering and Chebyshev Reflective Correlation based Outlier Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130958</link>
        <id>10.14569/IJACSA.2022.0130958</id>
        <doi>10.14569/IJACSA.2022.0130958</doi>
        <lastModDate>2022-09-30T12:12:39.2670000+00:00</lastModDate>
        
        <creator>S. Rajalakshmi</creator>
        
        <creator>P. Madhubala</creator>
        
        <subject>Outlier detection; clustering; Gaussian random projection; deep extreme learning; Chebyshev distance; temporal; reflective correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Outlier detection, the task of identifying points that are noticeably distinct and different from the rest of a data sample, is a predominant issue in deep learning. When a framework is constructed, such distinctive points can mislead model training and compromise accurate predictions. It is therefore paramount to recognize and eliminate them before constructing any supervised model, and doing so is frequently the initial step in a deep learning problem. Over the recent few years, a number of outlier detection algorithms have been designed that deliver satisfactory results; however, their main disadvantages remain their time and space complexity and their unsupervised nature. In this work, a clustering-based outlier detection method called Random Projection Deep Extreme Learning-based Chebyshev Reflective Correlation (RPDEL-CRC) is proposed. First, a Gaussian Random Projection-based Deep Extreme Learning-based Clustering model is designed: applying a Gaussian random projection function to deep extreme learning obtains relevant and robust clusters corresponding to the data points. With these robust clusters, outlier detection time is reduced to a great extent. In addition, a novel Chebyshev Temporal and Reflective Correlation-based Outlier Detection model is proposed to detect outliers, thereby achieving high outlier detection accuracy. The performance of the RPDEL-CRC method is evaluated by applying it to the NIFTY-50 stock market dataset. Finally, we compare the results of the RPDEL-CRC method to state-of-the-art outlier detection methods using the outlier detection time, accuracy, error rate and false positive rate evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_58-Gaussian_Projection_Deep_Extreme_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Retinal Disease using Anchor-Free Modified Faster Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130956</link>
        <id>10.14569/IJACSA.2022.0130956</id>
        <doi>10.14569/IJACSA.2022.0130956</doi>
        <lastModDate>2022-09-30T12:12:39.2530000+00:00</lastModDate>
        
        <creator>Arulselvam. T</creator>
        
        <creator>S. J. Sathish Aaron Joseph</creator>
        
        <subject>Retinal disease; fundus image dataset; contrast enhancement; segmentation; Fast Fuzzy C Means; adaptive optimal OTSU; faster region-based convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Infections of the retinal tissue, as well as delayed or untreated therapy, may result in visual loss. Furthermore, when a large dataset is involved, the diagnosis is prone to inaccuracies. As a consequence, a completely automated model of retinal illness diagnosis is presented to eliminate human input while maintaining highly accurate classification findings. ODALAs (Optimal Deep Assimilation Learning Algorithms) are unable to handle zero errors, covariance, or linearity and normalcy. DLTs (Deep Learning Techniques) such as GANs (Generative Adversarial Networks) or CNNs (Convolutional Neural Networks) might replace the numerical solution of dynamic systems in order to speed up the runs. With this objective, this study proposes a completely automated multi-class retinal disorder prediction system in which pictures from the Fundus image dataset are enhanced using RSWHEs (Recursive Separated Weighted Histogram Equalizations) to boost contrast, and noise is eliminated using the Wiener filter. The improved picture is used for segmentation, which is done using clustering and optimal thresholding. The suggested EFFCM (Enriched Fast Fuzzy C Means) is used for clustering, and the suggested AOO (Adaptive Optimum Otsu) technique is used for optimal image thresholding. This work suggests AMF-RCNNs (anchor-free modified faster region-based CNNs), which integrate AFRPNs (anchor-free region proposal generation networks) with Improved Fast R-CNNs into a single network for detecting retinal issues accurately. The performance figures of 98.5% accuracy, 96.5% F-measure, 99.2% precision, and 98.5% on different subset features show better results when compared with other related techniques or models.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_56-Identification_of_Retinal_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OpenCV Implementation of Grid-based Vertical Safe Landing for UAV using YOLOv5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130957</link>
        <id>10.14569/IJACSA.2022.0130957</id>
        <doi>10.14569/IJACSA.2022.0130957</doi>
        <lastModDate>2022-09-30T12:12:39.2530000+00:00</lastModDate>
        
        <creator>Hrusna Chakri Shadakshri V</creator>
        
        <creator>Veena M. B</creator>
        
        <creator>Keshihaa Rudra Gana Dev V</creator>
        
        <subject>Autonomous UAV system; computer vision algorithm; YOLOv5; safe landing site selection; Haversine equations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The challenge of proving autonomous landing in practical situations is difficult and highly risky. Adopting autonomous landing algorithms substantially minimizes the probability of human-involved mishaps, which may enable the use of drones in populated metropolitan areas to their full potential. This paper proposes an Unmanned Aerial Vehicle (UAV) vertical safe landing &amp; navigation pipeline that relies on lightweight computer vision modules able to execute on the limited computational resources on-board a typical UAV. In this work, a grid-based mask technique, implemented using OpenCV, is proposed for selecting safe landing zones, where each grid is parameterizable based on the size of the UAV. A custom-trained YOLOv5 model, trained on aerial views of pedestrians, cars &amp; bikes to identify them as obstacles, is the underlying building block of the safe landing algorithm. The nearest obstacle-free zone algorithm is applied over the YOLOv5 output: boundary box locations are identified using Hue Saturation Value (HSV) filtering and then split into grids of safe landing zones, with maximum coverage taken into account while analyzing each scene. The pipeline performs a 2-level operation to prevent collisions while descending at different altitudes. Since the UAV is expected to process imagery only at predetermined altitudes, processing time is shortened, and a PID signal is generated for the UAV actuators to navigate to the required safe zone with utmost safety and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_57-OpenCV_Implementation_of_Grid_based_Vertical_Safe_Landing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Privacy and Security Challenges in e-Health Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130955</link>
        <id>10.14569/IJACSA.2022.0130955</id>
        <doi>10.14569/IJACSA.2022.0130955</doi>
        <lastModDate>2022-09-30T12:12:39.2370000+00:00</lastModDate>
        
        <creator>Reem Alanazi</creator>
        
        <subject>EHR; e-health; security; privacy; cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Electronic Health Records (EHR) techniques are being used at an increasingly faster rate to store patient data, making it easier to retrieve, share, and utilize efficiently. This data can be used for research purposes, clinical trials, and for studying epidemiology to come up with strategies for epidemic control. With huge global inflation, the increasing costs of healthcare, and the shortage of medicine, it becomes convenient for healthcare organizations to migrate from the traditional healthcare system to a more sophisticated, cost-effective, and efficient cloud-based e-Health model. To realize the full potential of an ICT-based e-Health system, it is imperative for existing healthcare systems to be implemented in a full-fledged cloud environment. However, alongside its numerous benefits, the technology may pose some privacy and security threats as well. Therefore, the security and access control of such information is of vital significance. Nonetheless, despite the increasing interest of healthcare organizations in migrating from conventional healthcare systems to modern cloud-based e-Health systems, not enough care is being taken to address security and privacy issues effectively towards the protection of sensitive data.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_55-Analysis_of_Privacy_and_Security_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Credit Card Fraud using a Hybrid Ensemble Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130953</link>
        <id>10.14569/IJACSA.2022.0130953</id>
        <doi>10.14569/IJACSA.2022.0130953</doi>
        <lastModDate>2022-09-30T12:12:39.2200000+00:00</lastModDate>
        
        <creator>Sayali Saraf</creator>
        
        <creator>Anupama Phakatkar</creator>
        
        <subject>Credit card; hybrid ensemble model; bagging; boosting; data imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The rising number of credit card frauds presents a significant challenge for the banking industry. Many businesses and financial institutions suffer huge losses because card users are reluctant to use their cards. A primary goal of fraud detection is to identify prior transaction patterns in order to detect future fraud. In this paper, a hybrid ensemble model is proposed that combines bagging and boosting techniques to distinguish between fraudulent and legitimate transactions. During the experimentation, two highly imbalanced datasets are used: the European credit card dataset and the credit card simulation dataset. To overcome the problem of imbalanced data, an oversampling method is used to balance both datasets. The model is trained to predict output results by combining random forest with AdaBoost. The proposed model achieves a 98.27% area under curve score on the European credit card dataset and a 99.3% area under curve score on the simulation credit card dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_53-Detection_of_Credit_Card_Fraud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Covid-19 and Pneumonia Infection Detection from Chest X-Ray Images using U-Net, EfficientNetB1, XGBoost and Recursive Feature Elimination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130954</link>
        <id>10.14569/IJACSA.2022.0130954</id>
        <doi>10.14569/IJACSA.2022.0130954</doi>
        <lastModDate>2022-09-30T12:12:39.2200000+00:00</lastModDate>
        
        <creator>Munindra Lunagaria</creator>
        
        <creator>Vijay Katkar</creator>
        
        <creator>Krunal Vaghela</creator>
        
        <subject>Covid-19; u-net; efficientnet; semantic image segmentation; XGboost; recursive feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The pandemic caused by the COVID-19 virus is the most serious current threat to the public&#39;s health. For the purpose of identifying patients with Covid-19, Chest X-Rays have proven to be an indispensable imaging modality for hospitals. Nevertheless, radiologists need to commit a significant amount of time to their interpretation. It is possible to diagnose and triage cases of Covid-19 effectively and rapidly with the assistance of precise computer systems powered by Machine Learning techniques. Machine Learning techniques such as deep feature extraction can help detect the disease with improved precision and speed when used in conjunction with X-Ray images of the lung. This helps to alleviate the problem of the lack of testing kits. Using U-Net semantic image segmentation for lung segmentation and the deep feature extraction-based strategy suggested in this research, it is possible to differentiate between patients who have contracted the Covid-19 virus, patients with pneumonia, and healthy people. The proposed XGBoost and recursive feature elimination based methodology is evaluated using 20 different pre-trained deep learning models, including EfficientNet variations, and it is observed that the maximum detection accuracy, precision, recall, specificity, and F1-score are achieved when EfficientNetB1 is used to extract deep features. The respective values for these metrics are 97.6%, 0.964, 0.964, and 0.982. These findings lend credence to the efficiency of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_54-Covid_19_and_Pneumonia_Infection_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-instance Finger Knuckle Print Recognition based on Fusion of Local Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130952</link>
        <id>10.14569/IJACSA.2022.0130952</id>
        <doi>10.14569/IJACSA.2022.0130952</doi>
        <lastModDate>2022-09-30T12:12:39.2070000+00:00</lastModDate>
        
        <creator>Amine AMRAOUI</creator>
        
        <creator>Mounir AIT KERROUM</creator>
        
        <creator>Youssef FAKHRI</creator>
        
        <subject>Biometrics; Finger Knuckle Print; local features; fusion; compound local binary pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Personal identity has become an important asset in today&#39;s digital world for any individual in society. Biometrics offers itself as a reliable and secure guarantor of our identities, so it has become essential to build efficient and robust recognition systems. In this direction, we propose a fusion approach that aims to optimally exploit the dividing block dimensions in local methods to reduce similarities. We use the compound local binary pattern (CLBP) for local feature extraction, a robust descriptor that exploits both the sign and the magnitude of the differences between the center and neighbor gray values. The reliability of the proposed approach was evaluated on the PolyU Finger Knuckle Print (FKP) database. We present several experimental results that show the detailed path of our approach, explain the choices made for each step, and illustrate the significant improvements compared to other existing recognition systems in the literature. The recognition rate of the proposed global approach is one of the highest among the other methods, with optimal final recognition rates varying between 99.70% and 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_52-Multi_instance_Finger_Knuckle_Print_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Foreground Segmentation based on Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130951</link>
        <id>10.14569/IJACSA.2022.0130951</id>
        <doi>10.14569/IJACSA.2022.0130951</doi>
        <lastModDate>2022-09-30T12:12:39.1900000+00:00</lastModDate>
        
        <creator>Pavan Kumar Tadiparthi</creator>
        
        <creator>Sagarika Bugatha</creator>
        
        <creator>Pradeep Kumar Bheemavarapu</creator>
        
        <subject>Foreground segmentation; deep learning; Convolutional Neural Network (CNN); Visual Geometry Group (VGG) 16 architecture; Transposed Convolutional Neural Network (TCNN); Gaussian Mixture Model (GMM); Visual Background Extractor (VIBE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Foreground segmentation in dynamic videos is a challenging task for many researchers. Many traditionally developed methods have been explored; however, the performance of those state-of-the-art procedures has not yielded encouraging results. Hence, to obtain efficient results, a deep learning-based neural network model is proposed in this paper. The proposed methodology is based on a Convolutional Neural Network (CNN) model incorporating the Visual Geometry Group (VGG) 16 architecture, which is further divided into two sections, namely, a Convolutional Neural Network section for feature extraction and a Transposed Convolutional Neural Network (TCNN) section for up-sampling feature maps. A thresholding technique is then employed for effective segmentation of the foreground from the background in images. The Change Detection (CDNET) 2014 benchmark dataset is used for the experimentation. It consists of 11 categories, and each category contains four to six videos. The baseline, camera jitter, dynamic background, and bad weather categories are considered for the experimentation. The performance of the proposed model is compared with state-of-the-art techniques, such as the Gaussian Mixture Model (GMM) and Visual Background Extractor (VIBE), for its efficiency in segmenting foreground images.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_51-A_Review_of_Foreground_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the State of Mind of Self-quarantined People during COVID-19 Pandemic Lockdown Period: A Multiple Correspondence Analysis Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130949</link>
        <id>10.14569/IJACSA.2022.0130949</id>
        <doi>10.14569/IJACSA.2022.0130949</doi>
        <lastModDate>2022-09-30T12:12:39.1730000+00:00</lastModDate>
        
        <creator>Gauri Vaidya</creator>
        
        <creator>Vidya Kumbhar</creator>
        
        <creator>Sachin Naik</creator>
        
        <creator>Vijayatai Hukare</creator>
        
        <subject>Correspondence analysis; happiness index; sentiment analysis; proportional odds logistic regression; self-quarantining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The COVID-19 (Corona) virus has spread across the world, threatening the lives of millions of people. In India, the first COVID-19 case was detected on 30th January 2020 in Kerala. To minimize the spread of the virus, countries all over the world implemented complete lockdowns. Due to these lockdowns, even people not exposed to the virus had to self-quarantine to keep themselves safe from infection. People (especially Indians) had never experienced such complete lockdown and quarantining situations before, which raises the question of how people would react to this situation. The present study aims to analyse how self-quarantined people during the COVID-19 lockdown period reacted to quarantining, what measures they took to deal with the situation, and what their sentiments towards quarantining were. The study also aims to measure their happiness and to identify the factors that are statistically significant to happiness. For this study, the data is collected through a survey method. Multiple correspondence analysis is performed to analyse the survey data. The happiness score is evaluated using the GNH (Gross National Happiness) methodology. Proportional Odds Logistic Regression is used to identify factors that are statistically significant in predicting happiness. The study suggests that the family factor is related to the happiness of self-quarantined people during such lockdown situations.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_49-Analyzing_the_State_of_Mind_of_Self_quarantined_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIBI (Sign System Indonesian Language) Text-to-3D Animation Translation Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130950</link>
        <id>10.14569/IJACSA.2022.0130950</id>
        <doi>10.14569/IJACSA.2022.0130950</doi>
        <lastModDate>2022-09-30T12:12:39.1730000+00:00</lastModDate>
        
        <creator>Erdefi Rakun</creator>
        
        <creator>Sultan Muzahidin</creator>
        
        <creator>IGM Surya A. Darmana</creator>
        
        <creator>Wikan Setiaji</creator>
        
        <subject>SIBI sign language; sequence generation; visual speech; animation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This research proposed a mobile application prototype to translate Indonesian text into SIBI (Sign System for the Indonesian Language) 3D gesture animation to bridge the communication gap between the deaf and others. To communicate in sign language, the signer uses his/her hands and fingers to demonstrate the word gesture while his/her mouth pronounces the word being expressed. Therefore, the proposed mobile application needs two animation generator components: the hand gesture generator and the lip movement generator. Hand gestures are made using a motion capture sensor. Mouth movements are created for all syllables available in the SIBI dictionary using the Dirichlet Free-Form Deformation (DFFD) method. The subsequent challenging work is synchronizing these two components and adding transitional gestures. Transitional gestures, produced by the cross-fading method, are needed so that a word gesture connects smoothly with the next word gesture. The Mean Opinion Score (MOS) test was run to measure the mouth movements in the 3D animation; the MOS score is 4.422. Four surveys were conducted to measure user satisfaction, showing that the generated animation did not differ significantly from the original video. The System Usability Scale (SUS) score is 76.25, which places the prototype in the GOOD category. The average time needed to generate an animation from Indonesian input text is less than 100ms.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_50-SIBI_Sign_System_Indonesian_Language_Text_to_3D_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Noise Removal Techniques on Retinal Optical Coherence Tomography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130948</link>
        <id>10.14569/IJACSA.2022.0130948</id>
        <doi>10.14569/IJACSA.2022.0130948</doi>
        <lastModDate>2022-09-30T12:12:39.1600000+00:00</lastModDate>
        
        <creator>T. M. Sheeba</creator>
        
        <creator>S. Albert Antony Raj</creator>
        
        <subject>Average or Mean filter; Bilateral filter; denoising image; Gaussian Filter; Median filter; optical coherence tomography; Wiener filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In the biomedical field, automatic disease detection by image processing has become the norm nowadays. For early illness detection, ophthalmologists have explored a variety of invasive and noninvasive procedures. Optical Coherence Tomography (OCT) is a noninvasive imaging technique for obtaining high resolution tomographic images of biological systems. Image quality is degraded by noise, which in turn degrades the performance of image processing algorithms. OCT images are captured with speckle noise, and prior to further processing it is critical to use an effective approach for denoising the image. In this paper, we applied the Median filter, Average (Mean) filter, Wiener filter, Gaussian filter, and Bilateral filter to OCT images, and discussed the advantages and drawbacks of each approach. The effectiveness of these filters is compared using the Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), and Contrast to Noise Ratio (CNR).</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_48-Analysis_of_Noise_Removal_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insect Pest Image Detection and Classification using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130947</link>
        <id>10.14569/IJACSA.2022.0130947</id>
        <doi>10.14569/IJACSA.2022.0130947</doi>
        <lastModDate>2022-09-30T12:12:39.1430000+00:00</lastModDate>
        
        <creator>Niranjan C Kundur</creator>
        
        <creator>P B Mallikarjuna</creator>
        
        <subject>Deep learning; faster RCNN; insect pest detection; IP102 dataset; efficient net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Farmers&#39; primary concern is to reduce crop loss because of pests and diseases, which occur irrespective of the cultivation process used. Worldwide, more than 40% of agricultural output is lost to plant pathogens, insects, and weed pests. Earlier, farmers relied on agricultural experts to detect pests; recently, deep learning methods have been utilized for insect pest detection to increase agricultural productivity. This paper presents two deep learning models based on Faster R-CNN EfficientNet B4 and Faster R-CNN EfficientNet B7 for accurate insect pest detection and classification. We validated our approach on the 5, 10, and 15 class insect pest subsets of the IP102 dataset. The findings illustrate that our proposed Faster R-CNN EfficientNet B7 model achieved average classification accuracies of 99.00%, 96.00%, and 93.00% for 5, 10, and 15 class insect pests, outperforming other existing models. Our proposed Faster R-CNN method also requires less computation time to detect insect pests. The investigation reveals that our proposed Faster R-CNN model can be used to identify crop pests, resulting in higher agricultural yield and crop protection.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_47-Insect_Pest_Image_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Multi Resolution Analysis based Data Hiding with Scanned Secrete Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130945</link>
        <id>10.14569/IJACSA.2022.0130945</id>
        <doi>10.14569/IJACSA.2022.0130945</doi>
        <lastModDate>2022-09-30T12:12:39.1270000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Multi-Dimensional wavelet transformation; multi resolution analysis: MRA; image data hiding; scanned secrete image; Daubechies basis function; invisibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Wavelet Multi Resolution Analysis (MRA) based data hiding with scanned secret images is proposed to improve the invisibility of the secret images. A Daubechies biorthogonal basis was adopted as the wavelet, and it was demonstrated that the key image (secret image data) information can be restored with the biorthogonal wavelet. The choice of which biorthogonal wavelet to adopt is also hidden, which further protects the key image information: the horizontal biorthogonal wavelet of the image does not have to be the same as the vertical one, and the insertion position of the secret image data can be freely selected. It is also possible to divide the bit string of the secret image data and insert it into arbitrary high-frequency components, so that the information hiding capacity changes depending on the number of bit strings (amount of information) of the secret image data. Random scanning of the secret image in the public image data is effective for improving invisibility, and it was shown that sharing the scanning method type and the random number initial value only among the parties concerned is useful for improving confidentiality as well as resistance to noise, data compression, and data tampering.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_45-Wavelet_Multi_Resolution_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple Eye Disease Detection using Hybrid Adaptive Mutation Swarm Optimization and RNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130946</link>
        <id>10.14569/IJACSA.2022.0130946</id>
        <doi>10.14569/IJACSA.2022.0130946</doi>
        <lastModDate>2022-09-30T12:12:39.1270000+00:00</lastModDate>
        
        <creator>P. Glaret Subin</creator>
        
        <creator>P. Muthu Kannan</creator>
        
        <subject>Adaptive mutation swarm optimization; fundus image; feature extraction; RNN classifier; standard deviation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The major cause of visual impairment in aged people is age-related eye diseases such as cataract, diabetic retinopathy, and glaucoma. Early detection of eye diseases is necessary for better diagnosis. This paper concentrates on the early identification of various eye disorders such as cataract, diabetic retinopathy, and glaucoma from retinal fundus images. The proposed method focuses on the automated early detection of multiple diseases using hybrid adaptive mutation swarm optimization and regression neural networks (AED-HSR). In the proposed work, the input images are preprocessed and then multiple features such as entropy, mean, color, intensity, standard deviation, and statistics are extracted from the collected data. The extracted features are segmented by using an adaptive mutation swarm optimization (AMSO) algorithm to segment the disease sector from the fundus image. Finally, the features collected are fed to a regression neural network (RNN) classifier to classify each fundus image as normal or abnormal. If the classifier output is abnormal, the image is further classified by the corresponding disease in terms of cataract, glaucoma, and diabetic retinopathy, which improves the accuracy of detection and classification. Ultimately, the results of the classifiers are evaluated by several performance analyses, and the viability of structural and functional features is considered. The proposed system predicts the type of disease with an accuracy of 0.9808, specificity of 0.9934, sensitivity of 0.9803, and F1 score of 0.9861.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_46-Multiple_Eye_Disease_Detection_using_Hybrid_Adaptive_Mutation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Relationship between the Personality Traits and Drug Consumption (Month-based user Definition) using Rough Sets Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130944</link>
        <id>10.14569/IJACSA.2022.0130944</id>
        <doi>10.14569/IJACSA.2022.0130944</doi>
        <lastModDate>2022-09-30T12:12:39.1100000+00:00</lastModDate>
        
        <creator>Manasik M. Nour</creator>
        
        <creator>H. A. Mohamed</creator>
        
        <creator>Sumayyah I. Alshber</creator>
        
        <subject>Classification; personality traits; five factor model; rules extraction; drug abuse detection; rough sets theory; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>There is no doubt that the use of drugs has significant consequences for society: it introduces risk into human life and causes earlier mortality and morbidity. As conscientious members of society, we must act to protect young minds from life-threatening addiction. Owing to the computational complexity of wrapper approaches, the poor performance of filtering techniques, and the classifier dependency of embedded approaches, artificial intelligence and machine learning systems can provide useful tools for raising the prediction rate of drug users. Recently, psychologists endorsed the Five Factor Model (FFM) of personality traits for understanding human individual differences. The aim of this work is to propose a rough sets theory based method to investigate the relationship between drug user/non-user status (month-based user definition) and personality traits. The data of five factor personality profiles, impulsivity, sensation-seeking, and biographical information of users of 21 different types of legal and illegal drugs are used to fetch all reducts, and finally a set of classification rules is created to predict the drug user/non-user (month-based user definition). The outcomes demonstrate the novelty of the current work, which can be summarized as follows: the set of generalized classification rules, expressed with logic functions, builds a knowledge base with excellent accuracy to analyze drug misuse successfully and may be worthy in many applications.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_44-Analyzing_the_Relationship_between_the_Personality_Traits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimally Allocating Ambulances in Delhi using Mutation based Shuffled Frog Leaping Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130942</link>
        <id>10.14569/IJACSA.2022.0130942</id>
        <doi>10.14569/IJACSA.2022.0130942</doi>
        <lastModDate>2022-09-30T12:12:39.0970000+00:00</lastModDate>
        
        <creator>Zaheeruddin</creator>
        
        <creator>Hina Gupta</creator>
        
        <subject>Ambulance allocation; ambulance service; emergency medical service; shuffled frog leaping algorithm; mutation based shuffled frog leaping algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This paper presents a reliable and competent evolutionary-based approach for improving the response time of Emergency Medical Service (EMS) by efficiently allocating ambulances at the base stations. As the prime objective of EMS is to save people&#39;s lives by providing them with timely assistance, thus increasing the chances of a person&#39;s survivability, this paper has undertaken the problem of ambulance allocation. The work has been implemented using the proposed mutation-based Shuffled Frog Leaping Algorithm (mSFLA) to provide an optimal allocation plan. The authors have altered the basic SFLA using the concept of mutation to improve the quality of the solution obtained and avoid being trapped in local optima. Considering a set of assumptions, the new algorithm has been applied for allocating 50 ambulances among 11 base stations in Southern Delhi. The working environment of EMS, which includes stochastic requests, travel time, and dynamic traffic conditions, has been considered to attain accurate results. The work has been implemented in the MATLAB simulation environment to find an optimized allocation plan with a minimum average response time. The authors have reduced the average response time by 12.23% with the proposed algorithm. The paper also compares mSFLA, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) for the stated problem. The algorithms are compared in terms of objective value (average response time), convergence rate, and consistency of repeatability to conclude that mSFLA performs better than the other two algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_42-Optimally_Allocating_Ambulances_in_Delhi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adopting a Digital Transformation in Moroccan Research Structure using a Knowledge Management System: Case of a Research Laboratory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130943</link>
        <id>10.14569/IJACSA.2022.0130943</id>
        <doi>10.14569/IJACSA.2022.0130943</doi>
        <lastModDate>2022-09-30T12:12:39.0970000+00:00</lastModDate>
        
        <creator>Fatima-Ezzahra AIT-BENNACER</creator>
        
        <creator>Abdessadek AAROUD</creator>
        
        <creator>Khalid AKODADI</creator>
        
        <creator>Bouchaib CHERRADI</creator>
        
        <subject>Business process re-engineering; digital transformation; knowledge management system; Moroccan research laboratory; total quality management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Digital Transformation has become one of the most widely discussed topics; many sectors have adopted digital transformation to gain a competitive advantage and to ensure their continuity. Moroccan universities, in their turn, are facing strategic and managerial challenges due to emerging practices related to digital transformation. To address this issue, the proposed work sets out to define the factors that lead us to adopt a digital transformation using SWOT analysis and to apply total quality management techniques to contribute to our research laboratory&#39;s digital transformation, by digitalizing and managing knowledge and processes. The KMS-TQM digital platform has been used to capitalize knowledge and profile the different existing functions, positions, tasks, and referential competencies. Then, we analyzed all the actual processes to propose a business process re-engineering using Bizagi Modeler. The study’s contribution is to standardize all the current processes in the laboratory to help the Doctoral Studies Center successfully carry out the digital transformation. Moreover, the aim is to make all functions and tasks for each position explicit.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_43-Adopting_a_Digital_Transformation_in_Moroccan_Research_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gamification on OTT Platforms: A Behavioural Study for User Engagement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130941</link>
        <id>10.14569/IJACSA.2022.0130941</id>
        <doi>10.14569/IJACSA.2022.0130941</doi>
        <lastModDate>2022-09-30T12:12:39.0800000+00:00</lastModDate>
        
        <creator>Komal Suryavanshi</creator>
        
        <creator>Prasun Gahlot</creator>
        
        <creator>Surya Bahadur Thapa</creator>
        
        <creator>Aradhana Gandhi</creator>
        
        <creator>Ramakrishnan Raman</creator>
        
        <subject>Gamification; user engagement; eye-tracking; OTT (Over-the-top); reward; visual attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This study examines the consumer’s visual attention toward gamification options while watching OTT (Over-the-top) online content. Also, the impact of gamification on user engagement (UE) on the OTT platform was studied using data collected by conducting an eye-tracking experiment and subsequently using a user engagement scale (UES). The study was carried out at the marketing and behavioural lab of a management institute in India using the OTT platform website and a Tobii eye-tracker. Empirical data was collected from 52 respondents aged 23 to 35 years. The relations between Attention to Gamification (AG), Reward Satisfaction (RS), and User Engagement (UE) were studied by running a mediating linear regression analysis. From the results, it was found that respondents were interested both in watching the online content and in exploring the gamification options. The research findings demonstrate that Reward Satisfaction (RS) acted as a mediating factor in the relation between Attention to Gamification (AG) and User Engagement (UE). This study adds to the literature on consumer engagement towards gamification on the OTT platform, where the literature is still limited. Future research could consider mobile apps as a platform to undertake the study. This study aimed to empirically test the effect of AG on UE with the involvement of RS as a mediator. The study is the first of its type to use eye-tracking data to understand the impact of gamification on the OTT platform.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_41-Gamification_on_OTT_Platforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning and Classification Algorithms for COVID-19 Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130940</link>
        <id>10.14569/IJACSA.2022.0130940</id>
        <doi>10.14569/IJACSA.2022.0130940</doi>
        <lastModDate>2022-09-30T12:12:39.0630000+00:00</lastModDate>
        
        <creator>Mohammed Sidheeque</creator>
        
        <creator>P. Sumathy</creator>
        
        <creator>Abdul Gafur. M</creator>
        
        <subject>Deep Learning; COVID-19; classification; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The imaging modalities of chest X-rays and computed tomography (CT) are commonly utilized to quickly and accurately diagnose COVID-19. Due to time constraints and human error, it is exceedingly difficult to manually identify the infection using radio imaging. COVID-19 identification is being mechanized and improved with the use of artificial intelligence (AI) tools that have already shown promise. This study employs the following methodology: the chest images are pre-processed by equalizing the histogram, sharpening, and so on. The transformed chest images are then passed through shallow and high-level feature mapping over the backbone network. To further improve the classification performance of the convolutional neural network, the model applies a self-attention mechanism to the feature maps. Numerous simulations show that CT image classification and augmentation may be accomplished with higher efficiency and flexibility using the Inception-ResNet convolutional neural network than with traditional segmentation methods. The experiment illustrates the association between model accuracy, model loss, and epoch. Inception-ResNet&#39;s statistical measurement results are 98%, 91%, and 91%.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_40-Deep_Learning_and_Classification_Algorithms_for_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Agglomerative Hierarchical Clustering Method to Examine Human Factors in Indonesian Aviation Accidents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130938</link>
        <id>10.14569/IJACSA.2022.0130938</id>
        <doi>10.14569/IJACSA.2022.0130938</doi>
        <lastModDate>2022-09-30T12:12:39.0500000+00:00</lastModDate>
        
        <creator>Rossi Passarella</creator>
        
        <creator>Gulfi Oktariani</creator>
        
        <creator>Dedy Kurniawan</creator>
        
        <creator>Purwita Sari</creator>
        
        <subject>Aviation accidents data; pilot’s licenses; flying hours; human factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This study aims to provide a comprehensive source of knowledge regarding aviation accidents in Indonesia caused by human factors, which are the most significant among the causative elements, requiring a detailed assessment of accidents resulting from pilot and co-pilot faults while operating the aircraft. The KNKT website database is still in the form of accident reports. To this end, information was retrieved from 23 years of historical data on accidents caused by human factors, and the data were analyzed using a clustering approach to gain insight into the relationship between total flying hours and pilot licenses. The data analysis revealed that, in general, the aircraft operators complied with the CASR standards.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_38-Using_the_Agglomerative_Hierarchical_Clustering_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Crime Detection and Diminution in Digital Forensics (CD3F)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130939</link>
        <id>10.14569/IJACSA.2022.0130939</id>
        <doi>10.14569/IJACSA.2022.0130939</doi>
        <lastModDate>2022-09-30T12:12:39.0500000+00:00</lastModDate>
        
        <creator>Arpita Singh</creator>
        
        <creator>Sanjay K. Singh</creator>
        
        <creator>Hari Kiran Vege</creator>
        
        <creator>Nilu Singh</creator>
        
        <subject>Cyber-crime; digital forensics; digital evidence; data analysis; security and privacy; cyber-attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Cyber-attacks have become one of the world&#39;s most serious issues. Every day, they wreak serious financial harm on governments and people. As cyber-attacks become more common, so does cyber-crime. Identifying cyber-crime perpetrators and understanding attack tactics are critical in the battle against crime and criminals. Cyber-attack detection and prevention are difficult undertakings. Researchers have lately developed security models and made forecasts using artificial intelligence technologies to solve these concerns. In the literature, authors have explained numerous ways of predicting crime; they have, on the other hand, difficulty forecasting cyber-crime and cyber-attack strategies. In this paper, the authors propose a digital forensic investigation procedure that deals with cyber-crime. In this investigation procedure, the authors explain digital forensics techniques for ensuring that digital evidence is located, collected, preserved, evaluated, and reported in such a way that the evidence&#39;s integrity is preserved. These sequential digital forensic stages constitute a standard and accepted digital forensic investigation procedure, and each phase is influenced by sequential occurrences, with each event relying on tasks.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_39-A_Framework_for_Crime_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting University Student Retention using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130937</link>
        <id>10.14569/IJACSA.2022.0130937</id>
        <doi>10.14569/IJACSA.2022.0130937</doi>
        <lastModDate>2022-09-30T12:12:39.0330000+00:00</lastModDate>
        
        <creator>Samer M. Arqawi</creator>
        
        <creator>Eman Akef Zitawi</creator>
        
        <creator>Anees Husni Rabaya</creator>
        
        <creator>Basem S. Abunasser</creator>
        
        <creator>Samy S. Abu-Naser</creator>
        
        <subject>Artificial intelligence; machine learning; deep learning; retention; student; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Despite the advancement in the field of Artificial Intelligence, there is still room for enhancement of university student retention. The main objective of this study is to assess the feasibility of using Artificial Intelligence techniques such as deep and machine learning procedures to predict university student retention. In this study, a variable assessment is carried out on a dataset collected from the Kaggle repository. The performance of twenty supervised machine learning algorithms and one deep learning algorithm is assessed. All algorithms were trained using 10 variables from 1100 records of former university student registrations. The top performing algorithm after hyper-parameter tuning was the NuSVC classifier. Therefore, we were able to use the current dataset to create supervised Machine Learning (ML) and Deep Learning (DL) models for predicting student retention, with an F1-score of 90.32 percent for ML and 93.05 percent for the proposed DL algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_37-Predicting_University_Student_Retention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tissue and Tumor Epithelium Classification using Fine-tuned Deep CNN Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130936</link>
        <id>10.14569/IJACSA.2022.0130936</id>
        <doi>10.14569/IJACSA.2022.0130936</doi>
        <lastModDate>2022-09-30T12:12:39.0170000+00:00</lastModDate>
        
        <creator>Anju T E</creator>
        
        <creator>S. Vimala</creator>
        
        <subject>Colorectal cancer; deep learning; CNN; tumor epithelium; Alexnet; GoogLeNet; Inceptionv3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The field of Digital Pathology (DP) has become more interested in automated tissue phenotyping in recent years. Tissue phenotyping may be used to identify colorectal cancer (CRC) and distinguish various cancer types. The information needed to construct automated tissue phenotyping systems has been made available by the introduction of Whole Slide Images (WSIs). One of the typical pathological diagnosis duties for pathologists is the histopathological categorization of epithelial tumors. Artificial intelligence (AI) based computational pathology approaches would be extremely helpful in reducing pathologists&#39; ever-increasing workloads, particularly in areas where access to pathological diagnosis services is limited. The initial goal is to investigate several deep learning models for categorizing images of tumor epithelium from histology. The varying accuracy ratings achieved by the deep learning models on the same database demonstrated that additional elements like pre-processing, data augmentation, and transfer learning techniques might affect the models&#39; capacity to attain better accuracy. The second goal of this publication is to reduce the time taken to classify the tissue and tumor epithelium. The final goal is to examine and fine-tune the most recent models that have received little to no attention in earlier research. These models were validated on the histology Kather CRC image database&#39;s nine classes (CRC-VAL-HE-7K, NCT-CRC-HE-100K). To identify and recommend the most cutting-edge models for each categorization, these models were contrasted with those from earlier research. The performance of the proposed preprocessing workflow and fine-tuned Deep CNN models (Alexnet, GoogLeNet, and Inceptionv3) is greater compared to the prevalent methods.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_36-Tissue_and_Tumor_Epithelium_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Fire Detection using Color Probability Segmentation and DenseNet Model for Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130935</link>
        <id>10.14569/IJACSA.2022.0130935</id>
        <doi>10.14569/IJACSA.2022.0130935</doi>
        <lastModDate>2022-09-30T12:12:39.0030000+00:00</lastModDate>
        
        <creator>Faisal Dharma Adhinata</creator>
        
        <creator>Nur Ghaniaviyanto Ramadhan</creator>
        
        <subject>Fire detection; color segmentation; GMM-EM; DenseNet; real time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Forests are outdoor environments rarely visited by the surrounding community, so fires are not immediately handled when they occur. Therefore, surveillance using cameras is needed to detect fire hotspots in the forest. This study aims to detect hotspots through video data. As is known, fire has a variety of colors, ranging from yellow to reddish. The segmentation process requires a method that can recognize various fire colors to obtain candidate fire object areas in the video frame. The methods used for the color segmentation process are the Gaussian Mixture Model (GMM) and Expectation–Maximization (EM). The segmentation results are candidate fire areas; the experiment used the value K=4. Each fire object candidate must then be verified as either a fire object or another object. In the feature extraction stage, this research uses the DenseNet-169 or DenseNet-201 models. In this study, various color spaces were tested, namely RGB, HSV, and YCbCr. The test results show that the RGB color space produces the most optimal training accuracy. This RGB color configuration is used for testing with video data. The test results show that the true positive and false negative values are quite good, at 98.69% and 1.305%, respectively. This video data processing achieves an average of 14.43 fps. It can therefore be said that this combination of methods can be used to process real-time data in fire detection case studies.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_35-Real_Time_Fire_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student’s Performance Prediction based on Personality Traits and Intelligence Quotient using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130934</link>
        <id>10.14569/IJACSA.2022.0130934</id>
        <doi>10.14569/IJACSA.2022.0130934</doi>
        <lastModDate>2022-09-30T12:12:39.0030000+00:00</lastModDate>
        
        <creator>Samar El-Keiey</creator>
        
        <creator>Dina ElMenshawy</creator>
        
        <creator>Ehab Hassanein</creator>
        
        <subject>Prediction; student performance; machine learning; personality; intelligence quotient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Most life activities that people perform depend on their unique characteristics. Personal characteristics vary across people, so they perform tasks in different ways based on their skills. People have different mental, psychological, and behavioral features that affect most life activities. This is the same case with students at various educational levels. Students have different features that affect their academic performance. The academic score is the main indicator of the student’s performance. However, other factors such as personality features, intelligence level, and basic personal data can have a great influence on the student’s performance. This means that the academic score is not the only indicator that can be used in predicting students’ performance. Consequently, an approach based on personal data, personality features, and intelligence quotient is proposed to predict the performance of university undergraduates. Five machine learning techniques were used in the proposed approach. In order to evaluate the performance of the proposed approach, a real student dataset was used, and various performance measures were computed. Several experiments were performed to determine the impact of various features on the student’s performance. The proposed approach gave promising results when tested on the dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_34-Students_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An End-to-End Big Data Deduplication Framework based on Online Continuous Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130933</link>
        <id>10.14569/IJACSA.2022.0130933</id>
        <doi>10.14569/IJACSA.2022.0130933</doi>
        <lastModDate>2022-09-30T12:12:38.9870000+00:00</lastModDate>
        
        <creator>Widad Elouataoui</creator>
        
        <creator>Imane El Alaoui</creator>
        
        <creator>Saida El Mendili</creator>
        
        <creator>Youssef Gahi</creator>
        
        <subject>Big data deduplication; online continual learning; big data; entity resolution; record linkage; duplicates detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>While big data benefits are numerous, most of the collected data is of poor quality and, therefore, cannot be effectively used as it is. One of the leading big data quality challenges is data duplication. Indeed, the gathered big data are usually messy and may contain duplicated records. The process of detecting and eliminating duplicated records is known as Deduplication, Entity Resolution, or Record Linkage. Data deduplication has been widely discussed in the literature, and multiple deduplication approaches were suggested. However, few efforts have been made to address deduplication issues in the Big Data context. Also, the existing big data deduplication approaches do not handle the case of the decreasing performance of the deduplication model during serving. In addition, most current methods are limited to duplicate detection, which is part of the deduplication process. Therefore, we aim through this paper to propose an End-to-End Big Data Deduplication Framework based on a semi-supervised learning approach that outperforms the existing big data deduplication approaches with an F-score of 98.21%, a Precision of 98.24%, and a Recall of 96.48%. Moreover, the suggested framework encompasses all data deduplication phases, including data preprocessing and preparation, automated data labeling, duplicate detection, data cleaning, and an auditing and monitoring phase. This last phase is based on an online continual learning strategy for big data deduplication that allows addressing the decreasing performance of the deduplication model during serving. The obtained results have shown that the suggested continual learning strategy increased the model accuracy by 1.16%. Furthermore, we apply the proposed framework to three different datasets and compare its performance against the existing deduplication models. Finally, the results are discussed, conclusions are made, and future work directions are highlighted.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_33-An_End_to_End_Big_Data_Deduplication_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating Video Visual Storyboard with Static Video Summarization using Fractional Energy of Orthogonal Transforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130931</link>
        <id>10.14569/IJACSA.2022.0130931</id>
        <doi>10.14569/IJACSA.2022.0130931</doi>
        <lastModDate>2022-09-30T12:12:38.9700000+00:00</lastModDate>
        
        <creator>Ashvini Tonge</creator>
        
        <creator>Sudeep D. Thepade</creator>
        
        <subject>Keyframe; orthogonal transform; VSUMM; video visual storyboard; video summarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The overwhelming number of video uploads and downloads has made it incredibly difficult to find, gather, and archive videos. A static video summarization technique highlights an original video&#39;s significant points through a set of static keyframes forming a video visual storyboard. The video visual storyboards are created as static video summaries that solve video processing-related issues such as storage and retrieval. In this paper, a strategy for effectively summarizing static videos using feature vectors, which are fractional coefficients of the transformed video frames, is proposed and evaluated. Four popular orthogonal transforms are deployed for generating feature vectors of video frames. The fractional coefficients of transformed video frames, taken as 25 percent, 6.25 percent, and 1.5625 percent of the full 100 percent of transformed coefficients, are considered to form video visual storyboards. The proposed method uses the benchmark video datasets Open Video Project (OVP) and SumMe, which contain user summaries (storyboards), to validate the performance. The video summaries created using the proposed method are evaluated using percentage accuracy and matching rate.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_31-Creating_Video_Visual_Storyboard_with_Static_Video_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Denoising of Impulse Noise using Partition-Supported Median, Interpolation and DWT in Dental X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130932</link>
        <id>10.14569/IJACSA.2022.0130932</id>
        <doi>10.14569/IJACSA.2022.0130932</doi>
        <lastModDate>2022-09-30T12:12:38.9700000+00:00</lastModDate>
        
        <creator>Mohamed Shajahan</creator>
        
        <creator>Siti Armiza Mohd Aris</creator>
        
        <creator>Sahnius Usman</creator>
        
        <creator>Norliza Mohd Noor</creator>
        
        <subject>Salt and pepper noise; impulse noise; X-ray noise removal; X-ray teeth image quality enhancement; dental X-ray noise reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Impulse noise often damages human dental X-ray images, leading to improper dental diagnosis. Hence, impulse noise removal in dental images is essential for a better subjective evaluation of human teeth. The existing denoising methods suffer from poor restoration performance and limited capacity to handle massive noise levels. This paper suggests a novel denoising scheme called &quot;Noise Removal using Partition-supported Median, Interpolation, and Discrete Wavelet Transform (NRPMID)&quot; to address these issues. To effectively reduce salt and pepper noise up to a noise corruption level of 98.3 percent, the method is applied over the surface of dental X-ray images using techniques such as the mean filter, the median filter, bi-linear interpolation, bi-cubic interpolation, Lanczos interpolation, and the Discrete Wavelet Transform (DWT). In terms of PSNR, IEF, and other metrics, the proposed noise removal algorithm greatly enhances the quality of dental X-ray images.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_32-Denoising_of_Impulse_Noise_using_Partition_Supported_Median.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CBT4Depression: A Cognitive Behaviour Therapy (CBT) Therapeutic Game to Reduce Depression Level among Adolescents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130930</link>
        <id>10.14569/IJACSA.2022.0130930</id>
        <doi>10.14569/IJACSA.2022.0130930</doi>
        <lastModDate>2022-09-30T12:12:38.9570000+00:00</lastModDate>
        
        <creator>Norhana Yusof</creator>
        
        <creator>Nazrul Azha Mohamed Shaari</creator>
        
        <creator>Eizwan Hamdie Yusoff</creator>
        
        <subject>Therapeutic; game; depression; adolescents; cognitive behavior therapy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Dropping out of depression treatment commonly occurs in current psychotherapy practice. Adolescents often find it difficult to express their thoughts and feelings clearly due to their developmental constraints. They also have trouble recognising their behaviours as unhealthy or problematic. The use of therapeutic games in depression treatment among adolescents can enhance the engagement level and, indirectly, reduce the dropout rate among adolescents. Therefore, this study aimed to improve engagement levels and reduce depression levels among adolescents with depression by designing a therapeutic game. A prototype named CBT4Depression was developed in this study. A quasi-experimental study was conducted to evaluate the developed therapeutic game, and 115 adolescents were recruited to measure their depression level using CBT4Depression. Based on the findings from the evaluation process, it can be concluded that CBT4Depression succeeded in engaging adolescents and reducing their depression level.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_30-CBT4Depression_A_Cognitive_Behaviour_Therapy_CBT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Interactive Multimedia e-Learning in TVET Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130929</link>
        <id>10.14569/IJACSA.2022.0130929</id>
        <doi>10.14569/IJACSA.2022.0130929</doi>
        <lastModDate>2022-09-30T12:12:38.9400000+00:00</lastModDate>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <creator>Nur Atiqah Zaini</creator>
        
        <creator>Dayana Daiman</creator>
        
        <subject>e-Learning; interactive multimedia; learning style; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Malaysia is focused on the development and use of technologies among consumers. Thus, technological innovations are used in the adaptation of online learning to educate students, as well as to enhance the teaching and learning process in Technical and Vocational Education and Training (TVET) institutions. There is a need to expose students to the online learning revolution, which conceptualises using computerised systems to facilitate the learning process. However, the COVID-19 outbreak disrupted the academic year across the country. Due to the unusual circumstances related to the pandemic, the Malaysian government urged all academic institutions to conduct online teaching and learning. Thus, an e-Learning system, known as SpmiILP, was designed and developed for an interactive multimedia course to encourage online interaction among students and lecturers, as well as to enhance human learning and cognitive development. Essential elements such as the students&#39; learning styles and user experience are emphasised to engage them in learning effectively. The usability test showed that the developed e-Learning system has a positive influence and offers potential contributions to TVET students in their learning processes.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_29-Use_of_Interactive_Multimedia_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SQrum: An Improved Method of Scrum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130928</link>
        <id>10.14569/IJACSA.2022.0130928</id>
        <doi>10.14569/IJACSA.2022.0130928</doi>
        <lastModDate>2022-09-30T12:12:38.9230000+00:00</lastModDate>
        
        <creator>Najihi Soukaina</creator>
        
        <creator>Merzouk Soukaina</creator>
        
        <creator>Marzak Abdelaziz</creator>
        
        <subject>Agile project management; IT; OMG; meta-object facility; MOF; metamodel; scrum; SQrum; quality assurance; QA; quality management; QM; software development project</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Software systems have a major impact on many aspects of personal and professional life. Safety-critical applications, such as production line controls, automotive operations, and process industry controls, rely significantly on software systems. In these applications, software failure may result in bodily injury or death. The proper operation of software is therefore essential to the safety and well-being of individuals and businesses, and software quality assurance is of paramount relevance in the software business today. In recent years, Agile Project Management, and particularly Scrum, have gained popularity as a method of dealing with &quot;VUCA&quot; business environments, which are characterized by rising Volatility, Uncertainty, Complexity, and Ambiguity. This paper contributes to the software development body of knowledge by proposing a metamodel of Scrum quality assurance, named SQrum (‘SQ’ from Software Quality and ‘rum’ from Scrum). Our objective is to make Scrum more efficient and reliable and to assist enterprises in undertaking quality assurance activities while considering agile practices and values.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_28-SQrum_An_Improved_Method_of_Scrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Multitier Network Model for MRI Brain Disease Prediction using Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130926</link>
        <id>10.14569/IJACSA.2022.0130926</id>
        <doi>10.14569/IJACSA.2022.0130926</doi>
        <lastModDate>2022-09-30T12:12:38.9100000+00:00</lastModDate>
        
        <creator>N. Ravinder</creator>
        
        <creator>Moulana Mohammed</creator>
        
        <subject>Brain disease; learning approaches; ground truth value; feature learning; global and local feature analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Brain disease prognosis is considered a hot research topic in which researchers intend to predict the clinical measures of individuals using MRI data to evaluate the pathological stage and identify the progression of the disease. When clinical scores are incomplete, various existing learning-based approaches simply discard the samples without ground-truth scores, which restricts the training data available for building robust and reliable models during the learning process. The major disadvantage of the prior approaches is the adoption of hand-crafted features, as these features are not well-suited for the prediction process. This research concentrates on modelling a weakly supervised multi-tier dense neural network model (ws-MTDNN) for examining the progression of brain disease using the available MRI data. The model can analyze incomplete clinical scores. The preliminary tiers of the network model initially extract the distinctive patches from the MRI to capture the global and local structural features (information), and the successive tiers of the multi-tier dense neural network model perform task-based image feature extraction and prediction to compute the clinical measures. The loss function is adopted so that the available individuals can be examined even in the absence of ground-truth values. The experimentation is done with the available online datasets ADNI-1/2, and the model works effectively with these datasets compared to other approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_26-Effective_Multitier_Network_Model_for_MRI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application based on Hybrid CNN-SVM and PCA-SVM Approaches for Classification of Cocoa Beans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130927</link>
        <id>10.14569/IJACSA.2022.0130927</id>
        <doi>10.14569/IJACSA.2022.0130927</doi>
        <lastModDate>2022-09-30T12:12:38.9100000+00:00</lastModDate>
        
        <creator>AYIKPA Kacoutchy Jean</creator>
        
        <creator>MAMADOU Diarra</creator>
        
        <creator>BALLO Abou Bakary</creator>
        
        <creator>GOUTON Pierre</creator>
        
        <creator>ADOU Kablan J&#233;r&#244;me</creator>
        
        <subject>Support vector machine; convolutional neural network; cocoa beans; principal component analysis; hybrid method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In our study, we propose hybrid Convolutional Neural Network with Support Vector Machine (CNN-SVM) and Principal Component Analysis with Support Vector Machine (PCA-SVM) methods for the classification of cocoa beans obtained by the fermentation of beans collected from cocoa pods after harvest. We also use a convolutional neural network (CNN) and a support vector machine (SVM) separately for the classification operation. In the hybrid model, we use a convolutional network as a feature extractor, and the SVM performs the classification operation. The use of PCA-SVM allowed for a reduction in image size while maintaining the main features, still using the SVM classifier. Radial basis function, linear, and polynomial kernels were used with various control parameters for the SVM, and optimizers such as the Stochastic Gradient Descent (SGD) algorithm, Adam, and RMSprop were used for the CNN softmax classifier. The results showed the robustness of the hybrid CNN-SVM model, which obtained the best score with a value of 98.32%, while the PCA-SVM-based model had a score of 97.65%, both outperforming the standard CNN and SVM classification algorithms. Metrics such as accuracy, recall, F1 score, mean squared error (MSE), and MCC allowed us to consolidate the results obtained from our different experiments.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_27-Application_based_on_Hybrid_CNN_SVM_and_PCA_SVM_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Cloud Connected Indoor Hydroponic System via Multi-factor Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130925</link>
        <id>10.14569/IJACSA.2022.0130925</id>
        <doi>10.14569/IJACSA.2022.0130925</doi>
        <lastModDate>2022-09-30T12:12:38.8930000+00:00</lastModDate>
        
        <creator>Mohamad Khairul Hafizi Rahimi</creator>
        
        <creator>Mohamad Hanif Md Saad</creator>
        
        <creator>Aini Hussain</creator>
        
        <creator>Nurul Maisarah Hamdan</creator>
        
        <subject>Internet of things; intelligent system; remote monitoring; hydroponic; multi-factor authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Nowadays, hydroponic farming systems with Internet of Things (IoT) technology are increasingly becoming a trend among researchers aiming to produce more capable farming devices and remote monitoring systems. However, if such an intelligent system is not controlled securely, it becomes dangerous when the system is hacked. Therefore, the development of a secure indoor hydroponic monitoring device with a multi-factor authentication (MFA) method is proposed. The research aims to develop a secure cloud-connected indoor hydroponic system via multi-factor authentication on the ThingsSentral IoT platform. The developed system comprises an iPhone Operating System (iOS) application, an Arduino node microcontroller unit, and the ThingsSentral web IoT platform. A security software application on iOS phones with MFA techniques is built to authenticate devices before communicating with ThingsSentral.io. Token authentication between ThingsSentral.io and the security software application must be completed before the hydroponic monitoring device can send and receive data. An indoor hydroponic monitoring system device with the MFA security technique has been successfully produced from the study, and an MFA security technique for iOS apps has also been successfully developed. In conclusion, using the MFA technique, this research successfully develops a high-security control and communication system between the field device and the IoT platform. Although the MFA security system developed for this IoT platform requires several steps before data can be sent to the cloud database, users themselves can allow or prohibit a device from operating. Besides, users can also control and monitor the security between the device and the IoT platform during operation.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_25-Secure_Cloud_Connected_Indoor_Hydroponic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Review and Application of Interpretable Deep Learning Model for ADR Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130924</link>
        <id>10.14569/IJACSA.2022.0130924</id>
        <doi>10.14569/IJACSA.2022.0130924</doi>
        <lastModDate>2022-09-30T12:12:38.8770000+00:00</lastModDate>
        
        <creator>Shiksha Alok Dubey</creator>
        
        <creator>Anala A. Pandit</creator>
        
        <subject>Drug safety; adverse drug reactions; early detection; deep learning; interpretable models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Drug safety is a pressing need in today&#39;s healthcare. Minimizing drug toxicity and improving the health of individuals and society is the key objective of the healthcare domain. Drugs are clinically tested in laboratories before being marketed as medicines; nevertheless, unintended and harmful effects of drugs, called Adverse Drug Reactions (ADRs), still occur. The impact of ADRs can range from mild discomfort to more severe health hazards leading to hospitalization and, in some cases, death. Therefore, the objective of this research paper is to design a framework based on which research papers are collected from both the ADR detection and prediction domains. Around 172 research articles were collected from sites such as ResearchGate, PubMed, etc. After applying the elimination criteria, the authors categorized them into ADR detection and prediction themes. Further, common data sources, algorithms, and evaluation metrics were analyzed, and their contribution to their respective domains is stated in terms of percentages. A deep learning framework is also designed and implemented based on the research gaps identified in the existing ADR detection and prediction models. The performance of the deep learning model with two hidden layers was found to be optimal for ADR prediction, and the non-interpretability of the model is addressed using a global surrogate model. The proposed architecture successfully addresses multiple limitations of existing models and also highlights the importance of early detection &amp; prediction of adverse drug reactions in the healthcare industry.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_24-A_Comprehensive_Review_and_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance Evaluation of Transfer Learning VGG16 Algorithm on Various Chest X-ray Imaging Datasets for COVID-19 Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130923</link>
        <id>10.14569/IJACSA.2022.0130923</id>
        <doi>10.14569/IJACSA.2022.0130923</doi>
        <lastModDate>2022-09-30T12:12:38.8770000+00:00</lastModDate>
        
        <creator>Andi Sunyoto</creator>
        
        <creator>Yoga Pristyanto</creator>
        
        <creator>Arief Setyanto</creator>
        
        <creator>Fawaz Alarfaj</creator>
        
        <creator>Naif Almusallam</creator>
        
        <creator>Mohammed Alreshoodi</creator>
        
        <subject>Covid-19; Chest X-Ray; CNN; transfer learning; VGG-16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Early detection of the coronavirus (COVID-19) disease is essential in order to contain the spread of the virus and provide effective treatment. Chest X-rays could be used to detect COVID-19 at an early stage. However, the pathological features of COVID-19 on chest X-rays closely resemble those caused by other viruses. The visual geometry group-16 (VGG16) deep learning algorithm, based on the convolutional neural network (CNN) architecture, is commonly used to detect various pathologies on medical images automatically and may have a role in the detection of COVID-19 on chest X-rays. Therefore, this research aims to determine the robustness of the VGG16 architecture on several chest X-ray databases that vary in terms of size and the number of class labels. Nine publicly available chest X-ray datasets were used to train and test the algorithm. Each dataset had a different number of images, class composition, and interclass proportion. The performance of the architecture was tested using several scenarios, including datasets above and below 5,000 samples, label class variation, and interclass ratio. This study confirmed that VGG16 delivers robust performance on various datasets, achieving an accuracy of up to 97.99%. However, our findings also suggest that the accuracy of the VGG16 algorithm drops drastically on highly imbalanced datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_23-The_Performance_Evaluation_of_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Alumni Data using Data Visualization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130922</link>
        <id>10.14569/IJACSA.2022.0130922</id>
        <doi>10.14569/IJACSA.2022.0130922</doi>
        <lastModDate>2022-09-30T12:12:38.8600000+00:00</lastModDate>
        
        <creator>Nurhanani Izzati Ismail</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Nasiroh Omar</creator>
        
        <subject>Alumni; descriptive analysis; diagnostic analysis; data visualization; exploratory dashboard</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Alumni data are mostly managed through paper-based records and word-processing files. With many alumni graduating each year, these massive data become difficult to handle, and it is hard to look up past alumni data to learn their current situation. Since the data are kept conventionally, there is no communication between the alumni and the faculty. Therefore, we propose a solution that includes alumni information regarding their status in life, visible to the alumni themselves and to individuals in the faculty. This study aims to visualize alumni data from a faculty in a public university through an exploratory dashboard using the identified data visualization techniques. The study adopts a dashboard development process consisting of three major phases: conception, visualization, and finalization. The primary audience is identified and the theme for the dashboard is decided in the conception phase. The primary and support views are then designed, together with the layout, during the visualization phase. At the end of the study, the exploratory dashboard for alumni data using multidimensional and hierarchical data visualization is finalized with interactive elements. The results are interpreted through descriptive and diagnostic analysis. The dashboard is then evaluated, using a convenience sampling technique, to verify its representation. The majority of respondents agreed on the simplicity of the exploratory dashboard and found the amount of data sufficient given the selection of visualization types. The dashboard is beneficial to the university’s administrators, alumni, and the public.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_22-Exploring_Alumni_Data_using_Data_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Campus Quality of Services Analysis of Mobile Wireless Communications Network Signal among Providers in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130921</link>
        <id>10.14569/IJACSA.2022.0130921</id>
        <doi>10.14569/IJACSA.2022.0130921</doi>
        <lastModDate>2022-09-30T12:12:38.8470000+00:00</lastModDate>
        
        <creator>Murizah Kassim</creator>
        
        <creator>Zulfadhli Hisam</creator>
        
        <creator>Mohd Nazri Ismail</creator>
        
        <subject>Quality of services; 4G/LTE; mobile network; wireless communication; RSRP; campus network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Wireless communication is very important in this generation, where 5G internet connectivity is not yet confirmed and 4G communication is still needed. The network in Malaysia is supported by many telecommunication companies, yet Quality of Service is still poorly supported, especially in campus areas. This research presents a performance analysis of Quality of Service for 4G wireless communication among providers in a campus area in Malaysia. A 4G Nemo Outdoor wireless analyzer was used to collect Reference Signal Received Power (RSRP) signal data based on the identified campus road maps. Digi and U-Mobile were identified and compared as the two telecommunications providers in the testing. The identified road maps were analyzed along the routes, with test signals collected while driving. Digi was found to support the mobile broadband network better, showing 1% excellent connections, 29% good connections, and 0% signal loss in the drive areas. The RSRP signal for U-Mobile shows 8% signal loss, with connections provided only at Mid-Cell (43%) and Cell Edge (48%) levels. This concludes that the 4G signal strength in the campus area is of average strength, though some medium signal strength is also identified depending on the road locations. This research is significant for the QoS of mobile network support in a campus area.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_21-Campus_Quality_of_Services_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of Gamification for Students&#39; Engagement in Technical and Vocational Education and Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130920</link>
        <id>10.14569/IJACSA.2022.0130920</id>
        <doi>10.14569/IJACSA.2022.0130920</doi>
        <lastModDate>2022-09-30T12:12:38.8300000+00:00</lastModDate>
        
        <creator>Laily Abu Samah</creator>
        
        <creator>Amirah Ismail</creator>
        
        <creator>Mohammad Kamrul Hasan</creator>
        
        <subject>Technical and vocational education and training (TVET); flipped classroom; engagement; gamification; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The transformation of Technical and Vocational Education and Training (TVET) is prioritized by the national education convention to meet the needs of industry by improving student skills and the quality of related systems. One of these transformations is the practice of blended learning, such as the flipped classroom, to produce better-quality student learning outcomes. However, based on previous studies, it is difficult to maintain student engagement during learning activities, even though blended learning offers some advantages. Therefore, this study proposes the development of a mobile application using gamification as a solution to enhance student participation. This paper adopts the design and development research (DDR) approach with an adaptation of the ADDIE model to build a learning content prototype. It involves five phases: analysis, design, development, implementation, and evaluation. The study participants consisted of two groups of first-semester students of the Interactive Multimedia course from two different TVET institutions, who were split into a control group and an experimental group. The experimental group used the gamified prototype, whereas the control group did not. The study evaluation uses two instruments: a test to compare students&#39; understanding in both groups and an activity log to track the experimental group&#39;s use of the prototype. According to the findings, gamification during learning activities can increase student engagement by boosting performance, as shown by a more significant pre- and post-test mean score difference, and by creating a positive learning experience. Additionally, mobile applications with the gamification concept can be employed extensively in various TVET courses to encourage student learning performance.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_20-The_Effectiveness_of_Gamification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Criteria and Guideline for Dyslexic Intervention Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130919</link>
        <id>10.14569/IJACSA.2022.0130919</id>
        <doi>10.14569/IJACSA.2022.0130919</doi>
        <lastModDate>2022-09-30T12:12:38.8300000+00:00</lastModDate>
        
        <creator>Noraziah ChePa</creator>
        
        <creator>Nur Azzah Abu Bakar</creator>
        
        <creator>Laura Lim Sie-Yi</creator>
        
        <subject>Dyslexic therapy games; game-based intervention; specific learning disorder; guideline for dyslexia games; dyslexia intervention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The utilization of game-based interventions is growing as a result of technological advancements, and it has been shown to be effective in the treatment of dyslexia and other medical conditions. Games are typically viewed as activities having the essential components of challenge, incentive, and reward. Games were originally created for pleasure, and they can make dyslexic teaching and learning more enjoyable and exciting. Although there are numerous applications available for treating dyslexic children, the inclusion of games and their standards in those applications has not yet been established. Therefore, a standard design guideline needs to be formulated for designing and developing games specifically for dyslexic children. This article proposes a design guideline for dyslexic intervention games. Two methods have been employed, interviews and systematic literature review (SLR), to discover the characteristics of dyslexic games. The first set of criteria was developed through interviews with stakeholders who are directly associated with dyslexic children. Scopus, the ACM digital library, EBSCO-host, Wiley, and Web of Science (WOS) are the five primary databases used in the SLR. 50 articles out of the 551 that were initially screened from the five primary databases qualified to be studied based on the criteria. Only 23 publications could be selected for the study after further screening, which led to the creation of a second set of criteria. These two sets of criteria were thoroughly analyzed, combined, and formulated as a guideline which comprises four main categories: device and platform, interface, game features, and gameplay. The guideline consists of guidance for designing and developing dyslexic therapy games with the purpose of assisting dyslexic children to read. The guideline is believed to be beneficial to many parties, especially educational game developers, therapists, and educationists who are dealing with intervention for dyslexic children. This study is aligned with and significant to Sustainable Development Goals (SDG) three and four, Good Health and Well-being and Quality Education, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_19-Criteria_and_Guideline_for_Dyslexic_Intervention_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Diabetes Types using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130918</link>
        <id>10.14569/IJACSA.2022.0130918</id>
        <doi>10.14569/IJACSA.2022.0130918</doi>
        <lastModDate>2022-09-30T12:12:38.8130000+00:00</lastModDate>
        
        <creator>Oyeranmi Adigun</creator>
        
        <creator>Folasade Okikiola</creator>
        
        <creator>Nureni Yekini</creator>
        
        <creator>Ronke Babatunde</creator>
        
        <subject>Machine learning; diabetes mellitus; predictive algorithm; correlation map; confusion matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Machine learning algorithms have aided health workers (including doctors) in the processing, analysis, and diagnosis of medical problems, as well as in the detection of disease patterns and other patient data. Diabetes mellitus (DM), commonly referred to as diabetes, is a group of metabolic disorders characterized by high blood glucose levels over a prolonged period. It is a long-term illness that is a great threat to humanity and causes death. Most of the existing machine learning algorithms used for the classification and prediction of diabetes incorporate redundant or inessential medical procedures that cause complications and waste time and resources. The absence of a correct diagnosis scheme, deficiency of economic means, and a general lack of awareness represent the main reasons for these negative effects. Hence, preventing the sickness altogether through early detection could considerably reduce the burden on the economy and aid the patient in diabetes management. This study developed a diabetes classification system using machine learning techniques that minimizes the aforementioned drawbacks in diabetes prediction systems. Decision tree classifiers, logistic regression, random forest, and support vector machines are the predictive algorithms that were tested in this paper. A dataset of 1009 records was obtained from the Diabetes dataset of Abelvikas, Data World. We used a confusion matrix to visualize the performance evaluation of the classifiers. The experimental results show that all four machine learning algorithms perform well; however, Random Forest outperforms the other three, with a prediction accuracy of 100% and a better prediction level when compared with the others and with existing work.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_18-Classification_of_Diabetes_Types_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning for Medicinal Plant Leaves Recognition: A Comparison with and without a Fine-Tuning Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130916</link>
        <id>10.14569/IJACSA.2022.0130916</id>
        <doi>10.14569/IJACSA.2022.0130916</doi>
        <lastModDate>2022-09-30T12:12:38.8000000+00:00</lastModDate>
        
        <creator>Vina Ayumi</creator>
        
        <creator>Ermatita Ermatita</creator>
        
        <creator>Abdiansah Abdiansah</creator>
        
        <creator>Handrie Noprisson</creator>
        
        <creator>Yuwan Jumaryadi</creator>
        
        <creator>Mariana Purba</creator>
        
        <creator>Marissa Utami</creator>
        
        <creator>Erwin Dwika Putra</creator>
        
        <subject>Medicinal leaf plant; transfer learning; deep learning; phytomedicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Plant leaves are a common source of information for determining plant species. Using the dataset that has been collected, we propose the transfer learning models VGG16, VGG19, and MobileNetV2 to examine the distinguishing features that identify medicinal plant leaves. We also improved the models using a fine-tuning strategy and compared the performance of the transfer learning models with and without fine-tuning. Several steps were used to conduct this study, including data collection, data preparation, feature extraction, classification, and evaluation. The distribution of training and validation data is 80% for training and 20% for validation, with 1500 images of thirty species. The testing data consisted of a total of 43 images of 30 species, with each species class consisting of 1-3 images. MobileNetV2 with fine-tuning achieved the best validation accuracy, 96.02%, as well as the best testing accuracy, 81.82%.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_16-Transfer_Learning_for_Medicinal_Plant_Leaves_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Random Splitting and Cross Validation for Indonesian Opinion Mining using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130917</link>
        <id>10.14569/IJACSA.2022.0130917</id>
        <doi>10.14569/IJACSA.2022.0130917</doi>
        <lastModDate>2022-09-30T12:12:38.8000000+00:00</lastModDate>
        
        <creator>Mariana Purba</creator>
        
        <creator>Ermatita Ermatita</creator>
        
        <creator>Abdiansah Abdiansah</creator>
        
        <creator>Handrie Noprisson</creator>
        
        <creator>Vina Ayumi</creator>
        
        <creator>Hadiguna Setiawan</creator>
        
        <creator>Umniy Salamah</creator>
        
        <creator>Yadi Yadi</creator>
        
        <subject>Random splitting; cross validation; machine learning; Indonesian text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Opinion mining has been a prominent topic of research in Indonesia; however, there are still many unanswered questions. The majority of past research has focused on machine learning methods and models, and a comparison of the effects of random splitting and cross-validation on performance for Indonesian text data is required. The goal of this project is to use machine learning models to conduct opinion mining on Indonesian text data using random splitting and cross-validation approaches. This research consists of five stages: data collection, pre-processing, feature extraction, training &amp; testing, and evaluation. Based on the experimental results, the TF-IDF feature is better than the Count-Vectorizer (CV) for Indonesian text. The best accuracy results, reaching 81%, are obtained by using TF-IDF as a feature and a Support Vector Machine (SVM) as a classifier with cross-validation. The experimental results also show that implementing cross-validation can improve accuracy compared to implementing random splitting.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_17-Effect_of_Random_Splitting_and_Cross_Validation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition under Illumination based on Optimized Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130915</link>
        <id>10.14569/IJACSA.2022.0130915</id>
        <doi>10.14569/IJACSA.2022.0130915</doi>
        <lastModDate>2022-09-30T12:12:38.7830000+00:00</lastModDate>
        
        <creator>Napa Lakshmi</creator>
        
        <creator>Megha P Arakeri</creator>
        
        <subject>Face recognition; illumination; neural network; robust principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Face recognition is a significant area of pattern recognition and computer vision research. Illumination in face recognition is an obvious yet challenging problem in pattern matching. Recent research has introduced machine learning algorithms to solve illumination problems in both indoor and outdoor scenarios. The major challenge in machine learning is the lack of classification accuracy. Thus, a novel Optimized Neural Network Algorithm (ONNA) is used to solve the aforementioned drawback. First, we propose a novel Weight Transfer Ideal Filter (WTIF), employed for pre-processing to remove dark spots and shadows in an image by normalizing the low and high frequencies of illumination. Secondly, Robust Principal Component Analysis (RPCA) is employed to efficiently extract features based on image area representation. These features are given as input to ONNA, which classifies the given input image under illumination. Thus, face recognition is achieved under various illumination conditions. Our approach is analyzed and compared with existing approaches such as Support Vector Machine (SVM) and Random Forest (RF); ONNA performs better in terms of higher accuracy and a lower error rate.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_15-Face_Recognition_under_Illumination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HelaNER 2.0: A Novel Deep Neural Model for Named Entity Boundary Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130914</link>
        <id>10.14569/IJACSA.2022.0130914</id>
        <doi>10.14569/IJACSA.2022.0130914</doi>
        <lastModDate>2022-09-30T12:12:38.7670000+00:00</lastModDate>
        
        <creator>Y. H. P. P Priyadarshana</creator>
        
        <creator>L Ranathunga</creator>
        
        <subject>Computational linguistics; deep neural networks; natural language processing; named entity boundary detection; named entity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Named entity recognition (NER) is a sequential labelling task that categorizes textual nuggets into specific types. Named entity boundary detection is a prominent research area within the NER domain that has been heavily adapted for information extraction, event extraction, information retrieval, sentiment analysis, etc. Named entities (NEs) can be flat or nested in nature, and limited research attempts have been made at nested NE boundary detection. NER in low-resource settings has been identified as a current trend. This research work has been scoped to unveil the uniqueness of NE boundary detection based on Sinhala-related content extracted from social media. The prime objective of this research attempt is to enhance the approach of named entity boundary detection. Considering the low-resource setting, as the initial step, the linguistic patterns, complexity matrices, and structures of the extracted social media statements were analyzed further. A dedicated corpus of more than 100,000 tuples of Sinhala-related social media content was annotated by an expert panel. As for the scientific novelties, the NE head word detection loss function, which was introduced in HelaNER 1.0, has been further improved, and NE boundary detection has been further enhanced through tuning the stack pointer networks. Additionally, NE linking has been improved as a by-product of the previously mentioned enhancements. Various experiments were conducted and evaluated, and the outcome revealed that our enhancements achieve state-of-the-art performance over the existing baselines.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_14-HelaNER_2.0_A_Novel_Deep_Neural_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Prediction Applied to Global Software Development using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130913</link>
        <id>10.14569/IJACSA.2022.0130913</id>
        <doi>10.14569/IJACSA.2022.0130913</doi>
        <lastModDate>2022-09-30T12:12:38.7530000+00:00</lastModDate>
        
        <creator>Hossam Hassan</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <creator>Amr Ghoneim</creator>
        
        <subject>Global software development; distributed development; risk prediction model; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Software companies aim to develop high-quality software projects with the best global resources at the best cost. To achieve this, global software development (GSD) should be used, an approach in which work on projects is distributed across multiple locations; this is also known as distributed development. When companies attempt to implement GSD, they face numerous challenges owing to the nature of GSD and its differences from traditional methods. The objectives of this study were to identify the top software development factors that affect the overall success or failure of a software project, using exploratory data analysis to find relationships between these factors, and to develop and compare risk prediction models that use machine learning classification techniques such as logistic regression, decision tree, random forest, support vector machine, K-nearest neighbors, and Naive Bayes. The findings of this study are as follows: in GSD, the top 18 factors influencing the software project are listed; and experiments show that the logistic regression and random forest models provide the best results, with accuracies of 89% and 85%, respectively, and areas under the curve of 73% and 71%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_13-Risk_Prediction_Applied_to_Global_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Recovery Percentage in Gravimetric Concentration Processes using an Artificial Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130912</link>
        <id>10.14569/IJACSA.2022.0130912</id>
        <doi>10.14569/IJACSA.2022.0130912</doi>
        <lastModDate>2022-09-30T12:12:38.7530000+00:00</lastModDate>
        
        <creator>Manuel Alejandro Ospina-Alarc&#243;n</creator>
        
        <creator>Ismael E. Rivera-M</creator>
        
        <creator>Gabriel El&#237;as Chanch&#237;-Golondrino</creator>
        
        <subject>Empirical modeling; dynamic gravimetric concentration model; gravimetric concentration; machine learning for recovery percentage modelling; mineral processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>The concentrate process is the most sensitive in mineral processing plants (MPP), and optimization of the process based on intelligent computational models (machine learning for recovery percentage modelling) can offer significant savings for the plant. Recent theoretical developments have revealed that many of the parameters commonly assumed to be constants in gravity concentration modelling have a dynamic nature; however, there is still no universal way to model these factors accurately. This paper aims to understand the effect of the operational parameters of a jig (gravimetric concentrator) on the recovery percentage of the mineral of interest (gold) through empirical modeling. The recovery percentage of mineral particles in a vibrated bed of big particles is studied using experimental data. The data used for the modelling were from experimental tests in a pilot-scale jig, supplemented by a two-month field sampling campaign that collected 151 tests varying the most significant parameters (amplitude and frequency of pulsation, water flow, height of the artificial porous bed, and particle size). It is found that the recovery percentage (%R) decreases with increasing pulsation amplitude (A) and frequency (F) when the size ratio of small to large particles (d/D) is smaller than 0.148. An empirical model was developed through machine learning techniques; specifically, an artificial neural network (ANN) model was built and trained to predict the jig recovery percentage as a function of the operation parameters, and it was then used to validate the recovery as a function of vibration conditions. The performance of the ANN model was compared with 65 new experimental measurements of the recovery percentage. Results showed that the model (R2 = 0.9172 and RMSE = 0.105) was accurate and therefore could be efficiently applied to predict the recovery percentage in a jig device.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_12-Estimation_of_Recovery_Percentage_in_Gravimetric_Concentration_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attention-based Long Short Term Memory Model for DNA Damage Prediction in Mammalian Cells</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130911</link>
        <id>10.14569/IJACSA.2022.0130911</id>
        <doi>10.14569/IJACSA.2022.0130911</doi>
        <lastModDate>2022-09-30T12:12:38.7370000+00:00</lastModDate>
        
        <creator>Mohammad A. Alsharaiah</creator>
        
        <creator>Laith H. Baniata</creator>
        
        <creator>Omar Adwan</creator>
        
        <creator>Ahmad Adel Abu-Shareha</creator>
        
        <creator>Mosleh Abu Alhaj</creator>
        
        <creator>Qasem Kharma</creator>
        
        <creator>Abdelrahman Hussein</creator>
        
        <creator>Orieb Abualghanam</creator>
        
        <creator>Nabeel Alassaf</creator>
        
        <creator>Mohammad Baniata</creator>
        
        <subject>Mammalian cell; deep learning techniques; attention; LSTM; classification; DNA damage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Understanding DNA damage intensity concentration levels is critical for biological and biomedical research, such as cellular homeostasis, tumor suppression, immunity, and gametogenesis. Therefore, recognizing and quantifying DNA damage intensity levels is a substantial issue, which requires further robust and effective approaches. DNA damage has several intensity levels. These levels of DNA damage in malignant cells and in other unhealthy cells are significant in the assessment of lesion stages located in normal cells. There is a need to gain more insight from the available biological data to predict, explore, and classify DNA damage intensity levels. Herein, the development process relied on an available biological dataset related to DNA damage signaling pathways, which play a crucial role in DNA damage in the mammalian cell system. The biological dataset used in the proposed model consists of 15000 intensity concentration-level records for a set of five proteins which regulate DNA damage. This paper proposes an innovative deep learning model, an attention-based long short-term memory (AT-LSTM) model, for DNA damage multi-class prediction. The proposed model splits the prediction procedure into two stages. In the first stage, the related feature sequences are inserted as input to the LSTM neural network. In the next stage, the attention mechanism is applied to the related feature sequences, which are inserted as input to the softmax layer for prediction in the following frame. Our framework not only effectively solves the long-term dependence problem of prediction, but also enhances the interpretability of prediction methods established on neural networks. We evaluated the proposed model on large and complex biological datasets to perform prediction and multi-class classification tasks. Indeed, the AT-LSTM model is able to predict and classify DNA damage into several classes: No-Damage, Low-Damage, Medium-Damage, High-Damage, and Excess-Damage. The experimental results show that our framework for DNA damage intensity levels can be considered state-of-the-art for the biological DNA damage prediction domain.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_11-Attention_based_Long_Short_Term_Memory_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Privacy Preservation Approach for Healthcare Data using Frequency Distribution of Delicate Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130910</link>
        <id>10.14569/IJACSA.2022.0130910</id>
        <doi>10.14569/IJACSA.2022.0130910</doi>
        <lastModDate>2022-09-30T12:12:38.7200000+00:00</lastModDate>
        
        <creator>Ganesh Dagadu Puri</creator>
        
        <creator>D. Haritha</creator>
        
        <subject>Privacy preservation approach; quasi identifier distribution block; frequency distribution block; big data; anonymization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In the modern world, everyone wishes that their personal information wouldn&#39;t be made public in any manner. In order to keep personal information hidden from prying eyes, privacy protection is essential. The data may be in the form of big data, and minimizing risk and protecting sensitive data is important. In this research, a revolutionary customized privacy-preserving method is implemented that addresses the drawbacks of earlier personalized privacy methods as well as other anonymization methods. There are two main components that make up the proposed method&#39;s core. The first covers two additional attributes used in the record table: Delicate Information (DI) and Delicate Weight (DW). The record holder&#39;s DI decides whether secrecy should be kept or the information shared, while the DW indicates how delicate an attribute value is compared to the rest. The second component covers a new representation used for anonymization, termed the Frequency Distribution Block (FDB) and Quasi-Identifier Distribution Block (QIDB). According to experimental findings, the proposed system executes more quickly and with less data loss than current approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_10-Improving_Privacy_Preservation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Early Warning on the Financial Risk of Project Venture Capital through a Neural Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130909</link>
        <id>10.14569/IJACSA.2022.0130909</id>
        <doi>10.14569/IJACSA.2022.0130909</doi>
        <lastModDate>2022-09-30T12:12:38.7070000+00:00</lastModDate>
        
        <creator>Xianjuan Li</creator>
        
        <subject>Neural network; project investment; early risk warning; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>This paper aims to effectively reduce the financial losses of enterprises by accurately and reasonably providing early warning of investment project risks. This paper briefly introduces the index system used for investment project risk early warning. It constructed a project investment risk early-warning model with a back-propagation neural network (BPNN) algorithm and improved it with a genetic algorithm (GA) to address the defect that the traditional BPNN easily falls into over-fitting when adjusting parameters in back-propagation. An analysis was conducted on an electric power company in Hunan Province. Orthogonal experiments were performed to determine the population size and the number of hidden layers in the improved BPNN algorithm. The results showed that the improved BPNN algorithm had the best performance when the population size was set to 25 and the number of hidden layers was four; compared with support vector machine (SVM) and traditional BPNN algorithms, the GA-improved BPNN algorithm had better performance for early risk warning of investment projects. In conclusion, adjusting the parameters of a BPNN with a GA in the training stage can effectively avoid over-fitting, thus improving the early warning performance of the algorithm; in addition, the improved BPNN has better early warning performance.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_9-Study_on_Early_Warning_on_the_Financial_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-based Model for Securing IoT Transactions in a Healthcare Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130908</link>
        <id>10.14569/IJACSA.2022.0130908</id>
        <doi>10.14569/IJACSA.2022.0130908</doi>
        <lastModDate>2022-09-30T12:12:38.7070000+00:00</lastModDate>
        
        <creator>Mohamed Abdel Kader Mohamed Elgendy</creator>
        
        <creator>Mohamed Aborizka</creator>
        
        <creator>Ali Mohamed Nabil Allam</creator>
        
        <subject>Blockchain; ethereum; electronic medical records (EMR); IoT secure transactions; smart contracts; proof-of-work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>A blockchain is a data structure implemented as a distributed database or digital ledger. Transactions are saved to a block of transactions that is attached in turn to the blockchain after the verification process, in which each block in the chain contains the hash signature of the previous block in addition to the hash signature of the block itself. The blocks on the blockchain are chained as an immutable list using the proof-of-work procedure, where there is no way to alter or delete an attached block due to the strict security policy used for structuring the chain of blocks. Each node holds a copy of the blockchain, and the miners take the responsibility of verifying and attaching blocks to the blockchain. The Ethereum blockchain introduced the smart contract, which holds logic to be processed once the contract is established. These smart contracts are developed in the Solidity programming language. This paper exploits the Ethereum blockchain along with smart contracts as the base technology for implementing the proposed blockchain-based model. The paper aims to develop a multilayered blockchain-based model, in which the blockchain model is set up on a private Ethereum network where the nodes share electronic medical records (EMR) over the P2P (peer-to-peer) network that will be used to secure IoT medical transactions. A Solidity smart contract, introduced by Ethereum, is deployed to handle the EMR “open-query-transfer” operations on the private network, whereas the miners are responsible for validating the transactions. Finally, the research conducts a performance analysis of the Ethereum network using the Ethereum Caliper, considering several performance factors: Maximum Latency, Minimum Latency, Average Latency, and Throughput.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_8-A_Blockchain_based_Model_for_Securing_IoT_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on the Effect of Digital Fabrication in Social Studies Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130907</link>
        <id>10.14569/IJACSA.2022.0130907</id>
        <doi>10.14569/IJACSA.2022.0130907</doi>
        <lastModDate>2022-09-30T12:12:38.6900000+00:00</lastModDate>
        
        <creator>Kazunari Hirakoso</creator>
        
        <creator>Hidetake Hamada</creator>
        
        <subject>Digital fabrication; 3D educational materials; self-learning program; social studies; STEAM education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>One of the learning methods increasingly being practiced in primary and secondary education is inquiry-based learning. Rather than simply teaching knowledge, such classes involve activities through which students search for and discern the significance and essence of things. In social studies education, various trials are being conducted, such as learning local history through fieldwork, and new approaches suitable for inquiry-based learning are being sought. In this study, as a new approach to social studies education, we developed a self-learning program that enables teachers to create original 3D educational materials using digital fabrication technology. We conducted an experiment in which students who wished to become social studies teachers participated in the program, created 3D educational materials, and taught a class using the materials. As a result, all the subjects who took the self-learning program were able to create 3D educational materials and give classes using them. The subjects&#39; opinions suggested that practicing classes using 3D educational materials is effective for teacher education. This contributes to STEAM education, which has been spreading recently in the field of education, and this case study can be seen as a novel model.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_7-A_Study_on_the_Effect_of_Digital_Fabrication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Privacy Technology of Big Data Information Security based on ACA-DMLP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130905</link>
        <id>10.14569/IJACSA.2022.0130905</id>
        <doi>10.14569/IJACSA.2022.0130905</doi>
        <lastModDate>2022-09-30T12:12:38.6730000+00:00</lastModDate>
        
        <creator>Yubiao Han</creator>
        
        <creator>Lei Wang</creator>
        
        <creator>Dianhong He</creator>
        
        <subject>Big data; cloud computing; information security; distributed machine learning; differential privacy algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Cloud computing and artificial intelligence are becoming ever more closely connected with daily life. To ensure information security, most companies and individuals choose to pay a modest fee to store large amounts of data on cloud servers and hand over the many complex computations of machine learning to those servers. To eliminate the security risks of data stored in the cloud and ensure that private data is not leaked, this paper proposes a collusion-resistant distributed machine learning scheme. Through a homomorphic encryption algorithm and a differential privacy algorithm, the security of the data and the model in the machine learning framework is guaranteed. A distributed machine learning framework is adopted to reduce data computing time and improve data training efficiency. The simulation results show that computational efficiency is improved while user privacy is guaranteed, and the accuracy of model training is not reduced by the improvements in privacy protection and computational efficiency. Through this study, we can further propose effective measures for the privacy protection of outsourced data and the data integrity of machine learning, which is of great significance for the security research of cloud intelligent big data.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_5-Differential_Privacy_Technology_of_Big_Data_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reusable Product Line Asset in Smart Mobile Application: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130906</link>
        <id>10.14569/IJACSA.2022.0130906</id>
        <doi>10.14569/IJACSA.2022.0130906</doi>
        <lastModDate>2022-09-30T12:12:38.6730000+00:00</lastModDate>
        
        <creator>Nan Pepin</creator>
        
        <creator>Abdul S. Shibghatullah</creator>
        
        <creator>Kasthuri Subaramaniam</creator>
        
        <creator>Rabatul Aduni Sulaiman</creator>
        
        <creator>Zuraida A. Abas</creator>
        
        <creator>Samer Sarsam</creator>
        
        <subject>Application; charity; donation; reusable product line; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>A reusable product line asset is a product or asset that can be reused for different purposes, including charity. Smart mobile applications are one of several communication and information methods used in charitable activities. Web, mobile, or hybrid platforms can be used to develop charity applications. Building an application requires design and purpose, whether a methodology or a software development process is applied for the smooth design and development of the application. The data for this study were acquired from the relevant literature between 2017 and 2021 in order to examine the development of current charity applications. The Systematic Literature Review (SLR) method was employed in this study; it is used to identify, review, evaluate, and analyze all available research on relevant topics, as well as research issues for philanthropic development. This study aims to answer the following research questions: identify the donation applications that are frequently developed by researchers; identify the methods commonly used in the development of charity applications; identify the application platforms that are frequently used; identify the functions utilized in the developed applications; and identify the key users of the applications. The findings show that charity donation apps, structured methods, mobile applications, authentication, and charity centers and donors were the most often observed in this study.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_6-A_Reusable_Product_Line_Assets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cooperative Multi-Robot Hierarchical Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130904</link>
        <id>10.14569/IJACSA.2022.0130904</id>
        <doi>10.14569/IJACSA.2022.0130904</doi>
        <lastModDate>2022-09-30T12:12:38.6600000+00:00</lastModDate>
        
        <creator>Gembong Edhi Setyawan</creator>
        
        <creator>Pitoyo Hartono</creator>
        
        <creator>Hideyuki Sawada</creator>
        
        <subject>Multi-robot system; hierarchical deep reinforcement learning; path-finding; task decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Recent advances in multi-robot deep reinforcement learning have made it possible to perform efficient exploration in problem space, but it remains a significant challenge in many complex domains. To alleviate this problem, a hierarchical approach has been designed in which agents can operate at many levels to complete tasks more efficiently. This paper proposes a novel technique called Multi-Agent Hierarchical Deep Deterministic Policy Gradient that combines the benefits of multi-robot systems with the hierarchical structure used in Deep Reinforcement Learning. Here, agents acquire the ability to decompose a problem into simpler subproblems with varying time scales. Furthermore, this study develops a framework to formulate tasks into multiple levels. The upper levels learn policies for defining the lower levels&#39; subgoals, whereas the lowest level represents the robots&#39; learned policies for primitive actions in the real environment. The proposed method is implemented and validated in a modified Multiple Particle Environment (MPE) scenario.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_4-Cooperative_Multi_Robot_Hierarchical_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Image Enhancement Method based on a New Intensifier Operator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130903</link>
        <id>10.14569/IJACSA.2022.0130903</id>
        <doi>10.14569/IJACSA.2022.0130903</doi>
        <lastModDate>2022-09-30T12:12:38.6430000+00:00</lastModDate>
        
        <creator>Libao Yang</creator>
        
        <creator>Suzelawati Zenian</creator>
        
        <creator>Rozaimi Zakaria</creator>
        
        <subject>Image enhancement; intensifier operator; threshold; pivotal point</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>In recent years, fuzzy image enhancement methods have been widely applied in image enhancement. They generally consist of three steps: fuzzification, membership modification (using an intensifier (INT) operator), and defuzzification. This paper proposes a new INT operator for fuzzy image enhancement. The INT operator is adjustable for different test images. The image enhancement method is as follows: first, calculate the image threshold (T) using the Otsu method. Second, calculate the pivotal point p corresponding to T, and find the corresponding INT operator function. Finally, apply the INT operator in fuzzy image enhancement. The INT operator is applied multiple times during image processing to obtain multiple result images. Comparative experiments show that the proposed new INT operator has a better image enhancement effect when the INT operator is applied the same number of times. Moreover, more intermediate result images can be obtained through the proposed new INT operator, providing material resources for subsequent image processing.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_3-Fuzzy_Image_Enhancement_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote International Collaboration in Scientific Research Teams for Technology Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130902</link>
        <id>10.14569/IJACSA.2022.0130902</id>
        <doi>10.14569/IJACSA.2022.0130902</doi>
        <lastModDate>2022-09-30T12:12:38.6430000+00:00</lastModDate>
        
        <creator>Sarah Janb&#246;cke</creator>
        
        <creator>Toshimi Ogawa</creator>
        
        <creator>Koki Kobayashi</creator>
        
        <creator>Ryan Browne</creator>
        
        <creator>Yasuki Taki</creator>
        
        <creator>Rainer Wieching</creator>
        
        <creator>Johanna Langendorf</creator>
        
        <subject>Teamwork; technology development; international collaboration; scientific team performance; potential technology leverage; scientific commitment; team efficiency; team commitment; team performance; collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Scientific research teams often find themselves in remote working situations due to their internationality. Highly complex technological projects demand close collaboration and knowledge-sharing management. Remote teamwork, especially in cutting-edge scientific technology development, comes with various challenges that can negatively influence overall team performance and commitment to the project. Within the EU-Japan (EU-/MIC-funded) project e-VITA, a consortium of 22 multidisciplinary partners and around 80 people works on research regarding a virtual assistant for healthy and active aging. We collected qualitative data within the consortium after nine months of teamwork to understand the influence of collaboration on commitment, personal performance, efficiency, and work outcome. Based on this research&#39;s outcome, we built a framework for future scientific research projects and consortia to increase the efficiency and quality of teamwork, and thus researchers’ well-being.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_2-Remote_International_Collaboration_in_Scientific_Research_Teams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ModER: Graph-based Unsupervised Entity Resolution using Composite Modularity Optimization and Locality Sensitive Hashing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130901</link>
        <id>10.14569/IJACSA.2022.0130901</id>
        <doi>10.14569/IJACSA.2022.0130901</doi>
        <lastModDate>2022-09-30T12:12:38.6270000+00:00</lastModDate>
        
        <creator>Islam Akef Ebeid</creator>
        
        <creator>John R. Talburt</creator>
        
        <creator>Nicholas Kofi Akortia Hagan</creator>
        
        <creator>Md Abdus Salam Siddique</creator>
        
        <subject>Entity resolution; data curation; database; graph theory; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(9), 2022</description>
        <description>Entity resolution describes techniques used to identify documents or records that are not exact duplicates but might refer to the same entity. Here we study the problem of unsupervised entity resolution. Current methods rely on human input by setting multiple thresholds prior to execution. Some methods also rely on computationally expensive similarity metrics and might not be practical for big data. Hence, we focus on providing a solution, namely ModER, capable of quickly identifying entity profiles in ambiguous datasets using a graph-based approach that does not require setting a matching threshold. Our framework exploits the transitivity property of approximate string matching across multiple documents or records. We build on our previous work in graph-based unsupervised entity resolution, namely the Data Washing Machine (DWM) and the Graph-based Data Washing Machine (GDWM). We provide an extensive evaluation on a synthetic data set and benchmark our proposed framework against state-of-the-art methods in unsupervised entity resolution. Furthermore, we discuss the implications of the results and how they contribute to the literature.</description>
        <description>http://thesai.org/Downloads/Volume13No9/Paper_1-ModER_Graph_based_Unsupervised_Entity_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure and Efficient Implicit Certificates: Improving the Performance for Host Identity Protocol in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308105</link>
        <id>10.14569/IJACSA.2022.01308105</id>
        <doi>10.14569/IJACSA.2022.01308105</doi>
        <lastModDate>2022-09-02T12:27:14.4870000+00:00</lastModDate>
        
        <creator>Zhaokang Lu</creator>
        
        <creator>Jianzhu Lu</creator>
        
        <subject>Authentication; privacy; implicit certificates; internet of things (IoT); host identity; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Implicit certificates have shorter public-key validation data. This property makes them appealing in resource-constrained IoT systems where public-key authentication is performed very often, as is common in the Host Identity Protocol (HIP). However, guaranteeing the security and efficiency of implicit certificates in IoT remains a critical challenge. This article presents a forgery attack on the Privacy-aware HIP (P-HIP), and then proposes a Secure and Efficient Implicit Certificate (SEIC) scheme that can improve the security of P-HIP and the efficiency of elliptic-curve point multiplications for IoT devices. For a fixed-point multiplication, the proposed approach is about 1.5 times faster than the method in the SIMPL scheme. Furthermore, we improve the performance of SEIC with the butterfly key expansion process, and then construct an improved P-HIP. Experimental results show that, compared to existing schemes, the improved scheme gives a user/device both the smallest computation cost and the smallest communication cost.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_105-Secure_and_Efficient_Implicit_Certificates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Security: Implementation of Hybrid Image Steganography Technique using Low-Contrast LSB and AES-CBC Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308104</link>
        <id>10.14569/IJACSA.2022.01308104</id>
        <doi>10.14569/IJACSA.2022.01308104</doi>
        <lastModDate>2022-09-02T12:27:14.4570000+00:00</lastModDate>
        
        <creator>Edwar Jacinto G</creator>
        
        <creator>Holman Montiel A</creator>
        
        <creator>Fredy H. Mart&#237;nez S</creator>
        
        <subject>Steganography; cryptography; LSB; low contrast areas; AES-CBC algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, sensitive and confidential information needs to be exchanged over open, public, and insecure networks such as the Internet. For this purpose, some information security techniques combine cryptographic and steganographic algorithms with image processing techniques to exchange information securely. This research presents the implementation of an algorithm that combines the AES-CBC cryptographic technique with the LSB steganographic technique, statistically enhanced by image processing that searches for low-contrast areas where the encrypted information will be stored. This hybrid algorithm was developed to send a plaintext file hidden in an image in BMP format, so the changes in the image are invisible to the human eye and undetectable in possible steganographic analysis. The implementation was performed using Python and its libraries PyCryptodome for encryption and CV2 for image processing. As a result, it was found that the implemented hybrid algorithm places three layers of security over a plaintext that is encrypted and hidden in a digital image, which makes it difficult to break the secrecy of the information exchanged in a stego-image file. Additionally, the execution times of the hybrid algorithm were evaluated for different sizes of plaintext and digital image files.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_104-Enhanced_Security_Implementation_of_Hybrid_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Letter-of-Credit Chain: Cross-Border Exchange based on Blockchain and Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308103</link>
        <id>10.14569/IJACSA.2022.01308103</id>
        <doi>10.14569/IJACSA.2022.01308103</doi>
        <lastModDate>2022-09-02T12:27:14.4230000+00:00</lastModDate>
        
        <creator>Khoi Le Quoc</creator>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Nguyen Huyen Tran</creator>
        
        <creator>Huynh Trong Nghia</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Letter-of-Credit; blockchain; smart contract; authorization; Ethereum; Fantom; Binance smart chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The exchange of goods between countries is growing, contributing to the promotion of logistics-related technologies. More and more systems are adopting advances in science and engineering to reduce manual handling steps, thereby reducing transit time. A Letter-of-Credit (LOC) is a standard method whereby the parties involved enter into agreements for the sale and exchange of goods. Specifically, each party receives a set of original documents and does not need to meet face-to-face under the bank’s witness. The process brings many benefits in terms of time and reduces records processing. However, the system faces many risks when one of the parties is dishonest. Traditional LOC systems also face many risks related to the transparency of information about the goods: the supplier may lose the goods (e.g., 4 of 100 Vietnamese cashew nut containers were lost or stuck in Italy) or deposits in the hands of shipping companies (e.g., GNN Express - Vietnam), among other cases. To this end, many research directions have exploited blockchain technology and smart contracts, recording all information related to the transaction between the supplier and the demander, including the package, time, and delivery location. However, there needs to be a mechanism to ensure the smooth execution of smart contracts, specifically for sanctioning when there is a conflict between a supplier and a demander. This role should be considered for the transaction manager, who directly designs and is responsible for the smart contracts. Currently, there is no mechanism to guarantee all interests of the parties involved in non-bank transactions. To increase processing capacity and integrate with the blockchain system, we propose the Letter-of-Credit Chain, which defines the agreements between the parties in international trade. We also deploy a proof-of-concept of the Letter-of-Credit Chain on three EVM-supported platforms (i.e., under ERC20), namely Ethereum, Binance Smart Chain, and Fantom. By evaluating the actual Gas cost of execution on each platform, we found that our proposed model had the cheapest fee when deployed on the Fantom platform. Finally, we share the deployment/implementation of these platforms’ proof-of-concept to encourage further research.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_103-Letter_of_Credit_Chain_Cross_Border_Exchange.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blood Management System based on Blockchain Approach: A Research Solution in Vietnam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308102</link>
        <id>10.14569/IJACSA.2022.01308102</id>
        <doi>10.14569/IJACSA.2022.01308102</doi>
        <lastModDate>2022-09-02T12:27:14.3600000+00:00</lastModDate>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Luong Hoang Huong</creator>
        
        <creator>Phuc Nguyen Trong</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <creator>Loc Van Cao Phu</creator>
        
        <creator>Duy Nguyen Truong Quoc</creator>
        
        <creator>Nguyen Huyen Tran</creator>
        
        <creator>Huynh Trong Nghia</creator>
        
        <creator>Bang Le Khanh</creator>
        
        <creator>Kiet Le Tuan</creator>
        
        <subject>Blood donation in Vietnam; blockchain; hyperledger fabric; blood products supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>More and more new health care solutions are emerging from the development of science and technology. The beneficiaries are not only patients (i.e., shorter healing time, faster recovery) but also medical staff such as doctors and nurses (i.e., easier monitoring of the patient’s recovery process, proposal of new treatments). However, there are still products for which no alternative has been found: blood and blood products. Regardless of how much science and technology can affect patient treatment and medical care, blood still plays an important role in treatment. Moreover, blood and blood products may only be obtained from volunteers (i.e., blood donors). The preservation process is also very difficult, and no medical facility has sufficient equipment to preserve them. Therefore, the current process of blood preservation and transportation is done manually and carries many potential risks (e.g., data loss, personal information collection). In addition to the above barriers, developing countries (including Vietnam) face many difficulties due to limited facilities. It is for this reason that this paper aims at a blockchain-based technology solution for the efficient management and distribution of blood and blood products. On the one hand, the paper addresses the limitations of the information management process for storing and transporting blood and its products in the traditional databases used by medical facilities in the cities and provinces of the Mekong Delta (the southwest of Vietnam). On the other hand, the article offers technology-based solutions to increase transparency and reduce the fear of centralized data storage (i.e., security and privacy issues). We also implement a proof-of-concept to evaluate the feasibility of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_102-Blood_Management_System_based_on_Blockchain_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Straggler Mitigation in Hadoop MapReduce Framework: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308101</link>
        <id>10.14569/IJACSA.2022.01308101</id>
        <doi>10.14569/IJACSA.2022.01308101</doi>
        <lastModDate>2022-08-29T10:10:23.3030000+00:00</lastModDate>
        
        <creator>Lukuman Saheed Ajibade</creator>
        
        <creator>Kamalrulnizam Abu Bakar</creator>
        
        <creator>Ahmed Aliyu</creator>
        
        <creator>Tasneem Danish</creator>
        
        <subject>Big data; blacklisting execution; Hadoop; MapReduce; spark; speculative execution; straggler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Processing huge and complex data to obtain useful information is challenging, even though several big data processing frameworks have been proposed and further enhanced. One of the prominent big data processing frameworks is MapReduce, whose main concept relies on distributed and parallel processing. However, the MapReduce framework faces serious performance degradation due to the slow execution of certain tasks, called stragglers. Failing to handle stragglers causes delay and affects the overall job execution time. Several straggler mitigation techniques have been proposed to improve MapReduce performance. This study provides a comprehensive and qualitative review of the different existing straggler mitigation solutions. In addition, a taxonomy of the available straggler mitigation solutions is presented. Critical research issues and future research directions are identified and discussed to guide researchers and scholars.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_101-Straggler_Mitigation_in_Hadoop_MapReduce_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hate Speech Detection System based on Textual and Psychological Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01308100</link>
        <id>10.14569/IJACSA.2022.01308100</id>
        <doi>10.14569/IJACSA.2022.01308100</doi>
        <lastModDate>2022-08-29T10:10:23.2700000+00:00</lastModDate>
        
        <creator>Fatimah Alkomah</creator>
        
        <creator>Sanaz Salati</creator>
        
        <creator>Xiaogang Ma</creator>
        
        <subject>Hate speech detection; hate speech classification; hate speech features; hate speech methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Hate speech often spreads on social media and harms individuals and the community. Machine learning models have been proposed to detect hate speech in social media; however, several issues presently limit the performance of current approaches. One challenge is the diversity of understandings of hate speech constructs, which leads to many speech categories and differing interpretations. In addition, certain language-specific features and short-text issues, such as those on Twitter, exacerbate the problem. Moreover, current machine learning approaches lack universality due to small datasets and the adoption of only a few features of hateful speech. This paper develops and builds new feature sets based on frequencies of textual tokens and psychological characteristics. Then, the study evaluates several machine learning methods over a large dataset. Results showed that the Random Forest and BERT methods are the most valuable for detecting hate speech content. Furthermore, the most dominant features that are helpful for hate speech detection methods combine psychological features and Term-Frequency Inverse Document-Frequency (TF-IDF) features. Therefore, the proposed approach could identify hate speech on social media platforms such as Twitter.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_100-A_New_Hate_Speech_Detection_System_based_on_Textual.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Enhancement Method based on an Improved Fuzzy C-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130899</link>
        <id>10.14569/IJACSA.2022.0130899</id>
        <doi>10.14569/IJACSA.2022.0130899</doi>
        <lastModDate>2022-08-29T10:10:23.2570000+00:00</lastModDate>
        
        <creator>Libao Yang</creator>
        
        <creator>Suzelawati Zenian</creator>
        
        <creator>Rozaimi Zakaria</creator>
        
        <subject>Image enhancement; fuzzy clustering; fuzzy c-means clustering; membership; objective function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Image enhancement is an important method in the process of image processing. This paper proposes an image enhancement method based on an improved fuzzy c-means clustering. The method consists of the following steps: first, a fuzzy c-means clustering with a cooperation center (FCM-co) is proposed; second, the FCM-co is used to divide the image pixels into different clusters and assign membership values to those clusters; third, the membership values are modified; finally, the new pixel gray levels are calculated. This enhancement method overcomes the disadvantage of overexposure and better retains image details. The experimental test results show that the proposed enhancement method achieves better performance.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_99-Image_Enhancement_Method_based_on_an_Improved_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Analysis based on Advanced Correlation Automatic Detection Technology in BDD-FFS System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130898</link>
        <id>10.14569/IJACSA.2022.0130898</id>
        <doi>10.14569/IJACSA.2022.0130898</doi>
        <lastModDate>2022-08-29T10:10:23.2230000+00:00</lastModDate>
        
        <creator>Xiao Zheng</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Mingchu Li</creator>
        
        <creator>Shaoqing Wang</creator>
        
        <subject>Big data-driven fabric future systems (BDD-FFS); magnetic resonance imaging (MRI); partial tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Big Data-Driven Fabric Future Systems (BDD-FFS) are currently attracting widespread attention in the healthcare research community. Medical devices rely primarily on the intelligent Internet to gather important health-related information. Based on this, we provide patients with deeply supportive data to help them through their recovery. However, due to the large number of medical devices, a device's address can be modified by intruders, which can be life-threatening for seriously ill patients (such as tumor patients). A large number of abnormal cells in the brain can lead to brain tumors, which harm brain tissue and can be life-threatening. Early recognition of brain tumors is significant for their detection, prediction and therapy. The traditional detection approach is for a human to perform a biopsy and examine CT scans or magnetic resonance imaging (MRI), which is cumbersome, impractical for large volumes of scans, and requires the radiologist to make inferential computations. A variety of automation schemes have been designed to address these challenges. However, there is an urgent need to develop a technology that detects brain tumors with remarkable accuracy in a much shorter time. In addition, the selection of feature sets for prediction is crucial to achieving significant accuracy. This work utilizes an associative action learner with an advanced feature group, Partial Tree (PART-T), to detect brain tumor recognition grades. The presented model was compared with existing methods through 10-fold cross-validation. Experimental results show that partial trees with advanced feature sets are superior to existing techniques in terms of the performance indicators used for evaluation, such as accuracy, recall rate and F-measure.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_98-Computational_Analysis_based_on_Advanced_Correlation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Flexible Transparent Authentication System for Mobile Application Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130897</link>
        <id>10.14569/IJACSA.2022.0130897</id>
        <doi>10.14569/IJACSA.2022.0130897</doi>
        <lastModDate>2022-08-29T10:10:23.2100000+00:00</lastModDate>
        
        <creator>Abdullah Golam</creator>
        
        <creator>Mohammed Abuhmoud</creator>
        
        <creator>Umar Albalawi</creator>
        
        <subject>Transparent security; authentication; UX/UI; for-getting password; reset password; biometric systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Undoubtedly, Mobile Application Security (MAS) has made tremendous progress in implementing enhanced security protocols over the past decade. With the recent increase in the usage of mobile applications, concerns about privacy and security are increasing rapidly. Thus, security measures must be applied to satisfy security and privacy needs. On the one hand, the developer community works feverishly to develop mobile applications with innovative, usable layers that are user-friendly for multigenerational customers; on the other hand, the security community strives to make those layers secure. Therefore, the main objective of this research is to build a transparent authentication system in a mobile application. There are potentially many ways to implement an authentication mechanism, such as the biometric approach, which has features that can be used to heighten security for the end-user. In this article, we experimentally investigate the multigenerational customer base's factors, such as age, convenience, ease of use, memorizing new passwords, and understanding the precept of frequently changing passwords to enhance security. Additionally, we propose a system that solves the common problems users face when starting the password resetting process. At the same time, in the MAS sector, we orchestrate the applications with stronger encryption for the stored biometrics, which makes it even more challenging for an adversary to bypass the system and reset the password. We conclude our research with a comprehensive security solution for MAS that considers user friendliness and data safeguarding.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_97-Towards_Flexible_Transparent_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Watchdog Monitoring for Detecting and Handling of Control Flow Hijack on RISC-V-based Binaries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130896</link>
        <id>10.14569/IJACSA.2022.0130896</id>
        <doi>10.14569/IJACSA.2022.0130896</doi>
        <lastModDate>2022-08-29T10:10:23.1770000+00:00</lastModDate>
        
        <creator>Toyosi Oyinloye</creator>
        
        <creator>Lee Speakman</creator>
        
        <creator>Thaddeus Eze</creator>
        
        <creator>Lucas O’Mahony</creator>
        
        <subject>Watchdog; return oriented programming; RISC-V; control flow integrity; software security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Control flow hijacking has been a major challenge in software security. Several means of protection have been developed, but insecurities persist. This is because existing protections have sometimes been circumvented, while some resilient protections do not cover all applications. Studies have revealed that a holistic way of tackling software insecurity could involve watchdog monitoring and detection via Control Flow Integrity (CFI). The CFI concept has shown a good measure of reliability in mitigating control flow hijacking. However, sophisticated attack techniques in the form of Return Oriented Programming (ROP) have persisted. A flexible protection is desirable, one that not only covers as many architectures as possible but also mitigates known resilient attacks like ROP. The solution proffered here is a hybrid of CFI and watchdog timing via inter-process signaling (IP-CFI). It is a software-based protection that involves recompilation of the target program. The implementation here is on a vulnerable RISC-V-based process but is flexible and could be adapted to other architectures. We present a proof of concept of IP-CFI which, when applied to a vulnerable program, mitigates ROP. The target program incurs a run-time overhead of 1.5%. The code is available.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_96-Watchdog_Monitoring_for_Detecting_and_Handling_of_Control_Flow_Hijack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fashion Image Retrieval based on Parallel Branched Attention Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130895</link>
        <id>10.14569/IJACSA.2022.0130895</id>
        <doi>10.14569/IJACSA.2022.0130895</doi>
        <lastModDate>2022-08-29T10:10:23.1600000+00:00</lastModDate>
        
        <creator>Sangam Man Buddhacharya</creator>
        
        <creator>Sagar Adhikari</creator>
        
        <creator>Ram Krishna Lamichhane</creator>
        
        <subject>Convolutional neural network (CNN); image retrieval; attention mechanism; convolutional block attention module (CBAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>With the increase in vision-related applications in e-commerce, image retrieval has become an emerging application in computer vision. Matching a user's exact clothing against database images is challenging due to noisy backgrounds, wide variations in orientation and lighting conditions, shape deformations, and the variation in image quality between query images and refined shop images. Most existing solutions tend to miss out on either incorporating low-level features or doing so effectively within their networks. To address this issue, we propose an attention-based multiscale deep Convolutional Neural Network (CNN) architecture called Parallel Attention ResNet (PAResNet50). It includes supplementary branches with attention layers to extract low-level discriminative features and uses both high-level and low-level features for the notion of visual similarity. The attention layer focuses on the local discriminative regions and ignores the noisy background. Image retrieval output shows that our approach is robust to different lighting conditions. Experimental results on two public datasets show that our approach effectively locates the important regions and significantly improves retrieval accuracy over simple network architectures without attention.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_95-Fashion_Image_Retrieval_based_on_Parallel_Branched_Attention_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Artificial Intelligence Techniques for Assisting Visually Impaired People: A Personal AI-based Assistive Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130894</link>
        <id>10.14569/IJACSA.2022.0130894</id>
        <doi>10.14569/IJACSA.2022.0130894</doi>
        <lastModDate>2022-08-29T10:10:23.1300000+00:00</lastModDate>
        
        <creator>Samah Alhazmi</creator>
        
        <creator>Mohammed Kutbi</creator>
        
        <creator>Soha Alhelaly</creator>
        
        <creator>Ulfat Dawood</creator>
        
        <creator>Reem Felemban</creator>
        
        <creator>Entisar Alaslani</creator>
        
        <subject>Artificial intelligence; machine learning; assistive technology; visually impaired</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, the field of Artificial Intelligence (AI) has brought significant change to real life. Numerous applications use AI techniques for the purpose of assisting people in different aspects of life. Furthermore, with the increasing number of people with visual difficulties around the world, there is a need for AI assistive applications that provide them with an independent life. Few affordable and appropriate solutions have been developed so far. In this paper, we present a personal AI-based assistive application called Vivid that supports visually impaired people in being more independent. Vivid has many features, such as identifying objects and their colors, recognizing text, and detecting faces. It relies on the mobile camera to sense the environment and on machine learning techniques to understand it. By translating meaningful information into audible sound for its users, Vivid does not require them to have any visual ability. Moreover, the whole interaction with the user is based solely on voice commands. Input from the user is captured as finger gestures on a tablet or cell phone touch screen. In addition to Vivid, we also shed light on a supplementary application that notifies visually impaired people of any nearby objects using sensors. These personal assistive applications were developed and then tested in the real world, showing promising results.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_94-Utilizing_Artificial_Intelligence_Techniques_for_Assisting_Visually_Impaired_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Learning to Rank Approach for Software Defect Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130893</link>
        <id>10.14569/IJACSA.2022.0130893</id>
        <doi>10.14569/IJACSA.2022.0130893</doi>
        <lastModDate>2022-08-29T10:10:23.1130000+00:00</lastModDate>
        
        <creator>Sara Al-omari</creator>
        
        <creator>Yousef Elsheikh</creator>
        
        <creator>Mohammed Azzeh</creator>
        
        <subject>Software engineering; software testing; software defect prediction; learning to rank approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Software defect prediction is one of the most active research fields in software development. The outcome of defect prediction models provides a list of the most likely defect-prone modules, which require a huge effort from quality assurance teams. It can also help project managers effectively allocate limited resources to validating software products and invest more effort in defect-prone modules. As the size of software projects grows, defect prediction models can play an important role in assisting developers and shortening the time it takes to create more reliable software products by ranking software modules based on their defects. Therefore, there is a need for a learning-to-rank approach that can prioritize and rank defective modules to reduce testing effort, cost, and time. In this paper, a new learning-to-rank approach was developed to help the QA team rank the most defect-prone modules using different regression models. The proposed approach was evaluated on a set of standardized datasets using well-known evaluation measures such as Fault-Percentile Average (FPA), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the Cumulative Lift Chart (CLC). Our proposed approach was also compared with other regression models used for software defect prediction, such as Random Forest (RF), Logistic Regression (LR), Support Vector Regression (SVR), Zero Inflated Regression (ZIR), Zero Inflated Poisson (ZIP), and Negative Polynomial Regression (NPR). Based on the results, the measurement criteria differed from one another, as there was a gap in the accuracy obtained for defect prediction due to the random nature of the data; accuracy was higher for RF and SVR, and FPA achieved better results than MAE and RMSE.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_93-A_New_Learning_to_Rank_Approach_for_Software_Defect_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Big Data Intelligence Analytics Framework for 5G-Enabled IoT Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130892</link>
        <id>10.14569/IJACSA.2022.0130892</id>
        <doi>10.14569/IJACSA.2022.0130892</doi>
        <lastModDate>2022-08-29T10:10:23.0830000+00:00</lastModDate>
        
        <creator>Yassine Sabri</creator>
        
        <creator>Ahmed Outzourhit</creator>
        
        <subject>Big data analytics; 5G-enabled; IoT healthcare; fog computing; confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Intelligent networking is a concept that enables 5G, the Internet of Things (IoT) and artificial intelligence (AI) to combine as a way to accelerate technological innovation and develop new revolutionary digital services. In the intelligent connectivity vision, the digital information gathered by the machines, devices and sensors which make up the IoT is analysed and contextualized. It is anticipated that the high availability of 5G and its support for a large number of connections will help promote the production of wearable devices used to monitor the different biometric parameters of the wearer. Since these are AI-based health systems, the data obtained from these devices will be analysed in order to assess a patient's current health status. This paper presents a detailed design for the development of intelligent data analytics and mobile computer-assisted healthcare systems. The proposed advanced PoS consensus algorithm provides better performance than other existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_92-A_Novel_Big_Data_Intelligence_Analytics_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Summarizing Event Sequence Database into Compact Big Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130891</link>
        <id>10.14569/IJACSA.2022.0130891</id>
        <doi>10.14569/IJACSA.2022.0130891</doi>
        <lastModDate>2022-08-29T10:10:23.0670000+00:00</lastModDate>
        
        <creator>Mosab Hassaan</creator>
        
        <subject>Sequence data; compressing patterns mining; minimum description length</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Detecting the core structure of a database is one of the main objectives of data mining. Many methods do so, in pattern set mining, by mining a small set of patterns that together summarize the dataset efficiently. The better these patterns are, the more effective the summarization of the database. Most of these methods are based on the Minimum Description Length principle. Here, we focus on event sequence databases. In this paper, rather than mining a small set of significant patterns, we propose a novel method to summarize an event sequence dataset by constructing a compact big sequence, namely BigSeq. BigSeq preserves all characteristics of the original event sequences. It is constructed efficiently via the longest common subsequence and a novel definition of the compatible event set. The experimental results show that the BigSeq method outperforms state-of-the-art methods such as GoKrimp with respect to compression ratio, total response time, and number of detected patterns.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_91-Summarizing_Event_Sequence_Database_into_Compact_Big_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile App Design: Logging and Diagnostics of Respiratory Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130890</link>
        <id>10.14569/IJACSA.2022.0130890</id>
        <doi>10.14569/IJACSA.2022.0130890</doi>
        <lastModDate>2022-08-29T10:10:23.0670000+00:00</lastModDate>
        
        <creator>Diana Cecilia Chavez Canari</creator>
        
        <creator>Angel Vicente Garcia Obispo</creator>
        
        <creator>Jose Luis Herrera Salazar</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Mobile app; Covid-19; diagnosis; respiratory diseases; RUP methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Over the years, a wide variety of respiratory diseases have caused high mortality rates throughout the world. This was observed again with the appearance of the COVID-19 pandemic. In addition, those most affected are people living in extreme poverty. The objective is to design a mobile health application for the registration and diagnosis of respiratory diseases. For this, the RUP methodology was applied, because it easily adapts to various types of projects. Its use, together with UML process development software, allows the analysis, implementation and documentation of object-oriented systems. For validation, a user survey was carried out, with a questionnaire based on the dimensions of functionality, efficiency, effectiveness and satisfaction. The result was a positive rating of the application's design and its acceptance, owing to the reduction in the time needed to obtain a diagnosis. In conclusion, a mobile health application design was successfully carried out so that patients can register and receive diagnoses of respiratory diseases from the comfort of their home.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_90-Mobile_App_Design_Logging_and_Diagnostics_of_Respiratory_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prototype Implementation of a CUDA-based Customized Rasterizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130889</link>
        <id>10.14569/IJACSA.2022.0130889</id>
        <doi>10.14569/IJACSA.2022.0130889</doi>
        <lastModDate>2022-08-29T10:10:23.0370000+00:00</lastModDate>
        
        <creator>Nakhoon Baek</creator>
        
        <subject>3D rasterization; CUDA implementation; OpenGL emulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, we have high-performance massively parallel computing devices as well as high-performance 3D graphics rendering devices. In this paper, we present a prototype implementation of a full-software 3D rasterizer system based on the CUDA parallel architecture. While most previous CUDA-based software rasterizer implementations focused on triangle primitives, our system includes more 3D primitives, as well as extra 2D primitives, to fully support 3D graphics library features. Currently, our system is at the prototype implementation stage, and it shows successful results for 3D primitive handling and character output features. Our design and implementation details are presented. More optimizations and fine-tuning will follow in the near future.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_89-A_Prototype_Implementation_of_a_CUDA_based_Customized_Rasterizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Detect Phishing Websites with Features Selection Method and Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130888</link>
        <id>10.14569/IJACSA.2022.0130888</id>
        <doi>10.14569/IJACSA.2022.0130888</doi>
        <lastModDate>2022-08-29T10:10:23.0200000+00:00</lastModDate>
        
        <creator>Mahmuda Khatun</creator>
        
        <creator>MD Akib Ikbal Mozumder</creator>
        
        <creator>Md. Nazmul Hasan Polash</creator>
        
        <creator>Md. Rakib Hasan</creator>
        
        <creator>Khalil Ahammad</creator>
        
        <creator>MD. Shibly Shaiham</creator>
        
        <subject>Machine learning; deep learning; catboost; LGBM; embedded; react-native; flask</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, phishing is a major problem on a global scale. Everyone must use the internet in today's society to cope in the modern world. As a result, internet crime such as phishing has become a serious issue throughout the world. This type of crime can be committed by anyone; all they need is a computer. Additionally, hacking can now be learned quickly by anyone with programming and mathematical skills. The adoption of various techniques by anti-phishing toolbars, such as machine learning, may enable users to quickly identify a fake website. As a result, researchers are now particularly interested in the problem of detecting fraudulent websites. Machine learning techniques have been applied throughout the entire process to identify fraudulent websites more precisely. To find the most accurate outcome, classification with random parameter tuning and ensemble-based approaches are utilized. A user-friendly interface has also been proposed to make the system more accessible to the public.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_88-An_Approach_to_Detect_Phishing_Websites_with_Features_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved K-Nearest Neighbor Algorithm for Pattern Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130887</link>
        <id>10.14569/IJACSA.2022.0130887</id>
        <doi>10.14569/IJACSA.2022.0130887</doi>
        <lastModDate>2022-08-29T10:10:22.9900000+00:00</lastModDate>
        
        <creator>Zinnia Sultana</creator>
        
        <creator>Ashifatul Ferdousi</creator>
        
        <creator>Farzana Tasnim</creator>
        
        <creator>Lutfun Nahar</creator>
        
        <subject>LANN algorithm; Standard Euclidian Distance; variance based Euclidian Distance; feature extraction; pattern classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>This paper proposes a "Locally Adaptive K-Nearest Neighbor (LAKNN) algorithm" for pattern classification problems to mitigate the curse of dimensionality. To compute neighborhoods, local linear discriminant analysis is an effective metric that determines local decision boundaries from centroid information. KNN is widely used in many classification problems in data mining and machine learning. KNN uses class conditional probabilities for unfamiliar patterns. For limited training data in a high-dimensional feature space, this hypothesis is unacceptable due to the distortion caused by high dimensionality. To normalize the feature values of dissimilar metrics, the Standard Euclidean Distance is used in KNN, which can misguide the search for a proper subset of nearest points of the pattern to be predicted. To overcome the effect of high dimensionality, LAKNN uses a new variant of the Standard Euclidean Distance metric. A flexible metric is estimated for computing neighborhoods based on Chi-squared distance analysis. The Chi-squared metric is used to ascertain the most significant features in finding the k-closest points of the training patterns. This paper also shows that LAKNN outperformed four other variants of KNN and other machine-learning algorithms in both training and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_87-An_Improved_K_Nearest_Neighbor_Algorithm_for_Pattern_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CapNet: An Encoder-Decoder based Neural Network Model for Automatic Bangla Image Caption Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130886</link>
        <id>10.14569/IJACSA.2022.0130886</id>
        <doi>10.14569/IJACSA.2022.0130886</doi>
        <lastModDate>2022-08-29T10:10:22.9730000+00:00</lastModDate>
        
        <creator>Rashik Rahman</creator>
        
        <creator>Hasan Murad</creator>
        
        <creator>Nakiba Nuren Rahman</creator>
        
        <creator>Aloke Kumar Saha</creator>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>A S Zaforullah Momtaz</creator>
        
        <subject>Bangla image caption generation; encoder-decoder; bidirectional long short-term memory (LSTM); Bangla natural language processing (NLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Automatic caption generation from images has become an active research topic in the fields of Computer Vision (CV) and Natural Language Processing (NLP). Machine-generated image captions play a vital role for visually impaired people: converting the caption to speech gives them a better understanding of their surroundings. Though a significant amount of research has been conducted on automatic caption generation in other languages, far too little effort has been devoted to Bangla image caption generation. In this paper, we propose an encoder-decoder based model which takes an image as input and generates the corresponding Bangla caption as output. The encoder network consists of a pretrained image feature extractor called ResNet-50, while the decoder network consists of Bidirectional LSTMs for caption generation. The model has been trained and evaluated using a Bangla image captioning dataset named BanglaLekhaImageCaptions. The proposed model achieved a training accuracy of 91% and BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.81, 0.67, 0.57, and 0.51 respectively. Moreover, a comparative study of different pretrained feature extractors such as VGG-16 and Xception is presented. Finally, the proposed model has been deployed on an embedded device for analysing inference time and power consumption.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_86-CapNet_An_Encoder_Decoder_based_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Generative Neural Attention-based Service Chatbot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130885</link>
        <id>10.14569/IJACSA.2022.0130885</id>
        <doi>10.14569/IJACSA.2022.0130885</doi>
        <lastModDate>2022-08-29T10:10:22.9430000+00:00</lastModDate>
        
        <creator>Sinarwati Mohamad Suhaili</creator>
        
        <creator>Naomie Salim</creator>
        
        <creator>Mohamad Nazim Jambli</creator>
        
        <subject>Sequence-to-sequence; encoder-decoder; service chatbot; attention-based encoder-decoder; Recurrent Neural Network (RNN); Long Short-Term Memory (LSTM); Gated Recurrent Unit (GRU); word embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Companies constantly rely on customer support to deliver pre- and post-sale services to their clients through websites, mobile devices, or social media platforms such as Twitter. In assisting customers, companies employ virtual service agents (chatbots) to provide support via communication devices. The primary focus is to automate the generation of conversational chat between a computer and a human by constructing virtual service agents that can predict appropriate and automatic responses to customers’ queries. This paper aims to present and implement a seq2seq-based learning task model based on encoder-decoder architectural solutions by training generative chatbots on customer support Twitter datasets. The model is based on deep Recurrent Neural Network (RNN) structures with uni-directional and bi-directional encoder types of Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). The RNNs are augmented with an attention layer to focus on important information between input and output sequences. Word-level embeddings such as Word2Vec, GloVe, and FastText are employed as input to the model. Incorporating the base architecture, a comparative analysis is applied where baseline models are compared with and without the use of attention, as well as with different types of input embedding for each experiment. Bilingual Evaluation Understudy (BLEU) was employed to evaluate the models’ performance. Results revealed that while biLSTM performs better with GloVe, biGRU operates better with FastText. Thus, the findings significantly indicated that the attention-based, bi-directional RNN (LSTM or GRU) models outperformed baseline approaches in their BLEU scores, showing promise for future work.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_85-A_Comparative_Analysis_of_Generative_Neural_Attention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Neural Network based Detection System for the Visual Diagnosis of the Blackberry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130884</link>
        <id>10.14569/IJACSA.2022.0130884</id>
        <doi>10.14569/IJACSA.2022.0130884</doi>
        <lastModDate>2022-08-29T10:10:22.9100000+00:00</lastModDate>
        
        <creator>Alejandro Rubio</creator>
        
        <creator>Carlos Avendano</creator>
        
        <creator>Fredy Martinez</creator>
        
        <subject>Automatic sorter; blackberry; deep neural network; fruit handling; image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Thanks to its geographical and climatic advantages, Colombia has a historically strong fruit-growing tradition. To date, the food and economic development of a significant part of its territory is based on a wide range of fruits. One of the most important in the central and western regions of the country is the blackberry, which is rooted not only economically and nutritionally but also culturally. For the departments of Casanare, Santander, and Cundinamarca, this fruit is one of the primary sources of income, rural employment, and food supply. However, small and medium farmers cultivate without access to technological production tools and with limited economic capacity. Cultivation suffers from several problems that affect the whole plant, especially the fruit, which is strongly impacted by fungi, extreme ripening processes, or low temperatures. One of the main problems to be dealt with in its cultivation is the spread of pests, which are one of the causes of fruit rot. As a support strategy for producing this fruit, the development of an embedded system for visually diagnosing the fruit using a deep neural network is proposed. The article presents the training, tuning, and performance evaluation of this convolutional network to detect three possible fruit states (ripe, immature, and rotten) in order to facilitate the harvesting and marketing processes and reduce the impact on healthy fruit and the quality of the final product. The model is built with a ResNet-type network trained on its own dataset, which uses images captured in their natural environment with as little manipulation as possible to reduce image analysis. This model achieves an accuracy of 70%, which indicates its high performance and validates its use in a stand-alone embedded system.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_84-A_Deep_Neural_Network_based_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application: A Proposal for the Inventory Management of Pharmaceutical Industry Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130883</link>
        <id>10.14569/IJACSA.2022.0130883</id>
        <doi>10.14569/IJACSA.2022.0130883</doi>
        <lastModDate>2022-08-29T10:10:22.8970000+00:00</lastModDate>
        
        <creator>Alfredo Leonidas Vasquez Ubaldo</creator>
        
        <creator>Juan Andres Berrios Albines</creator>
        
        <creator>Jose Luis Herrera Salazar</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Inventory management; mobile application; pharmaceutical industry; prototype; RUP methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In recent years, the development of mobile applications has been evolving and becoming more and more frequent. This is positive, since such applications play an important role in facing and mitigating the many adversities that appear in different sectors, such as business. On the other hand, it was detected that a little-known problem experienced by many companies in the pharmaceutical industry is poor inventory management, which causes numerous consequences, generally of a negative nature. For this reason, this work develops a mobile application prototype to face this problem. In this regard, the RUP methodology was used, along with various computer tools, to elaborate the prototype. As a data collection technique, surveys were conducted and subjected to expert judgment in order to assess the prototype. Very satisfactory results were obtained, concluding that the developed mobile application prototype complies with all the necessary conditions to mitigate the inventory management problems of pharmaceutical industry companies.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_83-Mobile_Application_A_Proposal_for_the_Inventory_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Deep Learning Technique to Improve Resolution of Low-Quality Finger Print Image for Bigdata Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130882</link>
        <id>10.14569/IJACSA.2022.0130882</id>
        <doi>10.14569/IJACSA.2022.0130882</doi>
        <lastModDate>2022-08-29T10:10:22.7570000+00:00</lastModDate>
        
        <creator>Lisha P P</creator>
        
        <creator>Jayasree V K</creator>
        
        <subject>Single image super-resolution; convolution neural network; biometric; fingerprint images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>High-resolution images are highly in demand when they are utilized for different analysis purposes, and obviously for the aesthetic visual impact of their quality. The objective of image super-resolution (SR) is to reconstruct a high-resolution (HR) image from a low-resolution (LR) image. Storing, transferring, and processing HR images raise many practical issues in the big data domain. In the case of fingerprint images, the data volume is huge because of the large population involved. So, instead of transferring or storing these fingerprint images in their original HR form, it costs far less to choose their low-resolution form. Using sampling techniques, we can easily generate LR images, but the main problem is to regenerate the HR image from these LR images. This paper addresses this problem: a novel method is proposed for enhancing the resolution of low-resolution fingerprint images of size 50 x 50 to high-resolution images of size 400 x 400 using a convolutional neural network (CNN) architecture followed by a sub-pixel convolution operation for upsampling, with no loss of the promising features available in the low-resolution image. The proposed model contains five convolutional layers, each of which has an appropriate number of filter channels, activation functions, and optimization functions. The proposed model was trained using three publicly accessible fingerprint datasets, FVC 2004 DB1, DB2, and DB3, with validation and testing done using 10 percent of these fingerprint datasets. In terms of performance measures like Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index (SSIM), and loss functions, the quantitative and qualitative results show that the proposed model greatly outperformed existing state-of-the-art techniques like the Enhanced Deep Residual Network (EDSR), Wide Activation for Image and Video SR (WDSR), Generative Adversarial Network (GAN)-based models, and autoencoder-based models.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_82-Novel_Deep_Learning_Technique_to_Improve_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A GIS and Fuzzy-based Model for Identification and Analysis of Accident Vulnerable Locations in Urban Traffic Management System: A Case Study of Bhubaneswar</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130881</link>
        <id>10.14569/IJACSA.2022.0130881</id>
        <doi>10.14569/IJACSA.2022.0130881</doi>
        <lastModDate>2022-08-29T10:10:22.7400000+00:00</lastModDate>
        
        <creator>Sarita Mahapatra</creator>
        
        <creator>Krishna Chandra Rath</creator>
        
        <creator>Satya Ranjan Das</creator>
        
        <subject>Road traffic management; accident vulnerability; GIS; fuzzy inference; fuzzy rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Over the last two to three decades, the world has seen road accidents and their related societal and economic impact as one of the most persistent ongoing problems, with particular prominence in the developing countries of the Asian subcontinent. With no exception, all major cities in India face various challenges related to road accidents, mostly due to their large population densities. Among the major cities in India, Bhubaneswar is a very fast-growing city that aims to be the most livable city in Asia in the coming few years. With the city of Bhubaneswar as our study area, we address issues related to road safety by determining the degree of severity of road accidents. We study accident-related data collected over the last decade using the spatial tools of a Geographical Information System (GIS). Then, using a GIS-map-based analysis and a fuzzy-based model, we determine the spatio-temporal distribution of accident-vulnerable locations along with their degree of severity. Our experimental results show the accident hot-spots with values of selected contributing parameters such as timing, traffic density, vehicle speed, and road intersections.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_81-A_GIS_and_Fuzzy_based_Model_for_Identification_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deepfakes on Retinal Images using GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130880</link>
        <id>10.14569/IJACSA.2022.0130880</id>
        <doi>10.14569/IJACSA.2022.0130880</doi>
        <lastModDate>2022-08-29T10:10:22.7100000+00:00</lastModDate>
        
        <creator>Yalamanchili Salini</creator>
        
        <creator>J HariKiran</creator>
        
        <subject>DeepFakes; deep learning; retinal fundus image synthesis; segmentation; generative adversarial network (GAN); variational autoencoder (VAE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In Deep Learning (DL), Generative Adversarial Networks (GANs) are a popular technique for generating synthetic images, which requires extensive and balanced datasets for training. These artificial intelligence systems can produce synthetic images that seem authentic, known as DeepFakes. At present, data-driven approaches to classifying medical images are prevalent. However, most medical data is inaccessible to general researchers due to standard consent forms that restrict research to medical journals or education. Our study focuses on GANs, which can create artificial fundus images that are indistinguishable from actual fundus images. Before using these fake images, it is essential to investigate privacy concerns and hallucinations thoroughly; reviewing the current applications and limitations of GANs is equally important. In this work, we present the Cycle-GAN framework, a new GAN network for medical imaging that focuses on the generation and segmentation of retinal fundus images. The DRIVE retinal fundus image dataset is used to evaluate the proposed model’s performance, achieving an accuracy of 98.19%.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_80-Deepfakes_on_Retinal_Images_using_GAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Low-cost Posture and Bluetooth Controlled Robot for Disabled and Virus Affected People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130879</link>
        <id>10.14569/IJACSA.2022.0130879</id>
        <doi>10.14569/IJACSA.2022.0130879</doi>
        <lastModDate>2022-08-29T10:10:22.6930000+00:00</lastModDate>
        
        <creator>Tajim Md. Niamat Ullah Akhund</creator>
        
        <creator>Mosharof Hossain</creator>
        
        <creator>Khadizatul Kubra</creator>
        
        <creator>Nurjahan</creator>
        
        <creator>Alistair Barros</creator>
        
        <creator>Md Whaiduzzaman</creator>
        
        <subject>Internet of Things (IoT); Raspberry Pi; Pi camera; Pi robots; hospital robots; posture recognition robots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>IoT-based robots can help people to a great extent. This work results in a low-cost posture-recognizing robot that can detect posture signs from a disabled or virus-affected person and move accordingly. The robot can take images with the Raspberry Pi camera and process them to identify the posture with our designed algorithm. In addition, it can also take instructions via Bluetooth from smartphone apps. The robot can move 360 degrees depending on the posture input or Bluetooth instructions. This system can assist disabled people who can move only a few organs. Moreover, it can assist virus-affected persons, as they can instruct the robot without touching it. Finally, the robot can collect data from a distant place and send it to a cloud server without spreading the virus.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_79-IoT_based_Low_cost_Posture_and_Bluetooth_Controlled_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approximate TSV-based 3D Stacked Integrated Circuits by Inexact Interconnects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130878</link>
        <id>10.14569/IJACSA.2022.0130878</id>
        <doi>10.14569/IJACSA.2022.0130878</doi>
        <lastModDate>2022-08-29T10:10:22.6600000+00:00</lastModDate>
        
        <creator>Mahmoud S. Masadeh</creator>
        
        <subject>Approximate computing; Three-Dimensional Integrated Circuit (3D IC); Through-Silicon Via (TSV); testing; approximate communications; approximate interconnect; yield; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Three-Dimensional Stacked Integrated Circuits (3D-SICs) based on Through-Silicon Vias (TSVs) provide a high-density integration technology. However, integrating pre-tested dies requires post-bond interconnect testing, which is complex and costly. An imperfect TSV-based interconnect indicates a defective chip that should be rejected, which increases yield loss and test cost. On the other hand, approximate computing (AC) is a promising design paradigm suitable for error-resilient applications, e.g., processing sensory-generated data, that judiciously sacrifices output accuracy. AC performs inexact operations and accepts inexact data. Thus, introducing AC into 3D-SICs will significantly improve the efficiency of design approximation. Therefore, this work aims to increase yield and reduce test cost by accepting 3D-SICs with defective interconnects as approximate 3D-SICs. This work considers 3D-SICs in which a sensor is stacked on logic (CPU), which is stacked on memory (DRAM). The memory-based interconnect testing (MBIT) approach is then used to detect and diagnose the faulty interconnects. Based on the detected fault location and type, and for a maximum allowed error, some sensory 3D-SICs with defective least-significant-bit interconnects are accepted and used in error-resilient and data-intensive applications. Targeting data lines only, 50% of the defective interconnects, i.e., least significant bits (LSBs), were accepted as approximate, allowing the proposed work to significantly increase the yield. Two applications, i.e., ECG signal compression and detection of R peaks, demonstrated the effectiveness of using a sensory device with a faulty data line in its least significant 8 bits. The approximate ECG signals have a compression rate higher than the exact signals, with a negligible (around 0.1%) reduction in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_78-Approximate_TSV_based_3D_Stacked_Integrated_Circuits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Duality at Classical Electrodynamics and its Interpretation through Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130877</link>
        <id>10.14569/IJACSA.2022.0130877</id>
        <doi>10.14569/IJACSA.2022.0130877</doi>
        <lastModDate>2022-08-29T10:10:22.6470000+00:00</lastModDate>
        
        <creator>Huber Nieto-Chaupis</creator>
        
        <subject>Classical electrodynamics; quantum mechanics; machine learning principles; Mitchell’s criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The aim of this paper is to investigate the hypothetical duality of classical electrodynamics and quantum mechanics through the use of Machine Learning principles; thus, Mitchell’s criteria are used. Essentially, this paper focuses on the energy radiated by a free electron inside an intense laser. The usage of mathematical strategies might be correct to some extent, so that one expects the classical equation to contain a dual meaning. The concrete case of Compton scattering is analyzed. While some quantum field theories might not be amenable to scrutiny by computer algorithms, Quantum Electrodynamics, by contrast, would constitute a robust example.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_77-Duality_at_Classical_Electrodynamics_and_its_Interpretation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>English and Arabic Chatbots: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130876</link>
        <id>10.14569/IJACSA.2022.0130876</id>
        <doi>10.14569/IJACSA.2022.0130876</doi>
        <lastModDate>2022-08-29T10:10:22.6300000+00:00</lastModDate>
        
        <creator>Abeer S. Alsheddi</creator>
        
        <creator>Lubna S. Alhenaki</creator>
        
        <subject>Chatbots; Arabic language; development approaches; domain applications; evaluation metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In recent years, the availability of chatbot applications has increased substantially with the advancement of artificial intelligence techniques, and research efforts have been active in the English language, which presents state-of-the-art solutions. However, despite the popularity of the Arabic language, its research community is still at an immature stage. Therefore, the main objective of this systematic literature review is to study state-of-the-art research – for both the English and Arabic languages – to answer the proposed research questions regarding the development approaches, application domains, evaluation metrics, and development challenges of chatbot applications. The findings show that researchers have devoted more attention to the education domain using retrieval-based approaches, while the generation-based approach has recently grown in popularity for tasks requiring new responses. A hybrid approach that combines both previous approaches to rank multiple candidate responses shows a performance improvement. Besides, most metrics used to evaluate chatbot performance are human-based, followed by bilingual evaluation understudy and accuracy metrics. However, defining a common framework for evaluating chatbots remains a challenge. Finally, open problems and future directions are highlighted to help develop chatbots that simulate natural conversations with minimal human interference.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_76-English_and_Arabic_Chatbots_A_Systematic_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Pre-Conditioning and Quality Enhancement to Handle Different Data Complexities in Contactless Fingerprint Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130875</link>
        <id>10.14569/IJACSA.2022.0130875</id>
        <doi>10.14569/IJACSA.2022.0130875</doi>
        <lastModDate>2022-08-29T10:10:22.6130000+00:00</lastModDate>
        
        <creator>Deepika K C</creator>
        
        <creator>G Shivakumar</creator>
        
        <subject>Ridge orientation; Gabor filtering; region masking; ridge frequency; contactless fingerprint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Biometric authentication systems have always been a fascinating approach to personalized security. Among the major existing solutions, fingerprint biometrics have gained widespread attention; yet, guaranteeing scalability and reliability under real-time demands remains a challenge. Despite innovations, the recent COVID-19 pandemic has capped the efficacy of the existing touch-based two-dimensional fingerprint detection models. Though touchless fingerprint detection is considered a viable alternative, real-time data complexities like non-linear textural patterns, dust, and non-uniform local conditions such as illumination, contrast, and orientation make it complex to realize. Moreover, the likelihood of ridge discontinuity and spatio-temporal texture damage can limit its efficacy. Considering these complexities, we focus here on improving the intrinsic feature characteristics of the input image. More specifically, we applied normalization, ridge orientation estimation, ridge frequency estimation, ridge masking, and Gabor filtering to the input touchless fingerprint images. The proposed model mainly focuses on reducing FPR &amp; EER by dividing the input image into blocks and classifying each input block as recoverable or non-recoverable. Finally, an image with more recoverable blocks and sufficiently large intrinsic features was considered for feature extraction and classification. The proposed method outperforms existing state-of-the-art methods, achieving an accuracy of 94.72%, precision of 98.84%, recall of 97.716%, F-Measure of 0.9827, specificity of 95.38%, and a reduced EER of about 0.084.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_75-Local_Pre_Conditioning_and_Quality_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building and Testing Fine-Grained Dataset of COVID-19 Tweets for Worry Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130874</link>
        <id>10.14569/IJACSA.2022.0130874</id>
        <doi>10.14569/IJACSA.2022.0130874</doi>
        <lastModDate>2022-08-29T10:10:22.5830000+00:00</lastModDate>
        
        <creator>Tahani Soud Alharbi</creator>
        
        <creator>Fethi Fkih</creator>
        
        <subject>COVID-19; sentiment analysis; emotion analysis; worry dataset; concern analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The COVID-19 outbreak has resulted in the loss of human life worldwide and has increased worry concerning life, public health, the economy, and the future. With lockdown and social distancing measures in place, people turned to social media such as Twitter to share their feelings and concerns about the pandemic. Several studies have focused on analyzing Twitter users’ sentiments and emotions. However, little work has focused on worry detection at a fine-grained level due to the lack of adequate datasets. The worry emotion is associated with notions such as anxiety, fear, and nervousness. In this study, we built a dataset for worry emotion classification called “WorryCov”. It is a relatively large dataset derived from Twitter concerning worry about COVID-19. The data were annotated into three levels (“no-worry”, “worry”, and “high-worry”). Using the annotated dataset, we investigated the performance of different machine learning (ML) algorithms, including multinomial Na&#239;ve Bayes (MNB), support vector machine (SVM), logistic regression (LR), and random forests (RF). The results show that LR was the optimal approach, with an accuracy of 75%. Furthermore, the results indicate that the proposed model could be used by psychologists and researchers to predict Twitter users’ worry levels during COVID-19 or similar crises.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_74-Building_and_Testing_Fine_Grained_Dataset_of_COVID_19_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid 1D-CNN-Bi-LSTM based Model with Spatial Dropout for Multiple Fault Diagnosis of Roller Bearing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130873</link>
        <id>10.14569/IJACSA.2022.0130873</id>
        <doi>10.14569/IJACSA.2022.0130873</doi>
        <lastModDate>2022-08-29T10:10:22.5670000+00:00</lastModDate>
        
        <creator>Gangavva Choudakkanavar</creator>
        
        <creator>J. Alamelu Mangai</creator>
        
        <subject>Fault diagnosis; roller bearing; deep learning; 1D-CNN; Bi-LSTM; spatial dropout</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Fault diagnosis of roller bearings is a crucial and challenging task to ensure the smooth functioning of modern industrial machinery under varying load conditions. Traditional fault diagnosis methods involve preprocessing of the vibration signals and manual feature extraction. This requires domain expertise and experience in extracting relevant features to accurately detect the fault. Hence, it is of great significance to implement an intelligent fault diagnosis method that involves appropriate automatic feature learning and fault identification. Recent research has shown that deep learning is an effective technique for fault diagnosis. In this paper, a hybrid model based on 1D-CNN (One-Dimensional Convolutional Neural Networks) with Bi-LSTM (Bidirectional Long Short-Term Memory) is proposed to classify 12 different fault types. Firstly, vibration signals are given as input to the 1D-CNN to extract intrinsic features from the input signals. Then, the extracted features are fed into a Bi-LSTM model to identify the faults. The performance of the proposed method is enhanced by applying the Softsign activation function in the Bi-LSTM layer and Spatial Dropout in the neural network. To analyze the effectiveness of the proposed method, Case Western Reserve University (CWRU) bearing data is considered for experimentation. The results demonstrated that the proposed model attained an accuracy of 99.84% in classifying the various faults. The superiority of the proposed method is verified by comparing its predictive accuracy with that of existing fault diagnosis methods.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_73-A_Hybrid_1D_CNN_Bi_LSTM_based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cross Platform Contact Tracing Mobile Application for COVID-19 Infections using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130872</link>
        <id>10.14569/IJACSA.2022.0130872</id>
        <doi>10.14569/IJACSA.2022.0130872</doi>
        <lastModDate>2022-08-29T10:10:22.5530000+00:00</lastModDate>
        
        <creator>Josephat Kalezhi</creator>
        
        <creator>Mathews Chibuluma</creator>
        
        <creator>Christopher Chembe</creator>
        
        <creator>Victoria Chama</creator>
        
        <creator>Francis Lungo</creator>
        
        <creator>Douglas Kunda</creator>
        
        <subject>Contact tracing mobile application; coronavirus; COVID-19; deep neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The COVID-19 pandemic has remained a global health crisis following the declaration by the World Health Organization. As a result, a number of mechanisms to contain the pandemic have been devised. Popular among these is contact tracing, which identifies contacts and carries out tests on them in order to minimize the spread of the coronavirus. However, manual contact tracing is tedious and time consuming. Therefore, contact tracing based on mobile applications has been proposed in the literature. In this paper, a cross-platform contact tracing mobile application that uses deep neural networks to determine contacts in proximity is presented. The application uses Bluetooth Low Energy technology to detect proximity to a COVID-19-positive case. The deep learning model has been evaluated against analytic models and machine learning models, and it performed better than both analytic and traditional machine learning models during testing.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_72-A_Cross_Platform_Contact_Tracing_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovation Management Model as a Source of Business Competitiveness for Industrial SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130871</link>
        <id>10.14569/IJACSA.2022.0130871</id>
        <doi>10.14569/IJACSA.2022.0130871</doi>
        <lastModDate>2022-08-29T10:10:22.5200000+00:00</lastModDate>
        
        <creator>Rafael Roosell Paez Advincula</creator>
        
        <creator>Celso Gonzales Chavesta</creator>
        
        <creator>Lilian Ocares-Cunyarachi</creator>
        
        <subject>Competitiveness; continuous improvement; management; optimization; strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>One of the main problems of small companies is not knowing how to properly manage investments, resources, strategies, tools, and responsibilities in order to be more competitive. The research objective is to develop the management and innovation processes that a new company requires for its permanence in the market and for the decision-making it must carry out every day, as small companies face competition from newly emerging products. Today there is a great variety of globally interconnected products, so a structural equation model is required for their management, enabling the company to continuously improve and optimize its available resources. The result is a management approach focused on greater effectiveness of resources: small businesses need a model for managing investments, resources, tools, and responsibilities in order to withstand market competition, which would orient them toward sustainable development and keep them one step ahead of the competition.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_71-Innovation_Management_Model_as_a_Source_of_Business_Competitiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MFCC and Texture Descriptors based Stuttering Dysfluencies Classification using Extreme Learning Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130870</link>
        <id>10.14569/IJACSA.2022.0130870</id>
        <doi>10.14569/IJACSA.2022.0130870</doi>
        <lastModDate>2022-08-29T10:10:22.5070000+00:00</lastModDate>
        
        <creator>Roohum Jegan</creator>
        
        <creator>R. Jayagowri</creator>
        
        <subject>Voice disorder; Mel-frequency cepstral coefficients; gray level co-occurrence matrix; GLRLM; Laplacian score; extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Stuttering is a type of speech disorder that results in disrupted flow of speech in the form of unintentional repetitions and prolongations of sounds. Stuttering classification is important for speech pathology treatment and speech therapy techniques, which decrease speech disfluency to some extent. In this article, a method for prolongation and repetition classification is presented based on Mel-frequency cepstral coefficients (MFCC) and texture descriptors. Initially, the MFCC and filter bank energy (FBE) matrices are computed. Gray level co-occurrence matrix (GLCM) and gray level run length matrix (GLRLM) textural features are then extracted from these matrices. A Laplacian score-based feature selection approach is employed to choose relevant features. Finally, an extreme learning machine (ELM) is utilized to classify the speech audio event as repetition or prolongation. The algorithm is evaluated using the UCLASS database and has achieved improved performance, with a classification accuracy of 96.36%.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_70-MFCC_and_Texture_Descriptors_based_Stuttering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Model to Detect COVID-19 Coughing and Breathing Sound Symptoms Classification from CQT and Mel Spectrogram Image Representation using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130869</link>
        <id>10.14569/IJACSA.2022.0130869</id>
        <doi>10.14569/IJACSA.2022.0130869</doi>
        <lastModDate>2022-08-29T10:10:22.4730000+00:00</lastModDate>
        
        <creator>Mohammed Aly</creator>
        
        <creator>Nouf Saeed Alotaibi</creator>
        
        <subject>COVID-19; median filter; deep learning; Mel-scale spectrogram; sound classification; constant-Q Transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Deep Learning is a relatively new Artificial Intelligence technique that has proven extremely effective in a variety of fields, including image categorization and the identification of artefacts in images for visual recognition. The goal of this study is to recognize COVID-19 artefacts, such as cough and breath sounds, in signals from real-world situations. The suggested strategy consists of two major steps. The first step is a signal-to-image translation aided by the Constant-Q Transform (CQT) and a Mel-scale spectrogram method. Next, nine deep transfer models (GoogleNet, ResNet18/34/50/100/101, SqueezeNet, MobileNetv2, and NasNetmobile) are used to extract and categorise features. The recorded voice represents the digital audio signal. The CQT transforms a time-domain audio input into a frequency-domain signal; to produce a spectrogram, the frequency is converted to a log scale and the colour dimension to decibels. The spectrogram is then mapped onto a Mel scale to construct a Mel spectrogram. The dataset contains information from over 1,600 people from all over the world (1,185 men and 415 women). The suggested DL model takes as input the CQT and Mel-scale spectrograms derived from the breathing and coughing sounds of patients diagnosed using the Coswara combined dataset. With better classification performance using cough sound CQT and Mel-spectrogram images, the current proposal outperformed the other nine CNN networks. For diagnosed patients, the accuracy, sensitivity, and specificity were 98.9%, 97.3%, and 98.1%, respectively. ResNet18 is the most reliable network for symptomatic patients using cough and breath sounds. When applied to the Coswara dataset, we found that the suggested model&#39;s accuracy (98.7%) outperforms that of state-of-the-art models (85.6%, 72.9%, 87.1%, and 91.4%) under the SGDM optimizer. Finally, the research is compared with a comparable investigation. The suggested model is more stable and reliable than any present model, and the precision of the cough and breathing analysis is sufficient to test its extrapolation and generalization abilities. As a result, this novel method may be used as a primary screening tool to identify COVID-19 by prioritising patients&#39; RT-PCR testing and decreasing the chance of disease transmission.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_69-A_New_Model_to_Detect_COVID_19_Coughing_and_Breathing_Sound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A 4-Layered Plan-Driven Model (4LPdM) to Improve Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130868</link>
        <id>10.14569/IJACSA.2022.0130868</id>
        <doi>10.14569/IJACSA.2022.0130868</doi>
        <lastModDate>2022-08-29T10:10:22.4600000+00:00</lastModDate>
        
        <creator>Kamal Uddin Sarker</creator>
        
        <creator>Aziz Bin Deraman</creator>
        
        <creator>Raza Hasan</creator>
        
        <creator>Ali Abbas</creator>
        
        <subject>Software development methodology; project management; 4-layered plan-driven model; quality factors; sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Quality is the degree of excellence of a product and one of the most important factors of software projects, largely defining user satisfaction and project success. Software methodologies prescribe a variety of tasks, processes, and roles to manage time, cost, and quality. The invention, innovation, and diffusion of technological advancement create challenges for software projects; thus several methodologies exist, albeit each with limited scope. A software product is highly influenced by the latest technology and by distributed project management opportunities, and management issues arise in a virtual project management environment when resource persons are in another corner of the world. To resolve this problem, this research presents a new software project management model (4-LPdM) with alternative actions and practices for effective management. The model was presented to 20 different organizations, and 29 respondents with between 1 and 16 years of experience in multiple areas of software engineering gave feedback. The model is evaluated based on the factors of advanced PMBOK 4.0 (scope, cost, quality, resource, risk, plan) and two additional features (management, sustainability) included at the demand of the experts. This research presents statistical analyses examining the significance of the proposed model, alongside a comprehensive comparative study against traditional methodologies.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_68-A_4_Layered_Plan_Driven_Model_4LPdM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inclusive Study of Fake News Detection for COVID-19 with New Dataset using Supervised Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130867</link>
        <id>10.14569/IJACSA.2022.0130867</id>
        <doi>10.14569/IJACSA.2022.0130867</doi>
        <lastModDate>2022-08-29T10:10:22.4430000+00:00</lastModDate>
        
        <creator>Emad K. Qalaja</creator>
        
        <creator>Qasem Abu Al-Haija</creator>
        
        <creator>Afaf Tareef</creator>
        
        <creator>Mohammad M. Al-Nabhan</creator>
        
        <subject>Machine learning; fake news; Twitter; COVID-19; correlation coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>COVID-19 imposed many bans and restrictions on individuals and organizations, and social networks have thus become one of the most used platforms for sharing and disseminating news, which can be either fake or true. Therefore, detecting fake news has become imperative and has drawn the attention of researchers to develop approaches for understanding and classifying news content. The focus was on the Twitter platform because it is one of the most used platforms for sharing and disseminating information among many organizations, personalities, news agencies, and satellite stations. In this research, we attempt to improve the detection of fake news by employing supervised machine learning techniques on our newly developed dataset. Specifically, the proposed system categorizes fake news related to COVID-19 extracted from the Twitter platform using four machine learning-based models, including decision tree (DT), Na&#239;ve Bayes (NB), artificial neural network (ANN), and k-nearest neighbors (KNN) classifiers. Besides, the developed detection models were evaluated on our new dataset, which we extracted from Twitter in a real-time process, using standard evaluation metrics such as detection accuracy (ACC), F1-score (FSC), area under the curve (AUC), and Matthew&#39;s correlation coefficient (MCC). In the first set of experiments, which employed the full dataset (i.e., 14,000 tweets), our experimental evaluation reported that the DT-based detection model achieved the highest detection performance, scoring 99.0%, 96.0%, 98.0%, and 90.0% in ACC, FSC, AUC, and MCC, respectively. The second set of experiments employed the small dataset (i.e., 700 tweets); our experimental evaluation reported that the DT-based detection model achieved the highest detection performance, scoring 89.5%, 89.5%, 93.0%, and 80.0% in ACC, FSC, AUC, and MCC, respectively. The results for all experiments were generated using the best-selected features.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_67-Inclusive_Study_of_Fake_News_Detection_for_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grover’s Algorithm for Data Lake Optimization Queries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130866</link>
        <id>10.14569/IJACSA.2022.0130866</id>
        <doi>10.14569/IJACSA.2022.0130866</doi>
        <lastModDate>2022-08-29T10:10:22.4100000+00:00</lastModDate>
        
        <creator>Mohamed CHERRADI</creator>
        
        <creator>Anass EL HADDADI</creator>
        
        <subject>Big data; data management; information retrieval; quantum computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, the use of NoSQL databases is one of the potential options for storing and processing big data lakes. However, searching for large data in NoSQL databases is a complex and time-consuming task, and information retrieval in big data management suffers in terms of execution time. To reduce the execution time of the search process, we propose a fast and suitable approach based on the quantum Grover algorithm, one of the best-known approaches for searching an unstructured database, which resolves the unsorted search query in O(√n) time complexity. To assess our proposal, a comparative study with linear and binary search algorithms was conducted to prove the effectiveness of Grover&#39;s algorithm. We then performed extensive experimental evaluations based on the ibm_qasm_simulator, searching for one item out of eight using Grover’s search algorithm with three qubits. The experimental outcomes revealed encouraging results, with an accuracy of 0.948, well in accordance with the theoretical result. Moreover, a discussion of the sensitivity of Grover&#39;s algorithm across different iteration counts was carried out: exceeding the optimal number of iterations, round(π/4 √N), induces low accuracy of the marked state, and an incorrect selection of this parameter can compromise the solution.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_66-Grovers_Algorithm_for_Data_Lake_Optimization_Queries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble of Arabic Transformer-based Models for Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130865</link>
        <id>10.14569/IJACSA.2022.0130865</id>
        <doi>10.14569/IJACSA.2022.0130865</doi>
        <lastModDate>2022-08-29T10:10:22.3970000+00:00</lastModDate>
        
        <creator>Ikram El Karfi</creator>
        
        <creator>Sanaa El Fkihi</creator>
        
        <subject>Transformers; BERT; ensemble learning; Arabic sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In recent years, sentiment analysis has gained momentum as a research area. This task aims at identifying the opinion that is expressed in a subjective statement. An opinion is a subjective expression describing personal thoughts and feelings, which can be assigned a certain sentiment; the most studied sentiments are positive, negative, and neutral. Since the introduction of the attention mechanism in machine learning, sentiment analysis techniques have evolved from recurrent neural networks to transformer models. Transformer-based models are encoder-decoder systems with attention. The attention mechanism has permitted models to consider only the relevant parts of a given sequence, and making use of this feature in the encoder-decoder architecture has improved the performance of transformer models in several natural language processing tasks, including sentiment analysis. A significant number of Arabic transformer-based models have been pre-trained recently to perform Arabic sentiment analysis tasks. Most of these models are based on Bidirectional Encoder Representations from Transformers (BERT), such as AraBERT, CAMeLBERT, Arabic ALBERT, and GigaBERT. Recent studies have confirmed the effectiveness of this type of model in Arabic sentiment analysis. Thus, in this work, two transformer-based models, namely AraBERT and CAMeLBERT, have been experimented with. Furthermore, an ensemble model has been implemented to achieve more reasonable performance.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_65-An_Ensemble_of_Arabic_Transformer_based_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Low-Light Image using Homomorphic Filtering, Unsharp Masking, and Gamma Correction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130864</link>
        <id>10.14569/IJACSA.2022.0130864</id>
        <doi>10.14569/IJACSA.2022.0130864</doi>
        <lastModDate>2022-08-29T10:10:22.3800000+00:00</lastModDate>
        
        <creator>Tan Wan Yin</creator>
        
        <creator>Kasthuri A/P Subaramaniam</creator>
        
        <creator>Abdul Samad Bin Shibghatullah</creator>
        
        <creator>Nur Farraliza Mansor</creator>
        
        <subject>Low-light image; gamma correction; homomorphic filtering; low-light enhancement; unsharp masking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, digital images can be found almost everywhere, and digital image processing plays a huge role in analyzing and enhancing images so that they can be delivered in good condition. Color distortion and loss of image detail are common problems faced by low-light image enhancement methods. This paper introduces a low-light image enhancement method that applies the concepts of homomorphic filtering, unsharp masking, and gamma correction. The aim of the proposed method is to minimize the two problems stated while producing images of better quality than other low-light image enhancement methods. An objective evaluation was conducted, comparing the results produced by the proposed method with those of two other existing low-light image enhancement methods. The results showed that the proposed method outperforms the two existing methods in maintaining image details and producing a natural-looking image, achieving the lowest Mean Square Error (MSE) and Lightness Order Error (LOE) scores and the highest Feature Similarity Index color (FSIMc), Feature Similarity Index (FSIM), Structural Similarity Index (SSIM), and Visual Information Fidelity (VIF) scores. Future work on this research includes implementing dehazing and denoising functionality for low-light images, as well as enabling the method to be applied in real-time scenarios.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_64-Enhancement_of_Low_Light_Image_using_Homomorphic_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Function Integration and a Case Study with Gompertz Functions for Covid-19 Waves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130863</link>
        <id>10.14569/IJACSA.2022.0130863</id>
        <doi>10.14569/IJACSA.2022.0130863</doi>
        <lastModDate>2022-08-29T10:10:22.3500000+00:00</lastModDate>
        
        <creator>Oliver Amadeo Vilca-Huayta</creator>
        
        <creator>Ubaldo Yancachajlla Tito</creator>
        
        <subject>Covid-19; coronavirus; function integration; Gompertz model; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Numerical algorithms are widely used in different applications; therefore, the execution time of the functions involved in numerical algorithms is important and, in some cases, decisive, for example, in machine learning algorithms. Given a finite set of independent functions A(x), B(x), ..., Z(x) with domains defined by disjoint, consecutive, and not necessarily adjacent intervals, the main goal is to integrate them into a single function F(x) = k1&#215;A(x) + k2&#215;B(x) + … + kn&#215;Z(x), where each activation coefficient k is one if x is in the interval of the respective domain and zero otherwise. The novelty of this work is the presentation and formal demonstration of two general forms of integrating the functions into a single function: the first is the mathematical version and the second is the computational version (with the AND function at the bit level), which is characterized by its efficiency. The result is applied in a case study (Peru), where two regression functions were obtained that integrate all the waves of Covid-19, that is, the epidemic curve of the global number of deaths/infected per day. The fit provided a highly statistically significant measure of correlation: Pearson&#39;s product-moment correlations of 0.96 and 0.98, respectively. Finally, the size of the epidemic was projected for the next 30 days.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_63-Efficient_Function_Integration_and_a_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>J-Selaras: New Algorithm for Automated Data Integration Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130862</link>
        <id>10.14569/IJACSA.2022.0130862</id>
        <doi>10.14569/IJACSA.2022.0130862</doi>
        <lastModDate>2022-08-29T10:10:22.3330000+00:00</lastModDate>
        
        <creator>Mustafa Man</creator>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Mohd. Kamir Yusof</creator>
        
        <creator>Norisah Abdul Ghani</creator>
        
        <creator>Mohd Adza Arshad</creator>
        
        <creator>Raja Normawati Raja Ayob</creator>
        
        <creator>Kamarul Azhar Mahmood</creator>
        
        <creator>Faizul Azwan Ariffin</creator>
        
        <creator>Mohamad Dizi Che Kadir</creator>
        
        <creator>Lily Mariya Rosli</creator>
        
        <creator>Nurhafiza Binti Abu Yaziz</creator>
        
        <subject>Automated data integration algorithm; bill of quantity (BQ); CostX; single and multiple sheets; J-Selaras tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Data integration is a popular technique today for converting and sharing data across applications with different database formats and locations. The interaction of data from one application system to another requires intermediary software, or middleware, that allows the data to be transferred or read systematically and easily. The development of dynamic algorithms allows data in various formats, whether structured or unstructured, to be transferred to various types of databases smoothly. A case study was conducted on Bill of Quantity (BQ) data in Excel format, generated through CostX software as a single-sheet Excel file and transferred to a single workbook with multiple sheets, with formulas generated automatically. An algorithm was therefore developed and tested through the development of the J-Selaras system. This algorithm can quickly remove noisy data or unrelated symbols from the single-sheet Excel (CostX) file and automatically transfer the data to multiple Excel sheets with macro formulas. The implementation results indicate a significant contribution, reducing the execution time of BQ processes and the manpower resources used.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_62-J_Selaras_New_Algorithm_for_Automated_Data_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach for Viral DNA Sequence Classification using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130861</link>
        <id>10.14569/IJACSA.2022.0130861</id>
        <doi>10.14569/IJACSA.2022.0130861</doi>
        <lastModDate>2022-08-29T10:10:22.3170000+00:00</lastModDate>
        
        <creator>Ahmed El-Tohamy</creator>
        
        <creator>Huda Amin Maghwary</creator>
        
        <creator>Nagwa Badr</creator>
        
        <subject>Deep learning; sequence classification; convolutional neural networks; genetic algorithm; sequence encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>DNA sequence classification is one of the major challenges in biological data processing. The identification and classification of novel viral genome sequences can drastically reduce the dangers of a viral outbreak such as COVID-19: the more accurately these viruses are classified, the faster a vaccine can be produced to counter them. More accurate methods should therefore be used to classify viral DNA. This research proposes a hybrid deep learning model for efficient viral DNA sequence classification. A genetic algorithm (GA) was used for weight optimization within a Convolutional Neural Network (CNN) architecture. Long Short-Term Memory (LSTM) and bidirectional CNN-LSTM model architectures were also employed. Encoding methods are needed to transform the DNA into numeric format for the proposed model; three encoding methods for representing DNA sequences as model input were evaluated: k-mer, label encoding, and one-hot vector encoding. Furthermore, an efficient oversampling method was applied to overcome imbalanced-dataset issues. The proposed GA-optimized CNN hybrid model using label encoding achieved the highest classification accuracy, 94.88%, among the evaluated encoding methods.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_61-A_Deep_Learning_Approach_for_Viral_DNA_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Surface Electromyography Signal Classification for the Detection of Temporomandibular Joint Disorder using Spectral Mapping Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130860</link>
        <id>10.14569/IJACSA.2022.0130860</id>
        <doi>10.14569/IJACSA.2022.0130860</doi>
        <lastModDate>2022-08-29T10:10:22.2870000+00:00</lastModDate>
        
        <creator>Bormane D. S</creator>
        
        <creator>Roopa B. Kakkeri</creator>
        
        <creator>R. B. Kakkeri</creator>
        
        <subject>Temporomandibular joint (TMJ); temporomandibular joint disorder (TMD); surface electromyography (sEMG); spectral mapping; discrete wavelet transform (DWT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Temporomandibular joint disorder (TMD) presents multifaceted and complex signs and symptoms that make an individual&#39;s day-to-day activities difficult. Electromyographic (EMG) processing of recordings from the related muscles could provide early and immediate detection of TMD. This study detects TMD using surface electromyography (sEMG) of the masseter and temporalis muscles with the discrete wavelet transform (DWT) and spectral coding. To analyze the data, a new feature selection approach in the spectral domain is proposed. SPSS version 24 was employed for the statistical analyses. The results of the study revealed that the proposed approach improved classification accuracy by combining the DWT with a Support Vector Machine (SVM), exhibiting a significant performance improvement with an accuracy of 93%. In addition, the statistical analysis revealed that the model improved the mean rank of the experimental and control groups.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_60-Surface_Electromyography_Signal_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Light Gradient Boosting with Hyper Parameter Tuning Optimization for COVID-19 Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130859</link>
        <id>10.14569/IJACSA.2022.0130859</id>
        <doi>10.14569/IJACSA.2022.0130859</doi>
        <lastModDate>2022-08-29T10:10:22.2700000+00:00</lastModDate>
        
        <creator>Ferda Ernawan</creator>
        
        <creator>Kartika Handayani</creator>
        
        <creator>Mohammad Fakhreldin</creator>
        
        <creator>Yagoub Abbker</creator>
        
        <subject>ROS; light gradient boosting; hyper parameter tuning; COVID-19 screening; blood and age based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The 2019 coronavirus disease (COVID-19) caused a pandemic and a huge number of deaths worldwide. COVID-19 screening is needed to identify suspected positive cases and can reduce the spread of the disease. The polymerase chain reaction (PCR) test for COVID-19 analyzes a respiratory specimen, while a blood test can also reveal people who have been infected with SARS-CoV-2. In addition, age contributes to susceptibility to COVID-19 transmission. This paper presents extra trees classification with random over-sampling, considering blood and age parameters, for COVID-19 screening. This research proposes enhanced data preprocessing using a KNN imputer to handle large numbers of missing values. The experiments evaluated existing classification methods such as Random Forest, Extra Trees, AdaBoost, and Gradient Boosting, together with the proposed Light Gradient Boosting with hyperparameter tuning, to predict patients infected with SARS-CoV-2. The experiments used test data from Albert Einstein Hospital in Brazil, consisting of 5,644 samples from 559 patients infected with SARS-CoV-2. The experimental results show that the proposed scheme achieves an accuracy of about 98.58%, recall of 98.58%, precision of 98.61%, F1-score of 98.61%, and AUC of 0.9682.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_59-Light_Gradient_Boosting_with_Hyper_Parameter_Tuning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power user Data Feature Matching Verification Model based on TSVM Semi-supervised Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130858</link>
        <id>10.14569/IJACSA.2022.0130858</id>
        <doi>10.14569/IJACSA.2022.0130858</doi>
        <lastModDate>2022-08-29T10:10:22.2570000+00:00</lastModDate>
        
        <creator>Yakui Zhu</creator>
        
        <creator>Rui Zhang</creator>
        
        <creator>Xiaoxiao Lu</creator>
        
        <subject>Power system; data matching; data characteristics; semi-supervised learning algorithm; load model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The existing model for identifying user data features based on smart meter data adopts a supervised learning method. Although the model identifies features well when index samples are sufficient, matching data are difficult to obtain in real life and labeling costs are high; identification accuracy is significantly reduced when matching data are insufficient or unavailable. In view of these problems, this paper proposes a feature recognition method for residential user data based on semi-supervised learning over smart meter data. Three indicators are used to evaluate the recognition performance of the proposed semi-supervised method for residential user data features and to find an appropriate feature selection method and data acquisition resolution. The role of the method is then explored for real-life settings where matching data are insufficient or unavailable. Experimental results show that the proposed semi-supervised learning algorithm performs better than the supervised learning algorithm, and its accuracy is better than or close to that of the supervised approach.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_58-Power_user_Data_Feature_Matching_Verification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Malicious Software in IoT Environment Based on Machine Learning and Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130857</link>
        <id>10.14569/IJACSA.2022.0130857</id>
        <doi>10.14569/IJACSA.2022.0130857</doi>
        <lastModDate>2022-08-29T10:10:22.2400000+00:00</lastModDate>
        
        <creator>Abdulmohsen Alharbi</creator>
        
        <creator>Md. Abdul Hamid</creator>
        
        <creator>Husam Lahza</creator>
        
        <subject>Machine learning; internet of things; malware; predictive modeling; cyber threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The Internet of Things (IoT) enables devices to sense and respond, using the power of computing to autonomously produce the best solutions for any industry today. However, IoT devices have vulnerabilities and can be hacked by cybercriminals, who know where these vulnerabilities lie, such as unsecured update mechanisms, and use malware (malicious software) to attack IoT devices. The recently published IoT-23 dataset, based on several IoT devices such as Philips Hue, Amazon Echo devices, and the Somfy door lock, was used with machine learning classification algorithms and data mining techniques, with training and testing, for predictive modeling of a variety of malware attacks such as Distributed Denial of Service (DDoS), Command and Control (C&amp;C), and various IoT botnets such as Mirai and Okiru. This paper aims to develop predictive models that predict malicious software to protect the IoT and reduce vulnerabilities by using machine learning and data mining techniques. We collected, analyzed, and processed benign and malicious software samples in IoT network traffic. Malware prediction is crucial to keeping IoT devices safe and secure from cybercriminal activity. Furthermore, the Principal Component Analysis (PCA) method was applied to determine the important features of IoT-23. This study is also compared, in terms of accuracy and other metrics, with previous studies that used the IoT-23 dataset. Experiments show that the Random Forest (RF) classifier achieved a predictive model with a classification accuracy of 0.9714, predicted 8754 samples containing various types of malware, and obtained an Area Under the Curve (AUC) of 0.9644, outperforming several baseline machine learning classification models.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_57-Predicting_Malicious_Software_in_IoT_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Oversampling Algorithm for Handling Imbalanced Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130856</link>
        <id>10.14569/IJACSA.2022.0130856</id>
        <doi>10.14569/IJACSA.2022.0130856</doi>
        <lastModDate>2022-08-29T10:10:22.2100000+00:00</lastModDate>
        
        <creator>Anjali S. More</creator>
        
        <creator>Dipti P. Rana</creator>
        
        <subject>Binary imbalance; multiclass imbalance; oversampling; random forest classification; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In the current age, the attention of researchers is drawn to numerous imbalanced data applications, including intrusion detection in security, fraud recognition in finance, medical disease diagnosis, electricity pilfering, and many more. Imbalanced data applications fall into two types: binary and multiclass data imbalance. Unequal data distribution skews classification performance metrics towards the majority instance class and ignores the minority instance class, and data imbalance increases the classification error rate. Random Forest Classification (RFC) is a suitable technique for dealing with imbalanced datasets. This paper proposes a novel oversampling rate calculation algorithm, the Improvised Dynamic Binary-Multiclass Imbalanced Oversampling Rate (IDBMORate). Experimental analysis of the proposed IDBMORate on the Page-block (binary) dataset shows that the number of positive-class instances is increased from 559 to 1118 whereas the negative class remains at 4913 instances. On the referred multiclass dataset (Ecoli), IDBMORate produces consistent results: the minority classes (om, omL, imS, imL) are oversampled while the majority class instances remain unchanged. The IDBMORate algorithm reduces the neglect of the minority class and oversamples its data without disturbing the size of the majority instance class, thereby reducing overall computation cost and improving classification performance.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_56-Novel_Oversampling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants of Information Security Awareness and Behaviour Strategies in Public Sector Organizations among Employees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130855</link>
        <id>10.14569/IJACSA.2022.0130855</id>
        <doi>10.14569/IJACSA.2022.0130855</doi>
        <lastModDate>2022-08-29T10:10:22.1930000+00:00</lastModDate>
        
        <creator>Al-Shanfari I</creator>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Nasser Tabook</creator>
        
        <creator>Roesnita Ismail</creator>
        
        <creator>Anuar Ismail</creator>
        
        <subject>Information security awareness; behaviour strategies; self-administered questionnaire; structural equation modelling (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In this digital era, protecting an organisation&#39;s sensitive information system assets against cyberattacks is challenging. Globally, organisations spend heavily on information security (InfoSec) technological countermeasures, yet public and private sectors often fail to secure their information assets because they depend primarily on technical solutions. Human factors cause the bulk of cybersecurity incidents, directly or indirectly, leading to many organisational information security breaches. Employees&#39; information security awareness (ISA) is crucial to preventing poor information security behaviours. Until recently, there was little consolidated knowledge on how to improve ISA or on which factors influence employees&#39; ISA levels. This paper proposes a comprehensive theoretical model based on the Protection Motivation Theory, the Theory of Planned Behaviour, the General Deterrence Theory, and Facilitating Conditions for assessing public sector employees&#39; ISA intentions regarding information security behaviour. Using a survey and the structural equation modelling (SEM) method, this research reveals that the examined factors are positively associated with actual adoption of information security behaviour, except for perceived sanction certainty. The findings suggest that the three theories and facilitating conditions provide an influential theoretical framework for explaining public sector employees&#39; information security adoption behaviour, and they support previous empirical research on why employees&#39; information security behaviours vary. Consistent with earlier research, these psychological factors are just as critical as facilitating conditions in ensuring greater behavioural intention to engage in ISA activities and thus information security behaviour. The study recommends that public-sector organisations invest in applied information security training for their employees.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_55-Determinants_of_Information_Security_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Estimation in Computational Systems Biology Models: A Comparative Study of Initialization Methods in Global Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130854</link>
        <id>10.14569/IJACSA.2022.0130854</id>
        <doi>10.14569/IJACSA.2022.0130854</doi>
        <lastModDate>2022-08-29T10:10:22.1300000+00:00</lastModDate>
        
        <creator>Muhammad Akmal Remli</creator>
        
        <creator>Nor-Syahidatul N.Ismail</creator>
        
        <creator>Noor Azida Sahabudin</creator>
        
        <creator>Nor Bakiah Abd Warif</creator>
        
        <subject>Metaheuristic; opposition-based learning; kinetic parameters; initialization method; metabolic engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>This paper compares different initialization methods and investigates their performance and effects when estimating the values of kinetic parameters in models of biological systems. Estimating parameter values is a difficult and time-consuming process because the models are highly nonlinear and involve a huge number of kinetic parameters. A global optimization method based on the enhanced scatter search (ESS) algorithm is a suitable choice to address this issue. However, despite its resounding success, the performance of ESS may decrease on high-dimensional problems. In this work, several initialization methods are compared, and experimental results indicate that the algorithm is sensitive to the initial values of the kinetic parameters. Statistical results reveal that the uniformly distributed random number generator (RNG) and the controlled randomization (CR) used in ESS may lead to poor algorithm performance. The choice of initialization method also influences model accuracy. Our proposed methodology shows that initialization based on an opposition-based learning scheme yields 10% better accuracy in terms of the cost function.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_54-Parameter_Estimation_in_Computational_Systems_Biology_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Mobile Application based on the Convolutional Neural Network for the Diagnosis of Pneumonia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130853</link>
        <id>10.14569/IJACSA.2022.0130853</id>
        <doi>10.14569/IJACSA.2022.0130853</doi>
        <lastModDate>2022-08-29T10:10:22.1130000+00:00</lastModDate>
        
        <creator>Jazmin Flores-Rodriguez</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Pneumonia; convolutional neural network; deep learning; chest x-rays</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Pneumonia is the main cause of infant mortality in Peru, which has led to plans such as vaccination campaigns, greater economic investment in health, and the strengthening of specialized medical personnel; however, mortality rates remain high. In this context, the implementation of new computing technologies such as Deep Learning, through the use of artificial neural networks, is proposed. The objective of this project was to determine the influence of a mobile application based on a Convolutional Neural Network on the diagnosis of pneumonia. The project consists of the analysis of chest X-ray images, both with pneumonia and normal, by means of a developed application called “Diagnost”. The study considered a control group and a study group formed by 33 medical staff members who used the application. The data obtained were analyzed using three indicators: detection time, result accuracy, and reduction of medical assistance. According to the results, it was concluded that the mobile application based on the convolutional neural network enables the early detection of pneumonia and reduces the medical assistance required; however, further work is still needed on the accuracy of the diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_53-Implementation_of_a_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Big Data Analysis using Binary Moth-Flame with Whale Optimization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130852</link>
        <id>10.14569/IJACSA.2022.0130852</id>
        <doi>10.14569/IJACSA.2022.0130852</doi>
        <lastModDate>2022-08-29T10:10:22.1000000+00:00</lastModDate>
        
        <creator>Saka Uma Maheswara Rao</creator>
        
        <creator>K Venkata Rao</creator>
        
        <creator>Prasad Reddy PVGD</creator>
        
        <subject>Binary moth-flame optimization; complexity of the features; medical data; long short term memory; spark streaming layers; whale optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Accurate analysis of medical data depends on early disease detection, and accuracy suffers when medical data quality is poor. However, existing techniques are less efficient at handling heterogeneous medical data, and the complexity of the features was not addressed using an optimal feature selection model. The present research work applies machine learning effectively to chronic disease prediction, covering heart disease, cancer, diabetes, stroke, and arthritis. Detailed information about the attributes is required, as it is significant in analyzing medical data, and the process of selecting attributes plays an important role in decision-making for medical disease analysis. This research proposes Binary Moth-Flame Optimization (B-MFO) for effective feature selection to achieve higher performance on small and medium datasets. Additionally, the Whale Optimization Algorithm (WOA) is used, which performed well with LSTM, a model well suited to classification for time-series prediction. The present work uses Spark Streaming layers to stream the heterogeneous medical data, which is diagnosed using Long Short-Term Memory (LSTM) with the whale optimization approach. The results showed that the proposed B-MFO-WOA method obtained 97.45% accuracy, better than the existing modified adaptive neuro-fuzzy inference system at 95.91% accuracy and B-MFO at 92.43% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_52-Medical_Big_Data_Analysis_using_Binary_Moth_Flame.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Cloud-Blockchain-based Secure Internet of Things Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130851</link>
        <id>10.14569/IJACSA.2022.0130851</id>
        <doi>10.14569/IJACSA.2022.0130851</doi>
        <lastModDate>2022-08-29T10:10:22.0670000+00:00</lastModDate>
        
        <creator>Deepti Rani</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <creator>Preeti Gulia</creator>
        
        <subject>Internet of things; cloud computing; blockchain; iot architecture; security and services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The growing number of Internet of Things (IoT) objects and the operational and security challenges in IoT systems are encouraging researchers to design suitable IoT architectures. The enormous data generated in the IoT environment face several kinds of security and privacy challenges. IoT systems generally suffer from issues such as data storage, safety, privacy, integrity, transparency, trust, and single points of failure, and the IoT environment is producing several solutions to resolve these problems. The main objective of this paper is to design a cloud-blockchain-based secure IoT architecture that provides advanced and efficient storage and security solutions to the IoT ecosystem. Blockchain technology appears to be a sound choice for resolving such problems: it uses a hash-based cryptographic technique for information security and integrity. Cloud computing provides advanced storage solutions with several remote services to store, compute, and analyze the data. The proposed IoT architecture is based on the integration of cloud and blockchain services, aiming to provide transparent, decentralized, trustworthy, and secure storage solutions. In addition to the standard layers (perception layer, network layer, processing layer, and application layer), the proposed architecture includes a service layer, a security layer, and a parallel management and control layer, which focus on the security and management of the entire IoT infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_51-Design_of_a_Cloud_Blockchain_based_Secure_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Evaluation of UbiQuitous Access Learning (UQAL) Portal: Measuring User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130850</link>
        <id>10.14569/IJACSA.2022.0130850</id>
        <doi>10.14569/IJACSA.2022.0130850</doi>
        <lastModDate>2022-08-29T10:10:22.0530000+00:00</lastModDate>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <creator>Wan Fatimah Wan Ahmad</creator>
        
        <creator>Zainab Abu Bakar</creator>
        
        <subject>User experience questionnaire; user experience; user interface; heuristic evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The goal of user experience (UX) research in human-computer interaction is to understand how humans interact with technology. This paper aimed to evaluate the interface and user experience of the UbiQuitous Access Learning Portal (UQAL) and make recommendations for the system interface. The UQAL Portal is an e-learning web portal that teaches a targeted group of users how to start a business or an online business; among other things, the portal is used to search for business-related information. The User Experience Questionnaire (UEQ) is used to evaluate user experience, and the interface is evaluated with a heuristic evaluation technique based on Nielsen’s ten heuristics. According to the UEQ results, the average scores across the 30 UQAL users are: Attractiveness 1.77; Perspicuity 2.20; Efficiency 2.30; Dependability 1.73; Stimulation 0.63; and Novelty 1.27. A comparison with the benchmark dataset of the UEQ Data Analysis Tool revealed that the Perspicuity, Efficiency, and Dependability aspects of UQAL belong to the Excellent category, the Attractiveness and Novelty aspects can be categorized as Good, and Stimulation as Below Average. Four evaluators participated in the heuristic evaluation, which tested all user categories in UQAL. The findings of this study can be used as suggestions and a reference for improving the UQAL Portal.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_50-User_Evaluation_of_UbiQuitous_Access_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Sentiment Analysis Classification Approach for Mobile Applications Arabic Slang Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130849</link>
        <id>10.14569/IJACSA.2022.0130849</id>
        <doi>10.14569/IJACSA.2022.0130849</doi>
        <lastModDate>2022-08-29T10:10:22.0200000+00:00</lastModDate>
        
        <creator>Rabab Emad Saudy</creator>
        
        <creator>Alaa El Din M. El-Ghazaly</creator>
        
        <creator>Eman S. Nasr</creator>
        
        <creator>Mervat H. Gheith</creator>
        
        <subject>Arabic sentiment analysis; mobile application; hybrid classification model; hybrid supervised classification approach; Google play store; random forest; logistic regression; neural network; multi-layer perceptron neural network; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The Arabic language suffers from a shortage of accessible large datasets for Sentiment Analysis (SA), Machine Learning (ML), and Deep Learning (DL) applications. In this paper, we present MASR, a simple Mobile Applications Arabic Slang Reviews dataset for SA, ML, and DL applications, which comprises 2469 Egyptian mobile app reviews and helps app developers keep pace with evolving user requirements. Our methodology consists of six phases: we collect the mobile app reviews dataset, apply preprocessing steps, and perform SA tasks. To evaluate the MASR dataset, we first apply the ML classification techniques K-Nearest Neighbors (K-NN), Support Vector Machine (SVM), Logistic Regression (LR), and Random Forest (RF), and the DL classification technique Multi-Layer Perceptron Neural Network (MLP-NN). Based on the examination of these classification techniques, we adopted a hybrid classification approach combining the top two ML classifiers by accuracy (LR, RF) and the DL classifier (MLP-NN). The findings prove the adequacy of a hybrid supervised classification approach for the MASR dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_49-A_Novel_Hybrid_Sentiment_Analysis_Classification_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining Multiple Classifiers using Ensemble Method for Anomaly Detection in Blockchain Networks: A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130848</link>
        <id>10.14569/IJACSA.2022.0130848</id>
        <doi>10.14569/IJACSA.2022.0130848</doi>
        <lastModDate>2022-08-29T10:10:22.0070000+00:00</lastModDate>
        
        <creator>Sabri Hisham</creator>
        
        <creator>Mokhairi Makhtar</creator>
        
        <creator>Azwa Abdul Aziz</creator>
        
        <subject>Blockchain; Ethereum; Bitcoin; ensemble; anomaly detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Blockchain is one of the most anticipated technology revolutions, with immense promise in various applications. It is a distributed and encrypted database that can address a range of challenges connected to online security and trust. While many people identify blockchain with cryptocurrencies such as Bitcoin, it has a wide range of applications in supply chain management, health, the Internet of Things (IoT), education, identity theft prevention, logistics, and the execution of digital smart contracts. Although Blockchain Technology (BT) has numerous advantages for Decentralized Applications (DApps), it is nevertheless vulnerable to abuse, smart contract failures, security breaches, theft, trespassing, and other concerns. As a result, using Machine Learning (ML) models to detect anomalies is an excellent way to detect threats and safeguard blockchain networks from criminal activity. Adapting ensemble learning methods in ML to produce better prediction outcomes is a viable approach for anomaly identification. Ensemble learning, as the name implies, refers to creating a stronger and more accurate classifier by combining the prediction results of numerous weak models. Accordingly, this paper presents an in-depth evaluation of ensemble learning methodologies for anomaly detection in the blockchain network ecosystem, covering numerous ensemble methods (e.g., averaging, voting, stacking, boosting, bagging). The review collects data from three established databases: Scopus, Web of Science (WoS), and Google Scholar. Specific keywords such as Blockchain, Ethereum, Bitcoin, Anomaly Detection, and Ensemble Learning are employed using advanced search algorithms. The search yielded 60 primary articles from 2017 to 2022 (30 from Scopus, 20 from WoS, and 10 from Google Scholar). Based on these findings, we divide our discussion into three primary themes: (1) the fundamentals of Blockchain Technology (BT), (2) an overview of ensemble learning, and (3) the integration and analysis of ensemble learning in blockchain networks for anomaly detection. The implications of the results and directions for future research are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_48-Combining_Multiple_Classifiers_using_Ensemble_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Study Plan Generator using Rule-based and Knapsack Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130847</link>
        <id>10.14569/IJACSA.2022.0130847</id>
        <doi>10.14569/IJACSA.2022.0130847</doi>
        <lastModDate>2022-08-29T10:10:21.9730000+00:00</lastModDate>
        
        <creator>Muhammad Amin Mustapa</creator>
        
        <creator>Lizawati Salahuddin</creator>
        
        <creator>Ummi Rabaah Hashim</creator>
        
        <subject>Knapsack problem; rule-based; study plan; undergraduate; credit exemption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Undergraduate students are given the flexibility of arranging courses throughout their study duration, especially when they are eligible for credit exemption for courses taken during their diploma study. Issues arise when students arrange their studies manually: improper course arrangement in the study plan may result in some of the selected courses not corresponding to the courses offered, and in imbalanced credit hours. Hence, this study aims to propose an algorithm that generates an automated and accurate study plan for the whole study duration. A combination of a rule-based approach and the knapsack problem was proposed to generate an automated study plan. A quantitative methodology through expert reviews and a questionnaire survey was conducted to evaluate the accuracy of the proposed algorithm. The proposed algorithm shows high accuracy. In conclusion, the combination of a rule-based approach and the knapsack problem is appropriate for generating an automated and accurate study plan. The automated study plan generator can help students generate an effective study plan.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_47-Automated_Study_Plan_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Hybrid Combinatorial Design-based Session Key Distribution Method for IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130846</link>
        <id>10.14569/IJACSA.2022.0130846</id>
        <doi>10.14569/IJACSA.2022.0130846</doi>
        <lastModDate>2022-08-29T10:10:21.9430000+00:00</lastModDate>
        
        <creator>Gundala Venkata Hindumathi</creator>
        
        <creator>D. Lalitha Bhaskari</creator>
        
        <subject>Key distribution; hybrid combinatorial design; IoT networks; resource constraint nodes; symmetric key generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The Internet of Things (IoT) is currently being used in a range of applications as a cutting-edge technology. IoT is a technological platform that connects the physical and digital worlds, allowing us to use things remotely. Various sensor-connected nodes serve as objects that communicate with one another over the internet; hence, security-related problems are more likely to arise in IoT networks. However, due to resource constraints such as power and memory capacity, complex security algorithms cannot be implemented in IoT networks. One of the security measures for IoT networks is to implement a lightweight key distribution algorithm, and a lightweight key management process is essential for IoT networks to share keys securely. We present a new key-distribution approach based on a hybrid combinatorial design that implements lightweight algorithms, and we describe its analysis functions. Comparison with existing hybrid combinatorial works shows better connectivity, resilience, and scalability.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_46-The_Hybrid_Combinatorial_Design_based_Session_Key_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acne Classification with Gaussian Mixture Model based on Texture Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130844</link>
        <id>10.14569/IJACSA.2022.0130844</id>
        <doi>10.14569/IJACSA.2022.0130844</doi>
        <lastModDate>2022-08-29T10:10:21.9100000+00:00</lastModDate>
        
        <creator>Alfa Nadhya Maimanah</creator>
        
        <creator>Wahyono</creator>
        
        <creator>Faizal Makhrus</creator>
        
        <subject>Acne; GLCM; Gabor filter; Gaussian mixture model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>This paper presents an acne detection method for face images using a Gaussian Mixture Model (GMM). First, the skin area in the face image is segmented based on color information using the GMM. Second, candidate acne regions are extracted using a Laplacian of Gaussian-based blob detection strategy. Then, texture features are extracted from the acne candidates using either a Gabor filter or the Gray Level Co-occurrence Matrix (GLCM). Lastly, these features are used as input to the GMM to verify whether these regions are acne or not. In our experiment, the proposed method was evaluated using face images from the ACNE04 dataset. Based on the experiment, the best classification results were obtained when GLCM features in the Cr channel of the YCbCr color space were applied. In addition, the proposed method has competitive performance compared to K-Nearest Neighbor (KNN).</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_44-Acne_Classification_with_Gaussian_Mixture_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Content Classification and Mapping Content to Synonymous Learners based on 2022 Augmented Verb List of Marzano and Kendall Taxonomy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130845</link>
        <id>10.14569/IJACSA.2022.0130845</id>
        <doi>10.14569/IJACSA.2022.0130845</doi>
        <lastModDate>2022-08-29T10:10:21.9100000+00:00</lastModDate>
        
        <creator>S. Celine</creator>
        
        <creator>M. Maria Dominic</creator>
        
        <creator>F. Sagayaraj Fransis</creator>
        
        <creator>M. Savitha Devi</creator>
        
        <subject>Learning taxonomies; marzano and kendall taxonomy; personalization; XG boost; deep neural network; CNN; property graph; action verbs; content classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Finding suitable learning content for learners with different learning styles is a challenging task in the learning process. Hence, it is essential to follow learning taxonomies to deliver learner-centric content. Learning taxonomies express the various learning practices and habits to be followed by the learner for a better learning process. The investigator has already classified the learners based on the 2022 augmented verb list of the Marzano and Kendall taxonomy. The main objective of this paper is to finely classify tutor-defined learning contents, which are in text format, according to the domains and subdomains of the considered taxonomy. Providing personalized learning content could help learners better understand the content and its interrelationships, which in turn produces better learning outcomes. Mapping the six levels of learning contents to the corresponding learners is a challenging task. Hence, the investigator chose seven algorithms to classify the learning contents, including Bagging, XG Boost, and Support Vector Machine from machine learning, and four algorithms from deep learning, including Convolutional Neural Network and Deep Neural Network. The experimental results indicate that the Support Vector Machine performed well among the machine learning algorithms and the Deep Neural Network yielded good performance among the deep learning algorithms in the learning content classification process. These micro contents were organized using a property graph. Further, the micro contents were retrieved from the property graph using SPARQL to map the classified contents to the corresponding learners, achieving personalization in the learning process.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_45-Learning_Content_Classification_and_Mapping_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity Risk Assessment: Modeling Factors Associated with Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130843</link>
        <id>10.14569/IJACSA.2022.0130843</id>
        <doi>10.14569/IJACSA.2022.0130843</doi>
        <lastModDate>2022-08-29T10:10:21.8800000+00:00</lastModDate>
        
        <creator>Rachel Ganesen</creator>
        
        <creator>Asmidar Abu Bakar</creator>
        
        <creator>Ramona Ramli</creator>
        
        <creator>Fiza Abdul Rahim</creator>
        
        <creator>Md Nabil Ahmad Zawawi</creator>
        
        <subject>Cyber security; risk assessment; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Most universities rely heavily on Information Technology (IT) to process their information and support their vision and mission. Rapid advancement in internet technology has led to increased cyberattacks on Higher Education Institutions (HEIs). To secure their infrastructure from cyberattacks, HEIs must implement the best cybersecurity risk management approach, involving both technological and education-based solutions, to safeguard their environment. However, the main challenge in existing cybersecurity risk management approaches is limited knowledge of how organizations can determine or minimize the significance of risks. As a result, this research seeks to advance understanding by establishing a risk assessment model for universities to measure and evaluate risk in HEIs. The proposed model is based on theoretical aspects organized as follows: first, we review existing cybersecurity frameworks to identify the suitability and limitations of each model. Next, we review current works on cybersecurity risk assessment in HEIs to evaluate the proposed risk assessment approaches, their scope, and their steps. Based on the information gathered, we developed a risk assessment model. Finally, we conclude the study with directions for future research. The results presented in this study may give insight to HEI staff in analyzing what is to be assessed, measuring the severity of risk, and determining the level of risk acceptance, improving their decision-making on risk management.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_43-Cybersecurity_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Study of Quantum Coherence from Classical Nonlinear Compton Scattering with Strong Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130842</link>
        <id>10.14569/IJACSA.2022.0130842</id>
        <doi>10.14569/IJACSA.2022.0130842</doi>
        <lastModDate>2022-08-29T10:10:21.8630000+00:00</lastModDate>
        
        <creator>Huber Nieto-Chaupis</creator>
        
        <subject>Quantum coherence; bessel; compton scattering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>From the covariant formulation of the radiation intensity of the Hartemann-Kerman model, constructed entirely in the classical electrodynamics scenario, a formulation of coherent states has been obtained in an explicit manner, represented by an infinite sum of integer-order Bessel functions. Both linear and nonlinear Compton scattering are included, suggesting that Compton processes can be perceived as coherent states of light-matter interaction.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_42-Computational_Study_of_Quantum_Coherence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Spiral Pattern Watermarking Scheme for Common Attacks to Social Media Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130841</link>
        <id>10.14569/IJACSA.2022.0130841</id>
        <doi>10.14569/IJACSA.2022.0130841</doi>
        <lastModDate>2022-08-29T10:10:21.8500000+00:00</lastModDate>
        
        <creator>Tiew Boon Li</creator>
        
        <creator>Jasni Mohamad Zain</creator>
        
        <creator>Syifak Izhar Hisham</creator>
        
        <creator>Alya Afikah Usop</creator>
        
        <subject>Spiral pattern; fragile watermarking; social media; LSB substitution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The 21st century might be considered the &quot;boom&quot; period for social networking due to the rapid expansion of social media use. In terms of user privacy and security regulations, a plethora of new requirements, issues, and concerns have arisen from the proliferation of social media. With the increase in social media use, images on social media are often modified or fabricated for certain purposes. Therefore, this work implements and evaluates the SPIRAL-LSB algorithm against common attacks on social media images. Image compression is also discussed, as images published to social media platforms are often compressed. An analysis was performed to assess the algorithm&#39;s output on social media images. The experiments were carried out before and after uploading to the Instagram platform. The dataset was subjected to image splicing, copy-move, cut-and-paste, text insertion, and 3D-sticker insertion attacks; SPIRAL-LSB was effective only against text insertion attacks. Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) were selected as the experiment&#39;s metrics. The average PSNR value is 63.25 and the average SSIM value is 0.99964, both of which are regarded as high, indicating that the watermark has not degraded the quality of the images. This work was designed for use on social media for intellectual property purposes and may be used to verify the authenticity of social media images and prevent issues with image integrity, such as image manipulation.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_41-Evaluation_of_Spiral_Pattern_Watermarking_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disease Prediction Model based on Neural Network ARIMA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130840</link>
        <id>10.14569/IJACSA.2022.0130840</id>
        <doi>10.14569/IJACSA.2022.0130840</doi>
        <lastModDate>2022-08-29T10:10:21.8330000+00:00</lastModDate>
        
        <creator>Kedong Li</creator>
        
        <subject>Disease prevention and control; trend prediction; neural network; combination model; ARIMA algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Because the morbidity data of infectious diseases exhibit not only a single linear or nonlinear characteristic but both linear and nonlinear characteristics, combined-model prediction methods have often been used in recent years to predict the morbidity of infectious diseases. Compared with single-model prediction methods, a combined model can exploit the advantages of each single model to extract the effective information contained in the original time series more scientifically and fully. In the context of big data, the massive medical data in the healthcare field are complex, and traditional manual data processing methods can no longer meet current needs. With the help of computers, data mining can discover new, potentially useful, and understandable knowledge by cleaning, integrating, selecting, and transforming the original data; in this way, the useful medical knowledge hidden in medical big data can be organized and reproduced. In this paper, an ARIMA-GRNN model is established: the fitted values and the corresponding times are used as the input of the neural network, and the actual morbidity is used as the output to train the network and construct the ARIMA-GRNN combined model. Given the different information flows of BP neural networks and GRNNs, this study also compared the ARIMA-GRNN combined model against a standalone ARIMA model, examining the modeling effect and prediction performance of each. The absolute percentage error of the experimental results is less than 8.63%, with an average below 5%; compared with the other models, the combined model has a better prediction effect, higher accuracy, and more obvious advantages. The prediction of disease in this paper is dynamic and continuous. Using monitoring data to study epidemic trends and periodic change laws, and to make reasonable predictions, is of great significance for disease prevention and control.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_40-Disease_Prediction_Model_based_on_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning in OCR Technology: Performance Analysis of Different OCR Methods for Slide-to-Text Conversion in Lecture Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130839</link>
        <id>10.14569/IJACSA.2022.0130839</id>
        <doi>10.14569/IJACSA.2022.0130839</doi>
        <lastModDate>2022-08-29T10:10:21.8030000+00:00</lastModDate>
        
        <creator>Geeta S Hukkeri</creator>
        
        <creator>R H Goudar</creator>
        
        <creator>Prashant Janagond</creator>
        
        <creator>Pooja S Patil</creator>
        
        <subject>Video lectures; keyframes; Google cloud vision (GCV); Tesseract; Abbyy Finereader; Transym; text extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>A significant percentage of the content shown in a lecture video is text, so video text can be a crucial source for automated video indexing. Researchers have used a variety of machine learning techniques and tools to recognise printed and handwritten text extracted from pictures before digitising it. Optical character recognition (OCR) is a machine learning technology that enables us to recognise and retrieve text information from documents, converting it into searchable and editable data. This study focuses on text extraction from lecture slides using Google Cloud Vision (GCV), Tesseract, Abbyy Finereader, and Transym OCR, and compares the results to develop a lecture video indexing scheme for non-linear steering in lecture videos, allowing viewers to watch only the topics of interest. We took a total of 438 key-frames in 10 categories from seven lecture videos of varying length. First, binary and greyscale versions of the input colour images are created. The frames are additionally preprocessed to improve image quality before the OCR APIs are applied. The recognition accuracy demonstrated that GCV OCR performs effectively, saving computing time and extracting image text with the highest accuracy among the tools, 96.7 percent.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_39-Machine_Learning_in_OCR_Technology_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Federated Learning and its Applications for Security and Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130838</link>
        <id>10.14569/IJACSA.2022.0130838</id>
        <doi>10.14569/IJACSA.2022.0130838</doi>
        <lastModDate>2022-08-29T10:10:21.7870000+00:00</lastModDate>
        
        <creator>Hafiz M. Asif</creator>
        
        <creator>Mohamed Abdul Karim</creator>
        
        <creator>Firdous Kausar</creator>
        
        <subject>Federated learning; communication; security; deep learning; Artificial Intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Not so long ago, Artificial Intelligence (AI) revolutionized our lives by giving rise to the idea of self-learning in different environments. Among its different variants, Federated Learning (FL) is a novel approach that relies on decentralized communication data and its associated training. While reducing the amount of data acquired from users, federated learning retains the benefits of popular machine learning techniques and brings learning to the edge or directly on-device. FL, frequently referred to as a new dawn in AI, is still in its early stages and has yet to garner widespread acceptance, owing to its (unknown) security and privacy implications. In this paper, we give an illustrative explanation of FL techniques, communication, and applications, together with their privacy and security issues. According to our findings, there are fewer privacy-specific dangers linked with FL than security threats. We conclude the paper with the challenges of FL, with special emphasis on security.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_38-Federated_Learning_and_Its_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simultaneous Importance-Performance Analysis based on SWOT in the Service Domain of Electronic-based Government Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130837</link>
        <id>10.14569/IJACSA.2022.0130837</id>
        <doi>10.14569/IJACSA.2022.0130837</doi>
        <lastModDate>2022-08-29T10:10:21.7700000+00:00</lastModDate>
        
        <creator>Tenia Wahyuningrum</creator>
        
        <creator>Gita Fadila Fitriana</creator>
        
        <creator>Arief Rais Bahtiar</creator>
        
        <creator>Aina Azalea</creator>
        
        <creator>Darwan</creator>
        
        <subject>Importance performance analysis; strength weakness opportunity threat analysis; service quality; electronic based government systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Decision makers have used SWOT analysis for strategic planning for decades. However, SWOT analysis is subjective, which makes decision-making inefficient. Therefore, SWOT analysis is often combined with other methods to make decision-making strategies more focused and measurable according to priority interests. The SWOT analysis in this study is based on Simultaneous Importance-Performance Analysis (SIPA), observing the weight of each indicator. In addition, this study proposes a new method that focuses on competitor factors in strategy mapping to improve services for Electronic-Based Government Systems (SPBE). The objects of this study were two local governments in Indonesia, namely the Meranti Islands Regency and the Limapuluh Kota Regency. The results showed that the SIPA-based SWOT analysis succeeded in revealing the strengths, weaknesses, opportunities, and challenges of the district governments. Furthermore, based on the results of hypothesis testing, the SIPA-based SWOT identification reflected a valid organizational situation.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_37-Simultaneous_Importance_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Deep Learning based Framework for Arabic Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130836</link>
        <id>10.14569/IJACSA.2022.0130836</id>
        <doi>10.14569/IJACSA.2022.0130836</doi>
        <lastModDate>2022-08-29T10:10:21.7570000+00:00</lastModDate>
        
        <creator>Mostafa Sayed</creator>
        
        <creator>Hatem Abdelkader</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <creator>Rashed Salem</creator>
        
        <subject>Text classification; arabic text classification; scaled conjugate gradient; TF-IDF; GDX; ICW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Deep learning has become one of the crucial trends of the modern era due to the huge amount of data that has become available. This paper aims to investigate and improve a generic framework for Arabic Text Classification (ATC) with different deep learning techniques. It deals directly with the word, in its original style, as the basic unit of the modern Arabic sentence, and operates on different levels of N-grams versus the proposed Intersected Consecutive Word (ICW) method. The paper discusses the results of different experiments on enhancements of the proposed method with different learning algorithms applied to ATC, such as Scaled Conjugate Gradient (SCG) and Gradient Descent with momentum and adaptive learning rate backpropagation (GDX). The results showed that the proposed framework applied with the SCG algorithm and TF-IDF outperforms the GDX algorithm, with an accuracy of 90.65%.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_36-A_Proposed_Deep_Learning_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling of IoT-WSN Enabled ECG Monitoring System for Patient Queue Updation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130835</link>
        <id>10.14569/IJACSA.2022.0130835</id>
        <doi>10.14569/IJACSA.2022.0130835</doi>
        <lastModDate>2022-08-29T10:10:21.7400000+00:00</lastModDate>
        
        <creator>Parminder Kaur</creator>
        
        <creator>Hardeep Singh Saini</creator>
        
        <creator>Bikrampal Kaur</creator>
        
        <subject>WSN; cloud; ECG monitoring; wearable sensors; IoT; queue updation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The advancement of communication technologies has led to the interconnection of different sensors using the Internet of Things (IoT) and Wireless Sensor Networks (WSN). The use of WSN for healthcare applications has expanded exponentially due to advantages such as the low power requirements of sensors, transmission accuracy, and cost-efficiency. For heart attack patients, the future lies in ECG monitoring, in which wearable sensors can be used to acquire patient information. In this paper, an attempt has been made to develop a novel IoT-enabled WSN to record patient information for the detection of heart attacks and to update the queue of patients, ensuring prioritized medical attention for critical patients. In the WSN, a Rayleigh fading channel has been used to transmit data, which medical staff can access remotely through a cloud repository. The distance from the patient to the medical staff is calculated using Euclidean distance. Further, SNR has been computed in comparison to throughput and BER; a higher SNR indicates maximum information transfer from patient to hospital staff. The proposed system uses a Grasshopper Optimization Algorithm (GHOA) and CBNN based disease classification system, and the bubble sort algorithm is used for updating the patient queue. The proposed GHOA-CBNN has shown an accuracy improvement of 2.14% over existing techniques such as CNN, which achieves an accuracy of around 82% for R-R feature selection of ECG signals, compared to the 82.72% achieved by GHOA-CBNN.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_35-Modelling_of_IoT_WSN_Enabled_ECG_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Applications for the Implementation of Health Control against Covid-19 in Educational Centers, a Systematic Review of the Literature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130834</link>
        <id>10.14569/IJACSA.2022.0130834</id>
        <doi>10.14569/IJACSA.2022.0130834</doi>
        <lastModDate>2022-08-29T10:10:21.7100000+00:00</lastModDate>
        
        <creator>Bryan Quispe-Lavalle</creator>
        
        <creator>Fernando Sierra-Li&#241;an</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Mobile application; sanitary control; systematic review; digital technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>A health crisis caused by the SARS-CoV-2 virus is still ongoing, which is why an important factor in the resumption of on-site classes is the creation of sanitary measures to help control Covid-19. The present research is a systematic literature review: the PRISMA methodology was used, and 265 articles were collected from databases such as EBSCO Host, IEEE Xplore, SAGE, ScienceDirect, and Scopus. According to the inclusion and exclusion criteria, the most relevant articles aligned with the topic were identified, systematizing 119 articles. The review showcases the digital technologies used in mobile applications that allow better control, tracking, and monitoring of the health status of students, teachers, and staff of educational centers, as well as the parameters and quality attributes that must be taken into account for effective sanitary control of the disease. Finally, a development model is proposed.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_34-Mobile_Applications_for_the_Implementation_of_Health_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhancement Technique to Diagnose Colon and Lung Cancer by using Double CLAHE and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130833</link>
        <id>10.14569/IJACSA.2022.0130833</id>
        <doi>10.14569/IJACSA.2022.0130833</doi>
        <lastModDate>2022-08-29T10:10:21.6930000+00:00</lastModDate>
        
        <creator>Nora yahia Ibrahim</creator>
        
        <creator>Amira Samy Talaat</creator>
        
        <subject>Artificial intelligent system; machine learning; cancer detection; image classification; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Lung and colon cancers are among the most common and deadly cancers, accounting for more than a quarter of all cancer cases. Early detection of the disease, however, greatly raises the probability of survival. Image enhancement with Double CLAHE stages and modified neural networks is applied to improve classification accuracy, and Deep Learning (DL) algorithms are used to automate cancer detection. A new artificial intelligence classification system is presented in this research to recognize five kinds of colon and lung tissue, three malignant and two benign (three classes for lung cancer and two classes for colon cancer), based on histological images. The results of the study imply that the suggested system can identify cancerous tissue with an accuracy of up to 99.5%. This model will aid medical professionals in developing an automatic and reliable system for detecting different kinds of colon and lung tumors.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_33-An_Enhancement_Technique_to_Diagnose_Colon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forest Fires Detection using Deep Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130832</link>
        <id>10.14569/IJACSA.2022.0130832</id>
        <doi>10.14569/IJACSA.2022.0130832</doi>
        <lastModDate>2022-08-29T10:10:21.6770000+00:00</lastModDate>
        
        <creator>Mimoun YANDOUZI</creator>
        
        <creator>Mounir GRARI</creator>
        
        <creator>Idriss IDRISSI</creator>
        
        <creator>Mohammed BOUKABOUS</creator>
        
        <creator>Omar MOUSSAOUI</creator>
        
        <creator>Mostafa AZIZI</creator>
        
        <creator>Kamal GHOUMID</creator>
        
        <creator>Aissa KERKOUR ELMIAD</creator>
        
        <subject>Forest fires; wildfires; deep learning; transfer learning; computer vision; convolutional neural networks (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Forests are vital ecosystems composed of various plant and animal species that have evolved over the years to coexist. Such ecosystems are often threatened by wildfires that start either naturally, as a result of lightning strikes, or unintentionally through human activity. In general, human-caused fires are more severe and expensive to fight because they are frequently located in inaccessible areas. Wildfires can spread quickly and become extremely dangerous, causing damage to homes and facilities, as well as killing people and animals. Early detection of wildfires is vital to protect lives, property, and resources, and advanced imaging technologies can play a key role in detecting them earlier. By applying deep learning (DL) to a dataset of images collected using drones, planes, and satellites, we aim to automate forest fire detection. In this paper, we focus on building a DL model specifically to detect wildfires using transfer learning from the best pretrained DL computer vision architectures available today, such as VGG16, VGG19, InceptionV3, ResNet50, ResNet50V2, InceptionResNetV2, Xception, DenseNet, MobileNet, MobileNetV2, and NASNetMobile. Our proposed approach attained a detection rate of more than 99.9% across multiple metrics, showing that it could be used in real-world forest fire detection applications.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_32-Forest_Fires_Detection_using_Deep_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Word by Word Labelling of Romanized Sindhi Text by using Online Python Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130831</link>
        <id>10.14569/IJACSA.2022.0130831</id>
        <doi>10.14569/IJACSA.2022.0130831</doi>
        <lastModDate>2022-08-29T10:10:21.6600000+00:00</lastModDate>
        
        <creator>Irum Naz Sodhar</creator>
        
        <creator>Abdul Hafeez Buller</creator>
        
        <creator>Suriani Sulaiman</creator>
        
        <creator>Anam Naz Sodhar</creator>
        
        <subject>Romanized Sindhi; word labelling; rule-based model; POS tagging; SindhiNLP tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Sindhi is one of the most ancient languages in the world, with its own written and spoken scripts. A rigorous study revealed that a great deal of word-labelling research has been done in other languages, but word-by-word labelling of the Sindhi language had not yet been carried out. In this research study, word labelling was performed on 100 sentences of Romanized Sindhi text using an online Python tool. The dataset was collected from different sources, including Sindhi newspapers, blogs, and social media webpages. From this dataset, a rule-based model was applied for Parts-of-Speech (POS) tagging of the Romanized Sindhi sentences. A total of 624 words of Romanized Sindhi text were tested and successfully tagged by the SindhiNLP tool, of which 482 words were tagged as nouns and pronouns, 92 as verbs, and 50 as determiners.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_31-Word_by_Word_Labelling_of_Romanized_Sindhi_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dangerous Goods Container Location Allocation Strategy based on Improved NSGA-II Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130830</link>
        <id>10.14569/IJACSA.2022.0130830</id>
        <doi>10.14569/IJACSA.2022.0130830</doi>
        <lastModDate>2022-08-29T10:10:21.6470000+00:00</lastModDate>
        
        <creator>Xinmei Zhang</creator>
        
        <creator>Nannan Liang</creator>
        
        <creator>Chen Chen</creator>
        
        <subject>Dangerous goods containers; container allocation; improved NSGA-II algorithm; entropy weight-TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Port dangerous goods are complicated and diverse in their hazards, and a fire or explosion accident is very likely to trigger chain effects. Based on the distribution characteristics of dangerous goods container yards and the special national storage requirements for dangerous goods containers, this paper establishes a multi-objective optimization model that gives dual priority to safety and economy, starting from reducing the number of reversals. An improved non-dominated sorting genetic algorithm based on an elite strategy was used to solve the model, and the algorithm was tested and refined. Based on the Pareto optimal solution set, the entropy weight-TOPSIS method was used to rank the multiple solution sets, which improved the performance of the algorithm. The analysis further clarifies the important relationships between attributes, and the running time is shortened by 85.7% compared with the traditional NSGA algorithm. The optimization model and algorithm can provide decision support for the actual operation and management of container storage, as well as a good reference for accident risk prevention and control.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_30-Dangerous_Goods_Container_Location_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Parameter Fine-Tuning with Transfer Learning for Osteoporosis Classification in Knee Radiograph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130829</link>
        <id>10.14569/IJACSA.2022.0130829</id>
        <doi>10.14569/IJACSA.2022.0130829</doi>
        <lastModDate>2022-08-29T10:10:21.6130000+00:00</lastModDate>
        
        <creator>Usman Bello Abubakar</creator>
        
        <creator>Moussa Mahamat Boukar</creator>
        
        <creator>Steve Adeshina</creator>
        
        <subject>Osteoporosis; transfer learning models; convolutional neural network; fine-tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Osteoporosis is a bone disease that raises the risk of fracture due to low bone mineral density and deterioration of the bone tissue structure. In addition to techniques such as Dual-Energy X-ray Absorptiometry (DXA), 2D x-ray images of the bone can be used to detect osteoporosis. This study aims to evaluate deep convolutional neural networks (CNNs), applied with transfer learning techniques, for categorizing specific osteoporosis features in knee radiographs. For objective labeling, we obtained a selection of patient knee x-ray images. The study makes use of the Visual Geometry Group network (VGG-16), with and without fine-tuning. In this work, the deployed CNNs were assessed using state-of-the-art metrics such as accuracy, sensitivity, and specificity. The evaluation shows that fine-tuning enhanced the VGG-16 CNN&#39;s effectiveness for detecting osteoporosis in radiographs of the knee: the accuracy of VGG-16 with parameter fine-tuning was 88% overall, while the accuracy without parameter fine-tuning was 80%.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_29-Evaluation_of_Parameter_Fine_Tuning_with_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Covid-19 Positive Case Prediction and People Movement Restriction Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130828</link>
        <id>10.14569/IJACSA.2022.0130828</id>
        <doi>10.14569/IJACSA.2022.0130828</doi>
        <lastModDate>2022-08-29T10:10:21.6000000+00:00</lastModDate>
        
        <creator>I Made Artha Agastya</creator>
        
        <subject>Covid-19; movement restriction; machine learning; positive case; infected prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The world experienced a pandemic that changed people&#39;s daily lives due to Coronavirus Disease 2019 (covid-19). In Jakarta, covid-19 cases were discovered on March 18, 2020, and case numbers increased uncontrollably until the government imposed a movement restriction called pembatasan sosial berskala besar (PSBB, large-scale social restrictions). The effectiveness of the movement restriction was not evaluated in detail. Therefore, we investigated covid-19 cases during the PSBB period to understand the contribution of the movement restriction. Moreover, a prediction model is proposed to computerize the movement restriction decision. The models are divided into regression and classification models: the regression model forecasts the number of infected cases, while the classification model identifies the best movement restriction type. We utilize a data transformation, Principal Component Analysis (PCA), to reduce the number of features. In our case, the best regression method is Multiple Linear Regression (MLR), and the best classification method is the Support Vector Machine (SVM). The MLR results are 148.38, 37036.37, and 0.250336 for Mean Absolute Error (MAE), Mean Square Error (MSE), and R2, respectively, while the SVM achieved an accuracy of 84.81%. Moreover, the prediction system was successfully deployed on the website.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_28-A_Covid_19_Positive_Case_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Patient Activity Recognition using LSTM Network and High-Fidelity Body Pose Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130827</link>
        <id>10.14569/IJACSA.2022.0130827</id>
        <doi>10.14569/IJACSA.2022.0130827</doi>
        <lastModDate>2022-08-29T10:10:21.5830000+00:00</lastModDate>
        
        <creator>Thanh-Nghi Doan</creator>
        
        <subject>Human body pose tracking; LSTM; Raspberry Pi 4; patient monitoring system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The need for healthcare services is growing, particularly in light of the COVID-19 epidemic&#39;s convoluted trajectory. This causes overcrowding in medical facilities, making it difficult to manage, treat, and monitor patients&#39; health. Therefore, a method to remotely observe patient behavior is required to aid early warning and treatment and to reduce the need for hospitalization of patients with minor illnesses. This paper proposes a new real-time smart camera system that monitors, recognizes, and reports a patient&#39;s abnormal actions remotely, at reasonable cost and with easy deployment in practice. The key benefit of the proposed method is that patient actions can be detected from images captured by a regular video camera, without the use of ambient sensors. Detection is carried out using high-fidelity human body pose tracking with MediaPipe Pose. A Raspberry Pi 4 device and an LSTM network are then used for remote monitoring and real-time classification of patient actions. The test dataset is built from real recordings and reuses existing datasets. Our system has been evaluated and tested in practice with over 96.84% accuracy and runs at over 30 frames per second, making it suitable for real-time execution on mobile devices with limited hardware configurations.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_27-An_Efficient_Patient_Activity_Recognition_using_LSTM_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Math Balance Aids based on Internet of Things for Arithmetic Operational Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130826</link>
        <id>10.14569/IJACSA.2022.0130826</id>
        <doi>10.14569/IJACSA.2022.0130826</doi>
        <lastModDate>2022-08-29T10:10:21.5670000+00:00</lastModDate>
        
        <creator>Novian Anggis Suwastika</creator>
        
        <creator>Yovan Julio Adam</creator>
        
        <creator>Rizka Reza Pahlevi</creator>
        
        <creator>Maslin Masrom</creator>
        
        <subject>Arithmetic operation; education 4.0; internet of things development; math balance aids</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Industry 4.0 has changed various aspects of human life, ushering in an era heavily influenced by information technology. The impact of industry 4.0 on the education sector has led to the emergence of the term education 4.0. The Internet of Things (IoT) is one of the pillars of industry 4.0, and with its capabilities, IoT can provide opportunities to develop innovations in the field of education. Several studies show that teaching aids can improve the quality of learning and learning outcomes. In Indonesia, mathematics is a compulsory subject taught from elementary school through higher education. Previous studies that used mathematical (math) balance aids to help students learn mathematical operations showed positive correlations between the learning process and student learning outcomes in materials related to arithmetic operations. This study aims to develop an IoT-based math balance tool that supports three education 4.0 trends: remote access, personalization, and practice and feedback. The study used a modification of Fahmideh and Zogwhi&#39;s IoT development method, which comprises five phases: initialization, analysis, design, implementation, and evaluation. Through these phases, an IoT-based math balance aid system has been successfully built, and it complies with the functionality described in the analysis phase. The system also shows optimal performance, with 100% accuracy in reading the student&#39;s learning activities, and it processes 1000 data requests in under 10 seconds.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_26-Math_Balance_Aids_based_on_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Internet of Things Platform with Anomaly Detection for Environmental Sensor Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130825</link>
        <id>10.14569/IJACSA.2022.0130825</id>
        <doi>10.14569/IJACSA.2022.0130825</doi>
        <lastModDate>2022-08-29T10:10:21.5370000+00:00</lastModDate>
        
        <creator>Okyza Maherdy Prabowo</creator>
        
        <creator>Suhono Harso Supangkat</creator>
        
        <creator>Eueung Mulyana</creator>
        
        <creator>I Gusti Bagus Baskara Nugraha</creator>
        
        <subject>Anomaly detection; sensor data; multivariate; Internet of Things; smart system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The Internet of Things plays an essential role in various application domains. The sheer number of Internet of Things applications has led researchers to formulate how to design an Internet of Things platform architecture that can be used generically across domains. Commonly used architectural designs consist of data collection, data preprocessing, data analysis, and data visualization. However, sensor data entering the platform often exhibits anomalies, such as constant values or stuck-at-zero faults, which are handled manually at the data preprocessing stage. In this research, we design an anomaly detection system for the Internet of Things platform that automatically improves the platform&#39;s performance in detecting anomalies. We compared the False Positive Rate of several anomaly detection algorithms tested on real datasets in the environmental sensor data application domain. The results showed that the anomaly detector on the Internet of Things platform achieved an optimal False Positive Rate of 0.9%.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_25-Improving_Internet_of_Things_Platform_with_Anomaly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptation Layer for Hardware Restrictions of Quadruple-Level Cell Flash Memories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130824</link>
        <id>10.14569/IJACSA.2022.0130824</id>
        <doi>10.14569/IJACSA.2022.0130824</doi>
        <lastModDate>2022-08-29T10:10:21.5200000+00:00</lastModDate>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Cache storage; flash memory; SSD; nonvolatile memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>In recent years, major flash memory vendors have produced SSDs and fusion memories as substitutes for hard disks. However, there has been a lack of studies on the access restrictions of QLC flash memory, since most research has targeted small-capacity flash memory. As a solution, we propose implementing an adaptation layer between the file system and the FTL (Flash Translation Layer). Instead of immediately writing data from the file system to flash memory, the adaptation layer gathers and adjusts data in units of a page and separates random data from sequential data. By implementing the adaptation layer, previous FTL algorithms can be fully applied to QLC flash memory. According to our experiments, the adaptation layer forms fewer pages than the current data-gathering algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_24-An_Adaptation_Layer_for_Hardware_Restrictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-based Academic Advising Framework: A Knowledge Management Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130823</link>
        <id>10.14569/IJACSA.2022.0130823</id>
        <doi>10.14569/IJACSA.2022.0130823</doi>
        <lastModDate>2022-08-29T10:10:21.4900000+00:00</lastModDate>
        
        <creator>Ghazala Bilquise</creator>
        
        <creator>Khaled Shaalan</creator>
        
        <subject>Knowledge management; artificial intelligence; academic advising; rule-based expert system; machine learning; chatbot; conversational agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Academic advising has become a critical factor in students’ success as universities offer a variety of programs and courses in their curricula. It is a student-centered initiative that fosters a student’s involvement with the institution by supporting students in their academic progression and career goals. Managing the knowledge involved in the advising process is crucial to ensure that the knowledge is available to those who need it and that it is used effectively to make good advising decisions that affect student persistence and success. The use of AI-based tools strengthens the advising process by reducing the workload of advisors and providing better decision support tools to improve advising practice. This study explores the challenges associated with the current advising system from a knowledge management perspective and proposes an integrated AI-based framework to tackle the main advising tasks.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_23-AI_based_Academic_Advising_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Scalable Machine Learning-based Ensemble Approach to Enhance the Prediction Accuracy for Identifying Students at-Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130822</link>
        <id>10.14569/IJACSA.2022.0130822</id>
        <doi>10.14569/IJACSA.2022.0130822</doi>
        <lastModDate>2022-08-29T10:10:21.4730000+00:00</lastModDate>
        
        <creator>Swati Verma</creator>
        
        <creator>Rakesh Kumar Yadav</creator>
        
        <creator>Kuldeep Kholiya</creator>
        
        <subject>Educational data mining; resampling methods; feature selection technique; machine learning; imbalanced data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Among educational data mining problems, the early prediction of students&#39; academic performance is the most important task, so that timely and requisite support may be provided to students who need it. Machine learning techniques can serve as an important tool for predicting low performers in educational institutions. In the present paper, five individual supervised machine learning techniques have been used: Decision Tree, Na&#239;ve Bayes, k-Nearest Neighbors, Support Vector Machine, and Logistic Regression. To analyze the effect of an imbalanced dataset, the performance of these algorithms has been checked with and without various resampling methods, such as the Synthetic Minority Oversampling Technique (SMOTE), Borderline-SMOTE, SVM-SMOTE, and Adaptive Synthetic sampling (ADASYN). The random hold-out method and GridSearchCV were used for model validation and hyper-parameter tuning, respectively. The results of the present study indicate that Logistic Regression is the best-performing classifier on every balanced dataset generated using the four resampling techniques, achieving the highest accuracy of 94.54% with SMOTE. Furthermore, to improve the prediction results and make the model scalable, the best classifier was integrated into an ensemble with bagging, and a well-accepted accuracy of 95.45% was achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_22-A_Scalable_Machine_Learning_based_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining the Best Email and Human Behavior Features on Phishing Email Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130821</link>
        <id>10.14569/IJACSA.2022.0130821</id>
        <doi>10.14569/IJACSA.2022.0130821</doi>
        <lastModDate>2022-08-29T10:10:21.4600000+00:00</lastModDate>
        
        <creator>Ahmad Fadhil Naswir</creator>
        
        <creator>Lailatul Qadri Zakaria</creator>
        
        <creator>Saidah Saad</creator>
        
        <subject>Phishing; phishing email classification; features selection; binary classification; email features; human features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Many email filters have been developed for classifying spam and phishing email. However, phishing email filters remain scarce because of the complexity of feature extraction and selection. Features for classifying phishing emails fall into several categories, relating either to the email itself or to human behavior. One challenge is the absence of a benchmark indicating which features best help classify phishing emails; previous experiments provided no such benchmark for phishing email classification. This research provides new insight into the feature selection process in the phishing email classification area. It extracts features by category and determines which features have the most impact on classifying an email as phishing or not phishing using a machine learning approach. Feature selection is an essential part of obtaining a good classification result; therefore, obtaining the best features from email and human behavior will significantly impact phishing classification. This research collects a public phishing email dataset, extracts the features by category using Python, and determines feature importance using machine learning approaches with the PyCaret library. The dataset was used in three different experiments: in two, each feature category was evaluated separately, while the third performed feature selection on the combined features. Binary classification is also performed with the extracted features. The experiments verified that the proposed method gives good results in feature importance and in binary classification using the selected features, with better accuracy than previous research. The highest result obtained is the classification with combined features, at 98% accuracy. Hence, this research proves that the selected features increase the performance of the classification.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_21-Determining_the_Best_Email_and_Human_Behaviour_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Observation of Imbalance Tracer Study Data for Graduates Employability Prediction in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130820</link>
        <id>10.14569/IJACSA.2022.0130820</id>
        <doi>10.14569/IJACSA.2022.0130820</doi>
        <lastModDate>2022-08-29T10:10:21.4270000+00:00</lastModDate>
        
        <creator>Ferian Fauzi Abdulloh</creator>
        
        <creator>Majid Rahardi</creator>
        
        <creator>Afrig Aminuddin</creator>
        
        <creator>Sharazita Dyah Anggita</creator>
        
        <creator>Arfan Yoga Aji Nugraha</creator>
        
        <subject>Tracer study; support vector machine; synthetic minority oversampling technique; SMOTE; employability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Tracer Study is a mandatory aspect of accreditation assessment in Indonesia. The Indonesian Ministry of Education requires all Indonesian universities to annually report graduate tracer study reports to the government. A tracer study is also needed by a university to evaluate the success of the learning applied in its curriculum. One of the things that needs to be evaluated is the level of absorption of graduates into the working industry, so a machine learning model is needed to assist university officials in evaluating and understanding the character of their graduates and thereby help determine curriculum policies. In this research, the researchers focus on building a reliable machine learning model with the tracer study dataset format determined by the Government of Indonesia. The dataset was obtained from the tracer study of Amikom University. In this study, SVM is tested with several algorithm variants for handling imbalanced data. The study compares SMOTE, SMOTE-ENN, and SMOTE-Tomek combined with SVM to detect the employability of graduates. The test was carried out with K-Fold Cross Validation, with the highest accuracy and precision of 0.96 and 0.89, respectively, produced by the SMOTE-ENN SVM model.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_20-Observation_of_Imbalance_Tracer_Study_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Payment Transaction Model with Robust Security in the NFC-HCE Ecosystem with Secure Elements on Smartphones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130819</link>
        <id>10.14569/IJACSA.2022.0130819</id>
        <doi>10.14569/IJACSA.2022.0130819</doi>
        <lastModDate>2022-08-29T10:10:21.4100000+00:00</lastModDate>
        
        <creator>Lucia Nugraheni Harnaningrum</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>Transaction; near field communication; mobile; secure element; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>The Near Field Communication embedded (NFC-embedded) smartphone involves two ecosystems, namely Near Field Communication Subscriber Identity Module Secure Element (NFC-SIM-SE) and Near Field Communication Host Card Emulation (NFC-HCE). NFC-SIM-SE places secure elements in the smartphone, while NFC-HCE places secure elements in the cloud. In terms of security, the location of secure elements in the cloud is one of the weaknesses of NFC-HCE. The APL-SE transaction model is developed as a solution to improve transaction security with NFC-enabled mobile devices. This model moves the secure elements of the NFC-HCE ecosystem from the cloud to the smartphone, so that when a transaction is made, the smartphone does not communicate with an outside network to access the secure element. The APL-SE transaction model is tested using dummy data to measure the processing time of each step. The model is also tested for the encryption process: the encrypted data is compared with the original data and its randomness is calculated, which shows that the encrypted data is random. Random data increases data security. The transaction model test shows that the transaction runs well because the encrypted data is proven random and the execution time is 1,074 ms, far below the time an attacker would need to decipher the encrypted data. Random and fast encryption results indicate that transactions are secure. This keeps the opportunity for attackers to manipulate data small, so security is increased.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_19-Mobile_Payment_Transaction_Model_with_Robust_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cylinder Liner Defect Detection and Classification based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130818</link>
        <id>10.14569/IJACSA.2022.0130818</id>
        <doi>10.14569/IJACSA.2022.0130818</doi>
        <lastModDate>2022-08-29T10:10:21.3970000+00:00</lastModDate>
        
        <creator>Chengchong Gao</creator>
        
        <creator>Fei Hao</creator>
        
        <creator>Jiatong Song</creator>
        
        <creator>Ruwen Chen</creator>
        
        <creator>Fan Wang</creator>
        
        <creator>Benxue Liu</creator>
        
        <subject>Cylinder liner; defect detection; deep learning; machine vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Machine vision-based defect detection for cylinder liners is a challenging task due to the irregular shape and the various small defects on the cylinder liner surface. To improve the accuracy of defect detection by machine vision, a deep learning-based defect detection method for cylinder liners was explored in this paper. First, a machine vision system was designed based on an analysis of the causes and types of defects to obtain field images for establishing an original dataset. The dataset was then augmented by a modified augmentation method which combines a region-of-interest automatic extraction method with traditional augmentation methods. In addition to introducing an anchor configuration optimization method, an XML file-based method of highlighting defect areas was proposed to address the problem of tiny defect detection. The optimal model was experimentally determined by considering the network model, the training strategy, and the sample size. Finally, the detection system was developed and the network model was deployed. Experiments were carried out and the results of the proposed method were compared with those of traditional methods. The results show that the detection accuracies for sand, scratch, and wear defects are 77.5%, 70%, and 66.3%, respectively, which are improved by at least 26.3% compared with the traditional methods. The proposal can be used for field defect detection of cylinder liners.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_18-Cylinder_Liner_Defect_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Objective Optimization for Supply Chain Management using Artificial Intelligence (AI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130817</link>
        <id>10.14569/IJACSA.2022.0130817</id>
        <doi>10.14569/IJACSA.2022.0130817</doi>
        <lastModDate>2022-08-29T10:10:21.3630000+00:00</lastModDate>
        
        <creator>Mohamed Hassouna</creator>
        
        <creator>Ibrahim El-henawy</creator>
        
        <creator>Riham Haggag</creator>
        
        <subject>Supply chain management; artificial intelligence; particle swarm optimization; ant colony optimization; multi-objective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Supply chain management seeks to solve the complex problems of transporting goods from suppliers to end customers. Improving the differentiation between different paths to reduce costs and time may require smart systems. This paper proposes two new algorithms for determining, with Multi-Objective Optimization (MOO), the least-cost and most appropriate path between two nodes. First, an Ant Colony Optimization (ACO) algorithm working alongside MOO is adopted to determine the shortest path and time between two nodes at the least cost; the resulting Multi-Objective Intelligent Ant Colony (MOIAC) algorithm improves supply chain management by achieving optimal and appropriate solutions. Second, a Particle Swarm Optimization (PSO) algorithm, also working alongside MOO, is adopted to determine the least cost, the least time, and the shortest path; the resulting Multi Optimization Intelligent Particle Swarm (MOIPS) algorithm improves supply chain management by determining the shortest path at the least cost. The two proposed algorithms seek the optimal solution by MOO using a Java program. The experimental results show the excellence of the first algorithm in determining the optimal and most appropriate path while getting through the risks inherent in transporting goods, as well as in transporting goods in the shortest possible time at the least cost. The second algorithm also shows excellence in transporting goods at the least possible cost via the shortest path and in the shortest time.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_17-A_Multi_Objective_Optimization_for_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Design Level Class Decomposition using Evaluation Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130816</link>
        <id>10.14569/IJACSA.2022.0130816</id>
        <doi>10.14569/IJACSA.2022.0130816</doi>
        <lastModDate>2022-08-29T10:10:21.3500000+00:00</lastModDate>
        
        <creator>Bayu Priyambadha</creator>
        
        <creator>Tetsuro Katayama</creator>
        
        <subject>Refactoring; design level refactoring; software refactoring; class decomposition; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Refactoring on design-level artifacts such as the class diagram was previously done using a threshold-based agglomerative hierarchical clustering method, specifically for class decomposition. The approach produced better clusters based on the label-name similarity of attributes and methods. However, some problems emerged from the experimental results: negative-Silhouette elements still exist in the clusters, and there are unusable clusters consisting of only one attribute element. This paper proposes an evaluation process to optimize the clustering result. The evaluation process is an additional step that moves negative-Silhouette elements to other clusters in order to obtain better element Silhouette values. The evaluation process produces better clusters: the clusters it produces have higher Silhouette values, with the average Silhouette value increased by about 40%. Ultimately, the result shows none of the unusable clusters reported in the previous research.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_16-Enhancement_of_Design_Level_Class_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Odia Character in an Image by Dividing the Image into Four Quadrants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130815</link>
        <id>10.14569/IJACSA.2022.0130815</id>
        <doi>10.14569/IJACSA.2022.0130815</doi>
        <lastModDate>2022-08-29T10:10:21.3330000+00:00</lastModDate>
        
        <creator>Aradhana Kar</creator>
        
        <creator>Sateesh Kumar Pradhan</creator>
        
        <subject>Odia characters; image processing; character decomposition; machine learning; optical character recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>This paper deals with optical character recognition of Odia characters written in the font family ‘AkrutiOriAshok-99’ in Bold style with font sizes 18, 20, 22, 24, 26, 28, 36, 48, and 72. The font ‘AkrutiOriAshok-99’ comes from the typing software ‘Akruti’. The basic idea behind the approach followed in this paper is decomposing each character into four quadrants and then extracting features from each quadrant. Image processing techniques such as converting the image to gray, resizing the image, and converting the gray image to binary are used in this approach. The system explained in this paper has two major parts: DictionaryBuilding and FindingMatch. For DictionaryBuilding, a dictionary of images is created either by scanning a document or by converting a document to an image, both written in the same font family in different sizes. Features are extracted from each image of any font size in the ‘Dictionary’ using the Preprocessing, FindPath, GettingFeaturesLeft or GettingFeaturesRight, VisitSubQuad, RemainingSubQuad, WriteToExcel, and CommonFeature modules. The FindingMatch part is responsible for finding a correct match in the dictionary for the input image; for this, the FeatureExtraction and Recognition modules are used. Longest Common Subsequence (LCS) is used for finding the common feature in DictionaryBuilding as well as for finding the correct match. A total of 1800 characters, 200 of each font size, have been tested, and 98.1% correctness has been achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_15-Recognition_of_Odia_Character_in_an_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Capacity Image Steganography System based on Multi-layer Security and LSB Exchanging Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130814</link>
        <id>10.14569/IJACSA.2022.0130814</id>
        <doi>10.14569/IJACSA.2022.0130814</doi>
        <lastModDate>2022-08-29T10:10:21.3030000+00:00</lastModDate>
        
        <creator>Rana Sami Hameed</creator>
        
        <creator>Siti Salasiah Mokri</creator>
        
        <creator>Mustafa Sabah Taha</creator>
        
        <creator>Mustafa Muneeb Taher</creator>
        
        <subject>Information hiding; steganography; cryptography; multi-layer security; high capacity component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Data security is becoming an important issue because of the vast use of the Internet and of data transfer from one place to another. The security of these data is essential, especially when they represent critical information. Several techniques are used to hide such data, including encryption. Steganography can be utilised as an alternative to encryption because encryption is susceptible to data modification during transmission. Steganography is the hiding of data in cover multimedia such as images, audio, and video. The technique secures data transmission so that unwanted third parties cannot notice the hidden data. The challenge of steganography is the trade-off between the hidden data&#39;s payload capacity and the system&#39;s imperceptibility and robustness: as the hidden data increases, imperceptibility and robustness decrease. This is a big challenge in a digital world where social media, the Internet, and data transfer are used hugely. Because of this, this paper proposes a modified Least Significant Bit (LSB) method for the embedding process called the Multi-Layer Least Significant Bit Exchange Method (MLLSBEM). The proposed algorithm uses the AES encryption method to encrypt the secret text and then uses Huffman coding to compress the encrypted message as pre-processed data. The proposed study seeks to strike a compromise between these important issues, providing maximum payload capacity while retaining high security, imperceptibility, and reliability for secret communication using image processing and steganography techniques. Simulation findings demonstrate that the suggested method is superior to existing methods in terms of PSNR, SSIM, NCC, and payload capacity. The proposed method is immune to histogram, chi-square, and HVS attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_14-High_Capacity_Image_Steganography_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Erratic Navigation in Lecture Videos using Hybrid Text based Index Point Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130813</link>
        <id>10.14569/IJACSA.2022.0130813</id>
        <doi>10.14569/IJACSA.2022.0130813</doi>
        <lastModDate>2022-08-29T10:10:21.2870000+00:00</lastModDate>
        
        <creator>Geeta S Hukkeri</creator>
        
        <creator>R. H. Goudar</creator>
        
        <subject>Automatic speech recognition; indexing; key-frames; lecture video; optical character recognition; title identification; text extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>A difficulty with lecture videos is erratic navigation within a video when only the needed portion of its content is to be watched. Machine learning technologies like Optical Character Recognition and Automatic Speech Recognition allow the hybrid text from lecture slides and audio, respectively, to be fetched easily. This paper presents three main analyses for hybrid text retrieval, which is further useful for indexing the video. The experimental results indicate that the key-frame extraction accuracy is 94 percent. The Slide-To-Text conversion accuracy achieved in this study&#39;s evaluation of the text extraction capability of Tesseract, ABBYY FineReader, Transym, and Google Cloud Vision Optical Character Recognition is 92.0%, 90.5%, 80.8%, and 96.7%, respectively. Similarly, the accuracy of title identification is about 96 percent. To extract the speech text, three different APIs are used, namely the Microsoft, IBM, and Google Speech-to-Text APIs. The performance of the transcript generator is measured using Word Error Rate, Word Recognition Rate, and Sentence Error Rate metrics. This paper found that Google Cloud Vision Optical Character Recognition and the Google Speech-to-Text API achieved the best results compared to the other methods. The results obtained are very good and agreeable; therefore, the proposed methods can be used for automating lecture video indexing.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_13-Erratic_Navigation_in_Lecture_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Arabic Sentiment Analysis Approach using Optimized Multinomial Na&#239;ve Bayes Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130812</link>
        <id>10.14569/IJACSA.2022.0130812</id>
        <doi>10.14569/IJACSA.2022.0130812</doi>
        <lastModDate>2022-08-29T10:10:21.2570000+00:00</lastModDate>
        
        <creator>Ahmed Alsanad</creator>
        
        <subject>Machine learning; Arabic sentiment analysis; optimized multinomial Na&#239;ve Bayes (MNNB) classifier; hyper-parameters optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Arabic sentiment analysis has emerged during the last decade as a computational process on Arabic texts for extracting people&#39;s attitudes toward targeted objects or their feelings and emotions regarding targeted events. Sentiment analysis (SA) using machine learning (ML) methods has become an important research task for developing various text-based applications. Among different ML classifiers, the multinomial Na&#239;ve Bayes (MNNB) classifier is widely used for document classification due to its ability to perform statistical analysis of text contents. It significantly simplifies textual-data classification and offers an alternative to heavy ML-based semantic analysis methods. However, the MNNB classifier has a number of hyper-parameters that affect the text classification task and control the decision boundary of the model itself. In this paper, an optimized MNNB classifier-based approach is proposed for improving Arabic sentiment analysis. A number of experiments are conducted on large sets of Arabic tweets to evaluate the proposed approach. The optimized MNNB classifier is trained on three datasets and tested on a separate test set to show the performance of the developed approach. The experimental results on the test set revealed that the optimized MNNB classifier of the proposed approach outperforms the traditional MNNB classifier and other baseline classifiers. The accuracy rate of the optimized approach is increased by 1.6% compared with using the default values of the classifier’s hyper-parameters.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_12-An_Improved_Arabic_Sentiment_Analysis_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Decision Making System for the Optimization of Manufacturing Systems Maintenance using Digital Twins and Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130811</link>
        <id>10.14569/IJACSA.2022.0130811</id>
        <doi>10.14569/IJACSA.2022.0130811</doi>
        <lastModDate>2022-08-29T10:10:21.2400000+00:00</lastModDate>
        
        <creator>ABADI Mohammed</creator>
        
        <creator>ABADI Chaimae</creator>
        
        <creator>ABADI Asmae</creator>
        
        <creator>BEN-AZZA Hussain</creator>
        
        <subject>Maintenance systems; maintenance policy; digital twin; reasoning; ontologies; automation; cyber-physical interoperability; decision making; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Nowadays, manufacturing processes are becoming more and more complex, which constantly complicates the management of their life cycle. Therefore, in order to survive and maintain a good position in the competitive industrial context, manufacturers have understood that they must optimize the whole life cycle of their manufacturing processes. Maintenance constitutes one of the key processes indispensable for ensuring the proper functioning and optimizing the lifetime of machines and production lines, and thus for optimizing quality and production costs. Consequently, its automation and optimization have remained a center of interest for researchers and manufacturers, especially those working on the integration of artificial intelligence tools in industry. In this context, several new concepts and technologies have emerged, particularly in the context of Industry 4.0. One of these new concepts is the digital twin, which has become a promising direction for optimizing manufacturing process lifecycles. However, the implementation of this technology faces several complex problems related to the interoperability between physical entities and their virtual counterparts, as well as to the logical reasoning between the different elements constituting the digital twin. It is in this context that an approach based on digital twins and ontologies is proposed. The originality of this paper lies in two important points: the first is the exploitation of the expressiveness and reasoning capabilities of ontologies to solve cyber-physical interoperability problems at the digital twin level, while the second is the automation of the whole maintenance process and its key decision-making points using the inference potentialities of ontologies. The applicability and effectiveness of the proposed approach are validated through an industrial case study.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_11-A_Smart_Decision_Making_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encryption Algorithms Modeling in Detecting Man in the Middle Attack in Medical Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130810</link>
        <id>10.14569/IJACSA.2022.0130810</id>
        <doi>10.14569/IJACSA.2022.0130810</doi>
        <lastModDate>2022-08-29T10:10:21.2230000+00:00</lastModDate>
        
        <creator>Sulaiman Alnasser</creator>
        
        <creator>Raed Alsaqour</creator>
        
        <subject>Cyberattack; medical organization; man in the middle attack; encryption algorithm; rivest shamir adleman algorithm; elliptic curve cryptography algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>A cyberattack is a serious crime that could affect medical organizations. These attacks could lead to the disclosure of a medical organization&#39;s sensitive data, the loss of organizational data, or the disruption of business continuity. The Man-in-The-Middle (MITM) attack is one of the threats that could impact medical organizations. It happens when unapproved outsiders break into the traffic between two parties that think they are conversing directly, allowing the adversary to access, read, and change secret information. As a result, medical organizations lose confidentiality, integrity, and availability. Data encryption is a solution that renders vital information unreadable by unauthorized and unintended parties. It could involve protecting data with cryptography, usually by leveraging a scrambled code, so that only individuals with the decoding key can read the information. There is no full protection, due to the variety of MITM attacks. Each encryption algorithm has its advantages and disadvantages, such as the speed of encryption and decryption, the strength of the algorithm, and the cipher type. This research investigates MITM attacks and comprehensively compares the Rivest-Shamir-Adleman algorithm and the Elliptic Curve Cryptography algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_10-Encryption_Algorithms_Modeling_in_Detecting_Man.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Prophet+Optuna Prediction Method for Sales Estimations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130809</link>
        <id>10.14569/IJACSA.2022.0130809</id>
        <doi>10.14569/IJACSA.2022.0130809</doi>
        <lastModDate>2022-08-29T10:10:21.2100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Yusuke Nakagawa</creator>
        
        <creator>Tatsuya Momozaki</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>Prediction method; nonlinearity; prophet; optuna; typhoon event; modified optuna; mean and standard deviation adjustment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>A prediction method for estimating sales based on Prophet, with consideration of nonlinear events and conditions through a modified Optuna, is proposed. Linear prediction does not work for long-term sales prediction because purchasing actions are based on essentially nonlinear customer behavior. One well-known nonlinear prediction method is Prophet. It is, however, still difficult to adjust the nonlinear parameters in Prophet. To adjust the parameters, Optuna is widely used; parameter tuning by Optuna alone, however, is not good enough. Therefore, Optuna is modified with a short-term moving mean and standard deviation of the sales for the final prediction. Moreover, specific events such as typhoons are considered in the sales prediction. Through experiments with real sales data, the sensitivity of parameters such as the upper window, lower window, and event dates on the final sales prediction is evaluated, and the effect of Optuna is found to be 11.73%. It is also found that the effect of considering Covid-19 is about 2.4%, while the proposed modified Optuna improves the prediction accuracy by around 3% (from 80% to 83%).</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_9-Modified_Prophet_Optuna_Prediction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bilingual AI-Driven Chatbot for Academic Advising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130808</link>
        <id>10.14569/IJACSA.2022.0130808</id>
        <doi>10.14569/IJACSA.2022.0130808</doi>
        <lastModDate>2022-08-29T10:10:21.1930000+00:00</lastModDate>
        
        <creator>Ghazala Bilquise</creator>
        
        <creator>Samar Ibrahim</creator>
        
        <creator>Khaled Shaalan</creator>
        
        <subject>Chatbot; conversational agent; academic advising; natural language processing; deep learning; bilingual English Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Conversational technologies are revolutionizing how organizations communicate with people, thereby raising expectations of quick responses and constant availability. In an academic environment, students often have queries about institutional and academic policies and procedures, academic progression, activities, and more. In reality, the student services team and the academic advisors are overwhelmed with numerous queries that they cannot respond to instantly, resulting in dissatisfaction with services. Our study leverages Artificial Intelligence and Natural Language Processing technologies to build a bilingual chatbot that interacts with students in the English and Arabic languages. The conversational agent is built in Python and designed for students to support advising-related queries. We use a purpose-built domain-specific corpus, consisting of the common questions advisors receive from students and their responses, as the chatbot’s knowledge base. The chatbot engine determines the user intent by processing the input and retrieves the most appropriate response that matches the intent, with an accuracy of 80% in English and 75% in Arabic. We also evaluated the chatbot interface by conducting field experiments with students to test the accuracy of the chatbot with real-time input and to test the application interface.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_8-Bilingual_AI_Driven_Chatbot_for_Academic_Advising.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Regional Differentiation Allocation Mode of Energy Finance based on Attention Mechanism and Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130807</link>
        <id>10.14569/IJACSA.2022.0130807</id>
        <doi>10.14569/IJACSA.2022.0130807</doi>
        <lastModDate>2022-08-29T10:10:21.1600000+00:00</lastModDate>
        
        <creator>Ling Sun</creator>
        
        <creator>Hao Wu</creator>
        
        <subject>Attention mechanism; support vector machine; energy finance; differentiation configuration mode; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>This paper studies a prediction method for the regional differentiated allocation mode of energy finance based on an attention mechanism and a support vector machine, to provide scientific guidance for the future development direction of energy finance in each region. The key factors influencing energy consumption are analyzed, and the attention mechanism is used to extract regional features that constitute the sample set. After fusion and normalization of the sample-set features, the new feature set is used as input to construct a support vector machine forecasting model that outputs the predicted energy consumption of each region. Based on these results, the differentiated allocation patterns of energy finance in each region are predicted. The results show that the prediction model of this method has high training and test accuracy, and its predictions are consistent with the actual data in historical statistics. Compared with existing methods, the method of this study can more scientifically and effectively predict the sustainable and stable development of energy finance in the various regions of the city in the future. The predicted energy consumption of the experimental city over the next nine years, from high to low, is region C, region A, and region B. From this, it is predicted that regions A, B, and C of this city will be suited to the government-market dual-oriented, government-oriented, and market-oriented energy finance allocation models, respectively. The prediction results can provide scientific guidance for the sustainable and stable development of energy finance in the various regions of the city in the future.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_7-Research_on_Regional_Differentiation_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Roads using Aerial Photographs and Calculation of the Optimal Overflight Route for a Fixed-wing Drone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130806</link>
        <id>10.14569/IJACSA.2022.0130806</id>
        <doi>10.14569/IJACSA.2022.0130806</doi>
        <lastModDate>2022-08-29T10:10:21.1470000+00:00</lastModDate>
        
        <creator>Miguel P&#233;rez P</creator>
        
        <creator>Holman Montiel A</creator>
        
        <creator>Fredy Mart&#237;nez S</creator>
        
        <subject>Automatic road tracking; decorrelation stretching; aerial imagery; optimal overflight route calculation; UAV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Currently, fixed-wing drones have become indispensable tools for the surveillance of large areas of land, justified by their better cost/benefit ratio, great flight autonomy, and payload capacity. In particular, the identification of roads, traffic control, monitoring of wear on asphalt layers, risk identification, and safety improvement are applications being implemented on these unmanned aerial vehicles. Tracking a road requires systems capable of detecting artificial road marks in aerial photographs, which allows the implementation of optimal overflight routes. This research work presents a solution to the problem of road tracking from aerial photographs and implements an image processing algorithm and morphological techniques that calculate and trace the ideal route for the drone to track automatically, regardless of its orientation and the type of road.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_6-Automatic_Detection_of_Roads_Using_Aerial_Photographs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Evaluation of Basic Similarity Measures and their Application in Visual Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130805</link>
        <id>10.14569/IJACSA.2022.0130805</id>
        <doi>10.14569/IJACSA.2022.0130805</doi>
        <lastModDate>2022-08-29T10:10:21.1130000+00:00</lastModDate>
        
        <creator>Miroslav Marinov</creator>
        
        <creator>Yordan Kalmukov</creator>
        
        <creator>Irena Valova</creator>
        
        <subject>Content Based Image Retrieval (CBIR); image search and ranking; similarity measures; image databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Searching for similar images is an important feature for image databases and decision support systems in various subject domains. However, it is essential that search results are sorted by degree of similarity in descending order. This paper presents a comparative analysis of four existing similarity measures and experimentally tests whether they can be used to calculate similarity between images. Metrics could be evaluated by comparing their results to the cumulative human perception of similarity between the same images, obtained from real people. However, this introduces a lot of subjectivism due to non-uniform judgement and evaluation scales. The paper presents a more objective approach: it checks which measure performs best at retrieving more images containing objects of the same type. Results show that all four measures can be used to calculate similarity between images, but Jaccard’s index performs best in most cases, because it compares feature vectors positionally and thus indirectly considers shape, position, orientation, and other features.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_5-Experimental_Evaluation_of_Basic_Similarity_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Research on Usability and User Experience of User Interface Design Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130804</link>
        <id>10.14569/IJACSA.2022.0130804</id>
        <doi>10.14569/IJACSA.2022.0130804</doi>
        <lastModDate>2022-08-29T10:10:21.1000000+00:00</lastModDate>
        
        <creator>Junfeng Wang</creator>
        
        <creator>Zhiyu Xu</creator>
        
        <creator>Xi Wang</creator>
        
        <creator>Jingjing Lu</creator>
        
        <subject>Usability; user experience; interaction design; UI design software; eye tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>With the development of science and technology, people increasingly rely on intelligent interactive products, thus promoting the vigorous development of the user interface industry. Software with high usability and user experience can improve users’ effectiveness and satisfaction, as well as user stickiness. Taking the three design software tools most frequently used by design industry practitioners and students, Sketch, Adobe XD, and Figma, as research cases, this study compared and discussed the impact of interaction design and interface layout on usability and user experience, combining subjective experiment methods (scale scoring, user testing, and retrospective think-aloud interviews) with an objective experiment method, eye tracking. It is found that the overall usability and user experience of Figma is the best, Adobe XD is second, and Sketch is the worst. The main reason for this result is that the three software tools have issues to differing degrees in interface layout, information quality, and interaction logic. Based on the results, optimization suggestions for the usability and user experience of user interface design software are proposed from three perspectives: interface design, information quality, and interaction design.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_4-A_Comparative_Research_on_Usability_and_User_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Personalized Adaptive Learning in e-Learning Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130803</link>
        <id>10.14569/IJACSA.2022.0130803</id>
        <doi>10.14569/IJACSA.2022.0130803</doi>
        <lastModDate>2022-08-29T10:10:21.0670000+00:00</lastModDate>
        
        <creator>Massra Sabeima</creator>
        
        <creator>Myriam Lamolle</creator>
        
        <creator>Mohamedade Farouk Nanne</creator>
        
        <subject>e-Learning; adaptive learning; recommendation system; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>An adaptive e-learning scenario not only allows people to remain motivated and engaged in the learning process, but also helps them expand their awareness of the courses they are interested in. e-Learning systems in recent years have had to adjust to the advancement of the educational situation. Therefore, many recommender systems have been presented to design and provide educational resources. However, some major aspects of the learning process have not been explored quite enough; for example, the adaptation to each learner. In learning, and precisely in the context of the lifelong learning process, adaptability is necessary to provide adequate learning resources and learning paths that suit the learners’ characteristics, skills, etc. e-Learning systems should allow the learner to benefit the most from the presented learning resource content, taking into account her/his learning experience. The most relevant resources should be recommended to match her/his profile and knowledge background, not forgetting the learning goals she/he would like to achieve and the spare time she/he has, in order to align the learning session with her/his goals, whether to acquire or reinforce a certain skill. This paper proposes a personalized e-learning system that recommends learning paths adapted to the user’s profile.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_3-Towards_Personalized_Adaptive_Learning_in_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Severely Degraded Underwater Image Enhancement with a Wavelet-based Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130802</link>
        <id>10.14569/IJACSA.2022.0130802</id>
        <doi>10.14569/IJACSA.2022.0130802</doi>
        <lastModDate>2022-08-29T10:10:21.0530000+00:00</lastModDate>
        
        <creator>Shunsuke Takao</creator>
        
        <creator>Tsukasa Kita</creator>
        
        <creator>Taketsugu Hirabayashi</creator>
        
        <subject>Underwater image enhancement; deep learning; discrete wavelet transform; whitening and coloring transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Underwater images are important in marine science and ocean engineering fields owing to the color information they provide and the low cost and compactness of the cameras. Yet obtained underwater images are often degraded, and restoring and enhancing the wavelength-selective signal attenuation of underwater images, which depends on complex underwater physical processes, is essential in practical applications. While recently developed deep learning is a promising choice, constructing a sufficiently large dataset covering all real images is challenging, a difficulty peculiar to underwater image processing. To supplement the relatively small dataset, previous studies alternatively construct artificial underwater image datasets based on a physical model or a Generative Adversarial Network. Also, incorporating traditional signal processing methods into the network architecture has shown promising success, though enhancement of severely degraded underwater images remains a major issue. In this paper, we tackle underwater image enhancement with an encoder-decoder based deep learning model incorporating the discrete wavelet transform and the whitening and coloring transform. We also construct a severely degraded real underwater image dataset. The presented model shows excellent results both qualitatively and quantitatively on the artificial and real image datasets. The constructed dataset is available at https://github.com/tkswalk/2022-IJACSA.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_2-Severely_Degraded_Underwater_Image_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Wildfire Detection and Alerting with a Novel Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130801</link>
        <id>10.14569/IJACSA.2022.0130801</id>
        <doi>10.14569/IJACSA.2022.0130801</doi>
        <lastModDate>2022-08-29T10:10:20.9270000+00:00</lastModDate>
        
        <creator>Audrey Zhang</creator>
        
        <creator>Albert S. Zhang</creator>
        
        <subject>Wildfire detection; CNN (convolutional neural network); machine learning; image processing; model server framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(8), 2022</description>
        <description>Up until the end of July 2022, there have been over 38k wildfires in the US alone, decimating over 5.6 million acres. Wildfires significantly contribute to carbon emissions, which are the root cause of global warming. Research has shown that artificial intelligence already plays a very important role in wildfire management, from detection to remediation. In this investigation, a novel machine learning approach is defined for spot wildfire detection in real time with high accuracy. The research compared and examined two different Convolutional Neural Network (CNN) approaches. In the first approach, the novel machine learning method, a model server framework is used to serve CNN models trained separately for daytime and nighttime, validating and feeding wildfire images sorted by time of day. In the second approach, which has been covered by existing research, one large CNN model is trained on all wildfire images regardless of daytime or nighttime. With the first approach, a detection precision of 98% has been achieved, almost 8% higher than the result from the second approach. The novel machine learning approach can be integrated with social media channels and available forest response systems via APIs for alerting, creating an automated real-time wildfire detection system. This research result can be extended by fine-tuning the CNN model to build wildfire detection systems for different regions and locations. With the rapid development of network coverage such as Starlink, and of drone surveillance, real-time image capturing can be combined with this research to fight the increasing risk of wildfires with automated real-time wildfire detection and alerting.</description>
        <description>http://thesai.org/Downloads/Volume13No8/Paper_1-Real_Time_Wildfire_Detection_and_Alerting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acceptance of YouTube for Islamic Information Acquisition: A Multi-group Analysis of Students’Academic Discipline</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307108</link>
        <id>10.14569/IJACSA.2022.01307108</id>
        <doi>10.14569/IJACSA.2022.01307108</doi>
        <lastModDate>2022-07-31T09:30:21.3630000+00:00</lastModDate>
        
        <creator>M. S Ishak</creator>
        
        <creator>A Sarkowi</creator>
        
        <creator>M. F Mustaffa</creator>
        
        <creator>R Mustapha</creator>
        
        <subject>Unified Theory of Acceptance and Use of Technology (UTAUT); YouTube; information acquisition; student knowledge; Partial Least Square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>YouTube has been recognized as an important information source for the millennial generation. This paper aims to identify the factors affecting Malaysian higher education students’ acceptance of YouTube for Islamic information acquisition and to investigate whether any notable distinction exists between the path coefficients of students in the Islamic academic discipline and those in other disciplines. Employing the Unified Theory of Acceptance and Use of Technology (UTAUT) model as its theoretical foundation, data were collected by distributing a self-administered survey to 795 students actively using YouTube for information seeking. Partial least squares structural equation modelling (PLS-SEM) and multi-group analysis (MGA) in SmartPLS 3.2.7 software were used to analyze the data. Three constructs of the UTAUT model, performance expectancy, effort expectancy, and social influence, were found to significantly and positively influence behavioural intention to use YouTube for Islamic information acquisition in both groups of students. Facilitating conditions demonstrate a significantly negative relationship with YouTube acceptance for students in other academic disciplines, but not for the Islamic academic discipline. Additionally, the MGA findings suggest that the determinants’ path coefficients of YouTube acceptance for Islamic information acquisition are not significantly different between students in Islamic academic disciplines and the other disciplines. This study validates the UTAUT model for understanding the determinants of social media application usage in a new study context.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_108-Acceptance_of_YouTube_for_Islamic_Information_Acquisition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model Driven Approach for Unifying user Interfaces Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307107</link>
        <id>10.14569/IJACSA.2022.01307107</id>
        <doi>10.14569/IJACSA.2022.01307107</doi>
        <lastModDate>2022-07-31T09:30:21.3330000+00:00</lastModDate>
        
        <creator>Henoc Soude</creator>
        
        <creator>Kefil Koussonda</creator>
        
        <subject>Model driven; user interface; modeling language; templating language; low code; citizen developer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>In this paper, we address the rapid development of client web applications (frontends) in a context where development frameworks are legion. Indeed, with the digital transformation driven by the COVID-19 pandemic, we are witnessing an ever-increasing demand for application development in a relatively short time. Added to this is the lack of skilled developers for constantly evolving technologies. We therefore offer a low-code platform for the automatic generation of client web applications, regardless of the platform or framework chosen. First, we defined an interface design methodology based on a portal. We then implemented our model-driven architecture, which consisted of defining a modeling and templating language, centered on user data, flexible enough not only to be used in various fields but also to be easily used by a citizen developer.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_107-A_Model_Driven_Approach_for_Unifying_User_Interfaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake News Detection in Social Media based on Multi-Modal Multi-Task Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307106</link>
        <id>10.14569/IJACSA.2022.01307106</id>
        <doi>10.14569/IJACSA.2022.01307106</doi>
        <lastModDate>2022-07-31T09:30:21.3170000+00:00</lastModDate>
        
        <creator>Xinyu Cui</creator>
        
        <creator>Yang Li</creator>
        
        <subject>Multi-modal fake news; multi-task learning; external evidences; multi-level attention mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The popularity of social media has led to a substantial increase of data. The task of fake news detection is very important because the authenticity of posts cannot be guaranteed. In recent years, fake news detection combining multimodal information such as images and videos has attracted wide attention from scholars. However, the majority of research work only focuses on the fusion of multi-modal information, while neglecting the role of external evidence. To address this challenge, this paper proposes a fake news detection method based on multi-modal and multi-task learning. When learning the representation of news posts, this paper models the interaction between the images and texts in posts and external evidence through a multi-level attention mechanism, and uses evidence veracity classification as an auxiliary task so as to improve fake news detection. The authors conduct comprehensive experiments on a public dataset and demonstrate that the proposed method outperforms several state-of-the-art baselines. An ablation experiment proves the effectiveness of the auxiliary evidence-veracity task in fake news detection.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_106-Fake_News_Detection_in_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Tariff Classification System using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307105</link>
        <id>10.14569/IJACSA.2022.01307105</id>
        <doi>10.14569/IJACSA.2022.01307105</doi>
        <lastModDate>2022-07-31T09:30:21.2830000+00:00</lastModDate>
        
        <creator>German Cuaya-Simbro</creator>
        
        <creator>Irving Hernandez-Vera</creator>
        
        <creator>Elias Ruiz</creator>
        
        <creator>Karina Gutierrez-Fragoso</creator>
        
        <subject>Machine learning; digital image processing; automation process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The tariff fraction is the universal form of identifying a product. It is very useful because it indicates the tariff that the product must pay when entering or leaving the country, in this case Mexico. Coffee is a complicated product to identify correctly due to its variants, which are not distinguishable at first glance; this can cause confusion and the tariff to be charged incorrectly. Therefore, the main objective of this project was to develop a system based on Deep Learning models that identifies the tariff code of coffee for importing or exporting this product through the analysis of digital images in real time, automatically generating a general report with this information for the customs broker. The developed system speeds up the process of assigning the tariff fraction and also enables its correct assignment, avoiding confusion with other products and the wrong collection of the tariff. It is important to mention that the system, although currently focused on Mexico, can be used in all customs offices since the tariff fraction is universal. The evaluation of the models was carried out with cross-validation, obtaining an effectiveness of more than 80%, while the tariff fraction assignment model had an effectiveness of 90%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_105-Automatic_Tariff_Classification_System_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Low-Cost Teleoperated Explorer Robot (TXRob)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307104</link>
        <id>10.14569/IJACSA.2022.01307104</id>
        <doi>10.14569/IJACSA.2022.01307104</doi>
        <lastModDate>2022-07-31T09:30:21.2700000+00:00</lastModDate>
        
        <creator>Rafael Verano M</creator>
        
        <creator>Jose Caceres S</creator>
        
        <creator>Abel Arenas H</creator>
        
        <creator>Andres Montoya A</creator>
        
        <creator>Joseph Guevara M</creator>
        
        <creator>Jarelh Galdos B</creator>
        
        <creator>Jesus Talavera S</creator>
        
        <subject>Rescue robot; teleoperation; low cost robot; artificial vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Natural disasters such as earthquakes or mudslides destroy everything in their path, causing buildings to collapse, which can cause people to lose their lives or suffer permanent injuries. Rescuers and firefighters are responsible for entering these ruined buildings, a task that is very dangerous because they can get trapped in the rubble or suffocate due to the harmful gases found inside. Considering the risk of this type of operation, technological innovations can help in the exploration of ruined buildings and the rescue of people. Therefore, this article describes the development of TXRob, a low-cost teleoperated robot for the exploration of post-disaster scenarios. TXRob has artificial vision, environmental gas recognition sensors, and a real-time data display panel; it is sized to enter buildings and is capable of moving over uneven surfaces, such as debris or cracks, thanks to its track system. A human operator can remotely monitor and control the robot. TXRob’s versatility and sensor performance have been tested on uneven and harsh surfaces in a simulated disaster environment. These tests suggest that the designed robot is suitable for use in rescue situations.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_104-Development_of_a_Low_Cost_Teleoperated_Explorer_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Parallel Algorithm for Clustering Big Data based on the Spark Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307103</link>
        <id>10.14569/IJACSA.2022.01307103</id>
        <doi>10.14569/IJACSA.2022.01307103</doi>
        <lastModDate>2022-07-31T09:30:21.2370000+00:00</lastModDate>
        
        <creator>Zineb Dafir</creator>
        
        <creator>Said Slaoui</creator>
        
        <subject>Clustering; big data; spark; parallel computing; parallel K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The principal objective of this paper is to provide a parallel implementation focused on the main steps of the parameter-free clustering algorithm based on K-means (PFK-means) using the Spark framework and a machine learning-based model to process Big Data. Thus, the process consists of parallelizing the main tasks of the first stage of the PFK-means clustering algorithm using successive RDD functions. Then, the parallel K-means provided by Spark MLlib is invoked by setting the cluster centers and the number of clusters determined in the previous step as input parameters of the parallel K-means. Furthermore, a comparison between the parallel designed algorithm and the parallel K-means was conducted using UCI data sets in terms of the sum of squared errors and the processing time. The experimental results, performed locally using the Spark framework, demonstrate the efficiency of the proposed solution.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_103-An_Efficient_Parallel_Algorithm_for_Clustering_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Regression-based Approach for Sound Event Detection in Noisy Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307102</link>
        <id>10.14569/IJACSA.2022.01307102</id>
        <doi>10.14569/IJACSA.2022.01307102</doi>
        <lastModDate>2022-07-31T09:30:21.2230000+00:00</lastModDate>
        
        <creator>Soham Dinesh Tiwari</creator>
        
        <creator>Karanth Shyam Subraya</creator>
        
        <subject>Sound Event Detection (SED); sound event classification; frequency dynamic convolution; audio processing; FilterAugment; data augmentation; vision transformers; Pretrained Audio Neural Networks (PANN); Convolutional Block Attention Module (CBAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Sound event detection enables machines to detect when a particular sound event has occurred, in addition to classifying the type of event. Successful detection of various sound events is paramount in building secure surveillance systems and other smart home appliances. However, noisy events and environments exacerbate the performance of many sound event detection models, rendering them ineffective in real-world scenarios. Hence, the need arises for robust sound event detection algorithms with low inference times in noisy environments. You Only Hear Once (YOHO) is a purely convolutional architecture that uses a regression-based approach for sound event detection instead of the more common frame-wise classification-based approach. The YOHO architecture proved robust in noisy environments, outperforming the convolutional recurrent neural networks popular in sound event detection systems. Additionally, different ways to enhance the performance of the YOHO architecture are explored, experimenting with different computer vision architectures, dynamic convolutional layers, pretrained audio neural networks and data augmentation methods to help improve the performance of the models on noisy data. Amongst several modifications to the YOHO architecture, the Frequency Dynamic Convolution layers helped improve the internal model data representations by enforcing frequency-dependent convolution operations, which improved YOHO’s performance on noisy audio in outdoor and vehicular environments. Similarly, the FilterAugment data augmentation method and the Convolutional Block Attention Module helped improve YOHO’s performance on the VOICe dataset of noisy audio, by augmenting the data and by improving internal model representations of the input audio using attention, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_102-Exploring_Regression_based_Approach_for_Sound_Event_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Higher-Dimensional Hyperchaotic System based on Combined Control and its Encryption Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307101</link>
        <id>10.14569/IJACSA.2022.01307101</id>
        <doi>10.14569/IJACSA.2022.01307101</doi>
        <lastModDate>2022-07-31T09:30:21.1900000+00:00</lastModDate>
        
        <creator>Kun Zhao</creator>
        
        <creator>Jianbin He</creator>
        
        <subject>Hyperchaotic system; positive lyapunov exponent; chaotic pseudo-random sequence; image encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>According to the anti-control principle of chaos, a combined control method is proposed based on a class of asymptotically stable linear systems with multiple controllers. A higher-dimensional hyperchaotic system is investigated by the Lyapunov exponents method and equilibrium point analysis, and it possesses the largest number of positive Lyapunov exponents. The chaotic pseudo-random sequences of the higher-dimensional hyperchaotic system can pass all NIST tests after preprocessing and exhibit better chaotic characteristics. Meanwhile, a new encryption algorithm for image information with position scrambling, sequential diffusion and reverse diffusion is designed based on the chaotic pseudo-random sequences. Experiments on image information are given to verify the effectiveness and feasibility of the encryption algorithm. Finally, security analyses are also discussed in terms of key sensitivity, differential attack and statistical analysis. It is shown that the encryption algorithm has a large enough key space and can be applied to secure communication.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_101-Design_of_Higher_Dimensional_Hyperchaotic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building an Arabic Dialectal Diagnostic Dataset for Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01307100</link>
        <id>10.14569/IJACSA.2022.01307100</id>
        <doi>10.14569/IJACSA.2022.01307100</doi>
        <lastModDate>2022-07-31T09:30:21.1770000+00:00</lastModDate>
        
        <creator>Jinane Mounsef</creator>
        
        <creator>Maheen Hasib</creator>
        
        <creator>Ali Raza</creator>
        
        <subject>Dialectal Arabic (DA); healthcare diagnosis; natural language processing (NLP); multi-class labeling; crowd sourcing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Accurate diagnosis of patient conditions becomes challenging for medical practitioners in urban metropolitan cities. A variety of languages and spoken dialects impedes the diagnosis reached through the exploratory journey that a medical practitioner and patient go through. Natural language processing has been used in well-known applications, such as Google Translate, as a solution to reduce language barriers. Languages typically encountered in these applications are provided in the most commonly known, used or standardized dialect. The Arabic language can benefit from the common dialect available in such applications. However, given the diversity of Arabic dialects in the healthcare domain, there is a risk of incorrect interpretation of a dialect, which can impact the diagnosis or treatment of patients. Arabic dialect corpuses published in recent research work can be applied to rule-based natural language applications. Our study aims to develop an approach to support medical practitioners by ensuring that the diagnosis is not impeded by the misinterpretation of patient responses. Our initial approach reported in this work adopts the methods used by practitioners in diagnosis carried out within the scope of the Emirati and Egyptian Arabic dialects. In this paper, we develop and provide a public Arabic Dialect Dataset (ADD), a corpus of audio samples related to healthcare. In order to train machine learning models, the dataset is designed with multi-class labelling. Our work indicates that there is a clear risk of bias in datasets, which may arise when a large number of classes do not have enough training samples. The crowd sourcing solution presented in this work may be an approach to overcoming the difficulty of sourcing audio samples. Models trained with this dataset may be used to support the diagnosis made by medical practitioners.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_100-Building_an_Arabic_Dialectal_Diagnostic_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-time Egyptian License Plate Detection and Recognition using YOLO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130799</link>
        <id>10.14569/IJACSA.2022.0130799</id>
        <doi>10.14569/IJACSA.2022.0130799</doi>
        <lastModDate>2022-07-31T09:30:21.1600000+00:00</lastModDate>
        
        <creator>Ahmed Ramadan Youssef</creator>
        
        <creator>Abdelmgeid Ameen Ali</creator>
        
        <creator>Fawzya Ramadan Sayed</creator>
        
        <subject>Automatic license plate recognition; Egyptian license plate; Tiny-YOLOV3; CNN; eALPR dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Automatic License Plate Detection and Recognition (ALPR) is one of the most significant technologies in intelligent transportation and surveillance across the world. It faces many challenges because it is affected by many parameters, such as a country’s plate layout, colors, language, fonts, and several environmental conditions; hence, there is no consolidated ALPR system for all countries. Many ALPR methods have been proposed based on traditional image processing and machine learning algorithms, since there are not enough datasets, particularly for the Arabic language. In this paper, we propose a real-time ALPR system for Egyptian license plate (LP) detection and recognition using Tiny-YOLOV3. It consists of two deep convolutional neural networks. The experimental results on the first publicly available Egyptian Automatic License Plate (EALPR) dataset show that the proposed system is robust in detecting and recognizing Egyptian license plates, giving mean average precision values of 97.89% and 92.46% for LP detection and character recognition, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_99-Real_time_Egyptian_License_Plate_Detection_and_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Diabetic Retinopathy using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130798</link>
        <id>10.14569/IJACSA.2022.0130798</id>
        <doi>10.14569/IJACSA.2022.0130798</doi>
        <lastModDate>2022-07-31T09:30:21.1430000+00:00</lastModDate>
        
        <creator>Manal Alsuwat</creator>
        
        <creator>Hana Alalawi</creator>
        
        <creator>Shema Alhazmi</creator>
        
        <creator>Sarah Al-Shareef</creator>
        
        <subject>CNN; convolutional neural networks; deep learning; transfer learning; medical imaging; diabetic retinopathy; retina fundus images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Diabetic retinopathy (DR) is among the most dangerous diabetic complications and can lead to lifelong blindness if left untreated. One of the essential difficulties in DR is early detection, which is crucial for the progress of therapy. Accurate diagnosis of the DR stage is famously complicated and demands skilled analysis of fundus images by an expert. This paper detects DR and classifies its stage from retina images by applying convolutional neural networks and transfer learning models. Three deep learning models were investigated: a CNN trained from scratch and the pre-trained InceptionV3 and EfficientNetsB5. Experiment results show that the proposed CNN model outperformed the pre-trained models, with a 9 to 25% relative improvement in F1-score compared to pre-trained InceptionV3 and EfficientNetsB5, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_98-Prediction_of_Diabetic_Retinopathy_using_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Richer IndoWordNet with New Additions for Hindi and Gujarati Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130797</link>
        <id>10.14569/IJACSA.2022.0130797</id>
        <doi>10.14569/IJACSA.2022.0130797</doi>
        <lastModDate>2022-07-31T09:30:21.1130000+00:00</lastModDate>
        
        <creator>Milind Kumar Audichya</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Jatin C. Modh</creator>
        
        <subject>Gujarati; Hindi; Indian language WordNet; IndoWordNet; loanwords; WordNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The authors of this research paper present a mechanism for dealing with the inclusion of loanwords, missing words, and newly developed terms in WordNets. WordNet has evolved into one of the most prominent Natural Language Processing (NLP) toolkits. This mechanism can be used to improve the WordNet of any language. The authors chose to work with the Hindi and Gujarati languages in this research work to achieve a higher quality research aspect because these are languages with major dialects. The research work used a corpus of more than 5000 Hindi verses instead of a prose-based data corpus. As a result, nearly 14000 Hindi words were discovered that were not present in the popular Hindi IndoWordNet, accounting for 13.23 percent of the total existing word count of 105000+. Working with idioms was a distinct method for the Gujarati language. Around 3500 idioms were used, and nearly 900 Gujarati terms were discovered that did not exist in the IndoWordNet, accounting for nearly 1.4 percent of the total of 64000+ Gujarati words in the IndoWordNet. This work will also contribute almost 14000 Hindi words and around 900 Gujarati words to the IndoWordNet project.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_97-Towards_a_Richer_IndoWordNet_with_New_Additions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Global Average Attention Pooling (GAAP) on Resnet50 Backbone for Person Re-identification Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130796</link>
        <id>10.14569/IJACSA.2022.0130796</id>
        <doi>10.14569/IJACSA.2022.0130796</doi>
        <lastModDate>2022-07-31T09:30:21.1130000+00:00</lastModDate>
        
        <creator>Syamala Kanchimani</creator>
        
        <creator>Maloji Suman</creator>
        
        <creator>P. V. V. Kishore</creator>
        
        <subject>Person re-identification; attention network; ResNet50; global average attention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Person re-identification is an extremely challenging task in computer vision that has seen success with deep learning approaches. Despite successful models, there are gaps in the form of unbalanced labels, poor resolution, uncertain bounding box annotations, occlusions, and unlabelled datasets. Previous methods applied deep learning approaches based on feature representation, metric learning, and ranking optimization. In this work, we propose Global Average Attention Pooling (GAAP) on ResNet50, applied to four benchmark Re-ID datasets for classification tasks. We also perform an extensive evaluation of the proposed attention module with different deep learning pipelines as backbone architectures. The four benchmark person Re-ID datasets used are Market-1501, RAiD, Partial-iLIDS, and RPIfield. We computed cumulative matching characteristics (CMC) and mean Average Precision (mAP) as the performance evaluation parameters of the proposed method against the state of the art. The results show that the added attention layer improved the overall recognition precision over the baselines.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_96-Learning_Global_Average_Attention_Pooling_GAAP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crop Field Monitoring and Disease Detection of Plants in Smart Agriculture using Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130795</link>
        <id>10.14569/IJACSA.2022.0130795</id>
        <doi>10.14569/IJACSA.2022.0130795</doi>
        <lastModDate>2022-07-31T09:30:21.0830000+00:00</lastModDate>
        
        <creator>G. Balram</creator>
        
        <creator>K. Kiran Kumar</creator>
        
        <subject>Image acquisition; segmentation; feature extraction; Internet of Things; plant disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The Internet of Things (IoT) can be defined as the network of physical objects that have sensors, software, and other technologies built into them in order to communicate and exchange data with other systems and devices over the internet. The IoT can be used in intelligent agricultural advancements to increase the quality of agriculture. The manual monitoring of plant diseases is quite challenging: it demands enormous effort, expertise in plant diseases, and considerable processing time. The idea of automation in Smart Agriculture is implemented using the IoT. The installed IoT system, which includes a NodeMCU, cameras, and soil moisture and temperature sensors, helps monitor plant leaf conditions, control water irrigation, gather images, and detect diseases in plants from the datasets collected from leaves. To detect plant diseases, image processing is applied. Disease detection comprises image acquisition, image pre-processing, image segmentation, and feature extraction and classification. In addition, the performance of two machine learning techniques, a linear and polynomial kernel multi-hidden-layer extreme learning machine (MELM) and a support vector machine (SVM), has been studied. This paper discusses how plant diseases can be detected from images of their leaves. This analysis seeks to validate the proposed system as an appropriate solution for IoT-based environmental surveillance, water irrigation system management, and an efficient approach for leaf disease detection on plants. The proposed multi-hidden-layer extreme learning machine classification delivers a good performance of 99.12% in the classification of leaf diseases, compared to the Support Vector Machine classification, which gives 98%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_95-Crop_Field_Monitoring_and_Disease_Detection_of_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Machine Learning Algorithms and Data Mining Techniques for Predicting the Existence of Heart Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130794</link>
        <id>10.14569/IJACSA.2022.0130794</id>
        <doi>10.14569/IJACSA.2022.0130794</doi>
        <lastModDate>2022-07-31T09:30:21.0030000+00:00</lastModDate>
        
        <creator>Nourah Alotaibi</creator>
        
        <creator>Mona Alzahrani</creator>
        
        <subject>Heart disease; feature selection; feature extraction; dimensionality reduction; Chi-squared; Naive Bayes; Cleveland dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Heart diseases are considered one of the leading causes of death worldwide. They are difficult to predict, even for a specialist physician, as prediction is not an easy task and requires great knowledge and expertise. With the variety of machine learning and deep learning algorithms, many recent studies in the state of the art have done remarkable and practical work on predicting the presence of heart diseases. However, some of these works were affected by various drawbacks. Hence, this work aims to compare and analyze different classifiers, pre-processing techniques, and dimensionality reduction techniques (feature selection and feature extraction) and study their effect on predicting the existence of heart diseases. Based on the resulting performance of several experiments conducted on the well-known Cleveland heart disease dataset, the findings of this study are: 1) the most significant subset of features for predicting the existence of heart diseases is PES, EIA, CPT, MHR, THA, VCA, and OPK; 2) the Na&#239;ve Bayes classifier gave the best prediction performance; and 3) Chi-squared feature selection was the data mining technique that reduced the number of features while maintaining the same improved performance for predicting the presence of heart disease.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_94-Comparative_Analysis_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Design: Sale of Clothes Through Electronic Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130793</link>
        <id>10.14569/IJACSA.2022.0130793</id>
        <doi>10.14569/IJACSA.2022.0130793</doi>
        <lastModDate>2022-07-31T09:30:20.9870000+00:00</lastModDate>
        
        <creator>Raul Jauregui-Velarde</creator>
        
        <creator>Franco Gonzalo Conde Arias</creator>
        
        <creator>Jose Luis Herrera Salazar</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Mobile application; COVID-19; e-commerce; RUP; sale of clothes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>During the COVID-19 pandemic, small clothing sales companies lost income and customers due to a lack of digital transformation, causing the dismissal of many employees. Given this problem, our objective is to design an e-commerce mobile application for the sale of clothes, so that small and medium-sized enterprises dedicated to this area can generate income and retain their customers. For this, the Rational Unified Process (RUP) methodology was applied, because this methodology provides a structured way for companies or developers to visualize the development of the software; for validation by expert judgment, a survey and a questionnaire were used as instruments. The result was a positive rating for the design of the mobile application and its acceptance. In conclusion, the e-commerce mobile application was successfully designed, backed by expert judgment, so that small and medium-sized enterprises can offer their products, generate income, and build customer loyalty.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_93-Mobile_Application_Design_Sale_of_Clothes_Through_Electronic_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of COVID-19 Patients Recovery using Ensemble Machine Learning and Vital Signs Data Collected by Novel Wearable Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130792</link>
        <id>10.14569/IJACSA.2022.0130792</id>
        <doi>10.14569/IJACSA.2022.0130792</doi>
        <lastModDate>2022-07-31T09:30:20.9570000+00:00</lastModDate>
        
        <creator>Hasan K. Naji</creator>
        
        <creator>Hayder K. Fatlawi</creator>
        
        <creator>Ammar J. M. Karkar</creator>
        
        <creator>Nicolae GOGA</creator>
        
        <creator>Attila Kiss</creator>
        
        <creator>Abdullah T. Al-Rawi</creator>
        
        <subject>Machine learning; COVID-19; wearable device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>During the spread of a pandemic such as COVID-19, the effort required of health institutions increases dramatically. Generally, health systems’ response and efficiency depend on monitoring vital signs such as blood oxygen level, heartbeat, and body temperature. At the same time, remote health monitoring and wearable health technologies have revolutionized the concept of effective healthcare provision from a distance. However, analyzing such a large amount of medical data in time to provide decision-makers with the necessary health procedures is still a challenge. In this research, a wearable device and monitoring system are developed to collect real data from more than 400 COVID-19 patients. Based on this data, three classifiers are implemented using two ensemble classification techniques (Adaptive Boosting and Adaptive Random Forest). The analysis of the collected data showed a remarkable relationship between the patient’s age and chronic disease on the one hand and the speed of recovery on the other. The experimental results indicate a highly accurate performance for the Adaptive Boosting classifiers, reaching 99%, while the Adaptive Random Forest got a 91% accuracy metric.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_92-Prediction_of_COVID_19_Patients_Recovery_using_Ensemble_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Prototype: Learning in the Programming Course in Computer Engineering Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130791</link>
        <id>10.14569/IJACSA.2022.0130791</id>
        <doi>10.14569/IJACSA.2022.0130791</doi>
        <lastModDate>2022-07-31T09:30:20.9570000+00:00</lastModDate>
        
        <creator>Lilian Ocares-Cunyarachi</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Design thinking; learning; mobile application; students; programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Students need to continue learning about programming because we are now in an era of technological globalization. Programming is used in many areas to produce software, electronic devices, and more, so it is very important to learn. This work seeks to design a mobile application that helps students learn much more about programming, since students in the first cycles of computer science and computer engineering have difficulties learning different programming languages. The application therefore seeks to complement students’ learning so that they can obtain favorable results in their progress. The objective is to design a mobile application for teaching programming in a didactic way that helps computer science students with learning difficulties. The methodology used is Design Thinking, an agile methodology based on phases that help us understand and collect information about the identified problem in order to provide a solution. As a case study, the design of the mobile application and the detailed development of the prototype are shown. The result obtained is the prototype of the mobile application, from which students with learning difficulties will benefit. In addition, a survey of students and teachers at the University of Sciences and Humanities is presented, yielding very relevant data about their learning.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_91-Mobile_Application_Prototype_Learning_in_the_Programming_Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structural Vetting of Academic Proposals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130790</link>
        <id>10.14569/IJACSA.2022.0130790</id>
        <doi>10.14569/IJACSA.2022.0130790</doi>
        <lastModDate>2022-07-31T09:30:20.9400000+00:00</lastModDate>
        
        <creator>Opeoluwa Iwashokun</creator>
        
        <creator>Abejide Ade-Ibijola</creator>
        
        <subject>Document structure; context free grammar; postgraduate supervision; artificial intelligence; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Increasing postgraduate enrollments give rise to many proposal documents that require vetting and human supervision. Reading and comprehending large documents is a tedious and somewhat difficult task for humans that can be delegated to machines. One way of assisting supervisors with this routine screening of academic proposals is to provide an artificial intelligence (AI) tool for initial structural vetting: checking whether the sections of a proposal are complete and appear where they are supposed to. Natural Language Processing (NLP) techniques in AI for document vetting have been applied in the legal and financial domains. However, in academia, available tools only perform tasks such as checking proposals for plagiarism, spelling, or grammar, and word editing, not structural vetting of academic proposals. This paper presents a tool named Auto-proofreader that attempts to perform structural document review of proposals on behalf of the human expert, using formal techniques and document structure understanding hinged on context-free grammar (CFG) rules. Experimental results on a corpus of 20 academic proposals, evaluated with the confusion matrix technique, give an overall accuracy of 87%. This tool is expected to be a useful aid in postgraduate supervision for vetting students’ academic proposals.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_90-Structural_Vetting_of_Academic_Proposals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT-based Fire Safety Management System for Educational Buildings: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130789</link>
        <id>10.14569/IJACSA.2022.0130789</id>
        <doi>10.14569/IJACSA.2022.0130789</doi>
        <lastModDate>2022-07-31T09:30:20.9100000+00:00</lastModDate>
        
        <creator>Souad Kamel</creator>
        
        <creator>Amani Jamal</creator>
        
        <creator>Kaouther Omri</creator>
        
        <creator>Mashael Khayyat</creator>
        
        <subject>Safety; fire; Internet of Things (IoT); sensors; cloud based platform; ThingSpeak</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Safety is a serious concern that should be addressed carefully in different locations, including homes, workplaces, and educational buildings. The risk of fire is the most significant threat in many educational facilities such as schools, universities, and offices. The main goal of this work is to develop an effective system that allows early management of fires to avoid material and human losses. With the advent of the Internet of Things (IoT), the implementation of such a system has become possible. A low-cost system incorporating IoT sensors is constructed in this study to collect data (heat, the number of people at the fire scene, ...) in real time. The system provides a control panel that displays readings from all sensors on a single web page. When the collected values exceed a particular threshold, the system sends a message to the building keeper’s phone, allowing him to notify the authorities or dispatch firemen in real time. One of the system’s most important characteristics is that it keeps track of how many people are at the fire scene, simplifying the evacuation process and allowing civil defense authorities to manage resources efficiently. The system has been successfully tested in a variety of circumstances in an educational building (Al-Faisaliah female campus, University of Jeddah, Saudi Arabia).</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_89-An_IoT_based_Fire_Safety_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Job Shop Scheduling Problem by the Multi-Hybridization of Swarm Intelligence Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130788</link>
        <id>10.14569/IJACSA.2022.0130788</id>
        <doi>10.14569/IJACSA.2022.0130788</doi>
        <lastModDate>2022-07-31T09:30:20.8930000+00:00</lastModDate>
        
        <creator>Jebari Hakim</creator>
        
        <creator>Siham Rekiek</creator>
        
        <creator>Kamal Reklaoui</creator>
        
        <subject>Scheduling; Job shop; Multi-hybridization; Swarm intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Industry is subject to strong competition and to customer requirements that are increasingly demanding in terms of quality, cost, and deadlines. Consequently, companies must improve their competitiveness. Scheduling is an essential tool for improving business performance. The production scheduling problem is usually NP-hard, and its resolution requires optimization methods suited to its degree of difficulty. This paper aims to develop a multi-hybridization of swarm intelligence techniques to solve job shop scheduling problems. The performance of the recommended techniques is evaluated by applying them to well-known benchmark instances and comparing their results with those of other techniques available in the literature. The experimental results are concordant with other studies showing that the multi-hybridization of swarm intelligence techniques improves the effectiveness of the method, and they show how the recommended techniques affect the resolution of the job shop scheduling problem.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_88-Solving_The_Job_Shop_Scheduling_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Discrepancy Evaluation Model based on Tat Twam Asi with TOPSIS Calculation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130787</link>
        <id>10.14569/IJACSA.2022.0130787</id>
        <doi>10.14569/IJACSA.2022.0130787</doi>
        <lastModDate>2022-07-31T09:30:20.8800000+00:00</lastModDate>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <creator>Agus Adiarta</creator>
        
        <creator>P. Wayan Arta Suyasa</creator>
        
        <subject>Discrepancy; evaluation model; tat twam asi; TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This research had the main objective of providing information on an innovative educational evaluation model that integrates the Discrepancy evaluation component, the Tat Twam Asi concept, and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method in order to determine the dominant indicators triggering the effectiveness of implementing blended learning in IT vocational schools. The approach of this research was development research using an R &amp; D development model that focused on five stages: a) research and field data collection, b) planning, c) design development, d) initial trial, and e) revisions to the results of the initial trial. There were 34 subjects involved in the trial design of the evaluation model, including two education experts, two informatics experts, and 30 IT vocational teachers in Bali. The instruments used in data collection were questionnaires, interview guidelines, and photo documentation. The collected data were analyzed using quantitative descriptive techniques based on percentage descriptive calculations. The result of this research was a Tat Twam Asi-based Discrepancy evaluation model design integrated with TOPSIS calculations, which was classified as excellent according to the eleven-scale categorization table.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_87-Development_of_Discrepancy_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feedback Model when Applying the Evaluation by Indicators in the Development of Competences through Problem based Learning in a Systems Engineering Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130786</link>
        <id>10.14569/IJACSA.2022.0130786</id>
        <doi>10.14569/IJACSA.2022.0130786</doi>
        <lastModDate>2022-07-31T09:30:20.8630000+00:00</lastModDate>
        
        <creator>C&#233;sar Baluarte-Araya</creator>
        
        <creator>Oscar Ramirez-Valdez</creator>
        
        <subject>Problem based learning; competencies; evaluation; criteria; performance indicator; deliverable report; feedback report; skills; formative inquiry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Feedback can be very influential in students&#39; learning; therefore, the university must be very clear about its procedures and rules on the time lapse for responding to the work done by students, and about the comments made, in order to positively influence the evaluation of learning in a sustained manner. The present work shows the experience of applying the Problem Based Learning (PBL) methodology while also developing research competencies through Formative Research, as a result of the evaluation of learning against the Criteria and their Performance Indicators for the Business Electronic course, which is taught by two teachers in theory and laboratory practices. The objective is to design a Feedback Model for the problems solved by the students in order to support the improvement of their learning. The methodology used is Problem Based Learning together with the Feedback Model, applied to real problems set in different organisational contexts. From the Deliverable Report of each problem, the incidences and observations are registered in the corresponding register, and in this way the Feedback Report is elaborated. The results obtained reveal that the objectives of producing the Feedback Report are achieved; the report should be sent to the students as soon as possible for analysis, so that they can propose their own strategies for improving shortcomings or errors, stay motivated to continue progressing by accepting the teacher&#39;s suggestions or contributions, and see an increase in knowledge, development of their competences, skills, and attitudes, making their own judgements and achieving the Student Results. In conclusion, the application of a well-planned active didactic strategy, the adequate evaluation of learning through the qualification of the indicators of each criterion, and the elaboration of a timely feedback report on the problems will achieve the expected results for both the course and the student.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_86-Feedback_Model_when_Applying_the_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blind Robust Image Watermarking on Selected DCT Coefficients for Copyright Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130785</link>
        <id>10.14569/IJACSA.2022.0130785</id>
        <doi>10.14569/IJACSA.2022.0130785</doi>
        <lastModDate>2022-07-31T09:30:20.8470000+00:00</lastModDate>
        
        <creator>Majid Rahardi</creator>
        
        <creator>Ferian Fauzi Abdulloh</creator>
        
        <creator>Wahyu Sukestyastama Putra</creator>
        
        <subject>Robust watermarking; copyright protection; discrete cosine transform; frequency domain; color image watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This paper proposes a blind and robust image watermarking technique using Discrete Cosine Transform (DCT) for copyright protection on color images called BRIW-DCT. Each channel of the host image is divided into non-overlapping image blocks with the size of 8&#215;8 pixels. Each image block is transformed into a frequency domain using the DCT transformation. The watermark image is embedded into the host image by modifying the 11th to the 15th DCT coefficient. The experimental result shows that the watermarked image achieved a high PSNR value of 50.4489 dB and a high SSIM value of 0.9991. Furthermore, various attacks are performed on the watermarked image. BRIW-DCT can successfully recover the watermark image from the tampered image, which produces a high NC value of 0.7805 and a low BER value of 0.1126.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_85-A_Blind_Robust_Image_Watermarking_on_Selected_DCT_Coefficients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Intelligent Fusion Terminal System with Fog Computing Capability in Distribution Area based on Large Capacity CPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130784</link>
        <id>10.14569/IJACSA.2022.0130784</id>
        <doi>10.14569/IJACSA.2022.0130784</doi>
        <lastModDate>2022-07-31T09:30:20.8470000+00:00</lastModDate>
        
        <creator>Ou Zhang</creator>
        
        <creator>Songnan Liu</creator>
        
        <creator>Hetian Ji</creator>
        
        <creator>Xuefeng Wu</creator>
        
        <creator>Xue Jiang</creator>
        
        <subject>Large-capacity CPU; fog computing capability; power distribution station area; intelligent integration; terminal; system design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The intelligent fusion terminal in the distribution area usually adopts a mode of cooperation between the cloud and the edge, and the workload of manual operation and maintenance is large. Therefore, an intelligent fusion terminal system for the distribution area with fog computing capability, based on a high-capacity CPU, is proposed. The system follows the “cloud-pipe-edge-end” construction framework of the smart IoT system and takes this framework as the edge computing node of the distribution station area and the power consumption side. An MT7622B chip running OpenWrt firmware on the Linux operating system is used as the main control chip of the edge agent gateway equipment, and the recursive least squares method is used to realize data fusion between the power acquisition service in the distribution station area and the power distribution demand terminal. The test results show that the designed system can realize real-time monitoring of power consumption and distribution data and power quality management in the distribution station area, with a data processing delay of less than 100 ms, which provides a reference for intelligent fusion terminal systems in distribution station areas.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_84-Design_of_Intelligent_Fusion_Terminal_System_with_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Superpixel Sizes using Topology Preserved Regular Superpixel Algorithm and their Impacts on Interactive Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130783</link>
        <id>10.14569/IJACSA.2022.0130783</id>
        <doi>10.14569/IJACSA.2022.0130783</doi>
        <lastModDate>2022-07-31T09:30:20.8170000+00:00</lastModDate>
        
        <creator>Kok Luong Goh</creator>
        
        <creator>Soo See Chai</creator>
        
        <creator>Giap Weng Ng</creator>
        
        <creator>Muzaffar Hamzah</creator>
        
        <subject>Image segmentation; superpixel; input type; interactive segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Interactive image segmentation is a type of semi-automated segmentation that uses user input to extract the object of interest. It is possible to speed up and improve the end result of segmentation by using pre-processing steps. The use of superpixels is an example of such a pre-processing step. A superpixel is a collection of pixels with similar properties such as texture and colour. Previous research was conducted to assess the impact of the number of superpixels (based on the SEEDS superpixel algorithm) required to achieve the best segmentation results. The study, however, only examined one type of input (strokes) and a small number of images. As a result, the goal of this study is to extend previous work by performing interactive segmentation with input strokes, and with a combination of bounding box and strokes, on images from the GrabCut image data set generated by the Topology Preserved Regular Superpixel (TPRS) algorithm. Based on our findings, an image with 1000 to 2500 superpixels and a combination of bounding box and strokes will help the interactive segmentation algorithm produce a good segmentation result. Finally, the size of the superpixels, as well as the input type, influences the final segmentation results.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_83-Superpixel_Sizes_using_Topology_Preserved_Regular_Superpixel_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Learning in Science Education to Improve Higher-Order Thinking Skills (HOTS) and Communication Skills: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130782</link>
        <id>10.14569/IJACSA.2022.0130782</id>
        <doi>10.14569/IJACSA.2022.0130782</doi>
        <lastModDate>2022-07-31T09:30:20.8000000+00:00</lastModDate>
        
        <creator>Adilah Afikah</creator>
        
        <creator>Sri Rejeki Dwi Astuti</creator>
        
        <creator>Suyanta Suyanta</creator>
        
        <creator>Jumadi Jumadi</creator>
        
        <creator>Eli Rohaeti</creator>
        
        <subject>Communication skills; higher-order thinking skills; mobile learning; science education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Today, the increasing use of technology and mobile applications in education is of great interest. This research is a systematic review limited to 30 articles from 2012 to 2021. It aims to answer research questions about which mobile devices are used in learning and which learning approaches are used in science learning to improve higher-order thinking skills and communication skills. The findings are in line with the research objectives. First, the mobile devices most commonly used to achieve learning objectives are mobile phones, followed by PDAs, tablets, iPads, laptops, e-books, and iPods. Second, the learning approaches used in science learning to improve higher-order thinking skills and communication skills are collaborative learning, inquiry learning, project-based learning, problem-based learning, game-based learning, and flipped classroom learning. It is hoped that this research can serve as an illustration for other researchers creating innovative learning approaches. Follow-up research based on this study could examine mobile learning in social learning (or compare the two), the most appropriate learning media for mobile learning, and the effectiveness of implementing approach strategies in mobile learning.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_82-Mobile_Learning_in_Science_Education_to_Improve_Higher_Order_Thinking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Iootfuzzer Method for Vulnerability Mining of Internet of Things Devices based on Binary Source Code and Feedback Fuzzing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130781</link>
        <id>10.14569/IJACSA.2022.0130781</id>
        <doi>10.14569/IJACSA.2022.0130781</doi>
        <lastModDate>2022-07-31T09:30:20.7700000+00:00</lastModDate>
        
        <creator>Guangxin Guo</creator>
        
        <creator>Chao Wang</creator>
        
        <creator>Jiahan Dong</creator>
        
        <creator>Bowen Li</creator>
        
        <creator>Xiaohu Wang</creator>
        
        <subject>Internet of things; system vulnerabilities; source code; fuzz testing; instrumentation technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>With the technological progress of the Internet and 5G communication networks, more and more Internet of Things devices are in use. Limited by cost, power consumption, and other factors, the systems carried by Internet of Things devices often lack the security protection provided by larger equipment systems such as desktop computers. Because current personal computers and servers mostly use the x86 architecture, and previous research on security tools or hardware-based security analysis features is mostly based on the x86 architecture, traditional security analysis techniques cannot be applied to the current large-scale ARM-based and MIPS-based Internet of Things devices. On this basis, this paper studies the firmware binary programs of common Linux-based Internet of Things devices. A binary static instrumentation technology based on taint information analysis is proposed. The paper also analyzes how to use binary static instrumentation combined with static analysis results to rewrite binary programs and obtain taint path information when the binary programs are executed. A firmware binary fuzzing technology based on model constraints and path feedback is studied to cover more dangerous execution paths in the target program. Finally, Iootfuzzer, a prototype vulnerability mining system for the firmware binaries of Internet of Things devices, is used to test and analyze the two technologies. The results show that its fuzzing efficiency for Internet of Things devices is better than that of other fuzzing technologies such as boofuzz and Peach 3. It can fill some gaps in current security analysis tools for Internet of Things devices and improve the efficiency of security analysis for these devices, contributing to the field through an automated security vulnerability detection system.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_81-An_Iootfuzzer_Method_for_Vulnerability_Mining_of_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis using Term based Method for Customers’ Reviews in Amazon Product</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130780</link>
        <id>10.14569/IJACSA.2022.0130780</id>
        <doi>10.14569/IJACSA.2022.0130780</doi>
        <lastModDate>2022-07-31T09:30:20.7530000+00:00</lastModDate>
        
        <creator>Thilageswari a/p Sinnasamy</creator>
        
        <creator>Nilam Nur Amir Sjaif</creator>
        
        <subject>Sentiment analysis; e-commerce; term based; n-gram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Customer reviews on the Amazon platform play an important role in online purchase decision making; however, reviews are snowballing in e-commerce day by day. The active sharing of customers’ experience and feedback helps to predict product and retailer quality using natural language processing. This paper focuses on an experimental discussion of Amazon product review analysis coupled with sentiment analysis using a term-based method and N-grams to achieve the best findings. The investigation of sentiment analysis on Amazon products gains more valuable information from the related text to solve problems related to services, product information, and quality. The analysis begins with data pre-processing of the Amazon product reviews, followed by feature extraction with POS tagging and the term-based concept. E-commerce customer reviews are normally classified into positive, negative, and neutral to judge human behavior and emotion towards the purchased products. The major findings discussed in this paper use four different classifiers and N-gram methods, computing accuracy, precision, recall, and F1-score. The TF-IDF method with N-grams shows that unigrams with Support Vector Machine learning give the highest accuracy for Amazon product customer reviews. The scores reveal that the Support Vector Machine with unigrams achieved 82.27% accuracy, 82% precision, 80% recall, and a 72% F1-score.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_80-Sentiment_Analysis_using_Term_based_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>News Analytics for Business Sentiment Suggestion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130779</link>
        <id>10.14569/IJACSA.2022.0130779</id>
        <doi>10.14569/IJACSA.2022.0130779</doi>
        <lastModDate>2022-07-31T09:30:20.7370000+00:00</lastModDate>
        
        <creator>Sirinda Palahan</creator>
        
        <subject>Sentiment analysis; deep learning; pre-trained model; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Business and economics news has become one of the factors businesses consider when making decisions. However, the exponential increase in the availability of business information sources on the internet makes it more difficult for entrepreneurs to keep up with and extract useful insights from many news articles. Although many preceding works focused on the sentiment extracted in the news, the results were intended for everyone. The sentiments based on a user&#39;s queries are needed to provide customized service. Hence, this paper proposed a system integrated into a chatbot to automatically understand users&#39; queries and recommend sentiments based on news articles. The main objective is to provide entrepreneurs, especially those considering international trade and investment, with the sentiments embodied in the latest news articles to help them keep up with the business and economic trends relevant to them. The methodology is based on deep learning and transfer learning. A pre-trained deep learning model was fine-tuned for natural language processing tasks to perform sentiment analysis in news articles. A survey questionnaire was used to measure the effectiveness of the system. The survey result showed that most users agreed with the predicted sentiments from the system.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_79-News_Analytics_for_Business_Sentiment_Suggestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Characteristics of Student Model in Intelligent Programming Tutor for Learning Programming: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130778</link>
        <id>10.14569/IJACSA.2022.0130778</id>
        <doi>10.14569/IJACSA.2022.0130778</doi>
        <lastModDate>2022-07-31T09:30:20.7230000+00:00</lastModDate>
        
        <creator>Rajermani Thinakaran</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Intelligent tutoring system; intelligent programming tutor; student characteristics; student model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This study describes preliminary results of research on the Intelligent Programming Tutor (IPT), which is derived from the Intelligent Tutoring System (ITS). The system architecture consists of four models; this study focuses on the student model, specifically on student characteristics. Using the systematic literature review (SLR) method, 44 research articles published between 1997 and 2022 were identified from a number of digital databases. The findings show that 48% of IPT implementations focus on knowledge and skills alone, while 52% of the articles combine two or three student characteristics, one of which is always knowledge and skill. Narrowing down, 25% focused on knowledge and skill with errors or misconceptions; 4% on knowledge and skill with cognitive features; 5% on knowledge and skill with affective features; 2% on knowledge and skill with motivation; and 9% on knowledge and skill with learning style and learning preferences as the students' characteristics used to build the student model. A further 5% focused on a combination of three characteristics (knowledge and skill with cognitive and affective features), and 2% on knowledge and skill with learning styles, learning preferences, and motivation. To provide an appropriate tutoring system, the students' characteristics for the student model need to be decided before the tutoring system is developed. From the findings, it can be said that knowledge and skills are essential students' characteristics for constructing the student model; unfortunately, other characteristics, especially students' motivation, are considered far less often.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_78-Students_Characteristics_of_Student_Model_in_Intelligent_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techno-pedagogical Solution to Support the Improvement of the Quality of Education in Technical and Vocational Training in Mauritania</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130777</link>
        <id>10.14569/IJACSA.2022.0130777</id>
        <doi>10.14569/IJACSA.2022.0130777</doi>
        <lastModDate>2022-07-31T09:30:20.7070000+00:00</lastModDate>
        
        <creator>Cheikhane SEYED</creator>
        
        <creator>Jeanne Roux NGO BILONG</creator>
        
        <creator>Mohamed Ahmed SIDI</creator>
        
        <creator>Mohamedade Farouk NANNE</creator>
        
        <subject>TVT; STEM; Mixed education; Moodle; WebRTC; signaling system; MCU; DTLS; SRTP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>E-learning has been the most promising and fastest-growing educational activity since the advent of the COVID-19 pandemic. Although the pandemic seems to have subsided in several countries, positive cases of COVID-19 variants are still being detected, which could accelerate the rate of infection again; hence the interest in reinforcing the quality of distance-learning platforms. Technical and vocational training (TVT) in Mauritania is based on the science, technology, engineering, and mathematics (STEM) disciplines. Unfortunately, the expansion of the COVID-19 pandemic negatively impacted the quality of education, with a halt in teaching affecting 8000 students. Yet the quality of education in these disciplines is a key factor in meeting the demands of emergence and economic growth. This paper advocates a mixed pedagogical model by proposing a techno-pedagogical solution to improve the quality of teaching and learning processes. The proposed solution combines technologies such as the Modular Object-Oriented Dynamic Learning Environment (Moodle) and Web Real-Time Communication (WebRTC) to provide pedagogical services in a context with limited Internet connectivity. In addition, we set up a signaling system to maintain direct communication between peers, used the Application Programming Interface (API) of a Multipoint Control Unit (MCU) to ensure simultaneous collaboration in a peer-to-peer context, and used implementations of security protocols such as Datagram Transport Layer Security (DTLS) and Secure Real-time Transport Protocol (SRTP) to secure data transport.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_77-Techno_pedagogical_Solution_to_Support_the_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Feature Engineering Technique for Determining Vegetation Density</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130776</link>
        <id>10.14569/IJACSA.2022.0130776</id>
        <doi>10.14569/IJACSA.2022.0130776</doi>
        <lastModDate>2022-07-31T09:30:20.6900000+00:00</lastModDate>
        
        <creator>Yuslena Sari</creator>
        
        <creator>Yudi Firmanul Arifin</creator>
        
        <creator>Novitasari Novitasari</creator>
        
        <creator>Mohammad Reza Faisal</creator>
        
        <subject>Vegetation cover; vegetation density; feature extraction; feature engineering; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Vegetation density is one type of information derived from vegetation cover. Vegetation density influences evapotranspiration in terrain, which is essential in assessing how vulnerable peatlands are to fire. The Keetch-Byram Drought Index model, which evaluates peatland fire vulnerability, divides vegetation density into heavily grazed, softly grazed, and un-grazed classes. Manual approaches for analyzing vegetation density in the field, however, require a significant amount of resources. Computer vision pipelines consisting of image data acquisition, pre-processing, feature extraction, feature selection, classification, and validation are used to solve these problems. Artificial intelligence algorithms and machine learning approaches promise outstanding accuracy in modern computer vision research; however, the impact of feature extraction on the classification process is critical. Pattern identification with a Back Propagation Neural Network (BPNN) is problematic because the dimensionality of the extracted features is excessively complex. The solution to this problem is to use a feature engineering technique to select the characteristics. This research aims to explore how feature engineering influences the accuracy of the results. According to the statistics, implementing the recommended strategy can increase accuracy by 1% and kappa by 1.5%. This increase in vegetation density classification accuracy might help detect peatland vulnerability sooner. The novel aspect of this paper is that, after feature extraction, a feature engineering strategy is applied in the machine learning classification stage to reduce the number of complex dimensions.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_76-Effect_of_Feature_Engineering_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Image Enhancement Algorithms for Improving the Visual Quality in Computer Vision Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130775</link>
        <id>10.14569/IJACSA.2022.0130775</id>
        <doi>10.14569/IJACSA.2022.0130775</doi>
        <lastModDate>2022-07-31T09:30:20.6600000+00:00</lastModDate>
        
        <creator>Jenita Subash</creator>
        
        <creator>Jharna Majumdar</creator>
        
        <subject>Tracking; robotics; surveillance; enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Computer vision has numerous real-world applications in Visual Object Tracking, including human-computer interaction, autonomous vehicles, robotics, motion-based recognition, video indexing, and surveillance and security. The factors affecting the tracking process are low illumination, haze, cloudy environments, and noise. In this paper, we extensively review the latest trends and advances in adaptive enhancement algorithms and evaluate their performance using full-reference measures such as SSIM (Structure Similarity Index Measure), MS-SSIM (Multi-Scale Structure Similarity Index Measure), ESSIM (Edge Strength Structural Similarity Index), FSIM (Feature Similarity Index Measure), VIF (Visual Information Fidelity), CW-SSIM (Complex Wavelet Structural Similarity), UQI (Universal Quality Index), IEF (Image Enhancement Factor), IQI (Image Quality Index), EME (Enhancement Measurement Error), CVSI (Contrast and Visual Salient Information), MCSD (Multiscale Contrast Similarity Deviation), NQM (Noise Quality Measure), Gradient Magnitude Similarity Mean (GMSM), and Gradient Magnitude Similarity Deviation (GMSD), and no-reference image quality measures such as the Perception-based Image Quality Evaluator (PIQE), Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), Naturalness Image Quality Evaluator (NIQE), Average Gradient (AG), Contrast, Information Entropy (IE), and Lightness Order Error (LOE). The main purpose of adaptive image enhancement is to smooth the uniform areas and sharpen the borders of an image to improve its visual quality. In this paper, fourteen image enhancement algorithms were tested on the LoL dataset to benchmark their processing time and evaluate their output quality. The results of this study will give image analysts insights for selecting image enhancement algorithms as a pre-processing stage for Visual Object Tracking.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_75-Comparison_of_Image_Enhancement_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rider Driven African Vulture Optimization with Multi Kernel Structured Text Convolutional Neural Network for Classifying e-Commerce Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130774</link>
        <id>10.14569/IJACSA.2022.0130774</id>
        <doi>10.14569/IJACSA.2022.0130774</doi>
        <lastModDate>2022-07-31T09:30:20.6430000+00:00</lastModDate>
        
        <creator>H. Mohamed Zakir</creator>
        
        <creator>S. Vinila Jinny</creator>
        
        <subject>Natural language processing; opinion mining; convolutional neural network; text sentiment classification; ecommerce review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Opinion mining is a natural language processing technique based on sentiment classification that determines the sentiment of reviews. Most existing text Convolutional Neural Network (CNN) algorithms are built on 3&#215;3 kernels, which extract ineffective review text features and lead to lower classification accuracy. Moreover, most traditional CNN versions output only three classes (positive, negative, and neutral) as their classification results. Hence, a novel algorithm, 'RAVO-driven Multi-Size Kernel structured Text CNN for classifying e-commerce reviews (MSK-TCNN-RAVO)', is proposed in this work. The proposed approach utilizes five multi-size kernels (3&#215;7, 5&#215;7, 1&#215;3, 1&#215;5, 1&#215;7) and multi-dimensional kernels (1D and 2D), integrating kernels of varying sizes to extract text features effectively. In addition, the performance of the multi-kernel CNN is greatly enhanced by the RAVO algorithm, which is based on rider optimization. The proposed approach also handles review stop-word removal effectively, which decreases the complexity and time consumption of the opinion mining process. Most existing systems use a single pooling operation, which reduces feature-map processing performance; hence, dual pooling operations (both max and average pooling) are employed in this research. Furthermore, the system is configured to generate five classification outputs (bad, fair, neutral, good, and excellent) to support better decision-making, achieving 95.5% accuracy. The method is evaluated with different quality metrics on five review databases, and the results reveal that it outperforms other existing review classification algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_74-Rider_Driven_African_Vulture_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Churn Prediction Analysis by Combining Machine Learning Algorithms and Best Features Exploration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130773</link>
        <id>10.14569/IJACSA.2022.0130773</id>
        <doi>10.14569/IJACSA.2022.0130773</doi>
        <lastModDate>2022-07-31T09:30:20.6300000+00:00</lastModDate>
        
        <creator>Yasyn ELYUSUFI</creator>
        
        <creator>M’hamed AIT KBIR</creator>
        
        <subject>Customer churn; prediction; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Market competition and the high cost of acquiring new customers have led financial organizations to focus more and more on effective customer retention strategies. Although the banking and financial sectors have low churn rates compared to other sectors, the impact on profitability of losing a customer is comparatively high. Customer turnover management and analysis therefore play an essential part in improving the long-term profitability of financial organizations. Recently, it has become apparent that using machine learning to predict churn improves customer retention strategies. In this work, we discuss some specific machine learning models proposed in the literature to deal with this problem and compare them with emerging models based on ensemble learning algorithms. As a result, we build predictive churn approaches that look at customer history data, check who is still active after a certain time, and then create models that identify the stages at which a customer may leave the company's service. Ensemble learning algorithms are also used to find relevant features and reduce their number, which is of great importance when performing the training step with classical models such as Multi-Layer Perceptron neural networks. The proposed approaches achieve up to 89% accuracy, whereas other research works dealing with the same dataset achieve less than 86%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_73-Churn_Prediction_Analysis_by_Combining_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Framework for Enhancing the Quality of Online Exams based on Students’ Personalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130772</link>
        <id>10.14569/IJACSA.2022.0130772</id>
        <doi>10.14569/IJACSA.2022.0130772</doi>
        <lastModDate>2022-07-31T09:30:20.5970000+00:00</lastModDate>
        
        <creator>Ayman E. khedr</creator>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Amira M. Idrees</creator>
        
        <subject>Personalization; data mining; sentiment analysis; social networks; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>In the education sector, personalization is an evolving concept that has gained high attention due to its effectiveness in raising the enterprise competence level. This research proposes a novel model for effective smart testing that considers a student's Facebook activities to determine the student's personality and construct a suitable exam. The aim of this examination perspective is to ensure reliable student evaluation according to the student's gained knowledge, so that no other factor interferes that might negatively affect the evaluation. The research also applies text analytics techniques to ensure exam balance. The proposed model has been applied and evaluated, reaching a professors' agreement percentage of 96.5% and an average student satisfaction percentage of 96.63%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_72-Intelligent_Framework_for_Enhancing_the_Quality_of_Online_Exams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Gradient Boosting Machines Fusion based on the Pattern of Majority Voting for Automatic Epilepsy Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130771</link>
        <id>10.14569/IJACSA.2022.0130771</id>
        <doi>10.14569/IJACSA.2022.0130771</doi>
        <lastModDate>2022-07-31T09:30:20.5830000+00:00</lastModDate>
        
        <creator>Dwi Sunaryono</creator>
        
        <creator>Riyanarto Sarno</creator>
        
        <creator>Joko Siswantoro</creator>
        
        <creator>Diana Purwitasari</creator>
        
        <creator>Shoffi Izza Sabilla</creator>
        
        <creator>Rahadian Indarto Susilo</creator>
        
        <creator>Adam Abelard Garibaldi</creator>
        
        <subject>Epilepsy; enhanced gradient boosting machine fusion; electroencephalographic (EEG) signal; discrete wavelet transform (DWT); discrete fourier tansform (DFT); genetic algorithm (GA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Automatic detection of epilepsy based on EEG signals is an interesting field to develop in medicine, as it provides an alternative method for detecting epilepsy. High accuracy is very important for diagnosing epilepsy correctly and avoiding errors in diagnosing patients. Therefore, this study proposes Enhanced Gradient Boosting Machines Fusion (Enhanced GBM Fusion) for automatically detecting epilepsy from electroencephalographic (EEG) signals. The enhanced part of GBM Fusion is a majority-voting evaluation pattern based on the fusion of five-class and two-class GBMs. The raw signal is extracted using the Discrete Fourier Transform (DFT) and Discrete Wavelet Transform (DWT), and features are then selected using a Genetic Algorithm (GA) before classification. The proposed method was evaluated on five classes (normal with open eyes, normal with closed eyes, interictal with hippocampal involvement, interictal, and ictal) from the University of Bonn dataset. The experimental results show that the proposed Enhanced GBM Fusion can increase the accuracy of GBM Fusion to 99.8% when classifying five classes of epilepsy based on EEG signals. However, the performance of Enhanced GBM Fusion cannot be generalized to other datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_71-Enhanced_Gradient_Boosting_Machines_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Methodology for Disease Identification Using Metaheuristic Algorithm and Aura Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130770</link>
        <id>10.14569/IJACSA.2022.0130770</id>
        <doi>10.14569/IJACSA.2022.0130770</doi>
        <lastModDate>2022-07-31T09:30:20.5670000+00:00</lastModDate>
        
        <creator>Manjula Poojary</creator>
        
        <creator>Yarramalle Srinivas</creator>
        
        <subject>Aura images; BGMM; image classification; multiclass SVM; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Every human has a specific Aura. Every organ in the human body emits energy comprising ultraviolet radiation, thermal radiation, and electromagnetic radiation. These energy levels, generally called the Aura, help to characterize physical health inside the human body. To capture the energy levels, specific cameras such as Kirlian cameras are used; they capture the energy distribution and map it to the individual organs of the human body. In this article, we present a methodology using image processing techniques in which a Bivariate Gaussian Mixture Model (BGMM) is used as a classifier to identify diseases in humans based on the energy distribution. We consider five categories of diseased organs identified from the energy distribution. Pre-processing applies morphological techniques, and the Particle Swarm Optimization (PSO) algorithm is used for feature extraction. Segmentation is carried out using the extracted features, and training is carried out with the BGMM classifier. The results are compared with various other methods, including Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multiclass SVM (MSVM). The results show that the proposed methodology achieves a recognition accuracy of 90%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_70-A_Novel_Methodology_for_Disease_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of ML Model for Early Diagnosis of Parkinson’s Disease using Gait Data Analysis in IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130769</link>
        <id>10.14569/IJACSA.2022.0130769</id>
        <doi>10.14569/IJACSA.2022.0130769</doi>
        <lastModDate>2022-07-31T09:30:20.5330000+00:00</lastModDate>
        
        <creator>Navita Mehra</creator>
        
        <creator>Pooja Mittal</creator>
        
        <subject>Internet of things (IoT); sensors; parkinson&#39;s disease (PD); machine learning (ML); vertical ground reaction force (VGRF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Parkinson’s disease (PD) is the world’s second most common neurodegenerative disorder and results in a steady loss of movement. Symptoms appear slowly over time and are very hard to identify in the initial stage, so early diagnosis of PD is the foremost need for timely treatment. The introduction of smart technologies such as the Internet of Things (IoT) and wearable sensors in the healthcare domain offers a smart way of identifying the symptoms of PD: smart sensors worn on the patient’s body continuously monitor symptoms and track the patient’s possible health status. The major objective of this work is to propose a machine learning-based healthcare model that best classifies subjects into healthy individuals and Parkinson's patients by extracting the most important features. A step regression-based feature selection method is followed to improve the classification of PD, and a Shapiro-Wilk test is adopted to check the normality of the gait dataset. The model is implemented on three publicly available Parkinson’s datasets collected from three different studies available on PhysioNet. All of these datasets contain VGRF recordings obtained from eight sensors placed under each foot. Experimentation is done in a Jupyter notebook using Python as the programming language. Experimental results reveal that the proposed model, with effective pre-processing, feature extraction, and feature selection, achieves accuracies of 95.54%, 98.80%, and 94.52% on the three datasets, respectively. Our research inducts knowledge about the significant characteristics of patients suffering from PD and may help diagnose and treat the disease at an early stage.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_69-Design_and_Implementation_of_ML_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extractive Text Summarization based on Candidate Summary Sentences using Fuzzy-Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130768</link>
        <id>10.14569/IJACSA.2022.0130768</id>
        <doi>10.14569/IJACSA.2022.0130768</doi>
        <lastModDate>2022-07-31T09:30:20.5200000+00:00</lastModDate>
        
        <creator>Adhika Pramita Widyassari</creator>
        
        <creator>Edy Noersasongko</creator>
        
        <creator>Abdul Syukur</creator>
        
        <creator>Affandy</creator>
        
        <subject>Text summarization; extractive; fuzzy; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This study aims to predict candidate summary sentences for extractive summarization using the Fuzzy-Decision Tree method. The fuzzy method is quite superior and the most widely used in extractive summarization because its calculations are not crisp, allowing it to handle uncertain possibilities. However, in practice the fuzzy rule generation process is often carried out randomly or based on expert understanding, so the rules do not represent the distribution of the data. Therefore, in this study, a Decision Tree (DT) technique was added to generate the fuzzy rules. From the final fuzzy result, important sentences are obtained as candidates for summary sentences. The performance of our proposed method was tested on the DUC 2002 dataset with the ROUGE-1 evaluation. The results showed that our method outperformed other methods (baseline and sentence ranking) with an average precision of 0.882498, recall of 0.820443, and F-measure of 0.882498, with a 95% confidence interval for F1 of 0.821-0.879.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_68-An_Extractive_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grape Leaves Diseases Classification using Ensemble Learning and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130767</link>
        <id>10.14569/IJACSA.2022.0130767</id>
        <doi>10.14569/IJACSA.2022.0130767</doi>
        <lastModDate>2022-07-31T09:30:20.4870000+00:00</lastModDate>
        
        <creator>Andrew Nader</creator>
        
        <creator>Mohamed H.Khafagy</creator>
        
        <creator>Shereen A. Hussien</creator>
        
        <subject>Ensemble learning; grape leaf diseases; convolutional neural network (CNN); transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Agriculture remains an important sector of the economy. Plant diseases and pests have a big impact on plant yield and quality, so prevention and early detection of crop disease are among the measures that must be implemented in farming to save plants at an early stage and thereby reduce overall food loss. Grapes are the most profitable fruit, but they are also vulnerable to a variety of diseases: Black Measles, Black Rot, and Leaf Blight all affect grape plants. Manual disease diagnosis takes a long time and can result in improper identification and use of pesticides. A variety of deep learning approaches have been used to address the identification and classification of grape leaf diseases, but such approaches also have limits. Therefore, this paper uses deep learning with the concept of ensemble learning based on three famous Convolutional Neural Network (CNN) architectures: Visual Geometry Group (VGG16), VGG19, and Extreme Inception (Xception). These three models are pre-trained on ImageNet. The performance of the proposed approach is analyzed using the Plant Village (PV) dataset of common grape leaf diseases. The proposed model gives higher performance than each deep learning architecture used separately and than the recent approaches compared in this study, outperforming the others with 99.82% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_67-Grape_Leaves_Diseases_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-based Collaborative Filtering Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130766</link>
        <id>10.14569/IJACSA.2022.0130766</id>
        <doi>10.14569/IJACSA.2022.0130766</doi>
        <lastModDate>2022-07-31T09:30:20.4730000+00:00</lastModDate>
        
        <creator>Tu Cam Thi Tran</creator>
        
        <creator>Lan Phuong Phan</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Energy distance; energy model; collaborative filtering; recommendation system; distance correlation; incompatibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The core of a recommendation model lies in the measures used to quantify the differences between users and between items: some studies rely on correlation measures (e.g., Pearson), others on the magnitude of the angle in space (e.g., cosine), and others on the level of confusion (e.g., entropy). A recommendation model provides the important feature of suggesting suitable items to users in common operations. However, classical recommendation models are only concerned with linear problems; to date there has been no research on nonlinear problems based on a potential/energy approach applied to the recommendation model. In this work, we focus on applying the energy distance measure, based on the potential difference, to the recommendation model, creating a separate path for the recommendation problem. The theoretical properties of the energy distance and the incompatibility matrix are presented in this article. Two experiment scenarios are conducted on the Jester5k and MovieLens datasets. The experimental results show the feasibility of the energy distance/potential measures in recommendation systems.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_66-Energy_based_Collaborative_Filtering_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Gamification in Mathematics m-Learning Application to Creating Student Engagement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130765</link>
        <id>10.14569/IJACSA.2022.0130765</id>
        <doi>10.14569/IJACSA.2022.0130765</doi>
        <lastModDate>2022-07-31T09:30:20.2830000+00:00</lastModDate>
        
        <creator>Sufa Atin</creator>
        
        <creator>Raihan Abdan Syakuran</creator>
        
        <creator>Irawan Afrianto</creator>
        
        <subject>Gamification; m-learning; mathematics; attention, relevance, confidence, and satisfaction (ARCS) model; octalysis framework; student engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Mathematics is one of the main subjects in school. In some schools, the learning methods used are still conventional, namely lectures and exercises. The main difficulty in learning mathematics is making the material presented more interesting so that students do not become bored and can understand it easily. The appeal of games, which knows no age limit, together with the various advantages of games gives rise to a combined learning mechanism called gamification. Gamification is the process of applying game mechanics to non-game activities to increase user interactivity. Gamification in the m-learning mathematics application was developed using the Attention, Relevance, Confidence, and Satisfaction (ARCS) learning model and the octalysis framework gamification method. Gamification in this mathematics m-learning application applies a game strategy using a system of levels, missions, challenges, points, progress bars, leader boards, and badges. The results of this study indicate that this application can be used as an alternative medium for learning mathematics and for building student engagement: the gamification applied to the m-learning mathematics application increased student interest by 35%, student motivation by 33%, and student understanding of mathematics by 42%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_65-Implementation_of_Gamification_in_Mathematics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reduced False Alarm for Forest Fires Detection and Monitoring using Fuzzy Logic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130764</link>
        <id>10.14569/IJACSA.2022.0130764</id>
        <doi>10.14569/IJACSA.2022.0130764</doi>
        <lastModDate>2022-07-31T09:30:20.2530000+00:00</lastModDate>
        
        <creator>Maria Susan Anggreainy</creator>
        
        <creator>Bimo Kurniawan</creator>
        
        <creator>Felix Indra Kurniadi</creator>
        
        <subject>False alarm; fuzzy logic; forest fires detection; sensor; microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The purpose of this research was to detect forest fires while minimizing the incidence of false alarms. The research method is divided into several stages, namely planning, analysis, design, implementation, testing, and maintenance. An Arduino acts as a data collector in the field, which is later used to detect forest fires and false alarms. Fuzzy logic is used as the core of the algorithm and provides a higher level of accuracy for forest fire detection and false alarms. In testing the fuzzy program, the fuzzy output on the Arduino and the fuzzy output on the monitoring dashboard show a small difference of 0.99%. It can be concluded that the application can minimize the occurrence of false alarms and thereby reduce the user&#39;s workload.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_64-Reduced_False_Alarm_for_Forest_Fires_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Collection Method for Energy Storage Device of Distributed Integrated Energy Station based on Double Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130763</link>
        <id>10.14569/IJACSA.2022.0130763</id>
        <doi>10.14569/IJACSA.2022.0130763</doi>
        <lastModDate>2022-07-31T09:30:20.2530000+00:00</lastModDate>
        
        <creator>Hao Chen</creator>
        
        <creator>Guilian Wu</creator>
        
        <creator>Linyao Zhang</creator>
        
        <creator>Jieyun Zheng</creator>
        
        <subject>Double decision tree; distributed; integrated; energy station; energy storage device; data collection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The distributed integrated energy station includes an electric energy storage device, a heat storage device, a cold storage device and other devices. Aiming at the problem of low data acquisition accuracy of energy storage devices caused by using a single sensor or acquisition scheme in existing methods, a new data acquisition method for the energy storage devices of a distributed integrated energy station is designed based on the double decision tree algorithm. The data acquisition process of the double decision tree algorithm is constructed. On the basis of this process, mathematical models of the electric energy storage device, heat storage device, cold storage device and hybrid energy storage device are established. The double decision tree algorithm is then used to solve the constructed models, and acquisition pseudo-code is given. In this way, data acquisition for the energy storage devices of a distributed integrated energy station based on a double decision tree is completed. The results of a case analysis show that the accuracy of this method is higher than 98%, and the collection time is less than 30 ms.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_63-Data_Collection_Method_for_Energy_Storage_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MSDAR: Multi-Stage Dynamic Architecture Intrusion Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130762</link>
        <id>10.14569/IJACSA.2022.0130762</id>
        <doi>10.14569/IJACSA.2022.0130762</doi>
        <lastModDate>2022-07-31T09:30:20.2230000+00:00</lastModDate>
        
        <creator>Ahmed M. ElShafee</creator>
        
        <creator>Marianne A. Azer</creator>
        
        <subject>Ad hoc networks; attacks; DDoS; intrusion detection; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Ad hoc networks have been through extensive research in the last decade. Even with their desirable characteristics, major issues related to their security need to be considered. Various security solutions have been proposed to reduce the risks of malicious actions. They mainly focus on key management, authentication, secure localization, and aggregation techniques. These techniques have been proposed to secure wireless communications, but they can only deal with external threats and are therefore considered the first line of defense. Intrusion detection systems are always required to safeguard ad hoc networks, as such threats cannot be completely avoided. In this paper, we present a comprehensive survey on intrusion detection systems in ad hoc networks. The components and taxonomy of intrusion detection systems, as well as different implementations and types of IDSs, are studied and categorized. In addition, we provide a comparison between different Intrusion Detection Systems’ architectures. We also propose a Multi Stage Dynamic Architecture intrusion detection system (MSDAR), designed with a multi-stage detection approach that makes use of the benefits of both signature-based and anomaly detection. Our proposed intrusion detection system MSDAR is distinguished by its dynamic architecture, as it can be deployed in the network using the Distributed Hierarchical Architecture. The viability and performance of the proposed system MSDAR are tested against Distributed Denial of Service attacks through simulations. Advanced performance parameters were used to evaluate the proposed scheme. Experimental results have shown that the performance of MSDAR improves by using multiple stages of different detection mechanisms. In addition, based on simulations, the detection rate increases when the sensitivity level increases.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_62-MSDAR_Multi_Stage_Dynamic_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>XBLQPS: An Extended Bengali Language Query Processing System for e-Healthcare Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130761</link>
        <id>10.14569/IJACSA.2022.0130761</id>
        <doi>10.14569/IJACSA.2022.0130761</doi>
        <lastModDate>2022-07-31T09:30:20.2070000+00:00</lastModDate>
        
        <creator>Kailash Pati Mandal</creator>
        
        <creator>Prasenjit Mukherjee</creator>
        
        <creator>Atanu Chattopadhyay</creator>
        
        <creator>Baisakhi Chakraborty</creator>
        
        <subject>Relational database management system (RDBMS); modified bengali WordNet; LESK algorithm; structured query language (SQL); natural language query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The Digital India program encourages Indian citizens to become conversant with e-services, which are primarily English language-based services. However, the vast majority of the Indian population is comfortable with vernacular languages like Bengali, Assamese, Hindi, etc. Rural villagers are not able to interact with a Relational Database Management System in their native language. Therefore, a system is needed that produces SQL queries from natural language queries in Bengali containing ambiguous words. This paper proposes a Bengali query processor named Extended Bengali Language Query Processing System (XBLQPS) to handle queries containing ambiguous words posted to a healthcare information database in the electronic domain. The healthcare information database contains doctor, hospital and department details in the Bengali language. The proposed system provides support for the Bengali-speaking Indian rural population to efficiently fetch required information from the database. The proposed system extracts the Bengali root word by removing the inflectional part and categorizes it to a specific part of speech (POS) using modified Bengali WordNet. The proposed system uses manually annotated part-of-speech detection of a word based on Bengali WordNet. Patterns of noun phrases are generated to detect the correct noun phrase as well as the entity and attribute(s). The entity and attributes are used to prepare a semantic table, which is utilized to create the Structured Query Language (SQL) query. The simplified LESK method is utilized to resolve ambiguous Bengali phrases in this query processing system. The accuracy, precision, recall and F1 score of the system are measured as 70%, 74%, 73%, and 73%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_61-XBLQPS_An_Extended_Bengali_Language_Query.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assaying the Statistics of Crime Against Women in India using Provenance and Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130760</link>
        <id>10.14569/IJACSA.2022.0130760</id>
        <doi>10.14569/IJACSA.2022.0130760</doi>
        <lastModDate>2022-07-31T09:30:20.1900000+00:00</lastModDate>
        
        <creator>Geetika Bhardwaj</creator>
        
        <creator>R. K. Bawa</creator>
        
        <subject>Crime against women; provenance; scalar techniques; machine learning techniques; decision tree; random forest; gradient boosting; XgBoost; CatBoost; LightGBM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Nowadays, crime against women is surging at a startling rate in India. According to the National Commission for Women, there was a 46% increase in reports of crimes against women in the initial months of the year 2021 in comparison with the same period in 2020. To handle this problem, the need of the hour is to fetch relevant and timely information about the various types of crime taking place and to make specific predictions based on the existing information to safeguard women from future predictable contingencies. AI and machine learning have become powerful tools for predicting the crime rate in India under various crime categories by analyzing crime patterns, crime-centric areas, and the comparative behavior of various crime categories. Hence, a women&#39;s crime-based dataset from the NCRB covering 2001 to 2019 has been used in this paper, which includes various crime sub-categories, for instance molestation, sexual harassment, rape, kidnapping, dowry deaths, cruelty to family, importation of girls, immoral traffic, the sati prevention act, and others. To acquire a better understanding of the data, a framework has been created that applies provenance and machine learning algorithms to the dataset, which has been grouped based on several factors such as the distribution of cases convicted or reported every year, and the safest and least safe states for women in India. Different machine learning algorithms, such as gradient boosting and its many versions, random forest, and many more, have been applied to the dataset. Their performances are evaluated using various metrics such as accuracy, recall, precision, F1 score, and root mean square error.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_60-Assaying_the_Statistics_of_Crime_Against_Women_in_India.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TEC Forecasting using Optimized Variational Mode Decomposition and Elman Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130759</link>
        <id>10.14569/IJACSA.2022.0130759</id>
        <doi>10.14569/IJACSA.2022.0130759</doi>
        <lastModDate>2022-07-31T09:30:20.1600000+00:00</lastModDate>
        
        <creator>Maladh Mahmood Shakir</creator>
        
        <creator>Zalinda Othman</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <subject>Elman neural networks; forecast; hybrid model; optimized Variational Mode Decomposition; total electronic content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Forecasting the ionosphere layer’s total electronic content (TEC) is crucial because of its impact on satellite signals and global positioning systems (GPS) and its value in predicting earthquakes. Existing statistics-based forecasting models such as ARMA, ARIMA, and HW suffer from the non-stationary nature of TEC, which requires algorithmic handling of both the forecasting and the mathematical parts. This study proposes a hybrid method that incorporates several components, designated Optimized Variational Mode Decomposition with Recursive Neural Network Forecasting (OVMD-RNN), to forecast TEC. Before using the Elman network to train each component, Variational Mode Decomposition (VMD) was used to decompose the signal into its essential stationary components. In addition, the proposed method includes an optimization algorithm for determining the best VMD decomposer parameters. The GPS Ionospheric Scintillation and TEC Monitor (GISTM) at the Universiti Kebangsaan Malaysia station has been used to evaluate the method on datasets collected over three years: 2011, 2012, and 2013. The experimental findings show that the model successfully tracked all the up and down patterns in the time series. The results also reveal that VMD-based training might not always provide good results due to the residual signal. Finally, the evaluation focused on the generated loss value compared to the ARIMA benchmark, and showed that OVMD-RNN achieved a maximum improvement over ARIMA of 99%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_59-TEC_Forecasting_using_Optimized_Variational_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework to Develop a Resilient and Sustainable Integrated Information System for Health Care Applications: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130758</link>
        <id>10.14569/IJACSA.2022.0130758</id>
        <doi>10.14569/IJACSA.2022.0130758</doi>
        <lastModDate>2022-07-31T09:30:20.1430000+00:00</lastModDate>
        
        <creator>Ayogeboh Epizitone</creator>
        
        <subject>Health information system; integrated information system; e-health; bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The reconstruction of the health sector amidst the fourth industrial revolution has been confronted with many challenges. Many benefits have been attributed to the vital role played by technology in realizing and constructing a robust health information system. However, amidst the digitalization of the healthcare system, several challenges such as integration and fragmentation have been affecting the structure of Health Information Systems (HIS), which subsequently influences decision making and resource allocation. Therefore, this paper, through a comprehensive systematic review, offers a proposition for developing a resilient and sustainable integrated information system for healthcare applications. The study reveals the parallel impact of health information technology applications in the healthcare arena and highlights the need for more in-depth research on HIS that incorporates novel scientific methods. Additionally, this study presents a body of evidence revealing the inadequacy of current HIS to tackle the constant transformative changes presently confronting global healthcare systems.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_58-Framework_to_Develop_a_Resilient_and_Sustainable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Spatial Invariance for Vehicle Platoon Application using New Pooling Method in Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130757</link>
        <id>10.14569/IJACSA.2022.0130757</id>
        <doi>10.14569/IJACSA.2022.0130757</doi>
        <lastModDate>2022-07-31T09:30:20.1300000+00:00</lastModDate>
        
        <creator>M S Sunitha Patel</creator>
        
        <creator>Srinath S</creator>
        
        <subject>Average pooling; convolution neural network; imbalanced dataset; max pooling; mixed pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The imbalanced dataset is a prominent concern for automotive deep learning researchers. The proposed work provides a new mixed pooling strategy with enhanced performance for an imbalanced vehicle dataset based on a Convolution Neural Network (CNN). Pooling is crucial for improving spatial invariance, processing time, and overfitting in CNN architectures. Max and average pooling are the methods most often utilized in contemporary research articles. Both pooling techniques have their own advantages and disadvantages. In this study, the advantages of both pooling algorithms are evaluated for the classification of three vehicles, car, bus, and truck, on imbalanced datasets. For each epoch, the performance of max pooling, average pooling, and the new mixed pooling method was assessed using ROC, F1-score, and error rate. Comparing the two standard methods, max pooling was found to be superior to average pooling. The performance of the proposed mixed pooling approach is superior to both the max pooling and average pooling methods. In terms of Receiver Operating Characteristics (ROC), the proposed mixed pooling technique is approximately 2 per cent better than the max pooling method and 8 per cent better than the average pooling method. Using the new pooling technique, classification performance with an imbalanced dataset is improved, and a novel mixed pooling method is thus proposed for the classification of vehicles.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_57-Improved_Spatial_Invariance_for_Vehicle_Platoon_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Firefly Algorithm with Mini Batch K-Means Entropy Measure for Clustering Heterogeneous Categorical Timber Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130756</link>
        <id>10.14569/IJACSA.2022.0130756</id>
        <doi>10.14569/IJACSA.2022.0130756</doi>
        <lastModDate>2022-07-31T09:30:20.1130000+00:00</lastModDate>
        
        <creator>Nurshazwani Muhamad Mahfuz</creator>
        
        <creator>Marina Yusoff</creator>
        
        <creator>Muhammad Shaiful Nordin</creator>
        
        <creator>Zakiah Ahmad</creator>
        
        <subject>Clustering; mini batch k-means; entropy; heterogeneous categorical; firefly optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Clustering analysis is the process of identifying similar patterns in various types of data. Heterogeneous categorical data consists of data on ordinal, nominal, binary, and Likert scales. Clustering heterogeneous data remains difficult due to complex partitioning and dissimilarity features. It is necessary to find high-quality clustering techniques that efficiently determine the significant features of the data. This paper emphasizes using the firefly algorithm to reduce the distance gap between features and improve clustering performance. To obtain an optimal global solution for clustering, we propose a hybrid of mini-batch k-means (MBK) clustering-based entropy distance measures (EM) with a firefly optimization algorithm (FA). This study compares the performance of hybrid K-Means, Agglomerative, DBSCAN, and Affinity clustering models with EM and FA. The evaluation uses a variety of data from the timber perception survey dataset. In terms of performance, the proposed MBK+EM+FA is superior and the most effective clustering model. It achieves a higher accuracy of 96.3 percent, a 97 percent F-measure, a 98 percent precision, and a 97 percent recall. Other external assessments revealed a Homogeneity (HOMO) of 79.14 percent, a Fowlkes-Mallows Index (FMI) of 93.07 percent, a Completeness (COMP) of 78.04 percent, and a V-Measure (VM) of 78.58 percent. The proposed MBK+EM+FA and MBK+EM took about 0.45 s and 0.35 s to compute, respectively; such small computation times do not detract from the excellent quality of the clustering results. Notably, the proposed model reduced the distance measure across all heterogeneous features. A future model could put heterogeneous categorical data from a different domain to the test.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_56-Firefly_Algorithm_with_Mini_Batch_K_Means_Entropy_Measure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Detection using MRI Images and Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130755</link>
        <id>10.14569/IJACSA.2022.0130755</id>
        <doi>10.14569/IJACSA.2022.0130755</doi>
        <lastModDate>2022-07-31T09:30:20.0500000+00:00</lastModDate>
        
        <creator>Driss Lamrani</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Oussama El Gannour</creator>
        
        <creator>Mohammed Amine Bouqentar</creator>
        
        <creator>Lhoussain Bahatti</creator>
        
        <subject>Brain tumor; machine learning; convolutional neural network; MRI images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>A brain tumor is an abnormal growth of cells in the brain. Magnetic resonance imaging (MRI) is the most practical method for detecting brain tumors. Through these MRIs, doctors analyze and identify abnormal tissue growth and can confirm whether or not the brain is affected by a tumor. Today, with the emergence of artificial intelligence techniques, brain tumors are detected by applying machine learning and deep learning techniques and algorithms. The advantages of applying these algorithms are quick prediction of brain tumors, fewer errors, and greater precision, which help in decision-making and in choosing the most appropriate treatment for patients. In the proposed work, a convolutional neural network (CNN) is applied with the aim of detecting the presence of a brain tumor, and its performance is analyzed. The main purpose of this article is to adopt convolutional neural networks as a machine learning technique to perform brain tumor detection and classification. Based on training and testing results, the pre-trained architecture model reaches 96% in both precision and classification accuracy. For the given dataset, CNN proves to be the better technique for predicting the presence of brain tumors.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_55-Brain_Tumor_Detection_using_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning and Multi-Agent Model to Automate Big Data Analytics in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130754</link>
        <id>10.14569/IJACSA.2022.0130754</id>
        <doi>10.14569/IJACSA.2022.0130754</doi>
        <lastModDate>2022-07-31T09:30:20.0330000+00:00</lastModDate>
        
        <creator>Fouad SASSITE</creator>
        
        <creator>Malika ADDOU</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <subject>Big data analytics; machine learning; smart data; multi-agent system; automation; decision-making; urban planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The objective of this paper is to present an architecture that improves the automation of big data analytics using a multi-agent system and machine learning techniques, supports the processing of real-time big data streams, and enhances decision-making for urban planning and management. With rapidly evolving information technologies and their utilization in many areas such as smart cities, social networks, and urban management and planning, massive data streams are generated and need an efficient approach to deal with them. The proposition in this paper adopts the concept of smart data, which focuses on the value aspect of big data. The proposed architecture is composed of three layers: data acquisition and storage, data management and processing, and the service layer, based on a multi-agent system to automate big data analytics. The proposed model describes the functionalities of the system and the collaboration between agents; these autonomous entities receive data streams in real time and perform preprocessing, big data analytics, and storage into a Hadoop cluster. Machine learning techniques are also used to enhance the decision-making process, such as the use of classification algorithms to predict habitat type based on the characteristics of a population, to help make efficient urban planning decisions. The proposed system can serve as a platform to support data management and conduct effective decision-making in smart cities.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_54-A_Machine_Learning_and_Multi_Agent_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Tumor Segmentation and Classification from MRI Images using Improved FLICM Segmentation and SCA Weight Optimized Wavelet-ELM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130753</link>
        <id>10.14569/IJACSA.2022.0130753</id>
        <doi>10.14569/IJACSA.2022.0130753</doi>
        <lastModDate>2022-07-31T09:30:20.0030000+00:00</lastModDate>
        
        <creator>Debendra Kumar Sahoo</creator>
        
        <creator>Satyasis Mishra</creator>
        
        <creator>Mihir Narayan Mohanty</creator>
        
        <subject>Sine cosine algorithm; extreme learning machine; fuzzy c means; GLCM feature; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Image segmentation is an essential technique in brain tumor MRI image processing for automated diagnosis, partitioning an image into distinct regions referred to as sets of pixels. Distinguishing tumor-affected from non-tumor cases is an arduous task for radiologists. This paper presents a novel image enhancement method based on the SCA (Sine Cosine Algorithm) optimization technique for improving image quality. An improved FLICM (Fuzzy Local Information C Means) segmentation technique is proposed to detect the affected regions in MRI brain tumor images and to reduce noise in the MRI images by introducing a fuzzy factor into the objective function. The SCA weight-optimized Wavelet-Extreme Learning Machine (SCA-WELM) model is also proposed for classifying benign and malignant tumors from MRI brain images. First, the enhanced images undergo improved FLICM segmentation. In the second phase, the segmented images are used for feature extraction, for which the GLCM feature extraction technique is adopted. The extracted features are fed as input to the SCA-WELM model for the classification of benign and malignant tumors. The Dataset-255 dataset is used to evaluate the proposed classification approach. An accuracy of 99.12% is achieved by the improved FLICM segmentation technique. The classification performance of SCA-WELM, measured by sensitivity, specificity, accuracy, and computational time, reaches 0.98, 0.99, 99.21%, and 97.2576 seconds, respectively. Comparison results for SVM (Support Vector Machine), ELM, SCA-ELM, and the proposed SCA-WELM models are presented to show the robustness of the proposed SCA-WELM classification model.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_53-Brain_Tumor_Segmentation_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Human Sperm based on Morphology Using the You Only Look Once Version 4 Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130752</link>
        <id>10.14569/IJACSA.2022.0130752</id>
        <doi>10.14569/IJACSA.2022.0130752</doi>
        <lastModDate>2022-07-31T09:30:19.9730000+00:00</lastModDate>
        
        <creator>Aristoteles</creator>
        
        <creator>Admi Syarif</creator>
        
        <creator>Sutyarso</creator>
        
        <creator>Favorisen R. Lumbanraja</creator>
        
        <creator>Arbi Hidayatullah</creator>
        
        <subject>Classification; deep learning; identification; sperm; sperm head; you only look once version 4</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Infertility is a crucial reproductive problem experienced by both men and women; it is the inability to conceive within one year of sexual intercourse. This study focuses on infertility in men, which has many causes, including poor sperm quality. Currently, human sperm are still identified manually, by observing the sperm through a microscope, which requires considerable time and cost. Therefore, advanced technology is needed to determine sperm quality, in the form of video-based deep learning. Deep learning algorithms support this research in identifying human sperm cells, so deep learning can help detect sperm in video automatically during the process of evaluating sperm cells to determine infertility. We use deep learning to identify sperm with the You Only Look Once version 4 (YOLOv4) algorithm. The purpose of this study was to analyze the accuracy of the YOLOv4 algorithm. The dataset used is sourced from the VISEM dataset of 85 videos. The results are 90.31% AP (Average Precision) for sperm objects and 68.19% AP for non-sperm objects, and the trained model achieves 79.58% mAP (Mean Average Precision). Our research presents results on the identification of human sperm using YOLOv4: the YOLOv4 model can identify sperm and non-sperm objects, and it is able to identify objects in test data in both video and image form.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_52-Identification_of_Human_Sperm_based_on_Morphology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Image Captioning: The Effect of Text Pre-processing on the Attention Weights and the BLEU-N Scores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130751</link>
        <id>10.14569/IJACSA.2022.0130751</id>
        <doi>10.14569/IJACSA.2022.0130751</doi>
        <lastModDate>2022-07-31T09:30:19.9570000+00:00</lastModDate>
        
        <creator>Moaz T. Lasheen</creator>
        
        <creator>Nahla H. Barakat</creator>
        
        <subject>Arabic image captioning; computer vision; deep learning; image captioning; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Image captioning using deep neural networks has recently gained increasing attention, mostly for the English language, with only a few studies in other languages. A good image captioning model is required to automatically generate sensible, syntactically and semantically correct captions, which in turn requires good models for both computer vision and natural language processing. The process is more challenging in the case of data scarcity and for languages with complex morphological structures, such as Arabic. For this reason, only a limited number of studies have been published on Arabic image captioning, compared to those for English. In this paper, an efficient deep learning model for Arabic image captioning is proposed. In addition, the effect of different text pre-processing methods on the obtained BLEU-N scores and the quality of the generated captions, as well as the behavior of the attention mechanism, is investigated. Furthermore, the “THUMB” framework for assessing the quality of generated captions is used, for the first time, to evaluate Arabic captions. As the results show, a BLEU-4 score of 27.12 has been achieved, which is the highest result obtained so far for Arabic image captioning. In addition, the best THUMB scores were obtained compared to previously published results on common images.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_51-Arabic_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Deep Learning Model of Chili Disease Recognition with Small Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130750</link>
        <id>10.14569/IJACSA.2022.0130750</id>
        <doi>10.14569/IJACSA.2022.0130750</doi>
        <lastModDate>2022-07-31T09:30:19.9400000+00:00</lastModDate>
        
        <creator>Nuramin Fitri Aminuddin</creator>
        
        <creator>Zarina Tukiran</creator>
        
        <creator>Ariffuddin Joret</creator>
        
        <creator>Razali Tomari</creator>
        
        <creator>Marlia Morsin</creator>
        
        <subject>Chili leaf; deep learning; data augmentation; geometric transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Due to its tasty, spicy fruit with nutritional qualities, chili is an in-demand crop widely farmed around the world. It is therefore essential to accurately determine the health status of chili plants for agricultural productivity. Recent years have seen impressive results in recognition tasks thanks to deep learning approaches. However, deep learning networks need abundant data to perform well, and collecting enormous amounts of data is time-consuming and resource-intensive. A data augmentation method is proposed to overcome this problem. It was applied to a small dataset of healthy and diseased chili leaves using geometric transformations. Two deep learning models, a CNN and ResNet-18, were then evaluated using the augmented and original datasets. From a series of experiments, it can be concluded that the deep learning models trained on the original and augmented datasets perform better, with an average accuracy of 97%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_50-An_Improved_Deep_Learning_Model_of_Chili_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fit-Gap Analysis: Pre-Fit-Gap Analysis Recommendations and Decision Support Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130749</link>
        <id>10.14569/IJACSA.2022.0130749</id>
        <doi>10.14569/IJACSA.2022.0130749</doi>
        <lastModDate>2022-07-31T09:30:19.9100000+00:00</lastModDate>
        
        <creator>LAHLOU Imane</creator>
        
        <creator>MOTAKI Nourredine</creator>
        
        <creator>SARSRI Driss</creator>
        
        <creator>L’YARFI Hanane</creator>
        
        <subject>ERP systems; misfit; customisation; fit-gap analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>An Enterprise Resource Planning (ERP) system has been defined as a configurable Commercial Off-The-Shelf (COTS) system integrated across multiple business functions. For most companies, adopting ERP has become necessary to maintain market competitiveness. However, ERP implementation remains critical because project success depends on multiple parameters and involves several stakeholders. This article deals with the Fit-Gap Analysis stage, an essential step in ERP implementation. The study was carried out through a literature review and interviews with experts to gather information and support stakeholders toward a successful Fit-Gap phase. It presents a set of recommendations for clients and consultants to consider before starting the Fit-Gap Analysis phase, and it presents an approach, with a decision support model represented in Business Process Modelling Notation (BPMN) and based on several parameters, to be used during the Fit-Gap Analysis stage to bridge gaps. The results are intended to help clients and consultants make the most rational decision to bridge gaps, based on the recommendations found, the approach, and the decision support models presented.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_49-Fit_Gap_Analysis_Pre_Fit_Gap_Analysis_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contribution of Experience in the Acquisition of Physical Sciences Case of High School Students Qualifying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130748</link>
        <id>10.14569/IJACSA.2022.0130748</id>
        <doi>10.14569/IJACSA.2022.0130748</doi>
        <lastModDate>2022-07-31T09:30:19.8930000+00:00</lastModDate>
        
        <creator>Zineb Azar</creator>
        
        <creator>Oussama Dardary</creator>
        
        <creator>Jabran Daaif</creator>
        
        <creator>Malika Tridane</creator>
        
        <creator>Said Benmokhtar</creator>
        
        <creator>Said Belaaouad</creator>
        
        <subject>Experimental practice; physical sciences; students; practical Work PW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This work deals with the importance of experimental practice in the teaching of physical sciences. By practice, we rarely mean practical work carried out in the laboratory. We examined the relationship between students&#39; knowledge of physical science and how practice may or may not help them understand chemical and/or physical concepts. What emerges from a survey distributed to our students is that they strongly favor the use of practice. The problem posed in this work consists of identifying the impact of experiments on the acquisition of knowledge and responding to the problems of learning through experience in the short and medium term. The analysis of the answers allowed us to conclude that experiments carried out in class by the teacher help students understand physical and chemical phenomena and can be done before or after the study of the theory. The length and difficulty of practical work sometimes worry students as they try to follow the protocol step by step. This fact underscores the importance of clarity of purpose, through which students can be guided toward questioning what is expected of them, such as knowing how their knowledge has increased.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_48-Contribution_of_Experience_in_the_Acquisition_of_Physical_Sciences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electronic Personal Health Record Assessment Methodology: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130747</link>
        <id>10.14569/IJACSA.2022.0130747</id>
        <doi>10.14569/IJACSA.2022.0130747</doi>
        <lastModDate>2022-07-31T09:30:19.8630000+00:00</lastModDate>
        
        <creator>Dirayana Kamarudin</creator>
        
        <creator>Nurhizam Safie</creator>
        
        <creator>Hasimi Sallehudin</creator>
        
        <subject>Personal health record; ePHR; ePHR evaluation variable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>ePHR (Electronic Personal Health Record) is not a new concept in the era of electronic health information. The advantages of ePHR in improving health outcomes through patient empowerment have been recognized globally, and almost all countries that implement electronic health records (EHR) have created ePHR. This study identifies the components of the methodology of ePHR implementation studies conducted across countries. The types of ePHR studies selected were adoption studies, acceptance studies, readiness studies, and evaluation studies. This study&#39;s systematic literature review process comprises identification, screening, eligibility, data abstraction, and analysis. A total of 16 final journal articles were analyzed out of 173 identified from five databases (Science Direct, WoS, Scopus, JMIR, and PubMed), regardless of year of publication, up to April 1st, 2021. Among the findings based on the four objectives of the study, two are considered important and interesting by the authors. First, almost all studies added a total of 22 variables to the evaluation model, which shows a clear need to improve that model, namely the TAM model. Second, although the proposal to conduct a scientific study evaluating the perspectives of ePHR stakeholders before ePHR development appeared only once, based on this study and the authors&#39; knowledge it is a starting point for the successful implementation of ePHR. These two findings contribute to the recommendations for the best design of ePHR implementation studies described in this paper.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_47-Electronic_Personal_Health_Record_Assessment_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Screening System for COVID-19 Severity using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130746</link>
        <id>10.14569/IJACSA.2022.0130746</id>
        <doi>10.14569/IJACSA.2022.0130746</doi>
        <lastModDate>2022-07-31T09:30:19.8470000+00:00</lastModDate>
        
        <creator>Abang Mohd Irham Amiruddin Yusuf</creator>
        
        <creator>Marshima Mohd Rosli</creator>
        
        <creator>Nor Shahida Mohamad Yusop</creator>
        
        <subject>Severity prediction; COVID-19; random forest; Na&#239;ve Bayes; gradient boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>COVID-19 disease can be classified into various stages depending on the severity of the patient. Patients in severe stages of COVID-19 need immediate treatment and should be placed in a medical-ready environment because they are at high risk of death. Thus, hospitals need a fast and efficient method to screen large numbers of patients. The enormous amount of medical data in public repositories allows researchers to gain information and predict possible outcomes. In this study, we use a publicly available dataset from Springer Nature repository to discuss the performance of three machine learning techniques for prediction of severity of COVID-19: Random Forest (RF), Na&#239;ve Bayes (NB) and Gradient Boosting (GB). These techniques were selected for their good performance in medical predictive analytics. We measured the performance of the machine learning techniques using six measurements (accuracy, precision, recall, F1-score, sensitivity and specificity) in predicting COVID-19 severity. We found that RF generates the highest performance score, which is 78.4, compared with NB and GB. We also conducted experiments with RF to establish the critical symptoms in predicting COVID-19 severity, and the findings suggested that seven symptoms are substantial. Overall, the performance of various machine learning techniques to predict severity of COVID-19 using electronic health records indicates that machine learning can be successfully applied to determine specific treatment and effective triage.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_46-A_Screening_System_for_COVID_19_Severity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Grid Resource Integration of Enterprise Middle Station based on Analytic Hierarchy Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130745</link>
        <id>10.14569/IJACSA.2022.0130745</id>
        <doi>10.14569/IJACSA.2022.0130745</doi>
        <lastModDate>2022-07-31T09:30:19.8170000+00:00</lastModDate>
        
        <creator>Shaobo Liu</creator>
        
        <creator>Li Chen</creator>
        
        <creator>Wenyuan Bai</creator>
        
        <creator>Zhen Zhang</creator>
        
        <creator>Fupeng Li</creator>
        
        <subject>Power grid enterprises; SDN; power grid resources; centralized control architecture; power communication network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>With the transformation from the smart grid to the power Internet of Things, new power businesses such as power grid automation and power quality monitoring are constantly emerging, and the load environment of the power grid is changeable. To meet multi-service needs, integrated access schemes for power grid resources in power enterprises have gradually diversified, which brings challenges to the unified management and control of the power grid communication network. In this paper, SDN technology is used to improve the operation and maintenance management and control of the power communication network, targeting the integration scheme of power grid resources in power enterprises. Based on controller cluster technology and the requirements of new power businesses, this paper designs a software-defined centralized control architecture for new power communication network services. The architecture realizes operation and maintenance management of network resources under a centralized control architecture for typical enterprise scenarios, such as power grid enterprises. Convergence speed is improved by 27%, the minimum iterative convergence value is 31% better than that of other methods, and the system requirement is reduced by 13.5%, which helps improve the efficiency of dynamic node allocation and ensures the large-capacity data transmission needs of the smart grid. This research can enable two-way interaction, real-time expansion, and unified deployment of power businesses in the future, and promote the intensive and lean development of the power communication network.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_45-Power_Grid_Resource_Integration_of_Enterprise_Middle_Station.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction Models to Effectively Detect Malware Patterns in the IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130744</link>
        <id>10.14569/IJACSA.2022.0130744</id>
        <doi>10.14569/IJACSA.2022.0130744</doi>
        <lastModDate>2022-07-31T09:30:19.8000000+00:00</lastModDate>
        
        <creator>Rawabi Nazal Alhamad</creator>
        
        <creator>Faeiz M. Alserhani</creator>
        
        <subject>Internet of Things (IoT); malware deletion; random forest; Catboost; convolutional neural network; long short-term memory (LSTM); XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The widespread use of the Internet of Things (IoT) has influenced many domains, including smart cities, cameras, wearables, smart industrial equipment, and other aspects of our daily lives. At the same time, the IoT environment deals with a massive volume of data that must be kept secure from tampering or theft. Detecting security attacks in the IoT context requires intelligent techniques rather than reliance on signature matching. Machine learning (ML) and deep learning (DL) approaches are efficient at detecting these attacks and at predicting intrusion behavior based on unknown patterns. This study applies five ML and DL techniques to identify malware in network traffic based on the IoT-23 dataset. The classifiers used are Random Forest, Catboost, XGBoost, a Convolutional Neural Network, and Long Short-Term Memory (LSTM). These algorithms were selected to provide lightweight security systems that can be deployed on IoT devices rather than in a centralized approach. The dataset was preprocessed to remove unnecessary or missing data, and the most significant features were then extracted using a feature engineering technique. The highest overall accuracy achieved was 96%, attained by all classifiers except LSTM, which recorded a lower accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_44-Prediction_Models_to_Effectively_Detect_Malware_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Approach for Spatiotemporal Weather Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130743</link>
        <id>10.14569/IJACSA.2022.0130743</id>
        <doi>10.14569/IJACSA.2022.0130743</doi>
        <lastModDate>2022-07-31T09:30:19.7700000+00:00</lastModDate>
        
        <creator>Radhika T V</creator>
        
        <creator>K C Gouda</creator>
        
        <creator>S Sathish Kumar</creator>
        
        <subject>Spatiotemporal; big climate data; spark; ARIMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Massive volumes of multidimensional array-based spatiotemporal data are generated by climate observations and model simulations. The growth in climate data leads to new opportunities for climate studies at multiple spatial and temporal scales. Managing, analyzing, and processing big climate data is challenging because of the huge data volume. In this work, multidimensional climate data such as precipitation and temperature are processed and analyzed on the Spark MapReduce platform, since Spark is computationally more efficient than a Hadoop-MapReduce platform of the same configuration. On the temporal scale, monthly and seasonal analyses of climate data have been carried out to understand the regional climate. The prediction of rainfall on monthly and seasonal time scales is very important for planning and devising agricultural strategies, disaster management, and so on. As the prediction of the climate state is very challenging, this study also attempts to predict rainfall using time series analysis in the same framework. As a test case, the Auto-Regressive Integrated Moving Average (ARIMA) time series approach has been used for rainfall prediction. The proposed approach is evaluated and found to be accurate in the analysis and prediction of climate data, and it will guide researchers toward a better understanding of the climate and its application to multiple sectors.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_43-Novel_Approach_for_Spatiotemporal_Weather_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Average Delay-based early Congestion Detection in Named Data of Health Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130742</link>
        <id>10.14569/IJACSA.2022.0130742</id>
        <doi>10.14569/IJACSA.2022.0130742</doi>
        <lastModDate>2022-07-31T09:30:19.7070000+00:00</lastModDate>
        
        <creator>Asmaa EL-BAKKOUCHI</creator>
        
        <creator>Mohammed EL GHAZI</creator>
        
        <creator>Anas BOUAYAD</creator>
        
        <creator>Mohammed FATTAH</creator>
        
        <creator>Moulhime EL BEKKALI</creator>
        
        <subject>Named data networking; internet of health things; congestion control; congestion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The Internet of Health Things (IoHT) is receiving increasing attention from researchers because of its wide use in the healthcare field. IoHT refers to medical devices whose main purpose is to transmit health data in a secure and lossless manner between themselves and healthcare personnel. However, in a medical emergency, sensors transmit vital patient data simultaneously and frequently, increasing the risk of congestion and packet loss. This problem is highly undesirable in an IoHT system. To address this issue, a new approach based on Named Data Networking (NDN), considered the most appropriate internet architecture for IoT systems, is proposed to control congestion in IoHT systems. The proposed approach, Average Delay-based early Congestion Detection (ADCD), detects and controls congestion at consumer nodes by calculating the average queuing delay based on the one-way delay, similar to that proposed in Sync-TCP. According to the calculated value, ADCD then classifies the network into one of three states: non-congested, less congested, and heavily congested. The congestion window size is adjusted according to the state of the network. ADCD was implemented in ndnSIM and compared to the Interest Control Protocol (ICP). The results show that ADCD maximizes bandwidth utilization compared to ICP while maintaining a reasonable delay.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_42-Average_Delay_based_early_Congestion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Educational Data to Analyze the Student’s Performance in TOEFL iBT Reading, Listening and Writing Scores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130741</link>
        <id>10.14569/IJACSA.2022.0130741</id>
        <doi>10.14569/IJACSA.2022.0130741</doi>
        <lastModDate>2022-07-31T09:30:19.6900000+00:00</lastModDate>
        
        <creator>Khaled M. Hassan</creator>
        
        <creator>Mohammed Helmy Khafagy</creator>
        
        <creator>Mostafa Thabet</creator>
        
        <subject>Educational data mining; students score; linear regression; TOEFL; Statistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Student scores in TOEFL iBT reading, listening, and writing may reveal weaknesses and deficiencies in educational institutions. Traditional approaches and evaluations are unable to disclose the significant information hidden inside students&#39; TOEFL scores. As a result, data mining approaches are widely used across many fields, particularly education, where the practice is known as Educational Data Mining (EDM). EDM provides a framework for handling research issues in student data and can be used to investigate previously undetected relationships in large student databases. This study used EDM to identify the numerous factors that influence students&#39; achievement and to derive observations using advanced algorithms. It explored the relationship among university students&#39; previous academic experience, gender, place of residence, and current course attendance within a sample of 473 students (225 male and 248 female). Educational specialists must identify the causes of student dropout reflected in TOEFL scores. The results showed that the model is suitable for investigating important aspects of student outcomes; the study used the Statistical Package for the Social Sciences (SPSS V26) for descriptive and inferential statistics and multiple linear regression to help students improve their scores.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_41-Mining_Educational_Data_to_Analyze_the_Students_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using a Fuzzy-Bayesian Approach for Predictive Analysis of Delivery Delay Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130740</link>
        <id>10.14569/IJACSA.2022.0130740</id>
        <doi>10.14569/IJACSA.2022.0130740</doi>
        <lastModDate>2022-07-31T09:30:19.6770000+00:00</lastModDate>
        
        <creator>Ouafae EL Bouhadi</creator>
        
        <creator>Monir Azmani</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Mouna Atik el ftouh</creator>
        
        <subject>Delivery logistics; risk management; predictive analysis; bayesian network; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Although one of the major roles of delivery logistics is to ensure good customer service quality, risks such as damage, delay, and return of transported goods occur quite often. This makes risk control and prevention one of the requirements of transport supply chain quality. This article focuses on the analysis of delay risk, which is often considered fundamental to service quality and a source of additional costs related to the violation of time windows. Such a risk can harm a supplier&#39;s image and can even lead to the loss of customers if it recurs. The aim of this case study is the development of a fuzzy-Bayesian approach that anticipates, through predictive analysis combining Bayesian networks (BNs) and fuzzy logic, the possible delays affecting the smooth running of a delivery operation. The results of the implementation of the late-delivery risk prediction model are validated by verifying three axioms. In addition, a sensitivity and scenario analysis is performed to validate the model and identify the parameters with the most adverse impact on the occurrence of such a risk. These results can help carriers and transport providers minimize potential late deliveries. Moreover, the developed model can serve as a basis for different types of predictions in the field of freight transport as well as in other research areas.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_40_Using_a_Fuzzy_Bayesian_Approach_for_Predictive_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Approach towards Vehicle Number Estimation with Ad-hoc Network under Vehicular Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130738</link>
        <id>10.14569/IJACSA.2022.0130738</id>
        <doi>10.14569/IJACSA.2022.0130738</doi>
        <lastModDate>2022-07-31T09:30:19.6430000+00:00</lastModDate>
        
        <creator>Yuva Siddhartha Boyapati</creator>
        
        <creator>Shallaja Salagrama</creator>
        
        <creator>Vimal Bibhu</creator>
        
        <subject>Vehicular ad hoc networking; hidden vehicle; visible vehicle; time division multiple access; dedicated short range communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Ad-hoc network usability extends the applications of Dedicated Short-Range Communication (DSRC). This type of ad-hoc network technology is infrastructure-less, and for that reason it can be used in a DSRC system to deliver quick, real-time messages to vehicle operators, helping prevent damage and loss of life from accidents and crashes. In this paper, we present a holistic approach to estimating the number of vehicles within a specified range of one kilometer. The designed vehicle number estimation system is based on a Time Division Multiple Access (TDMA) mechanism, which estimates the number of slots reserved by vehicular nodes. This estimation methodology was tested in a digital simulator, with approximately 34 vehicles defined over 24 seconds to test slot reservation. We found that when there are more than 20 vehicular nodes, slot reservation accuracy is 95%, and when there are fewer than 20 vehicular nodes, slot reservation accuracy is 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_38-An_Efficient_Approach_towards_Vehicle_Number_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Tourism and Digital Heritage: An Analysis of VR/AR Technologies and Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130739</link>
        <id>10.14569/IJACSA.2022.0130739</id>
        <doi>10.14569/IJACSA.2022.0130739</doi>
        <lastModDate>2022-07-31T09:30:19.6430000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib Siddiqui</creator>
        
        <creator>Toqeer Ali Syed</creator>
        
        <creator>Adnan Nadeem</creator>
        
        <creator>Waqas Nawaz</creator>
        
        <creator>Ahmad Alkhodre</creator>
        
        <subject>Virtual tourism; digital heritage; virtual reality; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>During the pandemic, travel restrictions impacted the tourism industry with an estimated loss of more than a trillion USD; at the same time, we have seen a significant increase in profits for the industries that enable remote connectivity. Various studies have identified the positive impact of virtual tourism, in which tourists can be attracted by providing a VR/AR-based experience of the destination. Similarly, virtual, mixed, and augmented realities are being used to enhance user experience in digital heritage and its preservation. With emerging technologies and increasing demand for e-tourism (due to travel restrictions), there is a need to review the technological changes and analyze user requirements with respect to virtual tourism. This paper provides a literature review of the latest technologies and applications that can potentially benefit the virtual tourism and digital heritage industry. We also provide an analysis of their impact on user experience, awareness, and interest, as well as the pros and cons of virtual experiences, which may inform the research community about the current spectrum of virtual tourism and digital heritage.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_39-Virtual_Tourism_and_Digital_Heritage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Authentication for Payment Request Verification Over Cloud using Bilinear Dual Authentication Payments Transaction Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130737</link>
        <id>10.14569/IJACSA.2022.0130737</id>
        <doi>10.14569/IJACSA.2022.0130737</doi>
        <lastModDate>2022-07-31T09:30:19.6130000+00:00</lastModDate>
        
        <creator>A. Saranya</creator>
        
        <creator>R. Naresh</creator>
        
        <subject>Mobile payment; transaction protocol; bullet hash maximum distance separable; mutate Hellman algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>There has been a recent explosion in the number of mobile network payment gateways that enable users to access services through a variety of devices. Mobile payment gateway security is complicated by a number of difficult-to-solve issues. As digital technology has progressed over the last decade, mobile payment mechanisms have gained a lot of interest. In the internet industry, these standards can have a significant impact on service quality. However, the most important aspect to consider when using these systems is their accountability, which assures confidence between the parties engaged in the financial transactions. Mobile payments may be easy, quick, and secure; on the other hand, they may be rather pricey and are still susceptible to problems caused by technology. Specifically, mobile payments won&#39;t be able to go through at all if there are any problems with the host phone. For this reason, this article proposes a mobile payment mechanism that uses secure bilinear dual authentication. Using Bullet hash Maximum Distance Separable (MDS) codes and the mutate Hellman algorithm, our payment protocol incorporates all of the essential security characteristics needed to establish confidence between the parties; in other words, accountability is assured by mutual authentication and non-repudiation. Conflicts that may emerge in the course of a payment transaction can be resolved using our strategy. Scyther is used to test our suggested protocol&#39;s empirical performance.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_37-Dual_Authentication_for_Payment_Request_Verification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic State Recognition of Multi-type Protection Platens based on YOLOv5 and Parallel Multi-Decision Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130736</link>
        <id>10.14569/IJACSA.2022.0130736</id>
        <doi>10.14569/IJACSA.2022.0130736</doi>
        <lastModDate>2022-07-31T09:30:19.5970000+00:00</lastModDate>
        
        <creator>Ying Zhang</creator>
        
        <creator>Yihui Zhang</creator>
        
        <creator>Hao Wu</creator>
        
        <creator>Boxiong Fu</creator>
        
        <creator>Ling He</creator>
        
        <subject>Protection platen; parallel multi-decision; YOLOv5 network; watershed-color space difference-shape feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>A protection platen is a vital component in relay protection systems. The manual inspection of protection platen states is long-term repetitive work with low efficiency and imposes a heavy burden on workers. In this work, we propose a new system to automatically detect the states of multi-type protection platens in images. This system can classify two protection platen categories and further recognize the states of protection platens. For the classification of protection platen types, we propose a new algorithm that automatically detects two protection platen types based on HSV (Hue, Saturation, Value) color space weighting operators. The proposed operators quantify the color variation in the protection platen and reduce the influence of environmental factors. With respect to the state recognition of protection platens, the Type-I protection platen states are automatically classified by the YOLOv5 (You only look once version 5) network. Since the Type-II protection platen has three primary states and more complicated structures, we investigate a new parallel multi-decision algorithm to recognize the states of Type-II protection platens based on the newly proposed watershed-color space difference-shape feature (W-CD-SF) method and the YOLOv5 network. The W-CD-SF technique can segment the protection platens and extract their shape features automatically. This multi-decision mechanism improves the robustness and generalization of state recognition. Experiments were conducted on the collected protection platen images containing 8,969 protection platens. The recognition accuracies of protection platen states exceed 95%. This system can provide auxiliary detection and long-term monitoring of protection platen states.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_36-Automatic_State_Recognition_of_Multi_type_Protection_Platens.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Fusion Model of Road Sensors based IoT Feature Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130735</link>
        <id>10.14569/IJACSA.2022.0130735</id>
        <doi>10.14569/IJACSA.2022.0130735</doi>
        <lastModDate>2022-07-31T09:30:19.5830000+00:00</lastModDate>
        
        <creator>Hua Yang</creator>
        
        <subject>Traffic data; Internet of Things (IoT) technology; perception sensors; vibration monitoring; k-means++ algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The collection of traffic data plays a role in analyzing and predicting highway design, planning, and real-time traffic management. The accuracy requirements for road dynamic data collection are low, typically 3%-5%; however, vehicles must be able to pass at high speed while traffic information such as vehicle classification and vehicle speed is obtained. The prerequisite for applying Internet of Things (IoT) technology to road information monitoring lies in the research and development of sensor technology in the perception layer and communication technology in the network layer, so that a large amount of perception data can be obtained to serve the development and application of algorithms. To achieve low-cost, long-term monitoring of comprehensive traffic information and road service status, this paper constructs a road vibration monitoring system, carries out road vibration monitoring in complex road environments, and proposes a traffic information monitoring method driven by road vibration data. By deploying the pavement vibration monitoring system on an actual road, the original pavement vibration signal under moving vehicle loads is obtained. Through smoothing and eigenvalue extraction, the monitoring of vehicle speed, wheelbase, driving direction, vehicle load position, and traffic flow is realized. The experimental results show that, through analysis of the road dynamic response under working conditions together with smoothing and eigenvalue extraction, the numerical modeling method in this paper realizes the monitoring of vehicle load position and traffic flow. The calculation error of vehicle speed and wheelbase is within &#177;4%, which helps identify characteristic indices of the road vibration signal for evaluating road service status and provides a reference for applying road vibration response to road damage early warning and scientific maintenance.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_35-Data_Fusion_Model_of_Road_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Task Reinforcement Meta-Learning in Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130734</link>
        <id>10.14569/IJACSA.2022.0130734</id>
        <doi>10.14569/IJACSA.2022.0130734</doi>
        <lastModDate>2022-07-31T09:30:19.5500000+00:00</lastModDate>
        
        <creator>Ghazi Shakah</creator>
        
        <subject>Multitasking; meta-learning; reinforcement learning; neural networks; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Artificial Neural Networks (ANNs) are one of the main and most widespread tools for creating intelligent systems, and they are actively used for data analysis in many areas such as robotics, computer vision, and natural language processing. Training is one of the most labor-intensive stages of working with ANNs, and there are many different modifications of ANNs and methods for their training. Currently, deep neural networks are becoming one of the most popular machine learning methods due to their effectiveness in areas such as speech recognition, medical informatics, and computer vision. It is known that ANN training depends on the type of input data. This paper considers reinforcement learning, a popular method used in cases where information is reinforced by signals from the external environment with which the model interacts. The purpose of this paper is to develop a reinforcement meta-learning algorithm that is efficient in terms of quality and speed of learning. Despite significant scientific progress in deep learning, existing algorithms are not efficient enough to solve real-world problems; in addition, they require a significant amount of training time, which complicates the development process. To solve these problems, the use of meta-learning, or &quot;learning to learn&quot;, algorithms has recently become especially relevant. The paper proposes an approach to reinforcement meta-learning using a multitasking weight optimizer. It is experimentally shown that the proposed approach is more efficient than the known MAML (Model-Agnostic Meta-Learning) algorithm: the proposed MAML SPSA-Track method improves efficiency by an average of 4%, and MAML SPSA-Delta by 8%. Moreover, the latter algorithm spends on average two times less time on the push-v2 and pick-place-v2 tasks.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_34-Multi_Task_Reinforcement_Meta_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Alignment of Software System Level with Business Process Level: Resolving Syntactic and Semantic Conflicts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130733</link>
        <id>10.14569/IJACSA.2022.0130733</id>
        <doi>10.14569/IJACSA.2022.0130733</doi>
        <lastModDate>2022-07-31T09:30:19.5330000+00:00</lastModDate>
        
        <creator>Samia Benabdellah Chaouni</creator>
        
        <creator>Maryam Habba</creator>
        
        <creator>Mounia Fredj</creator>
        
        <subject>Information system alignment; business process; software system; Business Process Model and Notation (BPMN); Unified Modelling Language (UML); class diagram; ontology; semantic aspects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Information systems help organizations manage their entities with innovative technologies. These entities are often very different in nature. In this paper, we consider a business process level based on a set of Business Process Model and Notation (BPMN) models and a software system level based on a Unified Modeling Language (UML) class diagram. The differences between these entities make them difficult to align. In addition, an organization&#39;s BPMN models may be designed by different teams, which can cause syntactic and semantic heterogeneities. We present the first step of our proposed approach for aligning a software system level with a business process level without conflicts (redundancy and information loss). Syntactic and semantic rules based on ontologies and other resources for comparing BPMN models are described, as well as a process for transforming BPMN models into a UML model.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_33-Alignment_of_Software_System_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Adaptive Line Tracking Breakpoint Detection Algorithm for Room Sensing using LiDAR Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130732</link>
        <id>10.14569/IJACSA.2022.0130732</id>
        <doi>10.14569/IJACSA.2022.0130732</doi>
        <lastModDate>2022-07-31T09:30:19.5030000+00:00</lastModDate>
        
        <creator>Deddy El Amin</creator>
        
        <creator>Karlisa Priandana</creator>
        
        <creator>Medria Kusuma Dewi Hardhienata</creator>
        
        <subject>LiDAR; breakpoint detector; robot localization; corner detection; line segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This research focuses on the use of Light Detection and Ranging (LiDAR) sensors for robot localization. One of the most essential algorithms in LiDAR localization is the breakpoint detector, which is used to determine the corners of a room. Previously developed breakpoint detection methods have weaknesses: although the Adaptive Breakpoint Detector (ABD) can generate dynamic threshold values, its results still require line extraction to obtain the corner breakpoint. A line extraction method, e.g., Iterative End Point Fit (IEPF), is used to categorize data, generating a line pattern as an interpretation of a wall. The computation required to obtain the corner breakpoint grows as lines are extracted. To address this issue, our algorithm proposes a new threshold area in the form of an ellipse, with the threshold value parameter obtained from the previously identified room size and sensor characteristics. As a result, corner breakpoint detection becomes more adaptive. The goal of this research is to create an Adaptive Line Tracking Breakpoint Detector (ALTBD) approach that reduces the computing time required to detect corner breakpoints. Furthermore, the line extraction method required for corner breakpoint detection is modified in the ALTBD: to distinguish between the edge of a wall and the corner of a room, the boundary value is increased. The ALTBD method was tested in a simulation arena comprising multiple rooms and halls. According to the results, the ALTBD detects corner breakpoints faster than the ABD-IEPF method, and the accuracy of determining the robot&#39;s position is also improved.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_32-Development_of_Adaptive_Line_Tracking_Breakpoint_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Framework for Locating Physical Internet Hubs using Latitude and Longitude Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130731</link>
        <id>10.14569/IJACSA.2022.0130731</id>
        <doi>10.14569/IJACSA.2022.0130731</doi>
        <lastModDate>2022-07-31T09:30:19.4870000+00:00</lastModDate>
        
        <creator>El-Sayed Orabi Helmi</creator>
        
        <creator>Osama Emam</creator>
        
        <creator>Mohamed Abdel-Salam</creator>
        
        <subject>Physical internet hubs (π hubs); deep learning; convolutional neural network (CNN); recurrent neural network (RNN); latitude and Longitude classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This article proposes a framework for determining optimal or near-optimal locations of physical internet hubs using data mining and deep learning algorithms. In the data acquisition phase, the framework extracts latitude and longitude coordinates from various data types, including RFID, online maps, GPS, and GSM data. These coordinates are class-labeled according to the decision maker&#39;s preferences using k-means, a density-based algorithm (DBSCAN), and hierarchical clustering analysis algorithms. The proposed algorithm uses a haversine distance matrix, rather than a Euclidean distance matrix, to calculate the distance between coordinates; the haversine matrix provides more accurate distances on the surface of a sphere. The framework uses the class-labeled data from the clustering phase as input for the classification phase. Classification was performed using decision tree, random forest, Bayesian, gradient descent, neural network, convolutional neural network (CNN), and recurrent neural network (RNN) algorithms. The classified coordinates were evaluated for each algorithm, and it was found that CNN and RNN outperformed the other classification algorithms, with accuracies of 97.6% and 97.9%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_31-Deep_Learning_Framework_for_Locating_Physical_Internet_Hubs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved POS Tagging Model for Malay Twitter Data based on Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130730</link>
        <id>10.14569/IJACSA.2022.0130730</id>
        <doi>10.14569/IJACSA.2022.0130730</doi>
        <lastModDate>2022-07-31T09:30:19.4730000+00:00</lastModDate>
        
        <creator>Siti Noor Allia Noor Ariffin</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <subject>Informal Malay; Malay Twitter corpus; Malay POS tagging; Malay POS tagger model; Malay social media texts; Malay POS machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Twitter is a popular social media platform in Malaysia that allows 280-character microblogging. Almost everything that happens in a single day is tweeted by users. Because of Twitter&#39;s popularity, most Malaysians use it daily, providing researchers and developers with a wealth of data on Malaysian users. This paper explains why and how this study created a new Malay Twitter corpus, Malay Part-of-Speech (POS) tags, and a Malay POS tagger model. The goal of this paper is to improve existing Malay POS tags so that they are more compatible with the newly created Malay Twitter corpus, and to build a POS tagging model specifically tailored for Malay Twitter data using various machine learning algorithms, such as Support Vector Machine (SVM), Na&#239;ve Bayes (NB), Decision Tree (DT), and K-Nearest Neighbor (KNN) classifiers. The study&#39;s data was gathered using Twitter&#39;s Advanced Search function with relevant keywords associated with informal Malay. After several stages of processing, the data was fed into the machine learning algorithms to serve as the training and testing corpus. The evaluation and analysis of the developed Malay POS tagger model show that the SVM classifier, together with the newly proposed Malay POS tags, is the best machine learning algorithm for Malay Twitter data. Furthermore, the prediction accuracy and POS tagging results show that this research outperformed a comparable previous study, indicating that the Malay POS tagger model and its POS tags were successfully improved.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_30-Improved_POS_Tagging_Model_for_Malay_Twitter_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Detection and Classification using Deep Learning Xception Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130729</link>
        <id>10.14569/IJACSA.2022.0130729</id>
        <doi>10.14569/IJACSA.2022.0130729</doi>
        <lastModDate>2022-07-31T09:30:19.4730000+00:00</lastModDate>
        
        <creator>Basem S. Abunasser</creator>
        
        <creator>Mohammed Rasheed J. AL-Hiealy</creator>
        
        <creator>Ihab S. Zaqout</creator>
        
        <creator>Samy S. Abu-Naser</creator>
        
        <subject>Breast cancer; deep learning; xception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Breast Cancer (BC) is one of the leading causes of death worldwide; approximately 10 million people passed away from breast cancer worldwide in 2020. Breast cancer is a fatal disease and highly prevalent among women globally. It ranks fourth among fatal cancers, alongside, for example, colorectal cancer, cervical cancer, and brain tumors. Furthermore, the number of new breast cancer cases is anticipated to rise by 70% in the next twenty years. Consequently, early detection and precise diagnosis of breast cancer play an essential part in improving diagnosis and raising the breast cancer survival rate of patients from 30% to 50%. Through advances in healthcare technology, deep learning plays a significant role in handling and inspecting large numbers of X-ray, Magnetic Resonance Imaging (MRI), and computed tomography (CT) images. The aim of this study is to propose a deep learning model to detect and classify breast cancers across eight classes: benign adenosis, benign fibroadenoma, benign phyllodes tumor, benign tubular adenoma, malignant ductal carcinoma, malignant lobular carcinoma, malignant mucinous carcinoma, and malignant papillary carcinoma. The dataset was collected from the Kaggle repository for breast cancer detection and classification. The metrics used to evaluate the proposed model include F1-score, recall, precision, and accuracy. The proposed model was trained, validated, and tested using the preprocessed dataset. The results showed a precision of 97.60%, recall of 97.60%, and F1-score of 97.58%, indicating that deep learning models are suitable for detecting and classifying breast cancers precisely.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_29-Breast_Cancer_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MCBRank Method to Improve Software Requirements Prioritization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130728</link>
        <id>10.14569/IJACSA.2022.0130728</id>
        <doi>10.14569/IJACSA.2022.0130728</doi>
        <lastModDate>2022-07-31T09:30:19.4570000+00:00</lastModDate>
        
        <creator>Sabrina Ahmad</creator>
        
        <creator>Riftika Rizawanti</creator>
        
        <creator>Terry Woodings</creator>
        
        <creator>Intan Ermahani A. Jalil</creator>
        
        <subject>Requirements prioritization; requirements engineering; software engineering; empirical software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Software requirements prioritization is an important issue that has a profound effect on the overall quality of software development. Applying software requirements prioritization helps minimize risks in software development so that the most important and most impactful requirements are given priority. This paper presents a proposed software requirements prioritization method named MCBRank, which incorporates the renowned MoSCoW method and Case-Based Ranking to improve prioritization correctness. It elaborates on the implementation of MCBRank in an empirical investigation to determine software requirements prioritization for a potential e-library system. The investigation allows the software requirements prioritization process to be implemented using the proposed MCBRank method. A role-playing empirical investigation with 30 respondents prioritized 31 software requirements, and the results were measured by Cohen&#39;s Kappa. The kappa results show that MCBRank achieves better agreement with the Gold Standard, with a kappa value of 0.60. Therefore, the investigation results support that MCBRank improves ranking correctness, representing the stakeholders’ wants and the organization&#39;s actual needs for the potential e-library system.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_28-MCBRank_Method_to_Improve_Software_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Clustering Analysis of Power Incomplete Data based on Improved IVAEGAN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130727</link>
        <id>10.14569/IJACSA.2022.0130727</id>
        <doi>10.14569/IJACSA.2022.0130727</doi>
        <lastModDate>2022-07-31T09:30:19.4400000+00:00</lastModDate>
        
        <creator>Yutian Hong</creator>
        
        <creator>Jun Lin</creator>
        
        <subject>Power system; power equipment; incomplete data; fuzzy clustering; mining algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The complex and huge power system generates data at a very large scale during operation. As the various information systems acquire these data, incomplete power data can easily arise, which undermines the efficiency and quality of work and reduces the security and reliability of the entire power grid. When data storage failures or data acquisition errors cause incomplete data and incomplete data sets, fuzzy clustering of the data faces great difficulties. Fuzzy clustering of incomplete power equipment data is divided into processing the incomplete data and cluster analysis of the &quot;recovered&quot; complete data. This paper proposes an IVAEGAN-IFCM interval fuzzy clustering algorithm, which uses interval data sets to fill in the incomplete data and then clusters the interval data. At the same time, the whole numerical data set is transformed into a complete interval data set. The final clustering result is obtained by interval fuzzy mean clustering analysis of the whole interval data set. Finally, the proposed algorithm is compared experimentally with other machine learning methods on training data sets. The experimental results show that the proposed algorithm can complete incomplete data sets and cluster them with high precision, achieving higher clustering accuracy than the other contrast methods. Compared with the numerical clustering algorithm, clustering accuracy is improved by more than 4.3%, with better robustness. The algorithm also generalizes better on artificial data sets and other complex data sets. It helps improve the technical level of the existing power grid and has important theoretical research value and engineering practice significance.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_27-Fuzzy_Clustering_Analysis_of_Power_Incomplete_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectiveness of Human-in-the-Loop Sentiment Polarization with Few Corrected Labels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130726</link>
        <id>10.14569/IJACSA.2022.0130726</id>
        <doi>10.14569/IJACSA.2022.0130726</doi>
        <lastModDate>2022-07-31T09:30:19.4100000+00:00</lastModDate>
        
        <creator>Ruhaila Maskat</creator>
        
        <creator>Nurzety Aqtar Ahmad Azuan</creator>
        
        <creator>Siti Auni Amaram</creator>
        
        <creator>Nur Hayatin</creator>
        
        <subject>Human-in-the-loop; few labels; sentiment polarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>In this work, we investigated the effectiveness of adopting Human-in-the-Loop (HITL) to correct automatically generated labels from existing scoring models, e.g., SentiWordNet and Vader, to enhance prediction accuracy. Recently, many proposals showed a trend of utilizing these models to label data, assuming that the labels produced are close to the ground truth. However, none investigated the correctness of this notion; this paper fills that gap. Bad labels result in bad predictions; hence, hypothetically, by positioning a human in the computing loop to correct inaccurate labels, accuracy performance can be improved. As it is infeasible to expect a human to correct a multitude of labels, we set out to answer the questions “What is the smallest percentage of corrected labels needed to improve prediction quality against a baseline?” and “Would randomly selecting automatic labels for correction produce better prediction than specifically choosing labels with distinct data points?”. Na&#239;ve Bayes (NB) and Decision Tree (DT) were employed on the AirBnB and Vaccines public datasets. We conclude from our results that not all ML algorithms are suited to a HITL environment. NB fared better than DT at producing improved accuracy with small percentages of corrected labels, as low as 1%, exceeding the baseline. When selected for human correction, labels with distinct data points enhanced accuracy better than random selection for NB across both datasets, yet only partially for DT.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_26-Effectiveness_of_Human_in_the_Loop_Sentiment_Polarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-AHP: An Enhanced Analytical Hierarchy Process Algorithm for Priotrizing Large Software Requirements Numbers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130725</link>
        <id>10.14569/IJACSA.2022.0130725</id>
        <doi>10.14569/IJACSA.2022.0130725</doi>
        <lastModDate>2022-07-31T09:30:19.3930000+00:00</lastModDate>
        
        <creator>Nahla Mohamed</creator>
        
        <creator>Sherif Mazen</creator>
        
        <creator>Waleed Helmy</creator>
        
        <subject>Requirements engineering; analytical hierarchy process; software engineering; requirements prioritization techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>One of the main activities of software requirements analysis is requirements prioritization. Wrong requirements prioritization is risky, as it leads to many software failures. Current requirements prioritization techniques cannot deal with large numbers of requirements efficiently, which is considered one of their main issues. Many researchers have agreed that the analytical hierarchy process (AHP) is one of the best prioritization techniques, as it produces highly accurate results. AHP has two main problems: scalability and inconsistency. These problems have motivated us to propose an improved version of AHP for software requirements prioritization, namely Enhanced AHP (E-AHP). A performance evaluation has been done for the conventional AHP, E-AHP, and one of the recent algorithms that also tries to solve the AHP scalability problem, namely removing eigenvalues and introducing the dynamic consistency checking algorithm into AHP (ReDCCahp). The evaluation shows which algorithm takes the least time, uses the least memory, produces the most consistent and accurate results, and has the highest scalability. The three algorithms have been evaluated by running their codes using different numbers of requirements ranging from 10 to 500. The results show that E-AHP is more scalable, takes the least time, uses the least memory, and produces the most consistent and accurate results compared to the other two algorithms. This becomes especially noticeable as the number of requirements increases. Therefore, E-AHP is suitable for large software projects, as it can deal efficiently with large numbers of software requirements.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_25-E_AHP_An_Enhanced_Analytical_Hierarchy_Process_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impervious Surface Prediction in Marrakech City using Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130724</link>
        <id>10.14569/IJACSA.2022.0130724</id>
        <doi>10.14569/IJACSA.2022.0130724</doi>
        <lastModDate>2022-07-31T09:30:19.3630000+00:00</lastModDate>
        
        <creator>Sulaiman Mahyoub</creator>
        
        <creator>Hassan Rhinane</creator>
        
        <creator>Mehdi Mansour</creator>
        
        <creator>Abdelhamid Fadil</creator>
        
        <creator>Waban Al okaishi</creator>
        
        <subject>Deep-learning; remote sensing; artificial neural network ANN; impervious surface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Determining impervious surfaces is one of the most important topics of remote sensing because of its great role in providing information that benefits decision-makers in urban planning, sustainable development goals, and environmental protection. In recent years, great development in this field has occurred due to huge improvements in the algorithms and techniques used to map impervious surfaces. In this paper, a deep learning technique has been implemented to investigate the extraction of impervious surfaces in Marrakesh city, based on Landsat images. 9000 polygons and 13840 points were used to prepare label data by random forest in Google Earth Engine (GEE). In addition, all preprocessing steps for the remote sensing images were implemented in GEE. An artificial neural network (ANN) has been used to determine impervious surfaces. After training and testing the proposed network on Landsat image datasets, the precision, accuracy, recall, and F1-score values were 0.79, 0.98, 0.87, and 0.82, respectively. The experimental results show that this method is efficient and precise for mapping the impervious surfaces of Marrakesh city.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_24-Impervious_Surface_Prediction_in_Marrakech_City.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendation System based on User Trust and Ratings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130723</link>
        <id>10.14569/IJACSA.2022.0130723</id>
        <doi>10.14569/IJACSA.2022.0130723</doi>
        <lastModDate>2022-07-31T09:30:19.3470000+00:00</lastModDate>
        
        <creator>Mohamed TIMMI</creator>
        
        <creator>Loubna LAAOUINA</creator>
        
        <creator>Adil JEGHAL</creator>
        
        <creator>Said EL GAROUANI</creator>
        
        <creator>Ali YAHYAOUY</creator>
        
        <subject>Recommendation systems; collaborative filtering; trust; social networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Recommendation systems aim to provide users with relevant information in a user-friendly way. They are techniques based on individuals’ contributions in rating items. The main principle of recommendation systems is that they are useful for users sharing the same interests. Furthermore, collaborative filtering is a widely used technique for creating recommender systems and has been successfully applied in many programs. However, collaborative filtering faces multiple issues that affect recommendation accuracy, including data sparsity and cold start, which are caused by the lack of user feedback. To address these issues, a new method called “GlotMF” is suggested to enhance the recommendation accuracy of the collaborative filtering method. Trust-based social networks are also used, by modelling the users&#39; preferences and taking different user situations into account. The experimental results based on real data sets show that the proposed method performs better than trust-based recommendation approaches in terms of prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_23-Recommendation_System_based_on_User_Trust_and_Ratings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake Face Generator: Generating Fake Human Faces using GAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130721</link>
        <id>10.14569/IJACSA.2022.0130721</id>
        <doi>10.14569/IJACSA.2022.0130721</doi>
        <lastModDate>2022-07-31T09:30:19.3170000+00:00</lastModDate>
        
        <creator>Md. Mahiuddin</creator>
        
        <creator>Md. Khaliluzzaman</creator>
        
        <creator>Md. Shahnur Azad Chowdhury</creator>
        
        <creator>Muhammed Nazmul Arefin</creator>
        
        <subject>Generative adversarial network; fake human faces; generator; discriminator; CelebA dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>As machine learning is growing rapidly, creating art and images by machine is one of the most trending topics at present, with enormous applications in our day-to-day life. Various researchers have studied this topic and tried to implement various ideas, most of which are based on CNNs or other tools. The aim of our work is to generate comparatively better real-life fake human faces with low computational power and without any external image classifier, rather than removing all the noise and maximizing stabilization, which was the main challenge of previous related works. To that end, in this paper we implement a generative adversarial network with two fully connected sequential models, one as a generator and another as a discriminator. Our generator is trainable; it receives random data and tries to create fake human faces. Our discriminator, in turn, receives data from the CelebA dataset, tries to detect whether the images generated by the generator are fake or real, and gives feedback to the generator. Based on this feedback, the generator improves its model and tries to generate more realistic images.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_21-Fake_Face_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Adapting Metamodelling Technique for an Online Social Networks Forensic Investigation (OSNFI) Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130722</link>
        <id>10.14569/IJACSA.2022.0130722</id>
        <doi>10.14569/IJACSA.2022.0130722</doi>
        <lastModDate>2022-07-31T09:30:19.3170000+00:00</lastModDate>
        
        <creator>Aliyu Musa Bade</creator>
        
        <creator>Siti Hajar Othman</creator>
        
        <subject>Online social networks forensic; online social networks forensic investigation; metamodelling; metamodel; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>With the ease of use of smart devices, most data is now kept and exchanged in digital forms such as images, diaries, calendars, movies, and so on. Digital forensic investigation is a new field that emerged because criminals extensively use computers and digital storage devices to commit different types of crimes. To address this issue, a new domain called Online Social Networks Forensics (OSNF) was created to investigate these dynamic crimes perpetrated on social media platforms. Online Social Networks Forensic Investigation (OSNFI) seeks to obtain, organise, investigate, and visualise user information as direct, objective, and fair evidence. Considering the millions of individuals using social media to share and communicate, these platforms are becoming increasingly relevant for criminal investigations. Forensic investigation of online social networks currently faces major problems such as the lack of structured procedures, the lack of unified automated methods, and the lack of a theoretical context. The use of non-uniform and ad hoc forensic techniques and procedures not only reduces the effectiveness of the process but also affects the reliability and credibility of the evidence in criminal proceedings. As a result, this paper provides a method derived from the software engineering domain known as metamodelling, which integrates OSNFI knowledge into an artifact known as a metamodel.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_22-Towards_Adapting_Metamodelling_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation Method for Service-Oriented Architecture Maturity Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130720</link>
        <id>10.14569/IJACSA.2022.0130720</id>
        <doi>10.14569/IJACSA.2022.0130720</doi>
        <lastModDate>2022-07-31T09:30:19.2830000+00:00</lastModDate>
        
        <creator>Mohd Hamdi Irwan Hamzah</creator>
        
        <creator>Ezak Fadzrin Ahmad Shaubari</creator>
        
        <subject>Maturity model; model evaluation; model validation; model verification; service-oriented architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>An SOA maturity model is used to clarify and provide a common definition of SOA inside an organization. The model provides an abstract overview of SOA adoption by characterizing evolutionary levels. However, this study found a lack of evidence on how previous models were evaluated to show that a model conforms to its specification and can be implemented in a real-world environment. Therefore, this study aims to provide an evaluation method for the SOA maturity model through a verification and validation process. The Integrated Adoption Maturity for Service-Oriented Architecture (IAMSOA) model was chosen. Verification was performed through expert review, where the study identifies the experts, determines the verification criteria, and collects and analyzes the feedback, while validation was performed through a case study by identifying the organization, determining the validation criteria, brainstorming, and collecting and analyzing the feedback. The verification results show that the evaluated model is comprehensive, understandable, accurate, and well-organized. Moreover, the validation results reveal that it is feasible and practical to execute in a real environment. Conclusively, this study has successfully evaluated one of the SOA maturity models and shows the verification and validation process in detail, which can be re-enacted in different projects and settings.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_20-An_Evaluation_Method_for_Service_Oriented_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation on the Effects of 2D Animation as a Verbal Apraxia Therapy for Children with Verbal Apraxia of Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130719</link>
        <id>10.14569/IJACSA.2022.0130719</id>
        <doi>10.14569/IJACSA.2022.0130719</doi>
        <lastModDate>2022-07-31T09:30:19.2700000+00:00</lastModDate>
        
        <creator>Muhammad Taufik Hidayat</creator>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Shahril Parumo</creator>
        
        <creator>Nurul Najihah A’bas</creator>
        
        <creator>Muhammad ‘Ammar Muhammad Sani</creator>
        
        <creator>Hilmi Abdul Aziz</creator>
        
        <subject>Childhood apraxia of speech; verbal dyspraxia; speech and language disorder; 2D animation; visual animation treatment; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>This article presents an evaluation of the 2D Animation of Learning Numbers and Letters for Children with Verbal Apraxia. The developed application provides knowledge and encourages children with verbal apraxia to learn about numbers and letters. Experimental testing was conducted to evaluate the usability of the developed application, aimed as a therapy for children of all ages who suffer from this apraxia. Five important evaluation components, namely learnability, usability, accessibility, functionality, and effectiveness, were included in this testing to investigate user engagement and satisfaction with the proposed medical and educational learning system. Online questionnaires were distributed to collect user testing outputs. A total of 33 respondents comprising multimedia designers, practitioners, psychologists, and parents were involved in this survey. The results of the testing indicate that the majority of respondents are satisfied with the outcomes of the 2D animation video. The results presented may facilitate improvements in the teaching syllabus for students with speech and language disorders and produce a great visual animation treatment for users.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_19-Evaluation_on_the_Effects_of_2D_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Semantically Query All (Squerall): A Scalable Framework to Analyze Data from Heterogeneous Sources at Different Levels of Granularity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130718</link>
        <id>10.14569/IJACSA.2022.0130718</id>
        <doi>10.14569/IJACSA.2022.0130718</doi>
        <lastModDate>2022-07-31T09:30:19.2370000+00:00</lastModDate>
        
        <creator>Iqbal Hasan</creator>
        
        <creator>Majid Zaman</creator>
        
        <creator>Sheikh Amir Fayaz</creator>
        
        <creator>Ifra Altaf</creator>
        
        <creator>Muheet Ahmed Butt</creator>
        
        <creator>S.A.M Rizvi</creator>
        
        <subject>Heterogeneous data; data warehouse; big data; presto; spark; squerall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2022.0130718.retraction</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_18-Semantically_Query_All_Squerall.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality, a Method to Achieve Social Acceptance of the Communities Close to Mining Projects: A Scoping Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130717</link>
        <id>10.14569/IJACSA.2022.0130717</id>
        <doi>10.14569/IJACSA.2022.0130717</doi>
        <lastModDate>2022-07-31T09:30:19.2230000+00:00</lastModDate>
        
        <creator>Patricia L&#243;pez-Casaperalta</creator>
        
        <creator>Jeanette Fabiola D&#237;az-Quintanilla</creator>
        
        <creator>Jos&#233; Juli&#225;n Rodr&#237;guez-Delgado</creator>
        
        <creator>Alejandro Marcelo Acosta-Quelopana</creator>
        
        <creator>Aleixandre Brian Duche-P&#233;rez</creator>
        
        <subject>Mining projects; social acceptance; virtual reality; interaction; mining communities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Background: Virtual reality (VR) technology is an effective, interactive, and immersive type of communication, since it produces greater interest and attention in the user, thereby allowing greater understanding and comprehension than more traditional methods. On the other hand, little is known about the application of this novel technology in the context of social acceptance as far as the mining sector is concerned. Our approach and methodology were based on a scoping review (PRISMA-ScR; Daudt et al.; Arksey and O’Malley). The search terms were also planned beforehand, with the aim of carrying out three subsequent screening levels, which included the use of EndNote 20 and the PICO framework. An exhaustive search was carried out in nine databases. We obtained n=2 research articles out of the n=923 initially found, all of which went through the three levels of filtering. The chosen articles were evaluated according to Hawker et al.&#39;s methodological rigor criteria to be included in the review. This scoping review could be the starting point for a series of further investigations that would fill the gap in the literature on this topic, emphasizing experimental articles to confirm the impact of virtual reality technologies on the communities within the sphere of influence of a mining project.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_17-Virtual_Reality_a_Method_to_Achieve_Social_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontological Model based on Machine Learning for Predicting Breast Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130715</link>
        <id>10.14569/IJACSA.2022.0130715</id>
        <doi>10.14569/IJACSA.2022.0130715</doi>
        <lastModDate>2022-07-31T09:30:19.1900000+00:00</lastModDate>
        
        <creator>Hakim El Massari</creator>
        
        <creator>Noreddine Gherabi</creator>
        
        <creator>Sajida Mhammedi</creator>
        
        <creator>Hamza Ghandi</creator>
        
        <creator>Fatima Qanouni</creator>
        
        <creator>Mohamed Bahaj</creator>
        
        <subject>Machine learning; prediction; ontology; semantic web rule language; decision tree; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Breast cancer is mostly a female disease, but it may affect men as well, though at a considerably lower rate. An automated diagnosis system should be built for early detection, because manual breast cancer diagnosis takes a long time. Doctors have lately achieved significant advances in the early identification and treatment of breast cancer in order to decrease the rate of mortality it causes. Researchers, meanwhile, are analysing large amounts of complicated medical data by employing a combination of statistical and machine learning methodologies to assist clinicians in predicting breast cancer. Various machine learning approaches, including ontology-based machine learning methods, have lately played an essential role in medical science by building automated systems that can identify breast cancer. This study examines and evaluates the most popular machine learning algorithms, alongside an ontological model based on machine learning. Among the classification methods investigated were Naive Bayes, Decision Tree, Logistic Regression, Support Vector Machine, Artificial Neural Network, Random Forest, and k-Nearest Neighbours. The dataset utilized has 683 instances and is available for download from the Kaggle website. The findings are assessed using performance measures derived from the confusion matrix, such as F-Measure, Accuracy, Precision, and Recall. According to the results, the ontology model surpassed all the machine learning techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_15-An_Ontological_Model_based_on_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Transport System in VANET using Proxima Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130716</link>
        <id>10.14569/IJACSA.2022.0130716</id>
        <doi>10.14569/IJACSA.2022.0130716</doi>
        <lastModDate>2022-07-31T09:30:19.1900000+00:00</lastModDate>
        
        <creator>Satyanarayana Raju K</creator>
        
        <creator>Selvakumar K</creator>
        
        <subject>Vehicular Ad hoc Network (VANET); intelligent transportation system (ITS); KNN; RF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Vehicular ad hoc networks (VANETs) have no fixed structure. A VANET connects vehicles moving in different directions and transfers useful data between source and destination, forming a small network in which vehicles and other devices behave like nodes. For better communication, a VANET sometimes uses suitable hardware to improve network performance. Reliability is one of the significant tasks, requiring the needful operations and methods based on the conditions at a specific time. To disturb a VANET, an attacker tries to hit the server, causing damage to it. This paper focuses on detecting falsification nodes by analyzing the behavior of the models. An improved intelligent transportation system (ITS) Proxima analysis is introduced to accurately identify falsification nodes. The proposed approach integrates KNN and RF with Proxima analysis. The main aim of Proxima is to analyze the falsification nodes within the network and improve vehicle mobility by delivering data from source to destination without any miscommunication.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_16-An_Intelligent_Transport_System_in_VANET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of Face Mask Reminder Box Technology using Arduino Uno</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130713</link>
        <id>10.14569/IJACSA.2022.0130713</id>
        <doi>10.14569/IJACSA.2022.0130713</doi>
        <lastModDate>2022-07-31T09:30:19.1600000+00:00</lastModDate>
        
        <creator>Chee Ken Nee</creator>
        
        <creator>Rafiza Abdul Razak</creator>
        
        <creator>Muhamad Hariz Bin Muhamad Adnan</creator>
        
        <creator>Wan Fatin Liyana Mutalib</creator>
        
        <creator>Nur Fatirah Roslee</creator>
        
        <creator>Noor Mursheeda Mahyuddin</creator>
        
        <subject>Face mask box; COVID-19; Voice reminder; Arduino; New norm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The World Health Organization (WHO) declared the COVID-19 pandemic on 12 Mar, 2020, due to the growth in the number of cases worldwide. WHO advises wearing a face mask and practicing social distancing, which have played a crucial role in prevention and control measures against the spread of COVID-19. This paper presents the process through which a face mask box is equipped with a voice reminder and a sensor. It is built with an Arduino Uno board to raise awareness by alerting a person with a voice reminder to wear a face mask before going outside. It can be helpful, especially in the pandemic era, in supporting mask-wearing as a new norm of practice.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_13-Design_and_Development_of_Face_Mask_Reminder_Box.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Unusual Event Tracking in Video Sequence using Block Shift Feature Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130714</link>
        <id>10.14569/IJACSA.2022.0130714</id>
        <doi>10.14569/IJACSA.2022.0130714</doi>
        <lastModDate>2022-07-31T09:30:19.1600000+00:00</lastModDate>
        
        <creator>Karanam Sunil Kumar</creator>
        
        <creator>N P Kavya</creator>
        
        <subject>Object detection; tracking; learning models; video sequence analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The area of video technology is rapidly growing owing to advancements in intelligent video systems in sensor operations, higher bandwidth capacity, storage, and high-resolution displays. This has led to the proliferation of video-based computational modeling that performs specific tasks on video sequences to gain more insight from the data. Visual tracking of events is a core component in video surveillance systems that classify and track moving objects to describe their behavioral aspects. The prime motive behind intelligent video systems is to perform efficient video analytics that meets the specific requirements of the user and use-cases. It involves a self-directed paradigm to understand event sequences, reducing the computational burden of characterizing the activities. The study incorporates a block-shift feature algorithm and introduces a novel computational research method for unusual event tracking in video sequences. The formulated approach employs a framework combining operational blocks to compute sequential operations such as block-matching from the dictionary of motion estimations. Before applying the learning model, the subsequent analysis procedure adds a feature lexicon and dominant attributes to make the execution computationally efficient. Further, it uses a sparse non-negative factorization approach to organize the informative details into k possible finite clusters. The event detection outcome from the training datasets of video sequences shows better experimental results than a traditional, highly cited related approach to unusual object detection and tracking.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_14-An_Efficient_Unusual_Event_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application-based Usability Evaluation Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130712</link>
        <id>10.14569/IJACSA.2022.0130712</id>
        <doi>10.14569/IJACSA.2022.0130712</doi>
        <lastModDate>2022-07-31T09:30:19.1300000+00:00</lastModDate>
        
        <creator>Hanaa Bayomi</creator>
        
        <creator>Noura A.Sayed</creator>
        
        <creator>Hesham Hassan</creator>
        
        <creator>Khaled Wassif</creator>
        
        <subject>Usability; human-computer interaction; evaluation; quantitative attributes; testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Testing is one of the vital stages in the software development life cycle (SDLC). Usability testing is an important field that helps end-users use applications easily. Because of the importance of usability testing, a set of metrics has been developed to measure usability by converting the main qualitative usability attributes in ISO into quantitative steps. These give the developer a framework to follow to achieve usability in their applications and provide the tester with a checklist and a tool to measure the usability percentage of an application. The framework provides a set of steps to achieve the usability attributes and answers the question of how each attribute can be measured with the defined steps. The framework achieves a 95% average accuracy on high-rated applications and a 59% average accuracy on low-rated applications. Finally, the framework is implemented in a tool that measures the usability percentage of an application through a checklist and provides a scheme to help the developer achieve the best usability results.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_12-Application_based_Usability_Evaluation_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Instructor Performance using Machine and Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130711</link>
        <id>10.14569/IJACSA.2022.0130711</id>
        <doi>10.14569/IJACSA.2022.0130711</doi>
        <lastModDate>2022-07-31T09:30:19.1130000+00:00</lastModDate>
        
        <creator>Basem S. Abunasser</creator>
        
        <creator>Mohammed Rasheed J. AL-Hiealy</creator>
        
        <creator>Alaa M. Barhoom</creator>
        
        <creator>Abdelbaset R. Almasri</creator>
        
        <creator>Samy S. Abu-Naser</creator>
        
        <subject>Education; deep learning; machine learning; prediction; instructor performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>The quality of instructors’ performance mainly influences the quality of educational services in higher educational institutions. One of the major challenges of higher educational institutions is the accumulated amount of data and how it can be utilized to boost the quality of academic programs. The recent advancements in Artificial Intelligence techniques, including machine and deep learning models, have led to the expansion of practical prediction in various fields. In this paper, a dataset was collected from the UCI Repository, University of California, for the prediction of instructor performance. To determine how effective instructors in higher education systems are, a group of machine and deep learning algorithms was applied to predict instructor performance. The best machine learning algorithm was Extra Trees Regressor with Accuracy (98.78%), Precision (98.78%), Recall (98.78%), and F1-score (98.78%); however, the proposed deep learning algorithm achieved Accuracy (98.89%), Precision (98.91%), Recall (98.94%), and F1-score (98.92%).</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_11-Prediction_of_Instructor_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Imbalanced and Limited Data Labeled for Automated Essay Scoring using Cost Sensitive XGBoost and Pseudo-Labeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130710</link>
        <id>10.14569/IJACSA.2022.0130710</id>
        <doi>10.14569/IJACSA.2022.0130710</doi>
        <lastModDate>2022-07-31T09:30:19.0830000+00:00</lastModDate>
        
        <creator>Marvina Pramularsih</creator>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <subject>Imbalanced data; limited labeled data; automated essay scoring; cost sensitive XGBoost; pseudo-labeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>There are two main problems in forming an Automatic Essay Scoring model: datasets with an imbalanced number of right and wrong answers, and the minimal use of labeled data in model training. The model built around these problems has three main components, namely word representation, Cost-Sensitive XGBoost classification, and the addition of unlabeled data with the Pseudo-Labeling technique. The essay answer data is converted into vectors using the trained fastText word vectors. The classification of unlabeled data is then carried out using the Cost-Sensitive XGBoost method. The data labeled by the classification model is added as training data for forming a new classification model, and the process is carried out iteratively. This research uses the combination of Cost-Sensitive XGBoost classification and Pseudo-Labeling, which is expected to solve both problems. For the 0th iteration, a dataset in which the ratio of the amount of &quot;right&quot; labeled data to the amount of &quot;wrong&quot; labeled data is close to 1 (in other words, a balanced dataset) or greater than 1 produces a model with better performance. Thus, the selection of training data at an early stage must pay attention to this ratio. In addition, the use of the hybrid method on these datasets can save labeled data 56 times over compared to the AdaBoost method. The hybrid model is able to produce an F1-Measure of more than 95.6%, so it can be concluded that the hybrid method, which combines Cost-Sensitive XGBoost classification and Pseudo-Labeling with self-training, is able to overcome the problems of unbalanced datasets and limited labeled data.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_10-Solving_the_Imbalanced_and_Limited_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mono Camera-based Human Skeletal Tracking for Squat Exercise Abnormality Detection using Double Exponential Smoothing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130709</link>
        <id>10.14569/IJACSA.2022.0130709</id>
        <doi>10.14569/IJACSA.2022.0130709</doi>
        <lastModDate>2022-07-31T09:30:19.0670000+00:00</lastModDate>
        
        <creator>Muhammad Nafis Hisham</creator>
        
        <creator>Mohd Fadzil Abu Hassan</creator>
        
        <creator>Norazlin Ibrahim</creator>
        
        <creator>Zalhan Mohd Zin</creator>
        
        <subject>Abnormality movement; double exponential smoothing; skeletal tracking; mediapipe; squat exercise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Human action analysis is an enthralling area of research in artificial intelligence, as it may be used to improve a range of applications, including sports coaching, rehabilitation, and monitoring. Human action analysis may be performed by forecasting the body&#39;s vital posture positions. Human body tracking and action recognition are the two primary components of video-based human action analysis. We present an efficient human tracking model for squat exercises using the open-source MediaPipe technology. The human posture detection model is used to detect and track the vital body joints within the human topology. A series of critical body joint motions are observed and analysed for aberrant body movement patterns while conducting squat workouts. The model is validated using a squat dataset collected from ten healthy people of varying genders and physiques. The incoming data from the model is filtered using the double exponential smoothing method; the Mean Squared Error between the measured and smoothed angles is determined to classify the movement as normal or abnormal. The level smoothing and trend control parameters are 0.8928 and 0.77256, respectively. Six out of ten subjects in the trial were precisely predicted by the model. The mean squared error of the signals obtained under normal and abnormal squat settings is 56.3197 and 29.7857, respectively. Thus, by utilising a simple threshold method, the low-cost camera-based squat movement condition detection model was able to detect abnormality in the workout movement.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_9-Mono_Camera_based_Human_Skeletal_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An SDN-based Decision Tree Detection (DTD) Model for Detecting DDoS Attacks in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130708</link>
        <id>10.14569/IJACSA.2022.0130708</id>
        <doi>10.14569/IJACSA.2022.0130708</doi>
        <lastModDate>2022-07-31T09:30:19.0330000+00:00</lastModDate>
        
        <creator>Jeba Praba. J</creator>
        
        <creator>R. Sridaran</creator>
        
        <subject>Distributed denial of service attack; greedy feature selection; decision tree algorithm; software defined networking; cloud and decision tree detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Detecting Distributed Denial of Service (DDoS) attacks has become a significant security issue for various network technologies. This attack has to be detected to increase the system’s reliability. Though various traditional studies exist, they suffer from data shift issues and limited accuracy. Hence, this study intends to detect DDoS attacks by classifying normal and malicious traffic. The study aims to solve the data shift issues by using the introduced Decision Tree Detection (DTD) model, encompassing a Greedy Feature Selection (GFS) algorithm and a Decision Tree Algorithm (DTA). It also attempts to raise the proposed model’s detection rate (accuracy) above 90%. Various processes are involved in DDoS attack detection. Initially, the gureKddcup dataset is loaded for pre-processing, which is essential for removing noisy data. After this, feature selection is performed to keep only the relevant features, removing the irrelevant data. The data is then fed into the train/test split. Following this, Software Defined Networking (SDN) based DTA is used to classify normal and malicious traffic, which is then given to the trained model for predicting the attack. Performance analysis is undertaken by comparing the proposed model with existing systems in terms of accuracy, MCC (Matthew’s Correlation Coefficient), sensitivity, specificity, error rate, FAR (False Alarm Rate), and AUC (Area under Curve). This analysis is carried out to evaluate the efficacy of the proposed model, which is verified through the results.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_8-An_SDN_based_Decision_Tree_Detection_DTD_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drought Prediction and Validation for Desert Region using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130707</link>
        <id>10.14569/IJACSA.2022.0130707</id>
        <doi>10.14569/IJACSA.2022.0130707</doi>
        <lastModDate>2022-07-31T09:30:19.0200000+00:00</lastModDate>
        
        <creator>Azmat Raja</creator>
        
        <creator>Gopikrishnan T</creator>
        
        <subject>Drought; SPEI; machine learning; water resources; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Drought prediction serves as an early warning for the effective management of water resources to avoid drought impacts. The drought prediction is carried out for arid, semi-arid, sub-humid, and humid climate types in the desert region. The drought is predicted using the Standardized Precipitation Evapotranspiration Index (SPEI). The suitability of machine learning methods such as artificial neural network (ANN), K-Nearest Neighbors (KNN), and Deep Neural Network (DNN) for drought prediction is analyzed. The SPEI is predicted using the aforesaid machine learning methods with the inputs used to calculate SPEI. The predictions are assessed using statistical indicators. The coefficients of determination of ANN, KNN, and DNN are 0.93, 0.83, and 0.91, respectively. The mean squared errors of ANN, KNN, and DNN are 0.065, 0.512, and 0.52, respectively. The mean absolute errors of ANN, KNN, and DNN are 0.001, 0.512, and 0.01, respectively. Based on the results of the statistical indicators and validation, DNN is found suitable for predicting drought in all four climate types of the desert region.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_7-Drought_Prediction_and_Validation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Annotated Corpus with Negation and Speculation in Arabic Review Domain: NSAR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130706</link>
        <id>10.14569/IJACSA.2022.0130706</id>
        <doi>10.14569/IJACSA.2022.0130706</doi>
        <lastModDate>2022-07-31T09:30:18.9870000+00:00</lastModDate>
        
        <creator>Ahmed Mahany</creator>
        
        <creator>Heba Khaled</creator>
        
        <creator>Nouh Sabri Elmitwally</creator>
        
        <creator>Naif Aljohani</creator>
        
        <creator>Said Ghoniemy</creator>
        
        <subject>Arabic NLP; negation; speculation; uncertainty; annotation; annotation guidelines; corpus; review domain; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Negation and speculation detection are critical for Natural Language Processing (NLP) tasks, such as sentiment analysis, information retrieval, and machine translation. This paper presents the first Arabic corpus in the review domain annotated with negation and speculation. The Negation and Speculation Arabic Review (NSAR) corpus consists of 3K randomly selected review sentences from three well-known and benchmarked Arabic corpora. It contains reviews from different categories, including books, hotels, restaurants, and other products written in various Arabic dialects. The negation and speculation keywords have been annotated along with their linguistic scope based on the annotation guidelines reviewed by an expert linguist. The inter-annotator agreement between two independent annotators, Arabic native speakers, is measured using the Cohen’s Kappa coefficients with values of 95 and 80 for negation and speculation, respectively. Furthermore, 29% of this corpus includes at least one negation instance, while only 4% of this corpus contains speculative content. Therefore, the Arabic reviews focus more on negation structures rather than speculation. This corpus will be available for the Arabic research community to handle these critical phenomena.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_6-Annotated_Corpus_with_Negation_and_Speculation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid-Heuristic Approach for Vertex p-Median Location Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130705</link>
        <id>10.14569/IJACSA.2022.0130705</id>
        <doi>10.14569/IJACSA.2022.0130705</doi>
        <lastModDate>2022-07-31T09:30:18.9730000+00:00</lastModDate>
        
        <creator>Hassan Mohamed Rabie</creator>
        
        <creator>Said Salhi</creator>
        
        <subject>P-median; discrete location problems; myopic heuristic; neighborhood heuristic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>In this paper, a new hybridization of the Myopic and Neighborhood approaches is proposed to solve large-size vertex p-median location problems. The effectiveness and efficiency of our approach are demonstrated empirically through an intensive computational experiment on large-size instances taken from the TSPLib and BIRCH datasets, with the number of nodes varying from 734 to 9,976 for the former and from 9,600 to 20,000 for the latter. The results show that the new approach, though relatively simple, yields better solutions compared to those in the literature. This demonstrates that a simpler approach that takes into account the advantages of other methods can lead to promising outcomes and has the potential to be adopted in other combinatorial optimization problems.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_5-A_New_Hybrid_Heuristic_Approach_for_Vertex_p_Median.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Profiling Method with Big Data based on BDT and Clustering for Sales Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130704</link>
        <id>10.14569/IJACSA.2022.0130704</id>
        <doi>10.14569/IJACSA.2022.0130704</doi>
        <lastModDate>2022-07-31T09:30:18.9570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Zhan Ming Ming</creator>
        
        <creator>Ikuya Fujikawa</creator>
        
        <creator>Yusuke Nakagawa</creator>
        
        <creator>Tatsuya Momozaki</creator>
        
        <creator>Sayuri Ogawa</creator>
        
        <subject>Customer profiling; binary decision tree: BDT; corporate social responsibility (CSR); k-means clustering; sales prediction; valuable customer findings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>We propose a method for customer profiling based on Binary Decision Tree: BDT and k-means clustering with customer-related big data, for sales prediction, valuable customer identification, and customer relation improvement. Through customer-related big data, not only sales prediction but also the categorization of customers and Corporate Social Responsibility (CSR) can be addressed. This paper describes a method for these purposes. Examples of the analyzed data relating to sales prediction, valuable customer identification, and customer relation improvement are shown here. It is found that the proposed method enables sales prediction and valuable customer identification with acceptable errors.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_4-Customer_Profiling_Method_with_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personality Classification Model of Social Network Profiles based on their Activities and Contents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130703</link>
        <id>10.14569/IJACSA.2022.0130703</id>
        <doi>10.14569/IJACSA.2022.0130703</doi>
        <lastModDate>2022-07-31T09:30:18.9400000+00:00</lastModDate>
        
        <creator>Mervat Ragab Bakry</creator>
        
        <creator>Mona Mohamed Nasr</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <subject>Psychological personality; machine learning techniques; big-five; LinearSVC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Social networks have become an important part of everyday life, especially as the latest technologies such as smartphones, tablets, and laptops have become widespread. Individuals spend a lot of time on social media and express their feelings and opinions through statuses, comments, and updates, which can be a way to understand and classify their personalities. Personalities in psychological science are divided into five classes according to the Big-five model (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism). This model shows the key features with their weights for each personality. In this paper, a proposed model is developed for detecting personality features from users’ activities in social networks. In this model, machine learning techniques are used to predict the personalities with a score for each Big-five factor, sorting them in descending order. The personality classification model will be useful in developing a better understanding of the user profile and specifically targeting users with appropriate advertising. Any social media network user&#39;s personality can be predicted by using their posts and status updates. The experimental results of the model in this study provide an enhancement because it can predict the precise score of one user on each factor of the Big-five. The proposed model was tested on a dataset extracted from Facebook and manually classified by experts, and it achieved 89.37% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_3-Personality_Classification_Model_of_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Premature Ventricular Contractions using 12-lead Dynamic ECG based on Squeeze-Excitation Residual Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130702</link>
        <id>10.14569/IJACSA.2022.0130702</id>
        <doi>10.14569/IJACSA.2022.0130702</doi>
        <lastModDate>2022-07-31T09:30:18.9270000+00:00</lastModDate>
        
        <creator>Duan Li</creator>
        
        <creator>Tingting Sun</creator>
        
        <creator>Yibai Xue</creator>
        
        <creator>Yilin Xie</creator>
        
        <creator>Xiaolei Chen</creator>
        
        <creator>Jiaofen Nan</creator>
        
        <subject>Dynamic ECG; squeeze-excitation; residual network; premature ventricular contraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Premature ventricular contraction (PVC) is a very common arrhythmia that can originate in any part of the ventricle and is one of the important causes of sudden cardiac death. Timely and rapid detection of PVC on dynamic electrocardiogram (ECG) recordings of patients with cardiovascular diseases is of great significance for clinical diagnosis. Furthermore, it can facilitate the planning and execution of radiofrequency ablation. However, dynamic ECGs are easily contaminated by various noises, and their morphological characteristics vary significantly across patients. Although deep learning methods have achieved outstanding performance in automatic ECG recognition, some limitations remain, such as overfitting and vanishing or exploding gradients in deep networks. Therefore, a residual module is constructed using the squeeze-excitation method to alleviate these problems. A 20-layer squeeze-excitation residual network (SE-ResNet) containing multiple squeeze-excitation modules was designed for real-time PVC detection on 12-lead dynamic ECG. The algorithm was evaluated using the dynamic 12-lead ECGs in the INCART database (168,379 heartbeats in total). The experimental results show that the test accuracy of the proposed method is 98.71%, and the specificity and sensitivity for PVC are 99.12% and 99.59%, respectively. Under the same dataset and experimental platform, the average recognition accuracy of our proposed method is increased by 0.73%, 1.55%, 2.9%, and 1.65% compared with the results obtained by CNN, Inception, AlexNet, and a deep multilayer perceptron, respectively. The proposed scheme provides a new method for real-time detection of PVC on dynamic 12-lead ECGs. The experimental results show that the proposed method outperforms state-of-the-art methods and has good potential for clinical applications.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_2-Detection_of_Premature_Ventricular_Contractions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Progress and Trend of the Machine Learning based on Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130701</link>
        <id>10.14569/IJACSA.2022.0130701</id>
        <doi>10.14569/IJACSA.2022.0130701</doi>
        <lastModDate>2022-07-31T09:30:18.8330000+00:00</lastModDate>
        
        <creator>Chen Xiao Yu</creator>
        
        <creator>Zhang Xiao Min</creator>
        
        <creator>Song Ying</creator>
        
        <creator>Gao Feng</creator>
        
        <subject>Machine learning; fusion; ensemble learning; federated learning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(7), 2022</description>
        <description>Machine learning is widely used in data processing, including data classification, data regression, data mining, and so on; however, a single type of machine learning technique often cannot meet the requirements of data processing. In recent years, fusion-based machine learning has become an important approach to improving data processing, yet corresponding survey studies remain relatively limited. In this study, we summarize and compare different types of fusion machine learning, such as ensemble learning, federated learning, and transfer learning, from the perspectives of classification, principles, and characteristics, and we explore the research development trend in order to provide an effective reference for subsequent related research and applications. Furthermore, as an application of fusion machine learning, we also conduct a study on modeling optimization for car-service complaint text classification.</description>
        <description>http://thesai.org/Downloads/Volume13No7/Paper_1-Research_Progress_and_Trend_of_the_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Spatial-Temporal Graph Model for Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306112</link>
        <id>10.14569/IJACSA.2022.01306112</id>
        <doi>10.14569/IJACSA.2022.01306112</doi>
        <lastModDate>2022-06-30T12:23:39.8200000+00:00</lastModDate>
        
        <creator>Ashwin Senthilkumar</creator>
        
        <creator>Mihir Gupte</creator>
        
        <creator>Shridevi S</creator>
        
        <subject>Spatial temporal graph convolution network; disease prediction; graph neural network; graph convolutional network; deep learning; knowledge graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Advances in the field of neural networks, especially Graph Neural Networks (GNNs), have helped many fields, mainly chemistry and biology, where recognizing and utilizing hidden patterns is of great importance. In Graph Neural Networks, input graph structures are exploited by using the dependencies formed between nodes. Data can also be transformed into graphs, which can then be used in such models. In this paper, a method is proposed to make appropriate transformations and then use the resulting structure to predict diseases. Current disease-prediction models do not fully use the temporal features associated with diseases, such as the order in which symptoms occur and their significance. In the proposed work, the presented model takes the temporal features of a disease into account and represents them as a graph to fully utilize the power of Graph Neural Networks and spatial-temporal models, which consider underlying structures that change over time. The model can be used efficiently to predict the most likely disease given a set of symptoms as input. The accuracy of the algorithm is determined by its performance on the given dataset. The proposed model is compared with existing baseline models and proves to be more promising for disease prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_112-Dynamic_Spatial_Temporal_Graph_Model_for_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Password Systems: Problems and Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306113</link>
        <id>10.14569/IJACSA.2022.01306113</id>
        <doi>10.14569/IJACSA.2022.01306113</doi>
        <lastModDate>2022-06-30T12:23:39.8200000+00:00</lastModDate>
        
        <creator>Lanfranco Lopriore</creator>
        
        <subject>Access authorization; key; password; revocation; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In a security environment featuring subjects and objects, we consider an alternative to the classical password paradigm. In this alternative, a key includes a password, an object identifier, and an authorization. A master password is associated with each object. A key is valid if the password in that key descends from the master password by using a validity relation expressed in terms of a symmetric-key algorithm. We analyse a number of security problems. For each problem, a solution is presented and discussed. In certain cases, extensions to the original key paradigm are introduced. The problems considered include the revocation of access authorizations; bounded keys expressing limitations on the number of iterated utilizations of the same key to access the corresponding object; repositories, which are objects aimed at storing keys, possibly organized into hierarchical structures; and the merging of two keys into a single key featuring a composite authorization that includes the access rights in the two keys.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_113-Password_Systems_Problems_and_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Palm Trees Diseases using Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306111</link>
        <id>10.14569/IJACSA.2022.01306111</id>
        <doi>10.14569/IJACSA.2022.01306111</doi>
        <lastModDate>2022-06-30T12:23:39.8030000+00:00</lastModDate>
        
        <creator>Marwan Abu-zanona</creator>
        
        <creator>Said Elaiwat</creator>
        
        <creator>Shayma’a Younis</creator>
        
        <creator>Nisreen Innab</creator>
        
        <creator>M. M. Kamruzzaman</creator>
        
        <subject>Palm trees diseases; convolutional neural networks; mobileNet; VGG-16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The palm tree is considered one of the most durable trees, and it occupies an advanced position as one of the most famous and most important trees planted in different regions around the world, with many uses and a number of benefits. In recent years, date palms have been exposed to a large number of diseases. These diseases differ in their symptoms and causes, and sometimes overlap, making diagnosis with the naked eye difficult, even for an expert in this field. This paper proposes a CNN model to detect and classify four common diseases threatening palms today (bacterial leaf blight, brown spots, leaf smut, and white scale) in addition to healthy leaves. The proposed CNN structure includes four convolutional layers for feature extraction followed by a fully connected layer for classification. For performance evaluation, we investigate the performance of the proposed model and compare it to other CNN structures, VGG-16 and MobileNet, using four evaluation metrics: accuracy, precision, recall, and F1 score. Our proposed model achieves a 99.10% accuracy rate, while VGG-16 and MobileNet achieve 99.35% and 99.56% accuracy rates, respectively. In general, the performance of our model and the other models is very close, with a minor advantage of MobileNet over the others. In contrast, our model is characterized by its simplicity and shows low computational training time compared to the others.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_111-Classification_of_Palm_Trees_Diseases_using_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Role of Text Preprocessing in BERT Embedding-based DNNs for Classifying Informal Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306109</link>
        <id>10.14569/IJACSA.2022.01306109</id>
        <doi>10.14569/IJACSA.2022.01306109</doi>
        <lastModDate>2022-06-30T12:23:39.7870000+00:00</lastModDate>
        
        <creator>Aliyah Kurniasih</creator>
        
        <creator>Lindung Parningotan Manik</creator>
        
        <subject>Natural language processing; bert embeddings; deep neural network; text preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Due to highly unstructured and noisy data, analyzing society reports in written texts is very challenging. Classifying informal text data is still considered a difficult task in natural language processing, since the texts may contain abbreviated words, repeated characters, typos, slang, et cetera. Therefore, text preprocessing is commonly performed to remove the noise and make the texts more structured. However, we argue that most preprocessing tasks are no longer required if a suitable word embedding approach and deep neural network (DNN) architecture are chosen. This study investigated the effects of text preprocessing in fine-tuning a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model using various DNN architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), convolutional neural network (CNN), and gated recurrent unit (GRU). Various experiments were conducted using numerous learning rates and batch sizes. As a result, text preprocessing had insignificant effects on most models, such as LSTM, Bi-LSTM, and CNN. Moreover, the combination of BERT embeddings and CNN produced the best classification performance.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_109-On_the_Role_of_Text_Preprocessing_in_BERT_Embedding_based_DNNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VIHS with ROTR Technique for Enhanced Light-Weighted Cryptographic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306110</link>
        <id>10.14569/IJACSA.2022.01306110</id>
        <doi>10.14569/IJACSA.2022.01306110</doi>
        <lastModDate>2022-06-30T12:23:39.7870000+00:00</lastModDate>
        
        <creator>Sanjeev Kumar A N</creator>
        
        <creator>Ramesh Naik B</creator>
        
        <subject>Cryptography – partial pseudo-random based hashing technique; logical to sequential VIHS; system encrypt/decrypt data; dynamic system register; bypass parallel processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Developing a bypass parallel processing block is one of the emerging and exciting research areas for system encrypt/decrypt applications. A partial pseudo-random-based hashing VIHS is the most suitable methodology for designing the encrypt/decrypt block in cryptography. For this purpose, various VIHS and register techniques have been developed to process storage system data. However, they are limited by reduced efficiency, increased computational complexity, high area consumption, and high cost. Thus, this research develops a novel dynamic system register with hashing and an optimal hash signature design to process the system's encrypted/decrypted data. The main intention of this paper is to analyze the transfer characteristics of the current based on the pseudo-differential pair for proficient system detection. Then, a system window can be created and adjusted to obtain an optimized power flow with low sensitivity to data loss. The major stages involved in the proposed block design are the register, the partition design, and the VIHS design. The dynamic system register is designed first to reach a fast decision and to enable a low input-referred offset value. Then, the partition is formed from the output of the register, and the VIHS is used to produce highly proportional logical work. During the performance evaluation, various measures are utilized to analyze the performance of the proposed dynamic system register-based hashing with the optimal hash signature design, and the estimated results are compared with those of existing techniques to prove its efficiency.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_110-VIHS_with_ROTR_Technique_for_Enhanced_Light_Weighted_Cryptographic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Community-Supported Technologies and Software Developments Concepts by K-means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306108</link>
        <id>10.14569/IJACSA.2022.01306108</id>
        <doi>10.14569/IJACSA.2022.01306108</doi>
        <lastModDate>2022-06-30T12:23:39.7730000+00:00</lastModDate>
        
        <creator>Farag Almansoury</creator>
        
        <creator>Segla Kpodjedo</creator>
        
        <creator>Ghizlane El Boussaidi</creator>
        
        <subject>Stack overflow; unsupervised machine learning; k-means clustering; empirical study; machine learning; random forest; software development; Java; classification; community support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Working with technologies that have community support is one of the most important factors in software development. Software developers often face difficulties during software development, and community support from other software developers helps them significantly. This paper presents an approach based on the K-means clustering technique to identify the level of community support for software technologies and development concepts using Stack Overflow (SO) discussion forums. To test the approach, a case study was performed by gathering data from SO and preparing a dataset that contains over a million Java developers' questions. Then, K-means clustering was applied to identify the community support levels. The goal is to find the best features that group community-supported software technologies and development concepts and to identify the number of groups that determine the community support levels. Statistical error, clustering, and classification evaluation metrics were applied. The results indicate that the best features for formulating community-supported technology and development concept levels are Failure Rate and Wait Time. The results show that the approach identifies two groups of community support and development concept levels based on the best silhouette index value of 97%. According to the results, the majority of Java technologies and development concepts are labeled as less community-supported (Cluster 2). A Random Forest classifier was applied to indirectly evaluate the approach by detecting the identified community support class. The results show that the RF classifier performs well, with a high accuracy of 99.49%, which indicates that the identified groups improve the performance of the classifier. The approach can be utilized to assist software developers and researchers in utilizing the SO platform to develop SO-based recommendation systems.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_108-Identifying_Community_Supported_Technologies_and_Software_Developments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An E2ED-based Approach to Custom Robot Navigation and Localization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306107</link>
        <id>10.14569/IJACSA.2022.01306107</id>
        <doi>10.14569/IJACSA.2022.01306107</doi>
        <lastModDate>2022-06-30T12:23:39.7570000+00:00</lastModDate>
        
        <creator>Andres Moreno</creator>
        
        <creator>Daniel Paez</creator>
        
        <creator>Fredy Martinez</creator>
        
        <subject>End-to-End design; localization; navigation; path planning; robotics; SLAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Simultaneous localization and mapping, or SLAM, is a basic strategy used with robots and autonomous vehicles to identify unknown environments. It has attracted great attention in robotics due to its importance in the development of motion planning schemes for unknown and dynamic environments, which are close to the real application cases of a robot. This is why, in parallel with research, such schemes are also important in specialized robotics training processes. However, access to robotic platforms and laboratories is often complex and costly, with high demands on time and resources, particularly for small research centers. A more efficient and affordable approach to working with autonomous algorithms and motion planning schemes is often the use of the ROS-Gazebo simulator, which allows high integration with customized non-commercial robots and the possibility of an end-to-end design (E2ED) solution. This research adopts this approach as a training and research strategy with our ARMOS TurtleBot robotic platform, creating an environment for working with navigation algorithms in localization, mapping, and path planning tasks. This paper shows the integration of ROS into the ARMOS TurtleBot project and the design of several ROS-based subsystems to improve interaction in the development of service robot tasks. The project's source code is available to the research community.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_107-An_E2ED_based_Approach_to_Custom_Robot_Navigation_and_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Quality of Water According to a Random Forest Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306105</link>
        <id>10.14569/IJACSA.2022.01306105</id>
        <doi>10.14569/IJACSA.2022.01306105</doi>
        <lastModDate>2022-06-30T12:23:39.7400000+00:00</lastModDate>
        
        <creator>Shahd Maadi Alomani</creator>
        
        <creator>Najd Ibrahim Alhawiti</creator>
        
        <creator>A’aeshah Alhakamy</creator>
        
        <subject>Big data; machine learning; classification; random forest; water quality; PySpark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Potable or drinking water is a daily life necessity for humans. The safety of this water is a concern in many regions around the world, since polluted waters are increasing and causing the spread of disease among populations. Continuous management and evaluation of water that is meant for drinking is essential and must be taken seriously. Often, the quality of water is evaluated through regular laboratory testing and analysis, which can be tiresome and time-consuming. On the other hand, advanced technologies using big data with the help of machine learning can produce better results for potability evaluation. For this reason, several studies have been conducted on predicting the quality of water and on the factors and classifications that affect the prediction model. In this study, a random forest model was developed using PySpark classification to predict the potability of river water, relying on ten different features: pH, hardness, presence of solids, presence of chloramines, presence of sulfate, conductivity, organic carbon, trihalomethanes, turbidity, and finally potability. The developed model was able to predict the water potability classification with an accuracy of 1.0 and an F1-score of 1.0.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_105-Prediction_of_Quality_of_Water_According_to_a_Random_Forest_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality Platform for Sustainable Road Education among Users of Urban Mobility in Cuenca, Ecuador</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306106</link>
        <id>10.14569/IJACSA.2022.01306106</id>
        <doi>10.14569/IJACSA.2022.01306106</doi>
        <lastModDate>2022-06-30T12:23:39.7400000+00:00</lastModDate>
        
        <creator>Gabriel A. Leon-Paredes</creator>
        
        <creator>Omar G. Bravo-Quezada</creator>
        
        <creator>Erwin J. Sacoto-Cabrera</creator>
        
        <creator>Wilson F. Calle-Siavichay</creator>
        
        <creator>Ledys L. Jimenez-Gonzalez</creator>
        
        <creator>Juan Aguirre-Benalcazar</creator>
        
        <subject>Virtual reality; road safety education; virtual scenarios; serious games; educational experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>A traffic accident is an unforeseen event beyond the control of the people involved, which can produce bodily, functional, or organic injuries, leading to death or disability in the worst cases. According to the Empresa Pública Municipal de Movilidad, Tránsito y Transporte de Cuenca (EMOV-EP), of the total accidents recorded in 2021, 24.97% were due to ignoring traffic signs, 21.11% due to not paying attention to traffic, and 16.94% due to driving under the influence of alcohol. The EMOV-EP is responsible for the regulation of human mobility. Thus, the EMOV-EP, in conjunction with the Universidad Politécnica Salesiana (UPS), has introduced the following research question: How can a road safety education strategy, supported by Information and Communication Technologies (ICTs), be developed to help improve the behavior of citizens, increase their knowledge of traffic laws and regulations, and thus reduce the number of accidents in the city of Cuenca? In this paper, we present the development of a Virtual Reality (VR) platform designed for road safety education. The platform is composed of a Web system and four VR systems (games) designed for four common causes of accidents (drunk drivers, high-speed drivers, cyclists riding in bicycle lanes, and users of the tram transport system), using a serious games approach and Oculus Rift/Quest technology. We have found that more than 80% of users had a very good experience playing and learning through the VR systems. Hence, this virtual reality platform constitutes a technological proposal with social impact because it creates an entertainment environment that can raise awareness among citizens, thereby strengthening road safety education and reducing the number of accidents in the city of Cuenca.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_106-Virtual_Reality_Platform_for_Sustainable_Road_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Domain Adaptation using Maximum Mean Covariance Discrepancy and Variational Autoencoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306104</link>
        <id>10.14569/IJACSA.2022.01306104</id>
        <doi>10.14569/IJACSA.2022.01306104</doi>
        <lastModDate>2022-06-30T12:23:39.7230000+00:00</lastModDate>
        
        <creator>Fabian Barreto</creator>
        
        <creator>Jignesh Sarvaiya</creator>
        
        <creator>Suprava Patnaik</creator>
        
        <creator>Sushilkumar Yadav</creator>
        
        <subject>Deep learning; domain adaptation; face recognition; maximum mean covariance discrepancy; transfer learning; variational autoencoders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Face recognition has progressed tremendously from its initial use of holistic learning models to hand-crafted, shallow, and deep learning models. DeepFace, a nine-layer Deep Convolutional Neural Network (DCNN), reached near-human performance on unconstrained face recognition for the Labeled Faces in the Wild (LFW) dataset. These models performed very well on benchmark datasets, but their performance sometimes deteriorated in real-world applications. The problem arose when there was a domain shift due to different distribution spaces of the training and testing models. A few researchers looked at Unsupervised Domain Adaptation (UDA) to find domain-invariant feature spaces. They tried to minimize the domain discrepancy using a static loss of maximum mean discrepancy (MMD). From MMD, researchers delved into the higher-order statistics of maximum covariance discrepancy (MCD). MMD and MCD were combined to obtain the maximum mean and covariance discrepancy (MMCD), which captures more information than MMD alone. We use a Variational Autoencoder (VAE) with joint mean and covariance discrepancy to offer a solution for domain adaptation. The proposed MMCD-VAE model uses the VAE to measure the discrepancy in the spread of variance around the mean value and uses MMCD to measure the directional discrepancy in the variance. The analysis was done using the TinyFace benchmark dataset and the Bollywood Celebrities dataset. Three objective image quality parameters, namely SSIM, PieAPP, and SIFT feature matching, demonstrate the superiority of MMCD-VAE over the conventional KL-VAE model. MMCD-VAE shows an 18% improvement in SSIM and a remarkable improvement in the perceptual quality of the image over the conventional KL-VAE model.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_104-Unsupervised_Domain_Adaptation_using_Maximum_Mean_Covariance_Discrepancy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking of Motion Planning Algorithms with Real-time 3D Occupancy Grid Map for an Agricultural Robotic Manipulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306103</link>
        <id>10.14569/IJACSA.2022.01306103</id>
        <doi>10.14569/IJACSA.2022.01306103</doi>
        <lastModDate>2022-06-30T12:23:39.7100000+00:00</lastModDate>
        
        <creator>Seyed Abdollah Vaghefi</creator>
        
        <creator>Mohd Faisal Ibrahim</creator>
        
        <creator>Mohd Hairi Mohd Zaman</creator>
        
        <subject>Motion planning; agricultural; harvesting; robot manipulator; benchmarking; oil palm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The performance evaluation of motion planning algorithms for agricultural robotic manipulators is commonly performed via benchmarking platforms. However, creating a realistic benchmarking scene that constrains the motion planning algorithms with the characteristics of a real-world environment has always been a challenge worthy of research. In this paper, we present a lab-setup benchmarking platform to evaluate Open Motion Planning Library (OMPL) motion planners for the application of a robotic harvester of a palm-like tree using a real-time 3D occupancy grid map. First, three motion problems were defined with different levels of complexity based on a real oil palm fruit harvesting task. To achieve reliable outcomes, the benchmarking scene was modeled by converting point cloud data from a stereo-depth sensor into a 3D occupancy grid map using the Octomap algorithm. Then the benchmarking was performed, all within a real-time process. According to the results, a fair performance evaluation was achieved by modeling a realistic benchmarking scene, which can help in choosing a high-performing algorithm and efficiently conducting such harvesting tasks in real practice.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_103-Benchmarking_of_Motion_Planning_Algorithms_with_Real_time_3D_Occupancy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Web System: Prevent Fraud Cases in Electronic Transactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306102</link>
        <id>10.14569/IJACSA.2022.01306102</id>
        <doi>10.14569/IJACSA.2022.01306102</doi>
        <lastModDate>2022-06-30T12:23:39.7100000+00:00</lastModDate>
        
        <creator>Edwin Kcomt Ponce</creator>
        
        <creator>Katherine Escobedo Sanchez</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Artificial intelligence; e-commerce; fraud; optical character recognition; scrum; social networks; web system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The purpose of this project is to prevent cases of fraud in person-to-person e-commerce conducted through social networks. The Scrum methodology was used so that the project could be carried out in an agile and flexible way, adapting to the changes that could arise along the way. The technological tools that made this project possible were SQL Server, C++, Visual Studio, and Marvel App, the latter for prototype design. In addition, there was the support of an artificial intelligence software known as Optical Character Recognition, which allowed the document recognition process to be completed. The social network Facebook was also relevant to the development process, since the dataset for training the system was obtained from there, guaranteeing its functionality. The results obtained benefit both parties, sellers/suppliers and consumers, reducing the impact of fraud cases and guaranteeing safer online operations. In addition, a validation was carried out by experts in the development of web applications, taking usability, feasibility, scalability, innovation, and technology as criteria. The system obtained approval on all criteria, with a total mean value of 2.76.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_102-Implementation_of_a_Web_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Mobile Application using Augmented Reality: The Case of Children with Learning Disabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306101</link>
        <id>10.14569/IJACSA.2022.01306101</id>
        <doi>10.14569/IJACSA.2022.01306101</doi>
        <lastModDate>2022-06-30T12:23:39.6930000+00:00</lastModDate>
        
        <creator>Misael Lazo-Amado</creator>
        
        <creator>Leoncio Cueva-Ruiz</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>App Augmented Class; design thinking; Marvel App; learning disorder; TinkerCad</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Children with learning disorders face several difficulties in learning correctly; in many cases they experience more stress because they do not understand the subjects proposed by the teacher. The aim of this research is to propose an innovative plan to design a mobile application for the treatment of learning disabilities using augmented reality in primary education. We used the Design Thinking methodology, which has five phases (empathize, define, ideate, prototype, and test) and facilitates identifying the problems in order to find solutions to them. For the prototype we used tools such as Marvel App, which is responsible for the layout of the mobile application; TinkerCad, which allows us to design the 3D models of the educational games; and finally App Augmented Class, to create the augmented reality model. The results were obtained through a survey about the prototype, identifying its acceptance by parents and the usefulness of this idea for their children with learning disabilities, with 76% considering the prototype ideal for children. In addition, the prototype was validated by five experts, resulting in 85.4% acceptance. The research concludes that a good design was achieved for a solution that helps children with learning disabilities gain a better understanding and be free from stress.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_101-Designing_a_Mobile_Application_using_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Influence of De-hazing Methods on Vehicle Detection in Aerial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01306100</link>
        <id>10.14569/IJACSA.2022.01306100</id>
        <doi>10.14569/IJACSA.2022.01306100</doi>
        <lastModDate>2022-06-30T12:23:39.6770000+00:00</lastModDate>
        
        <creator>Khang Nguyen</creator>
        
        <creator>Phuc Nguyen</creator>
        
        <creator>Doanh C. Bui</creator>
        
        <creator>Minh Tran</creator>
        
        <creator>Nguyen D. Vo</creator>
        
        <subject>Foggy weather; vehicle detection; DWGAN; two-branch; YOLOv3; sparse R-CNN; deformable DETR; cascade R-CNN; CrossDet; adverse weather</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In recent years, object detection from aerial imagery in adverse weather, especially fog, has been challenging. In this study, we conduct an empirical experiment using two de-hazing methods, DW-GAN and Two-Branch, to remove fog, then evaluate the detection performance of six advanced object detectors belonging to four main categories (two-stage, one-stage, anchor-free, and end-to-end) on original and de-hazed aerial images to find the most suitable solution for vehicle detection in foggy weather. We use the UIT-DroneFog dataset, a challenging dataset that includes many small, dense objects captured at various altitudes, as the benchmark to evaluate the effectiveness of the approaches. After the experiments, we observe that each de-hazing method has a different impact on the six experimental detectors.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_100-Analysis_of_the_Influence_of_De_hazing_Methods_on_Vehicle_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emergency Decision Model by Combining Preference Relations with Trapezoidal Pythagorean Fuzzy Probabilistic Linguistic Priority Weighted Averaging PROMETHEE Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130699</link>
        <id>10.14569/IJACSA.2022.0130699</id>
        <doi>10.14569/IJACSA.2022.0130699</doi>
        <lastModDate>2022-06-30T12:23:39.6770000+00:00</lastModDate>
        
        <creator>Xiao Yue</creator>
        
        <creator>Li jianhui</creator>
        
        <subject>COVID-19; emergency decision model; trapezoidal Pythagorean fuzzy probabilistic linguistic variables; preference relations; PROMETHEE approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The outbreak of COVID-19 in 2019 has brought greater international attention to emergency decision making and management. Since emergency situations are often uncertain, prevention and control are crucial. For better prevention and control, and according to the characteristics of emergency incidents, this paper proposes a new form of linguistic expression, trapezoidal Pythagorean fuzzy probabilistic linguistic variables, to express decision-making information. Next, the paper develops the operational rules, value index, and ambiguity of trapezoidal Pythagorean fuzzy probabilistic linguistic variables. Then, the new trapezoidal Pythagorean fuzzy probabilistic linguistic priority weighted averaging PROMETHEE approach is introduced to aggregate the trapezoidal Pythagorean fuzzy probabilistic linguistic information in combination with preference relations. Finally, an emergency decision-making case on the prevention of infectious diseases illustrates the necessity and effectiveness of this method; the results of comparative and experimental analyses demonstrate that the constructed approach performs better in terms of effectiveness and reasonability.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_99-Emergency_Decision_Model_by_Combining_Preference_Relations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methods and Directions of Contact Tracing in Epidemic Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130698</link>
        <id>10.14569/IJACSA.2022.0130698</id>
        <doi>10.14569/IJACSA.2022.0130698</doi>
        <lastModDate>2022-06-30T12:23:39.6630000+00:00</lastModDate>
        
        <creator>Mohammed Abdalla</creator>
        
        <creator>Amr M. AbdelAziz</creator>
        
        <creator>Louai Alarabi</creator>
        
        <creator>Saleh Basalamah</creator>
        
        <creator>Abdeltawab Hendawi</creator>
        
        <subject>Contact tracing; routes analysis; epidemic discovery; big spatial health applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The contact tracing process is a mitigation and monitoring strategy that aims to capture infectious diseases to control their outbreak within a practical time. Various applications have been proposed and developed for the contact tracing process; most of these applications utilize smartphone technologies to record all movements of contacts and send notifications to those expected to be infected, whether high-risk or low-risk. On the other side, several challenges limit the functionality of contact tracing applications and processes; these limitations include (1) privacy concerns, (2) the inability to fully identify contacts, and (3) delays in identification. In this paper, we survey the functionality of the contact tracing process, how it works, open directions and challenges, applications, and its domains of use.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_98-Methods_and_Directions_of_Contact_Tracing_in_Epidemic_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of a Repeatable Framework for Prostate Cancer Lesion Binary Semantic Segmentation using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130697</link>
        <id>10.14569/IJACSA.2022.0130697</id>
        <doi>10.14569/IJACSA.2022.0130697</doi>
        <lastModDate>2022-06-30T12:23:39.6470000+00:00</lastModDate>
        
        <creator>Ian Vincent O. Mirasol</creator>
        
        <creator>Patricia Angela R. Abu</creator>
        
        <creator>Rosula S. J. Reyes</creator>
        
        <subject>Convolutional neural networks; binary semantic segmentation; prostate cancer; computer vision; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Prostate cancer is the third most diagnosed cancer overall. Current screening methods such as the prostate-specific antigen test can result in overdiagnosis and overtreatment, while other methods such as transrectal ultrasonography are invasive. Recent medical advancements have allowed the use of multiparametric MRI, a noninvasive and reliable screening process for prostate cancer. However, assessment still varies between professionals, introducing subjectivity. While convolutional neural networks have been used in multiple studies to objectively segment prostate lesions, due to the sensitivity of the datasets and the varying ground truths established in these studies, it is not possible to reproduce and validate the results. In this study, we executed a repeatable framework for segmenting prostate cancer lesions using annotated apparent diffusion coefficient maps from the QIN-PROSTATE-Repeatability dataset, a publicly available dataset that includes multiparametric MRI images of 15 patients confirmed or suspected of prostate cancer, with two studies each. We used a main architecture of U-Net with batch normalization, tested with different encoders, varying data image augmentation combinations, and hyperparameters adopted from various published frameworks, to validate which combination of parameters works best for this dataset. The best performing framework achieved a Dice score of 0.47 (0.44-0.49), which is comparable to previously published studies. The results from this study can be objectively compared and improved in further studies, whereas this was previously not possible.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_97-Construction_of_a_Repeatable_Framework_for_Prostate_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approval Rating of Peruvian Politicians and Policies using Sentiment Analysis on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130696</link>
        <id>10.14569/IJACSA.2022.0130696</id>
        <doi>10.14569/IJACSA.2022.0130696</doi>
        <lastModDate>2022-06-30T12:23:39.6300000+00:00</lastModDate>
        
        <creator>Jose Yauri</creator>
        
        <creator>Luis Solis</creator>
        
        <creator>Efrain Porras</creator>
        
        <creator>Manuel Lagos</creator>
        
        <creator>Enrique Tinoco</creator>
        
        <subject>Twitter data analytic; sentiment analysis; Peruvian politicians; approval rating; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Nowadays, using the social network Twitter, a person can easily access, post, and share information about news, events, and incidents taking place in the world. Recently, due to the high number of users and the capability to transfer information instantly, Twitter has attracted the interest of politicians seeking to interact with their followers and communicate their policies. Fearing the disagreements and disturbances that the application of some policies might cause, politicians usually rely on surveys to support their actions. However, such studies still use traditional questionnaires to recover information and are costly and time-consuming. Recent advances in automatic natural language processing have allowed the extraction of information from textual data, such as tweets. In this work, we present a method to analyze Twitter data related to Peruvian politicians and score the latent sentiment polarity of such messages. Our proposal is based on an embedding representation of tweets, which are classified by a convolutional neural network. For evaluation, we collected a new dataset related to the current President of Peru, on which the model achieved 91.2% sensitivity and 94.4% specificity. Furthermore, we evaluated the model on two political topics that were entirely unknown to the model. In all of them, our approach gives results comparable to those of renowned Peruvian pollsters.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_96-Approval_Rating_of_Peruvian_Politicians_and_Policies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Quartile Deviation-based Support Vector Regression Model for Software Reliability Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130694</link>
        <id>10.14569/IJACSA.2022.0130694</id>
        <doi>10.14569/IJACSA.2022.0130694</doi>
        <lastModDate>2022-06-30T12:23:39.6170000+00:00</lastModDate>
        
        <creator>Y. Geetha Reddy</creator>
        
        <creator>Y Prasanth</creator>
        
        <subject>Software fault detection; reliability prediction; support vector machine; exponential distribution; quartile deviation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Software reliability estimation using machine learning plays a major role across different software quality and reliability databases. Most conventional software reliability estimation models fail to predict the test samples due to the high true positive rate of traditional support vector regression models. Most traditional machine learning based fault prediction models are integrated with standard software reliability growth measures for reliability severity classification. However, these models predict the reliability level of a binary class with less standard error. In this paper, a hybrid support vector regression-based quartile deviation growth measure is implemented on the training fault datasets. Experimental results are simulated on various reliability datasets with different configuration parameters for fault prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_94-A_Hybrid_Quartile_Deviation_based_Support_Vector_Regression_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EAGL: Enhancement Algorithm based on Gamma Correction for Low Visibility Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130695</link>
        <id>10.14569/IJACSA.2022.0130695</id>
        <doi>10.14569/IJACSA.2022.0130695</doi>
        <lastModDate>2022-06-30T12:23:39.6170000+00:00</lastModDate>
        
        <creator>Navleen S Rekhi</creator>
        
        <creator>Jagroop S Sidhu</creator>
        
        <subject>Low scale intensity images; discrete wavelet decomposition; gamma correction; quality metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Under poor light conditions or improper acquisition settings, an image degrades due to low contrast and poor brightness, and suffers from poor visual quality. Enhancement is required to manipulate the scale of pixel intensity for significant improvement in the image. This paper proposes a method of gamma correction with a self-adaptive value in accordance with the intensity scale of the image. After transformation to the HSI (hue, saturation, and intensity) color space, a multi-scale wavelet transform is applied to the intensity component of the image. The gamma scale is computed from the combination of a reformed scale constant of the logarithm function and the Minkowski distance measure. Lastly, a wavelet-based de-noising technique is applied to suppress high noise coefficients and improve the quality of the image. The proposed method is evaluated in terms of visual appearance, measure of information content, signal-to-noise ratio, and universal image quality index. The results demonstrate that the proposed method is effective in terms of quality and improved visibility.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_95-EAGL_Enhancement_Algorithm_based_on_Gamma_Correction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep-Learning Approach for Efficient Eye-blink Detection with Hybrid Optimization Concept</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130693</link>
        <id>10.14569/IJACSA.2022.0130693</id>
        <doi>10.14569/IJACSA.2022.0130693</doi>
        <lastModDate>2022-06-30T12:23:39.6000000+00:00</lastModDate>
        
        <creator>Rupali Gawande</creator>
        
        <creator>Sumit Badotra</creator>
        
        <subject>Eye localization; CNN; Seagull Optimization with Enhanced Exploration (SOEE); improved active shape model (I-ASM); eye aspect ratio (EAR); eye-blink detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In this research work, a novel eye-blink detection model is developed. The proposed eye-blink detection model follows seven major phases: (a) video-to-frame conversion, (b) pre-processing, (c) face detection, (d) eye region localization, (e) eye landmark detection and eye status detection, (f) eye-blink detection, and (g) eye-blink classification. Initially, from the collected raw video sequence (input), individual frames are extracted in the video-to-frame conversion phase. Then, each frame is subjected to the pre-processing phase, where the quality of the image in the frames is improved using the proposed kernel median filtering (KMF) approach. In the face detection phase, the Viola-Jones model is utilized. Then, from the detected faces, the eye region is localized in the proposed eye region localization phase, which encapsulates two major stages: feature extraction and landmark detection. Features such as the improved active shape model (I-ASM) and local binary pattern (LBP) are extracted from the detected facial images. Then, the eye region is localized using a new optimized convolutional neural network (CNN) framework, trained with the extracted features (I-ASM and LBP). Moreover, to enhance the classification accuracy of eye localization, the weights of the CNN are fine-tuned using a new Seagull Optimization with Enhanced Exploration (SOEE), an improved version of the standard Seagull Optimization Algorithm (SOA). The outcome of the optimized CNN framework is the exact location of the eye region. Once the eye region is detected, it is essential to detect the status of the eye (whether open or closed). The status of the eye is detected by computing the eye aspect ratio (EAR). Then, the identified eye blinks are classified as long or short blinks based on the computed correlation coefficient. Finally, a comparative evaluation is accomplished to validate the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_93-Deep_Learning_Approach_for_Efficient_Eye_blink_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Driver Drowsiness Detection and Monitoring System (DDDMS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130691</link>
        <id>10.14569/IJACSA.2022.0130691</id>
        <doi>10.14569/IJACSA.2022.0130691</doi>
        <lastModDate>2022-06-30T12:23:39.5830000+00:00</lastModDate>
        
        <creator>Raz Amzar Fahimi Rozali</creator>
        
        <creator>Suzi Iryanti Fadilah</creator>
        
        <creator>Azizul Rahman Mohd Shariff</creator>
        
        <creator>Khuzairi Mohd Zaini</creator>
        
        <creator>Fatima Karim</creator>
        
        <creator>Mohd Helmy Abd Wahab</creator>
        
        <creator>Rajan Thangaveloo</creator>
        
        <creator>Abdul Samad Bin Shibghatullah</creator>
        
        <subject>Distraction; drowsiness; eye blink; yawn; head pose estimation; eye tracking; computer vision; Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The purpose of this paper is to develop a driver drowsiness detection and monitoring system that could act as an assistant to the driver during the driving process. The system is aimed at reducing fatal crashes caused by driver drowsiness and distraction. For drowsiness, the system operates by analysing the eye blink and yawn frequency of the driver, while for distraction, the system works based on head pose estimation and eye tracking. An alarm is triggered if any of these conditions occur. The main part of the implementation of this system uses Python with computer vision, with a uniquely designed Raspberry Pi as the hardware platform and a speaker for the alarm. In short, this driver drowsiness monitoring system can continuously monitor drivers so as to avoid accidents in real time.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_91-Driver_Drowsiness_Detection_and_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BiDLNet: An Integrated Deep Learning Model for ECG-based Heart Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130692</link>
        <id>10.14569/IJACSA.2022.0130692</id>
        <doi>10.14569/IJACSA.2022.0130692</doi>
        <lastModDate>2022-06-30T12:23:39.5830000+00:00</lastModDate>
        
        <creator>S. Kusuma</creator>
        
        <creator>Jothi. K. R</creator>
        
        <subject>Heart disease; ECG; deep learning; machine learning models; discrete wavelet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Every year, around 10 million people die due to heart attacks. The use of electrocardiograms (ECGs) is a vital part of diagnosing these conditions. These signals are used to collect information about the heart&#39;s rhythm. Currently, various limitations hinder the diagnosis of heart diseases. The BiDLNet model proposed in this paper aims to examine the capability of electrocardiogram data to diagnose heart disease. Through a combination of deep learning techniques and structural design, BiDLNet can extract two levels of features from the data. A discrete wavelet transform takes advantage of the features extracted from higher layers and then adds them to lower layers. An ensemble classification scheme is then built to combine the predictions of various deep learning models. The BiDLNet system can classify features of different types of heart disease using two classification schemes, binary and multiclass, performing remarkably well with accuracies of 97.5% and 91.5%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_92-BiDLNet_An_Integrated_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Region Growing Algorithm using Wavelet Coefficient Feature Combination of Image Dynamics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130690</link>
        <id>10.14569/IJACSA.2022.0130690</id>
        <doi>10.14569/IJACSA.2022.0130690</doi>
        <lastModDate>2022-06-30T12:23:39.5700000+00:00</lastModDate>
        
        <creator>Tamanna Sahoo</creator>
        
        <creator>Bibhuprasad Mohanty</creator>
        
        <subject>Moving object; dynamism; wavelet transformation; region growing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Moving object detection has versatile and potential applications in video surveillance, traffic monitoring, human motion capture, etc., where detecting object(s) in a complex scene is vital. In the existing background subtraction method based on frame differencing, the false positive and misclassification rates increase as the background becomes more complex and when multiple moving objects are present in the scene. In this work, an approach is made to enhance the detection performance of the background subtraction method by exploiting the dynamism available in the scene. The differencing frame obtained by the spatial background subtraction method is subjected to wavelet transformation. By extracting and combining wavelet features from the dynamics of the scene, a novel region growing technique is then utilized to detect the moving object(s) in the scene. Simulations on various video sequences from the CDnet, SBMnet, AGVS, I2R, and Urban Tracker databases show that the method provides satisfactory detection of moving objects in complex scenes. Quantitative measures such as recall, precision, F1-measure, and specificity computed for the algorithm indicate that it can be a suitable candidate for surveillance applications.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_90-A_Novel_Region_Growing_Algorithm_Using_Wavelet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of COVID-19 from Chest X-Ray Images using CNN and ANN Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130689</link>
        <id>10.14569/IJACSA.2022.0130689</id>
        <doi>10.14569/IJACSA.2022.0130689</doi>
        <lastModDate>2022-06-30T12:23:39.5530000+00:00</lastModDate>
        
        <creator>Micheal Olaolu Arowolo</creator>
        
        <creator>Marion Olubunmi Adebiyi</creator>
        
        <creator>Eniola Precious Michael</creator>
        
        <creator>Happiness Eric Aigbogun</creator>
        
        <creator>Sulaiman Olaniyi Abdulsalam</creator>
        
        <creator>Ayodele Ariyo Adebiyi</creator>
        
        <subject>Machine learning; COVID-19; ANN; CNN; X-ray images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The incidence of coronavirus disease (COVID-19), which causes respiratory illness, is higher than that of SARS in 2003. Both COVID-19 and SARS spread across regions and infect living beings, with more than 73,435 deaths and more than 2,000 deaths documented as of August 12, 2020. In contrast, SARS claimed 774 lives in 2003, whereas COVID-19 claimed more in a far shorter time. The fundamental difference between them, however, is that 17 years after SARS, powerful new tools have been developed that can be utilized to combat the virus and keep it within reasonable boundaries. One of these tools is machine learning (ML). Recently, ML has caused a paradigm shift in the healthcare industry, and its use in the COVID-19 outbreak could be profitable, especially in forecasting the location of the next outbreak. The use of AI can accelerate COVID-19 diagnosis and monitoring, reducing the time and cost of these processes. Accordingly, this study uses ANN and CNN techniques to detect COVID-19 from chest X-ray images, with 95% and 75% accuracy, respectively. Machine learning has greatly enhanced the monitoring, diagnosis, analysis, forecasting, contact tracing, and medication/vaccine production processes for the COVID-19 outbreak, reducing human involvement in nursing care.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_89-Detection_of_Covid_19_from_Chest_X_Ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Information and Computer-based Distance Learning Technologies during COVID-19 Active Restrictions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130688</link>
        <id>10.14569/IJACSA.2022.0130688</id>
        <doi>10.14569/IJACSA.2022.0130688</doi>
        <lastModDate>2022-06-30T12:23:39.5370000+00:00</lastModDate>
        
        <creator>Irina Petrovna Gladilina</creator>
        
        <creator>Lyudmila Nikolaevna Pankova</creator>
        
        <creator>Svetlana Alexandrovna Sergeeva</creator>
        
        <creator>Vladimir Kolesnik</creator>
        
        <creator>Alexey Vorontsov</creator>
        
        <subject>Distance learning; teachers; electronic service; online class</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Despite the reduction of restrictive measures imposed due to the COVID-19 pandemic, the problem of organizing distance learning remains topical. Distance learning imposes a much greater responsibility on teachers and increases their workload, as learning technologies change rapidly and teachers have to actively adapt to innovations, devoting considerable time to preparing appropriate materials to ensure the best learning outcomes. The aim of the study is to identify the most effective means of organizing distance learning used by teachers. The study is based on a survey of university professors who taught in distance mode during the active administrative restrictions of 2020-2021. Opportunities for the use of various services in the organization of distance learning are analyzed, and the drawbacks and advantages of the distance learning system are highlighted. The study reveals previously unapparent issues that arose in the course of distance work under quarantine. These include, first and foremost, the high physical workload of teachers, the many technical problems that arose in the transition to distance learning, gaps in teachers’ competencies that need to be urgently addressed, and the complicated coordination of the learning process. Despite the problems identified, the authors argue that the system of distance learning can and must be adopted and further developed as an additional supporting direction in the organization of the learning process, which will allow educational institutions to promptly shift to distance learning as needed.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_88-Use_of_Information_and_Computer_based_Distance_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Data Segmentation Architecture for Early Size Estimation using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130687</link>
        <id>10.14569/IJACSA.2022.0130687</id>
        <doi>10.14569/IJACSA.2022.0130687</doi>
        <lastModDate>2022-06-30T12:23:39.5370000+00:00</lastModDate>
        
        <creator>Manisha</creator>
        
        <creator>Rahul Rishi</creator>
        
        <creator>Sonia Sharma</creator>
        
        <subject>Cosine similarity; hybrid similarity; machine learning; size estimation and soft cosine similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Software size estimation plays an important role in project management. According to the Standish Chaos report, about 65% of software projects are over budget or overdue, which could have been avoided had an early estimation been imposed. Though software size cannot be measured directly, it is related to effort, and hence a low effort will lead to a low size. The calculation of effort depends upon how the data is organized or segmented. This paper focuses on improving data segmentation in order to reduce the effort and, in parallel, the size. To improve the segmentation architecture, the project data is divided based on the similarity indexes between project attributes. Three similarity measures were used, namely Cosine Similarity (CS), Soft Cosine Similarity (SC), and a hybrid similarity index which combines CS and SC. Based on these similarity indexes, the project data is divided into groups by the K-means algorithm. In order to estimate the size, the correlation between the formed groups is calculated. To calculate the correlation, the Mean Square Error (MSE), Square Error (SE), and Standard Deviation (STD) are computed, and the normalized parameters are used to evaluate the software size.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_87-Improved_Data_Segmentation_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Outlier Detection and Feature Ranking based Ensemble Learning for ECG Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130686</link>
        <id>10.14569/IJACSA.2022.0130686</id>
        <doi>10.14569/IJACSA.2022.0130686</doi>
        <lastModDate>2022-06-30T12:23:39.5230000+00:00</lastModDate>
        
        <creator>Venkata Anuhya Ardeti</creator>
        
        <creator>Venkata Ratnam Kolluru</creator>
        
        <creator>George Tom Varghese</creator>
        
        <creator>Rajesh Kumar Patjoshi</creator>
        
        <subject>Feature ranking; improved inter quartile range; majority voting; outlier detection; optimized random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Automated classification of each heartbeat class from the ECG signal is important for diagnosing cardiovascular diseases (CVDs) more quickly. ECG data acquired from real-time or clinical databases contains exceptional or extreme values called outliers. The separation and removal of outliers is useful for improving data quality, as the presence of outliers influences the results of machine learning (ML) methods such as classification and regression. Outlier identification and removal plays a significant role in this area of research and is a part of signal denoising. Also, most traditional ECG signal processing methods face difficulty in finding the essential key features of the recorded signal. In this work, an extreme outlier detection technique known as the improved inter quartile range (IIQR) filtering method is used to find the outliers of the signal for the feature ranking process. In addition, an optimized random forest (ORF) based heterogeneous ensemble classification model is proposed to improve the true positive rate and runtime on the ECG data. Each heartbeat type is classified using a majority voting technique; ensemble learning and the majority voting rule are used to enhance the accuracy of heart disease prediction. The proposed feature ranking based ORF ensemble classification model (LR + SVM + ORF + XGBoost + KNN) is evaluated on the MIT-BIH arrhythmia database and produces an overall accuracy of 99.45%, which significantly outperforms state-of-the-art methods such as (LR + SVM + RF + XGBoost + KNN) with 96.17% accuracy, ensemble deep learning with 95.81% accuracy, and ensemble SVM with 94.47% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_86-An_Outlier_Detection_and_Feature_Ranking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Domain Human Recognition Techniques using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130684</link>
        <id>10.14569/IJACSA.2022.0130684</id>
        <doi>10.14569/IJACSA.2022.0130684</doi>
        <lastModDate>2022-06-30T12:23:39.5070000+00:00</lastModDate>
        
        <creator>Seshaiah Merikapudi</creator>
        
        <creator>Murthy SVN</creator>
        
        <creator>Manjunatha. S</creator>
        
        <creator>R. V. Gandhi</creator>
        
        <subject>Human recognition; deep learning; hybrid model; CNN; HAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>As a key research subject in the fields of health and human-machine interaction, human activity recognition (HAR) has emerged as a major research focus over the past few decades. Many artificial intelligence-based models are being created for activity recognition. However, these algorithms fail to extract spatial and temporal properties, resulting in poor performance on real-world long-term HAR. A drawback in the literature is that only a small number of publicly available datasets exist for physical activity recognition, and those datasets contain only a small number of activities. In this paper, a hybrid model for activity recognition that incorporates both convolutional neural networks (CNN) and long short-term memory (LSTM) networks is developed. The CNN network is used for extracting spatial characteristics, while the LSTM network is used for learning time-related information. Using a variety of traditional and deep machine learning models, an extensive ablation investigation is carried out in order to find the best possible HAR solution. The CNN approach can achieve a precision of 90.89%, indicating that the model is suitable for HAR applications.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_84-Domain_Human_Recognition_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tourist Reviews Sentiment Classification using Deep Learning Techniques: A Case Study in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130685</link>
        <id>10.14569/IJACSA.2022.0130685</id>
        <doi>10.14569/IJACSA.2022.0130685</doi>
        <lastModDate>2022-06-30T12:23:39.5070000+00:00</lastModDate>
        
        <creator>Banan A. Alharbi</creator>
        
        <creator>Mohammad A. Mezher</creator>
        
        <creator>Abdullah M. Barakeh</creator>
        
        <subject>Sentiment classification; Saudi dialect; support vector machine; recurrent neural network; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Nowadays, social media sites and travel blogs have become one of the most vital sources of expression. Tourists express everything related to their experiences, reviews, and opinions about the places they visit. Moreover, the sentiment classification of tourist reviews on social media sites plays an increasingly important role in tourism growth and development. Accordingly, these reviews are valuable both for new tourists and for officials seeking to understand tourists’ needs and improve services based on tourists’ assessments. The tourism industry anywhere also relies heavily on the opinions of former tourists. However, most tourists write their reviews in their local dialect, making sentiment classification more difficult because there are no specific rules governing the writing system. Moreover, there is a gap between Modern Standard Arabic (MSA) and local dialects. One of the most prominent issues in sentiment analysis is that the local dialect lexicon has not seen significant development; although a few lexicons are available to the public, they are sparse and small. Thus, this paper aims to build a model capable of accurate sentiment classification of tourist place reviews written in the Saudi dialect of Arabic using deep learning techniques. Machine learning techniques help classify these reviews into positive, negative, and neutral. In this paper, three machine learning algorithms were used: Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and Recurrent Neural Network (RNN). These algorithms are evaluated on a Google Maps dataset of tourist places in Saudi Arabia. The classification performance of these algorithms is assessed using various measures such as accuracy, precision, recall, and F-score. The results show that the SVM algorithm outperforms the deep learning techniques: SVM achieved 98%, while LSTM and RNN both achieved the same performance of 96%.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_85-Tourist_Reviews_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Evaluation of Two Feature Selection Algorithms in Improving the Performance of the Sentiment Analysis Model of Arabic Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130683</link>
        <id>10.14569/IJACSA.2022.0130683</id>
        <doi>10.14569/IJACSA.2022.0130683</doi>
        <lastModDate>2022-06-30T12:23:39.4900000+00:00</lastModDate>
        
        <creator>Maria Yousef</creator>
        
        <creator>Abdulla ALali</creator>
        
        <subject>Sentiment analysis; Information Gain (IG); Chi-Square; AJGT database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Recently, sentiment analysis on Twitter has become one of the most interesting research disciplines; it combines data mining technologies with natural language processing techniques. A sentiment analysis system aims to evaluate texts posted on social platforms to express the positive, negative, or neutral feelings of people regarding a certain domain. The high dimensionality of the feature vector is considered one of the most common problems of Arabic sentiment analysis. The main contribution of this paper is to address the dimensionality problem by presenting a comparative study between two feature selection algorithms, namely Information Gain (IG) and Chi-Square, to choose the one that best improves classification accuracy. In this paper, the Arabic Jordanian sentiment analysis model is developed through four steps. First, a preprocessing step is applied to the database, which includes removing non-Arabic symbols, tokenizing, Arabic stop word removal, and stemming. In the second step, the TF-IDF algorithm is used as a feature extraction method to represent the text as feature vectors. Then, IG and Chi-Square are utilized in the feature selection step to obtain the best subset of features and decrease the total number of features. Finally, different algorithms such as SVM, DT, and KNN are used in the classification step to classify the views people have shared on Twitter into two classes (positive and negative). Several experiments were performed on Jordanian dialectal tweets using the AJGT database. The experimental results show the following: 1) the Information Gain algorithm outperformed the Chi-Square algorithm in the feature selection step, as it was able to reduce the number of features from 1170 to 713 and increase the accuracy of the classifiers by 10%; 2) the SVM classifier shows the best classification performance among all the classifiers tested, giving the highest accuracy of 85% with the IG algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_83-Analysis_and_Evaluation_of_Two_Feature_Selection_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Intrusion Detection Model for IoT Networks using Backpropagation Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130682</link>
        <id>10.14569/IJACSA.2022.0130682</id>
        <doi>10.14569/IJACSA.2022.0130682</doi>
        <lastModDate>2022-06-30T12:23:39.4730000+00:00</lastModDate>
        
        <creator>Jasem Almotiri</creator>
        
        <subject>DDoS; backpropagation neural network; IoT network; intrusion detection; CICDDoS2019</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In today&#39;s digital landscape, Internet of Things (IoT) networking has grown dramatically. The major feature of IoT network devices is their ability to connect to the internet and interact with it by collecting and exchanging data. Distributed Denial of Service (DDoS) is a form of cyber-attack in which hackers penetrate a single connection and then operate multiple machines together to attack one target. The direct connectivity of IoT devices to the internet makes DDoS attacks worse and more dangerous. As more businesses adopt IoT networks to streamline their operations, DDoS intrusions at small and large scales become more likely. Therefore, an intrusion detection module in IoT networks is not optional in today’s business environment. To achieve this objective, this paper proposes an intelligent intrusion detection model to detect DDoS attacks in IoT networks. The intelligent model is a backpropagation neural network-based framework. The results are analyzed using different performance measures. The proposed model achieves a detection rate of 99.46% and a detection accuracy of 95.76% on the up-to-date benchmark CICDDoS2019 dataset. Furthermore, the proposed model has been compared with the most recent DDoS intrusion detection schemes, and competitive performance is achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_82-DDoS_Intrusion_Detection_Model_for_IoT_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Blocking Bugs with Machine Learning Techniques: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130680</link>
        <id>10.14569/IJACSA.2022.0130680</id>
        <doi>10.14569/IJACSA.2022.0130680</doi>
        <lastModDate>2022-06-30T12:23:39.4600000+00:00</lastModDate>
        
        <creator>Selasie Aformaley Brown</creator>
        
        <creator>Benjamin Asubam Weyori</creator>
        
        <creator>Adebayo Felix Adekoya</creator>
        
        <creator>Patrick Kwaku Kudjo</creator>
        
        <creator>Solomon Mensah</creator>
        
        <subject>Blocking bugs; systematic review; software maintenance; bug report; reliability; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The application of machine learning (ML) techniques to predict blocking bugs has emerged for the early detection of Blocking Bugs (BBs) in software components, to mitigate the adverse effect of BBs on software release and project cost. This study presents a systematic literature review of the trends in the application of ML techniques to BB prediction, existing research gaps, and possible research directions, to serve as a reference for future research and an application insight for software engineers. We constructed search phrases from relevant terms and used them to extract peer-reviewed studies from the databases of five well-known academic publishers, namely Scopus, SpringerLink, IEEE Xplore, the ACM Digital Library, and ScienceDirect. We included primary studies published between January 2012 and February 2022 that applied ML techniques to building Blocking Bug Prediction Models (BBPMs). Our results reveal a paucity of literature on BBPMs. Previous researchers employed ML techniques such as Decision Trees, Random Forest, Bayes Network, XGBoost, and DNN in building existing BB prediction models. However, the publicly available datasets for building BBPMs are significantly imbalanced. Despite the poor suitability of the Accuracy metric where imbalanced datasets are concerned, some primary studies still utilized it to assess the performance of their proposed BBPMs. Further research is required to validate existing and new BBPMs on datasets of commercial software projects. Future researchers should also mitigate the effect of class imbalance before training a BBPM.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_80-Predicting_Blocking_Bugs_with_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trace Learners Clustering to Improve Learning Object Recommendation in Online Education Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130681</link>
        <id>10.14569/IJACSA.2022.0130681</id>
        <doi>10.14569/IJACSA.2022.0130681</doi>
        <lastModDate>2022-06-30T12:23:39.4600000+00:00</lastModDate>
        
        <creator>Zriaa Rajae</creator>
        
        <creator>Amali Said</creator>
        
        <creator>El Faddouli Nour-eddine</creator>
        
        <subject>e-learning; recommendation system; learning objects; tacit behaviors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>E-learning platforms propose pedagogical pathways in which learners are invited to mobilize their autonomy to achieve the learning objectives. However, some learners face a set of cognitive barriers that require additional learning objects to progress in the course. A mediating recommendation system is one of the efficient solutions to reinforce the resilience of online platforms by suggesting learning objects that are interesting to learners according to their needs. The objective of this contribution is to design a new mediating recommendation model for e-learning platforms that suggests learning objects to the learner based on collaborative filtering. To this end, the proposed system relies on an implicit behavior estimation function as an underlying technique to convert tacit traces into explicit preferences, allowing the similarity between learners to be computed.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_81-Trace_Learners_Clustering_to_Improve_Learning_Object.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Electronic Health Record and Health Insurance Management System using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130679</link>
        <id>10.14569/IJACSA.2022.0130679</id>
        <doi>10.14569/IJACSA.2022.0130679</doi>
        <lastModDate>2022-06-30T12:23:39.4430000+00:00</lastModDate>
        
        <creator>Lincy Golda Careline S</creator>
        
        <creator>T. Godhavari</creator>
        
        <subject>Electronic health records; insurance policy and claim processing; smart contracts; Ethereum; homomorphic encryption; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Electronic health records (EHR) play an important role in the digital health transition. EHRs contain medical information such as demographics, laboratory test results, radiological images, vaccination status, insurance policies, and claims. The EHR is essential for doctors and healthcare organizations to analyze a patient&#39;s profile and provide appropriate therapy. Despite this, current EHR systems suffer from difficulties such as interoperability and security. Better and faster care may be provided with an integrated and secure health record for each patient that can be transmitted easily in real time across countries. People holding health insurance policies are often confronted with insurance jargon and the insurer’s cumbersome requirements while filing a claim for treatment. There are times when claims processing takes longer than expected. The insurer, Third-Party Administrators (TPAs), and network provider hospitals examine, approve, and initiate the sum claimed. The use of blockchain in this process allows for more efficient information sharing at a lower cost and with more security. Only authorized individuals have access to the shared ledger on a blockchain, making it more confidential and secure. All parties engaged in a health insurance policy, including the insurer, the insured, the TPA, and the network provider hospital, may be members of the blockchain network and have access to the same set of policy data. In our proposed work, we implement a blockchain-based EHR and health insurance management system on Ethereum, deploy smart contracts written in Solidity, and create a web application with web3.js and the React framework.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_79-Implementation_of_Electronic_Health_Record.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Classification Approach using Feature Fusion Model for Heart Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130677</link>
        <id>10.14569/IJACSA.2022.0130677</id>
        <doi>10.14569/IJACSA.2022.0130677</doi>
        <lastModDate>2022-06-30T12:23:39.4270000+00:00</lastModDate>
        
        <creator>Bhandare Trupti Vasantrao</creator>
        
        <creator>Selvarani Rangasamy</creator>
        
        <creator>Chetan J. Shelke</creator>
        
        <subject>Deep learning approach; heart disease diagnosis; feature fusion model; ECG analysis; weighted clustering; F-Score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Early diagnosis plays a critical role in medical data processing and automated systems. In medical diagnosis, automation is pursued in different application areas, among which heart disease diagnosis is a prominent domain. Early detection of heart disease can save many lives and avert critical complications in diagnosed patients. In the process of heart disease diagnosis, spatial and frequency domain features are used by the automated system for decision making. The observed features may be time variant or invariant in nature, and the criticality of each feature varies with the diagnostic need. While current automated systems extract a large number of features to attain higher accuracy, the processing overhead and delay are considerable. Different regression approaches were developed in the recent past to minimize the feature processing overhead, optimizing the features based on gain performance or distance factors; however, the characteristic variation of a feature and the significance of the feature vector are not addressed. This paper outlines a feature selection method for heart disease diagnosis based on a weighted feature vector that takes into account feature significance and the probability of estimate. A new optimizing function for feature selection is proposed as a dual function of the probability factor and the feature weight value. Simulation results illustrate the improvement in accuracy and computation speed of the proposed method compared to other existing methods.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_77-A_Deep_Learning_Classification_Approach_using_Feature_Fusion_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Demand based Optimal Route Generation in Transport System using DFCM and ABSO Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130678</link>
        <id>10.14569/IJACSA.2022.0130678</id>
        <doi>10.14569/IJACSA.2022.0130678</doi>
        <lastModDate>2022-06-30T12:23:39.4270000+00:00</lastModDate>
        
        <creator>Archana M. Nayak</creator>
        
        <creator>Nirbhay Chaubey</creator>
        
        <subject>Clustering; optimization; demand based objectives; comfort level; optimal routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Transportation network service quality generally depends on providing demand-based routing. Various existing approaches focus on enhancing transportation service quality, but they fail to satisfy demand. This work presents effective demand-based objectives for optimal route generation in a public transport system, with the aim of providing demand-based optimal routing for large-city transportation. The proposed demand-based optimal route generation process proceeds in stages. Initially, the passengers on each route are clustered using a Distance-based adaptive Fuzzy C-means clustering approach (DFCM) to collect the passenger count at each stop; the number of cluster members in each cluster is equivalent to the passenger count of that stop. After clustering, routing is performed on the clustered data using an adaptive objectives-based beetle swarm optimization (ABSO) approach. Re-routing is then performed based on demand-based objectives such as passenger count, passenger comfort level, route distance, and average travel time, again using the ABSO approach, which provides optimal routing with respect to these objectives. The presented methodology is implemented in the MATLAB working platform, and the dataset used for the analysis is Surat city transport historical data. The experimental results of the presented work are compared with different existing approaches in terms of root mean square error (9.5%), mean error (0.254%), mean absolute error (0.3007%), correlation coefficient (0.8993), vehicle occupancy (85%), and accuracy (99.57%).</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_78-An_Effective_Demand_based_Optimal_Route_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of COVID-19 Pandemic Measures and Restrictions on Cellular Network Traffic in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130676</link>
        <id>10.14569/IJACSA.2022.0130676</id>
        <doi>10.14569/IJACSA.2022.0130676</doi>
        <lastModDate>2022-06-30T12:23:39.4130000+00:00</lastModDate>
        
        <creator>Sallar Salam Murad</creator>
        
        <creator>Salman Yussof</creator>
        
        <creator>Rozin Badeel</creator>
        
        <creator>Reham A. Ahmed</creator>
        
        <subject>Cellular networks; COVID-19; network performance; pandemic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Due to the COVID-19 pandemic, intensive controls were put in place to prevent the disease from spreading. People&#39;s habits were altered by the COVID-19 measures and restrictions imposed, such as social distancing and lockdowns. These unexpected changes had a significant impact on cellular networks: increased use of online services and content streaming raised the burden on wireless networks. This research work is a case study that examines and investigates cellular network performance, including upload speed, download speed, and latency, during two periods (MCO, CMCO) in three different regions, Kuala Lumpur, Selangor (Cheras), and Johor Bahru, to observe the effects of lockdown enforcement and other restrictions in Malaysia on cellular network traffic. We used the Speedtest™ phone application as a data collection tool at different times of day, covering the peak periods in the morning, evening, and night. The research findings show how significantly COVID-19 affected internet traffic in Malaysia. This research should help prospective developers and companies better understand and prepare for tough times and higher loads on cellular networks in future pandemics such as COVID-19.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_76-Impact_of_COVID_19_Pandemic_Measures_and_Restrictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Recovery Approach with Optimized Cauchy Coding in Distributed Storage System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130675</link>
        <id>10.14569/IJACSA.2022.0130675</id>
        <doi>10.14569/IJACSA.2022.0130675</doi>
        <lastModDate>2022-06-30T12:23:39.3970000+00:00</lastModDate>
        
        <creator>Snehalata Funde</creator>
        
        <creator>Gandharba Swain</creator>
        
        <subject>Optimized cauchy coding; fault tolerant; data availability; reed solomon code</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In the professional world, the impact of big data is rapidly changing how things are done. Data is currently generated by a wide range of sensors that are part of smart devices, which necessitates fault-tolerant data storage and retrieval. Data loss can be caused by natural calamities, human error, or mechanical failure, and several security threats and data degradation attacks attempt to destroy storage disks, causing partial or complete data loss. Data encoding and data recovery mechanisms are proposed in this research. To produce a set of matrices using matrix heuristics, the suggested system proposes an efficient Optimized Cauchy Coding (OCC) approach. In this paper, the Cauchy matrix is used as a generator matrix in a Reed Solomon (RS) code to encode data blocks with fewer XOR operations, reducing the time complexity of the encoding algorithm. Furthermore, in the event of a disk failure, missing data from any data block is made available through the code word. In terms of data recovery, the approach outperforms the Optimal Weakly Secure Minimum Storage Regenerating (OWSPM-MSR) and Product-Matrix Minimum Storage Regenerating (PM-MSR) methods. For data coding, a 1024KB file with various combinations of data blocks l and parity blocks m is evaluated. In the first scenario, m is 1 and l ranges from 4 to 10; in the second scenario, l is 4 while m ranges from 1 to 10. The existing OWSPM-MSR approach takes an average of 0.125 seconds to encode and 0.22 seconds to decode, whereas the PM-MSR approach takes an average of 0.045 seconds to encode and 0.16 seconds to decode. The proposed OCC approach speeds up data coding, taking an average of 0.035 seconds to encode and 0.116 seconds to decode.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_75-Data_Recovery_Approach_with_Optimized_Cauchy_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incremental Learning based Optimized Sentiment Classification using Hybrid Two-Stage LSTM-SVM Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130674</link>
        <id>10.14569/IJACSA.2022.0130674</id>
        <doi>10.14569/IJACSA.2022.0130674</doi>
        <lastModDate>2022-06-30T12:23:39.3800000+00:00</lastModDate>
        
        <creator>Alka Londhe</creator>
        
        <creator>P. V. R. D. Prasada Rao</creator>
        
        <subject>Sentiment analysis; natural language processing; incremental learning; long short-term memory; support vector machine; hybrid; dimensionality reduction; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Sentiment analysis is a subtopic of Natural Language Processing (NLP) that involves extracting emotions from unprocessed text. It is commonly applied to customer review posts to automatically determine whether user or customer sentiment is negative or positive. However, the quality of such analysis depends heavily on the quantity of raw data, and conventional classifier-based sentiment prediction cannot handle these large datasets. Hence, a deep learning approach is used for efficient and effective sentiment prediction. The proposed system consists of three main phases: 1) data collection and pre-processing; 2) feature extraction using a count vectorizer and dimensionality reduction; and 3) a hybrid LSTM-SVM classifier using incremental learning. Initially, raw product-review data is gathered from e-commerce sites and passed to pre-processing, which performs tokenization, stop word removal, and lemmatization on each review text. After pre-processing, features such as keywords, length, and word count are extracted and passed to the feature extraction stage. A hybrid classifier using a two-stage LSTM and SVM is then developed to train the sentiment classes, passing new features and classes for incremental learning. The proposed system is developed in Python and compared with state-of-the-art classification techniques based on performance metrics such as accuracy, precision, recall, sensitivity, and specificity. The proposed model achieved an accuracy of 92%, which is better than the existing state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_74-Incremental_Learning_based_Optimized_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Security of Data Stored in the Cloud using customized Data Visualization Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130673</link>
        <id>10.14569/IJACSA.2022.0130673</id>
        <doi>10.14569/IJACSA.2022.0130673</doi>
        <lastModDate>2022-06-30T12:23:39.3800000+00:00</lastModDate>
        
        <creator>Archana M</creator>
        
        <creator>Gururaj Murtugudde</creator>
        
        <subject>Artificial intelligence; big data; cloud computing; data science; data visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Cloud computing is being popularized by the invention of the latest technologies such as big data, artificial intelligence, and data science. The biggest challenge faced by researchers is finding efficient ways of accessing data and acquiring the required results; improving system efficiency will help researchers go a step further in the field of cloud computing. Alongside storing data in an optimal way, another major challenge researchers face is security: how best it can be enhanced to protect the end system from data theft and illegal attacks. The research proposed in this paper concentrates on customized data visualization techniques developed to store data and also enhance data security. These visualization patterns are dynamic in nature and can be further extended based on the need and the level of security required by the application. The proposed research will help researchers implement data visualization techniques with enhanced security to protect real-time data stored in the cloud from unauthorized access and various attacks such as malware. The data patterns, being dynamic, are selected based on two factors: the number of fragments to be stored for a particular cluster or region, and the number of nodes available in the pattern. This proposed research will give additional strength to cloud computing platforms such as AWS and Google Cloud, allowing customers to feel that their data is in safe hands. In today&#39;s data-driven world, such a system is essential because it enhances the security of the data.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_73-Enhancing_the_Security_of_Data_Stored_in_the_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Influence of Management Automation on Managerial Decision-making in the Agro-Industrial Complex</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130672</link>
        <id>10.14569/IJACSA.2022.0130672</id>
        <doi>10.14569/IJACSA.2022.0130672</doi>
        <lastModDate>2022-06-30T12:23:39.3670000+00:00</lastModDate>
        
        <creator>Sergey Dokholyan</creator>
        
        <creator>Evgeniya Olegovna Ermolaeva</creator>
        
        <creator>Alexander Sergeyevich Verkhovod</creator>
        
        <creator>Elena Vladimirovna Dupliy</creator>
        
        <creator>Anna Evgenievna Gorokhova</creator>
        
        <creator>Vyacheslav Aleksandrovich Ivanov</creator>
        
        <creator>Vladimir Dmitrievich Sekerin</creator>
        
        <subject>Grain elevator; automation; grain quality; grain storage; grain drying; grain losses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The preservation and rational use of the harvested crop, and obtaining the maximum product output from raw materials, is today one of the most important state tasks. Automation of production processes is the main direction in which production is currently advancing around the world. Functions previously performed by people, not only physical but also intellectual, are gradually being transferred to automation systems that execute technological cycles and exercise control over them. The purpose of the article is to analyze the effect of automation on the ability to store grain in elevators. The main research question is which factors should be considered when introducing an automation system into the grain storage process at elevators to improve the efficiency of process control at enterprises. To address this question, a qualitative study was conducted using the method of an expert survey. The article reveals the factors that affect grain quality; the tasks implemented in the computerized process control system (CPCS) and the management information and control system (MICS); the factors that hinder grain elevator automation; and the tasks solved by the automation of grain elevators within autonomous subsystems and integrated automatic control systems (ACS). It is concluded that implementing automation in the grain storage process in elevators leads to increased grain quality and productivity; reduction or elimination of losses caused by theft and the peculiarities of grain storage; savings in energy resources; and minimization of the human factor and the risk of accidents. Moreover, the inclusion of non-standard tools in the MICS and CPCS makes it easier to solve several current automation problems. Creating standard problem-oriented complexes for responsible decision-makers based on an integrated ACS, with certified object-oriented non-standard tools included in their composition, is the most rational way to further improve the efficiency of the industry&#39;s automated control system.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_72-Influence_of_Management_Automation_on_Managerial_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Op-RMSprop (Optimized-Root Mean Square Propagation) Classification for Prediction of Polycystic Ovary Syndrome (PCOS) using Hybrid Machine Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130671</link>
        <id>10.14569/IJACSA.2022.0130671</id>
        <doi>10.14569/IJACSA.2022.0130671</doi>
        <lastModDate>2022-06-30T12:23:39.3500000+00:00</lastModDate>
        
        <creator>Rakshitha Kiran P</creator>
        
        <creator>Naveen N. C</creator>
        
        <subject>SVM; decision tree; logistic regression; RM Sprop; frameworks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Polycystic Ovary Syndrome (PCOS) is a common women&#39;s health problem caused by an imbalance in the reproductive hormones, which causes problems in the ovaries. An appropriate machine learning (ML) algorithm can be applied to analyze the datasets and validate the performance of the algorithm in terms of accuracy. In this paper, a unique hybrid and optimized methodology is proposed that combines an SVM linear kernel with Logistic Regression in a novel way. The output of this model is passed to the RMSprop optimizer, which trains the model iteratively to obtain better output. For this research, 1600 records were collected from a leading hospital in the Bangalore Urban region. The optimized hybrid method was tested on the PCOS data and exhibited 89.03% accuracy. The results showed that the optimized hybrid model works efficiently compared to other existing ML algorithms such as SVM, Logistic Regression, Decision Tree, KNN, Random Forest, and AdaBoost. The optimized hybrid SVLR model also showed good results in terms of the F-measure, precision, and recall statistical criteria. Overall, this paper summarizes the working of the proposed optimized SVLR hybrid model and the prediction of PCOS.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_71-Op_RMSprop_Optimized_Root_Mean_Square_Propagation_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Registration Methods for Thermal Images of Diabetic Foot Monitoring: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130670</link>
        <id>10.14569/IJACSA.2022.0130670</id>
        <doi>10.14569/IJACSA.2022.0130670</doi>
        <lastModDate>2022-06-30T12:23:39.3030000+00:00</lastModDate>
        
        <creator>Doha Bouallal</creator>
        
        <creator>Hassan Douzi</creator>
        
        <creator>Rachid Harba</creator>
        
        <subject>Medical imaging; diabetic foot; thermography; registration; mobile health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This paper presents a comparative study of image registration techniques for Diabetic Foot (DF) thermal images. Four registration methods have been implemented and analyzed: an intensity-based algorithm, Iterative Closest Point (ICP), a subpixel registration algorithm based mainly on the Fast Fourier Transform (FFT), and the pyramid approach for subpixel registration. The performance of the four algorithms was evaluated using several overlap and symmetry metrics, such as the Dice similarity coefficient (DSC), root mean square error (RMSE), and peak signal-to-noise ratio (PSNR). The methods were analyzed first on images of the contralateral feet (right and left) of the same subject, called in this paper &quot;contralateral registration&quot;, and second on a pair of images of the same subject acquired at two different times, T0 and T10, after applying a cold stress test, called &quot;multi-temporal registration&quot;. Results showed that the intensity-based approach and the pyramid approach for subpixel registration give the best results for both types of registration (contralateral / multi-temporal) and can be used efficiently for registering these types of images, even under changing conditions.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_70-Registration_Methods_for_Thermal_Images_of_Diabetic_Foot_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Tweets using Unsupervised Learning Techniques and the K-Means Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130669</link>
        <id>10.14569/IJACSA.2022.0130669</id>
        <doi>10.14569/IJACSA.2022.0130669</doi>
        <lastModDate>2022-06-30T12:23:39.2870000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Victor Guevara-Ponce</creator>
        
        <creator>Fernando Sierra-Linan</creator>
        
        <creator>Saul Beltozar-Clemente</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Techniques; machine learning; classification; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Today, web content such as images, text, speech, and videos is user-generated, and social networks have become increasingly popular as a means for people to share their ideas and opinions. Twitter is one of the most popular social media platforms for expressing feelings about current events. The main objective of this study is to classify and analyze content about affiliates of the Pension and Funds Administration (AFP) published on Twitter. The study incorporates machine learning techniques for data mining, cleaning, tokenization, exploratory analysis, classification, and sentiment analysis. To gather and examine the data, Twitter was queried with the hashtag #afp, followed by descriptive and exploratory analysis, including tweet metrics. Finally, a content analysis was carried out, including word frequency calculation, lemmatization, and classification of words by sentiment, emotions, and a word cloud. The study uses tweets published in May 2022. Sentiment was distributed across three polarity classes, positive, neutral, and negative, representing 22%, 4%, and 74% respectively. Supported by the unsupervised learning method and the K-Means algorithm, the number of clusters was determined using the elbow method. Finally, the sentiment analysis and the resulting clusters indicate a very pronounced dispersion; the distances are not very similar, even though the data were standardized.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_69-Sentiment_Analysis_of_Tweets_using_Unsupervised_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Detection on X-Ray Images using a Combining Mechanism of Pre-trained CNNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130668</link>
        <id>10.14569/IJACSA.2022.0130668</id>
        <doi>10.14569/IJACSA.2022.0130668</doi>
        <lastModDate>2022-06-30T12:23:39.2730000+00:00</lastModDate>
        
        <creator>Oussama El Gannour</creator>
        
        <creator>Soufiane Hamida</creator>
        
        <creator>Shawki Saleh</creator>
        
        <creator>Yasser Lamalem</creator>
        
        <creator>Bouchaib Cherradi</creator>
        
        <creator>Abdelhadi Raihani</creator>
        
        <subject>COVID-19; deep learning; transfer learning; feature extraction; concatenation technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The COVID-19 infection, sparked by the severe acute respiratory syndrome coronavirus SARS-CoV-2 as described by the World Health Organization, originated in Wuhan, China, and eventually extended to every nation worldwide in 2020. This research aims to establish an efficient Medical Diagnosis Support System (MDSS) for recognizing COVID-19 in chest radiography X-ray data. To build an ever more efficient classifier, this MDSS employs a concatenation mechanism to merge pretrained convolutional neural networks (CNNs) based on Transfer Learning (TL). In the feature extraction phase, the proposed classifier employs a parallel deep feature extraction approach based on Deep Learning (DL), which increases the accuracy of the proposed model and thus identifies COVID-19 cases more accurately. The suggested concatenation classifier was trained and validated using a chest radiography image database with four categories: COVID-19, Normal, Pneumonia, and Tuberculosis. Four separate public X-ray imaging datasets were integrated to construct this dataset. The concatenation classifier achieved 99.66% accuracy and 99.48% sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_68-COVID_19_Detection_on_X_Ray_Images_using_a_Combining_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Intrusion Detection System using Machine Learning for IoT based on ToN-IoT Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130667</link>
        <id>10.14569/IJACSA.2022.0130667</id>
        <doi>10.14569/IJACSA.2022.0130667</doi>
        <lastModDate>2022-06-30T12:23:39.2570000+00:00</lastModDate>
        
        <creator>Abdallah R. Gad</creator>
        
        <creator>Mohamed Haggag</creator>
        
        <creator>Ahmed A. Nashat</creator>
        
        <creator>Tamer M. Barakat</creator>
        
        <subject>Intrusion detection system (IDS); internet of things (IoT); ToN-IoT dataset; machine learning (ML)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Internet of Things (IoT) is a collection of common physical things that can communicate and synthesize data by connecting to the internet over network infrastructure. IoT networks are increasingly vulnerable to security breaches as their popularity grows, and cyber security attacks are among the most severe dangers to IoT security. Many academics are therefore interested in enhancing the security of IoT systems, and machine learning (ML) approaches have been employed in intrusion detection systems (IDSs) to provide better security capabilities. This work proposes a novel distributed detection system based on ML approaches to detect attacks in IoT and mitigate malicious occurrences. Furthermore, the great majority of current studies use the NSL-KDD or KDD-CUP99 datasets, which are not updated with new attacks. Consequently, the ToN-IoT dataset, created from a large-scale, diverse IoT network, was used for training and testing. The ToN-IoT dataset reflects data from each layer of an IoT system: cloud, fog, and edge. Various ML methods were tested on each partition of the ToN-IoT dataset, and the proposed model is the first suggested model based on data collected from all layers of the same IoT system. The Chi2 technique was used to select features in the network dataset, reducing the number of features to 20. For the Windows dataset, a correlation matrix was employed as the feature selection tool to extract the most relevant features from the whole dataset. To balance the classes, the SMOTE method was used. This paper tests numerous ML approaches on both binary and multi-class classification problems. According to the findings, the XGBoost approach is superior to the other ML algorithms for each node in the suggested model.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_67-A_Distributed_Intrusion_Detection_System_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Optimized SVM in Sample Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130666</link>
        <id>10.14569/IJACSA.2022.0130666</id>
        <doi>10.14569/IJACSA.2022.0130666</doi>
        <lastModDate>2022-06-30T12:23:39.2570000+00:00</lastModDate>
        
        <creator>Xuemei Yao</creator>
        
        <subject>Support vector machine; parameter optimization; K-fold cross validation; sample classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Support vector machines (SVMs) have unique advantages in solving problems with small samples, nonlinearity, and high dimensionality. They rest on a relatively complete theory and have been widely used in various fields. The classification accuracy and generalization ability of SVMs are determined by the selected parameters, for which there is no solid theoretical guidance. To address this parameter optimization problem, we applied random selection, genetic algorithms (GA), particle swarm optimization (PSO), and K-fold cross validation (K-CV) to optimize the parameters of SVMs. Taking classification accuracy, mean squared error, and squared correlation coefficient as the goals, the K-fold cross validation method was chosen as the best way to optimize SVM parameters. To further verify the performance of the SVM whose parameters were optimized by K-fold cross validation, a back propagation (BP) neural network and a decision tree were used as contrast models. The experimental results show that the SVM-cross-validation method achieves the highest classification accuracy in SVM parameter selection, leading to SVM classifiers that outperform both BP neural networks and the decision tree method.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_66-Application_of_Optimized_SVM_in_Sample_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Covid-19 Vaccination using Support Vector Machine in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130665</link>
        <id>10.14569/IJACSA.2022.0130665</id>
        <doi>10.14569/IJACSA.2022.0130665</doi>
        <lastModDate>2022-06-30T12:23:39.2400000+00:00</lastModDate>
        
        <creator>Majid Rahardi</creator>
        
        <creator>Afrig Aminuddin</creator>
        
        <creator>Ferian Fauzi Abdulloh</creator>
        
        <creator>Rizky Adhi Nugroho</creator>
        
        <subject>Covid-19; vaccination; support vector machine; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Along with the development of the Covid-19 pandemic, many responses and news items were shared through social media. The new Covid-19 vaccination promoted by the government raised pros and cons among the public, and public resistance to Covid-19 vaccination can lead to a higher fatality rate. This study carried out sentiment analysis of the Covid-19 vaccine using the Support Vector Machine (SVM). The research aims to study the public response to the acceptance of the vaccination program, and the results can be used to inform the direction of government policy. Data were collected via Twitter in 2021, preprocessed, and then classified using SVM; finally, the results were evaluated with a confusion matrix. The experimental results show that the SVM classified 56.80% of tweets as positive, 33.75% as neutral, and 9.45% as negative. The highest model accuracy, 92%, was obtained with the RBF kernel; the linear and polynomial kernels obtained 90% accuracy, and the sigmoid kernel 89%.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_65-Sentiment_Analysis_of_Covid_19_Vaccination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition System Design and Implementation using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130663</link>
        <id>10.14569/IJACSA.2022.0130663</id>
        <doi>10.14569/IJACSA.2022.0130663</doi>
        <lastModDate>2022-06-30T12:23:39.2230000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Irianto</creator>
        
        <creator>Azwan Aziz</creator>
        
        <creator>Chang Kai Xin</creator>
        
        <creator>A. K. M. Zakir Hossain</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Face recognition system; biometric identification; face detection; image processing; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Face recognition technology is used in biometric security systems to identify a person digitally before granting access to a system or the data in it. Many kidnapping and abduction cases happen around us; however, suspects may be set free when there is a lack of evidence or when the victims are unable to testify in court because they suffer from post-traumatic stress disorder (PTSD). The objectives of this study are to develop a device that captures the image of a kidnapper as evidence for future reference and sends the captured image to the victim&#39;s family through email, to design a face recognition system to be used in searching for kidnap suspects, and to determine the best training parameters for the convolutional neural network (CNN) layers used by the proposed face recognition system. The accuracy of the proposed system is tested with three different datasets, namely the AT&amp;T database, the face database from [23], and a custom face dataset, with results of 87.50%, 92.19%, and 95.93%, respectively. The overall face recognition accuracy of the proposed system is 98.48%. The best training parameters for the proposed CNN model are a kernel size of 5x5, 32 and 64 filters for the first and second convolutional layers, and a learning rate of 0.001.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_63-Face_Recognition_System_Design_and_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sparse Feature Aware Noise Removal Technique for Brain Multiple Sclerosis Lesions using Magnetic Resonance Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130664</link>
        <id>10.14569/IJACSA.2022.0130664</id>
        <doi>10.14569/IJACSA.2022.0130664</doi>
        <lastModDate>2022-06-30T12:23:39.2230000+00:00</lastModDate>
        
        <creator>Swetha M D</creator>
        
        <creator>Aditya C R</creator>
        
        <subject>Convolution neural networks; deep learning; denoising; magnetic resonance imaging; morphology learning; multiple sclerosis; sparse features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Magnetic Resonance Imaging (MRI) is a non-radiation-based medical imaging modality that provides super-resolution images of tissues. However, because of its complex nature, existing deep-learning-based noise removal (i.e., denoising) techniques yield poor reconstruction quality and are time-consuming. An extensive study shows that very limited work has been done on brain Multiple Sclerosis (MS) lesion MRI. Designing an efficient noise removal technique will aid in improving MRI quality, and thereby in achieving better segmentation and classification performance. To reduce computing time and enhance image quality (i.e., reduce noise), this paper presents the Sparse Feature Aware Noise Removal (SFANR) technique for brain MRI using a Convolutional Neural Network (CNN) architecture. A sparse-aware feature is incorporated into a patch-wise morphology learning model for removing noise in large-scale MRI MS lesion datasets. Experimental results demonstrate that our model SFANR outperforms all other state-of-the-art noise removal techniques in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM), with less running time.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_64-Sparse_Feature_Aware_Noise_Removal_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Separable Convolution Network for Prediction of Lung Diseases from X-rays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130662</link>
        <id>10.14569/IJACSA.2022.0130662</id>
        <doi>10.14569/IJACSA.2022.0130662</doi>
        <lastModDate>2022-06-30T12:23:39.2100000+00:00</lastModDate>
        
        <creator>Geetha N</creator>
        
        <creator>S. J. Sathish Aaron Joseph S. J</creator>
        
        <subject>Lung diseases; X-rays; deep learning; filtering; edge detection; segmentation and swarm intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Accurate diagnosis of lung cancer is critical, and image segmentation and deep learning (DL) techniques have made it easier for medical practitioners. Yet the effectiveness of this approach is severely limited by a scarcity of skilled radiologists, and emerging DL-based methods frequently require supervision, such as labelled feature maps, to train the networks, which is difficult to obtain on a large scale. This study proposes a swarm-intelligence-based modified DL model called MSCOA-DSCN to classify and forecast various lung diseases from anterior X-rays. Image enhancement with a modified median filter and edge enhancement with a statistical range are applied for better image quality; the statistical range focuses on the disparity between the minimum and maximum pixels in each 3&#215;3 input image cluster. Enriched Auto-Seed Fuzzy Means Morphological Clustering (EASFMC) is utilized for segmentation, and together these steps identify edges in X-ray imaging. A deep separable convolution network (DSCN) is used in the created system to predict the class of lung cancer, and a Modified Butterfly Optimization Algorithm (MBOA) is applied for the feature selection procedure. The present study is compared with various state-of-the-art classification algorithms using the NIH Chest-Xray-14 database.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_62-Deep_Separable_Convolution_Network_for_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Threshold Segmentation of Magnetic Column Defect Image based on Artificial Fish Swarm Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130661</link>
        <id>10.14569/IJACSA.2022.0130661</id>
        <doi>10.14569/IJACSA.2022.0130661</doi>
        <lastModDate>2022-06-30T12:23:39.1930000+00:00</lastModDate>
        
        <creator>Wang Jun</creator>
        
        <creator>Hou Mengjie</creator>
        
        <creator>Zhang Ruiran</creator>
        
        <creator>Xiao Jingjing</creator>
        
        <subject>Defect detecting; threshold segmentation; artificial fish swarm algorithm; improved 2D-OTSU algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Aiming at the low efficiency of magnetic column surface defect detection, its vulnerability to human influence, and the insufficient anti-noise performance of the existing 2D-OTSU threshold segmentation algorithm, an improved artificial fish swarm algorithm combined with the 2D-OTSU algorithm is proposed to improve the accuracy and real-time performance of magnetic column surface defect detection. Firstly, a weight coefficient is added to the original 2D-OTSU algorithm, and a distance function is set to optimize the weight coefficient; the objective function is established by combining the inter-class and intra-class scatter matrices, and the optimal threshold is obtained. Secondly, a logistic model is used to optimize the perception range and moving step size of the artificial fish swarm algorithm, so as to balance the local and global search ability of the algorithm and improve its convergence speed. Finally, the optimal segmentation threshold is used to segment the image, and the algorithm is compared with other algorithms on four benchmark functions. Experimental results show that the improved algorithm can effectively reduce the time complexity of threshold segmentation and improve efficiency. At the same time, the segmentation accuracy of the improved algorithm for magnetic column defects reaches 93%, showing good practicability.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_61-Threshold_Segmentation_of_Magnetic_Column_Defect_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight ECC-based Three-Factor Mutual Authentication and Key Agreement Protocol for WSNs in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130660</link>
        <id>10.14569/IJACSA.2022.0130660</id>
        <doi>10.14569/IJACSA.2022.0130660</doi>
        <lastModDate>2022-06-30T12:23:39.1770000+00:00</lastModDate>
        
        <creator>Meriam Fariss</creator>
        
        <creator>Hassan El Gafif</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <subject>Mutual authentication; elliptic-curve cryptography; Physically Unclonable Function; wireless sensor networks; key-agreement; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Internet of Things (IoT) represents a giant ecosystem in which many objects are connected, collecting and exchanging large amounts of data at very high speed. One of the main parts of IoT is the Wireless Sensor Network (WSN), which is deployed in critical applications such as military surveillance and healthcare that require high levels of security and efficiency. Authentication is a primary security factor that ensures the legitimacy of data requests and responses in WSNs. Moreover, sensor nodes are characterized by their limited resources, which raises the need for lightweight authentication schemes applicable in IoT environments. This paper presents an informal analysis of the security of X. Li et al.’s protocol, which is claimed to be efficient and resistant to various attacks. The analysis shows that the reviewed protocol does not provide user anonymity and is vulnerable to session key disclosure, many-time pad, and insider attacks. To address all these requirements, a new three-factor authentication protocol is presented, which guarantees higher security using a Physically Unclonable Function (PUF) and Elliptic Curve Cryptography (ECC). This protocol not only withstands the security weaknesses in X. Li et al.’s scheme but also provides smart card revocation and resistance to cloning attacks. In terms of both computational and communication costs, results demonstrate that the proposed scheme provides higher efficiency than other related protocols, making it notably suitable for IoT environments.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_60-A_Lightweight_ECC_based_Three_Factor_Mutual_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Positioning System: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130659</link>
        <id>10.14569/IJACSA.2022.0130659</id>
        <doi>10.14569/IJACSA.2022.0130659</doi>
        <lastModDate>2022-06-30T12:23:39.1630000+00:00</lastModDate>
        
        <creator>N. Syazwani C. J</creator>
        
        <creator>Nur Haliza Abdul Wahab</creator>
        
        <creator>Noorhazirah Sunar</creator>
        
        <creator>Sharifah H. S. Ariffin</creator>
        
        <creator>Keng Yinn Wong</creator>
        
        <creator>Yichiet Aun</creator>
        
        <subject>Global positioning system (GPS); indoor positioning system (IPS); real-time location system (RTLS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Global Positioning System (GPS) has seen wide development for outdoor environments in recent years. GPS offers a wide range of outdoor applications, including military use, weather forecasting, vehicle tracking, mapping, farming, and many more. In an outdoor environment, exact location, velocity, and time can be determined using GPS, whose receivers passively receive satellite signals rather than emitting them. However, due to No Line-of-Sight (NLoS) conditions, low signal strength, and low accuracy, GPS is not suitable for indoor use. As a consequence, the indoor environment necessitates a different approach, an Indoor Positioning System (IPS), capable of locating a position within a structure. IPS systems provide a variety of location-based indoor tracking solutions, such as Real-Time Location Systems (RTLS), indoor navigation, inventory management, and first-responder location systems. Different technologies, algorithms, and techniques have been proposed in IPS to determine the position and the accuracy of the system. This paper presents a review of indoor positioning technologies, algorithms, and techniques. It is expected to give the reader a better understanding of IPS and to compare solutions, helping readers choose the suitable technologies, algorithms, and techniques to implement according to their situation.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_59-Indoor_Positioning_System_A_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>K-Means Customers Clustering by their RFMT and Score Satisfaction Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130658</link>
        <id>10.14569/IJACSA.2022.0130658</id>
        <doi>10.14569/IJACSA.2022.0130658</doi>
        <lastModDate>2022-06-30T12:23:39.1630000+00:00</lastModDate>
        
        <creator>Doae Mensouri</creator>
        
        <creator>Abdellah Azmani</creator>
        
        <creator>Monir Azmani</creator>
        
        <subject>Customer segmentation; customer satisfaction; RFMT model; machine learning; k-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Businesses derive more revenue from building and maintaining long-term relationships with their customers. It is therefore essential to build refined strategies based on customer relationship management, with the purpose of increasing turnover and profits while retaining customers. In this context, customer segmentation, which is at the heart of marketing strategy, makes it possible to answer questions about the amount of investment to be released, the marketing campaigns to be organized, and the development strategy to be implemented. This paper develops an extended RFMT (Recency, Frequency, Monetary, and Interpurchase Time) model, namely the RFMTS model, by introducing a new dimension, satisfaction ‘S’. The aim of this model is to analyze online consumer satisfaction over time and discern changes in order to implement customer segmentation. This article proposes a segmentation approach that clusters clients with the unsupervised machine learning method k-means, based on data generated using the proposed RFMTS model, in order to improve customer relationships and develop more effective personalized marketing strategies. The study shows that, unlike previous attempts to develop new RFM models, adding satisfaction to the existing RFM model for customer clustering has a major impact and helps identify which customers are satisfied and which are not. By ignoring the “satisfaction” indicator, what went well and what did not cannot be understood; consequently, the business loses its unsatisfied, loyal, and profitable customers and either fails or relies only on the satisfied ones to continue making profits for an indefinite period of time.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_58-K_Means_Customers_Clustering_by_their_RFMT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Big Data Analytics into Business Process Modelling: Possible Contributions and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130657</link>
        <id>10.14569/IJACSA.2022.0130657</id>
        <doi>10.14569/IJACSA.2022.0130657</doi>
        <lastModDate>2022-06-30T12:23:39.1470000+00:00</lastModDate>
        
        <creator>Zaeem AL-Madhrahi</creator>
        
        <creator>Dalbir Singh</creator>
        
        <creator>Elaheh Yadegaridehkordi</creator>
        
        <subject>Big data analytics; business process modeling; BPM; organisation’s performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Business Process Modelling (BPM) is a set of organised, structured, and related activities that boost the development and evolution of an organisation&#39;s success by understanding, improving, and automating existing business processes. Recently, the integration of Big Data Analytics (BDA) into BPM has gained wide attention as a unique opportunity for organisations to enhance their efficiency, effectiveness, added value, and competitive advantage. However, some organisations still rely on old data-driven strategies and are late in integrating BDA into their BPM. This study aims to explore the possible contributions and challenges of integrating BDA into BPM. It finds that better decision making, enhancing the organisation&#39;s performance, upgrading business process capabilities, and supporting supply chain management are the main contributions of BDA to BPM in organisations, while poor data quality, a shortage of BDA professionals, and data security and protection are the main challenges that hinder organisations from implementing BDA. This study provides valuable insights for organisations that intend to implement big data technologies in their business processes.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_57-Integrating_Big_Data_Analytics_into_Business_Process_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of a Solution for Low-Power Wide-Area Network using LoRaWAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130655</link>
        <id>10.14569/IJACSA.2022.0130655</id>
        <doi>10.14569/IJACSA.2022.0130655</doi>
        <lastModDate>2022-06-30T12:23:39.1300000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <creator>Floarea PITU</creator>
        
        <subject>LoRa; low-power; LoRaWAN protocol; SigFox; LPWAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In recent years, there has been an increasing emphasis on Low-Power Wide-Area Network (LPWAN) technologies that allow efficient and fast data transfer, with the goal of large-scale integration of various devices facilitating long-distance communications in fields such as agriculture, logistics, or infrastructure. This category of technologies includes SigFox, LoRa, NB-IoT, and others. One area where these low-power technologies can be used successfully is agriculture, in which monitoring humidity and temperature is crucial. The socio-economic context of 2022 highlights as a main priority the security of food and raw materials provided by agriculture, creating a need for large, efficient, and traceable production. Starting from this context, this paper proposes an architecture based on LoRa (Long-Range) technology and the LoRaWAN protocol. We place special emphasis on monitoring parameters that are extremely important in agriculture, namely temperature, humidity, and pressure. Although there are multiple works of research in this direction, or in similar directions in other fields of activity, each of them focuses on a strictly defined geographical area, and most of the time the results are purely theoretical. The contribution of this paper is, first, a practical, implementable system and, second, a solution that can be adapted to different geographical regions. Moreover, at the end of the paper, we compare and analyse at the architectural level two LPWAN technologies, SigFox vs. LoRa, implemented in the same context in order to find the best results.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_55-The_Implementation_of_a_Solution_for_Low_Power_Wide_Area_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chaos Detection and Mitigation in Swarm of Drones Using Machine Learning Techniques and Chaotic Attractors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130656</link>
        <id>10.14569/IJACSA.2022.0130656</id>
        <doi>10.14569/IJACSA.2022.0130656</doi>
        <lastModDate>2022-06-30T12:23:39.1300000+00:00</lastModDate>
        
        <creator>Emmanuel NEBE</creator>
        
        <creator>Mistura Laide SANNI</creator>
        
        <creator>Rasheed Ayodeji ADETONA</creator>
        
        <creator>Bodunde Odunola AKINYEMI</creator>
        
        <creator>Sururah Apinke BELLO</creator>
        
        <creator>Ganiyu Adesola ADEROUNMU</creator>
        
        <subject>Chaos detection; swarm of drones; machine learning; autoencoder; R&#246;ssler system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Most existing approaches to identifying and tackling chaos in drone swarm missions focus on single-drone scenarios. There is a need to assess the status of a system with multiple drones; hence, this research presents an on-the-fly chaotic behavior detection model for large numbers of flying drones using machine learning techniques. A succession of Artificial Intelligence knowledge discovery procedures, namely Logistic Regression (LR), Convolutional Neural Networks (CNN), Gaussian Mixture Models (GMMs), and Expectation–Maximization (EM), was employed to reduce the dimension of the actual flight data of the drone swarm and classify it as non-chaotic or chaotic. A one-dimensional, multi-layer-perceptron, deep neural network-based classification system was also used to collect the related characteristics and distinguish between chaotic and non-chaotic conditions. The R&#246;ssler system was then employed to deal with such chaotic conditions. Validation of the proposed chaotic detection and mitigation technique was performed using real-world flight test data, demonstrating its viability for real-time implementation. The results demonstrated that swarm mobility horizon-based monitoring is a viable solution for real-time monitoring of a system&#39;s chaos with a significantly reduced commotion effect. The proposed technique has been tested to improve the performance of fully autonomous drone swarm flights.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_56-Chaos_Detection_and_Mitigation_in_Swarm_of_Drones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-layer Stacking-based Emotion Recognition using Data Fusion Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130654</link>
        <id>10.14569/IJACSA.2022.0130654</id>
        <doi>10.14569/IJACSA.2022.0130654</doi>
        <lastModDate>2022-06-30T12:23:39.1170000+00:00</lastModDate>
        
        <creator>Saba Tahseen</creator>
        
        <creator>Ajit Danti</creator>
        
        <subject>Electroencephalograph (EEG); linear regression based correlation coefficient; feature selection; multi-layer stacking model; machine learning techniques; emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Electroencephalography (EEG), or brain waves, is a commonly utilized bio-signal in emotion detection, because the data recorded from the brain has been found to show a connection between emotions and physiological effects. This paper is based on a feature selection strategy using a data fusion technique on the EEG Brainwave Dataset for classification. A multi-layer stacking classifier with two layers of machine learning techniques is introduced to concurrently learn the features and distinguish the emotion of pure EEG signal states as positive, neutral, or negative. The first layer of the stack includes a support vector classifier and Random Forest, and the second layer includes a multilayer perceptron and a Nu-support vector classifier. Features are selected based on a Linear Regression based correlation coefficient (LR-CC) score over different ranges n1, n2, n3, and n4: dataset d1 uses n1 and n2, dataset d2 combines n3 and n4, and a new dataset d3 is developed as the combination of d1 and d2. This feature selection strategy retains 997 of the 2548 features of the EEG Brainwave dataset and achieves an emotion recognition classification accuracy of 98.75%, which is comparable to many state-of-the-art techniques. This establishes some scientific groundwork for using a data fusion strategy in emotion recognition.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_54-Multi_layer_Stacking_based_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Classification Modeling for the Natural Language Texts with Subjectivity Characteristic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130653</link>
        <id>10.14569/IJACSA.2022.0130653</id>
        <doi>10.14569/IJACSA.2022.0130653</doi>
        <lastModDate>2022-06-30T12:23:39.1000000+00:00</lastModDate>
        
        <creator>Chen Xiao Yu</creator>
        
        <creator>Gao Feng</creator>
        
        <creator>Song Ying</creator>
        
        <creator>Zhang Xiao Min</creator>
        
        <subject>Car service complaint; text classification; machine learning; natural language texts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Natural language text classification methods are diverse, and text characteristics underpin their effectiveness. Taking car service complaint data as an example, this paper studies classification modeling for texts with a subjectivity characteristic. Effective handling of car service complaints is important for improving user experience and maintaining brand reputation. Manual classification commonly suffers from experience dependence, proneness to error, heavy workload, and other disadvantages, so research on automatic classification modeling is of great practical significance. The core steps of the research method in this study include word segmentation, text vectorization, correlation-based feature selection and dimensionality reduction, classification modeling based on a progressive method and random forest, and model reliability analysis. The results show that car service complaint texts can be effectively classified with this method, which provides a reference for related further research and application.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_53-Research_on_the_Classification_Modeling_for_the_Natural_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accuracy Enhancement of Prediction Method using SMOTE for Early Prediction Student&#39;s Graduation in XYZ University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130652</link>
        <id>10.14569/IJACSA.2022.0130652</id>
        <doi>10.14569/IJACSA.2022.0130652</doi>
        <lastModDate>2022-06-30T12:23:39.0830000+00:00</lastModDate>
        
        <creator>Ainul Yaqin</creator>
        
        <creator>Majid Rahardi</creator>
        
        <creator>Ferian Fauzi Abdulloh</creator>
        
        <subject>Prediction study period; SMOTE; neural network; k-nearest neighbors; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>According to 2014 regulations of the Minister of Education and Culture of the Republic of Indonesia, the student&#39;s study duration is one of the essential elements in implementing higher education. Higher education institutions can use early graduation prediction as a guide when developing policy. According to XYZ University data, Grade Point Average (GPA), gender, and age are all aspects to consider for the student study period. Using a dataset of 8,491 records, this study examined early prediction of student graduation based on XYZ University data, particularly in the information systems and informatics study programs. The aim is to find significant features and compare three prediction models: Artificial Neural Networks (ANN), K-Nearest Neighbors (K-NN), and Support Vector Machines (SVM). A key challenge in developing a prediction model is imbalanced data, which is handled with the Synthetic Minority Oversampling Technique (SMOTE). The machine learning models are then trained and compared. Prediction results improve after applying SMOTE: the best test accuracy is obtained by the ANN, rising from 62.5% on the imbalanced data to 70.5%, compared with 69.3% for K-NN with SMOTE, while the SVM accuracy increased to 69.8%. The most significant increase in recall, to 71.3%, also occurred with the ANN.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_52-Accuracy_Enhancement_of_Prediction_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RS Invariant Image Classification and Retrieval with Pretrained Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130651</link>
        <id>10.14569/IJACSA.2022.0130651</id>
        <doi>10.14569/IJACSA.2022.0130651</doi>
        <lastModDate>2022-06-30T12:23:39.0700000+00:00</lastModDate>
        
        <creator>D. N. Hire</creator>
        
        <creator>A. V. Patil</creator>
        
        <subject>CBIR; CNN; deep learning; ResNet18; rotation; scale; VGG19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Content-based image retrieval (CBIR), the task of finding related images in large datasets such as those on the Internet, is demanding, and scientists have been working on it from various angles for the last two decades. Deep learning has provided state-of-the-art results for image categorization and retrieval, but pre-trained deep learning models are not robust to rotation and scale variations. This work proposes a technique to improve the precision and recall of image retrieval. The method concentrates on extracting high-level, rotation- and scale-invariant features from the ResNet18 CNN (Convolutional Neural Network) model; these features are then used to classify images with the VGG19 deep learning model. After classification, if the class of a given query image is correct, both precision and recall reach the ideal 100% required of an image retrieval technique. Experimental results show that the proposed technique not only outperforms current techniques for rotated and scaled query images but also has preferable retrieval times. The performance investigation shows that the presented method improves the average precision from 76.50%, obtained with combined DCD (Dominant Color Descriptor), wavelet, and curvelet features, to 99.1%, and the average recall from 14.21% to 19.82% for rotated and scaled images on the Corel dataset. The average retrieval time of 1.39 s is also lower than that of existing modern techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_51-RS_Invariant_Image_Classification_and_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Sentiment Extraction using Fuzzy-Rule Based Deep Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130650</link>
        <id>10.14569/IJACSA.2022.0130650</id>
        <doi>10.14569/IJACSA.2022.0130650</doi>
        <lastModDate>2022-06-30T12:23:39.0700000+00:00</lastModDate>
        
        <creator>SIREESHA JASTI</creator>
        
        <creator>G. V. S. RAJ KUMAR</creator>
        
        <subject>Sentiment analysis; SemEval; recurrent neural networks; LSTM; word embeddings; accuracy; f1-score; fuzzy –rule; deep sentiment extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In the world of social media, the amount of textual data on the Internet is increasing exponentially, and a large portion of it expresses subjective opinions. Sentiment Analysis (SA), also named opinion mining, is used to automatically identify and extract subjective sentiments from text. In recent years, research on sentiment analysis has taken off because a huge amount of data is available on social media such as Twitter, and machine learning algorithms have grown in popularity in IR (Information Retrieval) and NLP (Natural Language Processing). In this work, we propose a three-phase system for the sentiment classification of Twitter tweets in the SemEval competition task: predicting the sentiment of a tweet as negative, positive, or neutral by analyzing the whole tweet. The first system applies the Artificial Bee Colony (ABC) optimization technique with the Bag-of-Words (BoW) representation, in association with Naive Bayes (NB) and k-Nearest Neighbor (kNN) classifiers and combinations of various categories of features, to identify the sentiment of a given tweet. The second system, designed to preserve context, develops a Rider Feedback Artificial Tree Optimization-enabled Deep Recurrent Neural Network (RFATO-enabled Deep RNN) for the efficient classification of sentiments into various grades. To further improve classification accuracy on an n-valued scale, an Adaptive Rider Feedback Artificial Tree (Adaptive RiFArT)-based deep neuro-fuzzy network is devised for efficient sentiment grade classification. Finally, this research proposes a Fuzzy-Rule Based Deep Sentiment Extraction (FBDSE) algorithm with deep sentiment score computation. Accuracy is used to evaluate the performance of the proposed systems. The fuzzy-rule based system achieved good accuracy compared with the machine learning and deep learning based approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_50-Deep_Sentiment_Extraction_using_Fuzzy_Rule.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sena TLS-Parser: A Software Testing Tool for Generating Test Cases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130649</link>
        <id>10.14569/IJACSA.2022.0130649</id>
        <doi>10.14569/IJACSA.2022.0130649</doi>
        <lastModDate>2022-06-30T12:23:39.0530000+00:00</lastModDate>
        
        <creator>Rosziati Ibrahim</creator>
        
        <creator>Samah W. G. AbuSalim</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Jahari Abdul Wahab</creator>
        
        <subject>Software testing; schema parser; software under test (SUT); model based testing (MBT); java applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Software complexity and size have been growing steadily, and the variety of testing has increased as well. The quality of software testing must improve to meet deadlines and reduce development testing costs. Testing software manually is time consuming, while automation saves time and money and increases test coverage and accuracy. Over the last several years, many approaches to automating test case creation have been proposed. Model-based testing (MBT) is a test design technique that supports the automation of software testing processes by generating test artefacts from a system model that represents the behavioral aspects of the system under test (SUT). This paper discusses the optimization technique for automatically generating test cases using Sena TLS-Parser, a plug-in tool developed to generate test cases automatically and reduce the time spent creating them manually. The process of automatic test case generation with Sena TLS-Parser is presented through several case studies. Experimental results on six publicly available Java applications show that the proposed Sena TLS-Parser framework outperforms other automated test case generation frameworks. Sena TLS-Parser has been shown to solve the problem of software testers creating test cases manually, while completing optimization in a shorter period of time.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_49-Sena_TLS_Parser_A_Software_Testing_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Layer based TCP Performance Enhancement in IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130648</link>
        <id>10.14569/IJACSA.2022.0130648</id>
        <doi>10.14569/IJACSA.2022.0130648</doi>
        <lastModDate>2022-06-30T12:23:39.0370000+00:00</lastModDate>
        
        <creator>Sultana Parween</creator>
        
        <creator>Syed Zeeshan Hussain</creator>
        
        <subject>Internet of things (IoT); transmission control protocol (TCP); cross-layer approach; packet scheduling; congestion control; fast retransmission; recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Transmission Control Protocol (TCP) can use multiple paths to transmit data simultaneously and improve its performance. However, previous TCP protocols in Internet of Things (IoT) networks have difficulty transmitting a larger number of subflows. To overcome these issues, we introduce a cross-layer framework that performs efficient packet scheduling and congestion control to increase TCP performance in IoT networks. Initially, the proposed IoT network is constructed on a grid topology using the Manhattan distance, which improves the scalability and flexibility of the network. After network construction, packet scheduling is performed with a fitness-based proportional fair (FPF) scheduling algorithm that considers numerous parameters such as bandwidth, delay, and buffer rate, and the best subflow is selected to reduce transmission delay. The scheduled subflow is sent over an optimal path to improve throughput and goodput. After packet scheduling, congestion control in TCP is performed using the cooperative constraint approximation 3+ (CoCoA3+-TCP) algorithm, which comprises three stages: congestion detection, fast retransmission, and recovery. Congestion detection in the TCP-IoT environment considers several parameters, and cat and mouse-based optimization (CMO) is utilized to adaptively estimate the retransmission timeout (RTO), reducing delay and improving convergence during retransmission. Fast retransmission and recovery improve network performance by adjusting the congestion window size, thereby avoiding congestion. The cross-layer approach is simulated using the network simulator NS-3.26, and the simulation results show that the proposed work achieves high TCP performance in terms of throughput, goodput, packet loss, transmission delay, jitter, and congestion window size.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_48-Cross_Layer_based_TCP_Performance_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Detecting and Mitigating Address Resolution Protocol (ARP) Poisoning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130647</link>
        <id>10.14569/IJACSA.2022.0130647</id>
        <doi>10.14569/IJACSA.2022.0130647</doi>
        <lastModDate>2022-06-30T12:23:39.0370000+00:00</lastModDate>
        
        <creator>Ahmed A. Galal</creator>
        
        <creator>Atef Z. Ghalwash</creator>
        
        <creator>Mona Nasr</creator>
        
        <subject>Address Resolution Protocol (ARP); ARP detecting; ARP mitigation; ARP spoofing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Address Resolution Protocol (ARP) poisoning attack is considered one of the most devastating attacks in a network context. As a result of its stateless nature and lack of authentication, the protocol suffers from many spoofing attacks in which attackers poison the cache of hosts on the network by sending spoofed ARP requests and replies. This paper proposes an approach for detecting and mitigating ARP poisoning that comprises three modules: Module 1 grants permission for the first time and stores information in the database, using an MD5 hash as a security measure; Module 2 avoids internal ARP; and Module 3 detects whether a MAC address has two IPs or an IP address has two MACs. The architecture includes a database that facilitates storing ARP table information, since ARP table entries generally expire after a short amount of time, ensuring that changes in the network are accounted for. Experiments were conducted in a real-life network environment using Ettercap to check the functionality of the proposed mechanism. The results show that the proposed approach was able to detect and mitigate ARP poisoning, especially cases where a MAC address has two IPs or an IP address has two MACs.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_47-A_New_Approach_for_Detecting_and_Mitigating_Address.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Particle Swarm Approach for Dynamic Automated Guided Vehicles Dispatching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130646</link>
        <id>10.14569/IJACSA.2022.0130646</id>
        <doi>10.14569/IJACSA.2022.0130646</doi>
        <lastModDate>2022-06-30T12:23:39.0230000+00:00</lastModDate>
        
        <creator>Radhia Zaghdoud</creator>
        
        <creator>Marwa Amara</creator>
        
        <creator>Khaled Ghedira</creator>
        
        <subject>Dispatching; automated guided vehicles; dynamic; containers; particle swarm; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Automated guided vehicle dispatching is one of the important operations in a container terminal because it affects the loading/unloading process. With the advent of automation, this operation has become faster but also more complex, and the environment has become dynamic and uncertain. This paper proposes an improved particle swarm approach for solving the bi-objective problem of automated guided vehicle dispatching and routing in the dynamic environment of a container terminal. The objectives are to minimize the total travel distance of all automated guided vehicles and to maximize the workload balance between them. The particle swarm algorithm in its basic form shows premature convergence; to improve this, the authors propose a method that allows the worst particles to escape local optima. The new Hybrid Guided Particle Swarm approach hybridizes the Dijkstra algorithm with a Guided Particle Swarm algorithm: the routing problem is solved with the Dijkstra algorithm and the dispatching problem with the guided particle swarm approach. As a first step, this approach was applied in a static environment where the dispatching and routing parameters are fixed in advance. The second step applies the approach in a dynamic environment where the number of containers associated with each automated guided vehicle, the shortest path, and the container locations can all change during algorithm execution. Numerical results in the static environment show good Hybrid Guided Particle Swarm performance, with faster and more stable convergence that surpasses previous approaches such as the Hybrid Genetic Approach, and demonstrate the efficiency of its extension, the Dynamic Hybrid Guided Particle Swarm, in the dynamic environment.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_46-Improved_Particle_Swarm_Approach_for_Dynamic_Automated_Guided_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Performance Analysis for Adaptive Genetic Algorithm with Nonlinear Probabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130645</link>
        <id>10.14569/IJACSA.2022.0130645</id>
        <doi>10.14569/IJACSA.2022.0130645</doi>
        <lastModDate>2022-06-30T12:23:39.0070000+00:00</lastModDate>
        
        <creator>Wenjuan Sun</creator>
        
        <creator>Qiaoping Su</creator>
        
        <creator>Hongli Yuan</creator>
        
        <creator>Yan Chen</creator>
        
        <subject>Adaptive genetic algorithm; genetic algorithm; nonlinear adjustment; probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Genetic Algorithm (GA) has been shown to fall easily into local optima because of its fixed crossover and mutation probabilities, while the Adaptive Genetic Algorithm (AGA) has strong global search capability because the two probabilities adjust adaptively. There are two categories of AGA according to how the crossover and mutation probabilities are adjusted: AGA with linear probability adjustment and AGA with nonlinear probability adjustment. AGA with linear adjustment of probability values cannot solve the problems of local optima and premature convergence, whereas a nonlinear adaptive probability adjustment strategy can avoid premature convergence, poor stability, and slow convergence. The optimization performance of typical AGAs with nonlinear probability adjustment is compared and analyzed on 10 benchmark functions. Compared with the traditional GA and other AGA algorithms, the AGA that adjusts crossover and mutation probabilities nonlinearly at both ends of the average fitness value has higher computational stability and finds the global optimal solution more easily, which provides ideas for the application of adaptive genetic algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_45-Optimization_Performance_Analysis_for_Adaptive_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cricket Event Recognition and Classification from Umpire Action Gestures using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130644</link>
        <id>10.14569/IJACSA.2022.0130644</id>
        <doi>10.14569/IJACSA.2022.0130644</doi>
        <lastModDate>2022-06-30T12:23:39.0070000+00:00</lastModDate>
        
        <creator>Suvarna Nandyal</creator>
        
        <creator>Suvarna Laxmikant Kattimani</creator>
        
        <subject>Cricket match; computer vision; deep learning; SNWOLF dataset; umpire recognition; umpire action images; CNN; event classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The advancement of hardware and deep learning technologies has made it possible to apply them to a variety of fields. The Convolutional Neural Network (CNN), a deep learning architecture, revolutionized the field of computer vision, and sports is one of its most popular applications. Cricket is a complex game with many different types of events. This work introduces a new dataset, SNWOLF, for detecting umpire postures and categorizing events in a cricket match. The dataset serves as a preliminary resource and was assessed in developing a system for the automatic generation of highlights from cricket matches. In cricket, the umpire has the authority to make crucial decisions about on-field incidents and signals important incidents with unique hand signals and gestures. Based on detecting the umpire&#39;s stance in action frames of cricket video, the system identifies the most frequently used event classes: SIX, NO BALL, WIDE, OUT, LEG BYE, and FOUR. The proposed method utilizes a CNN architecture to extract features and classify detected frames into the six umpire posture event classes. A completely new dataset of 1,040 umpire action images covering these six events was created. The CNN classifier was trained on 80% of the SNWOLF dataset images and tested on the remaining 20%. The approach achieves an average overall accuracy of 98.20% and converges to very low cross-entropy loss, making the proposed system an effective answer for the generation of cricket highlights.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_44-Cricket_Event_Recognition_and_Classification_from_Umpire_Action.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Footprint Extraction in Dense Area from LiDAR Data using Mask R-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130643</link>
        <id>10.14569/IJACSA.2022.0130643</id>
        <doi>10.14569/IJACSA.2022.0130643</doi>
        <lastModDate>2022-06-30T12:23:38.9900000+00:00</lastModDate>
        
        <creator>Sayed A. Mohamed</creator>
        
        <creator>Amira S. Mahmoud</creator>
        
        <creator>Marwa S. Moustafa</creator>
        
        <creator>Ashraf K. Helmy</creator>
        
        <creator>Ayman H. Nasr</creator>
        
        <subject>Deep learning; object detection; mask R-CNN; point cloud; light detecting and ranging (LiDAR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Building footprint extraction is an essential process for various geospatial applications; city management, for example, is entrusted with eliminating slums, which are increasing in rural areas. Several recent research investigations have revealed that, compared with more traditional methods, creating footprints in dense areas is challenging and reference data are in limited supply. Deep learning algorithms provide a significant improvement in the accuracy of automated building footprint extraction from remote sensing data. The Mask R-CNN object detection framework, used to extract buildings effectively in dense areas, sometimes fails to provide adequate building boundaries due to urban edge intersections and unstructured buildings. We therefore introduce a modified workflow that trains an ensemble of Mask R-CNN models with two ResNet backbones (34 and 101) and stacks the results to refine the structure of the building boundaries. The proposed workflow, including data preprocessing and deep-learning-based instance segmentation, was applied to a light detection and ranging (LiDAR) point cloud over a dense rural area. The proposed method produced better-regularized polygons and achieved an overall accuracy of 94.63%.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_43-Building_Footprint_Extraction_in_Dense_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Approach to Identify Regulatory Biomarkers in the Pathogenesis of Breast Carcinoma</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130641</link>
        <id>10.14569/IJACSA.2022.0130641</id>
        <doi>10.14569/IJACSA.2022.0130641</doi>
        <lastModDate>2022-06-30T12:23:38.9730000+00:00</lastModDate>
        
        <creator>Ghazala Sultan</creator>
        
        <creator>Swaleha Zubair</creator>
        
        <creator>Inamul Hasan Madar</creator>
        
        <creator>Harishchander Anandaram</creator>
        
        <subject>Breast cancer; invasive lobular carcinoma; invasive ductal carcinoma; biomarkers; MicroRNA; transcription factors; feed forward loops</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Breast cancer is reckoned among the most common causes of morbidity and mortality in women, adversely affecting the female population irrespective of age. The poor survival rate reported in invasive carcinoma cases demands the identification of key markers at early developmental stages. MicroRNAs play a critical role in gene regulation and are potential markers; over 2,000 miRNAs have been identified and are considered to offer a unique opportunity for the early detection of diseases. In this study, a gene-miRNA-TF interaction network was constructed from the differentially expressed genes obtained from invasive lobular and invasive ductal carcinoma samples. The network comprises experimentally validated miRNAs and transcription factors identified for the target genes, followed by thermodynamic studies to determine the binding free energy between mRNA and miRNA. Our analysis identified that the miRNA hsa-miR-28-5p binds MAD2L1 with an unexpectedly high binding free energy of -92.54 kcal/mol and also forms a canonical triplex with hsa-miR-203a, which acts as a catalyst to initiate MAD2L1 regulation. For the identified regulatory elements, we propose a mathematical model and feed-forward loops that may serve in understanding the regulatory mechanisms of breast cancer pathogenesis and progression.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_41-Computational_Approach_to_Identify_Regulatory_Biomarkers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discourse-based Opinion Mining of Customer Responses to Telecommunications Services in Saudi Arabia during the COVID-19 Crisis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130642</link>
        <id>10.14569/IJACSA.2022.0130642</id>
        <doi>10.14569/IJACSA.2022.0130642</doi>
        <lastModDate>2022-06-30T12:23:38.9730000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Artificial intelligence; collocate analysis; COVID-19; discourse; opinion mining; vector space clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This study used opinion mining theory and the potential of artificial intelligence to explore the opinions, sentiments, and attitudes that customers expressed on Twitter regarding the services provided by Saudi telecommunications companies during the COVID-19 crisis. A corpus of 12,458 Twitter posts was constructed covering the period 2020–2021. For data analysis, the study adopted a discourse-based mining approach, combining vector space classification (VSC) and collocation analysis. The results indicate that most users had negative attitudes and sentiments regarding the performance of the telecommunications companies during the pandemic, as reflected in both the lexical semantic properties and the discoursal and thematic features of their Twitter posts. The study of collocates and the discoursal properties of the data was useful in attaining a deeper understanding of the users’ responses and attitudes toward the performance of the telecommunications companies during the COVID-19 pandemic. Text clustering based on the “bag of words” model alone could not address the discoursal features in the corpus. Opinion mining applications, especially in Arabic, thus need to integrate discourse approaches to gain a better understanding of people’s opinions and attitudes regarding given issues.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_42-Discourse_based_Opinion_Mining_of_Customer_Responses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GonioPi: Towards Developing a Scalable, Versatile, Reliable and Accurate Handheld-Wearable Digital Goniometer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130640</link>
        <id>10.14569/IJACSA.2022.0130640</id>
        <doi>10.14569/IJACSA.2022.0130640</doi>
        <lastModDate>2022-06-30T12:23:38.9600000+00:00</lastModDate>
        
        <creator>Thomas Jonathan R. Garcia</creator>
        
        <creator>Dhong Fhel K. Gom-os</creator>
        
        <subject>Range of Motion (ROM); goniometer; physical therapy; goniometry; wearable; sensors; MPU-6050; Raspberry Pi Pico</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Range of Motion (ROM) Testing is an important physical examination performed in physical therapy to assess the ROM of a patient’s joint. The most commonly used instrument for ROM Testing is the universal goniometer. The most common cause of unreliable and inaccurate joint angle ROM measurements is measurement error. Multiple studies have been done to mitigate measurement errors in clinical goniometry by designing and developing wearable digital goniometers using sensor technology. This study aims to design and develop a handheld-wearable digital goniometer called the GonioPi that is versatile, scalable, reliable, and accurate, using the MPU-6050 IMU sensor and Raspberry Pi Pico as the main components. The results showed that the GonioPi is versatile and scalable, as it is able to support multiple ROM Tests using multiple different positions on people with varying heights, weights, and BMI categories. The results also showed that the GonioPi is reliable and accurate, as it recorded joint angle ROM measurement differences of less than 5 degrees and 10 degrees, the accepted standard values for reliability and accuracy, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_40-GonioPi_Towards_Developing_a_Scalable_Versatile_Reliable_and_Accurate_Handheld.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Optimal Deep Learning Architecture using Custom U-Net and Mask R-CNN Models for Kidney Tumor Semantic Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130639</link>
        <id>10.14569/IJACSA.2022.0130639</id>
        <doi>10.14569/IJACSA.2022.0130639</doi>
        <lastModDate>2022-06-30T12:23:38.9430000+00:00</lastModDate>
        
        <creator>Sitanaboina S L Parvathi</creator>
        
        <creator>Harikiran Jonnadula</creator>
        
        <subject>Kidney tumor (Blob) detection; custom U-Net; mask R-CNN; semantic segmentation; deep learning; medical image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Today, kidney medical imaging has become the backbone for health professionals in diagnosing kidney disease and determining its severity. Physicians commonly use Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) scans to obtain kidney disease information. The significance and impact of kidney tumor analysis drew researchers to semantic segmentation of kidney tumors. Traditional image processing methodologies, in general, require more computational power and manual assistance to analyze kidney medical images for tumor segmentation. Deep learning advances are enabling less computational and automated models for kidney medical image analysis and tumor delineation. Blob (region of interest) detection from medical images is gaining popularity in kidney disease diagnosis and is used widely in detecting tumors, glomeruli, and cell nuclei, among other things. Kidney tumor segmentation is challenging compared to other segmentation models due to morphological diversity, object overlapping, intensity variance, and integrated noise. In this paper, we propose a kidney tumor semantic segmentation model based on CU-Net and Mask R-CNN to extract kidney tumor information from abdominal MR images. Initially, we trained the Custom U-Net architecture on abdominal MR images with kidney masks for kidney image segmentation. The Mask R-CNN model is then used to delineate tumors from kidney images. Experiments on abdominal MR images using Python image processing libraries revealed that the proposed deep learning architecture segmented the kidney images and delineated the tumors with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_39-An_Efficient_and_Optimal_Deep_Learning_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MSA-SFO-based Secure and Optimal Energy Routing Protocol for MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130638</link>
        <id>10.14569/IJACSA.2022.0130638</id>
        <doi>10.14569/IJACSA.2022.0130638</doi>
        <lastModDate>2022-06-30T12:23:38.9270000+00:00</lastModDate>
        
        <creator>D. Naga Tej</creator>
        
        <creator>K V Ramana</creator>
        
        <subject>MANET; sail fish optimization; energy; routing protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Mobile Adhoc Network (MANET) is a fast-deployable wireless mobile network with minimal infrastructure requirements. In these networks, autonomous nodes may function as routers. Due to the mobility of MANET nodes, the network&#39;s topology is dynamic. Recent scientific emphasis has been placed on MANET security, yet only a few MANET attacks have been discussed in the existing literature, and wired networks provide more security choices than wireless networks. Most routing protocols fail in a MANET with a malicious node. This research focuses on S-DSR, a novel hybrid secure routing protocol that guarantees the delivery and performance of packets across network nodes. The protocol leverages neighbor trust information to choose the most secure route for file transfer and is simulated in OMNET++. It offers a higher delivery rate and lower delay than AODV, AOMDV, and other similar protocols. MANETs, or mobile ad-hoc networks, will be used in the future communication protocols of industrial wireless networks, decentralising the connection of smart devices. Due to the unidimensional nature of digital data, encryption methods cannot be applied directly. To strengthen the privacy of e-healthcare MANETs, a safe, lightweight keyframe extraction technique is required. The purpose of this work is to develop a secure protocol for MANET wireless networks; this study proposes the use of chaotic cryptography to enhance the security of MANET wireless networks. Using Modified Self-Adaptive Sailfish Optimization (MSA-SFO), it is possible to construct vital maps in a chaotic setting, and this method produces secure key pairs.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_38-MSA_SFO_based_Secure_and_Optimal_Energy_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Masked Face Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130637</link>
        <id>10.14569/IJACSA.2022.0130637</id>
        <doi>10.14569/IJACSA.2022.0130637</doi>
        <lastModDate>2022-06-30T12:23:38.9270000+00:00</lastModDate>
        
        <creator>Maad Shatnawi</creator>
        
        <creator>Nahla Almenhali</creator>
        
        <creator>Mitha Alhammadi</creator>
        
        <creator>Khawla Alhanaee</creator>
        
        <subject>Masked face human identification; face recognition; deep transfer learning; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Covid-19 is a global health emergency and a major concern in the industrial and residential sectors. It spreads easily, leading to health problems or death. Wearing a mask in public locations and busy areas is the most effective COVID-19 prevention measure. Face recognition provides an accurate identification method that overcomes uncertainties such as false prediction, high cost, and time consumption, since the face is the primary identifier of every human being. As a result, masked face identification is required to solve the issue of recognizing individuals wearing masks in several applications such as door access systems and smart attendance systems. This paper offers an important and intelligent method to solve this issue. We propose a deep transfer learning approach for masked face human identification. We created a dataset of masked-face images and examined six convolutional neural network (CNN) models on this dataset. All models show great performance in terms of very high face recognition accuracy and short training time.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_37-Deep_Learning_Approach_for_Masked_Face_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid RNN based Deep Learning Approach for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130636</link>
        <id>10.14569/IJACSA.2022.0130636</id>
        <doi>10.14569/IJACSA.2022.0130636</doi>
        <lastModDate>2022-06-30T12:23:38.9130000+00:00</lastModDate>
        
        <creator>Pramod Sunagar</creator>
        
        <creator>Anita Kanavalli</creator>
        
        <subject>F1 score; gated recurrent unit; GloVe; long - short term memory; precision; recall; recurrent neural network; region-based convolutional neural network; text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>As text classification has grown in relevance over the last decade, a plethora of approaches have been created to meet the difficulties related to it. To handle the complexities involved in the text classification process, the focus has shifted away from traditional machine learning methods and toward neural networks. In this work the traditional RNN model is augmented with different layers to test the accuracy of text classification. The work involves the implementation of an RNN+LSTM+GRU model, which is compared with RCNN+LSTM and RNN+GRU models. The models are trained using GloVe word embeddings. The accuracy and recall obtained from the models are assessed, and the F1 score is used to compare their performance. The hybrid RNN model has three LSTM layers and two GRU layers, whereas the RCNN model contains four convolution layers and four LSTM layers, and the RNN model contains four GRU layers. The weighted average for the hybrid RNN model is found to be 0.74, for RCNN+LSTM 0.69, and for RNN+GRU 0.77. The RNN+LSTM+GRU model shows moderate accuracy in the initial epochs, but the accuracy slowly increases as the number of epochs is increased.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_36-A_Hybrid_RNN_based_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decentralized Tribrid Adaptive Control Strategy for Simultaneous Formation and Flocking Configurations of Multi-agent System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130634</link>
        <id>10.14569/IJACSA.2022.0130634</id>
        <doi>10.14569/IJACSA.2022.0130634</doi>
        <lastModDate>2022-06-30T12:23:38.7570000+00:00</lastModDate>
        
        <creator>B. K. Swathi Prasad</creator>
        
        <creator>Hariharan Ramasangu</creator>
        
        <creator>Govind R. Kadambi</creator>
        
        <subject>Simultaneous; flocking; polygonal formation; decentralized; hybrid; adaptive; control strategy; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This paper focuses on the development of a tribrid control strategy for leader-follower flocking of multi-agents in an octagonal polygonal formation. The tribrid approach encompasses Reinforcement Learning (RL), centralized, and decentralized control strategies. While the RL for multi-agent polygonal formation addresses the issues of scalability, the centralized strategy maintains the inter-agent distance in the formation and the decentralized strategy reduces the consensus error (in position and velocity). Unlike previous studies focusing only on a predefined trajectory, this paper deals with the leader-follower scenario through a decentralized tribrid control strategy. Two cases of the initial positions of the multi-agents are dealt with in this paper: the octagonal pattern from RL and agents randomly distributed in the spatial environment. The tribrid control strategy is aimed at simultaneous formation and flocking, and at stability within a shorter response time. The convergence of the flocking error to zero in 3 s substantiates the validity of the proposed control strategy, which is faster than previous control methods. Implicit use of the centralized scheme in the decentralized control strategy facilitates retention of the formation structure of the initial configuration. The average position error of the agents with respect to the leader is within the position band in 3 s, which confirms the maintenance of the formation during flocking.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_34-Decentralized_Tribrid_Adaptive_Control_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Merged Dataset Creation Method Between Thermal Infrared and Microwave Radiometers Onboard Satellites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130635</link>
        <id>10.14569/IJACSA.2022.0130635</id>
        <doi>10.14569/IJACSA.2022.0130635</doi>
        <lastModDate>2022-06-30T12:23:38.7570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Wavelets; VIRS/SST; TMI/SST; MRA; Daubechies; TRMM; TIR; MSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>A merged dataset creation method between Thermal Infrared (TIR) and Microwave Scanning Radiometer (MSR) instruments onboard remote sensing satellites is proposed. One of the key issues here is the relation between thermal and microwave emissions from the same observation target, in particular Sea Surface Temperature (SST). An example from the Tropical Rainfall Measuring Mission (TRMM) satellite, with the Visible Infrared Scanner (VIRS) as TIR and the TRMM Microwave Imager (TMI) as MSR, is shown in this paper. SST is estimated independently with VIRS or TMI. A method for interpolation of multi-sensor satellite images based on Multi-Resolution Analysis (MRA) is also proposed. The experimental results with the TMI/SST image and the VIRS/SST image show that the Root Mean Square (RMS) error ranges from 0.87 to 0.91 degrees C.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_35-Merged_Dataset_Creation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Core Elements Impacting Cloud Adoption in the Government of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130633</link>
        <id>10.14569/IJACSA.2022.0130633</id>
        <doi>10.14569/IJACSA.2022.0130633</doi>
        <lastModDate>2022-06-30T12:23:38.7400000+00:00</lastModDate>
        
        <creator>Norah Alrebdi</creator>
        
        <creator>Nabeel Khan</creator>
        
        <subject>Cloud computing; e-governance; cloud computing adoption; smart government; Saudi Arabia vision 2030</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Kingdom of Saudi Arabia is taking rapid steps towards digital transformation in the field of government services. Cloud computing adoption may be the next step supporting this digital transformation, providing many features and reducing costs. Therefore, this paper presents multiple factors that may make it difficult to move to the cloud, identified through several interviews and questionnaires with government sector workers who have technical experience, so that caution can be taken and suitable solutions developed in advance. This paper also presents some recommendations and suggestions that are useful to consider when adopting the cloud in the public sector.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_33-Core_Elements_Impacting_Cloud_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19: Challenges and Opportunities in the Online University Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130631</link>
        <id>10.14569/IJACSA.2022.0130631</id>
        <doi>10.14569/IJACSA.2022.0130631</doi>
        <lastModDate>2022-06-30T12:23:38.7230000+00:00</lastModDate>
        
        <creator>Irena Valova</creator>
        
        <creator>Tsvetelina Mladenova</creator>
        
        <subject>e-Learning; online learning; students&#39; attitude to e-learning; pandemic outbreak; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The COVID-19 pandemic had a very severe impact on education both in schools and in universities. In the span of several weeks, educators around the world had to completely transform their teaching methods and students had to adapt to the new form of learning. The following article reviews the opinions of university students based on three different studies – one conducted before the pandemic and distance learning, one in the middle of it, and one at the end of the distance learning period. The goal is to see how students&#39; thinking and perceptions of online learning have changed over the last three years as a result of the different conditions.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_31-COVID_19_Challenges_and_Opportunities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-modal Brain MR Image Registration using A Novel Local Binary Descriptor based on Statistical Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130632</link>
        <id>10.14569/IJACSA.2022.0130632</id>
        <doi>10.14569/IJACSA.2022.0130632</doi>
        <lastModDate>2022-06-30T12:23:38.7230000+00:00</lastModDate>
        
        <creator>Thuvanan Borvornvitchotikarn</creator>
        
        <subject>Local binary descriptor; multi-modal image registration; statistical approach; medical image registration; similarity measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Medical image registration (MIR) has played an important role in medical image processing during the last decade. Its main objective is to integrate the information inherent in two images of the same object, acquired from different scanning sources, for guiding medical treatments such as diagnosis, surgery, and therapy. A challenging task in MIR arises from the complex relationships of image intensities between the two images, and its performance depends primarily on the chosen similarity measure technique. In this work, a statistical local binary descriptor (SLBD) is proposed as a novel local descriptor for similarity measurement, which is simple to compute and can handle multi-modal registration more effectively. The proposed SLBD employs two statistical values, i.e., the mean and the standard deviation, of all intensities within the image patch for its computation. The experimental results show that SLBD outperforms other descriptors in terms of registration accuracy. In addition, SLBD is demonstrated to be robust to different modalities.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_32-Multi_modal_Brain_MR_Image_Registration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Gradient Algorithm based Noise Subspace Estimation with Full Rank Update for Blind CSI Estimator in OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130630</link>
        <id>10.14569/IJACSA.2022.0130630</id>
        <doi>10.14569/IJACSA.2022.0130630</doi>
        <lastModDate>2022-06-30T12:23:38.7100000+00:00</lastModDate>
        
        <creator>Saravanan Subramanian</creator>
        
        <creator>Govind R. Kadambi</creator>
        
        <subject>Orthogonal Frequency Division Multiplexing (OFDM); Carrier Frequency Offset (CFO); Channel State Information (CSI); Recursive Least Square (RLS); Singular Value Decomposition (SVD); Channel Impulse Response (CIR); BPSK; QPSK; QAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This paper presents a modified Gradient-based method to directly compute the noise subspace iteratively from the received Orthogonal Frequency Division Multiplexing (OFDM) symbols to estimate Channel State Information (CSI). By invoking the matrix inversion lemma, which is extensively used in Recursive Least Square (RLS) algorithms, the proposed computationally efficient method enables direct computation of the noise subspace using the inverse of the autocorrelation matrix of the received OFDM symbols. In the case of a vector input, the modified Gradient algorithm uses a rank-one update to calculate the noise subspace recursively. For an input in matrix form, the modified Gradient algorithm uses a full-rank update. The validity, efficacy, and accuracy of the proposed modified Gradient algorithm have been substantiated through a comparison of the results with the conventional Singular Value Decomposition (SVD) algorithm, which is widely used for subspace estimation. The simulation results obtained through the modified Gradient algorithm show a satisfactory correlation with the results of SVD, even though the computational complexity of the modified Gradient algorithm is relatively lower. Apart from results encompassing various power levels of the multipath channel, this paper also discusses the adaptive tracking of CSI and presents a comparative study.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_30-Modified_Gradient_Algorithm_based_Noise_Subspace.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Path Planning between Improved Informed and Uninformed Algorithms for Mobile Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130629</link>
        <id>10.14569/IJACSA.2022.0130629</id>
        <doi>10.14569/IJACSA.2022.0130629</doi>
        <lastModDate>2022-06-30T12:23:38.6930000+00:00</lastModDate>
        
        <creator>Mohamed Amr</creator>
        
        <creator>Ahmed Bahgat</creator>
        
        <creator>Hassan Rashad</creator>
        
        <creator>Azza Ibrahim</creator>
        
        <subject>Mobile robots; informed algorithm; uninformed algorithm; path planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This work is concerned with Path Planning Algorithms (PPA), which hold an important place in robotics navigation. Navigation has become indispensable to most modern inventions. Mobile robots have to move to a relevant task point in order to achieve the tasks assigned to them. The way actions are planned may restrict the task duration and, in some situations, even determine whether the mission is accomplished. This paper aims to study and compare six commonly used informed and uninformed algorithms. Three different maps have been created with gradually increasing difficulty levels related to the number of obstacles in the tested maps. The paper provides a detailed comparison between the algorithms under investigation across several parameters, such as total steps, straight steps, rotation steps, and search time. Promising results were obtained when the proposed algorithms were applied to a case study.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_29-Comparison_of_Path_Planning_between_Improved_Informed_and_Uninformed_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synthetic Data Augmentation of Tomato Plant Leaf using Meta Intelligent Generative Adversarial Network: Milgan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130628</link>
        <id>10.14569/IJACSA.2022.0130628</id>
        <doi>10.14569/IJACSA.2022.0130628</doi>
        <lastModDate>2022-06-30T12:23:38.6930000+00:00</lastModDate>
        
        <creator>Sri Silpa Padmanabhuni</creator>
        
        <creator>Pradeepini Gera</creator>
        
        <subject>Basic image operations; meta-learning techniques; generator; discriminator; synthetic data; sampling techniques; latent points; kernel filters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Agriculture is one of the most popular case studies in deep learning. Most researchers want to detect different diseases at the early stages of cultivation to protect the farmer&#39;s economy. Deep learning techniques need more data to develop an accurate system. In traditional approaches, researchers generated more synthetic data using basic image operations, but these approaches are complicated and expensive. In deep learning and computer vision, the system&#39;s accuracy is the crucial component for deciding the system&#39;s efficiency, and the model&#39;s precision is based on the images&#39; size and quality. Obtaining many images from the real-world environment in medicine and agriculture is difficult. The image augmentation technique helps the system generate more images that can replicate physical circumstances by performing various operations. It also prevents overfitting, especially when the system has fewer images than required. A few researchers have experimented with CNNs and simple Generative Adversarial Networks (GANs), but these approaches create images with more noise. The proposed research aims to generate more data using a Meta-Learning approach. The images are processed using kernel filters. Different geometric transformations are passed as input to the enhanced GANs to reduce the noise and create more synthetic images using latent points, which act as weights in the neural networks. The proposed system uses random sampling techniques, passes a few processed images to the generator component of the GAN, and uses the discriminator component to classify the synthetic data created by the Meta-Learning approach.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_28-Synthetic_Data_Augmentation_of_Tomato_Plant_Leaf.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Highly Imbalanced Multi-class Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130627</link>
        <id>10.14569/IJACSA.2022.0130627</id>
        <doi>10.14569/IJACSA.2022.0130627</doi>
        <lastModDate>2022-06-30T12:23:38.6770000+00:00</lastModDate>
        
        <creator>Mohd Hakim Abdul Hamid</creator>
        
        <creator>Marina Yusoff</creator>
        
        <creator>Azlinah Mohamed</creator>
        
        <subject>Imbalanced data; highly imbalanced data; highly imbalanced multi-class; data strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Machine learning technology has a massive impact on society because it offers solutions to many complicated problems such as classification, clustering analysis, and prediction, especially during the COVID-19 pandemic. Data distribution in machine learning has been an essential aspect of providing unbiased solutions. From the earliest literature published on highly imbalanced data until recently, machine learning research has focused mostly on binary classification data problems. Research on highly imbalanced multi-class data is still largely unexplored, even as the need for better analysis and predictions in handling Big Data grows. This study reviews the models and techniques for handling highly imbalanced multi-class data, along with their strengths, weaknesses, and related domains. Furthermore, the paper uses a statistical method to explore a case study with a severely imbalanced dataset. This article aims to (1) understand the trend of highly imbalanced multi-class data through analysis of the related literature; (2) analyze the previous and current methods of handling highly imbalanced multi-class data; and (3) construct a framework for highly imbalanced multi-class data. An analysis of the chosen highly imbalanced multi-class dataset will also be performed and adapted to current machine learning methods and techniques, followed by discussions of open challenges and the future direction of highly imbalanced multi-class data. Finally, this paper presents a novel framework for highly imbalanced multi-class data. We hope this research can provide insights into the potential development of better methods and techniques to handle and manipulate highly imbalanced multi-class data.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_27-Survey_on_Highly_Imbalanced_Multi_class_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Small Sized File Access Efficiency in Hadoop Distributed File System by Integrating Virtual File System Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130626</link>
        <id>10.14569/IJACSA.2022.0130626</id>
        <doi>10.14569/IJACSA.2022.0130626</doi>
        <lastModDate>2022-06-30T12:23:38.6630000+00:00</lastModDate>
        
        <creator>Neeta Alange</creator>
        
        <creator>Anjali Mathur</creator>
        
        <subject>HDFS; Small sizes files; virtual file system; bucket chain; ensemble classifiers; text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Hadoop was invented to address storage of large datasets, handling of data in different formats, and data generated at high speed; it is a well-known solution for such big data problems. This work proposes an improved solution, in terms of access efficiency and time, for small sized files. A novel approach called the VFS-HDFS architecture is designed, which focuses on optimizing small sized file access and offers significant improvement over the existing solutions, i.e., HDFS sequence files, HAR, and NHAR. In the proposed work, a virtual file system layer has been added as a wrapper on top of the existing HDFS architecture; the research work is carried out without altering the existing HDFS architecture itself. In this paper, the drawbacks of the existing techniques implemented in HDFS HAR, NHAR, and sequence files, i.e., the Flat File Technique and the Table Chain Technique, are overcome by using the Bucket Chain Technique. The files to merge into a single bucket are selected using an ensemble classifier, which is a combination of different classifiers; combining multiple classifiers gives more accurate results. Using this proposed system, better results are obtained than with the existing system in terms of access efficiency for small sized files in HDFS.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_26-Optimization_of_Small_Sized_File_Access_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Storytelling Framework to Assist Young Children in Understanding Dementia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130625</link>
        <id>10.14569/IJACSA.2022.0130625</id>
        <doi>10.14569/IJACSA.2022.0130625</doi>
        <lastModDate>2022-06-30T12:23:38.6630000+00:00</lastModDate>
        
        <creator>Noreena Yi-Chin Liu</creator>
        
        <creator>Nooralisa M Tuah</creator>
        
        <creator>Kevin Chi-Jen Miao</creator>
        
        <subject>Digital storytelling; Dementia; interactive learning; entertainment experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>A digital storytelling tool is one of the interactive technologies that can help youngsters better comprehend Dementia. Dementia makes it difficult for older people to maintain their daily routines, and they have difficulty communicating effectively with those around them. Similarly, children whose grandparents have Dementia will struggle to understand their grandparents&#39; situation, which can negatively influence the children&#39;s relationships with their grandparents. Learning through interactive digital storytelling will shape younger people&#39;s entertainment experiences and may help them better comprehend Dementia; as a result, the children&#39;s relationships with their grandparents may be strengthened. This study aims to present a framework for digital storytelling that helps young children understand more about Dementia. The framework was developed in a step-by-step procedure that included analyzing and synthesizing current applications and relevant research, constructing the framework, and having it confirmed by experts. Researchers and developers may use the framework as a guideline to build meaningful digital storytelling features.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_25-Digital_Storytelling_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bayesian Network Modelling for Improved Knowledge Management of the Expert Model in the Intelligent Tutoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130624</link>
        <id>10.14569/IJACSA.2022.0130624</id>
        <doi>10.14569/IJACSA.2022.0130624</doi>
        <lastModDate>2022-06-30T12:23:38.6470000+00:00</lastModDate>
        
        <creator>Fatima-Zohra Hibbi</creator>
        
        <creator>Otman Abdoun</creator>
        
        <creator>El Khatir Haimoudi</creator>
        
        <subject>Smart tutoring system; expert model; knowledge processing; Bayesian network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The expert module is an essential part of an intelligent tutoring system. This module uses only declarative knowledge, excluding the other types of domain knowledge, procedural and conditional, and this omission makes the expert module very fragile. To solve this issue, the authors propose to embed knowledge processing into the expert model. The contribution aims to empower the expert model by fragmenting the knowledge process into four categories: Analysis, Application, Conceptualization, and Experimentation, using the Bayesian Network method as an instrument for modelling expert systems in uncertain domains. By managing the expert system through a list of criteria, the expert module can suggest the correct type of knowledge and its subsequent status.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_24-Bayesian_Network_Modelling_for_Improved_Knowledge_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The 4W Framework of the Online Social Community Model for Satisfying the Unmet Needs of Older Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130623</link>
        <id>10.14569/IJACSA.2022.0130623</id>
        <doi>10.14569/IJACSA.2022.0130623</doi>
        <lastModDate>2022-06-30T12:23:38.6300000+00:00</lastModDate>
        
        <creator>Farhat Mahmoud Embarak</creator>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Alhuseen Omar Alsayed</creator>
        
        <creator>Mohamed Bashir Buhalfaya</creator>
        
        <creator>Abdurrahman Abdulla Younes</creator>
        
        <creator>Blha Hassan Naser</creator>
        
        <subject>Online social community; elderly’s unmet needs; 4w framework; elderly’s requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Humans&#39; cherished and respectable desires can be fulfilled by social integration through interaction with their friends and families. These kinds of interactions are critical for the elderly, particularly for someone who has retired. Online social communities could assist them and have a beneficial impact on the elderly. However, because elderly people are hesitant to use new technology, researchers have attempted to integrate specially built social networking applications into simple user-interface gadgets for the elderly through context-aware systems. A proper understanding between the aged and the supporting community is needed for optimal execution of the platform. The study presents a 4W framework (Who, What, Where, When) to effectively comprehend and portray the application of the online social interaction community model in assisting the elderly in satisfying their unmet needs, as well as to improve the system&#39;s efficiency in addressing those unfulfilled demands. It is essential to discover what the users are keen on and provide a chance for the community group to make good decisions by utilizing the insights gained from these events.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_23-The_4W_Framework_of_the_Online_Social_Community_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internal Works Quality Assessment for Wall Evenness using Vision-based Sensor on a Mecanum-Wheeled Mobile Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130622</link>
        <id>10.14569/IJACSA.2022.0130622</id>
        <doi>10.14569/IJACSA.2022.0130622</doi>
        <lastModDate>2022-06-30T12:23:38.6170000+00:00</lastModDate>
        
        <creator>Ahmad Zaki Shukor</creator>
        
        <creator>Muhammad Herman bin Jamaluddin</creator>
        
        <creator>Mohd Zulkifli bin Ramli</creator>
        
        <creator>Ghazali bin Omar</creator>
        
        <creator>Syed Hazni Abd Ghani</creator>
        
        <subject>Construction industry standards; internal works quality assessment; vision; Mecanum wheels</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Robotics has been used in the construction industry for several decades. Various advanced robotic mechanisms and technologies have been developed to assist with specific construction tasks. However, little research has been found on quality assessment of the finished structures. This research proposes a quality assessment robot that assists in assessing the internal works of a building against a quality assessment criterion in the Malaysian Construction Industry Standards. There are various assessment criteria, such as hollowness, cracks and damages, finishing, and jointing. This paper focuses on wall evenness, using a camera mounted on a mobile robot with a Mecanum wheel design. The wall evenness assessment was done by projecting a laser leveler on the wall and capturing images with the camera, which are later processed by a central controller. Results show that the deviation calculation method can be used to differentiate between even and uneven walls: pixel deviations for even walls show values of less than 15 pixels, while uneven walls show values of more than 20 pixels.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_22-Internal_Works_Quality_Assessment_for_Wall_Evenness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision based Human Activity Recognition using Deep Neural Network Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130621</link>
        <id>10.14569/IJACSA.2022.0130621</id>
        <doi>10.14569/IJACSA.2022.0130621</doi>
        <lastModDate>2022-06-30T12:23:38.6170000+00:00</lastModDate>
        
        <creator>Jitha Janardhanan</creator>
        
        <creator>S. Umamaheswari</creator>
        
        <subject>Activity recognition; long short-term memory (LSTM); deep learning; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Human Activity Recognition (HAR) has become a popular research subject because of its broad applications. With the growth of deep learning, novel ideas have emerged to tackle HAR problems; one example is recognizing human behaviors without exposing a person&#39;s identity. Advanced computer vision approaches, meanwhile, are still considered promising directions for constructing a human activity classification approach from a series of video frames. To solve this issue, a deep learning neural network technique using Depthwise Separable Convolution (DSC) with Bidirectional Long Short-Term Memory (DSC-BLSTM) is proposed here. The redeeming features of the proposed network include the DSC convolution, which helps reduce not only the number of learnable parameters but also the computational cost of both training and testing, and the bidirectional LSTM, which can combine the positive and negative time directions. The proposed method comprises three phases: video data preparation, feature extraction using a Depthwise Separable Convolutional Neural Network, and the DSC-BLSTM algorithm. The proposed DSC-BLSTM method obtains high accuracy and F1-score when compared to other HAR algorithms such as MC-HF-SVM, baseline LSTM, and Bidir-LSTM.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_21-Vision_based_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Machine Learning Algorithms in Coronary Heart Disease: A Systematic Literature Review and Meta-Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130620</link>
        <id>10.14569/IJACSA.2022.0130620</id>
        <doi>10.14569/IJACSA.2022.0130620</doi>
        <lastModDate>2022-06-30T12:23:38.6000000+00:00</lastModDate>
        
        <creator>Solomon Kutiame</creator>
        
        <creator>Richard Millham</creator>
        
        <creator>Adebayor Felix Adekoya</creator>
        
        <creator>Mark Tettey</creator>
        
        <creator>Benjamin Asubam Weyori</creator>
        
        <creator>Peter Appiahene</creator>
        
        <subject>Coronary heart diseases; algorithms; datasets; ensembling algorithms; machine learning; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This systematic review relied on the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) statement and 37 relevant studies. The literature search used search engines including PubMed, Hindawi, SCOPUS, IEEE Xplore, Web of Science, Google Scholar, Wiley Online, Jstor, Taylor and Francis, Ebscohost, and ScienceDirect. This study focused on four aspects: Machine Learning Algorithms, datasets, best-performing algorithms, and software used in coronary heart disease (CHD) predictions. The empirical articles never mentioned &#39;Reinforcement Learning,&#39; a promising aspect of Machine Learning. Ensemble algorithms showed reasonable accuracy rates but were not common, whereas deep neural networks were poorly represented. Only a few papers applied primary datasets (4 of 37). Logistic Regression (LR), Deep Neural Network (DNN), K-Means, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and boosting algorithms were the best performing algorithms. This systematic review will be valuable for researchers predicting coronary heart disease using machine learning techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_20-Application_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Smart Industry for the Sustainability through Open Innovation based on ITSM (Information Technology Service Management)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130619</link>
        <id>10.14569/IJACSA.2022.0130619</id>
        <doi>10.14569/IJACSA.2022.0130619</doi>
        <lastModDate>2022-06-30T12:23:38.5830000+00:00</lastModDate>
        
        <creator>Asti Amalia Nur Fajrillah</creator>
        
        <creator>Muharman Lubis</creator>
        
        <creator>Arariko Rezeki Pasa</creator>
        
        <subject>Smart industry; Ward and Peppard; IS/IT strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The Indonesian coffee industry plays a strategic role and holds potential for the livelihoods of the business people in it, as well as for Indonesia&#39;s economic growth. One trend that has attracted attention is the smart industry concept, a digital-based industry concept that is highly relevant to technological developments in this era. When companies want to implement a smart industry, they need a strategy for implementing IT (Information Technology) so that the investment spent is right for building the company&#39;s targets. This study aims to design a systematic IS/IT strategy to realize an effective smart industry concept. The analysis and design method used is the Ward &amp; Peppard framework, which consists of two phases, namely the input and output phases. The input phase consists of internal business, external business, and internal and external IT analyses. The output phase includes the design of IT management strategies, business information systems, and IT strategies. The results of this study take the form of a portfolio of IT designs for the Margamulya Coffee Producers Cooperative, consisting of business strategy designs and IT management.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_19-Towards_the_Smart_Industry_for_the_Sustainability_through_Open_Innovation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Augmentation Techniques on Chilly Plants to Classify Healthy and Bacterial Blight Disease Leaves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130618</link>
        <id>10.14569/IJACSA.2022.0130618</id>
        <doi>10.14569/IJACSA.2022.0130618</doi>
        <lastModDate>2022-06-30T12:23:38.5830000+00:00</lastModDate>
        
        <creator>Sudeepthi Govathoti</creator>
        
        <creator>A Mallikarjuna Reddy</creator>
        
        <creator>Deepthi Kamidi</creator>
        
        <creator>G BalaKrishna</creator>
        
        <creator>Sri Silpa Padmanabhuni</creator>
        
        <creator>Pradeepini Gera</creator>
        
        <subject>Image augmentation; geometric transformations; transfer learning; neural style learning; residual network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Designing an automation system for the agriculture sector is difficult using machine learning approaches, so many researchers have proposed deep learning systems, which require a huge amount of data for training. The proposed system suggests that geometric transformations on the original dataset help the system generate more images that replicate the physical circumstances; this process is known as “Image Augmentation”. This enhancement of the data helps produce more accurate systems in terms of all metrics. Previously, when researchers worked with machine learning techniques, they implemented traditional approaches, which are time consuming and expensive; in deep learning, most of these operations are handled automatically by the system. The proposed system therefore applies neural style transfer, and to classify the images it uses transfer learning. The system utilizes images available in the open source repository known as “Kaggle”, which consists mainly of images of chilly, tomato, and potato plants; this system focuses on chilly plants because they are the most productive plants in the South Indian regions. Image augmentation creates new images in different scenarios from the existing images by applying popular deep learning techniques. The model uses ResNet-50, a pre-trained model, for transfer learning; the advantage of a pre-trained model is that it does not have to be developed from scratch, and it gives more accuracy with fewer epochs. The model has achieved an accuracy of “100%”.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_18-Data_Augmentation_Techniques_on_Chilly_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Video Compression using Region of Interest (ROI) Method on Video Surveillance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130617</link>
        <id>10.14569/IJACSA.2022.0130617</id>
        <doi>10.14569/IJACSA.2022.0130617</doi>
        <lastModDate>2022-06-30T12:23:38.5700000+00:00</lastModDate>
        
        <creator>DewiAnggraini Puspa Hapsari</creator>
        
        <creator>Sarifuddin Madenda</creator>
        
        <creator>Muhammad Subali</creator>
        
        <creator>Aini Suri Talita</creator>
        
        <subject>Compression; decompression; foreground; region of interest; video surveillance systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>With the increase in criminal activity, people use various surveillance techniques to create a sense of security. One of the most widely used techniques is installing CCTV cameras at various locations. Surveillance systems include supporting devices apart from the CCTV cameras; one such device is the hard disk that stores the recorded data. CCTV recording has two modes: motion detection mode and continuous mode. Continuous mode records constantly, which affects the amount of hard disk space used. Motion detection mode records single events rather than everything, saving hard disk space, but it may miss some events. Given these two modes, compression technology is required. The current compression technology applies the ROI method. A ROI (Region of Interest) is the part of the image to be filtered so that operations can be performed on it; ROI allows certain areas of the digital image to be coded differently so that they have higher quality than the surrounding area (background). This paper offers a novel approach that saves the foreground frames generated by the ROI method and compresses them. The approach is applied to the AVI, MJPEG 2000, and MPEG-4 video formats. A decompression process is used to restore the original video data in order to measure the method&#39;s performance: the compression ratio and Peak Signal-to-Noise Ratio (PSNR) are compared with those of the traditional method that does not implement the ROI-based approach. The PSNR values for the proposed method are above 40 dB, which indicates that the reconstructed video is similar to the original video, even though the pixel values have changed slightly. The ROI-based compression method can increase the compression ratio 5-7 times over the existing method for lossy AVI format video, while on MJPEG-2000 and MPEG-4 format video it increases the compression ratio 7-15 times and 1-3 times, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_17-A_Novel_Approach_to_Video_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Mask Wear Detection by using Facial Recognition System for Entrance Authorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130616</link>
        <id>10.14569/IJACSA.2022.0130616</id>
        <doi>10.14569/IJACSA.2022.0130616</doi>
        <lastModDate>2022-06-30T12:23:38.5530000+00:00</lastModDate>
        
        <creator>Munirah Ahmad Azraai</creator>
        
        <creator>Ridhwan Rani</creator>
        
        <creator>Raja Mariatul Qibtiah</creator>
        
        <creator>Hidayah Samian</creator>
        
        <subject>Face recognition; face detection; face mask; coronavirus; intelligent system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>A Face Mask Wear Detection Device for Entrance Authorization is designed to ensure that everyone wears a face mask at all times in a confined space. This is one of the easiest ways to lower the rate of coronavirus infection and hence save lives. Asthma, high blood pressure, heart failure, and many other chronic conditions can make infection with the novel Coronavirus (nCoV-21) fatal. Consequently, the goal of this research is a face mask wear detection device that helps reduce the rate of novel Coronavirus infection on premises and in public places by ensuring that customers comply with the Standard Operating Procedures (SOP) set by the Malaysian Ministry of Health (MOH). The device recognizes customers&#39; faces, whether or not they are covered by a face mask, upon entry into a facility. Additionally, the device can help ensure compliance with the maximum number of customers allowed on the premises. The goal of this study is a facial recognition system that uses technology designed as an individual disciplinary aid and follows the safety procedures at this critical time. This research was developed using the engineering design process development model, which has four phases, namely: identifying the problem, generating possible solutions, developing the prototype, and testing and evaluating the solution. Results indicate that the developed product functions effectively. Experts have found that using this product helps people stick to their face mask routines. The design of this product has improved, meaning the overall quality of the product is elevated to perform as intended in terms of intelligent technologies.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_16-Face_Mask_Wear_Detection_by_using_Facial_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Medicinal Plant Classification and Bioactivity Identification Based on Dense Net Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130614</link>
        <id>10.14569/IJACSA.2022.0130614</id>
        <doi>10.14569/IJACSA.2022.0130614</doi>
        <lastModDate>2022-06-30T12:23:38.5370000+00:00</lastModDate>
        
        <creator>Banita Pukhrambam</creator>
        
        <creator>Arun Sahayadhas</creator>
        
        <subject>Indian medicinal plants; convolutional neural network; DenseNet; IMPPAT dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Plant species identification helps a wide range of stakeholders, including forestry services, botanists, taxonomists, physicians and pharmaceutical laboratories, endangered species organizations, the government, and the general public. As a result, there has been a spike in interest in developing automated plant species recognition systems. Using computer vision and deep learning approaches, this work proposes a fully automated system for identifying medicinal plants; accordingly, work is being done to classify the correct therapeutic plants based on their images. The training data set contains image data; this work uses the Indian Medicinal Plants, Phytochemistry, and Therapeutics (IMPPAT) benchmark dataset. A Convolutional Neural Network (CNN) with the DenseNet algorithm is used as the classification system for medicinal plants, and the work explains how these models operate and where they are efficient. This study also contributes a standard dataset of medicinal plants that can be found in various parts of Manipur, a state in northeast India. The suggested DenseNet model has a recognition rate of 99.56% on the IMPPAT dataset and 98.51% on the Manipuri dataset, suggesting that the DenseNet method is a promising technique for smart forestry.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_14-Advanced_Medicinal_Plant_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Short Words Signature Verification using Markov Chain and Fisher Linear Discriminant Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130615</link>
        <id>10.14569/IJACSA.2022.0130615</id>
        <doi>10.14569/IJACSA.2022.0130615</doi>
        <lastModDate>2022-06-30T12:23:38.5370000+00:00</lastModDate>
        
        <creator>M. Nazir</creator>
        
        <creator>Surendra Singh Choudhary</creator>
        
        <subject>Human signature verification; morphological directional transformations; structuring element; optical character recognition; fisher linear discriminant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2022.0130615.retraction</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_15-Short_Words_Signature_Verification_using_Markov_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shallow Net for COVID-19 Classification Based on Biomarkers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130613</link>
        <id>10.14569/IJACSA.2022.0130613</id>
        <doi>10.14569/IJACSA.2022.0130613</doi>
        <lastModDate>2022-06-30T12:23:38.5230000+00:00</lastModDate>
        
        <creator>Mahmoud B. Rokaya</creator>
        
        <subject>COVID-19; pandemic; shallow net; deep learning; decision trees; ROC curve; PCA analysis; biomarkers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In many cases, especially at the onset of an epidemic, it is very important to be able to determine the severity of illness of a given patient. Identifying severe cases early helps direct effort appropriately. At the beginning, the number of classified cases and the available data are limited, so one needs a system that can be trained on limited data and still give a trusted result. The current work focuses on the importance of biomarkers in differentiating between recovered patients and mortalities. Even with limited data, a decision tree (DT) was able to distinguish between recovered patients and mortalities with an accuracy of 94%. A shallow dense network achieved an accuracy of 75%; however, when a 10-fold technique was applied to the same data, the net achieved 99% accuracy. The data used in this work were collected from King Faisal Hospital in Taif city under formal permission from the health ministry. PCA analysis confirmed that two parameters have the greatest ability to differentiate between recovered patients and mortalities, and the ROC curve reveals that these parameters are calcium and hemoglobin. The shallow net gives an accuracy of 92% when trained using calcium and hemoglobin only. This paper shows that, with a suitable choice of parameters, a small decision tree or shallow net can be trained quickly to decide which patients need more attention, so that hospital resources are used more reasonably during the pandemic. All codes and data can be accessed from the following link “codes and data”.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_13-Shallow_Net_for_COVID_19_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Architecture for Smart Home Systems Based on IoT, Context-awareness and Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130612</link>
        <id>10.14569/IJACSA.2022.0130612</id>
        <doi>10.14569/IJACSA.2022.0130612</doi>
        <lastModDate>2022-06-30T12:23:38.5070000+00:00</lastModDate>
        
        <creator>Samah A. Z. Hassan</creator>
        
        <creator>Ahmed M. Eassa</creator>
        
        <subject>Smart Home Systems (SHS); Internet of Things (IoT); Context-awareness (CA); Cloud Computing (CC); Rule-based Event Processing Systems (RbEPS); Smart Home System architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>The main objective of this paper is to propose a simple, low-cost, reliable, and scalable architecture for building Smart Home Systems (SHSs) that can remotely automate and control home appliances using a microcontroller. The proposed architecture aims to take advantage of emerging technologies to make Smart Home systems easier to develop and to provide better management by suitably expanding their capabilities. The suggested design intends to make it easier and more convenient for many applications to access context data, and to provide a new schematic guide for creating Smart Home Systems and data processing that are as complete and comprehensive as possible. Related topics such as smart homes and their intelligent systems are addressed by examining prior work and presenting the authors&#39; views in support of the new architecture. The building blocks of the proposed architecture include Classic Smart Homes, the Internet of Things (IoT), Context-awareness (CA), Cloud Computing (CC), and Rule-based Event Processing Systems (RbEPS). Finally, the proposed architecture is validated and evaluated by constructing a smart home system.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_12-A_Proposed_Architecture_for_Smart_Home_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proctoring and Non-proctoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130610</link>
        <id>10.14569/IJACSA.2022.0130610</id>
        <doi>10.14569/IJACSA.2022.0130610</doi>
        <lastModDate>2022-06-30T12:23:38.4900000+00:00</lastModDate>
        
        <creator>Yusring Sanusi Baso</creator>
        
        <subject>Proctoring system; comparative study; Arabic translating course; online exam</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This research describes learning achievement assessment technology, especially proctoring technology. The study compares and contrasts proctoring and non-proctoring procedures used for online exams. The sample case was the test scores of students enrolled in Hasanuddin University&#39;s Indonesian-Arabic translation course. The research method was a non-experimental quantitative method comparing students&#39; online test results under proctoring and non-proctoring systems during online exams. The test scores of 101 students (40 male and 61 female) from two different classes were sampled. The results for both classes were collected six times: three times using the proctoring system and three times using the non-proctoring system. A trend analysis was performed on the data, which were analyzed in SPSS 26 via the two-way ANOVA procedure. The results indicate that the online proctoring system resulted in lower test scores than the online non-proctoring system, while the variables of class and gender did not affect the learning results.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_10-Proctoring_and_Non_proctoring_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Groundnuts Leaf Disease Recognition using Neural Network with Progressive Resizing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130611</link>
        <id>10.14569/IJACSA.2022.0130611</id>
        <doi>10.14569/IJACSA.2022.0130611</doi>
        <lastModDate>2022-06-30T12:23:38.4900000+00:00</lastModDate>
        
        <creator>Rajnish M. Rakholia</creator>
        
        <creator>Jinal H. Tailor</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Jasleen Kaur</creator>
        
        <creator>Hardik Pahuja</creator>
        
        <subject>Groundnut leaf disease recognition; progressive resizing; deep learning; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Groundnut is an important oilseed crop worldwide, and India is the second-largest producer of groundnuts. The crop is prone to attack by numerous diseases, which are among the most important factors contributing to loss of productivity and degradation of quality, both of which ultimately weaken the agricultural economy. Therefore, it is necessary to find better and more reliable automated solutions to recognize groundnut leaf diseases. In this paper, a deep learning-based model with progressive resizing is proposed for groundnut leaf disease recognition and classification tasks. Five major categories of groundnut leaf condition are considered, namely leaf spot, armyworm damage, wilts, yellow leaf, and healthy leaf. The proposed model was trained with and without progressive resizing and was validated using cross-entropy loss. The first-of-its-kind dataset used for training and validation was manually created from the Saurashtra region of Gujarat state, India. The dataset was imbalanced, with different numbers of samples per category; to handle this problem, the extended focal loss function was used. To evaluate the performance of the proposed model, different performance measures including precision, sensitivity, F1-score, and accuracy were applied. The proposed model achieved state-of-the-art accuracy of 96.12%, and the model with progressive resizing performed better than the traditional core neural network-based model built on cross-entropy loss.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_11-Groundnuts_Leaf_Disease_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast and Robust Fuzzy-based Hybrid Data-level Method to Handle Class Imbalance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130609</link>
        <id>10.14569/IJACSA.2022.0130609</id>
        <doi>10.14569/IJACSA.2022.0130609</doi>
        <lastModDate>2022-06-30T12:23:38.4730000+00:00</lastModDate>
        
        <creator>Kamlesh Upadhyay</creator>
        
        <creator>Prabhjot Kaur</creator>
        
        <creator>Ritu Sachdeva</creator>
        
        <subject>Data level approaches; undersampling; oversampling; fuzzy concept; imbalanced data-sets; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Conventional classification algorithms do not provide accurate results when the data distribution (class sizes) is unequal or the data is corrupted with noise, because the results are biased towards the bigger class. In many real-life cases, there is a requirement to uncover unusual or smaller classes, and in a number of applications the importance of the smaller, rarer class is much higher than that of the bigger class, for example brain tumor detection, credit card fraud or anomaly detection, and many more. This is usually called the class imbalance problem. The situation becomes worse when the data is corrupted with additional impurities such as noise, class overlap, or other glitches, because in these scenarios traditional methods produce even poorer results. This paper proposes a fast, simple, and effective fuzzy-based hybrid data-level technique to overcome the class imbalance problem under noisy conditions. To appraise the classification performance of the offered technique, it is tested on 40 real imbalanced UCI data sets with imbalance ratios ranging from 1.82 to 129.44 and compared with 12 other approaches. The outcomes indicate that the presented hybrid data-level technique performs better and faster than the other approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_9-Fast_and_Robust_Fuzzy_based_Hybrid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Pelican Komodo Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130607</link>
        <id>10.14569/IJACSA.2022.0130607</id>
        <doi>10.14569/IJACSA.2022.0130607</doi>
        <lastModDate>2022-06-30T12:23:38.4600000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Ashri Dinimaharawati</creator>
        
        <subject>Metaheuristic; Pelican Optimization Algorithm; Komodo Mlipir Algorithm; portfolio optimization algorithm; LQ45 index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In this work, a new metaheuristic algorithm, namely the Hybrid Pelican Komodo Algorithm (HPKA), is proposed. The algorithm is developed by hybridizing two metaheuristic algorithms: the Pelican Optimization Algorithm (POA) and the Komodo Mlipir Algorithm (KMA). Through hybridization, the proposed algorithm is designed to adopt the advantages of both POA and KMA while mitigating their shortcomings. The main improvements are as follows. First, the proposed algorithm replaces the randomized target with the preferred target in the first phase. Second, four possible movements are selected stochastically in the first phase. Third, in the second phase, the proposed algorithm replaces the agent’s current location with the problem space width to control the local problem space. The proposed algorithm is then challenged to tackle theoretical and real-world optimization problems. The results show that the proposed algorithm is better than the grey wolf optimizer (GWO), marine predator algorithm (MPA), KMA, and POA in solving 14, 12, 14, and 18 functions, respectively. Meanwhile, in solving the portfolio optimization problem, the proposed algorithm achieves 109%, 46%, 47%, and 1% better total capital gain than GWO, MPA, KMA, and POA, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_7-Hybrid_Pelican_Komodo_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Users’ Acceptance and Sense of Presence towards VR Application with Stimulus Effectors on a Stationary Bicycle for Physical Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130608</link>
        <id>10.14569/IJACSA.2022.0130608</id>
        <doi>10.14569/IJACSA.2022.0130608</doi>
        <lastModDate>2022-06-30T12:23:38.4600000+00:00</lastModDate>
        
        <creator>Imran Bin Mahalil</creator>
        
        <creator>Azmi Bin Mohd Yusof</creator>
        
        <creator>Nazrita Binti Ibrahim</creator>
        
        <creator>Eze Manzura Binti Mohd Mahidin</creator>
        
        <creator>Ng Hui Hwa</creator>
        
        <subject>Virtual reality; sense of presence; technology acceptance; stimulus effectors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This research’s objective is to identify missing elements in the various effectors utilized in current physical training for cyclists, encompassing both virtual reality-based systems and indoor conventional training. Another objective is to identify users’ acceptance of vProCycle, which serves as the primary instrument of this study. Virtual Reality (VR) technology is a computer-generated simulation experience in which immersive surroundings replicate lifelike environments, and it is used here for cyclists’ physical training. Distinctive combinations of stimulus effectors (such as altitude, wind effect, visuals, and audio) have been applied to simulate a real-world training environment in order to increase the fidelity of presence for the participants involved, with emphasis on the five human senses; in this research, however, the focus is only on hearing, sight, and interaction. This mixed-mode pilot study involved two cyclists as participants and a 30-minute training session inside a hypoxic chamber, where they experienced a VR visual route replica of L&#39;&#201;tape du Tour, France. Variables composed of distinctive stimulus effectors were employed during the training, and survey interviews were used to gain the users’ insight. Results from this pilot study indicate that the cyclists gave high scores on the presence level, meaning they were immersed while using the vProCycle system. The cyclists also gave a high score on the level of technology acceptance towards using vProCycle. The main contribution of this study is an understanding of how various combinations of stimulus effectors can be applied in a VR-based training system.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_8-Users_Acceptance_and_Sense_of_Presence_towards_VR_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Framework for Enhanced Learning-based Classification of Lesion in Diabetic Retinopathy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130606</link>
        <id>10.14569/IJACSA.2022.0130606</id>
        <doi>10.14569/IJACSA.2022.0130606</doi>
        <lastModDate>2022-06-30T12:23:38.4430000+00:00</lastModDate>
        
        <creator>Prakruthi M K</creator>
        
        <creator>Komarasamy G</creator>
        
        <subject>Diabetic retinopathy; convolution neural network; classification; fundus retinal image; multi-class categorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Diabetic retinopathy is an adverse medical condition resulting from a high level of blood sugar that can affect the retina and lead to permanent vision loss in its advanced stage of progression. A literature review is conducted to assess the effectiveness of existing approaches, finding that the Convolution Neural Network (CNN) has frequently been adopted for analyzing fundus retinal images for detection and classification. However, existing scientific methods are mainly inclined towards achieving accuracy in their learning techniques, without deeper investigation of possibilities for improving the CNN-based methodology itself. Therefore, the proposed scheme introduces a computational framework in which a simplified feature enhancement operation is carried out, resulting in artifact-free images with better features. The enhanced image is then subjected to a CNN to perform multiclass categorization of the potential stages of diabetic retinopathy, to see whether it outperforms existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_6-Novel_Framework_for_Enhanced_Learning_based_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient System for Real-time Mobile Smart Device-based Insect Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130605</link>
        <id>10.14569/IJACSA.2022.0130605</id>
        <doi>10.14569/IJACSA.2022.0130605</doi>
        <lastModDate>2022-06-30T12:23:38.4270000+00:00</lastModDate>
        
        <creator>Thanh-Nghi Doan</creator>
        
        <subject>Deep learning; real-time insect pest detection; YOLOv5; mobile devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In recent years, the rapid spread of many pests and diseases has caused heavy damage to agricultural production in many countries. It is difficult for farmers to accurately identify each type of insect pest, so they often apply large quantities of pesticides indiscriminately, causing serious environmental pollution. Spraying pesticides is also very expensive, so a system that identifies crop-damaging pests early would help farmers save money while contributing to the development of sustainable agriculture. This paper presents a new, efficient deep learning system for real-time insect image recognition on mobile devices. With the YOLOv5-S model, our system achieved a mAP@0.5 of 70.5% on a 10-class insect dataset and 42.9% on the IP102 large-scale insect dataset. In addition, our system can provide farmers with more information about insects, such as biological characteristics, distribution, morphology, and pest control measures. From there, farmers can take appropriate measures to prevent pests and diseases, helping reduce production costs and protect the environment.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_5-An_Efficient_System_for_Real_time_Mobile_Smart_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Genetic Algorithm for the Multi-temperature Food Distribution with Multi-Station</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130604</link>
        <id>10.14569/IJACSA.2022.0130604</id>
        <doi>10.14569/IJACSA.2022.0130604</doi>
        <lastModDate>2022-06-30T12:23:38.4270000+00:00</lastModDate>
        
        <creator>Bo Wang</creator>
        
        <creator>Jiangpo Wei</creator>
        
        <creator>Bin Lv</creator>
        
        <creator>Ying Song</creator>
        
        <subject>Logistics; genetic algorithm; neighbourhood search; food distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>This paper studies the food distribution route planning problem, with the aim of improving customer satisfaction and the operating cost of food providers. First, the problem is formulated as a combinatorial optimization problem that is hard to solve. A polynomial-time algorithm combining a genetic algorithm and neighbourhood search is therefore proposed to increase the total amount of distributed food and reduce the distribution cost. The proposed algorithm employs a genetic algorithm with integer coding to decide the assignment of customers to distribution vehicles, and integrates a neighbourhood search strategy into the genetic algorithm to improve its performance. Experimental results show that the proposed method improves distribution performance by up to 111.09%, 73.10%, and 70.21% in the distributed food amount, the cost efficiency, and the customer satisfaction, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_4-An_Improved_Genetic_Algorithm_for_the_Multi_temperature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Convolution Neural Networks for Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130603</link>
        <id>10.14569/IJACSA.2022.0130603</id>
        <doi>10.14569/IJACSA.2022.0130603</doi>
        <lastModDate>2022-06-30T12:23:38.4130000+00:00</lastModDate>
        
        <creator>Arun D. Kulkarni</creator>
        
        <subject>Deep learning; convolutional neural networks; image classification; machine learning; object recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>Deep learning is a highly active area of research in the machine learning community. Deep Convolutional Neural Networks (DCNNs) provide a machine learning tool that enables a computer to learn from image samples and extract internal representations or properties underlying groupings or categories of the images. DCNNs have been used successfully for image classification, object recognition, image segmentation, and image retrieval tasks. DCNN models such as AlexNet, VGG Net, and GoogLeNet have been used to classify large datasets containing millions of images into a thousand classes. In this paper, we present a brief review of DCNNs and the results of our experiments. We implemented AlexNet on a Dell Pentium processor using the MATLAB deep learning toolbox and classified three image datasets. The first dataset contains four hundred images of two types of animals and was classified with 99.1 percent accuracy. The second dataset contains four thousand images of five types of flowers and was classified with 86.64 percent accuracy. For the first and second datasets, seventy percent of the samples, randomly chosen from each class, were used for training. The third dataset contains forty images of stained pleura tissues from rat lungs, classified into two classes with 75 percent accuracy; for this dataset, eighty percent randomly chosen samples were used in training the model.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_3-Deep_Convolution_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Privacy Data Access Control Method Based on Cloud Platform Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130602</link>
        <id>10.14569/IJACSA.2022.0130602</id>
        <doi>10.14569/IJACSA.2022.0130602</doi>
        <lastModDate>2022-06-30T12:23:38.3970000+00:00</lastModDate>
        
        <creator>Biying Sun</creator>
        
        <creator>Qian Dang</creator>
        
        <creator>Yu Qiu</creator>
        
        <creator>Lei Yan</creator>
        
        <creator>Chunhui Du</creator>
        
        <creator>Xiaoqin Liu</creator>
        
        <subject>Cloud platform; blockchain; private data; data encryption; access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>With the increasing digitalization and openness of the smart grid, the security of all kinds of sensitive and private data in the power grid inevitably faces severe threats and challenges. In this paper, we propose a privacy protection scheme for multidimensional data aggregation and access control in the cloud Internet of Things for the smart grid. Scalable access control based on attribute encryption is used to secure power user data during data sharing in the blockchain under the large data traffic of the cloud platform, achieving privacy protection and fine-grained access control for demand-side multidimensional data. Using the EBGN homomorphic encryption algorithm, the multidimensional data is encrypted, and each dimension can be decrypted separately with the corresponding private key. Multidimensional data aggregation at the gateway can aggregate the multidimensional data into ciphertext, and the control center does not need to decrypt the ciphertext data of each dimension, thereby simplifying the operation of the gateway and the control center and improving the security and privacy of the data. By encrypting the EBGN private key of each dimension with a ciphertext-policy attribute encryption algorithm, fine-grained access control at the dimension level is realized. The experimental results show that the proposed method can effectively improve the security of private data in multidimensional data privacy protection, reducing the risk of multidimensional data being illegally accessed. The approach also effectively reduces communication overhead, computational complexity, and computational cost, and is suitable for data security and privacy protection in the smart grid cloud Internet of Things.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_2-Blockchain_Privacy_Data_Access_Control_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solutions to the Endless Addition of Transaction Volume in Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130601</link>
        <id>10.14569/IJACSA.2022.0130601</id>
        <doi>10.14569/IJACSA.2022.0130601</doi>
        <lastModDate>2022-06-30T12:23:38.3800000+00:00</lastModDate>
        
        <creator>Hongping Cao</creator>
        
        <creator>Hongxing Cao</creator>
        
        <subject>Blockchain; endless addition; expired transactions; substitute; storage problem; consensus algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(6), 2022</description>
        <description>In a blockchain system, the endless addition of transaction volume results in larger space occupation, a heavier network transmission burden, and the like, but simply discarding historical data is hindered by blockchain&#39;s tamper-proof characteristic. To solve this problem, this paper takes the Bitcoin system as an example and gives a definition of an expired transaction. Abandoning expired transactions and packing the remaining transactions from several blocks into a new substitute block that replaces the old blocks helps overcome the difficulty of clearing historical data. However, this solution fails to clear ineffective intermediate transactions. Thus, a follow-up solution is proposed: abandon the transactions whose outputs have all been spent, retain the transactions where not all outputs have been spent, and additionally include records of the spending details of each transaction output, which allows ineffective intermediate transactions to be cleared. Finally, an experiment is conducted to confirm the effectiveness of the two solutions in clearing transactions.</description>
        <description>http://thesai.org/Downloads/Volume13No6/Paper_1-Solutions_to_the_Endless_Addition_of_Transaction_Volume.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>End-to-End Car Make and Model Classification using Compound Scaling and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305111</link>
        <id>10.14569/IJACSA.2022.01305111</id>
        <doi>10.14569/IJACSA.2022.01305111</doi>
        <lastModDate>2022-05-31T14:40:28.9500000+00:00</lastModDate>
        
        <creator>Omar BOURJA</creator>
        
        <creator>Abdelilah MAACH</creator>
        
        <creator>Zineb ZANNOUTI</creator>
        
        <creator>Hatim DERROUZ</creator>
        
        <creator>Hamza MEKHZOUM</creator>
        
        <creator>Hamd AIT ABDELALI</creator>
        
        <creator>Rachid OULAD HAJ THAMI</creator>
        
        <creator>Francois BOURZEIX</creator>
        
        <subject>Vehicles classification; deep learning; compound scaling; transfer learning; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Recently, Morocco has started to invest in IoT systems to transform its cities into smart cities that will promote economic growth and make life easier for citizens. One of the most vital additions is intelligent transportation systems, which represent the foundation of a smart city. However, a problem often faced in such systems is the recognition of entities, in our case, car makes and models. This paper proposes an approach that identifies car makes and models using transfer learning and a workflow that first enhances image quality and quantity through data augmentation and then feeds the newly generated data into a deep learning model with a scaling feature, namely compound scaling. In addition, we developed a web interface using the Flask API to make real-time predictions. The model initially obtained 80% accuracy, and fine-tuning raised this to an accuracy rate of 90% on unseen data. Our framework is trained on the commonly used Stanford Cars dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_111-End_to_End_Car_Make_and_Model_Classification_using_Compound_Scaling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Sink Mobility Models to Avoid the Energy-Hole Problem in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305110</link>
        <id>10.14569/IJACSA.2022.01305110</id>
        <doi>10.14569/IJACSA.2022.01305110</doi>
        <lastModDate>2022-05-31T14:40:28.9330000+00:00</lastModDate>
        
        <creator>Ghada Al-Mamari</creator>
        
        <creator>Fatma Bouabdallah</creator>
        
        <creator>Asma Cherif</creator>
        
        <subject>Energy hole problem; mobile sink; wireless sensor networks; linear programming; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Wireless Sensor Networks (WSNs) are networks where sensors are deployed in an environment to sense and collect data. WSN sensor nodes have limited power and cannot be recharged easily. Consequently, the sensor nodes that deplete their energy budget fastest are those close to the sink, as they must relay all data emanating from any sensor in the network. Thus, an energy hole around the sink is created once the nodes covering the sink have drained their initial energy, leading to sink unreachability. The WSN lifetime maximization problem has always been a hot research topic. Collecting data in a WSN using a mobile sink is an efficient approach for achieving WSN longevity and preventing the energy hole problem. However, finding the optimal trajectory along with its appropriate flow routing is a challenging problem, since many constraints must be considered. This paper discusses and compares several existing sink-mobility-based WSN lifetime maximization solutions. These solutions are mainly classified into two types: Linear Programming and Artificial Intelligence-based solutions. The state-of-the-art solutions are compared in terms of network topology, sojourn points and durations, buffer size, and overhearing. Finally, a discussion of the WSN lifetime maximization constraints is provided to define a promising sink mobility model.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_110-A_Survey_of_Sink_Mobility_Models_to_Avoid_the_Energy_Hole_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BERT-based Approach to Arabic Hate Speech and Offensive Language Detection in Twitter: Exploiting Emojis and Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305109</link>
        <id>10.14569/IJACSA.2022.01305109</id>
        <doi>10.14569/IJACSA.2022.01305109</doi>
        <lastModDate>2022-05-31T14:40:28.9170000+00:00</lastModDate>
        
        <creator>Maha Jarallah Althobaiti</creator>
        
        <subject>Deep learning; hate speech detection; offensive language detection; sentiment analysis; transformer-based model; BERT; emoji</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>User-generated content on the internet, including that on social media, may contain offensive language and hate speech, which negatively affect the mental health of the whole internet society and may lead to hate crimes. Intelligent models for the automatic detection of offensive language and hate speech have attracted significant attention recently. In this paper, we propose an automatic method for detecting offensive language and fine-grained hate speech in Arabic tweets. We compare BERT with two conventional machine learning techniques (SVM and logistic regression). We also investigate the use of sentiment analysis and emoji descriptions as appended features along with the textual content of the tweets. The experiments show that the BERT-based model gives the best results, surpassing the best benchmark systems in the literature on all three tasks: (a) offensive language detection with an 84.3% F1-score, (b) hate speech detection with an 81.8% F1-score, and (c) fine-grained hate speech recognition (e.g., race, religion, social class, etc.) with a 45.1% F1-score. The use of sentiment analysis slightly improves the performance of the models when detecting offensive language and hate speech but has no positive effect when recognising the type of hate speech. The use of textual emoji descriptions as features can improve or deteriorate the performance of the models depending on the number of examples per class and on whether the emojis are among the distinctive features between classes.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_109-BERT_based_Approach_to_Arabic_Hate_Speech_and_Offensive_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Cases Detection from Chest X-Ray Images using CNN based Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305108</link>
        <id>10.14569/IJACSA.2022.01305108</id>
        <doi>10.14569/IJACSA.2022.01305108</doi>
        <lastModDate>2022-05-31T14:40:28.9030000+00:00</lastModDate>
        
        <creator>Md Amirul Islam</creator>
        
        <creator>Giovanni Stea</creator>
        
        <creator>Sultan Mahmud</creator>
        
        <creator>Kh. Mustafizur Rahman</creator>
        
        <subject>COVID-19; CNN; deep learning; machine learning; chest X-ray</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>COVID-19 has recently manifested as one of the most serious life-threatening infections and is still circulating globally. COVID-19 can be contained to a considerable extent if patients can learn of their infection as early as possible and be isolated from other individuals. Recently, researchers have explored AI (Artificial Intelligence) technologies such as deep learning and machine learning strategies to identify COVID-19 infection. Individuals can detect COVID-19 disease using their phones or computers, dispensing with the need for clinical specimens or visits to a diagnostic center. This can significantly reduce the risk of spreading COVID-19 further from a probably infected patient. Motivated by the above, we propose a deep learning model using a CNN (Convolutional Neural Network) to autonomously diagnose COVID-19 disease from CXR (Chest X-ray) images. The dataset used to train our model includes 10293 X-ray images, with 875 X-ray images from COVID-19 cases. The dataset contains three different classes: COVID-19, pneumonia, and normal cases. The empirical outcomes show that the proposed model achieved 97% specificity, 96.3% accuracy, 96% precision, 96% sensitivity, and a 96% F1-score, which are better than the available works, despite using a CNN with fewer layers.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_108-COVID_19_Cases_Detection_from_Chest_X_Ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Parametric Stochastic Autoencoder Model for Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305107</link>
        <id>10.14569/IJACSA.2022.01305107</id>
        <doi>10.14569/IJACSA.2022.01305107</doi>
        <lastModDate>2022-05-31T14:40:28.8700000+00:00</lastModDate>
        
        <creator>Raphael Alampay</creator>
        
        <creator>Patricia Angela Abu</creator>
        
        <subject>Neural networks; autoencoders; machine learning; anomaly detection; semi-supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Anomaly detection is a widely studied field in computer science with applications ranging from intrusion detection, fraud detection, and medical diagnosis to quality assurance in manufacturing. The underlying premise is that an anomaly is an observation that does not conform to what is considered normal. This study addresses two major problems in the field. First, anomalies are defined in a local context; that is, quantitative measures of how anomalies are categorized apply within their own problem domain and cannot be generalized to other domains. Commonly, anomalies are measured according to statistical probabilities relative to the entire dataset, under several assumptions such as the type of distribution and the volume of data. Second, the performance of a model is dependent on the problem itself. As a machine learning problem, each model must have its parameters optimized to achieve acceptable performance, specifically thresholds that are either defined by domain experts or manually adjusted. This study attempts to address these problems by providing a contextual approach to measuring anomaly detection datasets themselves through a quantitative approach called categorical measures, which provides constraints on the anomaly detection problem, and by proposing a robust model based on autoencoder neural networks whose parameters are dynamically adjusted to avoid parameter tweaking at the inference stage. Empirically, the study conducted a relatively exhaustive experiment against existing and state-of-the-art anomaly detection models in a semi-supervised learning setting, where the assumption is that only normal data are available, to provide insight into how well the model performs under certain quantifiable anomaly detection scenarios.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_107-Non_Parametric_Stochastic_Autoencoder_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Simulation of Adaptive Traffic Control System for Multi-Intersection Management using Cellular Automaton and Queuing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305106</link>
        <id>10.14569/IJACSA.2022.01305106</id>
        <doi>10.14569/IJACSA.2022.01305106</doi>
        <lastModDate>2022-05-31T14:40:28.8570000+00:00</lastModDate>
        
        <creator>Salma EL BAKKAL</creator>
        
        <creator>Abdallah LAKHOUILI</creator>
        
        <creator>El Hassan ESSOUFI</creator>
        
        <subject>Traffic light systems; cellular automaton; BCMP; queuing systems; traffic congestion; waiting time; adaptive systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In recent years, urban traffic has become one of the most studied research topics, mainly due to the enlargement of cities and the growing number of vehicles traveling in the road network. One of the most sensitive problems is verifying whether intersections are congestion-free. Another related problem is the automatic reconfiguration of the network, without building new roads, to alleviate congestion. These problems require an accurate model to determine the steady state of the traffic. The present article proposes an adaptive traffic light system based on BCMP queuing networks and cellular automata. The aim of this work is to predict the best red and green time spans by combining three important factors: the queue length, the evacuation time, and the capacity of the destination roads. This approach can maximize the number of vehicles passing through an intersection while minimizing the average waiting time of vehicles, thereby reducing congestion and keeping traffic flowing at intersections. To validate our results, we compared our model with a fixed-time model to demonstrate the strengths of our proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_106-Modeling_and_Simulation_of_Adaptive_Traffic_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer based Model for Coherence Evaluation of Scientific Abstracts: Second Fine-tuned BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305105</link>
        <id>10.14569/IJACSA.2022.01305105</id>
        <doi>10.14569/IJACSA.2022.01305105</doi>
        <lastModDate>2022-05-31T14:40:28.8400000+00:00</lastModDate>
        
        <creator>Anyelo-Carlos Gutierrez-Choque</creator>
        
        <creator>Vivian Medina-Mamani</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Rosa Nunez-Pacheco</creator>
        
        <creator>Ignacio Aguaded</creator>
        
        <subject>Coherence evaluation; inconsistent sentence detection; BERT; second fine-tuned</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Coherence evaluation is a problem in natural language processing whose complexity lies mainly in analyzing the semantics and context of the words in a text. Fortunately, the Bidirectional Encoder Representations from Transformers (BERT) architecture can capture these variables and represent them as embeddings for fine-tuning. The present study proposes a Second Fine-Tuned model based on BERT to detect inconsistent sentences (coherence evaluation) in scientific abstracts written in English and Spanish. For this purpose, two formal methods for generating inconsistent abstracts have been proposed: Random Manipulation (RM) and K-means Random Manipulation (KRM). Six experiments were performed, showing that the Second Fine-Tuned model improves the detection of inconsistent sentences, with an accuracy of 71%. This holds even if the new retraining data are from a different language or a different domain. It was also shown that using several methods for generating inconsistent abstracts and mixing them when performing the Second Fine-Tuning does not provide better results than using a single technique.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_105-Transformer_based_Model_for_Coherence_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Heuristic for a Two-Agent Multi-Skill Resource-Constrained Scheduling Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305104</link>
        <id>10.14569/IJACSA.2022.01305104</id>
        <doi>10.14569/IJACSA.2022.01305104</doi>
        <lastModDate>2022-05-31T14:40:28.8070000+00:00</lastModDate>
        
        <creator>Meya Haroune</creator>
        
        <creator>Cheikh Dhib</creator>
        
        <creator>Emmanuel Neron</creator>
        
        <creator>Ameur Soukhal</creator>
        
        <creator>Hafed Mohamed Babou</creator>
        
        <creator>Farouk Mohamedade Nanne</creator>
        
        <subject>Two agents; multi-skilled employees; multi-project scheduling; hybrid genetic algorithm; MIGP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This paper addresses an industrial case of the two-agent scheduling problem with a global objective function. Each agent manages one or several projects and competes with the other agent for the use of common multi-skilled employees. There is a pool of employees, each of whom can perform a set of skills with heterogeneous performance levels. The objective of each agent is to minimize the total weighted tardiness of its tasks. Furthermore, we assume that some constraints (soft constraints) can be violated when there is no feasible schedule for the problem. Thus, the global objective function minimizes constraint violations by reducing the undesirable deviations of the soft constraints from their respective goals. The overall objective is to find a schedule that minimizes both agents' objective functions (local objectives) and the global objective function. We provide a mixed-integer goal programming (MIGP) formulation for the problem. In addition, we present a hybrid algorithm combining an exact procedure, a greedy heuristic, and a genetic algorithm to find an approximate Pareto solution set. We compare the performance of the hybrid algorithm against the corresponding MIGP formulation on simulated instances derived from real-world instances.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_104-A_Hybrid_Heuristic_for_a_Two_Agent_Multi_Skill_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Android Malware App through Feature Extraction and Classification of Android Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305103</link>
        <id>10.14569/IJACSA.2022.01305103</id>
        <doi>10.14569/IJACSA.2022.01305103</doi>
        <lastModDate>2022-05-31T14:40:28.7930000+00:00</lastModDate>
        
        <creator>Mohd Abdul Rahim Khan</creator>
        
        <creator>Nand Kumar</creator>
        
        <creator>R C Tripathi</creator>
        
        <subject>Android malware; obfuscation attack; machine learning; android application package (APK); android malware app; grayscale images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Android apps carry security risks due to the rapid development of Android devices, and the Android ecosystem poses many challenges to detecting Android malware. Among traditional techniques, such as static, dynamic, and hybrid approaches, most existing approaches require a high rate of human intervention to detect Android malware. Most current techniques also face significant challenges: inspection of Android Package Kit (APK) file structures, increased complexity, high processing power, large storage space, and much human intervention. This paper proposes Machine Learning (ML) based algorithms to detect Android malware apps through feature extraction and classification of grayscale images. In our proposed approach, most of the files of an APK, such as the multiDex, resources, certificate, and manifest files, are transformed into a grayscale image, and an image algorithm is used to extract the local features of the image. Different ML models are then used to classify the local features with the help of multiple images from malware families. This approach deals with obfuscation attacks, which can hide in any file of an APK. The proposed approach enhanced accuracy up to 96.86%, and computation time did not increase beyond that of existing techniques. The proposed work achieves high classification accuracy with low complexity and low validation loss.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_103-Detection_of_Android_Malware_App_through_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer-based Models for Arabic Online Handwriting Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305102</link>
        <id>10.14569/IJACSA.2022.01305102</id>
        <doi>10.14569/IJACSA.2022.01305102</doi>
        <lastModDate>2022-05-31T14:40:28.7770000+00:00</lastModDate>
        
        <creator>Fakhraddin Alwajih</creator>
        
        <creator>Eman Badr</creator>
        
        <creator>Sherif Abdou</creator>
        
        <subject>Self-attention; Transformer; deep learning; connectionist temporal classification; convolutional neural networks; Arabic online handwriting recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Transformer neural networks have increasingly become the neural network design of choice, having recently been shown to outperform state-of-the-art end-to-end (E2E) recurrent neural networks (RNNs). Transformers utilize a self-attention mechanism to relate input frames and extract more expressive sequence representations. Transformers also provide parallel computation and the ability to capture longer dependencies in context than RNNs. This work introduces a transformer-based model for the online handwriting recognition (OnHWR) task. As the transformer follows an encoder-decoder architecture, we investigated the self-attention encoder (SAE) with two different decoders: a self-attention decoder (SAD) and a connectionist temporal classification (CTC) decoder. The proposed models can recognize complete sentences without the need to integrate external language modules. We tested our proposed models on two Arabic online handwriting datasets: Online-KHATT and CHAW. On evaluation, the SAE-SAD architecture performed better than the SAE-CTC architecture. The SAE-SAD model achieved a 5% character error rate (CER) and an 18% word error rate (WER) on the CHAW dataset, and a 22% CER and a 56% WER on the Online-KHATT dataset. The SAE-SAD model showed significant improvements over existing models for Arabic OnHWR.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_102-Transformer_based_Models_for_Arabic_Online_Handwriting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Machine Learning Techniques to Predict Bugs in Classes: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305101</link>
        <id>10.14569/IJACSA.2022.01305101</id>
        <doi>10.14569/IJACSA.2022.01305101</doi>
        <lastModDate>2022-05-31T14:40:28.7600000+00:00</lastModDate>
        
        <creator>Musaad Alzahrani</creator>
        
        <subject>Software bugs; bug prediction; machine learning techniques; software metrics; unified bug dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Software bug prediction is an important step in the software development life cycle that aims to identify bug-prone software modules. Identification of such modules can reduce the overall cost and effort of the software testing phase. Many approaches have been introduced in the literature that have investigated the performance of machine learning techniques when used in software bug prediction activities. However, in most of these approaches, the empirical investigations were conducted using bug datasets that are small or have erroneous data leading to results with limited generality. Therefore, this study empirically investigates the performance of 8 commonly used machine learning techniques based on the Unified Bug Dataset which is a large and clean bug dataset that was published recently. A set of experiments are conducted to construct bug prediction models using the considered machine learning techniques. Each constructed model is evaluated using three performance metrics: accuracy, area under the curve, and F-measure. The results of the experiments show that logistic regression has better performance for bug prediction compared to other considered techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_101-Using_Machine_Learning_Techniques_to_Predict_Bugs_in_Classes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Vision: The Effectiveness of Deep Learning for Emotion Detection in Marketing Campaigns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305100</link>
        <id>10.14569/IJACSA.2022.01305100</id>
        <doi>10.14569/IJACSA.2022.01305100</doi>
        <lastModDate>2022-05-31T14:40:28.7300000+00:00</lastModDate>
        
        <creator>Shaldon Wade Naidoo</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Sulaiman Saleem Patel</creator>
        
        <creator>Prinavin Govender</creator>
        
        <subject>Computer vision; deep learning; emotion detection; generative adversarial networks; marketing campaigns component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>As businesses move towards more customer-centric business models, marketing functions are becoming increasingly interested in gathering natural, unbiased feedback from customers. This has led to increased interest in computer vision studies into emotion recognition from facial features, for application in marketing contexts. This research study was conducted using the publicly available Facial Emotion Recognition 2013 dataset, published on Kaggle. This article provides a comparative study of five deep learning algorithms for computer vision application in emotion recognition, namely, Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), and Long Short-Term Memory (LSTM) models. Comparisons between these models were made quantitatively using the metrics of accuracy, precision, recall, and F1-score, as well as qualitatively by determining goodness-of-fit and learning rate from accuracy and loss curves. The results of the study show that the CNN, GAN, and MLP models overfit the data, the LSTM model failed to learn at all, and only the RNN adequately learnt from the data. The RNN was found to exhibit a low learning rate, and the computational intensiveness of training the model resulted in premature termination of the training process. However, the model still achieved a test accuracy of up to 72%, the highest of all models studied, and it is possible that this could be increased through further training. The RNN also had the best F1-score (0.70), precision (0.73), and recall (0.73) of all models studied.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_100-Computer_Vision_The_Effectiveness_of_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revisiting Polyglot Persistence: From Principles to Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130599</link>
        <id>10.14569/IJACSA.2022.0130599</id>
        <doi>10.14569/IJACSA.2022.0130599</doi>
        <lastModDate>2022-05-31T14:40:28.7130000+00:00</lastModDate>
        
        <creator>Omar Lajam</creator>
        
        <creator>Salahadin Mohammed</creator>
        
        <subject>Database system; database architecture; relational database; NoSQL; distributed storage; multi-model database; review; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>To cope with rapid advancements in information technologies, many database systems, such as NoSQL databases, have been developed in the last decade to satisfy various data storage requirements. In many cases, using a single database system is not an option because of the limitations it poses on the functionality of the software application. Therefore, applications may use multiple distributed storage databases that complement each other to satisfy conflicting requirements. Such applications are called polyglot persistent applications. However, the practical implementation of polyglot persistence and its complexities have not been studied enough. In this paper, the most recent studies related to polyglot persistence are reviewed. Database systems are classified based on their data storage model, and their use cases are discussed. The principles of polyglot persistence and its challenges are expounded. The implementation architectures of polyglot persistence applications are categorized into Application-coordinated Polyglot Persistence, Service-oriented Polyglot Persistence, Polyglot-Persistence-as-a-Service, and Multi-model Databases. An analysis of the issues related to each architecture is presented. In light of the study findings, a practical polyglot persistence implementation strategy is proposed. The outcomes of this work can help design future polyglot persistence applications and influence future research on how to resolve the complexity involved in polyglot persistence solutions.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_99-Revisiting_Polyglot_Persistence_From_Principles_to_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Code Completion Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130598</link>
        <id>10.14569/IJACSA.2022.0130598</id>
        <doi>10.14569/IJACSA.2022.0130598</doi>
        <lastModDate>2022-05-31T14:40:28.6830000+00:00</lastModDate>
        
        <creator>Hayatou Oumarou</creator>
        
        <creator>Ousmanou Dahirou</creator>
        
        <subject>Integrated development environment; code completion; API; code completion tool; Pharo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Programmers rely on a multitude of techniques to speed up the development process. Among these techniques is code completion, a productivity improvement technique widely used by developers to explore APIs and automatically complete a word being typed by providing a progressively refined list of candidate words (or recommendations). Also called auto-completion, it reduces incorrect calls to APIs. Several techniques have been developed to obtain the list of candidates. Some methods use the history of the code, others neural networks or artificial intelligence; some exploit the program’s structure through the AST. Often the recommendation list is long, and finding suitable candidates comes at a cost. In this work, we propose a strategy that improves the accuracy of the recommendation list offered by code completion. We present a sorting approach based on the popularity and importance of the elements (suggestions) of the list, obtained by analyzing the usage data of classes, methods, and variables of projects in the same development environment. We implemented our sorting strategy in Pharo (IDE and language), an immersive modern programming environment, to show its applicability. The empirical evaluation results of this strategy show that our approach improves the quality of the suggestions.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_98-A_Novel_Code_Completion_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PhishRepo: A Seamless Collection of Phishing Data to Fill a Research Gap in the Phishing Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130597</link>
        <id>10.14569/IJACSA.2022.0130597</id>
        <doi>10.14569/IJACSA.2022.0130597</doi>
        <lastModDate>2022-05-31T14:40:28.6670000+00:00</lastModDate>
        
        <creator>Subhash Ariyadasa</creator>
        
        <creator>Shantha Fernando</creator>
        
        <creator>Subha Fernando</creator>
        
        <subject>Cyberattack; crowdsourcing; internet security; phishing; machine learning; multi-modal data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Machine learning-based anti-phishing solutions face various challenges in collecting diverse multi-modal phishing data. As a result, most previous works have trained with little or no multi-modal data, which brings several drawbacks. Therefore, this study aims to develop a phishing data repository to meet the diverse data needs of the anti-phishing domain. Accordingly, a gap-filling solution named PhishRepo was proposed as an online data repository that collects, verifies, disseminates, and archives phishing data. It includes innovative design aspects such as automated submission, deduplication filtering, automated verification, crowdsourcing-based human interaction, an objection reporting window, and target attack prevention techniques. Moreover, the deduplication filter, used for the first time in phishing data collection, significantly impacted the collection process. It eliminated duplicate data, which causes one of the most common machine learning errors, known as data leakage. In addition, PhishRepo enables researchers to apply modern machine learning techniques effectively and supports them by eliminating the hassle of phishing data collection. Therefore, more thoughtful use of PhishRepo will lead to effective anti-phishing solutions in the future, minimising the social engineering crime called phishing.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_97-PhishRepo_A_Seamless_Collection_of_Phishing_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Hidden Partitions of Voice Utterances using Fuzzy Clustering for Generalized Voice Spoofing Countermeasures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130596</link>
        <id>10.14569/IJACSA.2022.0130596</id>
        <doi>10.14569/IJACSA.2022.0130596</doi>
        <lastModDate>2022-05-31T14:40:28.6530000+00:00</lastModDate>
        
        <creator>Sarah Mohammed Altuwayjiri</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <subject>Voice spoofing; spoofing countermeasure; classification; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The high level of usability achieved by voice biometrics compared to other biometric authentication modalities has promoted the widespread use of automatic speaker verification (ASV) systems as authentication tools for several services in various domains. Despite their satisfactory performance, ASV systems are vulnerable to malicious voice spoofing attacks. Hence, voice spoofing countermeasures have emerged as essential solutions to stop such harmful attacks and protect ASV systems as well as users’ confidentiality. Typically, these countermeasures classify utterances into genuine and spoofing categories. In this research, we propose two voice spoofing countermeasures that mainly aim to improve the generalization of supervised learning models. This goal is achieved through the adaptive handling of the high variance of both utterance classes, i.e., the genuine and spoofing classes. The proposed spoofing countermeasure addresses the poor generalization problem by identifying the hidden structure of each utterance category prior to the classification task. Specifically, fuzzy clustering algorithms were deployed to mine the hidden partitions of the utterance classes. The conducted experiments showed that the proposed approach outperforms state-of-the-art approaches on the ASVspoof 2017 dataset, with a testing EER of 1.07%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_96-Mining_Hidden_Partitions_of_Voice_Utterances_using_Fuzzy_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Presence of Brain Tumor Utilizing Some State-of-the-Art Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130595</link>
        <id>10.14569/IJACSA.2022.0130595</id>
        <doi>10.14569/IJACSA.2022.0130595</doi>
        <lastModDate>2022-05-31T14:40:28.6200000+00:00</lastModDate>
        
        <creator>Mitrabinda Khuntia</creator>
        
        <creator>Prabhat Kumar Sahu</creator>
        
        <creator>Swagatika Devi</creator>
        
        <subject>Brain tumor classification; MRI; SVM; decision tree; random forest; CAD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>A brain tumor is a kind of abnormal growth caused by unregulated cell reproduction, and its incidence is increasing day by day. Magnetic Resonance Imaging (MRI) is the most often used diagnostic tool for brain tumor detection. However, the ample amount of information contained in MRI makes the detection and analysis process tedious and time consuming. Accurately identifying the exact size and location of a brain tumor is a tough task for radiologists. Medical image processing is an interdisciplinary field in which image analysis poses tough research problems. Image segmentation is the prime requirement in image processing, as it separates dubious regions from biomedical images, thereby enhancing treatment reliability. In this regard, our article reviews eight existing binary classifiers to compare their results for designing an automated Computer Aided Diagnosis (CAD) system. The proposed classification models can analyze T1-weighted brain MRI images to reach a conclusion. The classification accuracy attests to the quality of our work.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_95-Prediction_of_Presence_of_Brain_Tumor_Utilizing_Some_State_of_the_Art_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correcting Arabic Soft Spelling Mistakes using BiLSTM-based Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130594</link>
        <id>10.14569/IJACSA.2022.0130594</id>
        <doi>10.14569/IJACSA.2022.0130594</doi>
        <lastModDate>2022-05-31T14:40:28.6070000+00:00</lastModDate>
        
        <creator>Gheith Abandah</creator>
        
        <creator>Ashraf Suyyagh</creator>
        
        <creator>Mohammed Z. Khedher</creator>
        
        <subject>Arabic text; natural language processing; spelling mistakes; recurrent neural networks; bidirectional long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Soft spelling mistakes are a class of mistakes that is widespread among native Arabic speakers and foreign learners alike. Some of these mistakes are typographical in nature. They occur due to orthographic variations of some Arabic letters and the complex rules that dictate their correct usage. Many people forgo these rules, and given the identical phonetic sounds, they often confuse such letters. In this paper, we investigate how to use machine learning to correct such mistakes given that there are no sufficient datasets to train the correction models. Soft error detection and correction is an active field in Arabic natural language processing. We generate training datasets using a proposed transformed-input approach and a stochastic error-injection approach. These approaches are applied to two acclaimed datasets that represent Classical Arabic and Modern Standard Arabic. We treat the problem as a character-level, one-to-one sequence transcription problem. This one-to-one transcription of mistakes that include omissions and deletions is made possible by adopting simple transformations. This approach permits using bidirectional long short-term memory (BiLSTM) models, which are more effective to train than alternatives such as encoder-decoder models. Based on investigating multiple alternatives, we recommend a configuration that has two BiLSTM layers and is trained using the stochastic error-injection approach with an error injection rate of 40%. The best model corrects 96.4% of the injected errors and achieves a low character error rate of 1.28% on a real test set of soft spelling mistakes.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_94-Correcting_Arabic_Soft_Spelling_Mistakes_using_BiLSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection using Network Metadata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130593</link>
        <id>10.14569/IJACSA.2022.0130593</id>
        <doi>10.14569/IJACSA.2022.0130593</doi>
        <lastModDate>2022-05-31T14:40:28.5730000+00:00</lastModDate>
        
        <creator>Khaled Mutmbak</creator>
        
        <creator>Sultan Alotaibi</creator>
        
        <creator>Khalid Alharbi</creator>
        
        <creator>Umar Albalawi</creator>
        
        <creator>Osama Younes</creator>
        
        <subject>Anomaly detection; network metadata; packet analysis; intrusion detection system; machine learning; classification; heterogeneous traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The proliferation of numerous network functions today has given rise to the importance of network traffic classification against various cyber-attacks. Automatic training with a huge number of representative data necessitates the creation of a model for an efficient classifier. As a result, automatic categorization requires training techniques capable of assigning classes to data objects based on the activities supplied to learn classes. Predefined classes allow for the detection of new items. However, the analysis and categorization of data activity in intrusion detection systems are vulnerable to a wide range of threats. Thus, new methods of analysis must be developed to establish an appropriate approach for monitoring circulating traffic. The major goal of this research is to develop and verify a heterogeneous traffic classifier that can classify the collected metadata of networks. In this study, a new model based on machine learning techniques is proposed to increase the accuracy of prediction. Prior to the analysis stage, the gathered traffic is subjected to preprocessing. This paper aims to provide the mathematical validation of a novel machine learning classifier for heterogeneous traffic and anomaly detection.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_93-Anomaly_Detection_using_Network_Metadata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithms Applied to the Searching of the Optimal Path in Image-based Robotic Navigation Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130592</link>
        <id>10.14569/IJACSA.2022.0130592</id>
        <doi>10.14569/IJACSA.2022.0130592</doi>
        <lastModDate>2022-05-31T14:40:28.5570000+00:00</lastModDate>
        
        <creator>Fernando Martínez Santa</creator>
        
        <creator>Fredy H. Martínez Sarmiento</creator>
        
        <creator>Holman Montiel Ariza</creator>
        
        <subject>Genetic algorithms; optimal path; optimization; robotic environment; mobile robots; image skeletonization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This paper describes an optimal-path finding strategy based on Genetic Algorithms, applied to mobile robots in static navigation environments. This strategy starts from an image or plan of the environment and is supported by several image processing algorithms, mainly image skeletonization. Three different strategies were tested by changing the domain of the optimization target function for the Genetic Algorithm: the first domain was all the points of the environment image except the obstacles or walls, the second domain was similar but used an image with the obstacles dilated, and the final domain was only the points of the skeleton image. The last tested domain is 99.4% to 99.6% smaller than the others, which implied reductions of 95% to 96% in the overall execution time of the strategy. Likewise, three skeletonization algorithms were tested in order to use the one with the lowest execution time in this proposal. Finally, the proposed path planning strategy was tested on the same environment with changing initial and final points, yielding a valid and optimized path for the mobile robot in all the tested cases, with an overall average optimization time of less than 2 minutes. This validates the proposal for robotic navigation applications with static obstacles.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_92-Genetic_Algorithms_Applied_to_the_Searching_of_the_Optimal_Path.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multileveled ALPR using Block-Binary-Pixel-Sum Descriptor and Linear SVC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130591</link>
        <id>10.14569/IJACSA.2022.0130591</id>
        <doi>10.14569/IJACSA.2022.0130591</doi>
        <lastModDate>2022-05-31T14:40:28.5430000+00:00</lastModDate>
        
        <creator>B. Lavanya</creator>
        
        <creator>G. Lalitha</creator>
        
        <subject>BBPS – Block Binary Pixel Sum; ALPR - Automatic License Plate Recognition; ROI - Region of Interest; SVC - Support Vector Classification; LP - License Plate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Automatic license plate recognition (ALPR) is an essential component of security and surveillance. ALPR mainly aims to detect and prevent crime and fraud activities; it also plays an important role in traffic monitoring. An algorithm is proposed for recognizing license plate candidates. The proposed work aims to recognize the license plate of a car and is designed in multiple levels for more accurate License Plate (LP) recognition: at level 1 the algorithm produced 93.5% accuracy, and at level 3 it gives 96% accuracy. For training and testing purposes, LP images were taken from the Medialab cars dataset, the Kaggle car dataset, and Google Maps images. The images in the dataset are captured at various angles and illuminations. The proposed LP recognition algorithm uses the Block Binary Pixel Sum descriptor (BBPS) and Linear Support Vector Classification (SVC). The proposed algorithm is novel and produces higher accuracy in a minimal processing time of 0.42 milliseconds on average, with 96% accuracy, when compared with state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_91-Multileveled_ALPR_using_Block_Binary_Pixel_Sum_Descriptor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Alarm System using Image Processing to Prevent a Patient with Nasogastric Tube Feeding from Removing Tube</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130590</link>
        <id>10.14569/IJACSA.2022.0130590</id>
        <doi>10.14569/IJACSA.2022.0130590</doi>
        <lastModDate>2022-05-31T14:40:28.5100000+00:00</lastModDate>
        
        <creator>Amonrat Prasitsupparote</creator>
        
        <creator>Pakorn Pasitsuparoad</creator>
        
        <subject>NG tube; image processing; fiducial markers; face detection; Aruco</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Removal of a nasogastric (NG) tube by a patient is a critical problem, especially when the patient resists swallowing. The conventional approach to this problem, using a personal caretaker, is time-consuming and requires intense focus on the patient’s hands. Visual technology can reduce this burden on the caretaker by using image processing to evaluate the patient’s gestures and warn the caretaker when the patient acts in a risky pose. This work illustrates a feasible solution to prevent a patient with nasogastric tube feeding from removing the tube by applying face detection using Haar features and fiducial markers, which consist of a color marker and an ArUco marker. Image processing can evaluate the patient’s gesture and warn the personal caretaker when the subject acts in a risky pose. A Raspberry Pi 3 Model B and a camera module with Python and the OpenCV package are applied to detect and evaluate the warning gestures over 648 measurements. Six detection methods to evaluate and warn when the patient in bed tries to remove the nasal feeding tube were performed and the results were analyzed. The results show that the detection method using the ArUco marker is a good candidate for an alarm system preventing nasogastric (NG) tube removal by a patient.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_90-Alarm_System_using_Image_Processing_to_Prevent_a_Patient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Language Processing for the Analysis Sentiment using a LSTM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130589</link>
        <id>10.14569/IJACSA.2022.0130589</id>
        <doi>10.14569/IJACSA.2022.0130589</doi>
        <lastModDate>2022-05-31T14:40:28.4970000+00:00</lastModDate>
        
        <creator>Achraf BERRAJAA</creator>
        
        <subject>Artificial intelligence; NLP; RNN; LSTM; customer relationship management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Over the past decade, social networks have revolutionised the communication between organisations and their customers, and the data provided by customers on social network platforms has an increasingly important impact on how organisations collect and analyse this data to make better decisions. We have prepared a new dataset that will allow the scientific community to estimate and evaluate new models under nearly the same conditions. Moreover, this dataset represents a recent and interesting sample for the proposed machine learning models to correctly identify the topics or points on which the company should focus to improve customer satisfaction and better meet customer needs. We have therefore proposed a recurrent neural network (RNN) with Long Short-Term Memory (LSTM) that we run in the cloud to predict sentiment. The objective is also to define systems capable of extracting subjective information from natural language texts, such as feelings and opinions, with the aim of creating structured knowledge that can be used by a decision support system or a decision maker for better customer management. The proposed neural network has been trained on the proposed dataset, which contains 50,000 customer observations. The performance of the proposed architecture is notable, with a success rate of 96%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_89-Natural_Language_Processing_for_the_Analysis_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Label Initialization based Label Propagation Method for Detecting Graph Clusters in Complex Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130588</link>
        <id>10.14569/IJACSA.2022.0130588</id>
        <doi>10.14569/IJACSA.2022.0130588</doi>
        <lastModDate>2022-05-31T14:40:28.4800000+00:00</lastModDate>
        
        <creator>Jyothimon Chandran</creator>
        
        <creator>V Madhu Viswanatham</creator>
        
        <subject>Social networks; community detection; graph clustering; edge clustering coefficient; label initialization; triangle count</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Community structure is one of the fundamental characteristics of complex networks. Detection of community structure can provide insight into the structural and functional organization that helps to understand various dynamical processes such as epidemics and information spreading. The label propagation algorithm (LPA) is a well-known method for community structure identification due to its linear time complexity. However, the communities extracted by the LPA are unstable, since it produces different combinations of communities at each run on the same network. In this paper, a novel label initialization method for the label propagation algorithm (ILI-LPA) is proposed to detect stable and accurate community structures. The proposed ILI-LPA focuses on more accurate label initialization rather than assigning unique labels, thereby reducing the effect of randomness in LPA. The experiments on several real-world and synthetic networks show that the ILI-LPA improves the quality and stability of communities compared to existing algorithms. The results also demonstrate that appropriate label initialization can significantly improve the performance of label propagation algorithms; stability has been improved by up to 50-78% relative to the standard LPA.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_88-An_Improved_Label_Initialization_based_Label_Propagation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relational Deep Learning Detection with Multi-Sequence Representation for Insider Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130587</link>
        <id>10.14569/IJACSA.2022.0130587</id>
        <doi>10.14569/IJACSA.2022.0130587</doi>
        <lastModDate>2022-05-31T14:40:28.4630000+00:00</lastModDate>
        
        <creator>Abdullah Alshehri</creator>
        
        <subject>Insider threats; machine learning; recurrent neural network; user behavior analytic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Insider threats are typically more challenging to detect since security protocols struggle to recognize the anomalous behavior of privileged users in the network. Intuitively, an insider threat detection model depends on analyzing the audit data, representing trusted users’ activity streams, to recognize malicious behaviors. However, the audit data is high dimensional in that it presents n dependent streams of activities, which makes feature extraction complex. In this context, the dependent streams represent user activities where each activity is represented by an ordered set of real variables that pertain to a specific occurrence, such as log-in records. As a result, multiple actions can be represented simultaneously, with one or more values being recorded at each timestamp. Moreover, the relations between dependent streams are typically neglected when detecting anomalous behavior. Ideally, relation learning is commonly considered to recognize occurrence patterns in streaming data. Thus, the latent relations are thought to offer insight for the accurate detection of anomalous behavior concerning insider threats. This study introduces a novel model to detect insider threats by representing audit data as multivariate time series to explicitly learn the existing inter-relations between activity streams using a Recurrent Neural Network (RNN). The model considers learning the latent relationships to effectively extract features for modeling the behavior profile, so that anomalous behavior can be detected accurately. The evaluation, using the CERT dataset, has shown that the proposed model outperforms comparator approaches to insider threat detection with an AUC of 0.99.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_87-Relational_Deep_Learning_Detection_with_Multi_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormal Event Detection using Additive Summarization Model for Intelligent Transportation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130586</link>
        <id>10.14569/IJACSA.2022.0130586</id>
        <doi>10.14569/IJACSA.2022.0130586</doi>
        <lastModDate>2022-05-31T14:40:28.4500000+00:00</lastModDate>
        
        <creator>G. Balamurugan</creator>
        
        <creator>J. Jayabharathy</creator>
        
        <subject>Event detection; gated recurrent unit; summarization; intelligent transportation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Video surveillance is used for capturing the abnormal events on roadsides that are caused due to improper driving, accidents, and hindrances resulting in transportation lags and life-critical issues. It is essential to highlight the accident keyframes in videos to achieve intelligent video surveillance. Video summarization plays a vital role in summarizing the keyframe for an abnormal event from the stacked video surveillance input. The observed video is converted into frames and analyzed for providing an accurate summarization for accident analysis forecast and guiding the users in avoiding such events. The main issues in summarization arise from the inconsistency between the spatiotemporal redundancies and the classification of sequence verification in video surveillance. This article introduces an Additive Event Summarization Method (AESM) for projecting classified events through a gated recurrent unit learning paradigm. In this process, the gates are assigned for unclassified and active frames for sequence verification. Based on the sequence, the abnormality is classified and summarized with higher accuracy than state-of-the-art techniques. This proposed method relies on heterogeneous features for classifying events with better structural indices. The proposed method’s performance is analyzed using the metrics accuracy, false rate, analysis time, SSIM, and F1-Score.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_86-Abnormal_Event_Detection_using_Additive_Summarization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Random Forest Regression with Hyper-parameters Tuning to Estimate Reference Evapotranspiration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130585</link>
        <id>10.14569/IJACSA.2022.0130585</id>
        <doi>10.14569/IJACSA.2022.0130585</doi>
        <lastModDate>2022-05-31T14:40:28.4030000+00:00</lastModDate>
        
        <creator>Satendra Kumar Jain</creator>
        
        <creator>Anil Kumar Gupta</creator>
        
        <subject>Reference evapotranspiration; random forest regression; hyper-parameters; grid search; random search optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Estimation of reference evapotranspiration (ETo) is a complex and non-linear problem that is used for the quantification of crop water requirements. In this study, random forest regression based models are developed to predict the ETo of Bhopal city, Madhya Pradesh, India. The meteorological data were collected from IMD, Pune for the period 2015-16. Based on the correlation of meteorological variables with observed ETo, four different random forest regression models are created. Moreover, the effects of three important hyper-parameters of random forest, namely the number of trees in the forest, the depth of the tree, and the number of samples at a leaf node, are evaluated to estimate ETo using the proposed models. These hyper-parameters are applied to the models in three different ways: one hyper-parameter at a time, and combinations of hyper-parameters using the grid search and random search approaches. The results indicate that a random forest regression based model with maximal meteorological input variables exhibits greater predictive power in less execution time than one with minimal input variables. This study also reveals that the model that optimizes the hyper-parameters using a grid search approach shows equal predictive power but takes much more execution time, whereas random search based optimization exhibits the same level of predictive capability in less computation time. Stakeholders can utilize random forest regression models with sufficient meteorological data to estimate crop water requirements and enhance food production.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_85-Application_of_Random_Forest_Regression_with_Hyper_parameters_Tuning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Interfaces for Assisting Blind People using Object Recognition Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130584</link>
        <id>10.14569/IJACSA.2022.0130584</id>
        <doi>10.14569/IJACSA.2022.0130584</doi>
        <lastModDate>2022-05-31T14:40:28.3700000+00:00</lastModDate>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Irianto</creator>
        
        <creator>Maslan Zainon</creator>
        
        <creator>Hasvinii Baskaran</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <subject>Object recognition method; computer vision; blind people; image processing; Raspberry Pi; Pi camera; smart assistance; portable device; voice narration; visualise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Object recognition is a computer vision technique for identifying objects in images. The main purpose of this system is to assist the visually impaired by constructing automated hardware with a Raspberry Pi that enables a visually impaired person to detect objects or persons in front of them instantly and to be informed of what is in front of them through audio. The Raspberry Pi receives data from a camera and then processes it. In addition, the blind user listens to a voice narration via an audio receiver. This paper’s key objective is to provide the blind with cost-effective smart assistance to explore and sense the world independently. The second objective is to provide a convenient portable device that allows users to recognise objects without touch, having the system determine the object in front of them. The camera module attached to the Raspberry Pi captures an image, which the processor then processes. Subsequently, the processed data is sent to the audio receiver, which narrates the detected object(s). This system will be very useful for a blind person exploring the world by listening to the voice narration: the narration generated after processing the image helps the blind visualise the objects in front of them.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_84-Intelligent_Interfaces_for_Assisting_Blind_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Students&#39; Course Selection Preference based on Collaborative Filtering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130583</link>
        <id>10.14569/IJACSA.2022.0130583</id>
        <doi>10.14569/IJACSA.2022.0130583</doi>
        <lastModDate>2022-05-31T14:40:28.3570000+00:00</lastModDate>
        
        <creator>Mustafa Man</creator>
        
        <creator>Jianhui Xu</creator>
        
        <creator>Ily Amalina Ahmad Sabri</creator>
        
        <creator>Jiaxin Li</creator>
        
        <subject>Collaborative filtering; course selection system; recommendation; scoring matrix; weighting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Due to the events caused by the COVID-19 pandemic, the education industry is no longer limited to offline teaching, and online classroom education is widely used. The rapid development of online education provides users with more abundant educational course resources and flexible learning methods. Various online education platforms are also constantly improving their service models to give users a better learning experience. However, at present, there are few personalized information recommendation services in student course selection: students receive the same course selection information, which cannot be &quot;tailored&quot; to their specific preferences. This paper focuses on integrating collaborative filtering technology into a college course selection system, constructing a rating matrix from students&#39; ratings of the courses they take through the correlations between courses and between students. Based on the collaborative filtering algorithm, a predictive rating matrix is generated to produce a recommendation list, achieving intelligent recommendation of suitable courses for students. The experimental results show that the improved collaborative filtering algorithm, which weights both items and users, achieves higher recommendation accuracy than the traditional collaborative filtering technique. Applied to the course selection recommendation systems of colleges and universities, the improved technique recommends courses for students intelligently, with good rationality and accuracy, and thus has great practicality and practical significance.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_83-Research_on_Students_Course_Selection_Preference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Deep Learning Performance for Real-Time Traffic Sign Detection and Recognition Applicable to Intelligent Transportation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130582</link>
        <id>10.14569/IJACSA.2022.0130582</id>
        <doi>10.14569/IJACSA.2022.0130582</doi>
        <lastModDate>2022-05-31T14:40:28.3400000+00:00</lastModDate>
        
        <creator>Anass BARODI</creator>
        
        <creator>Abderrahim Bajit</creator>
        
        <creator>Abdelkarim ZEMMOURI</creator>
        
        <creator>Mohammed Benbrahim</creator>
        
        <creator>Ahmed Tamtaoui</creator>
        
        <subject>Deep learning; convolutional neural network; computer vision; artificial intelligence; traffic sign detection; traffic sign recognition; intelligent transportation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In this paper, we improve the performance of Deep Learning (DL) by creating a robust and efficient Convolutional Neural Network (CNN) model. This CNN model is applied to detecting and recognizing traffic signs in real time. We apply several techniques. The first is pre-processing, which includes conversion of the RGB color space, followed by histogram equalization and normalization of the image dataset using Computer Vision (CV) tools. The second is devoted to Artificial Intelligence (AI), which requires the right choice of neural layers, such as convolution and dropout layers, together with a powerful optimizer such as Adam and activation functions such as ReLU and SoftMax. We also use dataset augmentation, characterized by the batch size for each epoch. The results obtained are very satisfactory, with an average prediction accuracy of 98%, which encourages the integration of this approach into Intelligent Transportation Systems (ITS) in the automotive sector.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_82-Improved_Deep_Learning_Performance_for_Real_Time_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Demand Forecasting Model using Deep Learning Methods for Supply Chain Management 4.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130581</link>
        <id>10.14569/IJACSA.2022.0130581</id>
        <doi>10.14569/IJACSA.2022.0130581</doi>
        <lastModDate>2022-05-31T14:40:28.3230000+00:00</lastModDate>
        
        <creator>Loubna Terrada</creator>
        
        <creator>Mohamed El Khaili</creator>
        
        <creator>Hassan Ouajji</creator>
        
        <subject>Supply chain management 4.0; demand forecasting; decision making; artificial intelligence; deep learning; Auto-Regressive Integrated Moving Average (ARIMA); Long Short-Term Memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In the context of Supply Chain Management 4.0, customers’ demand forecasting plays a crucial role within an industry in order to maintain the balance between demand and supply, and thus improve decision making. Throughout the Supply Chain (SC), a large amount of data is generated. Artificial Intelligence (AI) can consume this data to allow each actor in the SC to gain in performance, but also to better know and understand the customer. This study is carried out to improve the performance of the SC demand forecasting system based on Deep Learning methods, including Auto-Regressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM), using the historical transaction records of a company. The experimental results make it possible to select the method that provides the best accuracy among those tested.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_81-Demand_Forecasting_Model_using_Deep_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Penetration Testing on Malaysia Popular e-Wallets and m-Banking Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130580</link>
        <id>10.14569/IJACSA.2022.0130580</id>
        <doi>10.14569/IJACSA.2022.0130580</doi>
        <lastModDate>2022-05-31T14:40:28.3070000+00:00</lastModDate>
        
        <creator>Md Arif Hassan</creator>
        
        <creator>Zarina Shukur</creator>
        
        <creator>Masnizah Mohd</creator>
        
        <subject>Electronic payments; e-wallet; m-banking; android; static analysis; security analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>e-Wallets and m-banking apps have become more and more popular in the developed world, approaching a tipping point. This can be attributed to the global adoption of payment equipment by merchants large and small and to the ubiquity of e-wallet and m-banking apps. Many consumers use e-wallets and m-banking apps, which can be an attractive target for cybercrime. e-Wallets and m-banking apps allow financial transactions via smartphones, giving cybercriminals a lucrative opportunity. Mobile technology has become increasingly mainstream and continually stronger, with a growing focus on mobile app protection and forensic analysis. In this paper, the security of five popular e-wallets in Malaysia is analyzed. This paper also provides a security analysis of another five leading m-banking apps. The security analysis is based on security principles recommended by the Open Web Application Security Project (OWASP) in its Mobile Security Testing Guide (MSTG) and Mobile Security Threats (MST). The static analysis was done using three mobile application-testing tools. This study included a variation of vulnerability scanning, code review and, most significantly, penetration testing. Each app complied with the security requirements, but their security features and characteristics, such as encryption, security protocols, and app services, differ from one another. This study was carried out using a DELL computer with an Intel Core i7 CPU at 3.40 GHz and 6 GB RAM. Finally, the results revealed the most secure e-wallet and m-banking apps among the selected apps.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_80-A_Penetration_Testing_on_Malaysia_Popular_E_Wallets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Fuzzy Delphi Method to Identify and Prioritize the Social-Health Family Disintegration Indicators in Yemen</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130579</link>
        <id>10.14569/IJACSA.2022.0130579</id>
        <doi>10.14569/IJACSA.2022.0130579</doi>
        <lastModDate>2022-05-31T14:40:28.2930000+00:00</lastModDate>
        
        <creator>Abed Saif Ahmed Alghawli</creator>
        
        <creator>Abdualmajed A. Al-khulaidi</creator>
        
        <creator>Adel A. Nasser</creator>
        
        <creator>Nesmah A. AL-Khulaidi</creator>
        
        <creator>Faisal A. Abass</creator>
        
        <subject>Family disintegration; child marriage; early marriage; Fuzzy Delphi Method; multi-criteria decision making; Yemen</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Constantly increasing political events and socially related changes have led governments worldwide to adopt strategies to reduce their negative effects on the cohesion of societies. This requires developing assessment frameworks that include realistic, measurable, and useful indicators for analyzing the causes of family disintegration, taking into consideration the circumstances surrounding each country and the development trends it adopts. Therefore, this study aims to identify and prioritize indicators for a decision-making support framework for the evaluation, ranking, and structural comparison of the causes of family disintegration resulting from the child marriage phenomenon in Yemen. To achieve this, the Fuzzy Delphi Method was applied. Firstly, a set of related literature and theories was analyzed to extract suitable initial indicators for the expected framework. Then, with the participation of twenty-four local experts, the extracted factors were revised and the most suitable ones selected. As a result, one social factor out of nine social-health factors was excluded due to its inappropriateness, and a framework of eight indicators was built. With a rating average of 0.727, it was consistently agreed that the indicator &quot;Increasing divorce rates in marital cases that do not take place according to the common desire of the spouses&quot; is the most important. Also, with high consistent evaluation averages (0.652–0.658), all three health indicators were ranked in the second and third places, while the other four social indicators were ranked in the last three positions (fourth–sixth). Finally, real applications of the proposed framework were recommended.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_79-Application_of_the_Fuzzy_Delphi_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Study of a Spatial Analysis for Prone Road Traffic Accident Classification based on MCDM Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130578</link>
        <id>10.14569/IJACSA.2022.0130578</id>
        <doi>10.14569/IJACSA.2022.0130578</doi>
        <lastModDate>2022-05-31T14:40:28.2930000+00:00</lastModDate>
        
        <creator>Anik Vega Vitianingsih</creator>
        
        <creator>Zahriah Othman</creator>
        
        <creator>Safiza Suhana Kamal Baharin</creator>
        
        <creator>Aji Suraji</creator>
        
        <subject>Spatial Analysis; GIS; prone road traffic accident; MCDM Model; WSM; WP; SAW; WPM; MAUT; TOPSIS; AHP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Spatial analysis techniques are widely used as an effective approach for prone road traffic accident classification. This paper presents the results of empirical testing of spatial analysis for prone road traffic accident classification using Multicriteria Decision Making (MCDM) methods. The performance of MCDM is compared on arterial and collector road types processed with multicriteria parameters. MCDM was chosen because it supports decision making based on selecting among alternatives with many criteria. The MCDM methods tested empirically include the Weighted Sum Model (WSM), Weighted Product (WP), Simple Additive Weighting (SAW), Weighted Product Model (WPM), Multi-Attribute Utility Theory (MAUT), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Analytical Hierarchy Process (AHP). The multicriteria parameter weight values are based on expert judgment and the Fuzzy-AHP method (EJ-AHP), and comprise the volume-to-capacity ratio (VCR), international roughness index (IRI), vehicle type, horizontal alignment, vertical alignment, design speed, and shoulder. The performance of the models was then compared on accuracy, precision, recall, and F1-score for decision-making on prone road traffic accident classification using Multicriteria Evaluation Techniques (MCE). The empirical results on arterial roads show that the SAW and TOPSIS methods perform identically and are superior to the other methods, with an accuracy of 63%. On the collector road type, however, the AHP method outperforms the other methods with an accuracy of 70%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_78-Empirical_Study_of_A_Spatial_Analysis_for_Prone_Road_Traffic_Accident.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application for a Waste Management via the QR-Code System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130577</link>
        <id>10.14569/IJACSA.2022.0130577</id>
        <doi>10.14569/IJACSA.2022.0130577</doi>
        <lastModDate>2022-05-31T14:40:28.2600000+00:00</lastModDate>
        
        <creator>Pichit Wandee</creator>
        
        <creator>Zakon Bussabong</creator>
        
        <creator>Seksit Duangkum</creator>
        
        <subject>Application; QR–Code system; waste management; SDLC; community information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This research aims to develop an application for waste management via the QR code system: 1) to study the quality of the application and 2) to study the satisfaction of its users, using the system development life cycle (SDLC) principle. The sample consisted of 388 people, comprising community leaders, village health volunteers, youth, and the general public of Ban Yang Sub-district, Mueang Buriram District, Buriram Province. The research instruments were the waste management application via the QR code system, an application quality assessment form, and an application satisfaction questionnaire. The statistics used in the data analysis were the mean and standard deviation. The results of the research were as follows: 1) the quality assessment of the application in all aspects was at a high level (Mean = 4.41, S.D. = 0.10); 2) the satisfaction of the application users in all aspects was at a high level (Mean = 4.42, S.D. = 0.45); 3) the waste management application via the QR code system allowed group members in the community to manage household waste more conveniently and created a positive attitude toward utilizing waste through the work of the group members in the community.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_77-Application_for_A_Waste_Management_via_the_QR_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of the Pandemic on the Development and Regulation of Electronic Commerce in Russia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130576</link>
        <id>10.14569/IJACSA.2022.0130576</id>
        <doi>10.14569/IJACSA.2022.0130576</doi>
        <lastModDate>2022-05-31T14:40:28.2470000+00:00</lastModDate>
        
        <creator>Svetlana Panasenko</creator>
        
        <creator>Maisa Seifullaeva</creator>
        
        <creator>Ibragim Ramazanov</creator>
        
        <creator>Elena Mayorova</creator>
        
        <creator>Alexander Nikishin</creator>
        
        <creator>A. M. Vovk</creator>
        
        <subject>Legal support; strategy; development; electronic commerce; Russian federation; digitalization; informatization; online stores; delivery channels; digital product</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The article deals with topical issues of legal support for the development of electronic commerce in the Russian Federation. An analysis of the main categories of e-commerce whose content is standardized in domestic regulatory legal acts has been carried out; among them are elements of the purchase and sale process, the concept of types of trading activities, electronic signature, digital assets, digital currency, smart contracts, digital transactions, etc. The categories whose concepts are absent from the normative legal acts of Russia have been defined: digital goods, e-commerce infrastructure, e-commerce services, delivery channels in online stores, the concept of a courier/courier service, the concept of smart applications, the definition of varieties of online stores, etc. The research problem is defined as the insufficiently effective legal support of economic activity in electronic commerce. An imperfect system of planning strategies for the development of trade organizations in the online environment has been revealed. The conclusion is that the process of digitalization and the consequences of the pandemic have a significant impact on the dynamics of legal support for the development of electronic commerce, but its level is currently not high enough and requires improvement (by making additions to several regulatory legal acts, such as the Law on Trade, the Strategy for the Development of Electronic Commerce in Russia, etc.). The study used systemic, situational, and complex methods, graphical methods, block grouping, comparative analysis of normative legal acts, and the synthesis of conclusions and proposals.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_76-Impact_of_the_Pandemic_on_the_Development_and_Regulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deformable Convolutional with Recurrent Neural Network for Optimal Traffic Congestion Prediction: A Fuzzy Logic Congestion Index System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130575</link>
        <id>10.14569/IJACSA.2022.0130575</id>
        <doi>10.14569/IJACSA.2022.0130575</doi>
        <lastModDate>2022-05-31T14:40:28.2130000+00:00</lastModDate>
        
        <creator>Sara Berrouk</creator>
        
        <creator>Abdelaziz El Fazziki</creator>
        
        <creator>Mohammed Sadgal</creator>
        
        <subject>Optimal traffic congestion prediction; deformable convolutional network; recurrent neural network; Hybrid Jaya Harris Hawk Optimization Algorithm; congestion index computation; fuzzy inference system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In the field of Intelligent Transportation Systems (ITS), traffic congestion is an important problem. Traffic blockage usually affects the quality of time, travel time, the economy of the country, and the transportability of people. In ITS, information on traffic congestion is collected and analyzed, and methods to prevent congestion are predicted. However, handling huge amounts of data remains challenging. The rapid increase in vehicle usage and road construction has resulted in traffic congestion. Various ITS studies address traffic management systems using few resources. Real-time traffic services are implemented to prevent congestion in existing areas, but these services achieve accuracy at high expense. This paper develops a new technique to predict traffic congestion using improved deep learning approaches. At first, the benchmark dataset is gathered and data pre-processing is performed by removing bad data, organizing the raw data, and filling null values. The optimized weighted features are selected from the pre-processed data by adopting a new meta-heuristic Hybrid Jaya Harris Hawk Optimization (HJHHO) algorithm. Congestion parameters such as the speed reduction rate, the very low speed rate, and the volume-to-capacity ratio of vehicles are predicted by the proposed Improved Deformable Convolutional Recurrent Network (IDCRN) prediction model. These predicted measures are fed into a fuzzy inference system for congestion index computation. The experimental analysis shows that the proposed method reduces the error rate compared with other deep learning and machine learning approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_75-Hybrid_Deformable_Convolutional_with_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice Biometrics for Indonesian Language Users using Algorithm of Deep Learning CNN Residual and Hybrid of DWT-MFCC Extraction Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130574</link>
        <id>10.14569/IJACSA.2022.0130574</id>
        <doi>10.14569/IJACSA.2022.0130574</doi>
        <lastModDate>2022-05-31T14:40:28.2000000+00:00</lastModDate>
        
        <creator>Haris Isyanto</creator>
        
        <creator>Ajib Setyo Arifin</creator>
        
        <creator>Muhammad Suryanegara</creator>
        
        <subject>Voice biometric; deep learning; CNN; DWT-MFCC; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This research develops a voice biometrics model for Indonesian language users using a deep learning algorithm with CNN Residual and hybrid DWT-MFCC feature extraction. A voice dataset of Indonesian speakers was created with durations of 5, 10, 15, 20, and 25 minutes. The testing of speaker recognition and speech recognition was carried out by comparing the CNN Residual model with the CNN Standard model. In speaker recognition, the CNN Residual model obtained the best results, with the highest precision of 99.91% and the highest accuracy of 99.47% on the 25-minute voice samples, compared to the CNN Standard's precision of 96.83% and accuracy of 99.00%. In speech recognition, the CNN Residual model reached the best performance at 100% accuracy over 20 trials, while the CNN Standard gave only 95% accuracy. The CNN Residual model provides better accuracy and precision, but it is slightly slower than the CNN Standard, with a time difference of 0.03 – 1.28 seconds.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_74-Voice_Biometrics_for_Indonesian_Language_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Learning Management System based on PACMAD Usability Model: Brighten Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130573</link>
        <id>10.14569/IJACSA.2022.0130573</id>
        <doi>10.14569/IJACSA.2022.0130573</doi>
        <lastModDate>2022-05-31T14:40:28.1830000+00:00</lastModDate>
        
        <creator>Masyura Ahmad Faudzi</creator>
        
        <creator>Zaihisma Che Cob</creator>
        
        <creator>Ridha Omar</creator>
        
        <creator>Sharul Azim Sharudin</creator>
        
        <subject>Mobile learning; usability; PACMAD; learning management system; Moodle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Before the COVID-19 pandemic, mobile learning was merely an optional or supplementary module in the learning process. However, when the pandemic hit the world in the middle of 2020, a large number of students were forced to move from the traditional learning process to online learning. This became a critical issue, especially for new online learners. The usability of a mobile learning application is important in ensuring that learners can learn efficiently and effectively with ease. This study evaluates the usability of the Brighten mobile application, a Moodle-based Learning Management System (LMS) currently used by all Universiti Tenaga Nasional&#39;s students. The evaluation is based on the People at the Center of Mobile Application Development (PACMAD) usability model. The results indicate that the Brighten mobile application is acceptable in terms of effectiveness, efficiency, learnability, memorability, and error tolerance. Learners&#39; satisfaction level shows a &quot;marginally acceptable&quot; result based on the SUS Adjective Rating Scale, and the cognitive load results show that the highest load was in the performance factor.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_73-Evaluating_Learning_Management_System_based_on_PACMAD.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Social Engineering Awareness, Training and Education (SEATE) using a Behavioral Change Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130572</link>
        <id>10.14569/IJACSA.2022.0130572</id>
        <doi>10.14569/IJACSA.2022.0130572</doi>
        <lastModDate>2022-05-31T14:40:28.1530000+00:00</lastModDate>
        
        <creator>Azaabi Cletus</creator>
        
        <creator>Benjamin Weyory</creator>
        
        <creator>Alex Opoku</creator>
        
        <subject>Social engineering; user training; user awareness; user education; ATAC model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Social Engineering (SE) Awareness, Training, and Education (SEATE) is one of the recommended defenses against SE attacks among users of information systems. However, many SEATE programs fail to achieve the desired impact, leading to exposures. This study sought to explore SEATE programs to identify gaps and challenges and to propose relevant content, delivery methods, and a novel behavioral change model to improve SEATE programs among users. An explorative literature search was conducted on relevant SEATE content, delivery methods, and the challenges of SEATE programs. Consequently, the relevant and critical content and delivery methods were proposed, and the challenges that impede the efficient and effective conduct of SEATE programs were established. A behavioral change model known as Social Engineering Awareness, Transition, Adaptation and Consolidation (ATAC), based on Stable-Quasi-Stationary Equilibrium theory, was proposed. The model was validated using expert opinions: five experts in cybersecurity were recruited to appraise the model on metrics including fit for purpose, novelty, ease of use, and structure. The results show that challenges still exist in the conduct of SEATE programs; improving them requires relevant and innovative content and a hybrid delivery method. Validation of the proposed behavioral change model showed an average score of 73.6% and performance metrics at 92%. As the menace of SE attacks rages on and exploits users, the need for SEATE programs remains imperative. Well-developed and relevant content, suitable delivery methods, and a clear understanding of the challenges are required to improve SEATE. Following the developed model, and using it repeatedly, will improve user resistance or immunity to SE attacks and, by extension, the security culture among users.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_72-Improving_Social_Engineering_Awareness_Training_and_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Computational Thinking in Nursing Students through Learning Computer Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130571</link>
        <id>10.14569/IJACSA.2022.0130571</id>
        <doi>10.14569/IJACSA.2022.0130571</doi>
        <lastModDate>2022-05-31T14:40:28.1370000+00:00</lastModDate>
        
        <creator>Leticia Laura-Ochoa</creator>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Elizabeth Vidal</creator>
        
        <subject>Computational thinking; computational thinking assessment; computational thinking test; programming; programming environments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Computational thinking is a fundamental skill for problem-solving; it uses computational concepts and other types of thinking, such as algorithmic thinking. This paper describes the experience of improving computational thinking in nursing students using block-based programming environments such as Code.org and Lightbot, together with the Python textual programming language. The results are analyzed by applying pre- and post-tests of computational thinking to the students. The methodological design is quasi-experimental, since it did not include a control group. The study group was made up of 30 students from the Professional School of Nursing of the National University of San Agustin de Arequipa. The results show that teaching programming enables the understanding of computational concepts and improves computational thinking. It is concluded that block-based programming environments and the Python language facilitate the development of algorithmic and computational thinking.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_71-Improving_Computational_Thinking_in_Nursing_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Influential Nodes with Centrality Indices Combinations using Symbolic Regressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130570</link>
        <id>10.14569/IJACSA.2022.0130570</id>
        <doi>10.14569/IJACSA.2022.0130570</doi>
        <lastModDate>2022-05-31T14:40:28.1200000+00:00</lastModDate>
        
        <creator>Mohd Fariduddin Mukhtar</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Amir Hamzah Abdul Rasib</creator>
        
        <creator>Siti Haryanti Hairol Anuar</creator>
        
        <creator>Nurul Hafizah Mohd Zaki</creator>
        
        <creator>Ahmad Fadzli Nizam Abdul Rahman</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Abdul Samad Shibghatullah</creator>
        
        <subject>Centrality indices; combination; symbolic regressions; influential nodes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Numerous strategies for determining the most influential nodes in a connected network have been developed. Centrality indices allow the identification of the most important nodes in a network. Individual indices, on the other hand, cannot capture a network&#39;s full meaning because each focuses on a single attribute, and researchers frequently overlook an index&#39;s characteristics in favour of focusing on its application. The purpose of this research is to integrate selected centrality indices classified by their various properties. A symbolic regression approach was used to find meaningful mathematical expressions for this combination of indices. When the efficacy of the combined indices is compared to other methods, the combined indices behave similarly and outperform the previous method. Using this adaptive technique, network researchers can now identify the most influential network nodes.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_70-Identifying_Influential_Nodes_with_Centrality_Indices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling for Car Quality Complaint Classification based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130569</link>
        <id>10.14569/IJACSA.2022.0130569</id>
        <doi>10.14569/IJACSA.2022.0130569</doi>
        <lastModDate>2022-05-31T14:40:28.1070000+00:00</lastModDate>
        
        <creator>Chen Xiao Yu</creator>
        
        <creator>Hou Xia</creator>
        
        <creator>Zhang Xiao Min</creator>
        
        <creator>Song Ying</creator>
        
        <subject>Car; quality complaint; natural language text; classification modeling; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Cars play an important role in many aspects of people&#39;s social life, and the effective handling of car quality complaints is of great significance to the proper running of cars and the maintenance of car brands&#39; reputation. Effective classification of car quality complaint texts is the basis of efficient handling of the corresponding complaints, yet manual classification has disadvantages such as heavy workload, dependence on experience, and proneness to error. Machine learning methods have been widely used in automatic classification modeling for different types of natural language texts, so it is of great practical significance to construct an automatic classification model of car quality complaints based on machine learning. Based on the characteristics of car quality complaint texts, this study vectorized the texts after word segmentation, performed feature selection and dimension reduction based on correlation analysis, and combined a progressive model training method with a support vector machine to construct the classification model. In the model reliability analysis, the model was evaluated based on the effect of data amount on the modeling and the effect of text length on the prediction probability distribution. The results show that an effective automatic classification model of car quality complaint texts can be constructed with the method presented in this study.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_69-Modeling_for_Car_Quality_Complaint_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Improved Genetic Algorithm for the Quadratic Assignment Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130568</link>
        <id>10.14569/IJACSA.2022.0130568</id>
        <doi>10.14569/IJACSA.2022.0130568</doi>
        <lastModDate>2022-05-31T14:40:27.9330000+00:00</lastModDate>
        
        <creator>Huda Alfaifi</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Component; Quadratic Assignment Problem (QAP); Genetic Algorithm (GA); Parallel Genetic Algorithm (PGA); Sequential Genetic Algorithm (SGA); Central Processing Unit (CPU); Compute Unified Device Architecture (CUDA); Quadratic Assignment Problem Library (QAPLIB); Best Known Solution (BKS); Average Percent Deviation (APD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The Quadratic Assignment Problem (QAP) is one of the most common combinatorial optimization problems and represents many real-life problems. Many techniques are applied to solve the QAP, including exact, heuristic, and metaheuristic methods. A Genetic Algorithm is a powerful heuristic approach used to find optimal or near-optimal solutions for Quadratic Assignment problems. In this paper, we developed a Genetic Algorithm with a new crossover operator that, closer to what is found in nature, uses no crossover point, together with a new intelligent mutation operator; we then developed a Parallel Genetic Algorithm using the same crossover and mutation. The Sequential Genetic Algorithm is implemented on the Central Processing Unit (CPU), and the Parallel Genetic Algorithm on the Graphical Processing Unit (GPU). This paper presents two comparisons. The first measures the elapsed time of crossover, mutation, and selection on both CPU and GPU and compares the results; this comparison clearly shows the improvement in computation time in the parallel environment, which is around half the time of the sequential environment. The second comparison iterates these operators over several generations, using twenty benchmark instances from the Quadratic Assignment Problem Library with sizes from 12 to 70, a population size of 600, 2000 generations, and a maximum of 600 parallel threads. The proposed crossover and mutation give the optimal solutions on ten benchmarks with problem sizes from 12 to 32 in both the Sequential and Parallel Genetic Algorithms, while the next ten benchmarks give solutions close to the optimal with a small error rate.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_68-Parallel_Improved_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual-fast Greedy Heuristic Algorithm for Green ICT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130567</link>
        <id>10.14569/IJACSA.2022.0130567</id>
        <doi>10.14569/IJACSA.2022.0130567</doi>
        <lastModDate>2022-05-31T14:40:27.9170000+00:00</lastModDate>
        
        <creator>Inas Abuqaddom</creator>
        
        <subject>Abilene backbone; backbone network; bundled link; capacity; Internet power consumption; power savings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Internet power consumption represents 3.6% to 6.2% of annual worldwide power consumption and is continually expanding. Awareness of this problem has increased; hence, a few strategies are being put in place to decrease the power consumption of the Information and Communication Technology (ICT) sectors in general. Backbone networks account for the main part of Internet power consumption because their line cards expend a lot of energy and their links are commonly bundled, providing larger capacity than needed. Therefore, bundled links are partially shut down during times of low demand to reduce power consumption. The literature introduces a few heuristic algorithms that run periodically to partially shut down bundled links. This paper proposes a Dual-Fast Greedy Heuristic algorithm (DGH), which significantly speeds up the power savings. DGH is evaluated on the topology and traffic of the Abilene backbone. The experimental results show that DGH provides competitive power savings with minimal execution time.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_67-Dual_fast_Greedy_Heuristic_Algorithm_for_Green_ICT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Development of 3D Temple Objects based on Photogrammetry Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130566</link>
        <id>10.14569/IJACSA.2022.0130566</id>
        <doi>10.14569/IJACSA.2022.0130566</doi>
        <lastModDate>2022-05-31T14:40:27.9030000+00:00</lastModDate>
        
        <creator>Herman Tolle</creator>
        
        <creator>Ratih Kartika Dewi</creator>
        
        <creator>Komang Candra Brata</creator>
        
        <creator>Benyamin Perdamean</creator>
        
        <subject>Photogrammetry; historical relic; 3D objects; digital preservation; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Indonesia has many cultural buildings that need to be captured as digital objects, especially for digital preservation. One method of creating 3D objects is photogrammetry, which builds 3D objects from many photos captured by a camera that are then integrated into software and processed into 3D objects. In this research, a framework for modeling 3D objects of cultural buildings is proposed based on the photogrammetry approach. The framework includes data capture, modeling and processing, and calibration. It was tested by making a 3D object of the Candi Badut temple, and the resulting 3D model showed significant similarity to the real object. The framework is useful as a standard guideline for making 3D models of historical relics efficiently.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_66-Framework_for_Development_of_3D_Temple_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis to Explore User Perception of Teleworking in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130565</link>
        <id>10.14569/IJACSA.2022.0130565</id>
        <doi>10.14569/IJACSA.2022.0130565</doi>
        <lastModDate>2022-05-31T14:40:27.8870000+00:00</lastModDate>
        
        <creator>Malak Nazal Alotaibi</creator>
        
        <creator>Zahyah H. Alharbi</creator>
        
        <subject>Mazajak; sentiment analysis; thematic analysis; telework; remote work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Due to the emergence of the COVID-19 pandemic in 2019, many public and private organizations from different sectors in Saudi Arabia were forced to enforce teleworking as the main work arrangement. This paper seeks to understand the public&#39;s experience of and attitude toward remote work by analyzing Twitter data from March 2020 to July 2021 using &quot;Mazajak&quot;, an online Arabic sentiment analyzer. A corpus of 39,523 tweets with hashtags mentioning the teleworking program in Saudi Arabia was obtained. The results indicate that neutrality was the most prevalent sentiment at 58.21%, followed by positive sentiment at 30.67%. Thematic analysis was used to identify themes in the tweets with positive and negative sentiment. Flexibility, teamwork, teleworking preference, and learning were the major themes related to positive sentiment, while the themes related to negative sentiment were private sector, companies, and fake.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_65-Sentiment_Analysis_to_Explore_User_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Graph-oriented Framework for Online Analytical Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130564</link>
        <id>10.14569/IJACSA.2022.0130564</id>
        <doi>10.14569/IJACSA.2022.0130564</doi>
        <lastModDate>2022-05-31T14:40:27.8700000+00:00</lastModDate>
        
        <creator>Abdelhak KHALIL</creator>
        
        <creator>Mustapha BELAISSAOUI</creator>
        
        <subject>Graph OLAP; data warehousing; graph databases; NoSQL; data cube; decision support system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>OLAP (Online Analytical Processing) is a tried-and-tested technology and a core concept in Business Intelligence. With data flowing from countless different sources, exploring data to deliver actionable insights has become a daunting task with current OLAP tools, despite successive cycles of improvement. In the last decade, with the emergence of the big data phenomenon, NoSQL databases have seen a spike in popularity and have become more widely used in industry and academia, as their value in handling huge and varied amounts of data becomes increasingly evident. The graph-oriented database is one of the four chief types of NoSQL databases and represents a promising candidate technology for big data analytics. In this paper we bring forward our twofold contribution to graph-oriented analytical processing. First, we provide a novel approach for modeling a graph-oriented data warehouse. Second, we propose data cube materialization through the precomputation of aggregated nodes. We present how typical OLAP queries can be performed against data warehouses stored in NoSQL graph-oriented database management systems. An implementation is conducted on a fictional data warehouse using Neo4j and the Cypher declarative language. The same dataset is stored in a relational data warehouse in order to compare storage space and query performance. The obtained results show that the graph OLAP implementation clearly outperforms the relational alternative in terms of query response time.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_64-A_Graph_oriented_Framework_for_Online_Analytical_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Symbol Recognition based on Advanced Data Augmentation for Engineering Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130563</link>
        <id>10.14569/IJACSA.2022.0130563</id>
        <doi>10.14569/IJACSA.2022.0130563</doi>
        <lastModDate>2022-05-31T14:40:27.8400000+00:00</lastModDate>
        
        <creator>Ong Kai Bin</creator>
        
        <creator>Yew Kwang Hooi</creator>
        
        <creator>Said Jadid Abdul Kadir</creator>
        
        <creator>Haruhiro Fujita</creator>
        
        <creator>Luqman Hakim Rosli</creator>
        
        <subject>Symbol recognition; symbol spotting; engineering drawing; convolution neural network (CNN); CycleGAN; piping and instrument diagram (P&amp;ID)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Symbol recognition has generated research interest for image analytics of engineering diagrams. Structural, syntactic, statistical, and Convolutional Neural Network (CNN) techniques were studied to identify research gaps. Despite its popularity, CNN requires a huge learning dataset, which often involves costly procurement. To address this, a combination of CycleGAN and CNN is proposed: CycleGAN synthetically generates additional learning data, yielding an opportunity to improve the accuracy of symbol recognition. For the domain of engineering symbols, a standard CNN model is developed and used in experimental testing. Different ratios of training data were tested in multiple experiments using Piping and Instrumentation Diagram (P&amp;ID) drawings. The highest symbol recognition accuracy reached 92.85%, against the baseline and another method. The results show that, under a gradual reduction of training samples, recognition accuracy remained substantially stable after applying the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_63-Enhanced_Symbol_Recognition_based_on_Advanced_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Learning Tools for Security Inductions in Mining Interns: A PLS-SEM Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130562</link>
        <id>10.14569/IJACSA.2022.0130562</id>
        <doi>10.14569/IJACSA.2022.0130562</doi>
        <lastModDate>2022-05-31T14:40:27.8230000+00:00</lastModDate>
        
        <creator>Jose Julian Rodriguez-Delgado</creator>
        
        <creator>Patricia Lopez-Casaperalta</creator>
        
        <creator>Mario Gustavo Berrios-Espezua</creator>
        
        <creator>Alejandro Marcelo Acosta-Quelopana</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Technology acceptance model 3; PLS-SEM; SmartPLS; mining company; safety; safety inductions; safety talk; learning; education; teaching; technology; learning tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>New pedagogical tools have been introduced in educational contexts in recent years and have been shown to positively impact learning compared to conventional education strategies. Before implementing new learning tools, a study of technological acceptance is needed for their application to succeed. For this reason, the objective of this research was to measure the intention to use and acceptance of new digital learning tools, such as mobile applications, holograms, interactive platforms, and virtual or augmented reality, through the Technology Acceptance Model 3 (TAM3), in on-boarding safety training inductions in a mining company. This measurement was based on the analysis of a Likert-scale survey carried out in Google Forms; the results were processed using the partial least squares technique for structural equation models (PLS-SEM) through SmartPLS 3. As a result, we obtained positive correlations between the instrument&#39;s variables and acceptance by the participants studied. The findings indicate that it is essential to consider participants&#39; opinions prior to implementing new digital education tools for managerial decision-making. Particular emphasis is placed on teaching about safety in mining companies, since this contributes to engineering education and protects the most precious resource of any company: the human being.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_62-Digital_Learning_Tools_for_Security_Inductions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Approach for Preserving Privacy in Context Aware Applications for Smartphones in Cloud Computing Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130561</link>
        <id>10.14569/IJACSA.2022.0130561</id>
        <doi>10.14569/IJACSA.2022.0130561</doi>
        <lastModDate>2022-05-31T14:40:27.8070000+00:00</lastModDate>
        
        <creator>H. Manoj T. Gadiyar</creator>
        
        <creator>Thyagaraju G. S</creator>
        
        <creator>R. H. Goudar</creator>
        
        <subject>Context-aware; privacy; active defence; privacy protection and mobile phones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>With the widespread use of mobile phones and smartphone applications, protecting one&#39;s privacy has become a major concern. Because active defensive strategies and temporal connections between situations relevant to users are not taken into account, present privacy-preservation systems for mobile phones are often ineffective. This work formulates privacy-preservation issues as optimization tasks, verifying their accuracy and optimization capabilities through a hypothetical study. Many of the optimization issues that arise while preserving privacy can be addressed as linear programming problems. By addressing these linear programming problems, an effective context-aware privacy-preserving algorithm (CAPP) was created that uses an active defence strategy to determine how to release a user&#39;s current context, enhancing the quality of service (QoS) of context-aware applications while maintaining secrecy. CAPP outperforms other standard methodologies in lengthy simulations on real data. Additionally, a minimax learning algorithm (MLA) optimizes the user policies and improves the satisfaction threshold of the context-aware applications. Moreover, a cloud-based approach is introduced to protect the user&#39;s privacy from third parties. The obtained performance measures are compared with existing approaches in terms of privacy policy breaches, context sensitivity, satisfaction threshold, adversary power, and convergence speed for online and offline attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_61-An_Adaptive_Approach_for_Preserving_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Method for 4D Reconstruction of Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130560</link>
        <id>10.14569/IJACSA.2022.0130560</id>
        <doi>10.14569/IJACSA.2022.0130560</doi>
        <lastModDate>2022-05-31T14:40:27.7770000+00:00</lastModDate>
        
        <creator>Lamyae MIARA</creator>
        
        <creator>Said BENOMAR ELMDEGHRI</creator>
        
        <creator>Mohammed Oucamah CHERKAOUI MALKI</creator>
        
        <subject>2D medical image; 4D reconstruction; contour matching; recent GPU; 3D mesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This paper proposes a new optimized, fast-rendering method for 4D reconstruction from 2D medical images of human anatomy, permitting refined real-time visualization. The method uses a 3D reconstruction algorithm based on contour matching of medical image sequences and on the tessellation capability of recent GPUs. In our framework, constructing the low-resolution mesh from contour extraction creates a 3D mesh without any ambiguity that exactly matches the real shape of the human anatomy. This preliminary result is of great interest, since it leads to other valuable gains, such as reducing the computation burden of basic meshes and displacement vectors. Moreover, very low storage memory can be achieved, and fast real-time 4D visualization at a high desired resolution is eased. Hence, this study can contribute to easing the real-time diagnosis and detection of damage and deterioration in human organs in motion. In particular, 4D visualization technology, which is still under development, is highly important and needed for assessing some dangerous progressive diseases, as in the case of lung diseases.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_60-New_Method_for_4D_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Hausa Acoustic Model for Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130559</link>
        <id>10.14569/IJACSA.2022.0130559</id>
        <doi>10.14569/IJACSA.2022.0130559</doi>
        <lastModDate>2022-05-31T14:40:27.7600000+00:00</lastModDate>
        
        <creator>Umar Adam Ibrahim</creator>
        
        <creator>Moussa Mahamat Boukar</creator>
        
        <creator>Muhammad Aliyu Suleiman</creator>
        
        <subject>Acoustic model; Hausa Phonemes; word level; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Acoustic modeling is essential for enhancing the accuracy of voice recognition software. To build an automatic speech system and application for any language, an acoustic model is essential. In this regard, this research is concerned with the development of a Hausa acoustic model for automatic speech recognition. The goal of this work is to design and develop an acoustic model for the Hausa language. This is done by creating a word-level phoneme dataset from the Hausa speech corpus database and then implementing a deep learning algorithm for acoustic modeling. The model was built using a Convolutional Neural Network and achieved 83% accuracy. The developed model can be used as a foundation for the development and testing of a Hausa speech recognition system.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_59-Development_of_Hausa_Acoustic_Model_for_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Feature Engineering and Ensemble Learning for Student Academic Performance Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130558</link>
        <id>10.14569/IJACSA.2022.0130558</id>
        <doi>10.14569/IJACSA.2022.0130558</doi>
        <lastModDate>2022-05-31T14:40:27.7470000+00:00</lastModDate>
        
        <creator>Du Xiaoming</creator>
        
        <creator>Chen Ying</creator>
        
        <creator>Zhang Xiaofang</creator>
        
        <creator>Guo Yu</creator>
        
        <subject>Academic performance prediction; feature engineering; ensemble learning; random forest; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Student academic performance prediction is one of the important tasks in teaching management: by mining the important features affecting academic performance and predicting it accurately, it enables precise management, scientific teaching, and personalized learning. Due to the subjectivity of feature extraction and the randomness of hyperparameters, the accuracy of academic performance prediction needs to be improved. Therefore, an academic prediction method based on feature engineering and ensemble learning is proposed, which makes full use of the advantages of random forest in feature extraction and the ability of XGBoost in prediction. Firstly, feature importance is calculated and ranked using the random forest method, and the optimal feature subset is selected in combination with a forward search strategy. Secondly, the optimal feature subset is input into the XGBoost model for prediction, and the sparrow search algorithm is used to optimize the XGBoost hyperparameters to further improve the accuracy of academic prediction. Finally, the performance of the proposed method is verified through experiments on a public data set. The experimental results show that the designed method outperforms both single-learner prediction methods and other ensemble learning prediction methods, reaching an accuracy of 82.4%. It has good prediction performance and can provide support for teachers to teach according to students’ aptitude.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_58-Study_on_Feature_Engineering_and_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Kernel MSVM Machine Learning-based Model for Churn Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130557</link>
        <id>10.14569/IJACSA.2022.0130557</id>
        <doi>10.14569/IJACSA.2022.0130557</doi>
        <lastModDate>2022-05-31T14:40:27.7300000+00:00</lastModDate>
        
        <creator>Pankaj Hooda</creator>
        
        <creator>Pooja Mittal</creator>
        
        <subject>CA (Churn Analysis); CC (Customer Churn); OKMSVM (Optimized Kernel-MultiClass Support Vector Machine) Model; KPCA (Kernel Principle Component Analysis); A.L.O. (Ant Lion Optimization) method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Customer churn is considered a significant issue in any industry due to the variety of services, clients, and commodities. A massive amount of data is being created by e-commerce services and tools. Analytical data and machine learning-based approaches have been implemented and utilized for churn analysis (CA) to design a plan required to comprehend the rationale for customer churn (CC) and to generate a profitable and effective customer retention program. The analytics and machine learning approaches mainly focus on customer profiling, CC classification, and detection of features that affect churn. However, there is no specific technique for determining how likely a prospective customer is to cover all the expenses, whether churned or not. In this paper, an Optimized Kernel MSVM classification model is proposed to predict and classify churn. In the proposed work, the MSVM algorithm is used for classification, while kernel PCA and the ALO optimizer are used for feature extraction and selection. The proposed Optimized Kernel MSVM model has been implemented on a telecommunication-sector customer churn database to demonstrate its generalization ability. The Optimized Kernel MSVM model achieved an accuracy of 91.05% and a maximum AUC of 85%, and reduced the RMSE score to 2.838. The implementation shows that churn detection and classification may be examined at the same time while maintaining the highest overall accuracy and AUC.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_57-An_Optimized_Kernel_MSVM_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Enabled Smart Parking System for Improvising the Prediction Availability of the Parking Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130556</link>
        <id>10.14569/IJACSA.2022.0130556</id>
        <doi>10.14569/IJACSA.2022.0130556</doi>
        <lastModDate>2022-05-31T14:40:27.7000000+00:00</lastModDate>
        
        <creator>Anchal Dahiya</creator>
        
        <creator>Pooja Mittal</creator>
        
        <subject>IoT; Data mining; sensors; ensemble; decision tree; bagging technique; boosting technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Smart cities are a result of persistent technological advancements aimed at improving the quality of life of their residents. IoT-enabled smart parking is one of the foundations of smart transportation, which seeks to be versatile, long-lasting, and integrated into a smart city. One study shows that drivers searching for free parking spaces can cause congestion problems of up to 30%. Air pollution and traffic noise can be reduced, and traffic fluidity improved, by combining Internet of Things (IoT) sensors positioned in different parking areas with a mobile application that helps drivers search for free places in different areas of the city and provides guidance toward the parking space. In this paper, we present and explain a unique data mining-based ensemble technique for anticipating parking lot occupancy to reduce parking search time and improve car flow in congested locations, with a favorable overall impact on traffic in urban centers. A multi-scanning, IoT-enabled smart parking model is proposed along with an ensemble classifier that improves the prediction of free parking space availability. The predictors&#39; parameters were additionally optimized using bootstrap and bagging algorithms. The proposed method was tested on an IoT dataset containing a number of sensor recordings. The tests conducted on the data set resulted in an average mean absolute error of 0.07% using the Bagging Regression method (BRM).</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_56-IoT_Enabled_Smart_Parking_System_for_Improvising_the_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structural Equation Modelling for Validating Disruptive Factors in Livestock Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130555</link>
        <id>10.14569/IJACSA.2022.0130555</id>
        <doi>10.14569/IJACSA.2022.0130555</doi>
        <lastModDate>2022-05-31T14:40:27.6830000+00:00</lastModDate>
        
        <creator>Nur Amlya Abd Majid</creator>
        
        <creator>Noraidah Sahari</creator>
        
        <creator>Nur Fazidah Elias</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <creator>Latifah Abd Latib</creator>
        
        <creator>Khairul Firdaus Ne’matullah</creator>
        
        <subject>Supply chain management; disruption; livestock; structural equation modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The purpose of this paper is to deploy a structural equation modelling approach using the Partial Least Squares technique to validate the disruption factors that affect livestock supply chain performance. The disruption prediction factors were obtained from an analysis of literature studies, data from the Department of Veterinary Services (DVS), and expert evaluation. Factors considered in the study model are Livestock Process, Finance, Breeders, Quality, Facilities, Technology, Demand, Supply, Information Communication, Sales, Transportation, Government Involvement, Disaster and Syariah Compliance. The results of the study found that the factors of Livestock Process, Finance, Breeders, Livestock Quality, Technology, Supply, Sales, Transportation, Government Involvement and Syariah Compliance were accepted as disruptions in the livestock supply chain. The findings of this study will assist farmers and livestock stakeholders in taking the necessary measures to minimise disruption, and further the government&#39;s goal of enlivening small and medium livestock enterprises in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_55-Structural_Equation_Modelling_for_Validating_Disruptive_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Importance of Memory Management Layer in Big Data Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130554</link>
        <id>10.14569/IJACSA.2022.0130554</id>
        <doi>10.14569/IJACSA.2022.0130554</doi>
        <lastModDate>2022-05-31T14:40:27.6670000+00:00</lastModDate>
        
        <creator>Maha Dessokey</creator>
        
        <creator>Sherif M. Saif</creator>
        
        <creator>Hesham Eldeeb</creator>
        
        <creator>Sameh Salem</creator>
        
        <creator>Elsayed Saad</creator>
        
        <subject>Apache Spark; Big Data; data analytics algorithms; memory management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The daily generation of massive amounts of heterogeneous data from a variety of sources presents a challenge in terms of storage and analysis capabilities and brings new problems to high-performance computing clusters. To better utilize this huge and heterogeneous data, the continuous development of advanced Big Data platforms and Big Data analytic techniques is required. One of the significant issues with in-memory Big Data processing platforms, such as Apache Spark, is that the user is responsible for deciding whether intermediate data should be cached or not. In addition, the data may be kept in several storage systems and physically scattered over different racks, regions, and clouds. Data need to be close to the computation nodes, and hence data locality is a challenge. In this paper, the use of a distinct memory management layer between the data processing layer and the data storage layer, which automatically caches data without the need for any interaction from the applications’ developers, is evaluated. K-means, PageRank and WordCount workloads from the HiBench benchmark, besides a real case of predicting real-estate prices implemented using a Gradient Boosting Regression Tree model, are used to evaluate this framework. Experiments show that the memory management layer outperforms Apache Spark in reducing execution time.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_54-Importance_of_Memory_Management_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Readability Complexity Score for Gujarati Idiomatic Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130553</link>
        <id>10.14569/IJACSA.2022.0130553</id>
        <doi>10.14569/IJACSA.2022.0130553</doi>
        <lastModDate>2022-05-31T14:40:27.6370000+00:00</lastModDate>
        
        <creator>Jatin C. Modh</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>Complexity; Gujarati; idiomatic text; natural language processing (NLP); readability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The Gujarati language is used for conversation by more than 55 million people worldwide and is more than 1000 years old. It is the chief language of the Indian state of Gujarat. There are many dialects of Gujarati, like Standard Gujarati, Amdawadi Gujarati, Kathiawadi Gujarati, Kutchi Gujarati, etc. Gujarati is very rich in morphology, like other Indo-Aryan languages such as Hindi. Many readability tests are available for the English language, but no readability complexity test is available for Gujarati idiomatic text. The complexity score is a sub-concept of the readability test. In order to define the complexity level of Gujarati text, its complexity score is calculated. We deployed a novel readability complexity score calculation method which considers the number of letters of each word, the number of diacritics of each word, Gujarati idiomatic text of n-grams where n=1 to 9, and Gujarati idiomatic text of m-meaning idioms where m=1 to 7. The complexity score is calculated as the sum of the word complexity score, the diacritics complexity score, the n-gram complexity score of Gujarati idioms, and the m-meaning complexity score of Gujarati idioms. We emphasized Gujarati idiomatic text in the calculation of the complexity score, as idioms make text more complex to understand. This is an innovative, first-of-its-kind work in the research community of the Gujarati language. The results are promising enough to employ the suggested complexity score method for developing a readability test for natural language processing tasks in the Gujarati language.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_53-A_Novel_Readability_Complexity_Score_for_Gujarati_Idiomatic_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Re-CRUD Code Automation Framework Evaluation using DESMET Feature Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130552</link>
        <id>10.14569/IJACSA.2022.0130552</id>
        <doi>10.14569/IJACSA.2022.0130552</doi>
        <lastModDate>2022-05-31T14:40:27.6370000+00:00</lastModDate>
        
        <creator>Asyraf Wahi Anuar</creator>
        
        <creator>Nazri Kama</creator>
        
        <creator>Azri Azmi</creator>
        
        <creator>Hazlifah Mohd Rusli</creator>
        
        <creator>Yazriwati Yahya</creator>
        
        <subject>Re-CRUD; web application; DESMET feature analysis; electronic records features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>A unified view of web application design and development is crucial for dealing with complexity. However, the literature proposes many denominations, depending on the development methodology, frameworks or tools. This multitude of Create, Read, Update and Delete (CRUD) approaches does not allow a holistic view of the web application. Besides, in a web application, the search for good practice in design, features and essential functions is still a relevant issue. A subset of essential CRUD operations is to provide code automation for rapid web application prototyping. Re-CRUD articulates records management features into CRUD operations. This study aims to provide insight into the effectiveness and efficiency of Re-CRUD in web application development and to compare it with the CRUD output of other web application frameworks. A qualitative feature analysis is used, based on the evaluation guideline proposed in DESMET and reviewed by experts for validation. A document management system is developed and used as a case study for the Re-CRUD evaluation. The feature analysis comprises Re-CRUD and the CRUD of four other web application frameworks, namely CakePHP, Laravel, Symfony and FuelPHP. According to the review, Re-CRUD meets expectations by providing more useful features and delivering higher code automation in the web application development process. Compared to other existing CRUD generators, Re-CRUD integrates records management features that are useful for managing born-digital data and also contributes to effectiveness and efficiency in web application development.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_52-Re_CRUD_Code_Automation_Framework_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flower Pollination Algorithm for Feature Selection in Tweets Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130551</link>
        <id>10.14569/IJACSA.2022.0130551</id>
        <doi>10.14569/IJACSA.2022.0130551</doi>
        <lastModDate>2022-05-31T14:40:27.6070000+00:00</lastModDate>
        
        <creator>Muhammad Iqbal Abu Latiffi</creator>
        
        <creator>Mohd Ridzwan Yaakub</creator>
        
        <creator>Ibrahim Said Ahmad</creator>
        
        <subject>Sentiment analysis; metaheuristic algorithm; flower pollination algorithm; machine learning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Text-based social media platforms have developed into important components of communication between customers and businesses. Users can easily state their thoughts and evaluations about products or services on social media. Machine learning algorithms have been hailed as one of the most efficient approaches to sentiment analysis in recent years. However, as the number of online reviews increases, the dimensionality of text data increases significantly, degrading the performance of machine learning methods. Moreover, traditional feature selection methods select attributes based on their popularity, which typically does not improve classification performance. This work presents a population-based metaheuristic feature selection algorithm, the Flower Pollination Algorithm (FPA), chosen for its propensity to accept less optimal solutions and thus avoid getting caught in local optima. The study analyses tweets from Kaggle first with the usual Term Frequency-Inverse Document Frequency statistical weighting filter and then with the FPA. Four baseline classifiers are trained on the selected features: Naive Bayes (NB), Decision Tree (DT), Support Vector Machine (SVM), and k-Nearest Neighbor (kNN). The results demonstrate that the FPA outperforms alternative feature subset selection algorithms, with an average accuracy improvement of 2.7%. The SVM achieves the best accuracy of 98.99%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_51-Flower_Pollination_Algorithm_for_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Segment-based Image Ciphering using Discretized Chaotic Standard Map with ECB, OFB and CBC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130550</link>
        <id>10.14569/IJACSA.2022.0130550</id>
        <doi>10.14569/IJACSA.2022.0130550</doi>
        <lastModDate>2022-05-31T14:40:27.5900000+00:00</lastModDate>
        
        <creator>Mohammed A. AlZain</creator>
        
        <subject>Cryptography; 2D discretized CSM; ECB; OFB; CBC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This paper presents a block-based ciphering scheme that employs the 2D discretized chaotic Standard map (CSM) in three different operation modes: the electronic codebook (ECB), output feedback (OFB) and cipher block chaining (CBC) modes. In the presented 2D discretized CSM with OFB and CBC, the initialization vector (IV) is employed as the primary secret key. The presented scheme has two merits. The first is the ability of the 2D discretized CSM with ECB, OFB and CBC to encipher images of any dimensions in a comparatively short time. The second is the high level of security of the 2D discretized CSM with OFB and CBC through the integration of both diffusion and confusion operations. Different security metrics, like histogram deviation, irregular deviation, and correlation coefficient, are examined to assess the functionality of the presented 2D discretized CSM with OFB and CBC. The resistance to noise, uniformity of histogram, and encryption speed are also investigated. The suggested 2D discretized CSM with OFB and CBC is compared with the 2D discretized CSM in ECB. The achieved outcomes demonstrate that the proposed 2D discretized CSM with OFB and CBC has higher security than in ECB from a cryptographic viewpoint. The outcomes also demonstrate that the proposed 2D discretized CSM has better noise immunity in OFB compared with ECB and CBC.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_50-Efficient_Segment_based_Image_Ciphering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Clahe Method Contrast Enhancement of X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130549</link>
        <id>10.14569/IJACSA.2022.0130549</id>
        <doi>10.14569/IJACSA.2022.0130549</doi>
        <lastModDate>2022-05-31T14:40:27.5730000+00:00</lastModDate>
        
        <creator>Omarova G. S</creator>
        
        <creator>Starovoitov V. V</creator>
        
        <creator>Aitkozha Zh. Zh</creator>
        
        <creator>Bekbolatov S</creator>
        
        <creator>Ostayeva A. B</creator>
        
        <creator>Nuridinov O</creator>
        
        <subject>Digital X-ray image; image quality assessment; image enhancement; contrast enhancement; luminance transformation; adaptive image histogram equalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Due to the nonlinearity of the luminance function produced by many medical recording devices, the quality of medical images deteriorates, which creates problems in the visual research work of physicians; X-rays can be taken as an example. This article examines methods of improving the contrast, and thereby the quality, of X-ray images. The research was carried out in several stages. Attempts were made to increase the contrast of several dozen X-ray images, selecting the best image brightness using brightness conversion methods in the MATLAB system. Contrast enhancement was observed during the experiments, resulting in the selection of a brightness range corresponding to the visual contrast enhancement. Values of the variable γ were selected for the chosen brightness range of the image. The possibilities of the image histogram equalization method were considered. To obtain the best result, histogram equalization of the X-ray image is suggested before performing gamma correction. Based on this comparison, an enhanced version of the algorithm is presented. Application of the adaptive histogram equalization algorithm with contrast limitation provides a visual improvement in the contrast of X-ray images. The NIQE and BRISQUE evaluation functions, which do not use reference images, are used to objectively quantify the conversion results.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_49-Application_of_the_Clahe_Method_Contrast_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Customer Relationship Management as a Communication Tool for Academic Communities in Higher Education Institutions through Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130548</link>
        <id>10.14569/IJACSA.2022.0130548</id>
        <doi>10.14569/IJACSA.2022.0130548</doi>
        <lastModDate>2022-05-31T14:40:27.5570000+00:00</lastModDate>
        
        <creator>Ali Ibrahim</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Saparudin</creator>
        
        <subject>Social customer relationship management; communication tools; social media; academic client and communities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The interaction between academic community members in universities is fundamental in facilitating the learning management process and workflow, thereby necessitating a communication tool system that is unrestricted by time and place. Social customer relationship management as a communication tool through social media is also essential. Therefore, this research employed a mixed-method exploratory design to evaluate 2421 subjects determined based on a proportional random sampling technique from the academic community of universities in four cities in Indonesia. The data collection method used an online interview questionnaire, and the coding techniques, namely open, axial, and selective coding, as well as the value stream analysis, were used. This research found that Social Customer Relationship Management (SCRM) can be a communication tool between the academic communities in higher education. This tool can be exploited by utilizing effective social media platforms, namely Instagram, Facebook, WhatsApp, and YouTube, integrated and connected as a center for information retrieval.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_48-Social_Customer_Relationship_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-contact Facial based Vital Sign Estimation using Convolutional Neural Network Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130546</link>
        <id>10.14569/IJACSA.2022.0130546</id>
        <doi>10.14569/IJACSA.2022.0130546</doi>
        <lastModDate>2022-05-31T14:40:27.5270000+00:00</lastModDate>
        
        <creator>Nor Surayahani Suriani</creator>
        
        <creator>Nur Syahida Shahdan</creator>
        
        <creator>Nan Md. Sahar</creator>
        
        <creator>Nik Shahidah Afifi Md. Taujuddin</creator>
        
        <subject>rPPG; remote heart rate estimation; respiration rate; fitness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>A rapid heart rate may indicate early heart disease, which could result in abrupt mortality if a heart attack occurs while exercising. A fatal incident is usually precipitated by a heart attack during strenuous exercise. This paper proposed non-invasive health monitoring through remote photoplethysmography (rPPG) analysis, captured by an RGB video camera, to measure a wide range of biological data. Non-contact facial-based vital sign prediction can facilitate checking pulse rate and respiration rate regularly. Several studies have evaluated rPPG signals under a variety of static conditions with little head movement, including different skin tones, camera angles, and distances from the camera. A study of heart rate (HR) and breathing rate (BR) estimation from facial videos for fitness applications is presented in this paper. Most studies still lack a way to estimate vital signs from facial videos, especially for physical activity applications. A face detector was applied to three regions of interest on facial landmarks for vital sign estimation. Then, an rPPG method with a convolutional neural network (CNN) is presented to construct a spatio-temporal mapping of essential characteristics and estimate vital signs from a sequence of facial images of people after various types of exercise. This will allow people to keep track of their health while exercising and to create a tailored training program based on their physiological preferences. The absolute error (AE) between the estimated HR and the reference HR across all experiments is 2.16 &#177; 2.2 beats/min, while the AE between the estimated BR and the reference BR is 1.53 &#177; 2.3 breaths/min.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_46-Non_contact_Facial_based_Vital_Sign_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on MCT vs. DCT: Who is the Winner in COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130547</link>
        <id>10.14569/IJACSA.2022.0130547</id>
        <doi>10.14569/IJACSA.2022.0130547</doi>
        <lastModDate>2022-05-31T14:40:27.5270000+00:00</lastModDate>
        
        <creator>Omar Khattab</creator>
        
        <subject>COVID-19; coronavirus disease; manual contact tracing; digital contact tracing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Coronavirus disease (COVID-19) is a contagious disease that appeared in late 2019 and is caused by a virus called SARS-CoV-2. It is a pandemic spreading across the whole world, impacting millions of people and sadly causing deaths. There are two main Contact Tracing Methods (CTMs) to limit and slow down its transmission: Manual Contact Tracing (MCT) and Digital Contact Tracing (DCT). MCT abides by the World Health Organization&#39;s (WHO) guidance on COVID-19 in terms of properly applying social distancing, wearing masks, washing hands, using sanitizers, etc., while DCT relies on the digital contact tracing applications developed by several countries. This survey focuses on these CTMs and the recently proposed solutions in this field, in order to highlight the drawbacks that negatively impact both the satisfaction and feasibility of using them. The findings of the survey will be beneficial for understanding the effectiveness of CTMs and currently proposed solutions, in order to develop a comprehensive smart tracking system able to cooperate with both MCT and DCT in effectively detecting, preventing, and slowing down the spread of COVID-19 or any similar pandemic in the future.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_47-A_Survey_on_MCT_vs_DCT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RENTAKA: A Novel Machine Learning Framework for Crypto-Ransomware Pre-encryption Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130545</link>
        <id>10.14569/IJACSA.2022.0130545</id>
        <doi>10.14569/IJACSA.2022.0130545</doi>
        <lastModDate>2022-05-31T14:40:27.5100000+00:00</lastModDate>
        
        <creator>Wira Z. A. Zakaria</creator>
        
        <creator>Mohd Faizal Abdollah</creator>
        
        <creator>Othman Mohd</creator>
        
        <creator>S. M. Warusia Mohamed S. M. M Yassin</creator>
        
        <creator>Aswami Ariffin</creator>
        
        <subject>Ransomware; crypto-ransomware; ransomware early detection; pre-encryption; pre-attack; ransomware lifecycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Crypto-ransomware is malware that locks its victim’s files for ransom using an encryption algorithm. Its popularity has risen at an alarming rate among the cyber community due to several successful worldwide attacks. The encryption employed has caused irreversible damage to victims’ digital files, even when the victim chose to pay the ransom. As a result, cybercriminals have found ransomware a lucrative and profitable cyber-extortion approach. Advances in computing power, memory, cryptography, and digital currency have fueled ransomware attacks, which spread through phishing emails, encrypt sensitive data, and harm the targeted victim. Most research in ransomware detection focuses on the encryption and post-attack phases. However, the damage done by crypto-ransomware is almost impossible to reverse, so an early detection mechanism is needed. For early detection of crypto-ransomware, behavior-based detection techniques are the most effective. This work describes RENTAKA, a machine learning framework for the pre-encryption detection of crypto-ransomware, in which the extracted features are based on the phases of the ransomware lifecycle. The experiments included five widely used machine learning classifiers: Na&#239;ve Bayes, kNN, Support Vector Machines, Random Forest, and J48. Based on our experiments, Support Vector Machines (SVM) performed with the best accuracy and TPR, 97.05% and 0.995, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_45-RENTAKA_A_Novel_Machine_Learning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Agriculture Monitoring System using Clean Energy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130544</link>
        <id>10.14569/IJACSA.2022.0130544</id>
        <doi>10.14569/IJACSA.2022.0130544</doi>
        <lastModDate>2022-05-31T14:40:27.4800000+00:00</lastModDate>
        
        <creator>Karim ABOUELMEHDI</creator>
        
        <creator>Kamal ELHATTAB</creator>
        
        <creator>Abdelmajid EL MOUTAOUAKKIL</creator>
        
        <subject>IoT; smart agriculture; new model; solar panel; esp32; mobile app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Internet of Things (IoT) technology makes all areas of human life more comfortable. The development of farms through the use of IoT positively influences agricultural production, not only by strengthening it but also by making it more profitable and reducing the cost of production. The goal of this paper is to offer a new IoT-based smart agriculture system that helps farmers get real-time data (temperature, humidity, soil moisture) for effective environmental monitoring, allowing them to increase overall yield and product quality. The farm monitoring system proposed in this paper is based on the ESP32 microcontroller with a set of sensors. This new model produces a real-time data feed that can be viewed online via a mobile app. The proposed system uses solar energy with a battery as its energy source.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_44-Smart_Agriculture_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Blended Learning Framework based on Artificial Intelligence using MobileNet Single Shot Detector and Centroid Tracking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130543</link>
        <id>10.14569/IJACSA.2022.0130543</id>
        <doi>10.14569/IJACSA.2022.0130543</doi>
        <lastModDate>2022-05-31T14:40:27.4630000+00:00</lastModDate>
        
        <creator>Abdul Wahid</creator>
        
        <creator>Muhammad Fajar B</creator>
        
        <creator>Jumadi M. Parenreng</creator>
        
        <creator>Seny Luhriyani</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <subject>Smart blended learning; mobilenet; single shot detector; convolutional neural network; centroid tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>The COVID-19 pandemic has affected all aspects of human life and has even forced humans to shift their life habits, including in the world of education. The learning model must shift from the traditional face-to-face pattern to a modern face-to-face pattern or an asynchronous pattern with information technology-based applications. Blended learning is one of the appropriate solutions to adjust to limited face-to-face learning conditions. Blended learning can be done, for example, by dividing the number of participants by 50% and having them enter on a scheduled basis; however, the time and effort used are then less efficient. Blended learning can also be done by conducting learning simultaneously with 50% of students in class and the remaining 50% attending via conference. This concept streamlines the time and effort used, but it creates a gap in the learning experience between students in class and students who learn via conference. An innovative blended learning framework is proposed to overcome these problems. The system seeks to present an online learning atmosphere that resembles the offline learning atmosphere. We created a system using camera technology and object detection that tracks the movement of the teacher, so that the teacher can move freely in the room without being stuck in front of the computer hosting the conference. The algorithms used are the MobileNet Single Shot Detector and Centroid Tracking. This research produces an accurate model for detecting teacher movement at distances of 2, 4, and 6 meters with camera installation heights of 1.5 and 3 meters.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_43-Smart_Blended_Learning_Framework_based_on_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Optimization of Nonlinear Piezoelectric Energy Harvesting System for IoT Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130542</link>
        <id>10.14569/IJACSA.2022.0130542</id>
        <doi>10.14569/IJACSA.2022.0130542</doi>
        <lastModDate>2022-05-31T14:40:27.4500000+00:00</lastModDate>
        
        <creator>Li Wah Thong</creator>
        
        <creator>Swee Leong Kok</creator>
        
        <creator>Roszaidi Ramlan</creator>
        
        <subject>Energy harvesting; nonlinear dynamics; piezoelectric; vibration; broadband frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Vibrational energy harvesting has been widely applied to power low-power electronics, microsystems, and wireless sensors, especially in Internet of Things (IoT) devices. This paper investigates the prospect of incorporating nonlinearity in a unimorph piezoelectric cantilever beam with a tip magnet placed under harmonic base excitation in an IoT-enabled environment. An empirical and theoretical analysis was performed on the impact of various parameters, such as the spacing distance between magnets, the presence of a magnetic tip mass, and the positioning of the vibration source, on the frequency response output. It was observed that the largest frequency spectrum is produced when the cantilever operates at its lowest resonant frequency. The positioning of the vibration source strongly affects the hysteresis region and frequency range in realizing broadband energy harvesting. Applying the vibration source to both the cantilever beam and the external magnets affects the harvester in terms of frequency range and the minimum distance for the bistable condition.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_42-Parameter_Optimization_of_Nonlinear_Piezoelectric_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Prediction of Software Defects using Random-tree Entropy based Feature Selection Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130541</link>
        <id>10.14569/IJACSA.2022.0130541</id>
        <doi>10.14569/IJACSA.2022.0130541</doi>
        <lastModDate>2022-05-31T14:40:27.4330000+00:00</lastModDate>
        
        <creator>Abdulaziz Alhumam</creator>
        
        <subject>Software defect prediction; machine learning; classification; feature entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Software systems have grown in size and complexity, which increases the difficulty of preventing software errors. As a result, forecasting the frequency of software module failures is critical to a developer’s efficiency. Many methods for detecting and correcting defects exist; hence, Machine Learning (ML) classification performance has to be greatly improved. Thus, in this study, a novel approach is proposed for predicting the number of software defects based on relevant variables using ML. First, feature entropy is computed for each raw feature, and the un-pruned random-tree features are identified. The relevant features are then selected through the correspondence between the entropy values and the un-pruned features. Finally, the National Aeronautics and Space Administration (NASA) PC-1 software defect dataset is sent to an ML-based model to estimate the number of faults. The initial PC-1 dataset comprises 37 raw features, of which only 8 critical characteristics are utilized to enhance the ML model. Experimental results show the random-tree feature selection strategy to be accurate and to potentially outperform existing methods. The proposed method considerably outperformed current ML models, obtaining an accuracy of 97.76% with the Random Forest (RF) model.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_41-Effective_Prediction_of_Software_Defects_using_Random_tree_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of Evacuation Assessment Algorithm in Finding the Best Indoor Evacuation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130540</link>
        <id>10.14569/IJACSA.2022.0130540</id>
        <doi>10.14569/IJACSA.2022.0130540</doi>
        <lastModDate>2022-05-31T14:40:27.4030000+00:00</lastModDate>
        
        <creator>Amir Haikal Abdul Halim</creator>
        
        <creator>Khyrina Airin Fariza Abu Samah</creator>
        
        <subject>Assessment algorithm; evacuation model; indoor evacuation; k-mean; validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>This paper proposes an indoor evacuation assessment algorithm. Indoor evacuation wayfinding to the nearest exit is difficult due to the intricacy of indoor layouts and the involvement of numerous people. Thus, researchers have developed evacuation models to assist evacuees in safely exiting a building. Unfortunately, building owners are unsure which evacuation model is best for their high-rise buildings. Therefore, we propose an assessment algorithm to help owners determine the best evacuation model. This research uses the floor plans of levels 13 and 14 of Yayasan Melaka, an office building, to simulate the evacuation, with ten simulation studies created for each level. The proposed assessment algorithm focuses on three microscopic evacuation models: agent-based, cellular automata, and social force. Hence, three simulation packages were used to represent these evacuation models: Pathfinder, PedGo, and AnyLogic. K-Means is then used to cluster the simulation time results, and the Elbow, Silhouette, and V-measure techniques were applied to ensure accurate clustering. We compiled and analyzed the results from the ten simulation studies for each level. Validation by comparing the final results shows that 70% of the lowest evacuation times come from Pathfinder, 30% from PedGo, and 0% from AnyLogic. Based on this result, the proposed assessment algorithm is shown to identify the best indoor evacuation model according to the attributes set for the building.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_40-Validation_of_Evacuation_Assessment_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Routing Protocol for Low Power and Lossy Networks Against Rank Attack: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130539</link>
        <id>10.14569/IJACSA.2022.0130539</id>
        <doi>10.14569/IJACSA.2022.0130539</doi>
        <lastModDate>2022-05-31T14:40:27.3870000+00:00</lastModDate>
        
        <creator>Laila Al-Qaisi</creator>
        
        <creator>Suhaidi Hassan</creator>
        
        <creator>Nur Haryani Binti Zakaria</creator>
        
        <subject>Wireless sensor networks; internet of things (IoT); routing security; RPL; objective function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>The Internet of Things (IoT) is witnessing widespread adoption in almost all aspects of life. IoT is defined as a network of interconnected devices applied in various environments, including smart cities, transportation, health, industry, the military, and agriculture. Its main purpose is to simplify the exchange and collection of data to and from deployment environments. Due to their small size and cost-effectiveness, Wireless Sensor Networks (WSN) form one of the core technologies deployed in IoT. Yet things interconnected with each other and exchanging data are prone to different kinds of security attacks; as a result, data can be compromised while transmitted from source to destination through intermediate nodes. The Routing Protocol for Low Power and Lossy Networks (RPL) offers only slight protection against routing attacks, but a network with limited energy sources, processors, and memory, deployed unattended in hostile environments, requires more scalable security measures. This paper focuses on investigating the problem of security provisioning in RPL. As such, a Systematic Literature Review (SLR) of security mechanisms proposed for RPL is presented. An extensive search was conducted on various online databases, and the findings were filtered by reviewing abstracts, introductions, and conclusions. Finally, a summary of recent research work is presented. This work highlights various aspects of securing RPL and provides an initial insight for studying them.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_39-Secure_Routing_Protocol_for_Low_Power_and_Lossy_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Architecture of Domain Independent and Extensible Intelligent Tutoring System based on Concept Dependencies and Subject Paths</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130538</link>
        <id>10.14569/IJACSA.2022.0130538</id>
        <doi>10.14569/IJACSA.2022.0130538</doi>
        <lastModDate>2022-05-31T14:40:27.3700000+00:00</lastModDate>
        
        <creator>Sanjay Singh</creator>
        
        <creator>Vikram Singh</creator>
        
        <subject>Personalized tutoring; intelligent tutoring system; adaptive learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Intelligent Tutoring Systems (ITS) seek to provide personalized tutoring to learners, but are often domain-specific and lack extensibility. When featuring extensibility and domain independence, it is a challenge to provide an appropriate level of personalization for every learner. In this paper, an architecture is proposed for a system that features domain independence and extensibility with personalization and automatic course improvements, without requiring persistent subject-expert intervention. The proposed architecture utilizes the notion of concept dependencies and the ability to sequence inter-dependent concepts intelligently into subject paths, enabling automated tutoring as well as effective course customization per learner. It features a separate interface through which subject experts can fulfil their assigned tasks, intelligently assisted by the system, without requiring ITS-building knowledge, and an API-based interface layer that supports today’s mobile requirements for better engagement.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_38-An_Architecture_of_Domain_Independent_and_Extensible_Intelligent_Tutoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
<title>A Survey on Genomic Dataset for Predicting the DNA Abnormalities Using ML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130537</link>
        <id>10.14569/IJACSA.2022.0130537</id>
        <doi>10.14569/IJACSA.2022.0130537</doi>
        <lastModDate>2022-05-31T14:40:27.3400000+00:00</lastModDate>
        
        <creator>Siripuri Divya</creator>
        
        <creator>Y. Bhavani</creator>
        
        <creator>Thota Mahesh Kumar</creator>
        
        <subject>Genomic data; deoxyribonucleic acid (DNA); machine learning algorithms; single nucleotide polymorphism (SNPs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Genomic data is used in bioinformatics for collecting, storing, and processing the genomes of living things. In processing this genetic information, machine learning algorithms play a vital role in building computational models based on statistical theory. This paper helps researchers who work with DNA datasets by applying machine learning methods. Feature-scaling machine learning techniques help in predicting the genome sequence for extrachromosomal amplification and in predicting tumor intensity in the human gene. Identification of unconventional chromosomes in the DNA sequence minimizes the structural risk. In this paper, researchers can get clear insight into classification, sequence prediction, fuzzy relationships, and SNPs on genome datasets. The performance of various existing models is measured using performance metrics and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_37-A_Survey_on_Genomic_Dataset_for_Predicting_the_DNA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Soil Color as a Measurement for Estimation of Fertility using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130536</link>
        <id>10.14569/IJACSA.2022.0130536</id>
        <doi>10.14569/IJACSA.2022.0130536</doi>
        <lastModDate>2022-05-31T14:40:27.3230000+00:00</lastModDate>
        
        <creator>N Lakshmi Kalyani</creator>
        
        <creator>Kolla Bhanu Prakash</creator>
        
<subject>Artificial neural networks; deep learning; soil classification; soil nutrients; data augmentation; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Soil behavior helps the farmer predict performance for growing crops and nutrient movement, and determine soil limitations. Traditional laboratory methods for soil classification require time and human resources and are expensive. This analysis examines the possibility of image recognition by artificial intelligence, using a machine learning technique called deep learning. The study trained a deep learning model based on a neural network, which was used to evaluate relationships between the parameters of the three-dimensional coordinates and the resulting soil classification, showing that Artificial Neural Networks (ANN) can be an effective tool for soil classification. This paper focuses on AI techniques used to predict the soil type and advise on crops to grow, and discusses transfer learning and its benefits.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_36-Soil_Color_as_a_Measurement_for_Estimation_of_Fertility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule-based Text Extraction for Multimodal Knowledge Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130535</link>
        <id>10.14569/IJACSA.2022.0130535</id>
        <doi>10.14569/IJACSA.2022.0130535</doi>
        <lastModDate>2022-05-31T14:40:27.2930000+00:00</lastModDate>
        
        <creator>Idza Aisara Norabid</creator>
        
        <creator>Fariza Fauzi</creator>
        
        <subject>Relation extraction; knowledge graph; multimodal knowledge graph; dependency relations; object/scene detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
<description>Textual information is widely integrated into visual tasks such as object/scene detection and image annotation. However, this textual information is not fully exploited, overlooking the wide background knowledge available for Web images. This work proposes a multimodal knowledge graph (KG) to represent the knowledge extracted from the unstructured text surrounding Web images and to integrate the relationships between image and text entities. Existing multimodal KG works have mainly focused on advanced visual processes for extracting entities and relations from images, and have employed only standard text processing techniques such as tokenization, stop-word removal, and part-of-speech (POS) tagging to capture nouns or basic subject-verb-object triples from text in the semantic enrichment process, neglecting other rich information in the text. Thus, the proposed approach addresses this as an automatic relation extraction (RE) problem, extracting all possible triples from the text, from simple to complex sentences, to construct a multimodal KG that can eventually be used as a training seed for visual tasks. A linguistic analysis is performed on a set of Web news articles consisting of news images and their related text. The dependency relations and POS information obtained are used to formulate a set of domain-agnostic entity-relation extraction rules. A triple extractor incorporating these rules is developed to extract triples from a news articles dataset and construct the proposed multimodal KG. Precision and Recall metrics are used to evaluate the extractor’s performance. The evaluation results show that the proposed approach can extract entities and relations from the dataset with a precision of 0.90 and a recall of 0.60. While the results are promising, the extraction rules can still be improved to capture all the knowledge.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_35-Rule_Based_Text_Extraction_for_Multimodal_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Evaluation based on CSE-UCLA Model Refers to Glickman Pattern for Evaluating the Leadership Training Program</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130534</link>
        <id>10.14569/IJACSA.2022.0130534</id>
        <doi>10.14569/IJACSA.2022.0130534</doi>
        <lastModDate>2022-05-31T14:40:27.2770000+00:00</lastModDate>
        
        <creator>Ketut Rusmulyani</creator>
        
        <creator>I Made Yudana</creator>
        
        <creator>I Nyoman Natajaya</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>E-evaluation; leadership training; evaluation of educational programs; CSE-UCLA; human resource development agency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This study aimed to describe the implementation of the level III leadership training program at the human resource development agency. The evaluation process used the CSE-UCLA model, which is divided into: (1) system assessment, (2) program planning, (3) program implementation, (4) program improvement, and (5) program certification. This study involved 100 participants from the human resource development agency, comprising institutional leaders, heads of divisions and heads of sub-sectors, lecturers/Widyaiswara, implementers/committees, superiors of alumni/mentors, and leadership training participants. Data were collected through questionnaires, interviews, observation, and documentation. The data were analyzed by quantitative descriptive analysis and verified with the Glickman quadrant, while the weaknesses found in the evaluation were examined using qualitative descriptive analysis. The results showed that the system assessment component met the good criteria with a percentage of 82.4%, the program planning component met the good criteria (86.4%), the program implementation component met the good criteria (82.8%), and the program improvement component met the good criteria (83.2%). Lastly, the program certification component met the good criteria (83%). The implementation of this leadership training program is a strategy for developing SCA competencies, including managerial, technical, and socio-cultural competencies, to create a world-class bureaucracy in 2025 through independent learning and learning through coaching and mentoring.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_34-E-Evaluation_based_on_CSE_UCLA_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: IoT based Portable Weather Station for Irrigation Management using Real-Time Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130533</link>
        <id>10.14569/IJACSA.2022.0130533</id>
        <doi>10.14569/IJACSA.2022.0130533</doi>
        <lastModDate>2022-05-31T14:40:27.2470000+00:00</lastModDate>
        
        <creator>Geeta Ambildhuke</creator>
        
        <creator>Barnali Gupta Banik</creator>
        
        <subject>Deep learning; edge analytics; internet of things; machine learning; irrigation management; precision agriculture; rainfall prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2022.0130533.retraction</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_33-IoT_based_Portable_Weather_Station_for_Irrigation_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Material Generation Algorithm with Probabilistic Neural Networks for Solving Classification Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130532</link>
        <id>10.14569/IJACSA.2022.0130532</id>
        <doi>10.14569/IJACSA.2022.0130532</doi>
        <lastModDate>2022-05-31T14:40:27.2300000+00:00</lastModDate>
        
        <creator>Mohammad Wedyan</creator>
        
        <creator>Omar Alshaweesh</creator>
        
        <creator>Enas Ramadan</creator>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Foziah Gazzawe</creator>
        
        <creator>Mohammed J. Alghamdi</creator>
        
        <subject>Artificial neural network (ANN); material generation algorithm (MGA); classification; probabilistic neural networks (PNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Classification is a machine learning task in which each element in a set of data is assigned to one of a predetermined set of groups. In data mining, the artificial neural network (ANN) is among the most significant methodologies because of the accurate results obtained through this algorithm when applied to many classification problems. ANNs comprise several types of networks, including feed-forward networks, feed-back networks, RBF networks, and probabilistic neural networks (PNN). The PNN is frequently utilized for classification problems. The primary goal of this research is to fine-tune the weights of neural networks to enhance classification accuracy. To accomplish this goal, the Material Generation Algorithm (MGA) was investigated together with the PNN in a hybrid model. Recently, the hybridization of algorithms has become ubiquitous and has led to the development of unique procedures that outperform those using a single algorithm. Several distinct classification tasks are used to test the efficiency of the suggested MGA-PNN approach. The MGA algorithm&#39;s efficiency is evaluated using the PNN training outcomes generated, and its outcomes are compared to those of other optimization strategies. The suggested algorithm&#39;s performance in terms of classification accuracy is evaluated on 11 benchmark datasets. The outcomes show that the MGA outperforms biogeography-based optimization and the firefly method in terms of classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_32-A_Hybrid_Material_Generation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Classification and Diagnosis of Skin Disease using Machine Learning and Image Processing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130531</link>
        <id>10.14569/IJACSA.2022.0130531</id>
        <doi>10.14569/IJACSA.2022.0130531</doi>
        <lastModDate>2022-05-31T14:40:27.2130000+00:00</lastModDate>
        
        <creator>Shaden Abdulaziz AlDera</creator>
        
        <creator>Mohamed Tahar Ben Othman</creator>
        
        <subject>Skin disease; image processing; classification; machine learning; diagnosis; SVM; RF; K-NN; acne; cherry angioma; melanoma; psoriasis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Skin diseases are a global health problem that is sometimes difficult to diagnose due to the complexity of the diseases and the time-consuming effort involved. In addition to affecting human health, skin diseases also affect psycho-social life if not diagnosed and controlled early. Advances in image processing techniques and machine learning enable an effective and fast diagnosis that helps detect skin disease early. This paper presents a model that takes an image of skin affected by a disease and diagnoses acne, cherry angioma, melanoma, and psoriasis. The proposed model is composed of five steps: image acquisition, preprocessing, segmentation, feature extraction, and classification. The model was evaluated using three machine learning algorithms, the Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (K-NN) classifiers, which achieved accuracies of 90.7%, 84.2%, and 67.1%, respectively. The SVM classifier result of the proposed model was also compared with other papers, and in most cases the proposed model’s result is better; in contrast, one paper achieved an accuracy of 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_31-A_Model_for_Classification_and_Diagnosis_of_Skin_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Wireless Mesh Networks for Load Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130530</link>
        <id>10.14569/IJACSA.2022.0130530</id>
        <doi>10.14569/IJACSA.2022.0130530</doi>
        <lastModDate>2022-05-31T14:40:27.2000000+00:00</lastModDate>
        
        <creator>Soma Pandey</creator>
        
        <creator>Govind R. Kadambi</creator>
        
        <subject>MATLAB; Simulink; multi-hop; wireless mesh network (WMN); gateway; simulation model; load management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Developing a simulation model for multi-hop multi-gateway wireless mesh networks (WMNs) is a challenging task. In this paper, such a simulation model is developed in a step-by-step approach using MATLAB Simulink, designed for easy optimization of layer 2. The proposed model is of special utility for simulating GateWay (GW) and packet scheduling within a multi-hop multi-gateway wireless network, and provides the flexibility of controlling the flow of packets through the network. Load management among the GWs of a WMN is performed in a distributed manner, wherein nodes optimize their path to a GW based on their local knowledge of neighborhood beacons. This paper presents a centralized Load Management Scheme (LMS) based on the formation of Gateway Service Sets (GSS), which are formed on the basis of equal load distribution among the GWs. The proposed LMS is then analyzed for throughput improvement by leveraging the MATLAB Simulink model developed in the paper. A throughput improvement of almost 600% and a 40% reduction in packet loss were observed through simulations, indicating the efficacy of the proposed LMS. The unique features of the simulation model presented in this paper are its scalability and flexibility in terms of network topology parameters.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_30-Modeling_Wireless_Mesh_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supervisory Control and Data Acquisition System for Machines used for Thermal Processing of Materials</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130529</link>
        <id>10.14569/IJACSA.2022.0130529</id>
        <doi>10.14569/IJACSA.2022.0130529</doi>
        <lastModDate>2022-05-31T14:40:27.1830000+00:00</lastModDate>
        
        <creator>Diego Patino</creator>
        
        <creator>Wilson Tafur Preciado</creator>
        
        <creator>Albert Miyer Suarez Castrillon</creator>
        
        <creator>Sir-Alexci Suarez Castrillon</creator>
        
        <subject>Automatic control of thermal processes; programmable logic controllers; monitoring and supervision in automatic control systems; machine components; Sensors and virtual instruments for control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>A supervisory control and data acquisition (SCADA) system has been developed for three machines used for the thermal processing of materials: a hot wire cutter, an induction heater and a welding test stand. The cutter uses a transformer with adjustable voltage between 20 V and 32 V and a current of 8 A, measuring the temperature of the wire via thermal expansion. The heater uses a 24 V, 15 A source and a type K thermocouple embedded in the sample to measure temperature. For welding, a temperature control system was implemented for the sample using a type K thermocouple and a cooling fan powered by a 12 V, 20 A source. The SCADA system consists of a PLC and a PC with a graphical interface, which serves to select the process to be worked on and displays the thermal history of the monitored object. The supervisory system uses a PC with a 32-bit Windows 7 operating system and an OPC software package running on the academic LabVIEW platform. It was designed to use a single human-machine interface for different thermal processes. This paper describes the important components of the system, including its architecture, software development and performance testing.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_29-Supervisory_Control_and_Data_Acquisition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Cross Synthesized Methodology for Movie Recommendation with Emotion Analysis through Ranking Score</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130528</link>
        <id>10.14569/IJACSA.2022.0130528</id>
        <doi>10.14569/IJACSA.2022.0130528</doi>
        <lastModDate>2022-05-31T14:40:27.1530000+00:00</lastModDate>
        
        <creator>R Lavanya</creator>
        
        <creator>B Bharathi</creator>
        
        <subject>Recommendation systems; collaborative filtering; styling; content based filtering; implicit feedback; hybrid recommendation; sentiment analysis; singular value decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Providing accurate movie recommendations to a user with limited computing capability is a challenging task. A hybrid system offers a good trade-off between the accuracy and the computation needed for such recommendations. Collaborative Filtering and Content-Based Filtering are two of the most widely employed methods of computing such recommendations. In this work, a highly efficient hybrid recommendation algorithm is proposed, which exploits users’ profile attributes to partition them into various groups and recommends movies to a user based on the ratings given by other similar users. Compared to traditional clustering-based CF recommendation schemes, our technique can effectively decrease the time complexity while attaining remarkable recommendation output. This approach mitigates the shortcomings of the individual methods while maintaining their advantages, allowing the system to be highly reactive to new viewer inputs without sacrificing the quality of the recommendations themselves. Building on other hybrids of a similar kind, our proposed system aims to reduce the complexity and features needed for calculation while maintaining good accuracy, further enhanced by utilizing Sentiment Analysis to rank the movies and take user reviews into consideration, which traditional hybrids do not. An analysis was then performed on the dataset, and the results show that the proposed recommendation system outperforms other traditional approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_28-Effective_Cross_Synthesized_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Entanglement Quantification and Classification: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130527</link>
        <id>10.14569/IJACSA.2022.0130527</id>
        <doi>10.14569/IJACSA.2022.0130527</doi>
        <lastModDate>2022-05-31T14:40:27.1370000+00:00</lastModDate>
        
        <creator>Amirul Asyraf Zhahir</creator>
        
        <creator>Siti Munirah Mohd</creator>
        
        <creator>Mohd Ilias M Shuhud</creator>
        
        <creator>Bahari Idrus</creator>
        
        <creator>Hishamuddin Zainuddin</creator>
        
        <creator>Nurhidaya Mohamad Jan</creator>
        
        <creator>Mohamed Ridza Wahiddin</creator>
        
        <subject>Entanglement quantification; quantum entanglement; entanglement classification; quantum measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Quantum entanglement is one of the essences of quantum mechanics and quantum information theory. It is a physical phenomenon in which entangled particles remain correlated with each other regardless of the distance between them. Quantum entanglement plays a significant role in areas such as quantum computing, quantum cryptography, and quantum teleportation. Quantifying entanglement is important for determining the depth of the entanglement level and has an impact on the performance of quantum information tasks. Entanglement classification is critical in quantum information theory for determining the class of states in a quantum system. The entanglement classification of two qubits as separable or entangled has been established. The classification of multiqubit entanglement is more challenging, especially in higher-qubit systems. The goal of this study is to identify established measures for entanglement quantification and entanglement classification methods through a systematic literature review. Indexed articles between 2017 and 2021 were selected as secondary resources from several sources based on specific keywords. This study presents a conceptual framework of entanglement quantification and classification based on previous studies.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_27-Entanglement_Quantification_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Food Waste Mobile Gamified Application Design Model using UX Agile Approach in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130526</link>
        <id>10.14569/IJACSA.2022.0130526</id>
        <doi>10.14569/IJACSA.2022.0130526</doi>
        <lastModDate>2022-05-31T14:40:27.1200000+00:00</lastModDate>
        
        <creator>Nooralisa Mohd Tuah</creator>
        
        <creator>Siti Khadijah Abd. Ghani</creator>
        
        <creator>Suryani Darham</creator>
        
        <creator>Suaini Sura</creator>
        
        <subject>Avatar; food waste disposal; mobile apps; gamification; data visualization; black soldier fly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Food waste is a significant worldwide issue in landfill management. Due to improper implementation, technology applications related to food waste collection and its management are still lacking in practice, and the available applications have yet to address the issue of food waste management. Constructing an interactive mobile application is necessary for managing food waste collection for the decomposition process using Black Soldier Fly (BSF) treatment. Furthermore, as the mobile application requires participation from users of various backgrounds, maintaining user involvement has become a priority. Gamification has emerged as one of the approaches that can favourably affect individual engagement behaviour. A comprehensive game element design is required, focusing on how gamification can influence user engagement. This study aims to model a food waste gamified mobile application design to benefit Malaysia&#39;s decomposition ecosystem. It includes gamification, management features, and data visualization for reporting, and will involve users from households, businesses, and the BSF farm. This paper presents the modelling process of a new mobile application design for this concept of study. The UX agile approach was used in gathering and designing the application requirements, as it allows for active participation from all stakeholders. The result shows that the experts agree on the application design. This research will indirectly benefit the BSF industry in Malaysia, and it will have a significant impact on gamification, user experience, and food waste management in the direction of a sustainable environment.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_26-A_Food_Waste_Mobile_Gamified_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A k-interpolation Model Clustering Algorithm based on Kriging Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130525</link>
        <id>10.14569/IJACSA.2022.0130525</id>
        <doi>10.14569/IJACSA.2022.0130525</doi>
        <lastModDate>2022-05-31T14:40:27.1070000+00:00</lastModDate>
        
        <creator>Guoyan Chen</creator>
        
        <creator>Yaping Qian</creator>
        
        <subject>Data clustering; Kriging method; k-means algorithm; interpolation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In this work, a k-interpolation model clustering algorithm based on the Kriging method is proposed, aiming to partition data according to the relationship between the response of interest and the input variables. The Kriging method is used to describe the relationship between the response of interest and the input variables. For each datum, the estimation errors of the clusters’ interpolation models are used to decide its assignment. An optimization strategy is proposed to obtain the final clustering results. The key factors affecting the proposed algorithm’s performance are studied on synthetic and real-world datasets. The results show that the proposed algorithm is able to cluster data according to the response of interest and the input variables, and provides competitive clustering performance compared with other clustering algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_25-A_k_interpolation_Model_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pre-trained Neural Network to Predict Alzheimer’s Disease at an Early Stage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130524</link>
        <id>10.14569/IJACSA.2022.0130524</id>
        <doi>10.14569/IJACSA.2022.0130524</doi>
        <lastModDate>2022-05-31T14:40:27.0730000+00:00</lastModDate>
        
        <creator>Ragavamsi Davuluri</creator>
        
        <creator>Ragupathy Rengaswamy</creator>
        
        <subject>Mel-spectrum; VGG-16; ADAM optimizer; softmax; flatten layers; ReLU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Alzheimer’s disease (AD), a neurological disease, has become common in the past few years. In this competitive world, individuals have to perform a lot of multitasking to prove their efficiency, and in this process the neurons in the brain can be affected over time, leading to Alzheimer’s disease. Existing models for identifying the disease at an early stage take an individual’s speech as input and convert it into textual transcripts. These transcripts are analyzed using neural network approaches integrated with NLP techniques. These techniques fail to process long conversation text at a fast rate, and some models are unable to recognize the replacement of unknown words during the translation process. The proposed system addresses these issues by converting the speech into image format; the resulting Mel-spectrum is passed as input to a pre-trained VGG-16. This process greatly reduces the pre-processing step and improves the efficiency of the system with a smaller kernel size architecture. The speech-to-image translation mechanism achieves improved accuracy compared to speech-to-text translators.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_24-A_Pre_trained_Neural_Network_to_Predict_Alzheimers_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved ISODATA Clustering Method with Parameter Estimation based on Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130523</link>
        <id>10.14569/IJACSA.2022.0130523</id>
        <doi>10.14569/IJACSA.2022.0130523</doi>
        <lastModDate>2022-05-31T14:40:27.0570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>ISODATA clustering; nonlinear merge and split; concaveness of probability density function: PDF; remote sensing satellite imagery data; clustering; genetic algorithm: GA; nonlinear optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>An improved ISODATA clustering method is proposed in which the merge and split parameters, as well as the initial cluster centers, are determined with a Genetic Algorithm (GA). Although ISODATA is a well-known clustering method, it has the problem that the iteration and clustering result depend strongly on the initial parameters, especially the threshold for merging and splitting. Furthermore, it shows relatively poor clustering performance when the probability density function of the data in question cannot be expressed as a convex function. To overcome this situation, a GA is introduced to determine the initial cluster centers as well as the merge and split threshold between the clusters being constructed. Through experiments with simulated data, the well-known University of California, Irvine (UCI) repository data for clustering performance evaluation, and imagery data from ASTER/VNIR (Advanced Spaceborne Thermal Emission and Reflection Radiometer / Visible and Near Infrared Radiometer) onboard the Terra satellite, the proposed method is confirmed to be superior to the conventional ISODATA method.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_23-Improved_ISODATA_Clustering_Method_with_Parameter_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Fraud Detection Model based on e-Payments Attributes a Case Study in Egyptian e-Payment Gateway</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130522</link>
        <id>10.14569/IJACSA.2022.0130522</id>
        <doi>10.14569/IJACSA.2022.0130522</doi>
        <lastModDate>2022-05-31T14:40:27.0430000+00:00</lastModDate>
        
        <creator>Mohamed Hassan Nasr</creator>
        
        <creator>Mohamed Hassan Farrag</creator>
        
        <creator>Mona Mohamed Nasr</creator>
        
        <subject>Data mining; decision tree; e-payments; fraud detection; e-payment gateways; e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>As per Payfort&#39;s 2017 report, titled State of payments in the Arab world, Egypt had a 22% yearly increase in the overall volume of internet payments in 2016, which was assessed at $6.2 billion. e-Payments are a major part of life nowadays in Egypt and the whole world, with tens of e-payment companies in Egypt, more than 5 million transactions done every day, and a 60 billion EGP volume of payments in 2018. Online and mobile fraud was estimated at $10.7 billion in 2015, as per Juniper Research, and was expected to reach $25.6 billion by the end of the decade. As the whole e-payments business is affected by fraud, e-payment firms and their consumers lose a lot of money. On the other hand, one of the most powerful techniques for fraud prediction is data mining, such as the decision tree. This paper introduces a prediction model for managing the risk of fraud in the Egyptian e-payment market that helps reduce the loss of money. This model is developed using a real dataset from one of Egypt&#39;s top e-payment gateways, based on the importance of e-payment transaction attributes such as transaction time, transaction amount, transaction limit, and transaction customer No. repetition limit. The importance of these attributes was determined using the decision tree in IBM SPSS Modeler and its predictor importance. The model significantly assisted in reducing fraud cases at a very high rate, with an accuracy of 88.45% and a precision of 93.5%, resulting in savings of 101970.52 EGP out of 131297.83 EGP.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_22-A_Proposed_Fraud_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Prediction of COVID-19 by using Recurrent LSTM Neural Network Model in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130521</link>
        <id>10.14569/IJACSA.2022.0130521</id>
        <doi>10.14569/IJACSA.2022.0130521</doi>
        <lastModDate>2022-05-31T14:40:27.0270000+00:00</lastModDate>
        
        <creator>N. P. Dharani</creator>
        
        <creator>Polaiah Bojja</creator>
        
        <subject>COVID-19; corona virus; KAGGLE; LSTM neural network; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The coronavirus has been declared a pandemic by the WHO. It spread all over the world within a few days. To control this spread, maintaining social distance and taking self-preventive measures are the best strategies for every citizen. Many researchers and scientists are continuing their research to find an effective vaccine. Machine learning models find that the coronavirus disease behaves in an exponential manner. To mitigate the consequences of this pandemic, efficient steps should be taken to analyze the disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. This prediction requires a database: the COVID-19 database is downloaded from the KAGGLE website and analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into sets for training and testing the chosen neural network model. The model is trained with the training dataset and tested with the testing dataset to predict the number of active cases in a particular state; here, we have concentrated on the state of Andhra Pradesh.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_21-Analysis_and_Prediction_of_COVID_19_by_using_Recurrent_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectivity Score of Simulation Tools towards Modelling Design in Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130520</link>
        <id>10.14569/IJACSA.2022.0130520</id>
        <doi>10.14569/IJACSA.2022.0130520</doi>
        <lastModDate>2022-05-31T14:40:27.0100000+00:00</lastModDate>
        
        <creator>Gauri Sameer Rapate</creator>
        
        <creator>N C Naveen</creator>
        
        <subject>Internet-of-things; real-world; application; simulation model; environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Simulation tools play an integral and significant role in studying the applicability and effectiveness of different algorithms for solving real-world problems cost-effectively. In the case of the Internet-of-Things, the issues associated with real-world implementation are exponentially multi-fold. Although various simulators have facilitated the evolution of schemes to address problems in IoT applications over the last few years, their applicability in the real world is highly questionable. Hence, this paper discusses the potential features of existing simulators (both commercial and research-based) and investigates the features of different assessment environment tools to understand their current state. The paper further contributes a distinct research pattern. The core contribution of this manuscript is a review of standard practices for using simulation tools along with different test environments. The paper also explores various unaddressed problems associated with the use of existing simulation environments/tools for investigating the challenging, practical environment of an IoT ecosystem. The outcome of this study will assist readers in deciding on a precise simulation tool for their work, and it also highlights the need for further customization to include the features identified in the research gap.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_20-Effectivity_Score_of_Simulation_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Feature Fusion for Biomedical Image Classification based on Ensemble Deep CNN and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130519</link>
        <id>10.14569/IJACSA.2022.0130519</id>
        <doi>10.14569/IJACSA.2022.0130519</doi>
        <lastModDate>2022-05-31T14:40:26.9800000+00:00</lastModDate>
        
        <creator>Sanskruti Patel</creator>
        
        <creator>Rachana Patel</creator>
        
        <creator>Nilay Ganatra</creator>
        
        <creator>Atul Patel</creator>
        
        <subject>Biomedical images; convolutional neural network; ensemble deep learning; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Biomedical imaging is a rapidly evolving field covering different types of imaging techniques used for diagnostic and therapeutic purposes. It plays a vital role in diagnosing and treating health conditions of the human body, and the classification of different imaging modalities is likewise vital for providing better care and treatment options to patients. Advancements in technology open new doors for medical professionals, including deep learning methods for automatic image classification. The convolutional neural network (CNN) is a special class of deep learning applied to visual imagery. In this paper, a novel spatial feature fusion based deep CNN is proposed for the classification of microscopic peripheral blood cell images. In this work, multiple transfer learning features are extracted through four pre-trained CNN architectures, namely VGG19, ResNet50, MobileNetV2, and DenseNet169. These features are fused into a generalized feature space that increases classification accuracy. The dataset considered for the experiment contains 17902 microscopic images categorized into 8 distinct classes. The results show that the proposed CNN model with fusion of multiple transfer learning features outperforms the individual pre-trained CNN models. The proposed model achieved 96.10% accuracy, a 96.55% F1-score, 96.40% precision, and 96.70% recall.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_19-Spatial_Feature_Fusion_for_Biomedical_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Goal Question Metric as an Interdisciplinary Tool for Assessing Mobile Learning Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130518</link>
        <id>10.14569/IJACSA.2022.0130518</id>
        <doi>10.14569/IJACSA.2022.0130518</doi>
        <lastModDate>2022-05-31T14:40:26.9630000+00:00</lastModDate>
        
        <creator>Sim Yee Wai</creator>
        
        <creator>Cheah WaiShiang</creator>
        
        <creator>Piau Phang</creator>
        
        <creator>Kai-Chee Lam</creator>
        
        <creator>Eaqerzilla Phang</creator>
        
        <creator>Nurfauza binti Jali</creator>
        
        <subject>Goal question metric; mobile application; evaluation; communication; interdisciplinary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Assessing a mobile learning application among interdisciplinary researchers is a non-trivial task. The Mandarin Learning App is a Mandarin 3D game tailor-made for students who choose PBC1033 Mandarin Language Level 1 as an elective course. It is an interdisciplinary project involving researchers from software engineering, computational science/mathematics, and the Faculty of Language Studies. In the project, the software engineers focus on producing a quality application, mostly through usability studies; the language teachers focus on students’ study performance upon using the Mandarin Learning App; and the mathematicians focus on finding statistical dependencies in the collected data through various statistical packages. Hence, we face issues such as: How do we reach a consensus in assessing the Mandarin 3D game? How do we enable discussion among the researchers? How do we consolidate the results so that we can understand them? We introduce the Goal Question Metric to tackle these issues. In this paper, we demonstrate how the Goal Question Metric is used to form a holistic view of assessing requirements on mobile applications, guide the discussion, and reach consensus in analyzing the results of the evaluation. The contribution of this paper is the introduction of the Goal Question Metric as an interdisciplinary tool for assessing a mobile learning application. With the Goal Question Metric, we demonstrate how the assessment can be structured from different viewpoints in a comprehensive and systematic manner: 1) better structuring of the experiments, 2) reaching consensus among researchers from different disciplines, 3) analyzing the dependencies among various experiments, and 4) finding hidden results.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_18-Goal_Question_Metric_as_an_Interdisciplinary_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Data Protection Model dubbed Harricent_RSECC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130517</link>
        <id>10.14569/IJACSA.2022.0130517</id>
        <doi>10.14569/IJACSA.2022.0130517</doi>
        <lastModDate>2022-05-31T14:40:26.9500000+00:00</lastModDate>
        
        <creator>Frimpong Twum</creator>
        
        <creator>Vincent Amankona</creator>
        
        <creator>Yaw Marfo Missah</creator>
        
        <creator>Ussiph Najim</creator>
        
        <creator>Michael Opoku</creator>
        
        <subject>Elliptic curve cryptography; encoding; encryption; Reed Solomon; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Every organization subsists on data, which is a quintessential resource. Quite a number of studies have been carried out on procedures that can be deployed to enhance data protection. However, the available literature indicates that most authors have focused on either encryption or encoding schemes to provide data security. The ability to integrate these techniques and leverage their strengths to achieve robust data protection is the pivot of this study. As a result, a data protection model dubbed Harricent_RSECC has been designed and implemented to achieve the study’s objective through the utilization of Elliptic Curve Cryptography (ECC) and Reed Solomon (RS) codes. The model consists of five components, namely: message identification, generator module, data encoding, data encryption, and data signature. The result is the generation of Reed Solomon codewords, cipher texts, and hash values, which are utilized to detect and correct corrupt data, obfuscate data, and sign data, respectively, during transmission or storage. The contribution of this paper is the combination of encoding and encryption schemes to enhance data protection, ensuring confidentiality, authenticity, integrity, and non-repudiation.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_17-Implementation_of_a_Data_Protection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Bio-inspired Optimization Method for Supervised Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130516</link>
        <id>10.14569/IJACSA.2022.0130516</id>
        <doi>10.14569/IJACSA.2022.0130516</doi>
        <lastModDate>2022-05-31T14:40:26.9330000+00:00</lastModDate>
        
        <creator>Montha Petwan</creator>
        
        <creator>Ku Ruhana Ku-Mahamud</creator>
        
        <subject>Bio-inspired optimization; swarm intelligence; evolutionary algorithm; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Feature selection is a technique commonly used to prepare significant features or produce understandable data for improving classification tasks. Bio-inspired optimization algorithms have been successfully used to perform feature selection; their exploration and exploitation mechanisms are inspired by the way living things search for food sources and by biological evolution in nature. Nevertheless, irrelevant, noisy, and redundant features persist because these algorithms fall into local optima in the case of high dimensionality. Thus, this review is conducted to shed some light on techniques that have been used to overcome this problem. A taxonomy of bio-inspired algorithms is presented, along with their performances and limitations, followed by the techniques used in supervised feature selection in terms of data perspectives and applications. This review also includes an analysis of supervised feature selection on large datasets, which shows that recent studies focus on metaheuristic methods because of their promising results. In addition, a discussion of some open issues is presented for further research.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_16-A_Review_on_Bio_inspired_Optimization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Analysis on an Improved Anonymous Authentication Protocol for Wearable Health Monitoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130515</link>
        <id>10.14569/IJACSA.2022.0130515</id>
        <doi>10.14569/IJACSA.2022.0130515</doi>
        <lastModDate>2022-05-31T14:40:26.9030000+00:00</lastModDate>
        
        <creator>Gayeong Eom</creator>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Younsung Choi</creator>
        
        <subject>Authentication protocol; health status; physiological data; security analysis; WHMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The wearable health monitoring system (WHMS) plays a significant role in enabling medical experts to collect and use patient medical data. The WHMS is becoming more popular than in the past on mobile devices due to meaningful progress in wireless sensor networks. However, because the health data used by the WHMS relate to privacy, they have to be protected from malicious access when transmitted wirelessly. Jiang et al. proposed a two-factor authentication protocol suitable for WHMSs using a fuzzy verifier. However, Jiaqing Mo et al. revealed that the protocol proposed by Jiang et al. had various security vulnerabilities and proposed an authentication protocol with improved security and guaranteed anonymity for WHMSs. In this paper, we analyse the authentication protocol proposed by Jiaqing Mo et al. and identify problems with offline identification and password guessing attacks, an operation-process bit mismatch, lack of perfect forward secrecy, lack of mutual authentication, and insider attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_15-Security_Analysis_on_an_Improved_Anonymous_Authentication_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficacy of the Image Augmentation Method using CNN Transfer Learning in Identification of Timber Defect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130514</link>
        <id>10.14569/IJACSA.2022.0130514</id>
        <doi>10.14569/IJACSA.2022.0130514</doi>
        <lastModDate>2022-05-31T14:40:26.8870000+00:00</lastModDate>
        
        <creator>Teo Hong Chun</creator>
        
        <creator>Ummi Rabaah Hashim</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <creator>Lizawati Salahuddin</creator>
        
        <creator>Ngo Hea Choon</creator>
        
        <creator>Kasturi Kanchymalay</creator>
        
        <subject>Convolutional neural network; deep learning; defect identification; image augmentation; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This paper discusses the efficacy of the data augmentation method deployed in many Convolutional Neural Network (CNN) algorithms for determining timber defects in four timber species from Malaysia. A sequence of morphological transformations, involving x-reflection and rotation, was executed on the timber defect dataset to aid CNN model training and generate the finest CNN models offering the best classification performance in determining timber defects. To further assess the CNN algorithms’ classification performance, several deep learning hyperparameters, namely epoch and learning rate, were tried on the Merbau timber species. A comparison of classification performance was then made with the other timber classes, namely KSK, Meranti, and Rubberwood. According to the results, the ResNet50 algorithm, which is based on the transfer learning methodology, outclasses the other CNN algorithms (ShuffleNet, AlexNet, MobileNetV2, NASNetMobile, and GoogLeNet) with the best classification accuracy of 94.59% using the data augmentation method. Furthermore, the outcomes indicate that utilising an augmentation methodology not only addresses the issue of a limited dataset but also enhances CNN classification output by 5.78%, supported by a T-test demonstrating a significant difference across all CNN algorithms except AlexNet. Our study on hyperparameter optimisation utilising learning rate and epoch is sufficient to infer that a greater number of epochs and a higher learning rate do not deliver superior precision in CNN classification. The experimental findings suggest that the proposed methods improve the CNN algorithms’ classification performance in the identification of timber defects while tackling the imbalanced and limited dataset challenges.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_14-Efficacy_of_the_Image_Augmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effectiveness and Usability of AR-based OSH Application: HazHunt</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130513</link>
        <id>10.14569/IJACSA.2022.0130513</id>
        <doi>10.14569/IJACSA.2022.0130513</doi>
        <lastModDate>2022-05-31T14:40:26.8700000+00:00</lastModDate>
        
        <creator>Ahmad A. Kamal</creator>
        
        <creator>Syahrul N. Junaini</creator>
        
        <creator>Abdul H. Hashim</creator>
        
        <subject>OSH training; computer-aided training; online learning; marker-based AR; AR-based application; SUS score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>This study investigates the effectiveness and usability of an augmented reality (AR) application called HazHunt for improving occupational safety and health (OSH) training. Previous research shows that AR has been growing in popularity as an innovative tool to enhance hazard identification courses. HazHunt, a marker-based AR app, was first developed using the Vuforia software with OSH experts&#39; guidance. Then, two online sessions of a hazard identification course were conducted, in which the experimental group&#39;s (EG) training was enhanced with HazHunt. Analysis shows that the EG scored better (mean = 13.82, s = 3.38, n = 22) than the CG (mean = 13.41, s = 2.15, n = 22) in the post-quiz, but this difference is statistically non-significant, with t(21) = 0.48 and one-tail p = 0.32. The Reduced Instructional Materials Motivation Survey (RIMMS) shows that EG participants obtained the highest confidence levels among the attention, relevance, confidence, and satisfaction (ARCS) factors in learning motivation. The System Usability Scale (SUS) score of HazHunt recorded the maximum count of &#39;Good&#39; ratings (mean = 78.41, n = 8). It is concluded that HazHunt has a positive impact on enhancing OSH training in terms of effectiveness and motivation. HazHunt also scored a high SUS score among the EG.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_13-Evaluating_the_Effectiveness_and_Usability_of_AR_based_OSH.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Individual Risk Classification of Crime Groups using Ensemble Classifier Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130512</link>
        <id>10.14569/IJACSA.2022.0130512</id>
        <doi>10.14569/IJACSA.2022.0130512</doi>
        <lastModDate>2022-05-31T14:40:26.8570000+00:00</lastModDate>
        
        <creator>Ardhito P. Anggana</creator>
        
        <creator>Amalia Zahra</creator>
        
        <subject>Priority scale; data mining; classification; ensemble classifier method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Crime, especially terrorist attacks, is among the most significant challenges facing humanity worldwide and should be carefully considered. Determining the priority scale for anticipating individual terrorist groups is not easy and significantly affects work activities and subsequent decision-making. Priority scale decisions should be made carefully so that team members do not simply choose a desired priority target. Determining the exact priority scale for a target can be influenced by several factors, such as desire and ability, using an intelligence dataset. This research aims to find out the ability of each target and the pattern to be carried out. Based on this problem, the study uses the K-Nearest Neighbor (KNN), Na&#239;ve Bayes (NB), Decision Tree (DT), and Ensemble Bagging methods. Each of these algorithms has its own characteristics; this classification technique can group priority targets according to their similarities, abilities, and desires. The value of each method can be used as a reference to determine the correct group information, helping officers determine the next steps. The study obtained a maximum accuracy of 70.25% using the Ensemble Bagging-Backward Elimination-K-Nearest Neighbor (KNN) classification method with 20 features. The results present the tests conducted and the final analysis and conclusions based on accuracy and recall performance. The precision performance revealed that the Ensemble Bagged KNN is more precise than KNN, Na&#239;ve Bayes, Decision Tree, Bagging Na&#239;ve Bayes, and Bagging Decision Tree. The KNN Bagging ensemble model can improve accuracy, map individuals, and detect who should be closely monitored based on the predictive results.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_12-Individual_Risk_Classification_of_Crime_Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Security and Payment Method On Consumers’ Perception of Marketplace in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130511</link>
        <id>10.14569/IJACSA.2022.0130511</id>
        <doi>10.14569/IJACSA.2022.0130511</doi>
        <lastModDate>2022-05-31T14:40:26.8400000+00:00</lastModDate>
        
        <creator>Mdawi Alqahtani</creator>
        
        <creator>Marwan Ali Albahar</creator>
        
        <subject>Payment security; digital strategy; digital transformation; user security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Digital transformation has accelerated in recent years, and COVID-19 has resulted in a rise in overall internet spending. Businesses must take measures to ensure that customers have a safe and enjoyable online purchasing experience. In this paper, customers’ security perceptions regarding the most popular e-commerce applications in Saudi Arabia are explored. Surveys were distributed online via Google Forms to 200 participants in total as part of a cross-sectional research design using quantitative methodology. The main findings relate to testing the eight main hypotheses of the research, which concerned whether certain factors were important in forming customers’ perceived trust. Five factors (trust, security, reputation, benefits, and convenience) were found to have a positive effect, while the remaining three (familiarity, size, and usefulness) were not. Finally, this study recommends various actions for practitioners and policymakers to improve customer perceptions of payment methods and security in Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_11-The_Impact_of_Security_and_Payment_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SHD-IoV: Secure Handover Decision in IoV</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130510</link>
        <id>10.14569/IJACSA.2022.0130510</id>
        <doi>10.14569/IJACSA.2022.0130510</doi>
        <lastModDate>2022-05-31T14:40:26.8070000+00:00</lastModDate>
        
        <creator>Hala E. I. Jubara</creator>
        
        <subject>Internet of Vehicle (IoV); HO; L2; Stream Control Transmission Protocol (SCTP); Secure Handover Decision (SHD); security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The Internet of Vehicles (IoV) comprises smart vehicles connected over the Internet. The continuously increasing urban population and the swift growth of cities result in vehicles moving at various speeds. High speeds may increase the handover delay (HoD), causing insecure connections due to handover interruption. Some network protocols try to overcome this problem without considering transport-layer support. This article proposes a dynamic HO algorithm with a cross-layer architecture, called Secure Handover Decision (SHD) in IoV, to make the protocol layers aware of the consecutive HOs of the vehicle. The results show that vehicle communication in IoV becomes more secure and lossless by reducing HoD on both the vehicle and network sides during fast movement.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_10-SHD_IoV_Secure_Handover_Decision_in_IoV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-instance Finger Vein-based Authentication with Secured Templates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130509</link>
        <id>10.14569/IJACSA.2022.0130509</id>
        <doi>10.14569/IJACSA.2022.0130509</doi>
        <lastModDate>2022-05-31T14:40:26.7930000+00:00</lastModDate>
        
        <creator>Swati K. Choudhary</creator>
        
        <creator>Ameya K. Naik</creator>
        
        <subject>Finger vein; multi-instance; authentication; cancelable; template protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Illegitimate access to biometric templates is one of the major issues to be handled in authentication systems. In this work, we propose to use two instances of finger vein images, which inherit the advantages of a robust multi-modal biometric authentication system without needing different sensors. Two local texture feature extraction methods are evaluated on standard finger-vein datasets. Fused discriminating features with reduced dimensionality lower the computational cost of the system. A cancelable template protection scheme, Gaussian Random Projection based Index-of-Max, is then applied to embed privacy and security in the templates. The foremost template protection properties, namely revocability, non-invertibility, and unlinkability, are observed to be significantly obeyed by the proposed system with considerable authentication performance. The recognition performance of the proposed methods is compared with some previously reported finger vein systems and is observed to be less complex and to outperform them on the combined basis of authentication and template protection. Thus, the proposed system utilizes multiple pieces of evidence and provides a balanced performance with respect to authentication, template protection, and computational cost.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_9-Multi_instance_Finger_Vein_based_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Fault Diagnosis Method based on Wavelet Packet Energy Spectrum and SSA-SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130508</link>
        <id>10.14569/IJACSA.2022.0130508</id>
        <doi>10.14569/IJACSA.2022.0130508</doi>
        <lastModDate>2022-05-31T14:40:26.7600000+00:00</lastModDate>
        
        <creator>Jinglei Qu</creator>
        
        <creator>Bingxin Ma</creator>
        
        <creator>Xiaojie Ma</creator>
        
        <creator>Mengmeng Wang</creator>
        
        <subject>Wavelet packet energy spectrum; sparrow search optimization; support vector machine; rolling bearing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>As one of the important components of mechanical equipment, the rolling bearing is widely used, and its motion state affects the safety and performance of equipment. To enhance the fault feature information in the bearing signal and improve the classification accuracy of the support vector machine, a hybrid fault diagnosis method based on the wavelet packet energy spectrum and SSA-SVM is proposed. Firstly, wavelet packet decomposition is used to decompose vibration signals into a frequency band energy spectrum, and the bearing characteristic information is constructed from the energy spectrum to extract and enhance the bearing fault characteristic information. Secondly, the penalty and kernel parameters are optimized globally by the sparrow search algorithm to improve the classification accuracy of the support vector machine, and the WPES-SSA-SVM model is then constructed. Finally, the proposed model is used to diagnose and analyze measured signals. Comparisons with BP, ELM, and SVM verify the effectiveness and superiority of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_8-Hybrid_Fault_Diagnosis_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Natural Language Processing on the Analysis of Unstructured Text: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130507</link>
        <id>10.14569/IJACSA.2022.0130507</id>
        <doi>10.14569/IJACSA.2022.0130507</doi>
        <lastModDate>2022-05-31T14:40:26.7600000+00:00</lastModDate>
        
        <creator>Walter Luis Roldan-Baluis</creator>
        
        <creator>Noel Alcas Zapata</creator>
        
        <creator>Maria Soledad Manaccasa Vasquez</creator>
        
        <subject>Artificial intelligence; natural language processing; machine learning; unstructured text analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The analysis of unstructured text has become a challenge for the community dedicated to natural language processing (NLP) and Machine Learning (ML). This paper aims to describe the potential of the most used NLP techniques and ML algorithms to address various problems afflicting our society. Several original articles published in SCOPUS during 2021 were reviewed. The applied approach was retrospective, transversal, and descriptive. The data collected were entered into the SPSS statistical software v25, and among the findings it was determined that the most used NLP technique was Term Frequency - Inverse Document Frequency (TF-IDF), while the most used supervised learning algorithm was the Support Vector Machine (SVM). Likewise, the predominant deep learning algorithm was Long Short-Term Memory (LSTM). This research aims to help experts and those starting out in research identify the most used NLP and ML algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_7-The_Effect_of_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Design of Home Fire Monitoring System based on NB-IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130506</link>
        <id>10.14569/IJACSA.2022.0130506</id>
        <doi>10.14569/IJACSA.2022.0130506</doi>
        <lastModDate>2022-05-31T14:40:26.7300000+00:00</lastModDate>
        
        <creator>Jun Wang</creator>
        
        <creator>Ting Ke</creator>
        
        <creator>Mengjie Hou</creator>
        
        <creator>Gangyu Hu</creator>
        
        <subject>NB-IoT; cloud platform; fire monitoring system; STM32F103C8T6; sensor; BP neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In the field of home fire monitoring, the currently relatively mature monitoring solutions include GPRS/GSM communication and Zigbee communication. The main disadvantage of GPRS wireless communication is high power consumption, and the disadvantage of Zigbee technology is that it needs to be combined with other communication technologies to realize remote monitoring. In addition, the above technical solutions all require self-built local or remote monitoring servers to save monitoring data. In view of these problems, this paper designs a home fire monitoring system based on NB-IoT technology and a cloud platform. The system uses a single-chip STM32F103C8T6 as the core controller and contains a sensor data acquisition module and a narrowband IoT communication module. The fusion of multi-sensor data is performed by a BP neural network algorithm. On the basis of remote transmission, the system solves the problems of high power consumption, high cost, and insufficient signal coverage of terminal hardware. The system can collect indoor environmental parameters and fire information in real time and upload them to the cloud platform for storage. If abnormal data is detected, an early warning message is issued. The feasibility of the system is verified, and the verification results show that the system works normally and its output is accurate, which meets the design requirements and allows wide application.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_6-The_Design_of_Home_Fire_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Ontology-based Domain Knowledge Model for IT Domain in e-Tutor Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130505</link>
        <id>10.14569/IJACSA.2022.0130505</id>
        <doi>10.14569/IJACSA.2022.0130505</doi>
        <lastModDate>2022-05-31T14:40:26.7130000+00:00</lastModDate>
        
        <creator>Ghanim Hussein Ali Ahmed</creator>
        
        <creator>Jawad Alshboul</creator>
        
        <creator>Laszlo Kovacs</creator>
        
        <subject>e-learning; knowledge model; domain model; e-tutor system; SPARQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Ontology as a technology has been studied in many areas and is being used in several fields. A number of studies have utilized ontology to manage problems such as interoperability in teaching materials, modeling and enriching education resources, and personalizing learning content recommendations in the educational context. A possible reason for the lack of success may be that simply posting lecture notes on the internet does not provide enough learning and training. However, this situation can be improved by using education software such as an e-tutoring system. The e-tutor system has built-in modules to track students&#39; performance and personalize learning by adapting to students&#39; learning styles, knowledge levels, and proper teaching techniques in e-learning systems. E-tutoring is an excellent area in the context of electronic instruction, since it provides adequate aid for learners and is becoming increasingly important for individual and collaborative learning. Thus, there has been significant interest in adopting e-tutoring to facilitate learning processes and enhance learners&#39; performance. This paper presents a domain knowledge model for an e-tutoring system that enables knowledge to be stored separately from the domain of interest and assists in storing transfer and prerequisite knowledge relationships. This technique helps students improve their learning progress. The model implementation is developed in Python and Owlready2. Two types of ontologies are provided: a general-concepts ontology of the domain knowledge and a specific domain knowledge ontology. This solution represents the knowledge to be learned, delivers input to the expert model, and eventually provides specific feedback, selects problems, generates guidance, and supports the student model.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_5-Development_of_Ontology_based_Domain_Knowledge_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Neural Networks in the Adaptive Testing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130504</link>
        <id>10.14569/IJACSA.2022.0130504</id>
        <doi>10.14569/IJACSA.2022.0130504</doi>
        <lastModDate>2022-05-31T14:40:26.6830000+00:00</lastModDate>
        
        <creator>Ekaterina Vitalevna Chumakova</creator>
        
        <creator>Tatiana Alexandrovna Chernova</creator>
        
        <creator>Yulia Aleksandrovna Belyaeva</creator>
        
        <creator>Dmitry Gennadievich Korneev</creator>
        
        <creator>Mikhail Samuilovich Gasparian</creator>
        
        <subject>Adaptive testing system; artificial neural network; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The paper examines the use of adaptive testing systems that incorporate artificial neural network modules designed to solve the problem of choosing the next question, thereby forming an individual testing trajectory. The study presents an analysis of the data affecting the quality of problem-solving, proposes a general modular structure of the system, and describes the main data flows at the input of the artificial neural network. The proposed solution to the problem of choosing the difficulty of the question is to use feedforward neural networks. Different architectures and training parameters of artificial neural networks (weight update mechanisms, loss functions, number of training epochs, batch sizes) are compared. As an alternative, the option of using recurrent long short-term memory (LSTM) networks is considered.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_4-Use_of_Neural_Networks_in_the_Adaptive_Testing_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Replica Scheduling Strategy for Streaming Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130503</link>
        <id>10.14569/IJACSA.2022.0130503</id>
        <doi>10.14569/IJACSA.2022.0130503</doi>
        <lastModDate>2022-05-31T14:40:26.6670000+00:00</lastModDate>
        
        <creator>Shufan Li</creator>
        
        <creator>Siyuan Yu</creator>
        
        <creator>Fang Xiao</creator>
        
        <subject>Streaming data mining; dynamic programming; replica scheduling strategy; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>In a distributed storage and computing framework, traditional streaming data mining techniques are inefficient when processing massive amounts of data. In this paper, we take the replica in cloud storage as an allocatable resource for scheduling and propose the RepRM strategy to improve the efficiency of data mining and analysis. The key idea of this work is to treat data replicas as the resource to be allocated and to use the backward inference method of dynamic programming to solve for the data replica ratio, from which the optimal number of replicas is obtained. Experiments show that, compared with the traditional Hadoop scheduling method, after adopting the RepRM scheduling strategy the memory resource usage of a homogeneous cluster is reduced by about 40-50% during parallel mining of streaming data, and the throughput rate is increased by 20% to 30%.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_3-Replica_Scheduling_Strategy_for_Streaming_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI Powered Anti-Cyber Bullying System using Machine Learning Algorithm of Multinomial Na&#239;ve Bayes and Optimized Linear Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130502</link>
        <id>10.14569/IJACSA.2022.0130502</id>
        <doi>10.14569/IJACSA.2022.0130502</doi>
        <lastModDate>2022-05-31T14:40:26.6530000+00:00</lastModDate>
        
        <creator>Tosin Ige</creator>
        
        <creator>Sikiru Adewale</creator>
        
        <subject>Cyberbullying; anti cyberbullying; machine learning; NLP; social media; multinomial Na&#239;ve Bayes; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>“Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There have been a series of research efforts on cyberbullying that have been unable to provide a reliable solution. In this research work, we provide a permanent solution by developing a model capable of detecting and intercepting incoming and outgoing bullying messages with 92% accuracy. We also developed a chatbot automation messaging system to test our model, leading to the development of an Artificial Intelligence powered anti-cyberbullying system using the machine learning algorithms of Multinomial Na&#239;ve Bayes (MNB) and optimized linear Support Vector Machine (SVM). Our model is able to detect and intercept outgoing and incoming bullying messages and take immediate action.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_2-AI_Powered_Anti_Cyber_Bullying_System_using_Machine_Learning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Data Mining on a Secure Cloud Computing over a Web API using Supervised Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130501</link>
        <id>10.14569/IJACSA.2022.0130501</id>
        <doi>10.14569/IJACSA.2022.0130501</doi>
        <lastModDate>2022-05-31T14:40:26.6370000+00:00</lastModDate>
        
        <creator>Tosin Ige</creator>
        
        <creator>Sikiru Adewale</creator>
        
        <subject>Cloud computing; data warehouse; data mining; window service; Web API; machine learning algorithm; secure cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Ever since the internet era ushered in cloud computing, there has been an increase in the demand for the unlimited data available through cloud computing for data analysis, pattern recognition, and technology advancement. With this come problems of scalability, efficiency, and security threats. This research paper focuses on how data can be dynamically mined in real time for pattern detection in a secure cloud computing environment using a combination of a decision tree algorithm and Random Forest over a RESTful Application Programming Interface (API). We successfully implement data mining on cloud computing while bypassing direct interaction with the data warehouse and without any terminal involved, by using a combination of IBM Cloud storage, Amazon Web Services, an Application Programming Interface, and a Windows service, along with a decision tree and Random Forest algorithm for our classifier. We successfully bypass direct connection with the data warehouse and cloud terminal with 94% accuracy in our model.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_1-Implementation_of_Data_Mining_on_a_Secure_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Verifiable Secret Sharing in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305115</link>
        <id>10.14569/IJACSA.2022.01305115</id>
        <doi>10.14569/IJACSA.2022.01305115</doi>
        <lastModDate>2022-05-31T14:40:26.6200000+00:00</lastModDate>
        
        <creator>Likang Lu</creator>
        
        <creator>Jianzhu Lu</creator>
        
        <subject>Verifiable secret sharing; one-way function; internet of things; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Verifiable Secret Sharing (VSS) is a fundamental tool of cryptography and distributed computing in the Internet of Things. Since network bandwidth is a scarce resource, minimizing the amount of verification data will improve the performance of VSS. Existing VSS schemes, however, face limitations in the amount of verification data and the energy consumption of low-end devices, which make their adoption challenging in resource-limited IoTs. To address these limitations, we propose a VSS scheme based on Nyberg’s one-way Accumulator for one-way Hash Functions (NAHFs). The proposed VSS has two distinguishing features: first, the security of the scheme is based on NAHFs, whose computational requirements meet the basic criteria for known IoT devices, and second, upon receiving only one piece of verification data, participants can verify the correctness of both their shares and the secret without any communication. Experimental results show that, compared to the Feldman scheme and the Rajabi-Eslami scheme, the energy consumption of a participant in the proposed scheme is reduced by at least 24% and 83%, respectively, for a secret.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_115-A_Lightweight_Verifiable_Secret_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Computational Complexity of the COOL Screening Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305114</link>
        <id>10.14569/IJACSA.2022.01305114</id>
        <doi>10.14569/IJACSA.2022.01305114</doi>
        <lastModDate>2022-05-31T14:40:26.5900000+00:00</lastModDate>
        
        <creator>Mohamed Ghalwash</creator>
        
        <subject>Software engineering; screening tool; autoimmune disorder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>An autoimmune disorder, such as celiac disease or type 1 diabetes, is a condition in which the immune system attacks body tissues by mistake. This might be triggered by abnormality in the development of biomarkers such as autoantibodies, which are generated by unhealthy beta cells. Therefore, screening for such biomarkers is crucial for early diagnosis of autoimmune diseases. However, one of the fundamental questions of screening is when to screen subjects who might be at a higher risk of autoimmune disorder. This requires an exhaustive search to find the optimal ages of screening in retrospective cohorts. Very recently, a comprehensive tool was developed for screening in autoimmune disease. In this paper, we improve the computational time of the algorithm used in the screening tool. The new algorithm is more than 100 times faster than the original one. This improvement should help increase the utility of the tool among clinicians and research scientists in the community.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_114-Improving_the_Computational_Complexity_of_the_COOL_Screening_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OvSbChain: An Enhanced Snowball Chain Approach for Detecting Overlapping Communities in Social Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305113</link>
        <id>10.14569/IJACSA.2022.01305113</id>
        <doi>10.14569/IJACSA.2022.01305113</doi>
        <lastModDate>2022-05-31T14:40:26.5730000+00:00</lastModDate>
        
        <creator>Jayati Gulati</creator>
        
        <creator>Muhammad Abulaish</creator>
        
        <creator>Sajid Yousuf Bhat</creator>
        
        <subject>Clustering coefficient; community detection; overlapping communities; snowball sampling; social graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>Overlapping Snowball Chain is an extension of Snowball Chain that is based on the concept of community formation in line with the snowball chaining process. The inspiration behind this approach comes from the snowball sampling process, wherein a snowball grows to form a chain of nodes, leading to the formation of mutually exclusive communities in Snowball Chain. In the current work, nodes are allowed to be shared among different snowball chains in a graph, leading to the formation of overlapping communities. Unlike its predecessor Snowball Chain, the proposed technique does not require any hyper-parameter, which is often difficult to tune in most existing methods. The proposed algorithm works in two phases: overlapping chains are formed in the first phase, and they are then combined using a similarity-based criterion in the second phase. The communities identified at the end of the second phase are evaluated using different measures, including modularity, overlapping NMI, and running time, over both real-world and synthetic benchmark datasets. The proposed Overlapping Snowball Chain method is also compared with eleven state-of-the-art community detection methods.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_113-OvSbChain_An_Enhanced_Snowball_Chain_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cache Complexity of Cache-Oblivious Approaches: A Review and Extension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01305112</link>
        <id>10.14569/IJACSA.2022.01305112</id>
        <doi>10.14569/IJACSA.2022.01305112</doi>
        <lastModDate>2022-05-31T14:40:26.4970000+00:00</lastModDate>
        
        <creator>Inas Abuqaddom</creator>
        
        <creator>Sami Serhan</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <subject>Cache complexity; cache-oblivious algorithm; memory hierarchy; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(5), 2022</description>
        <description>The latest direction in cache-aware/cache-efficient algorithms is to use cache-oblivious algorithms based on the cache-oblivious model, which is an improvement of the external-memory model. The cache-oblivious model utilizes memory hierarchies without knowing the memories’ parameters in advance, since algorithms in this model are automatically tuned according to the actual memory parameters. As a result, cache-oblivious algorithms are particularly applicable to multi-level caches with changing parameters and to environments in which the amount of memory available to an algorithm can fluctuate. This paper shows the state of the art in cache-oblivious algorithms and data structures, each with its complexity with respect to cache misses, which is called cache complexity. Additionally, this paper introduces an extension to minimize the cache complexity of neural networks by applying an appropriate cache-oblivious approach to neural networks.</description>
        <description>http://thesai.org/Downloads/Volume13No5/Paper_112-Cache_Complexity_of_Cache_Oblivious_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Genetic Algorithm (EGA)-based Multi-Hop Path for Energy Efficient in Wireless Sensor Network (WSN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304114</link>
        <id>10.14569/IJACSA.2022.01304114</id>
        <doi>10.14569/IJACSA.2022.01304114</doi>
        <lastModDate>2022-05-02T09:35:24.9730000+00:00</lastModDate>
        
        <creator>Battina Srinuvasu Kumar</creator>
        
        <creator>S. G. Santhi</creator>
        
        <creator>S. Narayana</creator>
        
        <subject>Cluster head; energy efficient; multi-hop path; enhanced genetic algorithm; wireless sensor network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Wireless Sensor Networks (WSNs) encounter a number of performance issues. In a WSN, the majority of the energy is used to transfer data from sensor nodes to a base station (BS). Many different types of routing protocols have therefore been devised to help with the distribution of data in WSNs. Large-scale networks have been designed with minimal power consumption and multipurpose processing due to recent improvements in wireless communication and networking technology. For the time being, sensor energy remains a restricted resource for constructing routing protocols between sensor nodes and the base station, despite advances in energy harvesting technologies. For wireless sensor networks with far-flung cluster heads and base stations, direct transmission is a critical component, since it impacts the network&#39;s efficiency in terms of power consumption and lifespan. A new approach for identifying an effective multi-hop route between a source (CH) and a destination (BS) is investigated in this study in order to decrease power consumption and hence increase the life of the network (OMPFM). The suggested technique utilizes a genetic algorithm and a novel fitness metric to discover the best route. For selecting CHs and enhancing the speed and quality of the created chromosomes, we suggest two pre-processes, called CH-selection and Chromosome Quality Improvement (CHI). The proposed method is evaluated and compared to others of its kind using the MATLAB simulator. The proposed method outperforms LEACH, GCA, EAERP, GAECH, and HiTSeC in terms of the first-node-die metric by 35%, 34%, 26%, 19%, and 50%, respectively. It also outperforms other methods by 100% in terms of the last-node-die metric.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_114-An_Enhanced_Genetic_Algorithm_EGA_based_Multi_Hop_Path.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chatbots for the Detection of Covid-19: A Systematic Review of the Literature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304113</link>
        <id>10.14569/IJACSA.2022.01304113</id>
        <doi>10.14569/IJACSA.2022.01304113</doi>
        <lastModDate>2022-05-02T09:35:24.9270000+00:00</lastModDate>
        
        <creator>Antony Albites-Tapia</creator>
        
        <creator>Javier Gamboa-Cruzado</creator>
        
        <creator>Junior Almeyda-Ortiz</creator>
        
        <creator>Alberto Moreno L&#225;zaro</creator>
        
        <subject>Covid-19 diagnosis; chatbot; NLP; digital assistants; health; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>At present, the development of chatbots is one of the key activities for the diagnosis of Covid-19. The aim is to understand how these chatbots operate in the health area to make a diagnosis. The purpose of the research is to determine the state of the art on the use of chatbots and their impact on Covid-19 diagnosis during the last two years. The data sources consulted are IEEE Xplore, Taylor &amp; Francis Online, ProQuest, World Wide Science, Science Direct, Microsoft Academic, Google Scholar, ACM Digital Library, Wiley Online Library, and ETHzurich. The search strategy identified 5701 papers, of which 101 were selected through 8 selection criteria and 7 quality assessments. This review presents discussions regarding the methodologies used for chatbot development, i.e., the purposes and impact of using chatbots for Covid-19 diagnosis. In addition, it presents results on how important the development and implementation of chatbots are in the area of health in the face of this pandemic.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_113-Chatbots_for_the_Detection_of_Covid_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Deep Learning Approach for Emotion Detection in Arabic Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304112</link>
        <id>10.14569/IJACSA.2022.01304112</id>
        <doi>10.14569/IJACSA.2022.01304112</doi>
        <lastModDate>2022-04-30T05:22:32.6770000+00:00</lastModDate>
        
        <creator>Alaa Mansy</creator>
        
        <creator>Sherine Rady</creator>
        
        <creator>Tarek Gharib</creator>
        
        <subject>Deep learning; emotion detection; transformers; RNNs; Bi-LSTM; Bi-GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Nowadays, people use social media websites for different activities such as business, entertainment, following the news, and expressing their thoughts and feelings. This has initiated great interest in analyzing and mining such user-generated content. In this paper, the problem of emotion detection (ED) in Arabic text is investigated by proposing an ensemble deep learning approach to analyze user-generated text from Twitter in terms of the emotional insights that reflect different feelings. The proposed model is based on three state-of-the-art deep learning models. Two are special types of Recurrent Neural Networks (RNNs), namely Bi-LSTM and Bi-GRU, and the third is a pre-trained language model (PLM) based on BERT, called the MARBERT transformer. The experiments were evaluated using the SemEval-2018-Task1-Ar-Ec dataset, published for the multilabel Emotion Classification (EC) task of the SemEval-2018 competition. MARBERT is compared to one of the most widely used PLMs for the Arabic language (AraBERT). Experiments showed that MARBERT achieved better results, with improvements of 4%, 2.7%, 4.2%, and 3.5% in Jaccard accuracy, recall, macro F1, and micro F1 scores, respectively. Moreover, the proposed ensemble model outperformed the individual models (Bi-LSTM, Bi-GRU, and MARBERT). It also outperforms the most recent related work, with improvements ranging from 0.2% to 4.2% in accuracy and from 5.3% to 23.3% in macro F1 score.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_112-An_Ensemble_Deep_Learning_Approach_for_Emotion_Detection_in_Arabic_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Hybrid Model for Efficient Anomaly Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304111</link>
        <id>10.14569/IJACSA.2022.01304111</id>
        <doi>10.14569/IJACSA.2022.01304111</doi>
        <lastModDate>2022-04-30T05:22:32.6600000+00:00</lastModDate>
        
        <creator>Frances Osamor</creator>
        
        <creator>Briana Wellman</creator>
        
        <subject>Anomaly detection; system call sequence; convolution neural network; long short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>It is common among security organizations to analyze process system call trace data to predict anomalous behavior, and this remains an active research area. Learning-based algorithms can be employed to solve such problems since anomaly detection is a typical pattern recognition problem. With the rapid progress of operating systems, some datasets have become outdated and irrelevant. System call datasets such as the Australian Defence Force Academy Linux Dataset (ADFA-LD) are among the current cohort containing labeled system call traces for normal and malicious processes across various applications. In this paper, we propose a hybrid deep learning-based anomaly detection system. To improve the detection accuracy and efficiency of anomaly detection systems, a Convolutional Neural Network (CNN) combined with Long Short-Term Memory (LSTM) is employed. The raw sequence of system call traces is first fed to the CNN, reducing the dimension of the traces. This reduced trace vector is then fed to the LSTM network to learn the sequences of system calls and produce the final detection result. TensorFlow-GPU was used to implement and train the hybrid model, which was evaluated on the ADFA-LD dataset. Experimental results showed that the proposed method reduced training time with an enhanced anomaly detection rate. Therefore, this method lowers false alarm rates.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_111-Deep_Learning_based_Hybrid_Model_for_Efficient_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Model for the Diagnosis of Coffee Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304110</link>
        <id>10.14569/IJACSA.2022.01304110</id>
        <doi>10.14569/IJACSA.2022.01304110</doi>
        <lastModDate>2022-04-30T05:22:32.6470000+00:00</lastModDate>
        
        <creator>Fredy Martinez</creator>
        
        <creator>Holman Montiel</creator>
        
        <creator>Fernando Martinez</creator>
        
        <subject>Cercospora Coffeicola; convolutional neural network; coffee leaf miner; coffee leaf rust; deep learning; image processing; phoma leaf spot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The growing and marketing of coffee is an important source of economic resources for many countries, especially those with economies dependent on agricultural production, as is the case of Colombia. Although the country has carried out extensive research to develop the sector, most of its cultivation is done by small coffee-growing families with little technology and without major resources to access it. The quality of the coffee bean is highly sensitive to diverse diseases related to environmental conditions, fungi, bacteria, and insects, which directly and strongly affect the economic income of the entire production chain. In many cases the diseases spread rapidly, causing great economic losses. A quick and reliable diagnosis would have an immediate effect on reducing losses. In this sense, this research advances the development of an embedded system based on machine learning capable of performing on-site diagnoses by untrained personnel while taking advantage of the know-how of expert coffee growers. Such a system seeks to capture the visual characteristics of the most common plant diseases on low-cost, robust, and highly reliable hardware. We identified a deep network architecture with high performance in disease categorization and adjusted the model&#39;s hyperparameters to maximize its characterization capacity without incurring overfitting. The prototype was evaluated in the laboratory on real plants with recognized disease cases, in tests that matched the performance on the model&#39;s validation dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_110-A_Machine_Learning_Model_for_the_Diagnosis_of_Coffee_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Intrusion Detection for Imbalanced Network Traffic using Generative Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304109</link>
        <id>10.14569/IJACSA.2022.01304109</id>
        <doi>10.14569/IJACSA.2022.01304109</doi>
        <lastModDate>2022-04-30T05:22:32.6300000+00:00</lastModDate>
        
        <creator>Amani A. Alqarni</creator>
        
        <creator>El-Sayed M. El-Alfy</creator>
        
        <subject>Intrusion detection; machine learning; imbalance learning; conditional tabular generative adversarial networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Network security has become a serious issue since networks are vulnerable and subject to increasing intrusive activities. Therefore, network intrusion detection systems (IDSs) are an essential component in defending against these activities. One of the biggest issues encountered by IDSs is the class imbalance problem, which biases the performance of most machine learning models toward normal activities (the majority class). Several techniques have been proposed to overcome the class-imbalance problem, such as resampling, cost-sensitive learning, and ensemble learning. Other issues related to intrusion detection data include mixed data types and non-Gaussian, multimodal distributions. In this study, we employed a conditional tabular generative adversarial network (CTGAN) model with common machine learning algorithms to construct more effective detection systems while addressing the imbalance issue. CTGAN can generate samples of the minority class during training to make the dataset more balanced. To assess the effectiveness of the proposed IDS, we combined CTGAN with three machine learning algorithms: support vector machine (SVM), K-nearest neighbor (KNN), and decision tree (DT). The imbalanced NSL-KDD dataset was used, and several experiments were conducted. The results showed that CTGAN can improve the performance of imbalanced learning for intrusion detection with SVM and DT. On the other hand, KNN showed no improvement in performance since it is less sensitive to the class imbalance problem. Moreover, the results showed that CTGAN captures the distribution of discrete features better than that of continuous features.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_109-Improving_Intrusion_Detection_for_Imbalanced_Network_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Independent Channel Residual Convolutional Network for Gunshot Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304108</link>
        <id>10.14569/IJACSA.2022.01304108</id>
        <doi>10.14569/IJACSA.2022.01304108</doi>
        <lastModDate>2022-04-30T05:22:32.6300000+00:00</lastModDate>
        
        <creator>Jakub Bajzik</creator>
        
        <creator>Jiri Prinosil</creator>
        
        <creator>Roman Jarina</creator>
        
        <creator>Jiri Mekyska</creator>
        
        <subject>Acoustic signal processing; gunshot detection systems; audio signal analysis; machine learning; deep learning; residual networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The main purpose of this work is to propose a robust approach for detecting dangerous sound events (e.g., gunshots) to improve recent surveillance systems. Although the detection and classification of different sound events has a long history in signal processing, the analysis of environmental sounds remains challenging. The most recent works prefer a time-frequency 2-D representation of sound as input to convolutional neural networks. This paper includes an analysis of known architectures as well as a newly proposed Independent Channel Residual Convolutional Network architecture based on standard residual blocks. Our approach processes three different types of features in individual channels. The UrbanSound8k and Free Firearm Sound Library audio datasets are used to generate training and testing data, achieving a 98% F1 score. The model was also evaluated in the wild using a manually annotated movie audio track, achieving a 44% F1 score, which, while modest, is still better than other state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_108-Independent_Channel_Residual_Convolutional_Network_for_Gunshot_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye-movement Analysis and Prediction using Deep Learning Techniques and Kalman Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304107</link>
        <id>10.14569/IJACSA.2022.01304107</id>
        <doi>10.14569/IJACSA.2022.01304107</doi>
        <lastModDate>2022-04-30T05:22:32.6130000+00:00</lastModDate>
        
        <creator>Sameer Rafee</creator>
        
        <creator>Xu Yun</creator>
        
        <creator>Zhang Jian Xin</creator>
        
        <creator>Zaid Yemeni</creator>
        
        <subject>Eye Movement Classification; Eye Movement Prediction; Convolutional Neural Network (CNN); Recurrent Neural Network (RNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Eye movement analysis has gained significant attention from the eye-tracking research community, particularly for real-time applications. Eye movement prediction is predominantly required to compensate for sensor lag. Previously introduced eye-movement approaches focused on classifying eye movements into two categories: saccades and non-saccades. Although these approaches are practical and relatively simple, they confuse fixations and smooth pursuit by grouping both into the non-saccadic category. Moreover, eye movement analysis has been integrated into different applications, including psychology, neuroscience, human attention analysis, industrial engineering, marketing, and advertising. This paper introduces a low-cost eye-movement analysis system using Convolutional Neural Network (CNN) techniques and the Kalman filter to estimate and analyze eye position. The experimental results reveal that the proposed system can accurately classify and predict eye movements and detect pupil position in frames, independently of face tracking and detection. Additionally, the obtained results revealed that the overall performance of the proposed system is more efficient and effective compared to a Recurrent Neural Network (RNN).</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_107-Eye_movement_Analysis_and_Prediction_using_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Learning-based Malware Detection Methods using Image Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304106</link>
        <id>10.14569/IJACSA.2022.01304106</id>
        <doi>10.14569/IJACSA.2022.01304106</doi>
        <lastModDate>2022-04-30T05:22:32.6000000+00:00</lastModDate>
        
        <creator>Abdullah Sheneamer</creator>
        
        <creator>Essa Alhazmi</creator>
        
        <creator>James Henrydoss</creator>
        
        <subject>Malware detection; malware analysis; deep learning; machine learning; malware features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Malware, a short name for malicious software, is an emerging cyber threat. Various researchers have proposed ways to build advanced malware detectors that can mitigate threat actors and enable effective cybersecurity decisions. Recent research implements malware detectors based on visualized images of malware executable files. In this framework, a malware binary is converted into an image, and by extracting image features and applying machine learning methods, the malware is identified based on image similarity. In this research work, we implement the image-visualization-based malware detection method and conduct an empirical analysis of various learners to select a candidate learning classifier that provides better prediction performance. We evaluate our framework using the following malware datasets: Search And RetrieVAl of Malware (SARVAM), the Xue dataset, and the Canadian Institute for Cybersecurity (CIC) datasets. Our experiments include the following learning algorithms: Linear Regression, Random Forest, K-Nearest Neighbor (KNN), Classification and Regression Tree (CART), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and the deep learning-based Convolutional Neural Network (CNN). This image-visualization-based method proves effective in terms of prediction accuracy. Our initial study finds that the CNN algorithm provides relatively better performance when used against SARVAM and the other malware datasets. The CNN model achieved a high F1-score and accuracy in the binary classification task, reaching 95.70% and 99.50%, respectively. In the multi-class classification task, the model achieved 95.96% and 99.30% (F1-score and accuracy, respectively) for detecting malware types. Among the traditional classifiers, the KNN model outperforms the others.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_106-Empirical_Analysis_of_Learning_based_Malware_Detection_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Route Planning using Wireless Sensor Network for Garbage Collection in COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304105</link>
        <id>10.14569/IJACSA.2022.01304105</id>
        <doi>10.14569/IJACSA.2022.01304105</doi>
        <lastModDate>2022-04-30T05:22:32.5830000+00:00</lastModDate>
        
        <creator>Javier E. Ramirez</creator>
        
        <creator>Caleb M. Santiago</creator>
        
        <creator>Angelica Kamiyama</creator>
        
        <subject>K-Means; ant colony optimization; route planning; vehicle routing problem; garbage collection; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Garbage collection is a responsibility faced by all cities and, if not properly carried out, can generate greater costs or sanitary problems. Considering the sanitary situation due to the COVID-19 pandemic, it is necessary to take sanitary safety measures to prevent its spread. The challenge of the present work is to provide an efficient and effective solution that guarantees garbage collection that optimizes the use of resources and prioritizes attention to garbage containers located in or near contagion risk zones. To this end, this research proposes the integration of a basic garbage monitoring system, consisting of a wireless sensor network, and a route planning system that decomposes the Vehicle Routing Problem into the subproblems of clustering and sequencing of containers using the K-Means and Ant Colony algorithms. For garbage monitoring, a significant reduction in the measurement error of the waste level in the containers was achieved compared to the work of other authors. Regarding route planning, adequate error ranges were obtained when calculating the optimal values of the distance-traveled and travel-time indicators with respect to an exhaustive enumeration of routes.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_105-Route_Planning_using_Wireless_Sensor_Network_for_Garbage_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Learning Approach for Sentiment Classification of Malayalam Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304103</link>
        <id>10.14569/IJACSA.2022.01304103</id>
        <doi>10.14569/IJACSA.2022.01304103</doi>
        <lastModDate>2022-04-30T05:22:32.5670000+00:00</lastModDate>
        
        <creator>Soumya S</creator>
        
        <creator>Pramod K V</creator>
        
        <subject>Bi-LSTM; CNN; NLP; Malayalam; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Social media content in regional languages is expanding day by day. People use different social media platforms to express their suggestions and thoughts in their native languages. Sentiment Analysis (SA) is the established procedure for identifying the hidden sentiment in sentences and categorizing it as positive, negative, or neutral. SA of Indian languages is challenging due to the unavailability of benchmark datasets and lexical resources. The analysis has been done using lexicon-based, Machine Learning (ML), and Deep Learning (DL) techniques. In this work, baseline models and hybrid models of Deep Neural Network (DNN) architectures have been used to classify Malayalam tweets as positive, negative, or neutral. Since a sentiment-tagged dataset for Malayalam is not readily available, the analysis has been done on a manually created dataset and a translated Kaggle dataset. The hybrid models used in this study combine Convolutional Neural Networks (CNN) with variants of Recurrent Neural Networks (RNN): Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU). All these hybrid models improve the performance of Sentiment Classification (SC) compared to the baseline models LSTM, Bi-LSTM, and GRU.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_103-Hybrid_Deep_Learning_Approach_for_Sentiment_Classification_of_Malayalam_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoDEP: Towards an IoT-Data Analysis and Event Processing Architecture for Business Process Incident Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304104</link>
        <id>10.14569/IJACSA.2022.01304104</id>
        <doi>10.14569/IJACSA.2022.01304104</doi>
        <lastModDate>2022-04-30T05:22:32.5670000+00:00</lastModDate>
        
        <creator>Abir Ismaili-Alaoui</creator>
        
        <creator>Karim Baina</creator>
        
        <creator>Khalid Benali</creator>
        
        <subject>Business process management; internet of things; machine learning; complex event processing; data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>IoT is becoming a hot spot of technological innovation and promises economic development for many industries and services. This new paradigm shift affects all enterprise architecture layers, from infrastructure to business. Business Process Management (BPM) is one field among others affected by this new technology. To cope with the explosion of data and events resulting, among other sources, from IoT, data analytic processes combined with event processing techniques examine large data sets to uncover hidden patterns and unknown correlations between collected events, either at a very technical level (incident/anomaly detection, predictive maintenance) or at the business level (customer preferences, market trends, revenue opportunities), to provide improved operational efficiency, better customer service, and competitive advantages over rival organizations. In order to capitalize on the business value of the data and events generated by IoT sensors, IoT, Data Analytics, and BPM need to meet in the middle. In this paper, we propose an end-to-end IoT-BPM integration architecture (IoDEP: IoT-Data-Event-Process) for proactive business process incident management. A case study is presented, and the results obtained from our experiments demonstrate the benefit of our approach and confirm the validity of our assumptions.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_104-IoDEP_Towards_an_IoT_Data_Analysis_and_Event_Processing_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Performance of Optimizers and Tuning of Neural Networks for Spoof Detection Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304102</link>
        <id>10.14569/IJACSA.2022.01304102</id>
        <doi>10.14569/IJACSA.2022.01304102</doi>
        <lastModDate>2022-04-30T05:22:32.5530000+00:00</lastModDate>
        
        <creator>Ankita Chadha</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>Lorita Angeline</creator>
        
        <subject>Spoof detection; speech synthesis; voice conversion; convolutional neural networks; optimizers; gradient descent algorithm; spoofed speech; automatic speaker verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The breakthroughs in securing speaker verification systems have been challenging and have been explored by many researchers over the past five years. The compromised security of these systems is due to naturally sounding synthetic speech and the handiness of recording devices. In a spoof detection system, the back-end classifier plays an integral role in differentiating spoofed speech from genuine speech. This work conducts an experimental analysis and comparison of up-to-date optimization techniques for a modified form of the Convolutional Neural Network (CNN) architecture, the Light CNN (LCNN). The network is standardized by exploring various optimizers, such as Adaptive Moment Estimation and other adaptive algorithms, Root Mean Square Propagation, and Stochastic Gradient Descent (SGD), for the spoof detection task. Furthermore, the activation functions and learning rates are also tested to investigate the hyperparameter configuration for faster convergence and improved training accuracy. The countermeasure systems are trained and validated on the ASVspoof 2019 dataset with Logical Access (LA) and Physical Access (PA) attack data. The experimental results show that the optimizers perform better for LA attacks than for PA attacks. Additionally, the lowest Equal Error Rate (EER) of 9.07 is obtained for softmax activation with SGD with momentum on the LA attack, and 9.951 for SGD with Nesterov momentum on the PA attack.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_102-A_Comparative_Performance_of_Optimizers_and_Tuning_of_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Predictive Approach for Students’ Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01304101</link>
        <id>10.14569/IJACSA.2022.01304101</id>
        <doi>10.14569/IJACSA.2022.01304101</doi>
        <lastModDate>2022-04-30T05:22:32.5370000+00:00</lastModDate>
        
        <creator>Mohamed Farouk Yacoub</creator>
        
        <creator>Huda Amin Maghawry</creator>
        
        <creator>Nivin A Helal</creator>
        
        <creator>Sebastian Ventura</creator>
        
        <creator>Tarek F. Gharib</creator>
        
        <subject>Educational data mining; students’ performance; classification; feature selection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Applying data mining to improve the outcomes of the educational process has become one of the most significant areas of research. The most important cornerstone of the educational process is students’ performance. Therefore, early prediction of students’ performance aims to assist at-risk students by providing appropriate and early support and intervention. The objective of this paper is to propose an enhanced predictive model for students’ performance. Selecting the most important features is a crucial indicator for academic institutions to make appropriate interventions to help students with poor performance; accordingly, the top influencing features were selected in a feature selection step, reducing dimensionality and helping build an efficient predictive model. The DBSCAN clustering technique is applied in the preprocessing step to enhance the performance of the proposed predictive model. Various classification techniques are used, such as Decision Tree, Logistic Regression, Naive Bayes, Random Forest, and Multilayer Perceptron. Moreover, an ensemble method is used to resolve the trade-off between bias and variance, and two proposed ensemble methods are compared through the experiments. The proposed model is an ensemble classifier of the Multilayer Perceptron, Decision Tree, and Random Forest classifiers. It achieves an accuracy of 83.16%.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_101-An_Enhanced_Predictive_Approach_for_Students_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNN-LSTM Based Approach for DoS Attacks Detection in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130497</link>
        <id>10.14569/IJACSA.2022.0130497</id>
        <doi>10.14569/IJACSA.2022.0130497</doi>
        <lastModDate>2022-04-30T05:22:32.4900000+00:00</lastModDate>
        
        <creator>Salim Salmi</creator>
        
        <creator>Lahcen Oughdir</creator>
        
        <subject>Denial of Service (DoS); Wireless Sensor Networks (WSN); Convolutional Neural Network (CNN); Long Short-Term Memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>A denial-of-service (DoS) attack is a coordinated attack by many endpoints, such as computers or networks. These attacks are often performed by a botnet, a network of malware-infected computers controlled by an attacker. The endpoints are instructed to send traffic to a particular target, overwhelming it and preventing legitimate users from accessing its services. In this project, we used a CNN-LSTM network to detect and classify DoS intrusion attacks. Attack detection is considered a classification problem; the main aim is to classify the attack as Flooding, Blackhole, Normal, TDMA, or Grayhole. This research study uses a computer-generated wireless sensor network intrusion detection dataset. The wireless sensor network environment was simulated using the NS-2 network simulator based on the LEACH routing protocol to gather data from the network, which was preprocessed to produce 23 features classifying the state of the respective sensor and to simulate five forms of DoS attacks. The developed CNN-LSTM model was evaluated over 25 epochs, with accuracy, precision, and recall scores of 0.944, 0.959, and 0.922, respectively, all on a scale of 0-1.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_97-CNN_LSTM_Based_Approach_for_Dos_Attacks_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BMP: Toward a Broker-less and Microservice Platform for Internet of Thing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130496</link>
        <id>10.14569/IJACSA.2022.0130496</id>
        <doi>10.14569/IJACSA.2022.0130496</doi>
        <lastModDate>2022-04-30T05:22:32.4730000+00:00</lastModDate>
        
        <creator>Lam Nguyen Tran Thanh</creator>
        
        <creator>Khoi Le Quoc</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Tuan Dao Anh</creator>
        
        <creator>Hy Nguyen Vuong Khang</creator>
        
        <creator>Khoi Nguyen Huynh Tuan</creator>
        
        <creator>Hieu Le Van</creator>
        
        <creator>Nghia Huynh Huu</creator>
        
        <creator>Khoa Tran Dang</creator>
        
        <creator>Khiem Huynh Gia</creator>
        
        <subject>Internet of Things (IoT); gRPC; Single Sign-On; Broker-Less; Kafka; Microservice; Role-based Access Control (RBAC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The Internet of Things (IoT) is currently one of the most interesting technology trends. IoT is the foundation and driving force for the development of other scientific fields, based on its ability to connect things and the huge amount of data it collects. The IoT platform is considered the backbone of every IoT architecture: it not only allows the transfer of data between user and device but also feeds high-level applications such as big data or deep learning. As a result, the optimal design of the IoT platform is a very important issue that should be carefully considered in many aspects. Although the IoT is applied in multiple domains, there are three indispensable features: (a) data collection, (b) device and user management, and (c) remote device control. These functions usually come with requirements such as security, high-speed transmission, low energy consumption, reliable data exchange, and scalable systems. In this paper, we propose an IoT platform called BMP (Broker-less and Microservice Platform), designed according to microservice and broker-less architecture combined with the gRPC protocol to meet the requirements of the three features mentioned above. Our IoT platform addresses five issues: (1) address the limited processing capacity of devices, (2) reduce energy consumption, (3) speed up the transmission rate and enhance the accuracy of data exchange, (4) improve security mechanisms, and (5) improve the scalability of the system. Moreover, we describe an evaluation that proves the effectiveness of the BMP (i.e., a proof-of-concept) in three scenarios. Finally, the source code of the BMP is published on a GitHub repository to encourage further reproducibility and improvement.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_96-BMP_Toward_a_Broker_less_and_Microservice_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Activity Recognition in Car Workshop</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130495</link>
        <id>10.14569/IJACSA.2022.0130495</id>
        <doi>10.14569/IJACSA.2022.0130495</doi>
        <lastModDate>2022-04-30T05:22:32.4730000+00:00</lastModDate>
        
        <creator>Omar Magdy</creator>
        
        <creator>Ayman Atia</creator>
        
        <subject>Machine learning; human activity recognition; pose identification; industry analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Human activity recognition has become widespread in recent times. Due to modern advancements in technology, it has become an important solution to many problems in various fields such as medicine, industry, and sports, and the subject has attracted the attention of many researchers. Motivated by problems such as wasted time in maintenance centers, we propose a system that extracts worker poses from videos using pose classification. In this paper, we have tested two algorithms to detect worker activity. The system aims to detect and classify positive and negative worker activities in car maintenance centers, such as changing a tire, changing oil, using the phone, or standing without work. We conducted two experiments. The first compared algorithms to determine the most accurate at recognizing the activities performed; it used two different algorithms (the $1 recognizer and fast Dynamic Time Warping) on 3 participants in a controlled area. The $1 recognizer achieved 97% accuracy compared to 86% for fastDTW. The second experiment measured the performance of the $1 recognizer with different participants. The results show that the $1 recognizer achieved an accuracy of 94.2% when tested on 420 different videos.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_95-Human_Activity_Recognition_in_Car_Workshop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Multi View Spatio Temporal Spectral Feature Embedding on Skeletal Sign Language Videos for Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130494</link>
        <id>10.14569/IJACSA.2022.0130494</id>
        <doi>10.14569/IJACSA.2022.0130494</doi>
        <lastModDate>2022-04-30T05:22:32.4570000+00:00</lastModDate>
        
        <creator>SK. Ashraf Ali</creator>
        
        <creator>M. V. D. Prasad</creator>
        
        <creator>P. Praveen Kumar</creator>
        
        <creator>P. V. V. Kishore</creator>
        
        <subject>Laplacian eigenmaps; 3D convolutional networks; sign language recognition; multi view; skeletal data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The primary objective of this work is to build a competitive global view from multiple views that represents all the views within a class label. The first phase involves the extraction of spatio-temporal features from videos of skeletal sign language using a 3D convolutional neural network. In the second phase, the extracted spatio-temporal features are ensembled into a latent low-dimensional subspace for embedding in the global view. This is achieved by learning the weights of a linear combination of Laplacian eigenmaps of the multiple views. Subsequently, the constructed global view is used as training data for sign language recognition.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_94-Deep_Multi_View_Spatio_Temporal_Spectral_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Security Awareness of Mobile Applications using Semantic-based Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130493</link>
        <id>10.14569/IJACSA.2022.0130493</id>
        <doi>10.14569/IJACSA.2022.0130493</doi>
        <lastModDate>2022-04-30T05:22:32.4430000+00:00</lastModDate>
        
        <creator>Ahmed Alzhrani</creator>
        
        <creator>Abdulmjeed Alatawi</creator>
        
        <creator>Bandar Alsharari</creator>
        
        <creator>Umar Albalawi</creator>
        
        <creator>Mohammed Mustafa</creator>
        
        <subject>Security awareness; semantic analysis; sentiment analysis; mobile applications; topic modelling; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>With the rapid increase of smartphones and the growing interest in their applications, e.g., Google Play apps, it becomes necessary to analyze users’ reviews, whether they are expressed as ratings or comments. Recent studies have reported that users’ reviews can provide useful clues and valuable features that help in understanding the broad opinion about applications in terms of security awareness. Several techniques have been developed for this crucial task, and significant progress has been made using semantic and sentiment analysis, topic modelling, and clustering. The majority of existing methods are mainly based on representing review words in a Bag-of-Words vector space with string-matching approaches, without considering the common polysemy and synonymy problems of words. This matters because users of these applications often come from diverse backgrounds and thus use different vocabulary. This paper proposes a new approach to classifying security opinions about applications from users’ reviews while considering the special features of synonymous and polysemous words. To achieve this, the proposed model makes use of word embedding, topic modelling, Bi-LSTM, and an n-grams approach. A new dataset is built that contains reviews about 18 popular applications, selected primarily to make the dataset diverse in its domains. The experimental results showed that the proposed ensemble model, which combines the predictions of the extracted features and in turn captures the synonymy, polysemy, and dependency of words, is significantly useful and achieves better results, with accuracy approaching 90%, compared to using each technique separately. The model could contribute to protecting mobile users from unsafe applications.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_93-Towards_Security_Awareness_of_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protecting User Preference in Ranked Queries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130492</link>
        <id>10.14569/IJACSA.2022.0130492</id>
        <doi>10.14569/IJACSA.2022.0130492</doi>
        <lastModDate>2022-04-30T05:22:32.4270000+00:00</lastModDate>
        
        <creator>Rong Tang</creator>
        
        <creator>Xinyu Yu</creator>
        
        <subject>Preference privacy; distortion; homomorphic encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Protecting data privacy is of extreme importance for users who outsource their data to a third party in cloud computing. Although there exists plenty of research work on data privacy protection, the problem of protecting users’ preference information has received less attention. In this paper, we consider how to prevent user preference information from leaking to the third party when processing ranked queries. We propose two algorithms to solve the problem: the first is based on distortion of the preference vector, and the second is based on homomorphic encryption. We conduct extensive experiments to verify the effectiveness of our approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_92-Protecting_User_Preference_in_Ranked_Queries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Overview on Biometric Authentication Systems using Artificial Intelligence Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130491</link>
        <id>10.14569/IJACSA.2022.0130491</id>
        <doi>10.14569/IJACSA.2022.0130491</doi>
        <lastModDate>2022-04-30T05:22:32.4270000+00:00</lastModDate>
        
        <creator>Shoroog Albalawi</creator>
        
        <creator>Lama Alshahrani</creator>
        
        <creator>Nouf Albalawi</creator>
        
        <creator>Reem Kilabi</creator>
        
        <creator>Aaeshah Alhakamy</creator>
        
        <subject>Biometric authentication; physiological traits; behavioral traits; facial recognition; iris recognition; voice recognition; signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Biometric authentication is becoming more prevalent as it allows consumers to authenticate themselves without entering a physical address or a personal identification number. Thus, a simple finger gesture or a glance at a camera can prove one’s identity. In this review, we explain in detail how the concept of authentication and the various types of biometric techniques are used for user identification. Then, we discuss the various ways these techniques can be combined to create a truly multimodal authentication system. For a more organized approach, our overview is classified into two main categories based on human biometric traits. First, the physiological traits include fingerprint, facial, iris/retina, hand, and finger-vein recognition. Second, the behavioral traits include voice, signature, and keystroke recognition systems. Finally, we offer a comprehensive comparison of selected methods and techniques, focusing on three criteria: algorithms, merits, and drawbacks. Based on this comparison, we provide insight into our future research in iris recognition, in which we combine several artificial intelligence algorithms to develop our system.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_91-A_Comprehensive_Overview_on_Biometric_Authentication_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotions Classification from Speech with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130490</link>
        <id>10.14569/IJACSA.2022.0130490</id>
        <doi>10.14569/IJACSA.2022.0130490</doi>
        <lastModDate>2022-04-30T05:22:32.4100000+00:00</lastModDate>
        
        <creator>Andry Chowanda</creator>
        
        <creator>Yohan Muliono</creator>
        
        <subject>Emotions recognition; speech modality; temporal information; affective system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Emotions are the essential parts that convey meaning to the interlocutors during social interactions. Hence, recognising emotions is paramount in building a good and natural affective system that can naturally interact with human interlocutors. However, recognising emotions from social interactions requires temporal information in order to classify the emotions correctly. This research proposes an architecture that extracts temporal information using a temporal Convolutional Neural Network (CNN) model combined with a Long Short-Term Memory (LSTM) architecture on the speech modality. Several combinations and settings of the architectures were explored and are presented in the paper. The results show that the best classifier was achieved by the model trained with four layers of CNN combined with one layer of Bidirectional LSTM. Furthermore, the model was trained with an augmented training dataset containing seven times more data than the original training dataset. The best model achieved 94.25%, 57.07%, 0.2577, and 1.1678 for training accuracy, validation accuracy, training loss, and validation loss, respectively. Moreover, Neutral (Calm) and Happy are the easiest classes to recognise, while Angry is the hardest to classify.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_90-Emotions_Classification_from_Speech_with_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning Approach for Freezing of Gait Prediction in Patients with Parkinson&#39;s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130489</link>
        <id>10.14569/IJACSA.2022.0130489</id>
        <doi>10.14569/IJACSA.2022.0130489</doi>
        <lastModDate>2022-04-30T05:22:32.3970000+00:00</lastModDate>
        
        <creator>Hadeer El-ziaat</creator>
        
        <creator>Nashwa El-Bendary</creator>
        
        <creator>Ramadan Moawad</creator>
        
        <subject>Freezing of Gait (FoG); Parkinson&#39;s disease (PD); angular axes features; spectrogram; convolutional neural network (CNN); long short-term memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The main objective of this work is to enhance the prediction of Freezing of Gait (FoG) episodes for patients with Parkinson&#39;s Disease (PD). This paper proposes a hybrid deep learning approach that considers FoG prediction as an unsupervised multiclass classification problem with three classes: normal walking, pre-FoG, and FoG events. The proposed hybrid Deep Conv-LSTM approach is based on Convolutional Neural Network (CNN) layers and Long Short-Term Memory (LSTM) units, with spectrogram images generated from angular-axes features instead of the usual principal-axes features as the model input. Experimental results showed that the proposed approach achieved an average accuracy of 94.55% for early detection of FoG episodes using the publicly available Daphnet and Opportunity benchmark datasets. Furthermore, the proposed approach achieved an accuracy of 93.5% for FoG event prediction using the Daphnet dataset in subject-independent mode. Thus, the significance of this study is to investigate and validate the impact of a hybrid deep learning method on improving FoG episode prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_89-A_Hybrid_Deep_Learning_Approach_for_Freezing_of_Gait_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IAGA: Interference Aware Genetic Algorithm based VM Allocation Policy for Cloud Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130488</link>
        <id>10.14569/IJACSA.2022.0130488</id>
        <doi>10.14569/IJACSA.2022.0130488</doi>
        <lastModDate>2022-04-30T05:22:32.3970000+00:00</lastModDate>
        
        <creator>Tarannum Alimahmad Bloch</creator>
        
        <creator>Sridaran Rajagopal</creator>
        
        <creator>Prashanth C. Ranga</creator>
        
        <subject>Cloud computing; interference; VM allocation; SLA violation; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Diversified systems hosted on cloud infrastructure increasingly have to share physical servers. Cloud applications running on physical machines require diverse resources, and their resource requirements fluctuate with the resource intensity of the applications. Multi-tenancy of cloud servers can be achieved through effective resource utilization. Optimal resource utilization, maximum service level agreement compliance, and minimization of interference are the major objectives to be achieved. Using live Virtual Machine (VM) migration techniques, cloud resources can be utilized efficiently. However, migrated VMs can interfere with applications already running on the target server, which may lead to service level agreement violations (SLAV) and performance degradation. To resolve this issue, the current state of cloud hosts must be understood before a newly migrated VM is allocated. This paper presents an Interference Aware Genetic Algorithm (IAGA) based VM allocation strategy to achieve the aforementioned objectives. The proposed IAGA policy outperforms existing policies on quantifiable performance metrics such as energy consumed by cloud systems, count of hosts shut down, average SLAV, and count of VM migrations.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_88-IAGA_Interference_Aware_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RTL Design and Testing Methodology for UHF RFID Passive Tag Baseband-Processor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130487</link>
        <id>10.14569/IJACSA.2022.0130487</id>
        <doi>10.14569/IJACSA.2022.0130487</doi>
        <lastModDate>2022-04-30T05:22:32.3800000+00:00</lastModDate>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Aris Agung Pribadi</creator>
        
        <creator>Trio Adiono</creator>
        
        <creator>Tengku Ahmad Madya</creator>
        
        <subject>UHF RFID passive tag; baseband processor; register transfer level; universal verification methodology; Internet-of-things enabler; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>With the rapid growth and widespread implementation of Internet-of-Things (IoT) technology, Radio Frequency Identification (RFID) has become a vital technology enabling it. Various researchers have studied the design of digital or analog blocks for RFID readers; however, most of these works did not provide a comprehensive design methodology. Hence, the motivation of this study is to fill that research gap. This paper proposes a comprehensive design and testing methodology for an Ultrahigh Frequency (UHF) RFID passive tag baseband processor at the register transfer level (RTL). A complete design procedure for each block, from state diagram to schematic level, is presented; the design comprises several blocks, i.e., transmitter, receiver, Cyclic Redundancy Check (CRC), command processing, and Pseudorandom Number Generator (PRNG). Each block produces low latency (&lt;400 ns). Two CRCs were applied to this system for different purposes: CRC-5 and CRC-16. To test as many as 1344 multi-parameter combinations (including timing parameters, query response, state transitions, and BLF), a Universal Verification Methodology (UVM)-based test was conducted. The simulation results reveal that the proposed RFID baseband processor passes all testing scenarios using UVM (version 1.1d). Moreover, we also implemented the proposed design on an FPGA board (ALTERA DE2-115). The system consumes 976 logic elements and 173.14 mW of total power dissipation (i.e., 0.13 mW of dynamic power dissipation, 98.6 mW of static power dissipation, and 74.34 mW of I/O dissipation), which is reasonably low. This demonstrates that our design is synthesizable and ready for further processing. All system design and test criteria followed the EPC Gen-2 standard. The developed chip can be a solution for various kinds of RFID chip-based IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_87-RTL_Design_and_Testing_Methodology_for_UHF_RFID.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Healthy Sperm Head Detection using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130486</link>
        <id>10.14569/IJACSA.2022.0130486</id>
        <doi>10.14569/IJACSA.2022.0130486</doi>
        <lastModDate>2022-04-30T05:22:32.3630000+00:00</lastModDate>
        
        <creator>Ahmad Abdelaziz Mashaal</creator>
        
        <creator>Mohamed A. A. Eldosoky</creator>
        
        <creator>Lamia Nabil Mahdy</creator>
        
        <creator>Kadry Ali Ezzat</creator>
        
        <subject>Infertility; sperm morphology; deep learning; human sperm head; healthy sperms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Infertility is one of the diseases in which researchers are interested. Infertility is a global health concern, and andrologists are constantly looking for more advanced solutions to this disease. The intracytoplasmic sperm injection (ICSI) procedure is considered one of the most common procedures for achieving fertilization. Sperm selection is performed by visual assessment, which depends on the skills of laboratory technicians and is thus prone to human error. Therefore, an automatic detection system is needed for quicker and more accurate results. This study utilizes a deep learning technique for the classification of human sperm heads, which indicates healthy human sperms. The Convolutional Neural Network (CNN) model of the Visual Geometry Group with 16 layers (VGG16), one of the best architectures for image classification, was used. The dataset consists of 1200 images of human sperm heads divided into healthy and unhealthy. Here, the VGG16 model is fine-tuned and achieved an accuracy of 97.92%, a sensitivity of 98.82%, and an F1 score of 98.53%. The model is an effective, real-time system for detecting healthy sperms that can be injected into eggs to achieve successful fertilization. It quickly recognizes healthy sperms and makes the sperm selection process more accurate and easier for andrologists.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_86-Automatic_Healthy_Sperm_Head_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Factors Influencing the COVID-19 Mortality Rate in Indonesia using Zero Inflated Negative Binomial Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130485</link>
        <id>10.14569/IJACSA.2022.0130485</id>
        <doi>10.14569/IJACSA.2022.0130485</doi>
        <lastModDate>2022-04-30T05:22:32.3500000+00:00</lastModDate>
        
        <creator>Maria Susan Anggreainy</creator>
        
        <creator>Abdullah M. Illyasu</creator>
        
        <creator>Hanif Musyaffa</creator>
        
        <creator>Florence Helena Kansil</creator>
        
        <subject>Mortality rate; overdispersion; zero-inflated negative binomial; Poisson regression; correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This research aims to create a model and analyze the factors that influence the COVID-19 mortality rate in Indonesia. Five independent variables and one dependent variable are used. The independent variables are the percentage of poor people, the percentage of households using shared toilet facilities, the percentage of households using wood as the main cooking fuel, the percentage of the population whose drinking water comes from pumped water, and the percentage of the population with private health insurance. The dependent variable is the Annual Parasite Incidence of COVID-19. The results are as follows. First, a Zero-Inflated Negative Binomial (ZINB) regression model was obtained for COVID-19 morbidity; this model can overcome overdispersion and excess zero values in observations. Second, four independent variables have a significant effect on the count model, and no independent variable has a significant effect on the zero-inflation model. Third, a web application was produced that can display the ZINB regression model.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_85-Analysis_of_Factors_Influencing_the_COVID_19_Mortality_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Plant Disease Detection using AI based VGG-16 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130484</link>
        <id>10.14569/IJACSA.2022.0130484</id>
        <doi>10.14569/IJACSA.2022.0130484</doi>
        <lastModDate>2022-04-30T05:22:32.3330000+00:00</lastModDate>
        
        <creator>Anwar Abdullah Alatawi</creator>
        
        <creator>Shahd Maadi Alomani</creator>
        
        <creator>Najd Ibrahim Alhawiti</creator>
        
        <creator>Muhammad Ayaz</creator>
        
        <subject>Machine learning; VGG-16; disease detection; convolutional networks; Plant Village; modern farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Agriculture and modern farming are among the fields where IoT and automation can have a great impact. Maintaining healthy plants and monitoring their environment in order to identify or detect diseases is essential for maintaining maximum crop yield. The implementation of rapidly advancing technologies, including artificial intelligence (AI), machine learning, and deep learning, has proved extremely important in modern agriculture as a method of advanced image analysis. Artificial intelligence adds time efficiency and the possibility of identifying plant diseases, in addition to monitoring and controlling environmental conditions on farms. Several studies have shown that machine learning and deep learning technologies can detect plant diseases by analyzing plant leaves with great accuracy and sensitivity. In this study, considering the worth of machine learning for disease detection, we present a convolutional neural network VGG-16 model to detect plant diseases, allowing farmers to take timely treatment actions without further delay. To carry this out, 19 different classes of plant diseases were chosen, and 15,915 plant leaf images (both diseased and healthy leaves) were acquired from the Plant Village dataset for training and testing. Based on the experimental results, the proposed model achieves an accuracy of about 95.2% with a testing loss of only 0.4418. The proposed model provides a clear direction toward deep learning-based plant disease detection that can be applied on a large scale in the future.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_84-Plant_Disease_Detection_using_AI_based_VGG_16_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Security Enhancement by Increasing Randomness of Stream Ciphers in GSM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130483</link>
        <id>10.14569/IJACSA.2022.0130483</id>
        <doi>10.14569/IJACSA.2022.0130483</doi>
        <lastModDate>2022-04-30T05:22:32.3330000+00:00</lastModDate>
        
        <creator>Ram Prakash Prajapat</creator>
        
        <creator>Rajesh Bhadada</creator>
        
        <creator>Arjun Choudhary</creator>
        
        <subject>Security; encryption; A5/1 stream cipher; randomness; NIST test suite</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Information security is a crucial issue and needs to be addressed efficiently. Encryption of the original information is used to ensure privacy during the exchange of information. In the GSM (Global System for Mobile communications) standard, once voice traffic starts after signaling and authentication, encryption comes into play to ensure privacy during the call. In this process, the plaintext is encrypted into ciphertext using stream ciphers. Stronger security requires strong ciphers with strong randomness. The Linear Feedback Shift Register (LFSR) based A5 algorithm family is used for encryption in GSM; this cipher has many shortcomings, and with them privacy cannot be assured. This paper proposes several ways to achieve better security by enhancing the randomness of the generated bit stream used for encryption: incorporating the user&#39;s current location, reusing the 32-bit SRES already generated during the authentication process, and converting linear FSRs into nonlinear FSRs. The NIST Statistical Test Suite is used to test the various properties of the random bit stream, and an attempt has been made to achieve better randomness and hence more security.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_83-Information_Security_Enhancement_by_Increasing_Randomness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A CNN based Approach for Handwritten Character Identification of Telugu Guninthalu using Various Optimizers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130482</link>
        <id>10.14569/IJACSA.2022.0130482</id>
        <doi>10.14569/IJACSA.2022.0130482</doi>
        <lastModDate>2022-04-30T05:22:32.3170000+00:00</lastModDate>
        
        <creator>B. Soujanya</creator>
        
        <creator>Suresh Chittineni</creator>
        
        <creator>T. Sitamahalakshmi</creator>
        
        <creator>G. Srinivas</creator>
        
        <subject>Character recognition; Adam; RMSProp; SGD; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Handwritten character recognition is among the most critical and challenging areas of research in image processing. It covers a computer&#39;s ability to detect handwritten input from various original sources, such as paper documents, images, touch screens, and other online and offline devices. Identifying handwriting in Indian languages such as Hindi, Tamil, Telugu, and Kannada has received less attention than in languages such as English or Asian languages such as Japanese and Chinese. The Adaptive Moment Estimation (ADAM), Root Mean Square Propagation (RMSProp), and Stochastic Gradient Descent (SGD) optimization methods, employed in a Convolutional Neural Network (CNN), have produced good recognition accuracy and training and classification times for Telugu handwritten character recognition. CNNs make it possible to overcome the limitations of classic machine learning methods. We collected numerous handwritten Telugu guninthalu as input to construct our own dataset for the proposed model. Comparatively, the RMSProp optimizer outperforms the ADAM and SGD optimizers, achieving an accuracy of 94.26%.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_82-A_CNN_based_Approach_for_Handwritten_Character_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Usability Study of Hypertension Management Guideline Mobile Application with Hypertension and Non-hypertension Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130481</link>
        <id>10.14569/IJACSA.2022.0130481</id>
        <doi>10.14569/IJACSA.2022.0130481</doi>
        <lastModDate>2022-04-30T05:22:32.3030000+00:00</lastModDate>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Nor Atiqah Mohd Fuaad</creator>
        
        <creator>Muhammad Syahmi Zulkifli</creator>
        
        <creator>Farhat Embarak</creator>
        
        <creator>Nur Zuhairah Afiqah Husni</creator>
        
        <creator>Su Elya Mohamed</creator>
        
        <creator>Puteri Syaza Kamarina Megat Mohd Zainon</creator>
        
        <subject>Hypertension management guideline; hypertension patient; hypertension symptoms; user interface; user experience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Hypertension is currently rising steadily among the world population. A first level of screening to know whether one is suffering from hypertension is essential, as it lays the foundation for the actual diagnosis. This research details the user interface design and usability evaluation of a hypertension management guideline. The proposed mobile application prototype assists people in screening themselves for hypertension based on symptoms. The prototype also acts as a sharing platform that helps hypertension patients share their concerns and advice within the related online community. An eye-tracker experiment was used to support the visual strategy of the prototype design. To study the usability of the mobile application, an experiment was carried out with two groups of people, one group with hypertension and the other without. An independent-samples t-test was conducted to compare user performance scores with the proposed prototype. Based on the usability study, both user groups understood and used the application with ease. However, the findings revealed a significant difference in overall scores between hypertension patients and non-hypertension patients. The findings of this study could help software developers design an effective hypertension guideline application for monitoring health and well-being.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_81-Design_and_Usability_Study_of_Hypertension_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Stage Assessment Approach for QoS in Internet of Things based on Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130480</link>
        <id>10.14569/IJACSA.2022.0130480</id>
        <doi>10.14569/IJACSA.2022.0130480</doi>
        <lastModDate>2022-04-30T05:22:32.3030000+00:00</lastModDate>
        
        <creator>Mutasim Elsadig Adam</creator>
        
        <creator>Yasir Abdalgadir Ahmed Hamid</creator>
        
        <subject>IoT; internet of things; QoS; quality of services; fuzzy logic; evaluate; assess</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In the sphere of IoT, one of the most significant issues is quality of service (QoS), which is critical for both developers and customers. As a result, IoT platform developers are working to enhance models that will meet consumer expectations, so that IoT services reach their expected, specified levels of quality. The multidimensional architecture of the IoT platform, combined with the ambiguity of consumers&#39; thinking, makes QoS evaluation a difficult process. This study therefore seeks to solve these issues and proposes a new paradigm for assessing QoS in IoT ecosystems. The proposed approach evaluates QoS in two steps, with the goal of assessing QoS at all levels. To address the issue of uncertainty, the metric values and QoS were represented using a fuzzy logic method. The model correctly estimated the QoS for the 50 services in the dataset; the results show that 16 services are classed as high quality, 25 as medium quality, and the remaining nine as low quality.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_80-A_Two_Stage_Assessment_Approach_for_QoS_in_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Validation of e-Books during the Post-Pandemic to Improve Attitude towards Environmental Care in Case of Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130479</link>
        <id>10.14569/IJACSA.2022.0130479</id>
        <doi>10.14569/IJACSA.2022.0130479</doi>
        <lastModDate>2022-04-30T05:22:32.2870000+00:00</lastModDate>
        
        <creator>Anggraeni Mashinta Sulistyani</creator>
        
        <creator>Zuhdan Kun Prasetyo</creator>
        
        <creator>Farida Hanum</creator>
        
        <creator>Rizki Noor Prasetyono</creator>
        
        <subject>e-Book; Google site; green science learning model; local wisdom; environmental care character</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Indonesia has natural diversity in the form of tropical rain forests, seas, flora, and fauna. Adapting information technology is necessary given its increasing development and use, especially during the pandemic and post-pandemic periods. Through information technology in the form of e-books based on local wisdom, an attitude of caring for the environment can grow. This research aims to combine information technology, environment-based science learning, and local wisdom to grow students&#39; environmental care character, through the use of Google Sites-based e-books as information technology in implementing a green science learning model oriented to local wisdom. The study develops and validates an e-book based on local wisdom through the ADDIE research design. The results of the analysis and discussion show that the developed Google Sites-based e-book with a green science model oriented to local wisdom is declared valid and feasible to use, with an average rating of 90.74% from the three experts. Student responses were likewise positive, with a percentage of 90.29%. Applying Google Sites-based e-books in the local-wisdom-oriented green science model can increase the growth of environmental care character in junior high school students, as evidenced by a multiple regression analysis with a significance of 0.00, below 0.05.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_79-Development_and_Validation_of_e_Books.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification using Decision Tree Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130478</link>
        <id>10.14569/IJACSA.2022.0130478</id>
        <doi>10.14569/IJACSA.2022.0130478</doi>
        <lastModDate>2022-04-30T05:22:32.2700000+00:00</lastModDate>
        
        <creator>Omar Tarawneh</creator>
        
        <creator>Mohammed Otair</creator>
        
        <creator>Moath Husni</creator>
        
        <creator>Hayfa.Y. Abuaddous</creator>
        
        <creator>Monther Tarawneh</creator>
        
        <creator>Malek A Almomani</creator>
        
        <subject>Data mining; decision tree; classifier; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Cancer is a major health issue that affects individuals all over the world. This disease has claimed the lives of many people and will continue to do so in the future. Breast cancer has recently surpassed cervical cancer as the most frequent cancer among women in both industrialized and developing countries, and it is now the second leading cause of cancer mortality among women. A high number of women die each year as a result of this disease, yet breast cancer is significantly easier to treat if caught early. This paper introduces a decision tree-based data mining technique for early breast cancer detection with high accuracy, which helps patients recover. Breast growths are classed as benign (unable to penetrate surrounding tissue) or malignant (able to infiltrate adjacent tissue). The review included two experiments: the primary study uses 10 breast cancer samples from the Kaggle archive, whereas the follow-up study uses 286 breast cancer samples from the same pool. The Decision Tree&#39;s accuracy was 100% in the first trial and 97.9% in the follow-up. These findings justify the use of the proposed machine learning-based Decision Tree classifier for pre-evaluating patients for triage and decision-making prior to the availability of data.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_78-Breast_Cancer_Classification_using_Decision_Tree_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SPKP: A Web-based Application System for Managing Mental Health in Higher Institution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130477</link>
        <id>10.14569/IJACSA.2022.0130477</id>
        <doi>10.14569/IJACSA.2022.0130477</doi>
        <lastModDate>2022-04-30T05:22:32.2530000+00:00</lastModDate>
        
        <creator>Mohamad Fadli Zolkipli</creator>
        
        <creator>Zahidah Mohamad Said</creator>
        
        <creator>Massudi Mahmuddin</creator>
        
        <subject>Psychology tests; psychology health; counselling; counselor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The client psychology profile is one of the methods used by UUM&#39;s Counselling Centre to analyze clients&#39; psychological health conditions. This profile is currently in physical form, meaning the questions to be answered are on paper. It consists of three psychological modules, each containing questions related to psychology, and takes an estimated 10 minutes to answer. The physical form may cause data to be lost and creates issues when counselors want to retrieve the data later. This paper aims to develop Sistem Profil Kesejahteraan Psikologi (SPKP) for the Counselling Centre of Universiti Utara Malaysia (UUM), helping it know a client&#39;s psychological health before the client meets the counselor. By focusing on analyzing the Counselling Centre&#39;s client data, this web application system creates a new space that helps the Counselling Centre improve the way it collects and stores data, and improves its time management, since all data can be collected, stored, retrieved, and analyzed in a single click. With this system, the Counselling Centre can monitor clients&#39; psychological health before they meet the counselor.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_77-SPKP_A_Web_based_Application_System_for_Managing_Mental_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Hybrid Fuzzy Weighted k-Nearest Neighbor with the Presence of Data Imbalance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130476</link>
        <id>10.14569/IJACSA.2022.0130476</id>
        <doi>10.14569/IJACSA.2022.0130476</doi>
        <lastModDate>2022-04-30T05:22:32.2530000+00:00</lastModDate>
        
        <creator>Soha A. Bahanshal</creator>
        
        <creator>Rebhi S. Baraka</creator>
        
        <creator>Bayong Kim</creator>
        
        <creator>Vaibhav Verdhan</creator>
        
        <subject>Imbalanced data; fuzzy weighted kNN; SMOTE; classification model; optimized hybrid kNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>We present an optimized hybrid fuzzy Weighted k-Nearest Neighbor classification model for imbalanced data. More attention is placed on data points in the boundary area between two classes, yielding better overall classification of imbalanced data for both the minority and the majority classes. The fuzzy weighted approach assigns large weights to small classes and small weights to large classes, improving classification performance for the minority class. Experimental results show a higher average performance than other relevant algorithms, e.g., variants of kNN with SMOTE such as Weighted kNN alone and Fuzzy kNN alone. The results also signify that the proposed approach makes the overall solution more robust. At the same time, the classification performance on the complete dataset is also increased, thereby improving the overall solution.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_76-An_Optimized_Hybrid_Fuzzy_Weighted_k_Nearest_Neighbor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Hybrid Feature Selection Techniques for Autism Classification using EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130475</link>
        <id>10.14569/IJACSA.2022.0130475</id>
        <doi>10.14569/IJACSA.2022.0130475</doi>
        <lastModDate>2022-04-30T05:22:32.2400000+00:00</lastModDate>
        
        <creator>S. Thirumal</creator>
        
        <creator>J. Thangakumar</creator>
        
        <subject>Autism spectrum disorder (ASD); electroencephalography (EEG); feature selection; River Formation Dynamics (RFD); Support Vector Machine (SVM); hybrid greedy RFD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Autism Spectrum Disorder (ASD) is a non-uniform neurodevelopmental condition characterized by impaired behaviour in communication and social interaction, together with restricted, repetitive behaviour. Today, the voltage created during brain activity is measured using electroencephalography (EEG). The wavelet transform is used for time-frequency decomposition of the EEG signal. Feature selection is the process of significantly reducing feature space dimensionality while maintaining the right representation of the original data. In this work, a metaheuristic algorithm is utilized for feature selection: the proposed feature selection is based on River Formation Dynamics (RFD), and a hybrid Greedy RFD is presented. The Support Vector Machine (SVM) is a set of supervised learning methods for pattern recognition analysis and a successful tool for regression and classification. Experimental results show that the proposed Greedy RFD feature selection improves the performance of the classifiers and enhances the accuracy of classifying ASD.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_75-Investigation_of_Hybrid_Feature_Selection_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IoT) Application for Management in Automotive Parts Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130474</link>
        <id>10.14569/IJACSA.2022.0130474</id>
        <doi>10.14569/IJACSA.2022.0130474</doi>
        <lastModDate>2022-04-30T05:22:32.2230000+00:00</lastModDate>
        
        <creator>Apiwat Krommuang</creator>
        
        <creator>Opal Suwunnamek</creator>
        
        <subject>Internet of things; auto parts industry; FANP; MCDM; sustainable manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Automotive parts manufacturing focuses on sustainable development of manufacturing capabilities amid future technology changes. The Internet of Things (IoT) plays an important role in applying internet technology to machines and equipment in manufacturing processes for the transformation towards Industry 4.0, as well as creating added value and higher competitive advantage for the sustainability of the industries. This research studies the factors that influence decision-makers in selecting IoT applications for managing auto parts production: connectivity, telepresence, intelligence, security, and value, including their fifteen sub-factors. The Fuzzy Analytic Network Process (FANP), a Multiple Criteria Decision Making (MCDM) technique, is used to analyze, identify, and prioritize the factors in selecting IoT applications for managing production processes. A questionnaire designed on the FANP technique surveyed the weight of importance of each factor among executives of 88 auto parts manufacturers who are authorized as the decision-makers for selecting IoT applications. The results indicate that telepresence is the most important factor, assisting them in controlling production to guarantee that production capabilities meet the objective; connectivity is the second most important factor, ensuring that IoT applications are compatible with their machinery and that equipment can be controlled smoothly and precisely. Meanwhile, performance is the most important sub-factor, with the remaining sub-factors ranked as functional orientation, data management, control, and compatibility, respectively. Manufacturers can therefore use this research as a criterion for selecting appropriate IoT applications to control their manufacturing for sustainable effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_74-Internet_of_Things_IoT_Application_for_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework of Infotainment using Predictive Scheme for Traffic Management in Internet-of-Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130473</link>
        <id>10.14569/IJACSA.2022.0130473</id>
        <doi>10.14569/IJACSA.2022.0130473</doi>
        <lastModDate>2022-04-30T05:22:32.2070000+00:00</lastModDate>
        
        <creator>Reshma S</creator>
        
        <creator>Chetanaprakash</creator>
        
        <subject>Infotainment system; internet-of-vehicle; reinforcement learning; decision making; power; long short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Infotainment systems can potentially contribute towards controlling accident fatalities in the era of the Internet-of-Vehicles (IoV). A review of existing systems finds that, irrespective of the various methods for infotainment systems, the quality of retrieved data and the issues associated with power and traffic congestion in vehicular communication remain impending challenges. This manuscript therefore introduces a novel predictive scheme that offers an enriched set of information from the environment to assist decision making. Reinforcement learning is adopted for controlling traffic signals and power, while the proposed system introduces an augmented Long Short Term Memory scheme to predict the best possible traffic scenario, assisting the infotainment system in making precise decisions. Simulations of the proposed system against existing learning schemes show that the proposed scheme offers better performance in every respect over the challenging scenes of an IoV.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_73-Framework_of_Infotainment_using_Predictive_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of Statistical Reasoning for Healing Highly Corrupted Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130472</link>
        <id>10.14569/IJACSA.2022.0130472</id>
        <doi>10.14569/IJACSA.2022.0130472</doi>
        <lastModDate>2022-04-30T05:22:32.2070000+00:00</lastModDate>
        
        <creator>Golam Moktader Daiyan</creator>
        
        <creator>Leiting Chen</creator>
        
        <creator>Chuan Zho</creator>
        
        <creator>Golam Moktader Nayeem</creator>
        
        <subject>Salt and pepper noise; median filter; statistical reasoning; performance analysis metrics; high-density noise; mode; trimmed median; trimmed mean; peak signal-to-noise ratio; image enhancement factor; structural similarity index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The accurate approximation of pixel values to preserve image details at high concentrations of noise has led researchers to improve filter performance; only a few image restoration filters are effective even at lower noise densities. Filters are commonly deployed in cameras, image processing tasks, medical image analysis, guided-media data transmission, and real-time machine learning. This article proposes a mathematical model for exact pixel value estimation at high noise density for RGB and gray images. The model is implemented by fusing statistical reasoning over optimized mask sizes while preserving image details. Parameters returned from the median filter, the trimmed median filter, the trimmed mean filter, and mode analysis form a mathematical function. The filter iteratively selects different schemes to calculate pixel values at different noise densities with minimal image information. Different processing masks are analyzed to correctly preserve local data at specific image locations under high noise density. A robust estimator counts false approximations of pixel values as discontinued, which are identified and removed. In the post-smoothening process, the filter recovers misclassified noise-free pixels and blur effects in the image. Qualitative experiments show satisfactory results in preserving image details across images. The performance of the fusion filter is verified with visual quality and performance analysis metrics such as the image enhancement factor, the structural similarity index, and the peak signal-to-noise ratio.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_72-Fusion_of_Statistical_Reasoning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Effectiveness of Virtual Land Surveying Simulator for Blended Open Distance Learning Amid Covid-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130471</link>
        <id>10.14569/IJACSA.2022.0130471</id>
        <doi>10.14569/IJACSA.2022.0130471</doi>
        <lastModDate>2022-04-30T05:22:32.1930000+00:00</lastModDate>
        
        <creator>Yeap Chu Im</creator>
        
        <creator>Muhammad Norhadri Bin Mohd Hilmi</creator>
        
        <creator>Tan Cheng Peng</creator>
        
        <creator>Azrina Jamal</creator>
        
        <creator>Noor Halizah binti Abdullah</creator>
        
        <creator>Suzi Iryanti Fadilah</creator>
        
        <subject>Covid-19; 3D simulator; virtual laboratory; land surveying; blended open distance learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Many universities worldwide were forced to physically close campuses during lockdowns and resumed in-person classes, compliant with a stringent set of standard operating procedures (SOPs), as Covid cases dropped. This profoundly disrupted hands-on, face-to-face lab learning, which is harder to move online. Virtual simulation labs could be the answer, and their use in many courses has been extensively studied; however, they remain relatively little studied for land surveying courses. The purpose of the study is to explore the learning effectiveness of a virtual surveying field lab for blended open distance learning (ODL) students at Wawasan Open University (WOU) in the time of Covid-19. The study used a mixed method combining qualitative and quantitative approaches to get a fuller picture and deeper understanding of learning behavior, applying descriptive and inferential statistical methods on the SPSS platform. Respondents were selected using the purposive sampling method. Survey questionnaires were designed and distributed to students before and after the lab simulation class, and instructors were interviewed after the class. Students&#39; learning results for the surveying course were compared with past-year examination results from pre-Covid-19 times, before the virtual simulator was introduced. Both qualitative and quantitative data were collected and analyzed. The findings revealed that the virtual simulator enhanced students&#39; learning interest and efficiency for the surveying course in an ODL setting. Both students and instructors responded positively to the virtual simulator learning experience, and students&#39; achievement in the final examination amid Covid-19 was better than pre-Covid-19 performance. It is recommended that the virtual simulator not replace the physical instrument but complement it.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_71-Learning_Effectivess_of_Virtual_Land_Surveying_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridge Pillar Defect Detection using Close Range Thermography Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130470</link>
        <id>10.14569/IJACSA.2022.0130470</id>
        <doi>10.14569/IJACSA.2022.0130470</doi>
        <lastModDate>2022-04-30T05:22:32.1770000+00:00</lastModDate>
        
        <creator>Abd Wahid Rasib</creator>
        
        <creator>Muhammad Latifi Mohd Yaacob</creator>
        
        <creator>Nurul Hawani Idris</creator>
        
        <creator>Khairulazhar Zainuddin</creator>
        
        <creator>Rozilawati Dollah</creator>
        
        <creator>Norbazlan Mohd Yusof</creator>
        
        <creator>Norisam Abd Rahaman</creator>
        
        <creator>Shahrin Ahmad</creator>
        
        <creator>Norhadi A. Hamid</creator>
        
        <creator>Abdul Manaf Mhapo</creator>
        
        <subject>Defect; detection; bridge pillar; drone; thermography; close range remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Currently, radiometric thermography imagery is being explored as an advanced alternative for Non-Destructive Testing (NDT), especially for early-detection analysis in various applications. With systematic image calibration, higher spatial resolution, and high-order image processing, thermography imagery has the potential to be used for concrete structure defect detection. This study was therefore carried out to examine defects on the concrete surface of bridge pillars using a drone-based thermography sensor (7–13 &#181;m). Close-range remote sensing NDT based on a drone platform and imagery segmentation analysis was applied to interpret crack lines on two pillars of the North-South Expressway Central Link (ELITE) Highway. As a result, thermography imagery segmentation, supported by multispectral radiometric (RGB) imagery, successfully delineated micro crack lines on the bridge pillar concrete using the K-means clustering method. Overall, this study shows that a drone platform with a thermography sensor can potentially be applied in forensic defect detection for concrete structures and tall buildings.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_70-Bridge_Pillar_Defect_Detection_using_Close_Range_Thermography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented System for Food Crops Production in Agricultural Supply Chain using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130468</link>
        <id>10.14569/IJACSA.2022.0130468</id>
        <doi>10.14569/IJACSA.2022.0130468</doi>
        <lastModDate>2022-04-30T05:22:32.1600000+00:00</lastModDate>
        
        <creator>Dayana D. S</creator>
        
        <creator>Kalpana G</creator>
        
        <subject>Blockchain; distributed ledger; consensus; decentralized; stakeholders; agricultural insurance; payouts; trust-based farming system; food safety; panel of advisers; agri-supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>An elevated version of the agricultural traceability system for food production is of utmost significance, not only in assuring food security, smart contracts, and agri-insurance for farmers, but also in guaranteeing insurance payouts during natural disasters. The proposed and improved food cultivation traceability system deals with agri-crops and farmers and employs Blockchain technology to guarantee safety, consensus, a distributed ledger, immediate payment, and decentralization, thereby minimizing the cost incurred in the food processing system and building trust. Smart contracts play a pivotal role in the field of agricultural insurance. Blockchain-based agricultural insurance comprises major weather incidents and associated payouts enlisted on a smart contract connected to mobile wallets; with timely weather updates notified by field sensors and correlated with data from nearby weather stations, it would enable prompt payouts during any natural calamity such as flood or drought. A panel of advisers in the decentralized system, professionally governed and managed by certain retired officers, makes the traceability system more trustworthy. These professionals can offer wise suggestions to planters, aiding them in achieving productive outcomes.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_68-Augmented_System_for_Food_Crops_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of Blockchain-based Cryptocurrency Mining Impact on Energy Consumption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130469</link>
        <id>10.14569/IJACSA.2022.0130469</id>
        <doi>10.14569/IJACSA.2022.0130469</doi>
        <lastModDate>2022-04-30T05:22:32.1600000+00:00</lastModDate>
        
        <creator>Md Rafiqul Islam</creator>
        
        <creator>Muhammad Mahbubur Rashid</creator>
        
        <creator>Mohammed Ataur Rahman</creator>
        
        <creator>Muslim Har Sani Bin Mohamad</creator>
        
        <creator>Abd Halim Bin Embong</creator>
        
        <subject>Blockchain; cryptocurrency; bitcoin; Proof-of-Work (PoW); Proof-of-Stake (PoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Blockchain has gained popularity due to its highly secured network, while at the same time its enormous computational power consumption has become a recurring debate among users. A Blockchain network is reliable, secure, transparent, and immutable, and transactions between sender and receiver cannot be reversed. Blockchain technology is not only used for mining cryptocurrency; it has other applications in different sectors such as agriculture, education, and insurance, but energy consumption remains a noticeable concern. Moreover, the excessive energy used for mining cryptocurrency has a significant environmental impact, releasing more carbon dioxide (CO2) into nature. The Proof-of-Work (PoW) algorithm used for mining Bitcoin consumes enormous computational power. An alternative, the Proof-of-Stake (PoS) consensus protocol, has been proposed to replace the Proof-of-Work algorithm for mining cryptocurrencies and is capable of reducing energy consumption significantly. In addition, the use of renewable energy can be an environment-friendly option for running the Proof-of-Work algorithm. This paper aims to highlight blockchain technology, its energy consumption and environmental impact, and the energy-reducing method of using the PoS consensus protocol instead of the PoW algorithm, along with a discussion and some recommendations.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_69-A_Comprehensive_Analysis_of_Blockchain_based_Cryptocurrency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Intrusion Detection System for IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130467</link>
        <id>10.14569/IJACSA.2022.0130467</id>
        <doi>10.14569/IJACSA.2022.0130467</doi>
        <lastModDate>2022-04-30T05:22:32.1470000+00:00</lastModDate>
        
        <creator>Rehab Hosny Mohamed</creator>
        
        <creator>Faried Ali Mosa</creator>
        
        <creator>Rowayda A. Sadek</creator>
        
        <subject>Intrusion detection systems (IDSs); TON IoT dataset; machine learning; deep learning; ReliefF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>These days, the Internet is subjected to a variety of attacks that can harm network devices or allow attackers to steal the most sensitive data from them. The IoT environment poses new perspectives and requirements for intrusion detection due to its heterogeneity. This paper proposes a newly developed Intrusion Detection System (IDS) that relies on machine learning and deep learning techniques to identify new attacks that existing systems fail to detect in such an IoT environment. The experiments use the benchmark ToN_IoT dataset, which includes IoT services telemetry, Windows and Linux operating system data, and network traffic. Feature selection is an important process that plays a key role in building an efficient IDS. A new feature selection module, based on the ReliefF algorithm, which outputs the most essential features, has been introduced into the IDS. These extracted features are fed into selected machine learning and deep learning models. The proposed ReliefF-based IDSs are compared to existing correlation-based IDSs and outperform them. The Medium Neural Network, Weighted KNN, and Fine Gaussian SVM models achieve accuracies of 98.39%, 98.22%, and 97.97%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_67-Efficient_Intrusion_Detection_System_for_IoT_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Approach based on the Combination of the Discrete Wavelet Transform, Delta Delta MFCC for Parkinson&#39;s Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130466</link>
        <id>10.14569/IJACSA.2022.0130466</id>
        <doi>10.14569/IJACSA.2022.0130466</doi>
        <lastModDate>2022-04-30T05:22:32.1300000+00:00</lastModDate>
        
        <creator>BOUALOULOU Nouhaila</creator>
        
        <creator>BELHOUSSINE DRISSI Taoufiq</creator>
        
        <creator>NSIRI Benayad</creator>
        
        <subject>Parkinson’s disease; discrete wavelet transform; delta delta MFCC; decision tree classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>To diagnose Parkinson’s disease (PD), it is necessary to monitor the progression of symptoms; unfortunately, diagnosis is often confirmed years after the onset of the disease. Communication problems are often among the first symptoms to appear in people with Parkinson’s disease. In this study, we focus on the speech signal to discriminate between people with and without PD. We used a Spanish database containing 50 recordings, of which 28 are from patients with Parkinson’s disease and 22 from healthy people; these recordings contain five types of sustained vowels (/a/, /e/, /i/, /o/ and /u/). The proposed treatment is based on decomposing each sample using the Discrete Wavelet Transform (DWT) while testing several kinds of wavelets, then extracting the delta delta Mel Frequency Cepstral Coefficients (delta delta MFCC) from the decomposed signals, and finally applying a decision tree as a classifier. The purpose of this process is to determine the appropriate analyzing wavelet for each type of vowel in diagnosing Parkinson’s disease.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_66-An_Intelligent_Approach_based_on_the_Combination_of_the_Discrete_Wavelet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Affective Computing in the Analysis of Advertising Jingles in the Political Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130465</link>
        <id>10.14569/IJACSA.2022.0130465</id>
        <doi>10.14569/IJACSA.2022.0130465</doi>
        <lastModDate>2022-04-30T05:22:32.1300000+00:00</lastModDate>
        
        <creator>Gabriel Elias Chanchi Golondrino</creator>
        
        <creator>Manuel Alejandro Ospina Alarcon</creator>
        
        <creator>Luz Marina Sierra Mart&#237;nez</creator>
        
        <subject>Affective computing; advertising jingles; emotion analysis; arousal; valence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Affective computing is an emerging research area focused on developing systems with the ability to recognize, process, and simulate human emotions in order to improve the user’s experience in an interactive system. One possible field of application of affective computing is marketing and advertising, where emotion analysis techniques have been applied to opinions expressed by users in different contexts; analyzing other types of multimedia content, such as advertising jingles, remains a challenge. Thus, in this article we contribute an emotion analysis study of the advertising jingles of the main candidates for mayor of Cartagena-Colombia for the 2020-2023 period. The study considers the acoustic properties of arousal and valence throughout the audio track of each advertising jingle, so that through these properties an audio fragment can be classified into an emotion belonging to the circumplex (Russell) model. To segment the audio track and extract the acoustic properties of arousal and valence, we developed the MUSEMAN tool, which determines the fluctuation of emotions in the advertising jingle audio track.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_65-Application_of_Affective_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balancing a Practical Inverted Pendulum Model Employing Novel Meta-Heuristic Optimization-based Fuzzy Logic Controllers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130464</link>
        <id>10.14569/IJACSA.2022.0130464</id>
        <doi>10.14569/IJACSA.2022.0130464</doi>
        <lastModDate>2022-04-30T05:22:32.1130000+00:00</lastModDate>
        
        <creator>Dao-Thi Mai-Phuong</creator>
        
        <creator>Pham Van-Hung</creator>
        
        <creator>Nguyen Ngoc-Khoat</creator>
        
        <creator>Pham Van-Minh</creator>
        
        <subject>mGA; inverted pendulum (IP); balance control; scaling factors; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This paper proposes a new and effective control approach to ensure the balance of an inverted pendulum (IP) system consisting of a freely rotating rod and a small cart. The novel idea is an effective integration of a PD-like fuzzy logic architecture and a modified genetic algorithm (mGA). The mGA is executed, with appropriately fast optimization, as an initial phase to optimize the scaling factors of the fuzzy logic controllers. In total there are six meaningful scaling factors corresponding to the two fuzzy logic controllers applied in the balance control system of the IP. These six scaling coefficients strongly affect the control quality of a system applying such a PD-like fuzzy logic control methodology. Excellent results obtained in numerical simulations, compared with those of both the conventional PID and existing fuzzy logic counterparts, as well as practical experiments implemented on a real IP topology, verify the promising applicability of the new control methodology proposed in this study.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_64-Balancing_a_Practical_Inverted_Pendulum_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Group Optimization-based New Routing Approach for WMN’s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130462</link>
        <id>10.14569/IJACSA.2022.0130462</id>
        <doi>10.14569/IJACSA.2022.0130462</doi>
        <lastModDate>2022-04-30T05:22:32.1000000+00:00</lastModDate>
        
        <creator>Bhanu Sharma</creator>
        
        <creator>Amar Singh</creator>
        
        <subject>ACO; AODV; DSR; BAT; BBO; wireless mesh network; social group optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Wireless Mesh Networks (WMNs) are hop-to-hop communication networks that are quickly deployable, dynamically self-organizing, self-configuring, self-healing, self-balancing, and self-aware. In WMNs, a node can leave or join the network at any time, and due to the mobile nature of nodes, the routes between source and destination can change frequently. Computing the shortest path under dynamic conditions, within the time constraint imposed by node mobility, can also be placed in the class of highly complex problems. Moreover, as the network size grows, the performance of the nodes decreases. As a result, Soft Computing approaches are required to handle this problem. This article proposes a Social Group Optimization (SGO) based routing approach for wireless mesh networks. The proposed approach was implemented in MATLAB and tested on different dynamic-node network scenarios. We compare its performance with Ant Colony Optimization (ACO), Ad-hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), BAT, Biogeography-Based Optimization (BBO), and Firefly Algorithm based routing approaches. We observe that the proposed approach outperformed all other approaches on network scenarios of more than 1000 nodes.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_62-Social_Group_Optimization_based_New_Routing_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Combination Approach to CPU Scheduling based on Priority and Round-Robin Algorithms for Assigning a Priority to a Process and Eliminating Starvation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130463</link>
        <id>10.14569/IJACSA.2022.0130463</id>
        <doi>10.14569/IJACSA.2022.0130463</doi>
        <lastModDate>2022-04-30T05:22:32.1000000+00:00</lastModDate>
        
        <creator>Hussain Mohammad Abu-Dalbouh</creator>
        
        <subject>Average turnaround time; average waiting time; utilization; performance measures; operating system; process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The main purpose of an operating system is to control a group of processes through a method known as CPU scheduling. The performance and efficiency of multitasking operating systems are determined by the CPU scheduling algorithm in use. Round-robin scheduling is the best solution for time-shared systems, but it is not ideal for real-time systems, as it causes more context switches, longer waiting times, and slower turnaround times; its performance is mostly determined by the time quantum, and processes cannot be assigned priorities. Round-robin scheduling does not give more critical work greater consideration, which may affect system performance. On the other hand, a priority algorithm can resolve processes&#39; priority levels: each process has a priority assigned to it, and processes with the highest priority are executed first. If the order in which processes should run and the time a process has already waited for the CPU are not considered, a starvation problem can arise. In this paper, a new CPU scheduling algorithm called the mix PI-RR algorithm is developed. The proposed algorithm combines the round-robin (RR) and priority-based (PI) scheduling algorithms to determine which tasks run and which should wait, addressing the disadvantages of both round-robin and priority CPU scheduling. When using the proposed mix PI-RR algorithm, the performance measures indicated improved CPU scheduling, and other processes are not affected by one process&#39;s CPU requirements. This algorithm helps the CPU overcome some of the problems of both algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_63-A_New_Combination_Approach_to_CPU_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Generation-based Approaches of Oversampling using Different Sets of Base and Nearest Neighbor’s Instances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130461</link>
        <id>10.14569/IJACSA.2022.0130461</id>
        <doi>10.14569/IJACSA.2022.0130461</doi>
        <lastModDate>2022-04-30T05:22:32.0830000+00:00</lastModDate>
        
        <creator>Hatem S Y Nabus</creator>
        
        <creator>Aida Ali</creator>
        
        <creator>Shafaatunnur Hassan</creator>
        
        <creator>Siti Mariyam Shamsuddin</creator>
        
        <creator>Ismail B Mustapha</creator>
        
        <creator>Faisal Saeed</creator>
        
        <subject>Class imbalance; nearest neighbors; base samples; initial selection; SMOTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Standard classification algorithms often face the challenge of learning from imbalanced datasets. While several approaches have been employed to address this problem, methods that oversample minority samples remain more widely used than algorithmic modifications. Most variants of oversampling are derived from the Synthetic Minority Oversampling Technique (SMOTE), which generates synthetic minority samples at points in the feature space between two minority class instances. The main reasons these variants produce different results lie in (1) the samples they use as the initial selection / base samples and as the nearest neighbors, and (2) variation in how they handle minority noise. Therefore, this paper presents combinations of base and nearest-neighbor samples that have never been used before, to monitor their effect in comparison with standard oversampling techniques. Six methods are proposed: three combinations of Only Danger Oversampling (ODO) techniques, and three combinations of Danger Noise Oversampling (DNO) techniques. The ODO and DNO methods use different groups of samples as base and nearest neighbors. While the three ODO methods do not consider minority noise, the three DNO methods include minority noise in both the base and neighbor samples. The performance of the proposed methods is compared to that of several standard oversampling algorithms. We present experimental results demonstrating a significant improvement in the recall metric.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_61-Adaptive_Generation_based_Approaches_of_Oversampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework to Deploy Containers using Kubernetes and CI/CD Pipeline</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130460</link>
        <id>10.14569/IJACSA.2022.0130460</id>
        <doi>10.14569/IJACSA.2022.0130460</doi>
        <lastModDate>2022-04-30T05:22:32.0670000+00:00</lastModDate>
        
        <creator>Manish Kumar Abhishek</creator>
        
        <creator>D. Rajeswara Rao</creator>
        
        <creator>K. Subrahmanyam</creator>
        
        <subject>Containers; Docker; Jenkin; Kubernetes; rancher; virtualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Containers are steadily replacing virtual machines and gaining popularity in the IT industry for their scalability and agility. The key concept behind containers is operating-system-based virtualization. In cloud computing, containers are deployed as computing instances, whereas on premises they are deployed using Docker as part of CI/CD pipelines with a Jenkins server. As the number of containers grows, their deployment and resource management become a concern, which is handled using Kubernetes. Kubernetes is used to deploy and manage containers autonomously, and Rancher is used to manage the Kubernetes cluster efficiently. First, an analysis is done of the scheduler and resource management that Kubernetes uses to deploy containers; then a framework is proposed that automates the whole process, using Helm charts and Ansible scripts, from container deployment to the management of the Kubernetes cluster in a scalable manner. It is a fully automated framework and can be used to deploy scalable applications in the form of containers as Docker images. The CI/CD pipeline is also considered, using a Jenkins server.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_60-Framework_to_Deploy_Containers_using_Kubernetes_and_CICD_Pipeline.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lasso-based Collaborative Filtering Recommendation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130458</link>
        <id>10.14569/IJACSA.2022.0130458</id>
        <doi>10.14569/IJACSA.2022.0130458</doi>
        <lastModDate>2022-04-30T05:22:32.0530000+00:00</lastModDate>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <creator>Vien Quang Dam</creator>
        
        <creator>Long Van Nguyen</creator>
        
        <creator>Nghia Quoc Phan</creator>
        
        <subject>UBCF-LASSO; IBCF-LASSO; Lasso regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This paper proposes a new approach to the problem of insufficient information in rating data for collaborative filtering recommendation models (CFR models), which arises when users or items are new, or when a user has rated too few items. In this approach, we consider the similarity between users or items based on Lasso regression to build the CFR models. In the commonly used CFR models, recommendation results are built only from the users&#39; feedback matrix. The results of our model are predicted based on two similarity values: (1) the similarity calculated from the rating matrix; (2) the similarity calculated from the prediction results of the Lasso regression. Experimental results for the proposed models on two popular datasets, processed and integrated into the recommenderlab package, showed that the suggested models have higher accuracy than the commonly used CFR models. This result confirms that Lasso regression helps deal with the lack of information in the rating data of CFR models.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_58-A_Lasso_based_Collaborative_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Age Estimation using Shortcut Identity Connection of Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130459</link>
        <id>10.14569/IJACSA.2022.0130459</id>
        <doi>10.14569/IJACSA.2022.0130459</doi>
        <lastModDate>2022-04-30T05:22:32.0530000+00:00</lastModDate>
        
        <creator>Shohel Pramanik</creator>
        
        <creator>Hadi Affendy Bin Dahlan</creator>
        
        <subject>Shortcut identity connection; ImageNet; residual connection; face aging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Depletion of skin and muscle tone has a considerable impact on the appearance of the face, which is constantly evolving, and algorithms require a large number of aging faces for this purpose. Convolutional neural networks are a popular deep learning technique, and recent studies have successfully tackled many computer vision and pattern recognition problems with them. However, these methods have architectural issues (e.g., in the training process) that negatively affect their age estimation performance. As a result, a new approach is proposed in this research to address the issue. Using a convolutional neural network framework with the ResNet50 architecture, the age of a human face is estimated. The proposed shortcut identity connection strategy, which enables age estimation from the face image, improves the performance of the ResNet50 architecture. To tell a person&#39;s age from a picture of their face, the characteristics of aging had to be learned; the classification method, employing the ResNet50 structure, maps the face aging levels to probabilities. Each of the 50 layers in the proposed residual network is connected through a residual block. As face-aging databases, ImageNet and FG-NET are used for the proposed age estimation process. In training, the experimental results are 2.27 and 2.38 in terms of mean absolute error. The test accuracy is 81.75% on the FG-NET dataset and 57% on the ImageNet dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_59-Face_Age_Estimation_using_Shortcut_Identity_Connection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Organizational Architecture and Service Delivery Re-Alignment based on ITIL and TOGAF: Case Study of the Provincial Development Bank</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130457</link>
        <id>10.14569/IJACSA.2022.0130457</id>
        <doi>10.14569/IJACSA.2022.0130457</doi>
        <lastModDate>2022-04-30T05:22:32.0370000+00:00</lastModDate>
        
        <creator>Asti Amalia Nur Fajrillah</creator>
        
        <creator>Muharman Lubis</creator>
        
        <creator>Irmayanti Syam</creator>
        
        <subject>Organization; service; innovation; alignment; ITIL; TOGAF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The operations function is the core function area of the development bank, serving its customers&#39; financial needs. Interestingly, the total scope of services provided in this case can be categorized as small, or not optimal, compared to the total population of its coverage area in West Java and Banten. The lack of customer confidence in the services offered can be attributed in part to the organization&#39;s unpreparedness to adopt business agility and technological innovation within an alignment framework. Thus, as a starting point, IT planning in this function area can be used to increase service delivery performance by strengthening the organizational architecture. To support this, Enterprise Architecture (EA) should align business and IT, mapping ITIL best practice as a foundation and practical direction so that the company&#39;s operational services achieve sustainability in growth, profit, and satisfaction. This study delivers a roadmap design using the TOGAF framework to identify the current state of the company and the desired IT architecture aligned with business strategies in the operations function area.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_57-Organizational_Architecture_and_Service_Delivery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Approach to Research Paper and Expertise Recommendation in Academic Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130456</link>
        <id>10.14569/IJACSA.2022.0130456</id>
        <doi>10.14569/IJACSA.2022.0130456</doi>
        <lastModDate>2022-04-30T05:22:32.0200000+00:00</lastModDate>
        
        <creator>Charles N. Mabude</creator>
        
        <creator>Iyabo O. Awoyelu</creator>
        
        <creator>Bodunde O. Akinyemi</creator>
        
        <creator>Ganiyu A. Aderounmu</creator>
        
        <subject>Research paper; expertise; recommendation systems; academic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Research paper and expertise recommendation systems have been operating independently of one another, resulting in avoidable search queries and unsatisfactory recommendations. This study developed an integrated recommendation system that harmonizes research paper and expertise recommendations in the academic domain for enhanced research collaboration, addressing the issues caused by the isolated existence of these systems. A recommendation algorithm was developed to synergize the research paper and expertise recommendation models: the cosine similarity between the user query and the available research papers and experts was combined with selected criteria to produce recommendations. The synergized model was simulated and evaluated using precision, recall, F-measure, and root mean square error as performance metrics. The findings showed that harmonizing the research paper and expertise recommendation approaches provides a holistic and enhanced approach to both tasks. Thus, academic researchers now have a reliable way to receive recommendations of experts and research papers, which will lead to more collaborative research activities.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_56-An_Integrated_Approach_to_Research_Paper.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Home Automation by Internet-of-Things Edge Computing Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130455</link>
        <id>10.14569/IJACSA.2022.0130455</id>
        <doi>10.14569/IJACSA.2022.0130455</doi>
        <lastModDate>2022-04-30T05:22:32.0030000+00:00</lastModDate>
        
        <creator>Zubair Sharif</creator>
        
        <creator>Low Tang Jung</creator>
        
        <creator>Muhammad Ayaz</creator>
        
        <creator>Mazlaini Yahya</creator>
        
        <creator>Dodo Khan</creator>
        
        <subject>Smart home; Internet of Things (IoTs); home automation; android application; raspberry pi; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The Internet of Things (IoT) offers significant benefits to various applications, including home automation (HA), environmental monitoring, healthcare, homeland security, agriculture, and many others. Consequently, IoT adoption is rapidly expanding into many new sectors, leading to higher comfort, better quality of life, and conveniences that allow users to consume valuable resources optimally. This paper presents a low-cost and flexible HA system that uses different sensors and other resources to control commonly used home appliances and connected devices by establishing IP connectivity to access, manage, and monitor them remotely. To enable remote access, an Android-based application was developed to monitor and control the home devices. An edge computing (EC) platform is used in the proposed system to enhance reliability and robustness when computation offloading is required. The system operates automatically according to the environmental conditions in the home and the occupants&#39; requirements. The outcomes of the proposed system reveal that it strongly supports the concept of smart HA and can significantly reduce the wastage of electricity through optimal utilization.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_55-Smart_Home_Automation_by_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Is Deep Learning on Tabular Data Enough? An Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130454</link>
        <id>10.14569/IJACSA.2022.0130454</id>
        <doi>10.14569/IJACSA.2022.0130454</doi>
        <lastModDate>2022-04-30T05:22:32.0030000+00:00</lastModDate>
        
        <creator>Sheikh Amir Fayaz</creator>
        
        <creator>Majid Zaman</creator>
        
        <creator>Sameer Kaul</creator>
        
        <creator>Muheet Ahmed Butt</creator>
        
        <subject>Deep learning; XGBoost; NODE; TabNet; DNF-net; statistical significance test; tabular geographical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>It is critical to select the model that best fits the situation when analyzing data. Many scholars have offered ensemble techniques on tabular data, as well as other approaches to classification and regression problems (such as Boosting and Logistic Model Tree ensembles). Furthermore, various deep learning algorithms have recently been applied to tabular data, with their authors claiming that deep models outperform Boosting and Model Tree approaches. On a range of datasets, including historical geographical data, this study compares the new deep models (TabNet, NODE, and DNF-net) against the boosting model (XGBoost) to see whether they should be regarded as a preferred choice for tabular data. We examine how much tuning and computation they require, as well as how well they perform, based on metric evaluation and a statistical significance test. According to our study, XGBoost outperforms these deep models across all datasets, including the datasets used in the papers that presented the deep models. We further show that, compared to deep models, XGBoost requires considerably less tuning. In addition, we confirm that a combination of deep models with XGBoost outperforms XGBoost alone on almost all datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_54-Is_Deep_Learning_on_Tabular_Data_Enough.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of the Integration of Cyber-Physical System and Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130453</link>
        <id>10.14569/IJACSA.2022.0130453</id>
        <doi>10.14569/IJACSA.2022.0130453</doi>
        <lastModDate>2022-04-30T05:22:31.9570000+00:00</lastModDate>
        
        <creator>Ramesh Sneka Nandhini</creator>
        
        <creator>Ramanathan Lakshmanan</creator>
        
        <subject>Cyber-physical system; internet of things; industrial internet (ii); industry 4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Cyber-physical systems have a significant impact on the present and future of engineered systems. The integration of cyber-physical systems with other technologies often poses challenges that are unique to the fields of design, implementation, and applications. This study explores definitions of the integrated cyber-physical system from different perspectives, how cyber-physical systems have evolved, and emerging fields of research in cyber-physical systems. Efficiency, reliability, predictability, and security are some of the challenges that cyber-physical systems pose in overall implementation. When integrated with the Internet of Things, cyber-physical systems evolve into a hybrid technology that advances both fields. The CPS-IoT models complement each other, offering useful insights for dealing with engineering systems and control modules.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_53-A_Review_of_the_Integration_of_Cyber_Physical_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Ensemble Variant CNN with Architecture Modified LSTM for Distracted Driver Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130452</link>
        <id>10.14569/IJACSA.2022.0130452</id>
        <doi>10.14569/IJACSA.2022.0130452</doi>
        <lastModDate>2022-04-30T05:22:31.9430000+00:00</lastModDate>
        
        <creator>Zakaria Boucetta</creator>
        
        <creator>Abdelaziz El Fazziki</creator>
        
        <creator>Mohamed El Adnani</creator>
        
        <subject>Distracted driver detection; ensemble variant convolutional neural network; hybrid squirrel whale optimization algorithm; local gradient pattern; local weber pattern; optimal fusion-based pattern descriptors; long short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Driver decisions and behaviors are major factors in on-road driving safety. Most significantly, traffic injuries and accidents are reduced by accurate driver behavior monitoring systems. However, challenges arise in understanding human behavior in practical environments due to uncontrolled scenarios such as cluttered and dynamic backgrounds, occlusion, and illumination variation. Recently, traffic accidents have increasingly been caused by distracted drivers, a problem that has grown with the popularization of smartphones. Therefore, a distracted driver detection model is necessary to correctly identify distracted driver behavior and warn the driver to prevent accidents, an issue that deserves serious attention. The main intention of this paper is to design and implement a novel deep learning framework for driver distraction detection. First, the datasets for driver distraction detection are gathered from public sources. Then, the Optimal Fusion-based Local Gradient Pattern (LGP) and Local Weber Pattern (LWP) descriptors perform pattern extraction on the images. These patterns are input to the new deep learning framework with an Ensemble Variant Convolutional Neural Network (EV-CNN) for feature learning. The EV-CNN includes three different models: ResNet50, InceptionV3, and Xception. The extracted features are passed to an architecture-optimized Long Short-Term Memory (LSTM) network. The Hybrid Squirrel Whale Optimization Algorithm (HSWOA) drives both the pattern extraction and the LSTM optimization. The experimental results demonstrate the effective classification performance of the suggested model in terms of accuracy for detecting distracted driving, which is helpful for maintaining safe driving habits.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_52-Integration_of_Ensemble_Variant_CNN_with_Architecture_Modified_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tweet Credibility Detection for COVID-19 Tweets using Text and User Content Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130451</link>
        <id>10.14569/IJACSA.2022.0130451</id>
        <doi>10.14569/IJACSA.2022.0130451</doi>
        <lastModDate>2022-04-30T05:22:31.9100000+00:00</lastModDate>
        
        <creator>Vaishali Vaibhav Hirlekar</creator>
        
        <creator>Arun Kumar</creator>
        
        <subject>Fake news; machine learning; natural language processing; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The deadly COVID-19 pandemic is currently sweeping the globe, and millions of people have been exposed to false information about the disease, its remedies, prevention, and origins. During such perilous times, the propagation of fake news and misinformation can have serious implications, causing widespread panic and exacerbating the pandemic&#39;s threat. This growing threat has given rise to considerable research challenges. This article is mainly concerned with fake news identification, and experimentation is specifically performed with COVID-19 fake news as a case study. Fake news is spread intentionally to mislead people, and therefore we need to identify users&#39; involvement and its correlation with additional features. The aim of this research is to develop a model that can predict the essence of an input tweet with the help of multiple features. Our strategy is to use the tweet&#39;s text as well as the user&#39;s metadata and to develop a model using natural language processing techniques and deep learning methods. In this process, we analyzed the behavior of the accounts and observed the impact of the various factors that can lead to fake news. The experimental analysis shows that the hybrid model with text and content features outperforms the existing state-of-the-art techniques, achieving a best F1-score of 0.976 during experimentation.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_51-Tweet_Credibility_Detection_for_COVID_19_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Spoken Digit Recognition in Gujarati Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130450</link>
        <id>10.14569/IJACSA.2022.0130450</id>
        <doi>10.14569/IJACSA.2022.0130450</doi>
        <lastModDate>2022-04-30T05:22:31.8970000+00:00</lastModDate>
        
        <creator>Jinal H. Tailor</creator>
        
        <creator>Rajnish Rakholia</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>CNN; Deep learning; digit; Gujarati; speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Speech recognition is an emerging field in the area of Natural Language Processing which enables human-machine interaction through speech. Speech recognition for digits is useful for number-oriented communication such as mobile numbers, scores, account numbers, registration codes, social security codes, etc. This research paper seeks to achieve recognition of the ten Gujarati digits from zero to nine (૦ to ૯) using a deep learning approach. The dataset was generated with a total of 8 native speakers (4 male, 4 female) in the age group of 20 to 40, and includes 2400 labeled audio clips from both genders. To implement the deep learning approach, a Convolutional Neural Network (CNN) with MFCC features is used to analyze the audio clips via generated spectrograms. For the proposed approach, three different experiments were performed with dataset sizes of 1200, 1800, and 2400. With this approach, a maximum accuracy of 98.7% is achieved for spoken digits in the Gujarati language, with 98% precision and 98% recall. Analysis across the experiments shows that increasing the dataset size improves the accuracy rate for spoken digit recognition, and increasing the number of CNN epochs also improves accuracy to some extent.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_50-Deep_Learning_Approach_for_Spoken_Digit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Osteoporosis in the Lumbar Vertebrae using L2 Regularized Neural Network based on PHOG Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130449</link>
        <id>10.14569/IJACSA.2022.0130449</id>
        <doi>10.14569/IJACSA.2022.0130449</doi>
        <lastModDate>2022-04-30T05:22:31.8630000+00:00</lastModDate>
        
        <creator>Kavita Avinash Patil</creator>
        
        <creator>K. V. Mahendra Prashanth</creator>
        
        <creator>A Ramalingaiah</creator>
        
        <subject>Cross validation; image enhancement; lumbar spine; lumbar vertebrae; neural network; osteoporosis; PHOG; regularization; trabecular bone; texture features; x-ray images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>One of the most common bone diseases in humans is osteoporosis, which is a major public health concern. Osteoporosis can be prevented if it is detected at an early stage. The research agenda consists of two phases: pre-processing of X-ray images of the spine, and analysis of texture features from the trabecular bone of lumbar vertebrae L1-L4 to detect osteoporosis. The preprocessing involves image enhancement of texture features and co-registration of the images in order to segment the L1-L4 regions of the lumbar spine. Range filtering and the Pyramid Histogram of Orientation Gradients (PHOG) are used to analyze texture features. Input images are filtered with a range filter that adjusts local sub-range intensities within a specified window to detect edges. A PHOG algorithm is then designed to determine both the local shape of an image texture and its spatial layout. Based on the texture features of lumbar vertebrae L1-L4, the vertebrae are classified as normal or osteoporotic using neural network (NN) models with L2 regularization. In the experiments, X-ray images and dual-energy X-ray absorptiometry (DXA) reports of individual patients are used to verify the system. The DXA reports provide a statistical analysis of normal and osteoporotic results, whereas the proposed work categorizes the vertebrae as normal or osteoporotic according to their texture features. A classification accuracy of 99.34% is achieved, and these classified results are cross-validated against the DXA reports. The diagnostic accuracy of the proposed method is higher than that of the existing DXA with X-ray. Further, the area under the Receiver Operating Characteristic (ROC) curve for L1-L4 showed significantly higher sensitivity for osteoporosis.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_49-Classification_of_Osteoporosis_in_the_Lumbar_Vertebrae.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Exhibition Recommendation in a Museum’s Virtual Tour Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130448</link>
        <id>10.14569/IJACSA.2022.0130448</id>
        <doi>10.14569/IJACSA.2022.0130448</doi>
        <lastModDate>2022-04-30T05:22:31.7530000+00:00</lastModDate>
        
        <creator>Shinta Puspasari</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Zulkardi</creator>
        
        <subject>Machine learning; recommender system; museum exhibition; virtual tour; pandemic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Museum visits went into crisis during the COVID-19 pandemic; the SMBII Museum in Palembang saw a remarkable decrease in visitors of up to 90%. A strategy is needed to increase museum visits and to sustain the museum&#39;s educational and tourism roles in a pandemic situation. This paper evaluates machine learning models for exhibition recommendations given to visitors through a virtual tour application. Exploring museum exhibitions that are unfamiliar to visitors through a virtual museum application can be tedious: if virtual collections are ancient and are presented without regard to visitor interest, they quickly lead to boredom and reluctance to explore the virtual museum. For this reason, an effective method is needed to provide suggestions or recommendations that match visitors&#39; interests based on museum visitor profiles, making it easier for visitors to find exciting exhibition rooms for learning and tourism. Machine learning has proven its effectiveness for predictions and recommendations. This study evaluates several machine learning classifiers for exhibition recommendation and develops a virtual tour application that applies the classifier with the best performance based on the model evaluation. The experimental results show that the KNN model performs best for exhibition recommendations, with a cross-validation accuracy of 89.09% and an F-measure of 90.91%. The SUS usability evaluation of the exhibition recommender feature in the SMBII museum&#39;s virtual tour application shows an average score of 85.83. The usability of the machine learning-based recommender feature is acceptable, making it easy and attractive for visitors to find exhibitions that might match their interests.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_48-Machine_Learning_for_Exhibition_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characters Segmentation from Arabic Handwritten Document Images: Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130447</link>
        <id>10.14569/IJACSA.2022.0130447</id>
        <doi>10.14569/IJACSA.2022.0130447</doi>
        <lastModDate>2022-04-30T05:22:31.7400000+00:00</lastModDate>
        
        <creator>Omar Ali Boraik</creator>
        
        <creator>M. Ravikumar</creator>
        
        <creator>Mufeed Ahmed Naji Saif</creator>
        
        <subject>Arabic handwritten character recognition; connected components; word segmentation; character segmentation; morphological operators; overlapping and touching characters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Character segmentation in unconstrained Arabic handwriting is a complex and challenging task due to the overlapping and touching of words or letters. Such issues have not been widely investigated in the literature. Addressing them in the segmentation stage reduces errors in the segmentation process, which plays a significant role in enhancing the accuracy of Arabic optical character recognition. Therefore, this paper proposes a hybrid approach to improve the segmentation accuracy for interconnected, overlapping, or touching characters. The proposed method includes several stages: first, extra shapes such as signatures are removed from the document; next, individual words are detected and extracted directly from the document using morphological operations, connected components, and bounding box detection; finally, touching-character segmentation is achieved based on background thinning and computational analysis of the word&#39;s region. The proposed method has been tested on the KHATT and IFN/ENIT databases and our own collected dataset. The experimental results showed that the proposed method achieves high performance and improved accuracy compared to other methods.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_47-Characters_Segmentation_from_Arabic_Handwritten_Document_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Information Retrieval using Query Transformation based on Ontology and Semantic-Association</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130446</link>
        <id>10.14569/IJACSA.2022.0130446</id>
        <doi>10.14569/IJACSA.2022.0130446</doi>
        <lastModDate>2022-04-30T05:22:31.7230000+00:00</lastModDate>
        
        <creator>Ram Kumar</creator>
        
        <creator>S. C. Sharma</creator>
        
        <subject>Ontology; semantics; information retrieval; query transformation; indexing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>A notable problem with current information retrieval systems is that input queries cannot express user information needs properly. This imprecise representation of the query hampers the effectiveness of the retrieval system. One way to solve this problem is to transform the original query into a more meaningful form. This paper proposes an ontology-based retrieval system that transforms initial user queries using domain ontologies and applies semantic association during the indexing process. The proposed system performs a semantic matching between an ontologically enhanced query and the index to capture query-related terms. To demonstrate the performance of the proposed system, it is evaluated using standard measures such as precision, recall, and NDCG. In addition, the authors present a comparison between the proposed and existing retrieval systems on three test datasets. Experimental results on these datasets indicate that the use of ontology and semantics significantly increases retrieval efficiency over the baseline. This work highlights the importance of ontology and semantics in information retrieval.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_46-Smart_Information_Retrieval_using_Query_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Sanskrit-Gujarati Symbolic Machine Translation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130444</link>
        <id>10.14569/IJACSA.2022.0130444</id>
        <doi>10.14569/IJACSA.2022.0130444</doi>
        <lastModDate>2022-04-30T05:22:31.7070000+00:00</lastModDate>
        
        <creator>Jaideepsinh K. Raulji</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Kaushika Pal</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>Bilingual synonym dictionary; Gujarati; lemmatization; machine translation system (MTS); morphological analyzer; Sanskrit; synthesizer; transliteration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Sanskrit falls under the Indo-European language family. Gujarati, which descended from Sanskrit, is a widely spoken language, particularly in the Indian state of Gujarat. The proposed and realized machine translation framework uses a grammatical transfer approach to translate written Sanskrit to Gujarati. Because both languages are morphologically rich, studying the morphology of each item is difficult but necessary for the implementation. To improve implementation accuracy and translation clarity, an in-depth study of the formation of nouns, verbs, pronouns, and indeclinables, as well as their mappings, has been carried out. Tokenization, lemmatization, morphological analysis, a Sanskrit-Gujarati bilingual synonym-based dictionary, language synthesis, and transliteration are the proposed framework&#39;s primary components. The implementation was tested on 1,000 phrases using the automated Bilingual Evaluation Understudy (BLEU) scale, which yielded a score of 58.04. It was also tested on the ALPAC scale, yielding an Intelligibility score of 69.16 and a Fidelity score of 68.11. The results are encouraging and show that the proposed system is promising and robust for real-world applications.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_44-A_Novel_Framework_for_Sanskrit_Gujarati_Symbolic_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Analysis of Oil Spill using Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130445</link>
        <id>10.14569/IJACSA.2022.0130445</id>
        <doi>10.14569/IJACSA.2022.0130445</doi>
        <lastModDate>2022-04-30T05:22:31.7070000+00:00</lastModDate>
        
        <creator>Myssar Jabbar Hammood AL-BATTBOOTTI</creator>
        
        <creator>Nicolae GOGA</creator>
        
        <creator>Iuliana MARIN</creator>
        
        <subject>PipelineMLML; oil spill; artificial intelligence; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Thousands of oil spills occur every year on offshore oil production platforms. Moreover, ships that cross rivers to reach the harbor cause spills each year. The current study focuses on Iraqi marine waters and rivers, especially the Al-Bakr, Khor al-Amaya, and ABOT oil terminals and the Shat Al-Arab river inside the Al Başrah oil terminal. An unmanned aerial vehicle has proven to be a valuable tool in mitigating and managing oil spill incidents and their impacts. To achieve high accuracy, the objective of the current research is to analyze captured images of rivers, identify oil pollution, and determine its location. The images were taken from the Iraqi Regional Organization for the Protection, the General Company for Ports of Iraq, the Iraqi Ministry of Environment, and online websites. The current paper presents a software framework for detecting oil spills, pollution in rivers, and other kinds of garbage. The framework, based on artificial intelligence, is divided into two parts: a training model and an operational model. In the training model, a machine learning model, one of the fastest and most accurate methods, is applied and integrated inside PipelineMLML. The object detection technique used can identify one or more categories of objects in a picture or video, and the locations of objects can be identified with the help of neural networks. In the operational model, the models can identify oil spills in images.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_45-Detection_and_Analysis_of_Oil_Spill.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modern City Issues, Management and the Critical Role of Information and Communication Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130443</link>
        <id>10.14569/IJACSA.2022.0130443</id>
        <doi>10.14569/IJACSA.2022.0130443</doi>
        <lastModDate>2022-04-30T05:22:31.6930000+00:00</lastModDate>
        
        <creator>Qasim Hamakhurshid Hamamurad</creator>
        
        <creator>Normal Mat Jusoh</creator>
        
        <creator>Uznir Ujang</creator>
        
        <subject>Cities; modern city; smart city; COVID-19; information communication technology (ICT); challenges; data; geographic information system (GIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Cities are currently dealing with major difficulties that no longer allow slight adjustments to the way cities operate. Instead, local officials must devise creative, transformative solutions. Fortunately, novel approaches to municipal administration and technological advancements are providing city officials with new and beneficial tools. As a result of these improvements, citizens, businesses, and other groups in the city will be able to actively participate in implementing the reforms. In a nutshell, technology can assist cities in becoming smarter. This paper highlights the implications of the management challenges of cities, the types of cities, and the issues that cities face during epidemics. A coordinated approach that responds to both COVID-19 and climate change is required to avoid negative outcomes from either. To figure out how Malaysian city administration works and how important information and communication technology is in Malaysian smart cities, it is important to look into technology and data opportunities.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_43-Modern_City_Issues_Management_and_the_Critical_Role.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Convolutional Neural Networks and Recurrent Neural Networks for Foliar Disease Classification in Apple Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130442</link>
        <id>10.14569/IJACSA.2022.0130442</id>
        <doi>10.14569/IJACSA.2022.0130442</doi>
        <lastModDate>2022-04-30T05:22:31.6770000+00:00</lastModDate>
        
        <creator>Disha Garg</creator>
        
        <creator>Mansaf Alam</creator>
        
        <subject>Artificial intelligence; deep learning; transfer learning; convolutional neural network; recurrent neural network; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Automated methods for image classification have become increasingly popular in recent years, with applications in agriculture including weed identification, fruit classification, and disease detection in plants and trees. In image classification, convolutional neural networks (CNN) have already shown exceptional results, but the problem is that these models cannot extract some relevant features of the input image. On the other hand, the recurrent neural network (RNN) can fully exploit the relationships among image features. In this paper, the performance of combined CNN and RNN models is evaluated by extracting relevant image features from images of diseased apple leaves. This article proposes a combination of a pre-trained CNN network and an LSTM, a particular type of RNN. With the use of transfer learning, deep features were extracted from several fully connected layers of pre-trained deep models, i.e., Xception, VGG16, and InceptionV3. The extracted deep features from the CNN layer and the RNN layer were concatenated and fed into the fully connected layer, allowing the proposed model to focus on finding relevant information in the input data. Finally, the class labels of apple foliar disease images are determined by the integrated model. Experimental findings demonstrate that the proposed approach outperforms the individual pre-trained models.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_42-Integration_of_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enterprise Architecture for Smart Enterprise System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130440</link>
        <id>10.14569/IJACSA.2022.0130440</id>
        <doi>10.14569/IJACSA.2022.0130440</doi>
        <lastModDate>2022-04-30T05:22:31.6600000+00:00</lastModDate>
        
        <creator>Meuthia Rachmaniah</creator>
        
        <creator>Arif Imam Suroso</creator>
        
        <creator>Muhamad Syukur</creator>
        
        <creator>Irman Hermadi</creator>
        
        <subject>Chili agrosystem; enterprise architecture; intelligent enterprise; supply chain; Zachman framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The chili agrosystem faces many challenges, and the enterprise architecture (EA) artifacts that form the building blocks of a chili enterprise system (ES) do not exist. This research is a qualitative systematic literature review conducted as part of developing an intelligent ES to examine chili&#39;s production, consumption, and price. The first step toward ES development is recognizing worthy chili EAs and EA frameworks (EAFs) that characterize existing chili market conditions. The study aims to answer three research questions (RQ) using a state-of-the-art approach, applying predetermined keywords to six research databases together with data gathered from the corresponding institutional agencies. The findings on RQ1 revealed eight dynamic main chili supply-chain patterns and data segregation among institutional agencies. RQ2 disclosed numerous studies on EA; however, none was offered for the chili agrosystem. The RQ3 results are multiple and expose different EAF characteristics; again, no study considers their applicability to the chili agrosystem. To conclude, the strength of enterprise architecture for the chili enterprise system lies in the resulting deliverables, which fall into three categories: factors of the chili agrosystem, enterprise factors, and architecture factors. Of the many available frameworks, the Zachman Framework (The Ontology) offers the most.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_40-Enterprise_Architecture_for_Smart_Enterprise_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality Application for Pain Management: User Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130441</link>
        <id>10.14569/IJACSA.2022.0130441</id>
        <doi>10.14569/IJACSA.2022.0130441</doi>
        <lastModDate>2022-04-30T05:22:31.6600000+00:00</lastModDate>
        
        <creator>Ioan Alexandru Bratosin</creator>
        
        <creator>Ionel Bujorel Pavaloiu</creator>
        
        <creator>Nicolae Goga</creator>
        
        <creator>Andreea Iuliana Luca</creator>
        
        <subject>Virtual Reality; user requirements; serious games; therapy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The usage of fully immersive Virtual Reality applications for pain relief is still at an exploratory stage. Consequently, there is a need to understand the user perspective and wishes regarding this kind of product. To address this issue, this paper presents quantitative research to establish the functional and non-functional requirements of our application. Voluntary response sampling was used for the research (N = 55). The inquiry form contained questions regarding serious game content, eagerness for testing, performance, resource consumption optimization, portability, data security, and accessibility. The questionnaire was shared via Google Forms, and the answers were collected and interpreted. The study revealed that a significant part of the participants were willing to test the application and that they would use an immersive Virtual Reality application during a normal treatment session if the opportunity were available. The following functional requirements were considered important: the presence of animals in the game, a bright environment, and nature-based background sounds. The following non-functional requirements were considered important: game optimization, portability, data security, accessibility, graphics quality, and a short learning curve.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_41-Virtual_Reality_Application_for_Pain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Industry Workpiece Classification and Defect Detection using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130439</link>
        <id>10.14569/IJACSA.2022.0130439</id>
        <doi>10.14569/IJACSA.2022.0130439</doi>
        <lastModDate>2022-04-30T05:22:31.6470000+00:00</lastModDate>
        
        <creator>Changxing Chen</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>S. H. Kok</creator>
        
        <creator>D. T. K. Tien</creator>
        
        <subject>Convolutional neural network; image processing; image recognition; defect detection; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Object detection and classification is one of the most extensively utilized machine vision applications, given the high requirements put forward for object classification and defect detection with the rise of object recognition scenes. Notwithstanding, conventional image recognition processing technology encounters specific drawbacks. The benefits and limitations of several typical conventional image recognition techniques were duly compared. These recognition approaches required multiple manual participation elements and extensive manpower, with restricted object identification. As a branch of machine learning, deep learning has attained more optimal results in the image recognition discipline. For the classification and defect detection of industrial workpieces, over 70 publications on deep learning algorithms across multiple application scenarios were reviewed to assess classical algorithm models and network structures based on deep learning theory. Relevant network model performance was compared and analyzed based on network intricacies, parallel to natural image classification. Six research gaps were identified based on the pros and cons of the reviewed algorithms, and six corresponding research proposals in workpiece image classification were highlighted, with prospects on the development direction of workpiece image classification and defect detection. This provides an empirical basis for selecting deep learning models for workpiece classification and defect detection in the future.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_39-Review_of_Industry_Workpiece_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Amazon Product Reviews using the Recurrent Neural Network (RNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130437</link>
        <id>10.14569/IJACSA.2022.0130437</id>
        <doi>10.14569/IJACSA.2022.0130437</doi>
        <lastModDate>2022-04-30T05:22:31.6300000+00:00</lastModDate>
        
        <creator>Roobaea Alroobaea</creator>
        
        <subject>Sentiment analysis; natural language processing; deep learning; RNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In this paper, the problem of sentiment analysis on Amazon products is tackled. Sentiment analysis systems are applied in all business and social fields because opinions are at the center of all human activities and are key influencers of our behaviors. In this study, the recurrent neural network (RNN) model is used to classify the reviews. Three Amazon review datasets were used to predict the sentiments of the authors. In conclusion, our results are comparable to the best state-of-the-art models, with accuracies of 85%, 70%, and 70% for the three datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_37-Sentiment_Analysis_on_Amazon_Product_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sign Language Recognition using Lightweight CNN-based Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130438</link>
        <id>10.14569/IJACSA.2022.0130438</id>
        <doi>10.14569/IJACSA.2022.0130438</doi>
        <lastModDate>2022-04-30T05:22:31.6300000+00:00</lastModDate>
        
        <creator>Batool Yahya AlKhuraym</creator>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <subject>Arabic sign language recognition; supervised learning; deep learning; efficient lightweight network based convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Communication is a critical skill for humans. People who are deprived of communicating through words like the rest of humans usually use sign language. For sign language, the main sign features are the handshape, the location, the movement, the orientation, and the non-manual component. The vast spread of mobile phones presents an opportunity for hearing-disabled people to engage more with their communities. Designing and implementing a novel Arabic Sign Language (ArSL) recognition system would significantly affect their quality of life. Deep learning models are usually too heavy for mobile phones: the more layers a neural network has, the heavier it is, yet a typical deep neural network necessitates a large number of layers to attain adequate classification performance. This project aims at addressing the Arabic Sign Language recognition problem while ensuring a trade-off between optimizing the classification performance and scaling down the architecture of the deep network to reduce the computational cost. Specifically, we adapted Efficient Network (EfficientNet) models and generated lightweight deep learning models to classify Arabic Sign Language gestures. Furthermore, a real dataset was collected from many different signers performing hand gestures for thirty different Arabic alphabet letters. Appropriate performance metrics were then used to assess the classification outcomes obtained by the proposed lightweight models. Besides, preprocessing and data augmentation techniques were investigated to enhance the models&#39; generalization. The best results were obtained using the EfficientNet-Lite 0 architecture and label smoothing as the loss function. Our model achieved 94% accuracy and proved to be effective against background variations.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_38-Arabic_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Urban Water Infrastructure Management System Design with Storm Water Intervention for Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130436</link>
        <id>10.14569/IJACSA.2022.0130436</id>
        <doi>10.14569/IJACSA.2022.0130436</doi>
        <lastModDate>2022-04-30T05:22:31.6130000+00:00</lastModDate>
        
        <creator>Edward B. Panganiban</creator>
        
        <creator>Rafael J. Padre</creator>
        
        <creator>Melanie A. Baguio</creator>
        
        <creator>Oliver B. Francisco</creator>
        
        <creator>Orlando F. Balderama</creator>
        
        <subject>Water infrastructure; geographic information systems; storm water; decision support systems; water management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Cauayan City is one of the hubs of economic development and activity in the northern part of the Philippines. Since this is an urban area, people and businesses tend to converge there, which results in higher water demand. At present, the combined distribution efficiency of its water infrastructures under the management and supervision of the Cauayan City Water District (CCWD) is only 87%, with a combined distribution loss of 10%, which amounts to 1,282.50 m3 of losses per day. This study suggests the necessity of introducing new and innovative water management technologies and systems, adapted to climate change, to address the city&#39;s needs. Problems that need to be addressed include the low-efficiency performance of the existing water infrastructure systems, the lack of management tools for more efficient delivery of water services, the limited service coverage of the water district due to limited water resources, and the depletion and contamination of aquifers and other water sources, since the shallow aquifer is mainly utilized. Hence, a decision-support application based on geographic information systems (GIS) for managing urban water infrastructure with storm water intervention is designed as a solution to address the needs of the city. The combination of decision support systems (DSS) and GIS was presented in this paper to maximize and properly utilize the water infrastructure. One of the tools used as a DSS is MIKE Operation, a complete GIS-integrated modeling tool that enables users to make sound decisions and create future concepts for urban storm water systems that are cost-effective and resilient to change. A conceptual framework and relevant methodologies were presented as a guide for the success of the designed new technology.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_36-An_Urban_Water_Infrastructure_Management_System_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>License Plates Detection and Recognition with Multi-Exposure Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130435</link>
        <id>10.14569/IJACSA.2022.0130435</id>
        <doi>10.14569/IJACSA.2022.0130435</doi>
        <lastModDate>2022-04-30T05:22:31.6000000+00:00</lastModDate>
        
        <creator>Seong-O Shim</creator>
        
        <creator>Romil Imtiaz</creator>
        
        <creator>Asif Siddiq</creator>
        
        <creator>Ishtiaq Rasool Khan</creator>
        
        <subject>License plate detection; character recognition; multi-exposure images; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Automatic License Plate Recognition (ALPR) has been an important research topic for many years in the intelligent transportation system and image recognition fields. License Plate (LP) detection and recognition has always been a challenging issue due to several factors, including different weather and lighting, unavoidable data acquisition noise, and requirement for real-time performance in state-of-the-art Intelligent Transportation Systems (ITS) applications. Different techniques have been proposed based on machine learning, deep learning, and image processing for the detection and recognition of LPs. This paper proposes a method that performs vehicle LP detection and character recognition with high accuracy by using artificially generated multi-exposure images of the LP. First, one under-exposed and three over-exposed images are generated from a reference image taken from the camera. Then, LP detection and character recognition algorithms are applied on these five images – one real image and four synthesized images. At each character location in LP, the character detected with the highest confidence level among these images is selected as the final predicted character. The system is fully automated, and no pre-processing, calibration, or configuration procedures are needed. Experimental results show that the proposed method achieves high accuracy and works robustly even in challenging conditions. The proposed method can be used in any existing ALPR system and the results on three recent ALPR techniques show that the accuracies are further improved when they are combined with the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_35-License_Plates_Detection_and_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Multiloop Controller for a Nonlinear Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130434</link>
        <id>10.14569/IJACSA.2022.0130434</id>
        <doi>10.14569/IJACSA.2022.0130434</doi>
        <lastModDate>2022-04-30T05:22:31.5830000+00:00</lastModDate>
        
        <creator>S Anbu</creator>
        
        <creator>M Senthilkumar</creator>
        
        <creator>T S Murugesh</creator>
        
        <subject>Nonlinear process; interaction; multi input multi output control; closed loop performance; stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Among the category of nonlinear processes, the Continuous Stirred Tank Reactor (CSTR) is one popular unit that finds application in various verticals of the chemical process industries. The process variables within the CSTR are highly interactive; hence, developing control strategies becomes a laborious task, as the CSTR can be viewed as a Multi Input Multi Output (MIMO) system. Often the CSTR is assumed to be a Single Input Single Output (SISO) system, and during the development of control strategies or algorithms the main objective is to maintain only a single process variable close to its set point, even though many measured variables form part of it. On the contrary, when compared to a SISO system, MIMO control includes sustaining different controlled variables at their appropriate set points concurrently, thereby achieving improved efficiency. The components’ concentration and the temperature inside the CSTR are highly interactive in nature and exhibit reasonably high nonlinear steady-state behaviour. Both the interaction and the nonlinear behaviour pose challenges to overall system stability. A stabilizing Proportional + Integral (PI) controller employing the Stability Boundary Locus (SBL) concept is designed for a CSTR, which encapsulates both stability and closed-loop performance in its design procedure, and is analysed through simulation in MATLAB with the results presented.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_34-Design_of_a_Multiloop_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Security Awareness in Online Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130432</link>
        <id>10.14569/IJACSA.2022.0130432</id>
        <doi>10.14569/IJACSA.2022.0130432</doi>
        <lastModDate>2022-04-30T05:22:31.5670000+00:00</lastModDate>
        
        <creator>Rosilah Hassan</creator>
        
        <creator>Wahiza Wahi</creator>
        
        <creator>Nurul Halimatul Asmak Ismail</creator>
        
        <creator>Samer Adnan Bani Awwad</creator>
        
        <subject>Covid-19; IR4.0; education 4.0; online learning; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The Covid-19 pandemic has intensified online learning activities, which have become the most crucial platforms for learning sessions. With these online learning activities, a major concern arises, particularly about the security of educators’ and students’ data. Every element in an online learning system can be a potential target of hacking or attack. Therefore, this paper examines students’ awareness of data security in online learning. Google Forms was used to gather the students’ feedback. Forty-two (42) students, comprising secondary school, pre-university, and university students, responded to the survey questions. The results show that most of the respondents are well aware of how to keep their data safe in online learning. It is discovered that the level of awareness about data security in online learning amongst the students is commendable. The findings of this study indicate that there are various ways to secure students’ data, especially during online learning.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_32-Data_Security_Awareness_in_Online_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KP-USE: An Unsupervised Approach for Key-Phrases Extraction from Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130433</link>
        <id>10.14569/IJACSA.2022.0130433</id>
        <doi>10.14569/IJACSA.2022.0130433</doi>
        <lastModDate>2022-04-30T05:22:31.5670000+00:00</lastModDate>
        
        <creator>Lahbib Ajallouda</creator>
        
        <creator>Fatima Zahra Fagroud</creator>
        
        <creator>Ahmed Zellou</creator>
        
        <creator>Elhabib Ben Lahmar</creator>
        
        <subject>Key-phrase extraction; natural language processing; semantic similarity; embedding technique; universal sentence encoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Automatic key-phrase extraction (AKE) is one of the most popular research topics in the field of natural language processing (NLP). Several techniques have been used to extract key-phrases: statistical methods, graph-based methods, classification algorithms, deep learning, and embedding techniques. AKE approaches that use embedding techniques are based on calculating the semantic similarity between a vector representing the document and the vectors representing the candidate phrases. However, most of these methods give acceptable results only on short texts such as paper abstracts, while their performance remains weak on long documents because the entire document is represented by a single vector. Generally, the key-phrases of a document are expressed in certain parts of the document, such as the title, the summary, and to a lesser extent the introduction and the conclusion, rather than throughout the entire document. For this reason, we propose in this paper KP-USE, a method that extracts key-phrases from long documents based on the semantic similarity of candidate phrases to the parts of the document that contain key-phrases. KP-USE makes use of the Universal Sentence Encoder (USE) as an embedding method for text representation. We evaluated the performance of the proposed method on three datasets containing long papers, namely NUS, Krapivin2009, and SemEval2010, where the results showed that it outperforms recent AKE methods based on embedding techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_33-KP_USE_An_Unsupervised_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Evolving Sentimental Bag-of-Words Approach for Feature Extraction to Detect Misinformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130431</link>
        <id>10.14569/IJACSA.2022.0130431</id>
        <doi>10.14569/IJACSA.2022.0130431</doi>
        <lastModDate>2022-04-30T05:22:31.5530000+00:00</lastModDate>
        
        <creator>Yashoda Barve</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Kaushika Pal</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>Misinformation detection; healthcare; incremental learning; sentiment analysis; bag-of-words</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>State-of-the-art misinformation detection techniques mainly focus on static datasets. However, a massive amount of information is generated online, and websites are flooded with both legitimate information and misinformation. It is difficult to keep track of this changing information and provide an up-to-date, accurate status of whether a webpage gives legitimate information or misinformation. Therefore, to keep the features up-to-date, the authors have proposed an evolving sentimental Bag-of-Words approach. This involves updating the sentimental features every time new or changed web contents are read. This process accumulates the sentimental features at different time intervals, which can be utilized to detect misinformation in URLs and update the status of a webpage with timely information. Apart from sentimental features, other state-of-the-art features, viz. syntactical, Part-Of-Speech Tagging (POST), and Term-Frequency (TF), are updated in a timely manner and utilized to detect misinformation. The model performed well with the support vector machine, showing an accuracy of 80%, while the decision tree classifier showed a lower accuracy of 56.66%.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_31-A_Novel_Evolving_Sentimental_Bag_of_Words_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Algorithms for Document Classification: Comparative Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130430</link>
        <id>10.14569/IJACSA.2022.0130430</id>
        <doi>10.14569/IJACSA.2022.0130430</doi>
        <lastModDate>2022-04-30T05:22:31.5370000+00:00</lastModDate>
        
        <creator>Faizur Rashid</creator>
        
        <creator>Suleiman M. A. Gargaare</creator>
        
        <creator>Abdulkadir H. Aden</creator>
        
        <creator>Afendi Abdi</creator>
        
        <subject>Document classification; machine learning algorithms; text classification; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Automated document classification is a machine learning fundamental that refers to automatically assigning categories to scanned images of documents. It has reached a state-of-the-art stage, but the performance and efficiency of the algorithms still need to be verified through comparison. The objective was to identify the most efficient classification algorithms according to scientific fundamentals. Experimental methods were used, collecting data from a total of 1080 students and researchers from Ethiopian universities, along with the Banknotes, Crowdsourced Mapping, and VxHeaven datasets provided by UC Irvine. 25% of the respondents felt that KNN is better than the other models. The overall performance of KNN, SVM, Perceptron, and Gaussian NB was analyzed through various parameters, namely an accuracy of 99.85%, a precision of 0.996, a recall of 100%, an F-score of 0.997, classification time, and running time. KNN performed better than the other classification algorithms, with a lower error rate of 0.0002 and the shortest classification time and running time of ~413 and 3.6978 microseconds, respectively. Considering all the parameters, it is concluded that the KNN classifier is the best algorithm.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_30-Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel High-Speed Key Transmission Technique to Avoid Fiddling Movements in e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130429</link>
        <id>10.14569/IJACSA.2022.0130429</id>
        <doi>10.14569/IJACSA.2022.0130429</doi>
        <lastModDate>2022-04-30T05:22:31.5200000+00:00</lastModDate>
        
        <creator>A. B. Hajira Be</creator>
        
        <creator>R. Balasubramanian</creator>
        
        <subject>Secret key; encryption; decryption; transaction; HSKT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>To prevent fraud in e-shopping, the High-Speed Key Transmission Technique (HSKT) is primarily focused on safe and effective transactions of payments and product content. The privacy of users, traders, trader information, product content, and the payment procedure are all protected by this framework. HSKT is also committed to providing an effective, sensible approach to privacy in order to minimize the complexity of applications and the load on consumers. This paper proposes a new key transmission technique that allows users to conduct multi-banking account transactions from a single location. The proposed system is devoted to preventing unwanted people from gaining access. The secret key is used to represent access mechanisms, allowing authorized users to verify a transaction if and only if its characteristics meet the secret key access requirements. Finally, the proposed HSKT improves the chances of success and throughput by reducing encryption and decryption time, energy consumption, and average delay.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_29-A_Novel_High_Speed_Key_Transmission_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stochastic Marine Predator Algorithm with Multiple Candidates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130428</link>
        <id>10.14569/IJACSA.2022.0130428</id>
        <doi>10.14569/IJACSA.2022.0130428</doi>
        <lastModDate>2022-04-30T05:22:31.5200000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Ratna Astuti Nugrahaeni</creator>
        
        <subject>Metaheuristic; marine predator algorithm; stochastic system; production planning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This work proposes a metaheuristic algorithm that modifies the marine predator algorithm (MPA), namely, the stochastic marine predator algorithm with multiple candidates (SMPA-MC). The modification is conducted in several aspects. The proposed algorithm replaces the three fixed equal-size iteration phases with linear probability. Unlike the original MPA, in the proposed algorithm the selection between exploration and exploitation is conducted stochastically during iteration. In the beginning, an exploration-dominant strategy is implemented to increase the exploration probability. Then, during the iteration, the exploration probability decreases linearly while the exploitation probability increases linearly. The second modification is in the prey’s guided movement. Different from the basic MPA, where the prey moves toward the elite with a small step size, in this work several candidates are generated with equal inter-candidate distance. Then, the best candidate is chosen to replace the prey’s current location. The proposed algorithm is then implemented to solve theoretical mathematical functions and a real-world optimization problem in production planning. The simulation result shows that, in terms of average fitness score, the proposed algorithm is better than MPA, especially in solving multimodal functions. The simulation result also shows that the proposed algorithm creates 9%, 19%, and 30% better total gross profit than particle swarm optimization, the marine predator algorithm, and the Komodo mlipir algorithm, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_28-Stochastic_Marine_Predator_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology Model for Medical Tourism Supply Chain Knowledge Representation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130427</link>
        <id>10.14569/IJACSA.2022.0130427</id>
        <doi>10.14569/IJACSA.2022.0130427</doi>
        <lastModDate>2022-04-30T05:22:31.5030000+00:00</lastModDate>
        
        <creator>Worawit Janchai</creator>
        
        <creator>Abdelaziz Bouras</creator>
        
        <creator>Veeraporn Siddoo</creator>
        
        <subject>Medical tourism; application ontology; supply chain; ontology development; knowledge representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This study developed an application ontology for the medical tourism supply chain (MTSC) domain. The motivation for developing an ontology is that current MTSC studies use a descriptive approach to provide knowledge, which is difficult to comprehend and apply. The formalization of MTSC knowledge can provide medical tourism stakeholders with a shared understanding of the medical tourism business. As a result, the MTSC domain requires an efficient semantic knowledge representation. Ontology is a popular approach for integrating knowledge and comprehension because it presents the schema and knowledge base in an accurate and relevant form. This paper employed the ontology engineering methodology, which included specification, conceptualization, and implementation steps. The conceptual model and facets of the MTSC are proposed. The MTSC objective and scope are tested with semantic competency questions against SPARQL query formulations. Ontology metrics evaluation was used to verify the ontology quality, including external validation by domain experts. The results showed that the MTSC ontology has an appropriate schema design, terminology, and query results.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_27-An_Ontology_Model_for_Medical_Tourism_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Node Trustworthiness through Adaptive Trust Threshold for Secure Routing in Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130426</link>
        <id>10.14569/IJACSA.2022.0130426</id>
        <doi>10.14569/IJACSA.2022.0130426</doi>
        <lastModDate>2022-04-30T05:22:31.4900000+00:00</lastModDate>
        
        <creator>M Venkata Krishna Reddy</creator>
        
        <creator>P. V. S. Srinivas</creator>
        
        <creator>M. Chandra Mohan</creator>
        
        <subject>Node trustworthiness; misbehaving nodes; secure trust; static threshold; adaptive threshold; secure routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In the field of communication, Mobile Ad-hoc Networks (MANETs) have become popular and widely used. However, there are many security challenges in communication through these networks due to the presence of malicious nodes. The aim of this article is to present a novel adaptive-threshold trust-based approach for isolating malicious nodes to establish secure routing between source and destination. Many existing cryptography methods are complex and do not properly address the elimination of malicious nodes. Several trust-dependent mechanisms have been proposed that supplement traditional cryptography-related security schemes. However, it is observed that most of these trust-based approaches use direct trust and compare it with a static trust threshold. This article proposes a novel method, secured trust with adaptive threshold (STAT), that uses the adaptive threshold technique (APTT) combined with a secure trust-based approach (STBA) to evaluate node trustworthiness for efficient routing. Secure trust for a node is calculated based upon three-tier observations, comprising direct, neighbor, and self-historical observations, to enrich the trust factor, and the adaptive trust threshold is determined dynamically based upon network parameters. A node’s secure trust is compared with the computed adaptive trust threshold to isolate malicious nodes from routing. The proposed method is compared with two cases: routing performed without any trust calculation, and routing with trust calculation compared against a static trust threshold. Results show significant performance of the proposed work in terms of metrics like packet delivery ratio, delay, throughput, false positive detection ratio, and packet drop ratio. The proposed method STAT effectively isolates the malicious nodes and establishes secure routing.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_26-Assessing_Node_Trustworthiness_through_Adaptive_Trust.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three Layer Authentications with a Spiral Block Mapping to Prove Authenticity in Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130425</link>
        <id>10.14569/IJACSA.2022.0130425</id>
        <doi>10.14569/IJACSA.2022.0130425</doi>
        <lastModDate>2022-04-30T05:22:31.4900000+00:00</lastModDate>
        
        <creator>Ferda Ernawan</creator>
        
        <creator>Afrig Aminuddin</creator>
        
        <creator>Danakorn Nincarean</creator>
        
        <creator>Mohd Faizal Ab Razak</creator>
        
        <creator>Ahmad Firdaus</creator>
        
        <subject>Fragile watermarking; self-embedding; image authentication; self-recovery; medical image; spiral block mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Digital medical images have the potential to be manipulated by unauthorized persons due to advanced communication technology. Verifying integrity and authenticity have become important issues for medical images. This paper proposes a self-embedding watermark using a spiral block mapping for tamper detection and restoration. Block-based coding with a size of 3&#215;3 was applied to perform self-embedding watermarking with two authentication bits and seven recovery bits. The authentication bits are obtained from a set of conditions between the sub-block and the block image, and the parity bits of each sub-block. The authentication bits and the recovery bits are embedded in the least significant bits using the proposed spiral block mapping. The recovery bits are embedded into different sub-blocks based on the spiral block mapping. The watermarked images were tested under various tampering attacks such as blurring, unsharp-masking, copy-move, mosaic, noise, removal, and sharpening. The experimental results show that the scheme achieved a PSNR value of about 51.29 dB and an SSIM value of about 0.994 on the watermarked image. The scheme showed tamper localization with an accuracy of 93.8%. In addition, the proposed scheme does not require external information to generate the recovery bits. The proposed scheme was able to recover the tampered image with a PSNR value of 40.45 dB and an SSIM value of 0.994.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_25-Three_Layer_Authentications_with_a_Spiral_Block_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bearing Fault Detection based on Internet of Things using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130424</link>
        <id>10.14569/IJACSA.2022.0130424</id>
        <doi>10.14569/IJACSA.2022.0130424</doi>
        <lastModDate>2022-04-30T05:22:31.4730000+00:00</lastModDate>
        
        <creator>Sovon Chakraborty</creator>
        
        <creator>F. M. Javed Mehedi Shamrat</creator>
        
        <creator>Rasel Ahammad</creator>
        
        <creator>Md. Masum Billah</creator>
        
        <creator>Moumita Kabir</creator>
        
        <creator>Md Rabbani Hosen</creator>
        
        <subject>Accuracy; convolution 1D; convolution 2D; data loss; faulty machinery; mobileNetV2; precision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In the age of the industrial revolution, industry and machinery are elements of the utmost importance to the development of human civilization. As industries are dependent on their machines, regular maintenance of these machines is required. However, if a machine is too big for humans to look after, a system is needed to observe it. This paper proposes a convolutional neural network-based system that detects faults in industrial machines by diagnosing motor sounds using accelerometer sensors. The sensors collect data from the machines, and the data are augmented into 261756 samples to train (70%) and test (30%) the models for better accuracy. The sensor data are sent to the server through the wireless sensor network and decomposed using discrete wavelet transformation (DWT). This big data is processed to detect faults. The study shows that custom CNN architectures surpass the performance of the transfer-learning-based MobileNetV2 fault diagnosis model. The system could successfully detect faults with up to 99.64% accuracy and 99.83% precision with MobileNetV2 pre-trained on the ImageNet dataset, while the Convolutional 1D and 2D architectures perform excellently with 100% accuracy and 100% precision.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_24-Bearing_Fault_Detection_based_on_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Employee Perceived Satisfaction in using Citrix Workspace Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130423</link>
        <id>10.14569/IJACSA.2022.0130423</id>
        <doi>10.14569/IJACSA.2022.0130423</doi>
        <lastModDate>2022-04-30T05:22:31.4570000+00:00</lastModDate>
        
        <creator>Bandar Ali Al Fehaidi</creator>
        
        <creator>Muhammad Asif Khan</creator>
        
        <subject>Citrix workspace; employee satisfaction; secure remote access; mobile worker; TAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In view of the increasing use of technologies and their impact on businesses, companies are now using technology to accelerate business processes. However, some companies are still failing to achieve the desired results from using technology. Technology in any company succeeds only when the people working in the company accept the technology wholeheartedly and show diligence in learning and using it. In this research we evaluated perceptions of a technology called Citrix Workspace in a Saudi company to see whether workers are satisfied with this technology and how they find its usefulness at work. The Technology Acceptance Model (TAM), a theoretical framework, has been used in this research to study different variables. The data collected from employees within the organization was analyzed to determine the level of satisfaction with the technology and to identify the factors that keep workers away from remote use of the technology. These results also help managers decide how Citrix Workspace or other such technologies can be used at remote sites of the company.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_23-Towards_Employee_Perceived_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Course Assignment Problem to Minimize Unserved Classes and Optimize Education Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130421</link>
        <id>10.14569/IJACSA.2022.0130421</id>
        <doi>10.14569/IJACSA.2022.0130421</doi>
        <lastModDate>2022-04-30T05:22:31.4430000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Ratna Astuti Nugrahaeni</creator>
        
        <subject>Course assignment problem; simulated annealing; collaborative model; online teaching; combinatorial problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This work proposes a collaborative course assignment model among universities. It differs from existing studies in educational assignment problems or course timetabling, where the scope is only within an institution or department. In this work, the system consists of several universities. A collaborative approach is adopted so that lecturer exchange is possible and conducted automatically. Each university shares its courses and lecturers. The optimization is conducted to minimize unserved classes and improve education quality. Cloud-theory-based simulated annealing is deployed to optimize the assignment. This model is then benchmarked against two non-collaborative models. The first model’s objective is to minimize unserved classes only. The second model’s objectives are to minimize unserved classes and improve education quality. The simulation result shows that the proposed assignment model is better at minimizing unserved classes and improving education quality. The proposed model reduces the unserved classes ratio by 89 to 92 percent compared with the non-collaborative model.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_21-Collaborative_Course_Assignment_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Morphological Analysis based Approach for Dynamic Detection of Inflected Gujarati Idioms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130422</link>
        <id>10.14569/IJACSA.2022.0130422</id>
        <doi>10.14569/IJACSA.2022.0130422</doi>
        <lastModDate>2022-04-30T05:22:31.4430000+00:00</lastModDate>
        
        <creator>Jatin C. Modh</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>Gujarati; idiom; machine translation system (MTS); morphological analysis; natural language processing (NLP); rudhiprayog</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The Gujarati language is primarily spoken by Gujarati people living in the state of Gujarat, India, where it is the medium of communication. In the Gujarati language, the ‘rudhiprayog’, i.e. idiom, is a very popular form of expression and represents the real flavour of the language. An idiom is a group of words that says one thing literally but means something else when explored in context. Like Gujarati verbs, idioms can be written in many forms. Due to the different morphological forms of the same Gujarati idiom, Gujarati idiom identification is a challenging task for any machine translation system (MTS); the many forms also make idiom translation more complicated. In the current paper, Gujarati idioms in their different inflected forms are collected, analyzed, and classified based on their ending words. After classifying the idioms, their base or root forms are identified. The base idiom forms and their possible inflected forms are morphologically analyzed, and rules are generated based on the relationship between the base form and the possible inflected forms of the idioms. These rules are used to generate possible idiom forms from the base idiom form. A Gujarati idiom in any valid inflected form can be dynamically detected in Gujarati input text using the proposed novel morphological-analysis-based approach. The results are encouraging enough to implement the proposed model of rules for natural language processing tasks as well as machine translation systems for Gujarati idioms.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_22-A_Novel_Morphological_Analysis_based_Approach_for_Dynamic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Different Supervised Machine Learning Algorithms in Predicting Linear Accelerator Multileaf Collimator Positioning’s Accuracy Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130420</link>
        <id>10.14569/IJACSA.2022.0130420</id>
        <doi>10.14569/IJACSA.2022.0130420</doi>
        <lastModDate>2022-04-30T05:22:31.4270000+00:00</lastModDate>
        
        <creator>Hamed S. El-Ghety</creator>
        
        <creator>Ismail Emam</creator>
        
        <creator>AbdelMagid M. Ali</creator>
        
        <subject>Linear accelerators; logistic regression; performance evaluation; prediction methods; supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Radiation oncology is one of the fields that employs machine learning to automate quality assurance tests so that errors and defects can be reduced, avoided, or eliminated as much as possible during tumor therapy using a Linear Accelerator with MultiLeaf Collimator (Linac MLC). The majority of machine learning applications have used supervised learning algorithms rather than unsupervised learning algorithms. However, in most cases, there is a clear bias in deciding which supervised machine learning algorithm to use, and prediction findings may be less accurate as a result of this bias. Therefore, in this study, evidence is presented for a novel application of the Logistic Regression technique to predict Linac MLC positioning accuracy, which achieved 98.68 percent prediction accuracy with robust and consistent performance across several sets of Linac data. This evidence was obtained by comparing the performance of various supervised machine learning algorithms (i.e. Logistic Regression, Decision Tree, Support Vector Machine, Random Forest, Naive Bayes, and K-Nearest Neighbor) in predicting the Linac MLC&#39;s positioning accuracy problem, using leaves&#39; positioning displacement datasets with labelled results as training and test datasets. For each method, two parameters were used to evaluate performance: prediction accuracy and the receiver operating characteristic curve. Based on that evaluation, the right selection sequence was proposed for supervised machine learning algorithms in order to achieve near-optimal prediction performance for the Linac MLC&#39;s leaf positioning accuracy problem. As a result, the selection bias, as well as the negative side effects that could have occurred (i.e. an ineffective preventive maintenance plan for the Linac MLC to avoid and solve causes of inaccurate leaf displacement, such as motor fatigue and stuck problems), were successfully avoided.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_20-Performance_Evaluation_of_Different_Supervised_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Soil Nutrients Prediction and Optimal Fertilizer Recommendation for Sustainable Cultivation of Groundnut Crop using Enhanced-1DCNN DLM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130419</link>
        <id>10.14569/IJACSA.2022.0130419</id>
        <doi>10.14569/IJACSA.2022.0130419</doi>
        <lastModDate>2022-04-30T05:22:31.4100000+00:00</lastModDate>
        
        <creator>Sivasankaran S</creator>
        
        <creator>K. Jagan Mohan</creator>
        
        <creator>G. Mohammed Nazer</creator>
        
        <subject>Soil nutrients; Enhc-ID-CNN DLM; nutrient classification; fertilizer recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The cultivation of crops and their production yields hugely depend upon the fertility composition of the soil in which the crops are cultivated. The prime fertility factors contributing to the health of the soil are the available soil nutrients. Varying climatic conditions and improper cultivation patterns have resulted in unpredictable growth and yield of groundnut crops; one of the major causes of the fluctuation seen in groundnut pod growth patterns and production is the differing soil nutrient composition of the land under cultivation. The unnecessary use of excessive artificial fertilizers to boost soil strength, without properly diagnosing the exact nutritional need of the soil required for the conducive growth of the crop, has led to an imbalanced distribution of the soil’s major macro-nutrient constituents, namely Phosphorous (P), Potassium (K), and Nitrogen (N). In this research article, we have made a detailed investigation of a nutrient prediction mechanism on soil nutrient datasets taken from a specific geographic location in one of the major groundnut-cultivating districts (Villupuram) in the state of Tamil Nadu, and have proposed a soil nutrients prediction scheme and optimal fertilizer recommendation model for sustainable cultivation of the groundnut crop using an Enhanced-1DCNN DLM. This investigation model utilizes the naturally compact, robust features of the 1DCNN in classifying the major macro-nutrients (N, P, K) into low, medium, and high values. Based on the generated heatmap results, the correlation between certain macro-nutrients and the corresponding micro-nutrient presence is classified. The proposed model has been compared, in terms of performance and error measures, with existing SVM, Na&#239;ve Bayes, and ANN models, and has proved to outperform all the compared baseline models while preserving the original data distribution, with an overall accuracy of 99.78%.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_19-Soil_Nutrients_Prediction_and_Optimal_Fertilizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location-based Mobile Application for Blood Donor Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130418</link>
        <id>10.14569/IJACSA.2022.0130418</id>
        <doi>10.14569/IJACSA.2022.0130418</doi>
        <lastModDate>2022-04-30T05:22:31.4100000+00:00</lastModDate>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Fernando Sierra-Linan</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Blood banks; location; mobile applications; donor; applicant; blood bank; mobile apps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Technological advances and the massive use of mobile devices have led to the exponential evolution of mobile applications in the health sector. Blood donation centers frequently suffer blood shortages due to a lack of donations, which is why requests for donors of a specific blood group, urgently needed for a transfusion, are frequently seen on social networks. Mobile applications for blood donation are crucial in the health sector, since they allow donors and blood donation centers to communicate immediately and coordinate with each other, minimizing the time needed to carry out the donation process. The present work developed a location-based mobile application for the search of blood donors, with the objectives of increasing the number of donors, achieving greater population reach, and reducing the time needed to search for blood donors. The results obtained show a significant increase of 39.58% in the number of donors, a reduction of 53.2% in the search time, and a greater population reach.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_18-Location_based_Mobile_Application_for_Blood_Donor_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The SMILE, A Cyber Pedagogy based Learning Management System Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130417</link>
        <id>10.14569/IJACSA.2022.0130417</id>
        <doi>10.14569/IJACSA.2022.0130417</doi>
        <lastModDate>2022-04-30T05:22:31.3970000+00:00</lastModDate>
        
        <creator>Paula Dewanti</creator>
        
        <creator>I Made Candiasa</creator>
        
        <creator>I Made Tegeh</creator>
        
        <creator>I Gde Wawan Sudatha</creator>
        
        <subject>ADDIE; cyber pedagogy; e-learning; higher education; learning management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The study&#39;s purpose is to create an LMS model that is adapted to the characteristics of university students to enhance the learning experience by utilizing various multidimensional learning resources in Cyber Pedagogy. This research and development study used the Analyze, Design, Develop, Implement, and Evaluate (ADDIE) instructional design framework as well as the Waterfall system development model to develop learning materials and infrastructure. The study involves 50 students from the Bali Institute of Technology and Business, as well as five lecturers and six judges at the expert test stage, namely learning media experts, learning design experts, and teaching experts, who were chosen through purposive sampling. The SMILE Model (Simple, Multidimensional, and Interactive Learning Ecosystem) is designed to meet the learning needs and expectations of today&#39;s largest market share of higher education, the millennial generation. The SMILE Model was developed successfully with ongoing assistance from researchers&#39; students, particularly in the E-Tourism course. The implementation is accomplished through the combination of university E-Learning and the use of Microsoft Teams as a virtual learning platform alternative. During the COVID-19 pandemic, this was considered the new face-to-face norm.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_17-The_SMILE_A_Cyber_Pedagogy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Elevator Obstruction Detection System using Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130416</link>
        <id>10.14569/IJACSA.2022.0130416</id>
        <doi>10.14569/IJACSA.2022.0130416</doi>
        <lastModDate>2022-04-30T05:22:31.3800000+00:00</lastModDate>
        
        <creator>Preethi Chandirasekeran</creator>
        
        <creator>Shridevi S</creator>
        
        <subject>Binary image classification; machine learning; deep learning; elevator safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>This paper proposes an approach that leverages real-time image classification to improve elevator safety. Elevators are a necessity for most multi-story buildings; as a result, they play a crucial role in the lives of millions of people around the world. Despite this, there has been limited advancement in the technology used for elevator door operators. In the current system, elevators use multiple infrared transmitter/receiver sensor pairs to detect obstructions between the doors. This does not effectively detect smaller objects between the elevator doors, such as pets, small children, and pet leashes, which has led to thousands of tragic fatalities. This paper proposes an approach to tackle this challenge by leveraging binary image classification to determine whether there is an obstruction between the elevator doors. The study includes the construction of a novel dataset of over 10,000 images and a comprehensive evaluation and comparison of several machine learning models for the proposed system. The results have produced novel findings that can be used to significantly improve the safety and reliability of elevator door operators, preventing tragic fatalities every year.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_16-Smart_Elevator_Obstruction_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Productive Feature Selection and Document Clustering (PFS-DocC) Model for Document Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130415</link>
        <id>10.14569/IJACSA.2022.0130415</id>
        <doi>10.14569/IJACSA.2022.0130415</doi>
        <lastModDate>2022-04-30T05:22:31.2230000+00:00</lastModDate>
        
        <creator>Perumal Pitchandi</creator>
        
        <subject>Benchmark standards; document clustering; productive feature selection; multiple clustering; web applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>In mining, document clustering aims to reduce document size by constructing a clustering model, which is extremely essential in various web-based applications. Over the past few decades, various mining approaches have been analysed and evaluated to enhance the process of document clustering and attain better results; however, in most cases the documents are disorganized, which degrades performance by reducing accuracy. The data instances need to be organized, and a productive summary has to be generated for every cluster. The summary or description of the document should present the information to users without requiring any further analysis and should help in easier scanning of associated clusters. This is performed by identifying the relevant and most influential features to generate each cluster. This work provides a novel approach known as the Productive Feature Selection and Document Clustering (PFS-DocC) model. Initially, the productive features are selected from the input dataset DUC2004, a benchmark dataset. Next, the document clustering model is applied for single and multiple clusters, where the generated output has to be extractive and generic. This model provides more appropriate and suitable summaries, well-suited for web-based applications. The experimentation is carried out on an openly available benchmark dataset, and the evaluation shows that the proposed PFS-DocC model gives superior outcomes with a higher ROUGE score.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_15-An_Efficient_Productive_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel IoT Architecture for Seamless IoT Integration into University Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130413</link>
        <id>10.14569/IJACSA.2022.0130413</id>
        <doi>10.14569/IJACSA.2022.0130413</doi>
        <lastModDate>2022-04-30T05:22:31.2070000+00:00</lastModDate>
        
        <creator>Wafa Altwoyan</creator>
        
        <creator>Ibrahim S. Alsukayti</creator>
        
        <subject>Internet of things; architecture; smart campus; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>IoT architectures play critical roles in guiding IoT system construction and enhancing IoT integration. However, there is still no standardized IoT architecture that meets the varying requirements of different IoT deployments. Although this has been a focus within the research community, no specific attention has been paid to optimizing an IoT architecture toward seamless IoT integration in educational environments. Moreover, different advanced system aspects have not been considered when designing an optimized IoT architecture. These include the need for complete security and privacy support, a highly responsive system, dynamic interactivity, and wide-range IoT connectivity. Such considerations are important given the complexity and multidimensionality of integrating IoT into educational environments. In this paper, we introduce a novel IoT architecture with the main objective of facilitating the effective integration of IoT into university systems. It also aims at optimizing the IoT-integrated system with advanced aspects to enhance system security, responsiveness, and IoT connectivity. The proposed architecture provides a modular and scalable design of six architectural layers, in addition to a vertical layer that provides security support across the architecture. Only the most relevant and critical layers are added to the architecture, to maintain a practical trade-off between effective modularity and low complexity. Compared with other IoT architectures, the proposed one ensures high reliability, data management, full security support, responsiveness, and wide coverage while maintaining acceptable complexity.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_13-A_Novel_IoT_Architecture_for_Seamless_IoT_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Defined Network based Load Balancing for Network Performance Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130414</link>
        <id>10.14569/IJACSA.2022.0130414</id>
        <doi>10.14569/IJACSA.2022.0130414</doi>
        <lastModDate>2022-04-30T05:22:31.2070000+00:00</lastModDate>
        
        <creator>Omran M. A. Alssaheli</creator>
        
        <creator>Z. Zainal Abidin</creator>
        
        <creator>N. A. Zakaria</creator>
        
        <creator>Z. Abal Abas</creator>
        
        <subject>Load balancing; software-defined network (SDN); SDN load balancing; network performance; Mininet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Load balancing distributes incoming network traffic across multiple controllers, improving the availability of the internet for users. The load balancer is responsible for maintaining internet availability for users 24 hours a day, 7 days a week. However, the internet becomes unavailable when the load balancer is inflexible, costly, and non-programmable for settings adjustment, especially in managing network traffic congestion. With increasing numbers of users relying on mobile devices and cloud facilities, the current load balancer has limitations, motivating the deployment of a Software-Defined Network (SDN). SDN decouples network control, applications, network services, and forwarding roles, and hence makes the network more flexible, affordable, and programmable. Furthermore, it has been found that SDN load balancing performs intelligent actions, is efficient, and maintains better QoS (Quality of Service) performance. This study proposes the application of SDN-based load balancing, since it provides pre-defined servers in the server farm that receive incoming Internet Protocol (IP) data packets from various clients with an equal number of loads and process orders for each server. Experiments have been conducted using Mininet™ based on several network topology scenarios (Scenario A, Scenario B, and Scenario C). The parameters used to evaluate load balancing in SDN are throughput, delay, and jitter. The findings indicate that Scenario A gives a high throughput, Scenarios B and C produce low jitter values, and Scenario C produces the lowest delay. The impact of SDN brings a multi-path adaptive direction in finding the best route for better network performance.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_14-Software_Defined_Network_based_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Digital Readiness of Small Medium Enterprises: Intelligent Dashboard Decision Support System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130412</link>
        <id>10.14569/IJACSA.2022.0130412</id>
        <doi>10.14569/IJACSA.2022.0130412</doi>
        <lastModDate>2022-04-30T05:22:31.1930000+00:00</lastModDate>
        
        <creator>Okfalisa </creator>
        
        <creator>Mahyarni</creator>
        
        <creator>Wresni Anggraini</creator>
        
        <creator>Saktioto</creator>
        
        <creator>B. Pranggono</creator>
        
        <subject>Decision support system; digital readiness; fuzzy analytical hierarchy process; business intelligent dashboard; objective matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The implications of the Covid-19 global pandemic are driving the transition of SMEs’ businesses towards digitalization. However, despite the use of digital platforms, many SMEs are unable to survive. Therefore, this study focuses on a Decision Support System (DSS)-based dashboard model as a new feature in assessing SMEs’ digitalization readiness. The twenty-four appraisal criteria are regarded as two views of the business and Information Technology (IT) dimensions, combining the Fuzzy Analytical Hierarchy Process (F-AHP) method for weighting measurement and the Objective Matrix (OMAX) for performance mapping analysis, both embedded in the Business Intelligence (BI) dashboard development. In Riau Province, Indonesia, a total of 118 SMEs participated in this study, and the findings revealed the general performance of the SMEs as rated at an “Average” level with an index value of 4.95, with the following parameter contributions to the index: 3.79, 3.84, 7.75, 4.68, 4.32, and 5.43 for Business Activity (BA), Transaction (TC), Marketing (MC), Management (MG), Micro Environment (MI), and Macro Environment (MA), respectively. Furthermore, the dashboard provides a tracking and analysis system with graphical diagrams extracted from each criterion hierarchy’s root cause down to the sub-criteria. The DSS dashboard’s information and knowledge have been developed into a promotional framework for stakeholders relevant to a digital business’s success and sustainability performance initiatives.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_12-Assessing_Digital_Readiness_of_Small_Medium_Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended Max-Occurrence with Normalized Non-Occurrence as MONO Term Weighting Modification to Improve Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130411</link>
        <id>10.14569/IJACSA.2022.0130411</id>
        <doi>10.14569/IJACSA.2022.0130411</doi>
        <lastModDate>2022-04-30T05:22:31.1770000+00:00</lastModDate>
        
        <creator>Cristopher C. Abalorio</creator>
        
        <creator>Ariel M. Sison</creator>
        
        <creator>Ruji P. Medina</creator>
        
        <creator>Gleen A. Dalaorao</creator>
        
        <subject>Extended MO; normalized NO; text classification; term weighting scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The increased volume of data due to advancements in the internet and related technology makes the classification of text documents a popular demand. Providing better representations of the feature vector by setting appropriate term weight values using supervised term weighting schemes improves classification performance on text documents. A state-of-the-art term weighting scheme, MONO, with variants TF-MONO and SRTF-MONO, improves text classification by considering the values of non-occurrences. However, the MONO strategy suffers setbacks in weighting terms whose interclass distinguishing power has non-uniform values. In this study, extended max-occurrence with normalized non-occurrence (EMONO), with variants TF-EMONO and SRTF-EMONO, is proposed, where the EMO value is determined as an interclass extension of MO. This addresses the problematic weighting behavior of MONO, which neglected the occurrences of classes with short-distance document frequency under non-uniform values. The proposed schemes&#39; classification performance is compared with the MONO variants on the Reuters-21578 dataset with the KNN classifier. Chi-square-max was used to conduct experiments at different feature sizes using micro-F1 and macro-F1. The experimental results explicitly show that the proposed EMONO outperforms the MONO variants at all feature sizes, with an EMO parameter value of 2 setting the number of classes in the MO extension. Moreover, SRTF-EMONO showed the best performance, with micro-F1 scores of 94.85% and 95.19% for the smallest and largest feature sizes, respectively. This study also emphasizes the significance of interclass document frequency values, alongside non-occurrence values, in improving text classification with term weighting schemes.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_11-Extended_Max_Occurrence_with_Normalized_Non_Occurrence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Teaching Assistant System for Mechanical Course based on Mobile AR Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130410</link>
        <id>10.14569/IJACSA.2022.0130410</id>
        <doi>10.14569/IJACSA.2022.0130410</doi>
        <lastModDate>2022-04-30T05:22:31.1600000+00:00</lastModDate>
        
        <creator>Jinglei Qu</creator>
        
        <creator>Bingxin Ma</creator>
        
        <creator>Lulu Zheng</creator>
        
        <creator>Yuhui Kang</creator>
        
        <subject>Augmented reality; teaching assistance; mechanical teaching; visual operation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Augmented reality technology has become a hot spot in many fields because of its unique real-time interaction and its ability to add virtual objects to 3D video space. To enable students to better understand and master traditional mechanical courses, and to overcome the limitations of the courseware teaching method, a teaching assistant system based on the Vuforia platform is designed and developed by combining augmented reality technology with existing teaching methods. On the smartphone application side, the 3D model of a part can be displayed by scanning its mechanical engineering drawing, and the model can be rotated, zoomed, and viewed in section. The system realizes 3D visualization and interactive operation of the 2D drawings in textbooks, achieving vivid, diverse, and efficient teaching. The results show that this method can not only promote the cognition of spatial relations based on visualization, but also create a situational learning environment based on experience, which can effectively improve the learning effect of courses and enhance students&#39; interest and enthusiasm in learning.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_10-Design_and_Implementation_of_Teaching_Assistant_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an IoT Device for Measurement of Respiratory Rate in COVID-19 Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130409</link>
        <id>10.14569/IJACSA.2022.0130409</id>
        <doi>10.14569/IJACSA.2022.0130409</doi>
        <lastModDate>2022-04-30T05:22:31.1600000+00:00</lastModDate>
        
        <creator>Jean Pierre Tincopa</creator>
        
        <creator>Paulo Vela-Anton</creator>
        
        <creator>Cender U. Quispe-Juli</creator>
        
        <creator>Anthony Arostegui</creator>
        
        <subject>Covid-19; respiration rate; internet of things; vital signs; hardware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>During the COVID-19 pandemic, patients who required face-to-face attention and tested positive, even when showing signs of high risk, were forced to isolate themselves in their own homes immediately, without adequate medical monitoring. Continuous remote monitoring of their vital signs would have helped to avoid subsequent hospitalization caused by the progression of the virus. Using deterministic design methods, a system to measure respiratory rate through impedance pneumography was proposed, amplifying microvolt signals to read and process data with a microcontroller. An embedded algorithm was designed to measure inspiration and expiration time. The values captured were sent via WiFi to a server for posterior evaluation by the clinician. The key findings of this study are as follows: (1) a respiratory-rate remote monitoring system was developed, displaying values calculated from impedance pneumography signals; (2) the correlation of the respiratory rate values from a patient during exercise and at rest, measured by a physician and by the device, was 0.96; (3) when analyzing separately the data obtained in the resting test and the exercise test, the performance of the device presented average error percentages of -5.36% and +1.97%, respectively. In conclusion, this device has practical applications for acute and chronic respiratory diseases, where respiratory rate is an indicator of the progression of these conditions.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_9-Development_of_an_IoT_Device_for_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Portable ECG Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130408</link>
        <id>10.14569/IJACSA.2022.0130408</id>
        <doi>10.14569/IJACSA.2022.0130408</doi>
        <lastModDate>2022-04-30T05:22:31.1470000+00:00</lastModDate>
        
        <creator>Zhadyra N. Alimbayeva</creator>
        
        <creator>Chingiz A. Alimbayev</creator>
        
        <creator>Nurlan A. Bayanbay</creator>
        
        <creator>Kassymbek A. Ozhikenov</creator>
        
        <creator>Oleg N. Bodin</creator>
        
        <creator>Yerkat B. Mukazhanov</creator>
        
        <subject>Electrocardiography; portable ECG device; ECG monitoring systems; cardiovascular diseases; mobile healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The number of patients with cardiovascular diseases (CVD) is rapidly increasing in the world. Many CVDs are likely to manifest their symptoms some time prior to the onset of any adverse or catastrophic events, and early detection of cardiac abnormalities is incredibly important. To reduce the risks of life-threatening arrhythmia, it is necessary to develop and introduce portable systems for monitoring the state of the heart in conditions of free activity. This paper presents the second generation (prototype) of a portable cardiac analyzer and the developed system for non-invasive cardiac diagnostics. The portable cardiac analyzer mainly consists of an ADC for taking an electrocardiosignal (ECS) and an STM32L151xD microcontroller. To record operational data on current ECS, a block of non-volatile high-speed memory MRAM is connected to the microcontroller. A communication unit is based on the universal combo module SIM868 from SIMCOM, which supports data exchange in GSM/GPRS networks. The developed ECG monitoring system allows making decisions at different levels (cardiac analyzer, server, doctor), as well as exchanging information necessary to ensure an effective diagnostic and treatment process. We evaluated the performances of the developed system. The signal-to-noise ratio of the output signal is favorable, and all the features needed for a clinical evaluation (P waves, QRS complexes and T waves) are clearly readable.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_8-Portable_ECG_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Estimation of Oleic Acid Content in Soy Plants using Green Band Data of Sentinel-2/MSI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130407</link>
        <id>10.14569/IJACSA.2022.0130407</id>
        <doi>10.14569/IJACSA.2022.0130407</doi>
        <lastModDate>2022-04-30T05:22:31.1300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoshitomo Hideshima</creator>
        
        <creator>Yuuhi Iwaki</creator>
        
        <creator>Ryota Ito</creator>
        
        <subject>Oleic acid; NDVI (normalized difference vegetation index); regressive analysis; soy plant; Multi Spectral Imager: MSI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>A method for estimating the oleic acid content in soy plants using green band data of the Sentinel-2/MSI (Multi Spectral Imager) is proposed. Conventionally, the vitality of agricultural plants is estimated with the NDVI (Normalized Difference Vegetation Index). However, the spatial resolution of the Near Infrared (NIR) band of Sentinel-2/MSI used for calculating NDVI is 20 m. Therefore, a method for estimating vitality with only the green band data of Sentinel-2/MSI is proposed here. Through regression analysis of the satellite data, drone-mounted NDVI camera data, and component analysis data obtained by gas chromatography, correlations are found between NDVI and the green band data of the optical sensor (MSI) onboard Sentinel-2, as well as with the component analysis data. It is also found that the new soy plant variety, the Saga University brand HO1, contains about 50% more oleic acid than the conventional variety, Fukuyutaka.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_7-Method_for_Estimation_of_Oleic_Acid_Content_in_Soy_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hadoop as a Service: Integration of a Company’s Heterogeneous Data to a Remote Hadoop Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130406</link>
        <id>10.14569/IJACSA.2022.0130406</id>
        <doi>10.14569/IJACSA.2022.0130406</doi>
        <lastModDate>2022-04-30T05:22:31.1130000+00:00</lastModDate>
        
        <creator>Yordan Kalmukov</creator>
        
        <creator>Milko Marinov</creator>
        
        <subject>Hadoop integration; data analytical tools; heterogeneous data integration; Hadoop distributed file system (HDFS); HBase; hive</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Data analysis is very important for the development of any business today. It helps to identify organizational bottlenecks, optimize business processes, and foresee customers’ demands and behavior, and it provides summarized data that can help reduce costs and increase profits. Having this information when designing new products or services greatly increases their chances of success, and thus provides an additional competitive advantage over other businesses. However, having a single data analyst with a computer is far from enough in the era of big data. Powerful data analytical software tools exist, but they are either expensive or hard to deploy and require multiple high-performance servers to run. Buying expensive hardware and software and hiring highly qualified IT experts is not affordable for all companies, especially smaller ones and start-ups. Therefore, this article proposes an architecture for integrating a company’s heterogeneous data (stored within a database of any type, or in the file system) into a remote Hadoop cluster, providing powerful data analytical services on demand. This is an affordable and cost-effective cloud-based solution, suitable for a company of any size. Businesses are not required to buy any hardware or software, but use the data analytical services on demand, paying a small processing fee per request or by subscription.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_6-Hadoop_as_a_Service_Integration_of_a_Companys_Heterogeneous_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Analysis of Heat-Affected Zone of Laser-Cut Heat-Resistant Paper using Otsu Thresholding Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130405</link>
        <id>10.14569/IJACSA.2022.0130405</id>
        <doi>10.14569/IJACSA.2022.0130405</doi>
        <lastModDate>2022-04-30T05:22:31.1130000+00:00</lastModDate>
        
        <creator>Shalida Mohd Rosnan</creator>
        
        <creator>Kong Peifu</creator>
        
        <creator>Toshiharu Enomae</creator>
        
        <creator>Nakagawa-Izumi Akiko</creator>
        
        <subject>Heat-affected zone; image analysis; image processing; laser cutting; thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Since ancient times, natural fibers have been essential in paper production and packaging fabrication. However, the beauty-marring carbonization, or heat-affected zone (HAZ), generated during the laser cutting of paper materials led to an intriguing discussion on the possibility of reducing this defect zone. Thus, paper loaded with aluminum hydroxide [Al(OH)3] (AH) was prepared and tested with laser cutting. There were two input parameters of laser processing: the ratio of laser power to its maximum and the cutting speed. The study discusses the HAZ area of paper with AH loaded at 0–40% on a dry pulp basis. The HAZ area was measured with image processing software, applying the Otsu thresholding technique (OTT) to HAZ area determination. The image analysis showed that the smallest HAZ area was achieved on samples with AH loaded at 40%. The optimal condition for the sample with 40% AH loaded was a 60% power ratio and a cutting speed of 20 mm/s. Based on the results, the cutting speed was the most significant parameter for producing the smallest HAZ area; therefore, the laser processing parameters were optimized to achieve a minimum HAZ area, making it possible to reduce the dark color appearance of the material surfaces. This study found that the application of the Otsu thresholding technique was significant for HAZ area determination and reduced the time required for image analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_5-Image_Analysis_of_Heat_Affected_Zone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Intelligent Natural Language Texts Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130404</link>
        <id>10.14569/IJACSA.2022.0130404</id>
        <doi>10.14569/IJACSA.2022.0130404</doi>
        <lastModDate>2022-04-30T05:22:31.1000000+00:00</lastModDate>
        
        <creator>Chen Xiao Yu</creator>
        
        <creator>Zhang Xiao Min</creator>
        
        <subject>Machine learning; natural language texts; text vectorization; classification information processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Natural language texts exist widely in many aspects of social life, and classification is of great significance to their efficient use and normalized preservation. Manual text classification suffers from problems such as being labor-intensive, experience-dependent, and error-prone; therefore, research on the intelligent classification of natural language texts has great social value. In recent years, machine learning technology has developed rapidly, and researchers have carried out a great deal of work on text classification based on machine learning, with research methods showing a characteristic diversification. This paper summarizes and compares text classification methods mainly from three aspects, including technical routes, text vectorization methods, and classification information processing methods, in order to provide references for further research and explore the development direction of text classification.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_4-Research_on_Intelligent_Natural_Language_Texts_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Soft-sensor of Carbon Content in Fly Ash based on LightGBM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130403</link>
        <id>10.14569/IJACSA.2022.0130403</id>
        <doi>10.14569/IJACSA.2022.0130403</doi>
        <lastModDate>2022-04-30T05:22:31.0830000+00:00</lastModDate>
        
        <creator>Liu Junping</creator>
        
        <creator>Luo Hairui</creator>
        
        <creator>Huang Xiangguo</creator>
        
        <creator>Peng Tao</creator>
        
        <creator>Zhu Qiang</creator>
        
        <creator>Hu XinRong</creator>
        
        <creator>He Ruhan</creator>
        
        <subject>LightGBM; carbon content; fly ash; soft-sensor; feature engineering; Bayesian optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The soft-sensor method for carbon content in fly ash predicts and calculates the carbon content of boiler fly ash by modeling the distributed control system (DCS) data of thermal power stations. A novel data-driven soft-sensor model that combines data pre-processing, feature engineering, and hyperparameter optimization is presented for application to the carbon content of fly ash. First, steady-state data are extracted by data mining technology. Second, twenty characteristics that may affect the carbon content in fly ash are identified as variables by feature engineering. Third, a LightGBM prediction model that captures the relation between the carbon content in fly ash and various DCS parameters is established, and its prediction accuracy is improved by the Bayesian optimization (BO) algorithm. Finally, to verify the prediction accuracy of the proposed model, a case study is carried out using the data of a coal-fired boiler in China. Results show that the proposed method yielded the best prediction accuracy and closely approximates the non-linear relationships between variables.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_3-Soft_sensor_of_Carbon_Content_in_Fly_Ash.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BEAM: A Network Topology Framework to Detect Weak Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130402</link>
        <id>10.14569/IJACSA.2022.0130402</id>
        <doi>10.14569/IJACSA.2022.0130402</doi>
        <lastModDate>2022-04-30T05:22:31.0670000+00:00</lastModDate>
        
        <creator>Hiba Abou Jamra</creator>
        
        <creator>Marinette Savonnet</creator>
        
        <creator>Eric Leclercq</creator>
        
        <subject>Weak signals; network analysis; network topology; graphlets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>Nowadays, strategic decision-making and immediate action are becoming a complex task for companies and policymakers, since the environment is subject to emerging changes that might include unknown factors. When facing these challenges, companies are exposed to opportunities for growth, but also to threats. Therefore, they seek to explore and analyze large amounts of data to detect emerging changes, or so-called weak signals, that can help maintain their competitive advantages and shape their future operational environments. Due to the increasing volume of daily produced data, however, scalable and automated computer-aided systems are needed to explore and extract these weak signals. To overcome the automation and scalability challenges, and to capture early signs of change in a big data environment, we propose a framework for weak signal detection relying on network topology. It is implemented under the Cocktail project framework, whose goal is to create a real-time observatory of trends, innovations, and weak signals circulating in the discourses of the food and health sectors on Twitter. This method quantitatively analyses the local network structure using graphlets (a particular type of motif) to find weak signals. It accordingly provides qualitative elements that contextualise the identified signals, allowing business experts to interpret and evaluate their dynamics and determine which ones may have a relevant future. After testing this method on different types of networks (we present two of them in this paper), we show that it is able to detect weak signals and provides a quantifiable signature that allows better decision making.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_2-BEAM_A_Network_Topology_Framework_to_Detect_Weak_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CISE: Community Engagement of CEB Cloud Ecosystem in Box</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130401</link>
        <id>10.14569/IJACSA.2022.0130401</id>
        <doi>10.14569/IJACSA.2022.0130401</doi>
        <lastModDate>2022-04-30T05:22:31.0530000+00:00</lastModDate>
        
        <creator>Benjamin Garlington</creator>
        
        <subject>Collaboration; outreach; engagement; narrowcasting; conceptual; methodological; storage-as-a-service; software-as-a-service; data-as-a-service; infrastructure-as-a-service; platform-as-a-service; cloud-ecosystem; minority serving institutions (MSI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(4), 2022</description>
        <description>The explosion of digital and observational data is having a profound effect on the nature of scientific inquiry, requiring new approaches to manipulating and analyzing large and complex data and increasing the need for solid collaborative research teams to address these challenges. These data, along with the availability of computational resources and recent advances in artificial intelligence, machine learning software tools, and methods, can enable unprecedented science and innovation. Unfortunately, these software tools and techniques are not uniformly accessible to all communities, particularly scientists and engineers at Minority Serving Institutions (MSI). Cloud computing resources are natural channels to enhance these institutions&#39; research productivity. However, utilizing cloud computing resources effectively for research requires a significant investment in time and effort, awkward manipulation of data sets, and deployment of cloud-based application workflows that support analysis and visualization tools.</description>
        <description>http://thesai.org/Downloads/Volume13No4/Paper_1-CISE_Community_Engagement_of_CEB_Cloud_Ecosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Temporal and Frequential Analysis Approaches of Electromyographic Signals for Gestures Recognition using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130382</link>
        <id>10.14569/IJACSA.2022.0130382</id>
        <doi>10.14569/IJACSA.2022.0130382</doi>
        <lastModDate>2022-03-30T07:42:09.2500000+00:00</lastModDate>
        
        <creator>Edwar Jacinto Gomez</creator>
        
        <creator>Fredy H. Martinez Sarmiento</creator>
        
        <creator>Fernando Martinez Santa</creator>
        
        <subject>Neural networks; electromyographic signals; Myo armband; tensorflow; fast fourier transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Nowadays, human-machine interfaces are increasingly intuitive and straightforward to design, but capturing electromyographic signal data with a minimal amount of hardware remains difficult. This work takes the signals of a human forearm as input parameters describing a series of five gestures, using a dataset of 8 channels of electromyographic signals captured with a Thalmic Labs Inc. device called the Myo armband. The aim is to compare the performance of an artificial neural network using time-domain data as input to the learning system against the same data pre-processed into the frequency domain, seeking an improvement in the neural network&#39;s performance, since transforming the system&#39;s input signals to the frequency domain minimizes the problems inherent to this type of signal. This transformation is achieved using the fast Fourier transform. Consequently, the work seeks a neural network architecture, designed using the TensorFlow libraries of Python, that recognizes the gestures captured with the Myo armband with a high detection rate for use in stand-alone applications. As a result, a comparison is obtained between the neural network trained with time-domain data and the same data expressed in the frequency domain, in terms of the increase in performance and the percentage of gesture detection.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_82-Performance_Evaluation_of_Temporal_and_Frequential_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Reliability Prediction by using Deep Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130381</link>
        <id>10.14569/IJACSA.2022.0130381</id>
        <doi>10.14569/IJACSA.2022.0130381</doi>
        <lastModDate>2022-03-30T07:42:09.2330000+00:00</lastModDate>
        
        <creator>Shivani Yadav</creator>
        
        <creator>Balkishan</creator>
        
        <subject>Software reliability; deep learning; performance metrics; prediction; dense neural network; fault prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The importance of software systems and their impact on all sectors of society is undeniable, and it increases every day as more services are digitized. This necessitates the evolution of development and quality processes to deliver reliable software. For reliable software, one important criterion is that it should be fault-free. Reliability models are designed to evaluate software reliability and predict faults. Software reliability prediction has always been an area of interest in the field of software engineering. Prediction of software reliability can be done using numerous available models, but with the inception of computational intelligence techniques, researchers are exploring new techniques such as machine learning, genetic algorithms, and deep learning to develop better prediction models. In the current study, a software reliability prediction model is developed using a deep learning technique over twelve real datasets from different repositories. The results of the proposed model are analyzed and found to be quite encouraging. The results are also compared with previous studies based on various performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_81-Software_Reliability_Prediction_by_using_Deep_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Small Object Detection in Medical Images through Deep Ensemble Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130380</link>
        <id>10.14569/IJACSA.2022.0130380</id>
        <doi>10.14569/IJACSA.2022.0130380</doi>
        <lastModDate>2022-03-30T07:42:09.2030000+00:00</lastModDate>
        
        <creator>J. Maria Arockia Dass</creator>
        
        <creator>S. Magesh Kumar</creator>
        
        <subject>Medical image processing; convolution neural network; lung tumor detection; early prediction; image enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Small object detection in medical images has become an interesting field of research that helps medical practitioners focus on in-depth evaluation of diseases. Accurate localization and classification of objects face tremendous difficulty due to the lower intensity of the images and the distraction of pixel points that vary the decision on identifying shape, structure, etc. In many real-time cases, the detection and classification of tiny objects in medically treated images becomes mandatory. The proposed system is designed on the same criteria, in which semantic segmentation of tiny objects in medical images is considered. The system design focuses on implementing the model for different human organs, such as the lung and liver. Axial CT or PET images of the lung and liver are the prime input to the system. Detection of tiny objects in the CT-PET images, their segmentation from the background, and classification of the segmented part as tumor or nodule are discussed. The preprocessed images undergo feature extraction after morphology segmentation, which determines the structural features of the segmented tiny object. The feature vectors are the feature points from KAZE feature extraction and the morphology-segmented image. These two inputs are fed to a deep ensemble convolution neural network (DECNN) to obtain the dual classification results. Quantitative measurements are performed to evaluate the decision-making system for the nodule or tumor class. Performance is measured using accuracy, precision, recall, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_80-A_Novel_Approach_for_Small_Object_Detection_in_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Hop-by-Hop Retransmission and Congestion Mitigation of an Optimum Routing and Clustering Protocol for WSNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130379</link>
        <id>10.14569/IJACSA.2022.0130379</id>
        <doi>10.14569/IJACSA.2022.0130379</doi>
        <lastModDate>2022-03-30T07:42:09.1870000+00:00</lastModDate>
        
        <creator>Prakash K Sonwalkar</creator>
        
        <creator>Vijay Kalmani</creator>
        
        <subject>Wireless sensor network; network lifetime; clustering strategies; clustering process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Over the past few decades, wireless sensor networks, which serve a growing number of applications in environments beyond human reach, have risen in popularity. Various routing algorithms have been suggested for network optimization, emphasizing energy efficiency, network longevity, and clustering processes. Extending the existing load-balancing, energy-efficient, sleep-awake-aware smart sensor network routing protocol, a modified load-balancing, energy-efficient, sleep-active-alert smart routing system for wireless sensor networks is presented in this paper, which takes network homogeneity into account. The modified protocol is the optimum clustering and routing protocol in wireless sensor networks (OCRSN), which simulates an enhanced network coupled node pair model. Our modified approach studies and enhances factors such as network stability, network lifetime, and the choice of cluster monitor mechanism. The strategy of combining typical sensor endpoints is applied to maximize energy efficiency. The proposed protocol significantly improved network parameters in simulations, showing that it could be a valuable option for WSNs. In addition to memory considerations and dependable transport in wireless sensor networks, this paper presents a hop-by-hop retransmission strategy and congestion mitigation, which is the major contribution of this paper. It is a highly consistent method based on a pipe flow model. After performing additional optimization overhead to improve the network lifespan of wireless sensor networks, the current algorithm is comparable to the low-energy adaptive clustering hierarchy protocol. The optimal clustering in the multipath and multihop technique aims to minimize the energy consumption for a circular area enclosed by a sink by replacing one-hop communication with efficient multihop communication. The optimum number of clusters is determined, and energy consumption is reduced by splitting the network into clusters of nearly equal size. The obtained simulation results demonstrate an increase in network lifetime compared to previous clustering strategies such as Low Energy Adaptive Clustering Hierarchy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_79-Energy_Efficient_Hop_by_Hop_Retransmission_and_Congestion_Mitigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Support Range based Rare Pattern Mining over Data Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130378</link>
        <id>10.14569/IJACSA.2022.0130378</id>
        <doi>10.14569/IJACSA.2022.0130378</doi>
        <lastModDate>2022-03-30T07:42:09.1700000+00:00</lastModDate>
        
        <creator>Sunitha Vanamala</creator>
        
        <creator>L. Padma Sree</creator>
        
        <creator>S. Durga Bhavani</creator>
        
        <subject>Depth first search; Hybrid-Eclat algorithm; SRP-tree; itemset; frequent-pattern support; rare-pattern support; pivot; data stream; rare itemset; infrequent itemset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Rare itemset mining is a relatively recent topic of study in data mining. In certain application domains, such as online banking transaction analysis, sensor data analysis, and stock market analysis, rare patterns are patterns with low support and high confidence that are extremely interesting compared to frequent patterns. Numerous applications generate large amounts of continuous data streams, and analyzing them to find rare patterns requires efficient algorithms capable of processing data streams. The strategies developed for static databases cannot be applied to data streams. As a result, algorithms created expressly for data stream processing are required to extract critical rare patterns. Rare pattern mining is still in its infancy, with only a few approaches available. To address this, the Dynamic Support Range-based Hybrid-Eclat Algorithm (DSRHEA) is developed, an Eclat-based technique for mining rare patterns from a data stream using bit-set vertical mining with two item-based optimizations. The detected patterns are kept in a prefix-based rare pattern tree that uses double hashing to maintain the rare patterns in the data stream. Testing showed that the proposed method performed well in terms of running time, the number of rare patterns generated, and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_78-Dynamic_Support_Range_based_Rare_Pattern_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structural Information Retrieval in XML Documents: A Graph-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130377</link>
        <id>10.14569/IJACSA.2022.0130377</id>
        <doi>10.14569/IJACSA.2022.0130377</doi>
        <lastModDate>2022-03-30T07:42:09.1530000+00:00</lastModDate>
        
        <creator>Imane Belahyane</creator>
        
        <creator>Mouad Mammass</creator>
        
        <creator>Hasna Abioui</creator>
        
        <creator>Assmaa Moutaoukkil</creator>
        
        <creator>Ali Idarrou</creator>
        
        <subject>Semi-structured document; XML document; largest common sub-graph; structural Information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Although retrieval engines are becoming more and more functional and efficient, they still have the drawback of not being able to locate the relevant documentary granularity, which results in ignoring the structural aspect. In the context of XML documents, Information Retrieval Systems can return documentary granules to the user. Several studies have used graphs to represent XML documents. In the scope of this research, the structure of a semi-structured document and that of a user&#39;s query can be seen as arborescences composed of a hierarchy of nested elements. Graph theory is used to calculate the structural proximity, and especially the intersection, between these two arborescences. The article presents a graph-based model for structural information retrieval. A collection of multimedia documents is randomly extracted from INEX (Initiative for the Evaluation of XML Retrieval) 2010 to validate the approach. The first results show the interest of such an approach.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_77-Structural_Information_Retrieval_in_XML_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-quality Voxel Reconstruction from Stereoscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130376</link>
        <id>10.14569/IJACSA.2022.0130376</id>
        <doi>10.14569/IJACSA.2022.0130376</doi>
        <lastModDate>2022-03-30T07:42:09.1230000+00:00</lastModDate>
        
        <creator>Arturo Navarro</creator>
        
        <creator>Manuel Loaiza</creator>
        
        <subject>Voxel reconstruction; stereoscopy; convolutional neural networks; disparity maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Volumetric reconstruction from one or multiple RGB images has shown significant advances in recent years, but the approaches used so far do not take advantage of stereoscopic features, such as distance blur, perspective disparity, and textures, that are useful for shaping object volumes. Our study evaluates a convolutional neural network architecture for reconstructing 128&#179; voxel models from 960 pairs of stereoscopic images. Preliminary results show 80% coincidence with the original models in two categories using the Intersection over Union metric. These results indicate that good reconstructions can be made from a small dataset, reducing the time and memory required for this task.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_76-High_quality_Voxel_Reconstruction_from_Stereoscopic_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple Hydrophone Arrays based Underwater Localization with Matching Field Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130375</link>
        <id>10.14569/IJACSA.2022.0130375</id>
        <doi>10.14569/IJACSA.2022.0130375</doi>
        <lastModDate>2022-03-30T07:42:09.1070000+00:00</lastModDate>
        
        <creator>Shuo Jin</creator>
        
        <creator>Xiukui Li</creator>
        
        <subject>Matched Field Processing (MFP); hydrophone array; source localization; underwater acoustic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Matched field processing (MFP) is a general passive localization method for underwater sound sources due to its advantages in ultra-long-distance positioning. In this paper, assuming the total number of hydrophones remains unchanged, a single hydrophone array is divided into multiple hydrophone sub-arrays for independent positioning, and the positioning results of the sub-arrays are fused to reduce the impact of noise and improve the robustness of the positioning system. Based on the traditional Bartlett processor, we derive the formula for the average positioning error as it varies with signal-to-noise ratio (SNR) and the number of hydrophones. The formula is used to decide the optimal structure of the sub-arrays, i.e., the number of sub-arrays and the number of hydrophones in each sub-array. Experiments and simulations prove that multiple sub-arrays can improve positioning accuracy compared with a single hydrophone array in a noisy environment. The average positioning errors produced by the experiments are consistent with the numerical ones based on the theoretical analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_75-Multiple_Hydrophone_Arrays_based_Underwater_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring Privacy Preservation Access Control Mechanism in Cloud based on Identity based Derived Key</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130374</link>
        <id>10.14569/IJACSA.2022.0130374</id>
        <doi>10.14569/IJACSA.2022.0130374</doi>
        <lastModDate>2022-03-30T07:42:09.0770000+00:00</lastModDate>
        
        <creator>Suresha D</creator>
        
        <creator>K Karibasappa</creator>
        
        <creator>Shivamurthy</creator>
        
        <subject>Access control; cloud storage; confidentiality; data origin authentication; key derivation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Cloud computing is a dominant technology that involves massive amounts of data storage and access via the internet. Because a large amount of data is stored in data centers, it is critical to implement appropriate access control mechanisms over data stored in the cloud. Today, numerous access control mechanisms are available to provide confidentiality, privacy, and data origin authentication in a cloud environment. The available access control techniques may incur high computational overhead and lack security guarantees. In this paper, we designed and implemented privacy-preserving access control in cloud computing using derived-key identity-based encryption. The proposed method may reduce the computational overhead of key generation while also increasing the robustness of the cryptographic keys. During the key generation process, a trusted key center (TKC) is involved. The experimental results show that the proposed method reduces computational overhead and provides an easy way to implement an access control mechanism in a cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_74-Ensuring_Privacy_Preservation_Access_Control_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Game Theory-based Virtual Machine Placement Algorithm in Hybrid Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130373</link>
        <id>10.14569/IJACSA.2022.0130373</id>
        <doi>10.14569/IJACSA.2022.0130373</doi>
        <lastModDate>2022-03-30T07:42:09.0600000+00:00</lastModDate>
        
        <creator>Nawaf Alharbe</creator>
        
        <creator>Mohamed Ali Rakrouki</creator>
        
        <subject>Cloud computing; virtual machine placement; game theory; quality of service; load balancing; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This paper deals with the problem of virtual machine placement in a hybrid cloud environment from economic and QoS perspectives. Excessive investment of resources in a cloud computing environment results in wasted resources, while too few resources can cause QoS issues. This paper uses a game theory model to describe the problem and find the balance between these contradictions. Based on this model, a virtual machine placement algorithm for scheduling virtual resources is proposed. In contrast to traditional game theory approaches, our LBOGT algorithm models a game between three sides: users, individual providers, and provider groups. Experiments show that the proposed algorithm reduces the energy consumption of physical machines by 6.16% and increases provider profit by 10.6% while preserving users’ QoS.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_73-A_Game_Theory_based_Virtual_Machine_Placement_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Text Summarization Approach based on Relative Entropy and Document Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130372</link>
        <id>10.14569/IJACSA.2022.0130372</id>
        <doi>10.14569/IJACSA.2022.0130372</doi>
        <lastModDate>2022-03-30T07:42:09.0470000+00:00</lastModDate>
        
        <creator>Nawaf Alharbe</creator>
        
        <creator>Mohamed Ali Rakrouki</creator>
        
        <creator>Abeer Aljohani</creator>
        
        <creator>Mashael Khayyat</creator>
        
        <subject>Natural language processing; text summarization; extractive methods; relative entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In the era of the fourth industrial revolution, the rapid reliance on the Internet has made online resources grow explosively. This revolution has emphasized the demand for new approaches to exploiting online resources such as texts. The difficulty of comparing unstructured resources (texts) is thus driving the demand for a new approach, which is the core of this paper. Text summarization is a vital part of text processing, and the focus here is on semantic information rather than just basic information: it requires mining topic features in order to obtain topic-word and topic-sentence relationships. The proposed automatic text summarization performs document decomposition according to relative entropy analysis, i.e., it measures the difference between probability distributions to measure the correlation between sentences. This paper introduces a new method for document decomposition, which categorizes sentences into three types of content. The performance demonstrated the efficiency of using the relative entropy of the topic probability distribution over sentences, enriching the horizon of the text processing and summarization research field.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_72-A_New_Text_Summarization_Approach_based_on_Relative_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An End-to-End Method to Extract Information from Vietnamese ID Card Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130371</link>
        <id>10.14569/IJACSA.2022.0130371</id>
        <doi>10.14569/IJACSA.2022.0130371</doi>
        <lastModDate>2022-03-30T07:42:09.0300000+00:00</lastModDate>
        
        <creator>Khanh Nguyen-Trong</creator>
        
        <subject>Optical character recognition; U-Net network; VGG16 network; CRAFT network; Rebia network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Information extraction from ID cards plays an important role in many daily activities, such as legal, banking, insurance, or health services. However, in many developing countries, such as Vietnam, it is mostly carried out manually, which is time-consuming, tedious, and prone to errors. Therefore, in this paper, we propose an end-to-end method to extract information from Vietnamese ID card images. The proposed method consists of three steps built on four neural networks and two image processing techniques: U-Net, VGG16, contour detection, and the Hough transform to pre-process input card images; CRAFT and the Rebia neural network for Optical Character Recognition; and Levenshtein distance and regular expressions to post-process the extracted information. In addition, a dataset comprising 3,256 Vietnamese ID cards, 400k manually annotated texts, and more than 500k synthetic texts was built to verify our method. The results of an empirical experiment conducted on this self-collected dataset indicate that the proposed method achieves high accuracies of 94%, 99.5%, and 98.3% for card segmentation, classification, and text recognition, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_71-An_End_to_End_Method_to_Extract_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Healthcare Monitoring using Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130370</link>
        <id>10.14569/IJACSA.2022.0130370</id>
        <doi>10.14569/IJACSA.2022.0130370</doi>
        <lastModDate>2022-03-30T07:42:09.0000000+00:00</lastModDate>
        
        <creator>Prajoona Valsalan</creator>
        
        <creator>Najam ul Hasan</creator>
        
        <creator>Imran Baig</creator>
        
        <creator>Manaf Zghaibeh</creator>
        
        <subject>Internet of Things (IoT); remote health care monitoring; wearable sensors; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>With the introduction of the novel coronavirus and the ensuing epidemic, health care has become a primary priority for all governments. In this context, the best course of action is to implement an Internet of Things (IoT)-based remote health monitoring system. As a result, IoT systems have attracted significant attention in academia and industry, and this trend is likely to continue as wearable sensors and smartphones become more prevalent. Even if the doctor is a substantial distance away, IoT health monitoring enables the prevention of illness and the accurate diagnosis of one’s current state of health through the use of a portable physiological monitoring framework that continually monitors the patient’s systolic blood pressure, blood glucose, oxygen saturation, and diastolic blood pressure. The expert system generates a diagnosis of the patient’s health status based on the sensor data. Once the patient’s sensor data is transmitted to the cloud via a WiFi module, the expert system uses it to diagnose the patient’s health status in order to facilitate any medical attention or critical care that may be required for his condition. The simulation is carried out in Matlab, and the results of the study are presented to demonstrate the suggested system’s significance.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_70-Remote_Healthcare_Monitoring_using_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intraday Trading Strategy based on Gated Recurrent Unit and Convolutional Neural Network: Forecasting Daily Price Direction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130369</link>
        <id>10.14569/IJACSA.2022.0130369</id>
        <doi>10.14569/IJACSA.2022.0130369</doi>
        <lastModDate>2022-03-30T07:42:08.9830000+00:00</lastModDate>
        
        <creator>Nabil MABROUK</creator>
        
        <creator>Marouane CHIHAB</creator>
        
        <creator>Zakaria HACHKAR</creator>
        
        <creator>Younes CHIHAB</creator>
        
        <subject>Forex; trading; machine learning; deep learning; random forest; technical indicators; technical rules; convolutional neural network; gated recurrent unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Forex, or FX, is the short form of the Foreign Exchange Market, known as the largest financial market in the world, where investors can buy a certain amount of currency, hold it until the exchange rate moves, then sell it to make money. This operation is not as easy as it looks; due to the strong fluctuation of this market, investors find it a risky area to trade. A successful Forex strategy should reduce risk and increase the profitability of investment by considering economic and political factors and avoiding emotional investment. In this article, we propose a trading strategy based on machine learning algorithms to reduce the risks of trading on the forex market while increasing benefits. For that, we use an algorithm that generates technical indicators and technical rules containing information that may explain the movement of the stock price; the generated data is fed to a machine learning algorithm to learn and recognize price patterns. Our algorithm is the combination of two deep learning algorithms, the Gated Recurrent Unit (GRU) and the Convolutional Neural Network (CNN), and aims to predict the next day’s signal (BUY, HOLD, or SELL). The model’s performance is evaluated for USD/EUR with the metrics generally used for machine learning algorithms; profitability is evaluated by comparing the returns of the strategy with the returns of the market. The proposed system showed a good improvement in price prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_69-Intraday_Trading_Strategy_based_on_Gated_Recurrent_Unit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure and Trusted Fog Computing Approach based on Blockchain and Identity Federation for a Granular Access Control in IoT Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130368</link>
        <id>10.14569/IJACSA.2022.0130368</id>
        <doi>10.14569/IJACSA.2022.0130368</doi>
        <lastModDate>2022-03-30T07:42:08.9530000+00:00</lastModDate>
        
        <creator>Samia EL HADDOUTI</creator>
        
        <creator>Mohamed Dafir ECH-CHERIF EL KETTANI</creator>
        
        <subject>Access control; blockchain; fog computing; identity federation; IoT; smart contracts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Fog computing is a new computing paradigm that extends the standard cloud computing model and can be adopted as a cost-effective strategy for managing connected objects by enabling real-time computing and communication for analytics and decision making. Nonetheless, even though Fog-based Internet of Things networks optimize the standard architecture by moving computing, storage, communication, and control decisions closer to the edge network, the technology becomes open to malicious attackers, and many business risks remain unresolved. In fact, access control, privacy, and trust risks present major challenges in Internet of Things environments based on Fog computing due to the large-scale distributed nature of devices at the Fog layer. In addition, traditional authentication methods are not adequate in Fog-based Internet of Things contexts since they consume significantly more computation power and incur high latency. To address these gaps, we present in this paper a secure and trusted Fog computing approach based on Blockchain and Identity Federation technologies for granular access control in IoT environments. The proposed scheme uses the Smart Contract concept and the Attribute-Based Access Control model to ensure the level of security and scalability required for data integrity without resorting to a central authority to make access decisions.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_68-A_Secure_and_Trusted_Fog_Computing_Approach_based_on_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human-Computer Interaction in Mobile Learning: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130367</link>
        <id>10.14569/IJACSA.2022.0130367</id>
        <doi>10.14569/IJACSA.2022.0130367</doi>
        <lastModDate>2022-03-30T07:42:08.9370000+00:00</lastModDate>
        
        <creator>Nurul Amirah Mashudi</creator>
        
        <creator>Mohd Azri Mohd Izhar</creator>
        
        <creator>Siti Armiza Mohd Aris</creator>
        
        <subject>Human-computer interaction; education technology; digital technology; mobile learning; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Mobile learning mainly concerns mobility and high-quality education, regardless of location or time. Human-computer interaction (HCI) comprises the concepts and methods by which humans interact with computers, including designing, implementing, and evaluating computer systems that are accessible and provide an intuitive user interface. Some studies have shown that mobile learning can help overcome multiple limitations and improve learning in educational systems. This study investigates the HCI design challenges, including the guidelines and methods in mobile HCI for education. Udemy, an existing mobile learning tool, is discussed with regard to its current and possible future design enhancements. The paper then further discusses future mobile learning and the possible improvements it could offer learners, based on the challenges of mobile HCI in education.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_67-Human_Computer_Interaction_in_Mobile_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer Vision-based System for Surgical Waste Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130366</link>
        <id>10.14569/IJACSA.2022.0130366</id>
        <doi>10.14569/IJACSA.2022.0130366</doi>
        <lastModDate>2022-03-30T07:42:08.9030000+00:00</lastModDate>
        
        <creator>Md. Ferdous</creator>
        
        <creator>Sk. Md. Masudul Ahsan</creator>
        
        <subject>COVID-19; You Only Look Once (YOLO); surgical waste; deep learning; image dataset; real-time detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The world population is going through a difficult time due to the COVID-19 pandemic while other disasters prevail. A new environmental catastrophe is also coming because surgical masks and gloves are being discarded everywhere, contributing to the massive spread of COVID-19 and to environmental damage. A significant number of masks and gloves are not properly managed; they are scattered around us on roads, in rivers, on beaches, in oceans, and in other places. These types of waste turn into microplastics and chemicals that are deadly harmful to the environment, human health, and other species, especially the aquatic animals of this planet. During the outbreaks of the corona pandemic, surgical waste in open places or seawater can create a fatally contagious environment, while collecting it in a particular area can protect us from the spread of infectious diseases. This study proposes a system that can detect surgical masks, gloves, and infectious/biohazard symbols so that infectious waste can be put down in a specific place or container. Among the various types of surgical waste, this study focuses on masks and gloves, since they are currently the most widely used items due to COVID-19. A novel dataset named MSG (Mask, Bio-hazard Symbol and Gloves) is created, containing 1153 images and their corresponding annotations. Different versions of You Only Look Once (YOLO) are applied as the architecture of this study; the YOLOX model performs best.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_66-A_Computer_Vision_based_System_for_Surgical_Waste_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review-based Context-Aware Recommender Systems: Using Custom NER and Factorization Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130365</link>
        <id>10.14569/IJACSA.2022.0130365</id>
        <doi>10.14569/IJACSA.2022.0130365</doi>
        <lastModDate>2022-03-30T07:42:08.8900000+00:00</lastModDate>
        
        <creator>Rabie Madani</creator>
        
        <creator>Abderrahmane Ez-zahout</creator>
        
        <subject>Recommender systems; context aware recommender systems; factorization machines; bidirectional encoder representations from transformers; named entity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Recommender Systems depend fundamentally on user feedback to provide recommendations. Classical Recommenders are based only on historical data and also suffer from several problems linked to the lack of data, such as sparsity. Users’ reviews represent a massive amount of valuable and rich knowledge, but they are still ignored by most current recommender systems. Information such as users’ preferences and contextual data could be extracted from reviews and integrated into Recommender Systems to provide more accurate recommendations. In this paper, we present a Context-Aware Recommender System model based on a Bidirectional Encoder Representations from Transformers (BERT) pretrained model to customize Named Entity Recognition (NER). The model automatically extracts contextual information from reviews and then feeds the extracted data into a contextual Factorization Machine to compute and predict ratings. Empirical results show that our model improves the quality of recommendation and outperforms existing Recommender Systems.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_65-A_Review_based_Context_Aware_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BCSM: A BlockChain-based Security Manager for Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130364</link>
        <id>10.14569/IJACSA.2022.0130364</id>
        <doi>10.14569/IJACSA.2022.0130364</doi>
        <lastModDate>2022-03-30T07:42:08.8570000+00:00</lastModDate>
        
        <creator>Hanan E. Alhazmi</creator>
        
        <creator>Fathy E. Eassa</creator>
        
        <subject>Big data security; blockchain; access control; hyperledger fabric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The amount of data generated globally is increasing rapidly. This growth in big data poses security and privacy issues. Organizations that collect data from numerous sources could face legal or business consequences resulting from a security breach and the exposure of sensitive information. The traditional tools used for decades to handle, manage, and secure data are no longer suitable in the case of big data. Furthermore, most current security tools rely on third-party services, which have numerous security problems. More research must investigate how to protect sensitive user information, which can be abused and altered from several sides. Blockchain is a promising technology that provides decentralized backend infrastructure. Blockchain keeps track of transactions indefinitely and protects them from alteration. It provides a secure, tamper-proof database that may be used to track the past state of the system. In this paper, we present our big data security manager based on Hyperledger Fabric, which provides end-to-end big data security, including data storage, transmission, and sharing, as well as access control and auditing mechanisms. The manager components and modular architecture are illustrated. The metadata and permissions related to stored datasets are stored in the blockchain to be protected. Finally, we have tested the performance of our solution in terms of transaction throughput and average latency. The performance metrics are provided by Hyperledger Caliper, a benchmark tool for analyzing Hyperledger blockchain performance.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_64-BCSM_A_BlockChain_based_Security_Manager_for_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Low-cost CO₂ Monitoring and Control System Prototype to Optimize Ventilation Levels in Closed Spaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130363</link>
        <id>10.14569/IJACSA.2022.0130363</id>
        <doi>10.14569/IJACSA.2022.0130363</doi>
        <lastModDate>2022-03-30T07:42:08.8430000+00:00</lastModDate>
        
        <creator>Ramces Cavallini-Rodriguez</creator>
        
        <creator>Jesus Espinoza-Valera</creator>
        
        <creator>Carlos Sotomayor-Beltran</creator>
        
        <subject>CO₂ Monitoring; IoT; Low-cost Indoor Ventilation System; NodeMCU; Open Source Software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>High concentrations of CO₂ are significantly present in closed environments that do not have proper ventilation. Such high concentrations generate negative health consequences such as dizziness, headaches, and various respiratory problems. For this reason, the design and implementation of a low-cost CO₂ monitoring and control prototype is proposed to optimize ventilation levels in closed spaces. The parameters that the proposed device measures are the concentration of carbon dioxide, humidity, and temperature. A digital PID controller was implemented, using the C++ programming language and an exhaust fan, to stabilize carbon dioxide levels within a closed space. The aforementioned parameters can be viewed in two ways: locally, through an LCD screen and LED indicators, or remotely, using the free Arduino IoT Cloud platform. The closed environment was emulated using a cardboard box, and in the tests the prototype managed to keep the CO₂ concentration levels below the established limit. However, this can be further improved by using more precise sensors for more accurate results. It is expected that this model can be successfully scaled to closed spaces such as classrooms and offices.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_63-Design_and_Implementation_of_a_Low_cost_CO2_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Safe Avoidance Time and Safety Message Dissemination for Vehicle to Vehicle (V2V) Communication in LTE C-V2X</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130362</link>
        <id>10.14569/IJACSA.2022.0130362</id>
        <doi>10.14569/IJACSA.2022.0130362</doi>
        <lastModDate>2022-03-30T07:42:08.8100000+00:00</lastModDate>
        
        <creator>Hakimah Abdul Halim</creator>
        
        <creator>Azizul Rahman Mohd Shariff</creator>
        
        <creator>Suzi Iryanti Fadilah</creator>
        
        <creator>Fatima Karim</creator>
        
        <subject>Time gap; safe distance; collision; VANET; CAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>VANET offers many opportunities to manage vehicle safety on the road efficiently. The standards from the European Telecommunications Standards Institute (ETSI) for Intelligent Transport Systems (ITS) provide the necessary upper-layer specifications for safety message dissemination between vehicles using Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM). Besides, the Long-Term Evolution (LTE) mobile radio technology in Release 14 comes with two modes of communication, mode 3 and mode 4, to support vehicle-to-vehicle communications. The relationship between vehicle time gap, speed, and UE transmit power significantly impacts the Packet Delivery Ratio (PDR) and throughput. At higher vehicle speeds, longer safe distances must be kept to ensure safety. However, at longer safe distances, we have shown that communication may be lost because CAM messages cannot be exchanged successfully; as a result, vehicle safety cannot be guaranteed using V2V communication. This may get worse in urban or city environments where interference is dominant. Simulation results provide evidence that the variable distance between vehicles cannot be ignored if vehicle safety with successful message communication is to be ensured.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_62-Performance_Evaluation_of_Safe_Avoidance_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition using Principal Component Analysis and Clustered Self-Organizing Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130361</link>
        <id>10.14569/IJACSA.2022.0130361</id>
        <doi>10.14569/IJACSA.2022.0130361</doi>
        <lastModDate>2022-03-30T07:42:08.7970000+00:00</lastModDate>
        
        <creator>Jasem Almotiri</creator>
        
        <subject>Artificial intelligence; machine learning; clustering; agglomerative hierarchical clustering; face recognition; neural network; self-organizing map; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Face recognition is one of the cornerstones of the face processing schemes that compose contemporary intelligent vision-based interactive systems between computers and humans. Instead of using the neurons of a Self-Organizing Map (SOM) neural network to cluster the facial data directly, in this work we apply agglomerative hierarchical clustering to cluster the neurons of the SOM network, which, in turn, are used to cluster the facial dataset. Beforehand, Principal Component Analysis (PCA) is employed to reduce the dimension of the facial data as well as to establish the initial state of the SOM neurons. The design of the clustered-SOM recognition engine involves post-training steps that label the clustered SOM neurons, resulting in a supervised SOM network. The effectiveness of the proposed model is demonstrated using the well-known ORL database. Using five images per person for SOM training, the proposed recognizer achieves a recognition rate of 94.7%, whereas using nine images raises the recognition rate to 99.33%. The facial recognizer attains notable reliability and robustness against additive white Gaussian noise: increasing the noise variance from 0 to 0.09 decreases the recognition rate by only 8%. Furthermore, the time cost is analyzed: training with 200 images takes less than 4 seconds, whereas testing with a new set of 200 images takes less than 0.013 seconds, which is competitive with many artificial intelligence and machine learning based schemes.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_61-Face_Recognition_using_Principal_Component_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Progressive 3-Layered Block Architecture for Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130360</link>
        <id>10.14569/IJACSA.2022.0130360</id>
        <doi>10.14569/IJACSA.2022.0130360</doi>
        <lastModDate>2022-03-30T07:42:08.7800000+00:00</lastModDate>
        
        <creator>Munmi Gogoi</creator>
        
        <creator>Shahin Ara Begum</creator>
        
        <subject>CNN; transfer learning; progressive resizing; PReLU; deep network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Convolutional Neural Networks (CNNs) have been used to handle a wide range of computer vision problems, including image classification and object detection. Image classification refers to automatically classifying a huge number of images, and various techniques have been developed to accomplish this goal. The focus of this article is to enhance the image classification accuracy of CNN models by using transfer learning and progressive resizing with a split-and-train strategy. Furthermore, the Parametric Rectified Linear Unit (PReLU) activation function, which generalizes the standard rectified unit, has also been applied to the dense layers of the model. PReLU enhances model fitting with almost no additional computational cost and low over-fitting risk. A &quot;Progressive 3-Layered Block Architecture&quot; model is proposed in this paper, which considers the fine-tuning of hyperparameters and optimizers of the deep network to achieve state-of-the-art accuracy on benchmark datasets with fewer parameters.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_60-Progressive_3_Layered_Block_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic User Activity Prediction using Contextual Service Matching Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130359</link>
        <id>10.14569/IJACSA.2022.0130359</id>
        <doi>10.14569/IJACSA.2022.0130359</doi>
        <lastModDate>2022-03-30T07:42:08.7500000+00:00</lastModDate>
        
        <creator>M. Subramanyam</creator>
        
        <creator>S. S. Parthasarathy</creator>
        
        <subject>Contextual information; service discovery; prediction; ubiquitous computing; user activity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The significance of context-based services is increasing rapidly with the advancement of integrated sensor and ubiquitous technologies. A review of existing approaches shows that the identification of user activity has considerable scope for improvement. After reviewing the current literature on context-based methodologies, it is found that existing methods fail to consider dynamic context; their modelling perspective is mainly oriented toward predefined and static contextual information. Further, existing models incorporate neither a potential belief system nor any service matching. Moreover, real-world case studies are characterized by complex user activity, and it is quite challenging to extract the accurate contextual information associated with such activity. From a practical deployment perspective, existing systems offer little support for collaborative networks, which are essential for constraint modelling in user activity detection. Therefore, the proposed manuscript contributes a solution to these research problems by introducing Dynamic User Activity Prediction using a Contextual Service Matching Mechanism. A mixed research methodology is used to show how a service matching mechanism supports contextual service discovery using multimodal activity data. The first contribution is a novel and simplified belief system that considers both static contextual parameters and dynamic activity-based contextual parameters. The second contribution is a novel service matching module that takes input from a service repository, user calendar events, and collaborative units to assist a similarity-based recommendation system. The model uses a Hidden Markov Model for activity determination, considering states of activity. With the combined use of user activity context, feature management, and a collaborative model, the proposed system offers better granularity in investigating user activity. The experimental and simulation analysis shows the enhanced accuracy of the proposed system under different test environments. The study also investigates the impact of the service matching mechanism and of relevance feedback on accuracy, finding that the proposed system achieves better accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_59-Dynamic_User_Activity_Prediction_using_Contextual_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wave Parameters Prediction for Wave Energy Converter Site using Long Short-Term Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130358</link>
        <id>10.14569/IJACSA.2022.0130358</id>
        <doi>10.14569/IJACSA.2022.0130358</doi>
        <lastModDate>2022-03-30T07:42:08.7330000+00:00</lastModDate>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Muhammad Umair</creator>
        
        <creator>Horio Keiichi</creator>
        
        <subject>Wave energy converter; significant wave height; peak wave period; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Forecasting the behaviour of various wave parameters is crucial for the safety of maritime operations as well as for optimal operation of wave energy converter (WEC) sites. For coastal WEC sites, the wave parameters of interest are significant wave height (Hs) and peak wave period (Tp). Numerical and statistical modeling, along with machine and deep learning models, have been applied to predict these parameters for the short- and long-term future. For near-future prediction of Hs and Tp, this study investigates the possibility of optimally training a Long Short-Term Memory (LSTM) model on historical values of Hs and Tp only. Additionally, the study investigates the minimum amount of training data required to predict these parameters with acceptable accuracy. The Root Mean Square Error (RMSE) measure is used to evaluate the prediction ability of the model. As a result, it is identified that LSTM can effectively predict Hs and Tp given their historical values only. For Hs, it is identified that a 4-year dataset, 20 historical inputs, and a batch size of 256 produce the best results for three-, six-, twelve-, and twenty-four-hour prediction windows at a half-hourly step. It is also established that future values of Tp can be optimally predicted using a 2-year dataset, 10 historical inputs, and a batch size of 128. However, due to the highly dynamic nature of the peak wave period, the LSTM model yielded relatively low prediction accuracy for Tp compared to Hs.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_58-Wave_Parameters_Prediction_for_Wave_Energy_Converter_Site.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System Architecture for Brain-Computer Interface based on Machine Learning and Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130357</link>
        <id>10.14569/IJACSA.2022.0130357</id>
        <doi>10.14569/IJACSA.2022.0130357</doi>
        <lastModDate>2022-03-30T07:42:08.7170000+00:00</lastModDate>
        
        <creator>Shahanawaj Ahamad</creator>
        
        <subject>Brain-computer interface; machine learning; internet of things; EEG; system architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Reading brain functions is required for treating neurological illness. A Brain-Computer Interface (BCI) connects the brain to the digital world for receiving, recording, processing, and comprehending brain signals. With a BCI, information from the user&#39;s brain is fed into actuation devices, which then carry out the actions programmed into them. The Internet of Things (IoT) has made it possible to connect a wide range of everyday devices. This paper proposes an improved system architecture from which asynchronous BCIs can benefit; individuals with severe motor impairments will particularly benefit from this feature. In traditional BCI systems, control commands were translated using a rule-based translation algorithm that relied only on EEG recordings of brain signals. Examining the diverse and cross-disciplinary applications of BCI technology, this argument produces speculative conclusions about how BCI instruments combined with machine learning algorithms could affect forthcoming procedures and practices. Compressive sensing and neural networks are used to compress and reconstruct the ECoG data presented in this article. The neural networks combine the classifier outputs adaptively based on feedback. A stochastic gradient descent solver is employed to train a multi-layer perceptron regressor. An example network is shown to achieve a 50% compression ratio and 89% reconstruction accuracy after training with real-world, medium-sized datasets.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_57-System_Architecture_for_Brain_Computer_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Customer Satisfaction of Digital Banking in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130356</link>
        <id>10.14569/IJACSA.2022.0130356</id>
        <doi>10.14569/IJACSA.2022.0130356</doi>
        <lastModDate>2022-03-30T07:42:08.7030000+00:00</lastModDate>
        
        <creator>Bramanthyo Andrian</creator>
        
        <creator>Tiarma Simanungkalit</creator>
        
        <creator>Indra Budi</creator>
        
        <creator>Alfan Farizki Wicaksono</creator>
        
        <subject>Sentiment analysis; ensemble method; customer satisfaction; digital bank</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Southeast Asia, including Indonesia, is seeing an increase in digital banking adoption, owing to changing customer expectations and increasing digital penetration. The Covid-19 pandemic has hastened this tendency toward digital transformation. However, customer satisfaction should not be left unmanaged during this transition. This research aims to assess customer satisfaction with digital banking in Indonesia based on sentiment analysis of Twitter data. The data collected relate to three digital banks in Indonesia, namely Jenius, Jago, and Blu. A total of 34,605 tweets were collected and analyzed within the period of August 1st 2021 to October 31st 2021. Sentiment analysis was conducted using nine stand-alone classifiers: Na&#239;ve Bayes, Logistic Regression, K-Nearest Neighbours, Support Vector Machines, Random Forest, Decision Tree, Adaptive Boosting, eXtreme Gradient Boosting, and Light Gradient Boosting Machine. Two ensemble methods, hard voting and soft voting, were also used in this research. The results of this study show that SVM has the best performance among the stand-alone classifiers, with an F1-score of 73.34%. The ensemble methods performed better than the stand-alone classifiers, and soft voting with the 5 best classifiers performed best overall, with an F1-score of 74.89%. The results also show that Jago sentiments were mainly positive, Jenius sentiments were mostly negative, and for Blu most sentiments were neutral.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_56-Sentiment_Analysis_on_Customer_Satisfaction_of_Digital_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data Security Algorithm for the Cloud Computing based on Elliptic Curve Functions and Sha3 Signature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130355</link>
        <id>10.14569/IJACSA.2022.0130355</id>
        <doi>10.14569/IJACSA.2022.0130355</doi>
        <lastModDate>2022-03-30T07:42:08.6870000+00:00</lastModDate>
        
        <creator>Sonia KOTEL</creator>
        
        <creator>Fatma SBIAA</creator>
        
        <subject>Cloud; IaaS simulation upon SimGrid (SCHIaas); elliptic curve encryption; one-time pad symmetrical encryption method (OTP); confidentiality; integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The rapid development of distributed system technologies poses numerous challenges. For example, one of the most critical challenges facing cloud computing is ensuring the security of confidential data during both transfer and storage. Indeed, many techniques are used to enhance data security in cloud computing storage environments; nevertheless, the most significant method for data protection is encryption. Thus, it has become an interesting topic of research, and different encryption algorithms have been put forward in the last few years to provide data security, integrity, and authorized access. However, they still have some limitations. In this paper, we study the security concept in cloud computing applications. Then, an ECC (Elliptic Curve Cryptography) based algorithm is designed and tested to ensure cloud security. The experimental results demonstrate the efficiency of the proposed algorithm, which presents a strong security level and reduced execution time compared to widely used existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_55-A_Data_Security_Algorithm_for_the_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Random and Sequence Workload for Web-Scale Architecture for NFS, GlusterFS and MooseFS Performance Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130354</link>
        <id>10.14569/IJACSA.2022.0130354</id>
        <doi>10.14569/IJACSA.2022.0130354</doi>
        <lastModDate>2022-03-30T07:42:08.6530000+00:00</lastModDate>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <creator>Nashihun Amien</creator>
        
        <subject>component; network storage; container; NFS; GlusterFS; MooseFS; random workload; sequence workload</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Finding a data storage method that can support the required data processing speed across the network is one of the key problems in big data. As computing speed and cluster size increase, I/O and network processes related to intensive data usage cannot keep up with the growth rate and data processing speed, so data processing applications experience latency issues from long I/O. Distributed data storage systems can use web-scale technology to assist centralized data storage in a computing environment to meet the needs of data science. By analyzing several distributed data storage models, namely NFS, GlusterFS, and MooseFS, a distributed data storage method is proposed. The parameters used in this study are transfer rate, IOPS, and CPU resource usage. Through testing of sequential and random reading and writing of data, it is found that GlusterFS has the fastest performance for sequential and random data reading when using 64k storage blocks. MooseFS obtains its best performance in random data read operations using 64k storage blocks. Using 32k storage blocks, NFS achieves the best results in random writes. The performance of a distributed data storage system may thus be affected by the size of the storage block: using a larger storage block can achieve faster performance in data transmission and in performing operations on data.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_54-Random_and_Sequence_Workload_for_Web_Scale_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Feature Extraction for Predicting Multiple Sclerosis Patient Disability using Brain MRI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130353</link>
        <id>10.14569/IJACSA.2022.0130353</id>
        <doi>10.14569/IJACSA.2022.0130353</doi>
        <lastModDate>2022-03-30T07:42:08.6400000+00:00</lastModDate>
        
        <creator>Ali M. Muslim</creator>
        
        <creator>Syamsiah Mashohor</creator>
        
        <creator>Rozi Mahmud</creator>
        
        <creator>Gheyath Al Gawwam</creator>
        
        <creator>Marsyita binti Hanafi</creator>
        
        <subject>Multiple sclerosis; expanded disability status scale prediction; multiple sclerosis disability; magnetic resonance imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Predicting a Multiple Sclerosis (MS) patient&#39;s disability level is an important issue, as this could help in better diagnosis and in monitoring the progression of the disease. The Expanded Disability Status Scale (EDSS) is a common protocol used to manually score the disability level. However, it is time-consuming, requires expert knowledge, and is exposed to inter- and intra-subject variation. Many previous studies focused on predicting patients&#39; disability from multiple MRI scans and manual or semi-automated feature extraction; furthermore, all of them require patient follow-up. This study aims to predict MS patients&#39; disability using fully automated feature extraction, a single MRI scan, a single MRI protocol, and no patient follow-up. Data from 65 MS patients, collected from multiple centers in Iraq and Saudi Arabia, were used in this study. Automated segmentation of brain abnormalities, brain lobes, and the periventricular area was used to extract a large set of scan features. A linear regression algorithm was used to predict different types of MS patient disability. Initially, performance was weak, until MS patients were divided into four groups according to the MRI Tesla model and whether or not the patient has a lesion in the spinal cord. The best performance was an average RMSE of 0.6 when predicting the EDSS with a step of 2. These results demonstrate the possibility of predicting disability with fully automated feature extraction, a single MRI scan, a single MRI protocol, and no patient follow-up.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_53-Automated_Feature_Extraction_for_Predicting_Multiple_Sclerosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Reversible Data Hiding Framework for Video Steganography Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130352</link>
        <id>10.14569/IJACSA.2022.0130352</id>
        <doi>10.14569/IJACSA.2022.0130352</doi>
        <lastModDate>2022-03-30T07:42:08.6230000+00:00</lastModDate>
        
        <creator>Manjunath Kamath K</creator>
        
        <creator>R. Sanjeev Kunte</creator>
        
        <subject>Reversible data hiding; data integrity; embedding capacity; video steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Reversible Data Hiding (RDH) is a special form of data hiding for data integrity and confidentiality protection, in which the secret image bits (SI) are embedded into Cover Media (CM) by altering its intrinsic pixel attributes; in RDH, the CM along with the secret message is recovered at the end of the computing phase. However, despite their potential for enhancing embedding performance, traditional RDH mechanisms cannot fully comply with the security requirements of various network standards against different sets of attacks in bit-stream transmission scenarios. Therefore, the proposed study contributes a computational framework for a robust RDH scheme for Video Steganography (VS), which is modeled and simulated under various attack effects; observation outcomes are produced before and after attacks to justify the improvement in Embedding Capacity (EC) and Peak Signal-to-Noise Ratio (PSNR) for both the CM and the secret message, unlike traditional difference expansion (DE) based methods. The outcome of the study shows that the formulated RDH method not only achieves better reversibility at a lower computing cost but also ensures effective PSNR and imperceptibility for both the CM and the secret image.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_52-A_Robust_Reversible_Data_Hiding_Framework_for_Video_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bayesian Hyperparameter Optimization and Ensemble Learning for Machine Learning Models on Software Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130351</link>
        <id>10.14569/IJACSA.2022.0130351</id>
        <doi>10.14569/IJACSA.2022.0130351</doi>
        <lastModDate>2022-03-30T07:42:08.5930000+00:00</lastModDate>
        
        <creator>Robert Marco</creator>
        
        <creator>Sakinah Sharifah Syed Ahmad</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <subject>Bayesian optimization; adaboost ensemble learning; random forest; software effort estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In recent decades, various software effort estimation (SEE) algorithms have been suggested. Unfortunately, achieving high-precision accuracy is still a major challenge in the context of SEE. The use of traditional techniques and parametric approaches is largely inaccurate because they produce biased and subjective estimates, while none of the machine learning methods has performed consistently well. This study applies the AdaBoost ensemble learning method with random forest (RF), and Bayesian optimization is applied to determine the hyperparameters of this model. The PROMISE repository and the ISBSG dataset were used to build the SEE model. The developed model was comprehensively compared with four machine learning methods (classification and regression tree, k-nearest neighbor, multilayer perceptron, and support vector regression) under 3-fold cross-validation (CV). The RF method based on AdaBoost ensemble learning and Bayesian optimization outperforms these approaches. In addition, the AdaBoost-based model assigns feature importance ratings, which makes it a promising tool for software effort prediction.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_51-Bayesian_Hyperparameter_Optimization_and_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Credit Card Fraud Detection Model using Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130350</link>
        <id>10.14569/IJACSA.2022.0130350</id>
        <doi>10.14569/IJACSA.2022.0130350</doi>
        <lastModDate>2022-03-30T07:42:08.5770000+00:00</lastModDate>
        
        <creator>Shahnawaz Khan</creator>
        
        <creator>Abdullah Alourani</creator>
        
        <creator>Bharavi Mishra</creator>
        
        <creator>Ashraf Ali</creator>
        
        <creator>Mustafa Kamal</creator>
        
        <subject>Credit card fraud detection; neural network; support vector machine; logistic regression; performance measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The growing application and usage of e-commerce have given an exponential rise to the number of online transactions. Though there are several methods for completing online transactions, credit cards are most commonly used. The increased number of transactions has given fraudsters the opportunity to mislead customers and make them execute fraudulent transactions. Therefore, there is a need for a method that can automatically detect fraudulent transactions. This research study aims to develop a credit-card fraud detection model that can effectively classify an online transaction as fraudulent or genuine. Three supervised machine learning approaches have been applied to develop the credit-card fraud classifier: logistic regression, artificial neural networks, and support vector machines. The classification accuracy achieved by all the classifiers is almost similar. This research has used the confusion matrix and the area under the curve to demonstrate the scores of the different performance measures and to evaluate the overall performance of the classifiers. Several performance measures, such as accuracy, precision, recall, F1-measure, Matthews correlation coefficient, and the receiver operating characteristic curve, have been computed and analysed. The analysis demonstrates that the support vector machine-based classifier outperforms the other classifiers.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_50-Developing_a_Credit_Card_Fraud_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Affinity Degree as Ranking Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130349</link>
        <id>10.14569/IJACSA.2022.0130349</id>
        <doi>10.14569/IJACSA.2022.0130349</doi>
        <lastModDate>2022-03-30T07:42:08.5470000+00:00</lastModDate>
        
        <creator>Rosyazwani Mohd Rosdan</creator>
        
        <creator>Wan Suryani Wan Awang</creator>
        
        <creator>Samhani Ismail</creator>
        
        <subject>Affinity; affinity degree; rank; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In machine learning, ranking is a fundamental problem that attempts to order a list of items based on their relevance to a certain task. Ranking can be helpful, especially for future decision making. Ranking frameworks in machine learning have been classified into three primary approaches: pointwise, pairwise, and listwise. However, learning to rank in all three approaches still lacks continuous learning ability, particularly when it comes to determining the degree of relevance of ranking orders. In this paper, an affinity degree technique for ranking is proposed as another potential machine learning framework. The definition and attributes of the affinity degree technique are discussed, along with the results of an experiment adopting the affinity degree approach as a ranking mechanism. The experiment&#39;s performance is measured using assessment metrics such as Mean Average Precision (MAP).</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_49-Affinity_Degree_as_Ranking_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of an Automated Tool for the Application of Sentiment Analysis Techniques in the Context of Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130348</link>
        <id>10.14569/IJACSA.2022.0130348</id>
        <doi>10.14569/IJACSA.2022.0130348</doi>
        <lastModDate>2022-03-30T07:42:08.5000000+00:00</lastModDate>
        
        <creator>Gabriel Elias Chanchi Golondrino</creator>
        
        <creator>Manuel Alejandro Ospina Alarcon</creator>
        
        <creator>Wilmar Yesid Campo Munoz</creator>
        
        <subject>E-commerce; marketing; opinion mining; polarity analysis; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Currently, the opinions and comments made by customers on e-commerce portals regarding different products and services have great potential for identifying customer perceptions and preferences. Consequently, there is a growing need for companies to have automated tools based on sentiment analysis through polarity analysis, which allow the examination of customer opinions to obtain quantitative indicators from qualitative information and enable decision-making in the context of marketing. In this article, we propose the construction of an automated tool for conducting opinion mining studies, which the marketing units of companies can use for decision making without needing to deal with the underlying algorithmic process. The functionality of the proposed tool was verified through a case study, in which the opinions obtained from an e-commerce website concerning one of the best-selling technological products were investigated.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_48-Proposal_of_an_Automated_Tool_for_the_Application_of_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Applications in Solid Waste Management: A Deep Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130347</link>
        <id>10.14569/IJACSA.2022.0130347</id>
        <doi>10.14569/IJACSA.2022.0130347</doi>
        <lastModDate>2022-03-30T07:42:08.4830000+00:00</lastModDate>
        
        <creator>Sana Shahab</creator>
        
        <creator>Mohd Anjum</creator>
        
        <creator>M. Sarosh Umar</creator>
        
        <subject>Solid waste management; systematic literature review; deep learning; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Solid waste management (SWM) has recently received more attention, especially in developing countries, for smart and sustainable development. An SWM system encompasses various interconnected processes that involve numerous complex operations. Recently, deep learning (DL) has gained momentum in providing alternative computational techniques for solving various SWM problems. Researchers have focused on this domain; therefore, significant research has been published, especially in the last decade. The literature shows that no study has evaluated the potential of DL to solve the various SWM problems. This study performs a systematic literature review (SLR) compiling 40 studies published between 2019 and 2021 in reputed journals and conferences. The selected research studies implemented various DL models and analyzed the application of DL in different SWM areas, namely waste identification and segregation and the prediction of waste generation. The study defines a systematic review protocol that comprises various criteria and a quality assessment process for selecting the research studies for review. The review provides a comprehensive analysis of the different DL models and techniques implemented in SWM. It also highlights the application domains and compares the reported performance of the selected studies. Based on the reviewed work, it can be concluded that DL exhibits plausible performance in detecting and classifying different types of waste. The study also examines deep convolutional neural networks and their computational requirements, and identifies research gaps with future recommendations.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_47-Deep_Learning_Applications_in_Solid_Waste_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technological Affordances and Teaching in EFL Mixed-ability Classes during the COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130346</link>
        <id>10.14569/IJACSA.2022.0130346</id>
        <doi>10.14569/IJACSA.2022.0130346</doi>
        <lastModDate>2022-03-30T07:42:08.4670000+00:00</lastModDate>
        
        <creator>Waheed M. A. Altohami</creator>
        
        <creator>Mohamed Elarabawy Hashem</creator>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Mohamed Saad Mahmoud Hussein</creator>
        
        <subject>Technological affordances; blackboard; writing teaching; EFL mixed-ability classes; differentiation instruction; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>With the spread of COVID-19 in Saudi Arabia, the educational authorities issued firm directions to convert to virtual classes using the available Learning Management System (LMS). However, during the academic year 2020-2021, the researchers observed that EFL writing instructors at Prince Sattam bin Abdulaziz University (PSAU), Saudi Arabia, faced diverse challenges due to having online mixed-ability classes, i.e. classes where students have varying levels of readiness, motivation, and academic caliber. Though many previous studies explored the influence of the COVID-19 pandemic on teaching and learning practices, very few addressed the way technological affordances pose challenges for instructors teaching mixed-ability classes. Therefore, the present study, using mixed quantitative and qualitative research methods, sought to explore the challenges that evolved from the technological affordances of the LMS in order to spot persistent problems and to offer relevant solutions for upgrading writing teaching and learning practices. The basic research design relied on an online questionnaire followed by semi-structured interviews. Findings showed that differentiated instruction proved to be the most successful strategy for teaching writing in mixed-ability online classes, as it allowed the adaptation of materials, teaching and learning practices, and assessment tools to motivate low-achievers. In addition, the collaborative tools offered by Blackboard, such as the Whiteboard, Discussion Board, Blogs, and Breakout Groups, helped to meet the preferences of visual, auditory, and kinesthetic learners. Finally, further studies are recommended to explore the affordances of educational technologies regularly to identify potential benefits and limitations and offer the best teaching and learning practices.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_46-Technological_Affordances_and_Teaching_in_EFL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Autism Spectrum Disorder and Typically Developed Children for Eye Gaze Image Dataset using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130345</link>
        <id>10.14569/IJACSA.2022.0130345</id>
        <doi>10.14569/IJACSA.2022.0130345</doi>
        <lastModDate>2022-03-30T07:42:08.4530000+00:00</lastModDate>
        
        <creator>Praveena K N</creator>
        
        <creator>Mahalakshmi R</creator>
        
        <subject>Autism spectrum disorder; classification; fixation maps; eye expression; visual focus; gaze pattern; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Autism is a neurobehavioral problem that hinders interaction with others. Autism Spectrum Disorder (ASD) is a psychological disorder that hampers the acquisition of linguistic, communication, cognitive, and social skills and produces stereotypical motor behaviors. Recent research revealing that ASD can be diagnosed using gaze structures has opened up a new field in which visual focus modelling could be highly useful. Diagnosis of ASD is a difficult task due to the wide range of symptoms and severity of the disorder. Deep neural networks have been widely employed and have been shown to perform well in a variety of visual data processing applications. In this paper, typically developed (TD) and ASD children are classified using Convolutional Neural Networks (CNN) applied to the fixation maps of the corresponding observer&#39;s gaze at a given image. The objective of this paper is to observe whether eye-tracking fixation-map data can distinguish children with ASD from those with typical development (TD). We further investigated whether features of visual fixation attain better classification performance. The proposed CNN model achieves 75.23% validation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_45-Classification_of_Autism_Spectrum_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Pipe Inspection Robot using Soft Actuators, Microcontroller and LabVIEW</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130343</link>
        <id>10.14569/IJACSA.2022.0130343</id>
        <doi>10.14569/IJACSA.2022.0130343</doi>
        <lastModDate>2022-03-30T07:42:08.4200000+00:00</lastModDate>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Mohammad Imran</creator>
        
        <creator>Sairul Izwan</creator>
        
        <creator>Mohd Ismail</creator>
        
        <creator>Nor Samsiah</creator>
        
        <creator>Tetsuya Akagi</creator>
        
        <creator>Shujiro Dohta</creator>
        
        <creator>Weihang Tian</creator>
        
        <creator>So Shimooka</creator>
        
        <creator>Ahmad Athif</creator>
        
        <subject>Soft pneumatic actuator; pipe inspection robot; flexible actuator; microcontroller; sliding and holding mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Pipeline transportation is particularly significant nowadays because it can transfer liquids or gases over long distances, usually to a market area for use, through a system of pipes. The pipeline&#39;s numerous fittings, such as elbows and tees, as well as the various sizes and types of materials utilized, make routine inspection and maintenance challenging for the technician. Therefore, compact and portable pipe inspection robots with pneumatic actuators are required for use in industry, especially in hazardous areas. Flexible pneumatic actuators powered by clean and safe pneumatic energy have high mobility for moving through complex pipelines. High safety features, such as the absence of oil or electrical leakage that would be dangerous in an explosive environment, are a major reason they are widely used nowadays. As a result, the goal of this study is to propose and present the development of pipe inspection robots that employ soft actuators and are monitored via LabVIEW, for use in a variety of pipe sizes and types. This research focuses on the movement of robots in the pipeline by proposing several important mechanisms, such as a sliding mechanism, a holding mechanism, and a bending unit, to move easily and effectively through the pipeline. Experiments show that with an appropriate pneumatic pressure source of 4 bar, a flexible robot using the soft pneumatic actuator can bend and move through a 2-inch diameter pipe smoothly and efficiently. It has been found that the proposed mechanism can readily traverse pipe corners while bending in any required direction.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_43-Development_of_Pipe_Inspection_Robot_using_Soft_Actuators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Detection System for Heavy-Construction Vehicles and Urban Traffic Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130344</link>
        <id>10.14569/IJACSA.2022.0130344</id>
        <doi>10.14569/IJACSA.2022.0130344</doi>
        <lastModDate>2022-03-30T07:42:08.4200000+00:00</lastModDate>
        
        <creator>Sreelatha R</creator>
        
        <creator>Roopa Lakshmi R</creator>
        
        <subject>Intelligent transportation systems; heavy-construction vehicles detection; traffic monitoring and SSD-based CNN component; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In this era of intelligent transportation systems, traffic congestion analysis in terms of vehicle detection followed by tracking vehicle speed is gaining tremendous attention due to its complex intrinsic ingredients. Specifically, in the existing literature, vehicle detection on highway roads is studied extensively while, to the best of our knowledge, the identification and tracking of heavy-construction vehicles such as rollers has not yet been fully explored. More specifically, heavy-construction vehicles such as road rollers, trenchers, and bulldozers significantly aggravate congestion on urban roads during peak hours because of their extremely slow movement accompanied by their occupation of the majority of the road. For these reasons, promising frameworks are very important that can identify heavy-construction vehicles moving on urban traffic-prone roads so that appropriate congestion evaluation strategies can be adopted to monitor traffic situations. To address these issues, this article proposes a new deep-learning-based detection framework, which employs a Single Shot Detector (SSD)-based object detection system consisting of CNNs. The experimental evaluations, extensively carried out on three different datasets including the benchmark MIO-TCD localization dataset, clearly demonstrate the enhanced performance of the proposed detection framework in terms of confidence scores and time efficiency when compared to existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_44-Deep_Learning_based_Detection_System_for_Heavy_Construction_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methodology for Infrastructure Site Monitoring using Unmanned Aerial Vehicles (UAVs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130342</link>
        <id>10.14569/IJACSA.2022.0130342</id>
        <doi>10.14569/IJACSA.2022.0130342</doi>
        <lastModDate>2022-03-30T07:42:08.4030000+00:00</lastModDate>
        
        <creator>Cristian Benjamin Garcia Casierra</creator>
        
        <creator>Carlos Gustavo Calle Sanchez</creator>
        
        <creator>Javier Ferney Castillo Garcia</creator>
        
        <creator>Felipe Munoz La Rivera</creator>
        
        <subject>Topography; unmanned aerial vehicle; infrastructure work monitoring; digital image processing component; construction site</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Monitoring an infrastructure project makes it possible to know its current state and the efficiency of the workers. Follow-up is a task carried out by the auditor, who verifies that the work corresponds to the design drawings, stays within the budget, and complies with the established schedule. This work traditionally uses classical topography elements, which demand time and money and raise safety concerns for non-construction personnel. To avoid this, this project implements a methodology capable of carrying out the task of monitoring civil works using an unmanned aerial vehicle, or drone: a small remotely controlled flying device that in recent years has become an extremely useful tool for activities that human beings cannot perform or that threaten their integrity. For this work, a Phantom 3 Standard quadcopter drone is used to take photographs; these are loaded into the software Agisoft Metashape Professional, which, through photogrammetry techniques, allows digital processing of the images, generating a 3D view, a point cloud, a digital surface model, and distance measurements. With this information, it is possible to match progress against the work schedule and detect delays or advances in a precise way.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_42-Methodology_for_Infrastructure_Site_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Hate Speech on Twitter Network Using Ensemble Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130341</link>
        <id>10.14569/IJACSA.2022.0130341</id>
        <doi>10.14569/IJACSA.2022.0130341</doi>
        <lastModDate>2022-03-30T07:42:08.3900000+00:00</lastModDate>
        
        <creator>Raymond T Mutanga</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Oludayo O Olugbara</creator>
        
        <subject>Classical learning; deep learning; ensemble learning; hate speech; social media; twitter network; voting ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Twitter is habitually exploited nowadays to propagate torrents of hate speech and misogynistic and misandristic tweets written in slang. Machine learning methods have been explored in manifold studies to address the inherent challenges of hate speech detection in online spaces. Nevertheless, language has subtleties that can make it difficult for machines to adequately comprehend and disambiguate the semantics of words that are heavily dependent on the usage context. Deep learning methods have demonstrated promising results for automatic hate speech detection, but they require a significant volume of training data. Classical machine learning methods suffer from the innate problem of high variance, which in turn affects the performance of hate speech detection systems. This study presents a voting ensemble machine learning method that harnesses the strengths of logistic regression, decision trees, and support vector machines for the automatic detection of hate speech in tweets. The method was evaluated against ten widely used machine learning methods on two standard tweet data sets using well-known performance evaluation metrics, achieving an improved average F1-score of 94.2%.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_41-Detecting_Hate_Speech_on_Twitter_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PLA Mechanical Performance Before and After 3D Printing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130340</link>
        <id>10.14569/IJACSA.2022.0130340</id>
        <doi>10.14569/IJACSA.2022.0130340</doi>
        <lastModDate>2022-03-30T07:42:08.3730000+00:00</lastModDate>
        
        <creator>Houcine SALEM</creator>
        
        <creator>Hamid ABOUCHADI</creator>
        
        <creator>Khalid ELBIKRI</creator>
        
        <subject>Additive manufacturing; PLA; test sample; traction; 3D printing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>PLA, or polylactic acid, is a thermoplastic made from renewable sources. Thanks to its environmental value compared to petroleum-sourced materials, it is widely used in the 3D printing industry. Due to the advantages of additive manufacturing in terms of cost and time consumption, many industries are using these technologies to re-engineer parts or assemblies to optimize their products. However, the properties given by the supplier do not conform to those of the final printed product. This issue can be dangerous, especially when these products are used in biomedical fields, in toys for children, or in other sensitive areas. The aim of this paper is to outline the difference between the final properties and the primary ones. The samples are tested in traction following the ASTM D638 standard. The specificities of the standard in terms of specimen dimensions and test methodologies have been respected. The results demonstrate that there is a difference between the performance of the material before and after 3D printing.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_40-PLA_Mechanical_Performance_Before_and_After_3D_Printing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study of Different Types of Deduplication Technique in Various Dimensions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130339</link>
        <id>10.14569/IJACSA.2022.0130339</id>
        <doi>10.14569/IJACSA.2022.0130339</doi>
        <lastModDate>2022-03-30T07:42:08.3430000+00:00</lastModDate>
        
        <creator>G. Sujatha</creator>
        
        <creator>Jeberson Retna Raj</creator>
        
        <subject>Digital data; deduplication; storage optimization; cloud storage service; duplicate copies; bandwidth utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In the current digital era, the growth of digital data is exceptionally high. These digital data come from various sources, and the quantity produced has risen exponentially over time, generated by organizations and even by individuals, ultimately creating the need for huge storage space. Cloud storage provides the storage space for this requirement. Since the storage space is utilized by many different users, duplicate data cannot be avoided, so it is necessary to use a storage optimization technique to handle such duplicate content. Deduplication is a technique used to avoid storing redundant data. Among the various kinds of digital data, the possibility of having duplicate copies is high. In this research work, we review the benefits of deduplication in optimizing the usage of storage space and study the various types of deduplication techniques across different dimensions. This helps in selecting the appropriate data deduplication technique to increase effective storage utilization and reduce the wastage of storage space caused by duplicate data.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_39-A_Comprehensive_Study_of_Different_Types_of_Deduplication_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supervised Learning Techniques for Intrusion Detection System based on Multi-layer Classification Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130338</link>
        <id>10.14569/IJACSA.2022.0130338</id>
        <doi>10.14569/IJACSA.2022.0130338</doi>
        <lastModDate>2022-03-30T07:42:08.3270000+00:00</lastModDate>
        
        <creator>Mansoor Farooq</creator>
        
        <subject>IDS; SNORT; fuzzy logic; neural networks; decision tree; Na&#239;ve Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The goal of this study is to find a solution to two problems: first, enabling the signature-based intrusion detection system SNORT to identify a new attack signature without human intervention; and second, the inability of signature-based IDS to detect multi-stage attacks. The interesting aspect of this study is the approach developed to address the aforementioned issues. We introduce a multi-layer classification strategy employing two layers: the first is based on a decision tree, and the second includes the machine learning techniques of fuzzy logic and neural networks. If the first layer fails to identify fresh attacks, the second layer takes over and detects new signature attacks, updating the SNORT signatures automatically.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_38-Supervised_Learning_Techniques_for_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cubic B-Splines Approximation Method Combined with DWT and IBP for Single Image Super-resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130337</link>
        <id>10.14569/IJACSA.2022.0130337</id>
        <doi>10.14569/IJACSA.2022.0130337</doi>
        <lastModDate>2022-03-30T07:42:08.2970000+00:00</lastModDate>
        
        <creator>Victor Kipkoech Mutai</creator>
        
        <creator>Elijah Mwangi</creator>
        
        <creator>Ciira wa Maina</creator>
        
        <subject>Single-image super-resolution; pre-filtering; cubic B-Splines approximation; discrete wavelet transform (DWT); iterative back-projection (IBP); B-Splines coefficients</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The process of converting low-resolution images into high-resolution images by removing noise and estimating high-frequency information is known as image super-resolution. Aliased and decimated versions of the actual scenes are considered low-resolution images. The edges of high-resolution images produced by super-resolution from a single image are typically blurred. This paper proposes an approach to generate a high-resolution image with sharp edges by combining a cubic B-Splines approximation, a discrete wavelet transform (DWT), and an iterative back-projection (IBP) edge-preserving weighted guided filter. A two-stage cubic B-Splines approximation, which includes pre-filtering and interpolation, is employed to up-sample the low-resolution image. The pre-filtering approach is used to transform pixel values to B-Splines coefficients, which minimizes blurring in the up-sampled image. The lost high-frequency information is then estimated using a one-level discrete wavelet transform based on the db1 wavelet. Finally, using a weighted guided filter, the resulting image is subjected to back-projection to obtain a high-resolution image. The proposed single-image super-resolution approach is applied to RGB colour images. The proposed method outperforms the other approaches selected for comparison, both objectively in terms of PSNR and SSIM and in visual quality.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_37-A_Cubic_B_Splines_Approximation_Method_Combined_with_DWT_and_IBP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm based on Convolutional Neural Networks to Manage Online Exams via Learning Management System Without using a Webcam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130336</link>
        <id>10.14569/IJACSA.2022.0130336</id>
        <doi>10.14569/IJACSA.2022.0130336</doi>
        <lastModDate>2022-03-30T07:42:08.2800000+00:00</lastModDate>
        
        <creator>Lassaad K. SMIRANI</creator>
        
        <creator>Jihane A. BOULAHIA</creator>
        
        <subject>Artificial intelligence; convolutional neural network; learning assessment; online cheating; online examination; higher education; emergency remote teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Cheating attempts in educational assessments have long been observed. Because students today are characterized by their great digital intelligence, this negative conduct has intensified throughout the emergency remote teaching period. This article first discusses the most innovative methods for combating cheating during the online evaluation procedure. Then, for this aim, a Convolutional Neural Networks for Cheating Detection System (CNNCDS) is presented. The proposed solution has the advantage of not requiring the use of a camera; it recognizes and identifies IP addresses, records and analyzes exam sessions, and prevents internet browsing during exams. The K-Nearest Neighbor (K-NN) algorithm was adopted as a classifier, while Principal Component Analysis (PCA) was used for exploratory data analysis and for building predictive models. The CNNCDS was trained, tested, and validated using data extracted from a face-to-face exam session. Its main output is a binary classification of students in real time (normal or abnormal). The CNNCDS surpasses the baseline classifiers Multi-class Logistic Regression (MLR), Support Vector Machine (SVM), Random Forest (RF), and Gaussian Naive Bayes (GNB) in terms of mean accuracy (98.5%). Furthermore, it accurately detected screen pictures in an acceptable processing time, with an average sensitivity of 99.8 percent and an average precision of 1.8 percent. This strategy has been shown to be successful in minimizing cheating in several colleges. This solution is useful for higher education institutions that operate entirely online and do not require the use of a webcam.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_36-An_Algorithm_Based_on_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Repudiation-based Network Security System using Multiparty Computation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130335</link>
        <id>10.14569/IJACSA.2022.0130335</id>
        <doi>10.14569/IJACSA.2022.0130335</doi>
        <lastModDate>2022-03-30T07:42:08.2630000+00:00</lastModDate>
        
        <creator>Divya K. S</creator>
        
        <creator>Roopashree H. R</creator>
        
        <creator>Yogeesh A C</creator>
        
        <subject>Future network; multiparty computation; nonrepudiation; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Security has always been a prominent concern over the network, and various essential requirements must be met by an efficient security system. Non-repudiation is a requirement concerning the non-deniability of services, acting as a bridge between the seamless relaying of services/data and efficient security implementation. Various studies have been carried out towards strengthening non-repudiation systems; however, certain pitfalls render them inapplicable to dynamic cases of vulnerability. Conventional two-party non-repudiation schemes have been widely explored in the existing literature, but this paper advocates the adoption of multi-party computation, which offers better feasibility for strengthening a distributed security system. The current work presents a survey of the existing approaches to non-repudiation to investigate their effectiveness in a multi-party system. The prime aim of the proposed work is to analyze the current research progress and identify the research gap as the prominent contribution of the proposed study. The manuscript begins by highlighting the issues concerning multi-party strategies and cryptographic approaches, and the security requirements and standardization are briefly discussed. It then describes the essentials of non-repudiation and examines state-of-the-art mechanisms. Finally, the study summarizes and discusses research gaps identified through the review analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_35-Non_Repudiation_based_Network_Security_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach of Hyperspectral Imaging Classification using Hybrid ConvNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130334</link>
        <id>10.14569/IJACSA.2022.0130334</id>
        <doi>10.14569/IJACSA.2022.0130334</doi>
        <lastModDate>2022-03-30T07:42:08.2330000+00:00</lastModDate>
        
        <creator>Soumyashree M Panchal</creator>
        
        <creator>Shivaputra</creator>
        
        <subject>Hyperspectral image; convolution neural network; classification; spatial feature; spectral feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In recent years, remote sensing applications have been booming, and with this, hyperspectral imaging (HSI) has been used in many real-life applications. However, the classification of HSI is a significant problem due to the complex features of the captured hyperspectral scene. Moreover, HSI data is often inherently nonlinear and very high-dimensional. Recent years have seen a rise in deep learning applications for addressing nonlinear problems. However, deep learning tends to overfit when sparse or limited training data is available. This paper addresses the trade-off between classification performance and limited training samples for classifying hyperspectral image data in a single training process. Thus, the study presents a hybrid multilayer learning system based on a joint approach of 2D and 3D convolutional kernels. The main reason is to utilize the spectral-spatial and spatial correlations in the learning process to achieve improved generalization of features in the training process for better HSI classification. The study outcome exhibits higher precision, recall rate, and F1-score performance. The overall accuracy is 99.9%, with a better convergence rate. The results prove that the proposed model is effective for HSI classification even with fewer training data samples.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_34-A_Novel_Approach_of_Hyperspectral_Imaging_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Speed Control for Semi-Autonomous Electric On-Road Cargo Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130333</link>
        <id>10.14569/IJACSA.2022.0130333</id>
        <doi>10.14569/IJACSA.2022.0130333</doi>
        <lastModDate>2022-03-30T07:42:08.2170000+00:00</lastModDate>
        
        <creator>P. L. Arunkumar</creator>
        
        <creator>M. Ramaswamy</creator>
        
        <creator>T. S. Murugesh</creator>
        
        <subject>Electric vehicle; IoT; speed control; battery SoC; battery SoH; micro-controller; embedded system; GSM; proximity sensor; payload; real time navigation; GPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The paper develops an investigative GSM-enabled IoT-based speed control scheme suitable for electric on-road cargo vehicles. The design involves bounding the parameters that include the vehicle speed, motor speed, truck payload, battery SoC (State of Charge), battery SoH (State of Health), real-time navigation points using GPS, tire pressure, motor temperature and current consumption, driver fatigue detection, and vehicle proximity detection, which enter the system through GSM-enabled wireless sensors and IoT-based maps to arrive at the recommended speed. It engages a state-of-the-art microcontroller-based embedded system to govern the operation of the three-phase induction motor in accordance with the changes that the vehicle either experiences or must negotiate. It incorporates a close monitoring methodology for evolving a sequence of steps that enables the system to remain in operation over scheduled time frames. The results obtained from a simulation process carried out using embedded-C firmware code on an ARM Core STM32 microcontroller exemplify the merits and illustrate the performance of the chosen vehicle in terms of its ability to be used in real-world systems.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_33-IoT_based_Speed_Control_for_Semi_Autonomous_Electric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Chest X-ray Opacity Classification using Minimal Deep Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130332</link>
        <id>10.14569/IJACSA.2022.0130332</id>
        <doi>10.14569/IJACSA.2022.0130332</doi>
        <lastModDate>2022-03-30T07:42:08.2030000+00:00</lastModDate>
        
        <creator>Mohd Zulfaezal Che Azemin</creator>
        
        <creator>Mohd Izzuddin Mohd Tamrin</creator>
        
        <creator>Mohd Adli Md Ali</creator>
        
        <creator>Iqbal Jamaludin</creator>
        
        <subject>Unsupervised classification; minimal deep features; convolution neural network; chest x-ray; airspace opacity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Data privacy has been a concern in medical imaging research. One important step to minimize the sharing of patients’ information is to limit the use of original images in the workflow. This research aimed to use minimal deep learning features in detecting anomalies in chest X-ray (CXR) images. A total of 3,504 CXRs were processed using a pre-trained deep learning convolutional neural network to output ten discriminatory features, which were then used in the k-means algorithm to find underlying similarities between the features for further clustering. Two clusters were set to distinguish between “Opacity” and “Normal” CXRs with accuracy, sensitivity, specificity, and positive predictive value of 80.9%, 86.6%, 71.5% and 83.1%, respectively. With only ten features required to build the unsupervised model, this would pave the way for future federated learning research where actual CXRs can remain distributed over multiple centers without sacrificing the anonymity of the patients.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_32-Unsupervised_Chest_X_ray_Opacity_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extended DBSCAN Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130331</link>
        <id>10.14569/IJACSA.2022.0130331</id>
        <doi>10.14569/IJACSA.2022.0130331</doi>
        <lastModDate>2022-03-30T07:42:08.1700000+00:00</lastModDate>
        
        <creator>Ahmed Fahim</creator>
        
        <subject>Cluster analysis; density-based clustering; varied density clusters; data mining; extended density-based spatial clustering of applications with noise (E-DBSCAN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Finding clusters of different densities is a challenging task. The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) method has trouble discovering clusters of various densities since it uses a fixed radius. This article proposes an extended DBSCAN for finding clusters of different densities. The proposed method uses a dynamic radius, assigns a regional density value to each object, and then counts the objects of similar density within the radius. If the neighborhood size ≥ MinPts, the object is a core object and a cluster can grow from it; otherwise, the object is temporarily labeled as noise. Two objects are similar in local density if their similarity ≥ a threshold. The proposed method can effectively discover clusters of any density from the data. The method requires three parameters: MinPts, Eps (distance to the kth neighbor), and a similarity threshold. The practical results show the superior ability of the suggested method to detect clusters of different densities even with no discernible separations between them.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_31-An_Extended_DBSCAN_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Decision Tree Classification Model to Predict Payment Type in NYC Yellow Taxi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130330</link>
        <id>10.14569/IJACSA.2022.0130330</id>
        <doi>10.14569/IJACSA.2022.0130330</doi>
        <lastModDate>2022-03-30T07:42:08.1530000+00:00</lastModDate>
        
        <creator>Hadeer Ismaeil</creator>
        
        <creator>Sherif Kholeif</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <subject>Big data analytics; apache spark; decision tree classification; taxi trips; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Taxi services are growing rapidly as reliable services, and the demand and competition between service providers are high. A billion trip records need to be analyzed to raise the spirit of competition, understand service users, and improve the business. Although decision tree classification is a common algorithm that generates rules that are easy to understand, there has been no implementation of classification on a taxi dataset. This research applies the decision tree classification model to a taxi dataset to classify instances correctly, build a decision tree, and calculate accuracy. The experiment combined the decision tree algorithm with the Spark framework to demonstrate good performance and high accuracy when predicting payment type. Applying the decision tree algorithm with different aspects to the NYC taxi dataset results in high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_30-Using_Decision_Tree_Classification_Model_to_Predict_Payment_Type.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Anti-Jamming Mechanism against Rule-based Jammer in Cognitive Radio Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130328</link>
        <id>10.14569/IJACSA.2022.0130328</id>
        <doi>10.14569/IJACSA.2022.0130328</doi>
        <lastModDate>2022-03-30T07:42:08.1230000+00:00</lastModDate>
        
        <creator>Sudha Y</creator>
        
        <creator>Sarasvathi V</creator>
        
        <subject>Anti-jamming; agent; cognitive radio network; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Cognitive Radio Network (CRN) has become a promising technology to overcome the problem of insufficient spectrum utilization. However, the CRN is susceptible to the well-known jamming attack, which reduces its spectrum utilization efficiency. Existing jamming identification schemes and their countermeasures typically require prior statistical information about the communication channel and the jamming pattern, which is quite an impractical assumption in a real context. The prime research problem is that the existing schemes are mainly associated with higher computational costs and communication overhead. Hence, the proposed manuscript presents a non-device-centric and efficient anti-jamming mechanism, in the form of higher spectrum utilization driven by reinforcement learning techniques, to address the above-stated problem. The proposed anti-jamming mechanism is modeled in two phases of implementation. First, the design of a customized environment is introduced as a single wideband cognitive-communication channel where a jammer signal sweeps transversely across the entire band of interest. Second, an intelligent agent is designed based on a model-free off-policy algorithm that operates over the same spectrum band. The agent uses its frequency-band knowledge discovery capability to learn frequency band selection and preference strategies to detect and avoid jamming signals, maximizing its successful transmission rate. The simulation results show that the proposed anti-jamming mechanism can effectively eliminate interference and is efficient in power usage and Signal-to-Noise Ratio (SNR) compared to other existing advanced algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_28-An_Intelligent_Anti_Jamming_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rainfall Forecasting using Support Vector Regression Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130329</link>
        <id>10.14569/IJACSA.2022.0130329</id>
        <doi>10.14569/IJACSA.2022.0130329</doi>
        <lastModDate>2022-03-30T07:42:08.1230000+00:00</lastModDate>
        
        <creator>Lemuel Clark Velasco</creator>
        
        <creator>Johanne Miguel Aca-ac</creator>
        
        <creator>Jeb Joseph Cajes</creator>
        
        <creator>Nove Joshua Lactuan</creator>
        
        <creator>Suwannit Chareen Chit</creator>
        
        <subject>Support vector regression machines; support vector machines; rainfall forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Heavy rainfall as a consequence of climate change has immensely impacted the ecology, the economy, and the lives of many. With the variety of available predictive tools, it is imperative that performance analysis of rainfall forecasting models be properly conducted as a measure for disaster preparedness and mitigation. A Support Vector Regression Machine (SVRM) was utilized in predicting the rainfall of a city in a tropical country using a 4-year and 17-month rainfall dataset captured from an automated rain gauge (ARG) in the Southern Philippines. The process involved identifying the cost and gamma parameters to determine the relationship between past and present values, selecting optimal cost and gamma parameters to improve prediction accuracy, and evaluating the forecasting model. The SVRM model that utilized the Radial Basis Function (RBF) kernel with the parameters c=100, g=1, e=0.1, p=0.001 and a lag variable based on 12-hour reports with lags up to 672 timesteps (i-672) demonstrated a Mean Square Error (MSE) of 3.461315. With forecasts close to the actual rainfall values, the results of this study showed that SVRM has the potential to be a viable rainfall forecasting model given proper data preparation, kernel function selection, model parameter value selection, and lag variable selection.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_29-Rainfall_Forecasting_using_Support_Vector_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Framework for Physical Internet Hubs Inbound Containers Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130327</link>
        <id>10.14569/IJACSA.2022.0130327</id>
        <doi>10.14569/IJACSA.2022.0130327</doi>
        <lastModDate>2022-03-30T07:42:08.0930000+00:00</lastModDate>
        
        <creator>El-Sayed Orabi Helmi</creator>
        
        <creator>Osama Emam</creator>
        
        <creator>Mohamed Abdel-Salam</creator>
        
        <subject>Physical internet hubs (π hubs); deep learning; convolutional neural network (CNN); recurrent neural network (RNN); time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This article presents a framework for forecasting inbound containers at physical internet hubs based on deep learning and time series analysis. Forecasting inbound containers is essential for planning, scheduling, and resource allocation. The proposed framework consists of three main phases. First, the historical inbound transactions are processed to find the training window size (lags) using the autocorrelation function (ACF) and partial autocorrelation function (PACF). Second, the framework uses a convolutional neural network (CNN) and a recurrent neural network (RNN) as training networks for the historical time series data in two techniques, applying both univariate and multivariate time series analysis to explore the maximum forecasting outcomes. Last, the framework measures accuracy and compares the forecasting output using the mean absolute error (MAE) metric for both approaches. The experiments showed that the RNN forecasts univariate inbound transactions with a total MAE of 5.0954, compared to 5.0236 for the CNN, while the CNN outperforms in multivariate inbound container forecasting with an MAE of 0.7978. All results have been compared with the autoregressive integrated moving average (ARIMA) and support vector regression (SVR).</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_27-Deep_Learning_Framework_for_Physical_Internet_Hubs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prediction Error Nonlinear Difference Expansion Reversible Watermarking for Integrity and Authenticity of DICOM Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130326</link>
        <id>10.14569/IJACSA.2022.0130326</id>
        <doi>10.14569/IJACSA.2022.0130326</doi>
        <lastModDate>2022-03-30T07:42:08.0770000+00:00</lastModDate>
        
        <creator>David Muigai</creator>
        
        <creator>Elijah Mwangi</creator>
        
        <creator>Edwell T. Mharakurwa</creator>
        
        <subject>Medical Image Watermarking (MIW); Digital Imaging and Communication in Medicine (DICOM); region of interest (ROI) and region of non-interest (RONI); prediction error (PE); nonlinear difference expansion (NDE); authenticity; integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>It is paramount to ensure the integrity and authenticity of medical images in telemedicine. This paper proposes an imperceptible and reversible Medical Image Watermarking (MIW) scheme based on image segmentation, image prediction, and nonlinear difference expansion for the integrity and authenticity of medical images and the detection of both intentional and unintentional manipulations. The metadata from the Digital Imaging and Communications in Medicine (DICOM) file constitutes the authentication watermark, while the integrity watermark is computed from the Secure Hash Algorithm (SHA)-256. The two watermarks are combined and compressed using the Lempel-Ziv 77 (LZ77) algorithm. The scheme takes advantage of the large smooth areas prevalent in medical images: it predicts the smooth regions with zero error or values close to zero, while non-smooth areas are predicted with large error values. The binary watermark is encoded and extracted in the zero-prediction-error regions using a nonlinear difference expansion, and is concentrated more on the region of non-interest (RONI) than the region of interest (ROI) to ensure high visual quality while maintaining high capacity. The paper also presents a separate low-degradation side information processing algorithm to handle overflow. Experimental results show that the scheme is reversible and has remarkable imperceptibility and capacity comparable to current works reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_26-A_Prediction_Error_Nonlinear_Difference_Expansion_Reversible_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Paradigm for DoS Attack Disclosure using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130325</link>
        <id>10.14569/IJACSA.2022.0130325</id>
        <doi>10.14569/IJACSA.2022.0130325</doi>
        <lastModDate>2022-03-30T07:42:08.0470000+00:00</lastModDate>
        
        <creator>Mosleh M. Abualhaj</creator>
        
        <creator>Ahmad Adel Abu-Shareha</creator>
        
        <creator>Mohammad O. Hiari</creator>
        
        <creator>Yousef Alrabanah</creator>
        
        <creator>Mahran Al-Zyoud</creator>
        
        <creator>Mohammad A. Alsharaiah</creator>
        
        <subject>DoS attack; machine learning; NSL-KDD; IDPS systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Cybersecurity is one of the main concerns of governments, businesses, and even individuals, because a vast number of attacks target their core assets. One of the most dangerous attacks is the Denial of Service (DoS) attack, whose primary goal is to make resources unavailable to legitimate users. In general, Intrusion Detection and Prevention Systems (IDPS) hinder DoS attacks using advanced techniques. This study develops a detection model for DoS attacks using machine learning techniques. Utilizing the NSL-KDD dataset, the suggested DoS attack detection model was investigated using the Naive Bayes, K-nearest neighbor, Decision Tree, and Support Vector Machine algorithms. The Accuracy, Recall, Precision, and Matthews Correlation Coefficient (MCC) metrics are used to compare these four techniques. In general, all techniques perform well with the proposed model; however, the Decision Tree technique outperformed all the other techniques on all four metrics, while the Naive Bayes technique showed the lowest performance.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_25-A_Paradigm_for_DoS_Attack_Disclosure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EMOGAME: Digital Games Therapy for Older Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130324</link>
        <id>10.14569/IJACSA.2022.0130324</id>
        <doi>10.14569/IJACSA.2022.0130324</doi>
        <lastModDate>2022-03-30T07:42:08.0300000+00:00</lastModDate>
        
        <creator>Nita Rosa Damayanti</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <subject>Digital games; therapy; older adults; mild cognitive impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>EmoGame is a cognitive and emotional game useful for helping older adults who experience Mild Cognitive Impairment (MCI). EmoGame was developed with a memory therapy approach. This therapy can support cognition and positive emotions by introducing objects through pictures, such as pictures of old objects and old music from earlier in life. This study aims to build a game application for older adults with MCI on the Android platform to support improved cognitive abilities and positive emotions. The app has two games: a memory puzzle and a memory exploration. This study uses a mixture of quantitative and qualitative methods through questionnaires, 3E diary entries (Expressing, Experiences and Emotions), and interviews. Respondents aged 50 years and above were selected through the Mini-Mental State Examination (MMSE). The findings show that memory therapy can help older adults increase positive emotions through digital games. Through the 3E diary entries, respondents’ feelings and experiences reflected positive emotions (happy, smiling, and liking). The PANAS (Positive and Negative Affect Schedule) questionnaire was administered pre- and post-test to measure positive emotions in EmoGame. Analysis of mean scores showed positive emotion at pre-interaction (M = 3.39, SD = 0.89) and post-interaction (M = 4.02, SD = 0.97), indicating a significant difference in the positive emotions of the older adults. The memory therapy applied in the EmoGame app is effective in helping to reduce memory decline and promote positive emotions for older adults with MCI.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_24-EMOGAME_Digital_Games_Therapy_for_Older_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development for a Vehicle Tracking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130323</link>
        <id>10.14569/IJACSA.2022.0130323</id>
        <doi>10.14569/IJACSA.2022.0130323</doi>
        <lastModDate>2022-03-30T07:42:08.0000000+00:00</lastModDate>
        
        <creator>Tim Abe P. Andutan</creator>
        
        <creator>Rosanna C. Ucat</creator>
        
        <subject>Arduino; global positioning system; GPS; global system for mobile communications; GSM; vehicle tracking; route deviation detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In recent years, the number of vehicle thefts has increased at an alarming rate around the world. However, existing vehicle tracking devices have certain limitations, including the inability to determine whether the vehicle is on the right route. To address this problem, this study focused on the design and development of a vehicle tracking prototype with route deviation detection, an emergency button, and a STATUS command to monitor the current location of the vehicle. An Arduino Mega 2560, a SIM900 Global System for Mobile communication (GSM) module, and a NEO-6M Global Positioning System (GPS) module were used to develop the prototype. The GPS module, push buttons, and SMS command served as inputs. The Arduino Mega 2560 was programmed with an algorithm to determine whether the device deviated from its route, detect whether the emergency button was pressed, and check whether a STATUS command was received. The system sends an SMS if the vehicle deviates from its path, the emergency button is pressed, or a STATUS command from the operator is received. Results showed that after several trials the prototype successfully performed its functional objectives. The prototype was limited to the use of a prototyping-grade GPS module, which used a built-in antenna and took time to connect to satellites. It is recommended to use an industrial-grade GPS module and connect an external antenna to improve signal strength.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_23-Design_and_Development_for_a_Vehicle_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Framework for using Big Data in Egyptian Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130322</link>
        <id>10.14569/IJACSA.2022.0130322</id>
        <doi>10.14569/IJACSA.2022.0130322</doi>
        <lastModDate>2022-03-30T07:42:07.9830000+00:00</lastModDate>
        
        <creator>Sayed Ahmed</creator>
        
        <creator>Amira S. Mahmoud</creator>
        
        <creator>Eslam Farg</creator>
        
        <creator>Amany M. Mohamed</creator>
        
        <creator>Marwa S. Moustafa</creator>
        
        <creator>Mohamed A. E. AbdelRahman</creator>
        
        <creator>Hisham M. AbdelSalam</creator>
        
        <creator>Sayed M. Arafat</creator>
        
        <subject>Agriculture; big data (BD); big data paradigm; BD processing framework; conceptual BD framework; geographical information systems (GIS); Hadoop; spark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Agriculture is a typical contributor to the Egyptian economy, which could benefit from the comprehensive capabilities of Big Data (BD). In this work, we review the role of BD in the agriculture sector by responding to two main questions: 1) which techniques, frameworks, and data types have been adopted; and 2) what gaps exist in the data sources, modeling, and analysis techniques. The contribution of this paper can therefore be outlined in four main aspects: 1) popular BD frameworks are briefed and thoroughly compared; 2) the potential data sources are described and characterized; 3) a conceptual framework for Egyptian agriculture practice based on BD analytics is introduced; and 4) challenges and extensive recommendations are provided, which could guide future development.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_22-A_Conceptual_Framework_for_using_Big_Data_in_Egyptian_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of RSA and NTRU Algorithms and Implementation in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130321</link>
        <id>10.14569/IJACSA.2022.0130321</id>
        <doi>10.14569/IJACSA.2022.0130321</doi>
        <lastModDate>2022-03-30T07:42:07.9530000+00:00</lastModDate>
        
        <creator>Bambang Harjito</creator>
        
        <creator>Henny Nurcahyaning Tyas</creator>
        
        <creator>Esti Suryani</creator>
        
        <creator>Dewi Wisnu Wardani</creator>
        
        <subject>Attacks; privacy; cryptography; RSA; NTRU; cloud storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>The emergence of cloud computing platforms makes it easier to connect and collaborate globally without setting up additional infrastructure such as servers and data centers. This, however, gives rise to threats against the security of digital information, which can be addressed by cryptography; RSA and NTRU are examples of cryptographic algorithms. The main concern of this research is how to perform a comparative analysis between the asymmetric cryptographic algorithms RSA (Rivest-Shamir-Adleman) and NTRU (Nth-Degree Truncated Polynomial Ring) and how to implement them in cloud storage. Comparing the performance of the RSA and NTRU algorithms at security levels of 80, 112, 128, 160, 192, and 256 bits on 5 to 1000 data items shows that the key generation and encryption processes of the NTRU algorithm run more efficiently than those of the RSA algorithm. Security was tested using Wiener&#39;s Attack on the RSA algorithm and LLL Lattice Basis Reduction on the NTRU algorithm. The NTRU algorithm proved more resilient, so it can be said that the NTRU algorithm is more recommendable for cloud storage security.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_21-Comparative_Analysis_of_RSA_and_NTRU_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic and Optimized Routing Approach (DORA) in Vehicular Ad hoc Networks (VANETs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130320</link>
        <id>10.14569/IJACSA.2022.0130320</id>
        <doi>10.14569/IJACSA.2022.0130320</doi>
        <lastModDate>2022-03-30T07:42:07.9200000+00:00</lastModDate>
        
        <creator>Satyanarayana Raju K</creator>
        
        <creator>Selvakumar K</creator>
        
        <subject>Data transmission rate (DTR); packet delivery ratio (PDR); packet drop ratio (PDRatio); throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Vehicular Ad hoc Networks (VANETs), a subfield of ad hoc networks, are a significant area of research mainly focused on improving road safety and reducing the total number of accidents. Because these networks have no central coordination, mobile nodes, and a dynamic topology, the routing process is a major challenge and is largely responsible for delivering messages with small overhead and delay. Routing is a tedious task under frequent changes in network topology, as data packets must be delivered within a limited period. Many existing routing protocols have been introduced for VANETs to overcome various issues, but none of them addresses all routing issues efficiently. Routing has a huge impact on other parameters such as data transmission rate (DTR), packet delivery ratio (PDR), packet drop ratio (PDRatio), average propagation delay (APD), and throughput. In this paper, the dynamic and optimized routing approach (DORA) is introduced for VANETs to overcome these issues and improve routing performance by measuring the DTR, PDR, and PDRatio. Comparisons among Ant Colony Optimization (ACO), improved distance-based ant colony optimization routing (IDBACOR), and DORA are presented.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_20-Dynamic_and_Optimized_Routing_Approach_DORA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Authorization Framework for Preserving Privacy of Big Medical Data via Blockchain in Cloud Server</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130319</link>
        <id>10.14569/IJACSA.2022.0130319</id>
        <doi>10.14569/IJACSA.2022.0130319</doi>
        <lastModDate>2022-03-30T07:42:07.8900000+00:00</lastModDate>
        
        <creator>Hemanth Kumar N P</creator>
        
        <creator>Prabhudeva S</creator>
        
        <subject>Medical data; cloud; blockchain; data sharing; access control; security; privacy preservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In recent years, cloud-based medical record sharing has greatly improved the process of researching diseases and diagnosing patients. However, since cloud systems are centralized, there are serious concerns about data security and privacy. Blockchain technology is viewed as a promising method of dealing with privacy issues and data security because of its exclusive features of distributed ledgers, secrecy, verifiability, and enhanced security. The literature review has shown significant work on integrating blockchain technology with cloud systems for managing and sharing healthcare data. The analysis shows that previous works depend primarily on a centralized data storage approach, which raises privacy concerns. Previous works also do not emphasize handling big medical data and lack reliable end-to-end security features. This paper presents an authorization framework for ensuring data security and privacy preservation using blockchain technology with IPFS as a decentralized file storage and sharing system. The proposed study devises a proof-of-replication algorithm using smart contracts to provide a better access control mechanism. The implementation of the proposed framework is based on symmetric encryption and the Ethereum blockchain platform. The study outcome illustrates the efficiency and availability of the proposed scheme compared to the typical cloud-based blockchain method.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_19-An_Authorization_Framework_for_Preserving_Privacy_of_Big_Medical_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HEMClust: An Improved Fraud Detection Model for Health Insurance using Heterogeneous Ensemble and K-prototype Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130318</link>
        <id>10.14569/IJACSA.2022.0130318</id>
        <doi>10.14569/IJACSA.2022.0130318</doi>
        <lastModDate>2022-03-30T07:42:07.8730000+00:00</lastModDate>
        
        <creator>Shamitha S Kotekani</creator>
        
        <creator>V Ilango</creator>
        
        <subject>Fraud detection; health insurance; ensemble learners; meta-level learning; clustering; classification algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Health insurance plays an integral part in society&#39;s economic well-being; the existence of fraud creates innumerable challenges in providing affordable health care support for the people. In order to reduce the losses incurred due to fraud, there is a need for a powerful model that accurately predicts fraud in the data. The purpose of this paper is to implement a more sophisticated technique for fraud detection using machine learning: HEMClust (Heterogeneous Ensemble Model with Clustering). The first phase of the model aims at improving the quality of claims data through effective preprocessing. The second stage addresses the overlapping instances in provider specialties by grouping them using k-prototype clustering. The final stage builds the model using a heterogeneous stacking ensemble that performs classification on multiple levels, with four base learners in level 0 and a meta learner in level 1. The results were assessed using evaluation metrics and statistical tests such as the Friedman and Nemenyi tests to compare the performance of the base classifiers against the proposed HEMClust. The empirical results show that HEMClust produced 94% and 96% overall precision and recall rates on the dataset, an increase of 45% to 50% in the fraud detection rate for each class in the data.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_18-HEMClust_An_Improved_Fraud_Detection_Model_for_Health_Insurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Heuristic Feature Selection in Logistic Regression Modeling with Newton Raphson and Gradient Descent Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130317</link>
        <id>10.14569/IJACSA.2022.0130317</id>
        <doi>10.14569/IJACSA.2022.0130317</doi>
        <lastModDate>2022-03-30T07:42:07.8430000+00:00</lastModDate>
        
        <creator>Samingun Handoyo</creator>
        
        <creator>Nandia Pradianti</creator>
        
        <creator>Waego Hadi Nugroho</creator>
        
        <creator>Yusnita Julyarni Akri</creator>
        
        <subject>Classification model; feature selection; gradient descent; logistic regression; Newton Raphson</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Binary choices, such as success or failure, acceptance or rejection, high or low, and heavy or light, can always be used to express decision-making. Based on the known predictor feature values, a classification model can be used to predict an unknown categorical value. The logistic regression model is a commonly used classification approach in a variety of scientific domains. The goal of this research is to create a logistic regression model with a heuristic approach for selecting input features and to compare the Newton Raphson and gradient descent (GD) algorithms for estimating its parameters. Among the predictor features, four met the criterion of being both dependent on the target and independent of one another. Using the Chi-square test on data from Malang, Indonesia, the researchers found four significant characteristics that increase the incidence of pregnant women developing preeclampsia: age (X1), parity (X2), history of hypertension (X3), and salty food consumption (X6). The logistic regression model developed with the gradient descent approach had a lower risk of error than the model generated with the Newton Raphson algorithm: the gradient descent model achieved a precision of 98.54 percent and an F1 score of 97.64 percent, while the Newton Raphson model achieved a precision of 86.34 percent and an F1 score of 72.55 percent.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_17-A_Heuristic_Feature_Selection_in_Logistic_Regression_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technique for Balanced Load Balancing in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130316</link>
        <id>10.14569/IJACSA.2022.0130316</id>
        <doi>10.14569/IJACSA.2022.0130316</doi>
        <lastModDate>2022-03-30T07:42:07.8270000+00:00</lastModDate>
        
        <creator>Narayan A. Joshi</creator>
        
        <subject>Cloud environment; resource sharing; load balancing; starvation; priority oriented resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Resource sharing by means of load balancing in cloud computing environments helps achieve efficient utilization of cloud resources and higher overall throughput. However, poor load balancing algorithms may leave some virtual machines starving for additional cloud resources; in particular, a poorly crafted mechanism for priority-oriented load balancing may leave low-priority virtual machines starving. We suggest an improved resource sharing mechanism for load balancing in cloud computing environments that avoids starvation. To provide efficient load balancing, the proposed resource sharing technique takes the priority levels of the respective virtual machines into consideration. An implementation of the suggested load balancing algorithm in a cloud environment reduces the waiting time of starving virtual machines that are looking for additional resources on the cloud platform. Our proposed algorithm has been deployed on a prototype cloud computing infrastructure testbed established with the open source software OpenStack, supported in the backend by a minimal setup of the open source CentOS Linux operating system. Experimental results of the proposed load balancing mechanism on the prototype setup indicate a reduction in the waiting time of overloaded, starving virtual machines. The proposed mechanism accomplishes priority-oriented, starvation-free resource sharing for load balancing in cloud computing environments. In the future, the proposed technique can be further enhanced to implement load balancing in collaborative cloud computing environments.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_16-Technique_for_Balanced_Load_Balancing_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Risk Management Framework for Large Scale Scrum using Metadata Outer Request Management Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130315</link>
        <id>10.14569/IJACSA.2022.0130315</id>
        <doi>10.14569/IJACSA.2022.0130315</doi>
        <lastModDate>2022-03-30T07:42:07.7970000+00:00</lastModDate>
        
        <creator>Rehab Adel</creator>
        
        <creator>Hany Harb</creator>
        
        <creator>Ayman Elshenawy</creator>
        
        <subject>Distributed agile development; knowledge sharing; risk management; large scale scrum; metadata outer request management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Recently, most software projects have naturally become Distributed Agile Development (DAD) projects. The main benefits of DAD projects are cost savings and proximity to markets due to their distributed nature, as in large-scale Scrum (LeSS). Developing LeSS projects leads to challenges in risk management, especially team collaboration challenges, since there is no standardized process for teams to communicate collaboratively. Team collaboration and knowledge sharing are vital resources for a large Scrum team&#39;s success. Hence, a dynamic technique that facilitates team collaboration in the LeSS environment is necessary. This paper proposes a risk management framework for LeSS using metadata outer requests. The proposed framework manages the outer requests among the distributed teams, thereby avoiding losses in team collaboration as well as risks and threats to project completion. It also contributes to exchanging team skills and experience. The proposed framework is evaluated by applying it to two different case studies of large-scale Scrum projects, and the evaluation results prove its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_15-A_Risk_Management_Framework_for_Large_Scale_Scrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Mathematics Web-based Learning on Table Set-Up Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130314</link>
        <id>10.14569/IJACSA.2022.0130314</id>
        <doi>10.14569/IJACSA.2022.0130314</doi>
        <lastModDate>2022-03-30T07:42:07.7800000+00:00</lastModDate>
        
        <creator>Gusti Ayu Dessy Sugiharni</creator>
        
        <creator>I Made Ardana</creator>
        
        <creator>I Gusti Putu Suharta</creator>
        
        <creator>I Gusti Putu Sudiarta</creator>
        
        <subject>Development; web-based learning; mathematics; table set-up; activities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This paper discusses the product design and expert validation of web-based mathematics learning for table set-up activities in the hospitality industry. This research was of the Research and Development type, which aims to develop a new product. Four experts were involved in this study: two experts in the field of learning technology as media validators and two experts in the field of mathematics education as material validators. The validation of the web-based mathematics learning used a questionnaire prepared as the research instrument. This research produced web-based mathematics learning consisting of five parts, namely an initial part recalling Cartesian coordinates, a translation sub-material section, a reflection sub-material section, a rotation sub-material section, and a dilatation sub-material section. In the expert review, the average score from the material validators was eighty-five percent, which means very good, and the average score from the media validators was ninety-five percent, which also means very good. This shows that this web-based mathematics learning can be considered proper to use.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_14-Development_of_Mathematics_Web_based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Detection from X-Ray Images using Convoluted Neural Networks: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130313</link>
        <id>10.14569/IJACSA.2022.0130313</id>
        <doi>10.14569/IJACSA.2022.0130313</doi>
        <lastModDate>2022-03-30T07:42:07.7500000+00:00</lastModDate>
        
        <creator>Othman A. Alrusaini</creator>
        
        <subject>Convoluted neural networks; COVID-19; chest x-ray; transfer learning; support vector machines; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This paper reviews a host of peer-reviewed articles related to the detection of COVID-19 infection from X-ray images using Convoluted Neural Network (CNN) approaches. It stems from the background of a pandemic that has hit the world and negatively affected all spheres of life. The currently available testing mechanisms are invasive, expensive, time-consuming, and not widely available. The paper considered 33 main articles supported by several other articles. The measurement metrics considered in this review are accuracy, precision, recall, F1-score, and specificity. The inclusion criteria required that an article be written after the pandemic began, deliberate on CNNs, and attempt to detect the disease from X-ray images. Findings suggest that transfer learning, support vector machines, long short-term memory, and other CNN approaches are highly effective in predicting the likelihood of the disease from X-rays. However, multi-class predictions scored lower on accuracy than their binary counterparts. Also, data augmentation significantly improved the performance of the models. Hence, the paper concluded that all reviewed approaches are effective. It recommends that analysts integrate transfer learning procedures in the model formulation process, engage in data augmentation practices, and focus on classifying data into binary classes.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_13-Covid_19_Detection_from_X_Ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Lexicon and Machine Learning Approach for Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130312</link>
        <id>10.14569/IJACSA.2022.0130312</id>
        <doi>10.14569/IJACSA.2022.0130312</doi>
        <lastModDate>2022-03-30T07:42:07.7170000+00:00</lastModDate>
        
        <creator>Roopam Srivastava</creator>
        
        <creator>P. K. Bharti</creator>
        
        <creator>Parul Verma</creator>
        
        <subject>NLP; sentiment analysis; SGD (stochastic gradient descent); machine learning; TFIDF; BoW; VADER; SVM; AFINN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Opinion mining and text analysis are other terms for sentiment analysis. The fundamental objective is to extract meaningful information and data from unstructured text using natural language processing, statistical, and linguistic methodologies. This is further used to derive qualitative and quantitative results on a scale of ‘positive’, ‘neutral’, or ‘negative’ to obtain the overall sentiment. In this research, we worked with both approaches, machine learning and an unsupervised lexicon-based algorithm, for sentiment calculation and model performance. Stochastic gradient descent (SGD) is utilized in this work to optimize the support vector machine (SVM) and logistic regression. The AFINN and VADER lexicons are used for the lexicon model. Both TF-IDF and bag-of-words features are used for classification. The dataset, &quot;Trip advisor hotel reviews&quot;, contains around 20k reviews, which were cleaned and preprocessed for our work. We then conducted training and assessment, measuring each classifier&#39;s accuracy using evaluation metrics. Of the two classifiers used to assess machine learning accuracy, the support vector machine is the more accurate: it achieved a classification rate of 95.2 percent with bag of words and 96.3 percent with TF-IDF. Among the lexicon models, VADER outperforms AFINN, with an accuracy of 88.7% versus 86.0%. Comparing the supervised and unsupervised lexicon approaches, the support vector machine model performs best, with a TF-IDF accuracy of 96.3 percent against the VADER lexicon accuracy of 88.7%.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_12-Comparative_Analysis_of_Lexicon_and_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Multi-Criteria Decision Making Techniques for Ranking of Attributes for e-Governance in India</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130311</link>
        <id>10.14569/IJACSA.2022.0130311</id>
        <doi>10.14569/IJACSA.2022.0130311</doi>
        <lastModDate>2022-03-30T07:42:07.7030000+00:00</lastModDate>
        
        <creator>Bhaswati Sahoo</creator>
        
        <creator>Rabindra Narayana Behera</creator>
        
        <creator>Prasant Kumar Pattnaik</creator>
        
        <subject>e-Governance; information and communication technology; multi-criteria decision making; ranking; technique for order of preference by similarity to ideal solution (TOPSIS); usability; weighted sum model (WSM); weighted product model (WPM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>e-Governance is a system in which all public services are made available on an online platform with the help of a secured cyber architecture. Governments along with the people have praised the ability of information and communications technology (ICT) around the world to stimulate vital sectors of the economy. Advanced technologies have provided fast, inexpensive, and convenient methods of interaction and communication. In various developing and developed countries, these newly adopted technologies have shown a direct positive impact on the country’s productivity and efficiency, leading to rapid development. This work presents a comparative study of various Multi-Criteria Decision Making (MCDM) techniques, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the Weighted Sum Model (WSM), and the Weighted Product Model (WPM), to rank the attributes responsible for better decision making when implementing successful e-Governance in a developing country, India.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_11-A_Comparative_Analysis_of_Multi_Criteria_Decision_Making_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Level Class Decomposition using the Threshold-based Hierarchical Agglomerative Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130310</link>
        <id>10.14569/IJACSA.2022.0130310</id>
        <doi>10.14569/IJACSA.2022.0130310</doi>
        <lastModDate>2022-03-30T07:42:07.6870000+00:00</lastModDate>
        
        <creator>Bayu Priyambadha</creator>
        
        <creator>Tetsuro Katayama</creator>
        
        <subject>Refactoring; design level refactoring; software refactoring; hierarchical clustering; class decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Refactoring activity is essential to maintain the quality of a software’s internal structure, which decays under the impact of software changes and evolution. Class decomposition is one of the refactoring processes for maintaining internal quality. Mostly, the refactoring process is done at the source code level. Shifting from the source code level to the design level is necessary as a quick step to refactoring that stays close to the requirements. The design artifact has a higher abstraction level than the source code and carries limited information. The challenge is to define the new metrics needed in class decomposition using the design artifact&#39;s information. Syntactic and semantic information from the design artifact provides valuable data for the decomposition process, so class decomposition can be done at the level of the design artifact (class diagram) using this information. The dynamic threshold-based Hierarchical Agglomerative Clustering produces more specific clusters that are considered to yield single-responsibility classes.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_10-Design_Level_Class_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Desire2Learn Recommender System based on Collaborative Filtering and Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130309</link>
        <id>10.14569/IJACSA.2022.0130309</id>
        <doi>10.14569/IJACSA.2022.0130309</doi>
        <lastModDate>2022-03-30T07:42:07.6530000+00:00</lastModDate>
        
        <creator>Walid Qassim Qwaider</creator>
        
        <subject>Collaborative filtering; Desire2Learn; ontology; recommender system (RS); personalized Desire2Learn; PDRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>In this century, attention to recommendation systems (RS) has grown, especially in e-learning, to solve the problem of information overload in e-learning systems. E-learning providers also play a major role in helping learners find appropriate courses that fit their learning plan using Desire2Learn at Majmaah University. Although recommendation systems generally have a clear advantage in solving problems related to information overload in various areas of e-business and in making accurate recommendations, e-learning recommendation systems still struggle with information overload regarding the characteristics of the learner, such as the appropriate learning style, the level of skills provided, and the student&#39;s level of education. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend courses to learners through Desire2Learn. The ontology integrates the characteristics of the learner, as well as the classifications, into the recommendation process, while collaborative filtering computes the predictions and generates the e-learning recommendations. In addition, ontological knowledge is employed by the educational RS in the early stages, when no ratings are yet available, to mitigate the cold start problem. The results of this study show that the proposed recommendation technique is superior to pure collaborative filtering in terms of specialization and recommendation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_9-Personalized_Desire2Learn_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile-based Vaccine Registry to Improve Collection and Completeness of Maternal Immunization Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130308</link>
        <id>10.14569/IJACSA.2022.0130308</id>
        <doi>10.14569/IJACSA.2022.0130308</doi>
        <lastModDate>2022-03-30T07:42:07.6400000+00:00</lastModDate>
        
        <creator>Zubeda S. Kilua</creator>
        
        <creator>Mussa A. Dida</creator>
        
        <creator>Devotha N. Nyambo</creator>
        
        <subject>Maternal immunization; electronic immunization registry; USSD; data collection; limited resource setting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Immunization during pregnancy and infancy significantly reduces morbidity and mortality among mothers, unborn fetuses, and young infants. Several studies show the merits of obtaining complete, quality, and accurate data on time to enhance policy and decision-making for societal and national development. Despite the efforts by nations to ensure the success of maternal immunization through electronic immunization registries, limited resources such as poor internet access, electricity shortages, and digital illiteracy in developing countries hinder the goal of full immunization of mothers and infants. Since 2015, immunization programs in Tanzania have used internet-based information systems to collect immunization data from health facilities and submit them to the responsible authority for further decision-making, such as the allocation of vaccines to health facilities. Internet-based reporting is not fully achievable in developing countries due to its cost and resource constraints; thus, the responsible authority does not receive instant data to update its vaccine inventory and management activities, which often results in partial immunization due to the unavailability of vaccines in some facilities. This challenge can be addressed by an affordable system that instantly records and transmits vaccination details, such as vaccine utilization and demand, from each health facility to the responsible authority with fewer resources. The present study proposes a USSD platform to enhance the receipt of real-time data by immunization authorities from health facilities with both poor and good internet connectivity at a lower cost. A greater number of health facilities in Tanzania prefer to use both online and offline platforms for collecting and recording immunization data. As the electronic immunization registry has been introduced in areas with limited resources, the use of both online and offline platforms for data collection is recommended so that facilities can submit immunization data in real time without the delays caused by poor resource settings.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_8-Mobile_based_Vaccine_Registry_to_Improve_Collection_and_Completeness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectrum Pricing in Cognitive Radio Networks: An Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130307</link>
        <id>10.14569/IJACSA.2022.0130307</id>
        <doi>10.14569/IJACSA.2022.0130307</doi>
        <lastModDate>2022-03-30T07:42:07.6230000+00:00</lastModDate>
        
        <creator>Reshma C R</creator>
        
        <creator>Arun kumar B. R</creator>
        
        <subject>Price; game theory; analysis of price; trading; usage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Wireless technology is applied in developing various applications across different thrust areas, creating a huge demand for the spectrum band. The available spectrum can be shared between primary users and secondary users, with secondary users utilizing the spectrum on a rental basis. In this competitive environment, primary users provide good quality of service to end users in order to retain the spectrum band. Pricing is one of the vital components in Cognitive Radio Networks (CRN) for owning or renting spectrum, which secondary users utilize when it is idle. This research work focuses on spectrum pricing for secondary users based on the price paid by the primary user. Primary users generate revenue, which is in turn used for maintenance and the annual fees paid to the governing telecommunication department. Pricing and trading are among the open research issues in allocating spectrum to primary users. This work focuses on providing spectrum to secondary users at a minimal price for utilization during a specified time. It highlights the fact that spectrum is scarce and its prices are high and unaffordable for individuals; hence, primary users lease or rent out idle bandwidth to secondary users. To utilize the spectrum for a dedicated period of time, the secondary user pays usage charges to the primary user. In this research work, various methods are presented for determining the price for secondary users. The pricing components are analyzed using one-way ANOVA, which compares values among groups. The results indicate that not all group means are the same and that they are independent variables.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_7-Spectrum_Pricing_in_Cognitive_Radio_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Can Ready-to-Use RNNs Generate “Good” Text Training Data?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130306</link>
        <id>10.14569/IJACSA.2022.0130306</id>
        <doi>10.14569/IJACSA.2022.0130306</doi>
        <lastModDate>2022-03-30T07:42:07.5930000+00:00</lastModDate>
        
        <creator>Jia Hui Feng</creator>
        
        <subject>Neural networks; machine learning; text generation; classification; natural language processing; data augmentation; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>There is much research on state-of-the-art techniques for generating training data through neural networks. However, many of these techniques are not easily implemented or available due to factors such as copyright of their research code. Meanwhile, other neural network codes are currently available and easily accessible for individuals to generate text data; this paper explores the quality of the text data generated by these ready-to-use neural networks for classification tasks. This paper&#39;s experiment showed that using the text data generated by a default-configured RNN to train a classification model can closely match baseline accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_6-Can_Ready_to_Use_RNNs_Generate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accessibility of Bulgarian Regional Museums Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130305</link>
        <id>10.14569/IJACSA.2022.0130305</id>
        <doi>10.14569/IJACSA.2022.0130305</doi>
        <lastModDate>2022-03-30T07:42:07.5770000+00:00</lastModDate>
        
        <creator>Todor Todorov</creator>
        
        <creator>Galina Bogdanova</creator>
        
        <creator>Mirena Todorova–Ekmekci</creator>
        
        <subject>Accessibility; museums; web accessibility; visual disability; disabled person; testing; automatic validation tools; WCAG criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>Web accessibility is an inclusive practice that ensures everyone, including people with disabilities, can successfully interact with websites and use all their functionality. The research in this paper investigates the web accessibility of regional museums in Bulgaria and the compliance of their websites with the recommendations of the Web Content Accessibility Guidelines 2.1 (WCAG 2.1), published by the World Wide Web Consortium (W3C). The study presents the results of the user experience of people with disabilities regarding the accessibility of museums and their exhibits. A methodology for automated testing of web accessibility with several software tools is described in the paper. Results from these tests are analysed and visualized with graphical tools, and some important conclusions about the most common accessibility problems are given.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_5-Accessibility_of_Bulgarian_Regional_Museums_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Elderly&#39;s Internet Accessed Time using XGB Machine Learning Model for Solving the Level of the Information Gap of the Elderly</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130304</link>
        <id>10.14569/IJACSA.2022.0130304</id>
        <doi>10.14569/IJACSA.2022.0130304</doi>
        <lastModDate>2022-03-30T07:42:07.5470000+00:00</lastModDate>
        
        <creator>Hung Viet Nguyen</creator>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Information gap; machine learning; prediction model; elderly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This study aims to construct machine learning models to predict the elderly&#39;s internet access time. Such models can help resolve present and future information gaps by analyzing information-use factors such as internet access and mobile device usability. We analyzed 2,300 adults aged 55 years and older who participated in a national survey. This study followed a five-step pipeline: primary data selection, data imputation to handle missing data, feature ranking to identify the most important features, machine learning algorithms to develop classifier models, and model evaluation. We applied the Extremely Randomized Trees classifier (Extra Trees), the Random Forest classifier (RF), and the Extreme Gradient Boosting classifier (XGB) to rank features and then select the most important ones. All classification models were evaluated using the accuracy score. In our study, the most accurate model for predicting the internet access time of the elderly was the XGB model, whose evaluation scores are very encouraging. To address the information gap among the elderly, these effective models can be used to make predictions about elderly individuals, and solutions can then be offered to help them in a society with a strong information technology base.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_4-Analysis_of_the_Elderlys_Internet_Accessed_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stochastic Rounding for Image Interpolation and Scan Conversion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130303</link>
        <id>10.14569/IJACSA.2022.0130303</id>
        <doi>10.14569/IJACSA.2022.0130303</doi>
        <lastModDate>2022-03-30T07:42:07.5300000+00:00</lastModDate>
        
        <creator>Olivier Rukundo</creator>
        
        <creator>Samuel Emil Schmidt</creator>
        
        <subject>Cardiac ultrasound; deterministic rounding; image quality; interpolation; pseudorandom number; scan conversion; stochastic rounding; video quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>A stochastic rounding (SR) function is proposed to evaluate and demonstrate the effects of stochastically rounding row and column subscripts in image interpolation and scan conversion. The proposed SR function is based on a pseudorandom number, enabling any non-integer row and column subscript to be pseudorandomly rounded up or down. The SR function also, as an exception, rounds up any subscript input that is smaller than the pseudorandom number. The algorithm of interest is nearest-neighbor interpolation (NNI), which is traditionally based on a deterministic rounding (DR) function. Experimental simulation results are provided to demonstrate the performance of the NNI-SR and NNI-DR algorithms before and after applying smoothing and sharpening filters of interest. Additional results demonstrate the performance of NNI-SR and NNI-DR interpolated scan conversion algorithms on cardiac ultrasound videos.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_3-Stochastic_Rounding_for_Image_Interpolation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Can the Futures Market be Predicted-Perspective based on AutoGluon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130302</link>
        <id>10.14569/IJACSA.2022.0130302</id>
        <doi>10.14569/IJACSA.2022.0130302</doi>
        <lastModDate>2022-03-30T07:42:07.5130000+00:00</lastModDate>
        
        <creator>YangChun Xiong</creator>
        
        <creator>ZiXuan Pan</creator>
        
        <creator>BaiFu Chen</creator>
        
        <subject>AutoGluon; LSTM; GRU; Chinese futures market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>This paper discusses how to improve the efficiency of predicting the correlation coefficient of the Chinese futures market. First, the prediction periods are divided by major events, and the predictability of the different periods is compared. Second, on this basis, an automatic machine learning framework, AutoGluon, is applied to compare the predictive ability of different deep learning models such as LSTM and GRU. The results demonstrate that: (1) compared with LSTM and GRU, AutoGluon can indeed improve prediction efficiency; (2) the changes in prediction error between different periods can be explained by the influence of major events in the futures market; and (3) although the predictive ability of many models declines over time, the performance of XGBoost is relatively stable, which can provide a useful tool for market participants.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_2-Can_the_Futures_Market_be_Predicted_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Helping People with Social Anxiety Disorder to Recognize Facial Expressions in Video Meetings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130301</link>
        <id>10.14569/IJACSA.2022.0130301</id>
        <doi>10.14569/IJACSA.2022.0130301</doi>
        <lastModDate>2022-03-30T07:42:07.4670000+00:00</lastModDate>
        
        <creator>Jieyu Wang</creator>
        
        <creator>Abdullah Abuhussein</creator>
        
        <creator>Hanwei Wang</creator>
        
        <creator>Tian Qi</creator>
        
        <creator>Xiaoyue Ma</creator>
        
        <creator>Amani Alqarni</creator>
        
        <creator>Lynn Collen</creator>
        
        <subject>Social phobia/social anxiety disorder; video meeting; facial expression recognition; emotion recognition; empirical research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(3), 2022</description>
        <description>According to previous research on social anxiety disorder (SAD) and facial expressions, people with SAD tend to view all faces as portraying negative emotions; thus, they are afraid of chatting with others when they cannot understand the real emotions being communicated. The advancement of facial recognition technology has given people opportunities to obtain more precise emotional estimates from facial expressions. This study investigates the practical effects of apps that detect facial expressions of emotion (e.g., AffdexMe) on people with SAD when communicating with others through video chat. We conducted empirical research to examine whether facial emotion recognition software can help people with SAD overcome the fear of chatting with others in video meetings and help them understand others’ emotions to reduce communication conflicts. This paper presents the design of an experiment to measure participants’ reactions when they video-chat with others using the AffdexMe application and to interview participants for in-depth feedback. The results show that people with SAD could better recognize the emotions of others using AffdexMe, resulting in more reasonable responses and better interaction during video chats. We also propose design suggestions to make the described approach better and more convenient to use. This research sheds light on the future design of emotion recognition in video chatting for people with disabilities and even ordinary users.</description>
        <description>http://thesai.org/Downloads/Volume13No3/Paper_1-Helping_People_with_Social_Anxiety_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Ontology based Deep Learning for Computer-aided Interpretation of Mammography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121175</link>
        <id>10.14569/IJACSA.2021.0121175</id>
        <doi>10.14569/IJACSA.2021.0121175</doi>
        <lastModDate>2022-03-16T11:42:29.5470000+00:00</lastModDate>
        
        <creator>Hamida Samiha Rahli</creator>
        
        <creator>Nac&#233;ra Benamrane</creator>
        
        <subject>Computer assisted detection (CAD); breast cancer; the digital database for screening mammography (DDSM); ontology; deep learning; breast imaging-reporting and data system (BIRADS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>This paper has been removed.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_75-An_Advanced_Ontology_based_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Different Raspberry Pi Models for a Broad Spectrum of Interests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130295</link>
        <id>10.14569/IJACSA.2022.0130295</id>
        <doi>10.14569/IJACSA.2022.0130295</doi>
        <lastModDate>2022-03-01T11:08:01.2930000+00:00</lastModDate>
        
        <creator>Eric Gamess</creator>
        
        <creator>Sergio Hernandez</creator>
        
        <subject>Performance evaluation; benchmarks; raspberry pi; single board computer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Nowadays, Single Board Computers (SBCs), especially Raspberry Pi (RPi) devices, are extensively used due to their low cost, efficient use of energy, and successful implementation in a wide range of applications; therefore, evaluating their performance is critical to better understand the applicability of RPis to solve problems in different areas of knowledge. This paper describes a comparative and experimental study of the performance of five different models of the RPi family (RPi Zero W, RPi Zero 2 W, RPi 3B, RPi 3B+, and RPi 4B) in several scenarios and with different configurations. To conduct our experiments on RPis, we used a self-developed tool and other existing open-source benchmarking tools, allowing us to perform tests that mimic real-world needs. We assessed important factors including CPU frequency and temperature during stressful activities; processor performance when executing CPU-intensive processes such as audio and file compression as well as cryptographic operations; memory and microSD storage performance during read and write operations; TCP throughput in different WiFi bands; and TCP latency when sending a specific payload from a source to a destination. Our experimental results showed that the RPi 4B significantly outperformed the other SBCs tested. In addition, our research indicated that the overclocked RPi Zero 2 W, RPi 3B, and RPi 3B+ had similar performance. Finally, the RPi Zero 2 W showed a much higher capacity than its predecessor, the RPi Zero W, and seems to be a perfect replacement when upgrading, since they have the same form factor and are physically interchangeable. With this study, we aim to guide researchers and hobbyists in selecting adequate RPis for their projects.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_95-Performance_Evaluation_of_Different_Raspberry_Pi_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teacher e-Training and Student e-Learning during the Period of Confinement Caused by Covid-19 in Case of Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130294</link>
        <id>10.14569/IJACSA.2022.0130294</id>
        <doi>10.14569/IJACSA.2022.0130294</doi>
        <lastModDate>2022-02-28T09:51:48.0930000+00:00</lastModDate>
        
        <creator>Abdessamad El Omari</creator>
        
        <creator>Malika Tridane</creator>
        
        <creator>Said Belaaouad</creator>
        
        <subject>COVID-19; e-learning; e-training; change; innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The rapid advance of the new coronavirus imposed several drastic measures on the authorities to contain the virus and prevent its spread. The Ministry of Education decreed the suspension of face-to-face classes in all schools. Pedagogical continuity, now a priority, must be carried out remotely through video conferencing, platforms, and video capsules. The main question of this research is: what are the success factors of distance learning and training in the Moroccan context? This raises a whole range of questions: are teachers able to adopt this new teaching method? Are the available means adequate to ensure distance learning? What about the training of teachers to cope with this unexpected radical change? Based on the results obtained from a population of 126 teachers, we found that 91% of teachers said they adopted online teaching, although it is not really e-learning as recognized by specialists; its objective is rather to maintain communication with the students. The remaining 9% did not use this mode of teaching, pointing out that it does not guarantee equal opportunities for learners. We therefore conclude that the necessary material resources, such as computers and internet access, must be made available to ensure the success of this type of teaching, along with the training teachers need to develop the skills required to manage distance learning.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_94-Teacher_e_Training_and_Student_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Criminal Behavior at the Residential Unit based on Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130293</link>
        <id>10.14569/IJACSA.2022.0130293</id>
        <doi>10.14569/IJACSA.2022.0130293</doi>
        <lastModDate>2022-02-28T09:51:48.0770000+00:00</lastModDate>
        
        <creator>H. A. Razak</creator>
        
        <creator>Nooritawati Md Tahir</creator>
        
        <creator>Ali Abd Almisreb</creator>
        
        <creator>N. K. Zakaria</creator>
        
        <creator>N. F. M. Zamri</creator>
        
        <subject>Abnormal behavior; deep learning; convolution neural network; forensic posture; property crime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Studies on abnormal behavior that use deep learning as a processing platform are increasing. Deep learning, specifically the convolutional neural network (CNN), is known for learning features directly from the raw image. In return, a CNN requires a high-performance hardware platform to accommodate its computational cost; AlexNet and VGG-16, for example, have 62 million and 138 million parameters, respectively. Hence, in this study, four CNN samplings with different architectures for detecting abnormal behavior at the gate of residential units are evaluated and validated. The forensic postures, along with other collected data, are used as a preliminary step in constructing the criminal case database. Accuracy of up to 97% is obtained from the trained CNN samplings, with an 80% to 97% recognition rate achieved during offline testing and a 70% to 90% recognition rate recorded during real-time testing. The results show that the developed CNN samplings perform well and can be utilized to detect and recognize normal and abnormal behavior at the gate of residential units.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_93-Detection_of_Criminal_Behavior_at_the_Residential_Unit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of CATA Software in Exploring the Significance of Modal Verbs in Large Data Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130292</link>
        <id>10.14569/IJACSA.2022.0130292</id>
        <doi>10.14569/IJACSA.2022.0130292</doi>
        <lastModDate>2022-02-28T09:51:48.0470000+00:00</lastModDate>
        
        <creator>Ayman Farid Khafaga</creator>
        
        <subject>CATA software; frequency distribution analysis; ideologies; modal verbs; Bond’s Lear</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This paper investigates the effectiveness of applying CATA (Computer-Aided Text Analysis) software in exploring the extent to which particular modals are significant in communicating the ideological and thematic messages of literary discourse. More specifically, the paper tests the hypothesis that CATA software, including FDA (Frequency Distribution Analysis), KWIC (Key Word in Context), CA (Content Analysis), and TDA (Thematic Distribution Analysis), is effectively helpful in the linguistic and ideological analysis of modals in literary texts. To this end, the paper applies frequency distribution analysis (FDA) to Edward Bond’s Lear as a sample representing literary texts. Two modal verbs, will and must, were selected to be computationally analyzed by means of frequency distribution analysis in order to decode the different ideologies they carry in the discourse of the selected play. These modal verbs were computationally displayed within their contextual, total, and indicative occurrences in the play under investigation to demonstrate the way they convey particular ideologies. Findings revealed that CATA software, represented by its FDA variable, is highly contributive to communicating ideologies in the play under investigation. The paper further demonstrated two findings: first, via CATA software, analysts can easily arrive at the ideological significance of various classes of words, including the modal verbs used in literary texts; and, second, only a few occurrences out of the total number of frequencies of the modal verbs at hand are indicative in conveying the hidden ideologies of their users.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_92-The_Effectiveness_of_CATA_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extraction of Point-of-Interest in Multispectral Images for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130291</link>
        <id>10.14569/IJACSA.2022.0130291</id>
        <doi>10.14569/IJACSA.2022.0130291</doi>
        <lastModDate>2022-02-28T09:51:48.0300000+00:00</lastModDate>
        
        <creator>Kossi Kuma KATAKPE</creator>
        
        <creator>Lyes AKSAS</creator>
        
        <creator>Diarra MAMADOU</creator>
        
        <creator>Pierre GOUTON</creator>
        
        <subject>Multispectral image; hybrid sensor; image mosaicing; image demosaicing; mosaic filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Security systems in companies, airports, enterprises, etc. face numerous challenges, among which object and face recognition is a major one. The robustness problems of recognition systems that usually affect color images can nowadays be addressed by multispectral image acquisition in the near-infrared range, using cameras equipped with new high-performance sensors able to capture images in dark or uncontrolled environments with much greater accuracy. Multispectral CMOS (Complementary Metal Oxide Semiconductor) sensors record several wavelengths in a single shot; these wavelengths are isolated and allow very specific analyses. Such sensors come with new acquisition methods and provide more accurate observations. The current generation of these imaging sensors attracts scientific and technical interest because they provide much more information than sensors operating in the visible range, capturing the precise nature and spatio-temporal evolution of the areas to be analyzed. In this study, multispectral images acquired by a camera equipped with a hybrid sensor operating in the near infrared are used. This camera was built in the ImViA laboratory of the University of Bourgogne as part of the European project EXIST (EXtended Image Sensing Technologies). The process involves image acquisition, image mosaicing, and image demosaicing using mosaic filters. After the acquisition process, interest points are extracted from these image bands in order to determine how information is shared across the bands. The results were satisfactory: information is spread across all image bands, and the algorithms used detected many interest points. Based on these results, a large database can be set up for building a face recognition system.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_91-Extraction_of_Point_of_Interest_in_Multispectral_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Index for Detecting Frequency of Unknown Underwater Weak Signals with Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130290</link>
        <id>10.14569/IJACSA.2022.0130290</id>
        <doi>10.14569/IJACSA.2022.0130290</doi>
        <lastModDate>2022-02-28T09:51:48.0130000+00:00</lastModDate>
        
        <creator>Weixiang Yu</creator>
        
        <creator>Xiukui Li</creator>
        
        <subject>Stochastic resonance; underwater weak signal detection; genetic algorithm; frequency detection; frequency-hopping signal; index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In this paper, a new index is proposed for detecting the frequency of unknown underwater signals based on stochastic resonance theory. When the received weak signal is input into the stochastic resonance system, first, by frequency analysis, the frequency with the highest amplitude Aₘ in the output signal spectrum is taken as the pre-detection frequency. Then a cosine signal with the pre-detection frequency and unit amplitude is constructed. The pre-signal-to-noise-ratio is defined as the logarithm of the squared amplitude Aₘ over the mean of the signal amplitudes at all other frequencies. The new index is defined as the product of the pre-signal-to-noise-ratio and the correlation coefficient between the received unknown signal and the constructed cosine signal. The new index takes into account the signal characteristics in both the time and frequency domains, yielding better signal frequency detection performance. In addition, to improve the time efficiency of frequency detection, a method, keyed to the genetic algorithm, is proposed to bound the searching range of the stochastic resonance system parameters. The method can be used to detect the frequency of both single-frequency and frequency-hopping unknown signals. With the designed new index and the system parameter bounding method, simulations and experiments on weak underwater unknown signals are conducted. Compared to the piecewise mean value index and the weighted power spectral kurtosis index, the new index yields a higher detection probability at varied input signal-to-noise ratios and signal frequencies. By bounding the system parameter searching ranges, time efficiency is improved. The main purpose of this paper is to detect the frequency of unknown underwater weak signals by a stochastic resonance system with a genetic algorithm. The main contributions are summarized as follows. First, the detection probability of weak signals is improved by the stochastic resonance system with the proposed signal detection index compared with other indexes. Second, to improve the time efficiency of signal frequency detection, a method to bound the searching range of system parameters is proposed.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_90-A_New_Index_for_Detecting_Frequency_of_Unknown_Underwater_Weak_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Exploration and Classification of Useful Comments in Stack Overflow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130289</link>
        <id>10.14569/IJACSA.2022.0130289</id>
        <doi>10.14569/IJACSA.2022.0130289</doi>
        <lastModDate>2022-02-28T09:51:48.0000000+00:00</lastModDate>
        
        <creator>Prasadhi Ranasinghe</creator>
        
        <creator>Nipuni Chandimali</creator>
        
        <creator>Chaman Wijesiriwardana</creator>
        
        <subject>Stack overflow; useful comments; machine learning; SVM; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Stack Overflow is a public platform for developers to share their knowledge on programming with an engaged community. Crowdsourced programming knowledge is generated not only through questions and answers but also through comments, commonly known as developer discussions. Despite the availability of standard commenting guidelines on Stack Overflow, some users tend to post comments that do not adhere to those guidelines. This practice affects the quality of developer discussions, thus adversely affecting the knowledge-sharing process. The literature reveals that analyzing the comments could facilitate the process of learning and knowledge sharing. Therefore, this study intends to extract and classify useful comments into three categories: request clarification, constructive criticism, and relevant information. In this study, the classification of useful comments was performed using the Support Vector Machine (SVM) algorithm with five different kernels. Feature engineering was conducted to identify the possibility of concatenating ten external features with textual features. During the feature evaluation, it was identified that only TF-IDF and N-gram scores help classify useful comments. The evaluation results confirm that the Radial Basis Function (RBF) kernel of the SVM classification algorithm performs best in classifying useful comments on Stack Overflow regardless of the usage of the optimal combinations of hyperparameters.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_89-Systematic_Exploration_and_Classification_of_Useful_Comments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware Families and Subfamilies using Machine Learning Algorithms: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130288</link>
        <id>10.14569/IJACSA.2022.0130288</id>
        <doi>10.14569/IJACSA.2022.0130288</doi>
        <lastModDate>2022-02-28T09:51:47.9670000+00:00</lastModDate>
        
        <creator>Esraa Odat</creator>
        
        <creator>Batool Alazzam</creator>
        
        <creator>Qussai M. Yaseen</creator>
        
        <subject>Malware classification; machine learning; SMOTE; information security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Machine learning algorithms have proved their effectiveness in detecting malware. This paper conducts an empirical study to demonstrate the effectiveness of selected machine learning algorithms in detecting and classifying Android malware using permission features. The dataset consists of 9000 different malicious applications from the CIC-Maldroid2020, CIC-Maldroid2017 and CIC-InvesAndMal2019 datasets collected by the Canadian Institute for Cybersecurity. Meta-Multiclass and Random Forest ensemble classifiers, built on different machine learning classifiers, are used to overcome the imbalance in the data classes. Moreover, a genetic attribute selection technique and SMOTE are used to classify Ransomware subfamilies, handling the small size of the dataset and the underfitting problem. The results show that optimization and ensemble approaches are successful in treating the dataset issues, achieving 95% accuracy in classifying large malware families and 80% in Ransomware subfamilies.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_88-Detecting_Malware_Families_and_Subfamilies_using_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Spectral Imaging for Fruits and Vegetables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130287</link>
        <id>10.14569/IJACSA.2022.0130287</id>
        <doi>10.14569/IJACSA.2022.0130287</doi>
        <lastModDate>2022-02-28T09:51:47.9530000+00:00</lastModDate>
        
        <creator>Shilpa Gaikwad</creator>
        
        <creator>Sonali Tidke</creator>
        
        <subject>Multi-spectral imaging; fruit grading; vegetable classification; fruit quality; disease detection; fruit maturity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In the field of agriculture, fruit grading and vegetable classification is an important and challenging task. Currently, fruit and vegetable classification is done manually, which results in inconsistent performance and is influenced by external surroundings. Finding an expert fruit or vegetable grader can be challenging, and that person's performance may stagnate over time. With recent developments in computer technology and multi-spectral camera systems, it is possible to achieve an efficient fruit grading or vegetable classification system. In this manuscript, we summarize different automated fruit grading and vegetable classification systems based on multi-spectral imaging. We have focused our analysis on four major areas: varietal identification, fruit quality assessment, classification, and disease detection. From our analysis, we found that Partial Least Square Discriminant Analysis (PLS-DA) was most effective for identifying varieties of tomato seeds. For analyzing the quality of pomegranate fruits, the multiple linear regression model was the most optimal method. The Multi-Layer Perceptron (MLP) classifier was the recommended method for classifying medicinal plant leaves. A Linear Discriminant Analysis (LDA) based classifier could accurately detect diseases in fruits and vegetables.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_87-Multi_Spectral_Imaging_for_Fruits_and_Vegetables.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Politicians-based Deep Learning Models for Detecting News, Authors and Media Political Ideology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130286</link>
        <id>10.14569/IJACSA.2022.0130286</id>
        <doi>10.14569/IJACSA.2022.0130286</doi>
        <lastModDate>2022-02-28T09:51:47.9370000+00:00</lastModDate>
        
        <creator>Khudran M. Alzhrani</creator>
        
        <subject>Deep neural networks; text classification; political ideology; politician personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Non-partisanship is one of the qualities that contribute to journalistic objectivity. Factual reporting alone cannot combat political polarization in the news media. News framing, agenda setting, and priming are influence mechanisms that lead to political polarization, but they are hard to identify. This paper attempts to automate the detection of two political science concepts in news coverage: politician personalization and political ideology. Personalization of politicians' news coverage is a concept that encompasses one or more of the influence mechanisms. Political ideologies are often associated with controversial topics such as abortion and health insurance. However, the paper proves that politicians' personalization is related to the political ideology of news articles. Constructing deep neural network models based on politicians' personalization improved the performance of political ideology detection models. Also, deep network models could predict news articles' politician personalization with a high F1 score. Despite being trained on less data, personalization-based deep networks proved more capable of capturing the ideology of news articles than other non-personalized models. The dataset consists of two politician personalization labels, namely Obama and Trump, and two political ideology labels, Democrat and Republican. The results showed that politicians' personalization and political polarization exist in news articles, authors, and media sources.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_86-Politicians_based_Deep_Learning_Models_for_Detecting_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rectenna Design for Enhanced Node Lifetime in Energy Harvesting WSNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130285</link>
        <id>10.14569/IJACSA.2022.0130285</id>
        <doi>10.14569/IJACSA.2022.0130285</doi>
        <lastModDate>2022-02-28T09:51:47.9030000+00:00</lastModDate>
        
        <creator>Prakash K Sonwalkar</creator>
        
        <creator>Vijay Kalmani</creator>
        
        <subject>Antenna design; backscattering; beamforming; energy harvesting; sequential rule; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In a scenario where every possible solution is investigated for sustainability, Energy Harvesting (EH) stands as an undisputed candidate for enhancing network lifetime in WSNs, where node lifetime decides the network's life. Among all available energy sources, Radio Frequency (RF) energy is abundantly available in the ambient environment. Since both information and power are transmitted in an RF signal, EH is possible in the far-field region. First, we present a novel 4-element rectangular Patch Antenna Array (PAA) design of an EH rectenna. The receiving antenna is designed to pick up the radio signal in the RF range (2.45 GHz) from free space. Then, the H-shape antenna is modified by introducing a circular slot to enhance the bandwidth. The paper compares the basic parameters of the antenna, such as return loss, input impedance, bandwidth, gain, directivity, and efficiency. As a result, the modified H-shaped antenna (with circular slot) has a gain increased from 8.24 dB to 8.32 dB, a return loss reduced from -10 dB to -16 dB, and a bandwidth enhanced from 64.8 MHz to 868 MHz. The high gain, large bandwidth, suitably matched impedance for a minor return loss, and high efficiency of the modified H-shaped patch antenna make it eligible for energy harvesting applications.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_85-Rectenna_Design_for_Enhanced_Node_Lifetime_in_Energy_Harvesting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Linguistic-based Evaluation System of Cloud Software as a Service (SaaS) Provider</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130284</link>
        <id>10.14569/IJACSA.2022.0130284</id>
        <doi>10.14569/IJACSA.2022.0130284</doi>
        <lastModDate>2022-02-28T09:51:47.8900000+00:00</lastModDate>
        
        <creator>Mohammed Abdulaziz Ikram</creator>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Farookh K. Hussain</creator>
        
        <subject>Cloud services; software as a service (SaaS); evaluation system; quality of experience (QoE); fuzzy logic; TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Cloud Software as a Service (SaaS) is a way of delivering software applications using Cloud computing infrastructure services. Cloud SaaS uses the global Internet connection to offer its services to client consumers. The selection of a Cloud SaaS provider is based on the evaluation that the Cloud SaaS consumer conducts before making the service contract. In this paper, a linguistic-based evaluation of Cloud SaaS quality attributes is proposed to help the consumer assess services for optimal service selection. The proposed approach combines fuzzy logic and the TOPSIS MCDM method, and helps the Cloud SaaS consumer select the optimal Cloud SaaS service provider. A case study is presented to demonstrate the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_84-Towards_Linguistic_based_Evaluation_System_of_Cloud_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature based Entailment Recognition for Malayalam Language Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130283</link>
        <id>10.14569/IJACSA.2022.0130283</id>
        <doi>10.14569/IJACSA.2022.0130283</doi>
        <lastModDate>2022-02-28T09:51:47.8570000+00:00</lastModDate>
        
        <creator>Sara Renjit</creator>
        
        <creator>Sumam Mary Idicula</creator>
        
        <subject>Textual entailment; natural language inference; Malayalam language; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Textual entailment is a relationship between two text fragments, namely, text/premise and hypothesis. It has applications in question answering systems, multi-document summarization, information retrieval systems, and social network analysis. In the era of the digital world, recognizing semantic variability is important in understanding inferences in texts. The texts are either in the form of sentences, posts, tweets, or user experiences. Hence, understanding inferences from customer experiences helps companies in customer segmentation. The availability of digital information is ever-growing, with textual data in almost all languages, including low resource languages. This work deals with various machine learning approaches applied to textual entailment recognition, or natural language inference, for Malayalam, a South Indian low resource language. A performance-based analysis using machine learning classification techniques such as Logistic Regression, Decision Tree, Support Vector Machine, Random Forest, AdaBoost, and Naive Bayes is done for the MaNLI (Malayalam Natural Language Inference) dataset. Different lexical and surface-level features are used for this binary and multiclass classification. With the increasing size of the dataset, there is a drop in the performance of feature-based classification. A comparison of feature-based models with deep learning approaches highlights this inference. The main focus here is the feature-based analysis with 14 different features and its comparison, essential to any NLP classification problem.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_83-Feature_based_Entailment_Recognition_for_Malayalam_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust-based Access Control Model with Quantification Method for Protecting Sensitive Attributes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130282</link>
        <id>10.14569/IJACSA.2022.0130282</id>
        <doi>10.14569/IJACSA.2022.0130282</doi>
        <lastModDate>2022-02-28T09:51:47.8430000+00:00</lastModDate>
        
        <creator>Mohd Rafiz Salji</creator>
        
        <creator>Nur Izura Udzir</creator>
        
        <creator>Mohd Izuan Hafez Ninggal</creator>
        
        <creator>Nor Fazlida Mohd. Sani</creator>
        
        <creator>Hamidah Ibrahim</creator>
        
        <subject>Access control; trust-based access control; quantification method; sensitive attributes; privacy; privacy protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The prevailing trend of seamless digital collection has prompted privacy concerns for organizations. In enforcing the automation of privacy policies and laws, access control has been one of the most devoted subjects. Despite recent advances in access control frameworks and models, there are still issues that hinder the implementation of successful access control. This paper illustrates the problem of previous models, which typically preserve data without explicitly considering the protection of sensitive attributes. It also highlights the drawback of previous works, which provide inaccurate calculations when specifying a user's trustworthiness. Therefore, a trust-based access control (TBAC) model is proposed to protect sensitive attributes. A quantification method that accurately calculates two user properties, namely seniority and behaviour, to specify a user's trustworthiness is also proposed. Experiments have been conducted to compare the proposed quantification method with previous quantification methods. The results show that the proposed quantification method is stricter and more accurate in specifying a user's trustworthiness than previous works. Based on these results, this study resolves the issue of specifying a user's trustworthiness, and also indicates that the issue of protecting sensitive attributes has been resolved.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_82-Trust_based_Access_Control_Model_with_Quantification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Software Framework for Self-Organized Flocking System Motion Coordination Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130281</link>
        <id>10.14569/IJACSA.2022.0130281</id>
        <doi>10.14569/IJACSA.2022.0130281</doi>
        <lastModDate>2022-02-28T09:51:47.8270000+00:00</lastModDate>
        
        <creator>Fredy Martinez</creator>
        
        <creator>Holman Montiel</creator>
        
        <creator>Edwar Jacinto</creator>
        
        <subject>Flocking; formation control; motion planning; multi-robot system; obstacle avoidance; swarm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>We describe and analyze the basic algorithms for the self-organization of a swarm of robots in coordinated motion as a flock of agents, as a strategy for the solution of multi-agent tasks. This analysis allows us to postulate a simulation framework for such systems based on the behavioral rules that characterize their dynamics. The problem is approached from the perspective of autonomous navigation in an unknown but restricted and locally observable environment. The simulation framework allows the characteristics of the basic behaviors identified as fundamental to flocking to be defined individually, as well as the specific characteristics of the navigation environment. It also allows the incorporation of different path planning approaches, enabling the system to navigate the environment under different strategies, both geometric and reactive. The basic behaviors modeled include safe wandering, following, aggregation, dispersion, and homing, which interact to generate flocking behavior, i.e., the swarm aggregates, reaches a stable formation, and moves in an organized fashion toward the target point. The framework concept follows the principle of constrained target tracking, which allows the problem to be solved similarly to how a small robot with limited computation would solve it. It is shown that the algorithm, and the framework that implements it, are robust to the defined constraints and manage to generate the flocking behavior while accomplishing the navigation task. These results provide key guidelines for the implementation of these algorithms on real platforms.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_81-A_Software_Framework_for_Self_Organized_Flocking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geolocation Mobile Application to Create New Routes for Cyclists</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130280</link>
        <id>10.14569/IJACSA.2022.0130280</id>
        <doi>10.14569/IJACSA.2022.0130280</doi>
        <lastModDate>2022-02-28T09:51:47.7970000+00:00</lastModDate>
        
        <creator>Jesus F. Lalupu Aguirre</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Android studio; cyclists; mobile application; geolocation; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Peru has undergone unexpected changes in recent decades that often generate chaos among the population, such as the excess of vehicles traveling daily on the roads and generating pollution. This has led people to seek alternatives, such as the use of the bicycle as a means of transportation. The objective is to develop a mobile application for the creation of alternative routes for cyclists. To this end, we carried out a survey of 50 people, including both those dedicated to cycling and those who do not practice it, in order to collect and analyze data and create mechanisms that help these users. The application was developed in Android Studio, implementing free libraries to achieve geolocation in a way that provides all the facilities for the cyclist to move. The Scrum methodology was used for the development process, and the prototype was designed in Adobe Photoshop. The survey results showed that 75% of the people are satisfied with the use of the application, 60% defined it as very good, and 100% answered that they would recommend the application. The investigation is important, since as future work it could contribute to the reduction of environmental contamination.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_80-Geolocation_Mobile_Application_to_Create_New_Routes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wifi Indoor Positioning with Genetic and Machine Learning Autonomous War-Driving Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130279</link>
        <id>10.14569/IJACSA.2022.0130279</id>
        <doi>10.14569/IJACSA.2022.0130279</doi>
        <lastModDate>2022-02-28T09:51:47.7800000+00:00</lastModDate>
        
        <creator>Pham Doan Tinh</creator>
        
        <creator>Bui Huy Hoang</creator>
        
        <subject>Wifi fingerprinting; indoor positioning; machine learning; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Wi-Fi fingerprinting is a widely used method for indoor positioning due to its proven accuracy. However, the offline phase of the method requires collecting a large quantity of data, which costs a lot of time and effort. Furthermore, interior changes in the environment can have an impact on system accuracy. This paper addresses the issue by proposing a new data collection procedure for the offline phase that only needs a small number of data points (Wi-Fi reference points). To obtain a sufficient amount of data for the offline phase, we propose a genetic algorithm and a machine learning model to generate labeled data from unlabeled user data. The experiment was carried out using real Wi-Fi data collected from our testing site and simulated motion data. Results show that, using the proposed method and only 8 Wi-Fi reference points, labeled data can be generated from users' live data with a positioning error of 1.23 meters in the worst case when the motion error is 30%. In the online phase, we achieved a positioning error of 1.89 meters when using the Support Vector Machine model at 30% motion error.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_79-Wifi_Indoor_Positioning_with_Genetic_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptanalysis of a Hamming Code and Logistic-Map based Pixel-Level Active Forgery Detection Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130278</link>
        <id>10.14569/IJACSA.2022.0130278</id>
        <doi>10.14569/IJACSA.2022.0130278</doi>
        <lastModDate>2022-02-28T09:51:47.7500000+00:00</lastModDate>
        
        <creator>Oussama Benrhouma</creator>
        
        <subject>Cryptanalysis; watermarking; tamper detection; attack; chaotic functions; forgery localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In this paper, we analyze the security of a fragile watermarking scheme for tamper detection in images recently proposed by S. Prasad et al. Chaotic functions are used in the scheme to exploit their pseudo-random behavior and their sensitivity to initial conditions and control parameters, but despite that, security flaws have been spotted and a cryptanalysis of the scheme is conducted. Experimental results show that the scheme could not withstand the attacks, and watermarked images were manipulated without triggering any alarm in the extraction scheme. In this paper, two different attack approaches are demonstrated and conducted to break the scheme. This work falls into the context of improving the quality of designed cryptographic schemes by taking into account several cryptanalysis techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_78-Cryptanalysis_of_a_Hamming_Code_and_Logistic_Map.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Optimization for Mobile Robots using Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130277</link>
        <id>10.14569/IJACSA.2022.0130277</id>
        <doi>10.14569/IJACSA.2022.0130277</doi>
        <lastModDate>2022-02-28T09:51:47.7330000+00:00</lastModDate>
        
        <creator>Fernando Martinez Santa</creator>
        
        <creator>Fredy H. Martinez Sarmiento</creator>
        
        <creator>Holman Montiel Ariza</creator>
        
        <subject>Optimization; path planning; genetic algorithms; visibility graphs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This article proposes a path planning strategy for mobile robots based on image processing, the visibility graphs technique, and genetic algorithms as the searching/optimization tool. This proposal aims to improve the overall execution time of the path planning strategy compared with others that use visibility graphs with different search algorithms. The global algorithm starts from a binary image of the robot environment, where the obstacles are represented in white over a black background. After that, four keypoints are calculated for each obstacle by applying image processing algorithms and geometric measurements. Based on the obtained keypoints, a visibility graph is generated, connecting all of these along with the starting point and the ending point, while avoiding collisions with the obstacles by taking into account a safety distance calculated by means of an image dilation operation. Finally, a genetic algorithm is used to optimize a valid path from the start to the end passing through the navigation network created by the visibility graph. This implementation was developed using the Python programming language and modules for image processing and genetic algorithms. After several tests, the proposed strategy shows execution times similar to the other tested algorithms, which validates its use in applications with a limited number of obstacles in the environment and low-to-medium resolution images.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_77-Path_Optimization_for_Mobile_Robots_using_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image-based Automatic Counting of Bacillus cereus Colonies using Smartphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130274</link>
        <id>10.14569/IJACSA.2022.0130274</id>
        <doi>10.14569/IJACSA.2022.0130274</doi>
        <lastModDate>2022-02-28T09:51:47.6870000+00:00</lastModDate>
        
        <creator>Phongsatorn Taithong</creator>
        
        <creator>Siriwan Wichai</creator>
        
        <creator>Rattapoom Waranusast</creator>
        
        <creator>Panomkhawn Riyamongkol</creator>
        
        <subject>Bacillus cereus bacteria; colonies; automatic counting; android phone application; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Substantial amounts of Bacillus cereus bacteria present in food indicate that the food is unsafe to eat, so counting B. cereus colonies in food samples is a common test for food cleanliness. Manual counting of B. cereus colonies requires approximately 2-5 minutes per Petri dish, depending on the number of colonies present. This study presents a new smartphone-based method called the Bacillus Cereus Image Counting System (BCICS, “B. kiks”) for automatic counting of B. cereus colonies. BCICS uses image processing techniques including Projection Profiling, Circle Hough Transformation, Adaptive Thresholding, and Power-Law Transformation to achieve high image clarity, and then uses the Connected-Component Labeling (CCL) technique to correctly count the colonies, including overlapping colonies. These techniques are built into a convenient Android smartphone application. The results of counting the colonies with BCICS were compared with the results of hand counting the same dishes. The accuracy rate of each dish count was calculated, as well as the average accuracy across all dishes. BCICS counted total colonies with an accuracy of 90.14%, which is close to that of hand counting, since hand counting itself commonly involves an error rate of 5-10%. Importantly, the application took only 3-5 seconds to count one Petri dish, which is more than 74 times faster than the time required for manual counting.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_74-Image_based_Automatic_Counting_of_Bacillus_cereus_Colonies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Blockchain Protocol for Partial Confidentiality and Transparency (PPCT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130273</link>
        <id>10.14569/IJACSA.2022.0130273</id>
        <doi>10.14569/IJACSA.2022.0130273</doi>
        <lastModDate>2022-02-28T09:51:47.6700000+00:00</lastModDate>
        
        <creator>Salima TRICHNI</creator>
        
        <creator>Mohammed BOUGRINE</creator>
        
        <creator>Fouzia OMARY</creator>
        
        <subject>Blockchain; security; privacy; confidentiality; transparency; integrity; authentication; validation process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Keeping up with new technologies is increasingly becoming an unavoidable requirement for organizations’ survival. This is not only a strategy to gain a competitive advantage in the market but also a determining key to their continuity and persistence. The Blockchain is at the heart of this technological revolution, for which transparency, accessibility to the public, and the sense of sharing are fundamental design properties. Despite its importance, leveraging this technology in an ethical and secure manner by ensuring confidentiality and privacy is a top concern. Through this work, we design a new approach to validating transactions within the Blockchain. Entitled &quot;Protocol for Partial Confidentiality &amp; Transparency PPCT&quot;, this new protocol makes it possible to strike a compromise between the two requirements: confidentiality and transparency. It introduces a new notion of confidentiality that we have named partial confidentiality, and applies it to the exchanged transactions while ensuring the process of their validation by the different nodes of the Blockchain. In addition, through the use of hashing and digital signature functions, this protocol also ensures integrity and authentication within its validation process. To present this work, we first discuss the state of the art on current privacy approaches and our motivation behind this work. We then explain the different stages of the process, its benefits, and areas for improvement.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_73-New_Blockchain_Protocol_for_Partial_Confidentiality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Foreign Currency Exchange Rate using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130272</link>
        <id>10.14569/IJACSA.2022.0130272</id>
        <doi>10.14569/IJACSA.2022.0130272</doi>
        <lastModDate>2022-02-28T09:51:47.6530000+00:00</lastModDate>
        
        <creator>Manaswinee Madhumita Panda</creator>
        
        <creator>Surya Narayan Panda</creator>
        
        <creator>Prasant Kumar Pattnaik</creator>
        
        <subject>Convolutional neural network; exchange rate; R square; random forest regression method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Foreign exchange rate forecasting has always been in demand because it is critical for foreign traders to know how their money will perform against other currencies. Traders and investors are always looking for fresh ways to outperform the market and make more money. As a result, economists, researchers, and investors have conducted a number of studies in order to forecast trends and factors that influence the rise or fall of the exchange rate (ER). In this paper, a new Convolutional Neural Network (CNN) model with a random forest regression layer is used for future closing price prediction. The intended model has been tested using three major currency pairs: the Australian Dollar against the Japanese Yen (AUD/JPY), the New Zealand Dollar against the US Dollar (NZD/USD), and the British Pound Sterling against the Japanese Yen (GBP/JPY). As a proof-of-concept, the forecast is made for 1 to 7 months, utilizing data from January 2, 2001 to May 31, 2020 for AUD/JPY and GBP/JPY, and data from January 1, 2003 to May 31, 2020 for NZD/USD. Furthermore, the performance of the suggested model was compared with the Autoregressive Integrated Moving Average (ARIMA), Multi-Layer Perceptron (MLP), and Linear Regression (LR) models, and the proposed CNN with Random Forest model was found to surpass all of them. The suggested model&#39;s prediction performance is assessed using the R2, MAE, and RMSE performance measures. The proposed model&#39;s average R2 values for the three currency pairs from one to seven months are 0.9616, 0.9640, and 0.9620, demonstrating that it is the best model among them. The study&#39;s findings have ramifications for both policymakers and investors in the foreign exchange market.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_72-Forecasting_Foreign_Currency_Exchange_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Visualization of Influent and Effluent Parameters of UASB-based Wastewater Treatment Plant in Uttar Pradesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130271</link>
        <id>10.14569/IJACSA.2022.0130271</id>
        <doi>10.14569/IJACSA.2022.0130271</doi>
        <lastModDate>2022-02-28T09:51:47.6400000+00:00</lastModDate>
        
        <creator>Parul Yadav</creator>
        
        <creator>Aditya Chaudhary</creator>
        
        <creator>Anand Keshari</creator>
        
        <creator>Nitish Kumar Chaudhary</creator>
        
        <creator>Priyanshu Sharma</creator>
        
        <creator>Kumar Saurabh</creator>
        
        <creator>Brijesh Singh Yadav</creator>
        
        <subject>Wastewater treatment plant; Bharwara STP; UASB-based plant; influent or effluent prediction; data visualization of influent and effluent; machine learning for WWTPs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>A rise in the population of a region implies an increase in water consumption, and such a continuous increase in water usage worsens the wastewater generated by the region. This escalation in wastewater (influent) requires Wastewater Treatment Plants (WWTPs) to operate efficiently in order to meet the demand for sewage disposal (effluent). This research paper visualizes and analyzes the influent parameters (COD, BOD, TSS, pH, and MPN) and the effluent parameters (COD, BOD, DO, pH, and MPN) of the Bharwara WWTP situated in Lucknow, India, which is the largest UASB-based wastewater treatment plant in Asia. We also design and implement an initial model using machine learning techniques to analyze and predict the influent and effluent parameters of the WWTP. Model performance is measured using the Mean Squared Error (MSE) and Correlation Coefficient (R). For analyzing and designing the model, the influent and effluent parameters were collected daily over a period of 26 months, covering variations between seasons and climate. As a result, the model shall provide a better quality of effluent while consuming plant resources efficiently.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_71-Data_Visualization_of_Influent_and_Effluent_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Ransomware within Real Healthcare Medical Records Adopting Internet of Medical Things using Machine and Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130270</link>
        <id>10.14569/IJACSA.2022.0130270</id>
        <doi>10.14569/IJACSA.2022.0130270</doi>
        <lastModDate>2022-02-28T09:51:47.6070000+00:00</lastModDate>
        
        <creator>Randa ELGawish</creator>
        
        <creator>Mohamed Abo-Rizka</creator>
        
        <creator>Rania ELGohary</creator>
        
        <creator>Mohamed Hashim</creator>
        
        <subject>Artificial neural networks; deep learning; healthcare system; internet of things; machine learning; supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The Internet of Medical Things was widely deployed in healthcare systems during the COVID-19 pandemic to remotely monitor patients&#39; conditions in critical care units while keeping the medical staff safe from infection. However, healthcare systems were severely affected by ransomware attacks that may override data or lock systems from caregivers&#39; access. In this work, after obtaining the required approval, we acquired a real medical dataset from actual critical care units. For the sake of research, a portion of the data was used, transformed, and manipulated using laboratory-made payload ransomware, and successfully labeled. The detection mechanism adopted the supervised machine learning techniques of K-Nearest Neighbors, Support Vector Machine, Decision Trees, Random Forest, and Logistic Regression, in contrast with the deep learning technique of Artificial Neural Networks. The KNN, SVM, and DT methods successfully detected the ransomware&#39;s signature with an accuracy of 100%, while the ANN detected the signature with an accuracy of 99.9%. The results of this work were validated using the precision, recall, and F1-score metrics.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_70-Detecting_Ransomware_within_Real_Healthcare_Medical_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy-set Theory to Support the Design of an Augmentative and Alternative Communication Systems for Aphasia Individuals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130269</link>
        <id>10.14569/IJACSA.2022.0130269</id>
        <doi>10.14569/IJACSA.2022.0130269</doi>
        <lastModDate>2022-02-28T09:51:47.5930000+00:00</lastModDate>
        
        <creator>Md. Sazzad Hossain</creator>
        
        <subject>Aphasia; augmentative and alternative communication; human factors; fuzzy-set theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This paper presents a new design of an Augmentative and Alternative Communication (AAC) system, based on Fuzzy-set theory, for conveying the delicate feelings or emotions of aphasia individuals. Fuzzy-set theory is crucial in addressing the ambiguity of the linguistic terms used and the judgments made by aphasia individuals. Due to the communication difficulties of aphasia individuals, their insights were assigned triangular fuzzy membership functions during the design process of the AAC system. In the proposed design, delicate feelings or emotions are expressed on a scale, and candidate feelings or emotions are shown based on their specified position. If the candidates cannot properly convey the desired feeling or emotion, the corresponding fuzzy membership function can be adjusted by controlling its position. The proposed method has the advantage of being able to convey the exact wants and needs of delicate feelings or emotions during communication. Experimental results show that conveying the delicate feelings or emotions of aphasia individuals could be improved by 50 percent using the proposed design of the AAC system.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_69-Fuzzy_set_Theory_to_Support_the_Design_of_an_Augmentative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Game-based Learning Increase Japanese Language Learning through Video Game</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130268</link>
        <id>10.14569/IJACSA.2022.0130268</id>
        <doi>10.14569/IJACSA.2022.0130268</doi>
        <lastModDate>2022-02-28T09:51:47.5770000+00:00</lastModDate>
        
        <creator>Yogi Udjaja</creator>
        
        <creator>Puti Andam Suri</creator>
        
        <creator>Ricky Satria Gunawan</creator>
        
        <creator>Felix Hartanto</creator>
        
        <subject>Video games; ELEKTRA; games-based learning; Japanese; JLPT N5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This research was intended to test the effectiveness of learning the Japanese language through a video game. The video game is built for Personal Computer (PC) users to provide Japanese language education for teens and adults. The research methods used include literature studies of various books, journals, websites, and theories that can support the writing, as well as defining the questions for questionnaires to collect useful data. The game application development methodology used is Game-Based Learning with the Enhanced Learning Experience and Knowledge Transfer (ELEKTRA) methodology, which consists of an in-depth analysis of the target audience and learning materials. The effectiveness of the video game is evaluated using pre-test and post-test methods. From this research, it can be seen that video games are effective in increasing users’ knowledge of the Japanese language. A video game also has the capability to increase the user’s interest in learning Japanese, because the visual form of the learning process keeps the user engaged with it.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_68-Game_based_Learning_Increase_Japanese_Language_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of QT Interval Measurement to Remove Errors in ECG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130267</link>
        <id>10.14569/IJACSA.2022.0130267</id>
        <doi>10.14569/IJACSA.2022.0130267</doi>
        <lastModDate>2022-02-28T09:51:47.5470000+00:00</lastModDate>
        
        <creator>S. Chitra</creator>
        
        <creator>V. Jayalakshmi</creator>
        
        <subject>e-Health; QT interval; GPU; ECG signal; CKD; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Wireless Body Sensor Networks (WBSNs) are devices that can be equipped with different sensing, storage, computing, and communication capabilities. Fusion is beneficial whenever information is collected from many sources, which may otherwise lead to erroneous sensory information. In this paper, an information fusion ensemble technique for processing raw healthcare information from WBSNs in an ambient cloud computing setting is described. Monitoring data were collected through various instruments and combined to provide high-level statistics. The simulation was conducted using a low-cost Internet of Things (IoT) surveillance system for chronic kidney disease (CKD). Biosensors have been used in healthcare surveillance systems to record health problems. Patients with CKD would benefit from the developed surveillance system, which facilitates the early diagnosis of the predominant diseases. The merged information was then fed into an aggregation algorithm that can forecast premature cardiac illness and CKD. These ensembles were housed within a cloud processing context, so the forecasting calculations were distributed. A lengthy practical investigation backs the system&#39;s applicability, and the findings were encouraging, with 98 percent efficiency when the tree height was 15, the total number of estimators was 40, and the overall prediction task was based upon 8 attributes. We compute a mean square ECG waveform from all available leads and use a new technique to measure the QT interval. We tested this algorithm using standard and unique ECG databases. Our real-time QT interval measurement algorithm was found to be stable, accurate, and capable of tracking changing QT values.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_67-Implementation_of_QT_Interval_Measurement_to_Remove_Errors_in_ECG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Re-identification Risk using Anonymization and Differential Privacy in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130266</link>
        <id>10.14569/IJACSA.2022.0130266</id>
        <doi>10.14569/IJACSA.2022.0130266</doi>
        <lastModDate>2022-02-28T09:51:47.5130000+00:00</lastModDate>
        
        <creator>Ritu Ratra</creator>
        
        <creator>Preeti Gulia</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <subject>Data privacy; anonymization; differential privacy; re-identification risk analysis; privacy preserving data publishing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In the present scenario, due to data privacy regulations, sharing data with other organizations for research or medical purposes has become a big hindrance for healthcare organizations. Preserving the privacy of patients is a crucial challenge for healthcare centres. Numerous techniques are used to preserve privacy, such as perturbation, anonymization, and cryptography. Anonymization is a well-known practical solution to this problem, and a number of anonymization methods have been proposed by researchers. In this paper, an improved approach is proposed which is based on the k-anonymity and differential privacy approaches. The purpose of the proposed approach is to protect the dataset more effectively against the risk of re-identification through linking attacks, using generalization and suppression techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_66-Evaluation_of_Re_identification_Risk_using_Anonymization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AquaStat: An Arduino-based Water Quality Monitoring Device for Fish Kill Prevention in Tilapia Aquaculture using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130265</link>
        <id>10.14569/IJACSA.2022.0130265</id>
        <doi>10.14569/IJACSA.2022.0130265</doi>
        <lastModDate>2022-02-28T09:51:47.5000000+00:00</lastModDate>
        
        <creator>Mark Rennel D. Molato</creator>
        
        <subject>Fuzzy logic; fuzzy sets; fish kill; freshwater aquaculture; Tilapia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In the Philippines, the Tilapia fish farming sector is vital to the economy, providing substantial employment and income and meeting the local demand for protein sources of Filipinos. However, the possible benefits that can be derived from this industry are at stake because of the sudden occurrences of fish kill events. These can be attributed to a wide variety of natural and unnatural causes, such as old age, starvation, body injury, stress, suffocation, water pollution, diseases, parasites, predation, toxic algae, severe weather, and other reasons. Given the identified severe effects of fish kill events on fish farmers, consumers, and the fisheries industry, advanced measures and methods must be established to alleviate the adverse effects of this phenomenon. To solve the underlying problem of water quality monitoring in freshwater aquaculture, various studies have already been conducted; however, these studies merely focused on the reading and gathering of water parameters. In this paper, fuzzy logic was used to develop a model that can analyze and generate results regarding the overall quality of the water used in Tilapia aquaculture. The water parameters considered in this paper were temperature, dissolved oxygen, and pH level. The water parameter readings obtained using the conventional method were compared to the data gathered by AquaStat to test its accuracy, and showed no significant difference. Also, the overall water quality obtained using the conventional method was compared to the overall water quality generated by AquaStat, and an accurate result was obtained.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_65-AquaStat_An_Arduino_based_Water_Quality_Monitoring_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Date Palm Water Management System Using Case-Based Reasoning and Linear Regression for Trend Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130264</link>
        <id>10.14569/IJACSA.2022.0130264</id>
        <doi>10.14569/IJACSA.2022.0130264</doi>
        <lastModDate>2022-02-28T09:51:47.4670000+00:00</lastModDate>
        
        <creator>Ferddie Quiroz Canlas</creator>
        
        <creator>Moayad Al Falahi</creator>
        
        <creator>Sarachandran Nair</creator>
        
        <subject>Date palm tree; case-based reasoning; IoT; mobile application; NodeMCU; water management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Palm trees (Phoenix dactylifera L.), Al Nakheel in Arabic, are known to have cultural and economic importance to Gulf and Arabic-speaking countries. However, with the traditional method of cultivation, improper use and depletion of water are perceived as the major challenge, as farmers use almost two and a half times the required amount of water without considering numerous factors. This paper develops an implementation model of a water management system for date palm trees using Case-Based Reasoning (CBR). The model involves an IoT-based module comprising a NodeMCU with soil moisture, temperature, and humidity sensors that automates the setting of the water amount for the whole year based on palm age, temperature, air humidity, and soil moisture. CBR calculates the amount of water supplied to the palm trees (based on an initial knowledge base cited from previous empirical studies) and stores it in a cloud-based database. These data and the hardware status can be accessed using a mobile application. When the temperature or soil moisture sensor fails, data trends are retrieved from the database and processed using Linear Regression Analysis. The test results have shown that the proposed model helped achieve a significant decrease in water consumption compared to the traditional method.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_64-IoT_based_Date_Palm_Water_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Objective Type Question Generation using Natural Language Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130263</link>
        <id>10.14569/IJACSA.2022.0130263</id>
        <doi>10.14569/IJACSA.2022.0130263</doi>
        <lastModDate>2022-02-28T09:51:47.4530000+00:00</lastModDate>
        
        <creator>G. Deena</creator>
        
        <creator>K. Raja</creator>
        
        <subject>Intelligent tutoring system; true or false; dependency parser; natural language processing; question generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Automatic Question Generation (AQG) is a research trend that enables teachers to create assessments more efficiently, with the right set of questions drawn from the study material. Today&#39;s educational institutions require a powerful tool to correctly assess a learner’s mastery of the concepts learned through study materials. Objective type questions are an excellent method of assessing a learner&#39;s topic understanding in the learning process, based on Information and Communication Technology (ICT) and Intelligent Tutoring Systems (ITS). Creating a set of questions for assessment can take a significant amount of time for teachers, and questions obtained from external sources such as assessment books or question banks may not be relevant to the content covered by students during their studies. The proposed system generates familiar objective type questions, such as True or False, ‘Wh’, fill-in-the-blank with double blank spaces, and match-the-following questions, using Natural Language Processing (NLP) techniques on the given study material. Different rules are created to generate T/F and ‘Wh’ type questions, and a dependency parser method is involved in generating the ‘Wh’ questions. The proposed system was tested with a Grade V Computer Science textbook as input. Experimental results show that the proposed system is quite promising for generating a sufficient amount of objective type assessment questions.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_63-Objective_Type_Question_Generation_using_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Predictive Scheme for Confirming State of Bipolar Disorder using Recurrent Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130262</link>
        <id>10.14569/IJACSA.2022.0130262</id>
        <doi>10.14569/IJACSA.2022.0130262</doi>
        <lastModDate>2022-02-28T09:51:47.4370000+00:00</lastModDate>
        
        <creator>Yashaswini K. A</creator>
        
        <creator>Aditya Kishore Saxena</creator>
        
        <subject>Bipolar disorder; depression; recurrent neural network; decision tree; prediction; sadness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Bipolar disorder is one of the most challenging illnesses, for which medical science is still struggling to achieve landmark therapies. A review of existing prediction-based approaches to investigating bipolar disorder shows that they are largely symptomatic and treat depression as equivalent to sadness, resting on theories that overlook many precise indicators for confirming bipolar disorder. Therefore, this manuscript presents a novel framework capable of treating a depression dataset and fine-tuning it appropriately before subjecting it to a machine learning-based predictive scheme. The proposed system subjects its dataset to a series of data cleaning operations followed by preprocessing using a standard scale for rating bipolar level. Further use of feature engineering and correlation analysis renders more contextual inference from its statistical score. The proposed system also introduces a Recurrent Decision Tree that further contributes to the predictive outcome of bipolar disorder. The results obtained show that the proposed scheme performs better than the conventional decision tree.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_62-A_Novel_Predictive_Scheme_for_Confirming_State_of_Bipolar_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure and Robust Architecture based on Mobile Healthcare Applications for Patient Monitoring Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130261</link>
        <id>10.14569/IJACSA.2022.0130261</id>
        <doi>10.14569/IJACSA.2022.0130261</doi>
        <lastModDate>2022-02-28T09:51:47.4200000+00:00</lastModDate>
        
        <creator>Shaik Shakeel Ahamad</creator>
        
        <creator>Majed Alowaidi</creator>
        
        <subject>Mobile healthcare applications (MHA); UICC (universal integrated circuit card); Kotlin language; android studio; ECDSA (elliptic curve digital signature algorithm); GCM mode; end to end security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The recent outbreak of the COVID-19 pandemic underscored the importance of patient monitoring environments, in which Mobile Healthcare Applications (MHA) play a crucial role. Existing MHAs in patient monitoring environments are prone to repackaging attacks and do not ensure application and communication security. This paper proposes a secure and robust architecture for mobile healthcare applications in patient monitoring environments that ensures end-to-end security and all required security properties while overcoming repackaging attacks, which is vital for the success of mobile healthcare applications. We implemented our proposed protocol in Android Studio using Kotlin, which is designed to interoperate fully with Java. The ECDH key exchange algorithm is used for key exchange between the MHA on the patient’s smartphone and the MHA on the hospital TPM. We created EC key pairs (NIST P-256, aka secp256r1) at the patient’s MHA and the hospital TPM’s MHA using ECDH and derived a shared AES secret key. AES in GCM mode is used for encryption and decryption of patient data.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_61-A_Secure_and_Robust_Architecture_based_on_Mobile_Healthcare_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Cyber-attack Leads Prediction System using Cascaded R2CNN Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130260</link>
        <id>10.14569/IJACSA.2022.0130260</id>
        <doi>10.14569/IJACSA.2022.0130260</doi>
        <lastModDate>2022-02-28T09:51:47.3900000+00:00</lastModDate>
        
        <creator>P. Shanmuga Prabha</creator>
        
        <creator>S. Magesh Kumar</creator>
        
        <subject>Cyber security in smart devices; cyber security; cyber-attacks; internet of things; IoT devices; machine learning; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Novel prediction systems are required on almost all internet-connected platforms to safeguard user information from being compromised by intermediaries. Finding the factors genuinely associated with cyber-attack probes is the focus of this research. The proposed methodology is derived from various literature studies that motivated the search for a unique prediction model with improved accuracy and performance. The proposed model, R2CNN, is a cascaded combination of a gradient-boosted regression detector with a recurrent convolutional neural network for pattern prediction. The input data is a collection of various applications engaged with wireless sensor nodes in a smart city. Each user is connected to a certain number of applications that access the authorization of the device owner. The dataset comprises device information, number of connections, device type, simulation time, connectivity duration, etc. The proposed R2CNN extracts features from the dataset and forms a feature mapping related to the parameter of interest. The features are tested for correlation with the training dataset to evaluate the early prediction of cyber-attacks across massively connected IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_60-A_Novel_Cyber_attack_Leads_Prediction_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Channeled Multilayer Perceptron as Multi-Modal Approach for Two Time-Frames Algo-Trading Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130259</link>
        <id>10.14569/IJACSA.2022.0130259</id>
        <doi>10.14569/IJACSA.2022.0130259</doi>
        <lastModDate>2022-02-28T09:51:47.3730000+00:00</lastModDate>
        
        <creator>Noussair Fikri</creator>
        
        <creator>Khalid Moussaid</creator>
        
        <creator>Mohamed Rida</creator>
        
        <creator>Amina El Omri</creator>
        
        <creator>Noureddine Abghour</creator>
        
        <subject>FOREX; neural networks; EMA; Bollinger band; stochastic RSI; momentum reversal; MLP; back-propagation; feed-forward</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>FOREX (Foreign Exchange) is a 24-hour open market with an enormous daily volume. Most trading strategies, when used individually, do not provide accurate signals. In this paper, we propose an automated trading strategy that adapts to random market behaviors. It is based on neural networks applying a triple exponential weighted moving average (EMA) as a trend indicator, Bollinger Bands as a volatility indicator, and stochastic RSI as a momentum-reversal indicator to prevent false indications in a short time frame. The approach combines trend, volatility, and momentum-reversal patterns with a market-adaptive, distributed multi-layer perceptron (MLP), called the channeled multi-layer perceptron (CMLP): a neural network using channels and routines trained on previous profit/loss earned by triple EMA crossover, Bollinger Bands, and stochastic RSI signals. Instead of using classic computations and back-propagation to adjust the MLP parameters, we established a channeled multi-layer perceptron inspired by a multi-modal learning approach, where each group of modalities (channel) has its own K_c, a dynamic channel coefficient, producing a multi-processed feed-forward neural network that avoids uncertain trading signals under trend-volatility-momentum random patterns. CMLP has been compared to multi-modal GARCH-ARIMA and has proven its efficiency in unstable markets.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_59-A_Channeled_Multilayer_Perceptron_as_Multi_Modal_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Unmanned Aerial Vehicle Service for Medical System to Improve Smart City Facilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130258</link>
        <id>10.14569/IJACSA.2022.0130258</id>
        <doi>10.14569/IJACSA.2022.0130258</doi>
        <lastModDate>2022-02-28T09:51:47.3570000+00:00</lastModDate>
        
        <creator>Birasalapati Doraswamy</creator>
        
        <creator>K. Lokesh Krishna</creator>
        
        <creator>M. N. Giriprasad</creator>
        
        <subject>Drones; security; FFUDAS; malicious attack; fruit fly fitness; path identification; medicine delivery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The use of drone technology is currently widespread due to its increasing range of applications. However, there are specific security challenges in the authentication process. Most drone-based applications employ authentication approaches that suffer from handover delay issues and security complexities under attack. To address these issues, this research develops a novel optimized deep learning model, Fruit Fly based UNet Drone Assisted Security (FFUDAS), to remove malicious attacks. User requests are stored in the cloud, and the stored data are used to train the drones. The drones can then deliver medicine to the requestor’s location; along the way, malicious attacks may change the drones’ location. Once an attack is identified, the attack removal process is performed. Finally, a new path to the requesting user’s location is identified with the help of the fruit fly fitness function, and the medicines are delivered. The designed procedure is executed on an NS2 platform with the required nodes. The robustness of the presented model was verified by evaluating metrics such as confidential data rate, execution time, handover delay, packet reception and data delivery rate, and energy consumption. To establish its effectiveness, the presented model is compared with other existing schemes; the comparison results show that it achieves higher throughput with lower execution time and handover delay.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_58-A_Secure_Unmanned_Aerial_Vehicle_Service_for_Medical_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Vehicular Communication using Gaussian Interpolation of Cluster Head Selection (GI-CHS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130257</link>
        <id>10.14569/IJACSA.2022.0130257</id>
        <doi>10.14569/IJACSA.2022.0130257</doi>
        <lastModDate>2022-02-28T09:51:47.3270000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Cluster head; VANETS; adaptive routing; weighted clustering; Gaussian interpolation; V2V; V2I</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Decentralized and centralized vehicular communication is investigated in this work using a Gaussian interpolation function with a cluster head (CH) selection technique. The work uncovered that the best approach is to combine centralized and decentralized vehicular communication, as doing so achieves much more uniform results as a function of communication radius and vehicular speed. It is also found that vehicular speed contributes negatively to the efficiency of data communication when the ratio of relative vehicle speed to communication radius is limited. A mathematical expression is presented that relates the probability of successful transmission to the communication radius for both centralized and decentralized techniques, with tabulated data demonstrating the importance of the spread parameter within the Gaussian interpolation and proving the adaptability of the function used. It is also shown that weights affect CH selection; Gaussian interpolation is therefore proved to be important as a weighting function in adaptive and dynamic vehicular ad-hoc networks (VANETs), covering both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication through cluster head selection.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_57-Dynamic_Vehicular_Communication_using_Gaussian_Interpolation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FNU-BiCNN: Fake News and Fake URL Detection using Bi-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130256</link>
        <id>10.14569/IJACSA.2022.0130256</id>
        <doi>10.14569/IJACSA.2022.0130256</doi>
        <lastModDate>2022-02-28T09:51:47.3100000+00:00</lastModDate>
        
        <creator>R. Sandrilla</creator>
        
        <creator>M. Savitha Devi</creator>
        
        <subject>Bi-LSTM; CNN; WORDNET; machine learning; fake news and URL; ARIMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Fake news (FN) has become a major problem in today’s world, owing partly to the widespread use of social media. A wide variety of news organizations and news websites post their stories on social media, so it is important to verify that the information posted is genuine and obtained from reputable sources. The intensity and sincerity of internet news cannot be fully quantified and remain a challenge. We present an FNU-BiCNN model for identifying FN and fake URLs by analyzing the correctness of a report and predicting its validity. Stop words and stemming with NLTK features were employed during data pre-processing. We then compute the TF-IDF using LSTM, batch normalization, and dense layers. The WORDNET Lemmatizer is used to select the features. Bi-LSTM with ARIMA and CNN are used to train the datasets, and various machine learning techniques are used to classify them. By deriving credibility ratings from textual data, this model develops an ensemble strategy for concurrently learning the representations of news stories, authors, and titles. A voting ensemble classifier was compared with several machine learning algorithms such as SVM, DT, RF, KNN, and Naive Bayes, and the voting ensemble classifier achieved the highest accuracy of 99.99%. The classifiers’ accuracy, recall, and F1-score were used to assess their performance and efficacy.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_56-FNU_BiCNN_Fake_News_and_Fake_URL_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Routing Topology Control for Node Energy Minimization For WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130255</link>
        <id>10.14569/IJACSA.2022.0130255</id>
        <doi>10.14569/IJACSA.2022.0130255</doi>
        <lastModDate>2022-02-28T09:51:47.2970000+00:00</lastModDate>
        
        <creator>K Abdul Basith</creator>
        
        <creator>T. N. Shankar</creator>
        
        <subject>SCI (stochastic conditional inequality); LEACH; clustering; DDOS (distributed denial of service); DEEC (distributed energy efficient clustering); TETRA (terrestrial trunked radio); WSN (wireless sensor nework)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Wireless sensing has become an essential feature for minimizing energy in WSN applications. The foundation of a WSN lies in its unique design features, whose capabilities are tied to the application of interest. The implementation of pervasive algorithms with ubiquitous network features is shaped by changes in frequency topology bands and congestion regression of the network, which affect parametric criteria such as bitrate, cluster-head energy, minimum energy, and bandwidth usage. Our improved hybrid pervasive algorithms prevent different attacks and keep control within the least tolerant error; since topology becomes an integral part of the design, they provide efficient routing for the network. To solve the problem effectively, a hybrid tangential transform with improved topologies is used to tune the network parameters. A second algorithm achieves energy efficiency through optimization of a stochastic conditional inequality for different network sizes. Performance evaluation of the proposed algorithms for WSN estimates a tolerant error of a factor of 12% on each network parameter.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_55-Hybrid_Routing_Topology_Control_for_Node_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques for Sentiment Analysis of Code-Mixed and Switched Indian Social Media Text Corpus - A Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130254</link>
        <id>10.14569/IJACSA.2022.0130254</id>
        <doi>10.14569/IJACSA.2022.0130254</doi>
        <lastModDate>2022-02-28T09:51:47.2800000+00:00</lastModDate>
        
        <creator>Gazi Imtiyaz Ahmad</creator>
        
        <creator>Jimmy Singla</creator>
        
        <creator>Anis Ali</creator>
        
        <creator>Aijaz Ahmad Reshi</creator>
        
        <creator>Anas A. Salameh</creator>
        
        <subject>Sentiment analysis; code mixing; corpus; deep learning; machine learning; NLP; social media text</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>A comprehensive review of sentiment analysis for code-mixed and code-switched Indian social media text corpora using machine learning (ML) approaches, based on recent research studies, is presented in this paper. Code-mixing and code-switching are linguistic behaviors shown by bilingual/multilingual populations, primarily in spoken but also in written communication, especially on social media. Code-mixing involves combining lower linguistic units like words and phrases of one language into the sentences of another language (the base language), while code-switching involves switching to another language for the length of one sentence or more. In code-mixing and switching, a bilingual person takes one or more words or phrases from one language and introduces them into another language while communicating in that language in spoken or written mode. People nowadays express their views and opinions on many issues on social media. In multilingual countries, people express their views using English as well as their native languages. Several reasons can be attributed to code-mixing: lack of knowledge in one language on a particular subject, empathy, interjection, and clarification, to name a few. Sentiment analysis of monolingual social media content has been carried out for the last two decades. During recent years, however, Natural Language Processing (NLP) research focus has also shifted towards the exploration of code-mixed data, making code-mixed sentiment analysis an evolving field of research. Systems have been developed using ML techniques to predict the polarity of code-mixed text corpora and to fine-tune existing models to improve their performance.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_54-Machine_Learning_Techniques_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spark based Framework for Supervised Classification of Hyperspectral Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130253</link>
        <id>10.14569/IJACSA.2022.0130253</id>
        <doi>10.14569/IJACSA.2022.0130253</doi>
        <lastModDate>2022-02-28T09:51:47.2500000+00:00</lastModDate>
        
        <creator>N. Aswini</creator>
        
        <creator>R. Ragupathy</creator>
        
        <subject>Hyperspectral images; spark; supervised classifiers; spectral features; ANOVA F-test; distributed processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The advancement of remote sensing sensors has made it easy to acquire large amounts of image data. The primary aspects of big data, such as volume, velocity, and variety, are represented in the acquired images. Furthermore, standard data processing approaches face various limits when dealing with such large amounts of data. As a result, effective machine learning-based algorithms are required to process the data with higher accuracy and lower computational cost. We therefore propose an ANOVA F-test based spectral feature selection method with a distributed implementation of the machine learning algorithm on Spark. Experimental results are obtained using benchmark datasets acquired with the AVIRIS and ROSIS sensors. The performance of Spark MLlib based supervised machine learning techniques is evaluated using accuracy, specificity, sensitivity, F1-score, and execution time. In addition, we compare the execution time of distributed processing against processing with a single processor. The results reveal that the proposed strategy significantly cuts down analysis time while maintaining classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_53-Spark_based_Framework_for_Supervised_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Integrating the Distributed Hash Table (DHT) with an Enhanced Bloom’s Filter in Manet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130252</link>
        <id>10.14569/IJACSA.2022.0130252</id>
        <doi>10.14569/IJACSA.2022.0130252</doi>
        <lastModDate>2022-02-28T09:51:47.2330000+00:00</lastModDate>
        
        <creator>Renisha P Salim</creator>
        
        <creator>Rajesh R</creator>
        
        <subject>Mobile ad hoc network (MANET); distributed hash table (DHT); bloom’s filter; link stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>MANET, a self-organizing, infrastructure-less wireless network, is a fast-growing technology in day-to-day life. The area of mobile computing is growing rapidly due to the economical and wide availability of wireless devices, which has led to extensive analysis of the mobile ad-hoc network. A MANET consists of a collection of wireless dynamic nodes; because of this dynamic nature, packet routing in a MANET is complex. The distributed hash table (DHT) is integrated into the MANET to enhance the routing overlay. Updating node status in a centralized hash table creates storage overhead. The Bloom’s filter is a space-efficient randomized data structure, but it allows false positives; nevertheless, it can compensate for the storage overhead in the DHT. Hence, to overcome the storage overhead occurring in the DHT and reduce false positives, the Bloom’s filter is first integrated with the DHT. Furthermore, link stability is measured by the distance among mobile nodes. Optimal node selection for packet transmission is the missing factor: if the optimal path is not selected, the removal of malicious nodes may lead to unwanted entry of nodes into other clustering groups. Therefore, to solve this problem, the Bloom’s filter is modified to enhance link stability. The novelty of this proposed work is the integration of the Bloom’s filter with the Distributed Hash Table, which provides good security for transmitted data by removing false-positive errors and storage overhead.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_52-A_Framework_for_Integrating_the_Distributed_Hash_Table.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Modified Wiener Filtering in Frequency Domain in Speech Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130251</link>
        <id>10.14569/IJACSA.2022.0130251</id>
        <doi>10.14569/IJACSA.2022.0130251</doi>
        <lastModDate>2022-02-28T09:51:47.2030000+00:00</lastModDate>
        
        <creator>C. Ramesh Kumar</creator>
        
        <creator>M. P. Chitra</creator>
        
        <subject>Digital hearing aid; least mean square value; noise reduction method; power spectral density</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The most common complaint about digital hearing aids is feedback noise, and many attempts have been undertaken in recent years to reduce it successfully. A Wiener filter, which calculates the Wiener gain using the SNR before and after filtering, is one technique to reduce background noise. The Modified Noise Reduction Method (MNRM), a new way of reducing feedback noise, is presented in this work. In the modified noise reduction strategy, the advantages of a Wiener filter are merged with a decision-directed approach and a twin-stage noise suppression technique. Comprehensive MATLAB programming, investigation, and findings analysis show that the modified noise reduction method can reduce noise more successfully. After being modelled in MATLAB for seven distinct noise types, the SNRs of the two architectures are compared.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_51-Implementation_of_Modified_Wiener_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Analytics and Performance Measurement of different Machine Learning Algorithms for Predicting Heart Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130250</link>
        <id>10.14569/IJACSA.2022.0130250</id>
        <doi>10.14569/IJACSA.2022.0130250</doi>
        <lastModDate>2022-02-28T09:51:47.1870000+00:00</lastModDate>
        
        <creator>S. M. Hasan Sazzad Iqbal</creator>
        
        <creator>Nasrin Jahan</creator>
        
        <creator>Afroja Sultana Moni</creator>
        
        <creator>Masuma Khatun</creator>
        
        <subject>Machine learning; heart disease prediction; KNN; naive bayes; decision tree; random forest; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Heart disease means any condition that directly affects the heart. Globally, heart disease is the main cause of death. According to a survey, approximately 17.9 million people died from heart disease in 2019 (representing 32 percent of global deaths), and the number of people dying is increasing at an alarming rate every day. So it is necessary to detect and prevent heart disease as soon as possible. Medical experts working in the field of coronary heart disease can predict the rate of coronary heart disorder with up to 69% accuracy, which is not very useful. With the invention of various machine learning techniques, intelligent machines can predict the chance of heart disease with up to 84% accuracy, which helps to prevent heart disease earlier. In this paper, the univariate feature selection approach was employed to pick essential characteristics from among all features in the dataset. Machine learning algorithms such as K-Nearest Neighbors, Naive Bayes, Decision Tree, Random Forest, and Support Vector Machine were used to assess performance and forecast which one performs best. These machine learning approaches require less time to predict disease with more precision, helping to save valued lives all around the world.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_50-An_Effective_Analytics_and_Performance_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining Model for Predicting Customer Purchase Behavior in E-Commerce Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130249</link>
        <id>10.14569/IJACSA.2022.0130249</id>
        <doi>10.14569/IJACSA.2022.0130249</doi>
        <lastModDate>2022-02-28T09:51:47.1530000+00:00</lastModDate>
        
        <creator>Orieb Abu Alghanam</creator>
        
        <creator>Sumaya N. Al-Khatib</creator>
        
        <creator>Mohammad O. Hiari</creator>
        
        <subject>Apriori PT algorithm; C4.5; CS-MC4; Data mining; decision tree; E-commerce; K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Nowadays, the e-commerce environment plays an important role in exchanging commodity knowledge among consumers. Accurately predicting customer purchase patterns in the e-commerce market is one of the critical applications of data mining. In order to achieve high profit in e-commerce, the relationship between customer and merchandise is very important. Moreover, many e-commerce websites are growing rapidly and instantly, and competition is just a mouse-click away. To stay in business and improve profit, companies therefore need to accurately predict purchase behavior and target their customers with personalized services according to their preferences. In this paper, a data mining model is proposed to enhance prediction accuracy and to find association rules for frequent item sets. The K-means clustering algorithm is used to reduce the size of the dataset in order to enhance the runtime of the proposed model. The proposed model uses four different classifiers, namely C4.5, J48, CS-MC4, and MLR, and the Apriori algorithm to provide recommendations for items based on previous purchases. The proposed model was tested on the Northwind traders dataset and achieved an accuracy of 95.2% when the number of clusters was 8.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_49-Data_Mining_Model_for_Predicting_Customer_Purchase_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Decision Making System for the Selection of Production Parameters using Digital Twin and Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130248</link>
        <id>10.14569/IJACSA.2022.0130248</id>
        <doi>10.14569/IJACSA.2022.0130248</doi>
        <lastModDate>2022-02-28T09:51:47.1400000+00:00</lastModDate>
        
        <creator>ABADI Mohammed</creator>
        
        <creator>ABADI Chaimae</creator>
        
        <creator>ABADI Asmae</creator>
        
        <creator>BEN-AZZA Hussain</creator>
        
        <subject>Production parameters selection; digital twin; case-based reasoning; ontologies; automation; cyber-physical systems; decision making; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Currently, the industrial and economic environment is highly competitive, forcing companies to keep up with technological progress and to be efficient in terms of quality and responsiveness, not only to survive, but also to dominate the market. To achieve this goal, companies are always looking to master their production processes and to enlarge their range of products, either by developing new products or by improving old ones. This confronts companies with many problems, including the identification of adequate and optimal production parameters for the development of their products. In this context, a decision making system based on digital twins (DT), case-based reasoning (CBR) and ontologies is proposed. The originality of this work lies in the fact that it combines three emerging artificial intelligence tools for modeling, reasoning and decision making. Thus, this work proposes a new flexible and automated system that ensures an optimal selection of production parameters for a given complex product. An industrial case study is developed to illustrate the effectiveness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_48-A_Smart_Decision_Making_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Processing of Clinical Notes for Efficient Diagnosis with Dual LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130247</link>
        <id>10.14569/IJACSA.2022.0130247</id>
        <doi>10.14569/IJACSA.2022.0130247</doi>
        <lastModDate>2022-02-28T09:51:47.1070000+00:00</lastModDate>
        
        <creator>Chandru A. S</creator>
        
        <creator>Seetharam K</creator>
        
        <subject>Clinical notes; natural language processing; diagnosis; long short term memory; convolution neural network; autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Clinical records contain patient information such as laboratory values, doctor notes, or medications. However, clinical notes are underutilized because they are complex, high-dimensional, and sparse, even though these records may play an essential role in modeling clinical decision support systems. The study aimed to develop an effective predictive learning model that can process these sparse data and extract useful information to benefit the clinical decision support system for effective diagnosis. The proposed system conducts phase-wise data modeling, and suitable text data treatment is carried out for data preparation. The study further utilizes a Natural Language Processing (NLP) mechanism in which word2vec with an Autoencoder is used as a clustering scheme for topic modeling. Another significant contribution of the proposed work is a novel learning mechanism devised by integrating Long Short Term Memory (LSTM) and Convolution Neural Network (CNN) to learn the inter-dependencies of the data sequences and predict diagnosis and patient testimony as output for the clinical decision. The proposed system is developed using the Python programming language. The study outcome, based on comparative analysis, exhibits the effectiveness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_47-Processing_of_Clinical_Notes_for_Efficient_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Model for Prediction and Visualization of HIV Index Testing in Northern Tanzania</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130246</link>
        <id>10.14569/IJACSA.2022.0130246</id>
        <doi>10.14569/IJACSA.2022.0130246</doi>
        <lastModDate>2022-02-28T09:51:47.0930000+00:00</lastModDate>
        
        <creator>Happyness Chikusi</creator>
        
        <creator>Judith Leo</creator>
        
        <creator>Shubi Kaijage</creator>
        
        <subject>Index testing; machine learning; random forest; xgboost; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Human Immunodeficiency Virus / Acquired Immunodeficiency Syndrome (HIV/AIDS) is still a threatening disease in Tanzanian society. There have been various strategies to increase the number of people who know their HIV status. Among these strategies, HIV index testing has proven to be the best modality for identifying the contacts of an HIV-positive person who might be at risk of contracting HIV. However, the current HIV index testing process is manual, creating many challenges, including errors, time consumption, and high operating costs. Therefore, this paper presents the results of a machine learning model to predict and visualise HIV index testing. The development process followed the Agile software development methodology. The data were collected from the Kilimanjaro, Arusha and Manyara regions in Tanzania, with a total of 6346 samples and 11 features. The dataset was divided into a training set of 5075 samples and a testing set of 1270 samples (80/20). The datasets were run through Random Forest (RF), XGBoost, and Artificial Neural Network (ANN) algorithms. Evaluation by Mean Absolute Error (MAE) showed an RF MAE of 1.1261, an XGBoost MAE of 1.2340, and an ANN MAE of 1.1268, with RF giving the best result of the three algorithms. Data visualisation shows that 17.4% of males and 82.6% of females had been notified. In addition, the Kilimanjaro region had more cases of people learning their HIV status from their partners. Overall, this study improved our understanding of the significance of ML in the prediction and visualisation of HIV index testing. The developed model can assist decision-makers in devising a suitable intervention strategy towards ending HIV/AIDS in our societies. The study recommends that health centres in other regions use this model to simplify their work.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_46-Machine_Learning_Model_for_Prediction_and_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using HBase to Implement Speed Layer in Time Series Data Storage Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130245</link>
        <id>10.14569/IJACSA.2022.0130245</id>
        <doi>10.14569/IJACSA.2022.0130245</doi>
        <lastModDate>2022-02-28T09:51:47.0770000+00:00</lastModDate>
        
        <creator>Milko Marinov</creator>
        
        <subject>Lambda architecture; speed layer; time series data; data storage system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In recent years, modern systems have become increasingly integrated, and the challenges are focused on delivering real-time analytics based on big data. Thus, using standard software tools to extract information from such datasets is not always possible. The Lambda Architecture proposed by Marz is an architectural solution that can manage the processing of large data volumes by combining real-time and batch processing techniques. Choosing a suitable database management system for storing large volumes of time series data is not a trivial issue, as various aspects such as low latency, high performance and the possibility of horizontal scalability must be taken into account. The new NoSQL approaches use non-relational databases for this purpose, with significant advantages in terms of flexibility and performance compared with traditional relational databases. With reference to this, the purpose of this paper is to analyse the general characteristics of time series data and the main activities performed by the Speed layer in a system based on the Lambda Architecture. On this basis, the use of a column-oriented NoSQL DBMS as a system for storing time series data is justified. The paper also addresses the challenges of using HBase as a system for storing and analysing time series data. These questions are related to the design of an appropriate database schema, the need to achieve balance between ease of access to the data and performance, as well as the factors that affect the overload of individual nodes in the system.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_45-Using_HBase_to_Implement_Speed_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Free Hardware based System for Air Quality and CO2 Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130244</link>
        <id>10.14569/IJACSA.2022.0130244</id>
        <doi>10.14569/IJACSA.2022.0130244</doi>
        <lastModDate>2022-02-28T09:51:47.0600000+00:00</lastModDate>
        
        <creator>Cristhoper Alvarez-Mendoza</creator>
        
        <creator>Jhon Vilchez-Lucana</creator>
        
        <creator>Fernando Sierra-Li&#241;an</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Air quality; air pollution; co2; control system; free hardware; v-model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Due to the increase in air pollution, especially in low- and middle-income Latin American countries, great environmental and health risks have arisen, with pollution being higher in closed environments. Given this problem, a system based on free hardware for monitoring air quality and CO2 has been developed in order to reduce the levels of air pollution in a closed environment, improving people&#39;s quality of life and contributing to awareness of the man-made damage caused to the environment. The system is based on the V-Model, complemented with a ventilation prototype implemented with sensors and an application for monitoring. The sample collected in the present investigation was non-probabilistic, derived from reports of air indicators over 15 days at specific times of 9am, 1pm and 6pm. The results indicated that after the implementation of the system, the air quality reading decreased to 670 ppm, the collection time decreased to 5 seconds, and the presence of CO2 was reduced to 650 ppm, bringing the values within the standards recommended by the World Health Organization.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_44-Free_Hardware_based_System_for_Air_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Dynamic Source Routing by Neighborhood Monitoring in Wireless Adhoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130243</link>
        <id>10.14569/IJACSA.2022.0130243</id>
        <doi>10.14569/IJACSA.2022.0130243</doi>
        <lastModDate>2022-02-28T09:51:47.0300000+00:00</lastModDate>
        
        <creator>Rajani K C</creator>
        
        <creator>Aishwarya P</creator>
        
        <subject>Mobile adhoc network; wireless adhoc network; security; attack; dynamic source routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Wireless Adhoc Networks (WANETs) contribute significantly to cost-effective network formation due to their decentralized and infrastructure-less schemes. The Mobile Adhoc Network (MANET), one of the primary forms of WANET, is still evolving in research, with a continued set of open problems associated with security. A review of existing security approaches shows that identifying malicious behavior in MANET remains an open-ended problem despite various methods. This paper introduces an improved DSR protocol with a neighborhood monitoring scheme for analyzing malicious behavior in the presence of an unknown attacker of dynamic type. The proposed method deploys auxiliary relay nodes and retaliation nodes to control the communication process and prevent the attacker from joining the network. Using an analytical research methodology, the proposed system is shown to offer better communication performance with effective resistance to threats in MANET.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_43-Securing_Dynamic_Source_Routing_by_Neighborhood_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Animated CAPTCHA Technique based on Persistence of Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130242</link>
        <id>10.14569/IJACSA.2022.0130242</id>
        <doi>10.14569/IJACSA.2022.0130242</doi>
        <lastModDate>2022-02-28T09:51:47.0130000+00:00</lastModDate>
        
        <creator>Shafiya Afzal Sheikh</creator>
        
        <creator>M. Tariq Banday</creator>
        
        <subject>CAPTCHA; OCR; animation; segmentation; botnet; HIP; CAPTCHA usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Image-based CAPTCHA challenges have been successfully used to distinguish between humans and bots for a long time. However, image-based CAPTCHA techniques are constantly broken by hackers, forcing web developers to implement more robust security features and new approaches in CAPTCHA images. Modern-day bots can use many techniques and technologies to break CAPTCHA images automatically, including OCR, segmentation, erosion, thresholding, flood fill, etc. This has led to innovative CAPTCHA systems, including those based on drag and drop, image recognition, fingerprints, mathematical problems, etc. Animated image CAPTCHAs have also been designed to show moving characters and objects and require users to recognize the characters or objects in the animation. Unfortunately, these CAPTCHA systems have also been broken successfully. This research proposes a novel animated CAPTCHA technique based on the persistence of vision, which shows text characters in multiple layers in an animated image. The proposed CAPTCHA technique has been implemented in PHP using GD library functions and tested using various popular CAPTCHA-breaking tools. Further, the proposed CAPTCHA challenge has also been tested against the frame-separation-based breaking technique. The security analysis and usability study have demonstrated user-friendliness, broad accessibility, and robustness.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_42-A_Novel_Animated_CAPTCHA_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Criteria Prediction Framework for the Prioritization of Council Candidates based on Integrated AHP-Consensus and TOPSIS Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130241</link>
        <id>10.14569/IJACSA.2022.0130241</id>
        <doi>10.14569/IJACSA.2022.0130241</doi>
        <lastModDate>2022-02-28T09:51:47.0000000+00:00</lastModDate>
        
        <creator>Nurul Akhmal Mohd Zulkefli</creator>
        
        <creator>Muhamad Hariz Muhamad Adnan</creator>
        
        <creator>Mukesh Madanan</creator>
        
        <creator>Tariq Mohsen Hardan</creator>
        
        <subject>Analytic hierarchy process (AHP); technique for order of preference by similarity to ideal solution (TOPSIS); multi-criteria decision making (MCDM); student council</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Predicting the best council candidate is difficult due to the large number of criteria that must be known and identified. The best candidate should be chosen from among the candidates because he or she will play an important role in the organization or institution. Finding the right and best candidate is critical these days because, with the help of social media, people see and judge a candidate&#39;s performance within a short time. The organization or institution needs the best candidate because he or she will manage and organize the community around them. This study focuses on prioritizing council candidates using the Analytic Hierarchy Process (AHP) to determine the criteria and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to prioritize the student council candidates. This proposed framework, based on Multi-Criteria Decision Making (MCDM), will be used to recommend and assist students in selecting the best candidate for student council. The three criteria chosen were grade point average (GPA), age, and semester, developed based on the results of a questionnaire and a review of the literature. These three criteria were then used to determine the most important criterion for selecting the student council. The AHP weights are used to determine and prioritize the most important criteria, and TOPSIS is used to select the most qualified student council candidate. The findings show that GPA is the most important criterion in selecting the best candidate, and the TOPSIS findings support the AHP findings.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_41-Multi_Criteria_Prediction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Application for Predicting Heart Attacks in Patients from Europe</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130240</link>
        <id>10.14569/IJACSA.2022.0130240</id>
        <doi>10.14569/IJACSA.2022.0130240</doi>
        <lastModDate>2022-02-28T09:51:46.9830000+00:00</lastModDate>
        
        <creator>Enrique Arturo Elescano-Avenda&#241;o</creator>
        
        <creator>Freddy Edson Huam&#225;n-Leon</creator>
        
        <creator>Gilson Andreson Vasquez-Torres</creator>
        
        <creator>Dayana Ysla-Espinoza</creator>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Alexi Delgado</creator>
        
        <subject>Prediction; machine learning model; logistic regression; heart attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Even today, a large number of people suffer heart attacks, which have already claimed numerous lives worldwide. To examine the main components of this problem in an objective and timely manner, we chose a methodology that relies on taking and learning from real, existing data for use in training and testing predictive models. This was carried out to obtain useful data for the present research work. Other methodologies exist in parallel, but they do not quite fit the model of this work. Data was collected from the &quot;Center for Machine Learning and Intelligent Systems&quot;, which contains data both from patients who have suffered a cardiovascular attack and from patients who never suffered the disease, all of them selected from different medical institutions. The data was subjected to different processes such as cleaning, preparation, and training in order to obtain a logistic regression machine learning model ready to predict whether or not a person may suffer a cardiovascular attack. Finally, an accuracy of 87% was obtained for people who suffered a heart attack, and an accuracy of 81% for people who would not suffer from this disease. This can greatly reduce the mortality rate due to infarction by identifying the condition of a person who is unaware of his or her health situation, so that appropriate measures can be taken.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_40-Machine_Learning_Application_for_Predicting_Heart_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust Management in Industrial Internet of Things using a Trusted E-Lithe Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130239</link>
        <id>10.14569/IJACSA.2022.0130239</id>
        <doi>10.14569/IJACSA.2022.0130239</doi>
        <lastModDate>2022-02-28T09:51:46.9670000+00:00</lastModDate>
        
        <creator>Ahmed Motmi</creator>
        
        <creator>Samah Alhazmi</creator>
        
        <creator>Ahmed Abu-Khadrah</creator>
        
        <creator>Mousa AL-Akhras</creator>
        
        <creator>Fuad Alhosban</creator>
        
        <subject>IoT; industrial; IIoT; trust management; E-lithe; secure communication; internet of things; CoAP; datagram transport layer security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The IoT has gained significant recognition from the research and industrial communities over the last decade. The concept of the Industrial IoT (IIoT) has emerged to improve industrial processes and reduce downtime or breaches in secure communication. Automating industrial applications can make the implementation process more convenient and helps increase productivity, but an external attacker may distort the process and cause much damage. Thus, a trust management technique is proposed for securing IIoT. The transition of the Internet to IoT, and of industrial applications to IIoT, leads to numerous changes in the communication processes. This transition was initiated by wireless sensor networks, which have unattended wireless topologies and were compromised due to the nature of their resource-constrained nodes. In order to protect the sensitivity of transmitted information, the security protocol uses the Datagram Transport Layer Security (DTLS) mandated by Secure Constrained Application Protocol (CoAP). However, DTLS was designed for powerful devices connected through high-bandwidth links and needs strong support for industrial applications. In the proposed trust management system, machine learning algorithms are used with an elastic sliding window to handle bigger data and reduce the strain of massive communication. The proposed method detects on-off attacks on nodes, malicious nodes, healthy nodes, and broken nodes. This identification is necessary to check whether a particular node can be trusted. The proposed technique successfully predicted 97% of nodes&#39; behavior, faster than other machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_39-Trust_Management_in_Industrial_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Monitoring System for Chronic Kidney Disease Patients based on Fuzzy Logic and IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130238</link>
        <id>10.14569/IJACSA.2022.0130238</id>
        <doi>10.14569/IJACSA.2022.0130238</doi>
        <lastModDate>2022-02-28T09:51:46.9370000+00:00</lastModDate>
        
        <creator>Govind Maniam</creator>
        
        <creator>Jahariah Sampe</creator>
        
        <creator>Rosmina Jaafar</creator>
        
        <creator>Mohd Faisal Ibrahim</creator>
        
        <subject>Anemia; cardiovascular disease (CVD); fuzzy logic; healthcare; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>A Chronic Kidney Disease (CKD) monitoring system is proposed for early detection of cardiovascular disease (CVD) and anemia using Fuzzy Logic. To determine the heart rate and blood oxygen saturation, the proposed model was simulated using MATLAB and Simulink to handle ECG and PPG inputs. The Pan-Tompkins method was used to determine the heart rate, while the Takuo Aoyagi algorithm was used to assess blood oxygen saturation levels. The findings show that the ECG recorded using the CKD model has all of the characteristics of a typical ECG wave cycle, but with reduced signal degradation in the 0.8&#8211;1.3mV region. The heart rate signal processing yielded findings between 78 and 83 beats per minute, which is within the range of the supplied heart rate. Takuo Aoyagi&#39;s pulse oximeter simulation generated the same findings. For real-time verification, the proposed model was implemented in hardware using an ESP8266 32-bit microcontroller with IoT integration via Wireless Fidelity for data storage and monitoring. In comparison with the Fuzzy Logic simulation done in MATLAB and Simulink, the CKD monitoring device has 100% accuracy in patient status detection. The CKD monitoring system has an overall accuracy of 99% in comparison with a commercial fingertip pulse oximeter.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_38-Smart_Monitoring_System_for_Chronic_Kidney_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBTechVoc: A POS-tagged Vocabulary of Tokens and Lemmata of the Database Technical Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130237</link>
        <id>10.14569/IJACSA.2022.0130237</id>
        <doi>10.14569/IJACSA.2022.0130237</doi>
        <lastModDate>2022-02-28T09:51:46.9200000+00:00</lastModDate>
        
        <creator>Jatinder kumar R. Saini</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <creator>Hema Gaikwad</creator>
        
        <subject>Database; lemma; part-of-speech (POS); technical word list; token; unigram; vocabulary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The vocabulary of a language plays a great role in Natural Language Processing (NLP) applications. Such applications make use of lists like the stop-word list, general service list, academic word list and technical domain word list. The technical domain word list differs for each domain, and though it is available for fields like medicine, biology, computer science, physics and law, the specific domain of databases has still not been explored. For the first time, we propose a technical vocabulary comprising POS-tagged unigram tokens and POS-tagged unigram lemmata for the technical domain of databases. This vocabulary has been given the coined name DBTechVoc. Notably, multi-word phrases have also been considered, without further tokenization, to maintain their semantics. The empirical results, based on more than 1000 high-quality research papers collected over a period of 45 years from 1976 to 2021, prove that the technical general word list of the domain of computer science is different from the technical and specific word list of the domain of databases; the overlap was found to be less than 2%. The research titles use 6% Rainbow stop words, while 13% of the words used in research paper titles are inflectional forms of lemmata.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_37-DBTechVoc_A_POS_tagged_Vocabulary_of_Tokens.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Metaheuristic Optimization with Deep Convolutional Recurrent Neural Network Enabled Sarcasm Detection and Classification Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130236</link>
        <id>10.14569/IJACSA.2022.0130236</id>
        <doi>10.14569/IJACSA.2022.0130236</doi>
        <lastModDate>2022-02-28T09:51:46.9030000+00:00</lastModDate>
        
        <creator>K. Kavitha</creator>
        
        <creator>Suneetha Chittineni</creator>
        
        <subject>Sarcasm detection; data classification; deep learning; feature extraction; TLBO algorithm; parameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Sarcasm is a form of speech in which the speaker says something externally unfriendly with the purpose of abusing or deriding the listener and/or a third person. Since sarcasm detection is mainly based on the context of utterances or sentences, it is hard to design a model that proficiently detects sarcasm in the domain of natural language processing (NLP). Although various methods for detecting sarcasm have been created using statistical machine learning and rule-based approaches, they are unable to discern the figurative meanings of words. Models developed using deep learning approaches have shown superior performance for sarcasm detection over traditional approaches. With this motivation, this paper develops a novel deep learning (DL) enabled sarcasm detection and classification (DLE-SDC) model. The DLE-SDC technique primarily involves a pre-processing stage encompassing single-character removal, multi-space removal, URL removal, stop-word removal, and tokenization. After preprocessing, the data is converted into feature vectors using the GloVe embeddings technique. Then, a convolutional neural network with recurrent neural network (CNN-RNN) technique is utilized to detect and classify sarcasm. To boost the detection outcomes of the CNN-RNN technique, a hyperparameter tuning process using the teaching and learning based optimization (TLBO) algorithm is employed so that the classification performance increases. The DLE-SDC model is validated using a benchmark dataset and its performance is examined in terms of precision, recall, accuracy, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_36-An_Intelligent_Metaheuristic_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Processes for User Engagement with Mobile Health: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130235</link>
        <id>10.14569/IJACSA.2022.0130235</id>
        <doi>10.14569/IJACSA.2022.0130235</doi>
        <lastModDate>2022-02-28T09:51:46.8900000+00:00</lastModDate>
        
        <creator>Tochukwu Ikwunne</creator>
        
        <creator>Lucy Hederman</creator>
        
        <creator>P. J. Wall</creator>
        
        <subject>Design process; mobile health; socio-cultural; user-centered design; user engagement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Despite the importance of user engagement in mHealth system efficacy, many such interventions fail to engage their users effectively. This paper provides a systematic review of 10 years of research (32 articles) on mHealth design interventions conducted between 2011 and 2020. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) model was used for this review, with the IEEE, Medline EBSCO Host, ACM, and Springer databases searched for English-language papers within the publication date range. The goal of this review was to find out which design processes improve user engagement with mHealth in order to guide the development of future mHealth interventions. We discovered that the following six analytical themes influence user engagement: design goal, design target population, design method, design approach, socio-technical aspects, and design evaluation. These six analytical themes, as well as 16 other specific implementations derived from the reviewed articles, were included in a checklist designed to make designing, developing, and implementing mHealth systems easier. This study closes a gap in the literature by identifying a lack of consideration of socio-cultural contexts in the design of mHealth interventions and recommends that such socio-cultural contexts be considered and addressed in a systematic manner by identifying a design process for engaging users in mHealth interventions. Based on this, our systematic literature review recommends that a framework that captures the socio-cultural context of any mHealth implementation be refined or developed to support user engagement for mHealth.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_35-Design_Processes_for_User_Engagement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimal Execution of Composite Service in Decentralized Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130234</link>
        <id>10.14569/IJACSA.2022.0130234</id>
        <doi>10.14569/IJACSA.2022.0130234</doi>
        <lastModDate>2022-02-28T09:51:46.8730000+00:00</lastModDate>
        
        <creator>Yashwant Dongre</creator>
        
        <creator>Rajesh Ingle</creator>
        
        <subject>Genetic algorithm; service composition; decentralized execution; composite service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>It is important for service-oriented architectures to consider how the composition of web services affects business processes. For instance, a single web service may not be adequate for most complex business operations, necessitating the use of multiple web services. This paper proposes a novel technique for optimal partitioning and execution of services in a decentralized environment. The proposed technique is designed and developed using a genetic algorithm with multiple high-task allocations on a single server. We compared three existing techniques, namely a meta-heuristic genetic algorithm, the heuristic Pooling-and-Greedy-Merge (PGM) technique, and the Merge-by-Define-Use (MDU) technique, against a simulation of Business Process Execution Language (BPEL) partitioning using a genetic algorithm with multiple high-task allocation to a single server node. The proposed technique is practical and advantageous: in terms of execution time, number of server requests, and throughput, it outperformed the existing GA, PGM, and MDU techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_34-An_Optimal_Execution_of_Composite_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Mathematics Learning Application Selection using Fuzzy TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130233</link>
        <id>10.14569/IJACSA.2022.0130233</id>
        <doi>10.14569/IJACSA.2022.0130233</doi>
        <lastModDate>2022-02-28T09:51:46.8570000+00:00</lastModDate>
        
        <creator>Seren Basaran</creator>
        
        <creator>Firass El Homsi</creator>
        
        <subject>Fuzzy TOPSIS; ISO/IEC 25010 standards; mathematics; mobile applications; multi-criteria decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The impressive evolution of technology has increased the usage frequency of smart mobile phones, and the resulting abundance of available mobile applications has made it a vital problem to invent practical and efficient ways of selecting suitable mobile applications for a desired use. Today, there are almost three million apps at the Google Play store alone. Therefore, the need for an automated, effective, and less time-consuming approach to selecting the best mobile application has gained more significance than ever. Despite the sudden growth in mobile learning applications, there is a dearth of research on effective ways of selecting a suitable mobile application, particularly in relation to mobile apps for Mathematics. Moreover, multi-criteria decision-making (MCDM) methods have only recently been applied, in rare studies, for that purpose. This paper focuses on the ISO/IEC 25010 software quality standards for selecting mobile Mathematics learning applications. Six highly rated applications were evaluated by two experts. The paper applies the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to retrieve the best alternative among the present applications. The results showed an objective and flexible ranking assessment that eliminates ambiguity in decision-making. The results also identified significant features, rendering a useful and valuable tool for decision-makers. The study assists users, teachers/instructors, and students in their decision-making processes regarding finding the most suitable application for Mathematics.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_33-Mobile_Mathematics_Learning_Application_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Failure Region Estimation of Linear Voltage Regulator using Model-based Virtual Sensing and Non-invasive Stability Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130232</link>
        <id>10.14569/IJACSA.2022.0130232</id>
        <doi>10.14569/IJACSA.2022.0130232</doi>
        <lastModDate>2022-02-28T09:51:46.8430000+00:00</lastModDate>
        
        <creator>Syukri Zamri</creator>
        
        <creator>Mohd Hairi Mohd Zaman</creator>
        
        <creator>Muhammad Fauzi Mohd Raihan</creator>
        
        <creator>Asraf Mohamed Moubark</creator>
        
        <creator>M Marzuki Mustafa</creator>
        
        <subject>Voltage regulator; output capacitor; equivalent series resistance; failure region; system identification; neural network; noninvasive stability measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Voltage regulator (VR) stability plays an essential role in ensuring maximum power delivery and a long electronic lifespan. A capacitor with a specific equivalent series resistance (ESR) range is typically connected at the VR output terminal to compensate for instability of the VR due to sudden changes in load current. The stability of a VR can be measured by analyzing the output voltage during load transient tests. However, the optimum ESR range obtained from the ESR tunnel graph in its datasheet can only be characterized by testing a set of data points consisting of ESR values and load currents. The characterization process is performed manually by changing the value of ESR and load current for each operating point. This inefficient process of estimating the critical value of ESR must be improved, given that it requires a large amount of time and expertise. Furthermore, stability analysis is currently conducted on the basis of the number of oscillation counts of the VR output voltage signal. Therefore, this study introduces a model-based virtual sensing approach that mainly focuses on black-box modeling through a system identification method and on training a neural network on the basis of estimated transfer function coefficients. The proposed approach is used to estimate the internal model of the VR and reduce the number of data points that need to be acquired. In addition, VR stability is analyzed using a noninvasive stability measurement method, which can measure the phase margin from the frequency response of the VR circuit in closed-loop conditions. Results showed that the proposed method reduces the time it takes to produce an ESR tunnel graph by 84% with reasonable accuracy (MSE of 5&#215;10^−6, RMSE of 2.24&#215;10^−3, MAE of 1&#215;10^−3, and R^2 of 0.99). Therefore, the efficiency and effectiveness of ESR characterization and stability analysis of the VR circuit are improved.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_32-Failure_Region_Estimation_of_Linear_Voltage_Regulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Software Bug Localization Techniques using a Motivational Example</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130231</link>
        <id>10.14569/IJACSA.2022.0130231</id>
        <doi>10.14569/IJACSA.2022.0130231</doi>
        <lastModDate>2022-02-28T09:51:46.8100000+00:00</lastModDate>
        
        <creator>Amr Mansour Mohsen</creator>
        
        <creator>Hesham Hassan</creator>
        
        <creator>Ramadan Moawad</creator>
        
        <creator>Soha Makady</creator>
        
        <subject>Bug localization; bug localization artifacts; information retrieval; program spectrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Software bug localization is an essential step within the software maintenance activity, consuming about 70% of the time and cost of the software development life cycle. Therefore, there is an important need to enhance the automation of software bug localization. This paper surveys various software bug localization techniques. Furthermore, a running motivational example is utilized throughout the paper. This motivational example illustrates the surveyed bug localization techniques while highlighting their pros and cons. The motivational example utilizes different software artifacts that get created throughout the software development lifecycle, and sheds light on those software artifacts that remain poorly utilized within existing bug localization techniques, regardless of the rich wealth of knowledge embedded within them. This research thus presents guidance on which artifacts future bug localization techniques should focus on, to enhance the accuracy of bug localization and speed up the software maintenance process.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_31-A_Review_on_Software_Bug_Localization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identify Discriminatory Factors of Traffic Accidental Fatal Subtypes using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130230</link>
        <id>10.14569/IJACSA.2022.0130230</id>
        <doi>10.14569/IJACSA.2022.0130230</doi>
        <lastModDate>2022-02-28T09:51:46.7970000+00:00</lastModDate>
        
        <creator>W. Z. Loskor</creator>
        
        <creator>Sharif Ahamed</creator>
        
        <subject>Traffic accident; clustering analysis; machine learning; feature selection; classification; discriminatory factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In today&#39;s world, traffic accidents are one of the main causes of mortality and long-term injury, and Bangladesh is no exception. Vehicle accidents have become an everyday occurrence in Bangladesh, and its largest highway, the Dhaka-Banglabandha National Highway, sees a significant number of accidents each year. In this work, we gathered accident data from the Dhaka-Banglabandha highway over an eight-year period and attempted to determine the subtypes present in this dataset. We then experimented with various classification algorithms to see which performed best at classifying accident subtypes. To describe the discriminatory factors among the subtypes, we also used an interpretable model. This experiment gives essential information on traffic accidents and thus helps in the development of policies to reduce road traffic collisions on Bangladesh&#39;s Dhaka-Banglabandha National Highway.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_30-Identify_Discriminatory_Factors_of_Traffic_Accidental.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Region-based Compression Technique for Medical Image Compression using Principal Component Analysis (PCA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130229</link>
        <id>10.14569/IJACSA.2022.0130229</id>
        <doi>10.14569/IJACSA.2022.0130229</doi>
        <lastModDate>2022-02-28T09:51:46.7800000+00:00</lastModDate>
        
        <creator>Sin Ting Lim</creator>
        
        <creator>Nurulfajar Bin Abd Manap</creator>
        
        <subject>Principal component analysis; region-of-interest (ROI); automated segmentation; MRI brain scans; region-based compression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Region-based compression is particularly useful for radiological archiving systems as it allows diagnostically important regions (ROI) to be compressed at near-lossless quality while non-diagnostically important regions (NROI) are compressed at lossy quality. In this paper, we present a region-based compression technique tailored for MRI brain scans. In the proposed technique, termed automated arbitrary PCA (AAPCA), an automatic segmentation based on the brain&#39;s symmetrical property is used to separate the ROI from the background. The arbitrary-shape ROI is then compressed by the block-to-row PCA algorithm (BTRPCA), based on a factorization approach. The ROI is optimally compressed with a lower compression rate while the NROI is compressed with a higher compression rate. The proposed technique achieves satisfactory segmentation performance. The subjective and objective evaluations performed confirm that the proposed technique achieves better performance metrics (PSNR and CoC) and a higher overall compression rate. The experimental results also demonstrate that the proposed technique is superior to various state-of-the-art compression methods.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_29-A_Region_based_Compression_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of the Automatic Detection of Hate Speech in Social Media Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130228</link>
        <id>10.14569/IJACSA.2022.0130228</id>
        <doi>10.14569/IJACSA.2022.0130228</doi>
        <lastModDate>2022-02-28T09:51:46.7500000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Mohamed Elarabawy Hashem</creator>
        
        <subject>Artificial intelligence; automatic detection; Facebook; hate speech; Islamic discourse; social media networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Numerous approaches have been developed over recent years to detect hate speech on social media networks. Nevertheless, a great deal of what is generally recognized as hate speech cannot yet be detected. There remain many challenges to assuring the effectiveness and reliability of automatic detection systems in different languages, including Arabic. Social media platforms and networks such as Facebook continue to encounter difficulties regarding the automatic detection of hate speech in Arabic content. Given the importance of developing reliable artificial intelligence and automatic detection systems that can reduce the problems and crimes associated with the spread of hate speech on social media platforms, this study is concerned with evaluating the performance of the automatic detection and tracking of hate speech in Arabic content on Facebook. As an example, the study evaluates the period in October 2020 that came to be known as France’s cartoon controversy. Two different corpora were designed. The first corpus comprised 347 posts deleted by Facebook, now known as Meta. The second corpus was composed of 1,856 posts that were randomly selected using the hashtag إلا رسول الله (except the Prophet of Allah). The results indicate that there is a considerable amount of hate speech taken from or influenced by the Islamic religious discourse, but that automatic detection systems are unable to address the peculiar linguistic features of Arabic. There is also a lack of clarity in defining what constitutes “hate speech”. The study suggests that social media networks, including Facebook, need to adopt more reliable automatic detection systems that consider the linguistic properties of Arabic. Political thinkers and religious scholars should be involved in defining what constitutes hate speech in Arabic.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_28-An_Evaluation_of_the_Automatic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computational Approach to Decode the Pragma-Stylistic Meanings in Narrative Discourse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130227</link>
        <id>10.14569/IJACSA.2022.0130227</id>
        <doi>10.14569/IJACSA.2022.0130227</doi>
        <lastModDate>2022-02-28T09:51:46.7330000+00:00</lastModDate>
        
        <creator>Ayman Farid Khafaga</creator>
        
        <creator>Iman El-Nabawi Abdel Wahed Shaalan</creator>
        
        <subject>Frequency distribution analysis; narrative discourse; pragma-stylistic meanings; thematic categorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This paper presents a computer-based frequency distribution analysis to decode the pragma-stylistic meanings in a work of narrative discourse, Orwell’s dystopian novel Animal Farm. The main objective of the paper is to explore the extent to which computer software contributes to the linguistic analysis of texts. The paper uses frequency distribution analysis (FDA) generated by concordance software to decode the pragmatic and stylistic significance beyond the mere linguistic expressions employed by the writer in the selected data. Selected words underwent a frequency distribution analysis so as to highlight their pragmatic and linguistic weight, which, in turn, helps arrive at a comprehensive understanding of the thematic message intended by the writer. The paper is grounded on one analytical strand: frequency distribution analysis conducted with concordance software. Results reveal that applying frequency distribution analysis to the linguistic analysis of large fictional texts serves to (i) identify the various types of discourse in these texts; (ii) create a thematic categorization based on the frequency distribution analysis of specific words in texts; and (iii) indicate that not only high-frequency words but also low-frequency words are highly indicative in the production of particular pragmatic and stylistic meanings in discourse. These results accentuate the further general finding that computer software contributes significantly to the linguistic analysis of texts, particularly those pertaining to literature. The paper recommends further and intensive incorporation of computer and CALL (computer-assisted language learning) software in teaching and learning literary texts in EFL (English as a foreign language) settings.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_27-A_Computational_Approach_to_Decode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing EFL Students’ COCA-Induced Collocational Usage of Coronavirus: A Corpus-Driven Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130226</link>
        <id>10.14569/IJACSA.2022.0130226</id>
        <doi>10.14569/IJACSA.2022.0130226</doi>
        <lastModDate>2022-02-28T09:51:46.7170000+00:00</lastModDate>
        
        <creator>Amir H. Y. Salama</creator>
        
        <creator>Waheed M. A. Altohami</creator>
        
        <subject>COCA; collocations; coronavirus; corpus-driven approach; EFL learners; extended lexical units</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The present study seeks to propose a novel pedagogical strategy for enhancing EFL students’ collocational usage of the node ‘coronavirus’ as currently used in the Corpus of Contemporary American English (COCA) across its eight genre-based sections, viz. TV/Movies, Blog, Web-General, Spoken, Fiction, Magazine, Newspaper, and Academic. Drawing on a corpus-driven approach, we conducted a pedagogical descriptive analysis of the top ‘coronavirus’ collocates generated by the COCA. The target collocates were calculated using a Mutual Information (MI) score of 3 or above and specified in terms of the four main lexical parts of speech: nouns, verbs, adjectives, and adverbs. The study has reached three main results. First, employing the COCA as a pedagogical corpus tool can enhance the collocational competence of EFL students should a corpus-driven approach be used descriptively in the classroom. Second, the two methodological stages of demonstration and praxis could facilitate the process of topical priority as a significant index of collocational usage and its thematic relevance. Third, more empirically, the naturally occurring collocates of the node ‘coronavirus’ have proven significant to the pedagogical situation of teaching the node’s collocational meanings encoded in the syntactic categories of nouns, verbs, adjectives, and adverbs, e.g. infection, cause, novel, and closely, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_26-Enhancing_EFL_Students_COCA_Induced_Collocational.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting and Fact-checking Misinformation using “Veracity Scanning Model”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130225</link>
        <id>10.14569/IJACSA.2022.0130225</id>
        <doi>10.14569/IJACSA.2022.0130225</doi>
        <lastModDate>2022-02-28T09:51:46.7030000+00:00</lastModDate>
        
        <creator>Yashoda Barve</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <creator>Hema Gaikwad</creator>
        
        <subject>Document similarity; fact-checking; healthcare; incremental learning; misinformation; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The expeditious flow of information over the web and its convenience have increased the fear of a rampant spread of misinformation. This poses a health threat and an unprecedented issue to the world, impacting people’s lives. To address this problem, there is a need to detect misinformation. Recent techniques in this area focus on static models based on feature extraction and classification. However, data may change at different time intervals, and the veracity of data needs to be checked as it gets updated. There is a lack of models in the literature that can handle incremental data, check the veracity of data, and detect misinformation. To fill this gap, the authors propose a novel Veracity Scanning Model (VSM) to detect misinformation in the healthcare domain by iteratively fact-checking contents that evolve over time. In this approach, healthcare web URLs are classified as legitimate or non-legitimate using sentiment analysis as a feature, document similarity measures to perform fact-checking of URLs, and incremental learning to handle the arrival of incremental data. The experimental results show that the Jaccard distance measure outperformed other techniques with an accuracy of 79.2% with the Random Forest classifier, while the cosine similarity measure showed a lower accuracy of 60.4% with the Support Vector Machine classifier. Also, when implemented as an algorithm, the Euclidean distance measure showed accuracies of 97.14% and 98.33% on the training and test data, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_25-Detecting_and_Fact_checking_Misinformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporation of Computational Thinking Practices to Enhance Learning in a Programming Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130224</link>
        <id>10.14569/IJACSA.2022.0130224</id>
        <doi>10.14569/IJACSA.2022.0130224</doi>
        <lastModDate>2022-02-28T09:51:46.6870000+00:00</lastModDate>
        
        <creator>Leticia Laura-Ochoa</creator>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <subject>Programming tools; computational thinking; algorithmic thinking; motivation; abstraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The development of computational thinking skills is essential for information management, problem-solving, and understanding human behavior. Thus, the aim of the experience described here was to incorporate computational thinking practices to improve learning in a first Python programming course using programming tools such as PSeInt, CodingBat, and the turtle graphics library. A quasi-experimental methodological design was used in which the experimental and control groups are in different academic semesters. Exploratory mixed research was carried out. The control and experimental groups consisted of 41 and 36 students, respectively. The results show that with the use of supporting programming tools, such as PSeInt, CodingBat, and the Python turtle graphics library, and the incorporation of computational thinking practices, the experimental group students obtained better learning results. It is concluded that student performance and motivation in university programming courses can be improved by using proper tools that aid the understanding of programming concepts and the development of skills related to computational thinking, such as abstraction and algorithmic thinking.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_24-Incorporation_of_Computational_Thinking_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective ANN Model based on Neuro-Evolution Mechanism for Realistic Software Estimates in the Early Phase of Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130223</link>
        <id>10.14569/IJACSA.2022.0130223</id>
        <doi>10.14569/IJACSA.2022.0130223</doi>
        <lastModDate>2022-02-28T09:51:46.6700000+00:00</lastModDate>
        
        <creator>Ravi Kumar B N</creator>
        
        <creator>Yeresime Suresh</creator>
        
        <subject>Software cost estimation; COCOMO-II; neuro-evolution; artificial neural network; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>There is no doubt that the software industry is one of the fastest-growing sectors on the planet today. As the cost of the entire development process continues to rise, an effective mechanism is needed to estimate the required development cost to better control the cost overrun problem and make the final software product more competitive. However, in the early stages of planning, project managers have difficulty estimating the realistic value of the effort and cost required to execute development activities. Software evaluation prior to development can minimize risk and increase project success rates. Many techniques have been suggested and employed for cost estimation. However, computations based on several of these techniques show that the estimates of development effort and cost vary, which may cause problems for software industries in allocating overall resource costs. This study proposes an artificial neural network (ANN)-based Neuro-Evolution technique to provide more realistic software estimates in the early stages of development. The proposed model uses the advantages of topology augmentation using an evolutionary algorithm to automate and achieve optimality in ANN construction and training. Based on the results and performance analysis, it is observed that software effort prediction using the proposed approach is more accurate and better than other existing approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_23-Effective_ANN_Model_based_on_Neuro_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Metastatic Relapse in Breast Cancer using Machine Learning Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130222</link>
        <id>10.14569/IJACSA.2022.0130222</id>
        <doi>10.14569/IJACSA.2022.0130222</doi>
        <lastModDate>2022-02-28T09:51:46.6070000+00:00</lastModDate>
        
        <creator>Ertel Merouane</creator>
        
        <creator>Amali Said</creator>
        
        <creator>El Faddouli Nour-eddine</creator>
        
        <subject>Machine learning; classification; personalized medicine; CRISP-DM; metastasis; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The volume and amount of data in cancerology is continuously increasing, yet the vast majority of this data is not being used to uncover useful and hidden insights. As a result, one of the key goals of physicians for therapeutic decision-making during multidisciplinary consultation meetings (MCM) is to combine prediction tools based on data and best practices. The current study looked into using CRISP-DM machine learning algorithms to predict metastatic recurrence in patients with early-stage (non-metastatic) breast cancer so that treatment-appropriate medicine may be given to lower the likelihood of metastatic relapse. From 2014 to 2021, data from patients with localized breast cancer were collected at the Regional Oncology Center in Meknes, Morocco. There were 449 records in the dataset, with 13 predictor variables and one outcome variable. To create predictive models, we used machine learning techniques such as Support Vector Machine (SVM), Naïve Bayes (NB), K-Nearest Neighbors (KNN), and Logistic Regression (LR). The main objective of this article is to compare the performance of these four algorithms on our data in terms of sensitivity, specificity, and precision. According to our results, the accuracies of SVM, KNN, LR, and NB are 0.906, 0.861, 0.806, and 0.517, respectively. With the fewest errors and maximum accuracy, the SVM classification model best predicts metastatic breast cancer relapse. The unbiased prediction accuracy of each model is assessed using a 10-fold cross-validation method.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_22-Prediction_of_Metastatic_Relapse.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Password Hashing on Embedded Systems with Cryptographic Acceleration Unit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130221</link>
        <id>10.14569/IJACSA.2022.0130221</id>
        <doi>10.14569/IJACSA.2022.0130221</doi>
        <lastModDate>2022-02-28T09:51:46.5770000+00:00</lastModDate>
        
        <creator>Holman Montiel A</creator>
        
        <creator>Fredy Mart&#237;nez S</creator>
        
        <creator>Edwar Jacinto G</creator>
        
        <subject>Cryptography; password hashing; embedded systems; cryptographic acceleration hardware; SHA-256</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In this modern world where the proliferation of electronic devices associated with the Internet of Things (IoT) grows day by day, security is an imperative issue. The criticality of the information linked to the various electronic devices connected to the Internet forces developers to establish protection mechanisms against possible cyber-attacks. When using computer equipment or servers, security mechanisms can be applied without having problems with the number of resources associated with this activity; the opposite is the case when implementing such mechanisms on embedded systems. The objective of this document is to implement password hashing on a FRDM-K82F development board with ARM&#174; Cortex™-M4 processor. It describes the basic criteria necessary to aim at moderate levels of security in specific purpose applications; that can be developed taking advantage of the hardware cryptographic acceleration units that these embedded systems have. Performance analysis of the implemented hash function is also presented, considering the variation in the number of iterations performed by the development board. The validation of the correct functioning of the hashing scheme using the SHA-256 algorithm is carried out by comparing the results obtained in real-time versus an application developed in Python software using the PyCryptodome library.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_21-Implementation_of_Password_Hashing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Method of Braille Embossed Dots Segmentation for Braille Document Images Produced on Reusable Paper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130220</link>
        <id>10.14569/IJACSA.2022.0130220</id>
        <doi>10.14569/IJACSA.2022.0130220</doi>
        <lastModDate>2022-02-28T09:51:46.5600000+00:00</lastModDate>
        
        <creator>Sasin Tiendee</creator>
        
        <creator>Charay Lerdsudwichai</creator>
        
        <creator>Somying Thainimit</creator>
        
        <creator>Chanjira Sinthanayothin</creator>
        
        <subject>Braille; embossed dots; document images; reusable paper; segmentation; recognition; blind; visually impaired</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Braille is the language of communication for blind and visually impaired people. Braille characters are embossed as dots to convey meaning. Typically, Braille documents are produced on plain paper. Braille documents can also be created on reusable paper, known as third-page paper; this reduces the paper cost, allowing more documents to be made available to stimulate learning for blind or visually impaired persons. This research presents a method of Braille embossed dots segmentation for Braille document images produced on reusable paper to support the availability of cheaper learning material. Initially, Braille documents were imported with a calibrated scanner, and Braille document image layer separation was then performed, followed by edge removal, Braille embossed dot recovery, noise removal, and specification of the embossed Braille points. This research was conducted using four scanners, which scanned Braille document images under four different lighting conditions. For each lighting condition, the Braille document image area was cropped to the desired size, considering the possible event conditions. These were used to create over 200,000 Braille cells, with over 12 billion patterns. When calculating the average performance under all lighting conditions, the values were Precision 1.0000, Recall 0.7817, Accuracy 0.8545, and F-Measure 0.8756. By effectively using Braille embossed dots segmentation, the process of Braille document recognition will also be efficient.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_20-The_Method_of_Braille_Embossed_Dots_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing and Validating Instrument for Data Integration Governance Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130219</link>
        <id>10.14569/IJACSA.2022.0130219</id>
        <doi>10.14569/IJACSA.2022.0130219</doi>
        <lastModDate>2022-02-28T09:51:46.5300000+00:00</lastModDate>
        
        <creator>Noor Hasliza Mohd Hassan</creator>
        
        <creator>Kamsuriah Ahmad</creator>
        
        <creator>Hasimi Salehuddin</creator>
        
        <subject>Content validity; instrument development; data integration governance; Lawshe’s technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Data integration is one of the important subfields in data management. It allows users to access the same data from multiple sources without redundancy while preserving its integrity. The Data Integration Governance Framework (DIGF) is being developed to guide the implementation of data integration. It functions as a reference and guideline for the working level in data integration implementation. Hence, the instrument used to validate the DIGF needs to be developed and validated for its accuracy, applicability, and suitability of use. The instrument comprises items structured as a questionnaire. This study proposes Lawshe’s technique to construe the content validity of the instrument. This technique involves the computation of the Content Validity Ratio (CVR) to validate items in the questionnaire, which was developed based on the factors identified for the Data Integration Governance Framework. Each item in the questionnaire that was validated against the minimum CVR value of 0.75 was endorsed as part of the final instrument of the Data Integration Governance Framework to be used in the Delphi Technique Evaluation.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_19-Developing_and_Validating_Instrument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PAD: A Pancreatic Cancer Detection based on Extracted Medical Data through Ensemble Methods in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130218</link>
        <id>10.14569/IJACSA.2022.0130218</id>
        <doi>10.14569/IJACSA.2022.0130218</doi>
        <lastModDate>2022-02-28T09:51:46.5130000+00:00</lastModDate>
        
        <creator>Santosh Reddy P</creator>
        
        <creator>Chandrasekar M</creator>
        
        <subject>Pancreatic; PDAC; LYVE1; REG1A; TFF1; CA19_9</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The considerable research into medical health systems is allowing computing systems to develop with the most cutting-edge innovations. These developments are paving the way for more efficient medical system implementations, including automatic identification of health-related disorders. The most important health research is being done to predict cancer, which can take several forms and affect many parts of the body. Pancreatic cancer is one of the most common cancers that is projected to be incurable. Previous research has found that a panel of three protein biomarkers (LYVE1, REG1A, and TFF1) found in urine can help detect resectable PDAC. This study improves this panel by replacing REG1A with REG1B, using data sets extracted into CSV format. Finally, we analyze four significant biomarkers found in urine: creatinine, LYVE1, REG1B, and TFF1. Creatinine is a protein that is commonly utilized as a kidney function indicator. Lymphatic vessel endothelial hyaluronan receptor 1 (LYVE1) is a protein that may help tumors spread. REG1B is a protein that has been linked to pancreatic regeneration, while TFF1 is trefoil factor 1, which has been linked to urinary tract regeneration and repair. It is impossible to treat pancreatic cancer properly once it has been diagnosed late. Machine learning and neural networks are now showing promise for accurate pancreatic image segmentation in real time for early diagnosis. This research looks at how to analyze pancreatic tumors using ensemble approaches in machine learning. According to preliminary data, the proposed technique appears to improve the classifier&#39;s performance for early diagnosis of pancreatic cancer.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_18-PAD_A_Pancreatic_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Solution for Automatic Counting and Differentiate Motorcycles and Modified Motorcycles in Remote Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130217</link>
        <id>10.14569/IJACSA.2022.0130217</id>
        <doi>10.14569/IJACSA.2022.0130217</doi>
        <lastModDate>2022-02-28T09:51:46.4830000+00:00</lastModDate>
        
        <creator>Indrabayu </creator>
        
        <creator>Intan Sari Areni</creator>
        
        <creator>Anugrayani Bustamin</creator>
        
        <creator>Elly Warni</creator>
        
        <creator>Sofyan Tandungan</creator>
        
        <creator>Rizka Irianty</creator>
        
        <creator>Najiah Nurul Afifah</creator>
        
        <subject>Histogram of oriented gradient; optical flow; vehicles counting; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Motorcycles are the most significant contributor to vehicle numbers in Indonesia, accounting for about 81% of all vehicles in the country. In addition, the number of modified motorcycles has also increased in several areas, particularly remote places. Many studies have been conducted on detecting vehicles. However, most vehicle detection studies were conducted to detect cars or four-wheeled vehicles, and only a few studies were done to detect motorcycles. Further problems arise if the system is implemented in remote areas with limited electrical power resources, which require low-cost, low-specification computation. This study detects and counts the number of motorcycles and modified motorcycles passing on a highway from video data. It proposes Machine Learning instead of Deep Learning to suit the low computational requirements of video in remote areas. The computer vision-based methods used in the prediction are optical flow and Histogram of Oriented Gradients (HOG) + Support Vector Machine (SVM). Five videos were used in the system testing, taken from the roadsides using a static camera with a resolution of 160x112 pixels at a &#177;135&#186; angle. This research showed that the accuracy of the motorcycle and modified motorcycle detection and counting system using the HOG + SVM method is higher than that of the optical flow method. The average accuracy of HOG + SVM for motorcycles and modified motorcycles is 89.70% and 95.16%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_17-A_Solution_for_Automatic_Counting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning: Assisted Cardiovascular Diseases Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130216</link>
        <id>10.14569/IJACSA.2022.0130216</id>
        <doi>10.14569/IJACSA.2022.0130216</doi>
        <lastModDate>2022-02-28T09:51:46.4670000+00:00</lastModDate>
        
        <creator>Aseel Alfaidi</creator>
        
        <creator>Reem Aljuhani</creator>
        
        <creator>Bushra Alshehri</creator>
        
        <creator>Hajer Alwadei</creator>
        
        <creator>Sahar Sabbeh</creator>
        
        <subject>Cardiovascular diseases; artificial intelligence; prediction; multi-layer perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Detecting cardiovascular problems during their early stages is one of the great difficulties facing physicians. Cardiovascular diseases contribute to the deaths of around 18 million patients every year worldwide. That is why heart disease is a critical concern that must be addressed. However, it can be difficult to detect heart disease because of the multiple factors that affect health, such as high blood pressure, elevated cholesterol, abnormal pulse rate, and many other factors. Therefore, the field of artificial intelligence can be instrumental in detecting diseases early on and finding an appropriate solution. This paper proposes a model for diagnosing the probability of an individual having cardiovascular illness by employing Machine Learning (ML) models. The experiments were executed using seven algorithms, and a public dataset of cardiovascular disease was used to train the models. A Chi-square test was used to identify the most important features for predicting cardiovascular disease. The experiment results showed that the Multi-Layer Perceptron gives the highest disease prediction accuracy at 87.23%.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_16-Machine_Learning_Assisted_Cardiovascular_Diseases_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Melody Difficulty Classification using Frequent Pattern and Inter-Notes Distance Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130215</link>
        <id>10.14569/IJACSA.2022.0130215</id>
        <doi>10.14569/IJACSA.2022.0130215</doi>
        <lastModDate>2022-02-28T09:51:46.2970000+00:00</lastModDate>
        
        <creator>Pulung Nurtantio Andono</creator>
        
        <creator>Edi Noersasongko</creator>
        
        <creator>Guruh Fajar Shidik</creator>
        
        <creator>Khafiizh Hastuti</creator>
        
        <creator>Sudaryanto Sudaryanto</creator>
        
        <creator>Arry Maulana Syarif</creator>
        
        <subject>Multi-class classification; frequent analysis; Apriori; Symbolic music; Gamelan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>This research proposes a novel method for melody difficulty classification performed using frequent pattern and inter-notes distance analysis. The Apriori algorithm was used to measure the frequency of the notes in the note sequence, in which the melody length is also included in the calculation. In addition, the inter-notes distance analysis was also used to measure the difficulty level of a composition based on the distance between successive notes. The classification was performed for traditional Javanese compositions known as Gamelan music. A symbolic representation was used, in which the Gamelan music sheets were collected as the dataset; experts were asked to divide the compositions based on their difficulty level into basic, intermediate, and advanced classes. Then, the proposed method was implemented to measure the difficulty value of each composition. The difference in the interpretation of the difficulty level between the experts and the difficulty value of the composition is resolved by calculating the mean value to obtain the range of difficulty values in each class. Evaluation was performed using a confusion matrix to measure accuracy, precision, and recall, with the results reaching 82%, 82.1%, and 82%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_15-Melody_Difficulty_Classification_using_Frequent_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Model for Improving the Performance of Knowledge Bases in Real-World Applications by Extracting Semantic Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130214</link>
        <id>10.14569/IJACSA.2022.0130214</id>
        <doi>10.14569/IJACSA.2022.0130214</doi>
        <lastModDate>2022-02-28T09:51:46.2800000+00:00</lastModDate>
        
        <creator>Abdelrahman Elsharif Karrar</creator>
        
        <subject>Semantic information extraction; knowledge base; slot filling; content recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Knowledge Bases are information resources that convert factual knowledge to machine-readable formats to allow users to extract their desired data from multiple sources. The objective of knowledge base population frameworks is to extend KBs with semantic information to solve fundamental artificial intelligence problems such as understanding human knowledge. Information extraction entails the discovery of critical knowledge facts from unstructured text, which is important in the population of knowledge bases. The objective of this paper is to explore the concept of information extraction as a technique for accelerating the performance of knowledge bases with minimal annotation efforts for real-world applications such as content recommendation during a web search. This entails performing slot filling operations for data collection from large KBs and applying probabilistic estimations to determine the accuracy of the new information. The results are then used to explore the feasibility of applying knowledge bases to real-world tasks such as user-centric information access by encoding entities with deep semantic knowledge.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_14-A_Proposed_Model_for_Improving_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LPRNet: A Novel Approach for Novelty Detection in Networking Packets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130213</link>
        <id>10.14569/IJACSA.2022.0130213</id>
        <doi>10.14569/IJACSA.2022.0130213</doi>
        <lastModDate>2022-02-28T09:51:46.2500000+00:00</lastModDate>
        
        <creator>Anshumaan Chauhan</creator>
        
        <creator>Ayushi Agarwal</creator>
        
        <creator>Angel Arul Jothi</creator>
        
        <creator>Sangili Vadivel</creator>
        
        <subject>Novelty detection; deep learning; autoencoders; unsupervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Novelty Detection is the task of recognizing abnormal data points within a given system. Recently, this task has been performed using Deep Learning Autoencoders, but they face several drawbacks, which include the problem of identity mapping, adversarial perturbations, and optimization algorithms. In this paper, we have proposed a novel approach, LPRNet, a Denoising Autoencoder which uses algorithms such as Least Trimmed Squares, Projected Gradient Descent, and Robust Principal Component Analysis to solve the above-mentioned problems. LPRNet is then trained and tested on the NSL-KDD dataset, and experiments have been performed using Accuracy as the performance metric for comparing the existing models with the proposed model. The results show that LPRNet achieves a maximum accuracy of 95.9% and performs better than all the previous state-of-the-art algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_13-LPRNet_A_Novel_Approach_for_Novelty_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Axonal Transports in Time-Lapse Images Obtained from a Microfluidic Culture Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130212</link>
        <id>10.14569/IJACSA.2022.0130212</id>
        <doi>10.14569/IJACSA.2022.0130212</doi>
        <lastModDate>2022-02-28T09:51:46.2500000+00:00</lastModDate>
        
        <creator>Nak Hyun Kim</creator>
        
        <subject>Axonal transports; kymograph; trajectory detection; image sequence analysis; motion parameter extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>In this paper, a procedure is described for tracking moving object trajectories from image sequences acquired from a microfluidic culture platform. Since particles move along the axons, curve structures need to be detected first from the input image sequence. A kymograph analysis technique is applied to detect axon structures from the consolidated image of the input sequence. Horizontally and vertically oriented axons are then detected by applying the process twice to the original and the 90-degree rotated image. Multiple kymographs are generated along the detected axons by projecting image intensity variation through the time-axis. The trajectory detection process is then applied to each kymograph image. To obtain the particle motion information from the entire image sequence, an integration process is applied to each horizontal and vertical kymograph data set. The proposed technique has been applied to image sequences in the present application area. It is demonstrated that practical results can be obtained using time-lapse image sequence data.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_12-Tracking_Axonal_Transports_in_Time_Lapse_Images_Obtained.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Classification Methods for Plants Leaves Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130211</link>
        <id>10.14569/IJACSA.2022.0130211</id>
        <doi>10.14569/IJACSA.2022.0130211</doi>
        <lastModDate>2022-02-28T09:51:46.2170000+00:00</lastModDate>
        
        <creator>Khaled Suwais</creator>
        
        <creator>Khattab Alheeti</creator>
        
        <creator>Duaa Al_Dosary</creator>
        
        <subject>Leaf recognition; feature extraction; leaf features; classifiers; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Plant leaf recognition is an important scientific field concerned with recognizing leaves using image processing techniques. Several methods using different algorithms have been presented to achieve the highest possible accuracy. This paper provides an analytical survey of various image processing methods for recognizing plants through their leaves. These methods help extract useful information for botanists to utilize the medicinal properties of leaves, or for other agricultural and environmental purposes. We also provide insights and a complete review of the different features and classifiers considered by researchers. These features and classifiers are studied in terms of their ability to enhance the accuracy of the classification methods. Our analysis shows that both Support Vector Machines (SVM) and Convolutional Neural Networks (CNN) are dominant among the other methods in terms of accuracy.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_11-A_Review_on_Classification_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Applicability of 1D-CNN and LSTM to Predict Horizontal Displacement of Retaining Wall According to Excavation Work</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130210</link>
        <id>10.14569/IJACSA.2022.0130210</id>
        <doi>10.14569/IJACSA.2022.0130210</doi>
        <lastModDate>2022-02-28T09:51:46.2030000+00:00</lastModDate>
        
        <creator>Seunghwan Seo</creator>
        
        <creator>Moonkyung Chung</creator>
        
        <subject>Excavation; wall displacement; neural network; prediction wall deflection; CNN-LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>During excavation works in downtown areas, the stability and safety of the excavation and surrounding constructions are crucial, and continuous wall structures with varying structural components are commonly used for this purpose. Most of the current models used for this task are complex, and their accepted parameters do not have a clear physical meaning. Moreover, accurate ground movement forecasts are challenging due to nonlinear and inelastic soil behavior. Therefore, this study proposes a method to predict the lateral displacement of the braced wall at each stage of excavation using all the basic information necessary for braced wall design, including ground information of the excavation site, support methods such as the type of brace, location, and stiffness, information about neighboring buildings, and the results of numerical analysis. A one-dimensional convolutional neural network and a long short-term memory network are used for estimation and prediction to develop an optimal prediction model based on well-refined but limited data. The applicability of the proposed approach to safety management was confirmed by predicting the horizontal displacement of the braced wall for each stage of excavation. The proposed model can be used to predict the stability of the wall at each excavation step and reduce accident risks, such as collapse of the retaining wall, which may occur during construction.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_10-Evaluation_of_Applicability_of_1D_CNN_and_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Latent Semantic Analysis and Vector Space Model for Automatic Identification of Competent Reviewers to Evaluate Papers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130209</link>
        <id>10.14569/IJACSA.2022.0130209</id>
        <doi>10.14569/IJACSA.2022.0130209</doi>
        <lastModDate>2022-02-28T09:51:46.1870000+00:00</lastModDate>
        
        <creator>Yordan Kalmukov</creator>
        
        <subject>Latent semantic analysis; vector space model; automatic assignment of reviewers to papers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The assignment of reviewers to papers is one of the most important and challenging tasks in organizing scientific events. A major part of it is the correct identification of proper reviewers. This article presents a series of experiments aiming to test whether latent semantic analysis (LSA) can be reliably used to identify competent reviewers to evaluate submitted papers. It also compares the performance of LSA, the vector space model (VSM), and the method of explicit document description by a taxonomy of keywords in computing accurate similarity factors between papers and reviewers. All three methods share the same input datasets, taken from real-life conferences, and the produced paper-reviewer similarities are evaluated with the same evaluation methods, allowing a fair and objective comparison between them. Experimental results show that in most cases LSA outperforms VSM and can even slightly outperform the explicit document description by a taxonomy of keywords if the term-document matrix is composed of TF-IDF values rather than the raw number of term occurrences.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_9-Comparison_of_Latent_Semantic_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Computational Intelligence Algorithm Implemented in Indoor Environments based on Machine Learning for Lighting Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130208</link>
        <id>10.14569/IJACSA.2022.0130208</id>
        <doi>10.14569/IJACSA.2022.0130208</doi>
        <lastModDate>2022-02-28T09:51:46.1530000+00:00</lastModDate>
        
        <creator>Mohammad Ehsanul Alim</creator>
        
        <creator>Md. Nazmus Sakib Bin Alam</creator>
        
        <creator>Ihab Hassoun</creator>
        
        <subject>Machine learning algorithms; indoor lighting control system; internet of things (IoT); ultra-wide band sensors; lux sensors; remote access facility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2022.0130208.retraction</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_8-Computational_Intelligence_Algorithm_Implemented.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining Multiple Seismic Attributes using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130207</link>
        <id>10.14569/IJACSA.2022.0130207</id>
        <doi>10.14569/IJACSA.2022.0130207</doi>
        <lastModDate>2022-02-28T09:51:46.1400000+00:00</lastModDate>
        
        <creator>Abrar Alotaibi</creator>
        
        <creator>Mai Fadel</creator>
        
        <creator>Amani Jamal</creator>
        
        <creator>Ghadah Aldabbagh</creator>
        
        <subject>CNNs; neural networks; seismic attributes; seismic images; image fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Seismic exploration involves estimating the properties of the Earth&#39;s subsurface from reflected seismic waves and then visualizing the resulting seismic data and its attributes. These data and derived seismic attributes provide complementary information and reduce the amount of time and effort required of the geoscientist. Multiple conventional methods to combine various seismic attributes exist, but the number of attributes is always limited, and the quality of the resulting image varies. To overcome these limitations, this paper proposes using Deep Learning-based image fusion models to combine seismic attributes. By using the feature extraction capabilities of convolutional neural networks (CNNs), the resulting image quality is better than that obtained with conventional methods. This work implemented two models and conducted a number of experiments using them. Several techniques were used to evaluate the results, such as visual inspection and image fusion metrics. The experiments show that the Image Fusion Framework Based on CNN (IFCNN) outperformed all other models in both quantitative and visual analysis. Its QAB/F and MS-SSIM scores are 50% and 10% higher, respectively, than all other models. IFCNN was also evaluated against the current state-of-the-art solution, Octree, in a comparative study. IFCNN overcomes the limitation of the Octree method and succeeds in combining nine seismic attributes with better combining quality, with QAB/F and NAB/F scores being 40% higher.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_7-Combining_Multiple_Seismic_Attributes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Consumer Network Structure for Cosmetic Brands on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130206</link>
        <id>10.14569/IJACSA.2022.0130206</id>
        <doi>10.14569/IJACSA.2022.0130206</doi>
        <lastModDate>2022-02-28T09:51:46.1230000+00:00</lastModDate>
        
        <creator>Yuzuki Kitajima</creator>
        
        <creator>Kohei Otake</creator>
        
        <creator>Takashi Namatame</creator>
        
        <subject>Social networking services; community structure; network analysis; consumer network; influencer marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Since the early 2000s, the Internet has become increasingly popular with the development of information dissemination technology and as a platform for interaction. Accordingly, the penetration rate of Social Networking Services (SNSs) is also increasing. Using accounts created on SNSs, companies can disseminate information and communicate with users for marketing purposes. Moreover, influencer marketing activities use influencers who are highly influential in their surroundings for marketing via SNSs. In this study, we aim to identify influencers on Twitter and the consumer network structures of six cosmetic brands. Specifically, a consumer network for each of the six cosmetic brands is created using follower data obtained from Twitter to identify its network structure. Furthermore, brand influencers are identified. A consumer network covering all six cosmetic brands is also created to identify influencers in the cosmetics industry. We compare the influencers of individual brands with the influencers of the entire industry to examine any differences.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_6-Evaluation_of_Consumer_Network_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Adoption of Digital Games Among Older Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130205</link>
        <id>10.14569/IJACSA.2022.0130205</id>
        <doi>10.14569/IJACSA.2022.0130205</doi>
        <lastModDate>2022-02-28T09:51:46.0930000+00:00</lastModDate>
        
        <creator>Nurul Farinah Mohsin</creator>
        
        <creator>Suriati Khartini Jali</creator>
        
        <creator>Sylvester Arnab</creator>
        
        <creator>Mohamad Imran Bandan</creator>
        
        <creator>Minhua Ma</creator>
        
        <subject>Digital games; Malaysia; older adults; technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>The revolution of technology brings many benefits to diverse populations. Digital games are one of the digital technologies that have the potential to facilitate older adults’ daily routines. However, some older adults face challenges in adopting digital games in their daily lives, one of which is that most commercial games are not suitable for older people. This paper discusses the investigation into the challenges associated with older adults’ adoption of digital games and their interaction and experiences with digital games, and specifically explores andragogical perspectives and game design attributes. A set of questionnaires consisting of open-ended and close-ended questions was distributed, targeting older adults across Malaysia, using an online, non-probability sampling technique. 81 respondents were recruited, and 56 respondents (n=56) were eligible for this study. Four participants were recruited for an informal interview session. The analysis of the results indicates that older adults’ perception of digital games and game design aspects are the major factors influencing their digital game adoption. Game design is important to attract more older adults to experience and interact with digital games.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_5-The_Adoption_of_Digital_Games_Among_Older_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended Kalman Filter Sensor Fusion in Practice for Mobile Robot Localization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130204</link>
        <id>10.14569/IJACSA.2022.0130204</id>
        <doi>10.14569/IJACSA.2022.0130204</doi>
        <lastModDate>2022-02-28T09:51:46.0770000+00:00</lastModDate>
        
        <creator>Alaa Aldeen Housein</creator>
        
        <creator>Gao Xingyu</creator>
        
        <creator>Weiming Li</creator>
        
        <creator>Yang Huang</creator>
        
        <subject>Autonomous navigation; Kalman filter; self-driving vehicle; simultaneous localization and mapping; occupancy grid map; ROS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Self-driving vehicles and autonomously guided robots could be very beneficial to today&#39;s civilization. However, the mobile robot&#39;s position must be accurately known, which is referred to as localization, the task of tracking the dynamic position, in order for the robot to be active and useful. This paper presents a robot localization method with a known starting location using a real-time reconstructed environment model represented as an occupancy grid map. The extended Kalman filter (EKF) is formulated as a nonlinear model-based estimator to fuse odometry and a LIDAR range finder sensor. Because the occupancy grid map of the area is provided, only the inaccuracies of the LIDAR range finder are considered. The experimental results on the “turtlebot” robot using the robot operating system (ROS) show a significant improvement in the pose of the robot using the Kalman filter compared with odometry alone. This paper also establishes the framework for using a Kalman filter for state estimation, providing all relevant mathematical equations for a differential drive robot; this technique can be applied to a variety of mobile robots.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_4-Extended_Kalman_Filter_Sensor_Fusion_in_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Mobility Supporting Tunneling Protocols in Wireless Cellular Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130203</link>
        <id>10.14569/IJACSA.2022.0130203</id>
        <doi>10.14569/IJACSA.2022.0130203</doi>
        <lastModDate>2022-02-28T09:51:46.0470000+00:00</lastModDate>
        
        <creator>Zeeshan Abbas</creator>
        
        <creator>Wonyong Yoon</creator>
        
        <subject>Tunneling; mobility; 3GPP; WLAN; interworking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>With recent technology advancements, mobility support is one of the major requirements of any wireless or mobile network. Continuous movement of a mobile from one cell to another, or from one network to another, requires continuous mobility support. Previously, tunneling protocols were employed to support a UE’s inter- or intra-network mobility. More specifically, GRE, GTP, MIPv6, or PMIPv6 were employed for mobility support. In tunneling, one protocol is encapsulated within another to deliver IP packets during inter-network or intra-network handover. This paper presents an overview and comparison of tunneling protocols in terms of the usage scenario of each protocol, tunnel establishment, data transfer, and tunnel release. 3GPP and WLAN interworking and GAN-based usage scenarios and their supported tunneling mechanisms are discussed. Some insights regarding security, multiplexing, multiprotocol, and packet sequencing support are also provided for each tunneling protocol.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_3-A_Review_of_Mobility_Supporting_Tunneling_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robotic Ad-hoc Networks Connectivity Maintenance based on RF Signal Strength Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130202</link>
        <id>10.14569/IJACSA.2022.0130202</id>
        <doi>10.14569/IJACSA.2022.0130202</doi>
        <lastModDate>2022-02-28T09:51:46.0300000+00:00</lastModDate>
        
        <creator>Mustafa Ayad</creator>
        
        <creator>Richard Voyles</creator>
        
        <creator>Mohamed Ayad</creator>
        
        <subject>RF mapping recognition; link connectivity; gradient algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Network connectivity preservation is one of the substantial factors in achieving efficient mobile robot team maneuverability. We present a connectivity maintenance method for a robot team&#39;s communication. The proposed approach augments the Radio Frequency Mapping Recognition (RFMR) method with a signal strength gradient descent approach, with the overall goal of creating a Proactive Motion Control Algorithm (PMCA). The PMCA controls and helps strengthen the connectivity of mobile communicating robots in the presence of Radio Frequency (RF) obstacles. The RFMR method takes advantage of Hidden Markov Model (HMM) results, which assist in learning electromagnetic environments from measurements of RF signal strength. The HMM classification results lead the robots to decide whether to continue on the current trajectory to avoid the obstacle shadow or to move back to positions with desirably robust Signal Strength (SS). In both cases, the robot runs the gradient approach to determine the signal change trend and drive the robot toward the direction of strong SS to maintain link connectivity. The PMCA, based on the results of the RFMR and gradient approaches, promises to preserve the robots&#39; motion control and maintain link connectivity.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_2-Robotic_Ad_hoc_Networks_Connectivity_Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Random-Valued Impulse Noise Detection and Removal based on Local Statistics of Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130201</link>
        <id>10.14569/IJACSA.2022.0130201</id>
        <doi>10.14569/IJACSA.2022.0130201</doi>
        <lastModDate>2022-02-28T09:51:45.9670000+00:00</lastModDate>
        
        <creator>Mickael Aghajarian</creator>
        
        <creator>John E. McInroy</creator>
        
        <subject>Random-valued impulse noise; noise detection; image restoration; modified weighted mean filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(2), 2022</description>
        <description>Random-valued impulse noise removal from images is a challenging task in the field of image processing and computer vision. In this paper, an effective three-step noise removal method was proposed using local statistics of grayscale images. Unlike most existing denoising algorithms that assume the noise density is known, our method estimated the noise density in the first step. Based on the estimated noise density, a noise detector was implemented to detect corrupted pixels in the second step. Finally, a modified weighted mean filter was utilized to restore the detected noisy pixels while leaving the noise-free pixels unchanged. The noise removal performance of our method was compared with 10 well-known denoising algorithms. Experimental results demonstrated that our proposed method outperformed other denoising algorithms in terms of noise detection and image restoration in the vast majority of the cases.</description>
        <description>http://thesai.org/Downloads/Volume13No2/Paper_1-Random_Valued_Impulse_Noise_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ASM-ROBOT: A Cyber-Physical Home Automation Controller with Memristive Reconfigurable State Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01301103</link>
        <id>10.14569/IJACSA.2022.01301103</id>
        <doi>10.14569/IJACSA.2022.01301103</doi>
        <lastModDate>2022-02-01T09:13:50.5470000+00:00</lastModDate>
        
        <creator>Kennedy Chinedu Okafor</creator>
        
        <creator>Omowunmi Mary Longe</creator>
        
        <subject>Cloud computing; cyber-physical systems; complex robot; computational science; IoT; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In the next 5 to 10 years, digital Artificial Intelligence with Machine Circuit Learning Algorithms (MCLA) will become mainstream in complex automated robots. Its power concerns and ethical perspectives, including the issues of digital sensing, actuation, mobility, efficient process computation, and wireless communication, will require advanced neuromorphic process-variable controls. Existing home automation robots lack memristive associative memory. This work presents a Cyber-Physical Home Automation System (CPHAS) using a Memristive Reconfigurable Algorithmic State Machine (MRASM) chart. A process control architecture that supports Concurrent Wireless Data Streams and Power-Transfer (CWDSPT) is developed. Unlike legacy systems with power-splitting (PS) and time-switching (TS) controls, the MRASM-ROBOT explores granular wireless signal controls through an unmodulated high-power continuous wave (CW). It transmits process variables using an Orthogonal Space-Time Block Code (OSTBC) for interference reduction. The CWDSPT transmitter and receiver circuits for signal processing are designed and implemented with noise-error reduction during telemetry data decoding. Received signals are error-buffered while gathering the status of control variables. Memristive neuromorphic transceiver circuits are introduced for computational acceleration in the design. The hardware circuit design is tested for system reliability considering the derived schematic models for all process variables. Under small-range space diversity, the system demonstrated significant memory stabilization at the synchronous iteration of the synaptic circuitry.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_103-ASM_ROBOT_A_Cyber_Physical_Home_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing and Proposing Countermeasures for Cyber-Security Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01301102</link>
        <id>10.14569/IJACSA.2022.01301102</id>
        <doi>10.14569/IJACSA.2022.01301102</doi>
        <lastModDate>2022-01-31T10:55:48.7730000+00:00</lastModDate>
        
        <creator>Ali Al-Zahrani</creator>
        
        <subject>Cyber-attacks; cyber-security; risk assessment; countermeasures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Cyber-attacks on IT domain infrastructure directly affect the security of businesses’ operational processes, potentially leading to system failure. Some industries are at higher risk than others due to the sensitivity of their data, including the transportation industry, which has recently moved from traditional data management to digitalization. This study aims to identify the main cyber threats in the transportation sector by analyzing related works and highlighting the main countermeasures used to respond to such threats as well as to enhance overall cybersecurity. This paper presents a comprehensive cybersecurity risk assessment for transportation companies, identifying the most common attacks and proposing methods to minimize risk as much as possible. A risk assessment analysis was prepared by industry experts and included previous cyberattack scenarios. Our results identify the most critical attacks on a transportation company’s booking system and recommend suitable countermeasures to minimize the risk of those attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_102-Assessing_and_Proposing_Countermeasures_for_Cyber_security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Low-Cost FPGA Micro-Server for Big Data Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01301101</link>
        <id>10.14569/IJACSA.2022.01301101</id>
        <doi>10.14569/IJACSA.2022.01301101</doi>
        <lastModDate>2022-01-31T10:55:48.7600000+00:00</lastModDate>
        
        <creator>Mohamed Abouzahir</creator>
        
        <creator>Khalifa Elmansouri</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Mustapha Ramzi</creator>
        
        <subject>Arria 10 FPGA (Field Programmable Gate Arrays); GPGPU (General Purpose Graphics Processing Unit); big data; parallel computing; High-Level Synthesis (HLS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The development of big data in the era of data explosion, and the growing demand in recent years for micro-servers in place of traditional servers to handle lightweight tasks, raise the question of how to integrate and make use of these two important domains. During the same era, CPU performance growth has reached a certain maturity. To overcome these issues and reach high-performance computing, a new trend is to use multiple processing units or heterogeneous components in micro-servers to reduce computational complexity. The implementation of big data processing algorithms on embedded heterogeneous architectures raises new challenges due to the constraints of the underlying system-on-chip architecture, which require special attention and impose new demands on our work. In this article, we focus on using an embedded FPGA accelerator to address this problem. Precisely, we prototype a micro-server for big data processing on an FPGA and compare its performance with a high-end GPGPU using existing benchmarks. The implementation on the FPGA uses OpenCL-based High-Level Synthesis (HLS) instead of a traditional hardware description language. The obtained results show that the FPGA is an interesting alternative and can be a promising platform for designing a micro-server to process huge amounts of data, in particular with the emerging technologies for FPGA programming using the HLS approach and by adopting OpenCL optimization strategies.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_101-Towards_a_Low_Cost_FPGA_Micro_Server_for_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation Framework for Cloud Forensics using Dynamic Genetic-based Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.01301100</link>
        <id>10.14569/IJACSA.2022.01301100</id>
        <doi>10.14569/IJACSA.2022.01301100</doi>
        <lastModDate>2022-01-31T10:55:48.7430000+00:00</lastModDate>
        
        <creator>Mohammed Y. Alkhanafseh</creator>
        
        <creator>Mohammad Qatawneh</creator>
        
        <creator>Wesam Almobaideen</creator>
        
        <subject>Cloud computing forensics; genetic clustering algorithms; genetic dynamic clustering; forensics framework; digital forensics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Cloud computing allows a pool of resources, such as storage, computation power, and communication bandwidth, to be shared and accessed by many users from different locations. The high dependency on shared resources among different cloud users allows some attackers to hide and commit crimes using cloud resources; as a result, cloud computing forensics has become essential. Many solutions and frameworks for cloud computing forensics have been developed to deal with cloud-based crimes. However, the proposed solutions and frameworks face many problems and issues. In this paper, a new framework for cloud computing forensics is proposed to enhance the performance and accuracy of the investigation process by adding a new stage to the conventional stages. This new stage implements a new matching method based on the LSH algorithm. The evaluation results of the proposed framework show improved matching and accurate cluster retrieval during the collection process.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_100-Investigation_Framework_for_Cloud_Forensics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Arabic Cognitive Distortion Classification in Twitter using BERTopic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130199</link>
        <id>10.14569/IJACSA.2022.0130199</id>
        <doi>10.14569/IJACSA.2022.0130199</doi>
        <lastModDate>2022-01-31T10:55:48.7430000+00:00</lastModDate>
        
        <creator>Fatima Alhaj</creator>
        
        <creator>Ali Al-Haj</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Riad Jabri</creator>
        
        <subject>Arabic tweets; cognitive distortions’ classification; machine learning; social media; supervised learning; unsupervised learning; transformers; BERTopic; topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Social media platforms allow users to share thoughts, experiences, and beliefs. These platforms represent a rich resource for natural language processing techniques to make inferences in the context of cognitive psychology. Some inaccurate and biased thinking patterns are defined as cognitive distortions. Detecting these distortions helps users restructure how they perceive thoughts in a healthier way. This paper proposes a machine learning-based approach to improve the classification of cognitive distortions in Arabic content on Twitter. One of the challenges facing this task is text shortness, which results in a sparsity of co-occurrence patterns and a lack of context information (semantic features). The proposed approach enriches text representation by defining the latent topics within tweets. Although classification is a supervised learning concept, the enrichment step uses unsupervised learning. The proposed algorithm utilizes transformer-based topic modeling (BERTopic). It employs two types of document representations and performs averaging and concatenation to produce contextual topic embeddings. A comparative analysis of F1-score, precision, recall, and accuracy is presented. The experimental results demonstrate that our enriched representation outperformed the baseline models by different rates. These encouraging results suggest that using the latent topic distribution obtained from the BERTopic technique can improve the classifier’s ability to distinguish between different cognitive distortion (CD) categories.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_99-Improving_Arabic_Cognitive_Distortion_Classification_in_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NLI-GSC: A Natural Language Interface for Generating SourceCode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130198</link>
        <id>10.14569/IJACSA.2022.0130198</id>
        <doi>10.14569/IJACSA.2022.0130198</doi>
        <lastModDate>2022-01-31T10:55:48.7270000+00:00</lastModDate>
        
        <creator>Aaqib Ahmed R.H. Ansari</creator>
        
        <creator>Deepali R. Vora</creator>
        
        <subject>Natural Language Processing (NLP); Natural Language Interface (NLI); Entity Recognition (ER); Artificial Intelligence (AI); source code generation; pseudocode generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>There are many different programming languages, and each has its own structure or way of writing code, so it becomes difficult to learn and frequently switch between them. For this reason, a person working with multiple programming languages needs to consult documentation frequently, which costs time and effort. In the past few years, there has been a significant increase in the number of papers published on this topic, each providing a unique solution to this problem. Many of these papers apply NLP concepts in unique configurations to get the desired results. Some have used AI along with NLP to train the system to generate source code in a specific language, and some have trained the AI directly without pre-processing the dataset with NLP. All of these papers face two problems: a lack of a proper dataset for this particular application, and the fact that each can convert natural language into the source code of only one specified programming language. The proposed system shows that a language-independent solution is a feasible alternative for writing source code without full knowledge of a programming language. The proposed system uses Natural Language Processing to convert natural language into programming-language-independent pseudocode using custom Named Entity Recognition and saves it in XML (eXtensible Markup Language) format as an intermediate step. Then, using traditional programming, the system converts the generated pseudocode into programming-language-dependent source code. In this paper, another novel method is proposed to create a dataset from scratch using a predefined structure filled with predefined keywords, creating unique combinations for the training dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_98-NLI_GSC_A_Natural_Language_Interface_for_Generating_SourceCode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Predicting Blood Flow Characteristics through Double Stenosed Artery from Computational Fluid Dynamics Simulations using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130197</link>
        <id>10.14569/IJACSA.2022.0130197</id>
        <doi>10.14569/IJACSA.2022.0130197</doi>
        <lastModDate>2022-01-31T10:55:48.7100000+00:00</lastModDate>
        
        <creator>Ishat Raihan Jamil</creator>
        
        <creator>Mayeesha Humaira</creator>
        
        <subject>Double stenosed artery; CFD; neural network; LSTM; GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Establishing patient-specific finite element analysis (FEA) models for computational fluid dynamics (CFD) of double stenosed artery models involves time and effort, restricting physicians’ ability to respond quickly in time-critical medical applications. Such issues might be addressed by training deep learning (DL) models to learn and predict blood flow characteristics using a dataset generated by CFD simulations of simplified double stenosed artery models with different configurations. When blood flow patterns are compared through an actual double stenosed artery model, derived from IVUS imaging, it is revealed that the sinusoidal approximation of stenosed neck geometry, which has been widely used in previous research works, fails to effectively represent the effects of a real constriction. As a result, a novel geometric representation of the constricted neck is proposed which, in terms of a generalized simplified model, outperforms the former assumption. The sequential change in artery lumen diameter and flow parameters along the length of the vessel presented opportunities for the use of LSTM and GRU DL models. However, with the small dataset of short lengths of doubly constricted blood arteries, the basic neural network model outperforms the specialized RNNs for most flow properties. LSTM, on the other hand, performs better for predicting flow properties with large fluctuations, such as varying blood pressure over the length of the vessels. Despite having good overall accuracies in training and testing across all the properties for the vessels in the dataset, the GRU model underperforms for an individual vessel flow prediction in all cases. The results also point to the need for individually optimized hyperparameters for each property in any model rather than aiming to achieve overall good performance across all outputs with a single set of hyperparameters.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_97-Modeling_and_Predicting_Blood_Flow_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Applied to Prevention and Mental Health Care in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130196</link>
        <id>10.14569/IJACSA.2022.0130196</id>
        <doi>10.14569/IJACSA.2022.0130196</doi>
        <lastModDate>2022-01-31T10:55:48.7100000+00:00</lastModDate>
        
        <creator>Edwin Kcomt Ponce</creator>
        
        <creator>Melissa Flores Cruz</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Artificial intelligence; machine learning; mental health; scrum; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The present research aims to develop an application that allows the early and timely detection of signs of mental health problems among citizens. An agile methodology was used, with its Scrum framework, developing its four steps. In addition, technological tools such as artificial intelligence, mobile applications, social networks, and the Python programming language were used, along with SQL Server, Android Studio, and the Marvel application, the latter for the design of the prototypes, through the method of sentiment analysis and machine learning, in order to create a mobile application that is as accurate as possible in its results. For this, several types of algorithms were evaluated, and the most appropriate one was selected, since it works based on information collected through the social networks Facebook and Twitter. The result obtained was an application that uses machine learning for the prevention and care of mental health in Peru, thus benefiting the citizens of society.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_96-Machine_Learning_Applied_to_Prevention_and_Mental_Health_Care.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Long Tail Products Recommendation using Tripartite Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130195</link>
        <id>10.14569/IJACSA.2022.0130195</id>
        <doi>10.14569/IJACSA.2022.0130195</doi>
        <lastModDate>2022-01-31T10:55:48.6970000+00:00</lastModDate>
        
        <creator>Arlisa Yuliawati</creator>
        
        <creator>Hamim Tohari</creator>
        
        <creator>Rahmad Mahendra</creator>
        
        <creator>Indra Budi</creator>
        
        <subject>Long tail; recommender system; tripartite graph; random walker; hitting time; absorbing time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The growth in the number of e-commerce users and the items being sold presents both opportunities and challenges for e-commerce marketplaces. Due to the long-tail phenomenon, marketplaces need to pay attention to the high number of rarely sold items. The failure to sell these products would be a threat for some B2C e-commerce companies that apply a non-consignment sale system, because the products cannot be returned to the manufacturer. Thus, it is important for the marketplace to boost the promotion of long-tail products. The objective of this study is to adapt a graph-based technique to build a recommendation system for long-tail products. The sets of products, customers, and categories are represented as nodes in a tripartite graph. The Absorbing Time and Hitting Time algorithms are employed together with a Markov Random Walker to traverse the nodes in the graph. We find that Absorbing Time achieves better accuracy than Hitting Time for recommending long-tail products.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_95-On_the_Long_Tail_Products_Recommendation_using_Tripartite_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning based Performance Comparison of the Pre-Trained Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130193</link>
        <id>10.14569/IJACSA.2022.0130193</id>
        <doi>10.14569/IJACSA.2022.0130193</doi>
        <lastModDate>2022-01-31T10:55:48.6800000+00:00</lastModDate>
        
        <creator>Jayapalan Senthil Kumar</creator>
        
        <creator>Syahid Anuar</creator>
        
        <creator>Noor Hafizah Hassan</creator>
        
        <subject>Transfer learning; deep neural networks; image classification; Convolutional Neural Network (CNN) models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Deep learning has grown tremendously in recent years, having a substantial impact on practically every discipline. Transfer learning allows us to transfer the knowledge of a model that has been previously trained for a particular task to a new model that is attempting to solve a related but not identical problem. Specific layers of a pre-trained model must be retrained while the others remain unmodified to adapt it to a new task effectively. There are typical issues in selecting the layers to be enabled for training and the layers to be frozen, and in setting hyperparameter values, and all these concerns have a substantial effect on training capabilities as well as classification performance. The principal aim of this study is to compare the network performance of selected pre-trained models based on transfer learning to help in selecting a suitable model for image classification. To accomplish this goal, we examined the performance of five pre-trained networks: SqueezeNet, GoogleNet, ShuffleNet, Darknet-53, and Inception-V3, with different epochs, learning rates, and mini-batch sizes, and evaluated the networks’ performance using a confusion matrix. Based on the experimental findings, Inception-V3 achieved the highest accuracy of 96.98%, as well as the best other evaluation metrics, including precision, sensitivity, specificity, and F1-score of 92.63%, 92.46%, 98.12%, and 92.49%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_93-Transfer_Learning_based_Performance_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality: Prototype for the Teaching-Learning Process in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130194</link>
        <id>10.14569/IJACSA.2022.0130194</id>
        <doi>10.14569/IJACSA.2022.0130194</doi>
        <lastModDate>2022-01-31T10:55:48.6800000+00:00</lastModDate>
        
        <creator>Shalom Adonai Huaraz Morales</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Enrique Lee Huamani</creator>
        
        <subject>Augmented reality; teaching; education; scrum; unity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In recent years, augmented reality has played an important role in the world of mobile technology, since it is a way to facilitate teaching-learning processes. An easier teaching-learning process generates a great contribution in companies, either by creating opportunities or by changing the way companies approach and interact with their end customers, which can mean remarkable growth for the organization. That is why, in this work, an augmented reality prototype has been built using the Scrum methodology at the University of Sciences and Humanities of Lima-Peru, with a focus on the nursing career. The problem is the limited learning that students acquire in the classroom, which is why we make use of augmented reality to improve the education provided to university students. The result obtained from the case study was an augmented reality prototype for the improvement of education at the University of Sciences and Humanities of Lima-Peru, which shows a virtual model (depending on the image shown) able to interact with the user, making it attractive and motivating for the student. This prototype was achieved using Unity (a 3D development platform), Vuforia (an augmented reality software development kit), Microsoft Visual Studio (an integrated development environment), the Scrum methodology (Scrum pillars, Product Backlog, Product Backlog estimation, velocity, backlog prioritization, and sprint planning), and the C# language (C Sharp).</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_94-Augmented_Reality_Prototype_for_the_Teaching_Learning_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Keyphrases Concentrated Area Identification from Academic Articles as Feature of Keyphrase Extraction: A New Unsupervised Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130192</link>
        <id>10.14569/IJACSA.2022.0130192</id>
        <doi>10.14569/IJACSA.2022.0130192</doi>
        <lastModDate>2022-01-31T10:55:48.6630000+00:00</lastModDate>
        
        <creator>Mohammad Badrul Alam Miah</creator>
        
        <creator>Suryanti Awang</creator>
        
        <creator>Md. Saiful Azad</creator>
        
        <creator>Md Mustafizur Rahman</creator>
        
        <subject>Keyphrase concentrated area; KCA identification; feature extraction; data processing; keyphrase extraction; curve fitting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The extraction of high-quality keywords and high-level summarisation of documents have become more difficult in current research due to technological advancements and the exponential expansion of textual data and digital sources. Using features for keyphrase extraction to extract high-quality keywords and summarise documents at a high level is becoming more popular. A new unsupervised keyphrase concentrated area (KCA) identification approach is proposed in this study as a feature of keyphrase extraction: it is corpus, domain, and language independent; document length-free; and usable by both supervised and unsupervised techniques. The proposed system has three phases: data pre-processing, data processing, and KCA identification. The system employs various text pre-processing methods before transferring the acquired datasets to the data processing step. The pre-processed data is subsequently used during the data processing step. Statistical approaches, curve plotting, and curve fitting techniques are applied in the KCA identification step. The proposed system is then tested and evaluated using benchmark datasets collected from various sources. To demonstrate the effectiveness, merits, and significance of our proposed approach, we compared it with other proposed techniques. The experimental results on eleven (11) datasets show that the proposed approach effectively recognizes the KCA from articles and significantly enhances current keyphrase extraction methods across various text sizes, languages, and domains.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_92-Keyphrases_Concentrated_Area_Identification_from_Academic_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AuSDiDe: Towards a New Authentication System for Distributed and Decentralized Structure based on Shamir’s Secret Sharing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130191</link>
        <id>10.14569/IJACSA.2022.0130191</id>
        <doi>10.14569/IJACSA.2022.0130191</doi>
        <lastModDate>2022-01-31T10:55:48.6500000+00:00</lastModDate>
        
        <creator>Omar SEFRAOUI</creator>
        
        <creator>Afaf Bouzidi</creator>
        
        <creator>Kamal Ghoumid</creator>
        
        <creator>El Miloud Ar-Reyouchi</creator>
        
        <subject>Shamir’s secret sharing; authentication system; decentralized; distributed; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Nowadays, connected devices are growing exponentially, and their data traffic has increased unprecedentedly. Information systems security and cybersecurity are critical because data typically contain sensitive personal information, requiring high data protection. An authentication system manages and controls access to these data, allowing the system to ensure the legitimacy of the access request. Most current identification and authentication systems are based on a centralized architecture. However, some concepts, such as cloud computing and blockchain, use distributed and decentralized architectures, respectively. In the next generation of the Internet and Web3, users will own platforms and applications without a central server. This paper proposes AuSDiDe, a new authentication system for distributed and decentralized structures. This solution divides and shares keys across different distributed nodes. The main objective of AuSDiDe is to securely store and manage passwords, private keys, and authentication based on the Shamir secret sharing algorithm. This new proposal significantly reinforces data protection in information security.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_91-AuSDiDe_Towards_a_New_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Traffic Split Routing Heuristic for Layer 2 and Layer 1 Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130190</link>
        <id>10.14569/IJACSA.2022.0130190</id>
        <doi>10.14569/IJACSA.2022.0130190</doi>
        <lastModDate>2022-01-31T10:55:48.6330000+00:00</lastModDate>
        
        <creator>Ahlem Harchay</creator>
        
        <creator>Abdelwahed Berguiga</creator>
        
        <creator>Ayman Massaoudi</creator>
        
        <subject>Virtual private network; enhanced traffic split routing; quality of service; shortest path routing; layer 1 VPN; layer 2 VPN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Virtual Private Networks (VPNs) now occupy an important place in computer and communication networks. A virtual private network is the extension of a private network that encompasses links across shared or public networks, such as the Internet. A VPN is a transmission network service for businesses with two or more remote locations. It offers a range of access speeds and options depending on the needs of each site. This service supports voice, data, and video and is fully managed by the service provider, including the routing equipment installed at the customer’s premises. Owing to these characteristics, VPNs were widely deployed during the COVID-19 pandemic, offering extensive services to connect roaming employees to their corporate networks with access to all company information and applications. Hence, VPNs focus on two important issues: security and Quality of Service. The latter relates directly to network performance metrics such as delay, bandwidth, throughput, and jitter. Traditionally, Internet Service Providers (ISPs) accommodate static point-to-point resource demand, known as Layer 1 VPN (L1VPN). The primary disadvantage of L1VPN is that data plane connectivity does not guarantee control plane connectivity. Layer 2 VPN (L2VPN) is designed to provide end-to-end layer 2 connections by transporting layer 2 frames between distributed sites. An L2VPN is suitable for supporting heterogeneous higher-level protocols. In this paper, we propose an enhanced routing protocol based on the Traffic Split Routing (TSR) and Shortest Path Routing (SPR) algorithms. Simulation results show that our proposed scheme outperforms Shortest Path Routing (SPR) in terms of network resource usage. Indeed, 72% of network links are used by the Enhanced Traffic Split Routing, compared to only 44% with Shortest Path Routing (SPR).</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_90-An_Enhanced_Traffic_Split_Routing_Heuristic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Recovery Approach for Fault-Tolerant IoT Node</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130189</link>
        <id>10.14569/IJACSA.2022.0130189</id>
        <doi>10.14569/IJACSA.2022.0130189</doi>
        <lastModDate>2022-01-31T10:55:48.6330000+00:00</lastModDate>
        
        <creator>Perigisetty Vedavalli</creator>
        
        <creator>Deepak Ch</creator>
        
        <subject>Internet of things; data recovery; RAID; node failures; reliability; network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The Internet of Things (IoT) has a wide range of applications in many sectors, such as industry, health care, homes, the military, and agriculture. IoT-based safety-critical applications in particular must be highly secure and reliable. Such applications need to operate continuously even in the presence of errors and faults. In safety-critical IoT applications, maintaining data reliability and security is the critical task. IoT suffers from node failures due to limited resources and the nature of deployment, which consequently results in data loss. This paper proposes a Data Recovery Approach for Fault-Tolerant (DRAFT) IoT node algorithm, which is fully distributed, with data replication and recovery implemented through redundant local database storage on other nodes in the network. DRAFT ensures high data availability even in the presence of node failures. When an IoT node fails in any cluster in the network, its data can be retrieved from redundant storage with the help of neighbor nodes in the cluster. The proposed algorithm is simulated for 100-150 IoT nodes and improves network lifetime and throughput by 5%. Performance metrics such as Mean Time to Data Loss (MTTDL), throughput, network lifetime, and reliability are computed, and the results are found to be improved.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_89-Data_Recovery_Approach_for_Fault_Tolerant_IoT_Node.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis about Benefits of Software-Defined Wide Area Network: A New Alternative for WAN Connectivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130188</link>
        <id>10.14569/IJACSA.2022.0130188</id>
        <doi>10.14569/IJACSA.2022.0130188</doi>
        <lastModDate>2022-01-31T10:55:48.6170000+00:00</lastModDate>
        
        <creator>Catherine Janiré Mena Diaz</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Javier Gustavo Utrilla Arellano</creator>
        
        <creator>Miguel Angel Cano Lengua</creator>
        
        <subject>Networking technology; connectivity; SDWAN; wide area network; Waterfall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This article presents research analyzing the benefits of emerging trends in communications and networking technology, such as software-defined wide area networks. Using Waterfall as the methodology, the main objective is to carry out a technical comparison at the design and configuration level by creating a virtual environment that simulates traditional and SDWAN (Software-Defined Wide Area Network) infrastructures. The results obtained verify that SDWAN maintains business continuity, anticipates situations in which the infrastructure can act intelligently, optimizes connectivity while maintaining security, and provides improvements in the management of the entire infrastructure. Readers will be able to see the results obtained for both technologies and validate the benefits that SDWAN offers.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_88-Analysis_about_Benefits_of_Software_Defined_Wide_Area_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Irony in Arabic Microblogs using Deep Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130187</link>
        <id>10.14569/IJACSA.2022.0130187</id>
        <doi>10.14569/IJACSA.2022.0130187</doi>
        <lastModDate>2022-01-31T10:55:48.6030000+00:00</lastModDate>
        
        <creator>Linah Alhaidari</creator>
        
        <creator>Khaled Alyoubi</creator>
        
        <creator>Fahd Alotaibi</creator>
        
        <subject>Verbal irony; natural language processing; machine learning; automatic irony detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>A considerable amount of research has been developed lately to analyze social media with the intention of understanding and exploiting the available information. Irony has recently taken a significant role in human communication, as it is increasingly used on many social media platforms. In Natural Language Processing (NLP), irony recognition is an important yet difficult problem: it is a complex linguistic phenomenon in which people mean the opposite of what they literally say. Due to its significance, it is essential to analyze and detect irony in subjective texts to improve tools that classify people&#39;s opinions automatically. This paper explores how deep learning methods can be employed to detect irony in the Arabic language with the help of Word2vec term representations, which convert words to vectors. We applied two different deep learning models: Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (BiLSTM). We tested our frameworks on a manually annotated dataset that was collected using Tweet Scraper. The best result was achieved by the CNN model with an F1 score of 0.87.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_87-Detecting_Irony_in_Arabic_Microblogs_using_Deep_Convolutional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Analysis of Coronavirus CoVID-19: Study of Spread and Vaccination in European Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130185</link>
        <id>10.14569/IJACSA.2022.0130185</id>
        <doi>10.14569/IJACSA.2022.0130185</doi>
        <lastModDate>2022-01-31T10:55:48.5870000+00:00</lastModDate>
        
        <creator>Hela Turki</creator>
        
        <creator>Kais Khrouf</creator>
        
        <subject>Multidimensional model; constellation schema; coronavirus covid-19; vaccination; European countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Humanity has gone through several pandemics over time, such as H1N1 in 2009 and the Spanish flu in 1917. In December 2019, the health authorities of China detected unexplained cases of pneumonia. The World Health Organization (WHO) declared the emergence of CoVID-19 (novel Coronavirus), which caused a global pandemic in 2020. In data analysis, multiple approaches and diverse techniques are used to extract useful information from heterogeneous sources and to discover knowledge and new information for decision-making; such analysis is applied across business and science domains. In this context, we propose to use multidimensional analysis techniques based on two concepts: facts (subjects of analysis) and dimensions (axes of analysis). This technique allows decision makers to observe data from various heterogeneous sources and analyze them according to several viewpoints or perspectives. More precisely, we propose a multidimensional model for analyzing Coronavirus CoVID-19 data (spread and vaccination in European countries). This model is based on a constellation schema that contains several facts surrounded by common dimensions.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_85-Data_Analysis_of_Coronavirus_CoVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Low Cost Bio-impedance Measuring Instrument</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130186</link>
        <id>10.14569/IJACSA.2022.0130186</id>
        <doi>10.14569/IJACSA.2022.0130186</doi>
        <lastModDate>2022-01-31T10:55:48.5870000+00:00</lastModDate>
        
        <creator>Rajesh Birok</creator>
        
        <creator>Rajiv Kapoor</creator>
        
        <subject>Noninvasive; bio-electrical; Impedance; bio-impedance; bio-medical; instrumentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>It is a well-established fact that the electrical bio-impedance of a part of the human body can provide valuable information regarding physiological parameters of the human body, if the signal is correctly detected and interpreted. Accordingly, an efficient low-cost bio-electrical impedance measuring instrument was developed, implemented, and tested in this study. It is based primarily upon a low-cost, component-level approach so that it can be easily used by researchers and investigators in the specific domain. The measurement setup of the instrument was tested on adult human subjects to obtain the impedance signal of the forearm, which is under investigation in this case. However, depending on the illness or activity under examination, the instrument can be used on any other part of the body. The current injected by the instrument is within safe limits, and the gain of the biomedical instrumentation amplifier is highly reasonable. The technique is easy and user-friendly and does not necessitate any special training, so it can be effectively used to collect bio-impedance data and interpret the findings for medical diagnostics. Moreover, in this paper, several existing methods and associated approaches have been extensively explored, with in-depth coverage of their working principles, implementations, merits, and disadvantages, as well as other technical aspects. Lastly, the paper also deliberates upon the present status, future challenges, and scope of various other possible bio-impedance methods and techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_86-Design_of_Low_Cost_Bio_impedance_Measuring_Instrument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cotton Crop Yield Prediction using Data Mining Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130184</link>
        <id>10.14569/IJACSA.2022.0130184</id>
        <doi>10.14569/IJACSA.2022.0130184</doi>
        <lastModDate>2022-01-31T10:55:48.5700000+00:00</lastModDate>
        
        <creator>Amiksha Ashok Patel</creator>
        
        <creator>Dhaval Kathiriya</creator>
        
        <subject>Data mining; cotton crop yield prediction; agriculture; data processing; data visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Cotton is a very important crop: India leads the world in its production, and a vast amount of manpower is engaged in farming as well as in post-harvest processing and management of its derivatives. Weather is crucial for the productivity of the crop. The challenges of climate change, the limited availability of land and water for farming, and farmers&#39; lack of knowledge of good cultivation practices and judicious use of agricultural inputs are critical hindrances to improving productivity. This requires thorough research on land preparation and use, improving soil fertility, good agronomic practices under variable climatic conditions, etc. All the talukas of the three districts of North Gujarat where cotton is cultivated were selected purposively for this study. Soil type, soil pH, soil organic carbon, phosphorus, potassium, precipitation, and temperature were selected as independent factors. The yield of the cotton crop has a positive correlation with the selected parameters. The datasets were submitted to WEKA for analytical processing. The difference between the average predicted and actual yields of all talukas for the high-rainfall year 2013 was only 1.55 per cent. The difference between actual and predicted yields across the talukas for the low-temperature year (2015) was only 0.44 per cent.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_84-Cotton_Crop_Yield_Prediction_using_Data_Mining_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Performance Analysis of Anti-Surge Control Mechanism for Compressor System using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130182</link>
        <id>10.14569/IJACSA.2022.0130182</id>
        <doi>10.14569/IJACSA.2022.0130182</doi>
        <lastModDate>2022-01-31T10:55:48.5570000+00:00</lastModDate>
        
        <creator>Divya M. N</creator>
        
        <creator>Narayanappa C.K</creator>
        
        <creator>S L Gangadhariah</creator>
        
        <creator>V Nuthan Prasad</creator>
        
        <subject>Anti-surge; fuzzy logic controller (FLC); neuro-fuzzy controller (NFC); compressors; neural-network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Surge is an instability phenomenon that affects compressor systems in most gas-processing and oil industries. The issue is addressed by using a recycle valve that avoids surge and provides higher mass flow in the compressor system. An advanced controller-based anti-surge control mechanism is needed in the compressor system to improve stability and resolve surge issues. In this manuscript, an efficient Neural-Network Predictive Controller (NNPC) based variable-speed compressor recycle system is modeled with an anti-surge control mechanism. When the mass flow is deficient, the recycle system acts as a safety system and feeds the compressed gas back to the upstream system. Other anti-surge control mechanisms, based on the Proportional Integral Derivative (PID) controller, Fuzzy Logic Controller (FLC), and Neuro-Fuzzy Controller (NFC), are also applied to the compressor recycle system to compare their stability and performance metrics with the NNPC. The NNPC-based compressor system provides a better operating position and dynamic response with less error than the other controller-based compressor systems.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_82-Design_and_Performance_Analysis_of_Anti_Surge_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of True Parallelism Quad-Engine Cybersecurity Architecture on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130183</link>
        <id>10.14569/IJACSA.2022.0130183</id>
        <doi>10.14569/IJACSA.2022.0130183</doi>
        <lastModDate>2022-01-31T10:55:48.5570000+00:00</lastModDate>
        
        <creator>Nada Qaim Mohammed</creator>
        
        <creator>Amiza Amir</creator>
        
        <creator>Muataz Hammed Salih</creator>
        
        <creator>Badlishah Ahmad</creator>
        
        <subject>Field programmable gate array (FPGA); spatial parallelism; cybersecurity; throughput component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Applications such as the Internet of Things deal with huge amounts of transmitted, processed, and stored images that require high computing capability. There is therefore a need for a computing architecture that increases throughput by exploiting modern technologies in both spatial and temporal parallelism. This paper presents a parallel quad-engine cybersecurity architecture with a new configuration to increase throughput, implemented using DE1-SoC and Neek FPGA boards and HDL. In this architecture, each engine operates at a maximum frequency of 600 MHz. Each image is divided into four parts of equal size, and each part is processed by a single engine concurrently to achieve spatial parallelism. Internally, each engine handles its image part with temporal parallelism, and a deep pipelining abstraction is applied in every engine by dividing it into sub-modules that execute different tasks concurrently. All data processed in the engines is encrypted via the AES algorithm, which is implemented as a significant part of the engine architecture. The results show a fourfold increase in throughput, reaching 153,600 Mbps, which makes this computing architecture efficient and suitable for fast applications such as IoT and cybersecurity-level processing.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_83-Design_and_Implementation_of_True_Parallelism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Cyber-Attack using Cyber Situational Awareness: The Case of Independent Power Producers (IPPs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130181</link>
        <id>10.14569/IJACSA.2022.0130181</id>
        <doi>10.14569/IJACSA.2022.0130181</doi>
        <lastModDate>2022-01-31T10:55:48.5400000+00:00</lastModDate>
        
        <creator>Akwetey Henry Matey</creator>
        
        <creator>Paul Danquah</creator>
        
        <creator>Godfred Yaw Koi-Akrofi</creator>
        
        <subject>Internet of things; cyber situational awareness; critical infrastructures; power generation; cyber-attack; cyber security; human behavioural and independent power producers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The increasing critical dependence on the Internet of Things (IoT) has raised security concerns; its application to critical infrastructures (CIs) for power generation has come under massive cyber-attack over the years. Prior research efforts to understand cybersecurity from a Cyber Situational Awareness (CSA) perspective fail to critically consider CSA security vulnerabilities from a human behavioural perspective in line with the CI. This study evaluates CSA elements to predict cyber-attacks in the power generation sector. Data for this research article was collected from IPPs using the survey method, and Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to assess the proposed model. The results revealed a negative yet significant effect of the people element in predicting cyber-attacks. The study also indicated that information handling is significant and positively influences cyber-attack. The study further reveals no mediation effect in the associations between People and Attack and between Information and Attack, which could result from effective cyber security controls implemented by the IPPs. Finally, the study shows no sign that network infrastructure predicts cyber-attacks, possibly because managers of IPPs had adequate access policies and security measures in place.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_81-Predicting_Cyber_Attack_Using_Cyber_Situational_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Global Survey of Technological Resources and Datasets on COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130179</link>
        <id>10.14569/IJACSA.2022.0130179</id>
        <doi>10.14569/IJACSA.2022.0130179</doi>
        <lastModDate>2022-01-31T10:55:48.5230000+00:00</lastModDate>
        
        <creator>Manoj Muniswamaiah</creator>
        
        <creator>Tilak Agerwala</creator>
        
        <creator>Charles C. Tappert</creator>
        
        <subject>Vaccination; hospitalization; confirmed cases; datasets; data science; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The application and successful utilization of technological resources in developing solutions to the health, safety, and economic issues caused by COVID-19 indicate the importance of technology in curbing COVID-19. The medical field has also had to race against time to develop and distribute the COVID-19 vaccine. This endeavour became successful with vaccines created and approved in less than a year, a feat in medical history. Currently, much work is being done on data collection, where all significant factors impacting the disease are recorded. These factors include confirmed cases, death rates, vaccination rates, hospitalization data, and geographic regions affected by the pandemic. Continued research and use of technological resources are highly recommended. The paper surveys the packages, applications, and datasets used to analyse COVID-19.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_79-A_Global_Survey_of_Technological_Resources_and_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison between Online and Offline Health Seeking Information using Social Networks for Patients with Chronic Health Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130180</link>
        <id>10.14569/IJACSA.2022.0130180</id>
        <doi>10.14569/IJACSA.2022.0130180</doi>
        <lastModDate>2022-01-31T10:55:48.5230000+00:00</lastModDate>
        
        <creator>Andrew Kear</creator>
        
        <creator>Simon Talbot</creator>
        
        <subject>Component; social media; healthcare; peer to peer networks; patient networks; pre-Covid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Patients are now better connected with other patients, just as consumers are now better connected with other consumers, in particular through the growing adoption of social media and online peer-to-peer communities. These collaborative relationships can have positive or indeed negative consequences that may either endorse or have implications for a firm&#39;s products [32]. The aim of this research was to understand the impact social media has on patient influence over healthcare provision, especially in relation to information seeking and clinical product choice. It compares a group of patients who are predominantly online information seekers with a group who are predominantly offline information seekers. Bias was eliminated by utilising probability sampling techniques so that statistical analysis could be performed on the results obtained. This study capitalises on access to approximately 8000+ direct-to-patient consumers who are currently receiving devices for the management of their bladder problems. The intention of this research project is to understand how two-way online interactions have developed between patients with similar chronic medical conditions and how firms can use online social media to improve their relationships with patients. The key research question of this paper is: have online social media tools affected demand for healthcare intermediation in patients who experience chronic medical conditions and need to become better informed? The findings of this pre-Covid research were that patients with chronic conditions who spend more time in developed peer-to-peer communities are more trusting of online information and spend more time online.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_80-A_Comparison_between_Online_and_Offline_Health_Seeking_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of Data Reduction Algorithms for Wireless Sensor Network (WSN) using Different Real-Time Datasets: Analysis Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130178</link>
        <id>10.14569/IJACSA.2022.0130178</id>
        <doi>10.14569/IJACSA.2022.0130178</doi>
        <lastModDate>2022-01-31T10:55:48.5100000+00:00</lastModDate>
        
        <creator>M. K. Hussein</creator>
        
        <creator>Ion Marghescu</creator>
        
        <creator>Nayef.A.M. Alduais</creator>
        
        <subject>Data reduction algorithms; WSN; energy consumption; accuracy; neural network; independent component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This paper investigates the effect of data reduction methods on the performance of Wireless Sensor Networks (WSNs) using a variety of real-time datasets. The simulation tests are carried out in MATLAB for several methods of reducing the quantity of transmitted data. These approaches are data reduction based on Neural Network Fitting (NNF), Neural Network Time Series (NNTS), Linear Regression with Multiple Variables (LRMV), &#8220;An Efficient Data Collection and Dissemination (EDCD2)&#8221;, and Fast Independent Component Analysis (FICA). The selected algorithms NNF, NNTS, EDCD2, LRMV, and FICA are evaluated using real-time datasets. The performance indicators are energy consumption, data accuracy, and data reduction percentage. The results show that the selected algorithms help to reduce the amount of transferred data and the energy consumed, but each algorithm performs differently depending on the dataset used.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_78-Performance_of_Data_Reduction_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Coronary Heart Disease through Iris using Gray Level Co-occurrence Matrix and Support Vector Machine Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130177</link>
        <id>10.14569/IJACSA.2022.0130177</id>
        <doi>10.14569/IJACSA.2022.0130177</doi>
        <lastModDate>2022-01-31T10:55:48.4930000+00:00</lastModDate>
        
        <creator>Vincentius Abdi Gunawan</creator>
        
        <creator>Leonardus Sandy Ade Putra</creator>
        
        <creator>Fitri Imansyah</creator>
        
        <creator>Eka Kusumawardhani</creator>
        
        <subject>Iris; iridology; coronary heart; circle hough transform; gray level co-occurrence matrix; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Nowadays, coronary heart disease is one of the deadliest diseases in the world. An unfavorable lifestyle, lack of physical activity, and tobacco consumption are causes of coronary heart disease, aside from genetic inheritance. Sometimes patients do not know whether their heart function is abnormal. Therefore, this study proposes a system that can detect heart abnormalities through the iris, known as the iridology method. The system runs automatically from iris detection through to classification. Feature extraction using five characteristics is applied with the Gray Level Co-occurrence Matrix (GLCM) method. The classification process uses the Support Vector Machine (SVM) with linear, Polynomial, and Gaussian kernel variations to obtain the best accuracy. From the system simulation results, the Gaussian kernel can be relied on for classifying iris conditions, with an accuracy rate of 91%; the Polynomial kernel reaches 89% and the linear kernel 87%. This study has succeeded in detecting heart conditions through the iris by dividing irises into normal and abnormal classes.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_77-Identification_of_Coronary_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extract Concept using Subtitles in MOOC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130176</link>
        <id>10.14569/IJACSA.2022.0130176</id>
        <doi>10.14569/IJACSA.2022.0130176</doi>
        <lastModDate>2022-01-31T10:55:48.4770000+00:00</lastModDate>
        
        <creator>Aarika Kawtar</creator>
        
        <creator>Habib Benlahmar</creator>
        
        <creator>Mohamed Amine Naji</creator>
        
        <creator>Elfilali Sanaa</creator>
        
        <creator>Zouheir Banou</creator>
        
        <subject>LDA; BERT; topic coherence; overlap coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Massive open online courses (MOOCs) are courses offered online, paid or unpaid, and have evolved into an excellent learning resource for students. The course design is mainly linear, with video lectures provided either by professors of several universities or by people with expertise in the particular subject. Courses are usually graded on a weekly basis through quizzes or peer-graded assignments. The objective of this paper is to extract the concepts taught in the videos from their subtitles, which could later be used to enhance recommendations for learners based on their clickstream data; teachers could also use this to gauge demand for their courses. We evaluate two keyword extraction methods, BERT and LDA, using different Coursera courses. The experimental results show that BERT outperforms LDA in terms of coherence.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_76-Extract_Concept_using_Subtitles_in_Mooc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balanced Schedule on Storm for Performance Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130175</link>
        <id>10.14569/IJACSA.2022.0130175</id>
        <doi>10.14569/IJACSA.2022.0130175</doi>
        <lastModDate>2022-01-31T10:55:48.4770000+00:00</lastModDate>
        
        <creator>Arwa Z. Selim</creator>
        
        <creator>Noha E. El-Attar</creator>
        
        <creator>I. M. Hanafy</creator>
        
        <creator>Wael A. Awad</creator>
        
        <subject>Real-time; big data; apache storm; scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In recent years, real-time and big data have received a lot of attention due to the spread of embedded systems in almost every aspect of life. This has led to many challenges that need to be solved to enhance and improve systems that work on big real-time data. Apache Storm is a system for computing and analyzing big real-time data in distributed systems. This paper develops a scheduler to improve the scheduling of applications, represented by topologies, on the Storm cluster. The proposed scheduler is a hybridization of the scheduling algorithms of A3 Storm and the Workload scheduler. Its objective is to minimize the communication between tasks while balancing the workload on all cluster machines. The proposed scheduler is compared with A3 Storm and Fischer and Bernstein&#39;s scheduling algorithm using four different topologies. The experimental results show that our proposed scheduler outperforms the two other schedulers in throughput and complete latency.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_75-Balanced_Schedule_on_Storm_for_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Early Intervention Technique for At-Risk Prediction of Higher Education Students in Cloud-based Virtual Learning Environment using Classification Algorithms during COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130174</link>
        <id>10.14569/IJACSA.2022.0130174</id>
        <doi>10.14569/IJACSA.2022.0130174</doi>
        <lastModDate>2022-01-31T10:55:48.4600000+00:00</lastModDate>
        
        <creator>Arul Leena Rose. P. J</creator>
        
        <creator>Ananthi Claral Mary.T</creator>
        
        <subject>Prediction; at-risk; machine learning; virtual learning environment; cloud platforms; classification; COVID-19; random forest; student academic performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Higher education is considered vital for societal development, as it leads to many benefits including a prosperous career and financial security. Virtual learning through cloud platforms has become popular as it is expedient and flexible for students. New student learning models and prediction outcomes can be developed using these platforms. Applying machine learning techniques to identify at-risk students is a challenging and pressing concern in virtual learning environments: when there are few students, identification is easy, but it is impractical for larger numbers of students. This study included 530 higher education students from various regions in India, and the outcomes generated from online survey data were analyzed. The main objective of this research is the early identification of at-risk students in a cloud virtual learning environment by analyzing their demographic characteristics, previous academic achievement, learning behavior, device type, mode of access, connectivity, self-efficacy, cloud platform usage, and readiness and effectiveness in participating in online sessions, using four machine learning algorithms: K Nearest Neighbor (KNN), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Random Forest (RF). The predictive system helps provide solutions for low-performing students. It has been implemented on real data of higher education students taking various courses in a virtual learning environment, and a deep analysis is performed to estimate the at-risk students. The experimental results show that random forest achieved the highest accuracy of 88.61% compared to the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_74-An_Early_Intervention_Technique_for_At_risk_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Emotion Recognition by Integrating Facial and Speech Features: An Implementation of Multimodal Framework using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130172</link>
        <id>10.14569/IJACSA.2022.0130172</id>
        <doi>10.14569/IJACSA.2022.0130172</doi>
        <lastModDate>2022-01-31T10:55:48.4470000+00:00</lastModDate>
        
        <creator>P V V S Srinivas</creator>
        
        <creator>Pragnyaban Mishra</creator>
        
        <subject>Emotion recognition; multimodal; fusion; MFCC; MFA; FERCNN; CREMAD; RAVDESS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Emotion recognition plays a prominent role in today&#39;s intelligent system applications. Human-computer interfaces, health care, law, and entertainment are a few of the applications where emotion recognition is used. Humans convey their emotions in the form of text, voice, and facial expressions, so a multimodal emotion recognition system plays a crucial role in human-computer or intelligent system communication. The majority of established emotion recognition algorithms only identify emotions in a single kind of data, such as text, audio, or image data. A multimodal system uses information from a variety of sources and fuses it using fusion techniques to improve recognition accuracy. In this paper, a multimodal system to recognise emotions is presented that fuses features obtained from heterogeneous modalities, namely audio and video. For audio feature extraction, energy, zero-crossing rate, and Mel-Frequency Cepstral Coefficients (MFCC) techniques are considered. Of these, MFCC produced promising results. For video feature extraction, the videos are first converted to frames and stored in a linear scale space by using a spatio-temporal Gaussian kernel. The features are further extracted from the images by applying a Gaussian weighted function to the second moment matrix of the linear scale space data. The Marginal Fisher Analysis (MFA) fusion method is used to fuse the audio and video features, and the resulting features are given to the FERCNN model for evaluation. For experimentation, the RAVDESS and CREMAD datasets, which contain audio and video data, are used. Accuracy levels of 95.56, 96.28, and 95.07 on the RAVDESS dataset and 80.50, 97.88, and 69.66 on the CREMAD dataset are achieved in the audio, video, and multimodal modalities, respectively, outperforming existing multimodal systems.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_72-Human_Emotion_Recognition_by_Integrating_Facial_and_Speech_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BERT based Named Entity Recognition for Automated Hadith Narrator Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130173</link>
        <id>10.14569/IJACSA.2022.0130173</id>
        <doi>10.14569/IJACSA.2022.0130173</doi>
        <lastModDate>2022-01-31T10:55:48.4470000+00:00</lastModDate>
        
        <creator>Emha Taufiq Luthfi</creator>
        
        <creator>Zeratul Izzah Mohd Yusoh</creator>
        
        <creator>Burhanuddin Mohd Aboobaider</creator>
        
        <subject>Hadith narrator; hadith authentication; natural language processing; named entity recognition; NLP; NER; BERT; BERT fine-tune</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Hadith serves as a second source of Islamic law for Muslims worldwide, especially in Indonesia, which has the world&#39;s largest Muslim population of 228.68 million people. However, not all Hadith texts have been certified and approved for use, and several falsified Hadiths make it challenging to distinguish between authentic and fabricated Hadiths. In terms of Hadith science, the authenticity of a Hadith can be determined by examining its Sanad and Matn. The Sanad is an essential aspect of the Hadith because it indicates the chain of Narrators who transmitted the Hadith. The research reported in this paper provides an advanced Natural Language Processing (NLP) technique for identifying and authenticating the Narrator of a Hadith as a part of the Sanad, utilizing Named Entity Recognition (NER) to address the necessity of authenticating the Hadith. The NER technique described in the research adds an extra feed-forward classifier to the last layer of the pre-trained BERT model. In the testing process using Cahya/bert-base-indonesian-1.5G, the proposed solution achieved an overall F1-score of 99.63 percent. In the final evaluation of Hadith Narrator Identification on other Hadith passages, a 98.27 percent F1-score was obtained.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_73-BERT_based_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AI-based System for the Detection and Prevention of COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130171</link>
        <id>10.14569/IJACSA.2022.0130171</id>
        <doi>10.14569/IJACSA.2022.0130171</doi>
        <lastModDate>2022-01-31T10:55:48.4300000+00:00</lastModDate>
        
        <creator>Sofien Chokri</creator>
        
        <creator>Wided Ben Daoud</creator>
        
        <creator>Wasma Hanini</creator>
        
        <creator>Sami Mahfoudhi</creator>
        
        <creator>Amel Makhlouf</creator>
        
        <subject>Face mask detection; coronavirus; COVID-19; deep learning; MobileNetV2; AES</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The COVID-19 pandemic has had catastrophic consequences all over the world since the detection of the first case in December 2019. Currently, exponential growth is expected. In order to stop the spread of this pandemic, it is necessary to respect sanitary protocols such as the mandatory wearing of masks. In this research paper, we present an affordable artificial intelligence-based solution to increase the protection against COVID-19, covering several relevant aspects to facilitate the detection and prevention of this pandemic: non-contact temperature measurement, mask detection, automatic gel-dispensing, and automatic sterilization. Our main contribution is to provide high-quality, real-time learning and analysis. To achieve this goal, we used a deep convolutional neural network (CNN) based on MobileNetV2 architecture as the learning algorithm and Advanced Encryption Standard (AES) as an encryption protocol for sending secure data to notify hospital staff. The experimental results show the effectiveness of our model by providing 99.7% accuracy in detecting masks with a runtime of 1.54 s.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_71-AI_based_System_for_the_Detection_and_Prevention_of_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of BIFFOA and Adaptive Two-Phase Mutation for Helmetless Motorcyclist Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130170</link>
        <id>10.14569/IJACSA.2022.0130170</id>
        <doi>10.14569/IJACSA.2022.0130170</doi>
        <lastModDate>2022-01-31T10:55:48.4130000+00:00</lastModDate>
        
        <creator>Sutikno </creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Afiahayati</creator>
        
        <subject>Motorcycle classification; helmetless head detection; BIFFOA; two-phase mutation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Road traffic injuries and deaths cause considerable economic losses to individuals, families, and nations as a whole. One of the strategies needed to curtail these fatalities is the surveillance of helmetless motorcyclists, carried out by developing an automatic detection system based on computer vision. Generally, this system consists of three subsystems, namely moving object segmentation, motorcycle classification, and helmetless head detection. The HOPG-LDB (Histogram of Oriented Phase and Gradient - Local Difference Binary) descriptor used in this system produced good accuracy; however, it still has a drawback related to its large number of features. Based on these observations, this paper proposes an Adaptive Two-phase Mutation Binary Improved Fruit Fly Optimization Algorithm (ATMBIFFOA) to reduce the features. The ATMBIFFOA is a new feature selection algorithm that improves BIFFOA (Binary Improved Fruit Fly Optimization Algorithm) with an adaptive two-phase mutation algorithm. BIFFOA produced good accuracy but was weak at reducing the feature dimension; the adaptive two-phase mutation algorithm was used to address this weakness. The experimental results show that the proposed method can effectively reduce the number of features and the computation time compared to BIFFOA. The proposed method produced a motorcycle classification accuracy of 96.06% for the JSC1 dataset and 96.85% for the JSC2 dataset. For helmetless head detection, it produced an average precision of 66.29% for the JSC1 dataset and 63.95% for the JSC2 dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_70-Fusion_of_BIFFOA_and_Adaptive_Two_Phase_Mutation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Pragmatics of Function Words in Fiction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130169</link>
        <id>10.14569/IJACSA.2022.0130169</id>
        <doi>10.14569/IJACSA.2022.0130169</doi>
        <lastModDate>2022-01-31T10:55:48.4130000+00:00</lastModDate>
        
        <creator>Ayman Farid Khafaga</creator>
        
        <subject>Computer-aided text analysis (CATA); concordance; function words; persuasion; manipulation; ideology; Bond’s Lear</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This paper uses a computer-aided text analysis (CATA) to decipher the ideologies pertaining to function words in fictional discourse represented by Edward Bond’s Lear. In literary texts, function words, such as pronouns and modal verbs display a very high frequency of occurrence. Despite the fact that these linguistic units are often employed to channel a mere grammatical function pertaining to their semantic nature, they, sometimes, exceed their grammatical and semantic functionality towards further ideological and pragmatic purposes, such as persuasion and manipulation. This study investigates the extent to which function words, linguistically manifested in two personal pronouns (I, we) and two modal verbs (will, must) are utilized in Bond’s Lear to convey both persuasive and/or manipulative ideologies. This paper sets three main objectives: (i) to explore the persuasive and/or manipulative ideologies the four function words under investigation communicate in the selected text, (ii) to highlight the extent to which CATA software helps in deciphering the ideological weight of function words in Bond’s Lear, and (iii) to clarify the integrative relationship between discourse studies and computer-aided text analysis. Two findings are reported in this paper: first, function words do not only carry semantic functions, but also go beyond their semantic functionality towards pragmatic purposes that serve to achieve specific ideologies in discourse. Second, the application of CATA software proves useful in extracting ideologies from language and helps better understand the power of function words, which, in turn, accentuates the analytical integration between discourse studies and computer, particularly in the linguistic analysis of large data texts.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_69-The_Pragmatics_of_Function_Words_in_Fiction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Deep Depth Decision Algorithm for Complexity Reduction in High Efficiency Video Coding (HEVC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130168</link>
        <id>10.14569/IJACSA.2022.0130168</id>
        <doi>10.14569/IJACSA.2022.0130168</doi>
        <lastModDate>2022-01-31T10:55:48.4000000+00:00</lastModDate>
        
        <creator>Helen K Joy</creator>
        
        <creator>Manjunath R Kounte</creator>
        
        <creator>B K Sujatha</creator>
        
        <subject>CNN; HEVC; deep learning; RDO; encoding time; complexity reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>High Efficiency Video Coding (HEVC) made its mark as a codec that compresses at a lower bit rate than its preceding codec, H.264, but the factor that keeps HEVC out of many applications is its complex encoding procedure. The rate distortion optimisation (RDO) cost calculation in HEVC involves complex computations. In this paper, we propose a method to address this issue by replacing the traditional inter-prediction procedure of brute-force search for RDO with a deep convolutional neural network that predicts and performs this process. In the first step, the deep depth decision algorithm is modelled with optimum specifications using a convolutional neural network (CNN). In the next step, the model is designed, trained with a dataset, and validated. The trained model is tested by pipelining it to the original HEVC encoder to check its performance. We also evaluate the efficiency of the model by comparing the average encoding time for video inputs of various resolutions. The testing is done with mutually independent inputs to maintain the accuracy of the system. The system shows a substantial saving in encoding time, which demonstrates the complexity reduction in HEVC.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_68-Design_and_Implementation_of_Deep_Depth_Decision_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain in the Quantum World</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130167</link>
        <id>10.14569/IJACSA.2022.0130167</id>
        <doi>10.14569/IJACSA.2022.0130167</doi>
        <lastModDate>2022-01-31T10:55:48.3830000+00:00</lastModDate>
        
        <creator>Arman Rasoodl Faridi</creator>
        
        <creator>Faraz Masood</creator>
        
        <creator>Ali Haider Thabet Shamsan</creator>
        
        <creator>Mohammad Luqman</creator>
        
        <creator>Monir Yahya Salmony</creator>
        
        <subject>Blockchain; quantum computers; distributed ledger technology; security; systematic literature review; quantum attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Blockchain is one of the most discussed and widely accepted technologies, primarily due to its application in almost every field where third parties are needed for trust. Blockchain technology relies on distributed consensus for trust, which is accomplished using hash functions and public-key cryptography. Most of the cryptographic algorithms in use today are vulnerable to quantum attacks. In this work, a systematic literature review is conducted in a repeatable manner, starting with the identification of the research questions. Focusing on these research questions, the literature is analysed to find their answers. The survey is completed by answering the research questions and identifying the research gaps. It is found in the literature that 30% of the research solutions are applicable to the data layer, 24% to the application and presentation layer, 23% to the network layer, 16% to the consensus layer, and only 1% to the hardware and infrastructure layer. We also found that 6% of the solutions are not blockchain-based but instead present other distributed ledger technologies.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_67-Blockchain_in_the_Quantum_World.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Visual-Range Cloud Cover Image Dataset for Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130166</link>
        <id>10.14569/IJACSA.2022.0130166</id>
        <doi>10.14569/IJACSA.2022.0130166</doi>
        <lastModDate>2022-01-31T10:55:48.3670000+00:00</lastModDate>
        
        <creator>Muhammad Umair</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <subject>Cloud cover; dataset; classification; GoogLeNet; ResNet-50; EfficientNet-B0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Coastal and offshore oil and gas structures and operations are subject to continuous exposure to environmental conditions (ECs) such as varying air and water temperatures, rough sea conditions, strong winds, high humidity, rain, and varying cloud cover. To monitor ECs, weather and wave sensors are installed on these facilities. However, the capital expenditure (CAPEX) and operational expenses (OPEX) of these sensors are high, especially for offshore structures. For observable ECs, such as cloud cover, a cost-effective deep learning (DL) classification model can be employed as an alternative solution. However, to train and test a DL model, a cloud cover image dataset is required. In this paper, we present a novel visual-range cloud cover image dataset for cloud cover classification using a deep learning model. Various visual-range sky images are captured on nine different occasions, covering six cloud cover conditions. For each cloud cover condition, 100 images are manually classified. To increase the size and quality of images, multiple label-preserving data augmentation techniques are applied. As a result, the dataset is expanded to 9,600 images. Moreover, to evaluate the usefulness of the proposed dataset, three DL classification models, i.e., GoogLeNet, ResNet-50, and EfficientNet-B0, are trained, tested, and their results are presented. Even though EfficientNet-B0 had better generalization ability and marginally higher classification accuracy, it was discovered that ResNet-50 is the best choice for cloud cover classification due to its lower computational cost and competitive classification accuracy. Based on these results, it is concluded that the proposed dataset can be used in further research in DL-based cloud cover classification model development.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_66-A_Visual_Range_Cloud_Cover_Image_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Security of Digital Image Encryption using Diagonalize Multidimensional Nonlinear Chaotic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130165</link>
        <id>10.14569/IJACSA.2022.0130165</id>
        <doi>10.14569/IJACSA.2022.0130165</doi>
        <lastModDate>2022-01-31T10:55:48.3670000+00:00</lastModDate>
        
        <creator>Mahmoud I. Moussa</creator>
        
        <creator>Eman I. Abd El-Latif</creator>
        
        <creator>Nawaz Majid</creator>
        
        <subject>Chaotic system; encryption; decryption; image; algorithms; cryptosystem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This paper describes a new, efficient cryptosystem for color image encryption, based on a combination of the proposed multidimensional chaos systems. This chaos system consists of six bisections: T_1(x), T_2(x), T_2(y), T_3(x), T_3(y), and T_3(z). They induce three chaotic matrix keys and three chaotic vector keys. We use the multidimensional chaotic system together with an encryption algorithm to provide better security and wide key spaces. The proposed cryptosystem simultaneously uses four levels of random pixel diffusions and permutations, and ω-times interchange between rows and columns. The correlations between the RGB components of the plain image are reduced. The level of security, the computational complexity, and the quality of decoding a decrypted image under closure threat are improved. The simulation results showed that the algorithm offers a high level of security and the assurance that the image recovered at the receiving point is identical to the image at the transmission point.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_65-Enhancing_the_Security_of_Digital_Image_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moving Object Detection over Wireless Visual Sensor Networks using Spectral Dual Mode Background Subtraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130164</link>
        <id>10.14569/IJACSA.2022.0130164</id>
        <doi>10.14569/IJACSA.2022.0130164</doi>
        <lastModDate>2022-01-31T10:55:48.3530000+00:00</lastModDate>
        
        <creator>Ahmed M. AbdelTawab</creator>
        
        <creator>M.B. Abdelhalim</creator>
        
        <creator>S.E.D. Habib</creator>
        
        <subject>Background subtraction; discrete cosine transform; embedded camera networks; Gaussian mixture models; wireless visual sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Wireless Visual Sensor Networks (WVSN) play an essential role in tracking moving objects. WVSN&#39;s key drawbacks are limited storage, power, and bandwidth. Background subtraction is used in the early stages of target tracking to extract moving targets from video images. Many standard background subtraction methods are no longer suitable for embedded devices because they use complex statistical models to manage small changes in lighting. This paper introduces a system based on the Partial Discrete Cosine Transform (PDCT), which reduces the vast dimensions of the processed data while retaining most of the important information, thereby reducing processing and transmission energy. It also uses a dual-mode single Gaussian model (SGM) for accurate detection of moving objects. The proposed system&#39;s performance is assessed using the standard CDnet 2014 benchmark dataset in terms of detection accuracy and time complexity. Furthermore, the suggested method is compared to previous WVSN background subtraction methods. Simulation results show that the proposed method consistently achieves 15% better accuracy and is up to 3 times faster than state-of-the-art object detection methods for WVSN. Finally, we demonstrated the practicality of the suggested method by simulating it in a sensor network environment using the Contiki OS Cooja Simulator and implementing it in a real testbed using Cortex M3 open nodes of IoT-LAB.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_64-Moving_Object_Detection_over_Wireless_Visual_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CovSeg-Unet: End-to-End Method-based Computer-Aided Decision Support System in Lung COVID-19 Detection on CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130162</link>
        <id>10.14569/IJACSA.2022.0130162</id>
        <doi>10.14569/IJACSA.2022.0130162</doi>
        <lastModDate>2022-01-31T10:55:48.3370000+00:00</lastModDate>
        
        <creator>Fatima Zahra EL BIACH</creator>
        
        <creator>Imad IALA</creator>
        
        <creator>Hicham LAANAYA</creator>
        
        <creator>Khalid MINAOUI</creator>
        
        <subject>Deep learning; COVID-19; loss function; balanced data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The COVID-19 epidemic continues to threaten public health with the appearance of new, more severe mutations, and given the delay in the vaccination process, the situation is becoming more complex. Thus, the implementation of rapid solutions for the early detection of this virus is an immediate priority. To this end, we provide a deep learning method called CovSeg-Unet to diagnose COVID-19 from chest CT images. The CovSeg-Unet method first preprocesses the CT images to eliminate noise and bring all images to the same standard. Then, CovSeg-Unet uses an end-to-end architecture to form the network. Since CT images are not balanced, we propose a loss function to balance the pixel distribution of infected/uninfected regions. CovSeg-Unet achieved high performance in localizing COVID-19 lung infections compared to other methods. We performed qualitative and quantitative assessments on two public datasets (Dataset-1 and Dataset-2) annotated by expert radiologists. The experimental results prove that our method is a practical solution that can better assist in the COVID-19 diagnosis process.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_62-CovSeg_Unet_End_to_End_Method_based_Computer_aided_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Elevint: A Cloud-based Internet of Elevators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130163</link>
        <id>10.14569/IJACSA.2022.0130163</id>
        <doi>10.14569/IJACSA.2022.0130163</doi>
        <lastModDate>2022-01-31T10:55:48.3370000+00:00</lastModDate>
        
        <creator>Sarah Mohammed Aljadani</creator>
        
        <creator>Shahd Mohammed Almutairi</creator>
        
        <creator>Saja Saeed Ghaleb</creator>
        
        <creator>Lama Al Khuzayem</creator>
        
        <subject>Internet of things; elevator; fault detection; monitoring; notification; real-time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>With the significant growth of the number of high-rise buildings nowadays, the dependence on elevators has also increased. The issue that elevator passengers face in case of breakdowns is the long wait for the arrival of the maintenance engineers to perform the repair, as the reporting process is done manually. The safety concern increases when people are trapped. Most state-of-the-art approaches detect faults without providing means to facilitate communication between elevator owners, maintenance companies, and engineers, or to notify them in case of breakdowns. Moreover, none of the proposed fault detection solutions rely on rules specified by experts in the field. This paper aims at addressing these issues by proposing a system that manages and monitors elevators, detects faults, and instantly informs users by sending notifications. Specifically, the paper proposes a mobile application, Elevint, that is cloud-based and exploits Internet of Things (IoT) technology. Elevint provides real-time monitoring of elevator operating conditions collected from sensors. The data is then transferred to the cloud, where faults are detected by applying rules that compare the current conditions with severity levels determined by experts. When a fault is detected, elevator owners and maintenance companies are automatically notified. Moreover, through Elevint, maintenance companies can assign engineers to repair the fault, and elevator owners can view and re-schedule the engineer’s visit if needed. Testing our system on an elevator model shows 98% accuracy. In the future, we intend to test it on real elevators to verify its applicability in practice.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_63-Elevint_A_Cloud_based_Internet_of_Elevators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Textual Authentication Method to Resistant Shoulder-Surfing Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130161</link>
        <id>10.14569/IJACSA.2022.0130161</id>
        <doi>10.14569/IJACSA.2022.0130161</doi>
        <lastModDate>2022-01-31T10:55:48.3200000+00:00</lastModDate>
        
        <creator>Islam Abdalla Mohamed Abass</creator>
        
        <creator>Loay F.Hussein</creator>
        
        <creator>Tarak kallel</creator>
        
        <creator>Anis Ben Aissa</creator>
        
        <subject>Shoulder surfing; caesar cipher; virtual keyboard; graphical password; social engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Textual passwords suffer from a trade-off between security and usability. Password policies are usually adopted by system administrators to force users to choose strong passwords. However, users often choose a simple password to make it easy to remember, which reduces the password strength and makes it vulnerable to information security threats. When users enter their passwords in public places like airports or cafes, they become exposed to shoulder surfing attacks, which are considered a kind of social engineering. With little effort, an attacker can capture a password by recording the individual’s authentication session or by direct observation. To overcome this vulnerability, we propose a new textual-password approach that uses camouflage characters and a virtual keyboard, which leads to generating strong and easy-to-remember passwords. Usability and security were evaluated in experimental studies conducted with 65 users and then compared with recent studies. The results showed that the proposed technique has the lowest shoulder surfing success rate, at just 3.63%, with reasonable usability.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_61-New_Textual_Authentication_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>What Influences Customer’s Trust on Online Social Network Sites (SNSs) Sellers?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130160</link>
        <id>10.14569/IJACSA.2022.0130160</id>
        <doi>10.14569/IJACSA.2022.0130160</doi>
        <lastModDate>2022-01-31T10:55:48.3070000+00:00</lastModDate>
        
        <creator>Ramona Ramli</creator>
        
        <creator>Asmidar Abu Bakar</creator>
        
        <creator>Fiza Abdul Rahim</creator>
        
        <subject>Online commerce; trust; social commerce; multi-criteria decision–making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Customer trust has been recognized as an essential part of the rising trend of social commerce. A lack of trust leads customers to hesitate to shop online or to avoid it completely. Therefore, it is essential to implement and analyze a way of establishing buyer-seller relationships that will improve customers&#39; trust. This paper aims to develop a trust model of Social Network Sites (SNSs) sellers, and to assess the dimensions and criteria that affect customers&#39; trust in online SNSs sellers by using the Analytic Hierarchy Process (AHP) approach. The study was carried out among those who transact with Malaysian online SNSs sellers at least every three months. The findings indicate the top three influencing criteria: recommendation, transaction safety, and rating. This study provides insight into customers&#39; thoughts about placing trust in online SNSs sellers for selling and purchasing activities.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_60-What_Influences_Customers_Trust_on_Online_Social_Network_Sites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Trend of Segmentation for Arabic Handwritten Touching Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130159</link>
        <id>10.14569/IJACSA.2022.0130159</id>
        <doi>10.14569/IJACSA.2022.0130159</doi>
        <lastModDate>2022-01-31T10:55:48.3070000+00:00</lastModDate>
        
        <creator>Ahmed Mansoor Mohsen Algaradi</creator>
        
        <creator>Mohd Sanusi Azmi</creator>
        
        <creator>Intan Ermahani A. Jalil</creator>
        
        <creator>Abdulwahab Fuad Ayyash Hashim</creator>
        
        <creator>Afrah Abdullah Muhammad Al-Malki</creator>
        
        <subject>Component; character segmentation; Arabic handwritten; character touching; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This paper is a comprehensive study of existing research trends in the field of the Arabic language, with a focus on state-of-the-art methods that illustrate the current state of theory in the field, with the goal of facilitating the adaptation and extension of prior work into new systems and applications. The Arabic alphabet contains 28 letters. Depending on its place in the word, every Arabic letter has more than one shape; a single character may have from one to four shapes. Touching and overlapping between characters occur in handwriting. Historical documents contain a massive amount of knowledge and culture, and many old books need to be converted into a readable format, which would take a long time if done by humans. However, the main problem is the lack of research on Arabic handwriting, especially on the segmentation of touching characters. Thus, current trends in segmentation techniques are investigated to identify the state of the art of segmenting touching characters in other domains, in order to construct enhanced techniques for Arabic touching characters. This paper reviews approaches for the segmentation of touching characters and presents the trend of approaches for the recognition and segmentation of Arabic handwritten touching characters. It highlights the strength of each technique, the method used, and its drawbacks. Based on the outcome, this provides a good foundation for constructing a better technique for the segmentation of Arabic touching characters, especially in degraded documents.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_59-The_Trend_of_Segmentation_for_Arabic_Handwritten_Touching_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient and Quality-of-Service Aware Routing using Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130158</link>
        <id>10.14569/IJACSA.2022.0130158</id>
        <doi>10.14569/IJACSA.2022.0130158</doi>
        <lastModDate>2022-01-31T10:55:48.2900000+00:00</lastModDate>
        
        <creator>P. Sathya</creator>
        
        <creator>P. Sengottuvelan</creator>
        
        <subject>Underwater wireless sensor networks (UWSNs); QOS aware (CSOA-EQ); underwater environments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In recent years, there has been increasing attention to underwater wireless sensor networks (UWSNs), which can be applied for many different purposes. To address the routing issue, routing methods based on the Cuckoo Search Optimization Algorithm with Energy Efficiency and QoS Awareness (CSOA-EQ) are proposed in this paper. Every application is important in its own right, but some of them can help improve sea investigation to serve a variety of underwater applications, such as catastrophic-event alert systems (e.g., tsunami and seismic monitoring), assisted navigation, oceanographic data collection, underwater surveillance, ecological applications (such as monitoring of biological water quality and contamination), and industrial applications (such as marine investigation). For example, in offshore engineering applications, sensors can assess specific metrics, such as base intensity and mooring pressure, to monitor the structural condition of the mooring environment. UWSNs have also improved our understanding of underwater environments, such as climate change, underwater animal life, and the populations of coral reefs.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_58-Energy_Efficient_and_Quality_of_Service_Aware_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Stance based Sampling for Imbalanced Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130157</link>
        <id>10.14569/IJACSA.2022.0130157</id>
        <doi>10.14569/IJACSA.2022.0130157</doi>
        <lastModDate>2022-01-31T10:55:48.2730000+00:00</lastModDate>
        
        <creator>Isha Agarwal</creator>
        
        <creator>Dipti Rana</creator>
        
        <creator>Aemie Jariwala</creator>
        
        <creator>Sahil Bondre</creator>
        
        <subject>Fake news; healthcare; sampling; stance; COVID-19; imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>While the world is suffering from the coronavirus pandemic (COVID-19), a parallel battle with an infodemic, the proliferation of fake news online, is also taking place. The spread of fake news during this global pandemic has dangerous consequences, and this is the driving force behind this study. Relying on incorrect information obtained from the internet or social media can be fatal. According to a World Health Organization survey, at least 800 people have lost their lives because of COVID-19 misinformation during this time, highlighting the need for accurate automated classification of fake news. However, the data available for classification is imbalanced: the Internet has a vast repository of authentic healthcare news, whereas fake news on COVID-19 healthcare is not abundant. This imbalance leads to incorrect classification. This paper studies alternative approaches to text sampling and proposes a stance-based sampling method for balancing news data. The disparity between the title and content of news items is utilized to sample data points selectively and rectify the imbalance. The key finding is that the proposed stance-based sampling strategies consistently enhance classification performance for varying degrees of imbalance, so the proposed techniques can better detect misleading news in the healthcare sector.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_57-A_Novel_Stance_based_Sampling_for_Imbalanced_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Regression Model to Predict Key Performance Indicators in Higher Education Enrollments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130156</link>
        <id>10.14569/IJACSA.2022.0130156</id>
        <doi>10.14569/IJACSA.2022.0130156</doi>
        <lastModDate>2022-01-31T10:55:48.2730000+00:00</lastModDate>
        
        <creator>Ashraf Abdelhadi</creator>
        
        <creator>Suhaila Zainudin</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <subject>Data mining; KPI; regression; higher education; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Key Performance Indicators (KPIs) are essential factors for the success of an organization. KPIs measure current performance and track ongoing progress toward specified business objectives. The Ministry of Higher Education (MoHE) in Palestine used established formulas to predict KPIs, which are vital for charting the organization&#39;s aims. This study applies regression models to student enrollment data sets to predict accurate KPIs that can be used and adapted for any higher education system. The predictive engine determines the KPI based on linear regression techniques such as Lasso and Elastic Net, and non-linear regression techniques such as Support Vector Regression (SVR) and K-Nearest Neighbor (KNN). The MoHE in Palestine provided the datasets on enrollments and graduations for different Higher Education Institutions (HEIs). The regression algorithms were evaluated by mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and R-squared. The experiment demonstrates that a 40% training / 60% testing split using linear regression gives the best result.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_56-A_Regression_Model_to_Predict_Key_Performance_Indicators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art Approach to e-Learning with Cutting Edge NLP Transformers: Implementing Text Summarization, Question and Distractor Generation, Question Answering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130155</link>
        <id>10.14569/IJACSA.2022.0130155</id>
        <doi>10.14569/IJACSA.2022.0130155</doi>
        <lastModDate>2022-01-31T10:55:48.2600000+00:00</lastModDate>
        
        <creator>Spandan Patil</creator>
        
        <creator>Lokshana Chavan</creator>
        
        <creator>Janhvi Mukane</creator>
        
        <creator>Deepali Vora</creator>
        
        <creator>Vidya Chitre</creator>
        
        <subject>Machine intelligence; natural language processing; neural networks; predictive models; text processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Amid the worldwide wave of pandemic lockdowns, there has been remarkable growth in e-learning. Online learning has become a challenge for students, as it has become difficult for them to find the content they need. The mounting accessibility of textual content has necessitated comprehensive study in the areas of automatic text summarization and question generation. Multiple-choice questions are very convenient for evaluation, and their assessment can be implemented through computerized applications so that results can be declared within a few hours and the evaluation is completely unbiased. The system proposes an interactive reading platform where the user can upload an e-book, obtain a textual summary, and generate questions such as MCQs, fill-in-the-blanks, and one-word questions. The user can also have the answered questions evaluated. The proposed system is an all-in-one interactive reading platform.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_55-State_of_Art_Approach_to_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Artificial Intelligence in Retrieving Design Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130154</link>
        <id>10.14569/IJACSA.2022.0130154</id>
        <doi>10.14569/IJACSA.2022.0130154</doi>
        <lastModDate>2022-01-31T10:55:48.2430000+00:00</lastModDate>
        
        <creator>Y. Moubachir</creator>
        
        <creator>B. Hamri</creator>
        
        <creator>S. Taibi</creator>
        
        <subject>Artificial intelligence; ANN; design methodologies; DFX; morphological analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Design is a very important step in the product life cycle, because it is generally the key to the success or failure of the product. The field of design theories and methodologies is filled with theories and methods that have been taught and developed throughout the years. Most of them rely on subdividing the design process into phases, where the transition between each two phases relies on certain design tools. One of the main challenges nowadays is to find a way to integrate artificial intelligence (AI) into the design process. This integration could be very beneficial, because AI can quickly learn the relationship between the inputs and outputs of any phenomenon, and it can also predict the behavior if the input parameters vary. In our previous work, we shed light on how to improve the transition between design phases by storing and retrieving design solutions using morphological analysis and design tools like DFX. In this work, we present a different methodology to perform this transition, which relies on using an artificial intelligence tool called Artificial Neural Networks (ANN) instead of morphological analysis to retrieve the right design solution. To illustrate this method, we take the same example from our previous work and show how ANN can be used to learn and predict the right design solution.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_54-Applying_Artificial_Intelligence_in_Retrieving_Design_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual User Experience Evaluation Model on Online Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130153</link>
        <id>10.14569/IJACSA.2022.0130153</id>
        <doi>10.14569/IJACSA.2022.0130153</doi>
        <lastModDate>2022-01-31T10:55:48.2430000+00:00</lastModDate>
        
        <creator>Norhanisha Yusof</creator>
        
        <creator>Nor Laily Hashim</creator>
        
        <creator>Azham Hussain</creator>
        
        <subject>User experience; UX evaluation; UX model; online system; conceptual model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>An online system has become a priority for organisations and companies in many countries, as it allows many processes to be conducted via online platforms, which contributes to profit gain. Different types of user experience (UX) evaluation models have been proposed to guide the measurement and development process. However, most of these models only have dimensions, and there is no guidance for UX measurement on online systems. The lack of evaluation models for online system measurement requires further investigation. This paper aims to identify the gaps in UX evaluation models and develop a conceptual UX evaluation model for online systems. The method used in this study includes reviewing the literature and shortlisting relevant publications on UX and online systems. After that, the gaps were identified from the existing UX evaluation models in the relevant publications based on the ISO standard. Then, the study identified the important components of UX and proposed a new conceptual UX evaluation model for online systems. The results of the study are the identification of the gaps in existing UX evaluation models and the development of a new conceptual UX evaluation model specifically for online systems. The results therefore help in considering UX dimensions, criteria, metrics, and potential UX components for evaluation and measurement. The paper is useful to system developers, designers, and researchers for future development of UX evaluation models for online systems. Future studies could use the reviewed UX evaluation models to identify relevant dimensions of online systems and hence improve the model that they will develop. The findings may also be beneficial to organisations that own online systems by providing guidelines on the important dimensions involved in their UX-based evaluations.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_53-A_Conceptual_User_Experience_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preserving Location Privacy in the IoT against Advanced Attacks using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130152</link>
        <id>10.14569/IJACSA.2022.0130152</id>
        <doi>10.14569/IJACSA.2022.0130152</doi>
        <lastModDate>2022-01-31T10:55:48.2270000+00:00</lastModDate>
        
        <creator>Abdullah S. Alyousef</creator>
        
        <creator>Karthik Srinivasan</creator>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Majdah Alshammari</creator>
        
        <creator>Mousa Al-Akhras</creator>
        
        <subject>LBS; dummy; deep-learning; attacks; accuracy; resistance; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Location-based services (LBSs) have received a significant amount of recent attention from the research community due to their valuable benefits in various aspects of society. In addition, the dependency on LBSs for the performance of daily tasks has increased dramatically, especially after the spread of the COVID-19 pandemic. LBS users use their real location to build LBS queries in order to obtain services, which makes location privacy vulnerable to attacks. The privacy issue is accentuated if the attacker is the LBS provider, since all information about users is then accessible. Moreover, the attacker can apply advanced attacks, such as map matching and semantic location attacks. In response to these issues, this work employs artificial intelligence to build a robust defense against advanced location privacy attacks. The key idea behind protecting the location privacy of LBS users is to generate smart dummy locations: false locations with the same query probability as the real location, but far from both the real location and each other. Relying on these two conditions, the deep-learning-based intelligent finder ensures a high level of location privacy protection against advanced attacks: the attacker cannot distinguish the dummies from the real location and cannot isolate the real location through a filtering process. Compared to the well-known DDA and BDA systems, the proposed system shows better results in terms of entropy (the privacy protection metric), accuracy (the deep learning metric), and total execution time (the performance metric), with entropy = 15.9, accuracy = 9.9, and total execution time = 17 sec.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_52-Preserving_Location_Privacy_in_the_IoT_against.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between Lab-VIEW and MATLAB on Feature Matching-based Speech Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130150</link>
        <id>10.14569/IJACSA.2022.0130150</id>
        <doi>10.14569/IJACSA.2022.0130150</doi>
        <lastModDate>2022-01-31T10:55:48.2100000+00:00</lastModDate>
        
        <creator>Edita Rosana Widasari</creator>
        
        <creator>Barlian Henryranu Prasetio</creator>
        
        <creator>Dian Eka Ratnawati</creator>
        
        <subject>Speech; articulation; feature matching; Lab-VIEW; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Speech recognition systems are widely used in smart applications, and different smart applications have different requirements for processing the human voice. Observing the most common performance metrics of speech recognition systems is essential, since it is necessary to design smart-application-based speech recognition systems for people&#39;s needs. Moreover, feature matching is the principal part of a speech recognition system, since it plays a key role in authentication and in separating one human voice from another along with their different articulations. Therefore, this work presents a performance comparison of feature-matching-based speech recognition systems implemented in Lab-VIEW and MATLAB. The feature extraction involves the calculation of Mel Frequency Cepstral Coefficients (MFCC) for each frame. For the matching process, which uses Dynamic Time Warping (DTW), the system was tested 100 times for each of five speeches by changing the articulation with the same vocal cords. The testing then compares Lab-VIEW and MATLAB on the most common performance metrics of speech recognition systems: accuracy rate, execution time, and CPU usage. Based on the experimental results, the average accuracy rate of MATLAB is better than that of Lab-VIEW, while the execution time testing indicates that Lab-VIEW has a shorter execution time than MATLAB. On the other hand, Lab-VIEW and MATLAB have almost the same CPU usage. These results indicate that the comparison can be used according to the requirements of smart-application-based speech recognition systems.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_50-Performance_Comparison_between_Lab_VIEW_and_MATLAB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Priority Rule for Initial Ordering of Jobs in Permutation Flowshop Scheduling Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130151</link>
        <id>10.14569/IJACSA.2022.0130151</id>
        <doi>10.14569/IJACSA.2022.0130151</doi>
        <lastModDate>2022-01-31T10:55:48.2100000+00:00</lastModDate>
        
        <creator>B. Dhanasakkaravarthi</creator>
        
        <creator>A. Krishnamoorthy</creator>
        
        <subject>Priority rule; flowshop scheduling; makespan; NEH algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Scheduling in a permutation flowshop refers to the processing of jobs on a set of available machines in the same order. Among the several possible performance characteristics of a flowshop, makespan has remained one of the most preferred metrics among researchers over the past six decades. The constructive heuristic proposed by Nawaz, Enscore, and Ham (NEH) is one of the best for makespan minimization. Its performance essentially depends on the initial ordering of jobs according to a particular priority rule (PR). Popular priority rules include the non-increasing order of the jobs&#39; total processing time; the sum of average processing time and standard deviation; and the sum of average processing time, standard deviation, and absolute skewness, among others. The objective of this paper is to propose and analyse a new job priority rule for the permutation flowshop. The popular priority rules available in the literature are studied, and one of the best, the sum of average processing time and standard deviation, is slightly modified by replacing the standard deviation with the mean absolute deviation (MAD). To assess the performance of the new rule, four benchmark datasets are used. The computational results and statistical analyses demonstrate the better performance of the new rule.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_51-A_New_Priority_Rule_for_Initial_Ordering_of_Jobs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Knowledge-based Expert System for Supporting Security in Software Engineering Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130149</link>
        <id>10.14569/IJACSA.2022.0130149</id>
        <doi>10.14569/IJACSA.2022.0130149</doi>
        <lastModDate>2022-01-31T10:55:48.1970000+00:00</lastModDate>
        
        <creator>Ahmad Azzazi</creator>
        
        <creator>Mohammad Shkoukani</creator>
        
        <subject>Knowledge-based systems; security engineering; software development process; expert systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Building secure software systems requires the intersection of two engineering disciplines: software engineering and security engineering. There is a lack of a defined security mechanism for each of the software development phases, which strongly affects the quality of the software system. In this paper, the authors propose a framework that considers security aspects in all phases of the software development process, from requirements until deployment of the software product, with three additional phases that are important for automatically producing a secure system. The framework was developed after analyzing existing models for secure system development. Its key elements are the additional phases, namely physical, training, and auditing, which improve the level of security in software engineering projects. The authors thus found a solution for replacing the knowledge of the security engineer through the construction of an intelligent knowledge-based system, which provides the software developer with the security rules needed in each phase of the software development lifecycle and improves the developer&#39;s awareness of security-related issues in each phase. The framework and the expert system were tested on a variety of software projects, where a significant improvement in security in each phase of the software development process was achieved.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_49-A_Knowledge_based_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Efficient Electricity Consumption Prediction Model using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130147</link>
        <id>10.14569/IJACSA.2022.0130147</id>
        <doi>10.14569/IJACSA.2022.0130147</doi>
        <lastModDate>2022-01-31T10:55:48.1800000+00:00</lastModDate>
        
        <creator>Ghaidaa Hamad Alraddadi</creator>
        
        <creator>Mohamed Tahar Ben Othman</creator>
        
        <subject>Anomalies detection; deep learning; electricity consumption forecasting; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Electricity consumption has continued to rise rapidly, following the rapid growth of the economy. Therefore, detecting anomalies in buildings&#39; energy data is considered one of the most essential techniques for detecting anomalous events in buildings. This paper aims to optimize electricity consumption in households by forecasting the consumption of these households and, consequently, identifying anomalies. Further, as the dataset used is huge and published publicly, much research has used parts of it based on particular needs. In this paper, the dataset is grouped into daily consumption and monthly consumption to compare the network topologies of all other works that used the same dataset with the selected part. The proposed methodology depends primarily on long short-term memory (LSTM) because it is powerful, flexible, and can deal with complex multi-dimensional time-series data. The model can accurately predict the future consumption of individual households on a daily or monthly basis, even if the household was not included in the original training set. The proposed daily model achieves a root mean square error (RMSE) of 0.362 and a mean absolute error (MAE) of 19.7%, while the monthly model achieves an RMSE of 0.376 and an MAE of 17.8%. Our model achieved the lowest error when compared with the other network topologies: the lowest RMSE achieved by the other topologies is 0.37 and the lowest MAE is 18%, whereas our model achieved an RMSE of 0.362 and an MAE of 17.8%. Further, the model can detect anomalies efficiently in both daily and monthly electricity consumption data. However, the daily electricity consumption readings are far better for detecting anomalies than the monthly readings because of the distinct peaks that appear in the daily consumption data.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_47-Development_of_an_Efficient_Electricity_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Review of Technology-Enhanced Learning using Automatic Content Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130148</link>
        <id>10.14569/IJACSA.2022.0130148</id>
        <doi>10.14569/IJACSA.2022.0130148</doi>
        <lastModDate>2022-01-31T10:55:48.1800000+00:00</lastModDate>
        
        <creator>Amalia Rahmah</creator>
        
        <creator>Harry B. Santoso</creator>
        
        <creator>Zainal A. Hasibuan</creator>
        
        <subject>Automatic content analysis; ACA; assessment questionnaire; concept; facet analysis; key terms; Leximancer; systematic literature review; SLR; technology-enhanced learning; TEL; text analysis; theme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Technology-enhanced learning (TEL) continues to grow gradually while considering a multitude of factors, which underpins the need to develop a TEL maturity assessment as a guideline for this gradual improvement. This study investigates the potential application of TEL expert knowledge presented in various research articles as qualitative data for developing assessment questionnaires. A mixed-method approach is applied to analyze the qualitative data, combining a systematic literature review (SLR) with automated content analysis (ACA) as quantitative data processing to strengthen the trustworthiness of the findings and reduce researcher bias. This process is carried out in six steps: conducting the SLR, processing the data with ACA using Leximancer, organizing the resulting concepts with facet analysis, contextualizing each TEL facet, constructing the assessment questionnaire for each context, and establishing TEL maturity dimensions. This study generates 64 questionnaire statements grouped according to the target respondents, namely students, teachers, or institutions. This set of questions is also grouped into dimensions representing aligned contexts: student performance, learning process, applied technology, contents, accessibility, teachers and teaching, and strategy and regulation. Further research is required to distribute this questionnaire to pilot respondents in order to design the improvement roadmap and check data patterns to formulate maturity appraisals and scoring methods.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_48-Critical_Review_of_Technology_Enhanced_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Greedy-based Algorithm in Optimizing Student’s Recommended Timetable Generator with Semester Planner</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130146</link>
        <id>10.14569/IJACSA.2022.0130146</id>
        <doi>10.14569/IJACSA.2022.0130146</doi>
        <lastModDate>2022-01-31T10:55:48.1630000+00:00</lastModDate>
        
        <creator>Khyrina Airin Fariza Abu Samah</creator>
        
        <creator>Siti Qamalia Thusree</creator>
        
        <creator>Ahmad Firdaus Ahmad Fadzil</creator>
        
        <creator>Lala Septem Riza</creator>
        
        <creator>Shafaf Ibrahim</creator>
        
        <creator>Noraini Hasan</creator>
        
        <subject>Greedy algorithm; optimization; recommendation system; semester planner</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>A semester planner plays an essential role for students, helping them maintain the self-discipline and determination needed to complete their studies. However, during the COVID-19 pandemic, students faced difficulty managing their time and producing schedules manually. This resulted in substantial disruptions in learning, internal assessment disturbances, and the cancellation of public evaluations. Hence, this research aims to optimize a recommended semester planner, the Timetable Generator, using a greedy algorithm to increase student productivity. We identified a set of three control functions for each piece of entered information: 1) validation of the inserted information to ensure valid data and no redundancy, 2) a focus scale, and 3) the number of hours needed to finish the activity. We calculate the priority task sequence to achieve the optimal solution; the greedy algorithm can solve the optimization problem with the best solution for each situation. We then executed it to produce a recommended semester planner. In the tests conducted, all features successfully passed the functionality checks. We validated the system&#39;s reliability by comparing its accuracy against the Brute Force algorithm, with accuracy trends increasing from 60% to 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_46-A_Greedy_based_Algorithm_in_Optimizing_Student’s.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance of Personality-based Recommender System for Fashion with Demographic Data-based Personality Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130145</link>
        <id>10.14569/IJACSA.2022.0130145</id>
        <doi>10.14569/IJACSA.2022.0130145</doi>
        <lastModDate>2022-01-31T10:55:48.1500000+00:00</lastModDate>
        
        <creator>Iman Paryudi</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Khabib Mustofa</creator>
        
        <subject>Implicit personality elicitation; demographic data; personality-based recommender system; personality trait</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Currently, the common method for predicting personality implicitly (Implicit Personality Elicitation) is Personality Elicitation from Text (PET). PET predicts personality implicitly from statuses written on social media. The weakness of this method, when applied to a recommender system, is the requirement of having at least one social media account: a user without one cannot use such a system. To overcome this shortcoming, a new method for predicting personality implicitly from demographic data is proposed. This proposal is based on findings by previous researchers stating that there is a correlation between demographic data and personality traits. To predict personality from demographic data, a personality model (rule) is needed; this model correlates demographic data and personality. To apply this model to a recommender system, another model is needed, namely a preference model that connects personality and preference. These two models are then applied to a personality-based recommender system for fashion. In the performance evaluation, the precision of and user satisfaction with the recommendations are 60.19% and 87.50%, respectively. Compared to the precision and user satisfaction of a PET-based recommender system (82% and 79%, respectively), the precision of the demographic data-based recommender system is lower whereas the satisfaction is higher.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_45-The_Performance_of_Personality_based_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Snowball Framework for Web Service Composition in SOA Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130143</link>
        <id>10.14569/IJACSA.2022.0130143</id>
        <doi>10.14569/IJACSA.2022.0130143</doi>
        <lastModDate>2022-01-31T10:55:48.1330000+00:00</lastModDate>
        
        <creator>Mohamed Elkholy</creator>
        
        <creator>Youcef Baghdadi</creator>
        
        <creator>Marwa Marzouk</creator>
        
        <subject>Service oriented architecture (SOA); web service granularity; web service composition; software flexibility; snowball composition framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Service Oriented Architecture (SOA) has emerged as a promising architectural style that provides software applications with a high level of flexibility and reusability. However, in several cases where legacy software components are wrapped for use as web services, the final solution does not fully satisfy the SOA aims of flexibility and reusability. The literature review and industrial applications show that SOA lacks a formal definition and measurement of optimal web service granularity. Indeed, wrapping several business functionalities as coarse-grained web services reduces reusability and flexibility. On the other hand, a huge number of fine-grained web services results in high coupling between services and large messages transferred over the Internet. The main research question remains: “How to determine an optimal level of service granularity when wrapping business functionalities as web services?” This research proposes the Snowball framework as a promising approach to integrating and composing web services. The framework consists of a three-step process that uses rules to decide which web services have an optimal granularity while maintaining the required performance. To demonstrate and evaluate the framework, we realized a car insurance application that had already been implemented with a traditional approach. The results show the efficiency of the Snowball framework over other approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_43-Snowball_Framework_for_Web_Service_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Requirement Elicitation Techniques: A Neural Network based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130144</link>
        <id>10.14569/IJACSA.2022.0130144</id>
        <doi>10.14569/IJACSA.2022.0130144</doi>
        <lastModDate>2022-01-31T10:55:48.1330000+00:00</lastModDate>
        
        <creator>Mohd Muqeem</creator>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Jabeen Nazeer</creator>
        
        <creator>Md. Faizan Farooqui</creator>
        
        <creator>Afroj Alam</creator>
        
        <subject>Requirement elicitation; requirement engineering; neural network; back propagation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Requirement elicitation is a key activity of requirement engineering and has a strong impact on design and the other phases of the software development life cycle. Poor requirement engineering practices lead to project failure, while a sound requirement elicitation process is the foundation of the overall quality of a software product. Given the criticality and high impact of this phase on the overall success or failure of projects, it is essential to perform requirement elicitation activities in a precise and specific manner. One of the most difficult and demanding jobs in the requirement elicitation phase is selecting an appropriate and specific technique from the wide array of techniques and tools available. In this paper, a new approach using an artificial neural network is proposed for the selection of a requirement elicitation technique from the wide variety of tools and techniques available. The neural network is trained with the back propagation algorithm, and the resulting trained network can serve as a basis for selecting requirement elicitation techniques.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_44-Selection_of_Requirement_Elicitation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Inter-Domain Routing for Resisting Unknown Attacker in Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130142</link>
        <id>10.14569/IJACSA.2022.0130142</id>
        <doi>10.14569/IJACSA.2022.0130142</doi>
        <lastModDate>2022-01-31T10:55:48.1170000+00:00</lastModDate>
        
        <creator>Bhavana A</creator>
        
        <creator>Nanda Kumar A N</creator>
        
        <subject>Internet-of-things; security; inter-domain routing; gateway node; attacker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>With the increasing adoption of the Internet-of-Things (IoT) over massively connected devices, there is a rising security concern. A review of existing security schemes in IoT shows a significant trade-off due to the non-adoption of inter-domain routing over larger domains of heterogeneous nodes connected via gateway nodes. Hence, the purpose of the proposed study is to bridge this trade-off by adopting a new security scheme that works over inter-domain routing without any a priori information about an attacker. The goal of the proposed framework is to identify the malicious intention of an attacker by evaluating its increasing attention to different types of hop-link information. Upon identification, the framework also resists the attacker node&#39;s participation in the IoT environment by advertising counterfeited route information intended to mislead attackers into autonomous self-isolation. The study outcome shows that the proposed scheme is more secure than existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_42-Secure_Inter_Domain_Routing_for_Resisting_Unknown_Attacker.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining to Identify the Patterns of Use made by the University Professors of the Moodle Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130140</link>
        <id>10.14569/IJACSA.2022.0130140</id>
        <doi>10.14569/IJACSA.2022.0130140</doi>
        <lastModDate>2022-01-31T10:55:48.1030000+00:00</lastModDate>
        
        <creator>Johan Calderon-Valenzuela</creator>
        
        <creator>Keisi Payihuanca-Mamani</creator>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <subject>Clustering; educational data mining; moodle; usage patterns; k-means algorithm; a priori algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Due to the events caused by the COVID-19 pandemic and social distancing measures, learning management systems have gained importance: while preserving quality standards, they can be used to implement remote education or to support face-to-face education. Consequently, it is important to know how teachers and students use them. In this work, clustering techniques are used to analyze how university professors use the resources and activities of the Moodle platform. The CRISP-DM methodology was applied to implement a data mining process based on the Simple K-Means algorithm; to identify associated groups of teachers, it was necessary to categorize the data obtained from the platform. The Apriori algorithm was applied to identify associations in the use of resources and activities. Performance scales were established for the use of Moodle functionalities; the results show that the use made by teachers was very low. Rules were generated to identify the associations between activities and resources, and as a result the functionalities that need to be enhanced in teacher training processes were identified. Having identified the patterns of use of the Moodle platform, it is concluded that a Likert scale was needed to transform the frequency of use of activities and resources and to identify the association rules that establish profiles of teachers and the tools that should be promoted in future training actions.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_40-Educational_Data_Mining_to_Identify_the_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing the Quality of Educational Websites in Sudan using Quality Model Criteria through an Electronic Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130141</link>
        <id>10.14569/IJACSA.2022.0130141</id>
        <doi>10.14569/IJACSA.2022.0130141</doi>
        <lastModDate>2022-01-31T10:55:48.1030000+00:00</lastModDate>
        
        <creator>Asim Seedahmed Ali Osman</creator>
        
        <subject>Website quality; e-websites; education; information technology; PHP language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Internet use has grown in recent years, leading to an increase in the number of websites and a diversity of services. This has led researchers to study website quality further in order to set standards and models for maintaining it. The main objective of these standards is to support trust and speed, which are the cornerstones of website use. Various statistical reports show that the sites of institutions and companies that applied quality standards achieved high rates of user satisfaction and numbers of visitors. This study covers the concepts of websites and electronic portals, their objectives, advantages, and types, in addition to the quality and standards of e-websites. It also reviews previous studies on websites conducted worldwide, in the Arab world and Africa, and in Sudan, in support of the development of Sudanese websites. The proposed model consists of important metrics for evaluating the application, quality of content, aesthetic aspects, multimedia, reputation, security, etc. This paper also proposes an application for evaluating website quality based on this model, which is applied to Sudanese websites (governmental, educational, commercial, etc.). The authors used the object-oriented programming approach to build the proposed model using the PHP language combined with CSS and JavaScript.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_41-Assessing_the_Quality_of_Educational_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection Pipeline based on Hybrid Optimization Approach with Aggregated Medical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130139</link>
        <id>10.14569/IJACSA.2022.0130139</id>
        <doi>10.14569/IJACSA.2022.0130139</doi>
        <lastModDate>2022-01-31T10:55:48.0870000+00:00</lastModDate>
        
        <creator>Palwinder Kaur</creator>
        
        <creator>Rajesh Kumar Singh</creator>
        
        <subject>Content-based image retrieval system; CLAHE; Gabor filter; Cuckoo search; LION optimization; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>For quite some time, the use of multiple data sources (data fusion) and the aggregation of that data have been underappreciated. For the purposes of this study, trials using several medical datasets were conducted, with the results serving as a single aggregated source for identifying eye diseases. This paper proposes a diagnostic system that can detect diabetic retinopathy, glaucoma, and cataract as an alternative to current methods. The data fusion and data aggregation techniques used made this multi-model system possible; as the name implies, they compile data from a large number of legitimate sources. A pipeline of algorithms was developed through iterative trials and hyperparameter tuning. CLAHE (Contrast Limited Adaptive Histogram Equalization) improves segmentation by raising the contrast between image edges. The Gabor filter was shown to be the most effective method of selecting features; it was selected using a hybrid optimization method (LION + Cuckoo) developed by the authors. For automation, the Support Vector Machine (SVM) with a radial kernel is the most effective method, since it delivers excellent stability as well as high accuracy, precision, and recall. The findings and approaches detailed here provide a more solid foundation for future image-based diagnostics research to build on. Eventually, the findings of this study will help to improve healthcare workflows and practices.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_39-Feature_Selection_Pipeline_based_on_Hybrid_Optimization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tomato Leaf Disease Detection using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130138</link>
        <id>10.14569/IJACSA.2022.0130138</id>
        <doi>10.14569/IJACSA.2022.0130138</doi>
        <lastModDate>2022-01-31T10:55:48.0700000+00:00</lastModDate>
        
        <creator>Nagamani H S</creator>
        
        <creator>Sarojadevi H</creator>
        
        <subject>Fuzzy Support Vector Machine (SVM); Convolution Neural Network (CNN); Region-based Convolution Neural Network (R-CNN); color thresholding; flood filling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Plant diseases cause low agricultural productivity and are challenging for the majority of farmers to control and identify. Early disease diagnosis is necessary to reduce future losses. This study looks at how to identify tomato plant leaf disease using machine learning techniques, including the Fuzzy Support Vector Machine (Fuzzy-SVM), Convolutional Neural Network (CNN), and Region-based Convolutional Neural Network (R-CNN). The findings were confirmed using images of tomato leaves with six diseases as well as healthy samples. Image scaling, color thresholding, flood-filling approaches for segmentation, the gradient local ternary pattern, and Zernike moment features are used to train on the images. R-CNN classifiers are used to classify the disease type. The classification methods of Fuzzy-SVM and CNN are analyzed and compared with R-CNN to determine the most accurate model for plant disease prediction. The R-CNN-based classifier achieves the highest accuracy, 96.735 percent, compared to the other classification approaches.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_38-Tomato_Leaf_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neuromarketing Solutions based on EEG Signal Analysis using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130137</link>
        <id>10.14569/IJACSA.2022.0130137</id>
        <doi>10.14569/IJACSA.2022.0130137</doi>
        <lastModDate>2022-01-31T10:55:48.0700000+00:00</lastModDate>
        
        <creator>Asad Ullah</creator>
        
        <creator>Gulsher Baloch</creator>
        
        <creator>Ahmed Ali</creator>
        
        <creator>Abdul Baseer Buriro</creator>
        
        <creator>Junaid Ahmed</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <creator>Saba Akhtar</creator>
        
        <subject>Electroencephalogram (EEG); brain-computer interface; neuromarketing; machine learning; artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Marketing campaigns that promote and market various consumer products are a well-known strategy for increasing sales and market awareness, which in turn increases the profit of a manufacturing unit. &quot;Neuromarketing&quot; refers to the use of unconscious mechanisms to determine customer preferences for decision-making and behavior prediction. In this work, a predictive modeling method is proposed for recognizing consumer preferences for online (e-commerce) products as &quot;Likes&quot; and &quot;Dislikes&quot;. Volunteers of various ages were exposed to a variety of consumer products, and their EEG signals and product preferences were recorded. Artificial Neural Networks and other classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbors, and Support Vector Machine were used to perform product-wise and subject-wise classification using a user-independent testing method. The subject-wise classification results were relatively low, with artificial neural networks (ANN) achieving 50.40 percent and k-Nearest Neighbors achieving 60.89 percent. In contrast, the product-wise classification results were higher: 81.23 percent using Artificial Neural Networks and 80.38 percent using Support Vector Machine.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_37-Neuromarketing_Solutions_based_on_EEG_Signal_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>4PCDT: A Quantifiable Parameter-based Framework for Academic Software Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130136</link>
        <id>10.14569/IJACSA.2022.0130136</id>
        <doi>10.14569/IJACSA.2022.0130136</doi>
        <lastModDate>2022-01-31T10:55:48.0570000+00:00</lastModDate>
        
        <creator>Vikas S. Chomal</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Hema Gaikwad</creator>
        
        <creator>Ketan Kotecha</creator>
        
        <subject>Academic; CMMI-Dev; PMBOK; project management; software project; student</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Many authorities, like the Project Management Body of Knowledge (PMBOK) and Capability Maturity Model Integration for Development (CMMI-Dev), lend a hand to software development organizations in managing their crucial projects. Although this area needs focused research, such models are not dedicatedly available for the academic projects developed by students of computer science and engineering, where software project development is considered one of the criteria for the award of a degree to the future professionals of the IT industry. With this motivation, we explored 4PTRB, 3PR, and the software project management practices, approaches, and processes framed and provided by PMBOK and CMMI-Dev. The main aim of this research is to introduce and propose a software project management framework for the academic domain. The proposed framework identifies and describes 7 quantifiable parameters and 26 sub-parameters. The framework is called 4PCDT, for People, Process, Product, Project, Complexity, Duration, and Technology, for academic software projects. To validate the proposed framework, an online survey of 113 faculty members was conducted to rank and weigh the quantifiable parameters. The results show that the People, Process, and Technology management parameters are the top 3 ranked parameters. The robustness of the approach is further evident from the results of experimentation on 18 actual academic software projects of final-year postgraduate students in the IT domain. Not only is the proposed work the first of its kind, but it is also bound to generate an excellent ripple effect in the research community.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_36-4PCDT_A_Quantifiable_Parameter_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Concatenation based Multilayered Sparse Tensor for Debond Detection Optical Thermography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130134</link>
        <id>10.14569/IJACSA.2022.0130134</id>
        <doi>10.14569/IJACSA.2022.0130134</doi>
        <lastModDate>2022-01-31T10:55:48.0400000+00:00</lastModDate>
        
        <creator>Junaid Ahmed</creator>
        
        <creator>Abdul Baseer</creator>
        
        <creator>Guiyun Tian</creator>
        
        <creator>Gulsher Baloch</creator>
        
        <creator>Ahmed Ali Shah</creator>
        
        <subject>Improved tensor nuclear norm; low-rank decomposition; concatenated feature space; optical thermography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Composites are key ingredients of manufacturing in the aerospace, aircraft, civil, and related industries, so it is quite important to check their quality and health during manufacture or in service. The most commonly found problem in CFRPs is debonding. As debonds are subsurface defects, general methods are not very effective and require destructive tests. Optical Pulse Thermography (OPT) is a quite promising technology used for detecting debonds. However, the thermographic time sequences from the OPT system contain a lot of noise, and the defect information is normally not clear. To solve this problem, an improved tensor nuclear norm (I-TNN) decomposition is proposed in the concatenated feature space with multilayer tensor decomposition. The proposed algorithm utilizes the frontal slice of the tensor to define the TNN, and the core singular matrix is further decomposed to utilize the information in the third mode of the tensor. The concatenation helps embed the low-rank and sparse data jointly for weak defect extraction. To show the efficacy and robustness of the algorithm, experiments are conducted and comparisons with other algorithms are presented.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_34-Feature_Concatenation_based_Multilayered_Sparse_Tensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmoderated Remote Usability Testing: An Approach during Covid-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130135</link>
        <id>10.14569/IJACSA.2022.0130135</id>
        <doi>10.14569/IJACSA.2022.0130135</doi>
        <lastModDate>2022-01-31T10:55:48.0400000+00:00</lastModDate>
        
        <creator>Ambar Relawati</creator>
        
        <creator>Guntur Maulana Zamroni</creator>
        
        <creator>Yanuar Primanda</creator>
        
        <subject>Mobile learning; nurse competency test; unmoderated remote testing; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Online Nurse Test for Indonesian Nurse Competency (ONT UKNI) is a mobile application developed to help increase the success rate of nurse competency test participants. By using this application, users can learn more about the materials tested and conduct tryouts as a competency test simulation. However, ONT UKNI has not yet passed adequate testing stages, especially in terms of User Interface/User Experience (UI/UX). The Covid-19 pandemic presents challenges for the UI/UX testing process. The testing process, ideally carried out face-to-face with respondents to gain further insight, had to be carried out using another approach following the new normal protocol. This study aims to test the usability of the UI/UX with an unmoderated remote testing approach on the ONT UKNI application using a USE questionnaire. The test was performed with 26 respondents, all of whom were nursing profession students of Universitas Muhammadiyah Yogyakarta. Respondents performed 8 tasks on ONT UKNI and answered a set of questionnaires that were tabulated and analyzed. The results indicate that the usefulness, ease of learning, and satisfaction variables receive the Very Good category, while the ease of use variable receives the Good category. Overall, usability testing using an unmoderated remote testing approach can be carried out and is able to provide information about areas where users are satisfied with the ONT UKNI application. However, some areas still have room for improvement, such as better UI design and the implementation of gamification.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_35-Unmoderated_Remote_Usability_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Secure Transposition Cipher Technique using Arbitrary Zigzag Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130133</link>
        <id>10.14569/IJACSA.2022.0130133</id>
        <doi>10.14569/IJACSA.2022.0130133</doi>
        <lastModDate>2022-01-31T10:55:48.0230000+00:00</lastModDate>
        
        <creator>Basil Al-Kasasbeh</creator>
        
        <subject>Cryptography; symmetric cipher; block cipher; transposition algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Symmetric cipher cryptography is an efficient technique for encrypting bits and letters to maintain secure communication and data confidentiality. Compared to asymmetric cipher cryptography, a symmetric cipher has the speed advantage required for various real-time applications. Yet, with the spread of micro-devices and the wider utilization of the Internet of Things (IoT) and Wireless Sensor Networks (WSN), lightweight algorithms are required to operate on such devices. This paper proposes a symmetric cipher based on a scheme consisting of multiple zigzag patterns, a secret key of variable length, and data blocks of variable size. The proposed system uses transposition principles to generate various encryption patterns with a particular initial point over a grid. The total number of cells in the grid and its dimensions are variable. Various patterns can be created for the same grid, leading to different outcomes on different grids. For a grid of n cells, a total of n! * (n-1)! patterns can be generated. This information is encapsulated in the secret key. Thus, the huge number of possible patterns and the variation of the grid size, which are kept hidden, maintain the security of the proposed technique. Moreover, variable padding can be used; two paddings with different lengths lead to completely different outputs even with the same pattern and the same inputs, which further improves the security of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_33-A_Novel_Secure_Transposition_Cipher_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Diabetic Retinopathy in Fundus Images using Combined Enhanced Green and Value Planes (CEGVP) with k-NN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130132</link>
        <id>10.14569/IJACSA.2022.0130132</id>
        <doi>10.14569/IJACSA.2022.0130132</doi>
        <lastModDate>2022-01-31T10:55:48.0100000+00:00</lastModDate>
        
        <creator>Minal Hardas</creator>
        
        <creator>Sumit Mathur</creator>
        
        <creator>Anand Bhaskar</creator>
        
        <subject>Combined enhanced green and value plane; diabetic retinopathy; fundus image; image processing; k-NN; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Diabetic Retinopathy (DR) is a disease that damages the blood vessels of the retina, especially in patients with high, uncontrolled blood sugar levels, and may lead to complications in the eyes or loss of vision. Thus, early detection of DR is essential to avoid complete blindness. Automatic screening through computational techniques would help diagnose the disease more accurately. Traditional DR detection techniques identify abnormalities such as microaneurysms, hemorrhages, hard exudates, and soft exudates in diabetic retinopathy images individually. When these abnormalities occur in combination, it becomes difficult to predict them, and the accuracy of individual detection (traditional 4-class classification) decreases. Hence, there is a need for separate combinational classes (16-class classification) that help classify these abnormalities in groups or one by one. The objective of our work is to develop an automated DR prediction scheme that classifies the abnormalities either individually or in combination in retinal fundus images. The proposed system uses Combined Enhanced Green and Value Planes (CEGVP) for processing the fundus images, Principal Component Analysis (PCA) for feature extraction, and k-nearest neighbor (k-NN) for classification of DR. The suggested technique yields an average accuracy of 97.11 percent using a k-NN classifier. This is the first time a 16-class classification has been introduced, giving the ability and flexibility to map the combinational complexity in a single step. The proposed method can assist ophthalmologists in efficiently detecting the abnormalities and starting the diagnosis on time.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_32-Detecting_Diabetic_Retinopathy_in_Fundus_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach to Weather Prediction in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130131</link>
        <id>10.14569/IJACSA.2022.0130131</id>
        <doi>10.14569/IJACSA.2022.0130131</doi>
        <lastModDate>2022-01-31T10:55:47.9930000+00:00</lastModDate>
        
        <creator>Suvarna S Patil</creator>
        
        <creator>B.M.Vidyavathi</creator>
        
        <subject>Data mining; wireless sensor network; multiple linear regression; outliers treatment; r-square; adjusted r-square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Weather prediction is a key requirement for saving lives from environmental disasters such as landslides, earthquakes, floods, forest fires, and tsunamis. Monitoring disasters and issuing forewarnings to people living in disaster-prone places can help protect lives. In this paper, a Multiple Linear Regression (MLR) model is proposed for humidity prediction. After exploratory data analysis and outlier treatment, the Multiple Linear Regression technique was applied to predict humidity. The Intel lab dataset, collected by deploying 54 sensors to form a wireless sensor network, an advanced networking technology at the frontier of computer networks, is used to build the solution. Inputs to the model are various meteorological variables for precise weather prediction. The model is evaluated using the metrics Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). From experimentation, the applied method generated results with a minimum error of 11%; hence, the model is statistically significant and its predictions are more reliable than those of other methods.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_31-A_Machine_Learning_Approach_to_Weather_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimize and Secure Routing Protocol for Multi-hop Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130130</link>
        <id>10.14569/IJACSA.2022.0130130</id>
        <doi>10.14569/IJACSA.2022.0130130</doi>
        <lastModDate>2022-01-31T10:55:47.9930000+00:00</lastModDate>
        
        <creator>Salwa Othmen</creator>
        
        <creator>Wahida Mansouri</creator>
        
        <creator>Somia Asklany</creator>
        
        <creator>Wided Ben Daoud</creator>
        
        <subject>Mutli-hop wireless network; routing protocol; Diffie-Hellman; Weil Pairing; NS2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>A Multi-hop Wireless Network (MWN) requires the existence of wireless nodes that communicate via a wireless channel. Thus, selecting optimal paths between communicating nodes is a major challenge. Many researchers have focused on this topic and proposed routing protocols that help the nodes learn multi-hop paths. Multi-hop wireless networks are used for several types of applications, such as military, medical care, and national security. These applications are important and critical, so they require a certain level of performance and security during data communication. Securing the transmission of data in a multi-hop network is challenging because the devices have limited resources such as memory and battery. In this paper, we propose an optimal and secure routing protocol. The main goal of this proposal is to improve the performance and security of such networks by selecting a secure route between the source and its target destination. To secure the data transmission phase, we propose creating a key shared between the source and the destination. Since the devices have limited energy, we also take into consideration the energy of the intermediate nodes of the selected route. Extensive simulations are performed using the Network Simulator (NS2) to validate the proposed protocol. The proposal is compared with the secured Ad-Hoc On-demand Distance Vector (SAODV) protocol in terms of end-to-end delay, overhead, and number of compromised devices.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_30-Optimize_and_Secure_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation Optimal Prediction Performance of MLMs on High-volatile Financial Market Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130129</link>
        <id>10.14569/IJACSA.2022.0130129</id>
        <doi>10.14569/IJACSA.2022.0130129</doi>
        <lastModDate>2022-01-31T10:55:47.9770000+00:00</lastModDate>
        
        <creator>Yao HongXing</creator>
        
        <creator>Hafiz Muhammad Naveed</creator>
        
        <creator>Muhammad Usman Answer</creator>
        
        <creator>Bilal Ahmed Memon</creator>
        
        <creator>Muhammad Akhtar</creator>
        
        <subject>Support vector regression; random forest; machine learning-linear regression model; optimal prediction performance; currencies exchange rates; stock price returns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The present study evaluates the prediction performance of multiple machine learning models (MLMs) on highly volatile financial market data sets from 2007 to 2020. The linear and nonlinear empirical data sets comprise stock price returns of the Karachi Stock Exchange (KSE) 100-Index of Pakistan and exchange rates of the Pakistani Rupee (PKR) against five major currencies (USD, Euro, GBP, CHF &amp; JPY). Support vector regression (SVR), random forest (RF), and a machine learning-linear regression model (ML-LRM) are evaluated for comparative prediction performance. The findings demonstrate that SVR gives comparatively optimal prediction performance on group1, while RF gives the best prediction performance on group2. The study concludes that the RF algorithm is most appropriate for nonlinear approximation/evaluation, and the SVR algorithm is most useful for high-frequency time-series data estimation. The present study contributes by exploring the comparatively optimal machine learning model on data sets of multiple natures. This empirical study would be helpful for finance and machine-learning students, data analysts, and researchers, especially those deploying machine-learning approaches for financial analysis.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_29-Evaluation_Optimal_Prediction_Performance_of_MLMs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Diabetic Obese Patients using Fuzzy KNN Classifier based on Expectation Maximization, PCA and SMOTE Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130128</link>
        <id>10.14569/IJACSA.2022.0130128</id>
        <doi>10.14569/IJACSA.2022.0130128</doi>
        <lastModDate>2022-01-31T10:55:47.9600000+00:00</lastModDate>
        
        <creator>Ibrahim Eldesouky Fattoh</creator>
        
        <creator>Soha Safwat</creator>
        
        <subject>KNN classifier; SMOTE; PCA; diabetic obese patients</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Diabetes is a long-term disease. Inappropriate blood sugar control in diabetic patients can lead to serious issues such as kidney and heart diseases. Obesity is widely regarded as a major risk factor for type 2 diabetes. In this research, a model is proposed to predict diabetic obese patients based on the Expectation Maximization, PCA, and SMOTE algorithms in the preprocessing and feature extraction phases, and a Fuzzy KNN classifier in the prediction phase. The model is applied to a real dataset, and the accuracy of the prediction results reflects the positive effect of the preprocessing techniques. The accuracy of the proposed model is 95.97%, outperforming other models applied to the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_28-Prediction_of_Diabetic_Obese_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ambulatory Monitoring of Maternal and Fetal using Deep Convolution Generative Adversarial Network for Smart Health Care IoT System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130126</link>
        <id>10.14569/IJACSA.2022.0130126</id>
        <doi>10.14569/IJACSA.2022.0130126</doi>
        <lastModDate>2022-01-31T10:55:47.9470000+00:00</lastModDate>
        
        <creator>S. Venkatasubramanian</creator>
        
        <subject>Deep convolutional generative adversarial network; fetal health monitoring; high-risk pregnancies; internet of things; smart healthcare system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>With the increase in the number of high-risk pregnancies, it is important to monitor the health of the fetus during pregnancy. Major advances in the field have led to the development of intelligent automation systems that enable clinicians to predict and monitor Maternal and Fetal Health (MFH) with the aid of the Internet of Things (IoT). This paper provides a solution for monitoring high-risk MFH based on IoT sensors, data-analysis-based feature extraction, and an intelligent system based on the Deep Convolutional Generative Adversarial Network (DCGAN) classifier. Various clinical indicators, such as maternal and fetal heart rate, oxygen saturation, blood pressure, and maternal uterine tonus, are monitored continuously. Many data sources produce large amounts of data in different formats and at different rates. The smart health analytics system extracts several features and measures linear and non-linear dimensions. Finally, a DCGAN is proposed as a predictive mechanism for the simultaneous classification of MFH status, considering more than four possible outcomes. The results show that the proposed system for ambulatory monitoring of MFH is a practical IoT-based solution.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_26-Ambulatory_Monitoring_of_Maternal_and_Fetal_using_Deep_Convolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Periapical Radiograph Texture Features for Osteoporosis Detection using Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130127</link>
        <id>10.14569/IJACSA.2022.0130127</id>
        <doi>10.14569/IJACSA.2022.0130127</doi>
        <lastModDate>2022-01-31T10:55:47.9470000+00:00</lastModDate>
        
        <creator>Khasnur Hidjah</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Moh. Edi Wibowo</creator>
        
        <creator>Rurie Ratna Shantiningsih</creator>
        
        <subject>Osteoporosis; dental periapical radiograph; convolutional neural network; texture features; bone mineral density</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Currently, research on osteoporosis examination using dental radiographic images is increasing rapidly. Many researchers have used various methods and subject data, indicating that osteoporosis has become a widespread disease that should be studied more deeply. This study proposes a deep Convolutional Neural Network architecture operating on texture features of dental periapical radiographs for osteoporosis detection. The subjects of this study are postmenopausal Javanese women aged over 40, together with their Bone Mineral Density measurement results. The proposed model is divided into two stages: 1) image acquisition and RoI selection, and 2) feature extraction and classification. Various experiments with different numbers of convolution layers (3 to 6), various input block sizes, and other hyperparameters were conducted to obtain the best model. The best model is obtained when the input image size is greater than 100 and less than 150, with five convolution layers and the following hyperparameters: epochs=100, dropout=0.5, learning rate=0.0001, batch size=16, and the Adam optimizer. The validation and testing accuracies achieved by the best model are 98.10% and 92.50%, respectively. The research shows that larger images provide additional information about trabecular patterns in the normal, osteopenia, and osteoporosis classes, so the proposed method, using a deep convolutional neural network on textural features of the periapical radiograph, achieves good performance for detecting osteoporosis.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_27-Periapical_Radiograph_Texture_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Strategic IT GRC Framework for Healthcare Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130125</link>
        <id>10.14569/IJACSA.2022.0130125</id>
        <doi>10.14569/IJACSA.2022.0130125</doi>
        <lastModDate>2022-01-31T10:55:47.9300000+00:00</lastModDate>
        
        <creator>Fawaz Alharbi</creator>
        
        <creator>Mohammed Nour A. Sabra</creator>
        
        <creator>Nawaf Alharbe</creator>
        
        <creator>Abdulrahman A. Almajed</creator>
        
        <subject>Information technology; strategic; healthcare; governance; compliance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The rapidly changing healthcare market requires healthcare institutions to adjust their operations to address regulatory, strategic, and other risks. Healthcare organizations use a wide range of IT systems producing large amounts of sensitive and confidential data. However, few tools are available to measure the data governance activities of healthcare institutions and to align healthcare data management with legislation. The Governance, Risk, and Compliance (GRC) model focuses on integrating these capabilities to achieve organizational goals. The demand for corporate governance is crucial for protecting the healthcare system from risks. A modified version of the model, which includes strategy, processes, technology, and people, as well as legal and business requirements, was developed to analyze the factors affecting IT GRC implementation in healthcare organizations. Although about 48% of participants reported that their organizations had implemented IT GRC programs, 16% stated that they were considering implementing IT GRC programs soon. In almost 71% of healthcare organizations, IT governance, risk management, and compliance are integrated. Among the factors influencing the implementation of IT GRC programs in Saudi healthcare organizations, the legal context ranked as the most critical, followed by the process, strategy, technology, business, and finally people contexts. This study shows that healthcare organizations must assess various factors for the effective implementation of IT GRC activities.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_25-Towards_a_Strategic_IT_GRC_Framework_for_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Deployment of Road Side Units for Reliable Connectivity in Internet of Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130123</link>
        <id>10.14569/IJACSA.2022.0130123</id>
        <doi>10.14569/IJACSA.2022.0130123</doi>
        <lastModDate>2022-01-31T10:55:47.9130000+00:00</lastModDate>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Muhammad Ahsan Qureshi</creator>
        
        <subject>VANET; Roadside unit; internet of vehicle; social internet of vehicles; unmanned aerial vehicle; line of sight vehicular communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The Internet of Vehicles (IoV) promises to provide ubiquitous information exchange among moving vehicles and reliable connectivity to the internet. Therefore, IoV is becoming more and more popular as the number of connected vehicles increases. However, the existing vehicular communication infrastructure cannot guarantee reliable connectivity, because all-time information exchange for every travelling vehicle is not assured due to the lack of a sufficient number of roadside units (RSUs), especially along intercity highways. This study explores the cost-effective dynamic deployment of RSUs based on road traffic density, while ensuring Line of Sight (LOS) between RSUs and cellular network antennas. Unmanned aerial vehicles (UAVs) have the potential to serve as economical dynamic RSUs. Therefore, the use of UAVs along the roadside for providing reliable and ubiquitous information exchange among vehicles is proposed. The UAVs will be deployed along the roadside, and their placement will be changed dynamically based on the current traffic density to ensure all-time connectivity with the travelling vehicles and the other UAVs/cellular network antennas. The reliability of the proposed network is tested in terms of signal strength and packet delivery ratio (PDR) using simulation.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_23-Dynamic_Deployment_of_Road_Side_Units.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Satisfaction with Digital Wallet Services: An Analysis of Security Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130124</link>
        <id>10.14569/IJACSA.2022.0130124</id>
        <doi>10.14569/IJACSA.2022.0130124</doi>
        <lastModDate>2022-01-31T10:55:47.9130000+00:00</lastModDate>
        
        <creator>Dewan Ahmed Muhtasim</creator>
        
        <creator>Siok Yee Tan</creator>
        
        <creator>Md Arif Hassan</creator>
        
        <creator>Monirul Islam Pavel</creator>
        
        <creator>Samiha Susmit</creator>
        
        <subject>Cashless transaction; electronic payment; internet security; consumer satisfaction; e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>This study aimed to determine an efficient framework that caters to security and consumer satisfaction in digital wallet systems. A quantitative online survey was carried out to test whether six factors (i.e., transaction speed, authentication, encryption mechanisms, software performance, privacy details, and information provided) positively or negatively impact customer satisfaction. The questionnaire was divided into two sections: the respondents’ demographic data and a survey on the security factors that influence customer satisfaction. The questionnaires were distributed to the National University of Malaysia’s professors and students, and a sample of 300 respondents undertook the survey. The results suggest that many respondents agreed that the stated security factors influenced their satisfaction when using digital wallets. Previous studies indicated that financial security, privacy, system security, cybercrime, and trust impact online purchase intention. The proposed framework in this research explicitly covers the security factors of the digital wallet. This study may help digital wallet providers understand the customer&#39;s perspective on digital wallet security aspects, therefore motivating providers to implement appropriately designed regulations that will attract customers to utilize digital wallet services. Formulating appropriate security regulations will generate long-term value, leading to greater digital wallet adoption rates.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_24-Customer_Satisfaction_with_Digital_Wallet_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigate the Ensemble Model by Intelligence Analysis to Improve the Accuracy of the Classification Data in the Diagnostic and Treatment Interventions for Prostate Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130122</link>
        <id>10.14569/IJACSA.2022.0130122</id>
        <doi>10.14569/IJACSA.2022.0130122</doi>
        <lastModDate>2022-01-31T10:55:47.9000000+00:00</lastModDate>
        
        <creator>Abdelrahman Elsharif Karrar</creator>
        
        <subject>Ensemble model; intelligence analysis; classification of imbalanced data; prostate cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The class imbalance problem has become a major issue in data mining; imbalanced data appears in everyday applications, especially in health care. This research investigates the application of an ensemble model with intelligence analysis to improve the classification accuracy of imbalanced data sets on prostate cancer. The primary requirements for this study included the datasets, relevant pre-processing tools to identify missing values, models for attribute selection and cross-validation, a data resampling framework, and intelligent algorithms for base classification. Additionally, the ensemble model and meta-learning algorithms were acquired in preparation for performance evaluation by embedding feature-selection capabilities into the classification model. The experimental results led to the conclusion that applying an ensemble learning algorithm on resampled data sets provides highly accurate classification results with the single classifier J48. The study further suggests that gain ratio and ranker techniques are highly effective for attribute selection in the analysis of prostate cancer data. The lowest error rate and optimal performance accuracy in the classification of imbalanced prostate cancer data are achieved when the AdaBoost algorithm is combined with the single classifier J48.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_22-Investigate_the_Ensemble_Model_by_Intelligence_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Logistics Service Quality and Customer Satisfaction during COVID-19 Pandemic in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130121</link>
        <id>10.14569/IJACSA.2022.0130121</id>
        <doi>10.14569/IJACSA.2022.0130121</doi>
        <lastModDate>2022-01-31T10:55:47.8830000+00:00</lastModDate>
        
        <creator>Amjaad Bahamdain</creator>
        
        <creator>Zahyah H. Alharbi</creator>
        
        <creator>Muna M. Alhammad</creator>
        
        <creator>Tahani Alqurashi</creator>
        
        <subject>Logistics services; sentiment analysis; lexicon-based approach; SVM; sentiment classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Logistics companies&#39; success is inextricably linked to the quality of their services, particularly when dealing with customer issues. Nowadays, social media is the first place that users turn to in order to express their thoughts on services or to communicate with customer service representatives to resolve problems. Businesses can retrieve and analyze these data to gain a better understanding of the factors that affect their operations, both positively and negatively. During the COVID-19 pandemic, we conducted a sentiment analysis to assess customer satisfaction with logistics services in Saudi Arabia&#39;s private and public sectors. Using a lexicon-based approach, 67,124 tweets were collected and classified as positive, negative, or neutral. A support vector machine (SVM) model was used for classification, with an average accuracy of 82%. Following that, we conducted a thematic analysis of negative opinions in order to identify the factors that influenced the effectiveness and quality of logistics services. The findings reveal five negative themes: delay, customer service issues, damaged shipments, delivery issues, and hidden prices. Finally, we make suggestions to improve the efficiency and quality of logistics services.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_21-Analysis_of_Logistics_Service_Quality_and_Customer_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image Cryptanalysis using Adaptive, Lightweight Neural Network based Algorithm for IoT based Secured Cloud Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130120</link>
        <id>10.14569/IJACSA.2022.0130120</id>
        <doi>10.14569/IJACSA.2022.0130120</doi>
        <lastModDate>2022-01-31T10:55:47.8830000+00:00</lastModDate>
        
        <creator>M V Narayana</creator>
        
        <creator>Ch Subba Lakshmi</creator>
        
        <creator>Rishi Sayal</creator>
        
        <subject>Cloud storage; IoT; medical image; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Modern medical systems generate large amounts of data, such as computerized patient records and digital medical images, which must be kept securely for future reference. Existing storage technologies are not capable of storing such large amounts of data efficiently, making secure storage a key and overriding topic of technical, social, and medical significance, as well as a subject of general interest. In the Internet of Things, vehicles, consumer devices, industrial components, sensors, and other everyday objects are fused with Internet connectivity and rich information capabilities that promise to alter the way we work and live in the future. The suggested work demonstrates a symmetric-key lightweight technique for secure data transmission of images and text, which uses an image encryption system and a reversible data hiding system to demonstrate the program&#39;s implementation. Cloud storage services can meet this demand due to features such as flexibility and availability, and cloud computing is enabled by advances in Internet technology as well as cutting-edge electronic equipment. However, even though medical images may be stored on the cloud, most cloud service providers only save client data in plain text, so cloud users must take responsibility for protecting medical data as part of their overall strategy. Because attackers&#39; increasing computing power and creativity open up ever more avenues of attack, most existing image encryption schemes are vulnerable to chosen-plaintext attacks. This article presents an image encryption method inspired by an Adaptive IoT-based Hopfield Neural Network (AIHNN) that can resist such attacks while optimizing and improving the system through continuous learning and updating.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_20-Medical_Image_Cryptanalysis_using_Adaptive_Lightweight_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determine the Level of Concentration of Students in Real Time from their Facial Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130119</link>
        <id>10.14569/IJACSA.2022.0130119</id>
        <doi>10.14569/IJACSA.2022.0130119</doi>
        <lastModDate>2022-01-31T10:55:47.8670000+00:00</lastModDate>
        
        <creator>Bouhlal Meriem</creator>
        
        <creator>Habib Benlahmar</creator>
        
        <creator>Mohamed Amine Naji</creator>
        
        <creator>Elfilali Sanaa</creator>
        
        <creator>Kaiss Wijdane</creator>
        
        <subject>Emotion recognition; level of concentration; transfer learning; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In traditional classroom teaching, students&#39; facial expressions give the teacher a clue to their level of concentration in the course. With the rapid development of information technology, e-learning is taking off because students can learn anywhere, at any time, and whenever they feel comfortable, which also opens the possibility of self-learning. Analyzing student concentration can help improve the learning process, but when the student is working alone on a computer in an e-learning environment, this task is particularly challenging to accomplish. Due to the distance between the teacher and the students, face-to-face communication is not possible in an e-learning environment. This article proposes using transfer learning and data augmentation techniques to determine learners&#39; concentration level from their facial expressions in real time. We found that expressed emotions correlate with students&#39; concentration, and we defined three distinct levels of concentration (highly concentrated, nominally concentrated, and not at all concentrated).</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_19-Determine_the_Level_of_Concentration_of_Students_in_Real_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Fake News Detection based on Deep Learning, FastText and News Title</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130118</link>
        <id>10.14569/IJACSA.2022.0130118</id>
        <doi>10.14569/IJACSA.2022.0130118</doi>
        <lastModDate>2022-01-31T10:55:47.8530000+00:00</lastModDate>
        
        <creator>Youssef Taher</creator>
        
        <creator>Adelmoutalib Moussaoui</creator>
        
        <creator>Fouad Moussaoui</creator>
        
        <subject>Fake news; automatic detection; deep learning; FastText; news title</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>As a daily phenomenon, fake news is quickly becoming a longstanding issue affecting individuals as well as the public and private sectors. This major challenge of the connected, modern world can cause severe, real damage, such as manipulating public opinion, damaging reputations, contributing to losses in stock market value, and posing many risks to global health. With the fast spread of online misinformation, manually checking fake news becomes an ineffective solution (not obvious, difficult, and time-consuming). Improvements in Deep Learning Networks (DLN) can support the classical processes of fake news spotting with a high degree of accuracy and efficiency. Key improvement strategies include optimizing the Word Embedding Layer (WEL) and finding relevant fake-news-predicting features. In this context, and based on six DLN architectures, the FastText process as WEL, and the Inverted Pyramid as the News Article Pattern (IPP), the present paper assesses the first news article feature hypothesized to affect fake news prediction performance: the news title. By assessing the impact that the Embedding Vector Size (EVS), Window Size (WS), and Minimum Frequency of Words (MFW) in the news title corpus can have on the DLN, the experiments carried out in this paper show that the news title feature and the FastText process yield a significant improvement in DLN fake news detection, with accuracy rates exceeding 98%.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_18-Automatic_Fake_News_Detection_based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Intelligent Hydroponics System to Identify Macronutrient Deficiencies in Chili</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130117</link>
        <id>10.14569/IJACSA.2022.0130117</id>
        <doi>10.14569/IJACSA.2022.0130117</doi>
        <lastModDate>2022-01-31T10:55:47.8370000+00:00</lastModDate>
        
        <creator>Deffa Rahadiyan</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Wahyono</creator>
        
        <creator>Andri Prima Nugroho</creator>
        
        <subject>Multi Layer perceptron; internet of things; feature combination; leaf image; nutrient deficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Nutrient contents are important for plants, and a lack of macronutrients causes plant damage. Several macronutrient deficiencies exhibit similar visual characteristics that are difficult for ordinary farmers to identify. The collaboration between computer vision technology and IoT has become a non-destructive method for nutrient monitoring and control, including in hydroponic systems. Computer vision plays a role in processing plant image data based on specific characteristics; however, the analysis of a single characteristic cannot represent plant health. In addition, knowing the percentage of macronutrient deficiency is also needed to support precision agriculture systems. Therefore, we propose a Multi Layer Perceptron architecture that can perform multiple tasks, namely identification and estimation. The optimal architecture is also sought based on the combination of three features: texture, color, and leaf shape. Based on our analysis and design, the proposed model has a high potential for identifying and estimating macronutrient deficiency simultaneously and can be applied to support precision agriculture in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_17-Design_of_An_Intelligent_Hydroponics_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Node Monitoring Agent based Handover Mechanism for Effective Communication in Cloud-Assisted MANETs in 5G</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130116</link>
        <id>10.14569/IJACSA.2022.0130116</id>
        <doi>10.14569/IJACSA.2022.0130116</doi>
        <lastModDate>2022-01-31T10:55:47.8370000+00:00</lastModDate>
        
        <creator>B. V.S Uma Prathyusha</creator>
        
        <creator>K.Ramesh Babu</creator>
        
        <subject>MANET; clustering; cloud computing; 5G Wireless networks; cloud-assisted MANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>As nodes often join or leave the network, the communication between the cloud and the MANET remains unreliable in Cloud-Assisted MANET. The event of connection failure in MANET presents several challenges to the network, in particular, the handover issue and high energy consumption during route re-establishment if a connection fails in D2D (device-to-device) communication networks. To address this problem of D2D mobile communication in 5G, we propose a Node Monitoring Agent Based Handover Mechanism (NMABHM). To improve the network&#39;s efficiency, we use the K-means algorithm for clustering and cluster head selection in Hybrid MANET and maintain a backup routing table based on a Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to quickly recover the route. Additionally, a Node Monitoring Agent (NMA) is introduced to handle the handover issue if a node comes out of range during the communication phase. The NMABHM-based handover mechanism is proposed with group mobility over a cluster-based architecture involving agents. The results of the simulation indicate that, in terms of lower energy consumption and higher throughput, our proposed mechanism is more effective than the current routing mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_16-A_Node_Monitoring_Agent_based_Handover_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Neural Network Model for Detection of Security Attacks in IoT Enabled Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130115</link>
        <id>10.14569/IJACSA.2022.0130115</id>
        <doi>10.14569/IJACSA.2022.0130115</doi>
        <lastModDate>2022-01-31T10:55:47.8200000+00:00</lastModDate>
        
        <creator>Amit Sagu</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <creator>Preeti Gulia</creator>
        
        <subject>Internet of things; deep learning; optimization; convolutional neural network; security attack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The extensive use of Internet of Things (IoT) appliances has greatly contributed to the growth of smart cities. The smart city deploys IoT-enabled applications, communications, and technologies to improve quality of life, people’s wellbeing, and the quality of services for service providers, and to increase operational efficiency. Nevertheless, the expansion of smart city networks has become a serious hazard due to increased cyber security attacks and threats. Consequently, it is all the more important to develop system models for preventing attacks and to protect IoT devices from these hazards. This paper presents a novel deep hybrid attack detection method. The input data is first subjected to a preprocessing phase in which data normalization is carried out. From the preprocessed data, statistical and higher-order statistical features are extracted. Finally, the extracted features are fed to a hybrid deep learning model to detect the presence of an attack. The proposed hybrid classifier combines models such as the Convolutional Neural Network (CNN) and the Deep Belief Network (DBN). To make the detection more precise and accurate, the training of the CNN and DBN is carried out using the Seagull Adopted Elephant Herding Optimization (SAEHO) model to tune the optimal weights.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_15-Hybrid_Deep_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using a Rule-based Model to Detect Arabic Fake News Propagation during Covid-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130114</link>
        <id>10.14569/IJACSA.2022.0130114</id>
        <doi>10.14569/IJACSA.2022.0130114</doi>
        <lastModDate>2022-01-31T10:55:47.8070000+00:00</lastModDate>
        
        <creator>Fatimah L. Alotaibi</creator>
        
        <creator>Muna M. Alhammad</creator>
        
        <subject>Fake news; Covid-19; text classification; rule-based system; trends</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Since the emergence of Covid-19, both factual and false information about the new virus has been disseminated. Fake news harms societies and must be combated. This research aims to identify Arabic fake news tweets and classify them into six categories: entertainment, health, politics, religious, social, and sports. The study also aims to uncover patterns in the spread of Arabic fake news associated with the Covid-19 pandemic. The researchers created an Arabic dictionary and used text classification based on a rule-based system to detect and categorize fake news. A dataset consisting of 5 million tweets was analyzed. The developed model achieves an overall accuracy of 78.1%, with 70% precision and 98% recall, and detected more than 26,006 fake news tweets. Interestingly, we found an association between the number of fake news tweets and the date: as more information and knowledge about Covid-19 became available over time, people&#39;s awareness increased while the number of fake news tweets decreased. The categorization of false news indicates that the social category was highest in all Arab countries except Palestine, Qatar, Yemen, and Algeria. Conversely, fake news in the entertainment category showed the weakest dissemination in most Arab countries.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_14-Using_a_Rule_based_Model_to_Detect_Arabic_Fake_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Malware Detection using Shapely Boosting Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130113</link>
        <id>10.14569/IJACSA.2022.0130113</id>
        <doi>10.14569/IJACSA.2022.0130113</doi>
        <lastModDate>2022-01-31T10:55:47.8070000+00:00</lastModDate>
        
        <creator>Rajesh Kumar</creator>
        
        <creator>Geetha S</creator>
        
        <subject>Artificial intelligence; machine learning; malware detection; shapely value; decision plot; waterfall plot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Malware constitutes a prime exploitation tool for attacking software vulnerabilities and thus poses a threat to security. The growing number of malware samples generated as exploitation tools demands effective detection methods. Machine learning methods are effective in detecting malware, and their effectiveness can be increased by analyzing how the features that build the model contribute to detection. The model can be made robust by gaining insight into how features contribute for each sample fed to the trained model. In this paper, a boosting machine learning model based on LightGBM is enhanced with Shapley values to detect the contribution of the top nine features to correct classifications (true positives and true negatives) and to misclassifications (false positives and false negatives). This insight into the model can be used for effective and robust malware detection and to avoid wrong detections such as false positives and false negatives. Comparing the top features and their Shapley-value contributions for each category of sample provides insight and inductive learning into the model, revealing the reasons for misclassification. This inductive learning can be transformed into rules, and predictions by the trained model can be re-evaluated against such rules to ensure effective and robust prediction and avoid misclassification. The models achieve performance between 97.45 at minimum and 98.48 at maximum under 10-fold cross-validation.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_13-Effective_Malware_Detection_using_Shapely.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drug Sentiment Analysis using Machine Learning Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130112</link>
        <id>10.14569/IJACSA.2022.0130112</id>
        <doi>10.14569/IJACSA.2022.0130112</doi>
        <lastModDate>2022-01-31T10:55:47.7900000+00:00</lastModDate>
        
        <creator>Mohammed Nazim Uddin</creator>
        
        <creator>Md. Ferdous Bin Hafiz</creator>
        
        <creator>Sohrab Hossain</creator>
        
        <creator>Shah Mohammad Mominul Islam</creator>
        
        <subject>Machine Learning Algorithms; natural language processing; drugs sentiment analysis; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>In recent times, one of the most emerging sub-dimensions of natural language processing is sentiment analysis, which refers to analyzing opinions on a particular subject from plain text. Drug sentiment analysis has become very significant, as classifying medicines based on their effectiveness by analyzing user reviews can assist potential future consumers in gaining knowledge and making better decisions about a particular drug. The objective of this research is to measure the effectiveness level of a particular drug. Currently, most text mining research is based on unsupervised machine learning methods for clustering data. When supervised learning methods are used for text mining, the usual primary concern is to classify the data into two classes, and the lack of technical terms in such datasets makes the categorization even more challenging. The proposed research focuses on finding keywords through tokenization and lemmatization so that better accuracy can be achieved in categorizing drugs based on their effectiveness using different algorithms. Such categorization can be instrumental in treating illness as well as improving one’s health and well-being. Four machine learning algorithms were applied for binary classification and one for multiclass classification on the drug review dataset acquired from the UCI machine learning repository. The algorithms used for binary classification are the naive Bayes classifier, random forest, support vector classifier (SVC), and multilayer perceptron; linear SVC was used for multiclass classification. Results from these classifiers were analyzed to evaluate their performance, and random forest proved to have the best performance among the four binary classifiers. However, multiclass classification was found to perform poorly when applied to this natural language processing task; in contrast, the linear SVC algorithm performed better for class 2, with an AUC of 0.82.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_12-Drug_Sentiment_Analysis_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Special Negative Database (SNDB) for Protecting Privacy in Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130111</link>
        <id>10.14569/IJACSA.2022.0130111</id>
        <doi>10.14569/IJACSA.2022.0130111</doi>
        <lastModDate>2022-01-31T10:55:47.7730000+00:00</lastModDate>
        
        <creator>Tamer Abdel Latif Ali</creator>
        
        <creator>Mohamed Helmy Khafagy</creator>
        
        <creator>Mohamed Hassan Farrag</creator>
        
        <subject>Big data; big data challenges; privacy violations; privacy-preserving techniques; special negative database; data integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Despite the importance of big data, it faces many challenges. The most important are data storage, heterogeneity, inconsistency, timeliness, security, scalability, visualization, fault tolerance, and privacy. This paper concentrates on privacy, one of the most pressing issues with big data. As discussed in the literature review below, there are numerous methods for safeguarding privacy in big data. This paper introduces an efficient technique called the Special Negative Database (SNDB) for protecting privacy in big data, proposed to avoid the drawbacks of all previous techniques. SNDB is based on deceiving malicious users and hackers by replacing only the sensitive attribute with its complement, so that a malicious user cannot differentiate between the original data and the data after applying this technique.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_11-Special_Negative_Database_SNDB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis Measuring the Performance of Multi-threading in Parallel Merge Sort</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130110</link>
        <id>10.14569/IJACSA.2022.0130110</id>
        <doi>10.14569/IJACSA.2022.0130110</doi>
        <lastModDate>2022-01-31T10:55:47.7730000+00:00</lastModDate>
        
        <creator>Muhyidean Altarawneh</creator>
        
        <creator>Umur Inan</creator>
        
        <creator>Basima Elshqeirat</creator>
        
        <subject>Parallel merge sort; sort; multithread; degree of multithreading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Sorting is one of the most frequent concerns in Computer Science, and various sorting algorithms have been invented for specific requirements. As these requirements and capabilities grow, sequential processing becomes inefficient; therefore, algorithms are being enhanced to run in parallel to achieve better performance. The performance of parallel algorithms differs depending on the degree of multithreading. This study determines the optimal number of threads to use in parallel merge sort and provides a comparative analysis of various degrees of multithreading. The empirical experiment uses a group of devices with various specifications. On each device, a fixed-size data set is taken and merge sort is executed with the sequential and parallel algorithms, using the lowest average runtime to measure efficiency. In all experiments, single-threaded execution is more efficient when the data size is less than 10^5, claiming the lowest runtime in 53% of cases compared with the multithreaded executions. The overall average of the experiments shows that either four or eight threads (in 72% and 28% of cases, respectively) are most efficient when data sizes exceed 10^5.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_10-Empirical_Analysis_Measuring_the_Performance_of_Multi_threading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Various Antenna Structures Performance Analysis based Fuzzy Logic Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130109</link>
        <id>10.14569/IJACSA.2022.0130109</id>
        <doi>10.14569/IJACSA.2022.0130109</doi>
        <lastModDate>2022-01-31T10:55:47.7600000+00:00</lastModDate>
        
        <creator>Chafaa Hamrouni</creator>
        
        <creator>Aarif Alutaybi</creator>
        
        <creator>Slim Chaoui</creator>
        
        <subject>Antenna; antenna element; function; fuzzy logic function; fuzzy inference system; Matlab; Simulink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The antenna is a critical component of a communication system, used in wireless communication for signal transmission and reception over long distances. There are numerous types of antennas, such as wire antennas, traveling wave antennas, reflector antennas, microstrip antennas, and so on. The application of an antenna is determined by its attributes as well as its frequency range of operation. As a result, it is vital to understand the behavior of antennas over a wide range of operation and select the optimum antenna for the application. The performance parameters of an antenna determine its efficiency; VSWR, return loss, directivity, bandwidth, and other parameters are available. Antenna analysis is therefore one of the primary areas of focus. In this study, we simulate various antenna types and derive performance parameters such as return loss and directivity, using MATLAB to simulate the antennas at various frequencies. When all of the parameters are taken into account, the analysis becomes quite difficult. To handle this ambiguity, we use fuzzy logic to calculate the antenna&#39;s performance index. A variety of antenna parameters are fed into the fuzzy inference system, which makes a judgment based on a set of rules. The crisp numbers are turned into fuzzy values through the fuzzification process, then evaluated and defuzzified to obtain the antenna&#39;s performance index. The fuzzy inference system is developed in MATLAB, and the overall system is modeled in Simulink.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_9-Various_Antenna_Structures_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Improvement of Ocean Wind Speed Estimation Accuracy by Taking into Account the Relation between Wind Speed and Wind Direction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130107</link>
        <id>10.14569/IJACSA.2022.0130107</id>
        <doi>10.14569/IJACSA.2022.0130107</doi>
        <lastModDate>2022-01-31T10:55:47.7430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kenta Azuma</creator>
        
        <subject>Sea surface wind speed (WS); Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E); NCEP Global Data Assimilation System (GDAS); relative wind direction (RWD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>A method for the improvement of ocean wind speed estimation accuracy by taking into account the relation between wind speed and wind direction is proposed. The brightness temperature observed with a microwave radiometer onboard a satellite is modified with the microwave-radiometer-derived wind direction proposed by Frank Wentz. Using the modified brightness temperature, wind speed is estimated more precisely. Experiments with AMSR-E and NCEP GDAS data show improvements in wind speed estimation in comparison to the existing method based on the geophysical model of Frank Wentz together with the retrieval algorithm of Akira Shibata.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_7-Method_for_Improvement_of_Ocean_Wind_Speed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Security Impacts and Cryptographic Techniques in Cloud-based e-Learning Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130108</link>
        <id>10.14569/IJACSA.2022.0130108</id>
        <doi>10.14569/IJACSA.2022.0130108</doi>
        <lastModDate>2022-01-31T10:55:47.7430000+00:00</lastModDate>
        
        <creator>Lavanya-Nehan Degambur</creator>
        
        <creator>Sheeba Armoogum</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>e-Learning; cloud computing; data management; pseudonymization; data deduplication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>e-Learning has transformed the perception of teaching and learning with respect to knowledge delivery and knowledge acquisition. Today, e-learning participants access and upload their materials at any time and from any place, since e-learning technologies are typically hosted on the cloud. Cloud computing has become the base platform for the future of e-learning; however, security and privacy remain major concerns. Because cloud-hosted e-learning technologies are accessed over the internet, they suffer from the same risks to the core information security aspects, namely availability, confidentiality, and integrity. In such a context, data authenticity, privacy, access rights, and digital footprints are vulnerable in the cloud. Research in this domain focuses on specific components of the cloud and e-learning without covering a holistic view of applied cryptographic techniques and their practical implementation. Hence, targeting the various security aspects and impacts of cloud-based e-learning technologies, this paper reviews the cryptographic techniques used to secure data across the whole end-to-end cloud-based e-learning service spectrum, using a systematic review and an exploratory method. The results define several sets of criteria to evaluate the requirements of cryptographic techniques and propose an implementation framework across an end-to-end cloud-based e-learning architecture using multi-agent software.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_8-A_Study_of_Security_Impacts_and_Cryptographic_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proficient Networking Protocol for BPLC Network Built on Adaptive Multicast, PNP-BPLC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130106</link>
        <id>10.14569/IJACSA.2022.0130106</id>
        <doi>10.14569/IJACSA.2022.0130106</doi>
        <lastModDate>2022-01-31T10:55:47.7270000+00:00</lastModDate>
        
        <creator>Ali Md Liton</creator>
        
        <creator>Zhi Ren</creator>
        
        <creator>Dong Ren</creator>
        
        <creator>Xin Su</creator>
        
        <subject>Powerline communication; network delay; control overhead; central coordinator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>The data link layer networking protocol of the existing broadband power line carrier communication standard IEEE 1901.1 suffers from several problems: multiple primary nodes that receive the beacon send association request messages, and the central coordinator (CCO) sends an association confirmation message immediately after receiving each message to confirm their role; because the CCO&#39;s association confirmation replies are not timely, the network-access success rate is reduced while the network-access delay and control overhead are high. Based on the characteristics of the BPLC carrier network, a proficient networking protocol for broadband PLC built on adaptive multicast (PNP-BPLC) is proposed in this paper. The simulation results show that using adaptive multicast when the CCO replies to the primary stations can effectively improve the access success rate of low-voltage PLC carrier communication nodes, decrease the network-access delay, and cut the network control overhead. Finally, the OPNET simulation software was used to validate the simulation results.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_6-Proficient_Networking_Protocol_for_BPLC_Network_Built.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Distributed Denial of Service in Network Traffic with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130105</link>
        <id>10.14569/IJACSA.2022.0130105</id>
        <doi>10.14569/IJACSA.2022.0130105</doi>
        <lastModDate>2022-01-31T10:55:47.7100000+00:00</lastModDate>
        
        <creator>Muhammad Rusyaidi</creator>
        
        <creator>Sardar Jaf</creator>
        
        <creator>Zunaidi Ibrahim</creator>
        
        <subject>Cybersecurity; Cyber-attack; DDoS attack; machine learning; deep learning; recurrent neural networks; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>COVID-19 has altered the way businesses throughout the world perceive cyber security. It resulted in a series of unique cyber-crime-related conditions that impacted society and business. Distributed Denial of Service (DDoS) attacks have dramatically increased in recent years. Automated detection of this type of attack is essential to protect business assets. In this research, we demonstrate the use of different deep learning algorithms to accurately detect DDoS attacks. We show the effectiveness of Long Short-Term Memory (LSTM) algorithms in detecting DDoS attacks in computer networks with high accuracy. The LSTM algorithms have been trained and tested on the widely used NSL-KDD dataset. We empirically demonstrate that our proposed model achieves high accuracy (~97.37%). We also show the effectiveness of our model in detecting 22 different types of attacks.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_5-Detecting_Distributed_Denial_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Facial Recognition System using One Shot Multispectral Filter Array Acquisition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130104</link>
        <id>10.14569/IJACSA.2022.0130104</id>
        <doi>10.14569/IJACSA.2022.0130104</doi>
        <lastModDate>2022-01-31T10:55:47.7100000+00:00</lastModDate>
        
        <creator>M. El&#233;onore Elvire HOUSSOU</creator>
        
        <creator>A. Tidjani SANDA MAHAMA</creator>
        
        <creator>Pierre GOUTON</creator>
        
        <creator>Guy DEGLA</creator>
        
        <subject>Multispectral image database; multispectral imaging; multispectral filter array (MSFA); one-shot camera; facial recognition system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Face recognition in the visible and near-infrared range has received a lot of attention in recent years. Current multispectral (MS) imaging systems used for facial recognition are based on multiple cameras with multiple sensors. These acquisition systems are normally slow because they take one MS image in several shots, which makes them unable to acquire images in real time or to capture moving scenes. On the other hand, there are now snapshot multispectral imaging systems that integrate a single sensor with a Multispectral Filter Array (MSFA), yielding an image over several spectral bands at each acquisition. These systems drastically reduce image acquisition time and are able to capture moving scenes in real time. This paper proposes a study of robust facial recognition using a Multispectral Filter Array acquisition system. To this end, an MSFA one-shot camera, covering the spectral range from 650 nm to 950 nm, was used to collect the images, and a robust facial recognition method based on the Fast Discrete Curvelet Transform and a Convolutional Neural Network is proposed. The facial recognition system using the MSFA camera is compared with those using multiple cameras. Experimental results show that face recognition systems whose acquisition systems are designed using an MSFA perform more efficiently, with an accuracy of 100%.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_4-Robust_Facial_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Smart IoT Device for Monitoring Short-term Exposure to Air Pollution Peaks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130103</link>
        <id>10.14569/IJACSA.2022.0130103</id>
        <doi>10.14569/IJACSA.2022.0130103</doi>
        <lastModDate>2022-01-31T10:55:47.6970000+00:00</lastModDate>
        
        <creator>Eric Nizeyimana</creator>
        
        <creator>Jimmy Nsenga</creator>
        
        <creator>Ryosuke Shibasaki</creator>
        
        <creator>Damien Hanyurwimfura</creator>
        
        <creator>JunSeok Hwang</creator>
        
        <subject>Short-duration air pollution peak/spike; real-time monitoring; short-term prediction; immutable data; blockchain; AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Air pollution spikes harm human beings and the environment. Exposure to air pollution spikes has been shown to have a significant impact on mental health, especially in children at an early age, and can lead to depression or suicide. Previous research concentrated on air pollution in general, and existing monitoring systems do not consider short-term air pollution peaks. This paper presents the co-design of the hardware and software of an IoT device to monitor short-duration air pollution spikes in real time. The system combines two technologies: edge computing, to capture short-term exposure, and a mathematical distribution model for analyzing the captured data. The system identifies the start and end of the spikes for each pollutant. Monte Carlo simulation is used in this research to predict the next spike of each pollutant, and artificial intelligence is used to analyze the immutable data for short-term prediction. After the analysis, legislators can rely on smart contracts created using blockchain to reduce pollution based on its source.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_3-Design_of_Smart_IoT_Device_for_Monitoring_Short_term_Exposure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knock Knock, Who’s There: Facial Recognition using CNN-based Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130102</link>
        <id>10.14569/IJACSA.2022.0130102</id>
        <doi>10.14569/IJACSA.2022.0130102</doi>
        <lastModDate>2022-01-31T10:55:47.6800000+00:00</lastModDate>
        
        <creator>Qiyu Sun</creator>
        
        <creator>Alexander Redei</creator>
        
        <subject>Face recognition; deep learning; convolutional neural networks; DeepFace</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>Artificial intelligence (AI) has captured the public’s imagination. Performance gains in computing hardware and the ubiquity of data have enabled new innovations in the field. In 2014, Facebook’s DeepFace AI took the facial recognition industry by storm with its splendid performance on image recognition. While newer models exist, DeepFace was the first to achieve near-human-level performance. To better understand how this breakthrough performance was achieved, we developed our own facial image detection models. In this paper, we developed and evaluated six Convolutional Neural Network (CNN) models inspired by the DeepFace architecture to explore facial feature identification. This research made use of the YouTube Faces (YTF) dataset, which includes 621,126 images of 1,595 identities. Three models leveraged pretrained layers from VGG16 and InceptionResNetV2, whereas the other three did not. Our best model achieved an 84.6% accuracy on the test dataset.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_2-Knock_Knock_Whos_There_Facial_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Impact of Type-I Virtualization on a NewSQL Relational Database Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2022</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2022.0130101</link>
        <id>10.14569/IJACSA.2022.0130101</id>
        <doi>10.14569/IJACSA.2022.0130101</doi>
        <lastModDate>2022-01-31T10:55:47.6630000+00:00</lastModDate>
        
        <creator>J. Bryan Osborne</creator>
        
        <subject>Database benchmarking; NewSQL; relational database; virtualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 13(1), 2022</description>
        <description>For more than 40 years, the relational database management system (RDBMS) and the atomicity, consistency, isolation, durability (ACID) transaction guarantees provided through its use have been the standard for data storage. The advent of Big Data created a need for new storage approaches that led to NoSQL technologies, which rely on basic availability, soft-state, eventual consistency (BASE) transactions. Over the last decade, NewSQL RDBMS technology has emerged, providing the benefits of RDBMS ACID transaction guarantees and the performance and scalability of NoSQL databases. The reliance on virtualization in IT has continued to grow, but an investigation of current academic literature identified a void regarding the performance impact of virtualization of NewSQL databases. To help address the lack of research in this area, a quantitative experimental study was designed and carried out to answer the central research question, &quot;What is the performance impact of Type-I virtualization on a NewSQL RDBMS?&quot; VMware ESXi virtualization software, the NuoDB RDBMS, and OLTP-Bench software were used to execute a mixed-load benchmark. Performance metrics were collected comparing bare-metal and virtualized environments, and the data were analyzed statistically to evaluate five hypotheses related to CPU utilization, memory utilization, disk and network input-output (I/O) rates, and database transactions per second. Findings indicated a negative performance impact on CPU and memory utilization, as well as network I/O rates. Performance improvements were noted in disk I/O rates and database transactions per second.</description>
        <description>http://thesai.org/Downloads/Volume13No1/Paper_1-Performance_Impact_of_Type_I_Virtualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of BDAG Aided Blockchain Technology in Clustered Mobile Ad-Hoc Network for Secure Data Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212116</link>
        <id>10.14569/IJACSA.2021.01212116</id>
        <doi>10.14569/IJACSA.2021.01212116</doi>
        <lastModDate>2021-12-31T05:43:14.9900000+00:00</lastModDate>
        
        <creator>B. Harikrishnan</creator>
        
        <creator>T. Balasubaramanian</creator>
        
        <subject>Mobile ad-hoc network; node authentication; BLISS algorithm; blockchain; Bayesian directed acyclic graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In a mobile ad-hoc network (MANET) environment, routing of data packets is a challenging task due to rapid changes in mobility and network topology. In addition, the security of routing is disturbed by attacks caused by malicious nodes, which greatly affect the Quality of Service. To overcome the challenges faced in routing message packets, the Bayesian Directed Acyclic Graph (BDAG) Aided Blockchain model is proposed for a clustered MANET environment. The proposed model encompasses the following processes: (i) multi-factor authentication of users using the BLISS algorithm, in which user credentials are acquired, hash values for those credentials are generated using the CubeHash algorithm, and these hash values are then used by the BLISS algorithm to generate public and private keys; (ii) weighted-sum computation for clustering to reduce complexity in the MANET environment, in which cluster heads (CH) and cluster members (CM) are classified based on energy status, geometric distance, link quality, and direction; (iii) a secure AODV-based routing protocol using the Dolphin Swarm Optimization (DSO) algorithm, in which reputed nodes are selected based on link stability, relative velocity, available bandwidth, energy, queue length, and trust, and packet forwarding is based on the reputation value of the node, so that the trust provided by malicious nodes is eliminated to improve security; and (iv) a Bayesian DAG aided blockchain, in which user authenticity, packet data integrity, and signatures are verified to mitigate the routing attacks created by nodes in the MANET environment. The proposed model is evaluated in the NS-3.26 network simulator, and its performance in terms of multiple QoS metrics is assessed.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_116-Performance_Evaluation_of_BDAG_Aided_Blockchain_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text to Image GANs with RoBERTa and Fine-grained Attention Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212115</link>
        <id>10.14569/IJACSA.2021.01212115</id>
        <doi>10.14569/IJACSA.2021.01212115</doi>
        <lastModDate>2021-12-31T05:43:14.9730000+00:00</lastModDate>
        
        <creator>Siddharth M</creator>
        
        <creator>R Aarthi</creator>
        
        <subject>Natural language processing; computer vision; GANs; AttnGAN; RoBERTa</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Synthesizing new images from textual descriptions requires understanding the context of the text and is a very challenging problem in Natural Language Processing and Computer Vision. Existing systems use a Generative Adversarial Network (GAN) with a simple text encoder to generate images from captions. This paper synthesizes images from textual descriptions on the Caltech-UCSD Birds dataset, using Attentional Generative Adversarial Networks (AttnGAN) as the baseline generative model and the pre-trained RoBERTa neural language model for word embeddings. The results are compared with the baseline AttnGAN model, and various analyses are conducted on incorporating the RoBERTa text encoder in place of the simple encoder in the existing system. Various performance improvements were noted compared to the baseline attentional generative networks: the FID score decreased from 23.98 with AttnGAN to 20.77 when the RoBERTa model was incorporated into AttnGAN.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_115_Text_to_Image_GANs_with_RoBERTa.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New SARIMA Approach Model to Forecast COVID-19 Propagation: Case of Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212114</link>
        <id>10.14569/IJACSA.2021.01212114</id>
        <doi>10.14569/IJACSA.2021.01212114</doi>
        <lastModDate>2021-12-31T05:43:14.9430000+00:00</lastModDate>
        
        <creator>Ibtissam CHOUJA</creator>
        
        <creator>Sahar SAOUD</creator>
        
        <creator>Mohamed SADIK</creator>
        
        <subject>COVID-19; machine learning; seasonal autoregressive integrated moving average; SARIMA; statistical modeling; time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The aim of this paper is to help avoid future health crises by analysing COVID-19 data from Morocco using time series to obtain more information about how the pandemic is spreading. To this end, we used a statistical model called Seasonal Autoregressive Integrated Moving Average (SARIMA) to forecast new confirmed cases, new deaths, and cumulative cases and deaths. Besides predicting the spread of COVID-19, this study will also help decision makers to make the right decisions at the right time. Finally, we evaluated the performance of our model using metrics such as the Mean Squared Error (MSE). Applying our SARIMA model for forward forecasting over a period of 50 days, the reported MSE was 62196.46 for cumulative case forecasting and 621.14 for cumulative death forecasting.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_114-New_SARIMA_Approach_Model_to_Forecast_COVID_19_Propagation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Weighted Edit Distance and N-gram Language Models to Improve Spelling Correction of Segmentation Errors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212113</link>
        <id>10.14569/IJACSA.2021.01212113</id>
        <doi>10.14569/IJACSA.2021.01212113</doi>
        <lastModDate>2021-12-31T05:43:14.9270000+00:00</lastModDate>
        
        <creator>Hicham GUEDDAH</creator>
        
        <subject>Spelling correction; error; natural language; insertion; space; distance; language models; probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Most research that has dealt with the correction of spelling errors has not tackled errors caused by the misuse of space (deletion or insertion of a space). Failing to deal with this type of error poses a problem of understanding and makes the meaning of the sentence containing such errors ambiguous. In this article, we propose a new approach to correct errors due to the insertion of a space in a word while at the same time correcting other types of editing errors. This approach is based on the edit distance and uses bigram language models to correct words in context. Tests conducted on hundreds of erroneous words (containing space insertions and/or simple editing errors) made it possible to assess the relevance and validity of the methods developed to correct this type of error. Comparisons with existing approaches highlight the contribution of the methods proposed in this article.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_113-Efficient_Weighted_Edit_Distance_and_N_gram_Language_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Fog-cloud Architecture using Attribute-based Encryption for the Medical Internet of Things (MIoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212112</link>
        <id>10.14569/IJACSA.2021.01212112</id>
        <doi>10.14569/IJACSA.2021.01212112</doi>
        <lastModDate>2021-12-31T05:43:14.8970000+00:00</lastModDate>
        
        <creator>Suhair Alshehri</creator>
        
        <creator>Tahani Almehmadi</creator>
        
        <subject>MIoT; fog computing; cloud computing; ciphertext-policy; attribute-based encryption; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The medical internet of things (MIoT) has brought about radical transformations in people’s lives by offering innovative solutions to health-related issues. It enables healthcare professionals to continually monitor various medical concerns in their patients without requiring visits to hospitals or healthcare professionals’ offices. The various MIoT systems and applications promote healthcare services that are more readily available, accessible, quality-controlled, and cost-effective. An essential requirement when developing MIoT architectures is to secure medical data, as MIoT devices produce considerable amounts of highly sensitive, diverse real-time data. The MIoT architectures discussed in previous works possess numerous security issues. The integration of fog computing and MIoT is acknowledged as an encouraging and suitable solution for addressing the challenges of data security. In order to ensure data security and to prevent unauthorized access, medical information is kept in fog nodes and safely transported to the cloud. This paper presents a secure fog-cloud architecture using attribute-based encryption for MIoT to protect medical data. It investigates the feasibility of the proposed architecture and its ability to intercept security threats. The results demonstrate the feasibility of adopting the fog-based implementation to protect medical data whilst conserving MIoT resources, and its capability to prevent various security attacks.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_112-A_Secure_Fog_cloud_Architecture_using_Attribute_based_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure and Efficient Proof of Ownership Scheme for Client-Side Deduplication in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212111</link>
        <id>10.14569/IJACSA.2021.01212111</id>
        <doi>10.14569/IJACSA.2021.01212111</doi>
        <lastModDate>2021-12-31T05:43:14.8800000+00:00</lastModDate>
        
        <creator>Amer Al-Amer</creator>
        
        <creator>Osama Ouda</creator>
        
        <subject>Client-side deduplication; proof of ownership; convergent encryption; cloud storage services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Data deduplication is an effective mechanism that reduces the required storage space of cloud storage servers by avoiding storing several copies of the same data. In contrast with server-side deduplication, client-side deduplication can not only save storage space but also reduce network bandwidth. Client-side deduplication schemes, however, might suffer from serious security threats. For instance, an adversary can spoof the server and gain access to a file he/she does not possess by claiming to own it. In order to thwart such a threat, the concept of proof-of-ownership (PoW) has been introduced. However, the security of existing PoW schemes cannot be assured without affecting the computational complexity of client-side deduplication. This paper proposes a secure and efficient PoW scheme for client-side deduplication in cloud environments with minimal computational overhead. The proposed scheme utilizes convergent encryption to encrypt a sufficiently large block specified by the server to challenge the client that claims possession of the file requested to be uploaded. To ensure that the client owns the entire file contents, and hence to resist collusion attacks, the server challenges the client by requesting it to split the file it asks to upload into fixed-sized blocks and then encrypt a randomly chosen block using a key formed by extracting one bit at a specified location in all other blocks. This ensures a significant reduction in the communication overhead between the server and the client. Computational complexity analysis and experimental results demonstrate that the proposed PoW scheme outperforms state-of-the-art PoW techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_111-Secure_and_Efficient_Proof_of_Ownership_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lattice-based Group Enlargement for a Robot Swarm based on Crystal Growth Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212110</link>
        <id>10.14569/IJACSA.2021.01212110</id>
        <doi>10.14569/IJACSA.2021.01212110</doi>
        <lastModDate>2021-12-31T05:43:14.8500000+00:00</lastModDate>
        
        <creator>Kohei Yamagishi</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Multi-robot systems; self-organization; distributed control; crystal growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Swarm robotic systems control multiple robots in a coordinated manner, using this flexible coordination to solve complex tasks in various environments. Such systems can utilize the individual capabilities of robots scattered within the swarm as well as the collective capabilities of the assembled robots. By coordinating these capabilities, swarms can solve tasks with a range of purposes, including carrying out rough sweeps of the overall environment using scattered robots or detailed observation of a part of the environment using assembled robots. This study developed a self-organization method for constructing regular groups of robots from scattered robots to achieve coordination between individual and collective states. An approach that integrates elements of self-organization with different input information requires centralized control to manage them. To provide this self-organization without centralized control, we focus on using the phase-field method and cellular automata to facilitate crystal growth that produces ordered structures from scattered particles. We formulate a method for arranging robots in a self-organizing manner based on the geometrical regularities of tileable lattices (honeycomb, square, and hexagonal lattices) on a two-dimensional plane, demonstrate the process undertaken in carrying out the proposed method, and quantitatively evaluate the effectiveness of the lattice-based geometrical regularity approach. The proposed method contributes to carrying out tasks with a range of purposes by organizing states with either individual or collective capabilities of robot groups.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_110-Lattice_based_Group_Enlargement_for_a_Robot_Swarm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling and Simulating Exit Selection during Assisted Hospital Evacuation Process using Fuzzy Logic and Unity3D</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212109</link>
        <id>10.14569/IJACSA.2021.01212109</id>
        <doi>10.14569/IJACSA.2021.01212109</doi>
        <lastModDate>2021-12-31T05:43:14.8330000+00:00</lastModDate>
        
        <creator>Intiaz Mohammad Abir</creator>
        
        <creator>Ali Ahmed Ali Moustafa Allam</creator>
        
        <creator>Azhar Mohd Ibrahim</creator>
        
        <subject>Evacuation simulation; exit selection; fuzzy logic; unity3D</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Evacuation procedures are an integral aspect of the emergency response strategy of a hospital. Evacuation simulation models help to properly evaluate and improve evacuation strategies. However, the issue of exit selection during evacuation is often overlooked and oversimplified in evacuation simulation models. Moreover, most of the available evacuation simulation models lack integration of movement devices and assisted evacuation features. Addressing these limitations is necessary to properly evaluate evacuation strategies. To tackle this problem, we propose an effective approach to model exit selection using a fuzzy logic controller (FLC) and simulate assisted hospital evacuation using the Unity3D game engine. Our research demonstrates that selecting exits based on distance alone is not sufficient for real-life situations because it ignores the unpredictability of human behavior. On the contrary, the use of the proposed FLC for exit selection makes the simulation more realistic by addressing the uncertainty and randomness in an evacuee’s decision-making process. This research can play a vital role in future developments of evacuation simulation models.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_109-Modelling_and_Simulating_Exit_Selection_during_Assisted_Hospital.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OBEInsights: Visual Analytics Design for Predictive OBE Knowledge Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212108</link>
        <id>10.14569/IJACSA.2021.01212108</id>
        <doi>10.14569/IJACSA.2021.01212108</doi>
        <lastModDate>2021-12-31T05:43:14.8030000+00:00</lastModDate>
        
        <creator>Leona Donna Lumius</creator>
        
        <creator>Mohammad Fadhli Asli</creator>
        
        <subject>Visual analytics; visualization; learning analytics; outcome-based education (OBE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Gaining traction in modern higher education, outcome-based education (OBE) focuses on strategizing pedagogical approaches to help students achieve specified learning outcomes. In the context of Malaysia, OBE is oriented towards the holistic development of graduates to ensure readiness for the working sector. To empower OBE implementation, the standardized measuring instrument iCGPA was introduced to higher education institutions nationwide. With lower dependency on the provided curriculum, graduate abilities and values development are also attainable via extracurricular activities. However, analyzing curriculum results together with extracurricular activities can be a daunting task, despite the potential for enriched performance assessment. In addition, the current iCGPA instrument employs a radar map that restricts data exploration despite its capability in visualizing multivariate information. This study aims to enable predictive knowledge generation for understanding the relationship between learning activities and performance in OBE. Therefore, a predictive visual analytics system named OBEInsights is proposed to facilitate education analysts in assessing OBE. The system development started with the identification of crucial design and analytic requirements via a domain expert case study. The system was then built with deliberate consideration of the guiding factors of a design framework conceptualized from the case study. Subsequently, the system was evaluated in usability testing with 10 domain experts, consisting of usability ratings and expert validation. The evaluation and expert validation results demonstrated the effectiveness and usability of OBEInsights in facilitating OBE predictive assessment. Several design insights on constructing visual analytics for OBE assessment were also discovered in terms of effective visualization, predictive modeling, and knowledge generation. Analytic designers and builders can leverage the reported design insights to enhance knowledge generation tools for OBE assessment.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_108-OBEInsights_Visual_Analytics_Design_for_Predictive_OBE_Knowledge_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Multi-Object Tracking based on Faster RCNN and Improved Deep Appearance Metric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212107</link>
        <id>10.14569/IJACSA.2021.01212107</id>
        <doi>10.14569/IJACSA.2021.01212107</doi>
        <lastModDate>2021-12-31T05:43:14.7870000+00:00</lastModDate>
        
        <creator>Mohan Gowda V</creator>
        
        <creator>Megha P Arakeri</creator>
        
        <subject>Multi-object detection; tracking; faster RCNN; convolutional neural network; data association</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Computer Vision has set a new trend in image resolution, object detection, object tracking, and more by incorporating advanced techniques from Artificial Intelligence (AI). Object detection and tracking have many use cases such as driverless cars, security systems, patient monitoring, and so on. Various methods have been proposed to overcome challenges such as long-term occlusion, identity switching, and fragmentation in real-time multi-object detection and tracking. However, reducing the number of identity switches and fragmentations remains an open problem in multi-object detection and tracking. Hence, in this paper, we propose a multi-object detection and tracking technique that involves two stages. The first stage detects multiple objects with high uniqueness using Faster RCNN, and the second stage, improved sqrt-cosine similarity, tracks the multiple objects using appearance and motion features. Finally, we evaluated the proposed technique on the Multi-Object Tracking (MOT) benchmark dataset against current state-of-the-art methods. The proposed technique resulted in enhanced accuracy and reduced identity switching and fragmentation.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_107-Real_Time_Multi_Object_Tracking_based_on_Faster_RCNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Stock Closing Prices in Emerging Markets with Transformer Neural Networks: The Saudi Stock Exchange Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212106</link>
        <id>10.14569/IJACSA.2021.01212106</id>
        <doi>10.14569/IJACSA.2021.01212106</doi>
        <lastModDate>2021-12-31T05:43:14.7700000+00:00</lastModDate>
        
        <creator>Nadeem Malibari</creator>
        
        <creator>Iyad Katib</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <subject>Stock price prediction; time-series forecasting; transformer deep neural networks; Saudi Stock Exchange (Tadawul); financial markets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Deep learning has transformed many fields including computer vision, self-driving cars, product recommendations, behaviour analysis, natural language processing (NLP), and medicine, to name a few. The financial sector is no exception, where the use of deep learning has produced some of the most lucrative applications. This research proposes a novel fintech machine learning method that uses Transformer neural networks for stock price predictions. Transformers are relatively new and, while they have been applied to NLP and computer vision, they have not been explored much with time-series data. In our method, self-attention mechanisms are utilized to learn nonlinear patterns and dynamics from time-series data with high volatility and nonlinearity. The model makes predictions about closing prices for the next trading day by taking into account various stock price inputs. We used pricing data from the Saudi Stock Exchange (Tadawul) to develop this model. We validated our model using four error evaluation metrics. The applicability and usefulness of our model to fintech are demonstrated by its ability to predict closing prices with a probability above 90%. To the best of our knowledge, this is the first work where Transformer networks are used for stock price prediction. Our work is expected to make significant advancements in fintech and other fields that depend on time-series forecasting.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_106-Predicting_Stock_Closing_Prices_in_Emerging_Markets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Steel Continuous Casting Problem using an Artificial Intelligence Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212105</link>
        <id>10.14569/IJACSA.2021.01212105</id>
        <doi>10.14569/IJACSA.2021.01212105</doi>
        <lastModDate>2021-12-31T05:43:14.7400000+00:00</lastModDate>
        
        <creator>Achraf BERRAJAA</creator>
        
        <subject>Artificial intelligence; SCC Program; RNN; LSTM; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Over the past decade, the steel continuous casting problem has evolved in important and remarkable ways. In this paper, we consider a multiple parallel device version of the steel continuous casting (SCC) problem, known as one of the hardest scheduling problems. The SCC problem is an important NP-hard combinatorial optimization problem and can be seen as a three-stage hybrid flowshop problem. To solve it, we propose a recurrent neural network (RNN) with LSTM cells executed in the cloud. For our problem, we consider several machines at each stage: the converter stage, the refining stage, and the continuous casting stage. We formulate the mathematical model and implement an RNN with LSTM cells to approximately solve the problem. The proposed neural network has been trained on a large dataset containing 10,000 real use cases as well as randomly generated ones. The performance of the proposed model is very promising: the success rate is 93%, and it is able to solve very large instances on which traditional approaches are limited and fail. We analyzed the results taking into account both the quality of the solution and the prediction time to highlight the performance of the approach.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_105-Solving_the_Steel_Continuous_Casting_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable Android Malware Detection Scheme using Deep Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212104</link>
        <id>10.14569/IJACSA.2021.01212104</id>
        <doi>10.14569/IJACSA.2021.01212104</doi>
        <lastModDate>2021-12-31T05:43:14.7100000+00:00</lastModDate>
        
        <creator>Abdulaziz Alzubaidi</creator>
        
        <subject>Smartphone security; machine learning; mobile malware; classification; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The immense popularity of smartphones has led to the constant use of these devices for productive and entertainment purposes in daily life. Among the different operating systems, the Android system plays a very important role in the development of mobile technology as it is the most popular operating system. This makes it a target for cyberattacks, with severe negative effects in terms of monetary and privacy costs. Thus, this study implements a detection scheme using effective deep learning algorithms (LSTM and MLP). It also tests their ability to detect malware by employing private and public datasets, achieving accuracy of over 99%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_104-Sustainable_Android_Malware_Detection_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework and Method for Measurement of Particulate Matter Concentration using Low Cost Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212103</link>
        <id>10.14569/IJACSA.2021.01212103</id>
        <doi>10.14569/IJACSA.2021.01212103</doi>
        <lastModDate>2021-12-31T05:43:14.6930000+00:00</lastModDate>
        
        <creator>Shree Vidya Gurudath</creator>
        
        <creator>Krishna Raj P M</creator>
        
        <creator>Srinivasa K G</creator>
        
        <subject>Air pollution; low cost sensor; optical dust sensors; particulate matter; MQTT; ponte</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Rapid urbanisation and infrastructure shortcomings, leading to heavy traffic and heavy construction activities, are major contributors to the emission of particulate matter into the ambient atmosphere. This is especially true in developing countries such as India and China. There have been numerous attempts by government authorities and civic agencies to curtail pollution, but these efforts have been in vain. Cities like Beijing and New Delhi suffer from extremely unhealthy air quality during multiple months of the year. Hence, the onus of keeping oneself safe from the extreme effects of air pollution falls on the individual. The following study presents a method and framework to measure particulate matter (PM2.5) concentration using low-cost sensors and to infer patterns from the data collected. The study uses an SDS011 high-precision laser PM2.5 detector module combined with a Raspberry Pi, which communicates the measurements through the message queueing telemetry transport (MQTT) protocol to a Ponte server, which in turn persists the data into MongoDB, where it can be consumed by algorithms for further analysis. For example, the data obtained from the sensors can be fused with data from measurement stations and geographical land use information to estimate dense spatio-temporal pollution maps, which are the basis for computing individual exposure to pollutants.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_103-Framework_and_Method_for_Measurement_of_Particulate_Matter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Anti-theft Alarm System for Vehicles using IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212102</link>
        <id>10.14569/IJACSA.2021.01212102</id>
        <doi>10.14569/IJACSA.2021.01212102</doi>
        <lastModDate>2021-12-31T05:43:14.6770000+00:00</lastModDate>
        
        <creator>Jorge Arellano-Zubiate</creator>
        
        <creator>Jheyson Izquierdo-Calongos</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Global mobile communications system; global positioning system; internet of things; radio frequency identification; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Automobiles have become one of the most sought-after targets for criminals due to their worldwide popularity. Crime is reflected in the statistics, which show that over the years the rate of vehicle theft has been on the rise. As part of the fight against this crime, vehicles come with certain systems incorporated to avoid this type of situation, with many outstanding results. In this research project, a system was developed that, through the application of the Internet of Things (IoT), manages software and hardware technologies giving the user access to various actions, such as vehicle location through the global positioning system (GPS) and identification of the offender through radio frequency identification (RFID), as well as the global system for mobile communications (GSM). The objective of the research is to design a mobile and IoT application to reduce robberies in the department of Lima, Peru, using the scrum methodology. The result obtained is the design of the mobile application, with its anti-theft system, vehicle blocking, and notification of unauthorized ignition.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_102-Design_of_an_Anti_theft_Alarm_System_for_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Web System to Improve the Evaluation System of an Institute in Lima</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212101</link>
        <id>10.14569/IJACSA.2021.01212101</id>
        <doi>10.14569/IJACSA.2021.01212101</doi>
        <lastModDate>2021-12-31T05:43:14.6470000+00:00</lastModDate>
        
        <creator>Franco Manrique Jaime</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>COVID-19; evaluation; learning platform; scrum; web system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In Peru and around the world, millions of students saw their education interrupted by the problems brought by the COVID-19 virus. Because of this, many educational entities began to adopt learning platforms and web systems so that the teaching process would not be affected, having to comply with all the guidelines and requirements of the institution to solve any academic difficulty. That is why, in the present work, the implementation of a web system was proposed to improve the qualification and evaluation processes of an institution using the scrum methodology, since it is an agile framework that is based on empiricism and offers adaptation and flexibility in projects. For the software development, the open-source language PHP was used since it is well adapted to these web systems, together with MySQL, which is a database manager for relational databases. The result of this research was the correct implementation of this system in the educational institution, verifying the absence of errors and the improvement of the processes involved so that the institution can provide students with an adequate learning process.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_101-Implementation_of_a_Web_System_to_Improve_the_Evaluation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relationship between Stress and Academic Performance: An Analysis in Virtual Mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01212100</link>
        <id>10.14569/IJACSA.2021.01212100</id>
        <doi>10.14569/IJACSA.2021.01212100</doi>
        <lastModDate>2021-12-31T05:43:14.6170000+00:00</lastModDate>
        
        <creator>Janet Corzo Zavaleta</creator>
        
        <creator>Roberto Yon Alva</creator>
        
        <creator>Samuel Vargas Vargas</creator>
        
        <creator>Eleazar Flores Medina</creator>
        
        <creator>Yrma Principe Somoza</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Academic performance; anxiety; stress; teaching-learning; virtual mode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>This research work analyzes the relationship between stress and academic performance of engineering students at the University of Sciences and Humanities, in Peru, in the context of the pandemic. During this period, classes at the university were held virtually, and the difficulties that students face in attending their classes were identified, such as lack of connectivity, family and financial problems, and anxiety. The objective of the research is to analyze the relationship between stress and academic performance of engineering students. The work follows a mixed approach, and data collection was carried out through interviews and surveys of engineering students based on the two variables identified: stress and academic performance. It began with a descriptive analysis, then moved on to inferential analysis to perform the hypothesis test. SPSS was used for the reliability analysis using Cronbach’s alpha, with 0.84 as a result, and validation by expert judgment with 84.5% acceptance. The result was that there is no relationship between the stress variable, based on its three dimensions, and academic performance, since the P-value obtained is greater than 0.05; it is concluded that stress is not only academic, and other types, such as work-related and social stress, should also be considered. In addition, positive stress that drives academic performance emerged. The beneficiaries of the research are the students and the university. It is concluded that there is no relationship between stress and academic performance.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_100-Relationship_between_Stress_and_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Web System to Detect Anemia in Children of Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121299</link>
        <id>10.14569/IJACSA.2021.0121299</id>
        <doi>10.14569/IJACSA.2021.0121299</doi>
        <lastModDate>2021-12-31T05:43:14.6000000+00:00</lastModDate>
        
        <creator>Ricardo Leon Ayala</creator>
        
        <creator>Noe Vicente Rosas</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Anemia; diagnosis; health; scrum; web system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Nowadays, anemia is considered a worldwide problem that not only seriously affects our health but also has economic and social consequences. Therefore, this work seeks to provide a solution to the problem of detecting anemia with a non-invasive method in a quick, simple, and low-cost way. In this research work, a web system was designed applying the scrum methodology to detect anemia and simplify the anemia detection process in Peruvian children. In addition, this study shows as a result a technological prototype that helped in the diagnosis of anemia; at the same time, it provides food recommendations to patients to combat anemia efficiently, with a variety of recipes and ingredients that are available in any home, helping in the recovery process. Furthermore, the analysis carried out on children with anemia in Peru is shown, where it is known that Puno is the most affected department. With respect to the capital, Lima, the most affected district is Callao. However, this number is expected to drop considerably in the coming years.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_99-Implementation_of_a_Web_System_to_Detect_Anemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of an Expert System for Automated Symptom Consultation in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121298</link>
        <id>10.14569/IJACSA.2021.0121298</id>
        <doi>10.14569/IJACSA.2021.0121298</doi>
        <lastModDate>2021-12-31T05:43:14.5830000+00:00</lastModDate>
        
        <creator>Gilson Vasquez Torres</creator>
        
        <creator>Luis Lunarejo Aponte</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Automated query; buchanan; expert system; prolog; symptoms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Human life is fragile and is attacked by different diseases throughout its course; neglecting or ignoring some of them because they are considered minor can be fatal. Yet many people do not want to attend a health center, so they search for their symptoms on the Internet and find pages with false information; that is the problem we address in this investigation. The objective of the research is to implement an expert system, creating a web page that provides reliable information when a user enters their symptoms. This was achieved based on rule logic developed in Prolog: when a user fills out the questionnaire, the expert system follows the rules to conclude with the desired diagnosis. All these steps were carried out using the Buchanan methodology. The result was an improvement in the accessibility of truthful information through the Internet, facilitating appointment management for users with a serious illness, or treatment in the case of a minor illness. The beneficiaries of the research were the population that required the use of the automated consultation application.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_98-Implementation_of_an_Expert_System_for_Automated_Symptom.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Aimed at Older Adults to Increase Cognitive Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121297</link>
        <id>10.14569/IJACSA.2021.0121297</id>
        <doi>10.14569/IJACSA.2021.0121297</doi>
        <lastModDate>2021-12-31T05:43:14.5670000+00:00</lastModDate>
        
        <creator>Ricardo Leon-Ayala</creator>
        
        <creator>Gerald G&#243;mez-Cortez</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Alzheimer’s; balsamiq; mobile; prototype; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The research work focuses on people with dementia of the Alzheimer’s type, since, among the types of dementia, this is the most common worldwide. In Peru, more than 200 thousand adults over 60 years of age suffer from this disease, and many others do not yet know they have it or are in its initial stage. Therefore, it was decided to create a prototype of a mobile application with memory games, riddles, reminders, and different types of physical activities to perform during the day. The Scrum methodology was implemented to promote good practices for team and collaborative work across its phases, from inception to launch of the product, which is the mobile application. In addition, Balsamiq was used as a prototype design tool, and the objective of creating the prototype for the application was achieved. Positive results were obtained in terms of user and customer satisfaction. This will benefit older adults by improving cognitive ability, enabling them to perform their daily activities in the best way and to socialize with family and friends.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_97-Mobile_Application_Aimed_at_Older_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secured 6-Digit OTP Generation using B-Exponential Chaotic Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121296</link>
        <id>10.14569/IJACSA.2021.0121296</id>
        <doi>10.14569/IJACSA.2021.0121296</doi>
        <lastModDate>2021-12-31T05:43:14.5370000+00:00</lastModDate>
        
        <creator>Rasika Naik</creator>
        
        <creator>Udayprakash Singh</creator>
        
        <subject>One-time password generation; B-exponential chaotic map; 6-digit one-time password; online transactions; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Today, traditional username and password systems are becoming less popular on the Internet due to their vulnerabilities. These systems are prone to replay attacks and eavesdropping. During the Coronavirus pandemic, most important transactions take place online. Hence, we require a more secure method, such as one-time password generation, to avoid online fraud. One-time password generation has multiple techniques. With one-time password generation, it has become possible to overcome the drawbacks posed by traditional username and password systems. The one-time password is a two-way authentication technique, and hence secure one-time password generation is very important. The current method of one-time password generation is time-consuming and consumes a lot of memory on backend servers. The 4-digit one-time password system limits its use to 9999 users, and with advanced deep learning approaches and faster computing it is possible to break through the existing one-time password generation method. Hence, we need a system that is not vulnerable to predictive learning algorithms. We propose a 6-digit one-time password generation technique based on a B-exponential chaotic map. The proposed 24-bit (6-digit) one-time password system offers 120 times higher security compared to traditional 4-digit systems, with a faster backend computing system that selects 24 bits out of 10^8 bits in 89 seconds at 1.09 kilobits per millisecond. The proposed method can be used for online transactions, online banking, and even automated teller machines.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_96-Secured_6_Digit_OTP_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-oriented Inter-organizational Collaboration between Healthcare Providers to Handle the COVID-19 Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121294</link>
        <id>10.14569/IJACSA.2021.0121294</id>
        <doi>10.14569/IJACSA.2021.0121294</doi>
        <lastModDate>2021-12-31T05:43:14.5070000+00:00</lastModDate>
        
        <creator>Ilyass El Kassmi</creator>
        
        <creator>Zahi Jarir</creator>
        
        <subject>Blockchain; inter-organizational collaboration; choreography; permissioned blockchain; business process management; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Collaborative business activities have aroused great interest from organizations because of the benefits they offer. However, sharing data, services, and resources and exposing them to external use can make organizations reluctant to engage in collaboration. Therefore, the need for advanced mechanisms to ensure trust between the different parties involved is paramount. In this context, blockchain and smart contracts are promising solutions for performing choreography processes. However, the seamless integration of these technologies as non-functional requirements in the design and implementation phases of inter-organizational collaborative activities is a challenging task, as reported in the literature. Consequently, the proposed approach aims to extend the modeling and implementation of the choreography lifecycle based on service-oriented processes. This is fulfilled by integrating blockchain transactions and smart contract calls, allowing collaboration and interoperability between different entities while guaranteeing trust and auditability. Moreover, to conduct this extension efficiently, we use a BPMN choreography diagram combined with Finite State Automata to ensure meticulous modeling that targets the processes’ internal interactions. Hyperledger Fabric is used as a permissioned blockchain for the proof-of-concept implementation. A use case of COVID-19 collaborative processes is used to evaluate our approach, which aims to guarantee fluid collaboration between healthcare providers and epidemiological entities at a national scale in Morocco.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_94-Blockchain_oriented_Inter_organizational_Collaboration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Language Tutoring Tool based on AI and Paraphrase Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121295</link>
        <id>10.14569/IJACSA.2021.0121295</id>
        <doi>10.14569/IJACSA.2021.0121295</doi>
        <lastModDate>2021-12-31T05:43:14.5070000+00:00</lastModDate>
        
        <creator>Anas Basalamah</creator>
        
        <subject>LTT; NLG; NLU; paraphrase detection; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>A language tutoring tool (LTT) helps users learn a language through casual, human-like conversations. Natural language understanding (NLU) and natural language generation (NLG) are two key components of an LTT. In this paper, we propose a paraphrase detection algorithm that is used as the building block of the NLU. Our proposed tree-LSTM with a self-attention method for paraphrase detection achieves an accuracy of 87% with only 6.5M parameters, making it more robust and lighter than other existing paraphrase detection algorithms. Furthermore, we discuss an LTT prototype using the proposed algorithm, featuring components such as message analysis, grammar detection, dialogue management, and response generation. Each component is discussed in detail in the methodology section of this paper.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_95-A_Language_Tutoring_Tool_based_on_AI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-functional Requirements (NFR) Identification Method using FR Characters based on ISO/IEC 25023</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121293</link>
        <id>10.14569/IJACSA.2021.0121293</id>
        <doi>10.14569/IJACSA.2021.0121293</doi>
        <lastModDate>2021-12-31T05:43:14.4730000+00:00</lastModDate>
        
        <creator>Nurbojatmiko</creator>
        
        <creator>Eko K. Budiardjo</creator>
        
        <creator>Wahyu C. Wibowo</creator>
        
        <subject>Non functional requirements; FR character; ISO/IEC 25023; NFR identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Research shows that software quality depends on both Functional Requirements (FR) and Non-Functional Requirements (NFR). Developers typically identify NFR attributes by interviewing stakeholders. The difficulty of identifying NFR attributes means that quality requirements are often ignored. The basic concept of software quality measurement is measuring the quality of the software product. During product-based quality measurement, repetition of the software development process can occur. Factors for measuring software product quality are not suitable for NFR identification. These differences result in the software development process repeating itself and in additional costs. This research proposes easy identification of NFR attributes using FR characters. The tight relations between NFR and FR are obtained by extending the NFR measurement in ISO/IEC 25023 to the programming-code level, then generalizing to obtain the FR character. The generalization uses the Grounded Theory method. The result is an NFR attribute identification method using FR characters based on ISO/IEC 25023. Analysts or programmers can identify NFR attributes from the FR using the FR character at the requirements stage. This research produces an NFR identification method that has been validated through experiments with several programmers and experts. In the tests, programmers identify NFR using the FR character method, and the similarity of the resulting NFR is measured. The test results show a similarity level above 75%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_93-Non_functional_Requirements_NFR_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Tourist Visit in Taman Negara Pahang, Malaysia using Regression Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121292</link>
        <id>10.14569/IJACSA.2021.0121292</id>
        <doi>10.14569/IJACSA.2021.0121292</doi>
        <lastModDate>2021-12-31T05:43:14.4600000+00:00</lastModDate>
        
        <creator>Sofianita Mutalib</creator>
        
        <creator>Athila Hasya Razali</creator>
        
        <creator>Siti Nur Kamaliah Kamarudin</creator>
        
        <creator>Shamimi A Halim</creator>
        
        <creator>Shuzlina Abdul-Rahman</creator>
        
        <subject>Regression models; SDG; Taman Negara Pahang; tourist analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Tourism is among the significant sources of income for Malaysia, and Taman Negara Pahang is one of Malaysia&#39;s tourism spots and part of the heritage of Malaysia in achieving the Sustainable Development Goals (SDG). It has attracted many international and local tourists for its richness in flora and fauna. Currently, information on tourists’ visits is not properly analyzed. This study integrates internal and public information to analyze the visits. The regression models used are multiple linear regression, support vector regression, and decision tree regression to predict the tourism demand for Taman Negara, Malaysia, and the best model was deployed. Predictive analytics can support the decision-making process for tourism destination management. When the management gets a heads-up on future demand, they can choose strategic planning and be more aware of the factors influencing tourism demand, such as tourists’ web search engine behaviors for accommodation, facilities, and attractions. Determining the factors affecting tourism demand is the first objective. The total number of visitors was set as the target variable in the modeling process. A total of 30 models were generated by tuning the cross-validation parameters. This study concluded that the best model is multiple linear regression due to its lower root mean square error (RMSE) value.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_92-Prediction_of_Tourist_Visit_in_Taman_Negara_Pahang.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workflow Scheduling and Offloading for Service-based Applications in Hybrid Fog-Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121290</link>
        <id>10.14569/IJACSA.2021.0121290</id>
        <doi>10.14569/IJACSA.2021.0121290</doi>
        <lastModDate>2021-12-31T05:43:14.4270000+00:00</lastModDate>
        
        <creator>Saleh M. Altowaijri</creator>
        
        <subject>Workflow scheduling; workflow offloading; cloud computing; fog computing; edge computing; scientific workflows</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Fog and edge computing have emerged as important paradigms to address many challenges related to time-sensitive and real-time applications, high network loads, user privacy, security, and other issues. While these developments offer huge potential, many efforts are needed to study and design applications and systems for these emerging computing paradigms. This paper provides a detailed study of workflow scheduling and offloading for service-based applications. We develop different models of cloud, fog, and edge systems and study the scheduling of workflows (such as scientific and machine learning workflows) using a range of system sizes and application intensities. Firstly, we develop several Markov models of cloud, fog, and edge systems and compute the steady-state probabilities for system utilization and stability. Secondly, using steady-state probabilities, we define a range of system metrics to study the performance of workflow scheduling and offloading, including network load, response delay, energy consumption, and energy costs. An extensive investigation of application intensities and cloud, fog, and edge system sizes reveals that significant benefits can be accrued from the use of fog and edge computing in terms of low network loads, response times, energy consumption, and costs.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_90-Workflow_Scheduling_and_Offloading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Machine Learning Algorithms for House Price Prediction: Case Study in Kuala Lumpur</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121291</link>
        <id>10.14569/IJACSA.2021.0121291</id>
        <doi>10.14569/IJACSA.2021.0121291</doi>
        <lastModDate>2021-12-31T05:43:14.4270000+00:00</lastModDate>
        
        <creator>Shuzlina Abdul-Rahman</creator>
        
        <creator>Nor Hamizah Zulkifley</creator>
        
        <creator>Ismail Ibrahim</creator>
        
        <creator>Sofianita Mutalib</creator>
        
        <subject>House price; house price prediction; machine learning; property; regression analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>House price is affected significantly by several factors, and determining a reasonable house price involves a calculative process. This paper proposes advanced machine learning (ML) approaches for house price prediction. Two recent advanced ML algorithms, namely LightGBM and XGBoost, were compared with two traditional approaches: multiple regression analysis and ridge regression. This study utilizes a secondary dataset called ‘Property Listing in Kuala Lumpur’, gathered from Kaggle and Google Map, containing 21984 observations with 11 variables, including a target variable. The performance of the ML models was evaluated using mean absolute error (MAE), root mean square error (RMSE), and adjusted r-squared value. The findings revealed that the house price prediction model based on XGBoost showed the highest performance, generating the lowest MAE and RMSE and the adjusted r-squared value closest to one, consistently outperforming the other ML models. A new dataset consisting of 1300 samples was used at the model deployment stage. It was found that the percentage of variance between the actual and predicted price was relatively small, which indicated that this model is reliable and acceptable. This study can greatly assist in predicting future house prices and the establishment of real estate policies.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_91-Advanced_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supervised Learning through Classification Learner Techniques for the Predictive System of Personal and Social Attitudes of Engineering Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121289</link>
        <id>10.14569/IJACSA.2021.0121289</id>
        <doi>10.14569/IJACSA.2021.0121289</doi>
        <lastModDate>2021-12-31T05:43:14.4130000+00:00</lastModDate>
        
        <creator>Omar Chamorro-Atalaya</creator>
        
        <creator>Soledad Olivares-Zegarra</creator>
        
        <creator>Alejandro Paredes-Soria</creator>
        
        <creator>Oscar Samanamud-Loyola</creator>
        
        <creator>Marco Anton-De los Santos</creator>
        
        <creator>Juan Anton-De los Santos</creator>
        
        <creator>Maritte Fierro-Bravo</creator>
        
        <creator>Victor Villanueva-Acosta</creator>
        
        <subject>Supervised learning; classification learner; predictive system; personal and social attitudes; engineering students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In this competitive scenario of the educational system, higher education institutions use intelligent learning tools and techniques to predict the factors of student academic performance. Given this, the article aims to determine a supervised learning model for the predictive system of personal and social attitudes of university students in professional engineering careers. For this, the Machine Learning Classification Learner technique is used in the Matlab R2021a software. The results reflect a predictive system capable of classifying the four satisfaction classes (1: dissatisfied, 2: not very satisfied, 3: satisfied, and 4: very satisfied) with an accuracy of 91.96%, a precision of 79.09%, a sensitivity of 75.66%, and a specificity of 92.09%, regarding the students&#39; perception of their personal and social attitudes. As a result, the higher education institution will be able to take measures to monitor and correct the strengths and weaknesses of each variable related to satisfaction with the quality of the educational service.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_89-Supervised_Learning_through_Classification_Learner_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SCADA and Distributed Control System of a Chemical Products Dispatch Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121288</link>
        <id>10.14569/IJACSA.2021.0121288</id>
        <doi>10.14569/IJACSA.2021.0121288</doi>
        <lastModDate>2021-12-31T05:43:14.3970000+00:00</lastModDate>
        
        <creator>Omar Chamorro-Atalaya</creator>
        
        <creator>Dora Arce-Santillan</creator>
        
        <creator>Guillermo Morales-Romero</creator>
        
        <creator>Nic&#233;foro Trinidad-Loli</creator>
        
        <creator>Adri&#225;n Quispe-And&#237;a</creator>
        
        <creator>C&#233;sar Le&#243;n-Velarde</creator>
        
        <subject>Distributed control; supervision; acquisition; chemical products; dispatch of chemical supplies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The objective of this article is to present the application of a supervision, control, and data acquisition system for an industrial network handling chemical products. The design of the control logic and the architecture of the industrial network based on Profibus-DP and Ethernet protocols is described, with an ET-200 peripheral terminal station, the Siemens CP433 programmable logic controller, and level sensors coupled to radar-type transmitters with an accuracy of &#177;0.5 mm. As findings of the implementation of the control system, it was possible to demonstrate optimal regulation of the filling system for trucks with three compartments of 300 kilograms capacity each, eliminating spills of the chemical product and reducing polluting particles in the work environment. Finally, as a direct consequence, the productivity of the company was improved, which is a relevant aspect at the level of planning, management, and direction.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_88-SCADA_and_Distributed_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on e-Wastage Frameworks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121287</link>
        <id>10.14569/IJACSA.2021.0121287</id>
        <doi>10.14569/IJACSA.2021.0121287</doi>
        <lastModDate>2021-12-31T05:43:14.3800000+00:00</lastModDate>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Sudan Jha</creator>
        
        <creator>Abubaker E. M. Eljialy</creator>
        
        <creator>Shakir Khan</creator>
        
        <subject>e-Wastage; e-Wastage management; barriers; policy; findings; e-Wastage regulations; industrializing countries; industrialized countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Electronic devices targeted at end users have become essential parts of day-to-day life. Traditional methodologies have changed drastically, resulting in efficient modes of communication and fast information retrieval. As demand and production grow exponentially, patterns of sales, storage, destruction, and subsequent collection have also changed. This paper analyzes such behaviors in (electronic) waste management and recommends solutions such as recycling management and the directives and policies required to be followed. The authors emphasize providing substantial information that can be useful to the regulating authorities responsible for waste management, the manufacturers of various electronic products, and policy makers. With an extensive review of electronic waste, the authors emphasize three variables (sales, stock, and lifespan) for replacing/upgrading older products with advanced versions. The root causes of electronic waste are found in industrializing countries like India, China, Vietnam, Pakistan, the Philippines, Ghana, and Nigeria, whereas industrialized countries also play an equally important role in its generation. This paper signifies the importance of e-waste management practice in reducing emerging electronic waste hazards. The authors focus on today’s demand for electronic devices, the importance of e-waste management, and management practices. The paper recommends key findings based on survey data regarding the lack of regulation to manage e-waste. The review concludes that lack of regulation and improper awareness are the basic factors responsible for e-waste and require major focus in managing it.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_87-A_Systematic_Review_on_e_Wastage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sentiment Analysis for Multi-dialect Text using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121286</link>
        <id>10.14569/IJACSA.2021.0121286</id>
        <doi>10.14569/IJACSA.2021.0121286</doi>
        <lastModDate>2021-12-31T05:43:14.3330000+00:00</lastModDate>
        
        <creator>Aya H. Hussein</creator>
        
        <creator>Ibrahim F. Moawad</creator>
        
        <creator>Rasha M. Badry</creator>
        
        <subject>Arabic sentiment analysis (ASA); Arabic tweets; sentiment analysis (SA); natural language processing (NLP); machine learning (ML)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Social media networks have facilitated the availability and accessibility of a wide range of information and data. They allow users to share and express their opinions, and they present appraisals of top news and evaluations of movies, products, and services. This activity is studied by a well-known field called Sentiment Analysis (SA). Compared to the research studies conducted in English Sentiment Analysis (ESA), little effort has been exerted in Arabic Sentiment Analysis (ASA). Arabic is a morphologically rich language that poses significant challenges to Natural Language Processing (NLP) systems. The purpose of this paper is to enrich Arabic Sentiment Analysis by proposing a sentiment analysis model for analyzing Arabic multi-dialect text using machine learning algorithms. The proposed model is applied to two datasets: ASTD Egyptian-Dialect tweets and RES Multi-Dialect restaurant reviews. Different evaluation measures were used to evaluate the proposed model and identify the best-performing classifiers. The findings of this research revealed that the developed model outperformed two other research works in terms of accuracy, precision, and recall. In addition, the Bernoulli Naive Bayes (B-NB) classifier achieved the best results with 82% accuracy for the ASTD Egyptian-Dialect tweets dataset, while the SVM classifier scored the best accuracy for the RES Multi-Dialect reviews dataset with 87.7%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_86-Arabic_Sentiment_Analysis_for_Multi_dialect_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inherent Feature Extraction and Soft Margin Decision Boundary Optimization Technique for Hyperspectral Crop Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121285</link>
        <id>10.14569/IJACSA.2021.0121285</id>
        <doi>10.14569/IJACSA.2021.0121285</doi>
        <lastModDate>2021-12-31T05:43:14.3030000+00:00</lastModDate>
        
        <creator>M. C. Girish Babu</creator>
        
        <creator>Padma M. C</creator>
        
        <subject>Crop classification; decision boundary; deep learning; dimensionality; feature selection; hyperspectral image; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Crop productivity and disaster management can be enhanced by employing hyperspectral images. Hyperspectral imaging is widely utilized in identifying and classifying objects on the ground surface for various agricultural applications such as crop mapping, flood management, and identifying crops damaged by flood or drought. Hyperspectral-imaging-based crop classification is a very challenging task because of the spectral dimensions and poor spatial feature representation. Designing efficient feature extraction and dimension reduction techniques can address high-dimensionality problems. Nonetheless, achieving good classification accuracy with minimal computation overhead remains a challenging task in hyperspectral-imaging-based crop classification. To meet these research challenges, this work presents hyperspectral-imaging-based crop classification using a soft-margin decision boundary optimization (SMDBO) based Support Vector Machine (SVM) along with an Image Fusion-Recursive Filter (IFRF) and Inherent Feature Extraction (IFE). In this work, IFRF is used for reducing spectral features to a meaningful representation. Then, IFE is used for spatially differentiating the physical properties and shading elements of different objects. The extracted spectral and spatial features are trained using SMDBO-SVM to perform hyperspectral object classification. Using SMDBO-SVM for hyperspectral object classification aids in addressing class imbalance issues; thus, the proposed IFE-SMDBO-SVM model achieves better accuracy with minimal misclassification in comparison with state-of-the-art statistical and Deep Learning (DL) based hyperspectral object classification models on the standard Indian Pines and Pavia University datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_85-Inherent_Feature_Extraction_and_Soft_Margin_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encoding LED for Unique Markers on Object Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121284</link>
        <id>10.14569/IJACSA.2021.0121284</id>
        <doi>10.14569/IJACSA.2021.0121284</doi>
        <lastModDate>2021-12-31T05:43:14.2870000+00:00</lastModDate>
        
        <creator>Wildan Pandji Tresna</creator>
        
        <creator>Umar Ali Ahmad</creator>
        
        <creator>Isnaeni</creator>
        
        <creator>Reza Rendian Septiawan</creator>
        
        <creator>Iyon Titok Sugiarto</creator>
        
        <creator>Alex Lukmanto Suherman</creator>
        
        <subject>Encoding LED; PWM signal; target markers; object recognition system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In this paper, a new approach to unique markers for detecting and tracking moving objects with an encoding LED marker is presented. In addition, an LED spotlight system that can shoot light in the direction of the target is also proposed. The encoding process is done by making a unique blinking pattern on the LED, so that the camera and a servomotor acting as an object recognition system recognize only the unique marker given by an LED. In this work, the camera with OpenCV could capture the unique marker in all variants of blinking patterns. A unique marker is important in an object recognition system so that the camera can identify the object marked by our unique marker and ignore all other markers that might be captured by the camera. In addition, an analysis of the PWM signal from an LED is carried out here to determine the characteristics of the LEDs in each color, the accuracy of the duty cycle, and the use of the bright-dim method on the LEDs. The results show that the highest accuracy is obtained when a 50% duty cycle is used, with the on and off times set to 1 second for all LED colors. The benefit of the proposed system is confirmed by implementing an integrated control system as a unique marker. The effectiveness of the blinking LED against interference from other lasers is also discussed.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_84-Encoding_LED_for_Unique_Markers_on_Object_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Convolutional Neural Network Architectures for Face Mask Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121283</link>
        <id>10.14569/IJACSA.2021.0121283</id>
        <doi>10.14569/IJACSA.2021.0121283</doi>
        <lastModDate>2021-12-31T05:43:14.2570000+00:00</lastModDate>
        
        <creator>Siti Nadia Yahya</creator>
        
        <creator>Aizat Faiz Ramli</creator>
        
        <creator>Muhammad Noor Nordin</creator>
        
        <creator>Hafiz Basarudin</creator>
        
        <creator>Mohd Azlan Abu</creator>
        
        <subject>Convolution neural network; deep learning; transfer learning; computer vision; facemask detection; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In 2020, the World Health Organization (WHO) declared that the Coronavirus (COVID-19) pandemic was causing a worldwide health disaster. One of the most effective protections for reducing the spread of COVID-19 is wearing a face mask in densely and closely populated areas. In various countries, it has become mandatory to wear a face mask in public areas. Monitoring large numbers of individuals for compliance with the new rule can be a challenging task. A cost-effective method to monitor a large number of individuals for compliance with this new law is through computer vision and Convolutional Neural Networks (CNN). This paper demonstrates the application of transfer learning on pre-trained CNN architectures, namely AlexNet, GoogleNet, ResNet-18, ResNet-50, and ResNet-101, to classify whether or not a person in an image is wearing a face mask. The number of training images is varied in order to compare the performance of these networks. It is found that AlexNet performs the worst and requires 400 training images to achieve Specificity, Accuracy, Precision, and F-score of more than 95%, whereas GoogleNet and ResNet can achieve the same level of performance with 10 times fewer training images.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_83-Comparison_of_Convolutional_Neural_Network_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Cloud based Virtual Machine Security by Change Management using Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121282</link>
        <id>10.14569/IJACSA.2021.0121282</id>
        <doi>10.14569/IJACSA.2021.0121282</doi>
        <lastModDate>2021-12-31T05:43:14.2400000+00:00</lastModDate>
        
        <creator>S. Radharani</creator>
        
        <creator>V. B. Narasimha</creator>
        
        <subject>DevOps; deep clustering; VM security; cloud security; VM versioning; progression cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>With the increased growth in cloud-based application development and hosting, the demand for higher application and data security is also increasing. Cloud-based applications are hosted on virtual machines, and the data generated or used by these applications are also hosted inside the virtual machines. Hence, the security of the applications and the data can be achieved only by securing the virtual machines. There are a number of challenges in achieving the security of virtual machines. Firstly, virtual machines are large, and generic cryptographic methods are primarily designed to handle smaller amounts of data; thus, the applicability of these methods to virtual machines is subject to analysis. Secondly, the additional time required to apply cryptographic algorithms to virtual machines impacts the response time of the applications, which in turn impacts the service level agreements. Finally, virtual machines are highly vulnerable during migration, as they are migrated inside data center networks as plain text data. A good number of research attempts have tried to solve these challenges. Nonetheless, most parallel research works have compromised either on the strength of the security protocols or on the time taken to apply the cryptographic methods. The need of the research is therefore to identify attacks based on the characteristics of connection requests and to reduce the time for encryption and decryption of the virtual machines. This work proposes a novel framework that detects attacks with a machine learning driven algorithm by analyzing connection properties, and prevents attacks by selectively encrypting the virtual machines using another machine learning driven algorithm. This work demonstrates nearly 98% accuracy in the detection of new and existing attack types.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_82-A_Novel_Framework_for_Cloud_Based_Virtual_Machine_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Evaluation of Web Search User Interfaces from the Elderly Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121281</link>
        <id>10.14569/IJACSA.2021.0121281</id>
        <doi>10.14569/IJACSA.2021.0121281</doi>
        <lastModDate>2021-12-31T05:43:14.2100000+00:00</lastModDate>
        
        <creator>Khalid Krayz Allah</creator>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Layla Hasan</creator>
        
        <creator>Wong Yee Leng</creator>
        
        <subject>Usability; Google interface; Bing interface; SUS questionnaire; web search user interfaces; observation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The elderly population is increasing in many countries, often facing health and incapacity challenges that largely disengage them from digital tools such as the Internet. The elderly browse the Internet daily to obtain needed information through various search engines via their search UIs. Earlier technologies were designed to improve daily life, but the specific needs of the elderly are often neglected. Currently available online search UIs are well developed, but their designs did not specifically consider usability for the elderly. This research aims to evaluate web search UIs from the elderly perspective, identify usability issues in existing search UIs, and recommend improvements to web search UI designs. The observation technique was used to evaluate two web search UIs (the Google interface and the Bing interface) with fifteen participants aged 60 years and above. The System Usability Scale (SUS) questionnaire was applied to measure user satisfaction with the two current interfaces. The data collected from the observations were analyzed using content analysis, while the data acquired from the questionnaires were analyzed using the t-test. The results revealed a statistically significant difference in SUS ratings, with Google scoring 73.5 and Bing scoring 66.5, indicating that users prefer the Google interface over the Bing interface. Usability issues were also identified, and recommendations to improve the design of the search UI were suggested. These findings contribute to a better understanding of the issues that prevent elderly users from using web search UIs, and provide valuable feedback to designers on improving the UI to better suit the elderly.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_81-Usability_Evaluation_of_Web_Search_User_Interfaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Secure Healthcare Data Management using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121280</link>
        <id>10.14569/IJACSA.2021.0121280</id>
        <doi>10.14569/IJACSA.2021.0121280</doi>
        <lastModDate>2021-12-31T05:43:14.1930000+00:00</lastModDate>
        
        <creator>Ahmed I. Taloba</creator>
        
        <creator>Alanazi Rayan</creator>
        
        <creator>Ahmed Elhadad</creator>
        
        <creator>Amr Abozeid</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <creator>Rasha M. Abd El-Aziz</creator>
        
        <subject>Blockchain; electronic health record (EHR); storage; security; accessibility; cryptography; decentralization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In the current era of smart cities and smart homes, patients&#39; data such as names, personal details, and disease descriptions are highly insecure and frequently violated. These details are stored digitally in a system called the Electronic Health Record (EHR). The EHR can be useful for future medical research to enhance patient healthcare and the performance of clinical practices. These data are often inaccessible to the patients and their caretakers, yet readily available to unauthorized external agencies and easily breached by hackers. This creates an imbalance between data accessibility and security, which can be resolved by using blockchain technology. The blockchain creates an immutable ledger and makes transactions decentralized. The blockchain has three key features, namely security, transparency, and decentralization. These key features make the system highly secure, prevent data manipulation, and ensure that data can be accessed only by authorized persons. In this paper, a blockchain-based security framework is proposed to secure the EHR and provide a safe way for patients, their caretakers, doctors, and insurance agents to access patients&#39; clinical data using cryptography and decentralization. The proposed system also maintains the balance between data accessibility and security. This paper also establishes how the proposed framework helps doctors, patients, caretakers, and external authorities securely store and access patients&#39; medical data in the EHR.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_80-A_Framework_for_Secure_Healthcare_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121279</link>
        <id>10.14569/IJACSA.2021.0121279</id>
        <doi>10.14569/IJACSA.2021.0121279</doi>
        <lastModDate>2021-12-31T05:43:14.1630000+00:00</lastModDate>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <subject>Augmented reality; virtual reality; 3D gesture tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>3D gesture recognition and tracking based on augmented reality and virtual reality have attracted significant research interest because of advanced technology in smartphones. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is required and overall experimental performance needs to be satisfactory. This research investigates various current vision-based 3D gesture architectures for augmented reality and virtual reality. The core goal of this research is to present an analysis of methods and frameworks, followed by experimental performance, for the recognition and tracking of hand gestures and interaction with virtual objects on smartphones. This research categorizes the experimental evaluation of existing methods into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical usage of 3D gesture tracking based on augmented reality and virtual reality. The hardware setup includes types of gloves, fingerprints, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress test applications. The last part of the experimental section covers the datasets used by existing research. The overall comprehensive illustration of various methods, frameworks, and experimental aspects can significantly contribute to 3D gesture recognition and tracking based on augmented reality and virtual reality.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_79-Vision_based_3D_Gesture_Tracking_using_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Patient Care Predictive Model using Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121278</link>
        <id>10.14569/IJACSA.2021.0121278</id>
        <doi>10.14569/IJACSA.2021.0121278</doi>
        <lastModDate>2021-12-31T05:43:14.1470000+00:00</lastModDate>
        
        <creator>Harkesh J. Patel</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Health-care; inpatient care; logistic regression; machine learning model; outpatient care; stacking classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Medical treatments and operations in hospitals are divided into in-patient and out-patient procedures. It is critical for patients to know and understand the difference between these two forms of treatment, since it affects the length of a patient&#39;s stay in a hospital or medical institution as well as the cost of treatment. In today&#39;s era of information, a person&#39;s talents and expertise may be put to good use by automating activities wherever possible. A medical service is termed inpatient care if a doctor issues an order and the patient is admitted to the hospital on that order, whereas a patient seeking outpatient care does not need to spend the night in a hospital. Choosing between in-patient and out-patient care is usually a matter of how involved the doctor wants to be with the patient&#39;s treatment. With the aid of numerous data points regarding the patients, their illnesses, and lab tests, our main objective is to develop a system, as part of the hospital automation system, that predicts whether a patient should be given in-patient or out-patient care. The main idea of the paper is to develop a logistic regression model that predicts whether a patient needs to be treated as an in-patient or an out-patient depending on the results of laboratory tests. Furthermore, this study also examines how logistic regression performs for this dataset, which had not been investigated previously. The results show that logistic regression gives an accuracy of 75%, an F1-score of 73%, a precision of 74%, and a recall of 74%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_78-A_Patient_Care_Predictive_Model_using_Logistic_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Distributed Denial of Service Attacks using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121277</link>
        <id>10.14569/IJACSA.2021.0121277</id>
        <doi>10.14569/IJACSA.2021.0121277</doi>
        <lastModDate>2021-12-31T05:43:14.1170000+00:00</lastModDate>
        
        <creator>Ebtihal Sameer Alghoson</creator>
        
        <creator>Onytra Abbass</creator>
        
        <subject>Cybersecurity; distributed denial of service (DDoS); machine learning (ML); Canadian institute cybersecurity - distributed denial of service (CICDDoS2019) dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Software Defined Networking (SDN) is a vital technology that decouples the control and data planes in the network. The advantages of separating the control and data planes include a dynamic, manageable, flexible, and powerful platform. However, a centralized network platform also creates situations that challenge security, for instance a Distributed Denial of Service (DDoS) attack on the centralized controller. A DDoS attack is a well-known malicious attack that attempts to disrupt the normal traffic of a targeted server, network, or service by overwhelming the target&#39;s infrastructure with a flood of Internet traffic. This paper investigates several machine learning models and employs them in a DDoS detection system. It addresses the issue of enhancing DDoS attack detection accuracy using the well-known CICDDoS2019 DDoS dataset. In addition, the DDoS dataset has been preprocessed using two main approaches to obtain the most relevant features. Four different machine learning models were selected to work with the DDoS dataset. According to the results obtained from real experiments, the Random Forest model offered the best detection accuracy (99.9974%), an enhancement over recently developed DDoS detection systems.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_77-Detecting_Distributed_Denial_of_Service_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industrial Revolution 5.0 and the Role of Cutting Edge Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121276</link>
        <id>10.14569/IJACSA.2021.0121276</id>
        <doi>10.14569/IJACSA.2021.0121276</doi>
        <lastModDate>2021-12-31T05:43:14.0830000+00:00</lastModDate>
        
        <creator>Mamoona Humayun</creator>
        
        <subject>Industry 5.0; cutting-edge technologies; Internet of Things; artificial intelligence; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>IR 4.0 emphasizes the interconnection of machines and systems to achieve optimal performance and productivity gains. IR 5.0 is said to take this a step further by fine-tuning the human-machine connection. IR 5.0 is about collaboration between the two: automated technology&#39;s ultra-fast accuracy combined with human intelligence and creativity. The driving force behind IR 5.0 is customer demand for customization and personalization, necessitating greater human involvement in the production process. As IR 5.0 evolves, we may expect to see a slew of breakthroughs across various industries. However, just automating jobs or digitizing processes will not be enough; the finest and most successful businesses will be those that can combine the dual powers of technology and human ingenuity. IR 5.0 focuses on the use of modern cutting-edge technologies, namely AI, IoT, big data, cloud computing, blockchain, digital twins, edge computing, collaborative robots, and 6G, along with leveraging human creativity and intelligence. Wherever possible, IR 5.0 will change industrial processes worldwide by removing mundane, filthy, and repetitive activities from human workers. Intelligent robotics and systems will have unparalleled access to industrial supply networks and production floors. However, to understand and leverage the benefits of IR 5.0 better, there is a need to understand the role of modern cutting-edge technologies (CET) in Industrial Revolution 5.0. To fill this gap, this article examines IR 5.0 prospects, uses, supporting technologies, opportunities, and the issues that need to be understood in order to leverage the potential of IR 5.0.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_76-Industrial_Revolution_5.0_and_the_Role_of_Cutting_Edge_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Telugu Printed and Handwritten Character Recognition in Single Image using Aquila Optimizer based Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121275</link>
        <id>10.14569/IJACSA.2021.0121275</id>
        <doi>10.14569/IJACSA.2021.0121275</doi>
        <lastModDate>2021-12-31T05:43:14.0670000+00:00</lastModDate>
        
        <creator>Vijaya Krishna Sonthi</creator>
        
        <creator>S. Nagarajan</creator>
        
        <creator>N. Krishnaraj</creator>
        
        <subject>Optical character recognition; Telugu; deep learning; Aquila optimizer; BiLSTM; handwritten characters; printed characters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Machine-printed and handwritten character recognition has become a major research topic in several real-time applications. Recent advancements in deep learning and image processing techniques can be employed for printed and handwritten character recognition. Telugu Character Recognition (TCR) remains a difficult task in optical character recognition (OCR), which transforms printed and handwritten characters into the corresponding text format. In this regard, this study introduces an effective deep learning based TCR model for printed and handwritten characters (DLTCR-PHWC). The proposed DLTCR-PHWC technique aims to detect and recognize printed as well as handwritten characters that exist in the same image. First, image pre-processing is performed using an adaptive fuzzy filtering technique. Next, line and character segmentation processes are performed to derive useful regions. In addition, a fusion of the EfficientNet and CapsuleNet models is used for feature extraction. Finally, the Aquila optimizer (AO) with a bi-directional long short-term memory (BiLSTM) model is utilized for the recognition process. A detailed experimental analysis of the proposed DLTCR-PHWC technique was conducted using a Telugu character dataset, and the simulation outcomes portray the superiority of the proposed technique over recent state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_75-Automated_Telugu_Printed_and_Handwritten_Character.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Micro Expression Recognition: Multi-scale Approach to Automatic Emotion Recognition by using Spatial Pyramid Pooling Module</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121274</link>
        <id>10.14569/IJACSA.2021.0121274</id>
        <doi>10.14569/IJACSA.2021.0121274</doi>
        <lastModDate>2021-12-31T05:43:14.0530000+00:00</lastModDate>
        
        <creator>Lim Jun Sian</creator>
        
        <creator>Marzuraikah Mohd Stofa</creator>
        
        <creator>Koo Sie Min</creator>
        
        <creator>Mohd Asyraf Zulkifley</creator>
        
        <subject>Micro expression recognition; facial expression; spatial pyramid pooling module; multi-scale approach; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Facial expression is one of the obvious cues that humans use to express their emotions, and it is a necessary aspect of social communication between humans in their daily lives. However, humans do hide their real emotions in certain circumstances. Therefore, facial micro-expressions have been observed and analyzed to reveal true human emotions. However, a micro-expression is a complicated type of signal that manifests only briefly. Hence, machine learning techniques have been used to perform micro-expression recognition. This paper introduces a compact deep learning architecture to classify and recognize human emotions in three categories: positive, negative, and surprise. This study utilizes the deep learning approach so that optimal features of interest can be extracted even with a limited number of training samples. To further improve recognition performance, a multi-scale module based on the spatial pyramid pooling network is embedded into the compact network to capture facial expressions of various sizes. The base model is derived from the VGG-M model and is validated using the combined CASMEII, SMIC, and SAMM datasets. Moreover, various configurations of the spatial pyramid pooling layer were analyzed to find the most optimal network setting for the micro-expression recognition task. The experimental results show that the addition of a multi-scale module has managed to increase recognition performance. The best network configuration from the experiment is composed of five parallel network branches placed after the second layer of the base model, with pooling kernel sizes of two, three, four, five, and six.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_74-Micro_Expression_Recognition_Multi_scale_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gesture based Arabic Sign Language Recognition for Impaired People based on Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121273</link>
        <id>10.14569/IJACSA.2021.0121273</id>
        <doi>10.14569/IJACSA.2021.0121273</doi>
        <lastModDate>2021-12-31T05:43:14.0370000+00:00</lastModDate>
        
        <creator>Rady El Rwelli</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Arabic sign language; convolution neural network; hand movements; sensing device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Arabic Sign Language has seen outstanding research achievements in identifying gestures and hand signs using deep learning methodologies. The term &quot;forms of communication&quot; refers to the actions used by hearing-impaired people to communicate; these actions are difficult for ordinary people to comprehend. The recognition of Arabic Sign Language (ArSL) has become a difficult study subject due to variations in ArSL from one territory to another and even within states. The proposed system encapsulates a Convolutional Neural Network and is based on the machine learning technique. For the recognition of Arabic Sign Language, a wearable sensor is utilized. This approach uses a single system that can suit all Arabic gestures and can be used by impaired people of the local Arabic community. The research method achieves reasonable, moderate accuracy. A deep convolutional network is initially developed for feature extraction from the data gathered by the sensing devices. These sensors can reliably recognize the 30 hand-sign letters of the Arabic sign language. The hand movements in the dataset were captured using DG5-V hand gloves with wearable sensors, and the CNN technique is used for categorization. The suggested system takes Arabic sign language hand gestures as input and produces vocalized speech as output, achieving a recognition rate of 90%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_73-Gesture_based_Arabic_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality Simulation to Help Decrease Stress and Anxiety Feeling for Children during COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121272</link>
        <id>10.14569/IJACSA.2021.0121272</id>
        <doi>10.14569/IJACSA.2021.0121272</doi>
        <lastModDate>2021-12-31T05:43:14.0070000+00:00</lastModDate>
        
        <creator>Devi Afriyantari Puspa Putri</creator>
        
        <creator>Ratri Kusumaningtyas</creator>
        
        <creator>Tsania Aldi</creator>
        
        <creator>Fikri Zaki Haiqal</creator>
        
        <subject>Blender; COVID-19; Unity3D; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The COVID-19 pandemic has changed people’s lives in every aspect: social distancing and the transition from offline to online activity have been applied in order to slow and stop the spread of the virus. This sudden change has caused fairly high levels of anxiety and stress in society, especially among children, because of activity restrictions. Various innovations, especially technological ones, have been developed to overcome the problems of distance restrictions that arose due to the COVID-19 pandemic. Virtual reality is believed to be one of the innovations that can reduce anxiety and boredom during activity restrictions, because it creates an artificial environment in which humans can socialize. This research combines Unity3D and Blender software to build a virtual reality simulation, with virtual glasses giving a realistic impression of the virtual room that has been created. The VR application consists of three environments that children can use to explore the virtual room without needing to be in a crowded space. Based on the results of pretest and posttest questionnaires given to 30 participants aged seven to ten, the VR application can decrease levels of stress and anxiety in children by one to two levels. In addition, the application falls within the acceptable range of the SUS scoring system.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_72-Virtual_Reality_Simulation_to_Help_Decrease_Stress.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Weak Signal Detection in Competitive Intelligence using Semantic Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121271</link>
        <id>10.14569/IJACSA.2021.0121271</id>
        <doi>10.14569/IJACSA.2021.0121271</doi>
        <lastModDate>2021-12-31T05:43:13.9730000+00:00</lastModDate>
        
        <creator>Adil Bouktaib</creator>
        
        <creator>Abdelhadi Fennan</creator>
        
        <subject>Competitive intelligence; apache spark; big data; weak signal detection; web mining; semantic clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Companies nowadays share a great deal of data on the web in structured and unstructured formats, and this data holds many signals from which innovation can be analyzed and detected using weak signal detection approaches. To gain a competitive advantage over competitors, the velocity and volume of data available online must be exploited and processed to extract and monitor any type of strategic challenge or surprise, whether in the form of an opportunity or a threat. To capture early signs of change in the environment in a big data context, where data is voluminous and unstructured, this paper presents a framework for weak signal detection that relies on crawling a variety of web sources and on a big-data-based implementation of text mining techniques for the automatic detection and monitoring of weak signals, using an aggregation of semantic clustering algorithms. The novelty of this paper resides in the framework&#39;s ability to scale to an unlimited amount of unstructured data, which requires novel analysis approaches, and in the aggregation of semantic clustering algorithms for better automation and higher accuracy of weak signal detection. A corpus of scientific articles and patents is collected to validate the framework and to provide a use case for researchers interested in identifying weak signals in a corpus of data from a specific technological domain.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_71-A_Framework_for_Weak_Signal_Detection_in_Competitive_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Multi-Resolution MSER and Faster RCNN (MRMSER-FRCNN) Model for Improved Object Retrieval of Poor Resolution Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121270</link>
        <id>10.14569/IJACSA.2021.0121270</id>
        <doi>10.14569/IJACSA.2021.0121270</doi>
        <lastModDate>2021-12-31T05:43:13.9600000+00:00</lastModDate>
        
        <creator>Amitha I C</creator>
        
        <creator>N S Sreekanth</creator>
        
        <creator>N K Narayanan</creator>
        
        <subject>Faster RCNN; feature representation; multi-resolution MSER; object detection; object retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Object detection and retrieval is an active area of research. This paper proposes a collaborative approach based on multi-resolution maximally stable extremal regions (MRMSER) and a faster region-based convolutional neural network (FRCNN) suitable for efficient object detection and retrieval in poor-resolution images. The proposed method focuses on improving retrieval accuracy, and the collaborative model overcomes the limitations of a faster RCNN model by making use of multi-resolution MSER. Two different datasets were used with the proposed system: a vehicle dataset containing three classes of vehicles and the Oxford building dataset with 11 different landmarks. The proposed MRMSER-FRCNN method gives a retrieval accuracy of 84.48% on the Oxford 5k building dataset and 92.66% on the vehicle dataset. Experimental results show that the proposed collaborative approach outperforms the faster RCNN model for poor-resolution query images.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_70-Collaborative_Multi_Resolution_MSER_and_Faster_RCNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning-based One Versus Rest Classifier for Multiclass Multi-Label Ophthalmological Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121269</link>
        <id>10.14569/IJACSA.2021.0121269</id>
        <doi>10.14569/IJACSA.2021.0121269</doi>
        <lastModDate>2021-12-31T05:43:13.9270000+00:00</lastModDate>
        
        <creator>Akanksha Bali</creator>
        
        <creator>Vibhakar Mansotra</creator>
        
        <subject>Fundus images; one versus rest strategy; transfer learning; VGG-16; augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The main objective of this paper is to propose a transfer learning technique for multiclass, multilabel ophthalmological disease prediction in fundus images using the one-versus-rest strategy. The proposed transfer-learning-based techniques detect eight categories (seven diseases and one normal class) in fundus images collected and augmented from the Ocular Disease Intelligent Recognition (ODIR) dataset: Normal, Diabetic retinopathy, Cataract, Glaucoma, Age-related macular degeneration, Myopia, Hypertension, and Other abnormalities. To increase the dataset size, no differentiation between left- and right-eye images was made; these images were fed to a VGG-16 CNN network to classify each disease separately in a binary manner, and eight separate models were trained using the one-versus-rest strategy to identify the seven diseases plus normal eyes. Various results are showcased, such as the accuracy for each class and the accuracy of the overall model compared to benchmark papers. Baseline accuracy increased from 89% to almost 91%, and the proposed model drastically improved disease identification: prediction of glaucoma increased from 54% to 91%, prediction of normal images from 40% to 85.28%, and prediction of other diseases from 44% to 88%. Of the eight categories, the proposed model&#39;s prediction rate improved for six diseases using the proposed VGG-16 transfer learning technique and eight separate one-versus-rest classifiers.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_69-Transfer_Learning_based_One_Versus_Rest_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Chi-Square Feature Selection using a Bernoulli Model for Multi-label Classification of Indonesian-Translated Hadith</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121268</link>
        <id>10.14569/IJACSA.2021.0121268</id>
        <doi>10.14569/IJACSA.2021.0121268</doi>
        <lastModDate>2021-12-31T05:43:13.9130000+00:00</lastModDate>
        
        <creator>Fahmi Salman Nurfikri</creator>
        
        <creator>Adiwijaya</creator>
        
        <subject>Bernoulli model; Chi-Square; feature selection; hadith classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Hadith is the foundational knowledge in Islam that must be studied and practiced by Muslims. In the Hadith, several types of teachings are beneficial to Muslims and all of mankind. Some Hadith serve as advice, while others contain prohibitions that Muslims should adhere to. There are yet others that do not belong to these categories and serve only as information. This study focuses on increasing the performance of Chi-Square feature selection to obtain relevant features for multilabel classification of Indonesian-translated Bukhari Hadith data. This study proposes a Chi-Square-based Bernoulli model to improve Chi-Square feature selection which is appropriate for short-text data such as Hadith. The findings of this study show that the proposed method can select relevant features based on data classes; thereby improving Hadith classification performance with an error value of 9.38% compared to that (9.91%) obtained using the basic Chi-Square feature selection.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_68-Improving_Chi_Square_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Stopwords Identification in Tamil Text Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121267</link>
        <id>10.14569/IJACSA.2021.0121267</id>
        <doi>10.14569/IJACSA.2021.0121267</doi>
        <lastModDate>2021-12-31T05:43:13.8800000+00:00</lastModDate>
        
        <creator>M. S. Faathima Fayaza</creator>
        
        <creator>F. Fathima Farhath</creator>
        
        <subject>Stopwords; Tamil; pre-processing; TF-IDF; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Nowadays, digital documents have become the primary source of information. Therefore, natural language processing is widely utilized in information retrieval, topic modeling, document classification, and document clustering. Preprocessing plays a significant role in all of these applications, and one of its critical steps is removing stopwords. Many languages have defined lists of stopwords; however, no publicly available stopwords list exists for the Tamil language, since it is under-resourced. This study identified 93 general stopwords and some domain-specific stopwords for sports, entertainment, and local and foreign news by analyzing more than 1.7 million Tamil documents containing more than 21 million words. The study also shows that removing stopwords improves the accuracy of a Tamil document clustering system, with improvements of 2.4% and 0.95% in F-score for TF-IDF and FastText with the one-pass algorithm, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_67-Towards_Stopwords_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Haar-AdaBoost (VJ) and HOG-AdaBoost (PoseInv) Detectors for People Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121266</link>
        <id>10.14569/IJACSA.2021.0121266</id>
        <doi>10.14569/IJACSA.2021.0121266</doi>
        <lastModDate>2021-12-31T05:43:13.8670000+00:00</lastModDate>
        
        <creator>Nagi OULD TALEB</creator>
        
        <creator>Mohamed Larbi BEN MAATI</creator>
        
        <creator>Mohamedade Farouk NANNE</creator>
        
        <creator>Aicha Mint Aboubekrine</creator>
        
        <creator>Adil CHERGUI</creator>
        
        <subject>Pedestrian detection; learned-based methods; Haar like features; HOG descriptor; AdaBoost; behavior analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Object detection in general, and pedestrian detection in particular, in images and videos is a very popular research topic within the computer vision community and is currently at the heart of much research. The detection of people is a particularly difficult problem because of the great variability of appearances and situations in which a person may be found: a person is not a rigid object but articulated and unpredictable, and their shape changes during movement. The situations are also highly varied: a person may be alone, near a group of people, in a crowd, or obscured by an object. In addition, characteristics vary from one person to another (color of skin, hair, clothes, etc.), and a simple, clear, or complex background, the lighting or weather conditions, and the shadows caused by different light sources greatly complicate the problem. This article presents a comparative study of the performance of two detectors, Haar-AdaBoost and HOG-AdaBoost, for detecting people in the INRIA person image database. An evaluation of the experiments is presented after certain modifications to the detection parameters.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_66-Study_of_Haar_AdaBoost_VJ_and_HOG_AdaBoost.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-centric Activity Recognition and Prediction Model using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121265</link>
        <id>10.14569/IJACSA.2021.0121265</id>
        <doi>10.14569/IJACSA.2021.0121265</doi>
        <lastModDate>2021-12-31T05:43:13.7400000+00:00</lastModDate>
        
        <creator>Namrata Roy</creator>
        
        <creator>Rafiul Ahmed</creator>
        
        <creator>Mohammad Rezwanul Huq</creator>
        
        <creator>Mohammad Munem Shahriar</creator>
        
        <subject>Machine learning algorithms; activity recognition; gradient boosting; next activity prediction; LSTM sequence prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Human Activity Recognition has been a dynamic research area in recent years. Various methods of collecting data and analyzing them to detect activity have been well investigated. Some machine learning algorithms have shown excellent performance in activity recognition, based on which many applications and systems are being developed. Unlike this, the prediction of the next activity is an emerging field of study. This work proposes a conceptual model that uses machine learning algorithms to detect activity from sensor data and predict the next activity from the previously seen activity sequence. We created our activity recognition dataset and used six machine learning algorithms to evaluate the recognition task. We have proposed a method for the next activity prediction from the sequence of activities by converting a sequence prediction problem into a supervised learning problem using the windowing technique. Three classification algorithms were used to evaluate the next activity prediction task. Gradient Boosting performs best for activity recognition, yielding 87.8% accuracy for the next activity prediction over a 16-day timeframe. We have also measured the performance of an LSTM sequence prediction model for predicting the next activity, where the optimum accuracy is 70.90%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_65-User_centric_Activity_Recognition_and_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Deep Learning based Cryptocurrency Price Fluctuation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121264</link>
        <id>10.14569/IJACSA.2021.0121264</id>
        <doi>10.14569/IJACSA.2021.0121264</doi>
        <lastModDate>2021-12-31T05:43:13.7230000+00:00</lastModDate>
        
        <creator>Ahmed Saied El-Berawi</creator>
        
        <creator>Mohamed Abdel Fattah Belal</creator>
        
        <creator>Mahmoud Mahmoud Abd Ellatif</creator>
        
        <subject>Computer intelligence; cryptocurrency; deep learning; market movement; recurrent neural network; timeseries forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>This paper proposes a deep learning based predictive model for forecasting the price of cryptocurrency and classifying the direction of its movement. These two tasks are challenging to address, since cryptocurrency prices fluctuate with extremely volatile behavior. However, it has been proven that the cryptocurrency trading market does not exhibit perfect market properties, i.e., price is not entirely a random-walk phenomenon. Based on this, the study shows that both price value forecasting and price movement direction classification are predictable. A recurrent neural network based predictive model is built to regress and classify prices. With adaptive dynamic feature selection and the use of external dependable factors with a potential degree of predictability, the proposed model achieves unprecedented performance in terms of movement classification. A na&#239;ve simulation of a trading scenario is developed, showing a 69% profitability score across a six-month trading period for Bitcoin.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_64-Adaptive_Deep_Learning_based_Cryptocurrency_Price_Fluctuation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Value Chain Mapping to Determine R&amp;D Domain Knowledge Retention Framework Extended Criteria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121263</link>
        <id>10.14569/IJACSA.2021.0121263</id>
        <doi>10.14569/IJACSA.2021.0121263</doi>
        <lastModDate>2021-12-31T05:43:13.6930000+00:00</lastModDate>
        
        <creator>Mohamad Safuan Bin Sulaiman</creator>
        
        <creator>Ariza Nordin</creator>
        
        <creator>Nor Laila Md Noor</creator>
        
        <creator>Wan Adilah Wan Adnan</creator>
        
        <subject>Knowledge retention framework; research and development; porter value chain; knowledge management; knowledge loss; research intensive portfolio; knowledge chain model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Implementing a knowledge retention (KR) strategy is crucial to overcoming the loss of expert knowledge due to employee turnover and retirement. The knowledge loss phenomenon causes organizations to face enormous risks that affect performance. KR frameworks and models are available beyond research and development (R&amp;D) organizations, addressing knowledge retention strategies for administrative, operational, and manufacturing organizations. For research-intensive portfolios within R&amp;D organizations, using the available KR frameworks requires fitting. The difficulty of addressing knowledge loss, due to the uniqueness of an R&amp;D organization’s knowledge artifacts, requires an extended KR framework. Before designing the extended KR framework, it is crucial to determine the framework’s additional criteria. This paper reports the use of value chain mapping to determine the extended criteria of a KR framework fit for R&amp;D organizations. The value chain mapping method identifies the knowledge activities in R&amp;D using the Porter Value Chain (PVC) as the reference model. The output is a Knowledge Chain Model (KCM) that defines the critical points of knowledge loss in the R&amp;D value chain. These critical points, which are the nominated extended criteria of the KR framework fit for R&amp;D organizations, are project-based expert critical knowledge focus, project-based tacit knowledge transfer, and a project-based knowledge repository.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_63-Use_of_Value_Chain_Mapping_to_Determine_RandD_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multistage Relay Network Topology using IEEE802.11ax for Construction of Multi-robot Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121262</link>
        <id>10.14569/IJACSA.2021.0121262</id>
        <doi>10.14569/IJACSA.2021.0121262</doi>
        <lastModDate>2021-12-31T05:43:13.6770000+00:00</lastModDate>
        
        <creator>Ryo Odake</creator>
        
        <creator>Kei Sawai</creator>
        
        <creator>Noboru Takagi</creator>
        
        <creator>Hiroyuki Masuta</creator>
        
        <creator>Tatsuo Motoyoshi</creator>
        
        <subject>Multi-robot system; IEEE802.11ax; information gathering; multistage relay network; network topology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>This paper describes an information gathering system comprising multiple mobile robots and a wireless sensor network. In general, a single robot searches an environment using a teleoperation system in a multistage relay network while maintaining communication quality. However, the search range of a single robot is limited, and it is difficult to gather comprehensive information in large-scale facilities. This paper proposes a multistage relay network topology using IEEE802.11ax for information gathering by multiple robots. In this multi-robot operation, a mobile robot carries wireless relay nodes and deploys them into the environment. After the network is constructed, each robot connects to it and gathers information. An operator then controls each robot remotely while monitoring the end-to-end communication quality with each mobile robot in the network. The paper proposes a method for estimating the end-to-end throughput with multiple mobile robots, and its validity is inspected via an evaluation experiment on multi-robot teleoperation. The experimental results show that a network constructed with the proposed topology is capable of maintaining communication connectivity for more than three mobile robots.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_62-Multistage_Relay_Network_Topology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Heart Rate Variability Analysis of ECG, Holter and PPG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121261</link>
        <id>10.14569/IJACSA.2021.0121261</id>
        <doi>10.14569/IJACSA.2021.0121261</doi>
        <lastModDate>2021-12-31T05:43:13.6630000+00:00</lastModDate>
        
        <creator>Galya N. Georgieva-Tsaneva</creator>
        
        <creator>Evgeniya Gospodinova</creator>
        
        <subject>Heart rate variability; cardiovascular diseases; mathematical analysis; holter records; software system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The article presents a demonstrative software system that includes procedures for the input, preprocessing, and mathematically based analysis of cardiac data. The program works with the following signals: ECG, Holter recordings, and PPG signals. The system uses real cardiological data from patients, obtained with modern medical devices: electrocardiography, continuous Holter monitoring, photoplethysmography devices, and others. It allows mathematically based study of cardiac records through the use of linear, nonlinear, fractal, and wavelet-based methods. A comparative analysis was made of the results obtained when evaluating the HRV parameters in the three types of signals used. The difference between HRV time series (cardiac intervals and HRV analysis) obtained from individuals diagnosed with heart failure and from healthy individuals is presented graphically. The findings indicate that studies of heart rate variability on ECG, Holter, and PPG records can be used to support the cardiac practice of physicians.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_61-Comparative_Heart_Rate_Variability_Analysis_of_ECG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Face Recognition from Part of a Facial Image based on Image Stitching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121260</link>
        <id>10.14569/IJACSA.2021.0121260</id>
        <doi>10.14569/IJACSA.2021.0121260</doi>
        <lastModDate>2021-12-31T05:43:13.6470000+00:00</lastModDate>
        
        <creator>Osama R. Shahin</creator>
        
        <creator>Rami Ayedi</creator>
        
        <creator>Alanazi Rayan</creator>
        
        <creator>Rasha M. Abd El-Aziz</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Face recognition; image stitching; principal component analysis; Eigenfaces distance classifiers; geometrical approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Most current techniques for face recognition require the presence of the full face of the person to be recognized, a condition that is difficult to achieve in practice: the person may appear with only part of the face visible, which requires predicting the part that does not appear. Most current approaches rely on what is known as image interpolation, which does not give reliable results, especially if the missing part is large. In this work, we complete the face by stitching the missing part with a flipped copy of the part shown in the picture, relying on the fact that the human face is symmetric in most cases. To create a complete model, two face recognition methods, the Eigenfaces and geometrical approaches, were applied to prove the efficiency of the algorithm. Image stitching is the process by which distinct photographic images are combined to make a complete scene or a high-resolution image; several images are integrated to form a wide-angle panoramic image. The quality of the stitching is determined by calculating the similarity between the stitched image and the original images and by the presence of seam lines in the stitched image. The Eigenfaces approach utilizes a PCA calculation to reduce the feature vector dimensions, providing an effective way of discovering the lower-dimensional space and ensuring fast and effective face classification. The feature extraction phase is followed by the classifier phase, in which distance classifiers using squared Euclidean and City-Block distances are used. The test results demonstrate that the proposed algorithm achieves a recognition rate of around 95%; to validate the proposed algorithm, it was compared to existing CNN and Multibatch estimator methods.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_60-Human_Face_Recognition_from_Part_of_a_Facial_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Crime Pattern using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121259</link>
        <id>10.14569/IJACSA.2021.0121259</id>
        <doi>10.14569/IJACSA.2021.0121259</doi>
        <lastModDate>2021-12-31T05:43:13.6170000+00:00</lastModDate>
        
        <creator>Chikodili Helen Ugwuishiwu</creator>
        
        <creator>Peter O. Ogbobe</creator>
        
        <creator>Matthew Chukwuemeka Okoronkwo</creator>
        
        <subject>Information technology; law enforcement agency; data mining; crime; classification and rule induction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The advancement of Information Technology allows high volumes of data to be generated in the databases of institutions, organizations, and governments, including Law Enforcement Agencies (LEAs). Technologies have also been developed to store and manipulate these data to enhance decision making. Crime remains a severe threat to humanity, and criminals currently exploit highly sophisticated technologies to carry out criminal activities. To combat crime effectively, LEAs must be adequately equipped with technological tools such as data mining to enable useful discoveries from databases. To achieve this, a Real-time Integrated Crime Information System (RICIS) was developed, and mobile phones were used by informants (the general public) to capture information about crimes being committed within South-Eastern Nigeria. Each item of crime information captured is sent to the LEA responsible for that crime type and stored in the agency database for analysis. This study uses data mining algorithms to analyze crime trends and patterns in South-Eastern Nigeria between 2012 and 2013. The algorithms adopted were Classification and Rule Induction. A data set of 973 records was collected from Eleme Police station, Port Harcourt (2012) and Nsukka Police station (2013). The analysis enables the identification of trends in crimes and criminal activities across the various LEA databases, enhancing crime control and public safety.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_59-Analysis_of_Crime_Pattern_using_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Method of Traffic Engineering in DCN with a Ramified Topology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121258</link>
        <id>10.14569/IJACSA.2021.0121258</id>
        <doi>10.14569/IJACSA.2021.0121258</doi>
        <lastModDate>2021-12-31T05:43:13.6000000+00:00</lastModDate>
        
        <creator>As’ad Mahmoud As’ad Alnaser</creator>
        
        <creator>Yurii Kulakov</creator>
        
        <creator>Dmytro Korenko</creator>
        
        <subject>Local networks; traffic engineering; SDN; DCN; DFS; Mininet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>This article considers two main local network topologies. Based on the basic DFS protocol, a mathematical model has been developed for a new method of multipath routing and traffic engineering in data centers with a ramified topology. This method was designed with the features and benefits of SDN in mind. Simulations of the developed method were also carried out on the two local topologies considered earlier.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_58-Modified_Method_of_Traffic_Engineering_in_DCN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>M-SVR Model for a Serious Game Evaluation Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121257</link>
        <id>10.14569/IJACSA.2021.0121257</id>
        <doi>10.14569/IJACSA.2021.0121257</doi>
        <lastModDate>2021-12-31T05:43:13.5830000+00:00</lastModDate>
        
        <creator>Kamal Omari</creator>
        
        <creator>Said Harchi</creator>
        
        <creator>Mohamed Moussetad</creator>
        
        <creator>El Houssine Labriji</creator>
        
        <creator>Ali Labriji</creator>
        
        <subject>Serious game; evaluation tool; multi-output support vector regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Today, due to their interactive, participatory and entertaining nature, Serious Games set themselves apart from other learning methods used in teaching. Much progress has been made in the design techniques and methods of Serious Games, but little in their evaluation. To fill this gap, we proposed in our previous work an evaluation tool capable of helping practitioners evaluate Serious Games in different training contexts. This evaluation tool is designed around four dimensions, namely the pedagogical, technological, ludic and behavioral dimensions, which are measured by clearly defined criteria. During this process, it was highlighted that the human factor (the evaluator) considerably influences the resulting weightings through the choice of weights for the evaluation dimensions of the Serious Games. To reduce this influence during the evaluation process while preserving the correlation between the variables of our evaluation system, we present in this paper an improvement of our evaluation tool, equipping it with an intelligent supervised self-learning algorithm that self-regulates the weights according to the context in which the Serious Game being evaluated is used. Experimental verification of the optimization results yields a root mean square error of 0.016 and a coefficient of determination of 98.59 percent, indicating that the model has the high precision needed to guarantee better predictive performance. A comparison between this intelligent model and the models presented in our previous work showed the same ordering of the four dimensions, while reducing the influence of the human factor during the Multi-Output Support Vector Regression weighting process.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_57-M_SVR_Model_for_a_Serious_Game_Evaluation_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment System of Local Government Projects Prototype in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121256</link>
        <id>10.14569/IJACSA.2021.0121256</id>
        <doi>10.14569/IJACSA.2021.0121256</doi>
        <lastModDate>2021-12-31T05:43:13.5530000+00:00</lastModDate>
        
        <creator>Herri Setiawan</creator>
        
        <creator>Husnawati</creator>
        
        <creator>Tasmi</creator>
        
        <subject>Group Decision Making (GDM); Group Decision Support System (GDSS); Multi-Criteria Decision Making (MCDM); Project Management Body of Knowledge (PMBOK); local government</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The purpose of this research is to build an application for project evaluation that provides recommendations on project performance in local government agencies. In this study, project evaluation was carried out using the Group Decision Making (GDM) model based on the Group Decision Support System (GDSS) concept. The project output and outcome parameters used by the Decision Makers (DMs) employ a hybrid of the Multi-Criteria Decision Making (MCDM) and Project Management Body of Knowledge (PMBOK) methods to reduce subjectivity in scoring qualitative data, and project rankings from all DMs are determined using the Copeland Score voting method. The results of applying GDSS and MCDM show that the project ranking process becomes faster and more accurate. The results of the sensitivity test show that two criteria have a great influence on project performance and therefore play a very important role in project evaluation.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_56-Assessment_System_of_Local_Government_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Irrigation and Precision Farming of Paddy Field using Unmanned Ground Vehicle and Internet of Things System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121254</link>
        <id>10.14569/IJACSA.2021.0121254</id>
        <doi>10.14569/IJACSA.2021.0121254</doi>
        <lastModDate>2021-12-31T05:43:13.5370000+00:00</lastModDate>
        
        <creator>Srinivas A</creator>
        
        <creator>J Sangeetha</creator>
        
        <subject>Sensor; cloud; mobile application; agriculture; water valve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Paddy is one of the most widely consumed staple foods across the globe, especially in Asian countries. With the population growing and agricultural land shrinking, crop yields must increase to meet the ever-growing demand for food. The yield of paddy largely depends on the irrigation of the paddy field, that is, on maintaining the optimum water level. This paper proposes a solution to this irrigation problem by addressing the various challenges of deploying an Unmanned Ground Vehicle (UGV) and an Internet of Things (IoT) system in paddy cultivation. A UGV carrying the sensors was used to collect sensor data (water level, rainwater, humidity, temperature, light intensity) from the paddy field, controlled by both a cloud-based solution and a mobile-application-based solution. The data were then processed and used to control the water valves, which can likewise be controlled through the cloud or the mobile application. The water levels maintained by the mobile-application-based solution, the cloud-based solution, and the traditional method of irrigation were compared, and the cloud-based solution was found to be the most efficient. The proposed system thus reduces the manpower required for irrigation compared to the traditional method, while also reducing water wastage and thereby conserving water.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_54-Smart_Irrigation_and_Precision_Farming_of_Paddy_Field.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Trajectory Control Design for Bilateral Robotic Arm with Enforced Sensorless and Acceleration based Force Control Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121255</link>
        <id>10.14569/IJACSA.2021.0121255</id>
        <doi>10.14569/IJACSA.2021.0121255</doi>
        <lastModDate>2021-12-31T05:43:13.5370000+00:00</lastModDate>
        
        <creator>Nuratiqa Natrah Mansor</creator>
        
        <creator>Muhammad Herman Jamaluddin</creator>
        
        <creator>Ahmad Zaki Shukor</creator>
        
        <subject>Force and position controller; reaction force observer; bilateral control robotic arm; sensorless; system response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>This study offers an approach for tackling the instability of the computed force generated at a joint of a robotic arm by improving the model of a bilateral master-slave haptic system with an adaptive technique known as the Reaction Force Observer (RFOB). The purpose of the recommended modelling is to correct unwanted signals coming from the employed standard controller and from the surroundings of the moving joint of the articulated robotic arm. RFOB is employed to adjust for signal interference by modifying the position response to obtain the desired final location. The investigation and observation were carried out in two separate tests to compare the outcomes of the recommended integration technique with the former system, which enforced only a Disturbance Observer (DOB). The feedback produced by the organised experiments was measured on a simulation platform. All numerical records and signal charts illustrate the robustness of the proposed method, since the system integrated with acceleration-based force control is more precise and quicker.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_55-Adaptive_Trajectory_Control_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Cultural Heritage History in Muzium Negara through Role-Playing Game</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121253</link>
        <id>10.14569/IJACSA.2021.0121253</id>
        <doi>10.14569/IJACSA.2021.0121253</doi>
        <lastModDate>2021-12-31T05:43:13.5070000+00:00</lastModDate>
        
        <creator>Nor Aiza Moketar</creator>
        
        <creator>Nurul Hidayah Mat Zain</creator>
        
        <creator>Siti Nuramalina Johari</creator>
        
        <creator>Khyrina Airin Fariza Abu Samah</creator>
        
        <creator>Lala Septem Riza</creator>
        
        <creator>Massila Kamalrudin</creator>
        
        <subject>Muzium Negara; history; role-playing games; gamification; museum-based learning; enjoyable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The traditional classroom-based teaching and learning of the History subject are ineffective and less interactive, dampening students’ interest and motivation to learn history. Therefore, museum-based learning was proposed to supplement classroom-based learning for effective teaching and learning of the History subject. However, excursions to the museum are often hindered by the geographical location, the museum’s policies, and student commitments. These hindrances motivated the researchers to design and develop a role-playing game (RPG) set in Muzium Negara (the National Museum of Malaysia), known as “Waktu Silam”, to enhance students’ interest, motivation, and knowledge of the cultural and historical heritage of Malaysia. A survey questionnaire was distributed to assess the level of enjoyment provided by the game. The results showed that 84.8% of participants experienced the element of enjoyment in this game. This study is anticipated to enhance student interest and knowledge in history, enhance visitors’ experience, and promote tourism to Muzium Negara. Additionally, the project is expected to include multiplayer functionality to add more interactivity to the game in future work.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_53-Learning_Cultural_Heritage_History.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-enabled Detection of Acute Ischemic Stroke using Brain Computed Tomography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121252</link>
        <id>10.14569/IJACSA.2021.0121252</id>
        <doi>10.14569/IJACSA.2021.0121252</doi>
        <lastModDate>2021-12-31T05:43:13.4900000+00:00</lastModDate>
        
        <creator>Khalid Babutain</creator>
        
        <creator>Muhammad Hussain</creator>
        
        <creator>Hatim Aboalsamh</creator>
        
        <creator>Majed Al-Hameed</creator>
        
        <subject>Acute ischemic brain stroke; deep learning; convolutional neural network; CT brain slice classification; brain tissue segmentation; brain tissue contrast enhancement; brain tissue classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Stroke is the second leading cause of death globally. Computed Tomography plays a significant role in the initial diagnosis of suspected stroke patients. Currently, stroke is subjectively interpreted on CT scans by domain experts, and significant inter- and intra-observer variation has been documented. Several methods have been proposed to detect ischemic brain stroke automatically on CT scans using machine learning and deep learning, but they are not robust and their performance is not ready for clinical practice. We propose a fully automatic method for acute ischemic stroke detection on brain CT scans. The system’s first component is a brain slice classification module that eliminates the CT scan’s upper and lower slices, which do not usually include brain tissue. In turn, a brain tissue segmentation module segments brain tissue from CT slices, followed by tissue contrast enhancement using the Extreme-Level Eliminating Histogram Equalization technique. Finally, the processed brain tissue is classified as either normal or ischemic stroke using a classification module, to determine whether the patient is suffering from an ischemic stroke. We leveraged the use of the pre-trained ResNet50 model for slice classification and tissue segmentation, while we propose an efficient lightweight multi-scale CNN model (5S-CNN), which outperformed state-of-the-art models for brain tissue classification. Evaluation included the use of more than 130 patient brain CT scans curated from King Fahad Medical City (KFMC). The proposed method, using 5-fold cross-validation to validate generalization and susceptibility to overfitting, achieved accuracies of 99.21% in brain slice classification, 99.70% in brain tissue segmentation, 87.20% in patient-wise brain tissue classification, and 90.51% in slice-wise brain tissue classification. The system can assist both expert and non-expert radiologists in the early identification of ischemic stroke on brain CT scans.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_52-Deep_Learning_enabled_Detection_of_Acute_Ischemic_Stroke.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Emotional Expression Generation by Humanoid Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121251</link>
        <id>10.14569/IJACSA.2021.0121251</id>
        <doi>10.14569/IJACSA.2021.0121251</doi>
        <lastModDate>2021-12-31T05:43:13.4730000+00:00</lastModDate>
        
        <creator>Master Prince</creator>
        
        <subject>Artificial facial expression; emotional speech database; convolutional neural network; RGB pattern; humanoid robot; human-robot interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Emotion integrates different aspects of a person, including mood (the current emotional state), personality, voice or speech, the color around the eyes, and the movement of facial organs. We consider mood because a person’s current emotional state always affects upcoming emotions. All these parameters lie behind an emotion, and a human being can easily recognize it by looking at a face, even when more than one person is present; so for a robot to produce human-like emotion, all these parameters have to be considered when imitating an artificial facial expression for that emotion. Most researchers working in this area still find it difficult for the robot to determine the exact emotion, because facial information is not always available, especially when interacting with a group of people, and to mimic the emotion in a way the user can effectively recognise. In our study, the loudest speech among the people sensed by the robot and the color around the eyes are used to cope with these issues. Another issue is the rise time and fall time of emotional intensity: in other words, how long should the robot hold an emotion? An experimental approach is applied to obtain these values. The proposed method uses an emotional speech database to recognize human emotion with a convolutional neural network (CNN) and RGB patterns to mimic the emotion, simulating an improved humanoid robot that can express emotion like a human being and respond in real time to a user or group of users, enabling more effective Human-Robot Interaction (HRI).</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_51-Real_Time_Emotional_Expression_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Extraction based Breast Cancer Detection using WPSO with CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121250</link>
        <id>10.14569/IJACSA.2021.0121250</id>
        <doi>10.14569/IJACSA.2021.0121250</doi>
        <lastModDate>2021-12-31T05:43:13.4600000+00:00</lastModDate>
        
        <creator>Naga Deepti Ponnaganti</creator>
        
        <creator>Raju Anitha</creator>
        
        <subject>Breast cancer; microcalcifications; weighted particle swarm optimization (WPSO); Convolutional Neural Networks (CNNs); mammogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Cancer reports from the past few years in India say that breast cancer accounts for 30% of cases, and this share may increase in the near future. It is added that one woman is diagnosed every two minutes and one dies every nine minutes. Early diagnosis of cancer saves the lives of those affected. For detecting breast cancer in its early stages, microcalcifications are considered a key symptom. Several scientific investigations have been performed to fight this disease, in which machine learning techniques can be used extensively. Particle swarm optimization (PSO) is recognized as one of the most efficient and promising approaches for diagnosing breast cancer, assisting medical experts in timely and appropriate treatment. This paper uses a weighted particle swarm optimization (WPSO) approach to extract textural features from the segmented mammogram image and to classify microcalcifications as normal, benign or malignant, thereby improving accuracy. In the breast region, the tumour part is extracted using optimization methods. Convolutional Neural Networks (CNNs) are proposed for detecting breast cancer, reducing manual overhead. A CNN framework is constructed to extract features efficiently. The designed model detects cancer regions in mammogram (MG) images and rapidly classifies those regions as normal or abnormal. The model uses MG images obtained from various local hospitals.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_50-Feature_Extraction_based_Breast_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Model through Ensemble Bagged Trees in Predictive Analysis of University Teaching Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121249</link>
        <id>10.14569/IJACSA.2021.0121249</id>
        <doi>10.14569/IJACSA.2021.0121249</doi>
        <lastModDate>2021-12-31T05:43:13.4430000+00:00</lastModDate>
        
        <creator>Omar Chamorro-Atalaya</creator>
        
        <creator>Carlos Ch&#225;vez-Herrera</creator>
        
        <creator>Marco Anton-De los Santos</creator>
        
        <creator>Juan Anton-De los Santos</creator>
        
        <creator>Almintor Torres-Quiroz</creator>
        
        <creator>Antenor Leva-Apaza</creator>
        
        <creator>Abel Tasayco-Jala</creator>
        
        <creator>Gutember Peralta-Eugenio</creator>
        
        <subject>Machine learning; ensemble; bagged trees; predictive analysis; teaching performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The objective of this study is to analyze and discuss the metrics of a Machine Learning model based on the Ensemble Bagged Trees algorithm, applied to data on satisfaction with teaching performance in a virtual environment. Initial classification analysis using the Matlab R2021a software identified an Accuracy of 81.3% for the Ensemble Bagged Trees algorithm. After validating the collected data and obtaining the predictive model for the four classes (satisfaction levels), the model achieved an overall Precision of 82.21%, Sensitivity of 73.40%, Specificity of 91.02% and Accuracy of 90.63%. In turn, the highest area under the curve (AUC) of the Receiver Operating Characteristic (ROC) is 0.93, indicating a predictive-model sensitivity of 93%. The validation of these results will give the directors of the higher institution a database to be used in improving the quality of the educational service with respect to teaching performance.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_49-Machine_Learning_Model_through_Ensemble_Bagged_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Recommender System for Arabic News on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121248</link>
        <id>10.14569/IJACSA.2021.0121248</id>
        <doi>10.14569/IJACSA.2021.0121248</doi>
        <lastModDate>2021-12-31T05:43:13.4270000+00:00</lastModDate>
        
        <creator>Bashaier Almotairi</creator>
        
        <creator>Mayada Alrige</creator>
        
        <creator>Salha Abdullah</creator>
        
        <subject>Hybrid recommender system; online social network; Arabic news recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Reading online news is the most popular way to access articles from news sources worldwide, and we have observed a massive increase in information, especially news, shared through social media. Many researchers have proposed techniques for recommending news articles, but most of this research has focused on solutions for English text. This research aimed to develop a personalized news recommender system for Arabic newsreaders that displays news articles based on readers’ interests rather than merely in order of occurrence. To develop the system, we created an Arabic dataset of tweets and a set of Arabic news articles to serve as the source of recommendations. We then used the CAMeL tools for Arabic natural language processing to preprocess the collected data. After that, we built a hybrid recommender system combining two filtering approaches: first, content-based filtering, which considers the user’s profile to recommend news articles; and second, collaborative filtering, which considers an article’s popularity with the support of Twitter. The system’s performance was evaluated using two metrics. We conducted a user study with 25 respondents to obtain users’ feedback, and we used the Mean Absolute Error (MAE) metric as another way to evaluate the system’s accuracy. Based on the evaluation results, we found that the hybrid recommender system recommends more relevant articles to users than the other two types of recommender system.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_48-Personalized_Recommender_System_for_Arabic_News.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring the Growth of Tomatoes in Real Time with Deep Learning-based Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121247</link>
        <id>10.14569/IJACSA.2021.0121247</id>
        <doi>10.14569/IJACSA.2021.0121247</doi>
        <lastModDate>2021-12-31T05:43:13.3970000+00:00</lastModDate>
        
        <creator>Sigit Widiyanto</creator>
        
        <creator>Dheo Prasetyo Nugroho</creator>
        
        <creator>Ady Daryanto</creator>
        
        <creator>Moh Yunus</creator>
        
        <creator>Dini Tri Wardani</creator>
        
        <subject>Deep learning; Mask R-CNN; segmentation; tomato; growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Agricultural productivity for crops such as tomatoes needs to be increased, considering that consumption grows by 6.34% per year. Productivity can be improved through several methods, such as counting fruit and predicting the time of harvest. This information is a visual problem, so computer vision should solve it as an automation method in industry. With this information, the farmer can monitor tomato fruit growth. The proposed method is a framework that has been implemented for real-time processing. To obtain growth information, the tomato area can be used as a region of interest (ROI) every week or on another schedule. As the challenge of this research, this ROI is extracted using segmentation analysis. The segmentation method used is Mask Region-based Convolutional Neural Network (Mask R-CNN) with the ResNet101 architecture. The accuracy of this method is obtained from the similarity between the proposed method’s output and the ground truth, namely 97.34% using the Dice Coefficient and 94.83% using the Jaccard Coefficient. These results indicate that the method can extract the ROI information with high accuracy, so they can serve as a reference for the farmer in treating each tomato plant.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_47-Monitoring_the_Growth_of_Tomatoes_in_Real_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Reinforcement DQNN Algorithm to Detect Crime Anomaly Objects in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121246</link>
        <id>10.14569/IJACSA.2021.0121246</id>
        <doi>10.14569/IJACSA.2021.0121246</doi>
        <lastModDate>2021-12-31T05:43:13.3800000+00:00</lastModDate>
        
        <creator>Jyothi Mandala</creator>
        
        <creator>Pragada Akhila</creator>
        
        <creator>Vulapula Sridhar Reddy</creator>
        
        <subject>HybridFly; Advanced Encryption Standard (AES); reinforcement; anomaly detection; crime rate prediction; security attacks; RCNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In earlier days it was difficult to identify suspicious activities happening in society, but with the advancement of smart devices, governments have started constructing smart cities with the help of IoT devices to capture such events happening in the surroundings and thereby reduce the crime rate. Unfortunately, hackers or criminals access these devices to protect themselves by remotely stopping them. Society therefore needs a strong security environment, which can be achieved with reinforcement learning algorithms that detect anomalous activities. The main reason for choosing reinforcement algorithms is that they efficiently handle a sequence of decisions based on the input captured from videos. In the proposed system, the major objective is defined as minimum identification time for each frame by defining if-then decision rules. It is a sort of autonomous system, in which the system tries to learn from the penalties imposed on it during the training phase. The proposed system obtained an accuracy of 98.34%, and the time to encrypt the attributes is also low.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_46-An_Integrated_Reinforcement_DQNN_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adding Water Path Capabilities to QWAT Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121245</link>
        <id>10.14569/IJACSA.2021.0121245</id>
        <doi>10.14569/IJACSA.2021.0121245</doi>
        <lastModDate>2021-12-31T05:43:13.3500000+00:00</lastModDate>
        
        <creator>Bogdan Vaduva</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>Relational database; graphs; water network; water path; open source; QWAT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The main purpose of this article is to show how to extend an existing open source database, namely QWAT (an acronym for the Quantum GIS Water plugin), by using pgRouting (the PostgreSQL routing extension) in order to find the water flow path in a water network. The water path in a water network is key information needed by any water supplying company for different activities such as identifying customers, metering the water flow, or isolating areas of the water network. In our environment an open source database was used, and that database did not have any means of identifying the water path, so our research is directed toward that goal. Once a water path is found, our next goal was to show that identifying customers for a water supplying company is just a click away (by using non-directional graphs). Another key piece of information needed by water supplying companies is which valves should be closed in order to shut off the water for an area of the water network. As a result, the second purpose of the article is to show how to identify the necessary valves to be closed or opened in order to shut the water off or on within the pipe network.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_45-Adding_Water_Path_Capabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Deep Residual Quantum Computing Optimization Technique for IoT Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121244</link>
        <id>10.14569/IJACSA.2021.0121244</id>
        <doi>10.14569/IJACSA.2021.0121244</doi>
        <lastModDate>2021-12-31T05:43:13.3330000+00:00</lastModDate>
        
        <creator>Rasha M. Abd El-Aziz</creator>
        
        <creator>Alanazi Rayan</creator>
        
        <creator>Osama R. Shahin</creator>
        
        <creator>Ahmed Elhadad</creator>
        
        <creator>Amr Abozeid</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Internet of things (IoT); cloud; Res-HQCNN; intrusion detection system (IDS); optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The Internet of Things (IoT) is defined as millions of interconnections between wireless devices that obtain data globally. These multiple data streams are observed through a common platform, so it becomes essential to investigate accuracy in order to realize the best IoT platform. To address the growing demand for time-sensitive data analysis and real-time decision-making, accuracy in IoT data collection has become critical. Res-HQCNN is a hybrid quantum-classical neural network with deep residual learning. The model is trained end to end and, analogous to a traditional neural network, uses backpropagation. To discover how efficiently Res-HQCNN performs on a classical computer, quantum data with and without noise have been investigated extensively. The study then focuses on the application of artificial neural networks to analyze the threats to these IoT networks. For data recording purposes, a model is trained using recurrent and convolutional neural networks to undertake in-depth analysis of threat severity, kind, and source. The intrusion detection system (IDS) explored in this study has a success rate of 99% based on the empirical data supplied to the model. Due to its robust execution under irregular distributions, greater sensitivity to the introduction of the authority dimension, stability, and extremely large key space, a quantum hash function has been proposed as an effective method for secure communication between the IoT and the cloud.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_44-Modified_Deep_Residual_Quantum_Computing_Optimization_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>“Digital Influencer”: Development and Coexistence with Digital Social Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121243</link>
        <id>10.14569/IJACSA.2021.0121243</id>
        <doi>10.14569/IJACSA.2021.0121243</doi>
        <lastModDate>2021-12-31T05:43:13.3030000+00:00</lastModDate>
        
        <creator>Jirawat Sookkaew</creator>
        
        <creator>Pipatpong Saephoo</creator>
        
        <subject>Virtual influencer; online social; virtual character; media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Digital identities, also known as virtual influencers, are created by humans through digital tools that mimic human behavior by means of creative design. This has resulted in a group of characters, called &quot;virtual influencers&quot;, that people are very fond of and that are particularly trendy in the modern day. With their rise, virtual influencers are being used as tools in marketing and media, particularly in the online world, because such characters can overcome a variety of limitations that humans cannot. Character styles, which do not need to have the same look or composition as real people, are a factor that makes these characters popular. However, the development of a virtual influencer depends on the social and cultural factors of the people of its era, as well as on technology, which humans can apply and integrate with existing virtual influencers so that they grow and develop further.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_43-Digital_Influencer_Development_and_Coexistence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Data Consolidation in MDM to Develop the Unified Version of Truth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121242</link>
        <id>10.14569/IJACSA.2021.0121242</id>
        <doi>10.14569/IJACSA.2021.0121242</doi>
        <lastModDate>2021-12-31T05:43:13.2870000+00:00</lastModDate>
        
        <creator>Dupinder Kaur</creator>
        
        <creator>Dilbag Singh</creator>
        
        <subject>Master data management (MDM); master record; TALEND; data matching and merging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Organizations seeking growth and a competitive lead should use Master Data Management (MDM) as a foundation for efficient decision making. An MDM framework creates a trusted and reliable continuous record of customers, products, suppliers, and other shared data sets. In master data, critical data is consolidated to portray essential business entities in a unified version of truth. Creating a trusted view of master data faces challenges such as quality, identity resolution, analytics, and investment. In the proposed research, a technique has been designed to generate master data that assists policy makers in addressing these issues. In this paper, four steps are taken for master data creation, namely data enrichment, data matching, data merging, and data governance. To achieve adequate data quality, TALEND Open Studio has been used for data pre-processing and enrichment. An algorithm is designed to match and merge the master records. To validate the designed approach, results are evaluated using a Pandas DataFrame on the Python platform. This paper will assist the policy makers of organizations in formulating business strategies.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_42-Critical_Data_Consolidation_in_MDM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Backup Approach using Software-defined Wide Area Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121241</link>
        <id>10.14569/IJACSA.2021.0121241</id>
        <doi>10.14569/IJACSA.2021.0121241</doi>
        <lastModDate>2021-12-31T05:43:13.2570000+00:00</lastModDate>
        
        <creator>Ahmed Attia</creator>
        
        <creator>Nour Eldeen Khalifa</creator>
        
        <creator>Amira Kotb</creator>
        
        <subject>Wide area networks; software defined network; software defined wide area network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Over the past several years, the traditional approaches of managing and utilizing hybrid Wide Area Network (WAN) connections between sites across geographical regions have posed many challenges to enterprises. Software-Defined Wide Area Network (SD-WAN) has emerged as a new paradigm that can overcome traditional WAN challenges like the lack of visibility into WAN bandwidth utilization and the inefficient usage of expensive WAN resources. The flexibility and agility brought to the WAN by applying the SD-WAN paradigm helped to improve the efficiency of bandwidth utilization and to address the surge of bandwidth demands. SD-WAN capabilities have become essential for meeting the heavy inter-data-center traffic exchange required for business continuity and disaster recovery operations. In this paper, a data backup approach is introduced using SD-WAN that makes the network centrally programmable. This leverages the ability to apply fine-grained traffic engineering to different data flows over the WAN, optimizing the bandwidth utilization of expensive WAN resources by balancing the traffic load across network links between data centers and minimizing the time required to transfer backup data to disaster recovery sites. The proposed approach proved its efficiency in terms of bandwidth utilization when compared to other related works.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_41-Data_Backup_Approach_using_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing the Mathematical Model of the Bipedal Walking Robot Executive Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121240</link>
        <id>10.14569/IJACSA.2021.0121240</id>
        <doi>10.14569/IJACSA.2021.0121240</doi>
        <lastModDate>2021-12-31T05:43:13.2230000+00:00</lastModDate>
        
        <creator>Zhanibek Issabekov</creator>
        
        <creator>Nakhypbek Aldiyarov</creator>
        
        <subject>Exoskeleton; manipulator; model; kinematics; dynamics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The paper considers the accuracy of footstep control in the vicinity of the application object. A methodology for simulating the executive electro-hydraulic servomechanism is developed. The paper presents control algorithms for the dynamic walking mode. The issues of stabilizing the sensors installed in the soles are investigated. A description is given of the laboratory model and the simulation of the main links of the exoskeleton, approximated to human parameters, which allows the studied motion algorithms of the executive mechanism to be inserted into the program that automates the calculation of the motion links. The authors for the first time simulated a bipedal walking robot using modern digital technologies, including the joint use of a pneumatic electric drive. The paper proposes an automated control scheme for manipulators controlling immobilized human limbs. Considering the functions of the leg and the phases of movement, the structural scheme is chosen so that the same actuator performs several functions. This construction partially reduces the load on the person, because the drives of the various links could otherwise overturn a person due to their weight. Using the kinematic structure of the model and the method of adaptive control of the manipulator, as well as replacing some moving parts with plastic material, the authors succeeded in reducing the total weight threefold compared with foreign analogues, which is important for a sick person.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_40-Developing_the_Mathematical_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Artificial Intelligence–enabled Workflow Framework for Legacy Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121239</link>
        <id>10.14569/IJACSA.2021.0121239</id>
        <doi>10.14569/IJACSA.2021.0121239</doi>
        <lastModDate>2021-12-31T05:43:13.2100000+00:00</lastModDate>
        
        <creator>Abdullah Al-Barakati</creator>
        
        <subject>Legacy systems; service oriented architecture; workflow management; legacy transformation; digital transformation; artificial intelligence (AI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The rapid advancement of web technologies coupled with evolving business needs makes legacy transformation a necessity for enterprises around the world. However, the risks in such a transformation must be mitigated with an approach that is flexible enough to allow for a gradual and low-risk transformation process. This paper presents a Service Oriented Architecture (SOA) workflow-based legacy transformation approach that allows for phased transformation, in which a legacy system is first transformed into self-contained modular services accessible via a dedicated service layer. These modular services are managed through an AI-enabled workflow management layer that interacts with an improved UI frontend for the system’s end users. This paper presents a hypothetical prototype in which an Oracle 5 legacy system is transformed using the proposed architecture. ASP .NET Core MVC as well as the Pega business process management platform are utilized to practically assess the feasibility of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_39-Leveraging_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Graph-based Framework for Domain Expertise Elicitation and Reuse in e-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121238</link>
        <id>10.14569/IJACSA.2021.0121238</id>
        <doi>10.14569/IJACSA.2021.0121238</doi>
        <lastModDate>2021-12-31T05:43:13.1930000+00:00</lastModDate>
        
        <creator>Jawad Berri</creator>
        
        <subject>Knowledge graph; domain expertise; e-learning; knowledge elicitation; learning web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Reusing the knowledge expertise of different domains in e-learning is an ideal approach to sustaining knowledge and disseminating it throughout an organization’s different processes. This approach generates a valuable source for instruction that can significantly enrich the quality of teaching and training, as it effortlessly uses expertise from its original sources. It is also very useful for teaching activities, since it connects learners with real-life scenarios involving field experts and relieves instructors from the tedious task of authoring teaching material. In this paper we propose a framework that automatically gathers expertise from domain experts while they carry out their activities and then represents it in a form that can be shared and reused in e-learning by different types of learners. The framework relies on knowledge graphs, which are knowledge representation structures that facilitate mapping expertise to e-learning objects. A case study is presented showing how inspector reports are handled to generate on-demand e-courses specifically adapted to learners’ needs.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_38-Knowledge_Graph_based_Framework_for_Domain_Expertise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Data Leaks through Large Scale Distributed Query Processing using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121237</link>
        <id>10.14569/IJACSA.2021.0121237</id>
        <doi>10.14569/IJACSA.2021.0121237</doi>
        <lastModDate>2021-12-31T05:43:13.1770000+00:00</lastModDate>
        
        <creator>Kiranmai MVSV</creator>
        
        <creator>D Haritha</creator>
        
        <subject>Distributed query processing; distributed data leak; data leak detection; attribute subset equivalence; dynamic inference; adaptive threshold model introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>With the growth of distributed data processing, and with data being the fuel for each process, query times over the data are expected to be significantly lower. Hence, distribution of the data is highly desirable, but during data distribution the chances of data leakage increase to a significant extent. Data leakage problems are generally caused not by intentional errors but by the higher visibility of the data across multiple clusters. Hence, the detection process is also very critical. Many parallel research attempts have demonstrated various methods for detection as well as prevention. Works in the direction of detecting data leaks depend heavily either on historical information about leaks or on the contextual importance of the data. In both cases, the accuracy of the detection process cannot be ensured. On the other hand, preventive measures can also be turned into a reactive detection process by reversing the principles proposed in these research outcomes, but the computational complexity is significantly higher. Thus, this work proposes a novel strategy for detecting data leakages after data distribution during query processing events. It proposes an initial Occurrence-Based Rule Set Extraction method using an Adaptive Threshold for generating the rule sets; further, to reduce the time complexity and the loss of dataset attribute information, it introduces another algorithm for Dynamic Inference-based Rule Set Reduction. After the inferences are generated, the work finally deploys an Attribute Subset Equivalence-based Leak Detection mechanism for the final detection of clusters with data leaks. This work demonstrates nearly 89% accuracy for the detection process.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_37-Detection_of_Data_Leaks_through_Large_Scale.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SG-TSE: Segment-based Geographic Routing and Traffic Light Scheduling for EV Preemption based Negative Impact Reduction on Normal Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121236</link>
        <id>10.14569/IJACSA.2021.0121236</id>
        <doi>10.14569/IJACSA.2021.0121236</doi>
        <lastModDate>2021-12-31T05:43:13.1470000+00:00</lastModDate>
        
        <creator>Shridevi Jeevan Kamble</creator>
        
        <creator>Manjunath R Kounte</creator>
        
        <subject>Emergency vehicle (EV) preemption; Segment-based Geographic routing and Traffic light Scheduling based EV preemption (SG-TSE); geographic routing; Segment based Geographic Routing (SGR); dynamic traffic light scheduling; Dynamic Traffic light Scheduling and EV preemption (DTSE); green phase adjustment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Emergency Vehicles (EVs) play a significant role in giving timely assistance to the general public by saving lives and avoiding property damage. EV preemption models help EVs maintain their speed along their path by pre-clearing normal vehicles from the path. However, few preemption models have been designed in the literature, and they fall short in minimizing the negative impacts of EV preemption on normal vehicle traffic and of normal vehicle traffic on EV speed. To accomplish these goals, this work proposes Segment-based Geographic routing and Traffic light Scheduling based EV preemption (SG-TSE), which incorporates two mechanisms for efficient EV preemption: Segment based Geographic Routing (SGR) and Dynamic Traffic Light Scheduling and EV Preemption (DTSE). Firstly, SGR utilizes a geographic routing model through the Segment Heads (SHs) along the selected route and passes EV arrival messages to the traffic light controller to pre-clear the normal traffic. Secondly, DTSE designs effective scheduling at traffic lights by dynamically adjusting the green phase time based on the minimum detection distance of EVs to the intersections. Thus, the EVs pass through the intersections quickly without negatively impacting normal traffic, even when the signal head is in the red phase. Moreover, the proposed SG-TSE activates the green phase at the correct time and minimizes the negative impacts of the EV preemption model. Finally, the performance of SG-TSE is evaluated using Network Simulator-2 (NS-2) with different performance metrics and various network traffic scenarios.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_36-SG_TSE_Segment_based_Geographic_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Smartphone Recommendation System through Adaptation of Genetic Algorithm and Progressive Web Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121235</link>
        <id>10.14569/IJACSA.2021.0121235</id>
        <doi>10.14569/IJACSA.2021.0121235</doi>
        <lastModDate>2021-12-31T05:43:13.1300000+00:00</lastModDate>
        
        <creator>Khyrina Airin Fariza Abu Samah</creator>
        
        <creator>Nursalsabiela Affendy Azam</creator>
        
        <creator>Raseeda Hamzah</creator>
        
        <creator>Chiou Sheng Chew</creator>
        
        <creator>Lala Septem Riza</creator>
        
        <subject>Genetic algorithm; progressive web application; recommendation; smartphone introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The ubiquity of smartphone use nowadays is undeniable: it is growing exponentially, smartphones have replaced cell phones, and to a certain degree they have replaced personal computers and a host of other gadgets. Differing smartphone specifications and overwhelming smartphone advertisements have given the customer a broader range of choices. Many qualitative and quantitative criteria need to be considered, and customers want to select the most suitable smartphone. They face difficulties deciding on the best smartphone for their budget and desires. Thus, a new method is needed to make recommendations to customers according to their preferences and budget. This study proposed a method for optimizing a smartphone recommendation system using a genetic algorithm (GA). Moreover, it is implemented on a progressive web application (PWA) platform to ensure the customer can use it on multiple platforms. Customers can use the platform to input any smartphone specification preferences in addition to the budget. Functional testing results showed the achievement of the study’s objectives, and usability testing using the UEQ received feedback of 93.64%, with an overall average mean of 4.682. Therefore, according to the outcome, it can be concluded that optimizing smartphone recommendations through the GA enables the customer to make comparisons easily based on the obtained optimum result.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_35-Optimizing_Smartphone_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Recognition Method for Cassava Phytoplasma Disease (CPD) Real-Time Detection based on Transfer Learning Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121234</link>
        <id>10.14569/IJACSA.2021.0121234</id>
        <doi>10.14569/IJACSA.2021.0121234</doi>
        <lastModDate>2021-12-31T05:43:13.1000000+00:00</lastModDate>
        
        <creator>Irma T. Plata</creator>
        
        <creator>Edward B. Panganiban</creator>
        
        <creator>Darios B. Alado</creator>
        
        <creator>Allan C. Taracatac</creator>
        
        <creator>Bryan B. Bartolome</creator>
        
        <creator>Freddie Rick E. Labuanan</creator>
        
        <subject>Cassava phytoplasma disease; faster regions with convolutional neural networks (R-CNN) inception v2; you only look once (YOLO) v4; object detection; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Object detection technology aims to detect target objects with the theories and methods of image processing and pattern recognition, determine the semantic categories of these objects, and mark the specific position of the target object in the image. This study generally aims to establish a recognition method for real-time Cassava Phytoplasma Disease (CPD) detection based on transfer learning neural networks. Several methods and procedures were conducted, such as testing two methods of transmitting long-distance high-definition (HD) video capture; establishing a compact setup for a long-range wireless video transmission system; developing and testing the real-time CPD detection and quantification monitoring system; and providing a comparative performance analysis of the three models used. We successfully custom-trained three artificial neural networks using transfer learning: Faster Regions with Convolutional Neural Networks (R-CNN) Inception v2, Single Shot Detector (SSD) MobileNet v2, and You Only Look Once (YOLO) v4. These deep learning models can detect and recognize CPD in actual environment settings. Overall, the developed real-time CPD detection and quantification monitoring system was successfully integrated into the wireless video receiver and seamlessly visualized all incoming data using the three different CNN models. If image processing speed is the consideration, YOLOv4 is better than the other models; but if accuracy is the priority, Faster R-CNN Inception v2 performs better. Since CPD detection is the main purpose of this study, the Faster R-CNN model is recommended for adoption to detect CPD in a real-time environment.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_34-A_Recognition_Method_for_Cassava_Phytoplasma_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GML_DT: A Novel Graded Multi-label Decision Tree Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121233</link>
        <id>10.14569/IJACSA.2021.0121233</id>
        <doi>10.14569/IJACSA.2021.0121233</doi>
        <lastModDate>2021-12-31T05:43:13.0530000+00:00</lastModDate>
        
        <creator>Wissal Farsal</creator>
        
        <creator>Mohammed Ramdani</creator>
        
        <creator>Samir Anter</creator>
        
        <subject>Graded multi-label classification; algorithm adaptation; decision tree classifier; label dependencies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The goal of Graded Multi-label Classification (GMLC) is to assign a degree of membership or relevance of a class label to each data point, as opposed to multi-label classification tasks, which can only predict whether a class label is relevant or not. The graded multi-label setting generalizes the multi-label paradigm to allow prediction on a gradual scale, in agreement with practical real-world applications where labels differ in their level of relevance. In this paper, we propose a novel decision tree classifier (GML_DT) adapted to the graded multi-label setting. It fully models the label dependencies, which sets it apart from the transformation-based approaches in the literature and increases its performance. Furthermore, our approach yields comprehensive and interpretable rules that efficiently predict all the degrees of membership of the class labels at once. To demonstrate the model’s effectiveness, we tested it on real-world graded multi-label datasets and compared it against a baseline transformation-based decision tree classifier. To assess its predictive performance, we conducted an experimental study with different evaluation metrics from the literature. Analysis of the results shows that our approach has a clear advantage across the utilized performance measures.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_33-GML_DT_A_Novel_Graded_Multi_label_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-objective based Cloud Task Scheduling Model with Improved Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121232</link>
        <id>10.14569/IJACSA.2021.0121232</id>
        <doi>10.14569/IJACSA.2021.0121232</doi>
        <lastModDate>2021-12-31T05:43:13.0070000+00:00</lastModDate>
        
        <creator>Chaitanya Udatha</creator>
        
        <creator>Gondi Lakshmeeswari</creator>
        
        <subject>Cloud computing; task scheduling; cloud service provider; virtual machines; PSO; multi-objective; cloud service broker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Advanced technologies have emerged from the parallel, cluster, client-server, distributed, and grid computing paradigms. The cloud is one such advanced technology paradigm, delivering services to users on demand over the internet on a pay-per-use basis. The number of cloud services has rapidly increased to meet user requirements, and the cloud can provide anything as a service over web networks, from hardware to applications on demand. Due to the complex infrastructure of the cloud, resources must be managed efficiently and constantly monitored. Task scheduling plays an integral role in improving cloud performance by reducing the number of resources used and efficiently allocating tasks to the requested resources. This paper attempts to assign and schedule resources efficiently in the cloud environment using the proposed Multi-Objective based Hybrid Initialization of Particle Swarm Optimization (MOHIPSO) strategy, considering both the cloud vendor and the user. The proposed algorithm is a novel hybrid approach for initializing particles in PSO instead of using random values. This strategy can obtain the minimum total task execution time for the benefit of the cloud user and maximum resource usage for the benefit of the cloud provider. The proposed strategy shows improvement over standard PSO and another heuristic PSO initialization approach; makespan, execution time, waiting time, and virtual machine imbalance parameters are considered for the comparison results.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_32-Multi_objective_based_Cloud_Task_Scheduling_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>English Semantic Similarity based on Map Reduce Classification for Agricultural Complaints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121231</link>
        <id>10.14569/IJACSA.2021.0121231</id>
        <doi>10.14569/IJACSA.2021.0121231</doi>
        <lastModDate>2021-12-31T05:43:12.9900000+00:00</lastModDate>
        
        <creator>Esraa Rslan</creator>
        
        <creator>Mohamed H. Khafagy</creator>
        
        <creator>Kamran Munir</creator>
        
        <creator>Rasha M.Badry</creator>
        
        <subject>Agricultural system; semantic textual similarity; text classification; latent semantic analysis; part of speech</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Environmental changes, including global warming, climatic changes, and ecological impacts, as well as dangerous diseases like the coronavirus epidemic, have affected agriculture. Since coronavirus is a hazardous disease that causes many deaths, the government of Egypt adopted many strict regulations, including lockdowns and social distancing measures. These circumstances have limited agricultural experts&#39; availability to help farmers or advise on solving agricultural problems. To help with this issue, this work focuses on improving support for farmers growing the major field crops in Egypt by retrieving solutions corresponding to a farmer&#39;s query. We mainly focus on detecting the semantic similarity between a large agriculture dataset and user queries using Latent Semantic Analysis (LSA) based on the Term Frequency-Inverse Document Frequency (TF-IDF) weighting method. In this research paper, we apply an SVM MapReduce classifier as a framework for parallelizing and distributing the work of classifying the dataset. We then apply different approaches for computing the similarity of sentences. We present a system based on semantic similarity methods and the support vector machine algorithm to detect complaints similar to the user query. Finally, we run different experiments to evaluate the performance and efficiency of the proposed system, which achieves approximately 77.8%~94.8% in F-score. The experimental results show that the accuracy of the SVM classifier is approximately 88.68%~89.63% and demonstrate the leverage that SVM classification gives to the semantic similarity measure between sentences.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_31-English_Semantic_Similarity_based_on_Map_Reduce_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Server-Side Request Forgery (SSRF) Attack by using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121230</link>
        <id>10.14569/IJACSA.2021.0121230</id>
        <doi>10.14569/IJACSA.2021.0121230</doi>
        <lastModDate>2021-12-31T05:43:12.9600000+00:00</lastModDate>
        
        <creator>Khadejah Al-talak</creator>
        
        <creator>Onytra Abbass</creator>
        
        <subject>Server-side request forgery (SSRF); machine learning (ML); deep learning (DL); long short-term memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Server-side request forgery (SSRF) is a security vulnerability in web applications: when services are accessed via URL, an attacker supplies or modifies a URL to access services on servers that he is not permitted to use. In this research, various types of SSRF attacks are discussed, and ways to secure web applications are explained. Various techniques have been used to detect and mitigate these attacks, most of which rely on machine learning. The main focus of this research is the application of deep learning techniques (LSTM networks) to create an intelligent model capable of detecting these attacks. The generated deep learning model achieved an accuracy of 0.969, which indicates the strength of the model and its ability to detect SSRF attacks.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_30-Detecting_Server_Side_Request_Forgery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customers’ Opinions on Mobile Telecommunication Services in Malaysia using Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121229</link>
        <id>10.14569/IJACSA.2021.0121229</id>
        <doi>10.14569/IJACSA.2021.0121229</doi>
        <lastModDate>2021-12-31T05:43:12.9430000+00:00</lastModDate>
        
        <creator>Muhammad Radzi Abdul Rahim</creator>
        
        <creator>Shuzlina Abdul-Rahman</creator>
        
        <creator>Yuzi Mahmud</creator>
        
        <subject>Sentiment analysis; predictive analytics; RapidMiner; mobile telecommunications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Mobile telecommunication services in Malaysia have been widely used in the recent decade, and there is intense competition among companies to retain and gain new customers by offering various services. Customers commonly share reviews of these services on social media such as Twitter. Those reviews are essential for mobile telecommunication companies to improve their services and, at the same time, to keep their customers from churning to another company. Hence, this study focuses on public sentiment on Twitter towards mobile telecommunication services in Malaysia. Twitter data was scraped using three keywords, Celcom, Digi, and Maxis, which refer to Malaysia&#39;s top three mobile telecommunication companies. The timeline for the tweets was between December 2020 and January 2021, during the Year End Sales promotions commonly used by these organisations to boost their sales. A corpus-based approach and machine learning models built in RapidMiner were used in this study, namely Support Vector Machine (SVM), Na&#239;ve Bayes, and Deep Learning. The corpus determines whether the sentiment of each tweet is positive, negative, or neutral. The models&#39; performances were compared in terms of accuracy, and the outcome shows that the Deep Learning classifier has the highest performance compared to the other classifiers. The results of this sentiment analysis are visualised for easy understanding.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_29-Customers_Opinions_on_Mobile_Telecommunication_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining in Predicting Student Final Grades on Standardized Indonesia Data Pokok Pendidikan Data Set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121227</link>
        <id>10.14569/IJACSA.2021.0121227</id>
        <doi>10.14569/IJACSA.2021.0121227</doi>
        <lastModDate>2021-12-31T05:43:12.9130000+00:00</lastModDate>
        
        <creator>Nathan Priyasadie</creator>
        
        <creator>Sani Muhammad Isa</creator>
        
        <subject>Educational data mining; student performance; classification models; feature selection; parameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Educational Data Mining has been implemented in predicting students’ final grades in Indonesia. It can be used to improve learning efficiency by paying more attention to students who are predicted to have low scores, but practice shows that each algorithm performs differently depending on the attributes and data set used. This study uses standardized Indonesian student data named Data Pokok Pendidikan to predict the grades of junior high school students. Several prediction techniques, K-Nearest Neighbor, Naive Bayes, Decision Tree, and Support Vector Machine, are compared, with parameter optimization and feature selection applied to each algorithm. Accuracy, precision, recall, and F1-Score show that the algorithms perform differently on the high school data set, but in general Decision Tree with parameter optimization and feature selection outperforms the other classification algorithms, with a peak F1-Score of 61.48%; the most significant attributes in predicting the student final score are the First Semester Natural Science and First Semester Social Science scores.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_27-Educational_Data_Mining_in_Predicting_Student_Final_Grades.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberbullying Detection in Textual Modality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121228</link>
        <id>10.14569/IJACSA.2021.0121228</id>
        <doi>10.14569/IJACSA.2021.0121228</doi>
        <lastModDate>2021-12-31T05:43:12.9130000+00:00</lastModDate>
        
        <creator>Evangeline D</creator>
        
        <creator>Amy S Vadakkan</creator>
        
        <creator>Sachin R S</creator>
        
        <creator>Aakifha Khateeb</creator>
        
        <creator>Bhaskar C</creator>
        
        <subject>Cyberbullying detection; support vector machine (SVM); kNN (k nearest neighbor); logistic regression; random forest classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Cyberbullying is the use of technology to harass, threaten, or target another individual. Online bullying can be particularly damaging and upsetting since it is usually anonymous and it’s often hard to trace the bully. Cyberbullying can sometimes lead to issues like anxiety, depression, shame, and even suicide. Most cyberbullying cases are not revealed to the public, and only a few cases are reported to the legal system. Certain victims do not reveal their bullying experiences out of shame or due to difficult procedures for reporting to the legal system. Our cyberbullying detection system aims to bring cases involving cyberbullying under control by detecting and warning the bully. Such cases are also reported to the appropriate authorities, which can then verify them and take necessary actions depending on the situation. The technology stack used for implementation includes Flask, Scikit-learn, chat application APIs, Firebase, HTML, Javascript, and CSS. The model was tested on classifiers including SVM, KNN, Logistic Regression, and Random Forest, with the F1 score used as the metric to assess the four models. While analyzing the performances of these models, it was observed that the Random Forest classifier outperformed all the other models, achieving an F1 score of 93.48%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_28-Cyberbullying_Detection_in_Textual_Modality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Computational Model to Thematic Typology of Literary Texts: A Concept Mining Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121226</link>
        <id>10.14569/IJACSA.2021.0121226</id>
        <doi>10.14569/IJACSA.2021.0121226</doi>
        <lastModDate>2021-12-31T05:43:12.8800000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Computational linguistics; concept mining; data mining; empirical methodologies; semantic annotators; text clustering; typology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>In recent years, computational linguistic methods have been widely used in literary studies, where they have proven useful in breaking into the mainstream of literary critical scholarship and in addressing different inherent challenges long associated with literary studies. Such computational approaches have revolutionized literary studies through their potential for dealing with large datasets. They have bridged the gap between literary studies and computational and digital applications by integrating these applications, most notably data mining, into the way literary texts are analyzed and processed. Thus, this study seeks to use the potential of computational linguistic methods to propose a computational model for the thematic typology of literary texts. The study adopts concept mining methods using semantic annotators to generate a thematic typology of literary texts and explore their thematic interrelationships through the arrangement of texts by topic, taking the prose fiction texts of Thomas Hardy as an example. Findings indicate that concept mining was useful in extracting the distinctive concepts and revealing the thematic patterns within the selected texts. These thematic patterns are best described by the categories class conflict, Wessex, religion, female suffering, and social realities. It can finally be concluded that computational approaches, as well as scientific and empirical methodologies, are useful adjuncts to literary criticism. Nevertheless, conventional literary criticism and human reasoning remain crucial and irreplaceable by computer-assisted systems.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_26-Towards_a_Computational_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a New Metamodel Approach of Scrum, XP and Ignite Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121225</link>
        <id>10.14569/IJACSA.2021.0121225</id>
        <doi>10.14569/IJACSA.2021.0121225</doi>
        <lastModDate>2021-12-31T05:43:12.8670000+00:00</lastModDate>
        
        <creator>Merzouk Soukaina</creator>
        
        <creator>Elkhalyly Badr</creator>
        
        <creator>Marzak Abdelaziz</creator>
        
        <creator>Sael Nawal</creator>
        
        <subject>Agile software development; scrum; extreme programming; XP; internet of things; IoT; Ignite | IoT Methodology; IoT Methodology; metamodel; MDA; OMG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The agile approach is a philosophy that aims to avoid the problems of the traditional management approach. It concentrates on collaboration, using iterative and incremental development; thanks to agile methodologies, the client receives a first production version (increment) of his software faster. Project needs are influenced by the rapid expansion of technologies, particularly since the emergence of the Internet of Things (IoT), and are becoming larger and more complex. IoT provides a standardization and unification of electronic identities, digital entities, and physical objects. Consequently, interconnected devices can more easily retrieve, store, send, and process data from both the physical and virtual worlds. Scaled methods such as SAFe, LeSS, and SPS are existing methodologies improved for and dedicated to large projects. According to IoT enterprise teams, these methods are tough to adopt and do not consider the physical side of the project, so, based on their managerial and IoT expertise, these teams suggest their own methods (Ignite | IoT Methodology and IoT Methodology). Model Driven Architecture (MDA) was coined by the Object Management Group (OMG) in 2000 to develop perpetual models that are independent of the technical intricacies of the execution platforms. The purpose of this paper is to propose a metamodel for each of the Scrum, XP, and Ignite methodologies.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_25-Toward_a_New_Metamodel_Approach_of_Scrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology-based Decision Support System for Multi-objective Prediction Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121224</link>
        <id>10.14569/IJACSA.2021.0121224</id>
        <doi>10.14569/IJACSA.2021.0121224</doi>
        <lastModDate>2021-12-31T05:43:12.8500000+00:00</lastModDate>
        
        <creator>Touria Hamim</creator>
        
        <creator>Faouzia Benabbou</creator>
        
        <creator>Nawal Sael</creator>
        
        <subject>Profile modeling; student; ontology; machine learning; academic domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Student profile modeling is a topic that continues to attract the interest of both academics and researchers because of its crucial role in the development of predictive and decision support systems. It provides platforms to build intelligent systems such as e-orientation, e-recruitment, recommendation, and prediction systems. The purpose of this research is to propose an ontology-based decision support system that can be used for multi-objective prediction tasks such as prediction of failure/abandonment, orientation, or decision-making. Two major contributions are proposed here: a new domain ontology that models the profile of a student, and a system based on this ontology that performs multiple prediction tasks. The proposed approach relies on the efficiency of the ontology to ensure semantic interoperability and on the benefits of machine learning techniques to build an intelligent system for multipurpose decision support objectives. The proposed system uses the Decision Tree algorithm (C5.0), but other machine learning models can be added if they prove to be more efficient. Furthermore, the performance of the developed method is computed using performance metrics, achieving 83.6% accuracy and 81.9% recall.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_24-An_Ontology_based_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Aesthetic Preferences: Does the Big-Five Matters?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121223</link>
        <id>10.14569/IJACSA.2021.0121223</id>
        <doi>10.14569/IJACSA.2021.0121223</doi>
        <lastModDate>2021-12-31T05:43:12.8170000+00:00</lastModDate>
        
        <creator>Carolyn Salimun</creator>
        
        <creator>Esmadi Abu bin Abu Seman</creator>
        
        <creator>Wan Nooraishya binti Wan Ahmad</creator>
        
        <creator>Zaidatol Haslinda binti Abdullah Sani</creator>
        
        <creator>Saman Shishehchi</creator>
        
        <subject>User experience; aesthetic dimensions; personality traits; big-five</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>User experience is imperative for the success of interactive products. User experience is notably affected by user preferences: the higher the preference, the better the user experience. The way users develop their preferences is closely related to personality traits. However, there is a void in understanding the association between personality traits and aesthetic dimensions that may potentially explain how users develop their preferences. This paper examines the relationship between the Big-Five personality traits (Openness to Experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) and the two dimensions of aesthetics (classical aesthetics and expressive aesthetics). Two hundred twenty participants completed the Big-Five questionnaire and rated their preference for each of ten images of web pages on a 7-point Likert scale. Results show that Openness to Experience, Conscientiousness, Extraversion, and Neuroticism were not significantly correlated with the aesthetic dimensions; only Agreeableness showed a significant (although weak) correlation with both classical and expressive aesthetics. The finding conforms to the literature indicating that personality traits influence the preference for individual design features rather than aesthetic dimensions. In other words, personality traits are inapt predictors of aesthetic dimensions. Therefore, more studies are needed to explore other factors that could potentially help predict aesthetic dimensions.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_23-Predicting_Aesthetic_Preferences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Tourism Recommendation Model: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121222</link>
        <id>10.14569/IJACSA.2021.0121222</id>
        <doi>10.14569/IJACSA.2021.0121222</doi>
        <lastModDate>2021-12-31T05:43:12.8030000+00:00</lastModDate>
        
        <creator>Choirul Huda</creator>
        
        <creator>Arief Ramadhan</creator>
        
        <creator>Agung Trisetyarso</creator>
        
        <creator>Edi Abdurachman</creator>
        
        <creator>Yaya Heryadi</creator>
        
        <subject>Systematic review; tourism; smart tourism; digital tourism; recommender system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The tourism industry has become a potential sector to leverage economic growth, and many attractions are presented on several platforms. Machine learning and data mining are potential technologies to improve tourism services by recommending specific attractions to tourists according to their location and profile. This research applies a systematic literature review to tourism, digital tourism, smart tourism, and recommender systems in tourism. It aims to evaluate the most relevant and accurate techniques in tourism focused on recommendation or similar efforts. Several research questions were defined and translated into search strings. The review identified 41 studies discussing tourism, digital tourism, smart tourism, and recommender systems. All of the literature was reviewed on several aspects, for example the problem addressed, the methodology used, the data used, the strengths, and the limitations that can be opportunities for improvement in future research. Based on the reviewed papers, this study proposes references for further study regarding tourism management, tourist experience, tourist motivation, and tourist recommendation systems. Further research can be conducted with more data, especially for a smart recommender system in tourism using many types of recommendation techniques such as content-based, collaborative filtering, demographic, knowledge-based, community-based, and hybrid recommender systems.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_22-Smart_Tourism_Recommendation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise Cancellation in Computed Tomography Images through Adaptive Multi-Stage Noise Removal Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121221</link>
        <id>10.14569/IJACSA.2021.0121221</id>
        <doi>10.14569/IJACSA.2021.0121221</doi>
        <lastModDate>2021-12-31T05:43:12.7700000+00:00</lastModDate>
        
        <creator>Jenita Subash</creator>
        
        <creator>Kalaivani S</creator>
        
        <subject>De-noising; computer tomography; discrete wavelet transform; crow search optimization; bilateral filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Image de-noising is a noise removal approach that removes noise from a noisy image while protecting significant image features, namely corners, edges, textures, and sharp structures. Computed tomography (CT) images are mainly utilized for medical diagnosis, but noise introduced during acquisition and transmission in CT imaging leads to poor image quality. To overcome this problem, an efficient noise cancellation method for computed tomography images using an adaptive multi-stage noise removal paradigm is proposed. The proposed approach consists of three phases: optimal Discrete Wavelet Transform, first-stage noise removal using the Block Matching and 3D filtering (BM3D) filter, and second-stage noise removal using the bilateral filter (BF). Initially, the Discrete Wavelet Transform (DWT) is applied to the input image to diminish noise in CT images; the coefficient ranges are optimally selected with the help of the Crow Search Optimization (CSO) algorithm. Secondly, to remove the noise present in the bands, the BM3D algorithm is applied. Finally, the bilateral filter is applied to the BM3D output image to further enhance the image. The performance of the proposed methodology is analyzed in terms of Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and Structural Similarity Index (SSIM). Furthermore, the multi-stage noise removal model gives the best PSNR values compared to other techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_21-Noise_Cancellation_in_Computed_Tomography_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DoItRight: An Arabic Gamified Mobile Application to Raise Awareness about the Effect of Littering among Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121220</link>
        <id>10.14569/IJACSA.2021.0121220</id>
        <doi>10.14569/IJACSA.2021.0121220</doi>
        <lastModDate>2021-12-31T05:43:12.7570000+00:00</lastModDate>
        
        <creator>Ayman Alfahid</creator>
        
        <creator>Hind Bitar</creator>
        
        <creator>Mayda Alrige</creator>
        
        <creator>Hend Abeeri</creator>
        
        <creator>Eman Sulami</creator>
        
        <subject>Littering; mobile application; gamification; children intention; raise awareness; behavior change; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Littering contributes significantly to environmental pollution. Previous studies have noted that children are more likely to litter than adults. This target age group can be easily reached through mobile applications and games. Therefore, this study investigates the effect of a gamified application in raising awareness of the impact of littering on the environment. We developed a gamified app called DoItRight to promote environmentally friendly behavior and improve children&#39;s littering behavior. The DoItRight app is in Arabic and targets children between 5 and 13 years old. It is a gamified application that enables kids to learn the importance of picking up litter and dropping it in trash cans. The app was evaluated using the System Usability Scale (SUS), a standardized instrument, which was administered to the target audience. The evaluation showed that the DoItRight app has an SUS score of 93.25, which represents an A+ grade and a percentile range of 96 to 100. This indicates that the DoItRight app is technically usable and can potentially serve the purpose of increasing kids&#39; awareness of the downsides of littering on the environment.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_20-DoItRight_An_Arabic_Gamified_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study on Fake News Detection System using Deep and Machine Learning Ensemble Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121219</link>
        <id>10.14569/IJACSA.2021.0121219</id>
        <doi>10.14569/IJACSA.2021.0121219</doi>
        <lastModDate>2021-12-31T05:43:12.7230000+00:00</lastModDate>
        
        <creator>T V Divya</creator>
        
        <creator>Barnali Gupta Banik</creator>
        
        <subject>Transfer learning; GANS; glove algorithms; word2vec; ensemble techniques; auto encoders; pre-trained models; word embeddings; BERT models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>With the revolution in electronic gadgets in the past few years, information sharing has entered a new era in which news can spread globally in a fraction of a minute, whether through yellow media or satellite communication, without any proper authentication. At the same time, with the rise of different social media platforms, many organizations try to grab people&#39;s attention by creating fake news about celebrities, politicians or politics, branded products, and others. There are three ways fake news is generated. The first is tampering with an image using advanced morphing tools; this is a popular technique for posting phony information about celebrities or in cybercrimes targeting women. The second is reposting old events with new fake content injected into them; for example, some social media platforms, either to increase their TRP ratings or to expand their subscriber base, present old news from years ago as the latest by changing the date, time, location, and other important information, and try to make it go viral across the globe. The third concerns images or videos of real events or places whose content the media presents with a false claim instead of the original one. A few decades back, researchers started working on fake news detection using textual data. More recently, a few researchers have worked on image and text data using traditional and ensemble deep and machine learning algorithms, but these either suffer from overfitting due to insufficient data or are unable to extract the complex semantic relations between documents. The proposed system designs a transfer learning environment in which Neural Style Transfer Learning takes care of the size and quality of the datasets. It also enhances the auto-encoders by customizing the hidden layers to handle complex real-world problems.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_19-An_Empirical_Study_on_Fake_News_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Covid-19 through Cough and Breathing Sounds using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121218</link>
        <id>10.14569/IJACSA.2021.0121218</id>
        <doi>10.14569/IJACSA.2021.0121218</doi>
        <lastModDate>2021-12-31T05:43:12.7100000+00:00</lastModDate>
        
        <creator>Evangeline D</creator>
        
        <creator>Sumukh M Lohit</creator>
        
        <creator>Tarun R</creator>
        
        <creator>Ujwal K C</creator>
        
        <creator>Sai Viswa Sumanth D</creator>
        
        <subject>Coronavirus; cough sounds; mel frequency cepstral coefficients; convolutional neural network; reverse transcription–polymerase chain reaction (RT-PCR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Covid-19 was declared a global pandemic by the WHO due to its high infectivity rate. Medical attention is required to test and diagnose those with Covid-19-like symptoms: they must take an RT-PCR test, which takes about 10-15 hours to return a result and, in some cases, up to 3 days when demand is too high. The majority of cases go unnoticed because people are unwilling to get tested. The commonly used RT-PCR technique requires human contact to obtain the swab samples to be tested. There is also a shortage of testing kits in some areas, creating a need for self-diagnostic testing. This solution is a preliminary analysis. The basic idea is to use sound data, in this case cough, breathing, and speech sounds, to isolate their characteristics and deduce whether they belong to an infected person, based on the trained model&#39;s analysis. An ensemble of convolutional neural networks is used to classify samples based on cough, breathing, and speech recordings; the model also considers symptoms exhibited by the person, such as fever, cold, and muscle pain. The audio samples are pre-processed and converted into Mel spectrograms, and Mel-frequency cepstral coefficients (MFCCs) are obtained and fed as input to the model. The model gave an accuracy of 88.75%, a recall of 71.42%, and an Area Under Curve of 80.62%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_18-Detection_of_Covid_19_through_Cough_and_Breathing_Sounds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Feature Selection Algorithms in Sentiment Analysis for Drug Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121217</link>
        <id>10.14569/IJACSA.2021.0121217</id>
        <doi>10.14569/IJACSA.2021.0121217</doi>
        <lastModDate>2021-12-31T05:43:12.6770000+00:00</lastModDate>
        
        <creator>Siti Rohaidah Ahmad</creator>
        
        <creator>Nurhafizah Moziyana Mohd Yusop</creator>
        
        <creator>Afifah Mohd Asri</creator>
        
        <creator>Mohd Fahmi Muhamad Amran</creator>
        
        <subject>Sentiment analysis; drug reviews; feature selection; metaheuristic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Social media data contain various sources of big data, including data on drugs, diagnoses, treatments, diseases, and indications. Sentiment analysis (SA) is a technology that analyses text-based data using machine learning techniques and Natural Language Processing to interpret and classify emotions in subjective language. Data sources in the medical domain may exist in the form of clinical documents, nurses&#39; letters, drug reviews, MedBlogs, and Slashdot interviews. It is important to analyse and evaluate these types of data sources to identify positive or negative values that could ensure the well-being of the users or patients being treated. Sentiment analysis technology can be used in the medical domain to help identify positive or negative issues, which helps to improve the quality of health services offered to consumers. This paper reviews feature selection algorithms, sentiment classification techniques, and the standard measurements used to assess their performance in previous studies. Combining feature extraction techniques based on Natural Language Processing with machine learning techniques for feature selection can reduce the number of features, while selecting relevant features can improve the performance of sentiment classification. This study also describes the use of metaheuristic algorithms for feature selection in sentiment analysis, which can help achieve higher accuracy in optimal subset selection tasks. This review also identifies previous studies that applied metaheuristic algorithms for feature selection in the medical domain, especially those that used drug review data.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_17-A_Review_of_Feature_Selection_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Images through Cipher Design for Cryptographic Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121216</link>
        <id>10.14569/IJACSA.2021.0121216</id>
        <doi>10.14569/IJACSA.2021.0121216</doi>
        <lastModDate>2021-12-31T05:43:12.6630000+00:00</lastModDate>
        
        <creator>Punya Prabha V</creator>
        
        <creator>M D Nandeesh</creator>
        
        <creator>Tejaswini S</creator>
        
        <subject>Decryption; encryption; Latin square generator; sequence generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The emphasis of this work is image encoding based on permutation and transformations that utilize Latin cube and Latin square image ciphers for both color and gray images. Because multimedia data are widely transmitted over networks and websites, numerous methods have been established for securing this information without compromise. Security of information is required in all areas to ensure that the information sustains privacy and remains available for recovery and governance purposes. Data can be secured by applying the CIA triad (Confidentiality, Integrity, and Availability): confidentiality means the data are kept undisclosed from illegitimate users or sources, integrity means the data remain unaltered by unauthorized parties, and availability means resources are accessible to authorized personnel who need to retrieve the information. Authentication of a person involves identification, conservation of information, and validation of data. Implementing this authentication stores the data in the required format for exchange or transmission in internet applications, so that misuse of confidential and sensitive data can be prevented. To achieve security and maintain confidentiality, cryptographic methods are implemented.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_16-Securing_Images_through_Cipher_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Time Complexity Model for Email Spam Detection using Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121215</link>
        <id>10.14569/IJACSA.2021.0121215</id>
        <doi>10.14569/IJACSA.2021.0121215</doi>
        <lastModDate>2021-12-31T05:43:12.6470000+00:00</lastModDate>
        
        <creator>Zubeda K. Mrisho</creator>
        
        <creator>Jema David Ndibwile</creator>
        
        <creator>Anael Elkana Sam</creator>
        
        <subject>Machine learning; feature selection; feature extraction; parameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Spam emails have recently become a concern on the Internet. Machine learning techniques such as Neural Networks, Na&#239;ve Bayes, and Decision Trees have frequently been used to combat them. Despite their efficiency, time complexity on high-dimensional datasets remains a significant challenge: due to the large number of features, the intricacy of the problem grows exponentially, and existing approaches suffer from a computational burden when thousands of features are used. To reduce time complexity and improve accuracy on high-dimensional datasets, extra steps of feature selection and parameter tuning are necessary. This work recommends a hybrid logistic regression model with a feature selection approach and parameter tuning that can effectively handle high-dimensional datasets. The model employs the Term Frequency-Inverse Document Frequency (TF-IDF) feature extraction method to mitigate the drawbacks of Term Frequency (TF) and obtain equal feature weighting. Using publicly available datasets (Enron and Lingspam), we compared the model&#8217;s performance to that of other contemporary models. The proposed model achieved a low time complexity while maintaining a high spam detection rate of 99.1%.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_15-Low_Time_Complexity_Model_for_Email_Spam_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear Mixed Effect Modelling for Analyzing Prosodic Parameters for Marathi Language Emotions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121214</link>
        <id>10.14569/IJACSA.2021.0121214</id>
        <doi>10.14569/IJACSA.2021.0121214</doi>
        <lastModDate>2021-12-31T05:43:12.6170000+00:00</lastModDate>
        
        <creator>Trupti Harhare</creator>
        
        <creator>Milind Shah</creator>
        
        <subject>Prosodic parameters; Marathi language prosody model; two-way analysis of variance; linear mixed-effect models; R programming language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Along with linguistic messages, prosody is an essential paralinguistic component of emotional speech. Prosodic parameters such as intensity, fundamental frequency (F0), and duration have been studied worldwide to understand the relationship between emotions and the corresponding prosody features for various languages. The Marathi language, however, has received less attention in the evaluation of prosodic aspects of emotional speech. This study examines how different emotions affect suprasegmental properties such as pitch, duration, and intensity in emotional Marathi speech. It investigates the changes in prosodic features based on emotions, gender, speakers, utterances, and other aspects using a database of 440 utterances in happiness, fear, anger, and neutral emotions, recorded by eleven professional Marathi artists in a recording studio. The acoustic analysis of the prosodic features was performed using PRAAT, a speech analysis framework. A statistical study using a two-way Analysis of Variance (two-way ANOVA) explores emotion, gender, and their interaction for mean pitch, mean intensity, and sentence utterance time. In addition, three distinct linear mixed-effect models (LMM), one for each prosodic feature, were designed, with emotion and gender as fixed-effect variables and speakers and sentences as random-effect variables. The relevance of the fixed and random effects on each prosodic variable was verified using likelihood ratio tests that assess goodness of fit. Finally, linear mixed modelling of mean pitch, mean intensity, and sentence duration for emotional Marathi speech was examined using the R programming language.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_14-Linear_Mixed_Effect_Modelling_for_Analyzing_Prosodic_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Similarity Measure for Dynamic Service Discovery and Composition based on Mobile Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121213</link>
        <id>10.14569/IJACSA.2021.0121213</id>
        <doi>10.14569/IJACSA.2021.0121213</doi>
        <lastModDate>2021-12-31T05:43:12.6000000+00:00</lastModDate>
        
        <creator>Naoufal EL ALLALI</creator>
        
        <creator>Mourad FARISS</creator>
        
        <creator>Hakima ASAIDI</creator>
        
        <creator>Mohamed BELLOUKI</creator>
        
        <subject>IO-MATCHING; latent semantic analysis; mobile agents; OWL-S; semantic web services; semantic similarity; semantic relatedness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>With the ever-present competition among companies, the prevalence of web services (WSs) is increasing dramatically. This leads to a diversity of similar services and their evolving nature, which makes the discovery of a relevant service during the composition phase a complex task, since most competing companies aim to discover high-quality services with minimal charges in order to increase their number of customers and their profit. Semantic WSs allow dynamic service discovery through software entities and intelligent agents. However, the solutions provided for the discovery process are limited to performance in terms of responding quickly to requests in real time, without considering constraints such as accuracy in the discovery phase and the quality of the similarity evaluation mechanism. They are usually based on a similarity measure of the distance between concepts in the ontology, rather than considering semantic relationships and the strength of the semantic relationship between concepts in context. In this paper, we propose a novel hybrid semantic similarity method to improve the service discovery process. The hybrid method is applied to an architecture based on mobile agents, where cooperative agents are integrated to facilitate and speed up the discovery process. In the first part of the hybrid method, we combine Latent Semantic Analysis (LSA) with a semantic relatedness measure to avoid term ambiguity and obtain purely semantic relatedness at the level of the service description. The second part, called IO-MATCHING, analyzes the relationships at the level of service inputs/outputs based on subsumption reasoning. Experimental results on a real data set demonstrate that our solution outperforms state-of-the-art approaches in terms of precision, recall, F-measure, and the time consumed by service discovery.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_13-A_Hybrid_Similarity_Measure_for_Dynamic_Service_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trend of Bootstrapping from 2009 to 2016</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121212</link>
        <id>10.14569/IJACSA.2021.0121212</id>
        <doi>10.14569/IJACSA.2021.0121212</doi>
        <lastModDate>2021-12-31T05:43:12.5670000+00:00</lastModDate>
        
        <creator>Paulin Boale Bomolo</creator>
        
        <creator>Eugene Mbuyi Mukendi</creator>
        
        <creator>Simon Ntumba Badibanga</creator>
        
        <subject>Bootstrapping; homomorphic encryption; binary multiplication; logic gates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The pedestal of fully homomorphic encryption is bootstrapping, which allows unlimited processing on encrypted data. This technique is a bottleneck in the practicability of homomorphic encryption. From 2009 to 2016, the execution time of bootstrapping decreased from several hours to a few thousandths of a second for processing a logic gate on two encrypted bits. This paper makes a comparative study of the evolution of bootstrapping during this period. An implementation of multiplication on 16-bit integers on an Intel i7 architecture through three schemes, whose libraries are respectively DGHV, FHEW, and TFHE, corroborates the trend that, to date, the best bit-level bootstrapping is that of TFHE, which executes this processing in 29 seconds, improving on FHEW by a factor of 30, despite the multiplication algorithm used.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_12-Trend_of_Bootstrapping_from_2009_to_2016.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computerization of Local Language Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121211</link>
        <id>10.14569/IJACSA.2021.0121211</id>
        <doi>10.14569/IJACSA.2021.0121211</doi>
        <lastModDate>2021-12-31T05:43:12.5370000+00:00</lastModDate>
        
        <creator>Yusring Sanusi Baso</creator>
        
        <creator>Andi Agussalim</creator>
        
        <subject>Innovative model; language maintenance; Lontara script; Makassarese; local language; hypertext-based application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The objective of this study is to provide an innovative model for language preservation. It is necessary to maintain indigenous languages in order to avoid language death, and script applications for indigenous languages are one of the solutions being pursued. Such a script application facilitates written communication between speakers of indigenous languages. Additionally, the study illustrates the implementation of the Lontara script (Bugis-Makassar local-language letters and characters). This script application is compatible with the Microsoft Windows operating system and the Hypertext Transfer Protocol (HTTP). The study employed the research and development (R&amp;D) approach, following six stages: 1) conducting a requirements analysis to determine the viability of the Bugis-Makassar indigenous languages in everyday life and ways to retain them; 2) designing and constructing the Lontara script with hypertext-based applications; 3) producing the Lontara script with hypertext-based applications; 4) validating the hypertext-based applications through one-to-one testing and small- and large-group testing; 5) revising the Lontara application; and 6) delivering the Lontara application as a finished product. This product is designed to be used in conjunction with other interactive applications.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_11-Computerization_of_Local_Language_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Transformation of Human Resource Processes in Small and Medium Sized Enterprises using Robotic Process Automation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121210</link>
        <id>10.14569/IJACSA.2021.0121210</id>
        <doi>10.14569/IJACSA.2021.0121210</doi>
        <lastModDate>2021-12-31T05:43:12.5200000+00:00</lastModDate>
        
        <creator>Cristina Elena Turcu</creator>
        
        <creator>Corneliu Octavian Turcu</creator>
        
        <subject>Robotic process automation (RPA); small- and medium-sized enterprises (SME); human resource (HR); digital HR; recruitment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The aim of this paper is to obtain data and information on the digital transformation of human resource (HR) processes in small and medium-sized enterprises (SMEs) with the help of robotic process automation (RPA), in order to increase competitiveness in the digital age. Romanian businesses are attempting to close the gap with companies in developed countries by implementing projects that allow the adoption of emerging technologies in HR departments. This paper presents some preliminary findings, resulting from a collaboration between a university and an SME, on the efficient implementation of specific HR processes using RPA. The paper provides a brief introduction to the RPA concept as well as a list of HR processes that can be automated within enterprises, with the benefits brought to the enterprise and employees presented in both qualitative and quantitative terms for each HR process. In addition, a case study on the automatic collection of candidates&#39; documents and the extraction of primary information about them is considered. The problems encountered during implementation are then listed, along with potential solutions. Given the benefits offered, RPA could play an important role in transitioning HR functions into the digital era.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_10-Digital_Transformation_of_Human_Resource_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of National Cyber Security Strategies using Topic Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121209</link>
        <id>10.14569/IJACSA.2021.0121209</id>
        <doi>10.14569/IJACSA.2021.0121209</doi>
        <lastModDate>2021-12-31T05:43:12.5070000+00:00</lastModDate>
        
        <creator>Minkyoung Song</creator>
        
        <creator>Dong Hee Kim</creator>
        
        <creator>Sunha Bae</creator>
        
        <creator>So-Jeong Kim</creator>
        
        <subject>Cybersecurity policy; national cyber security strategy (NCSS); policy analysis; quantitative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Comprehensive comparative analyses of national cyber security strategies (NCSSs) have thus far been limited or complicated by the unique nature of cybersecurity, which combines areas such as technology, industry, economy, and defense in a complex manner. This study aims to characterize the NCSSs of major countries quantitatively, considering the time series, and to identify further cybersecurity agendas for the benefit of NCSS revision in South Korea, by applying topic modelling to the analysis of eight NCSSs from the US, UK, Japan, and EU. As a result, 15 agendas were identified and grouped into four sectors. We determined from the agenda distribution that each country approaches cybersecurity differently. Further agendas worthy of consideration for future NCSS revisions in South Korea were then proposed, based on a comparison of the 15 identified agendas with those of South Korea. This study is significant for cybersecurity policy in that it enables quantitative analysis within a single framework via latent Dirichlet allocation (LDA) topic modelling and derives further cybersecurity agendas for future NCSS revisions in South Korea.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_9-Comparative_Analysis_of_National_Cyber_Security_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gabor Descriptor for Representation of Spatial Feature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121208</link>
        <id>10.14569/IJACSA.2021.0121208</id>
        <doi>10.14569/IJACSA.2021.0121208</doi>
        <lastModDate>2021-12-31T05:43:12.4730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Spatial feature; Gabor wavelet descriptor; Fourier descriptor; wavelet transformation; ADEOS (advanced earth observing satellite); AVNIR (advanced visible and near-infrared radiometer)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>A new spatial feature descriptor based on the Gabor wavelet function is proposed and compared to the Fourier descriptor. Experimental results with an Advanced Earth Observing Satellite (ADEOS) / Advanced Visible and Near-Infrared Radiometer (AVNIR) image show the effectiveness of the proposed method. It is found that the restored image quality, in terms of the root mean square error between the original and restored images, depends on the support length of the mother wavelet and is much better than that obtained with the conventional Fourier descriptor method for spatial feature description.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_8-Gabor_Descriptor_for_Representation_of_Spatial_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Validating Cognitive Diagnosis Models for the Arithmetic Skills of Elementary School Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121207</link>
        <id>10.14569/IJACSA.2021.0121207</id>
        <doi>10.14569/IJACSA.2021.0121207</doi>
        <lastModDate>2021-12-31T05:43:12.4600000+00:00</lastModDate>
        
        <creator>Hyejung Koh</creator>
        
        <creator>Wonjin Jang</creator>
        
        <creator>Yongseok Yoo</creator>
        
        <subject>Cognitive diagnosis model; attribute hierarchy model; cognitive hierarchy; model validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Cognitive diagnosis models (CDMs) have been shown to provide detailed evaluations of students’ achievement in terms of proficiency in individual cognitive attributes. The attribute hierarchy model (AHM), a variant of CDM, uses the hierarchical structure of those cognitive attributes to provide more accurate and interpretable measurements of learning achievement. However, the advantages of this richer model come at the expense of increased difficulty in designing the hierarchy of cognitive attributes and developing corresponding test sets. In this study, we propose quantitative tools for validating the hierarchical structures of cognitive attributes. First, a method to quantitatively compare alternative cognitive hierarchies is established by computing the inconsistency between a given cognitive hierarchy and students’ responses. Then, this method is generalized to validate a cognitive hierarchy without real responses. Numerical simulations were performed starting from an AHM designed by experts and responses of elementary school students. Results show that the expert-designed cognitive hierarchy explains the students’ responses better than most, but not all, of the alternative hierarchies; a superior cognitive hierarchy is identified. This discrepancy is discussed in terms of internalization of cognitive attributes.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_7-On_Validating_Cognitive_Diagnosis_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Design Framework based on TRIZ Scientific Effects and Patent Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121206</link>
        <id>10.14569/IJACSA.2021.0121206</id>
        <doi>10.14569/IJACSA.2021.0121206</doi>
        <lastModDate>2021-12-31T05:43:12.4430000+00:00</lastModDate>
        
        <creator>E-Ming Chan</creator>
        
        <creator>Ah-Lian Kor</creator>
        
        <creator>Kok Weng Ng</creator>
        
        <creator>Mei Choo Ang</creator>
        
        <creator>Amelia Natasya Abdul Wahab</creator>
        
        <subject>TRIZ; patent mining; natural language processing; product design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Conceptual design represents a critical initial design stage that involves both technical and creative thinking to develop and derive concept solutions that meet design requirements. TRIZ Scientific Effects (TRIZSE) is a TRIZ tool that uses a database of functional, transformation, and parameterization scientific effects to provide conceptual solutions to engineering and design problems. Although TRIZSE has been introduced to help engineers solve design problems in the conceptual design phase, the current TRIZSE database presents general scientific concept solutions with only a few patent-based examples, which are very abstract and have not been updated since its introduction. This research work explores the derivation of a novel framework that integrates TRIZ scientific effects with current patent information (USPTO) using data mining techniques, to develop a better design support tool that assists engineers in deriving innovative design concept solutions. This novel framework provides better, updated, relevant, and specific examples of conceptual design ideas from patents. The research used Python as the base programming platform to develop a conceptual design software prototype based on this new framework, in which both the TRIZSE database and the USPTO patents database are searched and processed to build a Doc2Vec similarity model. A case study on the corrosion of copper pipelines by seawater is presented to validate this novel framework, and results from the novel TRIZSE database and patent examples are presented and discussed in this paper. The results of the case study indicate that the Doc2Vec model is able to perform its intended similarity queries. The patent examples from the case study results warrant further consideration in conceptual design activities.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_6-A_Conceptual_Design_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Changing Communication Path to Maintain Connectivity of Mobile Robots in Multi-Robot System using Multistage Relay Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121205</link>
        <id>10.14569/IJACSA.2021.0121205</id>
        <doi>10.14569/IJACSA.2021.0121205</doi>
        <lastModDate>2021-12-31T05:43:12.4130000+00:00</lastModDate>
        
        <creator>Ryo Odake</creator>
        
        <creator>Kei Sawai</creator>
        
        <subject>Multi-robot; multistage relay network; communication connectivity; changing communication path</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Mobile robots are being increasingly used to gather information from disaster sites and prevent further damage in disaster areas. Previous studies discussed a multi-robot system that uses a multistage relay backbone network to gather information in a closed space after a disaster. In this system, the mobile robot expands its search range by switching the connected nodes. It is necessary to maintain the communication quality required for teleoperation of the mobile robot and to send and receive packets between the operator PC and the mobile robot. However, the mobile robot can become isolated when the communication path after changing nodes cannot maintain the communication quality required for teleoperation. This paper proposes a method to change the communication path of a mobile robot while maintaining its communication connectivity. In the proposed method, the mobile robot changes its route while maintaining communication connectivity, without any communication loss time, by connecting to two nodes.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_5-Changing_Communication_Path_to_Maintain_Connectivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Model for Artifacts Marking in EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121204</link>
        <id>10.14569/IJACSA.2021.0121204</id>
        <doi>10.14569/IJACSA.2021.0121204</doi>
        <lastModDate>2021-12-31T05:43:12.3800000+00:00</lastModDate>
        
        <creator>Olga Komisaruk</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>Artifacts in EEG signal; neural network model; recurrent neural network; U-net architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>One of the main methods for studying the holistic activity of the human brain is electroencephalography (EEG). However, artifacts such as eye movements, blinks, heart activity, and muscle activity interfere with the cerebral activity recorded in the EEG signal. This paper describes the development of an intelligent neural network model aimed at detecting artifacts in EEG signals. A series of experiments was conducted to investigate the performance of different neural network architectures for the task of artifact detection, and performance rates for the different ML methods were obtained. A neural network model based on the U-net architecture with recurrent network elements was developed. The system detects artifacts in 128-channel EEG signals with 70% accuracy and can be used as an auxiliary instrument for EEG signal analysis.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_4-Neural_Network_Model_for_Artifacts_Marking_in_EEG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multifractal Analysis of Heart Rate Variability by Applying Wavelet Transform Modulus Maxima Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121203</link>
        <id>10.14569/IJACSA.2021.0121203</id>
        <doi>10.14569/IJACSA.2021.0121203</doi>
        <lastModDate>2021-12-31T05:43:12.3500000+00:00</lastModDate>
        
        <creator>Evgeniya Gospodinova</creator>
        
        <creator>Galya Georgieva-Tsaneva</creator>
        
        <creator>Penio Lebamovski</creator>
        
        <subject>RR time series; heart rate variability; wavelet transform modulus maxima method; monofractal; multifractal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>The analysis of heart rate variability is based on the intervals between successive heartbeats; from it, information about a person&#39;s functional state can be obtained and the dynamics of its change can be traced. Nonlinear dynamics methods provide additional prognostic information about the patient&#39;s health, complementing traditional analyses, and are considered potentially promising tools for assessing heart rate variability. In this article, studies were carried out to identify the mono- and multifractal properties of two groups of people, healthy controls and patients with arrhythmia, using the Wavelet Transform Modulus Maxima Method. The results show that for healthy subjects the multifractal spectrum is broader than the spectrum of patients with arrhythmia. The value of the Hurst exponent is lower in healthy controls, while in patients with arrhythmia this parameter tends to one. For the healthy subjects, the scaling exponent showed nonlinear behaviour, while for patients with arrhythmia it was linear. This indicates that heart rate variability in healthy controls has multifractal behaviour, while that of patients with arrhythmia has monofractal behaviour. This finding may be useful in diagnosing subjects with cardiovascular disease, as well as in predicting future diseases, as heart rate variability changes at the slightest deviation in a subject&#39;s health status, before the onset of relevant signs of disease.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_3-Multifractal_Analysis_of_Heart_Rate_Variability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Feature Engineering Framework for Deep Learning in Financial Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121202</link>
        <id>10.14569/IJACSA.2021.0121202</id>
        <doi>10.14569/IJACSA.2021.0121202</doi>
        <lastModDate>2021-12-31T05:43:12.3330000+00:00</lastModDate>
        
        <creator>Chie Ikeda</creator>
        
        <creator>Karim Ouazzane</creator>
        
        <creator>Qicheng Yu</creator>
        
        <creator>Svetla Hubenova</creator>
        
        <subject>Financial fraud; online banking; feature engineering; unbalanced class data; deep learning; autoencoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>Total losses through online banking in the United Kingdom have increased because fraudulent techniques have progressed and now use advanced technology. Relying on historical transaction data alone limits the discovery of the varied patterns of fraudsters. An autoencoder offers a strong possibility of discovering fraudulent actions without being affected by the unbalanced fraud class data. Although the autoencoder model uses only the majority class data, our hypothesis is that if the original data itself contains various feature vectors related to transactions before being input to the autoencoder, then the performance of the detection model is improved. A new feature engineering framework is built that can create and select effective features for deep learning in remote banking fraud detection. Based on our proposed framework [19], new features have been created using feature engineering methods that select effective features based on their importance. In the experiment, a real-life transaction dataset provided by a private bank in Europe has been used, and autoencoder models were built with three different types of datasets: the original data, the created features, and the selected effective features. We also adjusted the threshold values (1 and 4) in the autoencoder and evaluated them with the different types of datasets. The results demonstrate that, using the new framework, the deep learning models with the selected features are significantly better than those with the original data.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_2-New_Feature_Engineering_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Augmented Breast Tumors Classification using Magnetic Resonance Imaging Histograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121201</link>
        <id>10.14569/IJACSA.2021.0121201</id>
        <doi>10.14569/IJACSA.2021.0121201</doi>
        <lastModDate>2021-12-31T05:43:12.2700000+00:00</lastModDate>
        
        <creator>Ahmed M. Sayed</creator>
        
        <subject>Tumor classification; histogram analysis; magnetic resonance imaging; breast cancer; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(12), 2021</description>
        <description>At present, the breast cancer survival rate varies significantly with the stage at which the cancer was first detected. It is crucial to achieve early detection of malignant tumors to reduce their negative effects. Magnetic resonance imaging (MRI) is currently an important imaging modality in the detection of breast tumors. A need exists to develop computer-aided methods that provide early diagnosis of malignancy. In this study, I present machine learning models utilizing new image histogram features based on the pixels’ least significant bit. The models were first trained on an MRI breast dataset that included 227 images captured using the short TI inversion recovery (STIR) sequence and diagnosed as either benign or malignant. Three data classification methods were utilized to differentiate between the tumor classes: Discriminant Analysis (DA), K-Nearest Neighbors (KNN), and Random Forest (RF). Algorithm testing was performed on a completely different dataset that included another 186 MRI STIR images showing breast tumors with verified biopsy diagnostics. A significant tumor classification efficiency was found, as judged by the pathological diagnosis. Classification accuracy was calculated as 94.1% for DA, 94.6% for KNN, and 80.6% for RF. Receiver operating curves also showed significant classification performance. The proposed tumor classification techniques can be used as non-invasive and fast diagnostic tools for breast tumors, with the capability of significantly reducing the false errors associated with common MRI imaging-based diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume12No12/Paper_1-Machine_Learning_Augmented_Breast_Tumors_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Algorithm Utilizing Deep Learning for Enhanced Arabic Lip Reading Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121192</link>
        <id>10.14569/IJACSA.2021.0121192</id>
        <doi>10.14569/IJACSA.2021.0121192</doi>
        <lastModDate>2021-12-06T10:24:20.8870000+00:00</lastModDate>
        
        <creator>Doaa Sami Khafaga</creator>
        
        <creator>Hanan A. Hosni Mahmoud</creator>
        
        <creator>Norah S. Alghamdi</creator>
        
        <creator>Amani A. Albraikan</creator>
        
        <subject>Lip reading; deep CNN; deep learning; recognition; viseme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Computerized lip reading is the science of translating visemes, i.e., lip movements without sound, into written text. Video processing is applied for the recognition of those visemes. Previous research developed automated computerized lip reading recognition systems to serve as hearing-impaired aids. Such automation faces many challenges, including insufficient training datasets and speaker dependency. Real-time applications, which must respond within a specific time period, are also widely required; for human computer interaction, response time is measured by the number of elapsed video frames, and video processing of lip reading necessitates real-time implementation. Applications of viseme recognition include aids for deaf people, video games with human computer interaction, and surveillance systems. In this paper, a real-time viseme recognition system is introduced. To enhance existing methods and overcome these pitfalls, this paper proposes a computerized lip reading technique based on feature extraction. We utilized block arrangement techniques to reach a near-optimal appearance feature extraction technique, and a Deep Neural Network is utilized to enhance recognition. The benchmark SAVE dataset for Arabic visemes is employed in this research, and high viseme recognition accuracies are achieved. The described computerized lip reading recognition technique is advantageous for the hearing impaired and for other speakers in noisy environments.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_92-Novel_Algorithm_Utilizing_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing the Adoption of Cyber Security Standards Among Public Listed Companies in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121191</link>
        <id>10.14569/IJACSA.2021.0121191</id>
        <doi>10.14569/IJACSA.2021.0121191</doi>
        <lastModDate>2021-11-30T13:01:04.2530000+00:00</lastModDate>
        
        <creator>Mohamed Abdalla</creator>
        
        <creator>Muath Jarrah</creator>
        
        <creator>Ahmed Abu-Khadrah</creator>
        
        <creator>Yusri bin Arshad</creator>
        
        <subject>Cyber security; human factor; cyberspace; cyber security standards</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Employees’ failure to adhere to their organization’s cyber security policies contributes to most cyber incidents. To secure information security systems, companies need to communicate behavioral and technical solutions to their employees, because the fragile human factor plays a critically significant role in securing cyber systems. The necessity to safeguard information systems has sped up the evolution of the present approach to cyber security, which should be based on adequately adopting cyber security standards to secure a business enterprise’s assets and users in cyberspace. This paper studies factors influencing the adoption of cyber security standards among public listed companies in Malaysia, through an online survey distributed among 275 public listed companies. The findings indicated that expected related benefits and perceived ease of use had a significant impact on the adoption of cyber security standards. On the other hand, perceived security had an important moderating influence on the relationship between organizational factors and the adoption of cyber security standards.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_91-Factors_Influencing_the_Adoption_of_Cyber_Security_Standards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of HSQL: A SQL-like language for Data Analysis in Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121190</link>
        <id>10.14569/IJACSA.2021.0121190</id>
        <doi>10.14569/IJACSA.2021.0121190</doi>
        <lastModDate>2021-11-30T13:01:04.2370000+00:00</lastModDate>
        
        <creator>Anurag Singh Bhadauria</creator>
        
        <creator>Atreya Bain</creator>
        
        <creator>Jyoti Shetty</creator>
        
        <creator>Shobha G</creator>
        
        <creator>Arjuna Chala</creator>
        
        <creator>Jeremy Clements</creator>
        
        <subject>ANTLR4; big data; context free grammar; distributed systems; HPCC; Javascript; language; machine learning; Parser; SQL; transpiler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Today we are experiencing a substantial increase in the use of data in various fields, and this has necessitated the use of distributed systems to consume and process Big Data. Machine learning tends to benefit from the usage of Big Data, and the models generated from such techniques tend to be more effective. However, there is a steep learning curve in getting used to handling Big Data, as traditional data management tools fail to perform well. Distributed systems, in which the task of data processing is split amongst various nodes in clusters, have become popular. SQL is a database management language popular among data scientists. It is often given second-class support, where SQL can be embedded into a primary language of use (e.g. SQL in Scala for Spark); this allows for using SQL, but one still needs to know the primary language of the platform (Scala, as per the example, or ECL in HPCC Systems). It may also be present as a supported language. In either case, useful tooling such as visualizing data and creating and using machine learning models becomes difficult, as the user needs to fall back to the primary language of the system. In the proposed work, a new SQL-like language, HSQL, was developed for HPCC Systems, an open-source distributed systems solution, to allow new users to get used to its distributed architecture and the ECL language with which it primarily operates (and which was chosen as the compilation target). Additionally, a program that translates HSQL-based programs to ECL was made. HSQL was made to be completely inter-compatible with ECL programs, and it provides a compact and easy-to-comprehend SQL-like syntax for performing general data analysis, creating machine learning models, and producing visualizations, while allowing a modular structure for such programs.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_90-Design_and_Implementation_of_HSQL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inclusive Education: Implementation of a Mobile Application for Blind Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121189</link>
        <id>10.14569/IJACSA.2021.0121189</id>
        <doi>10.14569/IJACSA.2021.0121189</doi>
        <lastModDate>2021-11-30T13:01:04.2230000+00:00</lastModDate>
        
        <creator>Alejandro Boza-Chua</creator>
        
        <creator>Karen Gabriel-Gonzales</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Blind; disability; educational inclusion; mobile application; scrum methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Currently, the world is going through an era of changes in the education sector, but most Latin American countries are lagging, especially Peru, which lacks the technological tools that would allow it to reach an adequate level of inclusion of disabled students, particularly blind students, who account for 60% of the students who drop out of school for lack of education suited to their needs. Consequently, the present research work aims to develop a mobile application oriented toward the benefit of inclusive education for blind students. The agile Scrum methodology was used for the development of this project, executed in 5 phases. The requirements for the mobile application were obtained through a questionnaire given to 25 parents of visually impaired students, allowing the development of a mobile application that meets the quality of inclusive education and can be applied in the education sector. Finally, as a result of the research work, a satisfaction survey was conducted with 50 parents in which the application was evaluated, obtaining 90% acceptance and satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_89-Inclusive_Education_Implementation_of_a_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Near-ground Measurement and Modeling for Archaeological Park of Pisac in Cusco for LoRa Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121187</link>
        <id>10.14569/IJACSA.2021.0121187</id>
        <doi>10.14569/IJACSA.2021.0121187</doi>
        <lastModDate>2021-11-30T13:01:04.1900000+00:00</lastModDate>
        
        <creator>Yhon D. Lezama</creator>
        
        <creator>Jinmi Lezama</creator>
        
        <creator>Jorge Arizaca-Cusicuna</creator>
        
        <subject>Archaeological park; LoRa; propagation model; statistics; near-ground</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>This research work presents the details of near-ground received power measurements, and a model based on them, inside the archaeological park of Pisac in the city of Cusco. The measurements were performed at a working frequency of 920 MHz in the industrial, scientific, and medical (ISM) band, using transceiver devices with LoRa technology. The power of the received signal is obtained while the transmitter moves at a constant speed of 0.4 m/s, to characterize the fading that occurs within this archaeological park. A moving average filter separates the fading into large and small scale; linear regression is then applied to the filtered signal to obtain a model that characterizes the propagation loss exponent and the shadowing factor. In addition, the small-scale fading is characterized according to its probability distribution, with the statistical parameters of the distributions obtained from the small-scale measurements. We also measured variations of the receiving antenna height at 20 cm, 50 cm, 80 cm, and 120 cm above the ground, finding that the height has a strong influence on the propagation loss exponent, while the shadowing shows a smaller variation. The model obtained is validated by the coefficient of determination and the root mean square error (RMSE) value.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_87-Near_ground_Measurement_and_Modeling_for_Archaeological_Park.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Predictions through Machine Learning for Sars-Cov-2 Forecasting in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121188</link>
        <id>10.14569/IJACSA.2021.0121188</id>
        <doi>10.14569/IJACSA.2021.0121188</doi>
        <lastModDate>2021-11-30T13:01:04.1900000+00:00</lastModDate>
        
        <creator>Shal&#180;om Adonai Huaraz Morales</creator>
        
        <creator>Marissel Fabiola Mio Antayhua</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Forecast; laboratory methods; machine learning; Python; SARS-COV-2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The SARS-COV-2 virus of the coronavirus family was identified in 2019. It is a type of virus that infects humans and some animals; in Peru it has seriously affected the population, causing many deaths and leading people to be tested to rule out contagion using the laboratory methods recommended by the national government. This research therefore applied the data science methodology, with the objective of predicting which types of people are infected with SARS-COV-2 across the regions of Peru, as identified through laboratory methods. The data were taken from the PNDA "data bank" as a CSV file, originating from the INS and the CDC of MINSA. Machine learning was developed with the decision tree algorithm, coded in the Python language using the Anaconda distribution together with Jupyter Notebook, a client-server application. The results generated by this research show that it was possible to identify the types of individuals affected by SARS-COV-2. These results can help prevention entities apply the corresponding preventive measures against SARS-COV-2 in a more focused way.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_88-Development_of_Predictions_through_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast and Efficient Algorithm for Outlier Detection Over Data Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121185</link>
        <id>10.14569/IJACSA.2021.0121185</id>
        <doi>10.14569/IJACSA.2021.0121185</doi>
        <lastModDate>2021-11-30T13:01:04.0970000+00:00</lastModDate>
        
        <creator>Mosab Hassaan</creator>
        
        <creator>Hend Maher</creator>
        
        <creator>Karam Gouda</creator>
        
        <subject>Data mining; outlier detection; data streams; density-based approach; clustering-based approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Outlier detection over data streams is an important task in data mining. It has various applications such as fraud detection, public health, and computer network security. Many approaches have been proposed for outlier detection over data streams, such as distance-, clustering-, density-, and learning-based approaches. In this paper, we are interested in density-based outlier detection over data streams. Specifically, we propose an improvement of DILOF, a recent density-based algorithm. We observed that the main disadvantage of DILOF is its summarization method, which takes a lot of time and significantly degrades the algorithm's accuracy. Our new algorithm, called DILOFC, utilizes an efficient summarization method. Our performance study shows that DILOFC outperforms DILOF in terms of total response time and outlier detection accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_85-A_Fast_and_Efficient_Algorithm_for_Outlier_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Business Model to Optimise Sales through Digital Marketing in a Peruvian Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121184</link>
        <id>10.14569/IJACSA.2021.0121184</id>
        <doi>10.14569/IJACSA.2021.0121184</doi>
        <lastModDate>2021-11-30T13:01:04.0670000+00:00</lastModDate>
        
        <creator>Misael Lazo-Amado</creator>
        
        <creator>Leoncio Cueva-Ruiz</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Design thinking; DesignScrum methodology; digital marketing; e-Business; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The COVID-19 pandemic has affected the Peruvian market, generating a great loss in sales in Peruvian companies. The objective of the research is to develop a model to optimize sales through the use of digital marketing in a Peruvian company. The chosen methodology is DesignScrum, a hybrid of Scrum and Design Thinking with 10 phases (empathise, define, ideate, planning meeting, sprint backlog, daily meeting, sprint review, sprint retrospective, prototype, and testing); the MarvelApp tool was used to create the prototype. The results are obtained after the completion of each sprint review, showing in detail the progress made in each sprint, while the retrospective evaluated the development of the project to enable continuous improvement in the next product. Furthermore, a prototype was made with the MarvelApp application, which shows the e-business model. Testing was then done through a survey in which customers gave their opinions about the prototype, and finally the digital marketing proposal was expressed as a model explaining the interconnected tools used to attract new customers. The conclusion is the construction of a digital marketing model, according to the needs of the context, to improve the sales of the company through e-business.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_84-e_Business_Model_to_Optimise_Sales_through_Digital_Marketing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Criteria Decision-Making Approach for Selection of Requirements Elicitation Techniques based on the Best-Worst Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121183</link>
        <id>10.14569/IJACSA.2021.0121183</id>
        <doi>10.14569/IJACSA.2021.0121183</doi>
        <lastModDate>2021-11-30T13:01:04.0500000+00:00</lastModDate>
        
        <creator>Abdulmajeed Aljuhani</creator>
        
        <subject>Requirements elicitation; elicitation techniques; decision support methods; Best-Worst Method; BWM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>During the requirements elicitation stage, requirements engineers gather system requirements and drive stakeholders to convey needs and desired software functionality. The elicitation techniques used to acquire software requirements significantly impact the quality of elicited requirements. Several elicitation techniques have been proposed for the Requirement Engineering (RE) process; however, these techniques are rarely used in reality due to a lack of empirical and relative appraisals to assist software team members in deciding on the most appropriate technique. Requirement engineers encounter difficulty in deciding on the suitable elicitation technique to adopt for a certain software project. This difficulty is due to a lack of knowledge regarding the available elicitation techniques, their efficacy, and how appropriate they are for a certain project. According to the literature, requirements engineering processes benefit from the use of Multi-Criteria Decision-Making (MCDM) approaches within particular contexts. An optimal structure is constitutionally presented within the area of the requirements engineering process; hence, the demonstration of a robust decision-making method in the requirements engineering process should motivate a higher level of satisfaction with software projects developed in this way. This study proposes an approach for using the MCDM method in the requirements engineering process. The study contains a model for investigating the selection of an appropriate elicitation technique based on a decision-making method, namely, the Best-Worst Method (BWM). The findings of the proposed model demonstrate the BWM’s power in solving complex decision problems involving several criteria and alternatives.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_83-Multi_Criteria_Decision_Making_Approach_for_Selection_of_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reliable Lightweight Trust Evaluation Scheme for IoT Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121182</link>
        <id>10.14569/IJACSA.2021.0121182</id>
        <doi>10.14569/IJACSA.2021.0121182</doi>
        <lastModDate>2021-11-30T13:01:04.0370000+00:00</lastModDate>
        
        <creator>Hamad Aldawsari</creator>
        
        <creator>Abdel Monim Artoli</creator>
        
        <subject>IoT security; clustering; trust; energy efficient algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The rapid development of smart devices and the consequent demand for their reliability have posed many challenges limiting their versatility. One of the most significant challenges is safeguarding the widespread network of sensors and devices within harsh remote environments. Numerous trust schemes have been proposed to overcome related IoT security concerns. However, most of these schemes are not lightweight and consequently are not energy-efficient. This paper proposes a reliable lightweight trust evaluation scheme (RTE) to mitigate the malicious behavior of nodes within IoT networks. The nodes are grouped into a set of clusters, each having a cluster head (CH), while cluster members are categorized by evaluating their associated residual energy. Nodes with residual energy lower than a threshold (determined by the base station) are suspended until they recover and regain their activity. The computations are handled by the CH, which is elected by an algorithm according to its energy and coverage degree in order to optimize energy consumption in the network. For validation and performance evaluation, the proposed RTE scheme was compared to three recent schemes in its category. The obtained results reveal that the proposed RTE scheme outperforms all of them in terms of detection rate, trust evaluation time, and energy efficiency.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_82-A_Reliable_Lightweight_Trust_Evaluation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CDRA: A Community Detection based Routing Algorithm for Link Failure Recovery in Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121181</link>
        <id>10.14569/IJACSA.2021.0121181</id>
        <doi>10.14569/IJACSA.2021.0121181</doi>
        <lastModDate>2021-11-30T13:01:04.0200000+00:00</lastModDate>
        
        <creator>Muhammad Yunis Daha</creator>
        
        <creator>Mohd Soperi Mohd Zahid</creator>
        
        <creator>Babangida Isyaku</creator>
        
        <creator>Abdussalam Ahmed Alashhab</creator>
        
        <subject>Software Defined Network (SDN); community detection methods; CDRA; link failure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The increase in size and complexity of the Internet has led to the introduction of Software Defined Networking (SDN). SDN is a new networking paradigm that breaks the limitations of traditional IP networks and upgrades current network infrastructures. However, as in traditional IP networks, network failures may also occur in SDN. Multiple research studies have addressed this problem using a variety of techniques; among them, the community detection method is one failure recovery technique for SDN. However, this technique has not considered the specific problems of multiple-link multi-community failure and inter-community link failure scenarios. This paper presents a community detection-based routing algorithm (CDRA) for link failure recovery in SDN. The proposed CDRA scheme efficiently deals with single-link intra-community failure scenarios and multiple-link multi-community failure scenarios, and is also able to handle inter-community link failure scenarios in SDN. Extensive simulations are performed to evaluate the performance of the proposed CDRA scheme. The simulation results show that, compared to the Dijkstra-based general recovery algorithm, the proposed CDRA scheme reduces average round trip time by 35.73%, average data packet loss by 1.26%, and average end-to-end delay by 49.3%, and it can also be used on a large-scale network platform.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_81-CDRA_A_Community_Detection_based_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Security Static Analysis False Alerts Handling Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121180</link>
        <id>10.14569/IJACSA.2021.0121180</id>
        <doi>10.14569/IJACSA.2021.0121180</doi>
        <lastModDate>2021-11-30T13:01:03.9870000+00:00</lastModDate>
        
        <creator>Aymen Akremi</creator>
        
        <subject>Software security; static analysis; false alert reduction; source code dataset; security bugs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>False Positive Alerts (FPA), generated by Static Analyzer Tools (SAT), reduce the effectiveness of automatic code review, causing such tools to be underused in practice. Researchers conduct many tests to improve SAT accuracy while keeping the FPA rate low. They use different simulated and production datasets to validate their proposed methods. This paper surveys recent approaches dealing with FPA filtering; it compares them and discusses their usefulness. It also studies the datasets used to validate the identified methods and shows their effectiveness in covering most program defects. This study focuses mainly on the security bugs covered by the datasets and handled by the existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_80-Software_Security_Static_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swapping-based Data Sanitization Method for Hiding Sensitive Frequent Itemset in Transaction Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121179</link>
        <id>10.14569/IJACSA.2021.0121179</id>
        <doi>10.14569/IJACSA.2021.0121179</doi>
        <lastModDate>2021-11-30T13:01:03.9400000+00:00</lastModDate>
        
        <creator>Dedi Gunawan</creator>
        
        <creator>Yusuf Sulistyo Nugroho</creator>
        
        <creator>Maryam</creator>
        
        <subject>Transaction database; data sanitization; data mining; sensitive frequent itemset; swapping-based method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Sharing a transaction database with other parties for exploring valuable information becomes more recognized by business institutions, i.e., retails and supermarkets. It offers various benefits for the institutions, such as finding customer shopping behavior and frequently bought items, known as frequent itemsets. Due to the importance of such information, some institutions may consider certain frequent itemsets as sensitive information that should be kept private. Therefore, prior to handling a database, the institutions should consider privacy preserving data mining (PPDM) techniques for preventing sensitive information breaches. Presently, several PPDM methods, such as item suppression-based methods and item insertion-based methods have been developed. Unfortunately, the methods result in significant changes to the database and induce some side effects such as hiding failure, significant data dissimilarity, misses cost, and artificial frequent itemset occurrence. In this paper, we propose a swapping-based data sanitization method that can hide the sensitive frequent itemset while at the same time it can minimize the side effects of the data sanitization process. Experimental results indicate that the proposed method outperforms existing methods in terms of minimizing the side effects.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_79-Swapping_based_Data_Sanitization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sign Language Gloss Translation using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121178</link>
        <id>10.14569/IJACSA.2021.0121178</id>
        <doi>10.14569/IJACSA.2021.0121178</doi>
        <lastModDate>2021-11-30T13:01:03.9100000+00:00</lastModDate>
        
        <creator>Mohamed Amin</creator>
        
        <creator>Hesahm Hefny</creator>
        
        <creator>Ammar Mohammed</creator>
        
        <subject>Sequence to sequence model; neural machine translation; sign language; deep learning; LSTM; GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Converting sign language to a form of natural language is one of the recent areas of the machine learning domain. Many research efforts have focused on categorizing sign language into gesture or facial recognition. However, these efforts ignore the linguistic structure and the context of natural sentences. Traditional translation methods have low translation quality, poor scalability of their underlying models, and are time-consuming. The contribution of this paper is twofold. First, it proposes a deep learning approach for bidirectional translation using GRU and LSTM. In each of the proposed models, Bahdanau and Luong’s attention mechanisms are used. Second, the paper experiments with the proposed models on two sign language corpora: namely, ASLG-PC12 and Phoenix-2014T. The experiment conducted on 16 models reveals that the proposed model outperforms previous work on the same corpus. The results on the ASLG-PC12 corpus, when translating from text to gloss, reveal that the GRU model with Bahdanau attention gives the best result with ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score 94.37% and BLEU (Bilingual Evaluation Understudy)-4 score 83.98%. When translating from gloss to text, the results also show that the GRU model with Bahdanau attention achieves the best result with ROUGE score 87.31% and BLEU-4 66.59%. On the Phoenix-2014T corpus, the results of text to gloss translation show that the GRU model with Bahdanau attention gives the best result in ROUGE with a score of 42.96%, while the GRU model with Luong attention gives the best result in BLEU-4 with 10.53%. When translating from gloss to text, the results report that the GRU model with Luong attention achieves the best result in ROUGE with a score of 45.69% and BLEU-4 with a score of 19.56%.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_78-Sign_Language_Gloss_Translation_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Normalisation of Indonesian-English Code-Mixed Text and its Effect on Emotion Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121177</link>
        <id>10.14569/IJACSA.2021.0121177</id>
        <doi>10.14569/IJACSA.2021.0121177</doi>
        <lastModDate>2021-11-30T13:01:03.8930000+00:00</lastModDate>
        
        <creator>Evi Yulianti</creator>
        
        <creator>Ajmal Kurnia</creator>
        
        <creator>Mirna Adriani</creator>
        
        <creator>Yoppy Setyo Duto</creator>
        
        <subject>Code-mixed normalisation; Indonesian; English; emotion classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Usage of code-mixed text has increased in recent years among Indonesian internet users, who often mix Indonesian-language with English-language text. Normalisation of this code-mixed text into Indonesian needs to be performed to capture the meaning of English parts of the text and process them effectively. We improve a state-of-the-art code-mixed Indonesian-English normalisation system by modifying its pipeline modules. We further analyse the effect of code-mixed normalisation on emotion classification tasks. Our approach significantly improved on a state-of-the-art Indonesian-English code-mixed text normalisation system in both the individual pipeline modules and the overall system. The new feature set in the language identification module showed an improvement of 4.26% in terms of F1 score. The combination of machine translation and ruleset in the lexical normalisation module improved BLEU score by 25.22% and lowered WER by 62.49%. The use of context in the translation module improved BLEU score by 2.5% and lowered WER by 8.84%. The effectiveness of the overall pipeline normalisation system increased by 32.11% and 33.82%, in terms of BLEU score and WER, respectively. Code-mixed normalisation also improved the accuracy of emotion classification by up to 37.74% in terms of F1 score.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_77-Normalisation_of_Indonesian_English_Code_Mixed_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electronic Commerce Product Recommendation using Enhanced Conjoint Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121176</link>
        <id>10.14569/IJACSA.2021.0121176</id>
        <doi>10.14569/IJACSA.2021.0121176</doi>
        <lastModDate>2021-11-30T13:01:03.8630000+00:00</lastModDate>
        
        <creator>Andrew Brian Osmond</creator>
        
        <creator>Fadhil Hidayat</creator>
        
        <creator>Suhono Harso Supangkat</creator>
        
        <subject>Enhanced conjoint analysis; marketplace; ecommerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>While searching for a product, buyers face many identical products sold in the marketplace, so they usually compare the items according to their desired preferences, for example, price, seller reputation, product reviews, and shipping cost. Buyers subjectively weigh each preference to make a final decision on which product should be bought. With hundreds of thousands of products to be compared, the buyer may not get the product that meets his preferences. To that end, we propose the Enhanced Conjoint Analysis (ECA) method. Conjoint Analysis is a common method to derive a marketing strategy for a product or to analyze the important factors of a product. This method can also be used to analyze the important factors of a product in the marketplace based on price. We convert the importance factor percentage into a coefficient to calculate a weight for every attribute and summarize it. To evaluate this method, we compared the ECA method to other prediction algorithms: generalized linear model (GLM), decision tree (DT), random forest (RF), gradient boosted trees (GBT), and support vector machine (SVM). In our experimental results, the ECA running time is 6.146 s, versus GLM (5.537 s), DT (1 s), RF (10.119 s), GBT (45.881 s), and SVM (11.583 s). With this result, our proposed method can be used to create recommendations besides neural network or machine learning approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_76-Electronic_Commerce_Product_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emerging Requirement Engineering Models: Identifying Challenges is Important and Providing Solutions is Even Better</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121174</link>
        <id>10.14569/IJACSA.2021.0121174</id>
        <doi>10.14569/IJACSA.2021.0121174</doi>
        <lastModDate>2021-11-30T13:01:03.8170000+00:00</lastModDate>
        
        <creator>Hina Noor</creator>
        
        <creator>Maheen Tariq</creator>
        
        <creator>Anam Yousaf</creator>
        
        <creator>Hafiz Wajid Ali</creator>
        
        <creator>Arqam Abdul Moqeet</creator>
        
        <creator>Abu Bakar Hamid</creator>
        
        <creator>Mahek Hanif</creator>
        
        <creator>Huma Naz</creator>
        
        <creator>Nabeel Tariq</creator>
        
        <creator>Ijaz Amin</creator>
        
        <creator>Osama Naseer</creator>
        
        <subject>Requirement engineering; current requirement engineering methods; emerging models for requirement engineering; challenges; pros and cons</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Requirement Engineering is one of the most crucial tasks because it serves as the foundation for any software. Four pillars of requirement engineering procedures underpin the entire software. The bricks that make up the software edifice are functional and non-functional needs. Finally, design, implementation, and testing add stories to the foundation, allowing a full software tower to be built on top of it. As a result, the foundation must be strong enough to support the remainder of the software tower. Requirement engineers face various hurdles in designing successful software for this purpose. Requirement Engineering (RE) is emerging as an increasingly important discipline to promote the development of web applications, as these are designed to meet various stakeholder requirements, with additional functional, information, multimedia, and usability requirements compared to traditional software applications. The requirements of software systems are a very important area in software engineering; the success of a software system depends on how effectively it meets the requirements of its users. In this paper, a review of the current state of requirements engineering, in which requirements from users are checked and analyzed for consistency and correctness, is presented, and the emerging models of requirement engineering are then identified. Firstly, the paper highlights the current activities that enable the understanding of goals and objectives for developing proposed software systems; the focus then turns to techniques for improving the precision, accuracy, and variety of requirements. Next, the challenges of emerging requirement engineering models, such as security and the global trends posed by the emerging models of the future, are explained. Finally, we suggest some solutions for the mentioned challenges.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_74-Emerging_Requirement_Engineering_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-based Daily Menu Recommendation System for Complementary Food According to Nutritional Needs using Na&#239;ve Bayes and TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121173</link>
        <id>10.14569/IJACSA.2021.0121173</id>
        <doi>10.14569/IJACSA.2021.0121173</doi>
        <lastModDate>2021-11-30T13:01:03.8000000+00:00</lastModDate>
        
        <creator>Mujahidah Showafah</creator>
        
        <creator>Sari Widya Sihwi</creator>
        
        <creator>Winarno</creator>
        
        <subject>Calorie; complementary food; babies; Na&#239;ve Bayes; nutrition needs; ontology; recommendation system; SUS; TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Babies begin to be given complementary feeding at the age of 6 to 24 months. Complementary foods given to babies need to meet the nutritional needs appropriate to their age. Since, at these ages, babies are just learning to eat, it is necessary to plan a complementary food menu that refers to the nutritional needs and the baby and mother&#39;s preferences, which is certainly not an easy thing for a mother. Therefore, a recommendation system is needed to determine the baby&#39;s daily menu accordingly. This research proposes a complementary food menu recommendation system that considers the balanced composition of three significant nutrients (carbohydrates, protein, and fat) in the diet. It also takes into account the baby and mother&#39;s preferences. The ontology contains a knowledge base about food and its nutritional content and the nutritional needs of babies according to their age. Naive Bayes is used to prepare menu options according to user preferences. The TOPSIS method is used in this study to provide optimal recommendations regarding nutritional balance and user preferences. Mothers who have had babies aged 6-24 months and mothers of babies currently aged 6-24 months were asked to test the recommendation system. The results of the usability testing of the system using SUS showed a good level of user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_73-Ontology_based_Daily_Menu_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Analysis of Cybersecurity Awareness Issues in Higher Education Institutes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121172</link>
        <id>10.14569/IJACSA.2021.0121172</id>
        <doi>10.14569/IJACSA.2021.0121172</doi>
        <lastModDate>2021-11-30T13:01:03.7870000+00:00</lastModDate>
        
        <creator>Latifa Alzahrani</creator>
        
        <subject>Cyber security; cyber security awareness; educational institutes; cyber risk; higher education; cyber trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Nowadays, computers and the Internet are increasingly indispensable tools in many aspects of our lives, including academic study, professional work, entertainment, and communication. Despite the significant advantages of information technology, particularly in information accessibility and internet applications, cyber security has risen to become a national concern in Saudi Arabia, and cyber security threats now need to be taken more seriously. Computer and network security are therefore a concern not only for organisations traditionally focused on security awareness, such as the military, banks, and financial institutions, but also for every individual and government official who uses a computer. Moreover, as more and more of organisations’ valuable assets are stored in computerised information systems, security has become an essential and urgent issue; yet it is remarkable that most systems today are designed with little attention to security concerns. This study aims to examine and analyse cyber security issues, including cyber risk, cyber security, cyber security awareness, and cyber trust, among higher education students in Saudi Arabia. Based on an analysis of the collected data using SPSS, the findings of this study highlight a lack of awareness of basic information related to cyber security among Saudi students. In addition, the number of students attending training programs was very low. Considering other security issues, this study reveals that while Saudi students are aware of cyber risk, they are not aware of cyber security. In addition, Saudi students are not aware of, and do not have, cyber trust.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_72-Statistical_Analysis_of_Cybersecurity_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Machine Learning Approach for Identifying Biomechanical Influences on Protein-Ligand Binding Affinity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121171</link>
        <id>10.14569/IJACSA.2021.0121171</id>
        <doi>10.14569/IJACSA.2021.0121171</doi>
        <lastModDate>2021-11-30T13:01:03.7700000+00:00</lastModDate>
        
        <creator>Arjun Singh</creator>
        
        <subject>Drug discovery; unsupervised machine learning; feature engineering; protein-ligand binding affinity; virtual screening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Drug discovery is incredibly time-consuming and expensive, averaging over 10 years and $985 million per drug. Calculating the binding affinity between a target protein and a ligand through Virtual Screening is critical for discovering viable drugs. Although supervised machine learning (ML) can predict binding affinity accurately, models experience severe overfitting due to an inability to identify informative properties of protein-ligand complexes. This study used unsupervised ML to reveal underlying protein-ligand characteristics that strongly influence binding affinity. Protein-ligand 3D models were collected from the PDBBind database and vectorized into 2422 features per complex. Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE), K-Means Clustering, and heatmaps were used to identify groups of complexes and the features responsible for the separation. ML benchmarking was used to determine the features’ effect on ML performance. The PCA heatmap revealed groups of complexes with binding affinity of pKd&lt;6 and pKd&gt;8 and identified the number of CCCH and CCCCCH fragments in the ligand as the most responsible features. A high correlation of 0.8337, their ability to explain 18% of the binding affinity’s variance, and an error increase of 0.09 in Decision Trees when trained without the two features suggest that the fragments exist within a larger ligand substructure that significantly influences binding affinity. This discovery is a baseline for informative ligand representations to be generated so that ML models overfit less and can more reliably identify novel drug candidates. Future work will focus on validating the ligand substructure’s presence and discovering more informative intra-ligand relationships.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_71-Unsupervised_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Grey Systems and Inverse Distance Weighted Method to Assess Water Quality from a River</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121170</link>
        <id>10.14569/IJACSA.2021.0121170</id>
        <doi>10.14569/IJACSA.2021.0121170</doi>
        <lastModDate>2021-11-30T13:01:03.7530000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Anthonny Fernandez</creator>
        
        <creator>Eduardo Lozano</creator>
        
        <creator>Dennis Miguel</creator>
        
        <creator>F&#233;lix Le&#243;n</creator>
        
        <creator>Jhosep Arteta</creator>
        
        <creator>Ch. Carbajal</creator>
        
        <subject>Grey systems; inverse distance weighted (IDW); water quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The Ca&#241;ete River basin in Peru has suffered an increase in pollution due to various causes, chief among them the lack of knowledge and awareness among individuals and municipal authorities, and economic activities. We analyze the degree of pollution in the upper part of the Ca&#241;ete river basin through the Grey Clustering method, based on grey systems, which offers a good alternative for evaluating water quality in a comprehensive way. Historical data from the monitoring program for the years 2014 and 2015, covering nine monitoring points operated by the National Water Authority (ANA), were used, and six parameters were evaluated based on the PRATI index: hydrogen potential (pH), biochemical oxygen demand, chemical oxygen demand, total suspended solids, total manganese, and total iron. For the spatial distribution, interpolation surfaces of the clustering coefficients were created using the Spatial Analyst extension of the ArcGIS software, which provides tools to create, analyze, and map data in raster format or surfaces; the interpolation method used is Inverse Distance Weighted (IDW). The results showed that in 2014 the Grey Clustering method rated every monitoring point as &quot;Uncontaminated&quot; except point P7, which was rated &quot;Acceptable&quot; according to the PRATI indices, while for 2015 points P1 and P2 indicate a level of &quot;Moderately contaminated&quot;, point P3 an &quot;Acceptable&quot; level, and points P4 to P9 a level of &quot;Not contaminated&quot;. The Grey Clustering analysis thus determines the water quality at the nine monitoring points of the upper-middle basin of the Ca&#241;ete River for 2014 and 2015, revealing the reduction in water quality at points P1 and P2 over this period, which is crucial for achieving water resource management among local governments that can introduce awareness and sustainable development policies.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_70-Applying_Grey_Systems_and_Inverse_Distance_Weighted_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>1D-CNN based Model for Classification and Analysis of Network Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121169</link>
        <id>10.14569/IJACSA.2021.0121169</id>
        <doi>10.14569/IJACSA.2021.0121169</doi>
        <lastModDate>2021-11-30T13:01:03.7370000+00:00</lastModDate>
        
        <creator>Kuljeet Singh</creator>
        
        <creator>Amit Mahajan</creator>
        
        <creator>Vibhakar Mansotra</creator>
        
        <subject>1D-CNN; CICIDS2017; network attacks; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>With the advancement of technology and the upsurge in network devices, more and more devices are being connected to the network, generating more data and information and making network security of paramount importance. Malicious traffic must be detected in networks, and machine learning, or more precisely deep learning (DL), an emerging approach, can be used for better detection. In this paper, attacks are detected by classifying traffic into normal and attack data using 1D-CNN, a special variant of the convolutional neural network (CNN). The CICIDS2017 dataset, consisting of 14 attack types spread across 8 different files, is used to evaluate model performance with indicators such as recall, precision, and F1-score. Separate 1D-CNN based DL models were built on the individual sub-datasets as well as on the combined dataset. The model is also evaluated against an artificial neural network (ANN) model. Experimental results demonstrate that the proposed model performs better and shows great capability in detecting network attacks, with the majority of class labels achieving excellent scores on each of the evaluation indicators used.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_69-1D_CNN_based_Model_for_Classification_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EC-Elastic an Explicit Congestion Control Mechanism for Named Data Networking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121168</link>
        <id>10.14569/IJACSA.2021.0121168</id>
        <doi>10.14569/IJACSA.2021.0121168</doi>
        <lastModDate>2021-11-30T13:01:03.7070000+00:00</lastModDate>
        
        <creator>Asmaa EL-BAKKOUCHI</creator>
        
        <creator>Mohammed EL GHAZI</creator>
        
        <creator>Anas BOUAYAD</creator>
        
        <creator>Mohammed FATTAH</creator>
        
        <creator>Moulhime EL BEKKALI</creator>
        
        <subject>NDN; named data networking; congestion control; explicit congestion control; TCP-elastic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In recent years, Named Data Networking (NDN) has attracted researchers’ attention as a new internet architecture. NDN changes the internet communication paradigm from a host-to-host IP model to a name-based model. Thus, NDN permits the retrieval of requested content by name, from different sources and via multiple paths, and the use of caching in intermediate routers. These features transform the transport control model from sender to receiver and make traditional end-to-end congestion control mechanisms incompatible with the NDN architecture. To deal with this problem, a reliable congestion control mechanism becomes necessary for a successful deployment of NDN. This paper presents a new hybrid congestion control mechanism for NDN, EC-Elastic (Explicit Congestion Elastic), which adopts the basic concept of Elastic-TCP to control the sending rates of the interest packets at the consumer nodes. In the intermediate nodes, a queue is associated with Controlled Delay Active Queue Management (CoDel-AQM) to measure the packet sojourn time and notify the consumer, which decreases its interest packet sending rate when it receives an explicit congestion signal. EC-Elastic was implemented in ndnSIM and evaluated against Agile-SD, CUBIC, and STCP in different scenarios. Simulation results show that EC-Elastic provides a significant improvement in bandwidth utilization while maintaining lower delay and packet loss rates.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_68-EC_Elastic_an_Explicit_Congestion_Control_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Segmentation Method for Computer-aided Diagnosis (CAD) Leukemia AML Subtype M0, M1, and M2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121167</link>
        <id>10.14569/IJACSA.2021.0121167</id>
        <doi>10.14569/IJACSA.2021.0121167</doi>
        <lastModDate>2021-11-30T13:01:03.6900000+00:00</lastModDate>
        
        <creator>Wiharto </creator>
        
        <creator>Wisnu Widiarto</creator>
        
        <creator>Esti Suryani</creator>
        
        <creator>Nurmajid Hidayatullah</creator>
        
        <subject>Acute myelogenous leukemia; leukemia; segmentation; feature extraction; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>A computer-based diagnosis model for Acute Myelogenous Leukemia (AML) is carried out using white blood cell image processing. The stages in computer-aided diagnosis (CAD) include pre-processing, segmentation, feature extraction, and classification. Segmentation admits many approaches, namely clustering, region growing, and thresholding; the number of available approaches requires a proper selection because it impacts CAD performance. This study aims to conduct a comparative study of the performance of WBC segmentation methods in a CAD system for leukemia of the AML M0, M1, and M2 subtypes. The segmentation algorithms used are k-means, fuzzy c-means, SOM, watershed, Chan-Vese (active contour), Otsu thresholding, and histogram. The feature extraction method uses GLCM, while the classification algorithms tested are SVM, random forest, decision tree, naive Bayes, and k-NN. The test results show that the histogram segmentation method provides the best average performance when used with SVM, namely 90.3% accuracy, 85.9% sensitivity, and 92.7% specificity.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_67-A_Comparative_Study_of_Segmentation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Wearable System to Help Preventing the Spread of Covid-19 in Public Indoor Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121166</link>
        <id>10.14569/IJACSA.2021.0121166</id>
        <doi>10.14569/IJACSA.2021.0121166</doi>
        <lastModDate>2021-11-30T13:01:03.6770000+00:00</lastModDate>
        
        <creator>Annisa Istiqomah Arrahmah</creator>
        
        <creator>Surya Ramadhan</creator>
        
        <subject>Wearable device; healthcare system; COVID-19; door-lock system; radio signal strength indicator (RSSI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Smart wearables, as part of the Internet of Things, are nowadays gaining acceptance in our daily lives because of their accessibility and simplicity. Today, with the outbreak of Coronavirus around the world, a smart wearable device can become another solution to help slow the spread of the virus by supporting public health and social measures. In this paper, a system consisting of a distance detector and touchless door access is proposed to support personal, physical, and social distancing practices in public indoor areas. A BLE positioning method based on RSSI localization is used to monitor physical distancing around the user, while WPA2 and MAC address-based authentication for the touchless door access restricts and traces visitors to the indoor area. The system is implemented on an ESP microcontroller. A proof of concept was conducted to verify that the functionality of the system satisfies public health and social distancing practices. The results show that only registered devices can signal the door to open, and that the device can monitor the physical distance around the user with a 4.51% error in an indoor area.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_66-Development_of_Wearable_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of the Relational Concept in the Arabic Morphological Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121165</link>
        <id>10.14569/IJACSA.2021.0121165</id>
        <doi>10.14569/IJACSA.2021.0121165</doi>
        <lastModDate>2021-11-30T13:01:03.6430000+00:00</lastModDate>
        
        <creator>Said Iazzi</creator>
        
        <creator>Abderrazak Iazzi</creator>
        
        <creator>Saida Laaroussi</creator>
        
        <creator>Abdellah Yousfi</creator>
        
        <subject>Arabic language rules; morphology; morphological analyzers; database; relational concept</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The Arabic language differs from other natural languages in its structures and compositions. In this article, we develop an Arabic morphological analyzer built on the relational concept in a database: the analyzer uses a set of tables linked together by relationships, and these relations model a number of compatibility rules between different affixes. Our morphological analyzer has been trained and tested on the same databases. The tests of our new approach have given good results, and the figures obtained are very close to those of existing analyzers.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_65-The_Use_of_the_Relational_Concept.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Allocation in Spectrum Deployment for Cognitive Third-party Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121164</link>
        <id>10.14569/IJACSA.2021.0121164</id>
        <doi>10.14569/IJACSA.2021.0121164</doi>
        <lastModDate>2021-11-30T13:01:03.6300000+00:00</lastModDate>
        
        <creator>Arikatla Jaya Lakshmi</creator>
        
        <creator>G. N. Swamy</creator>
        
        <creator>M. N. Giri Prasad</creator>
        
        <subject>Slot monitoring; spectrum sensing; primary user (PU); secondary user (SU); third-party spectrum utilization (TSU)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Spectrum scarcity is a major challenge in wireless communication for next-generation applications. Spectrum sharing and the utilization of other users’ available spectrum is an optimal solution, which has given rise to a new mode of communication called the cognitive network. Cognitive devices are capable of requesting and sharing free spectrum among each other within a communication range; the spectrum is shared among primary and secondary users (PU, SU). To extend the range of spectrum utilization, users from other clusters are invited to share spectrum, and such a user is called a third-party spectrum user (TSU). For the detection of free spectrum, a jamming-based approach was proposed in recent work, in which a periodic jamming signal is generated for the TSU sensing free spectrum. However, the repetitive requesting of spectrum availability results in a large jamming probability for engaged TSUs and hence a large delay. To minimize this delay, this paper proposes a new monitored jamming approach, which tracks the engagement of each TSU in communication and governs the jamming signal based on spectrum engagement, thereby reducing the delay caused by jamming signaling in the existing system. The experimental results show an enhancement in system throughput and fairness factor and a reduction in the delay metric for the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_64-Resource_Allocation_in_Spectrum_Deployment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Secure Data Transmission for UAV Swarm using Modified Particle Swarm Optimization Path Planning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121163</link>
        <id>10.14569/IJACSA.2021.0121163</id>
        <doi>10.14569/IJACSA.2021.0121163</doi>
        <lastModDate>2021-11-30T13:01:03.5970000+00:00</lastModDate>
        
        <creator>M. Kayalvizhi</creator>
        
        <creator>S. Ramamoorthy</creator>
        
        <subject>Unmanned aerial vehicles; path planning; swarm optimization; denial of service attack; blockchain; security and privacy; data communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>With the rapid development of unmanned aerial vehicle (UAV) assisted applications enabled with communication systems, UAVs are open to various malicious attacks. As a new form of flying things, they can access the network for better communication via the aerial base station. Most optimal path selection schemes for UAV-assisted flying objects do not consider path deviation, and secure data transmission under path deviation attacks is not addressed in existing works. The secure communication process between UAVs and the base station can be exploited through security attacks; moreover, path loss leads to multicast packet loss and unsecured broadcast. The existing network architecture does not fulfill secure data communication and privacy requirements. In this paper, blockchain is utilized to secure the communication between UAVs and wireless UAV base stations. Since the destination information is dynamic under an uncertain environment, it can cause delays in data communication, and UAVs are more vulnerable to security attacks. The proposed blockchain-based architecture supports secure data communication in uncertain UAV environments. To improve network security, this paper also designs a modified particle swarm optimization method for better path selection. The experimental results show that the blockchain-based data communication scheme outperforms existing schemes with respect to network security.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_63-Blockchain_based_Secure_Data_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HORAM: Hybrid Oblivious Random Access Memory Scheme for Secure Path Hiding in Distributed Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121162</link>
        <id>10.14569/IJACSA.2021.0121162</id>
        <doi>10.14569/IJACSA.2021.0121162</doi>
        <lastModDate>2021-11-30T13:01:03.5830000+00:00</lastModDate>
        
        <creator>Snehalata Funde</creator>
        
        <creator>Gandharba Swain</creator>
        
        <subject>HORAM; metadata; data blocks; privacy; block shuffling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Nowadays, most sectors have digitized their data so that it can be stored and processed easily with enhanced techniques. Online transactions produce very large volumes of data daily in sectors such as health care, the military, and government offices. To store such data, many firms take the help of third-party organizations and keep the data on machines provided by them, which creates new security issues: while operating on or accessing the data, metadata leakage may occur due to untrustworthy systems. This paper proposes a hybrid oblivious random-access memory (HORAM) scheme that lets users access their data on untrusted storage devices without revealing any information about their access patterns. A random data-block shuffling approach is used, which hides the storage policies governing the placement of user data blocks and preserves the privacy of the data. The HORAM technique performs pull-push operations on data in parallel, which minimizes network overhead and reduces the execution time of operations. An extensive experimental analysis shows that the proposed system produces better results than both weak and strong federated oblivious random access memory (FEDORAM): it takes 0.96 seconds for communication with 5 servers for reading and writing data, whereas weak and strong FEDORAM take 1.5 and 2 seconds, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_62-HORAM_Hybrid_Oblivious_Random_Access_Memory_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Long Term Solar Power Generation Prediction using Adaboost as a Hybrid of Linear and Non-linear Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121161</link>
        <id>10.14569/IJACSA.2021.0121161</id>
        <doi>10.14569/IJACSA.2021.0121161</doi>
        <lastModDate>2021-11-30T13:01:03.5500000+00:00</lastModDate>
        
        <creator>Sana Mohsin Babbar</creator>
        
        <creator>Chee Yong Lau</creator>
        
        <creator>Ka Fei Thang</creator>
        
        <subject>Recurrent neural network (RNN); support vector machine (SVM); autoregressive with exogenous variable (ARX); adaptive boosting (Adaboost) and photovoltaic system (PV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The usage of renewable energy sources has increased manifold in terms of electric utilities. Among non-conventional sources, solar energy is seen as a promising and convenient source used around the globe, with huge potential to meet the world’s daily demand. However, solar energy is very unpredictable and intermittent due to weather variation, and the optimization and planning of the smart grid affect the operation of the PV system; thus, prediction over a long horizon is needed to address this problem. Nevertheless, long-term forecasting of solar power generation is considered a challenging problem. Therefore, this paper proposes 10-day-ahead solar power forecasting using a combination of linear and non-linear machine learning models. First, outputs are generated by a Recurrent Neural Network (RNN), a Support Vector Machine (SVM), and an Autoregressive model with exogenous variables (ARX). These three outputs are then combined into a strong classifier with the Adaptive Boosting (Adaboost) algorithm. Simulations were conducted on data obtained from a real PV plant. The experimental results show that combining all techniques with Adaboost increases performance and yields higher accuracy than the individual machine learning models: the hybrid Adaboost achieves a MAPE of 8.88%, while RNN, SVM, and ARX individually give 10.88%, 11.78%, and 13.00% MAPE, respectively. This improvement confirms that the combination of techniques performs better than the individual models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_61-Long_Term_Solar_Power_Generation_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Dynamic Packet Scheduling with Waiting Time Aware: DPSW2A</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121159</link>
        <id>10.14569/IJACSA.2021.0121159</id>
        <doi>10.14569/IJACSA.2021.0121159</doi>
        <lastModDate>2021-11-30T13:01:03.5200000+00:00</lastModDate>
        
        <creator>K Raghavendra Rao</creator>
        
        <creator>B N Jagadesh</creator>
        
        <subject>MPTCP; scheduler; latency; delay; transport protocol; waiting time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>One of the principal goals of 5G is to enhance performance in terms of speed and delay reduction. To accomplish this, the IETF has proposed Multipath TCP (MPTCP) to utilize all accessible interfaces for communication. The demand for mobile communication is escalating day by day, and mobile is the predominant communication option for most people. To provide better service to users, nodes are equipped with multiple interfaces, which is also one of the benefits of 5G. Multipath protocols are used for load balancing and resilience to failure. When communicating over asymmetric interfaces, latency is a critical factor, and achieving low latency is hard. When communication happens over multiple interfaces, the scheduler plays a central role, since it decides which interface should be used for each packet. In this article we focus on scheduling algorithms and how the scheduler plays a vital role in transferring data to receiver nodes with low latency. In particular, we present the scheduler named DPSWWA, with the objective of minimizing delay and using interfaces effectively.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_59-Design_and_Implementation_of_Dynamic_Packet_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Domain Knowledge and Time Series Features for Automated Detection of Schizophrenia from EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121160</link>
        <id>10.14569/IJACSA.2021.0121160</id>
        <doi>10.14569/IJACSA.2021.0121160</doi>
        <lastModDate>2021-11-30T13:01:03.5200000+00:00</lastModDate>
        
        <creator>Saqib Hussain</creator>
        
        <creator>Nasrullah Pirzada</creator>
        
        <creator>Erum Saba</creator>
        
        <creator>Muhammad Aamir Panhwar</creator>
        
        <creator>Tanveer Ahmed</creator>
        
        <subject>Artificial intelligence (AI); Logistic Regression (LR); smote-class weight (S-CW); borderline smote-class weight (BS-CW); electroencephalography; Time Series Feature Extraction Library (TSFEL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Over recent years, schizophrenia has become a serious mental disorder affecting almost 21 million people globally. It can affect the thinking pattern of the brain, and delusions, hallucinations, and disorganized speech are its common symptoms, distinguishing people with schizophrenia from healthy people. In this study, we used electroencephalography (EEG) signals to analyze and diagnose schizophrenia using machine learning algorithms and found that temporal features performed better than statistical features. EEG signals are well suited to analyzing this disorder, as they are intimately linked with human thinking patterns and provide information about brain activity. The present work proposes a Machine Learning (ML) model based on Logistic Regression (LR) together with two feature extraction libraries, the Time Series Feature Extraction Library (TSFEL) and the MNE Python toolkit, to diagnose schizophrenia from EEG signals. The results are analyzed for five different sampling techniques. The dataset was cross-validated using leave-one-subject-out cross-validation (LOSOCV) with Scikit-learn, achieving higher accuracy, sensitivity, specificity, macro-average recall, and macro F1-score on the temporal features.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_60-Evaluating_Domain_Knowledge_and_Time_Series_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Linguistic Analysis Metric in Detecting Ransomware Cyber-attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121158</link>
        <id>10.14569/IJACSA.2021.0121158</id>
        <doi>10.14569/IJACSA.2021.0121158</doi>
        <lastModDate>2021-11-30T13:01:03.4870000+00:00</lastModDate>
        
        <creator>Diana Florea</creator>
        
        <creator>Wayne Patterson</creator>
        
        <subject>Cyber-security; cyber-attacks; machine translation; language; Levenshtein distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Originating and striking from anywhere, cyber-attacks have become ever more sophisticated in modern society, and users are forced to adopt increasingly careful and vigilant practices to protect against them. Among these, ransomware remains a major cyber-attack whose threat to end users (disrupted operations, restricted files, scrambled sensitive data, financial demands, etc.) lies not so much in its number as in its severity. In this study we explore the possibility of real-time detection of a ransomware attack’s source through a linguistic analysis that examines machine translation relative to the Levenshtein distance, and may thereby provide important indications as to the attacker’s language of origin. Specifically, the aim of our research is to advance a metric to assist in determining whether an external ransom text indicates a human- or a machine-generated cyber-attack. Our proposed method demonstrates its argument on a set of Eastern European languages but is applicable to a larger range of languages and/or probabilistic patterns, and is characterized by limited resource usage and scalability.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_58-A_Linguistic_Analysis_Metric_in_Detecting_Ransomware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Framework for Binarized Response for Enhancing Knowledge Delivery System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121157</link>
        <id>10.14569/IJACSA.2021.0121157</id>
        <doi>10.14569/IJACSA.2021.0121157</doi>
        <lastModDate>2021-11-30T13:01:03.4270000+00:00</lastModDate>
        
        <creator>Chethan G S</creator>
        
        <creator>Vinay S</creator>
        
        <subject>Education; knowledge transfer; machine learning; natural language processing; student feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Student feedback, routinely collected in academic institutions, offers effective insight into students’ experience of knowledge transfer. However, the existing research literature does not report whether comments in the education system are useful or non-useful. Most existing works are limited to sentiment polarity computation only, and teacher evaluation is carried out without considering the aspects of teaching. This study analyzes student comments and classifies them as useful or non-useful for a teacher scoring system. In the proposed research, the data considered is student feedback collected from a teacher rating website. The study performed phase-by-phase text data modeling. First, exploratory analysis is carried out on the student feedback dataset to understand the characteristics and features of the text data. Based on this analysis, appropriate preprocessing steps are determined for data cleaning. In the natural language processing context, the study focuses only on removing stop words and common words that belong to both useful and non-useful contexts. A Bag-of-Words (BoW) model is used for feature extraction, and two probabilistic supervised machine learning models are used for comment classification. The study outcome shows that Gaussian Na&#239;ve Bayes outperforms Multinomial Na&#239;ve Bayes in accuracy, precision, recall rate, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_57-Analytical_Framework_for_Binarized_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ada-IDS: AdaBoost Intrusion Detection System for ICMPv6 based Attacks in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121156</link>
        <id>10.14569/IJACSA.2021.0121156</id>
        <doi>10.14569/IJACSA.2021.0121156</doi>
        <lastModDate>2021-11-30T13:01:03.3930000+00:00</lastModDate>
        
        <creator>A. Arul Anitha</creator>
        
        <creator>L. Arockiam</creator>
        
        <subject>IoT; ICMPv6; version number attack; DIS attack; DAO attack; Ada-IDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The Internet of Things (IoT) connects objects that are diverse in nature. The restricted capacity, heterogeneity, and large-scale deployment of IoT technology expose IoT networks to many security threats. RPL is the routing protocol for constrained devices such as IoT nodes, and the ICMPv6 protocol plays a major role in constructing its tree-like topology, the DODAG; it is vulnerable to several security attacks. The Version Number Attack, the DIS flooding attack, and the DAO attack are the ICMPv6-based attacks discussed in this paper. Network traffic is collected from a simulated environment under both normal and attack settings. An AdaBoost ensemble model termed Ada-IDS is developed in this research to detect these three ICMPv6-based security attacks in RPL-based Internet of Things. The proposed model detects the attacks with 99.6% accuracy and no false alarms. The Ada-IDS ensemble model is deployed in the Border Router of the IoT network to safeguard the IoT nodes and the network.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_56-Ada_IDS_AdaBoost_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved GRASP Technique based Resource Allocation in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121155</link>
        <id>10.14569/IJACSA.2021.0121155</id>
        <doi>10.14569/IJACSA.2021.0121155</doi>
        <lastModDate>2021-11-30T13:01:03.3630000+00:00</lastModDate>
        
        <creator>Madhukar E</creator>
        
        <creator>Ragunathan T</creator>
        
        <subject>Cloud computing; task scheduling; meta-heuristics; fixed set search; GRASP; resource allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In the era of cloud computing, almost everyone uses cloud resources in some way. However, resources in the Cloud are limited, and cloud vendors look for enhanced returns on investment. A promising return on investment is possible only when cloud resources are scheduled efficiently to execute jobs within the stipulated time. Brute-force methods, however, require exponential time to produce a schedule. Heuristic and meta-heuristic algorithms have been proposed in the literature to allocate resources to jobs, but these algorithms still suffer from slow convergence. To overcome this problem, researchers have combined various heuristics and meta-heuristics into new hybrid algorithms. With the same motive, this paper explores the limitations of greedy randomized adaptive search and shows that learning through a fixed set search enhances efficiency. Based on the results, it can be concluded that the proposed algorithm is on par with existing hybrid meta-heuristic algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_55-Improved_GRASP_Technique_based_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Energy-efficient Multi-hop Routing Protocol for Heterogeneous Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121154</link>
        <id>10.14569/IJACSA.2021.0121154</id>
        <doi>10.14569/IJACSA.2021.0121154</doi>
        <lastModDate>2021-11-30T13:01:03.3470000+00:00</lastModDate>
        
        <creator>Rowayda A. Sadek</creator>
        
        <creator>Doha M. Abd-alazeem</creator>
        
        <creator>Mohamed M. Abbassy</creator>
        
        <subject>Heterogeneous wireless sensor networks (HWSNs); forwarding of reliable route packets (FRRPs); grey wolf optimizer (GWO); routing optimization; tabu search algorithm (TSA); quality of service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Using sensor-node energy efficiently and extending the lifetime of heterogeneous wireless sensor networks (HWSNs) are the main goals of HWSN routing optimization methods, so building an energy-efficient routing protocol becomes critical for HWSN performance improvement. In this paper, we present an energy-efficient routing protocol based on the grey wolf optimizer (GWO) and the Tabu search algorithm (TSA). The primary objectives of the proposed routing system are clustering and the selection of cluster heads (CHs) using GWO, with a fitness function based on the residual energy of the sensor nodes and the average distance between the CHs and the sink node base station (BS). Because of sensor mobility, quality of service (QoS) parameters such as reliability and energy consumption are improved by discovering multiple optimized paths for data transmission from CH to BS and by using TSA to select the optimal route from CH to BS based on the forwarding of reliable route packets (FRRPs). The experimental results indicate that the proposed grey wolf optimizer with Tabu search algorithm (GWO-TSA) can reduce HWSN energy consumption by 10% and 20%, increase lifetime by 13% and 18%, and increase throughput by 6% and 14% when compared to the genetic algorithm with Tabu search algorithm (GA-TSA) and the grey wolf optimizer with crow search algorithm (GWO-CSA), respectively. The simulations thus show that the proposed GWO-TSA protocol improves HWSN performance by minimizing energy consumption, maximizing network lifetime, and boosting network throughput.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_54-A_New_Energy_efficient_Multi_hop_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformer based Contextual Model for Sentiment Analysis of Customer Reviews: A Fine-tuned BERT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121153</link>
        <id>10.14569/IJACSA.2021.0121153</id>
        <doi>10.14569/IJACSA.2021.0121153</doi>
        <lastModDate>2021-11-30T13:01:03.3330000+00:00</lastModDate>
        
        <creator>Ashok Kumar Durairaj</creator>
        
        <creator>Anandan Chinnalagu</creator>
        
        <subject>Transformers model; BERT; sequential model; deep learning; RNN; LSVM; LSTM; BiLSTM; fastText</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The Bidirectional Encoder Representations from Transformers (BERT) is a state-of-the-art language model used for multiple natural language processing tasks and sequential modeling applications. Accurate prediction of context-based sentiment and analysis of customer review data from various social media platforms are challenging and time-consuming tasks due to the high volume of unstructured data. In recent years, more research has been conducted on the recurrent neural network algorithm, Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), as well as hybrid, neural, and traditional text classification algorithms. This paper presents our experimental research work to overcome the known challenges of sentiment analysis models concerning performance, accuracy, and context-based predictions. We propose a fine-tuned BERT model to predict customer sentiment from customer reviews on Twitter, IMDB Movie Reviews, Yelp, and Amazon. In addition, we compare the results of the proposed model with our custom Linear Support Vector Machine (LSVM), fastText, BiLSTM, and hybrid fastText-BiLSTM models, and present a comparative analysis dashboard report. The experimental results show that the proposed model performs better than the other models with respect to various performance measures.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_53-Transformer_based_Contextual_Model_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Delay-tolerant MAC Protocol for Emergency Care in WBAN Considering Preemptive and Non-preemptive Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121152</link>
        <id>10.14569/IJACSA.2021.0121152</id>
        <doi>10.14569/IJACSA.2021.0121152</doi>
        <lastModDate>2021-11-30T13:01:03.3170000+00:00</lastModDate>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Aloke Kumar Saha</creator>
        
        <subject>WBAN; MAC; preemptive; non-preemptive; delay; emergency traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>To provide pilgrims at ritual sites with quick, real-time emergency medical services without delay, a delay-tolerant Medium Access Control (MAC) protocol for IEEE 802.15.6 standard-based Wireless Body Area Networks (WBANs) is proposed. Since MAC protocols are application-specific, no single MAC technique is appropriate for diverse applications. In this research work, we deal with emergency medical traffic, which is random, independent, and can be generated at any time. Moreover, emergency traffic must be transmitted ahead of normal medical data or emergency traffic with a lower severity level, because any delay in emergency data transmission may endanger patients’ lives. The proposed MAC protocol is compared under both preemptive and non-preemptive methods. A modified MAC superframe (SF) structure, a minimum backoff period, and a minimum Contention Window (CWmin) for quick data access to the IEEE 802.15.6 standard EAP channel are also considered. The proposed delay-tolerant MAC protocol has been simulated with the Castalia simulator, which is based on the OMNeT++ platform. The experimental results show that data transmission using the preemptive method is faster, with reduced delay, than the non-preemptive method. Furthermore, the delay metric of the proposed delay-tolerant MAC protocol is analyzed, calculated, and compared with the current traffic-aware TA-MAC protocol. Results demonstrate that delay is relatively low during emergency data transmission using the proposed MAC in a WBAN environment.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_52-A_Delay_tolerant_MAC_Protocol_for_Emergency_Care_in_WBAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EFPT-OIDS: Evaluation Framework for a Pre-processing Techniques of Automatic Optho-Imaging Diagnosis and Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121151</link>
        <id>10.14569/IJACSA.2021.0121151</id>
        <doi>10.14569/IJACSA.2021.0121151</doi>
        <lastModDate>2021-11-30T13:01:03.3000000+00:00</lastModDate>
        
        <creator>Sobia Naz</creator>
        
        <creator>Radha Krishna Rao K. A</creator>
        
        <creator>Shreekanth T</creator>
        
        <subject>Pre processing; FUNDUS image; glaucoma; diabetic retinopathy; interpolation; image enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The modalities of FUNDUS images and the availability of public-domain datasets provide a starting point for designing an ecosystem for the automatic detection of early-stage degenerative glaucoma, diabetic retinopathy, and other eye-related diseases. Existing techniques for these operations lack flexibility and robustness in their design and implementation and are limited to certain preprocessing requirements. Existing methods are useful but perform poorly when FUNDUS image quality degrades due to misalignment of the lens opening in the camera and poor functioning of the visual sensors. This paper presents a unified framework that mechanizes different preprocessing techniques to benefit the optho-imaging diagnosis and disease detection process. The proposed framework facilitates on-demand data treatment operations that include image interpolation, brightness adjustment, illumination correction, and noise reduction. The proposed techniques for FUNDUS image enhancement provide better PSNR and SSIM image-quality metrics than existing popular image enhancement techniques when tested on two standard, publicly available datasets. The contribution of the proposed framework is that it offers flexible and effective mechanisms that meet dynamic preprocessing needs on an on-demand basis, preparing better data representations for building machine learning models. The framework can also be used in real time by an ophthalmologist for eye disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_51-EFPT_OIDS_Evaluation_Framework_for_a_Pre_processing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finding Good Binary Linear Block Codes based on Hadamard Matrix and Existing Popular Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121150</link>
        <id>10.14569/IJACSA.2021.0121150</id>
        <doi>10.14569/IJACSA.2021.0121150</doi>
        <lastModDate>2021-11-30T13:01:03.2700000+00:00</lastModDate>
        
        <creator>Driss Khebbou</creator>
        
        <creator>Reda Benkhouya</creator>
        
        <creator>Idriss Chana</creator>
        
        <creator>Hussain Ben-azza</creator>
        
        <subject>Binary linear codes; code construction; minimum hamming distance; error-correcting codes; weight distribution; coding theory; hadamard matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Because of their algebraic structure and simple hardware implementation, linear codes, as a class of error-correcting codes, are used in a multitude of settings such as compact discs, bar codes, satellite and wireless communication, storage systems, ISBN numbers, and more. Nevertheless, designing linear codes with a high minimum Hamming distance for a given code dimension and length remains an open challenge in coding theory. In this work, we propose a construction method for building good binary linear codes from popular ones using the Hadamard matrix. The proposed method takes advantage of the MacWilliams identity for computing the weight distribution, overcoming the problem of computing the minimum Hamming distance for larger dimensions.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_50-Finding_Good_Binary_Linear_Block_Codes_based_on_Hadamard_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Predicting Employee Attrition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121149</link>
        <id>10.14569/IJACSA.2021.0121149</id>
        <doi>10.14569/IJACSA.2021.0121149</doi>
        <lastModDate>2021-11-30T13:01:03.2530000+00:00</lastModDate>
        
        <creator>Norsuhada Mansor</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Mohd Aliff</creator>
        
        <subject>Artificial neural networks; decision tree; employee attrition; machine learning; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Employee attrition has become a focus of researchers and human resources departments because of its effects on organizational performance, regardless of geography, industry, or size. In this context, using machine learning classification models to predict whether an employee is likely to quit could greatly increase the human resources department’s ability to intervene in time and possibly provide a remedy to prevent attrition. This study compares the performance of three machine learning techniques, namely the Decision Tree (DT), Support Vector Machine (SVM), and Artificial Neural Network (ANN) classifiers, and selects the best model. These techniques are compared using the IBM Human Resource Analytics Employee Attrition and Performance dataset. Preprocessing steps for the dataset include data exploration, data visualization, data cleaning and reduction, data transformation, discretization, and feature selection. Parameter tuning and regularization techniques are applied for optimization and to overcome overfitting. The comparative study of the three classifiers found that the optimized SVM model was the best, predicting employee attrition with the highest accuracy of 88.87%, followed by the ANN and DT models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_49-Machine_Learning_for_Predicting_Employee_Attrition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert System in Enhancing Efficiency in Basic Educational Management using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121148</link>
        <id>10.14569/IJACSA.2021.0121148</id>
        <doi>10.14569/IJACSA.2021.0121148</doi>
        <lastModDate>2021-11-30T13:01:03.2230000+00:00</lastModDate>
        
        <creator>Fuseini Inusah</creator>
        
        <creator>Yaw Marfo Missah</creator>
        
        <creator>Najim Ussiph</creator>
        
        <creator>Frimpong Twum</creator>
        
        <subject>Basic education; data mining; educational management; expert system; population</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The importance of basic education is well recognized in every country. Proper planning and utilization of resources at the basic level helps leverage the success of education at all other levels in a country. Ghana is noted for spending more on education than its West African counterparts. Yet in Ghana, attempts to plan by predicting and projecting expenditure, as well as the resources available to manage basic education, are not accurate enough to address the challenges of education in the country. With the COVID-19 pandemic, expenditure on managing educational institutions has risen, as more resources are needed to observe the protocols to curtail the pandemic. This poses a serious challenge to the effective and efficient utilization of the country’s limited resources. In this paper, data from the Ministry of Education is analysed using data mining techniques, which helped identify inaccuracies in the data. Inaccurate population projection affects the Key Performance Indicators (KPIs) in education because population is a common denominator for educational indicators. An expert system is proposed to assist in managing the situation.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_48-Expert_System_in_Enhancing_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable and Reactive Multi Micro-Agents System Middleware for Massively Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121147</link>
        <id>10.14569/IJACSA.2021.0121147</id>
        <doi>10.14569/IJACSA.2021.0121147</doi>
        <lastModDate>2021-11-30T13:01:03.1900000+00:00</lastModDate>
        
        <creator>EZZRHARI Fatima Ezzahra</creator>
        
        <creator>EL ABID AMRANI Noureddine</creator>
        
        <creator>YOUSSFI Mohamed</creator>
        
        <creator>BOUATTANE Omar</creator>
        
        <subject>Massively distributed system; multi agent system (MAS); high performance computing; reactive programming; hazelcast</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>IT transformation has revolutionized the business landscape and turned most organizations into digital, innovation-driven firms. To take full advantage of this digitalization and the exponential growth of data, organizations need to rely on resilient, scalable, extremely connected, highly available &amp; very performant systems. To meet this need, this paper presents a middleware model for a multi micro-agents system based on reactive programming and designed for massively distributed systems and High-Performance Computing, especially to face big data challenges. The middleware builds on multi-agent systems (MAS), which are known as a reliable solution for High-Performance Computing. The proposed framework is built on abstraction and modularity principles through a multi-layered architecture. The design choices aim to ensure cooperation between heterogeneous distributed systems by decoupling the communication model from the cognitive pattern of the micro-agents. To ensure high scalability and to overcome network latency, the proposed architecture uses a distributed model of data &amp; computing that allows the grid size to adapt as needed. The resilience problem is addressed by adopting the same mechanism as the Hazelcast middleware, thanks to its peer-to-peer architecture with no single point of failure.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_47-Scalable_and_Reactive_Multi_Micro_Agents_System_Middleware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuel Consumption Prediction Model using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121146</link>
        <id>10.14569/IJACSA.2021.0121146</id>
        <doi>10.14569/IJACSA.2021.0121146</doi>
        <lastModDate>2021-11-30T13:01:03.1770000+00:00</lastModDate>
        
        <creator>Mohamed A. HAMED</creator>
        
        <creator>Mohammed H.Khafagy</creator>
        
        <creator>Rasha M.Badry</creator>
        
        <subject>Fuel consumption; machine learning; support vector machine; feature weight; feature selection; on-board diagnostic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In this paper, we enhance the accuracy of a fuel consumption prediction model with Machine Learning in order to minimize fuel consumption. This will lead to economic improvement for the business and satisfy the domain needs. We propose a machine learning model to predict vehicle fuel consumption. The proposed model is based on the Support Vector Machine algorithm. Fuel consumption is estimated as a function of Mass Air Flow, Vehicle Speed, Revolutions Per Minute, and Throttle Position Sensor features. The proposed model is applied and tested on a vehicle’s On-Board Diagnostics dataset. The observations were conducted on 18 features. Results achieved higher accuracy, with an R-Squared metric value of 0.97, than other related work using the same Support Vector Machine regression algorithm. We conclude that the Support Vector Machine is highly effective when used for fuel consumption prediction. Our model can compete with other Machine Learning algorithms for the same purpose, which will help manufacturers find more choices for successful fuel consumption prediction models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_46-Fuel_Consumption_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of a Biomimicry Swimming Robot using Smart Actuator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121145</link>
        <id>10.14569/IJACSA.2021.0121145</id>
        <doi>10.14569/IJACSA.2021.0121145</doi>
        <lastModDate>2021-11-30T13:01:03.1430000+00:00</lastModDate>
        
        <creator>Muhammad Shafique Ashroff Md Nor</creator>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Nor Samsiah</creator>
        
        <subject>Biomimicry; fish propulsion; biological life; smart actuator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Biomimicry-based robotic mobility is a newer subgenre of bio-inspired design, applying natural concepts to the development of real-world engineered systems. Previously, researchers used actuators such as motors, pumps, and intelligent materials or intelligent actuators to build many biomimicry robots. Due to the field&#39;s growing interest, this study examines the performance of several biomimicry robots that have been built, based on their different designs, the type of material each robot utilizes, and the type of propulsion the robot uses to swim while providing large thrust. Robots must not only be designed to resemble such animals; their maneuverability and control tactics must also be tied to wildlife to provide the finest impersonation of biological life. Fish propulsion can be separated into two categories: body and/or caudal fins (BCF) and median and/or paired fins (MPF). The old propeller system in underwater robots usually uses a motor and a pump. In recent years, many researchers have begun developing smart materials as drivers, which can be grouped into four categories: shape memory alloy (SMA), ionic polymer metal composite (IPMC), lead zirconate titanate (PZT), and pneumatic soft actuators, as replacements for pumps or motors. Varied materials produce different results and can be applied for different propulsion modes. Future researchers working on biomimetic fish robots will be guided by the findings of this study.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_45-A_Review_of_a_Biomimicry_Swimming_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-level Hierarchical Controller Assisted Task Scheduling and Resource Allocation in Large Cloud Infrastructures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121144</link>
        <id>10.14569/IJACSA.2021.0121144</id>
        <doi>10.14569/IJACSA.2021.0121144</doi>
        <lastModDate>2021-11-30T13:01:03.1300000+00:00</lastModDate>
        
        <creator>Jyothi S</creator>
        
        <creator>B S Shylaja</creator>
        
        <subject>Task-scheduling; VM-migration; improved ant colony system; SLA assurance; energy-efficient consolidation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The rapid emergence of Cloud Computing technologies has alerted academia and industry to the need for Quality-of-Service (QoS) oriented solutions that ensure optimal network performance in terms of Service Level Agreement (SLA) provision as well as energy efficiency. The majority of existing solutions employ Virtual Machine (VM) Migration to perform dynamic resource allocation, which fails to address the key problem of SLA-sensitive scheduling, where a timely and reliable task-migration solution is required. Undeniably, VM consolidation may help achieve energy efficiency along with dynamic resource allocation; however, the classical heuristic methods, often criticized for local minima and premature convergence, do not guarantee the optimality of the solution, especially over large cloud infrastructures. Motivated by these key problems, this paper develops a highly robust and improved meta-heuristic model based on the Ant Colony System to achieve Task Scheduling and Resource Allocation. CloudSim-based simulation over different PlanetLab cloud traces exhibited superior performance by the proposed task-scheduling model in terms of negligible SLA violation, minimum downtime, minimum energy consumption, and a higher number of migrations over other heuristic variants, which makes it suitable for realistic Cloud Computing purposes.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_44-Multi_level_Hierarchical_Controller_Assisted_Task_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Integrated Scheme for Detection and Mitigation of Route Diversion Attack in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121143</link>
        <id>10.14569/IJACSA.2021.0121143</id>
        <doi>10.14569/IJACSA.2021.0121143</doi>
        <lastModDate>2021-11-30T13:01:03.1130000+00:00</lastModDate>
        
        <creator>H C Ramaprasad</creator>
        
        <creator>S. C. Lingareddy</creator>
        
        <subject>Mobile Adhoc network; route diversion attack; routing attack; link legitimacy; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>With the involvement of Mobile Adhoc Networks (MANETs) in many upcoming technologies and applications, there is increasing concern about secure data transmission. Over the last decade, various solutions have evolved to circumvent this threat; however, security remains a significant issue. The problems identified during the review are the use of complex cryptography, low energy efficiency, few studies on route diversion attacks, and little emphasis on securing beacons. An analytical method has been used to study these problems. This paper introduces a novel scheme that carries out a dual operation, viz. i) assessing link legitimacy for the detection of route diversion attacks, and ii) cost-effective countermeasures for the same attack. The key finding of the proposed study is that the token generation process, when associated with link legitimacy, offers more routing security against a wide range of threats. The broader implication of this finding is that the proposed system, characterized by a lightweight encryption operation, is capable of achieving a better balance between data transmission and security performance, unlike existing security solutions in MANETs.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_43-A_Novel_Integrated_Scheme_for_Detection_and_Mitigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Query Expansion based on Word Embeddings and Ontologies for Efficient Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121142</link>
        <id>10.14569/IJACSA.2021.0121142</id>
        <doi>10.14569/IJACSA.2021.0121142</doi>
        <lastModDate>2021-11-30T13:01:03.0830000+00:00</lastModDate>
        
        <creator>Namrata Rastogi</creator>
        
        <creator>Parul Verma</creator>
        
        <creator>Pankaj Kumar</creator>
        
        <subject>CBOW; Information retrieval; ontology; query reformulation; semantic web; skip gram; word embeddings; word2vec</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Information retrieval has been an ongoing challenge for end users seeking to fetch relevant data in one go. The problem intensifies with unstructured data in a semantic web environment. It is also a promising area for researchers to dive into and refine over time. Expanding the user query and reformulating it is one probable solution to increase the efficiency of an information retrieval system. In this paper we propose “WeOnto”, a novel two-level query expansion algorithm that utilizes the combination of web ontologies and word embeddings for similarity calculation. In the first level, the Real Estate Ontology (REO) is created using Prot&#233;g&#233;, and SPARQL queries are passed to retrieve probable semantic words from the given ontology for each user query. The first level gave significant results and improved information retrieval by 18%. The second level of the algorithm uses word embeddings enhanced with domain knowledge to retrieve similar meaningful words based on cosine similarity for the same user query. Word embeddings are implemented using the Word2Vec method, which follows two architectures, namely CBOW and Skip-Gram. The most similar semantic words are retrieved using the CBOW word embedding method in the proposed algorithm and concatenated with the semantic keywords generated from the real estate ontology to form a powerful reformulated query that gives promising relevant results. Finally, the two topmost words according to their similarity index are taken to reformulate the original user query. Experimental results show that the proposed algorithm has given distinct results and a significant improvement of 93% over the initial user query.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_42-Query_Expansion_based_on_Word_Embeddings_and_Ontologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining User Experience of Moodle e-Learning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121141</link>
        <id>10.14569/IJACSA.2021.0121141</id>
        <doi>10.14569/IJACSA.2021.0121141</doi>
        <lastModDate>2021-11-30T13:01:03.0670000+00:00</lastModDate>
        
        <creator>Layla Hasan</creator>
        
        <subject>User Experience (UX); e-learning system; Moodle; usability; learning management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>This research investigates the user experience (UX) of the Moodle e-learning system employed at one university in Malaysia from students’ perspectives. Comprehensive UX criteria, adapted from two reliable sets of criteria, were suggested to evaluate the UX of the e-learning system. The suggested comprehensive UX criteria consist of 8 categories and 29 corresponding sub-categories; these can be used to evaluate the teaching and learning, usability, and hedonic aspects of an e-learning system. Semi-structured interviews and questionnaires were employed based on the suggested UX criteria to collect qualitative and quantitative data regarding users’ experience of the tested e-learning system. The results showed that the e-learning system offered a positive UX in general from the students’ perspectives. The results also showed that the students were satisfied with most of the metrics related to teaching and learning, usability, and hedonic aspects. However, the students identified some challenges they faced while interacting with the e-learning system which could be addressed in order to improve their UX and gain more benefits from an e-learning system with a good UX.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_41-Examining_User_Experience_of_Moodle_e_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Back-off Algorithm with Priority Scheduling for MQTT Protocol and IoT Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121140</link>
        <id>10.14569/IJACSA.2021.0121140</id>
        <doi>10.14569/IJACSA.2021.0121140</doi>
        <lastModDate>2021-11-30T13:01:03.0500000+00:00</lastModDate>
        
        <creator>Marwa O Al Enany</creator>
        
        <creator>Hany M. Harb</creator>
        
        <creator>Gamal Attiya</creator>
        
        <subject>Back-off algorithm; priority scheduling; MQTT protocol; average transmission frequency rate; IoT protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Internet of Things (IoT) protocols have encountered great challenges, as the growth of technology has exposed many limitations in their performance. The Message Queuing Telemetry Transport (MQTT) protocol is one of the most dominant protocols in most fields of smart applications, so it has been chosen in this research as a use case for implementing and evaluating a new proposed Back-off algorithm designed to eliminate suspicious and fake messages by calculating an initial frequency rate for each publisher connected to the MQTT broker. The proposed Back-off algorithm was designed to mitigate the load of uplink traffic by applying an exponential delay factor to suspicious publishers. Another priority scheduling algorithm was proposed to classify publishers as high priority or low priority depending on the newly calculated frequency rate. The two algorithms were implemented on the Mosquitto broker and evaluated in a simulation environment by measuring specified performance metrics. The simulation results showed that the Back-off algorithm reduced network load and introduced an acceptable range of CPU and RAM consumption. The results also showed that the priority classification algorithm managed to reduce the latency of high-priority publishers.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_40-A_New_Back_off_Algorithm_with_Priority_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thermal-aware Dynamic Weighted Adaptive Routing Algorithm for 3D Network-on-Chip</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121139</link>
        <id>10.14569/IJACSA.2021.0121139</id>
        <doi>10.14569/IJACSA.2021.0121139</doi>
        <lastModDate>2021-11-30T13:01:03.0200000+00:00</lastModDate>
        
        <creator>Muhammad Kaleem</creator>
        
        <creator>Ismail Fauzi Bin Isnin</creator>
        
        <subject>Routing algorithms; thermal-aware; dynamic weighted model; 3D Network-on-Chip</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>3D Network-on-Chip (NoC) based systems have severe thermal problems due to the stacking of dies and the disproportionate cooling efficiency of different layers. While adaptive routing can help with thermal issues, current routing algorithms are either thermally imbalanced or suffer from traffic congestion. In this work, a novel thermal-aware dynamic weighted adaptive routing algorithm is proposed that takes traffic and temperature information into account and prevents packets from being routed across congested and thermally aggravated areas. The dynamic weighted model considers parameters related to congestion and thermal issues and provides a balanced, suitable approach according to the current scenario at each node. The efficiency of the proposed algorithm is analyzed and evaluated against state-of-the-art thermal-aware routing algorithms in a simulation environment. Results obtained from the simulation show that the proposed algorithm performs better in terms of global average delay, with a 17-33 percent improvement, and achieves better thermal profiling under various synthetic traffic conditions.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_39-Thermal_aware_Dynamic_Weighted_Adaptive_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Driven Feature Sensitive Progressive Sampling Model for BigData Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121138</link>
        <id>10.14569/IJACSA.2021.0121138</id>
        <doi>10.14569/IJACSA.2021.0121138</doi>
        <lastModDate>2021-11-30T13:01:03.0030000+00:00</lastModDate>
        
        <creator>Nandita Bangera</creator>
        
        <creator>Kayarvizhy N</creator>
        
        <subject>Feature sensitive progressive sampling; BigData analytics; machine learning; ensemble learning; rank sum test; IoT-device classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>BigData requires processing a huge data volume, which is an undeniable challenge for academia and industry. Classical sampling techniques are limited when addressing data imbalance, large data heterogeneity, multi-dimensionality, etc. To alleviate this, this paper proposes a novel machine learning driven feature sensitive progressive sampling (ML-FSPS) model that, in conjunction with an improved feature selection and classification environment, achieves more than 95.7% accuracy even with 10-14% of the original data size. The proposed ML-FSPS model was applied to an IoT-device classification problem that possesses exceedingly high data-imbalance, multi-dimensionality and heterogeneity issues. Functionally, the FSPS-driven analytics model first performed active period segmentation, followed by multi-dimensional (descriptive) statistical feature extraction and Wilcoxon Rank Sum Test based feature selection. Subsequently, it executed K-Means clustering over a gigantic set of feature instances (16,00,000,000 network traces). Here, the K-Means algorithm clustered each feature&#39;s samples into five distinct clusters. With an initial sample size of 10%, the FSPS model selected the same amount of data elements (0.5-5% iteratively) from each cluster for each feature to perform multi-class classification using a homogenous ensemble learning (HEL) model. Here, HEL encompassed the AdaBoost, Random Forest and Extended Tree ensemble algorithms as base classifiers. The simulation results affirmed that the proposed model achieves almost 99% accuracy even with 10-16% of the sample size.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_38-Machine_Learning_Driven_Feature_Sensitive_Progressive_Sampling_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Measuring User Experience based on Software Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121137</link>
        <id>10.14569/IJACSA.2021.0121137</id>
        <doi>10.14569/IJACSA.2021.0121137</doi>
        <lastModDate>2021-11-30T13:01:02.9730000+00:00</lastModDate>
        
        <creator>Issa Atoum</creator>
        
        <creator>Jameel Almalki</creator>
        
        <creator>Saeed Masoud Alshahrani</creator>
        
        <creator>Waleed Al Shehri</creator>
        
        <subject>User experience; benchmark dataset; requirements engineering; UX evaluation; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>User Experience (UX) provides insights into users’ product perceptions while using or intending to use an application. Software products are known for complexity and changeability, from requirements engineering through product operation. Users often evaluate software UX based on a prototype; however, UX is semantically embedded in the software requirements, a crucial indicator of project success. The problem with current UX evaluation methods is their dependence on the actual involvement of users or experts, a time-consuming process. First, this paper builds a benchmark dataset of UX based on textual software requirements by crowdsourcing several UX experts. Second, the paper develops a machine learning model to measure UX based on the dataset. This research describes the dataset characteristics and reports its statistical internal consistency and reliability. Results indicate a high Cronbach&#39;s Alpha and a low root mean square error for the dataset. We conclude that the new benchmark dataset could be used to estimate UX instantly without the need for subjective UX evaluation. The dataset will serve as a foundation of UX features for machine learning models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_37-Towards_Measuring_User_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristics and Think-aloud Method for Evaluating the Usability of Game-based Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121136</link>
        <id>10.14569/IJACSA.2021.0121136</id>
        <doi>10.14569/IJACSA.2021.0121136</doi>
        <lastModDate>2021-11-30T13:01:02.9570000+00:00</lastModDate>
        
        <creator>Kashif Ishaq</creator>
        
        <creator>Fadhilah Rosdi</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Adnan Abid</creator>
        
        <subject>Engagement; game-based; heuristic evaluation; language learning; motivation; think aloud; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Digital learning environments have become increasingly popular in recent years. The rising usage of cell phones has invited researchers to design and develop learning applications and games for mobile phones. Specifically, game-based language learning is being promoted by researchers in many parts of the world. The “Language Learning Serious Game (LLSG)” is based on a theoretical model constructed by the researcher that supports children learning English as a second language in a cultural context. The usability of such games is evaluated based on well-defined heuristics and other standard methods. This research aims to appraise the usability of the LLSG through heuristic and think-aloud approaches while involving all essential stakeholders, including language experts, students, teachers, and game developers. The researcher proposed the heuristics in a cultural context, whereas the think-aloud review was compiled from a rigorous discussion session involving these stakeholders to evaluate the LLSG. The findings obtained from the heuristic evaluation reveal that the usability of the LLSG is acceptable. In addition, various interesting suggestions and reviews were gathered from the discussion between experts and students. This evaluation will further improve future versions of the game.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_36-Heuristics_and_Think_aloud_Method_for_Evaluating_the_Usability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Regularization Effect of Pre-activation Batch Normalization on Convolutional Neural Network Performance for Face Recognition System Paper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121135</link>
        <id>10.14569/IJACSA.2021.0121135</id>
        <doi>10.14569/IJACSA.2021.0121135</doi>
        <lastModDate>2021-11-30T13:01:02.9400000+00:00</lastModDate>
        
        <creator>Abu Sanusi Darma</creator>
        
        <creator>Fatma Susilawati Binti Mohamad</creator>
        
        <subject>Face recognition; pre-active batch normalization; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Face recognition is of pronounced significance to real-world applications such as video surveillance systems, human-computer interaction, and security systems. This biometric authentication system encompasses rich real human face characteristics. As such, it has been one of the important research topics in computer vision. Face recognition systems based on deep learning approaches suffer from internal covariate shift problems that cause gradients to explode or vanish, which leads to improper network training. Improper network training causes network overfitting and a high computational load. This reduces recognition accuracy and slows down the network. This paper proposes a modified pre-activation batch normalization convolutional neural network by adding a batch normalization layer after each convolutional layer within each of the four convolutional units of the proposed model. The performance of the proposed model is validated on a new dataset, AS-Darmaset, which is built out of two publicly available databases. This paper compares the convergence behavior of four different CNN models: the pre-activation batch normalization CNN model, the traditional CNN without batch normalization, the post-activation batch normalization CNN model, and the sparse batch normalization CNN architecture. The evaluation results show that the pre-activation BN CNN has training and validation accuracies of 100.00% and 99.87%, the post-activation batch normalization CNN has 100.00% and 99.81%, the traditional CNN without BN has 96.50% and 98.93%, and the sparse batch normalization CNN has 96.25% and 97.60%, respectively. The results show that the pre-activation BN CNN model is more effective than the other three deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_35-The_Regularization_Effect_of_Pre_activation_Batch_Normalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UX Testing for Mobile Learning Applications of Deaf Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121134</link>
        <id>10.14569/IJACSA.2021.0121134</id>
        <doi>10.14569/IJACSA.2021.0121134</doi>
        <lastModDate>2021-11-30T13:01:02.9270000+00:00</lastModDate>
        
        <creator>Normala Mohamad</creator>
        
        <creator>Nor Laily Hashim</creator>
        
        <subject>User experience; UX testing; UX dimension; UX metrics; mobile learning application; deaf children; smileyometer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Many studies focus on mobile learning for deaf children. However, they do not concentrate on user experience (UX) testing. Current UX testing is based on existing UX evaluation models that are hard to apply due to their comprehensive measurements and lack of description of how to conduct an evaluation for a more specific mobile learning process. Moreover, the existing UX evaluation models are not designed for testing the UX of deaf children’s mobile learning. Hence, this paper proposes questions for UX testing of deaf children’s mobile learning to explore UX issues in offering an enjoyable learning application. A Smileyometer is used to capture data from deaf children after using the selected mobile learning applications, KoTBaM and Learning Fakih. This study involves deaf children aged 7 – 12 years old who are familiar with mobile applications. The survey is divided into two sections: i) demographic information and ii) 24 questions that respondents answer using a Smileyometer. The survey included 38 deaf children from a Malaysian deaf school. The participating deaf children completed the questionnaires with the assistance of their teachers after using the mobile learning applications in the classroom. Various issues still need to be addressed in order to improve the deaf children&#39;s user experience. Special exercises connected to their school syllabus should be developed for deaf children to consolidate their knowledge and enable self-learning everywhere. Furthermore, game elements should be adopted so that deaf children are able to learn while playing.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_34-UX_Testing_for_Mobile_Learning_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Takhrij Al-Hadith based on Contextual Similarity using BERT Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121133</link>
        <id>10.14569/IJACSA.2021.0121133</id>
        <doi>10.14569/IJACSA.2021.0121133</doi>
        <lastModDate>2021-11-30T13:01:02.8930000+00:00</lastModDate>
        
        <creator>Emha Taufiq Luthfi</creator>
        
        <creator>Zeratul Izzah Mohd Yusoh</creator>
        
        <creator>Burhanuddin Mohd Aboobaider</creator>
        
        <subject>Hadith text; Takhrij; natural language processing; text-similarity; word embedding; BERT fine-tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Muslims are required to conduct Takhrij to validate the truth of Hadith text, especially when it is obtained from online media. Typically, traditional Takhrij processes are conducted by experts and apply to Arabic Hadith text. This study introduces a contextual similarity model based on BERT embeddings to handle Takhrij of Indonesian Hadith text. It examines the effectiveness of BERT fine-tuning on six pre-trained models to produce embedding models. The results show that BERT fine-tuning improves the average accuracy of the embedding models by 47.67%, with a mean of 0.956845. The highest accuracy, 1.00, was achieved by the BERT embedding built on the indobenchmark/indobert-large-p2 pre-trained model. In addition, manual evaluation achieved 91.67% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_33-Enhancing_the_Takhrij_Al_Hadith.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Customer Churn Classification with Ensemble Stacking Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121132</link>
        <id>10.14569/IJACSA.2021.0121132</id>
        <doi>10.14569/IJACSA.2021.0121132</doi>
        <lastModDate>2021-11-30T13:01:02.8800000+00:00</lastModDate>
        
        <creator>Mohd Khalid Awang</creator>
        
        <creator>Mokhairi Makhtar</creator>
        
        <creator>Norlina Udin</creator>
        
        <creator>Nur Farraliza Mansor</creator>
        
        <subject>Stacking ensemble; customer churn prediction; bagging; boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Due to the high cost of acquiring new customers, accurate customer churn classification is critical in any company. The telecommunications industry has employed single classifiers to classify customer churn; however, the classification accuracy remains low. Combining the decisions of several classifiers can improve classification accuracy. This article attempts to enhance ensemble integration via stacked generalisation. It proposes a stacking ensemble based on six different learning algorithms as base-classifiers, tested with five different meta-model classifiers. We compared the performance of the proposed stacking ensemble model with single classifiers and with bagging and boosting ensembles. The models were evaluated using accuracy, precision, recall and ROC criteria. The experiments demonstrated that the proposed stacking ensemble model improved customer churn classification: the prediction accuracy, precision, recall and ROC of the proposed stacking ensemble with an MLP meta-model outperformed the other single classifiers and ensemble methods on the customer churn dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_32-Improving_Customer_Churn_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Distributed and Decentralized Peer-to-Peer Protocol for Swarm Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121131</link>
        <id>10.14569/IJACSA.2021.0121131</id>
        <doi>10.14569/IJACSA.2021.0121131</doi>
        <lastModDate>2021-11-30T13:01:02.8630000+00:00</lastModDate>
        
        <creator>Mahmoud Almostafa RABBAH</creator>
        
        <creator>Nabila RABBAH</creator>
        
        <creator>Hicham BELHADAOUI</creator>
        
        <creator>Mounir RIFI</creator>
        
        <subject>Autonomous robots; smart objects; peer-to-peer; real time communication; ROS2; ZeroMQ; middleware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>This contribution proposes an approach to enhance the capability of robotic agents to join the Internet of Things (IoT) and act autonomously in extreme and hostile environments. This capability will support development in environments where the connectivity, availability, and responsiveness of devices are subject to variation and noise. A real-time, distributed and decentralized peer-to-peer protocol was designed to allow Autonomous Unmanned Surface Vessels (AUSV) to extend their context awareness. The developed middleware enables real-time communication and is designed to run on top of a Real Time Operating System (RTOS). Furthermore, the proposed middleware will give researchers access to the large amount of data collected by sensors, and thus address one of the major problems encountered while training artificial intelligence models: the lack of sufficient data.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_31-Real_Time_Distributed_and_Decentralized_Peer_to_Peer_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bioinformatics Research Through Image Processing of Histopathological Response to Stonefish Venom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121130</link>
        <id>10.14569/IJACSA.2021.0121130</id>
        <doi>10.14569/IJACSA.2021.0121130</doi>
        <lastModDate>2021-11-30T13:01:02.8330000+00:00</lastModDate>
        
        <creator>Mohammad Wahsha</creator>
        
        <creator>Heider A. M. Wahsheh</creator>
        
        <creator>Wissam Hayek</creator>
        
        <creator>Haya Al-Tarawneh</creator>
        
        <creator>Maroof Khalaf</creator>
        
        <creator>Tariq Al-Najjar</creator>
        
        <subject>Synanceia verrucosa; Gulf of Aqaba; artificial intelligence; marine biotoxins</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The present study utilizes coastal and environmental engineering to investigate the histopathological effects of Synanceia verrucosa venom on Albino BALB/c mice. S. verrucosa, the most hazardous venomous marine fish, belongs to the family Synanceiidae and is generally known as the &quot;Reef Stonefish&quot;. Crude venom was collected from the venom glands of the dorsal spines of stonefish samples taken from the Jordanian coastline of the Gulf of Aqaba, Red Sea. The mice were given intramuscular injections of the venom. The research then evaluated the acute toxicity and the influence on selected serum biomarker enzymes, as well as possible histological alterations of the soleus skeletal muscles. The 24 h LD50 in mice was 0.107 &#181;g toxin/kg mouse body weight. After treatment with a sublethal dose of venom, the serum biomarkers, including lactate dehydrogenase (LDH) and alanine aminotransferase (ALT), were significantly elevated (P≤0.05). In addition, lipid peroxidation (LPO) contents were significantly increased (P≤0.05) after venom treatment. Moreover, we combined routine medical procedures with artificial intelligence-assisted image analysis for a rapid qualitative and quantitative diagnosis of stonefish injury, based on histophotography of mice tissue samples during the observation period (1, 2, and 3 hours, respectively). The novelty of our method is that it could detect severe and mild damage with accuracies of 93% and 91%, respectively. The main histological abnormalities in muscle were wide variations in fibre diameter and content, spread among randomly distributed muscle fibres. In addition, loss of the tissue&#39;s striated appearance was noticed in toxin-treated groups compared with the control group. Our findings therefore indicate the Stonefish&#39;s harmful influence, which may endanger human life, and highlight the need for appropriate measures. This, in turn, can help ensure beach safety in the Gulf of Aqaba.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_30-Bioinformatics_Research_Through_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Neural Network for Human Activity Recognition based on IoT Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121129</link>
        <id>10.14569/IJACSA.2021.0121129</id>
        <doi>10.14569/IJACSA.2021.0121129</doi>
        <lastModDate>2021-11-30T13:01:02.8170000+00:00</lastModDate>
        
        <creator>Zakaria BENHAILI</creator>
        
        <creator>Youssef BALOUKI</creator>
        
        <creator>Lahcen MOUMOUN</creator>
        
        <subject>IoT; deep learning; CNN; GRU; CNGRU; human activity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Internet of Things (IoT) sensors have received a lot of interest in recent years due to rising application demands in domains like ubiquitous and context-aware computing, activity surveillance, ambient assisted living and, more specifically, human activity recognition. Recent developments in deep learning allow high-level features to be extracted automatically, eliminating the reliance on traditional machine learning techniques, which depended heavily on hand-crafted features. In this paper, we introduce a network that can identify a variety of everyday human actions carried out in a smart home environment, using raw signals generated by Internet of Things motion sensors. We design our architecture as a combination of convolutional neural network (CNN) and gated recurrent unit (GRU) layers. The CNN is first deployed to extract local and scale-invariant features, then the GRU layers are used to extract sequential temporal dependencies. We tested our model, called CNGRU, on three public datasets. It achieves an accuracy better than or comparable to existing state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_29-A_Hybrid_Deep_Neural_Network_for_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Analysis for Revealing Defects in Software Projects: Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121128</link>
        <id>10.14569/IJACSA.2021.0121128</id>
        <doi>10.14569/IJACSA.2021.0121128</doi>
        <lastModDate>2021-11-30T13:01:02.7870000+00:00</lastModDate>
        
        <creator>Alia Nabil Mahmoud</creator>
        
        <creator>V&#237;tor Santos</creator>
        
        <subject>Defects; software projects; statistical model; linear regression; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Defect detection in software is the procedure of identifying parts of software that may contain defects. Software companies always seek to improve the performance of software projects in terms of quality and efficiency. They also seek to deliver software projects to their communities without any defects and just in time. Early detection of defects in software projects also helps avoid project failure and saves costs, team effort, and time. Therefore, these companies need to build an intelligent model capable of detecting software defects accurately and efficiently. The paper is organized as follows. Section 2 presents the materials and methods, PRISMA, the search questions, and the search strategy. Section 3 presents the results with analysis and discussion, visual analysis and analysis per topic. Section 4 presents the methodology. Finally, in Section 5, the conclusion is discussed. The search string was applied to all electronic repositories, looking for papers published between 2015 and 2021, which resulted in 627 publications. The results focused on three important points found by linking the results of the manuscript analysis to the results of the bibliometric analysis. First, the results showed that the number of defects and the number of lines of code are among the most important factors used in revealing software defects. Second, neural networks and regression analysis are among the most important intelligent and statistical methods used for this purpose. Finally, the accuracy metric and the error rate are among the most important metrics used in comparing the efficiency of statistical and intelligent models.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_28-Statistical_Analysis_for_Revealing_Defects_in_Software_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust-based Key Management Conglomerate ElGamal Encryption for Data Aggregation Framework in WSN using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121127</link>
        <id>10.14569/IJACSA.2021.0121127</id>
        <doi>10.14569/IJACSA.2021.0121127</doi>
        <lastModDate>2021-11-30T13:01:02.7530000+00:00</lastModDate>
        
        <creator>T. G. Babu</creator>
        
        <creator>V. Jayalakshmi</creator>
        
        <subject>Wireless sensor networks; data aggregation; PoW detection scheme; blockchain technology; cluster formation; key generation; security; delay ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In wireless sensor networks (WSN), data aggregation is a widely used method. Security issues like data integrity and data confidentiality become a significant concern in data aggregation when the sensor network is deployed in a hostile environment. Many researchers have carried out works to tolerate these security issues; however, those works have limitations such as delay, packet arrival rate, and so on. Hence, to overcome the existing problems, this approach offers a blockchain-dependent data aggregation scheme for WSN. The main intention of the proposed work is certificateless key generation, so that the proposed system&#39;s secrecy rate is improved. Blockchain is employed for security purposes, and it enables the user to acquire the internally stored information in an effortless manner. Initially, deployment of the sensors and base station (BS) is carried out, followed by node registration, at which the public/private keys are generated. The computation of private hash values is carried out by performing certificateless key generation. After that, the blockchain is formed using the PoW (Proof of Work) detection algorithm, followed by the aggregation of data. In the data aggregation process, an ElGamal-based cryptographic approach is introduced to acquire member data, perform the aggregation logic, and transfer the aggregated data. Finally, cluster-based routing is established using a Knapsack-based cluster routing strategy. The performance of the proposed system is estimated and the outcomes are compared with existing techniques in terms of arrival rate, average delay, and the delay ratio of the packets. The investigation illustrates that the suggested approach is better than the traditional techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_27-Trust_based_Key_Management_Conglomerate_ElGamal_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges in Developing Virtual Reality, Augmented Reality and Mixed-Reality Applications: Case Studies on A 3D-Based Tangible Cultural Heritage Conservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121126</link>
        <id>10.14569/IJACSA.2021.0121126</id>
        <doi>10.14569/IJACSA.2021.0121126</doi>
        <lastModDate>2021-11-30T13:01:02.7370000+00:00</lastModDate>
        
        <creator>Ahmad Zainul Fanani</creator>
        
        <creator>Khafiizh Hastuti</creator>
        
        <creator>Arry Maulana Syarif</creator>
        
        <creator>Prayanto Widyo Harsanto</creator>
        
        <subject>Virtual reality; augmented reality; mixed reality; tangible cultural heritage; 3D-based cultural heritage conservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>A model that contributes in a simple, practical and effective way to the development of 3D-based cultural heritage (CH) conservation applications involving the use of VR, AR and MR technologies is proposed, based on the identification of challenges in developing such applications. Identification was carried out by analyzing related and relevant articles selected randomly using the Google and Google Scholar search engines. The model can help researchers avoid inadequate planning when carrying out research in this field, and it is suitable for those just starting out with this type of research. In addition, the model can help researchers implement 3D-based cultural heritage conservation more easily, practically and effectively using virtual reality, augmented reality or mixed reality technology.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_26-Challenges_in_Developing_Virtual_Reality_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polarity Detection of Dialectal Arabic using Deep Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121125</link>
        <id>10.14569/IJACSA.2021.0121125</id>
        <doi>10.14569/IJACSA.2021.0121125</doi>
        <lastModDate>2021-11-30T13:01:02.7230000+00:00</lastModDate>
        
        <creator>Saleh M. Mohamed</creator>
        
        <creator>Ensaf Hussein Mohamed</creator>
        
        <creator>Mohamed A. Belal</creator>
        
        <subject>Sentiment analysis; word embedding; sentiment classification; dialectical arabic; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>With the evolution of a new era of technology and social media networks, and an increase in Arabs sharing their points of view, this research became necessary. Sentiment analysis is concerned with identifying and extracting opinionated phrases from reviews or tweets, specifically, determining whether a given tweet is positive, negative, or neutral. Dialectal Arabic poses difficulties for sentiment analysis. In this paper, four deep learning models are presented, namely convolutional neural networks (CNN), long short-term memory (LSTM), a hybrid CNN-LSTM, and bidirectional LSTMs (BiLSTM), to determine the polarities of tweets written in dialectal Arabic. The performance of the four models is validated on the corpus using word embeddings and k-fold cross-validation. The results show that CNN outperforms the others, achieving an accuracy of 99.65%.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_25-Polarity_Detection_of_Dialectal_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Age Estimation and the Other-race Effect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121124</link>
        <id>10.14569/IJACSA.2021.0121124</id>
        <doi>10.14569/IJACSA.2021.0121124</doi>
        <lastModDate>2021-11-30T13:01:02.6900000+00:00</lastModDate>
        
        <creator>Oluwasegun Oladipo</creator>
        
        <creator>Elijah Olusayo Omidiora</creator>
        
        <creator>Victor Chukwudi Osamor</creator>
        
        <subject>Component; face recognition; age estimation; other-race effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Age estimation is an automated method of predicting human age from 2-D facial feature representations. The majority of studies in this research area use the FG-NET and MORPH 2 databases to train and test developed systems, which are lacking in black-face content. Much age fraud is perpetrated in the sub-Saharan African region due to the unavailability of an official database and unregistered births in rural areas. Unverified ages in the region have made under-age voting, under-age driving, and the engagement of over-age sportsmen possible. The other-race effect can reduce the performance of face recognition techniques, making techniques that work for white faces underperform when deployed in a predominantly black-face region. This study examines the other-race effect on face-based age estimation by comparing the accuracy of an age estimation system trained with predominantly black faces against the same system trained with predominantly white faces. The developed age estimation system uses a genetic algorithm-artificial neural network classifier and the local binary pattern for texture and shape feature extraction. A total of 170 black faces were used for system testing. The results showed that the age estimation system trained with the predominantly black face database (GA-ANN-AES-855) outperformed the system trained with predominantly white faces (GA-ANN-AES-255) when tested on the aforementioned black face samples. The simulation results were further subjected to inferential statistics, which established that the improvement in the correct classification rate was statistically significant. Hence, the other-race effect affects face-based age estimation systems.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_24-Face_Age_Estimation_and_the_Other_race_Effect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mutual Informative Brown Clustering based Multiattribute Cockroach Swarm Optimization for Reliable Data Dissemination in VANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121123</link>
        <id>10.14569/IJACSA.2021.0121123</id>
        <doi>10.14569/IJACSA.2021.0121123</doi>
        <lastModDate>2021-11-30T13:01:02.6600000+00:00</lastModDate>
        
        <creator>D. Radhika</creator>
        
        <creator>A. Bhuvaneswari</creator>
        
        <subject>VANET; data dissemination; mutual informated brown agglomerative clustering; multi attribute cockroach swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Vehicular ad hoc networks (VANETs) aim to provide communication for vehicular networks and enhance road safety and effectiveness with the help of wireless technology. Data dissemination is an important process in communication and plays a significant role in VANET systems. A novel mutual informative Brown clustering based multi-attribute cockroach swarm optimization (MIBCMCSO) technique is introduced for improving data dissemination. In reliable data dissemination, clustering and optimization are the two major processes of the proposed MIBCMCSO technique. Initially, a clustering procedure partitions the entire network into different groups of vehicle nodes based on node distance, direction, density and velocity. For each group, a cluster head is chosen among the members to transmit data efficiently with minimum delay. Secondly, the multi-attribute cockroach swarm optimization technique is applied to find the optimal cluster head through multi-attribute functions such as residual energy, bandwidth availability, and distance. The source node then disseminates data to the destination via the optimal cluster head. Simulation of MIBCMCSO and existing techniques is performed with various performance parameters, such as packet delivery ratio, end-to-end delay and throughput. MIBCMCSO achieves more consistent data dissemination and lower delay than conventional methods.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_23-Mutual_Informative_Brown_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Frequency Descriptor and Hybrid Features for Classification of Brain Magnetic Resonance Images using Ensemble Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121122</link>
        <id>10.14569/IJACSA.2021.0121122</id>
        <doi>10.14569/IJACSA.2021.0121122</doi>
        <lastModDate>2021-11-30T13:01:02.6430000+00:00</lastModDate>
        
        <creator>Shruthi G</creator>
        
        <creator>Krishna Raj P M</creator>
        
        <subject>Brain tumor; hybrid features; local frequency descriptor (LFD); ensemble classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>A brain tumor is an irregular growth of cells in the human brain that interferes with the brain&#39;s normal functions. Early detection of a brain tumor is essential to help the patient live longer through timely treatment. Hence, in this paper, a hybrid ensemble model is proposed to classify input brain MRI images into two classes: brain MRI images having a tumor and brain MRI images with no tumor. The hybrid features are extracted by analyzing the texture and statistical properties of the brain MRI images. Further, the Local Frequency Descriptor (LFD) technique is employed to extract prominent features from the brain tumor region. Finally, an ensemble classifier combining the Support Vector Machine (SVM), Decision Tree (DT) and K-Nearest Neighbour (KNN) techniques is developed to classify the brain MRI images into tumor and non-tumor images. The proposed model is tested on the Kaggle brain tumor dataset and its performance is evaluated in terms of accuracy, sensitivity, specificity, precision, recall and f-measure (f1 score, the harmonic mean of precision and recall). The results show that the proposed model is promising and encouraging.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_22-Local_Frequency_Descriptor_and_Hybrid_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secured and Provisioned Access Authentication using Subscribed user Identity in Federated Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121121</link>
        <id>10.14569/IJACSA.2021.0121121</id>
        <doi>10.14569/IJACSA.2021.0121121</doi>
        <lastModDate>2021-11-30T13:01:02.6130000+00:00</lastModDate>
        
        <creator>Sudan Jha</creator>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Meshal Alharbi</creator>
        
        <creator>Bader Alouffi</creator>
        
        <creator>Shoney Sebastian</creator>
        
        <subject>Security authentication (SA); cloud federation (CF); cloud service provider (CSP); key distribution center (KDC); user identity verification module (UIdVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Cloud computing has become an essential resource for modern trade and market environments through its enabling frameworks. The exponential growth of cloud computing services in the last few years has resulted in extensive use, especially for storing and sharing data on various cloud servers. The current trend in the cloud shows that cloud owners use related functions and target areas in such a way that cloud customers access or store their data either on the same servers or on related servers. At the same time, from a security point of view, the lack of confidence about the customer&#39;s data on the cloud server remains questionable. The need of the hour is to provide the cloud service in a &#39;single port way&#39; by forming a joint management policy to increase customer satisfaction and profitability. In addition, the authentication steps also need to be improved. This paper discusses issues in the security authentication and access provisioning of cloud service consumers in federated clouds using subscribed user identity. This work proposes a user identity verification module (UIdVM) in the cloud service consumer&#39;s authentication process to serve as a cloud broker that minimizes the work overload on the central cloud federation management system, thus enhancing cloud security.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_21-Secured_and_Provisioned_Access_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Na&#239;ve Bayes Classification of High-Resolution Aerial Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121120</link>
        <id>10.14569/IJACSA.2021.0121120</id>
        <doi>10.14569/IJACSA.2021.0121120</doi>
        <lastModDate>2021-11-30T13:01:02.5970000+00:00</lastModDate>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Hamzah Sakidin</creator>
        
        <creator>Mohd Yazid Abu Sari</creator>
        
        <creator>Abd Rahman Mat Amin</creator>
        
        <creator>Suliadi Firdaus Sufahani</creator>
        
        <creator>Abd Wahid Rasib</creator>
        
        <subject>Na&#239;ve Bayes; k-means; classification accuracy; training set size; discriminant analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In this study, the performance of Na&#239;ve Bayes classification on a high-resolution aerial image captured from a UAV-based remote sensing platform is investigated. K-means clustering of the study area is initially performed to assist in selecting the training pixels for the Na&#239;ve Bayes classification. The Na&#239;ve Bayes classification is performed using linear and quadratic discriminant analyses and with training set sizes varied from 10 through 100 pixels. The results show that a training set size of 20 pixels gives the highest overall classification accuracy and Kappa coefficient for both discriminant analysis types. The linear discriminant analysis, with 94.44% overall classification accuracy and a 0.9395 Kappa coefficient, outperforms the quadratic discriminant analysis, with 88.89% overall classification accuracy and a 0.875 Kappa coefficient. Further investigations of the producer accuracy and area size of individual classes show that the linear discriminant analysis produces a more realistic classification than the quadratic discriminant analysis, particularly due to the limited homogeneous training pixels of certain objects.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_20-Na&#239;ve_Bayes_Classification_of_High_Resolution_Aerial_Imagery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Neck Models for Object Detection: A Review and a Benchmarking Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121119</link>
        <id>10.14569/IJACSA.2021.0121119</id>
        <doi>10.14569/IJACSA.2021.0121119</doi>
        <lastModDate>2021-11-30T13:01:02.5830000+00:00</lastModDate>
        
        <creator>Sara Bouraya</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>Object detection; deep learning; computer vision; neck models; feature aggregation; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Artificial intelligence is the science of enabling computers to act without being explicitly programmed. In particular, computer vision is one of its innovative fields, concerned with how computers acquire understanding from videos and images. In the previous decades, computer vision has been applied in many fields such as self-driving cars, efficient information retrieval, effective surveillance, and a better understanding of human behaviour. Based on deep neural networks, object detection is actively growing, pushing the limits of detection accuracy and speed. Object detection aims to locate each object instance and assign a class to it in an image or a video sequence. Object detectors are usually composed of a backbone network designed for feature extraction, a neck model for feature aggregation, and finally a head for prediction. Neck models, the subject of study in this paper, are neural networks used to fuse high-level features with low-level features and are known for their efficiency in object detection. The aim of this study is to review neck models and then benchmark them, providing researchers and scientists with a guideline for their work.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_19-Deep_Learning_based_Neck_Models_for_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Published Articles, Phases and Activities in an Online Social Networks Forensic Investigation Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121118</link>
        <id>10.14569/IJACSA.2021.0121118</id>
        <doi>10.14569/IJACSA.2021.0121118</doi>
        <lastModDate>2021-11-30T13:01:02.5670000+00:00</lastModDate>
        
        <creator>Aliyu Musa Bade</creator>
        
        <creator>Siti Hajar Othman</creator>
        
        <subject>Forensic; investigation; model; online social networks; bibliometric analysis; degree of confidence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The purpose of this paper is to retrieve, evaluate and analyse the available published articles in five (5) relevant online databases from 2011 to 2021, and to critically identify the phases and activities involved in an Online Social Networks Forensic Investigation based on bibliometric analysis and degree of confidence, respectively, in order to understand the evolution of the research domain. A systematic literature review (SLR) technique was adopted by the authors to search using pre-defined keywords. Only scholarly articles published between 2011 and 2021 and written in English were included in the search. A total of 316 subscribed documents were collected from the five (5) online databases based on the search criteria, of which twenty-nine (29) were duplicates. ScienceDirect has the highest number with 189 documents, and the year 2020 recorded the most published articles. Six (6) phases and forty-three (43) activities were identified. According to a review of the retrieved publications, no previous research has statistically retrieved, evaluated and analysed the level of work done in the domain of OSNFI, nor the phases and activities involved in the forensic investigation of an online social networks crime.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_18-A_Systematic_Review_of_Published_Articles_Phases_and_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sparse Distributed Memory Approach for Reinforcement Learning Driven Efficient Routing in Mobile Wireless Network System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121117</link>
        <id>10.14569/IJACSA.2021.0121117</id>
        <doi>10.14569/IJACSA.2021.0121117</doi>
        <lastModDate>2021-11-30T13:01:02.5500000+00:00</lastModDate>
        
        <creator>Varshini Vidyadhar</creator>
        
        <creator>Nagaraj R</creator>
        
        <creator>G Sudha</creator>
        
        <subject>Mobile wireless network; reinforcement learning; Q-learning; Kanerva coding; routing; memory optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>In recent years, researchers have explored the applicability of Q-learning, a model-free reinforcement learning technology, towards designing QoS-aware, resource-efficient, and reliable routing techniques in a dynamically changing network environment. However, Q-learning is based on a tabular representation to characterize learned policies, which frequently encounters a dimension disaster problem when introduced to an uncertain and dynamically changing network environment. In addition, the time required for agent learning in the training phase is too long, which makes it difficult for the agent to generalize the observation state efficiently. To this end, this paper attempts to overcome the memory overhead problems encountered in Q-learning-based routing techniques. The study presents a novel memory-efficient intelligent routing mechanism based on adaptive Kanerva coding, which minimizes the storage cost required for storing large action and state values. Unlike existing schemes, the proposed method optimizes memory requirements. It also enables better generalization by storing the learnable parameters of the function approximator in the agent in a Kanerva-coding data structure. Kanerva coding is a sparse memory with a distributed reading and writing mechanism that enables optimal compression and state abstraction for learning with fewer parameterized components, making it highly memory efficient. The design and implementation of the proposed technique are done on the Anaconda tool. Simulation results demonstrate that the proposed technique can adaptively adjust the routing policy according to the varying network environment to meet the transmission requirements of different services with low memory requirements.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_17-Sparse_Distributed_Memory_Approach_for_Reinforcement_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of the Temporal Topic Model on Higher Education Preferences with Higher Education Ranking Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121116</link>
        <id>10.14569/IJACSA.2021.0121116</id>
        <doi>10.14569/IJACSA.2021.0121116</doi>
        <lastModDate>2021-11-30T13:01:02.5370000+00:00</lastModDate>
        
        <creator>Winda Widya Ariestya</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>I Made Wiryana</creator>
        
        <creator>Setia Wirawan</creator>
        
        <subject>Management decisions; temporal topic model; university; visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Private universities have devised strategies to counteract ongoing competition. Private universities can use appropriate data analysis methods to make higher education management decisions. The goal of this research is to find a new approach to data analysis in the form of visualization using the TTM (Temporal Topic Model) method to assist private university management. The findings comprise two formulas used to generate time-based visualizations and a monthly Temporal Topic Model that visualizes changes in ranking-related news topics, so that management can decide on marketing strategies and policies aligned with public opinion.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_16-Visualization_of_the_Temporal_Topic_Model_on_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DNA Profiling: An Investigation of Six Machine Learning Algorithms for Estimating the Number of Contributors in DNA Mixtures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121115</link>
        <id>10.14569/IJACSA.2021.0121115</id>
        <doi>10.14569/IJACSA.2021.0121115</doi>
        <lastModDate>2021-11-30T13:01:02.5030000+00:00</lastModDate>
        
        <creator>Hamdah Alotaibi</creator>
        
        <creator>Fawaz Alsolami</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <subject>Machine learning; DNA profiling; DNA mixtures; forensic science</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>DNA (Deoxyribonucleic acid) profiling involves the analysis of sequences of individual or mixed DNA profiles to identify the persons these profiles belong to. DNA profiling is used in important applications such as paternity tests and person identification at a crime scene in forensic science. Finding the number of contributors in a DNA mixture is a major task in DNA profiling, with challenges caused by allele dropout, stutter, blobs, and noise. The existing methods for finding the number of unknowns in a DNA mixture suffer from issues including computational complexity and the accuracy of estimating the number of unknowns. Machine learning has received attention recently in this area but with limited success. Many more efforts are needed to improve the robustness and accuracy of these methods. Our research aims to advance the state-of-the-art in this area. Specifically, in this paper, we investigate the performance of six machine learning algorithms -- K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Stochastic Gradient Descent (SGD), and Gaussian Na&#239;ve-Bayes (GNB) -- applied to a publicly available dataset called PROVEDIt, containing mixtures with up to five contributors. We evaluate the algorithmic performance using confusion matrices and four performance metrics, namely accuracy, F1-Score, Recall, and Precision. The results show that LR provides the highest accuracy of 95% for mixtures with five contributors.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_15-DNA_Profiling_An_Investigation_of_Six_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insights on Deep Learning based Segmentation Schemes Towards Analyzing Satellite Imageries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121114</link>
        <id>10.14569/IJACSA.2021.0121114</id>
        <doi>10.14569/IJACSA.2021.0121114</doi>
        <lastModDate>2021-11-30T13:01:02.4870000+00:00</lastModDate>
        
        <creator>Natya S</creator>
        
        <creator>Ramya K</creator>
        
        <creator>Seema Singh</creator>
        
        <subject>Deep learning; landcover; map generation; remotely sense image; satellite image; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Satellite imageries are essentially a complex form of image when subjected to critical analytical operations. The analytical processes applied to remotely sensed satellite imageries are utilized for generating land cover maps. With an abundance of traditional techniques evolved to date, deep learning-based schemes are progressively gaining pace for identifying and classifying terrestrial objects in satellite images. However, different variants of deep learning approaches involve different operations, and so do their consequences. At the same time, no reported literature highlights the issues, trends, and effectiveness on a generalized scale concerning segmentation. Therefore, this paper reviews some of the recent segmentation approaches using deep learning and contributes review findings in the form of research trends, research gaps, and essential learning outcomes. The paper offers a compact and distinct picture of the deep learning approaches used to boost segmentation for satellite images.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_14-Insights_on_Deep_Learning_based_Segmentation_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Diagnosing Drug Users and Types of Drugs Used</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121113</link>
        <id>10.14569/IJACSA.2021.0121113</id>
        <doi>10.14569/IJACSA.2021.0121113</doi>
        <lastModDate>2021-11-30T13:01:02.4570000+00:00</lastModDate>
        
        <creator>Anthony Anggrawan</creator>
        
        <creator>Christofer Satria</creator>
        
        <creator>Che Ku Nuraini</creator>
        
        <creator>Lusiana</creator>
        
        <creator>Ni Gusti Ayu Dasriani</creator>
        
        <creator>Mayadi</creator>
        
        <subject>Machine learning; drug; expert system; forward chaining; certainty factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Drug use is very detrimental to the physical and psychological health of users. Drug abuse also causes addiction and is a global epidemic. Therefore, it is not surprising that scientific research related to drugs has attracted attention. However, many factors hinder medical services for drug users, including cost, flexibility, and slow processes. Meanwhile, electronic systems can speed up handling time, improve work efficiency, save costs, and reduce inspection errors. This means a breakthrough is needed in developing a platform that can identify drug users. Therefore, this research aims to build a machine learning system with expertise like an expert&#39;s that can diagnose drug users and distinguish the types of drugs they use. The expert system on machine learning was developed using the Forward Chaining and Certainty Factor methods. This study concludes that the developed expert system can be used to diagnose drug users and distinguish the types of drugs used with an accuracy of up to 80%. The developed expert system offers an alternative method for narcotics officers and medical doctors in diagnosing drug users and the types of drugs used.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_13-Machine_Learning_for_Diagnosing_Drug_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Supervised Machine Learning Techniques for Sales Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121112</link>
        <id>10.14569/IJACSA.2021.0121112</id>
        <doi>10.14569/IJACSA.2021.0121112</doi>
        <lastModDate>2021-11-30T13:01:02.4400000+00:00</lastModDate>
        
        <creator>Stuti Raizada</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Sales forecasting; linear regression; random forest regression; KNN regression algorithm; SVM algorithm; supervised machine learning techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>This study examines how data mining can be used for sales forecasting in retail sales and demand prediction. Prediction of sales is a crucial task which determines the success of any organization in the long run. There are various techniques available for predicting the sales of a supermarket, such as Time Series Algorithms, Regression Techniques, Association rules, etc. In this paper, a comparative analysis of several Supervised Machine Learning Techniques has been carried out, namely the Multiple Linear Regression Algorithm, Random Forest Regression Algorithm, K-NN Algorithm, Support Vector Machine (SVM) Algorithm, and Extra Tree Regression, to build a prediction model and precisely estimate the possible sales of 45 Walmart retail outlets located in different geographical locations. Walmart is one of the foremost stores across the world, and thus the authors would like to predict its sales accurately. Certain events and holidays affect sales periodically, sometimes even on a daily basis. The forecast of probable sales is based on a combination of features such as previous sales data, promotional events, holiday weeks, temperature, fuel price, CPI (Consumer Price Index), and the unemployment rate in the state. The data is collected from 45 Walmart outlets, and the prediction of Walmart&#39;s sales was done using various Supervised Machine Learning Techniques. The contribution of this paper is to help business owners decide which approach to follow when trying to predict the sales of their supermarket, taking into account different scenarios including temperature, holidays, fuel price, etc. This will help them decide the promotional and marketing strategy for their products.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_12-Comparative_Analysis_of_Supervised_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Deep Learning Face Age Estimation Model: Method and Ethnicity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121111</link>
        <id>10.14569/IJACSA.2021.0121111</id>
        <doi>10.14569/IJACSA.2021.0121111</doi>
        <lastModDate>2021-11-30T13:01:02.4100000+00:00</lastModDate>
        
        <creator>Hadi A. Dahlan</creator>
        
        <subject>Deep learning; face age estimation; face database; ethnicity bias</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Face age estimation is a type of study in computer vision and pattern recognition. Designing an age estimation or classification model requires data as training samples for the machine to learn. The deep learning method has improved estimation accuracy and increased the number of deep learning age estimation models developed. Furthermore, the availability of numerous datasets is making the method an increasingly attractive approach. However, face age databases mostly have limited ethnic subjects, often only one or two ethnicities, which may result in ethnic bias during age estimation, thus impeding progress in understanding face age estimation. This paper reviewed available face age databases and deep learning age estimation models, and discussed issues related to ethnicity when estimating age. The review revealed changes in deep learning architectural designs from 2015 to 2020, frequently used face databases, and the number of different ethnicities considered. Although model performance has improved, the widespread use of a specific few multi-race databases, such as the MORPH and FG-NET databases, suggests that most age estimation studies are biased against non-Caucasian/non-white subjects. Two primary reasons are identified for face age research’s failure to further discover and understand the effects of ethnic traits on a person’s facial aging process: the lack of multi-race databases and the exclusion of ethnic traits. Additionally, this study presented a framework for accounting for ethnicity in face age estimation research and several suggestions on collecting and expanding multi-race databases. The given framework and suggestions are also applicable to other secondary factors (e.g. gender) that affect face age progression and may help further improve future face age estimation research.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_11-A_Survey_on_Deep_Learning_Face_Age_Estimation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing of Middleware and Cross Platform Chat Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121110</link>
        <id>10.14569/IJACSA.2021.0121110</id>
        <doi>10.14569/IJACSA.2021.0121110</doi>
        <lastModDate>2021-11-30T13:01:02.3930000+00:00</lastModDate>
        
        <creator>Danny Sebastian</creator>
        
        <creator>Restyandito</creator>
        
        <creator>Kristian Adi Nugraha</creator>
        
        <subject>Telegram API; line API; chat application; flutter; middleware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The rapid development of technology has resulted in many new innovations on social media platforms. Nowadays, there are many chat applications available, namely WhatsApp, Telegram, LINE, Viber, and many others. This in turn forces users to juggle many chat applications, as different applications can’t communicate with each other. This research aims to develop a chat application which serves as a middleware to make communication possible between the developed chat application and two conventional chat applications (Telegram and LINE). Several tests are done to ensure that the message exchange process (for text, picture, video, and file types) works well between the developed chat application and Telegram or LINE.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_10-Developing_of_Middleware_and_Cross_Platform_Chat.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Flooding Area Detection with SAR Images based on Thresholding and Difference Images Acquired Before and After the Flooding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121109</link>
        <id>10.14569/IJACSA.2021.0121109</id>
        <doi>10.14569/IJACSA.2021.0121109</doi>
        <lastModDate>2021-11-30T13:01:02.3630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Flooding; landslide; sediment disaster; heavy rain; image quality; Synthetic Aperture Radar; SAR; sentinel-1 SAR; thresholding; difference images between before and after disaster</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>A comparative study of flooding area detection with Synthetic Aperture Radar (SAR) images, based on thresholding and on difference images acquired before and after the flooding, is conducted. A method for flooding, landslide, and sediment disaster area detection with SAR is proposed. Two different methods for flooding detection are common: it is not easy to determine a threshold for the thresholding method, while the subtraction method between images acquired before and after a disaster has the disadvantage that false disaster areas are detected due to variation in ground cover targets. Therefore, a comparative study of both methods is required. Their application is demonstrated for the disaster that occurred in Saga Prefecture, Japan, due to prolonged heavy rain from the beginning to the middle of August 2021. Through experiments with Sentinel-1 SAR imagery data, it is found that the proposed method works well for the detection of the disaster.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_9-Comparative_Study_of_Flooding_Area_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Sentiments of Jordanian Students Towards Online Education in the Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121108</link>
        <id>10.14569/IJACSA.2021.0121108</id>
        <doi>10.14569/IJACSA.2021.0121108</doi>
        <lastModDate>2021-11-30T13:01:02.3470000+00:00</lastModDate>
        
        <creator>Bayan Alfayoumi</creator>
        
        <creator>Mohammad Alshraideh</creator>
        
        <creator>Saleh Al-Sharaeh</creator>
        
        <creator>Martin Leiner</creator>
        
        <creator>Iyad Muhsen AlDajani</creator>
        
        <subject>Online education; students; sentiment analysis; online education; online environment; online social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Sentiment analysis and opinion polling are two areas that have grown significantly over the past decade. Opinion research and sentiment analysis in the online education environment can truly reflect the learning state of students, educators, and experts in the field, providing the theoretical basis needed to further review educational procedure and conduct. This study aims to shed light on identifying and visualizing students&#39; objective feelings based on an exploration of the subject matter and materials of learning, gathering sentiments from university Facebook groups at various levels and layers in detail. The proposed method is a qualitative descriptive research method that includes data pre-processing, subject discovery, sentiment analysis, and visualization. In relative terms, 39.7% of text messages were positive and 52.3% were negative; the study further examines the narrative of these sentiments and their impact on the online learning environment.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_8-Analyzing_the_Sentiments_of_Jordanian_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Forecasting Systems for Worldwide International Tourists Arrival</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121107</link>
        <id>10.14569/IJACSA.2021.0121107</id>
        <doi>10.14569/IJACSA.2021.0121107</doi>
        <lastModDate>2021-11-30T13:01:02.3170000+00:00</lastModDate>
        
        <creator>Ram Krishn Mishra</creator>
        
        <creator>Siddhaling Urolagin</creator>
        
        <creator>J. Angel Arul Jothi</creator>
        
        <creator>Nishad Nawaz</creator>
        
        <creator>Haywantee Ramkissoon</creator>
        
        <subject>Tourists; forecasting; machine learning; Covid-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The international tourist movement has grown rapidly in recent decades, and travelers are considered a significant source of income for the tourism economy. When tourists visit a place, they spend considerable money on their enjoyment, travel, and hotel accommodations. In this research, tourist data from 2010 to 2020 have been extracted and extended with an in-depth analysis of different dimensions to identify valuable features. This research uses machine learning regression techniques, namely Support Vector Regression (SVR) and Random Forest Regression (RFR), to forecast worldwide international tourist arrivals; the forecasting accuracy achieved is 99.4% using SVR and 84.7% using RFR. The study also analyzed the forecasting deadlock condition after COVID-19, when the number of international visitors dropped suddenly due to lockdown enforcement by all countries.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_7-Machine_Learning_based_Forecasting_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardware Architecture for Adaptive Dual Threshold Filter and Discrete Wavelet Transform based ECG Signal Denoising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121106</link>
        <id>10.14569/IJACSA.2021.0121106</id>
        <doi>10.14569/IJACSA.2021.0121106</doi>
        <lastModDate>2021-11-30T13:01:02.3000000+00:00</lastModDate>
        
        <creator>Safa MEJHOUDI</creator>
        
        <creator>Rachid LATIF</creator>
        
        <creator>Wissam JENKAL</creator>
        
        <creator>Amine Saddik</creator>
        
        <creator>Abdelhafid EL OUARDI</creator>
        
        <subject>ECG signal; DWT; ADTF; hybrid technique; hardware-software codesign; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The ECG signal, like all signals obtained from a data acquisition system, is affected by noise from physiological and technical sources, such as electromyogram (EMG) activity and power line interference, which can deteriorate its morphology. To overcome this issue, a preprocessing step is applied to remove these noises. Filtering techniques involve complex computations that are becoming more common in medical applications, which must run in real time. As a result, these applications are geared toward integration on high-performance embedded architectures. This paper presents an FPGA (Field Programmable Gate Array) embedded architecture designed for an ECG denoising hybrid technique based on the Discrete Wavelet Transform (DWT) and the Adaptive Dual Threshold Filter (ADTF), dedicated to handling the noises affecting ECG signals. The architecture was designed following a hardware-software codesign using a high-level description language and synthesized for implementation on different FPGAs thanks to the flexibility of the structural description. The global architecture was divided into a set of functional blocks to allow parallel processing of ECG data. The simulation results confirm the high performance of the system in noise reduction without affecting the morphology of the signal. The process takes 0.3 ms at an acquisition frequency of 360 Hz. The whole architecture requires a small area on different FPGAs in terms of resource utilization: it uses less than 1% of the total registers on all FPGA devices, corresponding to 292 registers for Cyclone III LS, Cyclone IV GX, Cyclone IV E, and Arria II GX, and 329 registers for Cyclone V. The logic element occupancy varies between 3% using Cyclone V and 60% using Cyclone IV GX, freeing up space for other parallel processing tasks.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_6-Hardware_Architecture_for_Adaptive_Dual_Threshold_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning for Arabic Image Captioning: A Comparative Study of Main Factors and Preprocessing Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121105</link>
        <id>10.14569/IJACSA.2021.0121105</id>
        <doi>10.14569/IJACSA.2021.0121105</doi>
        <lastModDate>2021-11-30T13:01:02.2870000+00:00</lastModDate>
        
        <creator>Hani Hejazi</creator>
        
        <creator>Khaled Shaalan</creator>
        
        <subject>Deep learning; NLP; Arabic image captioning; Arabic text preprocessing; LSTM; VGG16; INCEPTION V3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Captioning of images has been a major concern for the last decade, with most of the efforts aimed at English captioning. Given the lack of work done for Arabic, relying on translation as an alternative to creating Arabic captions leads to errors accumulating across translation and caption prediction. When working with Arabic datasets, preprocessing is crucial, and handling Arabic morphological features such as Nunation requires additional steps. We tested 32 different combinations of variables that affect caption generation, including preprocessing, deep learning techniques (LSTM and GRU), dropout, and feature extraction (Inception V3, VGG16). Moreover, our results on the only publicly available Arabic dataset outperform the best reported result, with BLEU-1=36.5, BLEU-2=21.4, BLEU-3=12, and BLEU-4=6.6. As a result of this study, we demonstrated that using Arabic preprocessing and VGG16 image feature extraction enhanced Arabic caption quality, but we saw no measurable difference when using dropout or LSTM instead of GRU.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_5-Deep_Learning_for_Arabic_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application-based Framework for Analysis, Monitoring and Evaluation of National Open Data Portals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121104</link>
        <id>10.14569/IJACSA.2021.0121104</id>
        <doi>10.14569/IJACSA.2021.0121104</doi>
        <lastModDate>2021-11-30T13:01:02.2530000+00:00</lastModDate>
        
        <creator>Vigan Raca</creator>
        
        <creator>Goran Velinov</creator>
        
        <creator>Betim Cico</creator>
        
        <creator>Margita Kon-Popovska</creator>
        
        <subject>Open data; government; datasets; evaluation; portals; framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Open Government Data (OGD) portals are considered of significant national importance for improving transparency and accountability. The continuous publication of data in OGD portals introduces the need for high-quality data and a high-quality portal itself. This paper aims to address these data quality issues through a framework composed of several components for measuring and monitoring OGD portals in an automated way. The proposed framework is intended to monitor and evaluate OGD quality, and OGD portals respectively, and to show their progress or regress based on scores accumulated over different periods. One advantage of the proposed framework is its compatibility with any OGD portal due to its flexible integration: the integration interface consists of only a few basic metrics that almost any OGD portal possesses, yet it can produce very comprehensive results. The other advantage is the possibility of extracting the collected data for further analysis and of introducing artificial intelligence (AI) for prediction purposes, to point out how the OGD portals will stand in the next period.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_4-Application_based_Framework_for_Analysis_Monitoring_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Action Recognition in Video Sequence using Logistic Regression by Features Fusion Approach based on CNN Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121103</link>
        <id>10.14569/IJACSA.2021.0121103</id>
        <doi>10.14569/IJACSA.2021.0121103</doi>
        <lastModDate>2021-11-30T13:01:02.2370000+00:00</lastModDate>
        
        <creator>Tariq Ahmad</creator>
        
        <creator>Jinsong Wu</creator>
        
        <creator>Imran Khan</creator>
        
        <creator>Asif Rahim</creator>
        
        <creator>Amjad Khan</creator>
        
        <subject>Human action recognition; logistic regression; deep learning; convolution neural network; features fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>Human Action Recognition (HAR) has gained much attention due to its wide range of real-world applications, such as video surveillance, robotics, and computer vision. In video surveillance systems, security cameras are placed to monitor activities and motion and to generate alerts in undesirable situations. Because of this importance in daily life, HAR has become a primary and key component of video surveillance systems. Many researchers have worked on human action recognition, but HAR remains a challenging problem due to the large variation among humans and among human actions in daily life, which makes recognition very challenging and surveillance systems difficult to improve. In this article, a novel method is proposed based on the fusion of pre-trained convolutional neural network (CNN) features. Initially, pre-trained CNN VGG19 weights are exploited to extract fully connected 7th-layer (FC7) features of the selected dataset; subsequently, fully connected 8th-layer (FC8) features are extracted by employing the pre-trained weights of the same network. The resulting fused feature vector is further optimized by employing two statistical feature selection techniques, the chi-square test and mutual information, to select the best features, reducing redundancy and increasing the action recognition accuracy; a threshold value is used for selecting the best features. Furthermore, the best features are fused, and grid search with 10-fold cross-validation is applied for hyperparameter tuning to select the best k-fold; the resulting best parameters are fed to a Logistic Regression (LR) classifier for recognition. The proposed technique uses the YouTube 11 action dataset and achieved 98.49% accuracy. Lastly, the proposed method is compared with existing state-of-the-art methods, showing dominant performance.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_3-Human_Action_Recognition_in_Video_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Economy and its Importance in the Development of Small and Medium Innovative Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121102</link>
        <id>10.14569/IJACSA.2021.0121102</id>
        <doi>10.14569/IJACSA.2021.0121102</doi>
        <lastModDate>2021-11-30T13:01:02.2070000+00:00</lastModDate>
        
        <creator>Tatiana Korsakova</creator>
        
        <creator>Lyudmila Dubanevich</creator>
        
        <creator>Oleg Drozdov</creator>
        
        <creator>Anna Mikhailova</creator>
        
        <creator>Ekaterina Kamchatova</creator>
        
        <subject>Business; SMEs; entrepreneurship; Russia; digitalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>A single universally accepted definition of the digital economy, its levels, and its interconnections with other economies has not yet been developed. Thus, various definitions of the digital economy have been investigated, as well as various approaches to describing its transformation process, in order to establish these relationships correctly. The article examines the relationship between the state of the digital economy, innovative small and medium enterprises, and the development of small and medium businesses in general. The transformation of the digital economy of Russia is determined to be at the second, intermediate stage of development, and the main barriers to moving to the third level are pointed out. The dual role of the digital economy in the development of small and medium innovative enterprises is determined based on the selected model of R. Bukht &amp; R. Heeks, the two directions of influence being the provision of SMEs with necessary tools and the digital economy becoming the object of innovative development by SMEs. Finally, an assessment of the state of the digital economy in Russia is given, along with recommendations for its further implementation.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_2-Digital_Economy_and_its_Importance_in_the_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of SNMPv1/2c/3 using Different Security Models on Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121101</link>
        <id>10.14569/IJACSA.2021.0121101</id>
        <doi>10.14569/IJACSA.2021.0121101</doi>
        <lastModDate>2021-11-30T13:01:02.1130000+00:00</lastModDate>
        
        <creator>Eric Gamess</creator>
        
        <creator>Sergio Hernandez</creator>
        
        <subject>Simple network management protocol; SNMP; performance evaluation; benchmarks; Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(11), 2021</description>
        <description>The Simple Network Management Protocol (SNMP) is one of the dominant protocols for network monitoring and configuration. The first two versions of SNMP (v1 and v2c) use the Community-based Security Model (CSM), where the community is transferred in clear text, resulting in a low level of security. With the release of SNMPv3, the User-based Security Model (USM) and Transport Security Model (TSM) were proposed, with strong authentication and privacy at different levels. The Raspberry Pi family of Single-Board Computers (SBCs) is widely used for many applications. To help their integration into network management systems, it is essential to study the impact of the different versions and security models of SNMP on these SBCs. In this work, we carried out a performance analysis of SNMP agents running in three different Raspberry Pis (Pi Zero W, Pi 3 Model B, and Pi 3 Model B+). Our comparisons are based on the response time, defined as the time required to complete a request/response exchange between a manager and an agent. Since we did not find an adequate tool for our assessments, we developed our own benchmarking tool. We did numerous experiments, varying different parameters such as the type of requests, the number of objects involved per request, the security levels of SNMPv3/USM, the authentication and privacy protocols of SNMPv3/USM, the transport protocols, and the versions and security models of SNMP. Our experiments were executed with Net-SNMP, an open-source and comprehensive distribution of SNMP. Our tests indicate that SNMPv1 and SNMPv2c have similar performance. SNMPv3 has a longer response time, due to the overhead caused by the security services (authentication and privacy). The Pi 3 Model B and Pi 3 Model B+ have comparable performance, and significantly outperform the Pi Zero W.</description>
        <description>http://thesai.org/Downloads/Volume12No11/Paper_1-Performance_Evaluation_of_SNMPv1_2c_3.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Joint Deep Clustering: Classification and Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121096</link>
        <id>10.14569/IJACSA.2021.0121096</id>
        <doi>10.14569/IJACSA.2021.0121096</doi>
        <lastModDate>2021-10-31T06:54:28.4600000+00:00</lastModDate>
        
        <creator>Arwa Alturki</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <subject>Clustering; deep learning; deep neural network; representation learning; clustering loss; reconstruction loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Clustering is a fundamental problem in machine learning. To address this, a large number of algorithms have been developed. Some of these algorithms, such as K-means, handle the original data directly, while others, such as spectral clustering, apply linear transformation to the data. Still others, such as kernel-based algorithms, use nonlinear transformation. Since the performance of the clustering depends strongly on the quality of the data representation, representation learning approaches have been extensively researched. With the recent advances in deep learning, deep neural networks are being increasingly utilized to learn clustering-friendly representation. We provide here a review of existing algorithms that are being used to jointly optimize deep neural networks and clustering methods.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_96-Joint_Deep_Clustering_Classification_and_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Faculty e-Learning Adoption During the COVID-19 Pandemic: A Case Study of Shaqra University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121095</link>
        <id>10.14569/IJACSA.2021.0121095</id>
        <doi>10.14569/IJACSA.2021.0121095</doi>
        <lastModDate>2021-10-31T06:54:28.4430000+00:00</lastModDate>
        
        <creator>Asma Hassan Alshehri</creator>
        
        <creator>Saad Ali Alahmari</creator>
        
        <subject>e-Learning; Learning Management System (LMS); distance learning; LMS readiness; training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>e-Learning can generally be applied by employing learning management system (LMS) platforms designed to support an instructor in developing, managing, and providing online courses to learners. During the COVID-19 pandemic, several LMS platforms were adopted in Saudi Arabian institutions, such as Moodle and Blackboard. However, in order to adopt e-learning and operate LMS platforms, there is a need to investigate the factors that influence the capability of faculty to utilize e-learning and its perceived benefits for students. This paper examines how training support and LMS readiness factors influence the capability of faculty to adopt e-learning and the benefits perceived by students. A quantitative research method was conducted using an online questionnaire survey. Research data were collected from 274 faculty members, who used Moodle as the main LMS platform, at Shaqra University in the Kingdom of Saudi Arabia (KSA). The results reveal that training support and LMS readiness have a positive influence on the faculty’s capability to adopt e-learning, which in turn enhances students’ perceived benefits. By identifying the factors that influence e-learning adoption, universities can provide enhanced e-learning services to students and support faculty by providing adequate training and a powerful e-learning platform.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_95-Faculty_e_Learning_Adoption_During_the_COVID_19_Pandemic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Deep Learning-based Online Proctoring System using Face Recognition, Eye Blinking, and Object Detection Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121094</link>
        <id>10.14569/IJACSA.2021.0121094</id>
        <doi>10.14569/IJACSA.2021.0121094</doi>
        <lastModDate>2021-10-31T06:54:28.4130000+00:00</lastModDate>
        
        <creator>Istiak Ahmad</creator>
        
        <creator>Fahad AlQurashi</creator>
        
        <creator>Ehab Abozinadah</creator>
        
        <creator>Rashid Mehmood</creator>
        
        <subject>Online learning; online proctor; student authentication; face detection; face recognition; eye blinking detection; object detection; distance learning; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Distance and online learning (or e-learning) has become a norm in training and education due to a variety of benefits such as efficiency, flexibility, affordability, and usability. Moreover, the COVID-19 pandemic has made online learning the only option due to its physical isolation requirements. However, monitoring of attendees and students during classes, particularly during exams, is a major challenge for online systems due to the lack of physical presence. There is a need to develop methods and technologies that provide robust instruments to detect unfair, unethical, and illegal behaviour during classes and exams. We propose in this paper a novel online proctoring system that uses deep learning to continually proctor physical places without the need for a physical proctor. The system employs biometric approaches such as face recognition using the HOG (Histogram of Oriented Gradients) face detector and the OpenCV face recognition algorithm. Also, the system incorporates eye blinking detection to detect stationary pictures. Moreover, to enforce fairness during exams, the system is able to detect gadgets including mobile phones, laptops, iPads, and books. The system is implemented as a software system and evaluated using the FDDB and LFW datasets. We achieved up to 97% and 99.3% accuracies for face detection and face recognition, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_94-A_Novel_Deep_Learning_based_Online_Proctoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Code Optimizations for Parallelization of Programs using Data Dependence Identifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121093</link>
        <id>10.14569/IJACSA.2021.0121093</id>
        <doi>10.14569/IJACSA.2021.0121093</doi>
        <lastModDate>2021-10-31T06:54:28.3970000+00:00</lastModDate>
        
        <creator>Kavya Alluru</creator>
        
        <creator>Jeganathan L</creator>
        
        <subject>Automatic parallelization; parallelizing compilers; code optimizations; data dependence; loop invariant code motion; node splitting; live range analysis; loop fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>In a parallelizing compiler, code transformations help to reduce the data dependencies and identify parallelism in a code. In our earlier paper, we proposed a model called the Data Dependence Identifier (DDI), in which a program P is represented as a graph GP. Using GP, we could identify data dependencies in a program and also perform transformations such as dead code elimination and constant propagation. In this paper, we present algorithms for the loop invariant code motion, live range analysis, node splitting, and loop fusion code transformations using DDI in polynomial time.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_93-Code_Optimizations_for_Parallelization_of_Programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Model of Quantum Transfer Learning to Classify Face Images with a COVID-19 Mask</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121092</link>
        <id>10.14569/IJACSA.2021.0121092</id>
        <doi>10.14569/IJACSA.2021.0121092</doi>
        <lastModDate>2021-10-31T06:54:28.3670000+00:00</lastModDate>
        
        <creator>Christian Soto-Paredes</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Hybrid; quantum; classify; face; COVID-19; mask</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The COVID-19 disease has affected about 219 million people, of whom 4.55 million have died. This importance has led to the implementation of security protocols to prevent the spread of the disease, one of the main ones being the use of protective masks that properly cover the nose and mouth. The objective of this paper was to classify images of faces wearing COVID-19 protective masks into the classes correct mask, incorrect mask, and no mask, with a hybrid model of quantum transfer learning. To do this, the method used made it possible to gather a data set of 660 people of both sexes (men and women), with ages ranging from 18 to 86 years old. The classic transfer learning model chosen was ResNet-18; the variational layers of the proposed model were built with the Basic Entangler Layers template for four qubits, and the optimization of the training was carried out with Stochastic Gradient Descent with Nesterov momentum. The main finding was the 99.05% accuracy in classifying the correct protective masks using the PennyLane quantum simulator in the tests performed. The conclusion reached is that the proposed hybrid model is an excellent option for detecting the correct position of the COVID-19 protective mask.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_92-Hybrid_Model_of_Quantum_Transfer_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employing Video-based Motion Data with Emotion Expression for Retail Product Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121091</link>
        <id>10.14569/IJACSA.2021.0121091</id>
        <doi>10.14569/IJACSA.2021.0121091</doi>
        <lastModDate>2021-10-31T06:54:28.3500000+00:00</lastModDate>
        
        <creator>Ahmad B. Alkhodre</creator>
        
        <creator>Abdullah M. Alshanqiti</creator>
        
        <subject>Shopper behavior; motion tracking; emotion classification; machine learning; association rule learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Mining approaches based on video data can serve to assess stores’ performance by gaining insight into what needs to be done to further enhance customers’ experience, leading to increased business profits. To this end, this paper proposes an association rule mining approach, relying on video analytic techniques, for detecting store items that are likely to be out of demand. Our approach is built upon motion-tracking and facial emotion expression methods. We used a motion-tracking technique to record information related to customers’ regions of interest inside the store and customers’ interactions with the on-shelf products. Besides, we implemented an emotion classification model, trained on recorded video data, to identify customers’ emotions towards items. Results of our conducted experiments yielded several scenarios representing customer behavior towards out-of-demand store items.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_91-Employing_Video_based_Motion_Data_with_Emotion_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-logic Rulesets based Junction-point Movement Controller Framework for Traffic Streamlining in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121090</link>
        <id>10.14569/IJACSA.2021.0121090</id>
        <doi>10.14569/IJACSA.2021.0121090</doi>
        <lastModDate>2021-10-31T06:54:28.3330000+00:00</lastModDate>
        
        <creator>Sreelatha R</creator>
        
        <creator>Roopalakshmi R</creator>
        
        <subject>Intelligent transportation systems; junction-point traffic monitoring; ruleset database; traffic density estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>In the internet era, Intelligent Transportation Systems (ITS) for smart cities are gaining tremendous attention, since they offer intelligent smart services for traffic monitoring and management with the help of different technologies such as micro-electronics, sensors, and IoT. However, in the existing literature, very few attempts have been made towards effective traffic monitoring at road junctions in terms of providing faster decision making, so that the traffic present in heavily congested urban environments can be dynamically rerouted. To tackle this issue, this article proposes a new controller framework that can be applied at junction-points to control the traffic movement. Specifically, the proposed framework utilizes a multi-logic ruleset database to estimate the traffic density dynamically in the first stage, followed by a signal-time computation algorithm in the second stage, in order to streamline the traffic and achieve faster clearance at the junction-points. The experimental results, obtained with a test environment using MEMSIC nodes, clearly demonstrate the improved efficiency of the proposed framework in terms of various performance metrics, including move command frequency, ruleset score, and fluctuation score.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_90-Multi_logic_Rulesets_based_Junction_point_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verifiable Homomorphic Encrypted Computations for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121089</link>
        <id>10.14569/IJACSA.2021.0121089</id>
        <doi>10.14569/IJACSA.2021.0121089</doi>
        <lastModDate>2021-10-31T06:54:28.3030000+00:00</lastModDate>
        
        <creator>Ruba Awadallah</creator>
        
        <creator>Azman Samsudin</creator>
        
        <creator>Mishal Almazrooie</creator>
        
        <subject>Cloud computing; computation verification; data confidentiality; data integrity; data privacy; distributed processing; homomorphic encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Cloud computing is becoming an essential part of computing, especially for enterprises. As the need for cloud computing increases, the need for cloud data privacy, confidentiality, and integrity is also becoming essential. Among potential solutions, homomorphic encryption can provide the needed privacy and confidentiality. Unlike traditional cryptosystems, homomorphic encryption allows computation to be delegated to the cloud provider while the data remains in its encrypted form. Unfortunately, the solution is still lacking in data integrity. While on the cloud, there is a possibility that valid homomorphically encrypted data is swapped with other valid homomorphically encrypted data. This paper proposes a verification scheme based on the modular residue to validate homomorphic encryption computation over an integer finite field for use in cloud computing, so that data confidentiality, privacy, and integrity can be enforced during an outsourced computation. The performance of the proposed scheme varies based on the underlying cryptosystems used. However, based on the tested cryptosystems, the scheme has a 1.5% storage overhead and a computational overhead that can be configured to work below 1%. Such overhead is an acceptable trade-off for verifying cloud computation, which is highly needed in cloud computing.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_89-Verifiable_Homomorphic_Encrypted_Computations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Protection Scheme for Biometric Templates based on Random Projection and CDMA Principle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121088</link>
        <id>10.14569/IJACSA.2021.0121088</id>
        <doi>10.14569/IJACSA.2021.0121088</doi>
        <lastModDate>2021-10-31T06:54:28.2870000+00:00</lastModDate>
        
        <creator>Ayoub Lahmidi</creator>
        
        <creator>Khalid Minaoui</creator>
        
        <creator>Chouaib Moujahdi</creator>
        
        <creator>Mohammed Rziza</creator>
        
        <subject>Biometric template; security; authentication; CDMA; random projection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Although biometric technologies have revolutionized the world of communication and dematerialized exchanges, authentication by biometrics still has many limitations, particularly in terms of privacy concerns, due to the various potential threats to which biometric templates are subject. The existence of these vulnerabilities has created an enormous need for biometric data protection. Indeed, several protection schemes have been proposed, which are normally supposed to offer certain guarantees, including the confidentiality of the collected personal data and the reliability of the recognition system. The challenge for all these techniques is to achieve a trade-off between performance accuracy and robustness against vulnerabilities, which is not always obvious. In this paper, we propose a theoretical protection model dedicated to biometric authentication systems. The objective is to ensure a high level of security for the stored reference data in such a way that it complies with the non-invertibility and revocability properties. The main idea is to incorporate a discretization tool, namely the spread spectrum technology and in particular the Code Division Multiple Access (CDMA), into a biometric system based on Random Projection. We introduce and demonstrate the proposed scheme as a non-invertible transform, while proving its effectiveness and ability to meet the requirements of revocability and unlinkability.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_88-A_New_Protection_Scheme_for_Biometric_Templates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Transfer Learning for Nutrient Deficiency Prediction and Classification in Tomato Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121087</link>
        <id>10.14569/IJACSA.2021.0121087</id>
        <doi>10.14569/IJACSA.2021.0121087</doi>
        <lastModDate>2021-10-31T06:54:28.2730000+00:00</lastModDate>
        
        <creator>Vrunda Kusanur</creator>
        
        <creator>Veena S Chakravarthi</creator>
        
        <subject>Nutrient deficiency; plant nutrients; deep neural networks; transfer learning; random forest (RF); support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Plants need nutrients to develop normally. The essential nutrients like carbon, oxygen, and hydrogen are obtained from sunlight, air, and water for food preparation and plant growth. For healthy growth, plants also need macronutrients such as Potassium, Calcium, Nitrogen, Sulphur, Magnesium, and Phosphorus in relatively great quantities. When a plant does not find the necessary nutrients for its growth in adequate amounts, a deficiency of plant nutrients occurs. Plants exhibit various symptoms to indicate the deficiency. Automatic identification and differentiation of these deficiencies are very important in the greenhouse environment. Deep Neural Networks are extremely efficient in image categorization problems. In this work, we used part of a pre-trained deep learning model, i.e., a Transfer Learning model, to detect nutrient stress in the plant. We compared three different architectures, including Inception-V3, ResNet50, and VGG16, with two classifiers, RF and SVM, to improve classification accuracy. A total of 880 images of Calcium and Magnesium deficiencies in the Tomato plant from the greenhouse were collected to form a dataset. For training, 704 (80%) images are used, and for testing, 176 (20%) images are used to examine the model performance. Experimental results demonstrated that the highest accuracy of 99.14% was achieved by the VGG16 model with the SVM classifier, and 98.71% by Inception-V3 with the Random Forest classifier. For a batch size of 8 and epochs equal to 10, the Inception-V3 architecture attained the highest validation accuracy of 99.99% and the least validation loss of 0.0000384 on average.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_87-Using_Transfer_Learning_for_Nutrient_Deficiency_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Preoperative Planning for High Tibial Osteotomy using 2D Medical Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121086</link>
        <id>10.14569/IJACSA.2021.0121086</id>
        <doi>10.14569/IJACSA.2021.0121086</doi>
        <lastModDate>2021-10-31T06:54:28.2400000+00:00</lastModDate>
        
        <creator>Norazimah Awang</creator>
        
        <creator>Faudzi Ahmad</creator>
        
        <creator>Rosnita A. Rahaman</creator>
        
        <creator>Riza Sulaiman</creator>
        
        <creator>Azrulhizam Shapi’i</creator>
        
        <creator>Abdul Halim Abdul Rashid</creator>
        
        <subject>Center of rotation of angulation; CORA; HTO; software; digital; medical image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The pre-operative planning process for High Tibial Osteotomy (HTO) is vital to correct deformities of the long bones. The most important step is to find the Centre Of Rotation of Angulation (CORA) and display the forecast result based on the value of the correction angles simultaneously. Presently, this must be done manually because current software can only define either the CORA point or the correction angle at one time. This paper proposes using computer-aided software to fully digitize the pre-operative planning process for HTO. For this purpose, we introduce the OsteoAid software, which enables the user to define the mechanical or anatomical axes and to determine the CORA point and the angle at one time. For testing purposes, we compared the reliability of the osteotomy correction angle between two software tools (MedWeb and OsteoAid) in preoperative planning of open-wedge high tibial osteotomy. This is to ensure that the new software is reliable for the correction. Thirteen digital long leg radiographs in long-standing positions from the frontal axis, showing patients with both tibia deformities, were examined using intra-class correlation. The images were accessed from the picture archiving and communication system (PACS). Three medical officers (raters) who were involved in an osteotomy used the same medical image format twice with a two-week interval. Using the MedWeb software, the mean correction angle score of each rater is at an excellent level: 0.989 (intra-rater1), 0.982 (intra-rater2) and 0.972 (intra-rater3). Scores of each rater for OsteoAid are also excellent: 0.949, 0.987 and 0.986 respectively. The inter-rater reliabilities of the correction angle were 0.820 and 0.979 (p&lt;0.001) respectively for each software. The principal finding of this study was that the new software (OsteoAid) showed excellent reliability and good consistency in preoperative planning in finding the CORA and the correction angle.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_86-Digital_Preoperative_Planning_for_High_Tibial_Osteotomy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Delivery of User Intentionality between Computer and Wearable for Proximity-based Bilateral Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121085</link>
        <id>10.14569/IJACSA.2021.0121085</id>
        <doi>10.14569/IJACSA.2021.0121085</doi>
        <lastModDate>2021-10-31T06:54:28.2270000+00:00</lastModDate>
        
        <creator>Jaeseong Jo</creator>
        
        <creator>Eun-Kyu Lee</creator>
        
        <creator>Junghee Jo</creator>
        
        <subject>Security; authentication; internet of things; user intentionality; proximity-based authentication; bilateral</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Recent research has discovered that delivering user intentionality for authentication resolves the random authentication problem in proximity-based authentication. However, existing approaches still have limitations: energy consumption, inaccurate data consistency, and vulnerability to shoulder surfing. To resolve them, this paper proposes a new method for user intent delivery and a new proximity-based bilateral authentication system adopting it. The proposed system designs an authentication protocol that reduces energy consumption in a power-constrained wearable, applies the Needleman-Wunsch algorithm to the matching of time values, and introduces randomness into the user behavior that must be performed for authentication. We developed a prototype of our authentication system on which a series of experiments was conducted. Experimental results show that the proposed method yields more accurate data consistency than conventional methods for delivering user authentication intent. Ultimately, our system reduces the authentication failure rate by 66.7% compared to conventional ones.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_85-Delivery_of_User_Intentionality_between_Computer_and_Wearable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient DNN Ensemble for Pneumonia Detection in Chest X-ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121084</link>
        <id>10.14569/IJACSA.2021.0121084</id>
        <doi>10.14569/IJACSA.2021.0121084</doi>
        <lastModDate>2021-10-31T06:54:28.1930000+00:00</lastModDate>
        
        <creator>V S Suryaa</creator>
        
        <creator>Arockia Xavier Annie R</creator>
        
        <creator>Aiswarya M S</creator>
        
        <subject>Deep neural networks; ensemble learning; pneumonia detection using x-ray images; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Pneumonia is a disease caused by a variety of organisms, including bacteria, viruses, and fungi, and can be fatal if timely medical care is not provided. According to the World Health Organization (WHO) report, the most common diagnosis for severe COVID-19 is severe pneumonia. The most common method of detecting pneumonia is through chest X-rays, which is a very time-intensive process and requires a skilled expert. The rapid development in the field of deep learning and neural networks in recent years has led to drastic improvements in the automation of pneumonia detection from analysing chest x-rays. In this paper, pre-trained Convolutional Neural Networks (CNN) on chest x-ray images are used as feature extractors, whose outputs are then further processed to classify the images in order to predict whether a person has pneumonia or not. The different pre-trained Convolutional Neural Networks used are assessed with various parameters regarding their predictions on the images. The results of the pre-trained neural networks were examined, and an ensemble model was proposed that combines the predictions of the best pre-trained models to produce better results than the individual models.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_84-Efficient_DNN_Ensemble_for_Pneumonia_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Controlling Scheme to Mitigate Flood Attack in Delay Tolerant Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121083</link>
        <id>10.14569/IJACSA.2021.0121083</id>
        <doi>10.14569/IJACSA.2021.0121083</doi>
        <lastModDate>2021-10-31T06:54:28.1800000+00:00</lastModDate>
        
        <creator>Hanane ZEKKORI</creator>
        
        <creator>Sa&#239;d AGOUJIL</creator>
        
        <creator>Youssef QARAAI</creator>
        
        <subject>DTN; flooding attack; DOS; congestion; buffer capacity; bundle; ONE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Conventional routing protocols break down in opportunistic networks due to long delays, frequent disconnections, and resource scarcity. The Delay Tolerant Network (DTN) has been developed to cope with these features. In the absence of a connected link between the sender and the receiver, DTN mobile nodes replicate bundles and work cooperatively to improve the delivery probability. Malicious nodes may flood the network with a huge number of unwanted bundles (messages) or bundle replicas, which wastes the limited resources. DOS (Denial of Service) attacks, especially flooding attacks, attempt to compromise the availability of the network. Traditional congestion control strategies are not suitable for DTN, so developing new mechanisms to detect and control flooding attacks is a major challenge in DTN networks. In this paper, we present a comprehensive overview of the existing solutions for dealing with flooding attacks in delay tolerant networks, and we propose an effective controlling mechanism to mitigate this threat. The main goal of this mechanism is first to detect malicious nodes that flood the network with unwanted messages, and then to limit the damage caused by this attack. We also ran a large number of simulations with the ONE simulator to investigate how changing buffer capacity, message lifetime, message size, and message replicas affect DTN network performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_83-Effective_Controlling_Scheme_to_Mitigate_Flood_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristic Algorithm for Automatic Extraction Relational Data from Spreadsheet Hierarchical Tables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121082</link>
        <id>10.14569/IJACSA.2021.0121082</id>
        <doi>10.14569/IJACSA.2021.0121082</doi>
        <lastModDate>2021-10-31T06:54:28.1470000+00:00</lastModDate>
        
        <creator>Arwa Awad</creator>
        
        <creator>Rania Elgohary</creator>
        
        <creator>Ibrahim Moawad</creator>
        
        <creator>Mohamed Roushdy</creator>
        
        <subject>Spreadsheet table analysis; hierarchal table structure; cell classification; heuristic algorithm; relational data extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Spreadsheets contain critical information on various topics and are broadly utilized in numerous domains. There is a huge number of spreadsheet users everywhere in the world. Spreadsheets provide considerable flexibility for organizing data structures, and they give their makers an enormous level of freedom in encoding their data, as they are simple to utilize and make it easy to store data in a table format. Because of this flexibility, tables with very complex and hierarchical data structures can be generated. Such complexity makes processing these tables and reusing their data a difficult task. The growth in the volume and complexity of these tables has prompted the need to preserve this data and reuse it. As a result, this paper implements a novel heuristic algorithm and cell classification strategy to automate relational data extraction from hierarchical spreadsheet tables without the need for any programming language experience. Finally, the paper conducts experiments on two different real public datasets. The average accuracy of the proposed approach on the two datasets is 95% and 94.2% respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_82-Heuristic_Algorithm_for_Automatic_Extraction_Relational_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Asynchronous Motor Controlled by Frequency Inverter Applied to Fatigue Test System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121081</link>
        <id>10.14569/IJACSA.2021.0121081</id>
        <doi>10.14569/IJACSA.2021.0121081</doi>
        <lastModDate>2021-10-31T06:54:28.1300000+00:00</lastModDate>
        
        <creator>Nel Yuri Huaita Ccallo</creator>
        
        <creator>Omar Chamorro-Atalaya</creator>
        
        <subject>Induction; torque; frequency inverter (VDF); current; harmonic; temperature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This research focuses on analyzing the functional and operational parameters of a squirrel-cage three-phase induction motor. The experimental model consists of a fatigue test system operated by two types of control, Frequency Inverter control and classic star-delta control, where the motor load consists of a standard specimen corresponding to 61.9% of the nominal load of the object of study. Experimental evaluations of this rotary machine were carried out under regular operating conditions. Electrical, mechanical, and thermal variables were recorded in a database, where they were classified, processed, analyzed, and interpreted. The graphs highlight the quasi-constant behavior of the Cos(φ) at 0.754 across different regulated frequency values, which leads to a low current consumption of 1.88 Ampere with the inverter, with respect to the weighted 2.04 Ampere without the inverter, and even an improvement in torque from 0.71 N-m to 0.94 N-m when opting to use the drive. Likewise, operating this machine at low frequencies causes some deviations from normal operation, such as a rate of increase in the operating temperature of 78.76 &#176;C in a short time, with a projection to increase further. Similarly, the harmonic distortion injected into the network as a result of using electronic equipment contributes to the detriment of energy quality.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_81-Analysis_of_the_Asynchronous_Motor_Controlled_by_Frequency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Qualitative Evaluation Model for Software Reuse with AspectJ using AHP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121080</link>
        <id>10.14569/IJACSA.2021.0121080</id>
        <doi>10.14569/IJACSA.2021.0121080</doi>
        <lastModDate>2021-10-31T06:54:28.1000000+00:00</lastModDate>
        
        <creator>Ravi Kumar</creator>
        
        <creator>Dalip</creator>
        
        <subject>Reusability; AspectJ; software quality metrics; analytic hierarchy process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Reusability is necessary for developing advanced software. Aspect Oriented Programming (AOP) is an emerging approach that addresses the problem of scattered software modules and tangled code. The aim of this paper is to explore the AOP approach through the implementation of real-life projects in the AspectJ language and its impact on software quality in the form of reusability. In this paper, experimental results of 11 projects (Java and AspectJ) are evaluated using the proposed Quality Evaluation Model for Software Reuse (QEMSR) and the existing Aspect Oriented Software Quality Model (AOSQ). The QEMSR quality model is evaluated on developers' AOP projects using Analytic Hierarchy Process (AHP) tools. The paper provides an evaluation of software reusability and its positive impact on software quality. The QEMSR model is used to assess Aspect Oriented reusability quality issues, which helps developers adapt it for software development. The overall quality scores calculated for the three models QEMSR, the existing AOSQ, and PAOSQMO are 0.62552223, 0.5283693, and 0.505815 respectively. According to these results, the QEMSR model is best in terms of quality on the same characteristics and sub-characteristics.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_80-Performance_Analysis_of_Qualitative_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mask RCNN with RESNET50 for Dental Filling Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121079</link>
        <id>10.14569/IJACSA.2021.0121079</id>
        <doi>10.14569/IJACSA.2021.0121079</doi>
        <lastModDate>2021-10-31T06:54:28.0830000+00:00</lastModDate>
        
        <creator>S Aparna</creator>
        
        <creator>Kireet Muppavaram</creator>
        
        <creator>Chaitanya C V Ramayanam</creator>
        
        <creator>K Satya Sai Ramani</creator>
        
        <subject>Dental x-rays; deep learning; mask RCNN; RESNET50</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Teeth are very important for humans to eat food. However, teeth do get damaged for several reasons, like poor maintenance. Damaged teeth can cause severe pain and make it difficult to eat food. To safeguard the tooth from minor damage, an inert material is used to close the gap between the live part of the tooth, or sometimes even the nerve, and the enamel. However, long-term neglect can increase the damage and inevitably result in a root canal or tooth replacement. In the case of a root canal, the gap between the nerve and the enamel is filled with an inert material. To check whether the filling has been done properly, an X-ray is taken and verified. As technology develops, robots are being introduced into many fields. In the medical field, there are instances where robots perform surgery. For dental treatment, since an X-ray is taken to assess the filling, this work introduces a model to analyze the X-ray and estimate the level of filling done. The model is constructed using Mask RCNN with the ResNet50 architecture. A dataset of different kinds of fillings was collected and used to train the model. This model can be used to enable machines to perform dental operations as it works on pixel-level classification.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_79-Mask_RCNN_with_RESNET50_for_Dental_Filing_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Optimal Control of DFIG-based Wind Turbine System through Linear Quadratic Regulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121078</link>
        <id>10.14569/IJACSA.2021.0121078</id>
        <doi>10.14569/IJACSA.2021.0121078</doi>
        <lastModDate>2021-10-31T06:54:28.0530000+00:00</lastModDate>
        
        <creator>Ines Zgarni</creator>
        
        <creator>Lilia ElAmraoui</creator>
        
        <subject>Wind turbine system; doubly fed induction generator; DFIG; optimal control; linear quadratic regulator; LQR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This paper is devoted to implementing an optimal control approach applying a Linear Quadratic Regulator (LQR) to control a DFIG-based wind turbine. The main goal of the proposed LQR controller is to achieve active and reactive power and DC-link voltage control of the DFIG system in order to extract the maximum power from the wind turbine. The linearized state-space model of the studied system in the d-q rotating reference frame is established, and the overall system is controlled using an MPPT strategy. The simulation results are obtained with Sim-Power-System and Simulink of MATLAB in terms of steady-state values, peak amplitude, settling time, and rise time. Finally, the eigenvalue analysis and the simulation results are used to verify the robustness and stability of the studied system and the effectiveness of the control strategy.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_78-Design_of_Optimal_Control_of_DFIG_based_Wind_Turbine_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chatbot Design for a Healthy Life to Celiac Patients: A Study According to a New Behavior Change Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121077</link>
        <id>10.14569/IJACSA.2021.0121077</id>
        <doi>10.14569/IJACSA.2021.0121077</doi>
        <lastModDate>2021-10-31T06:54:28.0370000+00:00</lastModDate>
        
        <creator>Eythar Alghamdi</creator>
        
        <creator>Reem Alnanih</creator>
        
        <subject>Celiac disease; health behavior changes models; healthcare apps; user-centered design; experiment test; WhatsApp chatbot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Technology has become an absolute need in our daily life, keeping people busy with their smartphones all day long. In the healthcare field, mobile apps have been widely used for the treatment of many diseases. Most of these apps were designed without considering health behavior change models. Celiac disease is a significant public health problem worldwide. In Saudi Arabia, the incidence of celiac disease is 1.5%. Celiac patients have a natural demand for resources to facilitate care and research; however, they have not received much attention in the field of healthcare apps. This study introduced a new health behavior change model based on the existing common models and adapted it to the use of technology for changing the behavior of celiac patients towards healthy, suitable food. As proof of concept, the new model was applied to a WhatsApp chatbot for patients with celiac disease. To test the impact of the chatbot, 60 Saudi celiac patients participated in three steps. First, they completed a pre-test questionnaire. Then, the participants were divided into two groups: the control group, which was left without any intervention, and the test group, who used the chatbot for 90 days. Finally, all participants completed the post-test questionnaire. The results confirmed a significant statistical difference between the two groups, and the test group improved their healthy life in terms of eating habits, reduced celiac symptoms, and commitment to the treatment plan.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_77-Chatbot_Design_for_a_Healthy_Life_to_Celiac_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Eye Tracking Approach in Analyzing Social Network Site Area of Interest for Consumers’ Decision Making in Social Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121076</link>
        <id>10.14569/IJACSA.2021.0121076</id>
        <doi>10.14569/IJACSA.2021.0121076</doi>
        <lastModDate>2021-10-31T06:54:28.0230000+00:00</lastModDate>
        
        <creator>Suaini Binti Sura</creator>
        
        <creator>Nona M. Nistah</creator>
        
        <creator>Sungwon Lee</creator>
        
        <creator>Daimler Benz Alebaba</creator>
        
        <subject>Eye tracking; SNS-based commerce; seller generated content; user generated content; social commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The growing popularity of social network sites (SNS) in social commerce (s-commerce) has intensified interest in understanding consumers’ decision making based on SNS seller-generated content (SGC) and user-generated content (UGC). This study examines consumers’ decision making during online shopping by analyzing both seller- and user-generated content on SNS using an eye tracking approach. In an eye tracking experiment with 50 participants, gaze maps in terms of fixation time were collected and analyzed to measure the order in which consumers viewed the identified areas of interest (AOI), and heat maps were used to measure the intensity with which consumers looked at those AOIs. The results identify SGC as the most important AOI compared to UGC, with product image and description receiving the greatest attention from consumers when making decisions. Furthermore, based on fixation time, seller information serves as a key entry point for SNS-based commerce. The analysis shows that the order in which consumers viewed the AOIs has no significant influence on the intensity with which they looked at them. A comparison between Facebook and Instagram reveals some substantial differences in means between AOIs based on fixation time and intensity. The findings suggest several AOIs that should be addressed and emphasized by sellers and companies interested in utilizing SNS in their s-commerce strategy.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_76-Using_Eye_Tracking_Approach_in_Analyzing_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment Framework for Defining the Maturity of Information Technology within Enterprise Risk Management (ERM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121075</link>
        <id>10.14569/IJACSA.2021.0121075</id>
        <doi>10.14569/IJACSA.2021.0121075</doi>
        <lastModDate>2021-10-31T06:54:27.9900000+00:00</lastModDate>
        
        <creator>Rokhman Fauzi</creator>
        
        <creator>Muharman Lubis</creator>
        
        <subject>Risk management; assessment framework; maturity level; PDCA cycles; ISO/IEC 27005</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The process of reviewing, assessing, and improving an organization&#39;s IT risk management requires basic information summarized in a process maturity profile. In general, IT risk management standards and frameworks do not include a mechanism for assessing the maturity level of process implementation. This study develops a framework that can be applied to assess the maturity level of IT risk management under ISO/IEC 27005. A standards-based management system implementation can be represented as a model cycle of planning, implementation, validation, and action. The proposed evaluation framework consists of templates, methods, and working papers. The template covers the evaluation areas of planning, execution, validation, and action; the evaluation area details (8 domains, 35 subdomains, 82 items); and the evaluation metrics and criteria. A working paper was created to assist in conducting the evaluation. Using this evaluation framework provides a representation of the maturity level of the entire IT risk management process, based on the provisions of ISO/IEC 27005. This framework complements existing models by providing (1) a single-cycle representation of planning, establishment, validation, and execution, (2) evaluation tools, (3) more comprehensive data collection methods, and (4) a priority list of elements to be reformed and/or improved.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_75-Assessment_Framework_for_Defining_the_Maturity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Sentiment Analysis Approaches in e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121074</link>
        <id>10.14569/IJACSA.2021.0121074</id>
        <doi>10.14569/IJACSA.2021.0121074</doi>
        <lastModDate>2021-10-31T06:54:27.9770000+00:00</lastModDate>
        
        <creator>Thilageswari a/p Sinnasamy</creator>
        
        <creator>Nilam Nur Amir Sjaif</creator>
        
        <subject>Sentiment analysis; e-Commerce; feature extraction; classification; customers’ reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Sentiment analysis is the process of judging the expression of customers’ behavior and feelings as positive, negative, or neutral. A wide variety of approaches to sentiment analysis is in use, analyzing unstructured datasets of customer reviews to generate insightful and helpful information. The aim of this paper is to highlight the research designs and methodological choices of other researchers working on e-commerce customer reviews, in order to guide future development. The paper presents a study of sentiment analysis approaches, process challenges, and trends, giving researchers a review and survey of the existing literature. It then discusses the feature extraction and classification methods used in sentiment analysis of customer reviews to provide an exhaustive view of these methods. The discussion of the challenges of sentiment analysis helps to clarify future directions.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_74-A_Survey_on_Sentiment_Analysis_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of IoT-based Healthcare Heterogeneous Delay-sensitive Multi-Server Priority Queuing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121073</link>
        <id>10.14569/IJACSA.2021.0121073</id>
        <doi>10.14569/IJACSA.2021.0121073</doi>
        <lastModDate>2021-10-31T06:54:27.9430000+00:00</lastModDate>
        
        <creator>Barbara Kabwiga Asingwire</creator>
        
        <creator>Alexander Ngenzi</creator>
        
        <creator>Louis Sibomana</creator>
        
        <creator>Charles Kabiri</creator>
        
        <subject>Delay tolerant; delay sensitive; internet of things; mean slowdown; prioritized scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Previous studies have considered scheduling schemes for Internet of Things (IoT)-based healthcare systems such as First Come First Served (FCFS) and Shortest Job First (SJF). However, these scheduling schemes have limitations: large requests can starve short requests, continuously added short processes can cause process starvation and long completion times, and both schemes perform poorly under overloaded conditions. To address these challenges, this paper proposes an analytical model of a prioritized scheme that provides service differentiation both in terms of delay, with delay-sensitive packets receiving service before delay-tolerant packets, and in terms of packet size, with short packets being serviced before large packets. The numerical results obtained from the derived models show that the prioritized scheme offers better performance than the FCFS and SJF scheduling schemes for both short and large packets, except for the shortest short packets, which perform better under SJF than under the prioritized scheme in terms of the mean slowdown metric. It is also observed that the prioritized scheme performs better than FCFS and SJF for all considered large packets, and the difference in performance is more pronounced for the shortest large packets. It is further observed that reducing packet thresholds decreases the mean slowdown, and the decrease is more pronounced for short packets with larger sizes and large packets with shorter sizes.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_73-Performance_Analysis_of_IoT_based_Healthcare_Heterogeneous_Delay.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Machine Learning Algorithms for Sentiment Classification on Fake News Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121072</link>
        <id>10.14569/IJACSA.2021.0121072</id>
        <doi>10.14569/IJACSA.2021.0121072</doi>
        <lastModDate>2021-10-31T06:54:27.9130000+00:00</lastModDate>
        
        <creator>Yuzi Mahmud</creator>
        
        <creator>Noor Sakinah Shaeeali</creator>
        
        <creator>Sofianita Mutalib</creator>
        
        <subject>Data mining; fake news; sentiment classification; supervised machine learning; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>With the wide use of the World Wide Web (WWW) and social media platforms, fake news has become rampant among users, who tend to create and share news without verifying its authenticity. The dissemination of false information has become one of the most critical issues in society. Fake news therefore needs to be detected as early as possible to avoid negative influence on people who may rely on such information when making important decisions. The aim of this paper is to develop an automated sentiment classifier model that helps individuals and readers understand the sentiment of fake news immediately. The Cross-Industry Standard Process for Data Mining (CRISP-DM) process model was applied as the research methodology. A fake news detection dataset was collected from the Kaggle website and was trained, tested, and validated with cross-validation and sampling methods. A comparison of model performance was then conducted using four machine learning algorithms, namely Na&#239;ve Bayes, Logistic Regression, Support Vector Machine, and Random Forest, to investigate which algorithm is most efficient for sentiment text classification. Subsets of 1000 and 2500 instances from the fake news dataset were analyzed using 200 and 500 tokens. The results showed that Random Forest (RF) achieved the highest accuracy among the compared machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_72-Comparison_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of using Parametric and Non-parametric Machine Learning Algorithms for Covid-19 Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121071</link>
        <id>10.14569/IJACSA.2021.0121071</id>
        <doi>10.14569/IJACSA.2021.0121071</doi>
        <lastModDate>2021-10-31T06:54:27.8970000+00:00</lastModDate>
        
        <creator>Ghada E. Atteia</creator>
        
        <creator>Hanan A. Mengash</creator>
        
        <creator>Nagwan Abdel Samee</creator>
        
        <subject>Covid-19; parametric regression; non-parametric regression; linear regression; log regression; polynomial regression; generative additive regression; spline regression; k-nearest neighborhood; KNN; support vector machine; SVM; decision trees; DT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Machine learning prediction algorithms are powerful tools that can provide accurate insights into the spread and mortality of the novel Covid-19 disease. In this paper, a comparative study is presented to evaluate the use of several parametric and non-parametric machine learning methods for modeling the total number of Covid-19 cases (TC) and total deaths (TD). A number of input features from the available Covid-19 time series are investigated to select the most significant model predictors. The impact of using the number of PCR tests as a model predictor is uniquely investigated in this study. The parametric regressions, including Linear, Log, Polynomial, Generative Additive, and Spline Regression, and the non-parametric K-Nearest Neighborhood (KNN), Support Vector Machine (SVM), and Decision Tree (DT) methods are utilized for building the models. The findings show that, for the dataset used, linear regression is more accurate than the non-parametric models in predicting TC &amp; TD. It is also found that including the total number of tests in the mortality model significantly increases its prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_71-Evaluation_of_using_Parametric_and_Non_parametric_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Training Cobots from Small Amount of Data in Industry 5.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121070</link>
        <id>10.14569/IJACSA.2021.0121070</id>
        <doi>10.14569/IJACSA.2021.0121070</doi>
        <lastModDate>2021-10-31T06:54:27.8670000+00:00</lastModDate>
        
        <creator>Khalid Jabrane</creator>
        
        <creator>Mohammed Bousmah</creator>
        
        <subject>Small data; industry 5.0; common-sense capability; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Machine learning is a vital part of today&#39;s world, and its current slogan is that “big data is required for a smarter AI”: artificial intelligence learning techniques typically require training algorithms on huge amounts of data. Collecting and storing this data takes time and requires ever-increasing computer memory. In Industry 5.0, human-robot collaboration is a challenge for artificial intelligence (AI) and its subdomains, and integration of these domains is required. Many AI techniques are needed, ranging from visual processing to symbolic reasoning, from task planning to theory of mind, and from reactive control to action recognition and learning. The two main obstacles to this natural workflow interaction are the memorization of big data and a learning time that grows exponentially with problem complexity. In this article, we propose a new approach for training cobots from a small amount of data in the context of Industry 5.0, based on a common-sense capability inspired by human learning.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_70-A_New_Approach_for_Training_Cobots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Vision based Polyethylene Terephthalate (PET) Sorting for Waste Recycling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121069</link>
        <id>10.14569/IJACSA.2021.0121069</id>
        <doi>10.14569/IJACSA.2021.0121069</doi>
        <lastModDate>2021-10-31T06:54:27.8500000+00:00</lastModDate>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Shahad Alghannam</creator>
        
        <creator>Norah Alsadhan</creator>
        
        <creator>Raghad Alsumairy</creator>
        
        <creator>Reema Albelahid</creator>
        
        <creator>Monairh Almotlaq</creator>
        
        <subject>PET; recycling; computer vision; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Recycling plays a vital role in preserving the planet for future generations, as it keeps the environment clean, reduces energy consumption, and saves materials. Of special interest is plastic, which may take centuries to decompose. In particular, Polyethylene Terephthalate (PET) is a widely used plastic for packaging various products that can be recycled. Sorting PET can be performed, either manually or automatically, at recycling facilities where post-consumer objects move on a conveyor belt; automated sorting, in particular, can process a large number of PET objects without human intervention. In this paper, we propose a computer vision system for recognizing PET objects placed on a conveyor belt. Specifically, DeepLabv3+ is deployed to segment PET objects semantically. Such a system can be exploited by an autonomous robot to replace human intervention and supervision. The conducted experiments showed that the proposed system outperforms state-of-the-art semantic segmentation approaches, with a weighted IoU of 97% and a mean BFScore of 89%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_69-Computer_Vision_based_Polyethylene_Terephthalate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MultiStage Authentication to Enhance Security of Virtual Machines in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121068</link>
        <id>10.14569/IJACSA.2021.0121068</id>
        <doi>10.14569/IJACSA.2021.0121068</doi>
        <lastModDate>2021-10-31T06:54:27.8330000+00:00</lastModDate>
        
        <creator>Anitha HM</creator>
        
        <creator>P Jayarekha</creator>
        
        <subject>Authentication; multi stage authentication; one time password; finite state machine; mealy machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The adoption of cloud computing in different areas has shown benefits and provided solutions for applications. The cloud provider offers virtualized platforms through virtual machines on which cloud users store data and perform computations. Due to the distributed nature of the cloud, there are many challenges, and security is one of them. To address this challenge, a verification method is implemented to achieve a high level of security in the cloud environment. Many researchers have proposed different authentication mechanisms to safeguard virtual machines from attacks. In this paper, multi-stage authentication is proposed to counter threats from attackers targeting virtual machines. To authorize access to a virtual machine, multi-stage authentication incorporating factors such as username, email id, password, and OTP is carried out. A Mealy machine model is applied to analyze the state changes as factors are supplied at multiple stages and trust is built with each stage. Experimental results show that the system is safe, achieving data integrity and privacy. The proposed work protects against unauthorized users and provides a secure environment for cloud users accessing virtual machines.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_68-MultiStage_Authentication_to_Enhance_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Density Impulse Noise Removal from Color Images by K-means Clustering based Detection and Least Manhattan Distance-oriented Removal Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121067</link>
        <id>10.14569/IJACSA.2021.0121067</id>
        <doi>10.14569/IJACSA.2021.0121067</doi>
        <lastModDate>2021-10-31T06:54:27.8200000+00:00</lastModDate>
        
        <creator>Aritra Bandyopadhyay</creator>
        
        <creator>Kaustuv Deb</creator>
        
        <creator>Atanu Das</creator>
        
        <creator>Rajib Bag</creator>
        
        <subject>Impulse noise; color image; salt and pepper noise; random valued impulse noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Removal of impulse noise from color images is a demanding task in the field of image processing. Impulse noise is of two fundamental types: salt and pepper noise (SAPN) and random valued impulse noise (RVIN). The key challenge in removing impulse noise from color images lies in handling the randomness of the noise pattern and in processing multiple color channels efficiently. Over the years, several filters have been designed to remove impulse noise from color images, but researchers still face a serious challenge in designing a filter that is effective at high noise densities. In this study, K-means clustering-based detection followed by a minimum-distance-based removal approach is adopted for high-density impulse noise removal from color images. In the detection phase, K-means clustering is applied to combined data consisting of elements from designated 5 &#215; 5 windows of all planes of the RGB color image to segregate noisy and non-noisy elements. In the removal phase, each noisy pixel is replaced by the average of the medians of all non-noisy pixels and of the non-noisy pixels within a 7 &#215; 7 window at the least Manhattan distance from the inspected noisy pixel. The performance of the proposed method is evaluated and compared against the latest filters on the basis of well-known metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Based on these comparisons, the proposed filter is found to be superior to the compared filters in removing impulse noise at high noise densities.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_67-High_Density_Impulse_Noise_Removal_from_Color_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing Flipped Classroom Strategy in Learning Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121066</link>
        <id>10.14569/IJACSA.2021.0121066</id>
        <doi>10.14569/IJACSA.2021.0121066</doi>
        <lastModDate>2021-10-31T06:54:27.7870000+00:00</lastModDate>
        
        <creator>Rosnizam Eusoff</creator>
        
        <creator>Syahanim Mohd Salleh</creator>
        
        <creator>Abdullah Mohd Zin</creator>
        
        <subject>Flipped classroom; learning programming; cognitive load; active learning; focus group discussion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Novice students encounter many difficulties and challenges when learning to program: they face a high cognitive load and lack prior programming knowledge. Various strategies and approaches have been implemented to overcome these difficulties and challenges. The flipped classroom is an active learning strategy implemented in many subjects and courses, including programming, and consists of three phases, namely pre-class, in-class, and post-class. A focus group discussion was conducted involving 13 participants from various learning institutions. The purpose of the study is to discuss the implementation of the flipped classroom strategy in programming. The study also identifies techniques for monitoring students&#39; involvement in activities outside the classroom and suitable motivation to engage students in programming. Related research questions were constructed as guidelines for the discussion, and a deductive thematic analysis was performed on the discussion transcripts. As a result, four pre-determined codes and two new codes were generated from the analysis. This study identifies suitable activities, tools, monitoring strategies, and motivation to support the implementation of a flipped classroom in programming. Flipped classrooms hold good potential for learning programming when implemented systematically and with careful planning.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_66-Implementing_Flipped_Classroom_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pattern Language for Class Responsibility Assignment for Business Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121065</link>
        <id>10.14569/IJACSA.2021.0121065</id>
        <doi>10.14569/IJACSA.2021.0121065</doi>
        <lastModDate>2021-10-31T06:54:27.7730000+00:00</lastModDate>
        
        <creator>Soojin Park</creator>
        
        <subject>Class responsibility assignment; analysis pattern; business application; sequence diagram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Assigning class responsibility is a design decision made early in the design phase of software development, bridging requirements and the analysis model. In general, class responsibility assignment relies heavily on the expertise and experience of the developer and is often ad hoc. Class responsibility assignment rules are hard to define uniformly across the various domains of systems. Thus, existing work describes general stepwise guidelines without concrete methods, which limits the ability to derive an analysis model from a requirements specification without loss of information and to provide sufficient quality in the analysis model. This study captures the commonality and variations found in analyzing the business application domain. By narrowing the scope of the solution, the presented patterns can help identify and assign class responsibilities for a system belonging to the business application domain. The presented pattern language consists of six segmented patterns, covering 19 variations of relationship types among conceptual classes. Each sequence of a use case specification can be analyzed as the result of weaving a set of the six segmented patterns. A case study with a payroll system is presented to demonstrate the patterns&#39; feasibility, explaining how the proposed patterns can be used to develop an analysis model. The coverage of the proposed CRA patterns and the resulting enhancement of implementation code quality are discussed as benefits.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_65-A_Pattern_Language_for_Class_Responsibility_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Case Study on Social Media Analytics for Malaysia Budget</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121064</link>
        <id>10.14569/IJACSA.2021.0121064</id>
        <doi>10.14569/IJACSA.2021.0121064</doi>
        <lastModDate>2021-10-31T06:54:27.7570000+00:00</lastModDate>
        
        <creator>Ahmad Taufiq Mohamad</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <subject>Malaysia budget; twitter; social media analytics; sentiment analysis; category classification; budget corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Malaysian citizens always look forward to the budget announcement presented by the government each year. Because the budget directly affects the economy, citizens&#39; opinions are crucial for understanding what they want and whether the budget satisfies them. Social media analytics can gather netizens’ opinions on Twitter and conduct sentiment analysis. Most of the corpora in previous sentiment analysis research are English-based; however, tweets in Malaysia currently use a combination of English and Malay words. Therefore, this study uses a hybrid of a corpus-based approach and a support vector machine. The semantic corpus combines Malay and English words, and a domain-specific corpus on the Malaysia Budget, the budget corpus, is constructed. Two separate analyses are performed: category classification and sentiment analysis. Overall, most netizens have a positive sentiment about Malaysia&#39;s Budget, with 56.28% of the tweets being positive. The majority of netizens focus on social welfare and education, which have the highest numbers of tweets. The discussion highlights suggestions for improving the accuracy of this study.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_64-A_Case_Study_on_Social_Media_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Locking System using Deep Learning for Autonomous Vehicle in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121063</link>
        <id>10.14569/IJACSA.2021.0121063</id>
        <doi>10.14569/IJACSA.2021.0121063</doi>
        <lastModDate>2021-10-31T06:54:27.7400000+00:00</lastModDate>
        
        <creator>S. Zaleha. H</creator>
        
        <creator>Nora Ithnin</creator>
        
        <creator>Nur Haliza Abdul Wahab</creator>
        
        <creator>Noorhazirah Sunar</creator>
        
        <subject>Face recognition; deep learning; internet of things; convolution neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Nowadays, modern locking system applications are used to lock and unlock vehicles. The most common methods are using a key to unlock the car from outside, pressing the unlock button inside the car, and, in many vehicles, using a keyless entry remote control. However, none of these locking systems is user friendly in impaired situations, for example when the user&#39;s hands are full or the key is lost or forgotten, nor are they convenient for special cases such as disabled drivers. Hence, we propose a new way to unlock the vehicle using face recognition. Face recognition is one of the key components of future intelligent vehicle applications in the Autonomous Vehicle (AV) and is crucial for the next generation of AVs to promote user convenience. This paper proposes a locking system for AVs using a deep learning approach that adapts face recognition techniques. It aims to design and implement the face recognition procedure using an image dataset organized into training, validation, and test folders. The methodology used is a Convolutional Neural Network (CNN), programmed in Python on Google Colab. Two different folders were created to test whether the methodology is capable of recognizing different faces. Finally, after dataset training, testing was conducted, and the results show that the trained model was successfully implemented: it predicts accurate outputs and gives significant performance. The dataset consists of face images from every angle: front, right (30-45 degrees), and left (30-45 degrees).</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_63-Intelligent_Locking_System_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Novel Architecture for Cost-Effective Cloud-based Content Delivery Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121062</link>
        <id>10.14569/IJACSA.2021.0121062</id>
        <doi>10.14569/IJACSA.2021.0121062</doi>
        <lastModDate>2021-10-31T06:54:27.7100000+00:00</lastModDate>
        
        <creator>Suman Jayakumar</creator>
        
        <creator>Prakash S</creator>
        
        <creator>C. B Akki</creator>
        
        <subject>Content delivery network; content placement; cloud; optimization; data delivery; cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>A Content Delivery Network (CDN) offers faster transmission of massive content from content providers to users, using geographically distributed servers to provide seamless relay of service. However, a conventional CDN cannot cater to the larger scope of demand for data delivery, and hence the cloud-based CDN has evolved as a solution. In a real-world scenario, each requested content item has different popularity for different users. The problem lies in deciding which content objects should be placed on each content server to minimize delivery delays and storage costs. A review of existing approaches in cloud-based CDN shows that the content placement problem remains unsolved. A precise strategy is therefore required to select the content objects to be placed on a content server to achieve higher efficiency without affecting CCDN performance. The proposed system introduces a novel architecture that addresses this practical content placement problem. The study treats placement as an optimization problem with the ultimate purpose of maximizing the user content requests served and reducing the overall cost of content and data delivery. With the inclusion of a bucket-based concept for the cache proxy and content provider, a novel topology is constructed in which an optimal content placement algorithm is implemented using matrix row-reduction and column-reduction operations. Simulation outcomes show that the proposed system performs better than the existing content placement strategy for cloud-based CDN.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_62-Design_of_a_Novel_Architecture_for_Cost_Effective_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Borneo Wildlife Game Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121061</link>
        <id>10.14569/IJACSA.2021.0121061</id>
        <doi>10.14569/IJACSA.2021.0121061</doi>
        <lastModDate>2021-10-31T06:54:27.6930000+00:00</lastModDate>
        
        <creator>Ramadiani Ramadiani</creator>
        
        <creator>Erdinal Respatti</creator>
        
        <creator>Gubta Mahendra Putra</creator>
        
        <creator>Muhammad Labib Jundillah</creator>
        
        <creator>Tamrin Rahman</creator>
        
        <creator>Muhammad Dahlan Balfas</creator>
        
        <creator>Arda Yunianta</creator>
        
        <creator>Hasan Jamal Alyamani</creator>
        
        <subject>Game development; Kalimantan; Borneo; wildlife game</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Games are a unique, interesting, and fun entertainment medium. They can provide education, introduce particular flora and fauna, depict work and daily life, and develop intelligence and dexterity. The game built in this study aims to introduce the flora and fauna found in the forests of East Borneo (Kalimantan), Indonesia, as the subject of a platformer game. The game was built using the Game Development Life Cycle (GDLC) method in order to keep development good and organized. The GDLC method contains six stages: first, initiation for the initial idea; second, preproduction for asset creation; third, production for system creation; fourth, testing for internal trials; fifth, beta for external trials; and sixth, release for publication. The study produced the Borneo Wildlife platformer game. This game introduces the unique flora and fauna of East Borneo, Indonesia, such as Black Orchids, Ironwood trees, Proboscis monkeys, Mahakam dolphins, and Hornbills, as well as how to protect and preserve their habitat. The game received 46 downloads from March 1, 2021 to May 24, 2021.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_61-The_Development_of_Borneo_Wildlife_Game_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skin Lesions Classification and Segmentation: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121060</link>
        <id>10.14569/IJACSA.2021.0121060</id>
        <doi>10.14569/IJACSA.2021.0121060</doi>
        <lastModDate>2021-10-31T06:54:27.6630000+00:00</lastModDate>
        
        <creator>Marzuraikah Mohd Stofa</creator>
        
        <creator>Mohd Asyraf Zulkifley</creator>
        
        <creator>Muhammad Ammirrul Atiqi Mohd Zainuri</creator>
        
        <subject>Lesion segmentation; lesion classification; machine learning; deep learning; skin lesions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>An automated intelligent system based on imaging input for unbiased diagnosis of skin-related diseases is an essential screening tool nowadays. This is because visual and manual analysis of skin lesion conditions based on images is a time-consuming process that puts a significant workload on health practitioners. Various machine learning and deep learning techniques have been researched to reduce and alleviate these workloads. In several early studies, standard machine learning techniques were the more popular approach, in contrast to recent studies that rely more on deep learning. Although the recent deep learning approach, mainly based on convolutional neural networks, has shown impressive results, some challenges remain open due to the complexity of skin lesions. This paper presents a wide range of analyses that cover the classification and segmentation phases of skin lesion detection using deep learning techniques. The review starts with the classification techniques used for skin lesion detection, followed by a concise review of lesion segmentation, also using deep learning techniques. Finally, this paper examines and analyzes the performance of state-of-the-art methods that have been evaluated on various skin lesion datasets, using performance measures based on accuracy, mean specificity, mean sensitivity, and area under the curve for 12 different Convolutional Neural Network based classification models.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_60-Skin_Lesions_Classification_and_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>P Systems Implementation: A Model of Computing for Biological Mitochondrial Rules using Object Oriented Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121059</link>
        <id>10.14569/IJACSA.2021.0121059</id>
        <doi>10.14569/IJACSA.2021.0121059</doi>
        <lastModDate>2021-10-31T06:54:27.6470000+00:00</lastModDate>
        
        <creator>Mohammed M. Nasef</creator>
        
        <creator>Bishoy El-Aarag</creator>
        
        <creator>Amal Hashim</creator>
        
        <creator>Passent M. El Kafrawy</creator>
        
        <subject>Computational biology; P systems; membranes fusion – fission; mitochondria; Mutual Dynamic Membranes (MDM); NP- complete problems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Membrane computing is a computational framework inspired by the behavior and structure of living cells. P systems arise from the biological processes that occur in living cells’ organelles in a non-deterministic and maximally parallel manner. This paper aims to build a powerful computational model that combines the rules of active and mobile membranes, called Mutual Dynamic Membranes (MDM). The proposed model describes the biological mechanisms of the metabolic regulation of mitochondrial dynamics performed by mitochondrial membranes. The behaviors of the proposed model regulate the mitochondrial fusion and fission processes based on a combination of P system variants. The combination of different variants in our computational model and their high parallelism make it possible to solve problems in the NP-complete class in polynomial time, more efficiently than other conventional methods. To evaluate the model, it was applied to solve the SAT problem, and a set of computational complexity results was calculated that confirms the quality of our model. As another contribution of this paper, the biological models of mitochondrial dynamics are presented as formal class relationship diagrams designed and illustrated using the Unified Modeling Language (UML). This mechanism will be used to define a new specification of membrane processes in Object-Oriented Programming (OOP), adding the functionality of a common programming methodology to solve a large category of NP-hard problems as an interesting initiative for future research.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_59-P_Systems_Implementation_A_Model_of_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SMAD: Text Classification of Arabic Social Media Dataset for News Sources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121058</link>
        <id>10.14569/IJACSA.2021.0121058</id>
        <doi>10.14569/IJACSA.2021.0121058</doi>
        <lastModDate>2021-10-31T06:54:27.6170000+00:00</lastModDate>
        
        <creator>Amira M. Gaber</creator>
        
        <creator>Mohamed Nour El-din</creator>
        
        <creator>Hanan Moussa</creator>
        
        <subject>Text classification; Arabic text classification; Arabic Natural Language Processing (ANLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Due to advances in technology, social media has become the most popular means for the propagation of news. Many news items are published on social media such as Facebook, Twitter, and Instagram, but are not categorized into different domains, such as politics, education, finance, art, sports, and health. Text classification is therefore needed to classify the news into domains, reducing the huge amount of news available over social media, reducing the time and effort required to recognize a category or domain, and organizing data to improve the searching process. Most existing datasets don’t follow pre-processing and filtering processes and aren’t organized according to classification standards so as to be ready for use. Thus, Arabic Natural Language Processing (ANLP) phases are used to pre-process, normalize, and categorize the news into the right domain. This paper proposes an Arabic Social Media Dataset (SMAD) for text classification over social media using ANLP steps. The SMAD dataset consists of 15,240 Arabic news items collected from the Facebook social network. The experimental results illustrate that the SMAD corpus gives an accuracy of about 98% in five domains (Art, Education, Health, Politics, and Sport). The SMAD dataset has been trained and tested and is ready for use.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_58-SMAD_Text_Classification_of_Arabic_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Line Correlative Spectral Processing for Stratification of Blood Pressure using Adaptive Signal Conditioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121057</link>
        <id>10.14569/IJACSA.2021.0121057</id>
        <doi>10.14569/IJACSA.2021.0121057</doi>
        <lastModDate>2021-10-31T06:54:27.6170000+00:00</lastModDate>
        
        <creator>Santosh Shinde</creator>
        
        <creator>Pothuraju RajaRajeswari</creator>
        
        <subject>Stratification of blood pressure; discrete wavelet transform; spectral coding; and selective correlative approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Stratification of blood pressure is an essential input in the detection and prediction of most cardiovascular diseases and is also a great aid to medical practitioners in dealing with hypertension. Denoising based on spectral coding is developed from frequency spectral decomposition and a spectral correlative approach based on the wavelet transform. Existing approaches use the standard deviation and mean of peak correlation in signal conditioning, with artifact filtration based on thresholding. Filtration of coefficients has an impact on estimation accuracy, so proper signal conditioning is a primary need. Whereas the threshold is conventionally measured with discrete monitoring, time-line observation could improve filtration efficiency under varying interference conditions. Dynamic interference from the capturing or processing source results in jitter-type noise: short-period deviations with varying frequency components. Hence, a time-frequency analysis is adopted for filtration. This paper presents a spectral correlation approach to signal conditioning for the stratification of blood pressure under cuffless monitoring. The presented approach operates on the spectral distribution of finer-resolution bands of the monitored signal for denoising and decision making. Existing approaches lack the capability of lossless denoising, which is efficiently worked out in this paper.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_57-Time_Line_Correlative_Spectral_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Pick to Place Objects using Self-supervised Learning with Minimal Training Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121056</link>
        <id>10.14569/IJACSA.2021.0121056</id>
        <doi>10.14569/IJACSA.2021.0121056</doi>
        <lastModDate>2021-10-31T06:54:27.5830000+00:00</lastModDate>
        
        <creator>Marwan Qaid Mohammed</creator>
        
        <creator>Lee Chung Kwek</creator>
        
        <creator>Shing Chyi Chua</creator>
        
        <subject>Self-supervised; pick-to-place; robotics; deep q-network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Grasping objects is a critical but challenging aspect of robotic manipulation. Recent studies have concentrated on complex architectures and large, well-labeled datasets that need extensive computing resources and time to achieve generalization capability. This paper proposes an effective grasp-to-place strategy for manipulating objects in sparse and chaotic environments. A deep Q-network, a model-free deep reinforcement learning method for robotic grasping, is employed in this paper. The proposed approach is remarkable in that it executes both fundamental object pickup and placement actions by utilizing raw RGB-D images through an explicit architecture. Therefore, it needs fewer computing processes, takes less time to complete simulation training, and generalizes effectively across different object types and scenarios. Our approach learns policies that discover the optimal grasp point through trial and error. A fully convolutional network is utilized to map the visual input into pixel-wise Q-values, a motion-agnostic representation that reflects the grasp&#39;s orientation and pose. In a simulation experiment, a UR5 robotic arm equipped with a parallel-jaw gripper is used to assess the proposed approach and demonstrate its effectiveness. The experimental outcomes indicate that our approach successfully grasps objects while consuming minimal time and computing resources.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_56-Learning_Pick_to_Place_Objects_using_Self_supervised_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecast Breast Cancer Cells from Microscopic Biopsy Images using Big Transfer (BiT): A Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121054</link>
        <id>10.14569/IJACSA.2021.0121054</id>
        <doi>10.14569/IJACSA.2021.0121054</doi>
        <lastModDate>2021-10-31T06:54:27.5530000+00:00</lastModDate>
        
        <creator>Md. Ashiqul Islam</creator>
        
        <creator>Dhonita Tripura</creator>
        
        <creator>Mithun Dutta</creator>
        
        <creator>Md. Nymur Rahman Shuvo</creator>
        
        <creator>Wasik Ahmmed Fahim</creator>
        
        <creator>Puza Rani Sarkar</creator>
        
        <creator>Tania Khatun</creator>
        
        <subject>Convolutional neural network (CNN); breast cancer; Big Transfer (BiT); densenet-201; NasNet-Large; Inception-Resnet-v3; mammography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Nowadays, breast cancer is a most crucial health problem among men and women, and a massive number of people all over the world are affected by it. An early diagnosis with proper treatment can help save lives. Recently, computer-aided diagnosis has become more popular in medical science, including in cancer cell identification. Deep learning models receive considerable attention because of their performance in identifying cancer cells. Mammography is a significant modality for detecting breast cancer; however, due to its complex structure, it is challenging for doctors to interpret. This study provides a convolutional neural network (CNN) approach to detecting cancer cells early. Dividing mammography images into benign and malignant classes can significantly improve detection and accuracy levels. The BreakHis 400X dataset is collected from Kaggle, and the DenseNet-201, NasNet-Large, Inception-ResNet-v3, and Big Transfer (M-r101x1x1) architectures show impressive performance. Among them, M-r101x1x1 provides the highest accuracy of 90%. The main priority of this research work is to classify breast cancer with the highest accuracy using the selected neural networks. This study can improve the systematic early-stage detection of breast cancer and help physicians&#39; decision-making.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_54-Forecast_Breast_Cancer_Cells_from_Microscopic_Biopsy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application with Augmented Reality to Improve Learning in Science and Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121055</link>
        <id>10.14569/IJACSA.2021.0121055</id>
        <doi>10.14569/IJACSA.2021.0121055</doi>
        <lastModDate>2021-10-31T06:54:27.5530000+00:00</lastModDate>
        
        <creator>Miriam Gamboa-Ramos</creator>
        
        <creator>Ricardo G&#243;mez-Noa</creator>
        
        <creator>Orlando Iparraguirre-Villanueva</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Jos&#233; Luis Herrera Salazar</creator>
        
        <subject>Augmented reality; learning; mobile application; Mobile D methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Education has taken a big turn due to the current health situation, and as a result technology has become a great ally of education, achieving important benefits. Augmented reality is being used by teachers and students, especially in distance and/or face-to-face learning, through didactic learning, self-instruction, and the promotion of research. This article shows the development and influence of a mobile application with augmented reality that serves as reinforcement for learning Science and Technology among students in the sixth grade of Primary and the first year of Secondary School. The Mobile D methodology is used during the development process of the application. The research design is pre-experimental, since Pre-Test and Post-Test assessments are performed on a single group of 30 students. The final results show that the students&#39; level of interest increased to 100%, the level of understanding improved by 50%, and the level of satisfaction stood at 40% satisfied and 60% very satisfied, which implies that the application helps them improve their learning.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_55-Mobile_Application_with_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Deep Learning-based Human Detection using Dynamic Thresholding for Intelligent Surveillance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121053</link>
        <id>10.14569/IJACSA.2021.0121053</id>
        <doi>10.14569/IJACSA.2021.0121053</doi>
        <lastModDate>2021-10-31T06:54:27.5370000+00:00</lastModDate>
        
        <creator>Wahyono </creator>
        
        <creator>Moh. Edi Wibowo</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Muhammad Pajar Kharisma Putra</creator>
        
        <subject>Human detection; YOLO; dynamic thresholding; intelligent surveillance system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Human detection plays an important role in many applications of the intelligent surveillance system (ISS), such as person re-identification, human tracking, and people counting. The use of deep learning in human detection has provided excellent accuracy. Unfortunately, a deep learning method is sometimes unable to detect objects that are too far from the camera, because the threshold for the confidence value is statically determined at the decision stage. This paper proposes a new strategy that uses dynamic thresholding based on the geometry of the images. The proposed method is evaluated on a dataset we created. The experiments found that dynamic thresholding provides an increase in F-measure of 0.11 while reducing false positives by 0.18. This shows that the proposed strategy effectively detects human objects when applied to the ISS.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_53-Improvement_of_Deep_Learning_based_Human_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level Transducer Circuit Implemented by Ultrasonic Sensor and Controlled with Arduino Nano for its Application in a Water Tank of a Fire System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121052</link>
        <id>10.14569/IJACSA.2021.0121052</id>
        <doi>10.14569/IJACSA.2021.0121052</doi>
        <lastModDate>2021-10-31T06:54:27.4770000+00:00</lastModDate>
        
        <creator>Omar Chamorro-Atalaya</creator>
        
        <creator>Dora Arce-Santillan</creator>
        
        <creator>Guillermo Morales-Romero</creator>
        
        <creator>Adri&#225;n Quispe-And&#237;a</creator>
        
        <creator>Nic&#233;foro Trinidad-Loli</creator>
        
        <creator>Elizabeth Auqui-Ramos</creator>
        
        <creator>C&#233;sar Le&#243;n-Velarde</creator>
        
        <creator>Edith Guti&#233;rrez-Zubieta</creator>
        
        <subject>Level transducer; ultrasonic sensor; Arduino Nano; control; pulse width</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This article describes the design of a level transducer circuit implemented with an ultrasonic sensor and controlled by an Arduino Nano, applied to the water tank of a firefighting system. Initially, the integration of the Siemens 1212C programmable logic controller is described, in the connection between the sensor, the controller, and the interfaces that enable monitoring, control, and data recording, conditioned by a pulse-width-modulated (PWM) signal controlled by the Arduino Nano. An analysis of the linear regression model establishes that the behavior of the controlled variable with respect to time generates a linear voltage response in the range of 0 to 10 volts, with a correlation factor R2 of 0.997, thus establishing that the designed transducer is not susceptible to noise or disturbances during start-up of the firefighting system.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_52-Level_Transducer_Circuit_Implemented_by_Ultrasonic_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating Time Series Forecasting on Crime Data using RNN-LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121051</link>
        <id>10.14569/IJACSA.2021.0121051</id>
        <doi>10.14569/IJACSA.2021.0121051</doi>
        <lastModDate>2021-10-31T06:54:27.4600000+00:00</lastModDate>
        
        <creator>J Vimala Devi</creator>
        
        <creator>K S Kavitha</creator>
        
        <subject>Time series analysis; deep learning; RNN; forecasting; crime data; predictive policing; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Criminal activities, whether violent or non-violent, are major threats to the safety and security of people. Frequent crimes are an extreme hindrance to the sustainable development of a nation and thus need to be controlled. Police personnel often seek computational solutions and tools to anticipate impending crimes and perform crime analytics. Both developed and developing countries have been experimenting with predictive policing in recent times. With the advent of advanced machine learning and deep learning algorithms, time series analysis and forecasting on crime datasets has become feasible. Time series analysis is preferred for this dataset because crime events are recorded with time as a significant component. The objective of this paper is to mechanize and automate time series forecasting using a pure deep learning model. N-Beats recurrent neural networks (RNN) are proven ensemble models for time series forecasting. Herein, we forecast future trends with better accuracy by building a model using the N-Beats algorithm on the Sacramento crime dataset. This study applied detailed data pre-processing steps, presented an extensive set of visualizations, and involved hyperparameter tuning. The current study has been compared with other similar works and proved to be a better forecasting model, differing from other research studies in its data visualization and enhanced accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_51-Automating_Time_Series_Forecasting_on_Crime_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Image Processing in Liver Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121050</link>
        <id>10.14569/IJACSA.2021.0121050</id>
        <doi>10.14569/IJACSA.2021.0121050</doi>
        <lastModDate>2021-10-31T06:54:27.4300000+00:00</lastModDate>
        
        <creator>Meenu Sharma</creator>
        
        <creator>Rafat Parveen</creator>
        
        <subject>Liver cancer; digital image processing; magnetic resonance imaging; early stage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Hepatic cancer is caused by the uncontrolled growth of liver cells; hepatocellular carcinoma (HCC) is the most common form of malignant liver cancer, accounting for 75 percent of cases. This tumor is difficult to diagnose and is often discovered at an advanced stage, posing a life-threatening danger. As a result, early diagnosis of liver cancer increases life expectancy. We therefore propose an automated computer-aided diagnosis of liver tumors from Magnetic Resonance Imaging (MRI) images using a digital image processing method. The image goes through preprocessing, segmentation, and feature extraction, all performed within the layers of an Artificial Neural Network, making it an automated operation. To make the edge continuous, this operation combines two processes: edge detection and manual labeling. On the basis of statistical characteristics, tumors are divided into four categories: cyst, adenoma, hemangioma, and malignant liver tumor. The aim of the proposed technique is to automatically highlight and categorize tumor regions in MRI images without the need for a medical practitioner.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_50-The_Application_of_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Impacting Users’ Compliance with Information Security Policies: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121049</link>
        <id>10.14569/IJACSA.2021.0121049</id>
        <doi>10.14569/IJACSA.2021.0121049</doi>
        <lastModDate>2021-10-31T06:54:27.4130000+00:00</lastModDate>
        
        <creator>Latifa Alzahrani</creator>
        
        <subject>Information security; users’ compliance; compliance factors; security education systems; information security policies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>One of the main concerns for organizations in today&#39;s connected world is how employees follow the information security policy (ISP), as internal employees have been identified as the weakest link in breaches of security policies. Several studies have examined ISP compliance from a deterrence perspective; however, the results were mixed. This empirical study analyses the impact of organisational security factors and individual non-compliance on users’ intentions toward information security policies. A research model and hypotheses were developed in this quantitative study. Data from 352 participants were collected through a questionnaire, which then validated the measurement model. The findings revealed that while security system anxiety and non-compliant peer behaviours negatively impact users’ compliance intentions, work impediments positively influence these intentions. Security visibility negatively influences users’ non-compliance, and security education systems positively impact work impediments. This research will help information security managers address the problem of information security compliance because it provides them with an understanding of the factors underlying employee compliance behaviors.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_49-Factors_Impacting_Users_Compliance_with_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multistage Sentiment Classification Model using Malaysia Political Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121048</link>
        <id>10.14569/IJACSA.2021.0121048</id>
        <doi>10.14569/IJACSA.2021.0121048</doi>
        <lastModDate>2021-10-31T06:54:27.3800000+00:00</lastModDate>
        
        <creator>Nur Farhana Ismail</creator>
        
        <creator>Nur Atiqah Sia Abdullah</creator>
        
        <creator>Zainura Idrus</creator>
        
        <subject>Malay corpus; political ontology; sentiment analysis; sentiment classification; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Nowadays, people use social media platforms such as Facebook, Twitter, and Instagram to share their opinions on particular entities or services. Sentiment analysis can determine the polarity of these opinions, especially in the political domain. However, in Malaysia, current sentiment analysis can be inaccurate when netizens use combinations of Malay words in their comments, owing to the insufficient Malay corpus and sentiment analysis tools. Therefore, this study aims to construct a multistage sentiment classification model based on a Malaysia Political Ontology and a Malay Political Corpus. Reviews are carried out on sentiment analysis, classification techniques, Malay sentiment analysis, and sentiment analysis on politics. The process starts with data preparation for Malay tweets to produce tokenized Malay words, followed by corpus construction using corpus filtering, web search, and filtering with linguistic patterns, before enhancement with political lexicons. The process continues with classifier construction, starting from a generic ontology with Malaysia&#39;s political context. Lastly, twelve features are identified, and the extracted features are tested using different classifiers. As a result, a Linear Support Vector Machine yields an accuracy of 86.4% for the classification, demonstrating that the multistage sentiment classification model improves Malay tweet classification in the political domain.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_48-Multistage_Sentiment_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing Randomization of Ciphertext in DNA Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121047</link>
        <id>10.14569/IJACSA.2021.0121047</id>
        <doi>10.14569/IJACSA.2021.0121047</doi>
        <lastModDate>2021-10-31T06:54:27.3500000+00:00</lastModDate>
        
        <creator>Maria Imdad</creator>
        
        <creator>Sofia Najwa Ramli</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <subject>DNA cryptography; avalanche effect; frequency test; entropy; hamming weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Deoxyribonucleic acid (DNA) cryptography is an emerging area in hiding messages, where DNA bases are used to encode binary data to enhance the randomness of the ciphertext. However, an extensive study of existing algorithms indicates that the encoded ciphertext has a low avalanche effect, falling short of the desirable confusion property of an encryption algorithm. This property is crucial for randomizing the relationship between the plaintext and the ciphertext. Therefore, this research aims to reassess the security of existing DNA cryptography by modifying the steps in the DNA encryption technique and utilizing an existing DNA encoding/decoding table at a selected step in the algorithm to enhance the overall security of the cipher. The modified and baseline DNA cryptography techniques are evaluated for frequency analysis, entropy, avalanche effect, and Hamming weight using 100 different plaintexts with high-density, low-density, and random input data, respectively. The results show good performance on frequency analysis, entropy, avalanche effect, and Hamming weight. This work shows that the ciphertext generated from the modified model yields better randomization and can be adapted to transmit sensitive information.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_47-Increasing_Randomization_of_Ciphertext_in_DNA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indonesia Sign Language Recognition using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121046</link>
        <id>10.14569/IJACSA.2021.0121046</id>
        <doi>10.14569/IJACSA.2021.0121046</doi>
        <lastModDate>2021-10-31T06:54:27.3330000+00:00</lastModDate>
        
        <creator>Suci Dwijayanti</creator>
        
        <creator>Hermawati</creator>
        
        <creator>Sahirah Inas Taqiyyah</creator>
        
        <creator>Hera Hikmarika</creator>
        
        <creator>Bhakti Yudho Suprapto</creator>
        
        <subject>Indonesia sign language (BISINDO); recognition; CNN; lighting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>In daily life, the deaf use sign language to communicate with others. However, the non-deaf experience difficulties in understanding this communication. To overcome this, sign recognition via human-machine interaction can be utilized. In Indonesia, the deaf use a specific language, referred to as Indonesia Sign Language (BISINDO). However, only a few studies have examined this language. Thus, this study proposes a deep learning approach, namely a new convolutional neural network (CNN), to recognize BISINDO. There are 26 letters and 10 numbers to be recognized. A total of 39,455 data points were obtained from 10 respondents, considering the lighting and the perspective of the person: specifically, bright and dim lighting, and first- and second-person perspectives. The architecture of the proposed network consists of four convolutional layers, three pooling layers, and three fully connected layers. The model was tested against two common CNN models, AlexNet and VGG-16. The results indicated that the proposed network is superior to a modified VGG-16, with a loss of 0.0201. The proposed network also has a smaller number of parameters than a modified AlexNet, thereby reducing computation time. Further, the model was evaluated on testing data with an accuracy of 98.3%, precision of 98.3%, recall of 98.4%, and F1-score of 99.3%. The proposed model can recognize BISINDO in both dim and bright lighting, as well as signs from first- and second-person perspectives.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_46-Indonesia_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How to Analyze Air Quality During the COVID-19 Pandemic? An Answer using Grey Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121045</link>
        <id>10.14569/IJACSA.2021.0121045</id>
        <doi>10.14569/IJACSA.2021.0121045</doi>
        <lastModDate>2021-10-31T06:54:27.3200000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Denilson Pongo</creator>
        
        <creator>Katherine Felipa</creator>
        
        <creator>Kiara Saavedra</creator>
        
        <creator>Lorena Torres</creator>
        
        <creator>Ch. Carbajal</creator>
        
        <subject>Air quality; COVID 19; grey systems; grey clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The Peruvian government declared a State of National Emergency due to the spread of COVID-19, imposing business closures and home isolation from 03/15/2020 to 06/30/2020. In this context, the research focused on analyzing the characteristics of air quality in Lima during this period compared with the same periods in 2018 and 2019. For this purpose, data from two air quality monitoring stations on PM2.5, PM10, CO, and NO2 concentrations, together with the quality levels given by the Air Quality Index (INCA), were processed with the Grey Clustering method, which is based on grey systems. The results showed that during the quarantine, air quality improved significantly, particularly in the northern area of Lima, which was favored by meteorological conditions and classified as good quality, with reductions of 46% in PM10, 45% in PM2.5, and, to a lesser extent, 17% in NO2 and 11% in CO. The southern zone, although it showed an improvement, is still classified as moderate quality, with reductions of 26% in PM10, 27% in PM2.5, and 19% in CO; however, the NO2 concentration registered a non-significant increase of 2%. This behaviour is explained by the lower height of the thermal inversion layer and therefore less space for the dispersion of pollutants. Finally, the study provides essential information for regulatory agencies, as it allows understanding the relationship between air quality and control measures at emission sources for the development of public policies on public health and the environment.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_45-How_to_Analyze_Air_Quality_During_the_Covid_19_Pandemic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reverse Vending Machine Item Verification Module using Classification and Detection Model of CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121044</link>
        <id>10.14569/IJACSA.2021.0121044</id>
        <doi>10.14569/IJACSA.2021.0121044</doi>
        <lastModDate>2021-10-31T06:54:27.3030000+00:00</lastModDate>
        
        <creator>Razali Tomari</creator>
        
        <creator>Nur Syahirah Razali</creator>
        
        <creator>Nurul Farhana Santosa</creator>
        
        <creator>Aeslina Abdul Kadir</creator>
        
        <creator>Mohd Fahrul Hassan</creator>
        
        <subject>Convolutional neural network (CNN); classification; detection; reverse vending machine (RVM); You Only Look Once (YOLO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>A reverse vending machine (RVM) is an interactive platform that can boost recycling activities by rewarding users who return recyclable items to the machine. To accomplish this, the RVM should be outfitted with a material identification module to recognize different sorts of recyclable materials so that users can be rewarded accordingly. Since utilizing a combination of sensors for such a task is tedious, a vision-based detection framework is proposed to identify three types of recyclable material: aluminum cans, PET bottles, and tetra-pak. Initially, a self-collected set of 5,898 samples was fed into the classification and detection frameworks, divided into a ratio of 85:15 between training and validation samples. For the classification model, three pre-trained models (AlexNet, VGG16, and ResNet50) were used, while for the detection model the YOLOv5 architecture was employed. The dataset was gathered by capturing pictures of recyclable material from various angles, with data augmentation by flipping and rotating the pictures. A series of thorough hyperparameter tuning experiments was conducted to determine an optimal structure able to produce high accuracy. From the experiments it can be concluded that the detection model shows a more promising outcome than the classification model for accomplishing the recyclable item verification task of the RVM.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_44-Reverse_Vending_Machine_Item_Verification_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complex Plane based Realistic Sound Generation for Free Movement in Virtual Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121043</link>
        <id>10.14569/IJACSA.2021.0121043</id>
        <doi>10.14569/IJACSA.2021.0121043</doi>
        <lastModDate>2021-10-31T06:54:27.2870000+00:00</lastModDate>
        
        <creator>Kwangki Kim</creator>
        
        <subject>Virtual reality; realistic sound; binaural rendering; constant power panning; head related transfer function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Binaural rendering is a technology that generates realistic sound for a user with stereo headphones, so it is essential for stereo-headphone-based virtual reality (VR) services. However, binaural rendering has a problem in that it cannot reflect the user&#39;s free movement in VR. Because the VR sound does not match the visual scene when the user moves freely in the VR space, the performance of the VR may be degraded. To reduce this mismatch, a complex plane based stereo realistic sound generation method is proposed to allow the user’s free movement in VR, which changes the distance and azimuth between the user and the speakers. To calculate the distance and azimuth between the user and each speaker as the user’s position changes, the 5.1 multichannel speaker playback system and the user are placed in the complex plane. The distance and azimuth between the user and a speaker can then be simply calculated as the distance and angle between two points in the complex plane. The 5.1 multichannel audio signals are scaled by the five estimated distances according to the inverse square law, and the scaled multichannel audio signals are mapped to a newly generated virtual 5.1 multichannel speaker layout using the five measured azimuths and the azimuth from the head movement. Finally, we obtain stereo realistic sound that reflects the user’s position change and head movement through binaural rendering using the scaled and mapped 5.1 multichannel audio signals and the HRTF coefficients. Experimental results show that the proposed method can generate realistic audio sound reflecting the user’s position and azimuth changes in VR with an error rate of less than about 5%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_43-Complex_Plane_based_Realistic_Sound_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Learning Apps to Promote Critical Thinking in Programming Students using Fuzzy TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121042</link>
        <id>10.14569/IJACSA.2021.0121042</id>
        <doi>10.14569/IJACSA.2021.0121042</doi>
        <lastModDate>2021-10-31T06:54:27.2570000+00:00</lastModDate>
        
        <creator>Kesarie Singh</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Mogiveny Rajkoomar</creator>
        
        <subject>Critical thinking; visual programming environment; multi-criteria decision-making; fuzzy TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The aim of this research was to use intelligent decision support systems to obtain student-centred preferences for learning applications that promote critical thinking in first-year programming students. This study focuses on the visual programming environment and critical thinking as the gateway skill for student success in understanding programming. Twenty-five critical thinking criteria were synthesized from the literature. As a quantitative study, 217 randomly selected students from an approximate target population of 500 programming students were asked to rate four learning Apps, namely Scratch, Alice, Blockly, and MIT App Inventor, against the critical thinking criteria to establish the App that best promotes critical thinking among first-year programming students. There were 175 responses received from the 217 randomly chosen programming students who willingly contributed to the study. The distinctiveness of this paper lies in its use of the Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-criteria decision-making algorithm to rank the criteria for critical thinking, calculate their weights on the basis of informed opinion, and hence scientifically deduce the best-rated App among the available alternatives that promote critical thinking among first-year programming students. The results showed that Scratch promoted critical thinking skills the best in first-year programming students, whilst Blockly promoted them the least. As a contribution of the study, policy-makers and academic staff can be supported to make informed decisions about the types of learning Apps to consider for students when confronted with multiple selection criteria.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_42-Selection_of_Learning_Apps_to_Promote_Critical_Thinking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Collaborative Management System for Effective Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121041</link>
        <id>10.14569/IJACSA.2021.0121041</id>
        <doi>10.14569/IJACSA.2021.0121041</doi>
        <lastModDate>2021-10-31T06:54:27.2400000+00:00</lastModDate>
        
        <creator>Tochukwu A. Ikwunne</creator>
        
        <creator>Wilfred Adigwe</creator>
        
        <creator>Christopher C. Nnamene</creator>
        
        <creator>Noah Oghenefego Ogwara</creator>
        
        <creator>Henry A. Okemiri</creator>
        
        <creator>Chinedu E. Emenike</creator>
        
        <subject>Collaborative learning; conventional education; effective learning; e-portfolio; interactive board</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Recently, the need for online collaborative learning in educational systems has increased greatly because of the COVID-19 pandemic. The pandemic has provided an opportunity for introducing online collaboration and learning among instructors and students in Nigeria. Currently, several schools, colleges, and universities in Nigeria have discontinued face-to-face teaching and learning. Many schools resorted to ineffective alternatives such as television and radio programmes to carry out distance education (DE). These alternatives have challenges such as a lack of monitoring and evaluation of students&#39; learning. The Collaborative Learning Management System (CLMS) is a research project that aims to assist instructors in achieving their pedagogical goals, organizing course content, collaborating, monitoring, and supporting students&#39; online learning. It is an interactive, online-based as well as Android-based system that has been designed, implemented, and tested. The system demonstrates that it is robust, interactive, and achieves the predefined goals. It was created using the Rapid Application Development (RAD) methodology as the software development approach. It also provides a secure and reliable platform for schools, colleges, and universities to implement an online learning system.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_41-Design_and_Implementation_of_Collaborative_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balanced and Energy Aware Cloud Resource Scheduling Design for Executing Data-intensive Application in SDVC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121040</link>
        <id>10.14569/IJACSA.2021.0121040</id>
        <doi>10.14569/IJACSA.2021.0121040</doi>
        <lastModDate>2021-10-31T06:54:27.2100000+00:00</lastModDate>
        
        <creator>Shalini. S</creator>
        
        <creator>Annapurna P Patil</creator>
        
        <subject>Cloud computing; data-intensive applications; heterogenous server; IEEE 802.11p; software defined network; software defined vehicular cloud; vehicular adhoc network; workload scheduling; road side unit; vehicular edge cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Cloud computational platforms provision numerous cloud-based Vehicular Adhoc Network (VANET) applications. To provide better bandwidth and connectivity in a dynamic manner, the Software Defined VANET (SDVN) was developed. Using SDVN, new VANET frameworks are modeled, for example the Software Defined Vehicular Cloud (SDVC). In SDVC, the vehicle enables virtualization technology through SDVN and provides complex data-intensive workload execution in a scalable and efficient manner. Vehicular Edge Computing (VEC) addresses various challenges of fifth generation (5G) workload application performance and deadline requirements. VEC aids in reducing response time and delay with high reliability for workload execution. Here the workload tasks are executed on nearby edge devices connected to a Road Side Unit (RSU) with limited computing capability. If resources are not available in the RSU, then task execution is offloaded through SDN toward heterogeneous cloud servers. Existing workload scheduling designs for the cloud environment consider minimizing cost and delay; however, very limited work has been done on energy minimization for workload execution. This paper presents a Load Balanced and Energy Aware Cloud Resource Scheduling (LBEACRS) design for a heterogeneous cloud framework. Experimental outcomes show that LBEACRS achieves better makespan and energy efficiency performance when compared with standard cloud resource scheduling designs.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_40-Load_Balanced_and_Energy_Aware_Cloud_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An NB-ANN based Fusion Approach for Disease Genes Prediction and LFKH-ANFIS Classifier for Eye Diseases Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121039</link>
        <id>10.14569/IJACSA.2021.0121039</id>
        <doi>10.14569/IJACSA.2021.0121039</doi>
        <lastModDate>2021-10-31T06:54:27.1800000+00:00</lastModDate>
        
        <creator>Samar Jyoti Saikia</creator>
        
        <creator>S. R. Nirmala</creator>
        
        <subject>Disease gene identification; eye disease identification; deep learning; adaptive neuro-fuzzy inference system (ANFIS); levy flight based krill herd (LFKH); principal component analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>A key step in apprehending the cellular mechanisms related to a particular disease is disease gene identification. Computational forecasting of disease genes is inexpensive and easier compared to biological experiments. Here, an effectual deep learning-centered fusion algorithm called Naive Bayes-Artificial Neural Networks (NB-ANN) is proposed for disease gene identification. Additionally, this paper proposes an effectual classifier, namely the Levy Flight based Krill Herd (LFKH) Adaptive Neuro-Fuzzy Inference System (ANFIS), for the prediction of eye diseases that are brought about by human disease genes. Utilizing this technique, 10 disparate sorts of eye diseases are identified. The NB-ANN includes these 4 steps: a) construction of 4 Feature Vectors (FV), b) selection of negative data, c) training of the FV utilizing NB, and d) ANN for prediction. The LFKH-ANFIS undergoes Feature Extraction (FE), Feature Reduction (FR), and classification for eye disease prediction. The experimental outcomes exhibit the method’s efficiency with regard to precision and recall.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_39-An_NB_ANN_based_Fusion_Approach_for_Disease_Genes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Semantic Similarity Approach for Farmers’ Complaints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121038</link>
        <id>10.14569/IJACSA.2021.0121038</id>
        <doi>10.14569/IJACSA.2021.0121038</doi>
        <lastModDate>2021-10-31T06:54:27.1630000+00:00</lastModDate>
        
        <creator>Rehab Ahmed Farouk</creator>
        
        <creator>Mohammed H. Khafagy</creator>
        
        <creator>Mostafa Ali</creator>
        
        <creator>Kamran Munir</creator>
        
        <creator>Rasha M.Badry</creator>
        
        <subject>Semantic similarity; latent semantic analysis; big data; MapReduce SVM; COVID-19; agriculture farmer&#39;s complaint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Semantic similarity is applied in many areas of Natural Language Processing, such as information retrieval, text classification, plagiarism detection, and others. Many researchers have used semantic similarity for English texts, but few have done so for Arabic due to the ambiguity of Arabic concepts in both sense and morphology. Therefore, the first contribution of this paper is the development of a semantic similarity approach for Arabic sentences. Nowadays, the world faces the global problem of coronavirus disease. In light of these circumstances and the imposition of distancing, it is difficult for farmers to physically communicate with agricultural experts to obtain advice and find suitable solutions for their agricultural complaints. In addition, traditional practices are still used by most farmers. Thus, our second contribution is helping farmers solve their Arabic agricultural complaints using our proposed approach. The Latent Semantic Analysis approach is applied to retrieve the problems most semantically related to a farmer's complaint and find the related solution for the farmer. Two weighting schemes are used for data representation in this approach: Term Frequency and Term Frequency-Inverse Document Frequency. The proposed model also classifies the large agricultural dataset and the submitted farmer complaint according to crop type using a MapReduce Support Vector Machine to improve the performance of the semantic similarity results. The proposed approach with Term Frequency-Inverse Document Frequency-based Latent Semantic Analysis performed better than its counterparts, achieving an F-measure of 86.7%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_38-Arabic_Semantic_Similarity_Approach_for_Farmers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert Review on Mobile Augmented Reality Applications for Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121037</link>
        <id>10.14569/IJACSA.2021.0121037</id>
        <doi>10.14569/IJACSA.2021.0121037</doi>
        <lastModDate>2021-10-31T06:54:27.1300000+00:00</lastModDate>
        
        <creator>Nur Asylah Suwadi</creator>
        
        <creator>Nazatul Aini Abd Majid</creator>
        
        <creator>Meng Chun Lam</creator>
        
        <creator>Nor Hashimah Jalaluddin</creator>
        
        <creator>Junaini Kasdan</creator>
        
        <creator>Aznur Aisyah Abdullah</creator>
        
        <creator>Afifuddin Husairi Hussain</creator>
        
        <creator>Azlan Ahmad</creator>
        
        <creator>Daing Zairi Maarof</creator>
        
        <subject>AR; expert review; HCI; language learning; mobile application; self-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Many mobile applications that can increase user engagement and promote self-learning have been developed to date. Nevertheless, mobile applications specific to Malay language learning for non-native speakers with relevant materials are still lacking. Moreover, expert reviews are needed to identify usability issues and to check whether such applications can meet the learning goals with relevant material features. This study developed an augmented reality (AR)-based mobile application called RakanBM for learning the Malay language (i.e. the official language of Malaysia), and then performed an expert review of the application contents, text presentations, learning outcomes, assessments, effectiveness, efficiency, and satisfaction. The expert review was conducted by a panel of six experts from two fields, namely the Malay language and Human-Computer Interaction (HCI), using methods such as cognitive walkthrough (CW), semi-structured interviews, think-aloud protocols, and a survey. The results from the CW, semi-structured interviews, and think-aloud protocols show that enhancements were needed to the user interface and user experience in terms of aesthetics and interactivity. The survey results were classified into two levels: high (mean &gt; 4.0) and satisfied (mean &gt; 3.5). The application factors recorded as satisfied were the application contents, text presentations, and satisfaction, while the factors recorded as high were the learning outcomes, assessments, effectiveness, and efficiency. The comments and suggestions for improvement were mainly about the contents of the application. Nevertheless, the application received good comments on its usefulness and the topics covered, which were well suited to non-native speakers. The findings of this study can guide developers and researchers in the development of future applications that support language learning for non-native speakers in particular.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_37-Expert_Review_on_Mobile_Augmented_Reality_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proactive Virtual Machine Scheduling to Optimize the Energy Consumption of Computational Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121036</link>
        <id>10.14569/IJACSA.2021.0121036</id>
        <doi>10.14569/IJACSA.2021.0121036</doi>
        <lastModDate>2021-10-31T06:54:27.1170000+00:00</lastModDate>
        
        <creator>Shailesh Saxena</creator>
        
        <creator>Mohammad Zubair Khan</creator>
        
        <creator>Ravendra Singh</creator>
        
        <creator>Abdulfattah Noorwali</creator>
        
        <subject>Cloud computing; CO2; proactive scheduling; unsupervised learning; clustering; energy; prediction; cloud-sim; performance assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The rapid expansion of communication and computational technology provides us the opportunity to deal with the bulk nature of dynamic data. The classical computing style is not very effective for such mission-critical data analysis and processing. Therefore, cloud computing has become popular for addressing and dealing with such data. Cloud computing involves a large computational and network infrastructure that requires a significant amount of power and generates carbon footprints (CO2). In this context, we can minimize the cloud's energy consumption by controlling and switching off idle machines. Therefore, in this paper, we propose a proactive virtual machine (VM) scheduling technique that can deal with frequent migration of VMs and minimize the energy consumption of the cloud using unsupervised learning techniques. The main objective of the proposed work is to reduce the energy consumption of cloud datacenters through effective utilization of cloud resources by predicting the future demand for resources. In this context, four different clustering algorithms, namely K-Means, SOM (Self-Organizing Map), FCM (Fuzzy C-Means), and K-Medoid, are used to develop the proposed proactive VM scheduling and to find which type of clustering algorithm is best suited for reducing energy use through proactive VM scheduling. This predictive, load-aware VM scheduling technique is evaluated and simulated using the Cloud-Sim simulator. In order to demonstrate the effectiveness of the proposed scheduling technique, the workload trace of 29 days released by Google in 2019 is used. The experimental outcomes are summarized in different performance metrics, such as the energy consumed and the average processing time. Finally, by concluding the efforts made, we also suggest future research directions.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_36-Proactive_Virtual_Machine_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Data Mining Algorithms for Cancer Gene Expression Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121035</link>
        <id>10.14569/IJACSA.2021.0121035</id>
        <doi>10.14569/IJACSA.2021.0121035</doi>
        <lastModDate>2021-10-31T06:54:27.0830000+00:00</lastModDate>
        
        <creator>Preeti Thareja</creator>
        
        <creator>Rajender Singh Chhillar</creator>
        
        <subject>Colorectal cancer; breast cancer; bioinformatics; data mining; WEKA; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Cancer is amongst the most challenging disorders to diagnose nowadays, and experts are still struggling to detect it at an early stage. Gene selection is significant for identifying the different cancer-causing parameters. The two deadliest cancers, namely colorectal cancer and breast cancer, are found in males and females, respectively. This study aims at predicting cancer at an early stage with the help of cancer bioinformatics. Given the complexity of illness metabolic rates, signaling, and interaction, cancer bioinformatics is among the strategies that apply bioinformatics technologies such as data mining to cancer detection. The goal of the proposed study is to compare support vector machine, random forest, decision tree, artificial neural network, and logistic regression for the prediction of cancer from malignant gene expression data. WEKA is used to analyze the data against the algorithms. The findings show that smart computational data mining techniques could be used to detect cancer recurrence in patients. Finally, the strategies that yielded the best results were identified.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_35-Comparative_Analysis_of_Data_Mining_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Document Classification by Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121034</link>
        <id>10.14569/IJACSA.2021.0121034</id>
        <doi>10.14569/IJACSA.2021.0121034</doi>
        <lastModDate>2021-10-31T06:54:27.0530000+00:00</lastModDate>
        
        <creator>Taghreed Alghamdi</creator>
        
        <creator>Samia Snoussi</creator>
        
        <creator>Lobna Hsairi</creator>
        
        <subject>Arabic document; document classification; deep learning; convolutional neural network (CNN); pre_trained network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>In this paper, we show how to classify Arabic document images using a convolutional neural network, one of the most common supervised deep learning algorithms. The main advantage of using deep learning is its ability to automatically extract useful features from images, which eliminates the need for a manual feature extraction process. Convolutional neural networks can extract features from images through a convolution process involving various filters. We collected a variety of Arabic document images from various sources and passed them into a convolutional neural network classifier. We adopt a VGG16 network pre-trained on ImageNet to classify the dataset into four classes: handwritten, historical, printed, and signboard. For document image classification, we used the VGG16 convolutional layers, ran the dataset through them, and then trained a classifier on top of them. We extract features by freezing the pre-trained network's convolutional layers, then adding the fully connected layers and training them on the dataset. We update the network by adding dropout after each max-pooling layer and to the fourteenth and seventeenth layers, which are the fully connected layers. The proposed approach achieved a classification accuracy of 92%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_34-Arabic_Document_Classification_by_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Deep and Statistical Machine Learning Models in the Classification of Breast Cancer from Digital Mammograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121033</link>
        <id>10.14569/IJACSA.2021.0121033</id>
        <doi>10.14569/IJACSA.2021.0121033</doi>
        <lastModDate>2021-10-31T06:54:27.0370000+00:00</lastModDate>
        
        <creator>Amel A. Alhussan</creator>
        
        <creator>Nagwan M. Abdel Samee</creator>
        
        <creator>Vidan F. Ghoneim</creator>
        
        <creator>Yasser M. Kadah</creator>
        
        <subject>Computer-aided detection; computer-aided diagnosis; statistical machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The application of artificial intelligence techniques to computer-aided detection and diagnosis problems has been among the most promising areas of interest for the scientific community and healthcare industry. Recently, deep learning has become the prime tool for such applications, with many studies focusing on developing variants that optimize diagnostic performance. Despite the widely accepted success of this class of techniques in this application, it is not prudent to consider it the only tool available for this purpose. In particular, statistical machine learning offers a variety of techniques that can also be applied at a much lower computational cost. Unfortunately, the results of the two strategies cannot be directly compared due to differences in the experimental setups and datasets used in available research studies. Therefore, in this study we focus on a direct comparison using the same dataset and similar data preprocessing as the input to both. We compare statistical machine learning to deep learning in the context of computer-aided detection of breast cancer from mammographic images. The results are compared using diagnostic performance metrics and suggest that simpler statistical machine learning techniques may provide better performance with simpler architectures that allow explanation of results.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_33-Evaluating_Deep_and_Statistical_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare Misinformation Detection and Fact-Checking: A Novel Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121032</link>
        <id>10.14569/IJACSA.2021.0121032</id>
        <doi>10.14569/IJACSA.2021.0121032</doi>
        <lastModDate>2021-10-31T06:54:27.0230000+00:00</lastModDate>
        
        <creator>Yashoda Barve</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Misinformation detection; sentiment analysis; document similarity; fact-check; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Information spreads rapidly in the world of the internet. The internet has become the first choice of people for medication tips related to their health problems. However, this ever-growing usage of the internet has also led to the spread of misinformation. Misinformation in healthcare has severe effects on people's lives; thus, efforts are required to detect misinformation as well as fact-check information before using it. In this paper, the authors propose a model to detect and fact-check misinformation in the healthcare domain. The model extracts healthcare-related URLs from the web, pre-processes them, computes Term Frequency, extracts sentimental and grammatical features to detect misinformation, and computes distance measures, viz. Euclidean, Jaccard, and Cosine similarity, to fact-check the URLs as True or False based on a manually generated dataset with experts' opinions. The model was evaluated using five state-of-the-art machine learning classifiers: Logistic Regression, Support Vector Machine, Na&#239;ve Bayes, Decision Tree, and Random Forest. The experimental results showed that the sentimental features are crucial in detecting misinformation, as more negative words are found in URLs containing misinformation than in URLs with true information. It was observed that Na&#239;ve Bayes outperformed all other models with an accuracy of 98.7%, whereas the Decision Tree classifier showed the lowest accuracy at 92.88%. Also, the Jaccard distance was found to be the best distance measure in terms of accuracy compared to the Euclidean distance and Cosine similarity measures.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_32-Healthcare_Misinformation_Detection_and_Fact_Checking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-lane LBP-Gabor Capsule Network with K-means Routing for Medical Image Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121031</link>
        <id>10.14569/IJACSA.2021.0121031</id>
        <doi>10.14569/IJACSA.2021.0121031</doi>
        <lastModDate>2021-10-31T06:54:26.9900000+00:00</lastModDate>
        
        <creator>Patrick Kwabena Mensah</creator>
        
        <creator>Anokye Acheampong Amponsah</creator>
        
        <creator>Kwame Baffour Agyemang</creator>
        
        <creator>Gabriel Kofi Armah</creator>
        
        <creator>Abra Ayidzoe</creator>
        
        <creator>Faiza Umar Bawah</creator>
        
        <creator>Adebayor Felix Adekoya</creator>
        
        <creator>Benjamin Asubam Weyori</creator>
        
        <creator>Mark Amo-Boateng</creator>
        
        <subject>Convolutional neural networks; deep learning; Gabor filters; k-means routing; local binary pattern; power squash introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Medical images naturally occur in smaller quantities and are not balanced. Some medical domains such as radiomics involve the analysis of images to diagnose a patient’s condition. Often, images of sick inaccessible parts of the body are taken for analysis by experts. However, medical experts are scarce, and the manual analysis of the images is time-consuming, costly, and prone to errors. Machine learning has been adopted to automate this task, but it is tedious, time-consuming, and requires experienced annotators to extract features. Deep learning alleviates this problem, but the threat of overfitting on smaller datasets and the existence of the “black box” still lingers. This paper proposes a capsule network that uses Local Binary Pattern (LBP), Gabor layers, and K-Means routing in an attempt to alleviate these drawbacks. Experimental results show that the model produces state-of-the-art accuracy for the three datasets (KVASIR, COVID-19, and ROCT), does not overfit on smaller and imbalanced datasets, and has reduced complexity due to fewer parameters. Layer activation maps, a cluster of features, predictions, and reconstruction of the input images, show that our model is interpretable and has the credibility and trust required to gain the confidence of practitioners for deployment in critical areas such as health.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_31-Multi_lane_LBP_Gabor_Capsule_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Decentralized Application for Telemedicine Image Record System with Smart Contract on Ethereum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121030</link>
        <id>10.14569/IJACSA.2021.0121030</id>
        <doi>10.14569/IJACSA.2021.0121030</doi>
        <lastModDate>2021-10-31T06:54:26.9770000+00:00</lastModDate>
        
        <creator>Darrell Yonathan</creator>
        
        <creator>Diyanatul Husna</creator>
        
        <creator>Fransiskus Astha Ekadiyanto</creator>
        
        <creator>I Ketut Eddy Purnama</creator>
        
        <creator>Afif Nurul Hidayati</creator>
        
        <creator>Mauridhi Hery Purnomo</creator>
        
        <creator>Supeno Mardi Susiki Nugroho</creator>
        
        <creator>Reza Fuad Rachmadi</creator>
        
        <creator>Ingrid Nurtanio</creator>
        
        <creator>Anak Agung Putri Ratna</creator>
        
        <subject>Blockchain; Ethereum; smart contract; telemedicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This paper discusses the implementation of smart contracts on the Ethereum blockchain system for telemedicine data storage. Telemedicine is one of the currently developing digital technologies in the health and medical sectors. Telemedicine can make seeking treatment more efficient because patients do not need to see a doctor face to face. When using blockchain technology, the stored data becomes more transparent to each node in the blockchain network, but every transaction requires verification, which takes time and incurs gas costs. Blockchain-based telemedicine thus has several risks and problems, one of which is the long data storage process time, because a verification process must first take place to ensure data security. Another problem is the gas fee of the blockchain telemedicine system, which is billed on every data storage transaction. In this study, a blockchain system was introduced for managing and securing databases for telemedicine. The blockchain system was implemented on a website page that can add data to and retrieve data from the blockchain. The results of this study show that blockchain was successfully implemented to store telemedicine data with Ethereum. The analysis in this paper refers to the set and get functions: the set function is used to send data to the blockchain, and the get function is used to retrieve data from it. Testing showed that the get function has a much faster execution time than the set function because the get function does not require verification to retrieve data. Across the iteration counts tested (1, 10, and 100), the longest average time occurred at 100 iterations. The tests also showed that the more characters stored, the more gas costs had to be paid; the percentage increase in cost was 0.34% per character.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_30-Design_of_Decentralized_Application_for_Telemedicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forensic Analysis on False Data Injection Attack on IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121029</link>
        <id>10.14569/IJACSA.2021.0121029</id>
        <doi>10.14569/IJACSA.2021.0121029</doi>
        <lastModDate>2021-10-31T06:54:26.9430000+00:00</lastModDate>
        
        <creator>Saiful Amin Sharul Nizam</creator>
        
        <creator>Zul-Azri Ibrahim</creator>
        
        <creator>Fiza Abdul Rahim</creator>
        
        <creator>Hafizuddin Shahril Fadzil</creator>
        
        <creator>Haris Iskandar Mohd Abdullah</creator>
        
        <creator>Muhammad Zulhusni Mustaffa</creator>
        
        <subject>Advanced Metering Infrastructure (AMI); False Data Injection Attack (FDIA); man in the middle (MITM); internet of things (IoT); forensic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>False Data Injection Attack (FDIA) is an attack that can compromise Advanced Metering Infrastructure (AMI) devices, where an attacker may misreport real power consumption by falsifying meter usage from end-users' smart meters. Due to the rapid development of the Internet, cyber attackers are keen on exploiting domains such as finance, metering systems, defense, healthcare, governance, etc. Securing IoT networks such as the electric power grid or water supply systems has emerged as a national and global priority because of the many vulnerabilities found in this area and the impact of attacks through Internet of Things (IoT) components. In this modern era, there is a pressing need for better awareness and improved methods to counter such attacks in these domains. This paper aims to study the impact of FDIA on AMI by analyzing network traffic logs to identify digital forensic traces. An AMI testbed was designed and developed to produce the FDIA logs. Experimental results show that the forensic traces found in the evidence logs collected through forensic analysis are sufficient to confirm the attack. Moreover, this study has produced a table of attributes for evidence collection when performing a forensic investigation of FDIA in the AMI environment.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_29-Forensic_Analysis_on_False_Data_Injection_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Essay Scoring: A Review on the Feature Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121028</link>
        <id>10.14569/IJACSA.2021.0121028</id>
        <doi>10.14569/IJACSA.2021.0121028</doi>
        <lastModDate>2021-10-31T06:54:26.9300000+00:00</lastModDate>
        
        <creator>Ridha Hussein Chassab</creator>
        
        <creator>Lailatul Qadri Zakaria</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <subject>Automatic essay scoring; automatic essay grading; semantic analysis; structure analysis; string-based; corpus-based; word embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Automatic Essay Scoring (AES) is the automatic process of assigning a score to a particular essay answer. Such a task has been extensively addressed in the literature, where two main learning paradigms have been utilized: supervised and unsupervised. Within these paradigms, a wide range of feature analyses has been utilized: morphology, frequencies, structure, and semantics. This paper aims at addressing these feature analysis types, with their subcomponents and corresponding approaches, by introducing a new taxonomy. Consequently, a review of recent AES studies is conducted to highlight the utilized techniques and feature analyses. The findings of this critical analysis showed that the traditional morphological analysis of the essay answer lacks semantic analysis. Utilizing a semantic knowledge source such as an ontology would be restricted to the domain of the essay answer. Similarly, utilizing semantic corpus-based techniques would be impacted by the domain of the essay answer as well. On the other hand, using essay structural features and frequencies alone would be insufficient, but using them as an auxiliary to another semantic analysis technique would bring promising results. The state of the art in AES research concentrates on neural-network-based embedding techniques. Yet, the major limitations of these techniques are (i) finding an adequate sentence-level embedding when using models such as Word2Vec and GloVe, (ii) ‘out-of-vocabulary’ words when using models such as Doc2Vec and GSE, and lastly, (iii) ‘catastrophic forgetting’ when using the BERT model.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_28-Automatic_Essay_Scoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Acute Myeloid Leukemia based on White Blood Cell Morphological Imaging using Na&#239;ve Bayesian Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121027</link>
        <id>10.14569/IJACSA.2021.0121027</id>
        <doi>10.14569/IJACSA.2021.0121027</doi>
        <lastModDate>2021-10-31T06:54:26.8970000+00:00</lastModDate>
        
        <creator>Esti Suryani</creator>
        
        <creator>Wiharto</creator>
        
        <creator>Adi Prasetya Putra</creator>
        
        <creator>Wisnu Widiarto</creator>
        
        <subject>Leukemia; acute myeloid leukemia; morphology; image processing; Na&#239;ve Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The process of diagnosing AML is based on the complete blood-count analysis of the patients. As such, it involves high energy consumption and long completion times, and is rather expensive compared to conventional medical practices. One of the methods for identifying tumor cells involves the utilization of image-processing techniques based on the morphology of white blood cells (WBCs). The principal objective of this study is the identification of AML cells, especially of the AML M1 and AML M2 types, through morphological imaging of WBCs using the Na&#239;ve Bayes classifier. The image-processing methods used in this study include YCbCr color space classification, image thresholding, morphological operations, chain code representation, and the use of bounding boxes. Regardless of the processing technique used, all identification procedures performed in this study were based on the Na&#239;ve Bayes classifier. The test process was performed on 30 images of each of the AML M1 and M2 cell types. The cell identification method proposed in this study demonstrated an accuracy of 73.33%, while the accuracy of cell type identification was 54.92%. Based on these results, it is inferred that the Na&#239;ve Bayes classifier can be employed in identifying dominant AML cell types amongst AML M1 and AML M2 (myeloblast, promyelocyte, myelocyte, and metamyelocyte) based on the morphology of WBCs.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_27-Detection_of_Acute_Myeloid_Leukemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chest Diseases Prediction from X-ray Images using CNN Models: A Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121026</link>
        <id>10.14569/IJACSA.2021.0121026</id>
        <doi>10.14569/IJACSA.2021.0121026</doi>
        <lastModDate>2021-10-31T06:54:26.8670000+00:00</lastModDate>
        
        <creator>Latheesh Mangeri</creator>
        
        <creator>Gnana Prakasi O S</creator>
        
        <creator>Neeraj Puppala</creator>
        
        <creator>Kanmani P</creator>
        
        <subject>Convolutional neural networks; VGG19; ResNet50V2; DenseNet201</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Chest diseases create serious health issues for people all over the world. Identifying these diseases in their earlier stages helps patients seek treatment early and can save their lives. Convolutional Neural Networks (CNNs) play an important role in the health sector, especially in predicting diseases at earlier stages. X-rays are one of the major inputs that help identify chest diseases accurately. In this paper, we study the prediction of chest diseases such as pneumonia, COVID-19, and tuberculosis (TB) from X-ray images. The prediction of these diseases is analyzed with the support of three CNN models, VGG19, ResNet50V2, and DenseNet201, and the results are elaborated in terms of accuracy and loss. Though all three models are highly accurate and consistent, considering factors such as architectural size and training speed, ResNet50V2 is the best model for all three diseases. It was trained with F1 scores of 0.98, 0.92, and 0.97 for pneumonia, tuberculosis, and COVID-19, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_26-Chest_Diseases_Prediction_from_X_rays_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Intrusion Detection System for Energy Efficient Cluster based Vehicular Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121025</link>
        <id>10.14569/IJACSA.2021.0121025</id>
        <doi>10.14569/IJACSA.2021.0121025</doi>
        <lastModDate>2021-10-31T06:54:26.8500000+00:00</lastModDate>
        
        <creator>M V B Murali Krishna M</creator>
        
        <creator>C. Anbu Ananth</creator>
        
        <creator>N. Krishna Raj</creator>
        
        <subject>Clustering; intrusion detection; vehicular communication; VANET; machine learning; krill herd optimization; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA&#39;s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2021.0121025.retraction</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_25-Intrusion_Detection_System_for_Energy_Efficient_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Mini Batch K-means and Business Intelligence Utilization for Credit Card Customer Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121024</link>
        <id>10.14569/IJACSA.2021.0121024</id>
        <doi>10.14569/IJACSA.2021.0121024</doi>
        <lastModDate>2021-10-31T06:54:26.8200000+00:00</lastModDate>
        
        <creator>Firman Pradana Rachman</creator>
        
        <creator>Handri Santoso</creator>
        
        <creator>Arko Djajadi</creator>
        
        <subject>Customer segmentation; machine learning; business intelligence; data warehouse</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>An effective marketing strategy is a method to identify customers well. One such method is customer segmentation. This study provided an illustration of customer segmentation based on RFM (Recency, Frequency, Monetary) analysis using machine learning clustering, which can be combined with customer segmentation based on demography, geography, and customer habits through data warehouse-based business intelligence. The purpose of classifying customers based on RFM analysis and machine learning clustering was to establish customer levels, while the purpose of customer segmentation based on demography, geography, and behavior was to classify customers with the same characteristics. The combination of both provided a better analytical result for understanding customers. This study also showed that mini batch k-means was the machine learning model with the fastest performance in clustering the 3-dimensional data, namely recency, frequency, and monetary.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_24-Machine_Learning_Mini_Batch_K_means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Meta-analytic Review of Intelligent Intrusion Detection Techniques in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121023</link>
        <id>10.14569/IJACSA.2021.0121023</id>
        <doi>10.14569/IJACSA.2021.0121023</doi>
        <lastModDate>2021-10-31T06:54:26.8030000+00:00</lastModDate>
        
        <creator>Meghana G Raj</creator>
        
        <creator>Santosh Kumar Pani</creator>
        
        <subject>Intrusion detection system (IDS); machine learning; computational intelligence algorithms; hybrid meta-heuristic algorithms; cloud security; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Security and data privacy continue to be major considerations in the selection and study of cloud computing. Organizations are migrating more critical operations to the cloud, resulting in an increase in the number of cloud vulnerability incidents. In recent years, there have been several technological advancements for accurate detection of attacks in the cloud. Intrusion Detection Systems (IDS) are used to detect malicious attacks and reinstate network security in the cloud environment. This paper presents a systematic literature review and a meta-analysis to shed light on intelligent approaches for IDS in the cloud. This review focuses on three intelligent IDS approaches: Machine Learning Algorithms, Computational Intelligence Algorithms, and Hybrid Meta-Heuristic Algorithms. A qualitative review synthesis was carried out on a total of 28 articles published between 2016 and 2021. This study concludes that IDS based on Hybrid Meta-Heuristic Algorithms have higher accuracy, a lower false positive rate, and a higher detection rate.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_23-A_Meta_analytic_Review_of_Intelligent_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Logarithmic-Power Algorithm for Preserving the Brightness in Contrast Distorted Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121022</link>
        <id>10.14569/IJACSA.2021.0121022</id>
        <doi>10.14569/IJACSA.2021.0121022</doi>
        <lastModDate>2021-10-31T06:54:26.7730000+00:00</lastModDate>
        
        <creator>Navleen S Rekhi</creator>
        
        <creator>Jagroop S Sidhu</creator>
        
        <subject>Non-uniform images; gamma correction; multi-scale 2D- discrete wavelet transform; logarithmic-power; quality metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Digital images get distorted due to non-uniform light conditions or improper acquisition settings of the digital camera. Such factors lead to objects with distorted contrast. In this work, we propose an adaptive enhancement algorithm to improve the contrast while preserving the mean brightness of the image. The method is a combination of the discrete wavelet transform and gamma correction. Firstly, the gamma scale is computed from a multi-scale decomposition using the 2D discrete wavelet transform, with the value of the gamma scale parameter computed from a combination of logarithmic and power functions. Secondly, gamma correction is applied to improve the contrast of the image. Lastly, bilateral filtering is utilized to smooth the edges in the image. The approach effectively preserves brightness and optimizes contrast. The objective quality measures peak SNR, AMBE, entropy, entropy-based contrast measure, and median absolute deviation were computed and compared with other state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_22-Adaptive_Logarithmic_Power_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Modern DNA-based Steganography Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121021</link>
        <id>10.14569/IJACSA.2021.0121021</id>
        <doi>10.14569/IJACSA.2021.0121021</doi>
        <lastModDate>2021-10-31T06:54:26.7400000+00:00</lastModDate>
        
        <creator>Omar Haitham Alhabeeb</creator>
        
        <creator>Fariza Fauzi</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <subject>Information security in bioinformatics; deoxyribonucleic acid-based steganography; modern hiding approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>In the last two decades, the field of DNA-based steganography has emerged as a promising domain for securing sensitive information transmitted over an untrusted channel. DNA is strongly nominated by researchers in this field to exceed other data covering mediums such as video, image, and text due to its structural characteristics. Features like enormous hiding capacity, high computational power, and the randomness of its building contents all help establish DNA&#39;s supremacy. There are mainly three types of DNA-based algorithms: insertion, substitution, and complementary rule-based algorithms. In the last few years, a new generation of DNA-based steganography approaches has been proposed. These modern algorithms surpass the performance of the older ones either by exploiting a biological factor that exists in the DNA itself or by using a suitable technique from another field of computer science, such as artificial intelligence, data structures, or networking. The main goal of this paper is to thoroughly analyze these modern DNA-based steganography approaches by explaining their working mechanisms, stating their pros and cons, and proposing suggestions to improve them. Additionally, a biological background on DNA structure, the main security parameters, and classical concealing approaches is provided to give a comprehensive picture of the field.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_21-A_Review_of_Modern_DNA_based_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aligning Software System Level with Business Process Level through Model-Driven Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121020</link>
        <id>10.14569/IJACSA.2021.0121020</id>
        <doi>10.14569/IJACSA.2021.0121020</doi>
        <lastModDate>2021-10-31T06:54:26.7100000+00:00</lastModDate>
        
        <creator>Maryam Habba</creator>
        
        <creator>Samia Benabdellah Chaouni</creator>
        
        <creator>Mounia Fredj</creator>
        
        <subject>Information system alignment; business process; software system; Business Process Model and Notation (BPMN); Unified Modeling Language (UML); class diagram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Information systems are intended to provide organisations with a new way of sustaining themselves, by helping them manage their activities using innovative technologies. Information systems require aligned levels for maximum effectiveness. In this context, business and information technology (IT) alignment is an important issue for the success of organisations. This paper presents the first step of the proposed approach to align the software system level, modelled by a Unified Modeling Language (UML) class diagram, with the business process level, modelled by the Business Process Model and Notation (BPMN) model. A model-driven architecture approach is proposed as a means to transform a set of BPMN models into a UML class diagram. A set of transformation rules is proposed, followed by guidelines that help apply those rules.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_20-Aligning_Software_System_Level_with_Business_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert’s Usability Evaluation of the Pelvic Floor Muscle Training mHealth App for Pregnant Women</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121019</link>
        <id>10.14569/IJACSA.2021.0121019</id>
        <doi>10.14569/IJACSA.2021.0121019</doi>
        <lastModDate>2021-10-31T06:54:26.6930000+00:00</lastModDate>
        
        <creator>Aida Jaffar</creator>
        
        <creator>Sherina Mohd Sidik</creator>
        
        <creator>Novia Admodisastro</creator>
        
        <creator>Evi Indriasari Mansor</creator>
        
        <creator>Lau Chia Fong</creator>
        
        <subject>Pregnant women; pelvic floor muscle training; mHealth app; usability evaluation; cognitive walkthrough; heuristic evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Pelvic floor muscle training (PFMT) is the first-line treatment for managing urinary incontinence. Unfortunately, personal and social barriers hinder pregnant women from performing PFMT. Therefore, the Kegel Exercise Pregnancy Training (KEPT) app was developed to bridge accessibility barriers among incontinent pregnant women. This study aimed to evaluate the usability of the KEPT app, developed for pregnant women to improve their pelvic floor muscle training. Purposive sampling was used to recruit experts in informatics and a physician with a special interest in informatics. The evaluation activities were planned in the following sequence: a cognitive walkthrough for the learnability of the app, a heuristic evaluation for the interface of the app, and a usability questionnaire for the quantitative assessment of the app&#39;s usability properties. The mHealth Application Usability Questionnaire (MAUQ) was used as the assessment tool. A total of four experts were involved in this study. The cognitive walkthrough revealed several major learnability issues in the KEPT app, especially in the training interface and language consistency. The heuristic evaluation showed that the training interface must provide additional information about the displayed icons. On the MAUQ, the experts rated the app&#39;s ease of use, interface and satisfaction, and usefulness at 5.80/7.0, 5.57/7.0, and 5.83/7.0, respectively. Suggestions were shared to assist future researchers and developers in developing PFMT mHealth apps.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_19-Experts_Usability_Evaluation_of_the_Pelvic_Floor_Muscle_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Threat Intelligence in Risk Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121018</link>
        <id>10.14569/IJACSA.2021.0121018</id>
        <doi>10.14569/IJACSA.2021.0121018</doi>
        <lastModDate>2021-10-31T06:54:26.6630000+00:00</lastModDate>
        
        <creator>Amira M. Aljuhami</creator>
        
        <creator>Doaa M. Bamasoud</creator>
        
        <subject>Cyber threat intelligence; risk management; cyberthreat; cyber security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Cyber Threat Intelligence (CTI) has emerged to help cybersecurity professionals keep abreast of and respond to rising cyber threats (CT) in real time. This paper aims to study the impact of cyber threat intelligence on risk management in Saudi universities in mitigating cyber risks. In this survey, a comprehensive review of CTI concepts and challenges, as well as risk management practices in higher education, is presented. Previous literature, including theses, reviews, and books, was reviewed on the factors influencing the increase of cyber threats, on CTI, and on risk management in higher education. A brief discussion of previous studies, their contribution to the current paper, and the impact of CTI on risk management is also provided. An extensive search of more than 65 research papers was conducted, and 28 were cited in this survey. Cyber threats keep changing, and given the huge flow of information about them, dealing with these threats in time requires advance, in-depth knowledge of their nature and of how to take appropriate defensive measures; this is what defines the concept of CTI. The use of cyber threat intelligence in risk management enhances the ability of defenders to mitigate growing cyber threats.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_18-Cyber_Threat_Intelligence_in_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Document-based Electronic Health Records Persistence Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121017</link>
        <id>10.14569/IJACSA.2021.0121017</id>
        <doi>10.14569/IJACSA.2021.0121017</doi>
        <lastModDate>2021-10-31T06:54:26.6330000+00:00</lastModDate>
        
        <creator>Aya Gamal</creator>
        
        <creator>Sherif Barakat</creator>
        
        <creator>Amira Rezk</creator>
        
        <subject>Electronic health records; operational business intelligence; document data model; NoSQL; health information system; persistence framework; Couchbase server</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Electronic health record systems work beyond just recording patients&#39; health data. They have multiple secondary functionalities, such as data reporting and clinical decision support. As each of these workloads has different, often conflicting needs, managing a multipurpose electronic health record system is a challenge. This paper proposes a unified healthcare data framework that can simplify health information system infrastructure. It investigates the suitability of a document-based NoSQL persistence mechanism for storing electronic health record data as a design choice for managing ad hoc queries of varied complexity used in operational business intelligence. The performance of the two most popular document-based NoSQL back-ends, Couchbase Server and MongoDB, is compared according to database size and query execution time. Results showed that while MongoDB can execute simple single-document queries in near-millisecond time, it does not provide satisfactory response times for unplanned complex queries spanning multiple documents. By utilizing its analytics services and multi-dimensional scaling architecture, a Couchbase Server multi-node cluster outperforms the response times of MongoDB for both simple and complex healthcare data access patterns. The primary advantage of the proposed tightly coupled EHR processing framework is its flexibility to manage workloads according to changing requirements.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_17-Integrated_Document_based_Electronic_Health_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LightGBM-based Ransomware Detection using API Call Sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121016</link>
        <id>10.14569/IJACSA.2021.0121016</id>
        <doi>10.14569/IJACSA.2021.0121016</doi>
        <lastModDate>2021-10-31T06:54:26.6170000+00:00</lastModDate>
        
        <creator>Duc Thang Nguyen</creator>
        
        <creator>Soojin Lee</creator>
        
        <subject>Ransomware; machine learning; API call; dynamic analysis technique; gradient boosting decision tree; GBDT; lightGBM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Along with the development of technology and the explosion of digital data in the era of the fourth industrial revolution, cyberattacks using ransomware are emerging as a serious threat to many agencies and organizations. The harm of ransomware is not limited to the areas of information technology and finance but also affects areas related to people&#39;s lives, such as the medical field. Therefore, research to identify and detect these types of malicious code is urgent. This paper presents a novel approach to identifying and classifying ransomware based on dynamic analysis techniques combined with machine learning algorithms. First, this research focused on the Application Programming Interface (API) call functions that are extracted during a dynamic analysis of executable samples using the Cuckoo sandbox. Second, it used LightGBM, a gradient boosting decision tree algorithm, for training and then detecting and classifying normal software and eight different types of ransomware. Experimental results showed that the proposed approach achieves an overall accuracy of 98.7% when performing multiclass classification. In particular, the detection rates for ransomware and normal software were both 99.9%. At the same time, the accuracy in identifying two specific types of ransomware, WannaCry and Win32:FileCoder, reached 100%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_16-LightGBM_based_Ransomware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Symbolic Representation-based Melody Extraction using Multiclass Classification for Traditional Javanese Compositions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121015</link>
        <id>10.14569/IJACSA.2021.0121015</id>
        <doi>10.14569/IJACSA.2021.0121015</doi>
        <lastModDate>2021-10-31T06:54:26.5830000+00:00</lastModDate>
        
        <creator>Arry Maulana Syarif</creator>
        
        <creator>Azhari Azhari</creator>
        
        <creator>Suprapto Suprapto</creator>
        
        <creator>Khafiizh Hastuti</creator>
        
        <subject>Melody extraction; symbolic representation-based; multiclass classification; feed-forward neural network; Gamelan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Traditional Javanese compositions contain melodies and skeletal melodies, where skeletal melodies are an extracted form of the melodies. The melody extraction problem is similar to chord detection in Western music, where chords are extracted from a melody. This research aims to develop a melody extraction system for traditional Javanese compositions. Melodies, which have a time-series data structure, were framed as a supervised learning problem to be solved using pattern recognition techniques and the Feed-Forward Neural Network (FFNN) method. The melody data source uses a symbolic format in the form of sheet music. The beats in the melody data are used as the input, and the notes in the skeletal melodies are used as the target. An FFNN multiclass classifier was built with six target classes, where each class represents a note of the musical scale system. The network was evaluated using accuracy, precision, recall, specificity, and F1-score measurements.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_15-Symbolic_Representation_based_Melody_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>University Course Timetabling Model in Joint Courses Program to Minimize the Number of Unserved Requests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121014</link>
        <id>10.14569/IJACSA.2021.0121014</id>
        <doi>10.14569/IJACSA.2021.0121014</doi>
        <lastModDate>2021-10-31T06:54:26.5700000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <creator>Abduh Sayid Albana</creator>
        
        <subject>Course timetabling; joint course program; artificial bee colony; simulated annealing; genetic algorithm; online course</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This work proposes a novel course timetabling model for a national joint courses program in which the participants, both students and lecturers, come from different universities. It differs from most existing university course timetabling models, where the environment is physical and the system can dictate the timeslots and classrooms for students and lecturers. The courses are delivered online in this model, so physical classrooms are no longer required, as was the case in most previous course timetabling studies. The matching process is conducted based on the assigned timeslots and the requested courses, and the courses are elective rather than mandatory. Three metaheuristic methods are used to optimize this model: the artificial bee colony, cloud theory-based simulated annealing, and the genetic algorithm. In the simulations, cloud theory-based simulated annealing performs best in minimizing the number of unserved requests, outperforming the two other metaheuristic methods. According to the simulation results, when the number of students is low, cloud theory-based simulated annealing has 91% fewer unserved requests than the genetic algorithm; when the number of students is large, this figure drops to 62%.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_14-University_Course_Timetabling_Model_in_Joint_Courses_Program.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Computer Vision Architectures for Large Scale Image Classification using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121013</link>
        <id>10.14569/IJACSA.2021.0121013</id>
        <doi>10.14569/IJACSA.2021.0121013</doi>
        <lastModDate>2021-10-31T06:54:26.5530000+00:00</lastModDate>
        
        <creator>D. Dakshayani Himabindu</creator>
        
        <creator>S. Praveen Kumar</creator>
        
        <subject>Image classification; deep learning; computer vision survey; convolution neural networks; IMAGENET dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The advancement of deep learning is increasing day by day, from image classification to language understanding tasks. In particular, convolutional neural networks have been revived and have demonstrated their performance in multiple fields such as natural language understanding, signal processing, and computer vision. The translational invariance property of convolutions has been a huge advantage in the field of computer vision for appropriately extracting invariant features. When trained using back-propagation, these convolutional networks have proven able to outperform the various hand-engineered machine vision models. Hence, a clear understanding of current deep learning methods is crucial. Convolutional neural networks have attained state-of-the-art performance in computer vision over the years when applied to humongous data. Hence, in this survey, we detail a set of state-of-the-art models in image classification, evolving from the birth of convolutions to present ongoing research. Each successive state-of-the-art model is illustrated with its architecture schema, implementation details, parameter tuning, and performance. It is observed that the field has evolved from neural architecture construction (a supervised approach to image classification) to data construction with cautious augmentations (a self-supervised approach). This detailed evolution from neural architecture construction to augmentation construction is illustrated, with appropriate suggestions provided to improve performance. Additionally, the implementation details and the appropriate sources for execution and reproducibility of results are tabulated.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_13-A_Survey_on_Computer_Vision_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors Associated to Tourism e-Commerce: Study of Peruvian Tourism Operators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121012</link>
        <id>10.14569/IJACSA.2021.0121012</id>
        <doi>10.14569/IJACSA.2021.0121012</doi>
        <lastModDate>2021-10-31T06:54:26.5230000+00:00</lastModDate>
        
        <creator>Sussy Bayona-Or&#233;</creator>
        
        <creator>Romy Estrada</creator>
        
        <subject>Adoption; e-commerce; tourism operators; PYMEs; critical factors; TOE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The incorporation of information and communication technologies (ICT) has generated new opportunities and innovation in business models, such as electronic commerce (EC). Despite the benefits that EC offers PYMEs, its adoption is low. Several authors have argued that many factors condition the adoption of EC in developing countries. This study proposes a model of factors associated with the adoption of EC by tourism operators, based on factors categorized into organizational, individual, environmental, and technological factors. Structural equation modeling and confirmatory factor analysis tools were used to analyze the data, which were collected from 116 participants (69% males and 31% females), all managers of tourism operators. The results reveal that 11 factors influence the adoption of EC. Also, operators currently using EC consider the most influential factors to be organizational, whereas operators that have not implemented EC value factors involving skills, knowledge, and experience in technology. This study can be used to establish policies on ICT adoption in tourism PYMEs.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_12-Critical_Success_Factors_Associated_to_Tourism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Highly Efficient Parts of Speech Tagging in Low Resource Languages with Improved Hidden Markov Model and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121011</link>
        <id>10.14569/IJACSA.2021.0121011</id>
        <doi>10.14569/IJACSA.2021.0121011</doi>
        <lastModDate>2021-10-31T06:54:26.5070000+00:00</lastModDate>
        
        <creator>Diganta Baishya</creator>
        
        <creator>Rupam Baruah</creator>
        
        <subject>Hidden markov models; viterbi algorithm; machine learning; deep learning; text processing; low resource language; unknown words; parts of speech tagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Over the years, many different algorithms have been proposed to improve the accuracy of automatic parts-of-speech tagging. High accuracy in parts-of-speech tagging is very important for any NLP application. Powerful models like the Hidden Markov Model (HMM), used for this purpose, require a huge amount of training data and are also less accurate at detecting unknown (untrained) words. Most of the languages in the world lack sufficient resources in computable form for training such models. NLP applications for such languages also encounter many unknown words during execution. This results in a low accuracy rate. Improving accuracy for such low-resource languages is an open problem. In this paper, one stochastic method and one deep learning model are proposed to improve accuracy for such languages. The proposed language-independent methods improve unknown-word accuracy and overall accuracy with a small amount of training data. First, bigrams and trigrams of characters that are already part of the training samples are used to calculate the maximum likelihood for tagging unknown words using the Viterbi algorithm and HMM. With training datasets below the size of 10K, an improvement of 12% to 14% in accuracy has been achieved. Next, a deep neural network model is also proposed to work with a very small amount of training data. It is based on word-level, character-level, character-bigram-level, and character-trigram-level representations to perform parts-of-speech tagging with limited available training data. The model improves the overall accuracy of the tagger along with improving accuracy for unknown words. Results for English and a low-resource Indian language, Assamese, are discussed in detail. Performance is better than many state-of-the-art techniques for low-resource languages. The method is generic and can be used with any language with a very small amount of training data.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_11-Highly_Efficient_Parts_of_Speech_Tagging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Scaling for Elastic Compute Resources on Public Cloud Utilizing Deep Learning based Long Short-term Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121010</link>
        <id>10.14569/IJACSA.2021.0121010</id>
        <doi>10.14569/IJACSA.2021.0121010</doi>
        <lastModDate>2021-10-31T06:54:26.4770000+00:00</lastModDate>
        
        <creator>Bharanidharan. G</creator>
        
        <creator>S. Jayalakshmi</creator>
        
        <subject>Predictive auto-scaling; business intelligence; virtual machines (VM’s); deep learning models; analytics; elasticity; high performance public cloud data centre (HP-PCDC); right sizing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Cloud resource usage has increased exponentially because of the adoption of digitalization in government and corporate organizations. This increases the usage of cloud compute instances, resulting in massive energy consumption by High Performance Public Cloud Data Center servers. In the cloud, some web applications experience diverse workloads at different timestamps, which is essential for workload efficiency as well as feasibility at every extent. One of the major features of cloud applications is scalability, for which most Cloud Service Providers (CSPs) offer Infrastructure as a Service (IaaS) and have implemented auto-scaling at the Virtual Machine (VM) level. Auto-scaling is a cloud computing feature with the ability to scale resources based on demand, and it assists in providing better results for other features like high availability, fault tolerance, energy efficiency, cost management, etc. In the existing approach, reactive scaling with a fixed or smart static threshold does not fulfill the requirement for applications to run without hurdles during peak workloads. This paper therefore focuses on increasing green tracing in cloud computing through a proposed predictive auto-scaling technique that reduces over-provisioning or under-provisioning of instances using the history of traces. It also offers right-sized instances that fit the application, satisfying users through on-demand elasticity. This is done using deep learning based time-series LSTM networks, wherein virtual CPU core instances can be accurately scaled using visualization insights after the model has been trained. Moreover, the prediction accuracy of the LSTM is also compared with that of a Gated Recurrent Unit (GRU) to bring business intelligence through analytics with reduced energy, reduced cost, and environmental sustainability.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_10-Predictive_Scaling_for_Elastic_Compute_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing User Involvement Practice: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121009</link>
        <id>10.14569/IJACSA.2021.0121009</id>
        <doi>10.14569/IJACSA.2021.0121009</doi>
        <lastModDate>2021-10-31T06:54:26.4600000+00:00</lastModDate>
        
        <creator>Asaad Alzayed</creator>
        
        <creator>Abdulwahed Khalfan</creator>
        
        <subject>User involvement practices; user involvement challenges; usability; user-center practice; user feedback; end users communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Engaging users in software development is recognized as effective in furthering the likelihood of product efficacy and a successful project, together with user contentment. Furthermore, user involvement is potentially applicable to numerous organizational contexts that can incorporate a focused user-centered group. This research analyzes the findings of a case study carried out to assess the user involvement situation within a business specializing in innovative software for general consumers, service providers, and enterprises. This company has now formed a user experience group that is devoted to applying user-centered approaches for the overall development of the organizational structure. General feedback was confirmed as the most typical means of gaining user insight, with the level of user involvement in focused development falling short. Nevertheless, the study led to recognition that a firm plan for drawing users into development processes is necessary moving forward.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_9-Analyzing_User_Involvement_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid e-Government Framework based on Datawarehousing and MAS for Data Interoperability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121008</link>
        <id>10.14569/IJACSA.2021.0121008</id>
        <doi>10.14569/IJACSA.2021.0121008</doi>
        <lastModDate>2021-10-31T06:54:26.4130000+00:00</lastModDate>
        
        <creator>Barakat Oumkaltoum</creator>
        
        <creator>El beqqali Omar</creator>
        
        <creator>Ouksel Aris</creator>
        
        <creator>Chakir Loqman</creator>
        
        <subject>e-Government; interoperability; multi-agent system; materialized views; datawarehouse; business intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>The exponential growth in technological innovation is driven in large part by the digitization of multiple domains and assumes environments of increasing data volumes, arriving at high velocity and variety. e-Government is one such domain that exploits current ICT innovations to improve the delivery of public services to citizens, businesses, and other stakeholders. This requires continuously maintaining information on daily operations, activities, and assets as well as extensive profiles on citizens, institutions, and organizations. In addition, current centralized platform-based approaches suffer from the single point of failure, which may result in data breaches and leakages, leading to the need for efficient, robust mechanisms to ensure secure information sharing, data interoperability, and privacy. In this paper, we propose a business intelligence approach to design a data interoperability framework for e-governance based on data warehousing technology to improve transparency and data accessibility. We also present a hybrid data filtering mechanism, which relies on both the Extraction, Transformation, and Loading (ETL) process and multi-agent technology to integrate data quality and data interoperability, and supports data transformation into a human-readable format. Finally, the framework emphasizes the availability of materialized views to enable efficient execution of analytical queries directly on the large volumes of raw data in the data warehouse.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_8-Hybrid_e_Government_Framework_based_on_Datawarehousing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introduction to NFTs: The Future of Digital Collectibles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121007</link>
        <id>10.14569/IJACSA.2021.0121007</id>
        <doi>10.14569/IJACSA.2021.0121007</doi>
        <lastModDate>2021-10-31T06:54:26.3970000+00:00</lastModDate>
        
        <creator>Muddasar Ali</creator>
        
        <creator>Sikha Bagui</creator>
        
        <subject>Blockchain technologies; smart contracts; cryptocurrencies; ethereum; non-fungible tokens (NFTs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>This paper commences by introducing the essentials of blockchain technology and then goes into how Ethereum blockchain revolutionized blockchain. Smart contracts are presented in the context of showing how they play an important role in implementing rules regarding the Ethereum blockchain, allowing the user to regulate digital assets. The standards used in the Ethereum blockchain to build Non-Fungible Tokens (NFTs) are discussed. The paper concludes by presenting the benefits of NFTs as well as the use of Ethereum blockchain for future applications.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_7-Introduction_to_NFTs_The_Future_of_Digital_Collectibles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Head Position and Pose Model and Method for Head Pose Angle Estimation based on Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121006</link>
        <id>10.14569/IJACSA.2021.0121006</id>
        <doi>10.14569/IJACSA.2021.0121006</doi>
        <lastModDate>2021-10-31T06:54:26.3670000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Akifumi Yamashita</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>CNN; head pose; OpenCV; Dlib; open-source software; python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>A head position and pose model is created, and a method for head pose angle estimation based on a Convolutional Neural Network (CNN) is proposed. A 3D head position model is created from the detected feature locations to obtain the 3D coordinates of the head position. For head pose detection, the open-source software tools OpenCV and Dlib are used with a Python program. The images used were RGB images, RGB images plus thermography, grayscale images, and RGB images with only the red channel extracted, simulating images obtained by near-infrared light. As a result, the RGB image model was the most accurate; however, considering the criteria set, the RGB image model was used for morning and daytime detection, while the near-infrared model was used for nighttime and rainy weather scenes. It turned out to be better to use the model obtained by this training. The experimental results show almost perfect head pose detection performance when the head pose angle ranges from 0 to 180 degrees in 45-degree steps.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_6-Head_Position_and_Pose_Model_and_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Selective Attention System to Intervene User Attention in Sharing COVID-19 Misinformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121005</link>
        <id>10.14569/IJACSA.2021.0121005</id>
        <doi>10.14569/IJACSA.2021.0121005</doi>
        <lastModDate>2021-10-31T06:54:26.3500000+00:00</lastModDate>
        
        <creator>Zaid Amin</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <creator>Alan F. Smeaton</creator>
        
        <subject>Visual selective attention; COVID-19 misinformation; user attention; information sharing; implicit association test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Information sharing on social media must be accompanied by attentive behavior so that, in a distorted digital environment, users are not rushed and distracted in deciding to share information. The spread of misinformation, especially that related to COVID-19, can divide society and create the negative effects of falsehood. It can also cause individuals feelings of fear, health anxiety, and confusion about the treatment of COVID-19. Although much research has focused on understanding human judgment from a psychological standpoint, few studies have addressed the essential issue, in the screening phase, of how technology can intervene amidst users&#39; attention when sharing information. This research aims to intervene in the user&#39;s attention with a visual selective attention approach. This study uses a quantitative method through studies 1 and 2 with pre- and post-intervention experiments. In study 1, we intervened in user decisions and attention by presenting ten items of information and misinformation using the Visual Selective Attention System (VSAS) tool. In study 2, we identified associations of user tendencies in evaluating information using the Implicit Association Test (IAT). The significant results showed that the user&#39;s attention and decision behavior improved after using the VSAS. The IAT results show a change in the association of user exposure: after the intervention using VSAS, users tend not to share misinformation about COVID-19. The results are expected to form the basis for developing social media applications to combat the negative impact of the COVID-19 misinformation infodemic.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_5-Visual_Selective_Attention_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Taxonomy of Cybersecurity Awareness Delivery Methods: A Countermeasure for Phishing Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121004</link>
        <id>10.14569/IJACSA.2021.0121004</id>
        <doi>10.14569/IJACSA.2021.0121004</doi>
        <lastModDate>2021-10-31T06:54:26.3200000+00:00</lastModDate>
        
        <creator>Asma A. Alhashmi</creator>
        
        <creator>Abdulbasit Darem</creator>
        
        <creator>Jemal H. Abawajy</creator>
        
        <subject>Phishing attack; human factors in cybersecurity; cybersecurity threats; cybersecurity awareness; anti-phishing awareness delivery methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Phishing is a serious threat to Internet users and has become a vehicle for cybercriminals to perpetrate large-scale crimes worldwide. A wide range of technical and educational measures have been developed and used to address phishing threats. However, while technical anti-phishing measures have been widely studied in the current literature, comprehensive analysis of the non-technical anti-phishing techniques has generally been ignored. To close this gap, we develop a new taxonomy of the most common cybersecurity training delivery methods and compare them along various factors. The work reported in this paper is useful for various stakeholders. For organizations conducting or considering phishing training, it helps them understand the capabilities of the various awareness training and phishing campaigns and design an appropriate program with a meaningful return. For researchers, it offers a clearer understanding of the main challenges, the existing solution space, and the potential scope of future research to be addressed.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_4-Taxonomy_of_Cybersecurity_Awareness_Delivery_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Recommendation based on Implication Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121003</link>
        <id>10.14569/IJACSA.2021.0121003</id>
        <doi>10.14569/IJACSA.2021.0121003</doi>
        <lastModDate>2021-10-31T06:54:26.2870000+00:00</lastModDate>
        
        <creator>Hoang Tan Nguyen</creator>
        
        <creator>Lan Phuong Phan</creator>
        
        <creator>Hung Huu Huynh</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Implication intensity; implication rules; implication field; equipotential surface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Recently, recommender systems have grown rapidly in both quantity and quality, attracting many studies aimed at improving their quality. In particular, collaborative filtering techniques based on a rule-mining model combined with the statistical implication analysis (SIA) technique have achieved some interesting results. This shows the potential of SIA to improve the performance of recommender systems. However, this line of work is still not rich, and several problems must be solved for better results, such as the processing of non-binary data; the bottleneck of the data partitioning method based on the number of transactions on very sparse transaction sets during training and testing of the model; and the lack of attention to exploiting the trend of variation of statistical implication. To contribute to solving these problems, this paper focuses on proposing a new data partitioning method and developing a recommendation model based on mining the equipotential planes generated by the variation of implication intensity or implication index in the implication field, on both binary and non-binary data, to further improve the recommendations. Experimental results have shown the success of this new approach through quality comparisons with collaborative filtering recommendation models as well as existing SIA-based ones.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_3-Collaborative_Recommendation_based_on_Implication_Field.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UAV Aided Data Collection for Wildlife Monitoring using Cache-enabled Mobile Ad-hoc Wireless Sensor Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121002</link>
        <id>10.14569/IJACSA.2021.0121002</id>
        <doi>10.14569/IJACSA.2021.0121002</doi>
        <lastModDate>2021-10-31T06:54:26.2570000+00:00</lastModDate>
        
        <creator>Umair B. Chaudhry</creator>
        
        <creator>Chris I. Phillips</creator>
        
        <subject>UAV; caching; sensors; MANETs; WSN; waypoint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Unmanned aerial vehicle (UAV) assisted data collection is not a new concept and has been used in various mobile ad hoc networks. In this paper, we propose a caching-assisted scheme as an alternative to routing in MANETs for the purpose of wildlife monitoring. Rather than deploying a routing protocol, data is collected and transported to and from a base station using a UAV. Although some literature exists on such an approach, we propose the use of intermediate caching between the mobile nodes and compare it to a baseline scenario where no caching is used. The paper puts forward our communication design, in which we have simulated the movement of multiple mobile sensor nodes in a field that move according to the Levy walk model, imitating wildlife foraging, and a UAV that makes regular trips across the field to collect data from them. The UAV can collect data not only from the node it is currently communicating with but also data from other nodes that this node came into contact with. Simulations show that exchanging cached data is highly advantageous, as the drone can indirectly communicate with many more mobile nodes.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_2-UAV_Aided_Data_Collection_for_Wildlife_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Design of Model for Information Security Requirement Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0121001</link>
        <id>10.14569/IJACSA.2021.0121001</id>
        <doi>10.14569/IJACSA.2021.0121001</doi>
        <lastModDate>2021-10-31T06:54:26.1930000+00:00</lastModDate>
        
        <creator>Shailaja Salagrama</creator>
        
        <subject>Information security; network security; web security; confidentiality; integrity; availability; communication technology; information system; internet security; security framework introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(10), 2021</description>
        <description>Information security is a major domain of analysis for enhancing the security of sensitive data held by business organizations. These days, attackers are advancing themselves by applying highly advanced technological solutions, such as artificially intelligent malicious code and advanced phishing methods, among many others, to acquire sensitive and critical data from businesses. This paper presents a novel model framework to analyze the information security requirements of organizations for a more robust information system and its assets. The framework is designed in such a fashion that both new and legacy organizations can adopt it to define security requirements that ensure the confidentiality, integrity, and availability of information systems and their components, including sensitive domain business and private data that are critical to the organization. Two different model frameworks are proposed here. The first provides specifications of the security requirements, and the second provides for the audit of access logs to capture any unethical practices and violations by internal users. The proposed model for security requirements provides the roadmap to analyze and build proper security requirements to secure business-sensitive data. The stepwise processes needed to analyze and define security requirements are the key factors of this security model, as they help in clear definitions of security frameworks and infrastructure for an organization. The Audit Model provides the framework for defining information auditing requirements, thus enabling the capture of unethical and unauthorized access to the information system components of the organization.</description>
        <description>http://thesai.org/Downloads/Volume12No10/Paper_1-An_Effective_Design_of_Model_for_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Service Discovery based on Pertinence Probabilities Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120989</link>
        <id>10.14569/IJACSA.2021.0120989</id>
        <doi>10.14569/IJACSA.2021.0120989</doi>
        <lastModDate>2021-10-05T12:55:33.6400000+00:00</lastModDate>
        
        <creator>Mohammed Merzoug</creator>
        
        <creator>Abdelhak Etchiali</creator>
        
        <creator>Fethallah Hadjila</creator>
        
        <creator>Amina Bekkouche</creator>
        
        <subject>Service-oriented computing; web service discovery; rank aggregation; probabilistic fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Web service discovery is one of the most motivating issues in the field of service-oriented computing. Several approaches have been proposed to tackle this problem. In general, they leverage similarity measures or logic-based reasoning to perform this task, but they still present some limitations in terms of effectiveness. In this paper, we propose a probabilistic approach that merges a set of matching algorithms to boost global performance. The key idea consists of learning a set of relevance probabilities; thereafter, we use them to produce a combined ranking. Experiments conducted on the real-world dataset “OWL-S TC 2” demonstrate the effectiveness of our model in terms of mean averaged precision (MAP); more specifically, our solution, termed “probabilistic fusion”, outperforms all the state-of-the-art matchmakers as well as the most prominent similarity measures.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_89-Effective_Service_Discovery_based_on_Pertinence_Probabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employing DDR to Design and Develop a Flipped Classroom and Project based Learning Module to Applying Design Thinking in Design and Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120988</link>
        <id>10.14569/IJACSA.2021.0120988</id>
        <doi>10.14569/IJACSA.2021.0120988</doi>
        <lastModDate>2021-10-05T12:55:33.5800000+00:00</lastModDate>
        
        <creator>Mohd Ridzuan Padzil</creator>
        
        <creator>Aidah Abd Karim</creator>
        
        <creator>Hazrati Husnin</creator>
        
        <subject>Flipped classroom; project-based learning; design and development (DDR); Isman instructional design model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The purpose of this study is to discuss the Design and Development Research (DDR) approach that was used to develop a Flipped Classroom and project-based learning module for students of Design and Technology (D&amp;T). The module&#39;s fundamental theory is based on 21st-century teaching and learning models, as well as design thinking. The DDR process is divided into three phases: needs analysis, design and development, and evaluation. The needs analysis phase is used to ascertain the necessity of module development and the application of design thinking. Three distinct data collection methods were used in this phase: semi-structured interviews, survey studies, and document analysis. The findings from this phase serve as the foundation for the next phase. The Isman Instructional Design Model (2011) is adapted for use in this phase as a guide for module design and development. Additionally, the Fuzzy Delphi Method is used to obtain expert consensus on module material design, teaching and learning strategies, software and hardware development requirements, and module prototype evaluation. The final phase is implementation and evaluation, which focuses on determining the module&#39;s effectiveness in the actual teaching and learning process. Each finding is organised and documented systematically and in an orderly manner in accordance with the DDR phases in order to produce more meaningful research results. The conclusion of this article proposes a conceptual framework for the research.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_88-Employing_DDR_to_Design_and_Develop_a_Flipped_Classroom.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structured and Unstructured Robust Control for an Induction Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120987</link>
        <id>10.14569/IJACSA.2021.0120987</id>
        <doi>10.14569/IJACSA.2021.0120987</doi>
        <lastModDate>2021-09-30T08:09:10.7630000+00:00</lastModDate>
        
        <creator>Jhoel F. Espinoza-Quispe</creator>
        
        <creator>Juan C. Cutipa-Luque</creator>
        
        <creator>German A. Echaiz Espinoza</creator>
        
        <creator>Andres O. Salazar</creator>
        
        <subject>H∞ robust control; induction motor; indirect field-oriented control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Indirect field-oriented control approaches for induction motors have recently gained more attention due to their use in trending areas such as electromobility, electric vehicles, electric ships, and unmanned vehicles. This work studies the performance of two advanced controllers synthesized via the H∞ norm as an alternative to the classical Proportional-Integral-Derivative controller. They are assessed in terms of performance against disturbance variations in the reference speed under nominal conditions. The tuning of the controller parameters must be defined to guarantee the stability and performance of the system and to increase the operating frequency range. An algorithm is proposed to reach a better shape of the weighting functions. Numerical simulations show that, despite the advances in structured controller synthesis, the unstructured H∞ controller remains the better choice for induction motor control: the unstructured approach still shows good robustness in performance and stability compared with the structured controller, whose imposed constraints are the main disadvantage limiting its robustness properties. However, compared with a conventional PID approach, the structured controller has shown quite good performance and can become one of the most attractive approaches for practitioners.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_87-Structured_and_Unstructured_Robust_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Open-source Wireless Platform for Real-time Water Quality Monitoring with Precise Global Positioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120986</link>
        <id>10.14569/IJACSA.2021.0120986</id>
        <doi>10.14569/IJACSA.2021.0120986</doi>
        <lastModDate>2021-09-30T08:09:10.7300000+00:00</lastModDate>
        
        <creator>Niel F. Salas-Cueva</creator>
        
        <creator>Jorch Mendoza</creator>
        
        <creator>Juan Carlos Cutipa-Luque</creator>
        
        <creator>Pablo Raul Yanyachi</creator>
        
        <subject>Open-source; water quality monitoring; real-time; python; visual interface; MySQL; dual Global Navigation Satellite System (GNSS); Inertial Navigation System (INS); multiparameter sonde</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Sustainable development associated with the agricultural field of Arequipa, a region in economic growth, is vulnerable to contamination of water resources, putting production systems and food security at risk. Therefore, it is necessary to implement an automated system to control, manage, and monitor this vital resource. This work proposes a system for water quality monitoring in reservoirs and lakes with highly accurate global positioning. Its hardware architecture includes an embedded computer, a multiparameter sonde, and an additional dual GNSS/INS unit. The software architecture is fully open-source, with compatibility, modularity, and interoperability between Python and MySQL, allowing real-time data management through a visual interface on a platform that stores unlimited data logs, monitors, and analyzes. The proposed system is validated in an experimental test measuring the water quality of a large agricultural reservoir, where certified instrumentation is mandatory, and is compared with other methods used locally for this task.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_86-An_Open_source_Wireless_Platform_for_Real_time_Water_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Future Friend Recommendation System based on User Similarities in Large-Scale on Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120985</link>
        <id>10.14569/IJACSA.2021.0120985</id>
        <doi>10.14569/IJACSA.2021.0120985</doi>
        <lastModDate>2021-09-30T08:09:10.7170000+00:00</lastModDate>
        
        <creator>Md. Amirul Islam</creator>
        
        <creator>Linta Islam</creator>
        
        <creator>Md. Mahmudul Hasan</creator>
        
        <creator>Partho Ghose</creator>
        
        <creator>Uzzal Kumar Acharjee</creator>
        
        <creator>Md. Ashraf Kamal</creator>
        
        <subject>Social networks; recommendation framework; profile similarity; network similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Friendship is one of the most important issues in online social networks (OSNs). Researchers analyze OSNs to determine how people are connected to a network and how new connections develop. Most existing methods cannot efficiently evaluate a friendship graph&#39;s internal connectivity and fail to render a proper recommendation. This paper presents three proposed algorithms that can be applied in OSNs to predict future friend recommendations for users. Using network and profile similarity, the proposed approach can measure the similarity among users. To predict user similarity, we calculated an average weight that indicates the probability of two users being similar by considering every precise subset of profile attributes such as age, profession, location, and interest, rather than taking only the average of the superset of profile attributes. The suggested algorithms achieve a significant enhancement in prediction accuracy (97%) and precision (96.566%). Furthermore, the proposed recommendation frameworks can handle any profile attribute’s missing value by inferring the value from friends’ profile attributes.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_85-Future_Friend_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collision Resolution Techniques in Hash Table: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120984</link>
        <id>10.14569/IJACSA.2021.0120984</id>
        <doi>10.14569/IJACSA.2021.0120984</doi>
        <lastModDate>2021-09-30T08:09:10.7000000+00:00</lastModDate>
        
        <creator>Ahmed Dalhatu Yusuf</creator>
        
        <creator>Saleh Abdullahi</creator>
        
        <creator>Moussa Mahamat Boukar</creator>
        
        <creator>Salisu Ibrahim Yusuf</creator>
        
        <subject>Hashing; collision resolution; hash table; hash function; slot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>One of the major challenges of hashing is achieving constant access time O(1) with efficient memory use in a high-load-factor environment, where various keys generate the same hash value or address. This problem causes collisions in the hash table. To resolve collisions and achieve constant access time O(1), researchers have proposed several methods of handling collisions, most of which introduce non-constant access time complexity in the worst-case scenario. In this study, the worst cases of several proposed hashing collision resolution techniques are analyzed based on their time complexity in a high-load-factor environment. It was found that almost all the existing techniques have non-constant access time complexity, and they all require additional computation for rehashing keys in the hash table, some of which results from deadlock while iterating to insert a key. It was also found that there are wasted slots in the hash table in all the reviewed techniques. Therefore, this work provides an in-depth understanding of collision resolution techniques, which can serve as an avenue for further research in the field.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_84-Collision_Resolution_Techniques_in_Hash_Table.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Algorithm to Reduce Peak to Average Power Ratio in OFDM Systems based on BCH Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120983</link>
        <id>10.14569/IJACSA.2021.0120983</id>
        <doi>10.14569/IJACSA.2021.0120983</doi>
        <lastModDate>2021-09-30T08:09:10.6830000+00:00</lastModDate>
        
        <creator>Brahim BAKKAS</creator>
        
        <creator>Reda Benkhouya</creator>
        
        <creator>Toufik Chaayra</creator>
        
        <creator>Chana Idriss</creator>
        
        <creator>Hussain Ben-Azza</creator>
        
        <subject>Orthogonal Frequency Division Multiplexing (OFDM); Peak to Average Power Ratio (PAPR); Bit Error Rate (BER); Peak Insertion (PI); Coding; Bose Chaudhuri Hocquenghem (BCH)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Orthogonal Frequency Division Multiplexing (OFDM) suffers from a high peak-to-average power ratio (PAPR), which reduces the performance of the power amplifier (PA) and therefore deteriorates the overall energy efficiency of an OFDM system. Peak Insertion (PI) is one of the most commonly used methods to reduce PAPR and gives the best PAPR reduction; however, it causes a strong degradation in Bit Error Rate (BER). To solve this problem, we propose a new algorithm called BCB-OFDM, based on Bose-Chaudhuri-Hocquenghem (BCH) codes and PI. BCB is implemented in an OFDM system with Quadrature Amplitude Modulation (QAM) and two coding rates, 1/2 and 1/4, over an Additive White Gaussian Noise (AWGN) channel. Simulation results show that BCB is very promising and achieves good PAPR reduction while keeping good performance compared with PI and plain OFDM. In addition, the BCB algorithm is simple, robust, and requires no side information, offering more flexibility to trade off PAPR reduction against BER performance.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_83-A_New_Algorithm_to_Reduce_Peak_to_Average_Power_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Security: A New Symmetric Cryptosystem based on Graph Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120982</link>
        <id>10.14569/IJACSA.2021.0120982</id>
        <doi>10.14569/IJACSA.2021.0120982</doi>
        <lastModDate>2021-09-30T08:09:10.6530000+00:00</lastModDate>
        
        <creator>Khalid Bekkaoui</creator>
        
        <creator>Soumia Ziti</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Cryptosystem; graph theory; hamiltonian circuits; adjacency matrix; block cipher; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Sharing private data in an unsecured channel is extremely critical, as unauthorized entities can intercept it and could break its privacy. The design of a cryptosystem that fulfills the security requirements in terms of confidentiality, integrity and authenticity of transmitted data has therefore become an unavoidable imperative. Indeed, a lot of work has been carried out in this regard. Although many cryptosystems have been proposed in the published literature, it has been found that their robustness and performance vary relatively from one to another. Adopting this reflection, we address in this paper the concept of block cipher, which is a major cryptographic solution to guarantee confidentiality, by involving the properties of graph theory to represent the plaintext message. Our proposal is in fact a new symmetric encryption block cipher that proceeds by representing plaintext messages using disjoint Hamiltonian circuits and then dealing with them as an adjacency matrix in a pre-encryption phase. The proposed system relies on a particular sub-key generator that has been carefully designed to produce the encryption keys according to the specifications of the system. The obtained experimental results demonstrate that our proposed cryptosystem is robust against statistical attacks, particularly the DIEHARD test, and presents both good confusion and good diffusion.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_82-Data_Security_A_New_Symmetric_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Carrot Disease Recognition using Deep Learning Approach for Sustainable Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120981</link>
        <id>10.14569/IJACSA.2021.0120981</id>
        <doi>10.14569/IJACSA.2021.0120981</doi>
        <lastModDate>2021-09-30T08:09:10.6370000+00:00</lastModDate>
        
        <creator>Naimur Rashid Methun</creator>
        
        <creator>Rumana Yasmin</creator>
        
        <creator>Nasima Begum</creator>
        
        <creator>Aditya Rajbongshi</creator>
        
        <creator>Md. Ezharul Islam</creator>
        
        <subject>Deep learning; convolutional neural network; Inception v3; carrot disease recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Carrot is a fast-growing and nutritious vegetable cultivated throughout the world for its edible roots. The farmers are still learning the scientific methods of carrot production worldwide. For the production of good quality carrots, modern technology is not being used to its fullest to detect carrot vegetable diseases in the farms. As a result, the farmers face difficulties now and then in continuous monitoring and detecting defects in carrot crops. Hence, this paper proposes an efficient carrot disease identification and classification method using a deep learning approach, especially Convolutional Neural Network (CNN). In this research, five different carrot diseases including healthy carrots have been examined and experimented with four different pretrained models of CNN architecture, i.e., VGG16, VGG19, MobileNet, and Inception v3. Among the four models, the Inception v3 model is selected as an efficient pretrained CNN architecture to build an effective and robust system. The Inception v3 based system proposed here takes carrot images as input and examines whether they are healthy or infected, and provides output accordingly. To train and evaluate the system, a robust dataset is used, which consists of original and synthetic data. In the Fully Connected Neural Network (FCNN), dropout is used to solve the problem of overfitting as well as to improve the accuracy of the system. The accuracy achieved from the method which uses Inception v3 is 97.4%, which is undoubtedly helpful for the farmers to identify carrot disease and maximize their benefits to establish sustainable agriculture.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_81-Carrot_Disease_Recognition_using_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Spark and Ignite for Big Spatial Data Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120980</link>
        <id>10.14569/IJACSA.2021.0120980</id>
        <doi>10.14569/IJACSA.2021.0120980</doi>
        <lastModDate>2021-09-30T08:09:10.6070000+00:00</lastModDate>
        
        <creator>Samah Abuayeid</creator>
        
        <creator>Louai Alarabi</creator>
        
        <subject>Big spatial data; GeoSpark; SpatialIgnite; Apache Ignite; Apache Spark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Recently, spatial data has become one of the most interesting fields related to big data studies, with spatial data being generated and consumed from many different resources. The increasing number of location-based services and applications, such as Google Maps, vehicle navigation, and recommendation systems, is the main foundation of the idea of spatial data. Several researchers have started to explore and compare spatial frameworks to understand the requirements for spatial database processing, manipulation, and analysis systems. Apache Spark, Apache Ignite, and Hadoop are the most widely known frameworks for large-scale data processing. Apache Spark and Apache Ignite have integrated different spatial data operations and analysis queries, but each system has its advantages and disadvantages when dealing with spatial data, and adopting a new framework or system that needs to integrate new functionality can become a risky decision if it is not examined well. The main aim of this research is to conduct a comprehensive evaluation of big spatial data computing on two well-known data management systems, Apache Ignite and Apache Spark. The comparison covers four different domains: experimental environment setup, supported features, supported functions and queries, and performance and execution time. The results show that GeoSpark is more flexible to use than SpatialIgnite. We thoroughly investigated and discovered that multiple factors affect the performance of both frameworks, such as CPU, main memory, dataset size, the complexity of data types, and the programming environment. Spark is more advanced and equipped with several functionalities, such as kNN queries, that make it well suited to spatial data queries and indexing; these functionalities are not supported in SpatialIgnite.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_80-Comparative_Analysis_of_Spark_and_Ignite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment Methods for Cybersecurity in Nuclear Facilities: Compliance to Regulatory Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120979</link>
        <id>10.14569/IJACSA.2021.0120979</id>
        <doi>10.14569/IJACSA.2021.0120979</doi>
        <lastModDate>2021-09-30T08:09:10.5900000+00:00</lastModDate>
        
        <creator>Lilis Susanti Setianingsih</creator>
        
        <creator>Reza Pulungan</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <creator>Moh Edi Wibowo</creator>
        
        <creator>Syarip</creator>
        
        <subject>Risk assessment; cybersecurity; nuclear facilities; security requirements; regulatory requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>As strategic infrastructures, nuclear facilities are considered attractive targets for attackers to commit their malicious intentions. At the same time, for efficiency, those infrastructures are increasingly implemented, equipped with, and managed by digitally computerized systems. Attackers, therefore, try to realign their attack scenarios through such cyber systems. It is crucial to understand the various existing risk assessment methods for cybersecurity in nuclear facilities to prevent such attacks. Risk assessment is designed to study the nature of the originating attack threats and the consequences they imply. This paper studies a series of risk assessment methods implemented for cybersecurity of strategic infrastructures, including nuclear facilities. Extending from cybersecurity, the required concepts in nuclear security cover defense-in-depth, the synergy of safety and security, and probabilistic safety/risk assessment. The selection of cybersecurity risk assessment methods should integrate these three essential concepts in its evaluation. This paper highlights the suitable and appropriate risk assessment methods that meet security requirements in the nuclear industry as specified in national and international regulations.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_79-Risk_Assessment_Methods_for_Cybersecurity_in_Nuclear_Facilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Dissemination for Bioinformatics Application using Agent Migration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120977</link>
        <id>10.14569/IJACSA.2021.0120977</id>
        <doi>10.14569/IJACSA.2021.0120977</doi>
        <lastModDate>2021-09-30T08:09:10.5600000+00:00</lastModDate>
        
        <creator>Shakir Ullah Shah</creator>
        
        <creator>Abdul Hameed</creator>
        
        <creator>Jamil Ahmad</creator>
        
        <creator>Hafeez Ur Rehman Safia Fatima</creator>
        
        <creator>Muhammad Amin</creator>
        
        <subject>Data dissemination; protein-protein interactions; agent migration; inter-platform mobility; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Bioinformatics is a research-intensive field where agents operate in a highly dynamic environment. Extensive research in this domain leads to basic but important problems for researchers: (1) bandwidth, (2) storage, and (3) computation. We use an agent migration approach to reduce the network load and resolve the resource problem for the client by using server-side resources for computations on large data. The proposed approach does not demand extra storage or extensive computational resources on the client side, and it addresses the problems of bandwidth, storage, and computation. Our results show that this approach saves up to approximately 12.5% of the user&#39;s time, depending on the size of the data. Similarly, the agent can work like a mashup, gathering heterogeneous data from different service providers and presenting it in a homogeneous shape to its owner.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_77-Data_Dissemination_for_Bioinformatics_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Metaheuristic Aided Energy Efficient Cluster Head Selection in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120978</link>
        <id>10.14569/IJACSA.2021.0120978</id>
        <doi>10.14569/IJACSA.2021.0120978</doi>
        <lastModDate>2021-09-30T08:09:10.5600000+00:00</lastModDate>
        
        <creator>Turki Ali Alghamdi</creator>
        
        <subject>Cluster head; security; trust; dragonfly algorithm; LU-DA model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Clustering is one of the significant techniques for extending the lifetime of wireless sensor networks (WSNs). It entails combining sensor nodes (SNs) into clusters and electing a cluster head (CH) for each cluster. The CH collects information from the nodes of its cluster and passes the cumulative data to the base station (BS). The most important requirement in a WSN, however, is to choose a suitable CH that increases the network life span. This work introduces a new CH selection (CHS) model in WSN. The optimal CH is elected by a new hybridized model termed the “Lion Updated Dragonfly Algorithm (LU-DA)”, which hybridizes the concepts of the Dragonfly Algorithm (DA) and the Lion Algorithm (LA). Moreover, the optimal selection of the CH depends on constraints such as “energy, delay, distance, security (risk) and trust (direct and indirect trust)”. This optimal CH ensures enhancement of the network lifetime. Finally, the superiority of the developed approach is proved on varied measures such as energy and alive-node analysis. Accordingly, the proposed model accomplished a higher energy of 0.55 at the 1st round, whereas at the 2000th round the normalized energy value had dropped to 0.1.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_78-Hybrid_Metaheuristic_Aided_Energy_Efficient_Cluster_Head.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Intrusion Detection Model for Identification of Threats in Internet of Things Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120976</link>
        <id>10.14569/IJACSA.2021.0120976</id>
        <doi>10.14569/IJACSA.2021.0120976</doi>
        <lastModDate>2021-09-30T08:09:10.5300000+00:00</lastModDate>
        
        <creator>Nsikak Pius Owoh</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Zarul Fitril Zaaba</creator>
        
        <subject>Internet of things; intrusion detection system; k-means; principal component analysis; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Internet of Things (IoT) has transcended from its application in traditional sensing networks such as wireless sensing and radio frequency identification to life-changing and critical applications. However, IoT networks are still vulnerable to threats, attacks, intrusions, and other malicious activities. Intrusion Detection Systems (IDS) that employ unsupervised learning techniques are used to secure sensitive data transmitted on IoT networks and preserve privacy. This paper proposes a hybrid model for intrusion detection that relies on a dimension reduction algorithm, an unsupervised learning algorithm, and a classifier. The proposed model employs Principal Component Analysis (PCA) to reduce the number of features in a dataset. The K-means algorithm generates clusters that serve as class labels for the Support Vector Machine (SVM) classifier. Experimental results using the NSL-KDD and the UNSW-NB15 datasets justify the effectiveness of our proposed model in detecting malicious activities in IoT networks. The proposed model, when trained, identifies benign and malicious behaviours using an unlabelled dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_76-A_Hybrid_Intrusion_Detection_Model_for_Identification_of_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Grey Clustering and Shannon’s Entropy to Assess Sediment Quality from a Watershed</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120975</link>
        <id>10.14569/IJACSA.2021.0120975</id>
        <doi>10.14569/IJACSA.2021.0120975</doi>
        <lastModDate>2021-09-30T08:09:10.5130000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Betsy Vilchez</creator>
        
        <creator>Fabian Chipana</creator>
        
        <creator>Gerson Trejo</creator>
        
        <creator>Renato Acari</creator>
        
        <creator>Rony Camarena</creator>
        
        <creator>V&#237;ctor Galicia</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <subject>Grey clustering; sediment quality; Shannon entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The evaluation of sediment quality is a complex issue in the Peruvian context, mainly because there is no sampling protocol or norm for comparison, which leads to sediments being assessed without a comprehensive analysis of their quality. In the present study, the quality of the sediments in the upper basin of the Huarmey river was evaluated at 30 monitoring points across 7 parameters: arsenic, cadmium, copper, chromium, mercury, lead, and zinc. These were compared against the standards recommended by the Environmental Quality Guidelines for Sediments in freshwater bodies of Canada (Canadian Environmental Quality Guidelines - CEQG, 2002; Sediment Quality Guidelines for the Protection of Aquatic Life - Fresh water, according to the Canadian Council of Ministers of the Environment (CCME)). The results of the evaluation, by the grey clustering method and Shannon entropy, showed that 13 monitoring points presented good sediment quality, 1 monitoring point had moderate quality, and 16 monitoring points presented poor quality; therefore, it can be concluded that the effluents and discharges of the mining activities that take place in the aforementioned location have a negative impact on environmental quality. Finally, the results obtained can be of great help to OEFA, the regional government, the municipalities, and any other body with oversight functions, since they will allow these bodies to make more objective and precise decisions.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_75-Applying_Grey_Clustering_and_Shannons_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Organisational Information Security Management Maturity Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120974</link>
        <id>10.14569/IJACSA.2021.0120974</id>
        <doi>10.14569/IJACSA.2021.0120974</doi>
        <lastModDate>2021-09-30T08:09:10.4970000+00:00</lastModDate>
        
        <creator>Mazlina Zammani</creator>
        
        <creator>Rozilawati Razali</creator>
        
        <creator>Dalbir Singh</creator>
        
        <subject>Information security; information security management; maturity models; information security management maturity model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Information Security Management (ISM) is a systematic initiative for managing an organisation’s information security. ISM can also be defined as a strategic approach to addressing information security (IS) risks, breaches, and incidents that could threaten the confidentiality, integrity, and availability of information. Although organisations have complied with ISM requirements, security incidents still afflict numerous organisations. This issue shows that the current implementation of ISM is still ineffective, and ineffective ISM implementation illustrates a low maturity level. To achieve a higher level of maturity, organisations should continually evaluate their ISM practices. Several maturity models have been developed by international organisations, consultants, and researchers to assist organisations in assessing their ISM practices. However, the current models do not evaluate ISM practices holistically: their measurement dimensions focus on assessing certain factors only, so the maturity assessment is not executed comprehensively. Therefore, this study aims to address this shortcoming by proposing a comprehensive maturity assessment model that takes ISM success factors into account to evaluate the effectiveness of the implementation. This study adopted a mixed-method approach, comprising qualitative and quantitative studies, to strengthen the research findings. The qualitative study analyses the existing literature and conducts interviews with nine industry practitioners and six experts, while the quantitative study involves a questionnaire survey. The data obtained from the qualitative study were analysed using content analysis, while the quantitative data employed statistical analysis. The study identified fourteen success factors and fifty-seven maturity dimensions, each of which contains five maturity levels. The proposed model was evaluated through experts’ reviews to ensure its accuracy and suitability. The evaluation shows that the model can identify the ISM maturity level systematically and comprehensively. This model will ultimately help organisations to improve weaknesses in their implementations, thus diminishing security incidents.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_74-Organisational_Information_Security_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling a Fault Detection Predictor in Compressor using Machine Learning Approach based on Acoustic Sensor Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120973</link>
        <id>10.14569/IJACSA.2021.0120973</id>
        <doi>10.14569/IJACSA.2021.0120973</doi>
        <lastModDate>2021-09-30T08:09:10.4800000+00:00</lastModDate>
        
        <creator>Divya M. N</creator>
        
        <creator>Narayanappa C. K</creator>
        
        <creator>Gangadharaiah S. L</creator>
        
        <subject>Air-compressor; fault detection; LSTM; multi-layer perception; ANN; acoustic sensor data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Proper functioning of the air compressor ensures stability for many critical systems. The ill effects of breakdowns caused by wear and tear in the system can be mitigated if an effective automated fault classification system exists. Traditionally, simulation-based methods help to identify faults; however, those systems are not effective enough to build real-time adaptive methods for detecting a fault and its type. This paper proposes an effective model for fault classification in the air compressor based on real-time empirical acoustic sensor time-series data taken at a sampling frequency of 50 kHz. In the proposed work, the time-series data is transformed into the frequency domain using the fast Fourier transform, where half of the signal is considered due to its symmetric representation. Afterward, a masking operation is carried out to extract significant feature vectors, which are fed to the multilayer perceptron neural network. The uniqueness of the proposed system is that it requires fewer trainable parameters, thus reducing training time and imposing lower memory overhead. The model is benchmarked with the accuracy performance metric, and it is found that the proposed masked-feature-set-based MLP-ANN exhibits an accuracy of 91.32%. In contrast, the LSTM-based fault classification model gives only 83.12% accuracy, takes more training time, and consumes more memory. Thus, the proposed model is realistic enough to be considered for a real-time fault monitoring and control system. However, other performance metrics like precision, recall, and F1-score are also promising with the LSTM-based fault classifier.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_73-Modeling_a_Fault_Detection_Predictor_in_Compressor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of BAT and Firefly Algorithm in Neighborhood based Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120972</link>
        <id>10.14569/IJACSA.2021.0120972</id>
        <doi>10.14569/IJACSA.2021.0120972</doi>
        <lastModDate>2021-09-30T08:09:10.4670000+00:00</lastModDate>
        
        <creator>Hartatik</creator>
        
        <creator>Bayu Permana Sejati</creator>
        
        <creator>Hamdani Hamdani</creator>
        
        <creator>Andri Syafrianto</creator>
        
        <subject>Bat algorithm; firefly algorithm; collaborative filtering; recommender system; swarm intelligent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The recommender system is a knowledge-based filtering system that predicts users&#39; ratings and preferences for what they might desire. The neighborhood method is a promising approach for performing such predictions, yielding high accuracy based on common items. However, this method&#39;s accuracy can suffer: when each user provides limited data and the data is sparse, accuracy narrows as a consequence. In this research, we use a Swarm Intelligence (SI) technique in the recommender system to overcome this problem, whereby SI trains each feature to an optimal weight. This technique&#39;s main objective is to form better groups of similar users and improve the accuracy of recommendations. The swarm intelligence techniques compared for providing recommendations are the Firefly and Bat Algorithms. The results show that the Firefly Algorithm performs slightly better than the Bat Algorithm, with a difference in mean absolute error of 0.02013333. A significance test using the independent t-test method states that there is no statistically significant difference between the Bat and Firefly algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_72-A_Comparison_of_BAT_and_Firefly_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Gray Box-based Approach to Automatic Requirements Specification for a Robot Patrol System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120971</link>
        <id>10.14569/IJACSA.2021.0120971</id>
        <doi>10.14569/IJACSA.2021.0120971</doi>
        <lastModDate>2021-09-30T08:09:10.4330000+00:00</lastModDate>
        
        <creator>Soojin Park</creator>
        
        <subject>Embedded system; automatic requirement specifications generation; mobile robots; use case specification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Black box-based requirements specification models, represented by the use case model, focus on specifying system behaviors exposed externally. While these models are sufficiently effective in specifying requirements for business application behavior, they are limited in specifying requirements for embedded systems with relatively short interaction sequences with users. To solve this problem, in our previous work we proposed a gray box-based requirements specification method to specify the inner logic of an embedded system, including a tool for the automatic generation of requirements specifications from analysis models. This study proves the benefits of the proposed software requirements specification method by applying it to a robot patrol system and showing the possibility of general use of the proposed method in the embedded system domain. Compared with our previous work, we enhance the tool for automatic generation of requirements specifications, called SpecGen, and prove the benefit of the proposed method from multiple aspects. The application result on the robot patrol system case quantitatively demonstrates that our proposed requirements specification method improves development productivity and enhances overall software product quality, including code quality.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_71-A_Gray_Box_based_Approach_to_Automatic_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing SMOTE Family Techniques in Predicting Insurance Premium Defaulting using Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120970</link>
        <id>10.14569/IJACSA.2021.0120970</id>
        <doi>10.14569/IJACSA.2021.0120970</doi>
        <lastModDate>2021-09-30T08:09:10.4200000+00:00</lastModDate>
        
        <creator>Mohamed Hanafy Kotb</creator>
        
        <creator>Ruixing Ming</creator>
        
        <subject>Machine learning; classification; insurance; imbalanced data; SMOTE family; statistical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Default in premium payments significantly impacts the profitability of an insurance company. Therefore, predicting defaults in advance is very important for insurance companies. Prediction in the insurance sector is one of the most beneficial and important study areas in today&#39;s world, thanks to technological advancements. But because of the imbalanced datasets in this industry, predicting insurance premium defaulting becomes a difficult task. Moreover, no study applies and compares different SMOTE family approaches to address the issue of imbalanced data. So, this study aims to compare different SMOTE family approaches, such as the Synthetic Minority Oversampling Technique (SMOTE), Safe-level SMOTE (SLS), Relocating Safe-level SMOTE (RSLS), Density-based SMOTE (DBSMOTE), Borderline-SMOTE (BLSMOTE), Adaptive Synthetic Sampling (ADASYN), Adaptive Neighbor Synthetic (ASN), SMOTE-Tomek, and SMOTE-ENN, to solve the problem of unbalanced data. This study applied a variety of machine learning (ML) classifiers to assess the performance of the SMOTE family in addressing the imbalanced problem. These classifiers include Logistic Regression (LR), CART, C4.5, C5.0, Support Vector Machine (SVM), Random Forest (RF), Bagged CART (BC), AdaBoost (ADA), Stochastic Gradient Boosting (SGB), XGBoost (XGB), Na&#239;ve Bayes (NB), k-Nearest Neighbors (k-NN), and Neural Networks (NN). Additionally, the model validation strategy is random hold-out. The findings obtained using various assessment measures show that ML algorithms do not perform well with imbalanced data, indicating that the problem of imbalanced data must be addressed. On the other hand, using balanced datasets created by SMOTE family techniques improves the performance of the classifiers. Moreover, the Friedman test, a statistical significance test, further confirms that the hybrid SMOTE family methods are better than the others, especially SMOTE-Tomek, which performs better than the other resampling approaches. Among the ML algorithms, the SVM model produced the best results with SMOTE-Tomek.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_70-Comparing_SMOTE_Family_Techniques_in_Predicting_Insurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things Multi-protocol Interoperability with Syntactic Translation Capability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120969</link>
        <id>10.14569/IJACSA.2021.0120969</id>
        <doi>10.14569/IJACSA.2021.0120969</doi>
        <lastModDate>2021-09-30T08:09:10.3870000+00:00</lastModDate>
        
        <creator>Nedaa H. Ahmed</creator>
        
        <creator>Ahmed M. Sadek</creator>
        
        <creator>Haytham Al-Feel</creator>
        
        <creator>Rania A. AbulSeoud</creator>
        
        <subject>Internet of things (IoT); interoperability; multiprotocol translation; message payload translation; SSN ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Because Internet of Things (IoT) systems contain different devices, infrastructures, and data formats, their success depends on the realization of full interoperability among these systems. Interoperability is a communication challenge that affects all layers of a system. In this paper, a transparent translator that solves interoperability issues in two layers of an IoT system is proposed. The first is the communication protocol layer, in which it is necessary to overcome differences between interaction patterns, such as request/response and publish/subscribe. The second is the syntactic layer, which refers to data encoding; this type of interoperability is achieved through the semantic sensor network (SSN) ontology. Tests and evaluations of the proposed translator in comparison with a similar translator were performed using the constrained application protocol (CoAP), the message queuing telemetry transport (MQTT) protocol, and the hypertext transfer protocol (HTTP), in addition to different data formats, such as JSON, CSV, and XML. The results reveal the efficiency of the proposed method in terms of application protocol interoperability. In addition, the suggested translator has the added feature of supporting different data encoding standards compared to the other translator.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_69-Internet_of_Things_Multi_protocol_Interoperability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Graphical Representation of Data in Web Application (Case Study: Covid-19 in the UK)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120968</link>
        <id>10.14569/IJACSA.2021.0120968</id>
        <doi>10.14569/IJACSA.2021.0120968</doi>
        <lastModDate>2021-09-30T08:09:10.3730000+00:00</lastModDate>
        
        <creator>Rockson Adomah</creator>
        
        <creator>Tariq Alwada’n</creator>
        
        <creator>Mohammed Al Masarweh</creator>
        
        <subject>Data representation; data visualization; accessibility standards; scalable vector graphics; Covid-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>This paper describes the analysis, design, and implementation of responsive data representation in a web application that can render data asynchronously to users by making an Application Programming Interface (API) request to a web server while, at the same time, providing high-quality downloadable Scalable Vector Graphics (SVG) images for journals, magazines, and other printed media. For this purpose, large-scale open-source Covid-19 data was used to improve Covid-19 data visualization and to explore other improvements that can be made for the proper representation of such vital data to the general public. During the development process, qualitative research into data representation with responsive charts and/or SVG image files was conducted, contrasting the two approaches, to answer questions such as what tools and technologies are often used, what the alternative tools and technologies are, and when, where, and why developers use a certain approach to data representation.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_68-Enhanced_Graphical_Representation_of_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Behaviour of Zika Virus and Identification of Motif</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120967</link>
        <id>10.14569/IJACSA.2021.0120967</id>
        <doi>10.14569/IJACSA.2021.0120967</doi>
        <lastModDate>2021-09-30T08:09:10.3400000+00:00</lastModDate>
        
        <creator>Pushpa Susant Mahapatro</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Circadian behaviour; consensus string; genome study; greedy search technique; motif search; regulatory proteins</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>ZIKV is a mosquito-borne virus. It is known to cause neurological disorders and congenital disabilities in newborns. The genome sequence of the Zika virus is used for this study. Essential cell functionalities such as circadian behaviour and gene expression are studied; regulatory proteins alternate functionality between daytime and night time. Motif identification is carried out by understanding the features of motifs, finding the count matrix, and formulating the profile matrix. The consensus string of the Zika virus is computed, and the motif score is calculated. Different motif-finding techniques, namely the Brute Force technique and the Greedy Search technique, are proposed. In the Brute Force technique, each motif is selected, its score is calculated, and then the minimum score is obtained. The Brute Force technique takes an enormous amount of time but is guaranteed to find a solution. The Greedy Search technique is not guaranteed to find a motif like the Brute Force technique is, but it can give a close answer in realistic time. This paper presents the identification of motifs in the Zika virus genome using these programming techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_67-Genetic_Behavior_of_Zika_Virus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Problem based Learning: An Experience of Evaluation based on Indicators, Case of Electronic Business in Professional Career of Systems Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120966</link>
        <id>10.14569/IJACSA.2021.0120966</id>
        <doi>10.14569/IJACSA.2021.0120966</doi>
        <lastModDate>2021-09-30T08:09:10.3100000+00:00</lastModDate>
        
        <creator>C&#233;sar Baluarte-Araya</creator>
        
        <creator>Ernesto Suarez-Lopez</creator>
        
        <creator>Oscar Ramirez-Valdez</creator>
        
        <subject>Problem-based learning; formative research; competences; evaluation; performance indicator; skills; deliverable report</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>It is a reality that universities place great emphasis on formative research in the training of their students in order to increase their knowledge, skills, and attitudes and to achieve competences. This paper shows the experience of applying the Problem-Based Learning (PBL) methodology to assess learning based on indicators determined from criteria corresponding to the competences of the course under study, Electronic Business, at the Professional School of Systems Engineering (EPIS) of the Universidad Nacional de San Agust&#237;n de Arequipa (UNSA), Arequipa, Peru, with theory and laboratory practice taught by two teachers. The objective is to apply an evaluation strategy for the development of competences, with active didactics, to an engineering training course. The methodology used is Problem-Based Learning applied to a formative research project based on real problems common to many organizations. During the semester, students solve the stated problems in groups and then submit a deliverable report and a formative research report for each problem, which are scored through a rubric. The teachers make contributions and provide feedback in the report for the improvement and experience that the students acquire in their training. The results obtained show that the objectives are achieved: increasing knowledge, skills, and attitudes; evaluating students&#39; training adequately; developing the competences of the course; and achieving the expected student outcomes. This indicates that applying PBL with formative research would provide good results for other courses of the professional career, allowing continuous improvement in the teaching-learning process. The conclusion is that an adequate assessment of learning based on indicators, with an active didactic strategy that is effectively planned and adequately applied to real-life problems, makes it possible to achieve the expected student outcomes.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_66-Problem_based_Learning_An_Experience_of_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Goal-oriented Email Stream Classifier with A Multi-agent System Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120965</link>
        <id>10.14569/IJACSA.2021.0120965</id>
        <doi>10.14569/IJACSA.2021.0120965</doi>
        <lastModDate>2021-09-30T08:09:10.2930000+00:00</lastModDate>
        
        <creator>Wenny Hojas-Mazo</creator>
        
        <creator>Mailyn Moreno-Espino</creator>
        
        <creator>Jos&#233; Vicente Bern&#225; Mart&#237;nez</creator>
        
        <creator>Francisco Maci&#225; P&#233;rez</creator>
        
        <creator>Iren Lorenzo Fonseca</creator>
        
        <subject>Email stream classification; goal-oriented requirements; i*; multi-agent system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Nowadays, email is one of the most widely used means of communication, despite the rise of other communication methods such as instant messaging and social networks. The need to automate email stream management increases for reasons such as multi-folder categorization and spam email classification. There are solutions based on email content capable of handling elements such as the subjective nature of text and the adverse effects of concept drift, among others. This paper presents an email stream classifier with a goal-oriented approach for a client and server environment. The i* language was the basis for designing the proposed email stream classifier: the email environment was represented with the early requirements model and the proposed classifier with the late requirements model. The classifier was implemented following a multi-agent system approach supported by the JADE agent platform and the Implementation_JADE pattern. The behavior of the agents was taken from an existing classifier. The multi-agent classifier was evaluated using functional, efficacy, and performance tests, which compared the existing classifier with the multi-agent approach. The results obtained were satisfactory in all the tests. The performance of the multi-agent approach was better than that of the existing classifier due to the use of multiple threads.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_65-Goal_oriented_Email_Stream_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Categorical Vehicle Classification and Tracking using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120964</link>
        <id>10.14569/IJACSA.2021.0120964</id>
        <doi>10.14569/IJACSA.2021.0120964</doi>
        <lastModDate>2021-09-30T08:09:10.2800000+00:00</lastModDate>
        
        <creator>Deependra Sharma</creator>
        
        <creator>Zainul Abdin Jaffery</creator>
        
        <subject>Vehicle classification; generative adversarial networks; single shot multibox detector; vehicle tracking; deep neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The classification and tracking of vehicles is a crucial component of modern transportation infrastructure. Transport authorities invest significantly in it, since it is one of the most critical transportation facilities for collecting and analyzing traffic data to optimize route utilization, increase transportation safety, and build future transportation plans. Numerous novel traffic evaluation and monitoring systems have been developed as a result of recent improvements in fast computing technologies. However, camera-based systems still lag in accuracy, as they are mostly constructed using limited traffic datasets that do not adequately account for weather conditions, camera viewpoints, and highway layouts, forcing the systems to make trade-offs in terms of the number of actual detections. This research offers a categorical vehicle classification and tracking system based on deep neural networks to overcome these difficulties. The research incorporates the capabilities of the generative adversarial network framework to compensate for weather variability, Gaussian models to look for roadway configurations, a single shot multibox detector for categorical vehicle detection with high precision, and the boosted efficient binary local image descriptor for tracking multiple vehicle objects. The study also includes the publication of a high-quality traffic dataset with four different perspectives in various environments. The proposed approach has been applied to the published dataset and its performance has been evaluated. The results verify that, using the proposed approach, one can attain higher detection and tracking accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_64-Categorical_Vehicle_Classification_and_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Feature Extraction for Complementing Authentication in Hand-based Biometric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120963</link>
        <id>10.14569/IJACSA.2021.0120963</id>
        <doi>10.14569/IJACSA.2021.0120963</doi>
        <lastModDate>2021-09-30T08:09:10.2470000+00:00</lastModDate>
        
        <creator>Mahalakshmi B S</creator>
        
        <creator>Sheela S V</creator>
        
        <subject>Biometric; security; feature extraction; hand geometric; authentication; palmprint recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>With the increasing usage of hand-based biometrics in authentication systems, there is a need to provide stronger security owing to the continual evolution of threats. The security of a hand authentication system depends entirely upon the unique and distinct selection of features from the hand image, with the properties of robustness, fault tolerance, and simpler implementation. A review of the existing feature extraction literature shows an inclination towards sophisticated processes and reveals various other limitations. Therefore, this manuscript resolves these limitations by presenting a novel model of feature extraction which is carried out in a more progressive and less iterative form, unlike existing approaches. The proposed system achieves its research goal by introducing a simplified feature extraction operation via storage, blurring, color space conversion, and binary image conversion; the modelling aspect of the study emphasizes image enhancement along with fuzzification to yield more efficient results. An experimental study has been carried out using Python on a hand-biometric dataset, where the outcome shows significant supportability over any palmprint recognition system. The study outcome is compared with the most standard implementations of feature extraction, finding that the proposed system offers better accuracy in contrast with existing systems.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_63-A_Novel_Feature_Extraction_for_Complementing_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brown Spot Disease Severity Level Detection using Binary-RGB Image Masking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120962</link>
        <id>10.14569/IJACSA.2021.0120962</id>
        <doi>10.14569/IJACSA.2021.0120962</doi>
        <lastModDate>2021-09-30T08:09:10.2300000+00:00</lastModDate>
        
        <creator>N. S. A. M Taujuddin</creator>
        
        <creator>N. H. N. A Halim</creator>
        
        <creator>M. Siti Norsuha</creator>
        
        <creator>R. Koogeethavani</creator>
        
        <creator>Z. H Husin</creator>
        
        <creator>A. R. A Ghani</creator>
        
        <creator>Tara Othman Qadir</creator>
        
        <subject>Brown spot; image enhancement; binary image; RGB image; masking process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Agriculture is known as one of the main factors in the growth of a country. Paddy is the most widely planted crop in Malaysia. The rice produced is the main food source for Malaysians as well as a source of income for the country. However, a disease known as Brown Spot (BS) attacks paddy plants and threatens their quality. This disease is caused by a Bipolaris fungus and manifests as the development of oval, dark brown to purplish-brown spots on the leaf. It is regarded as among the most hazardous diseases and may result in degradation of paddy production. Brown Spot disease can spread through airborne spores from plant to plant in the field. In this research, a system that helps people, especially farmers, to detect the disease at an early stage is developed. Real images captured at the paddy field are processed in MATLAB with image enhancement, background removal, and binary and RGB image masking processes. To determine the Brown Spot area, the pixel intensity between infected and non-infected areas is calculated. The severity level table developed by Horsfall and Heuberger is then used as a reference to classify the severity level of Brown Spot disease. A GUI is created to detect Brown Spot disease automatically. From the study conducted, Brown Spot detection is approximately 89% accurate compared to manual evaluation by plant pathologists.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_62-Brown_Spot_Disease_Severity_Level_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-based e-Health Framework for COVID-19 Patients Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120961</link>
        <id>10.14569/IJACSA.2021.0120961</id>
        <doi>10.14569/IJACSA.2021.0120961</doi>
        <lastModDate>2021-09-30T08:09:10.2170000+00:00</lastModDate>
        
        <creator>Fahad Albogamy</creator>
        
        <subject>COVID-19; IoT; healthcare; e-Health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has produced a global public health emergency with rapid evolution and tragic consequences. The fight against this disease, whose epidemiological, clinical, and prognostic characteristics are still being studied in recent works, is forcing a change in the form of care, including transforming some face-to-face consultations into non-face-to-face ones. Recently, various initiatives have emerged to incorporate the Internet of Things (IoT) in different sectors, especially the health sector generally and e-Health systems specifically. Millions of devices are connected and generating massive amounts of data. In this sense, based on the health sector&#39;s experience in managing the pandemic caused by COVID-19, it has been determined that monitoring potential COVID-19 patients is still a great challenge for the latest technologies. In this paper, an IoT-based monitoring framework is proposed to help health caregivers obtain useful information during the current COVID-19 pandemic, thus bringing the direct benefits of monitoring patients&#39; health, faster hospital care, and cost reduction. An analysis of the proposed framework was carried out, and a prototype system was developed and evaluated. Moreover, we evaluated the efficacy of the proposed framework in detecting potentially serious cases of COVID-19 among patients treated in home isolation.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_61-IoT_based_e_Health_Framework_for_COVID_19_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Momentous Fragmentary Formants in Talaqi-like Neoteric Assessment of Quran Recitation using MFCC Miniature Features of Quranic Syllables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120960</link>
        <id>10.14569/IJACSA.2021.0120960</id>
        <doi>10.14569/IJACSA.2021.0120960</doi>
        <lastModDate>2021-09-30T08:09:10.2000000+00:00</lastModDate>
        
        <creator>Mohamad Zulkefli Adam</creator>
        
        <creator>Noraimi Shafie</creator>
        
        <creator>Hafiza Abas</creator>
        
        <creator>Azizul Azizan</creator>
        
        <subject>Speech processing; MFCC-Formant; Quranic recitation assessment; human-guided threshold classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The use of technological speech recognition systems with a variety of approaches and techniques has grown rapidly in a variety of human-machine interaction applications. Building on this, a computerized assessment system to identify errors in reading the Qur&#39;an can be developed to leverage the advantages of the technology that exists today. Based on Quranic syllable utterances, which contain Tajweed rules that generally consist of Makhraj (articulation process), Sifaat (letter features or pronunciation) and Harakat (pronunciation extension), this paper attempts to present the technological capabilities of realizing Quranic recitation assessment. The transformation of the digital signal of the Quranic voice with the identification of reading errors (based on the Law of Tajweed) is the main focus of this paper. This involves many processing stages related to the representation of the Quranic syllable-based Recitation Speech Signal (QRSS), feature extraction, the non-phonetic transcription Quranic Recitation Acoustic Model (QRAM), and threshold classification processes. MFCC-Formants are used in a miniature state, hybridized with three bands, to represent QRSS combined vowels and consonants. A human-guided threshold classification approach is used to assess recitation based on Quranic syllables, with threshold classification performance for the low, medium, and high band groups of 87.27%, 86.86% and 86.33%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_60-Analysis_of_Momentous_Fragmentary_Formants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analogy of the Application of Clustering and K-Means Techniques for the Approximation of Values of Human Development Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120959</link>
        <id>10.14569/IJACSA.2021.0120959</id>
        <doi>10.14569/IJACSA.2021.0120959</doi>
        <lastModDate>2021-09-30T08:09:10.1700000+00:00</lastModDate>
        
        <creator>Jos&#233; Luis Morales Rocha</creator>
        
        <creator>Mario Aurelio Coyla Zela</creator>
        
        <creator>Nakaday Irazema Vargas Torres</creator>
        
        <creator>Genciana Serruto Medina</creator>
        
        <subject>Clustering; K-Means; elbow method; cohesion; separation; human development index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The objective of this study was to apply clustering and K-Means techniques to classify the departments of Peru according to their Human Development Index (HDI). In this article, the elbow method was used to determine the optimal number of clusters, the classification algorithms were applied to group the departments of Peru according to their similarities, and the Principal Component Analysis (PCA) technique was used for a better display of the clusters. After applying the unsupervised algorithms, the results were most relevant in clusters 2 and 4 according to their HDI, made up of the departments of Arequipa, the Constitutional Province of Callao, Ica, Lima, Moquegua and Tacna, where the most notable indicators are life expectancy at birth, the population with full secondary education, the number of years of education, the average per capita income, and the state&#39;s density index. The results obtained by the K-Means algorithm are more cohesive than those of the Clustering algorithm.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_59-Analogy_of_the_Application_of_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Data Center Network Security based on Next-Generation Firewall</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120958</link>
        <id>10.14569/IJACSA.2021.0120958</id>
        <doi>10.14569/IJACSA.2021.0120958</doi>
        <lastModDate>2021-09-30T08:09:10.1530000+00:00</lastModDate>
        
        <creator>Andi Jehan Alhasan</creator>
        
        <creator>Nico Surantha</creator>
        
        <subject>Network security; next-generation firewall; TCP SYN attack; UDP flood attack; ICMP smurf attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>This study aims to create a network security system that can mitigate attacks carried out by internal users and reduce attacks from internal networks, overcoming the difficulty of mitigating such attacks. The goal of this research is to analyze the effectiveness of a Next-Generation Firewall implemented to improve network security. The method used in this research is a comparative test of a TCP SYN attack, UDP flood attack, ICMP smurf attack, and DHCP starvation attack on a company network. From the experimental results, it can be concluded that the Next-Generation Firewall performs significantly better at mitigating attacks carried out by internal users on a company network. It can increase the security of data communication networks against threats from internal networks.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_58-Evaluation_of_Data_Center_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personally Identifiable Information (PII) Detection in the Unstructured Large Text Corpus using Natural Language Processing and Unsupervised Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120957</link>
        <id>10.14569/IJACSA.2021.0120957</id>
        <doi>10.14569/IJACSA.2021.0120957</doi>
        <lastModDate>2021-09-30T08:09:10.1370000+00:00</lastModDate>
        
        <creator>Poornima Kulkarni</creator>
        
        <creator>Cauvery N K</creator>
        
        <subject>PII; natural language processing; word2vec machine learning; PII detection; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Personally Identifiable Information (PII) has gained much attention with the rapid development of technologies and the exploitation of information relating to individuals. Corporates and other organizations store a large amount of information that is primarily disseminated in the form of emails that include personal information of users, employees, and customers. The security aspects of PII storage have been ignored, raising serious concerns about individual privacy. A significant concern arises about comprehending the responsibilities regarding the uses of PII. However, in real-time scenarios, email data is regarded as unstructured text data, and detecting PII from such an unstructured large text corpus is quite challenging. This paper presents an intelligent clustering approach for automatically detecting personally identifiable information (PII) from a large text corpus. The focus of the proposed study is to design a model that receives text content and detects possible PII attributes. Therefore, this paper presents a clustering-based PII Model (C-PPIM) based on NLP and unsupervised learning to address the detection of PII in an unstructured large text corpus. NLP is used to perform topic modeling, and Byte mLSTM, a different sequence-model approach, is implemented to address clustering problems in PII detection. The performance analysis of the proposed model is carried out against existing hierarchical clustering with respect to silhouette and cohesion scores. The outcome indicated the effectiveness of the proposed system, which highlights significant PII attributes, with significant scope for real-time implementation. In contrast, existing techniques are too expensive to function and fit in real-time environments.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_57-Personally_Identifiable_Information_PII_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Internet of Things (IoT) Reference Model for an Infectious Disease Active Digital Surveillance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120956</link>
        <id>10.14569/IJACSA.2021.0120956</id>
        <doi>10.14569/IJACSA.2021.0120956</doi>
        <lastModDate>2021-09-30T08:09:10.1070000+00:00</lastModDate>
        
        <creator>Nur Hayati</creator>
        
        <creator>Kalamullah Ramli</creator>
        
        <creator>Muhammad Suryanegara</creator>
        
        <creator>Muhammad Salman</creator>
        
        <subject>IoT; framework; digital surveillance; infectious disease; Covid-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Internet of Things (IoT) technological assistance for infectious disease surveillance is urgently needed when outbreaks occur, especially during pandemics. The IoT has great potential as an active digital surveillance system, since it can provide the meaningful, time-critical data needed to design infectious disease surveillance. Many studies have developed the IoT for such surveillance; however, such designs have been developed based on the authors&#39; own ideas or innovations, without consideration of a specific reference model. Therefore, it is essential to build a model that could encompass end-to-end IoT-based surveillance system design. This paper proposes a reference model for the design of an active digital surveillance system for infectious diseases with IoT technology. It consists of 14 attributes with specific indicators to accommodate IoT characteristics and to meet the needs of infectious disease surveillance design. A proof of concept was conducted by adopting the reference model into an IoT system design for the active digital surveillance of the Covid-19 disease. The use-case of the design was a community-based surveillance (CBS) system utilizing the IoT to detect initial symptoms and prevent close contacts of Covid-19 in a nursing home. We then elaborated its compliance with the 14 attributes of the reference model, reflecting how the IoT design should meet the criteria mandated by the model. The study finds that the proposed reference model could eventually benefit engineers who develop complete IoT designs, as well as epidemiologists, the government, and the relevant policy makers who work to prevent infectious diseases from worsening.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_56-An_Internet_of_Things_IoT_Reference_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Scalability Issues within Blockchain-based Solutions in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120955</link>
        <id>10.14569/IJACSA.2021.0120955</id>
        <doi>10.14569/IJACSA.2021.0120955</doi>
        <lastModDate>2021-09-30T08:09:10.0770000+00:00</lastModDate>
        
        <creator>Ahmed Alrehaili</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ali Tufail</creator>
        
        <subject>Blockchain; IoT; scalability; issues; distributed ledger; throughput; latency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Recently, enormous interest has been shown by both academia and industry in concepts and techniques related to connecting heterogeneous IoT devices. The IoT is now considered a rapidly evolving technology, with billions of IoT devices expected to be deployed around the globe in the upcoming years. These devices must be maintained, managed, traced, and secured in a timely and flexible manner. Previously, centralized approaches constituted the mainstream solutions for handling the ever-increasing number of connected IoT devices. However, these approaches may be inadequate for handling devices at a massive scale. Blockchain, as a distributed approach, presents a promising solution to tackle the concerns of IoT device connectivity. However, current Blockchain platforms face several scalability issues in accommodating diverse IoT devices without losing efficiency. This paper performs a comprehensive analysis of recent blockchain-based scalability solutions applied to the Internet of Things domain. We propose an evaluation framework for scalability in IoT environments, encompassing critical criteria such as throughput, latency, and block size. Moreover, we conduct an assessment of the notable scalability solutions and conclude by highlighting six overarching scalability issues of blockchain-based solutions in IoT that ought to be resolved by the industry and research community.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_55-A_Comparative_Analysis_of_Scalability_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation Study of Elliptic Curve Cryptography Scalar Multiplication on Raspberry Pi4</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120954</link>
        <id>10.14569/IJACSA.2021.0120954</id>
        <doi>10.14569/IJACSA.2021.0120954</doi>
        <lastModDate>2021-09-30T08:09:10.0430000+00:00</lastModDate>
        
        <creator>Fatimah Alkhudhayr</creator>
        
        <creator>Tarek Moulahi</creator>
        
        <creator>Abdulatif Alabdulatif</creator>
        
        <subject>IoT; elliptic curve cryptography; fast scalar multiplication; raspberry Pi4</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The Internet of Things (IoT) is defined as a collection of autonomous devices that connect and network with each other via the Internet without the requirement for human interaction. It enhances our daily lives through personal devices, healthcare sensing, retail sensing, and industrial control, as well as smart homes, smart cities, and smart supply chains. Although the IoT offers significant benefits, it has inherent issues, including security and privacy risks, memory size limitations, and processing capability challenges. This paper describes the application of elliptic curve cryptography (ECC) in a simulated IoT environment to ensure the confidentiality of data passed between connected devices. Scalar multiplication is the main operation of ECC, and it is primarily used for key generation, encryption, and decryption. The aim of this paper is to evaluate and demonstrate the efficiency of adapting lightweight ECC to IoT devices. In the study outlined in this paper, scalar multiplication was implemented on a Raspberry Pi4, and processing time and energy consumption were measured to compare performance. The comparison was made on the scalar multiplication of both fast and basic ECC algorithms. The results of the performance test revealed that fast scalar multiplication reduced the computation time in comparison with basic scalar multiplication while consuming a similar level of energy.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_54-Evaluation_Study_of_Elliptic_Curve_Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of a Model and Development of an Algorithm for Solving the Wave Problem under Pulsed Loading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120953</link>
        <id>10.14569/IJACSA.2021.0120953</id>
        <doi>10.14569/IJACSA.2021.0120953</doi>
        <lastModDate>2021-09-30T08:09:10.0130000+00:00</lastModDate>
        
        <creator>Khabdolda Bolat</creator>
        
        <creator>Zhuzbayev Serik</creator>
        
        <creator>Sabitova Diana S</creator>
        
        <creator>Aitkenova Ailazzat A</creator>
        
        <creator>Serikbayeva Sandugash</creator>
        
        <creator>Badekova Karakoz Zh</creator>
        
        <creator>Yerzhanova Akbota Y</creator>
        
        <subject>Information systems; wave process; explosive technologies; method of bicharacteristics; stress tensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The article considers approaches and methods for modeling the wave process resulting from blasting operations. The analysis of modeling methods showed that, in the context of the task, it is advisable to conduct the study based on the method of bicharacteristics, optimized using the splitting method. The defining equations were derived, the point scheme of the template was selected, and the resolving difference equations for dynamic boundary value problems of a seismic nature were obtained. Based on this method, an algorithm for calculating the relationship between the stress and the seismic medium was developed, which allowed generating code and designing an information system for calculating the wave process.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_53-Construction_of_a_Model_and_Development_of_an_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Intruder in Cloud Computing Environment using Swarm Inspired based Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120952</link>
        <id>10.14569/IJACSA.2021.0120952</id>
        <doi>10.14569/IJACSA.2021.0120952</doi>
        <lastModDate>2021-09-30T08:09:09.9800000+00:00</lastModDate>
        
        <creator>Nishika </creator>
        
        <creator>Kamna Solanki</creator>
        
        <creator>Sandeep Dalal</creator>
        
        <subject>Cloud computing; intrusion detection system; cuckoo search; feed forward back propagation neural network (FFBPNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Cloud computing services offer a resource pool with a wide range of storage for large amounts of data. Cloud services are generally used as a demand-driven private or open data forum, and the increase in use has led to security concerns. Therefore, it is necessary to design an accurate Intrusion Detection System (IDS) to identify suspected nodes in the cloud computing environment. This is possible by monitoring network traffic so that the quality of service and performance of the system can be maintained. Several researchers have worked on designing valid IDS with the help of machine learning approaches. A single classification algorithm appears unable to detect intruders with high accuracy. Therefore, a hybrid approach is presented: a combination of Cuckoo Search (CS) as an optimization algorithm and a Feed Forward Back Propagation Neural Network (FFBPNN) as a multi-class classification approach. The user&#39;s requests to access cloud data are collected and essential features are selected using CS as an optimization approach. The selected features are used to train the FFBPNN with reduced training time and complexity. The experimental analysis has been performed in terms of precision, recall, F-measure, and accuracy, with observed values of 85.5% precision, 86.4% recall, 85.9% F-measure, and 86.22% accuracy. Finally, these parameters are also compared with an existing approach.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_52-Detection_of_Intruder_in_Cloud_Computing_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building a Standard Model of an Information System for Working with Documents on Scientific and Educational Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120951</link>
        <id>10.14569/IJACSA.2021.0120951</id>
        <doi>10.14569/IJACSA.2021.0120951</doi>
        <lastModDate>2021-09-30T08:09:09.9500000+00:00</lastModDate>
        
        <creator>Serikbayeva Sandugash</creator>
        
        <creator>Tussupov Jamalbek</creator>
        
        <creator>Sambetbayeva Madina</creator>
        
        <creator>Yerzhanova Akbota</creator>
        
        <creator>Abduvalova Ainur</creator>
        
        <subject>Scientific and educational activities; distributed information systems; electronic library; metadata; model; search; interoperability; document; ontology; Z39. 50; SRU/SRW; Apache Solr</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>To increase the effectiveness of research, it is necessary to have access to systematic information resources on scientific work. Work in any field of science begins with research and the search for scientific information, but with the growing number of scientific articles, books, monographs, and patents, the search for information becomes more and more difficult. Creating a unified information system allows scientists to quickly become acquainted with the results of other scientific research and prevents its duplication. The article discusses the technological techniques of distributed information systems that support scientific and educational activities. The main tasks of creating a model of a distributed information system that supports scientific and educational activities, the functional capabilities of the model, the concept of metadata, and the requirements for the metadata profile are described. The task, subject area, subjects, objects, and main functionality of the information system are defined, and a list of the main types of information resources is provided. The paper analyzes the functional requirements for such systems and describes a technological approach to creating a standard model of an information system to support scientific and educational activities, organized in the form of an electronic library for working with documents on scientific heritage. The article describes the architecture of the information system, the principles of integration with the digital depository, and the rules for the presentation and transformation of metadata.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_51-Building_a_Standard_Model_of_an_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Segmentation and Profiling for Life Insurance using K-Modes Clustering and Decision Tree Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120950</link>
        <id>10.14569/IJACSA.2021.0120950</id>
        <doi>10.14569/IJACSA.2021.0120950</doi>
        <lastModDate>2021-09-30T08:09:09.9330000+00:00</lastModDate>
        
        <creator>Shuzlina Abdul-Rahman</creator>
        
        <creator>Nurin Faiqah Kamal Arifin</creator>
        
        <creator>Mastura Hanafiah</creator>
        
        <creator>Sofianita Mutalib</creator>
        
        <subject>Customer segmentation; customer profiling; decision tree; insurance domain; k-modes clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Customer segmentation and profiling have become an important marketing strategy in most businesses, both as preparation for better customer services and for enhancing customer relationship management. This study presents segmentation and classification techniques for the insurance industry via two data mining approaches: K-Modes Clustering and a Decision Tree Classifier. Data were gathered from an insurance company. The Decision Tree algorithm was applied for customer profile classification, comparing two splitting criteria: Entropy and Gini. K-Modes Clustering segmented the customers into three prominent groups: “Potential High-Value Customers”, “Low-Value Customers”, and “Disinterested Customers”. The Decision Tree with the Gini criterion and 10-fold cross-validation was found to be the best-fit model, with an average accuracy of 81.30%. This segmentation would help the marketing team of an insurance company strategize their marketing plans for different groups of customers, formulating different approaches to maximize customer value. Customers can receive customized insurance plans that satisfy their needs, as well as better assistance and services from insurance companies.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_50-Customer_Segmentation_and_Profiling_for_Life_Insurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-supervised Deep Learning for Stress Prediction: A Review and Novel Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120949</link>
        <id>10.14569/IJACSA.2021.0120949</id>
        <doi>10.14569/IJACSA.2021.0120949</doi>
        <lastModDate>2021-09-30T08:09:09.9030000+00:00</lastModDate>
        
        <creator>Mazin Alshamrani</creator>
        
        <subject>Deep learning; semi-supervised learning; contrastive learning; physiological data; stress prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>This research introduces a novel semi-supervised deep learning model for stress detection: an intelligent solution that detects the stress state from physiological parameters. The first part of this research presents a concise review of different intelligent techniques for processing physiological data and the emotional states of humans; for all covered methods, special attention is paid to semi-supervised learning algorithms. In the second part of the paper, a novel semi-supervised deep learning model for predicting the stress state is proposed. It is the first attempt to use contrastive learning for stress prediction tasks. The model is based on utilizing generative and contrastive features specially tailored for treating time-series data. The widely popular multimodal WESAD (Wearable Stress and Affect Detection) dataset, consisting of physiological and motion data recorded from wrist- and chest-worn devices, is exploited for experimental purposes. To provide an intelligent solution that will be widely applicable, only the wrist data recorded from smartwatches is used during the model&#39;s training. The proposed model is tested on a single subject&#39;s data and predicts stress and non-stress events. Because the initial data were unbalanced, with only 11% stress data, data augmentation techniques are applied within the model to provide additional reliable training information. The model shows significant potential in clustering stress conditions, and its accuracy is in the range of other state-of-the-art solutions. The most significant benefits of this model are its prediction capabilities when dealing with unlabeled data and its performance when undersized data cannot be processed optimally by traditional intelligent methods.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_49-Semi_supervised_Deep_Learning_for_Stress_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Deep Learning in Satellite Image-based Land Cover Mapping in Africa</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120948</link>
        <id>10.14569/IJACSA.2021.0120948</id>
        <doi>10.14569/IJACSA.2021.0120948</doi>
        <lastModDate>2021-09-30T08:09:09.8870000+00:00</lastModDate>
        
        <creator>Nzurumike Obianuju Lynda</creator>
        
        <creator>Ezeomedo Innocent C</creator>
        
        <creator>Nwojo Agwu Nnanna</creator>
        
        <creator>Ali Ahmad Aminu</creator>
        
        <subject>Deep learning; satellite image classification; land cover mapping; Africa</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Deep Learning Networks (DLN), in particular Convolutional Neural Networks (CNN), have achieved state-of-the-art results in various computer vision tasks, including automatic land cover classification from satellite images. However, despite their remarkable performance and broad use in developed countries, applying these advanced machine learning algorithms has remained a huge challenge in developing continents such as Africa, because the necessary tools, techniques, and technical skills are very scarce or expensive. Recently, new approaches to satellite image-based land cover classification with DL have yielded significant breakthroughs, offering novel opportunities for further development and application that low-resource continents such as Africa can take advantage of. This paper reviews some of the notable challenges to the application of DL for satellite image-based classification tasks in developing continents, then reviews the emerging solutions as well as the prospects for their use. Harnessing the power of satellite data and deep learning for land cover mapping will help many developing continents make informed policies and decisions to address some of their most pressing challenges, including urban and regional planning, environmental protection and management, agricultural development, forest management, and disaster and risk mitigation.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_48-Application_of_Deep_Learning_in_Satellite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Evaluation of an Engagement Framework for e-Learning Gamification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120947</link>
        <id>10.14569/IJACSA.2021.0120947</id>
        <doi>10.14569/IJACSA.2021.0120947</doi>
        <lastModDate>2021-09-30T08:09:09.8570000+00:00</lastModDate>
        
        <creator>Mohammed Abdulaziz Alsubhi</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <subject>e-learning; learning activities; gamification; game elements; engagement framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Recently, gamification in educational software development to improve student engagement and performance has become prevalent; gamification is used to counter attrition and dropout issues in e-learning. A handful of methods for the gamification of e-learning systems are presented in the literature. However, these methods lack consistency: the number and types of game elements they use vary. In addition, there is no engagement framework to guide the application of game elements to e-learning systems. This paper therefore provides insights into gamification and how it is used in e-learning systems, then proposes and evaluates an engagement framework that guides developers in adding game elements to e-learning to improve student engagement and performance. The framework consists of three components: game elements, learning activities, and engagement factors. Two experts evaluated the engagement framework via semi-structured interviews. The evaluation results indicate that developers can efficiently and effectively use the framework to gamify e-learning systems for improved student engagement and performance.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_47-Design_and_Evaluation_of_an_Engagement_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Convolutional Neural Networks for Binary Recognition Task of Two Similar Industrial Machining Parts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120946</link>
        <id>10.14569/IJACSA.2021.0120946</id>
        <doi>10.14569/IJACSA.2021.0120946</doi>
        <lastModDate>2021-09-30T08:09:09.8400000+00:00</lastModDate>
        
        <creator>Hadyan Hafizh</creator>
        
        <creator>Amir Hamzah Abdul Rasib</creator>
        
        <creator>Rohana Abdullah</creator>
        
        <creator>Mohd Hadzley Abu Bakar</creator>
        
        <creator>Anuar Mohamed Kassim</creator>
        
        <subject>Convolutional neural networks; binary recognition; machining parts; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Misclassifying parts in a small-to-medium manufacturing enterprise can have serious consequences. Manual inspection, as currently practiced, compromises product traceability because the part&#39;s number is not digitally visible during inspection. Owing to this lack of modern traceability, customers receive incorrect parts, and the same incidents continue to occur. It is therefore essential to transform manual inspections into digital, automated ones. AI-based technologies have recently been employed to enable smart, intelligent recognition systems for industrial machining parts. Convolutional Neural Networks (CNN) are widely used for image recognition tasks and are gaining popularity among deep learning algorithms. In this paper, a CNN model is used to perform binary recognition of two similar industrial machining parts. The model has been trained to recognise two classes of machining parts, Part A and Part B, on a dataset of 2447 original and augmented images across both classes. Performance metrics were measured during the training process, and 10 experiments were conducted to evaluate the model. The test results reveal that the CNN model achieves 98% mean accuracy, 97.1% precision for Part A, 99% precision for Part B, and an AUC value of 0.982. The results demonstrate the effectiveness of CNN-based part recognition, which offers an effective alternative and a compelling method for quality assurance in small-to-medium manufacturing enterprises.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_46-Application_of_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Efforts and Challenges in Crowd-based Requirements Engineering: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120945</link>
        <id>10.14569/IJACSA.2021.0120945</id>
        <doi>10.14569/IJACSA.2021.0120945</doi>
        <lastModDate>2021-09-30T08:09:09.7800000+00:00</lastModDate>
        
        <creator>Rosmiza Wahida Abdullah</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <creator>Siti Azirah Asmai</creator>
        
        <creator>Seok-Won Lee</creator>
        
        <creator>Zarina Mat Zain</creator>
        
        <subject>Crowd-based requirement engineering; requirements engineering; requirements elicitation; software engineering; crowdsourcing; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Eliciting software system development requirements is a challenging task, as the information comes from various resources, the most constructive of which is the stakeholders of the system to be developed. It is critical yet time-consuming to capture the essential requirements for realizing a reliable and workable software system. The Crowd-based Requirements Engineering (crowd-based RE) approach adapts the crowdsourcing technique to access an extensive range of stakeholders and save time, especially for generic systems with no clearly defined stakeholders. This paper presents current research efforts and challenges in crowd-based RE. A systematic literature review method is adopted to explore the literature based on two specific research questions: the first aimed at identifying research efforts on crowd-based RE, and the second focused on the main challenges discovered in pursuing crowd-based RE. The findings from the literature review show that many efforts have been made to explore and further improve crowd-based RE. This paper provides a foundation for research on improving crowdsourcing techniques for the benefit of requirements engineering.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_45-Research_Efforts_and_Challenges_in_Crowd_based_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted Clustering for Deep Learning Approach in Heart Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120944</link>
        <id>10.14569/IJACSA.2021.0120944</id>
        <doi>10.14569/IJACSA.2021.0120944</doi>
        <lastModDate>2021-09-30T08:09:09.7630000+00:00</lastModDate>
        
        <creator>BhandareTrupti Vasantrao</creator>
        
        <creator>Selvarani Rangasamy</creator>
        
        <subject>Learning approach; weighted clustering; heart disease diagnosis; gain factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>An approach to heart disease diagnosis based on weighted clustering is presented in this paper. Existing heart diagnosis approaches reach a decision by correlating the feature vector of a querying sample with the knowledge available to the system. As the learning data grows, the search overhead increases, which tends to delay decision making. Linear mapping is improved by clustering the large database information; however, data clustering is limited by the growth of training information and the characteristics of the learning features. To overcome this and achieve accurate clustering, a weighted clustering approach based on a gain factor is proposed. This approach updates the cluster information based on dual-factor monitoring of distance and gain parameters. The presented approach shows an improvement in mining performance in terms of accuracy, sensitivity, and recall rate.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_44-Weighted_Clustering_for_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Decision Support System Framework for Leaf Image Analysis to Improve Crop Productivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120943</link>
        <id>10.14569/IJACSA.2021.0120943</id>
        <doi>10.14569/IJACSA.2021.0120943</doi>
        <lastModDate>2021-09-30T08:09:09.7300000+00:00</lastModDate>
        
        <creator>Meeradevi</creator>
        
        <creator>Monica R Mundada</creator>
        
        <subject>Deep learning; activation function; attention mechanism; dropout operation; transfer learning; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Crop disease is one of the major problems for agriculture in India. Identifying a disease and classifying its type is most important, and this can be made possible using deep learning techniques. To do so, a verified dataset consisting of healthy and diseased leaf images of all crops is required. The proposed model uses a hybrid approach that integrates a VGG16 classifier with an attention mechanism, a transfer learning approach, and dropout operations. Using a rice disease dataset, the proposed approach achieves a training accuracy of 96.45 percent, a training loss of 0.09, and a validation loss of 0.44. The dataset, collected from the PlantVillage project for rice leaves, consists of 4955 images covering Brown Spot, Healthy, Hispa, and Leaf Blast classes. The proposed model uses an attention mechanism that focuses mainly on part of the image rather than the whole image, using a glimpse ratio of 3:1. The traditional method of detecting crop diseases requires the deep experience and knowledge of field experts, which is time-consuming, ineffective, and costly. In this study, Deep Convolutional Neural Networks (DCNN) and Transfer Learning with Attention models are used to detect diseases associated with rice plants without overfitting the model.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_43-Hybrid_Decision_Support_System_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Business Process Modeling with Context and Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120942</link>
        <id>10.14569/IJACSA.2021.0120942</id>
        <doi>10.14569/IJACSA.2021.0120942</doi>
        <lastModDate>2021-09-30T08:09:09.7000000+00:00</lastModDate>
        
        <creator>Jamal EL BOUROUMI</creator>
        
        <creator>Hatim GUERMAH</creator>
        
        <creator>Mahmoud NASSAR</creator>
        
        <subject>Business process; business process modeling; context; context-awareness; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>A business process is a sequence of events and tasks that encompasses actions and people. A company that pays close attention to its business processes therefore has to clearly identify and define the procedures governing their relations. However, with the exponential evolution of ubiquitous computing, exploiting the information spread across different devices has become essential to further improve business processes. Hence, in this paper we present a new approach to business process modeling based on context-awareness and ontology. We propose a set of meta-models for the elements we consider most important for taking context into account when modeling business processes. To validate our approach, we present a concrete case study of a transport system as a proof of the applicability and utility of the model.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_42-Enhancing_Business_Process_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distance Education during the COVID-19 Pandemic: The Impact of Online Gaming Addiction on University Students’ Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120941</link>
        <id>10.14569/IJACSA.2021.0120941</id>
        <doi>10.14569/IJACSA.2021.0120941</doi>
        <lastModDate>2021-09-30T08:09:09.6830000+00:00</lastModDate>
        
        <creator>Mahmoud Abou Naaj</creator>
        
        <creator>Mirna Nachouki</creator>
        
        <subject>Distance education; COVID-19 pandemic; game addiction; students’ performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The COVID-19 pandemic has forced most universities worldwide to convert to distance education to ensure the educational process remains uninterrupted. Pandemic-related confinement orders have led students to become more engaged with online games; for a minority of students, however, excessive playing can become problematic and addictive. Few studies have investigated the long-term effect of COVID-19 on game addiction among university students. The present study investigates the changes in online game addiction rates between May 2020 and May 2021, aims to determine the impact of playing online games on students&#39; academic performance, and examines the demographic factors associated with video game addiction. A sample (n = 418) of students from one private university in the UAE was randomly selected, and the data were analyzed. The study found a reduction in online game addiction levels in the second year of the pandemic compared with the first year. Gender and academic level were the demographic features most strongly related to online game addiction. It was also found that digital game addiction is positively associated with academic performance.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_41-Distance_Education_during_the_COVID_19_Pandemic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Loss Minimization using Optimal Power Flow based on Firefly Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120940</link>
        <id>10.14569/IJACSA.2021.0120940</id>
        <doi>10.14569/IJACSA.2021.0120940</doi>
        <lastModDate>2021-09-30T08:09:09.6530000+00:00</lastModDate>
        
        <creator>Chia Shu Jun</creator>
        
        <creator>Syahirah Abd Halim</creator>
        
        <creator>Hazwani Mohd Rosli</creator>
        
        <creator>Nor Azwan Mohamed Kamari</creator>
        
        <subject>Optimal power flow; firefly algorithm; real power loss; control variables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Conventional methods are commonly used to solve optimal power flow problems in power system networks. However, conventional methods are not suitable for large, non-linear optimal power flow problems, as they are influenced by initialization values and are more likely to be trapped in a local optimum. Hence, heuristic optimization methods such as the Firefly Algorithm have been widely implemented to overcome these limitations; such methods often use a random strategy that helps avoid local optima while reaching the global optimum. In this study, load flow analysis was performed using the conventional Newton-Raphson technique to calculate the real power loss. Next, the Firefly Algorithm was implemented to optimize the control variables for minimizing the real power loss of the transmission system; generator bus voltage magnitudes, transformer tap settings, and generator output active power were taken as the control variables. The effectiveness of the proposed Firefly Algorithm was then tested on the IEEE 14-bus and 30-bus systems using MATLAB. The simulated results were analyzed and compared with Particle Swarm Optimization results in terms of consistency and execution time. The Firefly Algorithm successfully produced minimum real power loss with faster computation than Particle Swarm Optimization: for the IEEE 14-bus system, its active power loss is 6.6222 MW and its calculation time is 18.2372 seconds. The application of optimal power flow based on the Firefly Algorithm is therefore a reliable technique with which the optimal settings with respect to power transmission loss can be determined effectively.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_40-Power_Loss_Minimization_using_Optimal_Power_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-objective Batch Scheduling in Collaborative Multi-product Flow Shop System by using Non-dominated Sorting Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120939</link>
        <id>10.14569/IJACSA.2021.0120939</id>
        <doi>10.14569/IJACSA.2021.0120939</doi>
        <lastModDate>2021-09-30T08:09:09.6370000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <subject>Batch scheduling; flow shop; NSGA II; collaborative system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Batch scheduling is a well-known topic that has been studied widely under various objectives, methods, and circumstances. Batch scheduling in a collaborative flow shop system, however, is still unexplored: all batch scheduling studies found were in single flow shop systems where all arriving jobs come through a single door. In a collaborative flow shop system, every flow shop handles its own customers, although joint production among flow shops to improve efficiency is possible. This work develops a novel batch scheduling model for a collaborative multi-product flow shop system, with the objective of minimizing makespan and total production cost. The model is developed using the non-dominated sorting genetic algorithm (NSGA-II), which is proven in many multi-objective optimization models, and is compared with non-collaborative models that use NSGA-II and the adjacent pairwise interchange algorithm. According to the simulation results, the proposed model outperforms the existing models in minimizing makespan and total production cost: its makespan is 10 to 17 percent lower, and its total production cost 0.3 to 3.5 percent lower, than those of the existing non-collaborative models.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_39-Multi_objective_Batch_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Different Attacks on Software Defined Network and Approaches to Mitigate using Intelligent Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120938</link>
        <id>10.14569/IJACSA.2021.0120938</id>
        <doi>10.14569/IJACSA.2021.0120938</doi>
        <lastModDate>2021-09-30T08:09:09.6230000+00:00</lastModDate>
        
        <creator>P. Karthika</creator>
        
        <creator>A. Karmel</creator>
        
        <subject>Distributed Denial of Service (DDoS); Software Defined Networking (SDN); attack detection; Mininet; OpenFlow; mitigation; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The detection of DDoS (Distributed Denial of Service) attacks is an essential topic in network security. DDoS attacks make network services unavailable by repeatedly flooding servers with unwanted traffic. The volume, magnitude, and complexity of these attacks have increased dramatically as a result of low-cost Internet connections and easily available attack tools. Both Software Defined Networking (SDN) and Deep Learning (DL) have recently found a number of practical and fascinating applications in industry and academia. SDN enables centralized management, a global view of the overall network, and configurable control planes, allowing network devices to adapt to diverse applications. DL-based approaches outperform classic machine learning techniques on diverse classification problems, while SDN characteristics offer better network monitoring and security of the managed network than traditional networks. By inheriting the non-linearity of neural networks, DL-based approaches improve feature extraction and reduction from high-dimensional datasets in an unsupervised way. This article presents an overview of deep learning algorithms for detecting distributed denial of service attacks in software-defined networks. Furthermore, an SDN environment is simulated in Mininet using the RYU controller, and each surveyed paper&#39;s mitigation method is examined.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_38-Analysis_of_Different_Attacks_on_Software_Defined_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing MapReduce and Spark in Computing the PCC Matrix in Gene Co-expression Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120937</link>
        <id>10.14569/IJACSA.2021.0120937</id>
        <doi>10.14569/IJACSA.2021.0120937</doi>
        <lastModDate>2021-09-30T08:09:09.6070000+00:00</lastModDate>
        
        <creator>Nagwan Abdel Samee</creator>
        
        <creator>Nada Hassan Osman</creator>
        
        <creator>Rania Ahmed Abdel Azeem Abul Seoud</creator>
        
        <subject>Pearson&#39;s correlation; Hadoop; MapReduce; Spark; gene co-expression networks; GCN; Affymetrix microarrays</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Correlation between gene expression profiles across multiple samples and the identification of inter-gene interactions is a critical technique for co-expression networking. Because computing the Pearson&#39;s Correlation Coefficient (PCC) matrix is highly computationally intensive, it often takes substantial processing time. Therefore, in this work, Big Data techniques including MapReduce and Spark have been employed in a cloud environment to calculate the PCC matrix and find the dependencies between genes measured in high-throughput microarrays. A comparison of the running time of each phase in both the MapReduce and Spark approaches has been conducted. Both techniques can dramatically speed up the computation, allowing users to handle highly intensive processing. However, Spark yielded better performance than MapReduce because it performs the processing in the main memory of the worker nodes and avoids unnecessary I/O operations with the disks. Spark achieved an 80-times speedup for calculating the PCC of 22777 genes, whereas MapReduce attained barely an 8-times speedup.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_37-Comparing_MapReduce_and_Spark_in_Computing_the_PCC_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Programming Semantic Error Feedback using Dynamic Template Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120936</link>
        <id>10.14569/IJACSA.2021.0120936</id>
        <doi>10.14569/IJACSA.2021.0120936</doi>
        <lastModDate>2021-09-30T08:09:09.5770000+00:00</lastModDate>
        
        <creator>Razali M. K. A</creator>
        
        <creator>S. Suhailan</creator>
        
        <creator>Mohamed M. A</creator>
        
        <creator>M. D. M. Sufian</creator>
        
        <subject>Dynamic; feedback; online programming; semantic error; template matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Much automated computer programming feedback is generated based on static template matching, with templates that need to be provided by experts. This research focuses on developing automated online programming semantic error feedback using dynamic template matching models based on students&#39; correct answer submissions. Currently, there is a lack of research using dynamic template matching models because they are complex and vary in terms of programming structure. To solve the formulation of the dynamic templates, a new automated feedback model using front and rear n-gram sequences as the matching technique was developed to provide feedback to students based on the missing structure of the best-matched template. We tested 60 students&#39; Java programming answers on 3 different types of programming questions using dynamic templates randomly chosen for each student. An expert was assigned to manually match each student&#39;s answer with the 3 randomly chosen templates. The results show that 80% of the best-matched templates chosen by the technique were also chosen by the expert. Based on the matched template, the student is given feedback indicating the possible next programming instruction that can be included in the answer to make it correct, as achieved by the template. This model can help automatically assist students in answering computational programming exercises.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_36-Online_Programming_Semantic_Error_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-dimensional Credibility Assessment for Arabic News Sources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120935</link>
        <id>10.14569/IJACSA.2021.0120935</id>
        <doi>10.14569/IJACSA.2021.0120935</doi>
        <lastModDate>2021-09-30T08:09:09.5430000+00:00</lastModDate>
        
        <creator>Amira M. Gaber</creator>
        
        <creator>Mohamed Nour El-din</creator>
        
        <creator>Hanan Moussa</creator>
        
        <subject>Information credibility; social media; machine learning; Arabic Natural Language Processing (ANLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Due to advances in social media, it has become the most popular means of news propagation. Many news items are published on social media platforms like Facebook, Twitter, and Instagram. Facebook is a huge source for spreading and consuming daily news, but it is an unstructured way of producing news across domains (Art, Health, Education, Sport, Politics, etc.). Thus, this paper presents a model to assess the credibility of news sources in a social context, within a particular domain and over a particular period of time, from a multidimensional perspective. Based on these dimensions of credibility, the model is designed, implemented, and evaluated using machine learning algorithms and Arabic NLP approaches to compute a credibility score for Arabic news sources on Facebook. In addition, the study visualizes these scores at different data analysis levels to make the assessment more precise and trustworthy. The proposed model has been implemented and tested on real Arabic news sources for specific domains over a period of time to produce a credibility score for each source, so that users can display these scores and choose the most credible news sources. The credibility assessment model is more specific and accurate for a specific domain and time, with an accuracy of 98%.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_35-A_Multi_dimensional_Credibility_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Breast Cancer Cell Images using Multiple Convolution Neural Network Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120934</link>
        <id>10.14569/IJACSA.2021.0120934</id>
        <doi>10.14569/IJACSA.2021.0120934</doi>
        <lastModDate>2021-09-30T08:09:09.5430000+00:00</lastModDate>
        
        <creator>Zarrin Tasnim</creator>
        
        <creator>F. M. Javed Mehedi Shamrat</creator>
        
        <creator>Md Saidul Islam</creator>
        
        <creator>Md.Tareq Rahman</creator>
        
        <creator>Biraj Saha Aronya</creator>
        
        <creator>Jannatun Naeem Muna</creator>
        
        <creator>Md. Masum Billah</creator>
        
        <subject>Breast cancer; IDC; non-IDC; AlexNet; VGG19; InceptionV3; GoogLeNet; Xception; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Breast cancer is a malignant tumor that affects women. It is the most prevalent cancer in women, affecting about 10% of all women at any point in their lives. The development of breast cancer begins in the lobules or ducts of the cells. Early detection and prevention are the best ways to stop this cancer from spreading. In this study, five Convolution Neural Network (CNN) models are used to process image data of breast cells. AlexNet, InceptionV3, GoogLeNet, VGG19 and Xception models are used for the classification of Invasive Ductal Carcinoma, IDC and Non-Invasive Ductal Carcinoma (Non-IDC) cells. The models are trained and tested at different epochs to record the learning rate. It is observed from the study that with higher epochs, the data loss decreases and accuracy increases. The accuracy of InceptionV3 and Xception is 92.48% and 90.72% respectively. Likewise, VGG19 and AlexNet have fairly close accuracy of 94.83% and 96.74%. However, GoogLeNet dominates over the other implemented models with the highest accuracy of 97.80%. The GoogLeNet model performs with high accuracy and precision in detecting IDC cells responsible for breast cancer.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_34-Classification_of_Breast_Cancer_Cell_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Progress, Emerging Techniques, and Future Research Prospects of Bangla Machine Translation: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120933</link>
        <id>10.14569/IJACSA.2021.0120933</id>
        <doi>10.14569/IJACSA.2021.0120933</doi>
        <lastModDate>2021-09-30T08:09:09.5300000+00:00</lastModDate>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Arna Roy</creator>
        
        <creator>Argha Chandra Dhar</creator>
        
        <creator>Md Abdus Samad Kamal</creator>
        
        <subject>Machine Translation (MT); Bangla language; rule-based MT; example-based MT; statistical MT; neural MT; hybrid MT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Machine Translation (MT), the way of translating texts or documents from a source language to a target language automatically without human intervention, has gained popularity in the growing information technology-based era of globalization. Bangla is a major language, and several MT studies with different tools and techniques have been investigated in the last two decades. Considering the importance of the Bangla language and its prospects in MT studies, this study provides a comprehensive review of existing Bangla MT studies to meet the timely demand. Specifically, at first, the basic ideas of different MT methods (Rule-based, Example-based, Statistical, Neural, and Hybrid) and performance measures of MT are presented as a background study of the present review. Then an overview of the Bangla language and a brief description of the available Bangla-English corpora are provided. Next, a description of the existing Bangla MT studies is provided categorically following the common strategic fashion to create a valuable reference for current researchers in the field that is also suitable for non-expert users. The achieved performances of individual methods are also compared in a tabular form. Finally, a number of future research prospects are revealed from the studies, encouraging researchers and practitioners to develop a better and comprehensive Bangla MT system.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_33-Recent_Progress_Emerging_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-linear Multiclass SVM Classification Optimization using Large Datasets of Geometric Motif Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120932</link>
        <id>10.14569/IJACSA.2021.0120932</id>
        <doi>10.14569/IJACSA.2021.0120932</doi>
        <lastModDate>2021-09-30T08:09:09.5130000+00:00</lastModDate>
        
        <creator>Fikri Budiman</creator>
        
        <creator>Edi Sugiarto</creator>
        
        <subject>Geometric motif; image classification; multiclass; non-linear; large dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Support Vector Machine (SVM) with the Radial Basis Function (RBF) kernel is one of the methods frequently applied to nonlinear multiclass image classification. To overcome the constraints posed by a large image dataset divided into nonlinear multiclass categories, three stages of the SVM-RBF classification process are carried out: 1) determining the feature extraction algorithms and feature value dimensions used, 2) determining the appropriate kernel and parameter values, and 3) using the correct multiclass method for the training and testing processes. The OaO, OaA, and DAGSVM multiclass methods were tested on a large dataset of batik motif images with geometric motifs in a variety of patterns and colors in each class, and with similar patterns in the motifs between classes. DAGSVM has the advantage in classification accuracy, i.e. 91%, but it takes longer during the training and testing processes.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_32-Non_linear_Multiclass_SVM_Classification_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factor of Trusted Elements for Mobile Health Records Management: A Review of Conceptual Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120931</link>
        <id>10.14569/IJACSA.2021.0120931</id>
        <doi>10.14569/IJACSA.2021.0120931</doi>
        <lastModDate>2021-09-30T08:09:09.4800000+00:00</lastModDate>
        
        <creator>Fatin Nur Zulkipli</creator>
        
        <creator>Nurussobah Hussin</creator>
        
        <creator>Saiful Farik Mat Yatin</creator>
        
        <creator>Azman Ismail</creator>
        
        <subject>Electronic health record; mobile health record; records management; health information technology; records trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Health Information Technology such as Mobile Health Record Management (MHRM) and Electronic Health Records (EHR) depend on each other in maintaining patients&#39; medical records. To maintain trust in health information technology development, the relationship among patients, providers, and clinicians needs to be maintained. The present study examines the importance of the trusted elements of mobile health (mHealth) record management implementation in government hospitals. The Covid-19 pandemic has forced the adoption of technological approaches in healthcare delivery. Technology has a major impact on the healthcare industry, which deals with confidential data and human life. The increased misuse of mobile devices in records management leads practitioners and communities towards poor quality, security problems, and meaningless data. To fulfil this objective, a conceptual framework has been developed by identifying the trust elements for the implementation of mHealth apps in hospitals. Secondary data have been used and analyzed to justify the objectives of the study. The findings and discussion were developed by correlating the existing literature with the analyzed data. Five trusted elements for MHRM were found: Governance, Professional skills and competency, Mobile Health Records Management (MHRM), Sustainability, and Technology. This paper highlights the use of electronic health records in health organizations for trusted and timely data access. These success factors of trust elements help avoid petty problems and inefficient processes while giving users convenient and instant access to patients&#39; records.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_31-Critical_Success_Factor_of_Trusted_Elements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SNR based Energy Efficient Communication Protocol for Emergency Applications in WBAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120930</link>
        <id>10.14569/IJACSA.2021.0120930</id>
        <doi>10.14569/IJACSA.2021.0120930</doi>
        <lastModDate>2021-09-30T08:09:09.4670000+00:00</lastModDate>
        
        <creator>K. Viswavardhan Reddy</creator>
        
        <creator>Navin Kumar</creator>
        
        <subject>WBAN; energy efficiency; emergency applications; protocol; remote monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Continuous remote monitoring of a patient&#39;s health condition in a dynamic environment imposes many challenges, which multiply with the size of the body area sensor network. One such challenge is the energy efficiency of sensors. Maintaining a long lifetime for all nodes, especially those that communicate vital signals from one network to another towards the base station, is very important. In this work, an energy-efficient communication protocol for the wireless body area network (WBAN) is proposed. The essential characteristics of the protocol are: random deployment of nodes, formation of clusters, selection of the node with the highest signal-to-noise ratio (SNR) as cluster head (CH), random rotation of CHs within each cluster, and so on. The developed algorithm is simulated in MATLAB by varying the number of nodes and networks. The obtained results are compared with some of the most recent and relevant existing works. It is found that the network lifetime is enhanced by 19.5%, throughput by 12.61%, and average remaining energy by 57.21%.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_30-SNR_based_Energy_Efficient_Communication_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review on Regression Test Case Prioritization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120929</link>
        <id>10.14569/IJACSA.2021.0120929</id>
        <doi>10.14569/IJACSA.2021.0120929</doi>
        <lastModDate>2021-09-30T08:09:09.4500000+00:00</lastModDate>
        
        <creator>Ani Rahmani</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <creator>Intan Ermahani A. Jalil</creator>
        
        <creator>Adhitia Putra Herawan</creator>
        
        <subject>Software testing; test case prioritization; regression testing; requirements-based test case prioritization; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Test case prioritization (TCP) is considered effective for improving testing efficiency, especially in regression testing, as retesting everything is costly. TCP schedules the test case execution order to detect bugs faster. For this benefit, test case prioritization has been intensively studied. This paper reviews the development of TCP for regression testing across 48 papers from 2017 to 2020. We present four critical surveys: first, the development of approaches and techniques in regression TCP studies; second, the identification of software under test (SUT) variations used in TCP studies; third, the trend of metrics used to measure the effectiveness of TCP studies; and fourth, the state of the art of requirements-based TCP. Furthermore, we discuss development opportunities and potential future directions for regression TCP. Our review provides evidence that TCP is attracting increasing interest. We also discovered that utilizing requirements would help prepare test cases earlier to improve TCP effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_29-A_Systematic_Literature_Review_on_Regression_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimedia Transmission Mechanism for Streaming Over Wireless Communication Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120928</link>
        <id>10.14569/IJACSA.2021.0120928</id>
        <doi>10.14569/IJACSA.2021.0120928</doi>
        <lastModDate>2021-09-30T08:09:09.4200000+00:00</lastModDate>
        
        <creator>Shwetha M</creator>
        
        <creator>Yamuna Devi C R</creator>
        
        <subject>Multimedia transmission; video encoding; multimedia streaming; quality of service; quality of experience; video compression standards</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>With the evolution of wireless communication technologies (i.e., 4G/5G), multimedia content sharing has exploded and become an integral part of users&#39; daily lives, raising expectations of further growth in Quality of Service (QoS) and Quality of Experience (QoE) performance. Therefore, multimedia service providers are developing new technologies to offer higher-quality video streaming content along with video compression standards, which is highly demanded by receivers. Thus, inventing precise and efficient quality-based media transmission protocols will significantly help to improve multimedia QoS over wireless networks. This comprehensive research study discusses standard research progress in multimedia transmission protocols for wireless communication networks. It also investigates the limitations of such literature and identifies challenging factors that play a significant role in managing superior signal quality for digital or video content transmission under heavy traffic conditions. The final section provides a briefing on crucial open research issues for developing a multimedia transmission model that can seamlessly communicate multimedia content irrespective of adverse traffic conditions.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_28-Multimedia_Transmission_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Mapping Study of Software Usability Studies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120927</link>
        <id>10.14569/IJACSA.2021.0120927</id>
        <doi>10.14569/IJACSA.2021.0120927</doi>
        <lastModDate>2021-09-30T08:09:09.4030000+00:00</lastModDate>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <subject>Software usability; usability study; systematic mapping study; systematic literature review; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Among software quality attributes, “software usability” is considered one of the vital factors in the software engineering literature. Software usability is the ability of users to generally understand, use, and learn a software system with ease. Due to the importance of usability in software quality, a considerable amount of literature has been published in the past decade. A few review and survey studies have also been published to critically review the existing literature in the domain. However, there is limited research covering a systematic mapping study of software usability. Mapping studies help in analyzing the general trends and research productivity in a research area. To fill this gap, this work critically examines the overall research productivity, demographics, trends, and challenges of software usability. The objective is to classify the current contributions and trends in the area of software usability. We retrieved 9,874 research articles from six research databases, and 62 works were selected as primary studies using an evidence-based approach. The result of this mapping study shows that software usability is an active research area, with a promising number of works published in the last decade (2011 - 2020). We identified that the current literature spans multiple article classes, of which investigative papers, model proposals, and evaluation papers are the most frequently published article types. We found experiments and theoretical validations to be the most common validation techniques. In terms of application domains, web, software development, and mobile applications are the most frequent domains where usability studies are conducted. We identified that future usability studies should focus more on field studies as well as on the usability testing of scientific software packages. It will also be important to consider ethical issues in usability testing.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_27-A_Systematic_Mapping_Study_of_Software_Usability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Framework for Big Data Analytics in Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120926</link>
        <id>10.14569/IJACSA.2021.0120926</id>
        <doi>10.14569/IJACSA.2021.0120926</doi>
        <lastModDate>2021-09-30T08:09:09.3870000+00:00</lastModDate>
        
        <creator>Ganeshayya Shidaganti</creator>
        
        <creator>Prakash S</creator>
        
        <subject>Big data; data analytics; educational big data; predictive analytics; text mining; machine learning; education technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>With the adoption of cloud services for hosting knowledge delivery systems in the educational domain, a surplus of educational data is generated every day by current learning management systems. Such data are associated with complexities that impose significant challenges for existing database management and analytics. A review of existing approaches to educational data highlights that they do not offer a full-fledged analytics solution and that an open-ended problem remains. Therefore, the proposed system introduces a comprehensive framework that offers the integrated operation of transformation, data quality, and predictive analytics. The emphasis is on achieving distributed analytical operations over educational data in the cloud. Implemented using an analytical research methodology, the proposed system shows better analytical performance with respect to frequently used educational data analytical approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_26-A_Comprehensive_Framework_for_Big_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Enhancement in Software Defined Networking (SDN): A Threat Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120925</link>
        <id>10.14569/IJACSA.2021.0120925</id>
        <doi>10.14569/IJACSA.2021.0120925</doi>
        <lastModDate>2021-09-30T08:09:09.3570000+00:00</lastModDate>
        
        <creator>Pradeep Kumar Sharma</creator>
        
        <creator>S. S Tyagi</creator>
        
        <subject>Software defined networking (SDN); openflow; control plane; data plane; controller; programmability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Software Defined Networking (SDN) has emerged as a technology that can replace prevalent vendor-based proprietary CLI networking devices. SDN has introduced application-based network control and provided various opportunities and challenges for research and innovation in these networks. Despite the many advantages and opportunities of SDN, security is a matter of concern for developers who want to invest in it. In this paper we analyze SDN security issues along with their countermeasures. We have generalized a four-use-case threat model that covers the security requirements of SDN. These use cases are: (I) protecting controllers from applications, (II) inter-controller protection, (III) protecting the data plane or switches from the controller, and (IV) protecting controllers from malicious switches. We found that these SDN components are inter-related: if one is secure, another is already secure. We also compared SDN and traditional network security in terms of these four use cases and provide insights for protection mechanisms and security enhancements. A framework for the development of an SDN security application is presented based on the Ryu controller. We believe that our threat model will help researchers and developers understand current security requirements and provide a ready reference for tackling vulnerabilities and threats in this area. Finally, we identify some open research problems and future research directions with a proposed security architecture.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_25-Security_Enhancement_in_Software_Defined_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved K-anonymization Approach for Preserving Graph Structural Properties</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120924</link>
        <id>10.14569/IJACSA.2021.0120924</id>
        <doi>10.14569/IJACSA.2021.0120924</doi>
        <lastModDate>2021-09-30T08:09:09.3270000+00:00</lastModDate>
        
        <creator>A. Mohammed Hanafy</creator>
        
        <creator>Sherif Barakat</creator>
        
        <creator>Amira Rezk</creator>
        
        <subject>Privacy; social networks; anonymization; edge-perturbation methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Privacy risks are an important issue to consider when releasing network data, in order to protect personal information from potential attacks. Network data anonymization is a procedure used by researchers to prevent an adversary from revealing a user&#39;s identity; such an attack is called a re-identification attack. However, this is a tricky task, as the primary graph structure should be maintained as much as feasible during the anonymization process. Most existing solutions apply edge-perturbation methods directly, without any concern for the structural information of the graph, whereas preserving the graph structure during anonymization requires keeping the most important knowledge/edges in the graph without modification. This paper introduces a high-utility K-degree anonymization method that utilizes edge betweenness centrality (EBC) as a measure to identify the edges that play a central role in the graph. Experimental results showed that preserving these edges during the modification process leads the anonymization algorithm to better preservation of the most important structural properties of the graph. The method also proved efficient at preserving community structure as a trade-off between graph utility and privacy.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_24-An_Improved_K_anonymization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A PSNR Review of ESTARFM Cloud Removal Method with Sentinel 2 and Landsat 8 Combination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120923</link>
        <id>10.14569/IJACSA.2021.0120923</id>
        <doi>10.14569/IJACSA.2021.0120923</doi>
        <lastModDate>2021-09-30T08:09:09.2930000+00:00</lastModDate>
        
        <creator>Dietrich G. P. Tarigan</creator>
        
        <creator>Sani M. Isa</creator>
        
        <subject>Cloud removal; RS-Remote sensing; PSNR-Peak signal noise ratio; GIS-Geographic information system; spatiotemporal image fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Remote sensing images with high spatial and high temporal (HSHT) resolution are crucial data sources for GIS land use monitoring, and cloud cover is a typical problem when trying to obtain them. This study examines the effect of cloud cover reduction using the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Method (ESTARFM), a spatiotemporal image fusion technique. ESTARFM predicts the reflectance value of the cloud-covered region by merging two satellite images; while it ordinarily fuses low-resolution and medium-resolution images, in this study it employs medium- and high-resolution satellite images. Using Sentinel 2 and Landsat 8, the Peak Signal Noise Ratio (PSNR) statistical method is then used to evaluate ESTARFM. The PSNR characterizes ESTARFM cloud removal performance by comparing the level of similarity between the reference image and the reconstructed image; in remote sensing, this approach was established to obtain high-quality HSHT images. Based on this study, Landsat 8 images whose clouds have been removed with ESTARFM may be classed as good: PSNR values of 21.8 to 26 back this up, and the ESTARFM result also appears good on visual examination.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_23-A_PSNR_Review_of_ESTARFM_Cloud_Removal_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection Technique and Mitigation Against a Phishing Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120922</link>
        <id>10.14569/IJACSA.2021.0120922</id>
        <doi>10.14569/IJACSA.2021.0120922</doi>
        <lastModDate>2021-09-30T08:09:09.2630000+00:00</lastModDate>
        
        <creator>Haytham Tarek Mohammed Fetooh</creator>
        
        <creator>M. M. EL-GAYAR</creator>
        
        <creator>A. Aboelfetouh</creator>
        
        <subject>Rogue access point; phishing attacks; KARMA attack; social engineering; hacking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Wireless networking is a central part of our daily life these days; everyone wants to be connected. Nevertheless, the massive progress in Wi-Fi trends and technologies leads most people to pay no attention to security issues. Detecting a fake access point is also a hard security problem in wireless networks. Currently used methods either require hardware installation, change the protocol, or need frame analysis; moreover, these solutions mainly focus on identifying a single attack. In this paper, we propose an admin-side method for detecting a rogue access point that works against multiple cyber-attacks, especially phishing. We shed light on detecting Wi-phishing (Evil Twin), de-authentication attacks, KARMA attacks, and advanced Wi-phishing attacks, and on differentiating them from normal packets, by performing frame-type analysis in real time and analyzing various static and dynamic parameters: any change in the static features is considered an Evil Twin attack, and a dynamic parameter value surpassing the threshold likewise reflects an Evil Twin. The detector has been tested experimentally and achieves an average accuracy of 94.40%, an average precision of 87.08%, and an average specificity of 96.39% across the five types of attack.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_22-Detection_Technique_and_Mitigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Chinese Potential e-Commerce Websites based on Analytic Hierarchy Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120921</link>
        <id>10.14569/IJACSA.2021.0120921</id>
        <doi>10.14569/IJACSA.2021.0120921</doi>
        <lastModDate>2021-09-30T08:09:09.2300000+00:00</lastModDate>
        
        <creator>Hemn Barzan Abdalla</creator>
        
        <creator>Liwei Wang</creator>
        
        <subject>Analytic hierarchy process (AHP); e-commerce; chinese e-commerce websites; JD (JING DONG); taobao; suning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>China has recently become the largest market for e-commerce at the global level. Following the technological revolution and its widespread adoption, as of December 2020 about 782.41 million people engage in e-commerce through modern electronic devices such as smartphones and computers. E-commerce, a branch of business administration conducted electronically over Internet networks, aims to carry out buying and selling operations. This article applies the Analytic Hierarchy Process (AHP) to evaluate three potential Chinese e-commerce websites: JD, TAOBAO, and SUNING. We divide our model into three levels: a goal level, a strategic level, and a criteria level. At the strategic level we take two main factors into account: website (A1) and user (A2). At the criteria level we consider six aspects: visiting speed (B1), website stability (B2), page ranking (B3), average person visits (B4), period of average person visits (B5), and users&#39; comments (B6). All scores are normalized, i.e., mapped into 0-1, to compare each website&#39;s performance. Our results show that TAOBAO is the best e-commerce website with a score of 0.8233 based on our algorithm, JD is second with a score of 0.7895, and SUNING is last with a score of 0.5955.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_21-Evaluating_Chinese_Potential_e_Commerce_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Aspect based Sentiment Analysis Model by the Hybrid Fusion of Speech and Text Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120920</link>
        <id>10.14569/IJACSA.2021.0120920</id>
        <doi>10.14569/IJACSA.2021.0120920</doi>
        <lastModDate>2021-09-30T08:09:09.2170000+00:00</lastModDate>
        
        <creator>Maganti Syamala</creator>
        
        <creator>N. J. Nalini</creator>
        
        <subject>Acoustic; aspect-based sentiment analysis; decision making; emotion; extraction; hybrid; lexicon; linguistic; natural language processing; speech</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Aspect-based Sentiment Analysis (ABSA) is considered a challenging task in the speech domain, as it needs the fusion of acoustic and linguistic features for information retrieval and decision making. Existing studies in speech are limited to speech and emotion recognition. The main objective of this work is to combine acoustic features in speech with linguistic features in text for ABSA. A deep learning and language model is implemented for acoustic feature extraction in speech, while different variants of text feature extraction techniques are used for aspect extraction in text: trained lexicons, a Latent Dirichlet Allocation (LDA) model, a rule-based approach, and an Efficient Named Entity Recognition (E-NER) guided dependency parsing approach. Sentiment with respect to the extracted aspect is analyzed using Natural Language Processing (NLP) techniques. The experimental results proved the effectiveness of hybrid-level fusion, yielding improvements of 5.7% WER and 3% CER compared with the traditional baseline individual linguistic and acoustic feature models.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_20-An_Efficient_Aspect_based_Sentiment_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Value Co-creation into the e-Learning Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120919</link>
        <id>10.14569/IJACSA.2021.0120919</id>
        <doi>10.14569/IJACSA.2021.0120919</doi>
        <lastModDate>2021-09-30T08:09:09.1830000+00:00</lastModDate>
        
        <creator>Eliza Annis Thangaiah</creator>
        
        <creator>Ruzzakiah Jenal</creator>
        
        <creator>Jamaiah Yahaya</creator>
        
        <subject>e-Learning; co-creation; S-D Logic; value propositions; value drivers; actors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The e-learning platform is a technology used in most academic institutions, providing services as an alternative to conventional methods. Previous studies have primarily focused on the use and acceptance of e-learning among consumers and on developing tangible attributes of the platform. Platform attributes should be available to engage all users, leading, if well used, to innovative ideas and improved value offerings. Therefore, this study explores the service science perspective on co-creation for an e-learning platform. The concepts of Service-Dominant Logic and value co-creation are adopted to explore and extract the elements and factors collectively applied to the model, illustrating how user value is co-created through the value propositions on the platform and the value drivers for users. The findings identify the components of the value proposition on the platform: enrichment, interaction, personalization, and environment. The components of value drivers for actors are engagement, resources, experience, and goals. The proposed components are then used to develop an e-learning conceptual model. A service-driven model of e-learning will be a significant input in developing an effective platform that provides co-creation opportunities to its users. Future research will identify the critical features available on the e-learning platform from the users’ view.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_19-Integration_of_Value_Co_Creation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Components and Indicators of Problem-solving Skills in Robot Programming Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120917</link>
        <id>10.14569/IJACSA.2021.0120917</id>
        <doi>10.14569/IJACSA.2021.0120917</doi>
        <lastModDate>2021-09-30T08:09:09.1530000+00:00</lastModDate>
        
        <creator>Chacharin Lertyosbordin</creator>
        
        <creator>Sorakrich Maneewan</creator>
        
        <creator>Daruwan Srikaew</creator>
        
        <subject>Components; indicators; problem solving; robot programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The objective of this research was to study the components and indicators of problem-solving skills in robot programming activities for high school students. This was done through second-order confirmatory factor analysis (CFA) of data from behavioral assessments of the robot programming activities of 320 students from specialized science schools. The results revealed that problem-solving skills in robot programming activities comprise five components and 15 indicators. All components were tested for consistency using CFA statistics with the support of the R-Studio program. The model analysis results were consistent with the empirical data, with Chi-Square = 98.273, df = 80.000, p-value = 0.081, GFI = 0.961, NFI (TLI) = 0.924, CFI = 0.985, RMSEA = 0.027, and RMR = 0.007. This indicates that all the identified components and indicators are involved in the problem-solving skills exhibited in the robot programming activities of high school students.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_17-Components_and_Indicators_of_Problem_solving_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Ensemble Word Embedding based Classification Model for Multi-document Summarization Process on Large Multi-domain Document Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120918</link>
        <id>10.14569/IJACSA.2021.0120918</id>
        <doi>10.14569/IJACSA.2021.0120918</doi>
        <lastModDate>2021-09-30T08:09:09.1530000+00:00</lastModDate>
        
        <creator>S Anjali Devi</creator>
        
        <creator>S Sivakumar</creator>
        
        <subject>Word embedding models; text classification; multi-document summarization; contextual feature similarity; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Contextual text feature extraction and classification play a vital role in the multi-document summarization process. Natural language processing (NLP) is one of the essential text mining tools used to preprocess and analyze large document sets. Most conventional single-document feature extraction measures are independent of the contextual relationships among the different contextual feature sets used in the document categorization process. Moreover, conventional word embedding models such as TF-ID, ITF-ID, and GloVe are difficult to integrate into the multi-domain feature extraction and classification process due to a high misclassification rate and large candidate sets. To address these concerns, an advanced multi-document summarization framework was developed and tested on a number of large training datasets. In this work, a hybrid multi-domain GloVe word embedding model together with multi-document clustering and classification models were implemented to improve the multi-document summarization process for multi-domain document sets. Experimental results show that the proposed multi-document summarization approach improves on existing models in terms of accuracy, precision, recall, F-score, and run time (ms).</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_18-A_Hybrid_Ensemble_Word_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Ontologies through the Lifecycle of Virtual Reality based Training (VRT) Development Process: A Review Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120916</link>
        <id>10.14569/IJACSA.2021.0120916</id>
        <doi>10.14569/IJACSA.2021.0120916</doi>
        <lastModDate>2021-09-30T08:09:09.1230000+00:00</lastModDate>
        
        <creator>Youcef Benferdia</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <creator>Mushawiahti Mustafa</creator>
        
        <creator>Mohd Amran Md Ali</creator>
        
        <subject>Virtual reality; ontology; methodology; training and learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The size of learning content continually challenges education and training providers. Virtual Reality (VR), a recent advanced technology, has emerged as a promising option for facilitating knowledge acquisition and skill transfer in a variety of sectors. The main challenge with this technology is the increasing cost, time, effort, and resources needed to design Virtual Reality based Training (VRT) applications as educational content. To fill such gaps, the ontology approach was introduced to support VR development. This review therefore investigates how ontologies have been applied throughout the life cycle of the VR development process, exploring articles from 2015 onwards. Findings show that VR developers do not incorporate ontologies in all phases of the VR methodology lifecycle, but only in some phases such as creation and implementation. Creating novel solutions without a complete methodology results in a long development process and an ineffective product. This could consequently pose serious dangers in real life, especially when VRT targets fields whose fine details are vital for saving lives, such as healthcare. This research thus proposes methodological guidance on designing VR applications with an ontology approach throughout the entire VR construction life cycle.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_16-The_Role_of_Ontologies_through_the_Lifecycle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Energy-aware Facilitation Framework for Scalable Social Internet of Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120915</link>
        <id>10.14569/IJACSA.2021.0120915</id>
        <doi>10.14569/IJACSA.2021.0120915</doi>
        <lastModDate>2021-09-30T08:09:09.0900000+00:00</lastModDate>
        
        <creator>Abdulwahab Ali Almazroi</creator>
        
        <creator>Muhammad Ahsan Qureshi</creator>
        
        <subject>Social Internet of Vehicles (SIoV); energy optimization; Travel Sales Person (TSP) problem; Ant Colony Optimization (ACO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The Internet of Things (IoT) has evolved into a more promising service provisioning paradigm, namely the Social Internet of Things (SIoT). The Social Internet of Vehicles (SIoV) comprises a multitude of components from existing Vehicular Ad-Hoc Networks (VANETs), such as OBUs, RSUs, and cloud devices, that require energy to function properly. It is speculated that connected devices will surpass the 40 billion mark in 2022, with ITS-related devices constituting a significant part. The ever-increasing number of components increases communication hopping, which results in an immense escalation of energy consumption, while energy consumption at the object level also rises due to individual communication, storage, and processing capabilities. Existing research in SIoV focuses on providing state-of-the-art services and applications; however, the significant goal of energy efficiency is largely ignored. Extensive research is therefore needed to develop an energy-efficient framework for a scalable SIoV system that meets the future requirements of ITS. Consequently, this study proposed, simulated, and evaluated an energy-aware, efficient RSU deployment scheme based on network energy, data acquisition energy, and data processing energy. To achieve energy efficiency, the traveling salesman problem is solved with an ant colony optimization algorithm. The experiments are performed in an urban scenario with different numbers of RSUs, and their outcomes exhibit promising results in energy gain and energy consumption, with implications for society and consumers at large.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_15-An_Energy_aware_Facilitation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Technology for Summarization of Kazakh Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120914</link>
        <id>10.14569/IJACSA.2021.0120914</id>
        <doi>10.14569/IJACSA.2021.0120914</doi>
        <lastModDate>2021-09-30T08:09:09.0600000+00:00</lastModDate>
        
        <creator>Talgat Zhabayev</creator>
        
        <creator>Ualsher Tukeyev</creator>
        
        <subject>Summarization; text simplification; low-resource language; seq2seq; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>This paper presents a solution to the problem of summarizing Kazakh texts. Kazakh text summarization is treated as a sequence of two tasks: extracting the most important sentences of the text and simplifying the extracted sentences. The extraction task is solved using the TF-IDF method, and the sentence simplification task is solved using “Seq2Seq” neural network technology. The problem with using an NMT method for Kazakh simplification was the absence of a Kazakh training dataset. To solve this problem, this work proposes the use of transfer learning, which made it possible to use a ready-made model trained on a parallel corpus of Simple English Wikipedia rather than creating a simplification corpus in Kazakh from scratch. To this end, a transfer learning technology for simplifying Kazakh sentences has been developed, based on training a neural model for simplifying English sentences. The main scientific contribution of this work is the transfer learning technology for the simplification of Kazakh sentences using the parallel English simplification corpus.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_14-Development_of_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Feature Acquisition for Sentiment Analysis of English and Hausa Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120913</link>
        <id>10.14569/IJACSA.2021.0120913</id>
        <doi>10.14569/IJACSA.2021.0120913</doi>
        <lastModDate>2021-09-30T08:09:09.0430000+00:00</lastModDate>
        
        <creator>Amina Imam Abubakar</creator>
        
        <creator>Abubakar Roko</creator>
        
        <creator>Aminu Muhammad Bui</creator>
        
        <creator>Ibrahim Saidu</creator>
        
        <subject>Multilingual sentiment analysis; sentiment analysis; social media; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Due to the continuous and rapid growth of social media, opinionated content is actively created by users in different languages about various products, services, events, and political parties. The automated classification of this content has prompted research on multilingual sentiment analysis. However, the majority of research efforts are devoted to language pairs such as English-Arabic, English-German, and English-French, while a great share of information is available in other languages such as Hausa. This paper proposes multilingual sentiment analysis of English and Hausa tweets using an Enhanced Feature Acquisition Method (EFAM). The method uses a machine learning approach to integrate two newly defined Hausa features (Hausa Lexical Feature and Hausa Sentiment Intensifiers) with an English feature, in order to measure classification performance and to synthesize a more accurate sentiment classification procedure. The approach has been evaluated in several experiments with different classifiers on both monolingual and multilingual datasets. The experimental results reveal the effectiveness of the approach in enhancing feature integration for multilingual sentiment analysis. Similarly, by using features drawn from multiple languages, we can construct machine learning classifiers with an average precision of over 65%.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_13-An_Enhanced_Feature_Acquisition_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wide Area Measurement System in the IEEE-14 Bus System using Multiobjective Shortest Path Algorithm for Fault Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120912</link>
        <id>10.14569/IJACSA.2021.0120912</id>
        <doi>10.14569/IJACSA.2021.0120912</doi>
        <lastModDate>2021-09-30T08:09:09.0130000+00:00</lastModDate>
        
        <creator>Lilik J. Awalin</creator>
        
        <creator>Syahirah Abd Halim</creator>
        
        <creator>Jafferi Bin Jamaludin</creator>
        
        <creator>Nor Azuana Ramli</creator>
        
        <subject>IEEE-14 bus system; wide area measurement; multi-objective shortest path algorithm; fault location; PMU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>In a large-scale interconnected power system network, several challenges exist in evaluating and maintaining overall system stability, and the power system’s ability to supply all types of loads during natural disasters or faults has yet to be addressed. This work focuses on developing a wide-area measurement system to manage and control the power system under all operating conditions. The IEEE-14 bus system was modeled in PSCAD software to simulate nineteen types of fault based on a multi-objective shortest path algorithm. Managing the wide-area measurement requires understanding the working principle of this algorithm, whereby the proposed method determines a new path for the IEEE-14 bus system. To evaluate the performance of the multi-objective shortest path algorithm, all sections of the IEEE-14 bus system were simulated with faults, and the distances of the normal path (without a simulated fault) and the new path (with a simulated fault) were recorded. Based on the recorded data, the location of the fault was found to have a significant influence on the shortest path between interconnected buses.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_12-Wide_Area_Measurement_System_in_the_IEEE_14_Bus_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Adaptive Deep Learning based Fine Grained Vehicle Categorization in Cluttered Traffic Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120911</link>
        <id>10.14569/IJACSA.2021.0120911</id>
        <doi>10.14569/IJACSA.2021.0120911</doi>
        <lastModDate>2021-09-30T08:09:08.9970000+00:00</lastModDate>
        
        <creator>Shobha B. S</creator>
        
        <creator>Deepu. R</creator>
        
        <subject>Vehicle categorization; deep learning; traffic density estimation; clutter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Smart traffic management is being proposed to better manage traffic infrastructure and regulate traffic in smart cities. With the surge of traffic density in many cities, smart traffic management has become an utmost necessity. Vehicle categorization, traffic density estimation and vehicle tracking are some of the important functionalities in smart traffic management. For efficient tracking and traffic density estimation, vehicles must be categorized on multiple levels, such as type, speed, direction of travel and vehicle attributes like color. Vehicle categorization becomes very challenging due to occlusions, cluttered backgrounds and traffic density variations. In this work, a traffic adaptive multi-level vehicle categorization using deep learning is proposed. The solution is designed to solve the problems of occlusions and cluttered backgrounds in vehicle categorization.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_11-Traffic_Adaptive_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Feature Points Extraction Techniques for Space Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120910</link>
        <id>10.14569/IJACSA.2021.0120910</id>
        <doi>10.14569/IJACSA.2021.0120910</doi>
        <lastModDate>2021-09-30T08:09:08.9670000+00:00</lastModDate>
        
        <creator>Janhavi H. Borse</creator>
        
        <creator>Dipti D. Patil</creator>
        
        <subject>Artificial intelligence; convolutional neural network; computer vision; feature extraction; machine learning; satellite images; space research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Recently, space research advancements have widened the scope of many vision-based techniques. Computer vision techniques with manifold objectives require that valuable features are extracted from input data. This paper empirically analyzes known feature extraction techniques: Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and Convolutional Neural Network (CNN) features. A methodology for autonomously extracting features using a CNN is analyzed in more detail. These techniques are studied and evaluated empirically on lunar satellite images. For the analysis, a dataset containing different affine transformations of a video frame is generated from a sample lunar descent video. The nearest neighbor algorithm is then applied for feature matching. For an unbiased evaluation, the same feature matching process is repeated for all the models. Well-known metrics such as repeatability and matching score are employed to validate the studied techniques. The results show that CNN features offer much better computational efficiency and more stable matching accuracy for lunar images than the other studied algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_10-Empirical_Analysis_of_Feature_Points_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Star-Schema Model for Lecturer Performance in Research Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120909</link>
        <id>10.14569/IJACSA.2021.0120909</id>
        <doi>10.14569/IJACSA.2021.0120909</doi>
        <lastModDate>2021-09-30T08:09:08.9330000+00:00</lastModDate>
        
        <creator>M. Miftakul Amin</creator>
        
        <creator>Adi Sutrisman</creator>
        
        <creator>Yevi Dwitayanti</creator>
        
        <subject>Data warehouse (DW); star-schema; multidimensional data; business intelligence (BI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>In this study, the researchers developed a multidimensional data model to investigate the research activities of university lecturers as part of the Three Pillars of Higher Education. Information about lecturers&#39; research activities had previously been managed using spreadsheet (Excel) documents, so access to and analysis of the information were limited. Data warehouse development was carried out through several stages, namely requirement analysis, data source analysis, multidimensional modeling, the ETL process, and reporting. The information generated in this data warehouse (DW) can be used as one of the business intelligence (BI) models in universities. In this study, the star-schema model was used in designing dimension tables and fact tables to facilitate and speed up the query process. The information generated in this study can be used by university management for decision making and strategic planning. The results can also serve as important input in the preparation of institutional and study program accreditation data.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_9-Development_of_Star_Schema_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Familial Hypercholesterolaemia: A Tree-based Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120908</link>
        <id>10.14569/IJACSA.2021.0120908</id>
        <doi>10.14569/IJACSA.2021.0120908</doi>
        <lastModDate>2021-09-30T08:09:08.9030000+00:00</lastModDate>
        
        <creator>Marshima Mohd Rosli</creator>
        
        <creator>Jafhate Edward</creator>
        
        <creator>Marcella Onn</creator>
        
        <creator>Yung-An Chua</creator>
        
        <creator>Noor Alicezah Mohd Kasim</creator>
        
        <creator>Hapizah Nawawi</creator>
        
        <subject>Familial hypercholesterolaemia; predicting FH; machine learning algorithms; tree-based classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Familial hypercholesterolaemia is the most common and serious form of inherited hyperlipidaemia. It has an autosomal dominant mode of inheritance and is characterised by severely elevated low-density lipoprotein cholesterol levels. Familial hypercholesterolaemia is an important cause of premature coronary heart disease, but is potentially treatable. However, the majority of individuals with familial hypercholesterolaemia are under-diagnosed and under-treated, resulting in lost opportunities for premature coronary heart disease prevention. This study aims to assess the performance of machine learning algorithms for enhancing familial hypercholesterolaemia detection within the Malaysian population. We applied three machine learning algorithms (random forest, gradient boosting and decision tree) to classify familial hypercholesterolaemia among Malaysian patients and to identify relevant features from four well-known diagnostic instruments: Simon Broome, Dutch Lipid Clinic Criteria, US Make Early Diagnosis to Prevent Early Deaths and Japanese FH Management Criteria. The performance of these classifiers was compared using various measures of accuracy, precision, sensitivity and specificity. Our results indicated that the decision tree classifier had the best performance, with an accuracy of 99.72%, followed by the gradient boosting and random forest classifiers, with accuracies of 99.54% and 99.52%, respectively. With the Recursive Feature Elimination method, the three classifiers selected six common features of the familial hypercholesterolaemia diagnostic criteria (family history of coronary heart disease, low-density lipoprotein cholesterol levels, presence of tendon xanthomata and/or corneal arcus, family hypercholesterolaemia, and family history of familial hypercholesterolaemia) that generate the highest accuracy in predicting familial hypercholesterolaemia. We anticipate that machine learning algorithms will enhance rapid diagnosis of familial hypercholesterolaemia by providing the tools to develop a virtual screening test.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_8-Classifying_Familial_Hypercholesterolaemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Flipped Learning Engagement Model to Teach Programming Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120907</link>
        <id>10.14569/IJACSA.2021.0120907</id>
        <doi>10.14569/IJACSA.2021.0120907</doi>
        <lastModDate>2021-09-30T08:09:08.8870000+00:00</lastModDate>
        
        <creator>Ahmad Shaarizan Shaarani</creator>
        
        <creator>Norasiken Bakar</creator>
        
        <subject>Flipped learning engagement model; online programming course; student achievement; blended learning; technical based higher learning institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Online learning at higher learning institutions has changed over the years as technology evolves. The main purpose of this study was to propose a new Flipped Learning Engagement (FLE) model. User testing to measure students’ achievement was carried out on four separate groups, namely the Control Technology (CT), Experimental Technology (ET), Control Engineering (CE) and Experimental Engineering (EE) groups, and the results were compared using a t-test. The findings showed that the experimental groups (ET and EE), which underwent the learning and teaching process using the proposed FLE model, achieved higher levels of achievement than the control groups (CT and CE), which underwent the conventional approach to teaching and learning. The study contributes mainly to the design and development of the FLE model. The FLE model proposed in this study can guide not only programming-related educators but all educators who use the flipped learning approach in their learning and teaching process. Future studies should examine the proposed model in depth and improve it by adding new entities, enabling its application in related courses at various levels.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_7-A_New_Flipped_Learning_Engagement_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Facilitator Support System that Overlooks Keywords Expressing the True Intentions of All Discussion Participants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120906</link>
        <id>10.14569/IJACSA.2021.0120906</id>
        <doi>10.14569/IJACSA.2021.0120906</doi>
        <lastModDate>2021-09-30T08:09:08.7470000+00:00</lastModDate>
        
        <creator>Chika Oshima</creator>
        
        <creator>Tatsuya Oyama</creator>
        
        <creator>Chihiro Sasaki</creator>
        
        <creator>Koichi Nakayama</creator>
        
        <subject>Keyword movement disclose system; discussion board system; facilitator; putting keywords in box</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>This paper proposes the Keyword Movement Disclose System (KMDS), which allows a discussion facilitator to watch a record of the moving keywords in a Discussion Board System (DBS). In the DBS, the discussion participants place each keyword in a box made for each item to be discussed. The keywords in the box are expected to show each participant’s opinion and intention, because the participant’s individual display is not disclosed to the other participants. Therefore, if the facilitator of the discussion can see the true opinions and intentions of all participants via the keywords in the boxes through the KMDS, the facilitator can advance the discussion more appropriately and draw conclusions based on diverse opinions. Moreover, the KMDS may contribute to the development of an artificial intelligence facilitator. In this paper, we conducted an experiment in which ten facilitators were asked to listen to a recorded discussion held by nine participants using the DBS. Five of the facilitators used the KMDS while listening to the recorded discussion. The results suggest that the KMDS may allow facilitators to build a consensus from the participants’ various viewpoints, although the experiment did not show much difference between the conditions with and without the KMDS.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_6-A_Facilitator_Support_System_that_Overlooks_Keywords.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG-based Brain Computer Interface Prosthetic Hand using Raspberry Pi 4</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120905</link>
        <id>10.14569/IJACSA.2021.0120905</id>
        <doi>10.14569/IJACSA.2021.0120905</doi>
        <lastModDate>2021-09-30T08:09:08.7300000+00:00</lastModDate>
        
        <creator>Haider Abdullah Ali</creator>
        
        <creator>Diana Popescu</creator>
        
        <creator>Anton Hadar</creator>
        
        <creator>Andrei Vasilateanu</creator>
        
        <creator>Ramona Cristina Popa</creator>
        
        <creator>Nicolae Goga</creator>
        
        <creator>Hussam Al Deen Qhatan Hussam</creator>
        
        <subject>Prosthetic; brain computer interface (BCI); electroencephalography (EEG); raspberry pi 4; EMOTIV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Accidents, wars, or diseases can affect the upper limbs in such a manner that amputation is required, with dramatic effects on people’s ability to perform tasks such as grabbing, holding, or moving objects. In this context, it is necessary to develop solutions that support upper limb amputees in performing daily routine activities. A BCI (brain-computer interface) offers the ability to use the neural activity of the brain to communicate with or control robots, artificial limbs, or machines without physical movement. This article proposes an electroencephalography (EEG) mind-controlled prosthetic arm. It eliminates drawbacks such as the high price, heaviness, and dependency on intact nerves associated with myoelectric and other types of prostheses currently in use. The developed prototype is a low-cost 3D-printed prosthetic arm controllable via brain commands using EEG-based BCI technology. It includes a stepper motor controlled by a Raspberry Pi 4 to perform actions such as open/close movement and holding objects. The project successfully achieved its aim of creating a prototype of a mind-controlled prosthetic arm system, together with the necessary experimental tests and calculations regarding torque, force, and the weight that the hand can carry. The paper proves the feasibility of the approach and opens the route to improving the design of the prototype so it can be attached to an upper-limb amputation stump.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_5-EEG_Based_Brain_Computer_Interface_Prosthetic_Hand.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facilitating Personalisation in Epilepsy with an IoT Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120904</link>
        <id>10.14569/IJACSA.2021.0120904</id>
        <doi>10.14569/IJACSA.2021.0120904</doi>
        <lastModDate>2021-09-30T08:09:08.7000000+00:00</lastModDate>
        
        <creator>S. A McHale</creator>
        
        <creator>E. Pereira</creator>
        
        <subject>IoT; healthcare systems; smart healthcare; personalisation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>The premises made in this paper put the future of personalisation in epilepsy into focus, a focus that shifts from one-size-fits-all to the core of epilepsy patients’ individual characteristics. The emerging approach of personalised healthcare is known to be facilitated by the Internet of Things (IoT), and sensor-based IoT devices are in popular demand among healthcare providers due to the constant need for patient monitoring. In epilepsy, the most common and complex patients to deal with are those with multiple strands of epilepsy. These extremely varied kinds of patients should be monitored precisely according to their identified key symptoms and specific characteristics, and treatment tailored accordingly. Consequently, paradigms are needed to personalise this information. By focusing upon the personalised parameters that make epilepsy patients distinct, this paper proposes an IoT-based epilepsy monitoring model endorsing a more accurate and refined way of remotely monitoring the ‘individual’ patient.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_4-Facilitating_Personalisation_in_Epilepsy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Quality of e-Commerce Service by Implementing Combination Models with Step-by-Step, Bottom-Up Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120903</link>
        <id>10.14569/IJACSA.2021.0120903</id>
        <doi>10.14569/IJACSA.2021.0120903</doi>
        <lastModDate>2021-09-30T08:09:08.6700000+00:00</lastModDate>
        
        <creator>Hemn Barzan Abdalla</creator>
        
        <creator>Ge Chengwei</creator>
        
        <creator>Baha Ihnaini</creator>
        
        <subject>E-commerce; website; framework; criteria; model; approach; service; quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>e-Commerce, as a hot industry, plays an important role in people&#39;s lives. People visit e-commerce websites, check what they want, click buy, and finally complete the transaction. Given the developments in electronic services at the global level, the intensification of competition, and the growing experience of electronic shoppers, companies&#39; awareness and understanding of the distinctive characteristics of the population in a region and its purchasing habits have become essential for e-commerce and service companies. It is therefore imperative that companies keep pace with these developments and provide high-quality, efficient electronic services via the internet, focusing on the most important requirements for customer satisfaction, especially in light of the information and technological revolution. However, customers will have an awful experience if they visit crudely made e-commerce websites. Kunst A. (2019, Dec 20) claimed that around 37.4% of customers complained that they had an awful shopping experience, because the service quality of the e-commerce websites was not up to standard. This research aims to improve the quality of e-commerce service by using the Comprehensive and Referential Combination Model and implementing a Step-by-Step, Bottom-Up approach. Finally, we recommend improving the quality of e-commerce service in construct and revision ways within parts of this model.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_3-Improving_the_Quality_of_e_Commerce_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring Indoor Activity of Daily Living using Thermal Imaging: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120902</link>
        <id>10.14569/IJACSA.2021.0120902</id>
        <doi>10.14569/IJACSA.2021.0120902</doi>
        <lastModDate>2021-09-30T08:09:08.6530000+00:00</lastModDate>
        
        <creator>Hassan M. Ahmed</creator>
        
        <creator>Bessam Abdulrazak</creator>
        
        <subject>Activity monitoring; activities of daily living (ADLs); thermal imaging; indoor monitoring; thermal sensor array (TSA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Monitoring the indoor activities of daily living (ADLs) of a person depends on sensor type, power supply stability, and connectivity stability, not to mention artifacts introduced by the person himself. Multiple challenges have to be overcome in this field, such as detecting the precise spatial location of the person and estimating vital signs like the individual’s average temperature. Privacy is another aspect of the problem that must be considered with care. Identifying the person’s posture without a camera is a further challenge; posture identification is key to assisting the detection of a person’s fall. Thermal imaging could be a proper solution to most of the mentioned challenges: it enables monitoring of both the person’s average temperature and spatial location while maintaining privacy. In this research, an IoT system for monitoring indoor ADLs using a thermal sensor array (TSA) is proposed. Three classes of ADLs are introduced: daily activity, sleeping activity and no-activity. Estimating a person’s average temperature using TSAs is introduced in this paper as well. Results have shown that the three activity classes can be identified, as well as the person’s average temperature during day and night. The person’s spatial location can also be determined while his/her privacy is maintained.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_2-Monitoring_Indoor_Activity_of_Daily_Living.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Flow Control for Serverless Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120901</link>
        <id>10.14569/IJACSA.2021.0120901</id>
        <doi>10.14569/IJACSA.2021.0120901</doi>
        <lastModDate>2021-09-30T08:09:08.6070000+00:00</lastModDate>
        
        <creator>Rishabh Chawla</creator>
        
        <subject>Information flow control; serverless systems; language based security; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(9), 2021</description>
        <description>Security for serverless systems is looked at from two perspectives: server-level security, managed by the infrastructure company, and application-level security, managed by the tenants. The trusted computing base for cloud systems is enormous, as it encompasses all the functions running on a system. Authentication for these systems is mostly done using ACLs. Most serverless systems share data, and thus ACLs are not sufficient. IFC with an appropriate label design can enforce security continuously throughout the application. IFC can be used to increase confidence between functions, other functions, and the cloud provider, and also to mitigate security vulnerabilities, making the system safer. A survey of present IFC implementations for serverless systems is presented, together with system designs that are relevant to serverless systems and could be added to serverless system architectures, and an idea of an IFC model that could be effectively applied in a decentralised model like serverless systems.</description>
        <description>http://thesai.org/Downloads/Volume12No9/Paper_1-Information_Flow_Control_for_Serverless_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Stumbling Detection Model for SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01208104</link>
        <id>10.14569/IJACSA.2021.01208104</id>
        <doi>10.14569/IJACSA.2021.01208104</doi>
        <lastModDate>2021-08-31T10:27:01.3730000+00:00</lastModDate>
        
        <creator>Reham Nasser Farag</creator>
        
        <creator>Nashaat Al-Wakeel</creator>
        
        <creator>Mohamed Sameh Hassanein</creator>
        
        <subject>SMEs; financial statements; business intelligence (BI); DSR methodology; financial decisions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Access to financing is still one of the greatest obstacles facing Small and Medium Enterprises (SMEs) all over the world and prevents them from developing, because a large percentage of these projects fail or stumble and go bankrupt due to failures in their financial management and asset management decisions. SMEs not only play an important role in the global economy but are also a source of social stability and play an effective role in a country’s economy. Previous research indicates that BI systems are mainly applied in large enterprises, while the use of BI in SMEs is very rare. Thus, to enhance financial decisions, the study uses various ICT tools such as business intelligence (BI). This research develops a Financial Stumbling Detection (FSD) model using BI that can help SME stakeholders take proper financial decisions. The FSD model identifies the initial stumble/defect in SMEs by using some financial ratios, and it was created relying on the Design Science Research (DSR) methodology. Using BI in SMEs gives the necessary insight into a business&#39;s financial data and highlights whether the business is heading for a financial crisis and potential failure.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_104-Financial_Stumbling_Detection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Design of a Converter Decimal to BCD Encoder and a Reversible 4-bit Controlled Up/Down Synchronous Counter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01208103</link>
        <id>10.14569/IJACSA.2021.01208103</id>
        <doi>10.14569/IJACSA.2021.01208103</doi>
        <lastModDate>2021-08-31T10:27:01.3570000+00:00</lastModDate>
        
        <creator>Issam Andaloussi</creator>
        
        <creator>Sedra Moulay Brahim</creator>
        
        <creator>Mariam El Ghazi</creator>
        
        <subject>Decimal to BCD Encoder (D2BE); Reversible Binary Counter; Number of Gates (CG); Number of Garbage Output (NGO); Number of Constant Inputs (NCI); Quantum Cost (QC); Hardware Complexity (HC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The fields of quantum computing, reversible logic, and nanotechnology have earned much attention from researchers in recent years due to their low power dissipation. Quantum computing has been a guiding light for nanotechnology, optical information computing, low-power CMOS design, and computer science. Moreover, the dissipation of energy in combinatorial logic circuits has become one of the most important aspects to be avoided. This problem is remedied by reversible logic, which favors the reproduction of inputs at the outputs owing to the absence of unused bits. Every unused bit of information generates a loss of information, causing a loss of energy in the form of heat, whereas reversible logic leads to zero heat dissipation. Among the components affected by reversible logic are the reversible binary counter and the converter from decimal to BCD encoder (D2BE), which are considered essential elements. This article proposes an optimized reversible design of a converter from decimal to BCD encoder (D2BE) and an optimized design of a reversible binary counter with up/down control. Our designs show an improvement compared to previous works by replacing some reversible gates with others while keeping the same functionality and improving performance criteria in terms of the number of gates, garbage outputs, constant inputs, quantum cost, delay, and hardware complexity.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_103-Optimized_Design_of_a_Converter_Decimal_to_BCD_Encoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blind, Secured and Robust Watermarking for 3-D Polygon Mesh using Vertex Curvature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01208102</link>
        <id>10.14569/IJACSA.2021.01208102</id>
        <doi>10.14569/IJACSA.2021.01208102</doi>
        <lastModDate>2021-08-31T10:27:01.3570000+00:00</lastModDate>
        
        <creator>Priyanka Singh</creator>
        
        <creator>K Jyothsna Devi</creator>
        
        <subject>3-D mesh watermarking; mean curvature; radial distance; spherical coordinates; visual masking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In this paper, a blind, imperceptible, robust and secure watermarking scheme for 3-D mesh models is presented. Here, the watermark is embedded in deeper surface vertices to minimize perceivable distortion. Deeper surface vertices are selected on the basis of their mean curvature (less than zero) and converted to spherical coordinates. Of the three spherical coordinates, the radial distance represents the approximate mesh and is invariant to distortionless attacks. Therefore, watermark bits are embedded by modifying the distribution of radial distances to make the proposed scheme robust against such attacks. Radial distances are divided into bins and normalized to the range [0, 1]. Each bin accommodates one watermark bit. The watermark is embedded repeatedly in the 3-D mesh to resist cropping and simplification attacks. To ensure higher security, a 128-bit unique watermark is generated by hashing (MD5 algorithm) the mean of the histogram map obtained from a grayscale watermark image. Watermark bits are extracted from the bins by comparing the mean of each bin with a reference value. Since the original mesh is not required at the time of extraction, the proposed scheme is blind. Experimental results demonstrate that the proposed scheme has good visual masking and higher robustness against various attacks, and shows improved performance as compared to some of the prominent schemes.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_102-Blind_Secured_and_Robust_Watermarking_for_3_D_Polygon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Vehicle Detection, Tracking, and Inter-vehicle Distance Estimation based on Stereovision and Deep Learning using YOLOv3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01208101</link>
        <id>10.14569/IJACSA.2021.01208101</id>
        <doi>10.14569/IJACSA.2021.01208101</doi>
        <lastModDate>2021-08-31T10:27:01.3400000+00:00</lastModDate>
        
        <creator>Omar BOURJA</creator>
        
        <creator>Hatim DERROUZ</creator>
        
        <creator>Hamd AIT ABDELALI</creator>
        
        <creator>Abdelilah MAACH</creator>
        
        <creator>Rachid OULAD HAJ THAMI</creator>
        
        <creator>Francois BOURZEIX</creator>
        
        <subject>Stereovision; stereo image; YOLOv3 deep neural network; convolutional neural network; vehicle detection; tracking; bounding boxes; distance estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In this paper, we propose a robust real-time vehicle tracking and inter-vehicle distance estimation algorithm based on stereovision. Traffic images are captured by a stereoscopic system installed on the road, and moving vehicles are detected with the YOLOv3 deep neural network algorithm. The real-time video then passes through a stereoscopy-based measurement algorithm to estimate the distance between detected vehicles. However, real-time object detection has always been a challenging task because of occlusion, scale, illumination, etc. Many convolutional neural network models for object detection have therefore been developed in recent years, but they cannot be used for real-time object analysis because of their slow recognition speed. The currently best-performing model is the unified object detection model You Only Look Once (YOLO). In our experiments, however, we found that despite its very good detection precision, YOLO still has some limitations: it processes every image separately, even in a continuous video, so much important identification information can be lost. Therefore, after vehicle detection and tracking, inter-vehicle distance estimation is performed.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_101-Real_Time_Vehicle_Detection_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Central Nodes in Directed and Weighted Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01208100</link>
        <id>10.14569/IJACSA.2021.01208100</id>
        <doi>10.14569/IJACSA.2021.01208100</doi>
        <lastModDate>2021-08-31T10:27:01.3270000+00:00</lastModDate>
        
        <creator>Sharanjit Kaur</creator>
        
        <creator>Ayushi Gupta</creator>
        
        <creator>Rakhi Saxena</creator>
        
        <subject>Centrality; weighted network; directed network; migration network; world input output trade network; community structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>An issue of critical interest in complex network analysis is the identification of key players or important nodes. Centrality measures quantify the notion of importance and hence provide a mechanism to rank nodes within a network. Several centrality measures have been proposed for unweighted, undirected networks, but applying or modifying them for networks in which edges are weighted and directed is challenging. Existing centrality measures for weighted, directed networks are by and large domain-specific. Depending upon the application, these measures prefer either the incoming or the outgoing links of a node to measure its importance. In this paper, we introduce a new centrality measure, Affinity Centrality, that leverages both the weighted in-degrees and out-degrees of a node’s local neighborhood. A tuning parameter permits the user to give preference to a node’s neighbors in either the incoming or outgoing direction. To evaluate the effectiveness of the proposed measure, we use three types of real-world networks: migration, trade, and animal social networks. Experimental results on these weighted, directed networks demonstrate that our centrality measure can rank nodes in consonance with the ground truth much better than the other established measures.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_100-Identifying_Central_Nodes_in_Directed_and_Weighted_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study on Intrusion and Extrusion Phenomena</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120899</link>
        <id>10.14569/IJACSA.2021.0120899</id>
        <doi>10.14569/IJACSA.2021.0120899</doi>
        <lastModDate>2021-08-31T10:27:01.3100000+00:00</lastModDate>
        
        <creator>Md. Abdul Hamid</creator>
        
        <creator>Marjia Akter</creator>
        
        <creator>M. F. Mridha</creator>
        
        <creator>Muhammad Mostafa Monowar</creator>
        
        <creator>Madini O. Alassafi</creator>
        
        <subject>Intrusion; extrusion; intrusion detection; security and survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>This paper presents a comprehensive survey on intrusion and extrusion phenomena and their existing detection and prevention techniques. Intrusion and extrusion events, which breach the security system, hamper the protection of devices and systems. Needless to say, security threats are flourishing with new levels of complexity, making them difficult to recognize. Security is therefore the central issue in developing a boundless, constant, and reliable web. In this paper, our purpose is to unveil and categorize all possible intrusion and extrusion events, bring out the issues related to these events, and explore the solutions associated with them. We further offer recommendations to improve security with respect to these issues. We strongly believe that this survey may help in understanding intrusion and extrusion phenomena and pave the way for better designs to protect against security threats.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_99-A_Comprehensive_Study_on_Intrusion_and_Extrusion_Phenomena.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A RED-BET Method to Improve the Information Diffusion on Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120898</link>
        <id>10.14569/IJACSA.2021.0120898</id>
        <doi>10.14569/IJACSA.2021.0120898</doi>
        <lastModDate>2021-08-31T10:27:01.2930000+00:00</lastModDate>
        
        <creator>Son N. Duong</creator>
        
        <creator>Hanh P. Du</creator>
        
        <creator>Cuong N. Nguyen</creator>
        
        <creator>Hoa N. Nguyen</creator>
        
        <subject>Information diffusion; graph reduction; betweenness centrality; parallel computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Information diffusion in social networks is widely used in many fields today, from online marketing and e-government campaigns to predicting large social events. Some studies focus on discovering methods to accelerate the parameter calculation for information diffusion forecasting in order to improve the efficiency of the information diffusion problem. Betweenness Centrality is a significant indicator for identifying the important people on social networks who should be targeted to maximize information diffusion. Thus, in this paper, we propose the RED-BET method to improve information diffusion on social networks through a hybrid approach that quickly determines the nodes with high Betweenness Centrality. The main idea of the proposed method is to combine graph reduction with parallelization of the Betweenness Centrality calculation. Experimental results on the currently popular large datasets of SNAP and Animer demonstrate that our proposed method improves performance by 1.2 to 1.41 times compared to the TeexGraph toolkit, 1.76 to 2.55 times compared to NetworKit, and 1.05 to 1.1 times compared to the bigGraph toolkit.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_98-A_RED_BET_Method_to_Improve_the_Information_Diffusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Discrete Brain Storm Algorithm Solves 3D Protein Structure Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120896</link>
        <id>10.14569/IJACSA.2021.0120896</id>
        <doi>10.14569/IJACSA.2021.0120896</doi>
        <lastModDate>2021-08-31T10:27:01.2800000+00:00</lastModDate>
        
        <creator>Alaa Fahim</creator>
        
        <creator>Nehad Abdelraheem</creator>
        
        <subject>Brain storm optimization; integer programming problem; three dimensional protein structure prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Brain Storm Optimization (BSO) is one of the most effective swarm intelligence algorithms; it simulates the human brainstorming process to find optimal solutions to optimization problems. The BSO method has been successfully applied to many real-world problems. This study employs the BSO method, called BSO-IP, to solve the integer programming problem. Our method collects the best solutions to generate new solutions and then searches for optimal solutions in all areas of the search space. The BSO-IP method solves several benchmark integer programming problems to test its efficiency. BSO-IP is then used to simulate the 3D protein structure prediction problem, which is mathematically formulated as an integer programming problem, to prove the viability and helpfulness of our proposed algorithm. The experimental results on different protein structure benchmarks show that our proposed method is superior in performance, convergence, and stability when predicting protein structure. We found our method to be promising compared to other approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_96-An_Adaptive_Discrete_Brain_Storm_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Web System to Optimize the Logistics and Costing Processes of a Chocolate Manufacturing Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120897</link>
        <id>10.14569/IJACSA.2021.0120897</id>
        <doi>10.14569/IJACSA.2021.0120897</doi>
        <lastModDate>2021-08-31T10:27:01.2800000+00:00</lastModDate>
        
        <creator>Richard Arias-Marreros</creator>
        
        <creator>Keyla Nalvarte-Dionisio</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Adobe XD; costing; logistics; scrum methodology; web system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The research work is focused on solving a problem faced by a company dedicated to the manufacture of chocolates. This company does not have a computer system to help it improve its management. The information recorded from the logistics and cost processes is stored locally in Excel files, and the information from the operational processes is stored on bond paper sheets and then transferred to Excel files. Since information is very valuable to a company, it must be orderly, accessible, and secure. Therefore, the Scrum methodology was adopted for the development of a prototype web system for the company. The prototype was developed with Adobe XD software because it is easy to use and complete for designing web page interfaces. As a result of the development of the web system prototype, information is recorded in an orderly, interactive, easy, fast, and above all secure way. It was concluded that the management of the logistics and cost area of the company was optimized and improved.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_97-Design_of_a_Web_System_to_Optimize_the_Logistics_and_Costing_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Base Driven Automatic Text Summarization using Multi-objective Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120895</link>
        <id>10.14569/IJACSA.2021.0120895</id>
        <doi>10.14569/IJACSA.2021.0120895</doi>
        <lastModDate>2021-08-31T10:27:01.2630000+00:00</lastModDate>
        
        <creator>Chihoon Jung</creator>
        
        <creator>Wan Chul Yoon</creator>
        
        <creator>Rituparna Datta</creator>
        
        <creator>Sukhwan Jung</creator>
        
        <subject>Multi-document summarization; evolutionary multi-objective optimization; knowledge base; named entity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Automatic text summarization aims to automatically generate a condensed summary from a large set of documents on the same topic. We formulate the text summarization task as a multi-objective optimization problem by defining information coverage and diversity as two conflicting objective functions. With this formulation, we propose a novel technique to improve performance using a knowledge base. The main rationale of the approach is to extract important text features of the original text by detecting important entities in a knowledge base. Next, an improvement on the multi-objective optimization algorithm is also proposed for the automatic text summarization problem. The focus is on improving the efficiency of each step in the evolutionary multi-objective optimization process, which is applicable to all tasks with the same problem formulation. The resulting summary of the suggested method ensures maximum coverage of the original documents and diversity among the sentences in the summary. Experiments on the DUC2002 and DUC2004 multi-document summarization datasets show that the proposed model is effective compared to other methods.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_95-Knowledge_Base_Driven_Automatic_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Unique Glottal Flow Parameters based Features for Anti-spoofing Countermeasures in Automatic Speaker Verification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120894</link>
        <id>10.14569/IJACSA.2021.0120894</id>
        <doi>10.14569/IJACSA.2021.0120894</doi>
        <lastModDate>2021-08-31T10:27:01.2470000+00:00</lastModDate>
        
        <creator>Ankita Chadha</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>Lorita Angeline</creator>
        
        <subject>Spoof detection; synthetic speech; glottal excitation; speaker verification; voice conversion; text-to-speech</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The domain of Automatic Speaker Verification (ASV) is blooming with growing developments in feature engineering and artificial intelligence. In spite of this, the system is liable to spoofing attacks in the form of synthetic or replayed speech. The difficulty in detecting synthetic speech is due to recent advancements in voice conversion and text-to-speech systems, which produce natural, indistinguishable speech. To prevent such attacks, there is a need to develop robust spoof detection systems. To achieve this goal, we propose estimating Glottal Flow Parameters (GFP) from genuine speech and synthetic spoof samples. The GFP are further parameterized using time, frequency, and Liljencrants–Fant (LF) models. Along with the GFP features, the Linear Prediction Cepstrum Coefficient (LFCC) and statistical parameters are computed. The GFP features are investigated to prove their usefulness in distinguishing spoofed from genuine speech. The ASVspoof 2019 corpus is used to test the framework, which is evaluated against the baseline models. The proposed spoof detection framework produces an Equal Error Rate (EER) of 2.39% and a tandem Detection Cost Function (t-DCF) of 0.0562, which is found to be better than the state-of-the-art technique.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_94-A_Unique_Glottal_Flow_Parameters_based_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grammatical Error Correction with Denoising Autoencoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120893</link>
        <id>10.14569/IJACSA.2021.0120893</id>
        <doi>10.14569/IJACSA.2021.0120893</doi>
        <lastModDate>2021-08-31T10:27:01.2330000+00:00</lastModDate>
        
        <creator>Krzysztof Pajak</creator>
        
        <creator>Adam Gonczarek</creator>
        
        <subject>Denoising autoencoder transformer; sequence-to-sequence; grammatical error correction; model ensembling; error remarks filtering; fine-tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>A denoising autoencoder sequence-to-sequence model based on the transformer architecture has proved useful for tasks such as summarization, machine translation, and question answering. This paper investigates the possibilities of using this model type for grammatical error correction and introduces a novel method of remark-based model checkpoint output combining. The approach was evaluated on the BEA 2019 shared task and achieved state-of-the-art F-score results of 73.90 on the test set and 56.58 on the development set. This was done without any GEC-specific pre-training, only by fine-tuning the autoencoder model and combining checkpoint outputs. This proves that an efficient model solving GEC can be trained in a matter of hours using a single GPU.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_93-Grammatical_Error_Correction_with_Denoising_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Design of Decoder 2 to 4, 3 to 8 and n to 2n using Reversible Gates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120892</link>
        <id>10.14569/IJACSA.2021.0120892</id>
        <doi>10.14569/IJACSA.2021.0120892</doi>
        <lastModDate>2021-08-31T10:27:01.2170000+00:00</lastModDate>
        
        <creator>Issam Andaloussi</creator>
        
        <creator>Sedra Moulay Brahim</creator>
        
        <creator>Mariam El Ghazi</creator>
        
        <subject>Decoder 2 to 4; Decoder 3 to 8; Decoder n to 2n; Number of Gates (CG); Number of Garbage Outputs (NGO); Number of Constant Inputs (NCI); Quantum Cost (QC); Hardware Complexity (HC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The design of low-consumption CMOS circuits, nanotechnologies, and quantum computing has become increasingly attached to reversible logic. A set of gates has recently been exploited in reversible computing for the design of certain circuits; among these circuits, we find decoders. In this paper, building on a recent study that designed the 2-to-4, 3-to-8, and n-to-2n decoders, our work aims to enhance the previous designs by replacing some reversible gates with others while maintaining their functionality and improving their performance criteria, namely the number of gates (CG), number of garbage outputs (NGO), number of constant inputs (NCI), quantum cost (QC), and hardware complexity (HC). Compared to the base study and other recent studies, we have obtained remarkable results.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_92-Optimized_Design_of_Decoder_2_to_4.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vietnamese Sentence Paraphrase Identification using Pre-trained Model and Linguistic Knowledge</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120891</link>
        <id>10.14569/IJACSA.2021.0120891</id>
        <doi>10.14569/IJACSA.2021.0120891</doi>
        <lastModDate>2021-08-31T10:27:01.2170000+00:00</lastModDate>
        
        <creator>Dien Dinh</creator>
        
        <creator>Nguyen Le Thanh</creator>
        
        <subject>Paraphrase identification; Vietnamese; pre-trained model; linguistics; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The paraphrase identification task identifies whether two text segments share the same meaning, thereby playing a crucial role in various applications such as computer-assisted translation, question answering, and machine translation. Although the literature on paraphrase identification in English and other popular languages is vast and growing, research on this topic in Vietnamese remains relatively untapped. In this paper, we propose a novel method to classify Vietnamese sentence paraphrases, which deploys both a pre-trained model to exploit the semantic context and linguistic knowledge to provide further information in the identification process. Two branches of neural networks built in the Siamese architecture are responsible for learning the differences among the sentence representations. To evaluate the proposed method, we present experiments on two existing Vietnamese sentence paraphrase corpora. The results show that for the same corpora, our method using PhoBERT as a feature vector yields a 94.97% F1-score on the VnPara corpus and a 93.49% F1-score on the VNPC corpus, better than the results of the Siamese LSTM method and the pre-trained models.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_91-Vietnamese_Sentence_Paraphrase_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Model to Analyze Telemonitoring Dyphosia Factors of Parkinson’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120890</link>
        <id>10.14569/IJACSA.2021.0120890</id>
        <doi>10.14569/IJACSA.2021.0120890</doi>
        <lastModDate>2021-08-31T10:27:01.2000000+00:00</lastModDate>
        
        <creator>Mohimenol Islam Fahim</creator>
        
        <creator>Syful Islam</creator>
        
        <creator>Sumaiya Tun Noor</creator>
        
        <creator>Md. Javed Hossain</creator>
        
        <creator>Md. Shahriar Setu</creator>
        
        <subject>Parkinson’s disease; correlation; outliers; machine learning; RFE-based analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>For many years, many people all over the world have been suffering from Parkinson’s disease (PD), and datasets have been generated by recording important PD features for reliable diagnostic decision-making. However, a dataset can contain correlated data points and outliers that affect its output. In this work, a framework is proposed in which the performance of an original dataset is compared to that of its reduced version after removing correlated features and outliers. The dataset is collected from the UCI Machine Learning Repository, and many machine learning (ML) classifiers are used to evaluate its performance in various categories. The same process is repeated on the reduced dataset, and some improvement in prediction accuracy is noticed. Among the ANOVA F-test, RFE, MIFS, and CSFS methods, the Logistic Regression classifier with the RFE-based feature selection technique outperforms all other classifiers. We observed that our improved system demonstrates 82.94% accuracy, 82.74% ROC, and 82.9% F-measure, along with a 17.46% false positive rate and a 17.05% false negative rate, which are better than the prediction accuracy metric values of the primary dataset. Therefore, we hope that this model can be beneficial for physicians to diagnose PD more explicitly.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_90-Machine_Learning_Model_to_Analyze_Telemonitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anti-Islamic Arabic Text Categorization using Text Mining and Sentiment Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120889</link>
        <id>10.14569/IJACSA.2021.0120889</id>
        <doi>10.14569/IJACSA.2021.0120889</doi>
        <lastModDate>2021-08-31T10:27:01.1830000+00:00</lastModDate>
        
        <creator>Rawan Abdullah Alraddadi</creator>
        
        <creator>Moulay Ibrahim El-Khalil Ghembaza</creator>
        
        <subject>Web Text mining; text classification; Arabic computational linguistics; natural language processing; SVM; MNB; opinion mining; hate speech; toxic language detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The aim of this research is to detect and classify websites based on whether their content encourages spreading hate speech toward Islam and Muslims, or Islamophobia, using sentiment analysis and web text mining techniques. In this research, a large dataset corpus has been collected to identify and classify anti-Islamic online content. Our target is to automatically detect the content of websites that are hostile to Islam and transmit extremist ideas against it; the main purpose is to reduce the spread of webpages that give the wrong idea about Islam. The dataset is collected from different sources, and two datasets for the Arabic language (balanced and unbalanced) have been produced. The framework of the proposed approach is described. It is based on a supervised Machine Learning (ML) approach using Support Vector Machine (SVM) and Multinomial Naive Bayes (MNB) models as classifiers, with Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction. Different experiments, at the word level and the tri-gram level, have been conducted on the two datasets, and the obtained results have been compared. The experimental results show that the supervised ML approach at the word level is the best approach for both datasets, producing a high accuracy of 97% on the balanced Arabic dataset using the SVM algorithm with TF-IDF feature extraction. Finally, an interactive web-application prototype has been developed to detect and classify toxic language such as anti-Islamic online text content.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_89-Anti_Islamic_Arabic_Text_Categorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Hepatoma based on Gene Expression using Unitary Matrix of Singular Vector Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120888</link>
        <id>10.14569/IJACSA.2021.0120888</id>
        <doi>10.14569/IJACSA.2021.0120888</doi>
        <lastModDate>2021-08-31T10:27:01.1700000+00:00</lastModDate>
        
        <creator>Lailil Muflikhah</creator>
        
        <creator>Nashi Widodo</creator>
        
        <creator>Wayan Firdaus Mahmudy</creator>
        
        <creator>Solimun</creator>
        
        <creator>Ninik Nihayatul Wahibah</creator>
        
        <subject>Hepatoma; gene expression; feature extraction; unitary matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Hepatoma is a long-term disease with a high risk of mortality. However, the disease is often detected late, at the fourth stage, due to silent symptoms. The hepatitis B virus gene HBx is a viral genome component that triggers liver disease. The virus inserts genetic material into the host and disturbs the cell cycle; the regulation of gene expression is blocked, causing abnormal function, especially in repair and degradation. A microarray is a tool to quantify RNA gene expression in huge volumes without any information about the related potential genes. Therefore, this study proposes a feature extraction method using a unitary singular matrix to simplify the classification model for hepatoma detection. In principle, the features are decomposed using singular vectors to obtain the k-rank value of the pattern. This matrix is applied to representative machine learning algorithms, including KNN, Na&#239;ve Bayes, the C5.0 Decision Tree, and SVM. The experimental results achieved high performance, with an Area Under the Curve (AUC) above 90% on average.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_88-Detection_of_Hepatoma_based_on_Gene_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability and Learning Environment of a Virtual Reality Simulator for Laparoscopic Surgery Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120887</link>
        <id>10.14569/IJACSA.2021.0120887</id>
        <doi>10.14569/IJACSA.2021.0120887</doi>
        <lastModDate>2021-08-31T10:27:01.1530000+00:00</lastModDate>
        
        <creator>Karina Rosas-Paredes</creator>
        
        <creator>Jos&#233; Esquicha-Tejada</creator>
        
        <creator>H&#233;ctor Manrique Morante</creator>
        
        <creator>Agueda Mu&#241;oz del Carpio Toia</creator>
        
        <subject>Usability; learning environment; virtual reality; training; simulator; medical; laparoscopic surgery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>One of the critical aspects of laparoscopic surgery is the training of medical students, who must acquire experience and the correct use of the equipment, which is usually difficult due to various circumstances. One way to improve these activities is by using Virtual Reality technology, which allows immersion in scenarios that simulate reality; however, to achieve its correct use, usability and the learning environment must be considered. This work aims to develop a Virtual Reality simulator that supports the training of students in laparoscopic surgery, evaluating its usability and learning environment. The proposal was developed in five levels so that the student can concentrate more fully on each task. The levels are: clamps, clamps for the camera, cut, cut camera, and cut with clamps; each level has different tasks with which the student can interact in a more orderly way. Usability and the learning environment were evaluated through a survey using a questionnaire. The reliability of the instrument was analysed using Cronbach&#39;s alpha test, and the correlation of its items was assessed with Spearman&#39;s coefficient. The results showed that 85.85% of responses were positive for the learning environment area and 81.18% were positive for the usability area, so it is concluded that the proposal can help train medical students, procedurally and practically, to develop skills in laparoscopic surgery.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_87-Usability_and_Learning_Environment_of_a_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Adaptive Learning Rate on the Accuracy of Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120885</link>
        <id>10.14569/IJACSA.2021.0120885</id>
        <doi>10.14569/IJACSA.2021.0120885</doi>
        <lastModDate>2021-08-31T10:27:01.1370000+00:00</lastModDate>
        
        <creator>Jennifer Jepkoech</creator>
        
        <creator>David Muchangi Mugo</creator>
        
        <creator>Benson K. Kenduiywo</creator>
        
        <creator>Edna Chebet Too</creator>
        
        <subject>CNN; ConvNet; learning rate; gradient descent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Learning rates in gradient descent algorithms have significant effects, especially on the accuracy of a Convolutional Neural Network (CNN). Choosing an appropriate learning rate is still an open issue, and many developers struggle to select a learning rate for a CNN, leading to low classification accuracy. This gap motivated this study to assess the effect of the learning rate on the accuracy of a developed CNN. There are no predefined learning rates for CNNs, so it is hard for researchers to know which learning rate will give good results. This work therefore focused on assessing the effect of the learning rate on CNN accuracy by using different learning rates and observing which performed best. The contribution of this work is an appropriate learning rate for CNNs to improve accuracy during classification, established by assessing different learning rates for CNN-based plant leaf disease classification. Some of the images used in this work were from the PlantVillage dataset, while others were from the Nepal database. The images were pre-processed and then passed to the original CNN model for classification. When the learning rate was 0.0001, the best performance was 99.4% on testing and 100% on training. When the learning rate was 0.00001, the highest performance was 97% on testing and 99.9% on training. The lowest performance observed was 81% accuracy on testing and 99% on training, when the learning rate was 0.001. This work observed that the CNN achieved its highest accuracy with a learning rate of 0.0001, with the best Convolutional Neural Network accuracy observed being 98% on testing and 100% on training.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_85-The_Effect_of_Adaptive_Learning_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review Web Content Mining Tools and its Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120886</link>
        <id>10.14569/IJACSA.2021.0120886</id>
        <doi>10.14569/IJACSA.2021.0120886</doi>
        <lastModDate>2021-08-31T10:27:01.1370000+00:00</lastModDate>
        
        <creator>Manjunath Pujar</creator>
        
        <creator>Monica R Mundada</creator>
        
        <subject>Web content mining; web structure mining; web usage mining; data mining; information retrieval; information extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In recent years, the emergence of the WWW (World Wide Web) has led to the accumulation of a huge amount of information and data. The web thus consists of unstructured and structured information that impacts the day-to-day life of society. Because of this abundance of information, making use of the required information becomes more challenging. This paper provides a comprehensive survey of the current situation and recent trends in web content mining (WCM) and its applications, thereby contributing to the enhancement of upcoming research in WCM. The paper focuses mainly on mining and retrieval techniques, various WCM approaches, challenges, and the processes of information retrieval and information extraction. It describes in detail the four major tasks of web content mining, namely information retrieval, information extraction, generalization, and validation. WCM concentrates on orchestrating, sorting, classifying, collecting, and aggregating web data to provide improved data that can be easily accessed by users. Web content mining tools are needed to scan text, images, and HTML documents and provide results to the search engine, guiding it to produce more productive results for every search based on their importance. The paper also analyses different web content mining tools for the extraction of relevant information from the corresponding web page.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_86-A_Systematic_Review_Web_Content_Mining_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Study on Machine Learning Techniques for Software Bug Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120884</link>
        <id>10.14569/IJACSA.2021.0120884</id>
        <doi>10.14569/IJACSA.2021.0120884</doi>
        <lastModDate>2021-08-31T10:27:01.1230000+00:00</lastModDate>
        
        <creator>Nasraldeen Alnor Adam Khleel</creator>
        
        <creator>K&#225;roly Neh&#233;z</creator>
        
        <subject>Static code analysis; software bug prediction; software metrics; machine learning techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Software bugs are defects or faults in computer programs or systems that cause incorrect or unexpected behaviour. They negatively affect software quality, reliability, and maintenance cost; therefore, many researchers have built and developed models for software bug prediction. To date, several works have used machine learning techniques for software bug prediction. The aim of this paper is to present a comprehensive study of machine learning techniques that have been successfully used to predict software bugs. The paper also presents a software bug prediction model based on the supervised machine learning algorithms Decision Tree (DT), Na&#239;ve Bayes (NB), Random Forest (RF), and Logistic Regression (LR), evaluated on four datasets. We compared the results of our proposed models with those of other studies. The results demonstrate that our proposed models performed better than other models that used the same datasets. The evaluation process and the results of the study show that machine learning algorithms can be used effectively for bug prediction.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_84-Comprehensive_Study_on_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model-driven Architecture for Collaborative Business Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120883</link>
        <id>10.14569/IJACSA.2021.0120883</id>
        <doi>10.14569/IJACSA.2021.0120883</doi>
        <lastModDate>2021-08-31T10:27:01.0900000+00:00</lastModDate>
        
        <creator>Leila Amdah</creator>
        
        <creator>Naima Essadi</creator>
        
        <creator>Adil Anwar</creator>
        
        <subject>Model Driven Engineering (MDE); Business Process Model and Notation (BPMN); Business Process Execution Language (BPEL); ATLAS Transformation Language (ATL); BPMN to BPEL Transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Model Driven Engineering (MDE) was developed to make application development more flexible; it provides a comprehensive interoperability framework for defining interconnected systems and aims to reduce the inherent complexity that partners must face when developing their systems. In collaborative environments, where systems are built through the collaboration of several departments or companies, the MDA (Model Driven Architecture) approach is effective for maintaining and developing this type of system. This paper shows the use of MDE in the context of business process management and presents in detail an architecture for the development of collaborative business processes.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_83-A_Model_Driven_Architecture_for_Collaborative_Business.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Pavement Distress Detection, Classification and Measurement: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120882</link>
        <id>10.14569/IJACSA.2021.0120882</id>
        <doi>10.14569/IJACSA.2021.0120882</doi>
        <lastModDate>2021-08-31T10:27:00.9670000+00:00</lastModDate>
        
        <creator>Brahim Benmhahe</creator>
        
        <creator>Jihane Alami Chentoufi</creator>
        
        <subject>Automated pavement distress detection; smart cities; pavement management system; machine learning; deep neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Road surface distress is unavoidable due to age, vehicle overloading, temperature changes, etc. Initially, pavement maintenance actions took place only after the pavement was badly damaged, which leads to costly corrective actions. Scheduled road surface inspections can therefore extend service life while guaranteeing user security and comfort. Traditional manual and visual inspections do not meet current criteria and consume a relatively large amount of time. The Smart City preventive approach to pavement management requires accurate and scalable data to derive significant indicators and plan efficient maintenance programs. However, the quality of the data depends on the sensors used and the conditions during scanning. Many studies based on different sensors, machine learning algorithms, and deep neural networks have tried to find a sustainable solution. Despite all these studies, pavement distress measurement remains a challenge in Smart Cities, because distress detection alone is not enough to decide on the required maintenance actions: damage location, dimensions, and future development should be reliably detected in real time. This paper summarizes the state-of-the-art methods and technologies used in recent years for pavement distress detection, classification, and measurement. The aim is to evaluate current methods and highlight their limitations, to lay out a blueprint for future research. A PMS (Pavement Management System) in Smart Cities requires automated pavement distress monitoring and maintenance with high accuracy for large road networks.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_82-Automated_Pavement_Distress_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Predictive Model for Colon Cancer Patient using CNN-based Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120880</link>
        <id>10.14569/IJACSA.2021.0120880</id>
        <doi>10.14569/IJACSA.2021.0120880</doi>
        <lastModDate>2021-08-31T10:27:00.9500000+00:00</lastModDate>
        
        <creator>Zarrin Tasnim</creator>
        
        <creator>Sovon Chakraborty</creator>
        
        <creator>F. M. Javed Mehedi Shamrat</creator>
        
        <creator>Ali Newaz Chowdhury</creator>
        
        <creator>Humaira Alam Nuha</creator>
        
        <creator>Asif Karim</creator>
        
        <creator>Sabrina Binte Zahir</creator>
        
        <creator>Md. Masum Billah</creator>
        
        <subject>Colon cancer; MobileNetV2; Max pooling; Average pooling; data loss; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In recent years, the area of medicine and healthcare has made significant advances with the assistance of computational technology, and new diagnostic techniques have been developed during this time. Cancer is the world&#39;s second-largest cause of mortality, claiming the lives of one out of every six individuals. Of the numerous kinds of cancer, the colon cancer variant is the most frequent and lethal. Identifying the illness at an early stage, on the other hand, substantially increases the odds of survival. Cancer diagnosis may be automated using the power of Artificial Intelligence (AI), allowing more cases to be evaluated in less time and at a lower cost. In this research, CNN models are employed to analyse imaging data of colon cells. For colon cell image classification, CNNs with max pooling and average pooling layers and the MobileNetV2 model are utilized. To determine the learning rate, the models are trained and evaluated at various epochs. The accuracies of the max pooling and average pooling models are found to be 97.49% and 95.48%, respectively, and MobileNetV2 outperforms the other two models with the most remarkable accuracy of 99.67% and a data loss rate of 1.24.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_80-Deep_Learning_Predictive_Model_for_Colon_Cancer_Patient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Indian Sign Language Sentence Recognition using INSIGNVID: Indian Sign Language Video Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120881</link>
        <id>10.14569/IJACSA.2021.0120881</id>
        <doi>10.14569/IJACSA.2021.0120881</doi>
        <lastModDate>2021-08-31T10:27:00.9500000+00:00</lastModDate>
        
        <creator>Kinjal Mistree</creator>
        
        <creator>Devendra Thakor</creator>
        
        <creator>Brijesh Bhatt</creator>
        
        <subject>Indian sign language; sign language recognition; pretrained models; transfer learning; vision-based approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Sign language, used by the Deaf community, is a fully visual language with its own grammar. Deaf people find it very difficult to express their feelings to other people, since the others lack knowledge of the sign language used by the Deaf community. Due to differences in vocabulary and grammar among sign languages, methods used for other international sign languages cannot be adopted wholesale for Indian Sign Language (ISL) recognition. Continuous sign language sentence recognition and translation into text is difficult to handle, as no large video dataset of ISL sentences is available. INSIGNVID, the first Indian Sign Language video dataset, is proposed here, and with this dataset as input a novel approach is presented that converts a video of an ISL sentence into an appropriate English sentence using transfer learning. The proposed approach gives promising results on our dataset with MobileNetV2 as the pretrained model.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_81-Towards_Indian_Sign_Language_Sentence_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyclic Path Planning of Hyper-redundant Manipulator using Whale Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120879</link>
        <id>10.14569/IJACSA.2021.0120879</id>
        <doi>10.14569/IJACSA.2021.0120879</doi>
        <lastModDate>2021-08-31T10:27:00.9330000+00:00</lastModDate>
        
        <creator>Affiani Machmudah</creator>
        
        <creator>Setyamartana Parman</creator>
        
        <creator>Aijaz Abbasi</creator>
        
        <creator>Mahmud Iwan Solihin</creator>
        
        <creator>Teh Sabariah Abd Manan</creator>
        
        <creator>Salmia Beddu</creator>
        
        <creator>Amiruddin Ahmad</creator>
        
        <creator>Nadiah Wan Rasdi</creator>
        
        <subject>Hyper-redundant; path planning; whale optimization algorithm; sustainable manufacturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>This paper develops a path planning algorithm for hyper-redundant manipulators that achieves a cyclic property. The basic idea is based on a geometrical analysis of a 3-link planar series manipulator, in which there is an orientation angle boundary for a prescribed path. To achieve repetitive behavior in hyper-redundant manipulators consisting of 3-link components, an additional path is chosen in such a way that it is a repetitive curve with the same curve frequency as the prescribed end-effector path. To solve the redundancy resolution, meta-heuristic optimizations, namely the Genetic Algorithm (GA) and the Whale Optimization Algorithm (WOA), are applied to search for optimal trajectories inside local orientation angle boundaries. Results show that, using constant local orientation angle trajectories for the 3-link component, the cyclic properties can be achieved. The performance of the WOA, a swarm-based meta-heuristic optimization, is very promising, as it generally obtains a lower fitness value than the GA. Depending on the complexity of the path planning, dividing the path into several stages via intermediate points may be necessary to achieve a good posture. Using the developed approach, not only is the cyclic property obtained but the optimal movement of the hyper-redundant manipulator is also achieved.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_79-Cyclic_Path_Planning_of_Hyper_redundant_Manipulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Data Security using Compression Integrated Pixel Interchange Model with Efficient Encryption Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120878</link>
        <id>10.14569/IJACSA.2021.0120878</id>
        <doi>10.14569/IJACSA.2021.0120878</doi>
        <lastModDate>2021-08-31T10:27:00.9200000+00:00</lastModDate>
        
        <creator>Naga Raju Hari Manikyam</creator>
        
        <creator>Munisamy Shyamala Devi</creator>
        
        <subject>Image segmentation; image pixel extraction; pixel compression; pixel interchange; image security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>We live in a digital era in which communication is largely based on the transfer of digital information over data networks. An encoding system that is efficient and simple yet secure must be developed for quick and prompt transmission. When a file is sent from source to destination, it is difficult to maintain the privacy of the information. Image encryption is vital for securing sensitive information or images from unauthorized readers. Using a selective encryption technique, the original image pixel values are completely obscured, so that an intruder cannot retrieve statistical data from the original image. This paper introduces a new methodology for securely transmitting digital data over public networks. Image compression, a well-established field, enables speedy transmission and efficient information storage. In the initial process, the original image is divided into blocks of the same size and sub-segmentation is performed for accurate extraction of image content within the boundaries. A random matrix is used to swap the pixels of neighboring sub-blocks; each pixel is randomly exchanged with those of neighboring blocks using the random matrix, each block is then encrypted with the proposed function, and the encrypted data can be stored in the cloud. The proposed Image Segmentation based Compression Model with Pixel Interchange Encryption (ISbCPIE) provides high security for images transmitted over the network, with compression achieving rapid transmission and efficient storage. The proposed model is compared with traditional models, and the results show that its security levels are better than those of the existing models.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_78-Improving_Data_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ExMrec2vec: Explainable Movie Recommender System based on Word2vec</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120876</link>
        <id>10.14569/IJACSA.2021.0120876</id>
        <doi>10.14569/IJACSA.2021.0120876</doi>
        <lastModDate>2021-08-31T10:27:00.9030000+00:00</lastModDate>
        
        <creator>Amina SAMIH</creator>
        
        <creator>Abderrahim GHADI</creator>
        
        <creator>Abdelhadi FENNAN</creator>
        
        <subject>Recommender system; explainable artificial intelligence; machine learning; Word2vec</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Based on the user profile, a recommender system aims to offer the user items that may interest them. Recommendations have been applied successfully in various fields; recommended items include movies, books, travel and tourism services, friends, research articles, research queries, and much more. Hence the presence of recommender systems in many areas, in particular movie recommendation. Most current machine learning recommender systems serve as black boxes that do not provide the user with any insight into, or justification for, the system&#39;s logic, which puts users at risk of losing their confidence. Recommender systems also suffer from information overload, which poses numerous problems, including high cost, slow data processing, and high time complexity. That is why researchers have been using graph embedding algorithms, which have been successful in recent years, to reduce the quantity of data in the recommendation field. This work aims to improve the quality of recommendations and the simplicity of recommendation explanations based on the word2vec graph embeddings model.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_76-ExMrec2vec_Explainable_Movie_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Blockchain in the Internet of Things Coordination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120877</link>
        <id>10.14569/IJACSA.2021.0120877</id>
        <doi>10.14569/IJACSA.2021.0120877</doi>
        <lastModDate>2021-08-31T10:27:00.9030000+00:00</lastModDate>
        
        <creator>Radia Belkeziz</creator>
        
        <creator>Zahi Jarir</creator>
        
        <subject>Internet of things; IoT; Internet of things coordination; blockchain; smart contract; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Nowadays, the Internet of Things (IoT) has generated enormous interest from industry in creating distributed and innovative solutions. However, achieving this goal is a tedious task and presents several open challenges, as the literature points out. One of the most complex is the IoT coordination service; unfortunately, most research works rarely give importance to this service in their proposed models or architectures. Our contribution therefore addresses this open issue and proposes a solution capable of implementing advanced processes based on orchestration, choreography, or both mechanisms. Moreover, to conduct both coordination mechanisms efficiently when sharing knowledge or tasks between connected objects, we integrate smart contracts to guarantee the modalities of behavior change in the coordination mechanism. Smart contracts are a safe way to decide the coordination mechanism based on the state of the system environment. To validate our approach, we have built a technical architecture based on a multi-agent system to abstract the connected objects of IoT systems, blockchain technology, and the frameworks and languages required for collaboration processes, such as BPMN, BPEL, and BPEL4CHOR. Carbon leakage is used as a case study for experimentation.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_77-Using_Blockchain_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simplified IT Risk Management Maturity Audit System based on “COBIT 5 for Risk”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120875</link>
        <id>10.14569/IJACSA.2021.0120875</id>
        <doi>10.14569/IJACSA.2021.0120875</doi>
        <lastModDate>2021-08-31T10:27:00.8870000+00:00</lastModDate>
        
        <creator>Hasnaa Berrada</creator>
        
        <creator>Jaouad Boutahar</creator>
        
        <creator>Souha&#239;l El Ghazi El Houssa&#239;ni</creator>
        
        <subject>IT risk management; COBIT 5 for risk; maturity audit system; COBIT 5 enablers; analysis axes; maturity scale and score; maturity audit report</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In recent years, risk management has emerged as a key success factor in ensuring both the growth and the survival of any organization. Moreover, dependence on IT has become systematic within organizations, which implies the importance of implementing an IT risk management system in order to manage IT risks well. Several standards deal with enterprise risk management in general or information security in particular, but few deal with IT risk management specifically. COBIT 5 (Control Objectives for Information and related Technology), for example, deals with IT risk management but is complicated to deploy. The purpose of this article is to describe a simplified IT risk management maturity audit system for an organization based on “COBIT 5 for Risk”. This system aims to evaluate the maturity of IT risk management before proceeding to the implementation or update of an IT risk management system within an organization.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_75-Simplified_IT_Risk_Management_Maturity_Audit_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy C-mean Missing Data Imputation for Analogy-based Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120874</link>
        <id>10.14569/IJACSA.2021.0120874</id>
        <doi>10.14569/IJACSA.2021.0120874</doi>
        <lastModDate>2021-08-31T10:27:00.8730000+00:00</lastModDate>
        
        <creator>Ayman Jalal AlMutlaq</creator>
        
        <creator>Dayang N. A. Jawawi</creator>
        
        <creator>Adila Firdaus Binti Arbain</creator>
        
        <subject>Analogy-based effort estimation; imputation; missing data; fuzzy c-mean</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The accuracy of effort estimation is one of the major factors in the success or failure of software projects. Analogy-Based Estimation (ABE) is a widely accepted estimation model since it follows human nature in selecting analogies similar to the target project. The accuracy of prediction in the ABE model is strongly associated with the quality of the dataset, since the model depends on previously completed projects for estimation. Missing Data (MD) is one of the major challenges in software engineering datasets, and several missing data imputation techniques have been investigated by researchers for the ABE model. Identifying the most similar donor values from the completed software projects dataset for imputation remains a challenging issue in the existing missing data techniques adopted for the ABE model. In this study, Fuzzy C-Mean Imputation (FCMI), Mean Imputation (MI), and K-Nearest Neighbor Imputation (KNNI) are investigated to impute missing values in the Desharnais dataset under different missing data percentages (Desh-Miss1, Desh-Miss2) for the ABE model. The ABE-FCMI technique is proposed in this study. An evaluation comparison among MI, KNNI, and ABE-FCMI is conducted to identify the most suitable MD imputation method for the ABE model. The results suggest that the use of ABE-FCMI, rather than MI or KNNI, imputes more reliable values for incomplete software projects in the missing datasets. It was also found that the proposed imputation method significantly improves the software development effort prediction of the ABE model.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_74-Fuzzy_C_Mean_Missing_Data_Imputation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Electroencephalography Signals using Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120873</link>
        <id>10.14569/IJACSA.2021.0120873</id>
        <doi>10.14569/IJACSA.2021.0120873</doi>
        <lastModDate>2021-08-31T10:27:00.8570000+00:00</lastModDate>
        
        <creator>Shereen Essam Elbohy</creator>
        
        <creator>Laila Abdelhamed</creator>
        
        <creator>Farid Mousa Ali</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <subject>Electroencephalographic; k-nearest neighbors; long short-term memory; epileptic seizure recognition; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Brain-computer interface devices monitor brain signals and convert them into control commands in an attempt to imitate certain human cognitive functions. Numerous studies and applications have been developed in recent years because of researchers&#39; interest in such systems. The capacity to categorize electroencephalograms is essential for building effective brain-computer interfaces. In this paper, three experiments were performed to categorize brain signals with the goal of improving a model for EEG data analysis. An investigation is carried out to determine whether characteristics derived from interactions across channels may be more accurate than features extracted from individual channels. Several machine learning techniques, such as K-Nearest Neighbors, Long Short-Term Memory, and Decision Tree, were applied to detect and analyze EEG signals from three different datasets. The particle swarm optimization algorithm was used to reduce the dimension of the feature vector, which markedly improved the accuracy results.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_73-Analysis_of_Electroencephalography_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vertical Handover Algorithm for Telemedicine Application in 5G Heterogeneous Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120872</link>
        <id>10.14569/IJACSA.2021.0120872</id>
        <doi>10.14569/IJACSA.2021.0120872</doi>
        <lastModDate>2021-08-31T10:27:00.8400000+00:00</lastModDate>
        
        <creator>Boh Wen Diong</creator>
        
        <creator>Mark Irwin Goh</creator>
        
        <creator>Seng Kheau Chung</creator>
        
        <creator>Ali Chekima</creator>
        
        <creator>Hoe Tung Yew</creator>
        
        <subject>Mobile terminal; vertical handover; heterogeneous networks; unnecessary handover; telemedicine; TOPSIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In this fast-paced technology era, the advancement of telecommunication systems has made many advanced technologies possible. With the help of 5G technology, more technologies will become a reality, and telemedicine is one of them. Numerous studies have shown that the fatality rate of ischemic heart disease cases can be reduced by sending real-time patient health data from an ambulance to the medical centre, so that healthcare professionals can prepare early and give immediate treatment within the golden hour. 5G technology offers a high data rate and low latency. However, the coverage of 5G is small compared to 4G, which induces a high number of unnecessary handovers when an ambulance traverses 5G networks at high speed, leading to degradation of service quality. Therefore, a fast and accurate vertical handover decision-making algorithm is needed to minimize unnecessary handovers in high-speed scenarios. This paper proposes a handover algorithm that integrates Travelling Time Estimation, the Fuzzy Analytic Hierarchy Process (FAHP), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to reduce unnecessary handovers in 5G heterogeneous networks. The simulation results show that the proposed algorithm reduces handovers by up to 80.3% compared to the FAHP-TOPSIS based handover algorithm in the high-speed scenario. The proposed handover algorithm can improve the quality of telemedicine services in high-speed scenarios.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_72-Vertical_Handover_Algorithm_for_Telemedicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Energy-efficient Load Balance Routing Protocol for Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120871</link>
        <id>10.14569/IJACSA.2021.0120871</id>
        <doi>10.14569/IJACSA.2021.0120871</doi>
        <lastModDate>2021-08-31T10:27:00.8270000+00:00</lastModDate>
        
        <creator>M Kiran Sastry</creator>
        
        <creator>Arshad Ahmad Khan Mohammad</creator>
        
        <creator>Arif Mohammad Abdul</creator>
        
        <subject>Wireless mesh network; routing; energy efficiency; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Wireless mesh network (WMN) technology has gained user attention due to its deployment flexibility. The main challenge in WMNs is the provision of Quality of Service (QoS) due to unbalanced communication traffic. Moreover, recent advancements in wireless technology have fueled user demand for delay-sensitive services, which in turn places additional pressure on WMNs to provide QoS. This paper aims to provide QoS in WMNs through load balancing and energy efficiency. In the literature, various mechanisms have been designed to address this issue, but they fail to achieve an optimal solution in terms of throughput and energy efficiency. Thus, this work designs an energy-efficient load-balancing routing metric to address the limitations of existing conventional methods. Load balancing is accomplished by selecting non-congested nodes, and energy efficiency is attained by selecting the nodes with the greatest packet-processing capability for communication. Both the non-congested and the most capable packet-processing nodes are used to compute the route between source and destination. The performance of the proposed routing protocol is analyzed with a network simulator, and the results show that it outperforms recent existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_71-Optimized_Energy_Efficient_Load_Balance_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Chat System (Chatbot) to Help Users “Homelab” based in Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120870</link>
        <id>10.14569/IJACSA.2021.0120870</id>
        <doi>10.14569/IJACSA.2021.0120870</doi>
        <lastModDate>2021-08-31T10:27:00.8100000+00:00</lastModDate>
        
        <creator>Aji Naufal Aqil</creator>
        
        <creator>Burhanuddin Dirgantara</creator>
        
        <creator>Istikmal</creator>
        
        <creator>Umar Ali Ahmad</creator>
        
        <creator>Reza Rendian Septiawan</creator>
        
        <creator>Alex Lukmanto Suherman</creator>
        
        <subject>API; Chatbot; deep learning; Homelab; question and answer forum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Homelab is a discussion platform for course materials and assignments for students, packaged as an Android application and a website. The Homelab website is built using Laravel. For the Android-based Homelab application, a dedicated Application Programming Interface (API) secured with JWT was developed in this research. In addition to the question-and-answer feature, Homelab includes a virtual conversation agent (chatbot) based on deep learning, with a retrieval model that uses a multilayer perceptron and a purpose-built text dataset for conversations about Homelab products. The virtual conversation agent utilizes the Sastrawi library and natural language processing to facilitate the processing of user messages in Indonesian. The output of this research is the chatbot&#39;s response and the probability value from the classification of the available response classes. The system achieves an accuracy rate of 96.43 percent with an average processing time of 0.3 seconds to produce a response.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_70-Robot_Chat_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Green Software Process Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120869</link>
        <id>10.14569/IJACSA.2021.0120869</id>
        <doi>10.14569/IJACSA.2021.0120869</doi>
        <lastModDate>2021-08-31T10:27:00.8100000+00:00</lastModDate>
        
        <creator>Siti Rohana Ahmad Ibrahim</creator>
        
        <creator>Jamaiah Yahaya</creator>
        
        <creator>Hasimi Salehudin</creator>
        
        <creator>Aziz Deraman</creator>
        
        <subject>Software process; green factor; waste reduction; qualitative instrument; green software process model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Software process and development are fundamental activities in software engineering. Increasing software usage, whether developed in-house or outsourced, requires improving the software process accordingly to minimise adverse effects on the environment. The hardware resources and power that software consumes while running its processes cause high emissions of power and energy. Thus, most existing work on software development aims at the efficiency of hardware operation through the CPU, memory, and processor. Although sustainability is still at an early stage in software engineering, a green software process can be achieved through sustainable development that is concerned with preserving the environment. However, there is still a lack of study and effort in software processes that emphasise sustainability perspectives and software waste elimination. Therefore, this study proposes green factors for the software process that consider sustainability elements and waste reduction during development. The green factors serve as the benchmark to measure a sustainable and green software process. This paper also presents the qualitative interview design and a pilot study, whose analysis demonstrated the reliability of the interview protocol. The actual interviews and data analysis are currently in progress.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_69-The_Development_of_Green_Software_Process_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Architecture and Software Implementation of Deep Neural Network Models Repository for Spatial Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120868</link>
        <id>10.14569/IJACSA.2021.0120868</id>
        <doi>10.14569/IJACSA.2021.0120868</doi>
        <lastModDate>2021-08-31T10:27:00.7930000+00:00</lastModDate>
        
        <creator>Stanislav A. Yamashkin</creator>
        
        <creator>Anatoliy A. Yamashkin</creator>
        
        <creator>Ekaterina O. Yamashkina</creator>
        
        <creator>Milan M. Radovanovic</creator>
        
        <subject>Repository; deep learning; artificial neural network; spatial data; visual programming; software design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The article presents the key aspects of designing and developing a repository of deep neural network models for analyzing and predicting the development of spatial processes based on spatial data. The system&#39;s framework operates on the basis of the MVC pattern, in which it is decomposed into modules for working with the system&#39;s business logic, its data, and its graphical interfaces. The characteristics of the developed web interfaces, a visual programming module for editing models, and an application programming interface for unified interaction with the repository are given. The stated aim of the study determined the structure of the article and the results obtained. The paper describes the functional requirements for the repository as a significant part of software design, presents the developed formalized storage scheme for neural network models, and describes the development of the repository of neural network models and its API. The developed system allows us to approach the scientific problem of integrating neural networks with the possibility of their subsequent use to solve design problems of the digital economy.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_68-Development_of_Architecture_and_Software_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methods and Architectural Patterns of Storage, Analysis and Distribution of Spatio-temporal Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120867</link>
        <id>10.14569/IJACSA.2021.0120867</id>
        <doi>10.14569/IJACSA.2021.0120867</doi>
        <lastModDate>2021-08-31T10:27:00.7800000+00:00</lastModDate>
        
        <creator>Stanislav A. Yamashkin</creator>
        
        <creator>Anatoliy A. Yamashkin</creator>
        
        <creator>Ekaterina O. Yamashkina</creator>
        
        <creator>Sergey M. Kovalenko</creator>
        
        <subject>Spatial data infrastructure; deep learning; neural networks; spatial data; geoportals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The work describes the key principles of building digital spatial data infrastructures for effective decision-making in the management of natural systems and for the sustainable development of the regional economy. The following reference points are considered in detail: increasing the accuracy of deep learning and neural network algorithms and software for analyzing spatial data; developing storage systems for large spatio-temporal data through new physical and logical storage models; and introducing effective geoportal technologies and new architectural patterns for the presentation and further dissemination of spatio-temporal data using modern web technologies. The plan for working out the scientific problem of developing methods and architectural patterns for the storage, analysis, and distribution of spatio-temporal data determined the structure of the article. The first section concretizes the efficiency criteria for information processes in a digital spatial data infrastructure (SDI); the second section discusses algorithmic support for the analysis of spatial data; the third, the integration of spatial data; and the final section, the implementation and project-oriented use of geoportal systems.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_67-Methods_and_Architectural_Patterns_of_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Threat Analysis using N-median Outlier Detection Method with Deviation Score</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120866</link>
        <id>10.14569/IJACSA.2021.0120866</id>
        <doi>10.14569/IJACSA.2021.0120866</doi>
        <lastModDate>2021-08-31T10:27:00.6700000+00:00</lastModDate>
        
        <creator>Pattabhi Mary Jyosthna</creator>
        
        <creator>Konala Thammi Reddy</creator>
        
        <subject>Organizational roles; insider threats; outlier detection; deviation score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Any organization can only operate optimally if all employees fulfil their roles and responsibilities. For the majority of tasks and activities, each employee must collaborate with other employees. Every employee must log the activities associated with their roles, responsibilities, and access permissions. Some users may deviate from their work or abuse their access rights in order to gain a benefit, such as money, or to harm an organization&#39;s reputation. These types of users/employees cause insider threats and are known as insiders. Detecting insiders after they have caused damage is more difficult than preventing them from posing a threat. We propose a method for determining how far a user deviates from other users in the same role group in terms of log activities. This deviation score can be used by role managers to double-check before sharing sensitive information or granting access rights to the entire role group. We first identified the abnormal users in each individual role and then used distance measures to calculate their deviation scores. We treated the identification of abnormal users in a large data space as an outlier detection problem. The user log activities were first converted using statistics, the data was then normalized using Min-Max standardization, and PCA was used to transform the normalized data onto a two-dimensional plane to reduce dimensionality. The results of N-Median Outlier Detection (NMOD) are then compared with those of neighbour-based and cluster-based outlier detection algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_66-Threat_Analysis_using_N_median_Outlier_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT-based Coastal Recreational Suitability System using Effective Messaging Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120865</link>
        <id>10.14569/IJACSA.2021.0120865</id>
        <doi>10.14569/IJACSA.2021.0120865</doi>
        <lastModDate>2021-08-31T10:27:00.6530000+00:00</lastModDate>
        
        <creator>Farashazillah Yahya</creator>
        
        <creator>Ahmad Farhan Ahmad Zaki</creator>
        
        <creator>Ervin Gubin Moung</creator>
        
        <creator>Hasimi Sallehudin</creator>
        
        <creator>Nur Azaliah Abu Bakar</creator>
        
        <creator>Rio Guntur Utomo</creator>
        
        <subject>Coastal recreational; internet of things; message queuing telemetry transport; sensor; water quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Coastal recreational activities are one of the main attractions for local beachgoers and overseas tourists. Access to information on coastal water quality enhances safety and public health awareness. Existing platforms showing whether a beach is suitable for public recreational use are scarce in Malaysia. An Internet of Things (IoT) based system designed specifically for coastal recreational suitability may differ from existing configurations depending on the environment and requirements. This paper reports the design and implementation of an IoT-based system that captures coastal environmental data and recommends recreational suitability. The system captures sensor data, stores it in a database, and displays the result on a dashboard. The variables include temperature, humidity, rain, pH, turbidity, oxidation-reduction potential (ORP), and total dissolved solids (TDS) in a coastal area. The hardware comprises development boards such as the Raspberry Pi, Arduino Uno, and ESP32 controller. The system is developed using PHP, MySQL, and Apache Web Server and can be accessed online at https://ipantai.xyz. Using Message Queuing Telemetry Transport (MQTT) as the messaging protocol with the HiveMQ broker improved message size, throughput, and power consumption. An IoT-based system has further potential to bring value to coastal management and to serve as a powerful tool for determining whether a coastal area is suitable for public water recreational activities.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_65-An_IoT_based_Coastal_Recreational_Suitability_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Big Data Storage Tools for Data Lakes based on Apache Hadoop Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120864</link>
        <id>10.14569/IJACSA.2021.0120864</id>
        <doi>10.14569/IJACSA.2021.0120864</doi>
        <lastModDate>2021-08-31T10:27:00.6370000+00:00</lastModDate>
        
        <creator>Vladimir Belov</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>Big data formats; data lakes; Apache Hadoop; data warehouses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>When developing large data processing systems, the question of data storage arises. One of the modern tools for solving this problem is the so-called data lake. Many implementations of data lakes use Apache Hadoop as the basic platform. Hadoop does not have a default data storage format, which leads to the task of choosing a data format when designing a data processing system. To solve this problem, it is necessary to proceed from an assessment against several criteria. In turn, experimental evaluation does not always give a complete understanding of the possibilities of working with a particular data storage format. In this case, it is necessary to study the features of the format, its internal structure, recommendations for use, etc. The article describes the features of both widely used data storage formats and those currently gaining popularity.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_64-Analysis_of_Big_Data_Storage_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Optimization of Delegation-based Sequenced Packet Exchange (SPX) Protocol: A Kailar Logic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120863</link>
        <id>10.14569/IJACSA.2021.0120863</id>
        <doi>10.14569/IJACSA.2021.0120863</doi>
        <lastModDate>2021-08-31T10:27:00.6370000+00:00</lastModDate>
        
        <creator>Ebrima Jaw</creator>
        
        <creator>Mbemba Hydara</creator>
        
        <creator>Wang Xue Ming</creator>
        
        <subject>Delegator; accountability; grantor; Kailar logic; principal; client; delegate; grantee</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Accountability has tremendous significance within electronic commerce protocols, especially those that require answerability for the actions taken by participants. In this study, the authors evaluate the delegation of accountability based on the Sequenced Packet Exchange (SPX) protocol. The study emphasizes the concept of provability as a benchmark to formalize accountability. Moreover, this paper proposes a new framework that enables principals to delegate individual rights to other principals and shows how the delegator&#39;s accountability is handed over or retained, which provides the crucial functionality of tracing how accountability is distributed among principals within a system. The study provides a novel solution to accountability challenges and the analysis of protocols, such as introducing novel conditions for distributing essential credentials between the grantor and the grantee and analyzing delegation-based protocols. The approach adopted will help prevent potential compromises of the integrity of online transactions. By extension, it will also serve as a best-practice solution for settling legal disputes among principals.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_63-Analysis_and_Optimization_of_Delegation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of using Flipped Learning Strategy on the Academic Achievement of Eighth Grade Students in Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120862</link>
        <id>10.14569/IJACSA.2021.0120862</id>
        <doi>10.14569/IJACSA.2021.0120862</doi>
        <lastModDate>2021-08-31T10:27:00.6230000+00:00</lastModDate>
        
        <creator>Firas Ibrahem Mohammad Al-Jarrah</creator>
        
        <creator>Mustafa Ayasreh</creator>
        
        <creator>Fadi Bani Ahmad</creator>
        
        <creator>Othman Mansour</creator>
        
        <subject>Strategy; flipped learning; academic achievement; eighth grade; introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>This study aimed to reveal the effect of using the flipped learning strategy on the development of academic achievement, and of attitudes towards it, among eighth-grade students in the English language subject in the Hashemite Kingdom of Jordan. The sample consisted of 50 eighth-grade students studying in government schools affiliated with the Directorate of Education in the Northern Mazar District. They were randomly distributed into two groups: a control group of 25 students, taught using the usual method, and an experimental group of 25 students, taught using the flipped learning strategy. To achieve the objectives of the study, the researcher used the descriptive approach and the experimental approach (with a quasi-experimental design); the study tools were an achievement test and a questionnaire to measure attitude, and the study materials included educational software. The results indicated a statistically significant difference between the average scores of the experimental and control groups in the post-application of the attitude scale, in favour of the experimental group, indicating that the use of the flipped classroom strategy had an impact on developing students&#39; attitudes towards it. In light of this, the researcher makes several recommendations, most notably: taking advantage of the standards and the proposed educational model in the field of English language learning; applying multimedia programs alongside the flipped classroom strategy to raise the academic achievement of basic-stage students; expanding the application of e-learning and blended learning to improve students&#39; attitudes towards using the flipped classroom strategy to learn English; holding training courses for teachers on the use of the flipped learning strategy; and employing modern technologies and social networks in the educational process.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_62-The_Effect_of_using_Flipped_Learning_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auditor&#39;s Perception on Technology Transformation: Blockchain and CAATs on Audit Quality in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120861</link>
        <id>10.14569/IJACSA.2021.0120861</id>
        <doi>10.14569/IJACSA.2021.0120861</doi>
        <lastModDate>2021-08-31T10:27:00.6070000+00:00</lastModDate>
        
        <creator>Meiryani </creator>
        
        <creator>Monika Sujanto</creator>
        
        <creator>ASL Lindawati</creator>
        
        <creator>Arif Zulkarnain</creator>
        
        <creator>Suryadiputra Liawatimena</creator>
        
        <subject>Blockchain; CAATs; audit quality; auditor’s perception; technology transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The purpose of this study is to analyze the auditor’s perception of the implementation of technology transformations, such as blockchain and CAATs, that can affect audit quality at the Big Four public accounting firms in Jakarta. This study uses quantitative research methods with a combination of primary and secondary data. The data collection technique used in this study was a questionnaire. The sample was taken using the purposive sampling method, which resulted in 60 respondents. Data analysis was carried out with SmartPLS 3.0 and IBM SPSS 26 software, leading to the conclusion that the auditor’s perception of the implementation of blockchain had a significant positive effect on audit quality, while the auditor’s perception of the implementation of CAATs had no significant positive effect on audit quality.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_61-Auditor&#39;s_Perception_on_Technology_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Continuous Authentication System for Smartphones using Hyper Negative Selection and Random Forest Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120859</link>
        <id>10.14569/IJACSA.2021.0120859</id>
        <doi>10.14569/IJACSA.2021.0120859</doi>
        <lastModDate>2021-08-31T10:27:00.5900000+00:00</lastModDate>
        
        <creator>Maryam M. Alharbi</creator>
        
        <creator>Rashiq Rafiq Marie</creator>
        
        <subject>Continuous authentication (CA); artificial immunes system (AIS); negative selection algorithm (NSA); random forest algorithm (RFA); smartphones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>As smartphones have become a part of our daily lives, including payment and banking transactions, improving current data and privacy protection models is essential. A continuous authentication model aims to track the smartphone user&#39;s interaction after the initial login. However, current continuous authentication models are limited due to dynamic changes in smartphone user behavior. This paper aims to enhance smartphone user privacy and security using continuous authentication based on touch dynamics, by proposing a framework for smartphone devices based on user touch behavior that provides a more accurate and adaptive learning model. We adopt a hybrid model based on the Hyper Negative Selection Algorithm (HNSA), an artificial immune system (AIS), and the random forest ensemble classifier to instantly classify user behavior. With the new approach, a decision model can detect normal/abnormal user behavior and update a user profile continuously while he/she uses the smartphone. The proposed approach was compared with the v-detector and HNSA, showing a high average accuracy of 98.5%, a low false alarm rate, and an increased detection rate. The new model is significant as it could be integrated with a smartphone to increase user privacy instantly. It is concluded that the proposed approach is efficient and valuable for smartphone users to increase their privacy as dynamic user behaviors evolve.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_59-Adaptive_Continuous_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Chain for Detection of Rumors and Fake News in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120860</link>
        <id>10.14569/IJACSA.2021.0120860</id>
        <doi>10.14569/IJACSA.2021.0120860</doi>
        <lastModDate>2021-08-31T10:27:00.5900000+00:00</lastModDate>
        
        <creator>Yazed Alsaawy</creator>
        
        <creator>Ahmad Alkhodre</creator>
        
        <creator>Nour M. Bahbouh</creator>
        
        <creator>Adnan Abi Sen</creator>
        
        <creator>Adnan Nadeem</creator>
        
        <subject>Fake news detection; text mining; blockchain; detection algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Social media has become one of the most important sources of news in our lives, but the process of validating news and limiting rumors remains an open research issue. Many researchers have suggested using Blockchain to solve this problem, but such approaches have traditionally failed due to the large volume of data and users in these environments. In this paper, we propose to modify the structure of the Blockchain while preserving its main characteristics. We achieve this by integrating a customized blockchain with a Text Mining (TM) algorithm to create a modified Lightweight Chain (LWC). The LWC speeds up the verification process, which is carried out through proof of good history, where nodes are assigned weights according to their previous posts. Moreover, the LWC is compatible with different applications, such as verifying the authenticity of news or legal religious rulings (fatwas). In this research, we have implemented a simple model to simulate the proposed LWC for the detection of fake news while preserving the characteristics and features of the traditional Blockchain. The results on experimental data reflect the effectiveness of the proposed algorithm in establishing the chain.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_60-Lightweight_Chain_for_Detection_of_Rumors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Logistic Regression Modeling to Predict Sarcopenia Frailty among Aging Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120858</link>
        <id>10.14569/IJACSA.2021.0120858</id>
        <doi>10.14569/IJACSA.2021.0120858</doi>
        <lastModDate>2021-08-31T10:27:00.4970000+00:00</lastModDate>
        
        <creator>Sukhminder Kaur</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>Noran Naqiah Hairi</creator>
        
        <creator>Siva Kumar Sivanesan</creator>
        
        <subject>Sarcopenia; frailty; logistic regression model; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Sarcopenia and frailty have been associated with low exercise capacity and high metabolic instability in the aging population. To date, current models merely support one classification, with an accuracy of 83%. These models also suffer from overfitting to dataset complexities when predicting accuracy and detecting misclassifications of rare diseases. As multiple classifications led to incongruent data analyses and methods, each evaluation yielded inaccurate results regarding high prediction accuracy. This study intends to contribute to the current medical informatics literature by comparing models to identify the most optimal one, along with relevant patterns and parameters for prediction model development. The methods were duly assessed on a real dataset together with the classification model. Meanwhile, the obesity physical frailty (OPF) model was presented as the conceptual study model. A matrix of accuracy, classification, and feature selection was also utilized to compare the computer output and deep learning models against current counterparts. Essentially, the study findings predicted that an individual’s risk of sarcopenia corresponded to physical frailty. Each model was compared with an accuracy matrix to determine the best-fitting model. Resultantly, logistic regression produced the highest results, with an accuracy rate of 97.69%, compared to the other four study models.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_58-Logistic_Regression_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Labeling of Hyperspectral Images for Oil Spills Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120857</link>
        <id>10.14569/IJACSA.2021.0120857</id>
        <doi>10.14569/IJACSA.2021.0120857</doi>
        <lastModDate>2021-08-31T10:27:00.4670000+00:00</lastModDate>
        
        <creator>Madonna Said</creator>
        
        <creator>Monica Hany</creator>
        
        <creator>Monica Magdy</creator>
        
        <creator>Omar Saleh</creator>
        
        <creator>Maha Sayed</creator>
        
        <creator>Yomna M.I. Hassan</creator>
        
        <creator>Ayman Nabil</creator>
        
        <subject>Oil spills; hyperspectral imagery; unlabeled data; k-means cluster; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The constant increase in oil demand has caused huge losses in the form of oil spills during the process of exporting the product, which leads to an increase in pollution, especially in the marine environment. This research assists in providing a solution for this problem through modern technology by detecting oil spills using satellite imagery, more specifically hyperspectral images (HSI). The dataset obtained from the AVIRIS satellite is considered raw data, which leads to the availability of a vast amount of unlabeled data. This was one of the main reasons to propose a method to classify the HSI by first labeling the raw data automatically through unsupervised K-means clustering. The automatically labeled HSI is used to train various classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and K-nearest neighbor (K-NN), to achieve accuracy comparable with that of other research. The results for the first region of interest (ROI) indicate that the SVM with RBF kernel obtains 99.89% with principal component analysis (PCA) and 99.86% without PCA, better accuracy than RF and K-NN, while in the second ROI the RF obtained 99.9% with PCA and 99.91% without PCA, better than K-NN and SVM. The regions of interest selected lie within the Gulf of Mexico area, which was chosen based on the frequency of its use in previous research on detecting oil spills.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_57-Automated_Labeling_of_Hyperspectral_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Extraction and Data Visualization: A Proposed Framework for Secure Decision Making using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120856</link>
        <id>10.14569/IJACSA.2021.0120856</id>
        <doi>10.14569/IJACSA.2021.0120856</doi>
        <lastModDate>2021-08-31T10:27:00.4030000+00:00</lastModDate>
        
        <creator>Hazzaa N. Alshareef</creator>
        
        <creator>Ahmed Majrashi</creator>
        
        <creator>Maha Helal</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <subject>Big data; data mining; data visualization; classification; cloud computing; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Completing the decision-making process promptly is a crucial success factor in large organizations. Generally, the data warehouses of these organizations grow rapidly with the data generated from various business activities. This huge volume of data needs to be analyzed, and decisions must be made quickly to meet market challenges. Accurate knowledge extraction and its visualization from big data can guide decision-makers to conduct key analyses and make correct predictions. This paper proposes a decision-making framework that not only takes into account knowledge extraction and visualization but also considers the security of the data. The proposed framework uses data mining techniques to extract useful patterns and then visualizes those patterns for further analysis and decision making. The significance of the proposed framework lies in the mechanism through which it protects the data from intruders. The data is first processed and then stored in an encrypted format on the cloud. When the data is needed for analysis and decision making, a temporary copy of the data is first decrypted, and then important patterns are visualized. The proposed framework will assist managers and other decision-makers in analyzing and visualizing the data in real time with an enhanced security mechanism.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_56-Knowledge_Extraction_and_Data_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Study of Hybrid Genetic Algorithms for the Maximum Scatter Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120855</link>
        <id>10.14569/IJACSA.2021.0120855</id>
        <doi>10.14569/IJACSA.2021.0120855</doi>
        <lastModDate>2021-08-31T10:27:00.3570000+00:00</lastModDate>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <creator>Asaad Shakir Hameed</creator>
        
        <creator>Modhi Lafta Mutar</creator>
        
        <creator>Mohammed F. Alrifaie</creator>
        
        <creator>Mundher Mohammed Taresh</creator>
        
        <subject>Hybrid genetic algorithm; maximum scatter travelling salesman problem; sequential constructive crossover; adaptive mutation; local search; perturbation procedure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>We consider the maximum scatter travelling salesman problem (MSTSP), a variant of the travelling salesman problem (TSP). The problem aims to maximize the shortest edge in a tour that visits each city exactly once in the given network. It is a very complicated NP-hard problem, and hence exact solutions are obtainable only for small sizes. For large sizes, heuristic algorithms must be applied, and genetic algorithms (GAs) are observed to be very successful in dealing with such problems. In our study, a simple GA (SGA) and four hybrid GAs (HGAs) are proposed for the MSTSP. The SGA starts with an initial population produced by a sequential sampling approach and improved by 2-opt search, and then gradually improves the population through a proportionate selection procedure, sequential constructive crossover, and adaptive mutation. A stopping condition of a maximum number of generations is adopted. The hybrid genetic algorithms (HGAs) add a selected local search and a perturbation procedure to the proposed SGA. Each HGA uses one of three local search procedures based on insertion, inversion, and swap operators, applied directly or randomly. An experimental study has been carried out on the proposed SGA and HGAs by solving some TSPLIB asymmetric and symmetric instances of various sizes. Our computational experience reveals that the suggested HGAs perform very well. Finally, our best HGA is compared with a state-of-the-art algorithm by solving some TSPLIB symmetric instances of various sizes; our computational experience reveals that our best HGA performs better.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_55-Experimental_Study_of_Hybrid_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-centred Design and Evaluation of Web and Mobile based Travelling Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120854</link>
        <id>10.14569/IJACSA.2021.0120854</id>
        <doi>10.14569/IJACSA.2021.0120854</doi>
        <lastModDate>2021-08-31T10:27:00.2930000+00:00</lastModDate>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Siti Fatimah Nizam</creator>
        
        <creator>Simon Yuen</creator>
        
        <creator>Layla Hasan</creator>
        
        <creator>Su Elya Mohamed</creator>
        
        <creator>Wong Yee Leng</creator>
        
        <creator>Khalid Krayz Allah</creator>
        
        <subject>Usability; travelling application; SUS questionnaire; low-fidelity prototype; user-centred design (UCD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Travelling has been known as one of the top-rated activities people do during their leisure time. In this digital age, people usually research before visiting a new place to avoid unpleasant events and to have a well-planned trip. Due to the complexity of search engine browsers, people have been switching to designated travelling applications. Travelling applications should be designed by taking into consideration users’ needs, requirements, and usability. This research aims to design a travelling application based on a user-centred design approach and to compare its performance on different platforms. Two prototypes of travelling applications were designed and evaluated: web-based and mobile-based. Then, the System Usability Scale (SUS) questionnaire was used to evaluate the usability of the two prototypes. The Pearson correlation coefficient test and t-test were used to analyse the data collected from the questionnaire. The results showed no statistically significant difference in SUS scores between the two prototypes, which indicates that the participants did not prefer either prototype over the other.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_54-User_centred_Design_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cryptographic Technique for Communication among IoT Devices using Tiger192 and Whirlpool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120853</link>
        <id>10.14569/IJACSA.2021.0120853</id>
        <doi>10.14569/IJACSA.2021.0120853</doi>
        <lastModDate>2021-08-31T10:27:00.2630000+00:00</lastModDate>
        
        <creator>Bismark Tei Asare</creator>
        
        <creator>Kester Quist-Aphetsi</creator>
        
        <creator>Laurent Nana</creator>
        
        <subject>IoT devices; whirlpool; Tiger192; internet of things security; cryptographic communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The heterogeneous standards and operational platforms of IoT devices introduce additional security loopholes into the network, thereby increasing the attack surface of the IoT. Most of the devices used in these IoT systems are not secure by design. Such vulnerable devices pose a great threat to the IoT system. In recent times, there has been a lot of research work on improving existing mechanisms for securing IoT data at both the software and hardware levels. Although cryptographic research solutions exist to secure data at the node level in IoT systems, few of these security solutions target both securing IoT data and validating IoT nodes. The authors propose a cryptographic solution that uses double hashing to provide improved security for IoT node data and to validate nodes in IoT systems. A cryptographic mechanism composed of the Tiger192 cryptographic hash and the Whirlpool hash function is proposed in this paper for authenticating IoT data and validating devices. The use of digital ledger technology and a cryptographic double-hashing algorithm provided enhanced security, privacy, and integrity of data among IoT nodes. It also assured the availability of IoT data.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_53-A_Cryptographic_Technique_for_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology based Semantic Query Expansion for Searching Queries in Programming Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120852</link>
        <id>10.14569/IJACSA.2021.0120852</id>
        <doi>10.14569/IJACSA.2021.0120852</doi>
        <lastModDate>2021-08-31T10:27:00.2470000+00:00</lastModDate>
        
        <creator>Manal Anwer Khedr</creator>
        
        <creator>Fatma A. El-Licy</creator>
        
        <creator>Akram Salah</creator>
        
        <subject>Query expansion; ontology; semantic search; short queries; cosine similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Information on the web is growing rapidly, and learning programming languages has become one of the most widely searched topics. Programmers and people of several backgrounds now use Web search engines to acquire programming information, including information about a specific language or topic. Nonetheless, due to a lack of programming knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, so they tend to write short queries, which leads to vocabulary problems and contextless queries. Besides, semantics is almost neglected in traditional search queries, since they are just keyword-based searches, which renders such searches imprecise, with irrelevant results due to the use of unclear keywords. A semantic query expansion method is proposed for disambiguating queries in the computer programming domain using ontology. The integration of the cosine similarity method into the proposed model improved the expanded query and, accordingly, the search results. The proposed system has been tested with several ambiguous and misspelled queries, and the generated expansions proved to retrieve more relevant results when applied to a search engine. The quality of the retrieved results for the expanded queries is much higher than that for the crude queries. The proposed technique was implemented and then tested with external independent testers, who confirmed the mechanical test results and showed its improvement, with average precision@10 of 82.2% and 91.1%, respectively. The results were promising and therefore open further research directions.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_52-Ontology_based_Semantic_Query_Expansion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radiofrequency Temperature Control System for Fish Capture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120851</link>
        <id>10.14569/IJACSA.2021.0120851</id>
        <doi>10.14569/IJACSA.2021.0120851</doi>
        <lastModDate>2021-08-31T10:27:00.1830000+00:00</lastModDate>
        
        <creator>Walter V&#225;squez-Cubas</creator>
        
        <creator>Marvin Zarate-Cuzco</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Jos&#233; Luis Herrera Salazar</creator>
        
        <creator>Oswaldo Casazola-Cruz</creator>
        
        <subject>Temperature monitoring system; satisfaction; radiofrequency; fish capture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Artisanal fishing is one of the activities with the greatest economic impact worldwide. The present research aimed to determine the influence of a radiofrequency temperature monitoring system for fish capture on the satisfaction of fish catches. The dimensions included were time delay and direct cost; for the monitoring system, they were usability and reliability. It was shown that the use of the radiofrequency temperature monitoring system decreased the direct cost incurred, reduced the time required to locate hot spots, and increased satisfaction and confidence in the system by 95%.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_51-Radiofrequency_Temperature_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Mobile Computational Vision System in the Identification of White Quinoa Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120850</link>
        <id>10.14569/IJACSA.2021.0120850</id>
        <doi>10.14569/IJACSA.2021.0120850</doi>
        <lastModDate>2021-08-31T10:27:00.1230000+00:00</lastModDate>
        
        <creator>Percimil Lecca-Pino</creator>
        
        <creator>Daniel Tafur-Vera</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Jos&#233; Luis Herrera Salazar</creator>
        
        <creator>Esteban Medina-Rafaile</creator>
        
        <subject>Computer vision system; quinoa quality; digital image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2021.0120850.retraction</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_50-Mobile_Computational_Vision_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>White-Grey-Black Hat Hackers Role in World and Russian Domestic and Foreign Cyber Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120849</link>
        <id>10.14569/IJACSA.2021.0120849</id>
        <doi>10.14569/IJACSA.2021.0120849</doi>
        <lastModDate>2021-08-31T10:26:59.9970000+00:00</lastModDate>
        
        <creator>Mikhail A. Shlyakhtunov</creator>
        
        <subject>Cyber wars; cybercriminals; disinformation; espionage; hackers; information security; military technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The article aims at establishing the role of three different types of hackers in the domestic cyberspace, policy and welfare, international relations, and warfare of the Russian Federation, compared with the situation in the world as of the beginning of 2020. The character and structure of hackers’ participation in information policy and cybersecurity are characterized in connection with the intensity and duration of their intervention, national interests, and technical and political outcomes. The new role of cyberwarfare as the fifth sphere of military activity is highlighted and substantiated. Positive and negative influences of hackers’ activities, and methods of their detection and control, are differentiated at the level of an individual Internet user, company, government, and international organization. Examples of the activities of certain criminal groups of hackers are given. The main organizational measures, tools, and cryptography techniques for protection against hackers’ invasion are proposed. The article reveals and analyzes the specifics of various hacker attacks: white, gray, and black. The article emphasizes that cybersecurity in the world is now the object of hacker attacks, which can affect the functioning not only of national or private corporations, but also the work of government agencies.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_49-White_Grey_Black_Hat_Hackers_Role.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient IoT-based Smart Water Meter System of Smart City Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120848</link>
        <id>10.14569/IJACSA.2021.0120848</id>
        <doi>10.14569/IJACSA.2021.0120848</doi>
        <lastModDate>2021-08-31T10:26:59.8570000+00:00</lastModDate>
        
        <creator>Raad AL-Madhrahi</creator>
        
        <creator>Nayef.A.M. Alduais</creator>
        
        <creator>Jiwa Abdullah</creator>
        
        <creator>Hairulnizam B. Mahdin</creator>
        
        <creator>Abdul-Malik H. Y. Saad</creator>
        
        <creator>Abdullah B. Nasser</creator>
        
        <creator>Husam Saleh Alduais</creator>
        
        <subject>Internet of things; smart water metering; energy consumption; smart city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Water is a precious necessity of our lives. Due to rapid population growth and urbanization, monitoring water usage is a significant problem facing our society. One solution is to control, analyze, and reduce the water consumption of households. The recent emergence of the Internet of Things (IoT) concept has offered the opportunity to build water-usage-efficient smart devices, systems, and applications for buildings and cities. Many studies have proposed designs for IoT-based smart meter systems; however, the IoT sensor node has received limited study, especially regarding battery life. Therefore, this study aims to implement and analyze an efficient data collection algorithm for IoT-based smart metering applications with respect to energy consumption. The system components are an Arduino Uno, a Wi-Fi ESP8266 module, and water flow sensors. The applied algorithm is an efficient data collection algorithm for the water meter (EDCDWM) that reduces the number of packet transmissions. The system was implemented on the Arduino, while the simulation and analysis were performed in MATLAB R2019b. The average percentage of energy saved by the applied algorithms, EDCDWM with absolute change and EDCDWM with relative difference, across all nodes is around 60% and 93%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_48-An_Efficient_IoT_Based_Smart_Water_Meter_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Secure Validation Scheme to Assess Device Legitimacy in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120847</link>
        <id>10.14569/IJACSA.2021.0120847</id>
        <doi>10.14569/IJACSA.2021.0120847</doi>
        <lastModDate>2021-08-31T10:26:59.8400000+00:00</lastModDate>
        
        <creator>Ayasha</creator>
        
        <creator>M Savitha Devi</creator>
        
        <subject>Internet of things; security; validation; privacy; integrity; attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The increasing security concerns in the Internet of Things (IoT) have led researchers to develop multiple levels of research-based solutions for identifying and protecting against lethal threats. After reviewing the existing literature, it is found that existing approaches are highly specific to stopping threats via predefined threat information. Hence, the proposed system introduces a new computational model capable of building a flexible validation model for evaluating the legitimacy of IoT nodes. The proposed system develops an algorithm that uses simplified secret-key generation, performs encryption, and generates validation tokens to ensure a higher degree of privacy and data integrity. The proposed model also contributes a unique energy allocation approach to ensure better energy conservation while performing security operations. The simulated study outcome shows better security and data transmission performance compared to the existing scheme.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_47-Novel_Secure_Validation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Air Pollution Monitoring System with Smog Prediction Model using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120846</link>
        <id>10.14569/IJACSA.2021.0120846</id>
        <doi>10.14569/IJACSA.2021.0120846</doi>
        <lastModDate>2021-08-31T10:26:59.8270000+00:00</lastModDate>
        
        <creator>Salman Ahmad Siddiqui</creator>
        
        <creator>Neda Fatima</creator>
        
        <creator>Anwar Ahmad</creator>
        
        <subject>Air pollution; IoT; machine learning; smart mirror; temperature and humidity sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Air pollution is a harsh reality of today’s times. With rapid industrialization and urbanization, and the polluting gases emitted by the burning of fossil fuels in industries, factories, and vehicles, cities around the world have become “gas chambers”. Unfortunately, New Delhi happens to be among the most polluted cities in the world. The present paper designs and demonstrates an IoT (Internet of Things) based smart air pollution monitoring system that could be installed at various junctions and high-traffic zones in urban metropolises and megalopolises to monitor pollution locally. It is designed in a novel way that not only monitors air pollution by taking varied inputs from various sensors (temperature, humidity, smoke, carbon monoxide, gas) but also presents the readings on a smart mirror. Its unique feature is the demonstration of a smog prediction model that determines PM10 (Particulate Matter 10) concentration using the most efficient machine learning model, selected after an extensive comparison that takes environmental conditions into account. The data generated can also be sent as feedback to the traffic department, to avoid incessant rush and maintain a uniform flow of traffic, and to environmental agencies, to keep pollution levels in check.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_46-Smart_Air_Pollution_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Novel CPW 2.4 GHz Antenna with Parallel Hybrid Electromagnetic Solar for IoT Energy Harvesting and Wireless Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120845</link>
        <id>10.14569/IJACSA.2021.0120845</id>
        <doi>10.14569/IJACSA.2021.0120845</doi>
        <lastModDate>2021-08-31T10:26:59.8100000+00:00</lastModDate>
        
        <creator>Irfan Mujahidin</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>Double CPW antenna; energy harvesting; wireless sensors; IoT communications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The novelty of the design and implementation lies in simultaneously utilizing the antenna&#39;s frequency, polarization, and feed structure to maximize the harvested RF energy while also serving as a microstrip communication circuit for wireless sensor or communication systems in IoT devices. In addition, the parallel circuit configuration is optimized as a voltage doubler model with an integrated parallel system and thin-film solar cells. The antenna structure is implemented with two line feeds in one antenna, and both feeds function identically with CPW circular polarization. Another advantage is that there is no misconfiguration from port exchange when both output ports are used simultaneously. The 2-port antenna has an area of 1/2 wavelength per port, working well at the 2.4 GHz frequency. It has been shown to achieve a bandwidth of 86.5 percent, covering WiFi frequency band networks and IoT communications, and it requires no additional filters or analog matching circuits, which cause power loss in the transmission process in parallel voltage doubler circuits. Integrating a reflector on the two-port CPW antenna for placement of thin-film solar cells provides antenna gain of up to 8.2 dB and a wide beam range with directional radiation. Using a multi-stage parallel circuit to increase the output voltage, integrated with a thin-film solar cell converter, proves efficient in the 2.4 GHz frequency band. When the transmission power density is -16.15 dBm with a tolerance of 0.023, the novel energy harvester circuit produces an output voltage of 54 mV DC without added solar cell energy. With the integrated thin-film solar cell under a 300 lux light beam in the -16.15 dBm radiation beam area, the harvested energy reaches 1.74467 V. This shows that the implementation of this configuration can produce an optimal DC output voltage in actual indoor and outdoor ambient settings. Optimizing the antenna implementation and the communication process with multiple signal classification improves the configuration of antennas that are close to each other and have identical phase outputs, making it instrumental and efficient when applied to IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_45-The_Novel_CPW_2.4_GHz_Antenna_with_Parallel_Hybrid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Quality Analysis for Halodoc Application using ISO 25010:2011</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120844</link>
        <id>10.14569/IJACSA.2021.0120844</id>
        <doi>10.14569/IJACSA.2021.0120844</doi>
        <lastModDate>2021-08-31T10:26:59.8100000+00:00</lastModDate>
        
        <creator>Aditia Arga Pratama</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <subject>Halodoc; ISO 25010:2011; software quality assurance; telemedicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The rapid spread of the COVID-19 disease and the ongoing social distancing have disrupted community activities. This has consequently made people use information and communication technology, especially in the fields of health, education, and business. The use of information and communication technology in the health sector plays an important role in preventing the spread of COVID-19; one of its uses is telemedicine, and one of the developing telemedicine services in Indonesia is Halodoc. This study tests the Halodoc application to determine its quality using the International Organization for Standardization (ISO) model ISO 25010:2011, which software quality assurance can use as the basis for testing. The test therefore covers 8 characteristics and 29 sub-characteristics. The assessment uses weights determined with the Analytical Hierarchy Process (AHP) method. Testing is carried out using black-box testing, stress testing, and a questionnaire distributed to 100 respondents for usability testing. Functional Suitability, Compatibility, Reliability, and Maintainability obtained the maximum score of 5; Performance Efficiency scored 4.886, Usability scored 4, Security scored 3.549, and Portability scored 3.718. Overall, the Halodoc application obtained a score of 4.515 out of a maximum of 5.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_44-Software_Quality_Analysis_for_Halodoc_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multiagent and Machine Learning based Hybrid NIDS for Known and Unknown Cyber-attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120843</link>
        <id>10.14569/IJACSA.2021.0120843</id>
        <doi>10.14569/IJACSA.2021.0120843</doi>
        <lastModDate>2021-08-31T10:26:59.7930000+00:00</lastModDate>
        
        <creator>Said OUIAZZANE</creator>
        
        <creator>Malika ADDOU</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <subject>Intrusion detection; zero-day attacks; machine learning; multi-agent systems; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The objective of this paper is to propose a hybrid Network Intrusion Detection System (NIDS) for the detection of cyber-attacks that may target modern computer networks. Indeed, in the era of technological evolution the world is currently experiencing, hackers are constantly inventing new attack mechanisms that can bypass traditional security systems. Thus, NIDS are now an essential security building block to be deployed in corporate networks to detect known and zero-day attacks. In this research work, we propose a hybrid NIDS model based on the use of both a signature-based NIDS and an anomaly-detection NIDS. The proposed system is based on agent technology, the SNORT signature-based NIDS, and machine learning techniques, and the CICIDS2017 dataset is used for training and evaluation purposes. The CICIDS2017 dataset underwent several pre-processing steps, namely dataset cleaning and dataset balancing, as well as reducing the number of attributes (from 79 to 33). In addition, a set of machine learning algorithms is used, including decision tree, random forest, Naive Bayes, and multilayer perceptron, and evaluated using metrics such as recall, precision, F-measure, and accuracy. The detection methods used give very satisfactory results in terms of modeling benign network traffic, and the accuracy reaches 99.9% for some algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_43-A_Multiagent_and_Machine_Learning_based_Hybrid_NIDS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diabetes Classification using an Expert Neuro-fuzzy Feature Extraction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120842</link>
        <id>10.14569/IJACSA.2021.0120842</id>
        <doi>10.14569/IJACSA.2021.0120842</doi>
        <lastModDate>2021-08-31T10:26:59.7800000+00:00</lastModDate>
        
        <creator>P. Bharath Kumar Chowdary</creator>
        
        <creator>R. Udaya Kumar</creator>
        
        <subject>Diabetes; neuro-fuzzy model; feature extraction; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Diabetes is one of the most challenging diseases prevailing in recent times. Due to incomplete, uncertain, and imprecise details, classification of diabetes using machine learning algorithms is turning out to be even more challenging. The efficiency of a classification model is influenced by the data present in the dataset. This study enhances the classification of diabetes by using a neuro-fuzzy model with special attention to feature extraction. The main goal of the present study is to enhance the diabetes prediction technique so that medical practitioners can easily identify the disease and diagnose it appropriately, reducing the complications that diabetes may cause the patient in the future. The proposed model initially applies fuzzification to the diabetes data to produce membership values. The membership values are then examined by the proposed model to check the contribution of each feature to diabetes classification. The feature extraction algorithm passes the significant features to a neural network after they are extracted. The proposed model is tested on the standard PIMA diabetes dataset to evaluate its performance and is able to outperform all the existing machine learning algorithms considered.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_42-Diabetes_Classification_using_an_Expert_Neuro_Fuzzy_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IDD-HPO: A Proposed Model for Improving Diabetic Detection using Hyperparameter Optimization and Cloud Mapping Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120840</link>
        <id>10.14569/IJACSA.2021.0120840</id>
        <doi>10.14569/IJACSA.2021.0120840</doi>
        <lastModDate>2021-08-31T10:26:59.7630000+00:00</lastModDate>
        
        <creator>Eman H. Zaky</creator>
        
        <creator>Mona M. Soliman</creator>
        
        <creator>A. K. Elkholy</creator>
        
        <creator>Neveen I. Ghali</creator>
        
        <subject>Machine learning; deep learning; diabetes; hospital readmission; hyper parameter optimization; cloud computing; mobile cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Readmission to the hospital is an important and critical indicator of the quality of health care, as it is very costly and helps determine the quality of the point of care provided by the hospital to the patient. This paper proposes a group model to predict readmission by choosing between Machine Learning and Deep Learning algorithms based on performance improvement. The Machine Learning algorithms used are Logistic Regression, K-Nearest Neighbors, and Support Vector Machine, while the Deep Learning algorithms used are a Convolutional Neural Network and a Recurrent Neural Network. The efficiency of the model depends on the proper preparation of the parameters and values that control the learning. This paper aims to enhance the performance of both machine learning and deep learning based readmission models using hyperparameter optimization in both Personal Computer environments and Mobile Cloud Computing systems. The proposed model, called Improving Diabetic Detection using Hyperparameter Optimization (IDD-HPO), aims to achieve the best prediction accuracy for hospital readmission while minimizing resources such as time delay and energy consumption. The results achieved by the proposed model for Logistic Regression, K-Nearest Neighbors, and Support Vector Machine are (accuracy=0.671, 0.883, 0.901; time delay=5, 7, 20; energy consumed=25, 32, 48), respectively, and for the Recurrent Neural Network and Convolutional Neural Network (accuracy=0.854, 0.963; time delay=25, 660; energy consumed=89, 895), respectively. However, this model consumes considerable time and energy, especially for the Convolutional Neural Network. The experiments were therefore repeated in the cloud environment, with two types of storage, to preserve accuracy while decreasing time and energy: in the cloud environment the proposed model achieves, for Logistic Regression, K-Nearest Neighbors, and Support Vector Machine, (accuracy=0.671, 0.883, 0.901; time delay=2, 3, 8; energy consumed=8, 9, 11), respectively, and for the Recurrent Neural Network and Convolutional Neural Network (accuracy=0.854, 0.963; time delay=15, 220; energy consumed=20, 301), respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_40-IDD_HPO_A_Proposed_Model_for_Improving_Diabetic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Cascade Model and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120841</link>
        <id>10.14569/IJACSA.2021.0120841</id>
        <doi>10.14569/IJACSA.2021.0120841</doi>
        <lastModDate>2021-08-31T10:26:59.7630000+00:00</lastModDate>
        
        <creator>Carlos Pelta</creator>
        
        <subject>Emotional cascade model; borderline personality disorder; rumination; deep learning; cascade-correlation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The Emotional Cascade Model proposes that the emotional and behavioral dysregulation of individuals with Borderline Personality Disorder can be understood through emotional cascades. Emotional cascades are vicious cycles of intense rumination and negative affect that may induce aversive emotional states, which in turn generate abnormal behaviors intended to reduce the effect of intense rumination. Borderline Personality Disorder is a psychiatric disorder whose main diagnostic symptoms are mood instability and impulsivity; it often involves risky behaviors such as non-suicidal self-injury or substance abuse. Recently, Selby and collaborators showed that the Emotional Cascade Model has high explanatory and diagnostic capacity using Temporal Bayesian Networks. Taking into consideration the meta-analytic study developed by Richman et al., this article designs a deep learning model, based on cascading artificial neural networks, following the correlations established for the Emotional Cascade Model. The predictive power of this model, relative to the various types of rumination that influence some of the basic classes of symptoms of Borderline Personality Disorder, has been confirmed with accuracy estimates reaching up to 99%.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_41-Emotional_Cascade_Model_and_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Perceptions of the Madrasati Platform by Teachers in Saudi Arabian Schools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120839</link>
        <id>10.14569/IJACSA.2021.0120839</id>
        <doi>10.14569/IJACSA.2021.0120839</doi>
        <lastModDate>2021-08-31T10:26:59.7470000+00:00</lastModDate>
        
        <creator>Wesam Shishah</creator>
        
        <subject>Usability; Madrasati Platform; e-learning; Saudi Arabia; schools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>As a result of the COVID-19 pandemic, the Saudi Ministry of Education launched the Madrasati Platform for distance teaching and learning in schools. It is therefore important to recognize the design and usability challenges linked to this new platform. This paper reports the results of a study that examines the usability level of the Madrasati Platform from the perspective of schoolteachers in Saudi Arabia. It also investigates the usability issues that teachers faced when using the platform. A total of 759 teachers responded to the Computer System Usability Questionnaire (CSUQ), and semi-structured interviews were conducted with ten teachers. The findings indicate that the usability of the Madrasati Platform for schoolteachers is inadequate and needs further improvement, with navigation issues the most frequently reported by participants. Finally, the paper presents some recommendations for improving the usability of, and experience with, the Madrasati Platform.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_39-Usability_Perceptions_of_the_Madrasati_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Spelling Correction and Query Expansion for Relevance Document Searching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120838</link>
        <id>10.14569/IJACSA.2021.0120838</id>
        <doi>10.14569/IJACSA.2021.0120838</doi>
        <lastModDate>2021-08-31T10:26:59.7330000+00:00</lastModDate>
        
        <creator>Dewi Soyusiawaty</creator>
        
        <creator>Denny Hilmawan Rahmatullah Wolley</creator>
        
        <subject>Cosine similarity; information retrieval; Levenshtein distance; TF-IDF; typographical error; query expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>A digital library is a type of information retrieval (IR) system. Existing IR methodologies generally have problems with keyword searching: some search engines cannot provide search results under partial matching or typographical errors. It is therefore necessary to provide search results that are relevant to the keywords supplied by the user. We propose a model that solves this problem by combining spelling correction and query expansion. Searching starts with indexing the document titles: the titles of all incoming documents are preprocessed, and then Term Frequency–Inverse Document Frequency (TF-IDF) weighting is applied to all terms of the whole collection. The Levenshtein Distance algorithm is used in the search process to correct keywords indicating typos. Before calculating the relevance between the keywords and the documents using Cosine Similarity, the keywords are expanded using Query Expansion to increase the number of documents retrieved. The Cosine Similarity results are then added to the Query Expansion weight calculation to obtain the final ranking. Results show improvements over an IR system without spell checking and query expansion. The resulting web-based application was tested 50 times on 2,045 data records. The system was able to correct keywords indicating typos and to search documents with an average recall of 95.91%, an average precision of 63.82%, and an average Non-Interpolated Average Precision (NIAP) of 86.29%.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_38-Hybrid_Spelling_Correction_and_Query_Expansion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Discussion Support System with Facilitation Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120837</link>
        <id>10.14569/IJACSA.2021.0120837</id>
        <doi>10.14569/IJACSA.2021.0120837</doi>
        <lastModDate>2021-08-31T10:26:59.7170000+00:00</lastModDate>
        
        <creator>Chihiro Sasaki</creator>
        
        <creator>Tatsuya Oyama</creator>
        
        <creator>Chika Oshima</creator>
        
        <creator>Shin Kajihara</creator>
        
        <creator>Koichi Nakayama</creator>
        
        <subject>Discussion board system; decision-making; facilitation function; diverse values and opinions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In this paper, we present a discussion board system (DBS) with a facilitator function that we developed for the purpose of facilitating discussions that require decision-making that incorporates diverse values and opinions. Its function is to constantly extract nouns from the utterances of discussion participants and display them on the DBS on each participant&#39;s personal computer. Items to be decided in the discussion are displayed together with a frame (called a “box”), and each participant puts the displayed keywords in the box according to their own opinions and intentions. No other participant can see what the individual is doing. The color of any keyword that all participants put in the same box changes to green. Furthermore, comments are automatically presented based on the time when each participant last spoke and the time when the keyword was moved. This is intended to encourage participants who appear to be less involved to join the discussion. Experiments with the DBS suggested that it might be possible to capture the will of participants who disagree but do not speak. However, it was also deemed necessary to post comments that encourage participants to express their intentions independently, and to have a mechanism that can link the motivated manifestation of intentions to appropriate actions.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_37-Online_Discussion_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determine the Main Target Audience Characteristics in M-learning Applications in Saudi Arabian University Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120836</link>
        <id>10.14569/IJACSA.2021.0120836</id>
        <doi>10.14569/IJACSA.2021.0120836</doi>
        <lastModDate>2021-08-31T10:26:59.7000000+00:00</lastModDate>
        
        <creator>Alaa Badwelan</creator>
        
        <creator>Adel A. Bahaddad</creator>
        
        <subject>M-learning; mobile learning; UTAUT; KSA; MOE; application quality; qualitative study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In the fourth economic revolution, which is based on digital transformation, e-learning represents one of the most important processes needed to deal with the revolution through increased skills and knowledge. Automation services in the e-learning field thus represent one of the most important and supportive means of transferring and disseminating knowledge to a diverse and wide segment. This study focuses on defining the parameters and characteristics of the target audience interested in e-learning through smart device applications in higher education institutions in Saudi Arabia. The study used a quantitative method, with data collected from 539 participants from several universities and institutes to determine their characteristics. The studied segment represents one of the groups most fundamentally and fully motivated to accept new technology: 70% of Saudi smart device users belong to the youth segment, which is the university age group. This is the category expected to make the most use of e-learning in light of the coronavirus pandemic, which has cast a shadow over the six continents of the world. This approach could help the target audience adopt M-learning applications according to a number of technical and design requirements, which are presented in this study.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_36-Determine_the_Main_Target_Audience_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Demand Response Management System using Polymorphic Bayesian Inference Supported Multilayer Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120835</link>
        <id>10.14569/IJACSA.2021.0120835</id>
        <doi>10.14569/IJACSA.2021.0120835</doi>
        <lastModDate>2021-08-31T10:26:59.6830000+00:00</lastModDate>
        
        <creator>Sarin CR</creator>
        
        <creator>Geetha Mani</creator>
        
        <subject>Adaptive control; Bayesian; demand response; energy management system; load scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The most significant limitation of stand-alone microgrid systems is the challenge of meeting unexpected additional demand. If demand exceeds the capacity of a stand-alone system, the system may be unable to satisfy it. This issue is alleviated in grid-connected technology, since the utility system will provide more power on demand. As a result, load scheduling is an integral element of the demand response of a stand-alone system. There are two components to this problem. If the capacity of a battery-supported power system is restricted, it will not be able to meet the entire demand for the period that the source is available. Accordingly, the demand is dispersed across a period of time until the next charge becomes available. Some demands may be disregarded in order to accomplish peak-load trimming, or if the system is incapable of meeting demand without compromising other important technical and consumer objectives. This is a challenging assignment. This article aims to develop an Adaptive Demand Response Management System (ADRMS) capable of load scheduling and load shedding using an interwoven multidimensional Bayesian inference supported by multiple mathematical models. A two-stage hardware architecture is developed, with the first hardware measuring demand and source capacity before sending the data to the second hardware via LPWAN for mathematical analysis. In the first phase, two approaches are used to forecast demand: the Gaussian Naive Bayes Model (GNBM) and Bayesian Structural Time Series (BSTS) analysis. GNBM is rapid but fails to properly forecast the output when there is a zero-frequency error, whereas BSTS can offer more precise results than GNBM but is slower; hence, the two approaches are employed in tandem. The next stage is to assign demand-source integration. This is accomplished using Bayesian Reinforcement Learning (BRL), which is based on a number of incentives, including anomaly, cost factors, usefulness, reliability, and size. All Bayesian models are subjected to much of the common Bayes rule, resulting in the formulation of a blended polymorphism model that reduces computing time and memory allocation and improves processing reliability. The Isolation Forest (IF) method is used to identify and avoid vulnerable loads by determining demand anomalies. The last step employs a Dynamic Preemptive Priority Round Robin (DPPRR) algorithm for preemptive, priority-based load scheduling based on forecasted data to allocate the next loads to be added.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_35-Adaptive_Demand_Response_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Traffic Congestion at Road Junction using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120834</link>
        <id>10.14569/IJACSA.2021.0120834</id>
        <doi>10.14569/IJACSA.2021.0120834</doi>
        <lastModDate>2021-08-31T10:26:59.6700000+00:00</lastModDate>
        
        <creator>Amir Hamzah Pohan</creator>
        
        <creator>Liza A. Latiff</creator>
        
        <creator>Rudzidatul Akmam Dziyauddin</creator>
        
        <creator>Nur Haliza Abdul Wahab</creator>
        
        <subject>Traffic fuzzy classification; congestion; traffic lights control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The timing of traffic lights at intersections is determined by the local authority. It is based on peak-hour statistics, and the same timing is maintained even during off-peak hours. One standard green time is used throughout the day regardless of the number of vehicles and the road width. This approach yields a long green light even when only a few vehicles are present and hence causes a long waiting time at intersections. Therefore, the aim of this study is to vary the timing of traffic lights at junctions according to the number of vehicles. This paper also considers the road-width variable, which has not been considered so far. Fuzzy logic rules are used to classify the number of vehicles, the road width, and the time taken for vehicles to move through the intersection, as proposed in a previous work. The new timing is commensurate with the number of vehicles and the road width. Field-test data were gathered from the Sala Benda and Semplak intersections, which are amongst the busiest intersections in Bogor, Indonesia. Comparisons show that the green-light timing obtained is appropriate to the two factors considered, and the waiting time for vehicles in each traffic cycle was also reduced. This study has formulated an optimal green-light timing for each intersection, which can be used by local authorities to determine the timing of green traffic lights at an intersection and hence implement traffic control with an appropriate waiting time.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_34-Mitigating_Traffic_Congestion_at_Road_Junction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Clustering of Comments Written in Albanian Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120833</link>
        <id>10.14569/IJACSA.2021.0120833</id>
        <doi>10.14569/IJACSA.2021.0120833</doi>
        <lastModDate>2021-08-31T10:26:59.6700000+00:00</lastModDate>
        
        <creator>M&#235;rgim H. HOTI</creator>
        
        <creator>Jaumin AJDARI</creator>
        
        <subject>Unsupervised clustering; k-means; spectral; agglomerative; vectorization; Albanian language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Nowadays, social media and communication on social media have become very important for service providers; they play a key role in service-quality improvement as well as in decision making. Consumers' discussions are usually written in their local languages, and extracting important knowledge from them is sometimes very hard and problematic. Natural language processing techniques are helpful in this field, but different languages have their own specifics and difficulties, and some languages are not well resourced in NLP techniques and methods, especially local varieties of a language. In this paper, we have tried to solve such a problem for the Albanian language spoken in Kosovo. Namely, for a dataset of comments written in the Albanian language in Kosovo (local speech), collected from social media, unsupervised clustering techniques are used to cluster the comments by topic of discussion. In this research, different text-feature-extraction techniques (vectorization and others) and clustering algorithms (K-means, Spectral, Agglomerative, etc.) are used with the aim of finding and defining the most appropriate techniques for the Albanian language. The paper presents the results of the conducted experiments as well as discussions of what to use in the case of the Albanian language and other languages similar to or grouped with Albanian (those with weak NLP resources).</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_33-Unsupervised_Clustering_of_Comments_Written.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Training of Deep Neural Networks for Automatic Arabic-Text Diacritization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120832</link>
        <id>10.14569/IJACSA.2021.0120832</id>
        <doi>10.14569/IJACSA.2021.0120832</doi>
        <lastModDate>2021-08-31T10:26:59.6530000+00:00</lastModDate>
        
        <creator>Asma Abdel Karim</creator>
        
        <creator>Gheith Abandah</creator>
        
        <subject>Arabic text; automatic diacritization; bidirectional neural network; long short-term memory; natural language processing; recurrent neural networks; sequence transcription</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Automatic Arabic diacritization is one of the most important and challenging problems in Arabic natural language processing (NLP). Recurrent neural networks (RNNs) have proved recently to achieve state-of-the-art results for sequence transcription problems in general, and Arabic diacritization in specific. In this work, we investigate the effect of varying the size of the training corpus on the accuracy of diacritization. We produce a cleaned corpus of approximately 550k sequences extracted from the full dataset of Tashkeela and use subsets of this corpus in our training experiments. Our base model is a deep bidirectional long short-term memory (BiLSTM) RNN that transcribes undiacritized sequences of Arabic letters with fully diacritized sequences. Our experiments show that error rates improve as the size of training corpus increases. Our best performing model achieves average diacritic and word error rates of 1.45% and 3.89%, respectively. When compared with state-of-the-art diacritization systems, we reduce the word error rate by 12% over the best published results.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_32-On_the_Training_of_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection Approaches in Images: A Weighted Scoring Model based Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120831</link>
        <id>10.14569/IJACSA.2021.0120831</id>
        <doi>10.14569/IJACSA.2021.0120831</doi>
        <lastModDate>2021-08-31T10:26:59.6370000+00:00</lastModDate>
        
        <creator>Hafsa Ouchra</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>Computer vision; object detection; images; WSM method; object detection algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Computer vision is a branch of artificial intelligence that trains computers to acquire a high-level understanding of images and videos. Some of the most well-known areas in computer vision are object detection, object tracking, and motion estimation, among others. Our focus in this paper is the object-detection subarea of computer vision, which aims at recognizing instances of predefined sets of object classes using bounding boxes or object segmentation. Object detection relies on various algorithms belonging to various families that differ in terms of speed and quality of results. Hence, we propose in this paper a comparative study of these algorithms based on a set of criteria. In this comparative study, we start by presenting each of these algorithms, selecting a set of criteria for comparison, and applying a comparative methodology to obtain results. The methodology we chose for this purpose is the Weighted Scoring Model (WSM), which fits our needs exactly: the WSM method allows us to assign a weight to each criterion to calculate a final score for each of the compared algorithms. The obtained results reveal the weaknesses and strengths of each algorithm and open avenues for their future enhancement.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_31-Object_Detection_Approaches_in_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm and Ensemble Learning Aided Text Classification using Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120830</link>
        <id>10.14569/IJACSA.2021.0120830</id>
        <doi>10.14569/IJACSA.2021.0120830</doi>
        <lastModDate>2021-08-31T10:26:59.6230000+00:00</lastModDate>
        
        <creator>Anshumaan Chauhan</creator>
        
        <creator>Ayushi Agarwal</creator>
        
        <creator>Razia Sulthana</creator>
        
        <subject>Genetic algorithm; ensemble learning; support vector machines; text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Text classification is one of the areas where machine learning algorithms are used. The size of the dataset and the methods used for converting textual words into vectors play a major role in classification. This paper proposes a heuristic-based approach to classify documents using Genetic Algorithm-aided Support Vector Machines (SVM) and an ensemble learning approach. The real-valued representation of textual data as vectors is obtained by applying Term Frequency-Inverse Document Frequency (TF-IDF) and Bi-Normal Separation (BNS). The common data-misclassification issue in SVM is overcome by introducing two algorithms that add weight to accurate classification. The first algorithm applies BNS and TF-IDF along with ensemble learning and constructs a voting classifier for classifying the textual documents. The results show that TF-IDF produces better results with the voting classifier than BNS; henceforth, TF-IDF is applied in the subsequent approach for vector generation. Secondly, a genetic algorithm is applied along with the OneVsRest strategy in SVM to overcome the drawback of multiclass, multilabel classification. The results show that the genetic algorithm improves classification accuracy even with a very small labelled dataset, as it applies mutation and crossover across many generations to learn the pattern of correct classification.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_30-Genetic_Algorithm_and_Ensemble_Learning_Aided.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating and Comparing the Usability of Privacy in WhatsApp, Twitter, and Snapchat</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120829</link>
        <id>10.14569/IJACSA.2021.0120829</id>
        <doi>10.14569/IJACSA.2021.0120829</doi>
        <lastModDate>2021-08-31T10:26:59.6070000+00:00</lastModDate>
        
        <creator>Abdulmohsen S. Albesher</creator>
        
        <creator>Thamer Alhussain</creator>
        
        <subject>HCI; usability; heuristics evaluation; STRAP; privacy; usable privacy; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>With the increased use of social networking platforms, especially with the inclusion of sensitive personal information, it has become important for those platforms to have adequate levels of security and privacy. This research aimed to evaluate the usability of privacy in the WhatsApp, Twitter, and Snapchat applications. The evaluation was conducted based on the structured analysis of privacy (STRAP) framework. Seven expert evaluators performed heuristic evaluations and applied the 11 STRAP heuristics to the privacy policy statements and settings provided in the WhatsApp, Twitter, and Snapchat applications. This study provides useful information for designers and developers of social media applications as well as users of the apps. In particular, the results indicate that Snapchat had the highest severity rating, followed by Twitter and WhatsApp. Moreover, the most notable severity rating for all the apps was regarding the ability to revoke consent, where all of the apps had a very high number of usability problems.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_29-Evaluating_and_Comparing_the_Usability_of_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secured SECS/GEM: A Security Mechanism for M2M Communication in Industry 4.0 Ecosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120828</link>
        <id>10.14569/IJACSA.2021.0120828</id>
        <doi>10.14569/IJACSA.2021.0120828</doi>
        <lastModDate>2021-08-31T10:26:59.5900000+00:00</lastModDate>
        
        <creator>Ashish Jaisan</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <creator>Shams A. Laghari</creator>
        
        <creator>Shafiq Ul Rehman</creator>
        
        <creator>Shankar Karuppayah</creator>
        
        <subject>SECS/GEM; HSMS; cybersecurity; industry-4.0; machine-to-machine communication; AES-GCM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The manufacturing industry has been revolutionized by Industry 4.0, vastly improving the manufacturing process, increasing production quality and capacity. Machine-to-Machine (M2M) communication protocols were developed to strengthen and bind this ecosystem by allowing machines to communicate with each other. The SECS/GEM protocol is at the heart of the manufacturing industry, thriving as a communication protocol and control system for years. It is a manufacturing equipment protocol used for equipment-host data communications. However, it is not without drawbacks, despite being a widely adopted communication protocol used by leading industries. SECS/GEM does not offer any type of security features as it was designed to work in a closed network. Such shortcomings in the protocol will allow attackers to steal secrets such as manufacturing processes by looking at recipes, perform reconnaissance prior to sabotage attempts, and can have severe implications on the entire industry. This paper proposes a mechanism to secure SECS/GEM data messages with AES-GCM encryption and evaluate the performance with the standard SECS/GEM protocol. The results from our evaluations showed that the proposed mechanism achieves data confidentiality and authenticity with a negligible overhead of 0.8 milliseconds and 0.37 milliseconds when sending and receiving a message, respectively, compared to the standard protocol.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_28-Secured_SECS_GEM_A_Security_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Novel Algorithm for Data Hiding on Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120826</link>
        <id>10.14569/IJACSA.2021.0120826</id>
        <doi>10.14569/IJACSA.2021.0120826</doi>
        <lastModDate>2021-08-31T10:26:59.5430000+00:00</lastModDate>
        
        <creator>Roopesh Kumar</creator>
        
        <creator>Ajay Kumar Yadav</creator>
        
        <subject>Steganography encryption; discrete cosine transform; DES; F5Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In the current era of communication, computers, mobile phones, and the Internet are widely used media, and in such an environment information security is an important issue. The most common techniques used for information security are cryptography and steganography. Much research has been done on image steganography. Images come in different formats such as BMP, GIF, and PNG. Several compression methods are available, but compression applied to BMP, GIF, and PNG images is comparatively less effective than for JPEG images. The JPEG-compression steganography method is complex to implement, yet some algorithms have been developed for it; they provide single-level or two-level information security. This paper presents a method in which a combination of cryptography and steganography is used on Android mobile phones. Encryption is performed twice on the data to be hidden, and the encrypted data is then hidden in both text and image. We use the Advanced Encryption Standard (AES) algorithm to hide text within text, the Discrete Cosine Transform (DCT) for image steganography, and the Data Encryption Standard (DES) for encryption of text. This paper thus provides the idea of three-level information security in the Android environment.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_26-Development_of_Novel_Algorithm_for_Data_Hiding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy and Genetic Algorithm-based Decision-making Approach for Collaborative Team Formation: A Study on User Acceptance using UTAUT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120827</link>
        <id>10.14569/IJACSA.2021.0120827</id>
        <doi>10.14569/IJACSA.2021.0120827</doi>
        <lastModDate>2021-08-31T10:26:59.5430000+00:00</lastModDate>
        
        <creator>Azleena Mohd Kassim</creator>
        
        <creator>Norhanizah Minin</creator>
        
        <creator>Yu-N Cheah</creator>
        
        <creator>Fazilah Othman</creator>
        
        <subject>Collaborative team formation; genetic algorithm; fuzzy logic; unified theory of acceptance and use of technology (UTAUT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>An optimal collaborative team is formed using members' characteristics to improve team efficiency; a team formed randomly may perform poorly. Moreover, it is practically impossible to form an optimal team manually, as the formation can expand into countless possibilities. Hence, this paper presents a decision-making framework for collaborative team formation by incorporating Fuzzy Logic and a Genetic Algorithm (Fuzzy-GA). The framework combines effective team-formation factors such as skills, trust, leadership, and individual performance. The Unified Theory of Acceptance and Use of Technology (UTAUT) is utilised to survey the readiness and technology acceptance of organisations' employees in adopting the proposed decision-making approach to form a collaborative team. The UTAUT survey showed that behavioural intention (BI) had a positive relationship with performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC). However, behavioural intention had a negative relationship with voluntariness of use (VU); thus the transformation of collaborative team formation must be further explored to increase teams' voluntarism towards automated collaborative team formation.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_27-Fuzzy_and_Genetic_Algorithm_based_Decision_making_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correlating Discriminative Quality Factors (CDQF) for Optimal Resource Scheduling in Cloud Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120825</link>
        <id>10.14569/IJACSA.2021.0120825</id>
        <doi>10.14569/IJACSA.2021.0120825</doi>
        <lastModDate>2021-08-31T10:26:59.5300000+00:00</lastModDate>
        
        <creator>B. Ravindra Babu</creator>
        
        <creator>A.Govardhan</creator>
        
        <subject>Resource management (RM); resource scheduling (RS); resource provisioning (RP); QoS; infrastructure-as-a-service (IAAS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The Correlating Discriminative Quality Factors (CDQF) approach for optimal resource scheduling in cloud networks is addressed in this manuscript. This is because resources under the cloud platform are loosely coupled according to the SLA between the cloud platform and the resource partakers, which enables multiple resources from diversified partakers that are intended to accomplish similar services. Resource scheduling intends to select one resource among the available resources to accomplish the scheduled task(s). Contemporary contributions on resource scheduling are specific to traditional QoS factors, including cost, deadline constraints, and power consumption. However, quality of service is often influenced by the contextual factors of the IAAS. Hence, this manuscript presents a novel resource-scheduling strategy that orders resources by the proposed degree of optimality. Unlike traditional resource-scheduling methods, this manuscript defines a set of context-related factors that are further used to define a heuristic measure called the “Degree of Optimality.” An experimental study in a simulated environment demonstrates the proposal's performance advantage over other existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_25-Correlating_Discriminative_Quality_Factors_CDQF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Validation of WebQMDW Model for Quality-based External Web Data Source Incorporation in a Data Warehouse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120824</link>
        <id>10.14569/IJACSA.2021.0120824</id>
        <doi>10.14569/IJACSA.2021.0120824</doi>
        <lastModDate>2021-08-31T10:26:59.5130000+00:00</lastModDate>
        
        <creator>Priyanka Bhutani</creator>
        
        <creator>Anju Saha</creator>
        
        <creator>Anjana Gosain</creator>
        
        <subject>Data warehouse; external data sources; web data sources; quality evaluation model; quality model validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In recent years, the World Wide Web has emerged as the most promising external data source for organizations' Data Warehouses, providing the valuable insights required for comprehensive decision making to gain a competitive edge. However, when a Data Warehouse uses external Web data sources without quality evaluation, its quality can be adversely impacted. Quality models have been proposed in the research literature to evaluate and select Web data sources for integration in a Data Warehouse; however, these models are only conceptually proposed and not empirically validated. Therefore, in this paper, the authors present an empirical validation conducted on a set of 57 subjects to thoroughly validate the set of 22 quality factors and the initial structure of the multi-level, multi-dimensional WebQMDW quality model. The validated and restructured WebQMDW model thus obtained can significantly enhance decision-making in the DW by selecting high-quality Web data sources.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_24-Empirical_Validation_of_WebQMDW_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Rough-fuzzy C-means Clustering and Optimum Fuzzy Interference System for MRI Brain Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120823</link>
        <id>10.14569/IJACSA.2021.0120823</id>
        <doi>10.14569/IJACSA.2021.0120823</doi>
        <lastModDate>2021-08-31T10:26:59.4970000+00:00</lastModDate>
        
        <creator>D. Maruthi Kumar</creator>
        
        <creator>D. Satyanarayana</creator>
        
        <creator>M. N. Giri Prasad</creator>
        
        <subject>Cerebrospinal fluid; fuzzy interference system; enhanced grasshopper optimization algorithm; improved rough fuzzy c-means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The categorization of brain tissues plays a vital role in various neuro-anatomical identification tasks and implementations. In manual detection, misidentification of the location and extent of unwanted tissues may occur due to human visual fatigue; manual detection also consumes more time and may vary considerably between operators. At present, automatic identification of brain tissues in MRI is vital for investigative and healing applications. This work proposes MRI tissue segmentation using an Improved Rough Fuzzy C-Means (IRFCM) algorithm and classification using multiple fuzzy systems. The proposed work comprises four modules: pre-processing, segmentation, feature extraction, and categorization. Initially, noise in the given image is eliminated through pre-processing. After pre-processing, segmentation is carried out on the pre-processed brain image to segment the tissue based on a clustering concept using the Improved Rough Fuzzy C-Means algorithm. Later, Gray-Level Co-Occurrence Matrix (GLCM) features are extracted from the segmented images and applied to an Optimum Fuzzy Interference System (OFIS). The entire system's parameters are then optimized using the Enhanced Grasshopper Optimization Algorithm (EGOA). Finally, the novel OFIS classifier classifies the brain-tissue images as Gray Matter (GM), White Matter (WM), Cerebrospinal Fluid (CSF), and Tumor Tissues (TT). Results on MRI datasets are analyzed and compared with other existing techniques through performance metrics to show the superiority of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_23-Improved_Rough_fuzzy_C_means_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Requirements Engineering Analysis using Conceptual Mapping in Healthcare Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120822</link>
        <id>10.14569/IJACSA.2021.0120822</id>
        <doi>10.14569/IJACSA.2021.0120822</doi>
        <lastModDate>2021-08-31T10:26:59.4830000+00:00</lastModDate>
        
        <creator>Aya Radwan</creator>
        
        <creator>A. Abdo</creator>
        
        <creator>Sayed Abdel Gaber</creator>
        
        <subject>Conceptual mapping; healthcare systems; clustering; requirement engineering analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Healthcare systems aim to achieve the best possible support for patient care and to provide good medical care. Good analysis of requirements is essential to avoid any crises. Elicitation of healthcare system requirements is an emerging and critical phase. It is a challenging task to deal with constraints from the stakeholders and the restrictions of legal issues. In this research, an approach, &quot;Conceptual Mapping for non-functional Health care Requirements (CMHR)&quot;, is proposed to analyze and evaluate the relationship between the clinical non-functional requirements of medical devices (ventilators as an example in this research) according to the following five attributes: prioritization of requirements, suitability, feasibility, achievability, and risk. Requirements are automatically clustered using the K-means++ algorithm, which finds the optimal number of clusters, and the resulting clusters are visualized as a concept map. Clustering is applied to different combinations of the attributes to sort the requirements and to visualize them. Label names are assigned to the classes of requirements so that each requirement is assigned to the appropriate class. Consequently, a prediction for a new requirement can be made automatically. The approach achieved less rework, fast delivery of the project with good quality, and a higher level of user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_22-An_Approach_for_Requirements_Engineering_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Speech Recognition Features Extraction Techniques: A Multi-criteria Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120821</link>
        <id>10.14569/IJACSA.2021.0120821</id>
        <doi>10.14569/IJACSA.2021.0120821</doi>
        <lastModDate>2021-08-31T10:26:59.4670000+00:00</lastModDate>
        
        <creator>Maria Labied</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>Automatic speech recognition; feature extraction; comparative study; MFCC; PCA; LPC; DWT; WSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Feature extraction is an important step in Automatic Speech Recognition, which consists of determining the audio signal components that are useful for identifying linguistic content while removing background noise and irrelevant information. The main objective of feature extraction is to identify discriminative and robust features in the acoustic data. The derived feature vector should possess the characteristics of low dimensionality, long-term stability, insensitivity to noise, and no correlation with other features, which makes the application of a robust feature extraction technique a significant challenge for Automatic Speech Recognition. Many comparative studies have been carried out to compare different speech recognition feature extraction techniques, but none of them have evaluated the criteria to be considered when applying a feature extraction technique. The objective of this work is to answer some of the questions that may arise when deciding which feature extraction technique to apply, through a multi-criteria comparison of different feature extraction techniques using the Weighted Scoring Method.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_21-Automatic_Speech_Recognition_Features_Extraction_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Classification of Essential Factors for the Development and Implementation of Cyber Security Strategy in Public Sector Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120820</link>
        <id>10.14569/IJACSA.2021.0120820</id>
        <doi>10.14569/IJACSA.2021.0120820</doi>
        <lastModDate>2021-08-31T10:26:59.4500000+00:00</lastModDate>
        
        <creator>Waqas Aman</creator>
        
        <creator>Jihan Al Shukaili</creator>
        
        <subject>Cyber security strategy; critical factors; risk treatment; culture; threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>To ensure the achievement of quality security to safeguard business objectives, implementing and maintaining an effective Cyber Security Strategy (CSS) is crucial. Inevitably, we need to recognize and evaluate the essential factors, such as technological, cultural, regulatory, economic, and others, that may hinder the efficacy of CSS development and implementation. From the literature review, it is evident that such factors are either abstractly stated or only assessed from a singular viewpoint, and are scattered across the literature. Moreover, there is a lack of holistic studies that could assist us in comprehending the critical factors affecting a CSS. In this paper, we present a systematic classification consisting of a distinct, structured, and comprehensive list of key factors covering multiple aspects of an organization’s CSS, including organizational, cultural, economic, legal and political, and security aspects, to provide a more complete view for understanding the essentials and analyzing the aptitude of a planned or given CSS. The proposed classification is further evaluated to examine the critical factors, verified by conducting semi-structured interviews with security experts in different public sector organizations. Furthermore, we present a comparison of our work with recent attempts, which reflects that a significant accumulation of essential factors has been holistically identified in this study.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_20-A_Classification_of_Essential_Factors_for_the_Development_and_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Clustering-based MOOC Recommendations using LinkedIn Profiles (MR-LI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120818</link>
        <id>10.14569/IJACSA.2021.0120818</id>
        <doi>10.14569/IJACSA.2021.0120818</doi>
        <lastModDate>2021-08-31T10:26:59.4330000+00:00</lastModDate>
        
        <creator>Fatimah Alruwaili</creator>
        
        <creator>Dimah Alahmadi</creator>
        
        <subject>MOOCs; recommendation systems; content-based; clustering; term frequency-inverse document frequency (TF-IDF); LinkedIn</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>With the rapid development of massive open online courses (MOOCs), the interest of learners in MOOCs has increased significantly. MOOC platforms offer thousands of varied courses with many options. These options make it difficult for learners to choose courses that suit their needs and are compatible with their interests, leaving them exposed to many courses on all topics. Therefore, there is an urgent need for personalized recommendation systems that assist learners in filtering courses according to their interests. In this research, we target learners on the professional platform LinkedIn as the basis for user modeling; the number of extracted profiles equals 5,039. Then, skill-based clustering algorithms were applied to the LinkedIn users. Subsequently, we applied similarity measurement between the feature vectors of the resulting clusters and the extracted course vectors. In the experimental results, four clusters were provided with top-N course recommendations. Ultimately, the proposed approach was evaluated, and its F1-score was 0.81.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_18-Enhanced_Clustering_based_MOOC_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constructing IoT Botnets Attack Pattern for Host-based and Network-based Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120819</link>
        <id>10.14569/IJACSA.2021.0120819</id>
        <doi>10.14569/IJACSA.2021.0120819</doi>
        <lastModDate>2021-08-31T10:26:59.4330000+00:00</lastModDate>
        
        <creator>Wan Nur Fatihah Wan Mohd Zaki</creator>
        
        <creator>Raihana Syahirah Abdullah</creator>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Faizal M.A</creator>
        
        <creator>Muhammad Safwan Rosli</creator>
        
        <subject>IoT; botnet; IoT botnet; host-based; network-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The Internet of Things (IoT) refers to things or devices equipped with software and intelligent sensors, interconnected via the internet to send and receive data with other devices. This capacity allows things such as smartphones, smart homes, intelligent toys, baby monitors, and IP cameras to act as intelligent devices and be utilized widely in everyday life. IoT has enormous expansion potential, and many challenges have been acknowledged but are still open today. A botnet is a collection of bots from IoT devices used to launch extensive network attacks. In addition, rapid growth in technology has led to an incomplete understanding of IoT. The increasing number of IoT devices has led to the spread of malware targeting them, making IoT botnet behaviors challenging to identify and determine. To detect these IoT botnets, a preliminary experiment on flow analysis is necessary. This paper identifies IoT botnet attack patterns from the IoT botnet behavior obtained from IoT botnet activities, in both host-based and network-based environments. First, this paper contributes to discovering, recognizing, categorizing, and detecting IoT botnet activities. Next, it organizes information to provide a better understanding of the IoT botnet&#39;s problem and potential solutions. Then, it constructs the IoT botnet attack pattern by analyzing the characteristics of IoT botnet behavior. This IoT botnet attack pattern is divided into two environments: host-based and network-based. As a result, this paper aims to inform people about the attack pattern when an IoT device has been infected and has become part of a botnet.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_19-Constructing_IoT_Botnets_Attack_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Predictive Algorithms in Data Mining for Cardiovascular Disease using WEKA Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120817</link>
        <id>10.14569/IJACSA.2021.0120817</id>
        <doi>10.14569/IJACSA.2021.0120817</doi>
        <lastModDate>2021-08-31T10:26:59.4030000+00:00</lastModDate>
        
        <creator>Aman</creator>
        
        <creator>Rajender Singh Chhillar</creator>
        
        <subject>Logistic regression (LR); support vector machine (SVM); Statlog; Cleveland; WEKA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Cardiovascular Disease (CVD) is the foremost cause of death worldwide and generates a high percentage of Electronic Health Records (EHRs). Analyzing the complex patterns in these EHRs is a tedious process. To address this problem, medical institutions require effective predictive algorithms for the prognosis and diagnosis of patients. In this work, the current state of the art was studied to identify leading predictive algorithms. Further, these algorithms, namely Support Vector Machine (SVM), Na&#239;ve Bayes (NB), Decision Tree (DT), Random Forest (RF), Artificial Neural Network (ANN), Logistic Regression (LR), AdaBoost, and k-Nearest Neighbors (k-NN), were analyzed against two datasets using the open-source WEKA software. This work used two similarly structured datasets, i.e., the Statlog dataset and the Cleveland dataset. For pre-processing of the datasets, missing values were replaced with the mean value, and 10-fold cross-validation was later utilized for the evaluation. The performance analysis showed that SVM outperforms the other algorithms on both datasets. SVM showed an accuracy of 84.156% on the Cleveland dataset and 84.074% on the Statlog dataset. LR showed a ROC area of 0.9 on both datasets. The findings of this work will help health institutions to understand the importance and usage of predictive algorithms for the automatic prediction of CVD based on symptoms.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_17-Analyzing_Predictive_Algorithms_in_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Framework for Big Data Requirement Elicitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120816</link>
        <id>10.14569/IJACSA.2021.0120816</id>
        <doi>10.14569/IJACSA.2021.0120816</doi>
        <lastModDate>2021-08-31T10:26:59.4030000+00:00</lastModDate>
        
        <creator>Aya Hesham</creator>
        
        <creator>Osama E. Emam</creator>
        
        <creator>Marwa Salah</creator>
        
        <subject>Requirement engineering; big data requirement; agile methodology; mind mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Requirement engineering is one of the software development life cycle phases; it has been recognized as an important phase for collecting and analyzing a system’s goals. However, despite its importance, requirement engineering has several limitations, such as incomplete requirements, vague requirements, lack of prioritization, and low user involvement, all of which affect requirement quality. With the emergence of big data technology, the complexity of big data, which is defined by large data volume, high velocity, and large data variety, has gradually increased, affecting the quality of big data software requirements. This study proposes a framework with four sequential phases to improve requirement engineering quality in big data software development. In the proposed framework, user requirements are collected in a complete vision using traditional requirement elicitation techniques combined with agile methodology and mind mapping. The collected requirements are displayed in a graphical representation using mind maps to achieve high requirement accuracy with connectivity and modifiability, enabling the accurate prioritization of requirements, which are then implemented using the agile SCRUM methodology. The proposed framework improves requirement quality in big data software development, represented by accuracy, completeness, connectivity, and modifiability, to clarify the value of the collected requirements and positively affect the quality of the implementation phase.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_16-Enhanced_Framework_for_Big_Data_Requirement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Higher Order Statistics and Phase Synchronization as Features in a Motor Imagery Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120815</link>
        <id>10.14569/IJACSA.2021.0120815</id>
        <doi>10.14569/IJACSA.2021.0120815</doi>
        <lastModDate>2021-08-31T10:26:59.3870000+00:00</lastModDate>
        
        <creator>Oana-Diana Hrisca-Eva</creator>
        
        <creator>Madalina-Giorgiana Murariu</creator>
        
        <creator>Anca Mihela Lazar</creator>
        
        <subject>Brain computer interface; motor imagery; higher order statistics; phase synchronization; EEG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The paper proposes an approach based on higher order statistics and phase synchronization for the detection and classification of relevant features in electroencephalographic (EEG) signals recorded while subjects perform motor tasks. The method was tested on two different datasets, and its performance was evaluated using the k-nearest neighbor classifier. The results (classification rates higher than 90%) have shown that the method can be used for discriminating right and left motor imagery tasks in offline EEG analysis for a brain computer interface system.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_15-Higher_Order_Statistics_and_Phase_Synchronization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optical Character Recognition Engines Performance Comparison in Information Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120814</link>
        <id>10.14569/IJACSA.2021.0120814</id>
        <doi>10.14569/IJACSA.2021.0120814</doi>
        <lastModDate>2021-08-31T10:26:59.3570000+00:00</lastModDate>
        
        <creator>Tosan Wiar Ramdhani</creator>
        
        <creator>Indra Budi</creator>
        
        <creator>Betty Purwandari</creator>
        
        <subject>Named entity recognition; information extraction; optical character recognition; government human resources documents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Named Entity Recognition (NER) is often used to acquire important information from text documents as part of the Information Extraction (IE) process. However, the quality of text documents affects the accuracy of the data obtained, especially for text documents acquired through an Optical Character Recognition (OCR) process, which never reaches 100% accuracy. This research examines which OCR engine offers the highest performance for IE using NER by comparing three OCR engines (Foxit, PDF2GO, Tesseract) over 8,562 government human resources documents across six document categories, two document structures, and four measurements. Several essential entities, such as name, employee ID, document number, document publishing date, employee rank, and family member&#39;s name, were extracted automatically from the documents. The NER processes were implemented in the Python programming language, and the pre-processing tasks were done separately for Foxit, PDF2GO, and Tesseract. In summary, each OCR engine has its drawbacks and benefits; for example, Tesseract has better NER extraction, conversion time, and accuracy, but lags in the number of entities acquired.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_14-Optical_Character_Recognition_Engines_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ANNMDD: Strength of Artificial Neural Network Types for Medical Diagnosis Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120813</link>
        <id>10.14569/IJACSA.2021.0120813</id>
        <doi>10.14569/IJACSA.2021.0120813</doi>
        <lastModDate>2021-08-31T10:26:59.3400000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <subject>Medical data; neural network algorithm; multiple; radial based function network; dynamic; quick; prune; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The abundance of medical evidence in health institutions necessitates the creation of effective data collection methods for extracting valuable information. For several years, scholars have focused on the use of computational and data processing techniques to enhance the study of broad historical datasets. There is a lack of investigation into the collected health disease data in sources such as the COVID-19, Chronic Kidney, Epileptic Seizure, Parkinson, Heart disease, Hepatitis, Breast Cancer, and Diabetes datasets, diseases that kill millions of people in the world. This research aims to investigate neural network algorithms for different types of medical diseases in order to select the best type of neural network suitable for each disease. The data mining process has been applied to investigate the mentioned medical disease datasets. The related works and literature review of machine learning in the medical domain were studied in the initial stage of this research. Then, the experiments were designed with six neural network algorithm styles, which are the Multiple, Radial Based Function Network (RBFN), Dynamic, Quick, and Prune algorithms. The extracted results for each algorithm have been analyzed and compared with each other to select the best neural network algorithm for each disease. A t-test statistical significance test has been applied as one of the investigation strategies for the optimal NN selection. Our findings highlight the strength of the Multiple NN algorithm in terms of the training and testing phases in the medical domain.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_13-ANNMDD_Strength_of_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Malware Classification for iOS Inspired by Phylogenetics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120812</link>
        <id>10.14569/IJACSA.2021.0120812</id>
        <doi>10.14569/IJACSA.2021.0120812</doi>
        <lastModDate>2021-08-31T10:26:59.3270000+00:00</lastModDate>
        
        <creator>Muhammad Afif Husainiamer</creator>
        
        <creator>Madihah Mohd Saudi</creator>
        
        <creator>Azuan Ahmad</creator>
        
        <creator>Amirul Syauqi Mohamad Syafiq</creator>
        
        <subject>iOS; mobile malware; reverse engineering; exploitation; phylogenetic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Cyber-attacks such as ransomware, data breaches, and phishing triggered by malware, especially on iOS (iPhone operating system) platforms, are increasing. Yet little work on malware detection for the iOS platform has been done compared to the Android platform. Hence, this paper presents an iOS malware classification inspired by phylogenetics. It consists of mobile behaviour, exploit, and surveillance features. The new iOS classification helps to identify, detect, and predict any new malware variants. The experiment was conducted using hybrid analysis, with twelve (12) malware datasets from the Contagio Mobile website. As a result, twenty-nine (29) new classifications have been developed. One hundred (100) anonymous mobile applications (50 from the Apple Store and 50 from iOS Ninja) were used for evaluation. Based on the evaluation conducted, 13% of the mobile applications matched the developed classifications. In the future, this work can be used as guidance for other researchers with the same interest.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_12-Mobile_Malware_Classification_for_iOS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A WSM-based Comparative Study of Vision Tracking Methodologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120811</link>
        <id>10.14569/IJACSA.2021.0120811</id>
        <doi>10.14569/IJACSA.2021.0120811</doi>
        <lastModDate>2021-08-31T10:26:59.3270000+00:00</lastModDate>
        
        <creator>Sara Bouraya</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>Multiple object tracking; object detection; WSM method; computer vision; video analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Vision tracking is a key component of video sequence analysis. It is the process of locating single or multiple moving objects over time using one or many cameras, and its function consists of detecting, categorizing, and tracking. The development of trustworthy solutions for video sequence analysis opens up new horizons for a variety of applications, including intelligent transportation systems, biomedicine, agriculture, human-machine interaction, augmented reality, video surveillance, robotics, and many other crucial research areas. To build efficient models, there are challenges in video observation to deal with, such as environmental problems, light variation, pose variation, motion blur, clutter, occlusion, and so on. In this paper, we present several techniques that address the issues of detecting and tracking multiple targets in video sequences. The proposed comparative study relies on different methodologies. This paper&#39;s purpose is to list various approaches, classify them, and compare them using the Weighted Scoring Model (WSM) comparison method. This includes studying these algorithms, selecting relevant comparison criteria, assigning weights to each criterion, and lastly computing scores. The obtained results of this study reveal the strong and weak points of each algorithm mentioned and discussed.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_11-A_WSM_based_Comparative_Study_of_Vision_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Optimum Number of Bases for Indian Languages in Non-negative Matrix Factorization based Multilingual Speech Separation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120809</link>
        <id>10.14569/IJACSA.2021.0120809</id>
        <doi>10.14569/IJACSA.2021.0120809</doi>
        <lastModDate>2021-08-31T10:26:59.2930000+00:00</lastModDate>
        
        <creator>Nandini C Nag</creator>
        
        <creator>Milind S Shah</creator>
        
        <subject>Indian languages; optimum number of bases; non-negative matrix factorization; speech separation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Non-negative matrix factorization-based audio source separation for extracting a target source has shown significant performance improvement when the spectral bases attained after factorization exhibit latent structures in the mixed audio signal comprising multiple speaker sources. If all the sources are known, the spectral bases may be inferred a priori by using a training process on a database of isolated sources. The number of bases inferred for a source should not include bases matching the spectral patterns of interfering sources in the audio mixture; otherwise, the estimated target source after separation will incorporate undesirable spectral patterns. It is difficult to distinguish and separate similar audio sources in overlapped speech, making this a complex speech processing task. Therefore, this research attempts to learn an optimum number of bases for Indian languages, leading to successful separation of the target source in multi-lingual multiple-speaker speech mixtures using non-negative matrix factorization. The languages used for the utterances are Hindi, Marathi, Gujarati, and Bengali. The speaker combinations used are female-female, male-male, and female-male. The optimum number of bases, determined by evaluating the improvement in separation performance, was found to be 40 for all the languages considered.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_9-Learning_Optimum_Number_of_Bases_for_Indian_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining Local Hematology Reference Ranges: A Data-driven Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120810</link>
        <id>10.14569/IJACSA.2021.0120810</id>
        <doi>10.14569/IJACSA.2021.0120810</doi>
        <lastModDate>2021-08-31T10:26:59.2930000+00:00</lastModDate>
        
        <creator>K. A. Hasara Semini</creator>
        
        <creator>H.A.Caldera</creator>
        
        <subject>Hematology science; standard hematology reference range; domestic hematology reference ranges; local hematology reference range; machine learning; white blood cell count</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Hematology is the study of blood, blood-forming organs, and blood diseases. Hematological tests such as the Full Blood Count (FBC) can be used to diagnose a wide range of infections and diseases by comparing their results with standard hematology reference (SHR) ranges. These ranges were established many years ago based on the Caucasian population, and all countries have used them until recent times to measure the healthiness of their people. However, these reference ranges can vary with factors such as dietary habits, geographical location, climate, and environmental conditions, so their universal use may not be correct. Many researchers have therefore begun investigating Local Hematology Reference (LHR) ranges. Most have used statistical analyses, which have their limitations; machine learning is a solution to overcome those limitations. The goal of this research is to find an approach for determining LHR ranges based on machine learning techniques. The dataset was generated from FBC test reports in Sri Lanka. This research addresses only the LHR range of the WBC count of healthy adults in Sri Lanka. A difference between the SHR range of WBC and the LHR range of WBC is observed.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_10-Determining_Local_Hematology_Reference_Ranges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MNN and LSTM-based Real-time State of Charge Estimation of Lithium-ion Batteries using a Vehicle Driving Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120808</link>
        <id>10.14569/IJACSA.2021.0120808</id>
        <doi>10.14569/IJACSA.2021.0120808</doi>
        <lastModDate>2021-08-31T10:26:59.2800000+00:00</lastModDate>
        
        <creator>Si Jin Kim</creator>
        
        <creator>Jong Hyun Lee</creator>
        
        <creator>Dong Hun Wang</creator>
        
        <creator>In Soo Lee</creator>
        
        <subject>Lithium-ion battery; state of charge; multilayer neural network; long short-term memory; vehicle driving simulator; real time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Lithium-ion batteries (a type of secondary battery) are now used as a power source in many applications due to their high energy density, low self-discharge rates, and long-term energy storage capability. However, overcharging is inevitable due to the frequent charging and discharging of these batteries, which may result in property damage caused by system shutdown, accident, or explosion. Therefore, reliable and efficient use requires accurate prediction of the battery state of charge (SOC). In this paper, a method of estimating SOC using vehicle simulator operation is proposed. After manufacturing the simulator for the battery discharge experiment, voltage, current, and discharge-time data were collected. The collected data were used as input parameters for a multilayer neural network (MNN) and a recurrent neural network–based long short-term memory (LSTM) network to predict the SOC of the batteries and compare errors. In addition, discharge experiments and SOC estimation were performed in real time using the developed MNN and LSTM surrogate models.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_8-MNN_and_LSTM_based_Real_time_State_of_Charge_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototype Design and Experimental Evaluation e-Healthcare System based on Molecular Analysis Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120807</link>
        <id>10.14569/IJACSA.2021.0120807</id>
        <doi>10.14569/IJACSA.2021.0120807</doi>
        <lastModDate>2021-08-31T10:26:59.2630000+00:00</lastModDate>
        
        <creator>Maxim Zakharov</creator>
        
        <creator>Alexander Paramonov</creator>
        
        <creator>Ammar Muthanna</creator>
        
        <creator>Ruslan Kirichek</creator>
        
        <subject>Micro spectrometer; molecular analysis; NIR spectroscopy; public communication networks; internet of things; e-health; m-health; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Recently, wireless body area networks (WBANs) and the mobile Internet of Things (IoT) have grown greatly and been integrated into different types of systems, such as electronic/mobile-healthcare (e/m-healthcare) systems. Analyzing the composition of drugs or performing rapid medical tests in field conditions, where traditional laboratory methods of analysis are not suitable, are key tasks of an m-healthcare system. Therefore, in this study, we propose a novel structure for a distributed e-health system to perform such analysis, in which portable infrared micro spectrometers are utilized and boundary calculations are then applied. More specifically, the system uses a portable infrared micro spectrometer with a specially designed application connected to a public communication network, which can process the results of analysis using boundary calculations. Moreover, it provides remote processing and long-term storage of analysis data using artificial neural networks and cloud technologies. Finally, simulation results show that preprocessing (error checking), data buffering, and Edge Computing can significantly reduce network latency and the volume of transferred data.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_7-Prototype_Design_and_Experimental_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constrained Quantum Optimization for Resource Distribution Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120806</link>
        <id>10.14569/IJACSA.2021.0120806</id>
        <doi>10.14569/IJACSA.2021.0120806</doi>
        <lastModDate>2021-08-31T10:26:59.2470000+00:00</lastModDate>
        
        <creator>Sara El Gaily</creator>
        
        <creator>S&#225;ndor Imre</creator>
        
        <subject>Quantum computing; constrained quantum optimization algorithm; quantum extreme values searching algorithm; resource distribution management; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>The cloud computing field suffers from heavy processing loads caused by exponentially increasing data traffic. Therefore, optimizing network performance and achieving a better quality of service (QoS) has become a central goal. In cloud computing, the energy consumption of a resource distribution management system (RDMS) is formulated as an optimization problem. Most existing classical optimization approaches, such as heuristics and metaheuristics, have high computational complexity. In this work, we propose a quantum optimization strategy, named the constrained quantum optimization algorithm (CQOA), that executes the tasks exponentially faster and with high accuracy. We apply the CQOA to an RDMS as a toy example to point out the efficiency of the proposed quantum strategy in reducing energy consumption and computational complexity. Following that, we investigate the CQOA&#39;s implementation, setup, and computational complexity. Finally, we create a simulation environment to evaluate the efficiency of the implemented constrained quantum strategy.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_6-Constrained_Quantum_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DCRL: Approach for Pattern Recognition in Price Time Series using Directional Change and Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120805</link>
        <id>10.14569/IJACSA.2021.0120805</id>
        <doi>10.14569/IJACSA.2021.0120805</doi>
        <lastModDate>2021-08-31T10:26:59.2330000+00:00</lastModDate>
        
        <creator>Nora Alkhamees</creator>
        
        <creator>Monira Aloud</creator>
        
        <subject>Machine learning; reinforcement learning; directional-change event; pattern recognition; stock market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>Developing an intelligent pattern recognition model for electronic markets has been a vital research direction in the field. Research continues into intelligent learning algorithms capable of recognizing and classifying price patterns, thereby providing investors and market analysts with better insights into price time-series. In this paper, an adaptive intelligent Directional Change (DC) pattern recognition model with Reinforcement Learning (RL), called the DCRL model, is proposed. Compared with traditional analytical approaches that use fixed time intervals and specified market features, the DCRL is an alternative intelligent approach that samples price time-series using an event-based time interval and RL. In this model, the environment’s behavior is incorporated into the RL process to automate the identification of directional price changes. The DCRL learns the price time-series representation by adaptively selecting different price features depending on the current state. The DCRL is evaluated using Saudi stock market data with different price trends. A series of analyses demonstrates its effective analytical performance in detecting price changes and the extensive applicability of the DCRL model.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_5-DCRL_Approach_for_Pattern_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Hiding Method with Principal Component Analysis and Image Coordinate Conversion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120804</link>
        <id>10.14569/IJACSA.2021.0120804</id>
        <doi>10.14569/IJACSA.2021.0120804</doi>
        <lastModDate>2021-08-31T10:26:59.2170000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Multi-dimensional wavelet transformation; multi resolution analysis (MRA); image data hiding; secrete image; Daubechies basis function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>A data hiding method with Principal Component Analysis (PCA) and image coordinate conversion as preprocessing for wavelet Multi-Resolution Analysis (MRA) is proposed. Based on the characteristics of the original multispectral image, the method allows the secret data to be recovered. Through experiments, the proposed method is found to be superior to the conventional data hiding method without any preprocessing. The method allows only those who know the characteristics of the original multispectral image to recover the secret data, which is useful when the information of the original image needs to be protected. Moreover, the information of the secret data is protected by the eigenvectors and the oblique coordinate transformation; that is, the secret data cannot be restored unless the true original image is known. The principal component transformation coefficients differ for each original image and are composed of the eigenvectors of that image.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_4-Data_Hiding_Method_with_Principal_Component.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lip Detection and Tracking with Geometric Constraints under Uneven Illumination and Shadows</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120803</link>
        <id>10.14569/IJACSA.2021.0120803</id>
        <doi>10.14569/IJACSA.2021.0120803</doi>
        <lastModDate>2021-08-31T10:26:59.2000000+00:00</lastModDate>
        
        <creator>Waqqas ur Rehman Butt</creator>
        
        <creator>Luca Lombardi</creator>
        
        <subject>Lip detection; lip tracking; illumination equalization; shadow filtering; 16 points lip model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In the modern era, recent advancements in computer vision have led to growing attention to lip reading. Lip reading is used to understand speech without hearing it, and the process is referred to as a lip-reading system. To construct an automatic lip-reading system, locating the lip and defining the lip region is essential, especially under different lighting conditions, which significantly impact the robustness of the system. Unfortunately, in previous studies, lip localization under illumination and shadow has not been well solved. In this paper, we present a local region-based approach to the lip-reading system. It consists of four significant parts: first, detecting/localizing the human face, mouth, and lip region of interest in the first video frame; second, applying preprocessing to overcome the interference caused by illumination effects, shadow, and teeth appearance; third, creating a contour line using sixteen key points with geometric constraints and storing their coordinates; and finally, tracking the coordinates of the sixteen points in the following frames. The proposed method adapts to lip movement and is robust to the appearance of teeth, shadows, and low-contrast environments. Extensive experiments show encouraging results and the proposed method&#39;s effectiveness compared to existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_3-Lip_Detection_and_Tracking_with_Geometric_Constraints.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the Relationship between Exploratory Behavior and Consumer Values using Eye Tracking Gaze Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120802</link>
        <id>10.14569/IJACSA.2021.0120802</id>
        <doi>10.14569/IJACSA.2021.0120802</doi>
        <lastModDate>2021-08-31T10:26:59.1830000+00:00</lastModDate>
        
        <creator>Mei Nonaka</creator>
        
        <creator>Kohei Otake</creator>
        
        <creator>Takashi Namatame</creator>
        
        <subject>Consumer values; eye tracking; factor analysis; cluster analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>In recent years, the popularity of e-commerce has witnessed a significant uptick. Physical apparel stores need to implement measures that focus on the behavioral experience of shopping in physical stores, a trait that e-commerce lacks. The purpose of this paper is to clarify the relationship between customer values and product search behavior, and to propose product placement and customer service methods based on those values. We used questionnaire data on customer purchasing values to perform factor analysis and cluster analysis. Moreover, we extracted product search behavior using eye-tracking gaze data from a physical apparel store. The results showed that product search behavior differed across three cluster types: a trend cluster, a self-esteem cluster, and a conservative cluster. Finally, we proposed product placement in a store considering the features of these clusters.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_2-Research_on_the_Relationship_between_Exploratory_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach of e-Commerce Web Design for Accessibility based on Game Accessibility in Chinese Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120801</link>
        <id>10.14569/IJACSA.2021.0120801</id>
        <doi>10.14569/IJACSA.2021.0120801</doi>
        <lastModDate>2021-08-31T10:26:59.1700000+00:00</lastModDate>
        
        <creator>Hemn Barzan Abdalla</creator>
        
        <creator>Lu Zhen</creator>
        
        <creator>Zhang Yuantu</creator>
        
        <subject>e-Commerce; accessibility design; color-blind; game accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(8), 2021</description>
        <description>China is the largest e-commerce market globally, with a share of more than 40% of the total value of e-commerce transactions in the world, up from just 1% a decade ago. Chinese consumers are among the world&#39;s most frequent users of electronic payment, ordering services, and video watching on smart devices. E-commerce, a branch of business administration conducted electronically over Internet networks, aims to carry out buying and selling operations. With the popularity of e-commerce, people from more and more backgrounds are using e-commerce websites and apps, but some of these users are unable to use them or face barriers in doing so. Accessibility design enables anyone (regardless of ability, for example, color-blind users) to successfully navigate, understand, and use an application. Accessibility design is widely used in video games, which can provide guidance for e-commerce accessibility design. This study analyzes five well-known e-commerce websites worldwide and the consumption habits of people who face barriers to use, from the perspective of accessible design, and suggests two new accessible design methods based on game accessibility and web accessibility to make these e-commerce websites/apps better suited to the usage habits of this particular group.</description>
        <description>http://thesai.org/Downloads/Volume12No8/Paper_1-A_New_Approach_of_e_Commerce_Web_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Intrusion and Attack Detection for 5G Networks using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120795</link>
        <id>10.14569/IJACSA.2021.0120795</id>
        <doi>10.14569/IJACSA.2021.0120795</doi>
        <lastModDate>2021-08-02T08:32:15.1070000+00:00</lastModDate>
        
        <creator>Bayana Alenazi</creator>
        
        <creator>Hala Eldaw Idris</creator>
        
        <subject>Wireless intrusion detection system; 5G; autoencoder; deep learning; attack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>A Wireless Intrusion Detection System (WIDS) is an important part of any system or company that is connected to the internet and has a wireless network, given the increasing number of internal and external attacks. WIDS systems are used to predict and detect wireless network attacks such as flooding, DoS, and evil-twin attacks that badly affect system availability. Artificial intelligence techniques (machine learning, deep learning) are popular and effective solutions for building network intrusion detection, because of the ability of these algorithms to learn complicated behaviors and then use the learned system to discover and detect network attacks. In this work, we combine an autoencoder with a deep neural network (DNN) to protect companies by detecting intrusions and attacks in 5G wireless networks. We used the Aegean Wi-Fi Intrusion Dataset (AWID). Our WIDS achieved very good performance, with an accuracy of 99% for the dataset&#39;s attack types: flooding, impersonation, and injection.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_95-Wireless_Intrusion_and_Attack_Detection_for_5G_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment of Attack in Autonomous Vehicle based on a Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120789</link>
        <id>10.14569/IJACSA.2021.0120789</id>
        <doi>10.14569/IJACSA.2021.0120789</doi>
        <lastModDate>2021-07-31T15:09:07.8070000+00:00</lastModDate>
        
        <creator>Sara FTAIMI</creator>
        
        <creator>Tomader MAZRI</creator>
        
        <subject>Vehicular adhoc network; intelligent transportation system; decision tree; risk assessment; impact; autonomous vehicle; attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Risk management has become increasingly essential in all areas and represents a cornerstone of the Safety Management System. In principle, it brings together all the procedures needed to identify and evaluate risks in order to improve system performance. With the development of the transportation system and the appearance of intelligent transportation systems (ITS) that are changing citizens&#39; mobility nowadays, the associated risks have also increased exponentially. In ITS, vehicles can reach 100% autonomy, since they are equipped with sensors to move safely. The vehicle&#39;s architecture and embedded sensors contain inherent vulnerabilities that attackers may exploit to craft malicious acts. In addition, vehicles communicate with each other and with the road infrastructure via vehicular ad hoc networks (VANETs) and may use Internet connections, raising the risk that an attacker performs malicious actions and takes control of a vehicle to carry out terrorist acts. This paper aims to draw attention to the risks associated with autonomous vehicles (AV) and the importance of evaluating flaws inherent in AVs. For this purpose, the paper extensively details a new approach to assessing the risk of attacks targeting autonomous vehicles. The proposed approach uses a decision tree model to predict risk criticality based on the probability of attack success and its impact on the targeted system.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_89-Risk_Assessment_of_Attack_in_Autonomous_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Strategies for Autonomous Stock Trading Agents using a Random Forest Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120788</link>
        <id>10.14569/IJACSA.2021.0120788</id>
        <doi>10.14569/IJACSA.2021.0120788</doi>
        <lastModDate>2021-07-31T15:09:07.7730000+00:00</lastModDate>
        
        <creator>Monira Aloud</creator>
        
        <subject>Decision trees; financial forecasting; machine learning; random forest; trading agents; trading strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Machine learning-based autonomous agents are valuable for back-testing stock trading strategies, including algorithmic trading. Several studies in the financial literature have proposed artificial intelligence-based algorithms that support decision making for financial investment, but few studies have provided systematic processes for designing intelligent trading agents. This paper overviews the steps involved in designing agents that forecast stock prices in a trading strategy. These steps include data preprocessing, time-series segmentation, dimensionality reduction, clustering, and others. Our main contributions are: (i) a systematic process that guides the design and development of trading agents, and (ii) a random forest forecasting model.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_88-Designing_Strategies_for_Autonomous_Stock_Trading_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Predictors for Sustainable Urban Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120787</link>
        <id>10.14569/IJACSA.2021.0120787</id>
        <doi>10.14569/IJACSA.2021.0120787</doi>
        <lastModDate>2021-07-31T15:09:07.7430000+00:00</lastModDate>
        
        <creator>Sarojini Devi Nagappan</creator>
        
        <creator>Salwani Mohd Daud</creator>
        
        <subject>Urban planning; sustainable development; urban development classification model; machine learning; urban development predictors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>While essential for economic reasons, rapid urbanization has had many negative impacts on the environment and the social wellbeing of humanity. Heavy traffic and unexpected geohazards are some of the effects of uncontrolled development. This situation points to urban planning and design: there are numerous automation tools to help urban planners assess and forecast, yet unplanned development still occurs, impeding sustainability. Automation tools use machine learning classification models to analyze spatial data and various trend views before planning a new urban development. Although there are many sophisticated tools and massive datasets, big cities with colossal migration still witness traffic jams, pollution, and environmental degradation affecting urban dwellers&#39; quality of life. This study analyzes the current predictors in urban planning machine learning models and identifies suitable predictors to support sustainable urban planning. A correct set of predictors could improve the efficiency of urban development classification models and help urban planners enhance the quality of life in big cities.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_87-Machine_Learning_Predictors_for_Sustainable_Urban_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>View-independent Vehicle Category Classification System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120786</link>
        <id>10.14569/IJACSA.2021.0120786</id>
        <doi>10.14569/IJACSA.2021.0120786</doi>
        <lastModDate>2021-07-31T15:09:07.7270000+00:00</lastModDate>
        
        <creator>Sara Baghdadi</creator>
        
        <creator>Noureddine Aboutabit</creator>
        
        <subject>Vehicle category classification; view recognition; machine learning; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Vehicle category classification is important but challenging, especially when vehicles are captured by a surveillance camera at different view angles. This paper aims to develop a view-independent vehicle category classification system. It proposes a two-phase system: one phase recognizes the view angle, helping the second phase recognize the vehicle category, including bus, car, motorcycle, and truck. In each phase, several descriptors and machine learning techniques, including traditional algorithms and deep neural networks, are employed. In particular, we used three descriptors, HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), and Gabor filters, with two classifiers, SVM (Support Vector Machine) and k-NN (k-Nearest Neighbor). We also used a Convolutional Neural Network (CNN, or ConvNet). Three experiments were elaborated based on several datasets. The first experiment is dedicated to choosing the best approach for recognizing the view: rear or front. The second experiment aims to classify the vehicle categories based on each view. In the third experiment, we developed the overall system, in which the categories were classified independently of the view. Experimental results reveal that CNN gives the highest recognition accuracy of 94.29% in the first experiment, and HOG with SVM or k-NN gives the best results (99.58%, 99.17%) in the second experiment. The system can robustly recognize vehicle categories with an accuracy of 95.77%.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_86-View_independent_Vehicle_Category_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drip Irrigation Detection for Power Outage-Prone Areas with Internet-of-Things Smart Fertigation Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120785</link>
        <id>10.14569/IJACSA.2021.0120785</id>
        <doi>10.14569/IJACSA.2021.0120785</doi>
        <lastModDate>2021-07-31T15:09:07.6970000+00:00</lastModDate>
        
        <creator>Dahlila Putri Dahnill</creator>
        
        <creator>Zaihosnita Hood</creator>
        
        <creator>Afzan Adam</creator>
        
        <creator>Mohd Zulhakimi Ab Razak</creator>
        
        <creator>Ahmad Ghadafi Ismail</creator>
        
        <subject>Irrigation technique; water and nutrient; automatic drip irrigation; crop; power-outage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>In drip irrigation agriculture, or the fertigation technique, a sufficient supply of water and nutrients is crucial for a plant&#39;s growth and development. An electronic timer is usually used to control plant watering automatically, and the schedule is set according to the plant&#39;s stage of growth. The timer has to be adjusted frequently, since the required amount of water differs across growth stages. In power outage-prone regions, the problem with timer-based scheduled irrigation worsens, since the watering schedule is disrupted by occasional blackouts, leading to an insufficient supply of water and nutrients and, in turn, poor crop yields. The typical solution to such problems is to hire field workers to monitor the functionality of the automated system and plant health, and to re-adjust the timer once a power outage occurs. However, this solution is ineffective, time-consuming, and incurs high overhead costs. This paper proposes a systematic irrigation method using an Internet-of-Things (IoT) framework to improve the monitoring of plant growth and consequently the efficiency of the workflow. This systematic fertigation monitoring system consists of power outage alerts and online notifications of plant irrigation, pesticide delivery, and the polybag cleaning schedule. By using the proposed system, higher efficiency in farming management is achieved, with a 40% reduction in manpower compared to a typical fertigation-based farming system. The system demonstrates greater control over irrigation scheduling, plant growth, automatic recording of pesticide schedules, and polybag cleaning, all of which significantly improve crop yields.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_85-Drip_Irrigation_Detection_for_Power_Outage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Incentive Pricing Wireless Multi-service Single Link with Bandwidth Attribute</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120784</link>
        <id>10.14569/IJACSA.2021.0120784</id>
        <doi>10.14569/IJACSA.2021.0120784</doi>
        <lastModDate>2021-07-31T15:09:07.6800000+00:00</lastModDate>
        
        <creator>Nael Hussein</creator>
        
        <creator>Kamaruzzaman Seman</creator>
        
        <creator>Fitri Maya Puspita</creator>
        
        <creator>Khairi Abdulrahim</creator>
        
        <creator>Mus’ab Sahrim</creator>
        
        <subject>Optimal solution; multi service network; wireless pricing scheme; bandwidth QoS attribute</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Among the objectives that service providers must achieve are determining the increase or decrease in price resulting from a change in service quality, and the amount of service-quality value. Multi-service wireless Internet pricing schemes that exploit the bandwidth quality attribute are designed to account for the need of ISPs to provide high-quality services to users and increase their revenue, given the limited bandwidth of the resources. The modified model improves on the original model by adding variables and parameters to the multiple-service network model: the base price for QoS (α) and the premium quality (β) are specified as variables or parameters, together with the service-class load factor, base load factor, and differentiation factor. The models are solved with the program Lingo 18.0 to obtain the best solution. The results show that the modified model is the best and yields the best profit for the service provider when the cost of all changes in quality of service is increased and α and β are set as either constants or variables.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_84-Improved_Incentive_Pricing_Wireless_Multi_service_Single_Link.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Approach for Data Analysis and Decision Making in Big Data: A Case Study on E-commerce Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120783</link>
        <id>10.14569/IJACSA.2021.0120783</id>
        <doi>10.14569/IJACSA.2021.0120783</doi>
        <lastModDate>2021-07-31T15:09:07.6630000+00:00</lastModDate>
        
        <creator>EL FALAH Zineb</creator>
        
        <creator>RAFALIA Najat</creator>
        
        <creator>ABOUCHABAKA Jaafar</creator>
        
        <subject>Big data; data analytics; decision making; big data analytics; big data analysis; machine learning; marketing; e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>A recent informational phenomenon has emerged as one of the considerable innovations in information systems, commonly referred to as &quot;Big Data&quot;. The term is currently in vogue in both academia and industry, and is used to describe a wide range of concepts: from data extraction, storage, and management, to data processing and analysis using well-known schemas, to extracting patterns from hidden relationships in order to make better decisions and derive new knowledge using analytical techniques and solutions. The technology that enables the potential of big data to be exploited is called &quot;Big Data Analytics&quot;, a major capability that enables researchers, analysts, and business users to make better decisions faster. Big data has become an important part of marketing research and marketing strategies, and the e-commerce industry is one of the industries that currently benefits most from the potential of big data collection and analysis. This paper therefore aims to demonstrate the use of big data to understand customers and to improve and facilitate the decision-making process. In this research, we apply multiple machine learning (ML) models to large datasets from the e-commerce area by studying several practical cases on online markets.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_83-An_Intelligent_Approach_for_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Approach of Hybrid KSVN Algorithm to Detect DDoS Attack in VANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120782</link>
        <id>10.14569/IJACSA.2021.0120782</id>
        <doi>10.14569/IJACSA.2021.0120782</doi>
        <lastModDate>2021-07-31T15:09:07.6330000+00:00</lastModDate>
        
        <creator>Nivedita Kadam</creator>
        
        <creator>Krovi Raja Sekhar</creator>
        
        <subject>K-Nearest neighbor (KNN); support vector machine (SVM); DDoS (distributed denial of service attack)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Most self-driving vehicles are susceptible to different types of attacks due to their communication patterns and changing network-topology characteristics, since these vehicles depend on external communication through VANET, a vehicular network. VANET has attracted great interest from industry and academia, but it faces a number of issues, such as security, traffic congestion, and road safety, that have not been addressed properly in recent years. Addressing these issues requires building a secure framework for the communication system in VANET, and detecting different types of attacks is among the most important needs of network security, which has been studied extensively by many researchers. To improve performance and adapt to the VANET scenario, in this paper we propose a novel hybrid KSVM scheme, based on the KNN and SVM algorithms, to build a secure framework for detecting Distributed Denial of Service (DDoS) attacks as part of a machine learning approach. The experimental results show that this approach gives better results than other machine-learning-based algorithms for detecting DDoS attacks.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_82-Machine_Learning_Approach_of_Hybrid_KSVN_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Planar 2&#215;2 MIMO Antenna Array for 5G Smartphones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120781</link>
        <id>10.14569/IJACSA.2021.0120781</id>
        <doi>10.14569/IJACSA.2021.0120781</doi>
        <lastModDate>2021-07-31T15:09:07.6030000+00:00</lastModDate>
        
        <creator>A. K. M. Zakir Hossain</creator>
        
        <creator>Nurulhalim Bin Hassim</creator>
        
        <creator>W. H. W. Hassan</creator>
        
        <creator>Win Adiyansyah Indra</creator>
        
        <creator>Safarudin Gazali Herawan</creator>
        
        <creator>Mohamad Zoinol Abidin Bin Abd. Aziz</creator>
        
        <subject>MIMO; smartphone antenna; isolation; envelope correlation coefficient; diversity gain; specific absorption rate (SAR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Here, a planar 2&#215;2 MIMO configuration for 5G smartphones is presented. A single-element modified planar tree profile shape (MPTPS) antenna is implemented to investigate its suitability for the different sub-6 GHz spectrum bands of future 5G communication. The size of the single MPTPS antenna is 40 &#215; 25 mm². The electromagnetic band gap (EBG) and partial ground plane (PGP) techniques have been utilized to tune this antenna. The antenna operates from 2.81 to 7.23 GHz, with a (VSWR &lt; 2) bandwidth of 4.42 GHz that covers all the mid-range sub-6 GHz 5G frequencies. It also has a comparatively good gain of 3.14 dBi, a high efficiency of 96%, and a bi-directional radiation pattern. The antenna has been implemented on a 145 &#215; 75 mm² smartphone mainboard in a MIMO configuration using polarization diversity. Isolation better than 21.1 dB has been found between the different ports. A good gain of as high as 6.59 dBi is observed for the MIMO array in the band. In terms of MIMO performance, an excellent envelope correlation coefficient of less than 0.0029 and a minimum diversity gain of 9.9853 have been observed. The investigation has been further extended by adding a liquid crystal display (LCD) for radiation performance and a hand phantom to assess performance in terms of the specific absorption rate (SAR). The SAR value is observed to be as low as 0.887641 at 3.5 GHz. This design should motivate researchers to develop high-performance MIMO arrays for 5G smartphones.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_81-A_Planar_2_2_MIMO_Antenna_Array.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Secure Algorithm for Upcoming Sensitive Connection between Heterogeneous Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120780</link>
        <id>10.14569/IJACSA.2021.0120780</id>
        <doi>10.14569/IJACSA.2021.0120780</doi>
        <lastModDate>2021-07-31T15:09:07.5870000+00:00</lastModDate>
        
        <creator>Omar Khattab</creator>
        
        <subject>Vertical handover security; mobile networks; wireless networks; heterogeneous wireless</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>One of the most important concepts in heterogeneous mobile networks is the Vertical Handover (VHO). The VHO is a vital process carried out by Mobile Users (MUs) in order to satisfy their preferences for security and cost, in addition to the remaining network and terminal parameters, such as latency and velocity, respectively. However, proactive security for an upcoming sensitive connection while performing a VHO between heterogeneous mobile networks has not been considered. This paper therefore introduces a new secure algorithm to address this issue: Proactive Security for Upcoming Sensitive Connection (PSUSC). Analysis of the PSUSC algorithm shows that it greatly reduces potential attacks compared with previous works, which rely on using a less secure RAT.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_80-A_New_Secure_Algorithm_for_Upcoming_Sensitive_Connection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-tuned Predictive Model for Verifying POI Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120779</link>
        <id>10.14569/IJACSA.2021.0120779</id>
        <doi>10.14569/IJACSA.2021.0120779</doi>
        <lastModDate>2021-07-31T15:09:07.5700000+00:00</lastModDate>
        
        <creator>Monika Sharma</creator>
        
        <creator>Mahesh Bundele</creator>
        
        <creator>Vinod Bothale</creator>
        
        <creator>Meenakshi Nawal</creator>
        
        <subject>Crowdsourced; fine-tuning; geotagged data; hyperparameters; predictive model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Mapping websites and geoportals play a vital role in daily life due to the availability of geo-tagged data. From booking a cab to searching for a place, getting traffic information, reviewing a place, or finding a doctor or the best school in the locality, we depend heavily on the available map services and geoportals for such information. These sources hold voluminous data, and it is growing every moment. These data are mainly collected through crowdsourcing methods to which people contribute. By the basic principle of garbage in, garbage out, the quality of this data affects the quality of the services built on it. Therefore, a model that can predict the quality and accuracy of geotagged point-of-interest (POI) data is highly desirable. We propose a novel Fine-Tuned Predictive Model to check the accuracy of this data using the best suitable supervised machine learning approach. This work covers the complete life cycle of model building, from data collection to fine-tuning of the hyperparameters. We discuss the challenges particular to geotagged POI data and the remedies that make it suitable for predictive modeling to classify the data by accuracy. This is a unique work that considers multiple sources, including ground-truth data, to verify geotagged data using a machine learning approach. After exhaustive experiments, we obtained the best hyperparameter values for the selected predictive model, built on a real dataset prepared specifically for the proposed solution. This work provides a way to develop a robust pipeline for predicting the accuracy of crowdsourced geotagged data.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_79-Fine_tuned_Predictive_Model_for_Verifying_POI_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Framework for Big Data Analytics in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120778</link>
        <id>10.14569/IJACSA.2021.0120778</id>
        <doi>10.14569/IJACSA.2021.0120778</doi>
        <lastModDate>2021-07-31T15:09:07.5570000+00:00</lastModDate>
        
        <creator>Beenu Mago</creator>
        
        <creator>Nasreen Khan</creator>
        
        <subject>Big data analysis; higher education; learning analytics; academic analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Students, faculty, and other members of the higher education (HEd) system are increasingly reliant on various information technologies. Such reliance results in a plethora of data that can be explored to obtain relevant statistics and insights. Another reason to explore the data is to acquire valuable insight into the novel unstructured forms of data that are being discovered, which are often connected with social media elements such as pictures, videos, Web pages, audio files, etc. Moreover, the data can bring additional valuable benefits when processed in the context of HEd. When used strategically, Big Data (BD) gives educational institutions the chance to improve the quality of education from all perspectives and to steer HEd students toward higher completion rates. This will further improve student persistence and results, all of which are facilitated by technology. With this aim, the current research proposes a framework that collects data from heterogeneous sources and analyzes it using BD analytics tools to perform various types of analysis that will benefit learners, faculty, and other members of the HEd system. The current research also addresses the challenges of acquiring BD from various sources.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_78-A_Proposed_Framework_for_Big_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Data Placement Strategy in the HADOOP Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120777</link>
        <id>10.14569/IJACSA.2021.0120777</id>
        <doi>10.14569/IJACSA.2021.0120777</doi>
        <lastModDate>2021-07-31T15:09:07.5230000+00:00</lastModDate>
        
        <creator>Akram Elomari</creator>
        
        <creator>Larbi Hassouni</creator>
        
        <creator>Abderrahim MAIZATE</creator>
        
        <subject>Big data; data storage; Hadoop; DFS; HDFS; data striping; chunks; placement strategy; performance optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Today, the quantities of data generated and exchanged between information systems continue to increase. Storing and exploiting such quantities cannot be done without big data systems equipped with mechanisms capable of meeting the technological challenges commonly grouped under the four Vs (Volume, Velocity, Variety, and Veracity). These technologies mainly include the Distributed File System (DFS). Like Hadoop, which is based on HDFS, the main big data systems use distributed data storage, where a subsystem is responsible for subdividing data (data striping) and replicating it over a network of nodes called a grid. In the typical case of Hadoop, a grid generally consists of many nodes grouped in multiple racks. The logic for distributing the stored data across the grid follows a simple strategy that guarantees the durability of the data and a certain write speed. This strategy takes into consideration neither the technical characteristics of the nodes nor the number of requests on the data, which entails a considerable loss in the processing capacity of the grid. In this work, we propose a new placement strategy based on analysis of new information integrated into the HDFS metadata model. A significant 20% improvement in overall processing time was reached in the simulations we conducted on Hadoop.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_77-New_Data_Placement_Strategy_in_the_HADOOP_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-based Image Retrieval using Tesseract OCR Engine and Levenshtein Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120776</link>
        <id>10.14569/IJACSA.2021.0120776</id>
        <doi>10.14569/IJACSA.2021.0120776</doi>
        <lastModDate>2021-07-31T15:09:07.5100000+00:00</lastModDate>
        
        <creator>Charles Adjetey</creator>
        
        <creator>Kofi Sarpong Adu-Manu</creator>
        
        <subject>Image Retrieval Systems; image processing; Optical Character Recognition (OCR); text matching algorithm; Tesseract OCR engine; Levenshtein Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Image Retrieval Systems (IRSs) are applications that allow one to retrieve images saved at any location on a network. Most IRSs use reverse lookup to find images stored on the network based on image properties such as size, filename, title, color, texture, shape, and description. This paper provides a technique for obtaining the full image document when the user has only some portions of the document under search. To demonstrate the reliability of the proposed technique, we designed a system that implements the algorithm. A combination of an Optical Character Recognition (OCR) engine and an improved text-matching algorithm was used in the system implementation: the Tesseract OCR engine and the Levenshtein algorithm were integrated to perform the image search. The extracted text is compared to the text stored in the database, and a query result is returned when a significant match ratio of 0.15 or above is obtained. The results showed 100% successful retrieval of the appropriate file based on the match, even when partial query images were submitted.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_76-Content_based_Image_Retrieval_using_Tesseract_OCR_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mean Value Estimation of Shape Operator on Triangular Meshes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120775</link>
        <id>10.14569/IJACSA.2021.0120775</id>
        <doi>10.14569/IJACSA.2021.0120775</doi>
        <lastModDate>2021-07-31T15:09:07.4770000+00:00</lastModDate>
        
        <creator>Ahmed Fouad El Ouafdi</creator>
        
        <creator>Hassan El Houari</creator>
        
        <subject>Curvature estimation; shape operator; triangular meshes; discrete differential operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The principal curvatures, the eigenvalues of the shape operator, are important differential geometric features that characterize an object&#39;s shape; indeed, they play a central role in geometry processing and physical simulation. The shape operator is a local operator resulting from the matrix quotient of the normal derivative and the metric tensor, and hence its matrix representation is not symmetric in general. In this paper, the local differential property of the shape operator is exploited to propose a local mean-value estimation of the shape operator on triangular meshes. In contrast to state-of-the-art approximation methods that produce a symmetric operator, the resulting estimation matrix is accurate and generally not symmetric. Various comparative examples are presented to demonstrate the accuracy of the proposed estimation. The results show that the principal curvatures arising from the estimated shape operator are accurate in comparison with the standard estimations in the literature.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_75-Mean_Value_Estimation_of_Shape_Operator_on_Triangular_Meshes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining Word Embeddings and Deep Neural Networks for Job Offers and Resumes Classification in IT Recruitment Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120774</link>
        <id>10.14569/IJACSA.2021.0120774</id>
        <doi>10.14569/IJACSA.2021.0120774</doi>
        <lastModDate>2021-07-31T15:09:07.4470000+00:00</lastModDate>
        
        <creator>Amine Habous</creator>
        
        <creator>El Habib Nfaoui</creator>
        
        <subject>IT recruitment; word embeddings; deep neural networks; text classification; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Nowadays, the use of web portals known as job boards for publishing job offers by recruiters has grown considerably. The candidates, in turn, apply to job positions via the job boards. Since opportunities are available on a wide scale and the job application process is fast and straightforward, the data flow turns into large-volume datasets that are hard to handle. Most companies therefore tend to automate the candidate selection process, which aims to match job offers with suitable resumes. In this paper, we propose a supervised learning approach to classify the job offers and CVs shared on recruitment sites in order to enhance the automatic recruitment process. We used natural language processing techniques for job offer and CV preprocessing. Next, we used word embeddings and deep neural networks to train two models: the first categorizes recruitment documents based on job skills, and the second predicts the expertise-degree class. The experimental results show that our proposal is very efficient.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_74-Combining_Word_Embeddings_and_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Routing Protocols and Mobility in Flying Ad-hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120773</link>
        <id>10.14569/IJACSA.2021.0120773</id>
        <doi>10.14569/IJACSA.2021.0120773</doi>
        <lastModDate>2021-07-31T15:09:07.4130000+00:00</lastModDate>
        
        <creator>Emad Felemban</creator>
        
        <subject>Flying ad-hoc network (FANET); mobility models; ad-hoc routing protocols; OPNET; Unmanned Aerial Vehicles (UAVs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The ability of dynamic reconfigurability, quick response, and ease of deployment has made Unmanned Aerial Vehicles (UAVs) a paramount solution in several areas, such as military applications. A flying ad-hoc network (FANET) is a network of UAVs connected wirelessly and configured continuously without infrastructure. Routing on its own is not the whole story: the mobility pattern of a UAV in FANETs is an even more significant factor and an interesting research topic. Routing protocols give us a clearer and better perception of the routing structure for FANETs. In this paper, routing protocols such as Ad-hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), Temporally Ordered Routing Algorithm (TORA), Geographic Routing Protocol (GRP), and Optimized Link State Routing (OLSR) are compared using performance parameters such as number of hops, packet loss ratio, end-to-end delay, and throughput, under mobility models including the Pursue Mobility Model (PRS), Semi-Circular Random Movement (SCRM), Manhattan Grid Mobility Model (MGM), and Random Waypoint Mobility (RWPM). The evaluation is carried out with three scenarios, one sender node and one receiver node, all senders and one receiver, and all senders and all receivers, for the above protocols and mobility models. For all evaluation scenarios, the performance of OLSR is the most efficient among the five routing protocols under the four performance parameters, due to its proactive nature, which keeps the routing information up to date with the help of MPRs (Multi-Point Relays) in the network, resulting in a reduction of routing overhead.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_73-Evaluation_of_Routing_Protocols_and_Mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encryption on Multimodal Biometric using Hyper Chaotic Method and Inherent Binding Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120772</link>
        <id>10.14569/IJACSA.2021.0120772</id>
        <doi>10.14569/IJACSA.2021.0120772</doi>
        <lastModDate>2021-07-31T15:09:07.4000000+00:00</lastModDate>
        
        <creator>Nalini M K</creator>
        
        <creator>Radhika K R</creator>
        
        <subject>Chaotic systems; DNA sequences; cryptographic techniques; Convolutional Neural Networks (CNN); key binding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Chaotic maps are non-convergent and highly sensitive to initial values; applications include secure digital identity in distributed systems. Face and fingerprint biometric templates are subjected to a hyper-chaotic map, producing an encrypted image. The encrypted image is fed as input to deoxyribonucleic acid (DNA) sequencing, and the dimensionality of the generated DNA sequence is reduced by hashing. The intra-variation for a subject is measured with the inter-quartile range, and the image set with the minimal variation value is identified to select the consistent image of a subject. A 256-bit key is generated from the consistent image and reduced to 128 bits by eliminating subject-specific outliers and redundant values. User-specific features are extracted for both traits using a ResNet-50 convolutional neural network and fused by addition. The final key is bound to the feature vector by a permutation function, and the time taken for key binding is estimated with the benchmark database SDUMLA-HMT. The outcome reveals that the time taken for key binding varies between 45 ms and 58 ms for an image of size 80 MB.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_72-Encryption_on_Multimodal_Biometric_using_Hyper_Chaotic_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Randomized Hyperparameter Tuning of Adaptive Moment Estimation Optimizer of Binary Tree-Structured LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120771</link>
        <id>10.14569/IJACSA.2021.0120771</id>
        <doi>10.14569/IJACSA.2021.0120771</doi>
        <lastModDate>2021-07-31T15:09:07.3830000+00:00</lastModDate>
        
        <creator>Ruo Ando</creator>
        
        <creator>Yoshiyasu Takefuji</creator>
        
        <subject>Adaptive moment estimation; gradient descent; tree-structured LSTM; hyperparameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Adam (Adaptive Moment Estimation) is one of the most promising techniques for parameter optimization in deep learning, as it is an adaptive learning-rate method and easier to use than gradient descent. In this paper, we propose a novel randomized search method for Adam that randomizes the beta1 and beta2 parameters. Random noise drawn from a normal distribution is added to beta1 and beta2 each time the update function is called. In the experiment, we implemented a binary tree-structured LSTM and an Adam optimizer function. It turned out that, in the best case, randomized hyperparameter tuning with beta1 ranging from 0.88 to 0.92 and beta2 ranging from 0.9980 to 0.9999 is 3.81 times faster than the fixed parameters beta1 = 0.999 and beta2 = 0.9. Our method is independent of the optimization algorithm and therefore also performs well with other algorithms such as NAG, AdaGrad, and RMSProp.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_71-A_Randomized_Hyperparameter_Tuning_of_Adaptive_Moment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Medical Image Classification Accuracy on Heterogeneous and Imbalanced Data using Multiple Streams Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120770</link>
        <id>10.14569/IJACSA.2021.0120770</id>
        <doi>10.14569/IJACSA.2021.0120770</doi>
        <lastModDate>2021-07-31T15:09:07.3530000+00:00</lastModDate>
        
        <creator>Mumtaz Ali</creator>
        
        <creator>Riaz Ali</creator>
        
        <creator>Nazim Hussain</creator>
        
        <subject>Medical image classification; convolutional neural networks; class imbalance; small dataset; margin loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Small and massively imbalanced datasets are long-standing problems in medical image classification. Traditionally, researchers use pre-trained models to address these problems; however, pre-trained models typically have a huge number of trainable parameters, small datasets are insufficient to train them adequately, and imbalanced datasets easily lead to overfitting on the classes with more samples. Multiple-stream networks that learn a variety of features have recently gained popularity. Therefore, in this work, a quad-stream hybrid model called QuadSNet, using conventional as well as separable convolutional neural networks, is proposed to achieve better performance on small and imbalanced datasets without using any pre-trained model. The designed model extracts hybrid features, and the fusion of such features makes the model more robust on heterogeneous data. Besides, a weighted margin loss is used to handle the problem of class imbalance. QuadSNet is trained and tested on seven different classification datasets. To evaluate its advantages on small and massively imbalanced data, it is compared with six state-of-the-art pre-trained models on three benchmark datasets based on Pneumonia, COVID-19, and Cancer classification. To assess its performance on general classification datasets, it is compared with the best model on each of the remaining four datasets, which contain larger, balanced, grayscale, color, or non-medical image data. The results show that QuadSNet handles class imbalance and overfitting better than existing pre-trained models, with far fewer parameters, on small datasets. Meanwhile, QuadSNet achieves competitive performance on general datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_70-Improved_Medical_Image_Classification_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on the Effectiveness of Virtual Reality-based Therapy and Pain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120769</link>
        <id>10.14569/IJACSA.2021.0120769</id>
        <doi>10.14569/IJACSA.2021.0120769</doi>
        <lastModDate>2021-07-31T15:09:07.3200000+00:00</lastModDate>
        
        <creator>Fatma E. Ibrahim</creator>
        
        <creator>Neven A. M. Elsayed</creator>
        
        <creator>Hala H. Zayed</creator>
        
        <subject>Virtual reality; mental health; cancer pain; distraction; pain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Virtual reality refers to the technology used to create multi-sensory three-dimensional environments that can be navigated, manipulated, and interacted with by a user. This paper’s objective is to categorize the most common areas that use virtual reality (VR) for managing pain, both psychological and physical. To our knowledge, this is the first survey that summarizes all of these areas in one place. This paper reviews studies that used VR for psychological treatment, especially for phobias. It also summarizes the current literature on virtual reality interventions for managing acute, chronic, and cancer pain. Based on the review, virtual reality shows great potential for controlling acute pain, such as pain associated with burn wound care. However, only limited studies have investigated the impact of virtual reality on patients with chronic pain. The findings indicate that VR distraction has a great impact on pain and distress related to cancer and its treatments. This paper also discusses the challenges and limitations of the current research. Notably, the identified studies recommend VR distraction as a promising adjunct for pain reduction and psychological treatment. However, further research needs to be conducted to determine under what conditions VR distraction provides stronger analgesic effects.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_69-A_Survey_on_the_Effectiveness_of_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoHT-MBA: An Internet of Healthcare Things (IoHT) Platform based on Microservice and Brokerless Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120768</link>
        <id>10.14569/IJACSA.2021.0120768</id>
        <doi>10.14569/IJACSA.2021.0120768</doi>
        <lastModDate>2021-07-31T15:09:07.3070000+00:00</lastModDate>
        
        <creator>Lam Nguyen Tran Thanh</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Hoang Huong Luong</creator>
        
        <creator>Tuan Dao Anh</creator>
        
        <creator>Khoi Nguyen Huynh Tuan</creator>
        
        <creator>Ha Xuan Son</creator>
        
        <subject>Internet of Health Things (IoHT); microservice; brokerless; gRPC; kafka; single sign-on; RBAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The Internet of Things (IoT) is currently one of the most closely followed technology trends. IoT can be divided into five main areas: healthcare, environmental, smart city, commercial, and industrial. The IoT platform is considered the backbone of every IoT architecture, so optimal platform design is an essential issue that should be carefully considered from different aspects. Although IoT is applied in multiple domains, three main features remain challenging to improve: i) data collection, ii) user and device management, and iii) remote device control. Today’s medical IoT systems are often too focused on the big-data or access-control aspects of participants, and not on collecting data accurately, quickly, and efficiently, or on power redundancy and system expansion. This is very important for the medical sector, which always prioritizes the availability of data for therapeutic purposes over other aspects. In this paper, we introduce the IoHT Platform for healthcare environments, designed with a microservice and brokerless architecture and focusing strongly on the three aforementioned characteristics. In addition, our IoHT Platform considers five further issues: (1) the limited processing capacity of the devices, (2) energy saving for the devices, (3) speed and accuracy of data collection, (4) security mechanisms, and (5) scalability of the system. Also, to make the IoHT Platform suitable for health monitoring, we add real-time alerts for the medical team. In the evaluation section, moreover, we describe experiments (i.e., a proof-of-concept) demonstrating the effectiveness of the proposed IoHT Platform in terms of performance, absence of errors, and insensitivity to geographical distance. Finally, a complete code solution is published in the authors’ GitHub repository to encourage further reproducibility and improvement.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_68-IoHT_MBA_An_Internet_of_Healthcare_Things_IoHT_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIP-MBA: A Secure IoT Platform with Brokerless and Micro-service Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120767</link>
        <id>10.14569/IJACSA.2021.0120767</id>
        <doi>10.14569/IJACSA.2021.0120767</doi>
        <lastModDate>2021-07-31T15:09:07.2900000+00:00</lastModDate>
        
        <creator>Lam Nguyen Tran Thanh</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Hoang Huong Luong</creator>
        
        <creator>Tuan Dao Anh</creator>
        
        <creator>Khoi Nguyen Huynh Tuan</creator>
        
        <creator>Ha Xuan Son</creator>
        
        <subject>Internet of Things (IoT); gRPC; Single Sign-On; brokerless; micro-service; MQTT; message queue; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The Internet of Things is one of the most interesting technology trends today. Devices in an IoT network are often geared towards mobility and compact size, and thus have rather weak hardware configurations. There are many lightweight protocols tailor-made for limited processing power and low energy consumption, of which MQTT is a typical one. The current MQTT protocol supports three quality-of-service (QoS) levels, and the user has to trade off the security of packet transmission against transmission rate, bandwidth, and energy consumption. The MQTT protocol, however, does not support packet storage mechanisms, which means that when the receiver is interrupted, packets cannot be retrieved. In this paper, we present the brokerless SIP-MBA Platform, designed around micro-services and using the gRPC protocol to transmit and receive messages. This design optimizes transmission rate, power consumption, and transmission bandwidth while still providing reliable communication. Besides, we implement user and thing management mechanisms with the aim of improving security. Finally, we present test results obtained by implementing a data collection service over the gRPC protocol and comparing it with streaming data using the MQTT protocol.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_67-SIP_MBA_A_Secure_IoT_Platform_with_Brokerless_and_Micro_service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Real-time Head Pose Estimation for 10 Watt SBC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120766</link>
        <id>10.14569/IJACSA.2021.0120766</id>
        <doi>10.14569/IJACSA.2021.0120766</doi>
        <lastModDate>2021-07-31T15:09:07.2730000+00:00</lastModDate>
        
        <creator>Emad Wassef</creator>
        
        <creator>Hossam E. Abd El Munim</creator>
        
        <creator>Sherif Hammad</creator>
        
        <creator>Maged Ghoneima</creator>
        
        <subject>Head Pose Estimation; real-time; face detection; face landmarks localization; single board computing; SBC; GPU optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Head pose estimation has always been an essential part of many applications, such as autonomous driving and driving-assist systems; performance optimization therefore yields better performance as well as lower computing and power needs, allowing such applications to run on the embedded devices inside these systems. In this article, we present an implementation, on a single board computer, of a new 3D head pose estimation system that estimates a person’s head pose in real time for applications such as driver monitoring systems, drones, gesture recognition, and tracking devices. The system is developed on a single board computer (SBC) suitable for very low-powered applications; it utilizes only the data provided by an IR camera sensor to estimate both the head and camera pose, without any need for external sensors. The system combines traditional image processing techniques for image projection, feature detection, key-point description, and 3D pose estimation with machine learning techniques for face detection and facial landmark detection.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_66-Robust_Real_time_Head_Pose_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Custom Algorithms in Windows Active Directory Certificate Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120765</link>
        <id>10.14569/IJACSA.2021.0120765</id>
        <doi>10.14569/IJACSA.2021.0120765</doi>
        <lastModDate>2021-07-31T15:09:07.2430000+00:00</lastModDate>
        
        <creator>Alaev Ruhillo</creator>
        
        <subject>O’zDSt 1106:2009 hashing algorithm; the O’zDSt 1092:2009 signature algorithm; active directory certificate services; digital certificate; key access control; key storage provider</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The article presents a solution to the problem of the operating system not recognizing the O’zDSt 1092:2009 algorithm, and to the problem of using digital certificates generated with the O’zDSt 1092:2009 and O’zDSt 1106:2009 algorithms. These algorithms were adopted in 2009, yet they are still not recognized by the operating system. For other cryptographic algorithms used in Windows, cryptographic service providers have been developed that expose cryptographic operations to other software; these providers do not support the above algorithms. Hence it becomes necessary to develop a cryptographic provider supporting the O’zDSt 1106:2009 hashing algorithm and the O’zDSt 1092:2009 signature algorithm. But to work with digital certificates, one cryptographic provider is not enough: special extensions are also required to encode and decode digital certificate data. Therefore, the development of an extension for cryptographic providers is described. Also, for managing digital certificates and the key lifecycle, a method of integrating cryptographic providers with Windows Active Directory Certificate Services is presented. The developed cryptographic providers comprise three types of provider: a hash provider, a signature provider, and a key storage provider. The architecture of the key storage provider, a method for secure storage of cryptographic keys, and key access control are proposed. The description of the O’zDSt 1092:2009 algorithm and the implementation of the functions of the key storage provider interface are shown.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_65-Applying_Custom_Algorithms_in_Windows_Active_Directory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An ICU Admission Predictive Model for COVID-19 Patients in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120764</link>
        <id>10.14569/IJACSA.2021.0120764</id>
        <doi>10.14569/IJACSA.2021.0120764</doi>
        <lastModDate>2021-07-31T15:09:07.2270000+00:00</lastModDate>
        
        <creator>Hamza Ghandorh</creator>
        
        <creator>Muhammad Zubair Khan</creator>
        
        <creator>Raed Alsufyani</creator>
        
        <creator>Mehshan Khan</creator>
        
        <creator>Yousef M. Alsofayan</creator>
        
        <creator>Anas A. Khan</creator>
        
        <creator>Ahmed A. Alahmari</creator>
        
        <subject>COVID-19; ANN; ensemble learning method; prediction; ICU admission; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Globally, COVID-19 has already resulted in around 170 million confirmed cases of infection and, as of May 31, 2021, more than 3.54 million deaths. This pandemic has given rise to numerous public health and socioeconomic issues, emphasizing the significance of unraveling the epidemic’s history and forecasting the disease’s potential dynamics. A variety of mathematical models have been proposed to obtain a deeper understanding of disease transmission mechanisms. Machine Learning (ML) models have been used in the last decade to identify patterns and enhance prediction efficiency in healthcare applications. This paper proposes a model to predict COVID-19 patients’ admission to the intensive care unit (ICU). The model is built upon robust, well-known classification algorithms, including classic Machine Learning Classifiers (MLCs), an Artificial Neural Network (ANN), and ensemble learning. The model’s strength in predicting COVID-19 infected patients is shown by performance analysis of the various MLCs and error metrics. Among the ML models used, the ANN achieved the highest accuracy, 97.9%, and the Mean Squared Error showed that the ANN also had the lowest error (0.0809). In conclusion, this model could help ICU staff predict ICU admission based on COVID-19 patients’ clinical characteristics.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_64-An_ICU_Admission_Predictive_Model_for_COVID_19_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Distance Learning in the Professional School of Systems Engineering and Informatics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120763</link>
        <id>10.14569/IJACSA.2021.0120763</id>
        <doi>10.14569/IJACSA.2021.0120763</doi>
        <lastModDate>2021-07-31T15:09:07.1970000+00:00</lastModDate>
        
        <creator>Eleazar Flores Medina</creator>
        
        <creator>Yrma Principe Somoza</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <creator>Janet Corzo Zavaleta</creator>
        
        <creator>Roberto Yon Alva</creator>
        
        <creator>Samuel Vargas Vargas</creator>
        
        <subject>Distance modality; evaluation; focus group; resources and pedagogical materials; teaching strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The distance modality exponentially accelerated the use of technological tools in times of pandemic. In this context, educational institutions at all levels implemented actions to strengthen teaching through training. The present study was carried out at the University of Sciences and Humanities, considering a distance teaching process based on three dimensions: teaching strategy, resources and pedagogical materials, and evaluation. The study’s objective is to analyze the distance learning process along these three dimensions in order to propose solutions for virtual teaching. The methodology applied was a mixed approach: qualitative, through a focus group, and quantitative, through a student survey. The student population was 159, with a sample of 113, a confidence level of 95%, and a margin of error of 5%. The focus group results show that teachers have difficulties applying teaching strategies in the virtual modality, evidenced in the management of digital tools, the elaboration of rubrics to evaluate learning, and the use of resources and pedagogical materials. This is complemented by surveys showing partial acceptance of teaching in the distance modality: the teaching strategy has an average of 3.76 with a standard deviation (S.D.) of 0.63, and 58.41% agree with the teacher’s teaching strategy; likewise, the resources and pedagogical materials dimension obtained an average of 3.72 with an S.D. of 0.74 and 51.33% agreement. In the evaluation dimension, an average of 3.76 and an S.D. of 0.72 were obtained, with 55.75% agreeing with the way the teacher evaluates. This research serves as input for future curricular designs in the distance modality.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_63-Analysis_of_Distance_Learning_in_the_Professional_School.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System Dynamics Modeling for Solid Waste Management in Lima Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120762</link>
        <id>10.14569/IJACSA.2021.0120762</id>
        <doi>10.14569/IJACSA.2021.0120762</doi>
        <lastModDate>2021-07-31T15:09:07.1800000+00:00</lastModDate>
        
        <creator>Margarita Giraldo Retuerto</creator>
        
        <creator>Dayana Ysla Espinoza</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Causal diagram; environmental pollution; Forrester diagram; systems dynamics; Vensim</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>This research work focuses on environmental care based on the treatment of solid, organic, and inorganic waste. Improperly handled waste causes deterioration of the environment and of the ozone layer, which is why we are currently seeing abrupt climate change and diseases caused by environmental pollution. The objective of this work is to build a system dynamics model for effective and efficient solid waste management in Lima, Peru, and thus contribute to the scientific community’s vision for the future of solid waste management. The methodology used was system dynamics, which made it possible to analyze and understand the behavior of a complex solid waste system over a given time. In addition, the Vensim software was used for the system dynamics modeling, creating the causal diagram and the Forrester diagram for solid waste management. The result is the proposed system dynamics model for solid waste management, covering 2020 to 2030, in which, by 2030, waste is reduced to a favorable equilibrium of 23,066 tons. Through this model, society can be made aware of the need to sort and reuse solid waste in order to reduce environmental pollution. Likewise, a healthy environment that benefits health, agriculture, and education will benefit society as a whole.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_62-System_Dynamics_Modeling_for_Solid_Waste_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model to Profile and Evaluate Soft Skills of Computing Graduates for Employment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120761</link>
        <id>10.14569/IJACSA.2021.0120761</id>
        <doi>10.14569/IJACSA.2021.0120761</doi>
        <lastModDate>2021-07-31T15:09:07.1500000+00:00</lastModDate>
        
        <creator>Hemalatha Ramalingam</creator>
        
        <creator>Raja Sher Afgun Usmani</creator>
        
        <creator>Ibrahim Abakar Targio Hashem</creator>
        
        <creator>Thulasyammal Ramiah Pillai</creator>
        
        <subject>Soft skills; artificial intelligence; intelligent controller; game-based assessment; graduate profile; hybrid model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Emerging tools such as Game-Based Assessments have been valuable in talent screening and in matching soft skills for job selection. However, these techniques and models are rather standalone and are unable to provide an objective measure of the effectiveness of their approach, leading to a mismatch of skills. In this research study, we propose a Theoretical Hybrid Model combining aspects of Artificial Intelligence and Game-Based Assessment for profiling, assessing, and ranking graduates based on their soft skills. Firstly, an Intelligent Controller is used to extract and classify the graduate skill profile from data gathered using the traditional assessment methods of self-evaluation and interviews. With motivation and engagement as a competitive advantage, an existing Game-Based Assessment (OWIWI) is then used to assess the graduates’ soft skills, generating a Graduate Profile based on the results of the game. A ranking technique is then applied to match the profile to selected job requirements based on the soft skills required for the job and the graduate’s strengths. Finally, a comparative analysis is performed on the soft-skills profile obtained before employment (pre-employment) and objective feedback on soft skills obtained after employment (post-employment), providing a validity check on the effectiveness of the overall Hybrid Model. Data obtained from this study can be useful in solving issues of unemployment due to soft-skills mismatch at the Higher Learning Institution level.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_61-A_Hybrid_Model_to_Profile_and_Evaluate_Soft_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Rainfall Prediction and Classification using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120760</link>
        <id>10.14569/IJACSA.2021.0120760</id>
        <doi>10.14569/IJACSA.2021.0120760</doi>
        <lastModDate>2021-07-31T15:09:07.1170000+00:00</lastModDate>
        
        <creator>K. Varada Rajkumar</creator>
        
        <creator>K. Subrahmanyam</creator>
        
        <subject>Pattern recognition; ant colony optimization; artificial neural network; rainfall prediction; feed-forward; cascade-forward; data processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>In the field of food production, maintaining water sources for major population centres, reducing the risk of flooding, and forecasting rainfall reliably and accurately are important and difficult tasks. Accurate and genuine forecasts of rainfall on monthly and seasonal time scales help provide beneficiaries with knowledge for the control of water supplies, farm forecasting, and integrated crop insurance applications. Rainfall prediction remains a challenging task for researchers, and most rainfall prediction techniques fail in accuracy. For this reason, we propose a new, effective hybrid approach for forecasting and classifying rainfall using a neural network and the ant colony optimization (ACO) method. The collected rainfall data were preprocessed by filling in missing data and normalized by min-max normalization, and the processed data were given to various classifiers to evaluate their performance. The performance of the existing and proposed models is compared: the existing feed-forward, cascade-forward, and pattern recognition neural network (NN) classifiers against the proposed ACO + feed-forward backpropagation, ACO + cascade-forward backpropagation, and ACO + pattern recognition NN classifiers. The entire hybrid neural network (HNN) forecasting protocol consists of pre-processing, choosing the input vector, and maximising the number of hidden nodes using ACO and ANN modelling.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_60-A_Novel_Method_for_Rainfall_Prediction_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Control Technique Effects on Single Link Bilateral Articulated Robot Arm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120759</link>
        <id>10.14569/IJACSA.2021.0120759</id>
        <doi>10.14569/IJACSA.2021.0120759</doi>
        <lastModDate>2021-07-31T15:09:07.0870000+00:00</lastModDate>
        
        <creator>Nuratiqa Natrah Mansor</creator>
        
        <creator>Muhammad Herman Jamaluddin</creator>
        
        <creator>Ahmad Zaki Shukor</creator>
        
        <subject>Force and position controller; disturbance observer; simulated bilateral system; adaptive control; manipulator arm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>This paper describes a technique for addressing the issue of instability within a force controller by developing a model of a bilateral master-slave haptic system that incorporates a Disturbance Observer (DOB) in a robotic simulation. The suggested model is used in conjunction with conventional controllers to correct undesired noise that occurs inside the working system of a particular joint of the youBot arm. To acquire the target position, the controller additionally compensates for interference by changing its position response. Two tests were carried out to examine and compare the feedback of a system that employed the proposed approach against another system with the conventional, standard setting. The experimental findings demonstrate the resilience of the suggested system, as the system integrated with observers is more precise and faster. All of the system feedback from the conducted experiments was measured in the simulation platform.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_59-Adaptive_Control_Technique_Effects_on_Single_Link.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Open Text Ontology Mining to Improve Retrievals of Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120758</link>
        <id>10.14569/IJACSA.2021.0120758</id>
        <doi>10.14569/IJACSA.2021.0120758</doi>
        <lastModDate>2021-07-31T15:09:07.0570000+00:00</lastModDate>
        
        <creator>Mohd Pouzi Hamzah</creator>
        
        <creator>Syarifah Fatem Na’imah Syed Kamaruddin</creator>
        
        <subject>Information retrieval; ontology; Malay text; taxonomy relationship; non-taxonomy relationship</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Information retrieval is the main task of extracting relevant information from documents. Mostly, information retrieval systems are based on the keyword approach to extract knowledge from relevant documents. The experiment shows that ontology can improve the results and overcome the weaknesses of the keyword approach. The ontology implementation method is based on phrase formation and semantic relationships between words. This study tested 10 Malay documents using ontology to retrieve information. The results obtained were compared with the results of manual information retrieval done by experts, using precision and recall measures. In this study, there are three semantic relationships between words that are capable of expressing knowledge in documents: the taxonomy relationship, the attribute relationship and the non-taxonomy relationship. The ontology relationships can be formed by using the taxonomy relationships algorithm, the attribute relationships algorithm and the non-taxonomy relationships algorithm, based on the linguistic rules of the Malay language. The precision and recall results of this experiment show that the ontology approach can enhance the performance of information retrieval from relevant documents.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_58-Open_Text_Ontology_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Technology to Support Large Information Storage and Organization of Reduced User Access to this Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120757</link>
        <id>10.14569/IJACSA.2021.0120757</id>
        <doi>10.14569/IJACSA.2021.0120757</doi>
        <lastModDate>2021-07-31T15:09:07.0230000+00:00</lastModDate>
        
        <creator>Serikbayeva Sandugash Kurmanbekovna</creator>
        
        <creator>Batyrkhanov Ardak Gabitovich</creator>
        
        <creator>Sambetbayeva Madina Aralbaevna</creator>
        
        <creator>Sadirmekova Zhana Bakirbaevna</creator>
        
        <creator>Yerimbetova Aigerim Sembekovna</creator>
        
        <subject>Information systems; digital library; metadata; collection; privilege; rights; administrator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>This article addresses the problem of developing a technology for supporting large information storages and organizing delimited user access to this information, providing a service both for managing these objects and for organizing access to them. Solving this problem makes it possible to create a conceptual model that identifies the basic entities among information objects and establishes relationships between them. It also allows the development of technical documentation reflecting the results of the first stage of creating an information system: solving problems of syntactic and technical interoperability, developing a single interface, interacting with users, etc. In existing digital library (DL) developments, as a rule, search and access to information are provided only through visual graphical interfaces. The task of the subsystem for integrating various digital resources is to provide other subsystems with a single interface for access to information stored in the data sources of the system. That is, any resource must be cataloged in a standard way and provided with metadata, access rules, and a unique identifier. To implement search functions outside of graphical interfaces, support for special network services and query languages is required. Ideally, all information systems should support a single search profile and a single query language.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_57-Development_of_Technology_to_Support_Large_Information_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power System Controlled Islanding using Modified Discrete Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120756</link>
        <id>10.14569/IJACSA.2021.0120756</id>
        <doi>10.14569/IJACSA.2021.0120756</doi>
        <lastModDate>2021-07-31T15:09:06.9470000+00:00</lastModDate>
        
        <creator>N. Z. Saharuddin</creator>
        
        <creator>I. Z. Abidin</creator>
        
        <creator>H. Mokhlis</creator>
        
        <creator>M.Y. Hassan</creator>
        
        <subject>Controlled islanding; modified discrete evolutionary programming (MDEP) technique; modified discrete particle swarm optimization (MDPSO) technique; minimal power flow disruption; power imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Controlled islanding is implemented to save the power system from experiencing blackouts during a severe sequence of line trippings. The power system is partitioned into several stand-alone islands by removing optimal transmission lines during controlled islanding execution. Since selecting the optimal transmission lines to be removed (cutsets) is critical, a good technique is required to determine the optimal islanding solution (lines to be removed). Thus, this paper develops two techniques, namely Modified Discrete Evolutionary Programming (MDEP) and Modified Discrete Particle Swarm Optimization (MDPSO), to determine the optimal islanding solution for controlled islanding implementation. The better of the two techniques, judged by its capability to produce the optimal islanding solution with the minimal objective function (minimal power flow disruption), is selected to implement the controlled islanding. The performance of these techniques is evaluated through case studies using the IEEE 118-bus test system. The results show that the MDEP technique produces a better optimal islanding solution than the MDPSO and other previously published techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_56-Power_System_Controlled_Islanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-point Fundraising and Distribution via Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120755</link>
        <id>10.14569/IJACSA.2021.0120755</id>
        <doi>10.14569/IJACSA.2021.0120755</doi>
        <lastModDate>2021-07-31T15:09:06.9300000+00:00</lastModDate>
        
        <creator>Abdullah Omar Abdul Kareem Alassaf</creator>
        
        <creator>Fakhrul Hazman Yusoff</creator>
        
        <subject>Blockchain; smart contract; transparency; charity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Trust and transparency are significant facets that are much esteemed by charitable organizations in achieving their mission and encouraging donations from the public. However, after many high-profile scandals, faith in charities is questionable, heralding the need for an increased level of transparency among such organizations. Fortunately, leveraging Blockchain technology in charities’ systems could help to rebuild the integrity of these organizations. This study aims to raise the level of integrity showcased by charities by creating a multi-point fundraising approach using smart contracts. The proposed system offers a transparent fundraising platform through its integration of charity organization evaluators. Various steps were taken to reach the intended target. Firstly, the study investigated the potential of Blockchain in improving the level of transparency. Secondly, a probing process was undertaken to choose a suitable platform as the server side of the system. This process involved gathering salient features of Blockchain platforms based on the proposed system requirements. After the probing process, a Decision Support System (DSS) was utilized to identify the most suitable Blockchain platform. The results proved that the Ethereum platform is the best fit for the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_55-Multi_Point_Fundraising_and_Distribution_via_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ROI Image Encryption using YOLO and Chaotic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120754</link>
        <id>10.14569/IJACSA.2021.0120754</id>
        <doi>10.14569/IJACSA.2021.0120754</doi>
        <lastModDate>2021-07-31T15:09:06.9000000+00:00</lastModDate>
        
        <creator>Sung Won Kang</creator>
        
        <creator>Un Sook Choi</creator>
        
        <subject>Image encryption; cellular automata; YOLO algorithm; deep learning; Chen system; region of interest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>In this paper, we design a cellular automata (CA)-based ROI (region of interest) image encryption system that can effectively reduce computational cost while maintaining an appropriate level of security. The proposed image encryption system obtains a cryptographic image through three steps. First, a region of interest with high importance is extracted from the entire image using deep learning. We use the YOLO (You Only Look Once) algorithm to extract the ROI from a given original image. Next, the detected ROI is encrypted using the Chen system, a chaos-based function with high security. Finally, the execution time is effectively reduced by encrypting the entire image using a hardware-friendly CA. The security of the proposed encryption system is verified through various statistical experiments and analyses.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_54-ROI_Image_Encryption_Using_YOLO_and_Chaotic_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multicriteria Handover Management by the SDN Controller-based Fussy AHP and VIKOR Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120753</link>
        <id>10.14569/IJACSA.2021.0120753</id>
        <doi>10.14569/IJACSA.2021.0120753</doi>
        <lastModDate>2021-07-31T15:09:06.8670000+00:00</lastModDate>
        
        <creator>Najib Mouhassine</creator>
        
        <creator>Mostapha Badri</creator>
        
        <creator>Mohamed Moughit</creator>
        
        <subject>SDN; QoS; WLAN; Handover; F-AHP; VIKOR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>A wireless environment is characterized by its dynamic nature, inherent uncertainty, and imprecise parameters and constraints. Network settings such as speed, RSS, network delays, etc. are inherently imprecise. Due to this vagueness, accurately measuring these network parameters in a wireless environment is a difficult task. As a result, a fuzzy logic approach appears to work best when used to design systems in such environments. Although conventional techniques based on precise values can be used to reduce transmission delay, they cannot produce intelligent and efficient handover decisions that take into account all the constraints of the network. Thus, using only one criterion can lead to service disruption, unbalanced network load, and inefficient handoff. Therefore, to guide the horizontal handover process in wireless networks towards making a better choice for VoIP in congested environments, we propose the integration of the Fuzzy-AHP and VIKOR methods in an SDN (Software Defined Networking) controller, based on several criteria (signal-to-noise-plus-interference ratio (SNIR), packet loss, jitter, delay, and throughput). The results of this work show that our contribution maintains a good quality of service for real-time applications.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_53-Multicriteria_Handover_Management_by_the_SDN_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Low-Cost Bio-Inspired Swimming Robot (SRob) with IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120752</link>
        <id>10.14569/IJACSA.2021.0120752</id>
        <doi>10.14569/IJACSA.2021.0120752</doi>
        <lastModDate>2021-07-31T15:09:06.8530000+00:00</lastModDate>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Ahmad Raziq Mirza</creator>
        
        <creator>Mohd Ismail</creator>
        
        <creator>Nor Samsiah</creator>
        
        <subject>Stingray robot; angle of flapping motion; remote control; position control; compact Srob</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Nowadays, exploring underwater environments is a difficult activity that requires specialized equipment. Many studies have proven that bio-inspired robotic fish such as stingray robots have many advantages for underwater exploration. One example is the manta ray, which shows excellent swimming ability by flapping its pectoral fins with large amplitude. By studying the movement behavior of the genus Mobula, the development of biomimetic robots has grown exponentially in recent years. However, this technology requires expensive development costs, and the prototypes produced are heavy. Therefore, the development of a low-cost bio-inspired Swimming Robot (SRob) using an embedded controller with the Internet of Things (IoT) is proposed and presented in this paper. SRob is designed to be small and lightweight compared to other conventional swimming robots and is equipped with 6 servo motors, an ADXL335 3-axis accelerometer, 2 LiPo 7.4V batteries, an ESP01 Wi-Fi module and an Arduino Mega. The RemoteXY app, which works like a remote control, is connected to the Arduino Mega via the ESP01 Wi-Fi module to control the servo motors and obtain sensor readings. Based on the experimental results, the servo motor used to produce the flapping motion can be controlled precisely while producing a large amplitude of motion. In addition, the position control of the compact SRob can be realized and determined correctly while swimming in the water.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_52-Development_of_a_Low_Cost_Bio_Inspired_Swimming_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Neural Network Model for Facial Expression Recognition over Traditional Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120751</link>
        <id>10.14569/IJACSA.2021.0120751</id>
        <doi>10.14569/IJACSA.2021.0120751</doi>
        <lastModDate>2021-07-31T15:09:06.8370000+00:00</lastModDate>
        
        <creator>Pavan Nageswar Reddy Bodavarapu</creator>
        
        <creator>P.V.V.S Srinivas</creator>
        
        <subject>Facial expression recognition; deep learning; filtering techniques; convolutional neural network; emotion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Emotions have a key role in feedback analysis to provide good customer service; the seven main emotions are Anger, Disgust, Fear, Happy, Neutral, Sad and Surprise. An efficient facial emotion recognition model has several advantages, for instance helping to monitor drivers and support self-discipline while they are driving a vehicle. Low-resolution and low-reliability images are the main problems in this field. We propose a new model which can perform efficiently on low-resolution and low-reliability images. We created a low-resolution facial expression dataset (LRFE) containing low-resolution images collected from various resources. We also propose a new hybrid filtering method, which is a combination of the Gaussian, bilateral, and non-local means filtering techniques. DenseNet-121 achieves 0.60 and 0.68 accuracy on FER2013 and LRFE respectively. When the hybrid filtering method is combined with DenseNet-121, it achieves 0.95 accuracy. Similarly, the ResNet-50, MobileNet and Xception models perform effectively when combined with the hybrid filtering method. The proposed convolutional neural network (CNN) model achieves 0.65 accuracy on the FER2013 dataset, while the existing models ResNet-50, MobileNet, DenseNet-121 and Xception obtain 0.60, 0.57, 0.60 and 0.52 accuracy on FER2013 respectively. The proposed model combined with the hybrid filtering method achieves 0.85 accuracy. Clearly, the proposed model outperforms the traditional methods, and when the hybrid filtering method is combined with the CNN models, there is a significant increase in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_51-An_Optimized_Neural_Network_Model_for_Facial_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Encryption Enabling Chaotic Ergodicity with Logistic and Sine Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120750</link>
        <id>10.14569/IJACSA.2021.0120750</id>
        <doi>10.14569/IJACSA.2021.0120750</doi>
        <lastModDate>2021-07-31T15:09:06.8070000+00:00</lastModDate>
        
        <creator>Mohammad Ahmar Khan</creator>
        
        <creator>Jalaluddin Khan</creator>
        
        <creator>Abdulrahman Abdullah Alghamdi</creator>
        
        <creator>Sarah Mohammed Awadh Bait Saidan</creator>
        
        <subject>Image encryption; ergodicity; logistic sine map; security; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Chaotic systems with the complicated characteristics of ergodicity, unpredictability and sensitivity to initial states are commonly utilized in the world of cryptography. A 2D logistic-adjusted-sine (LS) map is implemented in this article. Performance assessments reveal superior ergodicity, greater unpredictability and a broader chaotic range than numerous previous chaotic maps. This research also develops a 2D-LS-based image encryption scheme, the proposed LS-IES, whose encryption functions properly comply with the notions of diffusion and confusion. Research outcomes and security analyses demonstrate that LS-IES can swiftly encrypt various images with great resistance to security threats.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_50-Image_Encryption_Enabling_Chaotic_Ergodicity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining to Determine Behavioral Patterns in Respiratory Disease in Pediatric Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120749</link>
        <id>10.14569/IJACSA.2021.0120749</id>
        <doi>10.14569/IJACSA.2021.0120749</doi>
        <lastModDate>2021-07-31T15:09:06.7900000+00:00</lastModDate>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <creator>Randy Verdecia-Pe&#241;a</creator>
        
        <creator>Jos&#233; Luis Herrera Salazar</creator>
        
        <creator>Esteban Medina-Rafaile</creator>
        
        <creator>Oswaldo Casazola-Cruz</creator>
        
        <subject>Respiratory diseases; data mining; cluster algorithms; K-Means algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>There are several varieties of respiratory disease which mainly affect children between 0 and 5 years of age, and there is no complete report of the behavior of each of them. This research conducts a study of behavioral patterns in respiratory diseases of children in Peru through data mining, using data generated by the health sector, organizations and research between 2015 and 2019. The K-Means clustering algorithm was used to analyze these data and identify patterns in a total of 10,000 Peruvian clinical records from those years, revealing different behaviors. The grouping obtained in the clusters showed that most of the cases, across all the ages studied, presented diseases with codes approximately in the range 000 to 060. This research was carried out to help health centers in Peru with further study, documentation and decision-making, with the aim of enabling optimal prevention strategies for respiratory diseases.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_49-Data_Mining_to_Determine_Behavioral_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-based Cyber-security of Drones using the Na&#239;ve Bayes Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120748</link>
        <id>10.14569/IJACSA.2021.0120748</id>
        <doi>10.14569/IJACSA.2021.0120748</doi>
        <lastModDate>2021-07-31T15:09:06.7730000+00:00</lastModDate>
        
        <creator>Rizwan Majeed</creator>
        
        <creator>Nurul Azma Abdullah</creator>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <subject>Drone technology; security; internet of things; internet of drones; machine learning; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Recent advancements in drone technology are opening new opportunities and applications in various fields of life, especially in the form of small drones. However, these advancements also bring new challenges in terms of security, adaptability, and consistency. Small drones are proving to be a new opportunity for the civil and military industries, yet they still suffer from architectural issues and ill-defined security and safety requirements. The rapid growth of the Internet of Things opens new dimensions for drone technology but poses new threats as well. These tiny flying intelligent devices challenge the security and privacy of data. The design of these small drones is not yet mature enough to fulfill the domain requirements; the basic design issues also call for security mechanisms, privacy mechanisms, and data transformations. Aspects like intrusion and interception in the domain of the Internet of Drones (IoD) need to be investigated to make these drones more secure and more adaptable. In this paper, we use an intelligent machine learning approach to design an IoT-aided drone. This approach provides an intelligent cyber security system which helps in detecting network security threats using Blockchain.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_48-IoT_based_Cyber_security_of_Drones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Data Compression on the Performance of Column-oriented Data Stores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120747</link>
        <id>10.14569/IJACSA.2021.0120747</id>
        <doi>10.14569/IJACSA.2021.0120747</doi>
        <lastModDate>2021-07-31T15:09:06.7600000+00:00</lastModDate>
        
        <creator>Tsvetelina Mladenova</creator>
        
        <creator>Yordan Kalmukov</creator>
        
        <creator>Milko Marinov</creator>
        
        <creator>Irena Valova</creator>
        
        <subject>Column-oriented data stores; data compression; distributed non-relational databases; benchmarking column-oriented databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Compression of data in traditional relational database management systems significantly improves system performance by decreasing the size of the data, which results in less data transfer time within the communication environment and higher efficiency in I/O operations. Column-oriented database management systems should perform even better, since each attribute is stored in a separate column, so that its sequential values are stored and accessed sequentially on disk. That further increases compression efficiency, as the entire column is compressed/decompressed at once. The aim of this research is to determine whether data compression could improve the performance of HBase, running on a small-sized Hadoop cluster consisting of one name node and nine data nodes. The test scenario includes performing Insert and Select queries on multiple records with and without data compression. Four data compression algorithms are tested, since they are natively supported by HBase: SNAPPY, LZO, LZ4 and GZ. Results show that data compression in HBase highly improves system performance in terms of storage saving. It shrinks data 5 to 10 times (depending on the algorithm) without any noticeable additional CPU load. That allows smaller but significantly faster SSD disks to be used as the cluster’s primary data storage. Furthermore, the substantial decrease in network traffic is an additional benefit with a major impact on big data processing.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_47-Impact_of_Data_Compression_on_the_Performance_of_Column.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of the Accuracy of the Machine Translation Systems of Social Media Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120746</link>
        <id>10.14569/IJACSA.2021.0120746</id>
        <doi>10.14569/IJACSA.2021.0120746</doi>
        <lastModDate>2021-07-31T15:09:06.7270000+00:00</lastModDate>
        
        <creator>Yasser Muhammad Naguib Sabtan</creator>
        
        <creator>Mohamed Saad Mahmoud Hussein</creator>
        
        <creator>Hamza Ethelb</creator>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Colloquial Arabic; Google translate; machine translation evaluation; reliability; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>In this age of information technology, it has become possible for people all over the world to communicate in different languages through social media platforms with the help of machine translation (MT) systems. As far as the Arabic-English language pair is concerned, most studies have been conducted on evaluating the MT output for the standard varieties of Arabic, with fewer studies focusing on the vernacular or colloquial varieties. This study attempts to address this gap by presenting an evaluation of the performance of MT output for vernacular or colloquial Arabic in the social media domain. As it is currently the most widely used MT system, Google Translate (GT) has been chosen for evaluating the reliability of its output in the context of translating the colloquial Arabic (i.e., the Egyptian/Cairene Arabic variety) used in social media into English. With this goal in mind, a corpus consisting of Egyptian dialectal Arabic sentences was collected from social media networks, i.e., Facebook and Twitter, and then fed into the GT system. The GT output was then evaluated by three human translators to assess its accuracy of translation in terms of adequacy and fluency. The results of the study show that several translation problems were spotted in the GT output. These problems mainly concern wrong equivalents, inappropriate additions and deletions, and transliteration of out-of-vocabulary (OOV) words, which are mostly due to the literal translation of the Arabic vernacular sentences into English. This can be attributed to the fact that Arabic vernacular varieties differ from the standard language for which MT systems have basically been developed. This, consequently, necessitates upgrading such MT systems to deal with the vernacular varieties.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_46-An_Evaluation_of_the_Accuracy_of_the_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Stress Detection Approach based on Processing Data from Wearable Wrist Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120745</link>
        <id>10.14569/IJACSA.2021.0120745</id>
        <doi>10.14569/IJACSA.2021.0120745</doi>
        <lastModDate>2021-07-31T15:09:06.6970000+00:00</lastModDate>
        
        <creator>Mazin Alshamrani</creator>
        
        <subject>Fully convolutional neural network; stress detection; smartwatch; data pre-processing; semi-supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Today&#39;s busy lifestyle often leads to frequent stress, the accumulation of which may have severe consequences for humans. The goal of this research is to create a stress detection technology that can correctly, constantly, and unobtrusively monitor psychological stress in real time. Due to the importance of stress detection and prevention, many traditional and advanced techniques have been proposed; likewise, we provide a unique stress-detection technique that is context-based. In this research, a novel approach to designing and using a deep neural network for stress detection is presented. To provide a desirable training environment for network development, an open-source data set based on motion and physiological information collected from wrist- and chest-worn devices was acquired and exploited. Raw data were analyzed, filtered, and preprocessed to create the best possible training data. For the proposed solution to have wide use value, further focus was placed on the data recorded using only smartwatches. Smartwatches are widely distributed and accessible, and as such deserve intelligent solutions that deal with the processing of the collected data and ensure the improvement of the quality of life of end-users. Finally, two network types with proven capabilities of processing time series data are examined in detail: a fully convolutional network (FCN) and a ResNet deep learning model. The FCN model showed better empirical performance, and further efforts were made to select an optimal network structure. In the end, the proposed solution demonstrated performance similar to state-of-the-art solutions and significantly better than some traditional machine learning techniques, providing a good foundation for reliable stress detection and further development efforts.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_45-An_Advanced_Stress_Detection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Exploration on Online Learning Challenges in Malaysian Higher Education: The Post COVID-19 Pandemic Outbreak</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120744</link>
        <id>10.14569/IJACSA.2021.0120744</id>
        <doi>10.14569/IJACSA.2021.0120744</doi>
        <lastModDate>2021-07-31T15:09:06.6800000+00:00</lastModDate>
        
        <creator>Ramlan Mustapha</creator>
        
        <creator>Maziah Mahmud</creator>
        
        <creator>Norhapizah Mohd Burhan</creator>
        
        <creator>Hapini Awang</creator>
        
        <creator>Ponmalar Buddatti Sannagy</creator>
        
        <creator>Mohd Fairuz Jafar</creator>
        
        <subject>Online learning; COVID-19; outbreak; fuzzy delphi method; expert consensus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Flexible online programmes and learning are gaining popularity as a means of educating students. They can also facilitate the delivery of knowledge to pupils, as well as the learning process itself. The purpose of this study was to investigate online learning challenges following the COVID-19 pandemic outbreak in Malaysia. This study employs the qualitative and Fuzzy Delphi methods in collecting the data. In the qualitative research phase, open-ended questions were distributed to 118 participants, while in the Fuzzy Delphi phase, expert questionnaires were distributed to 7 experts in the field of study. Qualitative data were analysed using Atlas.ti software, whereas Fuzzy Delphi data were analysed using Fudelo 1.0 software. The qualitative study discovered that students confront seven significant challenges: internet coverage, mental fatigue, learning devices, environmental disturbance, pedagogical challenges, lack of motivation, and social interaction. Meanwhile, the Fuzzy Delphi analysis of the expert consensus on the themes is at a reasonable level. The overall expert consensus agreement findings exceed 75%, the overall value of the threshold (d) is 0.2, and the α-cut exceeds 0.5. The study provides important insights into online learning issues and the areas for further improvement. This study also discusses avenues for future research for more significant benefits and contributions to knowledge in general.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_44-An_Exploration_on_Online_Learning_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improvised Facial Emotion Recognition System using the Optimized Convolutional Neural Network Model with Dropout</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120743</link>
        <id>10.14569/IJACSA.2021.0120743</id>
        <doi>10.14569/IJACSA.2021.0120743</doi>
        <lastModDate>2021-07-31T15:09:06.6500000+00:00</lastModDate>
        
        <creator>P V V S Srinivas</creator>
        
        <creator>Pragnyaban Mishra</creator>
        
        <subject>Convolutional neural network (CNN); facial emotion recognition (FER); dropout; FER 2013; CREMAD; RVDSR; CK48; JAFFE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Facial expression detection has long been regarded as both verbal and nonverbal communication. The muscular expression on a person&#39;s face reflects their physical and mental state. Using computer programming to integrate all face curves into a categorization class is significantly more important than doing so manually. Convolutional Neural Networks, an Artificial Intelligence approach, were recently developed to improve the task with more acceptance. Due to overfitting during the learning step, model performance may be lowered and regarded as underperforming. Dropout is a method used to reduce testing error. The influence of dropout is applied at convolutional layers and dense layers to classify face emotions into the distinct categories of Happy, Angry, Sad, Surprise, Neutral, Disgust, and Fear, and is represented as an improved convolutional neural network model. The experimental setup used the datasets JAFFE, CK48, FER2013, RVDSR, and CREMA-D, and a self-prepared dataset of 36,153 facial images, for observing train and test accuracy in the presence and absence of dropout. Test accuracies of 92.33, 96.50, 97.78, 99.44, and 98.68 are obtained on the FER2013, RVDSR, CREMA-D, CK48, and JAFFE datasets, respectively, in the presence of dropout. The features used are countably large in the computation; as a result, the higher computation support of NVIDIA, with a GPU capacity of 16GB, CPU of 13GB, and memory of 73.1 GB, was used for the experiments.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_43-An_Improvised_Facial_Emotion_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arbitrary Verification of Ontology Increments using Natural Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120742</link>
        <id>10.14569/IJACSA.2021.0120742</id>
        <doi>10.14569/IJACSA.2021.0120742</doi>
        <lastModDate>2021-07-31T15:09:06.6170000+00:00</lastModDate>
        
        <creator>Kaneeka Vidanage</creator>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Zuriana Abu Bakar</creator>
        
        <subject>First order logic; linked data; ontologist; iterative framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Parallel to the advancement of practical use cases in computing, the trend toward collaborative ontology engineering is accelerating. Both domain experts and ontologists must collaborate in collaborative ontology engineering processes. However, the bulk of domain experts are not computer experts (e.g., lawyers, medical doctors, bankers, etc.). Question and Answer on Linked Data (QALD) is a suggested method for non-computer domain experts to engage with ontology increments as they evolve. Existing QALD methods and systems, on the other hand, have a number of drawbacks, including significant setup requirements, domain dependence, and user discomfort. As a result, a new QALD algorithm and system designed using First Order Logic (FOL) are presented in order to address the shortcomings of current QALD mechanisms. The suggested FOL-based QALD mechanism was tested quantitatively and qualitatively over three distinct ontology increments. This experiment had an overall acceptance rate of 79 percent from all stakeholders.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_42-Arbitrary_Verification_of_Ontology_Increments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Emotion in Online News based on Kansei Approach for National Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120741</link>
        <id>10.14569/IJACSA.2021.0120741</id>
        <doi>10.14569/IJACSA.2021.0120741</doi>
        <lastModDate>2021-07-31T15:09:06.6030000+00:00</lastModDate>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Nur Atiqah Malizan</creator>
        
        <creator>Nor Asiakin Hasbullah</creator>
        
        <creator>Norul Zahrah Mohd Zainuddin</creator>
        
        <creator>Normaizeerah Mohd Noor</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <creator>Sazali Sukardi</creator>
        
        <subject>Online news; kansei; national security; political security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Securing a nation is more complicated in modern days than it was decades ago. In the era of big data, massive information is constantly being shared in cyberspace. Online rumours and fake news can evoke negative emotions and disruptive behaviours that can possibly jeopardize national security. Real-time detection and monitoring of unsettling emotions and potential national security threats should be further developed to help authorities manage situations early. Text in online news can be weighted with emotions that possibly lead to misunderstandings that affect national security and trigger chaos. Thus, understanding the emotion contained in online news and its relationship with national security is crucial. The Kansei approach was determined to be a methodology capable of interpreting human emotions towards an artefact. This research explores emotion assessment using Kansei for text in online news and summarizes the emotion variable factors that are likely to have a relationship with an individual&#39;s state of mind towards one of the national security elements, namely political security. The results determine that the identified variable factors were “Frustrated,” “Consent,” “Resentful,” and “Attentive”. This gives an understanding of the significant effect of people&#39;s emotions represented in text on the political security element.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_41-Assessment_of_Emotion_in_Online_News_based_on_Kansei_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SRAVIP: Smart Robot Assistant for Visually Impaired Persons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120739</link>
        <id>10.14569/IJACSA.2021.0120739</id>
        <doi>10.14569/IJACSA.2021.0120739</doi>
        <lastModDate>2021-07-31T15:09:06.5870000+00:00</lastModDate>
        
        <creator>Fahad Albogamy</creator>
        
        <creator>Turk Alotaibi</creator>
        
        <creator>Ghalib Alhawdan</creator>
        
        <creator>Mohammed Faisal</creator>
        
        <subject>Mobile robot; robotics; robot assistance; and visually impaired persons</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Vision is one of the most important human senses, and visually impaired people encounter various difficulties due to their inability to move safely in different environments. This research aimed to facilitate integrating such persons into society by proposing a robotic solution (robot assistance) to assist them in navigating within indoor environments, such as schools, universities, hospitals, airports, etc., according to a prescheduled task. The proposed system is called the smart robot assistant for visually impaired persons (SRAVIP). It includes two subsystems: 1) an initialization system aimed to initialize the robot, create an environment map, and register a visually impaired person as a target object; 2) a real-time operation system implemented to navigate the mobile robot and communicate with the target object using a speech-processing engine and an optical character recognition (OCR) module. An important contribution of the proposed SRAVIP is that it is user-independent, i.e., it does not depend on a specific user, and one robot can serve unlimited users. A Turtlebot3 robot was utilized to realize SRAVIP, which was then tested in the College of Computer and Information Sciences, King Saud University, AlMuzahmiyah Campus. The experimental results confirmed that the proposed system could function successfully.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_39-SRAVIP_Smart_Robot_Assistant_for_Visually_Impaired.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling the Player and Avatar Attachment based on Student’s Engagement and Attention in Educational Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120740</link>
        <id>10.14569/IJACSA.2021.0120740</id>
        <doi>10.14569/IJACSA.2021.0120740</doi>
        <lastModDate>2021-07-31T15:09:06.5870000+00:00</lastModDate>
        
        <creator>Nooralisa Mohd Tuah</creator>
        
        <creator>Dinna @ Nina Mohd Nizam</creator>
        
        <creator>Zaidatol Haslinda A. Sani</creator>
        
        <subject>Avatar; engagement; attention; digital educational games</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Player and avatar attachment helps to motivate a student to strengthen their engagement in gameplay. The different types of avatar designs deployed in a game have an impact on students&#39; engagement. Avatars are designed with different roles, wherein each role offers varying motivational effects on students&#39; engagement. Several studies in human-computer interaction have assessed user engagement and user attention in computer or system applications as well as in gameplay. Among the usual approaches to assessing user engagement are questionnaires and eye-tracking. Investigations of the possible use of these approaches in determining player and avatar attachment, particularly the attachment associated with the various avatar designs and their effect on students&#39; engagement, are inconclusive, and the area remains untapped. Essentially, studying students&#39; engagement and attention perception while learning enriches one&#39;s comprehension of engagement in the education segment. As such, this study proposes a new model of player and avatar attachment based on students&#39; engagement and focused attention on the gameplay of digital educational games (DEGs). The model is developed following a stepwise approach consisting of component identification, establishing the relationships of the components, model development, and model validation. Several components were scrutinized, summarized, and developed into the model proposed in this study. A significant attachment can determine the avatar design that may influence a student&#39;s engagement in gameplay. Hence, this study offers several constructive recommendations for future avatars in game design for education purposes, which may validate users&#39; engagement based on their focused attention.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_40-Modeling_the_Player_and_Avatar_Attachment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Similarity Score Model for Aspect Category Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120738</link>
        <id>10.14569/IJACSA.2021.0120738</id>
        <doi>10.14569/IJACSA.2021.0120738</doi>
        <lastModDate>2021-07-31T15:09:06.5570000+00:00</lastModDate>
        
        <creator>Zohreh Madhoushi</creator>
        
        <creator>Abdul Razak Hamdan</creator>
        
        <creator>Suhaila Zainudin</creator>
        
        <subject>Aspect category detection; language model; semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Aspect-based Sentiment Analysis (ABSA) aims to extract significant aspects of an item or product from reviews and predict the sentiment of each aspect. Previous similarity methods tend to extract aspect categories at the word level by combining Language Models (LMs) in their models. A drawback of the LM model is its dependence on a large amount of labelled data for a specific domain to function well. This work proposes a mechanism to address labelled-data dependency through a one-step approach, experimenting to decide the best combinatory architectures of recurrent-based LMs and the best semantic similarity measures for fostering a new aspect category detection model. The proposed model addresses the drawbacks of previous aspect category detection models in an implicit manner. The datasets of this study, S1 and S2, are from the standard SemEval online competition. The proposed model outperforms the previous baseline models in terms of the F1-score of aspect category detection. This study finds more relevant aspect categories by creating a more stable and robust model. The F1-score of our best model for aspect category detection is 79.03% in the restaurant domain for the S1 dataset. In dataset S2, the F1-score is 72.65% in the laptop domain and 75.11% in the restaurant domain.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_38-A_Similarity_Score_Model_for_Aspect_Category_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberattacks and Vociferous Implications on SECS/GEM Communications in Industry 4.0 Ecosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120737</link>
        <id>10.14569/IJACSA.2021.0120737</id>
        <doi>10.14569/IJACSA.2021.0120737</doi>
        <lastModDate>2021-07-31T15:09:06.5230000+00:00</lastModDate>
        
        <creator>Shams A. Laghari</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <creator>Shankar Karuppayah</creator>
        
        <creator>Ayman Al-Ani</creator>
        
        <creator>Shafiq Ul Rehman</creator>
        
        <subject>SECS/GEM; cybersecurity; industry-4.0; machine-to-machine communication; industrial internet of things (IIoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Information and communications technology (ICT) is prevalent in almost every field of industrial production and manufacturing processes at present. A typical industry network consists of sensors, actuators, devices, and services to connect, track, and manage production processes to increase performance and boost productivity. The SEMI Equipment Communications Standard/Generic Equipment Model (SECS/GEM) is SEMI&#39;s Machine-to-Machine (M2M) protocol for equipment-to-host data communications. It is the most popular and most widely used M2M communication protocol in the manufacturing industry. With Industry 4.0 as a guiding factor, connectivity to business networks is required for accessing real-time data whenever and wherever needed. This openness of connectivity raises security concerns, as the SECS/GEM protocol offers no security, which risks exposing the manufacturing industries&#39; business secrets and production processes. This paper discusses the key processes involved in SECS/GEM communications and how potential attackers can manipulate these processes to obtain illegal or unauthorized access. The experimental results indicate that the SECS/GEM processes are entirely vulnerable to numerous attacks, including DoS attacks, replay attacks, and false-data-injection attacks. Thus, the future direction involves developing a prevention mechanism aimed at securing the SECS/GEM processes in the industrial network. This study&#39;s findings are useful as preliminary guidance for infrastructure owners to plan appropriate security measures to protect the industrial network.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_37-Cyberattacks_and_Vociferous_Implications_on_SECS_GEM_Communications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of Requirements for Transformation of an Urban District to a Smart City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120736</link>
        <id>10.14569/IJACSA.2021.0120736</id>
        <doi>10.14569/IJACSA.2021.0120736</doi>
        <lastModDate>2021-07-31T15:09:06.5100000+00:00</lastModDate>
        
        <creator>Rosziati Ibrahim</creator>
        
        <creator>N.A.M. Asri</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Jahari Abdul Wahab</creator>
        
        <subject>Smart city; Internet of Things (IoTs); requirements analysis; survey instrument</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The concept of a smart city is still debatable, yet it attracts attention from every country around the globe seeking to provide their communities with a better quality of life. New ideas for the development of a smart city have always evolved to enhance the quality, performance, and interactivity of services. This paper presents a model of a smart city based on a comparison of chosen smart cities in the world and uses the model to validate the requirements for the transformation of an urban district to a smart city. The proposed model for a smart city in this paper focuses on two major components: utilizing IoTs (Internet of Things) in forming a model for a smart city, and incorporating cultural diversity. The relationships of the components and cultural influence are the foundation of designing the model of a smart city. In this research, the model of a smart city has been validated based on the requirements analysis from the survey instrument, and the results show that the average mean of each element used is more than 4 out of 5. The model of a smart city can be used as a guideline for the transformation of an urban district to a smart city.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_36-Validation_of_Requirements_for_Transformation_of_an_Urban_District.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Handling Partial Occlusion on Person Re-identification using Partial Siamese Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120735</link>
        <id>10.14569/IJACSA.2021.0120735</id>
        <doi>10.14569/IJACSA.2021.0120735</doi>
        <lastModDate>2021-07-31T15:09:06.4930000+00:00</lastModDate>
        
        <creator>Muhammad Pajar Kharisma Putra</creator>
        
        <creator>Wahyono</creator>
        
        <subject>CCTV; CNN; video-surveillance; NN; contrastive-loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Person re-identification (Re-ID) is one of the tasks in CCTV-based surveillance systems for verifying whether two detected objects are the same person. Re-ID visually matches one person or group in various situations captured by different cameras, or by the same camera at different times. This method replaces the surveillance task previously carried out conventionally by humans watching surveillance cameras, which is prone to errors. The challenges of Re-ID are the varied poses of objects, occlusions, and the appearance of people who tend to look similar. Occlusion issues receive special attention, since Re-ID performance can decrease due to partial occlusion. This can occur because the re-identification process relies on features of the person, such as the color and pattern of clothing. Occlusion results in these features not being captured by the camera, leading to re-identification errors. This paper proposes to overcome this problem by dividing the image into several parts (partials) that are then processed in different neural networks (NNs) with the same architecture. The research applies the CNN algorithm with the Siamese network architecture and the contrastive loss function to calculate the similarity distance between a pair of images. The test results show that the partial process obtained accuracies of 86%, 77%, 68%, and 56% for occlusion levels of 20%, 40%, 60%, and 80%, respectively. This accuracy is three to five percent higher than that of images without the partial process.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_35-A_Novel_Method_for_Handling_Partial_Occlusion_on_Person.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of CALL Software on the Performance of EFL Students in the Saudi University Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120734</link>
        <id>10.14569/IJACSA.2021.0120734</id>
        <doi>10.14569/IJACSA.2021.0120734</doi>
        <lastModDate>2021-07-31T15:09:06.4630000+00:00</lastModDate>
        
        <creator>Ayman Khafaga</creator>
        
        <creator>Abed Saif Ahmed Alghawli</creator>
        
        <subject>CALL; EFL students; Saudi university context; language skills; reading; SnagitTM; screencast; performance; effectiveness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>This paper investigates the extent to which Computer-Assisted Language Learning (CALL) contributes academically and pedagogically to the performance of students majoring in English as a Foreign Language (EFL). The paper’s main objective is to explore the extent to which CALL is effective in developing the linguistic and communicative competence of EFL students in the skill of reading. The paper uses both quantitative and qualitative approaches in the process of data collection. As an empirical study, the sample comprised 47 students studying English at Prince Sattam bin Abdulaziz University. The participants were classified into two groups, experimental and control, each of which was assigned specific reading activities. The experimental group was allocated technological learning by means of the computer programs SnagitTM and Screencast, whereas the control group was assigned traditional learning, i.e., without using a computer. Results revealed that the use of CALL has more positive effects on the learning outcomes of the experimental group than on those of the control group. This, in turn, accentuates the fact that the use and application of CALL in EFL contexts improves students’ learning outcomes concerning the skill of reading. The study recommends further integration of computer software into the design of different EFL courses.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_34-The_Impact_of_CALL_Software_on_the_Performance_of_EFL_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Engineering Framework to detect Phishing Websites using URL Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120733</link>
        <id>10.14569/IJACSA.2021.0120733</id>
        <doi>10.14569/IJACSA.2021.0120733</doi>
        <lastModDate>2021-07-31T15:09:06.4300000+00:00</lastModDate>
        
        <creator>N. Swapna Goud</creator>
        
        <creator>Anjali Mathur</creator>
        
        <subject>Recursive feature elimination; principal component analysis; standard scalar transformation; eXtreme gradient boosting classifier; correlation matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Phishing is one of the most popular and dangerous cyber-attacks in the world of the internet. One of the most common attacks in cyber security is accessing the personal information of internet users through a “phishing website”. The major element through which the hacker accomplishes this is the URL: the hacker creates a near-replica of the original URL with a very small difference, generally not revealed without keen observation. By pipelining various machine learning algorithms, the proposed model aims to recognize the important features for classifying a URL using a recursive feature elimination process. In this work, a dataset of URL records was collected with 112 features, including one target value, and a machine learning based model is proposed to identify the significant features used to classify a URL; the wrapper method of recursive feature elimination compares different bagging and boosting machine learning approaches. Ensemble, bootstrap aggregation, boosting, and stacking algorithms are used for feature selection. The proposed work has five sections: the pre-processing phase, finding the relations between the features of the dataset, automatic selection of the number of features using the Extra Trees Classifier, comparison of the various ensemble algorithms, and finally generation of the best features for URL analysis. This paper designs a meta-learner with the XGBoost classifier as the base classifier and achieves an accuracy of 93%. Out of 112 features, this model performed an extensive comparative study on feature selection and identified 29 features as core features through URL analysis.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_33-Feature_Engineering_Framework_to_Detect_Phishing_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-based Closed Algal Cultivation System with Vision System for Cell Count through ImageJ via Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120732</link>
        <id>10.14569/IJACSA.2021.0120732</id>
        <doi>10.14569/IJACSA.2021.0120732</doi>
        <lastModDate>2021-07-31T15:09:06.4000000+00:00</lastModDate>
        
        <creator>Lean Karlo S. Tolentino</creator>
        
        <creator>Sheila O. Belarmino</creator>
        
        <creator>Justin Gio N. Chan</creator>
        
        <creator>Oliver D. Cleofas Jr</creator>
        
        <creator>Jethro Gringo M. Creencia</creator>
        
        <creator>Meryll Eve L. Cruz</creator>
        
        <creator>JC Glenn B. Geronimo</creator>
        
        <creator>John Peter M. Ramos</creator>
        
        <creator>Lejan Alfred C. Enriquez</creator>
        
        <creator>Jay Fel C. Quijano</creator>
        
        <creator>Edmon O. Fernandez</creator>
        
        <creator>Maria Victoria C. Padilla</creator>
        
        <subject>Spirulina platensis; ImageJ; image processing; closed algal cultivation; parameter monitoring; firebase</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Spirulina platensis and other microalgae are now being considered in different fields of research, owing to the former’s vast potential, not least its high protein content. A stable supply of microalga production is therefore necessary. To achieve high-protein spirulina, its cultivation in a closed algal system requires monitoring and maintenance of the bio-environmental factors and parameters affecting its growth, to provide stable and efficient production of microalgae. Meanwhile, laboratories that culture spirulina determine its cell count by manually counting the cells under a microscope, which is tedious work. This establishes the need to construct a device that cultivates spirulina with maintenance and cell-counting capabilities. Thus, the proponents developed a culturing device with three main systems. The first system maintains the bio-environmental parameters, such as pH level, temperature, and light. The second system performs cell counting through ImageJ’s image processing; it verifies the cell count and growth by counting the filaments of the spirulina. Lastly, a corresponding Android application, developed using Firebase and Android Studio, displays real-time values of the culture’s parameters. Results show that the device was able to stabilize its parameters. Also, red LEDs exhibited a 28.43% higher approximate cell count than red-blue LEDs. With this, the quality of the spirulina produced throughout the study was improved. Lastly, the use of ImageJ’s image processing feature showed no significant difference from manual counting while releasing results many times faster, making it a better alternative to manual cell counting.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_32-IoT_based_Closed_Algal_Cultivation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple Relay Nodes Selection Scheme using Exit Time Variation for Efficient Data Dissemination in VANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120731</link>
        <id>10.14569/IJACSA.2021.0120731</id>
        <doi>10.14569/IJACSA.2021.0120731</doi>
        <lastModDate>2021-07-31T15:09:06.3830000+00:00</lastModDate>
        
        <creator>Deepak Gupta</creator>
        
        <creator>Rakesh Rathi</creator>
        
        <creator>Shikha Gupta</creator>
        
        <creator>Neetu Sharma</creator>
        
        <subject>Broadcasting; disseminations; exit time; highway lanes; relay nodes; vehicle speed; vehicular ad hoc networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Efficient data dissemination in VANETs remains a challenge because of the variable speed of vehicles, road conditions, frequent fragmentation, etc. In this article, a selective forwarding data dissemination scheme using exit time differences between vehicles in a highway-lanes scenario is proposed, focusing on solving broadcast storms, low coverage, transmission delay, and reliable data delivery. Our approach selects multiple forwarding nodes to increase coverage with less delay. The road-lanes concept is used to identify the direction of a moving node. The redundant regions and zones technique in the proposed approach reduces the processing of parameters to a significant extent. Simulation of the proposed approach is done using NS2 and SUMO. The output of the implementation is compared with the unidirectional flooding, KB_Selective, and LT_Selective techniques. Result analysis shows that the proposed technique is much more efficient: it increases the coverage rate by up to 23% and reduces the delay in data delivery by up to 18%. The methodology also improves system performance by increasing throughput and reducing the collision rate in comparison with other methods.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_31-Multiple_Relay_Nodes_Selection_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LSTM, VADER and TF-IDF based Hybrid Sentiment Analysis Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120730</link>
        <id>10.14569/IJACSA.2021.0120730</id>
        <doi>10.14569/IJACSA.2021.0120730</doi>
        <lastModDate>2021-07-31T15:09:06.3530000+00:00</lastModDate>
        
        <creator>Mohamed Chiny</creator>
        
        <creator>Marouane Chihab</creator>
        
        <creator>Omar Bencharef</creator>
        
        <creator>Younes Chihab</creator>
        
        <subject>Sentiment analysis; hybrid model; long short-term memory (LSTM); Valence Aware Dictionary and sEntiment Reasoner (VADER); term frequency-inverse document frequency (TF-IDF); classification algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Most sentiment analysis models that use supervised learning algorithms consume a lot of labeled data in the training phase in order to give satisfactory results. This is usually expensive and leads to high labor costs in real-world applications. This work proposes a hybrid sentiment analysis model based on a Long Short-Term Memory network, a rule-based sentiment analysis lexicon, and the Term Frequency-Inverse Document Frequency weighting method. These three (input) models are combined in a binary classification model, in which each of the following algorithms has been implemented: Logistic Regression, k-Nearest Neighbors, Random Forest, Support Vector Machine, and Naive Bayes. The model was then trained on a limited amount of data from the IMDB dataset. The results of the evaluation on the IMDB data show a significant improvement in accuracy and F1 score compared to the best scores recorded by the three input models separately. Moreover, the proposed model was able to transfer the knowledge gained on the IMDB dataset to better handle new data from the Twitter US Airlines Sentiments dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_30-LSTM_VADER_and_TF_IDF_based_Hybrid_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Copy Move Forgery Detection Techniques: A Comprehensive Survey of Challenges and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120729</link>
        <id>10.14569/IJACSA.2021.0120729</id>
        <doi>10.14569/IJACSA.2021.0120729</doi>
        <lastModDate>2021-07-31T15:09:06.3200000+00:00</lastModDate>
        
        <creator>Ibrahim A. Zedan</creator>
        
        <creator>Mona M. Soliman</creator>
        
        <creator>Khaled M. Elsayed</creator>
        
        <creator>Hoda M. Onsi</creator>
        
        <subject>Image forensics; copy-move forgery detection (CMFD); conventional techniques; deep learning techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Digital image forensics is a growing field of image processing that attempts to gain objective proof of the origin and veracity of a visual image. Copy-move forgery detection (CMFD) has currently become an active research topic in the passive/blind image forensics field. There is no doubt that conventional techniques, and especially keypoint-based techniques, have pushed CMFD forward in the previous two decades. However, CMFD techniques in general, and conventional techniques in particular, suffer from several challenges, and thus an increasing number of approaches exploit deep learning for CMFD. In this survey, we cover the conventional and deep learning based CMFD techniques from a new perspective. We classify the CMFD techniques into several classifications according to the detection methodology, the detection paradigm, and the detection capability. We discuss the challenges facing CMFD techniques as well as ways of solving them. In addition, this survey covers the evaluation metrics and datasets commonly utilized for CMFD, and we debate and propose certain plans for future research. This survey will be helpful for researchers as it covers the recent trends of CMFD and outlines some future research directions.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_29-Copy_Move_Forgery_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Phrase Generation for Detection of Idioms of Gujarati Language using Diacritics and Suffix-based Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120728</link>
        <id>10.14569/IJACSA.2021.0120728</id>
        <doi>10.14569/IJACSA.2021.0120728</doi>
        <lastModDate>2021-07-31T15:09:06.2900000+00:00</lastModDate>
        
        <creator>Jatin C. Modh</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Diacritic; Gujarati; idiom; machine translation system (MTS); natural language processing (NLP); suffix; unicode transformation format (UTF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Gujarati is the language used for everyday communication in the state of Gujarat, India. The Gujarati language is also officially recognized by the constitution and the government of India. Gujarati script is based on the Devanagari script. An idiom is an expression, phrase, or word that has a different meaning from the literal meaning of the words in it. Idioms represent the cultural heritage of the Gujarati language and are used in Gujarati for effective communication and conveying an accurate message. No machine translation system does an accurate translation of Gujarati idioms into English or any other language. Different idiom phrases can be generated by adding diacritic(s) as well as a suffix to the root or base form of the idiom. The many forms of a single idiom make automatic idiom identification as well as machine translation more challenging. This paper focuses on the design and implementation of diacritics and suffix-based rules for dynamic phrase generation and detection of idioms of the Gujarati language. This implementation helps in identifying a Gujarati idiom present in any possible form in Gujarati text. The results obtained from the execution of 7050 different Gujarati idiom phrases yield an accuracy of 99.73%. The results are encouraging enough to make the proposed implementation useful for natural language processing tasks related to Gujarati language idioms.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_28-Dynamic_Phrase_Generation_for_Detection_of_Idioms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grey Clustering Approach to Assess Sediment Quality in a Watershed in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120727</link>
        <id>10.14569/IJACSA.2021.0120727</id>
        <doi>10.14569/IJACSA.2021.0120727</doi>
        <lastModDate>2021-07-31T15:09:06.2730000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Jossel Altaminarano</creator>
        
        <creator>Luis Pariona</creator>
        
        <creator>Patricia Oscanoa</creator>
        
        <creator>Stephany Esquivel</creator>
        
        <creator>Wendy Mej&#237;a</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <subject>Grey clustering; sediment quality; watershed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The evaluation of sediment quality is a relevant topic that involves the analysis of various parameters altered by natural or anthropogenic causes. The Grey Clustering method provides an alternative for evaluating sediment quality. In the present study, the sediment quality of the Chontayacu river watershed was evaluated considering the results of the monitoring of twenty-three points carried out in earlier evaluations by the Environmental Impact Evaluation Agency (OEFA by its Spanish acronym). These twenty-three points were separated into three blocks considering the monitoring points upstream of the Uchiza town center and the Chontayacu Alto and Chontayacu Bajo hydroelectric plants. Seven parameters were analyzed: As, Cd, Cr, Cu, Pb, Hg, and Zn, which were compared with Canadian sediment quality standards for the protection of aquatic life. The results of the assessment showed that all points in the Chontayacu River were classified as having unlikely adverse biological effects from heavy metals. However, a quality ranking was established between the points of each block, where it was found that points P3, P4, and P17 correspond to the lowest values for the high CH, low CH, and CP Uchiza blocks, respectively. Finally, the results obtained will provide integrated information for decision making by the competent authorities in Peru, as well as indicate the level of sediment contamination that should be taken into account in proposals for hydroelectric projects that influence sediment transport and entrainment.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_27-Grey_Clustering_Approach_to_Assess_Sediment_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preprocessing Handling to Enhance Detection of Type 2 Diabetes Mellitus based on Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120726</link>
        <id>10.14569/IJACSA.2021.0120726</id>
        <doi>10.14569/IJACSA.2021.0120726</doi>
        <lastModDate>2021-07-31T15:09:06.2600000+00:00</lastModDate>
        
        <creator>Nur Ghaniaviyanto Ramadhan</creator>
        
        <creator>Adiwijaya</creator>
        
        <creator>Ade Romadhony</creator>
        
        <subject>Diabetes mellitus; data preprocessing; data augmentation; random forest; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Diabetes is a non-communicable disease with a death rate of 70% in the world. The majority of diabetes cases, 90-95%, are type 2 diabetes, which is caused by an unhealthy lifestyle. Type 2 diabetes can be detected earlier using an examination that contains diabetes-related parameters. However, the dataset does not always contain complete information, the distribution between positive and negative classes is mostly imbalanced, and some parameters have low importance for the decision class. To overcome these problems, this study carries out preprocessing to improve detection precision and recall. In this paper, we propose an approach to dataset preprocessing, which is applied to diabetes prediction. The preprocessing approach consists of the following processes: missing value handling, imbalanced data handling, feature importance, and data augmentation. The data preprocessing uses the median for missing values, random oversampling for imbalanced data, the Gini score in the random forest for feature importance, and the posterior distribution for data augmentation. This research used random forest and logistic regression as classification algorithms. The experimental results show that classification precision increased by 20% and recall by 24% when applying the proposed method with random forest, compared to random forest without the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_26-Preprocessing_Handling_to_Enhance_Detection_of_Type_2_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Website Defacement Attacks using Web-page Text and Image Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120725</link>
        <id>10.14569/IJACSA.2021.0120725</id>
        <doi>10.14569/IJACSA.2021.0120725</doi>
        <lastModDate>2021-07-31T15:09:06.2270000+00:00</lastModDate>
        
        <creator>Trong Hung Nguyen</creator>
        
        <creator>Xuan Dau Hoang</creator>
        
        <creator>Duc Dung Nguyen</creator>
        
        <subject>Website defacement attacks; website defacement detection; machine learning-based website defacement detection; deep learning-based website defacement detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Recently, web attacks in general, and defacement attacks on websites and web applications in particular, have been considered one of the major security threats to many enterprises and organizations that provide web-based services. A defacement attack can have a critical effect on the owner’s website, such as instant discontinuity of website operations and damage to the owner’s reputation, which in turn may lead to huge financial losses. A number of techniques, measures, and tools for monitoring and detecting website defacements have been researched, developed, and deployed in practice. However, some measures and techniques can only work with static web-pages, while others can work with dynamic web-pages but require extensive computing resources. Other issues of existing proposals are a relatively low detection rate and a high false alarm rate, because many important elements of web-pages, such as embedded code and images, are not processed. In order to address these issues, this paper proposes a combination model based on BiLSTM and EfficientNet for website defacement detection. The proposed model processes two important components of web-pages: the text content and page screenshot images. The combination model can work effectively with dynamic web-pages and can produce high detection accuracy as well as a low false alarm rate. Experimental results on a dataset of over 96,000 web-pages confirm that the proposed model outperforms existing models on most measurements. The model’s overall accuracy, F1-score, and false positive rate are 97.49%, 96.87%, and 1.49%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_25-Detecting_Website_Defacement_Attacks_using_Web_page_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Independent Task Scheduling in Cloud Computing using Meta-Heuristic HC-CSO Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120724</link>
        <id>10.14569/IJACSA.2021.0120724</id>
        <doi>10.14569/IJACSA.2021.0120724</doi>
        <lastModDate>2021-07-31T15:09:06.2130000+00:00</lastModDate>
        
        <creator>Jai Bhagwan</creator>
        
        <creator>Sanjeev Kumar</creator>
        
        <subject>Crow search algorithm (CSA); cat swarm optimization (CSO); H-CSO algorithm; HC-CSO algorithm; heft algorithm; SMIW (self-motivated inertia weight); independent tasks; particle swarm optimization (PSO); QoS (Quality of Service); virtual machines (VMs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Cloud computing is a vital paradigm among emerging technologies. It provides hardware, software, and development platforms to end-users on demand. Task scheduling is a challenging job in the cloud computing environment. Tasks can be divided into two categories, dependent and independent; independent tasks are not connected by any parent-child relation. Various meta-heuristic algorithms have been introduced to schedule independent tasks. In this paper, a hybrid HC-CSO algorithm has been simulated using independent tasks. This hybrid algorithm was designed using the HEFT algorithm, the Self-Motivated Inertia Weight factor, and the standard Cat Swarm Optimization algorithm. The Crow Search algorithm has been applied to overcome the problem of premature convergence and to avoid the H-CSO algorithm getting stuck in a local optimum. The simulation was carried out using 500-1300 independent tasks of random lengths, and it was found that the H-CSO algorithm outperforms the PSO, ACO, and CSO algorithms, while the hybrid HC-CSO algorithm outperforms Cat Swarm Optimization, Particle Swarm Optimization, and the H-CSO algorithm in terms of processing cost and makespan. Across all scenarios, the HC-CSO algorithm is overall 4.15% and 7.18% more efficient than H-CSO and standard CSO, respectively, in terms of makespan; in the case of computation cost minimization, it is 9.60% and 14.59% more efficient than H-CSO and CSO, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_24-Independent_Task_Scheduling_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pre-trained CNNs Models for Content based Image Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120723</link>
        <id>10.14569/IJACSA.2021.0120723</id>
        <doi>10.14569/IJACSA.2021.0120723</doi>
        <lastModDate>2021-07-31T15:09:06.1970000+00:00</lastModDate>
        
        <creator>Ali Ahmed</creator>
        
        <subject>Pre-trained deep neural networks; transfer learning; content based image retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Content based image retrieval (CBIR) systems are a common recent method for image retrieval and are based mainly on two pillars: extracted features and similarity measures. Low level image representations based on colour, texture, and shape properties are the most common feature extraction methods used by traditional CBIR systems. Since these traditional handcrafted features require good prior domain knowledge, inaccurate features used for this type of CBIR system may widen the semantic gap and could lead to very poor retrieval performance. Hence, feature extraction methods that are independent of domain knowledge and have automatic learning capabilities from the input image are highly useful. Recently, pre-trained deep convolutional neural networks (CNNs) with transfer learning facilities have the ability to generate and extract accurate and expressive features from image data. Unlike other types of deep CNN models, which require a huge amount of data and massive processing time for training, pre-trained CNN models have already been trained on thousands of classes of large-scale data, including huge numbers of images, and their information can be easily used and transferred. ResNet18 and SqueezeNet are successful and effective examples of pre-trained CNN models used recently in many machine learning applications, such as classification, clustering, and object recognition. In this study, we have developed CBIR systems based on features extracted using the ResNet18 and SqueezeNet pre-trained CNN models. We have utilized these pre-trained CNN models to extract two groups of features that are stored separately and later used for online image searching and retrieval. Experimental results on two popular image datasets, Corel-1K and GHIM-10K, show that the ResNet18 features based CBIR method has an overall accuracy of 95.5% and 93.9% for the two datasets, respectively, greatly outperforming the traditional handcrafted features based CBIR method.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_23-Pre_trained_CNNs_Models_for_Content_based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CRS-iEclat: Implementation of Critical Relative Support in iEclat Model for Rare Pattern Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120722</link>
        <id>10.14569/IJACSA.2021.0120722</id>
        <doi>10.14569/IJACSA.2021.0120722</doi>
        <lastModDate>2021-07-31T15:09:06.1800000+00:00</lastModDate>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Zailani Abdullah</creator>
        
        <creator>Mahadi B Man</creator>
        
        <subject>Critical relative support; equivalence class transformation (Eclat); iEclat model; interestingness measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The research purpose is to develop a performance enhancement of the Incremental Eclat (iEclat) model by embedding Critical Relative Support (CRS) in the mining of infrequent itemsets. The CRS measure acts as an interestingness measure (filter) in the iEclat model, which comprises the i-Eclat-diffset, i-Eclat-sortdiffset and i-Eclat-postdiffset algorithms for infrequent (rare) itemset mining. Association rule mining is performed to reveal the relationships among itemsets in a transactional database. Its task is to discover whether frequent or infrequent itemsets exist in the database and, if so, whether an interesting relationship between these itemsets can reveal a new pattern analysis for future decision making. Regardless of whether itemsets are frequent or infrequent, the persisting issues are the execution time to display the rules and the high memory consumption during the mining process. The CRS-iEclat engine is proposed to overcome these issues. Experimental results indicate that CRS-iEclat outperforms iEclat by 54% to 100% in execution time (ET) on the selected databases, demonstrating improved ET efficiency.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_22-CRS_iEclat_Implementation_of_Critical_Relative_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Progress on Bio-mechanical Energy Harvesting System from the Human Body: Comprehensive Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120721</link>
        <id>10.14569/IJACSA.2021.0120721</id>
        <doi>10.14569/IJACSA.2021.0120721</doi>
        <lastModDate>2021-07-31T15:09:06.1630000+00:00</lastModDate>
        
        <creator>Mohankumar V</creator>
        
        <creator>G.V. Jayaramaiah</creator>
        
        <subject>Bio-mechanical; energy harvesting; electromagnetic; human-body; piezoelectric; triboelectric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Energy harvesting is a powerful technique to produce clean and renewable energy with better infrastructure improvement. An exhaustive review of recent progress and development in bio-mechanical energy harvesting (BMEH) techniques from the human body is presented in this manuscript. BMEH from the human body is categorized into three parts, namely piezoelectric energy harvesting (PEEH), triboelectric energy harvesting (TEEH), and electromagnetic energy harvesting (EMEH). Each energy harvesting system is discussed with its working principles and mathematical equations, and its recent progress is illustrated with a few work demonstrations. The applications of each energy harvesting approach drawn from recent research work are addressed in detail. A summary of each energy harvester from the human body or its motion, with advantages, limitations, performance metrics, current methods, and the human body parts on which it has been implemented, is highlighted in tables. Critical challenges and issues with possible solutions are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_21-Recent_Progress_on_Bio_mechanical_Energy_Harvesting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Ray Tracing based Method for Investigation of Multiple Reflection among Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120720</link>
        <id>10.14569/IJACSA.2021.0120720</id>
        <doi>10.14569/IJACSA.2021.0120720</doi>
        <lastModDate>2021-07-31T15:09:06.1330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Radiative transfer equation; Monte Carlo ray tracing (MCRT); multi reflection among trees; forest research; canopy reflectance; ellipse and cone shaped trees model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>A Monte Carlo Ray Tracing (MCRT) method for investigating multiple reflection among trees is proposed. For forest research (Leaf Area Index: LAI, Normalized Difference Vegetation Index: NDVI, forest type, tree age, etc.) with spaceborne optical sensor data, errors due to the influence of multiple reflection among trees on the estimation of at-sensor radiance have to be considered. This influence is difficult to formulate in a radiative transfer equation; the proposed method allows estimating it. Through an experiment with a miniature-sized forest, the proposed method is validated. It is also found that an influence of a few to more than 10% due to multiple reflections among trees is anticipated. Furthermore, the influence on the estimation of at-sensor radiance is clarified. The potential of the code is then demonstrated over different types of forests, including coniferous and broadleaf canopies.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_20-Monte_Carlo_Ray_Tracing_based_Method_for_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harnessing Emotive Features for Emotion Recognition from Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120719</link>
        <id>10.14569/IJACSA.2021.0120719</id>
        <doi>10.14569/IJACSA.2021.0120719</doi>
        <lastModDate>2021-07-31T15:09:06.1170000+00:00</lastModDate>
        
        <creator>Rutal Mahajan</creator>
        
        <creator>Mukesh Zaveri</creator>
        
        <subject>Emotion recognition; emotive features; natural language processing; affective computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>With the prevalence of affective computing, emotion recognition becomes vital in any work related to natural language understanding. The inspiration for this work is to supply machines with complete emotional intelligence and integrate them into routine life to satisfy complex human desires and needs. Since text is still a common communication medium on social media, it is important to analyze the emotions expressed in text, which is challenging due to the absence of audio-visual cues. Additionally, conversational text conveys many emotions through communication contexts, and emoticons serve as self-annotation of the writer’s emotion in text. Therefore, a machine learning-based text emotion recognition model using emotive features is proposed and evaluated on the SemEval-2019 dataset. The proposed work exploits different emotion-based features with classical machine learning classifiers such as SVM, multilayer perceptron, REPTree, and decision tree classifiers. The proposed system performs competitively, with an f-score of 65.31% and an accuracy of 87.55%.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_19-Harnessing_Emotive_Features_for_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Unimodal and Multimodal Interactions for Digital TV Remote Control Mobile Application among Elderly</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120718</link>
        <id>10.14569/IJACSA.2021.0120718</id>
        <doi>10.14569/IJACSA.2021.0120718</doi>
        <lastModDate>2021-07-31T15:09:06.1030000+00:00</lastModDate>
        
        <creator>Nor Azman Ismail</creator>
        
        <creator>Nurul Aiman Ab Majid</creator>
        
        <creator>Nur Haliza Abdul Wahab</creator>
        
        <creator>Farhan Mohamed</creator>
        
        <subject>HCI; usability testing; unimodal; multimodal; elderly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>A study was conducted on user interaction designs for TV remote control applications that are preferable among the elderly. Nowadays the smart home concept is widely accepted around the globe, and many applications have been developed based on it, such as smart remote-control applications for TVs and air conditioners. These applications are helpful in our daily life. However, the elderly tend not to use them because of the complexity of the processes and unfriendly interaction design. Therefore, this study was conducted to determine which interaction design is preferable for the elderly, enhancing their experience in using a TV remote control application, encouraging them to use one in daily life, and helping them keep up with new technologies. In this paper, two new interaction designs – a touch-based only (unimodal) interaction prototype and a multimodal interaction prototype – and an existing TV remote control application were compared by conducting usability testing of the three applications with the elderly. Three parameters were considered to compare the three interaction designs: task completion time, error rate, and satisfaction. Using the usability testing data, statistical analysis was conducted to find out which type of interaction is preferred by the elderly. Ten elderly participants took part in the usability testing. The results show a significant difference among the three interaction designs with regard to task completion time and satisfaction, but not error rate. Considering the usability testing and analyses conducted, the elderly prefer the unimodal interaction design for the TV remote control application. Nevertheless, the unimodal interaction was not the typical “tapping buttons” user interface of existing applications. Instead, the favourable interaction design was the one that involved swiping gestures to replace several features that were implemented using buttons in existing TV remote control applications.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_18-A_Comparative_Study_of_Unimodal_and_Multimodal_Interactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection on Medical Images using Autoencoder and Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120717</link>
        <id>10.14569/IJACSA.2021.0120717</id>
        <doi>10.14569/IJACSA.2021.0120717</doi>
        <lastModDate>2021-07-31T15:09:06.0700000+00:00</lastModDate>
        
        <creator>Rashmi Siddalingappa</creator>
        
        <creator>Sekar Kanagaraj</creator>
        
        <subject>Anomalies; autoencoder; convolutional neural networks (CNN) (ConvNets); deep neural network architecture; regularization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Detection of anomalies from a medical image dataset improves prognosis by discovering new facts hidden in the data. The present study discusses anomaly detection using autoencoders and convolutional neural networks. The autoencoder handles the imbalance between normal and abnormal samples, keeping the learning models flexible and accurate on the training data. The problem is addressed in four stages: 1) training: an autoencoder is initialized with the hyper-parameters and trained on lung cancer CT scan images; 2) testing: the autoencoder reconstructs the input from the latent-space representation with a slight variation from the original data, indicated by a reconstruction error measured as Mean Squared Error (MSE); 3) evaluation: the MSE values of the training and test datasets are compared, and anomalous data with MSE values higher than a base threshold are detected as anomalies; 4) validation: efficiency metrics such as accuracy and MSE scores are used in both the training and validation phases. The dataset was further classified as benign and malignant. The reported accuracy for the outlier detection and classification tasks is 98% and 97.2%, respectively. Thus, the proposed autoencoder-based anomaly detection could effectively isolate anomalies from CT scan images of lung cancer.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_17-Anomaly_Detection_on_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Note on Time and Space Complexity of RSA and ElGamal Cryptographic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120716</link>
        <id>10.14569/IJACSA.2021.0120716</id>
        <doi>10.14569/IJACSA.2021.0120716</doi>
        <lastModDate>2021-07-31T15:09:06.0400000+00:00</lastModDate>
        
        <creator>Adeniyi Abidemi Emmanuel</creator>
        
        <creator>Okeyinka Aderemi E</creator>
        
        <creator>Adebiyi Marion O</creator>
        
        <creator>Asani Emmanuel O</creator>
        
        <subject>RSA algorithm; ElGamal algorithm; time complexity; space complexity; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The computational complexity study of algorithms is highly germane to the design and development of high-speed computing devices. The whole essence of computation is principally influenced by the efficiency of algorithms; this is especially the case for algorithms whose solution space explodes exponentially, of which cryptographic algorithms are good examples. The goal of this study is to compare the computational speeds of the RSA and ElGamal cryptographic algorithms by surveying the work done so far by researchers. This study has therefore examined some of the results of the studies already done and highlighted which of the RSA and ElGamal algorithms performed better under given parameters. It is expected that this study will spur further investigation of the behaviour of cryptographic structures in order to ascertain their complexity and impact on the field of theoretical computer science. The experimental results of many of the papers reviewed showed that the RSA algorithm performs better with regard to energy usage, time complexity and space complexity for text, image and audio data during the encryption process, while some studies showed that ElGamal performs better in terms of time complexity during the decryption process.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_16-A_Note_on_Time_and_Space_Complexity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Truck Scheduling Model in the Cross-docking Terminal by using Multi-agent System and Shortest Remaining Time Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120715</link>
        <id>10.14569/IJACSA.2021.0120715</id>
        <doi>10.14569/IJACSA.2021.0120715</doi>
        <lastModDate>2021-07-31T15:09:06.0100000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <subject>Truck scheduling; cross-docking system; multi agent system; shortest remaining time; intelligent supply chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>One of the most important and critical problems in a cross-docking system is truck scheduling. Many studies assume that the temporary storage is unlimited, whereas in the real world it is limited. Many studies focus on minimizing total completion time, while studies that focus on minimizing temporary storage are hard to find, although this aspect is very important. Due to its complexity, especially in cross-docking systems with multi-product characteristics, manual scheduling can hardly achieve these goals. Many studies used techniques such as the genetic algorithm (GA) and mixed integer programming, which are computationally expensive. Based on this problem, in this work we propose a new truck scheduling model for a cross-docking terminal with a limited temporary storage constraint. The model is developed using a multi-agent system. The main contribution of this work is a multi-agent-based truck scheduling model with a limited temporary storage capacity constraint and a temporary truck changeover permit. The model contains three agents: an inbound-trucks scheduler agent, an outbound-trucks scheduler agent, and a material handler agent. The shortest remaining time (SRT) algorithm is adopted in every agent. Based on the simulation results, the proposed model is competitive with the existing FIFO-based models and the integer-programming-based model. Compared with the integer-programming model, it achieves a 41.8 percent lower maximum inventory level; compared with the FIFO-based models, the maximum inventory level is 52.1 to 55.1 percent lower. In terms of total time, the proposed model is 0.2 to 2.2 percent lower than the FIFO-based models and 7.2 percent higher than the integer-programming-based model.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_15-Truck_Scheduling_Model_in_the_Cross_docking_Terminal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Snapshot of Energy Optimization Techniques to Leverage Life of Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120714</link>
        <id>10.14569/IJACSA.2021.0120714</id>
        <doi>10.14569/IJACSA.2021.0120714</doi>
        <lastModDate>2021-07-31T15:09:05.9930000+00:00</lastModDate>
        
        <creator>Kavya A P</creator>
        
        <creator>D J Ravi</creator>
        
        <subject>Battery; energy efficiency; energy optimization; network lifetime; sensor node; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Energy optimization in Wireless Sensor Networks (WSN) deals with techniques that target a higher degree of energy efficiency using resource-constrained sensor nodes with minimal inclusion of additional resources. At present, there are various approaches and techniques for addressing the energy problem, but not all research efforts can be considered optimized approaches. Therefore, this paper reviews the existing energy optimization schemes, categorizes them, and briefly discusses their strengths and weaknesses to offer a compact snapshot of existing energy optimization techniques in WSN. The paper also explores current research trends and highlights open research problems in WSN. It is anticipated that the findings of this manuscript will offer a true picture of how effectively existing studies deal with energy challenges, so that favorable directions of investigation toward an optimized solution with promising outcomes can emerge.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_14-Snapshot_of_Energy_Optimization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-based Smart Greenhouse with Disease Prediction using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120713</link>
        <id>10.14569/IJACSA.2021.0120713</id>
        <doi>10.14569/IJACSA.2021.0120713</doi>
        <lastModDate>2021-07-31T15:09:05.9770000+00:00</lastModDate>
        
        <creator>Neda Fatima</creator>
        
        <creator>Salman Ahmad Siddiqui</creator>
        
        <creator>Anwar Ahmad</creator>
        
        <subject>Cloud; deep learning; greenhouse; humidity; IoT; soil moisture; temperature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Rapid industrialization and urbanization have led to a decrease in agricultural land and productivity worldwide. Combined with the increasing demand for chemical-free organic vegetables from educated urban households, greenhouses are quickly catching on for their specialized advantages, especially in countries with extreme weather. They provide an ideal environment for longer and more efficient growing seasons and ensure profitable harvests. The present paper designs and demonstrates a comprehensive IoT-based smart greenhouse system that implements a novel combination of monitoring, alerting, cloud storage, automation and disease prediction, i.e., a readily deployable complete package. It continuously keeps track of ambient conditions such as temperature, humidity and soil moisture to ensure a higher crop yield and immediate redressal in case of abnormal conditions. It also has a built-in automatic irrigation management system. Finally, it employs the most efficient deep learning model for disease identification from leaf images. Furthermore, with memory and storage optimization through cloud storage, an individual living in the city can build a greenhouse, monitor it from home, and take redressal measures as and when desired.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_13-IoT_based_Smart_Greenhouse_with_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-parameter Coordinated Public School Admission Model by using Stable Marriage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120712</link>
        <id>10.14569/IJACSA.2021.0120712</id>
        <doi>10.14569/IJACSA.2021.0120712</doi>
        <lastModDate>2021-07-31T15:09:05.9470000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <subject>School admission; school choice; stable marriage; deferred-acceptance; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>School admission is a very important process in improving education quality. Meanwhile, one problem in school admission systems is mismatch: there are unassigned applicants and unallocated seats. In Indonesia, a zone-based model is adopted in the public-school admission system, where students are assigned to their nearest school. Besides location, students’ academic performance and economic level are also considered. Based on this, this work proposes a coordinated public school admission model that accommodates a flexible number of parameters. It is built on the stable marriage algorithm, or its derivative the deferred-acceptance algorithm. The proposed model is a combination of the mandatory approach and the school-choice approach. The parameters considered are school-home distance, student national exam score, school rank, applicant poor status, and applicant’s preference. A simulation is conducted to compare the performance of the proposed model with the previous models: the zone-based model and the two-step model. The prioritization of the parameters is shown to be easily adjustable. The simulation results show that under over-demand conditions the proposed model yields a higher average student national exam score and a higher average school-home distance than the previous models. When the number of applicants is twice the number of seats, the proposed model yields a 6.6 percent higher average student national exam score and a 71.4 percent higher average school-home distance. The simulation results also show that the mismatch is solved.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_12-Multi_parameter_Coordinated_Public_School_Admission_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WorkStealing Algorithm for Load Balancing in Grid Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120711</link>
        <id>10.14569/IJACSA.2021.0120711</id>
        <doi>10.14569/IJACSA.2021.0120711</doi>
        <lastModDate>2021-07-31T15:09:05.9300000+00:00</lastModDate>
        
        <creator>Hadeer S. Hossam</creator>
        
        <creator>Hala Abdel-Galil</creator>
        
        <creator>Mohamed Belal</creator>
        
        <subject>Grid computing; static scheduling; dynamic scheduling; load balancing; directed acyclic graph (DAG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Grid computing is a computer network in which many resources and services are shared to perform a specific task. The term grid appeared in the mid-1990s, and owing to the computational capability, efficiency and scalability provided by the shared resources, it is used nowadays in many areas, including business, e-libraries, e-learning, military applications, medicine, physics, and genetics. In this paper, we propose WorkStealing-Grid Cost Dependency Matrix (WS-GCDM), which schedules DAG tasks according to their data transfer cost, the dependencies between tasks, and the load of the available resources. The WS-GCDM algorithm is an enhanced version of the GCDM algorithm. WS-GCDM balances the load among all available resources in the grid system, unlike GCDM, which uses a specific number of resources regardless of how many are available. WS-GCDM yields a better makespan than the GCDM algorithm and enhances system performance by 13% up to 17% when the algorithms are evaluated on DAGs with dependent tasks.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_11-WorkStealing_Algorithm_for_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy MCDM Approach for Structured Comparison of the Health Literacy Level of Hospitals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120710</link>
        <id>10.14569/IJACSA.2021.0120710</id>
        <doi>10.14569/IJACSA.2021.0120710</doi>
        <lastModDate>2021-07-31T15:09:05.9000000+00:00</lastModDate>
        
        <creator>Abed Saif Ahmed Alghawli</creator>
        
        <creator>Adel A. Nasser</creator>
        
        <creator>Mijahed N. Aljober</creator>
        
        <subject>Health literacy; the organizational health literacy standard; fuzzy analytic hierarchy process; fuzzy Delphi method; structured comparison</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The primary objective of this study is to develop a hybrid multi-criteria decision-making (MCDM) model to evaluate and compare the organizational health literacy responsiveness (OHLR) level of hospitals. To achieve this goal, health literacy performance indicators are selected, some potential uses of single and hybrid MCDM and qualitative approaches for structured comparison are illustrated, and a common hybrid approach based on the Fuzzy Analytic Hierarchy Process and the fuzzy Delphi method is chosen, developed, and applied. To compare the proposed model with its classical non-fuzzy version (Qualitative-AHP), a case study on the effect of their implementation on structured comparison decisions is conducted, and the Bland-Altman agreement method is applied to compare the results obtained by the two models. The results demonstrate the suitability of both hybrid approaches for solving the problem. They also show that the two approaches lead to distinctive outcomes: robust fuzzy-based outcomes, a small agreement interval (&lt; 0.0113), and a small average change in the rates of the hospitals (&lt; 2.08%) are observed between the results acquired by the fuzzy-based approach and those defined by the other model. Based on these results, a fuzzy-based model is recommended for structured comparison of the OHLR level of hospitals under uncertainty. It supports sustainable planning practices, helps with improvement, and effectively distributes the necessary resources.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_10-A_Fuzzy_MCDM_Approach_for_Structured_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of Adaptive Feedback on Student’s Learning Gains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120709</link>
        <id>10.14569/IJACSA.2021.0120709</id>
        <doi>10.14569/IJACSA.2021.0120709</doi>
        <lastModDate>2021-07-31T15:09:05.8830000+00:00</lastModDate>
        
        <creator>Andrew Thomas Bimba</creator>
        
        <creator>Norisma Idris</creator>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <creator>Salwa Ungku Ibrahim</creator>
        
        <creator>Naharudin Mustafa</creator>
        
        <creator>Izlina Supa’at</creator>
        
        <creator>Norazlin Zainal</creator>
        
        <creator>Mohd Yahya Ahmad</creator>
        
        <subject>Authoring tools and methods; evaluation of CAL systems; intelligent tutoring systems; teaching/learning strategies; pedagogical issues</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>There is an increase in the implementation of adaptive feedback models, which focus on the relationship between adaptive feedback and learning gains. This literature suggests that the complex relationship between feedback, task complexity, pedagogical principles, and student characteristics affects the significance of feedback effects. However, current studies have devoted insufficient attention to the effect of adaptive feedback characteristics on students’ learning gains. Thus, there is a need to investigate the effect of multiple adaptive feedback characteristics on students’ learning gains. The proposed adaptive feedback model supports the retrieval of appropriate feedback for students based on established weights between related concepts. In a comparison of three experimental groups, students who were provided with adaptive feedback showed learning gains and normalized learning gains of 0.87 and 0.05 over the normal-feedback group, and of 0.97 and 0.07 over the no-feedback group. This research yielded better outcomes than previous similar studies.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_9-The_Effects_of_Adaptive_Feedback_on_Students_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Structural Limitations with K Means Algorithms in Research in Per&#250;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120708</link>
        <id>10.14569/IJACSA.2021.0120708</id>
        <doi>10.14569/IJACSA.2021.0120708</doi>
        <lastModDate>2021-07-31T15:09:05.8530000+00:00</lastModDate>
        
        <creator>Javier Pedro Flores Arocutipa</creator>
        
        <creator>Jorge Jinchu&#241;a Huallpa</creator>
        
        <creator>Julio C&#233;sar Lujan Minaya</creator>
        
        <creator>Ruth Daysi Cohaila Quispe</creator>
        
        <creator>Juan Luna Carpio</creator>
        
        <creator>Gamaniel Carbajal Navarro</creator>
        
        <subject>Researchers; PBIpc; investment in I&amp;D; exports; universities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>In the world of science there are high-level, moderate-level, and low-level emerging countries. The indicators are investment in research and development (I&amp;D), number of universities, investment, researchers, intellectual production, expenditure on education, gross domestic product (PBI), and quality of life (IDH). The methodology is basic and explanatory, based on cluster analysis. Thirty-seven countries are analyzed using 11 indicators, with data drawn from the FMI, datosmacro.com, UNESCO, and URWU, taken at two points in time, 2006 and 2019. The results show R2 = 0.9887, which explains the behavior of the PBI by the investment in I&amp;D. The positive and significant relationship between IDH and PBI per capita, 0.824, is noteworthy. In conclusion, there are three clusters with clearly differentiated indicators. Peru’s problem is structural in that it does not have a per capita PBI of $30,000 per person or more; investment in I&amp;D in Peru is low, and PBI is also low. Countries with higher investment in science therefore have high PBIs and better IDH.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_8-Structural_Limitations_with_K_Means_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Requirements Engineering: A State of Practice in Gulf Cooperation Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120707</link>
        <id>10.14569/IJACSA.2021.0120707</id>
        <doi>10.14569/IJACSA.2021.0120707</doi>
        <lastModDate>2021-07-31T15:09:05.8370000+00:00</lastModDate>
        
        <creator>Asaad Alzayed</creator>
        
        <creator>Abdulwahed Khalfan</creator>
        
        <subject>Requirements engineering; project success; software development; requirements engineering practices; GCC countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Requirements Engineering (RE) is one of the crucial elements of successful software development. Nevertheless, in terms of research discussing the failure or success of various products, little has been undertaken to examine this area as it pertains to the Gulf Cooperation Council (GCC) nations, i.e., Saudi Arabia (KSA), Kuwait, United Arab Emirates (UAE), Bahrain, Qatar, and Oman. The aim of this research is to present an analysis of the current ways in which software is developed in these nations. The researchers undertook a survey of practitioners in software development, asking questions regarding their recent work. The survey was based on an extensive earlier survey that was adapted in view of contemporary software development practice. The research reports on requirements practices and how they relate to project sponsors/customers/users and project management. The respondents came from GCC-nation companies, most of whom worked on developing software in-house. The outcomes demonstrate that the majority of IT companies in these nations do not employ optimal requirements engineering methodologies, relying instead on their own. In addition, project managers often lack complete authority. Comparing our findings with past research, requirements engineering practice is still inadequate in these nations. Thus the research results are particularly useful, as the data is derived from countries where published research about software development practices is scant.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_7-Requirements_Engineering_A_State_of_Practice_in_Gulf.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Is Face Recognition with Masks Possible?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120706</link>
        <id>10.14569/IJACSA.2021.0120706</id>
        <doi>10.14569/IJACSA.2021.0120706</doi>
        <lastModDate>2021-07-31T15:09:05.8070000+00:00</lastModDate>
        
        <creator>Yaaseen Muhammad Saib</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>Face detection; face recognition; face mask; deep learning; VGG16; MobileNetV2; HOG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>With the recent outbreak of the COVID-19 pandemic, wearing face masks has become extremely important to protect us and to reduce the spread of the virus. This measure has made many existing face recognition systems ineffective, as they were trained to work with unmasked faces. In this paper, several methods are proposed for masked face recognition. Two pre-trained deep learning architectures (VGG16 and MobileNetV2) and the Histogram of Oriented Gradients (HOG) technique were used to extract the relevant features from face images of celebrities. A SoftMax layer and Support Vector Machines (SVM) were used for classification. Five scenarios were devised to assess the different models and approaches. With an accuracy of 96.8%, the best model was obtained with MobileNetV2 with a SoftMax layer on the dataset consisting of a mixture of masked and unmasked images. Three different types of masks were also used in this study. The mean accuracy was 91.35% when the same type of mask was used for training and testing; however, the accuracy dropped by an average of 5.6% when a different type of mask was used for training and testing. A contactless attendance system using the best masked face recognition model has also been implemented.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_6-Is_Face_Recognition_with_Masks_Possible.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Network Steganography Detection based on Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120705</link>
        <id>10.14569/IJACSA.2021.0120705</id>
        <doi>10.14569/IJACSA.2021.0120705</doi>
        <lastModDate>2021-07-31T15:09:05.7730000+00:00</lastModDate>
        
        <creator>Cho Do Xuan</creator>
        
        <creator>Lai Van Duong</creator>
        
        <subject>Network steganography; network steganography detection method; abnormal packets; deep learning techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>One of the techniques that current cyber-attack methods often use to steal and exfiltrate data is hiding secret data in packets; this is the network steganography technique. Because millions of packets are sent and received every hour on the internet, it is very difficult to detect the theft and transmission of system data in this form. Recent approaches often seek to compute and extract abnormal packet behaviors in order to detect a steganography protocol or technique. However, such methods cannot detect abnormal packets when an attacker uses other steganography techniques. To solve this problem, this paper proposes a network steganography detection method using deep learning techniques. The highlight of this study is a set of newly proposed features based on different components of the packet. By combining these many components, the proposal not only provides the ability to detect many steganography techniques in the network, but also improves the ability to accurately detect abnormal packets. In addition, this study proposes using deep learning for the task of detecting normal and abnormal packets, taking advantage of the big-data analysis and processing capabilities of deep learning models to improve the ability to analyze and detect network steganography techniques. The experimental results in Section IV-D prove the effectiveness of the proposed method compared with other approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_5-A_New_Approach_for_Network_Steganography_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Intelligent Tools for Detecting Resource-intensive Database Queries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120704</link>
        <id>10.14569/IJACSA.2021.0120704</id>
        <doi>10.14569/IJACSA.2021.0120704</doi>
        <lastModDate>2021-07-31T15:09:05.7600000+00:00</lastModDate>
        
        <creator>Salah M.M. Alghazali</creator>
        
        <creator>Konstantin Polshchykov</creator>
        
        <creator>Ahmad M. Hailan</creator>
        
        <creator>Lyudmila Svoykina</creator>
        
        <subject>Resource-intensive queries; database; detecting; self-organizing Kohonen maps; statistical parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The detection of resource-intensive queries, which consume an excessive amount of time, processor, disk, and memory resources, addresses one of the most common vulnerabilities of Database Management Systems (DBMS). The query monitoring and optimization tools typically used in modern DBMS were analyzed, and their shortcomings were identified. Subsequently, the relevance of developing new intelligent tools for timely and reliable detection of resource-intensive database queries was justified. The study identified an extended set of statistical parameters of interest for identifying resource-intensive queries. The initial set of query parameters was reduced by two consecutive methods: first, normalizing the set of indicators using a sigmoid function; second, selecting a finite number of principal components based on the Cattell test. The clustering of the query set was then performed using self-organizing Kohonen maps. In light of the study’s conclusions, suggestions for further work on classification algorithms were indicated.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_4-Development_of_Intelligent_Tools_for_Detecting_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vietnamese Short Text Classification via Distributed Computation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120703</link>
        <id>10.14569/IJACSA.2021.0120703</id>
        <doi>10.14569/IJACSA.2021.0120703</doi>
        <lastModDate>2021-07-31T15:09:05.7270000+00:00</lastModDate>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <creator>Linh Xuan Dang</creator>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Cang Thuong Phan</creator>
        
        <subject>Short text classification; na&#239;ve bayes; apache spark; vietnamese; distributed computation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Social networking has been growing rapidly in Vietnam. The shared information is diverse and circulates in many forms, which calls for user-friendly solutions such as topic sorting and perspective analysis for analyzing community trends and advertisements, or for anticipating and monitoring the spread of bad news. Unfortunately, Vietnamese differs greatly from other languages, and little research on message classification has been reported in the literature. The implementation of machine learning models for Vietnamese has not been thoroughly investigated, and the performance of these models is unknown when they are applied to a different language. Vietnamese text is a serialization of syllables; hence, word boundary identification is not trivial. This research portrays our endeavor to construct an effective distributed framework for classifying short Vietnamese texts on social networks using probabilistic categorization. The authors argue that addressing this task requires the successful combination of machine learning, natural language processing, and ambient intelligence. The proposed framework is effective and enables fast calculation, is suitable for implementation in Apache Spark, and meets the demand for dealing with large amounts of textual data on current social networks. Our data was collected from several online text sources: 12,412 short messages classified into 5 different topics. The evaluation shows that our approach achieves an average classification accuracy of 82.73%. Based on a thorough review of the literature, we can state that this is the first attempt to classify short Vietnamese messages under a distributed computation framework.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_3-Vietnamese_Short_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SmartTS: A Component-Based and Model-Driven Approach to Software Testing in Robotic Software Ecosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120702</link>
        <id>10.14569/IJACSA.2021.0120702</id>
        <doi>10.14569/IJACSA.2021.0120702</doi>
        <lastModDate>2021-07-31T15:09:05.7130000+00:00</lastModDate>
        
        <creator>Vineet Nagrath</creator>
        
        <creator>Christian Schlegel</creator>
        
        <subject>Model-Driven Engineering (MDE); Component-Based Software Engineering (CBSE); Model-Driven Testing (MDT); Component-Based Software Testing (CBST); Service Robotics; Software Quality; Automated Software Testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Validating the behaviour of commercial off-the-shelf components and of interactions between them is a complex, and often a manual, task. Treated like any other software product, a software component for a robot system is often tested only by the component developer. Test sets and results are often not available to the system builder, who may need to verify functional and non-functional claims made by the component. Availability of test records is key in establishing compliance and thus in selecting the most suitable components for system composition. Providing empirically verifiable test records consistent with a component’s claims would greatly improve the overall safety and dependability of robotic software systems in open-ended environments. Additionally, a test and validation suite for a system, built from the model package of that system, empirically codifies its behavioural claims. In this paper, we present the “SmartTS methodology”: a component-based and model-driven approach to generate model-bound test-suites for software components and systems. SmartTS methodology and tooling are not restricted to the robotics domain. The core contribution of SmartTS is support for test and validation suites derived from the model packages of components and systems. The test-suites in SmartTS are tightly bound to an application domain’s data and service models as defined in the RobMoSys (EU H2020 project) compliant SmartMDSD toolchain. SmartTS does not break component encapsulation for system builders, while providing them complete access to the way the component is tested and simulated.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_2-SmartTS_A_Component_Based_and_Model_Driven_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge-based Video Analytic for Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120701</link>
        <id>10.14569/IJACSA.2021.0120701</id>
        <doi>10.14569/IJACSA.2021.0120701</doi>
        <lastModDate>2021-07-31T15:09:05.6800000+00:00</lastModDate>
        
        <creator>Dipak Pudasaini</creator>
        
        <creator>Abdolreza Abhari</creator>
        
        <subject>Video analytic; cloud computing; smart city; object detection; object tracking; edge network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Video analytics is an important tool for smart city development. Video analytics applications require large amounts of memory and high-performance processing devices. The problems of the cloud-based approach to video analytics are high latency and the large network bandwidth needed to transfer data to the cloud. To overcome these problems, we propose a model based on dividing the jobs of a typical video analytics application into smaller sub-tasks with lower processing requirements for the development of smart cities. An object detection, tracking, and pattern recognition method that reduces the size of videos based on an edge network is proposed. We design a video analytics model and perform simulations using the iFogSim simulator. We also propose a Convolutional Neural Network (CNN) based object tracking model. The experimental verification shows that our tracking model is more than 96% accurate, and that the proposed edge-and-cloud-based model is more than 80% more effective than a cloud-only approach for video analytics applications.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_1-Edge_based_Video_Analytic_for_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-time Driver Drowsiness Detection using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120794</link>
        <id>10.14569/IJACSA.2021.0120794</id>
        <doi>10.14569/IJACSA.2021.0120794</doi>
        <lastModDate>2021-07-31T15:09:05.6500000+00:00</lastModDate>
        
        <creator>Md. Tanvir Ahammed Dipu</creator>
        
        <creator>Syeda Sumbul Hossain</creator>
        
        <creator>Yeasir Arafat</creator>
        
        <creator>Fatama Binta Rafiq</creator>
        
        <subject>Deep learning; drowsiness detection; object detection; MobileNets; Single Shot Multibox Detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Every year thousands of lives are lost worldwide to vehicle accidents, and the main reason behind this is driver drowsiness. A drowsiness detection system will help reduce these accidents and save many lives around the world. To address this problem, we propose a methodology based on Convolutional Neural Networks (CNN) that frames drowsiness detection as an object detection task: it detects and localizes whether the eyes are open or closed based on a real-time video stream of the driver. The MobileNet CNN architecture with a Single Shot Multibox Detector (SSD) is the technology used for this object detection task, and a separate algorithm processes the output of the SSD_MobileNet_v1 architecture. A dataset of around 4,500 images was labeled with the objects face yawn, no-yawn, open eye, and closed eye to train the SSD_MobileNet_v1 network. Around 600 randomly selected images were used to test the trained model using the PASCAL VOC metric. The proposed approach ensures better accuracy and computational efficiency. It is also affordable, as it can process incoming video streams in real time and does not need any expensive hardware support: only a standalone camera is required, and the system can be implemented on cheap in-car devices such as a Raspberry Pi 3 or other IP cameras.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_94-Real_time_Driver_Drowsiness_Detection_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Learning Analytics Dashboard based on Moodle Learning Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120793</link>
        <id>10.14569/IJACSA.2021.0120793</id>
        <doi>10.14569/IJACSA.2021.0120793</doi>
        <lastModDate>2021-07-31T15:09:05.6170000+00:00</lastModDate>
        
        <creator>Ong Kiat Xin</creator>
        
        <creator>Dalbir Singh</creator>
        
        <subject>Learning analytics; learning management system; moodle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>Digitalization catalyzes drastic changes in a particular subject or area; it is a process of transforming operational structures, as in the educational domain. Digitalization in the academic field has brought the classroom to users&#39; fingertips with the prevalence of e-learning applications, learning management systems, and similar tools. However, with the increasing number of digital learning platform users, educators find it hard to monitor their students&#39; progress. Analytics that analyze data generated from the usage patterns of users give educators insight into the performance of their students. With that, they can apply early intervention, modify their delivery method to suit the students&#39; needs, and, at the same time, increase the quality of the content. This study illustrates the development of a learning analytics dashboard that can improve learning outcomes for educators and students.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_93-Development_of_Learning_Analytics_Dashboard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review of the Types of Authentication Safety Practices among Internet Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120792</link>
        <id>10.14569/IJACSA.2021.0120792</id>
        <doi>10.14569/IJACSA.2021.0120792</doi>
        <lastModDate>2021-07-31T15:09:05.5870000+00:00</lastModDate>
        
        <creator>Krishnapriyaa Kovalan</creator>
        
        <creator>Siti Zobidah Omar</creator>
        
        <creator>Lian Tang</creator>
        
        <creator>Jusang Bolong</creator>
        
        <creator>Rusli Abdullah</creator>
        
        <creator>Akmar Hayati Ahmad Ghazali</creator>
        
        <creator>Muhammad Adnan Pitchan</creator>
        
        <subject>Password authentication; biometric authentication; multi-factor authentication; information security; safety practices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The authentication system is one of the most important methods for maintaining information security on smart devices. There are many authentication methods, such as password authentication, biometric authentication, and signature authentication, to protect cloud users’ data. However, online information is not yet effectively authenticated. The purpose of this systematic literature review is to examine the current types of authentication methods as safety practices for information security among Internet users. The PRISMA method was adopted to present a systematic literature review of 28 articles from three main databases (20 articles from Scopus, one article from Google Scholar, and seven articles from Dimension). This study used the Prediction Study Risk of Bias Assessment Tool to appraise the quality of the included studies. From the findings, a total of three main themes were identified: password authentication, biometric authentication, and multiple-factor authentication. Multiple-factor authentication was found to be the most secure and most frequently recommended authentication method. It is highly recommended to implement three-factor authentication and multi-biometric models in the future, as they provide a higher level of protection in terms of information security among cloud computing users.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_92-A_Systematic_Literature_Review_of_the_Types_of_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System Design and Case Study Reporting on AQASYS: A Web-based Academic Quality Assurance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120791</link>
        <id>10.14569/IJACSA.2021.0120791</id>
        <doi>10.14569/IJACSA.2021.0120791</doi>
        <lastModDate>2021-07-31T15:09:05.5570000+00:00</lastModDate>
        
        <creator>Adel Alfozan</creator>
        
        <creator>Mohammad Ali Kadampur</creator>
        
        <subject>Outcome-based education; quality standards; automated software; system design; education technology; accreditation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The demands of modern education have evolved from teacher-centric to learner-centric requirements. Knowledge, skill, and competence are the most sought-after attributes of a graduate. Features such as the objective focus of learning, curriculum planning, a set of high expectations, and extended opportunities for the learner after completion of education are at the center of all planning. It is all about skill-oriented, outcome-based standardization that has been infused by societal stakeholders into the modern global education system to create work-ready human capital. In this paper, a software product for academic quality assurance is presented. The software provides a generic framework to any educational institution that operates to implement known international standards of education. The software accepts the data and computes the quality parameters as per the selected standards. It has an analytical module that provides summary analytics and generates course reports in the given format automatically. The software is tested with a case study and results are presented. The paper also presents the system design approach, with discussion of the technologies selected for the development.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_91-System_Design_and_Case_Study_Reporting_on_AQASYS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Framework for Enterprise Financial Data Pre-processing and Secure Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120790</link>
        <id>10.14569/IJACSA.2021.0120790</id>
        <doi>10.14569/IJACSA.2021.0120790</doi>
        <lastModDate>2021-07-31T15:09:05.4930000+00:00</lastModDate>
        
        <creator>Sirisha Alamanda</creator>
        
        <creator>Suresh Pabboju</creator>
        
        <creator>G. Narasimha</creator>
        
        <subject>Financial data pre-processing; outlier treatment; missing value treatment; regression; differential iterations; iterative clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(7), 2021</description>
        <description>The analysis of financial data is highly crucial and critical, as the results or conclusions communicated on the basis of such analysis can have a great impact on personal- and enterprise-scale business processes. The primary source of financial data is the business process, and the data are often collected by automation tools deployed at various points of the business process data flow. The data entered into the business process are entered primarily by the stakeholders of the process, and at various levels of the process the data are modified, translated, and sometimes completely transformed, due to which impurities or anomalies are introduced into the data. These impurities, such as outliers and missing values, strongly affect the final decision made after processing these datasets. Hence, appropriate pre-processing of financial data is a pressing research demand. A good number of parallel research outcomes that address these problems can be observed. Nonetheless, the majority of the solutions are either highly time-complex or insufficiently accurate. Thus, this work proposes an automated framework for the identification and imputation of outliers using an iterative clustering method, the identification and imputation of missing values using a differential-count-based binary iterations method, and finally secure data storage using regression-based key generation. The proposed framework has showcased nearly 100% accuracy in the detection of outliers and missing values with highly improved time complexity.</description>
        <description>http://thesai.org/Downloads/Volume12No7/Paper_90-An_Automated_Framework_for_Enterprise_Financial_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reputation Measurement based on a Hybrid Sentiment Analysis Approach for Saudi Telecom Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206107</link>
        <id>10.14569/IJACSA.2021.01206107</id>
        <doi>10.14569/IJACSA.2021.01206107</doi>
        <lastModDate>2021-07-07T13:46:19.8130000+00:00</lastModDate>
        
        <creator>Bayan Abdullah</creator>
        
        <creator>Nouf Alosaimi</creator>
        
        <creator>Sultan Almotiri</creator>
        
        <subject>Reputation; sentiment analysis; Arabic language; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Thousands of active people on social media daily share their thoughts and opinions about different subjects and issues. Many social media platforms are used to express feelings or opinions, and at the top of them is Twitter. On Twitter, opinions are expressed in many fields such as movies, events, products, and services; these data are considered a valuable resource for companies and decision-makers to help in making decisions. This study is based on using a hybrid approach to extract opinions from Arabic tweets in order to measure service providers&#39; reputation. The Saudi telecom companies are used as a case study. This research concentrates on determining people&#39;s opinions more accurately by utilizing the Retweet and Favorite counts. The number resulting from positive and negative tweets after applying the polarity equation was used to estimate reputation scores. The results indicated that the STC company has a high reputation compared to the other companies. The proposed approach shows promising results for expanding existing knowledge of sentiment analysis in the domain of reputation measurement.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_107-Reputation_Measurement_based_on_a_Hybrid_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware Infection on Infrastructure Hosted in Iaas Cloud using Cloud Visibility and Forensics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206106</link>
        <id>10.14569/IJACSA.2021.01206106</id>
        <doi>10.14569/IJACSA.2021.01206106</doi>
        <lastModDate>2021-06-30T12:15:25.1730000+00:00</lastModDate>
        
        <creator>Lama Almadhoor</creator>
        
        <creator>A. A. Abd El-Aziz</creator>
        
        <creator>Hedi Hamdi</creator>
        
        <subject>Malware attacks; infrastructure as a service (IaaS); amazon web services (AWS); malware detection; cloud forensics; visibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Cloud computing has been adopted very rapidly by organizations of different businesses and sizes, and the use of cloud services, especially IaaS services, is rising at an unparalleled rate these days as cloud providers offer more powerful resources with flexible offerings and models. This rapid adoption opens new attack surfaces that attackers abuse with their malware to take advantage of these powerful resources and the valuable data that reside on them. Therefore, for organizations to defend well against malware attacks, they need to have full visibility not only into their data centers but also into their resources hosted on the cloud, and they must not take their security for granted. This paper discusses and aims to provide the best approaches to achieve continuous monitoring of malware attacks on the cloud along with their phases (before, during, and after) and the limitations of the techniques available today, suggesting needed developments. Logging and forensics techniques have always been the cornerstone of achieving continuous monitoring and detection of malware attacks on-premises; this paper defines the best methods to bring logging and forensics to the cloud and integrate them with on-premises visibility, thus achieving full monitoring over the whole security posture of the organization&#39;s assets, whether they are on-premises or on the cloud.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_106-Detecting_Malware_Infection_on_Infrastructure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Most Secure Cryptographic Scheme for Lightweight Environment using Elliptic Curve and Trigonohash Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206105</link>
        <id>10.14569/IJACSA.2021.01206105</id>
        <doi>10.14569/IJACSA.2021.01206105</doi>
        <lastModDate>2021-06-30T12:15:25.1600000+00:00</lastModDate>
        
        <creator>Bhaskar Prakash Kosta</creator>
        
        <creator>Pasala Sanyasi Naidu</creator>
        
        <subject>Internet of Things (IoT); authentication; one way hash function; lightweight environment; secret key</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The Internet of Things (IoT) is an emerging technology that networks all devices that can be accessed through the Internet. As a core enabling technology of the IoT, wireless sensor networks (WSNs) can be used to collect the key environmental parameters for specific applications. In light of the resource limitations of sensor devices and the open nature of the wireless channel, security has become a major challenge in WSNs. Authentication, as an essential security service, can be used to guarantee the legitimacy of data access in WSNs. The proposed three-factor system using a one-way hash function is characterized by low computational cost and limited overhead, while achieving all other benefits. Session keys are generated from the secret key to improve security. We compared the scheme&#39;s security and performance with some lightweight schemes. According to the analysis, the proposed scheme can provide greater security with low communication overhead. Encryption and decryption are done using mathematical concepts and the idea of a hash function; these mathematical concepts are lightweight and raise security by a great degree by diminishing the chances of cryptanalysis. When contrasted with other algorithms, the proposed algorithm gives better performance results.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_105-Design_and_Implementation_of_a_Most_Secure_Cryptographic_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wheelchair Control System based Eye Gaze</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206104</link>
        <id>10.14569/IJACSA.2021.01206104</id>
        <doi>10.14569/IJACSA.2021.01206104</doi>
        <lastModDate>2021-06-30T12:15:25.1430000+00:00</lastModDate>
        
        <creator>Samar Gamal Amer</creator>
        
        <creator>Rabie A. Ramadan</creator>
        
        <creator>Sanaa A. kamh</creator>
        
        <creator>Marwa A. Elshahed</creator>
        
        <subject>Dlib; NumPy; gaze ratio; facial landmark points; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The inability to control the limbs is the main factor that affects the daily activities of disabled people and causes social restriction and isolation. Many studies have been performed to help them communicate easily with the outside world and others, and various techniques have been designed to help the disabled carry out daily activities easily. Among these technologies is the smart wheelchair. This research aims to develop a smart eye-controlled wheelchair whose movement depends on eye movement tracking. The proposed wheelchair is simple in design and easy to use, with a low cost compared with previous wheelchairs. Eye movement is detected through a camera fixed on the chair. The user&#39;s gaze direction is obtained from the captured image after some processing and analysis. The order is then sent to the Arduino Uno board, which controls the wheelchair movement. The wheelchair&#39;s performance was checked with different volunteers, and its accuracy reached 94.4% with a very short response time compared with other existing chairs.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_104-Wheelchair_Control_System_based_Eye Gaze.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Self-adaptive Algorithm for Solving Basis Pursuit Denoising Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206103</link>
        <id>10.14569/IJACSA.2021.01206103</id>
        <doi>10.14569/IJACSA.2021.01206103</doi>
        <lastModDate>2021-06-30T12:15:25.1270000+00:00</lastModDate>
        
        <creator>Mengkai Zhu</creator>
        
        <creator>Xu Zhang</creator>
        
        <creator>Bing Xue</creator>
        
        <creator>Hongchun Sun</creator>
        
        <subject>Basis pursuit denoising problem; algorithm; global convergence; sublinearly convergent rate; sparse signal recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In this paper, we further consider a method for solving the basis pursuit denoising problem (BPDP), which has received considerable attention in signal processing and statistical inference. To this end, a new self-adaptive algorithm is proposed and its global convergence result is established. Furthermore, we also show that the method has a sublinear convergence rate of O(1/k). Finally, the effectiveness of the given method is shown via some numerical examples.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_103-A_Self_adaptive_Algorithm_for_Solving_Basis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Stand-Alone and Hybrid CNN Models for COVID-19 Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206102</link>
        <id>10.14569/IJACSA.2021.01206102</id>
        <doi>10.14569/IJACSA.2021.01206102</doi>
        <lastModDate>2021-06-30T12:15:25.1130000+00:00</lastModDate>
        
        <creator>Wedad Alawad</creator>
        
        <creator>Banan Alburaidi</creator>
        
        <creator>Asma Alzahrani</creator>
        
        <creator>Fai Alflaj</creator>
        
        <subject>COVID-19; convolutional neural network; hybrid models; chest X-Ray; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The COVID-19 pandemic continues to impact both the international economy and individual lives. A fast and accurate diagnosis of COVID-19 is required to limit the spread of this disease and reduce the number of infections and deaths. However, a time-consuming biological test, Real-Time Reverse Transcription–Polymerase Chain Reaction (RT-PCR), is used to diagnose COVID-19. Furthermore, the test sometimes produces ambiguous results, especially when samples are taken in the early stages of the disease. As a potential solution, machine learning algorithms could help enhance the process of detecting COVID-19 cases. In this paper, we have provided a study that compares a stand-alone CNN model and hybrid machine learning models in their ability to detect COVID-19 from chest X-Ray images. We presented four models to classify such images into COVID-19 and normal. Visual Geometry Group (VGG-16) is the architecture used to develop the stand-alone CNN model. Each hybrid model consists of two parts: the VGG-16 as a feature extractor, and a conventional machine learning algorithm, such as support vector machines (SVM), Random Forests (RF), or Extreme Gradient Boosting (XGBoost), as a classifier. Even though several studies have investigated this topic, the dataset used in this study is considered one of the largest because we have combined five existing datasets. The results illustrate that there is no noticeable improvement in performance when hybrid models are used as an alternative to the stand-alone CNN model. The VGG-16 and (VGG16+SVM) models provide the best performance, with 99.82% model accuracy and 100% model sensitivity. In general, all four presented models are reliable, and the lowest accuracy obtained among them is 98.73%.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_102-A_Comparative_Study_of_Stand_Alone_and_Hybrid_CNN_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Data Services of Mobile Cloud Storage with Support for Large Data Objects using OpenStack Swift</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206101</link>
        <id>10.14569/IJACSA.2021.01206101</id>
        <doi>10.14569/IJACSA.2021.01206101</doi>
        <lastModDate>2021-06-30T12:15:25.1130000+00:00</lastModDate>
        
        <creator>Aslam B Nandyal</creator>
        
        <creator>Mohammed Rafi</creator>
        
        <creator>M Siddappa</creator>
        
        <creator>Babu B. Sathish</creator>
        
        <subject>Mobile cloud computing; mobile backend as a service; large files; distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Providing data services support for large file upload and download is increasingly vital for mobile cloud storage. There is an increase in mobile users whose data access trends show more access and larger file sharing. It is a challenging task for mobile application developers to handle uploading and retrieving large files to/from a mobile app because of difficulties with latency, bandwidth, speed, errors, and disruptions to service in a wireless mobile environment. Some scenarios require these large files to be used offline, sometimes to be updated by a single user, and sometimes to be shared among all other users. The wireless mobile environment must consider mobile users&#39; constraints, such as frequent disconnections and low bandwidth, which affect the ability to handle data and transaction management. The primary objective of this study is to propose a cloud-based Mobile Sync service (sometimes referred to as Mobile Backend as a Service) with OpenStack Swift object storage to manage large objects efficiently, using the two main techniques of segmentation and object chunking with compression in a mobile cloud environment. This work further contributes a prototype implementation of the proposed framework and provides an Application Programming Interface (API) consisting of Create, Read, and Delete queries, chunking operations, and a lightweight sync protocol that can manage large file synchronization and access. The experimental findings with the tested object-chunking size settings show that the proposed Mobile Sync framework can accommodate large files ranging from 100MB to 1GB and provides a decrease in upload/download synchronization times of 63.203%/92.987% as compared to other frameworks.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_101-Improving_Data_Services_of_Mobile_Cloud_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fish Disease Detection System: A Case Study of Freshwater Fishes of Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01206100</link>
        <id>10.14569/IJACSA.2021.01206100</id>
        <doi>10.14569/IJACSA.2021.01206100</doi>
        <lastModDate>2021-06-30T12:15:25.0970000+00:00</lastModDate>
        
        <creator>Juel Sikder</creator>
        
        <creator>Kamrul Islam Sarek</creator>
        
        <creator>Utpol Kanti Das</creator>
        
        <subject>K-means; c-means fuzzy logic; multi-SVM; detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The proposed system is designed for the automatic detection and classification of fish diseases in freshwater, especially in the Rangamati Kaptai Lake and Sunamganj Hoar areas of Bangladesh, where detecting fish diseases at an early stage is a challenging task for fisheries farming. This study presents fish disease detection based on the K-means and C-means fuzzy logic clustering methods to segment the filtered image. Gabor filters and the Gray Level Co-occurrence Matrix (GLCM) are used to extract features from the segmented regions. Finally, a Multi-Support Vector Machine (M-SVM) is used for classification of the test image. The proposed system includes a comparison between K-means clustering and C-means fuzzy logic: the methodology gave 96.48% accuracy using K-means and 97.90% using C-means fuzzy logic, which is the highest accuracy rate compared with other existing methods. The system was tested in the MATLAB environment on infected fish images from the Rangamati Kaptai Lake and Sunamganj Hoar areas. Our experimental results indicate that the proposed approach provides significantly accurate and automatic detection and recognition of fish diseases; it can detect and classify different fish diseases in early stages and contributes to improved results for fish disease detection.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_100-Fish_Disease_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Abusive Behavior Towards Religious Beliefs and Practices on Social Media Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120699</link>
        <id>10.14569/IJACSA.2021.0120699</id>
        <doi>10.14569/IJACSA.2021.0120699</doi>
        <lastModDate>2021-06-30T12:15:25.0800000+00:00</lastModDate>
        
        <creator>Tanvir Ahammad</creator>
        
        <creator>Md. Khabir Uddin</creator>
        
        <creator>Tamanna Yesmin</creator>
        
        <creator>Abdul Karim</creator>
        
        <creator>Sajal Halder</creator>
        
        <creator>Md. Mahmudul Hasan</creator>
        
        <subject>Social media; religious abuse detection; religious keywords; religious hatred; feature extraction; classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The ubiquitous use of social media has enabled many people, including religious scholars and priests, to share their religious views. Unfortunately, exploiting people&#39;s religious beliefs and practices, some extremist groups intentionally or unintentionally spread religious hatred among different communities and thus hamper social stability. This paper aims to propose an abusive behavior detection approach to identify hatred, violence, harassment, and extremist expressions against people of any religious belief on social media. For this, religious posts are first captured from social media users&#39; activities, and the abusive behaviors are then identified through a number of sequential processing steps. In the experiment, Twitter was chosen as an example of social media for collecting a dataset covering six major religions in the English Twittersphere. To show the performance of the proposed approach, five classic classifiers on an n-gram TF-IDF model have been used. Besides, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) classifiers on trained embedding and pre-trained GloVe word embedding models have been used. The experimental results showed 85% accuracy in terms of precision. To the best of our knowledge, this is the first work able to distinguish between hateful and non-hateful content not only in the religious context but also in other application domains on social media.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_99-Identification_of_Abusive_Behavior_Towards_Religious_Beliefs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Augmented Reality Technology in Modern Healthcare System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120698</link>
        <id>10.14569/IJACSA.2021.0120698</id>
        <doi>10.14569/IJACSA.2021.0120698</doi>
        <lastModDate>2021-06-30T12:15:25.0670000+00:00</lastModDate>
        
        <creator>Jinat Ara</creator>
        
        <creator>Faria Benta Karim</creator>
        
        <creator>Mohammed Saud A Alsubaie</creator>
        
        <creator>Yeasin Arafat Bhuiyan</creator>
        
        <creator>Muhammad Ismail Bhuiyan</creator>
        
        <creator>Salma Begum Bhyan</creator>
        
        <creator>Hanif Bhuiyan</creator>
        
        <subject>Augmented Reality (AR); healthcare applications; healthcare challenges; AR-based healthcare security issues; dynamic security solution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The recent advances of Augmented Reality (AR) in healthcare have shown that technology is a significant part of the current healthcare system. In recent years, augmented reality has enabled numerous intelligent applications in the healthcare domain, including wearable access, telemedicine, remote surgery, diagnosis of medical reports, emergency medicine, etc. These augmented healthcare applications aim to improve patient care, increase efficiency, and decrease costs. Therefore, to identify the advances of AR-based healthcare applications, this article makes an effort to analyze 45 peer-reviewed journal and conference articles from scholarly databases published between 2011 and 2020. It also addresses concurrent concerns and the relevant future challenges, including user satisfaction, convenient prototypes, service availability, maintenance cost, etc. Despite the development of several AR healthcare applications, there is some untapped potential regarding secure data transmission, which is an important factor for advancing this cutting-edge technology. Therefore, this paper also analyzes distinct AR security and privacy aspects, including security requirements (i.e., scalability, confidentiality, integrity, resiliency, etc.) and attack terminologies (i.e., sniffing, fabrication, modification, interception, etc.). Based on the security issues, in this paper we propose an artificial intelligence-based dynamic solution to build an intelligent security model that minimizes data security risks. This intelligent model can identify seen and unseen threats in the threat detection layer and thus can protect data during transmission. In addition, it prevents external attacks in the threat elimination layer using threat reduction mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_98-Comprehensive_Analysis_of_Augmented_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maneuverable Autonomy of a Six-legged Walking Robot: Design and Implementation using Deep Neural Networks and Hexapod Locomotion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120697</link>
        <id>10.14569/IJACSA.2021.0120697</id>
        <doi>10.14569/IJACSA.2021.0120697</doi>
        <lastModDate>2021-06-30T12:15:25.0670000+00:00</lastModDate>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Tran Nam Quoc Nguyen</creator>
        
        <creator>Bao Hoai Le</creator>
        
        <creator>Tam Hung Le</creator>
        
        <subject>Six-legged walking robot; hexapod locomotion; Raspberry Pi; deep neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Automatically synthesizing real-time behaviors for a six-legged walking robot poses several exciting challenges, which can be categorized into mechanics design, control software, and the combination of both. Due to the complexity of control and automation, numerous studies choose to gear their attention to a specific aspect of the whole challenge by either proposing valid, low-power assumptions about mechanical parts or implementing software solutions upon sensorial capabilities and a camera. Therefore, a complete solution associating mechanical moving parts, hardware components, and software that encourages generalization should be adequately addressed. The architecture proposed in this article orchestrates (i) interlocutor face detection and recognition utilizing ensemble learning and convolutional neural networks, (ii) maneuverable automation of a six-legged robot via hexapod locomotion, and (iii) deployment on a Raspberry Pi, which has not been previously reported in the literature. Not stopping there, the authors go one step further by enabling real-time operation. We believe that our contributions will stimulate research across multiple disciplines, ranging from IoT and computer vision to machine learning and robot autonomy.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_97-Maneuverable_Autonomy_of_a_Six_legged_Walking_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Investigation on System of Ordinary Differential Equations Absolute Time Inference with Mathematica&#174;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120696</link>
        <id>10.14569/IJACSA.2021.0120696</id>
        <doi>10.14569/IJACSA.2021.0120696</doi>
        <lastModDate>2021-06-30T12:15:25.0500000+00:00</lastModDate>
        
        <creator>Adeniji Adejimi</creator>
        
        <creator>Surulere Samuel</creator>
        
        <creator>Mkolesia Andrew</creator>
        
        <creator>Shatalov Michael</creator>
        
        <subject>Euler’s method; Runge-Kutta method; System of ODE; Mathematica&#174;; AbsoluteTiming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The purpose of this research is to perform a comparative numerical analysis of efficient numerical methods for second-order ordinary differential equations, by reducing the second-order ODE to a system of first-order differential equations and then obtaining approximate solutions to the system of ODEs. To validate the accuracy of the algorithm, a comparison between Euler&#39;s method and the Runge-Kutta method of order four was carried out, and an exact solution was found to verify the efficiency and accuracy of the methods. Graphical representations of the parametric plots are also presented. A time-inference analysis is carried out to check the time taken to execute the algorithm in Mathematica&#174; 12.2.0. The approximate solutions obtained using the algorithm show that the Runge-Kutta method of order four is more efficient for solving systems of linear ordinary differential equations.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_96-Numerical_Investigation_on_System_of_Ordinary_Differential_Equations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Data Pre-processing Techniques in Improving Machine Learning Accuracy for Predicting Coronary Heart Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120695</link>
        <id>10.14569/IJACSA.2021.0120695</id>
        <doi>10.14569/IJACSA.2021.0120695</doi>
        <lastModDate>2021-06-30T12:15:25.0330000+00:00</lastModDate>
        
        <creator>Osamah Sami</creator>
        
        <creator>Yousef Elsheikh</creator>
        
        <creator>Fadi Almasalha</creator>
        
        <subject>Coronary heart disease; heart; machine learning; data preprocessing; classification technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>These days, in light of rapid developments, people work day and night to maintain a good standard of living. This often causes them to pay little attention to a healthy lifestyle, such as what they eat or what physical activities they do. Such people are often the most likely to suffer from coronary heart disease. The heart is a small organ responsible for pumping oxygen-rich blood to the rest of the human body through the coronary arteries. Accordingly, any blockage or narrowing in one of these coronary arteries may prevent blood from being pumped to the heart and from it to the rest of the body, causing what are known as heart attacks. Hence the importance of early prediction of coronary heart disease, as it can help these people change their lifestyle and eating habits to become healthier, and thus prevent coronary heart disease and avoid death. This paper improves the accuracy of machine learning techniques in predicting coronary heart disease using data preprocessing techniques. Data preprocessing is used to improve the efficiency of a machine learning model by improving the quality of the features. The popular Framingham Heart Study dataset was used for validation purposes. The results indicate that data preprocessing techniques improved the predictive accuracy of poorly performing classifiers and show satisfactory performance in determining the risk of coronary heart disease. For example, the Decision Tree classifier achieved a predictive accuracy of 91.39%, an increase of 1.39% over previous work; the Random Forest classifier achieved 92.80%, an increase of 2.7%; the K-Nearest Neighbor classifier achieved 92.68%, an increase of 2.58%; the Multilayer Perceptron Neural Network (MLP) classifier achieved 92.64%, an increase of 2.64%; and the Na&#239;ve Bayes classifier achieved 90.56%, an increase of 0.66% over previous work.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_95-The_Role_of_Data_Pre_processing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discrete Time-Space Stochastic Mathematical Modelling for Quantitative Description of Software Imperfect Fault-Debugging with Change-Point</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120694</link>
        <id>10.14569/IJACSA.2021.0120694</id>
        <doi>10.14569/IJACSA.2021.0120694</doi>
        <lastModDate>2021-06-30T12:15:25.0200000+00:00</lastModDate>
        
        <creator>Mohd Taib Shatnawi</creator>
        
        <creator>Omar Shatnawi</creator>
        
        <subject>Stochastic mathematical modelling; discrete time-space; non-homogenous poisson process; change-point; imperfect fault-debugging; software-reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Statistics and stochastic-process theory, along with mathematical modelling and the respective empirical evidence, describe the software fault-debugging phenomenon. In the software-reliability engineering literature, stochastic mathematical models based on the non-homogeneous Poisson process (NHPP) are also employed to measure and boost reliability. Since reliability evolves with the execution of computer test-runs, the NHPP type of discrete time-space model, or difference equation, is superior to its continuous time-space counterparts. The majority of these models assume either a constant, monotonically increasing, or monotonically decreasing fault-debugging rate under an imperfect fault-debugging environment. However, in most debugging scenarios, a sudden change may occur in the fault-debugging rate due to an addition to, deletion from, or modification of the source code. Thus, the fault-debugging rate may not always be smooth and is subject to change at some point in time, called the change-point. Significantly few studies have addressed the problem of change-point in the discrete-time modelling approach. This paper examines the combined effects of change-point and imperfect fault-debugging with the learning process on software-reliability growth phenomena, based on the NHPP type of discrete time-space modelling approach. The performance of the proposed modelling approach is compared with other existing approaches on an actual software-reliability dataset cited in the literature. The findings reveal that incorporating the effect of change-point in software-reliability growth modelling enhances the accuracy of software-reliability assessment, because the stochastic characteristics of the software fault-debugging phenomenon alter at the change-point.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_94-Discrete_Time_Space_Stochastic_Mathematical_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Artificial Neural Network Model using Genetic Algorithm for Prediction of Traffic Emission Concentrations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120693</link>
        <id>10.14569/IJACSA.2021.0120693</id>
        <doi>10.14569/IJACSA.2021.0120693</doi>
        <lastModDate>2021-06-30T12:15:25.0030000+00:00</lastModDate>
        
        <creator>Akibu Mahmoud Abdullah</creator>
        
        <creator>Raja Sher Afgun Usmani</creator>
        
        <creator>Thulasyammal Ramiah Pillai</creator>
        
        <creator>Mohsen Marjani</creator>
        
        <creator>Ibrahim Abaker Targio Hashem</creator>
        
        <subject>Optimized Artificial Neural Network (OANN); Genetic Algorithm; traffic emissions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Global warming and climate change have become universal issues recently. One of the leading sources of climate change is automobiles. Automobiles are the prime source of air pollution in urban areas globally. This has resulted in a problematic and chaotic state in the development of an automatic traffic management system for capturing and monitoring vehicles’ hourly and daily passage. With the significant advancement of sensor technology, atmospheric information such as air pollution, meteorological, and motor vehicle data can be harvested and stored in databases. However, due to the complexity and non-linear associations between air quality, meteorological, and traffic variables, it is difficult for traditional statistical and mathematical models to analyze them. Recently, machine learning algorithms have become a popular tool in the field of traffic emissions prediction. Meteorological and traffic variables influence the variation and the trend of traffic pollutants. In this paper, an optimized artificial neural network (OANN) was developed to enhance the existing artificial neural network (ANN) model by updating the initial weights in the network using a Genetic Algorithm (GA). The OANN model was implemented to predict the concentration of CO, NO, NO2, and NOx pollutants produced by motor vehicles in Kuala Lumpur, Malaysia. OANN was compared with Artificial Neural Network (ANN), Random Forest (RF), and Decision Tree (DT) models. The results show that the developed OANN model performed better than the ANN, RF, and DT models, with the lowest MSE values of 0.0247 for CO, 0.0365 for NO, 0.0542 for NO2, and 0.1128 for NOx. It can be concluded that the developed OANN model is a better choice for predicting traffic emission concentrations. The developed OANN model can help environmental agencies monitor traffic-related air pollution levels efficiently and take the necessary measures to ensure the effectiveness of traffic management policy. The OANN model can also help decision-makers mitigate traffic emissions to protect citizens living in the neighborhood of highways.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_93-An_Optimized_Artificial_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting COVID-19 Utilizing Probabilistic Graphical Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120692</link>
        <id>10.14569/IJACSA.2021.0120692</id>
        <doi>10.14569/IJACSA.2021.0120692</doi>
        <lastModDate>2021-06-30T12:15:24.9870000+00:00</lastModDate>
        
        <creator>Emad Alsuwat</creator>
        
        <creator>Sabah Alzahrani</creator>
        
        <creator>Hatim Alsuwat</creator>
        
        <subject>Coronavirus disease 2019; COVID-19; artificial intelligence; machine learning; probabilistic graphical models; causal models; Bayesian networks; detection methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Probabilistic graphical models are employed in a variety of areas, such as artificial intelligence and machine learning, to depict causal relations among sets of random variables. In this research, we employ probabilistic graphical models in the form of Bayesian networks to detect coronavirus disease 2019 (denoted as COVID-19). We propose two efficient Bayesian network models that are potent in encoding causal relations among random variables, i.e., COVID-19 symptoms. The first Bayesian network model, denoted as BN1, is built on knowledge acquired from medical experts. We collected data from clinics and hospitals in Saudi Arabia for our research. We name this authentic dataset DScovid. The second Bayesian network model, denoted as BN2, is learned from the real dataset DScovid using the Chow-Liu tree approach. We also implement our proposed Bayesian network models and present our experimental results. Our results show that the proposed approaches are capable of modeling the issue of making decisions in the context of COVID-19. Moreover, our experimental results show that the two Bayesian network models we propose in this work are effective not only for extracting causal relations but also for reducing uncertainty and increasing the effectiveness of causal reasoning and prediction.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_92-Detecting_COVID_19_Utilizing_Probabilistic_Graphical_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake News Detection in Arabic Tweets during the COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120691</link>
        <id>10.14569/IJACSA.2021.0120691</id>
        <doi>10.14569/IJACSA.2021.0120691</doi>
        <lastModDate>2021-06-30T12:15:24.9700000+00:00</lastModDate>
        
        <creator>Ahmed Redha Mahlous</creator>
        
        <creator>Ali Al-Laith</creator>
        
        <subject>Fake news; Twitter; social media; Arabic corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In March 2020, the World Health Organization declared the COVID-19 outbreak to be a pandemic. Soon afterwards, people began sharing millions of posts on social media without considering their reliability and truthfulness. While there has been extensive research on COVID-19 in the English language, there is a lack of research on the subject in Arabic. In this paper, we address the problem of detecting fake news surrounding COVID-19 in Arabic tweets. We collected more than seven million Arabic tweets related to the coronavirus pandemic from January 2020 to August 2020 using the trending hashtags during the time of the pandemic. We relied on two fact-checkers, Agence France-Presse and the Saudi Anti-Rumors Authority, to extract a list of keywords related to misinformation and fake news topics. A small corpus was extracted from the collected tweets and manually annotated into fake or genuine classes. We used a set of features extracted from tweet contents to train a set of machine learning classifiers. The manually annotated corpus was used as a baseline to build a system for automatically detecting fake news from Arabic text. Classification of the manually annotated dataset achieved an F1-score of 87.8% using Logistic Regression (LR) as a classifier with n-gram-level Term Frequency-Inverse Document Frequency (TF-IDF) as a feature, and a 93.3% F1-score on the automatically annotated dataset using the same classifier with count vector features. The introduced system and datasets could help governments, decision-makers, and the public judge the credibility of information published on social media during the COVID-19 pandemic.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_91-Fake_News_Detection_in_Arabic_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Kinematic Analysis for Trajectory Planning of Open-Source 4-DoF Robot Arm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120690</link>
        <id>10.14569/IJACSA.2021.0120690</id>
        <doi>10.14569/IJACSA.2021.0120690</doi>
        <lastModDate>2021-06-30T12:15:24.9570000+00:00</lastModDate>
        
        <creator>Han Zhong Ting</creator>
        
        <creator>Mohd Hairi Mohd Zaman</creator>
        
        <creator>Mohd Faisal Ibrahim</creator>
        
        <creator>Asraf Mohamed Moubark</creator>
        
        <subject>Robot arm; kinematic analysis; forward kinematic; inverse kinematic; open-source</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Many small and large industries use robot arms to perform a range of tasks, such as pick-and-place and painting, in today’s world. However, to complete these tasks, one of the most critical problems is to obtain the desired position of the robot arm’s end-effector. There are two methods for analyzing the robot arm: forward kinematic analysis and inverse kinematic analysis. This study aims to model the forward and inverse kinematics of an open-source 4 degrees of freedom (DoF) articulated robotic arm. A kinematic model is designed, and all the joint parameters are further evaluated to calculate the end-effector’s desired position. Forward kinematics is simple to design, but the inverse kinematics requires a closed-form solution. The developed kinematic model’s performance is assessed on a simulated robot arm, and the results were analyzed to verify that the errors fell within the accepted range. At the end of this study, forward and inverse kinematic solutions of a 4-DoF articulated robot arm are successfully modeled, providing the theoretical basis for subsequent analysis and research on the robot arm.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_90-Kinematic_Analysis_for_Trajectory_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technique for Constrained Optimization of Cross-ply Laminates using a New Variant of Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120689</link>
        <id>10.14569/IJACSA.2021.0120689</id>
        <doi>10.14569/IJACSA.2021.0120689</doi>
        <lastModDate>2021-06-30T12:15:24.9570000+00:00</lastModDate>
        
        <creator>Huiyao Zhang</creator>
        
        <creator>Atsushi Yokoyama</creator>
        
        <subject>Laminated composite; classical lamination theory; genetic algorithm; optimal design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The main challenge presented by the design of laminated composite material is the laminate layup, involving a set of fiber orientations, composite material systems, and stacking sequences. By nature, it is a combinatorial optimization problem with constraints that can be solved by a genetic algorithm. The traditional approach to solving a constrained problem is to reformulate the objective function. In the present study, a new variant of the genetic algorithm is proposed for the design of composite material by using a mix of selection strategies instead of modifying the objective function. To check the feasibility of a laminate subject to in-plane loading, the effect of the fiber orientation angles and material components on the first ply failure is studied. The algorithm has been validated by successfully optimizing the design of cross-ply laminates under different in-plane loading cases. The results obtained by this algorithm are better than those reported in the related literature.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_89-A_Technique_for_Constrained_Optimization_of_Cross_ply_Laminates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using API with Logistic Regression Model to Predict Hotel Reservation Cancellation by Detecting the Cancellation Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120688</link>
        <id>10.14569/IJACSA.2021.0120688</id>
        <doi>10.14569/IJACSA.2021.0120688</doi>
        <lastModDate>2021-06-30T12:15:24.9400000+00:00</lastModDate>
        
        <creator>Sultan Almotiri</creator>
        
        <creator>Nouf Alosaimi</creator>
        
        <creator>Bayan Abdullah</creator>
        
        <subject>Prediction; API; factors; logistics regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Hotels are established to provide a service to their customers with the aim of making a profit. Cancellations are therefore a key aspect of hotel revenue management because of their effect on room reservation systems: a cancelled reservation eliminates the expected revenue. Many factors are expected to affect this problem; by knowing these factors, hotel management can devise a suitable cancellation policy. This project aims to create an API that provides a function to predict whether a reservation is likely to be cancelled. The API can be integrated with hotel management systems to evaluate each reservation with the same parameters. To do this, the study starts by identifying the factors using the chi-square test, correlation analysis to find the effective variables, and the coefficients of the variables in the logistic regression. The factors found are: is repeated guest, previous cancellations, previous bookings not cancelled, required car parking spaces, and deposit type. For the API function, the intercept and coefficients from the logistic regression model were used to create a scoring function: the score is the sum of the factors multiplied by their coefficients, plus the intercept. This score is later converted to a probability using the logistic function.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_88-Using_API_with_Logistic_Regression_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Do Learning Styles Enhance the Academic Performance of University Students? A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120686</link>
        <id>10.14569/IJACSA.2021.0120686</id>
        <doi>10.14569/IJACSA.2021.0120686</doi>
        <lastModDate>2021-06-30T12:15:24.9230000+00:00</lastModDate>
        
        <creator>Jorge Mu&#241;oz-Mederos</creator>
        
        <creator>Elizabeth Acosta-Gonzaga</creator>
        
        <creator>Elena Fabiola Ruiz-Ledesma</creator>
        
        <creator>Aldo Ram&#237;rez-Arellano</creator>
        
        <subject>Learning styles; learning objects; unified learning style model (ULSM); academic performance; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Higher education models appear not to be entirely designed to support students in facing severe challenges, such as failing exams and dropping out of school. To address these challenges, several models of learning styles have been proposed, under the premise that such studies contribute to improving the student’s learning experience. This research aims to quantify the impact of learning styles (learning preferences/dimensions) on students’ academic performance at a higher education institution. Ninety-six undergraduate students were surveyed during the 2018-2019 school year and randomly divided into two groups: control (CG) and experimental (EG). The learning preferences of the students were identified using the Unified Learning Style Model (ULSM) instrument. Subsequently, the level of students’ knowledge of the course was determined using a pre-test exam. Next, the students of the EG consulted learning objects designed to accommodate different learning styles, while the CG attended their lessons in a face-to-face environment; both groups answered a post-test exam to assess their learning. The effect of learning styles on academic performance (the learning objects were designed to cover several learning styles) is quantified using an ANOVA analysis. The results differ from those postulated in previous research based on the ULSM, since there is no statistical evidence that learning styles influence students’ academic performance. Therefore, it is necessary to explore other cognitive and affective factors that make the student’s learning experience efficient and effective.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_86-Do_Learning_Styles_Enhance_the_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Expert System with Smartphone-based Certainty Factor Method for Hypertension Risk Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120687</link>
        <id>10.14569/IJACSA.2021.0120687</id>
        <doi>10.14569/IJACSA.2021.0120687</doi>
        <lastModDate>2021-06-30T12:15:24.9230000+00:00</lastModDate>
        
        <creator>Dodon Yendri</creator>
        
        <creator>Derisma</creator>
        
        <creator>Desta Yolanda</creator>
        
        <creator>Sabrun Jamil</creator>
        
        <subject>Hypertension; certainty factor; blood pressure; mpx5500dp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Hypertension is a major health problem with a high prevalence in Indonesia. The national prevalence of hypertension is 34.11%; only half of cases are diagnosed, and the remaining half are undiagnosed. The certainty factor method is needed to express how certain a fact is in numeric form, and it is commonly used in expert systems; this makes it well suited to diagnosing uncertain conditions. This research aimed to build a smartphone-based hypertension risk detection system. The system was built from an MPX5500DP pressure sensor for blood pressure measurement, a Bluetooth module HC-05 for microcontroller data transmission, an Arduino Uno for sensor data processing, and a smartphone as the hypertension risk detection interface and database access. The system was tested by measuring blood pressure on the user’s right or left arm. The data was then sent to the smartphone to be classified and processed using the certainty factor method embedded in the Android smartphone to detect the risk of hypertension, followed by the selection of symptoms and risk factors of hypertension according to previous experience. The results showed that the success rate of the system in detecting the risk of hypertension was 100%; the mean error of the systolic value of the right arm was 2.092%, the mean error of the diastolic value of the right arm was 2.977%, the mean error of the systolic value of the left arm was 1.944%, and the mean error of the left arm diastolic value was 2.800%.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_87-Application_of_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Project Estimation with Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120685</link>
        <id>10.14569/IJACSA.2021.0120685</id>
        <doi>10.14569/IJACSA.2021.0120685</doi>
        <lastModDate>2021-06-30T12:15:24.9100000+00:00</lastModDate>
        
        <creator>Noor Azura Zakaria</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <creator>Afrujaan Yakath Ali</creator>
        
        <creator>Nur Hidayah Mohd Khalid</creator>
        
        <creator>Nadzurah Zainal Abidin</creator>
        
        <subject>Software effort estimation; project estimation; constructive cost model; COCOMO; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This project involves research on software effort estimation using machine learning algorithms. Software cost and effort estimation are crucial parts of software project development; they determine the budget, time, and resources needed to develop a software project. One of the well-established software project estimation models is the Constructive Cost Model (COCOMO), which was developed in the 1980s. Even though this model is still in use, COCOMO has some weaknesses, and software developers still face the problem of inaccurate effort and cost estimation. Inaccuracy in the estimated effort affects the schedule and cost of the whole project as well. The objective of this research is to use several machine learning algorithms to estimate the effort of software project development. The best machine learning model is then chosen and compared with COCOMO.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_85-Software_Project_Estimation_with_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MAGITS: A Mobile-based Information Sharing Framework for Integrating Intelligent Transport System in Agro-Goods e-Commerce in Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120684</link>
        <id>10.14569/IJACSA.2021.0120684</id>
        <doi>10.14569/IJACSA.2021.0120684</doi>
        <lastModDate>2021-06-30T12:15:24.8930000+00:00</lastModDate>
        
        <creator>Stivin Aloyce Nchimbi</creator>
        
        <creator>Mussa Ally Dida</creator>
        
        <creator>Gerrit K. Janssens</creator>
        
        <creator>Janeth Marwa</creator>
        
        <creator>Michael Kisangiri</creator>
        
        <subject>Intelligent transport system; stakeholders; mobile phone; agro-goods; information sharing; agriculture supply chain; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Technological advancements in Intelligent Transport Systems (ITS) and mobile phones enable massive numbers of collaborating devices to collect, process, and share information to support the sales and transportation of agricultural goods (agro-goods) from farmer to market within the Agriculture Supply Chain. Mobile devices, especially smartphones and intelligent Point of Sale (PoS) terminals, provide multiple features, such as a Global Positioning System (GPS) and accelerometer, to complement infrastructure requirements. Despite this opportunity, the development and deployment of innovative platforms integrating agro-goods transport services with e-commerce and e-payment systems are still challenging in developing countries. Noted challenges include the high cost of infrastructure, implementation complexities, and technology and policy issues. Therefore, this paper proposes a framework for integrating ITS services in agro-goods e-commerce, taking advantage of mobile device functionalities and their massive usage in developing countries. The framework components identified and discussed are Stakeholders and roles, User Services, Mobile Operations, Computing environment with Machine Learning support, Service goals and Information view, and Enabling Factors. A Design Science Research (DSR) method is applied to produce the framework as an artifact using a six-step model. A case study of potato sales and transportation from the Njombe region to Dar es Salaam city in Tanzania is also presented. The framework’s constructs improve the quality of information shared among stakeholders and provide a cost-effective and efficient approach for the buying, selling, payment, and transportation of agricultural goods.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_84-MAGITS_A_Mobile_based_Information_Sharing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skeletonization of the Straight Leg Raise Movement using the Kinect SDK</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120683</link>
        <id>10.14569/IJACSA.2021.0120683</id>
        <doi>10.14569/IJACSA.2021.0120683</doi>
        <lastModDate>2021-06-30T12:15:24.8770000+00:00</lastModDate>
        
        <creator>Hustinawaty </creator>
        
        <creator>T. Rumambi</creator>
        
        <creator>M. Hermita</creator>
        
        <subject>Straight leg raise; range of motion; skeleton; kinect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Biomechanics is widely used as a basis for research in studying disorders of the movement system in the human body, and its development has influenced the field of physiotherapy. Various physiotherapy tests have been developed by researchers to identify and analyze the causes of movement disorders in the human body, including the Hawkins Test, Standing Hip Flexion, Standing Trunk Sidebend Test, and Straight Leg Raise Test. The Straight Leg Raise (SLR) test involves lifting one leg straight up from a lying-down position. In the medical field, the SLR movement is highly specific for lower lumbar disc herniation. This study uses the Kinect, a color (RGB) and depth camera that can detect movement and track the human skeleton. The human body skeleton is obtained through the stages of RGB image acquisition, pose calibration, depth imaging, skeletonization, and feature extraction of the two legs, from which the range of motion (ROM) is formed. The results of this study show that the Kinect is able to track the user&#39;s leg frames in the SLR pose. The Kinect can therefore be relied upon as a tool for detecting and tracking skeletons, and can then be used to measure SLR ROM in real time, analyze low back pain problems, and measure effectiveness in physiotherapy.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_83-Skeletonization_of_the_Straight_Leg_Raise_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Management of Security Policies in PrivOrBAC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120681</link>
        <id>10.14569/IJACSA.2021.0120681</id>
        <doi>10.14569/IJACSA.2021.0120681</doi>
        <lastModDate>2021-06-30T12:15:24.8630000+00:00</lastModDate>
        
        <creator>Jihane EL MOKHTARI</creator>
        
        <creator>Anas ABOU EL KALAM</creator>
        
        <creator>Siham BENHADDOU</creator>
        
        <creator>Jean-Philippe LEROY</creator>
        
        <subject>Access control; privacy; PrivOrBAC; PrivUML; smart contract; WS-agreement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This article is a continuation of our previous work on identifying and developing tools and concepts to provide automatic management and derivation of security and privacy policies. In this document we are interested in the extension of the PrivOrBAC model in order to ensure a dynamic management of privacy-aware security policies. Our approach, based on smart contracts (SC) and the WS-Agreement Specification, allows automatic agents representing data providers and access requesters to enter into an access agreement that takes into consideration not only service level clauses but also security rules to protect the privacy of individuals. Our solution can be deployed in such a way that no human intervention is required to reach this type of agreement. This work shows how to use the WS-Agreement Specification to set up a process for negotiating, creating and monitoring Service Level Agreements (SLAs) in accordance with a predefined access control policy. This article concludes with a case study accompanied by a representative implementation of our solution.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_81-Dynamic_Management_of_Security_Policies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperspectral Image Classification using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120682</link>
        <id>10.14569/IJACSA.2021.0120682</id>
        <doi>10.14569/IJACSA.2021.0120682</doi>
        <lastModDate>2021-06-30T12:15:24.8630000+00:00</lastModDate>
        
        <creator>Shambulinga M</creator>
        
        <creator>G. Sadashivappa</creator>
        
        <subject>Convolutional neural network; hyperspectral image; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Hyperspectral imagery is well known for the identification of objects on the earth&#39;s surface. Most classifiers use only spectral features and do not consider spatial features when performing classification and recognizing the various objects in the image. In this paper, the hyperspectral image is classified based on both spectral and spatial features using a convolutional neural network (CNN). The hyperspectral image is divided into a number of small patches. The CNN constructs high-level spectral and spatial features of each patch, and a multi-layer perceptron classifies the image features into different classes. Simulation results show that the CNN achieves the highest classification accuracy on the hyperspectral image compared with other classifiers.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_82-Hyperspectral_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case Study of Self-Organization Processes in Information System Caching Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120680</link>
        <id>10.14569/IJACSA.2021.0120680</id>
        <doi>10.14569/IJACSA.2021.0120680</doi>
        <lastModDate>2021-06-30T12:15:24.8470000+00:00</lastModDate>
        
        <creator>Pavel Kurnikov</creator>
        
        <creator>Nina Krapukhina</creator>
        
        <subject>Caching mechanisms; ORM; phase space reconstruction; self-organization; dissipative structures; deterministic chaos</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Most modern Information Systems (IS) are designed with Object-Relational Mapping components (ORM). Such components bring down the number of queries to the database server and therefore increase the system performance. Caching mechanisms in software components are complex dynamic systems of open type. A simulation model of cache-application interaction has been created to assess the critical modes of the system. The authors suggest accumulating the cache elements states during the application run at discrete moments of time and present them as multi-variable time series. This work also suggests a method for reconstruction of phase-plane portrait of the system with multidimensional dynamics of the cache elements states. The article shows self-organization processes in caching components of information systems and illustrates the variability of system states for various initial conditions followed by transition to steady-state conditions. In particular, the paper illustrates dissipative structures as well as deterministic chaos with the complete determinism of queries in simulated information systems.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_80-Case_Study_of_Self_Organization_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Particle Swarm Optimization Approach for Latency of Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120679</link>
        <id>10.14569/IJACSA.2021.0120679</id>
        <doi>10.14569/IJACSA.2021.0120679</doi>
        <lastModDate>2021-06-30T12:15:24.8300000+00:00</lastModDate>
        
        <creator>Jannat H. Elrefaei</creator>
        
        <creator>Ahmed Yahya</creator>
        
        <creator>Mouhamed K. Shaat</creator>
        
        <creator>Ahmed H. Madian</creator>
        
        <creator>Refaat M. Fikry</creator>
        
        <subject>Wireless sensor networks; media access control protocol; scheduling algorithms; meta-heuristics; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Wireless sensor networks are employed in time-sensitive applications, such as detecting environmental and individual nuclear radiation exposure. Such applications require timely detection of radiation levels so that appropriate emergency measures can be applied to protect people and the environment from radiation hazards. In these networks, collision and interference in communication between sensor nodes cause greater end-to-end delay and reduce the network&#39;s performance. A time-division multiple-access (TDMA) media access control protocol guarantees minimum latency and low power consumption, and it also overcomes the problem of interference. The TDMA scheduling problem determines the minimum-length conflict-free assignment of slots in a TDMA frame in which each node or link is activated at least once. This paper proposes a meta-heuristic centralized contention-free approach based on TDMA, a modified particle swarm optimization, which realizes TDMA scheduling more efficiently than other existing algorithms. Extensive simulations were performed to evaluate the modified approach. The simulation results show that the proposed scheduling algorithm performs better in wireless sensor networks than the interference degree leaves order algorithm and the interference degree remaining leaves order algorithm. The results also demonstrate that integrating the proposed algorithm into TDMA protocols significantly reduces communication latency and increases network reliability.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_79-A_Modified_Particle_Swarm_Optimization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach based on Fuzzy MADM for Enhancing Infrastructure Selection in the Case of VANET Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120678</link>
        <id>10.14569/IJACSA.2021.0120678</id>
        <doi>10.14569/IJACSA.2021.0120678</doi>
        <lastModDate>2021-06-30T12:15:24.8170000+00:00</lastModDate>
        
        <creator>Abdeslam Houari</creator>
        
        <creator>Tomader Mazri</creator>
        
        <subject>5G; Vehicular adhoc network; internet of things; intelligent transportation system; cellular infrastructure; road infrastructure; fuzzy; vertical handover; MADM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The emergence of the 5G mobile network has a huge impact on the evolution of services and functionalities offered to its customers; this latest generation of mobile networks will allow the simultaneous connection of a significant number of people and IoT devices, in addition to improving several other features. 5G will serve a large part of smart cities, especially the field of intelligent transportation systems (ITS). The Vehicular Ad-Hoc Network (VANET) is one of the promising projects on which ITS relies. Its main purpose is to provide communication and information-sharing support for the vehicles in its network. VANET is based on a heterogeneous network architecture composed mainly of two infrastructures: the cellular infrastructure and the road infrastructure. This paper proposes a new approach based on Fuzzy multiple attribute decision making (MADM) methods for selecting the most appropriate infrastructure in the VANET network, and consequently improving the vertical handovers executed to move from one infrastructure to another without loss of connection.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_78-A_New_Approach_based_on_Fuzzy_MADM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Material Network Data Hubs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120677</link>
        <id>10.14569/IJACSA.2021.0120677</id>
        <doi>10.14569/IJACSA.2021.0120677</doi>
        <lastModDate>2021-06-30T12:15:24.8170000+00:00</lastModDate>
        
        <creator>Robert R. White</creator>
        
        <creator>Kristin Munch</creator>
        
        <creator>Nicholas Wunder</creator>
        
        <creator>Nalinrat Guba</creator>
        
        <creator>Chitra Sivaraman</creator>
        
        <creator>Kurt M. Van Allsburg</creator>
        
        <creator>Huyen Dinh</creator>
        
        <creator>Courtney Pailing</creator>
        
        <subject>Energy materials research; cloud computing; virtual laboratories; data management; consortium; network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In early 2015 the United States Department of Energy conceived of a consortium of collaborative bodies based on shared expertise, data, and resources that could be targeted towards the more difficult problems in energy materials research. The concept of virtual laboratories had been envisioned and discussed earlier in the decade in response to the advent of the Materials Genome Initiative and similar scientific thrusts. To be effective, any virtual laboratory needed a robust method for data management, communication, security, data sharing, dissemination, and demonstration to work efficiently and effectively for groups of remote researchers. With the accessibility of new, easily deployed cloud technology and software frameworks, such individual elements could be integrated, and the required collaboration architecture is now possible. The developers have leveraged open-source software frameworks, customized them, and merged them into a platform to enable collaborative energy materials science, regardless of the geographic dispersal of the people and resources. After five years in operation, the systems have proven to be a demonstrably effective platform for enabling research within the Energy Material Networks (EMN). This paper shows the design and development of a secured scientific data sharing platform, the ability to customize the system to support diverse workflows, and examples of the enabled research and results connected with some of the Energy Material Networks.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_77-Energy_Material_Network_Data_Hubs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Optimal Feature Selection in Machine Learning using Global Sensitivity Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120676</link>
        <id>10.14569/IJACSA.2021.0120676</id>
        <doi>10.14569/IJACSA.2021.0120676</doi>
        <lastModDate>2021-06-30T12:15:24.8000000+00:00</lastModDate>
        
        <creator>G. Saranya</creator>
        
        <creator>A. Pravin</creator>
        
        <subject>Feature selection; feature sensitivity; feature correlation; global sensitivity analysis; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Feature selection is an important procedure in classification applications, since classification is mainly based on the features extracted from the object. The best features can be selected using one of three methods: wrapper, filter, and embedded procedures. All three practices have been implemented singly or by combining two approaches, with the result that important features may be missing from the classification process. This problem is solved by the proposed integrated global sensitivity analysis (GSA). In this integrated sensitivity and correlation approach, each feature is selected for classification based on its sensitivity and its correlation with the target vector. Likewise, the GSA approach uses a variety of filtering techniques for ranking attributes and optimizes them using the particle swarm technique. The optimal attributes are then trained and tested using a Random Forest classifier with grid search in MATLAB. The performance of our integrated model is measured using sensitivity, specificity, and accuracy in comparison with the existing wrapper-based selection method. Our proposed approach achieves sensitivities of 93.72% and 94.74% and accuracies of 89.921% and 90%, whereas the wrapper selection approach achieves a sensitivity of 89.83% and an accuracy of 93%.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_76-An_Approach_for_Optimal_Feature_Selection_in_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Evaluating Adversarial Attacks Robustness in Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120675</link>
        <id>10.14569/IJACSA.2021.0120675</id>
        <doi>10.14569/IJACSA.2021.0120675</doi>
        <lastModDate>2021-06-30T12:15:24.7830000+00:00</lastModDate>
        
        <creator>Asmaa FTAIMI</creator>
        
        <creator>Tomader MAZRI</creator>
        
        <subject>Adversarial attacks; deep learning; wireless communication; security; robustness; vulnerability; threat</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The emerging new technologies, such as autonomous vehicles, augmented reality, IoT, and other aspects that are revolutionising our world today, have highlighted new requirements that wireless communications must fulfil. Wireless communications are expected to have a high optimisation capability, efficient detection ability, and prediction flexibility to meet the challenges and constraints of today&#39;s cutting-edge telecommunications technologies. In this regard, the integration of deep learning models in wireless communications appears to be extremely promising. However, the study of deep learning models has exhibited inherent vulnerabilities that attackers could harness to compromise wireless communication systems. The examination of these vulnerabilities and the evaluation of the attacks leveraging them remain essential. Therefore, this paper&#39;s main objective is to align security studies of deep learning models with the specific requirements of wireless communications, thereby proposing a pattern for assessing adversarial attacks targeting deep learning models embedded in wireless communications.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_75-Toward_Evaluating_Adversarial_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality Adapted Book (AREmotion) Design as Emotional Expression Recognition Media for Children with Autistic Spectrum Disorders (ASD)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120674</link>
        <id>10.14569/IJACSA.2021.0120674</id>
        <doi>10.14569/IJACSA.2021.0120674</doi>
        <lastModDate>2021-06-30T12:15:24.7700000+00:00</lastModDate>
        
        <creator>Tika Miningrum</creator>
        
        <creator>Herman Tolle</creator>
        
        <creator>Fitra A Bachtiar</creator>
        
        <subject>Autism Spectrum Disorder (ASD); adapted book; Augmented Reality (AR); User-Centered Design (UCD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>One of the characteristics of Autism Spectrum Disorder (ASD) is difficulty understanding other people&#39;s emotions. This lack of skill in understanding emotion includes expression and appropriate emotional response in a given situation. This paper proposes an adapted book that helps therapists and parents guide children with ASD in learning facial emotional expressions. The adapted book, combined with video, animation, and Augmented Reality, improves the ability of children with ASD to recognize emotional expressions. This research uses a User-Centered Design (UCD) approach to design the AR Adapted Book application and to design the social story so that it is observable in school and family environments. Usability testing of the application shows that AREmotion has an average score of 82.73 percent, with the learnability aspect having the highest score of 86.7 percent. This preliminary usability testing indicates that the design of the AREmotion application is ready for real implementation for children with ASD.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_74-Augmented_Reality_Adapted_Book.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feasibility Study of a Small-Scale Grid-Connected PV Power Plants in Egypt; Case Study: New Valley Governorate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120673</link>
        <id>10.14569/IJACSA.2021.0120673</id>
        <doi>10.14569/IJACSA.2021.0120673</doi>
        <lastModDate>2021-06-30T12:15:24.7700000+00:00</lastModDate>
        
        <creator>Mahmoud Saad</creator>
        
        <creator>Hamdy M. Sultan</creator>
        
        <creator>Ahmed Abdeltwab</creator>
        
        <creator>Ahmed A. Zaki Diab</creator>
        
        <subject>RETScreen; new valley; solar energy; energy cost; feasibility analysis; greenhouse gases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The construction of photovoltaic power plants (PVPPs) in the right place is an important task when planning the development of the power system and attracting investors. In this paper, the technical, environmental, and economic feasibility of installing a 50 kW solar power plant at different places in the New Valley Governorate in Egypt is presented using RETScreen Expert software. The input data used in the current study are obtained from the database of the Surface Meteorology and Solar Energy Dataset of NASA. In total, five sites for the construction of 50 kW power stations were assessed, representing the five administrative regions of the New Valley Governorate. The study is based on annual electricity production, greenhouse gas (GHG) emissions, and financial analysis. With the proposed PV power plant, up to 100 MWh of electricity can be produced and a minimum of 43.3 tons of GHG emissions can be kept out of the local atmosphere annually. The results obtained from the RETScreen program prove the viability of installing the proposed 50 kW photovoltaic power plant in any of the proposed locations. This study provides important information and feedback that can be utilized as a database for upcoming investments in photovoltaic generation projects in Egypt.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_73-Feasibility_Study_of_a_Small_Scale_Grid_Connected.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality based Adaptive and Collaborative Learning Methods for Improved Primary Education Towards Fourth Industrial Revolution (IR 4.0)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120672</link>
        <id>10.14569/IJACSA.2021.0120672</id>
        <doi>10.14569/IJACSA.2021.0120672</doi>
        <lastModDate>2021-06-30T12:15:24.7530000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <creator>Azrulhizam Shapi&#39;i</creator>
        
        <subject>Augmented reality; virtual reality; adaptive and collaborative learning; fourth industrial revolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Existing primary education is not organized to meet the demands of the Fourth Industrial Revolution (IR 4.0), which results in a lack of engagement with learning materials and a lack of spatial ability and motivation towards learning among young students. Students, especially at the initial education level, tend to learn from seeing rather than reading or memorizing. In this context, technologies like augmented reality and virtual reality offer such visual power, along with interactive and engaging characteristics, by bringing the virtual and real worlds together. In addition, flexible presentation mechanisms, together with tagging and tracking technology with various degrees of freedom, provide effective ground for augmented reality and virtual reality to integrate with current primary education. This research presents a comprehensive and critical review of previous research in terms of several aspects, such as methods, research design, and experimental validation. Each aspect is discussed with its advantages and disadvantages for designing the integration of augmented reality and virtual reality into primary education methodology. In addition, this research provides an extensive critical explanation of the various challenges of integrating augmented reality and virtual reality with primary education so as to make students motivated towards education, effective activity, and memorization with visualization. The overall investigation proposed in this research is expected to fulfil the future demand of the Fourth Industrial Revolution (IR 4.0).</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_72-Augmented_Reality_based_Adaptive_and_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster based Certificate Blocking Scheme using Improved False Accusation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120671</link>
        <id>10.14569/IJACSA.2021.0120671</id>
        <doi>10.14569/IJACSA.2021.0120671</doi>
        <lastModDate>2021-06-30T12:15:24.7370000+00:00</lastModDate>
        
        <creator>Chetan S Arage</creator>
        
        <creator>K. V. V. Satyanarayana</creator>
        
        <subject>Mobile Ad Hoc Networks; cooperative bait detection scheme; cluster; cluster head; certifying authority; certificate revocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Mobile Ad Hoc Networks (MANETs) are aggregations of mobile nodes that operate without a base station. The nodes are mobile by nature, and because these networks lack fixed infrastructure, their mobility makes them subject to security attacks. Several mechanisms have been proposed to prevent mishaps during packet routing in such networks. Methods: The methodology outlined here for protecting Mobile Ad Hoc Networks against various types of attacks is based on a recent method known as the Cooperative Bait Detection Scheme. Its implementation scenario demonstrates that, in the event of Sybil attacks, the packet delivery ratio and performance on the network are low. Our goal is to propose a cluster-based methodology to improve delay, packet delivery ratio, and other performance assessment criteria. Improved Cooperative Bait Detection recommends a disjoint multipath technique to avoid attacks. To date, the dropped packet delivery ratio has been the key to preventing collaborative and Sybil attacks. In the Hybrid Cooperative Bait Detection Scheme, nodes are verified in two stages: first on the basis of packet delivery ratio, and then, in the second stage, the exact cause of performance degradation is explored to check node behavior. To improve security, certifying procedures must be applied to clustered networks. For malevolent entities, the false accusation algorithm provides certificate revocation and blocking approaches. An algorithm is proposed that remembers false accusations for a set period of time in order to increase the number of normal nodes in the network and hence improve the system&#39;s performance. Results: With the help of NS2 simulation, the clustering approach was evaluated by considering several Sybil-attack network scenarios. When the proposed work is compared with other approaches such as the Cooperative Bait Detection Scheme, the Improved Cooperative Bait Detection Scheme, and the Hybrid Cooperative Bait Detection Scheme, the results show that the packet delivery ratio and performance are improved under Sybil attacks on the network. In conjunction with the certifying authority, the Cluster Head in the network identifies and prevents false complaints. The comparison results using several performance parameters reveal that the new strategy outperforms the existing ones. As the number of normal nodes in the system grows, the system will be able to work at its best, preventing various types of attacks.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_71-Cluster_based_Certificate_Blocking_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Preserving Dynamic Provable Data Possession with Batch Update for Secure Cloud Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120669</link>
        <id>10.14569/IJACSA.2021.0120669</id>
        <doi>10.14569/IJACSA.2021.0120669</doi>
        <lastModDate>2021-06-30T12:15:24.7200000+00:00</lastModDate>
        
        <creator>Smita Chaudhari</creator>
        
        <creator>Gandharba Swain</creator>
        
        <subject>Public auditing; ring signature; Indistinguishability Obfuscation; Rank-based Merkle Tree (RBMT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Cloud Server (CS) is an untrusted entity in the cloud paradigm that may hide accidental data loss to maintain its reputation. Provable Data Possession (PDP) is a model that allows a Third Party Auditor (TPA) to verify the integrity of outsourced data on behalf of the cloud user without downloading the data files. But this public auditing model faces many security and performance issues, such as unnecessary computational burden on the user as well as on the TPA, preserving the identities of users from the TPA during auditing, and support for dynamic updates. Many PDP schemes create a computational burden on either the TPA or the cloud user. To balance this overhead between the TPA and the user, this paper proposes the Privacy-Preserving Dynamic Provable Data Possession (P2DPDP) scheme, which is based on the ODPDP scheme. In the ODPDP scheme, the user is relieved of this burden by signing a contract with the TPA regarding verification of the outsourced data, but this generates computation overhead on the TPA. To reduce this overhead, our P2DPDP scheme uses Indistinguishability Obfuscation (IO) with a one-way function, such as a message authentication code, to make the auditing process lightweight. The P2DPDP scheme uses a Rank-based Merkle Tree (RBMT) to support dynamic updates in batch mode, which greatly reduces the computation overhead of the TPA. ODPDP lacks privacy, which P2DPDP maintains using a ring signature technique. Our experimental results demonstrate reduced verification time and computation cost compared to existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_69-Privacy_Preserving_Dynamic_Provable_Data_Possession.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Model Evaluation Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120670</link>
        <id>10.14569/IJACSA.2021.0120670</id>
        <doi>10.14569/IJACSA.2021.0120670</doi>
        <lastModDate>2021-06-30T12:15:24.7200000+00:00</lastModDate>
        
        <creator>Željko &#208;. Vujovic</creator>
        
        <subject>Classification model; classification models; evaluate classification models; worst metric values; four-class classification; metric classification; reliable classified classification models; detailed class accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The purpose of this paper was to confirm the basic assumption that classification models are suitable for solving the problem of data set classification. We selected four representative models: BayesNet, NaiveBayes, MultilayerPerceptron, and J48, and applied them to a four-class classification of a specific set of hepatitis C virus data for Egyptian patients. We conducted the study using the WEKA classification software, developed at the University of Waikato, New Zealand. Unsatisfactory results were obtained: none of the four envisaged classes was determined reliably. We describe all 16 metrics used to evaluate classification models and list their characteristics, their mutual differences, and the parameter that each metric evaluates. We present comparative tabular values of each metric for each classification model in concise form, detailed class accuracy with a table of best and worst metric values, confusion matrices for all four classification models, and a table of type I and type II errors for all four models. In addition to the 16 metrics we described, we list seven other metrics, which we did not use because we did not have the opportunity to show their application on the selected data set. The metrics rated the selected, standard, reliable classification models negatively. This led to the conclusion that the data in the selected data set should be pre-processed in order to be reliably classified by the classification models.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_70-Classification_Model_Evaluation_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing the Balanced Accuracy of Deep Neural Network and Machine Learning for Predicting the Depressive Disorder of Multicultural Youth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120668</link>
        <id>10.14569/IJACSA.2021.0120668</id>
        <doi>10.14569/IJACSA.2021.0120668</doi>
        <lastModDate>2021-06-30T12:15:24.7070000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Quick unbiased efficient statistical tree; gradient boosting machine; deep neural network; classification and regression trees; balanced accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Multicultural youth are more likely to experience negative emotions (e.g., depressive symptoms) due to social prejudice and discrimination. Nevertheless, previous studies that analyzed the emotional aspects of multicultural youth mainly compared the characteristics of multicultural youth with those of other youth or identified individual risk factors using a regression model. This study developed models to predict the depressive disorders of multicultural youth based on the Quick Unbiased Efficient Statistical Tree (QUEST), Classification And Regression Trees (CART), gradient boosting machine (G-B-M), random forest, and deep neural network (deep-NN) using epidemiological data representing multicultural youth, and compared the prediction performance (PRED PER) of the developed models. Our study analyzed 19,431 youths (9,835 males and 9,596 females) aged between 19 and 24 years old. We developed models for predicting the self-awareness of health of youths by using QUEST, CART, G-B-M, random forest, and deep-NN and compared their balanced accuracy to evaluate their PRED PER. Among the 19,431 subjects, 42.9% (5,838 people) had experienced a depressive disorder in the past year. The results of our study confirmed that deep-NN had the best PRED PER, with a specificity of 0.85, a sensitivity of 0.71, and a balanced accuracy of 0.78. It will be necessary to develop a model with optimal PRED PER by tuning the hyperparameters of deep-NN (e.g., number of hidden layers, number of hidden nodes, number of iterations, and activation function).</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_68-Comparing_the_Balanced_Accuracy_of_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relative Merits of Data Mining Algorithms of Chronic Kidney Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120667</link>
        <id>10.14569/IJACSA.2021.0120667</id>
        <doi>10.14569/IJACSA.2021.0120667</doi>
        <lastModDate>2021-06-30T12:15:24.6900000+00:00</lastModDate>
        
        <creator>Harsha Herle</creator>
        
        <creator>Padmaja K V</creator>
        
        <subject>Ultrasound images; support vector machine (SVM); k-nearest neighbor algorithm (K-NN); multilayer perceptron algorithm (MLP); random forest (RF); clinical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Early prediction of Chronic Kidney Disease in human subjects is considered a critical factor for diagnosis and treatment. The use of data mining algorithms to reveal hidden information in clinical and laboratory samples helps physicians in early diagnosis, thus contributing to increased accuracy, prediction, and detection of Chronic Kidney Disease. In this work, the samples are subjected to data mining algorithms to determine which are optimal for classification and prediction of Chronic Kidney Disease. The results of applying the relevant algorithms, namely K-Nearest Neighbors, Support Vector Machine, Multilayer Perceptron, and Random Forest, are studied for both clinical and laboratory samples. Our findings show that the K-Nearest Neighbors algorithm provides the best classification for clinical data and, similarly, Random Forest for laboratory samples, when compared on performance parameters such as precision, accuracy, recall, and F1 score with other data mining analysis techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_67-Relative_Merits_of_Data_Mining_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dorsal Hand Vein Extraction in Uncontrolled Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120665</link>
        <id>10.14569/IJACSA.2021.0120665</id>
        <doi>10.14569/IJACSA.2021.0120665</doi>
        <lastModDate>2021-06-30T12:15:24.6730000+00:00</lastModDate>
        
        <creator>Nisha Charaya</creator>
        
        <creator>Anil Kumar</creator>
        
        <creator>Priti Singh</creator>
        
        <subject>Biometrics; security; forgery; hand vein; vein segmentation; vein extraction; thick veins; unclear veins</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Biometrics is an inseparable part of our day-to-day life, and major developments in this area have been observed in the past few decades. In recent years, dorsal hand veins have emerged as a promising biometric attribute due to their universality, stability, and anti-forgery characteristics. However, detecting veins of different thickness under different illumination is a challenging task, and traditional vein extraction approaches based on thresholding are not applicable in these situations. This paper presents a hybrid approach to vein segmentation for such hand images. The proposed approach is a combination of two techniques, i.e., repeated line tracking and maximum curvature points. The technique has been tested on the Bosphorus hand vein dataset, which contains 1575 images of different age groups captured under different illumination conditions. From the results, it is evident that this technique is suitable for extracting the vein pattern from all types of images. Further, these images have yielded an accuracy of more than 98% when subjected to feature extraction and classification steps.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_65-Dorsal_Hand_Vein_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble GRU Approach for Wind Speed Forecasting with Data Augmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120666</link>
        <id>10.14569/IJACSA.2021.0120666</id>
        <doi>10.14569/IJACSA.2021.0120666</doi>
        <lastModDate>2021-06-30T12:15:24.6730000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito-Chura</creator>
        
        <creator>Victor Yana-Mamani</creator>
        
        <subject>Wind speed forecasting; recurrent neural networks; gated recurrent unit; ensemble GRU; data augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This paper proposes an ensemble model for wind speed forecasting using the recurrent neural network known as the Gated Recurrent Unit (GRU) and data augmentation. For the experimentation, a single wind speed time series is used, from which four augmented time series are generated; these serve to train four GRU sub-models, respectively, and the results of these sub-models are averaged to produce the results of the proposed ensemble model (E-GRU). The results achieved by E-GRU are compared with those of each sub-model, showing that E-GRU outperforms the sub-models. Likewise, the proposed model (E-GRU) is compared with benchmark models without data augmentation, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), showing that E-GRU is much more precise, reaching a difference of around 15% with respect to the Relative Root Mean Square Error (RRMSE) and 11% with respect to the Mean Absolute Percentage Error (MAPE).</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_66-An_Ensemble_GRU_Approach_for_Wind_Speed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Identity Governance and Management Framework within Universities for Privileged Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120664</link>
        <id>10.14569/IJACSA.2021.0120664</id>
        <doi>10.14569/IJACSA.2021.0120664</doi>
        <lastModDate>2021-06-30T12:15:24.6600000+00:00</lastModDate>
        
        <creator>Shadma Parveen</creator>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Mohammad Ahmar Khan</creator>
        
        <subject>Identity management; governance framework; privileged access management; security; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>High-tech progress around the world has drawn the attention of governing bodies and private companies toward well-organized access control measures within the organization. Given the importance of this exceedingly essential concern, this article proposes an integrated approach that combines IAM (identity and access management) as an authentication tool and PAM (privileged access management) as a restrictive access control measure in terms of an active directory. The experimental setup was originally organized within Prince Sattam Bin Abdulaziz University and analyzed using real-time data available in the university database. We found that the proposed mechanism can be a vital method for protecting governance data and key business-oriented data from unauthorized or adversarial attacks. We also reviewed and compared other access control methods and found that the integrated method has a relative advantage in handling access control tasks in any premier organization.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_64-Integration_of_Identity_Governance_and_Management_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Threshold based Method for Vessel Intensity Detection and Extraction from Retinal Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120663</link>
        <id>10.14569/IJACSA.2021.0120663</id>
        <doi>10.14569/IJACSA.2021.0120663</doi>
        <lastModDate>2021-06-30T12:15:24.6430000+00:00</lastModDate>
        
        <creator>Farha Fatina Wahid</creator>
        
        <creator>Sugandhi K</creator>
        
        <creator>Raju G</creator>
        
        <creator>Debabrata Swain</creator>
        
        <creator>Biswaranjan Acharya</creator>
        
        <creator>Manas Ranjan Pradhan</creator>
        
        <subject>Retinal images; blood vessel detection and segmentation; hysteresis thresholding; cumulative distribution function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Retinal vessel segmentation is an active research area in medical image processing. Several research outcomes on retinal vessel segmentation have emerged in recent years. Each method has its own pros and cons, either in the vessel detection stage or in its extraction. Based on a detailed empirical investigation, a novel retinal vessel extraction architecture is proposed, which makes use of a couple of existing algorithms.  In the proposed algorithm, vessel detection is carried out using a cumulative distribution function-based thresholding scheme. The resultant vessel intensities are extracted based on the hysteresis thresholding scheme. Experiments are carried out with retinal images from DRIVE and STARE databases.  The results in terms of Sensitivity, Specificity, and Accuracy are compared with five standard methods. The proposed method outperforms all methods in terms of Sensitivity and Accuracy for the DRIVE data set, whereas for STARE, the performance is comparable with the best method.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_63-A_Novel_Threshold_based_Method_for_Vessel_Intensity_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Performance of Stroke Prediction using ML Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120662</link>
        <id>10.14569/IJACSA.2021.0120662</id>
        <doi>10.14569/IJACSA.2021.0120662</doi>
        <lastModDate>2021-06-30T12:15:24.6270000+00:00</lastModDate>
        
        <creator>Gangavarapu Sailasya</creator>
        
        <creator>Gorli L Aruna Kumari</creator>
        
        <subject>Stroke; machine learning; logistic regression; decision tree classification; random forest classification; k-nearest neighbors; support vector machine; na&#239;ve bayes classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>A stroke is a health condition that causes damage by tearing blood vessels in the brain. It can also occur when there is a halt in the flow of blood and other nutrients to the brain. According to the World Health Organization (WHO), stroke is the leading cause of death and disability globally. Most prior work has been carried out on the prediction of heart stroke, but very few works address the risk of a brain stroke. With this in mind, various machine learning models are built to predict the possibility of stroke in the brain. This paper takes various physiological factors and uses machine learning algorithms, namely Logistic Regression, Decision Tree Classification, Random Forest Classification, K-Nearest Neighbors, Support Vector Machine, and Na&#239;ve Bayes Classification, to train different models for accurate prediction. The algorithm that best performed this task was Na&#239;ve Bayes, which gave an accuracy of approximately 82%.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_62-Analyzing_the_Performance_of_Stroke_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Operation of Smart Distribution Networks using Gravitational Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120661</link>
        <id>10.14569/IJACSA.2021.0120661</id>
        <doi>10.14569/IJACSA.2021.0120661</doi>
        <lastModDate>2021-06-30T12:15:24.6130000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Distributed generation; renewable energy; meta-heuristic algorithms; network reconfiguration; smart grid; reactive power compensation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This paper proposes a methodology for the optimal operation of a smart distribution network, considering network reconfiguration, allocation of distributed generation (DG) units, and optimal placement of shunt capacitors for reactive power compensation. In this work, the objective of minimizing total power losses is considered; optimizing this objective also results in a reduction of voltage deviation. The proposed problem is solved using the evolutionary gravitational search algorithm (GSA). Simulation studies are performed on the 33-bus radial distribution system (RDS). The simulation results reveal that there is a drastic reduction in power losses when network reconfiguration, DG allocation, and reactive power compensation are utilized.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_61-Optimal_Operation_of_Smart_Distribution_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiplicative Iterative Nonlinear Constrained Coupled Non-negative Matrix Factorization (MINC-CNMF) for Hyperspectral and Multispectral Image Fusion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120660</link>
        <id>10.14569/IJACSA.2021.0120660</id>
        <doi>10.14569/IJACSA.2021.0120660</doi>
        <lastModDate>2021-06-30T12:15:24.5970000+00:00</lastModDate>
        
        <creator>Priya K</creator>
        
        <creator>Rajkumar K K</creator>
        
        <subject>Hyperspectral data; multispectral data; minimum volume; nonlinear mixing model; spectral variability; spectral image fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Hyperspectral and multispectral (HS-MS) image fusion is a trending technology that enhances the quality of hyperspectral images. With this technology, precise information retrieved from the HS and MS images is combined to increase the spatial and spectral quality of the image. In the past decades, many image fusion techniques have been introduced in the literature. Most of them use the Coupled Non-negative Matrix Factorization (CNMF) technique, which is based on the Linear Mixing Model (LMM) and therefore neglects the nonlinearity factors in the unmixing and fusion of hyperspectral images. To overcome this limitation, we propose an unmixing-based fusion algorithm, namely Multiplicative Iterative Nonlinear Constrained Coupled Non-negative Matrix Factorization (MINC-CNMF), that enhances the spatial quality of the image by considering the nonlinearity factor associated with the unmixing process. This method not only considers the spatial quality but also enhances the spectral data by imposing a constraint known as minimum volume (MV), which helps to estimate accurate endmembers. We also measure the strength and superiority of our method against baseline methods using four public datasets and find that our method outperforms all the baseline methods.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_60-Multiplicative_Iterative_Nonlinear_Constrained.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposal to Improve the Bit Plane Steganography based on the Complexity Calculation Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120659</link>
        <id>10.14569/IJACSA.2021.0120659</id>
        <doi>10.14569/IJACSA.2021.0120659</doi>
        <lastModDate>2021-06-30T12:15:24.5800000+00:00</lastModDate>
        
        <creator>Cho Do Xuan</creator>
        
        <subject>Steganography; video steganography technique; bit-plane complexity segmentation (BPCS); complexity formula</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The video steganography technique is widely studied and applied today because of its benefits. In particular, video steganography using Bit-Plane Complexity Segmentation (BPCS) has increasingly proven its effectiveness compared to other methods. In this paper, based on the theoretical basis of the BPCS method, we propose a new method to improve the efficiency of the steganography process. Specifically, our proposal improves the complexity formula of the bit planes. The new formula not only improves the steganographic thresholds in the bit planes, finding more planes in which to hide secret information, but also ensures the amount of information hidden in the video and its safety. The experimental results in the paper not only demonstrate the effectiveness of our proposed method but also provide a new mechanism for digital image analysis in general and video steganography techniques in particular.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_59-A_Proposal_to_Improve_the_Bit_Plane_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Feature Filtering Approach by Integrating IG and T-Test Evaluation Metrics for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120657</link>
        <id>10.14569/IJACSA.2021.0120657</id>
        <doi>10.14569/IJACSA.2021.0120657</doi>
        <lastModDate>2021-06-30T12:15:24.5670000+00:00</lastModDate>
        
        <creator>Abubakar Ado</creator>
        
        <creator>Mustafa Mat Deris</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Aliyu Ahmed</creator>
        
        <subject>Dimensional reduction; feature filtering; feature selection; t-test; information gain; V-score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>High dimensionality is one of the main issues associated with text classification, as selecting the most discriminative feature subset for effective classifier utilization is a difficult task. This significant preprocessing stage of selecting the relevant features is often called feature selection or feature filtering. Eliminating the non-relevant and noisy features from the original feature set drastically reduces the size of the feature set and the time complexity of the classification models, while improving or maintaining their performance. Most existing filtering methods produce a subset with a relatively high number of features without a significant impact on running time, or produce a subset with fewer features but with degraded performance. In this paper, we propose a new bi-strategy filtering approach that integrates Information Gain with the t-test and selects a subset of informative features by considering both the score and the ranking of each feature. Our approach considers the disparity between the results produced by the benchmark metrics in order to maximize their advantages and lessen their disadvantages. The approach sets a new threshold parameter by computing the V-score of the features with minimum scores present in both subsets and further refines the selected features. Hence, it reduces the size of the feature subset without losing many informative features. Experimental results on three different text datasets show that the proposed method selects features that are highly discriminative and at the same time achieves a significant improvement in classification accuracy and F-score at the cost of minimal running time.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_57-A_New_Feature_Filtering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontological Framework for Healthcare Web Applications Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120658</link>
        <id>10.14569/IJACSA.2021.0120658</id>
        <doi>10.14569/IJACSA.2021.0120658</doi>
        <lastModDate>2021-06-30T12:15:24.5670000+00:00</lastModDate>
        
        <creator>Mamdouh Alenezi</creator>
        
        <subject>Security; web application; healthcare; digitization; requirement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The current era of digitization and transformation brings both issues and advantages to the healthcare sector. The advantages are exceptionally good, but the issues that are growing the business of attackers are serious and require effective prevention. Current statistics and attack vector analysis show that technical breaches are the most common and the highest in number. This information points to the need to prevent such breaches and to develop an effective model that helps experts and healthcare practitioners in security management. To achieve this goal, we adopt and apply an ontology-based approach to security and development methodology and provide a model that effectively produces systematic, secure pathways for designing healthcare web applications. The conceptual framework discussed in this study has many beneficial advantages: it gives a unified pathway to future developers, and it also focuses on requirement identification during development and portrays its significance.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_58-An_Ontological_Framework_for_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Imbalanced Data Classification in Auto Insurance by the Data Level Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120656</link>
        <id>10.14569/IJACSA.2021.0120656</id>
        <doi>10.14569/IJACSA.2021.0120656</doi>
        <lastModDate>2021-06-30T12:15:24.5500000+00:00</lastModDate>
        
        <creator>Mohamed Hanafy</creator>
        
        <creator>Ruixing Ming</creator>
        
        <subject>Machine learning; classification; insurance; imbalanced data problem; resampling methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Predicting the frequency of insurance claims has become a significant challenge because claim datasets are imbalanced: the number of occurring claims is usually far lower than the number of non-occurring claims. As a result, classification models tend to have a limited ability to predict the occurrence of claims. In this paper, we apply various data-level approaches to address the imbalanced data problem in the insurance industry. We developed 32 machine learning models for predicting insurance claim occurrence {(under-sampling, over-sampling, the combination of over- and under-sampling (hybrid), and SMOTE) &#215; (three decision tree models, three boosting models, and two bagging models) = 32}, and we compared the models&#39; accuracies, sensitivities, and specificities to assess their prediction performance. The dataset contains 81628 car insurance claims, of which 5714 occurred and 75914 did not. According to the findings, the AdaBoost classifier with oversampling and with the hybrid method produced the most accurate predictions: a sensitivity of 92.94%, a specificity of 99.82%, and an accuracy of 99.4% with oversampling, and a sensitivity of 92.48%, a specificity of 99.63%, and an accuracy of 99.1% with the hybrid method. This paper confirms that, when analyzing imbalanced data, the AdaBoost classifier, whether using oversampling or the hybrid process, can generate more accurate models than other boosting models, decision tree models, and bagging models.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_56-Improving_Imbalanced_Data_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An HC-CSO Algorithm for Workflow Scheduling in Heterogeneous Cloud Computing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120655</link>
        <id>10.14569/IJACSA.2021.0120655</id>
        <doi>10.14569/IJACSA.2021.0120655</doi>
        <lastModDate>2021-06-30T12:15:24.5330000+00:00</lastModDate>
        
        <creator>Jai Bhagwan</creator>
        
        <creator>Sanjeev Kumar</creator>
        
        <subject>Cloud computing; Crow Search Algorithm (CSA); Cat Swarm Optimization (CSO); H-CSO; HC-CSO; HEFT; Self-Motivated Inertia Weight (SMIW); Virtual Machines (VMs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Many researchers use meta-heuristic techniques for dynamic workflow task scheduling in cloud computing systems to obtain optimal solutions. Many swarm-intelligence algorithms have been designed so far, but they have several limitations: some get trapped in local optima, some have low convergence speed, and some have poor global search capability. There is therefore still a need to design new algorithms, or to modify existing ones, to overcome these limitations. A Hybrid Cat Swarm Optimization algorithm named H-CSO was previously designed, inspired by the HEFT algorithm, to overcome the initialization problem of Cat Swarm Optimization. However, that algorithm can still get stuck in local minima. To overcome this limitation, a part of the Crow Search Algorithm has been integrated into H-CSO, as described in this paper. Simulation results show that the new hybrid algorithm, named HC-CSO, outperforms CSO and H-CSO.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_55-An_HC_CSO_Algorithm_for_Workflow_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Plastic Shredding Machine to Obtain Small Plastic Waste</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120654</link>
        <id>10.14569/IJACSA.2021.0120654</id>
        <doi>10.14569/IJACSA.2021.0120654</doi>
        <lastModDate>2021-06-30T12:15:24.5200000+00:00</lastModDate>
        
        <creator>Witman Alvarado-Diaz</creator>
        
        <creator>Jason Chicoma-Moreno</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Luis Nu&#241;ez-Tapia</creator>
        
        <subject>Automation; pollution; filaments; recycling; plastic waste</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>One of the biggest environmental problems in the world is the excess of plastic waste. Around the world there are companies dedicated to recycling plastic waste, since 42% of the plastics generated worldwide are used only once. In Peru, a culture of recycling is not promoted: each person uses approximately 30 kilos of plastic per year, and Metropolitan Lima and Callao alone generate 46% of the nation&#39;s plastic waste. In view of this problem, this article presents the design of a plastic shredding machine that produces small plastic waste, helping people work in the recycling industry in an automated way. It would also generate jobs, since the machine requires staff to operate it, and it will be extremely useful for reducing plastic pollution in the environment, which has increased due to COVID-19. With the proposed machine, the plastic will first be sorted by color and by type of composition, whether Polyethylene Terephthalate (PET), High Density Polyethylene (HDPE), Low Density Polyethylene (LDPE), Polyvinyl Chloride (PVC), or others (plastic mix), and will then go through the shredding process to become small plastic waste, which could be turned into filament for 3D printers.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_54-Design_of_a_Plastic_Shredding_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IoT) Based ECG System for Rural Health Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120653</link>
        <id>10.14569/IJACSA.2021.0120653</id>
        <doi>10.14569/IJACSA.2021.0120653</doi>
        <lastModDate>2021-06-30T12:15:24.5030000+00:00</lastModDate>
        
        <creator>Md. Obaidur Rahman</creator>
        
        <creator>Mohammod Abul Kashem</creator>
        
        <creator>Al-Akhir Nayan</creator>
        
        <creator>Most. Fahmida Akter</creator>
        
        <creator>Fazly Rabbi</creator>
        
        <creator>Marzia Ahmed</creator>
        
        <creator>Mohammad Asaduzzaman</creator>
        
        <subject>Internet of things (IoT); electrocardiogram (ECG) monitoring system; ECG signal parameters; cardiovascular disease; logistic regression model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Nearly 30% of the people in the rural areas of Bangladesh live below the poverty level. Moreover, due to the unavailability of modern healthcare technology, nursing and diagnosis facilities are limited for rural people, who are therefore deprived of proper healthcare. In this context, modern technology can be used to mitigate their health problems. ECG sensing tools are interfaced with the human chest, and the requisite cardiovascular data is collected through an IoT device. These data are stored in the cloud via MQTT and HTTP servers. This study proposes an innovative IoT-based ECG monitoring system for cardiovascular or heart patients. The ECG signal parameters P, Q, R, S, and T are collected, pre-processed, and predicted to monitor cardiovascular conditions for further health management. A machine learning algorithm is used to determine the significance of the ECG signal parameters and the error rate. The logistic regression model gave the best agreement between the training and test data. Prediction was performed to determine the variation of PQRST quality and its suitability for the ECG monitoring system. Considering the values of the quality parameters, satisfactory results were obtained. The proposed IoT-based ECG system can reduce the cost and complexity of cardiovascular healthcare in the future.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_53-Internet_of_Things_IoT_based_ECG_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Agent-Network Environment Mapping on Open-AI Gym for Q-Routing Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120652</link>
        <id>10.14569/IJACSA.2021.0120652</id>
        <doi>10.14569/IJACSA.2021.0120652</doi>
        <lastModDate>2021-06-30T12:15:24.4870000+00:00</lastModDate>
        
        <creator>Varshini Vidyadhar</creator>
        
        <creator>R. Nagaraja</creator>
        
        <subject>Reinforcement learning; environment; agent; network; Net-AI-Gym; Q-routing; rule-based routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Changes in network dynamics demand a routing algorithm that adapts intelligently to changing requirements and parameters. In this regard, an efficient routing mechanism plays an essential role in supporting dynamic and QoS-aware network services. This paper introduces a self-learning, intelligent approach to route selection in the network. A Q-routing approach based on a reinforcement learning algorithm is designed to provide reliable and stable packet transmission for different network services with minimal delay and low routing overhead. The novelty of the proposed work is a new customized environment for the network, namely Net-AI-Gym, which has been integrated into Open-AI Gym. In addition, the proposed Q-routing with Net-AI-Gym offers optimized path exploration to support multi-QoS-aware services in different networking applications. The performance of Net-AI-Gym is assessed with small, medium, and large numbers of nodes, and the results of the proposed system are compared with an existing rule-based method. The study outcome shows Net-AI-Gym&#39;s potential to effectively support varied scales of nodes in the network. Furthermore, the proposed Q-routing approach outperforms the rule-based routing technique in terms of episodes vs. rewards and path length.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_52-Evaluation_of_Agent_Network_Environment_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sarcasm Detection in Tweets: A Feature-based Approach using Supervised Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120651</link>
        <id>10.14569/IJACSA.2021.0120651</id>
        <doi>10.14569/IJACSA.2021.0120651</doi>
        <lastModDate>2021-06-30T12:15:24.4870000+00:00</lastModDate>
        
        <creator>Arifur Rahaman</creator>
        
        <creator>Ratnadip Kuri</creator>
        
        <creator>Syful Islam</creator>
        
        <creator>Md. Javed Hossain</creator>
        
        <creator>Mohammed Humayun Kabir</creator>
        
        <subject>Machine learning; detection; sarcasm; sentiment; tweets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Sarcasm (i.e., the use of irony to mock or convey contempt) detection in tweets and other social media platforms is one of the problems facing the regulation and moderation of social media content. Sarcasm is difficult to detect, even for humans, due to the deliberate ambiguity in word usage. Existing approaches to automatic sarcasm detection rely primarily on lexical and linguistic cues, but they have produced little or no significant improvement in sentiment-analysis accuracy. We propose a robust and efficient system to detect sarcasm and thereby improve the accuracy of sentiment analysis. In this study, four feature sets covering the types of sarcasm commonly used on social media are used to classify tweets as sarcastic or non-sarcastic. The study identifies a sarcasm feature set that, combined with an effective supervised machine learning model, leads to better accuracy. Results show that, with the right feature selection, Decision Tree (91.84%) and Random Forest (91.90%) outperform other supervised machine learning algorithms in terms of accuracy. The paper highlights suitable supervised machine learning models, along with their appropriate feature sets, for detecting sarcasm in tweets.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_51-Sarcasm_Detection_in_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Current Perspective of Symbiotic Organisms Search Technique in Cloud Computing Environment: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120650</link>
        <id>10.14569/IJACSA.2021.0120650</id>
        <doi>10.14569/IJACSA.2021.0120650</doi>
        <lastModDate>2021-06-30T12:15:24.4700000+00:00</lastModDate>
        
        <creator>Ajoze Abdulraheem Zubair</creator>
        
        <creator>Shukor Bin Abd Razak</creator>
        
        <creator>Md. Asri Bin Ngadi</creator>
        
        <creator>Aliyu Ahmed</creator>
        
        <subject>Cloud computing; cloud resource management; cloud task scheduling; symbiotic organisms search; entrapment; convergence speed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Nature-inspired algorithms in computer science and engineering take their inspiration from living things and imitate their behavior to construct functional models. The symbiotic organisms search (SOS) algorithm is a promising new metaheuristic based on the symbiotic relationships that exist between different species in an ecosystem, where organisms develop bonds such as mutualism, commensalism, and parasitism to survive in their environment. Standard SOS has since been modified several times, either through hybridization or as improved versions of the original algorithm. Most of these modifications came from engineering construction work and from other disciplines such as medicine and finance. However, little improvement of the standard SOS has been seen in its application to cloud computing environments, especially cloud task scheduling. This paper therefore provides an overview of SOS applications to the task scheduling problem and suggests a new enhanced method for better performance of the technique in terms of convergence speed.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_50-Current_Perspective_of_Symbiotic_Organisms_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Performance of Anomaly Detection Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120649</link>
        <id>10.14569/IJACSA.2021.0120649</id>
        <doi>10.14569/IJACSA.2021.0120649</doi>
        <lastModDate>2021-06-30T12:15:24.4570000+00:00</lastModDate>
        
        <creator>Chiranjit Das</creator>
        
        <creator>Akhtar Rasool</creator>
        
        <creator>Aditya Dubey</creator>
        
        <creator>Nilay Khare</creator>
        
        <subject>Anomaly; machine learning; outlier detection; minimum covariance determinant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>An outlier is a data observation that is considerably irregular compared with the rest of the dataset. Outliers present in a dataset may compromise its integrity. Applying machine learning techniques to healthcare-related datasets in real-world applications can completely change the field&#39;s present scenario: such applications can highlight physiological data exhibiting anomalous behavior, which can lead to a fast and necessary response and help gather more critical knowledge about the particular area. A broad amount of study is available on the performance of anomaly detection techniques applied to popular public datasets, but there is minimal analytical work on supervised and unsupervised methods applied to physiological datasets. The breast cancer dataset is both widely used and numeric. This paper utilizes and analyzes four machine learning techniques and their capacity to distinguish anomalies in the breast cancer dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_49-Analyzing_the_Performance_of_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Cantilever Retaining Wall Stability using Optimal Kernel Function of Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120648</link>
        <id>10.14569/IJACSA.2021.0120648</id>
        <doi>10.14569/IJACSA.2021.0120648</doi>
        <lastModDate>2021-06-30T12:15:24.4400000+00:00</lastModDate>
        
        <creator>Rohaya Alias</creator>
        
        <creator>Siti Jahara Matlan</creator>
        
        <creator>Aniza Ibrahim</creator>
        
        <subject>Cantilever retaining wall; kernel function; prediction; stability; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The Support Vector Machine is an artificial intelligence technique that can be applied to forecast the stability of cantilever retaining walls. Selecting the right kernel function is very important for the Support Vector Machine model to make good predictions; however, there are no general guidelines for selecting a kernel function. Therefore, the Linear, Polynomial, Radial Basis Function, and Sigmoid kernel functions were evaluated using 10-fold cross-validation to determine the optimal kernel. The performance of each function is evaluated based on the mean square error and the squared correlation coefficient: a mean square error closer to zero and a squared correlation coefficient closer to one indicate a more accurate kernel function. Results show that the Support Vector Machine model with the Radial Basis Function kernel predicts the stability of cantilever retaining walls with better accuracy and reliability than the other kernel functions.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_48-Prediction_of_Cantilever_Retaining_Wall_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Effective Hybrid Fault Tolerant Scheduling Model for Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120646</link>
        <id>10.14569/IJACSA.2021.0120646</id>
        <doi>10.14569/IJACSA.2021.0120646</doi>
        <lastModDate>2021-06-30T12:15:24.4230000+00:00</lastModDate>
        
        <creator>Annabathula. Phani Sheetal</creator>
        
        <creator>K. Ravindranath</creator>
        
        <subject>Cloud computing; failures; fault tolerant; critical tasks; scheduling; fault recovery; overhead</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Cloud computing provides a flexible and cost-effective way for end users to access data from multi-platform environments. Despite the supporting features of cloud computing, resource failures can still occur; hence, a fault-tolerant mechanism is needed to achieve undisrupted performance of cloud services. Task reallocation and duplication are the two commonly used fault-tolerant mechanisms, but task replication results in large storage and computational overhead as the number of tasks grows. If the number of faults is high, it incurs more storage overhead and time complexity depending on task criticality. To solve these issues, we propose a Cost Effective Hybrid Fault Tolerant Scheduling (CEHFTS) Model for cloud computing. In this model, the Failure Occurrence Probability (FoP) of each VM is estimated from its previous failures and successful executions. An adaptive fault-recovery timer is then maintained during a fault and adjusted according to the type of fault. Experimental results show that the CEHFTS model achieves 43% lower storage cost and 13% lower response delay for critical tasks compared to the existing technique.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_46-Cost_Effective_Hybrid_Fault_Tolerant_Scheduling_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Image Clustering Technique based on Fuzzy C-means and Cuckoo Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120647</link>
        <id>10.14569/IJACSA.2021.0120647</id>
        <doi>10.14569/IJACSA.2021.0120647</doi>
        <lastModDate>2021-06-30T12:15:24.4230000+00:00</lastModDate>
        
        <creator>Lahbib KHRISSI</creator>
        
        <creator>Nabil EL AKKAD</creator>
        
        <creator>Hassan SATORI</creator>
        
        <creator>Khalid SATORI</creator>
        
        <subject>Clustering; classification; image segmentation; fuzzy c-means; cuckoo search algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Clustering is a predominant technique in image segmentation due to its simple and efficient approach. It is very important for the analysis, extraction, and interpretation of images, which makes it useful in multiple applications across various fields. In this article, we propose an image segmentation technique based on the cooperation between an optimization algorithm, the Cuckoo Search Algorithm (CSA), and a clustering technique, Fuzzy C-means (FCM). The proposed clustering method proceeds in two major steps. In the first step, CSA explores the entire search space of the given data to find the optimal clustering centers, which are then evaluated using a new objective function. The result of the first step is used to initialize the FCM algorithm in the second step. The efficiency of the suggested method is measured on several images selected from the BSD300 database and compared with other algorithms, such as FCM optimized by genetic algorithms (FCM-GA) and FCM optimized by particle swarm optimization (FCM-PSO). The experimental results show that the proposed method improves segmentation, based on the analysis of the best values of fitness, MSE, PSNR, CC, RI, GCE, BDE, and VOI.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_47-An_Efficient_Image_Clustering_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormal Pulmonary Sounds Classification Algorithm using Convolutional Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120645</link>
        <id>10.14569/IJACSA.2021.0120645</id>
        <doi>10.14569/IJACSA.2021.0120645</doi>
        <lastModDate>2021-06-30T12:15:24.4100000+00:00</lastModDate>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Arancibia-Garcia Alexander</creator>
        
        <creator>Ch&#225;vez Fr&#237;as William</creator>
        
        <creator>Cieza-Terrones Michael</creator>
        
        <creator>Herrera-Arana V&#237;ctor</creator>
        
        <creator>Ramos-Cosi Sebastian</creator>
        
        <subject>Algorithm; classification; computational neural networks; lung sounds; mortality; pneumonia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In the world and in Peru, acute respiratory infections are the main cause of death, especially among the most vulnerable populations: children under 5 years of age and older adults. Pneumonia is the leading cause of death of children in the world, and 60.2% of pneumonia cases affect children under 5 years of age. Thus, prevention and timely treatment of lung diseases are crucial to reduce infant mortality in Peru. Among the main problems associated with this high percentage is the lack of medical professionals and resources, especially in remote areas such as Puno, Huancavelica, and Arequipa, which experience temperatures as low as -20&#176;C during the cold season. This study develops an algorithm based on convolutional neural networks to differentiate between normal and abnormal lung sounds. An initial base of 917 sounds was increased to 8253 sounds through data augmentation, a process carried out because convolutional neural networks require large amounts of data. From each signal, features were extracted using three methods: MFCC, Mel spectrogram, and STFT. Three models were generated: the first classifies sounds as normal or abnormal and obtained a training accuracy of 1 and a testing accuracy of 0.998; the second classifies normal sounds, pneumonia, and other abnormalities and obtained a training accuracy of 0.9959 and a testing accuracy of 0.9885; the third classifies sounds by specific ailment and obtained a training accuracy of 0.9967 and a testing accuracy of 0.9909. This research provides interesting findings on the automatic diagnosis and classification of lung sounds using convolutional neural networks, and it is the starting point for a platform to assess the risk of pneumonia at the first moment of care, allowing the rapid attention and referral needed to reduce mortality associated mainly with pneumonia.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_45-Abnormal_Pulmonary_Sounds_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining Optimal Number of K for e-Learning Groups Clustered using K-Medoid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120644</link>
        <id>10.14569/IJACSA.2021.0120644</id>
        <doi>10.14569/IJACSA.2021.0120644</doi>
        <lastModDate>2021-06-30T12:15:24.3930000+00:00</lastModDate>
        
        <creator>S. Anthony Philomen Raj</creator>
        
        <creator>Vidyaathulasiraman</creator>
        
        <subject>Clustering; e-learning; elbow method; k-means; k-medoid; machine learning; silhouette method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>e-Learning is most effective when learners are grouped and facilitated to learn according to their learning style and at their own pace. Extensive research has been carried out to categorize learners based on various e-learning parameters. Most of these studies have deployed clustering principles for grouping e-learners and, in particular, have utilized the K-Medoid algorithm for better clustering. In the classical K-Medoid algorithm, predicting or determining the value of K is critical; two methods, the Elbow and Silhouette methods, are widely applied for this purpose. In this paper, we experiment with both methods to determine the value of K for clustering e-learners with K-Medoid and show that the Silhouette method best predicts the value of K.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_44-Determining_Optimal_Number_of_K_for_e_Learning_Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feistel Network Assisted Dynamic Keying based SPN Lightweight Encryption for IoT Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120642</link>
        <id>10.14569/IJACSA.2021.0120642</id>
        <doi>10.14569/IJACSA.2021.0120642</doi>
        <lastModDate>2021-06-30T12:15:24.3770000+00:00</lastModDate>
        
        <creator>Krishna Priya Gurumanapalli</creator>
        
        <creator>Nagendra Muthuluru</creator>
        
        <subject>Internet-of-Things; dynamic programming; lightweight encryption; generalized Feistel network; substitution and permutation network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In the last few years, Internet-of-Things (IoT) technology has emerged significantly to serve varied purposes, including healthcare, surveillance and control, business communication, civic administration, and even varied financial activities. Despite such broadened applications, being distributed, wireless-based systems, IoTs are often considered vulnerable to intrusion or malicious attacks, where, exploiting the loosely connected peers, attackers intend to gain device or data access without authorization. However, because IoT devices are resource-constrained while demanding time-efficient computation, the majority of classical cryptosystems are either computationally exhaustive or too limited to avoid attacks such as Brute-Force, Smart Card Loss, Impersonation, and Linear and Differential attacks. The assumption that increasing key size and encryption rounds can achieve augmented security often fails in IoT due to increased complexity, overhead, and eventual resource exhaustion. Considering this limitation, in this paper we propose a state-of-the-art Generalized Feistel Network assisted Shannon-Conditioned and Dynamic Keying based SSPN (GFS-SSPN) Lightweight Encryption System for IoT Security. Unlike classical cryptosystems or even substitution and permutation network (SPN) based methods, we designed a Shannon-criteria-bounded SPN model with a Generalized Feistel Network (SPN-GFS) that employs a 64-bit dynamic key with five rounds of encryption to enable highly attack-resilient IoT security. The proposed model is designed in such a manner that it suits both data-level security and device-level access-credential security, enabling a “Fit-To-All” security solution for IoTs. Simulation results revealed that the proposed GFS-SSPN model exhibits very small encryption time with optimal NPCR and UACI. Additionally, the correlation output was found to be encouragingly fair, indicating higher attack resilience.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_42-Feistel_Network_Assisted_Dynamic_Keying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
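The GFS-SSPN cipher itself is not specified in the abstract above. As a hedged, generic illustration of the Feistel structure it builds on (round function, subkeys, and block width below are arbitrary stand-ins, not the paper's design), this sketch shows the property that makes Feistel networks attractive for lightweight ciphers: decryption reuses the same network with the subkeys reversed, even when the round function is not invertible:

```python
def round_fn(half, subkey):
    # Arbitrary stand-in round function; it need not be invertible.
    return (half * 31 + subkey) % 65536

def feistel(block, subkeys):
    # One balanced Feistel round per subkey: (L, R) -> (R, L xor F(R, k)).
    left, right = block
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    return (left, right)

def encrypt(block, subkeys):
    return feistel(block, subkeys)

def decrypt(block, subkeys):
    # Swap the halves, run the same network with the subkeys reversed,
    # and swap back: the classic Feistel inversion trick.
    left, right = block
    a, b = feistel((right, left), list(reversed(subkeys)))
    return (b, a)

subkeys = [3, 7, 11, 19, 23]      # five rounds, echoing the abstract
plaintext = (12345, 54321)        # two 16-bit halves
ciphertext = encrypt(plaintext, subkeys)
print(decrypt(ciphertext, subkeys))
```

The round-trip recovers the plaintext exactly, which is why a dynamic (per-session) key schedule can be dropped in without changing the network itself.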
    
    <record>
        <title>Cloud-based Secure Healthcare Framework by using Enhanced Ciphertext Policy Attribute-Based Encryption Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120643</link>
        <id>10.14569/IJACSA.2021.0120643</id>
        <doi>10.14569/IJACSA.2021.0120643</doi>
        <lastModDate>2021-06-30T12:15:24.3770000+00:00</lastModDate>
        
        <creator>Siti Dhalila Mohd Satar</creator>
        
        <creator>Mohamad Afendee Mohamed</creator>
        
        <creator>Masnida Hussin</creator>
        
        <creator>Zurina Mohd Hanapi</creator>
        
        
        <subject>Cloud computing; privacy and integrity; fine-grained access control; Ciphertext policy attribute-based encryption; electronic health record</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Cloud computing is an emerging technology that has been used to provide better healthcare services to users because of its convenient and economical features. Healthcare services require fast and reliable data sharing at any time, from anywhere, for better monitoring and decision making on medical requirements. However, the privacy and integrity of electronic health records become a significant issue during data sharing and outsourcing in the Cloud. The data privacy of clients/patients is paramount in healthcare services, where exposure of the data to unauthorized parties is unacceptable. To address this security loophole, this paper presents a Cloud-based Secure Healthcare Framework (SecHS) to offer safe access to healthcare and medical data. Specifically, this paper enhances the Ciphertext Policy Attribute-Based Encryption (CP-ABE) scheme by adding two more modules, which aim to provide fine-grained access control and to offer privacy and integrity of data; it facilitates both encryption and hashing schemes. The proposed framework is compared with existing frameworks that use the CP-ABE scheme, and the comparison shows that SecHS offers better features for securing healthcare service data. Data security requirements such as privacy, integrity, and fine-grained access control must be effectively addressed to assure data sharing in the Cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_43-Cloud_based_Secure_Healthcare_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Socio-economic Implications of Artificial Intelligence from Higher Education Student’s Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120641</link>
        <id>10.14569/IJACSA.2021.0120641</id>
        <doi>10.14569/IJACSA.2021.0120641</doi>
        <lastModDate>2021-06-30T12:15:24.3630000+00:00</lastModDate>
        
        <creator>Sarah Bamatraf</creator>
        
        <creator>Lobna Amouri</creator>
        
        <creator>Nahla El-Haggar</creator>
        
        <creator>Aishah Moneer</creator>
        
        <subject>Artificial intelligence; Saudi community; data analysis; social efficiency impact; economic productivity impact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>As a result of the instability of oil prices, the economies of the Gulf region are increasingly focusing on new technologies. Thus, Saudi Arabia has demonstrated a strong commitment towards the development and implementation of Artificial Intelligence (AI) technologies as alternative sources of revenue and growth, in line with globalisation, development, and Vision 2030. This paper examines the impact of AI on the Saudi community, especially its social and economic evolution. A special focus is placed on the use of smart cars and smart cameras to intelligently monitor traffic, public services, and national security. A total of 424 participants from the Eastern Province took part in this study, and an analysis and discussion of the obtained results are presented. The findings showed that 75.71% of participants strongly agreed about the economic impact of AI, leading to an increase in both government and business financial incomes, whereas only 59.84% strongly agreed about the social impact of AI, as they are worried about AI ethical concerns, job loss, and the changing workforce.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_41-Exploring_the_Socio_economic_Implications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Evocative User Interface Design for Lifestyle Intervention in Non-communicable Diseases using Kansei</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120640</link>
        <id>10.14569/IJACSA.2021.0120640</id>
        <doi>10.14569/IJACSA.2021.0120640</doi>
        <lastModDate>2021-06-30T12:15:24.3470000+00:00</lastModDate>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Normaizeerah Mohd Noor</creator>
        
        <creator>Norulzahrah Mohd Zainudin</creator>
        
        <creator>Nur Atiqah Malizan</creator>
        
        <creator>Nor Asiakin Hasbullah</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <subject>Emotion; Kansei; non-communicable diseases; lifestyle intervention; user-interface design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The advancement of technology has led to the development of artificial intelligence-based healthcare applications that can be easily accessed and used to assist people in lifestyle intervention for preventing the development of non-communicable diseases (NCDs). Previous research suggested that users are demanding a more emotionally evocative user interface design. However, most of the time, this demand has been ignored due to the lack of a model that could be referred to in developing such a design. This creates a gap in user interface design that could lead to ineffective content delivery in the NCD domain. This paper aims to investigate emotion traits and their relationship with user interface design for lifestyle intervention. The Kansei Engineering method was applied to determine the dimensions for constructing an emotionally evocative user interface design. Data analysis was performed using the SPSS statistical tool, and the results showed which emotional concepts are significant and impactful for user interface design for lifestyle intervention in the NCD domain. The outcome of this research is expected to open new research fields that incorporate multiple research domains, including user interface design and emotion.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_40-Emotional_Evocative_User_Interface_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microorganisms: Integrating Augmented Reality and Gamification in a Learning Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120639</link>
        <id>10.14569/IJACSA.2021.0120639</id>
        <doi>10.14569/IJACSA.2021.0120639</doi>
        <lastModDate>2021-06-30T12:15:24.3300000+00:00</lastModDate>
        
        <creator>Ratna Zuarni Ramli</creator>
        
        <creator>Nor Athirah Umairah Marobi</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <subject>Augmented reality; digital game-based learning (DGBL); game; learning tool; microorganisms; usability testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Microorganisms is a Year 6 Science topic that primary school students find considerably less attractive because of the enormous number of facts, which require good imagination to understand. Only limited applications are available on smartphones as tools to learn such subjects, especially in Science, Technology, Engineering and Mathematics (STEM). Since the young generation is very much into current technology, there is a need to develop an application that gains students&#39; interest and improves their understanding of microorganisms. Therefore, a microorganism learning application that combines augmented reality (AR) and gamification, called Microorganisms, was developed. The Microorganisms prototype includes two modules: learning and training. The learning module uses AR technology that scans marker images to display a digital layer of microorganisms in three dimensions (3D). Meanwhile, the training module is delivered through gamification, consisting of quiz questions, a timer, and a score. The application was designed using the Agile methodology and developed using software such as Unity, Autodesk 3ds Max, Vuforia, and Firebase. Ten respondents, nine students and one teacher in a primary school, assessed the prototype through experimental testing. The results showed that, on average, the user satisfaction value was 4.6 out of 5. Thus, the Microorganisms application based on AR and gamification can be considered a good learning tool for primary school students to learn about microorganisms.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_39-Microorganisms_Integrating_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Law Architecture for Regulatory-Compliant Public Enterprise Model: A Focus on Healthcare Reform in Egypt</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120638</link>
        <id>10.14569/IJACSA.2021.0120638</id>
        <doi>10.14569/IJACSA.2021.0120638</doi>
        <lastModDate>2021-06-30T12:15:24.3170000+00:00</lastModDate>
        
        <creator>Alsayed Abdelwahed Mohamed</creator>
        
        <creator>Nashwa El-bendary</creator>
        
        <creator>A. Abdo</creator>
        
        <subject>Law architecture; regulatory compliance; requirements engineering; enterprise architecture; law ontology; TOGAF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Public business operations are governed by a set of legal sources, which regulate their implementation under administrative laws that increasingly influence software system design and development. Enterprise Architecture (EA) is a critical approach for driving business and Information Systems (IS) transformation in the public sector. On the other hand, EA frameworks lack representation schemas that support law models. An understanding of the law architecture in the government domain is required for EA work and is thus the first architecture activity that must be completed. As law-compliance reviews of EA approaches are performed by legal experts, there is a gap between law experts and technical system architects. To cover these gaps, this paper proposes a novel framework for analyzing administrative laws, extracting legal policies and legal rules, identifying their relationships with other EA domains, and identifying law-compliance requirements. Moreover, the integration of our proposed law architecture framework with existing EA frameworks to reach a law-compliant public enterprise model is described. Finally, the applicability of the proposed framework is shown and validated through a case study, and subject matter experts in the legal domain evaluated the legal policies and rules extracted during the implementation of our proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_38-Law_Architecture_for_Regulatory_Compliant_Public_Enterprise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method to Prevent SQL Injection Attack using an Improved Parameterized Stored Procedure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120636</link>
        <id>10.14569/IJACSA.2021.0120636</id>
        <doi>10.14569/IJACSA.2021.0120636</doi>
        <lastModDate>2021-06-30T12:15:24.3000000+00:00</lastModDate>
        
        <creator>Kamsuriah Ahmad</creator>
        
        <creator>Mayshara Karim</creator>
        
        <subject>SQL injection prevention; database security; parameterized stored procedure; network firewall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Structured Query Language (SQL) injection is one of the critical threats to database security. SQL injection attacks put the data contained in the database at risk of being exploited by irresponsible parties, compromising data integrity, disrupting server operations, and in turn affecting the organization&#39;s image. Although SQL injection is an attack performed at the application level, its prevention requires security controls at all levels, namely the application, database, and network levels. The absence of SQL injection prevention measures at the application level leaves the database vulnerable to attack. Reviews indicate that current approaches are still not sufficient in addressing three issues: i) improper use of dynamic SQL, ii) lack of an input validation process, and iii) inconsistent error handling. Currently, program and database code security is based solely on basic security measures focused at the network level, such as network firewalls, database access control, and web server request filtering. Unfortunately, these measures remain inadequate to safeguard program code and databases from attack. To overcome the shortcomings reflected in these three issues, a new comprehensive method is proposed, using an improved parameterized stored procedure to enhance database security. Experimental results show that the proposed method is able to prevent SQL injection from occurring and to shorten processing time when compared with existing methods, hence improving database security.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_36-A_Method_to_Prevent_SQL_Injection_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
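The improved parameterized stored procedure of the record above is not reproduced in its abstract. As a hedged sketch of the underlying principle only, the snippet below contrasts string-concatenated SQL with parameterized SQL using Python's sqlite3 module (SQLite has no stored procedures, so only the parameterization half of the paper's method is illustrated; the table and payload are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"   # classic tautology payload

# Vulnerable: concatenation lets the payload rewrite the WHERE clause.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the value as data, never as SQL text.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(vulnerable)   # the injected tautology matches every row
print(safe)         # the payload is treated as a literal string; no match
```

This is the dynamic-SQL misuse the abstract lists as issue i); binding parameters closes it regardless of input content.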
    
    <record>
        <title>From User Stories to UML Diagrams Driven by Ontological and Production Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120637</link>
        <id>10.14569/IJACSA.2021.0120637</id>
        <doi>10.14569/IJACSA.2021.0120637</doi>
        <lastModDate>2021-06-30T12:15:24.3000000+00:00</lastModDate>
        
        <creator>Samia Nasiri</creator>
        
        <creator>Yassine Rhazali</creator>
        
        <creator>Mohammed Lahmer</creator>
        
        <creator>Amina Adadi</creator>
        
        <subject>Ontology; prolog rules; natural language processing; UML diagrams; user stories</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The user story format has become the most popular way of expressing requirements in Agile methods. However, a requirement does not state how a solution will be physically achieved. The purpose of this paper is to present a new approach that automatically transforms user stories into UML diagrams, namely class, use case, and package diagrams. User stories are written in natural language (English), so the use of a natural language processing tool was necessary for their processing; in our case, we used Stanford CoreNLP. The automation approach combines rules formulated as predicates with an ontological model. Prolog rules are used to extract relationships between classes and to eliminate those that are at risk of error. To extract the design elements, the Prolog rules use the dependencies provided by Stanford CoreNLP. An ontology representing the components of the user stories was created to identify equivalent relationships and inclusion use cases. The developed tool was implemented in the Python programming language and has been validated through several case studies.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_37-From_User_Stories_to_UML_Diagrams_Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
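The record above extracts design elements from user stories via Stanford CoreNLP dependencies and Prolog rules; that pipeline cannot be reproduced here. As a much simpler, hypothetical stand-in showing the same first step (recovering an actor and an action from the canonical user-story template), a pattern-based parser might look like this:

```python
import re

# Toy pattern for "As a/an ROLE, I want to ACTION [so that REASON]."
# This is an illustrative stand-in, not the paper's dependency-based method.
STORY = re.compile(
    r"As an? ([\w ]+?), I want to ([\w ]+?)(?: so that [\w ]+)?\.?$",
    re.IGNORECASE)

def parse_story(text):
    # Return (actor, action) or None when the template does not match.
    m = STORY.match(text.strip())
    return (m.group(1), m.group(2)) if m else None

print(parse_story("As a librarian, I want to register a new book."))
```

In the paper's approach the actor would typically become a use-case diagram actor and the action a candidate use case; a regex obviously cannot resolve the richer relationships the ontology and Prolog rules handle.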
    
    <record>
        <title>Anonymity Feature in Android Mobile Apps for Interest Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120635</link>
        <id>10.14569/IJACSA.2021.0120635</id>
        <doi>10.14569/IJACSA.2021.0120635</doi>
        <lastModDate>2021-06-30T12:15:24.2830000+00:00</lastModDate>
        
        <creator>Lee Sin Yi</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <creator>Norharyati Harum</creator>
        
        <subject>Anonymity feature; mobile social apps; quantitative observation; android apps; anonymous social media; interest groups component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Mobile apps that provide platforms for interest-oriented communities (or interest groups) allow people with common interests to gather virtually and share their passion and ideas. While participation in such apps is voluntary, some people feel uncomfortable revealing their real identities due to privacy and safety concerns. Thus, some mobile apps provide an anonymity feature that allows people to join a group anonymously. Nevertheless, little is known about how the anonymity feature relates to people who prefer to join interest groups. In this paper, we hypothesize that mobile app users who highly value interest groups will also highly appreciate the anonymity feature provided in those groups. In particular, we explored the market segment of Android mobile apps with anonymity features within selected interest groups. A pilot study was conducted in which 34 Android app users, primarily Malaysian, filled out questionnaires designed to investigate the anonymity feature in the apps. The results of the pilot study show that most Android apps nowadays allow their users to remain anonymous. The findings show that most users who give a high score to the importance of interest-based groups also give a high score to the importance of the anonymity feature offered by mobile app providers.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_35-Anonymity_Feature_in_Android_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Protecting Teenagers from Cyber Crimes and Cyberbullying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120634</link>
        <id>10.14569/IJACSA.2021.0120634</id>
        <doi>10.14569/IJACSA.2021.0120634</doi>
        <lastModDate>2021-06-30T12:15:24.2700000+00:00</lastModDate>
        
        <creator>Sultan Saud Alanazi</creator>
        
        <creator>Adwan Alownie Alanazi</creator>
        
        <subject>Cyberbullying; cyber bullying; internet crimes; social media security; e-crimes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Social applications provide powerful tools that allow people to connect and interact with each other. However, their negative use cannot be ignored. Cyberbullying is a new and serious Internet problem and one of the most common risks teenagers face online. More than half of young people report that they do not tell their parents when it occurs, which can have significant psychological consequences. Cyberbullying involves the deliberate use of digital media on the Internet to convey false or embarrassing information about others. Therefore, this article provides a way for parents to detect cyberbullying in social media applications. The purpose of our work is to develop an architectural model for identifying and measuring the degree of cyberbullying faced by children on social media applications. For parents, this will be a good tool for monitoring their children without invading their privacy. Finally, some interesting open questions are raised, suggesting promising ideas for new research in this field.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_34-A_Framework_for_Protecting_Teenagers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Small and Medium Enterprise Smart Entrepreneurship Training Framework Components using Thematic Analysis and Expert Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120633</link>
        <id>10.14569/IJACSA.2021.0120633</id>
        <doi>10.14569/IJACSA.2021.0120633</doi>
        <lastModDate>2021-06-30T12:15:24.2700000+00:00</lastModDate>
        
        <creator>Anis Nur Assila Rozmi</creator>
        
        <creator>Puteri N.E. Nohuddin</creator>
        
        <creator>Abdul Razak Abdul Hadi</creator>
        
        <creator>Mohd Izhar A. Bakar</creator>
        
        <subject>Small Medium Enterprise (SME); business owner; thematic analysis method; expert panel; Information and Communication Technology (ICT); course selection system; smart entrepreneurship training framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Small and Medium Enterprises (SMEs) today face a competitive business environment that is complex and rapidly changing. Technology is therefore seen as a mediator capable of transforming SMEs to greater heights amid the vigorous pace of a borderless world. The agenda of SMEs to generate national income and create more employment opportunities has made the government focus on providing improved business opportunities to SMEs to boost the country&#39;s economic growth. To sustain their businesses, SME owners should be able to adopt the internet as a key component in designing new business model values, customer experiences, and internal capabilities that support key operations. However, some SME owners still do not leverage Information and Communication Technology (ICT) in their business operations. This study interviewed eight SME owners who operated their businesses in Kuala Lumpur and Selangor to identify the most important business training courses needed by SMEs in Malaysia. The data were analyzed using the Thematic Analysis method, and five main components of courses for SMEs were found, namely Business Management, Sales and Marketing, Accounting and Finance, ICT and Technology, and Production and Operations. Based on this Thematic Analysis, the researchers developed a smart entrepreneurship training framework related to the five components and produced a system called the Malaysian SMEs Psychometric Test, or U-PPM, which has been reviewed and endorsed by the respective panels of experts. The proposed framework is important for SME owners and management, as well as the government and stakeholders, when making decisions about selecting business training courses and increasing the use of ICT and digital technologies to provide a positive impact for all SMEs in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_33-Identifying_Small_and_Medium_Enterprise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation: Conceptual versus Activity Diagram Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120632</link>
        <id>10.14569/IJACSA.2021.0120632</id>
        <doi>10.14569/IJACSA.2021.0120632</doi>
        <lastModDate>2021-06-30T12:15:24.2530000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>Validation; conceptual model; activity diagram; thinging machine; informal validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>A conceptual model is used to support development and design within the area of systems and software modeling. The notion of validation refers to representing a domain in a model accurately and generating results using an executable model. In UML specifications, validation verifies the correctness of UML diagrams against any constraints and rules defined within the model. Currently, significant research has been conducted on generating test sets to validate that UML diagrams conform to requirements. UML activity diagrams are a specific focus of such efforts. An activity diagram is a flexible instrument for describing a system’s behaviors and the internal logic of complex operations. This paper focuses on the notion of validation using activity diagrams and contrasts that process with a proposed method that involves an informal validation procedure. Accordingly, this informal validation involves comparing requirements to specifications expressed by a diagram of a modeling language called thinging machine (TM) modeling. The informal validation is a type of model checking that requires the model to be small enough for the verification to be done in a limited space or time period. In the proposed method, the model diagram is divided into subdiagrams to achieve this purpose. We claim the TM behavioral model comes with a particular dispositional structure that allows a designer to “carve” a model into smaller components for informal validation, which is shown through two case studies.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_32-Validation_Conceptual_Versus_Activity_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method to Accommodate Backward Compatibility on the Learning Application-based Transliteration to the Balinese Script</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120631</link>
        <id>10.14569/IJACSA.2021.0120631</id>
        <doi>10.14569/IJACSA.2021.0120631</doi>
        <lastModDate>2021-06-30T12:15:24.2370000+00:00</lastModDate>
        
        <creator>Gede Indrawan</creator>
        
        <creator>I Ketut Paramarta</creator>
        
        <creator>I Gede Nurhayata</creator>
        
        <creator>Sariyasa</creator>
        
        <subject>Backward compatibility; Balinese Script; learning application; transliteration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This research proposes a method to accommodate backward compatibility in a learning-application-based transliteration to the Balinese Script. The objective is to accommodate the standard transliteration rules from the Balinese Language, Script, and Literature Advisory Agency. This is considered the main contribution, since no workaround has previously existed in this research area. This multi-discipline collaborative work is one of the efforts to digitally preserve the endangered Balinese local language knowledge in Indonesia. The proposed method covers two aspects: (1) its backward compatibility allows for interoperability at a certain level with the older transliteration rules; and (2) breaking backward compatibility at a certain level is unavoidable since, for the same aspect, there is contradictory treatment between the standard rule and the old one. This study was conducted on the developed web-based transliteration learning application, BaliScript, in which Latin text input is converted into Balinese Script output using a dedicated Balinese Unicode font. In the experiment, the proposed method gave the expected transliteration results in accommodating backward compatibility.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_31-A_Method_to_Accommodate_Backward_Compatibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Student Data Privacy using Modified Snake and Ladder Cryptographic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120630</link>
        <id>10.14569/IJACSA.2021.0120630</id>
        <doi>10.14569/IJACSA.2021.0120630</doi>
        <lastModDate>2021-06-30T12:15:24.2200000+00:00</lastModDate>
        
        <creator>Kamaladevi Kunkolienker</creator>
        
        <creator>Vaishnavi Kamat</creator>
        
        <subject>Student data; privacy; encryption; decryption; snake and ladder; variable keys</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Transformed by the advent of the Digital Revolution, the world deals with a gold mine of data every day. Along with improvements in methods for processing that data, data security is of utmost importance. Recently, there was a noticeable surge in online learning during the pandemic. Modifying their workflow strategies, educational institutions provided courses for students designed to suit the need of the hour. This opened up the avenue for a greater number of students to take part in online learning. With the increase in the number of students registered, there exists a substantial repository of data to deal with. Hackers have been targeting student data and using it for illegal purposes. In this research paper, an attempt has been made to ensure data privacy by modifying the classic Snake and Ladder game to perform encryption on short-text student data. The novel algorithm maintains simplicity yet produces a strong ciphertext. The algorithm stands strong against brute-force attacks, ciphertext-only attacks, etc. Decryption uses the same key as encryption, the key being symmetric in nature. New variable keys are generated every time the algorithm is used.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_30-Securing_Student_Data_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>What Drives Airbnb Customers’ Satisfaction in Amsterdam? A Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120628</link>
        <id>10.14569/IJACSA.2021.0120628</id>
        <doi>10.14569/IJACSA.2021.0120628</doi>
        <lastModDate>2021-06-30T12:15:24.2070000+00:00</lastModDate>
        
        <creator>Heyam Abdullah Bin Madhi</creator>
        
        <creator>Muna M. Alhammad</creator>
        
        <subject>Airbnb; customer satisfaction; customer experience; big data; sentiment analysis; ordinal logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The sharing economy is a new socio-economic system that allows individuals to rent out their personal belongings, such as a private car or a room in their home, for a short period. This study aims to investigate the attributes that impact customers’ satisfaction when using sharing-economy property rental websites. Large data sets of Airbnb’s online reviews and listings in Amsterdam were analyzed using sentiment analysis, word clustering, ordinal logistic regression, and visualization techniques. Findings reveal that the polarity of Airbnb guests’ reviews in Amsterdam is significantly impacted by property price, value, cleanliness, rating, host communication, easiness of check-in, the accuracy of the property description, and whether or not the host is a superhost. Surprisingly, the property neighborhood was not found to impact customers’ sentiment in Amsterdam. In addition, Airbnb guests in Amsterdam tend to positively express their satisfaction mainly based on the property’s exact location and host interaction, followed by the facilities surrounding the property, property cleanliness, and room quality. On the other hand, negative online reviews tend to be mainly linked to problems with check-in services, followed by aspects related to weak host interaction, location, and room quality. The results indicate that Airbnb hosts need to offer clear and easy check-in services and to keep a good communication channel with their guests to enhance customers’ experience and increase their satisfaction level. Future studies should investigate the applicability of these findings in the context of other cities.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_28-What_Drives_Airbnb_Customers_Satisfaction_in_Amsterdam.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motivational Factors Impacting the Use of Citizen Reporting Applications in Saudi Arabia: The Case of Balagh Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120629</link>
        <id>10.14569/IJACSA.2021.0120629</id>
        <doi>10.14569/IJACSA.2021.0120629</doi>
        <lastModDate>2021-06-30T12:15:24.2070000+00:00</lastModDate>
        
        <creator>Muna M. Alhammad</creator>
        
        <creator>Layla Hajar</creator>
        
        <creator>Sahar Alshathry</creator>
        
        <creator>Mashael Alqasabi</creator>
        
        <subject>Crowdsourcing; self-determination theory; intrinsic motivation; extrinsic motivation; citizens reporting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Citizen reporting applications are considered a new approach for interaction between government authorities and citizens. Citizen reporting applications are implemented to collectively gather information from citizens on issues related to the public interest, such as accidents, traffic violations, and commercial fraud. Through such applications, citizens are able to provide information about incidents efficiently and conveniently to the local authorities via mobile applications designed for these specific purposes. For such applications to be successful, citizens must be willing to participate continually and to become daily users. This paper applies self-determination theory to investigate the factors that encourage citizens to participate in citizen reporting applications. In this study, the factors impacting behavioural intention to use the applications are divided into two categories: intrinsic motivation factors, which include self-concern, social responsibility, and revenge; and extrinsic motivation factors, which include output quality and rewards. The study empirically surveyed 297 Saudi citizens from different age groups. The partial least squares (PLS) approach validates the research model. Findings reveal that output quality, revenge, and self-concern are significantly associated with citizens’ motivation to use the applications, whereas rewards and social responsibility do not significantly influence citizens’ motivation to engage with such applications. This study contributes theoretically by enriching the literature on the factors behind users’ engagement with citizen reporting applications. It also contributes practically by supporting the developers of citizen reporting applications in considering these factors when designing and marketing this kind of application.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_29-Motivational_Factors_Impacting_the_Use_of_Citizen_Reporting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fog-based Remote in-Home Health Monitoring Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120627</link>
        <id>10.14569/IJACSA.2021.0120627</id>
        <doi>10.14569/IJACSA.2021.0120627</doi>
        <lastModDate>2021-06-30T12:15:24.1900000+00:00</lastModDate>
        
        <creator>Fatma H. Elgendy</creator>
        
        <creator>Amany M. Sarhan</creator>
        
        <creator>Mahmoud A. M. Alshewimy</creator>
        
        <subject>Fog computing; health monitoring; iFogSim; IoT; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The spread of the Covid-19 epidemic over the past and current year has made a reliable healthcare system for remote observation necessary, especially in care homes for the elderly. Many research works have been done in this field, but they still have limitations in terms of latency, security, response delay, and long execution times. To remove these limitations, this paper introduces a smart healthcare framework called Remote in-Home Health Monitoring (RHHM), which provides an architecture and functionalities to facilitate monitoring patients&#39; conditions when they are at home. The framework exploits the benefits of fog layers with high-level services such as local storage, local real-time data processing, and embedded data mining, taking over some burdens of the sensor network and the cloud and acting as a decision maker. In addition, it incorporates a camera alongside body sensors in diagnosis for greater reliability and efficiency while preserving privacy. The performance of the proposed framework was evaluated using the popular iFogSim toolkit. The results show the proposed system&#39;s ability to reduce latency, energy consumption, network communications, and overall response time. The efforts of this work support the overall goal of establishing a high-performance, secure, and reliable smart healthcare system.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_27-Fog_based_Remote_in_Home_Health_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Efficient RPL Objective Function for Internet of Things Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120625</link>
        <id>10.14569/IJACSA.2021.0120625</id>
        <doi>10.14569/IJACSA.2021.0120625</doi>
        <lastModDate>2021-06-30T12:15:24.1730000+00:00</lastModDate>
        
        <creator>Sonia Kuwelkar</creator>
        
        <creator>H.G. Virani</creator>
        
        <subject>Internet of things; low power Lossy Networks; IPv6 routing protocol for LLN; objective function; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Over the past decade, rapid growth in the use of smart devices connected and communicating over the Internet has been seen in various domains. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) is the routing backbone of such IoT networks. RPL is a proactive, distance-vector protocol which constructs routes based on an objective function. The performance of the RPL protocol largely depends on the design of this objective function. Depending on application requirements, the RPL standard offers flexibility in the design of the objective function and scope for improving the routing process. In this paper, an efficient objective function, RPL-FZ, is proposed. Speedy communication across nodes, low energy consumption, and reliable data delivery are key to achieving quality of service. Considering this, RPL-FZ uses relevant metrics, namely residual energy of the node, delay, and ETX (expected transmission count), to make routing decisions. The metrics are combined using a fuzzy logic technique to obtain a single metric, Quality, for each neighbor node. The neighbor with the highest value of Quality is chosen as the best parent to forward sensed data toward the collection unit. The proposed objective function RPL-FZ is integrated into the Contiki OS, and network simulations are performed using the COOJA simulator. The performance evaluation reveals that RPL-FZ achieves a 7% higher packet delivery rate, 8% lower energy consumption, and 8% lower latency compared to the single-metric-based standard objective functions OF0 and MRHOF.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_25-Design_of_an_Efficient_RPL_Objective_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhance Risks Management of Software Development Projects in Concurrent Multi-Projects Environment to Optimize Resources Allocation Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120626</link>
        <id>10.14569/IJACSA.2021.0120626</id>
        <doi>10.14569/IJACSA.2021.0120626</doi>
        <lastModDate>2021-06-30T12:15:24.1730000+00:00</lastModDate>
        
        <creator>Ibraheem M Alharbi</creator>
        
        <creator>Adel A Alyoubi</creator>
        
        <creator>Majid Altuwairiqi</creator>
        
        <creator>Mahmoud Abd Ellatif</creator>
        
        <subject>Risk management; multiple software projects; risk assessment; software development projects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In software development project management, risk management represents critical knowledge and skills at the level of a single software project and at the enterprise level, where multiple software projects are executed concurrently. Good risk management decisions contribute to optimizing resource allocation at the enterprise level for achieving its goals. Therefore, the issue requires centralized risk management at the enterprise level as a whole, not for each project separately. Risk management is implemented through several stages and using different methods. Various studies deal with multiple aspects of software management. This research provides an analytical view of risk assessment in a multi-project software development environment where projects take place simultaneously. The study uses a public dataset, previously used in prior research, covering several simultaneous projects in one organization. It describes the multi-software project&#39;s risks through 12 variables. A comparative analysis uses classification methods (Random Forest, TreesJ48, REP Tree, Simple Logistic) to assess risks and put them in a central view. The research experiment has demonstrated high accuracy in determining risk levels in a multi-project environment, reaching approximately 98% using the REP Tree technique.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_26-Enhance_Risks_Management_of_Software_Development_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fully Automated Ontology Increment's User Guide Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120624</link>
        <id>10.14569/IJACSA.2021.0120624</id>
        <doi>10.14569/IJACSA.2021.0120624</doi>
        <lastModDate>2021-06-30T12:15:24.1600000+00:00</lastModDate>
        
        <creator>Kaneeka Vidanage</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Zuriana Abu Bakar</creator>
        
        <subject>AliceBot; artificial intelligent modelling language; ontologist; verbalizing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This research focuses on domain- and schema-independent user guide generation for ontology increments. Having a user guide or a catalogue/manual is vital for quick and effective knowledge dissemination. If a user guide can be generated for an ontology as well, there could be ample advantages. Stakeholders can scan the user guide of the ontology and verify its eligibility against the intended purposes. Additionally, this could be useful for the ontology's version management requisites and knowledge verification requirements as well. Since ontology construction is an iterative and incremental operation, there will be several intermediate versions before the fine-tuned final version is reached. Therefore, manual user guide creation would be a tedious, practically impossible operation. Consequently, this research focuses on a novel algorithmic approach to domain- and schema-independent ontology verbalization. A special algorithm is created to alter the functionality of AliceBot to work as a verbalizer instead of a chatterbot. Artificial Intelligence Markup Language (AIML) technology is utilized to create the templates for the ontology-specific knowledge embeddings. This entire process is fully automated via the proposed novel algorithm, which is a key contribution of this research. Eventually, the generated user guide generation tool is evaluated against three different domains with the involvement of fifteen stakeholders, yielding an averaged acceptance of 82%.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_24-Fully_Automated_Ontology_Increments_User_Guide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Key Generation Technique based on Neural Networks for Lightweight Block Ciphers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120623</link>
        <id>10.14569/IJACSA.2021.0120623</id>
        <doi>10.14569/IJACSA.2021.0120623</doi>
        <lastModDate>2021-06-30T12:15:24.1430000+00:00</lastModDate>
        
        <creator>Sohel Rana</creator>
        
        <creator>M. Rubaiyat Hossain Mondal</creator>
        
        <creator>A. H. M. Shahariar Parvez</creator>
        
        <subject>Lightweight cryptography; IoT; resource limited devices; neural network; avalanche effect; FELICS; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In recent years, the use of small computing devices in wireless sensors, radio frequency identification (RFID) tags, and the Internet of Things (IoT) has been increasing rapidly. However, the resources and capabilities of these devices are limited. Conventional encryption ciphers are computationally expensive and not suitable for lightweight devices. Hence, research on lightweight ciphers is important. In this paper, a new key scheduling technique based on a neural network (NN) is introduced for lightweight block ciphers. The proposed NN approach is based on a multilayer feedforward neural network with a single hidden layer, using a nonlinear activation function to satisfy Shannon's confusion property. It is shown here that an NN consisting of 4 input, 4 hidden, and 4 output neurons is the best for the key scheduling process. With this architecture, 5 unique keys are generated from 64-bit input data. Nonlinear bit shuffling is applied to create enough diffusion. The 4-4-4 NN approach generates secure keys with an avalanche effect of more than 50 percent and consumes less power and memory, thus ensuring better performance than the existing algorithms. In our experiments, the memory usage and execution cycles of the NN key scheduling technique are evaluated with the fair evaluation of lightweight cryptographic systems (FELICS) tool, which runs on the Linux operating system. The proposed NN approach is also implemented in MATLAB 2021a to test key sensitivity using histograms and correlation graphs of several encrypted and decrypted images. Results also show that, compared to the existing algorithms, the proposed NN-cipher algorithm has a lower number of execution cycles and hence less power consumption.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_23-A_New_Key_Generation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Determination of Support Length of Daubechies Basis Function for Wavelet MRA based Moving Characteristic Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120621</link>
        <id>10.14569/IJACSA.2021.0120621</id>
        <doi>10.14569/IJACSA.2021.0120621</doi>
        <lastModDate>2021-06-30T12:15:24.1270000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Multi-dimensional wavelet transformation; multi resolution analysis: MRA; moving target detection; support length; typhoon movement; boundary detection and tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>A method for determining the support length of the Daubechies basis function for wavelet Multi-Resolution Analysis (MRA)-based moving characteristic estimation is proposed. The method is based on the root mean square difference between the original image and the image reconstructed from only the low-frequency component of the MRA. Also, an application of the method to the detection and tracking of moving targets, namely a typhoon and the boundary between warm and cold currents observed in satellite remote sensing images, is shown. It is found that the proposed method allows the detection and tracking of a moving typhoon and of such a boundary in a time series of GOES images.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_21-Method_for_Determination_of_Support_Length.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepfakeNet, an Efficient Deepfake Detection Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120622</link>
        <id>10.14569/IJACSA.2021.0120622</id>
        <doi>10.14569/IJACSA.2021.0120622</doi>
        <lastModDate>2021-06-30T12:15:24.1270000+00:00</lastModDate>
        
        <creator>Dafeng Gong</creator>
        
        <creator>Yogan Jaya Kumar</creator>
        
        <creator>Ong Sing Goh</creator>
        
        <creator>Zi Ye</creator>
        
        <creator>Wanle Chi</creator>
        
        <subject>DeepfakeNet; deepfake detection; data enhancement; CNNs; cross dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Different CNN models do not perform well in deepfake detection across datasets. This paper proposes a deepfake detection model called DeepfakeNet, which consists of 20 network layers. It draws on the stacking idea of ResNet and the split-transform-merge idea of Inception to design the network block structure, that is, the block structure of ResNeXt. The study uses data from the FaceForensics++, Kaggle, and TIMIT datasets, and data enhancement technology is used to expand the datasets for training and testing the models. The experimental results show that, compared with current mainstream models including VGG19, ResNet101, ResNeXt50, XceptionNet, and GoogleNet, under the same dataset and preset parameters, the proposed detection model not only has higher accuracy and a lower error rate in cross-dataset detection but also shows a significant improvement in performance.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_22-DeepfakeNet_an_Efficient_Deepfake.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Research Productivity Framework in Higher Education Institution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120620</link>
        <id>10.14569/IJACSA.2021.0120620</id>
        <doi>10.14569/IJACSA.2021.0120620</doi>
        <lastModDate>2021-06-30T12:15:24.1130000+00:00</lastModDate>
        
        <creator>Ahmad Sanmorino</creator>
        
        <creator>Ermatita</creator>
        
        <creator>Samsuryadi</creator>
        
        <creator>Dian Palupi Rini</creator>
        
        <subject>Framework; research productivity; variable selection; data mining classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The purpose of this study is to build a framework for improving research productivity in higher education institutions. The research begins by collecting data and defining candidate variables. The next process is to determine the selected variables from the candidate variables. Variable selection is carried out in three stages: univariate selection, feature importance, and correlation matrix. After the variable selection stage, eight input variables and one target variable were obtained. The eight input variables are Article (C), Conference (CO), Grant (GT), Research Grantee (RG), Rank (R), Degree (D), IPR, and Citation (C). The target variable is Research Productivity (RP). These selected variables are used to build the framework. The next step is to test the framework that has been built. The testing process involves four data mining classifiers: Support Vector Machine, Decision Tree, K-Nearest Neighbor, and Na&#239;ve Bayes. The classification results are evaluated using confusion-matrix-based measures: accuracy, precision, sensitivity, and f-measure. The testing results show that the proposed framework is able to obtain high accuracy scores for each classification algorithm, which means the proposed framework is suitable for use.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_20-Building_Research_Productivity_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Intelligence Robotics to Motivate Interaction in E-Learning: An Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120619</link>
        <id>10.14569/IJACSA.2021.0120619</id>
        <doi>10.14569/IJACSA.2021.0120619</doi>
        <lastModDate>2021-06-30T12:15:24.0970000+00:00</lastModDate>
        
        <creator>Dalia khairy</creator>
        
        <creator>Salem Alkhalaf</creator>
        
        <creator>M. F. Areed</creator>
        
        <creator>Mohamed A. Amasha</creator>
        
        <creator>Rania A. Abougalala</creator>
        
        <subject>Robotics; emotional intelligence; interaction; E-learning; motivation; robot with emotional intelligence; machine learning algorithms; face analysis; speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The development of emotional intelligence robotics in the learning environment provides valuable support for social interaction among students. Emotional intelligence robots should be able to recognize emotions, appear empathetic in learning situations, and build students' confidence for active interaction. This paper presents related issues about integrating emotional intelligence robotics in E-learning, such as its role and outcomes in motivating interaction during education, and explores the main aspects of emotional intelligence between humans and robots. This paper aims to determine the design requirements of emotional robots. Besides, this paper proposes a framework for educational Robotics with Emotional Intelligence in Learning (EREIL). EREIL consists of three main units: student emotions discovery, student emotions representation, and EREIL-Student Communication (RSC). In addition, it introduces an overview of how EREIL works. In the future, this work will try to merge more sensor devices and machine learning algorithms to integrate face analysis with speech recognition. Besides, a persuasion unit can be added to the EREIL robot to convince students of learning choices better suited to their abilities.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_19-Emotional_Intelligence_Robotics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach Combining CNN and Bi-LSTM with SVM Classifier for Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120618</link>
        <id>10.14569/IJACSA.2021.0120618</id>
        <doi>10.14569/IJACSA.2021.0120618</doi>
        <lastModDate>2021-06-30T12:15:24.0800000+00:00</lastModDate>
        
        <creator>Omar Alharbi</creator>
        
        <subject>Sentiment analysis; Arabic sentiment analysis; deep learning approach; convolutional neural network CNN; bidirectional long short-term memory Bi-LSTM; support vector machine; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Deep learning models have recently been proven to be successful in various natural language processing tasks, including sentiment analysis. Conventionally, a deep learning model’s architecture includes a feature extraction layer followed by a fully connected layer used to train the model parameters and perform the classification task. In this paper, we employ a deep learning model with a modified architecture that combines a Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM) for feature extraction, with a Support Vector Machine (SVM) for Arabic sentiment classification. In particular, we use a linear SVM classifier that utilizes the embedded vectors obtained from the CNN and Bi-LSTM for polarity classification of Arabic reviews. The proposed method was tested on three publicly available datasets. The results show that the method achieved superior performance to the two baseline algorithms, CNN and SVM, on all datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_18-A_Deep_Learning_Approach_Combining_CNN_and_Bi_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Face Expression Recognition along with Balanced FER2013 Dataset using CycleGAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120617</link>
        <id>10.14569/IJACSA.2021.0120617</id>
        <doi>10.14569/IJACSA.2021.0120617</doi>
        <lastModDate>2021-06-30T12:15:24.0670000+00:00</lastModDate>
        
        <creator>Fatma Mazen Ali Mazen</creator>
        
        <creator>Ahmed Aly Nashat</creator>
        
        <creator>Rania Ahmed Abdel Azeem Abul Seoud</creator>
        
        <subject>Facial expressions detection and recognition; multi-task cascaded convolutional networks; transfer learning; residual neural network; CycleGAN; FER2013; GPU and CUDA; HAAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Human face expression recognition is an active research area with massive applications in the medical field, crime investigation, marketing, online learning, automobile safety, and video games. The first part of this research defines a deep neural network model-based framework for recognizing the seven main types of facial expression, which are found in all cultures. The proposed methodology involves four stages: (a) pre-processing the FER2013 dataset through relabeling, to avoid misleading results and to get rid of non-face and non-frontal faces; (b) designing an efficient, stable Cycle Generative Adversarial Network (CycleGAN), which provides unsupervised expression-to-expression translation and has been trained with a new cycle consistency loss; (c) generating new images to overcome the class imbalance; and finally (d) building the DNN architecture for recognizing facial expressions, using the pretrained VGG-Face model with vggface weights. The second part encompasses the design of a GPU-accelerated face expression recognition system for real-time video sequences using NVIDIA's Compute Unified Device Architecture (CUDA). The OpenCV library has been compiled from scratch with CUDA and the NVIDIA CUDA Deep Neural Network library, cuDNN. For the face detection stage, Haar cascades and deep learning were used and tested with both CPU and GPU backends. Results show that the designed model's run time to recognize a facial expression is 0.44 seconds. In addition, the average test accuracy increased from 64% for the original FER2013 dataset to 91.76% for the modified balanced version using the same transfer learning model.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_17-Real_Time_Face_Expression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Communication Protocol for Drones Cooperative Network: 5G Site Survey Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120616</link>
        <id>10.14569/IJACSA.2021.0120616</id>
        <doi>10.14569/IJACSA.2021.0120616</doi>
        <lastModDate>2021-06-30T12:15:24.0670000+00:00</lastModDate>
        
        <creator>Youssef Shawky Othman</creator>
        
        <creator>Mohamed Helmy Megahed</creator>
        
        <creator>Mohamed Abo Rezka</creator>
        
        <creator>Fathy Ahmed Elsayed Amer</creator>
        
        <subject>Drones; communication protocol; Message Queuing Telemetry Transport (MQTT); cooperative network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Optimally directing the antennas of 5th generation (5G) mobile networks has become a hard and tedious process, both because of the abundance of antennas that 5G networks rely on and because of the traditional way of measuring 5G signal strength, a process that can take weeks to complete. The solution is to automate the measurement of signal strength and the directing of antennas using drones rather than human power. This approach eases the process of directing antennas and shortens it from weeks to a few hours, at low cost and with high accuracy. To this end, a cooperative network of drones and a new communication protocol to support that network are designed. The drones communicate with each other and with the antennas by exchanging messages over an MQTT cloud using the newly designed communication protocol. A Raspberry Pi platform is used as a server to control the direction of the antennas. Each drone carries a 4G mobile device and a Raspberry Pi with a Building Identification System (BIS) installed on it. The BIS gives every building a number and recognizes building entrances so that signal strength can be measured at every floor. Performance analysis metrics (throughput) are then measured. OMNeT++ is used for simulation, and the Raspberry Pi platform is used to implement the system and measure the performance of the new communication protocol.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_16-A_New_Communication_Protocol_for_Drones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis for Sensor-Based Hydraulic System Condition Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120615</link>
        <id>10.14569/IJACSA.2021.0120615</id>
        <doi>10.14569/IJACSA.2021.0120615</doi>
        <lastModDate>2021-06-30T12:15:24.0500000+00:00</lastModDate>
        
        <creator>Ahmed Alenany</creator>
        
        <creator>Ahmed M. Helmi</creator>
        
        <creator>Basheer M. Nasef</creator>
        
        <subject>Condition monitoring; sensory data analysis; machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Condition monitoring of equipment can be very effective in predicting faults and taking early corrective actions. As hydraulic systems constitute the core of most industrial plants, predictive maintenance of such systems is of vital importance. Given the availability of huge amounts of data collected from industrial plants, machine learning can be used for this purpose. In this work, hydraulic system condition monitoring (HSCM) is addressed via a public dataset with 17 sensors distributed throughout the system. Using a set of 6 features extracted from the sensory data, the random forest classifier was proven, in the literature, to achieve a classification rate exceeding 99% for four independent target classes, namely Cooler, Valve, Pump, and Accumulator. In this paper, sensor dependency is examined, and experimental results show that a reduced set of important sensors may be sufficient for the addressed classification task. In addition, feature importance as well as implementation issues, i.e., training time and model size on disk, are analyzed. It is found that the training time can be reduced by 25.7% to 36.4% and the size on disk by 70.3% to 85.5% using the optimized models, which employ only the important sensors, in comparison with the basic model using the full set of sensors, while maintaining classification precision.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_15-Comprehensive_Analysis_for_Sensor_based_Hydraulic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on SDR, Spectrum Sensing, and CR-based IoT in Cognitive Radio Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120613</link>
        <id>10.14569/IJACSA.2021.0120613</id>
        <doi>10.14569/IJACSA.2021.0120613</doi>
        <lastModDate>2021-06-30T12:15:24.0330000+00:00</lastModDate>
        
        <creator>Nadia Kassri</creator>
        
        <creator>Abdeslam Ennouaary</creator>
        
        <creator>Slimane Bah</creator>
        
        <creator>Hajar Baghdadi</creator>
        
        <subject>Cognitive radio; cognitive radio networks; software defined radio; spectrum sensing; machine learning; CR-based IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The inherent scarcity of frequency spectrum, along with the fixed spectrum allocation policy adopted, has led to a dire shortage of this indispensable resource. Furthermore, with the tremendous growth of wireless applications, this problem is intensified as the unlicensed frequency spectrum becomes overcrowded and unable to meet the requirements of emerging radio devices operating at higher data rates. Additionally, the already assigned spectrum is underutilized. This has prompted researchers to look for a way to address spectrum scarcity and enable efficient use of the available spectrum. In this context, Cognitive Radio (CR) technology has been proposed as a potential means to overcome this issue by introducing opportunistic usage of less congested portions of the licensed spectrum. In addition to outlining the fundamentals of Cognitive Radio, including Dynamic Spectrum Access (DSA) paradigms and CR functions, this paper has a three-fold objective: first, providing an overview of Software Defined Radio (SDR), in which the architecture, benefits, and ongoing challenges of SDR are presented; second, giving an extensive review of spectrum sensing, covering sensing types, narrowband and wideband sensing schemes with their pros and cons, Machine Learning-based sensing, and open issues that need to be further addressed in this field; third, exploring the use of Cognitive Radio in the Internet of Things (IoT) while highlighting the crucial contribution of CR in enabling IoT. This review is written in an informative fashion to help new researchers entering the area of Cognitive Radio Networks (CRN) to easily get involved.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_13-A_Review_on_SDR_Spectrum_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence based Recommendation System for Analyzing Social Business Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120614</link>
        <id>10.14569/IJACSA.2021.0120614</id>
        <doi>10.14569/IJACSA.2021.0120614</doi>
        <lastModDate>2021-06-30T12:15:24.0330000+00:00</lastModDate>
        
        <creator>Asma Alanazi</creator>
        
        <creator>Marwan Alseid</creator>
        
        <subject>ILbM; reviews; classifier; text analysing; training bagging; MapReduce; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Recently, analysing reviews submitted by clients for products provided by e-commerce companies, such as Amazon, in order to produce efficient recommendations has been receiving a lot of attention. However, ensuring and generating effective recommendations on time is a challenge. This research paper proposes an artificial intelligence-based system that uses the Incremental Learning-based Method (ILbM) to learn a neural network classifier. The ILbM uses the bagging technique in the process of training the classifier. To ensure a high degree of performance, the ILbM is implemented on Hadoop, since it allows parallel execution. Compared to a similar system, the proposed system shows better results in terms of accuracy (97.5%), precision (95.7%), recall (91.5%), and response time (36 seconds).</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_14-Artificial_Intelligence_based_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Firm Performance Prediction for Macroeconomic Diffusion Index using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120612</link>
        <id>10.14569/IJACSA.2021.0120612</id>
        <doi>10.14569/IJACSA.2021.0120612</doi>
        <lastModDate>2021-06-30T12:15:24.0200000+00:00</lastModDate>
        
        <creator>Cu Nguyen Giap</creator>
        
        <creator>Dao The Son</creator>
        
        <creator>Dinh Thi Ha</creator>
        
        <creator>Vu Quang Huy</creator>
        
        <creator>Do Thi Thu Hien</creator>
        
        <creator>Le Mai Trang</creator>
        
        <subject>Firm performance prediction; machine learning algorithms; diffusion index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Utilizing firm performance in the prediction of macroeconomic conditions is an interesting research trend with increasing momentum that supports building nowcasting and early warning systems for macroeconomic management. Firm-level data is normally high-volume, for which traditional statistics-based prediction models are inefficient. This study therefore attempts to assess the achievements of machine learning in firm performance prediction and proposes the emerging idea of applying it to macroeconomic prediction. Inspired by the “micro-meso-macro” framework, this study compares different machine learning algorithms on each Vietnamese firm group categorized by the Vietnamese Industry Classification Standard. This approach identifies the most suitable classifier for each group, each of which has its own specific characteristics. The selected classifiers are then used to predict firms’ performance in the short term, using data collected in wide-ranging enterprise surveys conducted by the General Statistics Office of Vietnam. Experiments showed that Random Forest and J48 outperformed the other ML algorithms. The prediction results present the fluctuation of firms’ performance across industries and support building a diffusion index that is a potential early warning indicator for macroeconomic management.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_12-Firm_Performance_Prediction_for_Macroeconomic_Diffusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Localization and Navigation based on Deep Learning using a Monocular Visual System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120611</link>
        <id>10.14569/IJACSA.2021.0120611</id>
        <doi>10.14569/IJACSA.2021.0120611</doi>
        <lastModDate>2021-06-30T12:15:24.0030000+00:00</lastModDate>
        
        <creator>Rodrigo Eduardo Arevalo Ancona</creator>
        
        <creator>Leonel Germ&#225;n Corona Ram&#237;rez</creator>
        
        <creator>Oscar Octavio Guti&#233;rrez Fr&#237;as</creator>
        
        <subject>Visual localization; visual navigation; autonomous navigation; feature extractor; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Nowadays, computer systems are important in artificial vision, analyzing the acquired data to carry out crucial tasks such as localization and navigation. For successful navigation, the robot must interpret the acquired data and determine its position in order to decide how to move through the environment. This paper proposes an indoor mobile robot visual-localization and navigation approach for autonomous navigation. A convolutional neural network and background modeling are used to locate the system in the environment. Object detection is based on copy-move detection, an image forensics technique that extracts features from the image to identify similar regions. An adaptive threshold is proposed to cope with illumination changes. The detected object is classified so that it can be evaded using a control deep neural network. A U-Net model is implemented to track the path trajectory. The experimental results were obtained from real data, proving the efficiency of the proposed algorithm. The adaptive threshold solves illumination variation issues for object detection.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_11-Indoor_Localization_and_Navigation_based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel High-order Linguistic Time Series Forecasting Model with the Growth of Declared Word-set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120609</link>
        <id>10.14569/IJACSA.2021.0120609</id>
        <doi>10.14569/IJACSA.2021.0120609</doi>
        <lastModDate>2021-06-30T12:15:23.9870000+00:00</lastModDate>
        
        <creator>Nguyen Duy Hieu</creator>
        
        <creator>Pham Dinh Phong</creator>
        
        <subject>Linguistic time series; high-order linguistic time series; linguistic logical relationship; hedge algebras; time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Existing research has shown that fuzzy forecasting methods based on high-order fuzzy time series outperform fuzzy forecasting methods based on first-order fuzzy time series. The linguistic forecasting method based on first-order linguistic time series, which can directly handle the word-set of the linguistic variable, has been examined by Hieu et al. This paper examines a novel model of high-order linguistic time series with a growing declared word-set. A procedure for forecasting the enrollments of the University of Alabama and the Lahi crop production of India is developed based on the proposed model. In the proposed forecasting method, high-order linguistic logical relationship groups are established and utilized to calculate the forecasted values based on the quantitative semantics of the used words, generated by the hedge algebras structure. The experimental results show that the forecasting accuracy of the proposed high-order forecasting method is better than that of its counterparts, and that growing the word-set of the linguistic variable significantly increases the accuracy of the forecasted results.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_9-A_Novel_High_order_Linguistic_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opinion Mining of Saudi Responses to COVID-19 Vaccines on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120610</link>
        <id>10.14569/IJACSA.2021.0120610</id>
        <doi>10.14569/IJACSA.2021.0120610</doi>
        <lastModDate>2021-06-30T12:15:23.9870000+00:00</lastModDate>
        
        <creator>Fahad M. Alliheibi</creator>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Nasser Al-Horais</creator>
        
        <subject>Computational linguistics; COVID-19 vaccines; lexical semantics; opinion mining; vector space classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In recent months, many governments have announced COVID-19 vaccination programs and plans to help end the crises the world has been facing since the emergence of the coronavirus pandemic. In Saudi Arabia, the Ministry of Health called for citizens and residents to take up the vaccine as an essential step to return life to normal. However, the take-up calls were made in the face of profound disagreements on social media platforms and online networks about the value and efficacy of the vaccines. Thus, this study seeks to explore the responses of Saudi citizens to the COVID-19 vaccines and their sentiments about being vaccinated using opinion mining methods to analyze data extracted from Twitter, the most widely used social media network in Saudi Arabia. A corpus of 37,467 tweets was built. Vector space classification (VSC) methods were used to group and categorize the selected tweets based on their linguistic content, classifying the attitudes and responses of the users into three defined categories: positive, negative, and neutral. The lexical semantic properties of the posts show a prevalence of negative responses. This indicates that health departments need to ensure citizens are equipped with accurate, evidence-based information and key facts about the COVID-19 vaccines to help them make appropriate decisions when it comes to being vaccinated. Although the study is limited to the analysis of attitudes of people to the COVID-19 vaccines in Saudi Arabia, it has clear implications for the application of opinion mining using computational linguistic methods in Arabic.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_10-Opinion_Mining_of_Saudi_Responses_to_COVID_19_Vaccines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Smart Water Quality Prediction for Biofloc Aquaculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120608</link>
        <id>10.14569/IJACSA.2021.0120608</id>
        <doi>10.14569/IJACSA.2021.0120608</doi>
        <lastModDate>2021-06-30T12:15:23.9700000+00:00</lastModDate>
        
        <creator>Md. Mamunur Rashid</creator>
        
        <creator>Al-Akhir Nayan</creator>
        
        <creator>Sabrina Afrin Simi</creator>
        
        <creator>Joyeta Saha</creator>
        
        <creator>Md. Obaidur Rahman</creator>
        
        <creator>Muhammad Golam Kibria</creator>
        
        <subject>Smart aquaculture system; biofloc technology; machine learning; life below water</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Traditional fish farming faces several challenges, including water pollution, temperature imbalance, feed, space, cost, etc. Biofloc technology in aquaculture transforms manual fish farming into an advanced system that allows unused feed to be reused by converting it into microbial protein. The objective of this research is to propose an IoT-based solution for aquaculture that increases efficiency and productivity. The article presents a system that collects data using sensors, analyzes them using a machine learning model, generates decisions with the help of Artificial Intelligence (AI), and sends notifications to the user. The proposed system has been implemented and tested to validate it, achieving a satisfactory result.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_8-IoT_based_Smart_Water_Quality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Controlling a Wheelchair using a Brain Computer Interface based on User Controlled Eye Blinks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120607</link>
        <id>10.14569/IJACSA.2021.0120607</id>
        <doi>10.14569/IJACSA.2021.0120607</doi>
        <lastModDate>2021-06-30T12:15:23.9570000+00:00</lastModDate>
        
        <creator>Sebasti&#225;n Poveda Zavala</creator>
        
        <creator>Sang Guun Yoo</creator>
        
        <creator>David Edmigio Valdivieso Tituana</creator>
        
        <subject>BCI; EEG; brain computer interface; OpenBCI; eye blink detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Data published by different organizations, such as the United Nations, indicates that a large number of people suffer from different types of movement disabilities. In many cases, the disability is so severe that they cannot move at all. Faced with this situation, Brain Computer Interface (BCI) technology has taken up the challenge of developing solutions that deliver a better quality of life to those people. One of the most important areas has been mobility, with BCI-enabled electric wheelchairs among the most helpful solutions. The present work has developed a Brain Computer Interface solution that allows users to control the movement of their wheelchairs using the brain waves generated when they blink their eyes. For the creation of this solution, the Incremental Prototyping methodology was used to optimize the development process by generating independent modules. The solution is made up of several components, i.e., the EEG system (OpenBCI), Main Controller, Wheelchair Controller, and Wheelchair, a modular structure that allows their functionalities to be updated (improved) in a simple way. The developed system requires a low amount of training time and has a response time applicable to the real world. Experimental results show that users can perform different tasks with an acceptable degree of error within a period of time that can be considered acceptable for the system. Considering that the prototype was created for people with disabilities, the system could grant them a certain level of independence.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_7-Controlling_a_Wheelchair_using_a_Brain_Computer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evil Twin Attack Mitigation Techniques in 802.11 Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120605</link>
        <id>10.14569/IJACSA.2021.0120605</id>
        <doi>10.14569/IJACSA.2021.0120605</doi>
        <lastModDate>2021-06-30T12:15:23.9400000+00:00</lastModDate>
        
        <creator>Raja Muthalagu</creator>
        
        <creator>Sachin Sanjay</creator>
        
        <subject>Evil twin Wi-Fi attack; DNS spoofing; SSL strip; IP spoofing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The Evil Twin Wi-Fi attack is an attack on the IEEE 802.11 standard that poses a threat to wireless connections and has existed for a long time. Once the Evil Twin Wi-Fi attack is performed, it acts as a gateway to many other attacks, such as DNS spoofing, SSL stripping, IP spoofing, and more. Thus, preventing the attack is essential for privacy and data security. This paper describes in detail how the attack is performed and presents different measures to prevent it. The proposed algorithm sniffs for fake access points (APs) across all channels using a whitelist. Once an unauthorized AP is detected, the user has the option to de-authenticate any client on the unauthorized network in case any clients connect to it by accident; the algorithm also checks whether de-authentication frames are being sent to any of the APs, to determine which AP is being compromised. The efficiency of the proposed approach is verified by simulating and mitigating the evil twin attack.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_5-Evil_Twin_Attack_Mitigation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Reactive Power Scheduling Problem using Multi-objective Crow Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120606</link>
        <id>10.14569/IJACSA.2021.0120606</id>
        <doi>10.14569/IJACSA.2021.0120606</doi>
        <lastModDate>2021-06-30T12:15:23.9400000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Optimal power flow; crow search algorithm; evolutionary algorithms; loss minimization; reactive power scheduling; multi-objective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>This paper presents the solution of the multi-objective based optimal reactive power dispatch (MO-ORPD) problem by optimizing the system power losses and voltage stability enhancement index (VSEI)/L-index objectives. The ORPD problem is an important issue, from a system security and operational point of view, for optimal steady-state operation of a power system. Here, the single-objective ORPD problem is solved using the Crow Search Algorithm (CSA) and the multi-objective ORPD problem is solved using multi-objective CSA (MO-CSA). The CSA is an efficient and robust algorithm that determines the global optimal solution for non-linear and discontinuous objective functions. Two standard test systems, i.e., the IEEE 30 and 57 bus systems, are considered to show the effectiveness, suitability, and robustness of CSA and MO-CSA for solving the ORPD problem.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_6-Solving_Reactive_Power_Scheduling_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Encryption Solution to Improve Cloud Computing Security using Symmetric and Asymmetric Cryptography Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120604</link>
        <id>10.14569/IJACSA.2021.0120604</id>
        <doi>10.14569/IJACSA.2021.0120604</doi>
        <lastModDate>2021-06-30T12:15:23.9230000+00:00</lastModDate>
        
        <creator>Hossein Abroshan</creator>
        
        <subject>Cloud computing; security; cryptography; digital signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Ensuring the security of cloud computing is one of the most critical challenges facing the cloud services sector. Dealing with data in a cloud environment, which uses shared resources, and providing reliable and secure cloud services, requires a robust encryption solution with no or low negative impact on performance. Thus, this study proposes an effective cryptography solution to improve security in cloud computing with a very low impact on performance. A complex cryptography algorithm is not helpful in cloud computing as computing speed is essential in this environment. Therefore, this solution uses an improved Blowfish algorithm in combination with an elliptic-curve-based algorithm. Blowfish will encrypt the data, and the elliptic curve algorithm will encrypt its key, which will increase security and performance. Moreover, a digital signature technique is used to ensure data integrity. The solution is evaluated, and the results show improvements in throughput, execution time, and memory consumption parameters.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_4-A_Hybrid_Encryption_Solution_to_Improve_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UIP2SOP: A Unique IoT Network applying Single Sign-On and Message Queue Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120603</link>
        <id>10.14569/IJACSA.2021.0120603</id>
        <doi>10.14569/IJACSA.2021.0120603</doi>
        <lastModDate>2021-06-30T12:15:23.9100000+00:00</lastModDate>
        
        <creator>Lam Nguyen Tran Thanh</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>The Anh Nguyen</creator>
        
        <creator>Hong Khanh Vo</creator>
        
        <creator>Hoang Huong Luong</creator>
        
        <creator>Tuan Dao Anh</creator>
        
        <creator>Khoi Nguyen Huynh Tuan</creator>
        
        <creator>Ha Xuan Son</creator>
        
        <subject>Internet of Things (IoT); MQTT; OAuth; Single Sign-On; Kafka; message queue</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>The Internet of Things (IoT) currently plays an important role in our lives and is one of the most rapidly developing technology trends. However, the present structure has some limitations, one of which is communication via the client-server model: the users, devices, and applications using IoT services have all their connections and requests managed by the IoT service providers. On the one hand, IoT service providers (e.g., individuals, organizations) have different methods of managing their devices, services, and users, so a unique standard (i.e., a communication method among service providers and between client and server) is still a challenge for developers. On the other hand, the Message Queuing Telemetry Transport (MQTT) protocol, one of the most popular protocols in IoT deployments, has significant security and privacy issues of its own (e.g., authentication, authorization, and privacy problems). Therefore, this paper proposes UIP2SOP, a unique IoT network that uses Single Sign-On (SSO) and a message queue to address the MQTT protocol's security problems. Moreover, this model allows organizations providing IoT services to connect into a single network without changing their architecture at all. The evaluation section proves the effectiveness of the proposed model. In particular, we consider the number of concurrent users publishing messages simultaneously in two scenarios: i) internal communication and ii) external communication. In addition, we evaluate the recovery ability of the system when a broken connection occurs. Finally, to encourage further reproducibility and improvement, a complete code solution is published in a GitHub repository.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_3-UIP2SOP_A_Unique_IoT_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Strength Ratio of Laminated Composite Material with Evolutionary Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120602</link>
        <id>10.14569/IJACSA.2021.0120602</id>
        <doi>10.14569/IJACSA.2021.0120602</doi>
        <lastModDate>2021-06-30T12:15:23.8930000+00:00</lastModDate>
        
        <creator>Huiyao Zhang</creator>
        
        <creator>Atsushi Yokoyama</creator>
        
        <subject>Classical lamination theory; genetic algorithm; artificial neural network; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>In this paper, an alternative methodology to obtain the strength ratio of a laminated composite material is presented. Traditionally, classical lamination theory and related failure criteria are used to calculate the numerical value of the strength ratio of a laminated composite material under in-plane and out-of-plane loading from knowledge of the material properties and its layup. In this study, to calculate the strength ratio, an alternative approach is proposed using an artificial neural network, in which a genetic algorithm is used to optimize the search process at four different levels: the architecture, parameters, connections of the neural network, and activation functions. The results of the present method are compared to those obtained via classical lamination theory and failure criteria. The results show that an artificial neural network is a feasible method to calculate the strength ratio under in-plane loading instead of classical lamination and associated failure theory.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_2-Predicting_Strength_Ratio_of_Laminated_Composite_Material.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Driver’s Focus of Attention Extraction and Prediction using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120601</link>
        <id>10.14569/IJACSA.2021.0120601</id>
        <doi>10.14569/IJACSA.2021.0120601</doi>
        <lastModDate>2021-06-30T12:15:23.8770000+00:00</lastModDate>
        
        <creator>Pei-heng Hong</creator>
        
        <creator>Yuehua Wang</creator>
        
        <subject>Driving; attention; interesting zones; deep neural network; deep learning; models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(6), 2021</description>
        <description>Driving is one of the most common activities in our modern lives. Every day, millions drive to and from their schools or workplaces. Even though this activity seems simple and everyone knows how to drive on roads, it actually requires drivers’ complete attention to keep their eyes on the road and surrounding cars for safe driving. However, most research has focused either on improving the configurations of active safety systems with high-cost components such as Lidar, night vision cameras, and radar sensor arrays, or on finding the optimal way of fusing and interpreting sensor information, without considering the impact of drivers’ continuous attention and focus. We observe that effective safety technologies and systems are greatly affected by drivers’ attention and focus. In this paper, we design, implement, and evaluate DFaep, a deep learning network for automatically examining, estimating, and predicting a driver’s focus of attention in real time with dual low-cost dash cameras providing driver-centric and car-centric views. Based on the raw stream data captured by the dash cameras during driving, we first detect the driver’s face and eyes and generate augmented face images to extract facial features and enable real-time head movement tracking. We then parse the driver’s attention behaviors and gaze focus together with the road scene data captured by a front-facing dash camera directed towards the road. Facial features, augmented face images, and gaze focus data are then input to our deep learning network to model drivers’ driving and attention behaviors. Experiments are conducted on the large DR(eye)VE dataset and our own dataset under realistic driving conditions. The findings of this study indicate that the distribution of a driver’s attention and focus is highly skewed. Results show that DFaep can quickly detect and predict the driver’s attention and focus, with an average prediction accuracy of 99.38%. This provides a basis and a feasible solution, with a learnt computational model, for capturing and understanding a driver’s attention and focus to help avoid fatal collisions and reduce the probability of potentially unsafe driving behavior in the future.</description>
        <description>http://thesai.org/Downloads/Volume12No6/Paper_1-Real_Time_Drivers_Focus_of_Attention_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing in Remote Sensing: Big Data Remote Sensing Knowledge Discovery and Information Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01205104</link>
        <id>10.14569/IJACSA.2021.01205104</id>
        <doi>10.14569/IJACSA.2021.01205104</doi>
        <lastModDate>2021-05-31T08:58:26.9700000+00:00</lastModDate>
        
        <creator>Yassine SABRI</creator>
        
        <creator>Fadoua Bahja</creator>
        
        <creator>Aouad Siham</creator>
        
        <creator>Aberrahim Maizate</creator>
        
        <subject>Remote sensing; data integration; cloud computing; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>With the rapid development of remote sensing technology, our ability to obtain remote sensing data has improved to an unprecedented level: we have entered an era of big data. Remote sensing data clearly show big data characteristics such as hyperspectral imagery, high spatial resolution, and high temporal resolution, resulting in a significant increase in the volume, variety, velocity, and veracity of data. This paper proposes a feature-supporting, scalable, and efficient data cube for time-series analysis applications, and uses spatial feature data and remote sensing data for a comparative study of water cover and vegetation change. The spatial-feature remote sensing data cube (SRSDC) described in this paper is a data cube whose goal is to provide a spatial-feature-supported, efficient, and scalable multidimensional data analysis system to handle large-scale remote sensing data; a high-level architectural overview of the SRSDC is provided. The SRSDC offers spatial feature repositories for storing and managing vector feature data, as well as feature translation for converting spatial feature information into query operations. The paper describes the design and implementation of the feature data cube and distributed execution engine in the SRSDC, and uses a long time-series remote sensing production process and analysis as examples to evaluate their performance. Big data has become a strategic highland in the knowledge economy as a new strategic resource for humans. The core knowledge discovery methods include supervised learning methods, unsupervised learning methods, and their combinations and variants.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_104-Cloud_Computing_in_Remote_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Forensics: A Comprehensive Review of Tools and Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01205103</link>
        <id>10.14569/IJACSA.2021.01205103</id>
        <doi>10.14569/IJACSA.2021.01205103</doi>
        <lastModDate>2021-05-31T08:58:26.9570000+00:00</lastModDate>
        
        <creator>Sirajuddin Qureshi</creator>
        
        <creator>Saima Tunio</creator>
        
        <creator>Faheem Akhtar</creator>
        
        <creator>Ahsan Wajahat</creator>
        
        <creator>Ahsan Nazir</creator>
        
        <creator>Faheem Ullah</creator>
        
        <subject>Network forensics; Tshark; Dumpcap; Wireshark; OSCAR; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>With the evolution and popularity of computer networks, a tremendous number of devices are increasingly being added to the global internet connectivity. Additionally, more sophisticated tools, methodologies, and techniques are being used to enhance global internet connectivity. It is also worth mentioning that individuals, enterprises, and corporate organizations are quickly appreciating the need for computer networking. However, the popularity of computer and mobile networking brings various drawbacks, mostly associated with security and data breaches. Each day, cyber criminals explore and devise complicated means of infiltrating and exploiting the security of individual and corporate networks. This means cyber or network forensic investigators must be equipped with the necessary mechanisms for identifying the nature of security vulnerabilities and must be able to identify and apprehend the respective cyber offenders correctly. Therefore, this research's primary focus is to provide a comprehensive analysis of the concept of network forensic investigation and to describe the methodologies and tools employed in network forensic investigations, with emphasis on the study and analysis of the OSCAR methodology. Finally, this research provides an evaluative analysis of the relevant literature in network forensics investigation.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_103-Network_Forensics_A_Comprehensive_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Use of Videoconferencing in the Learning Process During the Pandemic at a University in Lima</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01205102</link>
        <id>10.14569/IJACSA.2021.01205102</id>
        <doi>10.14569/IJACSA.2021.01205102</doi>
        <lastModDate>2021-05-31T08:58:26.9400000+00:00</lastModDate>
        
        <creator>Angie Del Rio-Chillcce</creator>
        
        <creator>Luis Jara-Monge</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Digital tools; health emergency; universities; video conferencing; virtual education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Due to the health emergency, which forced universities to stop using their campuses as a means of teaching, many of them opted for virtual education. This affected the learning process of students and predisposed many of them to become familiar with this new learning process, making the use of virtual platforms more common. Many educational centers have come to rely on digital tools such as Discord, Google Meet, Microsoft Teams, Skype, and Zoom. The objective of this research is to report on the impact on student learning of the use of the aforementioned videoconferencing tools. Surveys were conducted with teachers and students; 66% stated that their educational development was not affected. Most of them became familiar with the platforms; however, fewer than 24% reported that their academic performance had improved, and some teachers still have difficulties at a psychological level due to this new teaching modality. In conclusion, teachers and students agree that these tools are a great help for virtual classes.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_102-Analysis_of_the_Use_of_Videoconferencing_in_the_Learning_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supervised Learning-based Cancer Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01205101</link>
        <id>10.14569/IJACSA.2021.01205101</id>
        <doi>10.14569/IJACSA.2021.01205101</doi>
        <lastModDate>2021-05-31T08:58:26.9230000+00:00</lastModDate>
        
        <creator>Juel Sikder</creator>
        
        <creator>Utpol Kanti Das</creator>
        
        <creator>Rana Jyoti Chakma</creator>
        
        <subject>Semantic segmentation; CNN; brain; breast; leukemia; lung</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The segmentation, detection, and extraction of infected tumors from Magnetic Resonance Imaging (MRI) images are key concerns for radiologists and clinical experts, but the process is tedious and time consuming, and its accuracy depends on their experience alone. This paper suggests a new methodology in which segmentation, recognition, classification, and detection of different types of cancer cells from both MRI and RGB (Red, Green, Blue) images are performed using supervised learning, a Convolutional Neural Network (CNN), and morphological operations. In this methodology, a CNN is used to classify cancer types and semantic segmentation is used to segment cancer cells. The system is trained using pixel-labeled ground truth, where every image is labeled as cancerous or non-cancerous. The system is trained with 70% of the images and validated and tested with the remaining 30%. Finally, the segmented cancer region is extracted and its percentage area is calculated. The research was carried out on the MATLAB platform on MRI and RGB images of infected cells from the BreCaHAD dataset for breast cancer, the SN-AM dataset for leukemia, the Lung and Colon Cancer Histopathological Images dataset for lung cancer, and the Brain MRI Images for Brain Tumor Detection dataset for brain cancer.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_101-Supervised_Learning_based_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Trust Model to Enhance Availability in Private Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01205100</link>
        <id>10.14569/IJACSA.2021.01205100</id>
        <doi>10.14569/IJACSA.2021.01205100</doi>
        <lastModDate>2021-05-31T08:58:26.8930000+00:00</lastModDate>
        
        <creator>Vijay Kumar Damera</creator>
        
        <creator>A Nagesh</creator>
        
        <creator>M Nagaratna</creator>
        
        <subject>Trust; trust model; cloud; availability; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In the process of cloud service selection, it is difficult for users to choose trusted, available, and reliable cloud services. A trust model is a good solution to this service selection problem. In cloud computing, data availability and reliability have always been major concerns: according to research, around $285 million is lost per year due to cloud service failures, at a 99.91 percent availability rate. Replication has long been used to improve the data availability of large-scale cloud storage systems where errors are anticipated. Compared to a small-scale environment, where each data node can have different capabilities and can only accept a limited number of requests, replica placement in cloud storage systems is more complicated. As a result, deciding where to keep replicas in the system to meet the availability criteria is an issue. To address this issue, this paper proposes a trust model that helps in selecting an appropriate node for replica placement. This trust model generates a comprehensive trust value for each data center node based on a dynamic trust value combined with QoS parameters. Simulation experiments show that the model can reflect the dynamic change of data center node subject trust, enhance the predictability of node selection, and effectively decrease the failure rate of nodes.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_100-Improved_Trust_Model_to_Enhance_Availability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Black-box Fuzzing Approaches to Secure Web Applications: Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120599</link>
        <id>10.14569/IJACSA.2021.0120599</id>
        <doi>10.14569/IJACSA.2021.0120599</doi>
        <lastModDate>2021-05-31T08:58:26.8630000+00:00</lastModDate>
        
        <creator>Aseel Alsaedi</creator>
        
        <creator>Abeer Alhuzali</creator>
        
        <creator>Omaimah Bamasag</creator>
        
        <subject>Black-box fuzzing; web application security; vulnerability scanning; automatic web app testing; vulnerability detection; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Web applications are increasingly important tools in our modern daily lives, such as in education, business transactions, and social media. Because of their prevalence, they are becoming more susceptible to different types of attacks that exploit security vulnerabilities. Exploiting these vulnerabilities may cause damage to web applications as well as to end-users. Thus, web app developers should identify vulnerabilities and fix them before an attacker exploits them. Using black-box fuzzing techniques for vulnerability identification is very popular during the web app development life cycle. These techniques promise to find vulnerabilities in web applications by constructing attacks without accessing their source code. This survey explores the research that has been done on black-box vulnerability finding and exploit construction in web applications, and proposes future directions.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_99-Black_box_Fuzzing_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techniques for Solving Shortest Vector Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120598</link>
        <id>10.14569/IJACSA.2021.0120598</id>
        <doi>10.14569/IJACSA.2021.0120598</doi>
        <lastModDate>2021-05-31T08:58:26.8470000+00:00</lastModDate>
        
        <creator>V. Dinesh Reddy</creator>
        
        <creator>P. Ravi</creator>
        
        <creator>Ashu Abdul</creator>
        
        <creator>Mahesh Kumar Morampudi</creator>
        
        <creator>Sriramulu Bojjagani</creator>
        
        <subject>Lattice; SVP; CVP; post quantum cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Lattice-based cryptosystems are regarded as secure and are believed to be secure even against quantum computers. Lattice-based cryptography relies upon problems like the Shortest Vector Problem (SVP), an instance of lattice problems that is used as a basis for secure cryptographic schemes. For more than 30 years now, the Shortest Vector Problem has been at the heart of a thriving research field, and finding a new efficient algorithm has turned out to be out of reach. This problem has a great many applications, such as optimization, communication theory, and cryptography. This paper introduces the Shortest Vector Problem and related problems such as the Closest Vector Problem. We present average-case and worst-case hardness results for the Shortest Vector Problem. Further, this work explores efficient algorithms for solving the Shortest Vector Problem and presents their efficiency. More precisely, this paper presents four algorithms: the Lenstra-Lenstra-Lovasz (LLL) algorithm, the Block Korkine-Zolotarev (BKZ) algorithm, a Metropolis algorithm, and a convex relaxation of SVP. The experimental results on various lattices show that the Metropolis algorithm works better than the other algorithms across varying lattice sizes.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_98-Techniques_for_Solving_Shortest_Vector_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended Graph Convolutional Networks for 3D Object Classification in Point Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120597</link>
        <id>10.14569/IJACSA.2021.0120597</id>
        <doi>10.14569/IJACSA.2021.0120597</doi>
        <lastModDate>2021-05-31T08:58:26.8170000+00:00</lastModDate>
        
        <creator>Sajan Kumar</creator>
        
        <creator>Sai Rishvanth Katragadda</creator>
        
        <creator>Ashu Abdul</creator>
        
        <creator>V. Dinesh Reddy</creator>
        
        <subject>Object classification; graph convolution networks; non-autonomous driving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Point clouds are a popular way to represent 3D data. Due to the sparsity and irregularity of point cloud data, learning features directly from point clouds is complex, which lends huge importance to methods that directly consume points. This paper focuses on interpreting point cloud inputs using graph convolutional networks (GCNs). Further, we extend this model to detect the objects found in autonomous driving datasets and the miscellaneous objects found in non-autonomous driving datasets. We propose to reduce the runtime of a GCN by allowing it to stochastically sample fewer input points from point clouds to infer their larger structure while preserving its accuracy. Our proposed model offers improved accuracy while drastically decreasing graph building and prediction runtime.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_97-Extended_Graph_Convolutional_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Feasibility of Implementing a Secure C2C Credit Scoring Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120596</link>
        <id>10.14569/IJACSA.2021.0120596</id>
        <doi>10.14569/IJACSA.2021.0120596</doi>
        <lastModDate>2021-05-31T08:58:26.8000000+00:00</lastModDate>
        
        <creator>Mariam Musa Al-Oqabi</creator>
        
        <creator>Wahid Rajeh</creator>
        
        <subject>Social media; online commerce; social credit system; creditworthiness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The continuous development of social media and online commerce, which permeates all aspects of our lives, leads to the need for a mechanism similar to the financial credit score in traditional business, as well as a realistic classification of users on social media that can be used in all aspects of the relationships between users themselves or between users and organizations. In this article, new metrics to classify users according to the creditworthiness of the transactions that take place through the Internet are established. The objective of this article is to design a social credit system model (SCSM) based on these new metrics, covering behavior on the Internet, such as attacking people on social media and violating people's privacy, as well as buying and selling operations, such as executing purchase and sale orders and paying amounts of money easily and quickly. These data and their degree of importance were determined according to several questionnaires directed to several segments of society. This creditworthiness can be used by banks, Uber, online transactions, and so on.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_96-The_Feasibility_of_Implementing_a_Secure_C2C_Credit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Effectiveness of e-Learning Processes through Dynamic Programming: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120595</link>
        <id>10.14569/IJACSA.2021.0120595</id>
        <doi>10.14569/IJACSA.2021.0120595</doi>
        <lastModDate>2021-05-31T08:58:26.7670000+00:00</lastModDate>
        
        <creator>Norah Alqahtani</creator>
        
        <creator>Farrukh Nadeem</creator>
        
        <subject>e-learning; e-learning challenge; dynamic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>E-learning has been widely adopted as an important tool for distance education, especially during the Covid-19 pandemic. However, several problems and challenges have been reported in different processes of e-learning that need to be addressed for its effective use. These include the development of student-focused content, giving learners partial control, addressing different learning styles, etc. Recently, several efforts have been made to solve e-learning process problems using dynamic programming techniques. Dynamic programming techniques divide a problem situation into several sub-problems and dynamically solve each sub-problem based on student needs. This allows student-focused customization at each step and provides adaptive e-learning to support students with different capabilities. The objective of this study is to review different e-learning problems and challenges and how they can be addressed using dynamic programming techniques. We conclude by highlighting the importance of different dynamic programming techniques for different processes of e-learning.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_95-Improving_the_Effectiveness_of_e_Learning_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interactive Tool for Teaching the Central Limit Theorem to Engineering Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120594</link>
        <id>10.14569/IJACSA.2021.0120594</id>
        <doi>10.14569/IJACSA.2021.0120594</doi>
        <lastModDate>2021-05-31T08:58:26.7200000+00:00</lastModDate>
        
        <creator>Anas Basalamah</creator>
        
        <subject>Probability distribution; CLT; population; interactive tool; sampling distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The sole purpose of this paper is to guide students in learning introductory statistical concepts such as probability distributions and the central limit theorem (CLT) in an intuitive way through an interactive tool. Using data with different probability distributions, this paper clarifies the notions of the CLT and the use of samples in hypothesis testing of a population by demonstrating step-by-step procedures and a hands-on simulation approach. It discusses the relationship between the sample size and the nature of the sampling distribution, which is a vital element of the CLT, for different population distributions using the developed interactive tool. Finally, the impact of the developed interactive tool is measured via a survey experiment that illustrates the success of the developed tool in teaching the CLT.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_94-An_Interactive_Tool_for_Teaching_the_Central_Limit_Theorem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Situation Awareness Levels to Evaluate the Usability of Augmented Feedback to Support Driving in an Unfamiliar Traffic Regulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120593</link>
        <id>10.14569/IJACSA.2021.0120593</id>
        <doi>10.14569/IJACSA.2021.0120593</doi>
        <lastModDate>2021-05-31T08:58:26.7200000+00:00</lastModDate>
        
        <creator>Hasan J. Alyamani</creator>
        
        <creator>Ryan Alturkiy</creator>
        
        <creator>Arda Yunianta</creator>
        
        <creator>Nashwan A. Alromema</creator>
        
        <creator>Hasan Sagga</creator>
        
        <creator>Manolya Kavakli</creator>
        
        <subject>Situation awareness; unfamiliar traffic regulation; augmented feedback; in-vehicle information systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Driving under an unfamiliar traffic regulation with an unfamiliar vehicle configuration contributes to an increased number of traffic accidents. In these circumstances, a driver needs what is referred to as ‘situation awareness’ (SA). SA is divided into (level 1) perception of environmental cues, (level 2) comprehension of the perceived cues in relation to the current situation, and (level 3) projection of the status of the situation in the near future. Augmented feedback (AF), on the other hand, is used to enhance the performance of a certain task. In driving, AF can be provided to drivers via in-vehicle information systems. In this paper, we hypothesize that considering the SA levels when designing AF can reduce driving errors and thus enhance road safety. To evaluate this hypothesis, we conducted a quantitative study to test the usability of a certain set of feedback and an empirical study using a driving simulator to test the effectiveness of that feedback in improving driving performance, particularly at roundabouts and intersections in an unfamiliar traffic system. The results of the first study enhanced the ability of the in-vehicle information system to provide feedback considering SA levels. This information was incorporated into a driving simulator and provided to drivers. The results of the second study revealed that considering SA levels when designing augmented feedback significantly reduces driving errors at roundabouts and intersections under an unfamiliar traffic regulation.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_93-Situation_Awareness_Levels_to_Evaluate_the_Usability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-based Natural Language Processing Methods Comparison for Presumptive Detection of Cyberbullying in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120592</link>
        <id>10.14569/IJACSA.2021.0120592</id>
        <doi>10.14569/IJACSA.2021.0120592</doi>
        <lastModDate>2021-05-31T08:58:26.6900000+00:00</lastModDate>
        
        <creator>Diego A. Andrade-Segarra</creator>
        
        <creator>Gabriel A. León-Paredes</creator>
        
        <subject>Bidirectional Encoder Representations from Transformers; BERT; Cyberbullying; Natural Language Processing; Recurrent Neural Network + Long Short Term Memory; RNN+LSTM; Sentiment Analysis; Semantics; Spanish Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Due to the development of ICT in recent years, users have been able to satisfy many social experiences through several digital media like blogs, the web and especially social networks. However, not all social media users have had good experiences with these media, since there are malicious users able to cause negative psychological effects on people; this is called cyberbullying. For this reason, social networks such as Twitter are looking to implement models based on deep learning or machine learning capable of recognizing harassing comments on their platforms. However, most of these models focus on the English language, and very few models are adapted for Spanish. This is why, in this paper, we propose the evaluation of an RNN+LSTM neural network, as well as a BERT model through sentiment analysis, to detect cyberbullying in Spanish for Ecuadorian accounts of the social network Twitter. The results obtained show a balance between execution time and accuracy for the RNN+LSTM model. In addition, evaluations of comments that are not explicitly offensive show better performance for the BERT model, which outperforms its counterpart by 20%.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_92-Deep Learning-based Natural Language Processing Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Pornographic Visual Content Classifier based on Sensitive Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120591</link>
        <id>10.14569/IJACSA.2021.0120591</id>
        <doi>10.14569/IJACSA.2021.0120591</doi>
        <lastModDate>2021-05-31T08:58:26.6600000+00:00</lastModDate>
        
        <creator>Dinh-Duy Phan</creator>
        
        <creator>Thanh-Thien Nguyen</creator>
        
        <creator>Quang-Huy Nguyen</creator>
        
        <creator>Hoang-Loc Tran</creator>
        
        <creator>Khac-Ngoc-Khoi Nguyen</creator>
        
        <creator>Duc-Lung Vu</creator>
        
        <subject>Computer vision; image processing; object detection; pornographic recognition and classification; blocking extension; machine learning; deep learning; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>With the increasing amount of pornography being uploaded on the Internet today arises the need to detect and block pornographic websites, especially in Eastern cultural countries. Studies of pornographic images and videos show that explicit sensitive objects are typically one of the main characteristics portraying the unique aspect of pornographic content. This paper proposes a classification method for pornographic visual content that involves detecting sensitive objects using object detection algorithms. Initially, an object detection model is used to identify sensitive objects in visual content. The detection results are then used as high-level features, combined with two other high-level features: skin body and human presence information. These high-level features are finally fed into a fusion Support Vector Machine (SVM) model, which draws the eventual decision. Based on 800 videos from the NDPI-800 dataset and 50,000 manually collected images, the evaluation results show that our proposed approach achieved 94.06% and 94.88% accuracy respectively, which is comparable with cutting-edge pornographic classification methods. In addition, a pornographic alerting and blocking extension was developed for Google Chrome to prove the proposed architecture’s effectiveness and capability. Working with 200 websites, the extension achieved an outstanding result of 99.50% classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_91-A_Novel_Pornographic_Visual_Content_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Travel Behavior Modeling: Taxonomy, Challenges, and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120590</link>
        <id>10.14569/IJACSA.2021.0120590</id>
        <doi>10.14569/IJACSA.2021.0120590</doi>
        <lastModDate>2021-05-31T08:58:26.6430000+00:00</lastModDate>
        
        <creator>Aman Sharma</creator>
        
        <creator>Abdullah Gani</creator>
        
        <creator>David Asirvatham</creator>
        
        <creator>Riyath Ahmed</creator>
        
        <creator>Muzaffar Hamzah</creator>
        
        <creator>Mohammad Fadhli Asli</creator>
        
        <subject>Travel behavior; travel behavior modeling; prediction modeling; intelligent traveling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Personal daily movement patterns have a longitudinal impact on an individual’s decision-making in traveling. Recent observations of human travel raise concerns about the impact of travel behavior changes on many aspects. Many travel-related aspects like traffic congestion management and effective land use are significantly affected by travel behavior changes. Existing travel behavior modeling (TBM) has focused on assessing traffic trends and generating improvement insights for urban planning, infrastructure investment, and policymaking. However, the literature indicates limited discussion of recent TBM adaptation towards future technological advances like the integration of autonomous vehicles and intelligent traveling. This survey paper aims to provide overview insights on recent advances in TBM, including notable classifications, emerging challenges, and rising opportunities. In this survey, we reviewed and analyzed recently published works on TBM from high-quality publication sources. A taxonomy was devised based on notable characteristics of TBM to guide the classification and analysis of these works. The taxonomy classifies recent advances in TBM based on type of algorithms, applications, data sources, technologies, behavior analysis, and datasets. Furthermore, emerging research challenges and limitations encountered by recent TBM studies are characterized and discussed. Subsequently, this survey identifies and highlights open issues and research opportunities arising from recent TBM advances for future undertakings.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_90-Travel_Behavior_Modeling_Taxonomy_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Using Light Stemming for Arabic Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120589</link>
        <id>10.14569/IJACSA.2021.0120589</id>
        <doi>10.14569/IJACSA.2021.0120589</doi>
        <lastModDate>2021-05-31T08:58:26.6130000+00:00</lastModDate>
        
        <creator>Jaffar Atwan</creator>
        
        <creator>Mohammad Wedyan</creator>
        
        <creator>Qusay Bsoul</creator>
        
        <creator>Ahmad Hamadeen</creator>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Mohammed Ikram</creator>
        
        <subject>Arabic language; light stemming; information retrieval; Na&#239;ve Bayes classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Arabic is one of the ancient Semitic languages and one of the six official languages of the UN. Arabic text classification also plays a significant and essential role in modern applications. There is a big difference between handling English and Arabic text in classification, and preprocessing is particularly challenging for Arabic text. This paper presents the implementation of a Na&#239;ve Bayes classifier for Arabic text with and without a stemmer. A set of four categories and 800 documents from the Text Retrieval Conference (TREC) 2001 dataset was used. The results showed that Na&#239;ve Bayes with a light stemmer achieves better accuracy than Na&#239;ve Bayes without one: classification with the light stemmer reached 35.0745%, higher than the 33.831% obtained without a stemmer.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_89-The_Effect_of_using_Light_Stemming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-category Bangla News Classification using Machine Learning Classifiers and Multi-layer Dense Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120588</link>
        <id>10.14569/IJACSA.2021.0120588</id>
        <doi>10.14569/IJACSA.2021.0120588</doi>
        <lastModDate>2021-05-31T08:58:26.5970000+00:00</lastModDate>
        
        <creator>Sharmin Yeasmin</creator>
        
        <creator>Ratnadip Kuri</creator>
        
        <creator>A R M Mahamudul Hasan Rana</creator>
        
        <creator>Ashraf Uddin</creator>
        
        <creator>A. Q. M. Sala Uddin Pathan</creator>
        
        <creator>Hasnat Riaz</creator>
        
        <subject>Bangla news classification; supervised learning; feature extraction; category prediction; machine learning; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Online and offline newspaper articles have become an integral phenomenon in our society. News articles have a significant impact on our personal and social activities, but picking an appropriate news article is a challenging task for users given the ocean of sources. Recommending the appropriate news category helps readers find desired articles, but categorizing news articles manually is laborious, slow and expensive. Moreover, it gets more difficult for a resource-insufficient language like Bengali, the fourth most spoken language in the world. However, very few approaches have been proposed for categorizing Bangla news articles, in which a few machine learning algorithms were applied with limited resources. In this paper, we apply multiple machine learning approaches, including a neural network, to categorize Bangla news articles on two different datasets. News articles were collected from the popular Bengali newspaper Prothom Alo to build Dataset I, and Dataset II was gathered from the machine learning competition platform Kaggle. We develop a modified stop-word set and apply it in the preprocessing stage, which leads to a significant improvement in performance. Our results show that the multi-layer neural network, Na&#239;ve Bayes and support vector machine provide better performance; accuracies of 94.99%, 94.60% and 95.50% were achieved for SVM, logistic regression and the multi-layer dense neural network, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_88-Multi_category_Bangla_News_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning based Analytical Approach for Envisaging Bugs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120587</link>
        <id>10.14569/IJACSA.2021.0120587</id>
        <doi>10.14569/IJACSA.2021.0120587</doi>
        <lastModDate>2021-05-31T08:58:26.5670000+00:00</lastModDate>
        
        <creator>Anjali Munde</creator>
        
        <subject>Software bug prediction; prediction model; data mining; machine learning; Na&#239;ve Bayes (NB); support vector machine (SVM); k-nearest neighbors (KNN); decision tree (DT); random forest (RF); python programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>A software defect is a shortcoming, fault, mistake, breakdown or glitch in software that causes it to produce an unsuitable or unanticipated result. The most hazardous consequences of a software defect that is not identified at an early stage of development are losses of time, quality, cost and effort, and wastage of resources. Faults can appear at any stage of software development, so successful software businesses emphasize software quality, predominantly in the early stages of development. To address this problem, researchers have formulated various bug prediction methodologies. However, developing a robust bug prediction model is a demanding task, and several approaches have been proposed in the literature. This paper presents a software fault prediction model based on Machine Learning (ML) algorithms. The simulation aims to predict the existence or non-existence of a fault using machine learning classification models. Five supervised ML algorithms are used to predict upcoming software defects based on historical data: Na&#239;ve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT) and Random Forest (RF). The assessment procedure indicated that ML algorithms can be applied efficiently with a high accuracy rate. Moreover, a comparison measure is employed to evaluate the proposed prediction model against other methods. The accumulated conclusions indicate that the ML methodology offers improved performance.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_87-A_Machine_Learning_based_Analytical_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supporting Multi-interface Entities in Software-Defined Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120586</link>
        <id>10.14569/IJACSA.2021.0120586</id>
        <doi>10.14569/IJACSA.2021.0120586</doi>
        <lastModDate>2021-05-31T08:58:26.5500000+00:00</lastModDate>
        
        <creator>Jun-Hyuk Park</creator>
        
        <creator>Wonyong Yoon</creator>
        
        <subject>Software-defined networking; multi-interface; multi-interface switch; flow-precision mobility; flow-precision routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Software-Defined Networking (SDN) has gained growing momentum from its earlier application to wired networks (e.g., data center networks) to its application to wireless and mobile networks. In addition, state-of-the-art wireless and mobile networks (cellular networks and mesh networks) have been enhanced through the integration of multiple radio access technologies or multiple interfaces. This paper considers how to evolve multi-interface wireless mobile networks according to a future SDN-based paradigm, and deals with the technical problems therein. It presents the design and implementation of mechanisms that support SDN-based control of two types of multi-interface wireless mobile networks: one with multi-interface user devices and the other with multi-interface switching entities. As a methodology for demonstrating the feasibility of the proposed solution, a novel testing suite incorporating a real SDN controller and a standardized network simulator is designed and built. The functional verification and performance of the proposed solution are demonstrated in a virtual network topology, but with the orchestration of the real SDN controller. The results show that the multi-interface wireless entities can exploit the multi-radio and multi-channel wireless resources with the help of the SDN approach.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_86-Supporting_Multi_interface_Entities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Technology in Education System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120585</link>
        <id>10.14569/IJACSA.2021.0120585</id>
        <doi>10.14569/IJACSA.2021.0120585</doi>
        <lastModDate>2021-05-31T08:58:26.5170000+00:00</lastModDate>
        
        <creator>Afnan H. Alsaadi</creator>
        
        <creator>Doaa M. Bamasoud</creator>
        
        <subject>Blockchain; certifications; authentication; decentralized-education; transparency; immutability; smart contracts; learning accessibility; fraud prevention; sustainability; ledger; consensus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The aim of this paper is to review blockchain technology and its benefits in relation to the education system. Blockchain technology is widely researched and highly evaluated and appraised for its unique infrastructure. In general, blockchain is researched for its association with Bitcoin and the advantages of cryptocurrency. In this survey, the plan is to conduct a full review of previous literature focused on blockchain in education systems: to provide an overall review of blockchain concepts and the architecture behind the technology, and to examine the verification software used by the technology to improve security and immutability. The consensus algorithms and hashing functions, how they operate, and the different types of blockchain are briefly discussed. The existing technology used in Saudi Arabia is also reviewed. In-depth research was conducted on over 70 papers, of which 35 are noted in this survey. Emerging blockchain promises real-time democracy and justice to all users over the world. Educational industries are said to be revolutionizing their communication systems and accessibility, and extending their markets globally by widening their admissions and providing secure, cost-effective, transparent and immutable communications across their educational platforms.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_85-Blockchain_Technology_in_Education_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Drone Security: Issues and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120584</link>
        <id>10.14569/IJACSA.2021.0120584</id>
        <doi>10.14569/IJACSA.2021.0120584</doi>
        <lastModDate>2021-05-31T08:58:26.4870000+00:00</lastModDate>
        
        <creator>Rizwan Majeed</creator>
        
        <creator>Nurul Azma Abdullah</creator>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Rafaqut Kazmi</creator>
        
        <subject>Drone technology; security; internet of things; threats; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2021.0120584.retraction</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_84-Drone_Security_Issues_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power-based Side Channel Analysis and Fault Injection: Hacking Techniques and Combined Countermeasure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120583</link>
        <id>10.14569/IJACSA.2021.0120583</id>
        <doi>10.14569/IJACSA.2021.0120583</doi>
        <lastModDate>2021-05-31T08:58:26.4570000+00:00</lastModDate>
        
        <creator>Noura Benhadjyoussef</creator>
        
        <creator>Mouna Karmani</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>Advanced encryption standard; fault attack; power attacks; combined countermeasure; hardware implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Over the last years, physical attacks have been massively researched due to their capability to extract secret information from cryptographic engines. These hacking techniques are based on exploiting information from physical implementations instead of flaws in cryptographic algorithms. Fault-injection attacks (FA) and side-channel analysis (SCA) are the most popular implementation attack techniques. Aiming to secure cryptographic devices against such attacks, many studies have proposed a variety of developed and sophisticated countermeasures. However, the majority of these secured approaches target a single, specific attack, and it is difficult for them to thwart hybrid attacks such as combined power and fault attacks. In this work, the Advanced Encryption Standard is used as a case study to analyse the most well-known physical hacking techniques: Differential Fault Analysis (DFA) and Correlation Power Analysis (CPA). Consequently, with the knowledge of such contemporary hacking techniques, we propose a low-overhead countermeasure for the AES implementation that combines correlated power noise generation with a combined-approach-based fault detection scheme.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_83-Power_based_Side_Channel_Analysis_and_Fault_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Appropriate Mode of Childbirth using Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120582</link>
        <id>10.14569/IJACSA.2021.0120582</id>
        <doi>10.14569/IJACSA.2021.0120582</doi>
        <lastModDate>2021-05-31T08:58:26.4400000+00:00</lastModDate>
        
        <creator>Md. Kowsher</creator>
        
        <creator>Nusrat Jahan Prottasha</creator>
        
        <creator>Anik Tahabilder</creator>
        
        <creator>Kaiser Habib</creator>
        
        <creator>Md. Abdur-Rakib</creator>
        
        <creator>Md. Shameem Alam</creator>
        
        <subject>Childbirth; labour mode; supervised machine learning; maternal death; infant </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>A woman&#39;s satisfaction with childbirth may have immediate and long-term effects on her health as well as on her relationship with her newborn child. The mode of delivery is genuinely vital to a delivering mother and her infant, and it can be a crucial factor in ensuring the safety of both the mother and the child. During delivery, decision-making within a short time becomes very challenging for the physician. Moreover, humans may make wrong decisions when selecting the appropriate mode of childbirth. A wrong decision increases the mother&#39;s life risk and can also harm the newborn baby&#39;s health. Computer-aided decision-making can be an excellent solution to this problem. Considering this scope, we have built a supervised machine-learning-based decision-making model to predict the most suitable childbirth mode and thereby reduce this risk. This work applied 32 supervised classifier algorithms and 11 training methods to a real childbirth dataset from the Tarail Upazilla Health Complex, Kishorganj, Bangladesh. We also analyzed and compared the results using various statistical parameters to determine the best-performing model. Quadratic discriminant analysis showed the highest accuracy, 0.979992, with an F1 score of 0.979962. Using this model to decide the appropriate labor mode may significantly reduce maternal and infant health risks.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_82-Predicting_the_Appropriate_Mode_of_Childbirth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Exemplar based Image Inpainting for Partial Instance Occlusion Handling with K-means Clustering and YCbCr Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120581</link>
        <id>10.14569/IJACSA.2021.0120581</id>
        <doi>10.14569/IJACSA.2021.0120581</doi>
        <lastModDate>2021-05-31T08:58:26.4100000+00:00</lastModDate>
        
        <creator>Deepa Abin</creator>
        
        <creator>Sudeep D.Thepade</creator>
        
        <subject>Partial occlusion; exemplar inpainting; K-means clustering; YCbCr color space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Images acquired in real-time outdoor environments are often subject to uneven illumination, cloudy weather, and varying lighting conditions. Instances of partial occlusion deteriorate the background modeling of such scenes. This investigative work addresses sets of outdoor scene images with varying illumination and partial occlusions. A restoration method is needed that improves both subjective perception and execution time. The proposed work focuses on a novel amended exemplar model to improve subjective perception. The exemplar inpainting method is improved through color quantization with a K-means clustering approach in the YCbCr color space. Experimental validation shows that the proposed method improves on existing methods in both qualitative and objective measures. The proposed method achieves an average “Peak Signal to Noise Ratio (PSNR)” of 28.2869 and “Structural SIMilarity index (SSIM)” of 0.9759, showing better results visually, with a tradeoff in time.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_81-Improved_Exemplar_based_Image_Inpainting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Language Processing Applications: A New Taxonomy using Textual Entailment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120580</link>
        <id>10.14569/IJACSA.2021.0120580</id>
        <doi>10.14569/IJACSA.2021.0120580</doi>
        <lastModDate>2021-05-31T08:58:26.3770000+00:00</lastModDate>
        
        <creator>Manar Elshazly</creator>
        
        <creator>Mohammed Haggag</creator>
        
        <creator>Soha Ahmed Ehssan</creator>
        
        <subject>Textual entailment; deep learning; summarization; sentiment analysis; information verification; machine learning; depression detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Textual entailment recognition is one of the recent challenges in the Natural Language Processing (NLP) domain. Deep learning strategies are used in textual entailment work instead of traditional machine learning or manual coding to achieve new, enhanced results. Textual entailment is also used in substantial NLP applications such as summarization, machine translation, sentiment analysis, and information verification. Textual entailment is more precise than traditional NLP techniques at extracting emotions from text, because the sentiment of any text can be clarified through entailment. When textual entailment is combined with deep learning, it can show a marked improvement in accuracy and enable new applications such as depression detection. This paper lists and describes NLP applications of textual entailment. Various applications and approaches are discussed. Moreover, the datasets, algorithms, resources, and performance evaluation for each model are included. The paper also compares textual entailment application models according to the method used, the result achieved by each model, and the pros and cons of each model.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_80-Natural_Language_Processing_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Smart Encryption Approach based on Multidimensional Analysis Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120579</link>
        <id>10.14569/IJACSA.2021.0120579</id>
        <doi>10.14569/IJACSA.2021.0120579</doi>
        <lastModDate>2021-05-31T08:58:26.3630000+00:00</lastModDate>
        
        <creator>Salima TRICHNI</creator>
        
        <creator>Fouzia OMARY</creator>
        
        <creator>Mohammed BOUGRINE</creator>
        
        <subject>Security; confidentiality; artificial intelligent; smart encryption; cryptography; skyline</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In the last decade, and especially in the new situation forced by the Covid-19 pandemic, information systems are often required to operate remotely and must communicate and share confidential data with several interlocutors. In such a context, ensuring the confidentiality of communications becomes a complex and difficult task; hence the need for a flexible system that can adapt to the different parameters involved in every exchange of information. We recently presented in [1] a new smart approach to data encryption that serves this purpose. This approach uses artificial intelligence concepts and applies the BNL skyline algorithm to decide on the most suitable algorithm for ensuring the best data privacy. However, as the dimensions and criteria to be considered for this smart encryption grow, the complexity of the BNL algorithm increases, the response time of the application increases, and the quality of the skyline encryption decreases. In this work, we propose a new idea to resolve this problem. The contribution consists in adding another intelligence brick to dynamically select the skyline algorithm depending on the type and number of dimensions. Through this paper, we provide an analysis and comparison of several skyline algorithms for multidimensional search. The results of this study show the performance of the new approach both in execution time and in the quality of the dominant encryption solution.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_79-New_Smart_Encryption_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Earthquake Prediction using Hybrid Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120578</link>
        <id>10.14569/IJACSA.2021.0120578</id>
        <doi>10.14569/IJACSA.2021.0120578</doi>
        <lastModDate>2021-05-31T08:58:26.3300000+00:00</lastModDate>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <creator>Lobna Ibrahim</creator>
        
        <creator>Diaa Salama Abdelminaam</creator>
        
        <subject>Extreme learning machine; least square support vector machine; flower pollination algorithm; earthquake prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This research proposes two earthquake prediction models using seismic indicators and hybrid machine learning techniques for the region of southern California. Seven seismic indicators were mathematically and statistically calculated based on previously recorded seismic events in the earthquake catalogue of that region. These indicators are, namely: the time taken during the occurrence of n seismic events (T), the average magnitude of n events (M_mean), the magnitude deficit, i.e., the difference between the observed magnitude and the expected one (ΔM), the curve slope for n events using the Gutenberg-Richter inverse power law (b), the mean square deviation for n events using the Gutenberg-Richter inverse power law (η), the square root of the energy released during time T (DE1/2), and the average time between events (&#181;). Two hybrid machine learning models are proposed to predict the earthquake magnitude over fifteen days. The first model is FPA-ELM, a hybrid of the flower pollination algorithm (FPA) and the extreme learning machine (ELM). The second is FPA-LS-SVM, a hybrid of FPA and the least square support vector machine (LS-SVM). The performance of these two models is compared and assessed using four criteria: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Symmetric Mean Absolute Percentage Error (SMAPE), and Percent Mean Relative Error (PMRE). The simulation results showed that the FPA-LS-SVM model outperformed the FPA-ELM, LS-SVM, and ELM models in terms of prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_78-Earthquake_Prediction_using_Hybrid_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Succinct Novel Searching Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120577</link>
        <id>10.14569/IJACSA.2021.0120577</id>
        <doi>10.14569/IJACSA.2021.0120577</doi>
        <lastModDate>2021-05-31T08:58:26.3170000+00:00</lastModDate>
        
        <creator>Celine </creator>
        
        <creator>Shinoj Robert</creator>
        
        <creator>Maria Dominic</creator>
        
        <subject>Time complexities; space complexities; searching algorithm; biform tree; pre-order traversal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Searching algorithms are effective in producing urgently needed results in the operation of data structures. Searching is performed as a common operation, unlike other operations, across various kinds of algorithms. Binary and linear search feature in most searching techniques, and each technique has its own inherent limitations and scope for exploration. The versatility of the different techniques in practice helps in deriving hybrid search techniques from them. For any tree representation, sorted order is expected to achieve the best performance. This paper presents a new technique, named the biform tree approach, for producing the sorted order of elements and performing efficient searching.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_77-A_Succinct_Novel_Searching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UML Sequence Diagram: An Alternative Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120576</link>
        <id>10.14569/IJACSA.2021.0120576</id>
        <doi>10.14569/IJACSA.2021.0120576</doi>
        <lastModDate>2021-05-31T08:58:26.3000000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>Requirements elicitation; conceptual modeling; static model; events model; behavioral model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The UML sequence diagram is the second most common UML diagram; it represents how objects interact and exchange messages over time. Sequence diagrams show how events or activities in a use case are mapped into operations of object classes in the class diagram. The general acceptance of sequence diagrams can be attributed to their relatively intuitive nature and their ability to describe partial behaviors (as opposed to diagrams such as state charts). However, studies have shown that over 80% of graduating students were unable to create a software design or even a partial design, and many students had no idea how sequence diagrams were constrained by other models. Many students exhibited difficulties in identifying valid interacting objects and constructing messages with appropriate arguments. Additionally, according to authorities, even though many different semantics have been proposed for sequence diagrams (e.g., translations to state machines), there exists no suitable semantic basis for refining required sequence diagram behavior: direct-style semantics do not precisely capture required sequence diagram behaviors, and translations to other formalisms disregard essential features of sequence diagrams such as guard conditions and critical regions. This paper proposes an alternative to sequence diagrams: a generalized model that provides further understanding of sequence diagrams and assimilates them into a new modeling language called the thinging machine (TM). The sequence diagram is extended horizontally, removing the superficial vertical-only dimensional limitation while preserving the logical chronology of events. TM diagramming is spread nonlinearly in terms of actions. Events and their chronology are constructed on a second plane of description superimposed on the initial static description. The result is a more refined representation that simplifies the modeling process. This is demonstrated by remodeling sequence diagram cases from the literature.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_76-UML_Sequence_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Packet Delivery Ratio in Wireless Sensor Network with Multi Factor Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120575</link>
        <id>10.14569/IJACSA.2021.0120575</id>
        <doi>10.14569/IJACSA.2021.0120575</doi>
        <lastModDate>2021-05-31T08:58:26.2830000+00:00</lastModDate>
        
        <creator>Venkateswara Rao M</creator>
        
        <creator>Srinivas Malladi</creator>
        
        <subject>Multi factor strategies; novel routing metric; packet coding; packet delivery ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In the design of a wireless sensor network (WSN), the packet delivery ratio is an important parameter to be maximized. In existing schemes, a secure zone-based routing protocol was implemented for lifetime improvement in WSNs. In multi-hop communication, a new routing criterion was formulated for packet transmission. Security against message tampering, dropping, and flooding attacks was incorporated into the routing metric. The approach skipped risky zones entirely during routing and chose alternative paths to route packets securely with less energy consumption. Although energy conservation and attack resilience are achieved, congestion in the WSN increases and, because of it, the packet delivery ratio diminishes. To address this problem, we propose a solution that improves the packet delivery ratio with multi-factor strategies involving routing, differentiation of flows, flow-based congestion control with retransmission, and redundant packet coding. Detailed analysis and simulations are undertaken to evaluate the efficiency of the contemplated work compared to the existing solutions.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_75-Improving_Packet_Delivery_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Machine Learning Algorithms for Intrusion Detection System in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120574</link>
        <id>10.14569/IJACSA.2021.0120574</id>
        <doi>10.14569/IJACSA.2021.0120574</doi>
        <lastModDate>2021-05-31T08:58:26.2670000+00:00</lastModDate>
        
        <creator>Mohammed S. Alsahli</creator>
        
        <creator>Marwah M. Almasri</creator>
        
        <creator>Mousa Al-Akhras</creator>
        
        <creator>Abdulaziz I. Al-Issa</creator>
        
        <creator>Mohammed Alawairdhi</creator>
        
        <subject>Internet of Things (IoT); Wireless Sensor Network (WSN); Information Technology (IT); Denial of Service (DoS); artificial intelligence (AI); machine learning (ML)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Technology has evolved towards connecting “things” together in the global network called the Internet of Things (IoT). This is achieved through Wireless Sensor Networks (WSNs), which introduce new security challenges for Information Technology (IT) scientists and researchers. This paper addresses the security issues in WSNs by establishing potential automated solutions for identifying associated risks. It also evaluates the effectiveness of various machine learning algorithms on two types of datasets, namely the KDD99 and WSN datasets. The aim is to analyze and protect WSN networks in combination with firewalls, Deep Packet Inspection (DPI), and Intrusion Prevention Systems (IPS), all specialized for the overall protection of WSN networks. Multiple testing options were investigated, such as cross validation and percentage split. Based on the findings, the most accurate algorithm and the one with the least processing time were suggested for both datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_74-Evaluation_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards using Single EEG Channel for Human Identity Verification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120573</link>
        <id>10.14569/IJACSA.2021.0120573</id>
        <doi>10.14569/IJACSA.2021.0120573</doi>
        <lastModDate>2021-05-31T08:58:26.2370000+00:00</lastModDate>
        
        <creator>Marwa A. Elshahed</creator>
        
        <subject>Biometric; EEG; single channel; verification; brain lobes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Biometrics is an interesting area of research as a result of tremendous technological advances, especially in security. It is an automated technology used for identification based on biological or behavioral human traits. An electroencephalogram (EEG) records the electrical activity signals of the brain, which are considered biological traits usable in biometric systems. The primary goal of this work is to find a single EEG channel that can be used for human identification purposes. A single EEG channel recording is used for identity verification, a mode preferred for large numbers of subjects because it allows instant real-time system decisions. The percent residual difference (PRD), a common quantitative measure of the distance between two signals, is used here to verify human identity. The proposed system achieves 100% sensitivity using certain single channels placed over the parietal and occipital lobes. The proposed system takes a short time in the enrolment process and delivers an instant decision in verification mode. Also, using imaginary tasks is preferred for human identity verification.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_73-Towards_using_Single_EEG_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cultural Events Classification using Hyper-parameter Optimization of Deep Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120572</link>
        <id>10.14569/IJACSA.2021.0120572</id>
        <doi>10.14569/IJACSA.2021.0120572</doi>
        <lastModDate>2021-05-31T08:58:26.2070000+00:00</lastModDate>
        
        <creator>Feng Zhipeng</creator>
        
        <creator>Hamdan Gani</creator>
        
        <subject>Cultural events; convolutional neural network (CNN); very depth convolutional network (VGG); multi-class classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Through digitization, the maintenance and promotion of cultural heritage is being strengthened. Against this background, this study presents a new Indonesian cultural events dataset and automatic image classification of cultural events. The dataset was developed using the Flickr image platform, and images of five cultural events were collected: the Baliem Festival, Jember Fashion Festival, Nyepi Festival, Pacu Jawi, and Pasola Festival. A Convolutional Neural Network (CNN) was developed as the classification method. A comparison of CNN models (VGG16 and VGG19) under several optimization configurations was performed to find the best model. The results showed that VGG16 with image augmentation and dropout regularization performed best, with 94.66% accuracy. This study hopes to support the digital documentation process for heritage and help preserve the cultural heritage of Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_72-Cultural_Events_Classification_using_Hyper_Parameter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating and Optimizing Software Testing using Artificial Intelligence Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120571</link>
        <id>10.14569/IJACSA.2021.0120571</id>
        <doi>10.14569/IJACSA.2021.0120571</doi>
        <lastModDate>2021-05-31T08:58:26.1430000+00:00</lastModDate>
        
        <creator>Minimol Anil Job</creator>
        
        <subject>Software testing; artificial intelligence; testing automation; software engineering; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The final product of the software development process is a software system, and testing is one of the important stages in this process. The success of this process can be determined by how well it accomplishes its goal. Due to the advancement of technology, various software testing tools have been introduced into the software engineering discipline. As the use of software increases day by day, the complexity of software functions grows, and software must be released within short quality-evaluation periods, there is high demand for adopting automation in software testing. The emergence of automated software testing tools and techniques helps enhance quality and reduce the time and cost of software development. Artificial Intelligence (AI) techniques are widely applied in different areas of software engineering (SE). Applying AI techniques can help achieve good performance in software testing and increase the productivity of software development firms. This paper briefly presents the state of the art in applying AI techniques to software testing.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_71-Automating_and_Optimizing_Software_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computing Academics’ Perceived Level of Awareness and Exposure to Software Engineering Code of Ethics: A Case Study of a South African University of Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120570</link>
        <id>10.14569/IJACSA.2021.0120570</id>
        <doi>10.14569/IJACSA.2021.0120570</doi>
        <lastModDate>2021-05-31T08:58:26.1130000+00:00</lastModDate>
        
        <creator>Robert T. Hans</creator>
        
        <creator>Senyeki M. Marebane</creator>
        
        <creator>Jacqui Coosner</creator>
        
        <subject>Software engineering ethics; code of ethics; ethics awareness; ethics education; moral development ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The need for awareness of ethical computing is becoming increasingly important. This challenges all stakeholders in the software engineering profession, including educators, to improve their efforts on awareness of the professional codes of ethics that provide a framework for ethical reference. However, the several compromises seen in software engineering practice suggest that some in the profession are not familiar with the profession’s codes of ethics and are consequently unable to practice them or teach students about them. This research investigates the extent of awareness of the codes of ethics among practitioners who teach software development courses in an academic environment. An online questionnaire with indicators for measuring awareness of the software engineering code of ethics was deployed and answered by 44 educators. Graphical, univariate, and bivariate analyses were conducted on the data to determine the profile of the respondents and the extent of their awareness of the codes of ethics. The results indicate that the majority of the lecturers (54.5%) are not aware of the software engineering codes of ethics, and most of those who are aware were exposed to them through self-study or personal development. Furthermore, the inclusion of the codes of ethics in learning activities is minimal, inhibited by lack of awareness and failure to apply the codes practically. This study recommends that lecturing staff, as professional software engineers serving as an academic corps, should be placed on programmes exposing them to professional software engineering codes of ethics. Moreover, the study calls for accreditation of software engineering courses, as is the case in other professional engineering disciplines, to improve awareness and subsequent practical application of the codes of ethics.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_70-Computing_Academics_Perceived_Level_of_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Development of a Brain Semi-controlled Wheelchair for Navigation in Indoor Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120569</link>
        <id>10.14569/IJACSA.2021.0120569</id>
        <doi>10.14569/IJACSA.2021.0120569</doi>
        <lastModDate>2021-05-31T08:58:26.0970000+00:00</lastModDate>
        
        <creator>Hailah AlMazrua</creator>
        
        <creator>Abir Benabid Najjar</creator>
        
        <subject>Usability; wheelchair navigation; indoor navigation; mobility impairment; obstacle avoidance; obstacle detection; path planning; BCI; brain-computer interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Several technological advancements have emerged that provide technical assistance to support people with special needs in tackling their everyday tasks. In particular, with the advancement of cost-effective Brain-Computer Interfaces (BCI), such systems can be very useful for people with disabilities to improve their quality of life. This paper investigates the usability of a low-cost BCI for navigation in an indoor environment, one of the daily challenges facing individuals with mobility impairment. A software framework is proposed to control a wheelchair using three modes of operation: brain-controlled, autonomous, and semi-autonomous, taking into consideration usability and safety requirements. A prototype system based on the proposed framework was developed. The system can detect an obstacle in front of, to the right of, and to the left of the wheelchair, and can stop the movement automatically to avoid collision. An experiment was conducted to assess the usability of the proposed framework using the prototype system; subjects effectively steered the wheelchair under the three different operation modes by controlling the direction of motion. The usability evaluation of the proposed system, in terms of effectiveness, efficiency, and satisfaction, shows that it can be very helpful in the daily life of mobility-impaired people.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_69-Towards_the_Development_of_a_Brain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Data Sharing&#39;s Model Fitness Towards Open Data by using Pooled CFA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120568</link>
        <id>10.14569/IJACSA.2021.0120568</id>
        <doi>10.14569/IJACSA.2021.0120568</doi>
        <lastModDate>2021-05-31T08:58:26.0670000+00:00</lastModDate>
        
        <creator>Siti Nur’asyiqin Ismael</creator>
        
        <creator>Othman Mohd</creator>
        
        <creator>Yahaya Abd Rahim</creator>
        
        <subject>Pooled CFA; data sharing; open data; measurement model; validity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This study demonstrates the step-by-step procedure to perform Pooled Confirmatory Factor Analysis (CFA) in the measurement part of Structural Equation Modelling (SEM). CFA is crucial for the SEM measurement model to obtain an acceptable model fit before modeling the structural model. There are two techniques in CFA: individual CFA and Pooled-CFA. Pooled-CFA is usually performed when the number of constructs and items is high. If the model is too complicated, with many constructs and items, then performing Pooled-CFA is recommended to simplify the model&#39;s appearance while keeping it easy to understand. The perception of Malaysia Technical University Network (MTUN) academics on data sharing towards open data was analysed using Pooled-CFA. Three main constructs were analyzed in this research: data sharing with its four sub-constructs (technological factor, organizational factor, environmental factor, and individual factor), a mediator construct (open data licenses), and an open data construct. Furthermore, second-order constructs&#39; factor loadings towards their corresponding sub-constructs were investigated. This research collected primary data from 442 respondents using a stratified random sampling technique. This paper explains the theoretical framework before revealing the results of Pooled-CFA on data sharing towards open data.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_68-Assessing_Data_Sharings_Model_Fitness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design, Aggregation and Analysis of Power Consumption Data using the Jump Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120567</link>
        <id>10.14569/IJACSA.2021.0120567</id>
        <doi>10.14569/IJACSA.2021.0120567</doi>
        <lastModDate>2021-05-31T08:58:26.0500000+00:00</lastModDate>
        
        <creator>Yazid Hambally Yacouba</creator>
        
        <creator>Amadou Diabagat&#233;</creator>
        
        <creator>Michel Babri</creator>
        
        <creator>Adama Coulibaly</creator>
        
        <subject>Design; aggregation; analysis; jump process; electricity consumption; smart meter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This work aims to provide a pragmatic approach to assess electricity consumption at the level of households, buildings and neighborhoods. The main concern consists of proposing aggregation methods based on the jump process, according to a customer environment that is intrinsically linked to the implementation of a centralized system. The aim of the approach is to present data aggregations that derive their basis from a data model in order to facilitate the processing of electricity data at different scales of analysis. Such a smart meter data management process calls for the design of an aggregated database that can store data for a house, a building and a neighborhood. The advantage of this system lies in facilitating data interpretation and in the ability to guide decision-makers in the management of electricity consumption. An analysis of electricity consumption behavior is also proposed, based on monitoring the electricity consumption of the various devices connected to a smart meter.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_67-Design_Aggregation_and_Analysis_of_Power_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Feature Selection and Ensemble Techniques for Intrusion Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120566</link>
        <id>10.14569/IJACSA.2021.0120566</id>
        <doi>10.14569/IJACSA.2021.0120566</doi>
        <lastModDate>2021-05-31T08:58:26.0170000+00:00</lastModDate>
        
        <creator>Majid Torabi</creator>
        
        <creator>Nur Izura Udzir</creator>
        
        <creator>Mohd Taufik Abdullah</creator>
        
        <creator>Razali Yaakob</creator>
        
        <subject>Intrusion detection system (IDS); anomaly-based IDS; feature selection (FS); ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Intrusion detection has drawn considerable interest as researchers endeavor to produce efficient models that offer high detection accuracy. Nevertheless, the challenge remains in developing a reliable and efficient Intrusion Detection System (IDS) capable of handling large amounts of data, with trends evolving in real-time circumstances. The design of such a system relies on the detection methods used, particularly the feature selection techniques and machine learning algorithms. Thus motivated, this paper presents a review of feature selection and ensemble techniques used in anomaly-based IDS research. Dimensionality reduction methods are reviewed, followed by a categorization of feature selection techniques to illustrate their effectiveness on the training phase and detection. Selecting the most relevant features in the data has been proven to increase the efficiency of detection in terms of accuracy and computational cost, hence its important role in the design of an anomaly-based IDS. We then analyze and discuss a variety of IDS-based machine learning techniques with various detection models (single-classifier-based or ensemble-based) to illustrate their significance and success in the intrusion detection area. Besides the supervised and unsupervised learning methods in machine learning, ensemble methods combine several base models to produce one optimal predictive model and improve the accuracy performance of IDS. The review consequently focuses on ensemble techniques employed in anomaly-based IDS models and illustrates how their use improves the performance of these models. Finally, the paper discusses open issues in the area and offers research trends to be considered by researchers in designing efficient anomaly-based IDSs.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_66-A_Review_on_Feature_Selection_and_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensed-Lexicon based Approach for Identification of Similarity among Punjabi Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120565</link>
        <id>10.14569/IJACSA.2021.0120565</id>
        <doi>10.14569/IJACSA.2021.0120565</doi>
        <lastModDate>2021-05-31T08:58:25.9870000+00:00</lastModDate>
        
        <creator>Jasleen Kaur</creator>
        
        <creator>Jatinderkumar R Saini</creator>
        
        <subject>Cosine Similarity Index (CSI); Jaccard Similarity Index (JSI); Levenshtein Distance Index (LDI); n-gram; Punjabi; similarity checker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Textual similarity among documents often leads to copyright issues. Manual measurement of similarity among documents is a time-consuming and often infeasible activity. In this paper, we propose a technique for measuring similarity at the sensed-lexicon level for documents written in the Punjabi language using the Gurmukhi script. 50 Punjabi document pairs were manually collected with the help of native Punjabi writers. The proposed technique consists of four major levels. Level 0 consists of the data collection phase. Level 1 consists of noise removal and stop word removal sub-levels. In level 2, extracted tokens were stemmed and lemmatized, and synonyms were replaced based on part-of-speech tagging; vector space representation of each document then leads to n-gram generation. Extracted n-grams were weighted based on term frequency. In level 3, string-based token-level similarity indexes such as the Jaccard Similarity Index (JSI), Cosine Similarity Index (CSI) and Levenshtein Distance Index (LDI) were experimented with on the weighted tokens. In this work, Human Intelligence Task (HIT) based rating was utilized to measure the similarity among documents on a scale of 0-100. Results obtained from HIT-based rating are compared with results obtained from the proposed technique under various combinations of pre-processing levels. Results revealed that, on the basis of majority voting, the combination of stop word removal with stemming and ‘noun’-based synonym replacement, with bi-gram tokens, performs best. Statistical analysis indicates a strong correlation between CSI and HIT-based rating.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_65-Sensed_Lexicon_based_Approach_for_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mammogram Segmentation Techniques: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120564</link>
        <id>10.14569/IJACSA.2021.0120564</id>
        <doi>10.14569/IJACSA.2021.0120564</doi>
        <lastModDate>2021-05-31T08:58:25.9570000+00:00</lastModDate>
        
        <creator>Eman Justaniah</creator>
        
        <creator>Areej Alhothali</creator>
        
        <creator>Ghadah Aldabbagh</creator>
        
        <subject>Mammogram; medical imaging; segmentation; preprocessing; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>There has been significant development in computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems in recent years. This development coincides with the evolution of computing power and the growth of data. CAD systems support the detection and diagnosis of significant diseases, including cancer. Breast cancer is one of the most prevalent cancers affecting women and causing death around the world. Early detection of breast cancer has a significant effect on treatment. The typical CAD system goes through various steps, including image segmentation, feature extraction, and image classification. Image segmentation plays an important role in CAD systems and simplifies further processing. This review explores popular mammogram segmentation techniques. A mammogram is a medical image produced by a low-dose x-ray system to visualize the inner tissues of the breast. There are many segmentation techniques used to segment medical images. These techniques can be categorized into five main categories: region-based methods, boundary-based methods, atlas-based methods, model-based methods, and deep learning. A ground truth image is needed to measure the performance of a segmentation algorithm. Different performance measurements were used to evaluate the segmentation process, including accuracy, precision, recall, F1 score, Hausdorff Distance, Jaccard, and Dice Index. The research in mammogram segmentation has yielded promising results, but there is room for improvement.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_64-Mammogram_Segmentation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Prediction of Plant Diseases using CNN and GANs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120563</link>
        <id>10.14569/IJACSA.2021.0120563</id>
        <doi>10.14569/IJACSA.2021.0120563</doi>
        <lastModDate>2021-05-31T08:58:25.9400000+00:00</lastModDate>
        
        <creator>Ahmed Ali Gomaa</creator>
        
        <creator>Yasser M. Abd El-Latif</creator>
        
        <subject>Plants diseases; deep learning; early detection; convolutional neural network; generative adversarial networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Plant diseases enormously affect agricultural crop production and quality, with huge economic losses to farmers and the country. This in turn increases the market price of crops and food, increasing the purchase burden on customers. Therefore, early identification and diagnosis of plant diseases at every stage of the plant life cycle is a very critical approach to protecting and increasing crop yield. In this paper, using a deep-learning model, we present a classification system based on real-time images for early identification of plant infection, prior to the onset of severe disease symptoms, at different life stages of a tomato plant infected with Tomato Mosaic Virus (TMV). The proposed classification was applied to each stage of the plant separately to obtain the largest data set and manifestation of each disease stage. The plant stages are named in relation to the disease stage as healthy (uninfected), early infection, and diseased (late infection). The classifier was designed using a Convolutional Neural Network (CNN) model and achieved an accuracy rate of 97%. Generative Adversarial Networks (GANs) were then used to increase the number of real-time images, and applying the CNN to these new images raised the accuracy rate to 98%.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_63-Early_Prediction_of_Plant_Diseases_using_CNN_and_GANs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Analytics in Investment Banks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120562</link>
        <id>10.14569/IJACSA.2021.0120562</id>
        <doi>10.14569/IJACSA.2021.0120562</doi>
        <lastModDate>2021-05-31T08:58:25.9100000+00:00</lastModDate>
        
        <creator>Basma Iraqi</creator>
        
        <creator>Lamia Benhiba</creator>
        
        <creator>Mohammed Abdou Janati Idrissi</creator>
        
        <subject>Capital markets; data analytics; data analytics use cases; data-driven transformation; investment banks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Capital Markets are one of the most important pillars of the worldwide economy. They gather skilled finance and IT professionals as well as economists in order to make the best investment decisions and choose the most suitable funding solutions every time. Data analytics projects in Capital Markets can definitely be very beneficial, as all optimizations and innovations would have a financial impact; but they can also be very challenging, as the field itself has always incorporated a research component, so finding out what could really add value might be a tricky task. Based on a comprehensive literature review, this paper aims to structure the thinking around data analytics in investment banks and puts forward a classification of relevant data analytics use cases. Lastly, it discusses how transforming into a data-driven enterprise is the real change investment banks should aim to achieve, and discusses some of the challenges that they might encounter when engaging in this transformation process.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_62-Data_Analytics_in_Investment_Banks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Learning Acceptance Model during Covid-19: An Integrated Conceptual Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120561</link>
        <id>10.14569/IJACSA.2021.0120561</id>
        <doi>10.14569/IJACSA.2021.0120561</doi>
        <lastModDate>2021-05-31T08:58:25.8770000+00:00</lastModDate>
        
        <creator>Qasem Kharma</creator>
        
        <creator>Kholoud Nairoukh</creator>
        
        <creator>AbdelRahman Hussein</creator>
        
        <creator>Mosleh Abualhaj</creator>
        
        <creator>Qusai Shambour</creator>
        
        <subject>Online learning; technology acceptance; learning assistance; learning community building assistance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Because of Covid-19, many countries shut down schools in order to prevent the virus from spreading in their communities. Therefore, schools have opted to use online learning technologies that support distance learning for students. As a consequence, the Ministry of Higher Education and Scientific Research encourages higher education institutes to adopt blended learning in their programs. However, different students react in different ways to online learning; some students were able to make more productive use of online learning strategies than others. A conceptual model based on 15 variables was constructed from UTAUT2, TAM, and other models to investigate and study the factors that affect students’ acceptance of online learning. 29 hypotheses were investigated to study the relationships among the variables that affect online learning acceptance and online learning community building at Al-Ahliyya Amman University. The collected responses were analyzed using a structural equation modeling (SEM) approach; SPSS and AMOS were used to analyze the data.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_61-Online_Learning_Acceptance_Model_during_Covid-19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fairness Embedded Adaptive Recommender System: A Conceptual Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120560</link>
        <id>10.14569/IJACSA.2021.0120560</id>
        <doi>10.14569/IJACSA.2021.0120560</doi>
        <lastModDate>2021-05-31T08:58:25.8630000+00:00</lastModDate>
        
        <creator>Alina Popa</creator>
        
        <subject>Algorithmic fairness; reinforcement learning; recommender systems; system adaptability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In the current fast-paced and constantly changing environment, companies should ensure that their way of interacting with users is both relevant and highly adaptive. To stay competitive, companies should invest in state-of-the-art technologies that optimize the relationship with the user using increasingly available data. The most popular applications used to develop user relationships are Recommender Systems. The vast majority of traditional recommender systems treat recommendation as a static procedure and focus on a specific type of recommendation, making them not very agile in adapting to new situations. Also, when implementing a Recommender System there is a need to ensure fairness in the way decisions are made upon customer data. In this paper, we propose a novel Reinforcement Learning-based recommender system that is highly adaptive to changes in customer behavior and focuses on ensuring both producer and consumer fairness: the Fairness Embedded Adaptive Recommender System (FEARS). The approach overcomes Reinforcement Learning’s main drawback in the recommendation area by using a small but meaningful action space. Two fairness metrics are also presented, along with their calculation and adaptation for use with Reinforcement Learning, ensuring that the system reaches the optimal trade-off between personalization and fairness.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_60-Fairness_Embedded_Adaptive_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gender Diversity in Computing and Immersive Games for Computer Programming Education: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120559</link>
        <id>10.14569/IJACSA.2021.0120559</id>
        <doi>10.14569/IJACSA.2021.0120559</doi>
        <lastModDate>2021-05-31T08:58:25.8470000+00:00</lastModDate>
        
        <creator>Chyanna Wee</creator>
        
        <creator>Kian Meng Yap</creator>
        
        <subject>Computer science education; game-based learning; gender; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This paper provides a review of the current state of the gender gap in computer science and highlights how immersive games can mitigate this issue. Game-based learning (GBL) applications have been shown to successfully incite motivation in students and increase learning efficiency in both formal and non-formal educational settings. With the rise of GBL, researchers have also used virtual reality to provide pupils with a more immersive learning experience. Both GBL and virtual reality techniques are also used for computer programming education. However, there is a paucity of applications that utilize these techniques to incite interest in computer science from a female perspective. This is a cause for concern as immersive games have been proven to be capable of inciting affective motivation and fostering positive attitudes towards specific subjects. Hence, this review summarises the benefits and limitations of GBL and virtual reality; how males and females respond to certain game elements; and suggestions to aid in the development of immersive games to increase female participation in the field of computer science.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_59-Gender_Diversity_in_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stacked Autoencoder based Feature Compression for Optimal Classification of Parkinson Disease from Vocal Feature Vectors using Immune Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120558</link>
        <id>10.14569/IJACSA.2021.0120558</id>
        <doi>10.14569/IJACSA.2021.0120558</doi>
        <lastModDate>2021-05-31T08:58:25.8170000+00:00</lastModDate>
        
        <creator>K. Kamalakannan</creator>
        
        <creator>G.Anandharaj</creator>
        
        <subject>Immune algorithms; Parkinson’s disease; stacked autoencoder; airs-parallel; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Parkinson’s disease (PD) is a progressive neurological disorder and is most common among people who are above 60 years old. It affects the brain’s nerve cells due to a deficiency in dopamine secretion. Dopamine acts as a neurotransmitter and helps in the movement of the body parts. Once brain cells/neurons start dying due to aging, dopamine levels decrease. The symptoms of Parkinson’s are difficulty in performing regular/habitual movements, uncontrollable shaking of hands and limbs, memory loss, stiff muscles, sudden temporary loss of control, etc. The severity of the disease will worsen if it is not diagnosed and treated at the early stages. This paper concentrates on developing a Parkinson’s disease diagnosis system using machine learning techniques and algorithms. Machine Learning is an integral part of artificial intelligence: it takes huge amounts of data as input and trains on them using existing algorithms to understand the patterns in the data. Based on the recognized pattern, the machine will act accordingly without any human intervention. In this work, two major approaches have been employed to diagnose PD. Initially, 26 vocal features of PD-affected and healthy individuals, obtained from the UCI Machine Learning data repository, are taken as the initial raw data/features. In pre-processing, the mRMR feature selection algorithm is employed to minimize the feature count and increase the accuracy rate. The selected features are further compressed using the Stacked Autoencoder technique to improve the accuracy rate and quality of classification with reduced run time. K-fold cross-validation is used to evaluate the predictive capability of the model and the effectiveness of the extracted features. Artificial Immune Recognition System – Parallel (AIRS-P), an immune-inspired algorithm, is employed to classify the data from the extracted features. The proposed system attained 97% accuracy, outperforming the benchmarked algorithms and proving its significance for PD classification.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_58-Stacked_Autoencoder_based_Feature_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Multi-band Microstrip Patch Antennas for Mid-band 5G Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120557</link>
        <id>10.14569/IJACSA.2021.0120557</id>
        <doi>10.14569/IJACSA.2021.0120557</doi>
        <lastModDate>2021-05-31T08:58:25.7830000+00:00</lastModDate>
        
        <creator>Karima Mazen</creator>
        
        <creator>Ahmed Emran</creator>
        
        <creator>Ahmed S. Shalaby</creator>
        
        <creator>Ahmed Yahya</creator>
        
        <subject>Bandwidth; microstrip; multi-band; notch slot; rectangle slot; 5G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Recently, microstrip patch antennas have been considered among the best antenna structures due to their simple construction, low cost, minimum weight, and the fact that they can be effortlessly integrated with circuits. To achieve multi-band operation, an antenna is designed with etched rectangle and circle slots on the surface of the patch, providing multi-band frequency capabilities for mid-band 5G applications. All antennas use an inset-fed structure and are printed and fabricated on the Rogers RT5880 substrate. Prototype structures of the microstrip patch antenna were obtained during the design process until the desired antennas were achieved. Antenna_1 achieved tri-band characteristics covering the WiMAX band (2.51 – 2.55 GHz), the WLAN and S-band (3.80 – 3.87 GHz), and the C- and X-band (6.19 – 6.60 GHz). Antenna_2 gives dual-band characteristics covering the C-band and X-band (6.72 – 7.92 GHz) with a peak under -45 dB, suitable for mid-band 5G applications. High impedance bandwidth increases between 70 MHz and 1.25 GHz for wireless applications. The proposed microstrip patch antennas were simulated using CST MWS-2015 and were experimentally tested to verify the fundamental characteristics of the proposed design; they offer multi-band operation with high, stable gain and good directional radiation characteristics.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_57-Design_of_Multi_band_Microstrip_Patch_Antennas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Twitter based Data Analysis in Natural Language Processing using a Novel Catboost Recurrent Neural Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120555</link>
        <id>10.14569/IJACSA.2021.0120555</id>
        <doi>10.14569/IJACSA.2021.0120555</doi>
        <lastModDate>2021-05-31T08:58:25.7530000+00:00</lastModDate>
        
        <creator>V. Laxmi Narasamma</creator>
        
        <creator>M. Sreedevi</creator>
        
        <subject>Natural language processing; sentiment analysis; twitter data; Catboost; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In recent years, sentiment analysis using Twitter data has been one of the most prevalent themes in Natural Language Processing (NLP). However, existing sentiment analysis approaches exhibit lower performance and classification accuracy due to inadequate labeled data and failure to analyze complex sentences. Therefore, this research develops a novel hybrid machine learning model, the Catboost Recurrent Neural Framework (CRNF), with an error pruning mechanism to analyze Twitter data based on user opinion. Initially, a Twitter dataset of tweets about the coronavirus COVID-19 vaccine is collected, pre-processed and used to train the system. The proposed CRNF model then classifies the sentiments as positive, negative, or neutral. The sentiment analysis process is implemented in Python, and the performance parameters are calculated. Finally, the attained results in performance parameters such as precision, recall, accuracy and error rate are validated against existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_55-Twitter_based_Data_Analysis_in_Natural_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image-based Onion Disease (Purple Blotch) Detection using Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120556</link>
        <id>10.14569/IJACSA.2021.0120556</id>
        <doi>10.14569/IJACSA.2021.0120556</doi>
        <lastModDate>2021-05-31T08:58:25.7530000+00:00</lastModDate>
        
        <creator>Muhammad Ahmed Zaki</creator>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Muhammad Ahsan</creator>
        
        <creator>Sammer Zai</creator>
        
        <creator>Muhammad Rizwan Anjum</creator>
        
        <creator>Naseer u Din</creator>
        
        <subject>Disease detection; disease classification; artificial intelligence; inceptionv3; deep convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Agriculture is the biggest need for human sustenance on earth. Over the years, many farming methods and components have been computerized to guarantee quicker production with higher quality. Because of growing demand in the farming industry, agricultural produce must be cultivated using an efficient process. Onion (Allium cepa L.) is an economically valuable crop and the second-largest vegetable crop in the world. The spread of various diseases has severely affected onion production, and one of the most serious and widespread onion diseases worldwide is purple blotch. To compensate for a limited training dataset of healthy and infected onion crops, the proposed method employs a pre-trained, enhanced InceptionV3 model. The proposed model detects onion disease (purple blotch) from images by recognizing the abnormalities caused by the disease. The suggested approach achieves a classification accuracy of 85.47% in recognizing the disease. This research investigates a novel approach for the rapid and accurate diagnosis of plant/crop diseases, laying a theoretical foundation for the use of deep learning in agricultural information.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_56-Image_based_Onion_Disease_Purple_Blotch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing the Steganographic Resistance of the LSB Data Hide Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120554</link>
        <id>10.14569/IJACSA.2021.0120554</id>
        <doi>10.14569/IJACSA.2021.0120554</doi>
        <lastModDate>2021-05-31T08:58:25.7200000+00:00</lastModDate>
        
        <creator>A. Y. Buchaev</creator>
        
        <creator>A. G. Mustafaev</creator>
        
        <creator>V.S. Galyaev</creator>
        
        <creator>A. M. Bagandov</creator>
        
        <subject>Steganography; steganalysis; visual attack; least significant bit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The robustness of a security algorithm is one of its most important properties, determining how difficult the algorithm is to break. Increasing the robustness of the algorithm directly raises the degree of secrecy when it is used for confidential transmission. The paper analyzes the Least Significant Bit (LSB) steganographic algorithm and presents a method of counteracting the &quot;visual attack&quot; and the statistical methods used against stego-containers generated with the LSB algorithm. To prove the increase in resistance, the study used the PSNR index and the Chi-square test. The proposed technique involves the use of a uniform distribution and a compression method. The paper presents the results of computer experiments demonstrating the effectiveness of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_54-Increasing_the_Steganographic_Resistance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Markerless-based Gait Analysis and Visualization Approach for ASD Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120553</link>
        <id>10.14569/IJACSA.2021.0120553</id>
        <doi>10.14569/IJACSA.2021.0120553</doi>
        <lastModDate>2021-05-31T08:58:25.6900000+00:00</lastModDate>
        
        <creator>Nur Khalidah Zakaria</creator>
        
        <creator>Nooritawati Md Tahir</creator>
        
        <creator>Rozita Jailani</creator>
        
        <subject>Autism spectrum disorder (ASD); kinematic; marker-based; markerless-based; gait analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This study proposes a new method for gait acquisition and analysis for autistic children based on a markerless technique, compared against the gold-standard marker-based technique. The gait acquisition stage is conducted using a depth camera with a customizable skeleton tracking function, the Microsoft Kinect sensor, to record the walking gait trials of 23 children with autism spectrum disorder (ASD) and 30 typically developing (TD) children. Next, the Kinect depth sensor output is translated into kinematic gait features. Analysis and evaluation are then performed, specifically on the kinematic angles of the hip, knee, and ankle, by analyzing and visualizing the patterns of the plots against the kinematic plots acquired from the marker-based Vicon motion system technique. In addition, these kinematic angles are validated using a statistical method, the Analysis of Variance (ANOVA). Results showed that the p-values are insignificant for all angles when computing both the intra-group and inter-group normalization. Hence, these findings prove that the proposed markerless gait technique is indeed apt to be used as a new alternative markerless method for gait analysis of ASD children.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_53-A_Markerless_based_Gait_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Evaluation of Bible Learning Application using Elements of User Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120552</link>
        <id>10.14569/IJACSA.2021.0120552</id>
        <doi>10.14569/IJACSA.2021.0120552</doi>
        <lastModDate>2021-05-31T08:58:25.6600000+00:00</lastModDate>
        
        <creator>Frederik Allotodang</creator>
        
        <creator>Herman Tolle</creator>
        
        <creator>Nataniel Dengen</creator>
        
        <subject>Christian education; Sunday school; element of user experience; ARCS; android application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Technological developments can encourage children to learn easily and help solve problems that often arise in the learning process. Sunday School students need learning media to make Christian Education easier to understand. The method used in Sunday Schools is still conventional, relying on face-to-face teaching and learning in class. This method often faces various challenges, such as students&#39; lack of focus. One solution proposed in this paper is to design an Android-based learning application that supports the learning process. The application&#39;s User Interface and User Experience (UI/UX) design is built on the Elements of User Experience methodology, which is used in the analysis and design process to maximize the usability and engagement level of the application. The learning materials are designed based on the Attention, Relevance, Confidence, and Satisfaction (ARCS) framework, which helps ensure the clarity and appropriateness of the material. The application was implemented and tested on students to measure its effectiveness. The application trial showed a promising improvement, especially in students&#39; engagement with the materials.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_52-Design_and_Evaluation_of_Bible_Learning_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Artificial Neural Network in Forecasting Sales Volume in Tokopedia Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120551</link>
        <id>10.14569/IJACSA.2021.0120551</id>
        <doi>10.14569/IJACSA.2021.0120551</doi>
        <lastModDate>2021-05-31T08:58:25.6430000+00:00</lastModDate>
        
        <creator>Meiryani </creator>
        
        <creator>Dezie Leonarda Warganegara</creator>
        
        <subject>Forecasting; e-commerce; backpropagation; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Predicting sales is one way for a company to secure profits. Tokopedia Indonesia is a marketplace of the customer-to-customer (C2C) e-commerce type. This research was conducted to help sellers in the Tokopedia Indonesia marketplace predict the sales of their merchandise by implementing Artificial Neural Networks, so that sellers can prepare or stock items whose sales are predicted to increase. Artificial neural networks can help predict future sales values. The data is divided into training data and testing data. The results of this study indicate that the network model obtained reaches an accuracy rate of 95%.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_51-Implementation_of_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monophonic Guitar Synthesizer via Mobile App</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120550</link>
        <id>10.14569/IJACSA.2021.0120550</id>
        <doi>10.14569/IJACSA.2021.0120550</doi>
        <lastModDate>2021-05-31T08:58:25.6270000+00:00</lastModDate>
        
        <creator>Edgar Garc&#237;a Leyva</creator>
        
        <creator>Elena Fabiola Ruiz Ledesma</creator>
        
        <creator>Rosaura Palma Orozco</creator>
        
        <creator>Lorena Chavarr&#237;a B&#225;ez</creator>
        
        <subject>Monophonic synthesizer; guitar; sound emulation; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Among guitarists, it is common to work with guitar synthesizers because they can emulate a great variety of sounds produced by different musical instruments from nothing more than playing the guitar: a piece of music is played on a guitar, but other musical instruments are actually heard, such as a saxophone, a violin, a piano, or percussion, depending on the instrument selected. The problem addressed in this article is that synthesizers are expensive and, due to their size, transporting the equipment is often impractical. As a solution, the development of a mobile application that functions as a monophonic synthesizer is proposed. In this way, the cost is greatly reduced, and the user can install the application on a mobile device running the Android operating system and connect it to an electric or electro-acoustic guitar through an audio interface, obtaining a functional technological instrument that offers guitarists an alternative to conventional synthesizers. The application uses the Radix-2 Fast Fourier Transform as its signal recognition algorithm, which allows the fundamental frequencies generated by the guitar to be obtained, transformed into MIDI notation, and later used in sound emulation.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_50-Monophonic_Guitar_Synthesizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptualizing Smart Sustainable Cities: Crossing Visions and Utilizing Resources in Africa</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120549</link>
        <id>10.14569/IJACSA.2021.0120549</id>
        <doi>10.14569/IJACSA.2021.0120549</doi>
        <lastModDate>2021-05-31T08:58:25.5970000+00:00</lastModDate>
        
        <creator>Ahmed Al-Gindy</creator>
        
        <creator>Aya Al-Chikh Omar</creator>
        
        <creator>Mariam Aerabe</creator>
        
        <creator>Ziad Elkhatib</creator>
        
        <subject>Smart cities; sustainable energy; renewable energy; internet of things; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Recent advancements in technology have made the development of smart cities more effective and feasible. Smart cities depend on intelligent systems, artificial intelligence, the internet of things, control systems, and many other advanced technologies. Sustainability challenges and problems worldwide show that the smart and sustainability concepts reflect largely mutual goals: improving and providing essential life services for all people efficiently while depending on sustainable, clean, and renewable energy, with consideration of the city&#39;s economic, educational, health, social, and environmental aspects. In this research, a cost analysis process has been implemented to ease the implementation and resource utilization of smart and sustainable cities in Africa. The challenges and difficulties of those implementations are summarized.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_49-Conceptualizing_Smart_Sustainable_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spoken Language Identification on Local Language using MFCC, Random Forest, KNN, and GMM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120548</link>
        <id>10.14569/IJACSA.2021.0120548</id>
        <doi>10.14569/IJACSA.2021.0120548</doi>
        <lastModDate>2021-05-31T08:58:25.5670000+00:00</lastModDate>
        
        <creator>Vincentius Satria Wicaksana</creator>
        
        <creator>Amalia Zahra S.Kom</creator>
        
        <subject>Gaussian mixture model; random forest; K-Nearest Neighbor; spoken language recognition; MFCC; GMM; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Spoken language identification is a field of research already pursued by many. Many techniques have been proposed for speech processing, such as Support Vector Machines, Gaussian Mixture Models, Decision Trees, and others. This paper presents a system that uses the Mel-Frequency Cepstral Coefficient (MFCC) features of the speech input signal; uses Random Forest (RF), Gaussian Mixture Model (GMM), and K-Nearest Neighbor (KNN) as classifiers; scores on 3s, 10s, and 30s segments; and uses a dataset consisting of the Javanese, Sundanese, and Minang languages, which are traditional languages of Indonesia. K-Nearest Neighbor achieves 98.88% accuracy for 30s of speech, followed by Random Forest with 95.55% accuracy for 30s of speech, while GMM achieves 82.24% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_48-Spoken_Language_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Opinion Mining by Comments Classification using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120547</link>
        <id>10.14569/IJACSA.2021.0120547</id>
        <doi>10.14569/IJACSA.2021.0120547</doi>
        <lastModDate>2021-05-31T08:58:25.5330000+00:00</lastModDate>
        
        <creator>Moazzam Ali</creator>
        
        <creator>Farwa yasmine</creator>
        
        <creator>Husnain Mushtaq</creator>
        
        <creator>Abdullah Sarwar</creator>
        
        <creator>Adil Idrees</creator>
        
        <creator>Sehrish Tabassum</creator>
        
        <creator>BaburHayyat</creator>
        
        <creator>Khalil Ur Rehman</creator>
        
        <subject>Customer comments; behavior mining; data mining; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In this era of a digital and competitive market, every business entity is trying to adopt a digital marketing strategy to gain global business benefits. To gain such competitive advantages, it is necessary for e-commerce business organizations to understand the feelings, thinking, and reasons of their customers regarding their products and services. The major objective of this study is to investigate customers&#39; buying behavior and consumer behavior to enable the customer to evaluate an online product from various perspectives, such as variety, convenience, trust, and time. It performs data analysis on e-commerce customer data collected through intelligent agents (automated scripts) or web scraping techniques, enabling customers to quickly understand the product in the given perspectives through other customers&#39; opinions at a glance. This is a qualitative and quantitative e-commerce content analysis using various methods such as data crawling, manual annotation, text processing, feature engineering, and text classification. We obtained manually annotated data from e-commerce experts, employed BOW and N-Gram techniques for feature engineering, and applied KNN, Na&#239;ve Bayes, and VSM classifiers with different feature extraction combinations to get better results. This study also incorporates data mining and data analytics result evaluation and validation techniques such as precision, recall, and F1-score.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_47-Customer_Opinion_Mining_by_Comments_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Communication Process of Wireless Sensor Network Architecture for Smart Urban Environment Monitoring Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120546</link>
        <id>10.14569/IJACSA.2021.0120546</id>
        <doi>10.14569/IJACSA.2021.0120546</doi>
        <lastModDate>2021-05-31T08:58:25.5030000+00:00</lastModDate>
        
        <creator>Rashmi S Bhaskar</creator>
        
        <creator>Veena S Chakravarthi</creator>
        
        <subject>Wireless sensor network (WSN); sensors; smart city security; secure communication process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Wireless Sensor Networks (WSNs) have been increasingly used for remote monitoring systems, and their adoption is growing exponentially for larger applications too. However, various challenges associated with both resource management and security arise when deployment becomes massive in scale and distributed in order. The proposed system considers a case study of smart city management in which the problems associated with data transmission and security are addressed. This is carried out through the provisioning of an urban environment monitoring system, an essential system for smart city projects to assure citizens&#39; better-quality well-being. A scalable and effective urban environment monitoring system requires seamless transmission of data from the sensor nodes to the analytics engine, whereas existing architectures are designed to suit very specific use-cases. As a contribution, the proposed system introduces a cost-effective architecture for environmental monitoring in urban zones of a smart city, named the Smart Sensor Surveillance System (4S-UEM). The core idea of the proposed system is to offer a balance between resource efficiency and resilient, secure communication in large-scale WSN deployment, considering a smart city as the deployment and assessment area. The proposed system makes use of an urban geographical clustering process to develop an organized structure of sensor nodes. Unlike existing studies, the proposed system introduces a data analytical engine followed by secure routing using a gateway. The design of the proposed system is carried out using a layered architecture of the communication model, targeting cost-effective, energy-optimal, and secure data transmission to the analytics engine.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_46-A_Secure_Communication_Process_of_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Rain Simulation based on Constrained View Frustum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120545</link>
        <id>10.14569/IJACSA.2021.0120545</id>
        <doi>10.14569/IJACSA.2021.0120545</doi>
        <lastModDate>2021-05-31T08:58:25.4570000+00:00</lastModDate>
        
        <creator>JinGi Im</creator>
        
        <creator>Mankyu Sung</creator>
        
        <subject>View-dependent rendering; realistic real-time simulation; view frustum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Realistic real-time rendering of rain streaks has been treated as a very difficult problem because of the various natural phenomena involved; moreover, creating and managing the many particles in a rain streak consumes considerable resources. This paper proposes an efficient real-time rain streak simulation algorithm that generates view-dependent rain particles, which can express a large amount of rain streaks even with a small number of particles. By creating a ‘constrained view frustum’ that depends on the camera moving in real time, particles are rendered only in that space. Accordingly, particles are rendered well even if the camera keeps moving or rotating rapidly. And although only a small number of particles are used, since the simulation is performed in a limited space viewed by the user, the effect of simulating many particles can be obtained. This enables very efficient real-time simulation of rain streaks.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_45-Efficient_Rain_Simulation_based_on_Constrained_View.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speeding up an Adaptive Filter based ECG Signal Pre-processing on Embedded Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120544</link>
        <id>10.14569/IJACSA.2021.0120544</id>
        <doi>10.14569/IJACSA.2021.0120544</doi>
        <lastModDate>2021-05-31T08:58:25.4400000+00:00</lastModDate>
        
        <creator>Safa Mejhoudi</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Amine Saddik</creator>
        
        <creator>Wissam Jenkal</creator>
        
        <creator>Abdelhafid El Ouardi</creator>
        
        <subject>ECG signal denoising; ADTF algorithm; OpenMP programming; embedded architectures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Medical applications increasingly require complex calculations under accelerated processing-time constraints. These applications are therefore oriented towards the integration of high-performance embedded architectures. In this context, the detection of cardiac abnormalities remains a high-priority task in emergency medicine. ECG analysis is a complex task that requires significant computing time, since a large amount of information must be analyzed in parallel at high frequencies. Real-time processing is the biggest challenge for researchers when applications carry time constraints, as cardiac activity monitoring does. This work evaluates the Adaptive Dual Threshold Filter (ADTF) algorithm dedicated to ECG signal filtering on various embedded architectures: a Raspberry Pi 3B+ and an Odroid XU4. The implementation is based on C/C++ and OpenMP to exploit the parallelism of the target architectures. The evaluation was validated using several ECG signals from the MIT-BIH Arrhythmia database with a sampling frequency of 360 Hz. Based on an algorithmic complexity study and a parallelization of the functional blocks that present significant workloads, the evaluation results show a mean execution time of 7.5 ms on the Raspberry Pi 3B+ and 0.34 ms on the Odroid XU4. With an efficient parallelization on the Odroid XU4 architecture, real-time performance can be achieved.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_44-Speeding_up_an_Adaptive_Filter_based_ECG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Augmented Reality in Improving Visual Thinking in Mathematics of 10th-Grade Students in Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120543</link>
        <id>10.14569/IJACSA.2021.0120543</id>
        <doi>10.14569/IJACSA.2021.0120543</doi>
        <lastModDate>2021-05-31T08:58:25.4100000+00:00</lastModDate>
        
        <creator>Fadi Abdul Raheem Odeh Bani Ahmad</creator>
        
        <subject>Augmented reality technology; visual thinking development; 10th grade; mathematics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Augmented reality is one of the key topics in the area of improving visual thinking in science courses such as Mathematics, and it plays a significant and effective role in the educational process. The current study aimed to investigate the effect of augmented reality in improving the visual thinking of 10th-grade students in mathematics in Jordan. To achieve the objectives of the study, the methodology applied the semi-experimental approach and augmented reality technology, and included preparing a test to measure visual thinking comprising (20) multiple-choice items used as a pre- and post-test, whose validity and reliability were verified. The study sample consists of (57) female students purposefully selected from the 10th-grade students at the Jerash Model Schools for the first semester of 2020/2021. The study sample is divided into two groups: an experimental group of (28) female students taught using augmented reality technology, and a control group of (29) female students taught using the traditional method. The results of the study show statistically significant differences at the level of (α = 0.05) in the development of visual thinking in favor of the experimental group students taught using augmented reality technology. The study also shows differences in the performance of the experimental group students in each skill of visual thinking.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_43-The_Effect_of_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Enterprise must be Prepared to be “AI First”?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120542</link>
        <id>10.14569/IJACSA.2021.0120542</id>
        <doi>10.14569/IJACSA.2021.0120542</doi>
        <lastModDate>2021-05-31T08:58:25.3770000+00:00</lastModDate>
        
        <creator>Mustapha Lahlali</creator>
        
        <creator>Naoual Berbiche</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>Artificial intelligence; machine learning; RPA; business transformation; AI adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Among disruptive technologies, Artificial Intelligence (AI), Robotic Process Automation (RPA), and Machine Learning (ML) play a very important role in business transformation and continue to show great promise for creating new sources of wealth and new business models. The reality of AI in the company is not reduced to simple process optimization. In fact, AI introduces new organizational schemes, new ways of working, new optimization niches, new services, and other ways of thinking about interactions with customers, and therefore a new way of doing business. It thus reshuffles the competitive landscape and inspires innovative processes to create new business models, offering new opportunities not only for IT solution providers but also for innovators, investors, and business owners. Even though the contribution of Artificial Intelligence no longer needs to be proven, many companies face difficulties in adopting this technology, mainly due to the lack of a pragmatic approach highlighting the roles and responsibilities of the various stakeholders, especially IT professionals and business owners, and the key steps to follow to make this experience a real success. This research aims to answer fundamental questions, in particular: What will the implementation of this technology bring to the company&#39;s business? How should companies prepare for this adoption? If the decision to go ahead is confirmed, what kind of adoption approach should companies follow? And finally, how can enterprises monitor this shift to the intelligent edge?</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_42-How_Enterprise_must_be_Prepared_to_be_AI_First.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Engineering in Software-defined Networks using Reinforcement Learning: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120541</link>
        <id>10.14569/IJACSA.2021.0120541</id>
        <doi>10.14569/IJACSA.2021.0120541</doi>
        <lastModDate>2021-05-31T08:58:25.3470000+00:00</lastModDate>
        
        <creator>Delali Kwasi Dake</creator>
        
        <creator>James Dzisi Gadze</creator>
        
        <creator>Griffith Selorm Klogo</creator>
        
        <creator>Henry Nunoo-Mensah</creator>
        
        <subject>Software defined networking; reinforcement learning; machine learning; traffic engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>With the exponential increase in connected devices and the accompanying complexity of network management, dynamic Traffic Engineering (TE) solutions in Software-Defined Networking (SDN) using Reinforcement Learning (RL) techniques have emerged in recent times. The SDN architecture empowers network operators to monitor network traffic with agility, flexibility, robustness, and centralized control. The separation of the control and forwarding planes in SDN has enabled the integration of RL agents into the networking architecture to enforce changes in traffic patterns during network congestion. This paper surveys major RL techniques adopted for efficient TE in SDN. We reviewed the use of RL agents in modelling TE policies for SDNs, with agents’ actions on the environment guided by future rewards and a new state. We further looked at the SARL and MARL algorithms that RL agents deploy in forming policies for the environment. The paper finally examined agent design architectures in SDN and possible research gaps.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_41-Traffic_Engineering_in_Software_defined_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Implementation of Hybrid Enhanced Sentiment Analysis System using Spark ML Pipeline: A Big Data Analytics Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120540</link>
        <id>10.14569/IJACSA.2021.0120540</id>
        <doi>10.14569/IJACSA.2021.0120540</doi>
        <lastModDate>2021-05-31T08:58:25.3300000+00:00</lastModDate>
        
        <creator>Raviya K</creator>
        
        <creator>Mary Vennila S</creator>
        
        <subject>Big data; sentiment analysis; machine learning; apache spark; ML pipeline</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Today, we live in the Big Data age. Social networks, online shopping, and mobile data are the main sources generating huge volumes of text data from users. This &quot;text data&quot; provides companies with useful insight into how customers view their brand and encourages them to shape business strategies proactively in order to maintain their trade. Hence, it is essential for enterprises to analyse the sentiments of social media big data to make predictions. Because of the variety and volume of the data, sentiment analysis on big data has become difficult. However, open-source Big Data platforms and machine learning techniques make it possible to process large text collections in real time. Advances in fields including Big Data and Deep Learning have overcome the traditional restrictions of distributed computing. The primary aim is to perform sentiment analysis on the pipelined architecture of Apache Spark ML to speed up the computations and improve machine efficiency in different environments. Therefore, a hybrid CNN-SVM model is designed and developed: CNN is pipelined with SVM for sentiment feature extraction and classification in ML to improve accuracy. It is more flexible, fast, and scalable. In addition, Naive Bayes, Support Vector Machine (SVM), Random Forest, and Logistic Regression classifiers have been used to measure the efficiency of the proposed system in a multi-node environment. The experimental results demonstrate that, in terms of different evaluation metrics, the hybrid sentiment analysis model outperforms the conventional models. The proposed method enables effective handling of big sentiment datasets and would benefit corporations, governments, and individuals alike.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_40-An_Implementation_of_Hybrid_Enhanced_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Training and Serious Games in Clinical Training in Nursing and Midwife Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120539</link>
        <id>10.14569/IJACSA.2021.0120539</id>
        <doi>10.14569/IJACSA.2021.0120539</doi>
        <lastModDate>2021-05-31T08:58:25.3170000+00:00</lastModDate>
        
        <creator>Galya Georgieva-Tsaneva</creator>
        
        <creator>Ivanichka Serbezova</creator>
        
        <subject>Health care; medical education; serious educational games; nurse; midwife; online training</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The article examines the applications, methods, and trends in online training of health care and medical professionals in Bulgaria. Attention is paid to modern methods for the effective application of online training and to the extent to which online training can replace traditional training. The article presents the results of a survey on online training conducted in April 2021 at universities in Bulgaria in the Health Care professional field, specialties Nurse and Midwife. The results of the survey can serve to improve online education in Bulgaria by including the educational resources recommended by respondents. The creation of new web-based educational resources (video materials, serious games, virtual simulations, video presentations, webinars, etc.) can complement the traditional methods of training students in the Health Care professional field, specialties Nurse and Midwife, in Bulgaria.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_39-Online_Training_and_Serious_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combined Non-parametric and Parametric Classification Method Depending on Normality of PDF of Training Samples</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120538</link>
        <id>10.14569/IJACSA.2021.0120538</id>
        <doi>10.14569/IJACSA.2021.0120538</doi>
        <lastModDate>2021-05-31T08:58:25.2830000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Spectral information; spatial information; maximum likelihood decision rule; satellite image; image classification; classification performance; instantaneous field of view</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>A classification method combining non-parametric and parametric classifiers, depending on the normality of the Probability Density Function of the training samples, is proposed. The proposed method is also based on spatial information for high-spatial-resolution satellite optical sensor images. In addition, a classification method that takes into account not only spectral but also spatial features of LANDSAT-4 and 5 Thematic Mapper (TM) data is proposed. Treatment of the spatial-spectral variability existing within a region is especially important for such high-spatial-resolution satellite imagery data. Standard deviations in small cells, such as 2x2, 3x3, and 4x4 pixels, were used as measures of the spatial-spectral variability. This information can be used together with conventional spectral features in a unified way by a traditional classifier such as the pixelwise Maximum Likelihood Decision Rule (MLHDR). The classification performance study focuses on new clear cuts and alpine meadows, which are very close in spectral space and difficult to distinguish by conventional methods. Through experiments, it is found that there is a substantial improvement in overall classification accuracy for TM forestry data. The Probability of Correct Classification (PCC) for the new clear cuts and alpine meadows classes rose by 7% to 97% correct, and the confusion between alpine meadows and new clear cuts was reduced from 9% to 3%.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_38-Combined_Non_parametric_and_Parametric_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Onion Crop Monitoring with Multispectral Imagery using Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120537</link>
        <id>10.14569/IJACSA.2021.0120537</id>
        <doi>10.14569/IJACSA.2021.0120537</doi>
        <lastModDate>2021-05-31T08:58:25.2670000+00:00</lastModDate>
        
        <creator>Naseer U Din</creator>
        
        <creator>Bushra Naz</creator>
        
        <creator>Samer Zai</creator>
        
        <creator>Bakhtawer</creator>
        
        <creator>Waqar Ahmed</creator>
        
        <subject>UAV; deep neural networks; onion crop; NDVI; crop monitoring; VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The world’s growing population leads the government of Pakistan to increase the supply of food for the coming years in a well-organized manner. Feasible agriculture plays a vital role in sustaining food production and preserves the environment from unnecessary chemicals through the use of technology for good management. This research presents the design and development of a multi-spectral imaging system for precision agriculture tasks. The imaging system includes an RGB camera and a Pi NoIR camera controlled by a Raspberry Pi on a drone. The images are captured by an Unmanned Aerial Vehicle (UAV) and then sent to a Java application, which sharpens and resizes them. The Normalized Difference Vegetation Index (NDVI) is calculated to determine crop health status based on real-time data. A Deep Learning (DL) technique is used to recognize the onion crop growth stage from the captured dataset. We show how to implement a progressive deep neural network model to recognize the onion crop growth stage. The performance accuracy of the system is 96.10% for batch size 16 and 93.80% for batch size 32.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_37-Onion_Crop_Monitoring_with_Multispectral_Imagery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Scroll Order Generator Software from View Movements in People with Disabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120536</link>
        <id>10.14569/IJACSA.2021.0120536</id>
        <doi>10.14569/IJACSA.2021.0120536</doi>
        <lastModDate>2021-05-31T08:58:25.2370000+00:00</lastModDate>
        
        <creator>Juan R&#237;os-Kavadoy</creator>
        
        <creator>Harold Guerrero-Bello</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Displacement; motor disability; pupil; computer vision of images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>People with motor disabilities face problems such as being unable to move around independently, as well as difficulties in taking advantage of the technological tools developed for their rehabilitation. This research is based on computer vision and robotics and was carried out with the objective of generating displacement orders, using smoothing and binarization algorithms, to assist the movement of disabled people. The intelligent software for people with disabilities or motor deficiencies generates movement commands within an acceptable time from visual commands made by the person, through communication between the camera and the software, which constantly captures images. The captured image is cropped to the face region, the view region is then cropped from the face region, and the pupil is located by means of the necessary algorithms. The complexity lies not only in locating the pupil, but also in identifying when a command is being sent and when it is not. Finally, the unit processes the movement command (left or right) to turn the LEDs on and off.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_36-Intelligent_Scroll_Order_Generator_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of a Method to Measure Test Suite Quality Attributes for White-Box Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120535</link>
        <id>10.14569/IJACSA.2021.0120535</id>
        <doi>10.14569/IJACSA.2021.0120535</doi>
        <lastModDate>2021-05-31T08:58:25.2070000+00:00</lastModDate>
        
        <creator>Mochamad Chandra Saputra</creator>
        
        <creator>Tetsuro Katayama</creator>
        
        <subject>Test case; test suite quality attributes; white-box testing; reliability analysis; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>As test suites are an important asset in software testing, measuring their quality attributes is important for describing the quality of software. This research proposes a method to measure test suite quality attributes for white-box testing. The attributes are usability, efficiency, reliability, functionality, portability, and maintainability, selected from 28 attributes of software quality. Using the proposed method, the test suite quality attributes are calculated, yielding various levels of quality. The validity of the measurement results is then proved by reliability analysis: Cohen’s kappa coefficient is used to validate the results of the test suite quality attribute measurement based on the level of agreement between the measurement results and expert assessment. The reliability analysis finds that the attributes most strongly related, based on the minimum percentage of the level-of-agreement value, are usability, reliability, and functionality. Hence, our proposed method is useful for measuring test suite quality attributes.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_35-Proposal_of_a_Method_to_Measure_Test_Suite_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Threats of RFID and WSNs: Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120534</link>
        <id>10.14569/IJACSA.2021.0120534</id>
        <doi>10.14569/IJACSA.2021.0120534</doi>
        <lastModDate>2021-05-31T08:58:25.1730000+00:00</lastModDate>
        
        <creator>Ghada Hisham Alzeer</creator>
        
        <creator>Ghada Sultam Aljumaie</creator>
        
        <creator>Wajdi Alhakami</creator>
        
        <subject>Security; IoT; WSN; RFID</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The Internet of Things (IoT) has garnered significant attention with the growing changes in human life over the last few years. IoT is a network of smart devices that use sensors to collect information and conduct events in their environments; the information can then be shared on the Internet. IoT uses a range of technologies and finds various applications such as smart homes, environmental monitoring, and healthcare. In this paper, we conducted a comparative study to analyze the difference between two technologies: Wireless Sensor Networks (WSNs) and Radio Frequency Identification (RFID). It is pertinent to note that these technologies would not be effective without incorporating security aspects, given the potential number of threats and attacks on the network. This paper provides a comprehensive review of recent approaches to securing RFID and WSNs; most of the included studies were carefully chosen to cover only recent techniques, from 2017 to 2020. The paper also highlights common attacks on RFID and WSNs and the secure authentication mechanisms for these technologies, and further presents different ways of detecting various attacks in RFID and WSNs.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_34-Security_and_Threats_of_RFID_and_WSNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ICS: Interoperable Communication System for Inter-Domain Routing in Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120533</link>
        <id>10.14569/IJACSA.2021.0120533</id>
        <doi>10.14569/IJACSA.2021.0120533</doi>
        <lastModDate>2021-05-31T08:58:25.1430000+00:00</lastModDate>
        
        <creator>Bhavana A</creator>
        
        <creator>Nandha Kumar A N</creator>
        
        <subject>Internet-of-Things (IoT); interoperability; heterogeneous; gateway protocol; inter-domain routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The Internet-of-Things consists of heterogeneous smart appliances connected by a global network with self-configuring capabilities, requiring interoperable communication schemes when performing inter-domain routing. A review of existing interoperable approaches shows that there is still large scope for improving IoT interoperability. The proposed system introduces an Interoperable Communication System (ICS) by developing a novel inter-domain routing approach for IoT using two schemes: a Preemptive and a Non-Preemptive Communication scheme, targeting mainly emergency-based routing, which demands faster transmission, and dedicated transmission, which demands accountability in communication. A simulation study of the proposed system shows that it offers approximately 90% reduced delay, 57% increased packet delivery ratio, and 98% faster processing time when compared with existing approaches to accomplishing interoperability in IoT.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_33-ICS_Interoperable_Communication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Robot based Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120532</link>
        <id>10.14569/IJACSA.2021.0120532</id>
        <doi>10.14569/IJACSA.2021.0120532</doi>
        <lastModDate>2021-05-31T08:58:25.1130000+00:00</lastModDate>
        
        <creator>Atef Gharbi</creator>
        
        <subject>Robotic Flexible Manufacturing Systems (RFMS); multi-robot based control system; RFMS control architecture; planning model; flexibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>One of the most important challenges in Robotic Flexible Manufacturing Systems (RFMS) is how to develop a multi-robot based control system in which each robot is able to make intelligent decisions in a changing environment. The problem is how to ensure flexibility with the proposed multi-robot based control system based on triggering strategies. The flexibility of the whole system is expanded by the capacity of the flexible robots to effectively carry out the tasks assigned to them. This paper presents three contributions: (i) the RFMS control architecture, with its main components and methods described in detail, (ii) the planning model, and (iii) the different levels of flexibility in RFMS.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_32-Multi_Robot_based_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Performance of ABAC Security Policies Validation using a Novel Clustering Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120531</link>
        <id>10.14569/IJACSA.2021.0120531</id>
        <doi>10.14569/IJACSA.2021.0120531</doi>
        <lastModDate>2021-05-31T08:58:25.0970000+00:00</lastModDate>
        
        <creator>K. Vijayalakshmi</creator>
        
        <creator>V.Jayalakshmi</creator>
        
        <subject>Anomalies; attribute-based access control model; big data; cloud storage; clustering; intrusion detection system; security policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Cloud computing offers several services, such as storage, software, networking, and other computing services. Cloud storage is a boon for big data and big data owners. Although big data owners can easily avail themselves of cloud storage without spending much on infrastructure and software to manage their data, security is a big issue, and protecting outsourced big data is a challenging and ongoing research problem. Cloud service providers use the attribute-based access control (ABAC) model to detect malicious intruders and address the security requirements of today’s new computing technologies. Anomalies in security policies are removed to improve the efficiency of the access control model. This paper implements a novel clustering approach to cluster security policies. Our proposed approach uses a rule-specific cluster merging technique that compares a rule only with the clusters where the probability of similarity is high; hence, this technique reduces the cost, time, and complexity of clustering. Rather than verifying all rules at once, detecting and removing anomalies in every cluster of rules improves the performance of the intrusion detection system. Our novel clustering approach is useful for researchers and practitioners in ABAC policy validation.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_31-Improving_Performance_of_ABAC_Security_Policies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Survey and Research Directions on Blockchain IoT Access Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120530</link>
        <id>10.14569/IJACSA.2021.0120530</id>
        <doi>10.14569/IJACSA.2021.0120530</doi>
        <lastModDate>2021-05-31T08:58:25.0670000+00:00</lastModDate>
        
        <creator>Hafiz Adnan Hussain</creator>
        
        <creator>Zulkefli Mansor</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>Blockchain; Internet of Things; IoT; access control; access control management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The Internet of Things (IoT) has been a widely used technology in the last decade in different applications. IoT devices communicate over wired or wireless connections to store, compute, and track various real-time scenarios. This survey mainly discusses the core problems of IoT security, access control against unauthorized users, and the security requirements for IoT. IoT devices are heterogeneous and have low memory and limited processing power because of their small size. Nowadays, IoT systems are insecure and powerless to protect themselves against cyber attacks, mainly due to the inadequate space in IoT gadgets, immature standards, and the lack of secure hardware and software design, development, and deployment. To meet IoT requirements, the authors discuss the limitations of traditional access control. The authors then examine the potential to extend access control by implementing a secure architecture accommodated by the Blockchain, and also address how to use the Blockchain to work with and resolve some of the standards relevant to IoT security issues. In the end, an analysis of this survey presents future open-ended problems and challenges, and shows how the Blockchain can potentially ensure reliable, scalable, and more efficient security solutions for IoT, together with directions for further research.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_30-Comprehensive_Survey_and_Research_Directions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach based on Machine Learning Algorithms for the Recommendation of Scientific Cultural Heritage Objects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120529</link>
        <id>10.14569/IJACSA.2021.0120529</id>
        <doi>10.14569/IJACSA.2021.0120529</doi>
        <lastModDate>2021-05-31T08:58:25.0330000+00:00</lastModDate>
        
        <creator>Fouad Nafis</creator>
        
        <creator>Khalid AL FARARNI</creator>
        
        <creator>Ali YAHYAOUY</creator>
        
        <creator>Badraddine AGHOUTANE</creator>
        
        <subject>Cultural heritage; CIDOC-CRM; ontologies; OWL; recommender system; semantic web; RDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The Scientific Cultural Heritage (SCH) of the Dr&#226;a-Tafilalet region in south-eastern Morocco is a rich source of data testifying to the ingenuity of an older generation that shaped the region’s past. These data must be preserved for future generations, particularly with new technologies and the semantic web. Recommender systems (RS) are intended to assist prospective users by recommending the most suitable services based on their profile and expectations. Collaborative filtering (CF), content-based filtering (CB), and hybrid filtering RS have shown promising results in exploring the problems experienced especially in CH. However, some limitations remain to be resolved, mostly concerning the ability of these methods to build a stable and complete framework that can provide a full picture of the user profile and suggest the most appropriate offers. This paper presents a hybrid recommender system for SCH data, a field little explored despite its historical importance and the value it generates. The results presented in this paper are based on data collected from the region of Dr&#226;a-Tafilalet in southern Morocco.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_29-An_Approach_based_on_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workload Partitioning of a Bio-inspired Simultaneous Localization and Mapping Algorithm on an Embedded Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120528</link>
        <id>10.14569/IJACSA.2021.0120528</id>
        <doi>10.14569/IJACSA.2021.0120528</doi>
        <lastModDate>2021-05-31T08:58:25.0170000+00:00</lastModDate>
        
        <creator>Amraoui Mounir</creator>
        
        <creator>Latif Rachid</creator>
        
        <creator>Abdelhafid El Ouardi</creator>
        
        <creator>Abdelouahed Tajer</creator>
        
        <subject>Simultaneous localization and mapping (SLAM); Bio-inspired algorithms; CPU-GPU workload partitioning; embedded systems; visual acuity (VA); hardware/software codesign</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Many algorithms have been developed to perform visual simultaneous localization and mapping (SLAM) for robotic applications. These algorithms use monocular or stereovision systems to address constraints related to navigation in unknown or dynamic environments. The requirements of SLAM systems in terms of processing time and precision are a factor that limits their use in many embedded applications, such as UAVs or autonomous vehicles. Meanwhile, trends towards low-cost and low-power processing require massive parallelism on hardware architectures. The emergence of recent heterogeneous embedded architectures should help in designing embedded systems dedicated to Visual SLAM applications. It was demonstrated in a previous work that bio-inspired algorithms are competitive with classical methods based on image processing and environment perception. This paper studies a bio-inspired SLAM algorithm with the aim of making it suitable for implementation on a heterogeneous architecture dedicated to embedded applications. An algorithm-architecture adequation approach is used to achieve workload partitioning on a CPU-GPU architecture and hence speed up processing tasks.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_28-Workload_Partitioning_of_a_Bio_inspired.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Soil Monitoring based on LoRa Module for Oil Palm Plantation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120527</link>
        <id>10.14569/IJACSA.2021.0120527</id>
        <doi>10.14569/IJACSA.2021.0120527</doi>
        <lastModDate>2021-05-31T08:58:24.9870000+00:00</lastModDate>
        
        <creator>Ahmad Alfian Ruslan</creator>
        
        <creator>Shafina Mohamed Salleh</creator>
        
        <creator>Sharifah Fatmadiana Wan Muhamad Hatta</creator>
        
        <creator>Aznida Abu Bakar Sajak</creator>
        
        <subject>Internet of Things (IoT); Low Range (LoRa); Organic Light-Emitting Diodes (OLED); ThingSpeak; Arduino</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Internet of Things (IoT) Soil Monitoring based on Low Range (LoRa) Module for Oil Palm Plantation is a prototype that sends data from a sender to a receiver using LoRa technology, realising the implementation of Industrial Revolution 4.0 in the agriculture sector. The prototype uses the TTGO development board for Arduino with built-in ESP32 and LoRa, a pH sensor and a moisture level sensor as its main components, and it utilises LoRa communication between the sender and the receiver. The sensors detect soil pH along with the moisture level. The data will then be sent to the receiver, where it will be shown on the Organic Light-Emitting Diodes (OLED) display. At the same time, the data will be uploaded to the ThingSpeak database using wireless communication. Users can monitor the collected data by accessing ThingSpeak&#39;s website using smartphones or laptops. The prototype is easy to set up and use, helping users monitor the pH level and moisture level percentage. As a future enhancement, the project can be extended with temperature and tilt sensors to obtain comprehensive data about the soil&#39;s condition.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_27-IoT_Soil_Monitoring_based_on_LoRa_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Model to Develop Grammar Checker for Afaan Oromo using Morphological Analysis: A Rule-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120526</link>
        <id>10.14569/IJACSA.2021.0120526</id>
        <doi>10.14569/IJACSA.2021.0120526</doi>
        <lastModDate>2021-05-31T08:58:24.9570000+00:00</lastModDate>
        
        <creator>Jemal Abate</creator>
        
        <creator>Vijayshri Khedkar</creator>
        
        <creator>Sonali Kothari Tidke</creator>
        
        <subject>Grammar checker; spell checker; part-of-speech tag; error detection; syntactic analysis; semantic analysis; morphological analyzer; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This study implemented a rule-based approach to grammar checking by integrating a spell checker with a morphological analyzer to improve the Afaan Oromo grammar checker. A corpus containing about 300,000 words was prepared for the spell checker, and about 300 grammar rules were constructed to detect grammar errors within Afaan Oromo text and to suggest possible corrections. The developed frameworks were evaluated on a document containing 100 pairs of correct and incorrect sentences. The experimental result for checking spelling errors scored 73% recall, 76% precision, and 75% F-measure. The score for suggesting the correct spelling is 78% recall, 62% precision, and 70% F-measure, while the evaluation result for detecting grammar errors is 47% recall, 90% precision, and 68% F-measure. For suggesting the possible correct grammar on the detected errors, the system scored 61% recall, 71% precision, and 66% F-measure. Overall, the developed system performs well. However, further research is still needed to improve the Afaan Oromo grammar checker.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_26-Integrated_Model_to_Develop_Grammar_Checker.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ultra-key Space Domain for Image Encryption using Chaos-based Approach with DNA Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120525</link>
        <id>10.14569/IJACSA.2021.0120525</id>
        <doi>10.14569/IJACSA.2021.0120525</doi>
        <lastModDate>2021-05-31T08:58:24.9230000+00:00</lastModDate>
        
        <creator>Ibrahim AlBidewi</creator>
        
        <creator>Nashwan Alromema</creator>
        
        <subject>Chaos-based; image encryption; confusion; diffusion; color image; RGB components; DNA sequence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Recently, image encryption has gained importance, especially after the dramatic evolution of the Internet and network communication. The importance of securing image contents is due to the simplicity of capturing and transferring digital images over various communication media. Although there are many approaches to image encryption, the chaos-based approach is considered one of the most appropriate because of its simplicity, security, and sensitivity to input parameters. This paper presents a new technique for encrypting RGB image components using a nonlinear chaotic function and a DNA sequence. A new image with the same dimensions as the plain image is used as a key for the confusion and diffusion processes applied to each RGB component of the plain image. Experimental results show the efficiency and simplicity of the proposed technique and its high level of resistance against several cryptanalysis attacks.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_25-Ultra_key_Space_Domain_for_Image_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Contemporary Ensemble Aspect-based Opinion Mining Approach for Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120524</link>
        <id>10.14569/IJACSA.2021.0120524</id>
        <doi>10.14569/IJACSA.2021.0120524</doi>
        <lastModDate>2021-05-31T08:58:24.9100000+00:00</lastModDate>
        
        <creator>Satvika</creator>
        
        <creator>Vikas Thada</creator>
        
        <creator>Jaswinder Singh</creator>
        
        <subject>Aspect-based sentiment analysis; dependency parsing; long short-term memory (LSTM); part of speech (POS) tagging; term frequency-inverse document frequency (TF-IDF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Aspect-based opinion mining is a thought-provoking research field that focuses on extracting salient aspects from opinionated texts along with the polarity values associated with them. The principal aim is to identify user sentiments about specific features of a product or service rather than the overall polarity. This fine-grained polarity identification across the myriad aspects of an entity is highly beneficial for individuals and business organizations. Extracting these implicit or explicit aspects can be very challenging, and this paper elaborates numerous aspect extraction techniques, which are decisive for aspect-based sentiment analysis. This paper presents a novel idea of combining several approaches, such as part-of-speech tagging, dependency parsing, word embedding, and deep learning, to enrich aspect-based sentiment analysis specially designed for Twitter data. The results show that combining deep learning with traditional techniques can produce better results than lexicon-based methods.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_24-A_Contemporary_Ensemble_Aspect_based_Opinion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Data Transmission Framework for Internet of Things based on Oil Spill Detection Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120523</link>
        <id>10.14569/IJACSA.2021.0120523</id>
        <doi>10.14569/IJACSA.2021.0120523</doi>
        <lastModDate>2021-05-31T08:58:24.8930000+00:00</lastModDate>
        
        <creator>Abhijith H V</creator>
        
        <creator>H S Rameshbabu</creator>
        
        <subject>Internet of things; wireless sensor networks; sensor nodes; data aggregation; authentication; light weight cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Internet of Things (IoT) is a leading technology that can link anything to the Internet and make everything intelligent and smart. IoT is not a single technology but a combination of various technologies such as communication, data analytics, sensors and actuators, cloud computing, artificial intelligence, and machine learning. Applications of IoT span various domains, and IoT is especially suitable for remote applications like underwater networks. One such application is oil spill detection in the ocean. An oil spill in the ocean is a critical challenge that damages the marine ecosystem, and detecting oil spills in real time helps to resolve the problem quickly and minimize the damage. IoT can be used to detect oil spills by making use of sensors deployed at various locations in the ocean. With a massive number of sensors deployed and the huge amount of data associated with them, there remain concerns about data management. Also, the amount of data generated in an IoT-based remote sensing network is usually too enormous for the servers to process, and the generated data are often redundant. Hence there is a need for a framework that addresses both the aggregation of data and security-related issues at various aggregation points. In this paper, we propose a secure data transmission framework for detecting oil spills through IoT, which avoids redundant data transmission through data aggregation and ensures secure data transmission through authentication and lightweight encryption.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_23-Secure_Data_Transmission_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Usability Testing of a Consultation System for Diabetic Retinopathy Screening</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120522</link>
        <id>10.14569/IJACSA.2021.0120522</id>
        <doi>10.14569/IJACSA.2021.0120522</doi>
        <lastModDate>2021-05-31T08:58:24.8770000+00:00</lastModDate>
        
        <creator>Nurul Najihah A’bas</creator>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Mohamad Lutfi Dolhalit</creator>
        
        <creator>Wan Sazli Nasarudin Saifudin</creator>
        
        <creator>Nazreen Abdullasim</creator>
        
        <creator>Shahril Parumo</creator>
        
        <creator>Raja Norliza Raja Omar</creator>
        
        <creator>Siti Zakiah Md Khair</creator>
        
        <creator>Khavigpriyaa Kalaichelvam</creator>
        
        <creator>Syazwan Izzat Noor Izhar</creator>
        
        <subject>Consultation; diabetic retinopathy; eye screening; image editing; image processing; web development; testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>This study aims to develop a novel web-based decision support system for diabetic retinopathy screening and classification of eye fundus images for medical officers. The research delivers diabetic retinopathy information in a web-based environment according to the needs of the users. The proposed research also intends to evaluate the usability of the developed system with the target users. The complex characteristics of diabetic retinopathy signs contribute to the difficulty in detecting diabetic retinopathy; therefore, professional and skilled retinal screeners are required to produce accurate diabetic retinopathy detection and diagnosis. The proposed system assists communication and consultation between the medical experts in the hospital and the primary health care centers located at the health clinics. The agile software development model is the methodology used for the development of this research project. The project collaborates with the Department of Ophthalmology, Hospital Melaka, Malaysia for medical content expertise and testing. Representative medical officers from Hospital Melaka and all the public health clinics in Melaka were involved in the preliminary study and system testing. This research consists of a web development effort producing an interactive web-based application for diabetic retinopathy consultation, which comprises image processing and editing features as the core of the system. It is envisaged that this research project will contribute to the management of diabetic retinopathy screening among medical officers.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_22-Development_and_Usability_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GRASP Combined with ILS for the Vehicle Routing Problem with Time Windows, Precedence, Synchronization and Lunch Break Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120521</link>
        <id>10.14569/IJACSA.2021.0120521</id>
        <doi>10.14569/IJACSA.2021.0120521</doi>
        <lastModDate>2021-05-31T08:58:24.8470000+00:00</lastModDate>
        
        <creator>Ettazi Haitam</creator>
        
        <creator>Rafalia Najat</creator>
        
        <creator>Jaafar Abouchabaka</creator>
        
        <subject>Optimization; VRP; home health care; ILS; tabu search; metaheuristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In this era of pandemic, especially with COVID-19, many hospitals and care structures are at full capacity in terms of bed availability. This problem makes it necessary to provide specific care to people in need, whether through illness or disability, in their own homes. Home Health Care (HHC) offers this kind of service to patients who request it. These services have to be performed at the request of the patient, who is effectively the client, in a way that satisfies the requester of the service. Often, these demands are bound to a specific time window that the workforce (caregivers) is obligated to respect, in addition to precedence (priority) constraints. The main purpose of HHC structures is to provide a service of good quality while minimizing overall costs and reducing losses. To reduce the costs of these HHC structures, sensible and logical means must be found; since it is not permissible to touch caregiver salaries, HHC structures are obliged to optimize by other means, such as reducing travel costs. Note that these structures deliver care in patients&#39; homes, so the travel aspect is important and constitutes the core spending of the institution. Another factor is patient satisfaction with caregivers, an essential element to optimize in order to obtain a good-quality service; to give the problem a realistic aspect, the caregivers&#39; lunch break is introduced as a parameter. For these reasons, designing an efficient caregiver schedule involves decision tools and optimization methods.
A caregiver (vehicle) is assigned to a patient (customer) to perform a number of care tasks with several options according to the customer&#39;s wishes, such as time window requirements often specified by the client; precedence (priority) constraints apply when one care task must be performed before another and may require the intervention of more than one caregiver; and each caregiver must have at least one lunch break a day, which is not always taken at a set time and must remain flexible to optimize customer satisfaction. To solve this problem, called VRPTW-SPLB, a mathematical model is proposed and formulated as a Mixed Integer Linear Program (MILP), along with a greedy heuristic based on a Greedy Randomized Adaptive Search Procedure (GRASP), two local-search strategies, two metaheuristics, and a metaheuristic resulting from a hybridization of the two. At the end of the paper, results are reported on a benchmark taken from the literature.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_21-GRASP_Combined_with_ILS_for_the_Vehicle_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GAAR: Gross Anatomy using Augmented Reality Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120520</link>
        <id>10.14569/IJACSA.2021.0120520</id>
        <doi>10.14569/IJACSA.2021.0120520</doi>
        <lastModDate>2021-05-31T08:58:24.8300000+00:00</lastModDate>
        
        <creator>Wan Aezwani Wan Abu Bakar</creator>
        
        <creator>Mustafa Man</creator>
        
        <creator>Mohd Airil Solehan</creator>
        
        <creator>Ily Amalina Ahmad Sabri</creator>
        
        <subject>Augmented reality; gross anatomy; learning tool; android mobile application; 3D human anatomy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The Covid-19 pandemic has forced teaching and learning activities out of real-time, real-world educational meetings. Traditional physical, face-to-face meetings are avoided in order to reduce close physical contact among individuals, so a new paradigm shift in teaching and learning needs to be strongly enforced. Teaching and learning in the medical field especially require real-world anatomy of the human or living body. To provide such a facility for medical teachers and learners, Gross Anatomy Augmented Reality (GAAR) is introduced. GAAR is an Android mobile Augmented Reality (AR) learning tool that assists educators and learners in internalizing 3D human anatomy with more fun and interactivity. The AR methodology is applied to engage personal impressions and feelings of operating on a close-to-&#39;real&#39; organ during anatomy practice. Traditional learning methods are replaced with AR technology on a small digital device. This application can show students the actual form of human gross anatomy and assist teachers and educators in explaining the science behind the human body in a more interactive and interesting way. Furthermore, this application uses 3-dimensional objects, video and interactive information so that students are interested in using it. AR for education and learning is vital in bridging the digital divide among all generations through the conversion of static pictures into realistic 3D animation. The implementation results show that, through real visualization, learners from children to adults can grasp the reality of human organs, which can motivate them to take care of their bodies, leading to healthier lifestyles as well as easier memorization of the subject content.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_20-GAAR_Gross_Anatomy_using_Augmented_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigative Study of the Effect of Various Activation Functions with Stacked Autoencoder for Dimension Reduction of NIDS using SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120519</link>
        <id>10.14569/IJACSA.2021.0120519</id>
        <doi>10.14569/IJACSA.2021.0120519</doi>
        <lastModDate>2021-05-31T08:58:24.7670000+00:00</lastModDate>
        
        <creator>Nirmalajyothi Narisetty</creator>
        
        <creator>Gangadhara Rao Kancherla</creator>
        
        <creator>Basaveswararao Bobba</creator>
        
        <creator>K.Swathi</creator>
        
        <subject>Auto-encoder; cloud computing; dimension reduction; intrusion detection system; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Deep learning is one of the most remarkable artificial intelligence trends. It lies behind numerous recent achievements in various domains, such as speech processing and computer vision, to mention a few. Likewise, these achievements have sparked great interest in utilizing deep learning for dimension reduction. It is known that deep learning algorithms built on neural networks contain a number of hidden layers, activation functions and optimizers, which can make the computation of a deep neural network challenging and, sometimes, complex. The reason for this complexity is that obtaining an outstanding and consistent result from such a deep architecture requires identifying the number of hidden layers and a suitable activation function for dimension reduction. To investigate the aforementioned issues, linear and non-linear activation functions are chosen for dimension reduction using a Stacked Autoencoder (SAE) applied to Network Intrusion Detection Systems (NIDS). To conduct the experiments for this study, various activation functions such as linear, Leaky ReLU, ELU, Tanh, sigmoid and softplus were selected for the hidden and output layers. The Adam optimizer and the Mean Square Error loss function are adopted for optimizing the learning process. An SVM-RBF classifier is applied to assess the classification accuracies of these activation functions using the CICIDS2017 dataset, because it contains contemporary attacks on cloud environments. Performance metrics such as accuracy, precision, recall and F-measure are evaluated, and alongside these, classification time is considered an important metric. Finally, it is concluded that ELU performs with low computational overhead and a negligible difference in accuracy, i.e., 97.33%, when compared to the other activation functions.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_19-Investigative_Study_of_the_Effect_of_Various_Activation_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Data Aggregation Framework for Resource Constrained Remote Internet of Things Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120518</link>
        <id>10.14569/IJACSA.2021.0120518</id>
        <doi>10.14569/IJACSA.2021.0120518</doi>
        <lastModDate>2021-05-31T08:58:24.7530000+00:00</lastModDate>
        
        <creator>Abhijith H V</creator>
        
        <creator>H S Ramesh Babu</creator>
        
        <subject>Wireless sensor networks; Internet of Things; intelligent boundary determination; sensor nodes; data aggregation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Internet of Things (IoT) is a technology that can connect everything to the Internet. IoT can be used in a wide range of applications, including remote applications like underwater networks. Remote applications involve the deployment of several low-power, low-cost interconnected sensor nodes in a specific region. With a massive number of devices connected to the IoT and the considerable amount of data associated with them, there remain concerns about data management. Also, the amount of data generated in an extensive IoT-based remote sensing network is usually too enormous for the servers to process, and the generated data are often redundant. Hence there is a need for a framework that addresses both the aggregation of data and security-related issues at various aggregation points. In this paper, we propose an intelligent data aggregation mechanism for IoT-based remote sensing networks. This method avoids redundant data transmission by adopting spatial aggregation techniques. The proposed method was tested through simulations, and the results prove the efficiency of the proposed work.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_18-Intelligent_Data_Aggregation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting DOS-DDOS Attacks: Review and Evaluation Study of Feature Selection Methods based on Wrapper Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120517</link>
        <id>10.14569/IJACSA.2021.0120517</id>
        <doi>10.14569/IJACSA.2021.0120517</doi>
        <lastModDate>2021-05-31T08:58:24.7200000+00:00</lastModDate>
        
        <creator>Kawtar BOUZOUBAA</creator>
        
        <creator>Youssef TAHER</creator>
        
        <creator>Benayad NSIRI</creator>
        
        <subject>DOS-DDOS attacks; feature selection; wrapper process; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Nowadays, cybersecurity attacks are becoming increasingly sophisticated and present a growing threat to individuals and the private and public sectors, especially the Denial of Service (DOS) attack and its variant, the Distributed Denial of Service (DDOS) attack. Dealing with these dangerous threats using traditional mitigation solutions suffers from several limits and performance issues. To overcome these limitations, Machine Learning (ML) has become one of the key techniques to enrich, complement and enhance traditional security approaches. In this context, we focus on one of the key processes that improve and optimize ML DOS-DDOS prediction models: the DOS-DDOS feature selection process, particularly the wrapper process. By studying different DOS-DDOS datasets, algorithms and results from several research projects, we have reviewed and evaluated the impact of the wrapper strategies used, the number of DOS-DDOS features, and many commonly used metrics for evaluating DOS-DDOS prediction models based on the optimized DOS-DDOS features. In this paper, we present three important dashboards that are essential to understanding the performance of three wrapper strategies commonly used in DOS-DDOS ML systems: heuristic search algorithms, meta-heuristic search, and random search methods. Based on this review and evaluation study, wrapper strategies, algorithms, and DOS-DDOS features with a relevant impact can be selected to improve existing DOS-DDOS ML solutions.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_17-Predicting_DOS_DDOS_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Factors Associated with the Social Discrimination Experience of Children from Multicultural Families in South Korea by using Stacking with Non-linear Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120516</link>
        <id>10.14569/IJACSA.2021.0120516</id>
        <doi>10.14569/IJACSA.2021.0120516</doi>
        <lastModDate>2021-05-31T08:58:24.6900000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Stacking ensemble; meta model; root-mean-square-error; index of agreement; rotation forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The number of children from multicultural families is increasing rapidly along with the quickly growing number of multicultural families. However, there are not enough surveys and basic research for understanding the characteristics of multicultural children and issues such as social discrimination. This study identified the machine learning model with the best performance for predicting the social discrimination experience of children from multicultural families by comparing the prediction performance (accuracy) of individual prediction models and stacking ensemble models. This study analyzed 19,431 adolescents (between 19 and 24 years old: 9,835 males and 9,596 females) among the children of marriage immigrants. This study used random forest (RF), rotation forest, artificial neural network (ANN), and support vector machine (SVM) for the base models. A logistic regression algorithm was applied for the meta model. Each machine learning model was built through 5-fold cross-validation. Root-mean-square error (RMSE), index of agreement (IA), and variance of errors (Ev) were used to evaluate the prediction performance of the developed models. The results of this study indicated that the rotation forest-logistic regression model had the best prediction performance. Future studies should explore stacking ensemble models with the best performance by combining a base model and a meta model using various machine learning algorithms such as clustering and boosting.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_16-Exploring_Factors_Associated_with_the_Social_Discrimination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Contrast Optimization using Local Color Correction and Fuzzy Intensification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120515</link>
        <id>10.14569/IJACSA.2021.0120515</id>
        <doi>10.14569/IJACSA.2021.0120515</doi>
        <lastModDate>2021-05-31T08:58:24.6730000+00:00</lastModDate>
        
        <creator>Avadhesh Kumar Dixit</creator>
        
        <creator>Rakesh Kumar Yadav</creator>
        
        <creator>Ramapati Mishra</creator>
        
        <subject>Contrast enhancement; local color correction; fuzzy operators; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Global image enhancement techniques are used to enhance contrast in images, but these techniques tend to under-enhance or over-enhance differently illuminated regions of the image. Local color correction methods work on local pixel regions to optimize color contrast enhancement, but they have been found to lag when covering overexposed pixel regions compared to underexposed ones, causing local artifacts. In this work, we overcome the shortcomings of both local color correction and global color correction. This method uses local color correction in the Hue Saturation Luminance (HSL) domain, and fuzzy intensification operators are used to control the color fidelity of the local color corrected images. Thus, it is able to resolve the problem of overexposed and underexposed regions and provide optimized contrast enhancement in colored images. Several experiments have been performed to analyze the performance and feasibility of the proposed method compared to existing techniques. Performance parameters such as Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measurement (SSIM), and Naturalness Image Quality Evaluator (NIQE) are evaluated, and a comparison with some existing techniques for contrast enhancement of color images is performed. The obtained results have good contrast and confirm the better performance of the proposed method in terms of the quantitative measures, the perceptual appearance of the processed images, and low computational time.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_15-Image_Contrast_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Parameter Estimation of DC-DC Converter through OPC Communication Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120514</link>
        <id>10.14569/IJACSA.2021.0120514</id>
        <doi>10.14569/IJACSA.2021.0120514</doi>
        <lastModDate>2021-05-31T08:58:24.6430000+00:00</lastModDate>
        
        <creator>Mohammad A Obeidat</creator>
        
        <creator>Malek Al Anani</creator>
        
        <creator>Ayman M Mansour</creator>
        
        <subject>Online parameters estimation; Open Platform Communication; OPC; communication channel; ARMAX model (autoregressive-moving average with exogenous terms); DC-DC Converter; chopper circuit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>System identification is a very powerful tool for determining a system model and its parameters from sets of observable input and output data. Once the system parameters are obtained, the system dynamic behavior, including all the system characteristics (time constant, overshoot, settling time, etc.), can be accessed and evaluated. Despite the difficulty and communication channel lag, online parameter estimation outperforms offline system identification due to the ability to remotely monitor and control the system as well as improve the system&#39;s controller, making it more accurate and reliable. With the extreme development in technology, the importance of combining wireless networks with closed automatic control systems has emerged. This connection facilitates communication between the different units in the control loop for remote control of the output. However, some errors affect such a system, resulting from the communication channel, the A/D and D/A conversion processes, the identification process, or the existence of adaptive weight Gaussian noise. In this paper, these errors were investigated using a real system, and then a suitable controller was tuned and optimized in order to reduce and eliminate the various errors. The results show excellent dynamic behavior of the system during the transmitting and receiving process.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_14-Online_Parameter_Estimation_of_DC_DC_Converter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of Automatic Text Summarization of News Articles: The Case of Three Online Arabic Text Summary Generators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120513</link>
        <id>10.14569/IJACSA.2021.0120513</id>
        <doi>10.14569/IJACSA.2021.0120513</doi>
        <lastModDate>2021-05-31T08:58:24.6130000+00:00</lastModDate>
        
        <creator>Fahad M. Alliheibi</creator>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Nasser Al-Horais</creator>
        
        <subject>AlSummarizer; Arabic; automatic summarization; discourse markers; extraction; LAKHASLY; news articles; RESOOMER; sentence relevance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Digital news platforms and online newspapers have multiplied at an unprecedented speed, making it difficult for users to read and follow all news articles on important, relevant topics. Numerous automatic text summarization systems have thus been developed to address the increasing needs of users around the world for summaries that reduce reading and processing time. Various automatic summarization systems have been developed and/or adapted for Arabic. The evaluation of automatic summarization performance is as important as the summarization process itself. Despite the importance of assessing summarization systems to identify potential limitations and improve their performance, very little has been done in this respect for systems in Arabic. Therefore, this study evaluated three text summarizers (AlSummarizer, LAKHASLY, and RESOOMER) using a corpus of 40 news articles. Only articles written in Modern Standard Arabic (MSA) were selected, as this is the formal and working language of Arab newspapers and news networks. Three expert examiners generated manual summaries and examined the linguistic consistency and relevance of the automatic summaries to the original news articles by comparing the automatic summaries to the manual (human) summaries. The scores for the three automatic summarizers were very similar and indicated that their performance was not satisfactory. In particular, the automatic summaries had serious problems with sentence relevance, which has negative implications for the reliability of such systems. The poor performance of Arabic summarizers can mainly be attributed to the unique morphological and syntactic characteristics of Arabic, which differ in many ways from English and other Western languages (the original languages of automatic summarizers), and are critical in building sentence relevance and coherence in Arabic. Thus, summarization systems should be trained to identify discourse markers within the texts and use these in the generation of automatic summaries. This will have a positive impact on the quality and reliability of text summarization systems. Arabic summarization systems need to incorporate semantic approaches to improve performance and construct more coherent and meaningful summaries. This study was limited to news articles in MSA. However, the findings of the study and their implications can be extended to other genres, including academic articles.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_13-An_Evaluation_of_Automatic_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bird’s Eye View of Natural Language Processing and Requirements Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120512</link>
        <id>10.14569/IJACSA.2021.0120512</id>
        <doi>10.14569/IJACSA.2021.0120512</doi>
        <lastModDate>2021-05-31T08:58:24.5670000+00:00</lastModDate>
        
        <creator>Assad Alzayed</creator>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <subject>Automated text understanding; natural language processing; requirements engineering; requirements elicitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Natural Language Processing (NLP) has demonstrated effectiveness in many application domains. NLP can assist software engineering by automating various activities. This paper examines the interaction between software requirements engineering (RE) and NLP. We reviewed the current literature to evaluate how NLP supports RE and to examine research developments. This literature review indicates that NLP is being employed in all the phases of the RE domain. This paper focuses on the phases of elicitation and the analysis of requirements. RE communication issues are primarily associated with the elicitation and analysis phases of the requirements. These issues include ambiguity, inconsistency, and incompleteness. Many of these problems stem from a lack of participation by the stakeholders in both phases. Thus, we address the application of NLP during the process of requirements elicitation and analysis. We discuss the limitations of NLP in these two phases. Potential future directions for the domain are examined. This paper asserts that human involvement with knowledge about the domain and the specific project is still needed in the RE process despite progress in the development of NLP systems.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_12-A_Birds_Eye_View_of_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reversible Data Hiding using Block-wise Histogram Shifting and Run-length Encoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120511</link>
        <id>10.14569/IJACSA.2021.0120511</id>
        <doi>10.14569/IJACSA.2021.0120511</doi>
        <lastModDate>2021-05-31T08:58:24.5330000+00:00</lastModDate>
        
        <creator>Kandala Sree Rama Murthy</creator>
        
        <creator>V. M. Manikandan</creator>
        
        <subject>Histogram shifting; run-length encoding; secure message transmission; overflow; Elias gamma</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Histogram shifting-based Reversible Data Hiding (RDH) is a well-explored information security domain for secure message transmission. In this paper, we propose a novel RDH scheme that considers the block-wise histograms of the image. Most of the existing histogram shifting techniques carry additional overhead information to recover the overflow and/or underflow pixels. In the new scheme, the meta-data required for a block is embedded within the same block in such a way that the receiver can perform image recovery and data extraction. As per the proposed data hiding process, not all blocks need to be used for data hiding, so we use marker information to distinguish between the blocks that are used to hide data and those that are not. Since the marker information needs to be embedded within the image, we compress it using run-length encoding. The run-length encoded sequence is represented by an Elias gamma encoding procedure. The compression of the marker information ensures a better Embedding Rate (ER) for the proposed scheme. The proposed RDH scheme will also be useful for secure message transmission where restoration of the cover image is a concern. The proposed scheme&#39;s experimental analysis was conducted on the USC-SIPI image dataset maintained by the University of Southern California, and the results show that the proposed scheme performs better than the existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_11-Reversible_Data_Hiding_using_Block_wise_Histogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spiritual User Experience (iSUX) for Older Adult Users using Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120510</link>
        <id>10.14569/IJACSA.2021.0120510</id>
        <doi>10.14569/IJACSA.2021.0120510</doi>
        <lastModDate>2021-05-31T08:58:24.5170000+00:00</lastModDate>
        
        <creator>Nahdatul Akma Ahmad</creator>
        
        <creator>Zirawani Baharum</creator>
        
        <creator>Azaliza Zainal</creator>
        
        <creator>Fariza Hanis Abdul Razak</creator>
        
        <creator>Wan Adilah Wan Adnan</creator>
        
        <subject>Techno-spiritual; user experience; human computer interaction; Geneva emotional musical scale; 3e diary; older people</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The increasing number of aging populations worldwide, set against vast developments in mobile technology, raises questions about how older adults adapt to and apply mobile technology in their daily lives. This research focused on spiritual user experience for older adult users because older adults are claimed to become more spiritually inclined as they age. Despite high-profile calls for research in the area of spirituality, research pertaining to spirituality in HCI is still in its infancy. Recent literature shows that most studies focus on design for spiritual user experience and evaluation of spiritual applications for adult users, but work on the fundamentals of spirituality and its elements from the view of user experience is limited. Therefore, this study employs a qualitative method approach within an interpretive paradigm to propose a model for Spiritual User Experience from the perspective of Islamic older adult users. The Geneva Emotional Musical Scale (GEMS) was adopted as a theoretical lens in order to gain deeper insights into the spirituality elements. A single case study was conducted with a total of 11 participants to investigate the spiritual user experience elements among older adults. A triangulation of qualitative data collection through a 3E diary, interviews, and observations was conducted. All data were analyzed verbatim using thematic analysis. Six themes emerged from the analysis: effectiveness, efficiency, learnability, satisfaction, sublimity, and vitality. These themes are further categorized into 10 attributes: effectiveness (accessibility features), efficiency (simplicity and portability), learnability, satisfaction (attractiveness and reliability), sublimity (transcendence and peacefulness), and vitality (energy and joyful activation). These are embedded into a model known as Spiritual User Experience (iSUX), which was evaluated by Islamic religious experts, a user experience expert, and older adults’ representatives. This model could serve as a reference for developers of spiritual apps and provide understanding for researchers in the HCI domain. In conclusion, the Spiritual User Experience (iSUX) model is hoped to increase the understanding of spirituality from the domain of user experience.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_10-Spiritual_User_Experience_iSUX.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creativity Training Model for Game Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120509</link>
        <id>10.14569/IJACSA.2021.0120509</id>
        <doi>10.14569/IJACSA.2021.0120509</doi>
        <lastModDate>2021-05-31T08:58:24.4870000+00:00</lastModDate>
        
        <creator>Raudyah Md Tap</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Hafiz Mohd Sarim</creator>
        
        <creator>Norizan Mat Diah</creator>
        
        <subject>Creativity training; game design; creative ideas; creative thinking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The popularity of digital games is increasing, with a global market value of RM197.6 billion. However, games produced locally still have no impact. One reason is that there is no emphasis on the game design process in game development education programs. Designed games have problems in terms of creativity, and there is still no specific method for training creative thinking. This study aims to identify and validate the creativity components of game design and develop a Creativity Training Model for Game Design (LK2RBPD Model), verified through the Game Design Document Tool (GDD Tool) prototype. This research has four main phases: requirements planning, design, development, and implementation and testing. In the requirements analysis phase, the components of the LK2RBPD Model were identified. The LK2RBPD Model contains elements from industry practices of game designing, creative and innovative thinking skills, creativity dimensions, Sternberg Creativity, and Cultural Activity theories. The GDD Tool prototype implementing the model was developed and tested. The LK2RBPD Model was evaluated using a questionnaire survey, SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, and verification of ideas in the GDD Tool prototype. Evaluation using a five-point Likert scale shows that the GDD Tool prototype is effective in implementing the 19 components. Expert verification of the game design ideas and creativity building using Cohen Kappa calculations yielded 0.94, indicating excellent agreement. The results show that the LK2RBPD Model can be effectively used to train creativity in game design. This research&#39;s contributions are the LK2RBPD Model, a creative game design ideation process guideline, and the GDD Tool prototype design.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_9-Creativity_Training_Model_for_Game_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Propose Vulnerability Metrics to Measure Network Secure using Attack Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120508</link>
        <id>10.14569/IJACSA.2021.0120508</id>
        <doi>10.14569/IJACSA.2021.0120508</doi>
        <lastModDate>2021-05-31T08:58:24.4570000+00:00</lastModDate>
        
        <creator>Zaid. J. Al-Araji</creator>
        
        <creator>Sharifah Sakinah Syed Ahmad</creator>
        
        <creator>Raihana Syahirah Abdullah</creator>
        
        <subject>Attack graph; security metrics; attack path; path analysis; attack graph uses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>With the increase in the use of computer networking, security risk has also increased. To protect networks from attacks, attack graphs have been used to analyze network vulnerabilities. However, properly securing networks requires quantifying the level of security offered by these actions, as you cannot enhance what you cannot measure. Security metrics provide a qualitative and quantitative representation of a system&#39;s or network&#39;s security level. However, using existing security metrics can lead to misleading results. This work proposed three metrics, which are the Number of Vulnerabilities (NV), the Mean Vulnerabilities on Path (MVoP), and the Weakest Path (WP). The experiments in this work used two networks to test the metrics. The results show the effect of these metrics on finding the weaknesses of the network that an attacker may use.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_8-Propose_Vulnerability_Metrics_to_Measure_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Factors Associated with Subjective Health of Older-Old using ReLU Deep Neural Network and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120507</link>
        <id>10.14569/IJACSA.2021.0120507</id>
        <doi>10.14569/IJACSA.2021.0120507</doi>
        <lastModDate>2021-05-31T08:58:24.4230000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Gradient boosting machine; classification and regression trees; Naive Bayes model; deep learning; subjective health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Resolving the health issues of the elderly has emerged as an important task in current society. This study developed models that could predict the subjective health of the older-old based on a gradient boosting machine (GBM), a naive Bayes model, classification and regression trees (CART), a deep neural network, and a random forest by using health survey data of the elderly, and compared the prediction performance (i.e., accuracy, sensitivity, specificity) of the models. This study analyzed 851 older-old people (≥75 years old) who resided in the community. This study compared the accuracy, sensitivity, and specificity of the developed models to evaluate their prediction performance, and conducted 5-fold cross-validation to validate the developed models. The results of this study showed that the deep neural network, with an accuracy of 0.75, a sensitivity of 0.73, and a specificity of 0.81, was the model with the best prediction performance. The normalized importance of variables derived from the deep neural network analysis showed that depression, subjective stress recognition, the number of accompanying chronic diseases, subjective oral conditions, and the number of days walking more than 30 minutes were major predictors of the subjective health of the older-old. Further studies are needed to identify factors associated with the subjective health of the older-old while considering age-period-cohort effects.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_7-Exploring_Factors_Associated_with_Subjective_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Machine Learning Techniques for Coronary Heart Disease Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120505</link>
        <id>10.14569/IJACSA.2021.0120505</id>
        <doi>10.14569/IJACSA.2021.0120505</doi>
        <lastModDate>2021-05-31T08:58:24.4100000+00:00</lastModDate>
        
        <creator>Hisham Khdair</creator>
        
        <creator>Naga M Dasari</creator>
        
        <subject>Coronary heart disease; machine learning; prediction; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events from clinical data was performed. Four machine learning classifiers, namely Logistic Regression, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Multi-Layer Perceptron (MLP) Neural Networks, were identified and applied to a dataset of 462 medical instances with 9 features as well as the class feature from the South African Heart Disease data retrieved from the KEEL repository. The dataset consists of 302 records of healthy patients and 160 records of patients who suffer from CHD. In order to handle the imbalanced classification problem, the K-means algorithm along with the Synthetic Minority Oversampling TEchnique (SMOTE) was used in this study. The empirical results of applying the four machine learning classifiers to the oversampled dataset have been very promising. The results, reported using different evaluation metrics, showed that SVM achieved the highest overall prediction performance.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_5-Exploring_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Specification-based Intrusion Detection Techniques for Cyber-Physical Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120506</link>
        <id>10.14569/IJACSA.2021.0120506</id>
        <doi>10.14569/IJACSA.2021.0120506</doi>
        <lastModDate>2021-05-31T08:58:24.4100000+00:00</lastModDate>
        
        <creator>Livinus Obiora Nweke</creator>
        
        <subject>Cyber-physical systems; intrusion detection systems; specification-based intrusion detection mechanism; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Cyber-physical systems (CPS) integrate computation and communication capabilities to monitor and control physical systems. Even though this integration improves the performance of the overall system and facilitates the application of CPS in several domains, it also introduces security challenges. Over the years, intrusion detection systems (IDS) have been deployed as one of the security controls for addressing these security challenges. Traditionally, there are three main approaches to IDS, namely: anomaly detection, misuse detection, and specification-based detection. However, due to the unique attributes of CPS, traditional IDS need to be modified or completely replaced before they can be deployed for CPS. In this paper, we present a survey of specification-based intrusion detection techniques for CPS. We classify the existing specification-based intrusion detection techniques in the literature according to the following attributes: specification source, specification extraction, specification modelling, detection mechanism, detector placement, and validation strategy. We also discuss the details of each attribute and describe our observations, concerns, and future research directions. We argue that reducing the effort and time needed to extract the system specification in specification-based intrusion detection techniques for CPS, and verifying the correctness of the extracted system specification, are open issues that must be addressed in the future.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_6-A_Survey_of_Specification_based_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bluetooth-based WKNNPF and WKNNEKF Indoor Positioning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120504</link>
        <id>10.14569/IJACSA.2021.0120504</id>
        <doi>10.14569/IJACSA.2021.0120504</doi>
        <lastModDate>2021-05-31T08:58:24.3630000+00:00</lastModDate>
        
        <creator>Sokliep Pheng</creator>
        
        <creator>Ji Li</creator>
        
        <creator>Luo Xiaonan</creator>
        
        <creator>Yanru Zhong</creator>
        
        <subject>Indoor Positioning System (IPS); Bluetooth low energy; WLAN; RSSI; WKNNPF; WKNNEKF; KNN; WKNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>An Indoor Positioning System (IPS) generally operates as a network of devices that wirelessly locate objects or people inside a building. An IPS may rely on nearby anchors or be entirely local to a smartphone. With the rapid growth and sharp increase in demand for indoor positioning worldwide, many researchers are trying to invent new algorithms to develop IPS. This paper proposes a Bluetooth-based indoor positioning algorithm. A fingerprinting system based on RF characteristics such as RSSI and WLAN RSSI is normally formed by two phases: the first is the offline phase and the second is the online phase. The fingerprinting system handles both offline and online data and estimates the user’s location. Our algorithm design combines Weighted K-Nearest Neighbors (WKNN) with filtering algorithms based on the Kalman filter. Finally, to avoid the problems of IPS and obtain better accuracy, we propose two algorithms, the Weighted K-Nearest Neighbors Particle Filter (WKNNPF) and the Weighted K-Nearest Neighbors Extended Kalman Filter (WKNNEKF), and compare them to the KNN and WKNN results. The comparison shows that WKNNPF and WKNNEKF yield better results than KNN and WKNN. The probability within 3 m for WKNN is about 79%, for WKNNEKF about 89%, and for WKNNPF about 95.1%. Among the proposed algorithms, WKNNPF is better than WKNNEKF, with an accuracy of 1.7-2 meters and a 42.2m/s response time.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_4-Bluetooth_based_WKNNPF_and_WKNNEKF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Matters of Neural Network Repository Designing for Analyzing and Predicting of Spatial Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120503</link>
        <id>10.14569/IJACSA.2021.0120503</id>
        <doi>10.14569/IJACSA.2021.0120503</doi>
        <lastModDate>2021-05-31T08:58:24.3470000+00:00</lastModDate>
        
        <creator>Stanislav A. Yamashkin</creator>
        
        <creator>Anatoliy A. Yamashkin</creator>
        
        <creator>Ekaterina O. Yamashkina</creator>
        
        <creator>Anastasiya A. Kamaeva</creator>
        
        <subject>Repository; deep learning; artificial neural network; spatial data; visual programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>The article is devoted to solving the scientific problem of accumulating and systematizing machine learning models and algorithms by developing a repository of deep neural network models for analyzing and predicting spatial processes, in order to support managerial decision-making in ensuring conditions for the sustainable development of regions. The issues of architecture development and software implementation of such a repository are considered on the basis of a new ontological model, which makes it possible to systematize models in terms of their application to design problems. The ontological model of the repository is decomposed into the domains of deep machine learning models, problems being solved, and data. Special attention is paid to storing data in the repository and to developing a subsystem for visualizing neural networks using a graph model. The authors show that, to organize a repository of deep neural network models, it is advisable to use a scientifically grounded set of database management systems integrated into a multi-model storage that combines relational and NoSQL storage.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_3-Matters_of_Neural_Network_Repository_Designing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Wearable Heart Sound Collection Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120502</link>
        <id>10.14569/IJACSA.2021.0120502</id>
        <doi>10.14569/IJACSA.2021.0120502</doi>
        <lastModDate>2021-05-31T08:58:24.3170000+00:00</lastModDate>
        
        <creator>Ximing HUAI</creator>
        
        <creator>Shota Notsu</creator>
        
        <creator>Dongeun Choi</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Cardiovascular diseases; wearable devices; heart sound collection; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>In recent years, the mortality rate of cardiovascular diseases, including among the younger generation, has attracted public attention. At the same time, there is an increasing demand for devices that can monitor the physiological parameters of the heart. In this research, a wearable device was designed and developed for heart sound collection. Microphones wrapped in urethane resin holders were fixed directly on a vest for heart sound collection. The device received many positive reviews in terms of comfort. The cumulative contribution rate of the two common factors (a material factor and a clothing design factor) obtained through factor analysis was 75.371%; these were the main factors affecting the experience of using the device. Finally, the heart sounds of 11 healthy young people were collected and input into the completed convolutional neural network for detection, yielding an accuracy of 71.3%. It can therefore be concluded that the device improves the user experience and performs well for heart sound collection and detection.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_2-Development_of_Wearable_Heart_Sound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rain Attenuation in 5G Wireless Broadband Backhaul Link and Develop (IoT) Rainfall Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120501</link>
        <id>10.14569/IJACSA.2021.0120501</id>
        <doi>10.14569/IJACSA.2021.0120501</doi>
        <lastModDate>2021-05-31T08:58:24.2370000+00:00</lastModDate>
        
        <creator>Konstantinos Zarkadas</creator>
        
        <creator>George Dimitrakopoulos</creator>
        
        <subject>Rain attenuation; Internet of Things; wireless broadband</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(5), 2021</description>
        <description>Climate change is causing more frequent and intense rainfall, which affects wireless communications by severely weakening the power of the emitted signal. These losses reduce network coverage and, therefore, system availability. The proposed solution is to integrate an Internet of Things (IoT) rainfall monitoring system able to collect real-time data on the amount of rain falling in a particular place. These data will help determine where base stations should be installed and whether link distances need to be changed to reduce the harmful effects of rainfall. The prediction of attenuation due to rain is thus an essential parameter in both terrestrial and satellite links. The present study uses the ITU-R P.838 and ITU-R P.530 models to theoretically calculate losses in a 5G wireless broadband link with 99.9% link availability. The study considers three frequency bands, 24 GHz, 28 GHz, and 38 GHz, in Palo Alto, California. The link distance is 5 km, while the rainfall rate for the analyzed area falls in zone D. The results show that the attenuation increases with the frequency and rainfall rate and depends on the polarization.</description>
        <description>http://thesai.org/Downloads/Volume12No5/Paper_1-Rain_Attenuation_in_5G_Wireless_Broadband.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Aspects of Electronic Health Records and Possible Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120496</link>
        <id>10.14569/IJACSA.2021.0120496</id>
        <doi>10.14569/IJACSA.2021.0120496</doi>
        <lastModDate>2021-05-01T16:56:39.5630000+00:00</lastModDate>
        
        <creator>Prashant Vilas Kanade</creator>
        
        <creator>Arun Kumar</creator>
        
        <subject>Patient history; cryptography; blockchain; timestamp based record; IPFS; electronic health records</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Maintaining a person’s health-related information in a systematic format using information and communication technology is clearly needed. Storing patient information according to government guidelines will help achieve the concept of one person, one record. There is also a need to share personal health records whenever necessary. If a patient’s record (history) is readily available, it will help in making correct decisions about the patient’s treatment. In our country (India), the Ministry of Health and Family Welfare has recommended eliminating the conventional health record system. The major focus of this paper is to present the various methodologies adopted to implement web-based health record systems. Since health-related information must be secured while being accessed and shared, security is the major factor. The use of blockchain, cryptography, and a timestamp-based log record method is discussed. To assure the sharing of records, the InterPlanetary File System (IPFS) is also discussed. The main purpose is to provide a systematic, easy-to-use, interoperable electronic health record system.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_96-Security_Aspects_of_Electronic_Health_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Hadoop and Spark Frameworks using Word Count Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120495</link>
        <id>10.14569/IJACSA.2021.0120495</id>
        <doi>10.14569/IJACSA.2021.0120495</doi>
        <lastModDate>2021-05-01T16:56:39.5470000+00:00</lastModDate>
        
        <creator>Yassine Benlachmi</creator>
        
        <creator>Abdelaziz El Yazidi</creator>
        
        <creator>Moulay Lahcen Hasnaoui</creator>
        
        <subject>Big data; hadoop; spark; machine learning; Hadoop Distributed File System (HDFS); mapreduce; word count</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>With the advent of the Big Data explosion brought about by the Information Technology (IT) revolution of the last few decades, processing and analyzing data at low cost and in minimum time has become immensely challenging. The field of Big Data analytics is driven by the demand to process Machine Learning (ML) data, real-time streaming data, and graphics processing. The most efficient solutions for Big Data analysis in a distributed environment are Hadoop and Spark, administered by Apache; both are open-source data management frameworks that distribute and compute large datasets across multiple clusters of computing nodes. This paper provides a comprehensive comparison between Apache Hadoop &amp; Apache Spark in terms of efficiency, scalability, security, cost-effectiveness, and other parameters. It describes the primary components of the Hadoop and Spark frameworks in order to compare their performance. The major conclusion is that Spark is better in terms of scalability and speed for real-time streaming applications, whereas Hadoop is more viable for applications dealing with bigger datasets. This case study evaluates the performance of various Hadoop components, such as MapReduce and the Hadoop Distributed File System (HDFS), by applying them to the well-known Word Count algorithm to ascertain their efficacy in terms of storage and computational time. Subsequently, it also analyzes how Spark’s in-memory processing can reduce the computational time of the Word Count algorithm.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_95-A_Comparative_Analysis_of_Hadoop_and_Spark_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Cost-231 Multiwall Propagation and Adaptive Data Rate Method for Access Point Placement Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120494</link>
        <id>10.14569/IJACSA.2021.0120494</id>
        <doi>10.14569/IJACSA.2021.0120494</doi>
        <lastModDate>2021-05-01T16:56:39.5170000+00:00</lastModDate>
        
        <creator>Fransiska Sisilia Mukti</creator>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Dwi Arman Prasetya</creator>
        
        <creator>Volvo Sihombing</creator>
        
        <creator>Nicodemus Rahanra</creator>
        
        <creator>Kristia Yuliawan</creator>
        
        <creator>Julianto Simatupang</creator>
        
        <subject>Access point placement; indoor propagation; Cost- 231 Multiwall; ADR; RSSI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>A new approach has been developed to provide an overview of signal behavior in indoor environments using the Cost-231 Multiwall Model (Cost-231 MWM) and the Adaptive Data Rate (ADR) method. This approach is used as a reference for access point (AP) placement in a campus building. The Cost-231 MWM estimates the power received by the user (usually called the Received Signal Strength Indicator, RSSI) by considering the obstacles around the transmitter (AP). We used the Institut Asia Malang environment as the case study and give recommendations for AP placement: ten optimal placements for the first, third, and fourth floors, and seven optimal placements for the second floor. These recommendations are based on the RSSI for good and excellent signal levels (-50 dBm to -10 dBm). This research also uses the Adaptive Data Rate (ADR) mechanism to reduce the packet loss (kbps) caused by obstacles that produce attenuation (-dB). Signal attenuation (-dB) occurs when the radio-frequency signal penetrates obstacles (walls); with the ADR mechanism and an increased number of access points in the multi-wall environment, communication and data-transmission stability can be maintained.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_94-Integrating_Cost_231_Multiwall_Propagation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Two Malicious Arabic-Language Twitter Campaigns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120493</link>
        <id>10.14569/IJACSA.2021.0120493</id>
        <doi>10.14569/IJACSA.2021.0120493</doi>
        <lastModDate>2021-05-01T16:56:39.5000000+00:00</lastModDate>
        
        <creator>Reem Alharthi</creator>
        
        <creator>Areej Alhothali</creator>
        
        <creator>Kawthar Moria</creator>
        
        <subject>Social network security; social spammers; arab twitter users; malicious campaigns on twitter; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Fake malicious accounts are one of the primary causes of the deterioration of social network content quality. Numerous such accounts are generated by attackers to achieve multiple nefarious goals, including phishing, spamming, spoofing, and promotion. These practices pose significant challenges regarding the availability of credible data that reflect real-world social media interactions. This has led to the development of various methods and approaches to combat spammers on social media networks. Previous studies, however, have almost exclusively focused on studying and identifying English-language spam profiles, whereas the problem of malicious Arabic-language accounts remains under-addressed in the literature. In this paper, therefore, we conduct a comprehensive investigation of malicious Arabic-language campaigns on Twitter. The study involves analyzing the accounts of these campaigns from several perspectives, including their number, content, social interaction graphs, lifespans, and day-to-day activities. In addition to exposing their spamming tactics, we find that these spam accounts are more successful in avoiding Twitter suspensions than has been previously reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_93-Comprehensive_Analysis_of_Two_Malicious.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Fractal Coding of MRI Images using Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120492</link>
        <id>10.14569/IJACSA.2021.0120492</id>
        <doi>10.14569/IJACSA.2021.0120492</doi>
        <lastModDate>2021-05-01T16:56:39.4870000+00:00</lastModDate>
        
        <creator>Bejoy Varghese</creator>
        
        <creator>S. Krishnakumar</creator>
        
        <subject>Fractal compression; deep reinforcement learning; MRI image compression; deep learning; adaptive fractal coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This paper presents an algorithm based on fractal theory using Iterated Function Systems (IFS). An efficient and fast coding mechanism is proposed by exploiting the self-similar nature of brain MRI images. The proposed algorithm uses a Deep Reinforcement Learning (DRL) technique to learn the transformations required to recreate the original image. We use the Adaptive Iterated Function System (AIFS) as the encoding scheme. The proposed algorithm is trained and customized to compress medical images, especially Magnetic Resonance Imaging (MRI). The algorithm is tested and evaluated using original MR head scan test images. It learns from an existing biomedical dataset, viz. the Internet Brain Segmentation Repository (IBSR), to predict the new local affine transformations. The empirical analysis shows that the proposed algorithm is at least 4 times faster than competing methods, with distinctly better decoding quality at a reduced bit rate.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_92-Fast_Fractal_Coding_of_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Corner Detection Operator for Multi-Spectral Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120491</link>
        <id>10.14569/IJACSA.2021.0120491</id>
        <doi>10.14569/IJACSA.2021.0120491</doi>
        <lastModDate>2021-05-01T16:56:39.4530000+00:00</lastModDate>
        
        <creator>Hassan El Houari</creator>
        
        <creator>Ahmed Fouad El Ouafdi</creator>
        
        <subject>Corner detection; multi-spectral; operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Corner detection is a crucial image processing technique with a wide range of applications, including motion detection, image registration, video tracking, and object recognition. Most proposed approaches to corner detection are based on gray-scale images, although it has been shown that color information can greatly improve the quality of corner detection. This paper introduces a new operator that identifies second-order image information for multi-spectral images. The operator is developed using the multi-spectral gradient and differential structures of the image. The eigenvectors of the proposed operator are then used to detect corners. A comparative study is conducted using synthetic and real images, and the results confirm that the proposed approach performs better than two other corner detection approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_91-A_New_Corner_Detection_Operator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Performance Measurement of Energy-based Acoustic Signal Detection with Autonomous Underwater Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120490</link>
        <id>10.14569/IJACSA.2021.0120490</id>
        <doi>10.14569/IJACSA.2021.0120490</doi>
        <lastModDate>2021-05-01T16:56:39.4400000+00:00</lastModDate>
        
        <creator>Redouane Es-sadaoui</creator>
        
        <creator>Jamal Khallaayoune</creator>
        
        <creator>Tamara Brizard</creator>
        
        <subject>Autonomous Underwater Vehicles (AUVs); acoustic signal processing; spectrum sensing; energy detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The Autonomous Underwater Vehicles (AUVs) industry is still awaiting its Henry Ford to bring to market solutions well adapted to the challenge of underwater exploration. This will likely come with the advent of small connected drones equipped with small sensors and embedded devices, allowing AUVs to operate in a coordinated swarm at a unit price so affordable that hundreds, or even thousands, could be deployed simultaneously to observe the ocean with an instrument of a size finally suited to its immensity. The scope of this work is to build a high-performance, low-cost embedded device that is easy to mount onboard small AUVs and implements energy-based spectrum sensing algorithms to detect underwater targets using acoustic waves. The design principles, hardware architecture, and real-time implementation of the acoustic signal processing chain are described in this paper. Simulations and sea experiments were conducted successfully and qualified the performance of the realized system in detecting acoustic pings underwater as a function of the signal-to-noise ratio (SNR). Moreover, this paper proposes methods to improve the measured detection range and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_90-Design_and_Performance_Measurement_of_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Technique based on RSA and Data Hiding for Securing Handwritten Signature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120489</link>
        <id>10.14569/IJACSA.2021.0120489</id>
        <doi>10.14569/IJACSA.2021.0120489</doi>
        <lastModDate>2021-05-01T16:56:39.4230000+00:00</lastModDate>
        
        <creator>Yaser Maher Wazery</creator>
        
        <creator>Shimaa Gamal Haridy</creator>
        
        <creator>AbdElmegeid Amin Ali</creator>
        
        <subject>Image Steganography; LSB; Data Hiding; Security; Embedding data; Cryptography; RSA; Handwritten signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Data exchange has been significantly encouraged by the development of communication technology and the wide use of social media over the Internet. It is therefore important to hide transmitted data, especially data that requires a person’s signature: the signature is an increasingly needed item used in daily life to achieve paper-based authentication in many departments. Cryptography and steganography are commonly considered the most important data hiding methodologies. Steganography hides the secret message in carrier media, such as text, audio, video, and image files, without distorting the carrier, while cryptography conceals the meaning of the secret message. A hybrid data hiding (image steganography) and encryption technique operating in the time domain is implemented in this research. In the proposed technique, the secret handwritten signature image is first encrypted using the public key algorithm RSA and then randomly inserted, via an embedding process, into one of the last three bits of each pixel (1st Least Significant Bit, 2nd LSB, or 3rd LSB) according to a mathematical randomization formula over all pixels of the carrier image. This randomization is assumed to increase the protection provided by the technique. The suggested technique is implemented on gray-level cover images. As a consequence of the random scattering of bits and the use of encryption, the proposed technique achieves enhanced data hiding results in terms of performance, protection, and imperceptibility, and its histogram provides more protection and security than the ordinary sequential Least Significant Bit (LSB) method.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_89-A_Hybrid_Technique_based_on_RSA_and_Data_Hiding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permissioned Blockchain: Securing Industrial IoT Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120488</link>
        <id>10.14569/IJACSA.2021.0120488</id>
        <doi>10.14569/IJACSA.2021.0120488</doi>
        <lastModDate>2021-05-01T16:56:39.3930000+00:00</lastModDate>
        
        <creator>Samira Yeasmin</creator>
        
        <creator>Adeel Baig</creator>
        
        <subject>Industrial Internet of Things; IIoT; permissioned blockchain; hyperledger fabric; information security; device communication; data sharing; access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>With the significantly increased use of the Industrial Internet of Things (IIoT), it is believed that this technology will revolutionize industrial applications and infrastructures by connecting several industrial assets. However, it is becoming prone to many cyberattacks and security issues. The emerging security challenges of IIoT can have a devastating effect since it deals with mission- and safety-critical systems. It thus becomes extremely important to address the security vulnerabilities and susceptibilities of this technology. Blockchain, being one of the most significant solutions to the security problems of several technologies, can play a vital role in improving the security of IIoT. Therefore, this paper proposes a Hyperledger Fabric blockchain-enabled IIoT that guarantees the security of the communication medium, data storage, access, and sharing between IIoT devices and ensures that only authorized identities are granted limited access. The system also monitors user access and makes sure that transactions are performed according to the roles defined by the Certificate Authority (CA) and Membership Service Provider (MSP). Moreover, this paper presents findings from the implementation of the blockchain network and addresses the key challenges. It evaluates the performance of the proposed network and discusses the key areas to be improved. Finally, the paper describes the benefits of a permissioned blockchain for IIoT and presents directions for further research and study.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_88-Permissioned_Blockchain_Securing_Industrial_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IoT) based Smart Vehicle Security and Safety System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120487</link>
        <id>10.14569/IJACSA.2021.0120487</id>
        <doi>10.14569/IJACSA.2021.0120487</doi>
        <lastModDate>2021-05-01T16:56:39.3770000+00:00</lastModDate>
        
        <creator>Yassine SABRI</creator>
        
        <creator>Aouad Siham</creator>
        
        <creator>Aberrahim Maizate</creator>
        
        <subject>Smart vehicle security; safety system; Internet of Things (IoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The Internet of Things (IoT) is making human life easier in all aspects, and the applications it offers are beyond comprehension. IoT is an abstract idea, a notion that interconnects all devices, tools, and gadgets over the Internet to enable them to communicate with one another. IoT finds application in various areas, such as intelligent cars and their safety, security, navigation, and efficient fuel consumption. This project puts forth a solution to achieve the desired outcome of saving the precious human lives that are lost to road crashes. In this context, we are designing and deploying a system that not only helps avoid accidents but also takes action accordingly. This research aims to deal with the issues that cause fatal crashes and integrates measures to ensure safety. Life without transportation is impossible to imagine; it makes far-off places easy to reach and greatly reduces travel time. But the problems that surface due to the ever-increasing number of vehicles on the road cannot be ignored. The project aims to eradicate a few of the major causes of car crashes and to integrate post-crash measures.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_87-Internet_of_Things_IoT_based_Smart_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Security Solutions for IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120486</link>
        <id>10.14569/IJACSA.2021.0120486</id>
        <doi>10.14569/IJACSA.2021.0120486</doi>
        <lastModDate>2021-05-01T16:56:39.3600000+00:00</lastModDate>
        
        <creator>Faleh Alfaleh</creator>
        
        <creator>Salim Elkhediri</creator>
        
        <subject>Efficient; IoT; Systems on a Chip (SoC); ML; Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The Internet of Things (IoT) is a technological innovation that has revolutionized society. The IoT will forever transform simple things that do very little into smart, fully capable things. IoT devices can process and automate everyday household and workplace tasks through simple sensors. Yet despite the benefits of these devices, they are vulnerable to violations such as privacy issues and security breaches. This paper aims to provide a clearer understanding of the IoT and current threats to it by explaining why IoT devices are susceptible to attack. Moreover, the technologies used in the IoT are examined, as well as the different communication layers of the IoT and how they function. The findings reveal that IoT devices are prone to many software and hardware vulnerabilities, not to mention the broader challenges that come with the IoT. Solutions to these challenges are proposed, notably the use of anomaly-based intrusion detection systems, which are critical components of network security. Using machine learning (ML) to detect potential attacks is recommended. Many proposed anomaly-based detection systems use different ML algorithms and techniques; however, there is no standard benchmark for comparing them in terms of power consumption. A benchmark that measures both accuracy and power consumption to evaluate each algorithm’s implementation is proposed.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_86-Efficient_Security_Solutions_for_IoT_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learners Classification for Personalized Learning Experience in e-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120485</link>
        <id>10.14569/IJACSA.2021.0120485</id>
        <doi>10.14569/IJACSA.2021.0120485</doi>
        <lastModDate>2021-05-01T16:56:39.3300000+00:00</lastModDate>
        
        <creator>A. JOHN MARTIN</creator>
        
        <creator>M. MARIA DOMINIC</creator>
        
        <creator>F. Sagayaraj Francis</creator>
        
        <subject>Learning style; learning profile; learning objects; e-Learning; personalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The investigators are inspired by the increasing need and demand for educational applications and Learning Management Systems that provide learning objects centered on the learning styles of learners. The way in which learners acquire, process, and retain information is unique, and these unique characteristics affect their learning process. Hence it is essential to consider and understand this uniqueness among learners in order to deliver learner-centric learning objects. The investigators present a system that classifies learners based on the time each learner spends on learning content of different types. The types of learning content are identified by the percentage of visual, auditory, read/write, and kinesthetic material in each learning object. The prominent learning-style model VARK (Visual, Auditory, Read/Write and Kinesthetic) is used to classify the learners. The system classifies each learner and recommends learning objects based on their learning preference; it also enables faculty members and content creators to prepare and provide personalized learning objects based on the learning styles of the learners.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_85-Learners_Classification_for_Personalized_Learning_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Three Hybridization Categories to Solve Multi-Objective Flow Shop Scheduling Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120484</link>
        <id>10.14569/IJACSA.2021.0120484</id>
        <doi>10.14569/IJACSA.2021.0120484</doi>
        <lastModDate>2021-05-01T16:56:39.2970000+00:00</lastModDate>
        
        <creator>Jebari Hakim</creator>
        
        <creator>Siham Rekiek</creator>
        
        <creator>Rahali El Azzouzi Saida</creator>
        
        <creator>Samadi Hassan</creator>
        
        <subject>Scheduling; multi-objective; flow shop; multi hybridization; artificial bee colony ABC; particle swarm optimization PSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Industries must maintain a constant rate of productivity; however, weaknesses appear at the level of the production system, which leads to high manufacturing costs. Scheduling is considered the most significant issue in production systems, and solving it requires complex methods. The goal of this paper is to establish three hybridization categories of the evolutionary methods ABC and PSO for solving the multi-objective flow shop scheduling problem: synchronous parallel hybridization using the weighted-sum form of the fitness function, sequential hybridization with or without the weighted-sum form of the fitness function, and asynchronous parallel hybridization using the weighted-sum form of the fitness function. These methods are then tested on an automotive multi-objective flow shop, and an in-depth comparison is performed to verify how multi-hybridization and the choice of hybridization category influence the resolution of multi-objective flow shop scheduling problems. The results are consistent with other studies showing that multi-hybridization improves the effectiveness of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_84-Performance_Comparison_of_Three_Hybridization_Categories.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Properties for Total Strong - Weak Domination Over Bipolar Intuitionistic Fuzzy Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120483</link>
        <id>10.14569/IJACSA.2021.0120483</id>
        <doi>10.14569/IJACSA.2021.0120483</doi>
        <lastModDate>2021-05-01T16:56:39.2830000+00:00</lastModDate>
        
        <creator>As’ad Mahmoud As’ad Alnaser</creator>
        
        <subject>Fuzzy sets; bipolar intuitionistic fuzzy sets; strong (weak) bipolar intuitionistic fuzzy sets; total strong (weak) bipolar intuitionistic number</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In this research study, we introduce and discuss the concept of total strong (weak) domination over bipolar intuitionistic fuzzy graphs, and we also define strong (weak) domination for bipolar intuitionistic fuzzy graphs. Theorems, examples, and some properties of these concepts are discussed.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_83-Novel_Properties_for_Total_Strong_Weak_Domination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Affecting Mobile Learning Acceptance in Higher Education: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120482</link>
        <id>10.14569/IJACSA.2021.0120482</id>
        <doi>10.14569/IJACSA.2021.0120482</doi>
        <lastModDate>2021-05-01T16:56:39.2500000+00:00</lastModDate>
        
        <creator>Nahil Abdallah</creator>
        
        <creator>Odeh Abdallah</creator>
        
        <creator>OM Bohra</creator>
        
        <subject>Mobile learning; UTAUT; structural equation modeling; tam; technology acceptance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The use of mobile tools to support learning and teaching activities has become a significant part of the informal learning process. Mobile learning (M-learning) is used to considerably expand the forms of learning activities undertaken by learners and to support the learning process. The effective application of M-learning in higher educational institutions, however, depends on learners’ adoption. It is therefore essential to define and investigate the factors affecting learners’ willingness to use and adopt M-learning. Thus, this research investigates the factors affecting students’ intention to adopt M-learning in institutions of higher education. To achieve the objectives of this research, a model is proposed based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM). The instrument is developed using validated items from previous studies and the literature. Data for this quantitative study are collected from undergraduate and postgraduate students. A Structural Equation Model (SEM) is used to analyze the data collected from 218 participants via a survey questionnaire. The findings show that students’ intention to adopt M-learning is shaped by several variables: personal innovativeness, self-management, facilitating conditions, social influence, relative advantage, and effort expectancy. The results also present several practical contributions and implications for M-learning adoption in terms of research and practice. Investigating the required determinants may contribute to gaining learners’ adoption and is important for enhancing students’ learning experience and helping them improve their knowledge and academic achievement. The contribution of this paper lies in defining the factors influencing the acceptance and use of M-learning systems by students of higher education in Palestine. It is hoped that the results of the study will be valuable for policy-makers in designing comprehensive M-learning systems.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_82-Factors_Affecting_Mobile_Learning_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sentiment Analysis of Egypt’s New Real Estate Registration Law on Facebook</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120481</link>
        <id>10.14569/IJACSA.2021.0120481</id>
        <doi>10.14569/IJACSA.2021.0120481</doi>
        <lastModDate>2021-05-01T16:56:39.2370000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Wafya Ibrahim Hamouda</creator>
        
        <subject>Egypt; facebook; opinion; real estate registration law; sentiment analysis; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In response to the increasing influence of social media networks on shaping public opinion, sentiment analysis systems and applications have been developed to extract insights into the wider public opinion behind certain topics, so as to support businesses, manufacturers, government agencies, and policymakers in their decisions and plans. Despite the importance of sentiment analysis in providing policymakers with effective mechanisms to understand the attitudes of customers and citizens, which can usefully inform decision-making processes and future planning, studies on sentiment analysis in Egypt remain very limited. Much of the work is still done using survey tools such as questionnaires and polls to gather information about citizens’ attitudes towards given issues and topics. Despite the effectiveness of such methods, citizens’ reflections on social media platforms and networks remain more powerful in providing comprehensive insights and overviews. Furthermore, social media-based sentiment analysis is usually more representative, being based on larger numbers of participants, which has positive implications for reliability. Opinions expressed on social media are often the most powerful form of feedback for businesses because they are given unsolicited. In light of this argument, this study seeks to provide a sentiment analysis of Egypt’s New Real Estate Registration Law on Facebook. To extract information about users’ sentiment polarity (positive, neutral, or negative), Facebook posts were used; the rationale is that Facebook is still the most popular social media platform in Egypt. Text classification was then used to classify the selected data into three main classes/values: Positive, Negative, and Neutral. The findings indicate that the sentiments expressed in users’ posts and comments reflect a significantly negative attitude towards the new law. Despite the effectiveness of automatically evaluating and analyzing users’ sentiments and opinions on social media concerning the new Real Estate Registration Law, linguistic approaches including Critical Discourse Analysis (CDA), functional linguistics, and semiotics need to be incorporated into sentiment analysis applications to gain a better understanding of people’s attitudes towards specific issues.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_81-A_Sentiment_Analysis_of_Egypts_New_Real_Estate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Different Dimensionality Reduction Techniques on Machine Learning Overfitting Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120480</link>
        <id>10.14569/IJACSA.2021.0120480</id>
        <doi>10.14569/IJACSA.2021.0120480</doi>
        <lastModDate>2021-05-01T16:56:39.2200000+00:00</lastModDate>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <creator>Ahmad Taher Azar</creator>
        
        <creator>Mustafa Samy Elgendy</creator>
        
        <creator>Khaled Mohamed Fouad</creator>
        
        <subject>Dimensionality reduction; feature subset selection; rough set; overfitting; underfitting; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In most conditions, training a machine-learning model on a dataset with many attributes is a problematic task. There is a proportional relationship between an increase in model features and the model’s susceptibility to overfitting, since not all features are equally important; some features may only make the data noisier. Dimensionality reduction techniques are used to overcome this problem. This paper presents a detailed comparative study of nine dimensionality reduction methods: missing-values ratio, low variance filter, high-correlation filter, random forest, principal component analysis, linear discriminant analysis, backward feature elimination, forward feature construction, and rough set theory. The effects of these methods on both training and testing performance were compared on two different datasets and applied to three different models: Artificial Neural Network (ANN), Support Vector Machine (SVM), and Random Forest Classifier (RFC). The results show that the RFC model was able to benefit from dimensionality reduction by limiting overfitting. The introduced RFC model showed general improvements in both accuracy and efficiency over the compared approaches. The results reveal that dimensionality reduction can minimize overfitting while keeping performance close to, or better than, that of the original feature set.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_80-The_Effect_of_different_Dimensionality_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handling Sudden and Recurrent Changes in Business Process Variability: Change Mining based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120479</link>
        <id>10.14569/IJACSA.2021.0120479</id>
        <doi>10.14569/IJACSA.2021.0120479</doi>
        <lastModDate>2021-05-01T16:56:39.2030000+00:00</lastModDate>
        
        <creator>Asmae HMAMI</creator>
        
        <creator>Hanae SBAI</creator>
        
        <creator>Mounia FREDJ</creator>
        
        <subject>Component; variability; process variant; configurable process model; process mining; change mining; concept drift</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Changes are random and unavoidable in business processes, and they are frequently overlooked by managers, especially when managers must deal with a collection of process variants, because managing every single business process variant separately is a time-consuming task. Many approaches exist to manage a collection of business processes and deal with variability, such as process mining approaches, which can discover configurable business process models, enhance them, and verify conformance automatically. However, those approaches do not cover the changes and concept drift that occur over time. This paper presents a novel change mining approach that discovers changes in a collection of event logs and reports them in a change log. This change log can be analyzed to determine whether the changes are sudden or recurrent and, afterward, to recommend improvements to the configurable process model.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_79-Handling_Sudden_and_Recurrent_Changes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Company System using Hybrid Nomenclature of Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120478</link>
        <id>10.14569/IJACSA.2021.0120478</id>
        <doi>10.14569/IJACSA.2021.0120478</doi>
        <lastModDate>2021-05-01T16:56:39.1730000+00:00</lastModDate>
        
        <creator>Mbida Mohamed</creator>
        
        <creator>Ezzati Abdellah</creator>
        
        <subject>Neural; network; intelligent; CRM; company; warehouse; real time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Manually managing the data related to products, CRM, suppliers, and warehouse administration in a company requires substantial human resources and time; consequently, the error rate increases and operations sometimes go out of control. This work therefore designs an intelligent overall management system (an intelligent neural network) that completes and updates the product management network presented in one of our previous articles. This new version assembles the three modules in order to automate tasks in real time.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_78-Smart_Company_System_using_Hybrid_Nomenclature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Smart Home Security and Privacy: Consumer Perception in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120477</link>
        <id>10.14569/IJACSA.2021.0120477</id>
        <doi>10.14569/IJACSA.2021.0120477</doi>
        <lastModDate>2021-05-01T16:56:39.1570000+00:00</lastModDate>
        
        <creator>Omar Almutairi</creator>
        
        <creator>Khalid Almarhabi</creator>
        
        <subject>Smart home; IoT; Saudi Arabia; security; privacy; issues; demographic; perception; consumer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>One of the fastest-growing technologies around the globe is the Internet of Things (IoT). The research questions in this study focus on the security and privacy challenges of the smart home environment, with Saudi Arabia as the geographical boundary of the study. The study focuses on identifying the problems associated with smart home adoption in Saudi Arabia. There is, however, a large shift towards an increase in threats in smart homes. It is believed that human awareness regarding the use of these devices, together with the level of security offered by the devices, is among the factors behind these threats and privacy issues. This research aims to identify the factors that hinder the adoption of smart homes. A quantitative methodology is implemented to identify the population under threat from IoT devices in smart homes, with the views of users serving as the major input for tracing the problems. The expected results of this research will identify the factors that can be improved and provide proper solutions to avoid security or privacy threats in the Saudi Arabian realm.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_77-Investigation_of_Smart_Home_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: A Data Science Framework for Data Quality Assessment and Inconsistency Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120476</link>
        <id>10.14569/IJACSA.2021.0120476</id>
        <doi>10.14569/IJACSA.2021.0120476</doi>
        <lastModDate>2021-05-01T16:56:39.1430000+00:00</lastModDate>
        
        <creator>Anusuya Ramasamy</creator>
        
        <creator>Berhanu Sisay</creator>
        
        <creator>Amanuel Bahiru</creator>
        
        <subject>Data Science; OMD data model; weight generation; min-sum; dynamic programming algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2021.0120476.retraction</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_76-A_Data_Science_Framework_for_Data_Quality_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare Logistics Optimization Framework for Efficient Supply Chain Management in Niger Delta Region of Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120475</link>
        <id>10.14569/IJACSA.2021.0120475</id>
        <doi>10.14569/IJACSA.2021.0120475</doi>
        <lastModDate>2021-05-01T16:56:39.1270000+00:00</lastModDate>
        
        <creator>Imeh J. Umoren</creator>
        
        <creator>Ubong E. Etuk</creator>
        
        <creator>Anietie P. Ekong</creator>
        
        <creator>Kingsley C. Udonyah</creator>
        
        <subject>Dedicated logistics department (DLD); Quality of Care (QoC); Quality of Experience (QoE); information/cognitive technologies (ETA) and type-1 fuzzy logic model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Optimizing logistics allocation and utilization is essential for effective healthcare management, yet it receives little consideration in most hospitals in Nigeria, where few resources are allocated to the health sector in yearly budgets. A hospital consists of several patient classes, each of which follows a different treatment process flow path over a multiphase, multidimensional requirement with scarce resources and inadequate space. Despite the small budget provision made for healthcare resources, patients’ demands for better service are rapidly increasing. Hence, efficient and optimal solutions are required to lessen the cost of healthcare services and to enhance Quality of Care (QoC) and Quality of Experience (QoE) in much of the public healthcare sector. However, certain control coefficients, such as the absence of a Dedicated Logistics Department (DLD) in medical facilities, limit the efforts of stakeholders. This paper proposes a computational framework to assess various strategic and operational decisions for optimizing multiple objectives using a Type-1 Fuzzy Logic Model. In Phase I, we explore a healthcare resource allocation plan. In Phase II, we determine a resource utilization schedule by patient class at the daily operational level. In Phase III, we develop a framework capable of evaluating and optimizing healthcare logistics using the control coefficients Logistics Optimization (LO), Integration of Information/Cognitive Technologies (ETA), and Collaboration of all Logistics Stakeholders (COL). We assigned weights between 1 and 10 to the coefficients and modeled their effects on an efficient supply chain. Finally, we further explored the effects of the separate strategies and their combination to identify the best possible resource supply chain. The computational experiment was based on data obtained from a study conducted on a typical public healthcare department. The results show that our approach significantly evaluates and optimizes healthcare logistics.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_75-Healthcare_Logistics_Optimization_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Engineering Ethics Competency Gap in Undergraduate Computing Qualifications within South African Universities of Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120474</link>
        <id>10.14569/IJACSA.2021.0120474</id>
        <doi>10.14569/IJACSA.2021.0120474</doi>
        <lastModDate>2021-05-01T16:56:39.0970000+00:00</lastModDate>
        
        <creator>Senyeki M. Marebane</creator>
        
        <creator>Robert T. Hans</creator>
        
        <subject>Software engineering ethics; software engineer; technical skills; knowledge; curriculum; professional ethics; general ethics; university of technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Computing graduates working as software engineers are expected to demonstrate competencies in various categories of software engineering ethics as a component of the non-technical skills that complement technical skills. Therefore, university programme offerings should provide opportunities for students to develop software engineering ethical competence. This study analyses curriculum documents to determine the extent to which entry-level undergraduate computing qualifications of Universities of Technology (UoTs) in South Africa provide opportunities to empower students with software engineering ethical competence. We used summative content analysis to analyze texts within the UoT computing undergraduate qualifications related to software development, as retrieved from the South African Qualifications Authority database. The ATLAS.ti text analysis tool was used to classify texts according to predetermined software engineering ethics categories to determine the extent to which the qualifications under study expose students to software engineering ethics. The results show that the coverage of the various categories of software engineering ethics by UoT computing qualifications for software development is insufficient, incomplete, and superficial, providing only limited opportunities for prospective software engineers to develop software engineering ethical competence. The lack of adequate inclusion of software engineering ethics in UoT qualifications in South Africa deprives prospective software engineers of the opportunity to develop the ethical competence required to become ethically successful software engineers. Such limited exposure among software development graduates risks the development of potentially unethical software products in the software industry.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_74-Software_Engineering_Ethics_Competency_Gap.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid SFLA-UBS Algorithm for Optimal Resource Provisioning with Cost Management in Multi-cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120473</link>
        <id>10.14569/IJACSA.2021.0120473</id>
        <doi>10.14569/IJACSA.2021.0120473</doi>
        <lastModDate>2021-05-01T16:56:39.0800000+00:00</lastModDate>
        
        <creator>Muhammad Iftikhar Hussain</creator>
        
        <creator>JingSha He</creator>
        
        <creator>Nafei Zhu</creator>
        
        <creator>Fahad Sabah</creator>
        
        <creator>Zulfiqar Ali Zardari</creator>
        
        <creator>Saqib Hussain</creator>
        
        <creator>Fahad Razque</creator>
        
        <subject>Optimal provisioning; resource allocation; multi-cloud; cost management; QoS and selection of resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Multi-cloud is a vendor-based heterogeneous cloud paradigm in the recent era of computing with dynamic infrastructural deployment. Multi-cloud provides all the essential and on-demand requirements of a virtual environment from various domains under a single service level agreement (SLA). Consumers from multi-tier domains can access all the available resources placed in a shared pool on the service provider’s side, as per their requirements. The shared pool of resources creates complexity in assigning the best and most suitable resource to a particular virtual instance under the same service provider. The complexity of resources in terms of accessibility from various domains, dynamic allocation, security, and quality of service (QoS) raises concerns in the multi-cloud infrastructure, particularly regarding optimal provisioning and cost management. The proposed work presents a hybrid technique combining the shuffled frog leaping algorithm and ubiquitous binary search (SFLA-UBS) to resolve these issues through optimal provisioning, dynamic allocation, and better resource selection. The proposed work will help create a need-based and demand-based resource pool with the appropriate selection of each resource. The proposed model also supports resource optimization with dynamic provisioning as a cost-effective solution to achieve QoS in multi-cloud deployments on the service provider end.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_73-Hybrid_SFLA_UBS_Algorithm_for_Optimal_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Speech Signal Data of Mising Vowels using Logistic Regression and K-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120472</link>
        <id>10.14569/IJACSA.2021.0120472</id>
        <doi>10.14569/IJACSA.2021.0120472</doi>
        <lastModDate>2021-05-01T16:56:39.0630000+00:00</lastModDate>
        
        <creator>Ujjal Saikia</creator>
        
        <creator>Jiten Hazarika</creator>
        
        <subject>Clustering methods; formant frequency; logit model; pitch; stype</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In this paper, an attempt has been made to study and analyze speech signal data. The speech data have several attributes, such as time, pitch, formant frequencies, speaker type, and vowel number. The dataset used here consists of speech signals that are analog in nature and have been converted to digital format. After converting the data into digital format, we establish a logit model to predict the speaker’s gender on the basis of the pitch values, where pitch is also regarded as the fundamental formant frequency. That is, our objective is to predict whether a speaker is male or female from the pitch value by using logistic regression. We have also applied clustering techniques to visualize and interpret how they work on speech signal data. The logistic model gives a 91% accuracy rate with a low, efficient AIC value, whereas the clustering algorithm achieves 93% accuracy for the whole sample.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_72-Analysis_of_Speech_Signal_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grey Clustering Method for Water Quality Assessment to Determine the Impact of Mining Company, Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120471</link>
        <id>10.14569/IJACSA.2021.0120471</id>
        <doi>10.14569/IJACSA.2021.0120471</doi>
        <lastModDate>2021-05-01T16:56:39.0470000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Jhoel Andy Gauna Achata</creator>
        
        <creator>Jorge Alfredo Barreda Valdivia</creator>
        
        <creator>Julio Cesar Junior Santiva&#241;ez Montes</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <subject>Grey clustering method; mining company; water quality; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Mining operations have a significant impact on the environment, and water quality is an important affected factor that needs to be controlled. The Grey Clustering Method based on center-point triangular whitenization weight functions (CTWF) is an artificial intelligence technique that evaluates water samples according to selected parameters in order to perform an effective water quality assessment. In the present study, the analysis is made on the Crisnejas River Basin, using fifteen monitoring points from an investigation conducted by the National Water Authority (ANA) in 2019, based on the Peruvian water quality standards (ECA). The results reveal that almost all of the monitoring points on the Crisnejas River Basin were classified as “irrigation of vegetables unrestricted”, while only one point, located in an urbanized area, was classified as “animal drink”. This implies that mining discharges are being well treated by the company, whereas the contamination generated in towns remains a separate concern. Further, the present study might be helpful for audit processes conducted by the state or companies to verify the quality of surface waters using a more accurate methodology.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_71-Grey_Clustering_Method_for_Water_Quality_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Hodgkin Type Lymphoma Cancer Cell Detection using Connected Components Labeling and Moments of Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120470</link>
        <id>10.14569/IJACSA.2021.0120470</id>
        <doi>10.14569/IJACSA.2021.0120470</doi>
        <lastModDate>2021-05-01T16:56:39.0330000+00:00</lastModDate>
        
        <creator>Monirul Islam Pavel</creator>
        
        <creator>Mohsinul Bari Shakir</creator>
        
        <creator>Dewan Ahmed Muhtasim</creator>
        
        <creator>Omar Faruk</creator>
        
        <subject>Non-Hodgkin; lymphoma; moment of image; connected components labeling; Otsu thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Cancers are among the deadliest diseases in the world at present, with costly treatment systems. In this paper, a cost-effective, autonomous cancer-cell detection system is proposed, using several efficient image processing methods to detect early-stage non-Hodgkin lymphoma, a type of blood cancer. The system automatically detects the traits of cancer in microscopy images of biopsy samples. Recent attempts have lacked flexibility in the characteristics considered, and their accuracy is not consistent across individual cancer types. The framework consists of three stages for detecting cancer on the basis of various detected traits, covering cell segmentation, quantification, area measurement analysis of cells, center clump detection using moments of the image, and identification of 4-connected components with the Moore-Neighbor tracing algorithm. The methodology has been applied to several sets of images, and feedback from these test executions has been used to improve the system. The proposed method can thus be used efficiently for autonomous non-Hodgkin lymphoma cancer cell detection, with an accuracy of 93.75%.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_70-Non_Hodgkin_Type_Lymphoma_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Investigation of the Relationship Between Business Process Transparency and Business Process Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120468</link>
        <id>10.14569/IJACSA.2021.0120468</id>
        <doi>10.14569/IJACSA.2021.0120468</doi>
        <lastModDate>2021-05-01T16:56:39.0000000+00:00</lastModDate>
        
        <creator>Alhanouf Aldayel</creator>
        
        <creator>Ahmad Alturki</creator>
        
        <subject>Business Process Management (BPM); business process; transparency; business process attack; fraud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Business Process Management (BPM) is a management approach to discover, analyze, redesign, execute and monitor business processes. Implementing BPM concepts helps organizations increase their productivity, achieve their strategies and operational excellence, and save costs. Rosemann et al. identify business process transparency as one of the key values of BPM and essential to achieving other BPM benefits. Business process transparency provides visibility into how operations/activities are conducted within an organization in a detailed way, sometimes with technical details, which facilitates the identification of structural issues in the process model. A content analysis of the literature shows that fraudsters have leveraged structural issues of the business process model to commit fraud. Such fraud can be labeled a Business Process Attack (BPA). By analogy with an information system security attack, a BPA can be defined as the exploitation of a vulnerability in a business process model to commit fraudulent activities that influence the business negatively, such as achieving invalid or unwanted results. This research aims to investigate the relationship between the degree of business process transparency and exposure to BPA. If the relationship is positive, appropriate security controls shall be applied to business process transparency to avoid BPA. The main research question is: what is the relationship between an organization&#39;s degree of business process transparency and its exposure to BPA? A quantitative research method is employed to measure and understand the impact of business process transparency on BPA. An experiment is designed and conducted to assess awareness of the existence of vulnerabilities in various process models and of how they can be exploited to commit BPA. Results show a significant positive relationship between increased business process transparency and exposure to BPA. This research contributes towards understanding and highlighting the negative impact of business process transparency, which should motivate researchers to investigate this phenomenon and find appropriate solutions.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_68-An_Empirical_Investigation_of_the_Relationship.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Tree-profile Shape Ultra Wide Band Antenna for Chipless RFID Tags</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120469</link>
        <id>10.14569/IJACSA.2021.0120469</id>
        <doi>10.14569/IJACSA.2021.0120469</doi>
        <lastModDate>2021-05-01T16:56:39.0000000+00:00</lastModDate>
        
        <creator>A K M Zakir Hossain</creator>
        
        <creator>Nurulhalim Bin Hassim</creator>
        
        <creator>Jamil Abedalrahim Jamil Alsayaydeh</creator>
        
        <creator>Mohammad Kamrul Hasan</creator>
        
        <creator>Md. Rafiqul Islam</creator>
        
        <subject>Planar microstrip; UWB antenna; chipless RFID; realized gain; total efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In this article, a new small-size planar microstrip tree-profile-shaped Ultra-Wide Band (UWB) antenna with a partial ground plane is presented. The antenna is designed for chipless RFID tags operating in the UWB region. The operating frequency of the antenna ranges from 2.72 GHz to 11.1 GHz, which covers the entire UWB frequency band. The antenna exhibits a comparatively high realized gain of 4.2 dBi with respect to its small size of 27 &#215; 40 mm2, and has a gain-to-aperture ratio of 0.243, which is higher than that of other existing retransmission-based chipless RFID antennas. Another aspect of this antenna is its total efficiency, which never falls below 80% throughout the entire bandwidth and reaches as high as 96% at 3.5 GHz. This design should motivate chipless RFID designers to produce small-size and cost-effective tags.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_69-A_Tree_profile_Shape_Ultra_Wide_Band_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Babbitt Damage and Excessive Clearance in Journal Bearings through an Intelligent Recognition Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120467</link>
        <id>10.14569/IJACSA.2021.0120467</id>
        <doi>10.14569/IJACSA.2021.0120467</doi>
        <lastModDate>2021-05-01T16:56:38.9700000+00:00</lastModDate>
        
        <creator>Joel Pino G&#243;mez</creator>
        
        <creator>Fidel E. Hern&#225;ndez Montero</creator>
        
        <creator>Julio C. G&#243;mez Mancilla</creator>
        
        <creator>Yenny Villuendas Rey</creator>
        
        <subject>Journal bearing; babbitt damage; excessive clearance; fault identification; feature selection; supervised classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Journal bearings play an important role in many rotating machines in industrial environments, especially in steam turbines of thermoelectric power plants. Babbitt damage (BD) and excessive clearance (C) are common faults of steam turbine journal bearings. This paper focuses on achieving effective identification of these faults through an intelligent recognition approach. The work was carried out by processing real data obtained from an industrial environment. A feature selection procedure was applied in order to choose the features most suitable for identifying the faults. This procedure was performed through the computation of typical testors, which allows working with both quantitative and qualitative features. The classification tasks were carried out using Nearest Neighbors, Voting Algorithm, Na&#239;ve Associative Classifier and Assisted Classification for Imbalance Data techniques. Several performance measures were computed in order to assess classification effectiveness. The achieved results (e.g., six performance measures above 0.998) show the suitability of applying pattern recognition techniques to the automatic identification of BD and C.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_67-Identification_of_Babbitt_Damage_and_Excessive_Clearance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Approach for Detecting Diabetes using Deep Learning Techniques based on Convolutional LSTM Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120466</link>
        <id>10.14569/IJACSA.2021.0120466</id>
        <doi>10.14569/IJACSA.2021.0120466</doi>
        <lastModDate>2021-05-01T16:56:38.9530000+00:00</lastModDate>
        
        <creator>P. Bharath Kumar Chowdary</creator>
        
        <creator>R. Udaya Kumar</creator>
        
        <subject>Convolutional long short-term memory; diabetes prediction; machine learning; pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Diabetes, caused by insufficient release of insulin by the pancreas, is one of the most common disorders, affecting millions of people worldwide. Early detection of diabetes is necessary; otherwise it can lead to many serious complications. By predicting diabetes at an early stage and receiving appropriate treatment, individuals can maintain a healthy life. Since conventional diabetes detection methods are tedious, identifying diabetes from clinical and physical data calls for an automated system. This paper proposes an approach to enhance diabetes prediction using deep learning techniques. Based on the Convolutional Long Short-term Memory (CLSTM) network, we developed a diabetes classification model and compared it with existing methods on the Pima Indians Diabetes Database (PIDD). We assessed the findings of various classification approaches in this study. The proposed approach is further improved by an efficient pre-processing mechanism called multivariate imputation by chained equations. The outcomes are promising compared to existing machine learning approaches and other research models.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_66-An_Effective_Approach_for_Detecting_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A-SA SOS: A Mobile- and IoT-based Pre-hospital Emergency Service for the Elderly and Village Health Volunteers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120465</link>
        <id>10.14569/IJACSA.2021.0120465</id>
        <doi>10.14569/IJACSA.2021.0120465</doi>
        <lastModDate>2021-05-01T16:56:38.9230000+00:00</lastModDate>
        
        <creator>Kannikar Intawong</creator>
        
        <creator>Waraporn Boonchieng</creator>
        
        <creator>Peerasak Lerttrakarnnon</creator>
        
        <creator>Ekkarat Boonchieng</creator>
        
        <creator>Kitti Puritat</creator>
        
        <subject>Pre–hospital emergency service; mobile healthcare; IoT-based healthcare system; elderly; healthcare volunteer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In Thailand, emergency illnesses are life-threatening conditions that constitute serious health problems, and quick access to definitive care can dramatically improve the survival rate of the elderly. Currently, pre-hospital emergency medical services have limitations that prevent treatment from reaching the accident site on time. In this research, we propose the A-SA SOS application, a mobile- and IoT-based pre-hospital emergency service for the elderly. The system helps the elderly send a request to the nearest village health volunteers via a mobile application and smart device. After reaching the elderly, the village health volunteers carry out basic life support to increase the survival rate before sending the patients directly to the Emergency Management System (EMS) agency. To evaluate the system, we tested it for three months in the Suthep sub-district of Chiang Mai city. Finally, the incident report showed that the average time to reach the scene (4.91&#177;0.56) to help elderly patients was less than the standard criteria of an average 3 minutes.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_65-A_SA_SOS_A_Mobile_and_IoT_based_Pre_Hospital.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Body Area Sensor Networks for Wearable Health Monitoring: Technology Trends and Future Research Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120464</link>
        <id>10.14569/IJACSA.2021.0120464</id>
        <doi>10.14569/IJACSA.2021.0120464</doi>
        <lastModDate>2021-05-01T16:56:38.9070000+00:00</lastModDate>
        
        <creator>Malek ALRASHIDI</creator>
        
        <creator>Nejah NASRI</creator>
        
        <subject>Healthcare; physiological signals; security; UWB; wireless technologies; WBASN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Today, there is emerging interest in Wireless Body Area Sensor Networks (WBASNs) for the real-time monitoring of patients and early chronic disease detection. In this context, this paper presents a synoptic survey of healthcare monitoring via the IEEE 802.15.6 (UWB) protocol. We propose a survey of current issues in wearable physiological monitoring signals and devices, application areas, and reliability in WBASNs. To help elderly and disabled people, it would be beneficial to use a wireless portable gadget at home to gather useful data during ordinary human activities. This will help manage regular hospital and emergency department appointments and monitor crucial physiological signals in real time. This paper also presents a study of new wireless technologies intended for body area sensor networks, including signal processing problems, spectral allocation, security, and future research challenges of WBASNs.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_64-Wireless_Body_Area_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Agent-Based Evaluation Model of Students&#39; Emotional Engagement in Classroom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120463</link>
        <id>10.14569/IJACSA.2021.0120463</id>
        <doi>10.14569/IJACSA.2021.0120463</doi>
        <lastModDate>2021-05-01T16:56:38.8770000+00:00</lastModDate>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <creator>Latha Subramainan</creator>
        
        <creator>Ihab L Hussein Alsammak</creator>
        
        <creator>Mahmood H. Hussein</creator>
        
        <subject>Students’ emotional engagement; agent-based evaluation; computational model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This study proposes an agent-based evaluation model of students’ emotional engagement in a classroom. The proposed model consists of four main elements in a classroom: the selected strategy to control engagement, the engagement level of the students, the emotional state of the lecturer, and the emotional states of the students. The process starts with a lecturer selecting a strategy, which in turn influences the students’ emotional state. Using three variables, students’ misbehavior, motivation, and participation, the engagement level of the students is measured, which eventually influences the lecturer’s emotion either positively or negatively. If negatively, the lecturer proposes another strategy that would trigger the students’ emotions and eventually improve their engagement level. We simulate our model to validate its applicability and functionality. The simulation results show a promising ability to simulate a classroom environment with very flexible settings, producing results in less time and at lower cost. The model can also be widely utilized by researchers in the field of social studies to further investigate the problem of students’ engagement by conducting experiments and reporting the results.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_63-An_Agent_Based_Evaluation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Accidents Detection using Geographic Information Systems (GIS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120462</link>
        <id>10.14569/IJACSA.2021.0120462</id>
        <doi>10.14569/IJACSA.2021.0120462</doi>
        <lastModDate>2021-05-01T16:56:38.8600000+00:00</lastModDate>
        
        <creator>Wesam Alkhadour</creator>
        
        <creator>Jamal Zraqou</creator>
        
        <creator>Adnan Al-Helali</creator>
        
        <creator>Sajeda Al-Ghananeem</creator>
        
        <subject>Geographic Information System (GIS); statistical tools; hotspots; spatial analysis; temporal analysis; road safety; traffic accidents; spatial correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Reducing the number and severity of traffic accidents has become the dominant target of road traffic safety management worldwide. The main objective of this work is to analyze traffic accidents in temporal and spatial frameworks in the capital city of Amman and to identify hotspot zones in the study area. Several statistical analyses are conducted using SQL to give insight into the temporal distribution of accidents and to identify the most revealing accidents based on attributes such as the year of the accident, severity, road type, and lighting environment, which enables the authors to investigate the more frequent accidents further. GIS-based statistical and spatial analysis tools are utilized to examine the spatial pattern of accident distribution in the study area for three successive years, and hotspots are identified for clusters of high concentration. The Nearest Neighbor Index (NNI) is used to analyze the pattern of traffic accident distribution based on selected parameters. This is followed by identifying hotspot zones for regions that show clustering, using the optimized hotspot analysis tool. Experimental results showed clustering for all tested groups, and thus hotspots were detected for these accidents in the study area. The importance of this work lies in providing a spatial understanding of accident distribution in the capital city of Amman, which can help road safety policymakers set out efficient strategies for traffic safety management and find optimal solutions for the factors causing such accidents.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_62-Traffic_Accidents_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating Test Cases using Eclipse Environment – A Case Study of Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120461</link>
        <id>10.14569/IJACSA.2021.0120461</id>
        <doi>10.14569/IJACSA.2021.0120461</doi>
        <lastModDate>2021-05-01T16:56:38.8470000+00:00</lastModDate>
        
        <creator>Rosziati Ibrahim</creator>
        
        <creator>Nurul Ain Aswini Abdul Jan</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Jahari Abdul Wahab</creator>
        
        <subject>Software testing; automation testing; test cases; Eclipse Environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The Software Development Life Cycle (SDLC) involves four phases: analysis, design, implementation and testing. Testing is done to ensure the functionalities of the system are correct. There are many approaches to software testing, usually divided into two: manual testing and automatic testing. These days, however, with rapidly advancing technology, performing software testing manually has become hugely laborious, though still doable. Therefore, experts in the software development field are beginning to opt for automatic testing. This paper presents a case study of a mobile application and discusses how test cases can be generated automatically from the application using different automatic tools. Three software testing tools were used to generate test cases automatically. The results of generating test cases automatically with these three tools are then compared with the results of generating test cases using a manual testing technique.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_61-Generating_Test_Cases_using_Eclipse_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fraud Detection in Shipping Industry using K-NN Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120460</link>
        <id>10.14569/IJACSA.2021.0120460</id>
        <doi>10.14569/IJACSA.2021.0120460</doi>
        <lastModDate>2021-05-01T16:56:38.8130000+00:00</lastModDate>
        
        <creator>Ganesan Subramaniam</creator>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <subject>Fraud detection; shipping industry; k-nn algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The shipment industry is going through tremendous growth in volume thanks to technological innovation in e-commerce and global trade liberalization. Volume growth also means a rise in fraud cases involving smuggling and false declaration of shipments. Shipping companies and customs mostly rely on routine random inspection, so fraud is often found only by chance. As volume increases dramatically, it is no longer sustainable or effective for shipment companies and customs to pursue traditional fraud detection strategies. Related work in this area has shown that intelligent data-driven fraud detection is far more effective than routine inspections. However, the effectiveness of data-driven detection often depends on the availability of data and on the various mechanisms fraudsters use to commit shipment-related fraud. In this paper, we therefore review and identify the most optimized approaches and algorithms to detect fraud effectively within the shipping industry. We also identify factors that influence fraud activity, review existing fraud detection models, develop a detection framework and implement the framework using the RapidMiner tool.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_60-Fraud_Detection_in_Shipping_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Design of White Sugar Industrial Supply Chain System based on Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120459</link>
        <id>10.14569/IJACSA.2021.0120459</id>
        <doi>10.14569/IJACSA.2021.0120459</doi>
        <lastModDate>2021-05-01T16:56:38.7970000+00:00</lastModDate>
        
        <creator>Ratna Ekawati</creator>
        
        <creator>Yandra Arkeman</creator>
        
        <creator>Suprihatin</creator>
        
        <creator>Titi Candra Sunarti</creator>
        
        <subject>Blockchain technology; supply chain; white crystal sugar</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The white crystal sugar agro-industry is a dynamic industry characterized by sustained relationships among actors ranging from farmers to consumers. An inefficient supply chain system affects the flow of products, information, and finance because many influential actors are involved, which complicates the tracking of process and product flows and creates problems in business processes. The main objective of this research is to propose the design of an integrated white crystal sugar agro-industrial supply chain system based on blockchain technology, so as to increase competitiveness in realizing food security and resilience, and to trace the mismatches that occur along the supply chain from upstream to downstream. The variables identified in the supply chain flow include quality, quantity, and price, together with the consistency of transaction data from farmers, sugar factories, warehouses, distribution, and retailers through to the final consumer. The aim is for consumers to confidently consume trusted local sugar of the best safety and quality, while ensuring transparency of information among actors. Previous traditional methods, which were still centralized, would be transformed into decentralized information management to create trust among stakeholders. With a blockchain-based traceability architecture, it is hoped that the proposed design can be implemented in the white crystal sugar agro-industry.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_59-Proposed_Design_of_White_Sugar_Industrial_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimization Approach for Multiple Sequence Alignment using Divide-Conquer and Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120458</link>
        <id>10.14569/IJACSA.2021.0120458</id>
        <doi>10.14569/IJACSA.2021.0120458</doi>
        <lastModDate>2021-05-01T16:56:38.7670000+00:00</lastModDate>
        
        <creator>Arunima Mishra</creator>
        
        <creator>Sudhir Singh Soam</creator>
        
        <creator>Bipin Kumar Tripathi</creator>
        
        <subject>Multiple sequence alignment; divide-and-conquer; genetic algorithm; optimization method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Multiple Sequence Alignment (MSA) is a very effective tool in bioinformatics. It is used for predicting the structure and function of proteins, locating DNA regulatory elements such as binding sites, and evolutionary analysis. This paper proposes an optimization method for improving multiple sequence alignments generated through progressive alignment. The method fuses two problem-solving techniques, divide-and-conquer and genetic algorithms: the initial population of MSAs is generated through progressive alignment; each multiple alignment is then divided vertically into four parts; three genetic operators are applied to each part; and recombination reconstructs the full MSA. To estimate the performance of the method, its results are compared with those of the well-known MSA methods Clustal Ω, MUSCLE, PRANK, and Clustal W. Experimental results showed an 11-26% increase in the sum-of-pairs score (SP score) for the proposed method compared with the above-mentioned methods. The SP score is the cumulative score of all possible pairs of alignments within the MSA.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_58-An_Optimization_Approach_for_Multiple_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Symptoms-Based Fuzzy-Logic Approach for COVID-19 Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120457</link>
        <id>10.14569/IJACSA.2021.0120457</id>
        <doi>10.14569/IJACSA.2021.0120457</doi>
        <lastModDate>2021-05-01T16:56:38.7500000+00:00</lastModDate>
        
        <creator>Maad Shatnawi</creator>
        
        <creator>Anas Shatnawi</creator>
        
        <creator>Zakarea AlShara</creator>
        
        <creator>Ghaith Husari</creator>
        
        <subject>COVID-19; coronavirus diagnosis; fuzzy inference system; fuzzy logic; fuzzy rules; expert systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The coronavirus (COVID-19) pandemic has caused severe adverse effects on human life and the global economy, affecting all communities and individuals due to its rapid spread, the increasing number of affected cases, and severe health issues and deaths worldwide. Since no particular treatment has been acknowledged so far for this disease, prompt detection of COVID-19 is essential to control and halt its chain of transmission. In this paper, we introduce an intelligent fuzzy inference system for the primary diagnosis of COVID-19. The system infers the likelihood of COVID-19 infection based on the symptoms that appear in the patient. The proposed inference system can assist physicians in identifying the disease and help individuals perform self-diagnosis of their own cases.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_57-Symptoms_Based_Fuzzy_Logic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Students’ Spatial Orientation through the use of 3D Graphics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120456</link>
        <id>10.14569/IJACSA.2021.0120456</id>
        <doi>10.14569/IJACSA.2021.0120456</doi>
        <lastModDate>2021-05-01T16:56:38.7200000+00:00</lastModDate>
        
        <creator>Benjam&#237;n Maraza-Quispe</creator>
        
        <subject>Orientation; reasoning; spatial; technology; 3D graphics; education; processes; educational</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The purpose of this research is to determine to what extent the use of 3D graphics in the educational process improves the spatial orientation skills of secondary school students. The research follows an experimental approach; the population consists of 300 secondary education students, from whom 25 students were chosen through simple random sampling. Four sessions of 50 minutes each were developed, in which three-dimensional models were used to determine whether spatial skills develop. A psychometric pre-test and post-test of spatial reasoning were administered to determine how much the spatial skills of the selected sample members developed, based on the measurement of five criteria: construction of three-dimensional objects (intermediate level), construction of three-dimensional objects (advanced level), rotation of objects from references (intermediate level), rotation of objects from references (advanced level), and deconstruction of three-dimensional objects. For the analysis, the scores obtained by the students in both the pre-test and the post-test were processed. The results show that the use of 3D graphics in teaching-learning processes improves spatial orientation skills to a great extent, as evidenced by the increase in total post-test scores compared with the pre-test results. Likewise, the proportion of items answered correctly rose on average from 47.9% to 75.1%, which was corroborated by a Student&#39;s t-test yielding a P value of less than 0.05, demonstrating the reliability of the research and a significant improvement in the spatial orientation skills of students through the use of 3D graphics technology.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_56-The_Development_of_Students_Spatial_Orientation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On State-of-the-art of POS Tagger, ‘Sandhi’ Splitter, ‘Alankaar’ Finder and ‘Samaas’ Finder for Indo-Aryan and Dravidian Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120455</link>
        <id>10.14569/IJACSA.2021.0120455</id>
        <doi>10.14569/IJACSA.2021.0120455</doi>
        <lastModDate>2021-05-01T16:56:38.7200000+00:00</lastModDate>
        
        <creator>Hema Gaikwad</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>‘Alankaar’; ‘samaas’; ‘sandhi’; parts of speech tagger (POST)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Computational Linguistics refers to the development of computer systems that deal with human languages. In this paper, different computational linguistic techniques are considered: the Parts of Speech (POS) tagger, ‘Sandhi’ Splitter, ‘Alankaar’ Finder, and ‘Samaas’ Finder. After a thorough literature review, it was found that fifteen techniques have been used for POS tagging, nine techniques for ‘Sandhi’ splitting, only one work exists for ‘Alankaar’ finding, and no techniques at all are available for ‘Samaas’ finding for the Indo-Aryan and Dravidian languages. The analysis shows that the Rule Based Approach (RBA) and Hidden Markov Model (HMM) are frequently used for POS tagging, RBA is most frequently used for ‘Sandhi’ splitting, general Human Intelligence (HI) is used for ‘Alankaar’ finding, and no ‘Samaas’ finder technique is available for any Indian language.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_55-On_State_of_the_Art_of_POS_Tagger.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study and Analysis for the Choice of Optical Fiber in the Implementation of High-Capacity Backbones in Data Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120454</link>
        <id>10.14569/IJACSA.2021.0120454</id>
        <doi>10.14569/IJACSA.2021.0120454</doi>
        <lastModDate>2021-05-01T16:56:38.6900000+00:00</lastModDate>
        
        <creator>Wilmer Vergaray-Mendez</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Alexi Delgado</creator>
        
        <subject>Mechanical loads; electric fields; backbones; optical fiber; data transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Fiber optic implementation projects for backbones have become a necessity today, yet many cases of failure due to cable stress breaks have been reported. It is therefore necessary to carry out a study and analysis of the zone and area prior to implementation. In this research work, through a method based on theory and analysis, the geographical and climatological conditions of the area where the optical fiber will be installed in the Lima region, Peru, are evaluated, together with a study of the mechanical loads and electric fields associated with installing fiber optics on existing electrical network lines. The results of this study showed that regional backbone projects in the city of Lima, Peru, should use the fiber type recommended by the International Telecommunication Union, ITU-T G.652.D, with All-Dielectric Self-Supporting (ADSS) cable. The studies and results obtained in this research may also help the various companies in the sector, in future implementations of high-capacity fiber optic backbones for data transmission, to make the best decision on the type of cable and its recommended characteristics for the region.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_54-Study_and_Analysis_for_the_Choice_of_Optical_Fiber.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an IoT Platform for the Elderly Health Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120453</link>
        <id>10.14569/IJACSA.2021.0120453</id>
        <doi>10.14569/IJACSA.2021.0120453</doi>
        <lastModDate>2021-05-01T16:56:38.6730000+00:00</lastModDate>
        
        <creator>Medhat Awadalla</creator>
        
        <creator>Firdous Kausar</creator>
        
        <creator>Razzaqul Ahshan</creator>
        
        <subject>Internet of things; healthcare; sensor network; real-time monitoring; wearable sensors; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The health care of elderly people calls for services that utilize recent technologies and devices. Nowadays, loneliness and psychological depression are typical problems elderly people face because they live alone or abandoned, or have reduced communication with their children and relatives. This paper presents the development of an integrated platform using the Internet of Things to manage and provide extensive services for elderly people to address these issues. The proposed platform relies on wearable sensor devices to collect real-time data and store it in a cloud server via a developed smartphone application. The cloud server is accessed to retrieve the stored data using the OAuth protocol. A web-based, database-driven application is developed that provides helpful information about elderly people to authorized persons. Doctors can remotely monitor the health condition of their elderly patients in real time, and the system generates an alarm and sends notifications to caregivers and doctors in case of emergency. The conducted experiments and the achieved results show that the developed platform offers remarkable accessibility, security, efficiency, and cost-effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_53-Developing_an_IoT_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speeding up Natural Language Text Search using Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120452</link>
        <id>10.14569/IJACSA.2021.0120452</id>
        <doi>10.14569/IJACSA.2021.0120452</doi>
        <lastModDate>2021-05-01T16:56:38.6570000+00:00</lastModDate>
        
        <creator>Majed AbuSafiya</creator>
        
        <subject>Text compression; text search; Unicode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Text search is a well-known problem in computer science in which the valid shifts of a pattern P in a text string T are found. This paper shows how to speed up text search by searching for P in a compressed version of T. A fast compression algorithm was designed for this purpose. The algorithm is based on the assumption that T is restricted to the letters of a single natural language. Relying on this assumption, a letter in T or P is encoded into a single byte instead of two-byte Unicode, which shortens the string on which a text search algorithm works. The main disadvantage of this approach is the restriction of the alphabet of T to a single natural language; however, a wide range of text documents comply with this assumption. Another issue is the overhead required to compress P and T, but the proposed compression algorithm was found to be fast enough that its run-time is paid for while still saving text search time. Different approaches to storing compressed T are also explored. The conducted experimental study showed that this approach does indeed reduce text search time.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_52-Speeding_up_Natural_Language_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Private LTE Network Service Management Model, based on Agile Methodologies, for Big Mining Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120451</link>
        <id>10.14569/IJACSA.2021.0120451</id>
        <doi>10.14569/IJACSA.2021.0120451</doi>
        <lastModDate>2021-05-01T16:56:38.6430000+00:00</lastModDate>
        
        <creator>Jos&#233; Valdivia-Bedregal</creator>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Elisa Casta&#241;eda-Huaman</creator>
        
        <subject>IT service management; agile methodology; ITIL; expert judgment; LTE network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Information technology (IT) services must generate added value for any business, whether by enhancing its processes, automating its activities, or managing its resources. Using Long Term Evolution (LTE) networks, IoT solutions and devices can be deployed to prevent safety incidents and to monitor online the performance and maintenance indicators produced by field equipment. In this context, implementing a service management model for a private LTE network becomes a basic need for any company. The proposed service management model must be flexible enough to accommodate change in order to meet high productivity demands such as those a big mining company requires. The proposed model is based on the Information Technology Infrastructure Library (ITIL) as the best-known, most widely disseminated, and proven framework; additionally, it uses well-known agile methodologies such as Scrum and DevOps. Along with the deployment of each of the proposed stages, a visual scheme is generated which, upon conclusion at the final stage, allows the interactions of the model to be visualized in their entirety. In addition to describing the expected results, the model has been validated by a panel of experts on the developed topic. In conclusion, the proposed model takes a holistic approach, that is, a comprehensive approach that addresses the various aspects of service management supported by a private LTE platform.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_51-Private_LTE_Network_Service_Management_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Traffic Light Controller using Fuzzy Logic and Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120450</link>
        <id>10.14569/IJACSA.2021.0120450</id>
        <doi>10.14569/IJACSA.2021.0120450</doi>
        <lastModDate>2021-05-01T16:56:38.6270000+00:00</lastModDate>
        
        <creator>Abdelkader Chabchoub</creator>
        
        <creator>Ali Hamouda</creator>
        
        <creator>Saleh Al-Ahmadi</creator>
        
        <creator>Adnen Cherif</creator>
        
        <subject>Traffic congestion; smart city; traffic light; fuzzy logic; image processing; objects detections</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Today&#39;s traffic congestion in big cities is a serious problem: it causes considerable environmental pollution and transportation difficulty, which makes daily life harder for people in addition to causing material losses. In this work, a smart traffic light controller was designed using fuzzy logic and image processing with MATLAB to control movement on two roads, aided by a camera and automatic sensors. The fuzzy controller was designed with two inputs and six outputs: the inputs are the number of cars on each road, and the outputs are the assumed red, yellow, and green signal times according to vehicle congestion. The simulation behaves as the proposed control unit intends, handling the lights simultaneously according to the number of cars in each branch of the road, which makes full use of the available time to operate the stoplights. Our system can be employed to solve the problem of traffic congestion in big cities or smart cities.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_50-Intelligent_Traffic_Light_Controller_Using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence Model based on Grey Clustering for Integral Analysis of Industrial Hygiene Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120449</link>
        <id>10.14569/IJACSA.2021.0120449</id>
        <doi>10.14569/IJACSA.2021.0120449</doi>
        <lastModDate>2021-05-01T16:56:38.5970000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Diana Aliaga</creator>
        
        <creator>Cristian Carlos</creator>
        
        <creator>Lisseth Vergaray</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <subject>Artificial intelligence; grey clustering; industrial hygiene; lighting; noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The article proposes a model with an artificial intelligence approach that integrates risks through the grey clustering method, applying center-point triangular whitenization weight functions (CTWF). The data used are standard data (the minimum standards that the four workshops of a company in the industrial sector must meet) and sampled data (real data obtained in the field) to test the grey classes. In this study, the different types of risk (lighting, noise, and hand-arm vibration) were globally evaluated and analyzed in the four workshops of a heavy machinery maintenance services company in the industrial sector (welding shop, hydraulic shop, machine shop 1, and machine shop 2), located in Lima, Peru. According to the results on the level of hygienic quality in each workshop, the welding workshop is at a very poor quality level, while the others are at good and very good levels; across the four workshops, it was determined that the noise level is unacceptable, as it does not meet the minimum required standards. Therefore, control measures were proposed for the workshops where the risk level is bad or very bad. This study will benefit companies in the industrial sector that need to analyze the level of hygienic quality in their work areas with a global approach in order to apply control measures for prevention and for protecting the health and physical integrity of workers.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_49-Artificial_Intelligence_Model_based_on_Grey_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Experiment for Outdoor GPS Localization Enhancement using Kalman Filter with Multiantenna Consumer-Grade Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120448</link>
        <id>10.14569/IJACSA.2021.0120448</id>
        <doi>10.14569/IJACSA.2021.0120448</doi>
        <lastModDate>2021-05-01T16:56:38.5800000+00:00</lastModDate>
        
        <creator>Phudinan Singkhamfu</creator>
        
        <creator>Parinya Suwansrikham</creator>
        
        <subject>Global positioning systems accuracy; Kalman filter; multi global positioning systems; global positioning systems pointer; global positioning systems enhancement; filtering algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Consumer-grade global positioning system (GPS) receivers are widely used in many domains. Their obvious issues are low accuracy and fluctuating readings, which make them difficult to use in applications that require a more precise location. In this study, the authors deploy various methods to reduce GPS data fluctuation and present field test results. Two main types of device worked together to collect GPS data: a microcontroller for algorithm processing and presenting data, and GPS receivers for receiving data from satellites. We combine three GPS modules receiving signals in a single device and compare the calculated data with Kalman filtering methods in several cases, including moving and static devices. Applying the standard Kalman filter to multiple GPS modules improved the consistency of cheap GPS equipment. The experimental algorithm showed a significant improvement in overcoming the data fluctuation problem. This study&#39;s contribution will enable the creation of a cheap GPS locator device for various applications that require more accuracy than a standard consumer-grade receiver.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_48-An_Experiment_for_Outdoor_GPS_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Utilization Prediction in Cloud Computing using Hybrid Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120447</link>
        <id>10.14569/IJACSA.2021.0120447</id>
        <doi>10.14569/IJACSA.2021.0120447</doi>
        <lastModDate>2021-05-01T16:56:38.5630000+00:00</lastModDate>
        
        <creator>Anupama K C</creator>
        
        <creator>Shivakumar B R</creator>
        
        <creator>Nagaraja R</creator>
        
        <subject>Workload prediction; SARIMA; LSTM; ARIMA; cloud service provider</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In a cloud environment, maximum resource utilization is possible with good resource management strategies. Workload prediction plays a vital role in estimating the actual resources required for successful execution of an application on the cloud. Most existing works concentrated on predicting workloads that either showed clear seasonality/trend or had irregular patterns. This paper presents a new perspective for forecasting both seasonal and non-seasonal workloads. To accomplish this, a hybrid prediction model combining statistical and machine learning techniques is proposed. If seasonality exists in the workload pattern, the Seasonal Auto Regressive Integrated Moving Average (SARIMA) model is applied for prediction. For non-seasonal workloads, either a Long Short-Term Memory network (LSTM) or an AutoRegressive Integrated Moving Average (ARIMA) model is used, based on the results of a normality test. The model forecasts the actual resources required for diverse time intervals of daily, hourly, and per-minute utilization. The experimental results confirm that the LSTM model outperformed ARIMA in prediction accuracy for irregular workload patterns, while the SARIMA model accurately forecasts resource usage for forthcoming days. This work helps the cloud service provider (CSP) analyze the workload and predict accordingly, avoiding over- or under-provisioning of cloud resources.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_47-Resource_Utilization_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Load Variation Consideration for Optimal Distributed Generation Placement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120446</link>
        <id>10.14569/IJACSA.2021.0120446</id>
        <doi>10.14569/IJACSA.2021.0120446</doi>
        <lastModDate>2021-05-01T16:56:38.5470000+00:00</lastModDate>
        
        <creator>Aida Fazliana Abdul Kadir</creator>
        
        <creator>Mohamad Fani Sulaima</creator>
        
        <creator>Noor Ropidah Bujal</creator>
        
        <creator>Mohd Nazri Bin Abd Halim</creator>
        
        <creator>Elia Erwani Hassan</creator>
        
        <subject>Distribution generation; optimization techniques; IGSA; losses minimization; optimal placement and sizing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Distributed generation (DG) devices offer useful benefits for power loss minimization, grid reinforcement, bus voltage improvement, and the efficiency of a distribution system. Usually, the DG placement problem considers a predefined number and size of DGs, which may result in many small DGs; however, a better solution could be reached with a minimum number of DGs, reducing installation and maintenance costs. Furthermore, increases or decreases in load may push the voltage profile below or above the tolerable limit on distribution feeders. Thus, this paper analyzes the impact of load level variation with DG connected to the power system, using the improved gravitational search algorithm (IGSA) as the optimization technique. The multi-objective function targets reducing the total power loss, the average total voltage harmonic distortion, and the voltage deviation in the distribution system. The study considers six different load levels, expressed as percentages of load. The proposed technique is compared with particle swarm optimization (PSO) and the gravitational search algorithm (GSA), and its efficiency is tested on the 33-bus radial distribution system with six case studies.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_46-Analysis_of_Load_Variation_Consideration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Allocation of DG and D-STATCOM in a Distribution System using Evolutionary based Bat Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120445</link>
        <id>10.14569/IJACSA.2021.0120445</id>
        <doi>10.14569/IJACSA.2021.0120445</doi>
        <lastModDate>2021-05-01T16:56:38.5170000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Bat algorithm; distributed generation; voltage stability index; loss sensitivity; optimal location and size; radial distribution system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In this work, a methodology to find the optimal allocation (i.e., sizing and location) of Distributed Generators (DGs) and Distribution Static Compensators (D-STATCOMs) in a radial distribution system (RDS) is proposed. Here, the voltage stability index (VSI) is utilized to find the optimal location for the D-STATCOM, and the loss sensitivity factor (LSF) method is utilized to find the optimal location for distributed generation. The proposed work is formulated as a non-linear optimization problem and solved using a meta-heuristic/evolutionary-based algorithm. The evolutionary-based Bat algorithm is used to find the optimal sizes of D-STATCOMs and DGs in RDSs. To check the feasibility and validity of the proposed optimal allocation approach, two standard IEEE 34- and 85-bus RDSs are considered in this paper. The simulation results show a reduction in power losses and an enhancement in bus voltages in the RDSs.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_45-Optimal_Location_and_Sizing_of_Distributed_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Vehicle Routing Problem for the Collection of Medical Samples at Home: Case Study of Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120443</link>
        <id>10.14569/IJACSA.2021.0120443</id>
        <doi>10.14569/IJACSA.2021.0120443</doi>
        <lastModDate>2021-05-01T16:56:38.5000000+00:00</lastModDate>
        
        <creator>Ettazi Haitam</creator>
        
        <creator>Rafalia Najat</creator>
        
        <creator>Jaafar Abouchabaka</creator>
        
        <subject>Optimization; metaheuristics; vehicle routing problem; allocation and planning; home care</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This paper aims to solve the problem of sampling and collecting blood and/or urine tubes from sick people at home via medical staff (nurse/caregiver) and transporting them to the laboratory in an optimal way. To ensure good management, several constraints must be taken into account, namely: staff schedules, patient preferences, the maximum delay time for a blood sample, etc. This problem is considered a vehicle routing problem with time windows, preferences and priority according to urgent cases. We first propose a mathematical formulation of the problem using mixed integer linear programming (MILP) as well as various metaheuristics. We also applied this method to a real instance of a laboratory in Morocco (T&#233;mara), named Laboratory BioGuich, which yielded optimal results.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_43-A_Vehicle_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computer-Assisted Collaborative Reading Model to Improve Reading Fluency of EFL Learners in Continuous Learning Programs in Saudi Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120444</link>
        <id>10.14569/IJACSA.2021.0120444</id>
        <doi>10.14569/IJACSA.2021.0120444</doi>
        <lastModDate>2021-05-01T16:56:38.5000000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Mohamed Saad Mahmoud Hussein</creator>
        
        <creator>Fahd Shehail Alalwi</creator>
        
        <subject>Collaborative reading model; computer-assisted language learning (CALL); computer-based instruction; EFL learners; fluency; Saudi Universities; struggling readers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Reading is not synonymous with comprehension; rather, it is a prerequisite that does not, by itself, guarantee comprehension. That is, being efficient in decoding letters, syllables and whole words and recognizing vocabulary does not ensure natural or automatic comprehension. Fluency seems to be the bridge between mastery of the mechanics of reading and the dynamics of comprehension. Abundant research exists that explores how to improve the reading skills of EFL learners at Saudi universities. However, little, if any, of this research has sought to discern the potential effects of educational technology on the fluency of struggling readers in continuous learning programs. To fill this gap, this study probes the multiple dimensions of the problem and suggests ways to solve it. For this purpose, 24 EFL lecturers from three Saudi universities were selected and interviewed. A computer-assisted collaborative reading model was put forth to be applied in the three universities. Students were diagnosed by their instructors as having a relatively sufficient grasp of decoding skills at the levels of orthographic knowledge and mono- and polysyllabic words, but exhibited slow and inaccurate reading, symptoms representative of a fluency problem. The lecturers explained that the disappointment resulting from learners’ inability to reach comprehension despite mastering decoding skills influences their attitudes towards reading and language learning, bringing about reading apathy and low self-esteem. The proposed model is designed to enhance reading fluency, which is perceived as the underlying problem that makes the reader struggle. It is delivered partly individually and partly collaboratively online. Collaboration is also operated via face-to-face instruction, especially in teaching the reading strategy. In doing so, the procedures followed are in line with blended learning. The findings clearly indicate that the proposed model successfully improved reading fluency by accelerating the different reading subskills for decoding and created positive attitudes toward reading. The results highlight the importance of establishing a level of automaticity that gives rise to the higher skills of comprehension.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_44-A_Computer_Assisted_Collaborative_Reading_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Deep Learning on Localizing and Recognizing Handwritten Text in Lecture Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120442</link>
        <id>10.14569/IJACSA.2021.0120442</id>
        <doi>10.14569/IJACSA.2021.0120442</doi>
        <lastModDate>2021-05-01T16:56:38.4700000+00:00</lastModDate>
        
        <creator>Lakshmi Haritha Medida</creator>
        
        <creator>Kasarapu Ramani</creator>
        
        <subject>Lecture video; text localization; segmentation; word recognition; deep convolutional neural network (DCNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Nowadays, video recording technologies have become more and more powerful and easier to use. Therefore, numerous universities are recording and publishing their lectures online in order to make them reachable for learners or students. These lecture videos capture handwritten text written either on paper, on a blackboard, or on a tablet using a stylus. On the other hand, this mechanism of recording lecture videos generates a huge quantity of multimedia data very quickly. Thus, handwritten text recognition on lecture video portals has become an incredibly significant and demanding task. This paper therefore develops a novel handwritten text detection and recognition approach on a video lecture dataset following four major phases, viz. (a) text localization, (b) segmentation, (c) pre-processing and (d) recognition. Text localization in the lecture video frames is the initial phase; here, the arbitrarily oriented text in video frames is localized using the Modified Region Growing (MRG) algorithm. Then, the localized words are subjected to segmentation via K-means clustering, in which the words from the detected text regions are segmented out. Subsequently, the segmented words are pre-processed to avoid blurriness artifacts. Finally, the pre-processed words are recognized using a Deep Convolutional Neural Network (DCNN). The performance of the proposed model is analyzed in terms of measures such as accuracy, precision, sensitivity and specificity to exhibit the supremacy of the text detection and recognition in lecture videos. Experimental results reveal that at a Learning Percentage of 70, the presented work achieves the highest accuracy of 89.3% for 500 frames.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_42-Impact_of_Deep_Learning_on_Localizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Detailed Study on the Choice of Hyperparameters for Transfer Learning in Covid-19 Image Datasets using Bayesian Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120441</link>
        <id>10.14569/IJACSA.2021.0120441</id>
        <doi>10.14569/IJACSA.2021.0120441</doi>
        <lastModDate>2021-05-01T16:56:38.4530000+00:00</lastModDate>
        
        <creator>Miguel Miranda</creator>
        
        <creator>Kid Valeriano</creator>
        
        <creator>Jos&#233; Sulla-Torres</creator>
        
        <subject>Transfer Learning; COVID-19; X-ray image; deep learning; Bayes optimization; machine learning; hyperparameter optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>For many years, the area of health care has evolved, mainly using medical images to detect and evaluate diseases. Nowadays, the world is going through a pandemic due to COVID-19, causing a severe effect on the health system and the global economy. Researchers, both in health and in other areas, are focused on improving and providing various alternatives for rapid and more effective detection of this disease. The main objective of this study is to automatically explore as many configurations as possible in order to recommend a smaller starting hyperparameter space, because manual selection of these hyperparameters can miss configurations that generate more efficient models. For this, we present the MKCovid-19 workflow, which uses chest X-ray images of patients with COVID-19. We use knowledge transfer based on convolutional neural networks and Bayesian optimization. A detailed study was conducted with different amounts of training data. This automatic selection of hyperparameters allowed us to find a robust model with an accuracy of 98% on test data.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_41-A_Detailed_Study_on_the_Choice_of_Hyperparameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Body Weight Estimation using 2D Body Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120440</link>
        <id>10.14569/IJACSA.2021.0120440</id>
        <doi>10.14569/IJACSA.2021.0120440</doi>
        <lastModDate>2021-05-01T16:56:38.4230000+00:00</lastModDate>
        
        <creator>Rohan Soneja</creator>
        
        <creator>Prashanth S</creator>
        
        <creator>R Aarthi</creator>
        
        <subject>Body weight estimation; deep learning; xgboost regressor; anthropometric features; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Two-dimensional images of a person implicitly contain several useful pieces of biometric information such as gender, iris colour, and weight. Among them, body weight is a useful metric for a number of use cases such as forensics, fitness and health analysis, and dynamic airport luggage allowance. Most current solutions for body weight estimation from images make use of additional apparatus like depth sensors and thermal cameras, along with predefined features such as gender and height, which generally makes them more computationally intensive. Motivated by the need to provide a time- and cost-efficient solution, a novel computer-vision-based method for body weight estimation using only 2D images of people is proposed. Considering the anthropometric features from the two most common types of images, facial and full body, facial landmark measurements and body joint measurements are used in deep learning and XGBoost regression models to estimate the person’s body weight. The results obtained, though comparable to previous approaches, are produced much faster due to the reduced complexity of the proposed models, with facial models performing better than full-body models.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_40-Body_Weight_Estimation_using_2D_Body_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Assessment of Context-aware Online Learning for Task Offloading in Vehicular Edge Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120439</link>
        <id>10.14569/IJACSA.2021.0120439</id>
        <doi>10.14569/IJACSA.2021.0120439</doi>
        <lastModDate>2021-05-01T16:56:38.4070000+00:00</lastModDate>
        
        <creator>Mutaz A. B. Al-Tarawneh</creator>
        
        <creator>Saif E. Alnawayseh</creator>
        
        <subject>Vehicular edge computing; task offloading; multi-armed bandits; contextual bandits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Vehicular Edge Computing (VEC) systems have recently become an essential computing infrastructure to support a plethora of applications entailed by smart and connected vehicles. These systems integrate the computing resources of edge and cloud servers and utilize them to execute computational tasks offloaded from various vehicular applications. However, the highly fluctuating status of VEC resources, besides the varying characteristics and requirements of different application types, introduces extra challenges to task offloading. Hence, this paper presents, implements and evaluates various task offloading algorithms based on Multi-Armed Bandit (MAB) theory for VEC systems with predefined application types. These algorithms seek to make use of available contextual information to better steer task offloading. This information includes application type, application characteristics, network status and server utilization. The proposed algorithms are based on having either a single MAB learner with application-dependent reward assignment, multiple application-dependent MAB learners, or dedicated contextual bandits implemented as an array of incremental learning models. They have been implemented and extensively evaluated using the EdgeCloudSim simulation tool. Their performance has been assessed based on task failure rate, service time and Quality of Experience (QoE) and compared to that of recently reported algorithms. Simulation results demonstrate that the proposed contextual bandit-based algorithm outperforms its counterparts in terms of failure rate and QoE while having comparable service time values. It has achieved up to 73.4% and 21.7% average improvements in failure rate and QoE, respectively, among all application types. In addition, it efficiently utilizes the available contextual information to make appropriate offloading decisions for tasks originating from different application types, achieving more balanced utilization of the available VEC resources. Ultimately, employing incremental learning to implement the proposed contextual bandit algorithm has shown a profound potential to cope with dynamic changes in the simulated VEC systems.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_39-Performance_Assessment_of_Context_aware_Online_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ParaDist-HMM: A Parallel Distributed Implementation of Hidden Markov Model for Big Data Analytics using Spark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120438</link>
        <id>10.14569/IJACSA.2021.0120438</id>
        <doi>10.14569/IJACSA.2021.0120438</doi>
        <lastModDate>2021-05-01T16:56:38.3930000+00:00</lastModDate>
        
        <creator>Imad Sassi</creator>
        
        <creator>Samir Anter</creator>
        
        <creator>Abdelkrim Bekkhoucha</creator>
        
        <subject>Big data; machine learning; Hidden Markov model; forward; backward; baum-welch; parallel distributed computing; spark; cloud computing; ParaDist-HMM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Big Data is an extremely massive amount of heterogeneous and multisource data which often requires fast processing and real-time analysis. Solving big data analytics problems needs powerful platforms to handle this enormous mass of data and efficient machine learning algorithms to exploit big data’s full potential. Hidden Markov models are statistical models, rich and widely used in various fields, especially for modeling and analyzing time-varying data sequences. They owe their success to the existence of many efficient and reliable algorithms. In this paper, we present ParaDist-HMM, a parallel distributed implementation of the hidden Markov model for modeling and solving big data analytics problems. We describe the development and implementation of the improved algorithms, and we propose a Spark-based approach consisting of a parallel distributed big data architecture in a cloud computing environment to put the proposed algorithms into practice. We evaluated the model on synthetic and real financial data in terms of running time, speedup and prediction quality, the latter measured using accuracy and root mean square error. Experimental results demonstrate that the ParaDist-HMM algorithms outperform other implementations of hidden Markov models in terms of processing speed and accuracy, and therefore in efficiency and effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_38-ParaDist_HMM_A_Parallel_Distributed_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-layer Machine Learning-based Intrusion Detection System for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120437</link>
        <id>10.14569/IJACSA.2021.0120437</id>
        <doi>10.14569/IJACSA.2021.0120437</doi>
        <lastModDate>2021-05-01T16:56:38.3770000+00:00</lastModDate>
        
        <creator>Nada M. Alruhaily</creator>
        
        <creator>Dina M. Ibrahim</creator>
        
        <subject>Intrusion detection; wireless sensor networks; machine learning; defence in depth strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>With the increased reliance on the internet and the shift of most businesses to providing remote services, the burden of protecting the network and detecting any attack quickly becomes more significant, as the attack surface and cyberattacks increase in return. Most current Wireless Sensor Network (WSN) intrusion detection models that use machine learning methods to identify previously unseen attacks utilize one layer of detection, meaning that a costly algorithm must be run before detecting any suspicious activity. In this paper, we propose a multi-layer intrusion detection framework for WSNs in which we adopt a defense-in-depth security strategy, where two layers of detection are deployed. The first layer is located on the network edge where sensors are distributed; it uses a Naive Bayes classifier for real-time decision making on the inspected packets. The second layer is located on the cloud and utilizes a Random Forest multi-class classifier for an in-depth analysis of the inspected packets. The results demonstrate that our proposed multi-layer detection model gives relatively high TPR, TNR, FPR, and FNR performance, additionally achieving high Precision rates of 100%, 90.4%, 99.5%, 97%, and 99.9% for the Normal, Flooding, Scheduling, Grayhole, and Blackhole attacks, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_37-A_Multi_layer_Machine_Learning_based_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PlexNet: An Ensemble of Deep Neural Networks for Biometric Template Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120436</link>
        <id>10.14569/IJACSA.2021.0120436</id>
        <doi>10.14569/IJACSA.2021.0120436</doi>
        <lastModDate>2021-05-01T16:56:38.3470000+00:00</lastModDate>
        
        <creator>Ashutosh Singh</creator>
        
        <creator>Ranjeet Srivastva</creator>
        
        <creator>Yogendra Narain Singh</creator>
        
        <subject>Biometrics; template protection; deep learning; transfer learning; ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The security of biometric systems, especially protecting the templates stored in the gallery database, is a primary concern for researchers. This paper presents a novel framework using an ensemble of deep neural networks to protect biometric features stored as a template. The proposed ensemble chooses two state-of-the-art CNN architectures, i.e., ResNet and DenseNet, as base models for training. During training, the pre-trained weights enable the learning algorithm to converge faster. The weights obtained through the base models are further used to train other compatible models, generating fine-tuned models. Thus, four fine-tuned models are prepared, and their learned representations are fused to form an ensemble named PlexNet. To analyze the security of biometric templates, the rigorous learning of the ensemble is collected using a smart box, i.e., an application programming interface (API). The API is robust and correctly identifies the query image without referring to a template database. Thus, the proposed framework excludes the templates from the database and performs predictions based on learning that is irrevocable.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_36-PlexNet_An_Ensemble_of_Deep_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Engineering Algorithms for Traffic Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120435</link>
        <id>10.14569/IJACSA.2021.0120435</id>
        <doi>10.14569/IJACSA.2021.0120435</doi>
        <lastModDate>2021-05-01T16:56:38.3300000+00:00</lastModDate>
        
        <creator>Akibu Mahmoud Abdullah</creator>
        
        <creator>Raja Sher Afgun Usmani</creator>
        
        <creator>Thulasyammal Ramiah Pillai</creator>
        
        <creator>Ibrahim Abaker Targio Hashem</creator>
        
        <creator>Mohsen Marjani</creator>
        
        <subject>Feature engineering algorithm; queuing theory; Road Traffic Volume Malaysia (RTVM); machine learning algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>As a result of the increase in the human population globally, traffic congestion in urban areas is becoming worse, which leads to time loss, fuel waste, and, most importantly, the emission of pollutants. Therefore, there is a need to monitor and estimate traffic density. The emergence of automatic traffic management systems allows us to record and monitor motor vehicles’ movement in a road segment. One of the challenges researchers face is when the historical traffic data is given as an annual average that contains incomplete data. The annual average daily traffic (AADT) is the average number of traffic volumes at a roadway segment in a specific location over a year. An example of AADT data is that given by Road Traffic Volume Malaysia (RTVM), and this data is incomplete. The RTVM provides an average of daily traffic data and one peak hour. The recorded traffic data covers sixteen hours, and the only hourly data given is one hour, from 8.00 am to 9.00 am. Hence, there is a need to estimate hourly traffic volume for the remaining hours. Feature engineering can be used to overcome the issue of incomplete data. This paper proposes feature engineering algorithms that can efficiently estimate hourly traffic volume and generate features from the existing dataset for all traffic census stations in Malaysia using queuing theory. The proposed feature engineering algorithms were able to estimate the hourly traffic volume and generate features for three years at the Jalan Kepong census station, Kuala Lumpur, Malaysia. The algorithms were evaluated using Random Forest and Decision Tree models. The result shows that our feature engineering algorithms improve machine learning algorithms’ performance, except for the prediction of N−2 using Random Forest, which shows the highest MAE, MSE, and RMSE when traffic data was included for prediction. The algorithm is applied in one of the traffic census stations in Kuala Lumpur, and it can be used for the other stations in Malaysia. Additionally, the algorithm can also be used for any annual average daily traffic data if it includes average hourly data.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_35-Feature_Engineering_Algorithms_for_Traffic_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Birds Identification System using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120434</link>
        <id>10.14569/IJACSA.2021.0120434</id>
        <doi>10.14569/IJACSA.2021.0120434</doi>
        <lastModDate>2021-05-01T16:56:38.3130000+00:00</lastModDate>
        
        <creator>Suleyman A. Al-Showarah</creator>
        
        <creator>Sohyb T. Al-qbailat</creator>
        
        <subject>Birds identification; deep learning convolutional neural networks (CNN); VGG-19; principal component analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Identifying birds is a challenging task for bird watchers due to the similarity of birds’ forms/image backgrounds and watchers’ lack of experience. Thus, an image-based computer system is needed to help birdwatchers identify birds. This study aims at investigating the use of deep learning for bird identification using a convolutional neural network to extract features from images. The investigation was performed on a database containing 4340 images collected by the authors from Jordan. Principal Component Analysis (PCA) was applied on layers 6 and 7, as well as on statistical operations for merging the two layers: average, minimum, maximum and combination of both layers. The datasets were investigated with the following classifiers: Artificial Neural Networks, K-Nearest Neighbor, Random Forest, Na&#239;ve Bayes and Decision Tree. The metrics used for each classifier are accuracy, precision, recall, and F-Measure. The results of the investigation include, but are not limited to, the following: applying PCA to the deep features not only reduces the dimensionality, thereby reducing the training/testing time significantly, but also increases the identification accuracy, particularly when using the Artificial Neural Networks classifier. Among the classifiers, Artificial Neural Networks showed the highest classification accuracy (70.9908), precision (0.718), recall (0.71) and F-Measure (0.708) compared to the other classifiers.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_34-Birds_Identification_System_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contribution to the Improvement of Cryptographic Protection Methods for Medical Images in DICOM Format through a Combination of Encryption Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120433</link>
        <id>10.14569/IJACSA.2021.0120433</id>
        <doi>10.14569/IJACSA.2021.0120433</doi>
        <lastModDate>2021-05-01T16:56:38.2830000+00:00</lastModDate>
        
        <creator>Maka Maka Ebenezer</creator>
        
        <creator>Paun&#233; F&#233;lix</creator>
        
        <creator>Malong Yannick</creator>
        
        <creator>Simo Ntso Pascal Junior</creator>
        
        <creator>Nnem&#233; Nnem&#233; L&#233;andre</creator>
        
        <subject>Medical images; DICOM; advanced encryption standard (AES); GCM; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This paper proposes a method for storing and securing medical images in DICOM format. Other existing methods affect the quality of the image. The solution proposed here is based on the AES-256 algorithm in Galois/Counter Mode (GCM), which already integrates authentication and signature processes to ensure the integrity of the manipulated images. The solution is implemented in the Python programming language under the Django framework, using libraries such as NumPy, pydicom, mysqlclient, and PyCryptodome. Experimental tests yield good average encryption and decryption times, with only a small difference between the mean encryption and decryption times across the tests carried out. Storage space is also saved, since the proposed solution stores the encrypted image directly, and the manipulated image is not altered.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_33-Contribution_to_the_Improvement_of_Cryptographic_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Analysis of Ransomware Attacks using Context-aware AI in IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120432</link>
        <id>10.14569/IJACSA.2021.0120432</id>
        <doi>10.14569/IJACSA.2021.0120432</doi>
        <lastModDate>2021-05-01T16:56:38.2670000+00:00</lastModDate>
        
        <creator>Vytarani Mathane</creator>
        
        <creator>P.V. Lakshmi</creator>
        
        <subject>Ransomware; IoT; context-aware; machine learning; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Ransomware attacks have emerged as a major source of malware intrusion in recent times. While ransomware has so far affected general-purpose, adequately resourced computing systems, there is a visible shift towards low-cost Internet of Things systems, which tend to manage critical endpoints in industrial systems. Many ransomware prediction techniques have been proposed, but there is a need for prediction techniques better suited to constrained, heterogeneous IoT systems. Using attack context information profiles reduces the resources required by resource-constrained IoT systems. This paper presents a context-aware ransomware prediction technique that uses a context ontology for extracting information features (connection requests, software updates, etc.) and artificial intelligence and machine learning algorithms for predicting ransomware. The proposed technique focuses on and relies on early prediction and detection of ransomware penetration attempts on resource-constrained IoT systems. A 60% reduction in the time taken is achieved when using the context-aware dataset over the non-context-aware data.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_32-Predictive_Analysis_of_Ransomware_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Security Model for RDF Files Used in IoT Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120431</link>
        <id>10.14569/IJACSA.2021.0120431</id>
        <doi>10.14569/IJACSA.2021.0120431</doi>
        <lastModDate>2021-05-01T16:56:38.2370000+00:00</lastModDate>
        
        <creator>Mohamed El kholy</creator>
        
        <creator>Abdel baes Mohamed</creator>
        
        <subject>Semantic Web; Internet of Things (IoT); resource description framework (RDF); smart cities; security mechanism; web ontology language (OWL); partial encryption; SPARQL protocol and RDF query language (SPARQL); data encryption standard (DES) component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The open environment of the IoT ecosystem raises several security and privacy issues. However, the huge amount of data produced by IoT devices restricts the use of traditional security methods. Another security challenge for IoT systems is the interoperability between heterogeneous IoT devices. The Semantic Web has risen as a promising technology that provides semantic annotations allowing interoperability between IoT devices. The Semantic Web uses RDF triples to allow semantic data exchange between heterogeneous applications. Hence, RDF files used in IoT systems require a specific security mechanism that accounts for large data sizes as well as rapid data updates. The proposed work introduces a novel security method that provides RDF files with fine-grained partial encryption. The proposed method allows applying security to the sensitive parts of RDF files without affecting the public parts. Encryption metadata is stored in a container attached to each individual sensitive triple, so accessing public data in an RDF file is not affected by the encryption overhead. A motivating scenario for privacy in a smart city is used to evaluate the proposed method. Experimental results showed that the proposed methodology improves the access time of RDF triples from 10.4 msec to 6.2 msec. Moreover, the proposed method facilitates integrating the separated parts of an RDF graph together. The empirical evaluation demonstrated gains in efficiency and flexibility when applying the proposed method to RDF files used in IoT systems, while the non-sensitive triples in the RDF files are not affected by the security overhead.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_31-Efficient_Security_Model_for_RDF_Files.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Tools and Techniques for Sentiment Analysis of Social Networking Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120430</link>
        <id>10.14569/IJACSA.2021.0120430</id>
        <doi>10.14569/IJACSA.2021.0120430</doi>
        <lastModDate>2021-05-01T16:56:38.2200000+00:00</lastModDate>
        
        <creator>Sangeeta Rani</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <creator>Preeti Gulia</creator>
        
        <subject>Social networking sites sentiment analysis; twitter sentiment analysis; opinion mining; ensemble classifier; stack based ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Social media has expanded rapidly over time and generated a huge repository of content. Sentiment analysis of this data has a vast scope in decision support and has attracted many researchers to explore various possibilities for technique enhancement and accuracy improvement. Twitter is one of the social media platforms most widely explored in the area of sentiment analysis. This paper presents a systematic survey of sentiment analysis on social networking sites, with a main focus on Twitter sentiment analysis. The paper explores and identifies the techniques and tools used, in a well-structured approach, to find research gaps and identify the future scope of this area of research. The techniques have evolved over time to improve the efficiency of classification. A total of 55 research papers are included in this survey. The results reflect that Twitter is the most explored social networking site for opinion mining. The Na&#239;ve Bayes and SVM machine learning algorithms are implemented in most studies. As the latest advancements, stack-based ensemble, fuzzy-based, and neural-network-based classifiers have also been implemented to enhance the efficiency of classification. WEKA, R Studio, and Python are the tools most used by research scholars for implementation. The overall evolution of the research goes through various changes in terms of technologies, tools, social media platforms, and targeted data corpora.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_30-Survey_of_Tools_and_Techniques_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Evaluation of User Experience Testing for Retrieval-based Model and Deep Learning Conversational Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120429</link>
        <id>10.14569/IJACSA.2021.0120429</id>
        <doi>10.14569/IJACSA.2021.0120429</doi>
        <lastModDate>2021-05-01T16:56:38.1900000+00:00</lastModDate>
        
        <creator>Pui Huang Leong</creator>
        
        <creator>Ong Sing Goh</creator>
        
        <creator>Yogan Jaya Kumar</creator>
        
        <creator>Yet Huat Sam</creator>
        
        <creator>Cheng Weng Fong</creator>
        
        <subject>Conversational agent; retrieval-based model; deep learning; user experience testing; usability; usefulness; satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The use of a conversational agent to relay information on behalf of individuals has gained worldwide acceptance. The conversational agent in this study was developed using a retrieval-based model and deep learning to enhance the user experience. Nevertheless, the success of the conversational agent could only be determined through evaluation. Thus, testing was performed using a quantitative approach via a questionnaire survey to capture the user experience of the conversational agent in terms of usability, usefulness, and satisfaction. The questionnaire survey was checked with statistical tools for reliability and validity and shown to be sound. The test results indicate a positive experience with the conversational agent, and the outcome of the testing showed promising results that prove the success of this study, with substantial contributions to the field of conversational agents.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_29-The_Evaluation_of_user_Experience_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Anomaly Detection in Images: Insights, Challenges and Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120428</link>
        <id>10.14569/IJACSA.2021.0120428</id>
        <doi>10.14569/IJACSA.2021.0120428</doi>
        <lastModDate>2021-05-01T16:56:38.1730000+00:00</lastModDate>
        
        <creator>Ahad Alloqmani</creator>
        
        <creator>Yoosef B. Abushark</creator>
        
        <creator>Asif Irshad Khan</creator>
        
        <creator>Fawaz Alsolami</creator>
        
        <subject>Anomaly detection; outlier detection; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Deep learning-based anomaly detection in images has recently become a popular research area with numerous applications worldwide. The main aim of anomaly detection (i.e., outlier detection) is to identify data instances that deviate considerably from the majority of data instances. This paper offers a comprehensive analysis of previous works proposed in the area of deep learning-based anomaly detection in images, both in general and in the medical field specifically. Twenty studies were reviewed, and the literature selection methodology was defined in four phases: keyword filter, publication filter, year filter, and abstract filter. In this review, we highlight the differences among the included studies by considering the following factors: methodology, dataset, preprocessing, results, and limitations. In addition, we illustrate the various challenges and potential future directions relevant to anomaly detection in images.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_28-Deep_Learning_based_Anomaly_Detection_in_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Econometric Analysis of Stock Market Performance during COVID-19 Pandemic: A Case Study of Uzbekistan Stock Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120427</link>
        <id>10.14569/IJACSA.2021.0120427</id>
        <doi>10.14569/IJACSA.2021.0120427</doi>
        <lastModDate>2021-05-01T16:56:38.1570000+00:00</lastModDate>
        
        <creator>Mansur Eshov</creator>
        
        <creator>Walid Osamy</creator>
        
        <creator>Ahmed Aziz</creator>
        
        <creator>Ahmed M. Khedr</creator>
        
        <subject>Stock market; factors; SEM-model; COVID-19; global economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This article highlights the impact of the Coronavirus disease (COVID-19) pandemic on the stock market of Uzbekistan on the basis of empirical research, and identifies the main factors affecting the stock market. Secondary statistical data were collected from the Tashkent Stock Exchange, the Central Bank of the Republic of Uzbekistan, the State Statistics Committee of the Republic of Uzbekistan, and other public sources, and a regression equation for the SEM model of the impact of the COVID-19 pandemic on the stock market of Uzbekistan was formed. In particular, indicators such as the latest daily and total numbers of people infected with COVID-19 in the Republic of Uzbekistan, the total number of people who recovered after being infected with COVID-19, the total number of people who died of the disease, the daily number of people who recovered post-infection, the stock market index of Uzbekistan, the number of securities traded daily on the Republican Stock Exchange &quot;Tashkent&quot;, and the exchange rate of the US dollar set by the Central Bank of the Republic of Uzbekistan were selected as the main factors. The constructed regression equation was examined using F-statistics, Student’s t-test, and multicollinearity tests to determine its level of adequacy. The authors identify the factors based on a systematic analysis of the scientific work of world-renowned scientists on major stock markets and create a SEM model of the factors affecting the Uzbek stock market during the pandemic.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_27-Econometric_Analysis_of_Stock_Market_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Book Recommendation for Library Automation Use in School Libraries by Multi Features of Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120426</link>
        <id>10.14569/IJACSA.2021.0120426</id>
        <doi>10.14569/IJACSA.2021.0120426</doi>
        <lastModDate>2021-05-01T16:56:38.1270000+00:00</lastModDate>
        
        <creator>Kitti Puritat</creator>
        
        <creator>Phichete Julrode</creator>
        
        <creator>Pakinee Ariya</creator>
        
        <creator>Sumalee Sangamuang</creator>
        
        <creator>Kannikar Intawong</creator>
        
        <subject>Library automation; book recommendation system; library integrated system; title similarity; support vector machine; open source</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This paper proposes book recommendation algorithms for open-source library automation using the support vector machine method of machine learning. The algorithms use multiple features: (1) similarity measures for book titles; (2) the Dewey Decimal Classification (DDC) for systematic arrangement, combined with Association Rule Mining; and (3) similarity measures for the bibliographic information of books. For evaluation, we used both qualitative and quantitative data. For the qualitative part, sixty-four students of Banpasao Chiang Mai school completed a satisfaction questionnaire and interviews. For the quantitative part, we used web monitoring and precision measures on actual use of the system. The results show that books recommended by our algorithms were rated &quot;very interested&quot; and &quot;interested&quot; by 14.5% and 22.5% of students, respectively, and improved usage of the OPAC system to a highest average of 52 visits per day. Therefore, these systems are suitable for Thai-language library automation and for small libraries with limited book resources.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_26-Book_Recommendation_for_Library_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Themes of Colombian Scientific Engineering Journals in Scopus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120425</link>
        <id>10.14569/IJACSA.2021.0120425</id>
        <doi>10.14569/IJACSA.2021.0120425</doi>
        <lastModDate>2021-05-01T16:56:38.1100000+00:00</lastModDate>
        
        <creator>Marco Aguilera-Prado</creator>
        
        <creator>Octavio Jos&#233; Salcedo Parra</creator>
        
        <creator>Eduardo Avenda&#241;o Fern&#225;ndez</creator>
        
        <subject>Co-occurrence words; bibliometric analysis; bibliometrics; Colombian journals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Through a co-occurrence bibliometric and citation analysis of 1,272 texts published in the four Colombian engineering journals available in Scopus between 2014 and 2018, this paper identifies that most articles belong to supply chain optimization and logistics and involve work with information that requires minimal laboratory experimentation. Works applying artificial neural networks, clustering, and genetic algorithms are also prominent. Results from research on biomass analysis for bioenergy and sustainability are more recent and present to a lesser extent. Most of the reference texts of the published articles come from Spanish-speaking countries and mostly cite DYNA, the European Journal of Operational Research, the Journal of Food Engineering, and Ingenier&#237;a e Investigaci&#243;n.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_25-Recent_Themes_of_Colombian_Scientific_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Service Outages Prediction through Logs and Tickets Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120424</link>
        <id>10.14569/IJACSA.2021.0120424</id>
        <doi>10.14569/IJACSA.2021.0120424</doi>
        <lastModDate>2021-05-01T16:56:38.0970000+00:00</lastModDate>
        
        <creator>Sunita A Yadwad</creator>
        
        <creator>V. Valli Kumari</creator>
        
        <creator>S Venkata Lakshmi</creator>
        
        <subject>Failure prediction; linear regression technique; network fault prediction; lasso; ridge regression; Bayesian structural time series; prophet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Service outages, or downtime, are a growing challenge for service providers and end users. The major causes of unavailability are, first, the failure of equipment and applications at various places and, second, the failure to proactively diagnose and rectify problems. The system activities that are logged, and the responses of customers and providers in the form of trouble tickets, can be studied to minimize network faults. Downtime can be reduced when failures are predicted well in time and proactively corrected. Accurate prediction of faults helps in responding to downtime even before customer tickets are raised or network trouble is encountered. Most research focuses on troubleshooting by forecasting the quantity of trouble tickets from historical ones. If these tickets can be supported with warnings in the form of syslogs and the technical support of network tickets, the predictive models would be more efficient and accurate. Dynamic and truly adaptive machine learning algorithms are essential for processing the torrent of data and formulating predictions based on the trends and patterns existing in it. This work addresses i) identifying the number of trouble tickets related to a device a few days before the network component fails, and ii) predicting whether a fault will occur in broadband networks. Lasso and Ridge regression are used for the former, and Bayesian structural time series analysis and Prophet are used for the latter.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_24-Service_Outages_Prediction_through_Logs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NetAI-Gym: Customized Environment for Network to Evaluate Agent Algorithm using Reinforcement Learning in Open-AI Gym Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120423</link>
        <id>10.14569/IJACSA.2021.0120423</id>
        <doi>10.14569/IJACSA.2021.0120423</doi>
        <lastModDate>2021-05-01T16:56:38.0630000+00:00</lastModDate>
        
        <creator>Varshini Vidyadhar</creator>
        
        <creator>Nagaraj R</creator>
        
        <creator>D V Ashoka</creator>
        
        <subject>Open-AI Gym; network; environment; agent; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The growing size of networks imposes computational overhead during network route establishment using conventional routing-protocol approaches. The alternative to the route-table updating mechanism is the rule-based method, but this also provides limited scope in dynamic networks. Reinforcement learning therefore promises a better way of finding routes, but it requires an evaluation platform to build model synchronization between the route and the agent. Unfortunately, the de facto platform for agent evaluation, namely Open-AI Gym, does not provide a suitable networking environment. This paper therefore proposes, as a novel contribution, a networking environment designed as a customized environment that works in synchrony with Open-AI Gym. The successful deployment of the proposed network environment, NetAI-Gym, provides a functional and practical result that can be used further to develop routing mechanisms based on Q-learning. The validation of the proposed NetAI-Gym is carried out with different numbers of nodes in the network, in terms of episodes versus reward. The experimental outcome justifies the validity of the proposed NetAI-Gym and shows that it is suitable for solving network-related problems.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_23-NetAI_Gym_Customized_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indonesian Speech Emotion Recognition using Cross-Corpus Method with the Combination of MFCC and Teager Energy Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120422</link>
        <id>10.14569/IJACSA.2021.0120422</id>
        <doi>10.14569/IJACSA.2021.0120422</doi>
        <lastModDate>2021-05-01T16:56:38.0470000+00:00</lastModDate>
        
        <creator>Oscar Utomo Kumala</creator>
        
        <creator>Amalia Zahra</creator>
        
        <subject>Cross corpus; Indonesian speech emotion recognition; Mel Frequency Cepstral Coefficients; Teager Energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Emotion recognition is one of the widely studied topics in speech technology. Emotions conveyed in speech can contain useful information for many purposes. The main aspects of speech emotion recognition are the speech features, the speech corpus, and the machine learning algorithm used as the classification method. In this paper, a cross-corpus method is used to conduct Indonesian Speech Emotion Recognition (SER), along with the combination of Mel Frequency Cepstral Coefficients (MFCC) and Teager Energy features. Using a Support Vector Machine (SVM) as the classifier, the experimental results show that applying the cross-corpus method by adding corpora from other languages to the training dataset improves the emotion classification accuracy by 4.16% on the MFCC Statistics feature and 2.09% on the Teager-MFCC Statistics feature.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_22-Indonesian_Speech_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Privacy Preserving Approach for e-Health</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120421</link>
        <id>10.14569/IJACSA.2021.0120421</id>
        <doi>10.14569/IJACSA.2021.0120421</doi>
        <lastModDate>2021-05-01T16:56:38.0170000+00:00</lastModDate>
        
        <creator>Supriya Menon M</creator>
        
        <creator>Rajarajeswari Pothuraju</creator>
        
        <subject>e-Health; Attribute Based Encryption (ABE); secure hash algorithm (SHA-1); anonymity; privacy; sensitive parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The immense generation of large amounts of data in the medical and healthcare domains, while benefitting society, puts sensitive attributes at risk of disclosure. Access to medical information, made feasible over the internet with the intention of serving people in the medical community, poses a challenge for researchers in terms of privacy and security. Medical data in the cloud are vulnerable to unpredictable threats as technology evolves, and the threat landscape is persistent where sensitive attributes are concerned. In this context, organizations fail to uphold their reputation and are unable to preserve public confidence. The severity of sophisticated security attacks compromises the privacy of patient data and the security of healthcare units. Fruitful approaches by several researchers and practitioners have provided useful resolutions, but the demand for an optimal solution remains unanswered. In this paper we present a solution for addressing the security issues in healthcare management. We propose a hybrid framework using enhanced Attribute Based Encryption (ABE) with an anonymity approach based on access primitives of sensitive attributes. The proposed mechanism is evaluated in terms of performance, encryption time, decryption time, and memory utilization using the JSim simulator, which shows a substantial performance improvement in the presented model.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_21-An_Efficient_Privacy_Preserving_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Decoding of Chase Pyndiah Decoder Utilizing Multiple Relays Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120420</link>
        <id>10.14569/IJACSA.2021.0120420</id>
        <doi>10.14569/IJACSA.2021.0120420</doi>
        <lastModDate>2021-05-01T16:56:37.9870000+00:00</lastModDate>
        
        <creator>Saif E. A. Alnawayseh</creator>
        
        <subject>Turbo product code (TPC); modified iterative chase Pyndiah decoding algorithm; relay; source; Bit Error Rate (BER); vertical parities; horizontal parities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In this paper, distributed encoding and decoding of Turbo Product Code (TPC) over single- and multiple-relay networks are proposed. The information message matrix is encoded at the source by a Bose-Chaudhuri-Hocquenghem (BCH) component code and transmitted to the destination and to relays midway between the source and destination. The coded source message is decoded by a simple Chase-II decoder at the relay and encoded again horizontally and vertically by the BCH component code to construct the TPC. Two scenarios were investigated. In the first scenario, one relay is utilized in the cooperative network, where the vertical parity part of the TPC is transmitted to the destination to be the input of the row decoder of the original Chase-Pyndiah decoder, while the horizontally encoded matrix received from the source is the input of the column decoder. In the second scenario, multiple relays are utilized and multiple copies of the vertical parity part of the TPC are sent to the destination to be decoded by the proposed modified iterative Chase-Pyndiah decoder with multiple integrated stages for each iteration. Simulation results for the first scenario over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels, using the original Chase-Pyndiah decoder at the destination, show 2 dB gain improvement at BER = 10^-5 and 4 dB gain improvement at BER = 10^-4, respectively, over the BER performance of the non-cooperative system. Results for distributed TPC decoding in the second scenario, using the proposed modified iterative Chase-Pyndiah decoder at the destination, show 2.7 dB and 3 dB gains at BER = 10^-4 for the AWGN and Rayleigh fading channels, respectively, over the first scenario.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_20-Iterative_Decoding_of_Chase_Pyndiah_Decoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Pairwise Testing based Genetic Algorithm for Test Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120419</link>
        <id>10.14569/IJACSA.2021.0120419</id>
        <doi>10.14569/IJACSA.2021.0120419</doi>
        <lastModDate>2021-05-01T16:56:37.9530000+00:00</lastModDate>
        
        <creator>Baswaraju Swathi</creator>
        
        <creator>Harshvardhan Tiwari</creator>
        
        <subject>Test case generation; genetic algorithm; multi objective optimization; pairwise testing; test optimization; fitness value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Generation of test cases in software testing is an important and complex activity, as it deals with a diversified range of inputs. Fundamentally, test case generation is considered a multi-objective problem, since it aims to cover many targets. Deriving test cases for web applications has become critical for most enterprises. In this paper, a solution for generating test cases for web applications is proposed; the solution uses a System Graph (consisting of links and data dependencies), considering that test cases are based on a combination of input values and data dependencies. Pairwise testing is used to select the test cases to be executed from the entire set, and a genetic algorithm is then proposed to generate test cases specific to functional testing. The proposed approach was evaluated in two distinct experiments by measuring the code coverage at every generation, and the results show that the genetic algorithm increased both the fitness value and the code coverage. Overall, the results validate the proposed approach and algorithm, which have the potential to be further developed into an automated, integrated solution for generating test cases for the entire process.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_19-Integrated_Pairwise_Testing_based_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formulation of Association Rule Mining (ARM) for an Effective Cyber Attack Attribution in Cyber Threat Intelligence (CTI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120418</link>
        <id>10.14569/IJACSA.2021.0120418</id>
        <doi>10.14569/IJACSA.2021.0120418</doi>
        <lastModDate>2021-05-01T16:56:37.9230000+00:00</lastModDate>
        
        <creator>Md Sahrom Abu</creator>
        
        <creator>Siti Rahayu Selamat</creator>
        
        <creator>Robiah Yusof</creator>
        
        <creator>Aswami Ariffin</creator>
        
        <subject>Cyber threat intelligence (CTI); association rule mining; apriori algorithm; attribution; interestingness measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In recent years, adversaries have improved their Tactics, Techniques and Procedures (TTPs) for launching cyberattacks, making them less predictable, more persistent, better resourced and better funded. Many organisations have therefore adopted Cyber Threat Intelligence (CTI) in their security posture to attribute cyberattacks effectively. However, to fully leverage the massive amount of data in CTI for threat attribution, an organisation needs to focus on discovering the hidden knowledge behind the voluminous data. Hence, this paper emphasises research on association analysis in the CTI process for cyberattack attribution. The aim of this paper is to formulate an association ruleset to perform the attribution process in CTI. The Apriori algorithm is used to formulate the association ruleset in the association analysis process, referred to as the CTI Association Ruleset (CTI-AR). Interestingness measures, specifically support (s), confidence (c) and lift (l), are used to assess the practicality and validity of the CTI-AR and to filter it. The results showed that CTI-AR effectively identifies the attributes, the relationships between attributes and the attribution level group of cyberattacks in CTI. This research has high potential to be expanded into the cyber threat hunting process, providing a more proactive cybersecurity environment.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_18-Formulation_of_Association_Rule_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VG4 Cipher: Digital Image Encryption Standard</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120417</link>
        <id>10.14569/IJACSA.2021.0120417</id>
        <doi>10.14569/IJACSA.2021.0120417</doi>
        <lastModDate>2021-05-01T16:56:37.9230000+00:00</lastModDate>
        
        <creator>Akhil Kaushik</creator>
        
        <creator>Vikas Thada</creator>
        
        <subject>DNA cryptography; cipher; information security; encryption; decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>When it comes to providing security to information systems, encryption emerges as an indispensable tool; it has been used extensively in the past few decades for securing stationary data as well as data in motion. With rapid data transmission techniques and the multimedia options available for data representation, the field of information security has become very significant. A state-of-the-art cryptographic technique is DNA encryption, which uses biological principles for safeguarding data. Bio-inspired ciphers are becoming a de-facto safety standard, especially for digital images, as images are a key source of crucial information. Hence, image encoding becomes of ultimate importance when images must be sent via an insecure communication channel. The purpose of this research paper is to present a DNA-inspired cryptosystem for the domain of image encryption that provides superior security with enhanced efficiency. The experimental outcomes prove that this novel cryptographic algorithm not only provides better security but also does so at a reasonable pace.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_17-VG4_Cipher_Digital_Image_Encryption_Standard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distance Education during COVID-19 Pandemic: The Perceptions and Preference of University Students in Malaysia Towards Online Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120416</link>
        <id>10.14569/IJACSA.2021.0120416</id>
        <doi>10.14569/IJACSA.2021.0120416</doi>
        <lastModDate>2021-05-01T16:56:37.8930000+00:00</lastModDate>
        
        <creator>Husna Hafiza Razami</creator>
        
        <creator>Roslina Ibrahim</creator>
        
        <subject>COVID; distance learning; education; online learning; student; perception; preference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The sudden shift from the brick-and-mortar approach to online distance learning due to the coronavirus pandemic greatly impacted everyone involved, particularly students. Hence, it is critical to identify students&#39; perceptions of the challenges they faced, their satisfaction with remote learning, and their preferences and recommendations for improvement, which are the objectives of this research. A survey taken by 408 diploma students, with 377 valid answers for the quantitative study, showed that the most common difficulties they encountered concerned interaction, concentration and motivation. The mean of the perceived challenges was found to differ significantly depending on the respondents’ prior e-learning experience and area of residence. With regard to the relevant activities to be conducted virtually, most participants approved of assessments such as quizzes, assignments and tests. Animation and gamification received the highest votes as the elements that students wished were incorporated to boost their online learning engagement. The findings of this research contribute to existing studies on the perceptions and preferences of students towards distance education by shedding light on the perspective of diploma students.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_16-Distance_Education_During_COVID_19_Pandemic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Is Deep Learning Better than Machine Learning to Predict Benign Laryngeal Disorders?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120415</link>
        <id>10.14569/IJACSA.2021.0120415</id>
        <doi>10.14569/IJACSA.2021.0120415</doi>
        <lastModDate>2021-05-01T16:56:37.8770000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Benign laryngeal mucosal disorder; voice disorder; deep learning; Naive Bayes model; generalized linear model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>It is important in otolaryngology to accurately understand the etiology of a laryngeal disorder, diagnose it early, and provide appropriate treatment accordingly. The objectives of this study were to develop models for predicting benign laryngeal mucosal disorders based on deep learning, a naive Bayes model, a generalized linear model, a Classification and Regression Tree (CART), and random forest, using laryngeal mucosal disorder data obtained from a national survey, and to confirm the best classifier for predicting benign laryngeal mucosal disorders by comparing the prediction performance and runtime of the developed models. This study analyzed 626 subjects (313 people with a laryngeal disorder and 313 people without one). Deep learning was the best model, with the highest accuracy (0.84). However, the runtime of deep learning was 39 min 41 sec, about 10 times longer than the development time of CART (3 min 7 sec). This model confirmed that subjective voice problem recognition, pain and discomfort in the last two weeks, education level, occupation, mean monthly household income, high-risk drinking, and current smoking were major variables with high weight for the benign laryngeal mucosal disorders of Korean adults. Among them, subjective voice problem recognition was the most important factor with the highest weight. The results of this study imply that the prediction performance of deep learning can be better than that of machine learning for structured data, such as health behavior and demographic factors, as well as video and image data.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_15-Is_Deep_Learning_Better_than_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Annotated Corpus of Mesopotamian-Iraqi Dialect for Sentiment Analysis in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120413</link>
        <id>10.14569/IJACSA.2021.0120413</id>
        <doi>10.14569/IJACSA.2021.0120413</doi>
        <lastModDate>2021-05-01T16:56:37.8600000+00:00</lastModDate>
        
        <creator>Al-Khafaji Ali J Askar</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <subject>Sentiment analysis; Mesopotamian dialect; Iraqi dialect; social media; annotated corpus; emotion classification; Arabic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Research on sentiment analysis in social media using the Mesopotamian-Iraqi Dialect (MID) of Arabic is rare: there is no reliable dataset developed in MID, nor an annotated corpus for the sentiment analysis of social media in this dialect. This gap has been the main stumbling block for researchers of sentiment analysis in MID. This paper therefore introduces the development of an Annotated Corpus of Mesopotamian-Iraqi Dialect (ACMID) for sentiment analysis in social media, to help future researchers use the corpus in their studies. To the best of our knowledge, this is the first annotated corpus in MID that classifies both polarity and emotion. Facebook, the most popular social platform among Iraqis, was used to extract data from its popular Iraqi pages. 5000 comments extracted from these pages were classified by polarity (Positive, Negative, Neutral, Spam) by two Iraqi annotators, who simultaneously classified the same comments according to Ekman&#39;s seven universal emotions (Anger, Fear, Disgust, Happiness, Sadness, Surprise, Contempt) or no emotion. Cohen&#39;s kappa coefficient was then used to compare the two annotators&#39; results and establish their reliability. The data show comparable agreement between the two annotators: as high as 0.82 for polarity classification and 0.65 for emotion classification.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_13-Annotated_Corpus_of_Mesopotamian_Iraqi_Dialect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Parkinson’s Disease Predictors based on Basic Intelligence Quotient and Executive Intelligence Quotient</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120414</link>
        <id>10.14569/IJACSA.2021.0120414</id>
        <doi>10.14569/IJACSA.2021.0120414</doi>
        <lastModDate>2021-05-01T16:56:37.8600000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Undersampling; oversampling; SMOTE; random forest; Parkinson&#39;s disease–mild cognitive impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>It is important to identify the risk factors of dementia and prevent them for the health of patients and caregivers. This study (1) explored sampling methods that could minimize overfitting due to data imbalance using a data-level approach, (2) developed nine ensemble learning models for predicting Parkinson&#39;s Disease–Mild Cognitive Impairment (PD-MCI) ((undersampling, oversampling, and SMOTE) &#215; (boosting, bagging, and random forest) = 9), based on basic intelligence quotient and executive intelligence quotient, and (3) compared the accuracies, sensitivities, and specificities of these models to understand their prediction performance. We examined 368 subjects: 320 healthy elderly people (≥60 and ≤74 years old) without Parkinson&#39;s disease (168 men and 152 women) and 48 subjects with PD-MCI (20 men and 28 women). The study used the Cognition Scale for Older Adults (CSOA), which can measure cognitive functions comprehensively while considering age and education level, to determine the specific cognitive level of each subject. The analysis results showed that a random forest classifier with SMOTE had the best prediction performance, with a sensitivity of 69.2%, a specificity of 75.7%, and a mean overall accuracy of 74.0%. In this final model, the digit span test (backward), Stroop test (interference trial), verbal memory test (delayed recall), verbal fluency test, and confrontation naming test were identified as the key variables with high weight in predicting PD-MCI. The results imply that, when analyzing imbalanced data, a random forest classifier with SMOTE can produce models with higher accuracy than a bagging or boosting classifier with SMOTE.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_14-Exploring_Parkinsons_Disease_Predictors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensing and Detection of Traffic Status through V2V Routing Hop Count and Route Energy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120412</link>
        <id>10.14569/IJACSA.2021.0120412</id>
        <doi>10.14569/IJACSA.2021.0120412</doi>
        <lastModDate>2021-05-01T16:56:37.8300000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>V2V; consumed energy; congestion; hops; VANET; routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>A new approach to managing congestion using vehicular communication is presented in this work. Using MATLAB simulation, the research tracked communicating vehicles travelling on roads, constantly registering changes in routes, number of hops, and energy consumed as a function of travelled distance. The area of travel and simulation is divided into blocks or zones to enable sufficient allocation and distribution of Road Side Units (RSUs), which relay communication signals and transmit Basic Safety Messages (BSMs). The simulation is based on the assumption that, as congestion occurs, the number of hops per route and the associated energy consumption per transmitted packet change pattern as traffic passes from low density, through smooth (optimal) flow, to high density (congestion); at the onset of congestion, vehicles start to slow down and move closer to each other in two-dimensional space. The output feeds a traffic status pattern characterization algorithm (management system) that uses the data to indicate the start of traffic accumulation, so that pre-emptive measures can be taken to avoid congestion and reduced mobility. The presented analysis proved that it is possible to predict congestion as a function of both hop sequences and consumed energy: the hop pattern is shown to be symmetric when optimal traffic flows smoothly, whereas when congestion starts to occur the hop pattern becomes asymmetric, with elements of the hop sequences switching and swapping places within the identified pattern. Further analysis and polynomial curve fitting showed that congestion control and smooth traffic management using the proposed approach are achievable.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_12-Sensing_and_Detection_of_Traffic_Status.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smartphone-based Recognition of Human Activities using Shallow Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120410</link>
        <id>10.14569/IJACSA.2021.0120410</id>
        <doi>10.14569/IJACSA.2021.0120410</doi>
        <lastModDate>2021-05-01T16:56:37.8130000+00:00</lastModDate>
        
        <creator>Maha Mohammed Alhumayyani</creator>
        
        <creator>Mahmoud Mounir</creator>
        
        <creator>Rasha Ismael</creator>
        
        <subject>Data preprocessing; data mining; classification; genetic programming; Na&#239;ve Bayes; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Human action recognition (HAR) attempts to classify the activities of individuals and their environment from a collection of observations. HAR research is focused on many applications, such as video surveillance, healthcare and human-computer interaction. Several problems can deteriorate the performance of human recognition systems: first, developing a lightweight and reliable smartphone system that classifies human activities while reducing labelling effort and time; second, the derived features must generalise across multiple variations to address the challenges of action detection, including individual appearances, viewpoints and histories, while still guaranteeing relevant classification. In this paper, a model is proposed to reliably detect the type of physical activity conducted by the user from the phone&#39;s sensors. This includes a review of existing research solutions, how they can be strengthened, and a new approach to the problem. Stochastic Gradient Descent (SGD) decreases the computational strain by trading faster iterations for a lower convergence rate, and leads to a performance enhancement of J48. Furthermore, a human activity recognition dataset based on smartphone sensors is used to validate the proposed solution. The findings showed that the proposed model was superior.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_10-Smartphone_based_Recognition_of_Human_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approaches for Intrusion Detection in IIoT Networks – Opportunities and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120411</link>
        <id>10.14569/IJACSA.2021.0120411</id>
        <doi>10.14569/IJACSA.2021.0120411</doi>
        <lastModDate>2021-05-01T16:56:37.8130000+00:00</lastModDate>
        
        <creator>Thavavel Vaiyapuri</creator>
        
        <creator>Zohra Sbai</creator>
        
        <creator>Haya Alaskar</creator>
        
        <creator>Nourah Ali Alaseem</creator>
        
        <subject>Industrial Control System; Industrial Internet of Things (IIoT); cybersecurity; intrusion detection system and deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>In recent years, the Industrial Internet of Things (IIoT) has been one of the fastest-advancing innovative technologies, with the potential to digitize and interconnect many industries for huge business opportunities and growth of global GDP. IIoT is used in a diverse range of industries such as manufacturing, logistics, transportation, oil and gas, mining and metals, energy utilities and aviation. Although IIoT provides promising opportunities for the development of different industrial applications, these applications are prone to cyberattacks and demand higher security requirements. The enormous number of sensors present in an IIoT network generates a large amount of data and has attracted the attention of cybercriminals across the globe. The intrusion detection system (IDS), which monitors network traffic and detects the behaviour of the network, is considered one of the key security solutions for securing IIoT applications from attacks. Recently, the application of machine and deep learning techniques has proved to mitigate multiple security threats and enhance the performance of intrusion detection. In this paper, we present a survey of deep learning-based IDS techniques for IIoT. The main objective of this research is to review the various deep learning-based IDS detection methods and datasets and to provide a comparative analysis. Finally, this research aims to identify the limitations and challenges of existing studies, solutions and future directions.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_11-Deep_Learning_Approaches_for_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Print of Clothing through Digitalized Designs of Natural Patterns with Flexible Filaments in 3D Printers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120409</link>
        <id>10.14569/IJACSA.2021.0120409</id>
        <doi>10.14569/IJACSA.2021.0120409</doi>
        <lastModDate>2021-05-01T16:56:37.7830000+00:00</lastModDate>
        
        <creator>Jean Roger Farf&#225;n Gavancho</creator>
        
        <creator>Wilber Antonio Figueroa Quispe</creator>
        
        <creator>Dayvis Victor Farf&#225;n Gavancho</creator>
        
        <creator>Beto Puma Huam&#225;n</creator>
        
        <creator>Victor Manuel Lima Condori</creator>
        
        <creator>George Jhonatan Cahuana Alca</creator>
        
        <subject>Natural pattern; fractal; garment; digital design; flexible filament; 3D printing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This study proposes clothing development by digitalizing natural patterns with flexible filaments in 3D printers. The motivation for this research was the self-similarity and evenness of fractals. Three natural exemplars were selected and subsequently digitized: Snowflake, Honeycomb, and Flower of Life, with line variants and an infill density of 12%. The garments were printed with thermoplastic polyurethane (TPU) and thermoplastic elastomer (TPE) on two 3D printers: Anet A8 and M3D Crane Quad. The combination of filaments, printers, line variant, and infill density resulted in forty-eight (48) samples. Two tests were carried out on the printed patterns: an elongation test and a tensile strength test. The elongation test applies a variable force to each exemplar to obtain the percentage of its elastic limit before reaching its fracture point. The tensile test applies a variable vertical force to each design to determine how it behaves under particular pressure. Results show that the snowflake pattern with the line variant obtained the best performance in the elongation test compared to the tensile test. Subsequently, four clothing samples were printed with TPU and TPE materials on the two printers mentioned above. Each garment is composed of twenty-nine (29) pieces, which were connected with a 3D pen. Finally, the item of clothing was worn by five volunteers of different sizes, as shown in the following pages.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_9-Development_and_Print_of_Clothing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Implementation Framework for an Efficient Transformation to Online Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120408</link>
        <id>10.14569/IJACSA.2021.0120408</id>
        <doi>10.14569/IJACSA.2021.0120408</doi>
        <lastModDate>2021-05-01T16:56:37.7670000+00:00</lastModDate>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <creator>Salah Al-Sharhan</creator>
        
        <creator>Rana Alhajri</creator>
        
        <creator>Andrew Bimba</creator>
        
        <subject>Distance learning; e-learning framework; COVID-19 pandemic; e-learning delivery; e-Content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>The least developed countries have been tasked with introducing effective e-learning frameworks as they look to overcome technology inadequacies and a lack of research support or vision. The ongoing efforts rely on a mixed-methods approach: a systematic literature analysis and a quantitative examination were undertaken to achieve a thorough assessment of data taken from educational facilities in Kuwait. Results show clear support for embracing e-learning, with most participants recognizing its positives despite the scope of challenges its practice may involve. Consequently, the authors recommend an integrated framework to support a smooth transition to online teaching in a manner that furthers the efficacy and understanding of e-learning potential in the context of education in Kuwait and neighboring countries, with a particular focus on how to function during a pandemic lockdown. The proposed framework is structured according to five key tiers: infrastructure, e-learning delivery, LMS, e-Content, and user portal. In support of this, a model of e-Content development is proposed to assist with the creation and deployment of educational materials, in particular to cope with the lack of digital learning materials in Arabic.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_8-An_Integrated_Implementation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Unauthorized Network Intrusion based on Network Traffic using Behavior Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120407</link>
        <id>10.14569/IJACSA.2021.0120407</id>
        <doi>10.14569/IJACSA.2021.0120407</doi>
        <lastModDate>2021-05-01T16:56:37.7500000+00:00</lastModDate>
        
        <creator>Nguyen Tung Lam</creator>
        
        <subject>Network intrusion detection; abnormal behaviors; IDS 2018 dataset; deep learning and machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Nowadays, network intrusion detection is an essential problem because cyber-attacks are increasing in both number and severity. Network intrusion techniques often use various methods to bypass the oversight of anomaly detection and surveillance systems. This paper proposes to use behavior analysis techniques, machine learning, and deep learning algorithms for the task of detecting network intrusions. The practical and scientific significance of our paper covers two issues: (1) regarding feature selection and extraction, instead of using typical abnormal behaviors of attacks, this study uses statistical behaviors that are easy to calculate and extract while still ensuring the effectiveness of the method; (2) regarding the detection process, this study proposes to use the Random Forest (RF) classification algorithm, the Multilayer Perceptron (MLP), and the Convolutional Neural Network (CNN) deep learning model. The experimental results in Section IV demonstrate that our proposal is sound and effective. Based on the results shown in Section IV, this study provides network surveillance systems with a number of abnormal behaviors as a basis for detecting network intrusions.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_7-Detecting_Unauthorized_Network_Intrusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Storage and Electric Vehicles: Technology, Operation, Challenges, and Cost-Benefit Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120406</link>
        <id>10.14569/IJACSA.2021.0120406</id>
        <doi>10.14569/IJACSA.2021.0120406</doi>
        <lastModDate>2021-05-01T16:56:37.7370000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Energy storage; electric vehicles; cost-benefit analysis; demand-side management; renewable energy; smart grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>With ever-increasing oil prices and concerns for the natural environment, there is fast-growing interest in electric vehicles (EVs) and renewable energy resources (RERs), which play an important role in a gradual transition. However, energy storage is the weak point of EVs and delays their progress. The world’s EV industry is accelerating toward faster adoption through appropriate incentives for EV owners, policy support, and encouragement of local manufacturing. The increasing demand for EVs reflects their emergence as a genuine alternative to internal combustion engines (ICEs). The main features of RERs are their variability and intermittency. These drawbacks are overcome by integrating more than one renewable energy source, including backup sources and storage systems. This paper presents the various technologies, operations, challenges, and cost-benefit analysis of energy storage systems and EVs.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_6-Energy_Storage_and_Electric_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systems Security Affectation with the Implementation of Quantum Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120405</link>
        <id>10.14569/IJACSA.2021.0120405</id>
        <doi>10.14569/IJACSA.2021.0120405</doi>
        <lastModDate>2021-05-01T16:56:37.6900000+00:00</lastModDate>
        
        <creator>Norberto Novoa Torres</creator>
        
        <creator>Juan Carlos Suarez Garcia</creator>
        
        <creator>Erik Alexis Valderrama Guancha</creator>
        
        <subject>Quantum computing; encryption; cryptography; cryptanalysis; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Current security systems use robust cryptographic tools that have been of great help in protecting information. In their time, these tools rendered the classic security systems obsolete, since cryptanalysis allowed the information protected by those systems to be decrypted in a fast, automated, and simple manner. The same may happen when quantum computing systems are implemented: the current security systems could be rendered obsolete, as tools will exist that can break their encryption in a simple way, putting the data of organizations worldwide at risk. To mitigate these risks, it is necessary to upgrade the available security systems to quantum-resistant security and encryption before the massive adoption of the quantum computer as an everyday tool. This does not mean that quantum computing is a disadvantage; on the contrary, the advantages of this technology will make information and data security nearly invulnerable, which is a meaningful advance in the IT field. Information security professionals are therefore obliged to recommend and perform an appropriate migration to new technologies to avoid the existing risks of exposure of data as well as transactions. Otherwise, the same scenario seen with the classic security systems would occur.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_5-Systems_Security_Affectation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Most Appropriate Plucking Date Determination based on the Elapsed Days after Sprouting with NIR Reflection from Sentinel-2 Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120404</link>
        <id>10.14569/IJACSA.2021.0120404</id>
        <doi>10.14569/IJACSA.2021.0120404</doi>
        <lastModDate>2021-05-01T16:56:37.6730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoshiko Hokazono</creator>
        
        <subject>Plucking date; elapsed days after sprouting; NIR reflection; Sentinel-2; normalized difference vegetation index: NDVI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>A method for determining the most appropriate plucking date based on the elapsed days after sprouting, using Near Infrared (NIR) reflection from Sentinel-2 data, is proposed. Tealeaf quality decreases with the elapsed days after sprouting, while tealeaf yield increases with the days after sprouting; therefore, determining the most appropriate plucking date is very important. Usually, it is determined with the Normalized Difference Vegetation Index (NDVI) derived from handheld NDVI cameras, drone-mounted NDVI cameras, and visible-to-NIR radiometers onboard satellites, because NIR reflection and NDVI depend on tealeaf quality and yield. This, however, does not work well, owing to poor regression performance and species dependency. Moreover, finding appropriate tealeaves in the acquired camera images requires time-consuming work. The proposed method uses only the days after sprouting, which in turn requires determining the sprouting date. To determine that date, data from the optical sensor onboard Sentinel-2 are used. Through an experiment with truth data taken at the intensive study area of the Oita Prefectural Agriculture, Forestry and Fisheries Research Guidance Center (OPAFFRGC), the proposed method is validated.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_4-Method_for_Most_Appropriate_Plucking_Date_Determination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Adoption of Mobile Health Applications by Patients in Developing Countries: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120403</link>
        <id>10.14569/IJACSA.2021.0120403</id>
        <doi>10.14569/IJACSA.2021.0120403</doi>
        <lastModDate>2021-05-01T16:56:37.6570000+00:00</lastModDate>
        
        <creator>Nasser Aljohani</creator>
        
        <creator>Daniel Chandran</creator>
        
        <subject>M-health; mobile health; apps; adoption; review; developing countries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Mobile health (m-health) app adoption in developing countries is a new research area in the healthcare industry. M-health is comparatively recent in information systems, with little attention paid to it in developing countries in previous years. Applications of m-health strategies in developing nations are considered among the best platforms for guaranteeing the citizenry&#39;s safety and healthcare security. A systematic review of m-health app adoption by patients in developing countries was conducted to evaluate the current results. It reviews 22 papers published on the topic of m-health adoption in developing countries in academic journals and conferences over the last decade. It characterizes the research in terms of research methodologies, theories and models adopted, significant factors identified, limitations, and recommendations. Findings show a limited contribution to m-health app adoption in developing countries. Most studies employed TAM and focused on the technological and individual levels; very little attention has been paid to health-related factors, levels, and theories. The review presents a broad overview of previous academic studies with a view to future research.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_3-The_Adoption_of_Mobile_Health_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Reusing Policy Selection using Spreading Activation Model in Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120402</link>
        <id>10.14569/IJACSA.2021.0120402</id>
        <doi>10.14569/IJACSA.2021.0120402</doi>
        <lastModDate>2021-05-01T16:56:37.6270000+00:00</lastModDate>
        
        <creator>Yusaku Takakuwa</creator>
        
        <creator>Hitoshi Kono</creator>
        
        <creator>Hiromitsu Fujii</creator>
        
        <creator>Wen Wen</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Reinforcement learning; transfer learning; deep learning; cognitive psychology; spreading activation theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>This paper describes a policy transfer method for a reinforcement learning agent based on the spreading activation model of cognitive psychology. This method offers the prospect of increasing the possibility of policy reuse, adapting to multiple tasks, and accommodating differences in agent mechanisms. In existing methods, policies are evaluated and manually selected depending on the target task. The proposed method generates a policy network that calculates the relevance between policies in order to select and transfer a specific policy that is presumed to be effective based on the current situation of the agent while learning. Using the policy network&#39;s graph structure, the proposed method decides on the most effective policy while repeating probabilistic selection, activation, and spreading processes. The experiment section describes tests conducted to evaluate the usefulness, conditions of use, and usable range of the proposed method. Tests using CartPole and MountainCar, which are classical reinforcement learning tasks, are described, and transfer learning is compared between the proposed method and a Deep Q-Network without transfer. The experimental results suggest the usefulness of the proposed method for transfer learning of the same task without manual policy selection, compared with the previous method under various conditions.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_2-Autonomous_Reusing_Policy_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Detection Method based on Online Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120401</link>
        <id>10.14569/IJACSA.2021.0120401</id>
        <doi>10.14569/IJACSA.2021.0120401</doi>
        <lastModDate>2021-05-01T16:56:37.4700000+00:00</lastModDate>
        
        <creator>Wenbo Wang</creator>
        
        <creator>Yong Ma</creator>
        
        <subject>Road detection; data fusion; unmanned ground vehicle; online learning; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(4), 2021</description>
        <description>Road detection has always been a key problem in research on unmanned ground vehicles and computer vision. A road detection method based on online learning and multi-sensor fusion is proposed. First, the Lidar point clouds are projected onto the images via the joint calibration of the two kinds of sensors. Then Simple Linear Iterative Clustering is used to segment the images into superpixels. Based on that, a multilayer online learning method is proposed, in which two Support Vector Machines (SVMs) are trained to detect the road. Specifically, the superpixel-layer SVM detects the road roughly, and the pixel-layer SVM is then trained to classify the edge pixels of the road areas identified by the upper-layer SVM. The two SVMs are updated online at each frame to adapt to the changing environment. Finally, experiments are carried out on the KITTI RAW dataset and an autonomous land vehicle, and the results show the effectiveness of the proposed method. The main contributions of this work are as follows: 1) a multilayer learning model is proposed to detect roads more robustly and accurately; 2) an online learning method is proposed that can adapt to the changing environment.</description>
        <description>http://thesai.org/Downloads/Volume12No4/Paper_1-Road_Detection_Method_based_on_Online_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Algorithm for Classification of Cerebral Palsy from Functional Magnetic Resonance Imaging (fMRI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120383</link>
        <id>10.14569/IJACSA.2021.0120383</id>
        <doi>10.14569/IJACSA.2021.0120383</doi>
        <lastModDate>2021-03-31T07:07:46.5400000+00:00</lastModDate>
        
        <creator>Pradeepa Palraj</creator>
        
        <creator>Gopinath Siddan</creator>
        
        <subject>Cerebral palsy; deep neural network; functional magnetic resonance image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Cerebral palsy is a neurological disorder that may be caused by prenatal, perinatal, or postnatal factors and results in impaired motor functioning in children in addition to affecting mental well-being. Referring to the location of the brain injury and its effect on muscle tone, cerebral palsy is classified into subgroups, namely spastic, non-spastic, etc. Each type of palsy varies in symptoms, and hence therapy planning and rehabilitation are decided depending on the factors involved in each type. This underscores the need for a suitable technique to classify the type of palsy at an early stage so that therapy can be planned effectively. Functional MRI of the neonatal brain helps in the imaging and classification of cerebral palsy. The deep neural network, a subset of machine learning, is widely used in image classification applications. This technique is applied to functional magnetic resonance brain images of infants to classify the type of cerebral palsy using a deep convolutional network with a modified AlexNet architecture, which further helps the physician plan rehabilitation to improve the lifestyle of the affected children.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_83-Deep_Learning_Algorithm_for_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing the Use of Wireless Sensor Networks in the Irrigation Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120382</link>
        <id>10.14569/IJACSA.2021.0120382</id>
        <doi>10.14569/IJACSA.2021.0120382</doi>
        <lastModDate>2021-03-31T07:07:46.5100000+00:00</lastModDate>
        
        <creator>Loubna HAMAMI</creator>
        
        <creator>Bouchaib NASSEREDDINE</creator>
        
        <subject>Cost; energy consumption and management; irrigation; smart irrigation; wireless sensor network; WSN deployment </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Battlefield control, natural disaster detection, water monitoring, smart homes, agricultural applications, health care, weather forecasting, smart buildings, intrusion detection, medical devices, and more are among the application areas of wireless sensor networks (WSNs). WSNs can help bring about revolutionary changes in important areas of our world. As a result, this technology has become particularly interesting, since its distinctive characteristics allow it to meet the specific requirements of a particular application. In this context, WSNs are a promising approach in the agricultural sector, and in irrigation in particular, for overcoming some of the world&#39;s major problems (e.g., the global water crisis). When implementing a WSN in the irrigation field, many factors, such as limited sensor node resources, limited sensor node power, costs, hardware constraints, and the type of deployment environment, must be taken into account in order to improve WSN performance and achieve the desired results. In this paper, we study and analyze the main factors that affect WSNs in the irrigation field. We also provide a set of measures and solutions that need to be taken to overcome the challenges of deploying a WSN in irrigation. In this regard, we also highlight several factors for improvement to achieve an efficient and consistent irrigation system using a WSN.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_82-Factors_Influencing_the_use_of_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attack Resilient Trust and Signature-based Intrusion Detection Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120381</link>
        <id>10.14569/IJACSA.2021.0120381</id>
        <doi>10.14569/IJACSA.2021.0120381</doi>
        <lastModDate>2021-03-31T07:07:46.4770000+00:00</lastModDate>
        
        <creator>Boniface Kabaso</creator>
        
        <creator>Saber A. Aradeh</creator>
        
        <creator>Ademola P. Abidoye</creator>
        
        <subject>Wireless sensor network; routing attacks; public-key cryptography; packet dropping; denial of service attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Wireless sensor networks have been widely applied in many areas due to their unique characteristics, which have also exposed them to different types of active and passive attacks. In the literature, several solutions have been proposed to mitigate these attacks. Most of the proposed solutions are too complex to be implemented in wireless sensor networks, considering the resource constraints of sensor nodes. In this work, we propose a hierarchical trust mechanism based on a clustering approach to detect and prevent denial of service attacks in wireless sensor networks. The approach was validated through simulation using Network Simulator (NS2). The following metrics were used to evaluate the proposed scheme: packet delivery ratio, network lifetime, routing delay, overhead, and number of nodes. The proposed approach is capable of detecting compromised sensor nodes vulnerable to denial of service attacks. Moreover, it is able to detect all sensed data that have been compromised during transmission to the base station. The results show that our method can effectively detect and defend against denial of service attacks in wireless sensor networks.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_81-Attack_Resilient_Trust_and_Signature_Based_Intrusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sign Language Recognition using Faster R-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120380</link>
        <id>10.14569/IJACSA.2021.0120380</id>
        <doi>10.14569/IJACSA.2021.0120380</doi>
        <lastModDate>2021-03-31T07:07:46.4630000+00:00</lastModDate>
        
        <creator>Rahaf Abdulaziz Alawwad</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <subject>Arabic sign language recognition; supervised learning; deep learning; faster region based convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Deafness does not restrict its negative effects to a person’s hearing; rather, it affects all aspects of their daily life. Moreover, hearing people aggravate the issue through their reluctance to learn sign language. This has resulted in a constant need for human translators to assist deaf persons, which represents a real obstacle to their social life. Therefore, automatic sign language translation has emerged as an urgent need for the community. The availability and widespread use of mobile phones equipped with digital cameras have promoted the design of image-based Arabic Sign Language (ArSL) recognition systems. In this work, we introduce a new ArSL recognition system that is able to localize and recognize the alphabet of Arabic sign language using a Faster Region-based Convolutional Neural Network (Faster R-CNN). Specifically, Faster R-CNN is designed to extract and map the image features and learn the position of the hand in a given image. Additionally, the proposed approach alleviates two challenges: the choice of the relevant features used to encode the sign’s visual descriptors, and the segmentation task intended to determine the hand region. For the implementation and assessment of the proposed Faster R-CNN based sign recognition system, we exploited VGG-16 and ResNet-18 models and collected a real ArSL image dataset. The proposed approach yielded 93% accuracy and confirmed the robustness of the proposed model against drastic background variations in the captured scenes.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_80-Arabic_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiclass Vehicle Classification Across Different Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120379</link>
        <id>10.14569/IJACSA.2021.0120379</id>
        <doi>10.14569/IJACSA.2021.0120379</doi>
        <lastModDate>2021-03-31T07:07:46.4470000+00:00</lastModDate>
        
        <creator>Aisha S. Azim</creator>
        
        <creator>Afshan Jafri</creator>
        
        <creator>Ashraf Alkhairy</creator>
        
        <subject>Vehicle detection; vehicle recognition; multiclass learning; boosting; GentleBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Vehicle detection and classification are necessary components in a variety of useful applications related to traffic, security, and autonomous driving systems. Many studies have focused on recognizing vehicles from a single perspective, such as the rear of other cars seen from the driving seat, but not from all possible perspectives, including the aerial view. In addition, such systems are usually given prior knowledge of the specific kind of vehicle, such as the fact that it is a car as opposed to a bus, before deducing other information about it. One popular classification technique is boosting, where weak classifiers are combined to form a strong classifier. However, most boosting applications treat complex classification problems as a combination of binary problems. This paper explores in detail the development of a multi-class classifier that recognizes vehicles of any type, from any view, without prior information, and without breaking the task into binary problems. Instead, a single multi-class application of the GentleBoost algorithm is used. This system is compared to a similar system built from a combination of separate classifiers, each of which classifies a single vehicle type. The results show that a single multi-class classifier clearly outperforms a combination of separate classifiers, and demonstrate that a simple boosting classifier is sufficient for recognizing any type of vehicle viewed from any perspective, without the need to represent the problem as a complex 3D model.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_79-Multiclass_Vehicle_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Concatenative Speech Recognition using Morphemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120378</link>
        <id>10.14569/IJACSA.2021.0120378</id>
        <doi>10.14569/IJACSA.2021.0120378</doi>
        <lastModDate>2021-03-31T07:07:46.4170000+00:00</lastModDate>
        
        <creator>Afshan Jafri</creator>
        
        <subject>Morphemes; sub-lexemes; speech recognition; Arabic; concatenative morphology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>This paper adopts a novel sub-lexical approach to construct viable continuous speech recognition systems with scalable vocabulary that use the components of words to form the elements of pronunciation dictionaries and recognition lattices. The proposed Concatenative ASR family utilizes combination rules between morphemes (prefixes, stems, and suffixes), along with their theoretical grammatical categories. The constrained structure reduces invalid words by using grammar rules governing agglutination of affixes with stems, while having a large vocabulary space and hence fewer out-of-vocabulary words. In pursuing this approach, the project develops automatic speech recognition (ASR) parameterized models, designs parameter values, constructs and implements ASR systems, and analyzes the characteristics of these systems. The project designs parameter values in the context of Arabic to yield a subset hierarchy of vocabularies of the ASR systems facilitating meaningful analysis. It investigates the characteristics of the ASR systems with respect to vocabulary, recognition lattice, dictionary, and word error rate (WER). In the experiments, the standard Word ASR model has the best characteristics for vocabulary of up to five thousand words and the Concatenative ASR family is most appropriate for vocabulary of up to half a million words. The paper shows that the approach used encompasses fundamentally different processes of word formation and thus is applicable to languages that exhibit concatenative word-formation processes.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_78-Concatenative_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Attention on Measurable and Behavioral-driven Complete Service Composition Design Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120377</link>
        <id>10.14569/IJACSA.2021.0120377</id>
        <doi>10.14569/IJACSA.2021.0120377</doi>
        <lastModDate>2021-03-31T07:07:46.4000000+00:00</lastModDate>
        
        <creator>Ilyass El Kassmi</creator>
        
        <creator>Radia Belkeziz</creator>
        
        <creator>Zahi Jarir</creator>
        
        <subject>Non-Functional requirements composition; behavioral non-functional requirements; quantifiable non-functional requirements; model checking; web service composition formalization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Web service technology has proved its effectiveness in the digital revolution we are facing. This success unfortunately raises increasingly complex obstacles, particularly related to service composition. One of them is the integration of Non-Functional Requirements (NFRs) at each step of the service composition process, from the abstract service composition specification to the generation of verified, concrete composed services. Furthermore, the problem becomes more difficult when NFRs are addressed in both their quantifiable (i.e., Quality of Service) and behavioral aspects. Despite the relevant contributions in the literature, this challenge remains an open issue when considering NFR modeling, publishing, mutual integration, and the handling of conflicts and dependencies across the whole composition lifecycle. Consequently, we propose an approach showing how to efficiently weave the required NFRs with functional requirements in a complete composition lifecycle supporting the specification, formalization, model-checking verification, and integration steps of the desired concrete composite service. Patient Health Records in Regional and University Health Centers in Morocco are used as a case study to evaluate our approach.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_77-Deep_Attention_on_Measurable_and_Behavioral_Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-purpose Data Pre-processing Framework using Machine Learning for Enterprise Data Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120376</link>
        <id>10.14569/IJACSA.2021.0120376</id>
        <doi>10.14569/IJACSA.2021.0120376</doi>
        <lastModDate>2021-03-31T07:07:46.3830000+00:00</lastModDate>
        
        <creator>Venkata Ramana B</creator>
        
        <creator>Narsimha G</creator>
        
        <subject>Standard domain length; domain specific rule engine; double differential clustering; change percentage; dependency map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Growth in the data processing industry has automated decision making in domains such as engineering and education, as well as many fields of research. This growth has also accelerated the dependency of enterprise-scale data models on data-driven business decisions, and the accuracy of such decisions depends entirely on the correctness of the data. In the recent past, a good number of data cleaning methods have been proposed by various research attempts; nonetheless, most of these are criticized for being either too general or too specific. Thus, a multi-purpose yet domain-specific framework for enterprise-scale data pre-processing is in demand. Hence, this work proposes a novel data cleaning framework comprising missing value identification using the standard domain length with significantly reduced time complexity, domain-specific outlier identification using a customizable rule engine, generic outlier reduction using double differential clustering, and finally dimensionality reduction using change percentage dependency mapping. The outcomes of this framework are impressive: the outlier and missing value treatment achieves nearly 99% accuracy on a benchmark dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_76-A_Multi_Purpose_Data_Pre_Processing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pitch Contour Stylization by Marking Voice Intonation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120375</link>
        <id>10.14569/IJACSA.2021.0120375</id>
        <doi>10.14569/IJACSA.2021.0120375</doi>
        <lastModDate>2021-03-31T07:07:46.3530000+00:00</lastModDate>
        
        <creator>Sakshi Pandey</creator>
        
        <creator>Amit Banerjee</creator>
        
        <creator>Subramaniam Khedika</creator>
        
        <subject>Pitch contour; pitch marking; linear stylization; straight-line approximation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The stylization of the pitch contour is a primary task in speech prosody for the development of a linguistic model. It is performed either by statistical learning or by statistical analysis. Recent statistical learning models require a large amount of training data and rely on complex machine learning algorithms, whereas statistical analysis methods perform stylization based on the shape of the contour and require further processing to capture the voice intonations of the speaker. The objective of this paper is to devise a low-complexity transcription algorithm for the stylization of the pitch contour based on the voice intonation of a speaker. For this, we propose the use of pitch marks as the subset of points for stylizing the pitch contour. Pitch marks are instances of glottal closure in a speech waveform that capture the characteristics of the speech uttered by a speaker. The selected subset can interpolate the shape of the pitch contour and acts as a template capturing the intonation of a speaker’s voice, which can be used for designing applications in speech synthesis and speech morphing. The algorithm balances the quality of the stylized curve against its cost in terms of the number of data points used. We evaluate the performance of the proposed algorithm using the mean square error and the number of lines used to fit the pitch contour. Furthermore, we compare it with other existing stylization algorithms using the LibriSpeech ASR corpus.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_75-Pitch_Contour_Stylization_by_Marking_Voice_Intonation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Tweets Sentiment Analysis about Online Learning during COVID-19 in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120373</link>
        <id>10.14569/IJACSA.2021.0120373</id>
        <doi>10.14569/IJACSA.2021.0120373</doi>
        <lastModDate>2021-03-31T07:07:46.3230000+00:00</lastModDate>
        
        <creator>Asma Althagafi</creator>
        
        <creator>Ghofran Althobaiti</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <subject>Social media analytics; sentiment analysis; online learning; Arabic tweets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The COVID-19 pandemic can be considered the greatest challenge of our time and is defining and re-shaping many aspects of our life, such as learning and teaching, especially in the academic year 2020. While some people could adapt quickly to online learning, others consider it inefficient. The re-opening of schools and universities is currently under consideration; however, many experts in many countries suggested that at least one semester should remain online during the pandemic. Understanding the public’s emotional reaction to online learning has therefore become significant. This paper studies the attitude of the people of Saudi Arabia towards online learning. We have used a collection of Arabic tweets posted in 2020, collected mainly via hashtags that originated in Saudi Arabia. Our sentiment analysis shows that people have maintained a neutral response to online learning. This study will allow scholars and decision makers to understand the emotional effects of online learning on communities.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_73-Arabic_Tweets_Sentiment_Analysis_about_Online_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Data Research in GIS Database using Meshing Techniques and the Map-Reduce Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120374</link>
        <id>10.14569/IJACSA.2021.0120374</id>
        <doi>10.14569/IJACSA.2021.0120374</doi>
        <lastModDate>2021-03-31T07:07:46.3230000+00:00</lastModDate>
        
        <creator>Abdoulaye SERE</creator>
        
        <creator>Jean Serge Dimitri OUATTARA</creator>
        
        <creator>Didier BASSOLE</creator>
        
        <creator>Jose Arthur OUEDRAOGO</creator>
        
        <creator>Moubaric KABORE</creator>
        
        <subject>Map-reduce; big data; digital health; classification; Geographic Information System (GIS); COVID-19; Spark; MongoDB; NewSQL; NoSQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Health centers, laboratories, hospitals, and pharmacies everywhere have faced many challenges in delivering quality health services due to the limited availability of resources such as drugs, places, equipment, and specialists, often running a health deficit as the number of patients increases, for instance during the COVID-19 pandemic. Late information about these constraints from health service centers degrades service quality because of the delay between requesting a service on site and the response needed to deliver that service safely. None of these problems strengthens the prevention of, or the fight against, diseases in a region. This paper proposes a data research framework over a NoSQL database of GIS data, built around an abstract table that can be inherited or specialized by any adopted GIS solution, leading to central data management instead of installing several database sites. The central database accepts data updated in the back office by the data owner and supports data research based on meshing techniques and the map-reduce algorithm in the front office. Several meshing techniques are presented for clustering GIS data, with associated definitions of the map-reduce content, in order to improve processing time. Applied to health services, the experimental results reveal that this system helps improve drug management in pharmacies; it could also be used in other fields such as finance, education, and shopping through agencies spread over the territory, to strengthen national information systems and harmonise data.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_74-A_Framework_for_Data_Research_in_GIS_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parameter-free Clustering Algorithm based K-means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120372</link>
        <id>10.14569/IJACSA.2021.0120372</id>
        <doi>10.14569/IJACSA.2021.0120372</doi>
        <lastModDate>2021-03-31T07:07:46.2900000+00:00</lastModDate>
        
        <creator>Said Slaoui</creator>
        
        <creator>Zineb Dafir</creator>
        
        <subject>Data mining; clustering; overlapping clustering; k-means; cluster centre initialization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Clustering is one of the most relevant data mining tasks, aiming to process data sets in an effective way. This paper introduces a new clustering heuristic combining the E-Transitive heuristic, adapted to quantitative data, with the k-means algorithm, with the goal of ensuring the optimal number of clusters and suitable initial cluster centres for k-means. The suggested heuristic, called PFK-means, is a parameter-free clustering algorithm, since it does not require prior initialization of the number of clusters. Thus, it progressively generates the initial cluster centres until the appropriate number of clusters is automatically detected. Moreover, this paper presents a thorough comparison between the PFK-means heuristic, its diverse variants, the E-Transitive heuristic for clustering quantitative data, and the traditional k-means, in terms of the sum of squared errors and accuracy on different data sets. The experimental results reveal that, in general, the proposed heuristic and its variants find the appropriate number of clusters for different real-world data sets and deliver good cluster quality compared to the traditional k-means. Furthermore, experiments conducted on synthetic data sets report the performance of this heuristic in terms of processing time.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_72-A_Parameter_free_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Network-based Relationship Identification Framework to Discriminate Fake Profile Over Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120371</link>
        <id>10.14569/IJACSA.2021.0120371</id>
        <doi>10.14569/IJACSA.2021.0120371</doi>
        <lastModDate>2021-03-31T07:07:46.2600000+00:00</lastModDate>
        
        <creator>Suneet Joshi</creator>
        
        <creator>Deepak Singh Tomar</creator>
        
        <subject>Social media; anomaly detection; malicious activity; spam account; fake account; sockpuppet; deep neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The use of social media for personal, business, and political propaganda activities has grown, and with it the anti-social activities it attracts. Anti-social elements gain a wider platform to spread negativity by hiding their identity behind fake and false profiles. In this paper, an analytical and methodological user identification framework is developed that binds implicit and explicit link relationships over the end-user’s graphical perspective to identify malicious users, their communal information, and sockpuppet nodes. In addition, this work applies a deep neural network over the graphical and linguistic perspectives of end-users to classify them as malicious, fake, or genuine. This concept also helps identify the trade-off between the similarity of node attributes and the density of connections when classifying identical profiles as sockpuppets over social media.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_71-Deep_Neural_Network_based_Relationship.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Mining of High Utility Sequential Patterns with Negative Item Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120370</link>
        <id>10.14569/IJACSA.2021.0120370</id>
        <doi>10.14569/IJACSA.2021.0120370</doi>
        <lastModDate>2021-03-31T07:07:46.2430000+00:00</lastModDate>
        
        <creator>Manoj Varma</creator>
        
        <creator>Saleti Sumalatha</creator>
        
        <creator>Akhileshwar Reddy</creator>
        
        <subject>High utility sequential pattern mining; big data; utility mining; negative utility; distributed algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Sequential pattern mining has been widely used to solve various business problems, including frequent user click patterns, customer purchase analysis, gene microarray data analysis, etc. Many studies have sought to extract insightful patterns of this kind, mostly concentrating on high utility sequential pattern (HUSP) mining with positive values and without a distributed approach. All the existing solutions are centralized, which incurs greater computation and communication costs. In this paper, we introduce a novel algorithm for mining HUSPs including negative item values using a distributed approach. We use Hadoop MapReduce to process the data in parallel. Various pruning techniques are proposed to minimize the search space in a distributed environment, thus reducing the expense of processing. To our knowledge, no algorithm has previously been proposed to mine high utility sequential patterns with negative item values in a distributed environment. So, we design a novel algorithm called DHUSP-N (Distributed High Utility Sequential Pattern mining with Negative values). DHUSP-N can mine high utility sequential patterns considering negative item utilities from big data.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_70-Distributed_Mining_of_High_Utility_Sequential_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Zero-resource Multi-dialectal Arabic Natural Language Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120369</link>
        <id>10.14569/IJACSA.2021.0120369</id>
        <doi>10.14569/IJACSA.2021.0120369</doi>
        <lastModDate>2021-03-31T07:07:46.2130000+00:00</lastModDate>
        
        <creator>Muhammad Khalifa</creator>
        
        <creator>Hesham Hassan</creator>
        
        <creator>Aly Fahmy</creator>
        
        <subject>Natural language processing; natural language understanding; low-resource learning; semi-supervised learning; named entity recognition; part-of-speech tagging; sarcasm detection; pre-trained language models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A reasonable amount of annotated data is required for fine-tuning pre-trained language models (PLM) on down-stream tasks. However, obtaining labeled examples for different language varieties can be costly. In this paper, we investigate the zero-shot performance on Dialectal Arabic (DA) when fine-tuning a PLM on modern standard Arabic (MSA) data only— identifying a significant performance drop when evaluating such models on DA. To remedy such performance drop, we propose self-training with unlabeled DA data and apply it in the context of named entity recognition (NER), part-of-speech (POS) tagging, and sarcasm detection (SRD) on several DA varieties. Our results demonstrate the effectiveness of self-training with unlabeled DA data: improving zero-shot MSA-to-DA transfer by as large as ~10% F₁ (NER), 2% accuracy (POS tagging), and 4.5% F₁ (SRD). We conduct an ablation experiment and show that the performance boost observed directly results from the unlabeled DA examples used for self-training. Our work opens up opportunities for leveraging the relatively abundant labeled MSA datasets to develop DA models for zero and low-resource dialects. We also report new state-of-the-art performance on all three tasks and open-source our fine-tuned models for the research community.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_69-Zero_resource_Multi_dialectal_Arabic_Natural_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deployment and Migration of Virtualized Services with Joint Optimization of Backhaul Bandwidth and Load Balancing in Mobile Edge-Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120368</link>
        <id>10.14569/IJACSA.2021.0120368</id>
        <doi>10.14569/IJACSA.2021.0120368</doi>
        <lastModDate>2021-03-31T07:07:46.1970000+00:00</lastModDate>
        
        <creator>Tarik Chanyour</creator>
        
        <creator>Mohammed Oucamah Cherkaoui Malki</creator>
        
        <subject>Mobile edge-cloud computing; delay-sensitive services; container migration; container deployment; backhaul bandwidth; load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Mobile edge-cloud computing environments appear as a novel computing paradigm offering effective processing and storage solutions for delay-sensitive applications. Besides, container-based virtualization technology is increasingly solicited due to its natural lightweight and portability, as well as its small migration overhead, which enables seamless service migration and load balancing. However, given user mobility, the users’ demands in terms of backhaul bandwidth are a critical parameter that influences the delay constraints of the running applications. Accordingly, a Binary Integer Programming (BIP) optimization problem is formulated. It minimizes the users’ perceived backhaul delays and enhances the load-balancing degree in order to offer more chance of accepting new requests across the network. Also, by introducing bandwidth constraints, the user backhaul bandwidth available after placement is enhanced. Then, the methodology adopted to design two heuristic algorithms, based on Ant Colony System (ACS) and Simulated Annealing (SA), is presented. The proposed schemes are compared using different metrics, and the benefits of the ACS-based solution over the SA-based as well as a genetic algorithm (GA) based solution are demonstrated. Indeed, the ACS algorithm yields better normalized cost and total backhaul cost values than the other solutions.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_68-Deployment_and_Migration_of_Virtualized_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Collaborative Filtering for Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120367</link>
        <id>10.14569/IJACSA.2021.0120367</id>
        <doi>10.14569/IJACSA.2021.0120367</doi>
        <lastModDate>2021-03-31T07:07:46.1800000+00:00</lastModDate>
        
        <creator>Maryam Al-Ghamdi</creator>
        
        <creator>Hanan Elazhary</creator>
        
        <creator>Aalaa Mojahed</creator>
        
        <subject>Co-clustering; collaborative filtering; KNN; NMF; recommender systems; slope one</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Recently, due to the increasing amount of data on the Internet along with the growth of product purchases via e-commerce websites, Recommender Systems (RS) play an important role in guiding customers to products they may prefer. Furthermore, these systems help companies advertise their products to the most likely customers and therefore raise their revenues. Collaborative Filtering (CF) is the most popular RS approach. It is classified into memory-based and model-based filtering, and memory-based filtering is in turn classified into user-based and item-based. Several algorithms have been proposed for CF. In this paper, a comparison has been performed between different CF algorithms to assess their performance. Specifically, we evaluated the K-Nearest Neighbor (KNN), Slope One, co-clustering, and Non-negative Matrix Factorization (NMF) algorithms. The KNN algorithm is representative of the memory-based CF approach (both user-based and item-based); the other three algorithms fall under the model-based CF approach. In our experiments, we used a popular MovieLens dataset and six evaluation metrics. Our results reveal that the KNN algorithm for item-based CF outperformed all other algorithms examined in this paper.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_67-Evaluation_of_Collaborative_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection using Deep Learning Long Short-term Memory with Wrapper Feature Selection Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120366</link>
        <id>10.14569/IJACSA.2021.0120366</id>
        <doi>10.14569/IJACSA.2021.0120366</doi>
        <lastModDate>2021-03-31T07:07:46.1500000+00:00</lastModDate>
        
        <creator>Sana Al Azwari</creator>
        
        <creator>Hamza Turabieh</creator>
        
        <subject>Intrusion detection; feature selection; long short-term memory; binary genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Recently, many companies have moved to cloud computing systems to enhance their performance and productivity. These systems allow the execution of applications, data, and infrastructures on cloud platforms (i.e., online), which increases the number of attacks on such systems. As a result, building robust Intrusion Detection Systems (IDS) is needed. The main goal of an IDS is to distinguish normal from abnormal network traffic. In this paper, we propose a hybrid approach combining an Enhanced Binary Genetic Algorithm (EBGA), as a wrapper feature selection (FS) algorithm, with Long Short-Term Memory (LSTM). A novel injection method to prevent premature convergence of the GA is proposed: an intelligent k-means algorithm examines the solution distribution in the search space, and once 80% of the solutions belong to one cluster, the injection method (i.e., adding new solutions) redistributes the solutions over the search space. EBGA reduces the search space as a preprocessing step, while LSTM works as a binary classification method. UNSW-NB15, a real-world public dataset, is used to evaluate the proposed system. The obtained results show the ability of the feature selection method to enhance the overall performance of LSTM.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_66-Intrusion_Detection_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Advancement in Speech Recognition for Bangla: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120365</link>
        <id>10.14569/IJACSA.2021.0120365</id>
        <doi>10.14569/IJACSA.2021.0120365</doi>
        <lastModDate>2021-03-31T07:07:46.1330000+00:00</lastModDate>
        
        <creator>Sadia Sultana</creator>
        
        <creator>M. Shahidur Rahman</creator>
        
        <creator>M. Zafar Iqbal</creator>
        
        <subject>Bangla ASR; Bangla speech corpora; speaker dependency; vocabulary size; classification approaches; challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>This paper presents a brief study of remarkable works on the development of Automatic Speech Recognition (ASR) systems for the Bangla language. It discusses the speech corpora available for this language and reports the major contributions made in this research paradigm in the last decade. Some important design issues in developing a speech recognizer, namely levels of recognition, vocabulary size, speaker dependency, and classification approaches, are defined in this paper in order of the complexity of speech recognition. It also highlights some challenges that are important to resolve in this exciting research field. The studies carried out over the last decade on Bangla speech recognition are briefly reviewed in chronological order. It was found that the choice of classification model and training dataset plays an important role in speech recognition.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_65-Recent_Advancement_in_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering of Association Rules for Big Datasets using Hadoop MapReduce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120364</link>
        <id>10.14569/IJACSA.2021.0120364</id>
        <doi>10.14569/IJACSA.2021.0120364</doi>
        <lastModDate>2021-03-31T07:07:46.1030000+00:00</lastModDate>
        
        <creator>Salahadin A. Moahmmed</creator>
        
        <creator>Mohamed A. Alasow</creator>
        
        <creator>El-Sayed M. El-Alfy</creator>
        
        <subject>Internet of Things; big data mining; clustering; association rules; Hadoop</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Mining association rules is essential in the discovery of knowledge hidden in datasets. There are many efficient association rule mining algorithms; however, they may generate a large number of rules when applied to big datasets. A large number of rules makes knowledge discovery a daunting task, because too many rules are difficult to understand, interpret, or visualize. To reduce the number of discovered rules, researchers have proposed approaches such as rule pruning, summarizing, or clustering. For the flourishing field of big data and the Internet of Things (IoT), more effective solutions are crucial to cope with the rapid evolution of data. In this paper, we propose a novel parallel association rule clustering approach based on Hadoop MapReduce. We ran many experiments to study the performance of the proposed approach, with promising results, e.g. the lowest scaleup was 77%.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_64-Clustering_of_Association_Rules_for_Big_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Research Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120363</link>
        <id>10.14569/IJACSA.2021.0120363</id>
        <doi>10.14569/IJACSA.2021.0120363</doi>
        <lastModDate>2021-03-31T07:07:46.0870000+00:00</lastModDate>
        
        <creator>Lassad Mejri</creator>
        
        <creator>Henda Hajjami Ben Ghezala</creator>
        
        <creator>Raja Hanafi</creator>
        
        <subject>Research projects; computer research project ontology; knowledge management; project memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Most research project managers, laboratory directors, young researchers at the beginning of a thesis, and leaders of professional research projects are effective at dealing with planned, scheduled events: they know how to conduct their research projects according to the traditional knowledge areas of classical processes tied to time, cost, human resources, risk, stakeholders, and quality management. Unfortunately, they may have little specific training in selecting the best research theme, and often no experience in identifying adequate research problems. Despite their motivation for the selected project and research theme, they do not fully master the research problematic and how to deal with: the literature for the selected research theme (sources, documents, reports, and technical folders); the list of problems encountered while conducting the research theme, and how to profit from the solution approaches obtained for these kinds of research problems; and how to decide whether the research theme and its connected problems have already been resolved by another research team. This paper develops this idea and proposes an ontology named &quot;Onto-Research-Project&quot; that formalizes the domain knowledge of computer research projects. Our final goal is to propose an approach for reusing historical research projects, whose output is a computer research project memory. To this end, we use and restructure the knowledge obtained from the computer research projects stored in the “HAL-Archives-Nouvelles” database.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_63-Computer_Research_Project_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Intersection Design for Traffic, Pedestrian and Emergency Transit Clearance using Fuzzy Inference System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120362</link>
        <id>10.14569/IJACSA.2021.0120362</id>
        <doi>10.14569/IJACSA.2021.0120362</doi>
        <lastModDate>2021-03-31T07:07:46.0700000+00:00</lastModDate>
        
        <creator>Aditi Agrawal</creator>
        
        <creator>Rajeev Paulus</creator>
        
        <subject>Adaptive traffic light control; smart intersection; fuzzy logic; emergency vehicle; pedestrian crossing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Traffic flow is regulated and controlled with the aid of traffic signals implemented at all major intersections in urban areas. With the increase in vehicles, traditional control strategies are incapable of clearing heavy traffic, which leads to long traffic queues and prolonged waiting times at intersections. Smart cities are increasingly adopting solutions based on smart traffic lights to improve the flow of vehicles. A major demand arises to increase the efficiency of traffic controllers with the objectives of minimizing traffic congestion, prioritizing emergency transit, and giving way to pedestrians crossing the lanes at an intersection. This requires leveraging existing techniques to identify the best solutions at the lowest possible cost. This paper proposes a Fuzzy Adaptive Control System (FACS) that uses fuzzy logic to decide the phase sequence and green time for each lane based on sensed input parameters. It is designed to improve traffic clearance at an isolated intersection, especially during peak traffic hours, while giving precedence to emergency vehicles as soon as they are detected and assisting pedestrian passage, thus reducing waiting time at the intersection. The performance of the proposed FACS is evaluated through simulations and compared with a Pre-Timed Control System (PTCS) and a Traffic Density-based Control System (TDCS) at a busy intersection with lanes leading to offices, schools, and hospitals. Simulation results show significant improvement over PTCS and TDCS in terms of traffic clearance, immediate handling of emergency vehicles, and preference for pedestrian passage at the intersection.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_62-Smart_Intersection_Design_for_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Study on Microsoft Malware Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120361</link>
        <id>10.14569/IJACSA.2021.0120361</id>
        <doi>10.14569/IJACSA.2021.0120361</doi>
        <lastModDate>2021-03-31T07:07:46.0400000+00:00</lastModDate>
        
        <creator>Rohit Chivukula</creator>
        
        <creator>Mohan Vamsi Sajja</creator>
        
        <creator>T. Jaya Lakshmi</creator>
        
        <creator>Muddana Harini</creator>
        
        <subject>Multi-class classification; malware detection; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Malware is a computer program that causes harm to software. Cybercriminals use malware to gain access to sensitive information exchanged via the software it infects. An important task in protecting a computer system from a malware attack is identifying whether given software is malware. Tech giants like Microsoft are engaged in developing anti-malware products; Microsoft&#39;s anti-malware products are installed on over 160M computers worldwide and examine over 700M computers monthly. This generates a huge number of data points that can be analyzed as potential malware. Microsoft launched a challenge on the coding competition platform Kaggle.com to predict the probability of a computer system running the Windows operating system being affected by malware, given features of the Windows machine. The dataset provided by Microsoft consists of 10,868 instances with 81 features, classified into nine classes. These features correspond to files of type asm (data with assembly language code) as well as binary format. In this work, we build a multi-class classification model to determine which class a malware belongs to. We use K-Nearest Neighbors, Logistic Regression, Random Forest, and XGBoost in a multi-class environment. As some of the features are categorical, we use one-hot encoding to make them suitable for the classifiers. Prediction performance is evaluated using log loss. We analyze accuracy using only asm features, only binary features, and finally both. XGBoost provides a better log-loss value than the other classifiers: 0.078 when only asm features are considered, 0.048 when only binary features are used, and a final log loss of 0.03 when all features are used.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_61-Empirical_Study_on_Microsoft_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Flow Incorporated Neural Network based Lightweight Video Compression Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120360</link>
        <id>10.14569/IJACSA.2021.0120360</id>
        <doi>10.14569/IJACSA.2021.0120360</doi>
        <lastModDate>2021-03-31T07:07:46.0100000+00:00</lastModDate>
        
        <creator>Sangeeta </creator>
        
        <creator>Preeti Gulia</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <subject>Deep learning; video compression; autoencoder; SSIM; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The increasing volume of video content on the internet has motivated the exploration of novel approaches in the video compression domain. Though neural-network-based architectures have already emerged as the de facto standard in image compression and analytics, their application to video compression also yields promising results. Adaptive and efficient compression techniques are required for video transmission over varying bandwidth. Several deep-learning-based techniques and enhancements have been proposed and tested, but they did not exhibit fully optimal behavior and were not end-to-end trained and optimized. In the quest for a purely end-to-end trainable compression technique, a deep-learning-based video compression architecture is proposed, comprising a frame autoencoder, a flow autoencoder, and a motion extension network for the reconstruction of predicted frames. The video compression network has been designed incrementally and trained with a random emission steps strategy. The proposed work yields a significant improvement in visual perception quality, measured in SSIM and PSNR, compared to some state-of-the-art techniques, but at a trade-off in frame reconstruction time.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_60-Comprehensive_Analysis_of_Flow_incorporated_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Question Answering Systems: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120359</link>
        <id>10.14569/IJACSA.2021.0120359</id>
        <doi>10.14569/IJACSA.2021.0120359</doi>
        <lastModDate>2021-03-31T07:07:45.9930000+00:00</lastModDate>
        
        <creator>Sarah Saad Alanazi</creator>
        
        <creator>Nazar Elfadil</creator>
        
        <creator>Mutsam Jarajreh</creator>
        
        <creator>Saad Algarni</creator>
        
        <subject>Question answering systems; syntax; knowledge systems; deep learning; machine learning; systematic literature review; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Question answering systems (QAS) are developed to answer questions presented in natural language by extracting the answer. The development of QAS aims to make the Web better suited to human use by eliminating the need to sift through many search results manually to determine the correct answer to a question. Accordingly, the aim of this study was to provide an overview of the current state of QAS research, to highlight the key limitations and gaps in the existing body of knowledge relating to QAS, and to identify the most effective methods utilized in the design of QAS. A systematic literature review was selected as the most appropriate methodology for studying the research topic; this method differs from the conventional literature review in being more comprehensive and objective. Based on the findings, QAS is a highly active area of research, with scholars taking diverse approaches in the development of their systems. Limitations observed in these studies include the narrowly focused nature of current QAS, weaknesses in the models used as building blocks for QAS, the need for standard datasets and question formats (which limits the applicability of QAS in practical settings), and the failure of researchers to evaluate their QAS solutions comprehensively. The most effective methods for designing QAS include focusing on syntax and context, utilizing word encoding and knowledge systems, leveraging deep learning, and using elements such as machine learning and artificial intelligence. Going forward, modular designs ought to be encouraged to foster collaboration in the creation of QAS.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_59-Question_Answering_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automata-based Algorithm for Multiple Word Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120358</link>
        <id>10.14569/IJACSA.2021.0120358</id>
        <doi>10.14569/IJACSA.2021.0120358</doi>
        <lastModDate>2021-03-31T07:07:45.9630000+00:00</lastModDate>
        
        <creator>Majed AbuSafiya</creator>
        
        <subject>Algorithms; finite state automata; word matching; KMP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>In this paper, an automata-based algorithm that finds the valid shifts of a given set of words W in a text T is presented. Unlike known string matching algorithms, a preprocessing phase is applied to T and not to the words being searched for. In this phase, a deterministic finite state automaton (DFA) that recognizes the words in T is built and augmented with their shifts in T. The preprocessing phase is relatively expensive in terms of time and space; however, it needs to be done only once for any number of words to match in a given text document. The algorithm is analyzed for complexity, implemented, and compared with an adjusted version of the KMP algorithm, showing better performance than KMP for a large number of words to match in T.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_58-Automata_Based_Algorithm_for_Multiple_Word_Matching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fog Network Area Management Model for Managing Fog-cloud Resources in IoT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120357</link>
        <id>10.14569/IJACSA.2021.0120357</id>
        <doi>10.14569/IJACSA.2021.0120357</doi>
        <lastModDate>2021-03-31T07:07:45.9470000+00:00</lastModDate>
        
        <creator>Anwar Alghamdi</creator>
        
        <creator>Ahmed Alzahrani</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <subject>Resource management; job scheduling; load balancing; mobile agent software; fog computing; Internet of Things (IoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The Internet of Things (IoT) paradigm is at the forefront of present and future research activities. The enormous amount of sensing data needing to be processed increases dramatically in volume, variety, and velocity. In response, cloud computing became involved in handling the challenges of collecting, storing, and processing these data. Fog computing is a model used to support cloud computing by implementing pre-processing tasks close to the end-user, achieving low latency, lower power consumption, and high scalability. However, some resources in the fog network are not suitable for certain tasks, or the number of requests exceeds their capacity. It is therefore more efficient to reduce the number of tasks sent to the cloud: other fog resources may be idle, and it is better to federate them than to forward tasks to the cloud. This issue affects the fog environment&#39;s performance when dealing with large applications or applications sensitive to processing time. This research aims to propose a holistic fog-based resource management model that efficiently discovers all available services placed on resources considering their capabilities, deploys jobs onto appropriate resources in the network effectively, and improves the performance of the IoT environment. Our proposed model consists of three main components, explained in detail in this paper: job scheduling, job placement, and mobile agent software.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_57-Fog_Network_Area_Management_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient and Secure Group based Collusion Resistant Public Auditing Scheme for Cloud Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120356</link>
        <id>10.14569/IJACSA.2021.0120356</id>
        <doi>10.14569/IJACSA.2021.0120356</doi>
        <lastModDate>2021-03-31T07:07:45.9170000+00:00</lastModDate>
        
        <creator>Smita Chaudhari</creator>
        
        <creator>Gandharba Swain</creator>
        
        <subject>Public auditing; collusion attack; ring signature; message authentication code; indistinguishability obfuscation; dynamic data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Tremendous changes have been seen in the arena of cloud computing in recent years. Many organizations share their data or files on cloud servers to avoid infrastructure and maintenance costs. Employees from different departments create groups and share sensitive information among group members. Users revoked from a group may try to access this information by colluding with an untrusted cloud server. Many researchers have specified revocation procedures using re-signature and proxy re-signature concepts to deflect collusion between the cloud server and a revoked user, but these techniques are costly in terms of communication overhead and verification cost when combined with auditing techniques that prove the integrity of data outsourced to the cloud server. To reduce this cost, a collusion-resistant public auditing scheme with group member revocation is proposed in this paper. In this scheme, the data owner regularly updates the list of currently valid members, which a third-party auditor uses to validate signatures so that collusion is avoided. To verify the integrity of outsourced data, the proposed scheme uses a modern cryptographic technique, indistinguishability obfuscation, combined with a one-way function, which can reduce the verification time significantly. Experimental results show that the proposed scheme decreases the communication overhead and verification cost compared to existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_56-Efficient_and_Secure_Group_based_Collusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Malware based on Analyzing Abnormal behaviors of PE File</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120355</link>
        <id>10.14569/IJACSA.2021.0120355</id>
        <doi>10.14569/IJACSA.2021.0120355</doi>
        <lastModDate>2021-03-31T07:07:45.9000000+00:00</lastModDate>
        
        <creator>Lai Van Duong</creator>
        
        <creator>Cho Do Xuan</creator>
        
        <subject>Malware; portable executable file format; malware detection; abnormal behaviors; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Attacks that spread malware are a dangerous form of attack that is very difficult to detect and prevent. Techniques that spread malware through users and then escalate privileges in the system are increasingly used by attackers. The three main methods for tracking and detecting malware currently being studied and applied are signature-based, behavior-based, and hybrid techniques. In particular, the behavior-based technique, with the support of machine learning algorithms, has shown high efficiency. On the other hand, in practice, attackers often find various ways and techniques to hide the behaviors of malware within its Portable Executable file format (PE File). This makes it difficult for surveillance systems to detect the malware. For these reasons, in this paper we propose a malware detection method based on PE File analysis using machine learning and deep learning algorithms. Our main contributions are proposing features that represent abnormal behaviors of malware based on the PE File, and demonstrating the efficiency of several machine learning algorithms in the classification process.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_55-Detecting_Malware_based_on_Analyzing_Abnormal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motor Insurance Claim Status Prediction using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120354</link>
        <id>10.14569/IJACSA.2021.0120354</id>
        <doi>10.14569/IJACSA.2021.0120354</doi>
        <lastModDate>2021-03-31T07:07:45.8700000+00:00</lastModDate>
        
        <creator>Endalew Alamir</creator>
        
        <creator>Teklu Urgessa</creator>
        
        <creator>Ashebir Hunegnaw</creator>
        
        <creator>Tiruveedula Gopikrishna</creator>
        
        <subject>Motor insurance claim; machine learning; classification; Random Forest (RF); Support Vector Machine (SVM); supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The insurance claim is a basic problem for insurance companies. Insurers constantly face the challenge of growing claim losses, because claim fraud occurs and the volume of claim data in insurance companies keeps increasing. As a result, it is difficult to classify an insured&#39;s claim status during the claim review process. Therefore, the aim of this study was to build a machine learning model that classifies and predicts motor insurance claim status. To achieve this, the missing-value ratio, Z-score, encoding techniques, and entropy were used for dataset preparation. The final preprocessed dataset was split into training and testing sets using K-fold cross-validation. Finally, the prediction model was built using Random Forest (RF) and multi-class Support Vector Machine (SVM). The performance of the RF and multi-class SVM classifiers was evaluated using accuracy, precision, recall, and F-measure. The models are capable of predicting motor insurance claim status with 98.36% and 98.17% accuracy for the RF and SVM classifiers, respectively; the RF classifier is thus slightly better than the multi-class SVM. Developing and implementing a hybrid model that benefits from the advantages of different algorithms, with a graphical user interface to apply the solution to real-world insurance problems, is a pressing direction for future work.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_54-Motor_Insurance_Claim_Status_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Task Scheduling in Cloud Computing using Multi-objective Hybrid Ant Colony Optimization Algorithm for Energy Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120353</link>
        <id>10.14569/IJACSA.2021.0120353</id>
        <doi>10.14569/IJACSA.2021.0120353</doi>
        <lastModDate>2021-03-31T07:07:45.8370000+00:00</lastModDate>
        
        <creator>Fatima Umar Zambuk</creator>
        
        <creator>Abdulsalam Ya’u Gital</creator>
        
        <creator>Mohammed Jiya</creator>
        
        <creator>Nahuru Ado Sabon Gari</creator>
        
        <creator>Badamasi Ja’afaru</creator>
        
        <creator>Aliyu Muhammad</creator>
        
        <subject>Ant colony; scheduling; hybrid; foraging; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The efficiency of Internet services is shaped by the cloud computing process. Cloud computing faces various challenges, such as security and the efficient allocation of resources, failures of which result in wasted resources. Researchers have explored a number of approaches over the past decade to overcome these challenges. The main objective of this research is to explore task scheduling in cloud computing using a multi-objective hybrid Ant Colony Optimization (ACO) algorithm with Bacterial Foraging behavior (ACOBF). The ACOBF technique maximizes resource utilization (service provider profit) and reduces makespan and user job request wait times. ACOBF classifies user job requests into three classes based on the sensitivity of the protocol associated with each request, schedules the job requests in each class based on their deadlines, and creates a Virtual Machine (VM) cluster to minimize energy consumption. Based on comprehensive experimentation, the simulation results show that ACOBF outperforms the benchmark techniques in terms of convergence, diversity of solutions, and stability.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_53-Efficient_Task_Scheduling_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Multi-label Classifier Chain Method for Automated Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120352</link>
        <id>10.14569/IJACSA.2021.0120352</id>
        <doi>10.14569/IJACSA.2021.0120352</doi>
        <lastModDate>2021-03-31T07:07:45.8230000+00:00</lastModDate>
        
        <creator>Adeleke Abdullahi</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <creator>Shamsul Kamal Ahmad Khalid</creator>
        
        <creator>Zuhaila Ali Othman</creator>
        
        <subject>Text classification; multi-label classification; classifier chain; particle swarm optimization; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Automated text classification is the task of grouping documents (text) automatically into categories from a predefined set. The conventional approach to classification maps a single class label to each data point (instance). In multi-label classification (MLC), the task is to develop models that can predict multiple class labels for a data instance. Several MLC methods exist, such as classifier chains (CC) and binary relevance (BR). However, these methods have drawbacks, such as the random label sequence ordering issue, which is peculiar to the classifier chain method and which this study attempts to address. In this paper, a hybrid heuristic evolutionary-based technique is proposed. The proposed PSOGCC is a combination of particle swarm optimization (PSO) and a genetic algorithm (GA): genetic operators from GA are integrated with the basic PSO algorithm to find the global best solution, representing an optimized label sequence order in the chain classifier. In the experiment, three MLC methods (BR, CC, and PSOGCC) are implemented using five benchmark multi-label datasets and five standard evaluation metrics. The proposed PSOGCC method improved the predictive performance of the chain classifier, obtaining the best results of 98.66% accuracy, 99.5% precision, 99.16% recall, 99.33% F1-score, and a Hamming loss of 0.0011.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_52-An_Improved_Multi_label_Classifier_Chain_method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SGBBA: An Efficient Method for Prediction System in Machine Learning using Imbalance Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120351</link>
        <id>10.14569/IJACSA.2021.0120351</id>
        <doi>10.14569/IJACSA.2021.0120351</doi>
        <lastModDate>2021-03-31T07:07:45.7900000+00:00</lastModDate>
        
        <creator>Saiful Islam</creator>
        
        <creator>Umme Sara</creator>
        
        <creator>Abu Kawsar</creator>
        
        <creator>Anichur Rahman</creator>
        
        <creator>Dipanjali Kundu</creator>
        
        <creator>Diganta Das Dipta</creator>
        
        <creator>A.N.M. Rezaul Karim</creator>
        
        <creator>Mahedi Hasan</creator>
        
        <subject>Imbalanced dataset; sub sample; accuracy; fraud; confusion matrix; bagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A real-world big dataset with a disproportionate class distribution is called an imbalanced dataset, and it badly affects the predictive results of machine learning classification algorithms. Many datasets in machine learning suffer from the class imbalance problem, while most machine learning algorithms work best with roughly equal sample counts for every class. A variety of solutions have been suggested by different researchers in the past to deal with imbalanced datasets, but the performance of these methods remains below a satisfactory level, and it is very difficult to design an efficient machine learning method without first balancing the imbalanced dataset. In this paper we design a method named SGBBA: an efficient method for prediction systems in machine learning using imbalanced datasets. The proposed method maximizes performance in terms of accuracy and the confusion matrix. It consists of two modules: designing the method and method-based prediction. Experiments with two benchmark datasets and one highly imbalanced credit card dataset are performed, and the performance is compared with that of the SMOTE resampling method. F-score, specificity, precision, and recall are used as evaluation metrics to test the performance of the proposed method on any kind of imbalanced dataset. The comparison of results shows that the proposed method attains more effective and robust performance than the existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_51-SGBBA_An_Efficient_Method_for_Prediction_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Multi-level Protection (Mlp) Policy Implementation using Graph Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120350</link>
        <id>10.14569/IJACSA.2021.0120350</id>
        <doi>10.14569/IJACSA.2021.0120350</doi>
        <lastModDate>2021-03-31T07:07:45.7430000+00:00</lastModDate>
        
        <creator>Lingala Thirupathi</creator>
        
        <creator>Venkata Nageswara Rao Padmanabhuni</creator>
        
        <subject>Database; graph; protection; multi-level</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2021.0120350.retraction</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_50-Multi_Level_Protection_MLP_Policy_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Digital Forensic Framework for Crime Analysis and Prediction using AutoML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120349</link>
        <id>10.14569/IJACSA.2021.0120349</id>
        <doi>10.14569/IJACSA.2021.0120349</doi>
        <lastModDate>2021-03-31T07:07:45.7130000+00:00</lastModDate>
        
        <creator>Sajith A Johnson</creator>
        
        <creator>S Ananthakumaran</creator>
        
        <subject>Forensic investigation; digital forensic; automated machine learning; smart forensic framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Over the most recent couple of years, the greater part of information, for example books, recordings, pictures, and the clinical, forensic, criminal, and even hereditary data of people, has been pushed toward digital and cyber-dataspaces. This shift requires sophisticated techniques to deal with the vast amounts of data. We propose a novel solution to the problem of gaining actionable intelligence from the voluminous existing and potential digital forensic data. We have formulated an Automated Learning Framework ontology for digital forensic applications relating to collaborative crime analysis and prediction. The minimum viable ontology, formulated by studying the existing literature and applications of machine learning, has been used to devise an Automated Machine Learning (AutoML) implementation, studied quantitatively and qualitatively in its capability to aid the intelligence practices of digital forensic investigation agencies in representing, reasoning about, and forming actionable insights from the vast and varied real-world data they collect. A test implementation of the framework is made to assess the performance of our proposed generalized Smart Forensic Framework for digital forensics applications by comparison with existing solutions on quantitative and qualitative metrics and assessments. We will use the insights and performance metrics derived from our research to motivate forensic intelligence agencies to exploit the features and capabilities provided by AutoML Smart Forensic Framework applications.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_49-Smart_Digital_Forensic_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Routing based Load Balanced Congestion Control using MAODV in WANET Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120348</link>
        <id>10.14569/IJACSA.2021.0120348</id>
        <doi>10.14569/IJACSA.2021.0120348</doi>
        <lastModDate>2021-03-31T07:07:45.6970000+00:00</lastModDate>
        
        <creator>Kanthimathi S</creator>
        
        <creator>JhansiRani P</creator>
        
        <subject>Routing; congestion control; Wireless Ad Hoc Networks (WANET); Modified Ad hoc on-demand Distance Vector (MAODV); Levy Flight Based Black Widow Optimization (LF-BWO); Stochastic Gradient Descent Deep Learning Neural Network (SGD-DLNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A Wireless Ad hoc Network (WANET) is a decentralized type of network that allows nodes to communicate with each other without any central controller. Network congestion can occur on account of the nodes&#39; restricted bandwidth (BW) together with the dynamic topology. Congestion brings about data loss, as it causes data packets (DP) to be dropped in the network. Therefore, in order to lessen network congestion, it is necessary to design congestion control (CC) systems for WANETs. Thus, this paper offers an optimal-routing-centered CC scheme utilizing the Modified Ad hoc On-demand Distance Vector (MAODV) routing protocol (RP) for WANETs. Here, the source node (SN) and destination node (DN) are first initialized, after which MAODV discovers multiple routing paths. Subsequently, a Stochastic Gradient Descent Deep Learning Neural Network (SGD-DLNN) identifies the congestion status (CS) of every node on the discovered paths. If congestion occurs, MAODV allocates the traffic over the optimal congestion-free routing path. The Levy Flight Based Black Widow Optimization (LF-BWO) algorithm chooses the optimal routing path from among the congestion-free paths. Based upon path lifetime, residual energy, link cost, and path distance, this algorithm enhances data transmission (DT) performance by discovering a suitable path. The experimental outcomes are presented to exhibit the proposed RP’s effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_48-Optimal_Routing_Based_Load_Balanced_Congestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Data Oriented Structure Learning Approach for the Diabetes Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120347</link>
        <id>10.14569/IJACSA.2021.0120347</id>
        <doi>10.14569/IJACSA.2021.0120347</doi>
        <lastModDate>2021-03-31T07:07:45.6670000+00:00</lastModDate>
        
        <creator>Adel THALJAOUI</creator>
        
        <subject>Classification; Bayesian Network; structure learning; score oriented approach; diabetes analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Diabetes mellitus is considered a significant disease and an ever-rising epidemic; accordingly, this disease represents a worldwide public health crisis. Several classification techniques have recently been employed for diabetes diagnosis; however, only a few studies have been dedicated to facilitating its analysis through knowledge representation using probabilistic modelling. The Bayesian Network (BN), a probabilistic graphical model, is considered one of the most effective classification techniques. BNs are widely employed in several domains such as risk analysis, medicine, bioinformatics, and security. This probabilistic graphical model represents an effective formalism for reasoning under uncertainty. The construction of a BN model goes through two learning phases: structure learning and parameter learning. The first phase, learning the BN skeleton, is recognized as a complex (NP-hard) problem. Accordingly, several methods have been introduced, amongst which the score-based algorithms are considered some of the most powerful structure learning methods. In this paper, we introduce a novel algorithm based on a combination of graph theory and information theory. The proposed algorithm, called the GIT algorithm, detects parents and children for BN structure learning. In addition, we evaluate the obtained results and, using reference networks, demonstrate the efficiency of the proposed GIT algorithm in terms of accuracy. Furthermore, we apply our algorithm in a real field, in particular for detecting the interesting dependencies that are useful for diabetes analysis.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_47-Novel_Data_Oriented_Structure_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Human Emotions from Eyes and Surrounding Features: A Deep Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120346</link>
        <id>10.14569/IJACSA.2021.0120346</id>
        <doi>10.14569/IJACSA.2021.0120346</doi>
        <lastModDate>2021-03-31T07:07:45.6500000+00:00</lastModDate>
        
        <creator>Md Nymur Rahman Shuvo</creator>
        
        <creator>Shamima Akter</creator>
        
        <creator>Md. Ashiqul Islam</creator>
        
        <creator>Shazid Hasan</creator>
        
        <creator>Muhammad Shamsojjaman</creator>
        
        <creator>Tania Khatun</creator>
        
        <subject>Human emotion recognition; convolutional neural network (CNN); transfer learning; fine-tuning; VGG-16; Inception-ResNet -V2; DenseNet-201</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The need for an efficient intelligent system to detect human emotions is imperative. In this study, we propose an automated convolutional neural network-based approach to recognize the human mental state from the eyes and their surrounding features. We applied deep convolutional neural network based Keras applications with the help of transfer learning and fine-tuning. We worked with the six universal emotions (i.e., happiness, disgust, sadness, fear, anger, and surprise) on a dataset containing 588 unique double-eye images. In this study, we considered the eyes and their surrounding areas (upper and lower eyelids, glabella, and brow) to detect the emotional state. The state and movement of the iris and pupil can vary with different mental states, and the common features found across the entire eye region during different mental states can help capture human expression. The dataset was trained with pre-trained weights, and a confusion matrix was used to analyze the predictions to achieve better accuracy. The highest accuracy, 91.78%, was achieved by DenseNet-201, whereas VGG-16 and Inception-ResNet-v2 show 90.43% and 89.67%, respectively. This study will provide an insight into the current state of research on obtaining better facial recognition.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_46-Recognizing_Human_Emotions_from_Eyes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Analytics Framework for Childhood Infectious Disease Surveillance and Response System using Modified MapReduce Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120345</link>
        <id>10.14569/IJACSA.2021.0120345</id>
        <doi>10.14569/IJACSA.2021.0120345</doi>
        <lastModDate>2021-03-31T07:07:45.6330000+00:00</lastModDate>
        
        <creator>Mdoe Mwamnyange</creator>
        
        <creator>Edith Luhanga</creator>
        
        <creator>Sanket R. Thodge</creator>
        
        <subject>Big data analytics; childhood infectious diseases; infectious disease surveillance system; infectious disease report cases; framework; Hadoop; healthcare big data; map reduce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Tanzania, like most East African countries, faces a great burden from the spread of preventable infectious childhood diseases. Diarrhea, acute respiratory infections (ARI), pneumonia, malnutrition, hepatitis, and measles are responsible for the majority of deaths amongst children aged 0-5 years. Infectious disease surveillance and response is the foundation of public healthcare practices, and it is increasingly being undertaken using information technology. Tanzania, however, due to challenges in information technology infrastructure and public health resources, still relies on paper-based disease surveillance. Thus, only traditional clinical patient data is used, while nontraditional and pre-diagnostic infectious disease report case data are excluded. In this paper, the development of the Big Data Analytics Framework for Childhood Infectious Disease Surveillance and Response System is presented. The framework was designed to guide healthcare professionals to track, monitor, and analyze infectious disease report cases from sources such as social media for the prevention and control of infectious diseases affecting children. The proposed framework was validated through use-case scenarios and a performance-based comparison.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_45-Big_Data_Analytics_Framework_for_Childhood.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distinctive Context Sensitive and Hellinger Convolutional Learning for Privacy Preserving of Big Healthcare Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120344</link>
        <id>10.14569/IJACSA.2021.0120344</id>
        <doi>10.14569/IJACSA.2021.0120344</doi>
        <lastModDate>2021-03-31T07:07:45.6030000+00:00</lastModDate>
        
        <creator>Sujatha K</creator>
        
        <creator>Udayarani V</creator>
        
        <subject>Big data; information technology; distinctive; impact; context sensitive hashing; quasi-identifier; Hellinger; convolutional neural</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The collection and effectiveness of sensitive Big Data have grown with the development of Information Technology (IT). While using sensitive Big Data to acquire relevant information, it becomes indispensable that irrelevant sensitive data be reduced to safeguard personal information in the healthcare sector. Many privacy-preserving strategies using quasi-identifiers (QI) have been applied in recent years for applications like health services. However, privacy preservation over quasi-identifiers is still challenging in the context of Big Data because most datasets are of huge volume. Existing methods suffer from higher time consumption and lower data utility because of dynamically progressing datasets. In this paper, an efficient Distinctive Context Sensitive and Hellinger Convolutional Learning (DCS-HCL) method is introduced to ensure privacy preservation and achieve high data utility for big healthcare datasets. First, a Distinctive Impact Context Sensitive Hashing model is designed for the given input Big Dataset, where both the distinctive and impact values are identified and applied to Context Sensitive Hashing. With this, similar QI-classes are mapped to produce computationally efficient anonymized data. Second, a Hellinger Convolutional Neural Privacy Preservation model is presented to preserve the privacy of the sensitive unstructured data. This is performed by hashing QI-class values and updating the weights and bias in the CNN to increase accuracy and reduce information loss. Evaluation results demonstrate that the proposed method significantly improves run time, data utility, information loss, and accuracy over existing methods on large-volume unstructured datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_44-Distinctive_Context_Sensitive_and_Hellinger.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Collision-aware MAC Protocol for Efficient Performance in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120343</link>
        <id>10.14569/IJACSA.2021.0120343</id>
        <doi>10.14569/IJACSA.2021.0120343</doi>
        <lastModDate>2021-03-31T07:07:45.5870000+00:00</lastModDate>
        
        <creator>Hamid Hajaje</creator>
        
        <creator>Mounib Khanafer</creator>
        
        <creator>Zine El Abidine Guennoun</creator>
        
        <creator>Junaid Israr</creator>
        
        <creator>Mouhcine Guennoun</creator>
        
        <subject>Wireless sensor networks; beacon-enabled IEEE 802.15.4; binary exponent backoff; adaptive backoff; fairness; power consumption; reliability; channel utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Both the IEEE 802.11 and IEEE 802.15.4 standards adopt the CSMA-CA algorithm to manage contending nodes’ access to the wireless medium. CSMA-CA utilizes the Binary Exponential Backoff (BEB) scheme to reduce the probability of packet collisions over the communication channel. However, BEB suffers from unfairness and degraded channel utilization, as it usually favors the last node that succeeded in capturing the medium to send its packets. Also, BEB updates the size of the contention window in a deterministic fashion, without taking into consideration the level of collisions over the channel. The latter factor has a direct impact on channel utilization, and therefore incorporating it in the computation of the contention window’s size can have a positive impact on the overall performance of the backoff algorithm. In this paper, we propose a new adaptive backoff algorithm that overcomes the shortcomings of BEB and outperforms it in terms of channel utilization, power conservation, and reliability, while preserving fairness among nodes. We model our algorithm using a Markov chain and validate our system through extensive simulations. Our results show promising performance for an efficient backoff algorithm.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_43-A_Collision_Aware_MAC_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Floating Content: Experiences and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120342</link>
        <id>10.14569/IJACSA.2021.0120342</id>
        <doi>10.14569/IJACSA.2021.0120342</doi>
        <lastModDate>2021-03-31T07:07:45.5570000+00:00</lastModDate>
        
        <creator>Shahzad Ali</creator>
        
        <subject>Floating content; opportunistic communications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Floating content is a promising communication paradigm based on pure ad hoc communications. It has a huge usage potential for various context-aware applications. In this paper, recent research related to floating content communication paradigm is presented. This paper focuses on some of the vital experiences ranging from analytical models to simulations to the real-world implementations. Some important results on the performance of floating content based on analytical models, simulations, and real-world implementations are presented. These results not only show the usefulness of the existing analytical models but also explain the ways of extending these existing models for incorporating new communication technologies and mobility models. This paper also highlights the energy consumption of smartphone applications based on floating content and explains how new communication technologies impact the feasibility of using floating content as a communication service for different applications. Based on the experiences, new future directions are highlighted that can prove to be very beneficial for researchers investigating this area.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_42-Floating_Content_Experiences_and_Future_Directions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Framework for Data Communication in a Wireless Network using Machine Learning Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120341</link>
        <id>10.14569/IJACSA.2021.0120341</id>
        <doi>10.14569/IJACSA.2021.0120341</doi>
        <lastModDate>2021-03-31T07:07:45.5400000+00:00</lastModDate>
        
        <creator>Somya Khidir Mohmmed Ataelmanan</creator>
        
        <creator>Mostafa Ahmed Hassan Ali</creator>
        
        <subject>Anomaly detection; internet of things; wireless attacks; artificial intelligence; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The emergence of the Internet of Things (IoT) has become a huge innovation for utilizing the enormous power of wireless media. The adoption of smart devices with intelligent networking has greatly increased the traffic of the IoT environment. Present security mechanisms primarily focus on specific areas such as content filtering, monitoring techniques, and anomaly detection. A vulnerability reflects a weakness of a network that allows an attacker to probe the extent of its existing security mechanisms. Existing techniques have focused on specific attacks rather than monitoring the whole network. Hence, there is a demand for a framework to govern and protect data and services in IoT networks. Anomaly detection is a resource-intensive activity for protecting the data and services of IoT / Wireless Sensor Networks (WSN). It supports the application layer of the IoT network and traces it frequently to find the existence of malicious activities. In this study, the researchers propose an anomaly detection framework to safeguard against wireless attacks. The proposed framework employs a machine learning technique to detect the traces of wireless attacks and supports IoT-based networks in monitoring the functionality of their resources. In addition, the study discusses the open challenges in IoT networks along with possible solutions. The researchers employed a test bed for evaluating the proposed framework. The outcome of the study shows that the proposed framework provides better services with more security.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_41-Developing_a_Framework_for_Data_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Internet Banking Effectiveness using Artificial Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120340</link>
        <id>10.14569/IJACSA.2021.0120340</id>
        <doi>10.14569/IJACSA.2021.0120340</doi>
        <lastModDate>2021-03-31T07:07:45.5100000+00:00</lastModDate>
        
        <creator>Ala Aldeen Al-Janabi</creator>
        
        <subject>Artificial Neural Network (ANN); internet banking (IB); Artificial Intelligence (AI); e-banking effectiveness; regression model; rapidminer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>This research aims at building a prediction model to predict the effectiveness of internet banking (IB) in Qatar. The proposed model employs a hybrid approach using regression and neural network models. This study is one of the few to evaluate the effectiveness of IB by adopting two data mining approaches, namely regression and neural networks. The regression analysis is used to optimize and minimize the input dataset metrics by excluding the insignificant attributes. The study builds a dataset of 250 records of internet banking quality metrics, where each instance includes 8 metrics. Moreover, the study uses the RapidMiner application in building and validating the proposed prediction model. The results analysis indicates that the proposed model predicts 88.5% of IB effectiveness, and that the input attributes influence customer satisfaction. Also, the results show that the prediction model correctly predicted 68% of the test dataset of 50 records using neural networks without regression optimization. However, after employing regression, the prediction accuracy of satisfaction improved by 12% (i.e., to 78%). Finally, it is recommended to test the proposed model for prediction in other online services such as e-commerce.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_40-Predicting_Internet_Banking_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model for Documents Representation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120339</link>
        <id>10.14569/IJACSA.2021.0120339</id>
        <doi>10.14569/IJACSA.2021.0120339</doi>
        <lastModDate>2021-03-31T07:07:45.4770000+00:00</lastModDate>
        
        <creator>Dina Mohamed</creator>
        
        <creator>Ayman El-Kilany</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <subject>Document representation; latent dirichlet allocation; hierarchical latent dirichlet allocation; Word2vec; Isolation Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Text representation is a critical issue for exploring the insights behind the text. Many models have been developed to represent the text in defined forms such as numeric vectors where it would be easy to calculate the similarity between the documents using the well-known distance measures. In this paper, we aim to build a model to represent text semantically either in one document or multiple documents using a combination of hierarchical Latent Dirichlet Allocation (hLDA), Word2vec, and Isolation Forest models. The proposed model aims to learn a vector for each document using the relationship between its words’ vectors and the hierarchy of topics generated using the hierarchical Latent Dirichlet Allocation model. Then, the isolation forest model is used to represent multiple documents in one representation as one profile to facilitate finding similar documents to the profile. The proposed text representation model outperforms the traditional text representation models when applied to represent scientific papers before performing content-based scientific papers recommendation for researchers.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_39-A_Hybrid_Model_for_Documents_Representation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correlating Crime and Social Media: Using Semantic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120338</link>
        <id>10.14569/IJACSA.2021.0120338</id>
        <doi>10.14569/IJACSA.2021.0120338</doi>
        <lastModDate>2021-03-31T07:07:45.4630000+00:00</lastModDate>
        
        <creator>Rhea Mahajan</creator>
        
        <creator>Vibhakar Mansotra</creator>
        
        <subject>Crimes; social media; Twitter; BiLSTM; semantic sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Crimes occur all over the world, and with regularly changing criminal strategies, law enforcement agencies need to manage them adequately and productively. If these agencies had prior data on a crime or an early indication of eventual felonious activity, it would give them a strategic advantage, allowing them to deploy their restricted and elite assets at the spot of a suspected crime, or better still, investigate it to the point of anticipation. The integration of social media content can act as a catalyst in bridging this gap, as almost all of our population uses social media, and their lives, thoughts, and mindsets are available digitally through their social media profiles. In this paper, an attempt has been made to predict crime patterns using geo-tagged tweets from five regions of India. We hypothesized that publicly available data from Twitter may include features that portray a correlation between tweets and crime patterns using data mining. We further applied semantic sentiment analysis using a Bidirectional Long Short-Term Memory (BiLSTM) network and a feed-forward neural network to the tweets to determine the crime intensity across a region. The performance of our proposed approach is 84.74 for each class of sentiment. The results showed a correlation between the crime pattern predicted from tweets and actual reported crime incidents.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_38-Correlating_Crime_and_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Movement Control of Smart Mosque’s Domes using CSRNet and Fuzzy Logic Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120337</link>
        <id>10.14569/IJACSA.2021.0120337</id>
        <doi>10.14569/IJACSA.2021.0120337</doi>
        <lastModDate>2021-03-31T07:07:45.4470000+00:00</lastModDate>
        
        <creator>Anas H. Blasi</creator>
        
        <creator>Mohammad Awis Al Lababede</creator>
        
        <creator>Mohammed A. Alsuwaiket</creator>
        
        <subject>Artificial intelligence; CNN; CSRnet; fuzzy logic; fuzzy control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Mosques are places of worship of Allah and must be kept clean and immaculate and provide all comforts to the worshippers in them. The Prophet&#39;s Mosque in Medina, Saudi Arabia is one of the most important mosques for Muslims. It occupies second place after the Sacred Mosque in Mecca, Saudi Arabia, and is constantly overcrowded by Muslims visiting the Prophet Mohammad&#39;s tomb. This paper aims to propose a smart dome model that preserves fresh air and allows sunlight to enter the mosque using artificial intelligence techniques. The proposed model controls the domes&#39; movements based on the weather conditions and the overcrowding rates in the mosque. The data have been collected from two different resources: the first from the database of Saudi Arabia&#39;s weather history, and the other from the Shanghai Technology Database. The Congested Scene Recognition Network (CSRNet) and fuzzy logic techniques have been applied, using the Python programming language, to control the domes so they open and close for a specific time to renew the air inside the mosque. This model also consists of several connected parts that control the mechanism of opening and closing the domes according to weather data and the crowding situation in the mosque. Finally, the main goal of this paper has been achieved: the proposed model works efficiently and specifies the exact duration to keep the domes open automatically for a few minutes each hour.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_37-Movement_Control_of_Smart_Mosques_Domes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proof-of-Review: A Review based Consensus Protocol for Blockchain Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120336</link>
        <id>10.14569/IJACSA.2021.0120336</id>
        <doi>10.14569/IJACSA.2021.0120336</doi>
        <lastModDate>2021-03-31T07:07:45.4170000+00:00</lastModDate>
        
        <creator>Dodo Khan</creator>
        
        <creator>Low Tang Jung</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <subject>Blockchain; consensus protocol; transaction chain; review chain; prove-of-review; PoW; PoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Blockchain is considered one of the most disruptive technologies of the last two decades and has drawn attention from the research and industrial communities. Blockchain is essentially a distributed ledger with immutable records, mostly utilized to perform transactions across various nodes after achieving mutual consensus among all the associated nodes. The consensus protocol is a core component of Blockchain technology, playing a vital role in Blockchain’s success, global emergence, and disruption capability. Many consensus protocols, such as PoW, PoS, and PoET, have been proposed to make Blockchain efficient enough to meet real-time application requirements. However, these protocols have their respective limitations of low throughput and high latency, and they sacrifice scalability. These limitations have motivated this research team to introduce a novel review-based consensus protocol called Proof-of-Review, which aims to establish an efficient, reliable, and scalable Blockchain. The “review” in the proposed protocol refers to the community’s trust in a node, which depends entirely on the node’s previous behavior within the network, including its previous transactions and interactions with other nodes. These reviews eventually become the trust value gained by the node: the more positive the reviews, the more trustworthy the node is considered in the network, and vice versa. The most trustworthy node is selected to become the round leader and is allowed to publish a new block. The architecture of the proposed protocol is based on two parallel chains, i.e., a Transaction Chain and a Review Chain, which are linked to each other. The transaction chain stores the transactions, whereas the review chain stores the reviews, which are analyzed with an NLP algorithm to find the round leader for the next round.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_36-Proof_of_Review_A_Review_Based_Consensus_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Secured Hash Algorithms for Blockchain Technology and Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120335</link>
        <id>10.14569/IJACSA.2021.0120335</id>
        <doi>10.14569/IJACSA.2021.0120335</doi>
        <lastModDate>2021-03-31T07:07:45.4000000+00:00</lastModDate>
        
        <creator>Monika Parmar</creator>
        
        <creator>Harsimran Jit Kaur</creator>
        
        <subject>Blockchain Technology; IoT; Secured Hash Algorithms; IoT Security; SHA; MD5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Cryptography algorithms play a vital role in information security and management. To test the credibility and reliability of metadata exchanged between the sender and the recipient party of IoT applications, different algorithms must be used. Hashing is also used for electronic signatures, and various algorithms offer different levels of safety based on how hard they are to break. SHA-1, SHA-2, SHA-3, MD4, MD5, etc. are still the most widely accepted hash protocols. This article discusses the relevance of hash functions and presents a comparative study of different cryptographic techniques used with blockchain technology. Cloud storage is among the most daunting issues, as the confidentiality of encrypted data on virtual computers must be guaranteed. Several protection challenges exist in the cloud, including encryption, integrity, and secrecy. Different encryption strategies seek to solve these data protection problems to an immense degree. This article focuses on a comparative analysis of the SHA family and MD5 based on speed of operation and security concerns, and on the need for using a Secure Hash Algorithm.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_35-Comparative_Analysis_of_Secured_Hash_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybersecurity Awareness Level: The Case of Saudi Arabia University Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120334</link>
        <id>10.14569/IJACSA.2021.0120334</id>
        <doi>10.14569/IJACSA.2021.0120334</doi>
        <lastModDate>2021-03-31T07:07:45.3700000+00:00</lastModDate>
        
        <creator>Wejdan Aljohni</creator>
        
        <creator>Nazar Elfadil</creator>
        
        <creator>Mutsam Jarajreh</creator>
        
        <creator>Mwahib Gasmelsied</creator>
        
        <subject>Cybersecurity; awareness; protection; internet; students; higher education; security awareness; survey; APAT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Cybersecurity plays an important role as we rely on digital equipment and programs to manage daily life chores, including the transmission and storage of personal information. It is therefore a global issue in our growing society, and it becomes increasingly important to measure and analyze awareness of it. In this paper, a questionnaire has been designed to measure the current level of cybersecurity awareness (CSA) among Saudi university students. The students&#39; cybersecurity awareness questionnaire was adapted from a few previous cybersecurity awareness campaigns. A total of 136 students participated in the survey. The questionnaire measures students’ cybersecurity awareness through their knowledge, culture, and surrounding environment, or through their behavior, by three affecting factors: gender, location, and study department. The study findings reveal that the students’ awareness is average; there is no significant difference in cybersecurity awareness level between male and female students, although females show somewhat more concern about cybersecurity. However, students of computer and information technology departments show a clearly higher awareness compared with others. Moreover, urban students outperformed students from remote areas in cybersecurity awareness. The survey results indicate that the study model is effective in measuring students&#39; awareness.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_34-Cybersecurity_Awareness_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Modelling of the Hash-based Authentication of Data in Dynamic Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120333</link>
        <id>10.14569/IJACSA.2021.0120333</id>
        <doi>10.14569/IJACSA.2021.0120333</doi>
        <lastModDate>2021-03-31T07:07:45.3530000+00:00</lastModDate>
        
        <creator>Anil Kumar G</creator>
        
        <creator>Shantala C.P</creator>
        
        <subject>Cloud computing; data deduplication; data integrity; data privacy; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A datacenter in a cloud environment houses a massive quantity of data in a distributed manner. However, with the increasing number of threats, such as data deduplication attacks, over the cloud environment, it is quite challenging to ascertain the full-fledged security of data. In this regard, data integrity and security are highly questionable. A review of the existing literature shows that existing solutions are not well suited to meet the requirements and support the security demands of existing distributed storage systems concerning data integrity, owing to their use of inferior authentication mechanisms. Also, the most frequently used public-key encryption is found not to be well suited to resource-constrained devices. Therefore, this manuscript presents a unique model of data authentication in which a simplified hashing proposition has been designed for scheduling a distributed chain of data. The idea is to perform dynamic authentication in the presence of any form of adversary. The proposed scheme is lightweight by design and offers a cross-verifiable hash-based challenge-matching scheme with provision for non-repudiation of the transactions through the inclusion of cloud auditor units. The experiment was carried out on a numerical computing tool, considering data volume, verification count, and verification delay as the prime performance metrics. The simulation outcomes show that the proposed system achieves better security performance and is more flexible compared with the existing system.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_33-Novel_Modelling_of_the_Hash_based_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Potential Data Collections Methods for System Dynamics Modelling: A Brief Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120332</link>
        <id>10.14569/IJACSA.2021.0120332</id>
        <doi>10.14569/IJACSA.2021.0120332</doi>
        <lastModDate>2021-03-31T07:07:45.3230000+00:00</lastModDate>
        
        <creator>Aisyah Ibrahim</creator>
        
        <creator>Hamdan Daniyal</creator>
        
        <creator>Tuty Asmawaty Abdul Kadir</creator>
        
        <creator>Adzhar Kamaludin</creator>
        
        <subject>System dynamics modelling; data collection methods; data source; system dynamics methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>System Dynamics (SD) modelling is a highly complex process. Although the SD methodology has been discussed extensively in both breakthrough and present literature, data collection methods for SD modelling are not explained in detail in most studies. To date, comprehensive descriptions of knowledge extraction for SD modelling are still scarce in the literature. In an attempt to fill this gap, the three primary groups of data sources proposed by Forrester, namely (1) the mental database, (2) the written database, and (3) the numerical database, were reviewed, including the potential data collection methods for each database, taking into account the advancement of current computer and information technology. The contributions of this paper are threefold. First, it highlights potential data sources that deserve to be acknowledged and reflected in the SD domain. Second, it provides insights into the appropriate mix and match of data collection methods for SD development. Third, it provides a practical synthesis of potential data sources and their suitability according to the SD modelling stage, which can serve as modelling practice guidelines.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_32-Potential_Data_Collections_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Intelligent Thermal Comfort Prediction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120331</link>
        <id>10.14569/IJACSA.2021.0120331</id>
        <doi>10.14569/IJACSA.2021.0120331</doi>
        <lastModDate>2021-03-31T07:07:45.3070000+00:00</lastModDate>
        
        <creator>Farid Ali Mousa</creator>
        
        <creator>Heba Hamdy Ali</creator>
        
        <subject>Thermal comfort; chicken swarm optimization; momentum back propagation; neural network; bio-inspired optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A real-time prediction model of indoor thermal comfort based on the Momentum Back Propagation (MBP) function is established using Arduino hardware and a mobile application. The indoor air temperature, air velocity, and relative humidity are gathered via sensors and transferred via Bluetooth to the mobile application to predict thermal comfort. A significant challenge in designing an MBP network is deciding the best architecture and parameters, such as the number of layers, nodes, and epochs, given the data for the problem at hand. These parameters are usually selected heuristically and fine-tuned manually, which can be tedious, as assessing the performance of a single MBP parameterization may take hours. This paper addresses the issue of determining appropriate parameters for the MBP by applying the Chicken Swarm Optimization (CSO) algorithm. The CSO algorithm simulates a chicken swarm searching for the best parameters, employing a fitness function that selects the parameters yielding minimum error and high accuracy. The proposed approach achieves an accuracy of approximately 98.3% when using the best parameters obtained from CSO. The performance of the proposed methodology is assessed on a dataset collected from a weather archive, in the context of thermal comfort prediction, mapping the relations between the indoor features and the thermal index.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_31-Real_time_Intelligent_Thermal_Comfort_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trade-off between Energy Consumption and Transmission Rate in Mobile Ad-Hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120330</link>
        <id>10.14569/IJACSA.2021.0120330</id>
        <doi>10.14569/IJACSA.2021.0120330</doi>
        <lastModDate>2021-03-31T07:07:45.2730000+00:00</lastModDate>
        
        <creator>Ashraf Al Sharah</creator>
        
        <creator>Mohammad Alhaj</creator>
        
        <creator>Firas Al Naimat</creator>
        
        <subject>Coalition; MANETs; power-aware routing; power consumption; transmission power control-based; transmission rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Mobile Ad-Hoc Networks are decentralized systems of mobile nodes in which each node is responsible for computing and retaining the routing and topology details. The autonomous nature of the nodes places extra stress on how power consumption is handled. This raises the question of how to improve power efficiency so as to achieve better battery life for the mobile nodes. It is important to balance power and transmission rates to improve the network lifetime and reduce sudden node failures. In this paper, a new transmission power control-based scheme is proposed that allows the mobile nodes to achieve a trade-off between transmission rate and power consumption by defining and updating, for every node, two tables that contain the average transmission rate of the neighboring nodes and the number of times each neighboring node has been used for data transmission. We validate the proposed scheme using several test cases.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_30-Trade_off_between_Energy_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Home Energy Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120329</link>
        <id>10.14569/IJACSA.2021.0120329</id>
        <doi>10.14569/IJACSA.2021.0120329</doi>
        <lastModDate>2021-03-31T07:07:45.2600000+00:00</lastModDate>
        
        <creator>Yasser AL Sultan</creator>
        
        <creator>Ben Salma Sami</creator>
        
        <creator>Bassam A. Zafar</creator>
        
        <subject>Home energy management; multi-agent system; real-time system; energy recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The home energy management system has been selected as an attractive research issue due to its ability to enhance energy security by including devices, entertainment systems, security systems, environmental controls, etc. Home automation is incorporated as a potential technology to ensure efficient electricity performance without interruption, solve power demand problems, and coordinate devices with innovative technologies. In this context, our proposal seeks to implement an accurate home energy management system. The proposed approach aims to improve uninterrupted electricity production and provide comfortable services to families. To implement correct system operations and meet each device&#39;s power demand, a Real-Time Energy Management System (RT-EMS) is implemented and discussed through the required tasks using a Multi-Agent System (MAS). Each agent is determined according to certain criteria to implement the appropriate design and meet each device&#39;s power demand. The obtained results show that the proposed system meets the general objectives of the RT-EMS.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_29-Smart_Home_Energy_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Internet of Vehicles Architecture based on Deep Learning for Occlusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120328</link>
        <id>10.14569/IJACSA.2021.0120328</id>
        <doi>10.14569/IJACSA.2021.0120328</doi>
        <lastModDate>2021-03-31T07:07:45.2270000+00:00</lastModDate>
        
        <creator>Shaya A. Alshaya</creator>
        
        <subject>Internet of vehicle; deep learning; collaborative technologies; cloud; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Nowadays, the cyber world is developing due to the revolution of smart cities and machine learning technologies. The Internet of Things constitutes the essential background of cyber technology. As a case study, the Internet of Vehicles is one of its leading applications and is developing quickly. Studies are focused on resolving issues related to real-time problems and privacy leakage. Uploading data to the cloud during the data collection step is the origin of delay issues, and this process decreases the level of privacy. The objective of the present paper is to ensure a high level of privacy and accelerated data collection. In this study, we propose an advanced Internet of Vehicles architecture to conduct the data collection step. An occlusion detection application based on a deep learning technique is performed to evaluate the IoV architecture. Training data at the Distributed Intelligent layer not only ensures the privacy of the data but also reduces the delay.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_28-Smart_Internet_of_Vehicles_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech-to-Text Conversion in Indonesian Language Using a Deep Bidirectional Long Short-Term Memory Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120327</link>
        <id>10.14569/IJACSA.2021.0120327</id>
        <doi>10.14569/IJACSA.2021.0120327</doi>
        <lastModDate>2021-03-31T07:07:45.2130000+00:00</lastModDate>
        
        <creator>Suci Dwijayanti</creator>
        
        <creator>Muhammad Abid Tami</creator>
        
        <creator>Bhakti Yudho Suprapto</creator>
        
        <subject>Speech-to-text; Deep Bidirectional Long Short-Term Memory (LSTM); spectrogram; Mel frequency cepstral coefficients (MFCC); word error rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Nowadays, speech is also used for communication between humans and computers, which requires conversion from speech to text. Nevertheless, few studies have addressed speech-to-text conversion in the Indonesian language, and most studies on speech-to-text conversion were limited to speech datasets with incomplete sentences. In this study, speech-to-text conversion of complete sentences in the Indonesian language is performed using the deep bidirectional long short-term memory (LSTM) algorithm. Spectrograms and Mel frequency cepstral coefficients (MFCCs) were utilized as features of a total of 5000 speech samples spoken by ten subjects (five males and five females). The results showed that the deep bidirectional LSTM algorithm successfully converted speech to text in Indonesian. The accuracy achieved with the MFCC features was higher than that achieved with the spectrograms: the MFCCs obtained the best accuracy with a word error rate of 0.2745%, whereas the spectrograms yielded 2.0784%. Thus, MFCCs are more suitable than spectrograms as features for speech-to-text conversion in Indonesian. The results of this study will help in the implementation of communication tools in Indonesian and other languages.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_27-Speech_to_Text_Conversion_in_Indonesian_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Retention: Detecting Churners in Telecoms Industry using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120326</link>
        <id>10.14569/IJACSA.2021.0120326</id>
        <doi>10.14569/IJACSA.2021.0120326</doi>
        <lastModDate>2021-03-31T07:07:45.1800000+00:00</lastModDate>
        
        <creator>Mahmoud Ewieda</creator>
        
        <creator>Essam M Shaaban</creator>
        
        <creator>Mohamed Roushdy</creator>
        
        <subject>Quality of service; churn prediction; classification; data mining; prediction model; customer retention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Customers are increasingly concerned with the quality of services that companies can provide. Customer churn is the percentage of subscribers who stop their subscriptions, or the proportion of customers who discontinue using the firm’s product or service within a given time frame. The services offered by various providers or sellers are not very distinct, which raises rivalry between firms to maintain and upgrade the quality of their services. This paper aims to demonstrate the effect of service quality on customer satisfaction and on churn prediction, in order to reveal customers who intend to leave a service. Predictive models can quantify the effect of service quality on customer satisfaction for the correct determination of possible churners in the near future, so that a retention solution can be provided. This paper analyses the impact of service quality and prediction models based on data mining (DM) techniques. The present model contains the following steps: data pre-processing, feature selection, data sampling, classifier training, testing for prediction, and output of prediction. A randomly selected data set with 17 attributes and 5000 records is used, of which 75% is for training the model and 25% for testing. The DM techniques applied in this paper are the Boruta algorithm, C5.0, Neural Network, Support Vector Machine, and Random Forest, via the open-source software R and WEKA.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_26-Customer_Retention_Detecting_Churners.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things Security: A Review of Enabled Application Challenges and Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120325</link>
        <id>10.14569/IJACSA.2021.0120325</id>
        <doi>10.14569/IJACSA.2021.0120325</doi>
        <lastModDate>2021-03-31T07:07:45.1670000+00:00</lastModDate>
        
        <creator>Mona Algarni</creator>
        
        <creator>Munirah Alkhelaiwi</creator>
        
        <creator>Abdelrahman Karrar</creator>
        
        <subject>Internet of things; internet of things application; internet of things privacy; internet of things architecture; internet of things security; challenges; security protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The Internet of Things (IoT) has been widely used in every aspect of life. The rapid development of IoT technologies raises concerns regarding security and privacy. IoT security is critical to preserving the privacy and reliability of users’ private information, and privacy concerns have become the biggest barrier to further usage of IoT technology. This paper presents a review of IoT application areas in smart cities, smart homes, and smart healthcare from the point of view of security and privacy, and presents the relevant challenges. In addition, we present potential tools to ensure the security and preservation of privacy for IoT applications. Furthermore, a review of relevant research studies has been carried out that discusses the security of the IoT infrastructure, the protocols, the challenges, and the solutions. Finally, we provide insight into the challenges in current research and recommendations for future work. The reviewed IoT applications have made life easier, but IoT devices that use unencrypted networks are increasingly coming under attack by malicious hackers, leading to access to sensitive personal data. There is still time to protect devices better by pursuing security solutions for this technology. The results illustrate several technological and security challenges, such as malware, secure privacy management, and the lack of security infrastructure for cloud storage, that still require effective solutions.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_25-Internet_of_Things_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Approach for Allocating Movement Permits During/Outside Curfew Period during COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120324</link>
        <id>10.14569/IJACSA.2021.0120324</id>
        <doi>10.14569/IJACSA.2021.0120324</doi>
        <lastModDate>2021-03-31T07:07:45.1330000+00:00</lastModDate>
        
        <creator>Yaser Chaaban</creator>
        
        <subject>COVID-19 pandemic; social distancing; resource allocation problem; movement permits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>During the coronavirus disease (COVID-19) pandemic, various concepts around solutions, technical components, smartphone applications, and novel wireless services are needed to adapt to the new lifestyle standard that emerged from the COVID-19 crisis. In this context, social distancing was imposed to prevent, or significantly decrease, further transmission of COVID-19. In other words, research results have shown that slowing the spread of COVID-19 is the way to save people&#39;s lives and relieve the burden on health-care systems. Social distancing can be tracked using cell phone movement data. This paper presents a new approach for allocating and optimizing movement permits during and outside curfew periods inside workplaces, buildings, companies, and institutions. This approach is an effective tool to reduce the spread of COVID-19 by promoting health safety during the pandemic, especially in places where social distancing can be difficult. Consequently, this paper presents a technological solution to automate the process of issuing movement permits in workplaces. The results showed that the proposed strategy of social distancing inside buildings is effective enough to flatten the curve, so that health authorities are not required to recommend staying home to slow the spread of COVID-19. Consequently, this paper introduces a solution to the resource sharing problem (resource allocation problem), in which multiple agents (people or robots) of a system try to move reliably in their environment, their biggest concern being to avoid collisions (infections). As a result, the experiments carried out in this paper showed the high performance of the designed algorithm in complying with COVID-19 social distancing regulations.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_24-A_Generic_Approach_for_Allocating_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence: Machine Translation Accuracy in Translating French-Indonesian Culinary Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120323</link>
        <id>10.14569/IJACSA.2021.0120323</id>
        <doi>10.14569/IJACSA.2021.0120323</doi>
        <lastModDate>2021-03-31T07:07:45.1200000+00:00</lastModDate>
        
        <creator>Muhammad Hasyim</creator>
        
        <creator>Firman Saleh</creator>
        
        <creator>Rudy Yusuf</creator>
        
        <creator>Asriani Abbas</creator>
        
        <subject>Machine translation; Google translation; accuracy; culinary texts; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The use of machine translation as artificial intelligence (AI) keeps increasing, and the world’s most popular translation tool is Google Translate (GT). The tool is not merely used for learning and obtaining information from foreign languages through translation; it has also been used as a medium of interaction and communication in hospitals, airports, and shopping centers. This paper explores machine translation accuracy in translating French-Indonesian culinary texts (recipes). The samples of culinary text were taken from the internet. The research results show that the semiotic model of machine translation in GT translates from the signifier (forms) of the source language to the signifier (forms) of the target language by emphasizing the equivalence of the concept (signified) between the source language and the target language. GT helps translate existing French-Indonesian culinary text concepts through words, phrases, and sentences. A problem encountered in machine translation of culinary texts is cultural equivalence: GT cannot accurately identify the cultural context of the source language and the target language, so the results take the form of a literal translation. However, the accuracy of GT can be improved by refining the translation of cultural equivalents through words, phrases, and sentences from one language to another.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_23-Artificial_Intelligence_Machine_Translation_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVM Machine Learning Classifier to Automate the Extraction of SRS Elements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120322</link>
        <id>10.14569/IJACSA.2021.0120322</id>
        <doi>10.14569/IJACSA.2021.0120322</doi>
        <lastModDate>2021-03-31T07:07:45.0870000+00:00</lastModDate>
        
        <creator>Ayad Tareq Imam</creator>
        
        <creator>Aysh Alhroob</creator>
        
        <creator>Wael Jumah Alzyadat</creator>
        
        <subject>Information extraction; named entity recognition; machine learning; support vector machine; software requirement specification; WEKA; I-CASE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The extraction of software entities such as system, use case, and actor from an English natural-language description of a user’s software requirements is a linguistic and semantic process of a natural language processing application. Entity extraction is known by researchers in the fields of linguistics and computation to be a complicated and challenging problem, due to the ambiguities of natural languages. This paper presents a named entity recognition method called SyAcUcNER (System Actor Use-Case Named Entity Recognizer) for extracting the system, actor, and use case entities from unstructured English descriptions of user requirements for software. SyAcUcNER uses one of the machine learning (ML) approaches, the Support Vector Machine (SVM), as an effective classifier, and applies a semantic role labeling process to tag the words in the text of user software requirements. SyAcUcNER is the first work that defines the structure of a requirements-engineering-specialized NER, the first that uses a specialized NER model to extract actor and use case entities from English-language requirements descriptions, and the first time an SVM has been used to specify the semantic meanings of words in a certain domain of discourse, namely the Software Requirements Specification (SRS). The performance of SyAcUcNER, which utilizes WEKA’s SVM, is evaluated using a binomial technique, and the results gained from running SyAcUcNER on text corpora from assorted sources give weighted averages of 76.2% for precision, 76% for recall, and 72.1% for the F-measure.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_22-SVM_Machine_Learning_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Software Quality Attributes using Analytic Hierarchy Process (AHP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120321</link>
        <id>10.14569/IJACSA.2021.0120321</id>
        <doi>10.14569/IJACSA.2021.0120321</doi>
        <lastModDate>2021-03-31T07:07:45.0730000+00:00</lastModDate>
        
        <creator>Botchway Ivy Belinda</creator>
        
        <creator>Akinwonmi Akintoba Emmanuel</creator>
        
        <creator>Nunoo Solomon</creator>
        
        <creator>Alese Boniface Kayode</creator>
        
        <subject>Analytic Hierarchy Process (AHP); software quality; quality attribute; quality model; sub-attributes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The use of quality software is important to stakeholders, and demand for it is increasing. This work focuses on meeting software quality requirements from both the user’s and the developer’s perspectives. After a review of existing software-quality models, twenty-four software quality attributes addressed by ten models (McCall’s, Boehm’s, ISO/IEC, FURPS, Dromey’s, Kitchenham’s, Ghezzi’s, Georgiadou’s, Jamwal’s, and Glibb’s) were identified. We further categorized the twenty-four attributes into a group of eleven (11) main attributes and another group of thirteen (13) sub-attributes. Thereafter, questionnaires were administered to twenty experts from fields including cybersecurity, programming, software development, and software engineering. The Analytic Hierarchy Process (AHP) was applied to perform a multi-criteria decision-making assessment of the questionnaire responses and select the suitable software quality attributes for the development of the proposed quality model, which aims to meet both users’ and developers’ software quality requirements. The results of the assessment showed Maintainability to be the most important quality attribute, followed by Security, Testability, Reliability, Efficiency, Usability, Portability, Reusability, Functionality, Availability, and finally Cost.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_21-Evaluating_Software_Quality_Attributes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Predicting Customer Desertion of Telephony Service using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120320</link>
        <id>10.14569/IJACSA.2021.0120320</id>
        <doi>10.14569/IJACSA.2021.0120320</doi>
        <lastModDate>2021-03-31T07:07:45.0570000+00:00</lastModDate>
        
        <creator>Carlos Acero-Chara&#241;a</creator>
        
        <creator>Erbert Osco-Mamani</creator>
        
        <creator>Tito Ale-Nieto</creator>
        
        <subject>Software KNIME; Support Vector Machine (SVM); neural networks; decision trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Many telephony customers leave their service for various reasons. This study therefore proposes a model based on decision trees to predict potential churn among customers of a telecommunications company&#39;s telephone service. To verify the results, several algorithms were compared, including neural networks, support vector machines, and decision trees; the predictive models were designed in the KNIME software, and quality was evaluated as the percentage of correct predictions of the target variable. The model&#39;s results allow the company to act proactively in retaining clients and improving the services provided. A data set with 21 predictor variables that influence customer churn was used, together with a dependent variable (churn) indicating whether the customer left (1) or did not leave (0) the company&#39;s service. On a test data set the results reach a precision of 91.7%, indicating that decision trees are an attractive alternative for developing customer-attrition prediction models on this type of data, owing to the simplicity of interpreting their results.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_20-Model_for_Predicting_Customer_Desertion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling a Functional Engine for the Opinion Mining as a Service using Compounded Score Computation and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120319</link>
        <id>10.14569/IJACSA.2021.0120319</id>
        <doi>10.14569/IJACSA.2021.0120319</doi>
        <lastModDate>2021-03-31T07:07:45.0230000+00:00</lastModDate>
        
        <creator>Rajeshwari D</creator>
        
        <creator>Puttegowda. D</creator>
        
        <subject>Text mining; opinion; sentiments; machine learning; unstructured data; cloud services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The ever-growing use of digital platforms across many application areas, primarily the collaborative platforms of e-commerce, e-learning, social media, blogging, and many more, produces a large corpus of unstructured text data. Many potential strategic solutions require accurate and fast classification of the hidden patterns in this corpus of opinion text. In-premise applications face various real-time feasibility constraints; therefore, offering Opinion Mining as a Service on cloud platforms is a new research domain. This paper proposes a design framework for the evolution of a classification engine for opinion mining using score-based computation with a customized VADER algorithm. Another method, used for scalability, is a machine learning model that supports classification of a large corpus of unstructured text data. Model validation is performed on various complex, unstructured text datasets with performance metrics including cumulative score, learning rate, loss function, and specificity analysis. These metrics indicate the models&#39; stability and scalability behaviors as well as their accuracy and robustness across different datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_19-Modeling_a_Functional_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Model with Built-in Process Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120318</link>
        <id>10.14569/IJACSA.2021.0120318</id>
        <doi>10.14569/IJACSA.2021.0120318</doi>
        <lastModDate>2021-03-31T07:07:44.9770000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>Process-mining techniques; event log; conceptual modeling; static model; events model; behavioral model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Process mining involves discovering, monitoring, and improving real processes by extracting knowledge from event logs in information systems. Process mining has become an important topic in recent years, as evidenced by a growing number of case studies and commercial tools. Current studies in this area assume that event records are created separately from a conceptual model (CM). Techniques are then used to discover missing processes and conformance with the CM, as well as for checks and enhancements. By contrast, in this paper we focus on modeling events as part of a tight multilevel CM that includes a static description, dynamics, events-log scheme, and monitoring and control system. If there is an out-of-model event log, it is treated as a requirement needed to build or enrich the CM. The motivation for such a unified system is our thesis that process mining is an essential component of a CM with built-in mining capabilities to perform self-process mining and attain completeness. Accordingly, our proposed conceptual model facilitates collecting data generated about itself. The resultant framework emphasizes an integrated representation of systems to include process-mining functionalities. Case studies that start with event logs are recast to evolve around a model-first approach that is not limited to the initial event log. The result presents a framework that achieves the aims of process mining in a more comprehensive way.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_18-Conceptual_Model_with_Built_in_Process_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Verification of an Efficient Architecture to Enhance the Security in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120317</link>
        <id>10.14569/IJACSA.2021.0120317</id>
        <doi>10.14569/IJACSA.2021.0120317</doi>
        <lastModDate>2021-03-31T07:07:44.9630000+00:00</lastModDate>
        
        <creator>Eman K. Elsayed</creator>
        
        <creator>L. S. Diab</creator>
        
        <creator>Asmaa. A. Ibrahim</creator>
        
        <subject>Internet of things (IoT); IoT architecture; IoT security; formal modeling and verification; Event-B</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The Internet of Things (IoT) is one of the world&#39;s newest intelligent communication technologies. Several novel IoT architectures have been proposed, but they still suffer from security and privacy challenges. Formal verification is a vital method for detecting potential weaknesses and vulnerabilities at an early stage. In this paper, the Event-B formal method is used to design a formal description of a secure IoT architecture that covers the security properties of the IoT architecture, and various Event-B facilities such as formal verification, functional checks, and model checking are used to model different spoofing attacks on the IoT environment. Additionally, the accuracy of the IoT architecture is assessed by executing different Event-B runs such as simulation, proof obligation, and invariant checking. By applying formal verification, functional checks, and model checking, the verified models of the IoT-EAA architecture automatically discharged 82.35% of proof obligations through different Event-B provers. Finally, this paper introduces a well-defined IoT security infrastructure to address and reduce these security challenges.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_17-Formal_Verification_of_an_Efficient_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Natural Language Processing with Figures of Speech in Hindi Poetry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120316</link>
        <id>10.14569/IJACSA.2021.0120316</id>
        <doi>10.14569/IJACSA.2021.0120316</doi>
        <lastModDate>2021-03-31T07:07:44.9470000+00:00</lastModDate>
        
        <creator>Milind Kumar Audichya</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>“Alankaar”; figure of speech; Hindi; Natural Language Processing (NLP); poetry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Poems have always been an excellent way of expressing emotions in any language. In particular, Hindi poetry enjoys wide popularity among native and non-native speakers all over the world. A typical poem in Hindi is characterized by meter (“Chhand”), emotion (“Rasa”), and figure of speech (“Alankaar”). The present research work is the first of its kind in Hindi Natural Language Processing (NLP) to address the Hindi figure of speech. The authors have created a systematic hierarchical structure of Hindi “Alankaar” types and sub-types and extended the work to identify several of them. A taxonomical list of 58 Hindi figures of speech is presented along with their nearest mappings to English equivalents. The paper also presents the distinct rules for each type and sub-type needed for the NLP classification task. The authors achieved 97% efficiency in these first reported results, with an average execution time of 0.002 seconds.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_16-Towards_Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Machine Learning Technologies to Classify and Predict Heart Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120315</link>
        <id>10.14569/IJACSA.2021.0120315</id>
        <doi>10.14569/IJACSA.2021.0120315</doi>
        <lastModDate>2021-03-31T07:07:44.9300000+00:00</lastModDate>
        
        <creator>Mohammed F. Alrifaie</creator>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <creator>Asaad Shakir Hameed</creator>
        
        <creator>Modhi Lafta Mutar</creator>
        
        <subject>Classification; Naive Bayes (NB); (Support Vector Machine SVM); Random Forest; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Data mining techniques are used widely in the healthcare sector to predict and diagnose various diseases, and the diagnosis of heart disease is considered one of the most important applications of such systems. Today data is collected in large amounts, and people increasingly need to rely on such systems. In recent years heart disease has increased excessively and has become one of the deadliest diseases in many countries. Most data sets suffer from extreme values that reduce classification accuracy, where extreme values are defined as irrelevant or incorrect data, missing values, and incorrect values in the dataset. Data transformation is another very important preprocessing step that converts data into forms suitable for mining models through aggregation and filtering methods, such as eliminating duplicate features using correlation and one of the wrapper methods, and applying repeated feature discrimination. In this process, missing values are dealt with through the &quot;Remove with values&quot; methods and class-estimation methods. Classification methods such as Na&#239;ve Bayes (NB) and Random Forest (RF) are applied to the original datasets as well as to the datasets produced by the feature selection methods. All of these operations are implemented on three different heart disease data sets to analyze the effect of preprocessing on accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_15-Using_Machine_Learning_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Hybrid with Binary Dragonfly Feature Selection for the Wisconsin Breast Cancer Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120314</link>
        <id>10.14569/IJACSA.2021.0120314</id>
        <doi>10.14569/IJACSA.2021.0120314</doi>
        <lastModDate>2021-03-31T07:07:44.9000000+00:00</lastModDate>
        
        <creator>Marian Mamdouh Ibrahim</creator>
        
        <creator>Dina Ahmed Salem</creator>
        
        <creator>Rania Ahmed Abdel Azeem Abul Seoud</creator>
        
        <subject>Breast cancer; Wisconsin data set; classifiers; deep learning; feature selection; dragonfly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Breast cancer is the world’s top cancer affecting women, and its risk factors vary with place, lifestyle, and diet. Treatment procedures begun after a confirmed cancer case is discovered can reduce the risk of the disease. Unfortunately, breast cancers arising in low- and middle-income countries are diagnosed at a very late stage, at which the chances of survival are impeded and reduced. Early detection is therefore required not only to improve the accuracy of discovering breast cancer but also to increase the chances of making the right decision on a successful treatment plan. Several studies have built software models utilizing machine learning and soft computing techniques for cancer detection. This research aims to build a model scheme that facilitates the detection of breast cancer and provides an exact diagnosis; improving the accuracy of the proposed model has therefore been one of the key areas of study. The model is based on deep learning and intends to provide a framework to accurately separate benign and malignant breast tumors. This study optimizes the learning algorithm by applying the dragonfly algorithm to select the best features and the best parameter values of the deep learning model. Moreover, it compares the deep learning results against those of support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN) classifiers, chosen because they are among the most reliable algorithms, with a solid fingerprint in the field of clinical data classification. Consequently, the hybrid model of deep learning combined with the binary dragonfly algorithm accurately classified benign and malignant breast tumors with fewer features. In addition, the deep learning model achieved better accuracy in classifying the Wisconsin Breast Cancer Database using all available features.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_14-Deep_Learning_Hybrid_with_Binary_Dragonfly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Anxiety of Patients with Alzheimer’s Dementia using Boosting Algorithm and Data-Level Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120313</link>
        <id>10.14569/IJACSA.2021.0120313</id>
        <doi>10.14569/IJACSA.2021.0120313</doi>
        <lastModDate>2021-03-31T07:07:44.8830000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Anxiety; AdaBoost; patients with Alzheimer&#39;s dementia; SMOTE; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Since overfitting due to imbalanced data can cause prediction errors during the learning process of machine learning and degrade the prediction performance of the model (e.g., its sensitivity), it is necessary to add a data sampling technique in the model development step, in addition to selecting a machine learning algorithm suitable for the data, to reduce overfitting. This study examined Alzheimer&#39;s patients living in South Korea to understand the predictors of anxiety using boosting algorithms (i.e., AdaBoost and XGBoost) and data-level approaches (raw data, undersampling, oversampling, and SMOTE) and identified the machine learning algorithm with the best prediction performance. We analyzed 253 elderly people diagnosed with Alzheimer&#39;s disease (aged 60 to 74 years old) who visited rehabilitation hospitals for early dementia screening. This study developed models for predicting the anxiety of Alzheimer&#39;s dementia patients using AdaBoost and XGBoost and compared the prediction performance (i.e., accuracy, sensitivity, and specificity) of the models. The results of this study showed that XGBoost based on SMOTE (accuracy=0.84, sensitivity=0.85, and specificity=0.81) was the model with the best prediction performance. Consequently, the results indicate that a SMOTE-XGBoost model may provide higher accuracy than a SMOTE-AdaBoost model when developing prediction models on outcome-imbalanced data such as disease data in the future.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_13-Predicting_the_Anxiety_of_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FishDeTec: A Fish Identification Application using Image Recognition Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120312</link>
        <id>10.14569/IJACSA.2021.0120312</id>
        <doi>10.14569/IJACSA.2021.0120312</doi>
        <lastModDate>2021-03-31T07:07:44.8700000+00:00</lastModDate>
        
        <creator>Siti Nurulain Mohd Rum</creator>
        
        <creator>Fariz Az Zuhri Nawawi</creator>
        
        <subject>Component; Freshwater Fish; fish species recognition; FishDeTec; Convolutional Neural Network (CNN); VGG16</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Underwater imagery processing is always in high demand, especially for fish species identification. This activity is important not only for biologists, scientists, and fishermen but also for education. It has been reported that there are more than 200 species of freshwater fish in Malaysia. Many attempts have been made to develop fish recognition and classification via image processing; however, most of the existing work targets saltwater fish species and is intended for a specific group of users. This research work focuses on the development of a prototype system named FishDeTec to detect the freshwater fish species found in Malaysia through an image processing approach. In this study, the proposed predictive model of FishDeTec is developed using VGG16, a deep Convolutional Neural Network (CNN) model for large-scale image classification. The experimental study indicates that the proposed model yields promising results.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_12-FishDeTec_A_Fish_Identification_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Synthetic Minority Over-sampling Technique and Support Vector Machine to Develop a Classifier for Parkinson’s disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120311</link>
        <id>10.14569/IJACSA.2021.0120311</id>
        <doi>10.14569/IJACSA.2021.0120311</doi>
        <lastModDate>2021-03-31T07:07:44.8370000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Byungsoo Kim</creator>
        
        <subject>Kernel type; Rey complex figure test; support vector machine; SMOTE; Parkinson’s disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>As the number of Parkinson’s disease patients increases in the elderly population, it has become a critical issue to understand the early characteristics of Parkinson’s disease and to detect Parkinson’s disease as soon as possible during normal aging. This study minimized the imbalance issue by employing the Synthetic Minority Over-sampling Technique (SMOTE), developed eight Support Vector Machine (SVM) models for predicting Parkinson’s disease using different kernel types {(C-SVM or Nu-SVM)&#215;(Gaussian kernel, linear, polynomial, or sigmoid algorithm)}, and compared the accuracy, sensitivity, and specificity of the developed models. This study evaluated 76 senior citizens with Parkinson’s disease (32 males and 44 females) and 285 healthy senior citizens without Parkinson’s disease (148 males and 137 females). The analysis results showed that the linear kernel-based Nu-SVM had the highest sensitivity (62.0%), specificity (81.6%), and overall accuracy (71.3%). The major negative relationship factors of the Parkinson’s disease prediction model were MMSE-K, Stroop Test, Rey Complex Figure Test (RCFT), verbal memory test, ADL, IADL, being 70 years old or older, middle school graduation or below, and being a woman. When the influence of variables was compared using “functional weight”, RCFT was identified as the most influential variable in the model for distinguishing Parkinson’s disease from healthy elderly. The results of this study imply that a prediction model using a linear kernel-based Nu-SVM would be more accurate than other kernel-based SVM models for handling imbalanced disease data.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_11-Applying_Synthetic_Minority_Over_sampling_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-objective based Optimal Network Reconfiguration using Crow Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120310</link>
        <id>10.14569/IJACSA.2021.0120310</id>
        <doi>10.14569/IJACSA.2021.0120310</doi>
        <lastModDate>2021-03-31T07:07:44.8230000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Battery storage; distributed generation; evolutionary algorithms; network reconfiguration; renewable energy; uncertainty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>This paper presents an optimal network reconfiguration (ONR)/feeder reconfiguration (FRC) approach that considers the minimization of total operating cost and system power losses as objectives. The ONR/FRC is a feasible approach for the enhancement of system performance in distribution systems (DSs). FRC alters the topological structure of feeders by changing the close/open status of the tie and sectionalizing switches in the system. Apart from the power received from the main grid, this paper considers the power from distributed generation (DG) sources such as wind energy generators (WEGs), solar photovoltaic (PV) units, and battery energy storage (BES) units. The proposed multi-objective-based ONR/FRC problem has been solved by using the multi-objective crow search algorithm (MO-CSA). The proposed methodology has been implemented on two (14 bus and 17 bus) distribution systems with three feeders.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_10-Multi_objective_based_Optimal_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Deep Neural Network based on Transfer Learning for Pet Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120309</link>
        <id>10.14569/IJACSA.2021.0120309</id>
        <doi>10.14569/IJACSA.2021.0120309</doi>
        <lastModDate>2021-03-31T07:07:44.8070000+00:00</lastModDate>
        
        <creator>Bhavesh Jaiswal</creator>
        
        <creator>Nagendra Gajjar</creator>
        
        <subject>Machine learning; knowledge distillation; transfer learning; domain adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Deep learning frameworks have progressed beyond human recognition capabilities, and now is the perfect opportunity to optimize them for implementation on embedded platforms. Present deep learning architectures support learning capabilities, but they lack the flexibility to apply learned knowledge to tasks in other, unfamiliar domains. This work tries to fill this gap with a deep neural network-based solution for object detection in unrelated domains, with a focus on a reduced footprint of the developed model. Knowledge distillation provides efficient and effective teacher-student learning for a variety of different visual recognition tasks. A lightweight student network can be easily trained under the guidance of high-capacity teacher networks. The teacher-student architecture implementation on binary classes shows a 20% improvement in accuracy within the same training iterations using the transfer learning approach. The scalability of the student model is tested with binary, ternary and multiclass problems, and their performance is compared on the basis of inference speed. The results show that the inference speed does not depend on the number of classes: for similar recognition accuracy, the inference speed is about 50 frames per second, or 20 ms per image. Thus, this approach can be generalized as per the application requirement with minimal changes, provided dataset format compatibility is ensured.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_9-Performance_Analysis_of_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Analysis of Resource Allocation and Service Placement in Fog and Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120308</link>
        <id>10.14569/IJACSA.2021.0120308</id>
        <doi>10.14569/IJACSA.2021.0120308</doi>
        <lastModDate>2021-03-31T07:07:44.7730000+00:00</lastModDate>
        
        <creator>A. S. Gowri</creator>
        
        <creator>P.Shanthi Bala</creator>
        
        <creator>Immanuel Zion Ramdinthara</creator>
        
        <subject>Cloud; fog; reinforcement learning; energy-efficient computing; resource allocation; service placement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The voluminous data produced and consumed by digitalization need resources that offer compute, storage, and communication facilities. To withstand such demands, Cloud and Fog computing architectures are viable solutions due to their utility-like nature and accessibility. The success of any computing architecture depends on how efficiently its resources are allocated to the service requests. Among the existing survey articles on Cloud and Fog, issues like scalability and the time-critical requirements of the Internet of Things (IoT) are rarely focused on. The proliferation of IoT leads to energy crises too. The proposed survey aims to build a Resource Allocation and Service Placement (RASP) strategy that addresses these issues. The survey recommends techniques like Reinforcement Learning (RL) and Energy Efficient Computing (EEC) in Fog and Cloud to escalate the efficacy of RASP. While RL meets the time-critical requirements of IoT with high scalability, EEC empowers RASP by saving cost and energy. As most of the early works were carried out using reactive policy, this paves the way to build RASP solutions using alternate policies. The findings of the survey help researchers focus their attention on the research gaps and devise a robust RASP strategy in Fog and Cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_8-Comprehensive_Analysis_of_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Change Detection Method with Multi-temporal Satellite Images based on Wavelet Decomposition and Tiling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120307</link>
        <id>10.14569/IJACSA.2021.0120307</id>
        <doi>10.14569/IJACSA.2021.0120307</doi>
        <lastModDate>2021-03-31T07:07:44.7430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Daubechies wavelet; multi-resolution analysis (MRA); change detection; multi-temporal satellite image; geometric distortion; Landsat Thematic Mapper (TM) image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>A change detection method for multi-temporal satellite images based on Wavelet decomposition with the Daubechies wavelet function (Multi-Resolution Analysis, MRA) and tiling is proposed. The method allows detection of changes in time series analysis and is not sensitive to geometric distortions included in the satellite images. In this paper, the author proposes the MRA-based method as a means of extracting change points from satellite images acquired over many periods. The experimental results with a simulation image and a Landsat Thematic Mapper (TM) image show that more appropriate changes can be detected with the proposed method in comparison with the existing subtraction method. When applied to simulated and real satellite images, the method was confirmed to be robust to minute nonlinear geometric distortion.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_7-Change_Detection_Method_with_Multi_Temporal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy based Techniques for Handling Missing Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120306</link>
        <id>10.14569/IJACSA.2021.0120306</id>
        <doi>10.14569/IJACSA.2021.0120306</doi>
        <lastModDate>2021-03-31T07:07:44.7430000+00:00</lastModDate>
        
        <creator>Malak El-Bakry</creator>
        
        <creator>Farid Ali</creator>
        
        <creator>Ayman El-Kilany</creator>
        
        <creator>Sherif Mazen</creator>
        
        <subject>Time series data; fuzzy logic; membership functions; machine learning; missing values</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Time series data usually suffers from a high percentage of missing values, which is related to its nature and its collection process. This paper proposes a data imputation technique for imputing the missing values in time series data. The Fuzzy Gaussian membership function and the Fuzzy Triangular membership function are used in a data imputation algorithm in order to identify the best imputation for the missing values, where the membership functions are used to calculate weights for the data values of the nearest neighbors before using them during the imputation process. The evaluation results show that the proposed technique outperforms traditional data imputation techniques, with the triangular fuzzy membership function showing higher accuracy than the Gaussian membership function during evaluation.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_6-Fuzzy_based_Techniques_for_Handling_Missing_Values.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants of e-Commerce Use at Different Educational Levels: Empirical Evidence from Turkey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120305</link>
        <id>10.14569/IJACSA.2021.0120305</id>
        <doi>10.14569/IJACSA.2021.0120305</doi>
        <lastModDate>2021-03-31T07:07:44.7130000+00:00</lastModDate>
        
        <creator>Seyda &#220;nver</creator>
        
        <creator>&#214;mer Alkan</creator>
        
        <subject>Electronic commerce; online shopping; educational level; e-commerce; Turkey; binary logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>The rapid spread of the internet has made e-commerce an essential and effective tool for commercial transactions. The purpose of this study is to investigate differences in e-commerce use between individuals in Turkey according to their educational levels and to specify the relationship between demographic characteristics of individuals and e-commerce use. In this study, cross-sectional data obtained from the Household Information Technologies Usage Survey were used. Binary logistic regression analysis was utilized to determine the factors associated with the e-commerce use of individuals. This study concluded that the variables of income level, age, gender, occupation, region, social media use, use of internet banking, use of e-government, number of information equipment items in a household and the number of people in a household are related to e-commerce use. In addition, it was found that the variables in e-commerce use differed according to the educational levels of individuals. This study determined that as the education level of individuals increased, their tendency towards online shopping increased. A higher education level implies a higher income level at both state and private institutions and greater openness to innovations. This naturally has a positive effect on the online shopping behavior of individuals.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_5-Determinants_of_e_Commerce_use_at_different_Educational_Levels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feeder Reconfiguration in Unbalanced Distribution System with Wind and Solar Generation using Ant Lion Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120304</link>
        <id>10.14569/IJACSA.2021.0120304</id>
        <doi>10.14569/IJACSA.2021.0120304</doi>
        <lastModDate>2021-03-31T07:07:44.6330000+00:00</lastModDate>
        
        <creator>Surender Reddy Salkuti</creator>
        
        <subject>Distributed energy resources; evolutionary algorithms; feeder reconfiguration; operational cost; optimization algorithms; three-phase transformers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>This paper proposes an approach for the distribution system (DS) feeder reconfiguration (FRC) of balanced and unbalanced networks by minimizing the total cost of operation. Network reconfiguration is a feasible technique for system performance enhancement in low voltage distribution systems. In this work, wind and solar photovoltaic (PV) units are selected as distributed energy resources (DERs) and they are considered in the proposed FRC approach. The uncertainties related to DERs are modeled using probability analysis. In most cases, the distribution system is an unbalanced system, and 3-phase transformers play a vital role as they have different configurations. This paper proposes efficient power flow models for unbalanced distribution systems with various 3-phase transformer configurations. The proposed FRC approach has been solved by using the evolutionary algorithm based Ant Lion Optimization (ALO), and it has been implemented on a 17 bus test system considering balanced and unbalanced distribution systems with and without RESs.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_4-Feeder_Reconfiguration_in_Three_Phase.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adopting Vulnerability Principle as the Panacea for Security Policy Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120303</link>
        <id>10.14569/IJACSA.2021.0120303</id>
        <doi>10.14569/IJACSA.2021.0120303</doi>
        <lastModDate>2021-03-31T07:07:44.6030000+00:00</lastModDate>
        
        <creator>Prosper K. Yeng</creator>
        
        <creator>Stephen D. Wolthusen</creator>
        
        <creator>Bian Yang</creator>
        
        <subject>Information security; vulnerability principle; ethics; security policy monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Despite the adoption of information security policies, many industries continue to suffer from the harm of non-compliance. Some of these harms include illegal disclosure of customers’ sensitive data, leakage of business trade secrets, and various kinds of cyber-attacks. The impact of such harm can be enormous. To avert this, monitoring compliance with information security policies (otherwise known as use policies) has been adopted as a strategy towards enhancing security policy compliance. One of the main purposes of use policy monitoring is to enhance security policy compliance so as to prevent harm. Ironically, the consequences of use policy monitoring can be detrimental. While proponents use utilitarian ethics to argue that the monitoring of use policy enhances security policy compliance, the opponents of use policy lean towards deontological ethics to argue against the monitoring of security policy. Deontological ethics holds that monitoring of security policy intrudes on employees’ privacy and tends to hamper their work performance. There has not been any clear solution to this discourse. A survey was conducted to understand the extent of security policy monitoring. The vulnerability principle was therefore explored as the panacea towards enhancing the monitoring of use policy to satisfy all the involved stakeholders.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_3-Adopting_Vulnerability_Principle_as_the_Panacea.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intellectual Singularity of Quasi-Holographic Paradigm for a Brain-like Video-Component of Artificial Mind</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120302</link>
        <id>10.14569/IJACSA.2021.0120302</id>
        <doi>10.14569/IJACSA.2021.0120302</doi>
        <lastModDate>2021-03-31T07:07:44.5730000+00:00</lastModDate>
        
        <creator>Yarichin E. M</creator>
        
        <creator>Gruznov V.M</creator>
        
        <creator>Yarichina G.F</creator>
        
        <subject>Differential holographic principle; video-data; structuring; super-saccades; integral quasi-holographic principle; long-range action; bundles; ascending hierarchy; singularity; video-component; video-thesaurus; video-intelligence; architecture; artificial intelligence; artificial mind; full sampling theorem; descending hierarchy; hierarchical feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>On the basis of a new post-Shannon information approach (quantitative and qualitative together), a hierarchical process of evaluating video-information by an intellectual brain-like video-component of artificial mind is considered. The development of the classical (Shannon&#39;s) informational approach to the level of the new (post-Shannon&#39;s) informational approach made it possible to formulate an important additional “bonus” in the form of a differential holographic principle (DHP). The DHP made it possible to present video-information on a dualistic basis, considering its physical and structural components together. An integral quasi-holographic principle (IQHP) is developed on the basis of the DHP. However, in contrast to the DHP, this principle represents a supra-physical (abstract) principle, uses a long-range action template and is realized instantly (i.e., with an infinitely high speed). In a joint tandem of the physical (quantitative) and structural (qualitative) components of video-information evaluation, the structural component is dominant. Due to this, the technology of the video-component of artificial mind based on the IQHP always takes the form of an ascending hierarchy of structured (abstract) evaluations of video-information. This technology also includes a hierarchy of self-learning stages, thanks to which the constant development of macro-objects of video-information in the form of video-thesauruses as high-quality measuring scales is carried out. This maintains the relevance, efficiency, and instantaneousness of the video-component of the artificial mind in evaluating video-information. Based on the ideas and principles of the new (post-Shannon) information approach to evaluating video-information, the structural and functional architecture of the video-component of artificial mind is built. This architecture is not biologically inspired, but it turned out to coincide surprisingly exactly with the known structure of the human neocortex (in the number of levels of the ascending hierarchy, the presence of a hierarchy in direct and feedback connections, the method of structuring and collecting input elementary video-data, etc.). A new theorem for a complete sample of video-data, considered together in physical and structural form, is formulated. The direct version of this theorem corresponds to an ascending hierarchy of video-information evaluations based on the IQHP and bundles of video-information evaluations. The inverse version characterizes the global hierarchical feedback, which takes the form of a descending hierarchy of “service” video-information evaluations.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_2-Intellectual_Singularity_of_Quasi_Holographic_Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Classification using Angular and Radial Bins in Transformed Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120301</link>
        <id>10.14569/IJACSA.2021.0120301</id>
        <doi>10.14569/IJACSA.2021.0120301</doi>
        <lastModDate>2021-03-31T07:07:44.4930000+00:00</lastModDate>
        
        <creator>Arun Kulkarni</creator>
        
        <creator>Aavash Sthapit</creator>
        
        <creator>Ashim Sedhain</creator>
        
        <creator>Bishrut Bhattarai</creator>
        
        <creator>Saurav Panthee</creator>
        
        <subject>Texture; discrete cosine transform; radial and angular bins; decision tree; support vector machine; logarithmic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(3), 2021</description>
        <description>Texture is generally recognized as fundamental to perception, yet no precise definition or characterization is available in practice. Texture recognition has many applications in areas such as medical image analysis, remote sensing, and robotic vision. Various approaches, such as statistical, structural, and spectral, have been suggested in the literature. In this paper we propose a method for texture feature extraction. We transform the image using the two-dimensional Discrete Cosine Transform (DCT) and extract features using ring and wedge bins in the DCT plane. These features are based on texture properties such as coarseness, smoothness, graininess, and directivity of the texture pattern in the image. We develop a model to classify texture images using the extracted features. We use three classifiers: the Decision Tree, Support Vector Machine (SVM), and Logarithmic Regression (LR). To test our approach, we use the Brodatz texture image data set consisting of 111 images of different texture patterns. Classification results such as accuracy and F-score obtained from the three classifiers are presented in the paper.</description>
        <description>http://thesai.org/Downloads/Volume12No3/Paper_1-Texture_Classification_using_Angular_and_Radial_Bins.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agile Fitness of Software Companies in Bangladesh: An Empirical Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01202103</link>
        <id>10.14569/IJACSA.2021.01202103</id>
        <doi>10.14569/IJACSA.2021.01202103</doi>
        <lastModDate>2021-03-02T13:05:44.8970000+00:00</lastModDate>
        
        <creator>M M Mahbubul Syeed</creator>
        
        <creator>Razib Hayat Khan</creator>
        
        <creator>Jonayet Miah</creator>
        
        <subject>Agile manifesto; agile methodology; scrum; software development projects; software companies in Bangladesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>With the mandate of light-weight working practices, iterative development, customer collaboration and incremental delivery of business value, Agile software development methods have become the de-facto standard for commercial software development worldwide. Consequently, this research aims to empirically investigate the preparedness and the adoption of agile practices in prominent software companies in Bangladesh. To achieve this goal, an extensive survey of 16 established software companies in Bangladesh was carried out. Results show that Scrum is the most widely practiced agile methodology. Moreover, these software companies are largely ready to adopt the Scrum methodology effectively. However, with regard to practicing the Scrum principles, they fall short in many key aspects.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_103-Agile_Fitness_of_Software_Companies_in_Bangladesh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Recognition using Single-Input-Single-Output Channel Model and Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01202102</link>
        <id>10.14569/IJACSA.2021.01202102</id>
        <doi>10.14569/IJACSA.2021.01202102</doi>
        <lastModDate>2021-03-01T14:58:54.4770000+00:00</lastModDate>
        
        <creator>Sameer Ahmad Bhat</creator>
        
        <creator>Abolfazl Mehbodniya</creator>
        
        <creator>Ahmed Elsayed Alwakeel</creator>
        
        <creator>Julian Webber</creator>
        
        <creator>Khalid Al-Begain</creator>
        
        <subject>Motion detection; pattern recognition; received signal strength indicator; Software Defined Radio (SDR); supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>WiFi based human motion recognition systems mainly rely on the availability of Channel State Information (CSI). Embedded within WiFi devices, present radio sub-systems can output CSI that describes the response of a wireless communication channel. Such radio subsystems use complex hardware architectures that consume a lot of energy during data transmission and exhibit phase drift in the sub-carriers. Although human motion recognition (HMR) based on multi-carrier transmission systems shows better classification accuracy, transmission of multiple sub-carriers results in an increase in overall energy consumption at the transmitter. Consequently, CSI based systems can be perceived as process-intensive and power-hungry devices. To alleviate the process-intensive computing and reduce energy consumption in WiFi, this study proposes a human recognition system that uses only one radio carrier frequency. The study uses two software defined radios and a machine learning classifier to identify four humans, and the study results show that human identification is possible with 99% accuracy using only one radio carrier. The results of this study will have an impact on the development process of smart sensing systems, particularly those that relate to healthcare, authentication, and passive monitoring and sensing.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_102-Human_Recognition_using_Single_Input_Single_Output.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster-based Access Control Mechanism for Cellular D2D Communication Networks with Dense Device Deployment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01202101</link>
        <id>10.14569/IJACSA.2021.01202101</id>
        <doi>10.14569/IJACSA.2021.01202101</doi>
        <lastModDate>2021-03-01T14:58:54.4600000+00:00</lastModDate>
        
        <creator>Thanh-Dat Do</creator>
        
        <creator>Ngoc-Tan Nguyen</creator>
        
        <creator>Thi-Huong-Giang Dang</creator>
        
        <creator>Nam-Hoang Nguyen</creator>
        
        <creator>Minh-Trien Pham</creator>
        
        <subject>D2D communications; access control; channel allocation; power assignment; interference mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In cellular device-to-device (D2D) communication networks, devices can communicate directly with each other without passing through base stations. Access control is an important function of radio resource management which aims to reduce frequency collisions and mitigate interference between users’ connections. In this paper, we propose a cluster-based access control (CBAC) mechanism for heterogeneous cellular D2D communication networks with dense device deployment, where both a macro base station (MBS) and smallcell base stations (SBSs) coexist. In the proposed CBAC mechanism, each SBS first selects its operating bandwidth parts, relying on monitoring interference from its neighboring SBSs. Then, it jointly allocates channels and assigns transmission power to smallcell user equipments (SUEs) for their uplink transmissions and to users using D2D communications, in order to mitigate their interference to uplink transmissions of macrocell user equipments (MUEs). Through computer simulations, numerical results show that the proposed CBAC mechanism can provide higher network throughput as well as user throughput than the network-assisted device-decided scheme proposed in the literature. Simulation results also show that the SINR of uplink transmissions of MUEs and D2D communications managed by the MBS can be significantly improved.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_101-Cluster_based_Access_Control_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority-Mobility Aware Clustering Routing Algorithm for Lifetime Improvement of Dynamic Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.01202100</link>
        <id>10.14569/IJACSA.2021.01202100</id>
        <doi>10.14569/IJACSA.2021.01202100</doi>
        <lastModDate>2021-03-01T14:58:54.4430000+00:00</lastModDate>
        
        <creator>Rajiv R. Bhandari</creator>
        
        <creator>K. Raja Sekhar</creator>
        
        <subject>Cluster; routing; sleep scheduling; priority; reinforcement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Wireless sensor networks with mobility have been rapidly evolving and increasing in the recent decade. Cluster-based and hierarchical routing strategies demonstrate major improvements in the lifespan and scalability of the network. Latency, average energy consumption, and packet delivery ratio are highly impacted by a lack of coordination between cluster heads and highly mobile network nodes. The overall efficiency of a highly mobile wireless sensor network is reduced by current techniques such as mobility-conscious media access control, sleep/wakeup scheduling and transmission of real-time services in wireless sensor networks. This paper proposes a novel Priority-Mobility Aware Clustering Routing algorithm (P-MACRON) for high packet delivery by assigning fair weightage to each and every packet of a node. To decide the scheduling policy automatically, a reinforcement learning approach is integrated. The mixed approach of priority and self-learning results in better utilization of energy. The experimental results compare the slotted sense multiple access protocol, AODV, MEMAC and P-MACRON, in which the proposed algorithm delivers better results in terms of interval, packet size and simulation time.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_100-Priority_Mobility_Aware_Clustering_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disposable Virtual Machines and Challenges to Digital Forensics Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120299</link>
        <id>10.14569/IJACSA.2021.0120299</id>
        <doi>10.14569/IJACSA.2021.0120299</doi>
        <lastModDate>2021-03-01T14:58:54.4130000+00:00</lastModDate>
        
        <creator>Mohammed Yousuf Uddin</creator>
        
        <creator>Sultan Ahmad</creator>
        
        <creator>Mohammad Mazhar Afzal</creator>
        
        <subject>Digital forensics; digital investigation; disposable virtual machines; lightweight virtual machine; Microsoft sandbox; QEMU; qubes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The digital forensics field faces new challenges with emerging technologies. Virtualization is one of the significant challenges in the field of digital forensics. Virtual Machines (VMs) have many advantages, whether optimum utilization of hardware resources or cost savings for organizations. Traditional forensics tools are not competent enough to analyze virtual machines, as they only support physical machines; to overcome this challenge, Virtual Machine Introspection technologies were developed to perform forensic investigation of virtual machines. Until now, we were dealing with persistent virtual machines; these are created once and used many times. An extreme version of the virtual machine is the disposable virtual machine. The disposable virtual machine, however, is created and used one time, then vanishes from the system without leaving behind any significant traces or artifacts for the digital investigator. The purpose of this paper is to discuss the various disposable virtualization technologies available and the challenges they pose to the digital forensics investigation process, and to provide some future directions to overcome these challenges.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_99-Disposable_Virtual_Machines_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Reinforcement Learning based Handover Management for Millimeter Wave Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120298</link>
        <id>10.14569/IJACSA.2021.0120298</id>
        <doi>10.14569/IJACSA.2021.0120298</doi>
        <lastModDate>2021-03-01T14:58:54.3970000+00:00</lastModDate>
        
        <creator>Michael S. Mollel</creator>
        
        <creator>Shubi Kaijage</creator>
        
        <creator>Michael Kisangiri</creator>
        
        <subject>Handover management; 5G; machine learning; reinforcement learning; mm-wave communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The Millimeter Wave (mm-wave) band has a broad spectrum capable of transmitting multi-gigabit per-second data rates. However, the band suffers seriously from obstruction and high path loss, resulting in line-of-sight (LOS) and non-line-of-sight (NLOS) transmissions. All these lead to significant fluctuation in the signal received at the user end. Signal fluctuations present an unprecedented challenge in implementing the fifth generation (5G) use-cases of the mm-wave spectrum. They also increase the user’s chances of changing the serving Base Station (BS) in the process, commonly known as Handover (HO). HO events become frequent in an ultra-dense network scenario, and HO management becomes increasingly challenging as the number of BSs increases. HOs reduce network throughput, and hence the significance of mm-wave to the 5G wireless system is diminished without adequate HO control. In this study, we propose a model for HO control based on an offline reinforcement learning (RL) algorithm that autonomously and smartly optimizes HO decisions, taking into account prolonged user connectivity and throughput. We conclude by presenting the proposed model’s performance and comparing it with the state-of-the-art model, the rate-based HO scheme. The results reveal that the proposed model decreases excess HOs by 70%, thus achieving a higher throughput relative to the rate-based HO scheme.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_98-Deep_Reinforcement_Learning_based_Handover_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Streaming of Global Navigation Satellite System Data from the Global System of Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120297</link>
        <id>10.14569/IJACSA.2021.0120297</id>
        <doi>10.14569/IJACSA.2021.0120297</doi>
        <lastModDate>2021-03-01T14:58:54.3830000+00:00</lastModDate>
        
        <creator>Liliana Ibeth Barbosa-Santillan</creator>
        
        <creator>Juan Jaime Sanchez-Escobar</creator>
        
        <creator>Luis Francisco Barbosa-Santillan</creator>
        
        <creator>Amilcar Meneses-Viveros</creator>
        
        <creator>Zhan Gao</creator>
        
        <creator>Julio Cesar Roa-Gil</creator>
        
        <creator>Gabriel A. León Paredes</creator>
        
        <subject>GLONASS; streaming; extraction; satellites data; observation files; metadata</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The Big Data phenomenon has driven a revolution in data and has provided competitive advantages in business and science domains through data analysis. By Big Data, we mean the large volumes of information generated at high speeds from various information sources, including social networks, sensors of multiple devices, and satellites. One of the main problems in real applications is the extraction of accurate information from large volumes of unstructured data in the streaming process. Here, we extract information from data obtained from the GLONASS satellite navigation system. The knowledge acquired in the discovery of the geolocation of an object has been essential to satellite systems. However, many of these findings have suffered changes as error vocalizations and many data. The Global Navigation Satellite System (GNSS) combines several existing navigation and geospatial positioning systems, including the Global Positioning System, GLONASS, and Galileo. We focus on GLONASS because it has a constellation of 31 satellites. Our research’s difficulties are: (a) to handle the amount of data that GLONASS produces efficiently, and (b) to accelerate the data pipeline with parallelization and dynamic access to data, because only one part of these data is structured. This work’s main contribution is the Streaming of GNSS Data from the GLONASS Satellite Navigation System for GNSS data processing and dynamic management of metadata. We achieve a three-fold improvement in performance when the program runs with 8 and 10 threads.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_97-Streaming_of_Global_Navigation_Satellite_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Lung Nodule Classification Method using Convolutional Neural Network and Discrete Cosine Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120296</link>
        <id>10.14569/IJACSA.2021.0120296</id>
        <doi>10.14569/IJACSA.2021.0120296</doi>
        <lastModDate>2021-03-01T14:58:54.3500000+00:00</lastModDate>
        
        <creator>Abdelhamid EL HASSANI</creator>
        
        <creator>Brahim AIT SKOURT</creator>
        
        <creator>Aicha MAJDA</creator>
        
        <subject>Convolutional neural network; discrete cosine transform; pulmonary nodule classification; computer aided diagnosis systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In today’s medicine, Computer-Aided Diagnosis Systems (CAD) are widely used to improve the screening test accuracy for pulmonary nodules. Processing, classification, and detection techniques form the basis of CAD architecture. In this work, we focus on the classification step in a CAD system, where we use the Discrete Cosine Transform (DCT) along with a Convolutional Neural Network (CNN) to build an efficient classification method for pulmonary nodules. Combining both DCT and CNN, the proposed method provides high-level accuracy that outperforms the conventional CNN model.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_96-Efficient_Lung_Nodule_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Convolutional Neural Network for Chicken Diseases Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120295</link>
        <id>10.14569/IJACSA.2021.0120295</id>
        <doi>10.14569/IJACSA.2021.0120295</doi>
        <lastModDate>2021-03-01T14:58:54.3200000+00:00</lastModDate>
        
        <creator>Hope Mbelwa</creator>
        
        <creator>Jimmy Mbelwa</creator>
        
        <creator>Dina Machuve</creator>
        
        <subject>Image classification; Convolutional Neural Networks (CNNs); disease detection; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>For many years, farmers have relied on experts to diagnose and detect chicken diseases. As a result, farmers lose many domesticated birds due to late diagnoses or a lack of reliable experts. With the available tools from artificial intelligence and machine learning based on computer vision and image analysis, the most common diseases affecting chicken can be identified easily from images of chicken droppings. In this study, we propose a deep learning solution based on Convolutional Neural Networks (CNNs) to predict which of three classes the faeces of a chicken belong to. We also leverage pre-trained models and develop a solution for the same problem. Based on the comparison, we show that the model developed from XceptionNet outperforms the other models on all metrics used. The experimental results show the apparent gain of transfer learning (validation accuracy of 9􀀀% using pretraining over its contender 􀀁􀀂.􀀃􀀄% for the CNN fully trained on the same dataset). In general, the fully trained CNN comes second when compared with the other model. The results show that the pre-trained XceptionNet method has the best overall performance and highest prediction accuracy, and can be suitable for chicken disease detection applications.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_95-Deep_Convolutional_Neural_Network_for_Chicken_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Mobile Applications for a Lima University in Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120294</link>
        <id>10.14569/IJACSA.2021.0120294</id>
        <doi>10.14569/IJACSA.2021.0120294</doi>
        <lastModDate>2021-03-01T14:58:54.3030000+00:00</lastModDate>
        
        <creator>Carlos Diaz-Nunez</creator>
        
        <creator>Gianella Sanchez-Cochachin</creator>
        
        <creator>Yordin Ricra-Chauca</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Higher education; internet connection; mobile applications; pandemic; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The current global pandemic situation has forced universities to opt for distance education, relying on digital tools that are currently available, such as course management platforms like Moodle, videoconferencing applications like Google Meet or Zoom, and instant messaging apps like WhatsApp. This study details how these tools have made virtual education an effective alternative for providing education without a physical space where teachers and students gather. In addition, this document shows that in this form of teaching and learning it is not necessary to have a computer; a cell phone is enough to access this type of education in Peru, since most of the country’s homes have a smartphone. Both students and teachers affirm that, although a little more time is invested than usual, this teaching method is satisfactory. The result obtained is that the use of mobile applications plays a very important role in virtual classes, since the vast majority of students use a cell phone. In conclusion, regarding teaching and learning in higher university education with the use of mobile applications, both teachers and students said that it was of great help due to the interaction through communication with WhatsApp, Zoom, Google Meet, among others. In addition, being in constant communication with the students through the applications strengthened the teaching.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_94-Impact_of_Mobile_Applications_for_a_Lima_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Modelling Wheelchairs under the Realm of Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120293</link>
        <id>10.14569/IJACSA.2021.0120293</id>
        <doi>10.14569/IJACSA.2021.0120293</doi>
        <lastModDate>2021-03-01T14:58:54.2900000+00:00</lastModDate>
        
        <creator>Sameer Ahmad Bhat</creator>
        
        <creator>Muneer Ahmad Dar</creator>
        
        <creator>Hazem Elalfy</creator>
        
        <creator>Mohammed Abdul Matheen</creator>
        
        <creator>Saadiya Shah</creator>
        
        <subject>Wheelchairs IoT; acceptability engineering; human-centered engineering; innovative technologies; early adopters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Innovations in research labs are driven to global markets by applied, established standard engineering practices, using state-of-the-art research that most likely results in manufacturing highly effective and efficient engineered products. As a technology that enables assistance to physically challenged people, wheelchairs have attracted researchers across the globe while showing an increased demand for higher production. However, wheelchairs in relation to environments implementing Internet-of-Things (IoT) devices have mostly been overlooked in assessments of global market trends. Therefore, this paper proposes an Acceptability Engineering (AE) framework to enhance the growth and expansion of markets relying on environments wherein wheelchairs can coordinate with IoT to enable smart technologies. AE as a standard engineering approach would help in evaluating the characteristics of IoT-wheelchair environments, analysing their market trends, and highlighting the deficiencies between early and prevailing markets. This will significantly impact manufacturers who market wheelchairs specific to IoT environments; in addition, manufacturers would be able to identify the potential users of their manufactured products.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_93-A_Novel_Framework_for_Modelling_Wheelchairs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Speed Single-Stage Face Detector using Depthwise Convolution and Receptive Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120292</link>
        <id>10.14569/IJACSA.2021.0120292</id>
        <doi>10.14569/IJACSA.2021.0120292</doi>
        <lastModDate>2021-03-01T14:58:54.2570000+00:00</lastModDate>
        
        <creator>Rahul Yadav</creator>
        
        <creator>Priyanka</creator>
        
        <creator>Priyanka Kacker</creator>
        
        <subject>Artificial intelligence; computer-vision; Convolutional Neural Network (CNN); face detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>At present, face detectors use large Convolutional Neural Networks (CNNs), a widely used sub-area of artificial intelligence, to achieve high detection performance. These face detectors have a large number of parameters, which reduces their detection speed dreadfully on systems with low computational resources. Achieving good performance and high detection speed with finite computational power is a challenging problem. In this paper, we propose a single-stage, end-to-end trained face detector to address this challenge. The computational cost is reduced by using depthwise convolution and by swiftly reducing the size of the input image. The early layers of the model use CReLU (Concatenated Rectified Linear Unit) activations to preserve information and generate more representative features of the input. Receptive Field (RF) blocks used in the model improve the detection performance. The proposed model is 1.7 megabytes in size, and achieves 42 FPS (frames per second) on a CPU (i5-8330H) and 179 FPS on a GPU (GTX1060). The model is evaluated on various benchmark datasets like WIDER FACE, PASCAL Faces and AFW, and achieves good performance compared to other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_92-High_Speed_Single_Stage_Face_Detector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security, Privacy and Trust in IoMT Enabled Smart Healthcare System: A Systematic Review of Current and Future Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120291</link>
        <id>10.14569/IJACSA.2021.0120291</id>
        <doi>10.14569/IJACSA.2021.0120291</doi>
        <lastModDate>2021-03-01T14:58:54.2430000+00:00</lastModDate>
        
        <creator>Thavavel Vaiyapuri</creator>
        
        <creator>Adel Binbusayyis</creator>
        
        <creator>Vijayakumar Varadarajan</creator>
        
        <subject>Smart healthcare system; internet of medical things; authentication and authorization; security and privacy; blockchain; intrusion detection system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In the past decades, healthcare has witnessed a swift transformation from a traditional specialist/hospital-centric approach to a patient-centric approach, especially in the smart healthcare system (SHS). This rapid transformation is fueled by the advancements in numerous technologies. Amongst these technologies, the Internet of Medical Things (IoMT) plays an imperative role in the development of SHS with regard to the productivity, reliability, and accuracy of electronic devices. Recently, several researchers have shown interest in leveraging the benefits of IoMT for the development of SHS by interconnecting it with existing healthcare services and available medical resources. Though the integration of IoMT with medical resources enables revolutionizing the patient healthcare service from a reactive to a proactive care system, the security of IoMT is still in its infancy. As IoMT devices are mainly employed to capture extremely sensitive individual health data, the security and privacy of IoMT are of paramount importance and crucial in safeguarding the patient’s life, which could otherwise be adversely affected and, in the worst case, lost. Motivated by this crucial requirement, several researchers, in tandem with the advancement in IoMT technologies, have continuously made noteworthy progress in tackling the security and privacy issues in IoMT. Yet, many potential directions exist for future investigation. This necessitates a complete overview of existing security and privacy solutions in the field of IoMT. Therefore, this paper aims to canvass the literature on the most promising state-of-the-art solutions for securing IoMT in SHS, especially in the light of security, privacy protection, authentication and authorization, and the use of blockchain for secure data sharing. Finally, we highlight the review outcome, briefing not only the benefits and limitations of existing security and privacy solutions but also summarizing the opportunities and potential future directions that can drive the researchers of the next decade to improve and shape their research committed to the safe integration of IoMT in SHS.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_91-Security_Privacy_and_Trust_in_IoMT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Home Energy Management System based on the Internet of Things (IoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120290</link>
        <id>10.14569/IJACSA.2021.0120290</id>
        <doi>10.14569/IJACSA.2021.0120290</doi>
        <lastModDate>2021-03-01T14:58:54.2270000+00:00</lastModDate>
        
        <creator>Emmanuel Ampoma Affum</creator>
        
        <creator>Kwame Agyeman-Prempeh Agyekum</creator>
        
        <creator>Christian Adumatta Gyampomah</creator>
        
        <creator>Kwadwo Ntiamoah-Sarpong</creator>
        
        <creator>James Dzisi Gadze</creator>
        
        <subject>Internet of things; energy efficiency; home control; smart home</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The globally increasing demand for energy has brought attention to the need for energy efficiency. Markedly noticeable in developing areas, energy challenges can be attributed to losses in the distribution and transmission systems and to insufficient demand-side energy management. Demand-oriented systems have been widely proposed as feasible solutions. Smart Home Energy Management Systems have been proposed to include smart Internet of Things (IoT)-capable devices in an ecosystem programmed to achieve energy efficiency. However, these systems apply only to already-smart devices and are not appropriate for the many locales where a majority of appliances are not yet IoT-capable. In this paper, we establish the need to pay attention to non-smart appliances, and propose a solution for incorporating such devices into the energy-efficient IoT space. As a solution, we propose Homergy, a smart IoT-based Home Energy Management Solution that is useful for any market, advanced or developing. Homergy consists of the Homergy Box (an IoT device with Internet connectivity, an in-built microcontroller and opto-coupled relays), a NoSQL cloud-based database with streaming capabilities, and a secure cross-platform mobile app (the Homergy Mobile App). To validate and illustrate the effectiveness of Homergy, the system was deployed and tested in three different consumer scenarios: a low-consuming house, a single-user office and a high-consuming house. The results indicated that Homergy produced weekly energy savings of 0.5 kWh for the low-consuming house, 0.35 kWh for the single-user office, and a 13-kWh improvement over existing smart-devices-only systems in the high-consuming house.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_90-Smart_Home_Energy_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Classification of Preliminary Diabetic Retinopathy Stages using CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120289</link>
        <id>10.14569/IJACSA.2021.0120289</id>
        <doi>10.14569/IJACSA.2021.0120289</doi>
        <lastModDate>2021-03-01T14:58:54.2100000+00:00</lastModDate>
        
        <creator>Omar Khaled</creator>
        
        <creator>Mahmoud ElSahhar</creator>
        
        <creator>Mohamed Alaa El-Dine</creator>
        
        <creator>Youssef Talaat</creator>
        
        <creator>Yomna M. I. Hassan</creator>
        
        <creator>Alaa Hamdy</creator>
        
        <subject>Diabetes mellitus; diabetic retinopathy; DR; convolutional neural networks (CNNs); image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Diabetes Mellitus is one of the modern world’s most prominent and dominant maladies. This condition later leads to a menacing eye disease called Diabetic Retinopathy (DR). Diabetic Retinopathy is a retinal disease caused by high blood sugar levels in the retina, and can naturally progress to irreversible vision loss (blindness). The primary purpose of this imperative research is the early detection and classification of this hazardous condition, to try to prevent any threatening complications in the future. In the course of recent years, Convolutional Neural Networks (CNNs) turned out to be exceptionally famous and fruitful in solving and unraveling image processing and object detection problems for enormous datasets. Throughout this pivotal research, a model was proposed to detect the presence of DR and classify it into 5 distinct stages, factoring in an immense and substantial dataset. The model starts by applying preprocessing techniques such as normalization, to maintain the same dimensions for all the images before proceeding to the main processing stage. Furthermore, diverse sampling methods such as “Resize &amp; Crop”, “Rotation”, and “Flipping” have been tested out, so as to pinpoint the best augmentation technique. Finally, the normalized images were fed into a Convolutional Neural Network (CNN), to predict whether a person suffers from DR or not, and to classify the level/stage of the disease. The proposed method was utilized on 88,700 retinal fundus images, which are a part of the full EyePACS dataset, and finally achieved 81.12%, 89.16%, and 84.16% for sensitivity, specificity, and accuracy, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_89-Automatic_Classification_of_Preliminary_Diabetic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approaches based on Simulated Annealing, Tabu Search and Ant Colony Optimization for Solving the k-Minimum Spanning Tree Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120288</link>
        <id>10.14569/IJACSA.2021.0120288</id>
        <doi>10.14569/IJACSA.2021.0120288</doi>
        <lastModDate>2021-03-01T14:58:54.1800000+00:00</lastModDate>
        
        <creator>El Houcine Addou</creator>
        
        <creator>Abelhafid Serghini</creator>
        
        <creator>El Bekkaye Mermri</creator>
        
        <subject>k-Minimum spanning tree; metaheuristics; simulated annealing; ant colony optimization algorithms; tabu search; approximation algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In graph theory, the k-minimum spanning tree problem is considered to be one of the well-known NP-hard problems. This paper addresses this problem by proposing several hybrid approximate approaches based on the combination of simulated annealing, tabu search and ant colony optimization algorithms. The performances of the proposed methods are compared to other approaches from the literature using the same well-known library of benchmark instances.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_88-Hybrid_Approaches_based_on_Simulated_Annealing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybridized Deep Learning Method for Bengali Image Captioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120287</link>
        <id>10.14569/IJACSA.2021.0120287</id>
        <doi>10.14569/IJACSA.2021.0120287</doi>
        <lastModDate>2021-03-01T14:58:54.1630000+00:00</lastModDate>
        
        <creator>Mayeesha Humaira</creator>
        
        <creator>Shimul Paul</creator>
        
        <creator>Md Abidur Rahman Khan Jim</creator>
        
        <creator>Amit Saha Ami</creator>
        
        <creator>Faisal Muhammad Shah</creator>
        
        <subject>Bengali image captioning; hybrid architecture; InceptionResNet; Xception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>An omnipresent challenging research topic in computer vision is the generation of captions from an input image. Previously, numerous experiments have been conducted on image captioning in English, but the generation of captions from images in Bengali is still sparse and in need of more refining. Only a few papers till now have worked on image captioning in Bengali. Hence, we proffer a standard strategy for Bengali image caption generation on two different sizes of the Flickr8k dataset and the BanglaLekha dataset, which is the only publicly available Bengali dataset for image captioning. Afterward, the Bengali captions of our model were compared with Bengali captions generated by other researchers using different architectures. Additionally, we employed a hybrid approach based on InceptionResnetV2 or Xception as the Convolutional Neural Network and Bidirectional Long Short-Term Memory or Bidirectional Gated Recurrent Unit on the two Bengali datasets. Furthermore, different combinations of word embeddings were also adopted. Lastly, the performance was evaluated using Bilingual Evaluation Understudy, which proved that the proposed model indeed performed better for the Bengali dataset consisting of 4000 images and the BanglaLekha dataset.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_87-A_Hybridized_Deep_Learning_Method_for_Bengali_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fully Convolutional Networks for Local Earthquake Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120286</link>
        <id>10.14569/IJACSA.2021.0120286</id>
        <doi>10.14569/IJACSA.2021.0120286</doi>
        <lastModDate>2021-03-01T14:58:54.1470000+00:00</lastModDate>
        
        <creator>Youness Choubik</creator>
        
        <creator>Abdelhak Mahmoudi</creator>
        
        <creator>Mohammed Majid Himmi</creator>
        
        <subject>Earthquake detection; fully convolutional networks; data normalization; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Automatic earthquake detection is widely studied to replace manual detection; however, most of the existing methods are sensitive to seismic noise. Hence, the need for Machine and Deep Learning has become more and more significant. Regardless of successful applications of Fully Convolutional Networks (FCN) in many different fields, to the best of our knowledge, they have not yet been applied to earthquake detection. In this paper, we propose an automatic earthquake detection model based on an FCN classifier. We used a balanced subset of the STanford EArthquake Dataset (STEAD) to train and validate our classifier. Each sample from the subset is re-sampled from 100Hz to 50Hz and then normalized. We investigated different, widely used, feature normalization methods, which consist of normalizing all features into the same range, and we showed that feature normalization is not suitable for our data. On the contrary, sample normalization, which consists of normalizing each sample of our dataset individually, improved the accuracy of our classifier by ∼16% compared to using raw data. Our classifier exceeded 99% accuracy on training data, compared to ∼83% when using raw data. To test the efficiency of our classifier, we applied it to real continuous seismic data from the XB Network from Morocco and compared the results to our catalog containing 77 earthquakes. Our results show that we could detect 75 out of 77 earthquakes contained in the catalog.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_86-Fully_Convolutional_Networks_for_Local_Earthquake.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallelization Technique using Hybrid Programming Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120285</link>
        <id>10.14569/IJACSA.2021.0120285</id>
        <doi>10.14569/IJACSA.2021.0120285</doi>
        <lastModDate>2021-03-01T14:58:54.1170000+00:00</lastModDate>
        
        <creator>Abdullah Algarni</creator>
        
        <creator>Abdulraheem Alofi</creator>
        
        <creator>Fathy Eassa</creator>
        
        <subject>Serial code translation; parallel code; C++; hybrid programming model; auto-translation; S2PMOACC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>A multi-core processor is an integrated circuit that contains multiple core processing units. For more than two decades, single-core processors dominated the computing environment. The continuous development of hardware and processors led to the emergence of high-performance computers that are able to address complex scientific and engineering programs quickly. Besides, running software code sequentially increases the execution time of huge and complex programs. The serial code is converted to parallel code to improve program performance and reduce the execution time. Therefore, parallelization helps programmers solve computing problems efficiently. This study introduced a novel automatic translation tool that converts serial C++ code into hybrid parallel code. The study analyzed the performance of the proposed S2PMOACC tool using a linear algebraic dense matrix multiplication benchmark. Besides, we introduced Message Passing Interface (MPI) + Open Accelerator (OpenACC) as a hybrid programming model that requires no preliminary knowledge of parallel programming models or dependency analysis of the source code. The research outcomes enhance program performance and decrease the implementation time. Moreover, our proposed technique offers better performance than other tools.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_85-Parallelization_Technique_using_Hybrid_Programming_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Particle Physics Simulator for Scientific Education using Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120284</link>
        <id>10.14569/IJACSA.2021.0120284</id>
        <doi>10.14569/IJACSA.2021.0120284</doi>
        <lastModDate>2021-03-01T14:58:54.1000000+00:00</lastModDate>
        
        <creator>Hasnain Hyder</creator>
        
        <creator>Gulsher Baloch</creator>
        
        <creator>Khawaja Saad</creator>
        
        <creator>Nehal Shaikh</creator>
        
        <creator>Abdul Baseer Buriro</creator>
        
        <creator>Junaid Bhatti</creator>
        
        <subject>Particle physics; augmented reality; proton-proton collision; Higgs field; interactive classroom; AR in education; AR based lab experiments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In this era of the fourth industrial revolution, young learners need to be equipped with 21st century skills, such as critical thinking, creativity, communication, collaboration, innovation and problem solving. Augmented Reality (AR) based learning systems are an effective tool to embed these skills. This paper presents a detailed review of the latest research on AR-based learning systems. Furthermore, an AR-based learning system is proposed to demonstrate particle physics experiments, i.e., proton-proton collision and the Higgs field. The proposed learning system algorithms are developed using the particle system of the Unity 3D software. Then, the Microsoft Kinect sensor is interfaced with Unity 3D to create an immersive experience. Next, a qualitative analysis of the proposed system and the latest AR-based learning systems is presented. Finally, a quantitative analysis of the proposed system is conducted. Overall, the results suggest that 85% of the participants recommended the proposed learning system.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_84-Particle_Physics_Simulator_for_Scientific_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Complexity Survey on Density based Spatial Clustering of Applications of Noise Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120283</link>
        <id>10.14569/IJACSA.2021.0120283</id>
        <doi>10.14569/IJACSA.2021.0120283</doi>
        <lastModDate>2021-03-01T14:58:54.0700000+00:00</lastModDate>
        
        <creator>Boulchahoub Hassan</creator>
        
        <creator>Rachiq Zineb</creator>
        
        <creator>Labriji Amine</creator>
        
        <creator>Labriji Elhoussine</creator>
        
        <subject>Unsupervised learning; clustering; density clustering; DBSCAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Data clustering is an interesting field of unsupervised learning that has been extensively used and discussed in several research papers and scientific studies. It handles several issues related to data analysis by grouping similar entities into the same set. Up to now, many algorithms have been developed for clustering using several techniques, including centroid, density and dendrogram approaches. We count nowadays more than 100 diverse algorithms and many enhancements of each algorithm. Therefore, data scientists still struggle to find the best clustering method to use among this diversity of techniques. In this paper we present a survey on the DBSCAN algorithm and its enhancements with respect to time requirements. A significant comparison of DBSCAN versions is also illustrated in this paper to help data scientists make decisions about the best version of DBSCAN to use.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_83-A_Complexity_Survey_on_Density_based_Spatial_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regression Test Case Prioritization: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120282</link>
        <id>10.14569/IJACSA.2021.0120282</id>
        <doi>10.14569/IJACSA.2021.0120282</doi>
        <lastModDate>2021-03-01T14:58:54.0530000+00:00</lastModDate>
        
        <creator>Ali Samad</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Rafaqut Kazmi</creator>
        
        <creator>Rosziati Ibrahim</creator>
        
        <subject>Software testing; regression testing; test case prioritization; cost; code coverage; fault detection ability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The techniques associated with Test Case Prioritization (TCP) are used to reduce the cost of regression testing and to ensure that modifications to the target code do not impact the functionality of the updated software. The effectiveness of TCP is measured based on cost, code coverage, and fault detection ability. The regression testing techniques proposed so far focus on one or two effectiveness parameters. In this paper, we present a state-of-the-art review of the approaches used in regression testing in detail. The second objective is to combine these effectiveness adequacy measures into a single- or multi-objective TCP task. This systematic literature review was conducted to identify the state-of-the-art research in regression TCP from 2007 to 2020. The research identifies fifty-two (52) relevant studies that focus on these three selection parameters to justify their findings. The results reveal six families of regression TCP, in which meta-heuristic regression TCP techniques were reported in 38% and generic regression TCP techniques in 31% of the studies. The parameters used as prioritization criteria were cost, code coverage, and fault detection ability. Code coverage is reported in 38%, cost in 17%, and cost and code coverage together in 31% of the studies. Three dataset sources were identified: the Software-artifact Infrastructure Repository (SIR), the Apache Software Foundation, and GitHub. The measurements and metrics used to validate the effectiveness are inclusiveness, precision, recall, and retest-all.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_82-Regression_Test_Case_Prioritization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition based on Convolution Neural Network and Scale Invariant Feature Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120281</link>
        <id>10.14569/IJACSA.2021.0120281</id>
        <doi>10.14569/IJACSA.2021.0120281</doi>
        <lastModDate>2021-03-01T14:58:54.0400000+00:00</lastModDate>
        
        <creator>Jamilah ALAMRI</creator>
        
        <creator>Rafika HARRABI</creator>
        
        <creator>Slim BEN CHAABANE</creator>
        
        <subject>Face recognition; training; testing; CNN; SIFT; accuracy; classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Recently, Face Recognition (FR) has received wide attention from both the research community and cyber security companies. Low recognition accuracy is considered a main challenge when it comes to employing Artificial Intelligence (AI) for FR. In this work, the Scale Invariant Feature Transform (SIFT) and Convolutional Neural Network (CNN) feature extraction methods are utilized to build an AI based classifier. The CNN extracts features through both the convolutional and pooling layers, while SIFT extracts features depending on the scale space, directions, and histograms of points of interest. The features that are extracted by the CNN and SIFT methods are used as inputs for the KNN classifier. The experimental results, with 400 images of 40 persons, of which 240 images were randomly chosen as the training set and 160 images as the test set, demonstrate in terms of accuracy, sensitivity, and error rate that the CNN-based KNN classifier achieved better results than the SIFT-based KNN classifier (accuracy = 97%, sensitivity = 93%, error rate = 3%).</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_81-Face_Recognition_based_on_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart Diseases Prediction for Optimization based Feature Selection and Classification using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120280</link>
        <id>10.14569/IJACSA.2021.0120280</id>
        <doi>10.14569/IJACSA.2021.0120280</doi>
        <lastModDate>2021-03-01T14:58:54.0230000+00:00</lastModDate>
        
        <creator>N. Rajinikanth</creator>
        
        <creator>L. Pavithra</creator>
        
        <subject>Teaching learning based optimization; kernel density; support vector machine; k-nearest neighbour; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Globally, heart disease is considered to be the major cause of death. As per statistics, 17.9 million people lose their lives every year worldwide. Chronic Kidney Disease (CKD) and Breast Cancer take the next positions in the list. Disease classification is an important issue that needs more attention now. Making use of an optimized technique for such classification would be a better option. In this heart disease classification, initially, feature selection was done using Teaching Learning based Optimization (TLO) and Kernel Density. TLO is based on the process of classroom teaching, which involves too many iterations, leading to time complexity. Similarly, a certain level of misclassification has been observed when using Kernel Density (KD). In the proposed method, K-Nearest Neighbour (KNN) is used to address the issue of NaN values and Density based Modified Teaching Learning based Optimization (DMTLO) is used for feature selection. Finally, the classification process is done by considering Support Vector Machine (SVM) and Ensemble (Adaboosting) methods. SVM categorizes data by dissimilar class names by defining a group of support vectors, which are part of the set of training inputs that plan a hyperplane in the attribute space. The ensemble method is used to solve statistical, computational and representational problems. Experimental outcomes have proved that the proposed DMTLO outperforms the existing methodologies with the required quantity of attributes.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_80-Heart_Diseases_Prediction_for_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Strong and Secure Lightweight Cryptographic Hash Algorithm using Elliptic Curve Concept: SSLHA-160</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120279</link>
        <id>10.14569/IJACSA.2021.0120279</id>
        <doi>10.14569/IJACSA.2021.0120279</doi>
        <lastModDate>2021-03-01T14:58:53.9930000+00:00</lastModDate>
        
        <creator>Bhaskar Prakash Kosta</creator>
        
        <creator>Pasala Sanyasi Naidu</creator>
        
        <subject>Cryptography hash function; message digests; authentication; elliptic curve concepts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Cryptographic hash functions play a fundamental role in many cryptographic algorithms and protocols, particularly in authentication, non-repudiation and data integrity services. A cryptographic hash function takes a message of arbitrary size as input and produces a fixed, small-size hash code as output. In the proposed SSLHA-160 (a strong and secure lightweight cryptographic hash algorithm), each 512-bit block of a message is first reduced to 256 bits and then partitioned into eight equal blocks of 32 bits each; each 32-bit block is further divided into two 16-bit sub-blocks. These two sub-blocks act as two points of an elliptic curve, which are used for computing a new 16-bit point. The new point values are subsequently processed to produce the message digest. SSLHA-160 is simple to construct, easy to implement, and exhibits a strong avalanche effect when compared to SHA1, RIPEMD160 and MD5.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_79-Design_and_Implementation_of_a_Strong_and_Secure_Lightweight.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Hate Speech using Deep Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120278</link>
        <id>10.14569/IJACSA.2021.0120278</id>
        <doi>10.14569/IJACSA.2021.0120278</doi>
        <lastModDate>2021-03-01T14:58:53.9770000+00:00</lastModDate>
        
        <creator>Chayan Paul</creator>
        
        <creator>Pronami Bora</creator>
        
        <subject>Bi-directional Long Short Term Memory (Bi-LSTM); deep learning; hate speech; Long Short Term Memory (LSTM); text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Social networking sites saw a steep rise in the number of users in the last few years. As a result, interaction among users also increased considerably. Along with this, posts containing offensive comments based on caste, race, gender, religion, etc. also increased. This propagation of negative messages is collectively known as hate speech. Often these posts containing negative comments on social networking sites create law and order situations in society, leading to loss of human life and property. Detecting hate speech is one of the major challenges faced in recent times. In the recent past, there has been a considerable amount of research in the field of detection of hate speech on social networking sites. Researchers in the fields of Natural Language Processing and Machine Learning have done a considerable amount of research in this area. This paper uses a simple upsampling method to balance the data and implements deep learning models such as Long Short Term Memory (LSTM) and Bi-directional Long Short Term Memory (Bi-LSTM) for improved accuracy in detecting hate speech on social networking sites. LSTM was found to have better accuracy than Bi-LSTM for the data set considered. LSTM also had better values for precision and F1 score; Bi-LSTM had higher values only for recall.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_78-Detecting_Hate_Speech_using_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Sentiment Analysis based on AutoML and Traditional Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120277</link>
        <id>10.14569/IJACSA.2021.0120277</id>
        <doi>10.14569/IJACSA.2021.0120277</doi>
        <lastModDate>2021-03-01T14:58:53.9470000+00:00</lastModDate>
        
        <creator>K. T.Y. Mahima</creator>
        
        <creator>T.N.D.S.Ginige</creator>
        
        <creator>Kasun De Zoysa</creator>
        
        <subject>Automated machine learning; sentiment analysis; deep learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>AutoML, or Automated Machine Learning, is a set of tools to reduce or eliminate the skills a data scientist needs to build machine learning or deep learning models. These tools are able to automatically discover machine learning models and pipelines for a given dataset with very little user interaction. This concept was derived because developing a machine learning or deep learning model by applying traditional machine learning methods is time-consuming and sometimes challenging even for experts. Moreover, present AutoML tools are used in most areas, such as image processing and sentiment analysis. In this research, the authors evaluate the implementation of a sentiment analysis classification model based on AutoML and traditional approaches. For the evaluation, this research used both deep learning and machine learning approaches. To implement the sentiment analysis models, HyperOpt-Sklearn and TPOT were used as AutoML libraries and, as the traditional method, Scikit-learn libraries were used. Moreover, for implementing the deep learning models, the Keras and Auto-Keras libraries were used. In the implementation process, two binary classification and two multi-class classification models were built using the above-mentioned libraries. Thereafter, the findings of each AutoML and traditional approach were evaluated. In this research, the authors were able to identify that building a machine learning or a deep learning model manually is better than using an AutoML approach.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_77-Evaluation_of_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extensive Analysis of the Vision-based Deep Learning Techniques for Action Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120276</link>
        <id>10.14569/IJACSA.2021.0120276</id>
        <doi>10.14569/IJACSA.2021.0120276</doi>
        <lastModDate>2021-03-01T14:58:53.9300000+00:00</lastModDate>
        
        <creator>Manasa R</creator>
        
        <creator>Ritika Shukla</creator>
        
        <creator>Saranya KC</creator>
        
        <subject>Action recognition; deep learning; vision sensors; convolution neural networks (CNN); recurrent neural networks (RNN); action classification; temporal action detection; spatiotemporal action detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Action recognition involves the idea of localizing and classifying actions in a video over a sequence of frames. It can be thought of as an image classification task extended temporally. The information obtained over the multitude of frames is aggregated to comprehend the action classification output. Applications of action recognition systems range from assistance for healthcare systems to human-machine interaction. Action recognition has proven to be a challenging task as it poses many impediments including high computation cost, capturing extended context, designing complex architectures, and lack of benchmark datasets. Increasing the efficiency of algorithms in human action recognition can significantly improve the probability of implementing it in real-world scenarios. This paper has summarized the evolution of various action localization, classification, and detection algorithms applied to data from vision-based sensors. We have also reviewed the datasets that have been used for the action classification, localization, and detection process. We have further explored the areas of action classification, temporal and spatiotemporal action detection, which use convolution neural networks, recurrent neural networks, or a combination of both.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_76-An_Extensive_Analysis_of_the_Vision_based_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Generic Network Intrusion Attacks using Tree-based Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120275</link>
        <id>10.14569/IJACSA.2021.0120275</id>
        <doi>10.14569/IJACSA.2021.0120275</doi>
        <lastModDate>2021-03-01T14:58:53.8970000+00:00</lastModDate>
        
        <creator>Yazan Ahmad Alsariera</creator>
        
        <subject>Generic attack; decision trees; cybersecurity; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The development of Intrusion Detection Systems (IDS) has a solid impact in mitigating internal and external cyber threats, among other cybersecurity methods. The machine learning-based method for IDS has proven to be an effective approach to detecting either anomalies or multiple classes of intrusion. For the detection of various types of intrusion by a single IDS model, it has been discovered that the overall high accuracy of the IDS model does not translate to high accuracy for each attack type. Some intrusion attacks are seen to share similarities with other attacks, thereby evading detection, one of which is the generic attack. The notoriety of the generic attack lies in the ability of a single generic attack to compromise a whole family of block ciphers. Therefore, this study proposed a machine learning framework to specifically detect generic network intrusion by implementing two (2) decision tree algorithms. The decision tree methods were developed using two distinct variants, namely the J48 and Random Tree algorithms. A balanced generic network dataset was curated and used for model development. A 10-fold cross-validation technique was implemented for model development and performance evaluation, where all obtainable performance scores were extracted and presented. The performances of the decision tree methods for generic network intrusion attack detection were comparatively analysed and also evaluated against existing methods. The proposed methods of this study are robust, stable and empirically seen to have outperformed existing methods.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_75-Detecting_Generic_Network_Intrusion_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Behaviour-driven Requirements Engineering for Establishing and Managing Agile Product Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120274</link>
        <id>10.14569/IJACSA.2021.0120274</id>
        <doi>10.14569/IJACSA.2021.0120274</doi>
        <lastModDate>2021-03-01T14:58:53.8830000+00:00</lastModDate>
        
        <creator>Heba Elshandidy</creator>
        
        <creator>Sherif Mazen</creator>
        
        <creator>Ehab Hassanein</creator>
        
        <creator>Eman Nasr</creator>
        
        <subject>Agile product line engineering; behaviour-driven requirements engineering; observational study; requirements engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Requirements engineering in agile product line engineering addresses both the common and variable components that establish a software product line. Although it is conventional for requirements engineering to take place in a dedicated upfront domain analysis phase, agile-based environments denounce such a proactive behaviour. This paper provides an observational study examining a reactive, incremental requirements engineering approach called behaviour-driven requirements engineering. The proposed approach uses behaviour-driven development to establish and maintain agile product lines. The findings of the study are very promising and suggest the following: the approach is easy to understand and quick to learn; the approach supports the constantly changing nature of software development; and using behaviour-driven requirements engineering produces reliable and coherent requirements. In practice, the observational study showed that using the proposed approach saved time for the development team and customers, decreased costs, improved software quality, and shortened the time-to-market.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_74-Using_Behaviour_driven_Requirements_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile-based Decision Support System for Poultry Farmers: A Case of Tanzania</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120273</link>
        <id>10.14569/IJACSA.2021.0120273</id>
        <doi>10.14569/IJACSA.2021.0120273</doi>
        <lastModDate>2021-03-01T14:58:53.8670000+00:00</lastModDate>
        
        <creator>Martha Shapa</creator>
        
        <creator>Lena Trojer</creator>
        
        <creator>Dina Machuve</creator>
        
        <subject>Decision support system; chatbot; mobile application; poultry farming; data-driven approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Poultry farms in Tanzania are characterized by inadequate management practices, mainly caused by the lack of adequate systems to guide small-scale poultry farmers in decision making. It is well-established that information is a key factor in making effective decisions in numerous sectors, including poultry farming. Furthermore, various researchers have identified the use of mobile decision support tools as an effective way of aiding farmers in making informed decisions. In this paper, we present a mobile-based decision support system that will aid rural and small-scale poultry farmers in Tanzania in obtaining reliable information that is crucial for making proper decisions in their farming activities. In this context, the mobile-based decision support system was realized through a mobile application integrated with a chatbot assistant to provide solutions to various poultry farming-related problems and simplify the decision-making process. We used a data-driven approach to develop an informational chatbot assistant for Android smartphones that is capable of interacting with small-scale poultry farmers through natural conversations by utilizing the RASA framework.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_73-Mobile_based_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Public Procurement Fraud Detection Techniques Powered by Emerging Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120272</link>
        <id>10.14569/IJACSA.2021.0120272</id>
        <doi>10.14569/IJACSA.2021.0120272</doi>
        <lastModDate>2021-03-01T14:58:53.8370000+00:00</lastModDate>
        
        <creator>Nikola Modrušan</creator>
        
        <creator>Kornelije Rabuzin</creator>
        
        <creator>Leo Mršic</creator>
        
        <subject>Public procurement; fraud detection techniques; corruption detection; fraud detection review; fraud data source</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Numerous studies and various methods have been used to detect and prevent corruption in public procurement. With the development of IT technology and thus the digitization of the Public Procurement Process (PPP), the amount of available data is increasing. Studies have shown progress in this area and have revealed many challenges and open issues related to the various goals outlined in this paper. Different data mining and business intelligence techniques and methods are being used to develop models that identify suspicious public procurement processes, contracts, and economic operators, or classify observations as corrupt. In addition to classification models, methods such as association rules and graph databases are used to find relationships between economic operators and contracting authorities, as well as to find daughter companies that participate in PPP collusion. Therefore, this paper presents a comprehensive review of the emerging techniques and models used for the detection of suspicious or corrupt observations, along with their goals, open issues, challenges, methods and metrics used, tools, and relevant data sources. The findings show that models are mostly fitted on historical data and move in the direction of an early warning system. Moreover, the efficiency of fraud or anomaly detection depends on data set quality and the detection of the most important red flags. The study presents a summary of identified fraud detection model objectives, such as predicting fraud risk in contracts and contractors or finding split purchases, and of the data sources used, such as public procurement process or economic operator data.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_72-Review_of_Public_Procurement_Fraud_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acquisition of Positional Accuracy with Comparative Analysis of GPS and EGNOS in Urban Constituency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120271</link>
        <id>10.14569/IJACSA.2021.0120271</id>
        <doi>10.14569/IJACSA.2021.0120271</doi>
        <lastModDate>2021-03-01T14:58:53.8200000+00:00</lastModDate>
        
        <creator>Zeeshan Ali</creator>
        
        <creator>Riaz Ahmed Soomro</creator>
        
        <creator>Faisal Ahmed Dahri</creator>
        
        <creator>Muhammad Mujtaba Shaikh</creator>
        
        <subject>Differential GPS; augmentation; EGNOS; EDAS; on-board equipment; urban and positional accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Over the years, precise positioning has been the ultimate goal for Satellite Navigation Systems. The American Global Positioning System (GPS) delivers position and time information for various sectors such as vehicle tracking, oil exploration, atmospheric studies, astronomical telescope pointing, airport and harbor security tracking, etc. Corresponding technological competitors such as the Russian Global Navigation Satellite System (GLONASS), the European Union&#8217;s GALILEO, China&#8217;s BeiDou, and the Japanese Quasi Zenith Satellite System (QZSS) are a few other satellite navigation and augmentation systems. Nevertheless, stern security measures, geographical statistics, and the proliferation of diverse electronic gadgets in indoor/outdoor surroundings make it critical to acquire data about any vicinity with seamless accessibility, accuracy, and integrity over satellite links. In this paper, positional accuracy has been tested through an analysis of EGNOS, EDAS, and simple GPS receiver models in Rome, Italy. To support the results, various real-time experiments/tests have been performed with the GPS Receiver SIRF Demo software. The test was conducted on board a car by installing a laptop equipped with a GPS receiver plus supportive SBAS (EGNOS in particular) along three diverse bus routes of the locality, and the outcomes of a few tested samples inside the Rome city center are specified to check the availability of the desired satellite signals. Subsequently, a comparative analysis was executed between the simple GPS data and the GPS + EGNOS data collected during daytime traffic. The strength of the test signals reveals the accuracy of EGNOS in open terrain areas with less congestion. Furthermore, Asian and European advanced GPS systems are compared in terms of performance as well as the feasibility of authentic, accurate, and swift satellite navigation systems.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_71-Acquisition_of_Positional_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Self Supervised Defending Mechanism Against Adversarial Iris Attacks based on Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120270</link>
        <id>10.14569/IJACSA.2021.0120270</id>
        <doi>10.14569/IJACSA.2021.0120270</doi>
        <lastModDate>2021-03-01T14:58:53.8030000+00:00</lastModDate>
        
        <creator>Meenakshi K</creator>
        
        <creator>G. Maragatham</creator>
        
        <subject>Iris classification; deep neural networks; adversarial attack; defense method; wavelet processing; biometrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In biometric applications, deep neural networks have delivered significant improvements. However, when presented with carefully crafted inputs known as adversarial examples, their performance is severely degraded. These types of attacks are termed adversarial attacks, and any biometric security system is greatly affected by them. In the proposed work, an effective defensive mechanism has been developed against adversarial attacks introduced into iris images. The proposed defensive mechanism follows the concept of wavelet domain processing and investigates the mid- and high-frequency components of the wavelet domain. Based on this, the model produces various denoised copies of the input iris images. The proposed strategies are intended to denoise each sub-band of the wavelet domain and assess the sub-bands most likely to be affected by the adversary using the reconstruction error measured for each sub-band. We test the effectiveness of the proposed adversarial protection mechanism against various attack methods and analyze the results against other state-of-the-art defense approaches.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_70-A_Self_Supervised_Defending_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urban Addressing Practices and Geocoding Algorithm Validity in Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120269</link>
        <id>10.14569/IJACSA.2021.0120269</id>
        <doi>10.14569/IJACSA.2021.0120269</doi>
        <lastModDate>2021-03-01T14:58:53.7730000+00:00</lastModDate>
        
        <creator>Mohamed El Imame MALAAININE</creator>
        
        <creator>Hatim LECHGAR</creator>
        
        <subject>Addressing system; geocoding; Geographic Information System (GIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Addressing systems have a key role in understanding and managing economic connections and social conditions, especially in urban territories. Developing countries need to learn from previous experiences and adapt solutions and techniques to their local contexts. A review of the World Bank&#8217;s experience in addressing cities in Africa during the 1990s provides valuable lessons. It provides an understanding of the operational issues and the key success factors of such operations. It also helps to understand the conceptual components of these systems and the efforts required to build them in the field before the creation of their IT infrastructure. An addressing experience from a private sector initiative in Casablanca, Morocco is also reviewed, where efforts concern the creation of a comprehensive database of addresses. The methods used to collect the data in the field are presented, as well as the conceptual model for its integration. The validity of geocoding techniques, which represent the core computing tools of addressing systems, is discussed. In the Moroccan context, the official addressing rules follow Western models and standards, used by default in geocoding algorithms. The study of data collected in Casablanca, processed with GIS tools and algorithms, shows that the percentage of cases not respecting these rules is far from negligible. The analysis focused particularly on the two main criteria of address numbers, &#8220;parity&#8221; and &#8220;respect of intervals&#8221;, analyzed by street segment. Compliance with these conditions was observed in only about 53% of cases. It is then concluded that a geocoding system based on a linear model is not sufficiently valid in the Moroccan context.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_69-Urban_Addressing_Practices_and_Geocoding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Communication Issues Contributing to the Formation of Chaotic Situation: An AGSD View</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120268</link>
        <id>10.14569/IJACSA.2021.0120268</id>
        <doi>10.14569/IJACSA.2021.0120268</doi>
        <lastModDate>2021-03-01T14:58:53.7570000+00:00</lastModDate>
        
        <creator>Hina Noor</creator>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Zeenat Amjad</creator>
        
        <creator>Mahek Hanif</creator>
        
        <creator>Sehrish Tabussum</creator>
        
        <creator>Rahat Mansha</creator>
        
        <creator>Kinza Mubasher</creator>
        
        <subject>Chaotic situation; chaos; issues; communication; agile; distributed software development; global distributed software development; communication challenges; AGSD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Software can be constructed in many different contexts using various development approaches: Global Software Development (GSD), Agile Software Development (ASD), and, in a globally distributed way, Agile Global Software Development (AGSD), a combination of GSD and ASD. GSD is becoming increasingly important. Although communication is important for sharing information between team members, multi-site software development faces additional barriers such as differing time zones and cultures, IT infrastructure, etc., which delay communication activities that are already problematic. In the case of Agile Global Software Development (AGSD), communication is much more critical and plays a primary role in team interaction. The aim of this paper is to tackle the chaos problems associated with Agile Global Software Development (AGSD). A literature review was conducted, drawing knowledge from previous works and from web reviews worldwide. The chaos issues are then illustrated using a conceptual model, tabulated by author, and discussed. We identify the most discussed and least discussed issues in the literature. It is important to define the chaos issues in order to illustrate the genuine problems that exist in AGSD.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_68-Identifying_Communication_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Virtual Pet Simulator for Pain and Stress Distraction for Pediatric Patients using Intelligent Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120267</link>
        <id>10.14569/IJACSA.2021.0120267</id>
        <doi>10.14569/IJACSA.2021.0120267</doi>
        <lastModDate>2021-03-01T14:58:53.7270000+00:00</lastModDate>
        
        <creator>Angie Solis-Vargas</creator>
        
        <creator>Iam Contreras-Alc&#225;zar</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Virtual pet; pediatric patients; pain; stress; smart techniques; A-star algorithm; flocking algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Pediatric medical procedures are often stressful and painful for children, who may resist and make the work of doctors and nurses more complicated. This research aims to develop a virtual pet simulator to distract pediatric patients from pain and stress using intelligent techniques. The methodology used is SUM. The primary data for the development of the simulator were gravity, the player&#39;s position, speed, and mass, used to calculate the predictive physics of the toy with which the pet interacts. As part of the intelligent techniques, the A-star algorithm was used for the pet to follow the user, and the flocking algorithm was used to simulate the natural behavior of a group of animals, achieving a higher level of immersion. Trials were conducted with pediatric patients, where those who used the virtual pet simulator during the medical procedure felt less pain and stress than those who did not. Therefore, it is highly recommended to use alternatives such as the one developed here to reduce pain and stress in pediatric patients.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_67-Development_of_a_Virtual_Pet_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Meta Analysis of Attention Models on Legal Judgment Prediction System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120266</link>
        <id>10.14569/IJACSA.2021.0120266</id>
        <doi>10.14569/IJACSA.2021.0120266</doi>
        <lastModDate>2021-03-01T14:58:53.7100000+00:00</lastModDate>
        
        <creator>G. Sukanya</creator>
        
        <creator>J.Priyadarshini</creator>
        
        <subject>Legal judgment prediction; hierarchical attention neural network; text processing; transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Artificial Intelligence in legal research is transforming the legal field in manifold ways. The pendency of court cases is a long-lasting problem in the judiciary due to various reasons such as a lack of judges, a lack of technology in legal services, and legal loopholes. The judicial system has to be more competent and more reliable in providing justice on time. One of the major causes of pending cases is the lack of legal intelligence to assist litigants. The study in this paper reviews the challenges faced by judgment prediction systems due to lengthy case facts, using deep learning models. A legal judgment prediction system can help lawyers, judges, and civilians to predict the win or loss rate, punishment term, and applicable law articles for new cases. The paper also reviews the current encoding and decoding architecture with the attention mechanism of the transformer model that can be used for legal judgment prediction. Natural Language Processing using deep learning is a growing field, and there is a need for research to evaluate the current state of the art at the intersection of good text processing and feature representation with deep learning models. This paper aims to develop a systematic review of existing methods used in legal judgment prediction systems, with a detailed treatment of the Hierarchical Attention Neural network model. These methods can also be used in other applications such as legal document classification, sentiment analysis, news classification, text translation, medical reports, and so on.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_66-A_Meta_Analysis_of_Attention_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Infrastructure Study for Solving Connectivity Problems Through the Nile River</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120265</link>
        <id>10.14569/IJACSA.2021.0120265</id>
        <doi>10.14569/IJACSA.2021.0120265</doi>
        <lastModDate>2021-03-01T14:58:53.6930000+00:00</lastModDate>
        
        <creator>Noha Kamal</creator>
        
        <creator>Ibrahim Gomaa</creator>
        
        <subject>Communications; optical fiber cables; delft3d; underwater / river crossing cable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Fiber optic cables present various benefits over regular cables when used as a data transportation medium in today&#8217;s communication networks. There are significant challenges in the connectivity of inner cities that are located far inland, away from coastal areas. Most of the networks developed in Africa, especially in Egypt, are connected via submarine cables running along coastal areas. Very few connections are constructed to connect inner cities by crossing the Nile. The Nile River is characterized by a wide area, offering a natural path for laying underwater cables. In this study, the laying of these cables along the bed of the Nile River in Egypt, rather than across it, is analyzed and evaluated. There are many issues with laying fiber optic cables across the Nile River; one of these is the requirement of using more than one node for each fiber optic cable, and as the number of nodes increases, the installation cost and drilling effort increase with each node. The fiber optic cable path along the Nile River is simulated with a numerical model (Delft3D). Two different scenarios for laying cables were applied and analyzed to evaluate the effect of the predicted water surface and sediment profiles on the fiber optic cable path. Based on the results obtained, a fiber-optic network infrastructure is proposed to solve connectivity problems by laying fiber optic cables along the Nile River.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_65-Infrastructure_Study_for_Solving_Connectivity_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Technologies’ Utilization and Competency among College Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120264</link>
        <id>10.14569/IJACSA.2021.0120264</id>
        <doi>10.14569/IJACSA.2021.0120264</doi>
        <lastModDate>2021-03-01T14:58:53.6630000+00:00</lastModDate>
        
        <creator>Mokhtar Hood Bindhorob</creator>
        
        <creator>Khaled Salmen Aljaaidi</creator>
        
        <subject>Mobile technologies; utilization; competency; college students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The focus of this study is to (1) determine the level of mobile technologies&#8217; utilization and competency and (2) report separately the levels of basic operation, communication and collaboration, information seeking, digital citizenship, and creativity and innovation skills in mobile technologies among first-year undergraduate students in the College of Computer Science and Information Technology at Hadhramout University. The sample consists of 148 freshman students. Using descriptive statistical analysis, the results of this study reveal that undergraduates in the College of Computer Science and Information Technology have highly utilized mobile technologies. It was also found that they were highly capable of using these devices. Moreover, it was revealed that undergraduates&#8217; competency and utilization levels were very high for communication purposes due to certain social and educational reasons. Based on the results of the study, wider application of this approach in the College of Computer Science and Information Technology and other colleges at Hadhramout University is recommended, along with the activation of the university apps.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_64-Mobile_Technologies_Utilization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Intruder Information Sharing in Wireless Sensor Network for Attack Resilient Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120263</link>
        <id>10.14569/IJACSA.2021.0120263</id>
        <doi>10.14569/IJACSA.2021.0120263</doi>
        <lastModDate>2021-03-01T14:58:53.6470000+00:00</lastModDate>
        
        <creator>Venkateswara Rao M</creator>
        
        <creator>Srinivas Malladi</creator>
        
        <subject>Flooding; malicious zone; network overhead; overhearing rate; packet delivery ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Securing the routing process against attacks in wireless sensor networks (WSN) is a vital factor in ensuring the reliability of the network. In the existing system, a secure attack-resilient routing scheme for WSN using zone-based topology is proposed against message drops, message tampering, and flooding attacks. The secure attack-resilient routing provides protection against attacks by skipping routes toward less secure zones. However, the existing work did not consider the detection and isolation of malicious nodes in the zone-based wireless sensor network. To solve this issue, we propose enhanced attack-resilient routing by detecting malicious zones and isolating the malicious nodes. We propose a three-tier framework that adopts a sequential probability test to detect and isolate malicious nodes. Attacker information is shared in a secure manner in the network, so that route selection decisions can be made locally in addition to the attack-resilient route selection provided at the sink. The overhearing rate is calculated for all nodes in each zone to detect blackhole attackers. Simulation results show that the proposed three-tier framework provides more security, reduced network overhead, and improved packet delivery ratio in WSNs compared with existing works.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_63-Secure_Intruder_Information_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Optimization Scheme for Detection of Spam and Malware Propagation in Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120262</link>
        <id>10.14569/IJACSA.2021.0120262</id>
        <doi>10.14569/IJACSA.2021.0120262</doi>
        <lastModDate>2021-03-01T14:58:53.6330000+00:00</lastModDate>
        
        <creator>Savita Kumari Sheoran</creator>
        
        <creator>Partibha Yadav</creator>
        
        <subject>Social networking sites; Twitter; spam; malware; Cosine similarity; Jaccard similarity; genetic algorithm; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Social networking sites are a new generation of web services providing a global community of users in an online environment. Twitter is one such popular social network, with more than 152 million daily active users making half a billion tweets per day. Owing to its immense popularity, the accounts of legitimate Twitter users are always at risk from spammers and hackers. Spam and malware are the most significant threats reported on the Twitter platform. To preserve privacy and ensure data safety for the online Twitter community, it is necessary to develop a framework to safeguard their accounts from such malicious attackers. Machine Learning is a recently matured and widely used technique for preventing the propagation of such malicious activities in social media. Machine Learning based techniques have yielded promising results in filtering undesired content from user tweets, but their efficiency always remains restricted within the technological limits of the technique used. To devise a more efficient model to detect the propagation of spam and malware on Twitter, this research proposes a Machine Learning based optimization scheme built on hybrid similarity measures (Cosine and Jaccard) in conjunction with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN). The hybrid Cosine-Jaccard similarity was applied to the Twitter dataset to identify tweets containing spam and malware. In conjunction, the GA was used to enhance the training rate and reduce training error by automatically selecting the designed fitness function, while the ANN was applied to classify malicious tweets through a voting rule. Simulation experiments were conducted to compute the precision rate, recall, and F-measures. The results of the proposed Machine Learning based optimization scheme were compared with existing state-of-the-art techniques in this regime. The comparative study reveals that the model proposed in this research is faster and more precise than the existing models.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_62-Machine_Learning_based_Optimization_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pixel Value Difference based Face Recognition for Mitigation of Secret Message Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120261</link>
        <id>10.14569/IJACSA.2021.0120261</id>
        <doi>10.14569/IJACSA.2021.0120261</doi>
        <lastModDate>2021-03-01T14:58:53.6000000+00:00</lastModDate>
        
        <creator>Alaknanda S. Patil</creator>
        
        <creator>G. Sundari</creator>
        
        <subject>Audio; Face recognition; Information Security; LSB; Steganography; Video</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Data security is an important aspect of the modern digital world. Authentication is necessary to protect data from intruders and hackers. Most existing systems use textual passwords, which provide only single-layer security. Textual passwords are simple but may be prone to spyware as well as dictionary attacks. Hence there is a need for a highly secure, multilayer security method. Steganography, the art of hiding the existence of a message by embedding it into another medium, can be exploited in an authentication system. Steganography has emerged as a technology that, in turn, introduced steganalysis to detect hidden information. In this approach, the multimedia file is the input to be transferred over the medium. On the transmitter side, the audio and video files are extracted. The secret audio file is embedded into an audio file using the LSB method, while the face of the authenticated person is embedded into the video frame using the Pixel Value Differencing (PVD) method. At the receiver side, the face is extracted using the reverse PVD method and authenticated using a Convolutional Neural Network based face recognition method. After authentication, the secret audio is extracted using the reverse LSB method. The results show MSE, RMSE, PSNR, and SSIM values of 0.0000045303, 0.0021, 53.5877, and 0.9957, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_61-Pixel_Value_Difference_based_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimality Assessments of Classifiers on Single and Multi-labelled Obstetrics Outcome Classification Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120260</link>
        <id>10.14569/IJACSA.2021.0120260</id>
        <doi>10.14569/IJACSA.2021.0120260</doi>
        <lastModDate>2021-03-01T14:58:53.5870000+00:00</lastModDate>
        
        <creator>Udoinyang G. Inyang</creator>
        
        <creator>Samuel A. Robinson</creator>
        
        <creator>Funebi F. Ijebu</creator>
        
        <creator>Ifiok J. Udo</creator>
        
        <creator>Chuwkudi O. Nwokoro</creator>
        
        <subject>Pregnancy outcome; random forest; multi-label learning; comparative analytics; machine learning algorithms; single label learning; maternal outcome prediction; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>It is indisputable that clinicians cannot exactly state the outcome of pregnancies through conventional knowledge and methods, even as the surge in human knowledge continues. Hence, several computational techniques have been adapted for precise pregnancy outcome (PO) prediction. Obstetric datasets for PO determination exist as single label learning (SLL), multi-label learning (MLL) and multi-target (MTP) problems. There is, however, no single classifier recommended to optimally satisfy the needs of all the classification types. This work therefore identifies six widely used PO classifiers and investigates their performance in all three classification categories to find the best performing classifier. The obstetric dataset, exposed to input rank analysis via Principal Component Analysis, produced thirteen (13) significant features for the experiment. Accuracy, F1-measure and build/test time were used as evaluation metrics. Under SLL configuration, decision tree (DT) had an average accuracy and F1 score of 89.23% and 88.23% respectively, with a 1.0 average rank. Under MLL configuration, average accuracy (91.71%) and F1 score (94.28%) were highest for random forest (RF), which had a 1.0 average test time rank. Using MTP, DT had an average accuracy of 88.80% and an average F1 score of 71.13%, while the multi-layered perceptron (MLP) had the best time cost with an average rank value of 2.0. From the results, RF is the most optimal in terms of accuracy and average rank value, while DT is the most efficient in terms of time cost. The comparative analysis of global averages of the six base classifiers shows that RF is the most optimal algorithm, with an average accuracy of 87.3% across all three data setups in the study. MLP, on the other hand, had an unexpectedly high time cost, making it unsuitable for similar data classifications if time is the main criterion. It is recommended that the choice of classifier should be either RF or DT, depending on the application domain and whether or not time cost is a major consideration.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_60-Optimality_Assessments_of_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Improvement of Network Coding for Heterogeneous Data Items with Scheduling Algorithms in Wireless Broadcast</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120259</link>
        <id>10.14569/IJACSA.2021.0120259</id>
        <doi>10.14569/IJACSA.2021.0120259</doi>
        <lastModDate>2021-03-01T14:58:53.5700000+00:00</lastModDate>
        
        <creator>Romana Rahman Ema</creator>
        
        <creator>Md. Alam Hossain</creator>
        
        <creator>Nazmul Hossain</creator>
        
        <creator>Syed Md. Galib</creator>
        
        <creator>Md. Shafiuzzaman</creator>
        
        <subject>Network coding; scheduling algorithms; CR graph; wireless broadcast; simulation; LTSF; STOBS; performance metric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>This is the age of information: nowadays everyone communicates through digital systems, and people are always communicating with each other on the go. On-demand broadcasting is an efficient way to broadcast information according to user requests. In an on-demand broadcasting network, a single broadcast can satisfy multiple clients, which helps fulfill the enormous demand for information from clients. Network coding is the optimized flow of digital data in a network through the transmission of digital evidence about messages, where the “digital evidence” is composed of two or more messages. Network coding incorporated with data scheduling algorithms can further improve the performance of on-demand broadcasting networks. Using network coding, multiple data items can be broadcast in a single broadcast, satisfying the needs of more clients. In this work, it is shown that network coding cannot always maintain its superiority over non-network coding when the system handles data items of different sizes. The causes of this performance reduction in network coding have been analyzed, and a THETA based dynamic threshold value integration strategy has been proposed through which network coding can overcome its limitation in handling heterogeneous data items. In the proposed strategy, the THETA based dynamic threshold controls which data item is selected from the Client Relationship graph (CR-graph), so that large sized data items are not encoded with small sized data items. Simulation results show some interesting performance comparisons.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_59-Performance_Improvement_of_Network_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Wavelet Neural Network based Robust Text Recognition for Overlapping Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120258</link>
        <id>10.14569/IJACSA.2021.0120258</id>
        <doi>10.14569/IJACSA.2021.0120258</doi>
        <lastModDate>2021-03-01T14:58:53.5530000+00:00</lastModDate>
        
        <creator>Neha Tripathi</creator>
        
        <creator>Pushpinder Singh Patheja</creator>
        
        <subject>Text recognition; overlapped characters; deep wavelet neural network; feature extraction; segmentation; basis function; optical character recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>This paper presents a deep learning based intelligent text recognition system for touching and overlapped characters. The robustness and effectiveness of the proposed model are enhanced through a modified neural network configuration known as the Deep Wavelet Neural Network (DWNN). The capability of deep learning networks to learn efficiently from unlabeled datasets has attracted the attention of many researchers over the last decade. However, the performance of these networks is subject to the quality of the dataset and invariant image representation. Numerous optical character recognition techniques have been presented in recent years, but overlapped and touching characters have not been addressed much. The nonlinear and uncertain representation of image data in the case of overlapped text adds severe complexity to the process of feature extraction and the corresponding learning. The proposed DWNN architecture uses fast decaying wavelet functions as activation functions in place of the conventional sigmoid function to cope with the uncertainties and nonlinearity of the data representation in overlapped text images. It comprises a cascaded layered architecture of translated and dilated versions of wavelets as activation functions for training and feature extraction at multiple levels. Local transformation and deformation variation in the visual data is also handled efficiently through the modified DWNN architecture. Comprehensive experimental analysis has been performed over various test images to verify the effectiveness of the proposed text recognition system. The performance of the proposed method is assessed with the help of three metrics, namely estimation error, cost function and accuracy. The proposed approach is implemented in MATLAB.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_58-Deep_Wavelet_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of the Impact on Air Quality Due to the Operation of La Oroya Metallurgical Complex using the Grey Clustering Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120257</link>
        <id>10.14569/IJACSA.2021.0120257</id>
        <doi>10.14569/IJACSA.2021.0120257</doi>
        <lastModDate>2021-03-01T14:58:53.5400000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Luis Vasquez</creator>
        
        <creator>Luis Espinoza</creator>
        
        <creator>Manuel Mej&#237;a</creator>
        
        <creator>Erick Yauri</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <subject>Air quality assessment; Grey clustering method; particulate matter (PM10); sulfur dioxide (SO2)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Air pollution is one of the biggest problems worldwide due to the increased burning of fossil fuels by industries around the world. In the present work, the air quality study was carried out with the grey clustering method, since the data obtained present a certain level of uncertainty. In order to obtain a correct analysis of air quality, the comparison was made across two different years with the same monitoring stations. The air quality assessment was carried out at three monitoring stations located in three different districts of the province of La Oroya (La Oroya Antigua, the minor town of Huari, and Santa Rosa de Sacco), where sampling equipment was installed for the evaluation of particulate matter (PM10) and sulfur dioxide (SO2). At each study point a positive result was obtained, showing an improvement in air quality due to the reduction of mining activity in the study area. These results show the improvement over the years. Finally, this method can also be used by any organization in the nation for water or air quality studies.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_57-Comparative_Analysis_of_the_Impact_on_Air_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of the Localization Quality of the Arabic Versions of Learning Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120256</link>
        <id>10.14569/IJACSA.2021.0120256</id>
        <doi>10.14569/IJACSA.2021.0120256</doi>
        <lastModDate>2021-03-01T14:58:53.5230000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Ambiguity; Arabic; language inconsistencies; Learning Management Systems (LMSs); localization quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Recent years have witnessed the development of numerous Learning Management Systems (LMSs) to address the increasing needs of individuals and institutions all over the world. For accessibility and commercial purposes, many of these LMSs are released in different languages using what are known as localization systems. In this regard, the development of LMSs and of localization systems has proceeded in parallel. One main aspect of recent evaluation systems and studies of LMSs is localization quality. Despite the prolific literature on localization quality, very little has been done on Arabic localization. Thus, this study is concerned with the evaluation of the localization quality of the Arabic versions of LMSs. In order to explore users’ perceptions of the Arabic versions of the LMSs, an online questionnaire was conducted. Participants were asked about their familiarity with LMSs and whether they used the Arabic versions of these systems. They were also asked about their experiences with the Arabic localization of these systems and whether they faced any problems in dealing with the Arabic versions. The findings indicate that translation inconsistencies are the main problems with the Arabic versions of different LMSs, including Blackboard Learn, Microsoft Teams, and Zoom. These problems have negative impacts on the effectiveness and reliability of these systems in schools, universities, and training institutions. For the proper implementation of LMSs, localization and translation should go hand in hand. Localization developers and LMS designers need to consider the peculiar linguistic features of Arabic. The findings of the study have implications for translation programs in Arab universities and training institutions. Program designers should integrate translation technologies and localization systems into translation studies and consider the changes within the translation industry. The study was limited to translation quality in the Arabic versions of localized LMSs; the localization quality of other software programs, games, websites, and applications needs to be explored. Finally, it is recommended to develop a quality matrix that encompasses all the dimensions and peculiarities of Arabic localization.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_56-An_Evaluation_of_the_Localization_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Water Level Monitoring and Control System in Elevated Tanks to Prevent Water Leaks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120255</link>
        <id>10.14569/IJACSA.2021.0120255</id>
        <doi>10.14569/IJACSA.2021.0120255</doi>
        <lastModDate>2021-03-01T14:58:53.5070000+00:00</lastModDate>
        
        <creator>Christian Baldeon-Perez</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Alexi Delgado</creator>
        
        <subject>Arduino; ultrasonic sensor; stockouts; algorithm; monitoring and control; water leaks; elevated tanks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Water shortages are recurrent in Lima, Per&#250; and around the world, whether due to natural disasters, pipes deteriorated by age, or breakage by external agents such as heavy trucks and heavy machinery, which damage the underground pipes, causing flooding and shortages in the affected area. As a possible solution, many inhabitants have elevated tanks, but these lack automatic control, a way to view the water level in the tank, and a way to recognize possible water leaks, which are an economic detriment to the user. This research work aims to avoid shortages in the short term by controlling and monitoring water for use at home or in industry. For the implementation of this project, an Arduino Uno board, a 16x2 LCD screen, an ultrasonic sensor, and a mini pump fed with a DC voltage are used; manual control is available at any time, and otherwise the system works fully automatically. The results obtained were as expected: the LCD screen always displays the status, from the beginning of the process with the empty-tank message and its corresponding alarm on an LED, through the percentage of water as it gradually rises, to the end of the process with the full-tank message and its corresponding alarm on another LED. The implementation of this project is economical, which makes it very viable for the many households and companies that may choose this alternative.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_55-Water_Level_Monitoring_and_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verb Sense Disambiguation by Measuring Semantic Relatedness between Verb and Surrounding Terms of Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120254</link>
        <id>10.14569/IJACSA.2021.0120254</id>
        <doi>10.14569/IJACSA.2021.0120254</doi>
        <lastModDate>2021-03-01T14:58:53.4930000+00:00</lastModDate>
        
        <creator>Arpita Dutta</creator>
        
        <creator>Samir Kumar Borgohain</creator>
        
        <subject>Word sense disambiguation; ambiguous verb; context; semantic space; latent semantic analysis; polysemy; machine translation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Word sense disambiguation (WSD) is considered an AI-complete problem, which may be defined as the ability to resolve the intended meaning of ambiguous words occurring in a language. Language has a complex, highly ambiguous structure with deep-rooted relations between its different components, specifically words, sentences and paragraphs. Human beings can easily comprehend and resolve the intended meanings of ambiguous words, but this ambiguity makes it difficult to build a highly accurate machine translation or information retrieval system. A number of algorithms have been devised to resolve ambiguity, but their success rate is very limited. Context might play a decisive role in human judgment when deciphering the meaning of polysemic words. A significant number of psychological models have been proposed to emulate the way human beings understand the meaning of words, sentences or text depending on the context. The pertinent question researchers want to address is how meanings are represented in human mental memory and whether it is feasible to simulate this with a computational model. Latent Semantic Analysis (LSA) is a mathematical technique that is effective in representing meanings as vectors, closely approximating the human semantic space. By comparing vectors in the LSA-generated semantic space, the closest neighbours of a word vector can be derived, which indirectly provides a lot of information about a word. However, LSA does not provide a complete theory of meaning, which is why psychological process modules are combined with LSA to make the theory of meaning concrete. The predication algorithm with LSA proposed by Kintsch (2001) was sufficient to capture various word senses and was successful in homonym disambiguation. A word may have multiple senses, verbs especially; for example, the verb “run” has 42 senses in WordNet. Finding the correct sense of a verb is a daunting task, and resolving verb ambiguity using psycholinguistic models remains very limited. The proposed method exploits the high dimensional LSA vector space built from training samples, applying the predication algorithm to derive the most appropriate semantic neighbours for the target polysemous verb from the semantic space. Finally, the vector space of the test samples is checked against the training samples, i.e. the semantic neighbours, to classify the senses of polysemous words accurately.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_54-Verb_Sense_Disambiguation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Enrichment of Texture Information to Improve Optical Flow for Silhouette Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120253</link>
        <id>10.14569/IJACSA.2021.0120253</id>
        <doi>10.14569/IJACSA.2021.0120253</doi>
        <lastModDate>2021-03-01T14:58:53.4770000+00:00</lastModDate>
        
        <creator>Bedy Purnama</creator>
        
        <creator>Mera Kartika Delimayanti</creator>
        
        <creator>Kunti Robiatul Mahmudah</creator>
        
        <creator>Fatma Indriani</creator>
        
        <creator>Mamoru Kubo</creator>
        
        <creator>Kenji Satou</creator>
        
        <subject>Optical flow; silhouette image; artificial increase of texture information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Recent advances in computer vision with machine learning have enabled detection, tracking, and behavior analysis of moving objects in video data. Optical flow is fundamental information for such computations; therefore, an accurate algorithm to calculate it correctly has long been desired. This study focuses on the problem that silhouette data has edge information but no texture information. Since popular algorithms for optical flow calculation do not work well on this problem, a method is proposed that artificially enriches the texture information of silhouette images by drawing a shrunk edge on the inside of the silhouette in a different color. This additional texture information is expected to give popular optical flow calculation algorithms a clue for calculating better optical flows. Through experiments using 10 videos of animals from the DAVIS 2016 dataset and the TV-L1 algorithm for dense optical flow calculation, two error measures (MEPE and AAE) were evaluated, revealing that the proposed method improved the performance of optical flow calculation for various videos. In addition, the experimental results suggested relationships between the size of the shrunk edge and the type and speed of movement.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_53-The_Enrichment_of_Texture_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Public Sentiment Analysis on Twitter Data during COVID-19 Outbreak</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120252</link>
        <id>10.14569/IJACSA.2021.0120252</id>
        <doi>10.14569/IJACSA.2021.0120252</doi>
        <lastModDate>2021-03-01T14:58:53.4430000+00:00</lastModDate>
        
        <creator>Mohammad Abu Kausar</creator>
        
        <creator>Arockiasamy Soosaimanickam</creator>
        
        <creator>Mohammad Nasar</creator>
        
        <subject>COVID-19; corona virus; corona; pandemic; social media; sentiment analysis; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The COVID-19 pandemic, also known as the coronavirus pandemic, is an ongoing serious global problem. The outbreak first came to light in December 2019 in Wuhan, China, and was declared a pandemic by the World Health Organization on 11th March 2020. The COVID-19 virus has infected people and killed hundreds of thousands in the United States, Brazil, Russia, India and several other countries. As this pandemic continues to affect millions of lives, a number of countries have resorted to either partial or full lockdown. People took to social media platforms to share their emotions and opinions during this lockdown, as a way to relax and calm down. In this research work, sentiment analysis has been conducted on the tweets of people from the top ten infected countries, with the addition of one more country chosen from the Gulf region, i.e. the Sultanate of Oman. A dataset of more than 50,000 tweets with hashtags like #covid-19, #COVID19, #CORONAVIRUS, #CORONA, #StayHomeStaySafe, #Stay Home, #Covid_19, #CovidPandemic, #covid19, #Corona Virus, #Lockdown, #Qurantine, #qurantine, #Coronavirus Outbreak, #COVID etc., posted between June 21, 2020 and July 20, 2020, was considered in this research. A sentiment analysis was performed on the tweets posted in English. This research was conducted to understand how people from different infected countries cope with the situation. The tweets were collected and pre-processed, text mining algorithms were applied, and finally sentiment analysis was performed and the results presented. The purpose of this research paper is to understand the sentiments of people from COVID-19 infected countries.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_52-Public_Sentiment_Analysis_on_Twitter_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Control System for Smart City using IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120251</link>
        <id>10.14569/IJACSA.2021.0120251</id>
        <doi>10.14569/IJACSA.2021.0120251</doi>
        <lastModDate>2021-03-01T14:58:53.4300000+00:00</lastModDate>
        
        <creator>Parasa Avinash</creator>
        
        <creator>B Krishna Vamsi</creator>
        
        <creator>Thumu Srilakshmi</creator>
        
        <creator>P V V Kishore</creator>
        
        <subject>Internet Society; Attention-Grabbing; Networking; Enabling Technologies; Connectivity Models; IP Address; ADAFRUIT; Embedded Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>As technologies are introduced and improved day by day, there is tremendous change in applications like the “Smart City”. The Internet of Things (IoT) is the best approach to combine various sensors with embedded devices to create solutions for real-time problems, and this helps us connect with the Internet Society. The term IoT means controlling things through the Internet; in other words, objects “talk” to each other and build communication in order to work or react. There are many attention-grabbing modules in our society, so in this project we implement a model of a Smart City with six elements: a Smart Garbage System, Smart Irrigation System, Smart Building, Smart Parking System, Restaurant Menu Ordering System, and Manhole Detection and Monitoring System, all in an advanced way. This means advancements are made in all these elements, such as sending messages using a GSM module with sensor responses, sending information and controlling components through a cloud platform like ADAFRUIT, accessing webpages through IP addresses using the networking domain, enabling technologies and connectivity models, and making the whole system automatic. The main objective of this project is for all the systems or elements to work without any human involvement, to make life easier. The technologies required in this project are the Internet of Things (IoT), Embedded Systems and Networking.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_51-Smart_Control_System_for_Smart_City_using_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A DNA Cryptographic Solution for Secured Image and Text Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120250</link>
        <id>10.14569/IJACSA.2021.0120250</id>
        <doi>10.14569/IJACSA.2021.0120250</doi>
        <lastModDate>2021-03-01T14:58:53.4130000+00:00</lastModDate>
        
        <creator>Bahubali Akiwate</creator>
        
        <creator>Latha Parthiban</creator>
        
        <subject>DNA cryptography; image encryption; text encryption; DNA digital coding; DNA sequences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In recent years, DNA cryptography has been gaining popularity for providing better security for image and text data. This paper presents a DNA-based cryptographic solution for image and textual information. Image encryption involves scrambling at the pixel and bit levels based on hyperchaotic sequences. Both image and text encryption involve basic DNA encoding rules, key combination, and conversion of data into binary and other forms. This new DNA cryptographic approach adds more dynamicity and randomness, making the cipher and keys harder to break. The proposed image encryption technique presents better results for various parameters, such as image histogram, correlation coefficient, information entropy, Number of Pixels Change Rate (NPCR), Unified Average Changing Intensity (UACI), key space, and sensitivity, compared with existing approaches. Improved time and space complexity and random key generation for text encryption show that DNA cryptography can be a better security solution for new applications.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_50-A_DNA_Cryptographic_Solution_for_Secured_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis using Social and Topic Context for Suicide Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120249</link>
        <id>10.14569/IJACSA.2021.0120249</id>
        <doi>10.14569/IJACSA.2021.0120249</doi>
        <lastModDate>2021-03-01T14:58:53.3830000+00:00</lastModDate>
        
        <creator>E. Rajesh Kumar</creator>
        
        <creator>K.V.S.N. Rama Rao</creator>
        
        <subject>Social context; topic context; microblogging; Laplacian matrix; emotional and sentimental</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In many fields, analysing large volumes of user-generated microblogs is crucial and has drawn many researchers to the problem. However, processing such short and noisy microblogs is difficult and challenging. Most prior studies use only the text to determine sentiment polarity and presume that microblog posts are independent and identically distributed, ignoring the networked data available from microblogs. Consequently, their performance is unsatisfactory from the perspective of emotional and sentimental sociological approaches. This paper proposes a new methodology that incorporates social and topic context to analyze sentiment on microblogs by introducing the notion of structural similarity into the social context. Unlike previous research, which employs direct user relations, we suggest a new method to quantify structural similarity. In addition, topic context is introduced to model the semantic relations among microblogs. The Laplacian matrices of the graphs produced by these contexts combine social and topic context, and Laplacian regularization is applied to the microblog sentiment model. Experimental results on two datasets show that the suggested model reliably and substantially outperformed the baseline methods and is helpful for suicide prediction.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_49-Sentiment_Analysis_using_Social_and_Topic_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploratory Study of Some Machine Learning Techniques to Classify the Patient Treatment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120248</link>
        <id>10.14569/IJACSA.2021.0120248</id>
        <doi>10.14569/IJACSA.2021.0120248</doi>
        <lastModDate>2021-03-01T14:58:53.3670000+00:00</lastModDate>
        
        <creator>Mujiono Sadikin</creator>
        
        <creator>Ida Nurhaida</creator>
        
        <creator>Ria Puspita Sari</creator>
        
        <subject>Electronic health record; XGBoost; patient treatment; patient laboratory test data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Numerous studies have been carried out on computation and its applications to medical data, with proven benefits for improving the quality of public health. However, not all research results or practical applications can be applied under all conditions; they must accord with various contexts, such as community culture, geography, or citizen behavior. Unfortunately, the use of digital data in Indonesia is still very limited. The objective of this study is to assess various data mining techniques for utilizing laboratory test results collected from a private hospital in Indonesia to predict the next patient treatment. Various machine learning classification techniques were explored for this purpose. Based on the experiments, it was concluded that XGBoost with hyperparameter tuning produced the best accuracy, at 0.7579, compared to the other classifiers. A better level of accuracy could be obtained by enriching the dataset used, for example with the patient&#39;s medical record history.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_48-Exploratory_Study_of_Some_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disruptive Technologies for Labor Market Information System Implementation Enhancement in the UAE: A Conceptual Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120247</link>
        <id>10.14569/IJACSA.2021.0120247</id>
        <doi>10.14569/IJACSA.2021.0120247</doi>
        <lastModDate>2021-03-01T14:58:53.3500000+00:00</lastModDate>
        
        <creator>Ghada Goher</creator>
        
        <creator>Maslin Masrom</creator>
        
        <creator>Astuty Amrin</creator>
        
        <creator>Noorlizawati Abd Rahim</creator>
        
        <subject>Disruptive technologies; labor market information systems; skills and educational mismatch; future of work; system engineering; system design thinking; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In December 2019, the world learned about the first outbreak of the novel coronavirus (COVID-19), which broke out in Wuhan, China. This limited outbreak in a small province of China rapidly evolved into a global pandemic that has led to health and economic crises. Millions of individuals have lost their lives, and others have lost their jobs due to the recession of 2020. While the skills and educational mismatch has been a prevalent problem in the UAE labor market, it is logical to assume that the global pandemic has likely increased the extent of this problem. Therefore, there is an urgent need to adopt an agile, innovative solution to address the upcoming challenges in the labor markets due to the lack of skilled resources and the fear about the future of work amid the COVID-19 pandemic. Since industry and academia have identified the skills and educational mismatch as a complex and multivariate problem, the paper builds a conceptual case from a system engineering perspective to solve this problem efficiently. Based on the literature reviewed related to disruptive technologies and labor market management systems, the paper proposes a new implementation approach for an integrated labor market information system enabled by the most widely used disruptive technology components in the UAE (Machine Learning, AI, Blockchain, Internet of Things, Big Data Analytics, and Cloud Computing). The proposed approach is considered one of the immediate courses of action required to minimize the negative impact on the UAE economy due to the presence of the skills and educational mismatch phenomenon.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_47-Disruptive_Technologies_for_Labor_Market_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimum Spatial Resolution of Satellite-based Optical Sensors for Maximizing Classification Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120246</link>
        <id>10.14569/IJACSA.2021.0120246</id>
        <doi>10.14569/IJACSA.2021.0120246</doi>
        <lastModDate>2021-03-01T14:58:53.3370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Spectral information; spatial information; maximum likelihood decision rule; satellite image; image classification; mixed pixel (Mixels); optimum spatial resolution; classification performance; spatial and spectral features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The optimum spatial resolution of satellite-based optical sensors for maximizing classification performance is investigated, and a classification performance assessment method that considers the spatial resolution of satellite-based optical imagers is proposed. The optimum spatial resolution, which yields the highest classification accuracy, is determined from the spatial frequency components, the spectral features of the objects, and the classification method. First, based on the relationship between pixel variance and classification accuracy, this paper shows the classification accuracy for Landsat Multispectral Scanner (MSS) images with various Instantaneous Fields of View (IFOV); in this connection, the variance of pixel values for images with various IFOV is clarified. Second, assuming that the boundary line between adjacent categories is a circle, the relationship among IFOV, the ratio of mixed pixels (Mixels), and classification accuracy is clarified under the supposition that the number of Mixels equals the number of misclassified pixels. Finally, it is shown that the aforementioned relationships and the optimum spatial resolution are confirmed using airborne MSS data of the Sayama district in Japan.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_46-Optimum_Spatial_Resolution_of_Satellite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Sunspots using Fuzzy Logic: A Triangular Membership Function-based Fuzzy C-Means Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120245</link>
        <id>10.14569/IJACSA.2021.0120245</id>
        <doi>10.14569/IJACSA.2021.0120245</doi>
        <lastModDate>2021-03-01T14:58:53.3200000+00:00</lastModDate>
        
        <creator>Muhammad Hamza Azam</creator>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <creator>Said Jadid Abdul Kadir</creator>
        
        <creator>Saima Hassan</creator>
        
        <subject>Fuzzy logics; fuzzy c-means (FCM); Gaussian membership function; prediction; sunspots; triangular membership function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Fuzzy logic works on “degrees of truth”, in contrast to conventional crisp logic, where the only possible answers are 1 or 0. Fuzzy logic resembles human thinking, as it considers all possible outcomes between 1 and 0 and tries to reflect reality. The generation of membership functions is the key factor in fuzzy logic. This research considers an approach for generating Gaussian and triangular fuzzy membership functions using fuzzy c-means. The problem of sunspot prediction is considered and the prediction accuracy is calculated. The results show that the proposed technique of generating membership functions using fuzzy c-means can be adopted for predicting sunspots.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_45-Prediction_of_Sunspots_using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Color LED Driver based on Self-Configuration Current Mirror Circuit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120244</link>
        <id>10.14569/IJACSA.2021.0120244</id>
        <doi>10.14569/IJACSA.2021.0120244</doi>
        <lastModDate>2021-03-01T14:58:53.3030000+00:00</lastModDate>
        
        <creator>Shaheer Shaida Durrani</creator>
        
        <creator>Abu Zaharin Bin Ahmad</creator>
        
        <creator>Bakri Bin Hassan</creator>
        
        <creator>Atif Sardar Khan</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Naveed Jan</creator>
        
        <creator>Rehan Ali Khan</creator>
        
        <creator>Rohi Tariq</creator>
        
        <creator>Ahmed Ali Shah</creator>
        
        <creator>Tariq Bashir</creator>
        
        <creator>Zia Ullah Khan</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <subject>Color LED driver; current mirror circuit; super diode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>A string-channel color LED driver with precise current balancing is proposed. Multiple LED strings should be driven from a proper current source, because the brightness of an LED depends on the amount of current flowing through it. In LED production, the variation in forward voltage from one LED to another is found to be significantly high, and this variation causes different levels of brightness. Limiting the LED current with a resistor ties the load current directly to the instantaneous forward voltage. Current sources, by contrast, are immune to this problem, since they regulate the current flowing through the LEDs rather than the voltage. Hence, a constant current source is the essential requirement for driving LEDs. The task is more complex for color LEDs, depending on the number of control nodes and the dimming configuration. To provide an accurate load current for the different sets of color LED strings, an efficient LED driver is required, in which the current sharing is complementary across the LED strings. Therefore, this paper proposes a color LED driver with self-configuring, enhanced current mirrors in multiple LED strings. The load currents are efficiently balanced among both identical and unequal loads, and the comparable efficiency and losses of the color LED strings are shown thoroughly.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_44-An_Efficient_Color_LED_Driver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Centralized and Decentralized Access Control Models in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120243</link>
        <id>10.14569/IJACSA.2021.0120243</id>
        <doi>10.14569/IJACSA.2021.0120243</doi>
        <lastModDate>2021-03-01T14:58:53.2900000+00:00</lastModDate>
        
        <creator>Suzan Almutairi</creator>
        
        <creator>Nusaybah Alghanmi</creator>
        
        <creator>Muhammad Mostafa Monowar</creator>
        
        <subject>Cloud computing; access control; cloud security; centralized; decentralized</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In recent years, cloud computing has become a popular option for a number of different business sectors. It is a paradigm employed to deliver a range of computing services, such as sharing resources via the Internet. Security issues in cloud computing necessitate a mechanism to keep the system safe and reliable. An access control mechanism is one that permits or denies access to cloud services. This paper presents a survey of access control models in cloud computing. Several existing surveys of access control mechanisms in cloud computing focused mainly on traditional access control models and encryption-based access control models, while others focused on applying blockchain technology to cloud access control. However, access models possess different characteristics, such as whether the system relies on a centralized, trusted cloud system administrator to manage the access policy or adopts a decentralized approach. This paper reviews and analyses existing access control mechanisms in cloud computing based on centralized and decentralized access control models, provides detailed comparisons of each model’s advantages and limitations, and discusses the challenges of, and future research directions for, access control.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_43-Survey_of_Centralized_and_Decentralized_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Development of Computational Thinking and Mathematical Logic through Scratch</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120242</link>
        <id>10.14569/IJACSA.2021.0120242</id>
        <doi>10.14569/IJACSA.2021.0120242</doi>
        <lastModDate>2021-03-01T14:58:53.2730000+00:00</lastModDate>
        
        <creator>Benjam&#237;n Maraza-Quispe</creator>
        
        <creator>Ashtin Maurice Sotelo-Jump</creator>
        
        <creator>Olga Melina Alejandro-Oviedo</creator>
        
        <creator>Lita Marianela Quispe-Flores</creator>
        
        <creator>Lenin Henry Cari-Mogrovejo</creator>
        
        <creator>Walter Cornelio Fernandez-Gambarini</creator>
        
        <creator>Luis Ernesto Cuadros-Paz</creator>
        
        <subject>Scratch; computational thinking; logic reasoning; teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Currently, the need to provide quality education to future generations has led to the development of new teaching methodologies, and the tools provided by information technologies have been positioned as the future of learning. In this sense, learning to program is no longer considered a selective skill in the field of computing; today it is a necessity for any student who wants to be competent in this globalized and dynamic world. Within this context, the present research aims to analyze to what extent the use of the Scratch programming language allows the development of computational thinking skills and mathematical logic. The methodology consisted of the application of programming fundamentals through Scratch 3.0 to an experimental group of 25 students who were randomly selected from a population of 100 students. Data were collected through a logical reasoning test standardized by Acevedo and Oliva and a test of levels of computational thinking standardized by Gonz&#225;lez. According to the results, there is a significant difference in the students&#39; performance on both tests, with the most considerable improvement in the criteria Loops, Control of Variables (CV), Probability (PB), and Combinatorial Operations (CB). It is therefore concluded that teaching basic Computer Science concepts such as computational thinking and mathematical logic is important, since it contributes to the internalization of concepts when developing algorithms in problem-solving.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_42-Towards_the_Development_of_Computational_Thinking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing the Accuracy and Developed Models for Predicting the Confrontation Naming of the Elderly in South Korea using Weighted Random Forest, Random Forest, and Support Vector Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120241</link>
        <id>10.14569/IJACSA.2021.0120241</id>
        <doi>10.14569/IJACSA.2021.0120241</doi>
        <lastModDate>2021-03-01T14:58:53.2570000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Confrontation naming; generative naming; support vector regression; random forest; weighted random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Since dementia patients clearly show a decline in linguistic ability from the early stages, evaluating cognitive and language abilities is very important when diagnosing dementia. Among these abilities, naming is an essential item (sub-test) that is always included in dementia-screening tests. This study developed confrontation naming prediction models using support vector regression (SVR), random forest, and weighted random forest for the elderly in the community, and identified the algorithm showing the best performance by comparing the accuracy of the models. This study used 485 elderly subjects (248 men and 237 women) living in Seoul and Incheon who were 74 years old or older. Prediction models were developed using the SVR, random forest, and weighted random forest algorithms. When comparing the prediction performance of the models, this study revealed that the root mean squared error of weighted random forest was the lowest. Future studies are needed to compare the prediction performance of weighted random forest with other machine learning models by calculating various performance indices, such as sensitivity, specificity, and harmonic mean, using data from various fields to prove the superior prediction performance of weighted random forest.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_41-Comparing_the_Accuracy_and_Developed_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Power Allocation in Downlink Non-Orthogonal Multiple Access (NOMA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120240</link>
        <id>10.14569/IJACSA.2021.0120240</id>
        <doi>10.14569/IJACSA.2021.0120240</doi>
        <lastModDate>2021-03-01T14:58:53.2430000+00:00</lastModDate>
        
        <creator>Wajd Fahad Alghasmari</creator>
        
        <creator>Laila Nassef</creator>
        
        <subject>Non-orthogonal multiple access; power allocation; genetic algorithm; user pairing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The fifth generation of wireless cellular networks promises to enable better services anytime and anywhere. Non-orthogonal multiple access (NOMA) stands out as a suitable multiple-access scheme due to its ability to allow multiple users to share the same radio resource simultaneously via different domains (power, code, etc.). In the power domain, users are multiplexed on the same radio resource at different power levels. This paper studies power allocation in downlink NOMA: an optimization problem is formulated that aims to maximize the system&#39;s sum rate. To solve the problem, a genetic-algorithm-based power allocation (GAPA) scheme is proposed, which employs the heuristic search of a genetic algorithm (GA) to find suitable solutions. The performance of the proposed power allocation algorithm is compared with full search power allocation (FSPA), which gives optimal performance. Results show that GAPA reaches performance close to FSPA with lower complexity. In addition, GAPA is simulated with various user pairing algorithms: channel-state-sorting-based user pairing with GAPA achieves the best performance compared to the random user pairing and exhaustive user pairing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_40-Optimal_Power_Allocation_in_Downlink.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emerging Line of Research Approach in Precision Agriculture: An Insight Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120239</link>
        <id>10.14569/IJACSA.2021.0120239</id>
        <doi>10.14569/IJACSA.2021.0120239</doi>
        <lastModDate>2021-03-01T14:58:53.2270000+00:00</lastModDate>
        
        <creator>Vanishree K</creator>
        
        <creator>Nagaraja G S</creator>
        
        <subject>Precision agriculture; smart farming; wireless sensor network; internet-of-things; remote sensing; variable rate technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The present state of agriculture and its demands are very different from what they were two decades ago; hence, Precision Agriculture (PA) is increasingly in demand to address these challenges. Under consistent pressure to develop multiple products on the same agricultural land with restricted resources, farmers find the adoption of PA the best solution. PA accelerates yield and can help meet the growing demand for food. With the increasing adoption of PA-based technologies in farming, there are good opportunities to explore efficient farming practices and better decision-making facilitated by real-time data availability. Various novel technologies have evolved to boost agricultural performance, i.e., variable rate technology, geomapping, remote sensing, automated steering systems, and satellite positioning systems. Apart from this, it is also observed that the Internet of Things (IoT) and Wireless Sensor Networks (WSN) have been slowly penetrating this area to accelerate PA&#39;s technological advancement. The adoption of sensing technology is noticed to be a common factor in almost all techniques used in PA; however, there is no clear picture of the most dominant approach in this regard. Therefore, this paper discusses existing approaches to both standard conventional PA and sensing-based PA using WSN. The study contributes some notable learning outcomes, showing that WSN and IoT have extensive potential to boost PA.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_39-Emerging_Line_of_Research_Approach_in_Precision_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Dental Imaging for Building Classifier to Benefit the Dental Implant Practitioners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120238</link>
        <id>10.14569/IJACSA.2021.0120238</id>
        <doi>10.14569/IJACSA.2021.0120238</doi>
        <lastModDate>2021-03-01T14:58:53.2100000+00:00</lastModDate>
        
        <creator>Shashikala J</creator>
        
        <creator>Thangadurai N</creator>
        
        <subject>Dental implant; complication; implant failure; dental imaging; pre-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Endo-osseous implants are considered an ideal dental fixture and are becoming the preferred choice of edentulous patients to rehabilitate toothlessness because of their aesthetic and functional outcomes. Despite successful surgery and implant placement, complications occur, which may be related to several factors, such as operative assessment, treatment planning, patient-related factors, surgical procedures, and the surgeon&#39;s experience. A comprehensive radiological assessment plays a vital role in clinical analysis for better treatment planning, avoiding complications, and increasing the implant&#39;s success rate. However, given the variety of dental imaging, choosing the right imaging technology has become difficult for clinical experts. The investigative survey conducted in this paper aims to determine the correlation between different imaging modalities and their essential role in implant therapy. This review extensively discusses which types of computational operations have been applied to image modalities in the existing literature to address various kinds of noise and other relevant issues. The study findings reveal significant issues with various dental imaging modalities and provide an understanding that bridges the existing research gaps towards building cost-effective classification and predictive models for accurate dental treatment planning and higher implant success rates.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_38-A_Survey_on_Dental_Imaging_for_Building_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Ontological Learner’s Modeling During and After the COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120237</link>
        <id>10.14569/IJACSA.2021.0120237</id>
        <doi>10.14569/IJACSA.2021.0120237</doi>
        <lastModDate>2021-03-01T14:58:53.1930000+00:00</lastModDate>
        
        <creator>Amina OUATIQ</creator>
        
        <creator>Kamal El Guemmat</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <creator>Mohammed Qbadou</creator>
        
        <subject>Learner model; personalization; adaptive learning; ontology; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The health crisis and the unprecedented upheaval it caused in education systems are far from over; consequently, adapting the learning experience is more necessary than ever, and this adaptation should take into account the criteria of this specific crisis and its impact on the physical and mental health of learners. In this article, we present an ontology-based learner model that brings together pedagogical and psychological characteristics, as well as the health risks the epidemic poses to learners, following a knowledge-engineering methodology. We developed an ontology that combines the IMS-LIP standard features with learner characteristics. It is ready for use in different systems and situations during and after the COVID-19 pandemic, and it gives a global representation of the learner in order to provide the best-adapted courses.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_37-Towards_an_Ontological_Learners_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PHY-DTR: An Efficient PHY based Digital Transceiver for Body Coupled Communication using IEEE 802.3 on FPGA Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120236</link>
        <id>10.14569/IJACSA.2021.0120236</id>
        <doi>10.14569/IJACSA.2021.0120236</doi>
        <lastModDate>2021-03-01T14:58:53.1630000+00:00</lastModDate>
        
        <creator>Sujaya B. L</creator>
        
        <creator>S.B. Bhanu Prashanth</creator>
        
        <subject>Body coupled communication; physical layer; digital; FPGA; radiofrequency; human body</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Body coupled communication (BCC) is an efficient networking approach to body area networks (BAN) based on human-centric communication. BCC limits interference to humans in very close proximity. In this work, an efficient physical layer (PHY) based digital transceiver is designed for BCC. The digital transceiver module mainly contains a digital transmitter (TX) with a Manchester encoder, a clock synchronization unit, and a digital receiver (RX) with a Manchester decoder. The TX and RX modules are designed using finite state machines as per the IEEE 802.3 standard. The complete design is also verified for BAN applications by connecting two application layer transceivers and two physical layer based digital transceivers. The architecture is simulated in the ModelSim simulator. The complete module is synthesized using different FPGA families, and the hardware design constraints are compared. The digital transceiver operates at 231.28 MHz, consumes 0.113 W of power, and provides a 7.7 Mbps data rate and 4.67 Kbps/Slice efficiency on an Artix-7 FPGA. The proposed transceiver is also compared with existing digital transceivers and shows improvements in hardware constraints.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_36-PHY_DTR_an_Efficient_PHY_based_Digital_Transceiver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Blockchain based Authentication Solution for the Remote Surgery in Tactile Internet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120235</link>
        <id>10.14569/IJACSA.2021.0120235</id>
        <doi>10.14569/IJACSA.2021.0120235</doi>
        <lastModDate>2021-03-01T14:58:53.1470000+00:00</lastModDate>
        
        <creator>Tarik HIDAR</creator>
        
        <creator>Anas ABOU EL KALAM</creator>
        
        <creator>Siham BENHADOU</creator>
        
        <creator>Oussama MOUNNAN</creator>
        
        <subject>Tactile internet; blockchain; human to machine interaction; authentication; remote surgery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Since the Tactile Internet has been considered a new era of the Internet, delivering real-time interactive systems as well as ultra-reliable and ultra-responsive network connectivity, tremendous efforts have been made to ensure authentication between communicating parties to secure remote surgery. Because a human-to-machine interaction like remote surgery is critical, and the communication between the surgeon and the tactile actor, i.e. the robot arms, should be fully protected during the surgical procedure, a fully secure mutual user authentication scheme should be used to establish a secure session among the communicating parties. Existing methods usually require a server to ensure authentication among the communicating parties, which makes the system vulnerable to a single point of failure and does not fit the design of such a critical distributed environment, i.e. the tactile internet. To address these issues, we propose a new decentralized blockchain-based authentication solution for the tactile internet. In our proposed solution, there is no need for a trusted party; moreover, its decentralized nature makes authentication immutable, efficient, and secure, while meeting the low-latency requirement. The implementation of our proposed solution is deployed on Ropsten, the official Ethereum test network. The experimental results show that our solution is efficient, highly secure, and flexible.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_35-By_using_Blockchain_based_on_Authentication_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy based Search in Motion Estimation for Real Time Video Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120234</link>
        <id>10.14569/IJACSA.2021.0120234</id>
        <doi>10.14569/IJACSA.2021.0120234</doi>
        <lastModDate>2021-03-01T14:58:53.1330000+00:00</lastModDate>
        
        <creator>Upendra Kumar Srivastava</creator>
        
        <creator>Rakesh Kumar Yadav</creator>
        
        <subject>Fuzzy logic; motion estimation; compression; current frame; reference frame; predicted frame</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Video compression ratio, quality, and efficiency are determined by the motion estimation algorithm. Motion estimation is used to perform inter-frame prediction in video sequences. The individual frames are divided into blocks, and the motion estimation is computed by a video codec such as H.264. The codec computes the displacement of each block between the previous frame (reference frame) and the current frame; for each block in the current frame, the best motion vector is determined in the reference frame. In this research paper, a novel technique is presented for motion vector calculation using a fuzzy Gaussian membership function. The motion estimation block uses the fuzzy membership function to estimate the connectedness of blocks of the current frame to those of the reference frame. Fuzzy decision matching is performed based on the matching criterion, and the best matching block is selected. The motion vectors are thus calculated with respect to the reference frame. The fuzzification process produces optimally matched blocks, which are then utilized to calculate the motion vectors of the predicted frame. Using fuzzy based search, the search area is automatically updated, and adaptive search steps provide an optimized search result. Because no file is exchanged during real-time streaming and the user cannot download the file, frame management is the only way to ensure smooth transmission; fuzzy based search for motion estimation provides better compression for the predicted frames.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_34-Fuzzy_based_Search_in_Motion_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Factors Affecting Employee Satisfaction of IT Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120233</link>
        <id>10.14569/IJACSA.2021.0120233</id>
        <doi>10.14569/IJACSA.2021.0120233</doi>
        <lastModDate>2021-03-01T14:58:53.1000000+00:00</lastModDate>
        
        <creator>Eiman Tamah Al-Shammari</creator>
        
        <subject>Job satisfaction; IT sector; productivity; intangible benefits; communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Job satisfaction, or employee satisfaction, has various definitions, but we can generalize it as how gratified an individual is with his or her job. Happy employees help to strengthen the company by lowering turnover and increasing loyalty. Job satisfaction also promotes a healthy working environment that helps to attract talent and increase productivity. However, little research has been done that focuses specifically on the IT sector. The goal of this research is to measure the level of satisfaction among Kuwaiti IT workers and discover tangible and intangible factors affecting their job satisfaction. To highlight factors contributing to positive satisfaction in IT jobs in Kuwait, we propose a six-factor structural model comprising compensation, workplace, intangible benefits, support, communication, and satisfaction. A targeted snowball descriptive survey was distributed via WhatsApp messages to Information Technology workers; 209 responses were collected after data cleaning. SPSS statistical software was used to analyze the data, with results indicating IT employees felt an average level of satisfaction. Additionally, several work-related variables were significantly associated with job satisfaction. Work position showed a statistically significant association with work satisfaction. Finally, individuals in a leading position reported higher satisfaction compared to individuals in non-leading positions.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_33-Investigation_of_Factors_Affecting_Employee_Satisfaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fungal Blast Disease Detection in Rice Seed using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120232</link>
        <id>10.14569/IJACSA.2021.0120232</id>
        <doi>10.14569/IJACSA.2021.0120232</doi>
        <lastModDate>2021-03-01T14:58:53.0870000+00:00</lastModDate>
        
        <creator>Raj Kumar</creator>
        
        <creator>Gulsher Baloch</creator>
        
        <creator>Pankaj</creator>
        
        <creator>Abdul Baseer Buriro</creator>
        
        <creator>Junaid Bhatti</creator>
        
        <subject>Fungal blast; machine learning; support vector machine (SVM); logistic regression; decision tree; Na&#239;ve Bayes; random forest; linear discriminant analysis (LDA); principal component analysis (PCA); image acquisition; pre-processing; feature extraction; F1-Score; convolutional classifier; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The economy of Pakistan mainly relies upon agriculture alongside other vital industries. Fungal blast is one of the significant plant diseases found in rice crops, leading to a reduction in agricultural output and hindering the country&#39;s economic development. Plant disease detection is an initial step towards improving the yield and quality of agricultural products. Manual analysis of plant health is tiresome, time-consuming, and costly. Machine learning offers an alternative inspection method, providing the benefits of automated inspection, ease of availability, and cost reduction. The visual patterns on the rice plants are processed using machine learning classifiers such as support vector machine (SVM), logistic regression, decision tree, Na&#239;ve Bayes, random forest, linear discriminant analysis (LDA), and principal component analysis (PCA), and based on the classification results plants are recognized as healthy or unhealthy. For this process, a dataset containing 1000 images of rice seed crops was collected from different fields of Kashmore, and the whole analysis of image acquisition, pre-processing, and feature extraction was done on the rice seed only. The dataset was annotated with healthy and unhealthy samples with the help of a plant disease expert. The algorithms used for processing the data are evaluated in terms of F1-score and testing accuracy. This paper contains results from traditional classifiers, and alongside these classifiers, transfer learning has been used to compare the results. Finally, a comparative analysis is done between the results of the traditional classifiers and deep learning networks.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_32-Fungal_Blast_Disease_Detection_in_Rice_Seed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Rainfall Yearly Prediction based on Autoregressive Artificial Neural Networks Model in Central Jordan using Data Records: 1938-2018</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120231</link>
        <id>10.14569/IJACSA.2021.0120231</id>
        <doi>10.14569/IJACSA.2021.0120231</doi>
        <lastModDate>2021-03-01T14:58:53.0700000+00:00</lastModDate>
        
        <creator>Suhail Sharadqah</creator>
        
        <creator>Ayman M Mansour</creator>
        
        <creator>Mohammad A Obeidat</creator>
        
        <creator>Ramiro Marbello</creator>
        
        <creator>Soraya Mercedes Perez</creator>
        
        <subject>Jordan; rainfall distribution; time series analyses; Levenberg-Marquardt algorithm; climate change</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Jordan suffers from a chronic water resource shortage. Rainfall is the real input for all water resources in the country. Acceptable accuracy of rainfall prediction is of great importance for managing water resources and climate change issues. The present study includes the analysis of time series trends of climate change with regard to rainfall. Available rainfall data for five stations in central Jordan, covering the interval 1938-2018, were obtained from the Ministry of Water and Irrigation. The data were analyzed using Nonlinear Autoregressive Artificial Neural Networks (NAR-ANN) based on the Levenberg-Marquardt algorithm. The NAR model tested the rainfall data using one input layer, one hidden layer, and one output layer with different combinations of the number of neurons in the hidden layer and the number of epochs. The best combination used 25 neurons and 12 epochs. The classification performance, or quality of the result, is measured by the mean square error (MSE). For all the meteorological stations, the MSE values were negligible, ranging between 4.32*10^-4 and 1.83*10^-5. The rainfall prediction results show that forecast rainfall values based on the calendar year are almost identical to those estimated for the seasonal year when dealing with a long record of years. The average predicted rainfall values for the coming ten years, compared with the long-term rainfall average, show a strong decline for Dana station, some decrease for Rashadia station, a huge increase at Abur station, and relatively limited change between the predicted and long-term averages for Busira and Muhai stations.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_31-Nonlinear_Rainfall_Yearly_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Social Media Applications for Ubiquitous Learning using Fuzzy TOPSIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120230</link>
        <id>10.14569/IJACSA.2021.0120230</id>
        <doi>10.14569/IJACSA.2021.0120230</doi>
        <lastModDate>2021-03-01T14:58:53.0400000+00:00</lastModDate>
        
        <creator>Caitlin Sam</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Mogiveny Rajkoomar</creator>
        
        <subject>Social media applications; Ubiquitous learning; fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS); Multiple criteria decision-making tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The exponential advancements in Information and Communications Technology have led to its prevalence in education, especially with the arrival of COVID-19. Ubiquitous learning (u-learning) is everyday learning that happens irrespective of time and place, enabled by m-learning, e-learning, and social computing such as social media. Due to their popularity, there has been an expansion of social media applications for u-learning. The aim of this research paper was to establish the most relevant social media applications for u-learning in schools. Data were collected from 260 respondents, comprising learners and instructors in high schools, who were asked to rank 14 of the top social media applications for ubiquitous learning. Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) was the method employed for ranking the 14 most popular social media applications using 15 education requirements, 15 technology criteria, and 260 decision makers. The simulation was implemented in MATLAB R2020a. The results showed that YouTube was the most likely social media application to be selected for u-learning, with a closeness coefficient of 0.9188, and that Viber was the least likely, with a closeness coefficient of 0.0165. The inferences of this research study will advise researchers in the intelligent decision support systems field on reducing the time and effort required by instructors and learners to select the most beneficial social media application for u-learning.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_30-Selection_of_Social_Media_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Climate Control System inside a Greenhouse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120229</link>
        <id>10.14569/IJACSA.2021.0120229</id>
        <doi>10.14569/IJACSA.2021.0120229</doi>
        <lastModDate>2021-03-01T14:58:53.0400000+00:00</lastModDate>
        
        <creator>A Labidi</creator>
        
        <creator>A. Chouchaine</creator>
        
        <creator>A. Mami</creator>
        
        <subject>Greenhouse; climate; humidity; fuzzy logic controller; humidification; dehumidification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>An agricultural greenhouse is an environment designed to ensure intensive agricultural production. The climatological conditions favorable to agricultural production (temperature, lighting, humidity, etc.) must be reproduced artificially by controlling these parameters with several actuators (heating/air conditioning, ventilation, and humidifier/dehumidifier). The objective of this study is to control the humidity inside the greenhouse, a problem that remains challenging. To that end, an actuator based on a humidifier and a dehumidifier was installed in an experimental greenhouse and driven by a fuzzy logic controller to achieve the desired optimal indoor humidity.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_29-Intelligent_Climate_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Artificial Bee Colony: Na&#239;ve Bayes Technique for Optimizing Software Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120228</link>
        <id>10.14569/IJACSA.2021.0120228</id>
        <doi>10.14569/IJACSA.2021.0120228</doi>
        <lastModDate>2021-03-01T14:58:53.0070000+00:00</lastModDate>
        
        <creator>Palak </creator>
        
        <creator>Preeti Gulia</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <subject>Software testing; artificial bee colony; swarm intelligence; Na&#239;ve Bayes; test case selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Software driven technology has become a part of life, and the quality of software largely depends on the extent of effective testing performed during the various phases of development. A wide range of nature-inspired search techniques have been employed over the years to automate the testing process and provide promising solutions that elude the infeasibility of exhaustive testing. These techniques use metaheuristics and work by converting the problem space into a search space, in which a subset of optimized solutions is searched, reducing overall time by shortening the testing time. Objective: An enhanced Artificial Bee Colony-Na&#239;ve Bayes optimizer for test case selection is proposed in this paper. This article also aims to provide brief insights into the emergence of hybrid swarm-inspired techniques over the last two decades. Method: The modified Artificial Bee Colony is applied after component selection, and further optimization is achieved using a Na&#239;ve Bayes classifier. The proposed technique is implemented and evaluated on three benchmark programs and compared to other competitive swarm intelligence-based techniques of its class. Results: The experimental results show that the proposed technique outperforms other swarm-inspired techniques in terms of execution time in the given scenario and is capable of detecting more faults with minimal test case selection. Conclusion: The proposed approach is an improvement over existing techniques and yields substantial time and cost savings. It will contribute to the testing community and enhance the overall quality of software.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_28-An_Enhanced_Artificial_Bee_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Detection of Severe Flu Outbreaks using Contextual Word Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120227</link>
        <id>10.14569/IJACSA.2021.0120227</id>
        <doi>10.14569/IJACSA.2021.0120227</doi>
        <lastModDate>2021-03-01T14:58:52.9930000+00:00</lastModDate>
        
        <creator>Redouane Karsi</creator>
        
        <creator>Mounia Zaim</creator>
        
        <creator>Jamila El Alami</creator>
        
        <subject>ELMo; SVM; contextual word embeddings; semantic term weighting; health surveillance; text classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The purpose of automated health surveillance systems is to predict the emergence of a disease. In most cases, these systems use a text categorization model to classify any clinical text into a category corresponding to an illness. The problem arises when the target classes refer to diseases sharing much information, such as symptoms. In that case, the classifier has difficulty discriminating the disease under surveillance from other conditions of the same family, causing an increase in the misclassification rate. Clinical texts contain keywords carrying information relevant to distinguishing diseases with similar symptoms. However, these specific words are rare and sparse; therefore, they have a minor impact on the performance of machine learning models. Assuming that emphasizing specific terms contributes to improving classification performance, we propose an algorithm that enriches training samples with terms semantically similar to the specific terms, using the ELMo deep contextualized word embeddings. Next, we devise a weighting scheme combining chi-square and semantic scores to reflect the relatedness between features and the disease under surveillance. We evaluate our model using the SVM algorithm trained on the i2b2 dataset supplemented by documents collected from Ibn Sina hospital in Rabat. Experimental results show a clear improvement in classification performance over baseline methods, with an F-measure reaching 86.54%.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_27-Early_Detection_of_Severe_Flu_Outbreaks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visibility and Ethical Considerations of Pakistani Universities Researchers on Google Scholar</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120226</link>
        <id>10.14569/IJACSA.2021.0120226</id>
        <doi>10.14569/IJACSA.2021.0120226</doi>
        <lastModDate>2021-03-01T14:58:52.9600000+00:00</lastModDate>
        
        <creator>Muhammad Asghar Khan</creator>
        
        <creator>Tariq Rahim Soomro</creator>
        
        <subject>Google scholar; research visibility; Pakistani researchers; ethical considerations; web bot; research in Pakistan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Maximizing visibility through academic profiling sites is crucial in the academic world for improving the readership of published research papers and for the constant evaluation of research quality. In this article, the authors focus on the visibility of Pakistani university scholars on Google Scholar (GS). An intelligent web bot (MAKGBOT) was developed to collect the scholarly data of all Pakistani scholars whose data is publicly available on Google Scholar. The findings of this research show that 87% of Pakistani universities have a presence on Google Scholar. The analysis covers the research performance of scholars over the last five years, 2016 to 2020. Furthermore, the analysis reports the level of scholarly activity in all provinces and autonomous areas of Pakistan. This paper concludes by discussing the ethical issue of misrepresentation of information on public profiles and its consequences for the rankings of legitimate scholars.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_26-Visibility_and_Ethical_Considerations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Feature Selection and Ensemble Learning Methods for Gene Selection and Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120225</link>
        <id>10.14569/IJACSA.2021.0120225</id>
        <doi>10.14569/IJACSA.2021.0120225</doi>
        <lastModDate>2021-03-01T14:58:52.9600000+00:00</lastModDate>
        
        <creator>Sultan Noman Qasem</creator>
        
        <creator>Faisal Saeed</creator>
        
        <subject>Microarray; gene selection; ensemble classification; cancer classification; gene expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>A promising research field in bioinformatics and data mining is the classification of cancer based on gene expression results. Not all genes support efficient sample classification. Thus, to identify the appropriate genes that help efficiently distinguish samples, a robust feature selection method is needed. Redundancy in gene expression data contributes to low classification performance. This paper presents combinations of gene selection and classification methods using ranking and wrapper methods. In the ranking methods, information gain was used to reduce the dimensionality to 1% and 5% of its original size. Then, in the wrapper methods, K-nearest neighbors and Na&#239;ve Bayes were used with Best First, Greedy Stepwise, and Rank Search. Several combinations were investigated because it is known that no single model can give the best results on different datasets under all circumstances. Therefore, combining multiple feature selection methods and applying different classification models could provide a better decision on the final predicted cancer types. Compared with existing classifiers, the proposed ensemble gene selection methods obtained comparable performance.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_25-Hybrid_Feature_Selection_and_Ensemble_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Master Data Quality: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120224</link>
        <id>10.14569/IJACSA.2021.0120224</id>
        <doi>10.14569/IJACSA.2021.0120224</doi>
        <lastModDate>2021-03-01T14:58:52.9470000+00:00</lastModDate>
        
        <creator>Azira Ibrahim</creator>
        
        <creator>Ibrahim Mohamed</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <subject>Quality management; total quality management; data quality; data quality management; master data; master data quality; master data quality management; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Master data refers to the data that represents the core business of an organization; it is shared among different applications, departments, and organizations and is valued as one of the organization's most important assets. Despite the clear benefits of master data, mainly in decision making and organizational performance, its quality is at risk because of the critical challenges organizations face in managing master data quality. Hence, the primary aim of this study is to identify factors influencing master data quality through the lens of total quality management, using the systematic literature review method. The study proposes 19 factors that influence the quality of master data, namely data governance, information system, data quality policy and standard, data quality assessment, integration, continuous improvement, teamwork, data quality vision and strategy, understanding of the systems and data quality, data architecture management, personnel competency, top management support, business driver, legislation, information security management, training, change management, customer focus, and data supplier management, which can be categorized into five components: organizational, managerial, stakeholder, technological, and external. Another important finding is the identification of factors that differ for master data compared to other data domains, namely business driver, organizational structure, organizational culture, performance evaluation and rewards, evaluation of cost/benefit tradeoffs, physical environment, risk management, storage management, usage of data, internal control, input control, staff participation, middle management&#39;s commitment, the role of data quality and the data quality manager, audit, and personnel relations. It is expected that the findings of this study will contribute to a deeper understanding of the factors that lead to improved master data quality.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_24-Factors_Influencing_Master_Data_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT System for Vital Signs Monitoring in Suspicious Cases of Covid-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120223</link>
        <id>10.14569/IJACSA.2021.0120223</id>
        <doi>10.14569/IJACSA.2021.0120223</doi>
        <lastModDate>2021-03-01T14:58:52.9130000+00:00</lastModDate>
        
        <creator>John Amachi-Choqque</creator>
        
        <creator>Michael Cabanillas-Carbonell</creator>
        
        <subject>Covid-19; vital signs; internet of things; NodeMCU; IoT platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The world is currently going through a pandemic caused by Covid-19, and the World Health Organization recommends staying isolated from other people. This research presents the development of a prototype based on the internet of things that measures three very important vital signs: heart rate, blood oxygen saturation, and body temperature. These are measured through sensors connected to a NodeMCU module with an integrated Wi-Fi module, which transmits the data to an IoT platform where they can be displayed, achieving real-time monitoring of the vital signs of patients suspected of having Covid-19.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_23-IoT_System_for_Vital_Signs_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digitization of Supply Chains as a Lever for Controlling Cash Flow Bullwhip: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120222</link>
        <id>10.14569/IJACSA.2021.0120222</id>
        <doi>10.14569/IJACSA.2021.0120222</doi>
        <lastModDate>2021-03-01T14:58:52.8970000+00:00</lastModDate>
        
        <creator>Hicham Lamzaouek</creator>
        
        <creator>Hicham Drissi</creator>
        
        <creator>Naima El Haoud</creator>
        
        <subject>Cash flow bullwhip; digital technologies; digitization; supply chain; cash flow; bullwhip effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Due to the new possibilities offered by digital technologies, more and more companies are embarking on a process of digitizing their supply chains. This dynamic is an opportunity to analyse the impact that digital technologies may have on one of the phenomena that disrupts financial flows within supply chains and can alter companies’ treasury, namely the cash flow bullwhip (CFB). The results of the systematic literature review that was carried out affirm that several technologies can contribute positively to limiting this phenomenon by acting on its operational causes, which are the reliability of forecasts, batch orders, fluctuations in sales prices, rationing games, and lead times.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_22-Digitization_of_Supply_Chains_as_a_Lever.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Engineering for Human Activity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120221</link>
        <id>10.14569/IJACSA.2021.0120221</id>
        <doi>10.14569/IJACSA.2021.0120221</doi>
        <lastModDate>2021-03-01T14:58:52.8830000+00:00</lastModDate>
        
        <creator>Basma A. Atalaa</creator>
        
        <creator>Ibrahim Ziedan</creator>
        
        <creator>Ahmed Alenany</creator>
        
        <creator>Ahmed Helmi</creator>
        
        <subject>Human activity recognition; random forest; feature engineering; sensor signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Human activity recognition (HAR) techniques can significantly contribute to the enhancement of health and life care systems for elderly people. These techniques, which generally operate on data collected from wearable sensors or those embedded in most smart phones, have therefore attracted increasing interest recently. In this paper, a random forest-based classifier for human activity recognition is proposed. The classifier is trained using a set of time-domain features extracted from raw sensor data after segmentation into windows of 5-second duration. A detailed study of model parameter selection is presented using the statistical t-test. Several simulation experiments are conducted on the WHARF accelerometer benchmark dataset to compare the performance of the proposed classifier to support vector machines (SVM) and artificial neural networks (ANN). The proposed model shows high recognition rates for different activities in the WHARF dataset compared to other classifiers using the same set of features. Furthermore, it achieves an overall average precision of 86.1%, outperforming the recognition rate of 79.1% reported in the literature using Convolutional Neural Networks (CNN) for the WHARF dataset. From a practical point of view, the proposed model is simple and efficient and is therefore expected to be suitable for implementation in hand-held devices such as smart phones, with their limited memory and computational resources.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_21-Feature_Engineering_for_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Modern Distributed Systems based on Microservices Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120220</link>
        <id>10.14569/IJACSA.2021.0120220</id>
        <doi>10.14569/IJACSA.2021.0120220</doi>
        <lastModDate>2021-03-01T14:58:52.8500000+00:00</lastModDate>
        
        <creator>Isak Shabani</creator>
        
        <creator>Endrit M&#235;ziu</creator>
        
        <creator>Blend Berisha</creator>
        
        <creator>Tonit Biba</creator>
        
        <subject>Distributed systems; microservice; monolithic; web services; Jmeter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Distributed systems are very commonplace nowadays and have seen enormous growth in use during the past few years. The aim of designing systems that are robust, scalable, reliable, secure, and fault tolerant is among the many reasons for this development and growth. Distributed systems represent a shift from traditional ways of building systems, where the whole system is concentrated in a single, indivisible unit. The latest architectural changes are progressing toward what is known as microservices. Monolithic systems, which can be considered the ancestors of microservices, cannot fulfill the requirements of today’s big and complex applications. In this paper we decompose a monolithic application into microservices using three different architectural patterns and draw comparisons between the two architectural styles using detailed metrics generated by the Apache JMeter tool. The application is fictitious, is created with the .NET framework, and uses the MVC pattern. Before testing with Apache JMeter, the two comparable apps are deployed in almost identical hosting environments in order to obtain meaningful results. Using the generated data, we deduce the advantages and disadvantages of the two architectural styles.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_20-Design_of_Modern_Distributed_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Hybrid Approach for Cost Effective Prediction of Software Defects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120219</link>
        <id>10.14569/IJACSA.2021.0120219</id>
        <doi>10.14569/IJACSA.2021.0120219</doi>
        <lastModDate>2021-03-01T14:58:52.8500000+00:00</lastModDate>
        
        <creator>Satya Srinivas Maddipati</creator>
        
        <creator>Malladi Srinivas</creator>
        
        <subject>Cost effective problem; principle component analysis; adaptive neuro fuzzy inference system; area under ROC curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Identifying software defects during the early stages of the software development life cycle reduces project effort and cost. Hence, much research has been done on predicting the defect proneness of a software module using machine learning approaches. The main problems with software defect data are the cost-effectiveness problem and class imbalance. The cost-effectiveness problem refers to the fact that predicting a defective module as non-defective incurs a high penalty compared to predicting a non-defective module as defective. In our work, we propose a hybrid approach to address the cost-effectiveness problem in software defect data, using a bagging technique with an Adaptive Neuro-Fuzzy Inference System as the base classifier. In addition, we address the class imbalance &amp; high dimensionality problems using the Adaptive Neuro-Fuzzy Inference System &amp; principal component analysis, respectively. We conducted experiments with the proposed approach on software defect datasets downloaded from the NASA dataset repository and compared it with approaches mentioned in the literature survey. We observed that the Area under the ROC Curve (AUC) of the proposed approach improved by approximately 15% compared with the most efficient approach mentioned in the literature survey.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_19-An_Hybrid_Approach_for_Cost_Effective_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Singer Gender Classification using Feature-based and Spectrograms with Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120218</link>
        <id>10.14569/IJACSA.2021.0120218</id>
        <doi>10.14569/IJACSA.2021.0120218</doi>
        <lastModDate>2021-03-01T14:58:52.8200000+00:00</lastModDate>
        
        <creator>Mukkamala S.N.V. Jitendra</creator>
        
        <creator>Y. Radhika</creator>
        
        <subject>Gender identification; spectrogram; genetic algorithm-based feature selection (GAFS); music information retrieval (MIR); music recommendation; and singer’s gender identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The task of music information retrieval (MIR) is gaining importance as the digital cloud grows rapidly. An important attribute in MIR is the singer-id, which helps greatly during the recommendation process. Identifying a singer in music is highly difficult because the number of singers available in the digital cloud is large. Identifying the gender of a singer can simplify the task of singer identification and also help with recommendation. Hence, an effort has been made to detect the gender of a singer. Two different datasets have been considered: one collected from Indian cine industries, containing details of 20 different singers across four regional languages, and the standard Artist20 dataset. Various spectral, temporal, and pitch-related features have been used to obtain better accuracy. The features considered for this task are Mel-frequency cepstral coefficients (MFCCs), pitch, and the velocity and acceleration of MFCCs. Experiments were conducted on various combinations of the mentioned features with the support of artificial neural networks (ANNs) and random forest (RF). Further, genetic algorithm-based feature selection (GAFS) was used to select suitable features from the best combination obtained. Moreover, we also utilized the recently popular convolutional neural networks (CNNs) with spectrograms to obtain better accuracy than the traditional feature vector. An average accuracy of 91.70% is obtained for both the Indian and Western clips, an improvement of 3% over hand-engineered features.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_18-Singer_Gender_Classification_using_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Study on Blood Flow Mechanism of Vein in Existence of Different Thrombus Size</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120217</link>
        <id>10.14569/IJACSA.2021.0120217</id>
        <doi>10.14569/IJACSA.2021.0120217</doi>
        <lastModDate>2021-03-01T14:58:52.8030000+00:00</lastModDate>
        
        <creator>Nabilah Ibrahim</creator>
        
        <creator>Nur Shazilah Aziz</creator>
        
        <creator>Muhammad Kamil Abdullah</creator>
        
        <creator>Gan Hong Seng</creator>
        
        <subject>Blood velocity profile; velocity contour plot; computational fluid dynamic (CFD); thrombus; vein valve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Blood velocity is expected to serve as a parameter for detecting blood abnormalities such as the existence of a thrombus. Proper blood flow in veins is important to ensure the effective return of deoxygenated blood to the heart. However, it is very challenging to recognize the vessel condition due to the inability to visualize the presence of a thrombus in the vessel; the noise in images obtained from ultrasound scanning is one of the obstructions to recognizing it. Considering this difficulty, this study aims to assess the velocity and vorticity at the vein valve region using the Computational Fluid Dynamics (CFD) method. The velocity of blood and the size of the valve orifice are considered important parameters in designing the vein, since stenosis and velocity irregularities in blood vessels are known risk factors for thrombus formation. From the simulation, the velocity contour plot of the blood flow can be visualized clearly. The blood distribution was presented using the velocity profile, while the movement of fluid particles was shown by the velocity vector. The results clearly show the low-velocity region, which resides at the cusp area and at the beginning of the valve leaflets. Therefore, the present study is able to visualize and evaluate the probable location of thrombus development in the blood vessel.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_17-Simulation_Study_on_Blood_Flow_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robotic Education in 21st Century: Teacher Acceptance of Lego Mindstorms as Powerful Educational Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120216</link>
        <id>10.14569/IJACSA.2021.0120216</id>
        <doi>10.14569/IJACSA.2021.0120216</doi>
        <lastModDate>2021-03-01T14:58:52.7900000+00:00</lastModDate>
        
        <creator>Mardhiah Masril</creator>
        
        <creator>Ambiyar</creator>
        
        <creator>Nizwardi Jalinus</creator>
        
        <creator>Ridwan</creator>
        
        <creator>Billy Hendrik</creator>
        
        <subject>Education; TAM; teacher acceptance; Lego; Robotic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Acceptance of robotic technology in education is a crucial issue in the Industry 4.0 revolution era. This study aims to explore teachers’ acceptance of Lego Mindstorms Ev3, one kind of robotic technology, as a learning resource that can develop teachers’ and students’ skills. The Technology Acceptance Model (TAM) was applied using questionnaires, which were answered by 22 elementary school teachers who had gained experience with Lego Mindstorms Ev3 kits in a workshop. The data were analyzed using descriptive statistics, correlation, and regression analyses. Based on the acceptance testing of Lego Mindstorms Ev3 with the TAM model, the results showed that subjective norms (SN) and self-efficacy (SE), as external variables, affected teachers’ acceptance of Lego Mindstorms Ev3 as a learning tool. Teachers’ SN had a positive correlation with perceived usefulness (PU), perceived ease of use (PE), and behavioral intention to use (BI). Teachers’ self-efficacy was significant in predicting PE and BI. PU and PE had a positive effect on teachers’ attitude toward using Lego Mindstorms Ev3 and on continued use. Finally, most teachers showed positive reactions to Lego Mindstorms Ev3 as an educational tool.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_16-Robotic_Education_in_21st_Century.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AMBA: Adaptive Monarch Butterfly Algorithm based Information of Transfer Scheduling in Cloud for Big Information Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120215</link>
        <id>10.14569/IJACSA.2021.0120215</id>
        <doi>10.14569/IJACSA.2021.0120215</doi>
        <lastModDate>2021-03-01T14:58:52.7570000+00:00</lastModDate>
        
        <creator>D. Sugumaran</creator>
        
        <creator>C. R. Bharathi</creator>
        
        <subject>Information transfer scheduling; AMBA; throughput maximization; migration operation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Nowadays, cloud computing is a prominent innovation with a great deal of research potential in different areas, such as resource allocation, data transfer scheduling, security, and privacy. Data transfer scheduling is one of the significant issues in improving the efficiency of all cloud-based services. In cloud computing, data transfer scheduling is used to allot each task to the most suitable resource for execution. There are various types of data transfer scheduling algorithms. Issues such as execution time, execution cost, high delay, complexity, and high data transfer cost, as well as various optimization problems, have been examined in existing papers. To tackle all of these problems, this paper proposes a new adaptive data transfer scheduling approach based on a combination of the Monarch Butterfly and Genetic algorithms (AMBA). The idea is to develop an optimal algorithm for scheduling data transfers efficiently, which helps reduce the data transfer time. The performance of the proposed methodology is analyzed in terms of evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_15-AMBA_Adaptive_Monarch_Butterfly_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Congestion Window Algorithm for the Internet of Things Enabled Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120214</link>
        <id>10.14569/IJACSA.2021.0120214</id>
        <doi>10.14569/IJACSA.2021.0120214</doi>
        <lastModDate>2021-03-01T14:58:52.7430000+00:00</lastModDate>
        
        <creator>Ramadevi Chappala</creator>
        
        <creator>Ch.Anuradha</creator>
        
        <creator>P. Sri Ram Chandra Murthy</creator>
        
        <subject>Congestion window; internet of things; packet delivery ratio; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Heterogeneous, constrained computing resources in the Internet of Things (IoT) communicate, collect, and share information from the environment using sensors and other high-speed technologies, which generates tremendous traffic and leads to congestion in IoT networks. This paper proposes an Adaptive Congestion Window (ACW) algorithm for the Internet of Things that adapts to traffic changes in the network. The main objective of this paper is to increase the packet delivery ratio and reduce delay while enhancing throughput, which can be attained by avoiding congestion. Therefore, in the proposed algorithm, the congestion window size depends on the transmission rate of the source node, the available bandwidth of the path, and the receiving rate of the destination node. The congestion window size is altered when a link on the path needs to be shared with, or released by, other paths of different transmissions in the network. The proposed ACW algorithm is simulated and evaluated in terms of packet delivery ratio, throughput, and delay. Its performance is compared with the IoT Congestion Control Algorithm (IoT-CCA) and the Improved Stream Control Transmission Protocol (IMP-SCTP), and it proves better by 27.4%, 11.8%, and 33.7% than IoT-CCA and by 44.1%, 22.6%, and 50% than IMP-SCTP with respect to packet delivery ratio, throughput, and delay, respectively. The variation in congestion window size over time is also presented.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_14-Adaptive_Congestion_Window_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Meta-analysis of Educational Data Mining for Predicting Students Performance in Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120213</link>
        <id>10.14569/IJACSA.2021.0120213</id>
        <doi>10.14569/IJACSA.2021.0120213</doi>
        <lastModDate>2021-03-01T14:58:52.7270000+00:00</lastModDate>
        
        <creator>Devraj Moonsamy</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Timothy T. Adeliyi</creator>
        
        <creator>Ropo E. Ogunsakin</creator>
        
        <subject>Data mining; educational data mining; machine learning; performance; programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>An essential skill amid the 4th industrial revolution is the ability to write good computer programs. Therefore, higher education institutions are offering computer programming as a module not only in computer-related programmes but in other programmes as well. However, the number of students that underperform in programming is significantly higher than in non-programming modules. It is, therefore, crucial to be able to accurately predict the performance of students pursuing programming, since this will help in identifying students that may underperform so that the necessary support interventions can be put in place in time to assist them. The objective of this study is therefore to identify the most effective Educational Data Mining approaches used to detect students that may underperform in computer programming. The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analysis) approach was used in conducting the meta-analysis. The databases searched were ACM, Google Scholar, IEEE, ProQuest, Science Direct, and Scopus. A total of 11 scientific research publications were included in the meta-analysis from 220 articles identified through database searching. The residual amount of heterogeneity was high (τ2 = 0.03; heterogeneity I2 = 99.46%, with heterogeneity chi-square = 1210.91, degrees of freedom = 10, and P &lt; 0.001). The estimated pooled performance of the algorithms was 24% (95% CI: 13%, 35%). Meta-regression analysis indicated that none of the included moderators influenced the heterogeneity of studies. The plot of effect estimates against their standard errors indicated publication bias with a P-value of 0.013. These meta-analysis findings indicated that the pooled estimate of algorithms is high.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_13-A_Meta_analysis_of_Educational_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Denial of Service Signaling Threats in 5G Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120211</link>
        <id>10.14569/IJACSA.2021.0120211</id>
        <doi>10.14569/IJACSA.2021.0120211</doi>
        <lastModDate>2021-03-01T14:58:52.6970000+00:00</lastModDate>
        
        <creator>Raja Ettiane</creator>
        
        <creator>Rachid EL Kouch</creator>
        
        <subject>5G New Radio (NR) network; Radio Resource Control (RRC) state model; Denial of Service (DoS); signaling threats; randomization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>With the advent of 5th generation (5G) technology, the mobile paradigm is witnessing a tremendous evolution involving the development of a plethora of new applications and services. This enormous technological growth is accompanied by a huge signaling overhead among 5G network elements, especially with the emergence of massive device connectivity. This heavy signaling load will certainly be associated with a significant security threat landscape, including denial of service (DoS) attacks against the 5G control plane. In this paper, we analyse the performance of a defense mechanism based on a randomization technique designed to mitigate the impact of DoS signaling attacks in 5G systems. Based on a massive machine-type communications (mMTC) traffic pattern, the simulation results show that the proposed randomization mechanism significantly decreases, by up to 70%, the signaling data volume arising from the new 5G Radio Resource Control (RRC) model under normal and malicious operating conditions, while avoiding unnecessary resource consumption.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_11-Mitigating_Denial_of_Service_Signaling_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regulation Proposal for the Implementation of 5G Technology in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120212</link>
        <id>10.14569/IJACSA.2021.0120212</id>
        <doi>10.14569/IJACSA.2021.0120212</doi>
        <lastModDate>2021-03-01T14:58:52.6970000+00:00</lastModDate>
        
        <creator>Luis Nunez-Tapia</creator>
        
        <subject>5G; regulation; antennas; radio spectrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Telecommunications play a very important role in people’s lives. For years, this technology has evolved within the mobile communications industry, reaching 5G, which is more advanced than 4G and thus more comfortable for the user. In Peru, however, 5G technology has not yet been implemented because a large part of the population fears its antennas. Another current fear concerns the spread of COVID-19, fueled by a large amount of false information with poor scientific support; even though this information has been denied by Peru’s Ministry of Transport and Communications (MTC), people still hold on to these fears. For these reasons, the present investigation assesses the benefits that 5G technology would bring to the country and proposes a regulatory framework for the radio spectrum that this technology will occupy in Peru. By evaluating a regulation proposal for 5G technology in Peru, it is shown that the implementation of this technology will bring benefits to the social and economic sectors of the country.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_12-Regulation_Proposal_for_the_Implementation_of_5G_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gender Differences in the Perception of a Student Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120209</link>
        <id>10.14569/IJACSA.2021.0120209</id>
        <doi>10.14569/IJACSA.2021.0120209</doi>
        <lastModDate>2021-03-01T14:58:52.6630000+00:00</lastModDate>
        
        <creator>Rana Alhajri</creator>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <creator>Bareeq Alghannam</creator>
        
        <creator>Abdullah Alshaher</creator>
        
        <subject>Gender differences; student information system; human-computer interaction; usability; user experience; perceptions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>There is growing recognition that electronic student information systems support college administrations and enhance student performance. These systems must fulfill their users’ needs, which requires understanding gender differences among users. This study analyzes gender variations concerning the utilization of online student information systems (SIS), with its central concern being how the dynamics of user experience (UX) are affected. A broad agreement is evident throughout the literature that gender is a crucial aspect when assessing human-computer interaction. Consequently, usability factors are brought into question, although there is some indication among researchers that too much weight is being applied. Study findings represent the hedonic and pragmatic qualities of users, with clarifications of students’ perspectives deduced from qualitative methods, together with a UX examination made at Kuwait’s Public Authority for Applied Education and Training (PAAET) institute. Results suggest that the differing approaches and habits the two genders have toward UX should not be considered substantial, with the overall sample recording a perception of UX that is “slightly positive”. Furthermore, this research highlights usability difficulties that developers may wish to take on board for system upgrades.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_9-Gender_Differences_in_the_Perception_of_a_Student.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student Information System: Investigating User Experience (UX)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120210</link>
        <id>10.14569/IJACSA.2021.0120210</id>
        <doi>10.14569/IJACSA.2021.0120210</doi>
        <lastModDate>2021-03-01T14:58:52.6630000+00:00</lastModDate>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <creator>Rana Alhajri</creator>
        
        <creator>Bareeq Alghannam</creator>
        
        <creator>Abdullah Al-Shaher</creator>
        
        <subject>Student information system; user experience; usability; human-computer interaction; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>There is growing recognition that electronic student information systems support college administrations and enhance student performance. These systems must fulfill their users’ needs (helping them efficiently achieve their academic goals) while also providing a positive user experience (UX). This study used quantitative and qualitative approaches to elucidate students’ perceptions and investigate UX toward the SIS currently used at the Public Authority for Applied Education and Training (PAAET), a higher education institution in Kuwait. Survey data collected from 645 PAAET students were analyzed to determine their perceptions of and experiences using this SIS. The findings revealed that students had a slightly positive UX with this SIS. The system’s perspicuity, stimulation, and dependability were rated slightly higher than its novelty, attractiveness, and efficiency. The most pertinent usability issues concerning human interaction with systems were identified and discussed, in the hope that this will allow officials and SIS developers alike to make relevant and impactful improvements to newer versions of these systems. These results shed light on the need for continuous SIS evaluation and a broad research scope to develop innovative SIS with intelligent functions for novel activities. Such features enhance students’ interactivity and productivity, which encourages their academic success.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_10-Student_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technology in Education: Attitudes Towards using Technology in Nutrition Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120208</link>
        <id>10.14569/IJACSA.2021.0120208</id>
        <doi>10.14569/IJACSA.2021.0120208</doi>
        <lastModDate>2021-03-01T14:58:52.6170000+00:00</lastModDate>
        
        <creator>Asrar Sindi</creator>
        
        <creator>James Stanfield</creator>
        
        <creator>Abdullah Sheikh</creator>
        
        <subject>Technology; application; online games; nutrition; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Digital technologies have influenced how teachers conduct their daily practice and how students learn in classrooms. In addition, technology is increasingly being deployed in the classroom environment via a combination of kinesthetic, visual, and auditory approaches. This paper investigates teachers’ and students’ attitudes towards using technology in nutritional education. It then discusses the impact of online games in enhancing students’ nutritional education. Finally, it discusses the implications and findings of applying learning games to the curriculum from both teachers’ and students’ perspectives.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_8-Technology_in_Education_Attitudes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HADOOP: A Comparative Study between Single-Node and Multi-Node Cluster</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120207</link>
        <id>10.14569/IJACSA.2021.0120207</id>
        <doi>10.14569/IJACSA.2021.0120207</doi>
        <lastModDate>2021-03-01T14:58:52.5870000+00:00</lastModDate>
        
        <creator>Elisabeta ZAGAN</creator>
        
        <creator>Mirela DANUBIANU</creator>
        
        <subject>Hadoop; HDFS; single-node cluster; multi-node cluster; namenode; secondary namenode; datanodes; jobtracker; tasktrackers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Data analysis has become a challenge in recent years as the volume of data generated has become difficult to manage; more hardware and software resources are therefore needed to store and process this huge amount of data. Apache Hadoop is a free framework, widely used thanks to the Hadoop Distributed File System (HDFS) and its ability to integrate with other data processing and analysis components, such as MapReduce for processing data, Spark for in-memory data processing, Apache Drill for SQL on Hadoop, and many others. In this paper, we analyze the Hadoop framework implementation, making a comparative study between single-node and multi-node clusters on Hadoop. We explain in detail the two layers at the base of the Hadoop architecture: the HDFS layer with its NameNode, Secondary NameNode, and DataNode daemons, and the MapReduce layer with its JobTracker and TaskTracker daemons. This work is part of a larger effort aiming to perform data processing in Data Lake structures.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_7-HADOOP_ A_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improve the Effectiveness of Image Retrieval by Combining the Optimal Distance and Linear Discriminant Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120206</link>
        <id>10.14569/IJACSA.2021.0120206</id>
        <doi>10.14569/IJACSA.2021.0120206</doi>
        <lastModDate>2021-03-01T14:58:52.5700000+00:00</lastModDate>
        
        <creator>Phuong Nguyen Thi Lan</creator>
        
        <creator>Tao Ngo Quoc</creator>
        
        <creator>Quynh Dao Thi Thuy</creator>
        
        <creator>Minh-Huong Ngo</creator>
        
        <subject>Content-based image retrieval; deep learning; similarity measures; Mahalanobis metric distance; linear discriminant analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In image retrieval with relevance feedback, classification and distance calculation have a great influence on retrieval accuracy. In this paper, we propose an image retrieval method, called ODLDA (image retrieval using the optimal distance and linear discriminant analysis). The proposed method can effectively exploit the user’s feedback from relevant and irrelevant image sets, using linear discriminant analysis to find a linear projection with an improved similarity measure. Experimental results on two benchmark datasets confirm the superiority of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_6-Improve_the_Effectiveness_of_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transliterating N&#244;m Scripts into Vietnamese National Scripts using Statistical Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120205</link>
        <id>10.14569/IJACSA.2021.0120205</id>
        <doi>10.14569/IJACSA.2021.0120205</doi>
        <lastModDate>2021-03-01T14:58:52.5400000+00:00</lastModDate>
        
        <creator>Dien Dinh</creator>
        
        <creator>Phuong Nguyen</creator>
        
        <creator>Long H. B. Nguyen</creator>
        
        <subject>Statistical machine translation; automatic transliteration; Nôm script (chữ Nôm); Vietnamese national script (chữ Quốc ngữ)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>Nôm scripts were used as the Vietnamese writing system from the 10th century to the early 20th century. During this period, Nôm scripts were the means of recording a broad range of historical events, literary works, and medical knowledge, as well as the wisdom of many other domains. Unfortunately, since hardly any native Vietnamese speaker can read Nôm scripts nowadays, these valuable documents have not been fully harnessed. To address this gap, it is necessary to build an automatic transliteration system that can support us in decoding the ancient scripts and gaining the knowledge of our Vietnamese ancestors. This study focuses on categorizing and reviewing the current progress of Statistical Machine Translation (SMT) approaches to transliterating Nôm scripts into Vietnamese national scripts. In this paper, we discuss the differences between Nôm scripts and Vietnamese national scripts, systematically compare SMT models in transliterating Nôm scripts into Vietnamese national scripts, and offer a thorough outlook on several promising research directions.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_5-Transliterating_N&#244;m_Scripts_into_Vietnamese.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Debugger for Arduino</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120204</link>
        <id>10.14569/IJACSA.2021.0120204</id>
        <doi>10.14569/IJACSA.2021.0120204</doi>
        <lastModDate>2021-03-01T14:58:52.5230000+00:00</lastModDate>
        
        <creator>Jan Dolinay</creator>
        
        <creator>Petr Dost&#225;lek</creator>
        
        <creator>Vladim&#237;r Vašek</creator>
        
        <subject>Arduino; debugger; microcontroller; software debugging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>This article describes an improved version of our source-level debugger for Arduino. The debugger can be used to debug Arduino programs using the GNU debugger GDB with Eclipse or Visual Studio Code as the visual front-end. It supports all the functionality expected from a debugger, such as stepping through the code, setting breakpoints, or viewing and modifying variables. These features are otherwise not available for the popular AVR-based Arduino boards without an external debug probe and modification of the board. With the presented debugger, it is only necessary to add a program library to the user program and optionally replace the bootloader. The debugger can speed up program development and make the Arduino platform even more usable as a tool for controlling various experimental apparatus or for teaching computer programming. The article focuses on the new features and improvements we have made in the debugger since its introduction in 2016. The most important improvement over the old version is the support for inserting breakpoints into program memory, which allows debugging without affecting the speed of the debugged program and inserting breakpoints into interrupt service routines. Further enhancements include loading the program via the debugger and newly added support for Arduino Mega boards.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_4-Advanced_Debugger_for_Arduino.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Accuracy of Models for Predicting the Speech Acceptability for Children with Cochlear Implants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120203</link>
        <id>10.14569/IJACSA.2021.0120203</id>
        <doi>10.14569/IJACSA.2021.0120203</doi>
        <lastModDate>2021-03-01T14:58:52.4930000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Cochlear implants; speech acceptability; support vector regression; random forest; mean absolute errors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>This study developed models for predicting healthy-hearing people’s speech acceptability for children with cochlear implants using multiple regression analysis, support vector regression, and random forest, and evaluated their prediction performance by comparing mean absolute errors and root mean squared errors. This study targeted 91 hearing-impaired children between four and eight years old who had worn cochlear implants for at least one year and less than five years. Speech data of children wearing cochlear implants (CI) were collected through two tasks: speaking and reading. The outcome variable, healthy-hearing people’s speech acceptability for children wearing CI, was evaluated by 80 college students (freshmen and sophomores) who had no prior knowledge of children with cochlear implants. The results of this study showed that the random forest algorithm (mean absolute error = 0.81 and root mean squared error = 0.108) was the best model for predicting the speech acceptability of children wearing CI. The results imply that the predictive performance of random forest will be the best among ensemble models when developing a machine learning model using speech data of children wearing CI.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_3-Evaluating_the_Accuracy_of_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Space Mining Robot Prototype for NASA Robotic Mining Competition Utilizing Systems Engineering Principles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120202</link>
        <id>10.14569/IJACSA.2021.0120202</id>
        <doi>10.14569/IJACSA.2021.0120202</doi>
        <lastModDate>2021-03-01T14:58:52.4770000+00:00</lastModDate>
        
        <creator>Tariq Tashtoush</creator>
        
        <creator>Jesus A. Vazquez</creator>
        
        <creator>Julian Herrera</creator>
        
        <creator>Liliana Hernandez</creator>
        
        <creator>Lisa Martinez</creator>
        
        <creator>Michael E. Gutierrez</creator>
        
        <creator>Osiris Escamilla</creator>
        
        <creator>Rosaura E. Martinez</creator>
        
        <creator>Alejandra Diaz</creator>
        
        <creator>Jorge Jimenez</creator>
        
        <creator>Jose Isaac Segura</creator>
        
        <creator>Marcus Martinez</creator>
        
        <subject>NASA robotic mining competition; mining robot; ice regolith; autonomous; NASA; space exploration; systems life-cycle; mechanical structure design; control system; systems engineering; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>The 2017 National Aeronautics and Space Administration (NASA) Robotic Mining Competition (RMC) is an outstanding opportunity for engineering students to implement all the knowledge and experience gained in their undergraduate years by building a robot that will provide intellectual insight to NASA and help develop innovative robotic excavation concepts. For this competition, multiple universities from all over the U.S. create teams of students and faculty members to design and build a mining robot that can traverse, mine, and excavate at least 10 kg of regolith, then deposit it in a bin in the challenging simulated Martian terrain. Our team’s goal was to improve on our current design and overcome DustyTRON 2.0’s limitations by analyzing them and implementing new engineering solutions. The process of improving this system enables our team members to learn mechanical, electrical, and software engineering. DustyTRON 3.0 is divided into three sub-teams: Mechanical, Circuitry, and Software. The Mechanical sub-team focused on the mechanical structure, robot mobility, stability, and weight distribution. The Circuitry sub-team focused on the electrical components such as batteries, wiring, and motors. The Software sub-team focused on programming the NVidia TK1 and Arduino controllers and on camera integration. This paper outlines the detailed work, following systems engineering principles, to complete this project, from research to the design process and robot building, to competing at the Kennedy Space Center. Only 54 teams from all over the US were invited to participate; the DustyTRON team represented the state of Texas, placed 29th, and received the “Innovative Design” award.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_2-Space_Mining_Robot_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model-driven Framework for Requirement Traceability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120201</link>
        <id>10.14569/IJACSA.2021.0120201</id>
        <doi>10.14569/IJACSA.2021.0120201</doi>
        <lastModDate>2021-03-01T14:58:52.4300000+00:00</lastModDate>
        
        <creator>Nader Kesserwan</creator>
        
        <creator>Jameela Al-Jaroodi</creator>
        
        <subject>Requirements; traceability; model transformation; DO-178C; model-driven testing; traceability scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(2), 2021</description>
        <description>In software development, requirements traceability is often mandated. It is important for supporting various software development activities such as result evaluation, regression testing, and coverage analysis. Model-driven testing is one approach that provides a way to verify and validate requirements. However, it faces many challenges in test generation, in addition to the creation and maintenance of traceability information across test-related artifacts. This paper presents a model-based methodology for requirements traceability that leverages model transformation traceability techniques to achieve compliance with the DO-178C standard as defined in the software verification process. This paper also demonstrates and evaluates the proposed methodology using avionics case studies focusing on the functional aspects of requirements specified with the UCM (Use Case Maps) modeling language.</description>
        <description>http://thesai.org/Downloads/Volume12No2/Paper_1-Model_driven_Framework_for_Requirement_Traceability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Accessibility Evaluation of the Websites of Top-ranked Hospitals in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120180</link>
        <id>10.14569/IJACSA.2021.0120180</id>
        <doi>10.14569/IJACSA.2021.0120180</doi>
        <lastModDate>2021-02-02T09:08:19.5370000+00:00</lastModDate>
        
        <creator>Obead Alhadreti</creator>
        
        <subject>Accessibility; hospital websites; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Hospital websites offer the potential to improve healthcare service delivery. They can provide up-to-date information and services to patients, at low cost and regardless of their level of abilities. This, in turn, can reduce overcrowding in hospitals and reduce spread of disease, especially in circumstances like the current COVID-19 pandemic. It is, therefore, imperative for designers to ensure the accessibility of hospital websites to the widest possible range of people. This study aims to evaluate the accessibility of the websites of top-ranked hospitals in Saudi Arabia using AChecker. The sample included the websites of the top ten hospitals from each of the public and private sectors. The results show that only 20% of the evaluated websites conformed fully to the Web Content Accessibility Guidelines 2.0. No significant difference was found in terms of the accessibility compliance between the websites of the public and private hospitals. The most frequently observed accessibility errors were related to the structure of information, non-text content, labels and instructions, headings, and keyboard access. The study concludes that Saudi hospitals are not doing an adequate job of meeting accessibility guidelines, thereby denying many of their web customers the ability to fully use their websites.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_80-An_Accessibility_Evaluation_of_the_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristic Evaluation of Peruvian Government Web Portals, used within the State of Emergency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120178</link>
        <id>10.14569/IJACSA.2021.0120178</id>
        <doi>10.14569/IJACSA.2021.0120178</doi>
        <lastModDate>2021-01-30T14:38:15.3270000+00:00</lastModDate>
        
        <creator>Flores Quispe Percy Santiago</creator>
        
        <creator>Mamani Condori Kevin Alonso</creator>
        
        <creator>Paniura Huamani Jose Maykol</creator>
        
        <creator>Anampa Chura Diego David</creator>
        
        <creator>Richart Smith Escobedo Quispe</creator>
        
        <subject>Heuristic evaluation; usability; heuristic principles; government web portals; Covid-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The development of web platforms is very abundant at present and has grown exponentially due to the state of emergency. There is therefore a need to evaluate the quality of these platforms to ensure a good user experience, especially when they are governmental. Currently, several platforms related to the theme of COVID-19 have been developed by governments of different nations; these should be evaluated from a heuristic perspective to detect usability problems that can occur when users interact with a product and to identify ways to solve them. This article presents a heuristic evaluation of Peruvian government web portals used within the state of emergency. For this purpose, a heuristic evaluation was carried out, using a list of 15 heuristic principles proposed by Toni Granollers, on two government platforms: Aprendo en casa and Covid19 Minsa. It was thus identified that the Peruvian platform Aprendo en casa has fewer usability problems than the Covid19 Minsa platform; there is therefore a need to renew or update the Covid19 Minsa platform using the results of the heuristic evaluation performed. Although a heuristic evaluation of these Peruvian government platforms has been carried out, it is recommended to continue with research that uses other usability evaluation methodologies on other platforms of daily use, such as the S/ 380 bonus platform, the Bonus for Independent Workers, or AFP withdrawal.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_78-Heuristic_Evaluation_of_Peruvian_Government_Web_Portals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>k-Integer-Merging on Shared Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120179</link>
        <id>10.14569/IJACSA.2021.0120179</id>
        <doi>10.14569/IJACSA.2021.0120179</doi>
        <lastModDate>2021-01-30T14:38:15.3270000+00:00</lastModDate>
        
        <creator>Ahmed Y Khedr</creator>
        
        <creator>Ibrahim M Alseadoon</creator>
        
        <subject>Merging; parallel algorithm; shared memory; optimality; linear work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The k-integer-merging problem is to merge k sorted arrays A1, A2, …, Ak into a new sorted array that contains all elements of Ai, ∀i. We propose a new parallel algorithm based on an exclusive-read exclusive-write shared memory. The algorithm runs in O(log n) time using n/log n processors. The algorithm performs linear work, O(n), and has optimal cost. Furthermore, the total work done by the algorithm is less than that of the best-known previous parallel algorithms for the k-merging problem.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_79-k_Integer_Merging_on_Shared_Memory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of an e-Commerce System for the Automation and Improvement of Commercial Management at a Business Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120177</link>
        <id>10.14569/IJACSA.2021.0120177</id>
        <doi>10.14569/IJACSA.2021.0120177</doi>
        <lastModDate>2021-01-30T14:38:15.2970000+00:00</lastModDate>
        
        <creator>Anthony Tupia-Astoray</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Agile development; e-commerce; scrum methodology; prototype; system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>At present, micro and small businesses engaged in the production and marketing of products, which have a single means of sale such as stalls or physical stores, have been affected by the current crisis caused by the pandemic, which reached Europe and several Latin American countries in early 2020. The pandemic has caused severe damage to the economy of these enterprises, since they lack a virtual sales channel through which they can offer and market their products so that trade continues to operate. For this reason, we designed a prototype e-commerce system meeting the requirements set by the organizations. The project was based on the Scrum methodology as an agile development framework, and the Marvel design tool allowed the creation of web platform prototypes. As a result, prototypes of an e-commerce system were obtained in compliance with the development procedures established by the Scrum team, providing a novel proposal and a productive approach to start implementing e-commerce within the sales processes of each business area. Therefore, this e-commerce system prototype can be implemented by the different micro companies that wish to adopt a new online sales method and improve their commercial processes, allowing them to increase their client portfolio as well as their production.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_77-Implementation_of_an_e_Commerce_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Biometric Fusion System of Fingerprint and Face using Whale Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120176</link>
        <id>10.14569/IJACSA.2021.0120176</id>
        <doi>10.14569/IJACSA.2021.0120176</doi>
        <lastModDate>2021-01-30T14:38:15.2800000+00:00</lastModDate>
        
        <creator>Tajinder Kumar</creator>
        
        <creator>Shashi Bhushan</creator>
        
        <creator>Surender Jangra</creator>
        
        <subject>Biometric fusion; face recognition; fingerprint recognition; feature extraction; feature optimization; classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In the field of wireless multimedia authentication, the unimodal biometric model is commonly used, but it suffers from spoofing and limited accuracy. The present work proposes the fusion of features from face and fingerprint recognition systems into an Improved Biometric Fusion System (IBFS), leading to improved performance. By integrating multiple biometric traits, recognition performance is improved and fraudulent access is reduced. The paper introduces an IBFS comprising two authentication systems: an Improved Fingerprint Recognition System (IFPRS) and an Improved Face Recognition System (IFRS). The whale optimization algorithm is used with minutiae features for IFPRS and Maximally Stable Extremal Regions (MSER) features for IFRS. To train the designed IBFS, a pattern net model is used as the classification algorithm. Pattern net works on the processed dataset along with an SVM to train the IBFS model and achieve better classification accuracy. It is observed that the proposed fusion system exhibited an average true positive rate and accuracy of 99.8% and 99.6%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_76-An_Improved_Biometric_Fusion_System_of_Fingerprint.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cuckoo-Neural Approach for Secure Execution and Energy Management in Mobile Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120175</link>
        <id>10.14569/IJACSA.2021.0120175</id>
        <doi>10.14569/IJACSA.2021.0120175</doi>
        <lastModDate>2021-01-30T14:38:15.2630000+00:00</lastModDate>
        
        <creator>Vishal </creator>
        
        <creator>Bikrampal Kaur</creator>
        
        <creator>Surender Jangra</creator>
        
        <subject>Mobile cloud computing; VM migration; energy consumption; SLA violation; VM selection; overloading; underloading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Along with the explosive growth in mobile applications and the emergence of cloud computing, mobile cloud computing (MCC) has been introduced as a potential technology for mobile users. Employing MCC to let mobile users realize the benefits of cloud computing in an environmentally friendly way is an effective strategy to meet today’s industrial demands. With the ever-increasing demand for MCC technology, energy efficiency has become extremely relevant in mobile cloud computing infrastructure. MCC offers low cost and high availability to mobile cloud users on a pay-per-use basis. However, challenges such as resource management and energy consumption are still faced by mobile cloud providers. If the allocation of resources is not managed in a secure manner, false allocation will lead to higher energy consumption. This article demonstrates the importance of energy-saving mechanisms in cloud data centers and elaborates on the role of energy efficiency in promoting the adoption of these mechanisms in practical scenarios. Resource utilization is maximized by minimizing energy consumption. To achieve this, an integrated approach using Cuckoo Search (CS) with an Artificial Neural Network (ANN) is presented here. Initially, the Virtual Machines (VMs) are sorted according to their CPU utilization using the Modified Best Fit Decreasing (MBFD) approach, which suffers from an increase in Service Level Agreement (SLA) violations along with many VM migrations. If a migration is not made to an appropriate host that can hold the VM for long, the Service Level Agreement Violation (SLAV) rate will be high.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_75-Cuckoo_Neural_Approach_for_Secure_Execution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inventory Management Analysis under the System Dynamics Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120174</link>
        <id>10.14569/IJACSA.2021.0120174</id>
        <doi>10.14569/IJACSA.2021.0120174</doi>
        <lastModDate>2021-01-30T14:38:15.2330000+00:00</lastModDate>
        
        <creator>Shal&#180;om Adonai Huaraz Morales</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Causal diagram; system dynamics; Forrester diagram; inventory management; Vensim</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In this work, system dynamics modeling of inventory management has been carried out in order to achieve a correct analysis of that management and thus make decisions that benefit the company. The problem lies in the mismanagement of inventories, which are handled by companies with varying levels of knowledge about inventory management; for this reason, system dynamics modeling is used so that a correct, dynamics-focused analysis of the management can be achieved. The result obtained from applying the methodology in this work was a correct and adequate system dynamics analysis of inventory management, achieved using the Vensim simulation software and a methodology based on three stages: the causal diagram, the Forrester diagram, and the mathematical equations.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_74-Inventory_Management_Analysis_under_the_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building a Personalized Fitness Recommendation Application based on Sequential Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120173</link>
        <id>10.14569/IJACSA.2021.0120173</id>
        <doi>10.14569/IJACSA.2021.0120173</doi>
        <lastModDate>2021-01-30T14:38:15.2170000+00:00</lastModDate>
        
        <creator>Manal Abdulaziz</creator>
        
        <creator>Bodor Al-motairy</creator>
        
        <creator>Mona Al-ghamdi</creator>
        
        <creator>Norah Al-qahtani</creator>
        
        <subject>Big data; big data processing; recommendation system; sport analysis; K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Nowadays, sport plays a very important role in human life, keeping people healthy and active. Sport is essential for a healthy mind. However, practicing a sport can have negative effects on the body and on human health if it is practiced incorrectly or is not adapted to the person’s body or health. For this reason, in this paper we propose a recommendation system that matches each person with the right sport according to several factors such as heart rate, speed, and size. The implementation was applied to the FitRec dataset with the help of the Spark tool, and the results show that the proposed method is capable of generating appropriate training for different groups according to their information, where each group receives suitable training. The grouping of the data was done with the k-means method.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_73-Building_a_Personalized_Fitness_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Topic based Sentiment Analysis for COVID-19 Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120172</link>
        <id>10.14569/IJACSA.2021.0120172</id>
        <doi>10.14569/IJACSA.2021.0120172</doi>
        <lastModDate>2021-01-30T14:38:15.2030000+00:00</lastModDate>
        
        <creator>Manal Abdulaziz</creator>
        
        <creator>Alanoud Alotaibi</creator>
        
        <creator>Mashail Alsolamy</creator>
        
        <creator>Abeer Alabbas</creator>
        
        <subject>Social media analysis; COVID-19; topics extraction; sentiment analysis; LDA; spark; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The incessant Coronavirus pandemic has had a detrimental impact on nations across the globe. The essence of this research is to demystify social media sentiments regarding Coronavirus. The paper specifically focuses on Twitter and extracts the most discussed topics during and after the first wave of the Coronavirus pandemic. The extraction was based on a dataset of English tweets pertinent to COVID-19. The study focuses on two main periods: the first from March 01, 2020 to April 30, 2020, and the second from September 01, 2020 to October 31, 2020. Latent Dirichlet Allocation (LDA) was adopted for topic extraction, whereas a lexicon-based approach was adopted for sentiment analysis. Regarding implementation, the paper utilized the Spark platform with Python to enhance the speed and efficiency of analyzing and processing large-scale social data. The research findings revealed the appearance of conflicting topics throughout the two Coronavirus pandemic periods. Besides, the expectations and interests of individuals regarding the various topics were well represented.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_72-Topic_based_Sentiment_Analysis_for_COVID_19_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Post-COVID-19 Employability in Peru through a Dynamic Model, Between 2020 and 2025</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120171</link>
        <id>10.14569/IJACSA.2021.0120171</id>
        <doi>10.14569/IJACSA.2021.0120171</doi>
        <lastModDate>2021-01-30T14:38:15.1870000+00:00</lastModDate>
        
        <creator>Richard Ronny Arias Marreros</creator>
        
        <creator>Keyla Vanessa Nalvarte Dionisio</creator>
        
        <creator>Luis Alberto Romero Tuanama</creator>
        
        <creator>Juber Alfonso Quiroz Gutarra</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Employability; Forrester diagram; population; system dynamics; Vensim</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The research work focuses on the sector of the population that will have a job, taking into account that the pandemic caused many people to lose their jobs due to the economic crisis that affected all countries: in the first half of 2020, the equivalent of 400 million full-time jobs were lost and working hours fell by 14% worldwide, while in Lima 1.2 million people were left without work. For this reason, a dynamic analysis was developed for the projection of post-COVID-19 employability in Peru from 2020 to 2025 to obtain approximate knowledge of the population’s labor outlook. System dynamics was implemented as the methodology, given that any model built through its application is based on the opinion of those involved in the system to be represented. In this work, system dynamics is presented as a very useful methodology for the analysis of complex problems, developing the causal and Forrester diagrams with the help of the Vensim software. As a result, the approximate number of jobs that will be available was visualized, and it was observed that the future of employability will be at risk, which is why a good government strategy is necessary to prevent this, since people need to satisfy their professional, economic, and development needs.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_71-Study_of_Post_COVID_19_Employability_in_Peru.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Genetic Algorithm for a New Variant of the Gas Cylinders Open Split Delivery and Pickup with Two-dimensional Loading Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120170</link>
        <id>10.14569/IJACSA.2021.0120170</id>
        <doi>10.14569/IJACSA.2021.0120170</doi>
        <lastModDate>2021-01-30T14:38:15.1700000+00:00</lastModDate>
        
        <creator>Anouar Annouch</creator>
        
        <creator>Adil Bellabdaoui</creator>
        
        <subject>Vehicle routing problem; split delivery and pickup; multi-depot; two-dimensional loading; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This paper studies a combination of two well-known problems in distribution logistics: the truck loading problem and the vehicle routing problem. In our context, a customer’s daily demand exceeds the truck capacity; as a result, the demand has to be split across several routes. In addition, customers must be assigned to depots, which means that each customer is visited just once by any truck in the fleet. Moreover, we take customer time windows into consideration. The studied problem can be defined as a multi-depot open split delivery and pickup vehicle routing problem with two-dimensional loading constraints and time windows (2L-MD-OSPDTW). A mathematical formulation of the problem is proposed as a mixed-integer linear programming (MILP) model. Then, a set of four instance classes is used in a way that reflects a real-life case study. Furthermore, a genetic algorithm is proposed to solve large-scale datasets. Finally, preliminary results are reported and show that the MILP performs very well on small test instances, while the genetic algorithm can be used efficiently to solve the problem on a wide range of test instances.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_70-An_Adaptive_Genetic_Algorithm_for_a_New_Variant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fake Reviews Detection using Supervised Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120169</link>
        <id>10.14569/IJACSA.2021.0120169</id>
        <doi>10.14569/IJACSA.2021.0120169</doi>
        <lastModDate>2021-01-30T14:38:15.1400000+00:00</lastModDate>
        
        <creator>Ahmed M. Elmogy</creator>
        
        <creator>Usman Tariq</creator>
        
        <creator>Ammar Mohammed</creator>
        
        <creator>Atef Ibrahim</creator>
        
        <subject>Fake reviews detection; data mining; supervised machine learning; feature engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>With the continuous evolution of e-commerce systems, online reviews are considered a crucial factor for building and maintaining a good reputation. Moreover, they play an effective role in the decision-making process of end users. Usually, a positive review of a target object attracts more customers and leads to a high increase in sales. Nowadays, deceptive or fake reviews are deliberately written to build a virtual reputation and attract potential customers. Thus, identifying fake reviews is a vivid and ongoing research area. Identifying fake reviews depends not only on the key features of the reviews but also on the behaviors of the reviewers. This paper proposes a machine learning approach to identify fake reviews. In addition to the feature extraction process applied to the reviews, this paper applies several feature engineering techniques to extract various behaviors of the reviewers. The paper compares the performance of several experiments conducted on a real Yelp dataset of restaurant reviews, with and without the features extracted from user behaviors. In both cases, we compare the performance of several classifiers: KNN, Naive Bayes (NB), SVM, Logistic Regression, and Random Forest. Also, different n-gram language models, in particular bi-gram and tri-gram, are taken into consideration during the evaluations. The results reveal that KNN (K=7) outperforms the rest of the classifiers in terms of f-score, achieving a best f-score of 82.40%. The results show that the f-score increased by 3.80% when taking the extracted reviewer behavioral features into consideration.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_69-Fake_Reviews_Detection_using_Supervised_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-based Crowdsourced Task Assessment Framework using Smart Contract</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120168</link>
        <id>10.14569/IJACSA.2021.0120168</id>
        <doi>10.14569/IJACSA.2021.0120168</doi>
        <lastModDate>2021-01-30T14:38:15.1230000+00:00</lastModDate>
        
        <creator>Linta Islam</creator>
        
        <creator>Syada Tasmia Alvi</creator>
        
        <creator>Mafizur Rahman</creator>
        
        <creator>Ayesha Aziz Prova</creator>
        
        <creator>Md. Nazmul Hossain</creator>
        
        <creator>Jannatul Ferdous Sorna</creator>
        
        <creator>Mohammed Nasir Uddin</creator>
        
        <subject>Blockchain; crowdsourcing; task allocation; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In today’s world, crowdsourcing is a rapidly rising paradigm in which masses of people are engaged in solving a problem. Though this system has many advantages, people are still reluctant to work on this platform. Thus, we surveyed people to find out the constraints of this platform and the main reasons behind their unwillingness: 59% of respondents think that security and privacy are the major challenges of a crowdsourcing platform. Therefore, we propose a blockchain-based crowdsourced system which can provide security and privacy for users’ information. We also use a smart contract to verify tasks so that users get exactly the output they wanted. We implemented our system and compared its performance with existing systems. Our proposed approach outperforms current methods in terms of cost and properties.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_68-A_Blockchain_based_Crowdsourced_Task_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Study of Duplicate Bug Report Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120167</link>
        <id>10.14569/IJACSA.2021.0120167</id>
        <doi>10.14569/IJACSA.2021.0120167</doi>
        <lastModDate>2021-01-30T14:38:15.0770000+00:00</lastModDate>
        
        <creator>Som Gupta</creator>
        
        <creator>Sanjai Kumar Gupta</creator>
        
        <subject>AUSUM; feature-based; deep learning; semantic; unsupervised</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Defects are an integral part of any software project. They can arise at any time, in any phase of software development or maintenance. In open source projects, open bug repositories are used to maintain the bug reports. When a new bug report arrives, a person called a “Triager” analyzes it and assigns it to a responsible developer; but before assigning, the triager has to check whether it is a duplicate or not. Duplicate bug reports are one of the big problems in the maintenance of bug repositories. Reporters’ lack of knowledge and vocabulary skills sometimes increases the effort required for this purpose. Bug tracking systems are usually used to maintain the bug reports and are the most consulted resource during the maintenance process. Because of the uncoordinated nature of bug report submission to the tracking system, the same bug is often reported by many users. Duplicate bug reports lead to wasted resources and economic loss; they create problems for triagers and require a lot of analysis and validation. A lot of work has been done in the field of duplicate bug report detection. In this paper, we systematically present the research done in this field by classifying the works into three categories and listing the methods used in the classified research. The paper considers publications up to January 2020 for the analysis. It mentions the strengths, limitations, datasets, and major approaches used by the popular papers in this field, and also lists the challenges and future directions in this field of research.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_67-A_Systematic_Study_of_Duplicate_Bug_Report.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Application Design with IoT for Environmental Pollution Awareness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120165</link>
        <id>10.14569/IJACSA.2021.0120165</id>
        <doi>10.14569/IJACSA.2021.0120165</doi>
        <lastModDate>2021-01-30T14:38:14.9370000+00:00</lastModDate>
        
        <creator>Anthony Ramos-Romero</creator>
        
        <creator>Brighitt Garcia-Yataco</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Environmental pollution; internet of things; scrum; mobile app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In the present study, having seen that many people are affected by environmental pollution, we propose a mobile application prototype to raise awareness of this environmental problem. The methodology used to build this application is Scrum, since it adapts to the constant changes of the mobile application development process. We also use Internet of Things and Firebase-based technology to collect data from the various air pollution sensors, since users require a real-time visualization of the contamination. The mobile application, to which users will register automatically, will provide easy access for monitoring and controlling atmospheric pollution, with data received through the sensors. The result obtained is that, with the implementation of the application, people become aware of the damage that these pollutants cause to the environment.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_65-Mobile_Application_Design_with_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Imbalanced Learning and Deep Neural Network Model for Insider Threat Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120166</link>
        <id>10.14569/IJACSA.2021.0120166</id>
        <doi>10.14569/IJACSA.2021.0120166</doi>
        <lastModDate>2021-01-30T14:38:14.9370000+00:00</lastModDate>
        
        <creator>Mohammed Nasser Al-Mhiqani</creator>
        
        <creator>Rabiah Ahmed</creator>
        
        <creator>Z Zainal Abidin</creator>
        
        <creator>S.N Isnin</creator>
        
        <subject>Security; insider threat; insider threats detection; machine learning; deep learning; imbalanced data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The insider threat is a vital security concern in both the private and public sectors. Many approaches are available for detecting and mitigating insider threats. However, implementing an effective system for insider threat detection is still a challenging task. In previous work, Machine Learning (ML) techniques were proposed in the insider threat detection domain, since they offer a promising path to a better detection mechanism. Nonetheless, ML techniques can be biased and less accurate when the dataset used is hugely imbalanced. Therefore, this article presents an integrated insider threat detection model named AD-DNN, which combines the adaptive synthetic sampling approach (ADASYN) and a deep neural network (DNN). In the proposed AD-DNN model, ADASYN is used to solve the imbalanced data issue and the DNN is used for insider threat detection. The proposed model uses the CERT dataset for the evaluation process. The experimental results show that the proposed integrated model improves the overall detection performance of insider threats, and its significant impact on accuracy makes it a better solution compared with current insider threat detection systems.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_66-An_Integrated_Imbalanced_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Deep and Traditional Learning Methods for Email Spam Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120164</link>
        <id>10.14569/IJACSA.2021.0120164</id>
        <doi>10.14569/IJACSA.2021.0120164</doi>
        <lastModDate>2021-01-30T14:38:14.9070000+00:00</lastModDate>
        
        <creator>Abdullah Sheneamer</creator>
        
        <subject>Spam filtering; machine learning; deep learning; LSTM; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Electronic mail, or email, is a method of communicating over the internet which is inexpensive, effective, and fast. Spam is a type of email in which unwanted messages, usually unwanted commercial messages, are distributed in large quantities by a spammer. The objective of such behavior is to harm email users; these messages need to be detected and prevented from reaching users in the first place. In order to filter these emails, developers have used machine learning methods. This paper discusses deep learning methods such as Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, with and without a GloVe model, to classify spam and non-spam messages. These models are based only on email data, and the feature extraction is automatic. In addition, our work provides a comparison between traditional machine learning and deep learning algorithms on spam datasets to find the best-performing approach to spam detection. The results indicate that deep learning offers improved precision, recall, and accuracy. As far as we are aware, deep learning methods show great promise in filtering email spam, so we performed a comparison of various deep learning methods with traditional machine learning methods. Using a benchmark dataset consisting of 5,243 spam and 16,872 non-spam email and SMS messages, the highest achieved accuracy score is 96.52%, using CNN with the GloVe model.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_64-Comparison_of_Deep_and_Traditional_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling Health Process and System Requirements Engineering for Better e-Health Services in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120163</link>
        <id>10.14569/IJACSA.2021.0120163</id>
        <doi>10.14569/IJACSA.2021.0120163</doi>
        <lastModDate>2021-01-30T14:38:14.8900000+00:00</lastModDate>
        
        <creator>Fuhid Alanazi</creator>
        
        <creator>Valerie Gay</creator>
        
        <creator>Mohammad N. Alanazi</creator>
        
        <creator>Ryan Alturki</creator>
        
        <subject>e-health; e-health systems; e-healthy modelling and e-health modelling system requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This systematic review aimed to examine published works on e-health modelling system requirements and suggest an approach applicable to Saudi Arabia. The PRISMA method was adopted to search, screen and select the papers included in this review. Google Scholar was used as the search engine to collect relevant works. From an initial 74 works, 20 were selected after all screening procedures as per the PRISMA flow diagram. The 20 selected works are discussed under various sections. The review revealed that goal setting is the first step: using the goals, a model can be created, from which system requirements can be elicited. Different studies used different approaches within this broad framework and applied the procedures to varying healthcare contexts. Based on the findings, an attempt has been made to set the goals and elicit the system requirements for a diabetes self-management model covering the entire country in the Saudi Arabian context. This is a preliminary model which needs to be tested, improved and then implemented.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_63-Modelling_Health_Process_and_System_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Real Time Distributed Flood Early Warning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120162</link>
        <id>10.14569/IJACSA.2021.0120162</id>
        <doi>10.14569/IJACSA.2021.0120162</doi>
        <lastModDate>2021-01-30T14:38:14.8600000+00:00</lastModDate>
        
        <creator>EL MABROUK Marouane</creator>
        
        <subject>Flood; forecasting; distributed decision support system; multi-agent system; anytime algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Since the beginning of humanity, floods have caused great damage and killed many people, and they still cause substantial losses in many countries every year. When facing such a disaster, effective flood management decisions must be made using real-time data, which must be analyzed and, more importantly, controlled. In this paper, we present a distributed decision support system that can be deployed to support flood management decision makers. Our system is based on Multi-Agent Systems and an Anytime Algorithm, and it has two modes of processing: a Pre-Processing mode to test and control the information sent by sensors in real time, and the Main Processing mode, which has three parts. The first part is the Trigger Mode, which monitors rainfall and triggers the second part, the offline mode, which predicts the flood based on historical data without going through the real-time decision support system. Finally, the online mode predicts the flood based on real-time data and on communication among different modules: hydrodynamic data, a Geographic Information System (GIS), decision support, and a remote sensing module that determines information about the flood.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_62-Towards_a_Real_Time_Distributed_Flood.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a System to Manage Letters of Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120160</link>
        <id>10.14569/IJACSA.2021.0120160</id>
        <doi>10.14569/IJACSA.2021.0120160</doi>
        <lastModDate>2021-01-30T14:38:14.8430000+00:00</lastModDate>
        
        <creator>Reham Alabduljabbar</creator>
        
        <subject>Letter of recommendation; mobile application; letter of reference; android mobile application; LoR; standardized letters of recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>A Letter of Recommendation (LoR) is a letter that describes the qualifications, skills, and abilities of the person being recommended. Students need to request letters of recommendation from their instructors when applying for jobs, internships and academic studies. The main obstacle many students face is that instructors do not respond quickly, especially once the students have graduated. On the other hand, instructors find it difficult to manage the many requests they receive, especially at the end of the semester. The work in this paper presents the design, development and testing of a system that aims to replace the traditional method of requesting and issuing LoRs with a more systematic, standardized one. The developed system may be adopted to simplify, unify, and improve the process. The results showed that the developed system enhanced communication between requesters and issuers, reduced the time and effort required compared with the traditional way, and achieved the usability requirements.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_60-Development_of_A_System_to_Manage_Letters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Temporal Modeling Applied to Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120161</link>
        <id>10.14569/IJACSA.2021.0120161</id>
        <doi>10.14569/IJACSA.2021.0120161</doi>
        <lastModDate>2021-01-30T14:38:14.8430000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>Conceptual modeling; temporal database; static model; events model; behavioral model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>We present a different approach to developing a concept of time for specifying temporality in the conceptual modeling of software and database systems. In the database field, various proposals and products address temporal data. The difficulty with most of the current approaches to modeling temporality is that they represent and record time as just another type of data (e.g., values of a bank balance or amounts of money), instead of appreciating that time and its values are unique, in comparison to typical data attributes. Time is an engulfing phenomenon that lifts a system’s entire model from staticity to dynamism and beyond. In this paper, we propose a conceptualization of temporality involving the construction of a multilevel modeling method that progresses from static representation to system compositions that form regions of dynamism. Then, a chronology of events is used to define the system’s behavior. Lastly, the events are viewed as data sources with which to build a temporal model. A case-study model of a temporal banking-management system database that extends UML and the object-constraint language is re-modeled using thinging machine (TM) modeling. The resultant TM diagrammatic specification delivers a new approach to temporality that can be extended to be a holistic monitoring system for historic data and events.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_61-Conceptual_Temporal_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Discretization Approach of Bat and K-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120159</link>
        <id>10.14569/IJACSA.2021.0120159</id>
        <doi>10.14569/IJACSA.2021.0120159</doi>
        <lastModDate>2021-01-30T14:38:14.8100000+00:00</lastModDate>
        
        <creator>Rozlini Mohamed</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <subject>Classification; discretization; feature selection; optimization algorithm; bat algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The Bat algorithm is an optimization technique that mimics the behavior of bats and is powerful at finding the optimum subset of feature data. Classification is a data mining task that is useful in knowledge representation, but high-dimensional data is an issue in classification that degrades accuracy. From the literature, feature selection and discretization are able to overcome this problem. Therefore, this study aims to show that the Bat algorithm has potential both as a discretization approach and as a feature selection method to improve classification accuracy. In this paper, a new hybrid Bat-K-Means algorithm, referred to as hBA, is proposed to convert continuous data into discrete data, producing an optimized discrete dataset. The Bat algorithm is then used for feature selection, selecting the optimum features from the optimized discrete dataset in order to reduce the dimensionality of the data. The experiments are conducted using k-Nearest Neighbor to evaluate the effectiveness of discretization and feature selection in classification, comparing against a continuous dataset without feature selection, a discrete dataset without feature selection, and a continuous dataset without discretization or feature selection, and to show the Bat algorithm&#39;s potential as a discretization approach and feature selection method. The experiments were carried out using a number of benchmark datasets from the UCI machine learning repository. The results show that classification accuracy is improved with Bat-K-Means optimized discretization and Bat optimized feature selection.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_59-A_New_Discretization_Approach_of_Bat_and_K_means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Olive Oil Ripping Time Prediction Model based on Image Processing and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120158</link>
        <id>10.14569/IJACSA.2021.0120158</id>
        <doi>10.14569/IJACSA.2021.0120158</doi>
        <lastModDate>2021-01-30T14:38:14.7970000+00:00</lastModDate>
        
        <creator>Mutasem Shabeb Alkhasawneh</creator>
        
        <subject>Neural network; image processing; olive ripping time; prediction; classifications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The agriculture sector in Jordan depends heavily on olive trees; more than ten million olive trees are planted in Jordanian soil. Olive fruit is harvested for two purposes: to produce oil or to produce table olives (pickled olives). The harvesting time for extracting oil from the olive fruit is crucial: harvesting the fruit at the right ripening time gives the best amount and quality of oil, while mistimed harvesting can lose 15% to 20% of its value. Olive ripening time varies, since it depends on rainfall, temperature and cultivation. A system to predict the optimal time for harvesting olive fruit for oil production is introduced. It is based on Digital Image Processing (DIP) and an artificial neural network. Four features were extracted from the olive fruit image based on the red, green and blue colors. The proposed system tested olive fruits in three ripening stages: under-ripe, ripe and over-ripe. The classification accuracy achieved was 97.51% in the under-ripe stage, 95.10% in the ripe stage, and 96.12% in the over-ripe stage. The overall system performance was 96.14%.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_58-Olive_Oil_Ripping_Time_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Machine Learning Models for First-break Arrival Picking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120157</link>
        <id>10.14569/IJACSA.2021.0120157</id>
        <doi>10.14569/IJACSA.2021.0120157</doi>
        <lastModDate>2021-01-30T14:38:14.7630000+00:00</lastModDate>
        
        <creator>Mohammed Ayub</creator>
        
        <creator>SanLinn I. Kaka</creator>
        
        <subject>First-break arrival picking; seismology; neural networks; machine learning; feature ranking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>First-break (FB) picking is an important and necessary step in seismic data processing, and there is a need to develop precise and accurate auto-picking solutions. Our investigation in this study includes eight machine learning models. We use 1195 raw traces to extract several features, train for accurate picking, and monitor the performance of each model using well-defined evaluation metrics. Careful investigation of the scores shows that a single metric alone is not sufficient to evaluate arrival-picking models in real time. Correlation analysis of the predicted probabilities and predicted classes of the machine learning models confirms that performance metrics based on predicted probabilities score higher than those based on predicted classes. Our study, which compares different machine learning models on performance metrics, training time, and feature importance, indicates that the approach developed here is helpful and provides an opportunity to determine the real-time suitability of different methodologies for automatic FB arrival picking. Based on the performance scores, we benchmarked the Extra Trees classifier as the most efficient model for FB arrival picking, with accuracy and F1-score above 95%.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_57-A_Comparative_Analysis_of_Machine_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Water Quality in the Lower Huallaga River Watershed using the Grey Clustering Analysis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120156</link>
        <id>10.14569/IJACSA.2021.0120156</id>
        <doi>10.14569/IJACSA.2021.0120156</doi>
        <lastModDate>2021-01-30T14:38:14.7500000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Diego Cuadra</creator>
        
        <creator>Karen Simon</creator>
        
        <creator>Katya Bonilla</creator>
        
        <creator>Katherine Tineo</creator>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <subject>Water quality; prati index; grey clustering method; protection and sustainable conservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Currently, the evaluation of water quality is a topic of global interest due to its socio-cultural, environmental and economic importance, but in recent years water quality has deteriorated due to inadequate management of its conservation, disposal and use by the competent authorities, private and state entities, and the population itself. An alternative for determining the quality of a water body in an integrated manner is the Grey Clustering Method, which was used in this study with the Prati Quality Index as an indicator, with the objective of providing an objective analysis of the water bodies under study. The case study is the Lower Watershed of the Huallaga River, located between the regions of Loreto and San Martin, along which 12 monitoring stations were established to evaluate surface water quality through the analysis of 7 parameters: pH, BOD, COD, Total Suspended Solids (TSS), ammonia nitrogen, substrates and nitrates. It was determined that the water quality at eleven monitoring stations in the Lower Huallaga River Watershed falls within the &quot;Uncontaminated&quot; category of the Prati Index, while one monitoring station falls within the &quot;Highly Contaminated&quot; category due to its proximity to a landfill. The results obtained in this study could be useful to the authorities responsible for the protection and sustainable conservation of the Huallaga River Watershed in proposing appropriate measures to improve its quality. Additionally, this study could serve as a reference for future studies, since the proposed method made it possible to prioritize the quality level of the water bodies and identify critical areas.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_56-Evaluation_of_Water_Quality_in_the_Lower_Huallaga_River.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Data Replication Technique with Fault Tolerance Approach using BVAG with Checkpoint and Rollback-Recovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120155</link>
        <id>10.14569/IJACSA.2021.0120155</id>
        <doi>10.14569/IJACSA.2021.0120155</doi>
        <lastModDate>2021-01-30T14:38:14.7500000+00:00</lastModDate>
        
        <creator>Sharifah Hafizah Sy Ahmad Ubaidillah</creator>
        
        <creator>A. Noraziah</creator>
        
        <creator>Basem Alkazemi</creator>
        
        <subject>Data replication; computational intelligence; fault tolerance; binary vote assignment on grid; checkpoint and rollback-recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Data replication has been one of the pathways for distributed database management, as well as a computational intelligence scheme, as it continues to improve data access and reliability. The performance of a data replication technique can be crucial when failure interruptions are involved. To develop a more efficient data replication technique that can cope with failure, a fault tolerance approach needs to be applied to the data replication transaction. Fault tolerance is a core issue for transaction management, as it preserves the operation of transactions prone to failure. In this study, a data replication technique known as Binary Vote Assignment on Grid (BVAG) is combined with a fault tolerance approach named Checkpoint and Rollback-Recovery (CR) to evaluate the effectiveness of applying fault tolerance in a data replication transaction. The Binary Vote Assignment on Grid with Checkpoint and Rollback-Recovery Transaction Manager (BVAGCRTM) is used to run the proposed BVAGCR method. The performance of the proposed BVAGCR is compared to standard BVAG in terms of total execution time for a single data replication transaction. The experimental results reveal that BVAGCR improves the BVAG total execution time in a failure environment by about 31.65% using the CR fault tolerance approach. Besides improving the total execution time of BVAG, BVAGCR also reduces the time taken to execute the most critical phase in BVAGCRTM, the Update (U) phase, by 98.82%. Therefore, based on the benefits gained, BVAGCR is recommended as a new and efficient technique for obtaining reliable data replication performance under failure conditions in distributed databases.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_55-An_Efficient_Data_Replication_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Frequencies Usage by Full and Incomplete Ring Elements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120154</link>
        <id>10.14569/IJACSA.2021.0120154</id>
        <doi>10.14569/IJACSA.2021.0120154</doi>
        <lastModDate>2021-01-30T14:38:14.7170000+00:00</lastModDate>
        
        <creator>Kamarulzaman Mat</creator>
        
        <creator>Norbahiah Misran</creator>
        
        <creator>Mohammad Tariqul Islam</creator>
        
        <creator>Mohd Fais Mansor</creator>
        
        <subject>Incomplete ring; full ring; dual frequencies; polarization-dependent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Full and incomplete ring patch antenna elements can be mixed to generate a variety of responses with respect to the direction of polarization. The dual-frequency requirement of numerous applications, such as tracking radar, can be met by combining the patches. To obtain the desired operating frequency, a straightforward ring patch design with an added gap has been utilized. Various gap sizes and gap positions within the ring were examined in the study. To investigate the element behavior, the surface current distribution, return loss, and reflection phase were monitored. The evaluation revealed that a ring element with a smaller width offers a sharp reflection-phase gradient that lowers its bandwidth performance. Meanwhile, element performance was not affected by placing the gap at the upper or lower part of the ring. However, a 0.2 mm gap placed at the left or right position of the ring shifted the resonant frequency up from 8.1 GHz to 11.8 GHz. This study showed that the mixture of full and incomplete ring elements has the potential to be utilized as an antenna for realizing monopulse operation.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_54-Dual_Frequencies_usage_by_Full_and_Incomplete_Ring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain in Insurance: Exploratory Analysis of Prospects and Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120153</link>
        <id>10.14569/IJACSA.2021.0120153</id>
        <doi>10.14569/IJACSA.2021.0120153</doi>
        <lastModDate>2021-01-30T14:38:14.7030000+00:00</lastModDate>
        
        <creator>Anokye Acheampong AMPONSAH</creator>
        
        <creator>Adebayo Felix ADEKOYA</creator>
        
        <creator>Benjamin Asubam WEYORI</creator>
        
        <subject>Blockchain technology; insurance industry; hyperledger; ethereum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Ever since the first generation of blockchain technology became very successful and FinTech benefited enormously from it with the advent of cryptocurrency, the second and third generations, championed by Ethereum and Hyperledger, have explored the extension of blockchain into other domains such as IoT, supply chain management, healthcare, business, privacy, and data management. A field as large as the insurance industry has been underrepresented in the literature. Therefore, this paper presents how investments in blockchain technology can benefit the insurance industry. We discuss the basics of blockchain technology and popular platforms in use today, and provide a simple theoretical explanation of the insurance sub-processes that blockchain can positively transform. We also discuss the hurdles to be crossed to fully implement blockchain solutions in the insurance domain.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_53-Blockchain_in_Insurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototype of Web System for Organizations Dedicated to e-Commerce under the SCRUM Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120152</link>
        <id>10.14569/IJACSA.2021.0120152</id>
        <doi>10.14569/IJACSA.2021.0120152</doi>
        <lastModDate>2021-01-30T14:38:14.6700000+00:00</lastModDate>
        
        <creator>Ventocilla Gomero-Fanny</creator>
        
        <creator>Aguila Ruiz Bengy</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Agile; e-commerce; scrum; sales; user stories; sprint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This research work is based on building a prototype e-commerce system applying the SCRUM methodology, because the systems of many organizations are still developed following traditional methodologies; such systems often fail to meet the requirements that should provide value to the organization and have poor information security, leaving the organization&#39;s data vulnerable to loss or theft. That is why this article aims to design a web system for organizations dedicated to e-commerce under the agile SCRUM methodology. The methodology made it possible to design a prototype that meets the needs of the organization through frequent retrospectives and continuous communication between stakeholders, in addition to addressing the information security of the company&#39;s data. In the results of this research, the user stories were analyzed and divided into four Sprint deliverables covering the general modules proposed in the article, with a maximum of 21 story points per Sprint: the first Sprint was assigned 16 story points, the second 20, the third 21, and the fourth 16. The proposed e-commerce system will benefit the organization and its customers, since it will give them a system that matches their requirements and needs in a secure manner.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_52-Prototype_of_Web_System_for_Organizations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HoloLearn: An Interactive Educational System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120151</link>
        <id>10.14569/IJACSA.2021.0120151</id>
        <doi>10.14569/IJACSA.2021.0120151</doi>
        <lastModDate>2021-01-30T14:38:14.6100000+00:00</lastModDate>
        
        <creator>Shoroog Alghamdi</creator>
        
        <creator>Samar Aloufi</creator>
        
        <creator>Layan Alsalea</creator>
        
        <creator>Fatma Bouabdullah</creator>
        
        <subject>Interactive educational system; hologram technology; user interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The HoloLearn project is a sophisticated interactive educational system that attempts to simplify the educational process in the field of medicine through the use of hologram technology. Hologram technology is used in conjunction with user interaction to take the whole educational process to a completely new level, providing students with a different learning experience. The system is dedicated primarily to medical students, as they must study the diverse, complicated structures of human body anatomy and its internal organs. HoloLearn aims to replace traditional educational techniques with one that involves user interaction with real-sized 3D objects. Based on interviews conducted with medical students from different universities and educational levels in Saudi Arabia, and on the questionnaire results, it was found that traditional learning techniques are insufficient: they lack the quality and most of the criteria that would qualify them as highly effective, reliable learning materials. Therefore, there is an increasing need for new learning strategies capable of giving students the chance to perceive every concept they study; rather than depending on their imagination to picture what a human body looks like from inside, they need visual learning methods. From another perspective, teachers also face difficulties when explaining medical concepts, especially those related to human body structure and behavior. The currently available materials and sources are mostly theoretical; they promote indoctrination and a result-driven approach instead of engaging teacher and students in a process of sharing knowledge and ideas. In fact, students have to listen and read instead of practicing and exploring; consequently, students are repeatedly prone to loss of concentration and mental distraction during lessons, while also suffering from long study hours and difficulties in retrieving information. The results of this project indicate that when hologram technology is combined with user interaction, the educational process can be highly improved and made much more creative and entertaining.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_51-HoloLearn_An_Interactive_Educational_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>B-droid: A Static Taint Analysis Framework for Android Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120150</link>
        <id>10.14569/IJACSA.2021.0120150</id>
        <doi>10.14569/IJACSA.2021.0120150</doi>
        <lastModDate>2021-01-30T14:38:14.5770000+00:00</lastModDate>
        
        <creator>Rehab Almotairy</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Static analysis; taint analysis; fuzz testing; android applications; mobile malwares; data flow analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Android is currently the most popular smartphone operating system, with its success attributed to the large number of applications available from the Google Play Store. However, these applications raise issues relating to the storage of users’ sensitive data, including contacts, location, and the phone’s unique identifier (IMEI). Using these applications therefore risks exfiltration of this data, including unauthorized tracking of users’ behavior and violation of their privacy. Sensitive data leaks are currently detected with taint analysis approaches. This paper addresses these issues by proposing a new static taint analysis framework specifically for Android platforms, termed “B-Droid”. B-Droid combines static taint analysis using a large set of sources and sinks with the concept of fuzz testing in order to detect privacy leaks, whether malicious or unintentional, by analyzing the behavior of Applications Under Test (AUTs). This has the potential to offer improved precision in comparison to earlier approaches. To ensure the quality of our analysis, we undertook an evaluation testing a variety of Android applications installed on a mobile device, filtered according to the relevant permissions. We found that B-Droid efficiently detected five of the most prevalent commercial spyware applications on the market and issued an immediate warning to the user, so that they can decide not to continue with the AUTs. This paper provides a detailed analysis of this method, along with its implementation and results.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_50-B_droid_A_Static_Taint_Analysis_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Methodologies for the Development of Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120149</link>
        <id>10.14569/IJACSA.2021.0120149</id>
        <doi>10.14569/IJACSA.2021.0120149</doi>
        <lastModDate>2021-01-30T14:38:14.5600000+00:00</lastModDate>
        
        <creator>Kristina Blaškovic</creator>
        
        <creator>Sanja Candrlic</creator>
        
        <creator>Alen Jakupovic</creator>
        
        <subject>Embedded systems; development; methodology; multilayer conceptual network; cluster analysis; k-means algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Embedded systems encompass software and hardware components developed in parallel. These systems have been the focus of interest for many scholars who emphasized development issues related to embedded systems and proposed different approaches for facilitating the development process. The aim of this work is to identify desirable characteristics of existing development methodologies, which provide a good foundation for the development of new methodologies. For that purpose, the systematic mapping methodology was applied to the area of embedded systems, resulting in a classification scheme graphically represented by a multilayer conceptual network. Afterwards, the most significant clusters were identified using the k-means algorithm and the squared Euclidean distance formula. Overall, the results provide guidelines for further research aiming to propose a holistic approach for the development of a special case of embedded systems.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_49-Systematic_Review_of_Methodologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Arabic Entities in Online Social Media using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120148</link>
        <id>10.14569/IJACSA.2021.0120148</id>
        <doi>10.14569/IJACSA.2021.0120148</doi>
        <lastModDate>2021-01-30T14:38:14.5300000+00:00</lastModDate>
        
        <creator>Khowla Mohammed Alyamani</creator>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>Machine learning; classification; visualization; Arabic entities; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In recent years, the use of social media and the amount of exchangeable data have increased considerably. This increase makes data mining, analysis, and visualization of relevant information a challenging task. This research work assesses, categorizes, and analyzes Arabic entities on social media selected by users at certain time intervals. To accomplish this aim, the authors built a highly efficient classification model that classifies entities into three categories: person, location, and organization. The developed model takes an entity and a specific time, collects all the posts on Twitter that refer to the entity at that time, and then classifies and visualizes the entity using three methods. It first attempts to classify the entity through a corpus model that depends on a customized corpus. If the entity is not classified by that model, it is sent to an indicators model, which uses pre-indicators or post-indicators for classification. Finally, the entity is passed to a gazetteer model, which searches for the entity in three gazetteers (person, location, and organization) and accordingly determines the number of times the entity reference is repeated. This work allows scholars and researchers in different fields to visualize the frequency of entities referenced by a community. It also compares how references to entities change over time. The experimental results show that the accuracy of the developed model in classifying the tweets is nearly 90%.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_48-Visualization_of_Arabic_Entities_in_Online_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Situation Awareness Perception Model for Computer Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120147</link>
        <id>10.14569/IJACSA.2021.0120147</id>
        <doi>10.14569/IJACSA.2021.0120147</doi>
        <lastModDate>2021-01-30T14:38:14.5130000+00:00</lastModDate>
        
        <creator>Olofintuyi Sunday Samuel</creator>
        
        <subject>Situation awareness; intrusion detection system; artificial neural network based decision tree; decision tree; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>With the increase in cyber threats, computer network security has become a pressing concern for many companies. To guard against these threats, a formidable Intrusion Detection System (IDS) is needed. Various Machine Learning (ML) algorithms, such as Artificial Neural Network (ANN), Decision Tree (DT), Support Vector Machine (SVM), and Na&#239;ve Bayes, have been used for threat detection. In light of novel threats, there is a need to combine tools to accurately enhance intrusion detection in computer networks, because intruders are gaining ground in the cyber world and the side effects on organizations cannot be quantified. The aim of this work is to provide an enhanced model for the detection of threats on computer networks. A combination of DT and ANN is proposed to accurately predict threats; with this model, a network administrator can be reassured to some extent by the model’s predictions. Two different supervised machine learning algorithms were hybridized in this research. The NSL-KDD dataset was used for the simulation process in the WEKA environment. The proposed model achieved 0.984 precision, 0.982 sensitivity, and 0.987 accuracy.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_47-Cyber_Situation_Awareness_Perception_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Arabic-Speaking Website Pages with Unscrupulous Intentions and Questionable Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120146</link>
        <id>10.14569/IJACSA.2021.0120146</id>
        <doi>10.14569/IJACSA.2021.0120146</doi>
        <lastModDate>2021-01-30T14:38:14.4830000+00:00</lastModDate>
        
        <creator>Haya Mesfer Alshahrani</creator>
        
        <subject>Extremism; textual analysis; classification; Posit; SAFAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This study puts forward a comprehensive and detailed classification system to categorize Arabic-speaking website pages with unscrupulous intentions and questionable language. The methodology is based on a quantitative approach, using supervised algorithms to build a classification model from manually categorized information. The classification algorithm used to construct the model relies on quantitative information extracted by the Posit and SAFAR textual analysis frameworks. The model operates with 58 features combining Posit n-grams and morphological SAFAR V2 POS tools, and achieved a precision exceeding 94%. The results of this study reveal that the best results, reaching 94% precision, were obtained by combining Posit + SAFAR + (18 attributes Posit + SAFAR N-Gram). Moreover, the most reliable results were achieved by applying a Random Forest classification algorithm using regression. The research recommends further work on this topic using new algorithms and techniques.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_46-Classification_of_Arabic_Speaking_Website.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Image Encryption using Chaos-based Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120145</link>
        <id>10.14569/IJACSA.2021.0120145</id>
        <doi>10.14569/IJACSA.2021.0120145</doi>
        <lastModDate>2021-01-30T14:38:14.4670000+00:00</lastModDate>
        
        <creator>Veena G</creator>
        
        <creator>Ramakrishna M</creator>
        
        <subject>Chaos theory; image encryption; logistic map; PWLCM; tent map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Encryption methods such as AES (Advanced Encryption Standard) and DES (Data Encryption Standard) cannot be used directly for image encryption because images contain a huge amount of redundant data, exhibit high correlation between neighboring pixels, and are very large in size. Chaos-based techniques have properties suitable for image encryption, including sensitivity to initial conditions, pseudorandomness, ergodicity, and density of periodic orbits. In this paper, a survey of image encryption using chaos maps such as the logistic map, the piecewise linear chaotic map (PWLCM), and the tent map is conducted in order to choose the best map for image encryption. Image encryption using different chaotic maps is compared by considering parameters such as key space and correlation analysis.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_45-A_Survey_on_Image_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-beam Antenna Array Operating Over Switch On/Off Element Condition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120144</link>
        <id>10.14569/IJACSA.2021.0120144</id>
        <doi>10.14569/IJACSA.2021.0120144</doi>
        <lastModDate>2021-01-30T14:38:14.4070000+00:00</lastModDate>
        
        <creator>Julian Imami</creator>
        
        <creator>Elson Agastra</creator>
        
        <creator>Aleksand&#235;r Biberaj</creator>
        
        <subject>Multi-beam; phased antenna array; Hausdorff distance; Woodward-Lawson</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This work presents the design of a linear multi-beam antenna. The design procedure focuses on the possibility of switching off part of the antenna array elements in active antenna systems in order to conserve resources (power and heat dissipation). The behavior of the original multi-beam antenna design is investigated with respect to the radiation pattern alteration caused by the switched-off elements. Choosing which antenna elements to switch on or off requires little computational effort from the algorithms incorporated in the active antenna system. Antenna array beam design using progressive phase shifts permits beam orthogonality, which is valuable when using multiple-beam antennas. Turning off part of the antenna elements inevitably changes the beam orthogonality conditions; despite this, the case presented in this paper shows beam space discrimination better than 10 dB. To rank the behavior of the modified antenna with the turned-off elements, both Euclidean and Hausdorff distances are used to measure the changes in the modified array’s performance. The obtained solutions show the applicability of binary operation on an existing antenna array, and the metric presented here can be used effectively as a ranking criterion.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_44-Multi_beam_Antenna_Array_Operating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of 3D Seismic Images using Image Fusion Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120143</link>
        <id>10.14569/IJACSA.2021.0120143</id>
        <doi>10.14569/IJACSA.2021.0120143</doi>
        <lastModDate>2021-01-30T14:38:14.3900000+00:00</lastModDate>
        
        <creator>Abrar Alotaibi</creator>
        
        <creator>Mai Fadel</creator>
        
        <creator>Amani Jamal</creator>
        
        <creator>Ghadah Aldabbagh</creator>
        
        <subject>Image fusion; seismic image; seismic attribute; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Seismic images are data collected by sending seismic waves into the earth’s subsurface and recording the reflections, providing subsurface structural information. Seismic attributes are quantities derived from seismic data that provide complementary information. Enhancing seismic images by fusing them with seismic attributes improves subsurface visualization and reduces processing time. In seismic data interpretation, fusion techniques have been used to enhance the resolution and reduce the noise of a single seismic attribute. In this paper, we investigate the enhancement of 3D seismic images using image fusion techniques and neural networks to combine seismic attributes. The paper evaluates the feasibility of using image fusion models pretrained on specific image fusion tasks; these models achieved the best results on their respective tasks and are tested here for seismic image fusion. The experiments showed that image fusion techniques are capable of combining up to three seismic attributes without distortion; future studies can increase this number. This is the first study to use models pretrained on other types of images for seismic image fusion, and the results are promising.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_43-Enhancement_of_3D_Seismic_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Electronic Means in Enhancing the Intellectual Security among Students at the University of Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120142</link>
        <id>10.14569/IJACSA.2021.0120142</id>
        <doi>10.14569/IJACSA.2021.0120142</doi>
        <lastModDate>2021-01-30T14:38:14.3730000+00:00</lastModDate>
        
        <creator>Mohammad Salim Al-Zboun</creator>
        
        <creator>Mamon Salim Al-Zboun</creator>
        
        <creator>Hussam N. Fakhouri</creator>
        
        <subject>Electronic means; intellectual security; enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The study aims to identify the role of electronic means in enhancing intellectual security among students at the University of Jordan. To achieve this objective, a study instrument was developed, and the study sample consisted of 525 male and female students. The results show that the university students’ assessment of the role of electronic means in enhancing intellectual security has an arithmetic mean of 3.07 and a standard deviation of 1.128, a score considered medium. In light of these results, the researchers recommend employing electronic means to activate intellectual security among university students in Jordan.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_42-The_Role_of_Electronic_Means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of the Mining Activity on the Water Quality in Peru Applying the Fuzzy Logic with the Grey Clustering Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120141</link>
        <id>10.14569/IJACSA.2021.0120141</id>
        <doi>10.14569/IJACSA.2021.0120141</doi>
        <lastModDate>2021-01-30T14:38:14.3430000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Anabel Fernandez</creator>
        
        <creator>Brigitte Chirinos</creator>
        
        <creator>Gabriel Barboza</creator>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <subject>Mining activity; water quality; grey clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Mining activity in the department of Jun&#237;n, Peru, is intense due to the area’s great mining-metallic potential. The Yauli and Andaychagua rivers, located in Yauli Province in Jun&#237;n, receive a large volume of discharges that deteriorate their water quality. To evaluate this quality in an integral way, fuzzy logic is applied with the Grey Clustering methodology, defining center-point triangular whitening weight functions (CTWF) and using as grey classes the Environmental Quality Standards for water (ECA-Water), Category 3, which were modified for research purposes. Four monitoring points were evaluated: upstream (PY-01) and downstream (PY-02) of the Yauli River, and upstream (PA-01) and downstream (PA-02) of the Andaychagua River. The analysis determined that the water quality at PY-01 is 0.7302, with 0.8795 in the dry season and 0.5980 in the wet season; at PY-02, values of 0.5448, 0.6448 in the dry season, and 0.5628 in the wet season were obtained. At PA-01 the quality is 0.8213, with 0.8691 in the dry season and 0.7902 in the wet season; at PA-02, values of 0.8385, 0.8827 in the dry season, and 0.8118 in the wet season were obtained. It is concluded that the water quality is good, decreasing in wet seasons due to the influence of rain on the contact waters. The research integrates the parameters considered in the ECA-Water with other international standards, allowing a more precise evaluation of the quality status of the Yauli and Andaychagua rivers after they receive the effluents generated by mining activity, benefiting the relevant authorities in decision making and providing a methodology that improves the analysis of the results obtained from the specific parameters evaluated in environmental monitoring.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_41-Impact_of_the_Mining_Activity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovery Engine for Finding Hidden Connections in Prose Comprehension from References</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120140</link>
        <id>10.14569/IJACSA.2021.0120140</id>
        <doi>10.14569/IJACSA.2021.0120140</doi>
        <lastModDate>2021-01-30T14:38:14.3430000+00:00</lastModDate>
        
        <creator>Amal Babour</creator>
        
        <creator>Javed I. Khan</creator>
        
        <creator>Fatema Nafa</creator>
        
        <creator>Kawther Saeedi</creator>
        
        <creator>Dimah Alahmadi</creator>
        
        <subject>Knowledge graph; ontology engine; text comprehension; text summarization; Wikipedia; WordNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Reading is one of the essential practices of modern human learning. Comprehending prose simply from the available text is particularly challenging, as comprehension of prose generally requires the use of external knowledge or references. Although the processes of reading comprehension have been widely studied in the field of psychology, no algorithm-level models for comprehension have yet been developed. This paper proposes a comprehension engine consisting of knowledge induction, which connects the knowledge space by augmenting associations within it. The connections are achieved through the automatic incremental reading of external references and the capturing of high-familiarity knowledge associations between prose concepts. The Ontology Engine is used to find lexical knowledge associations among concept pairs, with the objective of obtaining a knowledge space graph with a single giant component to establish a base model for prose comprehension. The comprehension engine is evaluated through experiments with various selected prose texts. Akin to human readers, it can mine reference texts from modern knowledge corpora such as Wikipedia and WordNet. The results demonstrate the potential efficiency of the comprehension engine, which enhances the quality of reading comprehension in addition to reducing reading time. Compared with existing works, this comprehension engine is considered the first algorithm-level model for comprehension.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_40-Discovery_Engine_for_Finding_Hidden_Connections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cryptocurrency-based E-mail System for SPAM Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120139</link>
        <id>10.14569/IJACSA.2021.0120139</id>
        <doi>10.14569/IJACSA.2021.0120139</doi>
        <lastModDate>2021-01-30T14:38:14.3100000+00:00</lastModDate>
        
        <creator>Shafiya Afzal Sheikh</creator>
        
        <creator>M. Tariq Banday</creator>
        
        <subject>E-mail; SPAM; blockchain; cryptocurrency; Ethereum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Sending bulk e-mail is commercially cheap and technically easy, making it profitable for spammers even if only a tiny percentage of recipients falls for the attacks or turns into customers. Some researchers have proposed making e-mail paid so that sending bulk e-mail becomes expensive, making spamming unprofitable and futile unless many victims respond to spam; on the other hand, the small sending fee is negligible for legitimate e-mail users. Making e-mail paid is a challenging task if implemented using a conventional payment system or by developing new cryptocurrencies: traditional payment systems are difficult to integrate with e-mail systems, and new cryptocurrencies face challenges in adoption by users on the required scale. This work proposes using cryptocurrency payments to make e-mail senders pay for sending an e-mail without creating a new cryptocurrency or a new blockchain. In the proposed system, the recipients of the e-mail can collect the payments and use the collected revenues to send e-mail messages or even sell them on an exchange. The proposed solution has been implemented using Ropsten, an Ethereum test network, and tested using enhanced e-mail client and server software.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_39-A_Cryptocurrency_based_E_mail_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Activities of Daily Living using 1D Convolutional Neural Networks for Efficient Smart Homes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120138</link>
        <id>10.14569/IJACSA.2021.0120138</id>
        <doi>10.14569/IJACSA.2021.0120138</doi>
        <lastModDate>2021-01-30T14:38:14.2970000+00:00</lastModDate>
        
        <creator>Sumaya Alghamdi</creator>
        
        <creator>Etimad Fadel</creator>
        
        <creator>Nahid Alowidi</creator>
        
        <subject>Deep learning; one-dimensional convolutional neural networks; time-series classification; Activities of Daily Living (ADLs); smart home; recommendation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Human activity recognition is considered a challenging task in sensor-based monitoring systems. In ambient intelligent environments, such as smart homes, collecting data from ambient sensors is useful for recognizing activities of daily living, which can then be used to provide assistance to inhabitants. Activities of daily living are composed of complex multivariable time series data that is high-dimensional, huge in size, and constantly updated. Thus, developing methods for analyzing time series data to extract meaningful features and specific characteristics would help solve the problem of human activity recognition. Based on the noticeable success of deep learning in the field of time series classification, we developed a model called a deep one-dimensional convolutional neural network (Deep 1d-CNN) for recognizing activities of daily living in smart homes. Our model contains several one-dimensional convolution layers coupled with max pooling to learn the internal representation of time series data and automatically generate very deep features for recognizing different activity types. For the performance evaluation, we tested our deep model on the new real-life dataset, ContextAct@A4H, and the results showed that our model achieved a high F1 score (0.90). We also extended our study to show the potential energy saving in smart homes through recognizing activities of daily living. We built a recommendation system based on the activities recognized by our deep model to detect devices that are wasting energy and recommend energy optimization actions to the user. The experiment indicated that recognizing activities of daily living can result in energy savings of around 50%.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_38-Recognizing_Activities_of_Daily_Living.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Cognizance of Green Computing Concept and Practices among Secondary School Students: A Preliminary Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120137</link>
        <id>10.14569/IJACSA.2021.0120137</id>
        <doi>10.14569/IJACSA.2021.0120137</doi>
        <lastModDate>2021-01-30T14:38:14.2630000+00:00</lastModDate>
        
        <creator>Shafinah Kamarudin</creator>
        
        <creator>Siti Munirah Mohd</creator>
        
        <creator>Nurul Nadwa Zulkifli</creator>
        
        <creator>Rosli Ismail</creator>
        
        <creator>Ribka Alan</creator>
        
        <creator>Philip Lepun</creator>
        
        <creator>Muhd Khaizer Omar</creator>
        
        <subject>Awareness; energy consumption; green computing; environment pollution; secondary students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The use of information and communication technology (ICT) is growing and has become a compulsory norm in society. However, the increased use of ICT facilities in developing countries has contributed to higher energy use and leads to environmental pollution. This study explores the extent of awareness among the younger generation about green computing concepts and practices. A total of 94 secondary school students were sampled across Selangor state. The data were gathered using a set of questionnaires comprising 20 items pertaining to the harmful effects of using computers and communication gadgets on the environment and to awareness of the concepts and practices of green computing. The findings reveal that secondary school students are still not aware of the green computing concept. It is observed that 54.35% of students may not realize that computers and communication devices can be disposed of in an eco-safe manner. Furthermore, 61.96% of students do not realize that computer hardware can be recycled, and 75% of them have no experience in disposing of their computers. Surprisingly, they mostly practice green computing when it comes to reducing energy consumption. This study contributes to determining students’ current level of green computing awareness in sustaining the environment. In conclusion, students need to be educated on utilizing ICT resources and practicing green computing mechanisms to boost environmental sustainability.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_37-The_Cognizance_of_Green_Computing_Concept.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simplified Framework for Benchmarking Standard Downlink Scheduler over Long Term Evolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120136</link>
        <id>10.14569/IJACSA.2021.0120136</id>
        <doi>10.14569/IJACSA.2021.0120136</doi>
        <lastModDate>2021-01-30T14:38:14.2330000+00:00</lastModDate>
        
        <creator>Srinivasa R K</creator>
        
        <creator>Hemanth Kumar A.R</creator>
        
        <subject>eNodeB; Scheduler; HARQ; Best-CQI; Round Robin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Downlink scheduling is one of the essential operations for improving the quality of service in Long Term Evolution (LTE). With an increasing user base, resource provisioning also poses an extensive challenge. A review of existing approaches shows that there is significant room for improvement in this regard; hence, this manuscript presents a benchmarking model for addressing the issues associated with Best-Channel Quality Indicator (CQI), Round Robin, and Hybrid Automatic Repeat Request (HARQ) scheduling. The outcome shows that HARQ scheduling offers better performance, with higher throughput, higher fairness, and lower delay over different test cases.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_36-Simplified_Framework_for_Benchmarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantification of Surface Water-Groundwater Exchanges by GIS Coupled with Experimental Gauging in an Alluvial Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120135</link>
        <id>10.14569/IJACSA.2021.0120135</id>
        <doi>10.14569/IJACSA.2021.0120135</doi>
        <lastModDate>2021-01-30T14:38:14.2030000+00:00</lastModDate>
        
        <creator>Yassyr Draoui</creator>
        
        <creator>Fouad Lahlou</creator>
        
        <creator>Jamal Chao</creator>
        
        <creator>Lhoussaine El Mezouary</creator>
        
        <creator>Imane Al Mazini</creator>
        
        <creator>Mohamed Jalal El Hamidi</creator>
        
        <subject>Surface water and groundwater; river-groundwater exchanges; geographic information system; differential gauging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Surface water and groundwater are two interrelated components, where the influence of one automatically affects the quantity and quality of the other. These exchange flows are strongly influenced by mechanisms such as permeability, the lithological nature of the soil, and the landscape, in addition to the difference between the hydrometric height of the river and the piezometric level of the groundwater. The study area of the Bou Ahmed plain is vulnerable to intensive pumping, mainly in the coastal fringe. The increase in water demand, due to demographic development, is accompanied by pressure on groundwater abstraction, which causes significant drops in the groundwater level. The main objectives of this study are to develop a Geographic Information System database and mathematical models to analyze the spatial and temporal hydrogeological characteristics and the hydrodynamic functioning of groundwater flow in the Bou Ahmed aquifer. The present work exhibits the characteristics of river-groundwater exchanges in an alluvial plain. We therefore quantified the flows exchanged between a river and its groundwater using GIS tools, along with measurements obtained by differential gauging carried out in the field and hydrogeological borehole data. These quantified flows, moreover, enabled us to estimate the uncertainties related to the use of the GIS method. These results will also be used to support a set of groundwater simulations based on the MODFLOW code in the Bou Ahmed aquifer. These models, together with the developed Geographic Information System, will help to better plan, manage, and control the groundwater resources of this aquifer.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_35-Quantification_of_Surface_Water_Groundwater_Exchanges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-Direct Routing Approach for Mobile IP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120133</link>
        <id>10.14569/IJACSA.2021.0120133</id>
        <doi>10.14569/IJACSA.2021.0120133</doi>
        <lastModDate>2021-01-30T14:38:14.1870000+00:00</lastModDate>
        
        <creator>Basil Al-Kasasbeh</creator>
        
        <subject>Mobile IP; direct routing; indirect routing; care-of address; home agent; foreign agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The Mobile IP (MIP) protocol is used to maintain device connectivity while the device moves between networks, using a permanent IP address and a temporary care-of address (CoA). There are two techniques to implement MIP: direct and indirect routing. The indirect technique is commonly used in current industry due to its stability while the mobile host (MH) frequently moves from one network to another. However, the indirect technique suffers from delays and from enlargement of the packet size. The direct technique is more sensitive to frequent mobility, yet it requires less transformation overhead under stable mobility. Accordingly, to overcome the disadvantages of both techniques, a semi-direct technique is proposed in this paper. The proposed technique is implemented by minimizing the home agent (HA) interference, using a push notification to the correspondent node (CN) concerning any modification of the moving MH&#39;s CoA. Simulations of the proposed, indirect, and direct routing techniques showed the advantages of the semi-direct routing technique over the conventional approaches. The results showed that the semi-direct approach outperformed the conventional approaches in terms of delay and overhead with a frequently moving MH.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_33-Semi_Direct_Routing_Approach_for_Mobile_IP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Convolutional Neural Network Based Approach for Paddy Leaf Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120134</link>
        <id>10.14569/IJACSA.2021.0120134</id>
        <doi>10.14569/IJACSA.2021.0120134</doi>
        <lastModDate>2021-01-30T14:38:14.1870000+00:00</lastModDate>
        
        <creator>Md. Ashiqul Islam</creator>
        
        <creator>Md. Nymur Rahman Shuvo</creator>
        
        <creator>Muhammad Shamsojjaman</creator>
        
        <creator>Shazid Hasan</creator>
        
        <creator>Md. Shahadat Hossain</creator>
        
        <creator>Tania Khatun</creator>
        
        <subject>Paddy leaf disease; deep convolutional neural network (DNN); transfer learning; VGG-19; ResNet-101; Inception-ResNet-V2; Xception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Bangladesh and India are significant paddy-cultivating countries in the world. Paddy is the key crop produced in Bangladesh. Over the last 11 years, agriculture contributed about 15.08 percent of Bangladesh&#39;s Gross Domestic Product (GDP). Unfortunately, the farmers who work so hard to grow this crop face huge losses because of crop damage caused by various diseases of paddy. There are more than 30 known paddy leaf diseases, and among them about 7-8 are quite common in Bangladesh. Paddy leaf diseases like Brown Spot Disease, Blast Disease, and Bacterial Leaf Blight are well known and among the most damaging. These diseases hamper the growth and productivity of paddy plants, which can lead to great ecological and economic losses. If these diseases can be detected at an early stage with high accuracy and in a short time, the damage to the crops can be greatly reduced and the losses of the farmers prevented. This paper works on four disease classes and one healthy leaf class of paddy. The main goal of this paper is to provide the best results for paddy leaf disease detection through an automated detection approach with deep learning CNN models that can achieve the highest accuracy, instead of the traditional, lengthy manual disease detection process whose accuracy is also greatly questionable. Four models were analyzed: VGG-19, Inception-ResNet-V2, ResNet-101, and Xception, with the best accuracy of 92.68% achieved by Inception-ResNet-V2.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_34-An_Automated_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BiDETS: Binary Differential Evolutionary based Text Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120132</link>
        <id>10.14569/IJACSA.2021.0120132</id>
        <doi>10.14569/IJACSA.2021.0120132</doi>
        <lastModDate>2021-01-30T14:38:14.1570000+00:00</lastModDate>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <creator>Ahmed Hamza Osman Ahmed</creator>
        
        <creator>Albaraa Abuobieda</creator>
        
        <subject>Differential evolution; text summarization; PSO; GA; evolutionary algorithms; optimization techniques; feature weighting; ROUGE; DUC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In extraction-based automatic text summarization (ATS), feature scoring is the cornerstone of the summarization process, since it is used for selecting the candidate summary sentences. Weighting all features equally leads to disqualified summaries. Feature Weighting (FW) is an important approach used to weight the feature scores based on their importance in the current context. Therefore, some ATS researchers have proposed evolutionary machine learning methods, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), to extract superior weights for their assigned features; the extracted weights are then used to tune the scored features in order to generate a highly qualified summary. In this paper, the Differential Evolution (DE) algorithm is proposed as a feature-weighting machine learning method for extraction-based ATS problems. To enable the DE to represent and control the assigned features in a binary dimension space, it was modulated into a binary-coded format. Simple mathematical features selected from the literature are employed in this study. The sentences in the documents are first clustered according to a multi-objective clustering concept. The DE approach simultaneously optimizes two objective functions, measuring the compactness and the separation of the sentence clusters. To automatically detect the number of sentence clusters contained in a document, representative sentences from the various clusters are chosen with certain sentence-scoring features to produce the summary. The method was trained and tested on the DUC2002 dataset to learn the weight of each feature. To create comparative and competitive findings, the proposed DE method was compared with the evolutionary methods PSO and GA, as well as against the best and worst benchmark systems of DUC 2002. The BiDETS model scores 49%, close to human performance (52%), in ROUGE-1; 26%, above human performance (23%), in ROUGE-2; and 45%, close to human performance (48%), in ROUGE-L. These results show that the proposed method outperformed all other methods in terms of F-measure using the ROUGE evaluation tool.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_32-BiDETS_Binary_Differential_Evolutionary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterization of Quaternary Deposits in the Bou Ahmed Coastal Plain (Chefchaouen, Morocco): Contribution of Electrical Prospecting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120131</link>
        <id>10.14569/IJACSA.2021.0120131</id>
        <doi>10.14569/IJACSA.2021.0120131</doi>
        <lastModDate>2021-01-30T14:38:14.0770000+00:00</lastModDate>
        
        <creator>Yassyr Draoui</creator>
        
        <creator>Fouad Lahlou</creator>
        
        <creator>Imane Al Mazini</creator>
        
        <creator>Jamal Chao</creator>
        
        <creator>Mohamed Jalal El Hamidi</creator>
        
        <subject>Bou Ahmed plain; hydrogeological units; reservoir geometry; vertical electrical sounding surveys; geographic information system; quaternary deposits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The Bou Ahmed plain, which is part of the internal zone of the Rif, is located along the Mediterranean coast, 30 kilometers from the town of Oued Laou. This basin is made up of a Quaternary filling mainly formed by detrital fluvial facies: channeled conglomerates surmounted by fluvial sand interlayered with pebbles; these facies may constitute new potential aquifer formations. Therefore, the main goal of this study is to build a three-dimensional lithostratigraphic model to identify the hydrogeological units and the reservoir geometry of the Bou Ahmed plain. To achieve this goal, we created a database made up of Vertical Electrical Sounding surveys and drilling data integrated into a Geographic Information System. This database allowed us to establish a three-dimensional model of the basin bottom, geoelectric cross-sections, and isopach and isoresistivity maps of new potential aquifer units. This approach allowed us to explain the deposition modalities of the Quaternary deposits of the Bou Ahmed plain and to identify potential hydrogeological reservoirs. These results will also be used to develop a hydrodynamic model based on the MODFLOW code in the Bou Ahmed aquifer.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_31-Characterization_of_Quaternary_Deposits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling the Estimation Errors of Visual-based Systems Developed for Vehicle Speed Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120130</link>
        <id>10.14569/IJACSA.2021.0120130</id>
        <doi>10.14569/IJACSA.2021.0120130</doi>
        <lastModDate>2021-01-30T14:38:14.0600000+00:00</lastModDate>
        
        <creator>Abdulrazzaq Jawish Alkherret</creator>
        
        <creator>Musab AbuAddous</creator>
        
        <subject>Intelligent transportation systems; image processing; vehicle detection; vehicle tracking; speed estimation; traffic simulation; linear regression analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This paper aims to model the relationship between the error of visual-based systems developed for vehicle speed estimation (the dependent variable) and each of the detection region length, the camera angle, and the volume-to-capacity ratio (V/C), as independent variables. Simulation software (VISSIM) is used to generate a set of video clips of predefined traffic based on different values of the independent variables. These videos are analyzed with a video-based detection and tracking model (VBDATM) developed in 2015. Errors are expressed as the difference between the actual speed generated by VISSIM and the speed computed by the VBDATM, divided by the actual speed. The results of the forward stepwise regression analysis show that the V/C ratio does not affect the accuracy of the estimate, and that there are weak relationships between the estimation error and each of the camera position and the detection region length.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_30-Modeling_the_Estimation_Errors_of_Visual_based_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>“iSAY”: Blockchain-based Intelligent Polling System for Legislative Assistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120129</link>
        <id>10.14569/IJACSA.2021.0120129</id>
        <doi>10.14569/IJACSA.2021.0120129</doi>
        <lastModDate>2021-01-30T14:38:14.0300000+00:00</lastModDate>
        
        <creator>Deshan Wattegama</creator>
        
        <creator>Pasindu S.Silva</creator>
        
        <creator>Chamika R. Jayathilake</creator>
        
        <creator>Kalana Elapatha</creator>
        
        <creator>Kavinga Abeywardena</creator>
        
        <creator>N. Kuruwitaarachchi</creator>
        
        <subject>Blockchain; machine learning; distributed systems; e-voting; legislative assistance; liquid democracy; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>“iSAY”&#39; is a Blockchain-based polling system created for legislative assistance. Sri Lanka is a democratic country. Country follows a representative democracy and voters in Sri Lanka vote for their preferred government based on their election mandate. However, governments implement legislative decisions that are not stated in the election mandate. People won’t get a chance to state their opinion on this legislative matter and the government also doesn’t know whether people like this or not.  To solve this issue, in this paper the authors propose a blockchain-based intelligent polling application for legislative assistance.  “iSay” is an application where blockchain technology gets together with machine learning to add value into the public opinion. The government can create a poll about a legislative decision and people can state their opinion which could be further discussed in the legislature. Adding a significant change to the blockchain based e-voting solutions this paper proposes a novel feature where users can add their idea to a relevant poll. Using machine learning algorithms all these user ideas will be classified and analyzed before presenting to the government. Through this research, it is expected to deploy scalable elections among the general public and get their vote and ideas about specific legislations to generate an overview of general public opinion about legislative decisions.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_29-ISAY_Blockchain_based_Intelligent_Polling_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using IndoWordNet for Contextually Improved Machine Translation of Gujarati Idioms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120128</link>
        <id>10.14569/IJACSA.2021.0120128</id>
        <doi>10.14569/IJACSA.2021.0120128</doi>
        <lastModDate>2021-01-30T14:38:14.0130000+00:00</lastModDate>
        
        <creator>Jatin C. Modh</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Contextual information; Gujarati; idiom; IndoWordNet; Machine Translation System (MTS); n-gram model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Gujarati is an Indo-Aryan language spoken by the Gujaratis, the people of the Indian state of Gujarat. It is one of the 22 official languages recognized by the Indian government, and its script was adopted from the Devanagari script. Approximately 3000 idioms are available in the Gujarati language. Machine translation of any idiom is a challenging task because contextual information is important for the translation of a particular idiom. For the translation of Gujarati idioms into English or any other language, the surrounding contextual words are considered when the meaning of an idiom is ambiguous. This paper experiments with IndoWordNet for the Gujarati language to obtain synonyms of surrounding contextual words. It uses the n-gram model and experiments with various window sizes surrounding the particular idiom, as well as with the role of stop-words, for correct context identification. The paper demonstrates the usefulness of the context window in resolving ambiguity when identifying the meaning of idioms with multiple meanings. The results of this research could be consumed by any destination-independent machine translation system for the Gujarati language.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_28-Using_IndoWordNet_for_Contextually_Improved_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Management System based on Machine Learning: The Case Study of Ha&#39;il University - KSA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120127</link>
        <id>10.14569/IJACSA.2021.0120127</id>
        <doi>10.14569/IJACSA.2021.0120127</doi>
        <lastModDate>2021-01-30T14:38:13.9830000+00:00</lastModDate>
        
        <creator>Mohamed H&#233;di Ma&#226;loul</creator>
        
        <creator>Youn&#232;s Bahou</creator>
        
        <subject>Learning management systems; blackboard; machine learning; semi-supervised learning; personalization system; SPS System; user profile; social profile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Online learning environments have become an established presence in higher education in the Kingdom of Saudi Arabia, especially with the onset of the Covid-19 pandemic. At present, supporting e-learning with interactive virtual campuses is a future aim in education. In order to solve the problems of interactivity and adaptability in e-learning systems at Saudi universities, this paper proposes a machine learning based module to be used in learning management systems to meet these challenges. The e-learning system should be intelligent and able to infer the specific characteristics (i.e., metadata) of a student by accessing their social media profiles.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_27-Learning_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Book Recommendation System using Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120126</link>
        <id>10.14569/IJACSA.2021.0120126</id>
        <doi>10.14569/IJACSA.2021.0120126</doi>
        <lastModDate>2021-01-30T14:38:13.8270000+00:00</lastModDate>
        
        <creator>Dhiman Sarma</creator>
        
        <creator>Tanni Mittra</creator>
        
        <creator>Mohammad Shahadat Hossain</creator>
        
        <subject>Personalize book recommendation; recommendation system; clustering; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>As the number of online books increases exponentially due to the COVID-19 pandemic, finding relevant books in a vast e-book space becomes a tremendous challenge for online users. Personalized recommendation systems have emerged to conduct effective searches that mine related books based on user rating and interest. Most existing systems are based on user ratings, using content-based and collaborative learning methods. A weakness of these systems is their rating technique, which counts users who have already unsubscribed from the service and no longer rate books. This paper proposes an effective system for recommending books to online users that clusters rated books and then finds similar books in order to suggest new ones. The proposed system uses the K-means algorithm with the Cosine Distance function to measure distance and the Cosine Similarity function to find similarity between the book clusters. Sensitivity, specificity, and F score were calculated for ten different datasets. The average specificity was higher than the sensitivity, which means that the classifier could remove boring books from the reader&#39;s list. In addition, a receiver operating characteristic curve was plotted to obtain a graphical view of the classifiers&#39; accuracy. Most of the datasets were close to the ideal diagonal classifier line and far from the worst classifier line. The results conclude that recommendations based on a particular book are more accurately effective than a user-based recommendation system.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_26-Personalized_Book_Recommendation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Multilayer Convolutional Neural Network for Plant Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120125</link>
        <id>10.14569/IJACSA.2021.0120125</id>
        <doi>10.14569/IJACSA.2021.0120125</doi>
        <lastModDate>2021-01-30T14:38:13.8100000+00:00</lastModDate>
        
        <creator>Radhika Bhagwat</creator>
        
        <creator>Yogesh Dandawate</creator>
        
        <subject>Crop diseases; plant disease detection; hyperparameters; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Agriculture has a dominant role in the world’s economy. However, losses due to crop diseases and pests significantly affect the contribution made by the agricultural sector. Plant diseases and pests recognized at an early stage can help limit the economic losses in agricultural production around the world. In this paper, a comprehensive multilayer convolutional neural network (CMCNN) is developed for plant disease detection that can analyze the visible symptoms on a variety of leaf images, such as laboratory images with a plain background, complex images with real field conditions, and images of individual disease symptoms or spots. The model performance is evaluated on three public datasets: the Plant Village repository with images of whole leaves against a plain background, the Plant Village repository with complex backgrounds, and the Digipathos repository with images of lone lesions and spots. Hyperparameters such as the learning rate, dropout probability, and optimizer are fine-tuned so that the model is capable of classifying various types of input leaf images. The overall classification accuracy of the model is 99.85% on laboratory images, 98.16% on real field condition images, and 99.6% on images with individual disease symptoms. The proposed design is also compared with popular CNN architectures such as GoogleNet, VGG16, VGG19, and ResNet50. The experimental results indicate that the suggested generic model has higher robustness in handling various types of leaf images and better classification capability for plant disease detection. The obtained results suggest the favorable use of the proposed model in a decision support system to identify diseases in several plant species across a large range of leaf images.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_25-Comprehensive_Multilayer_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Coherence Analysis based on Misspelling Oblivious Word Embeddings and Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120124</link>
        <id>10.14569/IJACSA.2021.0120124</id>
        <doi>10.14569/IJACSA.2021.0120124</doi>
        <lastModDate>2021-01-30T14:38:13.7800000+00:00</lastModDate>
        
        <creator>Md. Anwar Hussen Wadud</creator>
        
        <creator>Md. Rashadul Hasan Rakib</creator>
        
        <subject>Coherence analysis; deep neural network; distributional representation; misspellings; NLP; word embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Text coherence analysis is more challenging than other subfields of Natural Language Processing (NLP), such as text generation, translation, or text summarization. There are many text coherence methods in NLP, most of them graph-based or entity-based methods for short text documents. However, for long text documents, existing methods yield low accuracy, which is the biggest challenge in text coherence analysis in both English and Bengali. This is because existing methods do not consider misspelled words in a sentence and therefore cannot accurately assess text coherence. In this paper, a text coherence analysis method is proposed based on the Misspelling Oblivious Word Embedding Model (MOEM) and a deep neural network. The MOEM model replaces all misspelled words with the correct words and captures the interaction between different sentences by calculating their matches using word embedding. Then, a deep neural network architecture is used to train and test the model. This study examines two different datasets, one in Bengali and the other in English, to analyze text coherence based on sentence sequence activities and to evaluate the effectiveness of the model. In the Bengali dataset, 7121 Bengali text documents were used, of which 5696 (80%) were used for training and 1425 (20%) for testing. In the English dataset, out of 7500 text documents, 6000 (80%) were used for training and 1500 (20%) for model evaluation. The efficiency of the proposed model is compared with existing text coherence analysis techniques. Experimental results show that the proposed model significantly improves automatic text coherence detection, with 98.1% accuracy in English and 89.67% accuracy in Bengali. Finally, comparisons of the proposed model with other existing text coherence models are shown for both the English and Bengali datasets.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_24-Text_Coherence_Analysis_based_on_Misspelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Control Approach of SISO Coupled Tank System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120123</link>
        <id>10.14569/IJACSA.2021.0120123</id>
        <doi>10.14569/IJACSA.2021.0120123</doi>
        <lastModDate>2021-01-30T14:38:13.7630000+00:00</lastModDate>
        
        <creator>Mohd Hafiz Jali</creator>
        
        <creator>Ameelia Ibrahim</creator>
        
        <creator>Rozaimi Ghazali</creator>
        
        <creator>Chong Chee Soon</creator>
        
        <creator>Ahmad Razif Muhammad</creator>
        
        <subject>Sliding Mode Control; PID controller; robustness; coupled tank system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>This paper presents the design principles of a sliding mode controller implemented in a coupled tank system. The Sliding Mode Control (SMC) controller exhibits robust stability, which can overcome nonlinearities and reduce the disturbances and noise that occur in the coupled tank system. The work starts with mathematical modelling of the coupled tank system using a second-order single input single output (SISO) technique. Then, the sliding mode controller design begins by deriving the sliding surface according to the second-order coupled tank system. The control variables in this system, C1 and C2, are manipulated to obtain the best performance of the SMC. From the simulations, the performance characteristics of the SMC are analysed and investigated. The output response is obtained by implementing the SMC on the plant and is compared with a proportional, integral, and derivative (PID) controller as a benchmark. The results show that the robust SMC has a better output response than the PID controller.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_23-Robust_Control_Approach_of_SISO_Coupled_Tank_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Rule-based Personal Finance Management System based on Financial Well-being</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120122</link>
        <id>10.14569/IJACSA.2021.0120122</id>
        <doi>10.14569/IJACSA.2021.0120122</doi>
        <lastModDate>2021-01-30T14:38:13.7330000+00:00</lastModDate>
        
        <creator>Alhanoof Althnian</creator>
        
        <subject>Artificial intelligence; rule-based; deductive reasoning; forward chaining; personal finance; financial well-being</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Financial planning plays an important role in people’s lives. The recent COVID-19 outbreak has caused sudden unemployment for many people across the globe, leaving them in financial crisis. Recent surveys indicate that financial matters continue to be the leading cause of stress for employees. Further, many millennials overspend and make unfortunate financial decisions due to their inability to manage their earnings, which prevents them from maintaining financial satisfaction. Financial well-being, as defined by the American Consumer Financial Protection Bureau (CFPB), is a state in which one fully meets current and ongoing financial obligations, feels secure about one’s financial future, and is able to make choices that allow enjoyment of life. This work proposes a Personal Finance Management (PFM) system with a new architecture that aims to guide users toward the state of financial well-being as defined by the CFPB. The proposed system consists of a rule-based system that provides users with actionable advice to make informed spending decisions and achieve their financial goals.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_22-Design_of_A_Rule_based_Personal_Finance_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Junction Selection based Routing Protocols for VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120121</link>
        <id>10.14569/IJACSA.2021.0120121</id>
        <doi>10.14569/IJACSA.2021.0120121</doi>
        <lastModDate>2021-01-30T14:38:13.7170000+00:00</lastModDate>
        
        <creator>Irshad Ahmed Abbasi</creator>
        
        <creator>Elfatih Elmubarak Mustafa</creator>
        
        <subject>Position-based; inter-vehicular; urban scenario; algorithms; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Objectives: To compare significant position-based routing protocols based on their underlying techniques, such as junction selection mechanisms, that provide vehicle-to-vehicle communications in city scenarios. Background: The Vehicular Adhoc Network is the most significant offshoot of Mobile Adhoc Networks, capable of organizing itself in an infrastructure-less environment. The network enables smart transportation, which facilitates driving in terms of traffic safety by exchanging timely information in a proficient manner. Findings: The main features of vehicular adhoc networks in the city environment, such as high mobility, network segmentation, sporadic interconnections, and impediments, are the key challenges for the development of an effective routing protocol. These features of the urban environment have a great impact on the performance of a routing protocol. This study presents a brief survey of the most substantial position-based routing schemes designed for urban inter-vehicular communication scenarios. These protocols are presented with their operational techniques for exchanging messages between vehicles. A comparative analysis is also provided, based on various important factors such as intersection selection mechanisms, forwarding strategies, vehicular traffic density, local maximum conquering methods, mobility of vehicular nodes, and secure message exchange. Application/Improvements: The outcomes observed in this paper motivate us to improve routing protocols in terms of security, accuracy, and reliability in vehicular adhoc networks. Furthermore, this survey can be employed as a foundation of references in identifying literature worth mentioning on routing in vehicular communications.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_21-A_Survey_on_Junction_Selection_based_Routing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Design and Implementation of Mobile Heart Monitoring Applications using Wearable Heart Rate Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120120</link>
        <id>10.14569/IJACSA.2021.0120120</id>
        <doi>10.14569/IJACSA.2021.0120120</doi>
        <lastModDate>2021-01-30T14:38:13.6870000+00:00</lastModDate>
        
        <creator>Ummi Namirah Hashim</creator>
        
        <creator>Lizawati Salahuddin</creator>
        
        <creator>Raja Rina Raja Ikram</creator>
        
        <creator>Ummi Rabaah Hashim</creator>
        
        <creator>Ngo Hea Choon</creator>
        
        <creator>Mohd Hariz Naim Mohayat</creator>
        
        <subject>Personalized healthcare; heart monitoring; wearable sensor; mobile; android</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Heart monitoring is important to avert any catastrophe caused by heart failure. Continuous real-time heart monitoring could prevent sudden death due to heart attack. Nevertheless, the major challenge associated with continuous heart monitoring in the traditional approach is the need to undertake regular medical check-ups at the hospital or clinic. Hence, the aim of this study is to develop a mobile app with which patients can monitor their heart rate (HR) in real time and detect abnormal HR whenever it occurs. Caregivers are notified when a patient is detected with an abnormal HR. The mobile app was developed for Android-based smartphones. A wearable HR sensor is used to collect RR data, which are transmitted to the smartphone via a Bluetooth connection. A user acceptance test was conducted to gauge the intention and satisfaction level of prospective users of the application. The user acceptance test shows that compatibility, perceived usefulness, perceived ease of use, trust, and behavioral intention to use all had a high acceptance rate. It is expected that the developed app may provide a more plausible tool for monitoring HR personally, conveniently, and continuously at any time and anywhere.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_20-The_Design_and_Implementation_of_Mobile_Heart_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Different Mobile Ad-hoc Network Routing Protocols in Difficult Situations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120119</link>
        <id>10.14569/IJACSA.2021.0120119</id>
        <doi>10.14569/IJACSA.2021.0120119</doi>
        <lastModDate>2021-01-30T14:38:13.6700000+00:00</lastModDate>
        
        <creator>Sultan Mohammed Alkahtani</creator>
        
        <creator>Fahd Alturki</creator>
        
        <subject>Ad-hoc network; performance evaluation; network simulation; MANET; DSR; AODV; DYMO; routing protocols; Omnet++</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Performance evaluation of Mobile Ad-hoc Network (MANET) routing protocols is essential for selecting the appropriate protocol for a network. Many routing protocols and different simulation tools have been proposed to address this task. This paper introduces an overview of MANET routing protocols and evaluates MANET performance using three reactive protocols, namely Dynamic Source Routing (DSR), Ad-Hoc On-demand Distance Vector (AODV), and Dynamic MANET On-Demand (DYMO), in three different scenarios. These scenarios are designed carefully to mimic real situations using OMNET++. The first scenario evaluates the performance when the number of nodes increases. In the second scenario, the performance of the network is evaluated in the presence of obstacles. In the third scenario, a group of nodes is suddenly shut down during communication. The network evaluation is carried out in terms of packets received, end-to-end delay, transmission count or routing overhead, throughput, and packet ratio.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_19-Performance_Evaluation_of_different_Mobile_Ad_Hoc_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Recognition of Moving Video Objects: Kalman Filtering with Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120118</link>
        <id>10.14569/IJACSA.2021.0120118</id>
        <doi>10.14569/IJACSA.2021.0120118</doi>
        <lastModDate>2021-01-30T14:38:13.6400000+00:00</lastModDate>
        
        <creator>Hind Rustum Mohammed</creator>
        
        <creator>Zahir M. Hussain</creator>
        
        <subject>Convolution Neural Network (CNN); Kalman filter; moving object; video tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Research in object recognition has lately found that Deep Convolutional Neural Networks (CNN) provide a breakthrough in detection scores, especially in video applications. This paper presents an approach for object recognition in videos by combining a Kalman filter with a CNN. The Kalman filter is first applied for detection, removing the background and then cropping the object. Kalman filtering performs three important functions: predicting the future location of the object, reducing noise and interference from incorrect detections, and associating multiple objects to tracks. After detecting and cropping the moving object, a CNN model predicts the category of the object. The CNN model is built on more than 1000 images of humans, animals, and other objects, with an architecture that consists of ten layers. The first layer, the input image, is of size 100 * 100. The convolutional layer contains 20 masks of size 5 * 5, with a ruling layer to normalize data, followed by max-pooling. The proposed hybrid algorithm has been applied to 8 different videos with a total duration of 15.4 minutes, containing 23100 frames. In this experiment, recognition accuracy reached 100%, and the proposed system outperforms six existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_18-Detection_and_Recognition_of_Moving_Video_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stanza Type Identification using Systematization of Versification System of Hindi Poetry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120117</link>
        <id>10.14569/IJACSA.2021.0120117</id>
        <doi>10.14569/IJACSA.2021.0120117</doi>
        <lastModDate>2021-01-30T14:38:13.6230000+00:00</lastModDate>
        
        <creator>Milind Kumar Audichya</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Chhand; computational linguistics; Hindi; metadata; poetry; prosody; stanza; verse</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Poetry covers a vast part of the literature of any language. Similarly, Hindi poetry constitutes a massive portion of Hindi literature. In Hindi poetry construction, it is necessary to observe various verse-writing rules. This paper focuses on automatic metadata generation from such poems through advanced, systematic, prosody rule-based modeling and detection procedures, integrated with computational linguistics and specially designed for Hindi poetry. The paper covers various challenges and the best possible solutions to those challenges, describing the methodology to automatically generate metadata for “Chhand” based on the poems’ stanzas. It also provides some advanced information and techniques for metadata generation for “Muktak Chhands”. The rules of the “Chhands” incorporated in this research were identified, verified, and modeled from a computational linguistics perspective for the very first time, which required a lot of effort and time. In this research work, 111 different “Chhand” rules were found. This paper presents rule-based modeling of all of these “Chhands”. Of all the modeled “Chhands”, the research work covers 53 “Chhands” for which at least 20 to 277 examples were found and used for automatic processing of the data for metadata generation. For this research work, the automatic metadata generator processed 3120 UTF-8 based inputs of 53 Hindi “Chhand” types, achieving 95.02% overall accuracy with an overall failure rate of 4.98%. The minimum time taken to process a “Chhand” for metadata generation was 1.12 seconds, and the maximum was 91.79 seconds.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_17-Stanza_Type_Identification_using_Systematization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Full Direction Local Neighbors Pattern (FDLNP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120116</link>
        <id>10.14569/IJACSA.2021.0120116</id>
        <doi>10.14569/IJACSA.2021.0120116</doi>
        <lastModDate>2021-01-30T14:38:13.5930000+00:00</lastModDate>
        
        <creator>Maher Alrahhal</creator>
        
        <creator>Supreethi K.P</creator>
        
        <subject>Content-Based image retrieval; full direction local neighbor patterns; local neighbor pattern; gray-level co-occurrence matrix; ensemble classifiers; k-means clustering; hadoop</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In this paper, we propose the Full Direction Local Neighbor Pattern (FDLNP) algorithm, a novel method for Content-Based Image Retrieval. FDLNP consists of several steps, starting with generating Max and Min Quantizers, followed by building two matrix types (the Eight Neighbors Euclidean Decimal Coding matrix and the Full Direction Matrices). After that, we extract a Gray-Level Co-occurrence Matrix (GLCM) from those matrices to derive the important features from each GLCM matrix, and finish by merging the output of the previous steps with the Local Neighbor Patterns (LNP) histogram. To decrease the feature vector length, we propose five extensions of FDLNP that select specific direction matrices. Our results demonstrate the effectiveness of the proposed algorithm on color and texture databases, compared with recent works, with regard to Precision, Recall, mean Average Precision (mAP), and Average Retrieval Rate (ARR). To enhance image retrieval accuracy, we propose a novel framework that combines the image retrieval system with clustering and classification algorithms. Moreover, we propose a distributed model that uses our FDLNP method with Hadoop to gain the ability to process a huge number of images in a reasonable time.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_16-Full_Direction_Local_Neighbors_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Amalgamation of Machine Learning and Slice-by-Slice Registration of MRI for Early Prognosis of Cognitive Decline</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120115</link>
        <id>10.14569/IJACSA.2021.0120115</id>
        <doi>10.14569/IJACSA.2021.0120115</doi>
        <lastModDate>2021-01-30T14:38:13.5770000+00:00</lastModDate>
        
        <creator>Manju Jain</creator>
        
        <creator>C.S. Rai</creator>
        
        <creator>Jai Jain</creator>
        
        <creator>Deepak Gambhir</creator>
        
        <subject>Brain atrophy; registration; Freesurfer; GLCM; texture features; FDR; decision support system; SVM; AdaBoost; Random Forest; Bagging; KNN; Naive Bayes; classification; hyperparameters; GridsearchCV; Sklearn; Python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Brain atrophy is the degradation of brain cells and tissues to the extent that it is clearly evident during the Mini-Mental State Examination and other psychological analyses. It is an alarming state of the human brain that progressively results in Alzheimer’s disease, which is not curable. However, timely detection of brain atrophy can help millions of people before they reach the state of Alzheimer’s. In this study, we analyzed the longitudinal structural MRI of older adults aged 42 to 96 from the OASIS 3 Open Access Database. The nth slice of one subject does not match the nth slice of another subject because the head position under the magnetic field is not synchronized. Since a radiologist analyzes MRI image data slice-wise, our system also compares the MRI images slice-wise; we devised a method of slice-by-slice registration by deriving the mid-slice location in each MRI image so that slices from different MRI images can be compared with the least error. Machine learning helps exploit the information available in an abundance of data and can detect patterns that indicate particular events and states. Each MRI slice was analyzed using simple statistical determinants and Gray Level Co-Occurrence Matrix based statistical texture features from whole brain MRI images. The study explored varied classifiers, namely Support Vector Machine, Random Forest, K-nearest neighbor, Naive Bayes, AdaBoost, and Bagging Classifier methods, to predict how normal brain atrophy differs from brain atrophy causing cognitive impairment. Different hyperparameters of the classifiers were tuned to get the best results. The study indicates that Support Vector Machine and AdaBoost are the most promising classifiers for automatic medical image analysis and early detection of brain diseases. AdaBoost gives an accuracy of 96.76% with 95.87% specificity, 87.37% sensitivity, and a receiver operating characteristic curve accuracy of 96.3%. The SVM gives an accuracy of 96% with 92% specificity, 87% sensitivity, and a receiver operating characteristic curve accuracy of 95.05%.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_15-Amalgamation_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Profiling for Malaysia Online Retail Industry using K-Means Clustering and RM Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120114</link>
        <id>10.14569/IJACSA.2021.0120114</id>
        <doi>10.14569/IJACSA.2021.0120114</doi>
        <lastModDate>2021-01-30T14:38:13.5600000+00:00</lastModDate>
        
        <creator>Tan Chun Kit</creator>
        
        <creator>Nurulhuda Firdaus Mohd Azmi</creator>
        
        <subject>Customer Profiling; LRFMP; RFM; Data Mining; K-Means Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Malaysia&#39;s online retail industry has grown increasingly sophisticated over the past years and is not expected to stop growing in the following years. Meanwhile, customers are becoming smarter buyers. Online retailers have to identify and understand their customers&#39; needs to provide appropriate services and products to demanding customers and to attract new customers. Customer profiling is a method that helps retailers understand their customers. This study examines the usefulness of the LRFMP model (Length, Recency, Frequency, Monetary, and Periodicity), the models comprising subsets of its variables, and its predecessor, the RFM model, using the Silhouette Index test. Furthermore, an automated Elbow Method was employed and its usefulness compared against conventional visual analytics. As a result, the RM model was selected as the finest model for performing K-Means Clustering in the given context. Although the LRFMP model proved less useful in K-Means Clustering, some of its variables remained useful in the customer profiling process by providing extra information on cluster characteristics. Moreover, the effect of sample size on cluster validity was investigated. Lastly, the limitations and recommendations for future research are discussed as a bridge to future works.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_14-Customer_Profiling_for_Malaysia_Online_Retail_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of the Factors Affecting e-Commerce Use in Turkey by Categorical Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120113</link>
        <id>10.14569/IJACSA.2021.0120113</id>
        <doi>10.14569/IJACSA.2021.0120113</doi>
        <lastModDate>2021-01-30T14:38:13.5300000+00:00</lastModDate>
        
        <creator>&#214;mer Alkan</creator>
        
        <creator>Hasan K&#252;&#231;&#252;koglu</creator>
        
        <creator>G&#246;khan Tutar</creator>
        
        <subject>Electronic commerce; online shopping; online purchase; e-commerce; Turkey; multinomial probit regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>e-Commerce use is a subject that has been frequently studied in recent years. The aim of this study was to detect the socio-demographic and economic factors affecting e-commerce use by individuals in Turkey. The micro dataset obtained from the Information and Communication Technology (ICT) Usage Survey in Households, performed by the Turkish Statistical Institute in 2014-2018, was employed in this study. Multinomial logistic and multinomial probit regression analyses were performed to detect the factors affecting e-commerce use by individuals in Turkey. The data of 129,643 individuals who participated in the ICT Usage Survey in Households in 2014-2018 were employed in the regression analyses. According to the analysis results, the variables of survey year, age, gender, educational level, occupation, income level, region, and household size were found to affect online shopping. The results of the study indicated that e-commerce use was gradually increasing. More educated and younger individuals, as well as individuals living in relatively more developed regions, were more inclined toward online shopping. Policies should also be developed to increase e-commerce use among less educated individuals and individuals over middle age. In particular, small and medium-sized businesses (SMB) should pay more attention to the use of e-commerce in order to increase their activities by taking these situations into consideration. Indeed, the importance of e-commerce use has become evident in epidemics and pandemics such as COVID-19, which cause people to lock themselves at home.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_13-Modeling_of_the_Factors_Affecting_e_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ground Control Point Generation from Simulated SAR Image Derived from Digital Terrain Model and its Application to Texture Feature Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120112</link>
        <id>10.14569/IJACSA.2021.0120112</id>
        <doi>10.14569/IJACSA.2021.0120112</doi>
        <lastModDate>2021-01-30T14:38:13.5000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Ground Control Point: GCP; Digital Terrain Model: DTM; scattering model; complex dielectric constant; texture feature; matching success rate; GCP chip</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Ground Control Point (GCP) generation from a simulated topographic map derived from a Digital Terrain Model (DTM) is proposed. Texture feature extraction is also attempted from the simulated image. In this study, the simulated image is derived from elevation data only, under the assumptions of a simple scattering model without consideration of the complex dielectric constant of the targets of interest. The performance of the acquired GCPs was evaluated using several measures together with texture features of GCP chip images. This paper describes the details of the proposed method for acquisition of GCPs and presents simulated results on the relationship between texture features and the GCP matching success rate, corresponding to the cross correlation between reference and distorted GCP chip images.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_12-Ground_Control_Point_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of the Components and Elements of Computational Thinking for Teaching and Learning Programming using the Fuzzy Delphi Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120111</link>
        <id>10.14569/IJACSA.2021.0120111</id>
        <doi>10.14569/IJACSA.2021.0120111</doi>
        <lastModDate>2021-01-30T14:38:13.5000000+00:00</lastModDate>
        
        <creator>Karimah Mohd Yusoff</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <creator>Noorazean Mohd Ali</creator>
        
        <subject>Expert consensus; focus group; problem-solving; components; elements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Computational Thinking is a phrase employed to describe the growing focus on developing students&#39; knowledge of designing computational solutions to problems, algorithmic thinking, and coding. The difficulty of learning computer programming is a challenge for students and teachers. Students&#39; ability in programming is closely related to their problem-solving skills and cognitive abilities. Even though computational thinking is a 21st-century problem-solving skill, its use for programming needs to be planned systematically, taking into account the appropriate components and elements. Therefore, this study aims to validate the main components and elements of computational thinking for solving problems in programming. At the beginning of the study, the researchers conducted a literature review to determine the components and elements of computational thinking that could be used in teaching and learning programming. This validation involved the consensus of a group of experts using the Fuzzy Delphi method. The data were analysed using the Fuzzy Delphi technique, where the experts individually evaluated the components and elements agreed upon in a prior discussion. A group of 15 experts validated 14 components and 35 elements. The results showed that all components and elements reached a threshold (d) value of less than 0.2, a percentage of agreement exceeding 75%, and a Fuzzy score (A) exceeding 0.5. The findings indicate that the proposed main components and elements of computational thinking are suitable for problem-solving approaches in programming.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_11-Validation_of_the_Components_and_Elements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Depression of the South Korean Elderly using SMOTE and an Imbalanced Binary Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120110</link>
        <id>10.14569/IJACSA.2021.0120110</id>
        <doi>10.14569/IJACSA.2021.0120110</doi>
        <lastModDate>2021-01-30T14:38:13.4830000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Random forests; gradient boosting machine; SMOTE; undersampling; imbalanced data; oversampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Since the number of healthy people is much larger than that of ill people, the problem of imbalanced data is highly likely to occur when predicting depression among elderly people living in the community using big data. When raw data with imbalanced class ratios are analyzed directly, without supplementary techniques such as a sampling algorithm, the performance of machine learning can decrease owing to prediction errors in the analysis process. It is therefore necessary to use a data sampling technique to overcome this imbalanced data issue. Consequently, this study sought to identify an effective way of processing imbalanced data for the development of ensemble-based machine learning by comparing the performance of sampling methods on the depression data of elderly people living in South Korean communities, which had quite imbalanced class ratios. This study developed models for predicting depression among the elderly living in the community using a logistic regression model, a gradient boosting machine (GBM), and a random forest, and compared their accuracy, sensitivity, and specificity to evaluate their prediction performance. This study analyzed 4,085 elderly people (≥60 years old) living in the community. The depression data used in this study had an imbalance issue: the result of the depression screening test showed that 87.5% of subjects did not have depression, while 12.5% did. This study used oversampling, undersampling, and SMOTE methods to overcome the imbalance problem of the binary dataset, and the prediction performance (accuracy, sensitivity, and specificity) of each sampling method was compared. The results of this study confirmed that the SMOTE-based random forest algorithm, showing the highest accuracy (with a sensitivity ≥ 0.6 and a specificity ≥ 0.6), achieved the best prediction performance among the random forest, GBM, and logistic regression analyses. Further studies are needed to compare the accuracy of SMOTE, undersampling, and oversampling for imbalanced data with high-dimensional y-variables.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_10-Predicting_the_Depression_of_the_South_Korean_Elderly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Artificial Intelligence-enabled Software-defined Networks in Infrastructure and Operations: Trends and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120109</link>
        <id>10.14569/IJACSA.2021.0120109</id>
        <doi>10.14569/IJACSA.2021.0120109</doi>
        <lastModDate>2021-01-30T14:38:13.4670000+00:00</lastModDate>
        
        <creator>Mohammad Riyaz Belgaum</creator>
        
        <creator>Zainab Alansari</creator>
        
        <creator>Shahrulniza Musa</creator>
        
        <creator>Muhammad Mansoor Alam</creator>
        
        <creator>M. S. Mazliham</creator>
        
        <subject>Artificial intelligence; infrastructure and operations; software-defined network; virtualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The emerging technologies trending in information and communication technology are tuning enterprises for the better. The existing infrastructure and operations (I&amp;O) support enterprises with their services and functionalities, considering the diverse requirements of end-users. However, they are not free of challenges and issues to address as technology has advanced. This paper explains the impact of artificial intelligence (AI) on enterprises using software-defined networking (SDN) in I&amp;O. The fusion of artificial intelligence with software-defined networking in infrastructure and operations makes it possible to automate processes based on experience and provides opportunities for management to make quick decisions. However, this fusion has many challenges to be addressed. This research discusses the trends and challenges impacting infrastructure and operations, the role of AI-enabled SDN in I&amp;O, and the benefits it provides that influence the directional path. Furthermore, the challenges to be addressed in implementing AI-enabled SDN in I&amp;O point to future directions to explore.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_9-Impact_of_Artificial_Intelligence_Enabled_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimize the Cost of Resources in Federated Cloud by Collaborated Resource Provisioning and Most Cost-effective Collated Providers Resource First Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120108</link>
        <id>10.14569/IJACSA.2021.0120108</id>
        <doi>10.14569/IJACSA.2021.0120108</doi>
        <lastModDate>2021-01-30T14:38:13.4530000+00:00</lastModDate>
        
        <creator>V Pradeep Kumar</creator>
        
        <creator>Kolla Bhanu Prakash</creator>
        
        <subject>Cloud computing; federated cloud; collaborated resource provisioning; optimized cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Cloud computing works as the best solution for providing many of its services to cloud consumer agents with varied requests for large computational VMs with large storage capacity. The instance requests of cloud consumers change dynamically according to their application usage requirements and the demand for business growth, and a single-vendor cloud becomes a constraint in satisfying these needs. A federated cloud can contribute solution approaches to meet these dynamic resource-instance requests of cloud consumers. The interoperability of clouds was made realistic with cloud federation. This paper provides an optimized solution approach in which a set of collaborated cloud providers offers services to satisfy consumer agents&#39; multiple requests. It presents the two-phase collaborated resource provisioning (CCRP) approach and the Most Cost-Effective Collated Providers Resources First (MCECPRF) algorithm. The algorithm’s efficiency has been tested with a specific dataset for optimizing the cost for cloud consumer agents, and the paper analyzes the cancellation of requests and the decision time for provisioning different VM configurations within specific time slots.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_8-Optimize_the_Cost_of_Resources_in_Federated_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering K-Means Algorithms and Econometric Lethality Model by Covid-19, Peru 2020</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120107</link>
        <id>10.14569/IJACSA.2021.0120107</id>
        <doi>10.14569/IJACSA.2021.0120107</doi>
        <lastModDate>2021-01-30T14:38:13.4370000+00:00</lastModDate>
        
        <creator>Javier Pedro Flores Arocutipa</creator>
        
        <creator>Jorge Jinchu&#241;a Huallpa</creator>
        
        <creator>Gamaniel Carbajal Navarro</creator>
        
        <creator>Lu&#237;s Delf&#237;n Bermejo Peralta</creator>
        
        <subject>Infected; lethality; COVID 19 cycle; razing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Objective: The study examines how the COVID-19 wave unfolded in Peru: where and when it began, and where and when it ended. It also considers the shortcomings detected as the country faced the disease, in particular that, as an emerging country, very little could be done to confront it. The wave began in May, peaked in August with the greatest number of deaths, and then declined. Methodology: Basic research at the explanatory level, using SINADEF data by region from the situation room, to obtain the number of deaths between January and September of 2020 and 2019. Results: A Pearson correlation of 0.94 was found between infected and deceased cases. The total death toll model depends on Lima, Hu&#225;nuco, and Piura. The differences between the deaths of 2019 and 2020 were corroborated with ANOVA, where a two-tailed significance of 0.042 was obtained. The COVID cycle identified by the clustering algorithm model shows that in 44.4% of the nine months, between May and August, the highest lethality was generated. Conclusion: It is shown that COVID devastated regions of Peru. The model generated by the K-Means algorithm indicates that the COVID-19 cycle began in March, reached its highest peak of deaths, and then descended.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_7-Clustering_K_Means_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-Forensic Determination of the Accuracy of International COVID-19 Reporting: Using Zipf’s Law for Pandemic Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120106</link>
        <id>10.14569/IJACSA.2021.0120106</id>
        <doi>10.14569/IJACSA.2021.0120106</doi>
        <lastModDate>2021-01-30T14:38:13.4070000+00:00</lastModDate>
        
        <creator>Aamo Iorliam</creator>
        
        <creator>Anthony T S Ho</creator>
        
        <creator>Santosh Tirunagari</creator>
        
        <creator>David Windridge</creator>
        
        <creator>Adekunle Adeyelu</creator>
        
        <creator>Samera Otor</creator>
        
        <creator>Beatrice O. Akumba</creator>
        
        <subject>COVID-19; power-law; pandemic; Zipf’s Law; WHO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Severe outbreaks of infectious disease occur throughout the world, with some reaching the level of international pandemic: Coronavirus (COVID-19) is the most recent to do so. In this paper, a mechanism is set out using Zipf’s law to establish the accuracy of international reporting of COVID-19 cases via a determination of whether an individual country’s COVID-19 reporting follows a power-law for confirmed, recovered, and death cases of COVID-19. The Zipf’s law probabilities (P-values) for COVID-19 confirmed cases show that Uzbekistan has the highest P-value of 0.940, followed by Belize (0.929) and Qatar (0.897). For COVID-19 recovered cases, Iraq had the highest P-value of 0.901, followed by New Zealand (0.888) and Austria (0.884). Furthermore, for COVID-19 death cases, Bosnia and Herzegovina had the highest P-value of 0.874, followed by Lithuania (0.843) and Morocco (0.825). China, where the COVID-19 pandemic began, is a significant outlier, recording P-values lower than 0.1 for the confirmed, recovered, and death cases. This raises important questions, not only for China, but also for any country whose data exhibit P-values below this threshold. The main application of this work is to serve as an early warning for the World Health Organization (WHO) and other health regulatory bodies to perform more investigations in countries where COVID-19 datasets deviate significantly from Zipf’s law. To this end, this paper provides a tool for illustrating Zipf’s law P-values on a global map in order to convey the geographic distribution of reporting anomalies.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_6-Data_Forensic_Determination_of_the_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Physical Impairment Prediction Model for Korean Elderly People using Synthetic Minority Over-Sampling Technique and XGBoost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120105</link>
        <id>10.14569/IJACSA.2021.0120105</id>
        <doi>10.14569/IJACSA.2021.0120105</doi>
        <lastModDate>2021-01-30T14:38:13.3900000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Random forest; XGBoost; GBM; gradient boosting machine; physical impairment prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Older people&#39;s &#39;physical functioning&#39; is a key factor of active ageing as well as a major factor in determining the quality of life and the need for long-term care in old age. Previous studies that identified factors related to ADL mostly used regression analysis to predict groups at high risk of physical impairment. Regression analysis is useful for confirming individual risk factors but has limitations in capturing multiple risk factors. As methods for resolving this limitation of regression models, machine learning ensemble boosting models such as random forest and eXtreme Gradient Boosting (XGBoost) are widely used. Nonetheless, the prediction performance of XGBoost, such as its accuracy and sensitivity, remains to be verified by follow-up studies. This article proposes an effective method of dealing with imbalanced data for the development of ensemble-based machine learning by comparing the performance of disease-data sampling methods. This study analyzed 3,351 older people aged 65 or above who resided in local communities and completed the survey. As machine learning models to predict physical impairment in old age, this study compared the logistic regression model, XGBoost, and random forest with respect to the predictive performance of accuracy, sensitivity, and specificity. This study selected as the final model the one whose sensitivity and specificity were 0.6 or above and whose accuracy was highest. As a result, the synthetic minority over-sampling technique (SMOTE)-based XGBoost model, whose accuracy, sensitivity, and specificity were 0.67, 0.81, and 0.75, respectively, showed the best predictive performance. The results of this study suggest that when developing a predictive model using imbalanced data such as disease data, it is efficient to use the SMOTE-based XGBoost model.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_5-Development_of_a_Physical_Impairment_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Architectures and Techniques for Multi-organ Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120104</link>
        <id>10.14569/IJACSA.2021.0120104</id>
        <doi>10.14569/IJACSA.2021.0120104</doi>
        <lastModDate>2021-01-30T14:38:13.3730000+00:00</lastModDate>
        
        <creator>Valentin Ogrean</creator>
        
        <creator>Alexandru Dorobantiu</creator>
        
        <creator>Remus Brad</creator>
        
        <subject>Deep Learning; Multi-Organ Segmentation; Fully Convolutional Neural Networks (FCNs); Generative Adversarial Networks (GANs); Recurrent Neural Networks (RNNs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Deep learning architectures used for automatic multi-organ segmentation in the medical field have gained increased attention in recent years, as their results and achievements have outweighed older techniques. Owing to improvements in computer hardware and the development of specialized network designs, deep learning segmentation presents exciting developments and opportunities for future research. We have therefore compiled a review of the most interesting deep learning architectures applicable to medical multi-organ segmentation. We have summarized over 50 contributions, most of them published within the last three years. The papers were grouped into three categories based on architecture: “Convolutional Neural Networks” (CNNs), “Fully Convolutional Neural Networks” (FCNs), and hybrid architectures that combine multiple designs, including “Generative Adversarial Networks” (GANs) or “Recurrent Neural Networks” (RNNs). Afterwards, we present the most used multi-organ datasets, and we conclude with a general discussion of current shortcomings and potential future research paths.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_4-Deep_Learning_Architectures_and_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application-based Evaluation of Automatic Terminology Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120103</link>
        <id>10.14569/IJACSA.2021.0120103</id>
        <doi>10.14569/IJACSA.2021.0120103</doi>
        <lastModDate>2021-01-30T14:38:13.3600000+00:00</lastModDate>
        
        <creator>Marija Brkic Bakaric</creator>
        
        <creator>Nikola Babic</creator>
        
        <creator>Maja Matetic</creator>
        
        <subject>Terminology extraction; hybrid methods; evaluation; precision; recall; gold standard; language resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>The aim of this paper is to evaluate the performance of several automatic term extraction methods that can be easily utilized by translators themselves. The experiments are conducted on German newspaper articles in the domain of politics on the topic of Brexit; however, they can easily be replicated on any other topic or language as long as it is supported by all three tools used. The paper first provides an extensive introduction to the field of automatic terminology extraction. Next, the selected terminology extraction methods are assessed using precision with respect to a gold standard compiled on the same corpus. Moreover, the corpus has been completely annotated to allow for the calculation of recall. The effects of using five cut-off points are examined in order to find an optimal value to be used in translation practice.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_3-Application_based_Evaluation_of_Automatic_Terminology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human-Robot Interaction and Collaboration (HRI-C) Utilizing Top-View RGB-D Camera System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120102</link>
        <id>10.14569/IJACSA.2021.0120102</id>
        <doi>10.14569/IJACSA.2021.0120102</doi>
        <lastModDate>2021-01-30T14:38:13.3430000+00:00</lastModDate>
        
        <creator>Tariq Tashtoush</creator>
        
        <creator>Luis Garcia</creator>
        
        <creator>Gerardo Landa</creator>
        
        <creator>Fernando Amor</creator>
        
        <creator>Agustin Nicolas Laborde</creator>
        
        <creator>Damian Oliva</creator>
        
        <creator>Felix Safar</creator>
        
        <subject>Robotics manipulator; robot end-effector; computer vision; human-robot interaction (HRI); human-robot collaboration (HRC); robotics safety; scorbot; Kinect; RGB camera; industrial system modeling; manufacturing systems design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>In this study, a smart and affordable system that utilizes an RGB-D camera to measure the exact position of an operator with respect to an adjacent robotic manipulator was developed. This technology was implemented in a simulated human operation with an automated manufacturing robot to achieve two goals: enhancing the safety measures around the robot by adding an affordable smart system for human detection and robot control, and developing a system that allows human-robot collaboration to complete a predefined task. The system utilized an Xbox Kinect V2 sensor/camera and a Scorbot ER-V Plus to model and mimic the selected applications. To achieve these goals, a geometric model of the Scorbot and Xbox Kinect V2 was developed, a robotics joint calibration was applied, a background segmentation algorithm was utilized to detect the operator, a dynamic binary mask for the robot was implemented, and the efficiency of both systems was analyzed based on response time and localization error. The first application, the Add-on Safety Device, aims to monitor the workspace and control the robot to avoid any collisions when an operator enters or gets closer. This application will reduce or remove physical barriers around robots, expand the physical work area, reduce proximity limitations, and enhance human-robot interaction (HRI) in an industrial environment while sustaining a low cost. The system was able to respond to human intrusion to prevent any collision within 500 ms on average, and it was found that the system’s bottleneck was the PC-robot inter-communication speed. The second application was the development of a successful collaborative scenario between a robot and a human operator, in which the robot deposits an object on the operator’s hand, mimicking a real-life human-robot collaboration (HRC) task. The system was able to detect the operator’s hand and its location and then command the robot to place an object on the hand; the system placed the object within a mean error of 2.4 cm, and its limitation was the internal variables and the data transmission speed between the robot controller and the main computer. These results are encouraging, and ongoing work aims to experiment with different operations and implement gesture detection in real-time collaboration tasks while keeping the human operator safe and predicting their behavior.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_2-Human_Robot_Interaction_and_Collaboration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Traffic Shaping Algorithm for SDN-Sliced Networks using a New WFQ Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2021</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2021.0120101</link>
        <id>10.14569/IJACSA.2021.0120101</id>
        <doi>10.14569/IJACSA.2021.0120101</doi>
        <lastModDate>2021-01-30T14:38:13.2800000+00:00</lastModDate>
        
        <creator>Ronak Al-Haddad</creator>
        
        <creator>Erika Sanchez Velazquez</creator>
        
        <creator>Arooj Fatima</creator>
        
        <creator>Adrian Winckles</creator>
        
        <subject>Network congestion; SDN; slicing; QoS; queueing; OpenFlow (OF); Weighted Fair Queuing (WFQ); SPSS Analysis of Variance (ANOVA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 12(1), 2021</description>
        <description>Managing traditional networks comes with a number of challenges due to their limitations, in particular because there is no central control. Software-Defined Networking (SDN) is a relatively new idea in networking, which enables networks to be centrally controlled or programmed using software applications. Novel traffic shaping (TS) algorithms are proposed for the implementation of a Quality of Service (QoS) bandwidth management technique to optimise performance and solve network congestion problems. Specifically, two algorithms, namely “Packet tagging, Queueing and Forwarding to Queues” and “Allocating Bandwidth”, are proposed for implementing a Weighted Fair Queuing (WFQ) technique, as a new methodology in an SDN-sliced testbed to reduce congestion and facilitate smooth traffic flow. This methodology, aimed at improving QoS, does two things simultaneously: first, it makes traffic conform to an individual rate, using WFQ to select the appropriate queue for each packet; second, it is combined with buffer management, which decides whether to place a packet into the queue according to the proposed algorithm defined for this purpose. In this way, latency and congestion remain in check, thus meeting the requirements of real-time services. The Differentiated Services (DiffServ) protocol is used to define classes in order to make network traffic patterns more sensitive to the video, audio and data traffic classes, by specifying precedence for each traffic type. The SDN networks are controlled by floodlight controller(s) and FlowVisor, the slicing controller, which characterise the behaviour of such networks. The network topology is then modelled and simulated via the Mininet testbed emulator platform. To achieve the highest level of accuracy, the SPSS statistical package’s Analysis of Variance (ANOVA) is used to analyse particular traffic measures, namely throughput, delay and jitter as separate performance indices, all of which contribute to QoS. The results show that the TS algorithms do, indeed, permit more advanced allocation of bandwidth, and that they reduce critical delays compared to standard FIFO queueing in SDN.</description>
        <description>http://thesai.org/Downloads/Volume12No1/Paper_1-A_Novel_Traffic_Shaping_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of EDM Tools and Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111295</link>
        <id>10.14569/IJACSA.2020.0111295</id>
        <doi>10.14569/IJACSA.2020.0111295</doi>
        <lastModDate>2020-12-31T12:59:33.9630000+00:00</lastModDate>
        
        <creator>Eman Alshehri</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Abdullah Baz</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <subject>Educational Data Mining (EDM); students’ performance; prediction; higher education; WEKA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Several higher educational institutions are adopting the strategy of predicting students’ performance throughout the academic years. Such a practice ensures not only better academic outcomes but also helps the institutions reorient their curriculums and teaching pedagogies so as to add to the students’ learning curve. Educational Data Mining (EDM) has risen as a useful technology in this league. EDM techniques are now being used for predicting the enrolment of students in a specific course, detecting irregular grades, predicting students’ performance, analyzing and visualizing data, and providing feedback for overall improvement in the academic sphere. This paper reviews the studies related to EDM, including the approaches, data sets, tools, and techniques that have been used in those studies, and points out the most efficient techniques. This review uses true prediction accuracy as a standard for the comparison of different techniques for each EDM application in the surveyed literature. The results show that J48 and K-means are the most effective techniques for predicting students’ performance. Furthermore, the results also indicate that Bayesian and Decision Tree classifiers are the most widely used techniques for predicting students’ performance. In addition, this paper highlights that the most widely used tool was WEKA, with approximately 75% frequency. The present study’s empirical assessments would be a significant contribution to the domain of EDM. The comparison of different tools and techniques presented in this study is corroborative and conclusive; thus the results will prove to be an effective reference point for practitioners in this field. As a much-needed technological asset in the present-day educational context, the study also suggests that additional surveys be conducted for each of the EDM applications, taking into account more standards to establish the best techniques more accurately.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_95-A_Comparison_of_EDM_Tools_and_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Hospitals Hygiene Rate during COVID-19 Pandemic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111294</link>
        <id>10.14569/IJACSA.2020.0111294</id>
        <doi>10.14569/IJACSA.2020.0111294</doi>
        <lastModDate>2020-12-31T12:59:33.9470000+00:00</lastModDate>
        
        <creator>Abdulrahman M. Qahtani</creator>
        
        <creator>Bader M. Alouffi</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Samah Abuayeid</creator>
        
        <creator>Abdullah Baz</creator>
        
        <subject>COVID-19; machine learning; hospitals hygiene; World Health Organization (WHO); personal protective equipment; K-means clustering; Naive Bayes; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The COVID-19 pandemic has attracted global attention with the increasing cases across the whole world. Raising awareness of hygiene procedures among hospital staff and society became the main concern of the World Health Organization (WHO). The COVID-19 pandemic has also encouraged many researchers in different fields to investigate ways to support the efforts of hospitals and their health practitioners. The main aim of this research is to predict hospitals’ hygiene rate during COVID-19 using the COVID-19 Nursing Home Dataset. We propose a feature extraction approach and compare the results estimated by the K-means clustering algorithm and three classification algorithms: random forest, decision tree, and Naive Bayes, for predicting the hospitals’ hygiene rate during COVID-19. The results show that the classification algorithms achieved better performance than K-means clustering, with Naive Bayes considered the best algorithm for achieving the research goal, with an accuracy of 98.1%. As a result, the research discovered that hospitals that offered weekly amounts of personal protective equipment (PPE) passed the personal quality test, which led to a decrease in the number of COVID-19 cases among hospital staff.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_94-Predicting_Hospitals_Hygiene_Rate_during_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Post Classification in the Social Networks using the Map-reduce Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111293</link>
        <id>10.14569/IJACSA.2020.0111293</id>
        <doi>10.14569/IJACSA.2020.0111293</doi>
        <lastModDate>2020-12-31T12:59:33.9330000+00:00</lastModDate>
        
        <creator>Abdoulaye SERE</creator>
        
        <creator>Jose Arthur OUEDRAOGO</creator>
        
        <creator>Boureima ZERBO</creator>
        
        <creator>Oumarou SIE</creator>
        
        <subject>Big Data; map-reduce; social network; cyber-security; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Wrongdoing is increasing through social media. Detecting it requires highlighting the most interesting topics in posts. This essential part of characterizing social network users can be done by a classification of posts. For this, we use a tuple of keywords and the Map-reduce algorithm for data collection and extraction. The main purpose is to realize software that will establish a network between social networks to extract data and to speed up the classification of posts. The proposed method consists of verifying a sequence of keywords in the posts, following a grammar, in order to determine classes. It allows the categorization of posts and the monitoring of social networks. The categorization facilitates the search for a particular post containing specific words. Thus, we contribute to increasing the capacity for wrongdoing prevention and strengthening cyber-security.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_93-Post_Classification_in_the_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Perceptual Matching based Deduplication Scheme using Gabor-ORB Filters for Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111292</link>
        <id>10.14569/IJACSA.2020.0111292</id>
        <doi>10.14569/IJACSA.2020.0111292</doi>
        <lastModDate>2020-12-31T12:59:33.9170000+00:00</lastModDate>
        
        <creator>Sonal Ayyappan</creator>
        
        <creator>C Lakshmi</creator>
        
        <subject>Perceptual matching; ORB feature extraction; Gabor filters; MRI scan; deduplication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In the ever-widening field of telemedicine, there is a greater need for intelligent methods to selectively choose data that are relevant enough to be transmitted over a network and checked remotely. By the very nature of medical imaging, a large amount of data is generated per imaging or scanning session. For instance, a Magnetic Resonance Imaging (MRI) scan consists of hundreds to thousands of images related to slices of the organ being scanned. But often, not all of these slices are of interest during the process of medical diagnosis by the medical practitioner. Not only does this result in the access of unwanted data remotely, but it can also put greater strain on the bandwidth available over the network. If the relevant images can be selected automatically without human intervention, ensuring great sensitivity, the above-mentioned issues can be alleviated. This paper proposes a novel method of perceptual matching and selection of relevant MRI images by using a deduplicating technique that combines a Gabor filter with the Oriented FAST and Rotated BRIEF (ORB) feature extraction technique on a vast set of MRI scan images. The outcome of this method is a set of relevant deduplicated MRI scan images, which can save bandwidth and will be easy for the medical practitioner to verify remotely.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_92-A_Perceptual_Matching_based_Deduplication_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Evaluation of CNN Architectures for Image Caption Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111291</link>
        <id>10.14569/IJACSA.2020.0111291</id>
        <doi>10.14569/IJACSA.2020.0111291</doi>
        <lastModDate>2020-12-31T12:59:33.8830000+00:00</lastModDate>
        
        <creator>Sulabh Katiyar</creator>
        
        <creator>Samir Kumar Borgohain</creator>
        
        <subject>Convolutional Neural Network (CNN); image caption generation; feature extraction; comparison of different CNNs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Aided by recent advances in Deep Learning, Image Caption Generation has seen tremendous progress over the last few years. Most methods use transfer learning to extract visual information, in the form of image features, with the help of pre-trained Convolutional Neural Network models, followed by transformation of the visual information using a Caption Generator module to generate the output sentences. Different methods have used different Convolutional Neural Network architectures and, to the best of our knowledge, there is no systematic study which compares the relative efficacy of different Convolutional Neural Network architectures for extracting the visual information. In this work, we have evaluated 17 different Convolutional Neural Networks on two popular Image Caption Generation frameworks: the first based on the Neural Image Caption (NIC) generation model and the second based on the Soft-Attention framework. We observe that the model complexity of a Convolutional Neural Network, as measured by the number of parameters, and the accuracy of the model on the Object Recognition task do not necessarily correlate with its efficacy in feature extraction for the Image Caption Generation task. We release the code at https://github.com/iamsulabh/cnn variants.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_91-Comparative_Evaluation_of_CNN_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Influence of Loss Function Usage at SIAMESE Network in Measuring Text Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111290</link>
        <id>10.14569/IJACSA.2020.0111290</id>
        <doi>10.14569/IJACSA.2020.0111290</doi>
        <lastModDate>2020-12-31T12:59:33.8700000+00:00</lastModDate>
        
        <creator>Suprapto </creator>
        
        <creator>Joseph A. Polela</creator>
        
        <subject>Sentence; similarity; triplet; contrastive; CNN; Siamese; dataframe</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In a text matching similarity task, a model takes two sequences of text as input and predicts a category or scale value to show their relationship. The developed model measures similarity, one such relationship, between those two texts. The model is a Siamese network that implements two copies of the same CNN, taking text_1 and text_2 as the respective inputs of the two CNN networks. The output of each CNN network is the feature vector of the corresponding text input; both outputs are then fed to a loss function to calculate the value of loss (i.e., similarity). This research implemented two types of loss functions, i.e., Triplet loss and Contrastive loss. The purpose of using these two types of loss functions was to see their influence on the measured similarity between the two texts being compared. The metrics used for this comparison are precision, recall, and F1-score. Experimental results on 1500 pairs of sentences, with the epoch value varied from 10 to 200 in increments of 10, showed that the Triplet loss function performed best at an epoch value of 180, with precision 0.8004, recall 0.6780, and F1-score 0.6713, while the Contrastive loss function performed best at an epoch value of 160, with precision 0.6463, recall 0.6440, and F1-score 0.6451. Thus, the Triplet loss function had a better influence than the Contrastive loss function in measuring the similarity between two given sentences.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_90-The_Influence_of_Loss_Function_Usage_at_SIAMESE_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Big Data Framework for Satellite Images Processing using Apache Hadoop and RasterFrames: A Case Study of Surface Water Extraction in Phu Tho, Viet Nam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111289</link>
        <id>10.14569/IJACSA.2020.0111289</id>
        <doi>10.14569/IJACSA.2020.0111289</doi>
        <lastModDate>2020-12-31T12:59:33.8530000+00:00</lastModDate>
        
        <creator>Dung Nguyen</creator>
        
        <creator>Hong Anh Le</creator>
        
        <subject>Satellite imagery; big data; water surface; Apache Hadoop; RasterFrames</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Earth data, collected from many sources such as remote sensing imagery, social media, and sensors, are growing tremendously. Among them, satellite imagery, which plays an important role in monitoring environmental and natural changes, is increasing exponentially in terms of both volume and speed. This paper introduces an approach to managing and analyzing such data sources based on Apache Hadoop and RasterFrames. First, it presents the architecture and the general flow of the proposed distributed framework. Based on this, we can implement and perform efficient computations on big data in parallel without moving data to a central computer, which might lead to network congestion. Finally, the paper presents a case study that analyzes the water surface of a Vietnamese region using the proposed platform.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_89-A_Big Data_Framework_for_Satellite_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of National Cybersecurity Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111288</link>
        <id>10.14569/IJACSA.2020.0111288</id>
        <doi>10.14569/IJACSA.2020.0111288</doi>
        <lastModDate>2020-12-31T12:59:33.8230000+00:00</lastModDate>
        
        <creator>Alexandra Santisteban Santisteban</creator>
        
        <creator>Lilian Ocares Cunyarachi</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Cybersecurity; national strategies; risks; threats; vulnerability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Nowadays, the use of information and communication technology has been incorporated in a general way into the daily life of a nation, allowing the optimization of its processes. However, with it come serious risks and threats that can affect cybersecurity because of the vulnerabilities exposed. In addition, several factors contribute to the proliferation of criminal actions against cybersecurity: the profitability offered by their exploitation in economic, political or other terms, the ease and low cost of the tools used to carry out attacks, and the ease of hiding the attacker make it possible for these activities to be carried out anonymously, from anywhere in the world and with impunity. The main objective of the research is to analyze and design National Cybersecurity Strategies to counter attacks. The methodology of this research was exploratory and descriptive. As a result of the research work, a design of National Cybersecurity Strategies was obtained after an in-depth analysis of the appropriate strategies, thus minimizing the different attacks that can be carried out.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_88-Analysis_of_National_Cybersecurity_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Traffic Accidents Injury Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111287</link>
        <id>10.14569/IJACSA.2020.0111287</id>
        <doi>10.14569/IJACSA.2020.0111287</doi>
        <lastModDate>2020-12-31T12:59:33.8070000+00:00</lastModDate>
        
        <creator>Mohamed K Nour</creator>
        
        <creator>Atif Naseer</creator>
        
        <creator>Basem Alkazemi</creator>
        
        <creator>Muhammad Abid Jamil</creator>
        
        <subject>Road Traffic Accidents (RTA) analytics; data mining; machine learning; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Road safety researchers working on road accident data have witnessed success in road traffic accident analysis through the application of data analytic techniques, though little progress has been made in the prediction of road injury. This paper applies advanced data analytics methods to predict injury severity levels and evaluates their performance. The study uses predictive modelling techniques to identify risks and key factors that contribute to accident severity. The study uses publicly available data from the UK Department for Transport that covers the period from 2005 to 2019. The paper presents an approach which is general enough that it can be applied to different data sets from other countries. The results show that tree-based techniques such as XGBoost outperform regression-based ones, such as ANN. In addition, the paper identifies interesting relationships and acknowledges issues related to the quality of the data.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_87-Road_Traffic_Accidents_Injury_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Computing Students’ Readiness to Online Learning for Understanding Software Engineering Foundations in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111286</link>
        <id>10.14569/IJACSA.2020.0111286</id>
        <doi>10.14569/IJACSA.2020.0111286</doi>
        <lastModDate>2020-12-31T12:59:33.7900000+00:00</lastModDate>
        
        <creator>Abdulaziz Alhubaishy</creator>
        
        <subject>Readiness to online learning; software-engineering education; improving online learning; university students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The spread of Coronavirus disease (COVID-19) has forced most universities/institutions around the world to transform their educational models (face-to-face and blended), adopting online educational environments as a temporary substitute. Consequently, all universities/institutions in Saudi Arabia have requested their students to continue the learning process using online environments. This transition has provided an opportunity to deeply investigate possible challenges as well as factors that influence the adoption of online learning as a future educational model for undergraduate students. This research measures current undergraduate students’ readiness for online learning and investigates factors that influence their level of readiness. Firstly, the research proposes the adoption of a validated multidimensional instrument to measure undergraduate students’ readiness for online learning in different universities. Secondly, the research elaborates the findings through an in-depth study that highlights the main obstacles that hinder computing students’ readiness to learn Software Engineering (SE) foundations using online learning. The research adopts survey research to measure students’ readiness and analyzes the data to extract the readiness levels for the different dimensions of the adopted instrument. Furthermore, interviews were conducted to identify the factors influencing computing students’ readiness levels regarding learning SE foundations. Results show that students’ readiness level for online learning is within the acceptable range, while some improvements are needed. Furthermore, the study found that students’ cognition, willingness, ignorance, and the amount of assistance and help they receive play a significant role in the success or failure of the adoption of learning SE foundations through an online environment.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_86-Factors_Influencing_Students_Readiness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Rule-based Approach toward Automating the Assessments of Academic Curriculum Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111285</link>
        <id>10.14569/IJACSA.2020.0111285</id>
        <doi>10.14569/IJACSA.2020.0111285</doi>
        <lastModDate>2020-12-31T12:59:33.7600000+00:00</lastModDate>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Tanweer Alam</creator>
        
        <creator>Mohamed Benaida</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ahmad Taleb</creator>
        
        <subject>Rule-based algorithm; curriculum mapping; student learning outcomes; program outcomes; assessment; curriculum matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Curriculum mapping is the blueprint of a successful academic program. It is progressively utilised in higher education as a monitoring tool in the current age of standard-based regulations and empowers program leaders and course instructors to align their curricula for the offered courses and the corresponding learning outcomes. It is often depicted by a two-dimensional matrix expressing the relationship between the students’ learning outcomes (i.e., SOs) and the courses. However, the mapping remains a challenging exercise, even for experienced program leaders. The complexity stems from the fact that mistakes are prone to happen during the mapping, and program leaders need to be aware of the rules and the acceptable practices of effective curriculum mapping. Besides, it is not straightforward to spot contradictions in the SO-course mappings. Consequently, this paper aims to tackle these challenges by investigating effective-mapping rules from existing curriculum mappings, which allows one to inspect the SO-course mappings, discover inefficiencies, and provide suggestions for improving the curriculum mapping. We identify the main mapping criteria and propose a rule-based algorithm for curriculum matrix assessment. This algorithm is implemented in an online application and evaluated through a user-based experiment relying on curriculum mapping experts. The findings have shed light on the promise of our approach.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_85-A_Rule_based_Approach_toward_Automating_the_Assessments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Modal RGB D Action Recognition with CNN LSTM Ensemble Deep Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111284</link>
        <id>10.14569/IJACSA.2020.0111284</id>
        <doi>10.14569/IJACSA.2020.0111284</doi>
        <lastModDate>2020-12-31T12:59:33.7600000+00:00</lastModDate>
        
        <creator>D. Srihari</creator>
        
        <creator>P. V. V. Kishore</creator>
        
        <subject>Human action recognition; RGB D video data; convolutional neural networks; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Human action recognition has transformed from a video processing problem into a multi-modal machine learning problem. The objective of this work is to perform multi-modal human action recognition with an ensemble hybrid network of CNN and LSTM layers. The proposed CNN-LSTM ensemble network is a two-stream framework, with one ensemble stream learning RGB sequences and the other depth. The proposed framework can learn both temporal and spatial dynamics in both RGB and depth modal action data. The hybrid network is found to be receptive to both spatial and temporal fields because of the hierarchical structure of CNNs and LSTMs. Finally, to test our proposed model, we used our own BVCAction3D dataset and three RGB D benchmark action datasets. The experiments were conducted on all the datasets using the proposed framework, which was found to be effective when compared to similar deep learning architectures.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_84-Multi_Modal_RGB_D_Action_Recognition_with_CNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SDN based Intrusion Detection and Prevention Systems using Manufacturer Usage Description: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111283</link>
        <id>10.14569/IJACSA.2020.0111283</id>
        <doi>10.14569/IJACSA.2020.0111283</doi>
        <lastModDate>2020-12-31T12:59:33.7300000+00:00</lastModDate>
        
        <creator>Noman Mazhar</creator>
        
        <creator>Rosli Salleh</creator>
        
        <creator>Mohammad Asif Hossain</creator>
        
        <creator>Muhammad Zeeshan</creator>
        
        <subject>Intrusion Detection and Prevention Systems (IDPS); Internet of Things (IoT); Software Defined Network (SDN); Machine Learning (ML); Deep learning (DL); Manufacturer Usage Description (MUD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The Internet of Things (IoT) is an emerging paradigm that integrates several technologies. An IoT network consists of many interconnected devices that include various sensors, actuators, services and other communicable objects. The increasing demand for IoT and its services has created several security vulnerabilities. Conventional security approaches like intrusion detection systems fall short of meeting the security challenges of IoT networks, due to the conventional technologies used in them. This article presents a survey of intrusion detection and prevention systems (IDPS), using state-of-the-art technologies, in the context of IoT security. An IDPS consists of two parts: an intrusion detection system and an intrusion prevention system. An intrusion detection system (IDS) is used to detect and analyze both inbound and outbound network traffic for malicious activities. An intrusion prevention system (IPS) can be aligned with an IDS by proactively inspecting a system’s incoming traffic to mitigate harmful requests. The combination of IDS and IPS is known as an intrusion detection and prevention system (IDPS). The amalgamation of new technologies, like software-defined networking (SDN), machine learning (ML), and manufacturer usage description (MUD), in IDPS is taking security to the next level. In this study, IDPS and its performance benefits are analyzed in the context of IoT security. This survey describes all these prominent technologies in detail and their integrated applications to complement IDPS in the IoT network. Future research directions and challenges of IoT security are elaborated at the end.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_83-SDN_based_Intrusion_Detection_and_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Software Engineering: A Knowledge Modeling based Approach for Inferring Association between Source Code and Design Artifacts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111282</link>
        <id>10.14569/IJACSA.2020.0111282</id>
        <doi>10.14569/IJACSA.2020.0111282</doi>
        <lastModDate>2020-12-31T12:59:33.7300000+00:00</lastModDate>
        
        <creator>Chaman Wijesiriwardana</creator>
        
        <creator>Ashanthi Abeyratne</creator>
        
        <creator>Chamal Samarage</creator>
        
        <creator>Buddika Dahanayake</creator>
        
        <creator>Prasad Wimalaratne</creator>
        
        <subject>Software security; threat modeling; knowledge modeling; security flaws</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Secure software engineering has emerged in recent decades, promoting the idea that software security has to be an integral part of all phases of the software development lifecycle. As a result, each phase of the lifecycle is associated with security-specific best practices such as threat modeling and static code analysis. It was observed that the various artifacts (i.e., security requirements, architectural flaws, bug reports, security test cases) generated by these security best practices tend to be segregated. This creates a significant barrier to resolving security issues at the implementation phase, since most of them originate in the design phase. To address this issue, this paper presents a knowledge-modeling based approach to semantically infer the associations between architectural-level security flaws and code-level security bugs, a task that is manually tedious. Threat modeling and static analysis are used to identify security flaws and security bugs, respectively. Case-study-based experimental results revealed that architectural-level security flaws have a significant impact on originating security bugs at the code level. Moreover, the evaluation results confirmed the scalability of the proposed approach to large-scale industrial software products.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_82-Secure_Software_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of nCoV-19 from Hybrid Dataset of CXR Images using Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111281</link>
        <id>10.14569/IJACSA.2020.0111281</id>
        <doi>10.14569/IJACSA.2020.0111281</doi>
        <lastModDate>2020-12-31T12:59:33.6970000+00:00</lastModDate>
        
        <creator>Muhammad Ahmed Zaki</creator>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Sammer Zai</creator>
        
        <creator>Urooba Zaki</creator>
        
        <creator>Zarqa Altaf</creator>
        
        <creator>Naseer u Din</creator>
        
        <subject>COVID-19; artificial intelligence; deep learning; chest x-ray image analysis; convolutional neural network; InceptionV3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The coronavirus spreads rapidly among humans and has so far reached more than 72 million people around the world. To limit its spread, it is very important to identify infected individuals. A deep learning (DL) technique for detecting patients with coronavirus infection using chest X-ray (CXR) images is proposed in this article. We also show how to implement an advanced deep learning model that uses CXR images to identify COVID-19 (nCoV-19). The goal is to provide over-stressed medical professionals with a second pair of eyes through an intelligent image recognition model. Using the currently available public COVID-19 datasets, we highlight the challenges (including image dataset size and image quality) in developing a valuable deep learning model. We propose a semi-automated approach based on a pre-trained model and create a robust image dataset for designing and evaluating a deep learning algorithm. This provides researchers and practitioners with a solid path toward the future development of an improved model.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_81-Detection_of_nCoV_19_from_Hybrid_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Traffic Management based on Text Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111280</link>
        <id>10.14569/IJACSA.2020.0111280</id>
        <doi>10.14569/IJACSA.2020.0111280</doi>
        <lastModDate>2020-12-31T12:59:33.6830000+00:00</lastModDate>
        
        <creator>Ahmed Ibrahim Naguib</creator>
        
        <creator>Hala Abdel-Galil</creator>
        
        <creator>Sayed AbdelGaber</creator>
        
        <subject>Classification; machine learning; text mining; traffic management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>It is very important for traffic management to be able to correctly recognize traffic trends from large volumes of historical traffic data, particularly congestion patterns and road collisions. This capability can be used to reduce congestion, improve safety, and increase the accuracy of traffic forecasting. Choosing the correct and effective text mining methodology speeds up and reduces the time and effort needed to retrieve valuable knowledge and information for future prediction and decision-making processes. Modeling collision or accident risk has also been an important aspect of traffic management and road safety, as it helps recognize the problems and causes that contribute to a higher risk of accidents, supports treatment delivery, and reduces crashes to save more lives and avoid road congestion. Therefore, this study proposes a model that relies on different text mining methodologies to determine clearly which circumstances affect accidents and who is most often involved in them. Different classification and machine learning techniques were applied to obtain the optimal classifiers used in this model. The experimental results on real-world datasets demonstrate that the proposed models outperform Prayag Tiwari’s research work on the Leeds UK dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_80-A_Model_for_Traffic_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Iterative, Self-Assessing Entity Resolution System: First Steps toward a Data Washing Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111279</link>
        <id>10.14569/IJACSA.2020.0111279</id>
        <doi>10.14569/IJACSA.2020.0111279</doi>
        <lastModDate>2020-12-31T12:59:33.6500000+00:00</lastModDate>
        
        <creator>John R. Talburt</creator>
        
        <creator>Awaad K. Al Sarkhi</creator>
        
        <creator>Daniel Pullen</creator>
        
        <creator>Leon Claassens</creator>
        
        <creator>Richard Wang</creator>
        
        <subject>Unsupervised entity resolution; data curation; frequency blocking; entropy regulated; data washing machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Data curation is the process of acquiring multiple sources of data; assessing and improving data quality; standardizing and integrating the data into a usable information product; and eventually disposing of the data. This research describes the building of a proof-of-concept for an unsupervised data curation process addressing a basic form of data cleansing: identifying redundant records through entity resolution (ER) and spelling corrections. The novelty of the approach is to use ER as the first step, with an unsupervised blocking and stop-word scheme based on token frequency. A scoring matrix is used for linking unstandardized references, and an unsupervised process based on cluster entropy evaluates the linking results. The ER process is iterative, and in each iteration the match threshold is increased. The prototype was tested on 18 fully-annotated test samples of primarily synthetic person data, varied in two ways: good versus poor data quality, and a single record layout versus two different record layouts. In samples with good data quality, using both single and mixed layouts, the final clusters had an average F-measure of 0.91, precision of 0.96, and recall of 0.87, outcomes comparable to results from a supervised ER process. In samples with poor data quality, whether mixed or single layout, the average F-measure was 0.78, precision 0.74, and recall 0.83, showing that data quality assessment and improvement remains a critical component of successful data curation. The results demonstrate the feasibility of building an unsupervised ER engine to support data integration for good-quality references while avoiding the time and effort required to standardize reference sources to a common layout, design and test matching rules, design blocking keys, or test blocking alignment. The paper also proposes how unsupervised data quality improvement processes could be incorporated into the design, allowing the model to address an even broader range of data curation applications.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_79-An_Iterative_Self_Assessing_Entity_Resolution_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Advanced IoT Encryption on Serialization Concept: Application Optimization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111278</link>
        <id>10.14569/IJACSA.2020.0111278</id>
        <doi>10.14569/IJACSA.2020.0111278</doi>
        <lastModDate>2020-12-31T12:59:33.6330000+00:00</lastModDate>
        
        <creator>Johan Setiawan</creator>
        
        <creator>Muhammad Taufiq Nuruzzaman</creator>
        
        <subject>Internet of Things; benchmark; cipher; block mode; serialization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This study investigates the effect of serialization concepts, cipher algorithms, and block modes on the execution time of structured-data processing on low-level computing IoT devices. The research was conducted on IoT devices, which are currently widely used in online transactions. CPU overheating can be fatal if the encryption load is not reduced; one consequence is an increase in maintenance obligations, which in turn degrades the user experience. This study uses experimental methods, exploring serialization, ciphers, and block modes with benchmarks to find better data combination algorithms. The four test data groups used in benchmarking produce an experimental benchmark dataset over the selected ciphers (AES, Serpent, Rijndael, Blowfish) and block modes. The study indicates that, averaged over all tests, YAML minify yields encryption times 21% and decryption times 27% more optimal than JSON Pretty. In addition, the AES cipher has a significant effect on the encryption and decryption process, being 51% more optimal with YAML minify serialization.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_78-Performance_Analysis_of_Advanced_IoT_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Preliminary Intergenerational Photo Conversation Support System based on Fine-tuning VGG16 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111277</link>
        <id>10.14569/IJACSA.2020.0111277</id>
        <doi>10.14569/IJACSA.2020.0111277</doi>
        <lastModDate>2020-12-31T12:59:33.6030000+00:00</lastModDate>
        
        <creator>Lei JIANG</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Dongeun Choi</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Intergenerational photo conversation support system; TF-IDF; VGG16; transfer learning; fine-tune</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>China has the largest number of elderly people in the world, and young volunteers have become the main force in caring for the elderly. It is urgent to establish a photo conversation support system to build a bridge of communication between young volunteers and the elderly. Previous research generally used perceptual analysis or machine learning methods to find photos suitable for intergenerational conversation; this paper uses deep learning models to further learn the latent features of two datasets of photos suitable and not suitable for intergenerational conversations. However, the original datasets are too small, so we first propose using TF-IDF on conversation recordings and data augmentation on images to expand the datasets. Then, on the basis of the VGG16 model combined with transfer learning and fine-tuning techniques, five models were designed. The accuracy of the best model reached 96% on the validation set and 94.5% on the test set. In particular, the recall rate on the not-suitable dataset reached 100%: all not-suitable photos were identified. At the same time, the recall rate for not-suitable photos on other datasets reached 71%. This shows that the system is also applicable to other datasets and can effectively eliminate photos that are not suitable for intergenerational conversations.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_77-A_Preliminary_Intergenerational_Photo_Conversation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Data Classification using Fuzzy Min Max Neural Network Preceded by Feature Selection through Moth Flame Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111276</link>
        <id>10.14569/IJACSA.2020.0111276</id>
        <doi>10.14569/IJACSA.2020.0111276</doi>
        <lastModDate>2020-12-31T12:59:33.5870000+00:00</lastModDate>
        
        <creator>Ashish Kumar Dehariya</creator>
        
        <creator>Pragya Shukla</creator>
        
        <subject>Moth flame optimization; nature inspired optimization; feature selection; fitness function; fuzzy min-max neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Prediction of diseases is possible using a medical diagnosis system. This type of healthcare model can be developed using soft computing techniques, and hybrid approaches combining data classification and optimization algorithms increase classification accuracy. This research proposes the application of Moth Flame Optimization (MFO) and the Fuzzy Min-Max Neural Network (FMMNN) to develop a medical data classification system. The MFO algorithm takes the bulk of the features from the disease dataset and produces an optimized feature set based on a fitness function; its ability to avoid the local minima problem is the main reason it produces an optimal feature set. The optimized features are then passed to the FMMNN for classification of malignant and benign cases. In the classification experiments, the model achieved 97.74% accuracy on the Liver Disorders dataset and 86.95% accuracy on the Pima Indian Diabetes dataset. Improving medical data classification accuracy is directly related to attaining good human health.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_76-Medical_Data_Classification_using_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Smart Healthcare System for Visually Impaired using Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111275</link>
        <id>10.14569/IJACSA.2020.0111275</id>
        <doi>10.14569/IJACSA.2020.0111275</doi>
        <lastModDate>2020-12-31T12:59:33.5570000+00:00</lastModDate>
        
        <creator>Khloud Almutairi</creator>
        
        <creator>Ahmed Ismail Ebada</creator>
        
        <creator>Samir Abdlerazek</creator>
        
        <creator>Hazem Elbakry</creator>
        
        <subject>Personal assistance; speech recognition; visually impaired person assistant; smart wearable device; smart sensory system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper presents a wearable-device-based solution for the Visually Impaired (VI). VI people need support or a guide to move from one place to another, and wearables help users achieve a better understanding of the surrounding environment. The proposed system is based on wearable smart glasses that support VI users in getting around. It provides a solution integrated with speech recognition to obtain the destination name and look up routes, building on Google Maps with speech recognition to act as user assistance. The experimental results showed that the system works with a high accuracy of 99% and can serve as an effective tool for localization guidance. The system can assist VI people in moving around and attaining a better quality of life.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_75-Development_of_Smart_Healthcare_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent Mining Framework for Analyzing Moroccan Olive Oil Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111274</link>
        <id>10.14569/IJACSA.2020.0111274</id>
        <doi>10.14569/IJACSA.2020.0111274</doi>
        <lastModDate>2020-12-31T12:59:33.5570000+00:00</lastModDate>
        
        <creator>Belabed Imane</creator>
        
        <creator>Jaara El Miloud</creator>
        
        <creator>Belabed Abdelmajid</creator>
        
        <creator>Talibi Alaoui Mohammed</creator>
        
        <subject>Quantitative association rules; clustering of variables; multi-agent system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Data mining and intelligent agents have become two promising research areas. Each intelligent agent functions independently while cooperating with other agents to perform assigned tasks effectively. The main goal of this research is to provide a mining implementation that can help biological researchers discover the parameters that affect the cost of olive oil in Morocco. To solve this problem, we used a method involving two data mining techniques, clustering of variables and quantitative association rules, with a multi-agent system to fuse the two techniques. We therefore developed a multi-agent framework that was validated using concrete data from the Provincial Direction of Agriculture of Berkane, Morocco. To prove the performance of our framework, we tested the proposed multi-agent tool on three datasets from different fields. According to biological researchers, our method generates clear knowledge, because the framework proposes high-confidence rules that can correctly identify olive oil factors.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_74-Agent_Mining_Framework_for_Analyzing_Moroccan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facebook Profile Credibility Detection using Machine and Deep Learning Techniques based on User’s Sentiment Response on Status Message</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111273</link>
        <id>10.14569/IJACSA.2020.0111273</id>
        <doi>10.14569/IJACSA.2020.0111273</doi>
        <lastModDate>2020-12-31T12:59:33.5270000+00:00</lastModDate>
        
        <creator>Esraa A. Afify</creator>
        
        <creator>Ahmed Sharaf Eldin</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <subject>Fake profiles detection; credible profiles detection; sentiment analysis; supervised machine learning classifiers; unsupervised machine learning; binary classification; deep learning neural network; evaluation metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Recently, the impact of online Social Network Sites (SNS) has dramatically changed, and fake accounts have become a vital, rapidly evolving issue. This raises the question of how to assess and measure the credibility of User-Generated Content (UGC), which is used in finding trusted sources of information on SNS like Facebook, Twitter, etc. Consequently, classifying users’ profiles and analyzing each user’s behavioral response to the generated content has become a challenge that must be solved. One of the most significant approaches is Sentiment Analysis (SA), which plays a major role in assessing and detecting the credibility of each user account’s behavior. In this paper, the aim is to measure and predict a user profile’s credibility by establishing the degree of correlation among the UGC features that affect users’ responses to status messages. The proposed models were implemented using six supervised machine learning classifiers, an unsupervised machine learning cluster model, and a deep learning neural network (NN) model. The paper presents two experiments to evaluate Facebook profile credibility. First, we applied a binary classification model to classify profiles into fake or genuine users; we then applied a classification model to genuine users based on credibility theory, using the Analytical Hierarchical Process (AHP) approach, and computed a credibility score for each. Second, we selected and analyzed a public Facebook page (the CNN public page) and obtained from it users’ sentiment reactions and responses to status messages on different topics over the period 2016/2017. We then performed topic modeling with Latent Dirichlet Allocation (LDA) on the status corpus to generate topic vectors, and applied Principal Component Analysis (PCA) to visualize and classify the topic distribution of each status. Afterwards, we produced a status corpus cluster to classify users’ behaviors through posted statuses and users’ comments. In conclusion, the first experiment achieved 95% and 99% accuracy in classifying fake/genuine users and incredible/credible accounts, respectively. The second experiment identified the clusters for the status corpus in a 10-topic feature distribution and classified users’ content as credible or not according to the final calculated credibility score.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_73-Facebook_Profile_Credibility_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>You Aren’t Alone: Building Arabic Online Supporting Communities using Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111272</link>
        <id>10.14569/IJACSA.2020.0111272</id>
        <doi>10.14569/IJACSA.2020.0111272</doi>
        <lastModDate>2020-12-31T12:59:33.5100000+00:00</lastModDate>
        
        <creator>Monirah Alajlan</creator>
        
        <creator>Nouf Alsuhaymi</creator>
        
        <creator>Sara Alnasser</creator>
        
        <creator>Abeer Almohaidib</creator>
        
        <creator>Nouf Bin Slimah</creator>
        
        <creator>Madawi Alruwaished</creator>
        
        <creator>Najla Alosaimi</creator>
        
        <subject>Emotional support; recommender system; classification; support group</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>People are now digitally connected, making the world a single large community. This remarkable benefit has solved many communication issues. For instance, people who go through difficult times and lack the emotional support required to overcome these crises can now join an online support group. For many years, such people had to travel to a predetermined location at a predetermined time to join a support group. Today, with the increasing availability of digital services, these groups can meet online. For these reasons, this paper presents the ‘You aren’t alone’ mobile application, an interactive mobile-based application designed for Arab people who need psychological support. This application will help enrich the Arabic content in the field of social support and will help build supportive communities by pairing users with the appropriate support group, anonymously and without fear of judgment. The application enhances the pairing process through a recommender system that reads the user’s Twitter timeline and classifies the tweets as belonging to one of the available support groups.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_72-You_Arent_Alone_Building_Arabic_Online_Supporting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Mental Illness using Social Media Posts and Comments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111271</link>
        <id>10.14569/IJACSA.2020.0111271</id>
        <doi>10.14569/IJACSA.2020.0111271</doi>
        <lastModDate>2020-12-31T12:59:33.4800000+00:00</lastModDate>
        
        <creator>Mohsin kamal</creator>
        
        <creator>Saif Ur Rehman khan</creator>
        
        <creator>Shahid Hussain</creator>
        
        <creator>Anam Nasir</creator>
        
        <creator>Khurram Aslam</creator>
        
        <creator>Subhan Tariq</creator>
        
        <creator>Mian Farhan Ullah</creator>
        
        <subject>Machine learning; mental disorders; Reddit; Schizophrenia; Autism; OCD; PTSD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Over the last decade, a significant increase in the use of social media could be observed in the context of e-health. Medical experts use patients’ posts and feedback on social media platforms to diagnose their diseases. However, only a few studies have leveraged the capabilities of machine learning (ML) algorithms to classify patients’ mental disorders such as Schizophrenia, Autism, Obsessive-Compulsive Disorder (OCD), and Post-Traumatic Stress Disorder (PTSD). Moreover, these studies are limited to a large number of posts and relevant comments, which could be considered a threat to the effectiveness of their proposed methods. In contrast, this issue is addressed by proposing a novel ML methodology to classify patients’ mental illnesses on the basis of their posts (along with the relevant comments) shared on the well-known social media platform Reddit. The proposed methodology leverages the capabilities of the widely used XGBoost classifier for accurate classification of data into four mental disorder classes (Schizophrenia, Autism, OCD, and PTSD). Subsequently, the performance of the proposed methodology is compared with existing state-of-the-art classifiers such as Na&#239;ve Bayes and Support Vector Machine, whose performance has been reported by the research community in the target domain. The experimental results indicate the effectiveness of the proposed methodology in classifying patient data as compared to the state-of-the-art classifiers; 68% accuracy was achieved, indicating the efficacy of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_71-Predicting_Mental_Illness_using_Social_Media_Posts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Blockchain-based Data Sharing Acceptance among Intelligence Community</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111270</link>
        <id>10.14569/IJACSA.2020.0111270</id>
        <doi>10.14569/IJACSA.2020.0111270</doi>
        <lastModDate>2020-12-31T12:59:33.4470000+00:00</lastModDate>
        
        <creator>Wan Nurhidayat Wan Muhamad</creator>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <creator>Norulzahrah Mohd Zainudin</creator>
        
        <creator>Nor Asiakin Hasbullah</creator>
        
        <creator>Suzaimah Ramli</creator>
        
        <subject>Technology acceptance model; technology readiness index; blockchain acceptance; PLS-SEM; data sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Intelligence data are among the critical elements used as a reference for risk assessment and decision-making regarding national security. Intelligence data are shared among agencies in the intelligence community to improve the efficiency of their services. Centralised data with a central authority are highly exposed to being an easy target for attackers, and leaked or unauthorised access to intelligence data by a non-intelligence community would bring severe consequences to a state. Blockchain, as an immutable and highly secure technology, is capable of providing cryptographic data in a decentralised environment and can potentially be applied to data sharing among the intelligence community. However, user acceptance of and readiness for blockchain usage in the intelligence community are yet to be systematically studied. Considering this, this paper proposes an evaluation method to study the acceptance of blockchain technology by integrating a reliable acceptance model for blockchain technology implementation in the intelligence community. The acceptance model consists of constructs from the Technology Acceptance Model 3 (TAM 3) and Technology Readiness Index 2.0 (TRI 2.0) and was analysed using partial least squares structural equation modelling (PLS-SEM). The results indicate that the integrated TAM 3 and TRI 2.0 model could help determine the acceptance level in developing blockchain-based intelligence data sharing for the intelligence community.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_70-Evaluation_of_Blockchain_based_Data_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Smart Weighted and Neighborhood-enabled Load Balancing Scheme for Constraint Oriented Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111269</link>
        <id>10.14569/IJACSA.2020.0111269</id>
        <doi>10.14569/IJACSA.2020.0111269</doi>
        <lastModDate>2020-12-31T12:59:33.4330000+00:00</lastModDate>
        
        <creator>Mohammed Amin Almaiah</creator>
        
        <subject>Wireless Sensor Networks (WSNs); load balancing; PSO; routing protocol; low power devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In wireless sensor networks (WSNs), uniform load or traffic distribution is one of the main challenges and is tightly coupled with the resource-limited nature of these networks. To address this problem, various mechanisms have been developed and presented in the literature. However, these approaches are either application-specific, designed for a particular application area such as smart buildings, or overly complex. Therefore, a simplified and energy-efficient load-balancing scheme is needed for resource-limited networks. In this paper, an efficient, neighborhood-enabled load-balancing scheme is presented to resolve the aforementioned issues with the available resources. For this purpose, the proposed scheme binds every member node to collect information about its neighboring nodes, i.e., those nodes residing in its communication range. Moreover, if the residual energy Er of a sensor node falls below a defined threshold value, the node shares this information with its neighbors. In the proposed neighborhood-enabled load-balancing scheme, every sensor node Ci prefers to route packets through optimal paths, particularly those where the probability of encountering critical nodes is negligible, i.e., paths on which no critical node is deployed. Simulation results showed that the proposed neighborhood-enabled load-balancing scheme outperforms existing approaches in terms of network lifetime (of both individual nodes and the whole WSN), throughput, average packet delivery ratio, and end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_69-An_Efficient_Smart_Weighted.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Solution to Count-to-Infinity Problem for both Complex and Linear Sub-Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111268</link>
        <id>10.14569/IJACSA.2020.0111268</id>
        <doi>10.14569/IJACSA.2020.0111268</doi>
        <lastModDate>2020-12-31T12:59:33.4170000+00:00</lastModDate>
        
        <creator>Sabrina Hossain</creator>
        
        <creator>Kazi Mushfiquer Rahman</creator>
        
        <creator>Ahmed Omar</creator>
        
        <creator>Anisur Rahman</creator>
        
        <subject>Distance vector routing; local area network; routing information protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Distance-vector routing protocols determine the best route for forwarding information from one node to another based on distance. To calculate the best route, distance-vector routing protocols use the Bellman-Ford and Ford-Fulkerson algorithms; the distributed Bellman-Ford algorithm computes the shortest path. The Routing Information Protocol, in turn, is commonly used for managing routing information within a Local Area Network or an interconnected group of Local Area Networks. The main problem with distance-vector routing protocols is routing loops, because the Bellman-Ford algorithm cannot prevent them; moreover, a routing loop triggers the Count-to-Infinity problem. This paper gives an effective solution to the Count-to-Infinity problem for both link-down and router-down situations in both complex and linear sub-networks. In the router-down situation, when any router goes down, the other nodes recalculate their routing tables using the dependency column, with costs calculated by the shortest-path algorithm. If any link is down while all routers are up, all routers recalculate their routing tables using Dijkstra's algorithm instead of the Bellman-Ford algorithm. Detecting and preventing loops are the main objectives. The method is based on a routing-table algorithm in which Dijkstra's algorithm is applied after each iteration to modify the routing table of each node; by preventing routing loops, the network does not converge into the Count-to-Infinity problem.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_68-An_Effective_Solution_to_Count_to_Infinity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Energy Efficient Attack Resilient Routing Technique for Zone based Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111267</link>
        <id>10.14569/IJACSA.2020.0111267</id>
        <doi>10.14569/IJACSA.2020.0111267</doi>
        <lastModDate>2020-12-31T12:59:33.3830000+00:00</lastModDate>
        
        <creator>Venkateswara Rao M</creator>
        
        <creator>Srinivas Malladi</creator>
        
        <subject>Energy efficiency; malicious zones; preference score; probabilistic fuzzy score; residual energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Security and energy efficiency are two key factors to be considered in the design of applications based on wireless sensor networks (WSNs). Optimizing energy consumption is obligatory for an increased network lifetime. Without security, attackers can disrupt the entire operation of a sensor network by launching diverse attacks such as message tampering, partial or complete message dropping, and message flooding. This work proposes a secure, energy-efficient, attack-resilient routing technique for zone-based wireless sensor networks with proactive detection of malicious zones and mitigation of the attacks. Unlike earlier works that detect each malicious node, this work partitions the network into zones and allots a probabilistic fuzzy score to model the success ratio of packet propagation through each zone. The routing is adaptive to the ongoing residual energy and security risks. A firm decision cannot be made on the factors influencing the lifetime of the network, considering that they may affect network operations. The proposed solution is evaluated in NS2 and contrasted with existing solutions to demonstrate the effectiveness of the approach.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_67-Secure_Energy_Efficient_Attack_Resilient_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Sector Scan Geometry for High Frame Rate 2D-Echocardiography using Phased Arrays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111266</link>
        <id>10.14569/IJACSA.2020.0111266</id>
        <doi>10.14569/IJACSA.2020.0111266</doi>
        <lastModDate>2020-12-31T12:59:33.3700000+00:00</lastModDate>
        
        <creator>Wided Hechkel</creator>
        
        <creator>Brahim Maaref</creator>
        
        <creator>Nejib Hassen</creator>
        
        <subject>2D Echocardiography; high frame rate; multi-line transmit beamforming; new scan geometry; reduced crosstalk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>High-frame-rate 2D echocardiography techniques have some drawbacks, such as crosstalk artifacts caused by interactions between the parallel transmitted and received beams. In this paper, we suggest a new cardiac imaging technique based on Multi-Line Transmission (MLT). The main idea of our approach is to exploit the scan geometry to reduce the interference between the simultaneously transmitted beams: we propose to scan at different depths, in parallel to the diagonal of the scan sector. Compared to existing MLT techniques, the new scan-sector strategy therefore reduces artifacts in ultrasound imaging systems. We name our approach the Synthetic Sum of Multi-Line Transmission (SS-MLT). Simulations of the Point Spread Function (PSF), multiple PSFs, and a Cyst Phantom (CP) are provided and compared against the main classical ultrasound imaging approaches. SS-MLT exhibits a lateral profile very similar to that of the Single Line Transmission (SLT) algorithm. Hence, the simulation results indicate the potential value of a future hardware implementation of the SS-MLT technique.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_66-New_Sector_Scan_Geometry_for_High_Frame.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fraud Detection in Credit Cards using Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111265</link>
        <id>10.14569/IJACSA.2020.0111265</id>
        <doi>10.14569/IJACSA.2020.0111265</doi>
        <lastModDate>2020-12-31T12:59:33.3530000+00:00</lastModDate>
        
        <creator>Hala Z Alenzi</creator>
        
        <creator>Nojood O Aljehane</creator>
        
        <subject>Classifier; logistic regression; accuracy; smoothing; artificial intelligence; cross validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Due to the increasing number of customers, as well as the increasing number of companies that use credit cards for financial transactions, the number of fraud cases has increased dramatically. Dealing with noisy and imbalanced data, as well as with outliers, has accentuated this problem. In this work, fraud detection using artificial intelligence is proposed. The proposed system uses logistic regression to build a classifier that prevents fraud in credit card transactions. To handle dirty data and to ensure a high degree of detection accuracy, a pre-processing step is used. This step relies on two novel methods to clean the data: a mean-based method and a clustering-based method. Compared to two well-known classifiers, the support vector machine classifier and the voting classifier, the proposed classifier shows better results in terms of accuracy, sensitivity, and error rate.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_65-Fraud_Detection_in_Credit_Cards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants towards a Better Acceptance Model of IoT in KSA and Eradication of Distrust in Omnipresent Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111264</link>
        <id>10.14569/IJACSA.2020.0111264</id>
        <doi>10.14569/IJACSA.2020.0111264</doi>
        <lastModDate>2020-12-31T12:59:33.3230000+00:00</lastModDate>
        
        <creator>Abdulaziz A. Albesher</creator>
        
        <creator>Adeeb Alhomoud</creator>
        
        <subject>Internet of Things (IoT); security; health; IoT acceptance; smart cities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper highlights several of the key determinants that play a vital role in the acceptance of Internet of Things (IoT) technologies in the Kingdom of Saudi Arabia (KSA). Based on the government’s focus on technology and the citizens’ response to embracing new technologies, several determining factors are presented. Essential application areas of IoT are analyzed, including local industry, agriculture and livestock, health, education, smart metropolises, and smart government. In addition, we explore acceptance at the personal level, such as the home, individual privacy, security, and personal management with IoT wearables. Towards the end of the paper, some challenges to IoT acceptance are presented along with an analysis of the key enablers. These considerations lead to the conclusion that IoT acceptance is inevitable given the number of associated benefits, which will grow once the posed challenges are addressed.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_64-Determinants_towards_a_Better_Acceptance_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Algorithm for Reconstruction of Three-Dimensional Mesh from Medical Images using Tessellation of Recent Graphics Cards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111263</link>
        <id>10.14569/IJACSA.2020.0111263</id>
        <doi>10.14569/IJACSA.2020.0111263</doi>
        <lastModDate>2020-12-31T12:59:33.3070000+00:00</lastModDate>
        
        <creator>Lamyae Miara</creator>
        
        <creator>Said Benomar El Mdeghri</creator>
        
        <creator>Mohammed Oucamah Cherkaoui Malki</creator>
        
        <subject>3D reconstruction; medical imaging; marching cubes; displacement vectors; contour extraction; contour matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The reconstruction of a 3D mesh using displacement vectors for medical images is a recent method that exploits modern GPUs. This method has demonstrated its efficiency by accelerating 3D visualization calculations and optimizing the storage process. It is divided into two main stages: the first is the construction of a basic mesh by applying the Marching Cubes algorithm, and the second is the extraction of the displacement vectors, which represent the details lost in the basic mesh. However, the Marching Cubes algorithm used to build the basic mesh suffers from problems that we try to overcome in this article. These problems amount to ambiguities encountered during the construction of the basic mesh in some cases. In addition, the resulting basic mesh must be modified to avoid shape errors, which requires time and memory and ultimately yields a final mesh that is not optimal and may even be erroneous in certain situations. Our method is based on extracting the contours of the anatomy to be reconstructed from a sequence of 2D images. Each contour is represented by a triangle, and the shape of the basic mesh results from connecting these triangles. This strategy avoids the use of the Marching Cubes algorithm in the reconstruction of the basic mesh, thereby overcoming the problems mentioned above.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_63-Enhanced_Algorithm_for_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporal-based Optimization to Solve Data Sparsity in Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111262</link>
        <id>10.14569/IJACSA.2020.0111262</id>
        <doi>10.14569/IJACSA.2020.0111262</doi>
        <lastModDate>2020-12-31T12:59:33.2770000+00:00</lastModDate>
        
        <creator>Ismail Ahmed Al-Qasem Al-Hadi</creator>
        
        <creator>Mohammad Ahmed Alomari</creator>
        
        <creator>Eissa M. Alshari</creator>
        
        <creator>Waheed Ali H. M. Ghanem</creator>
        
        <creator>Safwan M Ghaleb</creator>
        
        <subject>Collaborative filtering; matrix factorization; temporal-based approaches; whale optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Collaborative Filtering (CF) is a widely used technique in recommendation systems. It provides personal recommendations for users based on their preferences. However, this technique suffers from the sparsity issue, which occurs due to a high proportion of missing rating scores in the rating matrix. Several factorization approaches have been used to address the sparsity issue. Such techniques have also been considered to tackle other challenges such as overfitted predicted scores. Nevertheless, they suffer from setbacks such as drift in user preferences and decay in items’ popularity. These challenges can be addressed by prediction approaches that accurately learn long-term and short-term preferences integrated with factorization features. Nonetheless, current temporal-based factorization approaches do not accurately learn the convergence of the assigned k clusters due to the low number of short-term periods. Additionally, the use of optimization algorithms in the learning process to reduce prediction errors is time-consuming, which necessitates a faster optimization algorithm. To address these issues, a new temporal-based approach named TWOCF is proposed in this paper. TWOCF utilizes the elbow clustering method to define the optimal number of clusters for the temporal activities of both users and items, and deploys the whale optimization algorithm to accurately learn short-term preferences alongside other factorization and temporal features. Experimental results indicate that TWOCF achieves superior CF prediction accuracy within a shorter execution time compared to the benchmark approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_62-Temporal_based_Optimization_to_Solve_Data_Sparsity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Transformation in Higher Education: A Framework for Maturity Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111261</link>
        <id>10.14569/IJACSA.2020.0111261</id>
        <doi>10.14569/IJACSA.2020.0111261</doi>
        <lastModDate>2020-12-31T12:59:33.2600000+00:00</lastModDate>
        
        <creator>Adam Marks</creator>
        
        <creator>Maytha AL-Ali</creator>
        
        <creator>Reem Atassi</creator>
        
        <creator>Abedallah Zaid Abualkishik</creator>
        
        <creator>Yacine Rezgu</creator>
        
        <subject>Digital transformation; higher education; COVID-19</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Literature on digital transformation maturity is scarce, and digital transformation in higher education, especially after COVID-19, is seen as inevitable. This research explores digital transformation maturity and its challenges within higher education. The significance of this study stems from the role digital transformation plays in today’s knowledge economy. The study proposes a new framework that combines Deloitte’s 2019 digital transformation assessment framework with Petkovic’s 2014 mapping of mega and major higher education processes. The study triangulates the findings of multiple research instruments, including a survey, interviews, a case study, and direct observation. The findings show a significant variance between the respondents’ perception of digital transformation maturity levels and the core requirements of digital transformation maturity. They also identify the lack of a holistic vision, of digital transformation competency, and of data structure and processing as the leading challenges of digital transformation.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_61-Digital_Transformation_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Natural Language to SQL Query Conversion using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111260</link>
        <id>10.14569/IJACSA.2020.0111260</id>
        <doi>10.14569/IJACSA.2020.0111260</doi>
        <lastModDate>2020-12-31T12:59:33.2430000+00:00</lastModDate>
        
        <creator>Akshar Prasad</creator>
        
        <creator>Sourabh S Badhya</creator>
        
        <creator>Yashwanth YS</creator>
        
        <creator>Shetty Rohan</creator>
        
        <creator>Shobha G</creator>
        
        <creator>Deepamala N</creator>
        
        <subject>Machine learning; natural language to SQL query; long short-term memory; embedding layer; elastic search; hybrid architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In the age of information explosion, a huge amount of data is stored in databases and accessed using various querying languages. The major challenges faced by a user accessing this data are learning the querying language and understanding its syntax. Queries given in natural language help any na&#239;ve user access a database without learning the query languages. The current rule-based approach to converting natural language to SQL queries is riddled with challenges, the predominant ones being the identification of partial or implied data values and the identification of descriptive values. This paper discusses the use of a synchronous combination of a hybrid machine learning model, Elastic Search, and WordNet to overcome these challenges. An embedding layer followed by a Long Short-Term Memory model is used to identify partial or implied data values, while Elastic Search is used to identify descriptive data values (lengthy values that may contain descriptions). This architecture enables conversion systems to achieve robustness and high accuracy by extracting metadata from the natural language query. The system achieves an accuracy of 91.7% when tested on the IMDb database and 94.0% when tested on the Company Sales database.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_60-Enhancement_of_Natural_Language_to_SQL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A User-centered Design Approach to Near Field Communication-based Applications for Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111259</link>
        <id>10.14569/IJACSA.2020.0111259</id>
        <doi>10.14569/IJACSA.2020.0111259</doi>
        <lastModDate>2020-12-31T12:59:33.2130000+00:00</lastModDate>
        
        <creator>Mrim Alnfiai</creator>
        
        <subject>Near Field Communication (NFC); mobile device applications; NFC tags; RFID; early learning; K-12 students; preschool children; educational software; usability; user-centered design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>There is an abundance of technology targeting children in terms of education, entertainment, and health; however, little research has been conducted on the usability of Near Field Communication (NFC) for creating an interactive digital environment for children that is easily accessible on a mobile device. NFC is a component of Radio Frequency Identification (RFID) technology and is affordable, intuitive, and accessible. This research evaluates existing NFC applications for children in terms of their ease of use, appropriateness, and areas in need of improvement. Recommendations are provided on visual design, audio enhancements, reward systems, and privacy and security concerns. It is concluded that adopting NFC technology in all facets of life will positively benefit the most vulnerable population, children, but progress toward a user-centered design for this group is required first.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_59-A_User_centered_Design_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evolutionary Algorithm for Short Addition Chains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111258</link>
        <id>10.14569/IJACSA.2020.0111258</id>
        <doi>10.14569/IJACSA.2020.0111258</doi>
        <lastModDate>2020-12-31T12:59:33.1970000+00:00</lastModDate>
        
        <creator>Hazem M. Bahig</creator>
        
        <creator>Khaled A. Alutaibi</creator>
        
        <creator>Mohammed A. Mahdi</creator>
        
        <creator>Amer AlGhadhban</creator>
        
        <creator>Hatem M. Bahig</creator>
        
        <subject>Addition chain; short chain; evolutionary algorithm; modular exponentiation; RSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The encryption efficiency of the Rivest-Shamir-Adleman cryptosystem depends on decreasing the number of multiplications in the modular exponentiation (ME) operation. An addition chain (AC) is one of the strategies used to reduce the time consumed by ME, by generating a shortest/short chain. Due to the non-polynomial time required to generate a shortest AC, several algorithms have been proposed to find a short AC faster. In this paper, we use an evolutionary algorithm (EA) to find a short AC for a natural number. We discuss and present the role of every component of the EA, including the population, the mutation operator, and survivor selection. We then study, experimentally, the effectiveness of the proposed method in terms of the length of the chains it generates by comparing it with three kinds of algorithms: (1) exact, (2) non-exact deterministic, and (3) non-exact non-deterministic. The experiment is conducted on all natural numbers of 10, 11, 12, 13, and 14 bits. The results demonstrate that the proposed algorithm performs well compared to the other three types of algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_58-An_Evolutionary_Algorithm_for_Short_Addition_Chains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learners’ Activity Indicators Prediction in e-Learning using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111257</link>
        <id>10.14569/IJACSA.2020.0111257</id>
        <doi>10.14569/IJACSA.2020.0111257</doi>
        <lastModDate>2020-12-31T12:59:33.1830000+00:00</lastModDate>
        
        <creator>Sanae CHEHBI</creator>
        
        <creator>Rachid ELOUAHBI</creator>
        
        <creator>Chakir FRI</creator>
        
        <subject>e-Learning; tracking; Moodle; traces; activity indicators; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>With the introduction of computer support in education, online learning (also called e-learning) combines, on the one hand, the concept of a network, and therefore of distance and of communicative interaction, whether between the learner and the teacher (or tutor) or among the learners themselves, and, on the other hand, exchange and collaboration. Any activity in e-learning leaves recorded traces stored in a database system. Until now, data on student activity has been stored as low-level information; however, the volume of this information is too large to be processed and interpreted by tutors, requiring data collection and preparation to give it meaning. In addition, according to studies carried out in this direction, the tracking of learners must be guaranteed at all stages of the e-learning process, in order to assist and help them when they encounter problems they cannot solve. The lack of direct contact between the tutor and the learners can cause a lack of feedback on the learning activity; all these problems can lead to a high abandonment rate in e-learning. Our work aims to develop a model for predicting learner activity indicators using fuzzy logic, without resorting to rigid calculations, based instead on consultation traces and skill assessment scores. Based on the traces collected from the Moodle Learning Management System (LMS), it can give the tutor high-level processing of the learning activity.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_57-Learners_Activity_Indicators_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining Users’ Willingness to Post Sensitive Personal Data on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111255</link>
        <id>10.14569/IJACSA.2020.0111255</id>
        <doi>10.14569/IJACSA.2020.0111255</doi>
        <lastModDate>2020-12-31T12:59:33.1500000+00:00</lastModDate>
        
        <creator>O’la Hmoud Al-laymoun</creator>
        
        <creator>Ali Aljaafreh</creator>
        
        <subject>Self-disclosure; sensitive data; Facebook policy; government regulations; privacy control; privacy risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Reaping the vast benefits of ubiquitous social media requires users to share their information, preferences, and interests on these websites. This research draws on communication privacy management theory and the online privacy literature to develop and validate an empirical research model of users’ willingness to share sensitive data on Facebook. The data were collected through an online survey of 569 respondents, of which 515 responses were valid for statistical analysis. The valid data were analyzed using SMART-PLS2. The findings showed the need for attention to be a significant predictor of Facebook users’ willingness. Neither individual perceptions of privacy control nor privacy risks had an impact on the variable of interest. Furthermore, the evidence supported the positive impact of both the disposition to value privacy and the perceived effectiveness of Facebook’s privacy policy in mitigating users’ perceptions of the risks of posting their private data on the website. The paper discusses the study’s theoretical and managerial implications along with its limitations.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_55-Examining_Users_Willingness_to_Post_Sensitive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Epidemic Growth of COVID-19 in Saudi Arabia based on Time Series Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111256</link>
        <id>10.14569/IJACSA.2020.0111256</id>
        <doi>10.14569/IJACSA.2020.0111256</doi>
        <lastModDate>2020-12-31T12:59:33.1500000+00:00</lastModDate>
        
        <creator>Mohamed Torky</creator>
        
        <creator>M. Sh Torky</creator>
        
        <creator>Azza Ahmed</creator>
        
        <creator>Aboul Ella Hassanein</creator>
        
        <creator>Wael Said</creator>
        
        <subject>COVID-19 outbreak; time series models; SIRD; SEIRD; SEIQRDP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Predictive mathematical models for simulating the spread of the COVID-19 pandemic are a fundamental approach to understanding the infection growth curve of the epidemic and to planning effective control strategies. Time series predictive models are among the most important mathematical models that can be utilized for studying the pandemic growth curve. In this study, three time series models (the Susceptible-Infected-Recovered-Death (SIRD) model, the Susceptible-Exposed-Infected-Recovered-Death (SEIRD) model, and the Susceptible-Exposed-Infected-Quarantine-Recovered-Death-Insusceptible (SEIQRDP) model) were investigated and simulated on a real dataset to study the spread of the COVID-19 outbreak in Saudi Arabia. The simulation results and evaluation metrics showed that the SIRD and SEIQRDP models yielded the smallest error between reported and fitted data, so these two models were used to predict the end of the pandemic in Saudi Arabia. The prediction results showed that the COVID-19 growth curve will stabilize, with zero detected active cases on 2 February 2021 according to the prediction computations of the SEIQRDP model, while the SIRD model predicted that the outbreak will stabilize after July 2021.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_56-Investigating_Epidemic_Growth_of_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Problematic Use of Mobile Phones during the COVID-19 Pandemic in Peruvian University Students, 2020</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111254</link>
        <id>10.14569/IJACSA.2020.0111254</id>
        <doi>10.14569/IJACSA.2020.0111254</doi>
        <lastModDate>2020-12-31T12:59:33.1200000+00:00</lastModDate>
        
        <creator>Rosa Perez-Siguas</creator>
        
        <creator>Randall Seminario-Unzueta</creator>
        
        <creator>Hernan Matta-Solis</creator>
        
        <creator>Melissa Yauri-Machaca</creator>
        
        <creator>Eduardo Matta-Solis</creator>
        
        <subject>Mental health; pandemic; coronavirus; mobile phones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The problematic use of mobile phones during the COVID-19 pandemic has emerged as a mental health concern among university students, owing to the confinement imposed by the health crisis in our country and around the world. The objective of this study is therefore to determine the problematic use of mobile phones during the COVID-19 pandemic among Peruvian university students in 2020. This is a quantitative, non-experimental, descriptive, cross-sectional study with a population of 163 Peruvian university students, who answered a sociodemographic questionnaire and the Mobile Phone Problem Use Scale. The results show that 103 (63.2%) of the university students exhibit high problematic use, 59 (36.2%) medium problematic use, and 1 (0.6%) low problematic use. In conclusion, mental health programs for university students should be carried out during the COVID-19 pandemic.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_54-Problematic_Use_of_Mobile_Phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Neural Network based Emotion Classification and Recognition from Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111253</link>
        <id>10.14569/IJACSA.2020.0111253</id>
        <doi>10.14569/IJACSA.2020.0111253</doi>
        <lastModDate>2020-12-31T12:59:33.1030000+00:00</lastModDate>
        
        <creator>Mudasser Iqbal</creator>
        
        <creator>Syed Ali Raza</creator>
        
        <creator>Muhammad Abid</creator>
        
        <creator>Furqan Majeed</creator>
        
        <creator>Ans Ali Hussain</creator>
        
        <subject>Emotion States; ANN; BR; BRANN; emotion classifier; speech emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Emotion recognition from speech signals is still a challenging task; hence, proposing an efficient and accurate technique for speech-based emotion recognition is an important task. This study focuses on recognizing four basic human emotions (sad, angry, happy, and normal) that can be detected through vocal expressions using an artificial neural network, enabling more efficient and productive machine behaviors. An effective model based on a Bayesian regularized artificial neural network (BRANN) is proposed for speech-based emotion recognition. The experiments are conducted on the well-known Berlin database of 1470 speech samples carrying basic emotions: 500 samples of angry emotion, 300 samples of happy emotion, 350 samples of the neutral state, and 320 samples of sad emotion. Four speech features (frequency, pitch, amplitude, and formant) are used to recognize the four basic emotions. The performance of the proposed methodology is compared with that of state-of-the-art methodologies for emotion recognition from speech. The proposed methodology achieved 95% emotion recognition accuracy, the highest among the compared state-of-the-art techniques in the relevant domain.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_53-Artificial_Neural_Network_based_Emotion_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Retrieval Time-Related Data Model for Tracking Factors Affecting Diabetes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111252</link>
        <id>10.14569/IJACSA.2020.0111252</id>
        <doi>10.14569/IJACSA.2020.0111252</doi>
        <lastModDate>2020-12-31T12:59:33.0870000+00:00</lastModDate>
        
        <creator>Ibrahim AlBidewi</creator>
        
        <creator>Nashwan Alromema</creator>
        
        <creator>Fahad Alotaibi</creator>
        
        <subject>Diabetes database; time-data model; diabetes observations; valid-time data; knowledge-based data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In the last four decades, several dozen representations of time-oriented data/knowledge bases have been presented. Some of these representations violate First Normal Form (1NF) by using Non-First Normal Form (N1NF) prototypes and temporal nested representations, while others simulate the concepts of temporal data within the relational data representation without violating 1NF. In this article, a new interval-based knowledge-representation data model with optimized retrieval techniques is employed to model and optimally retrieve biomedical time-varying data (factors/observations that affect diabetes). The time-related data model used here represents time-varying data more compactly, with less memory (storage capacity), than the main representations in the literature, while remaining as expressive as those representations (transformation algorithms show that data represented in this model can be transferred to/from the representations in the literature with zero loss of information). A new data structure is defined with optimal retrieval techniques to prove some basic properties of the time model and to ensure that it is an extension and reduction of the main representations in the literature, namely TQuel and BCDM. The expressive power, reducibility, and easy implementation of the proposed model, especially for legacy systems, are considered its main advantages.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_52-Adaptive_Retrieval_Time_Related_Data_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Invasive Weed Optimization with Tabu Search Algorithm for an Energy and Deadline Aware Scheduling in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111251</link>
        <id>10.14569/IJACSA.2020.0111251</id>
        <doi>10.14569/IJACSA.2020.0111251</doi>
        <lastModDate>2020-12-31T12:59:33.0570000+00:00</lastModDate>
        
        <creator>Pradeep Venuthurumilli</creator>
        
        <creator>Sridhar Mandapati</creator>
        
        <subject>Cloud computing; scheduling; Genetic Algorithm (GA); Invasive Weed Optimization (IWO) and Tabu Search (TS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The high flexibility, profitability, and potential of cloud computing have made it extremely popular among companies. It is used to apply resources efficiently and to optimize the makespan of tasks. Scheduling is easy when there are only a few tasks to complete with few resources. In contrast, when users forward many demands to the cloud environment, resources may need to be selected and allocated optimally to achieve the desired quality of service, which makes scheduling challenging. In this work, intelligent metaheuristic algorithms are proposed for processing users&#39; requests and tasks in energy-aware scheduling made for a deadline. The Genetic Algorithm (GA) is an evolutionary algorithm inspired by the natural process of selection and the theory of evolution. Invasive Weed Optimization (IWO) is a novel stochastic, population-based, derivative-free optimization technique inspired by the growth of weed plants. Tabu Search (TS) is a generalization of local search in which a tabu list is used to prevent cycling when generating neighborhood candidates. A hybrid GA with TS (GA-TS) and a hybrid IWO with TS (IWO-TS) are proposed for energy- and deadline-aware scheduling. The framework offers optimization of both energy and performance. The primary purpose of these algorithms is to improve deadline compliance and scheduling in cloud computing by combining local and global search, and the results show that the proposed algorithms perform better than the alternatives.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_51-Hybrid_Invasive_Weed_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm Design to Determine Possibility of Student Graduate Time in Student Grade Recapitulation Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111250</link>
        <id>10.14569/IJACSA.2020.0111250</id>
        <doi>10.14569/IJACSA.2020.0111250</doi>
        <lastModDate>2020-12-31T12:59:33.0400000+00:00</lastModDate>
        
        <creator>Marliana Budhiningtias Winanti</creator>
        
        <creator>Umi Narimawati</creator>
        
        <creator>Suwinarno Nadjamudin</creator>
        
        <creator>Hendar Rubedo</creator>
        
        <creator>Syahrul Mauluddin</creator>
        
        <subject>Algorithm; graduate; subject; grade</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This study aims to create an algorithm model that determines the potential graduation time of students, to be applied to the grade recapitulation information system of the XYZ University Information System Study Program. The study program already has a grade recapitulation information system, but it cannot provide information about when a student is likely to graduate from college. Information about the probability of graduating is very important as evaluation material for guiding students toward on-time graduation, and the more students who graduate on time, the higher the program&#39;s accreditation value. The existing grade recapitulation information system can only display a history of the grades and courses a student has taken, so the guardian lecturer has difficulty checking the courses a student has not yet taken and obtaining information on when the student can graduate. In this study, an algorithm model calculates a student&#39;s graduation time based on the calculation and mapping of subjects that have not yet been taken or passed. Based on the test results, the average time needed to determine a student&#39;s graduation time is 0.165 seconds.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_50-Algorithm_Design_to_Determine_Possibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compact Scrutiny of Current Video Tracking System and its Associated Standard Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111249</link>
        <id>10.14569/IJACSA.2020.0111249</id>
        <doi>10.14569/IJACSA.2020.0111249</doi>
        <lastModDate>2020-12-31T12:59:33.0270000+00:00</lastModDate>
        
        <creator>Karanam Sunil Kumar</creator>
        
        <creator>N P Kavya</creator>
        
        <subject>Video tracking; object tracking; visual tracking; video surveillance; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>With the increasing demand for video tracking systems with object detection across a wide range of computer vision applications, it is necessary to understand the strengths and weaknesses of current approaches. There are numerous publications on different techniques for visual tracking in video surveillance applications, and it has been seen that they fall into only three prime classes of approaches, viz. point-based tracking, kernel-based tracking, and silhouette-based tracking. Therefore, this paper studies the literature published in the last decade to highlight the techniques obtained and to summarize the tracking performance they yield. The paper also highlights the present research trend toward these three core approaches and the open-ended research issues surrounding them. The prime aim of this paper is to study all the prominent approaches to video tracking that have evolved to date in the literature, in order to understand the strengths and weaknesses of the standard approaches so that they can effectively act as a guideline for constructing new models. The prominent challenge in reviewing the existing approaches is that all of them target accuracy, whereas various other connected problems in the internal process have not been considered, e.g., feature extraction, processing time, dimensionality, and the non-inclusion of contextual factors; identifying these is an outcome of the proposed review. The paper concludes by highlighting this research gap as the contribution of this review and further states that there are good possibilities for new work if these issues are considered before developing any video tracking system. Overall, this paper offers an unbiased picture of the current state of video tracking approaches to be considered when developing any upcoming model.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_49-Compact_Scrutiny_of_Current_Video_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Elbow Abnormalities in X-ray Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111248</link>
        <id>10.14569/IJACSA.2020.0111248</id>
        <doi>10.14569/IJACSA.2020.0111248</doi>
        <lastModDate>2020-12-31T12:59:33.0100000+00:00</lastModDate>
        
        <creator>Mashal Afzal</creator>
        
        <creator>M. Moazzam Jawaid</creator>
        
        <creator>Rizwan Badar Baloch</creator>
        
        <creator>Sanam Narejo</creator>
        
        <subject>Deformation detection; multi-class probabilistic segmentation; edge detection and geometrical shape detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>An abnormality or deformity in any bone disrupts the overall function of the human skeleton. Orthopedic abnormalities are therefore a common reason for emergency department visits, and elbow deformation is one of the common issues seen among emergency patients. Despite the high frequency of elbow-related casualties, there is no standardized method for the interpretation of digital X-rays. Accordingly, we propose a model for automatic deformation detection in the elbow and connected forearm bones using image processing techniques. In the first step, the X-ray images were preprocessed and the region of interest was segmented using multi-class probabilistic segmentation. Subsequently, a multi-phase Canny edge detector was used to highlight the edges, and descriptive features were extracted to differentiate between normal and abnormal X-rays. On the basis of those features, three tests were performed to automatically trace deformities in the different bones associated with the elbow. The publicly available Musculoskeletal Radiographs (MURA) dataset was used in this research: 250 elbow X-rays from the dataset were investigated for geometrical shape distortions, cracks, damage, and abnormal distances between the bones. The proposed method shows promising results, with an accuracy of 86.20%.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_48-Automatic_Detection_of_Elbow_Abnormalities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Credit Card Business in Malaysia: A Data Analytics Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111247</link>
        <id>10.14569/IJACSA.2020.0111247</id>
        <doi>10.14569/IJACSA.2020.0111247</doi>
        <lastModDate>2020-12-31T12:59:32.9800000+00:00</lastModDate>
        
        <creator>Mohamed Khaled Yaseen</creator>
        
        <creator>Mafas Raheem</creator>
        
        <creator>V. Sivakumar</creator>
        
        <subject>Credit card; predictive analytics; random forest; sentiment analysis; banking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The big data revolution has resonated in the banking sector, especially in dealing with massive amounts of data. Banks have the opportunity to learn customers&#39; opinions of and satisfaction with their products by analyzing the data gathered every day, transforming these data into high-quality information that allows them to improve their business, especially in credit cards, which are becoming a short-term business for banks nowadays. Further, sentiment analysis has become prominent in the field of data analytics, as customers’ opinions have a huge impact on making profitable business decisions. The outcome of sentiment analysis helps banks identify the deficiencies of their products and improve them to satisfy customers. In this study, the sentiment analysis showed that 45% of customers were negative, 30% positive, and 25% neutral toward the credit card facility offered by the commercial banks. The prediction of credit card customer satisfaction can also contribute significantly to creating new opportunities for banks to enhance their promotions as well as the credit card business in the future. The Random Forest algorithm was applied in three experiments utilizing the normal data, the balanced data, and an optimized model with the normal data. The optimized model with the normal data obtained the highest accuracy of 87.38%, followed by the normal dataset at 85.82%, while the balanced dataset had the lowest accuracy at 82.83%.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_47-Credit_Card_Business_in_Malaysia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Admission Exam Web Application Prototype for Blind People at the University of Sciences and Humanities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111246</link>
        <id>10.14569/IJACSA.2020.0111246</id>
        <doi>10.14569/IJACSA.2020.0111246</doi>
        <lastModDate>2020-12-31T12:59:32.9470000+00:00</lastModDate>
        
        <creator>Alexis Carrion-Silva</creator>
        
        <creator>Carlos Diaz-Nunez</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Admission; Balsamiq; blind; scrum; soft systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Currently, a large sector of Peru&#39;s population has some type of disability. Every year, the government creates norms for their integration into society; however, to date, full integration has not been achieved. One problem area is the entrance exam, where, to this day, blind applicants lack the tools to perform autonomously and optimally. With this in mind, the objective of this article is the development of a prototype admission exam for blind people at the University of Sciences and Humanities. A hybrid methodology combining Soft Systems and Scrum is used. The results were obtained from the analysis of both methodologies and a final prototype built with Balsamiq, demonstrating an effective union between the two methodologies. The prototype will therefore facilitate the performance of blind people during their entrance exam.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_46-Admission_Exam_Web_Application_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Support System for Detecting Age and Gender from Twitter Feeds based on a Comparative Experiments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111245</link>
        <id>10.14569/IJACSA.2020.0111245</id>
        <doi>10.14569/IJACSA.2020.0111245</doi>
        <lastModDate>2020-12-31T12:59:32.9330000+00:00</lastModDate>
        
        <creator>Roobaea Alroobaea</creator>
        
        <creator>Sali Alafif</creator>
        
        <creator>Shomookh Alhomidi</creator>
        
        <creator>Ahad Aldahass</creator>
        
        <creator>Reem Hamed</creator>
        
        <creator>Rehab Mulla</creator>
        
        <creator>Bedour Alotaibi</creator>
        
        <subject>Decision support system; age detection; gender detection; author profiling; deep learning; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Author profiling aims to correlate writing style with author demographics. This paper presents an approach used to build a Decision Support System (DSS) for detecting age and gender from Twitter feeds. The system is implemented using Deep Learning (DL) and Machine Learning (ML) algorithms to distinguish between age and gender classes. The results show that each algorithm yields different age and gender results depending on its model architecture and strengths. Our decision support system, which adopts a deep learning model using CNN and LSTM methods, is more accurate in predicting the age and gender of an author from his or her tweets. Our results outperform those obtained in the CLEF 2019 competition.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_45-A_Decision_Support_System_for_Detecting_Age.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Digital Image Processing Technology in Discovering Green Patches in the Desert of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111244</link>
        <id>10.14569/IJACSA.2020.0111244</id>
        <doi>10.14569/IJACSA.2020.0111244</doi>
        <lastModDate>2020-12-31T12:59:32.9170000+00:00</lastModDate>
        
        <creator>Ali Mehdi</creator>
        
        <creator>Md Alamin Bhuiyan</creator>
        
        <subject>Image processing; change detection; optical flow analysis; green patches; color segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In recent years, the Kingdom of Saudi Arabia has witnessed a noticeable growth of grass and small trees in the desert, forming green patches. Those green patches may have the potential to spread and cover a wider area of the desert in the coming years, thus making parts of the desert potential agricultural land. This research aims to detect the change in green patches in the desert of Saudi Arabia, a challenge mainly due to the lack of an organized dataset. Using a series of satellite images of the desert landscape, a change detection algorithm is used to identify changes in green spaces. The algorithm works on multi-temporal datasets to evaluate chronological effects, and this paper presents an optical flow analysis among images captured at different times. The algorithm shows promising results in detecting changes in green patches in the desert of Saudi Arabia via color segmentation, and it has been validated over a set of satellite images, demonstrating effective performance.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_44-Applying_Digital_Image_Processing_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent-based Model for Simulating Urban System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111243</link>
        <id>10.14569/IJACSA.2020.0111243</id>
        <doi>10.14569/IJACSA.2020.0111243</doi>
        <lastModDate>2020-12-31T12:59:32.9000000+00:00</lastModDate>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <creator>Malika ADDOU</creator>
        
        <subject>Multi agent systems; simulation; modelling; urban system; cellular agent; vector agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In this paper, we address the issue of modelling and simulating urban systems. We propose a new approach for simulating urban systems based on the multi-agent paradigm. Our proposed model is based on a coupling between cellular agents and vector agents. These agents make it possible to take into account the spatial dimension of urban systems as well as to model all the rules that govern them. To allow reusability of our model, we apply the VOYELLE approach by defining an environment model, an organization model, an agent model and an interaction model. We test our proposed model with a case study on the city of Casablanca. We discuss the problem of urbanization in Casablanca by following an approach that divides the problem into two sub-problems that are similar but treated differently: first, predict the city’s need for housing (individual housing zone, multifamily housing zone, ...) and then anticipate the city’s need for public services and ensure a better spatial distribution of this equipment to best serve people’s needs. We then experiment with two simulation scenarios, changing in each scenario the hypotheses concerning urban planning, especially in terms of demographic growth rate and residential sprawl.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_43-Agent_based_Model_for_Simulating_Urban_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Fermat Factorization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111242</link>
        <id>10.14569/IJACSA.2020.0111242</id>
        <doi>10.14569/IJACSA.2020.0111242</doi>
        <lastModDate>2020-12-31T12:59:32.8830000+00:00</lastModDate>
        
        <creator>Hazem M. Bahig</creator>
        
        <creator>Mohammed A. Mahdi</creator>
        
        <creator>Khaled A. Alutaibi</creator>
        
        <creator>Amer AlGhadhban</creator>
        
        <creator>Hatem M. Bahig</creator>
        
        <subject>Integer factorization; Fermat’s algorithm; RSA; factorization with sieving; perfect square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The Rivest-Shamir-Adleman (RSA) cryptosystem is one of the strong encryption approaches currently being used for secure data transmission over an insecure channel. The difficulty of breaking RSA derives from the difficulty of finding a polynomial-time algorithm for integer factorization. In integer factorization for RSA, given an odd composite number n, the goal is to find two prime numbers p and q such that n = p q. In this paper, we study several integer factorization algorithms that are based on Fermat’s strategy, and do the following: First, we classify these algorithms into three groups: Fermat, Fermat with sieving, and Fermat without perfect square. Second, we conduct extensive experimental studies on nine different integer factorization algorithms and measure the performance of each algorithm based on two parameters: the number of bits of the odd composite number n, and the number of bits of the difference between the two prime factors, p and q. The results obtained by the algorithms when applied to five different data sets for each factor reveal that the best-performing algorithms are those based on (1) the sieving of odd and even numbers strategy and (2) Euler’s theorem, with improvements of 44% and 36%, respectively, compared to the original Fermat factorization algorithm. Finally, future directions of research and development are presented.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_42-Performance_Analysis_of_Fermat_Factorization_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Human Activities Recognition using Smartwatches Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111241</link>
        <id>10.14569/IJACSA.2020.0111241</id>
        <doi>10.14569/IJACSA.2020.0111241</doi>
        <lastModDate>2020-12-31T12:59:32.8700000+00:00</lastModDate>
        
        <creator>Saadia Karim</creator>
        
        <creator>SM Aqil Burney</creator>
        
        <creator>Nadeem Mahmood</creator>
        
        <subject>Human activity recognition; smartwatches; big data; machine learning; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Today, in the era of smart devices, human behavior is increasingly monitored in a changing environment, where learned activities are used to predict a person’s next action. Smart devices have built-in sensors (accelerometer and gyroscope) that continuously generate large amounts of data. Together with machine learning and data mining techniques, these data are used to identify novel patterns of human behavior. The classification of human motions from motion sensor data is among the current topics of study. Classification is an important part of data mining and is used in this work to find the accuracy of instances in the given dataset. Thus, it is possible to follow the activities of a user carrying only a smartwatch. Smartwatches of four different models from two manufacturers are used. Furthermore, the experiment involves nine users and seven activities performed by them. The dataset, to which principal component analysis had been applied, was classified by decision stump, J48, Bayes net, naive Bayes, naive Bayes multinomial text, random forest, and LogitBoost methods, and their performances were compared. The most successful result was obtained with the random forest method: the accuracy of the random forest classification algorithm on nominal datasets is 99.99% on both accelerometer and gyroscope sensors.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_41-An_Analysis_of_Human_Activities_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Developing High-Speed Heterogeneous and Composite ES Network through Multi-Master Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111240</link>
        <id>10.14569/IJACSA.2020.0111240</id>
        <doi>10.14569/IJACSA.2020.0111240</doi>
        <lastModDate>2020-12-31T12:59:32.8530000+00:00</lastModDate>
        
        <creator>J Rajasekhar</creator>
        
        <creator>JKR Sastry</creator>
        
        <subject>Embedded systems; embedded networks; hybridization of embedded networks; hybridizations through multi-master communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>These days, many heterogeneous and composite embedded systems contain subnets developed using different bus-based protocols, such as I2C, CAN, USB, and RS485. There is always a requirement to interface and interconnect heterogeneous ES networks to establish a composite network. ES networks developed using different protocols differ in many ways, including communication speed, arbitration, synchronization, and timing. Many solutions are offered using heterogeneous embedded systems, especially in implementing automation systems, without addressing integration and proper interfacing. In this paper, multi-master-based interfacing of CAN and I2C networks through an Ethernet-based interface is presented, especially to find the optimum speeds at which the networks must be operated for different data packet sizes. It is shown in the paper that communication is quite efficient and effective when a data packet of size 40 bytes is driven using an I2C speed of 5120 bits, an Ethernet speed of 20480 bits, and a CAN speed of 500 bits.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_40-On_Developing_High_Speed_Heterogeneous.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Assessment of Organizational Capabilities for ERP Implementation in SMEs: A Governance Model for IT Success using a Resource-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111239</link>
        <id>10.14569/IJACSA.2020.0111239</id>
        <doi>10.14569/IJACSA.2020.0111239</doi>
        <lastModDate>2020-12-31T12:59:32.8230000+00:00</lastModDate>
        
        <creator>Houcine Chatti</creator>
        
        <creator>Evan Asfoura</creator>
        
        <creator>Gamal Kassem</creator>
        
        <subject>ERP success; systemic approach; quantitative study; structural equation modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>One of the most coveted technological innovations is the increasing use of Integrated Management Software (ERP) since the early 1990s. ERP is considered a powerful reengineering tool that profoundly transforms a company&#39;s business processes and changes the way reengineering projects are conducted and new software is implemented. The significant number of failures reported in the ERP literature emphasizes the fact that some companies may not realize the expected benefits. This result becomes particularly significant in the case of SMEs, which have their own contingencies in addition to a scarcity of resources that may, in turn, lead to the failure of ERP implementation. This leads us to ask, on the one hand, about the variables of success of this innovation, i.e., the determinants of techno-organizational innovation, and, on the other hand, about the existence of a model for analyzing the dependences between these determinants and success as perceived by management. The current empirical research was carried out in 92 companies that have implemented all or part of their IS with an ERP system. Having analyzed the data collected via a questionnaire and applied the structural equation modelling method, the results prove the existence of a &quot;general fit&quot; between the data and the supposed causal relations.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_39-An_Assessment_of_Organizational_Capabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive e-Learning AI-Powered Chatbot based on Multimedia Indexing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111238</link>
        <id>10.14569/IJACSA.2020.0111238</id>
        <doi>10.14569/IJACSA.2020.0111238</doi>
        <lastModDate>2020-12-31T12:59:32.8070000+00:00</lastModDate>
        
        <creator>Salma El Janati</creator>
        
        <creator>Abdelilah Maach</creator>
        
        <creator>Driss El Ghanami</creator>
        
        <subject>e-Learning; Chatbot; Speech-To-Text; NLP; Keywords Extraction; Text Clustering; Multimedia Indexing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>With the rapid evolution of e-learning technology, multiple sources of information have become more and more accessible. However, the availability of a wide range of e-learning offers makes it difficult for learners to find the right content for their training needs. In this context, our paper aims to design an e-learning AI-powered Chatbot that interacts with learners and suggests e-learning content adapted to their needs. In order to achieve these objectives, we first analysed the e-learning multimedia content to extract the maximum amount of information. Then, using Natural Language Processing (NLP) techniques, we introduced a new approach to extract keywords. After that, we suggest a new approach for multimedia indexing based on the extracted keywords. Finally, the Chatbot architecture is realized based on the multimedia indexing and deployed on online messaging platforms. The suggested approach aims to represent the multimedia content efficiently based on keywords. We compare our approach with approaches in the literature and find that the use of keywords in our approach results in a better representation and reduces the time needed to construct the multimedia index. The core of our Chatbot is based on this indexed multimedia content, which enables it to look up information quickly. Our Chatbot thus reduces response time and meets the learner’s needs.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_38-Adaptive_e_Learning_AI_Powered_Chatbot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybird Framework based on Autoencoder and Deep Neural Networks for Fashion Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111237</link>
        <id>10.14569/IJACSA.2020.0111237</id>
        <doi>10.14569/IJACSA.2020.0111237</doi>
        <lastModDate>2020-12-31T12:59:32.7900000+00:00</lastModDate>
        
        <creator>Aziz Alotaibi</creator>
        
        <subject>Fashion detection; fashion classification; convolutional autoencoder; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Deep learning has played a huge role in computer vision due to its ability to extract underlying and complex features from input images. Deep learning is applied to complex vision tasks to perform image recognition and classification. Recently, apparel classification, an application of computer vision, has been intensively explored and investigated. This paper proposes an effective framework, called DeepAutoDNN, based on deep learning algorithms for apparel classification. The DeepAutoDNN framework combines a deep autoencoder with deep neural networks to extract the complex patterns and high-level features of fashion images in a supervised manner. These features are passed to a categorical classifier to predict the correct label of a given image. To evaluate the performance and investigate the efficiency of the proposed framework, several experiments were conducted on the Fashion-MNIST dataset, which consists of 70000 images: 60000 for training and 10000 for testing. The results show that the proposed framework achieves an accuracy of 93.4%. In the future, the performance of this framework may be improved by utilizing generative adversarial networks and their variants.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_37-A_Hybird_Framework_based_on_Autoencoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants of Privacy Protection Behavior in Social Networking Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111236</link>
        <id>10.14569/IJACSA.2020.0111236</id>
        <doi>10.14569/IJACSA.2020.0111236</doi>
        <lastModDate>2020-12-31T12:59:32.7770000+00:00</lastModDate>
        
        <creator>Siti Norlyana Suhaimi</creator>
        
        <creator>Nur Fadzilah Othman</creator>
        
        <creator>Raihana Syahirah</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Cik Feresa Mohd Foozy</creator>
        
        <subject>Privacy; social networking sites; protection motivation theory; hyperpersonal communication theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Social Networking Sites (SNSs) are an attractive online platform for social interaction and communication. Since SNSs are easily accessed by a large number of people, a large quantity of data is also stored in them. Consequently, concerns regarding exposure to privacy risks emerge. In this case, users need privacy protection behavior to protect their privacy in SNSs. This paper aims to determine the motivational determinants of privacy protection behavior among high school students in protecting their data or personal information when using SNSs. To identify the determinants of privacy protection behavior, a questionnaire survey was administered to 200 high school students. This study proposes a conceptual model that offers an understanding of the motivational determinants of privacy protection behavior in social networking sites. Results indicate that perceived anonymity is the most significant determinant in motivating privacy behavior, followed by perceived intrusiveness, perceived severity, self-efficacy, perceived vulnerability, and response efficacy. The results of this study shed some light on understanding the levels of privacy protection behavior in SNSs and identify suitable interventions for motivating privacy protection behavior among high school students. Finally, by combining Protection Motivation Theory (PMT) and Hyperpersonal Communication Theory (HCT), this model provides a basis to direct future studies in the related field.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_36-Determinants_of_Privacy_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learner Behavior in e-Learning as a Multicriteria Attribute based on Perspective of Flow Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111235</link>
        <id>10.14569/IJACSA.2020.0111235</id>
        <doi>10.14569/IJACSA.2020.0111235</doi>
        <lastModDate>2020-12-31T12:59:32.7600000+00:00</lastModDate>
        
        <creator>Dadang Syarif Sihabudin Sahid</creator>
        
        <subject>Flow experience; learning behavior; multicriteria attribute; TAM; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Flow experience describes a psychological condition in the form of an optimal experience of an activity. Flow reflects interconnection, interest, and pleasure in an activity, enabling the user to participate in it fully. In e-learning, flow provides a positive experience of the learning process. This condition is essential to a user’s ability to achieve high performance. Therefore, it is important to identify a user’s flow experience during interaction with e-learning. This information can serve as a reference for how an e-learning model should respond in accordance with the user’s psychological condition. Assessments of psychological experience based on flow theory have been conducted in many studies, particularly using the experience sampling method. However, these survey methods require high effort and are therefore inefficient. Previous studies on this topic cover only conventional learning with face-to-face interaction. In e-learning, particularly adaptive context-aware e-learning, flow experience can be assessed by inference from the learning behavior parameters of learners during their interaction with e-learning. However, no study relates learners’ learning behavior in e-learning to the parameters of flow experience. Therefore, this study tested hypotheses aimed at obtaining the relation between learning behavior and flow experience. The hypothesis model was constructed from the technology acceptance model (TAM), the expectation confirmation model, and flow experience as the learner’s psychological condition. Learning behavior as a multicriteria attribute was represented by actual usage in the form of intensity of e-learning use. Meanwhile, perceived balance of skill and challenge, as a representation of flow experience, was selected as the main variable in the proposed hypotheses. The results showed that these variables had positive relations with each other.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_35-Learner_behavior_in_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>iDietScoreᵀᴹ: Meal Recommender System for Athletes and Active Individuals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111234</link>
        <id>10.14569/IJACSA.2020.0111234</id>
        <doi>10.14569/IJACSA.2020.0111234</doi>
        <lastModDate>2020-12-31T12:59:32.7430000+00:00</lastModDate>
        
        <creator>Norashikin Mustafa</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Mohd Izham Mohamad</creator>
        
        <creator>Ahmad Zawawi Zakaria</creator>
        
        <creator>Azimah Ahmad</creator>
        
        <creator>Noor Hafizah Yatiman</creator>
        
        <creator>Ruzita Abd Talib</creator>
        
        <creator>Poh Bee Koon</creator>
        
        <creator>Nik Shanita Safii</creator>
        
        <subject>Expert system; meal planning; sports nutrition; inference engine; design and development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Individualized meal planning is a nutrition counseling strategy that focuses on improving food behavior change. In the sports setting, the number of experts who are sports dietitians or nutritionists (SD/SN) is small, and the demand for meal planning for a vast number of athletes often cannot be met. Although some food recommender systems have been proposed to provide healthy menu planning for the general population, no similar solution has focused on athletes&#39; needs. In this study, the iDietScoreTM architecture was proposed to give athletes and active individuals virtual individualized meal plans based on their profiles, including energy and macronutrient requirements, sports category, age group, training cycle, training time and individual food preferences. Knowledge acquisition in the expert domain (the SN) was conducted prior to the system design through a semi-structured interview to understand the workflow of meal planning activities. The architecture comprises: (1) the iDietScoreTM web for the SN/SD, (2) a mobile application for athletes and active individuals and (3) an expert system. The SN/SD use the iDietScoreTM web to develop meal plans and to compile the meal plan database for further use in the expert system. Users use the iDietScoreTM mobile app to receive their virtual individualized meal plans. An inference-based expert system was applied in the current study to generate meal plan recommendations and meal reconstruction for the user. Further research is necessary to evaluate the prototype&#39;s usability with the target users (athletes and active individuals).</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_34-iDietScoreTM_Meal_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Intelligent Framework for Improving the Quality of Service in the Government Organizations in the Kingdom of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111233</link>
        <id>10.14569/IJACSA.2020.0111233</id>
        <doi>10.14569/IJACSA.2020.0111233</doi>
        <lastModDate>2020-12-31T12:59:32.7130000+00:00</lastModDate>
        
        <creator>Abdulelah Abdallah AlGosaibi</creator>
        
        <creator>Abdul Rahaman Wahab Sait</creator>
        
        <creator>Abdulaziz Fahad AlOthman</creator>
        
        <creator>Shadan AlHamed</creator>
        
        <subject>Key performance indicators; big data; hierarchical analysis; artificial intelligence; privacy policies; metadata; pattern generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The Kingdom of Saudi Arabia is enhancing the services and applications of government organizations through a number of systems that generate massive amounts of data handled with Big Data technology. Recently, at the Global Artificial Intelligence Summit 2020, the Saudi Data and Artificial Intelligence Authority (SDAIA) and NEOM launched an Artificial Intelligence (AI) strategy that aligns with the Kingdom Vision 2030. AI opens a wide door for opportunities and new strategies that will narrow the gap in individuals&#39; skillsets and promote research and innovation in the IT industry. Organizations lack advanced techniques for evaluating the performance of individuals and departments to support improving the quality of service. The introduction of AI-based applications in the government and private sectors will help decision-makers track and optimize the efficiency of departments and individuals. This research aims to develop an intelligent framework for government organizations to improve the quality of services rendered to customers and businesses. In addition, it highlights the importance of AI policies in archiving metadata. This paper presents a framework for an organization that contains a Chatbot, Sentiment Analysis, and Key Performance Indicators to improve services. A synthetic dataset is employed as a testbed to evaluate the performance of the framework. The outcome of this study shows that the proposed framework is able to improve the performance of organizations. Using the proposed framework, organizations can build a mechanism for their workforce to retrieve meaningful information. Moreover, it provides significant features, including efficient data extraction, data management, and AI-based security for effective document management.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_33-Developing_an_Intelligent_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Approximation Algorithm for Minimum Vertex Cover Problem based on Min-to-Min (MtM) Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111232</link>
        <id>10.14569/IJACSA.2020.0111232</id>
        <doi>10.14569/IJACSA.2020.0111232</doi>
        <lastModDate>2020-12-31T12:59:32.6970000+00:00</lastModDate>
        
        <creator>Jawad Haider</creator>
        
        <creator>Muhammad Fayaz</creator>
        
        <subject>Minimum vertex cover; approximation algorithms; maximum independent set; benchmark instances; graph theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In this paper, we propose an algorithm based on a min-to-min approach. In the proposed algorithm, the degree of each vertex of the graph is first calculated. Next, the vertex with the minimum degree is selected and all of its neighbors are located. Among these neighbors, the vertex with the minimum degree is again found, added to the minimum vertex cover set, and deleted from the graph. The degree of each vertex of the updated graph is then recalculated, and the same process is repeated until the graph becomes empty. In case of a tie, all the neighbors of the minimum-degree vertices are computed, and the minimum-degree vertex among them is added to the minimum vertex cover set. The proposed algorithm is very simple, efficient, and easy to understand and implement. The proposed min-to-min algorithm is evaluated on small as well as large benchmark instances, and the results indicate that its performance is far better than that of other state-of-the-art algorithms in terms of accuracy and computational complexity. We have also used the proposed method to solve the maximum independent set problem.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_32-A_Smart_Approximation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verification of Himawari-8 Observation Data using Cloud Optical Thickness (COT) and Cloud Image Energy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111231</link>
        <id>10.14569/IJACSA.2020.0111231</id>
        <doi>10.14569/IJACSA.2020.0111231</doi>
        <lastModDate>2020-12-31T12:59:32.6670000+00:00</lastModDate>
        
        <creator>Umar Ali Ahmad</creator>
        
        <creator>Wendi Harjupa</creator>
        
        <creator>Dody Qory Utama</creator>
        
        <creator>Risyanto</creator>
        
        <creator>Alex Lukmanto Suherman</creator>
        
        <creator>Wahyu Pamungkas</creator>
        
        <creator>Prayitno Abadi</creator>
        
        <creator>Agus Virgono</creator>
        
        <creator>Burhanuddin Dirgantoro</creator>
        
        <creator>Reza Rendian Septiawan</creator>
        
        <creator>Mas’ud Adhi Saputra</creator>
        
        <subject>Himawari-8; COT; image classification; cloud energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Himawari-8 satellite cloud observation data covers all areas of Indonesia. The cloud observation data can be used for observing current weather conditions and for short-term predictions. This paper reports a verification method for Himawari-8 observation data using Cloud Optical Thickness (COT) compared against cloud image energy. The verification test was carried out to determine the accuracy of Himawari-8&#39;s observations. COT data were verified using energy data from the observation images of a time-lapse camera. First, the time-lapse camera captures and classifies the cloud image. Subsequently, the energy of each image frame was calculated, and the results were re-grouped based on energy to determine the cloud type. The results show a positive correlation between COT and low energy values for cumulonimbus cloud detection, with the opposite holding for the Cirrus cloud type. However, a more accurate observation method is required to obtain data from cloud images on the Himawari-8 satellite, specifically for regions with a small spatial size of 4 km and thin clouds in the lower layer.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_31-Verification_of_Himawari_8_Observation_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation and Analysis of Variable Antenna Designs for Effective Stroke Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111230</link>
        <id>10.14569/IJACSA.2020.0111230</id>
        <doi>10.14569/IJACSA.2020.0111230</doi>
        <lastModDate>2020-12-31T12:59:32.6500000+00:00</lastModDate>
        
        <creator>Amor Smida</creator>
        
        <subject>Stroke detection; Specific Absorption Rate (SAR); Patch Antenna; S-parameter (S11); electromagnetic wave; bio-heat transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The variety of applications of patch antennas for portable devices has opened avenues for compact, cost-efficient, and life-saving devices. Considering the challenges of portability and cost in making stroke detection feasible for the masses of developing countries, where demand is quite high, this study builds the groundwork for such device fabrication. In total, five antenna designs were investigated for their ability to identify the stroke. Two main studies, of electromagnetic wave interaction and bio-heating of the human head phantom, were carried out and the results compared. The main comparison and identification of the stroke location within the human head phantom are presented via the specific absorption rate (SAR), visualized both as a volumetric plot and as stacked contour slices to clarify the shape and position of the stroke in the vertical and horizontal dimensions. The results show that the SAR values for Antennas A &amp; D are the lowest, with values of 1.44 x 10^-5 W/kg and 1.96 x 10^-5 W/kg, respectively. However, the induced electric field and isothermal temperature achieved were highest for Antenna D, with values of 0.25 emw and 133.92 x 10^-8 K, respectively; the 2-D far-field radiation patterns also confirmed its better performance among all designs. Hence, Antenna D is the most preferred choice for the prototyping stage. The overall trade-off of key parameters is studied in this simulation, and based on it the most suitable antenna design is proposed for experimental prototype testing. The results give clear insight into the feasibility of stroke detection with the proposed setup and suggest high viability for portable, low-cost, and rapid stroke detection applications.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_30-Simulation_and_Analysis_of_Variable_Antenna_Designs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Platform for Extracting Driver Behavior from Vehicle Sensor Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111229</link>
        <id>10.14569/IJACSA.2020.0111229</id>
        <doi>10.14569/IJACSA.2020.0111229</doi>
        <lastModDate>2020-12-31T12:59:32.6200000+00:00</lastModDate>
        
        <creator>Sultan Ibrahim bin Ibrahim</creator>
        
        <creator>Emad Felemban</creator>
        
        <creator>Faizan Ur Rehman</creator>
        
        <creator>Ahmad Muaz Qamar</creator>
        
        <creator>Akhlaq Ahmad</creator>
        
        <creator>Abdulrahman A. Majrashi</creator>
        
        <subject>GPS Data; AVL sensors; hajj; big data; traffic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Traffic analysis of vehicles in densely populated areas and places of public gathering can provide interesting insights into crowd behavior. Hajj is a spatio-temporally bound religious activity that is held annually and attended by more than 2 million people. More than 17,000 buses are used to transport pilgrims on fixed days to fixed locations. This poses great challenges in terms of crowd management. Using Global Positioning System (GPS) and Automatic Vehicle Location (AVL) sensors attached to buses, a large amount of spatio-temporal vehicle data can be collected for traffic analysis. In this paper, we present a study in which driver behavior was extracted from an analysis of vehicle big data. We explain in detail how we collected the data, cleaned it, moved it to a big data repository, processed it, and extracted information that helped us characterize driver behavior according to our definition of aggressiveness. We used data from 17,000 buses collected during Hajj 2018.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_29-A_Platform_for_Extracting_Driver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Traffic Distribution Routing Algorithm for Low Level VPNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111228</link>
        <id>10.14569/IJACSA.2020.0111228</id>
        <doi>10.14569/IJACSA.2020.0111228</doi>
        <lastModDate>2020-12-31T12:59:32.6030000+00:00</lastModDate>
        
        <creator>Abdelwahed Berguiga</creator>
        
        <creator>Ahlem Harchay</creator>
        
        <creator>Ayman Massaoudi</creator>
        
        <creator>Radhia Khdhir</creator>
        
        <subject>Virtual Private Networks (VPN); Quality of Service (QoS); NS-2; Simulations; Shortest Path Routing (SPR); Traffic Split Routing (TSR); Routing algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Virtual Private Networks (VPN) constitute a particular class of shared networks, in which resources are shared among several customers. The management of these resources requires a high level of automation to obtain the dynamics necessary for the proper functioning of a VPN. In this paper, we consider the problem of a network operator who owns the physical infrastructure and wishes to deliver VPN service to its customers. These customers may be Internet Service Providers, large corporations, and enterprises. We propose a new routing approach, referred to as Traffic Split Routing (TSR), which splits the traffic as fairly as possible between the network links. We show that TSR outperforms Shortest Path Routing (SPR) in terms of the number of admitted VPNs and in terms of Quality of Service.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_28-A_New_Traffic_Distribution_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast Military Object Recognition using Extreme Learning Approach on CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111227</link>
        <id>10.14569/IJACSA.2020.0111227</id>
        <doi>10.14569/IJACSA.2020.0111227</doi>
        <lastModDate>2020-12-31T12:59:32.5730000+00:00</lastModDate>
        
        <creator>Hari Surrisyad</creator>
        
        <creator>Wahyono</creator>
        
        <subject>Training speed; resource; backpropagation; CNN; ELM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Convolutional Neural Network (CNN) is an algorithm that can classify image data with very high accuracy but requires a long training time, so the required resources are quite large. One cause of the long training time is the backpropagation-based classification layer, which uses a slow gradient-based algorithm to perform learning, with all network parameters determined iteratively. This paper proposes a combination of CNN and Extreme Learning Machine (ELM) to overcome these problems. The combination uses the convolutional feature extraction layers of the CNN and replaces the classification layer with the ELM method. The ELM method is a Single Hidden Layer Feedforward Neural Network (SLFN) created to overcome weaknesses of traditional networks, especially the training speed of feedforward neural networks. The combination of CNN and ELM is expected to produce a model with a faster training time and smaller resource usage, while maintaining accuracy comparable to a standard CNN. In the experiment, the military object classification problem was implemented, and the model used up to 400 MB less GPU resources compared to a standard CNN.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_27-A_Fast_Military_Object_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industrial Energy Load Profile Forecasting under Enhanced Time of Use Tariff (ETOU) using Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111226</link>
        <id>10.14569/IJACSA.2020.0111226</id>
        <doi>10.14569/IJACSA.2020.0111226</doi>
        <lastModDate>2020-12-31T12:59:32.5570000+00:00</lastModDate>
        
        <creator>Mohamad Fani Sulaima</creator>
        
        <creator>Siti Aishah Abu Hanipah</creator>
        
        <creator>Nur Rafiqah Abdul Razif</creator>
        
        <creator>Intan Azmira Wan Abdul Razak</creator>
        
        <creator>Aida Fazliana Abdul Kadir</creator>
        
        <creator>Zul Hasrizal Bohari</creator>
        
        <subject>Time of use; artificial neural network; energy forecasting; load profile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>The demand response program involves consumers in mitigating peak demand and reducing global CO2 emissions. To sustain this effort, energy providers such as Tenaga Nasional Berhad (TNB) in Peninsular Malaysia have introduced the Enhanced Time of Use (ETOU) tariff. However, since 2015, few consumers have joined the ETOU program due to low confidence in managing their energy consumption profiles. Thus, this study provides an optimum forecasting load profile model for the TOU and ETOU tariffs using an Artificial Neural Network (ANN). An industry&#39;s average energy profile has been used as a case study, while the forecasting technique has been applied to find the optimum energy load profile. The load shifting technique has been adopted under the ETOU tariff price while integrated into the ANN procedure. A significant comparison in terms of cost reduction between the TOU and ETOU electricity tariffs has been made, and the ANN performance in searching for the best-shifted load profile has been analyzed accordingly. With the proposed method, the total electricity cost saving was found to be about 7.9% monthly. It is hoped that this work will benefit the energy authority and consumers in future actions.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_26-Industrial_Energy_Load_Profile_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive System of Semiconductor Failures based on Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111225</link>
        <id>10.14569/IJACSA.2020.0111225</id>
        <doi>10.14569/IJACSA.2020.0111225</doi>
        <lastModDate>2020-12-31T12:59:32.5270000+00:00</lastModDate>
        
        <creator>Yousef El Mourabit</creator>
        
        <creator>Youssef El Habouz</creator>
        
        <creator>Hicham Zougagh</creator>
        
        <creator>Younes Wadiai</creator>
        
        <subject>Machine learning; semiconductor; predictive maintenance; industry 4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Maintenance in manufacturing has developed and been researched at a very rapid rate over the last few decades. A major step in process control is to build a decision tool that detects defects in equipment or processes as quickly as possible to maintain high process efficiency. However, the high complexity of machines and the increase in data available in almost all areas make research on improving the accuracy of fault detection via data mining an increasingly challenging issue in this field. In this paper, we present a new predictive model of semiconductor failures, based on a machine learning approach, for predictive maintenance in Industry 4.0. The framework of our model includes: dataset and data acquisition; data preprocessing in three phases (over-sampling, data cleaning, and attribute reduction with the principal component analysis (PCA) and CfsSubsetEval techniques); data modeling; model evaluation; and model implementation. We used the SECOM dataset to develop four different models based on four algorithms (Naive Bayes, C4.5 decision tree, multilayer perceptron (MLP), and support vector machine), evaluated according to five metrics (true positive rate, false positive rate, precision, F-measure, and accuracy). We implemented our new predictive model, achieving 91.95% accuracy, as a new efficient predictive model of semiconductor failures.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_25-Predictive_System_of_Semiconductor_Failures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Indonesian Motorcycle Gang with Social Network Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111224</link>
        <id>10.14569/IJACSA.2020.0111224</id>
        <doi>10.14569/IJACSA.2020.0111224</doi>
        <lastModDate>2020-12-31T12:59:32.5270000+00:00</lastModDate>
        
        <creator>Edi Surya Negara</creator>
        
        <creator>Ria Andryani</creator>
        
        <creator>Deni Erlansyah</creator>
        
        <creator>Rezki Syaputra</creator>
        
        <subject>Social network; data mining; data analytics; community detection; motorcycle gang</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>An analysis of motorcycle gang networks in Indonesia was conducted to determine the dynamics of these networks. This analysis is needed by the government to make appropriate and effective policies for overcoming the social problems caused by the existence of such groups. The purpose of this study is to detect and determine the community structure of motorcycle gang networks in Indonesia using the big data available on the internet, especially social media. This research also draws on several approaches, such as the social and behavioral sciences as well as computer technology, to understand and find solutions to problems that arise in society. This study uses social network analysis as an instrument to reveal the social structure of motorcycle gangs through a network centrality approach and community detection. This research succeeded in finding the network structure pattern and network insight of motorcycle gangs by identifying the most influential actors. The study also found 25 motorcycle gang groups with high-value network interactions; these groups had more than 2000 active members on social media. In the motorcycle gang social network analysis, the most influential actor has a degree of 531 with a weighted degree of 1557.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_24-Analysis_of_Indonesian_Motorcycle_Gang.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prevention of Attacks in Mobile Ad Hoc Network using African Buffalo Monitoring Zone Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111223</link>
        <id>10.14569/IJACSA.2020.0111223</id>
        <doi>10.14569/IJACSA.2020.0111223</doi>
        <lastModDate>2020-12-31T12:59:32.4930000+00:00</lastModDate>
        
        <creator>R. Srilakshmi</creator>
        
        <creator>M.Jaya Bhaskar</creator>
        
        <subject>Mobile ad hoc network; malicious nodes; routing protocol; wormhole attack; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Mobile ad hoc networks (MANET) can be utilized for wireless communication. However, MANET is affected by many attacks and malicious activities, so a prevention approach is necessary to secure communication. MANET is easily affected by numerous attacks such as the wormhole (WH) attack, grey-hole (GH) attack, and black-hole (BH) attack, in which sender nodes are unable to transmit messages to the target node due to malicious behavior. To prevent attacks in MANET, this research introduces a novel routing protocol, the African Buffalo Monitoring Zone Protocol (ABMZP). This approach is utilized for preventing wormhole attacks and other malicious activities in MANET. The mechanism monitors the communication channel continuously and detects attacks. Subsequently, the ABMZP approach blocks harmful nodes and finds an alternate path for communication. The simulation in this research is done using Network Simulator 2 (NS-2); finally, the efficiency of the proposed ABMZP is compared with the latest existing techniques, showing superior results.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_23-Prevention_of_Attacks_in_Mobile_Ad_Hoc_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise and Restoration of UAV Remote Sensing Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111222</link>
        <id>10.14569/IJACSA.2020.0111222</id>
        <doi>10.14569/IJACSA.2020.0111222</doi>
        <lastModDate>2020-12-31T12:59:32.4800000+00:00</lastModDate>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Khadijah Amira Mohd Fauzey</creator>
        
        <creator>Mohd Mawardy Abdullah</creator>
        
        <creator>Suliadi Firdaus Sufahani</creator>
        
        <creator>Mohd Yazid Abu Sari</creator>
        
        <creator>Abd Rahman Mat Amin</creator>
        
        <subject>Noise; restoration; remote sensing; UAV; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Remotely sensed images captured from a camera mounted on a UAV (unmanned aerial vehicle) are exposed to noise caused by internal factors, such as the UAV system itself, or external factors such as atmospheric conditions. Such images need to be restored before they can undergo further processing stages. This study aims to analyse the effects of salt and pepper noise on a UAV image and restore the image by removing the noise effects. In doing so, a UAV image with red, green and blue channels, containing regions of different spectral properties, is subjected to salt and pepper noise of different densities. The image restoration procedure uses median filtering of variable sizes. Peak signal-to-noise ratio (PSNR) and mean square error (MSE) analyses are performed to measure image quality before and after restoration. An optimal filter size is chosen based on the highest PSNR of the restored image. The results show that the effects of noise on UAV images depend on the spectral properties of the image channels and the regions of interest. The proposed restoration works best for images with low-density rather than high-density noise. The blue channel is found to have the largest variation of optimal filter size, 18.5, compared to the other channels because of its high response to noise within its short spectral wavelength region. The landscape’s vegetation has the largest variation of optimal filter size, 22, compared to other regions due to the sensitivity of its dark spectral properties.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_22-Noise_and_Restoration_of_UAV_Remote_Sensing_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring UX Maturity in Software Development Environments in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111221</link>
        <id>10.14569/IJACSA.2020.0111221</id>
        <doi>10.14569/IJACSA.2020.0111221</doi>
        <lastModDate>2020-12-31T12:59:32.4630000+00:00</lastModDate>
        
        <creator>Obead Alhadreti</creator>
        
        <subject>User experience; UX maturity; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>User experience (UX) design is becoming increasingly crucial for developing successful software today. It can determine whether or not users stay engaged with a product or service. It is, therefore, important that organizations have their users in mind when developing software and that there is maturity for UX work. However, there are still organizations which do not value UX highly and where UX maturity is low. This paper reports the results of a survey of 75 practitioners working in software development environments in Saudi Arabia. The survey was conducted in July 2020 and aimed to explore practitioners&#39; perceptions of UX maturity, UX significance, and the challenges facing the UX process in software development environments. The results show a higher than expected perception of organizational UX maturity amongst the practitioners surveyed, with the majority considering their organizations to be at an &quot;Integrated phase&quot;. The degree of awareness of UX value was also higher than anticipated. Furthermore, the study identifies the most used UX methods as task analysis, prototyping, and heuristic evaluation. It also shows that UX assessment and user involvement are considered during different stages of product development, particularly in the prototyping phase. The major challenges facing the UX process were found to be the need to improve UX consistency and the ability of teams and departments to collaborate.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_21-Exploring_UX_Maturity_in_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of Privacy Preserving Data Publishing based on Overlapped Slicing on Feature Selection Stability and Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111220</link>
        <id>10.14569/IJACSA.2020.0111220</id>
        <doi>10.14569/IJACSA.2020.0111220</doi>
        <lastModDate>2020-12-31T12:59:32.4470000+00:00</lastModDate>
        
        <creator>Mohana Chelvan P</creator>
        
        <creator>Perumal K</creator>
        
        <subject>Overlapped slicing; privacy preserving data publishing; feature selection; Jaccard Index; selection stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Feature selection is vital for data mining as organizations gather colossal amounts of high-dimensional microdata. Among the significant criteria for feature selection algorithms, feature selection stability, along with accuracy, is currently considered primary. Privacy preserving data publishing methods for data with multiple sensitive attributes are analyzed to lessen the likelihood of adversaries inferring the sensitive values. In general, protecting the sensitive values is accomplished by anonymizing data using generalization and suppression methods, which may result in information loss, so strategies other than generalization and suppression are investigated to diminish this loss. Privacy preserving data publishing with the overlapped slicing technique addresses the issues in microdata with multiple sensitive attributes. Feature selection stability is a vital criterion for data mining techniques because of the ever increasing dimensionality of microdata accumulated through everyday activities on the World Wide Web, and it is directly correlated with data utility. Because feature selection stability is data-driven, modifications of a dataset for privacy preservation affect both feature selection stability and data utility. Accordingly, the effects of privacy preserving data publishing based on overlapped slicing on feature selection stability and accuracy are investigated in this paper.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_20-The_Effects_of_Privacy_Preserving_Data_Publishing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Algorithm Approach for Inter and Intra Homogeneous Grouping Considering Multi-student Characteristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111219</link>
        <id>10.14569/IJACSA.2020.0111219</id>
        <doi>10.14569/IJACSA.2020.0111219</doi>
        <lastModDate>2020-12-31T12:59:32.4170000+00:00</lastModDate>
        
        <creator>A. M. Aseere</creator>
        
        <subject>Genetic algorithm; group formation; intragroup homogeneity; intergroup homogeneity; fitness; permutation; random keys representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper addresses the problem of group formation in collaborative learning by considering the students’ characteristics. The proposed solution is based on a Genetic Algorithm (GA), which minimizes an objective function with two main aims. Indeed, the proposed GA’s fitness function helps to achieve two objectives: fairness in the formation of the different groups, resulting in intergroup homogeneity, and a low gap in the levels of students within a group, which corresponds to intragroup homogeneity. Exhaustive experiments were conducted using three different sizes of randomly generated data sets and several crossover operators. Specifically, the order crossover and crossovers based on random keys representation are evaluated. The reported results show that the proposed approach guarantees efficient grouping of students. In addition, comparisons with existing GA-based approaches confirm the ability of the proposed approach to provide greater intergroup and intragroup homogeneity. Moreover, the uniform crossover based on random keys representation ensures better grouping quality than the other crossover operators tested.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_19-A_Genetic_Algorithm_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Clustering Techniques in Data Mining: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111218</link>
        <id>10.14569/IJACSA.2020.0111218</id>
        <doi>10.14569/IJACSA.2020.0111218</doi>
        <lastModDate>2020-12-31T12:59:32.4000000+00:00</lastModDate>
        
        <creator>Muhammad Faizan</creator>
        
        <creator>Megat F. Zuhairi</creator>
        
        <creator>Shahrinaz Ismail</creator>
        
        <creator>Sara Sultan</creator>
        
        <subject>Clustering; data analysis; data mining; unsupervised learning; k-means; algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>In modern scientific research, data analysis is a popular tool across computer science, communication science, and biological science. Clustering plays a significant role in the composition of data analysis. Clustering, recognized as an essential issue of unsupervised learning, deals with the segmentation of the data structure in an unknown region and is the basis for further understanding. Among the many clustering algorithms (more than 100 are known), the K-means clustering algorithm is commonly used because of its simplicity and rapid convergence. This paper explains the different applications, literature, challenges, methodologies, and considerations of clustering methods, and the related key objectives for implementing clustering with big data. It also presents one of the most common clustering techniques for identifying data patterns by performing an analysis of sample data.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_18-Applications_of_Clustering_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Undergraduate Admission: A Case Study in Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111217</link>
        <id>10.14569/IJACSA.2020.0111217</id>
        <doi>10.14569/IJACSA.2020.0111217</doi>
        <lastModDate>2020-12-31T12:59:32.3700000+00:00</lastModDate>
        
        <creator>Md. Protikuzzaman</creator>
        
        <creator>Mrinal Kanti Baowaly</creator>
        
        <creator>Maloy Kumar Devnath</creator>
        
        <creator>Bikash Chandra Singh</creator>
        
        <subject>Undergraduate admission; educational data mining; XGBoost; Light GBM; GBM; evaluation metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>University admission tests assess an applicant&#39;s eligibility for admission to the desired university. Nowadays, there is huge competition in university admission tests, and failure in an admission test can leave an examinee depressed. This paper proposes a method that predicts undergraduate admission to universities. It can help students improve their preparation to gain a place at their desired university. Many factors are responsible for failure or success in an admission test, and educational data mining helps us analyze and extract information from these factors. Here, the authors apply three machine learning algorithms (XGBoost, LightGBM, and GBM) to a collected dataset to estimate the probability of gaining admission to the university, either before or after attending the admission test. They also evaluate and compare the performance of these three algorithms based on two evaluation metrics: accuracy and F1 score. Furthermore, the authors explore the important factors that influence the prediction of undergraduate admission.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_17-Predicting_Undergraduate_Admission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Convolutional Neural Network using Hu’s Moments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111216</link>
        <id>10.14569/IJACSA.2020.0111216</id>
        <doi>10.14569/IJACSA.2020.0111216</doi>
        <lastModDate>2020-12-31T12:59:32.3530000+00:00</lastModDate>
        
        <creator>Sanad AbuRass</creator>
        
        <creator>Ammar Huneiti</creator>
        
        <creator>Mohammad Belal Al-Zoubi</creator>
        
        <subject>CNN; image transformations; invariant; Hu’s moments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Convolutional Neural Networks (CNNs) are a powerful deep learning method mostly used in image classification and image recognition applications. They have achieved acceptable accuracy in these fields but still suffer from some limitations. One limitation of CNNs is their lack of invariance to transformations of the input data, such as rotation, scaling, and skewness. In this paper we present an approach that uses Hu’s moments to optimize CNNs and mitigate this invariance limitation. The Hu’s moments of an image are weighted averages of the image’s pixel intensities, which produce statistics about the image, and these moments are invariant to image transformations. This means that even if some changes are made to the image, it will always produce almost the same moment values. The main idea behind the proposed approach is to extract the Hu’s moments of the image, concatenate them with the flattened vector, and feed the new vector to the fully connected layer. The experimental results show that acceptable loss, accuracy, precision, recall, and F1 score are achieved on three benchmark datasets: the MNIST handwritten digits dataset, the MNIST fashion dataset, and the CIFAR-10 dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_16-Enhancing_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Barriers and Possibilities with p-values towards Starting a New Postgraduate Computer and Engineering Programs at Najran University: A Cross-Sectional Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111215</link>
        <id>10.14569/IJACSA.2020.0111215</id>
        <doi>10.14569/IJACSA.2020.0111215</doi>
        <lastModDate>2020-12-31T12:59:32.3230000+00:00</lastModDate>
        
        <creator>Abdullah Alghamdi</creator>
        
        <subject>Postgraduate program; p-values; barriers; Najran university; English language; schedule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>A cross-sectional study was conducted to identify the barriers, and their possible solutions, to starting new postgraduate computer and engineering programs at Najran University (NU), Kingdom of Saudi Arabia. The study includes interviews and a survey consisting of 35 questions. The total number of participants was 363, most of whom were employees in the government and private sectors. IBM&#39;s Statistical Package for the Social Sciences (SPSS) version 22 was used to analyze the results, with the respective p-values calculated using the Pearson Chi-square test. The study reveals that 95.6% of participants want to pursue a graduate degree; however, only 46.9% can communicate in English academically. Among the respondents, about 42.1% had previously started a graduate program, but only 11% had completed it. The others could not continue their graduate programs because they were unable to attend classes, coming from a far distance, or faced inconvenient class schedules and times. Hence, questions were distributed among the participants to gather their opinions on possible solutions. The study reveals that participants from both the government and private sectors emphasized the importance of having online classes (69.56%) and a convenient course schedule and timing (95.65%). This study outlines the main barriers, possible solutions, and recommendations that may help higher education institutions and organizations start new graduate programs.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_15-Analyzing_the_Barriers_and_Possibilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Architecture of Intelligent Career Prediction System based on the Cognitive Technology for Producing Graduates to the Digital Manpower</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111214</link>
        <id>10.14569/IJACSA.2020.0111214</id>
        <doi>10.14569/IJACSA.2020.0111214</doi>
        <lastModDate>2020-12-31T12:59:32.3070000+00:00</lastModDate>
        
        <creator>Pongsaton Palee</creator>
        
        <creator>Panita Wannapiroon</creator>
        
        <creator>Prachyanun Nilsook</creator>
        
        <subject>Architecture of intelligent career prediction system; cognitive technology; producing graduates; digital manpower</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This documentary research aimed to design the architecture of an intelligent career prediction system based on cognitive technology for producing graduates for the digital manpower. The research methods were divided into three phases: Phase 1, synthesis of the components of the intelligent career prediction system; Phase 2, design of the system architecture based on cognitive technology; and Phase 3, assessment of the suitability of the architecture. The architecture of the intelligent career prediction system using cognitive technology can be divided into three parts: 1) the people involved in the architecture, comprising five groups of related persons: students, staff, teachers, digital enterprises, and system administrators; 2) the system architecture itself, consisting of four components (user management, prediction data management, the prediction management system, and the prediction display system) together with cloud computing; and 3) an assessment of the suitability of the architecture by nine experts in intelligent career prediction systems and cognitive technology. The statistics used in the research are the mean and standard deviation. The evaluation results showed that the developed architecture was highly suitable, with a combined mean of 4.54 and a standard deviation of 0.49.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_14-The_Architecture_of_Intelligent_Career_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recovering UML2 Sequence Diagrams from Execution Traces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111213</link>
        <id>10.14569/IJACSA.2020.0111213</id>
        <doi>10.14569/IJACSA.2020.0111213</doi>
        <lastModDate>2020-12-31T12:59:32.2770000+00:00</lastModDate>
        
        <creator>EL Mahi BOUZIANE</creator>
        
        <creator>Chafik BAIDADA</creator>
        
        <creator>Abdeslam JAKIMI</creator>
        
        <subject>Execution traces; Reverse engineering; UML2; Sequence Diagram; Colored Petri Nets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Reverse engineering is a proven and efficient technique for automatically generating UML2 models from object-oriented legacy systems with missing or obsolete documentation. Two techniques are used to perform reverse engineering: dynamic and static analysis. Dynamic analysis refers to collecting information while the system is running, whereas static analysis corresponds to inspecting the source code. Dynamic analysis is preferred over static analysis for extracting dynamic models that represent the behavior of a system, because of polymorphism and dynamic binding. In this paper, we present a new methodology that uses Colored Petri Nets (CPNs) to recover UML2 Sequence Diagrams (SDs). First, it generates execution traces corresponding to the different scenarios representing the system behavior. Then, CPNs are used to model and analyze these execution traces to extract the UML2 sequence diagram. Our case study illustrates the process of our approach and shows that sequence diagrams can be extracted with good accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_13-Recovering_UML2_Sequence_Diagrams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of Adopting e-Learning during COVID-19 at Hashemite University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111212</link>
        <id>10.14569/IJACSA.2020.0111212</id>
        <doi>10.14569/IJACSA.2020.0111212</doi>
        <lastModDate>2020-12-31T12:59:32.2430000+00:00</lastModDate>
        
        <creator>Alaa Obeidat</creator>
        
        <creator>Rana Obeidat</creator>
        
        <creator>Mohammed Al-Shalabi</creator>
        
        <subject>e-Learning; COVID19; classroom; ICT; Hashemite University; educational platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>e-Learning is the utilization of electronic technologies and media to deliver educational content to learners, enabling them to interact actively with the content, the teachers, and their peers. Students’ interaction can be synchronous, asynchronous, or a combination of both. One advantage of e-learning is that learners can access the educational content at any place and time, saving them effort, time, and cost. To deal with the unprecedented crisis of COVID-19 and the risk of virus transmission in public, the vast majority of higher education institutions globally were locked down, and the delivery of educational content moved from traditional classroom teaching to the internet. The purpose of this study was to assess students’ perceptions of the effectiveness of e-learning during the COVID-19 pandemic at the Hashemite University, Jordan. A total of 399 students completed the study’s online survey. The results showed that students’ overall evaluations of their e-learning experiences were generally positive. However, students reported facing problems in their e-learning experiences, most of which were related to technical issues (e.g., lack of a viable internet network, lack of laptops, etc.). Microsoft Teams was the platform most preferred by students for e-learning, and the majority of students accessed the educational content using smartphones. Only gender and a student’s academic specialty had significant associations with perceptions of the effectiveness of e-learning.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_12-The_Effectiveness_of_Adopting_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ice Concentration Estimation Method with Satellite based Microwave Radiometer by Means of Inversion Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111211</link>
        <id>10.14569/IJACSA.2020.0111211</id>
        <doi>10.14569/IJACSA.2020.0111211</doi>
        <lastModDate>2020-12-31T12:59:32.2300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Ice concentration; Microwave radiometer; Inversion Theory; Comiso’s Bootstrap algorithm; The Special Sensor Microwave Imager (SSM/I); Japanese Earth Resources Satellite: JERS-1; Synthetic Aperture Radar: SAR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>An ice concentration estimation method for satellite-based microwave radiometers by means of inversion theory is proposed. Experiments show that the proposed method is superior to the existing methods, the NASA Team algorithm and Comiso&#39;s Bootstrap algorithm, with up to 45% improvement in ice concentration estimation accuracy in a simulation study. In addition, an improvement of 1.5 to 2.1% over the NASA Team and Comiso&#39;s Bootstrap algorithms was achieved for actual Special Sensor Microwave Imager (SSM/I) data of Okhotsk, using Japanese Earth Resources Satellite (JERS-1) Synthetic Aperture Radar (SAR) data as truth data for estimating ice concentration.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_11-Ice_Concentration_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Approach for Bone Diagnosis Classification in Ultrasonic Computed Tomographic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111210</link>
        <id>10.14569/IJACSA.2020.0111210</id>
        <doi>10.14569/IJACSA.2020.0111210</doi>
        <lastModDate>2020-12-31T12:59:32.1970000+00:00</lastModDate>
        
        <creator>Marwa Fradi</creator>
        
        <creator>Mouna Afif</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>USCT; Inception-V3; MobileNet; AmoebaNet-V2; classification; accuracy; transfer deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Artificial intelligence (AI) in medical imaging has developed into a technology for obtaining diagnoses automatically, especially in the ultrasonic imaging area. In this light, two types of neural network algorithms have been developed to automatically classify Ultrasonic Computed Tomographic (USCT) images into three categories: healthy, fractured, and osteoporotic bone USCT images. In the first step of this work, a classifier system based on two Convolutional Neural Network (CNN) models (Inception-V3 and MobileNet) is proposed. In the second step, an evolutionary neural network based on the AmoebaNet model is proposed for USCT image classification. The results achieve 100% train accuracy, and test accuracies of 96%, 91.7%, and 87.5% using AmoebaNet, Inception-V3, and MobileNet, respectively. The results outperform the state of the art and prove the robustness of the proposed classifier system, with a short processing time owing to its implementation on GPU.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_10-Deep_Learning_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prospects and Challenges of Learning Management Systems in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111209</link>
        <id>10.14569/IJACSA.2020.0111209</id>
        <doi>10.14569/IJACSA.2020.0111209</doi>
        <lastModDate>2020-12-31T12:59:32.1830000+00:00</lastModDate>
        
        <creator>Ahmed Al-Hunaiyyan</creator>
        
        <creator>Salah Al-Sharhan</creator>
        
        <creator>Rana AlHajri</creator>
        
        <subject>Learning Management Systems (LMS); e-learning; Information Communication Technology (ICT); Higher Education (HE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Many higher education institutions nowadays are equipped with Learning Management Systems (LMS) to provide rich online learning solutions and utilize their functions and capabilities to improve learning practices. The current study aims to gain instructors’ perspectives on LMS, investigate the use of its functions, and identify the barriers that may influence LMS utilization at the Gulf University for Science and Technology (GUST). This research examines current practices, opinions, and challenges to help academicians and system developers contribute to better learning practices and academic achievement. The study used a quantitative method with a sample of 58 faculty members. Findings obtained from the questionnaire indicated that instructors were generally comfortable with and had positive perceptions of the Moodle LMS. The results revealed that the LMS&#39;s administrative functions, such as files and announcements, are widely used compared to the advanced interactive learning activities. Moreover, the use of the LMS on mobile devices is infrequent, and more emphasis must be placed on providing user-friendly LMS interfaces that make all tools and functions accessible.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_9-Prospects_and_Challenges_of_Learning_Management_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Can Model Checking Assure, Distributed Autonomous Systems Agree? An Urban Air Mobility Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111208</link>
        <id>10.14569/IJACSA.2020.0111208</id>
        <doi>10.14569/IJACSA.2020.0111208</doi>
        <lastModDate>2020-12-31T12:59:32.1500000+00:00</lastModDate>
        
        <creator>Anubhav Gupta</creator>
        
        <creator>Siddhartha Bhattacharyya</creator>
        
        <creator>S. Vadivel</creator>
        
        <subject>Formal methods; autonomous systems; distributed algorithms; assurance for distributed protocols; distributed protocol modeling and verification; distributed autonomous systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Advancements in artificial intelligence, the internet of things, and information technology have enabled the delegation of autonomous services to autonomous systems for civil applications. It is envisioned that, with an increase in the demand for autonomous systems, the decision making associated with the execution of autonomous services will be distributed, with some of the responsibility for decision making shifted to the autonomous systems. Thus, it is of utmost importance that we assure the correctness of the distributed protocols that multiple autonomous systems will follow as they interact with each other in providing the service. Towards this end, we discuss our proposed framework to model, analyze, and assure the correctness of distributed protocols executed by autonomous systems to provide a service. We demonstrate our approach by formally modeling the behavior of autonomous systems that will be involved in providing services in the Urban Air Mobility framework, which enables air taxis to transport passengers.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_8-Can_Model_Checking_Assure_Distributed_Autonomous_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Concurrent Detection of Linear and Angular Motion using a Single-Mass 6-axis Piezoelectric IMU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111207</link>
        <id>10.14569/IJACSA.2020.0111207</id>
        <doi>10.14569/IJACSA.2020.0111207</doi>
        <lastModDate>2020-12-31T12:59:32.1330000+00:00</lastModDate>
        
        <creator>Hela Almabrouk</creator>
        
        <creator>Mohamed Hadj Said</creator>
        
        <creator>Fares Tounsi</creator>
        
        <creator>Brahim Mezghani</creator>
        
        <creator>Guillaume Agnus</creator>
        
        <creator>Yves Bernard</creator>
        
        <subject>Inertial measurement unit; piezoelectric detection; angular rate; linear acceleration; electronic circuitry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper presents the operating principle and performance of a novel single-mass 6-axis Inertial Measurement Unit (IMU) using piezoelectric detection. Electronic processing circuitry for the concurrent detection of linear and angular motion is proposed. The IMU structure is based on two rings, connected with eight electrodes, implemented on top of a piezoelectric membrane used for both the sense and drive modes. The four inner electrodes are used to detect the motion components via the direct piezoelectric effect, while the outer electrodes generate the drive mode via the reverse piezoelectric effect. Through finite element analysis, we show that linear accelerations generate an offset voltage on the sensing electrodes, while angular rates change the amplitude of the initial AC signal produced by the drive mode. The present work represents an innovative design able to separate six motion components from signals using only four electrodes. The dedicated electronic circuitry for dissociating acceleration and angular rate data provides a very efficient method for signal separation, since no readout leakage occurs on any of the six axes. Of particular interest is that under no circumstances do the angular outputs disturb or affect the acceleration outputs, and vice versa. The evaluated sensitivities are 364 mV/g and 65.5 mV/g for in-plane and out-of-plane linear accelerations, respectively; similarly, the angular rate sensitivities are 2.59 mV/rad/s and 522 mV/rad/s.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_7-Concurrent_Detection_of_Linear_and_Angular_Motion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Mining Robot for Mars Exploitation: NASA Robotics Mining Competition (RMC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111205</link>
        <id>10.14569/IJACSA.2020.0111205</id>
        <doi>10.14569/IJACSA.2020.0111205</doi>
        <lastModDate>2020-12-31T12:59:32.1030000+00:00</lastModDate>
        
        <creator>Tariq Tashtoush</creator>
        
        <creator>Agustin Velazquez</creator>
        
        <creator>Andres Aranguren</creator>
        
        <creator>Cristian Cavazos</creator>
        
        <creator>David Reyes</creator>
        
        <creator>Edgar Hernandez</creator>
        
        <creator>Emily Bueno</creator>
        
        <creator>Esteban Otero</creator>
        
        <creator>Gerardo Zamudio</creator>
        
        <creator>Hector Casarez</creator>
        
        <creator>Jorge Rullan</creator>
        
        <creator>Jose Rodriguez</creator>
        
        <creator>Juan Carlos Villarreal</creator>
        
        <creator>Michael Gutierrez</creator>
        
        <creator>Patricio Rodriguez</creator>
        
        <creator>Roberto Torres</creator>
        
        <creator>Rosaura Martinez</creator>
        
        <creator>Sanjuana Partida</creator>
        
        <subject>NASA Robotics Mining competition; mining robot; ice regolith; autonomous; NASA space exploration; systems life-cycle; mechanical structure design; control system; systems engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper presents the design and build stages, and the effort of the Systems Engineering student team (DustyTRON NASA Robotics), in developing a mining robot for the 2016 National Aeronautics &amp; Space Administration (NASA) Robotics Mining Competition (RMC). The objective of the NASA RMC challenge is to encourage engineering students to design and build a robot that can excavate, collect, and deposit a simulated Martian regolith. Mining water ice and regolith is an essential task for space missions and resource utilization, as they contain many elements such as metals, minerals, and other compounds. Mining will allow propellants such as oxygen and hydrogen to be extracted from the regolith and used as an energy source for in-space transportation. In addition, the space mining system can be used in tasks important for human and robotic scientific investigations. The DustyTRON team consists of Systems Engineering students divided into 1) hardware design, 2) electrical circuitry, and 3) software development sub-teams. Each sub-team worked in harmony to overcome challenges previously experienced, such as heavy weight, circuitry layout design, autonomous and user-controlled modes, and a better software interface. They designed and built a remote-controlled excavator robot that can collect and deposit a minimum of ten (10) kilograms of regolith simulant within 15 minutes. The developed robot, with its innovative mining mechanisms, control system, and software, will assist NASA in enhancing the current methodologies used for space and planetary exploration and resource mining, especially on the Moon and Mars. NASA’s ongoing project aims to send exploration robots that collect resources for analysis before sending astronauts. In 2016, only 56 United States (US) teams were invited to participate; DustyTRON was one of three university teams from the state of Texas and placed 16th in overall performance. This paper addresses the full engineering life-cycle process, including research, concept design and development, construction of the robot, and system closeout, by delivering the team’s robot for the competition at Kennedy Space Center in Florida.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_5-Developing_a_Mining_Robot_for_Mars_Exploitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context Classification based on Mixing Ratio Estimation by Means of Inversion Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111206</link>
        <id>10.14569/IJACSA.2020.0111206</id>
        <doi>10.14569/IJACSA.2020.0111206</doi>
        <lastModDate>2020-12-31T12:59:32.1030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Search engine; fuzzy expression; knowledge base system; membership function; mixed pixel: Mixel; context information; inverse problem solving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>A contextual image classification method with proportion estimation for pixels composed of several classes, mixed pixels (Mixels), is proposed. The method allows us to check the connectivity of separated road segments, which are frequently observed as discontinuities of roads in satellite remote sensing imagery. Under the assumption of almost identical proportions for the Mixels in the discontinuous portion of road segments, a proportion estimation method utilizing inverse problem solving is proposed. Experimental results with simulation data including observation noise show improvements of 73.5~98.8% in proportion estimation accuracy (Root Mean Square (RMS) error), compared to the results from the previously proposed method with a generalized inverse matrix. The usefulness of contextual classification based on the proposed proportion estimation was also confirmed for investigating the connectivity of roads in remotely sensed images from space.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_6-Context_Classification_based_on_Mixing_Ratio_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptanalysis and Countermeasure of “An Efficient NTRU-based Authentication Protocol in IoT Environment”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111204</link>
        <id>10.14569/IJACSA.2020.0111204</id>
        <doi>10.14569/IJACSA.2020.0111204</doi>
        <lastModDate>2020-12-31T12:59:32.0870000+00:00</lastModDate>
        
        <creator>YoHan Park</creator>
        
        <creator>Woojin Seok</creator>
        
        <creator>Wonhyuk Lee</creator>
        
        <creator>Hong Taek Ju</creator>
        
        <subject>Post-quantum; NTRU; biometrics; user authentication; key agreement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>A quantum computer is a paradigm of information processing that promises exponentially improved information processing capabilities. However, this paradigm could disrupt current cryptosystems by solving the factoring problem and the discrete logarithm problem, in what are called quantum computing attacks. Recently, NTRU has been applied to various security systems because it provides security against quantum computing attacks. Furthermore, NTRU provides a similar security level and efficient encryption/decryption computation time compared to traditional PKC. In 2018, Jeong et al. proposed a user authentication and key distribution scheme using NTRU. They claimed that their scheme provides various security properties and is secure against quantum computing attacks. In this paper, we demonstrate that their scheme has security pitfalls and incorrectness in the login and authentication phases. We also suggest countermeasures to fix the incorrectness and provide security against various attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_4-Cryptanalysis_and_Countermeasure_of_An_Efficient_NTRU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Energy Consumption in Microcontroller-based Systems with Multipipeline Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111203</link>
        <id>10.14569/IJACSA.2020.0111203</id>
        <doi>10.14569/IJACSA.2020.0111203</doi>
        <lastModDate>2020-12-31T12:59:32.0730000+00:00</lastModDate>
        
        <creator>Cristian Andy Tanase</creator>
        
        <subject>Multi-pipeline register; RISC V (Reduced Instruction Set Computer); power consumption; multi-threading; FPGA (Field Programmable Gate Array); variable frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>Current mobile battery-powered systems require power consumption as low as possible without affecting the overall performance of the system. The purpose of this article is to present a multi-pipeline architecture implemented on a RISC V processor with a 4-level pipeline. Each thread has an assigned CLKSCALE register that allows it to use a clock with a lower or higher frequency, depending on the value written in the CLKSCALE register. Depending on its importance and the need to be executed at a lower or higher speed, each thread enters execution with the frequency given by its CLKSCALE. It is known that each system has its own “real time”. The notion of real time is very relative, depending on the environment in which the system operates. Thus, if the system responds to external stimuli within a time that does not affect the operation of the whole, then we say the system operates in real time. The system response can be quick or slow; it is important that this response does not lead to a malfunction in operation. Therefore, certain threads can work at lower frequencies (those responding to slower external stimuli) while others must operate at high frequencies to allow a quick response to fast external stimuli. It is known that the power consumed is directly proportional to the computing frequency. Thus, threads that do not need to run at maximum frequency will consume less energy when they run, and the entire system will consume less energy without affecting its performance. This architecture was implemented on a Xilinx FPGA ARTY A7 kit using the Vivado 2018.3 development tools.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_3-Reducing_Energy_Consumption_in_Microcontroller_based_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Teaching Operating Systems using Two Different Teaching Modalities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111202</link>
        <id>10.14569/IJACSA.2020.0111202</id>
        <doi>10.14569/IJACSA.2020.0111202</doi>
        <lastModDate>2020-12-31T12:59:31.9930000+00:00</lastModDate>
        
        <creator>Ingrid A. Buckley</creator>
        
        <subject>Operating systems; synchronous online course; traditional course; face-to-face course; online course</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This paper presents a preliminary look at the performance of two cohorts enrolled in an Operating Systems course that was taught using two different teaching delivery methods. Operating Systems is a technical, senior-level, undergraduate course that includes abstract concepts, mechanisms, and their implementations. This course exposes students to a UNIX-based operating system and includes concurrent programming (threads and synchronization), inter-process communication, CPU scheduling, main memory, and virtual memory management. Technical courses present an additional dimension of difficulty when compared to non-technical courses, which are more focused on soft skills, because they require strong technical skills such as programming and problem-solving. This paper discusses other research studies and statistical data which underscore some of the challenges and differences encountered when teaching a traditional face-to-face course versus an online course and the impact on student success. In this work, the 2019 cohort was taught Operating Systems in the traditional face-to-face modality, while the 2020 cohort was taught the course using the synchronous online modality. The synchronous online modality is very similar to the traditional face-to-face class in that lectures are delivered in real time, which allows students to ask the instructor questions in real time. Each cohort was tested on the same course objectives (topics) over one semester in 2019 and 2020. The instructor presents the students’ performance on three (3) course exams and discusses the differences and similarities in the overall performance of the two groups.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_2-The_Impact_of_Teaching_Operating_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance based Comparison between Several Link Prediction Methods on Various Social Networking Datasets (Including Two New Methods)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111201</link>
        <id>10.14569/IJACSA.2020.0111201</id>
        <doi>10.14569/IJACSA.2020.0111201</doi>
        <lastModDate>2020-12-31T12:59:31.8700000+00:00</lastModDate>
        
        <creator>Ahmad Rawashdeh</creator>
        
        <subject>Social networks; link prediction; comparison; experiment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(12), 2020</description>
        <description>This work extends my previous work on link prediction in social networks. In this research, I used two additional datasets, the Twitter dataset and the Facebook Social Circles dataset, and ran link prediction methods on them. In my previous work, I performed an experiment on the Facebook dataset and proposed two new link prediction methods: Neighbors Connectivity and Common Neighbors of Neighbors (CNN). As in my previous work, I ran the link prediction methods for several training and testing sizes. Results showed that for the Facebook dataset, random had the highest precision, followed by Neighbors Connectivity, then Preferential Attachment, followed by Jaccard/CC, Adamic-Adar, and finally CNN. For the Twitter dataset, random achieved the highest precision, Preferential Attachment achieved the next highest precision, and Adamic-Adar achieved the lowest precision. For the Facebook Social Circles dataset, Preferential Attachment achieved the highest precision of 1.08891, followed by random, for training and testing sizes of (1535, 2504) respectively. That said, there were slight variations in the orderings depending on the training and testing sizes. The low precision values achieved with the Facebook and Twitter datasets are due to the graph types, which are sparse as indicated on the dataset websites, confirming Kleinberg’s finding.</description>
        <description>http://thesai.org/Downloads/Volume11No12/Paper_1-Performance_based_Comparison_between_Several_Link.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Image Spam Classification Research Trends: Survey and Open Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111196</link>
        <id>10.14569/IJACSA.2020.0111196</id>
        <doi>10.14569/IJACSA.2020.0111196</doi>
        <lastModDate>2020-12-04T12:22:41.5900000+00:00</lastModDate>
        
        <creator>Ovye John Abari</creator>
        
        <creator>Nor Fazlida Mohd Sani</creator>
        
        <creator>Fatimah Khalid</creator>
        
        <creator>Mohd Yunus Bin Sharum</creator>
        
        <creator>Noor Afiza Mohd Ariffin</creator>
        
        <subject>Phishing; spam; image spam classification; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>A phishing email is an attack focused completely on people in order to circumvent existing traditional security algorithms. Email appears to be a dependable, appropriate, and solid communication medium for internet users. At present, email is submerged with spam content, both in text-based form and as undesired text planted inside images. This study reviews articles on phishing image spam classification published from 2006 to 2020 based on spam classification application domains, datasets, feature sets, spam classification methods, and the measurement metrics adopted in the existing studies. More than 50 articles, from both the Web of Science and Scopus databases, were selected. To achieve the study’s target, we carried out a broad survey and analysis to identify the domains where spam classification was applied. Furthermore, several public datasets, feature sets, classification methods, and measurement metrics are found, and the popular ones are pinpointed. The study revealed that the Personal Collection, Dredze, and Spam Archive datasets are the most commonly used datasets in image spam classification research. Low-level and image metadata are the most widely used feature sets. The methods of image spam classification identified in this study are supervised machine learning, unsupervised machine learning, semi-supervised machine learning, content-based, and statistical learning. Among these methods, the most commonly utilized is the Support Vector Machine (SVM), which falls under supervised machine learning, followed by Naïve Bayes and K-Nearest Neighbor. The commonly adopted metrics for the performance evaluation of existing image spam classifiers are also identified and briefly discussed. We compared the performance of state-of-the-art image spam models. Lastly, we point out promising directions for future research.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_96-Phishing_Image_Spam_Classification_Research_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dense Dilated Inception Network for Medical Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111195</link>
        <id>10.14569/IJACSA.2020.0111195</id>
        <doi>10.14569/IJACSA.2020.0111195</doi>
        <lastModDate>2020-12-04T12:22:41.5130000+00:00</lastModDate>
        
        <creator>Surayya Ado Bala</creator>
        
        <creator>Shri Kant</creator>
        
        <subject>Deep learning; Dense-Net; inception network; medical image segmentation; U-Net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In recent years, various encoder-decoder-based U-Net architectures have shown remarkable performance in medical image segmentation. However, these encoder-decoder U-Nets have a drawback in learning multi-scale features in complex segmentation tasks and a weak ability to generalize to other tasks. This paper proposes a generalized encoder-decoder model called the dense dilated inception network (DDI-Net) for medical image segmentation, obtained by modifying the U-Net architecture. We utilize three steps: first, we propose a dense path to replace the skip connection between the encoder and decoder to make the model deeper. Second, we replace the U-Net&#39;s basic convolution blocks with a modified inception module called the multi-scale dilated inception module (MDI) to make the model wider, without vanishing gradients and with fewer parameters. Third, data augmentation and normalization are applied to the training data to improve model generalization. We evaluated the proposed model on three subtasks of the Medical Segmentation Decathlon challenge. The experimental results show that DDI-Net achieves superior performance compared to the other methods, with Dice scores of 0.82, 0.68, and 0.79 in brain tumor segmentation for edema, non-enhancing, and enhancing tumor, respectively. For the hippocampus segmentation, the results reach 0.92 and 0.90 for anterior and posterior, respectively. For the heart segmentation, the method achieves 0.95 for the left atrium.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_95-Dense_Dilated_Inception_Network_for_Medical_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Threat Modeling Methods for Cloud Computing towards Healthcare Security Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111194</link>
        <id>10.14569/IJACSA.2020.0111194</id>
        <doi>10.14569/IJACSA.2020.0111194</doi>
        <lastModDate>2020-12-01T09:57:25.2100000+00:00</lastModDate>
        
        <creator>Prosper K. Yeng</creator>
        
        <creator>Stephen D. Wulthusen</creator>
        
        <creator>Bian Yang</creator>
        
        <subject>Cloud computing; healthcare; threat modelling; security practice; data privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Healthcare organizations carry out unique activities, including collaboration on patient care and emergency care. The sector also accumulates highly sensitive, multifaceted patient data such as text reports, radiology images, and pathology slides. This large volume of data is often stored as Electronic Health Records (EHR), which must be frequently updated while ensuring a high percentage of up-time for constant availability of patients’ records. Healthcare, as a critical infrastructure, also needs highly skilled IT personnel, Information and Communication Technology (ICT), and infrastructure with a regular maintenance culture. Fortunately, cloud computing can provide these necessary services at a lower cost. But with all these enormous benefits, cloud computing is characterized by various information security issues, which are not enticing to healthcare. Amid many threat modelling methods, which of them is suitable for identifying cloud-related threats towards the adoption of cloud computing for healthcare? This paper compared threat modelling methods to determine their suitability for identifying and managing healthcare-related threats in cloud computing. Threat modelling in pervasive computing (TMP) was identified as suitable and can be combined with Attack Tree (AT), Attack Graph (AG), and Practical Threat Analysis (PTA) or STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege). Also, Attack Tree (AT) could be complemented with TMP, AG, and STRIDE or PTA. Healthcare IT security professionals can hence rely on these methods in their security practices to identify cloud-related threats for healthcare. Essentially, privacy-related threat modeling methods, such as the LINDDUN framework, need to be included in this synergy of cloud-related threat modelling methods towards enhancing security and privacy for healthcare needs.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_94-Comparative_Analysis_of_Threat_Modeling_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Augmentation using Generative Adversarial Network for Gastrointestinal Parasite Microscopy Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111193</link>
        <id>10.14569/IJACSA.2020.0111193</id>
        <doi>10.14569/IJACSA.2020.0111193</doi>
        <lastModDate>2020-12-01T09:57:25.2100000+00:00</lastModDate>
        
        <creator>Mila Yoselyn Pacompia Machaca</creator>
        
        <creator>Milagros Lizet Mayta Rosas</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Henry Abraham Talavera Diaz</creator>
        
        <creator>Victor Luis Vasquez Huerta</creator>
        
        <subject>Generative Adversarial Network (GAN); Deep Convolutional Generative Adversarial Network (DCGAN); gastrointestinal parasites; classification; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Gastrointestinal parasitic diseases represent a latent problem in developing countries. It is necessary to create support tools for the medical diagnosis of these diseases and to automate tasks such as the classification of microscope samples of the causative parasites using methods like deep learning. However, these methods require large amounts of data, and collecting these images currently involves a complex procedure, significant consumption of resources, and long periods of time. It is therefore necessary to propose a computational solution to this problem. In this work, an approach for generating sets of synthetic images of 8 species of parasites is presented, using Deep Convolutional Generative Adversarial Networks (DCGAN). Also, in search of better results, image enhancement techniques were applied. These synthetic datasets (SD) were evaluated in a series of combinations with the real datasets (RD) on the classification task, where the highest accuracy was obtained with the pre-trained Resnet50 model (99.2%), showing that augmenting the RD with SD obtained from DCGAN helps to achieve greater accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_93-Data_Augmentation_using_Generative_Adversarial_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Band Selection Approach for Hyperspectral Image Classification using the Kolmogorov Variational Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111192</link>
        <id>10.14569/IJACSA.2020.0111192</id>
        <doi>10.14569/IJACSA.2020.0111192</doi>
        <lastModDate>2020-12-01T09:57:25.1770000+00:00</lastModDate>
        
        <creator>Mohammed LAHLIMI</creator>
        
        <creator>Mounir Ait KERROUM</creator>
        
        <creator>Youssef FAKHRI</creator>
        
        <subject>Band selection; Bayes Information Criterion (BIC); Bhattacharyya Distance; divergence distance; hyperspectral imaging; Kolmogorov Variational Distance; Gaussian Mixture Model (GMM); Robust Expectation Maximization (REM); remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper, we introduce a novel band selection approach based on the Kolmogorov Variational Distance (KoVD) for hyperspectral image classification. The main reason we take an interest in KoVD is its unique relation to the classification error. Our previous works on band selection using Mutual Information (MI), the Divergence Distance (DD), or the Bhattacharyya Distance (BD) inspire this study; thus, we are particularly interested in finding out how KoVD performs against these distances in terms of the number of bands retained and the classification accuracy. All the distances in this study are modeled with the Gaussian Mixture Model (GMM) using the Bayes Information Criterion (BIC) / Robust Expectation-Maximization (REM). The experiments are carried out on four benchmark hyperspectral images: Kennedy Space Center, Salinas, Botswana, and Indian Pines (92AV3C). The results show that band selection based on the Kolmogorov Variational Distance performs better than BD and DD, while against MI the results were very close.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_92-A_Novel_BS_Approach_for_HI_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hindustani or Hindi vs. Urdu: A Computational Approach for the Exploration of Similarities Under Phonetic Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111191</link>
        <id>10.14569/IJACSA.2020.0111191</id>
        <doi>10.14569/IJACSA.2020.0111191</doi>
        <lastModDate>2020-12-01T09:57:25.1630000+00:00</lastModDate>
        
        <creator>Muhammad Suffian Nizami</creator>
        
        <creator>Tafseer Ahmed</creator>
        
        <creator>Muhammad Yaseen Khan</creator>
        
        <subject>Lexical Similarity; Urdu; Hindi; Edit Distance; Phonetics; Natural Language Processing; Computational Linguistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Semantic coexistence is a reason people adopt the language spoken by others. In such human habitats, different languages share words, typically known as loan words, which appear not only as the principal medium of enriching a language’s vocabulary but also create mutual influence, building stronger relationships and forming multilingualism. In this context, the spoken words are usually common but their writing scripts vary, or the language may have become a digraphia. In this paper, we present the similarities and relatedness between Hindi and Urdu (which are mutually intelligible and major languages of the Indian subcontinent). In general, the method modifies edit distance; it works by using articulatory features from the International Phonetic Alphabet (IPA), instead of the alphabets of the words, to compute a phonetic edit distance. This paper also shows the results of the method for the consonants of the two languages, which quantify the evidence that Urdu and Hindi are 67.8% similar on average despite the script differences.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_91-Hindustani_or_Hindi_vs._Urdu_A_Computational_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of the CatBoost Classifier with other Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111190</link>
        <id>10.14569/IJACSA.2020.0111190</id>
        <doi>10.14569/IJACSA.2020.0111190</doi>
        <lastModDate>2020-12-01T09:57:25.1300000+00:00</lastModDate>
        
        <creator>Abdullahi A. Ibrahim</creator>
        
        <creator>Raheem L. Ridwan</creator>
        
        <creator>Muhammed M. Muhammed</creator>
        
        <creator>Rabiat O. Abdulaziz</creator>
        
        <creator>Ganiyu A. Saheed</creator>
        
        <subject>Machine learning algorithms; data science; CatBoost; loan approvals; staff promotion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Machine learning and data-driven techniques have become very popular and significant in several areas in recent times. In this paper, we discuss the performance of some machine learning methods, with a focus on the CatBoost classifier algorithm, on both loan approval and staff promotion tasks. We compared the algorithm’s performance with that of other classifiers. After some feature engineering on both datasets, the CatBoost algorithm outperforms the other classifiers implemented in this paper. In the first analysis, features such as loan amount, loan type, applicant income, and loan purpose are major factors in predicting mortgage loan approvals. In the second analysis, features such as division, foreign schooling, geopolitical zone, qualification, and working years had a high impact on staff promotion. Hence, based on the performance of CatBoost in both analyses, we recommend this algorithm for better prediction of loan approvals and staff promotion.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_90-Comparison_of_the_CatBoost_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Linear Control Strategies for Attitude Maneuvers in a CubeSat with Three Reaction Wheels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111189</link>
        <id>10.14569/IJACSA.2020.0111189</id>
        <doi>10.14569/IJACSA.2020.0111189</doi>
        <lastModDate>2020-12-01T09:57:25.1170000+00:00</lastModDate>
        
        <creator>Brayan Espinoza Garcia</creator>
        
        <creator>Ayrton Martin Yanyachi</creator>
        
        <creator>Pablo Raul Yanyachi</creator>
        
        <subject>Attitude control; attitude maneuvers; adaptive control; feedback control; CubeSat; Quaternions; reaction wheels; comparison</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The development of nanosatellites under the CubeSat standard allows students and professionals to get involved in aerospace technology. In nanosatellites, attitude plays an important role since they can be affected by various disturbances such as the gravity gradient and solar radiation. These disturbances generate a torque in the system that must be corrected in order to maintain the CubeSat’s behavior. In this article, the kinematic and dynamic equations applied to a CubeSat with three reaction wheels are presented. In order to provide a solution to the attitude maneuvering problem, three robust control laws developed by Boskovic, Dando, and Chen are presented and evaluated. Furthermore, these laws are compared with a feedback control law developed by Schaub and modified to use Quaternions. The simulated system was subjected to disturbances caused by a gravity gradient torque and misalignments in the reaction wheels. The effectiveness of each law is determined using the Average of the Square of the Commanded Control Torque (ASCCT), the Error Euler Angle Integration (EULERINT), the settling time, the estimated computational cost (O), and the steady-state error (ess).</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_89-Non_Linear_Control_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anti-Molestation: An IoT based Device for Women’s Self-Security System to Avoid Unlawful Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111188</link>
        <id>10.14569/IJACSA.2020.0111188</id>
        <doi>10.14569/IJACSA.2020.0111188</doi>
        <lastModDate>2020-12-01T09:57:25.0830000+00:00</lastModDate>
        
        <creator>Md. Imtiaz Hanif</creator>
        
        <creator>Shakil Ahmed</creator>
        
        <creator>Wahiduzzaman Akanda</creator>
        
        <creator>Shohag Barman</creator>
        
        <subject>Anti-rape; IoT device; smart-safety device; women safety; wearable device; GSM/GPRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Nowadays, the public, mostly women and children, face much harassment in society. Unlawful activities against women and children have been increasing significantly, and we regularly hear about eve-teasing, sexual assault cases, and attempts at molestation or even killing after rape in public places and open areas. Also, many cases have gone unpunished due to insufficient evidence. In Bangladesh, the current statistics on sexual assaults and various unlawful activities are proliferating. To address these problems, in this paper, we have designed an IoT-based (Internet of Things) embedded device that is able to communicate with the law enforcement agency by dialing “999” (an emergency telephone number in Bangladesh) on demand. The device contains an Arduino Pro-Mini microcontroller with a GSM (Global System for Mobile communication) module and can send SMS (short message service) messages with the victim’s present location to the law enforcement agency and relatives via GPRS (General Packet Radio Services). The proposed device’s form factor is tiny enough to carry easily anywhere and at any time. The device features “Plug &amp; Play” functionality, meaning one button operates the entire device. Also, the device is cost-effective so that people at every level can afford it at a reasonable price.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_88-Anti_Molestation_an_IoT_based_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Mobile Application for the Learning of People with Down Syndrome through Interactive Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111187</link>
        <id>10.14569/IJACSA.2020.0111187</id>
        <doi>10.14569/IJACSA.2020.0111187</doi>
        <lastModDate>2020-12-01T09:57:25.0700000+00:00</lastModDate>
        
        <creator>Richard Arias-Marreros</creator>
        
        <creator>Keyla Nalvarte-Dionisio</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Application of games; cognitive disabilities; Down’s Syndrome; scrum methodology; Troncoso Method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This research work focuses on people with Down syndrome, the most common genetic disorder worldwide; these people have cognitive and visual-motor disabilities, yet the Peruvian state does not use even 1% of the budget allocated to their educational sector, so without receiving an education they cannot develop their skills. Therefore, a prototype of a mobile application was designed for the learning of people with Down syndrome through interactive games, implementing the Scrum methodology for the development of the application prototype and using the Troncoso method for teaching reading and writing, to which the teaching of visual-motor coordination was added. For the design of the prototype, the Balsamiq tool was used because it was the most appropriate, and thus the objective of developing the prototype of the application was achieved. The result is that people with Down syndrome can read or write basic words and differentiate the hemispheres of their body through unlimited attempts at the exercises in each level or type of learning. In this way, with the teaching received, these people will have a better quality of life, being able to integrate into society and be more independent when performing daily activities.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_87-Design_of_a_Mobile_Application_for_the_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering-Based Hybrid Approach for Multivariate Missing Data Imputation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111186</link>
        <id>10.14569/IJACSA.2020.0111186</id>
        <doi>10.14569/IJACSA.2020.0111186</doi>
        <lastModDate>2020-12-01T09:57:25.0370000+00:00</lastModDate>
        
        <creator>Aditya Dubey</creator>
        
        <creator>Akhtar Rasool</creator>
        
        <subject>Clustering; imputation; KNN; missing at random; multivariate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In the era of big data, a significant amount of data is produced in many application areas. However, due to various reasons including sensor failures, communication failures, environmental disruptions, and human errors, missing values occur frequently. These missing data pose a challenge for other data mining approaches, requiring the missing data to be handled at the preprocessing stage of data mining. Several approaches for handling missing data have been proposed in the past. These approaches consider the whole dataset when making a prediction, making the whole imputation approach cumbersome. This paper proposes a procedure that makes use of the local similarity structure of the dataset for imputation. The K-means clustering technique along with weighted KNN provides efficient imputation of the missing values. The results are compared against imputations by mean substitution and Fuzzy C-Means (FCM). The proposed imputation technique performs better than the other imputation procedures.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_86-Clustering_Based_Hybrid_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality Electronic Glasses Prototype to Improve Vision in Older Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111185</link>
        <id>10.14569/IJACSA.2020.0111185</id>
        <doi>10.14569/IJACSA.2020.0111185</doi>
        <lastModDate>2020-12-01T09:57:25.0230000+00:00</lastModDate>
        
        <creator>Lilian Ocares Cunyarachi</creator>
        
        <creator>Alexandra Santisteban Santisteban</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Augmented reality; design thinking; electronic glasses; low vision; seniors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this article, we focus on the elderly who suffer from low vision. We seek to design augmented-reality electronic glasses to help elderly people with vision problems, which limit their ability to perform daily activities and can affect their development in society, causing serious physical and emotional damage; to this end, we analyzed a set of scientific articles on the prevalence of visual impairment. Technology has demonstrated on numerous occasions that it can be a great ally for the health and well-being of the elderly. In this work, the objective is to design electronic glasses to help the elderly improve their vision. The methodology used is Design Thinking, whose phases help us understand the problem, collect information about it, and propose a solution. The result obtained is a prototype of electronic glasses that will benefit adults who suffer from low vision. As a case study, we show the design of the mobile application and the detailed development of the prototype.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_85-Augmented_Reality_Electronic_Glasses_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Mobile Application for the Automation of the Census Process in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111184</link>
        <id>10.14569/IJACSA.2020.0111184</id>
        <doi>10.14569/IJACSA.2020.0111184</doi>
        <lastModDate>2020-12-01T09:57:24.9900000+00:00</lastModDate>
        
        <creator>Luis Alberto Romero Tuanama</creator>
        
        <creator>Juber Alfonso Quiroz Gutarra</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Automation; Balsamiq Mockup; census; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This study shows that the traditional census process in Peru has many shortcomings, including the loss of data and the long duration of the process. To solve this problem, a mobile application was designed to automate the census process in Peru. For the development, we rely on the agile Scrum methodology and the Balsamiq Mockup and Adobe XD tools, which help us make prototypes of the application. In the last census, many families were not registered due to lack of time or other factors, so we designed this prototype of a mobile application, which will help the census taker record data faster. The result obtained is the proposal of a productive approach that optimizes the census process through a mobile application, in which each census taker registers the data of the families in the census more quickly, and this information is sent directly to the database of the organization conducting the census, thus avoiding loss of data and saving time and money.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_84-Design_of_a_Mobile_Application_for_the_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PlusApps: Towards a Privacy Risk Analysis for Android Plus Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111183</link>
        <id>10.14569/IJACSA.2020.0111183</id>
        <doi>10.14569/IJACSA.2020.0111183</doi>
        <lastModDate>2020-12-01T09:57:24.9770000+00:00</lastModDate>
        
        <creator>Abdullah J. Alzahrani</creator>
        
        <subject>Android security; malware detection; permission analysis; privacy risk; plus application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The Android platform leads the mobile operating system marketplace and has consequently drawn the interest of malware authors and researchers. Despite the significant number of proposed malware detection techniques, classification models, and practical reverse engineering solutions, these remain insufficient and imperfect. Also, the number of Android apps has increased significantly in recent years, as has the number of apps revealing confidential data. It is essential to investigate applications and make sure that none of them leak private data; consequently, a privacy leak analysis approach is needed. Therefore, this paper investigates the behavior and data leakages of plus apps with machine-learning algorithms to determine the best features for differentiating plus apps from original apps. The result of the analysis discloses that the SVM classifier presents the greatest accuracy. Further investigation demonstrates that the classifier with the ranking algorithm that uses correlation coefficient (CorEvel) and information gain (InfGain) methods offers greater precision than the other correlation algorithms. The result of this experiment proves that the ranking algorithm is able to reduce the dimension of the features and produce an accuracy of 96.60%.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_83-PlusApps_Towards_a_Privacy_Risk_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Tuning Pre-Trained Convolutional Neural Networks for Women Common Cancer Classification using RNA-Seq Gene Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111182</link>
        <id>10.14569/IJACSA.2020.0111182</id>
        <doi>10.14569/IJACSA.2020.0111182</doi>
        <lastModDate>2020-12-01T09:57:24.9600000+00:00</lastModDate>
        
        <creator>Fadi Alharbi</creator>
        
        <creator>Murtada K. Elbashir</creator>
        
        <creator>Mohanad Mohammed</creator>
        
        <creator>Mohamed Elhafiz Mustafa</creator>
        
        <subject>Fine-tuning; RNA-Seq; gene expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Most recent cancer classification methods use gene expression profiles as features because they can provide very important information regarding tumor characteristics. Motivated by its success in the computer vision area, deep learning has now been successfully applied to medical data because it can capture non-linear patterns in complex features and can leverage information from unlabeled data of problems that do not belong to the problem being handled. In this paper, we implement transfer learning, which refers to the use of a model trained on one task to perform classification on another task, to classify five cancer types that most commonly affect women. We used VGG16, Xception, DenseNet, and ResNet50 as base models and then added a dense layer to reflect our five-class classification problem. To avoid training over-fitting, which can result in a very high training accuracy and a low cross-validation accuracy, we used L2-regularization. We retrained (fine-tuned) these models using a five-fold cross-validation approach on RNA-Seq gene expression data after transforming it into 2D image-like data. We used the softmax activation function with the prediction dense layer and Adam as the optimizer in the model fit for all four architectures. The highest performance is obtained when fine-tuning the Xception architecture, which achieved classification accuracy = 98.6%, precision = 98.6%, recall = 97.8%, and F1-score = 98% using the five-fold cross-validation training and testing approach.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_82-Fine_Tuning_Pre_Trained_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Legal Requirements towards Enhancing the Security of Medical Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111181</link>
        <id>10.14569/IJACSA.2020.0111181</id>
        <doi>10.14569/IJACSA.2020.0111181</doi>
        <lastModDate>2020-12-01T09:57:24.9430000+00:00</lastModDate>
        
        <creator>Prosper K. Yeng</creator>
        
        <creator>Stephen D. Wulthusen</creator>
        
        <creator>Bian Yang</creator>
        
        <subject>Information security; medical device; legal requirement; healthcare; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Over 25 million Americans are dependent on medical devices. However, the patients who need these devices have only two choices: using an insecure life-critical device, or living without the support of a medical device and facing the threats presented by the disease. This study therefore conducted a state-of-the-art review of security requirements concerning medical devices in the US and EU. The Food, Drug, and Cosmetic Act, HIPAA, the Medical Device Regulation of the EU, and the GDPR were among the identified regulations for controlling the security of these devices. Statutory laws such as the Computer Fraud and Abuse Act (CFAA), the Anti-Tampering Act, and the Penal Code, as well as battery and trespass to chattel in civil law, were also identified. In analyzing the security requirements, there is little reliance on criminal charges against cyber criminals in addressing the security issues, because it is often challenging to identify the culprits in medical device hacks. It is also difficult to hold device manufacturers liable for negligence of duty, especially after the device has been approved or if the harm to the patient was the result of a cyber attacker. Suggestions have been provided to improve the regulations so that both the regulatory bodies and MDMs can improve their security-conscious care.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_81-Legal_Requirements_toward_Enhancing_the_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Automatic Agricultural Crop Maintenance System using Runway Scheduling Algorithm: Fuzzyc-LR for IoT Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111180</link>
        <id>10.14569/IJACSA.2020.0111180</id>
        <doi>10.14569/IJACSA.2020.0111180</doi>
        <lastModDate>2020-12-01T09:57:24.9130000+00:00</lastModDate>
        
        <creator>G. Balakrishna</creator>
        
        <creator>Nageswara Rao Moparthi</creator>
        
        <subject>Runway Scheduling Algorithm (RWSA); Sensor Calibration and Feedback Method (SCFM); IoT; fuzzy-c; logistic regression (LR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this framework, crop diseases are identified using three methods: fuzzy-c as a clustering algorithm, the runway scheduling algorithm as a classification algorithm, and logistic regression as a prediction algorithm. These techniques are meaningful solutions for losses in yields and in the quantity of agricultural production. In this work, crop diseases and the corresponding fertilizers are predicted based on pattern scalability by the above algorithms. It proposes a Sensor Calibration and Feedback Method (SCFM) with RWSA for better agricultural crop maintenance with automation, and fuzzy-c and logistic regression are helpful in studying the crop datasets for classifying the disease. This research tries to identify the leaf color, leaf size, disease of the plant, and fertilizer for the illness of crops. In this context, RWSA-Agriculture provides a solution for the current problems and improves the F1-score. The data collected from local sensors and the remote station is evaluated against the dataset; these sensor-based LR and fuzzy-c methods control the disease prediction system in SCFM and RWSA. This technique accurately regulates the dispensing of water as well as chemicals and fertilizers to monitor crops and prevent crop diseases. This investigation gives performance metric values of PSNR = 44.18 dB, SSIM = 0.9943, BPP = 1.46, Tp = 0.945, and CR = 5.25.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_80-The_Automatic_Agricultural_Crop_Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Information Management Strategy for e-government in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111179</link>
        <id>10.14569/IJACSA.2020.0111179</id>
        <doi>10.14569/IJACSA.2020.0111179</doi>
        <lastModDate>2020-12-01T09:57:24.8970000+00:00</lastModDate>
        
        <creator>Fatmah Almehmadi</creator>
        
        <subject>e-Government; Information technology; Information management; Trend analysis; Saudi Arabia; the USA; the Republic of Korea</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Given the current coronavirus pandemic, the role of e-government in both developed and developing countries is becoming more important than ever. This study aims to assess the development of e-government in Saudi Arabia and to compare it with that of two world e-government leaders, the USA and the Republic of Korea, during the period 2003-2020. Data analysis consists of: 1) a comparative, cross-country, longitudinal analysis of the e-government development index (EGDI) relating to Saudi Arabia, the USA, and the Republic of Korea; 2) a trend analysis of the online services, telecommunication infrastructure, and human capital indicators; and 3) a gap analysis to pinpoint the gaps between Saudi Arabia and the USA and between Saudi Arabia and the Republic of Korea. The results reveal a continuous rise in the ranking of Saudi Arabia’s EGDI over the years. However, the findings also indicate some areas that require further improvement. An information management strategy for the support of e-government in Saudi Arabia has been developed, describing the current e-government situation and setting high-, medium-, and low-level priorities that the country needs to consider in order to improve its compliance with international e-government practices.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_79-Developing_an_Information_Management_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Repeated Median Filtering Method for Denoising Mammogram Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111178</link>
        <id>10.14569/IJACSA.2020.0111178</id>
        <doi>10.14569/IJACSA.2020.0111178</doi>
        <lastModDate>2020-12-01T09:57:24.8800000+00:00</lastModDate>
        
        <creator>Hussain AlSalman</creator>
        
        <subject>Mammogram images; image denoising; median filter; repeated median filtering; speckle noise; salt and pepper noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In the medical field, mammogram analysis is one of the most important procedures for breast cancer detection and early diagnosis. During the image acquisition process, the acquired mammograms may contain noise due to changes in illumination and sensor errors. Hence, it is necessary to remove this noise without affecting the edges and fine details, achieving an effective diagnosis of breast images. In this work, a repeated median filtering method is proposed for denoising digital mammogram images. A number of experiments are conducted on a dataset of different mammogram images to evaluate the proposed method using a set of image quality metrics. Experimental results are reported by computing the image quality metrics between the original clean images and denoised images that were corrupted by different levels of simulated speckle noise as well as salt and pepper noise. The evaluation quality metrics showed that the repeated median filter method achieves higher results than the related traditional median filter method.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_78-A_Repeated_Median_Filtering_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meezaj: An Interactive System for Real-Time Mood Measurement and Reflection based on Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111177</link>
        <id>10.14569/IJACSA.2020.0111177</id>
        <doi>10.14569/IJACSA.2020.0111177</doi>
        <lastModDate>2020-12-01T09:57:24.8500000+00:00</lastModDate>
        
        <creator>Ehsan Ahmad</creator>
        
        <subject>Subjective well-being; happiness; IoT; Arduino; Vision 2030</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Subjective well-being has a critical effect on the progress and productivity vital for digital and strategic transformation. The increase in suicide attempts among college and university students is a clear indication of stress and anxiety among students. Offering a fulfilling and healthy life to promote the life-long learning journey is also one of the important objectives of Vision 2030 for the modernization of the Kingdom of Saudi Arabia. Due to the multifaceted nature of subjective well-being, real-time mood measurement and reflection is a challenging task and demands the use of the latest technologies. This paper presents Meezaj, an interactive system for real-time mood measurement and reflection leveraging Internet of Things (IoT) technology. The architecture and workflow of the Meezaj system are discussed in detail. Meezaj not only promotes a sense of significance in students, by indicating that their happiness matters in decision making, but also assists policy makers in identifying the factors affecting happiness in an educational institution.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_77-Meezaj_An_Interactive_System_for_Real_Time_Mood.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues in Near Field Communications (NFC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111176</link>
        <id>10.14569/IJACSA.2020.0111176</id>
        <doi>10.14569/IJACSA.2020.0111176</doi>
        <lastModDate>2020-12-01T09:57:24.8330000+00:00</lastModDate>
        
        <creator>Arwa Alrawais</creator>
        
        <subject>Near Field Communications (NFC); NFC attacks; NFC countermeasures; NFC vulnerabilities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Near Field Communications (NFC) is a rising technology that enables two devices within close proximity to quickly establish wireless contactless communication. It looks intuitively secure, and various applications such as ticketing, mobile payments, and access granting have taken advantage of NFC and flooded into the market in recent years. However, is it worth trusting such applications at the risk of leaking the user’s private information? This paper surveys NFC vulnerabilities and the different kinds of security attacks that exploit them. Drawing on related materials, the paper covers possible solutions that could defend against those security threats. Furthermore, the attacks and countermeasures are evaluated in terms of practicality and cost.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_76-Security_Issues_in_Near_Field_Communications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speaker-Independent Speech Recognition using Visual Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111175</link>
        <id>10.14569/IJACSA.2020.0111175</id>
        <doi>10.14569/IJACSA.2020.0111175</doi>
        <lastModDate>2020-12-01T09:57:24.8030000+00:00</lastModDate>
        
        <creator>Pooventhiran G</creator>
        
        <creator>Sandeep A</creator>
        
        <creator>Manthiravalli K</creator>
        
        <creator>Harish D</creator>
        
        <creator>Karthika Renuka D</creator>
        
        <subject>Visual speech recognition; audio speech recognition; visemes; lip reading system; Convolutional Neural Network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Visual Speech Recognition aims at transcribing lip movements into readable text. There have been many strides in automatic speech recognition systems that can recognize words with audio and visual speech features, even under noisy conditions. This paper focuses only on the visual features, whereas a robust system uses visual features to support acoustic features. We propose the concatenation of visemes (lip movements) for text classification rather than classic individual viseme mapping. The results show that this approach achieves a significant improvement over state-of-the-art models. The system has two modules: the first extracts lip features from the input video, while the second is a neural network trained to process the viseme sequence and classify it as text.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_75-Speaker_Independent_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Solution for Container Placement and Load Balancing based on ACO and Bin Packing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111174</link>
        <id>10.14569/IJACSA.2020.0111174</id>
        <doi>10.14569/IJACSA.2020.0111174</doi>
        <lastModDate>2020-12-01T09:57:24.7870000+00:00</lastModDate>
        
        <creator>Oussama SMIMITE</creator>
        
        <creator>Karim AFDEL</creator>
        
        <subject>Cloud; virtualization; container; placement; Green IT; containerization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Currently, the energy consumption of cloud data centers is attracting a lot of interest. One of the main approaches to optimizing energy and cost in data centers is virtualization. Recently, a new type of container-based virtualization has appeared; containers are considered very light and modular virtual machines that offer great flexibility and the possibility of migration from one environment to another, which allows applications to be optimized for the cloud. Another approach to saving energy is to consolidate the workload, which is the amount of processing that the computer has to perform at any given time. In this article, we study a container placement algorithm that takes into account the QoS requirements of different users in order to minimize energy consumption. Thus, we propose a hybrid approach for managing resources and workload based on ant colony optimization (ACO) and the first-fit decreasing (FFD) algorithm to avoid unnecessary power consumption. The results of the experiment indicate that using the first-fit decreasing (FFD) algorithm for container placement is better than ant colony optimization, especially in homogeneous systems. On the other hand, ant colony optimization shows very satisfying results in the case of workload management.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_74-Hybrid_Solution_for_Container_Placement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying the Impacts of Active and Passive Attacks on Network Layer in a Mobile Ad-hoc Network: A Simulation Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111173</link>
        <id>10.14569/IJACSA.2020.0111173</id>
        <doi>10.14569/IJACSA.2020.0111173</doi>
        <lastModDate>2020-12-01T09:57:24.7570000+00:00</lastModDate>
        
        <creator>Uthumansa Ahamed</creator>
        
        <creator>Shantha Fernando</creator>
        
        <subject>Active attack; network layer; passive attack; performance metrics; simulation study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this research, we investigate the features and behaviors of network-layer active and passive attacks on the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in Mobile Ad-hoc Networks (MANET). Through a literature survey, we identify the features of each attack, and we examine the behavior of these attacks through simulations in Network Simulator 2 (NS2). Blackhole, Grayhole, and Wormhole attacks are used in this simulation study. Each attack is introduced independently into the network to find its impact on network performance, evaluated through Packet Delivery Ratio (PDR), Average End-to-End Delay (AEED), Throughput, Average Data Dropping Rate (ADDR), and Simulation Processing Time at Intermediate Nodes (SPTIN). To obtain more accurate results, the simulation parameters are kept the same in each simulation, and a controller network is simulated for comparison with each attack simulation. Simulations are repeated while changing the number of connected intermediate nodes (hops) in the network. Analysis of the collected data shows that, of the three attacks, a network containing a Blackhole or Grayhole attack has the lowest SPTIN. The network affected by a Blackhole attack shows a higher ADDR than the controller network. Furthermore, the data forwarding rate is higher in the network affected by a Wormhole attack. Finally, according to the simulation studies, Blackhole and Grayhole attacks cause more damage to network performance than Wormhole attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_73-Identifying_the_Impacts_of_Active_and_Passive_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved PSO Performance using LSTM based Inertia Weight Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111172</link>
        <id>10.14569/IJACSA.2020.0111172</id>
        <doi>10.14569/IJACSA.2020.0111172</doi>
        <lastModDate>2020-12-01T09:57:24.7400000+00:00</lastModDate>
        
        <creator>Y. V. R. Naga Pawan</creator>
        
        <creator>Kolla Bhanu Prakash</creator>
        
        <subject>Particle swarm optimization; inertia weight; long short term memory; benchmark functions; convergence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Particle Swarm Optimization (PSO) was first introduced in 1995. It is a widely applied population-based meta-heuristic optimization algorithm, used in diverse areas of science, engineering, technology, medicine, and the humanities. The performance of PSO is improved by tuning the inertia weight, topology, and velocity clamping. Researchers have proposed different Inertia Weight based PSO (IWPSO) variants, each aiming to excel over the existing PSOs. Here, a PSO whose inertia weight is predicted by a Long Short Term Memory (LSTM) network (LSTMIWPSO) is proposed, and its performance is compared with constant, random, and linearly decreasing inertia weight PSO. Tests are conducted on swarm sizes of 50, 75, and 100 with dimensions 10, 15, and 25. The experimental results show that the LSTM-based IWPSO supersedes CIWPSO, RIWPSO, and LDIWPSO.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_72-Improved_PSO_Performance_using_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining for Monitoring and Improving Academic Performance at University Levels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111171</link>
        <id>10.14569/IJACSA.2020.0111171</id>
        <doi>10.14569/IJACSA.2020.0111171</doi>
        <lastModDate>2020-12-01T09:57:24.7270000+00:00</lastModDate>
        
        <creator>Ezekiel U Okike</creator>
        
        <creator>Merapelo Mogorosi</creator>
        
        <subject>Educational data mining; learning management systems; Weka system tools; improved academic performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This study applied Educational Data Mining to a sample of 712 logs extracted from the Moodle Learning Management System (LMS) at an African university in order to measure student and staff patterns of use of LMS resources and hence determine whether the quantity of participation, measured as the amount of time spent using LMS resources, improved the academic performance of students. Data collected from the Moodle LMS was preprocessed and analyzed using machine learning algorithms for clustering, classification, and visualization from the WEKA system tools. The dataset consisted of course tools (Quiz, Assignment, Chat, Forum, URL, Folder, and Files) and lecturer and student usage of those tools. Furthermore, SPSS was used to obtain a matrix of correlation coefficients for course tools, tests, and final grade. Correlation analysis was done to verify whether students&#39; use of course tools had an impact on their academic performance. Findings indicated the pattern of usage for Course1 as Quiz (38358), System (17910), Forum (8663), File (8566), Assignment (1235), Folder (514), File Submission (172), and Chat (37); Course2 as System (11920), Quiz (8208), Forum (4476), File (4394), Assignment (257), Chat (247), URL (125), and File Submission (38); and Course3 as System (2622), File (1022), Folder (570), Forum (258), and URL (2). Overall, evaluating the correlation between the use of LMS resources and students&#39; performance, findings indicated a significant relationship between the use of LMS resources and students&#39; academic performance at the 0.01 level of significance. The findings are useful for strategic academic planning with LMS data at the university.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_71-Educational_Data_Mining_for_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drop-Out Prediction in Higher Education Among B40 Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111169</link>
        <id>10.14569/IJACSA.2020.0111169</id>
        <doi>10.14569/IJACSA.2020.0111169</doi>
        <lastModDate>2020-12-01T09:57:24.6930000+00:00</lastModDate>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Ahmad Fikri Mohamed Nafuri</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Mohd Zakree Ahmad Nazri</creator>
        
        <creator>Khairul Nadiyah Mohamad</creator>
        
        <subject>Machine learning; prediction; student attrition; student drop-out; B40; random forest; decision tree; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Malaysian citizens are categorized into three income groups: the Top 20 Percent (T20), Middle 40 Percent (M40), and Bottom 40 Percent (B40). One of the focus areas in the Eleventh Malaysia Plan (11MP) is to elevate the B40 household group towards the middle-income society. In 2018, an estimated 4.1 million households belonged to this group. The government of Malaysia has widened access to higher education for the B40 group in an effort to reduce socioeconomic gaps and to improve living standards. Statistical data shows that since 2013, the yearly intake of students into bachelor&#39;s degree programs in Malaysia&#39;s public universities has amounted to more than 85,000. Despite this huge number of enrolments, not all were able to graduate, including students from low-income family backgrounds. Data mining with machine learning techniques has been widely used to predict, effectively and accurately, students at risk of dropping out in general education. However, machine learning work on student attrition in Malaysia&#39;s higher education is generally lacking. Therefore, in this research, three machine learning models were developed using the Decision Tree, Random Forest, and Artificial Neural Network algorithms in order to classify attrition among B40 students in bachelor&#39;s degree programs in Malaysia&#39;s public universities. Comparative performance analysis between the three models indicates that the Random Forest model is the best model for predicting student attrition in this study. The Random Forest model outperforms the other two models in terms of accuracy, precision, recall, and F-measure, with values of 95.93%, 97.10%, 81.26%, and 88.50%, respectively. Nevertheless, there is a statistically significant difference in performance between the Random Forest model and the Decision Tree model, but no statistically significant difference between the Random Forest model and the Artificial Neural Network model.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_69-Drop_Out_Prediction_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proficiency Assessment of Machine Learning Classifiers: An Implementation for the Prognosis of Breast Tumor and Heart Disease Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111170</link>
        <id>10.14569/IJACSA.2020.0111170</id>
        <doi>10.14569/IJACSA.2020.0111170</doi>
        <lastModDate>2020-12-01T09:57:24.6930000+00:00</lastModDate>
        
        <creator>Talha Ahmed Khan</creator>
        
        <creator>Kushsairy A. Kadir</creator>
        
        <creator>Shahzad Nasim</creator>
        
        <creator>Muhammad Alam</creator>
        
        <creator>Zeeshan Shahid</creator>
        
        <creator>M.S Mazliham</creator>
        
        <subject>Breast cancer; benign; malignant; logistic regression; cardiovascular disease; heart disease diagnosis; support vector machine; classifiers; k-nearest neighbors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Breast cancer and heart disease are very dangerous and common diseases in many countries, including Pakistan. In this paper, a comparative study of classifiers has been performed for tumor and heart disease classification. Around one hundred thousand women with no family history of the disease are diagnosed annually with this life-threatening illness. If it is not treated on time, it may grow and spread to other parts of the human body. Mammograms are X-rays of the breast that can be used for screening for cancer tumors. Early identification of breast cancer may increase the chance of survival by up to 70 percent. Tumors that cause cancer can be categorized into two types: a) benign and b) malignant. A benign tumor is one that does not attach to neighboring tissues or spread to other parts of the body. A malignant tumor, in contrast, can grow and spread, affecting other parts of the body. Classifying a tumor as malignant or benign is very complex, as a cancerous tumor and a tumor caused by skin inflammation appear almost the same. The early identification of malignancy is mandatory to protect the patient&#39;s life. Diverse medical methods based on deep learning and machine learning have been developed to treat patients, as cancer is a very serious and crucial issue in this era. In this research paper, machine learning algorithms such as logistic regression, K-NN, and decision tree have been applied to the breast cancer dataset taken from the UCI Machine Learning repository. A comparative study of classifiers has been performed to determine the better classifier for the robust prediction of breast tumors. Simulated results proved that, using logistic regression, ninety-one percent accuracy was achieved. The research showed that logistic regression can be applied for the accurate and precise early prediction of breast cancer. Cardiovascular disease is very common throughout the world. It has been noticed in cardiac patients that many factors cause heart disease or heart attack. The factors leading to heart failure include varying blood pressure, high sugar, cardiac pain, irregular heart rate, high cholesterol level (LDL), artery blockage, and irregular ECG signals. Many researchers have shown that stress in patients can also be a cause of heart disease. High numbers of cardiac surgeries such as angioplasty and heart bypass are performed annually. People often do not care about their lifestyle and diet and fully ignore the symptoms. Heart disease can be predicted early and cured if proper testing and medication are done. Sometimes there is a false pain with the same feeling as angina pain, suggesting cardiovascular disease. To reduce false alarms and robustly classify heart disease, several machine learning approaches have been adopted. In the proposed research, for the accurate classification of heart disease, a comparison has been performed among support vector machine (SVM), K-nearest neighbors (K-NN), and linear discriminant analysis. Simulated results demonstrated that the support vector machine was the better classifier, with an accuracy of 80.4%.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_70-Proficiency_Assessment_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Programming-Based Code Generation for Arduino</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111168</link>
        <id>10.14569/IJACSA.2020.0111168</id>
        <doi>10.14569/IJACSA.2020.0111168</doi>
        <lastModDate>2020-12-01T09:57:24.6770000+00:00</lastModDate>
        
        <creator>Wildor Ferrel</creator>
        
        <creator>Luis Alfaro</creator>
        
        <subject>Genetic programming; Arduino mega board; multi-objective linear genetic programming; cooperative coevolutionary algorithm; automatic generation of programs; Arduino based thermometer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This article describes a methodology for writing the program for the Arduino board using an automatic generator of assembly language routines that works based on a cooperative coevolutionary multi-objective linear genetic programming algorithm. The methodology is described in an illustrative example that consists of the development of the program for a digital thermometer organized on a circuit formed by the Arduino Mega board, a text LCD module, and a temperature sensor. The automatic generation of a routine starts with an input-output table that can be created in a spreadsheet. The following routines have been automatically generated: initialization routine for the text LCD screen, routine for determining the temperature value, routine for converting natural binary code into unpacked two-digit BCD code, routine for displaying a symbol on the LCD screen. The application of this methodology requires basic knowledge of the assembly programming language for writing the main program and some initial configuration routines. With the application of this methodology in the illustrative example, 27% of the program lines were written manually, while the remaining 73% were generated automatically. The program, produced with the application of this methodology, preserves the advantage of assembly language programs of generating machine code much smaller than that generated by using the Arduino programming language.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_68-Genetic_Programming_Based_Code_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>COVID-19 Transmission Risks Assessment using Agent-Based Weighted Clustering Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111167</link>
        <id>10.14569/IJACSA.2020.0111167</id>
        <doi>10.14569/IJACSA.2020.0111167</doi>
        <lastModDate>2020-12-01T09:57:24.6630000+00:00</lastModDate>
        
        <creator>P. Vidya Sagar</creator>
        
        <creator>T. Pavan Kumar</creator>
        
        <creator>G. Krishna Chaitanya</creator>
        
        <creator>Moparthi Nageswara Rao</creator>
        
        <subject>COVID-19; machine learning; weighted clustering; malicious node; susceptible node; head; trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Coronavirus is a pandemic disease spreading rapidly from human to human all over the world. This family of viruses causes illnesses ranging from the common cold to severe diseases such as MERS-CoV and SARS-CoV. It was initially identified in China in December 2019. The main aim of this research is to assess COVID-19 transmission risk from human to human within a cluster. The agent-based weighted clustering approach is used to rapidly identify coronavirus-infected people within a cluster. In the weighted clustering approach, normal agents are considered susceptible nodes and coronavirus-infected people are considered malicious nodes. The Cluster Head (CH) is elected based upon some weighting factors, and a trust value is evaluated for all the agents within the cluster. The cluster head periodically transfers the malicious node information to all other nodes within the cluster. Finally, the agent-based weighted clustering machine learning model is used to identify the number of coronavirus-infected people within the cluster.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_67-COVID_19_Transmission_Risks_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Machine Learning based Model for COVID-19 Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111166</link>
        <id>10.14569/IJACSA.2020.0111166</id>
        <doi>10.14569/IJACSA.2020.0111166</doi>
        <lastModDate>2020-12-01T09:57:24.6300000+00:00</lastModDate>
        
        <creator>Tamer Sh. Mazen</creator>
        
        <subject>Coronavirus; COVID-19; coronavirus in Egypt; supervised machine learning; regression models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Since the end of 2019, the World Health Organization (WHO) has used the name COVID-19 for the disease caused by the novel coronavirus. Coronaviruses are a family of viruses named after the spiky crown on the outer surface of the virus. The novel coronavirus, also known as SARS-CoV-2, is a contagious respiratory virus that was first reported in Wuhan, China. Owing to the rapid and sudden spread of COVID-19, it has attracted scientists and researchers all over the world. Researchers in the data science field are analyzing the worldwide infection cases day by day to gain a complete statistical view of the current situation. In this paper, a novel approach to predict the daily infection records for COVID-19 is presented. The model is applied to Egypt as well as the 10 highest-ranked countries based on the number of cases and rate of change. The proposed model is implemented based on supervised machine learning regression algorithms. The dataset used for prediction was issued by WHO, starting from 22 Jan 2020.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_66-A_Novel_Machine_Learning_Based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Channel Muscle Armband Implementation: Electronic Circuit Validation and Considerations towards Medical Device Regulation Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111165</link>
        <id>10.14569/IJACSA.2020.0111165</id>
        <doi>10.14569/IJACSA.2020.0111165</doi>
        <lastModDate>2020-12-01T09:57:24.6170000+00:00</lastModDate>
        
        <creator>Martha Rocio Gonzales Loli</creator>
        
        <creator>Elsa Regina Vigo Ayasta</creator>
        
        <creator>Leyla Agueda Cavero Soto</creator>
        
        <creator>Jose Albites-Sanabria</creator>
        
        <subject>Component; muscle armband; surface electromyography; medical device regulation; transradial users</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Multi-channel muscle arrays are commonly used as sensors in bionic prosthetic devices, offering an innovative solution to recover motion in transradial amputees. This study presents preliminary assessments towards validation of a muscle armband for use by transradial users. Analog and digital components were designed based on medical agencies&#39; recommendations to assess future compliance with Latin American medical device regulations. The study follows two approaches: an exploratory and a pre-experimental design. The design was validated against the research literature and medical device regulations. For validation, the pre-experimental design was guided by a quantitative paradigm. The muscle signal was assessed before and after the conditioning circuit for up to four muscle signals in real time. The present study considers both the muscle signal conditioning circuit and the embedded logic implementation to record signals from the designed muscle armband. Results show that the proposed device allows noninvasive recording of signals in the 20-500 Hz frequency range.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_65-Multi_Channel_Muscle_Armband_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object based Image Splicing Localization using Block Artificial Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111164</link>
        <id>10.14569/IJACSA.2020.0111164</id>
        <doi>10.14569/IJACSA.2020.0111164</doi>
        <lastModDate>2020-12-01T09:57:24.5830000+00:00</lastModDate>
        
        <creator>P N R L Chandra Sekhar</creator>
        
        <creator>T N Shankar</creator>
        
        <subject>Image forensics; splicing localization; block artificial grids; object segmentation; double compression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>People freely share pictures with their loved ones and others using smartphones and social networking sites. The news industry and courts of law use pictures as evidence in their investigations. At the same time, user-friendly photo editing tools can alter the content of pictures and make their validity questionable. For over two decades, research has been ongoing in image forensics to determine a picture&#39;s trustworthiness. This paper proposes an efficient statistical method based on Block Artificial Grids in double-compressed images to identify regions attacked by image manipulation. In contrast to existing approaches, the proposed approach extracts the artefacts on individual objects instead of the entire image. A localized algorithm is proposed based on the cosine dissimilarity between objects, which exposes the tampered object as the one with maximum dissimilarity among objects. The experimental results reveal that the proposed method is superior to other current methods.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_64-Object_based_Image_Splicing_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moment Features based Violence Action Detection using Optical Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111163</link>
        <id>10.14569/IJACSA.2020.0111163</id>
        <doi>10.14569/IJACSA.2020.0111163</doi>
        <lastModDate>2020-12-01T09:57:24.5700000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <subject>Violence detection; feature extraction; classification; optical flow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Instantaneous detection of violence is still an unsolved research problem, even in these prosperous years of artificial intelligence. The severity of injuries caused by violence can be minimized by detecting violence in real time, which demands effective violence detection. Various methods were previously proposed for violence detection but could not provide robust results due to many challenges, i.e., noise, motion estimation, lack of appropriate feature selection, lack of an effective classification approach, complex backgrounds, and variations in illumination. This research proposes an efficient method for violence detection that uses moment features to capture motion patterns, facilitating detection in each frame and providing a smaller region of interest. As a result, the probability of losing extracted motion intensity because of same-colored objects in the background is reduced, which minimizes background complexity. The proposed method then uses optical flow to calculate angles and linear distances in each frame. In this context, if any frame is lost due to noise or illumination variation, the proposed method uses a Kalman filter to process that frame by eliminating the noise. Finally, the decision on violence is determined using a random forest classifier from a single feature vector by generating a set of probabilities for each class. Extensive experimentation was performed, in which an accuracy rate of 99.12% was achieved at a frame rate of 35 fps, which is higher than previous research results. The experimental results reveal the effectiveness of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_63-Moment_Features_Based_Violence_Action.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Cluster based Routing Protocol with Secure IDS for IoT Assisted Heterogeneous WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111162</link>
        <id>10.14569/IJACSA.2020.0111162</id>
        <doi>10.14569/IJACSA.2020.0111162</doi>
        <lastModDate>2020-12-01T09:57:24.5530000+00:00</lastModDate>
        
        <creator>Sultan Alkhliwi</creator>
        
        <subject>Wireless Sensor Networks (WSN); Clustering; Routing; Type II fuzzy logic; salp swarm algorithm; long short-term memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Currently, wireless sensor networks (WSNs) and the Internet of Things (IoT) have become useful in a wide range of applications. The nodes in an IoT-assisted WSN commonly operate on restricted battery units, meaning energy efficiency is a major design issue. Clustering and route selection processes are commonly utilized energy-efficient techniques for WSN. Although several cluster-based routing approaches are available for homogeneous WSN, only a limited number of studies have focused on energy efficient heterogeneous WSN (HWSN). Moreover, security poses a major design issue in the HWSN. This paper introduces an energy efficient cluster-based routing protocol with a secure intrusion detection system in HWSN, called EECRP-SID. The proposed EECRP-SID technique involves three main phases: cluster construction, optimal path selection, and intrusion detection. Initially, the type II fuzzy logic-based clustering (T2FC) technique with three input parameters is applied for cluster head (CH) selection. These parameters are residual energy level (REL), distance to the base station (DTBS), and node density (NDEN). In addition to CH selection, the salp swarm optimization (SSO) technique is utilized to select optimal paths for inter-cluster data transmission, which results in an energy efficient HWSN. Finally, to achieve security in the cluster-based WSN, an effective intrusion detection system (IDS) using long short-term memory (LSTM) is executed on the CHs to identify the presence of intruders in the network. The EECRP-SID method was implemented in MATLAB, and experimental outcomes indicate that it outperformed the compared methods in terms of distinct performance measures.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_62-Energy_Efficient_Cluster_Based_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Method to Stream Real Time Data in IoT using Dynamic Voltage and Frequency Scaling with Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111161</link>
        <id>10.14569/IJACSA.2020.0111161</id>
        <doi>10.14569/IJACSA.2020.0111161</doi>
        <lastModDate>2020-12-01T09:57:24.5370000+00:00</lastModDate>
        
        <creator>H. A. Hashim</creator>
        
        <subject>Dynamic voltage; frequency scaling; central processing unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>DVFS (Dynamic Voltage and Frequency Scaling) is a popular CPU (Central Processing Unit)-level voltage and frequency scaling technology driven by application precedence. To motivate frequency/voltage scaling as a feasible tool for energy efficiency, (i) basic workloads should show that memory frequency scaling has an impact with insignificant performance degradation, and (ii) there should be a substantial opportunity for power reduction. Two opposing forces affect energy efficiency when memory frequency is lowered: efficiency depends on both power and runtime, because energy is the product of power and time. A reduction in power alone increases efficiency; however, operating at lower power points expands runtime, and in this way can increase energy consumption. There is therefore a break-even point for reducing the memory frequency/voltage. Furthermore, statically scaling the memory frequency has little impact on many lighter workloads, because frequency affects only the data-transfer portion of memory latency, not the idle portion. Motivated by this, this paper shows how memory frequency scaling affects system power (presenting a systemic model to simplify voltage scaling) and therefore energy, and presents DVFS memory computing in real time. In this work, an enhanced DVFS-with-memory technique is proposed to decrease energy use and improve performance at the memory level.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_61-Enhanced_Method_to_Stream_Real_Time_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Impact of Genetic Operators in a Hybrid GA-KNN Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111160</link>
        <id>10.14569/IJACSA.2020.0111160</id>
        <doi>10.14569/IJACSA.2020.0111160</doi>
        <lastModDate>2020-12-01T09:57:24.5230000+00:00</lastModDate>
        
        <creator>Raghad Sehly</creator>
        
        <creator>Mohammad Mezher</creator>
        
        <subject>Data mining; classification; K-NN; GA; Pima Indian Diabetes Dataset; UCI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Diabetes is a chronic disease caused by a deficiency of insulin that is prevalent around the world. Although doctors diagnose diabetes by testing glucose levels in the blood, they cannot determine whether a person is diabetic on this basis alone. Classification algorithms are an immensely helpful approach to accurately predicting diabetes, and merging two algorithms, such as the K-Nearest Neighbor (K-NN) algorithm and the Genetic Algorithm (GA), can enhance prediction even more. Choosing an optimal ratio of crossover and mutation is one of the common obstacles faced by GA researchers. This paper proposes a model that combines K-NN and GA with Adaptive Parameter Control to help medical practitioners confirm their diagnosis of diabetes in patients. The UCI Pima Indian Diabetes Dataset is deployed on the Anaconda Python platform. The mean accuracy of the proposed model is 0.84102, which is 1% better than the best result reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_60-Performance_Impact_of_Genetic_Operators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permission Extraction Framework for Android Malware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111159</link>
        <id>10.14569/IJACSA.2020.0111159</id>
        <doi>10.14569/IJACSA.2020.0111159</doi>
        <lastModDate>2020-12-01T09:57:24.4900000+00:00</lastModDate>
        
        <creator>Ali Ghasempour</creator>
        
        <creator>Nor Fazlida Mohd Sani</creator>
        
        <creator>Ovye John Abari</creator>
        
        <subject>Malware detection; android device; operating system; malicious application; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Nowadays, Android-based devices are more utilized than devices based on other operating systems. Statistics show that the market share for Android on mobile devices in March 2018 was 84.8 percent, compared with only 15.1 percent for iOS. These numbers indicate that most attacks target Android devices. In addition, most people keep their confidential information on their mobile phones, and hence there is a need to secure this operating system against harmful attacks. Detecting malicious applications in the Android market is becoming a very complex procedure: as attacks increase, the complexity of feature selection and classification techniques grows. There are many solutions for detecting malicious applications on the Android platform, but these solutions are inefficient at handling feature extraction and classification due to many false alarms. In this work, the researchers propose a multi-level permission extraction framework for malware detection on Android devices. The framework uses a permission extraction approach to label malicious applications by analyzing permissions, and it is capable of handling a large number of applications while keeping the performance metrics optimized. A static analysis method was employed in this work, and the Support Vector Machine (SVM) and Decision Tree algorithms were used for classification. The results show that, as input data increases, the model keeps detection accuracy at an acceptable level.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_59-Permission_Extraction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature-Based Sentiment Analysis for Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111158</link>
        <id>10.14569/IJACSA.2020.0111158</id>
        <doi>10.14569/IJACSA.2020.0111158</doi>
        <lastModDate>2020-12-01T09:57:24.4770000+00:00</lastModDate>
        
        <creator>Ghady Alhamad</creator>
        
        <creator>Mohamad-Bassam Kurdy</creator>
        
        <subject>Sentiment analysis; feature-based; colloquial Arabic; opinion mining; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In light of the spread of e-commerce and e-marketing, and the presence of a huge number of reviews and texts written by people to share views on products, it has become necessary to extract these opinions automatically and analyze the feelings of the reviewers. The goal is to obtain reports evaluating products and to contribute to improving services at a glance. Sentiment analysis is a relatively recent field that deals with the processing of natural texts published on web sites and social networks. However, the processing of texts written in the Arabic language is one of the challenges that specialists face, because people do not rely on standard Arabic, instead writing in spoken/colloquial language and using various dialects. This paper presents feature-based sentiment analysis for the Arabic language, a text analysis technique that breaks down text into aspects (attributes or components of a product or service) and then allocates each one a sentiment level (positive, negative, or neutral).</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_58-Feature_Based_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supplier Qualification Model (SQM): A Quantitative Model for Supplier Agreements Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111157</link>
        <id>10.14569/IJACSA.2020.0111157</id>
        <doi>10.14569/IJACSA.2020.0111157</doi>
        <lastModDate>2020-12-01T09:57:24.4600000+00:00</lastModDate>
        
        <creator>Mohammed Omar</creator>
        
        <creator>Yehia Helmy</creator>
        
        <creator>Ahmed Bahaa Farid</creator>
        
        <subject>Agile practices; vendor selection; CMMI; outsourcing; software acquisition; supplier agreement management; supplier selection; supplier evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Recently, software outsourcing has become increasingly widespread due to the valuable economic and technical benefits it introduces to the software development industry, whereby software development organizations adopt a third party to acquire a software project component (product or service). In the acquisition process, companies rely on the CMMI supplier agreement management (SAM) process area to select the potential supplier. Potential suppliers (vendors) are carefully selected through a dedicated process to ensure the delivery of high-quality and reliable services. Most of the published work on how to evaluate and select the right supplier is based on a plain, step-by-step process; no literature has been reported that evaluates suppliers in a measurable way and selects the potential ones using a quantitative model. The purpose of this paper is to propose a practical quantitative model, called the Supplier Qualification Model, that enables organizations to easily evaluate and select potential suppliers through a measurable approach that depends on monitoring and executing the SLAs of the SAM. The proposed model has been verified by implementing it as an extension for one of the worldwide leading Agile management platforms according to Gartner (Microsoft Team Foundation Server). Multiple versions of the extension were implemented to target the major versions of Microsoft Team Foundation Server and were validated through their use in 426 worldwide companies. This demonstrates the suitability of the model for practical use.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_57-Supplier_Qualification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Surface Water Quality on the Upper Watershed of Huallaga River, in Peru, using Grey Systems and Shannon Entropy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111156</link>
        <id>10.14569/IJACSA.2020.0111156</id>
        <doi>10.14569/IJACSA.2020.0111156</doi>
        <lastModDate>2020-12-01T09:57:24.4270000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Jharison Vidal</creator>
        
        <creator>Jhon Castro</creator>
        
        <creator>Jhonel Felix</creator>
        
        <creator>Jorge Saenz</creator>
        
        <subject>Grey clustering; Huallaga river; Prati index; Shannon entropy; water quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The assessment of surface water quality is a complex issue that entails the comprehensive analysis of several parameters that are altered by natural or man-made causes. In this sense, the Grey Clustering method, which is based on Grey Systems theory, and Shannon Entropy, based on the artificial intelligence approach, provide an alternative for evaluating water quality in an integral way, considering the uncertainty within the analysis. In the present study, the water quality of the upper watershed of the Huallaga river was evaluated taking into account the monitoring results of twenty-one points carried out by the National Water Authority (ANA), analyzing nine parameters of the Prati index. The results showed that all the monitoring points of the Huallaga river were classified as not contaminated, which means that the discharges generated by economic activities pass through treatment plants meeting the quality parameters. Finally, the results obtained can be of great help to the ANA and the regional and local authorities of Peru in making decisions to improve the management of the Huallaga river watershed.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_56-Assessment_of_Surface_Water_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BOTNETs: A Network Security Issue</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111155</link>
        <id>10.14569/IJACSA.2020.0111155</id>
        <doi>10.14569/IJACSA.2020.0111155</doi>
        <lastModDate>2020-12-01T09:57:24.4130000+00:00</lastModDate>
        
        <creator>Umar Iftikhar</creator>
        
        <creator>Kashif Asrar</creator>
        
        <creator>Maria Waqas</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <subject>BOTNET; malware; drones; zombies; threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>With the technological advancements in the field of networking and information technology in general, organizations are enjoying the technological blessings while simultaneously facing perpetual threats in the form of attacks designed especially to disable organizations and their infrastructure, the gravest cyber threats in recent times. Compromised computers, or BOTNETs, are unarguably the most severe threat to the security of the internet community. Organizations are doing their best to curb BOTNETs in every possible way, spending huge amounts of their budget every year on available hardware and software solutions. This paper presents a survey of the security issues raised by BOTNETs, their future, how they are evolving, and how they could be circumvented to secure the most valuable resource of the organizations, which is data. The compromised systems may be treated like viruses in the network, capable of causing substantial loss to the organization, including theft of confidential information. This paper highlights the parameters that should be considered by organizations or network administrators to find anomalies that may point to the presence of a BOTNET in the network. Early detection may reduce the impact of damage by enabling timely action against compromised systems.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_55-BOTNETs_A_Network_Security_Issue.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining the Effect of Online Gaming Addiction on Adolescent Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111154</link>
        <id>10.14569/IJACSA.2020.0111154</id>
        <doi>10.14569/IJACSA.2020.0111154</doi>
        <lastModDate>2020-12-01T09:57:24.3970000+00:00</lastModDate>
        
        <creator>Maha AlDwehy</creator>
        
        <creator>Hedia Zardi</creator>
        
        <subject>Online gaming addiction; adolescent behavior on internet; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Daily rates of Internet use among adolescents exceed those of adults, and monitoring shows that the number of adolescents on the Internet is increasing all over the world. Today, as a result of the ease of access to the Internet, most adolescents&#39; access to the online world is easier and more common. In this paper, we review some studies that explain the behavior of adolescents while gaming online and its effects, along with statistics on the impact of the Internet on teenagers. The study reviews past work on adolescent behavior and privacy with a potential impact on adolescent behavior, which has become one of the most important problems. We focused on exploring online game addiction concerns and their effects on teens&#39; behavior. The purpose of this type of study is to examine the phenomenon against the backdrop of social reality. This study employed a quantitative methodology, selected because it has been proven reliable and has sound construct validity. The data was analyzed using the Smart PLS tool. The main objective of this study was to investigate adolescents&#39; behavior in terms of their addiction to online games, and to study parents&#39; awareness of the dangers of online games for their children. The study explored various factors that can influence addiction fears, examined their effects on adolescent behavior, and contributed to the literature by identifying correlation factors and addressing this gap through an SEM application, specifically the Smart PLS tool.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_54-Examining_the_Effect_of_Online_Gaming_Addiction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Relationship of Trustworthiness and Ethical Value in the Healthcare System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111153</link>
        <id>10.14569/IJACSA.2020.0111153</id>
        <doi>10.14569/IJACSA.2020.0111153</doi>
        <lastModDate>2020-12-01T09:57:24.3670000+00:00</lastModDate>
        
        <creator>Rajes Khana</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Faten Damanhoori</creator>
        
        <creator>Norlia Mustaffa</creator>
        
        <subject>Ethics; ethical value; trustworthiness; breast self-examination; healthcare system; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Females prefer exploring social media or healthcare systems to find information and present their cases to a physician; however, the behavior of physicians on a healthcare system tends to be uncontrolled: physicians have the capacity to share all of their patients’ information with their colleagues without any permission from or concern for the patients. For this reason, it is of utmost importance to design a breast self-examination (BSE) system that can keep a monthly track of self-exam data and the communication between patient and physician. To develop such a system, ethical values and trustworthiness are identified as indicators, and a survey provides the details on the ethical values and trustworthiness applicable in the system. This research therefore focuses on the importance of ethical value and trustworthiness in the healthcare system. A survey of 772 respondents confirms the importance of the ethical values used in the healthcare system: the ethical values of interaction, integrity, confidentiality, protection, caring, and fairness have a significant influence on the healthcare system. The path coefficients answer Hypothesis I, showing a positive relationship and significant effect between ethical value and the BSE system (P&lt;.001). Likewise, trustworthiness has a significant influence on the healthcare system; the path coefficients answer Hypothesis II, showing a positive relationship and significant effect between trustworthiness and the BSE system (P&lt;.001). Finally, the relationship in healthcare between trustworthiness and ethical value rests on integrity, with honesty and belief.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_53-The_Relationship_of_Trustworthiness_and_Ethical_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Home Security System with Face Recognition based on Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111152</link>
        <id>10.14569/IJACSA.2020.0111152</id>
        <doi>10.14569/IJACSA.2020.0111152</doi>
        <lastModDate>2020-12-01T09:57:24.3500000+00:00</lastModDate>
        
        <creator>Nourman S. Irjanto</creator>
        
        <creator>Nico Surantha</creator>
        
        <subject>Home door security; CNN Alexnet; facial recognition; Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Security of house doors is very important: the door lock is the simplest and easiest security measure and is sufficient to provide a sense of security to homeowners. Along with technological developments, especially in the IoT field, door-locking technology has advanced considerably, including locking house doors by face. Facial recognition systems have likewise been developed and implemented for home door-locking systems, offering an option that is simple, easy to use, and quite accurate in recognizing the face of the homeowner. The CNN method has become one of the face recognition approaches that is easy to implement, has good accuracy in recognizing faces, and has been used in object recognition systems and others. In this study, the CNN AlexNet facial recognition system is implemented in a door-locking system. Data collection was done by gathering 1048 facial images of the homeowner, which were then used to train the machine learning model. The resulting accuracy is 97.5%, which is quite good compared to some other studies. The conclusion is that the CNN AlexNet method can perform facial recognition quite accurately and can be implemented on an IoT device, namely the Raspberry Pi.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_52-Home_Security_System_with_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology-Based Predictive Maintenance Tool for Power Substation Faults in Distribution Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111151</link>
        <id>10.14569/IJACSA.2020.0111151</id>
        <doi>10.14569/IJACSA.2020.0111151</doi>
        <lastModDate>2020-12-01T09:57:24.3330000+00:00</lastModDate>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <creator>Alicia Y.C. Tang</creator>
        
        <creator>Kuganesan Kumar</creator>
        
        <creator>Nur Liyana Law Mohd Firdaus Law</creator>
        
        <creator>Mathuri Gurunathan</creator>
        
        <creator>Durkasiny Ramachandran</creator>
        
        <subject>Predictive maintenance; ontology; power substation faults; distribution grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Recent advances in Power Grid (PG) technology pose an important problem of measuring the effectiveness of power grid configurations. Current assessment models are not adequate to mitigate setup issues due to the absence of a high-fidelity evaluation framework that can consider diverse scenarios based on market interest. Consequently, we develop a highly flexible ontology-based evaluation system that can accommodate and assess different scenarios. The use of ontology as middleware is the best approach to produce an efficient, semantically aware, and operationally accurate system environment for managing flexibility in evaluation. The evaluation is made by predicting the failure intensity and subsequently generating a maintenance report for a particular configuration; the best configuration is selected by comparing the maintenance reports of different configurations. The developed evaluation system consists of three main components: the Configuration Generator Tool (GCT), the Failure Prediction Model (FDM), and the Hybrid Simulation Platform (HSP). The GCT is a knowledge-based system that provides a powerful tool for engineers to generate alternative configurations; its data were collected from the literature, validated by experts, and modeled using the Web Ontology Language (OWL). The HSP was developed using several modeling and ontology-based tools such as Blender 3D modeling, Unity 3D, ASP.NET, MySQL, and Apache Jena Fuseki. Finally, the FDM was developed based on the impact and relationship of odd events on power grid components and the impact of a failed component on other components; the prediction is modeled using two methods, the Poisson Model and the Likelihood Estimation Method.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_51-An_Ontology_Based_Predictive_Maintenance_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RHEM: A Robust Hybrid Ensemble Model for Students’ Performance Assessment on Cloud Computing Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111150</link>
        <id>10.14569/IJACSA.2020.0111150</id>
        <doi>10.14569/IJACSA.2020.0111150</doi>
        <lastModDate>2020-12-01T09:57:24.3030000+00:00</lastModDate>
        
        <creator>Sapiah Sakri</creator>
        
        <creator>Ala Saleh Alluhaidan</creator>
        
        <subject>Academic performance; classification algorithms; cloud computing course; ensemble algorithms; hybrid ensemble classifier model; student academic performance tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Creating tools, such as a prediction model to assist students in a traditional or virtual setting, is an essential activity in today&#39;s educational climate. The early stage of incorporating these predictive models using machine learning techniques focused on predicting student achievement in terms of the grades obtained. The research aim is to propose a robust hybrid ensemble model (RHEM) that can warn at-risk students (in a Cloud Computing course) of their likely outcomes at the early-semester assessment. We hybridised four renowned single algorithms – Na&#239;ve Bayes, Multilayer Perceptron, k-Nearest Neighbours, and Decision Table – with four well-established ensemble algorithms – Bagging, RandomSubSpace, MultiClassClassifier, and Rotation Forest – which produced 16 new hybrid ensemble classifier models. Hence, we thoroughly and rigorously built, trained, and tested 24 models altogether. The experiment concluded that the Rotation Forest + Multilayer Perceptron model was the best-performing model based on evaluation in terms of Accuracy (91.70%), Precision (86.1%), F-Score (87.3%), and Receiver Operating Characteristics Area (98.6%). Our research will help students identify their likely final grades in terms of whether they are excellent, very good, good, pass, or fail, and thus transform their academic conduct to achieve higher grades in the final exam accordingly.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_50-RHEM_A_Robust_Hybrid_Ensemble_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Liver Tumor Segmentation using Superpixel based Fast Fuzzy C Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111149</link>
        <id>10.14569/IJACSA.2020.0111149</id>
        <doi>10.14569/IJACSA.2020.0111149</doi>
        <lastModDate>2020-12-01T09:57:24.2870000+00:00</lastModDate>
        
        <creator>Munipraveena Rela</creator>
        
        <creator>Suryakari Nagaraja Rao</creator>
        
        <creator>Patil Ramana Reddy</creator>
        
        <subject>CT scan image; image segmentation; fuzzy c mean clustering; liver mask; superpixel image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In computer-aided diagnosis of liver tumors, tumor segmentation from the CT image is an important step. The majority of methods are not able to give an integrated structure for fast and effective tumor segmentation; hence, segmentation of the tumor is the most difficult task in diagnosis. In this paper, the abdominal CT image is segmented using the superpixel-based fast Fuzzy C-Means clustering algorithm to decrease the computation time and eradicate manual interaction. In this algorithm, a superpixel image with perfect contours can be obtained using a multiscale morphological gradient reconstruction operation. Superpixel computation is a pre-segmentation step employed to improve segmentation accuracy, and FCM with a modified objective function is used to obtain the color segmentation. This method was examined on 20 CT images gathered from the liveratlas database; the results show that this approach is fast and accurate compared to most segmentation algorithms. Statistical parameters including accuracy, precision, sensitivity, specificity, dice, rfn, and rfp are calculated for the segmented image. The results show that this algorithm gives a high accuracy of 99.58% and an improved rfn value of 8.34% compared with methods discussed in the literature.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_49-Liver_Tumor_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Cancer Detection using Bio-Inspired Algorithm in CT Scans and Secure Data Transmission through IoT Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111148</link>
        <id>10.14569/IJACSA.2020.0111148</id>
        <doi>10.14569/IJACSA.2020.0111148</doi>
        <lastModDate>2020-12-01T09:57:24.2570000+00:00</lastModDate>
        
        <creator>C. Venkatesh</creator>
        
        <creator>Polaiah Bojja</creator>
        
        <subject>Pulmonary; mortality; carcinogenic; swarm intelligence; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Early recognition of pulmonary cancer nodules significantly increases the odds of survival, yet it remains a hard problem to solve, as it often relies on visual examination of tomography scans. By increasing the possibility of effective treatment, earlier tumor diagnosis decreases lung cancer mortality. Radiologists usually diagnose lung cancer on medical images through a systematic analysis that consumes considerable time and is often unreliable. Moreover, because of the substantial growth of data transmission in the healthcare sector, the protection and integrity of medical data have become a major problem for healthcare applications. This study utilizes computational intelligence techniques: a novel hybrid model for detection and data transmission is proposed. The proposed method involves two stages: in the first stage, diverse image processing procedures are used in MATLAB to detect cancer, and in the second stage, data are transferred to authorized persons via the IoT cloud. The simulated steps include pre-processing, segmentation by Otsu thresholding combined with a swarm intelligence algorithm, feature extraction by local binary patterns, and classification using a support vector machine (SVM). This work demonstrates the dominance of the swarm-intelligent framework over conventional algorithms in performance metrics such as sensitivity, accuracy, and specificity, as well as training time. The tests carried out show that the model can achieve up to 92.96 percent sensitivity, 93.53 percent accuracy, and 98.52 percent specificity.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_48-Lung_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Digital Space Vector PWM Module for 3-φ Voltage Source Inverter (VSI) on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111147</link>
        <id>10.14569/IJACSA.2020.0111147</id>
        <doi>10.14569/IJACSA.2020.0111147</doi>
        <lastModDate>2020-12-01T09:57:24.2400000+00:00</lastModDate>
        
        <creator>Shalini Vashishtha</creator>
        
        <creator>Rekha K.R</creator>
        
        <subject>Digital space vector PWM; 3-phase voltage source inverter; sector generation module; switching time generation; FPGA; Verilog-HDL; Xilinx</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The realization of PWM strategies with digital control circuitry provides many advantages, including better prototyping, higher switching frequency, simple hardware, and flexibility, overcoming the limitations of analog control strategies. In this article, a Digital Space Vector-based Pulse Width Modulation (DSV-PWM) module is designed. The DSV-PWM module mainly includes the Xdq reference frame, sector generation, square root, switching time generation, carry-save adder (CSA), and PWM generation modules. These modules are designed using simple logical operations and combinational circuits to improve DSV-PWM performance. The DSV-PWM module is synthesized and implemented on a cost-effective Artix-7 FPGA device. The present work utilizes less than 1% of the chip area, operates at a maximum frequency of 597.83 MHz, and consumes 110 mW of total power on the FPGA device. The DSV-PWM module is also compared with the existing SV-PWM approach, showing improvement in hardware constraints such as chip area, operating frequency, and dynamic power (mW).</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_47-An_Efficient_Digital_Space_Vector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Time-Based One Time Password Authentication Framework for Electronic Payments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111146</link>
        <id>10.14569/IJACSA.2020.0111146</id>
        <doi>10.14569/IJACSA.2020.0111146</doi>
        <lastModDate>2020-12-01T09:57:24.2100000+00:00</lastModDate>
        
        <creator>Md Arif Hassan</creator>
        
        <creator>Zarina Shukur</creator>
        
        <creator>Mohammad Kamrul Hasan</creator>
        
        <subject>Electronic payments; One Time Password (OTP); Quick Response (QR) code; Time based One Time Password (TOTP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>One-time passwords (OTPs) are important in the present-day scenario for improving the security of electronic payments. Security-sensitive environments and organizations protect resources from unauthorized access by employing access control mechanisms such as user authentication. However, there are several safety issues in OTP-based authentication: studies show that OTPs sent over SMS cause various problems, leading to lost time and delayed transactions. User authentication can be strengthened with additional levels through a multi-factor authentication scheme. Time-based One-time Passwords (TOTP) and biometrics are among the widely accepted mechanisms that incorporate multi-factor authentication. In this paper, we combine the TOTP authentication algorithm with biometric fingerprints to secure electronic payments. The algorithm uses a secret key exchanged between the client and the server and derives a one-time password from it. The usability of the TOTP approach is improved by displaying the key as a QR code, which the majority of mobile applications are able to read. It offers confidentiality at the application level to protect user credentials between the two entities (the user and the server), preventing brute-force and dictionary attacks. The proposed system design is also convenient for users, since there is no need to carry a hardware token or pay additional charges for the short message service. Our suggested approach has been found to improve security substantially compared to existing methods with regard to authentication and authorization. This research hopes to boost research effort on the further advancement of cryptosystems surrounding multi-factor authentication.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_46-An_Improved_Time_Based_One_Time_Password.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice-Disorder Identification of Laryngeal Cancer Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111145</link>
        <id>10.14569/IJACSA.2020.0111145</id>
        <doi>10.14569/IJACSA.2020.0111145</doi>
        <lastModDate>2020-12-01T09:57:24.1930000+00:00</lastModDate>
        
        <creator>G. B. Gour</creator>
        
        <creator>V.Udayashankara</creator>
        
        <creator>Dinesh K. Badakh</creator>
        
        <creator>Yogesh A Kulkarni</creator>
        
        <subject>Support Vector Machine (SVM); random forest; Mel Frequency Cepstral Coefficients (MFCC); voice disorder detection; laryngeal cancer; non-linear features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Previous studies have shown that much of the work on laryngeal cancer was carried out with a minimal set of linear features. Much of that work focused on larynx preservation and quality of life around radiotherapy or surgery, and the voice disorder databases used were not solely limited to laryngeal cancer. In this context, the paper proposes non-invasive voice disorder detection for laryngeal cancer patients. The sustained vowel /a/ was recorded for 55 laryngeal cancer cases and 55 healthy cases. Owing to the non-linearity of the vocal cords, seven non-linear parameters along with 39 biologically inspired Mel-Frequency Cepstral Coefficients (MFCC) are extracted, forming a laryngeal dataset of size 110×46. The wrapper method is used for better feature selection and to enhance the discriminating ability of the present work. Classification is carried out using a support vector machine (SVM) tuned with grid search, and a random forest (RF). The present work shows an improved accuracy of 76.56% with SVM and 80% with random forest. The forward selection of features, along with the inclusion of non-linear features, has played a significant role in the better performance of the present system.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_45-Voice_Disorder_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Progress of Blockchain Initiatives in Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111144</link>
        <id>10.14569/IJACSA.2020.0111144</id>
        <doi>10.14569/IJACSA.2020.0111144</doi>
        <lastModDate>2020-12-01T09:57:24.1770000+00:00</lastModDate>
        
        <creator>Faizura Haneem</creator>
        
        <creator>Hussin Abu Bakar</creator>
        
        <creator>Nazri Kama</creator>
        
        <creator>Nik Zalbiha Nik Mat</creator>
        
        <creator>Razatulshima Ghazali</creator>
        
        <creator>Yasir Mahmood</creator>
        
        <subject>Blockchain initiatives; governments; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Blockchain is a decentralized and distributed ledger technology that aims to ensure transparency, data security, and integrity. There is rising interest and investment by governments and industries in Blockchain to deliver significant cost savings and increase efficiency. Identifying Blockchain initiatives currently implemented by governments worldwide could improve understanding as well as set benchmarks for specific countries. However, although some review studies on Blockchain initiatives have been carried out, very few uncover Blockchain initiatives implemented by governments in Asian countries. Hence, this study reviews Blockchain initiatives in the top five e-government development index (EGDI) countries in Asia: South Korea, Singapore, Japan, the United Arab Emirates, and Cyprus. We strategized our review methods by utilizing relevant keyword searches across the existing literature, books, academic journals, conferences, and industrial reports. The results of this study will help other researchers and practitioners to recognize the current stage of Blockchain initiatives in the governments of Asian countries.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_44-Recent_Progress_of_Blockchain_Initiatives_in_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Process Level Social Media Business Value Configuration of SMEs in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111143</link>
        <id>10.14569/IJACSA.2020.0111143</id>
        <doi>10.14569/IJACSA.2020.0111143</doi>
        <lastModDate>2020-12-01T09:57:24.1470000+00:00</lastModDate>
        
        <creator>Anwar Shams Eldin</creator>
        
        <creator>Awadia Elnour</creator>
        
        <creator>Rugaia Hassan</creator>
        
        <subject>Interaction effects of social media and IT resources; process level; SMEs; social media business value; social media capabilities; management’s commitment to innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The key enabler of IT-based strategic design is process-level value; however, few researchers have tackled the mechanisms through which small and medium-sized enterprises (SMEs) can create value at the process level. This study sheds light on the mechanism of creating social media business value at the process level by identifying the interaction effects of social media and IT resources and the mediating role of management’s commitment to innovation as an organizational factor. The research model is based on the IT business value approach; a quantitative, descriptive methodology is adopted, and the data are analyzed using structural equation modeling. The findings, based on 301 SMEs in the Kingdom of Saudi Arabia, show that management’s commitment to innovation is a necessary condition for social media resources to create dynamic capabilities, and that the interaction effects between social media resources and IT resources on social media capability have no impact on the value-generation process at the process level. The results improve the understanding of the theoretical implications of social media business value at the process level, which can be used to guide theorizing about IT business value. SME managers, IT designers, and national decision-makers can use the findings to gain strategic advantage through social media platforms.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_43-Process_Level_Social_Media_Business_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Verse Algorithm based Approach for Multi-criteria Path Planning of Unmanned Aerial Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111142</link>
        <id>10.14569/IJACSA.2020.0111142</id>
        <doi>10.14569/IJACSA.2020.0111142</doi>
        <lastModDate>2020-12-01T09:57:24.1300000+00:00</lastModDate>
        
        <creator>Raja Jarray</creator>
        
        <creator>Soufiene Bouallegue</creator>
        
        <subject>Unmanned aerial vehicles; path planning problem; multiobjective optimization; multiobjective multi-verse algorithm; decision-making model; nonparametric statistical tests</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper, a method based on the Multiobjective Multi-Verse Optimizer (MOMVO) is proposed and successfully implemented to solve the unmanned aerial vehicles’ path planning problem. The generation of each coordinate of the aircraft is reformulated as a multiobjective optimization problem under operational constraints; the solution of this hard optimization problem is the shortest and smoothest path that avoids all obstacles and threats. A set of competitive metaheuristics, such as the Multiobjective Salp Swarm Algorithm (MSSA), Multiobjective Grey Wolf Optimizer (MOGWO), Multiobjective Particle Swarm Optimization (MOPSO), and Non-dominated Sorting Genetic Algorithm II (NSGA-II), are retained as comparison tools for the problem’s resolution. To assess the performance of the reported algorithms and conclude on their effectiveness, an empirical study is first performed by solving different multiobjective test functions from the literature. These algorithms are then used to obtain a set of Pareto-optimal solutions for the multi-criteria path planning problem. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), an efficient Multi-Criteria Decision-Making (MCDM) model, is investigated to find the best solution among the non-dominated ones. Demonstrative results and statistical analysis are presented and compared in order to show the effectiveness of the proposed MOMVO-based path planning technique.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_42-Multi_Verse_Algorithm_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Steganographic on Digital Evidence using General Computer Forensic Investigation Model Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111141</link>
        <id>10.14569/IJACSA.2020.0111141</id>
        <doi>10.14569/IJACSA.2020.0111141</doi>
        <lastModDate>2020-12-01T09:57:24.1000000+00:00</lastModDate>
        
        <creator>Muh. Hajar Akbar</creator>
        
        <creator>Sunardi</creator>
        
        <creator>Imam Riadi</creator>
        
        <subject>Steganography; anti forensics; general computer forensic investigation model; hiderman</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Steganography is one of the anti-forensic techniques used by criminals to hide information in other messages, which can cause problems in the investigation process and difficulties in obtaining original evidence in digital crimes. Digital forensic analysts are required to have the ability to find and extract messages that have been inserted, using proper tools. The purpose of this research is to analyze hidden digital evidence produced with steganography techniques. This research uses the static forensics method by applying the five stages of the Generic Forensics Investigation Model framework, namely pre-process, acquisition &amp; preservation, analysis, presentation, and post-process, as well as extracting files that have been infiltrated, based on case scenarios involving digital crime. The tools used are FTK Imager, Autopsy, WinHex, Hiderman, and StegSpy. The results of the steganographic file insertion experiment on 20 files indicate that StegSpy and Hiderman are effective for the steganographic analysis of digital evidence. StegSpy can detect the presence of secret messages with an 85% success rate, and the extraction process using Hiderman for the 18 files containing steganographic messages was 100% successful.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_41-Analysis_of_Steganographic_on_Digital_Evidence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Students’ Computational Thinking Skills on Matter Module</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111140</link>
        <id>10.14569/IJACSA.2020.0111140</id>
        <doi>10.14569/IJACSA.2020.0111140</doi>
        <lastModDate>2020-12-01T09:57:24.0830000+00:00</lastModDate>
        
        <creator>Noraini Lapawi</creator>
        
        <creator>Hazrati Husnin</creator>
        
        <subject>Computational thinking skills; problem solving skill; teaching and learning; decomposition; evaluation; algorithm; science module; matter; secondary level students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The fourth industrial revolution has impacted most aspects of our lives and demands a paradigm shift, including in education. It has come to our attention that there is a need to inculcate complex problem-solving skills among youth to equip them to face the challenges of the era of digital technology. To fulfill this need, computational thinking was introduced into the school curriculum in Malaysia in 2017. It is still rather new, and this creates an opportunity to understand how computational thinking can best be integrated into teaching and learning. In this study, we developed a module for a science topic, Matter, and examined its impact on the computational thinking skills of 65 students at the secondary level. The computational thinking skills integrated in this study were abstraction, decomposition, algorithm, generalization, and evaluation. A quasi-experimental method was employed, and the ANCOVA result showed that there was no significant difference between the control and treatment groups in computational thinking skills. However, the mean scores for each of the computational thinking skills in both groups showed that three skills were higher in the treatment group than in the control group: decomposition, evaluation, and algorithm. This study suggests that computational thinking involves mental processes and that proper planning is crucial to integrating computational thinking skills, as teaching and learning are very contextual in nature.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_40-Investigating_Students_Computational_Thinking_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level of Budget Execution According to the Professional Profile of Regional Governors Applying Machine Learning Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111139</link>
        <id>10.14569/IJACSA.2020.0111139</id>
        <doi>10.14569/IJACSA.2020.0111139</doi>
        <lastModDate>2020-12-01T09:57:24.0530000+00:00</lastModDate>
        
        <creator>Jose Luis Morales Rocha</creator>
        
        <creator>Mario Aurelio Coyla Zela</creator>
        
        <creator>Nakaday Irazema Vargas Torres</creator>
        
        <creator>Jarol Teofilo Ramos Rojas</creator>
        
        <creator>Daniel Quispe Mamani</creator>
        
        <creator>Jose Oscar Huanca Frias</creator>
        
        <subject>Machine learning; multiple regression; professional experience; university studies; public budget; governor; public spending</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Machine learning is a discipline of artificial intelligence that implements computer systems capable of learning complex patterns automatically and predicting future behaviors. The objective was to implement a machine learning model that makes it possible to identify, classify, and predict the influence of the governors’ professional training on the execution of public spending by the regional governments of Peru. Of the 14 indicators of academic training, professional experience and university studies were selected as the significant indicators that contribute to the execution of public spending by the 25 governors of Peru. For predicting the execution of public spending by the regional governors, a supervised learning algorithm was implemented. The mean square error for the machine learning regression model was 4.20 and the coefficient of determination was 0.726, which indicates that 72.6% of the execution of public spending by regional governments is explained by the professional experience and university studies of the governors. The regional governors of Peru with university studies and professional experience achieve better results in the execution of public spending.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_39-Level_of_Budget_Execution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Heuristic Method to Minimize Makespan and Flow Time in a Flow Shop Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111138</link>
        <id>10.14569/IJACSA.2020.0111138</id>
        <doi>10.14569/IJACSA.2020.0111138</doi>
        <lastModDate>2020-12-01T09:57:24.0370000+00:00</lastModDate>
        
        <creator>Miguel Fernandez</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Flow shop problem; multi-objective optimization; non-dominated solution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper, a heuristic method for solving the multi-objective flow shop problem is presented. The work considers the simultaneous optimization of the makespan and the flow time; both objectives are essential in measuring the production system&#39;s performance, since they aim to reduce job completion times, increase resource efficiency, and reduce waiting time in queues. The proposed method is an adaptation of the multi-objective Newton&#39;s method, which is applied to problems with functions of continuous variables. In this adaptation, the method seeks to improve a sequence of jobs through recursive local searches. The computational experiments show the potential of the proposed method to solve medium-sized and large instances compared with other existing methods in the literature.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_38-An_Effective_Heuristic_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid KNN Classification Approach based on Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111137</link>
        <id>10.14569/IJACSA.2020.0111137</id>
        <doi>10.14569/IJACSA.2020.0111137</doi>
        <lastModDate>2020-12-01T09:57:24.0070000+00:00</lastModDate>
        
        <creator>Reem Kadry</creator>
        
        <creator>Osama Ismael</creator>
        
        <subject>K-nearest neighbour; outlier; multidimensional; particle swarm optimization; scored k-nearest neighbour</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The K-Nearest Neighbour algorithm is widely used as a classification technique due to the simplicity of applying it to different types of data. The presence of multidimensional data and outliers has a great effect on the accuracy of the K-Nearest Neighbour algorithm. In this paper, a new hybrid approach called Particle Optimized Scored K-Nearest Neighbour is proposed in order to improve the performance of K-Nearest Neighbour. The new approach is implemented in two phases: the first phase handles multidimensional data by performing feature selection with the Particle Swarm Optimization algorithm; the second phase handles the presence of outliers by taking the result of the first phase and applying to it a newly proposed scored K-Nearest Neighbour technique. This approach was applied to the Soybean dataset using 10-fold cross-validation. The experimental results show that the proposed approach achieves better results than the K-Nearest Neighbour algorithm and its modified variants.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_37-A_New_Hybrid_KNN_Classification_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Model for Connected Vehicles Safety and Security using Big Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111136</link>
        <id>10.14569/IJACSA.2020.0111136</id>
        <doi>10.14569/IJACSA.2020.0111136</doi>
        <lastModDate>2020-12-01T09:57:23.9900000+00:00</lastModDate>
        
        <creator>Noor Afiza Mat Razali</creator>
        
        <creator>Nuraini Shamsaimon</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Khairul Khalil Ishak</creator>
        
        <subject>Connected vehicles; safety and security monitoring; collision prediction; congestion prediction; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The capability of Connected Vehicles (CVs) to connect to nearby vehicles, surrounding infrastructure, and cyberspace presents a high risk to the safety and security of the CV and others. The volume of data generated by the sensors and infrastructure in the CV environment is enormous. Thus, CV implementations require real-time big data processing and analytics to detect any anomaly in the CV environment, which comprises the physical layer, network layer, and application layer. CVs are exposed to various vulnerabilities associated with exploitation or malfunction of the components in each layer, which could result in various safety and security events such as congestion and collision. These safety and security risks add an extra layer of required protection for CV implementations that needs to be studied and refined. To address this gap, this research aims to determine the basic components of safety and security for CV implementations and to propose a conceptual model for safety and security in CVs by applying machine learning and deep learning techniques. The proposed model is highly correlated with safety and security and could be applied to congestion and collision prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_36-Conceptual_Model_for_Connected_Vehicles_Safety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimize the Combination of Categorical Variable Encoding and Deep Learning Technique for the Problem of Prediction of Vietnamese Student Academic Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111135</link>
        <id>10.14569/IJACSA.2020.0111135</id>
        <doi>10.14569/IJACSA.2020.0111135</doi>
        <lastModDate>2020-12-01T09:57:23.9600000+00:00</lastModDate>
        
        <creator>Do Thi Thu Hien</creator>
        
        <creator>Cu Thi Thu Thuy</creator>
        
        <creator>Tran Kim Anh</creator>
        
        <creator>Dao The Son</creator>
        
        <creator>Cu Nguyen Giap</creator>
        
        <subject>Deep learning technique; categorical data type; “learned” embedding encoding; student academic performance prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Deep learning techniques have been successfully applied in many technical fields such as computer vision and natural language processing, and recently researchers have paid much attention to applying this technology to socio-economic problems, including the student academic performance prediction (SAPP) problem. In this context, this study focuses on both designing an appropriate deep learning model and handling categorical input variables. In fact, categorical variables are quite common in the SAPP problem, whereas deep learning techniques in particular, and artificial neural networks in general, only work well with numerical variables. Therefore, this study investigates the performance of combining categorical encoding methods, including label encoding, one-hot encoding and “learned” embedding encoding, with deep learning techniques, including deep dense neural networks and long short-term memory neural networks, for the SAPP problem. In the experiments, the proposed models were compared with each other and with prediction methods based on other machine learning algorithms. The results showed that transforming categorical data with “learned” embedding encoding improved the performance of the deep learning models, and that its combination with the long short-term memory network gave an outstanding result for the researched problem.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_35-Optimize_the_Combination_of_Categorical_Variable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pilot Study of an Instrument to Assess Undergraduates’ Computational Thinking Proficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111134</link>
        <id>10.14569/IJACSA.2020.0111134</id>
        <doi>10.14569/IJACSA.2020.0111134</doi>
        <lastModDate>2020-12-01T09:57:23.9270000+00:00</lastModDate>
        
        <creator>Debby Erce Sondakh</creator>
        
        <creator>Kamisah Osman</creator>
        
        <creator>Suhaila Zainudin</creator>
        
        <subject>Computational thinking; assessment; skills; attitudes; undergraduates; self-assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The potential of computational thinking (CT) in problem solving has gained much attention in academic communities. This study aimed to develop and validate an instrument, called Hi-ACT, to assess the CT ability of university undergraduates. The Hi-ACT evaluates both technical and soft skills applicable to CT-based problem solving. This paper reports a pilot study conducted to test and refine the initial Hi-ACT. A survey method was employed in which a questionnaire comprising 155 items was piloted among 548 university undergraduates. Structural equation modeling with partial least squares was applied to examine the Hi-ACT’s reliability and validity. Composite reliability was used to assess internal consistency reliability, while convergent validity was evaluated based on items’ outer loadings and constructs’ average variance extracted. As a result, 41 items were excluded, and an instrument to assess CT ability comprising 114 items and ten constructs (abstraction, algorithmic thinking, decomposition, debugging, generalization, evaluation, problem solving, teamwork, communication, and spiritual intelligence) was developed. The reliability and validity of the Hi-ACT in its pilot form have been verified.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_34-A_Pilot_Study_of_an_Instrument_to_Assess_Undergraduates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation Analysis of Scalable Vector Graphics (SVG) File Upload using Magic Number and Document Object Model (DOM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111133</link>
        <id>10.14569/IJACSA.2020.0111133</id>
        <doi>10.14569/IJACSA.2020.0111133</doi>
        <lastModDate>2020-12-01T09:57:23.9130000+00:00</lastModDate>
        
        <creator>Fahmi Anwar</creator>
        
        <creator>Abdul Fadlil</creator>
        
        <creator>Imam Riadi</creator>
        
        <subject>Magic number; Scalable Vector Graphics (SVG); security; upload; validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The use of technology, such as applications and services connected to the Internet, is increasing rapidly. Security is necessary because of the growing use of digital systems, and with the rising number of threats to digital assets and servers, the risk of attacks through the file upload feature must be handled. A website or server usually processes the file upload feature with server-side (back-end) validation or filtering of file types, or with client-side (front-end) validation in the web browser using HTML or JavaScript. Filtering techniques for Scalable Vector Graphics (SVG) files usually only check the file extension or Multipurpose Internet Mail Extension (MIME) type of an uploaded file. However, this filtering can still be manipulated, for example in ASCII prefix checking, which accepts two possible prefixes, namely &quot;&lt;?xml” and “&lt;svg ”. SVG files do not contain metadata such as that encoded in JPEG or PNG files. This problem can be overcome by adding filtering techniques that validate a file as eXtensible Markup Language (XML) using magic numbers and the Document Object Model (DOM). This research was developed using the waterfall method and black-box security testing, a software security testing method in which security controls, defenses, and application design are tested. Handling security validation of uploaded SVG files using file extensions and MIME types achieved a success rate of 75 percent across the eight tested scenarios, while handling using file extensions, magic numbers, and the Document Object Model (DOM) achieved a success rate of 100 percent across the same eight scenarios. Under black-box testing, handling using the file extension, magic number, and Document Object Model (DOM) is therefore better than using only file extensions and MIME types.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_33-Validation_Analysis_of_Scalable_Vector_Graphics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Design Study to Improve User Experience of a Procedure Booking Software in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111132</link>
        <id>10.14569/IJACSA.2020.0111132</id>
        <doi>10.14569/IJACSA.2020.0111132</doi>
        <lastModDate>2020-12-01T09:57:23.8030000+00:00</lastModDate>
        
        <creator>Hanaa Abdulkareem Alzahrani</creator>
        
        <creator>Reem Abdulaziz Alnanih</creator>
        
        <subject>Procedure booking software; health systems design tool; cardiac catheterization; user experience; usability evaluation; system usability scale</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In the era of technology-driven healthcare delivery and the proliferation of e-health systems, procedure booking software (PBS) is becoming common. PBS affects healthcare delivery by improving healthcare efficiency and outcomes while cutting costs. Poor software design for PBS, especially when it is designed for important and critical appointments such as cardiac catheterization operations, creates stress for physicians and may result in their rejection of the technology. Moreover, if the system design forces them to spend more time documenting health information, physicians tend to prefer face-to-face interaction with patients. Software with poor usability increases the workload of physicians, thus reducing system efficiency. Designing a useful and effective web user interface for such software is therefore an essential requirement for health websites. The aim of this paper is to design and develop a PBS as a case study using the health systems design (HSD) tool. HSD is a validated design tool for creating PBS based on physician behavior and personas. The applicability of the PBS design was evaluated by physicians in terms of objective and subjective characteristics and user experience attributes. Test participants were divided into two groups: specialists and fellows. The results show that there was no significant difference between participants in either group. All were able to complete the tasks successfully with a minimal amount of time, clicks, and errors, indicating that effectiveness, efficiency and cognitive load were similar for all participants. User satisfaction yielded a score of 86 on the System Usability Scale (SUS), an A grade. The user experience attributes also demonstrated that participants were satisfied with the proposed design.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_32-A_Design_Study_to_Improve_user_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harmonic Mean based Classification of Images using Weighted Nearest Neighbor for Tagging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111131</link>
        <id>10.14569/IJACSA.2020.0111131</id>
        <doi>10.14569/IJACSA.2020.0111131</doi>
        <lastModDate>2020-12-01T09:57:19.8970000+00:00</lastModDate>
        
        <creator>Anupama D. Dondekar</creator>
        
        <creator>Balwant A. Sonkamble</creator>
        
        <subject>Image classification; k-nearest neighbor; weighted nearest neighbor; harmonic mean vector; color and texture features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>On image sharing websites, images are associated with tags. These tags play a very important role in image retrieval systems, so it is necessary to recommend accurate tags for images. It is also important to design and develop an effective classifier that classifies images into various semantic categories, which is a necessary step towards tag recommendation. The performance of existing tag recommendation methods based on k-nearest neighbors can be affected by the number of neighbors k, the distance measure, majority voting irrespective of class, and outliers present among the k neighbors. To increase classification accuracy and to overcome these issues, the Harmonic Mean based Weighted Nearest Neighbor (HM-WNN) classifier is proposed for the classification of images. Given an input image, HM-WNN determines the k nearest neighbors from each category for color and texture features separately over the entire training set. Weights are assigned to the closest neighbors from each category so that reliable neighbors contribute more to classification accuracy. Finally, the categorical harmonic means of the k nearest neighbors are determined, and the input image is classified into the category with the minimum mean. Experiments were conducted on a self-generated dataset. The results show that HM-WNN achieves 88.01% accuracy, compared with existing k-nearest neighbor methods.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_31-Harmonic_Mean_Based_Classification_of_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Geometrical Scale and Rotation Independent Feature Extraction Technique for Multi-lingual Character Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111130</link>
        <id>10.14569/IJACSA.2020.0111130</id>
        <doi>10.14569/IJACSA.2020.0111130</doi>
        <lastModDate>2020-12-01T09:57:19.8670000+00:00</lastModDate>
        
        <creator>Narasimha Reddy Soora</creator>
        
        <creator>Ehsan Ur Rahman Mohammed</creator>
        
        <creator>Sharfuddin Waseem Mohammed</creator>
        
        <subject>Feature extraction; character recognition; crossing count features; edit distance; scale and rotation independent feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This paper presents a novel geometrical scale- and rotation-independent feature extraction (FE) technique for multi-lingual character recognition (CR). The performance of any CR technique mainly depends on the robustness of its FE method. Currently, very few scale- and rotation-independent FE techniques in the literature successfully extract robust features from characters with noise such as distortion and breaks in the characters. Many FE methods from the literature also fail to distinguish characters that look similar in appearance. In this paper, we therefore propose a novel scale- and rotation-independent geometrical shape FE technique that successfully recognizes distorted, broken, and similar-looking characters. Aside from the proposed FE technique, we use crossing count (CC) features, and we combine the proposed features with the CC features to form the Feature Vector (FV) of the character to be recognized. The proposed CR technique is evaluated using the publicly available media-lab license plate (LP), ISI_Bengali, and Chars74K benchmark data sets and achieves encouraging results. To further assess the performance of the proposed FE method, we use a proprietary data set containing nearly 168000 multi-lingual characters from the English, Devanagari, and Marathi scripts, again with encouraging results. We observed better classification rates for the proposed FE method on the publicly available benchmark data sets compared to several of the CR FE methods from the literature.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_30-A_Novel_Geometrical_Scale_and_Rotation_Independent_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Interdependencies for the Prioritization and Reprioritization of Requirements in Incremental Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111129</link>
        <id>10.14569/IJACSA.2020.0111129</id>
        <doi>10.14569/IJACSA.2020.0111129</doi>
        <lastModDate>2020-12-01T09:57:19.8330000+00:00</lastModDate>
        
        <creator>Aryaf Al-Adwan</creator>
        
        <creator>Anaam Aladwan</creator>
        
        <subject>Requirement engineering; incremental model; requirement prioritization; requirement interdependencies; dependency graph; queuing theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>There is a growing trend to develop and deliver software incrementally, to achieve greater consistency in the developed software and better customer satisfaction during the requirement engineering process. Some of the increments developed in the incremental model are delivered to consumers and run in their environments: a set of requirements is evaluated, implemented, and delivered as the first increment, another set is delivered next, and so on for each subsequent increment. The priority of requirements plays an important role in each increment, but it is complicated by the interdependencies between requirements and by resource constraints. Therefore, this paper introduces a model for requirement prioritization and reprioritization based on these important factors. The first is requirement interdependencies, which are described as a hybrid approach combining a traceability list and a directed acyclic graph; the second is the constraints on requirement resources, which are handled using queuing theory for requirement reprioritization. To achieve this, two algorithms, namely Priority Dependency Graph (PDG) and Resources Constraints Reprioritization (RCR), were proposed with linear time complexity and implemented via a case study.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_29-Using_Interdependencies_for_the_Prioritization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SDCT: Multi-Dialects Corpus Classification for Saudi Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111128</link>
        <id>10.14569/IJACSA.2020.0111128</id>
        <doi>10.14569/IJACSA.2020.0111128</doi>
        <lastModDate>2020-12-01T09:57:19.8200000+00:00</lastModDate>
        
        <creator>Afnan Bayazed</creator>
        
        <creator>Ola Torabah</creator>
        
        <creator>Redha AlSulami</creator>
        
        <creator>Dimah Alahmadi</creator>
        
        <creator>Amal Babour</creator>
        
        <creator>Kawther Saeedi</creator>
        
        <subject>Arabic dialects; dialects classification; language classification; natural language processing; Saudi dialects; sentiment analysis; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>There is an increasing demand for analyzing the contents of social media. However, sentiment analysis in the Arabic language, especially for Arabic dialects, can be very complex and challenging. This paper presents the details of collecting and constructing a classified corpus of 4180 multi-dialectal Saudi tweets (SDCT). The tweets were annotated manually by five native speakers in two stages. The first stage annotated the tweets as Hijazi, Najdi, or Eastern, based on Saudi regions. The second stage annotated the sentiment as positive, negative, or neutral. The annotation process was evaluated using the Kappa score. The validation process used cross-validation through eight baseline experiments for training different classifier models. The results show that 10-fold validation provides greater accuracy than 5-fold across the eight experiments, and that the classification of the Eastern dialect achieved the best accuracy compared to the other dialects, at 91.48%.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_28-SDCT_Multi_Dialects_Corpus_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Organizing Map based Wallboards to Interpret Sudden Call Hikes in Contact Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111127</link>
        <id>10.14569/IJACSA.2020.0111127</id>
        <doi>10.14569/IJACSA.2020.0111127</doi>
        <lastModDate>2020-12-01T09:57:19.7870000+00:00</lastModDate>
        
        <creator>Samaranayaka J. R. A. C. P</creator>
        
        <creator>Prasad Wimalaratne</creator>
        
        <subject>Multidimensional data; visualization; contact centers; self-organizing map; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In a contact center, it is necessary to foresee and investigate any disturbance to the daily call pattern. An abnormal call pattern may result from a sudden change in the organization’s external environment, and waiting for a methodical analysis before meeting customers’ demand may introduce delays for queuing customers. A fast and reliable method is therefore required to predict and explain any such unwanted event. It is not possible to draw conclusions from a single dimension such as the total call count, which may increase in the same way due to a failure in any service. This research mainly focuses on explaining multidimensional events based on historical records. In contrast to traditional wallboards, our approach is capable of clustering and predicting disturbances to normal call patterns based on historical knowledge, by considering many dimensions such as the queue statistics of many service queues. Our approach showed improved results over traditional wallboards equipped with 2D or 3D graphs.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_27-Self_Organizing_Map_Based_Wallboards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Parameter Estimation Method for Chinese Hamster Ovary Model using Black Widow Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111126</link>
        <id>10.14569/IJACSA.2020.0111126</id>
        <doi>10.14569/IJACSA.2020.0111126</doi>
        <lastModDate>2020-12-01T09:57:19.4430000+00:00</lastModDate>
        
        <creator>Nurul Aimi Munirah</creator>
        
        <creator>Muhammad Akmal Remli</creator>
        
        <creator>Noorlin Mohd Ali</creator>
        
        <creator>Hui Wen Nies</creator>
        
        <creator>Mohd Saberi Mohamad</creator>
        
        <creator>Khairul Nizar Syazwan Wan Salihin Wong</creator>
        
        <subject>Chinese Hamster Ovary; Black Widow optimization; metaheuristic; parameter estimation; genetic study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Chinese Hamster Ovary (CHO) cells are widely used in biological and medical research, especially in the protein production industry, because their low chromosome number makes them suitable for genetic study. However, experimental data tend to be noisy and poorly fitted, which is why many parameter estimation methods have been developed to determine the best value for a particular parameter. Metaheuristic parameter estimation is an algorithmic framework that helps the researcher obtain a fitted model, correct the data, and estimate values based on the data&#39;s behaviour. The process starts by formulating the parameter estimation problem from a combination of mathematical models and the data obtained from the researcher&#39;s experiments; in this way, cell culture work in biomedical research can benefit from metaheuristic parameter estimation. A kinetic model can be fitted to the data obtained from Chinese Hamster Ovary (CHO) cells. Therefore, this paper proposes the Black Widow Optimization (BWO) algorithm, inspired by the bizarre mating behaviour of the spider, as the method to solve this problem. The proposed algorithm was compared with three other well-known algorithms, namely Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Bees Optimization Algorithm (BOA). The results showed that the proposed algorithm obtained a better best-cost value, despite taking a longer time to run.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_26-The_Development_of_Parameter_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Intelligent Personality Prediction using Myers-Briggs Type Indicator and Random Forest Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111125</link>
        <id>10.14569/IJACSA.2020.0111125</id>
        <doi>10.14569/IJACSA.2020.0111125</doi>
        <lastModDate>2020-12-01T09:57:19.3970000+00:00</lastModDate>
        
        <creator>Nur Haziqah Zainal Abidin</creator>
        
        <creator>Muhammad Akmal Remli</creator>
        
        <creator>Noorlin Mohd Ali</creator>
        
        <creator>Danakorn Nincarean Eh Phon</creator>
        
        <creator>Nooraini Yusoff</creator>
        
        <creator>Hasyiya Karimah Adli</creator>
        
        <creator>Abdelsalam H Busalim</creator>
        
        <subject>Machine learning; random forest; Myers–Briggs Type Indicator&#174; (MBTI); personality prediction; random forest classifier; social media; Twitter user</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The term “personality” can be defined as the mixture of features and qualities that forms an individual&#39;s distinctive character, including thinking, feeling and behaviour. Nowadays, it is hard to select the right employees from the vast pool of candidates. Traditionally, a company arranges interview sessions with prospective candidates to learn their personalities. However, this procedure often demands extra time because there are far fewer interviewers than job seekers. As technology has evolved rapidly, personality computing has become a popular research field that provides personalisation to users, and researchers have utilised social media data for automatic personality prediction. However, mining social media data is complex, as the data are noisy and come in various formats and lengths. This paper proposes a machine learning technique using a Random Forest classifier to automatically predict people&#39;s personality based on the Myers–Briggs Type Indicator&#174; (MBTI). The performance of the proposed method was compared with other popular machine learning algorithms. Experimental evaluation demonstrates that the Random Forest classifier performs better than the three other machine learning algorithms in terms of accuracy, and is thus capable of assisting employers in identifying personality types when selecting suitable candidates.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_25-Improving_Intelligent_Personality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ITTP-PG: A Novel Grouping Technique to Enhance VoIP Service Bandwidth Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111124</link>
        <id>10.14569/IJACSA.2020.0111124</id>
        <doi>10.14569/IJACSA.2020.0111124</doi>
        <lastModDate>2020-12-01T09:57:19.1930000+00:00</lastModDate>
        
        <creator>Mayy Al-Tahrawi</creator>
        
        <creator>Mosleh Abulhaj</creator>
        
        <creator>Yousef Alrabanah</creator>
        
        <creator>Sumaya N. Al-Khatib</creator>
        
        <subject>Voice over Internet Protocol (VoIP); Internet Telephony Transport Protocol (ITTP); packet grouping; network bandwidth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Recently, the field of telecommunications has started to migrate to the Voice over Internet Protocol (VoIP) service. VoIP applications produce packets with short payloads to reduce packetization delay; this, however, increases the relative preamble overhead and wastes network link bandwidth. Packet grouping is a technique for improving the utilization of network link bandwidth, and numerous grouping techniques have been suggested to enhance link bandwidth utilization when using the RTP/UDP protocols. Unlike previous research, this article suggests a packet grouping technique that works over the Internet Telephony Transport Protocol (ITTP) rather than RTP/UDP. This technique is called ITTP Packet Grouping (ITTP-PG). The ITTP-PG technique groups VoIP packets that traverse the same route into a single ITTP/IP preamble instead of attaching an ITTP/IP preamble to each packet. Consequently, the preamble size is diminished and network link bandwidth is saved. ITTP-PG also adds a 3-byte runt-preamble to each packet to distinguish the grouped packets. The suggested ITTP-PG technique was simulated and compared with the conventional ITTP protocol (without grouping) using three metrics, namely the number of concurrent VoIP calls, the preamble overhead, and bandwidth usage. On all these metrics, the ITTP-PG technique outperforms the conventional ITTP protocol; for example, the results show that bandwidth usage improved by up to 45.9% in the tested cases.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_24-ITTP_PG_A_Novel_Grouping_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extreme Learning Machine Model Approach on Airbnb Base Price Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111123</link>
        <id>10.14569/IJACSA.2020.0111123</id>
        <doi>10.14569/IJACSA.2020.0111123</doi>
        <lastModDate>2020-12-01T09:57:19.1630000+00:00</lastModDate>
        
        <creator>Fikri Nurqahhari Priambodo</creator>
        
        <creator>Agus Sihabuddin</creator>
        
        <subject>Airbnb; base price prediction; extreme learning machine; fast learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Prediction of the base price of Airbnb properties is still a new area of prediction research, especially with the Extreme Learning Machine (ELM). Previous studies have suggested several advantages of ELM, such as good generalization performance, fast learning speed, and high prediction accuracy. This paper proposes an ELM approach as a prediction model for the Airbnb base price. Generally, the steps are: setting the number of hidden neurons, randomly assigning input weights and hidden layer biases, and calculating the output layer; the entire learning process is completed through one numerical computation without iteration. The performance of the model is estimated using mean squared error, mean absolute percentage error, and root mean squared error. Experiments on an Airbnb dataset for London with twenty-one input features show a faster learning speed and better accuracy than the existing model.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_23-An_Extreme_Learning_Machine_Model_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Impact of Traffic Parameters in Adaptive Signal Control through Microscopic Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111122</link>
        <id>10.14569/IJACSA.2020.0111122</id>
        <doi>10.14569/IJACSA.2020.0111122</doi>
        <lastModDate>2020-12-01T09:57:18.9770000+00:00</lastModDate>
        
        <creator>Fatin Ayuni Bt Aminzal</creator>
        
        <creator>Munzilah Binti Md Rohani</creator>
        
        <subject>Adaptive signal control; optimal cycle length; saturation flow rate; lost time; microsimulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This paper aims to exploit the traffic parameter settings in adaptive traffic control. The adaptive control studied here is known as the Dynamic Timing Optimiser (DTO). DTO is an online algorithm that uses real-time optimisation to estimate cycle length according to the fluctuating arrival flow registered by the detector. DTO cycle time estimation also incorporates preset parameters, including saturation flow rate (s) and lost time (L). However, these traffic flow parameters are commonly input as a single deterministic value adopted for the whole day. For example, a presumed constant saturation flow rate (s) does not accurately represent an actual oversaturated condition, and employing an inaccurate saturation flow rate (s) leads to underestimation of the cycle length. Therefore, a set of parameter values, encompassing the default value and an adjusted value reflecting the heaviest traffic condition, is applied and tested through microscopic simulation. The outcomes are measures of intersection performance in terms of intersection delay, travel time, and throughput. According to the simulation results, the saturation flow rate (s) parameter shows a greater influence on cycle length optimisation than the lost time (L) parameter. Employing a realistic saturation flow rate (s) when inputting parameters into DTO according to real traffic conditions contributes to less intersection delay. In addition, the study revealed that the longer the lost time (L) configured in the signal system, the longer the cycle length generated by the DTO algorithm. As predicted, high delay occurs during long cycle lengths, yet they are beneficial in allowing a higher throughput.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_22-Measuring_Impact_of_Traffic_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Presenting and Evaluating Scaled Extreme Programming Process Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111121</link>
        <id>10.14569/IJACSA.2020.0111121</id>
        <doi>10.14569/IJACSA.2020.0111121</doi>
        <lastModDate>2020-12-01T09:57:18.7270000+00:00</lastModDate>
        
        <creator>Muhammad Ibrahim</creator>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Ahmed Iqbal</creator>
        
        <creator>Bilal Shoaib Khan</creator>
        
        <creator>Muhammad Iqbal</creator>
        
        <creator>Baha Najim Salman Ihnaini</creator>
        
        <creator>Nouh Sabri Elmitwally</creator>
        
        <subject>Extreme Programming Process Model; XP; modified XP; scaled XP; customized XP; empirical comparison; empirical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Extreme programming (XP) is one of the most widely used software process models from the agile family for the development of small-scale projects. XP is widely accepted by the software industry due to the various features it provides, such as handling frequently changing requirements, customer satisfaction, rapid feedback, an iterative structure, team collaboration, and small releases. On the other hand, XP also has some drawbacks, including less documentation, less focus on design, and poor architecture. Due to these limitations, XP is only suitable for small-scale projects and does not work well for medium- and large-scale projects. To resolve this issue, many researchers have proposed customized versions of XP, particularly for medium- and large-scale projects. The real issue arises when XP is selected for the development of a small-scale, low-risk project but, gradually, due to requirement changes, the scope of the project grows from small scale to medium or large scale. At that stage, the structure and practices that work well for a small project cannot handle the extended scope. To resolve this issue, this paper contributes by proposing a scaled version of the XP process model called SXP. The proposed model can effectively handle such situations and can be used for small-scale as well as medium- and large-scale projects with the same efficiency. Furthermore, this paper also evaluates the proposed model empirically in order to reflect its effectiveness and efficiency. A small-scale client-oriented project is developed using the proposed SXP, and empirical results are collected. For an effective evaluation, the collected results are compared with a published case study of the XP process model. Detailed empirical analysis shows that the proposed SXP performed well compared to traditional XP.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_21-Presenting_and_Evaluating_Scaled_Extreme_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Learning for Rainfall Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111120</link>
        <id>10.14569/IJACSA.2020.0111120</id>
        <doi>10.14569/IJACSA.2020.0111120</doi>
        <lastModDate>2020-12-01T09:57:18.6800000+00:00</lastModDate>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Afzan Adam</creator>
        
        <creator>Israa Shlash</creator>
        
        <creator>Mohd Aliff</creator>
        
        <subject>Ensemble learning; classification; rainfall prediction; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Climate change research is a discipline that analyses varying weather patterns over a particular period of time. Rainfall forecasting is the task of predicting a particular future rainfall amount based on measured information from the past, including wind, humidity, temperature, and so on. Rainfall forecasting has recently been the subject of several machine learning (ML) techniques with differing degrees of both short-term and long-term prediction performance. Although several ML methods have been suggested to improve rainfall forecasting, the task of selecting an appropriate technique for specific rainfall durations is still not clearly defined. Therefore, this study proposes ensemble learning to improve the effectiveness of rainfall prediction. Ensemble learning is an approach that combines multiple ML rainfall prediction classifiers, including Na&#239;ve Bayes, Decision Tree, Support Vector Machine, Random Forest, and Neural Network, based on Malaysian data. More specifically, this study explores three algebraic combiners: average probability, maximum probability, and majority voting. An analysis of our results shows that the fused ML classifiers based on majority voting are particularly effective in boosting the performance of rainfall prediction compared to individual classifiers.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_20-Ensemble_Learning_for_Rainfall_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Standardization of Learning Behavior Indicators in Virtual Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111119</link>
        <id>10.14569/IJACSA.2020.0111119</id>
        <doi>10.14569/IJACSA.2020.0111119</doi>
        <lastModDate>2020-12-01T09:57:18.6630000+00:00</lastModDate>
        
        <creator>Benjamin Maraza-Quispe</creator>
        
        <creator>Olga Melina Alejandro-Oviedo</creator>
        
        <creator>Walter Choquehuanca-Quispe</creator>
        
        <creator>Nicolas Caytuiro-Silva</creator>
        
        <creator>Jose Herrera-Quispe</creator>
        
        <subject>Indicators; behavior; learning; analytical; environments; virtual</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The need to analyze student interactions in virtual learning environments (VLE), and the improvements such analysis generates, is an increasingly pressing reality for making timely predictions and optimizing student learning. This research aims to implement a proposal of standardized learning behavior indicators in VLE in order to design and implement efficient and timely learning analytics (LA) processes. The methodology consisted of a data management analysis carried out on the Moodle platform of the Faculty of Education Sciences of the National University of San Agustin of Arequipa, with the participation of 20 teachers; qualitative online questionnaires were used to collect the participants&#39; perceptions. The results propose a standard set of behavior indicators for the teaching-learning process in VLE, namely: preparation for learning, progress through the course, resources for learning, interaction in the forums, and evaluation of resources. These were evaluated through learning analytics and show the efficiency of the proposed indicators. The conclusions highlight the importance of implementing standardized behavior indicators that allow learning analytics processes to be developed efficiently in VLE in order to obtain better predictions, make timely decisions, and optimize teaching-learning processes.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_19-Towards_a_Standardization_of_Learning_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Low Cost Remote Primary Healthcare Services through Telemedicine: Bangladesh Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111118</link>
        <id>10.14569/IJACSA.2020.0111118</id>
        <doi>10.14569/IJACSA.2020.0111118</doi>
        <lastModDate>2020-12-01T09:57:18.6470000+00:00</lastModDate>
        
        <creator>Uzzal Kumar Prodhan</creator>
        
        <creator>Tushar Kanti Saha</creator>
        
        <creator>Rubya Shaharin</creator>
        
        <creator>Toufik Ahmed Emon</creator>
        
        <creator>Mohammad Zahidur Rahman</creator>
        
        <subject>Raspberry PI; DGHS; Arduino; Portable; ECG; SPO2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper, we have implemented low-cost primary healthcare services for the remote rural people of Bangladesh. These services were delivered through our advanced telemedicine model. The main aim of this paper is to provide basic healthcare services through the developed low-cost hardware. We have developed Arduino-based low-cost hardware for these telemedicine services, so that remote patients in Bangladesh can get expert doctors&#39; opinions without travelling to urban areas. We have collected nine vital signs, namely electrocardiogram (ECG), oxygen saturation (SPO2), blood pressure, temperature, body position, glucose level, airflow, height, and weight of patients, to be used in our model, and removed unwanted signals from the collected vital signs through several filtering algorithms. Our system was successfully tested with patients of the Marie Stopes Bangladesh Hospital. With our model, rural patients can get primary healthcare services from the pharmacy of any remote village of Bangladesh, with the assistance of a local doctor, by using a Raspberry Pi. Finally, deployment of the developed healthcare service will reduce the cost of telemedicine services and advance healthcare facilities for the remote people of Bangladesh.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_18-Implementation_of_Low_Cost_Remote_Primary_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Start and HER for a Directed and Persistent Reinforcement Learning Exploration in Discrete Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111117</link>
        <id>10.14569/IJACSA.2020.0111117</id>
        <doi>10.14569/IJACSA.2020.0111117</doi>
        <lastModDate>2020-12-01T09:57:18.6300000+00:00</lastModDate>
        
        <creator>Heba Alrakh</creator>
        
        <creator>Muhammad Fahmi Miskon</creator>
        
        <creator>Rozilawati Mohd Nor</creator>
        
        <subject>Reinforcement learning; hindsight experience replay; smart start; limit search space; exploration-exploitation trade off</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Reinforcement learning (RL) solves sequential decision-making problems through trial and error, whereby experiences are amassed to achieve goals and increase the cumulative reward. The exploration-exploitation dilemma is a critical challenge in reinforcement learning, particularly in environments with misleading or sparse rewards, where it has proven difficult to construct a suitable exploration strategy. In this paper, a framework combining Smart Start (SS) and Hindsight Experience Replay (HER) is developed to improve the performance of SS and make exploration more directed, especially in the early episodes. The framework, Smart Start and Hindsight Experience Replay (SS+HER), was studied in a discrete maze environment with sparse rewards. The results reveal that the framework doubles the rewards in the early episodes and decreases the time the agent needs to reach the goal.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_17-Smart_Start_and_HER_for_a_Directed_and_Persistent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Data Modelling Framework for Context-Aware Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111116</link>
        <id>10.14569/IJACSA.2020.0111116</id>
        <doi>10.14569/IJACSA.2020.0111116</doi>
        <lastModDate>2020-12-01T09:57:18.6000000+00:00</lastModDate>
        
        <creator>Nazia Tazeen</creator>
        
        <creator>K. Sandhya Rani</creator>
        
        <subject>Text classification; topic modeling; natural language processing; sentiment analysis; drug dataset; context-aware model; diagnostic analytics; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Data analytics has an interesting variant that aims to understand an entity&#39;s behavior. It is termed diagnostic analytics, which answers “why type questions”. “Why type questions” find their applications in emotion classification, brand analysis, drug review modeling, customer complaint classification, etc. Labeled data forms the core of any analytics problem, let alone diagnostic analytics; however, labeled data is not always available. In some cases, it is required to assign labels to unknown entities and understand their behavior. For such scenarios, the proposed model unites topic modeling and text classification techniques. This combined data model helps to solve diagnostic issues and obtain meaningful insights from data by treating the procedure as a classification problem. The proposed model uses Improved Latent Dirichlet Allocation for topic modeling and sentiment analysis to understand an entity&#39;s behavior, and represents it with an Improved Multinomial Na&#239;ve Bayesian data model to achieve automated classification. The model is tested using the drug review dataset obtained from the UCI repository. The health conditions, with their associated drug names, were extracted from the reviews, and sentiment scores were assigned. The sentiment scores reflected the behavior of various drugs for a particular health condition and classified them according to their quality. The proposed model&#39;s performance is compared with existing baseline models, and the results show that it performed better than the other models.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_16-A_Conceptual_Data_Modelling_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Covid-19 Ontology Engineering-Knowledge Modeling of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111115</link>
        <id>10.14569/IJACSA.2020.0111115</id>
        <doi>10.14569/IJACSA.2020.0111115</doi>
        <lastModDate>2020-12-01T09:57:18.5830000+00:00</lastModDate>
        
        <creator>Vinu Sherimon</creator>
        
        <creator>Sherimon P.C</creator>
        
        <creator>Renchi Mathew</creator>
        
        <creator>Sandeep M. Kumar</creator>
        
        <creator>Rahul V. Nair</creator>
        
        <creator>Khalid Shaikh</creator>
        
        <creator>Hilal Khalid Al Ghafri</creator>
        
        <creator>Huda Salim Al Shuaily</creator>
        
        <subject>COVID-19; ontology; SARS-CoV-2; ontology reasoning; SWRL; SQWRL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The COVID-19 pandemic has rapidly spread across the world since its emergence in December 2019 in Wuhan, China. This pandemic has disrupted the health of citizens in such a way that its impact is enormous in economic and social terms. Education, employment, income, and the well-being of humankind are all crucially affected by this coronavirus. Nations worldwide are struggling to battle this emergency, and intensive studies are being carried out by researchers all over the world to control this pandemic. Medical science has advanced greatly with the application of computer-assisted solutions in health care. Ontology-based clinical decision support systems (CDSS) assist medical practitioners in the diagnosis and treatment of diseases; they are well known for data sharing, interoperability, knowledge reuse, and decision support. This research article presents the development of an ontology for SARS-CoV-2 (COVID-19) to be used in a CDSS, which is proposed for the satellite clinics of the Royal Oman Police (ROP), Sultanate of Oman. The key concepts and concept relationships of COVID-19 are represented using an ontology. Semantic Web Rule Language (SWRL) is used to model the rules related to the initial diagnosis of the patient, and Semantic Query Enhanced Web Rule Language (SQWRL) is used to retrieve the data stored in the ontology. The developed ontology successfully classified patients into one of four categories: non-suspected, suspected, probable, and confirmed. The reasoning time and the query execution time were found to be optimal.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_15-Covid_19_Ontology_Engineering_Knowledge_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Student Core Drives on e-Learning during the Covid-19 with Octalysis Gamification Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111114</link>
        <id>10.14569/IJACSA.2020.0111114</id>
        <doi>10.14569/IJACSA.2020.0111114</doi>
        <lastModDate>2020-12-01T09:57:18.5700000+00:00</lastModDate>
        
        <creator>Fitri Marisa</creator>
        
        <creator>Sharifah Sakinah Syed Ahmad</creator>
        
        <creator>Zeratul Izzah Mohd Yusoh</creator>
        
        <creator>Anastasia L Maukar</creator>
        
        <creator>Ronald David Marcus</creator>
        
        <creator>Anang Aris Widodo</creator>
        
        <subject>Gamificaton; education; Covid-19 pandemic; octalysis framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Learning activities during the Covid-19 pandemic were carried out through online systems, even though in reality many institutions had not properly prepared their systems and infrastructure. Survey results show that among the e-learning media generally used, 53.81% involve Google Classroom combined with other applications that are not integrated with the institution&#39;s Learning Management System. This condition provides a research opportunity to evaluate the effectiveness of online learning, especially how students are motivated to learn with the method, where the results can be used as a reference in developing and refining the method. Since many studies have shown that gamification can increase individual motivation in carrying out activities, this study uses the Octalysis gamification framework to analyze the extent of the role of gamification in the learning process and to measure student motivation in online learning activities. The evaluation shows that the Likert scale results reach a &quot;High&quot; level, while the highest possible level is &quot;Very High&quot;; on the Octalysis test scale, the average score is 6.5 on a scale of 1 to 10. The conclusion from this evaluation is that motivation to learn through e-learning during the Covid-19 period is quite high and has the potential to be developed. Since the results across the Octalysis framework&#39;s 8 core drives are still average, innovation in e-learning is needed to increase student motivation based on those 8 core drives. The results of this study recommend gamification to increase student learning motivation in order to improve learning outcomes.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_14-Evaluation_of_Student_Core_Drives_on_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Domain-Adaptation Method using GAN for Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111113</link>
        <id>10.14569/IJACSA.2020.0111113</id>
        <doi>10.14569/IJACSA.2020.0111113</doi>
        <lastModDate>2020-12-01T09:57:18.5530000+00:00</lastModDate>
        
        <creator>Jeonghyun Hwang</creator>
        
        <creator>Kangseok Kim</creator>
        
        <subject>Fraud detection; domain adaptation; data augmentation; deep learning; GAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper, an efficient domain-adaptation method is proposed for fraud detection. The proposed method employs the discriminative characteristics of feature maps and generative adversarial networks (GANs) to minimize the deviation that occurs when a common feature is shifted between two domains. To solve the class imbalance problem and increase the model&#39;s detection accuracy, new data samples are generated by applying a minority-class data augmentation method that uses a GAN. We evaluate the classification performance of the proposed domain-adaptation model by comparing it against support vector machine (SVM) and convolutional neural network (CNN) models, using classification performance evaluation indicators. The experimental results indicate that the proposed model is applicable to both test datasets; furthermore, it requires less time for learning. Although the SVM offers better detection performance than the CNN and the proposed domain-adaptation model, its learning time exceeds those of the other two models as the dataset grows. Also, although the detection performance of the CNN-based model is similar to that of the proposed domain-adaptation model, its learning process is longer. In addition, although the GAN used to solve the class imbalance problem of the two datasets requires slightly more time than SMOTE (synthetic minority oversampling technique), it shows better classification performance and is effective for datasets featuring class imbalance.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_13-An_Efficient_Domain_Adaptation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Single Modality-Based Event Detection Framework for Complex Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111112</link>
        <id>10.14569/IJACSA.2020.0111112</id>
        <doi>10.14569/IJACSA.2020.0111112</doi>
        <lastModDate>2020-12-01T09:57:18.5370000+00:00</lastModDate>
        
        <creator>Sheeraz Arif</creator>
        
        <creator>Adnan Ahmed Siddiqui</creator>
        
        <creator>Rajesh Kumar</creator>
        
        <creator>Avinash Maheshwari</creator>
        
        <creator>Komal Maheshwari</creator>
        
        <creator>Muhammad Imran Saeed</creator>
        
        <subject>Event detection; single-stream; feature fusion; temporal encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Detection of rare and complex events in large video datasets, or in unconstrained user-uploaded videos on the internet, is a challenging task. The presence of irregular camera movement, viewpoint changes, illumination variations, and significant changes in the background makes it extremely difficult to capture the underlying motion in videos. In addition, extraction of features using different modalities (single streams) may introduce computational complexity and cause the abstraction of confusing and irrelevant spatial and semantic features. To address this problem, we present a single-stream (RGB only) framework based on the fusion of spatial and semantic features extracted by a modified 3D Residual Convolutional Network. We combine the spatial and semantic features based on the assumption that the difference between the two types of features can reveal accurate and relevant features. Moreover, the introduction of temporal encoding builds the relationship between consecutive video frames to explore discriminative long-term motion patterns. We conduct extensive experiments on prominent publicly available datasets. The obtained results demonstrate the power of our proposed model and its improved accuracy compared with existing state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_12-Single_Modality_Based_Event_Detection_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Text Base Information Retrieval Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111111</link>
        <id>10.14569/IJACSA.2020.0111111</id>
        <doi>10.14569/IJACSA.2020.0111111</doi>
        <lastModDate>2020-12-01T09:57:18.5070000+00:00</lastModDate>
        
        <creator>Syed Ali Jafar Zaidi</creator>
        
        <creator>Safdar Hussain</creator>
        
        <creator>Samir Brahim Belhaouari</creator>
        
        <subject>Information retrieval; sequence matcher method; relevance feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Everyone needs accurate and efficient information retrieval in no time. Search engines are the main source for extracting the required information when a user searches a query and wants to generate results. Different search engines provide different Application Programming Interfaces (APIs) and libraries that allow researchers and programmers to access the data stored on the search engines&#39; servers. When a researcher or programmer searches a query using an API, it returns a JavaScript Object Notation (JSON) file. In this JSON file, information is encapsulated, and scraping techniques are used to filter out the text. The aim of this paper is to propose a different approach that effectively and efficiently filters the text-based queries searched through the search engines and returns the most appropriate results to the users after matching the searched text, because the previously used techniques are not efficient enough. We use different comparison techniques, i.e., the Sequence Matcher Method, compare the results of this technique with relevance feedback, and find that our proposed technique provides much better results.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_11-Implementation_of_Text_Base_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping Linguistic Variations in Colloquial Arabic through Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111110</link>
        <id>10.14569/IJACSA.2020.0111110</id>
        <doi>10.14569/IJACSA.2020.0111110</doi>
        <lastModDate>2020-12-01T09:57:18.5070000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Hamza Ethleb</creator>
        
        <creator>Mohamed Elarabawy Hashem</creator>
        
        <subject>Colloquial Arabic; computational statistical model; lexical patterns; linguistic mapping; principal component analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The recent years have witnessed the development of different computational approaches to the study of linguistic variations and regional dialectology in different languages including English, German, Spanish and Chinese. These approaches have proved effective in dealing with large corpora and making reliable generalizations about the data. In Arabic, however, much of the work on regional dialectology is so far based on traditional methods; therefore, it is difficult to provide a comprehensive mapping of the dialectal variations of all the colloquial dialects of Arabic. Thus, this study is concerned with proposing a computational statistical model for mapping the linguistic variation and regional dialectology of Colloquial Arabic through Twitter based on the lexical choices of speakers. The aim is to explore the lexical patterns for generating regional dialect maps as derived from Twitter users. The study is based on a corpus of 1,597,348 geolocated Twitter posts. Using principal component analysis (PCA), the data were classified into distinct classes and the lexical features of each class were identified. Results indicate that the lexical choices of Twitter users can be used effectively for mapping the regional dialect variation in Colloquial Arabic.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_10-Mapping_Linguistic_Variations_in_Colloquial_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recursive Least Square: RLS Method-Based Time Series Data Prediction for Many Missing Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111109</link>
        <id>10.14569/IJACSA.2020.0111109</id>
        <doi>10.14569/IJACSA.2020.0111109</doi>
        <lastModDate>2020-12-01T09:57:18.4900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kaname Seto</creator>
        
        <subject>Special Sensor Microwave/Imager (SSM/I); Defense Meteorological Satellite Program (DMSP); Kalman filter; Recursive Least Square (RLS) method; missing data; parameter estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Prediction methods for time series data with many missing values, based on the Recursive Least Square (RLS) method, are proposed. There are two parameter-tuning algorithms for Kalman filter parameter estimation: the time update and measurement update algorithms. Two learning methods for Kalman filter parameter estimation are proposed based on the RLS method. One is the method without the measurement update algorithm (RLS-1). The other is the method without both the time and measurement update algorithms (RLS-2). The methods are applied to the time series data of Defense Meteorological Satellite Program (DMSP) / Special Sensor Microwave/Imager (SSM/I) data containing a large number of missing values. It is found that the proposed RLS-2 method shows smoother and faster convergence in the learning process than RLS-1.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_9-Recursive_Least_Square.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>STEM-Technology Example of the Computational Problem of a Chain on a Cylinder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111108</link>
        <id>10.14569/IJACSA.2020.0111108</id>
        <doi>10.14569/IJACSA.2020.0111108</doi>
        <lastModDate>2020-12-01T09:57:18.4770000+00:00</lastModDate>
        
        <creator>Valery Ochkov</creator>
        
        <creator>Konstantin Orlov</creator>
        
        <creator>Evgeny Barochkin</creator>
        
        <creator>Inna Vasileva</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>STEM technology; math education; closed chain; Mathcad</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>An application of STEM technology to the problem of computing the parameters of a closed chain (with and without load) thrown over a horizontal cylinder is considered. The numerical solution is found and graphically interpreted by compiling a system of transcendental equations, as well as by carrying out numerical optimization with constraints. The approximating analytical dependence is determined using fitting functions. In the process of solving, a number of concepts from mathematics, physics, and computer science are examined. Some possibilities of using specialized mathematical packages (in particular, Mathcad) and of working on online platforms are shown. Additional problem options for using STEM technology are presented.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_8-STEM_Technology_Example_of_the_Computational_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prelaunch Matching Architecture for Distributed Intelligent Image Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111107</link>
        <id>10.14569/IJACSA.2020.0111107</id>
        <doi>10.14569/IJACSA.2020.0111107</doi>
        <lastModDate>2020-12-01T09:57:18.4300000+00:00</lastModDate>
        
        <creator>Anton Ivaschenko</creator>
        
        <creator>Arkadiy Krivosheev</creator>
        
        <creator>Pavel Sitnikov</creator>
        
        <subject>Multi-agent technology; artificial neural networks; image recognition; electricity meter data processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The paper presents a multi-agent solution for the dynamic combination of several artificial neural networks used for image recognition. As opposed to existing methods, a dispatcher agent is introduced that provides prelaunch matching of candidate pro-active identification algorithms through competition. The proposed solution was implemented to solve the problem of stream processing of photo images produced by a number of distributed cameras using an intelligent mobile application. It was tested and used in practice to capture the readings of electricity meters that are manually monitored by a group of patrol inspectors using handheld devices. The prelaunch matching architecture improved the quality of digit recognition by using various neural networks depending on the operating conditions.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_7-Prelaunch_Matching_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Definition of Unique Objects by Convolutional Neural Networks using Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111106</link>
        <id>10.14569/IJACSA.2020.0111106</id>
        <doi>10.14569/IJACSA.2020.0111106</doi>
        <lastModDate>2020-12-01T09:57:18.4130000+00:00</lastModDate>
        
        <creator>Rusakov K. D</creator>
        
        <creator>Seliverstov D.E</creator>
        
        <creator>Osipov V.V</creator>
        
        <creator>Reshetnikov V.N</creator>
        
        <subject>Recognition of medical masks; COVID-19; convolutional neural networks; RetinaFace; ResNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This article addresses the problem of detecting medical masks on a person&#39;s face. A medical mask is one of the most effective measures for preventing infection with COVID-19, and its automatic detection is a relevant task. The introduction of automatic recognition of medical masks into existing information security systems will make it possible to quickly identify violators of the mask regime, which in turn will increase security during a pandemic. The article provides a detailed analysis of existing solutions for face detection and automatic recognition of medical masks, and a method based on convolutional neural networks is proposed. A distinctive feature of the new method is the use of two neural networks at once: the RetinaFace architecture at the face search stage and the ResNet architecture at the mask recognition stage. It is shown that the use of transfer learning on weights already trained to work with faces significantly accelerates learning and increases recognition accuracy. However, with this approach there are some false positives, for example, when a person covers their face with their hands, imitating a medical mask. Based on the study, we conclude that the algorithm is applicable in security systems for determining the presence or absence of a medical mask on a person&#39;s face, and that additional research is needed to address the algorithm&#39;s false positives.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_6-Definition_of_Unique_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autoencoder based Semi-Supervised Anomaly Detection in Turbofan Engines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111105</link>
        <id>10.14569/IJACSA.2020.0111105</id>
        <doi>10.14569/IJACSA.2020.0111105</doi>
        <lastModDate>2020-12-01T09:57:18.3800000+00:00</lastModDate>
        
        <creator>Ali Al Bataineh</creator>
        
        <creator>Aakif Mairaj</creator>
        
        <creator>Devinder Kaur</creator>
        
        <subject>Anomaly detection; autoencoder; bayesian hyperparameter tuning; turbofan engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>This paper proposes a semi-supervised autoencoder-based approach for the detection of anomalies in turbofan engines. The data used in this research are generated through simulation of turbofan engines using a tool known as Commercial Modular Aero-Propulsion System Simulation (C-MAPSS). C-MAPSS allows users to simulate various operational settings, environmental conditions, and control settings by varying various input parameters. The optimal architecture of the autoencoder is discovered using a Bayesian hyperparameter tuning approach. The autoencoder model with the optimal architecture is trained on data representing the normal behavior of the turbofan engines included in the training set. The performance of the trained model is then tested on data from the engines included in the test set. To study the effect of redundant feature removal on performance, two approaches are implemented and tested: with and without redundant feature removal. The performance of the proposed models is evaluated using various evaluation metrics such as F1-score, precision and recall. The results show that the best performance is achieved when the autoencoder model is used without redundant feature removal.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_5-Autoencoder_based_Semi_Supervised_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors on the Implementation of ERP Systems: Building a Theoretical Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111104</link>
        <id>10.14569/IJACSA.2020.0111104</id>
        <doi>10.14569/IJACSA.2020.0111104</doi>
        <lastModDate>2020-12-01T09:57:18.3670000+00:00</lastModDate>
        
        <creator>Asimina Kouriati</creator>
        
        <creator>Thomas Bournaris</creator>
        
        <creator>Basil Manos</creator>
        
        <creator>Stefanos A. Nastis</creator>
        
        <subject>Enterprise resource planning (ERP); ERP implementation; critical success factors (CSFs); content analysis (CA); categorization; theoretical framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Existing pressure to confront a radically changing external environment has led many companies to invest in various Information Systems, such as Enterprise Resource Planning (ERP), in order to optimize their production processes and strategies. Despite the fact that an ERP system is an important strategic tool, many companies fail to take advantage of its benefits due to their failures in many aspects of management and implementation. This study aims to investigate the critical success factors of enterprise resource planning system implementation and build a categorization framework, so as to create a theoretical base that enhances any further research approaches in various sectors of the economy. To this end, 37 ERP critical success factors were identified using the Content Analysis method and classified into categories relative to the ERP orientations of implementation and the ERP life-cycle phases. Finally, these two types of categorization were merged in order to examine the critical success factors&#39; behavior during ERP implementation. This paper, and the multilateral theoretical framework it creates, sets out how critical success factors must be taken into account by companies, and marks a starting point that promises a sequence of further research approaches in particular economic sectors or in a set of them. By fulfilling the purpose of this study, a significant contribution to the computer science literature, and especially to the ERP field, is offered.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_4-Critical_Success_Factors_on_the_Implementation_of_ERP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Spam in Twitter Microblogging Services: A Novel Machine Learning Approach based on Domain Popularity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111103</link>
        <id>10.14569/IJACSA.2020.0111103</id>
        <doi>10.14569/IJACSA.2020.0111103</doi>
        <lastModDate>2020-12-01T09:57:18.3330000+00:00</lastModDate>
        
        <creator>Khalid Binsaeed</creator>
        
        <creator>Gianluca Stringhini</creator>
        
        <creator>Ahmed E. Youssef</creator>
        
        <subject>Spam detection; phishing detection; domain popularity; machine learning; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>Detecting Internet malicious activities has been and continues to be a critical issue that needs to be addressed effectively. This is essential to protect our personal information, computing resources, and financial capital from unsolicited actions such as credential information theft, downloading and installing malware, extortion, etc. The introduction of social media such as Twitter has given malicious users a new and promising platform to perform their activities, ranging from a simple spam message to taking full control over the victim’s machine. Twitter has revealed that its algorithms for detecting spam are not very effective; most of the trending hashtags include unrelated spam and advertising tweets, which indicates that there is a problem with the currently used spam detection framework. This paper proposes a new approach for detecting spam in Twitter microblogging using Machine Learning (ML) techniques and domain popularity services. The proposed approach comprises two main stages: 1) Tweets are collected periodically and filtered by selecting the ones that appear more frequently than a decided threshold in the specified period (i.e. common tweets). Then, an inspection is conducted on the common tweets by checking the associated URL domain against Alexa’s top one million globally viewed websites. If a tweet is common on Twitter but its domain does not appear among the top one million globally viewed websites, it is flagged as potential spam. 2) The second stage kicks in by running ML algorithms on the flagged tweets to extract features that help detect the cluster of spam and prevent it in real time. The performance of the proposed approach has been evaluated using three of the most popular classification models (random forest, J48, and Na&#239;ve Bayes). For all classifiers, results showed the effectiveness of the proposed method in terms of different performance metrics (e.g. precision, sensitivity, F1-score, accuracy) and using different test scenarios.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_3-Detecting_Spam_in_Twitter_Microblogging_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Involving American Schools in Enhancing Children’s Digital Literacy and Raising Awareness of Risks Associated with Internet Usage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111102</link>
        <id>10.14569/IJACSA.2020.0111102</id>
        <doi>10.14569/IJACSA.2020.0111102</doi>
        <lastModDate>2020-12-01T09:57:17.9600000+00:00</lastModDate>
        
        <creator>Mohammed Tawfik Hussein</creator>
        
        <creator>Reem M. Hussein</creator>
        
        <subject>Digital literacy; e-learning; internet risks; online education and safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>The purpose of this study is to shed light on the importance of educating students on digital literacy and netiquette, for technology has become a common denominator in most of our tasks. This study is mostly concerned with involving schools in educating students on this matter, since students spend most of their time in schools. The paper expresses the urgency of increasing the amount of digital literacy taught in schools to help raise students’ awareness of the potential risks of the internet. It breaks down the risks that young users are prone to face, as well as ways to safely avoid them. Further, the paper analyzes the state standards practiced in the US to serve as a wake-up call for schools to work on improving their standards to protect young users from various harms. Therefore, schools are urged to take on the role of enhancing students’ digital literacy and their understanding of the potential risks present online.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_2-Involving_American_Schools_in_Enhancing_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Imbalanced Datasets using One-Class SVM, k-Nearest Neighbors and CART Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111101</link>
        <id>10.14569/IJACSA.2020.0111101</id>
        <doi>10.14569/IJACSA.2020.0111101</doi>
        <lastModDate>2020-12-01T09:57:17.8800000+00:00</lastModDate>
        
        <creator>Maruthi Rohit Ayyagari</creator>
        
        <subject>SVM; k-NN; CART; OKC; classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(11), 2020</description>
        <description>In this paper a new algorithm, the OKC classifier, is proposed, which is a hybrid of the One-Class SVM, k-Nearest Neighbors and CART algorithms. The performance of most classification algorithms is significantly influenced by certain characteristics of the datasets on which they are modeled, such as imbalance in class distribution, class overlapping, and lack of density. The proposed algorithm can perform the classification task on imbalanced datasets without re-sampling. The algorithm is compared against several well-known classification algorithms on datasets having varying degrees of class imbalance and class overlap. The experimental results demonstrate that the proposed algorithm performs better than a number of standard classification algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No11/Paper_1-Classification_of_Imbalanced_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Acoustic Embeddings for Identifying Parkinsonian Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111089</link>
        <id>10.14569/IJACSA.2020.0111089</id>
        <doi>10.14569/IJACSA.2020.0111089</doi>
        <lastModDate>2020-11-04T12:26:53.0930000+00:00</lastModDate>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Abdul Latif Memon</creator>
        
        <subject>Affective computing; deep acoustic embeddings; Parkinson’s disease; social signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Parkinson’s disease is a serious neurological impairment which adversely affects the quality of life of individuals. While there is currently no cure for this disease, it is well known that early diagnosis can be used to improve the quality of life of affected individuals through various types of therapy. Speech-based screening of Parkinson’s disease is an active area of research intending to offer a non-invasive and passive tool for clinicians to monitor changes in voice that arise due to Parkinson’s disease. Whereas traditional methods for speech-based identification rely on domain-knowledge-based hand-crafted features, in this paper we propose and investigate the efficacy of deep acoustic embeddings for the identification of Parkinsonian speech. To this end, we conduct several experiments to benchmark deep acoustic embeddings against handcrafted features for differentiating between speech from individuals with Parkinson’s disease and those who are healthy. We report that deep acoustic embeddings consistently perform better than domain-knowledge features. We also report on the usefulness of decision-level fusion for improving the classification performance of a model trained on these embeddings.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_89-Deep_Acoustic_Embeddings_for_Identifying_Parkinsonian_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Control Scheme for Prosthetic Hands through Spatial Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111088</link>
        <id>10.14569/IJACSA.2020.0111088</id>
        <doi>10.14569/IJACSA.2020.0111088</doi>
        <lastModDate>2020-10-31T18:17:10.3570000+00:00</lastModDate>
        
        <creator>Yunan He</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Nobuhiko Yamaguchi</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Prosthetic hand; vision-inertial fusion; pose estimation; motion tracking; inertial measurement unit; augmented reality; control scheme; spatial features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>A novel control scheme for prosthetic hands through spatial understanding is proposed. The proposed control scheme features an imaging sensor and an inertial measurement unit (IMU) sensor, which makes prosthetic hands capable of visual and motion sensing. The imaging sensor captures the scene in which the user intends to grasp an object. The control system recognizes the target object, extracts its surface features and estimates its pose from the captured images. Then the spatial relationship between the hand and the target object is constructed. With the help of the IMU sensor, this relationship can be tracked and maintained wherever the prosthetic hand moves, even when the object is out of the view range of the camera. To interact with the user, this process is visualized using augmented reality (AR) technology. A test platform based on the proposed control scheme is developed and a case study is performed with the platform.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_88-Novel_Control_Scheme_for_Prosthetic_Hands.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Census Estimation using Histogram Representation of 3D Surfaces: A Case Study Focusing on the Karak Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111087</link>
        <id>10.14569/IJACSA.2020.0111087</id>
        <doi>10.14569/IJACSA.2020.0111087</doi>
        <lastModDate>2020-10-31T18:17:10.3430000+00:00</lastModDate>
        
        <creator>Subhieh El-Salhi</creator>
        
        <creator>Safaa Al-Haj Saleh</creator>
        
        <creator>Frans Coenen</creator>
        
        <subject>Histogram representation; Geographic Information System (GIS); population estimation; 3D surface; satellite images; data mining; classification technique; Karak region</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>National and regional infrastructure planning is founded on the use of many factors, of which population size can be argued to be the most fundamental. Population size is typically acquired through a census. However, manual census collection is an expensive and resource-intensive process, especially in regions that are poorly connected. Computer-aided population estimation, when done accurately, therefore offers significant benefit. This paper presents a comprehensive framework for estimating the population size of a region of interest by applying classification techniques to terrain data. Central to the proposed framework is a novel histogram representation technique designed to support the generation of appropriate and effective classifiers central to the operation of the framework. The presented work uses the Karak region, in Jordan, as a case study for population size estimation. The proposed framework and the representation technique have been evaluated using a variety of classification mechanisms and parameter settings. The reported evaluation of the proposed representation technique demonstrates that good results can be obtained with regard to estimating the population size.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_87-Census_Estimation_using_Histogram_Representation_of_3D_Surfaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Label Arabic Text Classification: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111086</link>
        <id>10.14569/IJACSA.2020.0111086</id>
        <doi>10.14569/IJACSA.2020.0111086</doi>
        <lastModDate>2020-10-31T18:17:10.3270000+00:00</lastModDate>
        
        <creator>Nawal Aljedani</creator>
        
        <creator>Reem Alotaibi</creator>
        
        <creator>Mounira Taileb</creator>
        
        <subject>Machine learning; text classification; multi-label classification; Arabic natural language processing; hierarchical classification; Lexicon approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>There is a massive growth of text documents on the web. This has led to an increasing need for methods that can organize and classify electronic documents (instances) automatically. The multi-label classification task is widely used in real-world problems and has been applied to different applications. It assigns multiple labels to each document simultaneously. Few and insufficient research studies have investigated the multi-label text classification problem in the Arabic language. Therefore, this survey paper aims to present an extensive review of the existing multi-label classification methods and techniques that can deal with the multi-label problem. Besides, we focus on the Arabic language by covering the relevant applications of multi-label classification on Arabic text, and identify the main challenges faced by these studies. Furthermore, this survey presents experimental comparisons of different multi-label classification methods applied in the Arabic context and points out some baseline results. We found that further investigations are needed to improve the multi-label classification task in the Arabic language, especially the hierarchical classification task.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_86-Multi_Label_Arabic_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Target Energy Disaggregation using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111085</link>
        <id>10.14569/IJACSA.2020.0111085</id>
        <doi>10.14569/IJACSA.2020.0111085</doi>
        <lastModDate>2020-10-31T18:17:10.3100000+00:00</lastModDate>
        
        <creator>Mohammed Ayub</creator>
        
        <creator>El-Sayed M. El-Alfy</creator>
        
        <subject>Energy disaggregation; smart meters; load monitoring; ENERTALK dataset; multi-target disaggregation; multi-target regression; NILM knowledge transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Non-Intrusive Load Monitoring (NILM) has become popular for smart meters in recent years due to its low-cost installation and maintenance. However, it requires efficient and robust machine learning models to disaggregate the respective electrical appliance energy from the mains. This study investigated NILM in the form of direct point-to-point multiple- and single-target regression models using convolutional neural networks. Two model architectures were utilized and compared using five different metrics on two benchmark datasets (ENERTALK and REDD). The experimental results showed that the multi-target disaggregation setting is more complex than single-target disaggregation. For the multi-target setting on the ENERTALK dataset, the highest individual F1-score is 95.37% and the overall average F1-score is 75.00%. Better results were obtained for the multi-target setting on the other dataset, with a higher overall average F1-score of 83.32%. Additionally, the robustness and knowledge-transfer capability of the models through cross-appliance and cross-domain disaggregation was demonstrated by training for a specific appliance on specific data and testing on a different appliance, house, and dataset. The proposed models can also disaggregate simultaneously operating appliances with higher F1-scores.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_85-Multi_Target_Energy_Disaggregation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Verification across Aging using Deep Learning with Histogram of Oriented Gradients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111084</link>
        <id>10.14569/IJACSA.2020.0111084</id>
        <doi>10.14569/IJACSA.2020.0111084</doi>
        <lastModDate>2020-10-31T18:17:10.2800000+00:00</lastModDate>
        
        <creator>Areeg Mohammed Osman</creator>
        
        <creator>Serestina Viriri</creator>
        
        <subject>Facial aging; verify faces; Convolutional Neural Networks (CNN); Histogram of Oriented Gradients (HOG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>One of the complex processes that affect the shape and texture of the human face is facial aging. These changes tend to deteriorate the efficacy of systems that automatically verify faces. Convolutional Neural Networks (CNNs) are considered one of the most common deep learning approaches, where multiple layers are trained robustly while maintaining the minimum number of learned parameters to improve system performance. In this paper, a deeper convolutional neural network model fitted with a Histogram of Oriented Gradients (HOG) descriptor is proposed to handle feature extraction and classification of two face images with an age gap. Furthermore, the model has been trained and tested on the MORPH and FG-NET datasets. Experiments on FG-NET achieve state-of-the-art accuracy (reaching 100%), while results on the MORPH dataset show a significant improvement in accuracy of 99.85%.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_84-Face_Verification_across_Aging_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Evolutionary Programming for Developing Recommender Systems based on Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111083</link>
        <id>10.14569/IJACSA.2020.0111083</id>
        <doi>10.14569/IJACSA.2020.0111083</doi>
        <lastModDate>2020-10-31T18:17:10.2630000+00:00</lastModDate>
        
        <creator>Edward Hinojosa-Cardenas</creator>
        
        <creator>Edgar Sarmiento-Calisaya</creator>
        
        <creator>Cesar A. Martinez-Salinas</creator>
        
        <creator>Lehi Quincho-Mamani</creator>
        
        <creator>Jair F. Huaman-Canqui</creator>
        
        <subject>Collaborative filtering; clustering; evolutionary programming; multi-objective; recommender systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In the era of the internet, many online platforms offer a large number of items to users. Users can spend a lot of time finding (or not finding) the items they are interested in. An effective strategy to overcome this problem is a recommender system, one of the most popular applications of machine learning. Recommender systems select the most appropriate items for a specific user based on previous information about items and users, and they are developed using different approaches. One of the most successful approaches for developing recommender systems is collaborative filtering, which can filter out items that a user might like based on the reactions of users with similar profiles. Traditional recommender systems often consider only precision as the evaluation metric of performance; however, other metrics (such as recall, diversity, and novelty) are also important. Unfortunately, some metrics are conflicting, e.g., precision impacts negatively on other metrics. This paper presents a multi-objective evolutionary programming method for developing a recommender system, based on a new collaborative filtering technique, that maximizes the recall for a given precision. The new collaborative filtering technique uses three components for recommending an item to a user: 1) clustering of users; 2) a previous memory-based prediction; and 3) five decimal parameters (threshold average clustering, threshold penalty, threshold incentive, weight attached to average clustering, and weight attached to Pearson correlation). The multi-objective evolutionary programming optimizes the clustering of users and the five decimal parameters while searching to maximize both the precision and recall objectives. A comparison between the proposed method and a previous non-evolutionary method shows that the proposed method improves the precision and recall metrics on a benchmark database.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_83-Multi_Objective_Evolutionary_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Static vs. Dynamic Modelling of Acoustic Speech Features for Detection of Dementia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111082</link>
        <id>10.14569/IJACSA.2020.0111082</id>
        <doi>10.14569/IJACSA.2020.0111082</doi>
        <lastModDate>2020-10-31T18:17:10.2470000+00:00</lastModDate>
        
        <creator>Muhammad Shehram Shah Syed</creator>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Elena Pirogova</creator>
        
        <creator>Margaret Lech</creator>
        
        <subject>Dementia detection; speech classification; neural networks; recurrent neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Dementia is a chronic neurological disease that causes cognitive disabilities and significantly impacts the daily activities of affected individuals. It is known that early detection of dementia can improve the quality of life of patients through a specialized care program. Recently, there has been a growing interest in speech-based screening for neurological diseases such as dementia. The focus is on continuous monitoring of changes in the speech of dementia patients, aiming to identify the early onset of the disease, which could facilitate the development of preventative treatment care. In this work, we propose dynamic (temporal) modeling of acoustic speech characteristics aimed at identifying the signs of dementia. The classification performance of the proposed framework is compared with a baseline static modeling of acoustic speech features. Experimental results show that the proposed dynamic approach outperforms the static method. It achieves a classification accuracy of 74.55%, compared to 66.92% obtained using the static models.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_82-Static_vs_Dynamic_Modelling_of_Acoustic_Speech_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Perception Centered Self-Driving System without HD Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111081</link>
        <id>10.14569/IJACSA.2020.0111081</id>
        <doi>10.14569/IJACSA.2020.0111081</doi>
        <lastModDate>2020-10-31T18:17:10.2170000+00:00</lastModDate>
        
        <creator>Alan Sun</creator>
        
        <subject>Self-driving; lane lines detection; traffic lines detection; visual localization; HD Maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Building a fully autonomous self-driving system has been discussed for more than 20 years yet remains unsolved. Previous systems have limited ability to scale. Their localization subsystems need labor-intensive map recording to run in a new area, and accuracy decreases after changes occur in the environment. In this paper, a new localization method is proposed to solve these scalability problems, together with a new method for detecting and making sense of diverse traffic lines. Like the way a human drives, a self-driving system should not rely on an exact position to travel in most scenarios. As a result, without HD Maps, GPS, or an IMU, the proposed localization subsystem relies only on detecting surrounding driving-related features (such as lane lines, stop lines, and merging lane lines). For spotting and reasoning about all these features, a new line detector is proposed and tested against multiple datasets.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_81-A_Perception_Centered_Self_Driving_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Common and Uncommon Tones by P300 Feature Extraction and Identification of Accurate P300 Wave by Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111080</link>
        <id>10.14569/IJACSA.2020.0111080</id>
        <doi>10.14569/IJACSA.2020.0111080</doi>
        <lastModDate>2020-10-31T18:17:10.2000000+00:00</lastModDate>
        
        <creator>Rafia Akhter</creator>
        
        <creator>Kehinde Lawal</creator>
        
        <creator>Md. Tanvir Rahman</creator>
        
        <creator>Shamim Ahmed Mazumder</creator>
        
        <subject>Event Related Potential (ERP); classification; P300; machine-learning; oddball-paradigm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>An event-related potential (ERP) is a measure of the brain’s response to a specific sensory, cognitive, or motor event. One common ERP technique used in cognition research is the oddball paradigm, where the brain’s response to common and uncommon stimuli is compared. The neurologic response to the oddball paradigm produces a P300 ERP, which is one of the major visual/auditory sensory ERP components. The purpose of this study is to classify ERP responses to common and uncommon tones by extracting the P300 feature from ERP epochs and to identify the accurate shape of the P300 wave. For recording ERP data, an OpenBCI system is used. P300 features are extracted using EEGLAB, a MATLAB toolbox. Finally, various types of machine learning models are used to identify the accurate shape of a P300 wave and then classify common and uncommon auditory tones. For stimuli classification, all of the algorithms evaluated performed efficiently and built consistent models with 93.75% to 99.1% evaluation accuracy. For P300 shape detection, the NN model showed the best performance with 94.95% accuracy. These findings have the potential to add useful machine learning-based methods to the clinical application of ERPs.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_80-Classification_of_Common_and_Uncommon_Tones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effect of Multiple Filters in Automatic Language Identification without Lexical Knowledge</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111079</link>
        <id>10.14569/IJACSA.2020.0111079</id>
        <doi>10.14569/IJACSA.2020.0111079</doi>
        <lastModDate>2020-10-31T18:17:10.1870000+00:00</lastModDate>
        
        <creator>Guan-Lip Soon</creator>
        
        <creator>Nur-Hana Samsudin</creator>
        
        <creator>Dennis Lim</creator>
        
        <subject>Language identification; speech recognition; speech filters; minimal language data; minimal lexical information; optimal performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The classical language identification architecture requires a collection of language-independent text and speech information for training before the system can identify languages correctly. This paper also addresses a language identification framework, but with data downsized considerably from the general language identification architecture. The system’s goal is to identify the language being spoken based on a series of trained speech sound-file features, without any language text data or lexical knowledge of the spoken language. The system is also expected to be deployable on a mobile platform in the future. This paper is specifically about measuring the performance optimisation of audio filters integrated with a CNN model for the language identification system. There are several metrics to gauge the performance of an identification system for a classification problem. Precision, recall, and F1 scores are presented for the performance evaluation with different combinations of filters together with a CNN model as the framework of the language identification system. The goal is not to find the best filter for noise, but to identify the filter that is a good fit for developing a language model with environmental noise for a robust language identification system. Our experiments identified the best combination of filters to increase the accuracy of language identification using short speech, which led us to modify the pre-processing phase of the overall language identification system.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_79-Evaluating_the_Effect_of_Multiple_Filters_in_Automatic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HPSOGWO: A Hybrid Algorithm for Scientific Workflow Scheduling in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111078</link>
        <id>10.14569/IJACSA.2020.0111078</id>
        <doi>10.14569/IJACSA.2020.0111078</doi>
        <lastModDate>2020-10-31T18:17:10.1530000+00:00</lastModDate>
        
        <creator>Neeraj Arora</creator>
        
        <creator>Rohitash Kumar Banyal</creator>
        
        <subject>Cloud computing; hybrid algorithms; metaheuristic algorithms; optimization; workflow scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Virtualization is one of the key features of cloud computing, where physical machines are virtually divided into several virtual machines in the cloud. Users’ tasks are run on these virtual resources as per their requirements. When a user requests services from the cloud, the user’s tasks are allotted to the virtual resources depending on their needs. An efficient scheduling mechanism is required for optimizing the involved parameters. Scientific workflows deal with a large amount of data with dependency constraints and are used to simplify applications in diverse scientific domains. Scheduling workflows in cloud computing is a well-known NP-hard problem. Deploying such data- and compute-intensive workflows on the cloud needs an efficient scheduling algorithm. In this paper, we propose a multi-objective model-based hybrid algorithm (HPSOGWO), which combines the desirable characteristics of two well-known algorithms: particle swarm optimization (PSO) and grey wolf optimization (GWO). The results are analyzed on complex real-world scientific workflows such as Montage, CyberShake, Inspiral, and Sipht. We consider two essential parameters, total execution time and total execution cost, while working in the cloud environment. The simulation results show that the proposed algorithm performs well compared to other state-of-the-art algorithms such as round-robin (RR), ant colony optimization (ACO), heterogeneous earliest finish time (HEFT), and particle swarm optimization (PSO).</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_78-HPSOGWO_A_Hybrid_Algorithm_for_Scientific_Workflow_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Analysis of Arabic Tweets during COVID-19 Pandemic in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111077</link>
        <id>10.14569/IJACSA.2020.0111077</id>
        <doi>10.14569/IJACSA.2020.0111077</doi>
        <lastModDate>2020-10-31T18:17:10.1400000+00:00</lastModDate>
        
        <creator>Huda Alhazmi</creator>
        
        <creator>Manal Alharbi</creator>
        
        <subject>Emotion analysis; Arabic tweets; COVID-19; Twitter; Lexicon-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Social media has emerged as an effective platform for investigating people’s opinions and feelings towards crisis situations. Along with the Coronavirus crisis, a range of different emotions is revealed, including anger, sadness, fear, trust, and anticipation. In this paper, we investigate the public’s emotional responses associated with this pandemic using Twitter as the platform for our analysis. We investigate how emotional perspectives vary regarding the ending of the lockdown in Saudi Arabia. We develop an emotion detection method to classify tweets into the eight standard emotions. Furthermore, we present insights into the changes in the intensity of the emotions over time. Our findings show that joy and anticipation are the most dominant among all emotions. While people express positive emotions, tones of fear, anger, and sadness are also revealed. Moreover, this research may help to better understand public behavior, gain insight, and make proper decisions.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_77-Emotion_Analysis_of_Arabic_Tweets_during_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Images Defects in Street Scenes Affect the Performance of Semantic Segmentation Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111076</link>
        <id>10.14569/IJACSA.2020.0111076</id>
        <doi>10.14569/IJACSA.2020.0111076</doi>
        <lastModDate>2020-10-31T18:17:10.1230000+00:00</lastModDate>
        
        <creator>Hoda Imam</creator>
        
        <creator>Bassem A. Abdullah</creator>
        
        <creator>Hossam E. Abd El Munim</creator>
        
        <subject>Semantic segmentation; deep learning; cityscapes; DeepLabv3+; PSPNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Semantic segmentation methods are used in autonomous car development to label the pixels of road images (e.g., street, building, pedestrian, car, and so on). DeepLabv3+ and PSPNet are two of the best-performing semantic segmentation methods according to the Cityscapes benchmark. Although these methods achieve very high performance on clear road images, they have not been tested under severe imaging conditions. In this work, we provide new Cityscapes datasets with severe imaging conditions: foggy, rainy, blurred, and noisy datasets. We evaluate the performance of DeepLabv3+ and PSPNet using our datasets. Our work demonstrates that although these models have high performance on clear images, they show very weak performance under the different imaging challenges. We show that road semantic segmentation methods must be evaluated using different kinds of severe imaging conditions to ensure their robustness in autonomous driving.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_76-How_Images_Defects_in_Street_Scenes_Affect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Continuous Human Activity Recognition in Logistics from Inertial Sensor Data using Temporal Convolutions in CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111074</link>
        <id>10.14569/IJACSA.2020.0111074</id>
        <doi>10.14569/IJACSA.2020.0111074</doi>
        <lastModDate>2020-10-31T18:17:10.0930000+00:00</lastModDate>
        
        <creator>Abbas Shah Syed</creator>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Areez Khalil Memon</creator>
        
        <subject>Convolutional Neural Networks; deep learning; Human Activity Recognition (HAR); inertial sensors; LARa dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Human activity recognition has been an important task for the research community. With the introduction of deep learning architectures, the performance of activity recognition algorithms has improved significantly. However, most of the research in this area has focused on activity recognition for health/assisted living, with other applications being given less attention. This paper considers continuous activity recognition in logistics (order picking and packing operations) using a convolutional neural network with temporal convolutions on inertial measurement sensor data from the recently released LARa dataset. Four variants of the popular CNN-IMU are evaluated, and a discussion of the results is provided. The results indicate that temporal convolutions are able to achieve satisfactory performance for some activities (hand center and cart), whereas they perform poorly for the activities of stand and hand up.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_74-Continuous_Human_Activity_Recognition_in_Logistics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Multi-View Graph Embedding for Features Selection and Remotely Sensing Signal Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111075</link>
        <id>10.14569/IJACSA.2020.0111075</id>
        <doi>10.14569/IJACSA.2020.0111075</doi>
        <lastModDate>2020-10-31T18:17:10.0930000+00:00</lastModDate>
        
        <creator>Abdullah Alhumaidi Alotaibi</creator>
        
        <creator>Sattam Alotaibi</creator>
        
        <subject>Signal processing; remote sensing images; features selection; graph embedding; unlabeled samples</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Nowadays, signal processing remains an intensely challenging area of research. In fact, various strategies have been suggested to address semi-supervised learning, feature selection, and unlabeled-sample challenges. The most frequent achievement was dedicated to exploiting a single kind of feature/view from the original data. Recently, advanced techniques have aimed to explore signals from different views and to properly integrate divergent kinds of interdependent features. In this paper, we propose a novel design of a multi-view graph embedding for feature selection, allowing a convenient integration of complementary weighted features. The proposed framework combines the singular properties of each feature space to accomplish a physically meaningful cooperative low-dimensional selection of input data. This allows us not only to perform semi-supervised classification, but also to propagate narrow class information to unlabeled samples when only partial labeling knowledge is available. This paper makes the following contributions: (i) a feature selection schema for data refinement; and (ii) the adaptation of a multi-view graph-based approach through better tackling of semi-supervised and dimensionality issues. Our experimental results, conducted using a mixture of complementary features and aerial image datasets, demonstrate the effectiveness of the proposed framework without significantly increasing computational complexity.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_75-Design_of_Multi_View_Graph_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Bee Colony Algorithm Optimization for Video Summarization on VSUMM Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111073</link>
        <id>10.14569/IJACSA.2020.0111073</id>
        <doi>10.14569/IJACSA.2020.0111073</doi>
        <lastModDate>2020-10-31T18:17:10.0770000+00:00</lastModDate>
        
        <creator>Vinsent Paramanantham</creator>
        
        <creator>S. SureshKumar</creator>
        
        <subject>Artificial Bee Colony optimization; video summarization; online video highlighting; sparse-land; anomaly detection; image histogram; HOG; HOOF; canny edge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This paper attempts to show that the Artificial Bee Colony (ABC) algorithm can be used as an optimization algorithm in a sparse-land setup to solve video summarization. The critical challenge is that quasi-real-time video summarization is still time-consuming with ANN-based methods, as these methods require training time. By doing video summarization in quasi-real-time, we can address other challenges such as anomaly detection and online video highlighting. A simple threshold function is tested to see the reconstruction error of the current frame given the previous 50 frames from the dictionary. The frames with higher threshold errors form the video summarization. In this work, we used image histogram, HOG, HOOF, and Canny edge features as inputs to the ABC algorithm. We used MATLAB 2014a for the feature extraction and the ABC algorithm for video summarization. The results are compared to the existing methods. The evaluation scores are calculated on the VSUMM dataset for all 50 videos against the two user summaries. This research answers how the ABC algorithm can be used in a sparse-land setup to solve video summarization. Further studies are required to understand the performance evaluation scores as the threshold function changes.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_73-Artificial_Bee_Colony_Algorithm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physical Parameter Estimation of Linear Voltage Regulators using Model-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111072</link>
        <id>10.14569/IJACSA.2020.0111072</id>
        <doi>10.14569/IJACSA.2020.0111072</doi>
        <lastModDate>2020-10-31T18:17:10.0600000+00:00</lastModDate>
        
        <creator>Ng Len Luet</creator>
        
        <creator>Mohd Hairi Mohd Zaman</creator>
        
        <creator>Asraf Mohamed Moubark</creator>
        
        <creator>M Marzuki Mustafa</creator>
        
        <subject>Linear voltage regulator; stability; capacitor; equivalent series resistance; physical parameter; model-based approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Electronic systems are becoming increasingly sophisticated due to the emergence of advanced technology, which can produce robust integrated circuits by reducing the dimensions of transistors to just a few nanometers. Furthermore, most electronic systems nowadays are in the form of system-on-chip and thus require stable voltage specifications. One of the critical electronic components is the linear voltage regulator (LVR). LVRs are a type of power converter used to maintain a stable and constant DC voltage to the load. Therefore, LVR stability is an essential aspect of voltage regulator design. The main factor influencing the stability of LVRs is load disturbance. In general, disturbances such as a sudden change in load current can be compensated for by an output capacitor, which contains a parasitic element known as equivalent series resistance (ESR). Therefore, the ESR and output capacitor specified in the datasheet are essential for compensating for load disturbance. However, LVR manufacturers typically do not provide detailed information, such as the internal physical parameters associated with the LVR, in the datasheet. This situation leads to difficulties in identifying the behavior and stability of LVRs. Therefore, this study aims to develop a method for estimating the internal physical parameters of LVR circuits that are difficult to measure directly by using a model-based approach (MBA). In this study, the MBA estimates the LVR model transfer function by analyzing the input and output signals via a linear regression method. Simulations in MATLAB and OrCAD Capture CIS software verify the estimated LVR model transfer function. Results show that the MBA performs excellently in estimating the physical parameters of LVRs and determining their stability.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_72-Physical_Parameter_Estimation_of_Linear_Voltage_Regulators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Game-Based Learning Approach to Improve Students’ Spelling in Thai</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111071</link>
        <id>10.14569/IJACSA.2020.0111071</id>
        <doi>10.14569/IJACSA.2020.0111071</doi>
        <lastModDate>2020-10-31T18:17:10.0300000+00:00</lastModDate>
        
        <creator>Krittiya Saksrisathaporn</creator>
        
        <subject>Game for learning; game-based learning; mobile game; paired sample t-test; Thai; misspelled words</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The number of misspelled Thai words written on social media by Thai youth is increasing rapidly. To decrease the number of misspelled Thai words and improve learning achievement for Thai youth, a first-person 3D mobile game was developed. The game runs on an Android smartphone using a gyroscope sensor and has 3 levels in 5 stages. Learning achievement is evaluated from the pre- and post-test scores of 37 players, who are bachelor’s degree students of Animation and Game, College of Arts, Media and Technology, Chiang Mai University, Thailand. The data were statistically analysed with a paired sample t-test. Pre- and post-test scores were positively correlated (r = 0.666, p &lt; 0.001). There was a significant average difference between pre- and post-test scores (t(36) = -11.776, p &lt; 0.001). On average, post-test scores were 15.027 points higher than pre-test scores (95% CI [-17.615, -12.439]). The results of the research show that the game-based learning approach significantly improved players’ learning achievement in misspelled written Thai words.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_71-A_Game_Based_Learning_Approach_to_Improve_Students_Spelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Trust-Based Collaborative Filtering Approach to Design Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111070</link>
        <id>10.14569/IJACSA.2020.0111070</id>
        <doi>10.14569/IJACSA.2020.0111070</doi>
        <lastModDate>2020-10-31T18:17:10.0130000+00:00</lastModDate>
        
        <creator>Vineet K. Sejwal</creator>
        
        <creator>Muhammad Abulaish</creator>
        
        <subject>Recommender system; collaborative filtering; cold-start; trust; rating prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Collaborative Filtering (CF) is one of the most frequently used recommendation techniques for designing recommender systems, improving accuracy in terms of recommendation, coverage, and rating prediction. Although CF is a well-established and popular algorithm, it suffers from issues such as black-box recommendation, data sparsity, cold-start, and limited content, which hamper its performance. Moreover, CF is fragile and not well suited to finding similar users. The existing literature on CF shows that integrating users’ social information into a recommender system can handle the above-mentioned issues effectively. Recently, trustworthiness among users has been considered one such type of social information and has been successfully combined with CF to predict ratings of unrated items. In this paper, we propose a trust-based recommender system, TrustRER, which integrates users’ trust into an existing user-based CF algorithm for rating prediction. It uses both ratings and textual information of the items to generate a trust network for users and derive trust scores. For the trust score, we define three novel trust statements based on user rating values, emotion values, and review helpfulness votes. To generate the trust network, we use trust propagation metrics to compute trust scores between users who are not directly connected. The proposed TrustRER is experimentally evaluated on three datasets from the movie, music, and hotel and restaurant domains, and it performs significantly better than nine standard baselines and one state-of-the-art recommendation method. TrustRER is also able to deal effectively with the cold-start problem, as it improves rating prediction accuracy for cold-start users in comparison to the baselines and the state-of-the-art method.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_70-A_Trust_Based_Collaborative_Filtering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain and Internet of Things for Business Process Management: Theory, Challenges, and Key Success Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111069</link>
        <id>10.14569/IJACSA.2020.0111069</id>
        <doi>10.14569/IJACSA.2020.0111069</doi>
        <lastModDate>2020-10-31T18:17:09.9830000+00:00</lastModDate>
        
        <creator>Mabrook S. Al-Rakhami</creator>
        
        <creator>Majed Al-Mashari</creator>
        
        <subject>Business process management; BPM; Internet of things; IoT; blockchain technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The combination of business process management (BPM) and emerging technologies is a logical step. The growing number of advanced technologies shows that their contributions are of high value for business processes. From a BPM point of view, value creation from modern technologies such as the Internet of Things (IoT) and Blockchain technology is pivotal on a progressively larger scale and affects business processes in different ways. However, current research in this area is still at an early stage. To close this research gap and address the lack of experience in this area, the integration of Blockchain and IoT technologies with BPM will play an essential role in a corporate context, particularly in the context of inter/intra-organizational information systems and their diverse design options. This review paper surveys the impact of two emerging technologies, the Internet of Things and Blockchain technology, on BPM and illustrates the current state of the art in this research domain. Each technology was investigated through a design science research approach to provide a descriptive theoretical overview and characterize its theoretical background, challenges, and key success factors.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_69-Blockchain_and_Internet_of_Things_for_Business_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level of Depression in Positive Patients for COVID-19 in the Puente Piedra District of North Lima, 2020</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111068</link>
        <id>10.14569/IJACSA.2020.0111068</id>
        <doi>10.14569/IJACSA.2020.0111068</doi>
        <lastModDate>2020-10-31T18:17:09.9500000+00:00</lastModDate>
        
        <creator>Rosa Perez-Siguas</creator>
        
        <creator>Eduardo Matta-Solis</creator>
        
        <creator>Hernan Matta-Solis</creator>
        
        <creator>Anika Remuzgo-Artezano</creator>
        
        <creator>Lourdes Matta-Zamudio</creator>
        
        <creator>Melissa Yauri-Machaca</creator>
        
        <subject>Depression; coronavirus; patients; home care</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Depression is one of the emotional confrontations that patients positive for COVID-19 must endure during isolation and quarantine. Therefore, the objective of this research study is to determine the level of depression in patients positive for COVID-19 in the Puente Piedra district of North Lima, 2020. This is a quantitative, non-experimental, descriptive, and cross-sectional study, with a flow chart for home care by nursing professionals, in a population of 23 patients positive for COVID-19 from the Puente Piedra district of North Lima, who answered a questionnaire with sociodemographic data and Zung&#39;s self-assessment depression scale. Regarding the level of depression in patients positive for COVID-19, the results show that 14 patients (60.9% of the total) have a normal level of depression, while 9 patients (39.1% of the total) are slightly depressed. In conclusion, it is necessary to intervene in the psychological aspect according to the characteristics of each patient, such as gender and age. In the future, it is recommended to carry out more research at the national level, as it will allow researchers to go into more detail about the mental health of patients during and after the COVID-19 pandemic.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_68-Level_of_Depression_in_Positive_Patients_for_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Simulation for Entertainment using Genetic Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111067</link>
        <id>10.14569/IJACSA.2020.0111067</id>
        <doi>10.14569/IJACSA.2020.0111067</doi>
        <lastModDate>2020-10-31T18:17:09.9370000+00:00</lastModDate>
        
        <creator>Jin Woo Kim</creator>
        
        <creator>Hun Lim</creator>
        
        <creator>Taeho You</creator>
        
        <creator>Mee Young Sung</creator>
        
        <subject>Virtual simulation; entertainment; genetic information; DeoxyriboNucleic Acid (DNA) matching; celebrity; favorability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Genetic information has been researched to predict disease and to discover clues to biological causes, especially in the medical field. Moreover, based on the reliability of genetic information, it can also provide a powerful, realistic experience with VR devices. In this paper, we developed a dating simulation game in which users can meet celebrities. The celebrity&#39;s personality, the favorability feedback for each conversational choice, and the creation of appropriate choices are based on the genetic information of the user and the celebrity. In addition, a method for utilizing genetic DNA information for virtual simulations is proposed. In this study, we established that DNA information is related to human relationships in both the real and virtual worlds. We also concluded that DNA information contributes to the development of interpersonal relationships and mental health. However, a person&#39;s personality or preference for a specific situation or object is not determined only by genes. Therefore, more quantitative and multifaceted studies will need to be conducted on the effect of genes on personal preferences. We experimented only with virtual characters in virtual reality. It would be meaningful to proceed with human experiments involving not only a virtual character in the virtual environment but also another human in the virtual environment. Eventually, the results of experiments between virtual characters and humans should be compared.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_67-Virtual_Simulation_for_Entertainment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Prediction of Outpatient No-Show Visits by using Deep Neural Network from Large Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111066</link>
        <id>10.14569/IJACSA.2020.0111066</id>
        <doi>10.14569/IJACSA.2020.0111066</doi>
        <lastModDate>2020-10-31T18:17:09.9200000+00:00</lastModDate>
        
        <creator>Riyad Alshammari</creator>
        
        <creator>Tahani Daghistani</creator>
        
        <creator>Abdulwahhab Alshammari</creator>
        
        <subject>No-show; outpatients; machine learning; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Patients’ no-show is one of the leading causes of the increasing financial burden on healthcare organizations and is an indicator of healthcare systems&#39; quality and performance. Patients&#39; no-shows affect healthcare delivery, workflow, and resource planning. This study aims to develop a prediction model for no-show visits using a machine learning approach. A large volume of data was extracted from the electronic health records of patient visits to outpatient clinics under the umbrella of large medical cities in Saudi Arabia. The data consist of more than 33 million visits, with an 85% no-show rate. A total of 29 features were utilized based on demographic, clinical, and appointment characteristics. Nine features were original data elements, while 20 features were derived from data elements. This study used and compared three machine learning algorithms: Deep Neural Network (DNN), AdaBoost, and Naive Bayes (NB). Results revealed that the DNN performed better than NB and AdaBoost, achieving a weighted average of 98.2% precision and 94.3% recall. This study shows that machine learning has the potential to improve the efficiency and effectiveness of healthcare. The results are considered promising, and the model is an excellent candidate for implementation.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_66-The_Prediction_of_Outpatient_No_Show_Visits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Very Deep Neural Networks for Extracting MITE Families Features and Classifying them based on DNA Scalograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111065</link>
        <id>10.14569/IJACSA.2020.0111065</id>
        <doi>10.14569/IJACSA.2020.0111065</doi>
        <lastModDate>2020-10-31T18:17:09.9030000+00:00</lastModDate>
        
        <creator>Mael SALAH JRAD</creator>
        
        <creator>Afef ELLOUMI OUESLATI</creator>
        
        <creator>Zied LACHIRI</creator>
        
        <subject>DNA scalograms; genomic signature; classification; deep learning; transfer learning; VGGNET; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>DNA sequencing has recently generated a very large volume of data in digital format. These data can be compressed, processed, and classified only by using the automatic tools that have been employed in biological experiments. In this work, we are interested in the classification of particular regions of the C. elegans genome, a recently described group of transposable elements (TEs) called Miniature Inverted-repeat Transposable Elements (MITEs). We particularly focus on four MITE families (Cele1, Cele2, Cele14, and Cele42). These elements have distinct chromosomal distribution patterns and specific conserved numbers on the six autosomes of C. elegans. Thus, it is necessary to define specific chromosomal domains and the potential relationship between MITEs and Tc/mariner elements, which makes it difficult to determine the similarities between the MITE and Tc classes. To solve this problem, and more precisely to identify these TEs, the data are classified and compressed in this study using an efficient classifier model. The application of this model consists of four steps. First, the DNA sequences are mapped into scalogram form. Second, the characteristic motifs are extracted in order to obtain a genomic signature. Third, the MITE database is randomly divided into two data sets: 70% for training and 30% for testing. Finally, these scalograms are classified using a transfer learning approach based on pre-trained models such as VGGNet. The introduced model is efficient, as it achieved the highest accuracy rates thanks to the recognition of the correct characteristic patterns; the overall accuracy rate reached 97.11% for the classification of these TE samples. Our approach also allowed classifying and identifying the MITE classes against the Tc class despite their strong similarity. By extracting the features and the characteristic patterns, the volume of massive data was considerably reduced.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_65-Very_Deep_Neural_Networks_for_Extractiong_MITE_Families.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Security Image Steganography Technique using XNOR Operation and Fibonacci Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111064</link>
        <id>10.14569/IJACSA.2020.0111064</id>
        <doi>10.14569/IJACSA.2020.0111064</doi>
        <lastModDate>2020-10-31T18:17:09.8900000+00:00</lastModDate>
        
        <creator>Ali Abdulzahra Almayyahi</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <creator>Faizan Qamar</creator>
        
        <creator>Abdulwahhab Essa Hamzah</creator>
        
        <subject>Image steganography; Huffman algorithm; XNOR operation; Fibonacci algorithm; LSB; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Since the number of internet users is increasing and sensitive information is exchanged continuously, data security has become a problem. Image steganography is one of the ways to exchange secret data securely using images. However, several issues need to be mitigated, especially in the imperceptibility (security) aspect, since the process of embedding secret data in images can be vulnerable to attacks. This paper focuses on developing a secure method for hiding secret messages in an image, based on the standard Least Significant Bit (LSB) technique. Before the embedding stage, the secret message&#39;s size is reduced by compression using the Huffman algorithm, followed by two operations: the Boolean Exclusive-NOR (XNOR) operation and the Fibonacci algorithm for selecting the pixels in which to embed the secret message. As a result of these processes, a stego-image is created with two secret keys. We obtained promising results against standard images, with higher Peak Signal-to-Noise Ratio (PSNR) values of 66.6170, 65.8928, and 65.9386 dB for Lena.bmp, Baboon.bmp, and Pepper.bmp, respectively, compared to other state-of-the-art schemes. The evaluation stage proves the increased level of security as well as imperceptibility.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_64-High_Security_Image_Steganography_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing VoIP BW Utilization over ITTP Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111063</link>
        <id>10.14569/IJACSA.2020.0111063</id>
        <doi>10.14569/IJACSA.2020.0111063</doi>
        <lastModDate>2020-10-31T18:17:09.8730000+00:00</lastModDate>
        
        <creator>AbdelRahman H. Hussein</creator>
        
        <creator>Mahran Al-Zyoud</creator>
        
        <creator>Kholoud Nairoukh</creator>
        
        <creator>Sumaya N. Al-Khatib</creator>
        
        <subject>Voice over IP (VoIP); VoIP protocols; Internet Telephony Transport Protocol (ITTP); payload compression; bandwidth utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Voice over Internet Protocol (VoIP) technology has propagated everywhere and replaced conventional telecommunication technology (e.g., the landline). Nevertheless, several enhancements are needed to improve VoIP performance. One of the main issues is improving VoIP network bandwidth (BW) utilization, and VoIP packet payload compression is one of the key approaches to doing so. This paper proposes a new method to compress the VoIP packet payload. The suggested method works over the Internet Telephony Transport Protocol (ITTP) and is named the Delta-ITTP method. The core idea of the Delta-ITTP method is to find and transmit the delta between successive VoIP packet payloads, which is typically smaller than the original VoIP packet payload. The Delta-ITTP method implements VoIP packet payload compression at the sender side and decompression at the receiver side. During the compression process, the Delta-ITTP method needs to keep some values to restore the original VoIP packet payload at the receiver side. For this, the Delta-ITTP method utilizes some of the IP protocol fields, and no additional header is needed. The Delta-ITTP method has been deployed and compared with the traditional ITTP protocol without compression. The results showed that up to 19% BW saving was achieved in the tested cases, leading to the desired enhancement in VoIP network BW utilization.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_63-Enhancing_VoIP_BW_Utilization_over_ITTP_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Change in Business IT Alignment: Evaluation with CBITA Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111062</link>
        <id>10.14569/IJACSA.2020.0111062</id>
        <doi>10.14569/IJACSA.2020.0111062</doi>
        <lastModDate>2020-10-31T18:17:09.8570000+00:00</lastModDate>
        
        <creator>Imgharene Kawtar</creator>
        
        <creator>Doumi Karim</creator>
        
        <creator>Baina Salah</creator>
        
        <subject>Agility; business IT alignment; change; project</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Organizations introduce changes to adapt to an agile context in a turbulent environment. These changes often have an impact on business and information technology. In most cases, the change impacts organizational elements in ways that leave the adaptation ill-defined, leaving out elements that can lead to misalignment. In this article, the change is realized as a project that will impact all the Business-IT alignment elements. It is important to know the scope of the impact in order to make a complete adaptation. Reviewing the literature, we found no work dealing with this aspect. To fill this gap, we determine the impact of the project on the organization by considering Business-IT alignment, and we compare the AS-IS model of the organization with the TO-BE model, the target to be implemented while keeping the system aligned. Accordingly, we propose a metamodel for change driven by the impact of the project on Business-IT alignment, together with a set of rules and algorithms to predict the impact and the adaptation. To make these contributions operational, we implement the Change Business-IT Alignment (CBITA) tool and demonstrate its applicability through a case study of an urban agency.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_62-Impact_of_Change_in_Business_IT_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamics of Organizational Change</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111061</link>
        <id>10.14569/IJACSA.2020.0111061</id>
        <doi>10.14569/IJACSA.2020.0111061</doi>
        <lastModDate>2020-10-31T18:17:09.8270000+00:00</lastModDate>
        
        <creator>Maximo Flores-Cabezas</creator>
        
        <creator>Desiree Flores-Moya</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <subject>System dynamics; diagram; vertical organization; process organization; focused organization; modular organization; flexible organization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In this research, the evolution of change in an organization, driven by continuous changes in the market, is presented in a qualitative and quantitative way. The changes developed in the organizational structure were aimed at the search for a flexible, dynamic, and agile organization that could adapt to the demands of increasingly informed customers; this engine of change, the customer, required the company to develop a flexible organizational structure. To this end, the key concepts were reviewed, such as systems, processes, activities, and modeling, and the System Dynamics tool was used to elaborate the causal diagram and flow diagram, which made it possible to identify, analyze, and evaluate the variables that affected each stage of organizational change. The evolution of the change that the organization underwent occurred in an unplanned sequential manner, in the following stages: first a vertical organization, second an organization by processes, third a focused organization, fourth a modular organization, and later a flexible organization, which allowed it to adapt to changes in customer orders, orders that each time increased in the characteristics of the product model but decreased in quantities. The changes developed by the organization increased the response speed of order delivery by 43%.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_61-Dynamics_of_Organizational_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Software Defined Networks Controller Storage using Intel Software Guard Extensions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111060</link>
        <id>10.14569/IJACSA.2020.0111060</id>
        <doi>10.14569/IJACSA.2020.0111060</doi>
        <lastModDate>2020-10-31T18:17:09.8100000+00:00</lastModDate>
        
        <creator>Qasmaoui Youssef</creator>
        
        <creator>Maleh Yassine</creator>
        
        <creator>Abdelkrim Haqiq</creator>
        
        <subject>Software defined networks; software guard extensions; storage; integrity; confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The SDN controller is the core of the software-defined network (SDN), providing important network operations that need to be protected from all types of threats. Much research has focused on different layers of SDN controller security, such as anti-DDoS systems or enforcing TLS connections between the controller and the Open vSwitches. One of the major security threats to any program is the execution environment itself (e.g., the operating system and the hardware). Intel&#39;s Software Guard Extensions (SGX) offers a solid layer of security for applications by creating a trusted execution environment. The SDN controller relies on a storage module to keep sensitive data such as flow rules, users’ credentials, and configuration files, so protecting this side of the SDN controller is a must in terms of security. To date, no work has considered SDN controller storage security using Intel SGX. This paper introduces an SGX-enabled SDN controller that ensures integrity and confidentiality in a trusted execution environment by leveraging this recent hardware technology. SGX provides trusted and secure enclaves, which are sealed and unsealed by Intel SGX attestation mechanisms to protect the executed code and data, in live memory and on disk, from being altered by unauthorized access; even highly privileged code such as the OS itself is kept from altering data inside enclaves. We implemented the approach using the Floodlight SDN controller running on real Intel SGX-enabled hardware. Our evaluation shows that the SGX-enabled SDN controller introduces only a slight observable performance overhead to the Floodlight controller compared to its advantages in terms of security.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_60-Secure_Software_Defined_Networks_Controller_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Solution for Distributed Database Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111059</link>
        <id>10.14569/IJACSA.2020.0111059</id>
        <doi>10.14569/IJACSA.2020.0111059</doi>
        <lastModDate>2020-10-31T18:17:09.7930000+00:00</lastModDate>
        
        <creator>Bishoy Sameeh Matta Sawiris</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <subject>Distributed database; database challenges; deep learning; fragmentation; blockchain; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Distributed Database Systems (DDBS) are a set of logically networked computer databases, managed across different sites and locations yet accessible to the user as a single database. DDBS is an emerging technology useful for data storage and retrieval. Still, some problems and issues degrade the performance of distributed databases. The aim of this paper is to provide a novel solution to distributed database problems, based on distributed database challenges collected in one diagram and on the relationships among DDB challenges in another. This solution presents two methodologies for Distributed Database Management Systems: deep learning-based fragmentation and allocation, and blockchain-based security provisioning. The contribution of this paper is twofold. First, it summarizes the major issues and challenges in distributed databases and reviews the research efforts presented to resolve them. Second, it presents a distributed database solution that resolves the major issues of distributed database technology. The paper also highlights future research directions appropriate for distributed database technology after implementation in a large-scale environment and recommends technologies that can ensure the best implementation of the proposed solution.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_59-A_Novel_Solution_for_Distributed_Database_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>White Blood Cells Detection using YOLOv3 with CNN Feature Extraction Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111058</link>
        <id>10.14569/IJACSA.2020.0111058</id>
        <doi>10.14569/IJACSA.2020.0111058</doi>
        <lastModDate>2020-10-31T18:17:09.7630000+00:00</lastModDate>
        
        <creator>Nurasyeera Rohaziat</creator>
        
        <creator>Mohd Razali Md Tomari</creator>
        
        <creator>Wan Nurshazwani Wan Zakaria</creator>
        
        <creator>Nurmiza Othman</creator>
        
        <subject>Alexnet; darknet19; darknet53; detection; VGG16; white blood cells; YOLO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>There are several types of blood cancer; one of them is leukaemia, which stems from a problem in leukocyte, or white blood cell (WBC), production in the bone marrow. Detection at an early stage is important so that the patient can receive proper treatment. The conventional detection and blood count method is less efficient, as it is done manually by a pathologist; results therefore queue up for a long time, which delays treatment. A faster detection procedure and technique would have a high impact on real-time diagnostics. Fortunately, these problems can be overcome by automating blood test procedures, and one such effort is the development of deep learning for WBC detection and classification. In computer-aided WBC detection, platforms based on You Only Look Once (YOLO) present promising outcomes; however, the optimal YOLO structure remains an open question. This paper investigates deep learning-based WBC detection using You Only Look Once version 3 (YOLOv3) with different pretrained Convolutional Neural Network (CNN) models. The models tested are AlexNet, Visual Geometry Group 16 (VGG16), Darknet19, and the existing YOLOv3 feature extraction model, Darknet53. The architecture consists of the bounding box for class prediction, feature extraction, and additional convolutional layers. It was trained with 242 WBC images from the LISC (Leukocyte Images for Segmentation and Classification) dataset. The final outcome shows that the YOLOv3 architecture with AlexNet as its feature extractor produced the highest mean average precision of 98% and performed better than the other models.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_58-White_Blood_Cells_Detection_using_YOLOv3.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Communication across the Internet by Encrypting the Data using Cryptography and Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111057</link>
        <id>10.14569/IJACSA.2020.0111057</id>
        <doi>10.14569/IJACSA.2020.0111057</doi>
        <lastModDate>2020-10-31T18:17:09.7470000+00:00</lastModDate>
        
        <creator>P Rajesh</creator>
        
        <creator>Mansoor Alam</creator>
        
        <creator>Mansour Tahernezhadi</creator>
        
        <creator>T Ravi Kumar</creator>
        
        <creator>Vikram Phaneendra Rajesh</creator>
        
        <subject>Cryptography; image steganography; least significant bits; secure communication; Huffman coding; data encryption; data compression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Sharing information has become an easy task nowadays: a single tap can take information to any part of the world. This came about through the evolution of the cyber world, which helps us stay connected with the entire world. However, the widespread use of the cyber world brings the risk of data breaches by unknown or unauthorized people while data is being sent from one user to another. Unauthorized people can gain access to the data and extract useful information from it, and confidential data sent over the web may be tampered with before reaching the other end user. To prevent such breaches, we can encrypt the data being sent so that only the receiver can decrypt the message, thus concealing the data. There are many ways to do this; the most popular is cryptography, and another is steganography. Within these techniques there are many existing methods, such as image steganography, secret-key cryptography, and the LSB method, which are used to encrypt data and secure communication. Here, a cryptographic algorithm is used together with image steganography to encrypt the data for greater security, resembling a two-step verification process. In this paper we apply the Huffman coding algorithm in the image steganography step to ensure that even a large amount of data can fit into a small image. The ciphertext is compressed using Huffman coding and then embedded into an image using the LSB method of image steganography, in which the least significant bits of the image are replaced with the data from the previous step. We implemented the approach in Python, and it shows better compression results, allowing large volumes of data to be transferred easily over the network.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_57-Secure_Communication_across_the_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Annular Ring Coupled Stacked Psi Shape Patch Antenna for Wireless Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111056</link>
        <id>10.14569/IJACSA.2020.0111056</id>
        <doi>10.14569/IJACSA.2020.0111056</doi>
        <lastModDate>2020-10-31T18:17:09.7170000+00:00</lastModDate>
        
        <creator>K. Mahesh Babu</creator>
        
        <creator>T.V. Rama Krishna</creator>
        
        <subject>Annular ring; stacked psi shape; WLAN; ISM band</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This paper designs and analyzes an annular-ring-coupled stacked Psi-shaped patch antenna with a coplanar waveguide (CPW) feed, operating in dual frequency bands. The proposed model comprises stacked Psi shapes resonating at the lower-order frequency, while the second resonating band is obtained through capacitively coupled overlapped annular rings. The geometrical dimensions of the proposed model are 3.5*2 (L*W), based on the lower-order resonating band. The design and simulations were performed using the DS CST Microwave Studio suite. The model achieves dual resonant bands, (2.19-2.68) GHz with 490 MHz impedance bandwidth and (5.569-6.09) GHz with 530 MHz impedance bandwidth. The center frequencies are 2.42 GHz and 5.815 GHz, with return losses of -28.97 dB and -28.99 dB, respectively. The design exhibits a maximum gain of 5 dB with bidirectional and omnidirectional patterns in the E- and H-planes, and the axial ratio at the two resonating bands is less than 3 dB. Parametric analysis of the reflection coefficient was performed on geometric variables such as ground width, ground length, and permittivity. A clean notch band is also exhibited over the (3-5) GHz frequency range. Simulated and measured results show good agreement. The model is suitable for WLAN (Wireless Local Area Network) and ISM (Industrial, Scientific and Medical) band applications.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_56-Dual_Annular_Ring_Coupled_Stacked.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Electro-Stimulator Controlled by Bluetooth for the Improvement of Muscular Atrophy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111055</link>
        <id>10.14569/IJACSA.2020.0111055</id>
        <doi>10.14569/IJACSA.2020.0111055</doi>
        <lastModDate>2020-10-31T18:17:09.7000000+00:00</lastModDate>
        
        <creator>Paul Portilla Achata</creator>
        
        <creator>Ra&#250;l Sulla Torres</creator>
        
        <creator>Juan Carlos Copa Pineda</creator>
        
        <creator>Agueda Munoz del Carpio Toia</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Design; electro-stimulator; microcontroller; electric pulses; muscular atrophy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Muscle stimulation exploits the work a muscle performs when it is in constant exercise or contraction. This article presents the design of an electro-stimulator that generates electrical pulses in muscle cells through two electrodes positioned on an area of the body, eliciting a cellular response that improves muscular atrophy. The steps followed were: construction of the block diagram; design and development of the circuit; design and development of the control based on the PIC12F683 and PIC16F690 microcontrollers; and finally, development of the mobile application software that controls the equipment over Bluetooth, following the IEC 60601-1 standard for basic safety and essential performance of medical equipment. Through the mobile application it was possible to control the frequency, application time, and duty-cycle amplitude, yielding better results when applying therapy to specific areas of the body. The resulting design responds to the user’s parameters, using the Bluetooth of a mobile device to trigger the generation of electrical pulses in the muscle cells and improve muscular atrophy. The equipment can be used in therapeutic sessions for people with quadriplegia, improving the physiotherapy sessions performed on patients.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_55-Design_of_an_Electro_Stimulator_Controlled_by_Bluetooth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Oversampling Threshold Strategy for Machine Learning Performance Optimisation in Insurance Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111054</link>
        <id>10.14569/IJACSA.2020.0111054</id>
        <doi>10.14569/IJACSA.2020.0111054</doi>
        <lastModDate>2020-10-31T18:17:09.6700000+00:00</lastModDate>
        
        <creator>Bouzgarne Itri</creator>
        
        <creator>Youssfi Mohamed</creator>
        
        <creator>Bouattane Omar</creator>
        
        <creator>Qbadou Mohamed</creator>
        
        <subject>Machine learning; oversampling; SMOTE; insurance fraud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Insurance fraud is one of the most common frauds across sectors of the economy. Faced with increasingly imaginative fraudsters creating fraud scenarios and the emergence of organized crime groups, fraud detection based on artificial intelligence remains one of the most effective approaches. Real-world datasets are usually unbalanced, composed mainly of the &quot;non-fraudulent&quot; class with a very small percentage of &quot;fraudulent&quot; examples to train the model, so prediction models see their performance severely degraded when the target class is so poorly represented. The present work therefore proposes an approach that improves the relevance of the results of the best-known machine learning algorithms and handles imbalanced classes in classification problems for insurance fraud prediction. We use one of the most efficient approaches to re-balance training data, SMOTE, in a supervised method applied to the automobile claims dataset &quot;carclaims.txt&quot;. We compare the results of the different measurements and question their relevance in the study of unbalanced, labeled datasets. This work shows that the SMOTE method with the KNN algorithm can achieve better classifier performance, in terms of true positive rate, than previous research. The goal of this work is to conduct a study of algorithm selection and performance evaluation among different ML classification algorithms, and to propose a new approach, TH-SMOTE, that improves performance by defining the optimum oversampling threshold according to the G-mean measure.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_54-Empirical_Oversampling_Threshold_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customized BERT with Convolution Model : A New Heuristic Enabled Encoder for Twitter Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111053</link>
        <id>10.14569/IJACSA.2020.0111053</id>
        <doi>10.14569/IJACSA.2020.0111053</doi>
        <lastModDate>2020-10-31T18:17:09.6530000+00:00</lastModDate>
        
        <creator>Fatima-ezzahra LAGRARI</creator>
        
        <creator>Youssfi ELKETTANI</creator>
        
        <subject>Twitter data; sentiment analysis; tokenization; optimized BERT; Lion Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The Twitter messaging service has become a space where news consumers and patrons convey their sentiments. Capturing these emotions or sentiments accurately remains a major challenge for analysts; moreover, Twitter data includes both spam and authentic content, which often hampers accurate sentiment categorization. This paper introduces a new customized BERT (Bidirectional Encoder Representations from Transformers) based sentiment classification. The proposed work consists of a pre-processing and tokenization step followed by customized BERT-based classification via an optimization concept. Initially, the collected raw tweets are pre-processed via &quot;stop word removal, stemming and blank space removal&quot;. Prevailing semantic words are acquired, from which the tokens (meaningful words) are extracted in the tokenization phase. These extracted tokens are then classified via an optimized BERT whose weights and biases are optimally tuned by the standard Lion Algorithm (LA); in addition, the maximum sequence length of the BERT encoder is updated with the standard LA. Finally, the performance of the proposed work is compared against other state-of-the-art models with respect to different performance measures.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_53-Customized_BERT_with_Convolution_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach to Identifying Students at Risk of Dropout: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111052</link>
        <id>10.14569/IJACSA.2020.0111052</id>
        <doi>10.14569/IJACSA.2020.0111052</doi>
        <lastModDate>2020-10-31T18:17:09.6230000+00:00</lastModDate>
        
        <creator>Roderick Lottering</creator>
        
        <creator>Robert Hans</creator>
        
        <creator>Manoj Lall</creator>
        
        <subject>EDM; student dropout; binary classification; ensemble method; KDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The increase in students’ dropout rate is a huge concern for institutions of higher learning. In this article, classification techniques are applied to identify students “at risk” of dropping out of their registered qualifications. Identifying such students timeously benefits both the students and the institutions with which they are registered. This study uses Random Forest, Support Vector Machines, Decision Trees, Na&#239;ve Bayes, K-Nearest Neighbor, and Logistic Regression for classification. The selected algorithms were applied to a dataset of 4419 student records obtained from the institutional database, covering Diploma students enrolled in the Faculty of Information and Communication Technology. The results reveal that the overall accuracy of Random Forest (94.14%) was better than that of the other algorithms in identifying students at risk of dropout.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_52-A_Machine_Learning_Approach_to_Identifying_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Manar: An Arabic Game-based Application Aimed for Teaching Cybersecurity using Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111051</link>
        <id>10.14569/IJACSA.2020.0111051</id>
        <doi>10.14569/IJACSA.2020.0111051</doi>
        <lastModDate>2020-10-31T18:17:09.6070000+00:00</lastModDate>
        
        <creator>Afnan Alsadhan</creator>
        
        <creator>Asma Alotaibi</creator>
        
        <creator>Lulu Altamran</creator>
        
        <creator>Majd Almalki</creator>
        
        <creator>Moneera Alfulaij</creator>
        
        <creator>Tarfa Almoneef</creator>
        
        <subject>Cybersecurity; image processing; social engineering; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>People use the Internet for various activities, including exchanging money, playing games, and shopping. However, this powerful network came at a cost: the features that give the Internet these capabilities are also what make it vulnerable. The need for cybersecurity tools and practices is recognized especially for children, since they tend to be naive and can be easily tricked. Aimed at children from 6 to 12 years old, Manar is an Arabic smartphone game that seeks to build a generation well-informed about cybersecurity issues. It teaches them notable cybersecurity topics such as social engineering and cryptography, and it has a very appealing theme to attract children to play the game: “pirates and islands”, where each level is represented as an island and a moving pirate ship navigates between the levels. The application introduces image processing technology in a unique way, allowing children to move around and look for objects, which makes the game as interactive as possible and holds children’s attention.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_51-Manar_an_Arabic_Game_Based_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supervised Hyperspectral Image Classification using SVM and Linear Discriminant Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111050</link>
        <id>10.14569/IJACSA.2020.0111050</id>
        <doi>10.14569/IJACSA.2020.0111050</doi>
        <lastModDate>2020-10-31T18:17:09.5930000+00:00</lastModDate>
        
        <creator>Shambulinga M</creator>
        
        <creator>G. Sadashivappa</creator>
        
        <subject>Linear discriminant analysis; support vector machine; guided image filtering; bilateral filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Hyperspectral images are used to recognize and identify objects on the earth’s surface. These images contain a large number of spectral bands, which makes classification a difficult task. Problems arising from the high number of spectral dimensions are addressed through feature extraction and reduction; however, accuracy and computational time remain important challenges in hyperspectral image classification. Hence, in this paper, a supervised method is developed to classify hyperspectral images using a support vector machine (SVM) and linear discriminant analysis (LDA). Spectral features of the images are extracted and reduced using LDA, and the reduced features are classified with an SVM using an RBF kernel into classes such as buildings and vegetation fields. The simulation results show that the SVM algorithm combined with LDA offers good accuracy and less computational time. Furthermore, classification accuracy is enhanced by incorporating spatial features using edge-preserving filters.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_50-Supervised_Hyperspectral_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Performance of Lymphocyte Classification for Various Datasets using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111049</link>
        <id>10.14569/IJACSA.2020.0111049</id>
        <doi>10.14569/IJACSA.2020.0111049</doi>
        <lastModDate>2020-10-31T18:17:09.5770000+00:00</lastModDate>
        
        <creator>Syadia Nabilah Mohd Safuan</creator>
        
        <creator>Mohd Razali Md Tomari</creator>
        
        <creator>Wan Nurshazwani Wan Zakaria</creator>
        
        <creator>Nor Surayahani Suriani</creator>
        
        <subject>Convolutional neural network; Google colab; training accuracy; validation accuracy; white blood cell</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Analyzing and classifying the five types of white blood cells (WBCs), including lymphocytes, is important for monitoring a deficiency or excess of these cells in the human body; harmful cell counts must be detected early so that treatment can begin promptly. However, the process is tedious and time-consuming when done manually by experts, and it may yield inaccurate results, as it depends on the pathologist’s skill and experience. This work presents a computer-aided system that can serve as a second opinion to the experts. A Convolutional Neural Network (CNN) is applied to avoid a complex structure and to eliminate the feature extraction process. Three CNN models, MobileNet, ResNet, and VGG-16, are evaluated on three different datasets, Kaggle, LISC, and IDB-2, consisting of 6000, 242, and 260 images, respectively. The results are divided into two parts, by dataset and by model. For the IDB-2 dataset, the best model is VGG-16, with training and validation accuracies of 0.9721 and 0.7913, respectively. For the Kaggle and LISC datasets, the best model is ResNet, with training accuracies of 0.9713 and 0.9771, respectively; the highest validation accuracy is 0.5955 for Kaggle and 0.5781 for LISC. Lastly, the dataset most suitable for all models is IDB-2, which achieved the highest training and validation accuracies across MobileNet, ResNet, and VGG-16.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_49-Comparison_Performance_of_Lymphocyte_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gabor Capsule Network for Plant Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111048</link>
        <id>10.14569/IJACSA.2020.0111048</id>
        <doi>10.14569/IJACSA.2020.0111048</doi>
        <lastModDate>2020-10-31T18:17:09.5430000+00:00</lastModDate>
        
        <creator>Patrick Mensah Kwabena</creator>
        
        <creator>Benjamin Asubam Weyori</creator>
        
        <creator>Ayidzoe Abra Mighty</creator>
        
        <subject>Convolutional neural networks; capsule network; gabor filters; crop diseases; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Crop diseases contribute significantly to food insecurity, malnutrition, and poverty in Africa, where the majority of the population is engaged in agriculture. Manual plant disease recognition methods are widespread but limited, ineffective, costly, and time-consuming, making the search for automatic and efficient recognition methods all the more crucial. Machine learning and Convolutional Neural Networks (CNNs) have been applied in other jurisdictions in an attempt to solve these problems and have achieved impressive results in this domain, but they tend to be ‘data-hungry’, invariant, and vulnerable to attacks that can easily lead to misclassifications. Capsule Networks, on the other hand, avoid these weaknesses of CNNs and have not been widely used in this area. This article therefore proposes the use of Gabor filters and a Capsule Network to recognize blurred, deformed, and unseen tomato and citrus disease images. Experimental results show that the proposed model achieves a 98.13% test accuracy, comparable to the performance of state-of-the-art CNN models in the literature. The proposed model also outperformed two state-of-the-art deep learning models (implemented as baselines) in terms of robustness, flexibility, and fast convergence, while having fewer parameters. This work can be extended to other crops and may serve as a useful tool for recognizing unseen plant diseases under bad weather and bad illumination conditions.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_48-Gabor_Capsule_Network_for_Plant_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attribute-based Encryption for Fine-Grained Access Control on Secure Hybrid Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111047</link>
        <id>10.14569/IJACSA.2020.0111047</id>
        <doi>10.14569/IJACSA.2020.0111047</doi>
        <lastModDate>2020-10-31T18:17:09.5300000+00:00</lastModDate>
        
        <creator>Sridhar Reddy Vulapula</creator>
        
        <creator>Srinivas Malladi</creator>
        
        <subject>Secure hybrid cloud; geometric data perturbation; efficiency; fine-grained access control; attribute-based encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In the present scenario, the proliferation of cloud computing services allows hospitals and institutions to move their healthcare data to the cloud, enabling global access to data and on-demand, high-quality services at a lower cost. Healthcare data has sensitive attributes that must be shielded from leakage caused by inference attacks by a curious intruder, whether direct or indirect. A hybrid cloud, a mix of private and public clouds, is proposed for the storage of health data, with data carefully distributed between the private and public clouds to provide protection. While there has been ample work on the delivery of health data for some time now, it does not appear to be effective in terms of both data retrieval and fine-grained access control of the data. This work suggests an approach for more reliable delivery of data using geometric data perturbation of health data over hybrid clouds, supported by an in-depth review of the results. The distribution enforces fine-grained data access control using attribute-based encryption. In addition, the approach also addresses a method to effectively extract relevant information from hybrid clouds.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_47-Attribute_Based_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing Problem on Hyper Hexa Cell Interconnection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111046</link>
        <id>10.14569/IJACSA.2020.0111046</id>
        <doi>10.14569/IJACSA.2020.0111046</doi>
        <lastModDate>2020-10-31T18:17:09.4970000+00:00</lastModDate>
        
        <creator>Aryaf Al-Adwan</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <creator>Anaam Aladwan</creator>
        
        <subject>Parallel computing; load balancing; Hyper Hexa-Cell; interconnection network; Dimension Exchange Method (DEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Dynamic load balancing techniques prevent some computer nodes from being overloaded while others remain idle. Load balancing is considered one of the most challenging topics in parallel computing and is essential for increasing the efficiency of highly parallel systems, especially when solving multitask problems with unpredictable load estimates across the processors of parallel systems and interconnection networks. This paper focuses on developing an efficient load balancing algorithm for the Hyper Hexa-Cell (HHC) interconnection network, namely the HHCLB algorithm. The Dimension Exchange Method (DEM) approach is used to construct the new load balancing approach on the HHC interconnection network. The algorithm was simulated using Java threads, and its performance was evaluated both analytically and experimentally in terms of various performance metrics, including execution time, load balancing accuracy, and communication cost. By applying the proposed load balancing algorithm to the HHC network, a high degree of accuracy and minimal execution time were achieved. Notably, the algorithm recorded a small gap between the execution times for small and large numbers of processors: for instance, it took 0.14 seconds to balance the load of 6 processors and 0.59 seconds to balance the load of 3072 processors. This demonstrates how effective the algorithm is at balancing the load for network sizes ranging from small to large numbers of processors, with only a slight difference in execution time.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_46-Load_Balancing_Problem_on_Hyper_Hexa.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MSTD: Moroccan Sentiment Twitter Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111045</link>
        <id>10.14569/IJACSA.2020.0111045</id>
        <doi>10.14569/IJACSA.2020.0111045</doi>
        <lastModDate>2020-10-31T18:17:09.4830000+00:00</lastModDate>
        
        <creator>Soukaina MIHI</creator>
        
        <creator>Brahim AIT BEN ALI</creator>
        
        <creator>Ismail EL BAZI</creator>
        
        <creator>Sara AREZKI</creator>
        
        <creator>Nabil LAACHFOUBI</creator>
        
        <subject>Sentiment analysis; Moroccan dialect; machine-learning; stemming; lemmatization; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>With the proliferation of social media and Internet accessibility, a massive amount of data has been produced. In most cases, the textual data available through the web comes mainly from people expressing their views in informal words. The Arabic language is one of the hardest Semitic languages to deal with because of its complex morphology. In this paper, a new contribution to Arabic resources is presented: a large Moroccan dataset retrieved from Twitter and carefully annotated by native speakers. To the best of our knowledge, this is the largest Moroccan dataset for sentiment analysis. It is distinguished by its size, the quality ensured by the commitment of its annotators, and its accessibility for the research community. Furthermore, the MSTD (Moroccan Sentiment Twitter Dataset) is benchmarked through experiments carried out for 4-way classification as well as polarity classification (positive, negative). Various machine-learning algorithms are combined with feature extraction techniques to reach optimal settings. This work also presents the effect of stemming and lemmatization on the improvement of the obtained accuracies.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_45-MSTD_Moroccan_Sentiment_Twitter_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Multi-Agent based Network Intrusion Detection System for a Fleet of Drones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111044</link>
        <id>10.14569/IJACSA.2020.0111044</id>
        <doi>10.14569/IJACSA.2020.0111044</doi>
        <lastModDate>2020-10-31T18:17:09.4670000+00:00</lastModDate>
        
        <creator>Said OUIAZZANE</creator>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <creator>Malika ADDOU</creator>
        
        <subject>Fleet of drones; drone; intrusion detection; multi agent system; security; intrusion detection system; autonomy; distribution; UAV; unmanned aerial vehicle; unknown attacks; known attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The objective of this research work is to propose a new model of intrusion detection system for a fleet of UAVs deployed with an ad hoc communication architecture. The security of a drone fleet is rarely addressed by the scientific community: most research has focused on routing protocols and battery autonomy while ignoring the security aspect. The multi-agent paradigm is considered the most adequate and appropriate solution for modeling an effective intrusion detection system capable of detecting intrusions targeting a drone fleet. Multi-agent systems can perfectly address the security problem of a drone fleet, given the mobility, autonomy, cooperation, and distribution characteristics present in the network linking the different nodes of the fleet. The proposed model consists of a set of cooperative, autonomous, communicating, learning, and intelligent agents that collaborate with each other to detect intrusions and suspicious activities targeting the network of a fleet of drones. The system is autonomous and can detect known and unknown cyber attacks in real time, without the need for the human experts who generally design the signatures of known attacks for conventional intrusion detection systems.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_44-Toward_a_Multi_Agent_based_Network_Intrusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Drowsiness Detection for Vehicle Driver using Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111043</link>
        <id>10.14569/IJACSA.2020.0111043</id>
        <doi>10.14569/IJACSA.2020.0111043</doi>
        <lastModDate>2020-10-31T18:17:09.4370000+00:00</lastModDate>
        
        <creator>A F M Saifuddin Saif</creator>
        
        <creator>Zainal Rasyid Mahayuddin</creator>
        
        <subject>Drowsiness detection; convolutional neural network; face region extraction; pupil detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Drowsiness detection during driving is still an unsolved research problem which needs to be addressed to reduce road accidents. Researchers have been trying to solve this problem using various methods, but most of these solutions lag behind in accuracy and real-time performance, are costly and complex to build, and have a high computational cost with a low frame rate. This research proposes a robust method for drowsiness detection of vehicle drivers based on head pose estimation and pupil detection, with the facial region extracted initially. The proposed method uses a frame aggregation strategy for cases where the face region cannot be extracted in a frame due to shortcomings such as light reflection or shadow. To improve identification under highly varying lighting conditions, the proposed research uses the cutting-edge cascade-of-regressors method, where each regression refines the estimation of facial landmarks. The proposed method uses a deep convolutional neural network (DCNN) for accurate pupil detection, learning nonlinear data patterns. In this context, the challenges of varying illumination, blurring, and reflections for robust pupil detection are overcome by using batch normalization to stabilize the distributions of internal activations during the training phase, which makes the overall methodology less influenced by parameter initialization. Extensive experimentation was performed, achieving an accuracy rate of 98.97% at a frame rate of 35 fps, which is higher than previous research results. Experimental results reveal the effectiveness of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_43-Robust_Drowsiness_Detection_for_Vehicle_Driver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid POS Tagger for Khasi, an Under Resourced Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111042</link>
        <id>10.14569/IJACSA.2020.0111042</id>
        <doi>10.14569/IJACSA.2020.0111042</doi>
        <lastModDate>2020-10-31T18:17:09.4200000+00:00</lastModDate>
        
        <creator>Medari Janai Tham</creator>
        
        <subject>Khasi corpus; BIS tagset; Khasi POS tagger; conditional random fields (CRF); Hidden Markov Model (HMM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Khasi is an Austro-Asiatic language spoken mainly in the state of Meghalaya, India, and can be considered an under-resourced and under-studied language from the natural language processing perspective. Part-of-speech (POS) tagging, in which a part of speech is automatically assigned to each word in a sentence, is one of the major initial requirements in any natural language processing task. It is therefore only natural to initiate the development of a POS tagger for Khasi, and this paper presents the construction of a hybrid POS tagger for Khasi. The tagger is developed to address the tagging errors of a Khasi Hidden Markov Model (HMM) POS tagger by integrating conditional random fields (CRF). This integration incorporates language features which are otherwise not feasible in an HMM POS tagger. The results of the hybrid Khasi tagger show a significant improvement in the tagger’s accuracy and substantially reduce most of the tagging confusion of the HMM POS tagger.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_42-A_Hybrid_POS_Tagger_for_Khasi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Mammogram Classification using Spatio-Temporal and Texture Feature Extraction using Dictionary based Sparse Representation Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111041</link>
        <id>10.14569/IJACSA.2020.0111041</id>
        <doi>10.14569/IJACSA.2020.0111041</doi>
        <lastModDate>2020-10-31T18:17:09.3900000+00:00</lastModDate>
        
        <creator>Vaishali D. Shinde</creator>
        
        <creator>B. Thirumala Rao</creator>
        
        <subject>Mammogram; segmentation; classification; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Cancer is a chronic disease whose incidence is increasing rapidly worldwide. Breast cancer is one of the most crucial cancers, affecting women’s health and causing deaths among women. For predicting breast cancer, mammography is considered a promising technique which helps to identify the early stages of cancer. Several schemes have been developed during the last decade to overcome performance-related issues, but achieving the desired performance is still a challenging task. To overcome this issue, we introduce a novel and robust approach to feature extraction and classification. In the proposed approach, we first apply a pre-processing stage in which image binarization is performed using Niblack’s method, followed by Region of Interest (ROI) extraction and segmentation. In the next phase of the work, we develop a mixed feature extraction strategy in which Gray Level Co-occurrence Matrix (GLCM), Histogram of Oriented Gradients (HoG) with Principal Component Analysis (PCA) for dimension reduction, Scale-Invariant Feature Transform (SIFT), and non-parametric Discrete Wavelet Transform (DWT) features are extracted. Finally, we present a K-Singular Value Decomposition (K-SVD) based dictionary learning scheme, apply the Sparse Representation Classifier (SRC) classification approach, and evaluate performance using the MATLAB tool. An extensive experimental study shows that the proposed approach achieves a classification accuracy of 98.13%, precision of 97.58%, recall of 98.36%, and F-score of 97.95%. The performance of the proposed approach is compared with state-of-the-art techniques, showing that the proposed approach gives better performance.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_41-A_Novel_Approach_to_Mammogram_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Numeric Image Classification with Quantum Neural Network using Quantum Computer Circuit Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111040</link>
        <id>10.14569/IJACSA.2020.0111040</id>
        <doi>10.14569/IJACSA.2020.0111040</doi>
        <lastModDate>2020-10-31T18:17:09.3730000+00:00</lastModDate>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Muhammad Amir Slamet</creator>
        
        <creator>Rina Refianti</creator>
        
        <creator>Yusuf Sutanto</creator>
        
        <subject>Image classification; quantum neural network; quantum computer; TensorFlow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>A quantum computer is a machine that uses principles of quantum mechanics to perform its computations. Quantum computer hardware is still in the development stage and has not been widely deployed yet; however, TensorFlow provides a library for hybrid quantum-classical machine learning called TensorFlow Quantum (TFQ). One of the quantum computing models is the Quantum Neural Network (QNN). A QNN is adapted from classical neural networks and is capable of processing qubit data by passing it through quantum circuits. A QNN is a machine learning model that allows quantum computers to classify image data. The image data used is classical data, but classical data cannot reach a superposition state; in order to carry out this protocol, the data must be read into a quantum device that provides superposition. The QNN uses a supervised learning method to predict image data. In this work, a Quantum Neural Network with a supervised learning method for classifying handwritten numeric image data is implemented using a quantum computer circuit simulation in the Python 3.6 programming language. The quantum computer circuit simulation is designed using the Cirq and TFQ libraries, and the classification process is carried out on Google Colab. Training the QNN model yielded a loss value of 0.337 and a validation loss value of 0.3427, while the hinge accuracy was 0.8603 on the training data and 0.8669 on validation. Model testing was done with 100 handwritten number images: 53 images of the number three and 47 images of the number six. The testing accuracy obtained was 100% for the number three images and 100% for the number six images; thus, the total testing accuracy is 100%.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_40-Handwritten_Numeric_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Route Length and Signal Attenuation on Energy Consumption in V2V Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111039</link>
        <id>10.14569/IJACSA.2020.0111039</id>
        <doi>10.14569/IJACSA.2020.0111039</doi>
        <lastModDate>2020-10-31T18:17:09.3570000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Intelligent transportation systems; routing; connected vehicles; energy; V2V; multipath fading; BSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>A simulation of Vehicle-to-Vehicle (V2V) communication and connectivity is carried out. The main objective of the V2V communication simulation is to study the effect of route length, number of hops per route, an attenuation-related parameter, and message size on the energy consumed by the transmitted bits per sent message. Mathematical modeling (using the original radio energy model) and analysis are carried out to quantify and approximate the effect of the attenuation-related parameter (α) and the Route Length (LR) on the energy consumption of a transmitted Basic Safety Message (BSM) for both 256-byte and 320-byte sizes. The original radio energy model is expanded to include not only message size but also the effect of the number of hops on energy consumption. The work successfully demonstrates the critical effect of α and the number of hops on energy consumption for a fixed BSM size, and the effect of α on the transitional characteristics of routes as a function of the number of hops. It is clear from the simulation that α has an adverse effect on the energy consumed by transmitted BSMs and also has a marked effect on routes: any value of α above 3 will lead to energy depletion, among other negative effects on communicating devices.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_39-Effect_of_Route_Length_and_Signal_Attenuation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototyping with Raspberry Pi in Healthcare Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111038</link>
        <id>10.14569/IJACSA.2020.0111038</id>
        <doi>10.14569/IJACSA.2020.0111038</doi>
        <lastModDate>2020-10-31T18:17:09.3430000+00:00</lastModDate>
        
        <creator>Hari Kishan Kondaveeti</creator>
        
        <creator>Sruti Raman</creator>
        
        <creator>Praveen Raj</creator>
        
        <subject>Information extraction; bibliometric study; prototype; Raspberry Pi; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The objective of this paper is to conduct a bibliometric study on the use of the Raspberry Pi in the medical field. Healthcare advancements have played a major role over the past several decades, and since the Raspberry Pi is attractive for its extensive features and low cost, it is of interest to know whether developments in healthcare technologies involving the Raspberry Pi have created an impact. A platform known as Biblioshiny has been used to collect statistical information and perform the analysis. A total of 154 journal articles were collected from PubMed, a free full-text archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health&#39;s National Library of Medicine (NIH/NLM), and analyzed on various parameters such as top authors, countries, and affiliations. The conclusions drawn help us understand the usage of the Raspberry Pi in the healthcare domain. The bibliometric analysis indicates that research has increased over the years and that authors from various countries have been working on the topic extensively, indicating substantial usage of the Raspberry Pi in the healthcare domain. Overall, our results show the trending topics the authors are currently working on and the collaborations among authors and countries. Finally, this paper identifies that there are no motor themes but displays the budding keywords (the ideas authors have worked on) in the healthcare domain emerging with Raspberry Pi prototyping.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_38-Prototyping_with_Raspberry_Pi_in_Healthcare_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Study of Blockchain Services: Future of Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111037</link>
        <id>10.14569/IJACSA.2020.0111037</id>
        <doi>10.14569/IJACSA.2020.0111037</doi>
        <lastModDate>2020-10-31T18:17:09.3270000+00:00</lastModDate>
        
        <creator>Sathya AR</creator>
        
        <creator>Barnali Gupta Banik</creator>
        
        <subject>Cryptography; cryptocurrencies; blockchain; bitcoin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Cryptography is the process of protecting information from intruders and letting only the intended users access and understand it. It is a technique that originated around 2000 BC, when simple methods were used to keep information in a form that was not understandable by everyone; only the intended receiver knew how to decode it. Later, as technology advanced, many sophisticated techniques were used to protect the message so that no intrusion could invade the information. Many mathematically complex algorithms, such as AES and RSA, are used to encrypt and decrypt data. Due to advancements in the computer science field, cryptography has recently been used in the development of cryptographic currencies, or cryptocurrencies. Blockchain technology, a distributed ledger technology identified as the foundation of the Bitcoin cryptocurrency, implements high-level cryptographic techniques such as public-key cryptography, hash functions, Merkle trees, and digital signatures such as elliptic curve digital signatures. These advanced cryptographic techniques are used to secure blockchain data and the transmission of information, thereby making blockchain more popular and in demand. Blockchain applies cryptography in various phases, and some of the techniques used in blockchain are advances in the cryptographic sciences. This paper intends to provide a brief introduction to cryptography and blockchain technology and discusses how both technologies can be integrated to provide the best security for data. This paper reviews the various cryptographic attacks on blockchain and the various security services offered in blockchain. The challenges of blockchain security are also analyzed and presented briefly in this paper.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_37-A_Comprehensive_Study_of_Blockchain_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Acceptance Test Driven Development Model with Combinatorial Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111036</link>
        <id>10.14569/IJACSA.2020.0111036</id>
        <doi>10.14569/IJACSA.2020.0111036</doi>
        <lastModDate>2020-10-31T18:17:09.2930000+00:00</lastModDate>
        
        <creator>Subhash Tatale</creator>
        
        <creator>V. Chandra Prakash</creator>
        
        <subject>Software requirements specification; software development life cycle; acceptance test driven development; combinatorial logic; combinatorial testing; user acceptance tests; railway reservation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In the Software Development Life Cycle, modelling plays a significant role in designing and developing software efficiently. Acceptance Test-Driven Development (ATDD) is a powerful agile software development model in which the customer provides user acceptance test suites as part of the Software Requirements Specification. The design has to develop the system so that the User Acceptance Tests will be successful. In some systems, Combinatorial Logic and Combinatorial Testing play a very crucial role. The authors propose a novel approach that enhances the existing Acceptance Test Driven Development model into a Combinatorial Logic Oriented-ATDD model by incorporating combinatorial logic. Refinement with respect to combinatorial logic needs to be incorporated in all stages of the Software Development Life Cycle, i.e., from the Software Requirements Specification to the User Acceptance Tests. This comprehensive approach derives the acceptance tests from user requirements effectively and efficiently. In this paper, the existing Indian Railway Reservation System is considered as a case study and was fully implemented as per the proposed Combinatorial Logic Oriented-ATDD model.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_36-Enhancing_Acceptance_Test_Driven_Development_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Level Description of Robot Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111035</link>
        <id>10.14569/IJACSA.2020.0111035</id>
        <doi>10.14569/IJACSA.2020.0111035</doi>
        <lastModDate>2020-10-31T18:17:09.2800000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <creator>Manar AlSaraf</creator>
        
        <subject>Conceptual model; robot architectural specification; robot behavior; static diagram; dynamism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Architectural Description (AD) is the backbone that facilitates the implementation and validation of robotic systems. In general, current high-level ADs vary greatly and lead to various difficulties, including mixing ADs with implementation issues. They lack the qualities of being systematic and coherent, and they rely on loose, non-technical forms (e.g., icons of faces, computer screens). Additionally, a variety of languages exist for eliciting requirements, such as object-oriented analysis methods susceptible to inconsistency (e.g., those using multiple diagrams in UML and SysML). In this paper, we orient our research toward a more generic conceptualization of ADs in robotics. We apply a new modeling methodology, namely the Thinging Machine (TM), to describe the architecture of robotic systems. The focus of this application is on high-level specification, which is one important aspect of realizing the design and implementation of such systems. TM modeling can be utilized in documentation and communication and as the first step in the system’s design phase. Accordingly, sample robot architectures are re-expressed in terms of TM, developing (1) a static model that captures the robot’s atemporal aspects, (2) a dynamic model that identifies states, and (3) a behavioral model that specifies the chronology of events in the system. This result shows a viable approach to robot modeling that determines a robot system’s behavior through its static description.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_35-High_Level_Description_of_Robot_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Performance of Various Privacy Preserving Databases using Hybrid Geometric Data Perturbation Classification Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111034</link>
        <id>10.14569/IJACSA.2020.0111034</id>
        <doi>10.14569/IJACSA.2020.0111034</doi>
        <lastModDate>2020-10-31T18:17:09.2630000+00:00</lastModDate>
        
        <creator>Sk. Mohammed Gouse</creator>
        
        <creator>G. Krishna Mohan</creator>
        
        <subject>Privacy preserving databases; machine learning; perturbation; high dimensionality; data filtering; data classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>As the size of privacy preserving databases increases, it is difficult to improve their privacy and accuracy due to high dimensionality and runtime. However, most traditional privacy preserving models treat privacy and runtime independently. It is also essential to preserve the privacy of large sets of sensitive attributes before publishing them to third-party servers. As a result, a novel framework is required to improve both privacy and accuracy on high dimensional privacy preserving data with less runtime. In order to improve the privacy, accuracy and runtime of traditional privacy preserving models, a hybrid perturbation based privacy preserving classification model is proposed for multiple databases. In this work, a new data transformation approach, a hybrid geometric perturbation approach and a hybrid boosting classifier are proposed to enhance the overall efficiency of the model on privacy preserving databases. The hybrid geometric perturbation approach is used to strengthen privacy preservation of the sensitive attributes. Initially, a pre-processing method is applied to the input dataset to remove noise from the feature values. A hybrid machine learning classifier is then used to predict the privacy preserving class label based on the training data. Experimental results show that the proposed hybrid geometric perturbation based boosting classifier has better statistical accuracy, recall, precision and runtime than the conventional models.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_34-Improving_the_Performance_of_Various_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Single Channel Speech Enhancement using Deep Neural Network and Harmonic Regeneration Noise Reduction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111033</link>
        <id>10.14569/IJACSA.2020.0111033</id>
        <doi>10.14569/IJACSA.2020.0111033</doi>
        <lastModDate>2020-10-31T18:17:09.2330000+00:00</lastModDate>
        
        <creator>Norezmi Jamal</creator>
        
        <creator>N. Fuad</creator>
        
        <creator>MNAH. Shaabani</creator>
        
        <subject>Speech enhancement; single channel microphone; deep neural network; constrained Wiener Filter; post-filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This paper presents a hybrid approach for single channel speech enhancement using a deep neural network (DNN) and harmonic regeneration noise reduction (HRNR). The DNN was used as a supervised algorithm to predict a new target mask, the constrained Wiener Filter (cWF) target mask, from the noisy mixture signal transformed into gammatone filter bank features. Meanwhile, the HRNR algorithm was applied as a post-filtering strategy to eliminate residual noise. The DNN is an emerging supervised speech enhancement approach for overcoming heavy nonstationary noise and low signal-to-noise ratio (SNR) conditions. To validate the proposed algorithm with the new target mask, 600 Malay utterances combining male and female speakers were used in the training session, while 120 Malay utterances were used in the prediction session. The short time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ) scores were calculated as the performance metrics. In this work, the proposed target mask outperformed other baseline target masks; the PESQ and STOI scores for the hybrid speech enhancement algorithm are 1.17 and 0.79, respectively, at -5 dB babble noise SNR.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_33-A_Hybrid_Approach_for_Single_Channel_Speech_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Priority Requests in Grid Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111032</link>
        <id>10.14569/IJACSA.2020.0111032</id>
        <doi>10.14569/IJACSA.2020.0111032</doi>
        <lastModDate>2020-10-31T18:17:09.2170000+00:00</lastModDate>
        
        <creator>Tariq Alwadan</creator>
        
        <creator>Salah Alghyaline</creator>
        
        <creator>Azmi Alazzam</creator>
        
        <subject>Grid computing; multi-agents system; grid broker; static priority mechanism; distributed resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Grid computing is an enhanced technology consisting of a pool of connected machines that belong to multiple organizations at different sites, forming a distributed system. This system can be used to deal with complex scientific or business problems. It is developed to help share distributed resources, which may be diverse in nature, and to solve many computing problems. The typical decentralized grid computing model faces many challenges, such as different system and software architectures, quickly handling an enormous number of grid requesters, and finding the appropriate resources for grid users. Some grid requests may need significant attention and a faster response than other requests. Usually, the Grid Broker (GB) works as a third party or mediator between grid service providers and grid service requesters. This paper introduces a new automated system that can help exploit the power of the grid, improve its functionality, and enhance its performance. This research presents a new architecture for the Grid Broker that ensures high priority requests are attended to and processed first. The system is also used to monitor and provision the grid providers&#39; work during job running. It uses a multi-agent system to facilitate its work and accomplish its tasks. The proposed approach is evaluated using the Jade simulator. The results show that the proposed approach can enhance the handling of high priority requests coming to the grid.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_32-High_Priority_Requests_in_Grid_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Environmental Sustainability Coding Techniques for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111031</link>
        <id>10.14569/IJACSA.2020.0111031</id>
        <doi>10.14569/IJACSA.2020.0111031</doi>
        <lastModDate>2020-10-31T18:17:09.1870000+00:00</lastModDate>
        
        <creator>Shakeel Ahmed</creator>
        
        <subject>Environmental sustainability; cloud computing; energy-efficient; software development life cycle; parameterized development; green computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Cloud Computing (CC) has recently received substantial attention as a promising approach for providing various information and communication technology (ICT) services. Running enormously complicated and sophisticated software on the cloud requires more energy, and much of that energy is wasted. It is necessary to explore opportunities to reduce carbon emissions in the CC environment, which contribute to global warming. Since global warming affects the environment, greenhouse gas emissions must be lessened; this can be achieved by adopting energy-efficient technologies that reduce the energy consumed during computation, data storage and communication, thereby achieving Green computing. In the literature, most energy-saving techniques focus on hardware aspects. Many improvements in energy efficiency can be made from the software perspective by paying attention to the energy consumption aspects of software for environmental sustainability. The aim of this research is to propose energy-aware profiling techniques for environmental sustainability based on a parameterized development technique, in order to reduce the energy spent, measure its efficiency, and support software developers in making their code development process less energy-intensive. Experiments have been carried out, and the results show that the suggested techniques reduce energy consumption and help achieve environmental sustainability.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_31-Environmental_Sustainability_Coding_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Binary Operating Antenna Array Elements by using Hausdorff and Euclidean Metric Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111030</link>
        <id>10.14569/IJACSA.2020.0111030</id>
        <doi>10.14569/IJACSA.2020.0111030</doi>
        <lastModDate>2020-10-31T18:17:09.1530000+00:00</lastModDate>
        
        <creator>Elson Agastra</creator>
        
        <creator>Julian Imami</creator>
        
        <creator>Olimpjon Shurdi</creator>
        
        <subject>Antenna array; Hausdorff distance; Woodward-Lawson</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In this paper, a linear antenna array is designed using the Woodward–Lawson method. The design procedure fits the antenna array radiation pattern to a predefined/required radiation mask. This study investigates the possibility of powering off some antenna elements without modifying the behavior and power ratio of the elements that remain on. The aim of powering off antenna elements is to reduce the power consumption of the designed array antenna and to reduce power dissipation problems in modern fully digital beamforming architectures. The choice of binary operation of antenna elements (on/off) reduces the computational effort required for complex beamforming techniques. The results can be stored in look-up tables to be recalled on demand by the antenna operator. Two different metrics are used to identify how close the modified antenna pattern is to the required design: the Euclidean and Hausdorff distances are both used as scores of the modified array performance. The obtained solutions show the applicability of binary operations on existing antenna arrays, and the metrics can be effectively used for ranking solutions.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_30-Binary_Operating_Antenna_Array_Elements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified K-nearest Neighbor Algorithm with Variant K Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111029</link>
        <id>10.14569/IJACSA.2020.0111029</id>
        <doi>10.14569/IJACSA.2020.0111029</doi>
        <lastModDate>2020-10-31T18:17:09.1400000+00:00</lastModDate>
        
        <creator>Kalyani C. Waghmare</creator>
        
        <creator>Balwant A. Sonkamble</creator>
        
        <subject>Classification; K-nearest Neighbor (KNN) classification algorithm; Indian Classical Music; Performance measures; Heap data structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In Machine Learning, K-nearest Neighbor (KNN) is a renowned supervised learning method. The traditional KNN has the requirement of specifying the ‘K’ value in advance for all test samples. Earlier solutions for predicting ‘K’ values mainly focused on finding optimal K values for all samples, and the time complexity of obtaining these optimal K values is too high. In this paper, a Modified K-Nearest Neighbor algorithm with Variant K is proposed. The KNN algorithm is divided into training and testing phases to find the K value for every test sample. To get the optimal K value, the data is trained for various K values with a Min-Heap data structure of size 2*K. K values are decided based on the percentage of training data considered from every class. Indian Classical Music is considered as a case study, with the goal of classifying it into different Ragas. Pitch Class Distribution features are input to the proposed algorithm. It is observed that the use of the Min-Heap reduces the space complexity, while the Accuracy and F1-score of the proposed method are higher than those of the traditional KNN algorithm as well as the Support Vector Machine and Decision Tree Classifier on a Self-Generated Dataset and the Comp-Music Dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_29-Modified_K_Nearest_Neighbor_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Coronavirus Pandemic Diseases using Social Media: A Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111028</link>
        <id>10.14569/IJACSA.2020.0111028</id>
        <doi>10.14569/IJACSA.2020.0111028</doi>
        <lastModDate>2020-10-31T18:17:09.1070000+00:00</lastModDate>
        
        <creator>Nuha Noha Fakhry</creator>
        
        <creator>Evan Asfoura</creator>
        
        <creator>Gamal Kassam</creator>
        
        <subject>Pandemic diseases; outbreak detection; social media; sentiment analysis; machine learning; text mining; geo-located data; CRISP-DM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>With the increasing use of social media, a growing need exists for systems that can extract useful information from huge amounts of data. As people post personal data interactively, an outbreak of an epidemic event can be noticed from these data. The issue of detecting the route of pandemic diseases is addressed here. The main objective of this research is to use a dual machine learning approach to evaluate current and future data on Covid-19 cases based on published social media information in a specific geographical region, and to show how the disease spreads geographically over time. The dual machine learning approach is based on traditional data mining methods to estimate disease cases found in social media related to a specific geographical region. On the other hand, sentiment analysis is conducted to assess the public perception and awareness of the disease in the same region.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_28-Tracking_Coronavirus_Pandemic_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Analysis of BERT Embedding for Automated Essay Scoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111027</link>
        <id>10.14569/IJACSA.2020.0111027</id>
        <doi>10.14569/IJACSA.2020.0111027</doi>
        <lastModDate>2020-10-31T18:17:09.0930000+00:00</lastModDate>
        
        <creator>Majdi Beseiso</creator>
        
        <creator>Saleh Alzahrani</creator>
        
        <subject>Automated Essay Scoring (AES); BERT; deep learning; neural network; language model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Automated Essay Scoring (AES) is one of the most challenging problems in Natural Language Processing (NLP). The significant challenges include the length of the essay, the presence of spelling mistakes affecting the quality of the essay, and representing the essay in terms of relevant features for efficient scoring. In this work, we present a comparative empirical analysis of AES models based on combinations of various feature sets. We use 30 manually extracted features, a 300-dimensional word2vec representation, and 768 word embedding features from the BERT model, and form different combinations to evaluate the performance of AES models. We formulate the automated essay scoring problem as a rescaled regression problem and as a quantized classification problem. We analyzed the performance of AES models for the different combinations and compared them against existing ensemble approaches in terms of Kappa statistics for the rescaled regression problem and accuracy for the quantized classification problem. The combination of the 30 manually extracted features, the 300-dimensional word2vec representation, and the 768 BERT word embedding features achieves a Kappa statistic of up to 77.2 &#177; 1.7 for the rescaled regression problem and an accuracy of 75.2 &#177; 1.0 for the quantized classification problem on a benchmark dataset consisting of about 12,000 essays divided into eight groups. The reported results provide directions for researchers in the field to use manually extracted features along with deep encoded features to develop a more reliable AES model.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_27-An_Empirical_Analysis_of_BERT_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Random Direction-3D Mobility Model to Achieve Better QoS Support in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111026</link>
        <id>10.14569/IJACSA.2020.0111026</id>
        <doi>10.14569/IJACSA.2020.0111026</doi>
        <lastModDate>2020-10-31T18:17:09.0770000+00:00</lastModDate>
        
        <creator>Munsifa Firdaus Khan</creator>
        
        <creator>Indrani Das</creator>
        
        <subject>Ad Hoc On Demand Distance Vector (AODV); random direction; mobility model; Quality-of-Service (QoS); Mobile Ad Hoc Networks (MANETs); NS-3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Mobile Ad hoc Networks (MANETs) have a changing network topology due to the mobility of nodes, and this dynamic topology increases the complexity of the network. In a MANET, nodes communicate with each other without the help of any infrastructure; therefore, achieving QoS in a MANET becomes somewhat difficult. The movement of mobile nodes is represented through mobility models, which have a great impact on QoS in MANETs. We propose a mobility model that is a 3D implementation of the existing Random Direction (RD) mobility model. We simulated AODV with the QoS metrics throughput, delay and PDR using NS-3, and compared the proposed mobility model with other 3D mobility models, namely Random Way Point (RWP) and Gauss Markov (GM). We conclude that our proposed model gives better throughput, delay and PDR for the AODV routing protocol compared to the RWP and GM mobility models. This paper is intended for students and researchers involved in wireless technology and MANETs; it will help them understand how a mobility model impacts the entire network and how its enhancement improves QoS in a MANET.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_26-Implementation_of_Random_Direction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Anomalous In-Memory Process based on DLL Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111025</link>
        <id>10.14569/IJACSA.2020.0111025</id>
        <doi>10.14569/IJACSA.2020.0111025</doi>
        <lastModDate>2020-10-31T18:17:09.0430000+00:00</lastModDate>
        
        <creator>Binayak Panda</creator>
        
        <creator>Satya Narayan Tripathy</creator>
        
        <subject>Anomalous In-memory Process; dynamic analysis; DLL hijacking; DLL injection; TF-IDF multinomial logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Computer systems are in demand both for tracking day-to-day activities on single-user systems and for implementing business logic in enterprises: they make information available in one click, drive business improvement, and influence profit or loss. There is always a possible threat from unauthorized users as well as from untrusted or unknown applications. Typically, a host is intended to run a list of known or trusted applications based on the user’s preference. Any application beyond the trusted list can be called an untrusted or unknown application, which is not expected to run on that host. Untrusted applications reach a host from sources like websites, emails, external storage devices, etc. Such untrusted programs may be malicious or non-malicious in nature, but their presence must be detected, as they are not trusted programs from the user’s viewpoint. All such programs may target the system either to steal valuable information or to degrade system performance without the user’s knowledge. Antimalware vendors help defend the system from malicious programs, but they do not take the user’s trusted program list into consideration. It is also true that new instances of attacks are found very frequently. Hence there is a need for a system that can defend itself against anomalous activities with reference to a trusted program list. In this paper, the design of an “Anomalous In-Memory Process detector based on the use of the DLL (Dynamic Link Library) sequence” is proposed, which accounts for the trusted programs intended to run on a particular host and creates a knowledgebase of classes of processes with a TF-IDF (Term Frequency-Inverse Document Frequency) multinomial logistic regression based learning approach. This knowledgebase is used to map a suspected in-memory process to a class of processes using its loaded DLLs. With a cross-validation approach, the suspected process and the processes of its predicted class are used to conclude whether it is a trusted process, a variant of a trusted process, or an untrusted process for that host. An untrusted program is not necessarily malware; it may simply be a program not listed in the trusted program list for the specific host. Hence this work aims to detect anomalies with respect to a list of trusted applications based on the user’s preference by performing dynamic analysis on in-memory processes.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_25-Detection_of_Anomalous_In_Memory_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybridized Machine Learning based Fractal Analysis Techniques for Breast Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111024</link>
        <id>10.14569/IJACSA.2020.0111024</id>
        <doi>10.14569/IJACSA.2020.0111024</doi>
        <lastModDate>2020-10-31T18:17:09.0300000+00:00</lastModDate>
        
        <creator>Munmun Swain</creator>
        
        <creator>Sumitra Kisan</creator>
        
        <creator>Jyotir Moy Chatterjee</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <creator>Sachi Nandan Mohanty</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Azween Abdullah</creator>
        
        <subject>Mammography; feature extraction; fractal dimension; box-counting method; classification; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The usefulness of Fractal Analysis (FA) is not limited to a particular area; it is applied in a variety of fields and has shown its efficiency for irregular objects. The fractal dimension is the best measure of roughness for natural elements and hence can be treated as a feature of a natural object. Breast masses are irregular and diverse, ranging from malignant tumors to benign ones; hence, the breast is one of the best areas where fractal geometry can be applied. This gives scope for the fractal geometry concept to be used as a feature extraction technique in mammograms. On the other hand, the support vector machine is an emerging technique for classification, and the survey shows that few works have been done on breast mass classification using support vector machines. In our work, two effective techniques are used in separate operations: the Box Count Method (BCM) from FA and the Support Vector Machine (SVM), both of which perform well in their fields. Feature extraction is done through the Box Count Method. The extracted feature, the “fractal dimension”, measures the complexity of the input data set of 42 images. Next, the resulting Fractal Dimensions (FD) are processed by the support vector machine classifier to classify benign and malignant cells. The result analysis shows that the combination of SVM and FD yielded the highest accuracy of 98.13%.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_24-Hybridized_Machine_Learning_based_Fractal_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An M/M/1 Preemptive Queue based Priority MAC Protocol for WBSN to Transmit Pilgrims’ Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111023</link>
        <id>10.14569/IJACSA.2020.0111023</id>
        <doi>10.14569/IJACSA.2020.0111023</doi>
        <lastModDate>2020-10-31T18:17:09.0130000+00:00</lastModDate>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Muhammad Akram</creator>
        
        <subject>Wireless body sensor network; medium access control protocol; preemptive queue; priority; heterogeneous traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Every year during Hajj in Saudi Arabia and Kumbh Mela in India, many pilgrims suffer from different medical emergencies and thus need real-time, fast healthcare services. Quick healthcare can be facilitated by setting up a Wireless Body Sensor Network (WBSN) on pilgrims because of its suitability for a wide range of medical applications. However, higher delay, data loss and excessive energy consumption may occur in the network when multiple emergency data streams aggregate at the coordinator to access the data communication channel simultaneously. In this context, for low-delay and energy-efficient data transmission, an M/M/1 preemptive queue technique is proposed, and a minimal backoff period is considered, to develop a priority Medium Access Control (MAC) protocol for WBSN. Our proposed MAC is designed based on the IEEE 802.15.6 standard and supports a modified MAC superframe structure for heterogeneous traffic. The proposed priority MAC protocol has been simulated using the Castalia simulator to analyze the results. In the first scenario, with varying numbers of nodes, the delay is calculated as 13 ms and 33 ms for the emergency and the normal medical condition, respectively, while the energy consumption per bit is around 0.12 &#181;j and 0.19 &#181;j, respectively. In the second scenario, we consider variation in traffic size: for a 16-byte traffic size, the delay is 5.8 ms for extremely very high critical traffic and 14.5 ms for extremely low critical traffic; similarly, extremely very high critical traffic consumes 0.035 &#181;j of energy per bit, whereas extremely low critical traffic consumes 0.37 &#181;j. In the third scenario, the delay, data loss rate, average energy consumption and throughput of the proposed priority MAC are analyzed. The results demonstrate that our proposed priority MAC protocol outperforms the state-of-the-art protocols.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_23-An_MM1_Preemptive_Queue_based_Priority.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Beam Forming Techniques for Dual-hop Decode-and-Forward based Cooperative Relay Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111022</link>
        <id>10.14569/IJACSA.2020.0111022</id>
        <doi>10.14569/IJACSA.2020.0111022</doi>
        <lastModDate>2020-10-31T18:17:08.9970000+00:00</lastModDate>
        
        <creator>Zahoor Ahmed</creator>
        
        <creator>Zuhaibuddin Bhutto</creator>
        
        <creator>Syed Muhammad Shehram Shah</creator>
        
        <creator>Ramesh Kumar</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <subject>Beamforming; base station; decode and forward; mobile station; relay station</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This paper shows that the transmission rate can be increased substantially by alleviating co-channel interference through beamforming techniques at the relay stations. The setup considers the downlink transmission segment from the Base Station (BS) to two Mobile Stations (MS). Data are transmitted concurrently through two Relay Stations (RS) using the same frequency channel, and the RSs are assumed to use the decode-and-forward (DF) strategy. In this beamforming technique, pre-coding vectors are used at the RS to alleviate co-channel interference, so that each user receives its own data without interference. Two pre-coding techniques, embodying two different transmission protocols, are proposed. Simulation results show that the proposed schemes outperform their counterpart schemes.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_22-Distributed_Beam_Forming_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule-based Text Normalization for Malay Social Media Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111021</link>
        <id>10.14569/IJACSA.2020.0111021</id>
        <doi>10.14569/IJACSA.2020.0111021</doi>
        <lastModDate>2020-10-31T18:17:08.9670000+00:00</lastModDate>
        
        <creator>Siti Noor Allia Noor Ariffin</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <subject>Malay normalization; Malay text normalization; informal Malay text; Malay tweets; rule-based normalizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Malay social media text is text written on social media networks such as Twitter. Commonly, this text comprises non-standard words and is filled with dialect, foreign languages, word abbreviations, grammatical neglect, spelling errors, and more. This type of text is well known to be difficult to process because of its high noise and distinct text structure. Such problems can be resolved using rigorous text normalization, which is critical before any technique can be implemented and evaluated on social media text. In this paper, an improved normalization method for Malay social media text is proposed that converts non-standard Malay words using a rule-based model. The method normalizes common words often used by Malaysian users, such as non-standard Malay (dialect and slang), Romanized Arabic, and English words. Thus, a Malay text normalizer is proposed using a set of rules that extends across different domains of natural language processing (NLP) and is expected to address the challenges of processing Malay social media text. This study implements the proposed Malay text normalizer in a Part-of-Speech (POS) tagging application to evaluate its performance. The implementation demonstrates a substantial improvement in POS tagging efficiency over several pre-processing stages, with an accuracy improvement of up to 31.8%. The increase in POS tagging accuracy indicates two main points. First, the proposed rules improve the performance of a Malay text normalizer on social media text. Second, the proposed normalizer successfully improves the POS tagging accuracy and demonstrates the importance of normalization pre-processing in any NLP application.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_21-Rule_Based_Text_Normalization_for_Malay_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Planar Antenna on Flexible Substrate for Future 5G Energy Harvesting in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111020</link>
        <id>10.14569/IJACSA.2020.0111020</id>
        <doi>10.14569/IJACSA.2020.0111020</doi>
        <lastModDate>2020-10-31T18:17:08.9500000+00:00</lastModDate>
        
        <creator>A. K. M. Zakir Hossain</creator>
        
        <creator>Nurulhalim Bin Hassim</creator>
        
        <creator>S. M. Kayser Azam</creator>
        
        <creator>Md Shazzadul Islam</creator>
        
        <creator>Mohammad Kamrul Hasan</creator>
        
        <subject>Planar monopole; flexible substrate; 3G; bending effect; rectenna; RF rectifier; energy harvesting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This article presents a planar monopole antenna on a flexible substrate for the mid-band 5G (3.5 GHz) application in Malaysia. The antenna has been designed and optimized for gain and efficiency, with improved performance compared with other flexible-substrate antennas. The antenna resonates at 3.53 GHz and has a -10 dB bandwidth of 545 MHz. The bending effects of the antenna on the S-parameters and gain have also been investigated. The antenna is able to suppress all other frequency bands up to 20 GHz. The designed antenna has been combined with a newly designed rectifier to act as a rectenna at 3.5 GHz for RF energy-harvesting applications. A reasonable DC output voltage of 930 mV and a power conversion efficiency of 43.5% were obtained with 0 dBm RF input power delivered to the rectifier input terminal. Apart from its use as an energy harvester connected to the proposed rectifier, the designed antenna on flexible substrate can also be employed in biomedical and sensor applications.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_20-A_Planar_Antenna_on_Flexible_Substrate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigative Study of Genetic Algorithms to Solve the DNA Assembly Optimization Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111019</link>
        <id>10.14569/IJACSA.2020.0111019</id>
        <doi>10.14569/IJACSA.2020.0111019</doi>
        <lastModDate>2020-10-31T18:17:08.9200000+00:00</lastModDate>
        
        <creator>Hachemi Bennaceur</creator>
        
        <creator>Meznah Almutairy</creator>
        
        <creator>Nora Alqhtani</creator>
        
        <subject>Genetic Algorithms; Traveling Salesman Problem; Quadratic Assignment Problem; DNA fragments assembly problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>This paper highlights the motivations for investigating genetic algorithms to solve the DNA Fragment Assembly problem (DNA_FA). DNA_FA is an optimization problem that attempts to reconstruct the original DNA sequence by finding the shortest DNA sequence from a given set of fragments. We show that the DNA_FA optimization problem is a special case of two well-known optimization problems: the Traveling Salesman Problem (TSP) and the Quadratic Assignment Problem (QAP). TSP and QAP are important problems in combinatorial optimization, for which there exists an abundant literature, and genetic algorithms (GA) applied to these problems have led to very satisfactory results in practice. With a view to designing efficient genetic algorithms for DNA_FA, we show the existence of a polynomial-time reduction of DNA_FA into TSP and QAP, enabling us to point out some technical similarities in terms of solutions and search-space complexity. We then conceptually design a genetic algorithm platform for solving the DNA_FA problem, inspired by the efficient genetic algorithms in the literature for TSP and QAP. This platform offers several ingredients for creating several variants of GA solvers for the DNA assembly optimization problem.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_19-An_Investigative_Study_of_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Classification Models to Detect Cyberbullying in the Peruvian Spanish Language on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111018</link>
        <id>10.14569/IJACSA.2020.0111018</id>
        <doi>10.14569/IJACSA.2020.0111018</doi>
        <lastModDate>2020-10-31T18:17:08.9030000+00:00</lastModDate>
        
        <creator>Ximena M. Cuzcano</creator>
        
        <creator>Victor H. Ayma</creator>
        
        <subject>Cyberbullying detection; machine learning; natural language processing; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Cyberbullying is a social problem in which bullies&#8217; actions are more harmful than in traditional forms of bullying, as they can repeatedly humiliate the victim in front of an entire community through social media. Nowadays, multiple works aim at detecting acts of cyberbullying by analyzing social media publications written in one or more languages; however, few investigations target cyberbullying detection in Spanish. In this work, we compare the performance of four traditional supervised machine learning methods in detecting cyberbullying through the identification of four cyberbullying-related categories in Twitter posts written in Peruvian Spanish. Specifically, we trained and tested Naive Bayes, Multinomial Logistic Regression, Support Vector Machine, and Random Forest classifiers on a dataset manually annotated with the help of human participants. The results indicate that the best-performing classifier for the cyberbullying detection task was the Support Vector Machine.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_18-A_Comparison_of_Classification_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient DWT based Fusion Algorithm for Improving Contrast and Edge Preservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111017</link>
        <id>10.14569/IJACSA.2020.0111017</id>
        <doi>10.14569/IJACSA.2020.0111017</doi>
        <lastModDate>2020-10-31T18:17:08.8730000+00:00</lastModDate>
        
        <creator>Sumanth Kumar Panguluri</creator>
        
        <creator>Laavanya Mohan</creator>
        
        <subject>Visible image; infrared image; Discrete Wavelet Transform (DWT); Inverse Discrete Wavelet Transform (IDWT); novel mean-weighted fusion rule; max fusion rule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The main principle of infrared (IR) imaging is that it captures thermal radiation. Objects captured in low-light, fog, and snow conditions can be detected clearly in an IR image. However, the major drawback of IR images is their poor resolution and low texture information, which prevents humans from understanding the overall scene. Nowadays, the fusion of visible (VI) and IR images is used to detect objects in poor weather conditions with improved texture information; it is mostly used in military, surveillance, and remote sensing applications. An efficient DWT-based fusion algorithm for improving contrast and edge preservation is presented in this paper. First, the morphological hat transform is applied to the source images to improve contrast. DWT decomposition then produces low-frequency and high-frequency sub-bands. A novel mean-weighted fusion rule is introduced in this paper for fusing the low-frequency sub-bands, with the aim of improving the visual quality of the final fused image. The max fusion rule is used for fusing the high-frequency sub-bands to improve edge information. The final fused image is reconstructed using the IDWT. The proposed fusion algorithm produces improved results both subjectively and objectively when compared with existing fusion methods.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_17-Efficient_DWT_based_Fusion_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Selected Mapping Technique for Reduction of PAPR in OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111016</link>
        <id>10.14569/IJACSA.2020.0111016</id>
        <doi>10.14569/IJACSA.2020.0111016</doi>
        <lastModDate>2020-10-31T18:17:08.8430000+00:00</lastModDate>
        
        <creator>Saruti Gupta</creator>
        
        <creator>Ashish Goel</creator>
        
        <subject>Orthogonal Frequency Division Multiplexing (OFDM); Peak to Average Power Ratio (PAPR); Selected Mapping (SLM); Complementary Cumulative Distribution Function (CCDF); PAPR threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>A high peak-to-average power ratio (PAPR) is a limiting factor in the performance of an OFDM system. Selected Mapping (SLM) is a popular PAPR reduction scheme for OFDM systems, in which a set of U candidate sequences is generated, improving the PAPR reduction ability of the system. The major concern with conventional SLM is that as the number of candidates increases, there is a proportional rise in the inverse fast Fourier transform (IFFT) computations. In this article we propose a scheme that increases the number of candidate sequences to (U+U&#178;/4), leading to improved PAPR performance of the OFDM system without any equivalent rise in IFFT computations. It is demonstrated that the simulation and analytical results agree well for the proposed scheme. We also estimate the PAPR threshold value at a fixed value of the complementary cumulative distribution function (CCDF) for different numbers of subcarriers and candidate sequences. The results demonstrate that the proposed scheme outperforms conventional SLM in the PAPR reduction ability of the OFDM signal and obtains effective PAPR threshold values with negligible loss in the BER performance of the system.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_16-Improved_Selected_Mapping_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Debris Run-Out Modeling Without Site-Specific Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111015</link>
        <id>10.14569/IJACSA.2020.0111015</id>
        <doi>10.14569/IJACSA.2020.0111015</doi>
        <lastModDate>2020-10-31T18:17:08.8270000+00:00</lastModDate>
        
        <creator>NMT De Silva</creator>
        
        <creator>Prasad Wimalaratne</creator>
        
        <subject>Landslide flow path; route of debris; hazard mapping; D8 Algorithm; multiple direction flow algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Recent population growth and human activity near hilly areas increase vulnerability to landslides, and the effects of climate change further increase the likelihood of landslide danger. Therefore, accurate analysis of unstable slope behavior is crucial to prevent loss of life and destruction of property. Predicting the landslide flow path is essential in identifying the route of debris and is an essential component of hazard mapping. However, current methodologies for determining the flow direction of landslides require costly site-specific data such as surface soil type, categories of underground soil layers, and other related field characteristics. This paper demonstrates an approach to predicting the flow direction without site-specific data, taking a large landslide incident at Aranayaka in the Kegalle district of Sri Lanka as a case study. The spreading-area assessment was based on the deterministic eight-node (D8) and Multiple Direction Flow (MDF) flow-direction algorithms. Results obtained by the model were compared with the actual Aranayaka landslide dataset and the landslide hazard map of the area. Debris paths generated by the proof-of-concept software tool using the D8 algorithm showed greater than 76% agreement, and MDF greater than 87% agreement, with the actual flow paths and related statistics such as maximum slide width, run-out distance, and slip surface area.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_15-Debris_Run_Out_Modeling_without_Site_Specific_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Classification Javanese Letters Model using a Convolutional Neural Network with KERAS Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111014</link>
        <id>10.14569/IJACSA.2020.0111014</id>
        <doi>10.14569/IJACSA.2020.0111014</doi>
        <lastModDate>2020-10-31T18:17:08.7930000+00:00</lastModDate>
        
        <creator>Yulius Harjoseputro</creator>
        
        <subject>Javanese letters; deep learning; convolutional neural network; epoch; framework KERAS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Image classification is one of the essential tasks in Computer Vision research, and in previous studies various models have been used to classify images. Javanese letters form the basis of sentences written in the Javanese language. The problem is that Javanese sentences are often found in Yogyakarta, especially in the names of tourist attractions, making it difficult for tourists to translate them. Therefore, in this study we create a Javanese character classification model, in the hope that it will later serve as a basis for developing this research to the next stage. One of the most popular recent approaches to image classification is deep learning, namely the Convolutional Neural Network (CNN) method, implemented here with the KERAS framework. The simplicity of the training model and dataset used in this work brings the advantage of low computation cost and time. Based on the results, the model achieves an accuracy of 86.68% using 1000 data samples trained for 50 epochs. The average inference time with the same specification is 0.57 seconds; this fast inference time is again due to the simplicity of the model and dataset. The model&#39;s fast and light computation makes it possible to use it on devices with limited computational resources, such as mobile devices, common web server interfaces, and internet-of-things devices.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_14-A_Classification_Javanese_Letters_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Student-Teachers Groups’ Needs in Physical Education and Sport for Designing an Open Distance Learning on the Model of Small Private Online Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111013</link>
        <id>10.14569/IJACSA.2020.0111013</id>
        <doi>10.14569/IJACSA.2020.0111013</doi>
        <lastModDate>2020-10-31T18:17:08.7800000+00:00</lastModDate>
        
        <creator>Mostafa HAMSE</creator>
        
        <creator>Said LOTFI</creator>
        
        <creator>Mohammed TALBI</creator>
        
        <subject>Needs; physical education and sport; professional training; SPOC; teachers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Currently, we are witnessing several distance-learning offerings, FOAD (Open Distance Learning), MOOCs (Massive Open Online Courses), and SPOCs (Small Private Online Courses), in various sectors including education and training. However, little research has analyzed the needs of participants before implementing SPOCs in higher education. This study aims to identify needs in order to design and guide a techno-pedagogical device in the form of SPOCs for teacher training. The results showed that more than 70% of interviewees declared that a SPOC reduces participants&#8217; travel time, 87% aimed at developing professional competence in planning learning, 77% wanted student evaluation, and more than 60% wanted disciplinary knowledge relating to physical and sporting activities (PSA) and the management of their learning activities. In addition, 64.3% of participants preferred, as the device&#8217;s form and design, all four modalities at the same time: text structured with titles, video capsules, images, and sound recordings. In terms of educational tutoring, more than 75% of participants declared a need for help in understanding certain concepts in the course. These results guide us to focus on three basic professional skills, namely planning, management, and evaluation of learning, as priority training modules in the envisaged SPOC, with both audiovisual and textual technical and pedagogical support.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_13-Identification_of_Student_Teachers_Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KadOLSR: An Efficient P2P Overlay for Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111012</link>
        <id>10.14569/IJACSA.2020.0111012</id>
        <doi>10.14569/IJACSA.2020.0111012</doi>
        <lastModDate>2020-10-31T18:17:08.7470000+00:00</lastModDate>
        
        <creator>Mohammad Al Mojamed</creator>
        
        <subject>Peer-To-Peer P2P; Mobile ad hoc Networks MANET; cross layering; Kademlia; Optimized Link State Routing OLSR; KadOLSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>P2P and MANET are self-organized, decentralized, and dynamic networks. Although both networks have common characteristics, they are used for different purposes and operate at different layers. P2P provides the ability for peers to store and locate services in the network, while MANET provides the underlying routing capability to reach other mobile nodes. Thus, P2P and MANET can complement each other. However, P2P was originally designed to operate over the Internet, which provides rich routing capabilities compared to MANET. Therefore, deploying P2P over MANET requires careful consideration of how to adjust P2P approaches to better suit MANET. In this paper, a novel system called KadOLSR is proposed to provide an efficient P2P overlay over MANET. The structure of the well-known Kademlia is used along with OLSR. KadOLSR exploits the similarities between P2P and MANET to reduce overlay-management communication overhead and hence deploys a lightweight and efficient P2P overlay over MANET. The network-layer routing information is shared with the overlay to achieve this optimization: a cross-layer channel is constructed between the network layer and the overlay layer to exchange relevant routing information. The proposed system is designed and its performance evaluated using a network simulator, and KadOLSR is also compared with one of the recent P2P-over-MANET systems. The simulation results show that KadOLSR performs well across all network sizes and mobility speeds.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_12-KadOLSR_An_Efficient_P2P_Overlay.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Relay Selection Method for Audio Transmission in Cooperative Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111011</link>
        <id>10.14569/IJACSA.2020.0111011</id>
        <doi>10.14569/IJACSA.2020.0111011</doi>
        <lastModDate>2020-10-31T18:17:08.7330000+00:00</lastModDate>
        
        <creator>Usha Padma</creator>
        
        <creator>H.V.Kumaraswamy</creator>
        
        <creator>S.Ravishankar</creator>
        
        <subject>Cooperative communication; channel strength; ad hoc networks; maximal ratio combining; IEEE 802.11b; G.711 Codec; Mean Opinion Score (MOS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The quality-of-service parameters, such as latency and bit error rate, for audio transmission in an IEEE 802.11b wireless ad hoc network are analyzed in this paper. The issue addressed here is that the quality of audio transmitted directly from source to destination in an ad hoc network is low. This can be improved by incorporating a relay between source and destination, where the relay uses the decode-and-forward technique before forwarding the information to the destination. The destination applies Maximal Ratio Combining (MRC) to combine the signals received from the source and the relay; this concept is called cooperative communication. A location-aware, channel-estimation-based relay selection strategy is proposed in this paper for a wireless ad hoc network. Audio is transmitted using the 16-QAM modulation scheme over a Rayleigh fading channel in the presence of additive white Gaussian noise (AWGN). This paper focuses on the relay selection method for audio transmission in cooperative ad hoc networks, where the best relay is selected based on the average channel strength supported between the source and the destination. Audio quality at the destination is observed with and without the relay. Results show that the measured audio quality in the presence of the relay was far better than in its absence, which stresses the importance of cooperative communication.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_11-Development_of_Relay_Selection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Network Model to Improve Supply Chain Visibility based on Smart Contract</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111010</link>
        <id>10.14569/IJACSA.2020.0111010</id>
        <doi>10.14569/IJACSA.2020.0111010</doi>
        <lastModDate>2020-10-31T18:17:08.7170000+00:00</lastModDate>
        
        <creator>Arwa Mukhtar</creator>
        
        <creator>Awanis Romli</creator>
        
        <creator>Noorhuzaimi Karimah Mohd</creator>
        
        <subject>Supply chain management; supply chain visibility; blockchain; smart contact; information sharing; traceability; inventory visibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>Due to the increasing complexity of supply chains over the past years, many factors significantly lower supply chain performance, and poor visibility is one of the most challenging among them. This paper proposes a Blockchain-based supply chain network model to improve supply chain visibility. The model focuses on improving the visibility measurement properties of information sharing, traceability, and inventory visibility, and consists of information sharing, traceability, and inventory visibility platforms based on Blockchain smart contracts. The model is built with the Hyperledger platform and extends the Hyperledger Composer Supply Chain Network (HCSC) model. The research is designed in three main phases. The first, preliminary phase is the literature review, identifying the existing challenges in the domain. The second phase, design and implementation, comprises the development steps of the proposed model. The third phase, evaluation, covers the performance evaluation of the proposed model and its comparison with existing models. In the performance evaluation, the common performance metrics of lead time and average inventory level are compared across the proposed model, a cloud-based information system, and the traditional supply chain. The proposed platforms offer end-to-end visibility of products, orders, and stock levels for supply chain practitioners and customers within supply chain networks, helping managers access key information that supports critical business decisions, offering essential criteria for competitiveness, and therefore enhancing supply chain performance.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_10-Blockchain_Network_Model_to_Improve_Supply.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking Beauty Clinics in Riyadh using Lexicon-Based Sentiment Analysis and Multiattribute-Utility Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111009</link>
        <id>10.14569/IJACSA.2020.0111009</id>
        <doi>10.14569/IJACSA.2020.0111009</doi>
        <lastModDate>2020-10-31T18:17:08.6870000+00:00</lastModDate>
        
        <creator>Zuhaira M. Zain</creator>
        
        <creator>Aya A. Alhajji</creator>
        
        <creator>Norah S. Alotaibi</creator>
        
        <creator>Najwa B. Almutairi</creator>
        
        <creator>Alaa D. Aldawas</creator>
        
        <creator>Muneerah M. Almazrua</creator>
        
        <creator>Atheer M. Alhammad</creator>
        
        <subject>Arabic language; beauty clinics; ranking; Lexicon-based; machine learning; sentiment analysis; multiattribute-utility theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In recent years, the amount of beauty-related user-created content has steadily increased. Digital beauty-clinic reviews have a major impact on user preferences. In supporting user selection decisions, ranking beauty clinics via online reviews is a valuable study subject, although research on this problem is still fairly limited. Sentiment analysis is an important subject in the research community for evaluating a predefined sentiment from online texts written in a natural language on a particular topic. Recently, research on sentiment analysis for the Arabic language has become popular, since it has become the fastest-growing language on the web. However, most sentiment-analysis tools are designed for Modern Standard Arabic, which is not widely used on social-media platforms. Moreover, the number of lexicons designed to handle informal Arabic is restricted, especially in the beauty-clinic domain. In addition, numerous sentiment-analysis studies have concentrated on improving the accuracy of sentiment classifiers; studies about choosing the right company or product on the basis of sentiment-analysis results are still missing. In the decision-analysis domain, multiattribute-utility theory has been extensively used to select the best option among a set of alternatives. Thus, this research aims to propose a systematic methodology that can develop a beauty-clinic-domain sentiment lexicon in the Saudi dialect, perform sentiment analysis on online reviews of 10 beauty clinics in Riyadh based on the built lexicon, and feed the lexicon-based sentiment-analysis results to the multiattribute-utility theory method to evaluate and rank the beauty clinics. Results showed that the Abdelazim Bassam Clinic is Riyadh’s best beauty clinic on the basis of the proposed method. The research not only informs data analysts on how to rate beauty clinics on the basis of lexicon-based sentiment-analysis results, but also directs users toward selecting the best beauty clinic.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_9-Ranking_Beauty_Clinics_in_Riyadh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Retrieval Method based on Physical Meaning and its Application for Prediction of Linear Precipitation Zone with Remote Sensing Satellite Data and Open Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111008</link>
        <id>10.14569/IJACSA.2020.0111008</id>
        <doi>10.14569/IJACSA.2020.0111008</doi>
        <lastModDate>2020-10-31T18:17:08.6700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Linear precipitation zone; remote sensing satellite data; water vapor; cloud liquid; upper atmospheric wind; disaster mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>A data retrieval method based on physical meaning is proposed, together with its application to the prediction of linear precipitation zones with remote sensing satellite data and open data. Linear precipitation zones cause extremely severe storm and flood damage, landslides, and other disasters. A linear precipitation zone is formed when warm, moist air continuously flows in; a force lifts this air up, often through collisions with mountain slopes or cold fronts; the atmospheric conditions are unstable; and the wind aloft has a consistent direction. These conditions can be monitored with remote sensing satellite data. The proposed method is intended as an attempt to predict linear precipitation zones for disaster mitigation. Water vapor data, cloud liquid data, cloud fraction data, and upper atmospheric wind data are derived from the remote sensing satellite-based mission instruments. Through an experiment on the linear precipitation zone that occurred in northern Kyushu, Japan, at the beginning of July 2020, the possibility of detecting the linear precipitation zone was confirmed. In addition, the flooding and other disasters that occurred in northern Kyushu in the same period, caused by the detected linear precipitation zone, were detected with Sentinel-1 SAR data.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_8-Data_Retrieval_Method_based_on_Physical_Meaning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extraction of Keywords for Retrieval from Paper Documents and Drawings based on the Method of Determining the Importance of Knowledge by the Analytic Hierarchy Process: AHP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111007</link>
        <id>10.14569/IJACSA.2020.0111007</id>
        <doi>10.14569/IJACSA.2020.0111007</doi>
        <lastModDate>2020-10-31T18:17:08.6530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>AHP method; extraction keywords; production rule system; document/diagram recognitions; certainty factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>An extraction method of keywords for retrieval from paper documents and drawings, based on a method for determining the importance of knowledge by the Analytic Hierarchy Process (AHP), is proposed. The method first distinguishes documents into three categories: letter, form, and drawing types. The most appropriate knowledge about retrieval keywords, such as font size, location, and frequency of words, is then selected for each document type. Production rules are created from more than five pieces of knowledge on retrieval keywords. A traditional production system employs isolated knowledge, so it is not easy to assess the overall suitability of the knowledge. To overcome this situation, AHP is employed in the proposed system. Through experiments with 100 documents and diagrams, a 98% success rate is achieved, and it is found that appropriate keyword candidates with likelihood or certainty factors can be extracted with the proposed system. The proposed production system shows a 50% improvement in the success rate of keyword extraction from documents and diagrams compared to the existing production system without AHP.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_7-Extraction_of_Keywords_for_Retrieval_from_Paper_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the C4.5 Decision Tree Algorithm using MSD-Splitting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111006</link>
        <id>10.14569/IJACSA.2020.0111006</id>
        <doi>10.14569/IJACSA.2020.0111006</doi>
        <lastModDate>2020-10-31T18:17:08.6400000+00:00</lastModDate>
        
        <creator>Patrick Rim</creator>
        
        <creator>Erin Liu</creator>
        
        <subject>C4.5 Algorithm; decision tree; data mining; machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>We propose an optimization of Dr. Ross Quinlan’s C4.5 decision tree algorithm, used for data mining and classification. We show that by discretizing and binning a data set’s continuous attributes into four groups using our novel technique called MSD-Splitting, we can significantly improve both the algorithm’s accuracy and efficiency, especially when applied to large data sets. We applied both the standard C4.5 algorithm and our optimized C4.5 algorithm to two data sets obtained from UC Irvine’s Machine Learning Repository: Census Income and Heart Disease. In our initial model, we discretized continuous attributes by splitting them into two groups at the point with the minimum expected information requirement, in accordance with the standard C4.5 algorithm. Using five-fold cross-validation, we calculated the average accuracy of our initial model for each data set. Our initial model yielded a 75.72% average accuracy across both data sets. The average execution time of our initial model was 1,541.57 s for the Census Income data set and 50.54 s for the Heart Disease data set. We then optimized our model by applying MSD-Splitting, which discretizes continuous attributes by splitting them into four groups using the mean and the two values one standard deviation away from the mean as split points. The accuracy of our model improved by an average of 5.11% across both data sets, while the average execution time was reduced by 96.72% for the larger Census Income data set and 46.38% for the Heart Disease data set.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_6-Optimizing_the_C4_5_Decision_Tree_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Chaotic System for Text Encryption Optimized with Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111005</link>
        <id>10.14569/IJACSA.2020.0111005</id>
        <doi>10.14569/IJACSA.2020.0111005</doi>
        <lastModDate>2020-10-31T18:17:08.6070000+00:00</lastModDate>
        
        <creator>Unnikrishnan Menon</creator>
        
        <creator>Anirudh Rajiv Menon</creator>
        
        <creator>Atharva Hudlikar</creator>
        
        <subject>Chaotic map; genetic algorithm; encryption; bifurcation diagram; Lyapunov exponent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>With meteoric developments in communication systems and data storage technologies, the need for secure data transmission is more crucial than ever. The level of security provided by any cryptosystem relies on the sensitivity of the private key, the size of the key space, and the trapdoor function being used. In order to satisfy these constraints, there has been growing interest over the past few years in studying the behavior of chaotic systems and their applications in various fields such as data encryption, owing to characteristics like randomness, unpredictability, and the sensitivity of the generated sequence to its initial value and parameters. This paper utilizes a novel 2D chaotic function that displays a uniform bifurcation over a large range of parameters and exhibits high levels of chaotic behavior to generate a random sequence that is used to encrypt the input data. The proposed method uses a genetic algorithm to optimize the parameters of the map to enhance security for any given textual data. Various analyses demonstrate an adequately large key space and the existence of multiple global optima, indicating the necessity of the proposed system and the security it provides.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_5-A_Novel_Chaotic_System_for_Text_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Lifestyle Confirmatory of Consumer Generation Z</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111004</link>
        <id>10.14569/IJACSA.2020.0111004</id>
        <doi>10.14569/IJACSA.2020.0111004</doi>
        <lastModDate>2020-10-31T18:17:08.5930000+00:00</lastModDate>
        
        <creator>Tony Wijaya</creator>
        
        <creator>Arum Darmawati</creator>
        
        <creator>Andreas M Kuncoro</creator>
        
        <subject>e-Lifestyle; consumer; Generation Z; social media; information technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>The development of information technology has changed daily life patterns, which increasingly tend toward the digital. Differences across generations result in different behaviors and lifestyles, and understanding them is the challenge of this research. Lifestyle is needed in determining market segments of consumer behavior. Understanding the lifestyle of Generation Z is expected to provide valuable information in various fields of socio-economic life. These findings are expected to provide an overview for marketers targeting this segment. Understanding lifestyles can inform the development of marketing strategies for the intended segment, especially Generation Z, whose lifestyle follows information technology and digital development. The research aimed to confirm e-lifestyle factors among Generation Z, especially university students, as members of the academic environment&#39;s dominant community. Specifically, the aim is to identify the pattern of e-lifestyle formation in Generation Z, especially among students, and the information or social media used by Generation Z. This research is a survey, initiated through empirical field observations. The study population was university students in Yogyakarta, Indonesia. The sampling technique uses simple random sampling. The data used are primary: the responses given by research subjects related to e-lifestyle factors. Data were collected through a survey using a questionnaire. The data analysis technique in this study uses Confirmatory Factor Analysis (CFA). The results showed that the motives underlying the e-lifestyle of Generation Z corresponded to four factors, namely, e-activities, e-interests, e-opinions, and e-values. The information or social media often used by Generation Z are Instagram, Youtube, Line, Facebook, Twitter, Discord, Pinterest, Spotify, and Telegram. The purposes of using this information or social media are communication, entertainment, consumption or shopping, and community activities.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_4-E_lifestyle_Confirmatory_of_Consumer_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Support Kernel Classification: A New Kernel-Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111003</link>
        <id>10.14569/IJACSA.2020.0111003</id>
        <doi>10.14569/IJACSA.2020.0111003</doi>
        <lastModDate>2020-10-31T18:17:08.5600000+00:00</lastModDate>
        
        <creator>Ouiem Bchir</creator>
        
        <creator>Mohamed M. Ben Ismail</creator>
        
        <creator>Sara Algarni</creator>
        
        <subject>Supervised learning; classification; kernel based learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In this paper, we introduce a new classification approach that learns class-dependent Gaussian kernels and the belongingness likelihood of the data points with respect to each class. The proposed Support Kernel Classification (SKC) is designed to characterize and discriminate between the data instances from the different classes. It relies on the maximization of the inter-class distances and the minimization of the intra-class distances to learn the optimal Gaussian parameters. In fact, a novel objective function is proposed to model each class using one Gaussian function. The experiments conducted using synthetic datasets demonstrated the effectiveness of the proposed algorithm. Moreover, the results obtained using real datasets proved that the proposed classifier outperforms the relevant state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_3-Support_Kernel_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Alzheimer’s Disease Detection using Neighborhood Components Analysis and Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111002</link>
        <id>10.14569/IJACSA.2020.0111002</id>
        <doi>10.14569/IJACSA.2020.0111002</doi>
        <lastModDate>2020-10-31T18:17:08.5430000+00:00</lastModDate>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <creator>Reema Alabdullatif</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <subject>Alzheimer detection; classification; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In this paper, we propose a Computer Aided Diagnosis (CAD) system to assist physicians in the early detection of Alzheimer’s Disease (AD) and ensure an effective diagnosis. The proposed framework is designed to be fully automated upon the capture of the brain structure using Magnetic Resonance Imaging (MRI) scanners. The Voxel-Based Morphometry (VBM) analysis is a key element in the proposed detection process, as it is intended to investigate the Gray Matter (GM) tissues in the captured MRI images. In other words, the feature extraction phase consists of encoding the voxel properties of the MRI images into numerical vectors. The resulting feature vectors are then fed into a Neighborhood Component Analysis and Feature Selection (NCFS) algorithm coupled with the K-Nearest Neighbor (KNN) algorithm in order to learn a classification model able to recognize AD cases. The feature selection based on the NCFS algorithm improved the overall classification performance.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_2-Alzheimers_Disease_Detection_using_Neighborhood_Components.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis for Mining Images of Deep Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0111001</link>
        <id>10.14569/IJACSA.2020.0111001</id>
        <doi>10.14569/IJACSA.2020.0111001</doi>
        <lastModDate>2020-10-31T18:17:08.4970000+00:00</lastModDate>
        
        <creator>Ily Amalina Ahmad Sabri</creator>
        
        <creator>Mustafa Man</creator>
        
        <subject>Data extraction; Document Object Model; web data extraction; Wrapper using Hybrid DOM and JSON; Wrapper Extraction of Image using DOM and JSON</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(10), 2020</description>
        <description>In this paper, advancing web-scale knowledge extraction and alignment by integrating a few sources is considered by exploring different methods of aggregation and attention in order to focus on image information. An improved model, namely, Wrapper Extraction of Image using DOM and JSON (WEIDJ), is proposed to extract images and the related information in the fastest way. Several models, such as the Document Object Model (DOM), Wrapper using Hybrid DOM and JSON (WHDJ), WEIDJ, and WEIDJ (no-rules), are discussed. The experimental results on real-world websites demonstrate that our model outperforms others, such as the Document Object Model (DOM) and Wrapper using Hybrid DOM and JSON (WHDJ), in mining a higher volume of web data from various types of image formats, taking into consideration web data extraction from the deep web.</description>
        <description>http://thesai.org/Downloads/Volume11No10/Paper_1-Performance_Analysis_for_Mining_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Interpretation of Covid-19 Infections Data at Peru through the Mitchell’s Criteria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110986</link>
        <id>10.14569/IJACSA.2020.0110986</id>
        <doi>10.14569/IJACSA.2020.0110986</doi>
        <lastModDate>2020-10-01T13:28:56.3430000+00:00</lastModDate>
        
        <creator>Huber Nieto-Chaupis</creator>
        
        <subject>Covid-19; epidemiology; machine learning; Tom Mitchell; Monte Carlo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>In this paper, the criteria of Tom Mitchell, based on the philosophy of Machine Learning, have been used to interpret data on new cases per week of Covid-19 infections in Per&#250;. For this, a mathematical scheme was constructed that encloses Mitchell’s criteria as well as the idea of propagation as commonly used in modern physics to attack complex problems of interactions. With this, both the 2009 season of the AH1N1 flu outbreak and the ongoing Covid-19 data were analyzed in terms of task, performance, and experience. In contrast with the AH1N1 case, the Covid-19 data do not exhibit any performance in terms of minimizing infections in the first weeks of the outbreak, suggesting that precise actions to reduce infections have not been taken appropriately.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_86-Modeling_and_Interpretation_of_Covid_19_Infections_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT based Urban Areas Air Quality Monitoring Prototype</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110985</link>
        <id>10.14569/IJACSA.2020.0110985</id>
        <doi>10.14569/IJACSA.2020.0110985</doi>
        <lastModDate>2020-10-01T13:28:56.2200000+00:00</lastModDate>
        
        <creator>Martin M. Soto-Cordova</creator>
        
        <creator>Martha Medina-De-La-Cruz</creator>
        
        <creator>Anderson Mujaico-Mariano</creator>
        
        <subject>Air pollution; air quality; Arduino; Internet of Things (IoT); cloud service; MQTT; Air Quality Index (AQI); sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>According to the World Health Organization, the places most affected by polluting gases and suspended particles are urban areas, due to emissions from human activities; these pollutants have also caused diseases and deaths in millions of people around the world. This paper describes the design and implementation of an electronic prototype applying the Internet of Things concept with a cloud storage and processing service. The purpose of this device is to monitor air quality in real time through the presence of pollutant gases and PM10 and PM2.5 suspended particles, in order to carry out later studies that contribute to prevention measures for the health care of the population.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_85-An_IoT_based_Urban_Areas_Air_Quality_Monitoring_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disaster Recovery in Cloud Computing Systems: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110984</link>
        <id>10.14569/IJACSA.2020.0110984</id>
        <doi>10.14569/IJACSA.2020.0110984</doi>
        <lastModDate>2020-09-30T13:08:25.4130000+00:00</lastModDate>
        
        <creator>Abedallah Zaid Abualkishik</creator>
        
        <creator>Ali A. Alwan</creator>
        
        <creator>Yonis Gulzar</creator>
        
        <subject>Cloud computing; data backup; disaster recovery; multi-cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>With the rapid growth of internet technologies, large-scale online services, such as data backup and data recovery, are increasingly available. Since these large-scale online services require substantial networking, processing, and storage capacities, it has become a considerable challenge to design equally large-scale computing infrastructures that support these services cost-effectively. In response to this rising demand, cloud computing has been refined during the past decade and turned into a lucrative business for organizations that own large datacenters and offer their computing resources. Undoubtedly, cloud computing provides tremendous benefits for data storage backup and data accessibility at a reasonable cost. This paper aims at surveying and analyzing the previous works proposed for disaster recovery in cloud computing. The discussion concentrates on investigating the positive aspects and the limitations of each proposal. Also discussed are the current challenges in handling data recovery in the cloud context and the impact of the data backup plan on maintaining the data in the event of natural disasters. A summary of the leading research work is provided, outlining its weaknesses and limitations in the area of disaster recovery in the cloud computing environment. An in-depth discussion of current and future research trends in the area of disaster recovery in cloud computing is also offered. Several research directions that ought to be explored are pointed out as well, which may help researchers to discover and further investigate those problems related to disaster recovery in the cloud environment that have remained unresolved.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_84-Disaster_Recovery_in_Cloud_Computing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobility-Aware Container Migration in Cloudlet-Enabled IoT Systems using Integrated Muticriteria Decision Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110983</link>
        <id>10.14569/IJACSA.2020.0110983</id>
        <doi>10.14569/IJACSA.2020.0110983</doi>
        <lastModDate>2020-09-30T13:08:25.3970000+00:00</lastModDate>
        
        <creator>Mutaz A. B. Al-Tarawneh</creator>
        
        <subject>Internet of Things (IoT); container; migration; Cloudlet; criteria; decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Service migration plays a vital role in continuous service delivery in Internet of Things (IoT) systems. This paper presents a mobility-aware container migration algorithm for Cloudlet-enabled IoT systems. The proposed algorithm is based on an integrated multicriteria decision making (MCDM) approach. It has been implemented using a specialized simulation tool and compared to other existing migration algorithms. Simulation results demonstrate the ability of the proposed algorithm to achieve up to 48%, 48%, 20%, and 36% improvement in migration time, service downtime, migration reliability, and service loss rate, respectively, compared to other migration algorithms. The proposed algorithm is capable of perceiving the run-time dynamics of IoT systems and appropriately managing the process of container migration.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_83-Mobility_Aware_Container_Migration_in_Cloudlet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Estimation of the ALBA Autonomous Surface Craft</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110982</link>
        <id>10.14569/IJACSA.2020.0110982</id>
        <doi>10.14569/IJACSA.2020.0110982</doi>
        <lastModDate>2020-09-30T13:08:25.3800000+00:00</lastModDate>
        
        <creator>Melanie M. Valdivia-Fernandez</creator>
        
        <creator>Brayan A. Monroy-Ochoa</creator>
        
        <creator>Daniel D. Yanyachi</creator>
        
        <creator>Juan C. Cutipa-Luque</creator>
        
        <subject>Autonomous surface craft; parameter estimation; modeling; maximum likelihood; nonlinear; zigzag</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The Arequipa region holds the largest extension of the Peruvian littoral on the Pacific and also has freshwater resources composed of rivers and lagoons from the coast to the Andean highlands. The ALBA vehicle is a low-cost autonomous surface vessel with an open-source architecture that is being developed to support water monitoring tasks in the region. This article deals with the nonlinear identification problem for an autonomous surface craft, and the maximum likelihood estimation approach is used to estimate its parameters. The parametric nonlinear model is considered with simulated and experimental data. The results show good fitting values when two, three, and at most four parameters are estimated.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_82-Parameter_Estimation_of_the_ALBA_Autonomous_Surface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Small-LRU: A Hardware Efficient Hybrid Replacement Policy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110981</link>
        <id>10.14569/IJACSA.2020.0110981</id>
        <doi>10.14569/IJACSA.2020.0110981</doi>
        <lastModDate>2020-09-30T13:08:25.3500000+00:00</lastModDate>
        
        <creator>Purnendu Das</creator>
        
        <creator>Bishwa Ranjan Roy</creator>
        
        <subject>Replacement policies; cache memories; last level cache; hardware overheads; dead block</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Replacement policy plays a major role in improving the performance of modern highly associative cache memories. As the demand of data-intensive applications increases, the size of the Last Level Cache (LLC) must also increase. Increasing the size of the LLC also increases the associativity of the cache. Modern LLCs are divided into multiple banks, where each bank is a set-associative cache. The replacement policy implemented on such highly associative banks consumes significant hardware (storage and area) overhead. The Least Recently Used (LRU) replacement policy also has an issue of dead blocks. A block in the cache is called dead if the block is not used in the future before its eviction from the cache. In the LRU policy, a dead block cannot be removed early until it becomes the LRU block. We have therefore proposed a replacement technique that is capable of removing dead blocks early with a hardware cost reduced by 77% to 91% in comparison to baseline techniques. In this policy, random replacement is used for 70% of the ways and LRU is applied for the rest of the ways. The early eviction of dead blocks also improves the performance of the system by 5%.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_81-Small_LRU_A_Hardware_Efficient_Hybrid_Replacement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DistB-SDoIndustry: Enhancing Security in Industry 4.0 Services based on Distributed Blockchain through Software Defined Networking-IoT Enabled Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110980</link>
        <id>10.14569/IJACSA.2020.0110980</id>
        <doi>10.14569/IJACSA.2020.0110980</doi>
        <lastModDate>2020-09-30T13:08:25.3200000+00:00</lastModDate>
        
        <creator>Anichur Rahman</creator>
        
        <creator>Umme Sara</creator>
        
        <creator>Dipanjali Kundu</creator>
        
        <creator>Saiful Islam</creator>
        
        <creator>Md. Jahidul Islam</creator>
        
        <creator>Mahedi Hasan</creator>
        
        <creator>Ziaur Rahman</creator>
        
        <creator>Mostofa Kamal Nasir</creator>
        
        <subject>IoT; SDN; BC; AI; security; privacy; industry 4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The concept of Industry 4.0 is a newly emerging focus of research throughout the world. However, it poses many challenges in controlling data, which can be addressed with various technologies such as the Internet of Things (IoT), Big Data, Artificial Intelligence (AI), Software Defined Networking (SDN), and Blockchain (BC) for managing data securely. Further, the complexity of sensors, appliances, sensor networks connecting to the internet, and the model of Industry 4.0 has created the challenge of designing systems, infrastructure, and smart applications capable of continuously analyzing the data produced. Regarding these, the authors present a distributed Blockchain-based security architecture for Industry 4.0 applications in an SDN-IoT enabled environment, where the Blockchain is capable of providing robustness, privacy, and confidentiality to the desired system. In addition, the SDN-IoT incorporates the different services of Industry 4.0 with more security as well as flexibility. Furthermore, the authors offer an effective combination of the technologies IoT, SDN, and Blockchain to properly improve the security and privacy of Industry 4.0 services. Finally, the authors evaluate performance and security in a variety of ways in the presented architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_80-DistB_SDoIndustry_Enhancing_Security_in_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unified Approach for White Blood Cell Segmentation, Feature Extraction, and Counting using Max-Tree Data Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110979</link>
        <id>10.14569/IJACSA.2020.0110979</id>
        <doi>10.14569/IJACSA.2020.0110979</doi>
        <lastModDate>2020-09-30T13:08:25.3030000+00:00</lastModDate>
        
        <creator>Bilkis Jamal Ferdosi</creator>
        
        <subject>Segmentation; feature extraction; White Blood Cell (WBC); mathematical morphology; max-tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Accurate identification and counting of White Blood Cells (WBCs) from microscopy blood cell images are vital for the diagnosis of several blood-related diseases such as leukemia. The inevitability of automated cell image analysis in medical diagnosis has resulted in a plethora of research over the last few decades. Microscopic blood cell image analysis involves three major steps: cell segmentation, classification, and counting. Several techniques have been employed separately to solve these three problems. In this paper, a simple unified model is proposed for White Blood Cell segmentation, feature extraction for classification, and counting with connected mathematical morphological operators implemented using the max-tree data structure. The max-tree creates a hierarchical representation of the connected components of all possible gray levels present in an image in such a way that the root holds the connected components comprising pixels with the lowest intensity value, while the connected components comprising pixels with the highest intensity value are in the leaves. Any associated attributes, such as the size or shape of each connected component, can be efficiently calculated on the fly and stored in this data structure. Utilizing this knowledge-rich data structure, we obtain a better segmentation of the cells that preserves the morphology of the cells and consequently obtain better accuracy in cell counting.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_79-Unified_Approach_for_White_Blood_Cell_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fundamental Capacity Analysis for Identically Independently Distributed Nakagami-q Fading Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110978</link>
        <id>10.14569/IJACSA.2020.0110978</id>
        <doi>10.14569/IJACSA.2020.0110978</doi>
        <lastModDate>2020-09-30T13:08:25.2730000+00:00</lastModDate>
        
        <creator>Siam Bin Shawkat</creator>
        
        <creator>Md. Mazid-Ul-Haque</creator>
        
        <creator>Md. Sohidul Islam</creator>
        
        <creator>Borshan Sarker Sonok</creator>
        
        <subject>Wireless communication; SIMO channel capacity; Nakagami-q fading; Hoyt distribution; low SNR regime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>With the advancement in technology, a decent transfer rate of data for fast communication is an exigency. Different distributions on different wireless communication channels have previously been used to model them and to perform performance analysis on the systems. In this work, a capacity analysis of identically independently distributed Nakagami-q fading single-input multiple-output (SIMO) wireless communication is presented. The derivation of the channel capacity with its analytical solution has been conducted using the small limit argument approximation, which corresponds to the low signal-to-noise ratio (SNR) regime. The SIMO channel capacity behavior with respect to the number of receiver antennas and with respect to SNR has been explored in depth. The improvement of capacity is depicted rigorously. It has been found that, using the Nakagami-q distribution, the capacity of the system increases as the number of receiver antennas increases. It is also found that the capacity of this SIMO wireless system can be further improved by changing certain parameters.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_78-Fundamental_Capacity_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Healthcare Monitoring System using Online Machine Learning and Spark Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110977</link>
        <id>10.14569/IJACSA.2020.0110977</id>
        <doi>10.14569/IJACSA.2020.0110977</doi>
        <lastModDate>2020-09-30T13:08:25.2570000+00:00</lastModDate>
        
        <creator>Fawzya Hassan</creator>
        
        <creator>Masoud E. Shaheen</creator>
        
        <creator>Radhya Sahal</creator>
        
        <subject>Online machine learning; streaming data; Apache Spark; Apache Kafka; spark streaming machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Real-time monitoring and tracking systems play a critical role in the healthcare field. Wearable medical devices with sensors, mobile applications, and the health cloud continuously generate an enormous amount of data, often called streaming big data. Due to the high speed of the streaming data, it is difficult to ingest, process, and analyze such huge data in real time to take real-time actions in case of emergencies. Traditional methods are inadequate and time-consuming. Therefore, there is a significant need for real-time big data stream processing to guarantee an effective and scalable solution. We thus propose a new system for online prediction of health status using the Spark Streaming framework. The proposed system focuses on applying streaming machine learning models (i.e., streaming linear regression with SGD) to streaming health data events ingested into Spark Streaming through Kafka topics. The experiments are performed on historical medical datasets (i.e., a diabetes dataset, a heart disease dataset, and a breast cancer dataset) and a generated dataset that simulates wearable medical sensors. The historical datasets have shown that the accuracy improvement ratio obtained using the diabetes disease dataset is the highest of the three datasets, with an accuracy of 81%. For the generated datasets, the online prediction system has achieved an accuracy of 98% at a 5-second window size. Beyond this, the experimental results have proved that the online prediction system can learn online and update the model according to new data arrivals and the window size.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_77-Real_Time_Healthcare_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Wearable Sensors for Human Activity Recognition in Logistics: A Comparison of Different Feature Sets and Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110976</link>
        <id>10.14569/IJACSA.2020.0110976</id>
        <doi>10.14569/IJACSA.2020.0110976</doi>
        <lastModDate>2020-09-30T13:08:25.2270000+00:00</lastModDate>
        
        <creator>Abbas Shah Syed</creator>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Muhammad Shehram Shah</creator>
        
        <creator>Salahuddin Saddar</creator>
        
        <subject>Human Activity Recognition (HAR); inertial sensors; LARa dataset; smart industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The topic of human activity recognition has gained a lot of attention due to its usage for exercise monitoring, smart health, and assisted living. Even though the aforementioned domains have received significant interest from researchers, activity recognition for industrial settings has received little attention in comparison. Industry 4.0 involves the assimilation of industrial workers with robots and other equipment used in the industry, and it necessitates the development of recognition methodologies for activities being performed in industries. In this regard, this paper presents a comparison of the performance of various time/frequency domain features and popular machine learning algorithms for activity recognition in a logistics scenario. Experiments were conducted on inertial measurement sensor data from the recently released LARa dataset, with three feature sets being used with four machine learning algorithms: Support Vector Machines, Decision Trees, Random Forests, and Extreme Gradient Boost (XGBoost). The best result achieved in the experiments was an average accuracy of 78.61% using the XGBoost classifier with both time and frequency domain features. This work serves as a baseline for activity recognition in logistics using IMU sensors and enables the development of solutions to support fulfillment of Industry 4.0 goals.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_76-Using_Wearable_Sensors_for_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Product Recommendation in Offline Retail Industry by using Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110975</link>
        <id>10.14569/IJACSA.2020.0110975</id>
        <doi>10.14569/IJACSA.2020.0110975</doi>
        <lastModDate>2020-09-30T13:08:25.2100000+00:00</lastModDate>
        
        <creator>Bayu Yudha Pratama</creator>
        
        <creator>Indra Budi</creator>
        
        <creator>Arlisa Yuliawati</creator>
        
        <subject>Recommendation system; offline retail store; memory-based collaborative filtering; customer segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The variety of purchased products is important for retailers. When a customer buys a specific product in large numbers, the customer might get benefits, such as more discounts. On the contrary, this could harm the retailers, since only some products are sold quickly. Due to this problem, big retailers try to entice customers to buy many variations of products. For an offline retailer, promoting specific products based on the market’s taste is quite challenging because of the unavailability of information regarding customers’ preferences. This study utilized four years of purchase transaction data to implicitly find customers’ ratings or feedback towards specific products they have purchased. This study employed two Collaborative Filtering methods to generate product recommendations for customers and to find the best method. The result shows that the Memory-based approach (k-NN algorithm) outperformed the Model-based one (SVD Matrix Factorization). Another finding is that the more training data used, the better the recommendation system performs. To cope with the data scalability issue, customer segmentation through k-Means Clustering was applied. The result implies that this is not necessary, since it failed to boost the models&#39; accuracy. The result of the recommendation system is then applied in a suggested business process for a specific offline retailer shop.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_75-Product_Recommendation_in_Offline_Retail_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBSR: A Depth-Based Secure Routing Protocol for Underwater Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110974</link>
        <id>10.14569/IJACSA.2020.0110974</id>
        <doi>10.14569/IJACSA.2020.0110974</doi>
        <lastModDate>2020-09-30T13:08:25.1930000+00:00</lastModDate>
        
        <creator>Ayman Alharbi</creator>
        
        <subject>UWSN; DBR; ECC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The Depth-Based Routing (DBR) protocol has gained considerable attention as an efficient routing scheme for Underwater Wireless Sensor Networks (UWSNs). It requires only depth information to perform the routing process. Despite this feature, UWSNs that operate with the DBR protocol are vulnerable to depth-spoofing attacks. In this paper, the Depth-Based Secure Routing (DBSR) protocol is proposed to overcome this vulnerability. DBSR modifies the traditional DBR routing algorithm by securing the depth information embedded in the header part of the DBR packet. In addition, each node verifies the sender’s identity based on a digital signature scheme. We extensively evaluate the overhead and performance gain of DBSR for two signature schemes based on the Elliptic Curve Cryptography method, considering various network conditions. The simulation study is performed using an NS3-based simulator. Our results show that DBSR can avoid the depth-spoofing attack while achieving 95% and 85% delivery ratios under low and high network loads, respectively. Contrary to popular belief, the results show that careful utilization of cryptographic techniques is justifiable without significant overhead on the communication cost.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_74-DBSR_A_Depth_Based_Secure_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VerbNet based Citation Sentiment Class Assignment using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110973</link>
        <id>10.14569/IJACSA.2020.0110973</id>
        <doi>10.14569/IJACSA.2020.0110973</doi>
        <lastModDate>2020-09-30T13:08:25.1770000+00:00</lastModDate>
        
        <creator>Zainab Amjad</creator>
        
        <creator>Imran Ihsan</creator>
        
        <subject>Citation content analysis; sentiment analysis; semantic analysis; ontology; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Citations are used to establish a link between articles. This intent has changed over the years; citations are now being used as a criterion for evaluating research work or authors, and they have become one of the most important criteria for granting rewards or incentives. As a result, many unethical activities related to the use of citations have emerged. That is why content-based citation sentiment analysis techniques are developed on the hypothesis that all citations are not equal. Several studies have attempted to find the sentiment of a citation; however, only a handful of techniques have used citation sentences for this purpose. In this research, we have proposed a verb-oriented citation sentiment classification for researchers by semantically analyzing verbs within a citation text using the VerbNet ontology, natural language processing, and four different machine learning algorithms. Our proposed methodology emphasizes the verb as a fundamental element of opinion. By developing and assessing the proposed methodology and according to benchmark results, the methodology can perform well while dealing with a variety of datasets. The technique has shown promising results using the Support Vector Classifier.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_73-VerbNet_Based_Citation_Sentiment_Class_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hate Speech Detection in Twitter using Transformer Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110972</link>
        <id>10.14569/IJACSA.2020.0110972</id>
        <doi>10.14569/IJACSA.2020.0110972</doi>
        <lastModDate>2020-09-30T13:08:25.1470000+00:00</lastModDate>
        
        <creator>Raymond T Mutanga</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Oludayo O Olugbara</creator>
        
        <subject>Attention transformer; deep learning; neural network; recurrent network; sequence transduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Social media networks such as Twitter are increasingly utilized to propagate hate speech while facilitating mass communication. Recent studies have highlighted a strong correlation between hate speech propagation and hate crimes such as xenophobic attacks. Due to the size of social media and the consequences of hate speech in society, it is essential to develop automated methods for hate speech detection on different social media platforms. Several studies have investigated the application of different machine learning algorithms for hate speech detection. However, the performance of these algorithms is generally hampered by inefficient sequence transduction. Vanilla recurrent neural networks and recurrent neural networks with attention have been established as state-of-the-art methods for the tasks of sequence modeling and sequence transduction. Unfortunately, these methods suffer from intrinsic problems such as long-term dependency and a lack of parallelization. In this study, we investigate a transformer-based method and test it on a publicly available multiclass hate speech corpus containing 24783 labeled tweets. The DistilBERT transformer method was compared against attention-based recurrent neural networks and other transformer baselines for hate speech detection in Twitter documents. The study results show that the DistilBERT transformer outperformed the baseline algorithms while allowing parallelization.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_72-Hate_Speech_Detection_in_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meta-Analysis of Artificial Intelligence Works in Ubiquitous Learning Environments and Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110971</link>
        <id>10.14569/IJACSA.2020.0110971</id>
        <doi>10.14569/IJACSA.2020.0110971</doi>
        <lastModDate>2020-09-30T13:08:25.1300000+00:00</lastModDate>
        
        <creator>Caitlin Sam</creator>
        
        <creator>Nalindren Naicker</creator>
        
        <creator>Mogiveny Rajkoomar</creator>
        
        <subject>Educational data mining; intelligent systems; artificial intelligence; PRISMA; machine learning; ubiquitous learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Ubiquitous learning (u-learning) refers to anytime and anywhere learning. U-learning has progressed to be considered a conventional teaching and learning approach in schools and is adopted to continue with the school curriculum when learners cannot attend schools for face-to-face lessons. Computer Science, namely the field of Artificial Intelligence (AI), presents tools and techniques to support the growth of u-learning and provides recommendations and insights to academic practitioners and AI researchers. Aim: The aim of this study was to conduct a meta-analysis of Artificial Intelligence works in ubiquitous learning environments and technologies to present the current state from the plethora of research. Method: The mining of related articles was devised according to the technique of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The complement of included research articles was sourced from the broadly used databases, namely, Science Direct, Springer Link, Semantic Scholar, Academia, and IEEE. Results: A total of 16 scientific research publications were shortlisted for this study from 330 articles identified through database searching. Using a random-effects model, the pooled estimate of artificial intelligence works in ubiquitous learning environments and technologies was 10% (95% CI: 3%, 22%; I2 = 99.46%, P = 0.00), which indicates the presence of considerable heterogeneity. Conclusion: It can be concluded, based on the experimental results from the subgroup analysis, that machine learning studies [18% (95% CI: 11%, 25%), I2 = 99.83%] were considerably more heterogeneous than intelligent decision support systems, intelligent systems, and educational data mining. However, this does not mean that intelligent decision support systems, intelligent systems, and educational data mining are not efficient.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_71-Meta_Analysis_of_Artificial_Intelligence_Works.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Population based Optimized and Condensed Fuzzy Deep Belief Network for Credit Card Fraudulent Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110970</link>
        <id>10.14569/IJACSA.2020.0110970</id>
        <doi>10.14569/IJACSA.2020.0110970</doi>
        <lastModDate>2020-09-30T13:08:25.1000000+00:00</lastModDate>
        
        <creator>Jisha M. V</creator>
        
        <creator>D. Vimal Kumar</creator>
        
        <subject>Credit card fraudulent; uncertainty; intuitionistic fuzzy; fuzzy deep belief network; sea turtle foraging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>In this information era, with the advancement in technology, there is a high risk due to financial fraud, which is a continually increasing menace during online transactions. Credit card fraud identification is one of the toughest challenges because of two important issues: the profile of credit card users’ behavior changes constantly, and credit card datasets are skewed. The factors that greatly affect credit card fraudulent transaction detection are primarily the data sampling models, the features involved in feature selection, and the detection approaches applied. To overcome these issues, instead of using certainty theory, this paper deploys three different empowered models for the intelligent detection of fraudulent transactions. In this work, the uncertainty theory of the intuitionistic fuzzy theorem is used to determine the significant features that will influence the detection process effectively. The maximized relevancy among dependent and independent features of the credit card dataset is determined using the grade of membership and non-membership information of each feature. Intuitionistic fuzzy mutual information, with the knowledge of entropy, selects the features with the highest information score as the significant feature subset. The proposed model devises a Fuzzy Deep Belief Network enriched with Sea Turtle Foraging for credit card fraud detection (EFDBN-STFA). The fuzzy deep belief network handles the complex patterns of credit card transactions with its deep knowledge, and the pattern of the dataset is analyzed by its stacked restricted Boltzmann machines. The weights assigned to the hidden nodes are fine-tuned by the sea turtle foraging using its fitness measure, thus improving the detection accuracy of the FDBN. Simulation results proved the efficacy of EFDBN-STFA on two different credit card datasets; with its gained ability of handling the hesitation factor and optimization using a metaheuristic approach, it achieves a higher detection rate with reduced false alarms compared to other existing detection models.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_70-Population_based_Optimized_and_Condensed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Processing for Animation at Key Points of Movement in the Mimosa Pudica</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110969</link>
        <id>10.14569/IJACSA.2020.0110969</id>
        <doi>10.14569/IJACSA.2020.0110969</doi>
        <lastModDate>2020-09-30T13:08:25.0830000+00:00</lastModDate>
        
        <creator>Rodolfo Romero-Herrera</creator>
        
        <creator>Laura Mendez-Segundo</creator>
        
        <subject>Harris; Brisk; correlation; ROI (Region of Interest); Canny; Sobel; Mimosa Pudica; movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Processing a single image of a moving plant is inadequate; for this reason, digital video processing must be incorporated, which allows the behavior of an algorithm to be analyzed over time. A method is presented that takes images of a plant with autonomous movement filmed on video; the frames are digitally processed, and the information is used to generate animations. Our representation of the structure is derived from an analysis of the image where the plant is deformed; the projections of the movement of the plant are recovered from the video frames and are used as a basis to generate videograms in an animation based on key points taken from an image; the Harris and Brisk algorithms are applied. The main plant used is the Mimosa Pudica. Once the frames have been obtained, correlation is proposed as a mechanism to find movement. The techniques are equally useful for any other moving plant, such as carnivorous plants or sunflowers.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_69-Video_Processing_for_Animation_at_Key_Points.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Dimensional Fraud Detection Metrics in Business Processes and their Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110968</link>
        <id>10.14569/IJACSA.2020.0110968</id>
        <doi>10.14569/IJACSA.2020.0110968</doi>
        <lastModDate>2020-09-30T13:08:25.0700000+00:00</lastModDate>
        
        <creator>Badr Omair</creator>
        
        <creator>Ahmad Alturki</creator>
        
        <subject>Business process fraud; fraud detection; fraud indicators; fraud measures; fraud metrics; PBF; red flags</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Occupational fraud is defined as the deliberate misuse of one’s occupation for personal enrichment. It poses a significant challenge for organizations and governments. Estimates indicate that the funds involved in occupational fraud cases investigated across 125 countries between 2018 and 2019 exceeded US$3.6 billion. Process-based fraud (PBF) is a form of occupational fraud that is perpetrated inside business processes. Business processes underlie the logic of the work that organizations undertake, and they are used to execute an organization’s strategies to achieve organizational goals. Business processes should be examined for potential fraud risks to ensure that businesses achieve their objectives. While it is impossible to prevent fraud entirely, it must be detected. However, PBF detection metrics are not well developed at present. They are scattered, unstandardized, not validated, and, in some cases, absent. This study aimed to develop a comprehensive PBF detection metric by leveraging and operationalizing a taxonomy of fraud detection metrics for business processes as an underlying theory. In total, 41 PBF detection metrics were deduced from the taxonomy using design science research. To evaluate their utility, the metrics were applied using illustrative scenarios, and a real example of the implementation of the metrics was provided. The developed metrics form a complete, classified, validated, and standardized list of PBF detection metrics, which includes all the necessary PBF detection dimensions. It is expected that the stakeholders involved in PBF detection will use the metrics established in this work in their practice to increase the effectiveness of the PBF detection process.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_68-Multi_Dimensional_Fraud_Detection_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Netnography and Text Mining to Understand Perceptions of Indian Travellers using Online Travel Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110967</link>
        <id>10.14569/IJACSA.2020.0110967</id>
        <doi>10.14569/IJACSA.2020.0110967</doi>
        <lastModDate>2020-09-30T13:08:25.0530000+00:00</lastModDate>
        
        <creator>Dashrath Mane</creator>
        
        <creator>Prateek Srivastava</creator>
        
        <subject>Consumer; travellers; netnography; text mining; OTA; sentiment; perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Advancements in the electronic commerce industry have helped online travel services in many ways. This paper examines the overall impact of travellers using online services and the sentiments derived from a collection of reviews of online travel service providers, known as online travel agents (OTA), in India. Customer reviews from different identified sources were collected, and the satisfaction of travellers using various online travel services was analyzed using netnographic analysis and text mining. This paper also covers in detail the process of data collection and analysis using netnography and text mining methods, which help in analyzing and deriving sentiments from the collected reviews. The results obtained are presented as token lists, keyword analysis, and service-specific analysis. A statistical analysis of the different results is performed to understand the relationship between the various services and the OTAs.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_67-Netnography_and_Text_Mining_to_Understand_Perceptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Graphic Information System Applied to Quality Statistic Control in Production Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110966</link>
        <id>10.14569/IJACSA.2020.0110966</id>
        <doi>10.14569/IJACSA.2020.0110966</doi>
        <lastModDate>2020-09-30T13:08:25.0370000+00:00</lastModDate>
        
        <creator>Laura Vazquez</creator>
        
        <creator>Alicia Valdez</creator>
        
        <creator>Griselda Cortes</creator>
        
        <creator>Mariana Rosales</creator>
        
        <subject>Information system; Nelson’s rules; Model View Controller pattern; C#; ASP.NET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>One of the advantages that organizations gain from using an Information System is control over their activities. This article develops an Information System that allows an organization to graphically obtain the real results of a production process by applying Nelson&#39;s eight rules to determine whether any measured variable is out of control. The software architecture pattern used is Model View Controller (MVC), which keeps the functionality of the application separate. The front-end, that is, the part that interacts with the users, was developed in ASP.NET as a web platform to provide the required services, together with JavaScript, HTML 5, Razor, and Bootstrap. The back-end, which is the part that processes the input from the front-end and performs the calculations, operations, communication with the database, and reading of files, was developed with the C sharp programming language, the SQL Server database management system, and the Entity Framework. As a result, the system sends an e-mail as an alarm, with an explanation of what has happened, when it detects that some measured variable is out of control according to Nelson&#39;s rules. This allows the organization to make effective decisions in the processes involved.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_66-Development_of_a_Graphic_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cluster based Non-Linear Regression Framework for Periodic Multi-Stock Trend Prediction on Real Time Stock Market Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110965</link>
        <id>10.14569/IJACSA.2020.0110965</id>
        <doi>10.14569/IJACSA.2020.0110965</doi>
        <lastModDate>2020-09-30T13:08:25.0070000+00:00</lastModDate>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <creator>R. Ragupathy</creator>
        
        <subject>Multi-stock trend prediction; stock market; clustering; nonlinear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Trend prediction has been one of the most important tasks in the stock market since its inception. For sophisticated trend prediction on real-time stock market data, stock sentiment news and technical analysis play a vital role. In conventional trend prediction, technical indicators are delayed due to temporal data and limited historic data. Conventional stock trend prediction methods have operated without sentiment scores, technical scores, and time periods for trend prediction. Because previous conventional methods are bound to a single stock for trend prediction due to high computational memory and time requirements, the algorithms proposed here focus on trend prediction with multi-stock data, breaking these conventional constraints. This multi-stock trend prediction model implements the proposed algorithms on a real-time stock market dataset. In this model, a new stock technical indicator and a new stock sentiment score are proposed to improve stock feature selection for trend prediction. To find the best real-time feature selection model, a technical feature selection measure and a stock news sentiment score are developed and incorporated. We used integrated stock market data to build a hybrid clustered model that finds related multi-stocks. In summary, this is a cluster-based nonlinear regression multi-stock framework for time-based trend prediction. In the experimental results, multi-stock trend regression accuracy improved by 12% and recall by 11%, making this model more accurate and precise.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_65-A_Cluster_based_Non_Linear_Regression_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing the Behavioral Semantics of Diagrammatic Languages by Co-simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110964</link>
        <id>10.14569/IJACSA.2020.0110964</id>
        <doi>10.14569/IJACSA.2020.0110964</doi>
        <lastModDate>2020-09-30T13:08:24.9900000+00:00</lastModDate>
        
        <creator>Daniel-Cristian Craciunean</creator>
        
        <subject>DSML; cyber-physical systems; behavioral semantics; standalone FMU; FMI; diagrammatic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Due to the multidisciplinary nature of cyber-physical systems, it is impossible for an existing modeling language to be used effectively in all cases. For this reason, the development of domain-specific modeling languages is beginning to become an integral part of the modeling process. This diversification of modeling languages often implies the need to co-simulate subsystems in order to obtain the effect of a complete system. This paper presents how the behavioral semantics of a diagrammatic DSML can be implemented by co-simulation. For the formal specification of the language we used mechanisms from category theory. To specify behavioral semantics, we introduced the notion of a behavioral rule as an aggregation between a graph transformation and a behavioral action. The paper also contains a relevant example and demonstrates that the implementation of the behavioral semantics of a diagrammatic model can be achieved by co-simulating standalone FMUs associated with behavioral rules.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_64-Implementing_the_Behavioral_Semantics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trading Saudi Stock Market Shares using Multivariate Recurrent Neural Network with a Long Short-term Memory Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110963</link>
        <id>10.14569/IJACSA.2020.0110963</id>
        <doi>10.14569/IJACSA.2020.0110963</doi>
        <lastModDate>2020-09-30T13:08:24.9770000+00:00</lastModDate>
        
        <creator>Fahd A. Alturki</creator>
        
        <creator>Abdullah M. Aldughaiyem</creator>
        
        <subject>Time series; neural network; long short-term memory; stock price; Tadawul</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This study tests the weak form of the efficient market hypothesis on the Saudi stock market and proposes a recurrent neural network (RNN) to produce a trading signal. To predict the next-day trading signal of several shares in the Saudi stock market, we designed the RNN with a long short-term memory architecture. The network input comprises several time series features that contribute to the classification process. The proposed RNN output is fed to a trading agent that buys or sells shares based on the share&#8217;s current value, the current available balance, and the current number of shares owned. To evaluate the proposed neural network, we used historical Brent crude oil price data in combination with other stock features (e.g., the previous day&#8217;s opening and closing prices of the evaluated share). The results indicate that oil price variations affect the Saudi stock market. Furthermore, the proposed RNN model produces the next-day trading signal with 55% accuracy. For the same period, the proposed RNN trading method achieves an investment gain of 23%, whereas the buy-and-hold method obtained 1.2%.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_63-Trading_Saudi_Stock_Market_Shares.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cotton Leaf Image Segmentation using Modified Factorization-Based Active Contour Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110962</link>
        <id>10.14569/IJACSA.2020.0110962</id>
        <doi>10.14569/IJACSA.2020.0110962</doi>
        <lastModDate>2020-09-30T13:08:24.9430000+00:00</lastModDate>
        
        <creator>Bhagya M Patil</creator>
        
        <creator>Basavaraj Amarapur</creator>
        
        <subject>Cotton leaf; active contour; Gradient Vector Flow (GVF); Adaptive Diffusion Flow (ADF); Vector Flow Convolution (VFC); Modified factorization based active contour (MFACM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The cotton plant is one of the most widely cultivated crops worldwide. The leaf is one of the important parts that help in food production. There are different cotton leaf diseases, such as Alternaria spot, foliar, and bacterial blight, which affect the agricultural yield. To detect these diseases, leaf region extraction becomes a significant task, and to achieve this we use image processing techniques. Hence, in this paper, a novel method is used to extract the leaf region from a complex background. The algorithm used in this method is the modified factorization-based active contour (MFACM), which helps in obtaining better output images. The database images used for the research were acquired from the field using a digital camera. The proposed work is compared with existing active contour algorithms such as Gradient Vector Flow (GVF), Adaptive Diffusion Flow (ADF), and Vector Flow Convolution (VFC). From the experiments, it can be observed that the proposed method is better than the other active contour methods in terms of computation time and the number of iterations. In addition, the segmented result is analyzed using specificity, sensitivity, and precision, which showed that our proposed method is better than the other methods.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_62-Cotton_Leaf_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study of e-Learning Interface Design Elements for Generation Z</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110961</link>
        <id>10.14569/IJACSA.2020.0110961</id>
        <doi>10.14569/IJACSA.2020.0110961</doi>
        <lastModDate>2020-09-30T13:08:24.9130000+00:00</lastModDate>
        
        <creator>Hazwani Nordin</creator>
        
        <creator>Dalbir Singh</creator>
        
        <creator>Zulkefli Mansor</creator>
        
        <subject>e-Learning; interface design; generation Z; culture; focus group</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>E-learning is the latest evolution of electronic-based learning; it creates, fosters, delivers, and facilitates the learning process anytime and anywhere through interactive network technology. Using e-learning as a learning platform makes users want a high-quality interface design to interact with the e-learning system. An interface design that meets students’ needs and expectations may increase their involvement in and satisfaction with e-learning, especially for generation Z students. However, interface design is frequently criticized and has become one of the issues that cause the failure of e-learning. A lack of understanding of students’ cultural backgrounds and preferences regarding e-learning interface design is the major factor contributing to this phenomenon. To ensure the success of e-learning, a model of interface design specifically for generation Z students’ culture, consisting of interface elements and design characteristics, needs to be developed. Thus, this study aimed to address this subject by identifying e-learning interface elements and design characteristics from the existing literature, confirming these elements and design characteristics, and discovering related elements for e-learning interface design from generation Z students’ perspective. This study used a semi-structured focus group interview that included seven students. The focus group interview involved three main steps: sampling, protocol, and research instruments. This study validated several interface elements and design characteristics that contribute to the model of e-learning interface design. The findings could guide interface designers in designing e-learning interfaces for generation Z students.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_61-An_Empirical_Study_of_E_Learning_Interface_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Most Efficient Classifiers for the Students’ Academic Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110960</link>
        <id>10.14569/IJACSA.2020.0110960</id>
        <doi>10.14569/IJACSA.2020.0110960</doi>
        <lastModDate>2020-09-30T13:08:24.8970000+00:00</lastModDate>
        
        <creator>Ebtehal Ibrahim Al-Fairouz</creator>
        
        <creator>Mohammed Abdullah Al-Hagery</creator>
        
        <subject>Data mining; student performance; classification algorithms; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Educational institutions hold vast collections of data accumulated over years, so it is difficult to use this data to solve problems related to the progress of the educational process and to contribute to achieving quality. For this reason, the use of data mining techniques helps to extract hidden knowledge that supports the decisions necessary to develop education and achieve quality requirements. The data of this study were obtained from the College of Business and Economics at Qassim University. Three classifiers were compared in this study: Decision Tree, Random Forest, and Na&#239;ve Bayes. The results showed that Random Forest outperforms the other algorithms with 71.5% Precision and 71.2% F1-score, and it also achieved 71.3% Recall and Classification Accuracy (CA). This study helps reduce failure by providing an academic advisor to students who have weaknesses in achieving a high Grade Point Average (GPA). It also helps in developing the educational process by discovering and overcoming weaknesses.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_60-The_Most_Efficient_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Analysis of Arabic Cursive Steganography using Complex Edge Detection Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110959</link>
        <id>10.14569/IJACSA.2020.0110959</id>
        <doi>10.14569/IJACSA.2020.0110959</doi>
        <lastModDate>2020-09-30T13:08:24.8800000+00:00</lastModDate>
        
        <creator>Anwar H. Ibrahim</creator>
        
        <creator>Abdulrahman S. Alturki</creator>
        
        <subject>Arabic language; Berkeley Segmentation Database (BSD); Least Significant Bit (LSB); Roberts; Prewitt; Sobel; LoG; Canny</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The Arabic language contains multiple features that bear on the process of embedding and extracting text from a compressed image. In particular, Arabic covers numerous textual styles and shapes of its letters. This paper investigates Arabic cursive steganography using complex edge detection techniques on compressed images, considering several text characteristics (short, medium, and long sentences) as per the research interest. A sample of images from the Berkeley Segmentation Database (BSD) was utilized and compressed with a diverse number of bits per pixel through the Least Significant Bit (LSB) technique. The method presented in this paper was evaluated using five complex edge detectors (Roberts, Prewitt, Sobel, LoG, and Canny) via MATLAB. The Canny edge detector was demonstrated to be the best solution when it is vital to perform superior edge detection over compressed images, but Sobel appears to be better in terms of execution time for long sentence contents.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_59-Computational_Analysis_of_Arabic_Cursive_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligent Techniques for Palm Date Varieties Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110958</link>
        <id>10.14569/IJACSA.2020.0110958</id>
        <doi>10.14569/IJACSA.2020.0110958</doi>
        <lastModDate>2020-09-30T13:08:24.8500000+00:00</lastModDate>
        
        <creator>Lazhar Khriji</creator>
        
        <creator>Ahmed Chiheb Ammari</creator>
        
        <creator>Medhat Awadalla</creator>
        
        <subject>Palm date; feature extraction; machine learning; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The demand for high-quality palm dates is increasing due to their energy value and nutrient content, which are of great importance in the human diet. To meet consumer and market standards with large-scale production in Oman, one of the top date producers, an inline classification system is of great importance. This paper addresses the potential of using Machine-Learning (ML) techniques to classify automatically, without any physical measurement, the six most popular date fruit varieties in Oman. The effects of color, shape, size, and texture features, and of the critical parameters of the classifiers, on classification efficiency have been investigated. Three different ML techniques have been used for automatic classification and qualitative comparison: (i) Artificial Neural Networks (ANN), (ii) Support Vector Machine (SVM), and (iii) K-Nearest Neighbor (KNN). Merging the color, shape, and size features contributes to achieving the highest accuracy. Experimental results show that the ANN classifier outperforms both SVM and KNN with the highest classification accuracy of 99.2%. The vision system developed in this paper can be successfully integrated into date packaging factories.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_58-Artificial_Intelligent_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Project-Based Learning on Networking and Communications Competences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110957</link>
        <id>10.14569/IJACSA.2020.0110957</id>
        <doi>10.14569/IJACSA.2020.0110957</doi>
        <lastModDate>2020-09-30T13:08:24.8330000+00:00</lastModDate>
        
        <creator>Cristian Castro-Vargas</creator>
        
        <creator>Maritza Cabana-Caceres</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Project-based learning; competencies; networking; communications; network convergence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The objective of this article is to establish the impact of project-based learning on Networking and Communications Competences I in engineering students from the city of Lima. The study was of an applied type with a quasi-experimental design, and the population was composed of 39 students in the sixth cycle of engineering; an objective test was applied to measure the impact of project-based learning on Networking and Communications Competences I. The research results determined a statistically significant relationship between project-based learning and Networking and Communications Competences I in engineering students, with pretest values of Z = -0.498, greater than -1.96 (critical point), and a significance level of p-value = 0.618, greater than α = 0.05 (p&gt;α), followed by posttest values of Z = -4.488, less than -1.96 (critical point), and a significance level of p-value = 0.000, less than α = 0.05 (p&lt;α). Therefore, project-based learning positively and significantly impacts Networking and Communications Competences I in engineering students, supporting the alternative hypothesis and rejecting the null hypothesis. Consequently, we conclude that the application of the project-based learning methodology caused a positive and significant impact on Networking and Communications Competences I in engineering students.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_57-Impact_of_Project_Based_Learning_on_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Online Plagiarism Detection System based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110956</link>
        <id>10.14569/IJACSA.2020.0110956</id>
        <doi>10.14569/IJACSA.2020.0110956</doi>
        <lastModDate>2020-09-30T13:08:24.8200000+00:00</lastModDate>
        
        <creator>El Mostafa Hambi</creator>
        
        <creator>Faouzia Benabbou</creator>
        
        <subject>Plagiarism detection; plagiarism detection tools; deep learning; Doc2vec; Stacked Long Short-Term Memory (SLSTM); Convolutional Neural Network (CNN); Siamese neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Plagiarism is an increasingly widespread and growing problem in the academic field. Several plagiarism techniques are used by fraudsters, ranging from simple synonym replacement and sentence structure modification to more complex methods involving several types of transformation. Human-based plagiarism detection is a difficult, inaccurate, and time-consuming process. In this paper we propose a plagiarism detection framework based on three deep learning models: Doc2vec, Siamese Long Short-Term Memory (SLSTM), and Convolutional Neural Network (CNN). Our system uses three layers: a Preprocessing Layer including word embedding, Learning Layers, and a Detection Layer. To evaluate our system, we carried out a study of plagiarism detection tools in the academic field and made a comparison based on a set of features. Compared with other works, our approach achieves a good accuracy of 98.33% and can detect different types of plagiarism, allows specifying another dataset, and supports comparing the document against an internet search.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_56-A_New_Online_Plagiarism_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach to Enhance Scalability, Reliability and Computational Speed in LoRa Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110955</link>
        <id>10.14569/IJACSA.2020.0110955</id>
        <doi>10.14569/IJACSA.2020.0110955</doi>
        <lastModDate>2020-09-30T13:08:24.7870000+00:00</lastModDate>
        
        <creator>S. Raja Gopal</creator>
        
        <creator>V S V Prabhakar</creator>
        
        <subject>Hybrid technique; Internet of Things (IoT); lightweight scheduling; LoRa; pruning algorithm; scalability; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The spread of the Internet of Things (IoT) is facilitated by new wireless communication technologies. To achieve reliable communication over long durations, low-power wide-area networking can be aimed at sensor nodes. These days, LoRa has become a recognizable choice for IoT-based solutions. In this paper, LoRa network performance is analyzed. A hybrid technique is proposed to overcome the scalability, reliability, and computational speed issues in LoRa networks. In the proposed hybrid technique, a lightweight scheduling technique is used to address scalability and reliability issues, and a pruning algorithm is incorporated to enhance the computational speed of the LoRa network for IoT applications. Further, the LoRa network for IoT applications is analyzed using the LoRaWAN NS-3 module, and the packet error ratio (PER), network throughput, and fairness parameters for improved reliability and scalability are illustrated. Simulation results are obtained for multiple-gateway scenarios. The analysis shows that the LoRa network addresses the scalability and reliability issues using the lightweight scheduling technique, and computational speed is also enhanced using the pruning algorithm. Therefore, the hybrid technique illustrated for the LoRa network for IoT applications performs as intended.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_55-A_Hybrid_Approach_to_Enhance_Scalability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outlier Detection using Nonparametric Depth-Based Techniques in Hydrology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110954</link>
        <id>10.14569/IJACSA.2020.0110954</id>
        <doi>10.14569/IJACSA.2020.0110954</doi>
        <lastModDate>2020-09-30T13:08:24.7730000+00:00</lastModDate>
        
        <creator>Insia Hussain</creator>
        
        <subject>Outlyingness functions; nonparametric techniques; flood characteristics; principal component scores; multivariate analysis; functional analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Several issues arise when extending methods of outlier detection from a single dimension to higher dimensions. These issues include limited methods for visualization, the inadequacy of marginal methods, the lack of a natural order, and limitations in parametric modeling. To overcome and address such limitations, nonparametric outlier identifiers based on depth functions are introduced. These identifiers comprise four threshold-type outlyingness functions for outlier detection: Mahalanobis distance, Tukey depth, spatial Mahalanobis depth, and projection depth. The object of the present research is the application of the proposed nonparametric technique in hydrology. The study is executed in two different frameworks: multivariate hydrological data analysis and functional hydrological data analysis. A flood event is graphically represented by a hydrograph whose components are used to compute the flood characteristics peak (p) and volume (v). These characteristics are frequently employed for various types of analysis in the multivariate study, whereas the hydrograph is employed exhaustively in functional data analysis so that no important information regarding the flood event is missed. In the multivariate framework, the proposed technique is applied to the bivariate flood characteristics (p, v), while in the functional framework the proposed approach is applied to the first two principal component scores, denoted (z_1, z_2), since the first two principal components capture the major variation of the data employed for analysis.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_54-Outlier_Detection_using_Nonparametric_Depth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physiotherapy: Design and Implementation of a Wearable Sleeve using IMU Sensor and VR to Measure Elbow Range of Motion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110953</link>
        <id>10.14569/IJACSA.2020.0110953</id>
        <doi>10.14569/IJACSA.2020.0110953</doi>
        <lastModDate>2020-09-30T13:08:24.7570000+00:00</lastModDate>
        
        <creator>Anzalna Narejo</creator>
        
        <creator>Attiya Baqai</creator>
        
        <creator>Neha Sikandar</creator>
        
        <creator>Absar Ali</creator>
        
        <creator>Sanam Narejo</creator>
        
        <subject>Range of Motion (RoM); physiotherapy; Inertial Measurement Unit (IMU); Virtual Reality (VR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Range of motion (RoM) is the measurement of the angular movement of joints that defines joint flexibility. It is crucial to measure RoM while performing musculoskeletal diagnostics. Physiotherapy and hospital visits can be very costly and demand a great deal of time; moreover, most current digital instruments used to measure RoM are very expensive and hard to use. In this paper, a digital wearable sleeve device is designed and tested which is cheap, time-efficient, and easy to use. The designed device shows 95% agreement with the Universal Goniometer (UG) when evaluated using Bland-Altman plots. Patients can take their measurements on their own and visualize results on their desktops or mobile phones. Patients also receive graphical feedback highlighting the extent of variation between their exercise performance and the standard exercise. In addition, patients can automatically compare their current exercise with a previous exercise using the Kolmogorov-Smirnov (K-S) test. To make exercising more fun, we have developed a 3D VR (Virtual Reality) gaming environment for elbow flexion, elbow supination and pronation, and elbow extension exercises, where patients can exercise in an interactive environment and visualize their progress side by side.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_53-Physiotherapy_Design_and_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physically-Based Animation in Performing Accuracy Bouncing Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110952</link>
        <id>10.14569/IJACSA.2020.0110952</id>
        <doi>10.14569/IJACSA.2020.0110952</doi>
        <lastModDate>2020-09-30T13:08:24.7270000+00:00</lastModDate>
        
        <creator>Loh Ngiik Hoon</creator>
        
        <subject>Bouncing simulation; physics algorithm; physics animation; real time animation; animation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This study investigates the use of physics formulas to achieve plausible bouncing simulation in animation. Physics-based animation is needed to produce visually believable animations that adhere to the basic laws of physics. Based on the review, creating accurately timed bouncing simulations is significantly difficult, particularly when setting keyframes: setting the value of a keyframe is unambiguous, while specifying the timing of keyframes is a harder and often time-consuming task. A case study of bouncing-ball simulation was carried out in this research, and the variables of mass, velocity, acceleration, force, and gravity were taken into consideration in the motion. The bouncing dynamic is a significant subject of study in animation; it is often used and illustrates many different aspects of animation, such as a falling object, walking, running, hopping, and juggling. Therefore, a physical framework based on numerical simulation is proposed in this study, so that real-time animation can be addressed for controlling the motion of a bouncing dynamic object. Physics-based animation algorithms give the animator the ability to control the realism of the animation without setting keyframes manually, providing an extra layer of visually convincing simulation.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_52-Physically_Based_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dissemination and Implementation of THK-ANEKA and SAW-Based Stake Model Evaluation Website</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110951</link>
        <id>10.14569/IJACSA.2020.0110951</id>
        <doi>10.14569/IJACSA.2020.0110951</doi>
        <lastModDate>2020-09-30T13:08:24.7100000+00:00</lastModDate>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <creator>I Putu Wisna Ariawan</creator>
        
        <creator>Agus Adiarta</creator>
        
        <subject>Evaluation website; stake model; THK; ANEKA; SAW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The purpose of this study was to provide information about the dissemination and implementation of the THK-ANEKA and SAW-based Stake model evaluation website at Vocational Schools of IT in Bali. THK is an acronym for Tri Hita Karana. ANEKA is an acronym for Akuntabilitas, Nasionalisme, Etika publik, Komitmen mutu, dan Anti korupsi (in Indonesian), or Accountability, Nationalism, Public ethics, Quality commitment, and Anti-corruption (in English). SAW is an acronym for Simple Additive Weighting. This study used a development approach with the Borg and Gall model, which consists of 10 development stages. Research in 2020 focused on the dissemination and implementation stages. The research was conducted at several Vocational Schools of IT in Bali Province. The subjects involved in assessing the website implementation were 110 respondents, and the assessment tool was a questionnaire. The analysis technique interpreted the effectiveness level of dissemination and implementation with reference to the eleven-scale effectiveness standard. The research results showed that the dissemination and implementation of the THK-ANEKA and SAW-based Stake model evaluation website at Vocational Schools of IT in Bali had gone well. This was evident from the documentary evidence of the dissemination activities, the 88.973% effectiveness of the website implementation, and the accurate simulation results of the SAW method. It showed the evaluation aspects that support the realization of positive morals and students&#39; learning quality.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_51-Dissemination_and_Implementation_of_THK_ANEKA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Requirements Quality and Requirements Volatility on the Success of Information Systems Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110950</link>
        <id>10.14569/IJACSA.2020.0110950</id>
        <doi>10.14569/IJACSA.2020.0110950</doi>
        <lastModDate>2020-09-30T13:08:24.6930000+00:00</lastModDate>
        
        <creator>Eman Osama</creator>
        
        <creator>Ayman Khedr</creator>
        
        <creator>Mohamed Abdelsalam</creator>
        
        <subject>Requirement engineering; Software Requirements Specification (SRS); requirements quality; requirements volatility; project success factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This study aims to identify the effect of poorly written requirements specifications in software development, and of their continuous change, on the success of information systems projects, and their influence on time and cost overruns, based on empirical understanding in practice. As the world moves towards the Internet of Things, and due to the dramatic increase in demand for complex information systems projects, the development of information systems has become more difficult and handling customers&#39; requirements has become very challenging. This research follows a conclusive design: a descriptive research design was first used to reveal the characteristics of a good requirement, and then a quantitative method was applied by conducting a questionnaire distributed to more than 400 participants in the software industry in Egypt, to understand the relationships between variables and how to improve data quality based on real-world observations and experiments. The collected data were analyzed using Python and R. The results indicate that organizations with the highest requirements quality and lower requirements volatility have higher software success rates in terms of project efficiency as well as business and direct organizational success, while requirements volume does not have a significant effect on success rates. From this analysis, we developed an initial model.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_50-The_Effect_of_Requirements_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Predictive Model for the Determination of Academic Performance in Private Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110949</link>
        <id>10.14569/IJACSA.2020.0110949</id>
        <doi>10.14569/IJACSA.2020.0110949</doi>
        <lastModDate>2020-09-30T13:08:24.6630000+00:00</lastModDate>
        
        <creator>Francis Makombe</creator>
        
        <creator>Manoj Lall</creator>
        
        <subject>Classification modelling; data mining; higher education institutions; accuracy; academic performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The growth and development of predictive models has driven considerable change. Today, predictive modelling of academic performance has transformed more than a few institutions by improving their students&#39; academic performance. This paper presents a computational predictive model using artificial neural networks to predict whether a student will pass or fail. The model is unique in the current literature, as it is specifically designed to evaluate the effectiveness of predictive strategies on neural networks as well as on five additional algorithms. Analysis of the experimental results shows that artificial neural networks outperformed the XGBoost, Linear Regression, Support Vector Machine, Naive Bayes, and Random Forest algorithms for academic performance prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_49-A_Predictive_Model_for_the_Determination_of_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cluster-Based Mitigation Strategy Against Security Attacks in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110948</link>
        <id>10.14569/IJACSA.2020.0110948</id>
        <doi>10.14569/IJACSA.2020.0110948</doi>
        <lastModDate>2020-09-30T13:08:24.6470000+00:00</lastModDate>
        
        <creator>Jahangir Khan</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Babar Nawaz</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <creator>Mehmood ul Hassan</creator>
        
        <subject>Wireless sensor network; security attacks; security issues; clusters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Wireless Sensor Network (WSN) applications range across distinct domains, including real-time event detection. WSNs can be deployed with not only mobile nodes but also static sensor nodes (SNs) for various applications, including health care, smart parking, and environmental monitoring. Sensor nodes in a WSN are constrained in terms of the energy content of each node, and because they are accessible to other nodes over a wireless medium, they are susceptible to various categories of attacks. One such attack, caused by a malicious attacker, can result in a decay in the lifetime of the network, and an adverse scenario can even lead to congestion in the entire network. This paper presents an overview of various attacks and their consequences at different layers, and evaluates the defense strategies used to mitigate the various categories of attacks on wireless sensor networks. This study proposes a cluster-based approach in which the energy-constrained nodes of the network organize and perform network duties according to network performance; for this, one node performs the role of cluster head (CH), elected on the basis of the &quot;Reputation&quot; of a node, an indicator of the node&#39;s individual behavior in the network, and its &quot;Net_Credit_Score&quot;, which determines the cooperating behavior of the sensor node in the cluster. Further, this study highlights a few parameters which can be implemented to further enhance the defense strategy, taking into account factors such as the cluster count, the stability of both the cluster and the cluster head, and the intra-cluster topology, which can be crucial. This results in a road map for designing a secure, reputation-based system for WSNs that can resist various security-related attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_48-A_Cluster_Based_Mitigation_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Analysis on Human Aggressiveness and Reactions towards Uncertain Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110947</link>
        <id>10.14569/IJACSA.2020.0110947</id>
        <doi>10.14569/IJACSA.2020.0110947</doi>
        <lastModDate>2020-09-30T13:08:24.6300000+00:00</lastModDate>
        
        <creator>Sohaib Latif</creator>
        
        <creator>Abdul Kadir Abdullahi Hasan</creator>
        
        <creator>Abdaziz Omar Hassan</creator>
        
        <subject>Opinion mining; Na&#239;ve Bayes; linear regression; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Tweet data can be processed into useful information. Social media sites like Twitter, Facebook, and Google+ are rapidly growing in popularity. These sites provide a platform for people to share and express their views about daily life, discuss particular topics, hold discussions with different communities, or connect with the globe by posting messages. Tweets posted on Twitter express opinions. These opinions can be used for different purposes, such as gauging public views on uncertain decisions such as the Muslim ban in America, the war in Syria, or American soldiers in Afghanistan. Such decisions have a direct impact on users&#39; lives, with violations and aggressiveness as common consequences. For this purpose, we collect opinions from Twitter on some prominent decisions taken in the past decade. We divide the sentiments into two classes: anger (hatred) and positive. We propose a hypothesis model for such data which can be used in future work. We use Support Vector Machine (SVM), Naive Bayes (NB), and Logistic Regression (LR) classifiers for the text classification task. Furthermore, we compare the SVM results with those of NB and LR. This research will help predict the early behaviors and reactions of people before the major consequences of such decisions.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_47-Machine_Learning_based_Analysis_on_Human_Aggressiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Critical Success Factors of Financial ERP System in Higher Education Institution using ADVIAN&#174; Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110946</link>
        <id>10.14569/IJACSA.2020.0110946</id>
        <doi>10.14569/IJACSA.2020.0110946</doi>
        <lastModDate>2020-09-30T13:08:24.6000000+00:00</lastModDate>
        
        <creator>Ayogeboh Epizitone</creator>
        
        <creator>Oludayo. O. Olugbara</creator>
        
        <subject>Cross impact; enterprise resource; impact analysis; resource planning; success factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Enterprise resource planning (ERP) has been widely accepted by many organizations as an information technology process to seamlessly integrate, manage, and boost performance across the different units of an organization. However, there lingers a gap in the success and satisfaction rates of ERP system implementations that has limited the effective use of such systems. Moreover, the critical success factors (CSFs) of ERP system implementation have not been investigated in the literature for the case of financial functions in higher education institutions. This paper, through the application of the advanced impact analysis (ADVIAN&#174;) method, explores the CSFs of an ERP system supporting financial functions in a higher education institution. The applied ADVIAN&#174; method highlights the CSFs, which are assessed according to measures of criticality, integration, and stability. Furthermore, using precarious, driving, and driven measures to rank the factors, an effective model of CSFs for financial ERP system implementation is attained. The study findings provide a comprehensive methodological scheme that can be used as a reference guide and orientation point for the efficacious planning, implementation, and use of ERP systems to support financial functions in higher education institutions.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_46-Identifying_Critical_Success_Factors_of_Financial_ERP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-Based Phishing Attack Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110945</link>
        <id>10.14569/IJACSA.2020.0110945</id>
        <doi>10.14569/IJACSA.2020.0110945</doi>
        <lastModDate>2020-09-30T13:08:24.5830000+00:00</lastModDate>
        
        <creator>Sohrab Hossain</creator>
        
        <creator>Dhiman Sarma</creator>
        
        <creator>Rana Joyti Chakma</creator>
        
        <subject>Phishing attack; phishing attack detection; phishing website detection; machine learning; random forest classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This paper explores machine learning techniques and evaluates their performance when trained against datasets consisting of features that can differentiate between a phishing website and a safe one. The capability of telling these sites apart is vital in modern-day internet use. As more and more of our resources shift online, a single vulnerability and a leak of sensitive information could bring everything down in a connected network. This paper&#39;s objective is to highlight the best technique for identifying one of the most commonly occurring cyberattacks, allowing faster identification and blacklisting of such sites and therefore leading to a safer and more secure web-surfing experience for everyone. To achieve this, we describe each of the techniques under consideration in detail and use different evaluation techniques to portray their performance visually. After pitting all of these techniques against each other, we conclude that the Random Forest classifier does indeed work best for phishing website detection.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_45-Machine_Learning_Based_Phishing_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Clinical Decision Support Systems-Based Neonatal Monitoring System Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110944</link>
        <id>10.14569/IJACSA.2020.0110944</id>
        <doi>10.14569/IJACSA.2020.0110944</doi>
        <lastModDate>2020-09-30T13:08:24.5700000+00:00</lastModDate>
        
        <creator>Sobowale A. A</creator>
        
        <creator>Olaniyan O. M</creator>
        
        <creator>Adetan. O</creator>
        
        <creator>Adanigbo. O</creator>
        
        <creator>Esan. A</creator>
        
        <creator>Olusesi. A.T</creator>
        
        <creator>Wahab. W.B</creator>
        
        <creator>Adewumi. O. A</creator>
        
        <subject>Clinical Decision Supports Systems (CDSS); Fuzzy Inference System (FIS); Neonatal Intensive Care Unit (NICU); vital signs; neonates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>A Clinical Decision Support-based information system to monitor the vital signs of prematurely born babies placed in the infant incubators of a Neonatal Intensive Care Unit (NICU) is developed in this work. A DMS was developed consisting of a supervisory microcomputer and sensitive sensors for measuring the vital signs. The Conventional Monitoring System (CMS) was used simultaneously with the DMS to collect the vital-sign readings of thirty (30) neonates over a period of one week. A Fuzzy Inference System CDSS (FIS-CDSS) was developed for the three inputs Temperature, Heart rate, and Respiration rate (THR), based on their membership function values (low, medium, high) and twenty-seven (27) IF-THEN fuzzy rules, using the fuzzy logic toolbox in Matrix Laboratory 8.1 (R2014a). The FIS-CDSS maps the THR to an output status (Normal, Abnormal, or Critical). The performance of the FIS-CDSS was evaluated using a confusion matrix. The results showed that the system yielded sensitivity ranges of 90-100, 80-89, 70-79, 60-69, and 50-59% for five, eleven, seven, six, and one neonates, respectively, with an average sensitivity of 77.92%. The specificity of the system ranged from 5.00 to 66.67%, with an associated average specificity of 35.10%. The accuracy of the FIS-CDSS ranged from 70 to 100, 60 to 69, 50 to 59, and 0 to 49% for nine, nine, eight, and four neonates, respectively, with an average accuracy of 60.94%. The developed system provides adequate and accurate information for the on-the-spot assessment of neonates, supporting decision making that improves the mortality rate and recovery period of neonates.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_44-Implementation_of_a_Clinical_Decision_Support_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Configurable Current-Mirror Technique for Parallel RGB Light-Emitting Diodes (LEDs) Strings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110943</link>
        <id>10.14569/IJACSA.2020.0110943</id>
        <doi>10.14569/IJACSA.2020.0110943</doi>
        <lastModDate>2020-09-30T13:08:24.5370000+00:00</lastModDate>
        
        <creator>Shaheer Shaida Durrani</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Muhamamd Shahzad</creator>
        
        <creator>Rehan Ali Khan</creator>
        
        <creator>Abu Zaharin Ahmad</creator>
        
        <creator>Ahmed Ali Shah</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <subject>Current-balancing; LED driver; current mirror</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Traditional current-mirror circuits require a buck converter to deal with one fixed current load. This paper deals with improved self-adjustable current-mirror methods that can address different LED loads under different conditions with the help of a single buck converter. The operating principle revolves around a dynamic, self-configurable combinational circuit of transistor- and op-amp-based current-balancing circuits, along with their op-amp-based dimming circuits. The proposed circuit guarantees uniformity in the outputs. This scheme of current-balancing circuits eliminates the need for a separate power supply to control the load currents through different kinds of LEDs, i.e., RGB LEDs. The proposed methods are identical and modular, and can be scaled to any number of parallel current sources. The methodology has been successfully tested in the Simulink environment to verify the current balancing of parallel LED strings.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_43-Self_Configurable_Current_Mirror_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using of Redundant Signed-Digit Numeral System for Accelerating and Improving the Accuracy of Computer Floating-Point Calculations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110942</link>
        <id>10.14569/IJACSA.2020.0110942</id>
        <doi>10.14569/IJACSA.2020.0110942</doi>
        <lastModDate>2020-09-30T13:08:24.5230000+00:00</lastModDate>
        
        <creator>Otsokov Sh. A</creator>
        
        <creator>Magomedov Sh.G</creator>
        
        <subject>High-precision computation; redundant signed-digit numeral system; signed-digit floating-point format; redundant signed-digit arithmetic; decimal fractions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The article proposes a method for software implementation of floating-point computations on a graphics processing unit (GPU) with an increased accuracy, which eliminates sharp increase in rounding errors when performing arithmetic operations of addition, subtraction or multiplication with numbers that are significantly different from each other in magnitude. The method is based on the representation of floating-point numbers in the form of decimal fractions that have uniform distribution within a range and the use of redundant signed-digit numeral system to speed up calculations. The results of computational experiments for evaluating the effectiveness of the proposed approach are presented. The effect of accelerating computations is obtained for the problems of calculating the sum of an array of numbers and determining the dot product of vectors. The proposed approach is also applicable to the discrete Fourier transform.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_42-Using_of_Redundant_Signed_Digit_Numeral_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Bidirectional LSTM based Detection of Prolongation and Repetition in Stuttered Speech using Weighted MFCC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110941</link>
        <id>10.14569/IJACSA.2020.0110941</id>
        <doi>10.14569/IJACSA.2020.0110941</doi>
        <lastModDate>2020-09-30T13:08:24.5070000+00:00</lastModDate>
        
        <creator>Sakshi Gupta</creator>
        
        <creator>Ravi S. Shukla</creator>
        
        <creator>Rajesh K. Shukla</creator>
        
        <creator>Rajesh Verma</creator>
        
        <subject>Speech; stuttering; deep learning; WMFCC; Bi-LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Stuttering is a neurodevelopmental disorder in which the normal flow of speech is disrupted. Traditionally, Speech-Language Pathologists assessed the extent of stuttering by counting speech disfluencies manually. Such stuttering assessments are arbitrary, inconsistent, lengthy, and error-prone. The present study focuses on the objective assessment of speech disfluencies such as prolongation and syllable, word, and phrase repetition. The proposed method is based on the Weighted Mel Frequency Cepstral Coefficient feature extraction algorithm and a deep-learning Bidirectional Long Short-Term Memory neural network for the classification of stuttered events. The work utilized the UCLASS stuttering dataset for analysis. The speech samples of the database are initially pre-processed, manually segmented, and labeled by type of disfluency. The labeled speech samples are parameterized into Weighted MFCC feature vectors. The extracted features are then input to the Bidirectional-LSTM network for training and testing of the model. The effect of different hyper-parameters on classification results is examined. The test results show that the proposed method reaches a best accuracy of 96.67%, compared to the LSTM model. Promising recognition accuracies of 97.33%, 98.67%, 97.5%, 97.19%, and 97.67% were achieved for the detection of fluent speech, prolongation, and syllable, word, and phrase repetition, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_41-Deep_Learning_Bidirectional_LSTM_based_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electricity Cost Prediction using Autoregressive Integrated Moving Average (ARIMA) in Korea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110940</link>
        <id>10.14569/IJACSA.2020.0110940</id>
        <doi>10.14569/IJACSA.2020.0110940</doi>
        <lastModDate>2020-09-30T13:08:24.4770000+00:00</lastModDate>
        
        <creator>Safdar Ali</creator>
        
        <creator>Do-Hyeun Kim</creator>
        
        <subject>Electricity price; electricity cost; Autoregressive Integrated Moving Average (ARIMA); prediction; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Electricity cost plays a vital role due to the immense increase in power utilization, rising energy rates, and concerns about environmental variations and impact, all of which ultimately affect electricity cost. We argue that electrical power utilization data become more beneficial when presented to customers along with predictions of power consumption, energy prices, and expected electricity cost. This will assist residents in altering their power utilization behavior, and will thus have a positive influence on electricity production companies, the distribution network, and the electricity grid. In this study, we present residential power cost prediction for Korean apartments by applying the Autoregressive Integrated Moving Average (ARIMA) technique. We investigated energy utilization data on a daily, weekly, and monthly basis. The data accumulated from daily, weekly, and monthly utilization are selected, and we then predict the maximum and average power consumption cost for each of the predicted daily, weekly, and monthly power consumption values. The power consumption and general price (General Electricity Price in Korea) data of Korea are used to analyze the efficiency of the prediction algorithm. The accuracy of the power cost prediction using the ARIMA model is verified using the absolute error.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_40-Electricity_Cost_Prediction_using_Autoregressive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GAIT based Behavioral Authentication using Hybrid Swarm based Feed Forward Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110939</link>
        <id>10.14569/IJACSA.2020.0110939</id>
        <doi>10.14569/IJACSA.2020.0110939</doi>
        <lastModDate>2020-09-30T13:08:24.4600000+00:00</lastModDate>
        
        <creator>Gogineni Krishna Chaitanya</creator>
        
        <creator>Krovi Raja Sekhar</creator>
        
        <subject>Gait behavioral pattern recognition; feedforward neural network; krill herd algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Authenticating appropriate users for access to personal devices is one of the prime themes in security models. Illegal access to gadgets such as smartphones and laptops comes with unwanted consequences, such as data theft, privacy breaches, and more. Straightforward approaches such as pattern-, password-, and PIN-based security are quite expensive in terms of memory: the user has to remember the passwords, and whenever a security issue arises the password has to be changed and the new one remembered again. To avoid these issues, this paper proposes an effective gait-based model that hybridizes an Artificial Neural Network model, namely the Feedforward Neural Network (FNN), with a swarm-based algorithm, namely the Krill Herd (KH) optimization algorithm. The task of KH is to optimize the weights of the FNN, leading to convergence to an optimal solution at the end of the run. The proposed model is examined with six different performance measures and compared with four existing classification models. The performance analysis shows the significance of the proposed model when compared with the existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_39-GAIT_based_Behavioral_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Real Time Data Acquisition System using Reconfigurable SoC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110938</link>
        <id>10.14569/IJACSA.2020.0110938</id>
        <doi>10.14569/IJACSA.2020.0110938</doi>
        <lastModDate>2020-09-30T13:08:24.4430000+00:00</lastModDate>
        
        <creator>Dharmavaram Asha Devi</creator>
        
        <creator>Tirumala Satya Savithri</creator>
        
        <creator>Sai Sugun.L</creator>
        
        <subject>Application Specific Integrated Circuit (ASIC); Advanced eXtensible Interface (AXI); data acquisition system; Field Programmable Gate Array (FPGA); Peripheral Module (PMOD); System on Chip (SoC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>System on chip (SoC) technology is widely used for high-speed, efficient embedded systems in various computing applications. Until now, Application Specific IC (ASIC) based systems on chip have generally been used for this purpose, at the cost of substantial research and development time and money. However, this is not a comfortable choice for low- and medium-capacity industries, because with ASIC or standard IC design implementation it is very difficult to meet requirements for quick time to market, upgradability, and flexibility. A better solution to this problem is to design with reconfigurable SoCs: FPGAs can replace ASICs, providing a more flexible and reconfigurable platform. In the embedded world, accessing and controlling data are two important tasks in many applications. There are several ways of acquiring data, and corresponding data acquisition systems are available in the market. For defence, avionics, aerospace, and automobile applications, high-performance and accurate data acquisition systems are desirable. Therefore, the proposed work discusses how a high-performance, reconfigurable SoC based data acquisition system is designed and implemented. It is a semi-custom design implemented with the Zynq processing system IP, a reconfigurable 7-series FPGA used as programmable logic, and hygro, ambient light sensor, and OLEDrgb peripheral module IPs. All these sensor and display peripheral modules are interfaced with the processing unit via AXI-interconnect. The proposed system is a reconfigurable SoC meant for high-speed data acquisition with an operating frequency of 100 MHz. Such a system is perfectly suitable for high-speed, economical real-time embedded systems.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_38-Design_and_Implementation_of_Real_Time_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed User Requirements Document for Children’s Learning Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110937</link>
        <id>10.14569/IJACSA.2020.0110937</id>
        <doi>10.14569/IJACSA.2020.0110937</doi>
        <lastModDate>2020-09-30T13:08:24.4130000+00:00</lastModDate>
        
        <creator>Mira Kania Sabariah</creator>
        
        <creator>Paulus Insap Santosa</creator>
        
        <creator>Ridi Ferdiana</creator>
        
        <subject>User requirements; user requirements document; learning application; aspect of pedagogy children</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>User requirements are the highest level of requirements. A flawed user requirements document can cause defects in the software being built: aspects of the application that are not presented in the user requirements document lead to defects. In learning applications for children, there are aspects of pedagogy that need to be well documented. These aspects are not covered in a general user requirements document, so they are often not well presented. Learning style and thinking skills level are crucial to present well in the user requirements document, because children&#39;s personas cannot be compared across the different ranges of developmental age. This factor will undoubtedly affect the specifications of the software to be built. Differing user viewpoints on requirements can also lead developers to determine requirements incorrectly. Applying requirements prioritization in the user requirements document can help resolve this problem. Document quality was also measured using established parameters for the quality of a user requirements document. The results of measuring the quality of the user requirements document found that it is reliable for use.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_37-A_Proposed_user_Requirements_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Cluster-Based Approach to Thwart Wormhole Attack in Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110936</link>
        <id>10.14569/IJACSA.2020.0110936</id>
        <doi>10.14569/IJACSA.2020.0110936</doi>
        <lastModDate>2020-09-30T13:08:24.3970000+00:00</lastModDate>
        
        <creator>Kollu Spurthi</creator>
        
        <creator>T N Shankar</creator>
        
        <subject>MANET; LGF; K++ Means clustering; network security; wormhole attack; blackhole attack; secured algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Mobile ad-hoc networks are a growing domain with promising advancements, attracting researchers with scope for enhancements and evolution. These networks lack a definite structure and are autonomous and dynamic in nature. The strength of an ad-hoc network lies in its routing protocols, making it an apt choice for transmission. Among the several types of routing protocols available, our focus is on the LGF (Location-based Geo-casting and Forwarding) protocol, which falls in the position-based category. LGF grabs attention with its low bandwidth consumption and routing overhead, but at the cost of exposure to attacks that compromise the security of data. In our approach, we present a technique to overcome severe attacks such as wormhole and blackhole by combining LGF with K++ Means clustering, aiming at route optimization and improved security services. The proposed mechanism is evaluated against QoS factors such as end-to-end delay, delivery ratio, and load balancing of LGF using the NS3.2 simulator, which showed drastic performance acceleration in the aforementioned model.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_36-An_Efficient_Cluster_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Estrus Detection for Dairy Cattle through Neural Networks and Bounding Box Corner Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110935</link>
        <id>10.14569/IJACSA.2020.0110935</id>
        <doi>10.14569/IJACSA.2020.0110935</doi>
        <lastModDate>2020-09-30T13:08:24.3670000+00:00</lastModDate>
        
        <creator>Nilo M. Arago</creator>
        
        <creator>Chris I. Alvarez</creator>
        
        <creator>Angelita G. Mabale</creator>
        
        <creator>Charl G. Legista</creator>
        
        <creator>Nicole E. Repiso</creator>
        
        <creator>Rodney Rafael A. Robles</creator>
        
        <creator>Timothy M. Amado</creator>
        
        <creator>Romeo Jr. L. Jorda</creator>
        
        <creator>August C. Thio-ac</creator>
        
        <creator>Jessica S. Velasco</creator>
        
        <creator>Lean Karlo S. Tolentino</creator>
        
        <subject>Dairy cows; estrus detection; image processing; TensorFlow Object Detection API; custom neural network; object overlapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Thorough and precise estrus detection plays a crucial role in the fertility of dairy cows. Farmers commonly use direct visual monitoring to recognize estrus signs, which demands time and effort and causes misinterpretations. The primary sign of estrus is standing heat, where a dairy cow stands to be mounted by other cows for a few seconds. Through the years, researchers have developed various detection methods, yet most of these involve contact-based and invasive approaches that affect the estrus behaviors of cows. The proponents therefore developed a non-invasive, non-contact estrus detection system using image processing to detect standing heat behaviors. Through the TensorFlow Object Detection API, the proponents trained two custom neural network models capable of visualizing bounding boxes of the predicted cow objects on image frames. The proponents also developed an object overlapping algorithm that utilizes the bounding box corners to detect estrus activities. Based on the conducted tests, an estrus event occurs when the centroids of the detected objects measure a distance of less than 360 px and have two interior angles with another fixed point of less than 25&#176; and greater than 65&#176; for the Y and X axes, respectively. If the conditions are met, the program saves the image frame and declares an estrus activity; otherwise, it restarts its estrus detection and counting. The system observed 17 cows, a carabao, and a bull through cameras installed atop a cowshed, and detected estrus events with an efficiency of 50%.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_35-Automated_Estrus_Detection_for_Dairy_Cattle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Susceptible, Infectious and Recovered (SIR Model) Predictive Model to Understand the Key Factors of COVID-19 Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110934</link>
        <id>10.14569/IJACSA.2020.0110934</id>
        <doi>10.14569/IJACSA.2020.0110934</doi>
        <lastModDate>2020-09-30T13:08:24.3500000+00:00</lastModDate>
        
        <creator>DeepaRani Gopagoni</creator>
        
        <creator>P V Lakshmi</creator>
        
        <subject>COVID-19; SIR modeling; WHO; disease spread</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>On 31 December 2019, WHO was alerted to several cases of pneumonia in Wuhan City, Hubei Province of China. The virus did not match any other known virus. This raised concern because when a virus is new, its general behavior and how it affects people are unknown. The initial few cases reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly had no exposure to animal markets, indicating that person-to-person spread is occurring. At this time, it is unclear how easily or sustainably this virus spreads between people. At any given time during an epidemic, one should know, first, the number of people who are currently infected and, second, the number who have been infected and have recovered, because the latter now have immunity to the disease. The well-established SIR modeling methodology is used to develop a predictive model in order to understand the key factors that impact COVID-19 transmission.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_34-Susceptible_Infectious_and_Recovered_SIR_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Recommender Systems for Choosing Elective Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110933</link>
        <id>10.14569/IJACSA.2020.0110933</id>
        <doi>10.14569/IJACSA.2020.0110933</doi>
        <lastModDate>2020-09-30T13:08:24.3330000+00:00</lastModDate>
        
        <creator>Mfowabo Maphosa</creator>
        
        <creator>Wesley Doorsamy</creator>
        
        <creator>Babu Paul</creator>
        
        <subject>Recommender systems; elective courses; data mining algorithms; systematic literature review; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>In higher education, students face challenges when choosing elective courses in their study programmes. Most higher education institutions employ advisors to assist with this task. Recommender systems have their origins in commerce and are used in other sectors such as education, where they offer an alternative to human advisors. This paper aims to examine the scope of recommender systems that assist students in choosing elective courses. To achieve this, a systematic literature review (SLR) of the recommender systems literature on choosing elective courses published from 2010 to 2019 was conducted. Of the 16 981 research articles initially identified, only 24 addressed recommender systems for choosing elective courses and were included in the final analysis. These articles show that several recommender system approaches and data mining algorithms are used to recommend elective courses. This study identified gaps in current research on the use of recommender systems for choosing elective courses; further work in several unexplored areas could enhance their effectiveness. This study contributes to the body of literature on recommender systems, in particular those applied to assisting students in choosing elective courses within higher education.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_33-A_Review_of_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Image Retrieval by Using Texture Color Descriptor with Novel Local Textural Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110932</link>
        <id>10.14569/IJACSA.2020.0110932</id>
        <doi>10.14569/IJACSA.2020.0110932</doi>
        <lastModDate>2020-09-30T13:08:24.3200000+00:00</lastModDate>
        
        <creator>Punit Kumar Johari</creator>
        
        <creator>Rajendra Kumar Gupta</creator>
        
        <subject>Image retrieval; Binary pattern; feature extraction; Median Binary Pattern for Color (MBPC) image; Median Binary Pattern for Hue (MBPH)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This paper proposes new local color-texture descriptors known as the Median Binary Pattern for Color images (MBPC) and the Median Binary Pattern for Hue (MBPH). These methods extract discriminative features for color image retrieval. In the surrounding region of a local window, the proposed descriptor uses a plane as a threshold to distinguish two classes of color pixels. The median binary patterns of the hue features, called MBPH, are derived in the HSI color space to maximize the discriminatory power of the proposed MBPC operator. MBPC and MBPH are fused into MBPC+MBPH, resulting in an efficient image retrieval method when combined with a color histogram (CH). The two proposed MBPC and MBPH descriptors are further combined with a fuzzified color histogram descriptor to form MBPC+MBPH+FCH, improving the performance of the proposed method. The proposed methods are applied to the Wang, Corel-5K, and Corel-10K datasets. Experimental results show that the proposed methods outperform existing methods in retrieval accuracy, achieving recognition accuracies of 60.1 and 63.9 on the Wang dataset, 41.88 and 42.47 on Corel-5K, and 32.89 and 33.89 on Corel-10K. The proposed hybrid method handles different textural patterns well and is able to capture minute color details.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_32-An_Improved_Image_Retrieval_by_using_Texture_Color.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of Speed and Altitude on Wireless Air Pollution Measurements using Hexacopter Drone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110931</link>
        <id>10.14569/IJACSA.2020.0110931</id>
        <doi>10.14569/IJACSA.2020.0110931</doi>
        <lastModDate>2020-09-30T13:08:24.2870000+00:00</lastModDate>
        
        <creator>Rami Noori</creator>
        
        <creator>Dahlila Putri Dahnil</creator>
        
        <subject>Unmanned Aerial Vehicles (UAVs); low-cost sensors; air pollution; LoRa; air quality; radio transmitter; atmosphere</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Air pollution has a severe impact on human beings and is one of the top risks facing human health. Data collection near pollution sources is difficult due to obstacles in industrial and rural areas, where ground sensing usually fails to give enough information about air quality. Unmanned Aerial Vehicles (UAVs) equipped with different sensors offer new approaches and opportunities for air pollution and atmospheric studies. Nevertheless, new challenges have emerged with the use of UAVs, in particular the effect of the wind generated by UAV propeller rotation on the ability to sense and measure gas concentrations. Gas measurement results are affected by propeller rotation and wind resistance; thus, the effect of changing UAV speed and altitude on gas measurements, both vertically and horizontally, needs to be studied. The aim of this paper is to propose a new mobile wireless air pollution system composed of a UAV equipped with low-cost sensors using LoRa transmission. The proposed system is evaluated by studying the effect of changing altitude and speed on the measured concentrations of CO, LPG, H2, and smoke when flying in horizontal and vertical directions. The results showed that the system is capable of measuring CO, LPG, H2, and smoke in vertical mode in both hovering and deployment scenarios. In horizontal mode, the system can detect and measure gas concentrations at speeds less than or equal to 6 m/s, while at higher speeds of 8 and 10 m/s its performance and accuracy in detecting the targeted gases are degraded. The results also showed that the LoRa shield and the AT9S radio transmitter can successfully transmit up to 800 m horizontally and 400 feet vertically.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_31-The_Effects_of_Speed_and_Altitude.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Tool for Generation and Analysis of Multidimensional Modeling on Data Warehouse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110930</link>
        <id>10.14569/IJACSA.2020.0110930</id>
        <doi>10.14569/IJACSA.2020.0110930</doi>
        <lastModDate>2020-09-30T13:08:24.2570000+00:00</lastModDate>
        
        <creator>Elena Fabiola Ruiz Ledesma</creator>
        
        <creator>Elizabeth Moreno Galv&#225;n</creator>
        
        <creator>Enrique Alfonso Carmona Garc&#237;a</creator>
        
        <creator>Laura Ivoone Garay Jim&#233;nez</creator>
        
        <subject>Educational data mining; data cube; view materialization; educational software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The curricular inclusion of topics, study plans, and teaching programs related to Data Science has been trending, mostly in higher education, in recent years. However, the prior knowledge required for students to adequately assimilate these lessons is more specialised than what they obtain during secondary education. On the one hand, interaction with complex techniques and materials is needed; on the other, tools for on-demand practice are required in current learning. This is therefore an excellent opportunity for the creation of data analysis tools for educational purposes that could serve as a starting point for a broad area of application. This paper presents a pedagogical support tool aimed at facilitating students&#39; approach to the basics of data mining through practice with online analytical processing (OLAP). It is a prototype that allows the visualisation of the multidimensional cubes generated from all possible combinations of the dimensions of the data set, as well as their storage in databases, the recovery operations for views, and the implementation of an algorithm for selecting the optimal view set for materialising the set of records resulting from a database search, computing the materialisation costs and the total records recovered. The prototype also extracts and presents recurrent patterns and association rules, considering factors such as support and reliability. All of these steps are done explicitly to help students comprehend the generation process of data cubes in the data mining discipline.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_30-Educational_Tool_for_Generation_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autism Spectrum Disorder Diagnosis using Optimal Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110929</link>
        <id>10.14569/IJACSA.2020.0110929</id>
        <doi>10.14569/IJACSA.2020.0110929</doi>
        <lastModDate>2020-09-30T13:08:24.2400000+00:00</lastModDate>
        
        <creator>Maitha Rashid Alteneiji</creator>
        
        <creator>Layla Mohammed Alqaydi</creator>
        
        <creator>Muhammad Usman Tariq</creator>
        
        <subject>Autism diagnosis; autism disorder; autism detection; machine learning; ASD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Autism spectrum disorder (ASD) is a disorder of communication and behavior that affects children and adults. It can be diagnosed at any stage of life, but most importantly within the first two years, regardless of ethnicity, race, or economic group. There are different variations of ASD according to the severity and type of symptoms experienced. It is a lifelong disorder, but treatment and services can improve the symptoms. The literature focuses on one of the main methods used by physicians to diagnose ASD. Many research studies and medical reports have been reviewed; however, only a few of them give good medical results for strongly differentiating ASD from healthy people. This paper focuses on using machine learning algorithms to predict whether an individual has specific ASD symptoms and on finding the best machine learning model for diagnosis. Further, the paper aims to make autism diagnosis faster so that the required treatment can be delivered at an early stage of child development.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_29-Autism_Spectrum_Disorder_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finger Movement Discrimination of EMG Signals Towards Improved Prosthetic Control using TFD</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110928</link>
        <id>10.14569/IJACSA.2020.0110928</id>
        <doi>10.14569/IJACSA.2020.0110928</doi>
        <lastModDate>2020-09-30T13:08:24.2100000+00:00</lastModDate>
        
        <creator>E. F. Shair</creator>
        
        <creator>N.A. Jamaluddin</creator>
        
        <creator>A.R. Abdullah</creator>
        
        <subject>Electromyography; feature extraction; time-frequency distribution; spectrogram; classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>A prosthesis is an artificial substitute or replacement for a missing part of the body. It can restore the function of the missing body part and help disabled people carry out their activities easily. A myoelectric control system is a fundamental part of modern prostheses. In this system, electromyogram (EMG) signals taken from a person’s muscles are used to control the prosthesis movements. However, finger control has not received the same attention in myoelectric control systems, because individual and combined finger movements are more dexterous and harder to discriminate within a signal. Thus, a method based on time-frequency distribution (TFD) is proposed in this paper to address this problem. The EMG features of individual and combined finger movements for ten subjects and ten different movements are extracted using a TFD, i.e., the spectrogram. Three machine learning algorithms, Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Ensemble Classifier, are then used to classify the individual and combined finger movements based on the EMG features extracted from the spectrogram. The performance of the proposed method is verified using classification accuracy. Based on the results, the overall classification accuracies are 90% (SVM), 100% (KNN), and 100% (Ensemble Classifier), respectively. The findings of the study could serve as an insight to improve conventional prosthetic control strategies.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_28-Finger_Movement_Discrimination_of_EMG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision-Making Analysis using Arduino-Based Electroencephalography (EEG): An Exploratory Study for Marketing Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110927</link>
        <id>10.14569/IJACSA.2020.0110927</id>
        <doi>10.14569/IJACSA.2020.0110927</doi>
        <lastModDate>2020-09-30T13:08:24.1930000+00:00</lastModDate>
        
        <creator>Ahmad Faiz Yazid</creator>
        
        <creator>Siti Munirah Mohd</creator>
        
        <creator>Abdul Razzak Khan Rustum Ali Khan</creator>
        
        <creator>Shafinah Kamarudin</creator>
        
        <creator>Nurhidaya Mohamad Jan</creator>
        
        <subject>Arduino-based electroencephalography (EEG); neuromarketing; short-term memory; TV commercials; visual cognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Business technology has brought conventional marketing methods to the next level. These emerging integrated technologies have contributed to the growth and understanding of the consumer decision-making process. Several studies have attempted to evaluate media content, especially video advertising or TV commercials, using various neuroimaging techniques such as the electroencephalography (EEG) device. Currently, the use of neuroscience in Malaysia’s marketing research is very limited due to its limited adoption as an emerging technology in this field. This research uncovers the neuroscientific approach, particularly through the use of an EEG device, examining consumers’ responses in terms of brain wave signals and cognition. A proposed theoretical framework on the factors affecting visual stimulus (movement, color, shape, and light) during the decision-making process while watching video advertising was customized using two conceptual models of sensory stimulation. Ten respondents participated in the experiment to investigate the spectral changes of alpha brain wave signals detected in the occipital lobe. A 2-channel Arduino-based EEG device from Backyard Brains and the Spike Recorder software were used to analyze the EEG signal through the Fast Fourier Transform (FFT) method. Results obtained from the investigated population showed statistically significant brain wave activity during the observation of the video advertising, which demonstrated the interconnection with short-term memory through visual stimulus. Application of the neuroscience tool helped to explore consumer brain responses towards marketing stimuli with regard to consumers’ decision-making processes. This study demonstrates a useful tool for neuromarketing and concludes with a discussion, together with recommendations on the way forward.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_27-Decision_Making_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding user Emotions Through Interaction with Persuasive Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110926</link>
        <id>10.14569/IJACSA.2020.0110926</id>
        <doi>10.14569/IJACSA.2020.0110926</doi>
        <lastModDate>2020-09-30T13:08:24.1770000+00:00</lastModDate>
        
        <creator>Wan Nooraishya Wan Ahmad</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <creator>Ahmad Rizal Ahmad Rodzuan</creator>
        
        <subject>Emotion; emotional states; interaction; persuasive technology; captology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Emotions play a vital role in persuasion; thus, the use of persuasive applications should affect and appeal to users’ emotions. However, studies in persuasive technology have yet to discover what triggers users’ emotions. Therefore, the objectives of this study are to examine user emotions and to identify the factors that affect user emotions in using persuasive applications. This study was conducted in three stages (pre-interaction, during-interaction, and post-interaction) and employed a mixed-method approach using the Geneva Emotion Wheel (GEW) and open-ended survey questions analyzed with thematic analysis. The results show that most of the emotions that users felt belong to high-control positive-valence emotions, which consist of interest, joy, and pleasure. User, system, and interaction are the three factors that trigger emotions, encompassing elements such as Individual Awareness, Personality, Interface Design, Persuasive Function, Content Presentation, System Quality, Usability, and Tasks. The findings contribute to the body of knowledge of Persuasive Technology, where the discovered factors and their elements are the antecedents that should be considered in constructing an emotion-based trust design framework that could bring emotional impact to users and ensure successful persuasion.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_26-Understanding_User_Emotions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Critical Analysis of IS Governance Frameworks: A Metamodel of the Integrated Use of CobiT Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110924</link>
        <id>10.14569/IJACSA.2020.0110924</id>
        <doi>10.14569/IJACSA.2020.0110924</doi>
        <lastModDate>2020-09-30T13:08:24.1470000+00:00</lastModDate>
        
        <creator>Lamia MOUDOUBAH</creator>
        
        <creator>Abir EL YAMAMI</creator>
        
        <creator>Mansouri KHALIFA</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <subject>Information Systems Governance (ISG); IS process; business process; COBIT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Information Systems Governance (ISG) is an essential component of corporate governance. It refers to the implementation of the means of decision-making. A considerable number of studies on information systems governance have been published. Nevertheless, there is a need to conceptualize and model this theoretical context. The aim of this paper is to provide a study of the frameworks that address this domain, to model the concepts that structure it, and to offer a profound and clear understanding of the IS process; to this end, IS governance has been studied as a concept. The results demonstrate that adopting the COBIT repository in an organization could amplify its efforts. This input therefore enables the organization to capitalize on and build up knowledge in the field of IS governance, and to propose models for delivering an integrated, business-aligned IS.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_24-A_Critical_Analysis_of_IS_Governance_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritization of Software Functional Requirements from Developers Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110925</link>
        <id>10.14569/IJACSA.2020.0110925</id>
        <doi>10.14569/IJACSA.2020.0110925</doi>
        <lastModDate>2020-09-30T13:08:24.1470000+00:00</lastModDate>
        
        <creator>Muhammad Yaseen</creator>
        
        <creator>Aida Mustapha</creator>
        
        <creator>Noraini Ibrahim</creator>
        
        <subject>Requirements prioritization; Functional Requirements (FRs); directed graph; spanning tree (ST); Analytical Hierarchical Process (AHP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Prioritizing software requirements is an important and difficult task during the requirements management phase of requirements engineering. To ensure timely delivery of a project, software developers have to prioritize functional requirements, and the importance of prioritization grows with the number of requirements. Software for large enterprises, such as Enterprise Resource Planning (ERP) systems, is more likely to be developed by a team of software developers where a large set of requirements is distributed among parallel team members. However, requirements depend on each other; therefore, the development of pre-requisite requirements must be carefully timed and implemented first. Assigning importance and priority to some requirements over others is thus necessary so that requirements are available to developers on time. This paper proposes a prioritization approach for functional requirements on the basis of their importance during implementation. The research method applies the Analytical Hierarchical Process (AHP) technique based on spanning trees. Through spanning trees, dependent requirements were linked in a hierarchical structure, and AHP was then applied. As a result of prioritization, requirements were distributed in such a way that dependency among the requirements of parallel developers was kept to a minimum, reducing the time requirements spend waiting for their pre-requisites. With the reduced effect of dependency among the requirements of parallel developers, timely delivery of software projects can be assured.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_25-Prioritization_of_Software_Functional_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Implementation and Comparison of ESP8266 vs. MSP430F2618 QoS Characteristics for Embedded and IoT Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110923</link>
        <id>10.14569/IJACSA.2020.0110923</id>
        <doi>10.14569/IJACSA.2020.0110923</doi>
        <lastModDate>2020-09-30T13:08:24.1170000+00:00</lastModDate>
        
        <creator>Krishnaveni Kommuri</creator>
        
        <creator>Venkata Ratnam Kolluru</creator>
        
        <subject>Communication; ESP8266; gateway; Internet of Things (IoT); MSP430; power; sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This research article proposes a novel Smart Communication Platform (SCP) to improve Quality of Service (QoS) parameters in real time by using the MSP430F2618. A static network of 10 nodes has been implemented with a narrowband Internet of Things (IoT) architecture. The SCP tracks environmental parameters such as temperature, humidity, pressure, proximity, and light. A prototype has been developed using open-source Red Hat Linux 14.4 and programmed in Embedded C. The MSP430F2618 has been configured as master and slave nodes, and the output is observed on a serial monitor as well as at the gateway. The QoS parameters of the MSP430F2618 and ESP8266 are compared in terms of power. In the QoS analysis, a power consumption improvement of around 1.01 mW has been observed with the experimental setup. These empirical results are useful for wireless sensor network and IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_23-Real_Time_Implementation_and_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pseudo Amino Acid Feature-based Protein Function Prediction using Support Vector Machine and K-Nearest Neighbors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110922</link>
        <id>10.14569/IJACSA.2020.0110922</id>
        <doi>10.14569/IJACSA.2020.0110922</doi>
        <lastModDate>2020-09-30T13:08:24.0830000+00:00</lastModDate>
        
        <creator>Anjna Jayant Deen</creator>
        
        <creator>Manasi Gyanchandani</creator>
        
        <subject>Membrane protein types; classifiers; SVM (RBF); KNN; Random Forest; PseAAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Bioinformatics faces a vital challenge in protein function prediction because protein data are available only as primary structure, i.e., an amino acid sequence. Every protein sequence differs in length, size, and order. Proteins are composed of an alphabet of 20 amino acids; however, the information in a membrane protein’s primary sequence alone is insufficient to capture the function and structure of the protein. Correctly identifying protein structure and function from the amino acid sequence is therefore a challenging task. The basic principle of PseAAC (Pseudo Amino Acid Composition) is to generate a discrete numerical representation of every protein sample. Sequence length varies with protein function: some sequences are shorter than 50 residues while others are large, so samples of differing size risk losing sequence-order information. The PseAAC feature generates a fixed-size descriptor value in vector space to overcome this loss of sequence information and is used for further systematic evaluation. Machine learning computational tools can therefore accurately identify the structural and functional class of membrane proteins. In this study, SVM (Support Vector Machine) and KNN (k-nearest neighbors) based classifiers are used to identify membrane proteins and their types.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_22-Pseudo_Amino_Acid_Feature_Based_Protein_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aspect-Based Sentiment Analysis and Emotion Detection for Code-Mixed Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110921</link>
        <id>10.14569/IJACSA.2020.0110921</id>
        <doi>10.14569/IJACSA.2020.0110921</doi>
        <lastModDate>2020-09-30T13:08:24.0530000+00:00</lastModDate>
        
        <creator>Andi Suciati</creator>
        
        <creator>Indra Budi</creator>
        
        <subject>Sentiment analysis; emotion; multi-label classification; machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Reviews can affect customer decision-making because by reading them, people can tell whether a review is positive or negative. However, labels of positive, negative, and neutral alone are not enough, because emotion can strengthen the sentiment result. This study compares machine learning and deep learning for sentiment as well as emotion classification using multi-label classification. In the machine learning comparison, the problem transformations we used are Binary Relevance (BR), Classifier Chain (CC), and Label Powerset (LP), with Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Extra Tree Classifier (ET) as the machine learning algorithms. The features we compared are n-gram language models (unigram, bigram, unigram-bigram). For deep learning, the algorithms we applied are Gated Recurrent Unit (GRU) and Bidirectional Long Short-Term Memory (BiLSTM), using self-developed word embeddings. The comparison results show RF dominates with 88.4% and 89.54% F1-scores, using the CC method for the food aspect and LP for price, respectively. For the service and ambience aspects, ET leads with 92.65% and 87.1%, using the LP and CC methods, respectively. On the other hand, in the deep learning comparison, GRU and BiLSTM obtained a similar F1-score for the food aspect, 88.16%. On the price aspect, GRU leads with 83.01%. However, for service and ambience, BiLSTM achieved higher F1-scores, 89.03% and 84.78%.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_21-Aspect_Based_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grid Search Tuning of Hyperparameters in Random Forest Classifier for Customer Feedback Sentiment Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110920</link>
        <id>10.14569/IJACSA.2020.0110920</id>
        <doi>10.14569/IJACSA.2020.0110920</doi>
        <lastModDate>2020-09-30T13:08:24.0370000+00:00</lastModDate>
        
        <creator>Siji George C G</creator>
        
        <creator>B.Sumathi</creator>
        
        <subject>Classification; grid search; hyperparameters; parameter tuning; random forest classifier; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Text classification is a common task in machine learning. Random Forest, a supervised classification algorithm, has been widely used for this task. The Random Forest classifier has a group of hyperparameters that need to be tuned; if proper tuning is performed, the classifier gives better results. This paper proposes a hybrid approach combining the Random Forest classifier and the Grid Search method for customer feedback data analysis. Grid Search is applied to tune the hyperparameters of the Random Forest classifier. The Random Forest classifier is first used for customer feedback data analysis, and the result is then compared with the result obtained after applying the Grid Search method. The proposed approach provided a promising result in customer feedback data analysis. The experiments in this work show that the accuracy of the proposed model in predicting the sentiment of customer feedback data is greater than the accuracy obtained by the model without parameter tuning.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_20-Grid_Search_Tuning_of_Hyperparameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Clustering Hybrid Algorithm for Smart Datasets using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110919</link>
        <id>10.14569/IJACSA.2020.0110919</id>
        <doi>10.14569/IJACSA.2020.0110919</doi>
        <lastModDate>2020-09-30T13:08:24.0070000+00:00</lastModDate>
        
        <creator>Dar Masroof Amin</creator>
        
        <creator>Munishwar Rai</creator>
        
        <subject>Random Forests (RF); Jaccard Similarity (JS); triangle; smart data; root mean square error; mean absolute error; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>In the field of data science, Machine Learning is treated as a sub-field that primarily deals with designing algorithms that have the ability to learn from previous information and make future predictions accordingly. In the traditional computational world, Machine Learning was generally performed on high-performance servers and machines. The implementation of these concepts in Big Data analytics algorithms has high potential and is still in its early stages. So far as machine learning is concerned, the performance measure is an important parameter to evaluate the overall functionality of an algorithm. Performance is measured on unseen data, called the test set, while the model learns from the training set. Data mining extensively uses learning algorithms for data analysis and to formulate future predictions based on archived data. The research presented provides a step forward in making smart data sets out of training data sets by evaluating a machine learning algorithm. It presents a novel hybrid algorithm that attempts to incorporate the feature of similarities into the Random Forest machine learning algorithm to improve classification accuracy and working efficiency.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_19-A_Clustering_Hybrid_Algorithm_for_Smart_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Based Outsourcing Framework for Efficient IT Project Management Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110918</link>
        <id>10.14569/IJACSA.2020.0110918</id>
        <doi>10.14569/IJACSA.2020.0110918</doi>
        <lastModDate>2020-09-30T13:08:24.0070000+00:00</lastModDate>
        
        <creator>Mesfin Alemu</creator>
        
        <creator>Abel Adane</creator>
        
        <creator>Bhupesh Kumar Singh</creator>
        
        <creator>Durga Prasad Sharma</creator>
        
        <subject>Outsourcing; project management; cloud; IT industry; framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The optimum utilization of human resources is one of the crucial exercises in IT organizations. To provide a well-organized and cohesive working environment, organizations need to review their work culture with reference to newly evolved tools and techniques. To reduce the development cost of IT projects and optimally utilize human resources, organizations need to review and redesign their project development processes. The significant challenges faced by IT organizations are the rapid switch-over (attrition) of IT professionals and the physical migration, deployment, and redeployment of human resources. This research paper is an effort towards the multilateral exploration of techniques to adapt and improve ICT-enabled project management practices in an outsourced environment, with special reference to developing countries such as Ethiopia, where there is an acute shortage of highly skilled IT human resources and their physical migration from one project location to another is a costly and challenging task. Ethiopia’s IT industry is challenged by several issues, such as the capacity of the ICT infrastructure and the availability of skilled human resources. In such situations, IT projects are either challenged, impaired, or fail completely due to a lack of IT human resources with the desired skills and up-to-date IT infrastructure. In this research paper, cloud computing technology is assumed to be a key to the solution. For this, a systematic and careful investigation using a mixed data analysis approach was conducted to adopt cloud-based outsourcing in IT project management practices, i.e., design, development, and testing over outsourced systems by outsourced IT human resources. A major contribution of this paper is to investigate and analyze how these cloud-based resources can be exploited without physical movement or migration. For a novel improvement in existing IT project management practices, the salient stakeholders’ views were collected and analyzed to design a cloud-based outsourcing IT project management framework for the Ethiopian IT industry. The framework was functionally tested over the cloud-based Bitrix24 platform.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_18-Cloud_Based_Outsourcing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Comparison of Fake News Detection using different Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110917</link>
        <id>10.14569/IJACSA.2020.0110917</id>
        <doi>10.14569/IJACSA.2020.0110917</doi>
        <lastModDate>2020-09-30T13:08:23.9770000+00:00</lastModDate>
        
        <creator>Abdulaziz Albahr</creator>
        
        <creator>Marwan Albahar</creator>
        
        <subject>Fake news; classification; machine learning; performance comparison</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Relying on social networks to follow the news has its pros and cons. Social media websites indeed allow information to spread among people quickly. However, such websites might be leveraged to circulate low-quality news full of misinformation, i.e., &quot;fake news.&quot; The wide distribution of fake news has a considerable negative impact on individuals and society as a whole. Thus, detecting fake news published on various social media websites has lately become an evolving research area that is drawing great attention. Detecting widespread fake news across the numerous social media platforms presents new challenges that make currently deployed algorithms ineffective or no longer applicable. Basically, fake news is deliberately written in the first place to mislead readers into accepting false information as true, which makes it difficult to detect based on news content alone; consequently, auxiliary information, such as users&#39; social engagements on social media websites, needs to be taken into account to help make a better detection. Using such auxiliary information is challenging because users&#39; social engagements with fake news produce noisy, unstructured, and incomplete Big Data. Because fake news detection on social media is fundamental, this research examines four well-known machine learning algorithms, namely random forest, Na&#239;ve Bayes, neural networks, and decision trees, to validate the efficiency of their classification performance in detecting fake news. We conducted an experiment on a widely used public dataset, i.e., LIAR, and the results show that the Na&#239;ve Bayes classifier outperforms the other algorithms remarkably on this dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_17-An_Empirical_Comparison_of_Fake_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Method for Three Loop MMSE-SIC based Iterative MIMO Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110916</link>
        <id>10.14569/IJACSA.2020.0110916</id>
        <doi>10.14569/IJACSA.2020.0110916</doi>
        <lastModDate>2020-09-30T13:08:23.9600000+00:00</lastModDate>
        
        <creator>Zuhaibuddin Bhutto</creator>
        
        <creator>Saleem Ahmed</creator>
        
        <creator>Syed Muhammad Shehram Shah</creator>
        
        <creator>Azhar Iqbal</creator>
        
        <creator>Faraz Mehmood</creator>
        
        <creator>Imdadullah Thaheem</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <subject>MIMO; Iterative Detection and Decoding (IDD); sphere decoding; Minimum Mean Squared Errors with Soft Interference Cancellation (MMSE-SIC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Iterative decoding is one of the promising methods to improve the performance of MIMO systems. In iterative processing, the channel decoder and the MIMO detector share information in order to enhance overall system performance. However, iterative processing requires a large number of computations and is therefore considered a computationally complex approach, owing to the complex detection schemes it involves. There are several promising detection methods that require further improvement and that are candidates for the practical implementation of iterative processing. In this paper, we propose a method to improve the efficiency of the three-loop minimum mean squared error with soft interference cancellation (MMSE-SIC) method by reducing its complexity to a single inverse operation. Simulation results are given in order to provide a detailed analysis of the proposed MMSE-SIC based approach for iterative detection and decoding (IDD).</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_16-Efficient_Method_for_Three_Loop_MMSE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Privacy Vulnerabilities in Permissionless Blockchains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110915</link>
        <id>10.14569/IJACSA.2020.0110915</id>
        <doi>10.14569/IJACSA.2020.0110915</doi>
        <lastModDate>2020-09-30T13:08:23.9430000+00:00</lastModDate>
        
        <creator>Aisha Zahid Junejo</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Abdullah Abdulrehman Alabdulatif</creator>
        
        <subject>Blockchains; privacy vulnerabilities; cryptographic primitives; anonymity; confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Blockchain decentralization not only ensures transparency of transactions to eliminate the need to trust a third party, but also makes the transactions of the network publicly accessible to all the participating peers in the network. As a result, data anonymity and confidentiality are compromised, making several business enterprises and industrialists hesitant to adopt the technology. Although the research community has proposed various privacy-preserving solutions for blockchain, they still lack efficiency, resulting in the distrust of industries in opting for the technology. This study is conducted to contribute to the existing body of knowledge on privacy in blockchains. The fundamental goal of this study is to delve into the privacy vulnerabilities of the blockchain network in a permissionless setting by identifying non-trivial root factors causing privacy breaches in blockchain and presenting the limitations of existing privacy-preserving mechanisms. Studies with superficial comparisons of privacy-preserving techniques are available in the literature, but a detailed and in-depth analysis of their limitations and of the causes of privacy breaches in blockchain has not yet been done. Therefore, in this paper we first present a comprehensive analysis of various privacy-breaching factors of blockchain networks. Next, we discuss existing cryptographic and non-cryptographic solutions in the literature. We found that these existing privacy-preserving mechanisms have their own sets of limitations and hence are inefficient at present. The existing privacy-preserving mechanisms need further consideration by the research community before they’re widely adopted and benchmarked. Therefore, in the end, we identify some future directions that need to be addressed to model an efficient privacy-preserving mechanism for wider adoption of the blockchain technology.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_15-A_Survey_on_Privacy_Vulnerabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Speed and Secure Elliptic Curve Cryptosystem for Multimedia Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110914</link>
        <id>10.14569/IJACSA.2020.0110914</id>
        <doi>10.14569/IJACSA.2020.0110914</doi>
        <lastModDate>2020-09-30T13:08:23.9130000+00:00</lastModDate>
        
        <creator>Mohammad Alkhatib</creator>
        
        <subject>Elliptic curves cryptosystem; performance; binary method; projective coordinates; security applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The last few years witnessed a rapid increase in the use of multimedia applications, which led to an explosion in the amount of data sent over communication networks. Therefore, it has become necessary to find an effective security solution that preserves the confidentiality of such an enormous amount of data sent through insecure network channels and, at the same time, meets the performance requirements of the applications that process the data. This research introduces a high-speed and secure elliptic curve cryptosystem (ECC) appropriate for multimedia security. The proposed ECC improves the performance of the data encryption process by accelerating the scalar multiplication operation, while strengthening the immunity of the cryptosystem against side channel attacks. The speed of the encryption process has been increased via the parallel implementation of ECC computations at both the upper scalar multiplication level and the lower point operations level. To accomplish this, a modified version of the Right-to-Left binary algorithm as well as eight parallel multipliers (PM) were used to allow parallel implementation of point doubling and addition. Moreover, projective coordinate systems were used to remove the time-consuming inversion operation. The current 8-PM Montgomery ECC achieves a higher performance level compared to previous ECC implementations, and can reduce the risk of side channel attacks. In addition, the current research work provides a performance and resource-consumption analysis for Weierstrass and Montgomery elliptic curve representations over a prime field. However, the proposed ECC implementation consumes more resources. The presented ECCs were implemented using VHDL and synthesized using the Xilinx tool with a target FPGA.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_14-High_Speed_and_Secure_Elliptic_Curve_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Best Path in Mountain Environment based on Parallel Hill Climbing Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110913</link>
        <id>10.14569/IJACSA.2020.0110913</id>
        <doi>10.14569/IJACSA.2020.0110913</doi>
        <lastModDate>2020-09-30T13:08:23.8800000+00:00</lastModDate>
        
        <creator>Raja Masadeh</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Sanad Jamal</creator>
        
        <creator>Mais Haj Qasem</creator>
        
        <creator>Bayan Alsaaidah</creator>
        
        <subject>Hill climbing; heuristic search; parallel processing; Message Passing Interface (MPI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Heuristic search is a search process that uses domain knowledge in heuristic rules or procedures to direct the progress of a search algorithm. Hill climbing is a heuristic search technique for solving certain mathematical optimization problems in the field of artificial intelligence. In this technique, starting with a suboptimal solution is compared to starting from the base of the hill, and improving the solution is compared to walking up the hill. A solution of the hill climbing technique can be achieved in polynomial time; however, for NP-complete problems, the number of local maxima can lead to an exponential increase in computational time. To address these problems, the proposed hill climbing algorithm based on the local optimal solution is applied to the Message Passing Interface, which is a library of routines that can be used to create parallel programs by using commonly available operating system services to create parallel processes and exchange information among these processes. Experimental results show that parallel hill climbing outperforms sequential methods.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_13-Best_Path_in_Mountain_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pynq-YOLO-Net: An Embedded Quantized Convolutional Neural Network for Face Mask Detection in COVID-19 Pandemic Era</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110912</link>
        <id>10.14569/IJACSA.2020.0110912</id>
        <doi>10.14569/IJACSA.2020.0110912</doi>
        <lastModDate>2020-09-30T13:08:23.8800000+00:00</lastModDate>
        
        <creator>Yahia Said</creator>
        
        <subject>Face mask detection; Coronavirus COVID-19; YOLO; Convolutional Neural Network (CNN); embedded device; Pynq Z1 board</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The recent Coronavirus COVID-19 is a very infectious disease that is transmitted through droplets generated when an infected person coughs, sneezes, or exhales. People must therefore wear a face mask to reduce the transmission of this virus. Governments around the world have imposed the use of face masks in public spaces and supermarkets. In this paper, we propose to build a face mask detection system based on a lightweight Convolutional Neural Network (CNN) and the YOLO object detection framework, and to implement it on an embedded low-power device. The object detection framework was designed using a single Convolutional Neural Network for object detection in real time. To make the YOLO framework suitable for embedded implementation, we propose to build a lightweight Convolutional Neural Network and quantize it by using a single bit for weights and 2 bits for activations. The proposed network, called Pynq-YOLO-Net, was implemented on the Pynq Z1 platform. The computation was divided between software and hardware: the feature extraction part was executed on the hardware device and the output part was executed in software. This configuration allowed real-time processing to be reached with a very good detection accuracy of 97% when tested on the combination of collected datasets.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_12-Pynq_YOLO_Net_An_Embedded_Quantized.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Computational Models to Theme Analysis in Literature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110911</link>
        <id>10.14569/IJACSA.2020.0110911</id>
        <doi>10.14569/IJACSA.2020.0110911</doi>
        <lastModDate>2020-09-30T13:08:23.8200000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Computational models; computational semantics; lexical clustering; lexical content; philological methods; Thomas Hardy; Vector Space Model (VSM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>The recent years have witnessed the development of numerous computational methods that have been widely used in humanities and literary studies. In spite of the potential of such methods to provide workable solutions to different inherent problems within these domains, including selectivity, objectivity, and replicability, very little has been done on thematic studies in literature. Almost all the work is done through traditional methods based on individual researchers’ reading of texts and intuitive abstraction of generalizations from that reading. These approaches have negative implications for issues of objectivity and replicability. Furthermore, it is challenging for such traditional methods to deal effectively with the hundreds of thousands of new novels that are published every year. In the face of these problems, this study proposes an integrated computational model for the thematic classification of literary texts based on lexical clustering methods. As an example, this study is based on a corpus including Thomas Hardy’s novels and short stories. Computational semantic analysis based on the vector space model (VSM) representation of the lexical content of the texts is used. Results indicate that the selected texts were thematically grouped based on their semantic content. It can be claimed that text clustering approaches, which have long been used in computational theory and data mining applications, can be usefully applied in literary studies.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_11-Towards_Computational_Models_to_Theme_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting the Global Horizontal Irradiance based on Boruta Algorithm and Artificial Neural Networks using a Lower Cost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110910</link>
        <id>10.14569/IJACSA.2020.0110910</id>
        <doi>10.14569/IJACSA.2020.0110910</doi>
        <lastModDate>2020-09-30T13:08:23.8030000+00:00</lastModDate>
        
        <creator>Abdulatif Aoihan Alresheedi</creator>
        
        <creator>Mohammed Abdullah Al-Hagery</creator>
        
        <subject>Global horizontal irradiance; artificial neural networks; feature selection; boruta algorithm; cost reduction; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>More solar-based electricity generation stations have been established markedly in recent years as a new and important source of renewable energy. This calls for a more efficient, reliable integration of solar power and for overcoming several challenges, such as future forecasting and the costly equipment in meteorological stations. An effective prediction approach combines Artificial Neural Networks (ANN) with the Boruta algorithm for optimal attribute selection to train the proposed prediction model and obtain highly accurate prediction performance at a lower cost. The precise goal of this research is to predict the Global Horizontal Irradiance (GHI) by building the ANN model, and also to reduce the total number of GHI prediction attributes/features, consequently reducing the cost of the devices and equipment required to predict this important factor. The dataset applied in this research is real data, collected from 2015-2018 by solar and meteorological stations in KSA. It was provided by King Abdullah City for Atomic and Renewable Energy (KA CARE). The findings emphasize the achievement of accurate predictions of solar radiation at a minimum cost, which is considered to be highly important in KSA and all other countries that have a similar environment.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_10-Forecasting_the_Global_Horizontal_Irradiance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SQ-Framework for Improving Sustainability and Quality into Software Product and Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110909</link>
        <id>10.14569/IJACSA.2020.0110909</id>
        <doi>10.14569/IJACSA.2020.0110909</doi>
        <lastModDate>2020-09-30T13:08:23.7730000+00:00</lastModDate>
        
        <creator>Kamal Uddin Sarker</creator>
        
        <creator>Aziz Bin Deraman</creator>
        
        <creator>Raza Hasan</creator>
        
        <creator>Ali Abbas</creator>
        
        <subject>Software project management; sustainable project; sustainable product; sustainable and quality model; system development methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Sustainability is one of the most important quality factors, and it integrates some other quality factors into the product too. Moreover, it makes for an effective workflow and improves user satisfaction. A manager can meet a target by controlling a project, but sustainability is more versatile. Quality factors are the measuring criteria of a product, while sustainability drives the making of a quality product, an efficient project, and a successful organization; it is thus a package of strategy, tasks, processes, technologies, and stakeholders. It is observed that there is a lack of sustainability practice in software engineering compared with other engineering communities. Many software development models exist with limited scope in quality control for sustainability. Given the aforementioned viewpoint, this research proposes a new software project management framework, “SQ-Framework”. Its hybrid structure consists of the features of methods, quality models, and sustainability. Execution guidance for “SQ-Framework” is provided according to the “Karlskrona manifesto”. A manager can use the framework to improve the management process of a project, a developer can integrate quality factors with sustainability into the product, an executive could be motivated to integrate quality and sustainability strategy in the organization, and users can draw inspiration to practice sustainability.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_9-SQ_Framework_for_Improving_Sustainability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reward-Based DSM Program for Residential Electrical Loads in Smart Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110908</link>
        <id>10.14569/IJACSA.2020.0110908</id>
        <doi>10.14569/IJACSA.2020.0110908</doi>
        <lastModDate>2020-09-30T13:08:23.7400000+00:00</lastModDate>
        
        <creator>Muthuselvi G</creator>
        
        <creator>Saravanan B</creator>
        
        <subject>DSM; LSE; RLA; smart grid; reward</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>There is a positive attitude towards the use of different strategies for engaging in demand response (DR) programs in energy markets through the innovation and trend of smart grid technologies. In this paper, a reward-based approach is proposed that enhances the involvement of customers in the DR program by assuring the customer’s comfort level. Most of the previous works considered limited controllable loads, such as thermal loads, for demand side management (DSM). In this approach, thermal loads and all active electrical loads are considered for the analysis. A comfort indicator, an important parameter for measuring the comfort of each resident, is used for the analysis. This technique significantly reduces the utility reward cost and maximizes the user satisfaction level compared with the existing approach. The numerical example considered in this work illustrates the effectiveness of the proposed approach. The problem is formulated as mixed-integer linear programming (MILP) and solved by using the CPLEX solver in the General Algebraic Modeling System (GAMS).</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_8-Reward_Based_DSM_Program_for_Residential_Electrical_Loads.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factored Phrase-based Statistical Machine Pre-training with Extended Transformers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110907</link>
        <id>10.14569/IJACSA.2020.0110907</id>
        <doi>10.14569/IJACSA.2020.0110907</doi>
        <lastModDate>2020-09-30T13:08:23.7270000+00:00</lastModDate>
        
        <creator>Vivien L. Beyala</creator>
        
        <creator>Marcellin J. Nkenlifack</creator>
        
        <creator>Perrin Li Litet</creator>
        
        <subject>Machine translation; transformer; statistical machine; morphologically rich; hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>This paper presents the development of a cascaded hybrid multilingual automatic translation system that allows a tight coupling between the two underlying research approaches in machine translation, namely the neural (deterministic) and statistical (probabilistic) approaches, while fully taking advantage of each method in order to improve translation performance. This architecture addresses two major problems frequently occurring when dealing with morphologically richer languages in MT: the significant number of unknown tokens generated due to the presence of out-of-vocabulary (OOV) words, and the size of the output vocabulary. Additionally, we incorporated factors (additional word-level linguistic information) in order to alleviate the data sparseness problem and potentially reduce language ambiguity; the factors we considered are lemmatization and Part-of-Speech tags (taking into consideration their various compounds). We combined a fully-factored transformer and a factored PB-SMT, where the training data is pre-translated using the trained fully-factored transformer and afterwards employed to build a PB-SMT system, in parallel using the pre-translated development set to tune parameters. Finally, in order to produce the desired results, we operated the FPB-SMT system to re-decode the pre-translated test set in a post-processing step. Experiments performed on translations from Japanese to English and English to Japanese reveal that our proposed cascaded hybrid framework outperforms the strong HMT state-of-the-art by over 8.61% BLEU and 7.25% BLEU, respectively, for the validation set, and over 8.70% BLEU and 7.70% BLEU, respectively, for the test set.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_7-Factored_Phrase_Based_Statistical_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Recommender System for Mobile Applications of Google Play Store</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110906</link>
        <id>10.14569/IJACSA.2020.0110906</id>
        <doi>10.14569/IJACSA.2020.0110906</doi>
        <lastModDate>2020-09-30T13:08:23.6930000+00:00</lastModDate>
        
        <creator>Ahlam Fuad</creator>
        
        <creator>Sahar Bayoumi</creator>
        
        <creator>Hessah Al-Yahya</creator>
        
        <subject>Application profile; content-based filtering; Google play; mobile applications; recommender systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>With the growth in the smartphone market, many applications can be downloaded by users. Users struggle with the availability of a massive number of mobile applications in the market while finding a suitable application to meet their needs. Indeed, there is a critical demand for personalized application recommendations. To address this problem, we propose a model that seamlessly combines content-based filtering with application profiles. We analyzed the applications available on the Google Play app store to extract the essential features for choosing an app and then used these features to build app profiles. Based on the number of installations, the number of reviews, app size, and category, we developed a content-based recommender system that can suggest some apps for users based on what they have searched for in the application’s profile. We tested our model using a k-nearest neighbor algorithm and demonstrated that our system achieved good and reasonable results.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_6-A_Recommender_System_for_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weather Variability Forecasting Model through Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110905</link>
        <id>10.14569/IJACSA.2020.0110905</id>
        <doi>10.14569/IJACSA.2020.0110905</doi>
        <lastModDate>2020-09-30T13:08:23.6770000+00:00</lastModDate>
        
        <creator>Sultan Shekana</creator>
        
        <creator>Addisu Mulugeta</creator>
        
        <creator>Durga Prasad Sharma</creator>
        
        <subject>Meteorological data; weather forecasting; multilayer-perceptron; Na&#239;ve Bayes; multinomial logistic regression algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Climate and weather variability are pressing concerns for world communities. Weather variability imposes a broad impact on the economy and on the survival of living entities. For the African country Ethiopia, it is desirable to pay great attention to weather variability. The Ethiopian Dodota Woreda region is continuously affected by repeated droughts. This raises an urgent need to investigate and analyze the factors that are the major causes of the frequent occurrence of drought. Although weather scientists and domain experts are overwhelmed with meteorological data, they are lacking in analyzing and revealing the hidden knowledge or patterns about weather variability. This paper is an effort to design an enhanced predictive model for weather variability forecasting through data mining techniques. The parameters used in this research are temperature, dew point, sunshine, rainfall, wind speed, maximum temperature, minimum temperature, and relative humidity, chosen to enhance the accuracy of forecasting. To improve the accuracy, we used the Multilayer Perceptron (MLP), Na&#239;ve Bayes, and multinomial logistic regression algorithms to design the proposed predictive model. The knowledge discovery in databases (KDD) process model was used as a framework for the modeling. The research findings revealed that the aforementioned parameters have a strong positive relationship with weather forecasting in meteorology sectors. The MLP model with selected parameters presents an interesting predictive accuracy result, i.e., 98.3908% correctly classified instances. The best-performing algorithm, MLP, was chosen and used to generate interesting patterns. The domain experts (meteorologists) validated the discovered patterns for the improved accuracy of weather variability forecasting.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_5-Weather_Variability_Forecasting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximum Likelihood Classification based on Classified Result of Boundary Mixed Pixels for High Spatial Resolution of Satellite Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110904</link>
        <id>10.14569/IJACSA.2020.0110904</id>
        <doi>10.14569/IJACSA.2020.0110904</doi>
        <lastModDate>2020-09-30T13:08:23.6470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Maximum likelihood classification; optimum threshold; Landsat TM; MSS; Mixed Pixel; spatial resolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Maximum Likelihood Classification (MLC) based on the classified result of boundary Mixed Pixels (Mixels) for high spatial resolution remote sensing satellite images is proposed and evaluated with Landsat Thematic Mapper (TM) images. The optimum threshold indicates different results for TM and Multi Spectral Scanner (MSS) data. This may be because the TM spatial resolution is 2.7 times finer than that of MSS, and consequently TM imagery has more spectral variability within a class. The increase of spectral heterogeneity within a class and the higher number of channels being used in the classification process may play a significant role. For example, the optimum threshold for classifying an agricultural scene using MSS data is about 2.5 standard deviations, while that for TM corresponds to more than four standard deviations. This paper compares the optimum threshold between MSS and TM and suggests a method of using unassigned boundary pixels to determine the optimum threshold. Further, it describes the relationship of the optimum threshold to the class variance with a full illustration using TM data. The experimental conclusions suggest to the user some systematic methods for obtaining an optimal classification with MLC.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_4-Maximum_Likelihood_Classification_based_on_Classified_Result.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>6G: Envisioning the Key Technologies, Applications and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110903</link>
        <id>10.14569/IJACSA.2020.0110903</id>
        <doi>10.14569/IJACSA.2020.0110903</doi>
        <lastModDate>2020-09-30T13:08:23.6300000+00:00</lastModDate>
        
        <creator>Syed Agha Hassnain Mohsan</creator>
        
        <creator>Alireza Mazinani</creator>
        
        <creator>Warda Malik</creator>
        
        <creator>Imran Younas</creator>
        
        <creator>Nawaf Qasem Hamood Othman</creator>
        
        <creator>Hussain Amjad</creator>
        
        <creator>Arfan Mahmood</creator>
        
        <subject>IoT; AI; communication technologies; holographic communication; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>By 2030, 6G is expected to bring a remarkable revolution in communication technologies, as it will enable the Internet of Everything. Many countries are still working on 5G, and B5G has yet to be developed, while some research groups have already initiated projects on 6G. 6G will provide high and sophisticated QoS, e.g., virtual reality and holographic communication. At this stage, it is impossible to speculate on every detail of 6G or on which key technologies will mark it. The wide applications of ICT, such as IoT, AI, blockchain technology, XR (Extended Reality) and VR (Virtual Reality), have driven the emergence of 6G technology. Building on 5G techniques, 6G will have a profound impact on ubiquitous connectivity, holographic connectivity, deep connectivity and intelligent connectivity. Notably, the research fraternity should focus on the challenges and issues of 6G and explore various alternatives to meet its desired parameters. Thus, there are many potential challenges to be envisioned. This review study outlines some future challenges and issues which can hamper the deployment of 6G. We subsequently define key potential features of 6G to provide the state of the art of 6G technology for future research. We provide a review of extant research on 6G, in which technology prospects, challenges, key areas and related issues are briefly discussed. In addition, we provide a technology breakdown and framework of 6G, and shed light on future directions, applications and practical considerations of 6G to help researchers toward possible breakthroughs. Our aim is to aggregate these efforts and eliminate the technical uncertainties on the way to breakthrough innovations for 6G.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_3-6G_Envisioning_the_Key_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Pulmonary Nodule using New Transfer Method Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110902</link>
        <id>10.14569/IJACSA.2020.0110902</id>
        <doi>10.14569/IJACSA.2020.0110902</doi>
        <lastModDate>2020-09-30T13:08:23.6000000+00:00</lastModDate>
        
        <creator>Syed Waqas Gillani</creator>
        
        <creator>Bo Ning</creator>
        
        <subject>New transfer method; VOI extraction; feature extraction; classification; LIDC-IDRI dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>Lung cancer is among the world&#39;s worst cancers, and accounted for 27% of all cancers in 2018. Despite substantial improvement in recent diagnoses and medications, the five-year survival rate is just 19%. Classification of lung nodules is an essential step preceding diagnosis, particularly because early detection can provide doctors with a highly valued opinion. CT image detection and classification can be performed easily and accurately with advanced vision devices and machine-learning technology, and this field of work has been extremely successful. Researchers have attempted to improve the accuracy of CAD systems for computed tomography (CT) in the screening of lung cancer with several deep learning models. In this paper, we propose a fully automated lung CT system for lung nodule classification, namely, the new transfer method (NTM), which has two parts. First, features are extracted by applying different VOI and feature extraction techniques; we use intensity, shape, border contrast and spicula extraction to extract the lung nodule. These nodules are then transferred to the classification part, where we use an advanced fully convolutional network (A-FCN) to classify lung nodules as benign or malignant. Our A-FCN network contains three types of layers that help to enhance the performance and accuracy of the NTM network: convolution layers, pooling layers and fully connected layers. The proposed model is trained on the LIDC-IDRI dataset and attains an accuracy of 89.90% with an AUC of 0.9485.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_2-Classification_of_Pulmonary_Nodule.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient GPU Implementation of Multiple-Precision Addition based on Residue Arithmetic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110901</link>
        <id>10.14569/IJACSA.2020.0110901</id>
        <doi>10.14569/IJACSA.2020.0110901</doi>
        <lastModDate>2020-09-30T13:08:23.5370000+00:00</lastModDate>
        
        <creator>Konstantin Isupov</creator>
        
        <creator>Vladimir Knyazkov</creator>
        
        <subject>Multiple-precision algorithm; integer arithmetic; residue number system; GPU; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(9), 2020</description>
        <description>In this work, the residue number system (RNS) is applied for efficient addition of multiple-precision integers using graphics processing units (GPUs) that support the Compute Unified Device Architecture (CUDA) platform. The RNS allows calculations with the digits of a multiple-precision number to be performed in an element-wise fashion, without the overhead of communication between them, which is especially useful for massively parallel architectures such as the GPU architecture. The paper discusses two multiple-precision integer algorithms. The first algorithm relies on if-else statements to test the signs of the operands. In turn, the second algorithm uses radix complement RNS arithmetic to handle negative numbers. While the first algorithm is more straightforward, the second one avoids branch divergence among threads that concurrently compute different elements of a multiple-precision array. As a result, the second algorithm shows significantly better performance compared to the first algorithm. Both algorithms running on an NVIDIA RTX 2080 Ti GPU are faster than the multi-core GNU MP implementation running on an Intel Xeon 4100 processor.</description>
        <description>http://thesai.org/Downloads/Volume11No9/Paper_1-Efficient_GPU_Implementation_of_Multiple_Precision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model based on Radial basis Function Neural Network for Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110896</link>
        <id>10.14569/IJACSA.2020.0110896</id>
        <doi>10.14569/IJACSA.2020.0110896</doi>
        <lastModDate>2020-08-31T13:22:27.5200000+00:00</lastModDate>
        
        <creator>Marwan Albahar</creator>
        
        <creator>Ayman Alharbi</creator>
        
        <creator>Manal Alsuwat</creator>
        
        <creator>Hind Aljuaid</creator>
        
        <subject>Intrusion detection; neural network; radial basis function; directed batch growing self-organizing map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>An Intrusion Detection System (IDS) is a system that monitors the network to identify malicious activities. Upon identifying unusual activities, the IDS sends a notification to the network administrators to warn them about the hackers’ hostile activities. For detecting intrusions, signature-based systems are considered to be among the most effective methods. However, they cannot detect new attacks. Additionally, it is costly and challenging to keep the attack signature database up to date with known signatures, which constitutes a significant drawback. Neural networks are capable of learning through input patterns and have the potential to generalize data. In this paper, we propose a hybrid model based on a Directed Batch Growing Self-Organizing Map (DBGSOM) combined with a Radial Basis Function Neural Network (RBFNN) for detecting abnormalities in the network. Based on our experiments, the proposed model performed well and resulted in satisfactory performance measures compared to the Self-Organizing Maps and Radial Basis Function Neural Network (SOM&amp;RBFNN) model.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_96-A_Hybrid_Model_based_on_Radial_basis_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Passenger Communication System for Next-Generation Self-Driving Cars: A Buddy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110895</link>
        <id>10.14569/IJACSA.2020.0110895</id>
        <doi>10.14569/IJACSA.2020.0110895</doi>
        <lastModDate>2020-08-31T13:22:27.5200000+00:00</lastModDate>
        
        <creator>M Talha Bin Ahmed Lodhi</creator>
        
        <creator>Faisal Riaz</creator>
        
        <creator>Yasir Mehmood</creator>
        
        <creator>Muhammad Farrukh Farid</creator>
        
        <creator>Abdul Ghafoor Dar</creator>
        
        <creator>Muhammad Atif Butt</creator>
        
        <creator>Samia Abid</creator>
        
        <creator>Hasan Ali Asghar</creator>
        
        <subject>Autonomous Vehicles (AVs); passenger communication system; simulation engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>With the rapid emergence of autonomous vehicles (AVs), there is a need to build communication systems which help passengers communicate with AVs robustly. In this regard, this research work presents a multimodal passenger communication system, known as “Buddy”, for AVs. Buddy is an all-in-one control system for AVs which incorporates touch, speech, text, and emotion recognition methods of interaction. It makes it easy for passengers to interact with AVs and enables communication between the passengers and the AV, which ultimately provides a safe driving experience. Moreover, we have designed and developed our own simulator to evaluate the performance of the proposed passenger communication system, and we have conducted extensive in-field tests to assess its effectiveness. The rigorous analysis validates the results and hence the significance of the proposed passenger communication system.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_95-Passenger_Communication_System_for_Next_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Control and Fuzzy Logic Guidance for an Unmanned Surface Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110894</link>
        <id>10.14569/IJACSA.2020.0110894</id>
        <doi>10.14569/IJACSA.2020.0110894</doi>
        <lastModDate>2020-08-31T13:22:27.4900000+00:00</lastModDate>
        
        <creator>Marcelo M. Huayna-Aguilar</creator>
        
        <creator>Juan C. Cutipa-Luque</creator>
        
        <creator>Pablo Raul Yanyachi</creator>
        
        <subject>Robust control; guidance; fuzzy; unmanned surface vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>This work deals with the guidance and control of an unmanned surface vehicle whose mission is to autonomously monitor water quality conditions along the Peruvian coast. The vehicle is of the catamaran class, with two slender bodies propelled by two electric thrusters operating in differential and common modes in order to maneuver in the surge and yaw directions. A multivariable control approach is proposed to control these two variables, and a fuzzy logic-based guidance law tracks predefined trajectories at the sea surface. The conjunction of the robust control and guidance algorithms is validated numerically, and the results show good stability and performance despite the presence of disturbances, sensor noise and model uncertainties.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_94-Robust_Control_and_Fuzzy_Logic_Guidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Date Grading using Machine Learning Techniques on a Novel Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110893</link>
        <id>10.14569/IJACSA.2020.0110893</id>
        <doi>10.14569/IJACSA.2020.0110893</doi>
        <lastModDate>2020-08-31T13:22:27.4730000+00:00</lastModDate>
        
        <creator>Hafsa Raissouli</creator>
        
        <creator>Abrar Ali Aljabri</creator>
        
        <creator>Sarah Mohammed Aljudaibi</creator>
        
        <creator>Fazilah Haron</creator>
        
        <creator>Ghada Alharbi</creator>
        
        <subject>Date grading; machine learning; k-nearest neighbor; support vector machine; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Date grading is a crucial stage in date factories. However, it is done manually in most Middle Eastern industries. This study, using a novel dataset, identifies suitable machine learning techniques to grade dates based on an image of the date. The dataset consists of three different types of dates, namely, Ajwah, Mabroom, and Sukkary, each having three different grades. The dates were obtained from the Manafez company and graded by their experts. The color, size and texture of the dates are the features considered in this work. To determine the color, we used color properties in the RGB (red, green, and blue) color space. For measuring the size, we applied best least-squares ellipse fitting. To analyze the texture, we used the Weber local descriptor to distinguish between texture patterns. In order to identify a suitable grading classifier, we experimented with three approaches, namely, k-nearest neighbor (KNN), support vector machine (SVM) and convolutional neural network (CNN). Our experiments have shown that CNN is the best classifier, with an accuracy of 98% for Ajwah, 99% for Mabroom, and 99% for Sukkary. Hence, the CNN classifier has been incorporated into our date grading system.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_93-Date_Grading_using_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Natural Language Understanding Engines in the Educational Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110892</link>
        <id>10.14569/IJACSA.2020.0110892</id>
        <doi>10.14569/IJACSA.2020.0110892</doi>
        <lastModDate>2020-08-31T13:22:27.4400000+00:00</lastModDate>
        
        <creator>Victor Juan Jimenez Flores</creator>
        
        <creator>Oscar Juan Jimenez Flores</creator>
        
        <creator>Juan Carlos Jimenez Flores</creator>
        
        <creator>Juan Ubaldo Jimenez Castilla</creator>
        
        <subject>Chatbot; natural language understanding; NLU; F1 score; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Recently, chatbots have gained great importance in different domains and are becoming more and more common in customer service. One possible cause is the wide variety of platforms that offer natural language understanding as a service, for which no programming skills are required. The problem is then which platform to use to develop a chatbot in the educational domain. Therefore, the main objective of this paper is to compare the main natural language understanding (NLU) engines and determine which could perform better in the educational domain, so that researchers can make better-justified decisions about which NLU engine to use to develop an educational chatbot. In this study, six NLU platforms were compared and performance was measured with the F1 score. Training data and input messages were extracted from Mariateguino Bot, which was the chatbot of the José Carlos Mariátegui University during 2018. The results of this comparison indicate that Watson Assistant has the best performance, with an average F1 score of 0.82, which means that it is able to answer correctly in most cases. Finally, other factors can condition the choice of a natural language understanding engine, so that ultimately the choice is left to the user.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_92-Performance_Comparison_of_Natural_Language_Understanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nitrogen Fertilizer Recommendation for Paddies through Automating the Leaf Color Chart (LCC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110891</link>
        <id>10.14569/IJACSA.2020.0110891</id>
        <doi>10.14569/IJACSA.2020.0110891</doi>
        <lastModDate>2020-08-31T13:22:27.4100000+00:00</lastModDate>
        
        <creator>Torikul Islam</creator>
        
        <creator>Rafsan Uddin Beg Rizan</creator>
        
        <creator>Yeasir Arefin Tusher</creator>
        
        <creator>Md Shafiuzzaman</creator>
        
        <creator>Md. Alam Hossain</creator>
        
        <creator>Syed Galib</creator>
        
        <subject>Leaf Color Chart (LCC); Convolutional Neural Network (CNN); fertilizer recommendation system; color classification; Decision Tree (DT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Nitrogen fertilizer is indispensable for rice production to ensure that the crop’s nitrogen need is adequately supplied during the growing season. The International Rice Research Institute (IRRI) has proposed the Leaf Color Chart (LCC) to detect the exact nitrogen need of paddy. Farmers generally monitor the plant’s growth (which is also an indicator of the nitrogen concentration of leaves) by comparing the leaf color with the corresponding color on the LCC. Currently, in most cases, the LCC is used manually to determine fertilizer need, and thus there is a chance of either overestimating or underestimating the amount of fertilizer. To avoid this problem, a smart fertilizer recommendation system is proposed in this paper. The proposed method automates the manual acquisition and interpretation of leaf color for classification through the LCC. The experimentation considers a sample of 6000 Aman paddy leaf images. The data acquisition was performed according to IRRI’s guidance of taking the paddy leaf images within the body shade, using our developed application. The data/images have already been made public on Kaggle, a well-known dataset website. Semantic segmentation of the dataset was performed by a powerful Convolutional Neural Network (CNN) backbone architecture, DeepLabV3+. Color classification into the 4 categories of the LCC was performed by a CNN architecture consisting of seven layers. Information gain based evaluation was performed in the Decision Tree (DT) approach to select features, and with the selected features the DT classified images into the 4 categories. Color classification by our two proposed methods achieved 94.22% accuracy with the CNN model and 91.22% accuracy with the DT classifier.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_91-Nitrogen_Fertilizer_Recommendation_for_Paddies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Quality of a Person’s Calligraphy using Image Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110890</link>
        <id>10.14569/IJACSA.2020.0110890</id>
        <doi>10.14569/IJACSA.2020.0110890</doi>
        <lastModDate>2020-08-31T13:22:27.3930000+00:00</lastModDate>
        
        <creator>Aaron Walter Avila Cordova</creator>
        
        <creator>Armando Flores Choque</creator>
        
        <creator>Joseph Clinthon Paucar Nu&#241;ez</creator>
        
        <subject>Calligraphy; image processing; character recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The problem of not developing good handwriting as a child has serious consequences for learning, ranging from the training of human memory to the capacity for innovation. To assess the quality of a person’s handwriting, it is necessary to process large amounts of images; with current improvements in machine learning this process is increasingly precise, but the development of these algorithms is complicated. For this reason, this article presents a proposal to evaluate the quality of a person’s handwriting through image recognition in order to assist in its improvement, while performing image processing in a practical way. This evaluation was carried out on a group of university students from the Arequipa region in Peru, using an image processing system that allows character recognition. According to the degree of proximity, the level of handwriting is determined quantitatively in percentage degrees. The tests carried out show that the quality of calligraphy among university students in the Arequipa region varies between low and medium.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_90-Evaluating_the_Quality_of_a_Persons_Calligraphy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-invasive Device to Lessen Tremors in the Hands due to Parkinson’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110889</link>
        <id>10.14569/IJACSA.2020.0110889</id>
        <doi>10.14569/IJACSA.2020.0110889</doi>
        <lastModDate>2020-08-31T13:22:27.3630000+00:00</lastModDate>
        
        <creator>Juan Hinostroza-Quinones</creator>
        
        <creator>Manuel Vasquez-Cunia</creator>
        
        <subject>Parkinson; non-invasive device; vibrations; Arduino</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>One of the severe neurological disorders that affect the central nervous system is Parkinson’s disease, which prevents patients from performing routine tasks such as eating and writing. According to statistical data, more than 10 million people in the world suffer from this disease, and the Latin American nation of Peru is no stranger to it, since approximately 30 thousand Peruvians are affected. To date there is no cure for this disease; however, there are different chemical, biological and electronic methods that help to improve the quality of life of patients. This research aims to design a low-cost device that is able to diminish tremors in patients with Parkinson’s disease. The non-invasive device presented and developed in this study works with the help of 5 vibratory motors and a microcontroller. The vibrations generated by the motors on the patient’s wrist distract the brain and, as a result, the hand tremors due to Parkinson’s disease are reduced.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_89-Non_Invasive_Device_to_Lessen_Tremors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prototype of an Automatic Irrigation System for Peruvian Crop Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110888</link>
        <id>10.14569/IJACSA.2020.0110888</id>
        <doi>10.14569/IJACSA.2020.0110888</doi>
        <lastModDate>2020-08-31T13:22:27.3470000+00:00</lastModDate>
        
        <creator>Luis Nunez-Tapia</creator>
        
        <subject>Automatic irrigation; crop fields; Arduino; bluetooth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Water is an important factor in sustaining life, and for that reason it is necessary to take care of it, since it is a limited resource. In Peruvian agriculture, however, a high percentage of water is wasted, as this activity consumes 92% of fresh water, making Peru the 37th country worldwide in misusing water. Given the above, and considering that the agricultural sector is an important factor in the Peruvian economy, the current study aims to implement a system for the automatic irrigation of crop fields in Peru, with the goal of optimizing the use of water rather than wasting it as usually happens. After the implementation of the first prototype of the irrigation system, using an Arduino microcontroller and low-cost electronic components, it was observed during the tests that 75% and 76.5% of the water normally used for irrigation was saved for a dry rainless and a dry rainy patch of crop field, respectively. Monitoring of the soil humidity was possible through Bluetooth communication. The presented results show the viability of the system, and large-scale tests are expected in a follow-up study.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_88-A_Prototype_of_an_Automatic_Irrigation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalability Validation of the Posting Access Method through UPPAAL-SMC Model-Checker</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110887</link>
        <id>10.14569/IJACSA.2020.0110887</id>
        <doi>10.14569/IJACSA.2020.0110887</doi>
        <lastModDate>2020-08-31T13:22:27.3170000+00:00</lastModDate>
        
        <creator>Bethaina Touijer</creator>
        
        <creator>Yann Ben Maissa</creator>
        
        <creator>Salma Mouline</creator>
        
        <subject>WBANs; IEEE 802.15.6 MAC protocol; posting access method; UPPAAL-SMC; energy consumption; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The IEEE 802.15.6 standard provides new physical layer (PHY) and medium access control sublayer (MAC) specifications that address several challenges of wireless body area networks (WBANs). Posting is the access method of the IEEE 802.15.6 MAC protocol used by the hub to send data to the nodes. In this paper, we use a formal method to evaluate the posting access method under the stochastic environment of WBANs. Based on the statistical model checking (SMC) toolset UPPAAL-SMC, we model and evaluate the behavior of the posting access method in terms of scalability. The evaluation results validated the scalability with respect to the allocated time intervals, the energy consumption, and the throughput.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_87-Scalability_Validation_of_the_Posting_Access_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning with Data Transformation and Factor Analysis for Student Performance Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110886</link>
        <id>10.14569/IJACSA.2020.0110886</id>
        <doi>10.14569/IJACSA.2020.0110886</doi>
        <lastModDate>2020-08-31T13:22:27.3000000+00:00</lastModDate>
        
        <creator>Tran Thanh Dien</creator>
        
        <creator>Sang Hoai Luu</creator>
        
        <creator>Nguyen Thanh-Hai</creator>
        
        <creator>Nguyen Thai-Nghe</creator>
        
        <subject>Deep learning; student performance; mark prediction; Long Short Term Memory (LSTM); Convolutional Neural Networks (CNN); data pre-processing; multidisciplinary university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Student performance prediction is one of the most concerning issues in the field of education and training, especially in educational data mining. The prediction supports students in selecting courses and designing appropriate study plans for themselves. Moreover, student performance prediction enables lecturers as well as educational managers to indicate which students should be monitored and supported to complete their programs with the best results. These supports can reduce formal warnings and expulsions from universities due to students’ poor performance. This study proposes a method to predict student performance using various deep learning techniques. We also analyze and present several techniques for data pre-processing (e.g., Quantile Transforms and MinMax Scaler) before feeding the data into well-known deep learning models such as Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNN) to perform the prediction tasks. Experiments are built on 16 datasets related to numerous different majors, with approximately four million samples collected from the student information system of a Vietnamese multidisciplinary university. Results show that the proposed method provides good prediction results, especially when using data transformation, and is feasible for application in practical cases.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_86-Deep_Learning_with_Data_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact Analysis of Network Layer Attacks in Real-Time Wireless Sensor Network Testbed</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110885</link>
        <id>10.14569/IJACSA.2020.0110885</id>
        <doi>10.14569/IJACSA.2020.0110885</doi>
        <lastModDate>2020-08-31T13:22:27.2700000+00:00</lastModDate>
        
        <creator>Navjot Sidhu</creator>
        
        <creator>Monika Sachdeva</creator>
        
        <subject>Attack; impact; performance; real-time; Wireless Sensor Network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>With the rapid increase in demand for Wireless Sensor Network (WSN) applications, intrusive activities have also risen. To protect these networks from intruders, it is necessary to understand the implications of any malicious act. Most researchers have utilized simulation software to understand the impact of such intrusions; however, real network conditions vary from the simulated environment. Therefore, the current work focuses on analyzing the impact of network layer attacks in a real-time WSN testbed. The contributions of this work are threefold. Firstly, it presents the deployment of a real-time experimental testbed using standardized sensor devices in a multi-hop topological arrangement. Secondly, it provides the implementation details of seven network layer attacks: Blackhole (BH), Dropping Node (DN), Drop Route Request (DRREQ), Drop Route Reply (DRREP), Drop Route Error (DRERR), Grayhole (GH) and Sinkhole (SH) in a single testbed. Finally, the testbed performance with and without each attack is monitored and compared in terms of network performance metrics to understand the attacks’ impact. This work will be helpful for the research community in proposing efficient attack detection and prevention solutions for these networks.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_85-Impact_Analysis_of_Network_Layer_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Complete Methodology for Kuzushiji Historical Character Recognition using Multiple Features Approach and Deep Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110884</link>
        <id>10.14569/IJACSA.2020.0110884</id>
        <doi>10.14569/IJACSA.2020.0110884</doi>
        <lastModDate>2020-08-31T13:22:27.2530000+00:00</lastModDate>
        
        <creator>Aravinda C. V</creator>
        
        <creator>Lin Meng</creator>
        
        <creator>ATSUMI Masahiko</creator>
        
        <creator>Udaya Kumar Reddy K.R</creator>
        
        <creator>Amar Prabhu G</creator>
        
        <subject>Kuzushiji character; zonal features; structural features; invariant moments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Over many decades, substantial research efforts have been devoted to character recognition. This task is not as easy as it appears; in fact, humans have an error rate of more than 6% when reading and recognizing handwritten characters. To address this problem, an effort has been made to apply multiple features for recognizing Kuzushiji characters, without any knowledge of the font family presented. At the outset, a pre-processing step that includes image binarization, noise removal and enhancement was applied. The second step was segmenting the page sample by applying a contour technique along with the convex hull method to detect individual characters. The third step was feature extraction, which included zonal features (ZF), structural features (SF) and invariant moments (IM). These feature vectors were passed for training and testing to various machine learning and deep learning models to classify and recognize the given character image sample. The accuracy achieved was about 85-90% on the dataset, which consisted of around 3929 classes and 392990 samples.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_84-A_Complete_Methodology_for_Kuzushiji_Historical_Character.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Forecasting Water Quality in IoT Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110883</link>
        <id>10.14569/IJACSA.2020.0110883</id>
        <doi>10.14569/IJACSA.2020.0110883</doi>
        <lastModDate>2020-08-31T13:22:27.2230000+00:00</lastModDate>
        
        <creator>Nguyen Thai-Nghe</creator>
        
        <creator>Nguyen Thanh-Hai</creator>
        
        <creator>Nguyen Chi Ngon</creator>
        
        <subject>Forecasting model; deep learning; Long-Short Term Memory (LSTM); water quality indicators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Global climate change and water pollution have caused many problems for fish/shrimp farmers; for example, shrimp/fish may die early, before harvest. Monitoring and managing water quality to help farmers tackle this problem is therefore very necessary. Water quality monitoring is important when developing IoT systems, especially for aquaculture and fisheries. By monitoring real-time sensor data indicators (such as salinity, temperature, pH, and dissolved oxygen - DO) and forecasting them to obtain early warnings, we can manage water quality and thus improve both the quality and quantity of shrimp/fish production. In this work, we introduce an architecture with a forecasting model for IoT systems to monitor water quality in aquaculture and fisheries. Since these indicators are collected every day, they become sequential/time series data, so we propose to use deep learning with the Long-Short Term Memory (LSTM) algorithm for forecasting them. Experimental results on several data sets show that the proposed approach works well and can be applied to real systems.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_83-Deep_Learning_Approach_for_Forecasting_Water_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending Shared-Memory Computations to Multiple Distributed Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110882</link>
        <id>10.14569/IJACSA.2020.0110882</id>
        <doi>10.14569/IJACSA.2020.0110882</doi>
        <lastModDate>2020-08-31T13:22:27.2070000+00:00</lastModDate>
        
        <creator>Waseem Ahmed</creator>
        
        <subject>GPU; OpenMP; shared memory programming; distributed programming; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>With the emergence of accelerators like GPUs, MICs and FPGAs, the availability of domain-specific libraries (like MKL) and the ease of parallelization associated with CUDA and OpenMP based shared-memory programming, node-based parallelization has recently become a popular choice among developers in the field of scientific computing. This is evident from the large volume of recently published work in various domains of scientific computing, where shared-memory programming and accelerators have been used to accelerate applications. Although these approaches are suitable for small problem sizes, there are issues that need to be addressed for them to be applicable to larger input domains. Firstly, the primary focus of these works has been to accelerate the core kernel; acceleration of input/output operations is seldom considered. Many operations in scientific computing operate on large matrices - both sparse and dense - that are read from and written to external files. These input-output operations present themselves as bottlenecks and significantly affect the overall application time. Secondly, node-based parallelization limits a developer from distributing the computation beyond a single node without having to learn an additional programming paradigm like MPI. Thirdly, the problem size that can be effectively handled by a node is limited by the memory of the node and accelerator. In this paper, an Asynchronous Multi-node Execution (AMNE) approach is presented that uses a unique combination of the shared-file system and pseudo-replication to extend node-based algorithms to a distributed multiple-node implementation with minimal changes to the original node-based code. We demonstrate this approach by applying it to GEMM, a popular kernel in dense linear algebra, and show that the presented methodology significantly advances the state of the art in the fields of parallelization and scientific computing.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_82-Extending_Shared_Memory_Computations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of a Graph-Theoretic Load Balancing Method for Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110881</link>
        <id>10.14569/IJACSA.2020.0110881</id>
        <doi>10.14569/IJACSA.2020.0110881</doi>
        <lastModDate>2020-08-31T13:22:27.1900000+00:00</lastModDate>
        
        <creator>Walaa M. AlShammari</creator>
        
        <creator>Mohammed J.F. Alenazi</creator>
        
        <subject>Data center; load balancing; path diversity; network management; load management; throughput; topology; performance metrics; betweenness centrality; flow scheduling; modeling; DCNs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Modern data centers can process a massive amount of data in a short time with minimal errors. Data center networks (DCNs) use equal-cost, multi-path topologies to deliver split flows across alternative paths between the core layer and hosted servers, which could lead to significant overload if path scheduling is inefficient. Thus, distributing incoming requests among these paths is crucial for providing higher throughput and protection against link or switch failures. Several approaches have been proposed for path selection, mainly relying on round-robin and least-congested methods. In this paper, we propose a load-balancing method based on betweenness centrality to improve the overall performance of a data center in terms of throughput, delay, and energy consumption. For evaluation, we compare our method with baseline methods of different DCN topologies: fat-tree, DCell, and BCube. On average, the evaluation results show that our method outperforms the others. It increases throughput by 202% and 33% while reducing delay by 20% and 22%, and energy consumption by 40% and 41% compared to the round-robin and least-congested methods, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_81-Performance_Analysis_of_a_Graph_Theoretic_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>xMatcher: Matching Extensible Markup Language Schemas using Semantic-based Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110880</link>
        <id>10.14569/IJACSA.2020.0110880</id>
        <doi>10.14569/IJACSA.2020.0110880</doi>
        <lastModDate>2020-08-31T13:22:27.1600000+00:00</lastModDate>
        
        <creator>Aola Yousfi</creator>
        
        <creator>Moulay Hafid El Yazidi</creator>
        
        <creator>Ahmed Zellou</creator>
        
        <subject>Schema matching; matching accuracy; semantic similarity; semantic relatedness; WordNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Schema matching is a critical step in data integration systems. Most recent schema matching systems require a manual double-check of the matching results to add missed matches and remove incorrect matches. Manual correction is labor-intensive and time-consuming; however, without it the accuracy of the results is significantly lower. In this paper, we present xMatcher, an approach to automatically match XML schemas. Given two schemas S1 and S2, xMatcher identifies semantically similar schema elements between S1 and S2. To obtain correct matches, xMatcher first transforms S1 and S2 into sets of words; then, it uses a context-based measure to identify the meanings of words in their contexts; next, it captures semantic relatedness between sets of words in different schemas; finally, it uses WordNet information to calculate the similarity values between semantically related sets and matches the pairs of sets whose similarity values are greater than or equal to 0.8. The results show that xMatcher provides superior matching accuracy compared to state-of-the-art matching systems. Overall, our proposal can be a stepping stone towards decreasing human assistance and overcoming the weaknesses of current matching initiatives in terms of matching accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_80-xMatcher_Matching_Extensible_Markup_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud of Things (CoT) based Parking Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110879</link>
        <id>10.14569/IJACSA.2020.0110879</id>
        <doi>10.14569/IJACSA.2020.0110879</doi>
        <lastModDate>2020-08-31T13:22:27.1430000+00:00</lastModDate>
        
        <creator>Nazish Razzaq</creator>
        
        <creator>Muhammad Asaad Subih</creator>
        
        <creator>Madiha Khatoon</creator>
        
        <creator>Amir Razi</creator>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Nimra Ashraf</creator>
        
        <creator>Tehseen Kausar</creator>
        
        <creator>Rashida Tarrar</creator>
        
        <creator>Muhammad Usman Sabir</creator>
        
        <creator>Syed Izaz ul Hassan Bukhari</creator>
        
        <subject>Cloud computing; internet of things (IoT); parking; prediction; availability; Estimated Time of Arrival (ETA); Geofence; Geographic Information System (GIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cloud computing combined with the Internet of Things (IoT) has given birth to a field called the Cloud of Things (CoT). CoT provides revolutionary services in every domain, but it has quickly become a rising star in smart transportation, because providing well-organized facilities is a challenge for the exponentially growing population of smart cities. Lack of management in transport can cause distress among people, and parking has become one of the critical issues faced by the public daily. In this paper, we present a parking availability prediction model implemented within a geo-fence ranging from 100-150 meters, based on cloud, IoT, and GIS. In contrast to present models, which simply report that no space is available or that parking is full, our model accurately determines the ETA of a vehicle and checks the likelihood of parking availability. It also calculates the time until the next parking space becomes free if the existing parking availability is zero. Moreover, our model includes exogenous factors such as weather and time zone conditions to improve prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_79-Cloud_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Computer Assisted Sleep Scoring Mechanism using ANN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110878</link>
        <id>10.14569/IJACSA.2020.0110878</id>
        <doi>10.14569/IJACSA.2020.0110878</doi>
        <lastModDate>2020-08-31T13:22:27.1130000+00:00</lastModDate>
        
        <creator>Hemu Farooq</creator>
        
        <creator>Anuj Jain</creator>
        
        <creator>V.K. Sharma</creator>
        
        <creator>Iflah Aijaz</creator>
        
        <creator>Sheikh Mohammad Idrees</creator>
        
        <subject>Sleep scoring stages; EEG; EMG; EOG; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Sleep analysis and the categorization of sleep stages in a sleep scoring system are considered helpful in the areas of sleep research and sleep medicine. This study employs a novel approach for a computer-assisted automated sleep scoring system using physiological signals and an artificial neural network. The recorded data comprised seven hours per subject, divided into 30-second epochs. The data procured from the physiological signals were processed to remove degenerated signals in order to extract the essential features used for the study. The human body produces its own electrical signals, known as artifacts, which need to be filtered out. In this study, signal filtering is achieved using a Butterworth low-pass filter. The extracted features were trained and classified using an artificial neural network classifier. Although this is a highly complicated concept, applying it in the biomedical field to electrical signals obtained from the body is novel. The accuracy estimated for the system was found to be good, and thus the procedure can be very helpful in clinics, particularly for neurologists diagnosing sleep disorders.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_78-A_Novel_Approach_for_Computer_Assisted_Sleep_Scoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Expansion using Lexical Ontology for Opinion Type Detection in Tourism Reviews Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110877</link>
        <id>10.14569/IJACSA.2020.0110877</id>
        <doi>10.14569/IJACSA.2020.0110877</doi>
        <lastModDate>2020-08-31T13:22:27.0970000+00:00</lastModDate>
        
        <creator>Lim Jie Chen</creator>
        
        <creator>Gan Keng Hoon</creator>
        
        <subject>Tourism domain; online review; opinion type detection; text classification; lexical ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Tourism review platforms such as TripAdvisor have become a major source for tourists to share their experiences and get ideas for decision making. Since millions of reviews are generated daily on travel websites, tourists are often overwhelmed by the volume of information. This is where opinion type detection is important, as it makes it easy for a tourist to obtain useful reviews for their understanding and planning processes based on the reviews’ opinion type. The opinion types of travel texts mostly involve different aspects of opinion related to the travel process, such as transportation, accommodation, price, food, entertainment, and so on. The challenge of this research is to improve this detection by proposing a lexical ontology approach to address the issue of out-of-vocabulary (OOV) keywords during supervised detection of opinion type. Besides, there are also cases where the training data for detection has poor coverage or is limited in a certain domain. In this paper, we propose a review opinion type detection approach by integrating a word (feature) expansion approach into machine learning. The suggested approach consists of two stages, namely feature expansion and classification. For feature expansion, a Lexical Ontology (LO) is used to expand feature-related words in the domain, such as synonyms. For classification, the expanded features are incorporated into the machine learning approach to detect the opinion type.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_77-Feature_Expansion_using_Lexical_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Securing Cloud Computing from DDOS Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110875</link>
        <id>10.14569/IJACSA.2020.0110875</id>
        <doi>10.14569/IJACSA.2020.0110875</doi>
        <lastModDate>2020-08-31T13:22:27.0670000+00:00</lastModDate>
        
        <creator>Ishtiaq Ahmed</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Sadeeq Jan</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <creator>Muneeb Saadat</creator>
        
        <creator>Rehan Ali Khan</creator>
        
        <creator>Khalid Zaman</creator>
        
        <subject>Cloud computing; denial of service; SNORT rules; network; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cloud computing (CC) is an advanced technology that provides data sharing and access to computing resources. The cloud deployment model represents the exact type of cloud environment based on ownership, size, and accessibility rights, and also describes the purpose and nature of the cloud. Since all processes today are computerized, consumers need a large amount of data and cache size. The security of the cloud is ensured at many levels, but the scope of intrusions makes it necessary to understand the factors that affect cloud security. CC-certified users rely on third parties for other important security issues in third-party computing clouds. A DDoS attack is a type of attack in which a large number of unnecessary packets is sent to the server, making it impossible for legitimate users to access it. In this research work, a DDoS attack was launched, and a tool for launching DDoS attacks was discussed. DDoS attacks were then blocked using three different SNORT rules. The rules predefined in SNORT profiles detect and prevent DDoS attacks, but because they block certain legitimate requests and generate false alarms, this should be the subject of future research.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_75-Towards_Securing_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Innovative Approach of Verification Mechanism for both Electronic and Printed Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110876</link>
        <id>10.14569/IJACSA.2020.0110876</id>
        <doi>10.14569/IJACSA.2020.0110876</doi>
        <lastModDate>2020-08-31T13:22:27.0670000+00:00</lastModDate>
        
        <creator>Md. Majharul Haque</creator>
        
        <creator>Md. Nasim Adnan</creator>
        
        <creator>Mohammod Akbar Kabir</creator>
        
        <creator>Mohammad Rifat Ahmmad Rashid</creator>
        
        <creator>Abu Sadat Mohammad Yasin</creator>
        
        <creator>Muhammad Shakil Pervez</creator>
        
        <subject>Document verification; integrity; non-repudiation; blockchain; printed documents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Documents are inevitably relevant in our day-to-day life. Forgery of documents can have severe repercussions, including financial losses, misjudgments, damage to respect and goodwill, etc. Hence, documents need to be secured from threats such as counterfeiting, falsification, tampering, etc., and there should be an easy way to verify the originality of documents. There are several existing methods for ensuring authenticity and integrity with modern technologies such as blockchain, digital signatures, etc. Most of these methods are not immediately appropriate for public usage due to their intricacy, excessive cost, and implementation problems, so an easy approach to verification is not yet available to the general public. In this situation, a method of document verification has been proposed in this paper which intends to provide (i) authenticity, (ii) integrity, (iii) availability, and (iv) non-repudiation. The proposed method will serve the general public, as it has no licensing fee and is easily implementable and effortlessly usable for both electronic and printed documents. It is worth mentioning that the proposed method provides a mechanism to confirm the originality of a document in no time, using only a smartphone.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_76-An_Innovative_Approach_of_Verification_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prudently Secure Information Theoretic LSB Steganography for Digital Grayscale Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110874</link>
        <id>10.14569/IJACSA.2020.0110874</id>
        <doi>10.14569/IJACSA.2020.0110874</doi>
        <lastModDate>2020-08-31T13:22:27.0370000+00:00</lastModDate>
        
        <creator>Khan Farhan Rafat</creator>
        
        <subject>Clandestine communication; covert channel; hiding data in plain sight; inveil communication; LSB steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The danger of online data breaches calls for exploring new, and enhancing existing, covert ways of clandestine communication, tailored to present and future technological and environmental needs, to which malicious intruders would have no answer. Cryptography and Steganography are two distinct techniques that have long remained priority choices for hiding vital information from the unauthorized. But the visibility of encrypted contents makes them vulnerable to attack. Also, the recent legislative powers granted to law enforcement authorities in Australia to access pre-shared cryptographic secret keys (PSKs) will have a devastating impact on the privacy of the people. Hence, the need of the hour is to veil the encrypted data underneath the cover of Steganography, whose sole intent is to hide the very existence of information. This research endeavor enhances one of the most famous image Steganography techniques, Least Significant Bit (LSB) Steganography, from a security and information-theoretic standpoint, by considering a known-cover and known-message attack scenario. The explicit proclamation of this research endeavor is that the security of LSB Steganography lies in inducing uncertainty during the bit embedding process. The test results rendered by the proposed methodology confirm the non-detectability and imperceptibility of the confidential information, along with its strong resistance against LSB Steganalysis techniques.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_74-Prudently_Secure_Information_Theoretic_LSB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Integrated Model of Data Governance and Integration for the Implementation of Digital Transformation Processes in the Saudi Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110873</link>
        <id>10.14569/IJACSA.2020.0110873</id>
        <doi>10.14569/IJACSA.2020.0110873</doi>
        <lastModDate>2020-08-31T13:22:27.0200000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Ahmed almaghthawi</creator>
        
        <subject>COVID-19; data governance; digital transformation; higher education; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In the face of the challenges of the Digital Age and the outbreak of the COVID-19 pandemic, which have changed higher education institutions remarkably, universities are urgently required to speed up their digitalization initiatives and cope with global digital developments in order to survive. For the successful implementation of digital transformation, however, data governance should be considered. Despite the extensive literature on data governance and digital transformation, there is no focus on this issue in Saudi Higher Education institutions. Accordingly, the current study investigates data governance policies and practices in Saudi universities. This study is built on a case study design. Nine universities in Saudi Arabia were selected for the purpose of the study, including the public, community, and private universities that make up the Higher Education system in Saudi Arabia. For data collection purposes, three methods were selected: a survey, focus group discussions, and in-depth interviews. The findings of this study indicate that data governance is an effective tool in the implementation of digital transformation processes in higher education institutions and thus should be embedded into universities’ strategies for using digital technologies in appropriate manners. Good data governance practices are required for smooth and effective digital transformation. Universities are required to create an effective functional team for data governance tasks, develop an internal audit of data governance, follow up on regulatory compliance procedures, define the priorities of data governance activities, provide frequent data governance training for employees and faculty members, set enforcement and follow-up standards, and conduct frequent assessments of data governance plans and policies. Although the study is limited to Saudi universities, its results and implications can be a useful reference for choosing effective data governance practices that can be successfully adopted and implemented to manage critical information and use it to transform day-to-day operations.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_73-Towards_an_Integrated_Model_of_Data_Governance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Clustering Algorithm for Live Road Surveillance on Highways based on DBSCAN and Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110872</link>
        <id>10.14569/IJACSA.2020.0110872</id>
        <doi>10.14569/IJACSA.2020.0110872</doi>
        <lastModDate>2020-08-31T13:22:27.0030000+00:00</lastModDate>
        
        <creator>Hasanain Alabbas</creator>
        
        <creator>&#193;rp&#225;d Husz&#225;k</creator>
        
        <subject>Vehicular ad hoc networks (VANETs); V2V; intelligent transportation systems; clustering algorithms; road surveillance; DBSCAN algorithm; fuzzy logic control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Video streaming over Vehicular Ad Hoc Networks (VANETs) is a promising technique that has gained great importance in the last few years. The highly dynamic topology of VANETs makes high-quality video streaming very challenging. In order to provide the most useful camera views to the vehicles, clustering and cluster head selection techniques are used. Too frequent camera view changes can be annoying; therefore, we propose a new stable clustering algorithm to ensure a stable live road surveillance service without interruptions for vehicles that do not have a sufficient vision area. In the proposed solution, we integrate Density-Based Spatial Clustering of Applications with Noise (DBSCAN) with Fuzzy Logic Control (FLC). DBSCAN is used to form the clusters, while FLC is used to find the best cluster head for each cluster. Different parameters are utilized: density parameters for DBSCAN, and relative speed, acceleration, leadership degree, and vision area for fuzzy logic. Compared with another clustering scheme, our proposed algorithm showed better results in terms of cluster lifetime and vehicle status change, demonstrating its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_72-A_New_Clustering_Algorithm_for_Live_Road.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Quality Switch-aware Framework for Optimal Bitrate Video Streaming Delivery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110871</link>
        <id>10.14569/IJACSA.2020.0110871</id>
        <doi>10.14569/IJACSA.2020.0110871</doi>
        <lastModDate>2020-08-31T13:22:26.9900000+00:00</lastModDate>
        
        <creator>Wafa A. Alqhtani</creator>
        
        <creator>Ashraf A. Taha</creator>
        
        <creator>Maazen S. Alsabaan</creator>
        
        <subject>Adaptive video streaming; average bit rate; mobile devices; modeling; quality of experience; quality switches; wireless networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Given the large number of online video viewers, video streaming over various networks is an important communication technology. The multitude of viewers makes it challenging for service providers to deliver a good viewing experience to subscribers. Video streaming capabilities are designed around concepts including quality, viewing flexibility, changing network conditions, and the specifications of different customer devices. Adjusting the quality levels and controlling the relevant parameters to stream video content with good quality and without interruption is vital. This paper proposes an adaptive framework to balance the average video bitrate against appropriate quality switches, making the transition to higher switches more seamless. The quality adaptation scheme increases the bitrate to its maximum value at the current quality switch before shifting to a higher level. This reduces switching between levels, guarantees viewing stability, and avoids interruptions. The use of a dynamic system ensures optimal performance by controlling system parameters and making the algorithm more tunable. We built the system using an open-source DASH library (libdash) with the QuickTime player and studied the effect of video load changes on two performance parameters, CPU and memory usage, that have a high impact on multimedia quality. Consequently, the values of the parameters that affect the performance of video streaming could be decreased, permitting users to regulate the parameters according to their preferences. Further, reducing the switching levels reduces the overhead that occurs while transferring from one level to another.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_71-An_Adaptive_Quality_Switch_Aware_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intermediate Representation-based Approach for Query Translation using a Syntax-Directed Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110870</link>
        <id>10.14569/IJACSA.2020.0110870</id>
        <doi>10.14569/IJACSA.2020.0110870</doi>
        <lastModDate>2020-08-31T13:22:26.9570000+00:00</lastModDate>
        
        <creator>Hassana NASSIRI</creator>
        
        <creator>Mustapha MACHKOUR</creator>
        
        <creator>Mohamed HACHIMI</creator>
        
        <subject>Data Model; Relational Database; eXtensible Markup Language (XML); translation; model integration; intermediate representation; ANTLR (ANother Tool for Language Recognition)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In our research, we aspire to make a single query sufficient to extract data regardless of the underlying data model. In this way, users can freely use any query language they master to interrogate a heterogeneous database, not necessarily the query language associated with the model, thus overcoming the need to deal with multiple query languages, which is usually unwelcome for non-expert and even expert users. To do so, we propose a new translation approach relying on an intermediate query language to convert the user query into a suitable query language according to the nature of the data being interrogated. This is more beneficial than repeating the whole translation process for each new query submission. It also makes the system modular, divided into multiple, more flexible, and less complicated components, and therefore increases the possibilities for performing independent transformations and switching between several query languages efficiently. By using our system, querying each data model with the corresponding query language is no longer bothersome. As a start, we cover the eXtensible Markup Language (XML) and relational data models, whether native or hybrid. Users can retrieve data sources over these models using just one query, expressed in either the XML Path Language (XPath) or the Structured Query Language (SQL).</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_70-An_Intermediate_Representation_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Research Challenges, Limitations and Practical Solutions for Underwater Wireless Power Transfer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110869</link>
        <id>10.14569/IJACSA.2020.0110869</id>
        <doi>10.14569/IJACSA.2020.0110869</doi>
        <lastModDate>2020-08-31T13:22:26.9400000+00:00</lastModDate>
        
        <creator>Syed Agha Hassnain Mohsan</creator>
        
        <creator>Asad Islam</creator>
        
        <creator>Mushtaq Ali Khan</creator>
        
        <creator>Arfan Mahmood</creator>
        
        <creator>Laraba Selsabil Rokia</creator>
        
        <creator>Alireza Mazinani</creator>
        
        <creator>Hussain Amjad</creator>
        
        <subject>Underwater wireless power transfer; charging; MIMO; AUV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Wireless power transfer (WPT) is the process of transmitting electrical energy without wires or any physical link. WPT is mainly used where it is difficult to transfer energy using conventional wires. In this work, we investigate the need for and feasibility of wireless power transfer for underwater applications. This paper outlines the research challenges, limitations, and practical considerations of underwater wireless power transfer (UWPT). Recent research has focused on WPT in air; however, WPT remains a challenging task in the underwater environment. We present a review of previous research on UWPT and provide a comparison of the different techniques implemented for wireless power transfer. This paper also proposes the idea of MIMO wireless power transmission for charging autonomous underwater vehicles (AUVs). We elaborate on the capabilities and limitations of wireless power transfer systems in underwater media, as the stochastic nature of the ocean is a major challenge for wireless power transmission, and we also address design challenges and the effects of seawater on UWPT systems.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_69-A_Review_on_Research_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Palmprint based Biometric System Performance using Novel Multispectral Image Fusion Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110868</link>
        <id>10.14569/IJACSA.2020.0110868</id>
        <doi>10.14569/IJACSA.2020.0110868</doi>
        <lastModDate>2020-08-31T13:22:26.9270000+00:00</lastModDate>
        
        <creator>Essia Thamri</creator>
        
        <creator>Kamel Aloui</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <subject>Biometric recognition; palmprint; multispectral images; image fusion; Log-Gabor; KPCA; DWT; CASIA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Nowadays, there are several identification systems based on different biometric modalities. In particular, multispectral images of palmprints captured in different spectral bands provide a very distinctive biometric identifier. This paper proposes a novel fusion scheme for a multispectral palmprint biometric recognition system. The system is composed of three blocks: (1) extraction of the region of interest (ROI) from the multispectral images, (2) a new image fusion architecture based on the measurement of decorrelation, and (3) a dimension reduction and classification scheme. The proposed image fusion system combines the information from the left and right palmprints of the same spectral band using the 2D discrete wavelet transform (DWT). In addition, feature extraction using the Log-Gabor transform is performed, while the feature size is reduced using Kernel Principal Component Analysis (KPCA). In our experiments, we use the CASIA multispectral palmprint database. We obtained an accuracy rate (ACC) of 99.50% for the WHT (white light) and 940 nm spectral bands and an equal error rate (EER) of 0.05%. These results show that our system is robust against spoofing.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_68-Improving_Palmprint_based_Biometric_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Access Control Model for Cloud Computing Environment with Fuzzy Max Interval Trust Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110867</link>
        <id>10.14569/IJACSA.2020.0110867</id>
        <doi>10.14569/IJACSA.2020.0110867</doi>
        <lastModDate>2020-08-31T13:22:26.9100000+00:00</lastModDate>
        
        <creator>Aakib Jawed Khan</creator>
        
        <creator>Shabana Mehfuz</creator>
        
        <subject>Cloud computing; encryption; fuzzy logic; trust computing; role based access control; resource management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cloud computing needs service providers with reliable communication to increase user trust. As the existence of the cloud depends on the quality of its services, this trust value needs to be evaluated by the cloud. Many web services provided by e-commerce sites, social sites, and digital platforms maintain user faith by estimating the reliability of the service provider. This paper focuses on a model that can identify real nodes in the cloud by their behavior. Fuzzy max interval values are evaluated from the transactional behavior of each node over a fixed interval. As the transaction count increases, the trust value of real nodes increases and the trust value of malicious nodes decreases. The work is based on Role Based Access Control (RBAC), which has three types of roles (Admin, Data owner, Node). Data owner content security is achieved with the AES algorithm, and only trusted nodes can access those resources. Experiments were performed through simulations in both ideal and under-attack environments. Analysis of the evaluation parameter values shows that the proposed fuzzy max interval trust model outperforms the existing Domain Partition Trust Model (DPTM) in identifying malicious nodes.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_67-A_Secure_Access_Control_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Tutoring Supported Collaborative Learning (ITSCL): A Hybrid Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110866</link>
        <id>10.14569/IJACSA.2020.0110866</id>
        <doi>10.14569/IJACSA.2020.0110866</doi>
        <lastModDate>2020-08-31T13:22:26.8800000+00:00</lastModDate>
        
        <creator>Ijaz Ul Haq</creator>
        
        <creator>Aamir Anwar</creator>
        
        <creator>Iqra Basharat</creator>
        
        <creator>Kashif Sultan</creator>
        
        <subject>Intelligent Tutoring System (ITS); Computer-Supported Collaborative Learning (CSCL); Artificial Intelligence (AI); individual learning; collaborative learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Recently, Intelligent Tutoring Systems (ITS) and Computer-Supported Collaborative Learning (CSCL) have received much attention in the fields of computer science, artificial intelligence, cognitive psychology, and educational technologies. An ITS is a technologically intelligent system that provides an adaptive learning paradigm for an individual learner only, while CSCL is a technology-driven learning paradigm that supports groups of learners in acquiring knowledge through collaboration. In the multidisciplinary research field of the Learning Sciences, both individual and collaborative learning have their own significance. This research aims to extend ITS toward a collaborative constructivist view of learning using CSCL. Integrating the design architectures of both CSCL and ITS, this research proposes a new conceptual framework, “Intelligent Tutoring Supported Collaborative Learning (ITSCL)”. ITSCL extends ITS by supporting interaction among multiple learners at three different levels. The first level supports individual learning through learner-tutor interaction. The second and third levels support collaborative learning through learner-learner interaction and tutor interaction with a group of collaborative learners, respectively. To evaluate ITSCL, a prototype model was implemented and a few experiments were conducted. The statistical results, based on paired t-tests and frequency analysis, show a significant learning gain and an improvement in the learning process with enhanced learning performance.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_66-Intelligent_Tutoring_Supported_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Violent Radical Accounts on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110865</link>
        <id>10.14569/IJACSA.2020.0110865</id>
        <doi>10.14569/IJACSA.2020.0110865</doi>
        <lastModDate>2020-08-31T13:22:26.8630000+00:00</lastModDate>
        
        <creator>Ahmed I. A. Abd-Elaal</creator>
        
        <creator>Ahmed Z. Badr</creator>
        
        <creator>Hani M. K. Mahdi</creator>
        
        <subject>Machine learning; ISIS; Daesh; extremism; data mining; social media; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In the past few years, as a result of the enormous worldwide spread of social media platforms, many radical groups have tried to invade the social media cyberspace in order to disseminate their ideologies and destructive plans. This brutal invasion of the daily life of society must be resisted, as social media networks are interacted with on a daily basis. Violent radical groups such as ISIS have developed well-designed propaganda strategies that enable them to recruit members and supporters all over the world using social media facilities. It is therefore crucial to find an efficient way to detect violent radical accounts on social media networks. In this paper, an intelligent system that autonomously detects the ISIS online community on the Twitter social media platform is proposed. The proposed system analyzes both linguistic features and behavioral features such as hashtags, mentions, and who the accounts follow. The system consists of two main subsystems, namely the crawling and the inquiring subsystems. The crawling subsystem uses the initially known ISIS-related accounts to establish an ISIS-account detector, while the inquiring subsystem aims to detect pro-ISIS accounts.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_65-Detecting_Violent_Radical_Accounts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Kinect Technology and Artificial Neural Networks in the Control of Rehabilitation Therapies in People with Knee Injuries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110864</link>
        <id>10.14569/IJACSA.2020.0110864</id>
        <doi>10.14569/IJACSA.2020.0110864</doi>
        <lastModDate>2020-08-31T13:22:26.8330000+00:00</lastModDate>
        
        <creator>Bisset Gonzales Loayza</creator>
        
        <creator>Alberto Calla Bendita</creator>
        
        <creator>Mario Huaypuna Cjuno</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Machine learning; artificial neural network; kinect; physiotherapy; rehabilitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In the field of physiotherapy, the recognition of human body poses is attracting more research so that patients can achieve an accelerated recovery rate in their rehabilitation. Nowadays, devices like the Microsoft Kinect make it much easier to interact with the user for the recognition of poses and body gestures. The objective of this work is to capture the data of the joints of a person&#39;s body as a set of angles using the Kinect device; artificial neural networks with the Back-Propagation algorithm were then used for machine learning, and their precision was determined. The results on the performance of the neural network show that 99.70% accuracy was achieved in the classification of the patients&#39; postures, which can be used as an alternative in the rehabilitation therapies of patients with knee injuries.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_64-Application_of_Kinect_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Mobile Telecom Network Analysis using Big Data Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110863</link>
        <id>10.14569/IJACSA.2020.0110863</id>
        <doi>10.14569/IJACSA.2020.0110863</doi>
        <lastModDate>2020-08-31T13:22:26.8170000+00:00</lastModDate>
        
        <creator>M. M. Abo Khedra</creator>
        
        <creator>A. A. Abd EL-Aziz</creator>
        
        <creator>Hedi HAMDI</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>SNA; influencer; acquisition; community detection; link prediction; call detailed record; on-net node; off-net node</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Social Network Analysis measures the interconnections between humans, entities, or communities and the flow of messages between them. This kind of analysis studies the relationships between different people in great depth; it shows how one node (subscriber) in the network can affect the others. This research studies the connections between customers in several ways to help a telecom operator increase the cross- and up-selling of its products and services: detecting communities of subscribers, which are groups of nodes collected together; identifying the connection types and labeling the links between customers as business, friends, family, or others; identifying the top influencers in the network, who can spread positive or negative messages about the products and services provided by the company through communities in the network; and determining off-net customers who can be targeted by specific marketing campaigns for acquisition. A real cell phone dataset of 116 million call detail records of SMS and voice calls from an Egyptian Communication Service Provider (CSP) is used.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_63-A_Novel_Framework_for_Mobile_Telecom.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Dominant Factor for Academic Performance Prediction using Feature Selection Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110862</link>
        <id>10.14569/IJACSA.2020.0110862</id>
        <doi>10.14569/IJACSA.2020.0110862</doi>
        <lastModDate>2020-08-31T13:22:26.7870000+00:00</lastModDate>
        
        <creator>Phauk Sokkhey</creator>
        
        <creator>Takeo Okazaki</creator>
        
        <subject>Educational data mining; dominant factors; feature selection methods; prediction models; student performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Educational institutions always try to investigate the learning behaviors of students and give early predictions of student outcomes in order to intervene in and improve their learning performance. Educational data mining (EDM) offers various effective prediction models for predicting student performance. At the same time, feature selection (FS) is an EDM method used to determine the dominant factors that are necessary and sufficient for the target concept. FS extracts high-quality data, reducing the complexity of the prediction task and increasing the robustness of the decision rules. In this paper, we provide a comparative study of feature selection methods for determining the dominant factors that most strongly affect classification performance and improve the performance of prediction models. A new feature selection method, CHIMI, based on ranked vector scores is proposed. The feature sets of each FS method are analyzed to obtain the dominant set. The experimental results show that using the dominant set of the proposed CHIMI method significantly improves the classification performance of the proposed models.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_62-Study_on_Dominant_Factor_for_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Hate Speech Detection using Machine Learning: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110861</link>
        <id>10.14569/IJACSA.2020.0110861</id>
        <doi>10.14569/IJACSA.2020.0110861</doi>
        <lastModDate>2020-08-31T13:22:26.7870000+00:00</lastModDate>
        
        <creator>Sindhu Abro</creator>
        
        <creator>Sarang Shaikh</creator>
        
        <creator>Zahid Hussain Khand</creator>
        
        <creator>Zafar Ali</creator>
        
        <creator>Sajid Khan</creator>
        
        <creator>Ghulam Mujtaba</creator>
        
        <subject>Hate speech; online social networks; natural language processing; text classification; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The increasing use of social media and information sharing has brought major benefits to humanity. However, it has also given rise to a variety of challenges, including the spreading and sharing of hate speech messages. To address this emerging issue on social media sites, recent studies have employed a variety of feature engineering techniques and machine learning algorithms to automatically detect hate speech messages on different datasets. However, to the best of our knowledge, no study has compared a variety of feature engineering techniques and machine learning algorithms to evaluate which combination performs best on a standard, publicly available dataset. Hence, the aim of this paper is to compare the performance of three feature engineering techniques and eight machine learning algorithms on a publicly available dataset with three distinct classes. The experimental results showed that bigram features used with the support vector machine algorithm performed best, with 79% overall accuracy. Our study holds practical implications and can be used as a baseline in the area of automatic hate speech detection. Moreover, the output of the different comparisons can serve as a state-of-the-art reference against which future research on automated text classification can be compared.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_61-Automatic_Hate_Speech_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lean IT Transformation Plan for Information Systems Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110860</link>
        <id>10.14569/IJACSA.2020.0110860</id>
        <doi>10.14569/IJACSA.2020.0110860</doi>
        <lastModDate>2020-08-31T13:22:26.7530000+00:00</lastModDate>
        
        <creator>Muhammad K. A. Kiram</creator>
        
        <creator>Maryati Mohd Yusof</creator>
        
        <subject>Failure; lean IT; information systems; information systems development; socio-technical; waste</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Information systems development (ISD) is prone to failure, a time-consuming and costly phenomenon that delivers value not directly appealing to clients. While ISD can be enhanced using various tools, models, and frameworks, failures related to ISD remain evident and costly. These failures are related to human, organizational, and technological factors and to waste in ISD. This study identifies the information system (IS) success criteria and the factors that contribute to ISD waste. A qualitative case study was conducted in an ICT research unit, using interview, observation, and document analysis techniques to analyze the IS success criteria, leanness level, and waste. Findings show that lean IT approaches and IS success criteria can be combined to develop a holistic transformation plan for organizational ISD. This transformation plan can potentially assist IS developers in delivering high-value IS while driving organizational growth towards the fourth industrial revolution.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_60-Lean_IT_Transformation_Plan_for_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CAREdio: Health Screening and Heart Disease Prediction System for Rural Communities in the Philippines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110859</link>
        <id>10.14569/IJACSA.2020.0110859</id>
        <doi>10.14569/IJACSA.2020.0110859</doi>
        <lastModDate>2020-08-31T13:22:26.7400000+00:00</lastModDate>
        
        <creator>Lean Karlo S. Tolentino</creator>
        
        <creator>John Erick L. Isoy</creator>
        
        <creator>Kayne Adriane A. Bulawan</creator>
        
        <creator>Mary Claire T. Co</creator>
        
        <creator>Caryl Faye C. Monreal</creator>
        
        <creator>Ian Joshua W. Vitto</creator>
        
        <creator>Maria Victoria C. Padilla</creator>
        
        <creator>Jay Fel C. Quijano</creator>
        
        <creator>Romeo Jr. L. Jorda</creator>
        
        <creator>Jessica S. Velasco</creator>
        
        <subject>Cardiovascular diseases; health screening; disease prediction; mobile application; machine learning; rural population</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cardiovascular diseases account for a large share of the worldwide disease burden, making them the leading cause of death. In the Philippines, despite rapid economic advancement and urbanization, the most vulnerable sector has not benefited from this development. Data from the Philippine Statistics Authority (PSA) in 2016 revealed that six out of ten of the country’s total recorded deaths were medically unattended, and the largest portion of these came from the rural population. Medical analysis therefore needs to be performed effectively and precisely; however, most developing countries have limited resources and lack medical experts in specialized fields such as cardiology. This work seeks to address the issues of the Philippine health sector, specifically for rural and remote populations, by implementing an efficient and low-cost health screening and disease prediction system that uses commercially available medical devices and machine learning algorithms to predict three of the most common heart-related diseases (hypertension, heart attack, and diabetes). The system is composed of the CAREdio mobile app, prototype hardware consisting of different health sensors and devices, and a machine learning model that estimates the user’s individual probability of having a specific heart disease. The machine learning models were trained using data gathered from Rosario Reyes Health Center and Ospital ng Sampaloc (Sampaloc Hospital), both located in Manila City, Philippines. CAREdio achieves accuracy values over 0.80 for all diseases. The system can diagnose multiple cardiovascular diseases in a single app, benefiting people in rural communities.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_59-CAREdio_Health_Screening_and_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Body Movements and Stability of Blind or Visually Impaired Adults by Physical Activity using Kinect V2</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110858</link>
        <id>10.14569/IJACSA.2020.0110858</id>
        <doi>10.14569/IJACSA.2020.0110858</doi>
        <lastModDate>2020-08-31T13:22:26.7230000+00:00</lastModDate>
        
        <creator>Marwa Bouri</creator>
        
        <creator>Ali Khalfallah</creator>
        
        <creator>Med Salim Bouhlel</creator>
        
        <subject>Posture; visual impaired; physical exercise; audio feedback; Kinect; body tracking; balance; falling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>People who are blind or have low vision need to follow activity routines for their mental and physical health to minimize the risk of suffering from bleeding in the joints, but they face problems due to the difficulty and inaccessibility of moving around. This paper introduces and evaluates a set of exercises to improve body movement and stability using body tracking with the Microsoft Kinect V2 and audio feedback. These exercises are composed of a sequence of different postures, provide personalized audio feedback to help users understand each gesture and correct it when it is performed incorrectly, and generate a summary graph to evaluate the success rate of the exercises. To obtain the 3D joint coordinates from the depth sensor, we used SDK V2.0 of the Microsoft Kinect. We use these coordinates to calculate the distances and angles between joints of interest, first to position the user within the field of view of the Kinect sensor, then to evaluate the different postures and movements of the knees, elbows, and shoulders, and finally to detect whether the body is leaning and in which direction, to prevent falling. These physical exercises were evaluated for feasibility and feedback with persons who are blind or have low vision.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_58-Improvement_of_Body_Movements_and_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Extraction of Rarely Explored Materials and Methods Sections from Research Journals using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110857</link>
        <id>10.14569/IJACSA.2020.0110857</id>
        <doi>10.14569/IJACSA.2020.0110857</doi>
        <lastModDate>2020-08-31T13:22:26.7070000+00:00</lastModDate>
        
        <creator>Kavitha Jayaram</creator>
        
        <creator>Prakash G</creator>
        
        <creator>Jayaram V</creator>
        
        <subject>Data-mining; rule-based; machine-learning; term extraction; classification; materials and methods; acknowledgment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The scientific community is expanding by leaps and bounds every day owing to the pioneering and path-breaking scientific literature published in journals around the globe. Viewing as well as retrieving this data is a challenging task in today’s fast-paced world. For the expert, the essence and importance of scientific research papers lie in their experimental and theoretical results, along with the sanctioned research projects from organizations. Since scant work has been done in this direction, the alternative is to explore text mining using machine learning techniques. Myriad journals are available on materials research, throwing light on a gamut of materials, synthesis methods, and characterization methods used to study the properties of materials. Because materials have many diversified areas of application, we selected papers from the “Journal of Materials Science”, whose “Materials and Methods” sections contain the names of methods, characterization techniques (instrumental methods), algorithms, images, etc., used in the research work. The “Acknowledgment” section conveys information about authors’ proximity and collaborations with organizations, which again has not been explored for the citation network. In the present work, our attempt is to derive a means to automatically extract the methods and terminologies used in characterization techniques, along with author and organization data, from the “Materials and Methods” and “Acknowledgment” sections using machine learning techniques. Another goal of this research is to provide a dataset of characterization terms, a classification, and an extended version of the existing citation network for materials research. The complete dataset will help new researchers select research work and find new domains and techniques to solve advanced scientific research problems.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_57-Automatic_Extraction_of_Rarely_Explored_Materials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Flood Forecasting Model based on Wireless Sensor and Actor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110856</link>
        <id>10.14569/IJACSA.2020.0110856</id>
        <doi>10.14569/IJACSA.2020.0110856</doi>
        <lastModDate>2020-08-31T13:22:26.6900000+00:00</lastModDate>
        
        <creator>Sheikh Tahir Bakhsh</creator>
        
        <creator>Naveed Ahmed</creator>
        
        <creator>Basit Shahzad</creator>
        
        <creator>Mohammed Basheri</creator>
        
        <subject>Flood forecasting; GIS; remote sensing; hydrology; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Flood forecasting is a challenging area of research that can help save precious lives through timely warnings of possible floods. Advancements in computing and allied technologies have moved this research towards a new horizon. Researchers all over the world are using Artificial Neural Network (ANN), Geographic Information System (GIS), and Wireless Sensor Network (WSN) based schemes for flash flood forecasting and hydrological risk analysis. ANN- and GIS-based solutions are costly, whereas analysis and prediction using WSNs require much lower infrastructure deployment costs, enabling the use of flood prediction mechanisms in developing and poor countries. Variation in storage capacity can be a vital factor in eliminating or reducing the chance of flood. Based on this observation, we propose a generic flood prediction scheme that can manage the system according to available resources and environmental conditions. A heterogeneous WSN is considered in which powerful Collector Nodes (CNs) continuously take readings from the member sensor nodes in their region. A CN transmits alerts to the Administration Unit (AU) when threshold values are crossed. Threshold values for all water sources, such as storage capacity, water inflow, and outflow, are stored in a repository for decision making. Moreover, environmental parameters including water level, humidity, temperature, air pressure, rainfall, and soil moisture are considered for the analysis of flood prediction. We also evaluate these parameters against specified boundary values, which existing schemes have not considered. In this study, we used ArcGIS and NS2 simulation tools to analyze the parameters and predict the probability of the occurrence of a flood.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_56-A_Flood_Forecasting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Ant Colony Optimization Algorithm: A Technique for Extending Wireless Sensor Networks Lifetime Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110855</link>
        <id>10.14569/IJACSA.2020.0110855</id>
        <doi>10.14569/IJACSA.2020.0110855</doi>
        <lastModDate>2020-08-31T13:22:26.6770000+00:00</lastModDate>
        
        <creator>Ademola P. Abidoye</creator>
        
        <creator>Elisha O. Ochola</creator>
        
        <creator>Ibidun C. Obagbuwa</creator>
        
        <creator>Desmond W. Govender</creator>
        
        <subject>Sensor nodes; advanced nodes; fog nodes; data centre; cloud computing; ant colony optimization; visual sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Wireless sensor networks (WSNs) are one of the most essential technologies of the 21st century due to their growing range of application areas and their ability to be deployed where cabling and power supply are difficult to provide. However, the sensor nodes that form these networks are energy-constrained because they are powered by small non-rechargeable batteries. Thus, it is imperative to design a routing protocol that is energy-efficient and reliable in order to extend network lifetime utilization. In this article, we propose an improved ant colony optimization algorithm, a technique for extending wireless sensor network lifetime utilization, called AMACO. We present a new clustering method that avoids the overhead usually involved in electing cluster heads in previous approaches, as well as energy holes within the network. Moreover, fog computing is integrated into the scheme due to its ability to optimize the limited power source of WSNs and to scale up to the requirements of Internet of Things applications. All data packets received by the fog nodes are transmitted to the cloud for further analysis and storage. An improved ant colony optimization (ACO) algorithm is used to construct optimal paths between the cluster heads and fog nodes for reliable end-to-end data packet delivery. The simulation results show that the network lifetime in AMACO increased by 22.0%, 30.7%, and 32.0% compared with EBAR, IACO-MS, and RRDLA, respectively, before the first node dies (FND); by 15.2%, 18.4%, and 33.5%, respectively, before half the nodes die (HND); and by 28.2%, 24.9%, and 58.9%, respectively, before the last node dies (LND).</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_55-An_Improved_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lake Data Warehouse Architecture for Big Data Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110854</link>
        <id>10.14569/IJACSA.2020.0110854</id>
        <doi>10.14569/IJACSA.2020.0110854</doi>
        <lastModDate>2020-08-31T13:22:26.6600000+00:00</lastModDate>
        
        <creator>Emad Saddad</creator>
        
        <creator>Ali El-Bastawissy</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <creator>Maryam Hazman</creator>
        
        <subject>Traditional data warehouse; big data; semi-structured data; unstructured data; novel data warehouses architecture; Hadoop; spark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>A traditional Data Warehouse is a multidimensional repository of nonvolatile, subject-oriented, integrated, time-variant, non-operational data gathered from multiple heterogeneous data sources. Traditional Data Warehouse architecture must be adapted to deal with the new challenges imposed by the abundance of data and the current big data characteristics, comprising volume, value, variety, validity, volatility, visualization, variability, and venue. The new architecture also needs to address existing drawbacks, including availability, scalability, and consequently query performance. This paper introduces a novel Data Warehouse architecture, named Lake Data Warehouse Architecture, that equips the traditional Data Warehouse with the capabilities to overcome these challenges. Lake Data Warehouse Architecture merges the traditional Data Warehouse architecture with big data technologies, such as the Hadoop framework and Apache Spark, providing a hybrid solution in a complementary way. The main advantage of the proposed architecture is that it integrates the current features of traditional Data Warehouses with big data features acquired through integration with the Hadoop and Spark ecosystems. Furthermore, it is tailored to handle a tremendous volume of data while maintaining availability, reliability, and scalability.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_54-Lake_Data_Warehouse_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Automatic Damaged Street Light Fault Detection Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110853</link>
        <id>10.14569/IJACSA.2020.0110853</id>
        <doi>10.14569/IJACSA.2020.0110853</doi>
        <lastModDate>2020-08-31T13:22:26.6430000+00:00</lastModDate>
        
        <creator>Ashok Kumar Nanduri</creator>
        
        <creator>Siva Kumar Kotamraju</creator>
        
        <creator>G L Sravanthi</creator>
        
        <creator>Sadhu Ratna Babu</creator>
        
        <creator>K V K V L Pavan Kumar</creator>
        
        <subject>IoT (Internet of Things); GSM (Global System for Mobile); LDR (Light Dependent Resistor); LED (Light-Emitting Diode); GPS (Global Positioning System); Raspberry; Twilio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The IoT (Internet of Things) is a blooming technology that concentrates on the interconnection of devices or components with one another and with people. Over time, many of these connections are shifting from “Human to Device” to “Device to Device”. Automatically finding faulty street lights has become a vital milestone enabled by this technology. The primary goal of this project is to provide automatic control and identification of damaged street lights: a lighting system that targets energy-efficient, automatic, and economically affordable operation for streets, with an immediate information response about street light faults. In general, damage to a street light is observed only after complaints from the people of the street, whereas in the proposed work the working status of these lights is captured by sensors without any manual interaction, reducing manual effort and the delay in fixing problems. The system automatically detects street light issues at night, i.e., whether a street light is working or not, sends a notification to the authorized person if there is a problem with a particular street light, and reports the location where the street light is damaged. The street lights are switched ON/OFF automatically using IoT: the system checks whether each street light is ON or OFF, and the LDR sensor switches the street lights ON or OFF automatically based on the ambient light conditions.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_53-IoT_Based_Automatic_Damaged_Street_Light.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Plant Disease on Leaves using Blobs Detection and Statistical Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110852</link>
        <id>10.14569/IJACSA.2020.0110852</id>
        <doi>10.14569/IJACSA.2020.0110852</doi>
        <lastModDate>2020-08-31T13:22:26.6130000+00:00</lastModDate>
        
        <creator>N. S. A. M Taujuddin</creator>
        
        <creator>A.I.A Mazlan</creator>
        
        <creator>R. Ibrahim</creator>
        
        <creator>S. Sari</creator>
        
        <creator>A. R. A Ghani</creator>
        
        <creator>N. Senan</creator>
        
        <creator>W.H.N.W Muda</creator>
        
        <subject>Image processing; blob detection; edge detection; statistical analysis; disease detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Plants are exposed to many attacks from various micro-organisms, bacterial diseases, and pests. The symptoms of these attacks are usually identified through inspection of the leaves, stem, or fruit. Diseases that commonly attack plants include Powdery Mildew and Leaf Blight, which may cause severe damage if not controlled at an early stage. Image processing has been widely used for identification, detection, grading, and quality inspection in the agriculture field. Detecting and identifying plant disease is very important, especially in producing high-quality fruit. The leaves of a plant can be used to determine its health status. The objective of this work is to develop a system capable of detecting and identifying the type of disease based on Blob Detection and Statistical Analysis. A total of 45 sample leaf images of different colours and types were used, and the accuracy was analysed. The Blob Detection technique is used to assess the healthiness of plant leaves, while Statistical Analysis, by calculating the Standard Deviation and Mean values, is used to identify the type of disease. The results were compared with manual inspection, and it was found that the system achieves 86% accuracy relative to the manual detection process.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_52-Detection_of_Plant_Disease_on_Leaves.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weight Prediction System for Nile Tilapia using Image Processing and Predictive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110851</link>
        <id>10.14569/IJACSA.2020.0110851</id>
        <doi>10.14569/IJACSA.2020.0110851</doi>
        <lastModDate>2020-08-31T13:22:26.5970000+00:00</lastModDate>
        
        <creator>Lean Karlo S. Tolentino</creator>
        
        <creator>Celline P. De Pedro</creator>
        
        <creator>Jatt D. Icamina</creator>
        
        <creator>John Benjamin E. Navarro</creator>
        
        <creator>Luigi James D. Salvacion</creator>
        
        <creator>Gian Carlo D. Sobrevilla</creator>
        
        <creator>Apolo A. Villanueva</creator>
        
        <creator>Timothy M. Amado</creator>
        
        <creator>Maria Victoria C. Padilla</creator>
        
        <creator>Gilfred Allen M. Madrigal</creator>
        
        <subject>Fish; growth; Tilapia; image processing; predictive analysis; weight prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Fish farmers are likely to cultivate poor-quality fish to accommodate the rising demand for food due to the ever-increasing population. Fish growth monitoring greatly helps in producing higher-quality fish products, which has a positive impact on the aquatic animal food production industry. However, monitoring through manual weighing and measuring stresses the fish, affecting their health and resulting in poorer quality or even fish kills. This paper presents a low-cost monitoring and Hough gradient method-based weight prediction system for Nile Tilapia (Oreochromis niloticus) using a Raspberry Pi and two low-cost USB cameras. This study aims to improve fish growth rate by monitoring the growth of the fish with image processing, eliminating the traditional way of obtaining fish measurements. A paired t-test on the acquired values implies that the algorithm used to measure the weight of the fish is accurate and acceptable to use. The growth performance of 10 Nile Tilapia was obtained in two intensive aquaculture setups: one with automated fish weighing through image processing and predictive analysis, and the other with manual weighing. With the weight prediction application, the growth of the fish increased by 47.88%.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_51-Weight_Prediction_System_for_Nile_Tilapia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Microservices-based IoT Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110850</link>
        <id>10.14569/IJACSA.2020.0110850</id>
        <doi>10.14569/IJACSA.2020.0110850</doi>
        <lastModDate>2020-08-31T13:22:26.5670000+00:00</lastModDate>
        
        <creator>Badr El Khalyly</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <creator>Mouad Banane</creator>
        
        <creator>Allae Erraissi</creator>
        
        <subject>IoT platforms; microservices; WSM method; IoT architecture; smart devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The Internet of Things (IoT) is a set of technologies that aims to fit together smart devices and applications to build an IoT ecosystem. The goal of this kind of ecosystem is to enhance interaction between machines and humans through hardware-to-software binding while reducing cost and resource consumption. At the application level, IoT ecosystems have been implemented using various technologies that all seek better interconnection, monitoring, and control of IoT smart devices. Among recent technologies, Microservices, a variant of the service-oriented architecture, are the subject of great excitement. Microservices are an emerging technology built around the Microservice paradigm, whose goal is to offer services with a small granularity, which exactly matches the distributed nature of IoT devices while maintaining a loosely coupled architecture between IoT components, among other advantages. Efforts to build Microservice-based IoT platforms soon emerged to take advantage of the numerous benefits of the Microservice paradigm for building scalable, interoperable, and dynamic ecosystems. The goal of this paper is to list these approaches, classify them, and compare them using the Weighted Scoring Model (WSM) method. This involves studying these platforms, establishing relevant comparison criteria, assigning weights to each criterion, and finally calculating scores. The obtained results reveal the weaknesses and strengths of each of the studied platforms.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_50-A_Comparative_Study_of_Microservices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Movie Rating Prediction using Ensemble Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110849</link>
        <id>10.14569/IJACSA.2020.0110849</id>
        <doi>10.14569/IJACSA.2020.0110849</doi>
        <lastModDate>2020-08-31T13:22:26.5500000+00:00</lastModDate>
        
        <creator>Zahabiya Mhowwala</creator>
        
        <creator>A. Razia Sulthana</creator>
        
        <creator>Sujala D. Shetty</creator>
        
        <subject>Machine learning; ensemble learning; random forest algorithm; XGBoost; movie rating prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Over the last few decades, social media platforms have gained a lot of popularity. People of all ages, genders, and areas of life have a presence on at least one social platform. The data generated on these platforms has been and is being used for better recommendations, marketing activities, forecasting, and predictions. Considering predictions, the movie industry worldwide produces a large number of movies per year. The success of these movies depends on various factors such as budget, director, and actors. However, it has become a trend to predict the rating of a movie based on data collected from social media related to it. This will help a number of businesses relying on the movie industry make promotional and marketing decisions. In this work, the aim is to collect movie data from IMDB and its social media data from YouTube and Wikipedia, and to compare the performance of two machine learning algorithms, Random Forest and XGBoost, which are best known for their high accuracy on small datasets with large feature sets. The data is collected from multiple sources and APIs.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_49-Movie_Rating_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Brain Tumor Segmentation and Classification using Deep Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110848</link>
        <id>10.14569/IJACSA.2020.0110848</id>
        <doi>10.14569/IJACSA.2020.0110848</doi>
        <lastModDate>2020-08-31T13:22:26.5200000+00:00</lastModDate>
        
        <creator>Sunita M. Kulkarni</creator>
        
        <creator>G. Sundari</creator>
        
        <subject>Brain MRI; segmentation; CNN; deep learning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>A brain tumor is a cluster of abnormal tissue, and it is essential to categorize brain tumors for treatment using Magnetic Resonance Imaging (MRI). The segmentation of tumors from brain MRI is understood to be a complicated yet crucial task. It can be further used in surgery, medical preparation, and assessment. In addition, brain MRI classification is also essential. Advances in machine learning and technology will aid radiologists in diagnosing tumors without invasive procedures. In this paper, a method for brain tumor detection and classification is presented. Brain tumor detection proceeds through pre-processing, skull stripping, and tumor segmentation, employing a thresholding method followed by morphological operations. The number of training images influences the features extracted by the CNN, and CNN models overfit after some epochs; hence, deep learning CNNs with transfer learning techniques are employed. Tumorous brain MRI is classified using a CNN based on the AlexNet architecture. Further, malignant brain tumors are classified using the GoogLeNet transfer learning architecture. The performance of this approach is evaluated using precision, recall, F-measure, and accuracy metrics.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_48-A_Framework_for_Brain_Tumor_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Asymmetric Security Mechanism for Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110847</link>
        <id>10.14569/IJACSA.2020.0110847</id>
        <doi>10.14569/IJACSA.2020.0110847</doi>
        <lastModDate>2020-08-31T13:22:26.4900000+00:00</lastModDate>
        
        <creator>Ayesha Siddiqa</creator>
        
        <creator>Sohail Ahmed</creator>
        
        <subject>Asymmetric cryptography; confidentiality; internet of things; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The Internet of Things places rigorous demands on quality of service and on the vitality of security. It is therefore vital to provide a highly reliable encryption algorithm with low complexity and computational expense in the IoT paradigm. Most protocols designed in the past for communication between a sender and a receiver based on asymmetric cryptography algorithms pose high computational cost. Therefore, this paper presents a less complex, more secure, and fast encryption algorithm for communication between devices, i.e., Asymmetric Scalable Security between the sender and the receiver of the information. We present a reliable, secure, scalable, and efficient communication protocol that uses an asymmetric algorithm to secure the exchange of information between sender and receiver. The proposed communication protocol is a lightweight encryption method that does not require complex resources to perform the computations involved in using asymmetric cryptography. The simulation results also show that the proposed method is efficient in terms of time and space and ensures confidentiality. Therefore, the proposed scheme is beneficial for providing secure communication for power- and resource-constrained IoT devices.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_47-Scalable_Asymmetric_Security_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent and Scalable IoT Edge-Cloud System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110846</link>
        <id>10.14569/IJACSA.2020.0110846</id>
        <doi>10.14569/IJACSA.2020.0110846</doi>
        <lastModDate>2020-08-31T13:22:26.4730000+00:00</lastModDate>
        
        <creator>Shifa Manihar</creator>
        
        <creator>Tasneem Bano Rehman</creator>
        
        <creator>Ravindra Patel</creator>
        
        <creator>Sanjay Agrawal</creator>
        
        <subject>Scalability; internet of things; self organizing map; edge; horizontal scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Scalability is an absolute necessity for the success of the IoT’s unprecedentedly growing network. The operational and financial bottlenecks allied with growth can be overwhelming for those seeking to integrate IoT solutions. As IoT technology advances, so does the scale of operations required to reach a wider target region. Breakdowns may take place not because of a device’s inability to scale, but because of data scale. As more devices are incorporated, more data will be amassed, stored, processed, and scrutinized. The volume of this collection simply cannot be managed from a single edge device through a vertical approach. When starting small, it is important to look into the future and anticipate growth. Without the right IoT architecture in place, companies that cannot adapt to unpredictable market changes will fold. Therefore, a scalable IoT framework is proposed in this paper, which provides load balancing and scalability by deploying horizontal scalability for the system. The framework utilizes a self-organizing map (SOM) to classify applications (whether delay sensitive or delay insensitive), so that proper decisions can be made based on the incoming data (typically signals); if an edge gets flooded with data, it is scaled by engaging another edge to compute the additional requests. The proposed system is termed intelligent because its algorithm empowers the edge to take decisions and classify applications based on the type of requirement of the application.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_46-Intelligent_and_Scalable_IoT_Edge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining the Presence of Metabolic Pathways using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110845</link>
        <id>10.14569/IJACSA.2020.0110845</id>
        <doi>10.14569/IJACSA.2020.0110845</doi>
        <lastModDate>2020-08-31T13:22:26.4400000+00:00</lastModDate>
        
        <creator>Yara Saud Aljarbou</creator>
        
        <creator>Fazilah Haron</creator>
        
        <subject>Metabolic pathway prediction; pathway dataset; metabolic network of organism; machine learning; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The reconstruction of the metabolic network of an organism based on its genome sequence is a key challenge in systems biology. One strategy that can be used to address this problem is the prediction of the presence or absence of a metabolic pathway from a reference database of known pathways. Although such models have been constructed manually, such a method obviously cannot cover the thousands of genomes that have been sequenced. Therefore, more advanced techniques are needed for the computational representation of metabolic networks. In this research, we have explored a machine learning approach to determine the presence or absence of a metabolic pathway based on an annotated genome. We have built our own dataset of 4978 instances of pathways. The dataset consists of 1585 pathways, each having 20 different representations from 20 organisms. The pathways were obtained from the BioCyc Database Collection. The pathway dataset also includes 20 features used to describe each pathway. In order to identify a suitable classifier, we have experimented with five machine learning algorithms, with and without feature selection methods, namely Decision Tree, Naive Bayes, Support Vector Machine, K-Nearest Neighbor, and Logistic Regression. Our experiments have shown that the Support Vector Machine is the best classifier, with an accuracy of 96.9%, while the maximum accuracy reached by previous work is 91.2%. Moreover, adding more data to the pathway dataset can further improve the performance of the machine learning classifiers.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_45-Determining_the_Presence_of_Metabolic_Pathways.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Interaction Model for Cloud Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110844</link>
        <id>10.14569/IJACSA.2020.0110844</id>
        <doi>10.14569/IJACSA.2020.0110844</doi>
        <lastModDate>2020-08-31T13:22:26.4270000+00:00</lastModDate>
        
        <creator>Md. Nasim Adnan</creator>
        
        <creator>Md. Majharul Haque</creator>
        
        <creator>Mohammad Rifat Ahmmad Rashid</creator>
        
        <creator>Mohammod Akbar Kabir</creator>
        
        <creator>Abu Sadat Mohammad Yasin</creator>
        
        <creator>Muhammad Shakil Pervez</creator>
        
        <subject>Cloud computing; cloud management; cloud customer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cloud computing is readily being adopted by enterprises due to benefits such as the ability to provide better service to customers, improved flexibility, a lower barrier to entry for an enterprise, lower maintenance cost for IT services, and availability. However, the interaction between the cloud service provider and the customer is not yet well defined. Understanding the service offered when approaching the cloud computing paradigm, as well as the actions required while receiving a cloud service, e.g., provisioning of new resources, scaling up/down, and billing, remains a concern for enterprises. This paper proposes a segregated interaction model to manage the receipt of a cloud service in a hierarchical way.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_44-Comprehensive_Interaction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Low Power, Minimal Dead Zone Digital PFD for Biomedical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110843</link>
        <id>10.14569/IJACSA.2020.0110843</id>
        <doi>10.14569/IJACSA.2020.0110843</doi>
        <lastModDate>2020-08-31T13:22:26.3930000+00:00</lastModDate>
        
        <creator>Sudhakiran Gunda</creator>
        
        <creator>Ernest Ravindran R. S</creator>
        
        <subject>Biomedical Implantable Device (BIMD); Digital Phase Frequency Detector (DPFD); Digital Controlled Oscillator (DCO); Sense Amplifier Based Flip-flop (SAFF); NIKSTRO or SURAV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Chronic diseases and rising aging populations are the major reasons for the use of low-power, low-noise, lifetime-performance biomedical implantable devices. Efficient architectural designs are responsible for meeting the requirements set out above. This paper focuses on an ADPLL DPFD architecture for implantable biomedical devices. For a high-performance DPFD, the dead zone and lock-in time are persistent limitations of ADPLLs. In the present paper, a new approach to designing a dead-zone-free DPFD with fast locking and low phase noise is taken up as a challenge. This is accomplished by carefully controlling the reference and feedback clock frequencies of the phase detector with the proposed NIKSTRO/SURAV latch-based sense amplifier. The proposed architecture was developed and simulated using 45nm technology, and it is observed that it provides a 20ns dead zone with 4.8mW of power consumption at 1.8GHz, while the lock-in time for the proposed method is 340ns with moderate phase noise. It is also noted that the proposed design showed better results when compared to existing ones.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_43-A_Novel_Low_Power_Minimal_Dead_Zone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Health-Related Rumors on Twitter using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110842</link>
        <id>10.14569/IJACSA.2020.0110842</id>
        <doi>10.14569/IJACSA.2020.0110842</doi>
        <lastModDate>2020-08-31T13:22:26.3800000+00:00</lastModDate>
        
        <creator>Faisal Saeed</creator>
        
        <creator>Wael M.S. Yafooz</creator>
        
        <creator>Mohammed Al-Sarem</creator>
        
        <creator>Essa Abdullah Hezzam</creator>
        
        <subject>Health-related misinformation; cancer disease; fake information; Twitter; classification formatting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Nowadays, the huge usage of the internet leads to tremendous information growth as a result of our daily activities involving different sources such as news articles, forums, websites, emails, and social media. Social media is a rich source of information that deeply affects users through its content. However, there are many rumors on these social media platforms, which can have critical consequences for people’s lives, especially when they concern health-related information. Several studies have focused on automatically detecting rumors from social media by applying machine learning and intelligent methods. However, few studies have addressed health-related rumors in the Arabic language. Therefore, this paper deals with detecting health-related rumors, focusing on cancer treatment information spread over social media in Arabic. In addition, it presents the process of creating a dataset called the Health-Related Rumors Dataset (HRRD), which will be available and beneficial for further studies in health-related research. Furthermore, an experiment has been conducted to investigate the performance of several machine learning methods in detecting health-related rumors on social media in Arabic. The experimental results showed that rumors can be detected with an accuracy of 83.50%.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_42-Detecting_Health_Related_Rumors_on_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EDES-ACM: Enigma Diagonal Encryption Standard Access Control Model for Data Security in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110841</link>
        <id>10.14569/IJACSA.2020.0110841</id>
        <doi>10.14569/IJACSA.2020.0110841</doi>
        <lastModDate>2020-08-31T13:22:26.3630000+00:00</lastModDate>
        
        <creator>Sameer</creator>
        
        <creator>Harish Rohil</creator>
        
        <subject>Cloud; security; multiuser; EDES-ACM; computation cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Data management across different domains is a foremost requirement for many organizations universally. Organizations adopt the cloud computing paradigm to handle data effectively due to its robust scaling at low cost. In recent times, the usage of the cloud and its data has been increasing in multiuser environments. This raises the issue of ensuring the security of data uploaded to the cloud environment by its owners. Cloud service providers and researchers have implemented several schemes to ensure data security. However, the task of providing security in multiuser settings remains difficult due to data leakage. A novel Enigmatic Diagonal Encryption Standard (EDES) algorithm to provide access control over the cloud is proposed. The framework with the proposed algorithm is named EDES-ACM. The Inverse Decisional Diffie-Hellman (IDDH) technique is used for generating the group signature. The data is encrypted with the EDES algorithm by the data owner. The encrypted data is provided to the user and is accessed via an EDES-based private key. The group manager monitors the cloud and provides an activity report to the owners, based on which revocation is performed. The framework is validated for its performance on security parameters and compared with existing models on computation cost. The EDES-ACM framework is effective with low computation cost. Future work on the proposed framework will include blockchain technology, which may improve security and enable better accumulation of data.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_41-EDES_ACM_Enigma_Diagonal_Encryption_Standard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Local Birds of Bangladesh using MobileNet and Inception-v3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110840</link>
        <id>10.14569/IJACSA.2020.0110840</id>
        <doi>10.14569/IJACSA.2020.0110840</doi>
        <lastModDate>2020-08-31T13:22:26.3470000+00:00</lastModDate>
        
        <creator>Md. Mahbubur Rahman</creator>
        
        <creator>Al Amin Biswas</creator>
        
        <creator>Aditya Rajbongshi</creator>
        
        <creator>Anup Majumder</creator>
        
        <subject>Recognition; MobileNet; Inception-v3; transfer learning; computer vision; Bangladeshi bird</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Recognition of bird species can be a challenging task due to various complex factors. The purpose of this work is to distinguish various local bird species of Bangladesh from image data. MobileNet and Inception-v3, which are primarily image classification models, are used here to accomplish this work. We have used a total of four approaches, namely Inception-v3 without transfer learning, Inception-v3 with transfer learning, MobileNet without transfer learning, and MobileNet with transfer learning. To evaluate our experimental results, we have calculated the F1 score in addition to each model’s accuracy and also presented ROC curves to evaluate the models’ output quality. We have then compared the four applied approaches. The experimental results have demonstrated the working capability of all four approaches. Among them, MobileNet with transfer learning outperforms the others, obtaining a test accuracy of 91.00%. For each of the classes, MobileNet with transfer learning obtained a higher F1 score than the other approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_40-Recognition_of_Local_Birds_of_Bangladesh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing User Satisfaction of Mobile Commerce using Usability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110839</link>
        <id>10.14569/IJACSA.2020.0110839</id>
        <doi>10.14569/IJACSA.2020.0110839</doi>
        <lastModDate>2020-08-31T13:22:26.3330000+00:00</lastModDate>
        
        <creator>Ninyikiriza Deborah Lynn</creator>
        
        <creator>Arefin Islam Sourav</creator>
        
        <creator>Djoko Budiyanto Setyohadi</creator>
        
        <subject>Usability; usability testing; mobile commerce; mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Online shopping continues to simplify people’s way of life in the present world. People no longer need to physically visit stores to buy items for home use or other purposes. This can be done online using mobile applications to order preferred items, which are then delivered. However, the increase in the features of mobile applications and mobile devices makes usability testing a necessary aspect of today’s advancement of technology. This paper uses an experiment-based usability testing evaluation of the Lazada Indonesia mobile application to understand four usability factors, namely ease of use, efficiency, functionality, and satisfaction. The test was performed with 40 participants, all students from Universitas Atma Jaya Yogyakarta, Universitas Gadjah Mada, and Universitas Sanata Dharma. The participants performed six tasks on the mobile application and later answered a questionnaire to capture their views concerning its usability. The results were analyzed using SPSS software, and descriptive statistics were presented using standard deviations and arithmetic means. The results from the evaluation showed that the mobile application is easy to use, efficient, and has good functionality. However, some issues mentioned by participants indicated that users were not fully satisfied with the mobile application. Therefore, this calls for designers to address these usability issues to increase user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_39-Increasing_User_Satisfaction_of_Mobile_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combined Text Mining: Fuzzy Clustering for Opinion Mining on the Traditional Culture Arts Work</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110838</link>
        <id>10.14569/IJACSA.2020.0110838</id>
        <doi>10.14569/IJACSA.2020.0110838</doi>
        <lastModDate>2020-08-31T13:22:26.3170000+00:00</lastModDate>
        
        <creator>Elta Sonalitha</creator>
        
        <creator>Anis Zubair</creator>
        
        <creator>Priyo Dari Molyo</creator>
        
        <creator>Salnan Ratih Asriningtias</creator>
        
        <creator>Bambang Nurdewanto</creator>
        
        <creator>Bondhan Rio Prambanan</creator>
        
        <creator>Irfan Mujahidin</creator>
        
        <subject>Text mining; opinion mining; fuzzy clustering; arts work</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The Indonesian government is currently intensifying work programs in the field of traditional arts and culture. In order to promote the country&#39;s culture, the government has enacted a law on cultural promotion. One indicator of the achievement of cultural promotion is the collection of data on traditional culture; the mapped and inventoried data can be processed into information and knowledge. In this research, performance indicators were compiled from connoisseurs of traditional works of art using data from the city of Malang, East Java, Indonesia. The audience&#39;s opinion of cultural offerings can be used as a benchmark for the success of the promotion of traditional culture. When a culture is explored and displayed again, it is important to know the audience&#39;s satisfaction with and understanding of the display that has just been witnessed. The respondents&#39; descriptions, in the form of opinions on the artwork, are collected as data and processed using text mining with Fuzzy C-Means clustering to determine the audience&#39;s opinion about Feeling, which relates to feelings when viewing the beauty of the artwork; Value, which relates to the assessment of an artwork, such as the artistic weight contained in it; and Emphasizing, which relates to empathy or respect for the art world, including related professions such as dancers and musicians. The results achieved from this study show that the proposed method has good performance. This is known from the results of testing the data, which yield a cluster variance of V = 0.00000217901. Based on this value, it can be concluded that the variance of all clusters is good.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_38-Combined_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attendance System using Machine Learning-based Face Detection for Meeting Room Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110837</link>
        <id>10.14569/IJACSA.2020.0110837</id>
        <doi>10.14569/IJACSA.2020.0110837</doi>
        <lastModDate>2020-08-31T13:22:26.2870000+00:00</lastModDate>
        
        <creator>Rahmat Muttaqin</creator>
        
        <creator>Nopendri</creator>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Eueung Mulyana</creator>
        
        <subject>Detection; face; IoT; meeting room; attendance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In a modern meeting room, a smart system for taking attendance quickly is mandatory. Most existing systems perform attendance manually, such as by registration or fingerprint. Although the fingerprint method can reject an unknown person and grant access to a known person, it takes time to register each person one by one. Moreover, fingerprint checking can create long queues before entering the meeting room. Machine learning, along with Internet of Things (IoT) technology, is a strong solution; it offers many advantages when applied in meeting rooms. Generally, the method used is to record attendance by detecting faces. In this paper, we present facial recognition authentication based on machine learning technology for meeting rooms. Furthermore, a dedicated website to display the detection results is developed, and the data storage design is tested. The method uses 1) the Dlib library for deep learning purposes, 2) OpenCV for video camera processing, and 3) the Face Recognition library for Dlib processing. The proposed system allows placing multiple cameras in a meeting room as needed; however, in this work, we used only one camera as the main system. Tests conducted include identification of one known person, identification of one unknown person, and identification of two and three people. The parameter of focus is the time required to detect the number of faces recorded by the camera. The results show whether or not a face is recognized, and the result is then displayed on the website.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_37-Attendance_System_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Framework for Detecting Change in the Source Code and Test Case Change Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110835</link>
        <id>10.14569/IJACSA.2020.0110835</id>
        <doi>10.14569/IJACSA.2020.0110835</doi>
        <lastModDate>2020-08-31T13:22:26.2530000+00:00</lastModDate>
        
        <creator>Niladri Shekar Dey</creator>
        
        <creator>Purnachand Kollapudi</creator>
        
        <creator>M V Narayana</creator>
        
        <creator>I Govardhana Rao</creator>
        
        <subject>Change detection; pre-requisite detection; feature detection; functionality detection and test case change recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Improvements and acceleration in software development have contributed towards high-quality services in all domains and fields of industry, causing increasing demand for high-quality software development. To match these demands, the software development industry is adopting highly skilled human resources and advanced methodologies and technologies for accelerating the development life cycle. In the software development life cycle, one of the biggest challenges is change management between versions of the source code. Versioning of the source code can be caused by various reasons, such as changes in requirements, functional updates, or technology upgrades. Change management affects not only the correctness of a software service release, but also the number of test cases. It is often observed that the development life cycle is delayed due to the lack of proper version control and, as a consequence of improper version control, repetitive testing iterations. Hence, the demand for better version-control-driven test case reduction methods cannot be ignored. A number of version control mechanisms have been proposed by parallel research attempts. Nevertheless, most version controls are criticized for not contributing towards test case generation or reduction. Henceforth, this work proposes a novel probabilistic refactoring detection and rule-based test case reduction method in order to simplify testing and version control in software development. The refactoring process is widely adopted by software developers for making efficient changes, such as restructuring code, changing functionality, or applying changes in requirements. This work demonstrates very high accuracy for change detection and management, which results in higher accuracy for test case reductions. The final outcome of this work is to reduce software development time, making the software development industry more efficient.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_35-An_Automated_Framework_for_Detecting_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Transmission Power Control Strategy for Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110836</link>
        <id>10.14569/IJACSA.2020.0110836</id>
        <doi>10.14569/IJACSA.2020.0110836</doi>
        <lastModDate>2020-08-31T13:22:26.2530000+00:00</lastModDate>
        
        <creator>Syed Agha Hassnain Mohsan</creator>
        
        <creator>Hussain Amjad</creator>
        
        <creator>Alireza Mazinani</creator>
        
        <creator>Sahibzada Adil Shahzad</creator>
        
        <creator>Mushtaq Ali Khan</creator>
        
        <creator>Asad Islam</creator>
        
        <creator>Arfan Mahmood</creator>
        
        <creator>Ahmad Soban</creator>
        
        <subject>Underwater wireless sensor networks; transmission power; sensor; power consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Underwater wireless sensor networks (UWSNs) have proven their high stature in both civil and military operations, including underwater life monitoring, communication, and invasion detection. However, UWSNs are vulnerable to a wide class of power consumption issues. Underwater sensor nodes consume power provided by integrated batteries of limited capacity, and replacing these batteries under harsh aquatic conditions is challenging. Thus, in an energy-constrained underwater system, it is pivotal to seek strategies for improving the life expectancy of the sensors. In this paper, we propose a transmission power control mechanism for UWSNs. We experimentally investigate the impact of transmission power and propose a control mechanism to enhance network performance, in which source nodes adjust their transmission power according to the location of the destination node. This paper aims to provide a mechanism that is incorporated into SEEC, and it also outlines the mathematical modeling of the proposed idea. Moreover, we compare the results of our scheme with previously implemented schemes.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_36-Investigating_Transmission_Power_Control_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of 6LoWPAN Application: A Performance Assessment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110834</link>
        <id>10.14569/IJACSA.2020.0110834</id>
        <doi>10.14569/IJACSA.2020.0110834</doi>
        <lastModDate>2020-08-31T13:22:26.2230000+00:00</lastModDate>
        
        <creator>Nin Hayati Mohd Yusoff</creator>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Adil Hidayat Rosli</creator>
        
        <subject>6LoWPAN protocol; performance analysis; overhead</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Industrial Revolution 4.0 promises an overall improvement in communications technology by improving the quality and flexibility of IoT application deployment. Currently, most of these applications are embedded devices from various manufacturers, networks, and technologies. Simply getting such varied and myriad devices and technologies to work together would be chaotic, let alone making them work in perfect harmony. Regardless, the IoT calls for the seamless integration and interoperability of these devices and technologies. In realizing this goal, the ability of the IoT system to adopt and adapt to new devices, services, and applications is crucial, while at the same time not jeopardizing or compromising the existing system, especially the routing protocol. In view of IP-based communication technology in WSNs, the 6LoWPAN network has been chosen for the task, and the RPL protocol has been strongly considered for the 6LoWPAN solution. However, RPL overhead tends to spiral upwards when additional information transmission occurs. To mitigate this anomaly, HRPL was proposed to enhance the RPL protocol by reducing routing overhead. This study focuses on the performance analysis of RPL and HRPL based on physical experimentation with a 6LoWPAN network in a real scenario. The results show that the HRPL protocol outperforms RPL in all evaluated metrics: CTO (38.7%), latency (26%), and convergence time (37%). It was also discovered that the number of DIS and DAO (RPL control message) packets is significantly reduced when DIO messages are reduced; latency and convergence time likewise registered corresponding decreases. Meanwhile, based on our observations, several further experiments are needed to investigate how topology variants affect HRPL capabilities.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_34-Design_and_Implementation_of_6LoWPAN_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Restoration based on Maximum Entropy Method with Parameter Estimation by Means of Annealing Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110833</link>
        <id>10.14569/IJACSA.2020.0110833</id>
        <doi>10.14569/IJACSA.2020.0110833</doi>
        <lastModDate>2020-08-31T13:22:26.2070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Image restoration; Maximum Entropy Method (MEM); annealing; Advanced Very High Resolution Radiometer (AVHRR); Conjugate Gradient Method (CGM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Image restoration based on the Maximum Entropy Method (MEM) with parameter estimation by means of an annealing method is proposed. The proposed method allows spatial resolution enhancement: using overlap sampling with a low-resolution sensor, high spatial resolution (corresponding to the sampling interval) can be achieved through ground data processing with image restoration methods. Through experiments with simulated imagery derived from Advanced Very High Resolution Radiometer (AVHRR) data, it was found that spatial resolution enhancement can be achieved; MEM is superior to the other methods when the S/N ratio is poor (less than 33), while the Conjugate Gradient Method (CGM) is superior when the S/N ratio is higher than 33. It was also found that CGM is superior to the proposed method in the presence of sampling jitter.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_33-Image_Restoration_based_on_Maximum_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of K-means, DBSCAN and OPTICS Cluster Algorithms on Al-Quran Verses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110832</link>
        <id>10.14569/IJACSA.2020.0110832</id>
        <doi>10.14569/IJACSA.2020.0110832</doi>
        <lastModDate>2020-08-31T13:22:26.1900000+00:00</lastModDate>
        
        <creator>Mohammed A. Ahmed</creator>
        
        <creator>Hanif Baharin</creator>
        
        <creator>Puteri N.E. Nohuddin</creator>
        
        <subject>K-means; DBSCAN; OPTICS; Al-Baqarah clustering; Silhouette Coefficient; Tafseer; text clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Chapter Al-Baqarah is the longest chapter in the Holy Quran, and it covers various topics. Al-Quran is the primary text of Islamic faith and practice; millions of Muslims worldwide use it as their reference book, and it therefore guides Muslims and Islamic scholars in matters of law and life. Text clustering (unsupervised learning) is the process of dividing a collection of texts into groups of similar documents. There are many text clustering algorithms and techniques used to form clusters, such as partitioning and density-based methods. In this paper, k-means is chosen as a partitioning method, and DBSCAN and OPTICS as density-based methods. This study aims to investigate which algorithm produces the most accurate clusters for the English Tafseer of the Al-Baqarah chapter. Data preprocessing and feature extraction using Term Frequency-Inverse Document Frequency (TF-IDF) were applied to the dataset. The results show that k-means outperformed the others, even with the smallest Silhouette Coefficient (SC) score, due to its shorter implementation time and production of no noise for seven clusters of the Al-Baqarah chapter. OPTICS also produced no noise, with a medium SC score, but had the longest implementation time due to its complexity.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_32-Analysis_of_k_means_DBSCAN_and_OPTICS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Big Data Architecture for Real-Time Student Attention Detection and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110831</link>
        <id>10.14569/IJACSA.2020.0110831</id>
        <doi>10.14569/IJACSA.2020.0110831</doi>
        <lastModDate>2020-08-31T13:22:26.1770000+00:00</lastModDate>
        
        <creator>Tarik Hachad</creator>
        
        <creator>Abdelalim Sadiq</creator>
        
        <creator>Fadoua Ghanimi</creator>
        
        <subject>Attention detection; big data analysis; stream processing; real-time processing; Apache Flink; Apache Spark; Apache Storm; Lambda architecture; Kappa architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Big Data technologies and their analytical methods can help improve the quality of education. They can be used to process and analyze classroom video streams to predict student attention, which would greatly improve the learning-teaching experience. With the increasing number of students and the expansion of educational institutions, processing and analyzing video streams in real time becomes a complicated issue. In this paper, we review the existing systems for student attention detection, open-source real-time data stream processing technologies, and the two major data stream processing architectures. We also propose a new Big Data architecture for real-time student attention detection.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_31-A_New_Big_Data_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Efficient Pre-trained Networks based on Transfer Learning for Tomato Leaf Diseases Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110830</link>
        <id>10.14569/IJACSA.2020.0110830</id>
        <doi>10.14569/IJACSA.2020.0110830</doi>
        <lastModDate>2020-08-31T13:22:26.1430000+00:00</lastModDate>
        
        <creator>Sawsan Morkos Gharghory</creator>
        
        <subject>Deep learning; AlexNet; SqueezeNet; VGG16 networks; tomato leaf diseases diagnosis and classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Early diagnosis and accurate identification of tomato leaf diseases help control the spread of infection and guarantee plant health, which in turn increases the crop harvest. Nine common types of tomato leaf diseases have a great effect on the quality and quantity of tomato crop yield. Traditional approaches to feature extraction and image classification cannot ensure a high accuracy rate for leaf disease identification. This paper suggests an automatic detection approach for tomato leaf diseases based on fine-tuning and transfer learning of pre-trained deep Convolutional Neural Networks. Three pre-trained deep networks based on transfer learning, AlexNet, VGG-16 Net, and SqueezeNet, are analyzed for their performance in tomato leaf disease classification. The networks are evaluated on two different datasets: a small dataset covering only four diseases, and a large dataset of leaves exhibiting symptoms of nine diseases along with healthy leaves. The performance of the suggested networks is evaluated in terms of classification accuracy and training time. On the small dataset, the suggested networks are also compared with a state-of-the-art technique from the literature; the experimental results demonstrate that their classification accuracy exceeds that of the literature technique by 8.1% and 15%. On the large dataset, the pre-trained AlexNet achieves a high classification accuracy of 97.4% with a lower training time than the other pre-trained networks. Generally, it can be concluded that AlexNet shows outstanding performance for diagnosing tomato leaf diseases when accuracy and execution time are considered together; by contrast, VGG-16 Net achieves the best classification accuracy but requires the longest training time among the networks.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_30-Performance_Analysis_of_Efficient_Pre_Trained_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Data Mining Techniques on Breast Cancer Diagnosis Data using WEKA Toolbox</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110829</link>
        <id>10.14569/IJACSA.2020.0110829</id>
        <doi>10.14569/IJACSA.2020.0110829</doi>
        <lastModDate>2020-08-31T13:22:26.1300000+00:00</lastModDate>
        
        <creator>Majdah Alshammari</creator>
        
        <creator>Mohammad Mezher</creator>
        
        <subject>Data mining; breast cancer; data mining techniques; classification; WEKA toolbox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Breast cancer is considered the second most common cancer in women compared to all other cancers. It is fatal in less than half of all cases but is a main cause of mortality in women, accounting for 16% of all cancer mortalities worldwide. Early diagnosis of breast cancer increases the chance of recovery, and data mining techniques can be utilized for this early diagnosis. In this paper, an academic experimental breast cancer dataset is used to perform a practical data mining experiment using the Waikato Environment for Knowledge Analysis (WEKA) tool. The WEKA Java application represents a rich resource for collecting performance metrics during the execution of experiments. Pre-processing and feature extraction are used to optimize the data. The classification process used in this study was summarized through thirteen experiments; additionally, 10 experiments using various classification algorithms were conducted. The algorithms introduced were: Na&#239;ve Bayes, Logistic Regression, Lazy IBK (Instance-Based learning with parameter K), Lazy KStar, Lazy Locally Weighted Learner, Rules ZeroR, Decision Stump, Decision Trees J48, Random Forest, and Random Trees. The process of producing a predictive model was automated using classification accuracy. Further, several classification experiments on the Wisconsin Diagnostic Breast Cancer and Wisconsin Breast Cancer datasets were conducted to compare the success rates of the different methods. The results conclude that the Lazy IBK (k-NN) classifier can achieve 98% accuracy, the highest among the tested classifiers. The main advantages of the study are the compactness of using 13 different data mining models and 10 different performance measurements, and the plotting of classification error figures.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_29-A_Comparative_Analysis_of_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Circular Field in Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110828</link>
        <id>10.14569/IJACSA.2020.0110828</id>
        <doi>10.14569/IJACSA.2020.0110828</doi>
        <lastModDate>2020-08-31T13:22:26.1130000+00:00</lastModDate>
        
        <creator>Syed Agha Hassnain Mohsan</creator>
        
        <creator>Mushtaq Ali Khan</creator>
        
        <creator>Arfan Mahmood</creator>
        
        <creator>Muhammad Hammad Akhtar</creator>
        
        <creator>Hussain Amjad</creator>
        
        <creator>Asad Islam</creator>
        
        <creator>Alireza Mazinani</creator>
        
        <creator>Syed Muhammad Tayyab Shah</creator>
        
        <subject>Underwater wireless sensor networks; mobile sink; routing scheme; circular field</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Underwater Wireless Sensor Networks (UWSNs) face challenges including high propagation delay, limited bandwidth, 3D topology, and excessive energy consumption. In this paper, a routing scheme with a circular field is proposed for efficient collection of data packets using two mobile sinks in UWSNs. Results of the proposed scheme are compared with previously implemented schemes that measure the contribution of mobile sinks to data packet collection, as well as with current state-of-the-art routing protocols. The statistical significance of this work was analyzed in MATLAB. Marked observations to emerge from the obtained results include an improvement in network lifetime, increased throughput, more alive nodes, and balanced energy consumption. In our view, these results strengthen the validity of the proposed circular field. A significant increase in received packets is observed because most nodes remain alive until 1500 rounds, which provides maximum communication and less chance of creating void holes.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_28-Impact_of_Circular_Field_in_Underwater.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facilitating the Detection of ASD in Ultrasound Video using RHOOF and SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110827</link>
        <id>10.14569/IJACSA.2020.0110827</id>
        <doi>10.14569/IJACSA.2020.0110827</doi>
        <lastModDate>2020-08-31T13:22:26.0970000+00:00</lastModDate>
        
        <creator>Mrunal Ninad Annadate</creator>
        
        <creator>Manoj Nagmode</creator>
        
        <subject>Two dimensional; apical four chamber; region-based histograms of oriented optical flow; machine learning; area under the curve; support vector machine; congenital</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In the medical field, various motion tracking techniques such as block matching, optical flow, and histogram of oriented optical flow (HOOF) have been applied to abnormality detection. The information furnished by the existing techniques is inadequate for medical diagnosis. These techniques have an inherent drawback: the entire image is considered for motion vector calculation, increasing the time complexity. Moreover, the motion vectors of unwanted objects are accounted for during abnormality detection, leading to misidentification and misdiagnosis. In this research, our main objective is to focus on the region of abnormality by excluding unwanted motion vectors from the rest of the heart, yielding better time complexity. We propose a region-based HOOF (RHOOF) for blood motion tracking and estimation; after experimentation, it is observed that RHOOF is four times faster than HOOF. The performance of supervised machine learning techniques was evaluated based on accuracy, precision, sensitivity, specificity, and area under the curve. In the medical field, more importance is given to sensitivity than accuracy. The support vector machine (SVM) outperformed the other techniques on sensitivity and time complexity and was hence chosen for abnormality classification in this work. An algorithm has been devised that uses the combination of RHOOF and SVM for the detection of atrial septal defect (ASD).</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_27-Facilitating_the_Detection_of_ASD_in_Ultrasound_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Cardiovascular Disease Detection and Features Extraction Algorithms from ECG Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110826</link>
        <id>10.14569/IJACSA.2020.0110826</id>
        <doi>10.14569/IJACSA.2020.0110826</doi>
        <lastModDate>2020-08-31T13:22:26.0670000+00:00</lastModDate>
        
        <creator>Sanjay Ghodake</creator>
        
        <creator>Shashikant Ghumbre</creator>
        
        <creator>Sachin Deshmukh</creator>
        
        <subject>Electrocardiogram; heart disease; cardiovascular disease; hybrid filtering; features extraction; QRS and ST beats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Cardiovascular disease (CVD) is a leading cause of death, and several conditions can lead to CVD in human beings. Early detection of CVD helps in taking the necessary medical actions to prevent harm. Conventional techniques for CVD detection were manual and expensive and often delivered inaccurate diagnoses. Over the last decade, inexpensive Computer Aided Diagnosis (CAD) based methods have gained significant medical attention. CAD-based techniques rely mainly on a patient's raw Electrocardiogram (ECG) signals for the accurate and economical detection of CVD at an early stage. In the recent past, several CAD systems have been designed for CVD diagnosis utilizing raw ECG signals; however, the accuracy of ECG-based CVD detection is hampered by several research issues, such as QRS beat extraction, artefacts, and efficient feature extraction. This research paper presents a novel CVD framework utilizing raw ECG signals, with a hybrid pre-processing algorithm designed to remove artefacts and noise from the raw ECG signal. It further designs a simple and efficient dynamic-thresholding-based technique to extract the Q, R, and S beats and the ST segment from the pre-processed ECG signal. The third step performs the fusion of the extracted beats and applies a feature extraction method called Normalized Higher Order Statistics (NHOS). The normalized HOS technique assesses the complexity among all the QRS-based beats and delivers more unique features for accuracy enhancement. The final step is classification using five different classifiers for CVD detection. The simulation results presented in this paper demonstrate that the proposed framework achieves a significant accuracy improvement.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_26-Optimized_Cardiovascular_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computational Approach to Explore Extremist Ideologies in Daesh Discourse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110825</link>
        <id>10.14569/IJACSA.2020.0110825</id>
        <doi>10.14569/IJACSA.2020.0110825</doi>
        <lastModDate>2020-08-31T13:22:26.0500000+00:00</lastModDate>
        
        <creator>Ayman F. Khafaga</creator>
        
        <subject>Computational linguistics; concordance; Daesh; frequency analysis; ideology; Rumiyah</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>This paper uses a computer-based frequency analysis to present an ideological discourse analysis of extremist ideologies in Daesh discourse. More specifically, by using a computer-assisted text analysis, the paper attempts to investigate the hidden extremist ideologies beyond the discourse of the first issue of Rumiyah, one of the main digital publications of Daesh. The paper’s main objectives are to expose hidden ideologies beyond the mere linguistic form of discourse, to offer better linguistic understanding of the manipulative use of language in religious discourse, and to highlight the relevance of using a computer-based frequency analysis to discourse studies and corpus linguistics. The paper also employs van Dijk&#39;s ideological discourse analysis, by adopting his positive self-presentation and negative other-presentation strategies. Findings reveal that Daesh discourse in Rumiyah is rhetorically structured to hide the manipulative ideologies of its users, which in turn functions to reformulate the social, political and religious attitudes of its readers.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_25-A_Computational_Approach_to_Explore_Extremist_Ideologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The TPOA Telecentre: A Community Sustainable Telecentre Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110824</link>
        <id>10.14569/IJACSA.2020.0110824</id>
        <doi>10.14569/IJACSA.2020.0110824</doi>
        <lastModDate>2020-08-31T13:22:26.0370000+00:00</lastModDate>
        
        <creator>Chong Eng Tan</creator>
        
        <creator>Poline Bala</creator>
        
        <creator>Sei Ping Lau</creator>
        
        <creator>Siew Mooi Wong</creator>
        
        <subject>Telecentre; sustainability; TPOA; telecentre architecture; ICT4D; rural development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>This paper presents the telecentre implementation for Orang Asli villages in remote rural areas under the Telecentre Program for Orang Asli (TPOA). The TPOA telecentre architecture aims to support a sustainable rural community telecentre through innovation and the strategic adoption of ICT technology. Lessons learned from our past telecentre experience have outlined various challenges in the technical aspects of telecentre implementation and operation. The TPOA telecentre ICT architecture has been designed to address these issues, producing a smoother telecentre operation that enables rural communities to self-sustain their own telecentres. Technical support for a remote rural telecentre can be very expensive and impractical due to the extreme physical access conditions; hence, the rural communities themselves have to carry out support and maintenance to sustain the operation of the telecentre. The TPOA telecentre architecture provides a relatively easy-to-operate ICT platform that assists and makes it possible for the Orang Asli to sustain, support, and maintain telecentre operation.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_24-The_TPOA_Telecentre.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Assessing the Power Consumption Behavior of Sensor Nodes using Petri Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110823</link>
        <id>10.14569/IJACSA.2020.0110823</id>
        <doi>10.14569/IJACSA.2020.0110823</doi>
        <lastModDate>2020-08-31T13:22:26.0030000+00:00</lastModDate>
        
        <creator>Alaa E. S. Ahmed</creator>
        
        <subject>Wireless sensor networks; power modeling; event-trigger; colored petri net; WSN protocols; distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Power consumption is an influential and important concern in Wireless Sensor Networks (WSNs). Assessing and determining the impact of the power behavior of sensor nodes is a comprehensive concern that should be addressed at the network pre-deployment phase rather than the post-deployment phase. Accurate power modeling improves the development process of WSN applications and protocols. This paper introduces the use of Colored Petri Nets (CPNs) to model the power behavior of, and the relations among, different sensor node components operated in an event-driven environment. The objective is to determine the overall power behavior of the nodes by considering the power consumed during different state transitions at specific packet arrival and service rate values. Colored Petri Nets is a modeling language employed for validating and evaluating concurrent and distributed systems. The introduced model is beneficial since it provides the network designer with a way to evaluate alternative designs or to check the compatibility of an existing power model's behavior. The proposed model has been validated through a comparison of the power behaviors of two sensor nodes, Mica2 and Telos. The results demonstrate the efficiency of using the proposed model to characterize and analyze WSN energy consumption behavior.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_23-Modeling_and_Assessing_the_Power_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy based Reliable Cooperative Spectrum Sensing for Smart Grid Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110822</link>
        <id>10.14569/IJACSA.2020.0110822</id>
        <doi>10.14569/IJACSA.2020.0110822</doi>
        <lastModDate>2020-08-31T13:22:25.9900000+00:00</lastModDate>
        
        <creator>Laila Nassef</creator>
        
        <creator>Reemah Al-Hebshi</creator>
        
        <subject>Cognitive radio; wireless networks; cooperative spectrum sensing; reliable fusion; fuzzy inference system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The huge demand for spectrum has created an immediate need to make new licensed and/or unlicensed spectrum bands available to accommodate the explosive growth of spectrum demands and to satisfy the quality-of-service requirements of diverse applications. Spectrum shortage and harsh environments have become a challenging bottleneck to achieving reliable communications in the smart grid. Cognitive radio is the emerging technology to achieve both spectrum and reliability awareness. Cooperative spectrum sensing takes advantage of spatial diversity to reduce the impact of receiver uncertainty. However, harsh smart grid environments limit the advantages of cooperation due to variations in the signal-to-noise ratio on which the energy detection technique depends. This paper proposes reliable spectrum detection for cluster-based cooperative spectrum sensing in harsh smart grid environments, where cognitive cluster heads and a centralized cognitive-radio-based fusion center are deployed to solve both the spectrum and reliability problems. The proposed fuzzy inference system is based on three fuzzy descriptors: energy difference, link quality, and local probability of detection. The results show the superiority of the proposed fuzzy-based fusion scheme in enhancing the accuracy of spectrum decisions in harsh environments.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_22-Fuzzy_based_Reliable_Cooperative_Spectrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality in Use of an Android-based Mobile Application for Calculation of Bone Mineral Density with the Standard ISO/IEC 25022</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110821</link>
        <id>10.14569/IJACSA.2020.0110821</id>
        <doi>10.14569/IJACSA.2020.0110821</doi>
        <lastModDate>2020-08-31T13:22:25.9730000+00:00</lastModDate>
        
        <creator>Jose Sulla-Torres</creator>
        
        <creator>Andrea Gutierrez-Quintanilla</creator>
        
        <creator>Henry Pinto-Rodriguez</creator>
        
        <creator>Rossana Gomez-Campos</creator>
        
        <creator>Marco Cossio-Bolanos</creator>
        
        <subject>Android mobile application; bone mineral density; firebase; software quality; ISO/IEC 25022</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>One of the most critical bone diseases is osteoporosis, which can be evaluated through measurements of bone mineral density. Though there are many commercial portable applications for health in general, few are oriented towards bone health, and these lack a user-friendly interface and data management system. This paper presents the development of a mobile application for the calculation of bone mineral density (BMD) that integrates with Google technology. The Mobile-D methodology for the development of mobile applications was used, owing to the sequential nature of its processes and stages. BMD was calculated using anthropometric regression equations, and an Android-based mobile application built on Google technology was developed. Using Firebase Authentication and Firebase Storage provided by Google, the admin has full control over database management. In short, this mobile application allows the calculation of students' BMD, data storage and uploading to cloud storage for post-processing, and an online data management system with user authentication. In addition, the International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) 25022 standard was used to evaluate the quality in use of the mobile app, resulting in a quality-in-use score of 93%, making this app suitable for use by health professionals for better decision-making.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_21-Quality_in_use_of_an_Android_based_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise Reduction on Bracketed Images for High Dynamic Range Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110820</link>
        <id>10.14569/IJACSA.2020.0110820</id>
        <doi>10.14569/IJACSA.2020.0110820</doi>
        <lastModDate>2020-08-31T13:22:25.9570000+00:00</lastModDate>
        
        <creator>Seong-O Shim</creator>
        
        <subject>Image denoising; high dynamic range imaging; noise detection; noise removal; image restoration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>The quality of a high dynamic range (HDR) image produced from bracketed images taken at different camera exposure times can be degraded by noise contained in the bracketed images. In this paper, we propose a noise reduction method for bracketed images based on the exposure time ratio. First, for each pixel pair of the same scene point lying on two different images, the ratio of their intensity values is compared with the ratio of the exposure times of the images on which the pixels lie. If the compared ratios are close, these two pixels are included in the noise-free pixel set. The complement of the noise-free pixel set is defined as the noisy pixel set. Then, the intensity value of each pixel in the noisy pixel set is corrected by its expected value computed from the noise-free pixel of the same scene point lying on another image. Experimental results show that all noisy intensity values can be correctly restored from noise-free pixels, except for saturated pixels. Denoising results measured by PSNR show that the proposed method outperforms other recent denoising methods such as the based-on pixel density filter (BPDF), the noise adaptive fuzzy switching median filter (NAFSMF), and the adaptive Riesz mean filter (ARmF).</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_20-Noise_Reduction_on_Bracketed_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Handwritten Character Recognition based on Convolution Neural Networks and Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110819</link>
        <id>10.14569/IJACSA.2020.0110819</id>
        <doi>10.14569/IJACSA.2020.0110819</doi>
        <lastModDate>2020-08-31T13:22:25.9270000+00:00</lastModDate>
        
        <creator>Mahmoud Shams</creator>
        
        <creator>Amira. A. Elsonbaty</creator>
        
        <creator>Wael. Z. ElSawy</creator>
        
        <subject>Handwritten Arabic recognition; convolutional neural networks; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Recognition of Arabic characters is essential for the natural language processing and computer vision fields, and the ability to recognize and classify handwritten Arabic letters and characters is essentially required. In this paper, we present an algorithm for recognizing Arabic letters and characters based on deep convolutional neural networks (DCNN) and support vector machines (SVM). This paper addresses the problem of recognizing Arabic handwritten characters by determining the similarity between the input templates and the pre-stored templates using both a fully connected DCNN and a dropout SVM. Furthermore, this paper determines the correct classification rate (CRR), which depends on the accuracy of the correctly classified templates of the recognized handwritten Arabic characters, as well as the error classification rate (ECR). The experimental results of this work indicate the ability of the proposed algorithm to recognize, identify, and verify the input handwritten Arabic characters. Furthermore, the proposed system identifies similar Arabic characters using a clustering algorithm based on the K-means approach to handle the problem of multi-stroke Arabic characters. A comparative evaluation is presented, and the system accuracy reached a 95.07% CRR with a 4.93% ECR compared with the state of the art.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_19-Arabic_Handwritten_Character_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New RTP Packet Payload Shrinking Method to Enhance Bandwidth Exploitation Over RTP Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110818</link>
        <id>10.14569/IJACSA.2020.0110818</id>
        <doi>10.14569/IJACSA.2020.0110818</doi>
        <lastModDate>2020-08-31T13:22:25.9100000+00:00</lastModDate>
        
        <creator>AbdelRahman H. Hussein</creator>
        
        <creator>Mwaffaq Abu-Alhaija</creator>
        
        <creator>Kholoud Nairoukh</creator>
        
        <subject>VoIP; RTP protocol; payload compression; bandwidth exploitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Telecommunications is one of the pillars of running a business. Most institutions are migrating, if not already migrated, to Voice over Internet Protocol (VoIP) technology. However, VoIP still needs some improvements, in terms of network bandwidth exploitation and VoIP call quality, to meet business expectations. Network bandwidth exploitation, which is our concern in this paper, has been enhanced using different approaches and methods. This paper suggests a new method to enhance network bandwidth exploitation using a packet payload shrinking (compression) approach. The suggested method works with the RTP protocol and is called the RTP Payload Shrinking (RPS) method. As the name implies, the RPS method reduces the size of the RTP packet payload by shrinking it based on a specific algorithm, which enhances network bandwidth exploitation. The RPS method utilizes the RTP fields to store the values needed to apply the shrinking algorithm at the sender and receiver sides. The effectiveness of the proposed RPS method has been examined in comparison to the conventional RTP protocol without shrinking. The deployment results showed that the saved bandwidth ratio reached nearly 17% in the tested scenarios, thereby enhancing network bandwidth exploitation.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_18-New_RTP_Packet_Payload_Shrinking_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Security Defence Policies: A Proposed Guidelines for Organisations Cyber Security Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110817</link>
        <id>10.14569/IJACSA.2020.0110817</id>
        <doi>10.14569/IJACSA.2020.0110817</doi>
        <lastModDate>2020-08-31T13:22:25.8930000+00:00</lastModDate>
        
        <creator>Julius Olusegun Oyelami</creator>
        
        <creator>Azleena Mohd Kassim</creator>
        
        <subject>Cyber security; cyber defence policy; organisation; cyber security practices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Many organisations have been struggling to defend their cyberspace without specific directions or guidelines to follow, and cyber attacks have been identified as having a devastatingly potential impact on business operations in a broader perspective. Since then, researchers in cyber security have produced numerous reports on threats and attacks on organisations. This study was conducted to develop and propose Cyber Security Defence Policies (CSDP) by harmonising and synthesizing the existing practices identified from the literature review. Observations and questionnaires were adopted to evaluate, review, and collect data under ethical agreement from 10 organisations. The validation is based on the principal components of the proposed CSDP, using SPSS as the statistical tool. The results show that the validation of the proposed CSDP by 20 experts reveals standard deviations of 0.607, 0.759, 0.801, 0.754, 0.513, 0.587, and 0.510 on each of the principal components, respectively, with no missing values. The correlation matrix and the reproduced correlation matrix for the proposed CSDP indicated 61%, and the percentages of acceptance of the principal components of the proposed CSDP are higher than 50%. Therefore, the outcome shows that the responses indicate acceptance of the proposed CSDP, and the results of the principal component analysis (eigenvalue analysis) are significant enough for implementation; the CSDP can be adopted by organisations as guidelines for organisational cyber security practices.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_17-Cyber_Security_Defence_Policies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Deep Learning Model for Arabic Text Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110816</link>
        <id>10.14569/IJACSA.2020.0110816</id>
        <doi>10.14569/IJACSA.2020.0110816</doi>
        <lastModDate>2020-08-31T13:22:25.8630000+00:00</lastModDate>
        
        <creator>Mohammad Fasha</creator>
        
        <creator>Bassam Hammo</creator>
        
        <creator>Nadim Obeid</creator>
        
        <creator>Jabir AlWidian</creator>
        
        <subject>Arabic optical character recognition; deep learning; convolutional neural networks; recurrent neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Arabic text recognition is a challenging task because of the cursive nature of the Arabic writing system, its joined writing scheme, its large number of ligatures, and many other challenges. Deep Learning (DL) models have achieved significant progress in numerous domains, including computer vision and sequence modelling. This paper presents a model that can recognize Arabic text printed in multiple font types, including fonts that mimic Arabic handwritten scripts. The proposed model employs a hybrid DL network that recognizes Arabic printed text without the need for character segmentation. The model was tested on a custom dataset comprising over two million word samples generated using 18 different Arabic font types. The objective of the testing process was to assess the model's capability in recognizing a diverse set of Arabic fonts representing varied cursive styles. The model achieved good results in recognizing characters and words, and it also achieved promising results in recognizing characters when tested on unseen data. The prepared model, the custom datasets, and the toolkit for generating similar datasets are made publicly available; these tools can be used to prepare models for recognizing other font types as well as to further extend and enhance the performance of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_16-A_Hybrid_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An ACM\IEEE and ABET Compliant Curriculum and Accreditation Management Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110815</link>
        <id>10.14569/IJACSA.2020.0110815</id>
        <doi>10.14569/IJACSA.2020.0110815</doi>
        <lastModDate>2020-08-31T13:22:25.8470000+00:00</lastModDate>
        
        <creator>Manar Salamah Ali</creator>
        
        <subject>Curriculum coherence; body of knowledge; accreditation; knowledge base design; ABET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Following methodological and systemized approaches in creating course syllabi and program curricula is crucial for assuring the coherence (correctness, completeness, consistency, and validity) of curricula. Furthermore, designing coherent curricula has a direct impact on achieving curriculum outcomes. For institutions seeking accreditation, presenting evidence of curriculum coherence is mandatory. In this paper, a general framework architecture for curriculum and accreditation management is proposed. Furthermore, we propose a detailed design for a knowledge base that comprises: a) the ACM\IEEE body of knowledge for the Computer Science Department, b) course syllabi, and c) course articulation matrices. We show how to utilize the proposed knowledge base in the quality improvement life cycle, in ABET accreditation, and as a significant step towards curriculum coherence.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_15-An_ACMIEEE_and_ABET_Compliant_Curriculum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Vietnamese Text Readability using Multi-Level Linguistic Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110814</link>
        <id>10.14569/IJACSA.2020.0110814</id>
        <doi>10.14569/IJACSA.2020.0110814</doi>
        <lastModDate>2020-08-31T13:22:25.8330000+00:00</lastModDate>
        
        <creator>An-Vinh Luong</creator>
        
        <creator>Diep Nguyen</creator>
        
        <creator>Dien Dinh</creator>
        
        <creator>Thuy Bui</creator>
        
        <subject>Text readability; text difficulty; readability formula; linguistics features; Vietnamese</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Text readability is the problem of determining whether a text is suitable for a certain group of readers, and thus building a model to assess the readability of text yields great significance across the disciplines of science, publishing, and education. While text readability has attracted attention since the late nineteenth century for English and other popular languages, it remains relatively underexplored in Vietnamese. Previous studies on this topic in Vietnamese have only focused on the examination of shallow word-level features using surface statistics such as frequency and ratio. Hence, features at higher levels like sentence structure and meaning are still untapped. In this study, we propose the most comprehensive analysis of Vietnamese text readability to date, targeting features at all linguistic levels, ranging from the lexical and phrasal elements to syntactic and semantic factors. This work pioneers the investigation on the effects of multi-level linguistic features on text readability in the Vietnamese language.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_14-Assessing_Vietnamese_Text_Readability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Global Travel Review Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110813</link>
        <id>10.14569/IJACSA.2020.0110813</id>
        <doi>10.14569/IJACSA.2020.0110813</doi>
        <lastModDate>2020-08-31T13:22:25.8170000+00:00</lastModDate>
        
        <creator>Tanakorn Karode</creator>
        
        <creator>Warodom Werapun</creator>
        
        <creator>Tanwa Arpornthip</creator>
        
        <subject>Consumer online review; traveling; blockchain; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>An online review system is an important part of almost every e-commerce platform, especially in tourism e-commerce. However, various problems exist in current online review systems. The review content is stored in the centralized database of each individual platform, and each platform differs in its review management methods. In some cases, the review score of the same product disagrees across different platforms. Moreover, a centralized system has low transparency because it is difficult to trace individual actions within the system. As a result, some users are skeptical of the reliability of online reviews in centralized systems. This work proposes a global travel review framework based on blockchain technology. The incorporation of blockchain helps improve an online review system through its unique features of high transparency, security, and reliability. The best practices for online review management from popular platforms and the guidelines from trusted sources are used to develop the new system. Additionally, the proposed framework relies on a community-driven environment. The accessibility level of users is controlled by using a smart contract. There is no single authoritative owner of the system; all participants in the system can exert control over it equally. This work illustrates the details of a blockchain-based global travel review framework, and the advantages and disadvantages of such a system are discussed. The proposed framework can be easily integrated with any existing platform since it can be accessed publicly.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_13-Blockchain_based_Global_Travel_Review_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monopole Antenna on Transparent Substrate and Rectifier for Energy Harvesting Applications in 5G</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110812</link>
        <id>10.14569/IJACSA.2020.0110812</id>
        <doi>10.14569/IJACSA.2020.0110812</doi>
        <lastModDate>2020-08-31T13:22:25.7870000+00:00</lastModDate>
        
        <creator>S. M. Kayser Azam</creator>
        
        <creator>Md. Shazzadul Islam</creator>
        
        <creator>A. K. M. Zakir Hossain</creator>
        
        <creator>Mohamadariff Othman</creator>
        
        <subject>Monopole antenna; transparent substrate; 5G; energy harvesting; rectifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In line with the energy harvesting required for the emerging 5G technology, this article proposes a planar monopole antenna and a rectifier. The proposed Coplanar Waveguide (CPW)-fed antenna is printable on a transparent Poly-Ethylene Terephthalate (PET) substrate. The antenna has a center frequency of 3.51 GHz within a bandwidth of 307 MHz that covers the pioneer 5G band in Malaysia. The designed omnidirectional antenna exhibits a maximum gain of 1.51 dBi with a total efficiency of 95.17 percent. At the antenna frequency, a rectifier has been designed with the voltage doubler technique on a Rogers RO3003 substrate. At an input RF power of 0 dBm, the rectifier has a power conversion efficiency of around 42 percent. The proposed antenna rejects harmonics up to at least 16 GHz, which makes it compatible with the rectifier and eliminates the need for an additional bandpass filter or impedance matching network in the energy harvesting system.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_12-Monopole_Antenna_on_Transparent_Substrate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Performance Analysis of Different Dielectric Substrate based Microstrip Patch Antenna for 5G Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110811</link>
        <id>10.14569/IJACSA.2020.0110811</id>
        <doi>10.14569/IJACSA.2020.0110811</doi>
        <lastModDate>2020-08-31T13:22:25.7700000+00:00</lastModDate>
        
        <creator>Nurulazlina Ramli</creator>
        
        <creator>Shehab Khan Noor</creator>
        
        <creator>Taher Khalifa</creator>
        
        <creator>N. H. Abd Rahman</creator>
        
        <subject>Efficiency; gain; microstrip patch antenna; permittivity; substrates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>In this paper, 3.5 GHz microstrip patch antennas using three different substrate materials with varying relative permittivity have been designed. The thicknesses of the substrates chosen for this work differ slightly from each other: 1.6 mm for FR-4, 1.575 mm for RT-5880, and 1.58 mm for TLC-30. The three substrate materials are FR-4 (Design-1), RT-5880 (Design-2), and TLC-30 (Design-3), with relative permittivities of 4.3, 2.2, and 3, respectively. The antennas' performance in terms of reflection coefficient, voltage standing wave ratio (VSWR), bandwidth, gain, and efficiency is simulated, analyzed, and compared using CST Microwave Studio (CST 2019). The findings reveal that there is a significant change in gain and bandwidth due to the different relative permittivities and thicknesses of the substrate materials. The gains achieved were 3.338 dB, 4.660 dB, and 5.083 dB for Design-1, Design-2, and Design-3, respectively. TLC-30 gave the best antenna efficiency at 75.70%, compared to 60.13% for FR-4 and 61.51% for RT-5880. All the proposed antennas have a bandwidth above 100 MHz: Design-1 had a bandwidth of 247.1 MHz, Design-2 a bandwidth of 129.7 MHz, and Design-3 a bandwidth of 177.2 MHz.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_11-Design_and_Performance_Analysis_of_Different_Dielectric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Binary Pattern Method (LBP) and Principal Component Analysis (PCA) for Periocular Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110810</link>
        <id>10.14569/IJACSA.2020.0110810</id>
        <doi>10.14569/IJACSA.2020.0110810</doi>
        <lastModDate>2020-08-31T13:22:25.7400000+00:00</lastModDate>
        
        <creator>Sereen Alkhazali</creator>
        
        <creator>Mohammad El-Bashir</creator>
        
        <subject>Periocular recognition; Local Binary Pattern (LBP); Principal Component Analysis (PCA); k-Nearest Neighbors (k-NN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Identification of identity through the eye is gaining increasing importance. Commonly, researchers approach the eye through one of three parts: the iris, the area around the eye, or the iris together with its surroundings. This study follows a holistic approach to identity recognition by using the iris and the whole periocular area, and proposes a periocular recognition system (PRS) developed using the Local Binary Pattern (LBP) technique combined with Principal Component Analysis (PCA) at the feature extraction stage and the k-nearest neighbors (k-NN) algorithm as the classifier at the classification stage. This system achieves identity recognition through three steps: pre-processing, feature extraction, and classification. Pre-processing is applied to the images to convert them to grayscale. In the feature extraction step, the LBP method is applied to extract texture features from the images, and PCA is used to reduce data dimensionality and retain the relevant data so that only the important features are extracted. These two steps are applied in both the training phase and the testing phase of image processing. The testing data sets are then processed using the k-NN classifier. The proposed PRS was tested on data drawn from the PolyU database using more than one experimental setting. Specifically, system performance was tested once on all 209 subjects present in the database and once on 140 subjects. This database also contains images taken in the visible (VIS) and near-infrared (NIR) regions of the electromagnetic radiation (EMR) spectrum, so the system was tested on images taken in both regions separately for matching. The proposed PRS also benefited from the availability of images of the right and left perioculars: performance was tested on images of each side of the periocular area (the left and right sides) separately, as well as on the combination of the two sides. The identity recognition rates of the proposed PRS were most often higher than the recognition rates produced by systems reported in the literature. The highest recognition accuracy obtained from the proposed system, 98.21%, was associated with the 140-subject data sub-set.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_10-Local_Binary_Pattern_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Fuzzy Clustering Approach for Gene Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110809</link>
        <id>10.14569/IJACSA.2020.0110809</id>
        <doi>10.14569/IJACSA.2020.0110809</doi>
        <lastModDate>2020-08-31T13:22:25.7230000+00:00</lastModDate>
        
        <creator>Meskat Jahan</creator>
        
        <creator>Mahmudul Hasan</creator>
        
        <subject>Meskat-Hasan clustering (MH clustering); MH Extended K-Means clustering; K-Means; fuzzy clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Automatic cluster detection is crucial for real-time gene expression data, where the quantity of missing values and the noise ratio are relatively high. In this paper, algorithms for dynamically determining the number of clusters and for clustering are proposed without any pre- or post-clustering assumptions. The proposed fuzzy Meskat-Hasan (MH) clustering provides solutions for sophisticated datasets. MH clustering extracts the hidden information of unknown datasets; based on these findings, it determines the number of clusters and performs seed-based clustering dynamically. The MH Extended K-Means clustering algorithm is a nonparametric extension of the traditional K-Means algorithm and provides a solution for automatic cluster detection, including runtime cluster selection. To ensure accuracy and optimum partitioning, seven validation techniques were used for cluster evaluation, and four well-known datasets were used for validation purposes. In the end, the MH clustering and MH Extended K-Means clustering algorithms were found to outperform traditional algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_9-A_Novel_Fuzzy_Clustering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Breast Cancer via Supervised Machine Learning Methods on Class Imbalanced Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110808</link>
        <id>10.14569/IJACSA.2020.0110808</id>
        <doi>10.14569/IJACSA.2020.0110808</doi>
        <lastModDate>2020-08-31T13:22:25.6900000+00:00</lastModDate>
        
        <creator>Keerthana Rajendran</creator>
        
        <creator>Manoj Jayabalan</creator>
        
        <creator>Vinesh Thiruchelvam</creator>
        
        <subject>Breast cancer; class imbalance; diagnosis; bayesian network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Breast cancer, the second leading cause of fatality among women, is a widespread global health concern. Predicting the occurrence of breast cancer from risk factors paves the way to earlier diagnosis and more efficient treatment. Although many predictive models have been developed for breast cancer in the past, most of these models are generated from highly imbalanced data. Imbalanced data are usually biased towards the majority class, but in cancer diagnosis it is crucial to correctly diagnose the patients with cancer, who are often the minority class. This study applies three different class balancing techniques, namely oversampling (Synthetic Minority Oversampling Technique (SMOTE)), undersampling (SpreadSubsample) and a hybrid method (SMOTE and SpreadSubsample), to the Breast Cancer Surveillance Consortium (BCSC) dataset before constructing the supervised learning methods. The algorithms employed in this study include Na&#239;ve Bayes, Bayesian Network, Random Forest and Decision Tree (C4.5). The balancing method that yielded the best performance across all four classifiers was tested using the validation data to determine the final predictive model. The performance of the classifiers was evaluated using the Receiver Operating Characteristic (ROC) curve, sensitivity, and specificity.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_8-Predicting_Breast_Cancer_via_Supervised_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Object Detection using Yolov3 and Kitti Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110807</link>
        <id>10.14569/IJACSA.2020.0110807</id>
        <doi>10.14569/IJACSA.2020.0110807</doi>
        <lastModDate>2020-08-31T13:22:25.6600000+00:00</lastModDate>
        
        <creator>Ghaith Al-refai</creator>
        
        <creator>Mohammed Al-refai</creator>
        
        <subject>Pedestrian detection; computer vision; CNN; machine learning; artificial intelligence; vehicle safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Detection of road objects (such as pedestrians and vehicles) is a very important step towards enhancing road safety and achieving autonomous driving. Many on-vehicle sensors, such as radars, lidars and ultrasonic sensors, are used to detect surrounding objects. However, cameras are the most widely used sensors for road object detection because of the rich information they provide and their low price compared to other sensors. Machine learning and computer vision algorithms are utilized to classify objects in the collected images and videos. Many computer vision algorithms have been proposed for image and video object detection, e.g. logistic regression and SVM with feature extraction. However, Convolutional Neural Network (CNN) algorithms have shown high detection accuracy compared to other approaches. This research implements the You Only Look Once (YOLO) algorithm, which uses the Darknet-53 CNN, to detect four classes: pedestrians, vehicles, trucks and cyclists. The model is trained using the Kitti image dataset, which was collected from public roads using a vehicle’s front-looking camera. The algorithm is tested, and detection results are presented.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_7-Road_Object_Detection_Using_Yolov3.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of K-Nearest Neighbour Classification Performance on Fatigue and Non-Fatigue EMG Signal Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110806</link>
        <id>10.14569/IJACSA.2020.0110806</id>
        <doi>10.14569/IJACSA.2020.0110806</doi>
        <lastModDate>2020-08-31T13:22:25.6430000+00:00</lastModDate>
        
        <creator>W. M. Bukhari</creator>
        
        <creator>C. J. Yun</creator>
        
        <creator>A. M. Kassim</creator>
        
        <creator>M. O. Tokhi</creator>
        
        <subject>Electromyography; surface electromyography; k-nearest neighbour classifier; feature extraction; dynamic contraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>For our body to move, muscles must activate by contracting and relaxing. Muscle activation produces bio-electric signals that can be detected using electromyography (EMG). The signal produced by a muscle is affected by the type of contraction performed: eccentric contraction generates different EMG signals from concentric contraction. An EMG signal contains multiple features, which can be extracted using MATLAB software. This paper focuses on the biceps brachii and brachioradialis in the upper arm and forearm, respectively. The EMG signals are acquired using surface EMG, whereby electrode pads are placed on the skin surface over the muscle. Features can then be extracted from the EMG signal; this paper focuses on the MAV, VAR, and RMS features. The features are then classified into eccentric, concentric or isometric contraction. The performance of the K-Nearest Neighbour (KNN) classifier is inconsistent due to EMG data variability, and the accuracy varies from one data set to another. However, it is concluded that non-fatigue signal classification accuracy is higher than fatigue signal classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_6-Study_of_K_Nearest_Neighbour_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Verification of Serial Fault Simulation for FPGA Designs using the Proposed RASP-FIT Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110805</link>
        <id>10.14569/IJACSA.2020.0110805</id>
        <doi>10.14569/IJACSA.2020.0110805</doi>
        <lastModDate>2020-08-31T13:22:25.6130000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <creator>Ali Hayek</creator>
        
        <creator>Josef Borcsok</creator>
        
        <subject>FPGA design; fault injection; fault simulation; RASP-FIT tool; Verilog HDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Fault simulation is a critical approach for many applications, such as fault detection and diagnostics, test set quality measurement, test vector generation, and circuit testability, carried out with the help of fault injection techniques. Fault simulation is divided into many types, the most straightforward of which is serial fault simulation. In the simulation process, the circuit under test is faulted, and a faulty copy is obtained using either a simulator command technique or an instrumentation technique. A fault simulator must examine the behaviour of a specified target fault in the design and classify it as detected or undetected by the applied test patterns. Modifying the original code is a very challenging and time-consuming task. Therefore, the RASP-FIT tool was developed, which alters the fault-free FPGA design under investigation at the Verilog HDL code level. It produces faulty copies of the design along with the top design file for several fault simulation methods. Using this tool, a serial fault simulation environment can be created with little effort. In this work, a serial fault simulation method is verified and validated using the RASP-FIT tool on an ISCAS’85 benchmark design as an example.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_5-Development_and_Verification_of_Serial_Fault_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Home Intrusion Detection System using Recycled Edge Devices and Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110804</link>
        <id>10.14569/IJACSA.2020.0110804</id>
        <doi>10.14569/IJACSA.2020.0110804</doi>
        <lastModDate>2020-08-31T13:22:25.5970000+00:00</lastModDate>
        
        <creator>Daewoo Kwon</creator>
        
        <creator>Jinseok Song</creator>
        
        <creator>Chanho Choi</creator>
        
        <creator>Eun-Kyu Lee</creator>
        
        <subject>Security; intrusion detection; edge computing; Internet of Things; recycling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>This paper proposes a home intrusion detection system that makes the best use of a retired smartphone and an existing Wi-Fi access point. On-board sensors in the smartphone, which is mounted on an entrance door, record signals upon unwanted door opening. The access point is reconfigured to serve as a home server, so it can process the sensor data to detect unauthorized access to the home by an intruder. Recycling devices enables a home owner to build their own security system at no cost, and also helps our society deal with millions of retired devices and the waste of computing resources in already-deployed IT devices. To improve detection accuracy, this paper proposes a detection method that employs a machine learning algorithm and a time series analysis technique. To minimize energy consumption on the battery-powered smartphone, the proposed system utilizes as few sensors as possible and offloads all computation to the home edge server. We develop a prototype and run experiments to evaluate the detection accuracy of the proposed system. Results show that it can detect intrusion with a probability of 95% to 100%.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_4-A_Home_Inrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impacts of Decomposition Techniques on Performance and Latency of Microservices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110803</link>
        <id>10.14569/IJACSA.2020.0110803</id>
        <doi>10.14569/IJACSA.2020.0110803</doi>
        <lastModDate>2020-08-31T13:22:25.5830000+00:00</lastModDate>
        
        <creator>Chaitanya K. Rudrabhatla</creator>
        
        <subject>Microservices; decomposition techniques; single responsibility principle; common closure principle; performance; latency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Microservice architecture (MSA) has undoubtedly become the most popular modern-day architecture, often used in conjunction with the rapidly advancing public cloud platforms to reap the benefits of scalability, elasticity and agility. Though MSA is highly advantageous and comes with a huge set of benefits, it has its own set of challenges. To achieve separation of concerns and optimal performance, clearly defining the boundaries of the services and their underlying persistent stores is essential, but logically segregating the services is a major challenge faced while designing an MSA. Guiding principles such as the single responsibility principle (SRP) and the common closure principle (CCP) are put in place to drive the design and separation of microservices. With these techniques, the service layer can be designed by (i) building the services related to a business subdomain and packaging them as a microservice; (ii) defining the entity relationship model and then building the services based on the business capabilities, which are grouped together as a microservice; or (iii) understanding the big picture of the application scope and combining both strategies to achieve the best of both worlds. This paper explains these decomposition approaches in detail by comparing them with real-world use cases, explains which pattern is suitable under which circumstances, and examines the impacts of these approaches on performance and latency using a research project.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_3-Impacts_of_Decomposition_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Radiating L-shaped Search Algorithm for NASA Swarm Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110802</link>
        <id>10.14569/IJACSA.2020.0110802</id>
        <doi>10.14569/IJACSA.2020.0110802</doi>
        <lastModDate>2020-08-31T13:22:25.5500000+00:00</lastModDate>
        
        <creator>Tariq Tashtoush</creator>
        
        <creator>Jalil Ahmed</creator>
        
        <creator>Valeria Arce</creator>
        
        <creator>Heriberto Dominguez</creator>
        
        <creator>Kevin Estrada</creator>
        
        <creator>William Montes</creator>
        
        <creator>Ashley Paredez</creator>
        
        <creator>Pedro Salce</creator>
        
        <creator>Alexia Serna</creator>
        
        <creator>Mireya Zarazua</creator>
        
        <subject>NASA Swarmathon competition; swarm robotics; search algorithm; autonomous; Robot Operating System (ROS); NASA space exploration; simulation; autonomous robot swarm; collaborative robots; autonomous behavior; cooperative robots; swarmies; L-Shaped search; GitHub</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>This paper focuses on the design of the search algorithm that the DustySWARM team used in the 2019 NASA Swarmathon competition. The developed search algorithm was implemented and tested on multiple rovers, a.k.a. Swarmies or swarm robots. Swarmies are compact rovers designed by NASA to mimic ant behavior and perform an autonomous search for simulated Mars resources. This effort aimed to assist NASA’s mission to explore space and discover new resources on the Moon and Mars. NASA’s ongoing project has the goal of sending robots that explore and collect resources for analysis before sending astronauts, as the swarm option is safer and more affordable. All rovers must use the same algorithm and collaborate and cooperate to find all available resources in their search path and retrieve them to the space station location. Additionally, the swarmies search autonomously while avoiding obstacles and mapping the surrounding environment for future missions. The algorithm allows a swarm of six robots to search an unknown area for simulated resources called AprilTags (cubes with QR codes). The code was developed using C/C++, GitHub, and the Robot Operating System (ROS), and was tested both in the Gazebo simulation environment and in physical trials on the swarmies. The team analyzed several algorithms from previous years and other researchers, then developed the Radiating L-Shape Search (RLS) algorithm. This paper summarizes the algorithm design, code development, and trial results that were provided to the NASA Space Exploration Engineering team.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_2_Developing_a_Radiating_L_Shaped_Search_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LXPER Index: A Curriculum-specific Text Readability Assessment Model for EFL Students in Korea</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110801</link>
        <id>10.14569/IJACSA.2020.0110801</id>
        <doi>10.14569/IJACSA.2020.0110801</doi>
        <lastModDate>2020-08-31T13:22:25.5030000+00:00</lastModDate>
        
        <creator>Bruce W. Lee</creator>
        
        <creator>Jason Hyung-Jong Lee</creator>
        
        <subject>Natural language processing; machine learning; text readability assessment; EFL (English as Foreign Language) education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(8), 2020</description>
        <description>Automatic readability assessment is one of the most important applications of Natural Language Processing (NLP) in education. Since automatic readability assessment allows the fast selection of appropriate reading material for readers at all levels of proficiency, it can be particularly useful for the English education of English as a Foreign Language (EFL) students around the world. However, most readability assessment models are developed for native readers of English and have low accuracy for texts in non-native English Language Training (ELT) curricula. We introduce the LXPER Index, a readability assessment model for non-native EFL readers in the ELT curriculum of Korea. To measure the LXPER Index, we use a mixture of 22 features that we show to be significant in text readability assessment. We also introduce the Text Corpus of the Korean ELT Curriculum (CoKEC-text), the first collection of English texts from a non-native country’s ELT curriculum with each text’s target grade level labeled. In addition, we assembled the Word Corpus of the Korean ELT Curriculum (CoKEC-word), a collection of words from the Korean ELT curriculum with word difficulty labels. Our experiments show that our new model, trained with CoKEC-text, significantly improves the accuracy of automatic readability assessment for texts in the Korean ELT curriculum. The methodology used in this research can be applied to other ELT curricula around the world.</description>
        <description>http://thesai.org/Downloads/Volume11No8/Paper_1-Lxper_Index_A_Curriculum_Specific_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-based Course Teacher Assignment within Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110787</link>
        <id>10.14569/IJACSA.2020.0110787</id>
        <doi>10.14569/IJACSA.2020.0110787</doi>
        <lastModDate>2020-08-04T06:42:23.1470000+00:00</lastModDate>
        
        <creator>Ghadeer Ashour</creator>
        
        <creator>Ahmad Al-Dubai</creator>
        
        <creator>Imed Romdhani</creator>
        
        <subject>Semantic; university ontology; academic profile; syllabus; course-teacher assignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Educational institutions suffer from an enormous amount of data that keeps growing continuously. These data are usually scattered and unorganised, and come from different sources in different formats. In addition, the modernization vision within these institutions aims to reduce human action and replace it with automatic device interactions. To gain the full benefit from these data and use them within modern systems, they have to be readable and understandable by machines. Data and knowledge with semantic descriptions make it easy to monitor and manage decision processes within universities and to solve many educational challenges. In this study, an educational ontology is developed to semantically model courses and academic profiles in universities, and it is used to solve the challenge of assigning the most appropriate academic teacher to teach a specific course.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_87-Ontology_based_Course_Teacher_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal for a Software Architecture as a Tool for the Fight Against Corruption in the Regional Governments of Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110786</link>
        <id>10.14569/IJACSA.2020.0110786</id>
        <doi>10.14569/IJACSA.2020.0110786</doi>
        <lastModDate>2020-07-31T11:34:59.6070000+00:00</lastModDate>
        
        <creator>Martin M. Soto-Cordova</creator>
        
        <creator>Samuel Leon-Cardenas</creator>
        
        <creator>Kevin Huayhuas-Caripaza</creator>
        
        <creator>Raquel M. Sotomayor-Parian</creator>
        
        <subject>Software architecture; anti-corruption; regional government; local government; corruption perception index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This paper addresses the problem of corruption in Peru, with an emphasis on regional governments, and presents a proposal for an anti-corruption software application architecture for those levels of government. The design of the proposal starts from an analysis of corruption encompassing statistical studies, the evolution of trust, government management, the legal situation, and incidents involving information technology. Aspects of budget allocation, crime data, political party financing data, management resources, contracting processes, integration systems and citizen participation are also presented, leading to the presentation of the data structure and resources for the software application architecture. The methodology used is exploratory and documentary. In addition, a systemic approach is considered, with development in three layers: data persistence, logical process, and presentation, taking into account the interrelationships that must exist between them for the development of the proposed architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_86-Proposal_for_a_Software_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Customer Retention using XGBoost and Balancing Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110785</link>
        <id>10.14569/IJACSA.2020.0110785</id>
        <doi>10.14569/IJACSA.2020.0110785</doi>
        <lastModDate>2020-07-31T11:34:59.5730000+00:00</lastModDate>
        
        <creator>Atallah M. AL-Shatnwai</creator>
        
        <creator>Mohammad Faris</creator>
        
        <subject>Customer retention; churn prediction; oversampling; XGBoost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Customer retention is considered one of the important concerns for many companies and financial institutions, such as banks, telecommunication service providers, investment services, insurance and retail sectors. Recent marketing indicators and metrics show that attracting and gaining new customers or subscribers is much more expensive and difficult than retaining existing ones. Therefore, losing a customer or a subscriber will negatively impact the growth and profitability of the company. In this work, we propose a customer retention model based on one of the most powerful machine learning classifiers, XGBoost. This classifier is combined with different oversampling methods to improve its performance on the imbalanced dataset used. The experimental results are very promising compared to other well-known classifiers.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_85-Predicting_Customer_Retention_using_XGBoost.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vehicle Counting using Deep Learning Models: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110784</link>
        <id>10.14569/IJACSA.2020.0110784</id>
        <doi>10.14569/IJACSA.2020.0110784</doi>
        <lastModDate>2020-07-31T11:34:59.5600000+00:00</lastModDate>
        
        <creator>Azizi Abdullah</creator>
        
        <creator>Jaison Oothariasamy</creator>
        
        <subject>CNN; transfer learning; deep learning; object detection; vehicle detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Recently, there has been a shift to deep learning architectures for better application in vehicle traffic control systems. One popular deep learning library used for detecting vehicles is TensorFlow. In TensorFlow, pre-trained models are very efficient and can easily be transferred to solve other, similar problems. However, inconsistency between the original dataset used in the pre-trained model and the target dataset for testing can lead to low detection accuracy and hinder vehicle counting performance. One major obstacle in retraining deep learning architectures is that the network requires a large training corpus to secure good results. Therefore, we propose to perform data annotation and transfer learning from an existing model to construct a new model for vehicle detection and counting in real-world urban traffic scenes. The new model is then compared against the experimental data to verify its validity. This paper also reports experimental results comprising a set of innovative tests to identify the best detection algorithm and system performance. Furthermore, a simple vehicle tracking method is proposed to aid the vehicle counting process in challenging illumination and traffic conditions. The results showed a significant improvement of the proposed system, with an average vehicle counting accuracy of 80.90%.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_84-Vehicle_Counting_using_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Class Neural Network Model for Rapid Detection of IoT Botnet Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110783</link>
        <id>10.14569/IJACSA.2020.0110783</id>
        <doi>10.14569/IJACSA.2020.0110783</doi>
        <lastModDate>2020-07-31T11:34:59.5430000+00:00</lastModDate>
        
        <creator>Haifaa Alzahrani</creator>
        
        <creator>Maysoon Abulkhair</creator>
        
        <creator>Entisar Alkayal</creator>
        
        <subject>Internet of Things (IoT); IoT botnets; IoT security; intrusion detection system; deep learning; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The tremendous number of Internet of Things (IoT) devices and their widespread use have made our lives considerably more manageable and safer. At the same time, however, the vulnerability of these innovations means that our day-to-day existence is surrounded by insecure devices, thereby facilitating ways for cybercriminals to launch various attacks by large-scale robot networks (botnets) through IoT. In consideration of these issues, we propose a neural network-based model to detect IoT botnet attacks. Furthermore, the model provides multi-class classification, which is necessary for understanding the attacks and taking appropriate countermeasures to stop them. In addition, it is independent and does not require specific equipment or software to fetch the required features. According to the conducted experiments, the proposed model is accurate, achieving F1 scores of 99.99% and 99.04% on two benchmark datasets, in addition to fulfilling IoT constraints regarding complexity and speed. It is less complicated in terms of computations, and it provides real-time detection that outperforms the state of the art, achieving detection time ratios of 1:5 and 1:8.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_83-A_Multi_Class_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Single and Ensemble Classification for Predicting User’s Restaurant Preference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110782</link>
        <id>10.14569/IJACSA.2020.0110782</id>
        <doi>10.14569/IJACSA.2020.0110782</doi>
        <lastModDate>2020-07-31T11:34:59.5100000+00:00</lastModDate>
        
        <creator>Esra’a Alshdaifat</creator>
        
        <creator>Ala’a Al-shdaifat</creator>
        
        <subject>Classification; data mining; ensemble algorithms; restaurant preferences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Classification is one of the most attractive and powerful data mining functionalities. Classification algorithms are applied to real-world problems to produce intelligent prediction models. Two main categories of classification algorithms can be adopted for generating prediction models: Single and Ensemble classification algorithms. In this paper, both categories are utilized to generate a novel prediction model to predict restaurant category preferences. More specifically, the central idea espoused in this paper is to construct an effective prediction model, using Single and Ensemble classification algorithms, to assist people in determining the most relevant place to go based on their demographic data, income level, and place preferences. Therefore, this paper introduces a new application of the classification task. According to the reported experimental results, an effective Restaurant Category Preferences Prediction Model (RCPPM) could be generated using classification algorithms. In addition, Bagging Homogeneous Ensemble classification produced the most effective RCPPM.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_82-Single_and_Ensemble_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abandoned Object Detection using Frame Differencing and Background Subtraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110781</link>
        <id>10.14569/IJACSA.2020.0110781</id>
        <doi>10.14569/IJACSA.2020.0110781</doi>
        <lastModDate>2020-07-31T11:34:59.4970000+00:00</lastModDate>
        
        <creator>Mohiu Din</creator>
        
        <creator>Aneela Bashir</creator>
        
        <creator>Abdul Basit</creator>
        
        <creator>Sadia Lakho</creator>
        
        <subject>Object detection; video surveillance; tracking; background subtraction; frame differencing; motion model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Tracking objects with fixed surveillance cameras is widely used for security purposes in public areas such as train stations, airports, parking areas, and public transportation for the prevention of terrorism. Once an object is accurately detected in the image scene, various visual algorithms can be applied in a number of applications. In this paper, we introduce a model for tracking multiple objects and detecting abandoned luggage in a real-time environment. In our model, we used the initial frames to model the background scene. Next, we used a motion model, namely background subtraction, to detect and track moving objects such as the owner and the luggage. The proposed model also maintains the position history of moving objects, followed by a frame differencing technique to find the luggage history and detect luggage abandoned by a human. We used the PETS2006 and PETS2007 datasets to test the proposed system in various indoor and outdoor environments with varying lighting conditions.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_81-Abandoned_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Viral and Bacterial Pneumonia Diagnosis via Deep Learning Techniques and Model Explainability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110780</link>
        <id>10.14569/IJACSA.2020.0110780</id>
        <doi>10.14569/IJACSA.2020.0110780</doi>
        <lastModDate>2020-07-31T11:34:59.4800000+00:00</lastModDate>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Toan Bao Tran</creator>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Trung Phuoc Le</creator>
        
        <creator>Nghi Cong Tran</creator>
        
        <subject>Interpretability; pneumonia; x-ray images; bacterial and viral pneumonia; image-based disease diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Pneumonia is one of the most serious diseases for infants and young children, people older than age 65, and people with health problems or weakened immune systems. From numerous studies, scientists have found that a variety of organisms, including bacteria, viruses, and fungi, can be the cause of the disease. The Coronavirus pandemic (COVID-2019), which comes from a type of pneumonia, has been causing hundreds of thousands of deaths and is still progressing. Machine learning approaches are applied to develop models for medicine, but they still work as black boxes, making it difficult to interpret the output generated by machine learning models. In this study, we propose a method for image-based diagnosis of Pneumonia leveraging deep learning techniques and the interpretability of explanation models such as Local Interpretable Model-agnostic Explanations and Saliency maps. We experiment with a variety of image sizes and Convolutional Neural Network architectures to evaluate the efficiency of the proposed method on a set of Chest x-ray images. The work is expected to provide an approach to distinguish between healthy individuals and patients who are affected by Pneumonia, as well as to differentiate between viral Pneumonia and bacterial Pneumonia, by providing signals supporting image-based disease diagnosis approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_80-Viral_and_Bacterial_Pneumonia_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Efficient Power Supply for the Proper Operation of Bio-Mimetic Soft Lens</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110779</link>
        <id>10.14569/IJACSA.2020.0110779</id>
        <doi>10.14569/IJACSA.2020.0110779</doi>
        <lastModDate>2020-07-31T11:34:59.4630000+00:00</lastModDate>
        
        <creator>Saad Hayat</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Malik Shah Zeb Ali</creator>
        
        <creator>Muhammad Qaiser Khan</creator>
        
        <creator>Muhammad Salman Khan</creator>
        
        <creator>Muhammad Usama</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <creator>Asif Nawaz</creator>
        
        <subject>Cockcroft-Walton multiplier; EOG; elastomers; bio-mimetic lens; template</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Soft Robotics is one of the emerging and most actively researched fields in Robotics, which collaborates and interacts with the Human Machine Interface (HMI). Several power electronics and electrical devices are used for the proper operation of these robots, among which the high voltage power supply plays a vital role. Several approaches are used for the design of the high voltage power supply, but there is still a deficiency in designs that fulfill the desired level of high efficiency and low complexity. This paper presents an efficient power supply for the control of a bio-mimetic lens. The proposed power supply is designed using a boost converter, a single phase inverter, and the Cockcroft-Walton voltage multiplier. The work employs power electronics to achieve an efficient high voltage power supply and boosts the voltage level up to 5 kV. Numerical simulations are performed for comprehensive testing of the proposed model. Simulink is used for designing the high voltage supply for the simulation work. Moreover, the results are verified with a laboratory setup. The experimental results are close to the simulation results with an error of less than 3%, which proves the validity of the proposed high voltage power supply.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_79-Design_of_Efficient_Power_Supply.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Disease Prediction on Imbalanced Metagenomic Dataset by Cost-Sensitive</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110778</link>
        <id>10.14569/IJACSA.2020.0110778</id>
        <doi>10.14569/IJACSA.2020.0110778</doi>
        <lastModDate>2020-07-31T11:34:59.4500000+00:00</lastModDate>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Toan Bao Tran</creator>
        
        <creator>Quan Minh Bui</creator>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Trung Phuoc Le</creator>
        
        <creator>Nghi Cong Tran</creator>
        
        <subject>Cost-sensitive; imbalanced datasets; disease prediction; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Imbalanced datasets commonly appear in many real-world applications and studies. For metagenomic data, we face the same issue, where the number of patients is greater than the number of healthy individuals or vice versa. In this study, we propose a method to handle the imbalanced dataset issue with a cost-sensitive approach. The proposed method is evaluated on an imbalanced metagenomic dataset related to Inflammatory bowel disease to perform prediction tasks. Our method reaches a noteworthy improvement in prediction performance with deep learning algorithms, including a MultiLayer Perceptron and a Convolutional Neural Network, combined with the proposed cost-sensitive approach for Metagenome-based Disease Prediction tasks.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_78-Enhancing_Disease_Prediction_on_Imbalanced_Metagenomic_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DeepScratch: Scratch Programming Language Extension for Deep Learning Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110777</link>
        <id>10.14569/IJACSA.2020.0110777</id>
        <doi>10.14569/IJACSA.2020.0110777</doi>
        <lastModDate>2020-07-31T11:34:59.4330000+00:00</lastModDate>
        
        <creator>Nora Alturayeif</creator>
        
        <creator>Nouf Alturaief</creator>
        
        <creator>Zainab Alhathloul</creator>
        
        <subject>Deep learning; visual programming languages; programming education; formal language definitions; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Visual programming languages make programming more accessible for novices, which opens more opportunities to innovate and develop problem-solving skills. Besides, deep learning is one of the trending computer science fields that has a profound impact on our daily life, and it is important that young people are aware of how our world works. In this study, we partially attribute the difficulties novices face in building deep learning models to the programming language used. This paper presents DeepScratch, a new programming language extension to Scratch that provides powerful language elements to facilitate building and learning about deep learning models. We present the implementation process of DeepScratch and explain the syntactical definition and the lexical definition of the extended vocabulary. DeepScratch provides two options to implement deep learning models: training a neural network on built-in datasets and using pre-trained deep learning models. The two options are provided to serve different age groups and educational levels. The preliminary evaluation shows the usability and the effectiveness of this extension as a tool for kids to learn about deep learning.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_77-DeepScratch_Scratch_Programming_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Rate Limit in Low and High SNR Regime for Nakagami-q Fading Wireless Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110776</link>
        <id>10.14569/IJACSA.2020.0110776</id>
        <doi>10.14569/IJACSA.2020.0110776</doi>
        <lastModDate>2020-07-31T11:34:59.4170000+00:00</lastModDate>
        
        <creator>Md. Mazid-Ul-Haque</creator>
        
        <creator>Md. Sohidul Islam</creator>
        
        <subject>Wireless communication; Nakagami-q fading; SISO channel capacity; low SNR regime; high SNR regime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>An adequate data rate is always desired in wireless communication channels. Previously, a few fading models were used to model wireless communication channels and to perform analysis on them. In this paper, analyses of the data rate limit of a single-input single-output (SISO) wireless communication system over Nakagami-q fading channels are presented. The calculation of capacity has been carried out using small and large limit argument approximations, where the small and large limit argument approximations correspond to the low and high signal-to-noise ratio (SNR) regimes, respectively. The analytical solution for channel capacity is presented using these approximations. The behavior of channel capacity with respect to SNR and the fading parameter, respectively, has been investigated in depth. The channel capacity behavior for the low SNR and high SNR regimes has also been compared and analyzed. It was found that the channel capacity increases with increasing SNR in the low SNR regime, and the channel capacity behaves in the same manner in the high SNR regime as well.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_76-Data_Rate_Limit_in_Low_and_High_SNR_Regime.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network and Topic Modeling based Hybrid Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110775</link>
        <id>10.14569/IJACSA.2020.0110775</id>
        <doi>10.14569/IJACSA.2020.0110775</doi>
        <lastModDate>2020-07-31T11:34:59.3870000+00:00</lastModDate>
        
        <creator>Hira Kanwal</creator>
        
        <creator>Muhammad Assam</creator>
        
        <creator>Abdul Jabbar</creator>
        
        <creator>Salman Khan</creator>
        
        <creator>Kalimullah</creator>
        
        <subject>Recommender system; collaborative filtering; Lda2vec; Convolutional Neural Network (CNN); data sparsity problem; user reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In today’s personalized business environment, organizations are providing a bulk of information regarding their products and services. Recommender systems have various accomplishments in exploiting auxiliary information in matrix factorization. To handle the data sparsity problem, most recommender systems utilize deep learning techniques for in-depth analysis of item content to generate more accurate recommendations. However, these systems still have a research gap in how to handle user reviews effectively. Reviews written by users contain a large amount of information that can be utilized for more accurate predictions. This paper proposes a hybrid model, combining a convolutional neural network and topic modeling for a recommender system, to address the sparsity problem. It extracts the contextual features of both items and users by utilizing a Deep Learning Convolutional Neural Network (CNN) along with a Topic Modeling (Lda2vec) technique to generate latent factors of users and items. Topic Modeling is used to capture important topics from side information, and deep learning is used to provide contextual information. To demonstrate the effectiveness of the research, extensive sets of experiments were performed on four public datasets (Amazon Instant Video, Kindle Store, Health and Personal Care, Automotive). Results demonstrate that the proposed model outperformed the other state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_75-Convolutional_Neural_Network_and_Topic_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Homomorphic Encryption and Address of New Trend</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110774</link>
        <id>10.14569/IJACSA.2020.0110774</id>
        <doi>10.14569/IJACSA.2020.0110774</doi>
        <lastModDate>2020-07-31T11:34:59.3700000+00:00</lastModDate>
        
        <creator>Ayman Alharbi</creator>
        
        <creator>Haneen Zamzami</creator>
        
        <creator>Eman Samkri</creator>
        
        <subject>Homomorphic encryption; cloud computing; V2V; VANET; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Encryption is the process of disguising text to ensure the confidentiality of data transmitted from one party to another. Homomorphic encryption is one of the most important encryption-related processes, which allows performing operations over encrypted data. Using different public key algorithms, homomorphic encryption can be implemented in any scheme. There are many encryption algorithms to secure operations and data storage, which after calculations can obtain the same results. While there is considerable contribution and enhancement in the field of homomorphic encryption for various performance metrics, there is still a necessity to clarify the applications dealing with this technology. Recently, many distinguished research papers have been filed to address the need for various applications of homomorphic encryption. Examples of these applications include, but are not limited to: Vehicle to Vehicle (V2V) secure communication, cloud security, Vehicular ad-hoc networks (VANET), Blockchain, E-Voting, data mining with privacy preserving, and the healthcare sector. This article aims to introduce a literature survey to close the gap in homomorphic encryption systems and their applications in the protection of privacy. We focus on the above-mentioned applications and present our recommendations for future work.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_74-Survey_on_Homomorphic_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Intelligent based System to Automate Decision Making in Assembly Solution Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110772</link>
        <id>10.14569/IJACSA.2020.0110772</id>
        <doi>10.14569/IJACSA.2020.0110772</doi>
        <lastModDate>2020-07-31T11:34:59.3400000+00:00</lastModDate>
        
        <creator>ABADI Chaimae</creator>
        
        <creator>MANSSOURI Imad</creator>
        
        <creator>ABADI Asmae</creator>
        
        <subject>Assembly selection; product assembly; ontologies; CBR &amp; RBR; flexible and automated; decision making system; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Nowadays, competition between industries has become very strong. Thus, industries are faced with serious challenges in terms of product quality, development time, and production cost. As assembly operation difficulties cause a big part of production problems, the integration of assembly selection from the earliest product life cycle phases has become a necessity for every company in order to survive. However, despite the large number of approaches that have been proposed to achieve this integration goal, many other problems are still present. It is in this context that a flexible and automated decision making system is proposed. It is based on ontologies and on the Case Based Reasoning (CBR) and Rule Based Reasoning (RBR) concepts. Indeed, this system is an automation of the integrated DFMMA approach, in particular its assembly solution selection methodology. The developed system permits designers to avoid redundancy in their work by benefiting from their previous studies and experience. In addition, it facilitates and automates assembly solution selection even when the number of assembly alternatives is high. Finally, to illustrate the efficacy of the proposed system, a case study is developed at the end of the work.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_72-An_Artificial_Intelligent_based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IDP: A Privacy Provisioning Framework for TIP Attributes in Trusted Third Party-based Location-based Services Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110773</link>
        <id>10.14569/IJACSA.2020.0110773</id>
        <doi>10.14569/IJACSA.2020.0110773</doi>
        <lastModDate>2020-07-31T11:34:59.3400000+00:00</lastModDate>
        
        <creator>Muhammad Usman Ashraf</creator>
        
        <creator>Kamal M. Jambi</creator>
        
        <creator>Rida Qayyum</creator>
        
        <creator>Hina Ejaz</creator>
        
        <subject>Location Based Services (LBS); Trusted Third Party (TTP); privacy protection goals; mobile user privacy; Improved Dummy Position (IDP); Sybil Query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Location-Based Services (LBS) systems are rapidly growing due to radio communication services with wireless mobile devices having a positioning component. An LBS system offers location-based services by knowing the actual user position. A mobile user uses LBS to access services relevant to their location. In order to provide Points of Interest (POI), LBS confronts numerous privacy-related challenges in three different formats: Non-Trusted Third Party (NTTP), Trusted Third Party (TTP), and Mobile Peer-to-Peer (P2P). The current study emphasizes the TTP based LBS system, where the Location server does not provide full privacy to mobile users. In a TTP based LBS system, a user’s privacy is concerned with personal identity, location information, and time information. In order to accomplish privacy under these concerns, state-of-the-art existing mechanisms have been reviewed. Hence, the aim of providing a promising roadmap to research and development communities for the right selection of a privacy approach has been achieved by conducting a comparative survey of the TTP based approaches. Leading to these privacy attributes, the current study addresses the privacy challenge by proposing a new privacy protection model named “Improved Dummy Position” (IDP) that protects TIP (Time, Identity, and Position) attributes under a TTP LBS system. In order to validate the privacy level, a comparative analysis has been conducted by implementing the proposed IDP model in the simulation tool Riverbed Modeler academic edition. Different scenarios of changing query transfer rates evaluate the performance of the proposed model. Simulation results demonstrate that our IDP can be considered a promising model to protect users’ TIP attributes in a TTP based LBS system due to its better performance and improved privacy level. Further, the proposed model is extensively compared with the existing work.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_73-IDP_A_Privacy_Provisioning_Framework_for_TIP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic SEIZ in Online Social Networks: Epidemiological Modeling of Untrue Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110771</link>
        <id>10.14569/IJACSA.2020.0110771</id>
        <doi>10.14569/IJACSA.2020.0110771</doi>
        <lastModDate>2020-07-31T11:34:59.3100000+00:00</lastModDate>
        
        <creator>Akanksha Mathur</creator>
        
        <creator>Chandra Prakash Gupta</creator>
        
        <subject>Information diffusion; epidemic models; SEIZ; rumor detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The epidemic propagation of untrue information in online social networks leads to potential damage to society. This phenomenon has attracted researchers’ attention to the fast spread of false information. Epidemic models such as SI, SIS, and SIR were developed to study infection spread on social media. This paper uses SEIZ, an enhanced epidemic model that classifies the overall population into four classes (i.e., Susceptible, Exposed, Infected, Skeptic). It uses probabilities of transition from one state to another to characterize misinformation versus actual information. It suffers from two limitations: the rate of change of the population and the state transition probabilities are considered constant for the entire period of observation. In this paper, a dynamic SEIZ model computes the rate of change of the population at fixed intervals and updates predictions based on the new rates periodically. Research findings on Twitter data indicate that this model gives higher accuracy by providing early indications of untrue information.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_71-Dynamic_SEIZ_in_Online_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Model of Municipal Management in Local Governments of Peru based on K-means Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110770</link>
        <id>10.14569/IJACSA.2020.0110770</id>
        <doi>10.14569/IJACSA.2020.0110770</doi>
        <lastModDate>2020-07-31T11:34:59.2930000+00:00</lastModDate>
        
        <creator>Jose Morales</creator>
        
        <creator>Nakaday Vargas</creator>
        
        <creator>Mario Coyla</creator>
        
        <creator>Jose Huanca</creator>
        
        <subject>K-means; cluster; municipality; model; municipal management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The K-means algorithm partitions a dataset into a fixed number of clusters, iteratively assigning data points to clusters and adjusting each cluster&#8217;s center. K-means uses an unsupervised learning method to discover patterns in an input data set. The purpose of the research is to propose a municipal management classification model for the municipalities of Peru using a K-means clustering algorithm based on 58 variables obtained from the areas of human resources, heavy machinery and operating vehicles, information and communication technologies, municipal planning, municipal finances, local economic development, social services, solid waste management, cultural, recreational and sports facilities, public security, disaster risk management, and environmental protection and conservation of all the municipalities of the 24 departments of Peru and the constitutional province of Callao. The results of the application of the K-means algorithm show that 32% of the municipalities, made up of the municipal governments of Amazonas, Apur&#237;mac, Huancavelica, Hu&#225;nuco, Ica, Lambayeque, Loreto and San Martin, are in Cluster 1; 8% are in Cluster 2, with the municipal governments of Ancash and Cusco; 28% are in the third cluster, with the municipal governments of the constitutional Province of Callao, Madre de Dios, Moquegua, Pasco, Tacna, Tumbes and Ucayali; and 32% are in Cluster 4, composed of the municipal governments of Arequipa, Ayacucho, Cajamarca, Jun&#237;n, La Libertad, Lima, Piura and the Puno Region.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_70-Classification_Model_of_Municipal_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stage Identification and Classification of Lung Cancer using Deep Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110769</link>
        <id>10.14569/IJACSA.2020.0110769</id>
        <doi>10.14569/IJACSA.2020.0110769</doi>
        <lastModDate>2020-07-31T11:34:59.2770000+00:00</lastModDate>
        
        <creator>Varsha Prakash</creator>
        
        <creator>Smitha Vas.P</creator>
        
        <subject>Computer Aided Diagnosis (CAD); Deep Convolutional Neural Network (DCNN); pulmonary nodule; segmentation; benign; malignant; staging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The performance of lung segmentation strongly affects the disease prediction task, and the challenges of prediction and segmentation call for combining multiple learning techniques. Current models first perform image segmentation on all CT scan images and then classify each as malignant or benign. This consumes more time, since both normal and abnormal CTs are segmented, and improper segmentation makes the region of interest inaccurate, resulting in false classification of images. Therefore, first identifying the CTs that show malignancy and then segmenting those lesions provides more accurate segmentation of cancerous nodules and thereby helps identify the stage of cancer the patient is suffering from. The aim is to improve current cancer detection techniques using a DCNN by filtering malignant CT scans out of the medical dataset and segmenting those images for stage identification. Segmentation is done using the UNET++ architecture, and stage identification is done by considering the “size” (T) parameter from the globally recognized “TNM staging” standard to classify the spread of each malignant nodule as T1-T4. An accuracy of 99.83% is achieved in lung cancer classification using VGG-16, which yields better results for both segmentation and stage identification.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_69-Stage_Identification_and_Classification_of_Lung_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithms for the Multiple Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110768</link>
        <id>10.14569/IJACSA.2020.0110768</id>
        <doi>10.14569/IJACSA.2020.0110768</doi>
        <lastModDate>2020-07-31T11:34:59.2600000+00:00</lastModDate>
        
        <creator>Maha Ata Al-Furhud</creator>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <subject>Multiple travelling salesman problem; NP-hard; genetic algorithm; sequential constructive crossover; adaptive; greedy; comprehensive</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>We consider the multiple travelling salesman problem (MTSP), one of the generalizations of the travelling salesman problem (TSP). Genetic algorithms (GAs) based on numerous crossover operators have been described in the literature for solving this problem, and choosing an effective crossover operator can yield an effective GA. Generally, the crossover operators developed for the TSP are applied to the MTSP. We propose simple and effective GAs using sequential constructive crossover (SCX), adaptive SCX, greedy SCX, reverse greedy SCX and comprehensive SCX (CSCX) for solving the MTSP. The effectiveness of the crossover operators is demonstrated by comparing them with each other and with another crossover operator on TSPLIB instances of various sizes with different numbers of salesmen. The experimental study shows promising results for the crossover operators, especially CSCX, on the MTSP.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_68-Genetic_Algorithms_for_the_Multiple_Travelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review Study of Decision Trees based Software Development Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110767</link>
        <id>10.14569/IJACSA.2020.0110767</id>
        <doi>10.14569/IJACSA.2020.0110767</doi>
        <lastModDate>2020-07-31T11:34:59.2470000+00:00</lastModDate>
        
        <creator>Assia Najm</creator>
        
        <creator>Abdelali Zakrani</creator>
        
        <creator>Abdelaziz Marzak</creator>
        
        <subject>Systematic literature review; decision tree; regression tree; software development effort estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The role of decision trees in software development effort estimation (SDEE) has received increased attention across several disciplines in recent years thanks to their predictive power and their ease of use and understanding. Furthermore, a large number of published studies have investigated the use of decision tree (DT) techniques in SDEE. Nevertheless, the literature still lacks a systematic literature review (SLR) that assesses the evidence reported on DT techniques. The main issues addressed in this paper are divided into five parts: prediction accuracy, performance comparison, suitable conditions for prediction, the effect of methods employed in association with DT techniques, and DT tools. To carry out this SLR, we performed an automatic search over five digital libraries for studies published between 1985 and 2019. In general, the results of this SLR reveal that most DT methods outperform many other techniques and show improved accuracy when combined with association rules (AR), fuzzy logic (FL), and bagging. Additionally, a limited use of DT tools was observed; researchers are therefore encouraged to develop more DT tools to promote the industrial adoption of DT amongst professionals.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_67-Systematic_Review_Study_of_Decision_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>M2C: A Massive Performance and Energy Throttling Framework for High-Performance Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110766</link>
        <id>10.14569/IJACSA.2020.0110766</id>
        <doi>10.14569/IJACSA.2020.0110766</doi>
        <lastModDate>2020-07-31T11:34:59.2130000+00:00</lastModDate>
        
        <creator>Muhammad Usman Ashraf</creator>
        
        <creator>Kamal M. Jambi</creator>
        
        <creator>Amna Arshad</creator>
        
        <creator>Rabia Aslam</creator>
        
        <creator>Iqra Ilyas</creator>
        
        <subject>High performance computing; Exascale computing; compute unified device architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>At the Petascale level of performance, High-Performance Computing (HPC) systems require significant use of supercomputers with extensive parallel programming approaches to solve complicated computational tasks. The Exascale level of performance, at 10^18 calculations per second, is another remarkable achievement in computing, with a profound influence on everyday life. Current technologies face various challenges in achieving ExaFlop performance through energy-efficient systems; massive parallelism and power consumption are vital among them. In this paper, we introduce a novel parallel programming model that provides massive performance under power consumption limitations by parallelizing data on a heterogeneous system to provide both coarse-grain and fine-grain parallelism. The proposed dual-hierarchical architecture, a hybrid of MVAPICH2 and CUDA called the M2C model, targets heterogeneous systems that utilize both CPU and GPU devices to provide massive parallelism. To validate the objectives of the current study, the proposed model has been implemented using benchmarking applications including Dense Matrix Multiplication. Furthermore, we conducted a comparative analysis of the proposed model against existing state-of-the-art models and libraries such as MOC, kBLAS, and cuBLAS. The suggested model outperforms the existing models while achieving massive performance in HPC clusters and can be considered for emerging Exascale computing systems.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_66-M2C_A_Massive_Performance_and_Energy_Throttling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dynamic Two-Layers MI and Clustering-based Ensemble Feature Selection for Multi-Labels Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110764</link>
        <id>10.14569/IJACSA.2020.0110764</id>
        <doi>10.14569/IJACSA.2020.0110764</doi>
        <lastModDate>2020-07-31T11:34:59.2000000+00:00</lastModDate>
        
        <creator>Adil Yaseen Taha</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Masri Ayob</creator>
        
        <creator>Ali Sabah</creator>
        
        <subject>Multi-label text classification; high dimensionality; filtering method; ensemble clustering; ensemble MI feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Multi-label text classification deals with the issue that arises from each sample being related to multiple labels. Text data suffers from high dimensionality; to resolve this issue, a feature selection (FS) method can be applied to efficiently remove noisy, irrelevant, and redundant features. Multi-label FS is a powerful tool for solving the high-dimension problem. To handle the correlation and high-dimensionality problems in multi-label text classification, this paper investigates various heterogeneous FS ensemble schemes. In addition, this paper proposes an enhanced FS method called the dynamic multi-label two-layers MI and clustering-based ensemble feature selection algorithm (DMMC-EFS). The proposed method considers: 1) the dynamic global weight of features, 2) a heterogeneous ensemble, and 3) maximum dependency and relevancy and minimum redundancy of features. This method aims to overcome the high dimensionality of multi-label datasets and achieve improved multi-label text classification. We conducted experiments on three benchmark datasets: Reuters-21578, Bibtex, and Enron. The experimental results show that DMMC-EFS significantly outperforms other state-of-the-art conventional and ensemble multi-label FS methods.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_64-A_Dynamic_Two_Layers_MI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ARIMA Model for Accurate Time Series Stocks Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110765</link>
        <id>10.14569/IJACSA.2020.0110765</id>
        <doi>10.14569/IJACSA.2020.0110765</doi>
        <lastModDate>2020-07-31T11:34:59.2000000+00:00</lastModDate>
        
        <creator>Shakir Khan</creator>
        
        <creator>Hela Alghulaiakh</creator>
        
        <subject>ARIMA; forecasting; prediction analysis; time series; stocks forecasting; data mining; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Given the increasing availability of historical data, the need to produce forecasts that inform investment decisions and future plans and strategies, and the difficulty of predicting the stock market due to its complicated features, this paper applies and compares an auto ARIMA (Auto-Regressive Integrated Moving Average) model and two customized ARIMA(p,D,q) models to obtain an accurate stock forecasting model, using five years of Netflix stock historical data. Among the three models, ARIMA(1,1,33) showed accurate results in MAPE calculation and holdout testing, which demonstrates the potential of the ARIMA model for accurate stock forecasting.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_65-ARIMA_Model_for_Accurate_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Artificial Intelligence System for Diagnosing and Predicting Breast Cancer using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110763</link>
        <id>10.14569/IJACSA.2020.0110763</id>
        <doi>10.14569/IJACSA.2020.0110763</doi>
        <lastModDate>2020-07-31T11:34:59.1670000+00:00</lastModDate>
        
        <creator>Mona Alfifi</creator>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Samir Bataineh</creator>
        
        <creator>Mohammad Mezher</creator>
        
        <subject>Traditional Convolutional Neural Network (TCNN); Supported Convolutional Neural Network (SCNN); shift; scaling; cancer detection; mammogram; histogram equalization; adaptive median filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Breast cancer is the leading cause of death among women with cancer. Computer-aided diagnosis is an efficient method for assisting medical experts in early diagnosis, improving the chance of recovery. Employing artificial intelligence (AI) in the medical area is crucial due to the sensitivity of this field, which means that the low accuracy of the classification methods used for cancer detection is a critical issue. This problem is accentuated when it comes to blurry mammogram images. In this paper, convolutional neural networks (CNNs) are employed to present the traditional convolutional neural network (TCNN) and supported convolutional neural network (SCNN) approaches. The TCNN and SCNN approaches contribute by overcoming the shift and scaling problems present in blurry mammogram images. In addition, the flipped rotation-based approach (FRbA) is proposed to enhance the accuracy of the prediction process (classification of the type of cancerous mass) by taking into account the different directions of the cancerous mass to extract effective features that form the map of the tumour. The proposed approaches are implemented on the MIAS medical dataset using 200 mammogram breast images. Compared to similar approaches based on KNN and RF, the proposed approaches show better performance in terms of accuracy, sensitivity, specificity, precision, recall, running time, and image quality metrics.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_63-Enhanced_Artificial_Intelligence_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Strategy for the Morphological and Colorimetric Recognition of Erythrocytes for the Diagnosis of Forms of Anemia based on Microscopic Color Images of Blood Smears</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110762</link>
        <id>10.14569/IJACSA.2020.0110762</id>
        <doi>10.14569/IJACSA.2020.0110762</doi>
        <lastModDate>2020-07-31T11:34:59.1530000+00:00</lastModDate>
        
        <creator>J. Nango ALICO</creator>
        
        <creator>Sie OUATTARA</creator>
        
        <creator>Alain CLEMENT</creator>
        
        <subject>Erythrocyte; anemia; iron-deficiency; falciform; thalassemia; hemolytic; recognition; morphology; color; segmentation; histogram; Otsu</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The detection of red blood cells based on morphology and colorimetric appearance is very important for improving hematology diagnostics. Automata exist that can detect certain forms, but they have limitations with regard to the formal identification of red blood cells, because they consider certain cells to be red blood cells when they are not, and vice versa. Other automata are limited in their operation because they do not cover a sufficient area of the blood smear. Despite these automata, biologists have very often resorted to manual analysis of blood smears under an optical microscope for morphological and colorimetric study. In this paper, we present a new strategy for the semi-automatic identification of red blood cells based on their isolation, their automatic color segmentation using Otsu&#39;s algorithm, and their morphology. The algorithms of our method have been implemented in the programming environment of the scientific software MATLAB, resulting in an artificial intelligence application. Once launched, the application allows the biologist to select a region of interest containing the erythrocyte to be characterized; a set of attributes is then computed from this target red blood cell, including compactness, perimeter, area, morphology, and the white and red proportions of the erythrocyte. The forms of anemia treated in this work include iron-deficiency, sickle-cell (falciform), thalassemia, and hemolytic anemia. The results obtained are excellent, as they highlight the different forms of anemia contracted by a patient.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_62-A_New_Strategy_for_the_Morphological_and_Colorimetric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BERT+vnKG: Using Deep Learning and Knowledge Graph to Improve Vietnamese Question Answering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110761</link>
        <id>10.14569/IJACSA.2020.0110761</id>
        <doi>10.14569/IJACSA.2020.0110761</doi>
        <lastModDate>2020-07-31T11:34:59.1200000+00:00</lastModDate>
        
        <creator>Truong H. V Phan</creator>
        
        <creator>Phuc Do</creator>
        
        <subject>Bidirectional Encoder Representation from Transformer (BERT); knowledge graph; Question Answering (QA); Long Short-Term Memory (LSTM); deep learning; Vietnamese tourism; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Question answering (QA) systems based on natural language processing and deep learning are a prominent area and are being researched widely. The Long Short-Term Memory (LSTM) model, a variety of Recurrent Neural Network (RNN), used to be popular in machine translation and question answering systems. However, that model has certain limited capabilities, so a new model named Bidirectional Encoder Representation from Transformer (BERT) emerged to address these restrictions. BERT has more advanced features than LSTM and has shown state-of-the-art results in many tasks over the past few years, especially in multilingual question answering. Nevertheless, when we tried applying the multilingual BERT model to a Vietnamese QA system, we found that it still has certain limitations in terms of the time and precision of returning a Vietnamese answer. The purpose of this study is to propose a method that overcomes the above restriction of multilingual BERT and to apply it to a question answering system about tourism in Vietnam. Our method combines BERT and a knowledge graph to find answers more accurately and quickly. We evaluated our crafted QA data about Vietnam tourism on three models: LSTM, BERT fine-tuned multilingually for QA (BERT for QA), and BERT+vnKG. As a result, our model outperformed the two previous models in terms of accuracy and time. This research can also be applied to other fields such as finance, e-commerce, and so on.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_61-BERT_vnKG_Using_Deep_Learning_and_Knowledge_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Optimization of Power and Area of Two-Stage CMOS Operational Amplifier Utilizing Chaos Grey Wolf Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110760</link>
        <id>10.14569/IJACSA.2020.0110760</id>
        <doi>10.14569/IJACSA.2020.0110760</doi>
        <lastModDate>2020-07-31T11:34:59.1070000+00:00</lastModDate>
        
        <creator>Telugu Maddileti</creator>
        
        <creator>Govindarajulu Salendra</creator>
        
        <creator>Chandra Mohan Reddy Sivappagari</creator>
        
        <subject>CMOS; CGWO; optimization technique; operational amplifier; aspect ratio; power dissipation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Low power dissipation is an emerging challenge in the current electronics industry. Area shrinking has taken the most prominent place and underlies every constricted size in the use of CMOS circuits in integrated circuit manufacturing. Functionality in terms of speed, power dissipation, etc. is strongly influenced by the dimensions of the transistors in many CMOS integrated circuits. Various techniques have been proposed to optimize these significant design parameters of CMOS circuits to the maximum extent possible. Latency, power, and dimension are significant parameters in CMOS-based IC design, and the reduction of an analog circuit&#39;s size typically poses single- or multi-objective constrained optimization problems. In this study, the nature-inspired Grey Wolf Algorithm is applied to the design of a two-stage CMOS differential amplifier to optimize its area and power. To improve the design with respect to important considerations such as cost, strength, and functionality, a computerized design approach is used. The formulated design meets specifications such as positive and negative slew rate, unity gain bandwidth, and phase margin. Chaos theory is incorporated into the Grey Wolf Optimization Algorithm (CGWO) to speed up global convergence. The results obtained from CGWO are then compared with other prevailing optimization techniques employed in analog circuit sizing. According to the investigations, CGWO reduces the dimensions of the circuit and outperforms the prevailing techniques by achieving a better rate of convergence and lower power dissipation.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_60-Design_Optimization_of_Power_and_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identity Attributes Metric Modelling based on Mathematical Distance Metrics Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110759</link>
        <id>10.14569/IJACSA.2020.0110759</id>
        <doi>10.14569/IJACSA.2020.0110759</doi>
        <lastModDate>2020-07-31T11:34:59.0900000+00:00</lastModDate>
        
        <creator>Felix Kabwe</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Mathematical modeling; Cosine Similarity Measure; text frequency; inverse document frequency; cyber space; term weight; internet; digital identity; trust model; normalization; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The internet has brought many security challenges to the interactions, activities, and transactions that occur online. These include invasion of the privacy of individuals, organizations, and other online actors. Real-life relationships are affected by mischievous online actors intent on misrepresenting or ruining the characters of innocent people, leading to damaged relationships. The proliferation of cybercrime has threatened the value and benefits of the internet, and identity theft by fraudsters intent on stealing assets in real space or online has escalated. This study develops a metrics model based on distance metrics in order to quantify the credential identity attributes used in online services and activities, to help address digital identity challenges and bring confidence to online activities and the ownership of assets. The application forms and identity tokens used in various sectors to identify online users were used as the sources of the identity attributes in this paper. Corpus toolkits were used to mine and extract the identity attributes from the various forms of identity tokens, and term weighting schemes were used to compute the term weight of the identity attributes. Other methods used included Shannon entropy and the Term Frequency-Inverse Document Frequency (TF*IDF) scheme. Data was standardized using a data normalization method. The results show that, using the Cosine Similarity Measure, we can identify the identity attributes in any given identity token used to identify individuals and entities. This helps attach legitimate ownership to the digital identity attributes. The developed model can be used to uniquely identify an online identity claimant and help address the security challenge in identity management systems, and it can also identify the key identity attributes that could be used to identify an entity in real or cyber space.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_59-Identity_Attributes_Metric_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Entropy-Based k Shortest-Path Routing for Motorcycles: A Simulated Case Study in Jakarta</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110758</link>
        <id>10.14569/IJACSA.2020.0110758</id>
        <doi>10.14569/IJACSA.2020.0110758</doi>
        <lastModDate>2020-07-31T11:34:59.0600000+00:00</lastModDate>
        
        <creator>Muhamad Asvial</creator>
        
        <creator>M. Faridz Gita Pandoyo</creator>
        
        <creator>Ajib Setyo Arifin</creator>
        
        <subject>Traffic congestion; motorcycle; navigation apps; EBkSP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Traffic congestion is a serious problem in rapidly developing urban areas like Jakarta, Indonesia’s capital city. Motorcycles assisted by navigation apps are a popular way to avoid congestion; however, existing navigation apps do not take traffic data into account. This paper proposes an open-source navigation app for motorcycles that takes traffic data and road width into account to avoid congestion. The proposed navigation app uses the entropy-balanced k shortest paths (EBkSP) algorithm to suggest different routes to different users in order to prevent further congestion. Tests show that the route planning system in the app gives routes that are significantly shorter than the motorcycle routes planned by Google Maps. The EBkSP algorithm also distributes vehicles among routes more evenly than the random kSP algorithm, and does so in a practical amount of computing time.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_58-Entropy_Based_k_Shortest_Path_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Approach for Detecting Palm Trees Diseases using Image Processing and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110757</link>
        <id>10.14569/IJACSA.2020.0110757</id>
        <doi>10.14569/IJACSA.2020.0110757</doi>
        <lastModDate>2020-07-31T11:34:59.0430000+00:00</lastModDate>
        
        <creator>Hazem Alaa</creator>
        
        <creator>Khaled Waleed</creator>
        
        <creator>Moataz Samir</creator>
        
        <creator>Mohamed Tarek</creator>
        
        <creator>Hager Sobeah</creator>
        
        <creator>Mustafa Abdul Salam</creator>
        
        <subject>Machine learning; deep learning; image processing; leaf spots; blight spots; red palm weevil</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Today’s palm tree diseases, which cause huge production losses, are extremely hard to detect, either because they are hidden inside the texture of the palm itself and cannot be seen by the naked eye, or because they appear on the leaves, which are rarely examined due to how far they are from the ground. In this paper we are interested in detecting three of the most common diseases threatening palms today: leaf spots, blight spots, and the red palm weevil. These diseases are diagnosed by capturing normal and thermal images of palm trees and then applying image processing techniques to the acquired images. Two classifiers were used: a CNN to differentiate between the leaf spots and blight spots diseases, and an SVM for the red palm weevil pest. The CNN and SVM algorithms achieved accuracies of 97.9% and 92.8%, respectively; as far as we know, these are the best results in this domain. The paper also includes the first gathered thermal image dataset of palms infected with red palm weevil and of healthy palms.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_57-An_Intelligent_Approach_for_Detecting_Palm_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Design and Implementation of a Vehicle Controlling and Tracking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110756</link>
        <id>10.14569/IJACSA.2020.0110756</id>
        <doi>10.14569/IJACSA.2020.0110756</doi>
        <lastModDate>2020-07-31T11:34:59.0100000+00:00</lastModDate>
        
        <creator>Hasan K. Naji</creator>
        
        <creator>Iuliana Marin</creator>
        
        <creator>Nicolae Goga</creator>
        
        <creator>Cristian Taslitschi</creator>
        
        <subject>Vehicle controlling; smart system; tracking system; microcontroller; messaging; GSM; GPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The purpose of this project is to build a system that quickly tracks the location of a stolen vehicle, thereby reducing the cost and effort of police work. Moreover, the vehicle’s computer system can be controlled remotely by the vehicle’s owners or the police. More precisely, the goal of this work is to design and develop remote control of the vehicle and to find its location as Latitude (LAT) and Longitude (LONG).</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_56-Novel_Design_and_Implementation_of_a_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Opinion Words Extraction for Food Reviews Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110755</link>
        <id>10.14569/IJACSA.2020.0110755</id>
        <doi>10.14569/IJACSA.2020.0110755</doi>
        <lastModDate>2020-07-31T11:34:58.9970000+00:00</lastModDate>
        
        <creator>Phuc Quang Tran</creator>
        
        <creator>Ngoan Thanh Trieu</creator>
        
        <creator>Nguyen Vu Dao</creator>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Review classification; opinion words; machine learning; important features; Amazon</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Opinion mining (also known as sentiment analysis or emotion Artificial Intelligence) plays an important role in e-commerce and benefits numerous businesses and organizations. It studies the use of natural language processing, text analysis, computational linguistics, and biometrics to provide businesses with valuable insights into how people feel about a product, brand, or service. In this study, we investigate reviews from the Amazon Fine Food Reviews dataset, comprising about 500,000 reviews, and propose a method to transform reviews into features, including Opinion Words, which can then be used for review classification tasks by machine learning algorithms. From the obtained results, we evaluate which Opinion Words are informative for identifying whether a review is positive or negative.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_55-Effective_Opinion_Words_Extraction_for_Food_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Use of Technological Tools in University Higher Education using the Soft Systems Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110754</link>
        <id>10.14569/IJACSA.2020.0110754</id>
        <doi>10.14569/IJACSA.2020.0110754</doi>
        <lastModDate>2020-07-31T11:34:58.9630000+00:00</lastModDate>
        
        <creator>Alexandra Ramos Bernaola</creator>
        
        <creator>Marianne Aldude Tipula</creator>
        
        <creator>Jose Estrada Moltalvo</creator>
        
        <creator>Valeria Se&#241;as Sandoval</creator>
        
        <creator>Laberiano Andrade-Arenas</creator>
        
        <subject>Soft system; involved; university; virtual classroom; students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This article analyzes the professional training of students in light of the current situation. Students are experiencing a new modality of study due to Covid-19, and specialists must evaluate which solutions would best help students achieve greater understanding, because students are among those most affected today. We therefore analyze in detail each of the causes of the problem in order to capture the different points of view of those involved. In this study we apply the soft systems methodology, with a systemic approach and a holistic vision, to analyze the situation as presented by all those involved. The results show that the solutions offered to higher education institutions need to be evaluated more carefully, since the main objective should be better teaching for students and, hence, the adoption of good learning methodologies.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_54-Analysis_of_the_use_of_Technological_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Automatically Processing Outliers of a Quantitative Variable</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110753</link>
        <id>10.14569/IJACSA.2020.0110753</id>
        <doi>10.14569/IJACSA.2020.0110753</doi>
        <lastModDate>2020-07-31T11:34:58.9500000+00:00</lastModDate>
        
        <creator>NIANGORAN Aristhophane Kerandel</creator>
        
        <creator>MENSAH Edo&#233;t&#233; Patrice</creator>
        
        <creator>ACHIEPO Odilon Yapo M</creator>
        
        <creator>DIAKO Doffou J&#233;rome</creator>
        
        <subject>Outliers; boxplot; exploratory data analysis; Programming R; data science</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In data analysis processes, the treatment of outliers in quantitative variables is critical, as it affects the quality of the conclusions. However, despite the existence of very good tools for detecting outliers, dealing with them is not always straightforward. Indeed, statisticians recommend modeling the process underlying the outliers to identify the best way to deal with them. In the context of Data Science and Machine Learning, identifying the processes that generate outliers remains problematic because this work requires visual human interpretation of certain statistical tools. The techniques proposed so far are systematic imputations by a measure of central tendency, usually the arithmetic mean or the median. Although adapted to the framework of Data Science and Machine Learning, these approaches cause a fundamental problem: they modify the distribution of the initial data. The purpose of our paper is to propose an algorithm that allows software to process outliers automatically while preserving the distributional structure of the treated variable, whatever its probability law. The method is based on the box-and-whisker plot (boxplot) theory developed by John Tukey. The procedure is tested on existing real data. All treatments are performed with the R programming language.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_53-Method_for_Automatically_Processing_Outliers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Recommender System to Enrollment for Elective Subjects in Engineering Students using Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110752</link>
        <id>10.14569/IJACSA.2020.0110752</id>
        <doi>10.14569/IJACSA.2020.0110752</doi>
        <lastModDate>2020-07-31T11:34:58.9330000+00:00</lastModDate>
        
        <creator>Jerson Erick Herrera Rivera</creator>
        
        <subject>Hybrid; recommender system; academic performance; term frequency; inverse term frequency; natural language processing; k-NN; XGBoost; MAP-k</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>One of the main problems engineering university students face is making the correct decision regarding which lines of elective subjects to enroll in, based on the available information (preferences, syllabus, schedules, subject content, likely academic performance, teacher, curriculum, and others). Under these circumstances, this research work seeks to develop a Hybrid Recommender System. First, a Content-based model is built from all the subjects that have been studied (using Natural Language Processing and the statistical measures Term Frequency and Inverse Term Frequency), weighted appropriately by the grades the student has achieved. In addition, a Collaborative Filtering model is developed, establishing relationships between different students and identifying similar academic behaviors. Thus, the system recommends to the student which lines of elective subjects to enroll in to obtain better academic results. The recommendation is obtained from machine learning models (XGBoost and k-NN) based on the similarity between the content of each subject and the line of elective subjects, and on the academic relationships among all the students. To achieve this objective, data from engineering students between 2011 and 2016 was analyzed. The results indicate that the recommendations reach a MAP-k of 82.14% and a precision of 91.83%.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_52-A_Hybrid_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Household Overspending Model Amongst B40, M40 and T20 using Classification Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110751</link>
        <id>10.14569/IJACSA.2020.0110751</id>
        <doi>10.14569/IJACSA.2020.0110751</doi>
        <lastModDate>2020-07-31T11:34:58.9030000+00:00</lastModDate>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Jamaludin Sallim</creator>
        
        <subject>Overspending; classification; poverty; household</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The family economy is a critical indicator of the well-being of a family institution. It can be seen in the total income and in how well the household finances are managed. In Malaysia, household income levels are categorized as B40, M40 and T20; these categories can also indicate the poverty level of the household. Overspending is a phenomenon in which the monthly expenses exceed the household&#39;s total income, which affects economic wellbeing. Finding the important factors that affect household spending patterns can reveal the causes of overspending and assist the government in mitigating such problems. The availability of 4 million household expenditure records from a survey conducted in 2016 by the Department of Statistics Malaysia supports the aim of this study: to develop a household overspending model using machine learning. The model is developed using 12 household demographic attributes over 14,451 household records. The attributes are the number of household members, area, state, strata, race, highest certificate, marital status, gender, housing, income, total expenditure, and category as the class attribute. Model development employs five machine learning algorithms, namely decision tree, Na&#239;ve Bayes, neural network, Support Vector Machines, and Nearest Neighbour. The results show that the decision tree, through the J48 algorithm, produced the rules easiest to interpret. The model shows that four attributes, namely income, state, race, and number of household members, highly influence the overspending problem. Based on these findings, it can be concluded that these attributes are essential for improving the indicator measure for the Malaysian Family Wellbeing Index in the aspect of overspending.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_51-Household_Overspending_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Predicting Human Walking Patterns using Smartphone’s Accelerometer Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110750</link>
        <id>10.14569/IJACSA.2020.0110750</id>
        <doi>10.14569/IJACSA.2020.0110750</doi>
        <lastModDate>2020-07-31T11:34:58.8870000+00:00</lastModDate>
        
        <creator>Zaid T. Alhalhouli</creator>
        
        <subject>Smartphones; accelerometer sensor; walking patterns; machine learning classifiers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Recently, techniques for monitoring and recognizing human walking patterns have become one of the most important research topics, especially in health applications related to fitness and disease progression. This paper aims at combining machine learning techniques with smartphone sensor readings (i.e., the accelerometer sensor) to develop a smart model capable of classifying walking patterns into different categories (fast, normal, slow, very slow, or very fast), along with the variables of gender (male or female) and sensor placement (waist, hand, or leg). In this paper, we use several machine learning algorithms, including Neural Network, KNN, Random Forest, and Tree, to train and test data extracted from smartphone sensors. The results indicate that the smartphone sensor can be exploited to develop a reliable model for identifying human walking patterns based on accelerometer readings. In addition, the results show that Random Forest is the best-performing classifier, with accuracies of 92.3% and 91.8% when applied to the waist datasets for males and females, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_50-A_Method_for_Predicting_Human_Walking_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Super-Resolution using Deep Learning to Support Person Identification in Surveillance Video</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110749</link>
        <id>10.14569/IJACSA.2020.0110749</id>
        <doi>10.14569/IJACSA.2020.0110749</doi>
        <lastModDate>2020-07-31T11:34:58.8700000+00:00</lastModDate>
        
        <creator>Lamya Alkanhal</creator>
        
        <creator>Deena Alotaibi</creator>
        
        <creator>Nada Albrahim</creator>
        
        <creator>Sara Alrayes</creator>
        
        <creator>Ghaida Alshemali</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <subject>Deep learning; image processing; super-resolution; surveillance video</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Recently, video surveillance systems have come to be perceived as important technical tools that play a fundamental role in protecting people and assets. In particular, recorded surveillance video sequences are used as evidence to solve violation, theft, and criminal cases. Therefore, identifying the person present at the crime scene becomes a critical task. In this paper, we propose a deep-learning-based Super-Resolution system that aims to enhance face images captured from surveillance video in order to support suspect identification. The proposed system relies on an image processing technique called Super-Resolution, which consists of recovering high-resolution images from low-resolution ones. More specifically, we used the Very-Deep Super-Resolution (VDSR) neural network to enhance image quality. The proposed model was trained on the CelebA faces dataset and used to enhance the resolution of the QMUL-SurvFace dataset. It yielded a Peak Signal-to-Noise Ratio (PSNR) improvement of 7% and a Structural Similarity Index (SSIM) improvement of 3%. Most importantly, it increased the face recognition rate by 45.7%.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_49-Super_Resolution_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Document Features Extraction with Clustering based Classification Framework on Large Document Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110748</link>
        <id>10.14569/IJACSA.2020.0110748</id>
        <doi>10.14569/IJACSA.2020.0110748</doi>
        <lastModDate>2020-07-31T11:34:58.8570000+00:00</lastModDate>
        
        <creator>S Anjali Devi</creator>
        
        <creator>S Siva Kumar</creator>
        
        <subject>Classification; document feature extraction; document similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>As document collections grow day by day, finding essential document clusters for the classification problem is a major challenge due to high inter- and intra-document variation. Moreover, most conventional classification models, such as SVM, neural networks, and Bayesian models, have high true negative rates and error rates in the document classification process. To improve the computational efficiency of traditional document classification models, a hybrid feature-extraction-based document clustering approach and classification approach are developed for large document sets. In the proposed work, a hybrid GloVe feature selection model is proposed to improve the contextual similarity of keywords in a large document corpus. A hybrid document clustering similarity index is optimized to find the essential key document clusters based on the contextual keywords. Finally, a hybrid document classification model is used to classify the clustered documents in the large corpus. Experiments conducted on different datasets show that the proposed document-clustering-based classification model has a higher true positive rate and accuracy and a lower error rate than the conventional models.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_48-A_Hybrid_Document_Features_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of a Tourism Group Decision Support System using Risk Analysis based Knowledge Base</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110747</link>
        <id>10.14569/IJACSA.2020.0110747</id>
        <doi>10.14569/IJACSA.2020.0110747</doi>
        <lastModDate>2020-07-31T11:34:58.8230000+00:00</lastModDate>
        
        <creator>Putu Sugiartawan</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Aina Musdholifah</creator>
        
        <subject>GDSS modeling; risk analysis; tourism site; knowledge base; Bali tourism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The growing number of tourist destinations is a major driver of export earnings, job creation, business development, and infrastructure. The problem that arises is the quite significant difference in regional income (GDP) across regions. It is therefore necessary for the government to make decisions or policies to increase tourist visits, particularly in Bali. Choosing the most efficient of a number of possible decisions matters to the government, tourists, community leaders, academics, and entrepreneurs in the tourism sector, especially in Bali, so it is important to model a group decision support system (GDSS). GDSS modeling that integrates knowledge-based (KB) risk analysis can determine decisions, extract information, and identify problems in the tourism sector, and more specifically for the tourism objects in each region. Problem identification in risk analysis modeling consists of determining decisions for handling risks and finding solutions from alternative tourism decisions with growth potential, together with the knowledge obtained from each decision-maker (DM). The process of identifying knowledge starts by comparing the assessment criteria for each tourism object with the knowledge of tourism decision-makers. The results of the GDSS modeling are then integrated with knowledge-based risk analysis, so that a decision is obtained in the form of an impact or risk and a solution or recommendation for developing the specified tourism object. The purpose of combining the results is to understand the impacts or risks that may arise and the recommendations through which those impacts or risks can be avoided.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_47-Modeling_of_a_Tourism_Group_Decision_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self Organising Fuzzy Logic Classifier for Predicting Type-2 Diabetes Mellitus using ACO-ANN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110746</link>
        <id>10.14569/IJACSA.2020.0110746</id>
        <doi>10.14569/IJACSA.2020.0110746</doi>
        <lastModDate>2020-07-31T11:34:58.8100000+00:00</lastModDate>
        
        <creator>Ratna Patil</creator>
        
        <creator>Sharvari Tamane</creator>
        
        <creator>Kanishk Patil</creator>
        
        <subject>Ant colony optimization; feature selection; fuzzy logic classifier; self organizing; type-2 diabetes mellitus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In today’s digital world, a dataset with a large number of attributes suffers from the curse of dimensionality, where computation time grows exponentially with the number of dimensions. To overcome the problems of computation time and space, an appropriate feature selection method can be developed using metaheuristic approaches. The aim of this work is to investigate the use of ant colony optimization, with the help of a neural network, to select a near-optimal feature subset and integrate it with a self-organizing fuzzy logic classifier to improve the recognition rate. The proposed fuzzy classifier derives prototypes from the collected data through an offline training process and uses them to develop a fuzzy inference system for classification. Once trained, it can continuously learn from streaming data and adapt to changing facts by updating the system structure recursively. The developed model is not based on predefined parameters of a data generation model but is derived from the empirically observed data.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_46-Self_Organising_Fuzzy_Logic_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Evolution-based Approach for Tone-Mapping of High Dynamic Range Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110745</link>
        <id>10.14569/IJACSA.2020.0110745</id>
        <doi>10.14569/IJACSA.2020.0110745</doi>
        <lastModDate>2020-07-31T11:34:58.7930000+00:00</lastModDate>
        
        <creator>Ahmed Almaghthawi</creator>
        
        <creator>Farid Bourennani</creator>
        
        <creator>Ishtiaq Rasool Khan</creator>
        
        <subject>HDR image; LDR image; metaheuristics; differential evolution; tone-mapping; histogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Recently, high dynamic range (HDR) imaging has received significant attention from the research community as well as industry, due to the valuable applications of HDR images in better visualization and analysis. However, HDR images need to be converted to low dynamic range (LDR) images for viewing on standard LDR display screens. Several tone-mapping operators have been proposed for this conversion; however, so far no significant works have been reported that employ artificial intelligence to achieve better enhancement of the output images. In this paper, we present an optimization-based approach to enhance the quality of tone-mapped LDR images using metaheuristics. More specifically, the optimization process is based on the differential evolution (DE) algorithm, which takes the tone-mapping function of an existing histogram-based method as an initial guess and refines the histogram bins iteratively, leading to progressive enhancement of the LDR image quality. The final results produced by the proposed optimized histogram-based approach (OHbA) show better performance than existing state-of-the-art tone-mapping algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_45-Differential_Evolution_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Language of Persuasion in Courtroom Discourse: A Computer-Aided Text Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110744</link>
        <id>10.14569/IJACSA.2020.0110744</id>
        <doi>10.14569/IJACSA.2020.0110744</doi>
        <lastModDate>2020-07-31T11:34:58.7770000+00:00</lastModDate>
        
        <creator>Bader Nasser Aldosari</creator>
        
        <creator>Ayman F. Khafaga</creator>
        
        <subject>Computer-aided text analysis; legal discourse; persuasion; critical discourse analysis; power; control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This paper uses Computer-Aided Text Analysis (CATA) and Critical Discourse Analysis (CDA) to investigate the language of persuasion in courtroom discourse. More specifically, the paper explores the extent to which computer-aided text analysis contributes to decoding the various persuasive strategies employed to control, defend, or accuse within the framework of courtroom discourse. Two research questions are tackled in this paper: first, what strategies of persuasion are employed in the selected data? Second, how can computer-aided text analysis reveal the persuasive tools that influence the attitudes of recipients? By means of the adopted computer-assisted textual analysis, four CDA strategies are discussed in this study: questioning, repetition, emotive language, and justification. The paper reveals that language in courtroom discourse can be used to persuade, or be biased to manipulate. In both cases, a triadic relationship between language, law, and the computer is emphasized.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_44-The_Language_of_Persuasion_in_Courtroom_Discourse.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Machine Escape in Cloud Computing Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110743</link>
        <id>10.14569/IJACSA.2020.0110743</id>
        <doi>10.14569/IJACSA.2020.0110743</doi>
        <lastModDate>2020-07-31T11:34:58.7470000+00:00</lastModDate>
        
        <creator>Hesham Abusaimeh</creator>
        
        <subject>Cloud computing; virtual machine escape; cloud security; impact of VM escape; VM escape counter measures; VM escape nature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>It is axiomatic that every advance devised to make daily life easier through technology is matched by many complications, in terms of both the methods that led to these inventions and how to maintain their sustainability, consistency, and development. In the digital world, this is no longer merely an axiom but one of the main inherent features that define digital technology. Major international companies are in a great race to develop and invent new products to supply to markets, all within no more than a year, and immersion in that race must be armed with patience and a deep breath.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_43-Virtual_Machine_Escape_in_Cloud_Computing_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Analysis of a Zeta Method for Low-Cost, Camera-based Iris Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110742</link>
        <id>10.14569/IJACSA.2020.0110742</id>
        <doi>10.14569/IJACSA.2020.0110742</doi>
        <lastModDate>2020-07-31T11:34:58.7470000+00:00</lastModDate>
        
        <creator>Eko Ihsanto</creator>
        
        <creator>Jeffry Kurniawan</creator>
        
        <creator>Diyanatul Husna</creator>
        
        <creator>Alfan Presekal</creator>
        
        <creator>Kalamullah Ramli</creator>
        
        <subject>Iris recognition; iris segmentation; Zeta; authentication; biometric; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Iris recognition is an alternative authentication method. Many studies have tried to improve iris recognition as a biometric-based alternative for secure authentication. Iris segmentation is an important part of iris recognition because it defines the image region used for subsequent processing such as feature extraction and matching, and hence directly affects the overall iris recognition performance. This work focuses on the development of an authentication system using localization methods and half-polar normalization of the iris. The proposed Zeta method uses a new model of eye segmentation and normalization that can be applied simultaneously to both eyes, considering the different iris patterns in those two eyes. Seven variants of the Zeta method are proposed and tested: Zeta-v1, Zeta-v2, Zeta-v3, Zeta-v4, Zeta-v5, Zeta-v6, and Zeta-v7. Overall, the method achieved an average segmentation time of 0.0138427 seconds. The highest accuracy was achieved by the Zeta-v1 method, with a value threshold of 100% on the wrong rejection rate and 94.9% on the correct acceptance rate.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_42-Development_and_Analysis_of_a_Zeta_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Voice Frame Shrinking Method to Enhance VoIP Bandwidth Exploitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110741</link>
        <id>10.14569/IJACSA.2020.0110741</id>
        <doi>10.14569/IJACSA.2020.0110741</doi>
        <lastModDate>2020-07-31T11:34:58.7130000+00:00</lastModDate>
        
        <creator>Qusai Shambour</creator>
        
        <creator>Sumaya N. Alkhatib</creator>
        
        <creator>Mosleh M. Abualhaj</creator>
        
        <creator>Yousef Alrabanah</creator>
        
        <subject>VoIP; VoIP protocols; ITTP protocol; payload compression; bandwidth exploitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Traditional telecommunication systems (e.g., the landline telephone system) are increasingly being replaced by Voice over Internet Protocol (VoIP) systems because of their very low or free rates. However, one of the main handicaps of VoIP adoption is inefficient bandwidth exploitation. A key approach to handling this issue is packet multiplexing. This article proposes a new VoIP packet payload compression method that enhances bandwidth exploitation over the Internet Telephony Transport Protocol (ITTP). The proposed method is called payload shrinking over ITTP (ITTP-PS). As the name implies, the ITTP-PS method shrinks the VoIP packet payload based on a certain mechanism. The ITTP-PS method has two entities, namely, sender ITTP-PS (S-ITTP-PS) and receiver ITTP-PS (R-ITTP-PS). The main function of the S-ITTP-PS entity is to shrink the VoIP packet payload, while the main function of the R-ITTP-PS entity is to restore the VoIP packet payload to its normal size. To perform the R-ITTP-PS entity function, the ITTP-PS method reemploys the flag bits in the IP protocol header. The ITTP-PS method has been implemented and compared with the traditional ITTP protocol without payload shrinking. The comparison is based on the VoIP packet payload shrinking ratio and the isochronous call capacity improvement ratio. The results showed that the VoIP packet payload shrinking ratio improved by up to around 20%, while the isochronous call capacity improved by up to around 9.5%, thereby enhancing VoIP bandwidth exploitation over the ITTP protocol.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_41-Effective_Voice_Frame_Shrinking_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Analytics Tool Adoption by University Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110740</link>
        <id>10.14569/IJACSA.2020.0110740</id>
        <doi>10.14569/IJACSA.2020.0110740</doi>
        <lastModDate>2020-07-31T11:34:58.7130000+00:00</lastModDate>
        
        <creator>Seren Basaran</creator>
        
        <creator>Ahmed Mohamed Daganni</creator>
        
        <subject>Higher education; learning analytics; learning tools; North Cyprus; students; technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Learning analytics refers to a systematic process of measuring, collecting, analyzing and reporting data about learners with the aim of fully understanding how learning environments can best be optimized to increase efficiency. The aim of this study is to understand the factors contributing to the adoption of learning analytics by university students in North Cyprus. Participants comprised students from three universities in North Cyprus. 718 valid questionnaires containing items from the adopted UTAUT (Unified Theory of Acceptance and Use of Technology) model were used in the study. The results showed a weak negative correlation between Performance Expectancy and Technology Use Intention, implying that when students are aware of how a technology operates and it satisfies their requirements, they will be ready to adopt learning analytics. There was also a weak negative correlation between Effort Expectancy and Technology Use Intention. A weak positive correlation between Social Influence and Technology Use Intention was observed, while there was a weak negative correlation between Technology Use Intention and Technology Use Behavior, implying that when students have intentions of using learning analytics, they show a positive behavior towards the technology. The study also shows a moderate positive correlation between Technology Anxiety and Technology Use Behavior. This study is considered to be of great benefit and practical value to researchers, instructors, students, universities and the ministry of education.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_40-Learning_Analytics_Tool_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Entrepreneurs Utilize Accelerators: A Demographic Factor Analysis in Turkey using Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110739</link>
        <id>10.14569/IJACSA.2020.0110739</id>
        <doi>10.14569/IJACSA.2020.0110739</doi>
        <lastModDate>2020-07-31T11:34:58.7000000+00:00</lastModDate>
        
        <creator>Ceren Cubukcu</creator>
        
        <creator>Sevinc Gulsecen</creator>
        
        <subject>Accelerators; e-business; startups; regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This study examines entrepreneurs participating in eight accelerator programs located in Istanbul, Turkey. Business accelerators are a new kind of incubation program built in particular to help technology entrepreneurs and assist them in reaching the next level. In total, eight accelerator programs are researched in this study. A survey was developed for this study and administered to entrepreneurs attending these eight accelerator programs. In this survey, the effectiveness of these programs is measured according to the demographics of the entrepreneurs. The aim of this research is to analyze how entrepreneurs use the services provided by the accelerator programs. In relation to entrepreneurs’ age, gender, work experience, educational status and family background, several hypotheses have been formulated for assessing the value of the support given in these accelerator programs. The data of this research have been examined via SPSS using the Mann-Whitney and Kruskal-Wallis methods. Based on the results of these tests, a regression model called the Generalized Linear Mixed Model (GLMM) has been developed. This study adds to the literature by examining accelerator support and facilities so that accelerators can differentiate their programs in line with the requests of entrepreneurs.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_39-How_Entrepreneurs_Utilize_Accelerators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level of Depression in Tuberculosis Patients of Los Olivos Health Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110738</link>
        <id>10.14569/IJACSA.2020.0110738</id>
        <doi>10.14569/IJACSA.2020.0110738</doi>
        <lastModDate>2020-07-31T11:34:58.6670000+00:00</lastModDate>
        
        <creator>Katherine Trinidad-Carrillo</creator>
        
        <creator>Ruth Santana-Cercado</creator>
        
        <creator>Katherine Castillo-Nanez</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Hernan Matta-Solis</creator>
        
        <subject>Tuberculosis; depression; patient health questionnaire 9; health centers; tuberculosis lifestyle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Tuberculosis is a contagious infectious disease caused by Mycobacterium tuberculosis, which is released into the air when a sick person coughs, sneezes or talks, so that it can be inhaled by and infect another person. Patients must therefore adopt family, work and social distancing to avoid infecting others, which puts them at risk of developing different levels of depression; this is detrimental because of its negative influence on decision-making. The frequency of depression in society is high, as is the predisposition of patients diagnosed with tuberculosis due to the sudden change in their lifestyle. This study therefore set out to determine the level of depression in tuberculosis patients at health centers in the Los Olivos district; it also identifies the most frequent physical and psychological reactions, as well as the most affected sex. To obtain the information, the corresponding permits were obtained and the Patient Health Questionnaire 9 (PHQ-9), an internationally and nationally validated standardized instrument, was applied. The data were processed in the statistical software SPSS 24.0, and the graphics were subsequently extracted. The results showed that 100% of the participants had some level of depression, the most prevalent being moderate depression at 35.56%, which was more present in the female population at 21.11%; it was also shown that 48.9% of patients almost always have little interest or pleasure in doing things.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_38-Level_of_Depression_in_Tuberculosis_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Side Information Generation for High-Resolution Videos in Distributed Video Coding Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110737</link>
        <id>10.14569/IJACSA.2020.0110737</id>
        <doi>10.14569/IJACSA.2020.0110737</doi>
        <lastModDate>2020-07-31T11:34:58.6530000+00:00</lastModDate>
        
        <creator>Shahzad Khursheed</creator>
        
        <creator>Nasreen Badruddin</creator>
        
        <creator>Varun Jeoti</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <subject>Fast side information algorithm; phase-based interpolation (Phase-I); DVC; DVC decoder for high-resolution videos; real-time DVC decoding; real-time side information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Distributed video coding (DVC) is an attractive and promising scheme that suits constrained video applications, such as wireless sensor networks or wireless surveillance systems. In DVC, estimation of fast and consistent side information (SI) is a critical issue for instant and real-time decoding. This issue becomes even more serious for high-resolution videos. Therefore, to minimise the computational complexity of side information estimation, a computationally low-complexity DVC codec is proposed in this work, which uses a simple phase-based interpolation (Phase-I) algorithm. It performs faster for videos of all resolutions, and significant results are achieved for high-resolution videos with a large group of pictures (GOP). For the proposed technique, the computation time decreases rapidly as resolution increases. It performs 221% to 280% faster than the conventional frame interpolation method for high-resolution videos and large GOPs, at the cost of a little degradation in the visual quality of the estimated side information.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_37-Fast_Side_Information_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach for Recognizing the Holy Quran Reciter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110735</link>
        <id>10.14569/IJACSA.2020.0110735</id>
        <doi>10.14569/IJACSA.2020.0110735</doi>
        <lastModDate>2020-07-31T11:34:58.6200000+00:00</lastModDate>
        
        <creator>Jawad H Alkhateeb</creator>
        
        <subject>Holy Quran audio analysis; MFCC; KNN; ANN; Machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The Holy Quran is the holy book of all Muslims. Reading the Holy Quran is a special kind of reading governed by rules and is called recitation. One of the essential activities of Muslims is reading or listening to the Holy Quran. In this paper, a machine learning approach for recognizing the reader of the Holy Quran (the reciter) is proposed. The proposed system contains the basic traditional phases of a recognition system, including data acquisition, pre-processing, feature extraction, and classification. A dataset is created for ten well-known reciters, the prayer leaders in the holy mosques in Mecca and Madinah. The audio dataset is analyzed using Mel Frequency Cepstral Coefficients (MFCC). Both the K-nearest neighbor (KNN) classifier and the artificial neural network (ANN) classifier are applied for classification. Pitch is used as the feature to train the ANN and the KNN. Two chapters of the Holy Quran are selected in this paper for system validation, and excellent accuracy is achieved. Using the ANN, the proposed system gives 97.62% accuracy for chapter 18 and 96.7% accuracy for chapter 36. On the other hand, using the KNN, the proposed system gives 97.03% accuracy for chapter 18 and 96.08% accuracy for chapter 36.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_35-A_Machine_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Performance Analysis of an Adaptive PID Speed Controller for the BLDC Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110736</link>
        <id>10.14569/IJACSA.2020.0110736</id>
        <doi>10.14569/IJACSA.2020.0110736</doi>
        <lastModDate>2020-07-31T11:34:58.6200000+00:00</lastModDate>
        
        <creator>Md Mahmud</creator>
        
        <creator>S. M. A. Motakabber</creator>
        
        <creator>A. H. M. Zahirul Alam</creator>
        
        <creator>Anis Nurashikin Nordin</creator>
        
        <creator>A. K.M. Ahasan Habib</creator>
        
        <subject>QFT; PWM; BLDC motor; PID controller; adaptive; adaptive PID controller; APIDC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The Brushless Direct Current (BLDC) motor is the most popular motor for automation and industry. Good performance of the BLDC motor demands a driving circuit, but the driving circuit is costly, has a complex control mechanism, depends on various parameters, and provides low torque. The Proportional Integral (PI), Proportional Integral Derivative (PID), fuzzy logic, adaptive, Quantitative Feedback Theory (QFT) and Pulse Width Modulation (PWM) controllers are the common types of control methods existing for the BLDC motor. This research explored several well-working experiments and identified the PID controller as the most applicable controller, which becomes efficacious and useful in achieving satisfactory control performance when adaptability is implemented. This research proposes a combined method using a PID controller and a PID auto-tuner, which has the ability to improve the system’s adaptability; the method is named the adaptive PID controller (APIDC). To verify the performance, the MATLAB simulation platform was used, and a benchmark system was developed based on the actual BLDC motor parameters, auxiliary systems, and mathematically solved parameters. All work was done using MATLAB/Simulink.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_36-Modeling_and_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Binary Clonal Selection Algorithm with Optimum Path Forest for Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110734</link>
        <id>10.14569/IJACSA.2020.0110734</id>
        <doi>10.14569/IJACSA.2020.0110734</doi>
        <lastModDate>2020-07-31T11:34:58.5900000+00:00</lastModDate>
        
        <creator>Emad Nabil</creator>
        
        <creator>Safinaz Abdel-Fattah Sayed</creator>
        
        <creator>Hala Abdel Hameed</creator>
        
        <subject>Feature selection; artificial immune system; clonal selection algorithm; optimization; optimum path forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Feature selection is an important step in different applications such as data mining, classification, pattern recognition, and optimization. Until now, finding the most informative set of features in a large dataset is still an open problem. In computer science, many metaphors are imported from nature and biology and have proved to be efficient when applied in an artificial way to solve many problems. Examples include neural networks, human genetics, flower pollination, and the human immune system. Clonal selection is one of the processes that happen in the human immune system while recognizing new infections. Mimicking this process in an artificial way resulted in a powerful algorithm, the Clonal Selection Algorithm. In this paper, we explore the power of the Clonal Selection Algorithm in its binary form for solving the feature selection problem; we use the accuracy of the Optimum-Path Forest classifier, which is much faster than other classifiers, as the fitness function to be optimized. Experiments on three public benchmark datasets are conducted to compare the proposed Binary Clonal Selection Algorithm, in conjunction with the Optimum-Path Forest classifier, with four other powerful algorithms: the Binary Flower Pollination Algorithm, Binary Bat Algorithm, Binary Cuckoo Search, and Binary Differential Evolution Algorithm. In terms of classification accuracy, the experiments revealed that the proposed method outperformed the other four algorithms, moreover with a smaller number of features. The proposed method also took less average execution time than the other algorithms, except for Binary Cuckoo Search. The statistical analysis showed that our proposal has a significant difference in accuracy compared with the Binary Bat Algorithm and the Binary Differential Evolution Algorithm.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_34-An_Efficient_Binary_Clonal_Selection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Freshwater Zooplankton by Pre-trained Convolutional Neural Network in Underwater Microscopy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110733</link>
        <id>10.14569/IJACSA.2020.0110733</id>
        <doi>10.14569/IJACSA.2020.0110733</doi>
        <lastModDate>2020-07-31T11:34:58.5600000+00:00</lastModDate>
        
        <creator>Song Hong</creator>
        
        <creator>Syed Raza Mehdi</creator>
        
        <creator>Hui Huang</creator>
        
        <creator>Kamran Shahani</creator>
        
        <creator>Yangfang Zhang</creator>
        
        <creator>Junaidullah</creator>
        
        <creator>Kazim Raza</creator>
        
        <creator>Mushtaq Ali Khan</creator>
        
        <subject>AlexNet; automatic image classification; Convolutional Neural Networks (CNN); freshwater zooplankton; transfer learning; underwater microscope</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Zooplankton are an enormously diverse and fundamental group of microorganisms that exist in almost every freshwater body, determining its ecology and playing a vital role in the food chain. Considering the significance of zooplankton, the study of freshwater zooplankton is essential, and it relies heavily on the classification of images. However, routine manual analysis and classification is laborious, time-consuming and expensive, and poses a significant challenge to experts. Thus, over the recent decade much research has focused on the development of underwater imaging technologies and intelligent classification systems for zooplankton. This work is devoted to the observation of freshwater zooplankton with a purpose-designed underwater microscope and to modeling a system for automatic classification among four different taxa. Unlike most existing zooplankton image classification systems, this model is trained on a comparatively small dataset collected from fresh water with the designed underwater microscope. Transfer learning of the pre-trained AlexNet Convolutional Neural Network (CNN) model proved to be a suitable approach in the system design. Among four networks trained over two datasets, the best overall classification accuracy of up to 93.1%, comparable to other existing systems, was achieved on the test dataset (92.5% for Calanoid and Cyclopoid (Female), 90% for Cyclopoid (Male) and 97.5% for Daphnia). A Graphical User Interface (GUI) of the model, constructed in MATLAB, makes it easy for users to collect images for building the database, train the network and classify images of different taxa. Moreover, the designed system is adaptable to the addition of more classes in the future.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_33-Classification_of_Freshwater_Zooplankton.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QUES: A Quality Estimation System of Arabic to English Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110732</link>
        <id>10.14569/IJACSA.2020.0110732</id>
        <doi>10.14569/IJACSA.2020.0110732</doi>
        <lastModDate>2020-07-31T11:34:58.5430000+00:00</lastModDate>
        
        <creator>Manar Salamah Ali</creator>
        
        <creator>Anfal Alatawi</creator>
        
        <creator>Bayader Alsahafi</creator>
        
        <creator>Najwa Noorwali</creator>
        
        <subject>Translation quality estimation; translation adequacy; translation fluency; supervised machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Estimating translation quality is a problem of growing importance, as it has many potential applications. The quality of translation from Arabic to English is especially difficult to evaluate because the two are distant languages: different in syntax and low in lexical similarity. We propose a feature-based framework for estimating the quality of Arabic to English translations at the sentence level. The proposed method works without reference translations, considers both fluency and adequacy of translations, and makes no assumptions about the source of translation (humans, machines, or post-edited machine translations), thus making the solution applicable to increasingly more situations. This research solves the translation quality estimation problem by treating it as a supervised machine learning problem. The proposed model utilizes regression algorithms (SVR and Linear Regression) to predict quality scores of unseen translated texts at runtime. This is accomplished by training models on a labeled parallel corpus and mapping extracted features to the quality label. The prediction models succeeded in predicting fluency and adequacy of translations with a Mean Absolute Error of 0.84 and 1.02, respectively. Furthermore, we show that in a setting similar to our approach, the fluency of an Arabic to English translated sentence, on its own, is an appropriate indication of the translation’s overall quality.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_32-QUES_A_Quality_Estimation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Compact Broadband and High Gain Tapered Slot Antenna with Stripline Feeding Network for H, X, Ku and K Band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110731</link>
        <id>10.14569/IJACSA.2020.0110731</id>
        <doi>10.14569/IJACSA.2020.0110731</doi>
        <lastModDate>2020-07-31T11:34:58.5100000+00:00</lastModDate>
        
        <creator>Permanand Soothar</creator>
        
        <creator>Hao Wang</creator>
        
        <creator>Chunyan Xu</creator>
        
        <creator>Zaheer Ahmed Dayo</creator>
        
        <creator>Badar Muneer</creator>
        
        <creator>Kelash Kanwar</creator>
        
        <subject>Tapered slot antenna (TSA); compact; radial cavity; broadband impedance bandwidth; peak realized gain; etched slots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In this paper, a compact planar travelling-wave tapered slot antenna is proposed for wireless communication applications. The antenna prototype is developed on Rogers RT/Duroid 5880 laminate with tan δ = 0.0009 and a relative permittivity of 2.2, operating in the range of 6 GHz – 21 GHz. The simple feeding technique transitions through a radial cavity into the opening taper profile. The antenna dimensions have been designed so as to enable impedance matching. A parametric study of the design variables is carried out through various scrupulous simulations. The designed antenna achieves a broadband impedance bandwidth of 111.11% at the minimum 10-dB return loss, and a peak realized gain of 7 dBi is obtained at a resonant frequency of 19.6 GHz. The simulated results are in good agreement with the experimental results, making the antenna suitable for H (6 - 8 GHz), X (8 - 12 GHz), Ku (12 - 18 GHz) and K (18 - 26 GHz) band and future wireless communication applications.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_31-A_Compact_Broadband_and_High_Gain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection using Template and HOG Feature Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110730</link>
        <id>10.14569/IJACSA.2020.0110730</id>
        <doi>10.14569/IJACSA.2020.0110730</doi>
        <lastModDate>2020-07-31T11:34:58.4970000+00:00</lastModDate>
        
        <creator>Marjia Sultana</creator>
        
        <creator>Tasniya Ahmed</creator>
        
        <creator>Partha Chakraborty</creator>
        
        <creator>Mahmuda Khatun</creator>
        
        <creator>Md. Rakib Hasan</creator>
        
        <creator>Mohammad Shorif Uddin</creator>
        
        <subject>Computer vision; template matching; HOG; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In the present era, the applications of computer vision are increasing day by day. Computer vision is concerned with the automatic recognition, exploration and extraction of the necessary information from a particular image or a group of images. This paper addresses a method to detect a desired object in an image. Usually, a template of the desired object is used in detection through a matching technique named template matching. This works well when the template image is cropped from the original one, but it is not invariant to the various transformations in the test images. To cope with this difficulty and to develop a generalized approach, we investigate in detail another technique, the HOG (Histogram of Oriented Gradients) approach. In HOG, the image is divided into overlapping blocks of template size, and each block’s normalized HOG is compared with the normalized HOG of the template to find the best match of the object. We performed experiments with a large number of images and found satisfactory performance.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_30-Object_Detection_using_Template_and_HOG_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Aspect Oriented Programming in Distributed Application Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110729</link>
        <id>10.14569/IJACSA.2020.0110729</id>
        <doi>10.14569/IJACSA.2020.0110729</doi>
        <lastModDate>2020-07-31T11:34:58.4800000+00:00</lastModDate>
        
        <creator>Fatiha Khalifa</creator>
        
        <creator>Samira Chouraqui</creator>
        
        <subject>Aspect oriented programming; design by contract; web service composition; parameters adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Aspect-oriented programming is an emerging programming paradigm that extends across the development phases in different domains. Many researchers have focused on the use of this paradigm in web service composition along different research axes. However, none of them uses aspect-oriented programming together with design by contract to deal with the adaptation of parameters in the web service composition process. This paper proposes a web service composition algorithm based on the planning graph that uses both aspect-oriented programming and the design by contract concept. The aspect-oriented programming approach provides explicit support for the separation of crosscutting concerns in web service composition, whereas the design by contract approach allows parameters to be processed in pre-condition and post-condition mode by using contracts, in order to ensure correct service execution with adaptation to external parameters without touching the properties, which would otherwise require re-construction of the composite service. Future development of this planning graph will include the introduction of dynamic aspect-oriented programming and the addition of comparison results.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_29-Applying_Aspect_Oriented_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Impact for Predicting Student Performance in Intelligent Tutoring Systems (ITS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110728</link>
        <id>10.14569/IJACSA.2020.0110728</id>
        <doi>10.14569/IJACSA.2020.0110728</doi>
        <lastModDate>2020-07-31T11:34:58.4500000+00:00</lastModDate>
        
        <creator>Kouame Abel Assielou</creator>
        
        <creator>Ciss&#233; Th&#233;odore Haba</creator>
        
        <creator>Bi Tra Goor&#233;</creator>
        
        <creator>Tanon Lambert Kadjo</creator>
        
        <creator>Kouakou Daniel Yao</creator>
        
        <subject>Intelligent tutoring system; student performance prediction; matrix factorization; emotional impact; achievement emotions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Current Intelligent Tutoring Systems (ITS) provide better recommendations for students to improve their learning. These recommendations mainly involve students’ performance prediction, which remains problematic for ITS, despite the significant improvements made by prediction methods such as Matrix Factorization (MF). The present contribution therefore aims to provide a solution to this prediction problem by proposing an approach that combines Multiple Linear Regression (modelling emotional impact) and a Weighted Multi-Relational Matrix Factorization model to take advantage of both students’ cognitive and emotional faculties. This approach takes into account not only the relationships that exist between students, tasks and skills, but also students’ emotions. Experimental results on a set of pedagogical data collected from 250 students show that our approach significantly improves the results of student performance prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_28-Emotional_Impact_for_Predicting_Student_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Affecting SME Owners in Adopting ICT in Business using Thematic Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110727</link>
        <id>10.14569/IJACSA.2020.0110727</id>
        <doi>10.14569/IJACSA.2020.0110727</doi>
        <lastModDate>2020-07-31T11:34:58.4330000+00:00</lastModDate>
        
        <creator>Anis Nur Assila Rozmi</creator>
        
        <creator>Puteri N.E. Nohuddin</creator>
        
        <creator>Abdul Razak Abdul Hadi</creator>
        
        <creator>Mohd Izhar A. Bakar</creator>
        
        <creator>A. Imran Nordin</creator>
        
        <subject>Small Medium Enterprise (SME); Information and Communications Technology (ICT); thematic analysis; internal factors; external factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In Malaysia, Small Medium Enterprises (SMEs) have become the main contributors to income generation and employment, and are thus expected to increase the country’s GDP growth. Hence, it is important for SME owners to ensure the sustainability of their businesses in today’s business settings, which have moved to digital businesses in which technology is used to create new value in business models, customer experiences and the internal capabilities that support core operations. Yet, there are still local SMEs that have not fully utilized the advantages of adopting Information and Communications Technology (ICT) in their business operations and transactions. This paper presents interviews conducted with 12 SME owners who operate their businesses in Kuala Lumpur and Selangor. The study aims to gain an understanding of the factors affecting SME owners in adopting ICT in their businesses by using the Thematic Analysis method. The outcome shows that there are two central themes that affect ICT usage in SMEs: (1) the Internal Factor and (2) the External Factor. Further, we identified two components that affect the Internal Factor, namely the Company (capital, company’s age, less skilled workers and family business) and the SME Owner (time, education, perceptions and experiences), and another two components that affect the External Factor, namely Technology (high cost, complexity, system security and stability) and Regulators (government initiatives, training skills and no urgency). Hence, the result is important to SME owners and management, as well as to the government and authorities, for resolving these issues in order to increase ICT usage among SME owners in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_27-Factors_Affecting_SME_Owners_in_Adopting_ICT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Security Mechanism over MQTT Protocol for IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110726</link>
        <id>10.14569/IJACSA.2020.0110726</id>
        <doi>10.14569/IJACSA.2020.0110726</doi>
        <lastModDate>2020-07-31T11:34:58.4030000+00:00</lastModDate>
        
        <creator>Sanaz Amanlou</creator>
        
        <creator>Khairul Azmi Abu Bakar</creator>
        
        <subject>Internet of Things (IoT); MQTT; Pre-Shared Keys (PSK); elliptic curve cryptography; Diffie-Hellman Ephemeral (DHE); Digital Signature Algorithm (DSA); Perfect Forward Secrecy (PFS); authentication; power consumption; wireless sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Security is one of the main concerns with regard to Internet of Things (IoT) networks. Since most IoT devices are restricted in resources and power consumption, it is not easy to implement robust security mechanisms. There are different methods to secure network communications; however, they are not applicable to IoT devices. In addition, most authentication methods use certificates, in which signing and verifying certificates require more computation and power. The main objective of this paper is to propose a lightweight authentication and encryption mechanism for IoT constrained devices. This mechanism uses ECDHE-PSK, which is a Transport Layer Security (TLS) authentication algorithm, over the Message Queuing Telemetry Transport (MQTT) protocol. This authentication algorithm provides a Perfect Forward Secrecy (PFS) feature that improves security. It is the first time that this TLS authentication algorithm has been implemented and evaluated over the MQTT protocol for IoT devices. To evaluate the resource consumption of the proposed security mechanism, it was compared with the default security mechanism of the MQTT protocol and with ECDHE-ECDSA, a certificate-based authentication algorithm. They were evaluated in terms of CPU utilization, execution time, bandwidth, and power consumption. The results show that the proposed security mechanism outperforms ECDHE-ECDSA in all tests.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_26-Lightweight_Security_Mechanism_over_MQTT_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malware Analysis in Web Application Security: An Investigation and Suggestion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110725</link>
        <id>10.14569/IJACSA.2020.0110725</id>
        <doi>10.14569/IJACSA.2020.0110725</doi>
        <lastModDate>2020-07-31T11:34:58.3870000+00:00</lastModDate>
        
        <creator>Abhishek Kumar Pandey</creator>
        
        <creator>Fawaz Alsolami</creator>
        
        <subject>Malware analysis; web application; application security; fuzzy-AHP; forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Malware analysis is essentially used for the identification of malware and its objectives. However, the present era has seen the process of malware analysis being used to enhance security methods in different domains of technology. This study attempts to analyze the current situation and status of malware analysis in web application security through several objectives. These objectives help the authors to analyze the purpose and methodology of malware analysis as previously used in web application security, and to select a prioritized malware analysis technique through a hybrid multi-criteria decision-making procedure called the fuzzy Analytical Hierarchy Process (fuzzy-AHP). This fuzzy-AHP methodology helps the authors to find and recommend the most prioritized malware analysis technique and type, as well as to suggest a ranking of the various malware analysis techniques frequently used in web application security, for the use of experts and developers. Furthermore, the second section of the paper forecasts the attack statistics of malware and the publication statistics of malware analysis in web application security, respectively, to convey the sensitivity of the topic and the need for investigation. The proposed tactic intends to be an effective reckoner for web developers and to facilitate malware analysis for securing web applications. Additionally, these forecast publication and attack scenarios give a complementary overview of the domain.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_25-Malware_Analysis_in_Web_Application_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Perception of the Effect of Cognitive Factors in Determining Success in Computer Programming: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110724</link>
        <id>10.14569/IJACSA.2020.0110724</id>
        <doi>10.14569/IJACSA.2020.0110724</doi>
        <lastModDate>2020-07-31T11:34:58.3700000+00:00</lastModDate>
        
        <creator>Jotham Msane</creator>
        
        <creator>Bethel Mutanga Murimo</creator>
        
        <creator>Tarirai Chani</creator>
        
        <subject>Cognitive factors; performance; programming; self-efficacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The reliance on science and technology by both countries and corporate entities is increasingly evident as the evolving trend of digitization not only pervades every facet of life but also assumes a dominant role. Correspondingly, producing competent computer science and information technology (IT) graduates becomes highly imperative. Already, in most developed and developing countries, there has been an increasing demand for competencies such as network engineers, programmers, and other IT-related specialists. Although these competencies are equally valuable, programming skills constitute the core of the strength of every other IT-related competence. Nevertheless, programming is reported in the literature to be one of the most difficult courses for students. Moreover, the level of performance in programming is said to be significantly low, with an attendant high rate of student dropout. There is a concerted research effort toward addressing the challenge of poor academic performance by attempting to answer the question of which factors affect academic performance in general. However, there is scant literature on the factors that specifically affect the ability to understand the concepts of programming. This paper therefore reports a case study investigation of students’ perception of the effect of cognitive factors as determinants of success in computer programming. The findings showed that performance in introductory programming is impacted by a range of interrelated cognitive factors, including self-efficacy and the love for technology.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_24-Students_Perception_of_the_Effect_of_Cognitive_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Cervical Cancer using Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110723</link>
        <id>10.14569/IJACSA.2020.0110723</id>
        <doi>10.14569/IJACSA.2020.0110723</doi>
        <lastModDate>2020-07-31T11:34:58.3570000+00:00</lastModDate>
        
        <creator>Riham Alsmariy</creator>
        
        <creator>Graham Healy</creator>
        
        <creator>Hoda Abdelhafez</creator>
        
        <subject>Cervical cancer; machine learning; voting method; risk factors; SMOTE; PCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In almost all countries, precautionary measures are less expensive than medical treatment. The early detection of any disease gives a patient better chances of successful treatment than discovery at an advanced stage of its development. Even when a cure is not possible, early detection allows treatment that can provide a more comfortable life. Cervical cancer is one such disease, considered the fourth most common type of cancer in women around the world. There are many factors that increase the risk of cervical cancer, such as age and the use of hormonal contraceptives. Early detection of cervical cancer helps to raise recovery rates and reduce death rates. This paper aims to use machine learning algorithms to find a model capable of diagnosing cervical cancer with high accuracy and sensitivity. The cervical cancer risk factor dataset from the University of California at Irvine (UCI) was used to construct the classification model through a voting method that combines three classifiers: decision tree, logistic regression and random forest. The synthetic minority oversampling technique (SMOTE) was used to solve the problem of the imbalanced dataset and, together with the principal component analysis (PCA) technique, to reduce dimensions that do not affect model accuracy. Then, a stratified 10-fold cross-validation technique was used to prevent the overfitting problem. This dataset contains four target variables (Hinselmann, Schiller, Cytology, and Biopsy) with 32 risk factors. We found that using the voting classifier with the SMOTE and PCA techniques helped raise the accuracy, sensitivity, and area under the Receiver Operating Characteristic curve (ROC_AUC) of the predictive models created for each of the four target variables. In the SMOTE-voting model, accuracy, sensitivity and PPA ratios improved by 0.93% to 5.13%, 39.26% to 46.97% and 2% to 29%, respectively, for all target variables. Moreover, using the PCA technique reduced computational processing time and increased model efficiency. Finally, after comparing our results with several previous studies, it was found that our models were able to diagnose cervical cancer more efficiently according to certain evaluation measures.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_23-Predicting_Cervical_Cancer_using_Machine_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Practical Considerations, Recent Advances and Research Challenges in Underwater Optical Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110722</link>
        <id>10.14569/IJACSA.2020.0110722</id>
        <doi>10.14569/IJACSA.2020.0110722</doi>
        <lastModDate>2020-07-31T11:34:58.3230000+00:00</lastModDate>
        
        <creator>Syed Agha Hassnain Mohsan</creator>
        
        <creator>Md. Mehedi Hasan</creator>
        
        <creator>Alireza Mazinani</creator>
        
        <creator>Muhammad Abubakar Sadiq</creator>
        
        <creator>Muhammad Hammad Akhtar</creator>
        
        <creator>Asad Islam</creator>
        
        <creator>Laraba Selsabil Rokia</creator>
        
        <subject>Component; underwater optical wireless communication; underwater technologies; research questions; 5G/6G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Underwater Optical Wireless Communication (UOWC) has gained significant attention for many underwater activities because of its high bandwidth compared to radio frequency and acoustic technologies. UOWC has high stature in underwater observation, exploration and monitoring applications. However, due to the complex nature of ocean water, several practical challenges exist in the deployment of UOWC links. Qualitative and effective research has been carried out on UOWCs over the last few decades. The ambition behind this systematic study is to provide a comprehensive survey of the latest research in UOWCs. Herein, we provide a brief discussion of the major research challenges, limitations and developments in UOWCs. We provide a periodical review of the UOWC issues and potential challenges highlighted in previous studies. In this paper, we have also investigated research methods to draw the attention of the research fraternity towards future technologies and challenges on the basis of existing approaches. Thus, our foremost aim is to provide a state-of-the-art analysis of existing UOWCs. Significant deliberation has been provided with a recent bibliography.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_22-A_Systematic_Review_on_Practical_Considerations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Machine Learning based Classifications of Staging in Gynecological Cancers using Feature Subset through Fused Feature Selection Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110721</link>
        <id>10.14569/IJACSA.2020.0110721</id>
        <doi>10.14569/IJACSA.2020.0110721</doi>
        <lastModDate>2020-07-31T11:34:58.3100000+00:00</lastModDate>
        
        <creator>B Nithya</creator>
        
        <creator>V Ilango</creator>
        
        <subject>Ovarian cancer; cervical cancer; diagnosis; gynaecological cancers; staging; feature selection; machine learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>After diagnosing cancer, the next step is to identify its stage in order to start the appropriate treatment plan. There are different kinds of gynaecological cancers, and this research lays emphasis on the cervical and ovarian cancer types with their staging classifications. The cervical and ovarian cancer data from the SEER registry are used in this work. This work proposes an optimized classification method for staging prediction in gynaecological cancers through a fused feature selection process that aims to provide an optimal feature subset. The fused feature selection process hybridizes the Relief filter approach with the wrapper method of a genetic algorithm to produce a revised feature subset of the data as an outcome. Accordingly, this work attained an improved feature subset through the fused feature selection process for precise classification of cervical and ovarian cancer stages by identifying their significant features. The predictive models are established with 10-fold cross-validation using major classification algorithms such as C5.0, Random Forest and KNN. Classification results are obtained for the respective cervical and ovarian cancer stages, and a stage-wise classification based on patients’ age is also obtained through the proposed method. The results show that women in the age group of 45 and above are more critically affected by the incidence of the cervical and ovarian cancer types. The Random Forest method has shown a progressive accuracy rate along with progressive percentages of the other performance measures. Also, this work recognized that the best and optimal feature subset selection can reduce the complexity of the predictive model.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_21-Optimized_Machine_Learning_based_Classifications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tai Chi Care: An Exergaming Software using Microsoft Kinect V2 for Blind or Low Vision Person during Confinement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110720</link>
        <id>10.14569/IJACSA.2020.0110720</id>
        <doi>10.14569/IJACSA.2020.0110720</doi>
        <lastModDate>2020-07-31T11:34:58.2770000+00:00</lastModDate>
        
        <creator>Marwa Bouri</creator>
        
        <creator>Ali Khalfallah</creator>
        
        <creator>Med Salim Bouhlel</creator>
        
        <subject>Tai Chi; COVID-19; visual impaired; physical exercise; exergaming; audio feedback; Kinect; body tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>People who are blind or have low vision need to practice activities for their mental and physical health to minimize the risk of suffering from joint pain, but they face problems due to the difficulty and inaccessibility of moving around, especially during the COVID-19 pandemic, when everyone in the world was asked to stay at home during confinement. To solve these problems, we have developed a software tool for a Tai Chi care exergame that encourages them to practice exercise at home using body tracking by the Microsoft Kinect V2 and audio feedback. This software acts as a Tai Chi treatment, teaches four poses, offers customized audio feedback to help the person understand each pose, and generates progress graphs to evaluate the success of these exercises. We used the SDK libraries of the Kinect to obtain 3D joint positions from the Kinect’s sensors and to calculate the angles and distances between joints, in order to help the person position themselves in front of the Kinect, to evaluate the different flexion and extension gestures of the knees and elbows in each exercise, and to assess body balance direction to avoid the risk of falling. These exercises have been evaluated with persons who are blind or have low vision to improve feasibility and feedback.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_20-Tai_Chi_Care_An_Exergaming_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwriting Recognition using Artificial Intelligence Neural Network and Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110719</link>
        <id>10.14569/IJACSA.2020.0110719</id>
        <doi>10.14569/IJACSA.2020.0110719</doi>
        <lastModDate>2020-07-31T11:34:58.2600000+00:00</lastModDate>
        
        <creator>Sara Aqab</creator>
        
        <creator>Muhammad Usman Tariq</creator>
        
        <subject>Support vector machine; neural network; artificial intelligence; handwriting processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Due to the increased usage of digital technologies in all sectors and in almost all day-to-day activities to store and pass information, handwriting character recognition has become a popular subject of research. Handwriting remains relevant, but people still want to have handwritten copies converted into electronic copies that can be communicated and stored electronically. Handwriting character recognition refers to the computer’s ability to detect and interpret intelligible handwriting input from sources such as touch screens, photographs, paper documents, and other sources. Handwritten characters remain complex to recognize since different individuals have different handwriting styles. This paper reports the development of a handwriting character recognition system that will be used to read students’ and lecturers’ handwritten notes. The development is based on an artificial neural network, which is a field of study in artificial intelligence. Different techniques and methods are used to develop handwriting character recognition systems; however, few of them focus on neural networks. The use of neural networks for recognizing handwritten characters is more efficient and robust compared with other computing techniques. The paper also outlines the methodology, design, and architecture of the handwriting character recognition system, along with the testing and results of the system development. The aim is to demonstrate the effectiveness of neural networks for handwriting character recognition.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_19-Handwriting_Recognition_using_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Pre-Class Learning Content Approach for the Implementation of Flipped Classrooms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110718</link>
        <id>10.14569/IJACSA.2020.0110718</id>
        <doi>10.14569/IJACSA.2020.0110718</doi>
        <lastModDate>2020-07-31T11:34:58.2470000+00:00</lastModDate>
        
        <creator>Soly Mathew Biju</creator>
        
        <creator>Ayodeji Olalekan Salau</creator>
        
        <creator>Joy Nnenna Eneh</creator>
        
        <creator>Vincent Egoigwe Sochima</creator>
        
        <creator>Izuchukwu ThankGod Ozue</creator>
        
        <subject>Flipped classroom; active learning; online videos; student-centered approach; increased interaction; pre-class content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The nascent recognition of computing in curricula across countries is accompanied by several pedagogic inefficiencies, especially concerning the insufficient time available for teacher-student interaction. In this paper, the flipped classroom concept is identified as an effective approach to teaching students at various levels in academia, including Higher Education. Preparing the pre-class content, and considering the format used to deliver it, has not gained much consideration. There are several ways in which this content can be provided to students to prepare them for an in-class activity in which a flipped-classroom approach is implemented. The present study analyzed the success of the flipped classroom concept based on a comparative analysis of two types of flipped classroom pre-class content delivery methods: online videos and online PowerPoint slides. Evaluation was performed using a paired T-test. The results show that the two approaches have significantly different means, with large differences between them. The students preferred online videos to online PowerPoint (ppt) slides, underlining the importance of the proposed flipped classroom approach.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_18-A_Novel_Pre_Class_Learning_Content_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Two-Tier ATM Security Mechanism: Towards Providing a Real-Time Solution for Network Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110717</link>
        <id>10.14569/IJACSA.2020.0110717</id>
        <doi>10.14569/IJACSA.2020.0110717</doi>
        <lastModDate>2020-07-31T11:34:58.2130000+00:00</lastModDate>
        
        <creator>Syed Anas Ansar</creator>
        
        <creator>Satish Kumar</creator>
        
        <creator>Mohd. Waris Khan</creator>
        
        <creator>Amitabha Yadav</creator>
        
        <creator>Raees Ahmad Khan</creator>
        
        <subject>ATM fraud; security; Unique Identifier (UID); shoulder surfing; shimming; trapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In the current scenario, the crime rate with respect to Automatic Teller Machines (ATMs) has increased tremendously. During the last few years, criminals have become more sophisticated and have paid more attention to ATMs. The majority of ATMs in India work on a single authentication technique. Attacks such as skimming, shimming, card cloning, card swapping, and shoulder surfing succeed because of the minimal authentication used in ATMs, so concern about ATM security has reached its peak. Nowadays, banks have moved towards two-tier authentication. Recently in India, some banks have adopted the One Time Password (OTP) mechanism along with a UID number to perform transactions at ATMs. In such a case, the dependency on the cellular network for the OTP is also a significant concern. To overcome these issues, the researchers propose a two-tier authentication mechanism. The paper addresses the recent problems and their solution with the help of a two-way authentication method. To resolve the network issue, the researchers also propose a novel technique, i.e., a Security Question-based verification mechanism.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_17-Enhancement_of_Two_Tier_ATM_Security_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Convolutional Neural Network for Paddy Leaf Disease and Pest Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110716</link>
        <id>10.14569/IJACSA.2020.0110716</id>
        <doi>10.14569/IJACSA.2020.0110716</doi>
        <lastModDate>2020-07-31T11:34:58.2000000+00:00</lastModDate>
        
        <creator>Norhalina Senan</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Rosziati Ibrahim</creator>
        
        <creator>N. S. A. M Taujuddin</creator>
        
        <creator>W.H.N Wan Muda</creator>
        
        <subject>Convolutional neural network; image classification; paddy classification; paddy disease and pest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Improving the quality and quantity of paddy production is very important, since rice is the most consumed staple food for billions of people around the world. Early detection of paddy diseases and pests at different stages of growth is crucial in paddy production. However, the current manual method of detecting and classifying paddy diseases and pests requires a very knowledgeable farmer and is time consuming. Thus, this study utilizes effective image processing and machine learning techniques to detect and classify paddy diseases and pests more accurately and with less processing time. For this study, 3,355 paddy images comprising four classes (healthy, brown spot, leaf blast, and hispa) were used. The proposed five-layer CNN technique is then used to classify the images. The results show that the proposed CNN outperforms other state-of-the-art comparative models, achieving an accuracy of up to 93%.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_16-An_Efficient_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel ASCII Code-based Polybius Square Alphabet Sequencer as Enhanced Cryptographic Cipher for Cyber Security Protection (APSAlpS-3CS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110715</link>
        <id>10.14569/IJACSA.2020.0110715</id>
        <doi>10.14569/IJACSA.2020.0110715</doi>
        <lastModDate>2020-07-31T11:34:58.1670000+00:00</lastModDate>
        
        <creator>Jan Carlo T. Arroyo</creator>
        
        <creator>Ariel Roy L. Reyes</creator>
        
        <creator>Allemar Jhone P. Delima</creator>
        
        <subject>Cryptography; ciphers; ciphertext; modified polybius cipher; plaintext</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>For all industries, cybersecurity is regarded as one of the major areas of concern that needs to be addressed. Data and information in all forms should be safeguarded to avoid information leakage, data theft, and robbery by intruders and hackers. This paper proposes a modification of the traditional 5x5 Polybius square in cryptography through dynamically generated matrices. The modification is done by shifting cell elements for every encrypted character using a secret key and its ASCII decimal code equivalents. The results of the study revealed that the modified Polybius cipher offers a more secure plaintext-ciphertext conversion and is difficult to break, as evident in the frequency analysis. In the proposed method, each element produced in the digraphs exhibits a wider range of possible values. However, with the additional processing in encryption and decryption, the modified Polybius cipher incurred a longer execution time of 0.0031 ms, identified as its tradeoff; the unmodified Polybius cipher obtained an execution time of 0.0005 ms. Future researchers may address the execution time tradeoff of the modified Polybius cipher.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_15-A_Novel_ASCII_Code_based_Polybius_Square_Alphabet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enterprise Architecture “As-Is” Analysis for Competitive Advantage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110714</link>
        <id>10.14569/IJACSA.2020.0110714</id>
        <doi>10.14569/IJACSA.2020.0110714</doi>
        <lastModDate>2020-07-31T11:34:58.1530000+00:00</lastModDate>
        
        <creator>Eithar Mohamed Mahmoud Nasef</creator>
        
        <creator>Nur Azaliah Abu Bakar</creator>
        
        <subject>Enterprise architecture; internet service provider; competitive advantage; “As-Is” analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>In the telecommunication market, it is essential to ensure that an internet service provider's infrastructure and resources can adapt and grow while providing the best quality of data services and offering the best packages to customers. An internet service provider must remain competitive and agile so that it can bring better products and services to market promptly. At iiNET, raising awareness of how the business processes run and of all the existing technology within the organisation, through an enterprise-wide understanding and view, is vital to ensuring its adaptability and growth in the telecom industry. This paper discusses the challenges which iiNET is currently facing and how an enterprise architecture (EA) solution is proposed to provide iiNET with the strategic advantage it needs to overcome those challenges. The existing EA frameworks are discussed and analysed to select the best fit for iiNET’s EA solution. Finally, the “As-Is” architecture at iiNET is explained as the findings of this EA implementation phase.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_14-Enterprise_Architecture_As_Is_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Barley Quality Estimation Method with UAV Mounted NIR Camera Data based on Regressive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110713</link>
        <id>10.14569/IJACSA.2020.0110713</id>
        <doi>10.14569/IJACSA.2020.0110713</doi>
        <lastModDate>2020-07-31T11:34:58.1200000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Eisuke Kisu</creator>
        
        <creator>Kazuhiro Nagafuchi</creator>
        
        <subject>Unmanned Aerial Vehicle: UAV; Near Infrared: NIR camera; Daishimochi; anthocyanin; β-glucan and water contents; barley quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>A barley quality estimation method using Unmanned Aerial Vehicle: UAV mounted Near Infrared: NIR camera data, based on regression analysis, is proposed. The proposed method makes it possible to predict barley quality, namely the anthocyanin, β-glucan and water contents, in harvested “Daishimochi” barley grains before the harvest. This is the first such prediction attempt in the world. Through experiments, it is found that water content (%), anthocyanin content (mg Cy3G/100 g), anthocyanin content corresponding to dry matter (mg Cy3G/100 g), and barley β-glucan (%) can be predicted before the harvest with a high R2 value (more than 0.99). Therefore, farmers can control fertilizer and water supply to improve the quality of Daishimochi barley grain.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_13-Barley_Quality_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Messages: A Computer-Mediated Discourse Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110711</link>
        <id>10.14569/IJACSA.2020.0110711</id>
        <doi>10.14569/IJACSA.2020.0110711</doi>
        <lastModDate>2020-07-31T11:34:58.1070000+00:00</lastModDate>
        
        <creator>Waheed M. A. Altohami</creator>
        
        <subject>Text messages; computer-mediated communication; discourse; technological affordances</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This study explores the discourse of text messages from a microlinguistic perspective by means of concordance analysis. It aims at identifying the dominant phonological, lexical and grammatical features that mark texting as a peculiar asynchronous mode of computer-mediated communication. It also investigates how technology reshapes texters’ linguistic habits when spatio-temporal constraints are imposed. The study goes beyond the description of linguistic features, taking at its core the explanation of the functions performed by each of these features. Findings showed that most of the phonological, lexical and grammatical features of the discourse of text messages were consciously employed to save space and to speed up communication. Furthermore, the study demonstrated that though the discourse of text messages is space-bound and visually decontextualized, it proved to be cohesive, adaptable and interactive in performing common language functions such as greetings, expressing attitudes, congratulating, showing involvement, asking for information and demonstrating social solidarity. Finally, based on textual evidence, findings showed that texters created a set of orthographical surrogates to compensate for the absence of verbal and para-verbal cues arising from specific technological affordances.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_11-Text_Messages_A_Computer_Mediated_Discourse_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flood Damage Area Detection Method by Means of Coherency Derived from Interferometric SAR Analysis with Sentinel-A SAR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110712</link>
        <id>10.14569/IJACSA.2020.0110712</id>
        <doi>10.14569/IJACSA.2020.0110712</doi>
        <lastModDate>2020-07-31T11:34:58.1070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Shogo Kajiki</creator>
        
        <subject>Flooding area detection; Synthetic Aperture Radar: SAR; interferometric SAR analysis; coherency; back scattering cross section; remote sensing satellite</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>A flood damage area detection method based on coherency derived from interferometric SAR analysis with Sentinel-A SAR is proposed. One of the key issues in flood area detection is estimating the affected area as soon as possible. However, a flooded area caused by heavy rain, a typhoon, or a severe storm is usually covered with clouds, so it is not easy to detect with optical imagers onboard remote sensing satellites. On the other hand, Synthetic Aperture Radar: SAR onboard remote sensing satellites makes it possible to observe the flooded area even in cloudy and rainy weather conditions. Usually, a flooded area shows a relatively small backscattering cross section, because the return signal from the water surface is quite small due to dielectric loss. However, the flooded area detected using the SAR return signal from the water surface is not clear enough. The proposed method therefore uses coherency derived from interferometric SAR analysis. Through experiments, it is found that the proposed method is useful for detecting the flooded area clearly.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_12-Flood_Damage_Area_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cyber-Physical Approach to Resilience and Robustness by Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110710</link>
        <id>10.14569/IJACSA.2020.0110710</id>
        <doi>10.14569/IJACSA.2020.0110710</doi>
        <lastModDate>2020-07-31T11:34:58.0730000+00:00</lastModDate>
        
        <creator>Giovanni Di Orio</creator>
        
        <creator>Guilherme Brito</creator>
        
        <creator>Pedro Malo</creator>
        
        <creator>Abhinav Sadu</creator>
        
        <creator>Nikolaus Wirtz</creator>
        
        <creator>Antonello Monti</creator>
        
        <subject>Double virtualization; critical energy infrastructures; cyber-physical systems; resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Modern critical infrastructures (e.g. Critical Energy Infrastructures) are increasingly evolving into complex and distributed networks of Cyber-Physical Systems. Although the cyber systems provide great flexibility in the operation of critical infrastructure, they also introduce additional security threats that need to be properly addressed during the design and development phase. In this landscape, resilience and robustness by design are becoming fundamental requirements. To achieve them, new approaches and technological solutions have to be developed that guarantee i) fast incident/attack detection; and ii) the adoption of proper mitigation strategies that ensure the continuity of service from the infrastructure. “Double Virtualization” has recently emerged as a potential strategy/approach to ensure the robust and resilient design and management of critical energy infrastructures based on Cyber-Physical Systems. The presented approach exploits the separation of the virtual capabilities/functionalities of a device from the physical system and/or platform used to run/execute them, while allowing the system to be dynamically (re-)configured in the presence of predicted and unpredicted incidents/accidents. Internet-based technologies are used for developing and deploying the envisioned approach.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_10-A_Cyber_Physical_Approach_to_Resilience_and_Robustness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster based Detection and Reduction Techniques to Identify Wormhole Attacks in Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110708</link>
        <id>10.14569/IJACSA.2020.0110708</id>
        <doi>10.14569/IJACSA.2020.0110708</doi>
        <lastModDate>2020-07-31T11:34:58.0600000+00:00</lastModDate>
        
        <creator>Tejaswini R Murgod</creator>
        
        <creator>S Meenakshi Sundaram</creator>
        
        <subject>Underwater communication; wormhole attack; round trip time; EEHRCP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Underwater Wireless Sensor Networks (UWSNs) are widely used in a variety of applications, but none of these applications have taken network security into consideration. Deployment of an underwater network is a challenging task, and because of the harsh underwater environment, the network is vulnerable to a large class of security attacks. Recent research on underwater communication focuses mainly on energy efficiency, network connectivity and maximum communication range. The nature of underwater sensor networks makes them more attractive to attackers. One of the most serious problems in underwater networks is the wormhole attack. In this research work we concentrate on securing the underwater network against wormhole attacks. We introduce the wormhole attack into the network and propose a solution to detect this attack in underwater wireless networks. The Energy Efficient Hybrid Optical-Acoustic Cluster Based Routing Protocol (EEHRCP) is incorporated, and using the round trip time and other characteristics of the wormhole attack, the presence of a wormhole attack in the network is identified. The simulation results depict that the proposed wormhole detection mechanism increases throughput by 26%, reduces energy consumption by 3%, reduces end-to-end delay by 13% and increases the packet delivery ratio by 3%.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_8-Cluster_based_Detection_and_Reduction_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Multi-Frequency Beam Alignment on Non-Line-of-Site Vehicle to Infrastructure Communication using CI Model (CI-NLOS-V2I)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110709</link>
        <id>10.14569/IJACSA.2020.0110709</id>
        <doi>10.14569/IJACSA.2020.0110709</doi>
        <lastModDate>2020-07-31T11:34:58.0600000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Intelligent transportation systems; autonomous vehicles; connected vehicles; mmWave; channel model; path loss; CI Model; NLOS; V2I; V2V</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The effect of beam alignment for millimeter wave (mmWave) transmission in the case of Vehicle-to-Infrastructure (V2I) communication is investigated. The investigation covered varying transmission-reception (TX-RX) distances. The effect of carrier frequency variation using different antenna angles and gains is also analyzed. The results showed convergence of path loss (PL) values regardless of angle or antenna gain (dBi). The investigation also proved that shadow fading (SF), which is related to the standard deviation (σ) and the exponent number (n), is a main contributor to the observed high path loss values in the case of misalignment. It is also noted that the path loss values decrease as a function of frequency for the same travelled distance, which is related to the exponent number. This work highlights the importance of antenna alignment, shows that V2I communication can be substantially optimized if and when auto-antenna alignment is used, and underlines the importance of multi-antenna arrays.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_9-Effect_of_Multi_Frequency_Beam_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Production Processes using BPMN and ArchiMate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110707</link>
        <id>10.14569/IJACSA.2020.0110707</id>
        <doi>10.14569/IJACSA.2020.0110707</doi>
        <lastModDate>2020-07-31T11:34:58.0270000+00:00</lastModDate>
        
        <creator>Hana Tomaskova</creator>
        
        <subject>Production processes; graphic modelling; BPMN; ArchiMate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This article aims to map and optimize production processes in graphical form using a combination of BPMN and ArchiMate syntax. In the first phase, the existing business processes of a manufacturing company in the Czech Republic were analyzed. In the second phase, optimization of the production processes was proposed. These optimizations were based on combining the two syntaxes, ArchiMate and BPMN, with the implementation of ERP systems, enabling the design to utilize more efficient modern technology. The as-is and to-be processes were documented in BPMN and ArchiMate, and a process-based simulation tool was used to quantify the effects of the process improvement.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_7-Optimization_of_Production_Processes_using_BPMN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reduction of the Humidity Contained in the Harvest Cereals by Means of High Frequency Electromagnetic Field Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110706</link>
        <id>10.14569/IJACSA.2020.0110706</id>
        <doi>10.14569/IJACSA.2020.0110706</doi>
        <lastModDate>2020-07-31T11:34:57.9800000+00:00</lastModDate>
        
        <creator>Cornelia Emilia Gordan</creator>
        
        <creator>Ioan Mircea Gordan</creator>
        
        <creator>Vasile Darie Șoproni</creator>
        
        <creator>Carmen Otilia Molnar</creator>
        
        <creator>Mircea Nicolae Arion</creator>
        
        <creator>Francisc Ioan Hathazi</creator>
        
        <subject>High frequency electromagnetic field; numerical simulation; microwave processing; oat drying; agricultural seeds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>New environmentally friendly microwave technologies represent an important concern in environmental policy. The use of microwave energy for processing different agricultural products has the advantage of a green technology that allows a uniform distribution of the electromagnetic and thermal fields within a relatively short processing time. This paper studies the microwave drying technology used in the drying process of oat seeds. Experiments were carried out for different working conditions of the equipment with respect to the applied microwave power and the obtained temperatures. A numerical model associated with the problem, solved by means of the finite element method, is used. This allows us to obtain, through simulation, the electromagnetic field distribution inside the microwave dryer. The simulations were performed in order to obtain good quality products that may be used for seeding and the food industry. The approach is flexible, so it can be applied to all cereals.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_6-Reduction_of_the_Humidity_Contained_in_the_Harvest_Cereals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Reconfiguration for Minimizing Power Loss by Moth Swarm Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110705</link>
        <id>10.14569/IJACSA.2020.0110705</id>
        <doi>10.14569/IJACSA.2020.0110705</doi>
        <lastModDate>2020-07-31T11:34:57.9630000+00:00</lastModDate>
        
        <creator>Thuan Thanh Nguyen</creator>
        
        <creator>Duong Thanh Long</creator>
        
        <subject>Network reconfiguration; moth swarm algorithm; power loss; particle swarm optimization; distribution system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>This paper presents a network reconfiguration (NR) approach for minimizing power loss in the distribution system based on the moth swarm algorithm (MSA). The MSA is a recent metaheuristic inspired by the navigational technique of moths in the dark when searching for food sources. To search for the optimal solution, MSA uses three different mechanisms for generating new solutions: Lévy flights, Gaussian walks and spiral flight. The effectiveness of MSA is validated on two distribution systems, the 33-node and 69-node systems. The simulation results are compared to particle swarm optimization and other approaches available in the literature. The calculated results on the test systems show that MSA can be an effective and reliable tool for NR problems.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_5-Network_Reconfiguration_for_Minimizing_Power_Loss.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Web-based Support Systems for Predicting Poor-performing Students using Educational Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110704</link>
        <id>10.14569/IJACSA.2020.0110704</id>
        <doi>10.14569/IJACSA.2020.0110704</doi>
        <lastModDate>2020-07-31T11:34:57.9330000+00:00</lastModDate>
        
        <creator>Phauk Sokkhey</creator>
        
        <creator>Takeo Okazaki</creator>
        
        <subject>Academic performance prediction systems; educational data mining; dominant factors; feature selection methods; prediction models; student performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>The primary goal of educational systems is to enrich the quality of education by maximizing the best results and minimizing the failure rate of poor-performing students. Early prediction of student performance has become a challenging task for the improvement and development of academic performance. Educational data mining is an effective discipline of data mining concerned with information integrated into the education domain. The aim of this work is to propose educational data mining techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models was conducted. Subsequently, high-performing models were developed to achieve better performance. A hybrid random forest, named Hybrid RF, produces the most successful classification. For the context of intervention and improving learning outcomes, a novel feature selection method named MICHI, which combines the mutual information and chi-square algorithms based on ranked feature scores, is introduced to select a dominant feature set and improve the performance of the prediction models. Using the proposed educational data mining techniques, an academic performance prediction system is subsequently developed to give educational stakeholders an early prediction of student learning outcomes for timely intervention. Experimental results and evaluation surveys report the effectiveness and usefulness of the developed academic prediction system. The system is used to help educational stakeholders intervene and improve student performance.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_4-Developing_Web_based_Support_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Porting X Windows System to Operating System Compliant with Portable Operating System Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110703</link>
        <id>10.14569/IJACSA.2020.0110703</id>
        <doi>10.14569/IJACSA.2020.0110703</doi>
        <lastModDate>2020-07-31T11:34:57.9170000+00:00</lastModDate>
        
        <creator>Andrey V Zhadchenko</creator>
        
        <creator>Kirill A. Mamrosenko</creator>
        
        <creator>Alexander M. Giatsintov</creator>
        
        <subject>X Window System; X11; X.Org Server; Xorg; XFree86; Portable Operating System Interface (POSIX); graphics; Realtime Operating System (RTOS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Nowadays, a graphical interface is very important for any operating system, even embedded ones. Adopting existing solutions is much easier than developing your own; moreover, a lot of software may be reused in this case. This article is devoted to the adaptation of the X Window System for Baget, a real-time operating system compliant with the Portable Operating System Interface (POSIX). Many of the problems encountered stem from the tight coupling between X and Linux, so similar issues can be expected when using X on non-Linux systems. The problems discussed include, but are not limited to, the absence of dlopen, irregular file paths, and specific device drivers. Instructions and recommendations to solve these issues are given. A comparison between the XFree86 and Xorg implementations of X is discussed. Although synthetic tests show Xorg&#8217;s performance superiority, XFree86 consumes fewer system resources and is easier to port.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_3-Porting_X_Windows_System_to_Operating_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>“Dr.J”: An Artificial Intelligence Powered Ultrasonography Breast Cancer Preliminary Screening Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110702</link>
        <id>10.14569/IJACSA.2020.0110702</id>
        <doi>10.14569/IJACSA.2020.0110702</doi>
        <lastModDate>2020-07-31T11:34:57.8870000+00:00</lastModDate>
        
        <creator>Zhenzhong Zhou</creator>
        
        <creator>Xueqin Xie</creator>
        
        <creator>Alex L. Zhou</creator>
        
        <creator>Zongjin Yang</creator>
        
        <creator>Muhammad Nabeel</creator>
        
        <creator>Yongjie Deng</creator>
        
        <creator>Zhongxiong Feng</creator>
        
        <creator>Xiaoling Zheng</creator>
        
        <creator>Zhiwen Fang</creator>
        
        <subject>Breast cancer preliminary screening; lesions detection; ultrasonography; artificial intelligence; deep learning; cloud computing; BI-RADS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>Breast cancer has the highest incidence rate among all malignant tumors in women globally. Early detection through regular preliminary screening is critical to decreasing breast cancer&#8217;s fatality rate. However, the promotion of preliminary screening faces major limitations in human diagnostic capacity, cost, and technical reliability in China and most of the world. To meet these challenges, we developed a solution featuring an innovative division-of-labor model by incorporating artificial intelligence (AI) with ultrasonography and cloud computing. The objective of this research was to develop a solution named “Dr.J”, which applies AI to process the real-time video feed from ultrasonography, a modality that is physically safe and more suitable for Asian women. It can automatically detect and highlight suspected breast cancer lesions and provide BI-RADS (Breast Imaging-Reporting and Data System) ratings to assist human diagnosis. “Dr.J” does not require its frontline operators to have a prior medical or IT background and thus significantly lowers the manpower threshold for promoting preliminary screening. Furthermore, its cloud computing platform can store detailed breast cancer data such as images and BI-RADS ratings for further essential needs in medical treatment, research, and health management, as well as for establishing a hierarchical medical service network for this disease. Therefore, “Dr.J” significantly enhances the availability and accessibility of preliminary breast cancer screening at the grassroots level.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_2-Dr_J_An_Artificial_Intelligence_Powered_Ultrasonography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dependency Evaluation and Visualization Tool for Systems Represented by a Directed Acyclic Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110701</link>
        <id>10.14569/IJACSA.2020.0110701</id>
        <doi>10.14569/IJACSA.2020.0110701</doi>
        <lastModDate>2020-07-31T11:34:57.8230000+00:00</lastModDate>
        
        <creator>Sobitha Samaranayake</creator>
        
        <creator>Athula Gunawardena</creator>
        
        <subject>Data visualization; degree planning; dynamic flowchart; prerequisite structure; adjacency matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(7), 2020</description>
        <description>There is a dearth of data visualization tools for displaying college degree-planning information, especially course prerequisite and complex academic requirement information. The existing methods for exploring degree plans involve a painstaking what-if analysis of static data presented in a convoluted format. In this paper, we present a data visualization tool, named the Dependency Evaluation and Visualization (DEV) chart, to visualize the course prerequisite structure, and a dynamic flowchart to guide students and advisors through all possible degree requirement completions. The DEV chart uses an adjacency matrix of a directed acyclic graph to store the course structure for a degree in a database. Since the DEV chart is created dynamically by updating the data associated with each node of the directed graph, it provides a mechanism for adding an alert system when prerequisite conditions are not met, so the user can visualize the available courses at each step. Similarly, the DEV chart can be used in project planning, where nodes represent tasks and edges represent their dependencies.</description>
        <description>http://thesai.org/Downloads/Volume11No7/Paper_1-Dependency_Evaluation_and_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Virtual Machine Positioning and Consolidation Strategies for Energy Efficiency in Cloud Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110687</link>
        <id>10.14569/IJACSA.2020.0110687</id>
        <doi>10.14569/IJACSA.2020.0110687</doi>
        <lastModDate>2020-07-02T16:22:04.9500000+00:00</lastModDate>
        
        <creator>Nahuru Ado Sabongari</creator>
        
        <creator>Abdulsalam Ya’u Gital</creator>
        
        <creator>Souley Boukari</creator>
        
        <creator>Badamasi Ja’afaru</creator>
        
        <creator>Muhammad Auwal Ahmed</creator>
        
        <creator>Haruna Chiroma</creator>
        
        <subject>Energy efficiency; optimization; cloud data centers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Cloud data centers consume increasingly massive amounts of energy, which is considered unacceptable. Therefore, further efforts are needed to improve the energy efficiency of such data centers by using server consolidation to minimize the number of Active Physical Machines (APMs) in a data center. Strategies for the positioning and transformation of VMs remain useful as a roadmap to maximum consolidation. The latest techniques perform complex restructuring, thus optimizing VM positioning. This paper provides a detailed review of state-of-the-art VM positioning and consolidation strategies that help improve energy efficiency in cloud data centers. A comparison between the strategies is provided, revealing their worth and limitations, along with suggestions for strengthening other methods.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_87-A_Review_on_Virtual_Machine_Positioning_and_Consolidation_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Practice of Human Resource Information System in Organizations: A Hybrid Approach of AHP and DEMATEL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110686</link>
        <id>10.14569/IJACSA.2020.0110686</id>
        <doi>10.14569/IJACSA.2020.0110686</doi>
        <lastModDate>2020-07-02T16:22:04.9200000+00:00</lastModDate>
        
        <creator>Abdul Kadar Muhammad Masum</creator>
        
        <creator>Faisal Bin Abid</creator>
        
        <creator>ABM Yasir Arafat</creator>
        
        <creator>Loo-See Beh</creator>
        
        <subject>Analytic Hierarchy Processes (AHP); Decision Making Trial and Evaluation Laboratory (DEMATEL); factor; Human Resource Information System (HRIS); Multi-Criteria Decision Making (MCDM) Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This paper blends the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model to identify the factors that influence the administrative choice to adopt a human resource information system (HRIS) in organizations. A hybrid Multi-Criteria Decision Making (MCDM) model combining the Decision Making Trial and Evaluation Laboratory (DEMATEL) and Analytic Hierarchy Processes (AHP) is used to achieve the objective of the study. In this study, the experts agree that the staff&#8217;s IT skill is more significant than the other factors in the Human dimension. Similarly, IT infrastructure, top-level support, and competitive pressure are the most vital factors for the Technology, Organization, and Environment dimensions, respectively. Moreover, this paper will help managers attend to factors that are vital for HRIS implementation in organizations.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_86-Factors_Influencing_Practice_of_Human_Resource_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security of a New Hybrid Ciphering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110685</link>
        <id>10.14569/IJACSA.2020.0110685</id>
        <doi>10.14569/IJACSA.2020.0110685</doi>
        <lastModDate>2020-07-02T16:22:04.8400000+00:00</lastModDate>
        
        <creator>Mohammed BOUGRINE</creator>
        
        <creator>Fouzia OMARY</creator>
        
        <creator>Salima TRICHNI</creator>
        
        <subject>Security; confidentiality; hybrid encryption; evolutionary algorithms; symmetrical encryption; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The protection of privacy is a very sensitive subject and comes into force in all areas. It represents the first priority in the development of new technologies. In fact, opting for a new Big Data or IoT technology is a very difficult decision for organizations and calls into question the confidentiality, integrity, authenticity, and non-repudiation of their data. Convincing these organizations to adhere to technological intelligence amounts to providing them with powerful security tools and mechanisms that are resistant to new types of vulnerability. However, the problem today is that most security tools are based on old cryptographic primitives. Certainly, these primitives have proven their resistance until today, but the need for new ones has become crucial in order to meet new technological requirements. In this paper, we propose a new hybrid encryption alternative based on two encryption systems: the first is an evolutionary encryption system and the second is based on an asymmetric encryption system. To present this work, we begin with a description of our evolutionary cipher system. Then, we present the principle of the proposed hybridization and its contribution compared to other existing systems. Finally, we perform a detailed study of the security of this system and its long-term resistance.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_85-Security_of_a_New_Hybrid_Ciphering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ParaCom: An IoT based Affordable Solution Enabling People with Limited Mobility to Interact with Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110684</link>
        <id>10.14569/IJACSA.2020.0110684</id>
        <doi>10.14569/IJACSA.2020.0110684</doi>
        <lastModDate>2020-06-30T14:58:38.3930000+00:00</lastModDate>
        
        <creator>Siddharth Sekar</creator>
        
        <creator>Nirmit Agarwal</creator>
        
        <creator>Vedant Bapodra</creator>
        
        <subject>Internet of Things (IoT); Motor Neuron Disease (MND); Amyotrophic Lateral Sclerosis (ALS); Arduino</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>There are many people in this world who do not have the ability to communicate with others due to some unforeseen accident. This work aims to help users who are paralyzed and/or suffering from different Motor Neuron Diseases (MND), such as Amyotrophic Lateral Sclerosis (ALS) and Primary Lateral Sclerosis, by making them more independent. Patients suffering from these diseases are not able to move their arms and legs, lose their body balance, and lose the ability to speak. Here we propose an IoT-based communication controller using Morse code technology, which controls the user&#8217;s smartphone. This paper proposes a solution that gives the user the ability to communicate with other people using a machine as an intermediary. The device requires minimal input from the user.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_84-ParaCom_an_IoT_based_Affordable_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Building Change Detection on Aerial Images using Convolutional Neural Networks and Handcrafted Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110683</link>
        <id>10.14569/IJACSA.2020.0110683</id>
        <doi>10.14569/IJACSA.2020.0110683</doi>
        <lastModDate>2020-06-30T14:58:38.3800000+00:00</lastModDate>
        
        <creator>Diego Alonso Javier Quispe</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Bi-temporal images; convolutional neural network (CNN); building detection; building change detection; Mask R-CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In this article, we present a new framework for the task of building change detection, making use of a convolutional neural network (CNN) for the building detection step and a set of handcrafted features for the change detection step. The buildings are extracted using Mask R-CNN, a neural network for object-based instance segmentation that has been tested in different case studies to segment different types of objects, obtaining good results. The buildings are detected in bitemporal images, where three different comparison metrics, MSE, PSNR, and SSIM, are used to determine whether buildings have changed; we apply these metrics to the Hue, Saturation, and Brightness representation of the image. Finally, the features are classified by two algorithms, Support Vector Machine and Random Forest, so that both results can be compared. The experiments were performed on a large dataset, the WHU building dataset, which contains very high-resolution (VHR) aerial images. The results obtained are comparable to those of the state of the art.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_83-Automatic_Building_Change_Detection_on_Aerial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Road Damage Detection Utilizing Convolution Neural Network and Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110682</link>
        <id>10.14569/IJACSA.2020.0110682</id>
        <doi>10.14569/IJACSA.2020.0110682</doi>
        <lastModDate>2020-06-30T14:58:38.3470000+00:00</lastModDate>
        
        <creator>Elizabeth Endri</creator>
        
        <creator>Alaa Sheta</creator>
        
        <creator>Hamza Turabieh</creator>
        
        <subject>Pavement crack; Convolutional Neural Network (CNN); Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Roads should always be in a reliable condition and maintained regularly. One of the problems that must be handled well is pavement cracking. This is a challenging problem facing road engineers, since maintaining roads in a stable condition is necessary for both drivers and pedestrians. Many methods have been proposed to handle this problem to save time and cost. In this paper, we propose a two-stage method to detect pavement cracks based on Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN) to solve this classification problem. We employed PCA to extract the most significant features with different numbers of PCA components. The proposed approach was trained using the Mendeley Asphalt Crack dataset, which contains 400 images of road cracks at a 480&#215;480 resolution. The obtained results show how PCA helped speed up the learning process of the CNN.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_82-Road_Damage_Detection_Utilizing_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Number of Hospital Appointments When No Data Is Available</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110681</link>
        <id>10.14569/IJACSA.2020.0110681</id>
        <doi>10.14569/IJACSA.2020.0110681</doi>
        <lastModDate>2020-06-30T14:58:38.3300000+00:00</lastModDate>
        
        <creator>Harold Caceres</creator>
        
        <creator>Nelson Fuentes</creator>
        
        <creator>Julio Aguilar</creator>
        
        <creator>Cesar Baluarte</creator>
        
        <creator>Karim Guevara</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Omar U. Florez</creator>
        
        <subject>Multiple linear regression; hospital appointments; machine learning; correlation matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Usually, in a hospital, the data generated by each department or section is treated in isolation, under the belief that there is no relationship between departments: high demand in one department is assumed to have no influence on whether another has the same demand or none at all. In this paper, we question this approach by considering departmental information as components of one large hospital system. Thus, we present an algorithm to predict the appointments of a department when its data is not available, using data from other departments. This algorithm uses a model based on multiple linear regression with a correlation matrix to measure the relationship between departments over different time windows. After running our algorithm for different time windows and departments, we experimentally find that as we increase the extension of a time window and learn dependencies in the data, the corresponding precision decreases. Indeed, a month of data is the minimum sweet spot to leverage information from other departments and still provide accurate predictions. These results are important for developing per-department health policies under limited data, an interesting problem that we plan to investigate in future work.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_81-Predicting_Number_of_Hospital_Appointments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Document Classification Method based on Graphs and Concepts of Non-rigid 3D Models Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110680</link>
        <id>10.14569/IJACSA.2020.0110680</id>
        <doi>10.14569/IJACSA.2020.0110680</doi>
        <lastModDate>2020-06-30T14:58:38.3170000+00:00</lastModDate>
        
        <creator>Lorena Castillo Galdos</creator>
        
        <creator>Cristian Lopez Del Alamo</creator>
        
        <creator>Grimaldo Dávila Guillén</creator>
        
        <subject>Document classification; graphs; non-rigid 3D models; Universidad Nacional de San Agustín de Arequipa (UNSA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Text document classification is an important research topic in the field of information retrieval, and so is how we represent the information extracted from the documents to be classified. There exist document classification methods and techniques based on the vector space model, which does not capture the relations between words, relations considered important for making better comparisons and therefore better classifications. For this reason, two significant contributions are made. The first is the way the feature vector for document comparison is created, using adapted concepts from non-rigid 3D model comparison and graphs as the data structure to represent documents. The second contribution is the classification method itself, which uses the representative feature vectors of each category to classify new documents.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_80-Document_Classification_Method_based_on_Graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Recognition of Sincere Apologies from Acoustics of Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110679</link>
        <id>10.14569/IJACSA.2020.0110679</id>
        <doi>10.14569/IJACSA.2020.0110679</doi>
        <lastModDate>2020-06-30T14:58:38.2830000+00:00</lastModDate>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Muhammad Shehram Shah</creator>
        
        <creator>Abbas Shah Syed</creator>
        
        <subject>Sincerity; affective computing; social signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Sincerity is an important characteristic of communicative behavior which represents an honest, truthful, and genuine display of verbal and non-verbal expressions. Individuals who are deemed sincere often appear more charismatic and can influence a large number of people. In this paper, we propose a multi-model fusion framework to identify sincerely delivered apologies by modelling the difference between the acoustics of sincere and insincere utterances. The efficacy of this framework is benchmarked using the Sincere Apology Corpus (SAC). We show that our proposed methods improve the baseline classification performance (in terms of unweighted average recall) on SAC from 66.02% to 70.97% for the validation partition and from 66.61% to 75.49% for the test partition. Moreover, as part of our investigation, we found that gender dependency can influence the classification performance of machine learning models, with models trained for male subjects performing better than those trained for female subjects.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_79-Automated_Recognition_of_Sincere_Apologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acoustic Frequency Optimization for Underwater Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110678</link>
        <id>10.14569/IJACSA.2020.0110678</id>
        <doi>10.14569/IJACSA.2020.0110678</doi>
        <lastModDate>2020-06-30T14:58:38.2830000+00:00</lastModDate>
        
        <creator>Emad Felemban</creator>
        
        <subject>Underwater Wireless Sensor Network (UWSN); acoustic signal; mathematical modeling; optimization; noise level; optimal frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In recent years, research on Underwater Wireless Sensor Networks (UWSN) has been of interest to many research groups, as UWSNs can be used for many important applications such as disaster management, marine environment monitoring, fish farming, and military surveillance. There are many challenges in underwater acoustic communication: strong signal attenuation, limited bandwidth, long propagation delay, high transmission loss, and energy consumption. In this paper, we present a simple flow of mathematical models for the underwater acoustic communication channel. We also investigate the influence of different parameters governing the communication channel’s performance, such as temperature and wind speed, and show the importance of selecting the optimal communication frequency to increase the communication SNR. We implemented the mathematical model in MATLAB and made it available online for other researchers. We found that selecting the optimal frequency is crucial when wind speed is high.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_78-Acoustic_Frequency_Optimization_for_Underwater_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>News Aggregator and Efficient Summarization System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110677</link>
        <id>10.14569/IJACSA.2020.0110677</id>
        <doi>10.14569/IJACSA.2020.0110677</doi>
        <lastModDate>2020-06-30T14:58:38.2700000+00:00</lastModDate>
        
        <creator>Alaa Mohamed</creator>
        
        <creator>Marwan Ibrahim</creator>
        
        <creator>Mayar Yasser</creator>
        
        <creator>Mohamed Ayman</creator>
        
        <creator>Menna Gamil</creator>
        
        <creator>Walaa Hassan</creator>
        
        <subject>News aggregator; text summarization; text enhancement; TextRank algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>A news aggregator is simply online software that collects news stories and events from around the world from various sources, all in one place. A news aggregator plays a very important role in reducing time consumption, as all of the news that would otherwise be explored through more than one website is placed in a single location. Also, summarizing this aggregated content will save readers’ time. The proposed technique uses the TextRank algorithm, which has shown promising results for summarization. This paper presents the main goal of this project, which is developing a news aggregator able to aggregate relevant articles for a given input keyword or key-phrase, and to summarize the relevant articles after enhancing the text, giving the reader an understandable and efficient summary.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_77-News_Aggregator_and_Efficient_Summarization_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Future of the Internet of Things Emerging with Blockchain and Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110676</link>
        <id>10.14569/IJACSA.2020.0110676</id>
        <doi>10.14569/IJACSA.2020.0110676</doi>
        <lastModDate>2020-06-30T14:58:38.2530000+00:00</lastModDate>
        
        <creator>Mir Hassan</creator>
        
        <creator>Chen Jincai</creator>
        
        <creator>Adnan Iftekhar</creator>
        
        <creator>Xiaohui Cui</creator>
        
        <subject>Internet of Things (IoT); blockchain; smart contracts; peer-to-peer security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The Internet of Things (IoT) has the potential to change the way the world works, from home automation to smart cities, from improved healthcare to efficient management systems in supply chains and the Industry 4.0 revolution. IoT is increasingly becoming an essential part of home and industrial automation; nevertheless, there are still many challenges that need to be fixed. IoT solutions are costly and complicated, while issues regarding security and privacy must be addressed with a sustainable plan. To support the growing number of connected devices, the IoT is in dire need of a reboot, and Blockchain technology might be the answer. Starting as a decentralized financial solution in the form of Bitcoin, Blockchain technology has expanded to diverse areas and Information Technology applications. Blockchain technology and Smart Contracts can address the outstanding security and privacy issues that impede further development of the IoT. Blockchain is a decentralized system with no central governance that facilitates interactions, promotes new and improved transaction models, and allows autonomous coordination of devices using enhanced encryption techniques. The primary purpose of this paper is to showcase the challenges and problems we face with current Internet of Things solutions and to analyze how the use of Blockchain and Smart Contracts can help achieve a new, more robust Internet of Things system. Finally, we examine some of the many projects using the Internet of Things together with Blockchain and Smart Contracts to create new solutions that are only possible by integrating these technologies.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_76-Future_of_the_Internet_of_Things_Emerging_with_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensing of Environmental Variables for the Analysis of Indoor Air Pollution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110675</link>
        <id>10.14569/IJACSA.2020.0110675</id>
        <doi>10.14569/IJACSA.2020.0110675</doi>
        <lastModDate>2020-06-30T14:58:38.2230000+00:00</lastModDate>
        
        <creator>Jaime Xilot</creator>
        
        <creator>Guillermo Molero-Castillo</creator>
        
        <creator>Edgard Benitez-Guerrero</creator>
        
        <creator>Everardo Barcenas</creator>
        
        <subject>Air pollution; Ambient Intelligence (AmI); indoor air quality; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Ambient intelligence systems try to perceive the environment and react, proactively and pervasively, to improve people's environmental conditions. A current challenge in Ambient Intelligence is trying to mitigate environmental risks that affect global public health, such as increasing air pollution. This paper presents the analysis of some environmental variables related to indoor air pollutants, such as CO, PM 2.5, PM 10, humidity, and temperature, all captured in a university environment. The environmental measurements were carried out through a wireless sensor network consisting of two nodes. The cloud computing service ThingSpeak was used as the storage medium. With this network, the presence of pollutants in the study area was detected, with concentration levels within the permitted ranges, as well as their correlation with the atmospheric variables of temperature and humidity. The implementation of the sensor network allowed the capture of data in a transparent and non-intrusive way, and the analysis allowed an understanding of the behavior of pollutants in indoor spaces, where air circulation is limited and where high levels of pollution can be harmful to human health.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_75-Sensing_of_Environmental_Variables.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating the Causes of Poor Academic Performance of Students: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110674</link>
        <id>10.14569/IJACSA.2020.0110674</id>
        <doi>10.14569/IJACSA.2020.0110674</doi>
        <lastModDate>2020-06-30T14:58:38.2070000+00:00</lastModDate>
        
        <creator>Juwesh Binong</creator>
        
        <subject>Academic performance; qualitative rating; factors; correlation coefficient; analytic hierarchy process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Poor academic performance of students is not only a concern for parents and teachers, but also for the country as a whole. This paper attempts to identify the cause(s) of poor academic achievement and presents a method for identifying the most influential factor in academic performance. The proposed method is capable of using qualitative ratings as input for the factors considered, finds the correlation of each factor with academic performance, and finally ranks the influence of the factors on performance to sort out the most influential one. The study was carried out on the academic performance of 189 B.Tech students over five academic semesters. The results indicate the degree of influence of various factors on performance, with the most influential being the academic ability of the students.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_74-Estimating_the_Causes_of_Poor_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capsule Network for Cyberthreat Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110673</link>
        <id>10.14569/IJACSA.2020.0110673</id>
        <doi>10.14569/IJACSA.2020.0110673</doi>
        <lastModDate>2020-06-30T14:58:38.1900000+00:00</lastModDate>
        
        <creator>Sahar Altalhi</creator>
        
        <creator>Maysoon Abulkhair</creator>
        
        <creator>Entisar Alkayal</creator>
        
        <subject>Capsule network; dynamic routing; deep learning; Twitter; text analysis; attack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In cybersecurity, analyzing social network data has become an essential research area due to its property of providing real-time updates about real-world events. Studies have shown that Twitter can contain information about security threats before some specialized sites. Thus, the classification of tweets into security-related and not security-related can help with early warnings for such attacks. In this study, the use of a capsule network (CapsNet), the new deep learning algorithm, is investigated for the first time in the field of security attack detection using Twitter. The aim was to increase the accuracy of tweet classification by using CapsNet rather than a convolutional neural network (CNN). To achieve the research objective, the original implementation of CapsNet with dynamic routing was adapted to be suitable for text analysis using a tweet data set. A random search technique was used to tune the model's hyperparameters. The experimental results showed that CapsNet exceeded the baseline CNN on the same data set, with an accuracy of 92.21% and a 92.2% F1 score; word2vec embedding also performed better than random initialization.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_73-Capsule_Network_for_Cyberthreat_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exerting 2D-Space of Sentiment Lexicons with Machine Learning Techniques: A Hybrid Approach for Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110672</link>
        <id>10.14569/IJACSA.2020.0110672</id>
        <doi>10.14569/IJACSA.2020.0110672</doi>
        <lastModDate>2020-06-30T14:58:38.1600000+00:00</lastModDate>
        
        <creator>Muhammad Yaseen Khan</creator>
        
        <creator>Khurum Nazir Junejo</creator>
        
        <subject>Hybrid approach; machine learning; sentiment analysis; sentiment lexicons; sentiment space; social media analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Sentiment mining of textual content on the web can give valuable insights for discernment, strategic decision making, targeted advertisement, and much more. Supervised machine learning (ML) approaches do not capture the sentiment inherent in individual terms, whereas unsupervised sentiment lexicon (SL) based approaches lag behind ML approaches because of a bias they have towards one sentiment over the other. In this paper, we propose a hybrid approach that uses unsupervised sentiment lexicons to transform the term space into a two-dimensional sentiment space on which a discriminative classifier is trained in a supervised fashion. This hybrid approach yields higher accuracy, faster training, and a lower memory footprint than the ML approaches, and is more suitable for scenarios where training data is scarce. We support our claim by reporting results on six social media datasets using five sentiment lexicons and four ML algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_72-Exerting_2D_Space_of_Sentiment_Lexicons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Successive Texture and Shape based Active Contours for Train Bogie Part Segmentation in Rolling Stock Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110671</link>
        <id>10.14569/IJACSA.2020.0110671</id>
        <doi>10.14569/IJACSA.2020.0110671</doi>
        <lastModDate>2020-06-30T14:58:38.1600000+00:00</lastModDate>
        
        <creator>Kaja Krishnamohan</creator>
        
        <creator>Ch.Raghava Prasad</creator>
        
        <creator>P.V.V.Kishore</creator>
        
        <subject>Automation of Train Rolling Stock Examination; level sets; shape priors; texture priors; expert system models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Train Rolling Stock Examination (TRSE) is a procedure for checking damages in the undercarriage of a moving train at 30 km/h. The undercarriage of a train is called a bogie according to railway manuals. Traditionally, TRSE is performed manually by a set of highly skilled railway personnel near train stations. This paper presents a new method to segment train bogie parts, which can assist trained railway personnel in performing better and consequently reduce train accidents. This work uses visualization techniques as a pair of virtual eyes to help check each bogie part remotely using high-speed video data. Our previous active contour (AC) models were supervised by a weak shape image, which has been shown to improve segmentation accuracies on a closely packed, inhomogeneous train bogie object space. However, the inner texture of the objects in the bogies is found to be necessary for better object segmentation. Hence, this paper proposes an algorithm for bogie part segmentation as a successive texture- and shape-based AC model (STSAC). In this direction, the texture of the bogie part is applied serially before the shape to guide the contour towards the desired object of interest. This contrasts with previous approaches where texture is applied to extract object shape, losing texture information completely in the output image. To test the proposed method's ability to extract objects from videos captured under ambient conditions, a train rolling stock video database was built with 5 videos. In contrast to previous models, the proposed method produced shape-rich texture objects through contour evolution performed sequentially.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_71-Successive_Texture_and_Shape_based_Active_Contours.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Intrusion Detection System using Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110670</link>
        <id>10.14569/IJACSA.2020.0110670</id>
        <doi>10.14569/IJACSA.2020.0110670</doi>
        <lastModDate>2020-06-30T14:58:38.1300000+00:00</lastModDate>
        
        <creator>Marwan Ali Albahar</creator>
        
        <creator>Muhammad Binsawad</creator>
        
        <creator>Jameel Almalki</creator>
        
        <creator>Sherif El-etriby</creator>
        
        <creator>Sami Karali</creator>
        
        <subject>New regularizer; anomaly detection; NSL-KDD dataset; CIDDS-001 dataset; UNSW-NB15</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Currently, network communication is more susceptible to different forms of attacks due to its expanded usage, accessibility, and complexity in most areas, consequently imposing greater security risks. One method to halt attacks is to identify different forms of irregularities in the data transmitted and processed during communication. Detection of anomalies is a vital process for securing a system. To this end, machine learning plays a key role in identifying abnormalities and intrusions in communication over a network. Regularization is one of the major aspects of training machine learning models and plays a primary role in several successful artificial neural network models. In this work, a new regularization technique is integrated with an Artificial Neural Network (ANN) for classifying and detecting irregularities in network communication efficiently. The purpose of regularization is to discourage learning a more flexible or complex model, so that the machine learning model generalizes enough to perform accurately on unseen data. For training and testing purposes, the NSL-KDD, CIDDS-001 (External and Internal Server Data), and UNSW-NB15 datasets were utilized. Through extensive experiments, the proposed regularizer reaches a higher True Positive Rate (TPR) and precision compared to L1 and L2 norm regularization algorithms. Thus, it is concluded that the proposed regularizer demonstrates a strong intrusion detection ability.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_70-Improving_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Pre-processing and Parameterization Process of Generic Code Clone Detection Model for Clones in Java Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110669</link>
        <id>10.14569/IJACSA.2020.0110669</id>
        <doi>10.14569/IJACSA.2020.0110669</doi>
        <lastModDate>2020-06-30T14:58:38.1130000+00:00</lastModDate>
        
        <creator>Nur Nadzirah Mokhtar</creator>
        
        <creator>Al-Fahim Mubarak-Ali</creator>
        
        <creator>Mohd Azwan Mohamad Hamza</creator>
        
        <subject>Code clone; code clone detection model; java applications; computational intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Code clones are repeated source code fragments in a program. There are four types of code clone: Type 1, Type 2, Type 3, and Type 4. Various code clone detection models have been used to detect code clones. The Generic Code Clone Detection (GCCD) model combines five processes for detecting code clones from Type 1 to Type 4 in Java applications: Pre-processing, Transformation, Parameterization, Categorization, and Match Detection. This work aims to improve code clone detection by enhancing the GCCD model; to achieve this aim, the Pre-processing and Parameterization processes are enhanced. The enhancement determines the best constant and weightage that can be used to improve the code clone detection result. The detection results from the proposed enhancement show that the constant "private" with its weightage is the best constant and weightage for the Generic Code Clone Detection model.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_69-Enhanced_Pre_Processing_and_Parameterization_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning: A Tool to Detect Phishing Attacks in Communication Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110668</link>
        <id>10.14569/IJACSA.2020.0110668</id>
        <doi>10.14569/IJACSA.2020.0110668</doi>
        <lastModDate>2020-06-30T14:58:38.0970000+00:00</lastModDate>
        
        <creator>Ademola Philip Abidoye</creator>
        
        <creator>Boniface Kabaso</creator>
        
        <subject>Phishing attack; data sets; URL classification; phishing URL; attackers; machine learning; classifiers; Internet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Phishing is a cyber-attack that uses disguised email as a weapon and has been on the rise in recent times. An innocent Internet user who inadvertently clicks on a fraudulent link may fall victim to divulging personal information such as a credit card PIN, login credentials, banking information, and other sensitive data. There are many ways in which attackers can trick victims into revealing their personal information. In this article, we select important phishing URL features that can be used by an attacker to trick Internet users into taking the attacker&#39;s desired action. We use two machine learning techniques to accurately classify our data sets and compare the performance of other related techniques with our scheme. The results of the experiments show that the approach is highly effective in detecting phishing URLs, attaining an accuracy of 97.8% with a 1.06% false-positive rate, a 0.5% false-negative rate, and an error rate of 0.3%. The proposed scheme performs better than other selected related work, showing that our approach can be used in real-time applications for detecting phishing URLs.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_68-Hybrid_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Signature based Network Intrusion Detection System using Feature Selection on Android</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110667</link>
        <id>10.14569/IJACSA.2020.0110667</id>
        <doi>10.14569/IJACSA.2020.0110667</doi>
        <lastModDate>2020-06-30T14:58:38.0670000+00:00</lastModDate>
        
        <creator>Onyedeke Obinna Cyril</creator>
        
        <creator>Taoufik Elmissaoui</creator>
        
        <creator>Okoronkwo M.C</creator>
        
        <creator>Ihedioha Uchechi .M</creator>
        
        <creator>Chikodili H.Ugwuishiwu</creator>
        
        <creator>Okwume .B. Onyebuchi</creator>
        
        <subject>Signature detection; feature selection; Android phone; Smart Intrusion Detection System (SIDS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This paper presents the Smart Intrusion Detection System (SIDS), a contribution to efforts towards detecting intrusions and malicious activities on Android phones. The goal of this paper is to raise users' awareness of the high rate of intrusions and malicious activities on Android phones and to provide a countermeasure system for more secure operation. The proposed system (SIDS) detects any intrusion or illegal activity on Android, takes a selfie of the intruder without his/her knowledge, and keeps it in a log for the user to view. The object-oriented analysis and design method (OOADM) was adopted in the development. This approach was used to model and develop the system using real intrusion features and processes to detect intrusions more flexibly and efficiently. Signature detection was also used to detect attacks by looking for specific patterns. The system detects intrusions and immediately sends an alert to notify the user of an illegal or malicious attempt and the location of the intruder.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_67-Signature_based_Network_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Warehouse System for Multidimensional Analysis of Tuition Fee Level in Higher Education Institutions in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110666</link>
        <id>10.14569/IJACSA.2020.0110666</id>
        <doi>10.14569/IJACSA.2020.0110666</doi>
        <lastModDate>2020-06-30T14:58:38.0500000+00:00</lastModDate>
        
        <creator>Ardhian Agung Yulianto</creator>
        
        <creator>Yoshiya Kasahara</creator>
        
        <subject>Data warehouse; higher education institution; multidimensional analysis; Indonesia; tuition-fee-level management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In this study, we developed a data warehouse (DW) system for tuition-fee-level management for higher education institutions (HEIs) in Indonesia. The system was developed to provide sufficient information to administrators for deciding the tuition fees of applicants by integrating multisource data. A simple but sufficient method was introduced using open-source software, following the business requirements of the HEI's administrators. Following a business intelligence (BI) approach, four procedures are applied, i.e., preparation, integration, analysis, and visualization, to construct a tuition-fee-level management system. The DW features four basic dimensions (faculty, year, entrant type, and tuition fee level) among all seven dimensions, and three data sets regarding applicants, tuition fee level, and payment status. Analytical results included tuition fee level trends, the top five faculties by applicants, and trends in fees collected from students. These analysis results were presented in various charts and graphics contained in a tuition-fee-level dashboard, which has many functions to provide insight into business performance. The DW system described in this paper can be used as a guideline for tuition-fee-level management for HEIs in Indonesia.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_66-Data_Warehouse_System_for_Multidimensional_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Detection and Prevention of Web Vulnerabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110665</link>
        <id>10.14569/IJACSA.2020.0110665</id>
        <doi>10.14569/IJACSA.2020.0110665</doi>
        <lastModDate>2020-06-30T14:58:38.0200000+00:00</lastModDate>
        
        <creator>Muhammad Noman</creator>
        
        <creator>Muhammad Iqbal</creator>
        
        <creator>Amir Manzoor</creator>
        
        <subject>Web security survey; web vulnerabilities; detection and prevention techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The Internet provides a vast range of benefits to society and empowers users to use web applications in a variety of ways. The Internet has become the most transformative and fast-growing technology ever built, but it also brings new security challenges to web services because of its scattered and open nature. A simple vulnerability in program code could allow an attacker to obtain unauthorized access and perform adversarial actions. Hence, securing web applications against hacking attempts is of paramount importance. This paper presents a literature survey recapitulating security solutions and major vulnerabilities to promote further research by systematizing the existing methods on a broader horizon. The data is collected from a total of 86 primary studies taken from well-known digital libraries. The articles are classified by methods comprising secure programming, static, dynamic, and hybrid analysis, and machine learning. The number of references and the significance of a developing strategy were taken into account when selecting articles. Overall, our survey suggests that there is no way to mitigate all web vulnerabilities; therefore, more studies are needed in the area of web information security. The complexity of all methods is addressed, and some recommendations regarding when to apply given methods are provided. Finally, we summarize the experience gained and examine future research openings in web application security.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_65-A_Survey_on_Detection_and_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Classifier using Machine Learning Technique for Individual Action Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110664</link>
        <id>10.14569/IJACSA.2020.0110664</id>
        <doi>10.14569/IJACSA.2020.0110664</doi>
        <lastModDate>2020-06-30T14:58:38.0030000+00:00</lastModDate>
        
        <creator>G. L. Sravanthi</creator>
        
        <creator>M.Vasumathi Devi</creator>
        
        <creator>K.Satya Sandeep</creator>
        
        <creator>A.Naresh</creator>
        
        <creator>A.Peda Gopi</creator>
        
        <subject>Human action recognition; machine learning; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Human action recognition is an important branch of computer vision and is getting increasing attention from researchers. It has been applied in many areas, including surveillance, healthcare, sports, and computer games. This proposed work focuses on designing a human action recognition system for a human interaction dataset. Literature research was conducted to determine suitable algorithms for action recognition. In this proposed work, three machine learning models are implemented as the classifiers for human actions. An image processing method and a projection-based feature extraction algorithm are presented to generate training examples for the classifiers. The action recognition task is divided into two parts: 4-class human posture recognition and 5-class human motion recognition. Classifiers are trained to classify input data into one of the posture or motion classes. Performance evaluations of the classifiers are carried out to assess validation accuracy and test accuracy for action recognition. The architecture designs for the centralized and distributed recognition systems are presented. These designed architectures are then simulated on a sensor network to evaluate feasibility and recognition performance. Overall, the designed classifiers show promising performance for action recognition.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_64-An_Efficient_Classifier_using_Machine_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neuro-fuzzy System with Particle Swarm Optimization for Classification of Physical Fitness in School Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110663</link>
        <id>10.14569/IJACSA.2020.0110663</id>
        <doi>10.14569/IJACSA.2020.0110663</doi>
        <lastModDate>2020-06-30T14:58:37.9870000+00:00</lastModDate>
        
        <creator>Jose Sulla-Torres</creator>
        
        <creator>Gonzalo Luna-Luza</creator>
        
        <creator>Doris Ccama-Yana</creator>
        
        <creator>Juan Gallegos-Valdivia</creator>
        
        <creator>Marco Cossio-Bola&#241;os</creator>
        
        <subject>Classification; ANFIS; particle swarm optimization; physical fitness; RMSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Physical fitness is widely known to be one of the critical elements of a healthy life. The sedentary attitude of school children is related to several health problems due to physical inactivity. This article aims to classify physical fitness in school children, using a database of 1813 children of both sexes aged six to twelve years. The physical tests were flexibility, horizontal jump, and agility, which served to classify physical fitness using neural networks and fuzzy logic. For this, the ANFIS (adaptive network fuzzy inference system) model was used, optimized with the Particle Swarm Optimization (PSO) algorithm. The experimental tests showed an RMSE of 3.41 after performing 500 iterations of the PSO algorithm. This result is considered acceptable within the conditions of this investigation.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_63-Neuro_Fuzzy_System_with_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analytics of Classifiers on Resampled Datasets for Pregnancy Outcome Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110662</link>
        <id>10.14569/IJACSA.2020.0110662</id>
        <doi>10.14569/IJACSA.2020.0110662</doi>
        <lastModDate>2020-06-30T14:58:37.9400000+00:00</lastModDate>
        
        <creator>Udoinyang G. Inyang</creator>
        
        <creator>Francis B. Osang</creator>
        
        <creator>Imo J. Eyoh</creator>
        
        <creator>Adenrele A. Afolorunso</creator>
        
        <creator>Chukwudi O. Nwokoro</creator>
        
        <subject>Imbalance learning; pregnancy outcome; random forest; SMOTE; imbalance data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The main challenges of predictive analytics revolve around the handling of datasets, especially the disproportionate distribution of instances among classes, in addition to classifier-suitability issues. This unequal spread causes imbalance learning and severely obstructs prediction accuracy. In this paper, the performances of six classifiers and the effect of data balancing (DB) and formation approaches for predicting pregnancy outcome (PO) were investigated. Synthetic minority oversampling technique (SMOTE) and resampling with and without replacement were adopted for data imbalance treatment. Six classifiers, including random forest (RF), were evaluated on each resampled dataset with four test modes using the Waikato Environment for Knowledge Analysis and R programming libraries. The results of analysis of variance performed separately using F-measure and root mean squared error showed that the mean performance of classifiers across the datasets varied significantly (F=117.9; p=0.00) at a 95% confidence interval, while the Tukey multiple-comparison test revealed RF (mean=0.78) and SMOTE (mean=0.73) as having significantly different means. The RF model on SMOTE produced each PO class accuracy ≥0.89, area under the curve ≥0.96, and coverage of 97.8%, and was adjudged the best classifier-DB method pair. However, there was no significant difference (F=0.07, 0.01; p=1.000) in the mean performances of classifiers across test data modes. This reveals that train/test data modes insignificantly affect classification accuracy, although there are noticeable variations in computational cost. The methodology significantly enhances the predictive accuracy of minority classes and confirms the importance of data-imbalance treatment and the suitability of RF for PO classification.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_62-Comparative_Analytics_of_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Evaluation of a Tangible User Interface and Serious Game for Identification of Cognitive Deficiencies in Preschool Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110661</link>
        <id>10.14569/IJACSA.2020.0110661</id>
        <doi>10.14569/IJACSA.2020.0110661</doi>
        <lastModDate>2020-06-30T14:58:37.9270000+00:00</lastModDate>
        
        <creator>S&#225;nchez-Morales A.</creator>
        
        <creator>Durand-Rivera J.A</creator>
        
        <creator>Mart&#237;nez-Gonz&#225;lez C.L</creator>
        
        <subject>User interface; wizard of Oz; usability; HCI; input device</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Detecting deficits in reading and writing literacy skills has been of great interest in the scientific community to correlate executive functions with future academic skills. In the present study, a prototype of a serious multimedia runner-type game, Play with SID, was developed, designed to detect deficiencies in cognitive abilities in preschool children (sustained attention, memory, working memory, visuospatial abilities, and reaction time) before they learn to read and write. Usability tests are used in Human-Computer Interaction to determine the feasibility of a system; they serve as proof of concept before the development of real systems. The aim of this paper was to evaluate the usability of the interface of the serious game, as well as the tangible user interface, a teddy bear with motion sensors. A usability study using the Wizard of Oz technique was conducted with 18 neurotypical preschool participants, ages 4 to 6. Concepts related to interactivity (interaction, fulfillment of the activity objective, reaction to stimuli, and game time without distraction) were observed, along with eye-tracking to assess attention and the System Usability Scale (SUS) to measure usability. According to the usability evaluation (confidence interval between 74.74% and 90.47%), the prototype has good to excellent usability, with no statistically significant differences between the age groups. The observed concept with the highest score was game time without distraction; this characteristic will allow evaluation of sustained attention. We also found that use of the tangible interface enables observation of laterality development, which will be added to the design of the serious game. Observation-based usability assessment techniques are useful for obtaining information from participants whose communication skills are still developing and whose ability to express their perceptions in detail is limited.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_61-Usability_Evaluation_of_a_Tangible_User_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Data Sampling Techniques for Credit Card Fraud Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110660</link>
        <id>10.14569/IJACSA.2020.0110660</id>
        <doi>10.14569/IJACSA.2020.0110660</doi>
        <lastModDate>2020-06-30T14:58:37.9100000+00:00</lastModDate>
        
        <creator>Abdulla Muaz</creator>
        
        <creator>Manoj Jayabalan</creator>
        
        <creator>Vinesh Thiruchelvam</creator>
        
        <subject>Data imbalance; credit card fraud; sampling techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Credit card fraud is a tough reality that continues to constrain the financial sector, and its detrimental effects are felt across the entire financial market. Criminals are continuously on the lookout for ingenious methods for such fraudulent activities and are a real threat to security. Therefore, there is a need for early detection of fraudulent activity to preserve customer trust and safeguard business. A major challenge in designing fraud detection systems is dealing with the class imbalance in the data, since genuine transactions vastly outnumber fraudulent ones, which typically account for less than 1% of total transactions. This is an important area of study, as the positive case (fraudulent case) is hard to distinguish and becomes even harder with the inflow of data, where the representation of such cases decreases even further. This study trained four predictive models, including Artificial Neural Network (ANN), Gradient Boosting Machine (GBM) and Random Forest (RF), on different sampling methods. Random Under Sampling (RUS), Synthetic Minority Over-sampling Technique (SMOTE), Density-Based Synthetic Minority Over-Sampling Technique (DBSMOTE) and SMOTE combined with Edited Nearest Neighbour (SMOTEENN) were used for all models. The findings of this study indicate promising results with SMOTE-based sampling techniques. The best recall score obtained was 0.81, achieved with the SMOTE sampling strategy by the DRF classifier; the precision score for this classifier was observed to be 0.86. A Stacked Ensemble was trained on all the sampled datasets and found to have the best average performance at 0.78. The Stacked Ensemble model has shown promise in the detection of fraudulent transactions across most of the sampling strategies.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_60-A_Comparison_of_Data_Sampling_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Insertion Sort by Threshold Swapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110659</link>
        <id>10.14569/IJACSA.2020.0110659</id>
        <doi>10.14569/IJACSA.2020.0110659</doi>
        <lastModDate>2020-06-30T14:58:37.8930000+00:00</lastModDate>
        
        <creator>Basima Elshqeirat</creator>
        
        <creator>Muhyidean Altarawneh</creator>
        
        <creator>Ahmad Aloqaily</creator>
        
        <subject>Sorting; design of algorithm; insertion sort; enhanced insertion sort; threshold swapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Sorting is an essential operation for arranging data in a specific order, such as ascending or descending, with numeric and alphabetic data. There are various sorting algorithms for each situation. For applications that have incremental data and require an adaptive sorting algorithm, the insertion sort algorithm is the most suitable choice, because it can deal with each element without the need to sort the whole dataset. Moreover, the insertion sort algorithm may be the most popular sorting algorithm because of its simple and straightforward steps. However, the performance of the insertion sort algorithm decreases when it comes to large datasets. In this paper, an algorithm is designed to empirically improve the performance of the insertion sort algorithm, especially for large datasets. The new proposed approach is stable, adaptive and very simple to translate into programming code. Moreover, this proposed solution can be easily modified to obtain in-place variations of such an algorithm while maintaining its main features. From our experimental results, it turns out that the proposed algorithm is very competitive with the classic insertion sort algorithm. After applying the proposed algorithm and comparing it with classic insertion sort, the time taken to sort a given dataset was reduced by 23%, regardless of the dataset’s size. Furthermore, the advantage of the enhanced algorithm grows along with the size of the dataset. This algorithm does not require additional resources, nor the need to sort the whole dataset every time a new element is added.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_59-Enhanced_Insertion_Sort_by_Threshold_Swapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparitive Study of Time Series and Deep Learning Algorithms for Stock Price Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110658</link>
        <id>10.14569/IJACSA.2020.0110658</id>
        <doi>10.14569/IJACSA.2020.0110658</doi>
        <lastModDate>2020-06-30T14:58:37.8630000+00:00</lastModDate>
        
        <creator>Santosh Ambaprasad Sivapurapu</creator>
        
        <subject>Time series; deep learning; ARIMA; VAR; LSTM; GRU; CNN 1D; genetic algorithm; Tree Structured Parzen Estimator (TPE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Stock price prediction has always been an intriguing research problem in the financial domain. In the past decade, various methodologies based on classical time series, machine learning, deep learning and hybrid models combining these algorithms have been proposed, with reasonable effectiveness in predicting stock prices. There is also considerable research comparing the performances of these models. However, the literature review raises a concern: the lack of a formal methodology that allows comparison of the performances of the different models. For example, the lack of guidance on the generalizability of time series models and optimised deep learning models is concerning. In addition, there is a lack of guidance on the general fitment of models, which can vary in accordance with the forecasting requirements of stock prices. This study is aimed at establishing a formal methodology for comparing different types of time series forecasting models based on a like-for-like paradigm. The effectiveness of deep learning and time-series models has been evaluated by predicting the close prices of three banking stocks. The characteristics of the models in terms of generalizability are compared, and the impact of the forecasting period on performance is evaluated for the various models on a common metric. In most previous studies, forecasting was done for periods of 1 day, 5 days or 31 days. To minimise the impact of stock market volatility due to various political and economic shocks in both international and domestic domains, forecasting periods of up to 2 days for the short term and 5 days for the long term are considered. It is evidenced that the deep learning models have outperformed time series models in terms of generalisability as well as short- and long-term forecasts.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_58-Comparitive_Study_of_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Semantic Text Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110657</link>
        <id>10.14569/IJACSA.2020.0110657</id>
        <doi>10.14569/IJACSA.2020.0110657</doi>
        <lastModDate>2020-06-30T14:58:37.8470000+00:00</lastModDate>
        
        <creator>Soukaina Fatimi</creator>
        
        <creator>Chama EL Saili</creator>
        
        <creator>Larbi Alaoui</creator>
        
        <subject>Text clustering; similarity measure; ontology; semantic web; RDF; RDFS; OWL; reasoning; inferencing rules; SPARQL; topic modeling; summarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Existing approaches for text clustering are either agglomerative, divisive or based on frequent itemsets. However, most of the suggested solutions do not take the semantic associations between words into account, and documents are only regarded as bags of unrelated words. Indeed, traditional text clustering methods usually focus on the frequency of terms in documents to create connected homogenous clusters without considering the associated semantics, which of course leads to inaccurate clustering results. Accordingly, this research aims to understand the meanings of text phrases in the process of clustering to make maximum use of documents. The semantic web framework offers useful techniques that make databases substantially more exploitable. The goal is to exploit these techniques, making full use of the Resource Description Framework (RDF) to represent textual data as triplets. To come up with a more effective clustering method, we provide a semantic representation of the data in texts on which the clustering process is based. On the other hand, this study opts to implement other techniques within the clustering process, such as ontology representation, to manipulate and extract meaningful information using RDF, RDF Schemas (RDFS), and the Web Ontology Language (OWL). Since text clustering is an indispensable task for better exploitation of documents, the use of documents may be more intelligently conducted while considering semantics in the process of text clustering, in order to efficiently identify the more related groups in a document collection. To this end, the proposed framework combines multiple techniques, yielding an efficient approach that joins machine learning tools with semantic web principles. The framework allows RDF representation of documents, clustering, topic modeling, cluster summarization, and information retrieval based on RDF querying and reasoning tools. It also highlights the advantages of using semantic web techniques in clustering, topic modeling and knowledge extraction based on processes of questioning, reasoning and inferencing.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_57-A_Framework_for_Semantic_based_Text_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Sharing Framework for Modern Code Review to Diminish Software Engineering Waste</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110656</link>
        <id>10.14569/IJACSA.2020.0110656</id>
        <doi>10.14569/IJACSA.2020.0110656</doi>
        <lastModDate>2020-06-30T14:58:37.8300000+00:00</lastModDate>
        
        <creator>Nargis Fatima</creator>
        
        <creator>Sumaira Nazir</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Knowledge sharing; modern code review; software engineering wastes; waiting waste; lean software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Modern Code Review (MCR) is a quality assurance technique that involves massive interaction between MCR team members. Presently, MCR team members are confronted with the problem of waiting-waste production, which results in psychological distress and project delays. Therefore, the MCR team needs effective knowledge sharing during MCR activities to avoid the circumstances that lead team members to the waiting state. The objective of this study is to develop a knowledge sharing framework for the MCR team to reduce waiting waste. The research methodology used for this study is the Delphi survey. The conducted Delphi survey was intended to produce a finalized list of knowledge sharing factors and to recognize and prioritize the most influential knowledge sharing factors for MCR activities. The study results reported 22 knowledge sharing factors, 135 sub-factors, and 5 categories. Based on the results of the Delphi survey, the knowledge sharing framework for MCR has been developed. The study is beneficial for software engineering researchers wishing to extend the research. It can also help MCR team members apply the designed framework to increase knowledge sharing and diminish waiting waste.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_56-Knowledge_Sharing_Framework_for_Modern_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing Protocol based on Floyd-Warshall Algorithm Allowing Maximization of Throughput</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110655</link>
        <id>10.14569/IJACSA.2020.0110655</id>
        <doi>10.14569/IJACSA.2020.0110655</doi>
        <lastModDate>2020-06-30T14:58:37.8170000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Network routing protocol; virtual private network; autonomous system; open shortest path first; Floyd-Warshall algorithm; Dijkstra algorithm; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>A routing protocol based on the Floyd-Warshall algorithm which allows maximization of throughput is proposed. The metric function in the proposed routing protocol is throughput, including not only sent packets but also retransmitted packets, in order to improve the effectiveness and efficiency of the network in question. Through simulation studies, it is found that the proposed routing protocol is superior, from the point of view of maximizing throughput, to the conventional Open Shortest Path First (OSPF) protocol, which is based on the Dijkstra algorithm for shortest path determination. A routing protocol for Virtual Private Networks (VPN) in an Autonomous System (AS) based on maximizing throughput is proposed. Through a comparison between the proposed protocol and the widely used existing OSPF protocol, it is also found that the time required for transmission of packets from one node to another with the proposed protocol is 56.54% less than that of the OSPF protocol.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_55-Routing_Protocol_based_on_Floyd_Warshall_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Honeypot-based Botnet Detection Models for Smart Factory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110654</link>
        <id>10.14569/IJACSA.2020.0110654</id>
        <doi>10.14569/IJACSA.2020.0110654</doi>
        <lastModDate>2020-06-30T14:58:37.8000000+00:00</lastModDate>
        
        <creator>Lee Seungjin</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <subject>IoT; smart factory; honeypot; Botnets; detection; security; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Since the Swiss Davos Forum in January 2017, the most searched keywords related to the Fourth Industrial Revolution have been AI technology, big data, and IoT. In particular, the manufacturing industry seeks to advance information and communication technology (ICT) to build smart factories that integrate the management of production processes, safety, procurement, and logistics services. Such smart factories can effectively solve the problem of frequent accidents and high fault rates. An increasing number of incidents in smart factories due to botnet DDoS attacks have been reported in recent times. Hence, Internet of Things security is of paramount importance in this emerging field of network security. In response to these cyberattacks, smart factory security needs to gain defensive ability against botnets. Various security solutions have been proposed. However, those emerging approaches to IoT security have yet to effectively deal with IoT malware, also known as zero-day attacks. Botnet detection using honeypots has recently been studied in a few works and shows potential to detect botnets effectively in some applications. Detecting botnets by honeypot is a detection method in which a resource is intentionally created within a network as a trap to attract botnet attackers, with the purpose of closely monitoring and recording their behavior. The tracked contents are recorded in a log file, which is then analysed by machine learning, and responding actions are generated to act against the botnet attack. This literature review looks into two main areas: 1) botnets and their severity in cybersecurity, and 2) botnet attacks on smart factories and the potential of the honeypot approach as an effective solution. Notably, a comparative analysis of the effectiveness of honeypot detection in various applications is carried out, and the application of honeypots in smart factories is reviewed.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_54-A_Review_on_Honeypot_Based_Botnet_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation Criteria for RDF Triplestores with an Application to Allegrograph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110653</link>
        <id>10.14569/IJACSA.2020.0110653</id>
        <doi>10.14569/IJACSA.2020.0110653</doi>
        <lastModDate>2020-06-30T14:58:37.7830000+00:00</lastModDate>
        
        <creator>Khadija Alaoui</creator>
        
        <creator>Mohamed Bahaj</creator>
        
        <subject>RDF; RDFS; SPARQL; triplestore; big data; NoSQL; AllegroGraph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Since its launch as the standard language of the semantic web, the Resource Description Framework (RDF) has gained enormous importance in many fields. This has led to the appearance of a variety of data systems to store and process RDF data. To help users identify the RDF data stores best suited to their needs, we establish a list of criteria for evaluating and comparing existing RDF management systems, also called triplestores. This is the first work addressing this topic for triplestores. The criteria list highlights various aspects and is not limited to particular stores but covers all types, including among others relational, native, centralized, distributed and big data stores. Furthermore, this criteria list is established taking into account relevant issues in accordance with triplestore tasks, with respect to the main issues of RDF data storage, RDF data processing, performance, distribution and ease of use. As a case study, we consider an application of the evaluation criteria to the graph RDF triplestore AllegroGraph.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_53-Evaluation_Criteria_for_RDF_Triplestores.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Evaluation of Open Source Learning Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110652</link>
        <id>10.14569/IJACSA.2020.0110652</id>
        <doi>10.14569/IJACSA.2020.0110652</doi>
        <lastModDate>2020-06-30T14:58:37.7700000+00:00</lastModDate>
        
        <creator>Seren Basaran</creator>
        
        <creator>Rafia Khalleefah Hamad Mohammed</creator>
        
        <subject>E-learning; ISO/IEC 9126; learning management systems; quality model; usability evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Advancements in Information and Communications Technology have enabled learning to be conducted online, frequently through Learning Management Systems (LMS). The use of LMS as tools for learning in the present Internet age is seen as an important solution to major problems faced particularly by higher education instructors, students and universities. However, quality- and usability-related information regarding such widely used learning management systems is rarely encountered in the literature. The main objective of this study is to evaluate the system quality of the top five widely used open source learning management systems (Moodle, ATutor, Eliademy, Forma LMS and Dokeos) through the external characteristics of the ISO/IEC 9126 quality standards evaluation model, with two experts. The ISO/IEC 9126 quality model is adequate for evaluating important system quality metrics. The results highlight in detail a set of usability and quality issues associated with the external characteristics of each open LMS, which require further attention from developers, educators and researchers to improve the quality of learning.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_52-Usability_Evaluation_of_Open_Source_Learning_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation of Traditional Exam Invigilation using CCTV and Bio-Metric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110651</link>
        <id>10.14569/IJACSA.2020.0110651</id>
        <doi>10.14569/IJACSA.2020.0110651</doi>
        <lastModDate>2020-06-30T14:58:37.7530000+00:00</lastModDate>
        
        <creator>MD Jiabul Hoque</creator>
        
        <creator>Md. Razu Ahmed</creator>
        
        <creator>Md. Jashim Uddin</creator>
        
        <creator>Muhammad Mostafa Amir Faisal</creator>
        
        <subject>Automation in invigilation; bio-metric authentication; CCTV monitoring; e-assessment; Parallax Data Acquisition Tool (PLX-DAQ); traditional invigilation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In education, whilst e-learning has played a significant role over the last few years due to its flexibility and remote-based education system, the majority of courses still rely upon traditional approaches to learning because of the lack of integrity and security of online examinations and assessments in e-learning. As such, the traditional examination system is considered superior to e-examination, but it has a few limitations, such as the excessive number of physical resources (invigilators) required and the high occurrence of malpractice by students during exams. The objective of this paper is to develop a framework for the traditional pen-and-paper examination system in which the number of invigilators is substantially reduced and malpractice by students during exams is abolished. In order to implement the proposed examination system, educational institutions are required to maintain a database using the Parallax Data Acquisition tool (PLX-DAQ) that incorporates biometric information of all students. Before entering the examination hall, examinees go through an authentication process via a biometric reader attached in front of each exam hall. During the examination, examinees are monitored and controlled by an invigilator from a distance through the use of 360-degree Closed-Circuit Television (CCTV) cameras as well as ultra-high-sensitivity microphones and speakers. Here, CCTV cameras are used to monitor examinees' physical malpractice and microphones are used to control examinees' vocal malpractice. Only one invigilator is required for n exam halls in this process. Communication between students and the invigilator can be done with microphones and speakers installed in both the exam halls and the invigilator room. This model will wipe out malpractice during examinations and will be a cost-effective, simple and secure solution to the complex traditional exam invigilation process.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_51-Automation_of_Traditional_Exam_Invigilation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficiency and Performance of Optimized Robust Controllers in Hydraulic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110650</link>
        <id>10.14569/IJACSA.2020.0110650</id>
        <doi>10.14569/IJACSA.2020.0110650</doi>
        <lastModDate>2020-06-30T14:58:37.7230000+00:00</lastModDate>
        
        <creator>Chong Chee Soon</creator>
        
        <creator>Rozaimi Ghazali</creator>
        
        <creator>Shin Horng Chong</creator>
        
        <creator>Chai Mau Shern</creator>
        
        <creator>Yahaya Md. Sam</creator>
        
        <creator>Zulfatman Has</creator>
        
        <subject>Robust Control Design; optimization; tracking efficiency analysis; controller effort analysis; electro-hydraulic actuator system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Common real-time applications of hydraulic systems include heavy machinery, aircraft systems, and transportation. Real-time applications, however, are notorious for suffering from nonlinearities due to unavoidable mechanical structures. Thus, control systems are engaged to manipulate and minimize the effects resulting from the nonlinearities. In this paper, a few control approaches are implemented on the Electro-Hydraulic Actuator (EHA) system: the commonly used Proportional-Integral-Derivative (PID) controller, the reinforced Fractional-Order (FO) PID controller, the Sliding Mode Controller (SMC), and the enhanced hybrid SMC-PID controller. In order to obtain proper parameters for each controller, the Particle Swarm Optimization (PSO) technique is applied. The output data are then analysed based on performance indices in terms of energy consumption and the error produced. The performance indices include Root Mean Square Error/Voltage (RMSE/V), Integral Square Error/Voltage (ISE/V), Integral Time Square Error/Voltage (ITSE/V), Integral Absolute Error/Voltage (IAE/V), and Integral Time Absolute Error/Voltage (ITAE/V). It is observed in the results that, based on the performance indices in terms of error and voltage, the hybrid SMC-PID is capable of generating better outcomes with reference to tracking capability and energy usage.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_50-Efficiency_and_Performance_of_Optimized_Robust_Controllers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Travelling Salesman Problem (TSP) by Hybrid Genetic Algorithm (HGA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110649</link>
        <id>10.14569/IJACSA.2020.0110649</id>
        <doi>10.14569/IJACSA.2020.0110649</doi>
        <lastModDate>2020-06-30T14:58:37.7070000+00:00</lastModDate>
        
        <creator>Ali Mohammad Hussein Al-Ibrahim</creator>
        
        <subject>Traveling Salesman Problem (TSP); NP-Complete problem; genetic algorithms; local Search algorithm; crossover mechanism; neighboring solution algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The Traveling Salesman Problem (TSP) is easy to state and describe but very hard to solve: there is no known algorithm that can find the optimal solution in polynomial time, so it is an NP-Complete problem. The TSP is related to many other problems because the techniques used to solve it can easily be applied to other hard optimization problems, allowing its results to carry over to them. Many techniques have been proposed and developed to solve such problems, including Genetic Algorithms. The aim of this paper is to improve the performance of genetic algorithms on the TSP by proposing and developing a new crossover mechanism and a local search algorithm called the Search for Neighboring Solution Algorithm, with the goal of producing better solutions in a shorter time and fewer generations. The results for a number of standard TSP benchmarks of different sizes show that the algorithms using the proposed crossover mechanism find the optimal solution (100%) for many of these benchmarks, and solutions within 96%-99% of the optimum for the others. A comparison between the proposed crossover mechanism and other known crossover mechanisms shows that it improves the quality of the solutions. The proposed local search algorithm and crossover mechanism produce superior results compared to previously proposed local search algorithms and crossover mechanisms, yielding near-optimal solutions in less time and fewer generations.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_49-Solving_Travelling_Salesman_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Situational Modern Code Review Framework to Support Individual Sustainability of Software Engineers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110648</link>
        <id>10.14569/IJACSA.2020.0110648</id>
        <doi>10.14569/IJACSA.2020.0110648</doi>
        <lastModDate>2020-06-30T14:58:37.6900000+00:00</lastModDate>
        
        <creator>Sumaira Nazir</creator>
        
        <creator>Nargis Fatima</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Situations; situational factors; Modern Code Review (MCR); sustainable software engineer; situational software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Modern Code Review (MCR) is a socio-technical practice to improve source code quality and ensure successful software development. It involves the interaction of software engineers from different cultures and backgrounds. As a result, a variety of unknown situational factors arise that impact the individual sustainability of MCR team members and affect their productivity by causing mental distress and fear of unknown and varying situations. The MCR team therefore needs to be aware of the relevant situational factors; however, they are confronted with a lack of competency in identifying them. This study conducts a Delphi survey to investigate an optimal and well-balanced set of MCR-related situational factors, and to recognize and prioritize the most influential situational factors for MCR activities. The findings report 21 situational factors, 147 sub-factors, and 5 categories. Based on the results of the Delphi survey, the identified situational factors are transformed into a situational MCR framework. This study might help support the individual sustainability of the MCR team by making them aware of the situations that can occur and vary during the execution of the MCR process. It might also help the MCR team improve their productivity and remain in the industry longer, and it can support software researchers who want to contribute to situational software engineering from varying software engineering contexts.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_48-Situational_Modern_Code_Review_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Digital Applications in the Thematic Literature Studies of Emily Dickinson’s Poetry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110647</link>
        <id>10.14569/IJACSA.2020.0110647</id>
        <doi>10.14569/IJACSA.2020.0110647</doi>
        <lastModDate>2020-06-30T14:58:37.6770000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Cluster analysis; digital applications; Emily Dickinson; lexical content; philological methods; thematic studies; Vector Space Model (VSM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Thematic studies in literature have traditionally been based on philological methods supported by personal knowledge and evaluation of the texts. A major problem with studies in this tradition is that they are neither objective nor replicable. With the development of digital technologies and applications, it is now possible for theme analysis in literary texts to be based at least partially on objective, replicable methods. In order to address issues of objectivity and replicability in the thematic classification of literary texts, this study proposes a computational model for theme analysis of the poems of Emily Dickinson using cluster analysis based on a vector space model (VSM) representation of the lexical content of the selected texts. The results indicate that the proposed model yields usable results in understanding the thematic structure of Dickinson&#8217;s texts, and does so in an objective and replicable way. Although the results of the analysis are broadly in agreement with existing, philologically based critical opinion about the thematic structure of Dickinson&#8217;s work, the contribution of this study is to give that critical opinion a scientific, objective, and replicable basis. The methodology used in this study is mathematically based, clear, objective, and replicable. Finally, the results of the study have positive implications for the use of computational models in literary criticism and literature studies. The success of computer-aided approaches in addressing inherent problems of subjectivity and selectivity in the field of literary studies argues against the theoretical objections to the involvement of computers and digital applications in the study of literature.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_47-On_the_Digital_Applications_in_the_Thematic_Literature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110646</link>
        <id>10.14569/IJACSA.2020.0110646</id>
        <doi>10.14569/IJACSA.2020.0110646</doi>
        <lastModDate>2020-06-30T14:58:37.6600000+00:00</lastModDate>
        
        <creator>Ishfaque Qamar Khilji</creator>
        
        <creator>Kamonashish Saha</creator>
        
        <creator>Jushan Amin Shonon</creator>
        
        <creator>Muhammad Iqbal Hossain</creator>
        
        <subject>CryptoNets; neural network; Acute Lymphoid Leukemia (ALL); homomorphic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Machine learning is now a widely used mechanism, and applying it in sensitive fields such as medical and financial data has only made things easier. Accurate diagnosis of cancer is essential to treating it properly, yet medical tests for cancer are currently quite expensive and unavailable in many parts of the world. CryptoNets is a demonstration of the use of neural networks over data encrypted with Homomorphic Encryption. This project demonstrates the use of Homomorphic Encryption for outsourcing neural-network predictions in the case of Acute Lymphoid Leukemia (ALL). Using CryptoNets, patients or doctors in need of the service can encrypt their data with Homomorphic Encryption and send only the encrypted message to the service provider (hospital or model owner). Since Homomorphic Encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the results. Throughout the process the service provider (hospital or model owner) gains no knowledge about the data that was used or the result, since everything remains encrypted. Our work proposes a neural network model able to predict Acute Lymphoid Leukemia (ALL) with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then ran different machine learning and neural network models such as VGG16, SVM, AlexNet, and ResNet50, and compared their validation accuracies with our own model, which ultimately gives better accuracy than the rest of the models used. We then use our own pre-trained neural network to make predictions with CryptoNets. We were able to achieve an encrypted prediction accuracy of about 78%, which is close to the 80% validation accuracy achieved by our own CNN model for prediction of Acute Lymphoid Leukemia (ALL).</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_46-Application_of_Homomorphic_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Underwater Wireless Sensor Network Route Optimization using BIHH Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110645</link>
        <id>10.14569/IJACSA.2020.0110645</id>
        <doi>10.14569/IJACSA.2020.0110645</doi>
        <lastModDate>2020-06-30T14:58:37.6430000+00:00</lastModDate>
        
        <creator>Turki Ali Alghamdi</creator>
        
        <subject>Sensor nodes; energy; routing; black hole; hamming code; hex code</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Underwater wireless sensor networks (UWSNs) are established in water bodies such as oceans, seas, and rivers to observe military activity, perform rescue operations, and support resource mining. The sensor nodes communicate through acoustic channels. These nodes have limited battery life (energy) and narrow bandwidth, and the channel suffers from delays and noise, posing security threats. The state of the art presents different routing protocols that aim to utilize energy and bandwidth efficiently, reduce delay, and provide security against black hole attacks. However, these methods do not show adequate enhancement in security or bandwidth utilization in a mobile environment, and as a result the delay also increases. In this paper, a secure and bandwidth-efficient path is established using the Bellman Inora Hex Hamming (BIHH) technique, which not only improves routing performance but also saves energy. The presented approach is validated with a network simulator.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_45-Underwater_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Fusion-Link Prediction for Evolutionary Network with Deep Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110644</link>
        <id>10.14569/IJACSA.2020.0110644</id>
        <doi>10.14569/IJACSA.2020.0110644</doi>
        <lastModDate>2020-06-30T14:58:37.6300000+00:00</lastModDate>
        
        <creator>Marcus Lim</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <subject>Metadata; time-series network; social network analysis; criminal network; deep reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The sophistication of the technology-aided covert activities employed by criminal networks has proven very challenging for law enforcement seeking to cripple those activities. In view of this, law enforcement agencies need to be equipped with criminal network analysis (CNA) technology that can provide advanced and comprehensive intelligence to uncover the primary members (nodes) and associations (links) within a network. Tools designed to predict links between members mainly rely on Social Network Analysis (SNA) models and machine learning (ML) techniques to improve model precision. The primary challenge in constructing classical ML models such as random forests (RF) with an acceptable level of accuracy is obtaining a dataset large enough to train the model. Obtaining such a dataset in the domain of criminal networks is a significant problem due to the stealthy and covert nature of their activities compared to social networks. The main objective of this research is to demonstrate that a link prediction model constructed with a relatively small dataset, augmented with data generated through self-simulation by leveraging deep reinforcement learning (DRL), can achieve higher precision in predicting links. The training of the model was further fused with metadata (i.e., environment attributes such as criminal records, education level, age, and police station proximity) in order to capture the real-life attributes of organised crime, which is expected to improve the performance of the model. To validate the results, a baseline model designed without metadata (CNA-DRL) was compared with a model incorporating metadata (MCNA-DRL).</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_44-Data_Fusion_Link_Prediction_for_Evolutionary_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review on Personality Types and Learning Styles in Team-based Learning for Information Systems Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110643</link>
        <id>10.14569/IJACSA.2020.0110643</id>
        <doi>10.14569/IJACSA.2020.0110643</doi>
        <lastModDate>2020-06-30T14:58:37.6130000+00:00</lastModDate>
        
        <creator>Muhammad Zul Aiman Zulkifli</creator>
        
        <creator>K.S. Savita</creator>
        
        <creator>Noreen Izza Arshad</creator>
        
        <subject>Team-based learning; personality type; learning style; undergraduate students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Team-based learning (TBL) has become a preferred learning approach at the higher-education level. Many articles discuss the benefits and the implementation process of team-based learning, but few studies focus on the composition of team members or the effects of personality types and learning styles on it. This article sets out to analyze the existing literature on team-based learning implementation at the undergraduate level and how personality types and learning styles affect the learning process, and to explore these topics in the information systems field. Guided by the Okoli systematic review method, a systematic review of the Scopus, Web of Science, and Association for Information Systems (AIS) databases was conducted. The results show that TBL receives positive feedback from scholars, with issues arising only in the implementation process: the use of students&#8217; personality types and learning styles, the roles of team members, TBL management in the classroom, the fact that TBL is not &#8220;fit for all&#8221;, and the state of current TBL studies. Using personality and learning style instruments is one suggested way to improve implementation, but no detailed guidelines on how to use them are available yet. There is also a lack of studies on team-based learning in the information systems field.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_43-Review_on_Personality_Types_and_Learning_Styles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Eight Crossover Operators for the Maximum Scatter Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110642</link>
        <id>10.14569/IJACSA.2020.0110642</id>
        <doi>10.14569/IJACSA.2020.0110642</doi>
        <lastModDate>2020-06-30T14:58:37.5970000+00:00</lastModDate>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <subject>Traveling salesman problem; maximum scatter; genetic algorithms; crossover operators; sequential constructive crossover</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The maximum scatter traveling salesman problem (MSTSP), a variant of the famous travelling salesman problem (TSP), is considered in this study. The aim of the problem is to maximize the minimum edge in a salesman&#8217;s tour that visits each city in a network exactly once. It is proved to be NP-hard and is considered a very difficult problem. To solve such problems efficiently, one must use heuristic/metaheuristic algorithms, and the genetic algorithm (GA) is one of them. Of the three operators in GAs, crossover is the most important. We therefore consider eight crossover operators in GAs for solving the MSTSP. These operators were originally designed for the TSP but can also be applied to the MSTSP after some modifications. The crossover operators are first illustrated manually through an example and then executed on some well-known TSPLIB instances of different types and sizes. The comparative study clearly demonstrates the usefulness of the sequential constructive crossover operator for the MSTSP. Finally, a relative ranking of the crossover operators is reported.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_42-A_Comparative_Study_of_Eight_Crossover_Operators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preparing Graduates with Digital Literacy Skills Toward Fulfilling Employability Need in 4IR Era: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110641</link>
        <id>10.14569/IJACSA.2020.0110641</id>
        <doi>10.14569/IJACSA.2020.0110641</doi>
        <lastModDate>2020-06-30T14:58:37.5800000+00:00</lastModDate>
        
        <creator>Khuraisah MN</creator>
        
        <creator>Fariza Khalid</creator>
        
        <creator>Hazrati Husnin</creator>
        
        <subject>Digital literacy; computer literacy; information literacy; employability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This systematic review aims to synthesize employer expectations of digital skills among graduates, the steps and measures taken by higher education institutions to prepare students, and the ways students can harness motivation to make themselves competitive and marketable toward fulfilling employability needs in the 4IR era. It was designed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Articles published between January 2016 and 2020 were sought from three electronic databases: Science Direct, Scopus, and Web of Science. Additional items gained from the Universiti Kebangsaan Malaysia repository were also considered for review. All papers were reviewed and a quality assessment was performed; twenty articles were finally selected. Data were extracted, organized, and analyzed using a narrative synthesis. The review identified three overarching themes: (1) employer perspectives on their expectations of young graduates; (2) institutions&#8217; views on how they should prepare their students for the 4IR era; and (3) students&#8217; perspectives on how they can motivate themselves. The systematic review provides insightful information on the digital literacy skills required of young graduates, the expectations of industry players, and how digital literacies can be developed in institutions.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_41-Preparing_Graduates_with_Digital_Literacy_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architectural Proposal for a Syllabus Management System using the ISO/IEC/IEEE 42010</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110640</link>
        <id>10.14569/IJACSA.2020.0110640</id>
        <doi>10.14569/IJACSA.2020.0110640</doi>
        <lastModDate>2020-06-30T14:58:37.5500000+00:00</lastModDate>
        
        <creator>Anthony Meza-Luque</creator>
        
        <creator>Alvaro Fern&#225;ndez Del Carpio</creator>
        
        <creator>Karina Rosas Paredes</creator>
        
        <creator>Jose Sulla-Torres</creator>
        
        <subject>Management; architecture; software; syllabus; skills; ISO/IEC IEEE 42010</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Efficiency in the academic and administrative procedures of higher education is a clear competitive advantage in terms of quality, which rests on continuous improvement toward the achievement of educational objectives. In our institution, syllabus handling is currently carried out manually, delaying many educational processes. We therefore propose to innovate through a software architecture approach based on the standard &#8220;ISO/IEC/IEEE 42010: Systems and software engineering - Architecture Description&#8221; to describe the architecture of the Syllabus Management System software. It is developed in three stages: analysis, design, and verification. This will allow professors to carry out their research, training, teaching, and timely reporting, with measurement of the skills and abilities achieved by students, and will allow managers and academic authorities to make decisions based on the results obtained by the tool, improving the quality of the contents and the development flow of the syllabus.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_40-Architectural_Proposal_for_a_Syllabus_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generic Framework Architecture for Verifying Embedded Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110639</link>
        <id>10.14569/IJACSA.2020.0110639</id>
        <doi>10.14569/IJACSA.2020.0110639</doi>
        <lastModDate>2020-06-30T14:58:37.5330000+00:00</lastModDate>
        
        <creator>Lamia ELJADIRI</creator>
        
        <creator>Ismail ASSAYAD</creator>
        
        <subject>Algorithms; automation; embedded components; embedded systems; formal verification; framework; LTL properties; Promela; SystemC; SysVerPml; system design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This paper presents a framework for the formal verification of standard embedded components such as bus protocols, microprocessors, memory blocks, various IP blocks, and software components. It includes model checking of embedded system components. The algorithms are modeled in SystemC and transformed into the Promela language (PROcess or PROtocol MEta LAnguage), with the integration of LTL (Linear Temporal Logic) properties extracted from state machines in order to reduce verification complexity. Thus, SysVerPml is dedicated not only to verifying generated properties but also to the automated integration of other properties into models when needed. In the following, we address the problems of representing components in the system design, determining which properties are appropriate for each component, and verifying those properties.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_39-Generic_Framework_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ER Model Partitioning: Towards Trustworthy Automated Systems Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110638</link>
        <id>10.14569/IJACSA.2020.0110638</id>
        <doi>10.14569/IJACSA.2020.0110638</doi>
        <lastModDate>2020-06-30T14:58:37.4730000+00:00</lastModDate>
        
        <creator>Dhammika Pieris</creator>
        
        <creator>M. C Wijegunesekera</creator>
        
        <creator>N. G. J Dias</creator>
        
        <subject>Conceptual model; Entity Relationship (ER) model; relational database schema; information preservation; transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In database development, a conceptual model is created in the form of an Entity-Relationship (ER) model and transformed into a relational database schema (RDS) to create the database. However, some important information represented in the ER model may not be transformed and represented in the RDS, causing a loss of information during the transformation process. With a view to preserving information, in our previous study we standardized the transformation process as a one-to-one and onto mapping from the ER model to the RDS. For this purpose, we modified the ER model and the transformation algorithm, resolving some deficiencies that existed in them. Since the mapping was established using a few real-world cases as a basis and for verification purposes, a formal proof is necessary to validate the work. The ongoing research aiming to create such a proof will show how a given ER model can be partitioned into a unique set of segments that can be used to represent the ER model itself. How these findings can be used to complete the proof in the future will also be explained, as will the significance of the research for automating database development, teaching conceptual modeling, and using formal methods.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_38-ER_Model_Partitioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Enabled Air Quality Monitoring for Health-Aware Commuting Recommendation in Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110637</link>
        <id>10.14569/IJACSA.2020.0110637</id>
        <doi>10.14569/IJACSA.2020.0110637</doi>
        <lastModDate>2020-06-30T14:58:37.3000000+00:00</lastModDate>
        
        <creator>Riaz UlAmin</creator>
        
        <creator>Muhammad Akram</creator>
        
        <creator>Najeeb Ullah</creator>
        
        <creator>Muhammad Ashraf</creator>
        
        <creator>Abdul Sattar Malik</creator>
        
        <subject>Un-planned Areas; Vehicular ad hoc Networks; Pollution Monitoring; AQI; Health Aware Commuting Recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The importance of air pollution control in smart cities has been realized by almost every sector of society. The research community has been working in collaboration with industry to craft sensors that measure different types of pollution levels in the environment. However, it is rarely possible to install sensors in all geographical areas, even though measuring pollution levels in nearly every inhabited part of the world is important for implementing clean-environment policies. In unplanned areas in particular, the implementation of environmental policies faces problems because such areas lack communication infrastructure and large numbers of fixed or static sensors are costly. This work envisions a sensor-equipped, VANET-based system to monitor pollution levels in different areas of an unplanned city. This paper proposes an autonomous VANET system that carries environmental sensors to collect data from an area at different intervals, processes the data into information, and forwards the information to a node capable of collecting all information and sending it to a server machine for further processing, either over the VANET or over some reliable network connection. Based on the collected data, this research further contributes health-aware commuting recommendations built on cost-effective monitoring of air quality.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_37-IoT_Enabled_Air_Quality_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bangla Optical Character Recognition and Text-to-Speech Conversion using Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110636</link>
        <id>10.14569/IJACSA.2020.0110636</id>
        <doi>10.14569/IJACSA.2020.0110636</doi>
        <lastModDate>2020-06-30T14:58:37.2830000+00:00</lastModDate>
        
        <creator>Aditya Rajbongshi</creator>
        
        <creator>Md. Ibadul Islam</creator>
        
        <creator>Al Amin Biswas</creator>
        
        <creator>Md. Mahbubur Rahman</creator>
        
        <creator>Anup Majumder</creator>
        
        <creator>Md. Ezharul Islam</creator>
        
        <subject>Optical character recognition; Bangla text; speech conversion; Raspberry Pi; camera module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Optical Character Recognition (OCR) technology is very helpful for visually impaired or illiterate persons who are unable to read text documents but need to access their content. In this paper, a camera-based assistive device is presented that visually impaired or illiterate people can use to understand Bangla text documents by listening to the contents of Bangla text images. The work mainly involves extracting the Bangla text from a Bangla text image and converting the extracted text to speech. It has been implemented on a Raspberry Pi with a camera module, applying the Tesseract OCR engine, Open Source Computer Vision, and the Google Speech Application Program Interface. This work can help speakers of the Bangla language who are unable to read or have a significant loss of visual sight.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_36-Bangla_Optical_Character_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Categorization of Relevant Sequence Alignment Algorithms with Respect to Data Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110635</link>
        <id>10.14569/IJACSA.2020.0110635</id>
        <doi>10.14569/IJACSA.2020.0110635</doi>
        <lastModDate>2020-06-30T14:58:37.2530000+00:00</lastModDate>
        
        <creator>Hasna El Haji</creator>
        
        <creator>Larbi Alaoui</creator>
        
        <subject>Sequence alignment; data structures; bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Sequence alignment is an active research subfield of bioinformatics. Sequence databases are growing rapidly and steadily, and to cope with this growth many efficient algorithms have been developed that rely on various data structures. These structures have demonstrated considerable efficacy in terms of run time and memory consumption. In this paper, we briefly outline existing methods applied to the sequence alignment problem. We then present a qualitative categorization of some remarkable algorithms based on their data structures, focusing on research published in the last two decades (2000 to 2020). We describe the employed data structures, present some important algorithms using each, and discuss the potential strengths and weaknesses of all these structures. This will guide biologists in deciding which program is best suited for a given purpose, and it also highlights weak points that deserve the attention of bioinformaticians in future research.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_35-A_Categorization_of_Relevant_Sequence_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Multiple Sclerosis Disease using Cumulative Histogram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110634</link>
        <id>10.14569/IJACSA.2020.0110634</id>
        <doi>10.14569/IJACSA.2020.0110634</doi>
        <lastModDate>2020-06-30T14:58:37.2370000+00:00</lastModDate>
        
        <creator>Menna Safwat</creator>
        
        <creator>Fahmi Khalifa</creator>
        
        <creator>Hossam El-Din Moustafa</creator>
        
        <subject>Cumulative Histogram (CH); Magnetic Resonance Image (MRI); Multiple Sclerosis (MS); White Matter (WM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Multiple sclerosis (MS) is a chronic disease that affects different body parts, including the brain. Detection and classification of MS brain lesions is of immense importance to physicians for the administration of appropriate treatment. This study therefore investigates an automated framework for the diagnosis and classification of MS brain lesions using magnetic resonance imaging (MRI). First, each patient's MRI images are converted from DICOM to TIF format, as MS lesions appear clearly in the white matter (WM). This is followed by brain tissue segmentation using a k-nearest neighbor classifier. Then, cumulative empirical distributions, or cumulative histograms (CH), of the segmented lesions are estimated along with other texture/statistical features that exploit the difference in intensity between MS lesions and their surrounding tissues. Finally, these cumulative distributions are fused with the statistical features for the classification of MS using k-means classifiers. Experiments are conducted using transverse T2-weighted MR brain scans, which are highly sensitive in detecting MS plaques, from 20 patients, with the gold-standard classification obtained by an experienced MS specialist. Compared with statistical features alone, our proposed fusion scored the highest accuracy at 98% with a false-positive rate of 1%.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_34-Classification_of_Multiple_Sclerosis_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Overview and Comparative Analysis of Service Discovery Protocols and Middlewares</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110633</link>
        <id>10.14569/IJACSA.2020.0110633</id>
        <doi>10.14569/IJACSA.2020.0110633</doi>
        <lastModDate>2020-06-30T14:58:37.2230000+00:00</lastModDate>
        
        <creator>Jawad Hussain Awan</creator>
        
        <creator>Usman Naseem</creator>
        
        <creator>Shah Khalid Kha</creator>
        
        <creator>Nazar Waheed</creator>
        
        <subject>Pervasive computing; service discovery protocols; context-aware; middleware; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Context is a major source of communication: information can be gathered easily from a user's context thanks to the progress of smart, context-aware systems. A service directory also supports such systems in responding to requests sent by clients. In this paper, the authors overview context-aware systems and their sensing capabilities, in location and beyond, along with COIVA (a context-aware system). Eight discovery protocols (DEAPspace, DNS-SD, JXTA, RDP, LDAP, CORBA Trader, UDDI, and Superstring) are discussed, together with their functionalities, and compared to evaluate the performance and efficiency of each system. In addition, six middleware platforms (CAMPUS, CASF, SeCoMan, CoCaMAAL, BDCaM, and FlexRFID) are compared on factors such as architectural style, context abstraction/reasoning level, context-awareness level, contextual adaptation approaches, decision making, and programming model. The authors further categorize them into the sub-categories discussed in Section 4 and identify CoCaMAAL as a better middleware compared with the others.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_33-A_Systematic_Overview_and_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Detection Model for Construction Worker Safety Conditions using Faster R-CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110632</link>
        <id>10.14569/IJACSA.2020.0110632</id>
        <doi>10.14569/IJACSA.2020.0110632</doi>
        <lastModDate>2020-06-30T14:58:37.2070000+00:00</lastModDate>
        
        <creator>Madihah Mohd Saudi</creator>
        
        <creator>Aiman Hakim Ma’arof</creator>
        
        <creator>Azuan Ahmad</creator>
        
        <creator>Ahmad Shakir Mohd Saudi</creator>
        
        <creator>Mohd Hanafi Ali</creator>
        
        <creator>Anvar Narzullaev</creator>
        
        <creator>Mohd Ifwat Mohd Ghazali</creator>
        
        <subject>PPE; OSH; accident; construction site; image detection; faster R-CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Many accidents occur on construction sites, leading to injury and death. According to the Occupational Safety and Health Administration (OSHA), falls, electrocutions, being struck by objects, and being caught in or between objects were the four main causes of worker deaths on construction sites. Many factors contribute to the increase in accidents, and personal protective equipment (PPE) is one of the defense mechanisms used to mitigate them. This paper therefore presents an image detection model for workers' safety conditions based on PPE compliance, using the Faster Region-based Convolutional Neural Network (R-CNN) algorithm. The experiment was conducted using TensorFlow with 1,129 images from the MIT Places Database (for scene recognition) as a training dataset and 333 anonymous images from real construction sites for evaluation. The experimental results showed 276 of the images detected as safe, with an average accuracy rate of 70%. The strength of this paper lies in the image detection of three PPE combinations, involving hardhats, vests, and boots worn by construction workers. In future work, the threshold and image sharpness (low resolution) will be the two main characteristics targeted for refinement in order to improve the accuracy rate.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_32-Image_Detection_Model_for_Construction_Worker.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Skills of Cloud Computing to Promote Knowledge in Saudi Arabian Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110631</link>
        <id>10.14569/IJACSA.2020.0110631</id>
        <doi>10.14569/IJACSA.2020.0110631</doi>
        <lastModDate>2020-06-30T14:58:37.1900000+00:00</lastModDate>
        
        <creator>Ahmed Sadik</creator>
        
        <creator>Mohammed Albahiri</creator>
        
        <subject>Cloud computing applications; e-learning environment; higher education; knowledge economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The present study aims to develop skills in cloud computing applications and the knowledge economy among university students by designing a participatory electronic learning environment. A sample was chosen from the students of the "General Diploma" in the Faculty of Education, King Khalid University. This sample was divided into two groups: an experimental group of 15 students trained through the participatory e-learning environment, and a control group of 17 students trained through the Blackboard Learning Management System. A skills scale for cloud computing applications and knowledge economy skills was developed. The Kolmogorov-Smirnov test was used to check the normality of the variables, while the Mann-Whitney test and the Spearman correlation test were used to analyze the results. The results indicated that the design of a participatory e-learning environment based on the theory of communication contributed to improving the skill level in cloud computing applications and knowledge economy skills, and that such an environment significantly improves these skills among students from Saudi Arabian universities. Moreover, future studies need to focus on such blueprints in the context of the educational system of Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_31-Developing_Skills_of_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Fundus Images for Diagnosing Diabetic Retinopathy using B-Spline</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110630</link>
        <id>10.14569/IJACSA.2020.0110630</id>
        <doi>10.14569/IJACSA.2020.0110630</doi>
        <lastModDate>2020-06-30T14:58:37.1770000+00:00</lastModDate>
        
        <creator>Tayba Bashir</creator>
        
        <creator>Khurshid Asghar</creator>
        
        <creator>Mubbashar Saddique</creator>
        
        <creator>Shafiq Hussain</creator>
        
        <creator>Inam Ul Haq</creator>
        
        <subject>B-spline; medical images enhancement; fundus images; diabetic retinopathy; interpolation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Medical images, such as CT scans, MRI, X-ray, mammography, and fundus images, are commonly used in the medical diagnosis process; they help improve disease diagnosis and reduce the chances of ambiguous interpretation. Medical images often suffer from low contrast, poor brightness, and noise due to the intrinsic properties of the camera or radio waves used during capture, which disrupts the diagnosis process. Enhancement of these images can improve diagnosis. The proposed enhancement technique for fundus images is based on B-spline interpolation, in which the intensity transformation curve is shaped by its control points. The Messidor and DRIVE datasets for Diabetic Retinopathy (DR) are used to evaluate the proposed technique. Results show that the fundus images achieve substantial visual and quantitative enhancement compared with recent techniques. The results provide evidence that the proposed approach is effective and preserves important information in fundus images while reducing noise.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_30-Enhancement_of_Fundus_Images_for_Diagnosing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimised Tail-based Routing for VANETs using Multi-Objective Particle Swarm Optimisation with Angle Searching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110629</link>
        <id>10.14569/IJACSA.2020.0110629</id>
        <doi>10.14569/IJACSA.2020.0110629</doi>
        <lastModDate>2020-06-30T14:58:37.1430000+00:00</lastModDate>
        
        <creator>Mustafa Qasim AL-Shammari</creator>
        
        <creator>Ravie Chandren Muniyandi</creator>
        
        <subject>VANETs; Routing; PDR; E2E delay; optimization; multi-objective particle swarm; location based routing; MOPSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Routing protocols for vehicular ad hoc networks (VANETs) are highly important, as they are essential for realizing the concept of the intelligent transportation system and several other applications. VANET routing requires awareness of the nature of the road and various other parameters that affect protocol performance. Optimising VANET routing yields optimal metrics, such as low end-to-end (E2E) delay, high packet delivery ratio (PDR), and low overhead. Since its performance is multi-objective in nature, it requires multi-objective optimisation as well. Most researchers have focused on a single objective or a weighted average for multi-objective optimisation; only a few studies have tackled true multi-objective optimisation of VANET routing. In this article, we propose a novel reactive routing protocol named tail-based routing, based on the concept of location-aided routing (LAR). We first re-defined the request zone to reduce its lateral width with respect to the lateral distance between the source and destination, and named it the tail. Next, we incorporated angle searching with crowding distance into multi-objective particle swarm optimisation (MO-PSO), calling the result MO-PSO-angle. We then optimised tail-based routing using MO-PSO-angle and compared it with optimised LAR, which demonstrated the superiority of the proposed protocol. The best improvement was at the optimisation point, with a 96% improvement in PDR and a 313% improvement in E2E delay.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_29-Optimised_Tail_based_Routing_for_VANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transitioning to Online Learning during COVID-19 Pandemic: Case Study of a Pre-University Centre in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110628</link>
        <id>10.14569/IJACSA.2020.0110628</id>
        <doi>10.14569/IJACSA.2020.0110628</doi>
        <lastModDate>2020-06-30T14:58:37.1300000+00:00</lastModDate>
        
        <creator>Ahmad Alif Kamal</creator>
        
        <creator>Norhunaini Mohd Shaipullah</creator>
        
        <creator>Liyana Truna</creator>
        
        <creator>Muna Sabri</creator>
        
        <creator>Syahrul N. Junaini</creator>
        
        <subject>E-learning; STEM; coronavirus; pandemic; education technology; assessment; technology acceptance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In the last decade, online learning has grown rapidly. The outbreak of coronavirus (COVID-19), however, forced learning institutions to embrace online learning due to lockdowns and campus closures. This paper presents an analysis of students' feedback (n=354) from the Centre of Pre-University Studies (PPPU), Universiti Malaysia Sarawak (UNIMAS), Malaysia, during the transition to fully online learning. Three phases of online surveys were conducted to measure the learners' acceptance of the migration and to identify related problems. The results show increased positivity among the students in their view of teaching and learning in STEM during the pandemic. Online learning is found to be not a hindrance but a blessing for academic excellence in the face of a calamity like the COVID-19 pandemic. The suggested future research directions will be of interest to educators, academics, and the research community.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_28-Transitioning_to_Online_Learning_during_COVID_19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Serious Games Requirements for Higher-Order Thinking Skills in Science Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110627</link>
        <id>10.14569/IJACSA.2020.0110627</id>
        <doi>10.14569/IJACSA.2020.0110627</doi>
        <lastModDate>2020-06-30T14:58:37.0970000+00:00</lastModDate>
        
        <creator>Siti Norliza Awang Noh</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <subject>Higher-Order Thinking (HOT) skills; educational games; serious games; interface design; science education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Education in the 21st century emphasises the mastery of higher-order thinking skills (HOTS) in the pursuit of developing globally competitive human capital. HOTS can be taught through science education. However, science education is considered very challenging, leaving students feeling less interested and less motivated. In addition, students have been found weak in mastering thinking skills, based on the decline in students' achievement in the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) tests. This situation highlights the need to change the approach to teaching and learning science, in line with current technological changes, to meet the challenges of globalisation. Previous studies showed that the use of serious games in learning can enhance students' thinking skills; serious games can thus be used to develop higher-order thinking skills among students. This paper presents the results of a preliminary study using interviews, document analysis, and a questionnaire survey. The findings show several issues and challenges of teaching and learning in implementing HOTS in science education, in addition to game design requirements for science education. These requirements will be used to design a serious game implementing HOTS in science education.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_27-Serious_Games_Requirements_for_Higher_Order.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of ICT Projects in Enterprises: Investments, Benefits and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110626</link>
        <id>10.14569/IJACSA.2020.0110626</id>
        <doi>10.14569/IJACSA.2020.0110626</doi>
        <lastModDate>2020-06-30T14:58:37.0800000+00:00</lastModDate>
        
        <creator>Khaled H. Alyoubi</creator>
        
        <subject>ICT projects; ICT investment; ICT benefits; ICT evaluation strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Enterprises depend heavily on Information and Communication Technologies (ICT) resources, which cover many of their business and operational activities. Enhancing operational capabilities, advancing the working environment, and improving employees&#8217; skills are major benefits provided by modern ICT resources. Organizations are under real pressure to upgrade their ICT infrastructure with the latest developments in order to compete in the market. This research investigates the role of ICT projects in an organization from the investment, benefits, and evaluation perspectives. Based on the literature review, a conceptual framework is proposed to capture the relationship between an ICT project&#8217;s investments, benefits, and evaluation. The main purpose of this study is to investigate the approach of enterprises toward ICT investments and to understand the types of ICT evaluation strategies practiced by organizations. The proposed framework is therefore applied and validated through multiple case studies to confirm the list of variables collected from the literature review; the investigation helps to corroborate the findings of the literature review through the selected case studies. The analysis of responses is presented in different formats to show the current role and status of ICT projects&#8217; investment, benefits, and evaluation in different organizations. The outcome of this study addresses substantial factors and offers references for organizations to build their ICT investment and evaluation models; the types of ICT investment, benefits, and measurement models extracted in this research can serve as a reference for organizations developing their own ICT investment policies.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_26-The_Role_of_ICT_Projects_in_Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Awareness and Usage of Moodle Features at Hashemite University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110625</link>
        <id>10.14569/IJACSA.2020.0110625</id>
        <doi>10.14569/IJACSA.2020.0110625</doi>
        <lastModDate>2020-06-30T14:58:37.0670000+00:00</lastModDate>
        
        <creator>Haneen Hijazi</creator>
        
        <creator>Ghadeer Al-Kateb</creator>
        
        <creator>Eslam Alkhawaldeh</creator>
        
        <subject>Moodle; learning management system; features; awareness; usage; activities; resources; tools; correlation; regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>E-learning plays a vital role in the educational process, and learning management systems are an essential component of e-learning. The Moodle learning management system is widely used in Higher Education Institutions due to the rich features it provides to support the learning process. Standard Moodle comprises 21 features (14 activities and 7 resources), yet little research has examined these features in particular. In this research, the awareness and usage of Moodle features among faculty members at Hashemite University, Jordan are investigated. A sample of 140 instructors was surveyed, and the responses were analyzed to find the overall awareness and usage of each feature. Furthermore, the correlation between awareness and usage, and how the awareness of Moodle features is associated with their usage, were analyzed through correlation and regression analysis. The study revealed that instructors expressed the highest awareness of the File, Folder, Assignment, URL, and Quiz features, whilst the least awareness concerned the SCORM package and IMS content package features. Regarding usage, the study identified the File, Folder, Assignment, and URL features as the most heavily used, whereas the least commonly used features were IMS content package, SCORM package, Wiki, Glossary, Workshop, Database, Survey, External tool, and Choice. Moreover, the study statistically demonstrated a strong correlation between the awareness and usage of features, and that changes in the awareness of Moodle features are significantly associated with changes in their usage: features with low awareness tend to have low usage, and usage increases as awareness increases. The study would help Moodle administrators in Higher Education Institutions decide on the most important features to install in their customized instance of Moodle. Furthermore, it would help the responsible parties at Hashemite University identify the least commonly used and least well-known features, allowing them to focus on increasing the levels of awareness and usage of those features in a way that might reflect positively on the learning process.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_25-Investigating_the_Awareness_and_Usage_of_Moodle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Performance of Inventory Management System using Arena Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110624</link>
        <id>10.14569/IJACSA.2020.0110624</id>
        <doi>10.14569/IJACSA.2020.0110624</doi>
        <lastModDate>2020-06-30T14:58:37.0500000+00:00</lastModDate>
        
        <creator>Fawaz J. Alsolami</creator>
        
        <subject>Inventory management system; simulation; arena simulation tool; demand and inventory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This study presents a demonstration of inventory management systems for situations in which organizations face challenges due to the uncertain behavior of demand. The implementation was conducted using simulation techniques to generate sampling experiments through computing and statistical methods. An important aspect of the simulation is understanding the level of satisfaction with the proposed system and its attributes. To assist organizations in controlling and managing the inventory system, this research provides a solution using the Arena simulation tool. First, the study explains the use of simulation and reviews common simulation approaches. Second, it proposes a framework for an inventory control system, with the Arena tool used for the practical model implementation. The main purpose of the experiment is to measure and analyze the applicability of the potential system using a simulation tool. The results indicate the successful implementation of the proposed framework for the inventory system. The model uses multiple inventory system variables, such as demand, inventory stock, and realized cost, and employs a stochastic demand model to capture real-world system behavior. The model was executed using a single-server queuing (M/M/1) approach and replicated several times. The results highlight the high performance of machines located at multiple places in processing demand requests on each replication run. The study demonstrates the association between demand and inventory, and the proposed model can support manufacturing organizations in controlling and managing their inventory systems.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_24-Measuring_the_Performance_of_Inventory_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>4GL Code Generation: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110623</link>
        <id>10.14569/IJACSA.2020.0110623</id>
        <doi>10.14569/IJACSA.2020.0110623</doi>
        <lastModDate>2020-06-30T14:58:37.0330000+00:00</lastModDate>
        
        <creator>Abdullah A H Alzahrani</creator>
        
        <subject>Software engineering; code transformation; 4GL; code generation; Model Driven Development (MDD); Extraction Transform Load (ETL); Model Driven Engineering (MDE); Rapid Application Development (RAD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Code generation is a longstanding goal in software engineering. It increases programming productivity by automating the transformation of models into actual source code. This process has been covered adequately for many programming languages. However, the topic has not been covered sufficiently with regard to Fourth Generation Languages (4GL), which are highly specialized in nature. The goal of this paper is to present a systematic literature review of 4GL code generation. The paper systematically reviews the studies published on the topic in the past 20 years in order to investigate trends, survey the approaches introduced, and identify potential new research lines.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_23-4GL_Code_Generation_A_Systematic_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Performance of Web-services during Traffic Anomalies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110622</link>
        <id>10.14569/IJACSA.2020.0110622</id>
        <doi>10.14569/IJACSA.2020.0110622</doi>
        <lastModDate>2020-06-30T14:58:37.0200000+00:00</lastModDate>
        
        <creator>Avneet Dhingra</creator>
        
        <creator>Monika Sachdeva</creator>
        
        <subject>Denial of service; DDoS attack; flash event; performance metrics; throughput; response time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Intentional or unintentional, service denial leads to substantial economic and reputational losses for users and the web-service provider. However, proper countermeasures are possible only if we understand and quantify the impact of such anomalies on the victim. In this paper, essential performance metrics distinguishing transmission issues from application issues are discussed and evaluated. Legitimate and attack traffic was synthetically generated in a hybrid testbed using open-source software tools. The experiment covers two scenarios, representing DDoS attacks and Flash Events, with varying attack strengths to analyze the impact of anomalies on the server and the network. It is demonstrated that as traffic surges, response time increases and the performance of the target web server degrades. The performance of the server and the network is measured using various network-level, application-level, and aggregate-level metrics, including throughput, average response time, number of legitimate active connections, and percentage of failed transactions.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_22-Analyzing_the_Performance_of_Web_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Denial of Service Attacks in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110621</link>
        <id>10.14569/IJACSA.2020.0110621</id>
        <doi>10.14569/IJACSA.2020.0110621</doi>
        <lastModDate>2020-06-30T14:58:37.0030000+00:00</lastModDate>
        
        <creator>Hesham Abusaimeh</creator>
        
        <subject>Cloud; cloud computing; DoS attacks; DDoS attacks; DDoS prevention; DDoS mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Attacks on cloud computing have increased with its expanded use. One of the well-known attacks targeting cloud computing is the distributed denial of service (DDoS) attack. The common features and components of the cloud structure make it more reachable by this kind of attack. DDoS targets the large number of devices connected to any cloud service provider, exploiting the scalability and reliability features that make the cloud available from anywhere at any time. The attack mainly generates a large number of malicious packets to keep the targeted server busy dealing with them. Many techniques exist to defend against DDoS attacks in conventional networks, while in cloud computing the task is more complicated because the various characteristics of the cloud make the defense process far from easy. This paper investigates most of the methods used to detect, prevent, and recover from DDoS attacks in the cloud computing environment.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_21-Distributed_Denial_of_Service_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Factors Affecting the Intention to Adopt Big Data Analytics in Apparel Sector, Sri Lanka</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110620</link>
        <id>10.14569/IJACSA.2020.0110620</id>
        <doi>10.14569/IJACSA.2020.0110620</doi>
        <lastModDate>2020-06-30T14:58:36.9730000+00:00</lastModDate>
        
        <creator>Hiruni Bolonne</creator>
        
        <creator>Piyavi Wijewardene</creator>
        
        <subject>Critical factors; TOE framework; Technology Acceptance Model (TAM); attitude towards using; intention to adopt; Big Data Analytics (BDA); apparel sector; Sri Lanka</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Big data has become a potential research area in the apparel industry due to the vast amount of data generated in a short period of time. The inability to adapt to a challenging, digital environment has pushed weaker players out of the industry while making adopters more and more powerful. As the insights generated from data become core competitive advantages, it is now pertinent to identify which factors affect the intention to adopt big data analytics in an apparel-sector organization. The three contexts of the Technology-Organization-Environment (TOE) framework, along with the Technology Acceptance Model (TAM), were used as foundational frameworks to explore the influence on users’ attitude towards use, which ultimately affects the intention to adopt big data analytics. The findings denote that the factors considered in both the TOE framework and the TAM model, except the organizational context, have a positive correlation with users’ attitude towards use, which ultimately enhances the organization’s intention to adopt big data analytics. Finally, the research concludes that the variable ‘attitude towards using’ plays a positive mediating role in the relationship between the critical factors and the intention to adopt big data analytics. It is hoped that the findings of this research will enrich the existing literature while encouraging practitioners to adopt big data analytics by prioritizing investments accordingly.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_20-Critical_Factors_Affecting_the_Intention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Arabic Characters Recognition using a Hybrid Two-Stage Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110619</link>
        <id>10.14569/IJACSA.2020.0110619</id>
        <doi>10.14569/IJACSA.2020.0110619</doi>
        <lastModDate>2020-06-30T14:58:36.9400000+00:00</lastModDate>
        
        <creator>Amjad Ali Al-Jourishi</creator>
        
        <creator>Mahmoud Omari</creator>
        
        <subject>Arabic character recognition; Support Vector Machine (SVM); neural network (NN); hybrid classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Handwritten Arabic character recognition presents a big challenge to researchers in the field of pattern recognition. Arabic characters are characterized by their highly cursive nature, and many of them have a similar appearance. For example, the only difference between some alphabet characters is the presence of a number of dots above or below the main character shape. This paper proposes a system for isolated off-line handwritten Arabic character recognition using the Discrete Cosine Transform (DCT) as the feature extraction method and a two-stage hybrid classifier. The two stages are a Support Vector Machine (SVM) and a neural network (NN). The first stage is a two-class SVM classifier that classifies a character as either having dot(s) or not. The output of this stage is used to extend the character’s feature vector with the class value, giving it an extra unique feature. The extended feature vector is fed to a multi-class neural network model to classify the character. The proposed approach is tested on a database of handwritten Arabic characters called AlexU Isolated Alphabet (AIA9K), containing 8,737 character images. The experimental results of the first-stage classifier showed a high recognition accuracy of 99.14%. The proposed two-stage hybrid classifier obtained an average recognition accuracy of 91.84% over all Arabic alphabet characters.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_19-Handwritten_Arabic_Characters_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Causes of Failure in the Implementation and Functioning of Information Systems in Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110618</link>
        <id>10.14569/IJACSA.2020.0110618</id>
        <doi>10.14569/IJACSA.2020.0110618</doi>
        <lastModDate>2020-06-30T14:58:36.9270000+00:00</lastModDate>
        
        <creator>Jos&#233; Ram&#243;n Figueroa-Flores</creator>
        
        <creator>Elizabeth Acosta-Gonzaga</creator>
        
        <creator>Elena Fabiola Ruiz-Ledesma</creator>
        
        <subject>Information systems; outsourcing; resistance to change; organizational culture; decision making; information systems implementation failures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>When implementing or starting up an information system, a number of causes can lead to its failure. Today, few companies do not rely on technology to carry out their business processes. The desire for a competitive advantage over competitors, together with a changing global business environment, puts pressure on information systems implementation projects, be it an ERP (Enterprise Resource Planning), a CRM (Customer Relationship Management), or a Big Data project to manage a central repository of all the internal and external data a company handles. Although starting an information system implementation project raises high hopes in a company, its failure can prevent key business processes from being carried out correctly. This article exposes the most common causes of failure when implementing an information system, as well as during its operation, which can lead to organizational chaos and to measures that no company wishes to take. A real case of failure during the implementation of an information system in an important Mexican company is presented. The research team was allowed to interview general and systems-area managers as well as employees. In addition, a survey was carried out among 30 managers and heads of department who closely followed the implementation of the global operations and technology system within the company. The most influential factors were deficient administration, poor project definition, and inappropriate consultancy.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_18-Causes_of_Failure_in_the_Implementation_and_Functioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DVB-T2 Radio Frequency Signal Observation and Parameter Correlation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110617</link>
        <id>10.14569/IJACSA.2020.0110617</id>
        <doi>10.14569/IJACSA.2020.0110617</doi>
        <lastModDate>2020-06-30T14:58:36.9100000+00:00</lastModDate>
        
        <creator>Bexhet Kamo</creator>
        
        <creator>Elson Agastra</creator>
        
        <creator>Shkelzen Cakaj</creator>
        
        <subject>DVB-T2; radio coverage; statistical correlation RF data; field measurements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>In this paper, field test measurements are described and statistically correlated to obtain useful information about the radiofrequency (RF) behavior of Digital Video Broadcasting - Terrestrial, second generation (DVB-T2) channels. Monitored radiofrequency data parameters are analyzed from a statistical perspective to find any linear correlation between them. A practical series of field measurements in the surroundings of Kor&#231;a city in Albania was performed over 48 consecutive hours, with data sampled every second. The obtained results show the main issues that need to be considered in monitoring service reception quality, which is not strongly related to the received channel power level but to the Modulation Error Rate (MER) parameter.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_17-DVB_T2_Radio_Frequency_Signal_Observation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discrete Cosine Transformation based Image Data Compression Considering Image Restoration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110616</link>
        <id>10.14569/IJACSA.2020.0110616</id>
        <doi>10.14569/IJACSA.2020.0110616</doi>
        <lastModDate>2020-06-30T14:58:36.8800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Discrete Cosine Transformation; data compression; image restoration; Landsat TM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Discrete Cosine Transformation (DCT) based image data compression featuring an image restoration method is proposed. DCT image compression is widely used but suffers from four major image defects. In order to reduce noise and distortion, the proposed method estimates a set of parameters for an assumed distortion model based on an image restoration method. The results of an experiment with Landsat TM (Thematic Mapper) data of Saga show good compression performance in terms of compression factor and image quality; specifically, the proposed method achieved a 25% improvement in compression factor over the existing DCT method with almost comparable image quality between the two methods.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_16-Discrete_Cosine_Transformation_based_Image_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT based Approach for Efficient Home Automation with ThingSpeak</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110615</link>
        <id>10.14569/IJACSA.2020.0110615</id>
        <doi>10.14569/IJACSA.2020.0110615</doi>
        <lastModDate>2020-06-30T14:58:36.8800000+00:00</lastModDate>
        
        <creator>Mubashir Ali</creator>
        
        <creator>Zarsha Nazim</creator>
        
        <creator>Waqar Azeem</creator>
        
        <creator>Muhammad Haroon</creator>
        
        <creator>Aamir Hussain</creator>
        
        <creator>Khadija Javed</creator>
        
        <creator>Maria Tariq</creator>
        
        <subject>Internet of Things (IoT); home automation; Arduino; ThingSpeak; sensors; cloud computing; mobile computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>With the passage of time, technology is growing rapidly, and people and daily-life processes are highly dependent on the internet. The Internet of Things (IoT) is an area of magnificent impact, growth, and potential with the advent and rapid growth of smart homes, smart agriculture, smart cities, and smart everything. The IoT constructs an environment in which everything is integrated and digitalized. People depend on smartphones and want to do their daily routine tasks quickly and easily. Ordinary homes consist of multiple digital appliances controlled or managed by individual remote systems, and it is very hectic to use multiple individual remotes to control the various components of a home. In the current technological era, almost all types of home components are available in digital form. Various home automation systems with different specifications and implementations have been proposed in the literature. The objective of this research is to introduce an IoT-based approach for an efficient home automation system using Arduino and ThingSpeak. We have automated almost all essential aspects of a smart home. The proposed system is efficient in terms of low power consumption and green building, and it increases the life of digital appliances. The ThingSpeak cloud platform is used to integrate the home components and to analyze and process the data. The state-of-the-art MQTT protocol is implemented for LAN communication. This paper provides a path for IoT developers and researchers to sense, digitalize, and control homes in the perspective of the future IoT. Moreover, this work serves as an instance of how life can be made easier with the help of IoT applications.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_15-An_IoT_based_Approach_for_Efficient_Home_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning based Coding for Medical Image Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110614</link>
        <id>10.14569/IJACSA.2020.0110614</id>
        <doi>10.14569/IJACSA.2020.0110614</doi>
        <lastModDate>2020-06-30T14:58:36.8470000+00:00</lastModDate>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>Image compression; medical image processing; neural network; learning based coding; peak signal-to-noise ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The area of image processing has produced different coding approaches and applications, ranging from fundamental image compression models to high-quality applications. The advancement of image processing has enabled automation in various image coding applications, among which medical image processing is one of the prime areas. Medical diagnosis has always been a time-consuming and sensitive process for accurate medical treatment, and automation systems have been developed to improve it. In the process of automation, images are passed to a remote unit for processing and decision making. Images are coded for compression to minimize processing and computational overhead; however, trading compression against accuracy remains a challenge. Thus, optimizing image compression requires compression through the reduction of non-relevant coefficients in medical images. The proposed image compression model develops a coding technique that attains accurate compression by retaining image precision with lower computational overhead in clinical image coding. To make image compression more efficient, this research work introduces an image compression approach based on learning-based coding. The approach achieves superior results in terms of compression rate, encoding time, decoding time, total processing time, and peak signal-to-noise ratio (PSNR).</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_14-Learning_based_Coding_for_Medical_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cultural Algorithm Initializes Weights of Neural Network Model for Annual Electricity Consumption Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110613</link>
        <id>10.14569/IJACSA.2020.0110613</id>
        <doi>10.14569/IJACSA.2020.0110613</doi>
        <lastModDate>2020-06-30T14:58:36.8300000+00:00</lastModDate>
        
        <creator>Gawalee Phatai</creator>
        
        <creator>Sirapat Chiewchanwattana</creator>
        
        <creator>Khamron Sunat</creator>
        
        <subject>Neural network; weights initialization; metaheuristic algorithm; cultural algorithm; annual electricity consumption prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>The accurate prediction of annual electricity consumption is crucial in managing energy operations. The neural network (NN) has achieved considerable success in annual electricity consumption prediction due to its universal approximation property. However, the well-known back-propagation (BP) algorithm for training NNs easily gets stuck in local optima. In this paper, we study the weight initialization of NNs for the prediction of annual electricity consumption using the Cultural Algorithm (CA); the proposed algorithm is named NN-CA. NN-CA was compared to weight initialization using six other metaheuristic algorithms as well as BP. The experiments were conducted on annual electricity consumption datasets from 21 countries. The experimental results showed that the proposed NN-CA achieved better prediction accuracy than the other competitors. This result indicates the potential of the proposed NN-CA in the application of annual electricity consumption prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_13-Cultural_Algorithm_Initializes_Weights_of_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Risk Alarm for Asthma Patients using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110612</link>
        <id>10.14569/IJACSA.2020.0110612</id>
        <doi>10.14569/IJACSA.2020.0110612</doi>
        <lastModDate>2020-06-30T14:58:36.8170000+00:00</lastModDate>
        
        <creator>Rawabi A. Aroud</creator>
        
        <creator>Anas H. Blasi</creator>
        
        <creator>Mohammed A. Alsuwaiket</creator>
        
        <subject>Asthma; ANN; data mining; intelligent systems; machine learning; traffic-related pollution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Asthma is a chronic disease of the airways of the lungs. It results in inflammation and narrowing of the respiratory passages, which restricts airflow and leads to frequent bouts of shortness of breath with wheezing, accompanied by coughing and phlegm, after inhalation of substances that provoke allergic reactions or irritate the respiratory system. Data mining in healthcare systems is very important for diagnosing and understanding data; it aims to solve basic problems in diagnosing diseases, given the complexity of diagnosing asthma. Predicting chemicals in the atmosphere is very important and has been one of the most difficult problems since the last century. In this paper, the impact of chemicals on asthma patients is presented and discussed. A sensor system called MQ5 is used to examine the smoke and nitrogen content in the atmosphere. The MQ5 is embedded in a wristwatch that checks the smoke and nitrogen content at the patient’s location, and the system issues a warning alarm if a gas that affects the person with asthma is detected. It is based on an Artificial Neural Network (ANN) built using data containing a set of chemicals such as carbon monoxide, NMHC (GT) acid gas, C6H6 (GT) gasoline, NOx (GT) nitrogen oxide, and NO2 (GT) nitrogen dioxide. Temperature and humidity are also used, as they can negatively affect asthma patients. Finally, the classification model was evaluated and achieved 99.58% classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_12-Intelligent_Risk_Alarm_for_Asthma_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability and Design Issues of Mobile Assisted Language Learning Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110611</link>
        <id>10.14569/IJACSA.2020.0110611</id>
        <doi>10.14569/IJACSA.2020.0110611</doi>
        <lastModDate>2020-06-30T14:58:36.7830000+00:00</lastModDate>
        
        <creator>Kashif Ishaq</creator>
        
        <creator>Fadhilah Rosdi</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Adnan Abid</creator>
        
        <subject>Educational technology; language learning; literacy and numeracy drive; mobile application (App); m-learning; usability; user interface design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This paper examines the views of teachers, government officials, and students on the Literacy &amp; Numeracy Drive (LND), a smartphone app for teaching languages and math to students in Punjab province, Pakistan, and identifies the usability and design problems encountered during its use in grade three in schools, since these issues had not been studied since the application’s launch. The methodology for this study comprises a questionnaire for teachers and semi-structured interviews with government officials of District Sheikhupura and with students. The results show that LND in its current form has various usability and design problems, i.e., with buttons, icons, color schemes, sluggish performance, and fonts. In addition, teachers, government officials, and students suggested creating and adopting game-based learning with an interactive interface, phonics, and key animations. Highly engaging and appealing delivery of the curriculum, together with improvements in assessment, will improve student participation and deliver better outcomes.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_11-Usability_and_Design_Issues_of_Mobile_Assisted_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Assessment and Analysis of Blended Learning in IT Education: A Longitudinal Study in Saudi Electronic University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110610</link>
        <id>10.14569/IJACSA.2020.0110610</id>
        <doi>10.14569/IJACSA.2020.0110610</doi>
        <lastModDate>2020-06-30T14:58:36.7700000+00:00</lastModDate>
        
        <creator>Mohamed Habib</creator>
        
        <creator>Muhammad Ramzan</creator>
        
        <subject>Blended learning; educational data; information technology; longitudinal study; data mining; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Blended learning is a new educational model that binds traditional face-to-face learning with the application of modern tools and technologies. This helps retain the positive features of traditional learning while allowing students to realize the potential of modern technologies. In blended learning, student perceptions and satisfaction play a key role. Longitudinal studies can help identify patterns in these perceptions and expectations so that blended learning can evolve with changing times and technologies. In this paper, a longitudinal study was carried out with the students and faculty of Saudi Electronic University to identify the major drivers and their role in shaping student perceptions and satisfaction. The results of this longitudinal study have been validated, and their comparisons ascertained, with the application of a decision tree based data mining technique. Based on the analysis and findings of this study, the paper presents recommendations to improve the blended learning experience and enhance the effectiveness of the teaching pedagogies developed accordingly.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_10-Performance_Assessment_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence: What it Was, and What it Should Be?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110609</link>
        <id>10.14569/IJACSA.2020.0110609</id>
        <doi>10.14569/IJACSA.2020.0110609</doi>
        <lastModDate>2020-06-30T14:58:36.7370000+00:00</lastModDate>
        
        <creator>Hala Abdel Hameed</creator>
        
        <subject>Classical AI; machine learning; Neuro-Symbolic AI; Cognitive-based AI; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Artificial Intelligence was embraced as an idea of simulating unique abilities of humans, such as thinking, self-improvement, and expressing their feelings using different languages. The idea of &quot;Programs with Common Sense&quot; was the main and central goal of Classical AI; it was built mainly around an internal, updatable cognitive model of the world. Now, however, almost all proposed models and approaches lack reasoning and cognitive models and have become more data-driven. In this paper, different approaches and techniques of AI are reviewed, specifying how these approaches strayed from the main goal of Classical AI and emphasizing how to return to its main objective. Additionally, most of the terms and concepts used in this field, such as Machine Learning, Neural Networks, and Deep Learning, are highlighted. Moreover, the relations among these terms are determined, trying to remove the mystery and ambiguity around them. The transition from Classical AI to Neuro-Symbolic AI and the need for new Cognitive-based models are also explained and discussed.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_9-Artificial_Intelligence_What_It_Was.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Memory Design for High-Throughput and Low-Power Table Lookup in Internet Routers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110608</link>
        <id>10.14569/IJACSA.2020.0110608</id>
        <doi>10.14569/IJACSA.2020.0110608</doi>
        <lastModDate>2020-06-30T14:58:36.7230000+00:00</lastModDate>
        
        <creator>Hayato Yamaki</creator>
        
        <subject>Internet routers; packet processing; table lookup; hybrid memory architecture; Packet Processing Cache (PPC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Table lookup is a major process that determines the packet processing throughput and power efficiency of routers. To realize high-throughput and low-power table lookup, recent routers have employed several table lookup approaches, such as the TCAM (Ternary Content Addressable Memory) based approach and the DRAM (Dynamic Random Access Memory) based approach, depending on the purpose. However, it is difficult to realize both ultrahigh throughput and significantly low power due to the trade-off between them. To satisfy both demands, this study proposes a hybrid memory design, which combines TCAM, DRAM, PPC (Packet Processing Cache), CMH (Cache Miss Handler), and IP Cache, to enable high-throughput and low-power table lookup. Simulation results using an in-house cycle-accurate simulator showed that the proposed memory design achieved nearly 1 Tbps throughput with power similar to that of the DRAM-based approach. Compared to the approach proposed in a recent study, the proposed memory design realizes 1.95x higher throughput with 11% power consumption.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_8-Hybrid_Memory_Design_for_High_Throughput.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Disease Prediction using Shallow Convolutional Neural Networks on Metagenomic Data Visualizations based on Mean-Shift Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110607</link>
        <id>10.14569/IJACSA.2020.0110607</id>
        <doi>10.14569/IJACSA.2020.0110607</doi>
        <lastModDate>2020-06-30T14:58:36.7070000+00:00</lastModDate>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Toan Bao Tran</creator>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Trung Phuoc Le</creator>
        
        <creator>Nghi C. Tran</creator>
        
        <subject>Clustering algorithm; metagenomic; visualization; disease prediction; mean-shift; personalized medicine; species abundance; bacterial</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Metagenomic data is a novel and valuable source for personalized medicine approaches to improve human health. Data visualization is a crucial technique in data analysis for exploring and finding patterns in data. In particular, metagenomic data resources often have very high dimensionality, so humans face big challenges in understanding them. In this study, we introduce a visualization method based on the Mean-shift algorithm which enables us to observe high-dimensional data via images exhibiting features clustered by the clustering method. These generated synthetic images are then fed into a convolutional neural network to perform disease prediction tasks. The proposed method shows promising results when we evaluate the approach on four metagenomic bacterial species abundance datasets related to four diseases: Liver Cirrhosis, Colorectal Cancer, Obesity, and Type 2 Diabetes.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_7-Improving_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimate the Total Completion Time of the Workload</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110606</link>
        <id>10.14569/IJACSA.2020.0110606</id>
        <doi>10.14569/IJACSA.2020.0110606</doi>
        <lastModDate>2020-06-30T14:58:36.6770000+00:00</lastModDate>
        
        <creator>Muhammad Amjad</creator>
        
        <creator>Waqas Ahmad</creator>
        
        <creator>Zia Ur Rehman</creator>
        
        <creator>Waqar Hussain</creator>
        
        <creator>Syed Badar Ud Duja</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <creator>Usman Ali</creator>
        
        <creator>M. Abdul Qadoos</creator>
        
        <creator>Ammad Khan</creator>
        
        <creator>M. Umar Farooq Alvi</creator>
        
        <subject>Query interactions; estimate time; running time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Business intelligence workloads are required to serve analytical processes. Data warehouses hold very large collections of digital data, which analytical processes must examine within a perplexing workload. The main problem for such a workload is estimating its total completion time, which is required when the workload is executed as a batch of queries. Because queries run in batches, they must be estimated according to an interaction-aware scheme. Database administrators often need to know how much longer a business intelligence workload will take to complete; this question arises when a database administrator must accomplish workloads within an existing time frame. The database system executes mixes of multiple queries concurrently, so we measure the query interactions of a mix rather than follow the practiced approach of considering each query separately. A novel estimation framework is presented to estimate the running time of a workload based on experiment-driven modeling coupled with workload simulation. The framework has two major parts, an offline phase and an online phase. The offline phase collects experimental samplings of mixes containing different query types. Good accuracy in estimating the running time of the workload is demonstrated by evaluation with TPC-H queries on PostgreSQL.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_6-Estimate_the_Total_Completion_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of EIGRP and OSPF Protocols based on Network Convergence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110605</link>
        <id>10.14569/IJACSA.2020.0110605</id>
        <doi>10.14569/IJACSA.2020.0110605</doi>
        <lastModDate>2020-06-30T14:58:36.6600000+00:00</lastModDate>
        
        <creator>Ifeanyi Joseph Okonkwo</creator>
        
        <creator>Ikiomoye Douglas Emmanuel</creator>
        
        <subject>OSPF (Open Shortest Path First); EIGRP (Enhanced Interior Gateway Routing Protocol); routing; protocol; network; convergence; topology; routers; packets; Wireshark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Dynamic routing protocols are among the fastest growing routing protocols in networking technologies because of their characteristics such as high throughput, flexibility, low overhead, scalability, easy configuration, bandwidth, and CPU utilization. However, convergence time is a critical problem in any of these routing protocols. Convergence time describes when the network holds updated, complete, and accurate information. Several studies have investigated EIGRP and OSPF on the internet; however, only a few of these studies have considered link failure and the addition of new links using different network scenarios. This research contributes to this area. This comparative study uses the network simulator GNS3 to simulate different network topologies. The network topologies implemented in this research are star and mesh. The results are validated using Cisco hardware equipment in the laboratory. Wireshark is used to capture and analyze the packets in the networks, which helps in monitoring accurate response times for the various packets. The results obtained from Wireshark suggest that EIGRP has higher performance than the OSPF routing protocol in terms of convergence duration when a link fails or a new link is added to the network. Following this study, EIGRP is recommended over OSPF for most heterogeneous network implementations.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_5-Comparative_Study_of_EIGRP_and_OSPF_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pedestrian Detection and Tracking Method for Robot Equipped with Laser Radar</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110604</link>
        <id>10.14569/IJACSA.2020.0110604</id>
        <doi>10.14569/IJACSA.2020.0110604</doi>
        <lastModDate>2020-06-30T14:58:36.6430000+00:00</lastModDate>
        
        <creator>Zhu Bin</creator>
        
        <creator>Zhang Jian Rong</creator>
        
        <creator>Wang Yan Fang</creator>
        
        <creator>Wu Jin Ping</creator>
        
        <subject>Laser radar; likelihood field model; pedestrian detection; pedestrian tracking; simultaneous location and mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>To detect and track pedestrians in complex indoor backgrounds, a pedestrian detection and tracking method for indoor robots equipped with laser radar is proposed. First, SLAM (Simultaneous Location and Mapping) technology is applied to obtain a 2D grid map of an unknown environment; then, Monte Carlo localization is employed to obtain the posterior pose of the robot in the map. Next, an improved likelihood field background subtraction algorithm is proposed to extract the interesting foreground in a changeable environment, and a hierarchical clustering algorithm combined with an improved leg model is proposed to detect the target pedestrian. Finally, an improved tracking intensity formula is designed to track and follow the target pedestrian. Experimental results in several complex environments show that our method can effectively reduce the impact of confusing scenarios that challenge other algorithms, such as a moving chair, a person suddenly passing by, or the target pedestrian being close to a wall, and can detect, track, and follow pedestrians in real time with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_4-A_Pedestrian_Detection_and_Tracking_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adapting CRISP-DM for Idea Mining: A Data Mining Process for Generating Ideas Using a Textual Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110603</link>
        <id>10.14569/IJACSA.2020.0110603</id>
        <doi>10.14569/IJACSA.2020.0110603</doi>
        <lastModDate>2020-06-30T14:58:36.6300000+00:00</lastModDate>
        
        <creator>Workneh Y. Ayele</creator>
        
        <subject>CRISP-IM; idea generation; idea evaluation; idea mining evaluation; dynamic topic modeling; CRISP-DM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Data mining project managers can benefit from using standard data mining process models. The benefits of using standard process models for data mining, such as the de facto and most popular Cross-Industry Standard Process model for Data Mining (CRISP-DM), are reduced cost and time. Standard models also facilitate knowledge transfer and reuse of best practices, and minimize knowledge requirements. On the other hand, to unlock the potential of ever-growing textual data such as publications, patents, social media data, and documents of various forms, digital innovation is increasingly needed. Furthermore, the introduction of cutting-edge machine learning tools and techniques enables the elicitation of ideas. The processing of unstructured textual data to generate new and useful ideas is referred to as idea mining. Existing literature about idea mining largely overlooks the utilization of standard data mining process models. Therefore, the purpose of this paper is to propose a reusable model to generate ideas by adapting CRISP-DM for Idea Mining (CRISP-IM). The design and development of the CRISP-IM follow the design science approach. The CRISP-IM facilitates idea generation through the use of Dynamic Topic Modeling (DTM), unsupervised machine learning, and subsequent statistical analysis on a dataset of scholarly articles. The adapted CRISP-IM can be used to guide the process of identifying trends using scholarly literature datasets, temporally organized patent datasets, or any other textual dataset of any domain, to elicit ideas. The ex-post evaluation of the CRISP-IM is left for future study.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_3-Adapting_CRISP_DM_for_Idea_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of LoRa ES920LR 920 MHz on the Development Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110602</link>
        <id>10.14569/IJACSA.2020.0110602</id>
        <doi>10.14569/IJACSA.2020.0110602</doi>
        <lastModDate>2020-06-30T14:58:36.5970000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>Coverage; lifetime; low power; lightweight; long range; development board; free space; drone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>This study contains a test of the LoRa ES920LR under obstructed conditions and its comparison with Free Space Path Loss (FSPL) measured using a drone, i.e., without obstacles. The ES920LR offers 920 MHz frequency channel settings, 125 kHz bandwidth, SF 7-12, and 13 dBm output power, and comes with a sleep-mode operation configured via a command prompt. The development board used is a Leafony board, a small board compatible with the Arduino IDE that uses an ATmega328P microcontroller. The board is mounted in tiles with complete facilities, e.g., a power supply board, four different sensor boards, MCU boards, and communication boards such as WiFi and Bluetooth; in this article the LoRa ES920LR is used on the Leafony board, since LoRa provides long range (km) and low power, and expansion boards can be developed. Expansion of Leafony boards is expected to reduce the power consumption of the sensor node while preserving lifetime, small size, and light weight. Furthermore, this approach is used to optimize LoRa coverage and LoRa lifetime. The FSPL reception results for the LoRa ES920LR show a Power Receiver (Pr) of 30 dB at a distance of 1 meter, 85 dB at 500 meters, 95 dB at 1500 meters, and 100 dB at 2.5 km. Attenuation is caused by distance, although not significantly; other factors are obstacles and bad weather (rain, snow).</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_2-Performance_Evaluation_of_LoRa.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Real-World Load Patterns for Benchmarking in Clouds and Clusters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110601</link>
        <id>10.14569/IJACSA.2020.0110601</id>
        <doi>10.14569/IJACSA.2020.0110601</doi>
        <lastModDate>2020-06-30T14:58:36.5500000+00:00</lastModDate>
        
        <creator>Kashifuddin Qazi</creator>
        
        <subject>Cloud computing; workload generator; cluster computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(6), 2020</description>
        <description>Cloud computing has currently permeated all walks of life. It has proven extremely useful for organizations and individual users to save costs by leasing compute resources that they need. This has led to an exponential growth in cloud computing based research and development. A substantial number of frameworks, approaches and techniques are being proposed to enhance various aspects of clouds, and add new features. One of the constant concerns in this scenario is creating a testbed that successfully reflects a real-world cloud datacenter. It is vital to simulate realistic, repeatable, standardized CPU and memory workloads to compare and evaluate the impact of the different approaches in a cloud environment. This paper introduces Cloudy, which is an open-source workload generator that can be used within cloud instances, Virtual Machines (VM), containers, or local hosts. Cloudy utilizes resource usage traces of machines from Google and Alibaba clusters to simulate up to 16000 different, real-world CPU and memory load patterns. The tool also provides a variety of machine metrics for each run, that can be used to evaluate and compare the performance of the VM, container or host. Additionally, it includes a web-based visualization component that offers a number of real-time statistics, as well as overall statistics of the workload such as seasonal trends, and autocorrelation. These statistics can be used to further analyze the real-world traces, and enhance the understanding of workloads in the cloud.</description>
        <description>http://thesai.org/Downloads/Volume11No6/Paper_1-Modeling_Real_World_Load_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection and Performance Improvement of Malware Detection System using Cuckoo Search Optimization and Rough Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110587</link>
        <id>10.14569/IJACSA.2020.0110587</id>
        <doi>10.14569/IJACSA.2020.0110587</doi>
        <lastModDate>2020-06-03T11:08:58.4930000+00:00</lastModDate>
        
        <creator>Ravi Kiran Varma P</creator>
        
        <creator>PLN Raju</creator>
        
        <creator>K V Subba Raju</creator>
        
        <creator>Akhila Kalidindi</creator>
        
        <subject>Cuckoo search; rough sets; feature optimization; malware analysis; malware detection; feature reduction; ClaMP dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The proliferation of malware is a severe threat to host and network-based systems. Design and evaluation of efficient malware detection methods is the need of the hour. Windows Portable Executable (PE) files are a primary source of windows based malware. Static malware detection involves an analysis of several PE header file features and can be done with the help of machine learning tools. In the design of efficient machine learning models for malware detection, feature reduction plays a crucial role. Rough set dependency degree is a proven tool for feature reduction. However, quick reduct using rough sets is an NP-hard problem. This paper proposes a hybrid Rough Set Feature Selection using Cuckoo Search Optimization, RSFSCSO, in finding the best collection of reduced features for malware detection. Random forest classifier is used to evaluate the proposed algorithm; the analysis of results proves that the proposed method is highly efficient.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_87-Feature_Selection_and_Performance_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of a Model Maximizing the Quality Value of Selected Software Components in a Library</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110586</link>
        <id>10.14569/IJACSA.2020.0110586</id>
        <doi>10.14569/IJACSA.2020.0110586</doi>
        <lastModDate>2020-06-03T11:08:58.4170000+00:00</lastModDate>
        
        <creator>Koffi Kouakou Ive Arsene</creator>
        
        <creator>Samassi Adama</creator>
        
        <creator>Kouam&#233; Appoh</creator>
        
        <subject>Software component quality; reuse; reusable components; reusability; mathematical model; simulation; validation; maintenance effort</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Reusable software components are selected from libraries by developers and integrated into existing software systems to improve their quality. In this article, we evaluate a mathematical model based on an approach that optimizes the selection of software components according to their quality. This is a linear programming model with constraints. It takes into account the quality characteristics of the components based on the ISO/IEC 9126 standard, the financial cost, and the adaptation time. Experiments with the ILOG CPLEX Studio optimization tool gave satisfactory results.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_86-Evaluation_of_a_Model_Maximizing_the_Quality_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Methods to Detect XSS Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110585</link>
        <id>10.14569/IJACSA.2020.0110585</id>
        <doi>10.14569/IJACSA.2020.0110585</doi>
        <lastModDate>2020-06-01T11:24:33.3670000+00:00</lastModDate>
        
        <creator>PMD Nagarjun</creator>
        
        <creator>Shaik Shakeel Ahamad</creator>
        
        <subject>Cross-site scripting; machine learning; ensemble learning; random forest; bagging; boosting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Machine learning techniques are gaining popularity and giving better results in detecting web application attacks. Cross-site scripting is an injection attack widespread in web applications. Existing solutions such as filter-based, dynamic analysis, and static analysis approaches are not effective in detecting unknown XSS attacks, whereas machine learning methods can detect them. Existing research on detecting XSS attacks with machine learning methods has issues such as single base classifiers, small datasets, and unbalanced datasets. In this paper, supervised ensemble learning techniques are trained on a large, labeled, and balanced dataset to detect XSS attacks. The ensemble methods used in this research are random forest classification, AdaBoost, bagging with SVM, Extra-Trees, gradient boosting, and histogram-based gradient boosting. The performance of these ensemble learning algorithms is analyzed and compared using the confusion matrix.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_85-Ensemble_Methods_to_Detect_XSS_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Contact Detection, Size Recognization and Grasping State of an Object using Soft Elastomer Gripper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110584</link>
        <id>10.14569/IJACSA.2020.0110584</id>
        <doi>10.14569/IJACSA.2020.0110584</doi>
        <lastModDate>2020-05-30T15:41:51.1000000+00:00</lastModDate>
        
        <creator>Kazi Abaul Jamil</creator>
        
        <subject>Grasping process; contact feedback; size recognization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The handling of sensitive or fragile objects is critical to preserving their quality. In this domain, soft robotics has gained a lot of attention. However, detecting contact and grasping behavior is still a challenging task due to the non-linear behavior of soft grippers. Moreover, exact real-time contact feedback is crucial for regulating grasping behavior. To improve contact detection accuracy, a gradient-based algorithm is proposed with feedback from a simple resistive flex sensor and a pressure sensor. For that purpose, firstly, the resistive flex sensor is embedded into the gripper to obtain the real-time position of the gripper finger. Secondly, solenoid valves and pressure sensors are used to control the pneumatic pressure from the pump, and finally, a closed-loop control system is developed for controlling the grasping process. The proposed contact detection algorithm can provide contact feedback with an accuracy of &#177;3 mm, which is applied to perform size recognization of sphere-shaped objects. A real-time experimental setup has been developed which can successfully pick and place fruits and vegetables. The key benefits of the proposed algorithm are lower complexity and better accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_84-Evaluating_Contact_Detection_Size_Recognization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2D/3D Registration with Rigid Alignment of the Pelvic Bone for Assisting in Total Hip Arthroplasty Preoperative Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110583</link>
        <id>10.14569/IJACSA.2020.0110583</id>
        <doi>10.14569/IJACSA.2020.0110583</doi>
        <lastModDate>2020-05-30T15:41:51.0700000+00:00</lastModDate>
        
        <creator>Christian A. Suca Velando</creator>
        
        <creator>Eveling G. Castro Gutierrez</creator>
        
        <subject>Digitally Reconstructed Radiograph (DRR); intensity registration; rigid transformation; 3D models; Total Hip Arthroplasty (THA); preoperative planning; Computed Tomography (CT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In Total Hip Arthroplasty, preoperative planning requires the definition of medical parameters that help during the intraoperative process; these parameters must be set with accuracy to fit an implant to the patient. Currently, preoperative planning is carried out with different methods: either a prosthesis template (2D) is projected on X-ray images, or a computed tomography (CT) is used to fit a 3D prosthesis. We propose an alternative that develops preoperative planning through 3D models reconstructed from 2D X-ray images, which provides the same precise information as a CT. In this paper we test the framework of the authors Bertelsen A. and Borro D., an ITK-based framework for 2D-3D registration between X-ray images and a computed tomography. We follow their approach, using two fixed images (reference images) and one moving image (the image to transform) to perform an intensity registration. This method uses a ray casting interpolator to generate a Digitally Reconstructed Radiograph (DRR), or virtual X-ray. We also apply a normalized gradient correlation to compare the patient X-ray image and the virtual X-ray image, optimized by a nonlinear conjugate gradient; both metric and optimizer serve to update the rigid transformation parameters, which include an additional scale parameter. This produced better results in terms of the Hausdorff distance between the two models (reference volume and transformed volumetric template): 0.01855 mm for the alignment of the relocated reference volume and 15.5915 mm for the alignment of the deformed and relocated reference volume.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_83-2D3D_Registration_with_Rigid_Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Scheme to Improving Security of Data using Graph Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110582</link>
        <id>10.14569/IJACSA.2020.0110582</id>
        <doi>10.14569/IJACSA.2020.0110582</doi>
        <lastModDate>2020-05-30T15:41:51.0530000+00:00</lastModDate>
        
        <creator>Khalid Bekkaoui</creator>
        
        <creator>Soumia Ziti</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Cryptography; encryption; security; graph theory; Hamiltonian circuit; adjacency matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>With the incredible growth in the use of the internet and other new telecommunication technologies, cryptography has become an absolute necessity for securing communications between two or more entities, particularly in the case of transferring confidential data. In the literature, many encryption systems have been proposed against attack threats. These schemes should normally address such concerns by ensuring the confidentiality, integrity and authenticity of transmitted data. However, several of them have shown weaknesses in terms of security and complexity. Hence the need for a robust and powerful non-standard encryption algorithm to prevent any traditional opportunity to sniff data. In this work, we propose a new encryption system that meets these security requirements. The scheme is based essentially on principles of graph theory, which are very promising for plain-text representation. Our approach proposes another use of the concepts of the Hamiltonian circuit and the adjacency matrix, using a shared key and a pseudo-random generator. Analysis of the experimental results, which were very promising, showed the technique to be both efficient and robust.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_82-A_Robust_Scheme_to_Improving_Security_of_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A High Performance System for the Diagnosis of Headache via Hybrid Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110580</link>
        <id>10.14569/IJACSA.2020.0110580</id>
        <doi>10.14569/IJACSA.2020.0110580</doi>
        <lastModDate>2020-05-30T15:41:51.0230000+00:00</lastModDate>
        
        <creator>Ahmad Qawasmeh</creator>
        
        <creator>Noor Alhusan</creator>
        
        <creator>Feras Hanandeh</creator>
        
        <creator>Maram Al-Atiyat</creator>
        
        <subject>High performance computing; Clinical Decision Support System (CDSS); machine learning; primary and secondary headache; performance analysis and improvement; headache diagnosis; open medical application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Headache has been a major concern for patients, medical doctors, clinics and hospitals over the years due to several factors. Headache is categorized into two major types: (1) primary headache, which can be tension, cluster or migraine, and (2) secondary headache, where further medical evaluation must be considered. This work presents a high-performance Headache Prediction Support System (HPSS). HPSS provides preliminary guidance for patients, medical students and even clinicians for initial headache diagnosis. The mechanism of HPSS is based on a hybrid machine learning model. First, 19 selected attributes (questions) were chosen carefully by medical specialists according to the most recent International Classification of Headache Disorders (ICHD-3) criteria. Then, a questionnaire was prepared to confidentially collect data from real patients under the supervision of specialized clinicians at different hospitals in Jordan. Later, a hybrid solution consisting of clustering and classification was employed to confirm the diagnosis results obtained by clinicians and to predict the headache type for new patients, respectively. Twenty-six (26) different classification algorithms were applied to 614 patients’ records. The highest accuracy was obtained by integrating K-Means and Random Forest, with a migraine accuracy of 99.1% and an overall accuracy of 93%. Our web-based interface was developed over the hybrid model to enable patients and clinicians to use our system in the most convenient way. This work provides a comparative study of different headache diagnosis systems via 9 different performance metrics. Our hybrid model shows great potential for highly accurate headache prediction. HPSS was used by different patients, medical students, and clinicians with very positive feedback. This work also evaluates and ranks the impact of headache symptoms on headache diagnosis from a machine learning perspective, which can help medical experts with further headache criteria improvements.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_80-A_High_Performance_System_for_the_Diagnosis_of_Headache.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Cache Architecture for Table Lookups in an Internet Router</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110581</link>
        <id>10.14569/IJACSA.2020.0110581</id>
        <doi>10.14569/IJACSA.2020.0110581</doi>
        <lastModDate>2020-05-30T15:41:51.0230000+00:00</lastModDate>
        
        <creator>Hayato Yamaki</creator>
        
        <subject>Router architecture; table lookup; Packet Processing Cache (PPC); Ternary Content Addressable Memory (TCAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Table lookup is the most important operation in routers from the aspects of both packet processing throughput and power consumption. To realize table lookup at high throughput with low energy, the Packet Processing Cache (PPC) has been proposed. PPC stores table lookup results in a small SRAM (static random access memory) per flow and reuses the cached results to process subsequent packets of the same flow. Because SRAM is accessed faster and with significantly lower energy than TCAM (Ternary Content Addressable Memory), which is conventionally used as the memory for storing the tables in routers, PPC can process packets at higher throughput with lower power consumption when the table lookup results of the packets are in PPC. Although PPC performance depends on the PPC hit/miss rates, recent PPCs still show high miss rates and cannot achieve sufficient performance. In this paper, an efficient cache architecture constructed from two different techniques is proposed to further improve the PPC miss rate. The simulation results indicated that the combination of the two techniques achieved 1.72x higher throughput with 41.4% lower energy consumption in comparison to the conventional PPC architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_81-Efficient_Cache_Architecture_for_Table_Lookups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VIPEye: Architecture and Prototype Implementation of Autonomous Mobility for Visually Impaired People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110579</link>
        <id>10.14569/IJACSA.2020.0110579</id>
        <doi>10.14569/IJACSA.2020.0110579</doi>
        <lastModDate>2020-05-30T15:41:50.9770000+00:00</lastModDate>
        
        <creator>Waqas Nawaz</creator>
        
        <creator>Khalid Bashir</creator>
        
        <creator>Kifayat Ullah Khan</creator>
        
        <creator>Muhammad Anas</creator>
        
        <creator>Hafiz Obaid</creator>
        
        <creator>Mughees Nadeem</creator>
        
        <subject>Safe and Interesting Path (SIP); Visually Impaired People (VIP); autonomous; mobility; computer vision; path guidance; VIPEye prototype; navigation; graph mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Comfortable movement of a visually impaired person in an unknown environment is a non-trivial task due to complete or partial short-sightedness, the absence of relevant information, and the unavailability of assistance from a non-impaired person. To fulfill the visual needs of an impaired person towards autonomous navigation, we utilize the concepts of graph mining and computer vision to produce a viable path guidance solution. We present an architectural perspective and a prototype implementation to determine a safe &amp; interesting path (SIP) from an arbitrary source to a desired destination with intermediate waypoints, and to guide the visually impaired person along that path through voice commands. We also identify and highlight various challenging issues that came up while developing a prototype solution to this problem, i.e., VIPEye, An Eye for Visually Impaired People, in terms of task difficulty and the availability of required resources or information. Moreover, this study provides candidate research directions for researchers, developers, and practitioners in the development of autonomous mobility services for visually impaired people.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_79-VIPEye_Architecture_and_Prototype_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel QR Factorization using Givens Rotations in MPI-CUDA for Multi-GPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110578</link>
        <id>10.14569/IJACSA.2020.0110578</id>
        <doi>10.14569/IJACSA.2020.0110578</doi>
        <lastModDate>2020-05-30T15:41:50.9600000+00:00</lastModDate>
        
        <creator>Miguel Tapia-Romero</creator>
        
        <creator>Amilcar Meneses-Viveros</creator>
        
        <creator>Erika Hern&#225;ndez-Rubio</creator>
        
        <subject>Givens factorization; CUDA; heterogeneous programming; scalable parallelism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Modern supercomputers incorporate multi-core processors and graphics processing units. Applications running on these computers take advantage of these technologies with scalable programs that work with multicores and accelerators such as graphics processing units. QR factorization is essential for several numerical tasks, such as solving linear equations, computing an inverse matrix, or computing a diagonal matrix, to name a few. There are several factorization algorithms, such as LU, Cholesky, Givens and Householder, among others. The efficient parallel implementation of each factorization algorithm will depend on the structure of the data and the type of parallel architecture used. A common strategy in parallel programming is to break a problem into subproblems and solve them on different processing units. This is very useful when dealing with complex problems or when the data is too large to fit in the available memory. However, it is not clear how data partitioning affects subtask performance when mapping to processing units, specifically to graphics processing units. This work explores the partitioning of large symmetric matrix data for QR factorization using Givens rotations and presents its parallel implementation using MPI and CUDA.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_78-Parallel_QR_Factorization_using_Givens_Rotations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifying Feature Importance for Detecting Depression using Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110577</link>
        <id>10.14569/IJACSA.2020.0110577</id>
        <doi>10.14569/IJACSA.2020.0110577</doi>
        <lastModDate>2020-05-30T15:41:50.9300000+00:00</lastModDate>
        
        <creator>Hatoon AlSagri</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <subject>Machine learning; random forest; feature selection; feature importance; depression; emotions; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Feature selection based on importance is a fundamental step in machine learning models because it serves as a vital technique to orient the use of variables toward what is most efficient and effective for a given machine learning model. In this study, an explainable machine learning model based on random forest is built to address the problem of identifying the depression level of Twitter users. This model achieves transparency by calculating its feature importance. There are several techniques to quantify the importance of features. However, in this study, random forest is used both as a classifier, which outperforms many classifiers such as decision trees, and as a method for weighting the input features according to their importance. The importance of features is measured using different techniques, including random forest, and the results of these techniques are compared. Furthermore, feature importance uses the concept of weighting the input variables inside a complete system for recommending a solution for depressed persons. The experimental results confirm the superiority of random forest over other classifiers using three different methods for measuring feature importance. The accuracy of random forest classification reached 84.7%, and using feature importance increased the classifier accuracy to 84.9%.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_77-Quantifying_Feature_Importance_for_Detecting_Depression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Model for Identifying the Arabic Language Learners based on Gated Recurrent Unit Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110576</link>
        <id>10.14569/IJACSA.2020.0110576</id>
        <doi>10.14569/IJACSA.2020.0110576</doi>
        <lastModDate>2020-05-30T15:41:50.9130000+00:00</lastModDate>
        
        <creator>Seifeddine Mechti</creator>
        
        <creator>Roobaea Alroobaea</creator>
        
        <creator>Moez Krichen</creator>
        
        <creator>Saeed Rubaiee</creator>
        
        <creator>Anas Ahmed</creator>
        
        <subject>Arabic; Native Language Identification (NLI); deep learning; Gated Recurrent Unit Network (GRUN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>This paper focuses on identifying Arabic language learners. The main contribution of the proposed method is the use of a deep learning model based on the Gated Recurrent Unit Network (GRUN). The proposed model explores a multitude of stylistic features, such as syntactic, lexical and n-gram features. To the best of our knowledge, the obtained results outperform those obtained by the best existing systems. Our accuracy is the best compared with the pioneering systems (45% vs. 41%), considering the limited data and the unavailability of accurate tools dedicated to the Arabic language.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_76-Deep_Learning_Model_for_Identifying_the_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thinging-Oriented Modeling of Unmanned Aerial Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110575</link>
        <id>10.14569/IJACSA.2020.0110575</id>
        <doi>10.14569/IJACSA.2020.0110575</doi>
        <lastModDate>2020-05-30T15:41:50.8970000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <creator>Jassim Al-Fadhli</creator>
        
        <subject>Unmanned aerial vehicle (UAV); drone; conceptual modeling; diagrammatic representation; system architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In recent years, there has been a dramatic increase in both practical and research applications of unmanned aerial vehicles (UAVs). According to the literature, there is a need in this area to develop a more refined model of UAV system architecture, in other words, a conceptual model that defines the system’s structure and behavior. The existing models are mostly partial and do not account for all of the important dynamic attributes. Progress in this area could reduce ambiguity and increase reliability in the design of such systems. This paper aims to advance the modeling of UAV system architecture by adopting a conceptual model called a thinging (abstract) machine, in which all of the UAV’s software and hardware components are viewed in terms of the flow of things and five generic operations. We apply this model to a real case study of a drone. The results, an integrated conceptual representation of the drone, support the viability of this approach.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_75-Thinging_Oriented_Modeling_of_Unmanned_Aerial_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Recurrent Neural Network Model for English to Yor&#249;b&#225; Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110574</link>
        <id>10.14569/IJACSA.2020.0110574</id>
        <doi>10.14569/IJACSA.2020.0110574</doi>
        <lastModDate>2020-05-30T15:41:50.8830000+00:00</lastModDate>
        
        <creator>Adebimpe Esan</creator>
        
        <creator>John Oladosu</creator>
        
        <creator>Christopher Oyeleye</creator>
        
        <creator>Ibrahim Adeyanju</creator>
        
        <creator>Olatayo Olaniyan</creator>
        
        <creator>Nnamdi Okomba</creator>
        
        <creator>Bolaji Omodunbi</creator>
        
        <creator>Opeyemi Adanigbo</creator>
        
        <subject>Recurrent; tokenizer; corpus; translation; evaluation; correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>This research developed a recurrent neural network model for English to Yoruba machine translation. A parallel corpus was obtained from the English and Yoruba Bible corpus. The developed model was tested and evaluated using both manual and automatic evaluation techniques. Results from manual evaluation by ten human evaluators show that the system is adequate and fluent. Results from automatic evaluation also show that the developed model produces decent, good translations with higher accuracy, as it correlates better with human judgment.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_74-Development_of_A_Recurrent_Neural_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urbanization Change Analysis based on SVM and RF Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110573</link>
        <id>10.14569/IJACSA.2020.0110573</id>
        <doi>10.14569/IJACSA.2020.0110573</doi>
        <lastModDate>2020-05-30T15:41:50.8500000+00:00</lastModDate>
        
        <creator>Farhad Hassan</creator>
        
        <creator>Tauqeer Safdar</creator>
        
        <creator>Ghulam Irtaza</creator>
        
        <creator>Aman Ullah Khan</creator>
        
        <creator>Syed Muhammad Husnain Kazmi</creator>
        
        <creator>Farah Murtaza</creator>
        
        <subject>Random Forest (RF); Support Vector Machine (SVM); GEE; classification; machine learning classifier; multi-temporal change analysis; urban change analysis; LANDSAT8; Kappa coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>To maintain sustainability in development, the yearly rate of change of the land is measured through land cover classified maps, which hold data regarded as an influential factor for environmental management and urbanization. This paper measures the change rate, which helps city management to define new policies and implement the best one to maintain natural resources. Machine learning algorithms are utilized to produce the most acknowledged land cover maps on the reliable cloud-based GEE platform using LANDSAT8 satellite imagery. For classification, the Random Forest (RF) and Support Vector Machine (SVM) algorithms were used. This investigation also found that the Support Vector Machine (SVM) classifier achieved better overall accuracy and Kappa coefficient than the Random Forest (RF) classifier when the training sample for both is the same.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_73-Urbanization_Change_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Air Quality Monitoring Device for Vehicular Ad Hoc Networks: EnvioDev</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110572</link>
        <id>10.14569/IJACSA.2020.0110572</id>
        <doi>10.14569/IJACSA.2020.0110572</doi>
        <lastModDate>2020-05-30T15:41:50.8370000+00:00</lastModDate>
        
        <creator>Josip Balen</creator>
        
        <creator>Srdan Ljepic</creator>
        
        <creator>Kristijan Lenac</creator>
        
        <creator>Sadko Mandzuka</creator>
        
        <subject>Air pollution; air quality; Arduino; sensors; SUMO; VANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Urban air pollution has become a major concern for numerous densely populated cities globally, since poor air quality may cause various health problems. The first crucial step towards solving this important problem is to identify the most critical areas, where air pollution exceeds the allowed limit. Nowadays, air pollution is monitored by various stationary measurement systems that are expensive, large, consume a large amount of energy, and gather data with low spatial resolution. This paper presents EnvioDev, a mobile air quality and traffic conditions measurement device for Vehicular Ad hoc Networks (VANETs) that can be used on any type of vehicle. EnvioDev was tested in a real-world urban environment, measuring CO, CH4 and LPG concentrations, as well as air temperature and humidity, in order to create a city pollution map, and the results are presented in the paper. Moreover, in order to determine how many EnvioDev devices are required to obtain a close to real-time air quality map of an urban area, three experiments with the Simulation of Urban Mobility (SUMO) simulator were conducted. In the experiments, an urban city map was divided into five zones and data aggregation frequencies were varied during different traffic load periods in order to study the number of required vehicles with the EnvioDev measurement device. The obtained results show that increasing the data aggregation frequency increases the number of required vehicles, and that this number depends on the size and topology of the testing area.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_72-Air_Quality_Monitoring_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice Scrambling Algorithm based on 3D Chaotic Map System (VSA3DCS) to Encrypt Audio Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110571</link>
        <id>10.14569/IJACSA.2020.0110571</id>
        <doi>10.14569/IJACSA.2020.0110571</doi>
        <lastModDate>2020-05-30T15:41:50.8200000+00:00</lastModDate>
        
        <creator>Osama M. Abu Zaid</creator>
        
        <subject>Lorenz chaotic map; Chen&#39;s chaotic map; Arnold cat map; scrambling algorithms; audio encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Here, a proposed voice scrambling algorithm based on one of two 3D chaotic map systems (VSA3DCS) is presented, discussed, and applied to an audio signal file. The two 3D chaotic map systems, either of which can be used to build VSA3DCS, are Chen&#39;s chaotic map system and the Lorenz chaotic map system. An Arnold cat map-based scrambling algorithm is also applied to the same sample of audio signals. These scrambling algorithms are used to encrypt the audio files by shuffling the positions of signals under different conditions, treating the audio file as one block or two blocks. Amplitude values of the audio signals over time are recorded and plotted for the original file versus the encrypted files produced by applying VSA3DCS using Chen&#39;s map, VSA3DCS using the Lorenz map, and the Arnold-based algorithm. The spectrogram frequencies of the audio signals over time are plotted for the original file versus the encrypted files for all algorithms. The histograms of the original file and the encrypted audio signals are also recorded and plotted. A comparative analysis is presented using several measuring factors for both the encryption and decryption processes, such as the time of encryption and decryption, the correlation coefficient between samples of the original and encrypted signals, the Spectral Distortion (SD) measure, the Log-Likelihood Ratio (LLR) measure, and a key sensitivity measuring factor. The results of several experimental and comparative analyses show that the VSA3DCS algorithm using Chen&#39;s or the Lorenz map provides an effective and safe solution to voice signal encryption, and that VSA3DCS outperforms the Arnold-based algorithm in all results and all cases.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_71-Voice_Scrambling_Algorithm_based_on_3D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wind Power Integration with Smart Grid and Storage System: Prospects and Limitations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110570</link>
        <id>10.14569/IJACSA.2020.0110570</id>
        <doi>10.14569/IJACSA.2020.0110570</doi>
        <lastModDate>2020-05-30T15:41:50.7900000+00:00</lastModDate>
        
        <creator>Saad Bin Abul Kashem</creator>
        
        <creator>Muhammad E. H. Chowdhury</creator>
        
        <creator>Amith Khandakar</creator>
        
        <creator>Jubaer Ahmed</creator>
        
        <creator>Azad Ashraf</creator>
        
        <creator>Nushrat Shabrin</creator>
        
        <subject>Wind power system; wind turbines; energy storage system; microgrids; national grids</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Wind power generation is playing a pivotal role in the adoption of renewable energy sources in many countries. Over the past decades, we have seen steady growth in wind power generation throughout the world. This article aims to summarize the operation, conversion and integration of wind power with the conventional grid and local microgrids so that it can serve as a one-stop reference for early-career researchers. The study is carried out primarily based on the horizontal-axis wind turbine and the vertical-axis wind turbine. Afterward, the types and methods of storing the generated electric power are discussed elaborately. On top of that, this paper summarizes the ways of connecting wind farms to the conventional grid and microgrids to portray a clear picture of existing technologies. Section-wise, the prospects and limitations are discussed and opportunities for future technologies are highlighted. It is envisaged that this paper will help researchers and engineering professionals to grasp the fundamental concepts related to wind power generation concisely and effectively.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_70-Wind_Power_Integration_with_Smart_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Fuzzy c-Means for Weighting Different Fuzzy Cognitive Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110569</link>
        <id>10.14569/IJACSA.2020.0110569</id>
        <doi>10.14569/IJACSA.2020.0110569</doi>
        <lastModDate>2020-05-30T15:41:50.7730000+00:00</lastModDate>
        
        <creator>Mamoon Obiedat</creator>
        
        <creator>Ali Al-yousef</creator>
        
        <creator>Ahmad Khasawneh</creator>
        
        <creator>Nabhan Hamadneh</creator>
        
        <creator>Ashraf Aljammal</creator>
        
        <subject>Complex problems; uncertainty; fuzzy cognitive map (FCM); fuzzy c-Means; FCM weight values</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Currently, complex socio-ecological problems have become increasingly prevalent, and uncertainty often dominates these domains. In order to better represent these problems, there is an urgent need to engage a wide range of different stakeholders&#39; perspectives, regardless of their levels of expertise and knowledge. These perspectives should then be combined in an appropriate manner for a comprehensive and reasonable problem representation. The fuzzy cognitive map (FCM) has proven to be a powerful and useful soft computing approach for addressing and representing such problem domains. With the FCM approach, the relevant stakeholders can represent their perspectives in the form of an FCM system. Normally, relevant stakeholders have different levels of knowledge, and hence produce different representations (FCMs). Therefore, these FCMs should be weighted appropriately before the combination process. This paper uses the fuzzy c-means clustering technique to assign different weights to different FCMs according to their importance in representing the problem. First, fuzzy c-means is used to compute the membership values of the FCMs&#39; belonging to the selected clusters, based on the FCM similarities that show how convergent and consistent they are. According to these membership values, the clusters&#39; importance values are calculated: a cluster that receives high membership values from all FCMs is a cluster with a high importance value, and vice versa. Next, the importance values for the FCMs are derived from the importance values of the clusters by looking at how much the FCMs&#39; memberships contribute to the clusters. Finally, the FCMs&#39; importance values are used to assign weight values to these FCMs, which are used when they are combined. The suitability of the proposed method is investigated using a real dataset that includes an appropriate number of FCMs collected from different stakeholders.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_69-Using_Fuzzy_c_Means_for_Weighting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Oscillation Preventing Closed-Loop Controllers via Genetic Algorithm for Biped Walking on Flat and Inclined Surfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110568</link>
        <id>10.14569/IJACSA.2020.0110568</id>
        <doi>10.14569/IJACSA.2020.0110568</doi>
        <lastModDate>2020-05-30T15:41:50.7570000+00:00</lastModDate>
        
        <creator>Sabri Yilmaz</creator>
        
        <creator>Metin Gokasan</creator>
        
        <creator>Seta Bogosyan</creator>
        
        <subject>Humanoid robot; biped walking; Model Predictive Control (MPC); Genetic Algorithm (GA); trajectory planning; Zero Moment Point (ZMP); linear inverted pendulum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In this study, a closed-loop controller is designed to overcome the dynamical insufficiency of the 3D Linear Inverted Pendulum Model (LIPM) via the Genetic Algorithm (GA). The main idea is to keep using the 3D LIPM, together with a closed-loop controller, because of its ease of modeling. While suppressing the dynamical flaws, only the legs are used; in other words, a robot without any upper-body elements is used, yielding a more modular robot. For this purpose, a biped is modeled with the 3D LIPM, one of the most popular modeling methods for humanoid robots, for its ease of modeling and fast calculations during trajectory planning. After obtaining the simple model, Model Predictive Control (MPC) is applied to the 3D LIPM to find reference trajectories for the biped while satisfying the Zero Moment Point (ZMP) criterion. The reference trajectories found were applied to the full dynamical model in Matlab Simulink and to the real biped in the laboratory at Istanbul Technical University. From the simulation results on flat and inclined surfaces and the real-time experiments on a flat surface, some dynamical flaws are observed due to the simple modeling. To overcome these flaws, a Proportional-Integral (PI) controller is designed, and the optimal values of the controller gains are found by the GA. The results assert that the designed controller can overcome the observed flaws and makes the biped&#39;s motion more stable, smoother, and free of steady-state error.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_68-Oscillation_Preventing_Closed_Loop_Controllers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Churn Prediction Model and Identifying Features to Increase Customer Retention based on User Generated Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110567</link>
        <id>10.14569/IJACSA.2020.0110567</id>
        <doi>10.14569/IJACSA.2020.0110567</doi>
        <lastModDate>2020-05-30T15:41:50.7400000+00:00</lastModDate>
        
        <creator>Essam Abou el Kassem</creator>
        
        <creator>Shereen Ali Hussein</creator>
        
        <creator>Alaa Mostafa Abdelrahman</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <subject>Customer churn; telecom sector; churn prediction; sentiment analysis; machine learning; customer retention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Customer churn is a problem for most companies because it affects a company&#39;s revenues when a customer switches from one service provider to another in the telecom sector. To solve this problem, we take two main approaches: the first is identifying the main factors that affect customer churn; the second is detecting the customers that have a high probability of churning by analyzing social media. For the first approach, we build a dataset through practical questionnaires and analyze it using machine learning algorithms such as Deep Learning, Logistic Regression, and Na&#239;ve Bayes. The second approach is a customer churn prediction model that analyzes customers&#39; opinions through their user-generated content (UGC), such as comments, posts, messages, and product or service reviews. To analyze the UGC, we used sentiment analysis to find the text polarity (negative/positive). The results show that the algorithms used had the same accuracy but differed in the ranking of attributes according to their weights in the decision.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_67-Customer_Churn_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Gamification Experience and Virtual Reality in Teaching Astronomy in Basic Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110566</link>
        <id>10.14569/IJACSA.2020.0110566</id>
        <doi>10.14569/IJACSA.2020.0110566</doi>
        <lastModDate>2020-05-30T15:41:50.7100000+00:00</lastModDate>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Olha Sharhorodska</creator>
        
        <creator>Luis Jim&#233;nez-Gonz&#225;les</creator>
        
        <creator>Robert Arce-Apaza</creator>
        
        <subject>Gamification; game-based learning; reward system; student motivation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Regardless of the country, there is a trend: the world of school and the modern world are two different poles. Young people see school as boring compared to the entertainment of today&#39;s technology. Most students prefer to play or surf the internet rather than study. Gamification is projected as a methodological practice that aims to turn classrooms into playful immersion scenarios, using participatory strategies with the incorporation of electronic devices. This article shows the results obtained by applying gamification techniques in a research project aimed at supporting astronomy learning for basic education students. When using the app, the student must overcome challenges to earn different achievements and rewards. Notable results include the student&#39;s motivation during the learning process and the perception of satisfaction with the personal achievements attained.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_66-A_Gamification_Experience_and_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Process Model Collection with Structural Variants and Evaluations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110565</link>
        <id>10.14569/IJACSA.2020.0110565</id>
        <doi>10.14569/IJACSA.2020.0110565</doi>
        <lastModDate>2020-05-30T15:41:50.6930000+00:00</lastModDate>
        
        <creator>Rimsha Anam</creator>
        
        <creator>Tahir Muhammad</creator>
        
        <subject>Business process modeling; process model collection; Process Model Matching (PMM); structural variants</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Today, in the era of the latest technologies, Business Process Management Systems (BPMS) have allowed organizations to build process model repositories, which help to maintain the flow of operations in the form of various process models. Business process models are virtual models that can imitate the actual activities of an organization. Searching for semantically similar activities between pairs of process models in a repository is known as Process Model Matching (PMM). Over the past few years, PMM has been gaining momentum due to its wide range of applications, such as integration of process models, process model clone detection, and process model knowledge discovery. Different types of PMM techniques have been applied to available process model repositories, but these repositories contained a limited number of process models. Another notable aspect of PMM is that the existing techniques have not achieved the desired results, which calls into question the effectiveness of process model repositories. To address this problem, the authors of this study have developed a substantial, diverse, and carefully curated process model collection. This process model collection is compared with the existing SAP collection to highlight its significance and superiority. Furthermore, the proposed process model collection represents structural variations of example process models, which are governed by a defined set of rules. To reflect the structural variations between process models of our collection, existing structural similarity approaches such as structural metrics and graph edit distance were applied using a custom-developed tool. Our proposed process model collection is freely available to the research community and can be used to build new PMM techniques and to assess existing ones.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_65-A_Process_Model_Collection_with_Structural_Variants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Forest Fire Detection System using Wireless Sensor Network and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110564</link>
        <id>10.14569/IJACSA.2020.0110564</id>
        <doi>10.14569/IJACSA.2020.0110564</doi>
        <lastModDate>2020-05-30T15:41:50.6800000+00:00</lastModDate>
        
        <creator>Wiame Benzekri</creator>
        
        <creator>Ali El Moussati</creator>
        
        <creator>Omar Moussaoui</creator>
        
        <creator>Mohammed Berrajaa</creator>
        
        <subject>Forest fire detection; wireless sensor network; deep learning; internet of things; low power wide area network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Global warming mechanically increases the risk of fires starting: the number of forest fires is increasing and will continue to increase. To better support firefighters on the ground, we present in this work a system for the early detection of forest fires. This system is more precise than traditional surveillance approaches such as lookout towers and satellite surveillance. The proposed system is based on collecting environmental data from a wireless sensor network in the forest and predicting the occurrence of a forest fire using artificial intelligence, more particularly Deep Learning (DL) models. Such a system, based on the concept of the Internet of Things (IoT), combines a Low Power Wide Area Network (LPWAN), fixed or mobile sensors, and a well-suited deep learning model. Evaluating and comparing several deep learning models enabled us to show the feasibility of an autonomous, real-time environmental monitoring platform for the dynamic risk factors of forest fires.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_64-Early_Forest_Fire_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Distance Vector-Hop Algorithm using New Weighted Location Method for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110563</link>
        <id>10.14569/IJACSA.2020.0110563</id>
        <doi>10.14569/IJACSA.2020.0110563</doi>
        <lastModDate>2020-05-30T15:41:50.6630000+00:00</lastModDate>
        
        <creator>Fengrong Han</creator>
        
        <creator>Izzeldin Ibrahim Mohamed Abdelaziz</creator>
        
        <creator>Xinni Liu</creator>
        
        <creator>Kamarul Hawari Ghazali</creator>
        
        <subject>Wireless Sensor Network (WSN); localization algorithm; range-free; distance vector-hop (DV-Hop) localization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Location is an indispensable element of a Wireless Sensor Network (WSN), since when an event happens, we need to know its location. The distance vector-hop (DV-Hop) technique is a popular range-free localization algorithm due to its cost efficiency and non-intricate process. Nevertheless, it suffers from poor accuracy and is highly influenced by network topology; in particular, more hop counts lead to more errors. In the final phase, least squares is employed to solve a nonlinear equation, which introduces greater location errors. To address the problems mentioned above, an enhanced DV-Hop algorithm based on a weighted factor, along with a new weighted least squares location technique, is proposed in this paper; it is called WND-DV-Hop. First, the one-hop count of an unknown node is corrected by employing received signal strength indication (RSSI) technology. Next, in order to reduce the average hop distance error, a weighted coefficient based on the beacon node hop count is constructed. A new weighted least squares method is embedded to solve the nonlinear equation problem. Finally, considerable experiments were carried out to assess the performance of WND-DV-Hop, comparing the outcomes with the state-of-the-art DV-Hop, IDV-Hop, Checkout-DV-Hop, and New-DV-Hop algorithms depicted in the literature. The empirical findings demonstrate that WND-DV-Hop significantly outperforms the other localization algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_63-An_Enhanced_Distance_Vector_Hop_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Experimental Analysis of Touchless Interactive Mirror using Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110562</link>
        <id>10.14569/IJACSA.2020.0110562</id>
        <doi>10.14569/IJACSA.2020.0110562</doi>
        <lastModDate>2020-05-30T15:41:50.6330000+00:00</lastModDate>
        
        <creator>Abdullah Memon</creator>
        
        <creator>Syed Mudassir Kazmi</creator>
        
        <creator>Attiya Baqai</creator>
        
        <creator>Fahim Aziz Umrani</creator>
        
        <subject>Smart Mirror; Raspberry Pi; Pi-camera; Application Programing Interface; Hue Saturation Value; Region of Interest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>A prototype of a smart gesture-controlled mirror with enhanced interactivity is proposed and designed in this paper. With the help of hand gestures, the mirror provides some basic amenities like time, news, and weather. The designed system uses a Pi camera for image acquisition to perform functions like gesture recognition, and an ultrasonic sensor for presence detection. This paper also discusses the experimental analysis of human gesture interaction using parameters like the angle, which ranges from 0 to 37 degrees when tilting the forearm up and down and 0 to 15 degrees when the forearm is twisted right to left and vice versa, based on yellow, pink and white background colours. Additionally, the range of detection using the ultrasonic sensor is restricted to the active region of 69.6 to 112.5 degrees. Moreover, the time delay is about half a second for retrieving the time from the system and 6 to 10 seconds for fetching headlines and weather information from the internet. These analyses are taken into account to subsequently improve the design algorithm of the gesture-controlled smart mirror. The framework developed comprises three different gestures under which the mirror will display the mentioned information on its screen.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_62-Design_and_Experimental_Analysis_of_Touchless_Interactive_Mirror.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Still Image-based Human Activity Recognition with Deep Representations and Residual Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110561</link>
        <id>10.14569/IJACSA.2020.0110561</id>
        <doi>10.14569/IJACSA.2020.0110561</doi>
        <lastModDate>2020-05-30T15:41:50.6170000+00:00</lastModDate>
        
        <creator>Ahsan Raza Siyal</creator>
        
        <creator>Zuhaibuddin Bhutto</creator>
        
        <creator>Syed Muhammad Shehram Shah</creator>
        
        <creator>Azhar Iqbal</creator>
        
        <creator>Faraz Mehmood</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <creator>Saleem Ahmed</creator>
        
        <subject>Human activity recognition; action recognition; deep learning; transfer learning; residual learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Recognizing human activity in a scene is still a challenging and important research area in the field of computer vision due to its many possible applications, including autonomous driving, biomedical systems, and machine vision. Recently, deep learning techniques have emerged and successfully deployed models for image recognition and classification, object detection, and speech recognition. Due to their promising results, state-of-the-art deep learning techniques have replaced traditional techniques. In this paper, a novel method is presented for human activity recognition, based on a pre-trained Convolutional Neural Network (CNN) model utilized as a feature extractor; the deep representations are followed by a Support Vector Machine (SVM) classifier for action recognition. It has been observed that CNN knowledge previously learnt from a large-scale dataset can be transferred to an activity recognition task with limited training data. The proposed method is evaluated on the publicly available Stanford40 human action dataset, which includes 40 classes of actions and 9532 images. The comparative experimental results show that the proposed method achieves better performance than conventional methods in terms of accuracy and computational power.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_61-Still_Image_Based_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parkinson’s Disease Diagnosis using Spiral Test on Digital Tablets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110560</link>
        <id>10.14569/IJACSA.2020.0110560</id>
        <doi>10.14569/IJACSA.2020.0110560</doi>
        <lastModDate>2020-05-30T15:41:50.5870000+00:00</lastModDate>
        
        <creator>Najd Al-Yousef</creator>
        
        <creator>Raghad Al- Saikhan</creator>
        
        <creator>Reema Al- Gowaifly</creator>
        
        <creator>Reem Al-Abdullatif</creator>
        
        <creator>Felwa Al-Mutairi</creator>
        
        <creator>Ouiem Bchir</creator>
        
        <subject>Parkinson&#39;s disease (PD); computer-aided diagnosis; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>For a proper diagnosis, Parkinson&#39;s disease (PD) requires frequent visits to the doctor for physical tests, placing a huge burden on the patient. As PD impairs handwriting ability, the handwriting pattern can be used as an indicator for PD diagnosis. More specifically, the Static Spiral Test (SST) and the Dynamic Spiral Test (DST) consist in retracing spirals using a digital pen. Such an exam can be self-conducted by the patient, and thus it would be convenient and non-time-consuming for both the patient and the medical staff. In this project, we designed and implemented a system that automatically aids the self-diagnosis of PD using the SST and DST on digital tablets. The system includes two main components: image processing techniques to pre-process and extract the appropriate visual features, and machine learning techniques to recognize PD automatically. The conducted experiment showed that the semi-local Edge Histogram Descriptor extracted from the DST drawing and conveyed to a Gaussian-kernel Support Vector Machine outperforms the other considered systems, with accuracy, specificity, and sensitivity around 90%.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_60-Parkinsons_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Language Processing based Anomalous System Call Sequences Detection with Virtual Memory Introspection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110559</link>
        <id>10.14569/IJACSA.2020.0110559</id>
        <doi>10.14569/IJACSA.2020.0110559</doi>
        <lastModDate>2020-05-30T15:41:50.5700000+00:00</lastModDate>
        
        <creator>Suresh K. Peddoju</creator>
        
        <creator>Himanshu Upadhyay</creator>
        
        <creator>Jayesh Soni</creator>
        
        <creator>Nagarajan Prabakar</creator>
        
        <subject>System call sequence; anomaly detection; natural language processing; memory forensics; cosine similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Malware has become a significant problem for computer security in this scientific era. Nowadays, machine learning techniques are applied to find anomalous activities in computers, especially in virtualization environments. Identifying anomalous activities in virtual machines with a virtual memory introspector and analyzing the data with machine learning techniques are a pressing current need. In this paper, an anomaly detection method is implemented using Natural Language Processing (NLP) based on Bags of System Calls (BoSC) to learn the behavior of applications on Windows virtual machines running on the Xen hypervisor. During this process, system call traces are extracted from normal applications (benign processes) and malware-affected applications (malicious processes) with the help of virtual memory introspection. The extracted system call sequences are preprocessed to obtain valid system call sequences by filtering and ordering redundant system calls. Further, the behavior of the system call sequences is analyzed with NLP-based anomaly detection techniques. During this process, a Cosine Similarity algorithm (Co-Sim) is applied to identify malicious processes running on a VM. Apart from this, a Point Detection algorithm is applied to precisely locate the point of compromise in the system call sequences. The results shown in this paper indicate that both of these algorithms detect anomalies in the running processes with 99% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_59-Natural_Language_Processing_based_Anomalous_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disparity of Stereo Images by Self-Adaptive Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110558</link>
        <id>10.14569/IJACSA.2020.0110558</id>
        <doi>10.14569/IJACSA.2020.0110558</doi>
        <lastModDate>2020-05-30T15:41:50.5400000+00:00</lastModDate>
        
        <creator>Md. Abdul Mannan Mondal</creator>
        
        <creator>Mohammad Haider Ali</creator>
        
        <subject>Stereo correspondence; stereo matching; window cost; adaptive search; disparity; sum of absolute differences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>This paper introduces a new searching method named the “Self Adaptive Algorithm (SAA)” for computing the stereo correspondence, or disparity, of a stereo image. The key idea of this method relies on the previous search result, which increases searching speed by reducing the search zone and avoiding false matching. According to the proposed method, the stereo matching search range can be selected dynamically until the best match is found. The searching range -dmax to +dmax is divided into two searching regions: the first is -dmax to 0 and the second is 0 to +dmax. To determine the correspondence of a pixel of the reference image (left image), the window costs of the right image are computed either for the -dmax to 0 region or for the 0 to +dmax region, depending only on the matching pixel position. The region where the window costs will be computed is automatically selected by the proposed algorithm based on the previous matching record. Thus the searching range is reduced to 50% in every iteration. The algorithm is able to infer the upcoming candidate pixel position depending on the intensity value of the reference pixel. So the proposed approach improves window cost calculation by avoiding false matching in the right image and reduces the search range as well. The proposed method has been compared with state-of-the-art methods evaluated on the Middlebury standard stereo dataset, and our SAA outperforms the latest methods both in terms of speed and gain enhancement, with no degradation of accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_58-Disparity_of_Stereo_Images_by_Self_Adaptive_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Comparison Between the Information Gain and Gini Index using Historical Geographical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110557</link>
        <id>10.14569/IJACSA.2020.0110557</id>
        <doi>10.14569/IJACSA.2020.0110557</doi>
        <lastModDate>2020-05-30T15:41:50.5230000+00:00</lastModDate>
        
        <creator>Majid Zaman</creator>
        
        <creator>Sameer Kaul</creator>
        
        <creator>Muheet Ahmed</creator>
        
        <subject>Geographical data mining; information gain; Gini index; machine learning; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The historical geographical data of Kashmir province is spread across two disparate files with attributes of Maximum Temperature, Minimum Temperature, Humidity measured at 12 A.M., Humidity measured at 3 P.M., and rainfall, besides auxiliary parameters like date, year, etc. The parameters Maximum Temperature, Minimum Temperature, Humidity measured at 12 A.M., and Humidity measured at 3 P.M. are continuous in nature, and in this study we applied Information Gain and the Gini Index to these attributes to convert the continuous data into discrete values; thereafter we compare and evaluate the generated results. Of the four attributes, two have the same results for Information Gain and the Gini Index; one attribute has overlapping results, whereas only one attribute has conflicting results for Information Gain and the Gini Index. Subsequently, the continuous-valued attributes are converted into discrete values using the Gini Index. Irrelevant attributes are not considered, and auxiliary attributes are labeled accordingly. Consequently, the dataset is ready for the application of machine learning (decision tree) algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_57-Analytical_Comparison_between_the_Information_Gain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiclass Pattern Recognition of Facial Images using Correlation Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110556</link>
        <id>10.14569/IJACSA.2020.0110556</id>
        <doi>10.14569/IJACSA.2020.0110556</doi>
        <lastModDate>2020-05-30T15:41:50.4900000+00:00</lastModDate>
        
        <creator>Nisha Chandran S</creator>
        
        <creator>Charu Negi</creator>
        
        <creator>Poonam Verma</creator>
        
        <subject>Pattern recognition; correlation filter; multiclass recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Pattern Recognition comes naturally to humans, and there are many pattern recognition tasks which humans can perform admirably well. However, human pattern recognition cannot compete with machine speed when the number of classes to be recognized becomes tremendously large. In this paper, we analyze the effectiveness of correlation filters for pattern classification problems. We have used the Distance Classifier Correlation Filter (DCCF) for pattern classification of facial images. Two essential qualities of a correlation filter are distortion tolerance and discrimination ability. DCCF transposes the feature space in such a way that images belonging to the same class get closer and images from different classes move far apart, thereby increasing the distortion tolerance and the discrimination ability. The results obtained demonstrate the effectiveness of the approach for face recognition applications.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_56-Multiclass_Pattern_Recognition_of_Facial_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Focus Image Fusion using Image Enhancement Techniques with Wavelet Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110555</link>
        <id>10.14569/IJACSA.2020.0110555</id>
        <doi>10.14569/IJACSA.2020.0110555</doi>
        <lastModDate>2020-05-30T15:41:50.4770000+00:00</lastModDate>
        
        <creator>Sarwar Shah Khan</creator>
        
        <creator>Muzammil Khan</creator>
        
        <creator>Yasser Alharbi</creator>
        
        <subject>Multi focus image fusion; image enhancement; unsharp masking; Laplacian Filter (LF); Stationary Wavelet Trans-forms (SWT); frequency domain technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Multi-focus image fusion produces a unification of multiple images having different areas in focus, each of which contains necessary and detailed information. This paper proposes a novel pre-processing step in the image fusion pipeline, in which sharpening techniques are applied before fusion. We propose hybrid multi-focus fusion techniques based on image enhancement, which help identify the key features and minor details; fusion is then performed on the enhanced images. For image enhancement, we introduce a new hybrid sharpening method that combines the Laplacian Filter (LF) with the Discrete Fourier Transform (DFT), and we also perform sharpening using the Unsharp Masking approach. Fusion is then carried out using the Stationary Wavelet Transform (SWT) technique to fuse the enhanced images and obtain more detail in the resultant image. The proposed approach is applied to two image sets, i.e., the “planes” and “clocks” image sets. The quality of the output image is evaluated using both qualitative and quantitative approaches, with four well-known quantitative metrics used to assess the performance of the novel technique. The experimental results show efficient, improved outcomes for multi-focus image fusion: SWT (LF+DFT) and SWT (Unsharp Mask) are 2.6% and 1.8%, and 0.62% and 0.61%, better than the best baseline, i.e., SWT, in terms of RMSE (Root Mean Square Error) for the two image sets.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_55-Multi_Focus_Image_Fusion_using_Image_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification and Assesment of the Specific Absorption Rate (SAR) Generated by the most used Telephone in Peru in 2017</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110554</link>
        <id>10.14569/IJACSA.2020.0110554</id>
        <doi>10.14569/IJACSA.2020.0110554</doi>
        <lastModDate>2020-05-30T15:41:50.4430000+00:00</lastModDate>
        
        <creator>Natalia Indira Vargas-Cuentas</creator>
        
        <creator>Mark Clemente-Arenas</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <creator>Roxana Moran-Morales</creator>
        
        <subject>Electromagnetic radiation; SAR; ComoSAR; GSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>According to the World Health Organization (WHO), it is estimated that between 5 and 10% of the population is electrosensitive; excessive or prolonged exposure to electromagnetic waves can damage health. Currently, the electromagnetic radiation generated by wireless mobile telephony pervades our daily lives, as it is reported that there are more than 5 billion cell phone users. Each country establishes its own national standards on exposure to electromagnetic fields, which Peru lacks; moreover, existing guidelines are based on standards that have not been revised since 1996. This contribution seeks to identify the Specific Absorption Rate (SAR) generated by the most used mobile phone in Peru in 2017, using the ComoSAR measuring system in the GSM (Global System for Mobile communications) band at a frequency of 900 MHz. The results obtained evaluate the behavior of electromagnetic waves as they affect emulated tissues with the body&#39;s density and dielectric constant. The maximum SAR values recorded in the measurement were 0.05 W/kg for a 1 g cube and 0.02 W/kg for a 10 g cube, while the average values obtained were 0.046 W/kg for a 1 g cube and 0.019 W/kg for a 10 g cube. The SAR values measured under the conditions of the experiment are below those indicated by the US and European SAR standards.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_54-Identification_and_Assesment_of_the_Specific_Absorption_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Segmentation of Hindi Speech into Syllable-Like Units</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110553</link>
        <id>10.14569/IJACSA.2020.0110553</id>
        <doi>10.14569/IJACSA.2020.0110553</doi>
        <lastModDate>2020-05-30T15:41:50.4300000+00:00</lastModDate>
        
        <creator>Ruchika Kumari</creator>
        
        <creator>Amita Dev</creator>
        
        <creator>Ashwani Kumar</creator>
        
        <subject>Database; short term energy; convex hull; speech segmentation; syllable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>To develop a high-quality Text-to-Speech (TTS) system, appropriate segmentation of continuous speech into syllabic units plays an important role. This work implements an automatic syllable-based segmentation technique for continuous speech in the Hindi language. The experiments were conducted using the energy convex hull approach on clean, continuous Hindi speech. In this method, the Savitzky-Golay filter was applied to the short term energy (STE) signal to increase the signal to noise ratio (SNR), followed by a median filter to preserve the boundaries, hence smoothing the energy curve. Also, the Hamming sliding window was applied twice to the speech signal to obtain a more accurate depth of the convex hull valleys. Further, the algorithm was tested on 50 unique utterances chosen from the travel domain. The accuracy of the proposed algorithm was calculated, showing that 76.07% of syllables have a time error of less than 30 ms with respect to the manual segmentation reference. The proposed algorithm also gives better segmentation accuracy than the existing group delay segmentation technique for fricative or nasal sounds. The syllable-based segmented database is suitable for Hindi speech technology systems in the travel domain.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_53-Automatic_Segmentation_of_Hindi_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Coronavirus Behavior to Predict it’s Spread</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110552</link>
        <id>10.14569/IJACSA.2020.0110552</id>
        <doi>10.14569/IJACSA.2020.0110552</doi>
        <lastModDate>2020-05-30T15:41:50.3970000+00:00</lastModDate>
        
        <creator>Shakir Khan</creator>
        
        <creator>Amani Alfaifi</creator>
        
        <subject>COVID-19; coronavirus; SIR model; data mining; R Software; forecasting model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>With the increasing presence and spread of infectious diseases and their fatalities in densely populated areas, many academics and societies have become interested in discovering new ways to predict how these diseases spread. Such predictions help them plan for and contain a disease better in small provinces and thus decrease the loss of human lives. Some cases of pneumonia of indeterminate cause occurred in Wuhan, Hubei, China, in December 2019, with clinical presentations closely resembling viral pneumonia. In-depth analyses of sequencing from lower respiratory tract samples discovered a novel coronavirus, called the 2019 novel coronavirus (2019-nCoV). Current events showed us how easily a coronavirus can take root and spread, as such viruses are transmitted easily between persons. To cope with these infections, we applied a time series forecasting model in this paper to predict possible coronavirus events. The forecasting model applied is SIR, and the results of the implemented models are compared with the actual data.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_52-Modeling_of_Coronavirus_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Methodology for Water Supply Pipeline Risk Index Prediction for Avoiding Accidental Losses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110551</link>
        <id>10.14569/IJACSA.2020.0110551</id>
        <doi>10.14569/IJACSA.2020.0110551</doi>
        <lastModDate>2020-05-30T15:41:50.3830000+00:00</lastModDate>
        
        <creator>Muhammad Shuaib Qureshi</creator>
        
        <creator>Ayman Aljarbouh</creator>
        
        <creator>Muhammad Fayaz</creator>
        
        <creator>Muhammad Bilal Qureshi</creator>
        
        <creator>Wali Khan Mashwani</creator>
        
        <creator>Junaid Khan</creator>
        
        <subject>Neural networks; normalization; risk index; mean square error; statistical moments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Accidents affecting buildings and other human facilities due to poor water supply pipeline systems are random phenomena, but an efficient estimation system can help avoid such accidents. Such a system can assist caretakers in taking preventive measures to avoid accidents, or at least reduce the associated risk. In this paper, we target this issue by proposing a water supply pipeline risk estimation methodology using a feed forward backpropagation neural network (FFBPNN). For validation and performance evaluation, real data on water supply pipelines collected in Seoul, South Korea from 1987 to 2010 are used. A comprehensive analysis is performed in order to obtain reasonable results with both original and pre-processed input data. Pre-processing consists of two steps: data normalization and the computation of statistical moments, namely mean, variance, kurtosis, and skewness. Significant improvement in prediction accuracy is observed with data pre-processing in terms of the selected performance metrics, such as mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE).</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_51-An_Efficient_Methodology_for_Water_Supply.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metaheuristic for the Capacitated Multiple Traveling Repairman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110550</link>
        <id>10.14569/IJACSA.2020.0110550</id>
        <doi>10.14569/IJACSA.2020.0110550</doi>
        <lastModDate>2020-05-30T15:41:50.3670000+00:00</lastModDate>
        
        <creator>Khanh-Phuong Nguyen</creator>
        
        <creator>Ha-Bang Ban</creator>
        
        <subject>CmTRP; NNS; VNS; GVNS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The Capacitated Multiple Traveling Repairmen Problem (CmTRP) is an extension of the Multiple Traveling Repairmen Problem (mTRP). In the CmTRP, a number of vehicles is dispatched to serve a set of customers, where each vehicle&#39;s capacity is limited by a predefined value and each customer is visited exactly once. The goal is to find a tour that minimizes the sum of waiting times. The problem is NP-hard because it is harder than the mTRP; even finding a feasible solution is itself NP-hard. To solve medium- and large-size instances, a metaheuristic algorithm is proposed. The first phase constructs a feasible solution by combining Nearest Neighborhood Search (NNS) with Variable Neighborhood Search (VNS), while the optimization phase improves the feasible solution by means of General Variable Neighborhood Search (GVNS). The combination maintains the balance between intensification and diversification needed to escape local optima. The proposed algorithm is evaluated on benchmark instances from the literature. The results indicate that it obtains good feasible solutions in a short time, even for cases with up to 200 vertices.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_50-Metaheuristic_for_the_Capacitated_Multiple_Traveling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secured Large Heterogeneous HPC Cluster System using Massive Parallel Programming Model with Accelerated GPUs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110549</link>
        <id>10.14569/IJACSA.2020.0110549</id>
        <doi>10.14569/IJACSA.2020.0110549</doi>
        <lastModDate>2020-05-30T15:41:50.3370000+00:00</lastModDate>
        
        <creator>Khalid Alsubhi</creator>
        
        <subject>High Performance Computing HPC; MPI; OpenMP; CUDA; Supercomputing Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>High Performance Computing (HPC) architectures are expected to deliver the first ExaFlops computer. Such an Exascale processing system will be capable of computing ExaFlops every second, a thousand-fold increase over current Petascale systems. Current technologies face several challenges in moving toward such an extreme computing system. It has been anticipated that billion-way parallelism will be exploited to build a secured Exascale-level system that provides massive performance under predefined limitations such as the number of processing cores and power consumption. However, key strategic elements are required to develop a secured, energy-efficient ExaFlops-level system. This study proposes a non-blocking, overlapping, GPU-computation-based tri-hybrid model (OpenMP, CUDA and MPI) that provides massive parallelism across different granularity levels. We implemented three different message passing strategies and performed the experiments on the Aziz-Fujitsu PRIMERGY CX400 supercomputer. A comprehensive experimental study was conducted to validate the performance and energy efficiency of our model. Experimental investigation shows that the EPC could be considered an initial and leading model for achieving massive performance through an efficient scheme for Exascale computing systems.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_49-A_Secured_Large_Heterogeneous_HPC_Cluster_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Learning Approach for Unsupervised Neural Networks Model with Application to Agriculture Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110548</link>
        <id>10.14569/IJACSA.2020.0110548</id>
        <doi>10.14569/IJACSA.2020.0110548</doi>
        <lastModDate>2020-05-30T15:41:50.3370000+00:00</lastModDate>
        
        <creator>Belattar Sara</creator>
        
        <creator>Abdoun Otman</creator>
        
        <creator>El khatir Haimoudi</creator>
        
        <subject>Kohonen-self organizing map; gram-schmidt algorithm; principal component analysis; agriculture field; crop yield prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>An accurate and lower-cost hybrid machine learning algorithm based on a combination of the Kohonen Self-Organizing Map (SOM) and the Gram-Schmidt (GSHM) algorithm is proposed to enhance crop yield prediction and increase agricultural production. The combination of GSHM and SOM allows extracting the most informative components of the data by overcoming correlation issues between input data prior to the training process. The improved hybrid algorithm was first trained on data that have a correlation problem and compared with another hybrid model based on SOM and Principal Component Analysis (PCA); second, it was trained using selected soil parameters related to the atmosphere (e.g. pH, nitrogen, phosphate, potassium, depth, temperature, and rainfall), and a comparative study with the standard SOM was conducted. When applied to correlated data, the improved Kohonen Self-Organizing Map demonstrated better results in terms of classification accuracy (8/8) and rapidity (0.015 s), compared to a classification accuracy of 7/8 and a rapidity of 97.828 s using SOM combined with PCA. Moreover, the proposed algorithm yielded better results for crop prediction in terms of a maximum iteration number of 675, mean error ≤ 0.00022, and rapidity of 18.422 s, versus an iteration number of 729, mean error ≤ 0.000916, and rapidity of 23.707 s with the standard SOM. The proposed algorithm allowed us to overcome correlation issues and to improve classification, the learning process, and rapidity, with the potential to be applied to crop yield prediction in the agricultural field.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_48-New_Learning_Approach_for_unsupervised_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Technology for Facilities Management: Understanding End user Perception of the Smart Toilet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110547</link>
        <id>10.14569/IJACSA.2020.0110547</id>
        <doi>10.14569/IJACSA.2020.0110547</doi>
        <lastModDate>2020-05-30T15:41:50.3030000+00:00</lastModDate>
        
        <creator>Venushini Raendran</creator>
        
        <creator>R. Kanesaraj Ramasamy</creator>
        
        <creator>Intan Soraya Rosdi</creator>
        
        <creator>Ruzanna Ab Razak</creator>
        
        <creator>Nurazlin Mohd Fauzi</creator>
        
        <subject>IoT system; facilities management; smart toilet; resource optimization; user acceptance; theory of planned behaviour</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The Internet of Things (IoT) plays an important role as an emerging technology. IoT platforms enable electronic devices to collect, process, and monitor various types of data. The Smart Toilet system featured in this paper is based on IoT technology, and it is designed for resource optimization in facility management services and for bringing convenience to individual end users. This paper presents a study conducted on individual end user perception of the Smart Toilet. A total of 124 respondents participated in the study&#39;s online survey, and statistical data analysis methods were used to analyse the data. Results indicate user perception of the proposed Smart Toilet and the Smart Toilet app.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_47-IoT_Technology_for_Facilities_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Fuzzy Enhanced Mammogram Mass Images by using K-Mean Clustering and Region Growing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110546</link>
        <id>10.14569/IJACSA.2020.0110546</id>
        <doi>10.14569/IJACSA.2020.0110546</doi>
        <lastModDate>2020-05-30T15:41:50.2730000+00:00</lastModDate>
        
        <creator>Nidhi Singh</creator>
        
        <creator>S. Veenadhari</creator>
        
        <subject>INT operator; feature extraction; k-mean clustering; mammogram; median filter; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>With the intention of supporting radiologists&#39; appraisal in the identification or classification of mammogram images, various methods have been suggested by specialists over the past two decades. In this technical paper, we propose segmentation of digital mammogram images with k-means clustering and region growing techniques, helping specialists or radiologists locate cancerous areas with computer-aided techniques. The suggested task is divided into two stages. In the pre-processing stage, we apply a median filter to remove undesired salt-and-pepper noise, and then apply the fuzzy intensification operator (INT) to enhance the contrast of the input images. In the subsequent stage, the enhanced fuzzy image serves as input for k-means clustering; the region growing technique is then employed on the clustered imagery to partition the mammogram into homogeneous areas indicated by pixel intensity. For the experiment, we utilized the mini-MIAS dataset. The experimental results show that the proposed strategy accomplishes higher precision.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_46-Segmentation_of_Fuzzy_Enhanced_Mammogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Cuckoo Feature Filtration Method for Intrusion Detection (Cuckoo-ID)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110545</link>
        <id>10.14569/IJACSA.2020.0110545</id>
        <doi>10.14569/IJACSA.2020.0110545</doi>
        <lastModDate>2020-05-30T15:41:50.2570000+00:00</lastModDate>
        
        <creator>Wafa Alsharafat</creator>
        
        <subject>Cuckoo algorithm; feature filtration; intrusion detection; XCS; detection rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Intrusion Detection Systems (IDSs) play a crucial role in keeping online systems secure from attacks. However, these systems usually face the challenge of needing to handle and analyze a vast volume of data in order to achieve intrusion detection. Feature filtration is a solution that overcomes this challenge by focusing on the characteristic network features that play a significant role in enabling these systems to achieve high detection rates. This paper presents an intelligent cuckoo feature filtration method that is intended to prune away insignificant network features. Then, an IDS (Cuckoo-ID) is designed in which an eXtended Classifier System (XCS) uses the filtered features to improve the rate of detection of network intrusions. Thus, the main objective of Cuckoo-ID is to maximize the detection rate (DR) and minimize the false alarm rate (FAR). Experiments were then run on the KDDcup&#39;99 dataset to test the intrusion detection (ID) efficiency of the proposed system. The results showed that cuckoo filtration profoundly raises the ID rate of the entire system. Finally, the DR and FAR of Cuckoo-ID were compared with those of intrusion detection methods that depend on network feature filtration.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_45-The_Cuckoo_Feature_Filtration_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Human Gait Recognition Against Information Theft in Smartphone using Residual Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110544</link>
        <id>10.14569/IJACSA.2020.0110544</id>
        <doi>10.14569/IJACSA.2020.0110544</doi>
        <lastModDate>2020-05-30T15:41:50.2270000+00:00</lastModDate>
        
        <creator>Gogineni Krishna Chaitanya</creator>
        
        <creator>Krovi.Raja Sekhar</creator>
        
        <subject>Authentication; biometric analysis; genuine user; information loss; residual convolutional neural network; smartphone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Continuous authentication, one of the most emerging features in biometrics applications, identifies the genuine user of a smartphone and prevents information theft. Biometrics is defined as recognizing a person by analysing physiological or behavioural attributes. Physiological qualities such as iris recognition, fingerprints, and palm and face geometry are used in biometric validation frameworks. In existing entry-point authentication techniques, confidential information is lost because of internal attacks while identifying the genuine user of the smartphone. Therefore, a biometric validation framework is designed in this research study to differentiate an authorized user by recognizing gait. In order to identify unauthorized smartphone access, human gait recognition is carried out by implementing a Residual Convolutional Neural Network (RCNN) approach. The proposed architecture secures the personal information of the end user on the smartphone and presents a better solution against unauthorized access. The performance of the RCNN method is compared with the existing Deep Neural Network (DNN) in terms of classification accuracy. The simulation results showed that the RCNN method achieved 98.15% accuracy, whereas the DNN achieved 95.67% accuracy on the OU-ISIR dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_44-A_Human_Gait_Recognition_against_Information_Theft.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Fleet Management System for Open Pit Mine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110543</link>
        <id>10.14569/IJACSA.2020.0110543</id>
        <doi>10.14569/IJACSA.2020.0110543</doi>
        <lastModDate>2020-05-30T15:41:50.2100000+00:00</lastModDate>
        
        <creator>Hajar BNOUACHIR</creator>
        
        <creator>Meryiem CHERGUI</creator>
        
        <creator>Nadia MACHKOUR</creator>
        
        <creator>Mourad ZEGRARI</creator>
        
        <creator>Aziza CHAKIR</creator>
        
        <creator>Laurent DESHAYES</creator>
        
        <creator>Atae SEMMAR</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <subject>Fleet management system; open pit mine; monitoring; architectures; artificial intelligence; real time system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Fleet management systems are currently used to coordinate mobility and delivery services in a wide range of areas. However, their traditional control architecture becomes a critical bottleneck in open and dynamic environments, where scalability and autonomy are key factors in their success. In this article, we propose an intelligent distributed Fleet Management System architecture for an open pit mine that allows mining vehicles to be controlled in a real-time context, according to users&#39; requirements. Our architecture is enriched by an intelligence layer made possible by high-performance artificial intelligence algorithms, a reliable and efficient perception mechanism based on Internet of Things technologies, and a smart, integrated decision system that improves the fleet management system&#39;s agility and its response to user requirements. These contributions enable the fleet management system to meet the interoperability and autonomy requirements of the most widely used standards in the field, such as ISA 95.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_43-Intelligent_Fleet_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping UML Sequence Diagram into the Web Ontology Language OWL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110542</link>
        <id>10.14569/IJACSA.2020.0110542</id>
        <doi>10.14569/IJACSA.2020.0110542</doi>
        <lastModDate>2020-05-30T15:41:50.1930000+00:00</lastModDate>
        
        <creator>Mo’men Elsayed</creator>
        
        <creator>Nermeen Elkashef</creator>
        
        <creator>Yasser F. Hassan</creator>
        
        <subject>Mapping; Unified Modeling Language; UML; sequence diagram; ontology; Web Ontology Language; OWL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In this paper, we propose a new mapping technique from the OMG’s UML modeling language into the Web Ontology Language (OWL) to serve the Semantic Web. UML (Unified Modeling Language) is widely accepted and used as a standardized modeling language in the Object-Oriented Analysis (OOA) and Design (OOD) approach by domain experts to model real-world objects in software development. On the other hand, the conceptualization represented in OWL is designed to process the content of information rather than just present the information. Therefore, the matter of migrating UML to OWL is becoming an active research domain. OWL (Web Ontology Language) is a Semantic Web language designed for defining ontologies on the Web. An ontology is a formal specification that names and defines shared data. Our technique describes how to map UML models into OWL and allows us to preserve the semantics of UML sequence diagrams, such as messages, the sequence of messages, guard invariants, etc., so that the data of UML sequence diagrams becomes machine-readable.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_42-Mapping_UML_Sequence_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Where is the Highest Rate of Children with Anemia in Peru? An Answer using Grey Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110541</link>
        <id>10.14569/IJACSA.2020.0110541</id>
        <doi>10.14569/IJACSA.2020.0110541</doi>
        <lastModDate>2020-05-30T15:41:50.1630000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Nicolly Perez Ccancce</creator>
        
        <subject>Anemia; CTWF; grey systems; grey clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Anemia, or chronic malnutrition, in children under 5 is a social problem that affects the infant crucially in his or her growth and cognitive development. However, this problem presents itself in different ways in each department of Peru. Thus, in this work, the methodology of grey clustering, which is based on grey systems theory, was applied. The case study was conducted in the 25 departments of Peru, to analyze the departments most affected by chronic malnutrition in children under 5 years of age. The results of the study were that the departments of Cajamarca, Huancavelica, Loreto and Pasco have the highest rate of children with anemia; this may be due to the fact that these departments have greater poverty or negligence with respect to proper food handling. The results of this study could help local authorities such as the Ministry of Health to combat malnutrition, and also serve as a basis for future studies to evaluate the social impact of other conditions on health from a mathematical perspective.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_41-Where_is_the_Highest_Rate_of_Children_with_Anemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble Machine Learning Model for Higher Learning Scholarship Award Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110540</link>
        <id>10.14569/IJACSA.2020.0110540</id>
        <doi>10.14569/IJACSA.2020.0110540</doi>
        <lastModDate>2020-05-30T15:41:50.1470000+00:00</lastModDate>
        
        <creator>Wirawati Dewi Ahmad</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <subject>Scholarship classification; ensemble learning; rules-based classification; rules-based ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The role of higher learning in Malaysia is to ensure high-quality educational ecosystems that develop individual potential to fulfill the national aspiration. To implement this role successfully, scholarship offers are an important part of the strategic plan. Since the number of undergraduate students increases every year, the government must apply a systematic strategy to manage scholarship offers so that recipients are selected effectively. The use of predictive models has been shown to be effective for this purpose. In this paper, an ensemble knowledge model is proposed to support the scholarship award decisions made by the organization. It generates a list of eligible candidates to reduce the human error and time taken to select eligible candidates manually. Two ensemble approaches are presented: first, ensembles of models, and second, ensembles of rule-based knowledge. The ensemble learning techniques, namely boosting, bagging, voting and a rules-based ensemble technique, and five base learner algorithms, namely J48, Support Vector Machine (SVM), Artificial Neural Network (ANN), Na&#239;ve Bayes (NB) and Random Tree (RT), are used to develop the model. A total of 87,000 scholarship application records are used in the modelling process. The results on accuracy, precision, recall and F-measure show that the ensemble voting technique gives the best accuracy of 86.9% compared to the other techniques. This study also explores the rules obtained from the rules-based models J48 and Apriori and selects the best rules to develop an ensemble rules-based model, which improves the classification model for scholarship awards.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_40-Ensemble_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mutual Coexistence in WBANs: Impact of Modulation Schemes of the IEEE 802.15.6 Standard</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110539</link>
        <id>10.14569/IJACSA.2020.0110539</id>
        <doi>10.14569/IJACSA.2020.0110539</doi>
        <lastModDate>2020-05-30T15:41:50.1330000+00:00</lastModDate>
        
        <creator>Marwa BOUMAIZ</creator>
        
        <creator>Mohammed EL GHAZI</creator>
        
        <creator>Mohammed FATTAH</creator>
        
        <creator>Anas BOUAYAD</creator>
        
        <creator>Moulhime EL BEKKALI</creator>
        
        <subject>Body Area Network (BAN); mutual coexistence; interference; Differential Binary Phase Shift Keying (DBPSK); Differential Quadrature Phase Shift Keying (DQPSK)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Due to the mobility of subjects carrying wireless Body Area Networks (WBANs), a BAN may be found in an environment that contains other adjacent BANs, which may influence its proper functioning. The purpose of this paper is to study the effect of interference between adjacent BANs on the performance of a reference BAN in terms of packet loss rate (PLR), while considering the following four parameters: the distance separating adjacent BANs, the number of nodes and traffic payload of an interferer BAN, and the transmission data rate. The study is conducted for the two modulation schemes proposed by the IEEE 802.15.6 standard in the 2.4 GHz narrow band, which are: Differential Binary Phase Shift Keying (DBPSK) modulation and Differential Quadrature Phase Shift Keying (DQPSK) modulation. Simulation results have shown that the adoption of a lower-order modulation such as DBPSK can reduce the effect of interference among adjacent BANs.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_39-Mutual_Coexistence_in_WBANs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Trait-based Deep Learning Automated Essay Scoring System with Adaptive Feedback</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110538</link>
        <id>10.14569/IJACSA.2020.0110538</id>
        <doi>10.14569/IJACSA.2020.0110538</doi>
        <lastModDate>2020-05-30T15:41:50.1000000+00:00</lastModDate>
        
        <creator>Mohamed A. Hussein</creator>
        
        <creator>Hesham A. Hassan</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <subject>AES system; trait evaluation; adaptive feedback; deep learning; neural networks; ASAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Numerous Automated Essay Scoring (AES) systems have been developed over the past years. Recent advances in deep learning have shown that applying neural network approaches to AES systems yields state-of-the-art solutions. Most neural-based AES systems assign an overall score to given essays, even if they depend on analytical rubrics/traits. Trait evaluation/scoring helps to identify learners’ levels of performance. Besides, providing feedback to learners about their writing performance is as important as assessing their level. Producing adaptive feedback for learners requires identifying the strengths/weaknesses and the magnitude of influence of each trait. In this paper, we develop a framework that strengthens the validity and enhances the accuracy of a baseline neural-based AES model with respect to trait evaluation/scoring. We extend the model to present a method based on essay trait prediction to give trait-specific adaptive feedback. We explored multiple deep learning models for the automatic essay scoring task, and we performed several analyses to derive indicators from these models. The results show that the Long Short-Term Memory (LSTM)-based system outperformed the baseline study by 4.6% in terms of quadratic weighted Kappa (QWK). Moreover, the prediction of trait scores enhances the prediction of the overall score. Our extended model is used in iAssistant, an educational module that provides trait-specific adaptive feedback to learners.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_38-A_Trait_based_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Captioning using Deep Learning: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110537</link>
        <id>10.14569/IJACSA.2020.0110537</id>
        <doi>10.14569/IJACSA.2020.0110537</doi>
        <lastModDate>2020-05-30T15:41:50.0870000+00:00</lastModDate>
        
        <creator>Murk Chohan</creator>
        
        <creator>Adil Khan</creator>
        
        <creator>Muhammad Saleem Mahar</creator>
        
        <creator>Saif Hassan</creator>
        
        <creator>Abdul Ghafoor</creator>
        
        <creator>Mehmood Khan</creator>
        
        <subject>Image Captioning; Deep Learning; Neural Network; Recurrent Neural Network (RNN); Convolution Neural Network (CNN); Long Short Term Memory (LSTM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Automatic image captioning is defined as the process of generating captions or textual descriptions for images based on their contents. It is a machine learning task that involves both natural language processing (for text generation) and computer vision (for understanding image contents). Automatic image captioning is a recent and growing research problem, and new methods are continually being introduced to achieve satisfactory results in this field. However, considerable attention is still required to achieve results as good as a human&#39;s. This study aims to find out, in a systematic way, which different and recent methods and models are used for image captioning using deep learning, how those models are implemented, and which methods are more likely to give good results. To do so, we performed a systematic literature review of recent studies from 2017 to 2019 from well-known databases (Scopus, Web of Science, IEEE Xplore). We found a total of 61 primary studies relevant to the objective of this research. We found that CNNs are used to understand image contents and detect objects in an image, while RNNs or LSTMs are used for language generation. The most commonly used datasets are MS COCO (used in all studies), Flickr8k and Flickr30k. The most commonly used evaluation metric is BLEU (1 to 4), used in all studies. It was also found that LSTM with CNN outperformed RNN with CNN. We found that the two most promising methods for implementing such models are the encoder-decoder and attention mechanisms, and a combination of them can help improve results to a good scale. This research provides guidelines and recommendations to researchers who want to contribute to automatic image captioning.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_37-Image_Captioning_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Firm’s Size on Corporate Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110536</link>
        <id>10.14569/IJACSA.2020.0110536</id>
        <doi>10.14569/IJACSA.2020.0110536</doi>
        <lastModDate>2020-05-30T15:41:50.0530000+00:00</lastModDate>
        
        <creator>Meiryani </creator>
        
        <creator>Olivia</creator>
        
        <creator>Jajat Sudrajat</creator>
        
        <creator>Zaidi Mat Daud</creator>
        
        <subject>Firm size; financial performance; return on assets; market to book value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The purpose of this study is to determine the effect of firm size on firms&#39; financial performance, based on 55 manufacturing-sector companies listed on the Indonesia Stock Exchange. The data analysis is conducted using R Studio software. The study uses panel data analysis with a random effects model. The results of this study are: (1) firm size has no effect on financial performance as proxied by return on assets; (2) firm size has no effect on financial performance as proxied by market-to-book value.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_36-The_Effect_of_Firms_Size_on_Corporate_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimization of Spectrum Fragmentation for Improvement of the Quality of Service in Multifiber Elastic Optical Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110535</link>
        <id>10.14569/IJACSA.2020.0110535</id>
        <doi>10.14569/IJACSA.2020.0110535</doi>
        <lastModDate>2020-05-30T15:41:50.0400000+00:00</lastModDate>
        
        <creator>Boris Stephane ZOUNEME</creator>
        
        <creator>Georges Nogbou ANOH</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Routing and spectrum allocation; fragmentation; multifiber elastic network; quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Internet data traffic has grown considerably in recent decades. In view of this exponential and dynamic growth, elastic optical networks are emerging as a promising solution for today&#39;s flexibly allocated bandwidth transmission technologies. The setup and release of dynamic connections with different spectrum bandwidths and data rates leads over time to spectrum fragmentation in the network. Single-fiber elastic optical networks thus face the problem of optical spectrum fragmentation, which refers to small, isolated, non-aligned spectrum segments and is a critical issue for elastic optical network researchers. With the advent of multifiber networks, fragmentation has become more pronounced, resulting in a high blocking ratio in multifiber elastic optical networks. In this paper, we propose a new routing and spectrum allocation algorithm to minimize fragmentation in multifiber elastic optical networks. In the first step, we define as many virtual topologies as there are fibers; for each virtual topology, the k shortest paths are determined to find the candidate paths between the source and the destination, according to the minimization of a proposed parameter called allocation cost. In the second step, we apply the resource allocation algorithm, followed by the choice of the optimal path with minimum energy cost. Blocking probability and spectrum utilization are used to evaluate the performance of our algorithm. Simulation results show the effectiveness of our proposed approach and algorithm.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_35-Minimization_of_Spectrum_Fragmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Workflow Scheduling Algorithm for Reducing Data Transfers in Cloud IaaS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110534</link>
        <id>10.14569/IJACSA.2020.0110534</id>
        <doi>10.14569/IJACSA.2020.0110534</doi>
        <lastModDate>2020-05-30T15:41:50.0070000+00:00</lastModDate>
        
        <creator>Jean Edgard GNIMASSOUN</creator>
        
        <creator>Tchimou N’TAKPE</creator>
        
        <creator>Gokou Herv&#233; Fabrice DIEDIE</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Workflow scheduling; makespan reduction; multi-cores virtual machine; data-intensive workflows; IaaS cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Cloud IaaS makes it easy to obtain homogeneous multi-core machines (whether &quot;bare metal&quot; machines or virtual machines), each of which can have high-performance SSD disks. This makes it possible to distribute the files produced during the execution of a workflow across different machines in order to minimize the overhead associated with transferring these files. In this paper, we propose a scheduling algorithm called WSRDT (Workflow Scheduling Reducing Data Transfers) whose purpose is to minimize the makespan (execution time) of data-intensive workflows by reducing data transfers between dependent tasks over the network. Intermediate files produced by tasks are stored locally on the disk of the machine where the tasks were executed. We experimentally verify that increasing the number of cores per machine reduces the overhead due to data transfers over the network. Experiments with a real workflow show the advantages of the presented algorithm. Data-driven scheduling significantly reduces the execution time and the volume of data transferred over the network; our approach outperforms one of the best state-of-the-art algorithms, which we adapted to our hypotheses.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_34-A_Workflow_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm with Comprehensive Sequential Constructive Crossover for the Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110533</link>
        <id>10.14569/IJACSA.2020.0110533</id>
        <doi>10.14569/IJACSA.2020.0110533</doi>
        <lastModDate>2020-05-30T15:41:49.9900000+00:00</lastModDate>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <subject>Genetic algorithm; reverse greedy sequential constructive crossover; comprehensive sequential constructive crossover; travelling salesman problem; NP-hard</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The travelling salesman problem (TSP) is a very famous NP-hard problem in operations research as well as in computer science. To solve the problem, several genetic algorithms (GAs) have been developed, which depend primarily on the crossover operator. Crossover operators are classified as distance-based crossover operators and blind crossover operators. Distance-based crossover operators use distances between nodes to generate the offspring(s), whereas blind crossover operators are independent of any problem-specific information, except that they follow the problem’s constraints. Selecting a better crossover operator can lead to a more successful GA. Several crossover operators are available in the literature for the TSP, but most of them do not lead to good GAs. In this study, we propose the reverse greedy sequential constructive crossover (RGSCX) and then the comprehensive sequential constructive crossover (CSCX) to develop better GAs for solving the TSP. The usefulness of our proposed crossover operators is shown by comparison with some distance-based crossover operators on TSPLIB instances. It can be concluded from the comparative study that our proposed operator CSCX is the best crossover in this study for the TSP.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_33-Genetic_Algorithm_with_Comprehensive_Sequential.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach to Predicting Learner Performance with Reduced Forgetting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110532</link>
        <id>10.14569/IJACSA.2020.0110532</id>
        <doi>10.14569/IJACSA.2020.0110532</doi>
        <lastModDate>2020-05-30T15:41:49.9770000+00:00</lastModDate>
        
        <creator>Dagou Dangui Augustin Sylvain Legrand KOFFI</creator>
        
        <creator>Tchimou N’TAKPE</creator>
        
        <creator>Assohoun ADJE</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Performance prediction; e-learning; artificial neural networks; forgetting factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Work on predicting learner performance allows researchers, through machine learning methods, to contribute to the improvement of e-learning. This improvement gradually enables e-learning to be promoted and adopted by educational institutions around the world. Neural networks, widely used in various performance prediction works, have achieved several successes. However, factors that are highly influential in the field of learning have not been explored in machine learning models. For this reason, our study attempts to show the importance of the forgetting factor in the learning system, and thereby to contribute to improving the accuracy of performance predictions. The aim is to draw the attention of researchers in this field to very influential factors that remain unexploited. Our model takes the forgetting factor into account in neural networks. The objective is to show the importance of attenuating forgetting for the quality of performance predictions in e-learning. Our model is compared with those based on Random Forest and linear regression algorithms. The results of our study show, first, that neural networks (95.20%) perform better than Random Forest (95.15%) and linear regression (93.80%). Then, with the attenuation of forgetting, these algorithms give 96.63%, 95.85% and 93.80%, respectively. This work allowed us to show the great relevance of forgetting in neural networks. Thus, exploring other unexploited factors will yield better performance prediction models.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_32-A_New_Approach_to_Predicting_Learner_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Heart Diseases (PHDs) based on Multi-Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110531</link>
        <id>10.14569/IJACSA.2020.0110531</id>
        <doi>10.14569/IJACSA.2020.0110531</doi>
        <lastModDate>2020-05-30T15:41:49.9600000+00:00</lastModDate>
        
        <creator>Amirah Al Shammari</creator>
        
        <creator>Haneen al Hadeaf</creator>
        
        <creator>Hedia Zardi</creator>
        
        <subject>Classification; diseases; heart-attack; multi-classifier; heart disease detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>At present, the number of articles on Heart Disease Detection (HDD) based on classification returned by the Google Scholar search engine exceeds 17,000. The medical sector is one of the most important fields that benefit from machine learning. Heart diseases (HDs) are considered the leading cause of death worldwide, as it is difficult for doctors to predict them early. Therefore, HDD is highly required. Today, the health sector contains huge amounts of data with hidden information that can be essential for making diagnostic decisions. In this paper, a new diagnostic model for the detection of HDs is based on a multi-classifier applied to the heart disease dataset, which consists of 270 instances and 13 attributes. Our multi-classifier is composed of Artificial Neural Network (ANN), Na&#239;ve Bayes (NB), J48, and REPTree classifiers, and selects the most accurate of them. In addition, the most effective features for prediction are determined by applying feature selection using the “GainRatioAttributeEval” technique and &quot;Ranker&quot; method based on the full training set. Experimental results show that the NB classifier is the best, and our model yields over 85% accuracy using the WEKA tool.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_31-Prediction_of_Heart_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Science Mapping Analysis of Textual Emotion Mining in Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110530</link>
        <id>10.14569/IJACSA.2020.0110530</id>
        <doi>10.14569/IJACSA.2020.0110530</doi>
        <lastModDate>2020-05-30T15:41:49.9300000+00:00</lastModDate>
        
        <creator>Shivangi Chawla</creator>
        
        <creator>Monica Mehrotra</creator>
        
        <subject>Emotion mining; emotion models; bibliometric analysis; science mapping analysis; co-citation analysis; network analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Textual Emotion Mining (TEM) tackles the problem of analyzing text in terms of the emotions it expresses or evokes. It focuses on a series of approaches, methods, and tools to help understand human emotions. This understanding would play a pivotal role in developing relevant systems to meet human needs. The field has drawn significant interest from researchers worldwide. This article carries out a science mapping analysis of TEM literature indexed in the Web of Science (WoS) to provide quantitative and qualitative insight into TEM research. To explain the evolution of mainstream contents, various bibliometric indicators and metrics are used to identify annual publication counts, authorship patterns, and the performance of countries/regions and institutes. To further supplement this study, various types of network analysis are also performed, such as co-citation analysis, co-occurrence analysis, bibliographic coupling, and co-authorship pattern analysis. Additionally, a fairly comprehensive manual analysis of top-cited and most-used journal and proceedings papers is also conducted to understand the growth and evolution of this domain. To the authors’ knowledge, this manuscript provides the first thorough investigation of TEM&#39;s research status through a bibliometric examination of scientific publications. Expedient results are recorded that will allow TEM researchers to uncover growth patterns, seek collaborations, enhance the selection of research topics, and gain a holistic view of the aggregate progress in the domain. The presented facts and analysis of TEM will help the research community to carry out future studies.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_30-A_Comprehensive_Science_Mapping_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Principal Component Analysis on Morphological Variability of Critical Success Factors for Enterprise Resource Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110529</link>
        <id>10.14569/IJACSA.2020.0110529</id>
        <doi>10.14569/IJACSA.2020.0110529</doi>
        <lastModDate>2020-05-30T15:41:49.9130000+00:00</lastModDate>
        
        <creator>Ayogeboh Epizitone</creator>
        
        <creator>Oludayo. O. Olugbara</creator>
        
        <subject>Enterprise resources; morphological variability; principal component; resource planning; success factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The concept of critical success factors (CSFs) has been widely used as a measure to tackle the hurdles associated with numerous implementations of enterprise resource planning (ERP) systems. This study evaluates the morphological variability of CSFs using the analytical principal component analysis technique to identify principal components (PCs) that can be adopted for a successful ERP system implementation. A dataset of 205 CSFs from 127 different studies was evaluated for morphological variability. According to the results, 66 PCs were identified and ranked accordingly. The first 49 PCs, with eigenvalues greater than 1, accounted for 89.67% of the variability recorded. The first 6 PCs respectively accounted for 13.67%, 19.37%, 24.67%, 29.41%, 33.52% and 36.94% cumulative variation. In general, the graphical illustration of the study results shows a palpable division between the taxonomic groups for 3 PCs.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_29-Principal_Component_Analysis_on_Morphological_Variability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Play-Centric Designing of a Serious Game Prototype for Low Vision Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110528</link>
        <id>10.14569/IJACSA.2020.0110528</id>
        <doi>10.14569/IJACSA.2020.0110528</doi>
        <lastModDate>2020-05-30T15:41:49.8830000+00:00</lastModDate>
        
        <creator>Nurul Izzah Othman</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <subject>Serious game; play-centric design; accessibility; low vision; low fidelity prototype</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Currently, with the advancement of Information and Communications Technology (ICT), the gaming industry has become one of the fastest growing industries. This trend has led to the development of serious games as an alternative tool for creating effective learning experiences. However, most educational applications such as serious games rely mainly on visuals such as graphics and animations, which pose challenges for low vision children. Visually impaired users, especially children with low vision, face difficulty using these applications because they have problems seeing the highly visual elements of the games. Accessibility refers to how accessible a certain software application is to disabled users. Several accessibility aspects should be considered when designing user interfaces for children with low vision; thus, a game designed to fulfill their needs is required. However, the challenge of serious game design is not only to consider users’ accessibility needs but also playability aspects, so that visually impaired children can enjoy playing regardless of their disabilities. This paper presents a study on designing a low fidelity serious game prototype for low vision children using a play-centric design approach, focusing on playability, to obtain feedback from low vision children. Based on users’ feedback, the game prototype will then be refined to improve the game design.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_28-Play_Centric_Designing_of_a_Serious_Game_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generalized Approach to Analysis of Multifractal Properties from Short Time Series</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110527</link>
        <id>10.14569/IJACSA.2020.0110527</id>
        <doi>10.14569/IJACSA.2020.0110527</doi>
        <lastModDate>2020-05-30T15:41:49.8670000+00:00</lastModDate>
        
        <creator>Lyudmyla Kirichenko</creator>
        
        <creator>Abed Saif Ahmed Alghawli</creator>
        
        <creator>Tamara Radivilova</creator>
        
        <subject>Fractal time series; multifractal analysis; estimation of multifractal characteristics; generalized Hurst exponent; practical applications of fractal analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The paper considers a generalized approach to time series multifractal analysis. The focus of the research is on the correct estimation of multifractal characteristics from short time series. Based on numerical modeling and estimation, the main disadvantages and advantages of the sample fractal characteristics obtained by three methods are studied: multifractal detrended fluctuation analysis, wavelet transform modulus maxima, and multifractal analysis using the discrete wavelet transform. The generalized Hurst exponent was chosen as the basic characteristic for comparing the accuracy of the methods. A test statistic for determining the monofractal properties of a time series using multifractal detrended fluctuation analysis is proposed. A generalized approach to estimating the multifractal characteristics of short time series is developed, and practical recommendations for its implementation are proposed. A significant part of the study is devoted to practical applications of fractal analysis. The proposed approach is illustrated by examples of multifractal analysis of various real fractal time series.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_27-Generalized_Approach_to_Analysis_of_Multifractal_Properties.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radar GPR Application to Explore and Study Archaeological Sites: Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110526</link>
        <id>10.14569/IJACSA.2020.0110526</id>
        <doi>10.14569/IJACSA.2020.0110526</doi>
        <lastModDate>2020-05-30T15:41:49.8370000+00:00</lastModDate>
        
        <creator>Ahmed Faize</creator>
        
        <creator>Gamil Alsharahi</creator>
        
        <creator>Mohammed Hamdaoui</creator>
        
        <subject>Archaeological sites; exploring; ground penetrating radar; processing data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The issue of exploring and searching for archaeological sites is very important for a greater knowledge of the history of ancient nations and peoples. Recently, Ground Penetrating Radar (GPR) technology has emerged for detecting buried objects and studying depths of up to tens of meters. This work aims to apply this technique to the study and exploration of some archaeological sites using a 500 MHz antenna. The study has proven the effectiveness and success of this approach. The obtained data were processed with Reflexw, one of the important programs used for this purpose.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_26-Radar_GPR_Application_to_Explore_and_Study_Archaeological_Sites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indexed Metrics for Link Prediction in Graph Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110525</link>
        <id>10.14569/IJACSA.2020.0110525</id>
        <doi>10.14569/IJACSA.2020.0110525</doi>
        <lastModDate>2020-05-30T15:41:49.8030000+00:00</lastModDate>
        
        <creator>Marcus Lim</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <subject>Link prediction; social network analysis; criminal network; deep reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>With the explosive growth of the Internet and the desire to harness the value of the information it contains, the prediction of possible links (relationships) between key players in social networks based on graph-theory principles has garnered great attention in recent years. Consequently, many fields of scientific research have converged in the development of graph analysis techniques to examine the structure of social networks with a very large number of users. However, the relationship between persons within the social network may not be evident when the data-capture process is incomplete or a relationship may have not yet developed between participants who will establish some form of actual interaction in the future. As such, the link-prediction metrics for certain social networks such as criminal networks, which tend to have highly inaccurate data records, may need to incorporate additional circumstantial factors (metadata) to improve their predictive accuracy. One of the key difficulties in link-prediction methods is extracting the structural attributes necessary for the classification of links. In this research, we analysed a few key structural attributes of a network-oriented dataset based on proposed social network analysis (SNA) metrics for the development of link-prediction models. By combining structural features and metadata, the objective of this research was to develop a prediction model that leverages the deep reinforcement learning (DRL) classification technique to predict links/edges even on relatively small-scale datasets, which can constrain the ability to train supervised machine-learning models that have adequate predictive accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_25-Indexed_Metrics_for_Link_Prediction_in_Graph_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid based Energy Efficient Cluster Head Selection using Camel Series Elephant Herding Optimization Algorithm in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110524</link>
        <id>10.14569/IJACSA.2020.0110524</id>
        <doi>10.14569/IJACSA.2020.0110524</doi>
        <lastModDate>2020-05-30T15:41:49.7900000+00:00</lastModDate>
        
        <creator>N. Lavanya</creator>
        
        <creator>T. Shankar</creator>
        
        <subject>Camel algorithm; Cluster Head Selection; Elephant Herding Optimization; meta-heuristic algorithm; network lifespan; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The rapid growth in wireless technology is enabling a variety of advances in wireless sensor networks (WSNs). By providing sensing capabilities and efficient wireless communication, WSNs are becoming an important factor in day-to-day life. WSNs have many commercial, industrial, and telecommunication applications. Maximizing network lifespan is a primary objective in WSNs, as the sensor nodes are powered by non-rechargeable batteries. The main challenges in WSNs are coverage area, network lifetime, and data aggregation. Balanced node establishment (clustering) is the foremost strategy for extending the entire network&#39;s lifetime by aggregating the sensed information at the cluster head. Recent research trends suggest meta-heuristic algorithms for the intelligent selection of ideal Cluster Heads (CHs). Existing Cluster Head Selection (CHS) algorithms suffer from inconsistent trade-offs between exploration and exploitation and from global-best search constraints. In this research, a novel Camel series Elephant Herding Optimization (CSEHO) algorithm is proposed, which enhances the random occurrences of the Camel algorithm with the Elephant Herding Optimization algorithm for optimal CHS. The Camel algorithm imitates the itinerant actions of a camel in the desert during the scavenging procedure. The visibility monitoring condition of the Camel algorithm improves the efficiency of exploitation, whereas the exploration inefficiency of the Camel algorithm is compensated optimally by the Elephant Herding Optimization operators (clan and separator). The superior performance of the proposed CSEHO algorithm is validated by comparing it with various existing CHS algorithms. The overall attainment of the proposed CSEHO algorithm is 21.01%, 31.21%, 44.08%, 67.51%, and 85.66% better than that of EHO, CA, PSO, LEACH, and DT, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_24-Hybrid_based_Energy_Efficient_Cluster_Head_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coronavirus Social Engineering Attacks:  Issues and Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110523</link>
        <id>10.14569/IJACSA.2020.0110523</id>
        <doi>10.14569/IJACSA.2020.0110523</doi>
        <lastModDate>2020-05-30T15:41:49.7730000+00:00</lastModDate>
        
        <creator>Ahmed Alzahrani</creator>
        
        <subject>Social engineering; coronavirus; COVID-19; phishing; vishing; smishing; scams; working remotely; cybersecurity; security awareness; human security behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>During the current coronavirus pandemic, cybercriminals are exploiting people’s anxieties to steal confidential information, distribute malicious software, perform ransomware attacks, and carry out other social engineering attacks. The number of social engineering attacks is increasing day by day due to people&#39;s failure to recognize the attacks. Therefore, there is an urgent need for solutions that help people understand social engineering attacks and techniques. This paper helps individuals and industry by reviewing the most common coronavirus social engineering attacks and provides recommendations for responding to such attacks. The paper also discusses the psychology behind social engineering and introduces security awareness as a solution to reduce the risk of social engineering attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_23-Coronavirus_Social_Engineering_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Model for Mining Outlier Opinions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110522</link>
        <id>10.14569/IJACSA.2020.0110522</id>
        <doi>10.14569/IJACSA.2020.0110522</doi>
        <lastModDate>2020-05-30T15:41:49.7570000+00:00</lastModDate>
        
        <creator>Neama Hassan</creator>
        
        <creator>Laila A. Abd-Elmegid</creator>
        
        <creator>Yehia K. Helmy</creator>
        
        <subject>Opinion mining; sentiment analysis; anomaly detection; outliers; reviews; text analysis; natural language processing; rapidminer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In the internet era, opinion mining has become a critical technique used in many applications. The internet offers users a unique opportunity to express and share their views and experiences anywhere and at any time through various channels such as online reviews, personal blogs, Facebook, Twitter, and companies’ websites. This treasure of user-generated online data plays an essential role in decision-making processes and has the ability to make radical changes in several fields. Although opinionated text can provide significantly invaluable information for the wide community, whether individuals, businesses, or government, outlier or anomalous opinions can have the same impact in the opposite manner, harming these fields. Consequently, there is an urgent need to develop techniques that detect outlier opinions and avoid their negative impacts on the several application domains that rely on opinion mining. In this paper, an efficient model for mining outlier opinions is proposed. The proposed MOoM model, which stands for Mining Outlier Opinions Model, offers for the first time the ability to mine outlier opinions from a product’s free-text reviews. Accordingly, it can help decision makers improve the overall sentiment analysis process and perform further analysis on the outlier opinions to better understand them and avoid their negative impact. The proposed model consists of three modules: a data preprocessing module, an opinion mining module, and an outlier opinion detection module. The model utilizes the lexicon-based approach to extract sentiment polarity from each review in the dataset, and it uses a distance-based outlier detection algorithm to produce a graded list of review holders with outlier opinions. An experimental study is presented to evaluate the proposed model, and the results prove the model’s ability to detect outlier opinions in product reviews effectively. The model is adaptable to fields other than product reviews by customizing its modules’ layers.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_22-An_Efficient_Model_for_Mining_Outlier_Opinions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Similarity between the Sanskrit Documents using the Context of the Corpus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110521</link>
        <id>10.14569/IJACSA.2020.0110521</id>
        <doi>10.14569/IJACSA.2020.0110521</doi>
        <lastModDate>2020-05-30T15:41:49.7400000+00:00</lastModDate>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Prafulla B. Bafna</creator>
        
        <subject>Cosine; dimension reduction; sanskrit; synset; matthews correlation coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Identifying the similarity between two documents is a challenging but important task. It benefits various applications such as recommender systems and plagiarism detection. One of the most popular approaches to processing a text document is the document term matrix (DTM). The proposed approach processes Sanskrit, one of the oldest, largely untouched, and most morphologically complex languages, and builds a document term matrix for Sanskrit (DTMS) and a document synset matrix for Sanskrit (DSMS). DTMS uses the frequency of the term, whereas DSMS uses the frequency of the synset instead of the term and thereby contributes to dimension reduction. The proposed approach considers the semantics and context of the corpus to solve the problem of polysemy. More than 760 documents, including Subhashitas and stories, are processed together. F1 score, precision, Matthews correlation coefficient (MCC), which is the most balanced measure, and accuracy are used to demonstrate the superiority of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_21-Measuring_the_Similarity_between_the_Sanskrit_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering-Based Trajectory Outlier Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110520</link>
        <id>10.14569/IJACSA.2020.0110520</id>
        <doi>10.14569/IJACSA.2020.0110520</doi>
        <lastModDate>2020-05-30T15:41:49.7270000+00:00</lastModDate>
        
        <creator>Eman O. Eldawy</creator>
        
        <creator>Hoda M.O. Mokhtar</creator>
        
        <subject>Data mining; outlier detection; trajectory data processing; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The improvement in mobile computing techniques has generated massive trajectory data, which represent the mobility of moving objects like vehicles, animals, and people. Mining trajectory data, and especially outlier detection in trajectory data, is an attractive and challenging topic that has fascinated many researchers. In this paper, we propose a Clustering-Based Trajectory Outlier Detection algorithm (CB-TOD). The proposed algorithm partitions a trajectory into line segments and reduces those line segments to a smaller set (summary trajectory, SS(t)) without affecting the spatial properties of the original trajectory. After that, the CB-TOD algorithm uses a clustering method to identify, for a trajectory, the cluster with the smallest number of segments and few neighbors as sub-trajectory outliers. The proposed algorithm can also detect outlier trajectories in the dataset. The main advantage of the CB-TOD algorithm is that it reduces the computational time of outlier detection, especially for big trajectory data, without affecting the quality of the outlier detection results. Experimental results demonstrate that CB-TOD outperforms existing state-of-the-art algorithms in identifying both outlier sub-trajectories and outlier trajectories in a real trajectory dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_20-Clustering_Based_Trajectory_Outlier_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Study between Human Urban Planning and Geographic Information Systems: “The Case of the City of Casablanca”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110519</link>
        <id>10.14569/IJACSA.2020.0110519</id>
        <doi>10.14569/IJACSA.2020.0110519</doi>
        <lastModDate>2020-05-30T15:41:49.6930000+00:00</lastModDate>
        
        <creator>Mohamed Rtal</creator>
        
        <creator>Mostafa Hanoun</creator>
        
        <subject>Casablanca; human urban; urbanization of information systems; urban expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Since the early 1910s, the city of Casablanca has experienced urban and civic expansion that has turned it into a regional center, but this expansion has not occurred in an organized and equal manner. This has resulted in significant overlap between systematic, structured reconstruction and random reconstruction. While the geographic researcher can study this transformation through observation in the field, describing the phenomena and visualizing them with the naked eye, geographic information systems offer a means of expression that allows him to study it successfully and more precisely. Studying the stages of urbanization that the city of Casablanca has gone through using geographic information systems will give actors and researchers a clear vision of how this expansion came about, its positive and negative implications, and its prospects, and will thus help in preparing and managing the urban area of the city. The study period extends from 1910 to 2020, and we relied on a set of documents, satellite images, aerial photos, and old maps.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_19-Analytical_Study_between_Human_Urban_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning and Statistical Modelling for Prediction of Novel COVID-19 Patients Case Study: Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110518</link>
        <id>10.14569/IJACSA.2020.0110518</id>
        <doi>10.14569/IJACSA.2020.0110518</doi>
        <lastModDate>2020-05-30T15:41:49.6800000+00:00</lastModDate>
        
        <creator>Ebaa Fayyoumi</creator>
        
        <creator>Sahar Idwan</creator>
        
        <creator>Heba AboShindi</creator>
        
        <subject>Novel COVID-19; machine learning; logistic regression; support vector machine; multi-layer perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Since December 2019, the ongoing COVID-19 pandemic has changed the world’s view on life. This requires the use of all kinds of technology to help identify coronavirus patients and control the spread of the disease. In this paper, an online questionnaire was developed as a tool to collect data. These data were used as input for various prediction models based on a statistical model (Logistic Regression, LR) and machine learning models (Support Vector Machine, SVM, and Multi-Layer Perceptron, MLP). These models were utilized to predict potential COVID-19 patients based on their signs and symptoms. The MLP showed the best accuracy (91.62%) compared to the other models, while the SVM showed the best precision (91.67%).</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_18-Machine_Learning_and_Statistical_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Diffusion Phenomenon using Information from Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110517</link>
        <id>10.14569/IJACSA.2020.0110517</id>
        <doi>10.14569/IJACSA.2020.0110517</doi>
        <lastModDate>2020-05-30T15:41:49.6630000+00:00</lastModDate>
        
        <creator>Kohei Otake</creator>
        
        <creator>Takashi Namatame</creator>
        
        <subject>Twitter; diffusion phenomenon; natural language processing; mixture model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Social media services, including social networking services (SNSs) and microblogging services, are gaining prominence. SNSs have a variety of information on products and services, such as product introductions, utilization methods, and reviews. It is important for companies to utilize SNSs to understand the various ways of engaging with them. Against this backdrop, numerous studies have focused on marketing activities (e.g., consumer behavior and sales promotion) using information on the internet from sources such as SNSs, blogs, and news sites. In particular, to understand the dissemination of information on the Internet, various researchers have undertaken studies pertaining to the diffusion phenomenon occurring in the real world. Here, topic diffusion is a phenomenon whereby a certain topic is shared with several other users. In this study, we aimed to evaluate the diffusion phenomenon on Twitter. In particular, we focused on the state of a targeted topic and analyzed the estimation of the topic using natural language processing (NLP) and time series analysis. First, we collected tweets containing four titles of animation broadcasts using hashtags. Approximately 250,000 tweets were posted on Twitter in a month. Second, we used NLP methods such as morphological analysis and N-gram analysis to characterize the contents of each title. Third, using the time series data for the tweets, we created a mixture model that replicated the diffusion phenomenon. We clustered the diffusion phenomenon using this model. Finally, we combined the features related to the content of the tweets and the results of the clustering of the diffusion phenomenon and evaluated them.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_17-Evaluation_of_the_Diffusion_Phenomenon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Transient Fault-Injection and Fault-Tolerant System for Digital Circuits on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110516</link>
        <id>10.14569/IJACSA.2020.0110516</id>
        <doi>10.14569/IJACSA.2020.0110516</doi>
        <lastModDate>2020-05-30T15:41:49.6470000+00:00</lastModDate>
        
        <creator>Sharath Kumar Y N</creator>
        
        <creator>Dinesha P</creator>
        
        <subject>Digital circuits; transient fault; fault injection; fault tolerant; triple modular redundancy; dual modular redundancy; majority voter logic; self-voter logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>A Fault-Tolerant System is necessary to improve the reliability of digital circuits in the presence of fault injection, and it also improves system performance with better fault coverage. In this work, an efficient Transient Fault-Injection System (FIS) and Fault-Tolerant System (FTS) are designed for digital circuits. The FIS includes Berlekamp-Massey Algorithm (BMA) based LFSRs with fault logic, followed by a one-hot encoder register that generates the faults. The FTS is designed using Triple Modular Redundancy (TMR) and Dual Modular Redundancy (DMR). The TMR module is designed using Majority Voter Logic (MVL), and the DMR module using Self-Voter Logic (SVL), for digital circuits such as synchronous and asynchronous circuits. Four different MVL approaches are designed in the TMR module for digital circuits. The FIS-FTS module is designed in the Xilinx ISE 14.7 environment and implemented on an Artix-7 FPGA. The synthesis results, including chip area, gate count, delay, and power, are analyzed along with fault tolerance and coverage for the given digital circuits. The fault tolerance is analyzed using the ModelSim simulator. The FIS-FTS module achieves an average of 99.17% fault coverage for both synchronous and asynchronous circuits.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_16-Performance_Analysis_of_Transient_Fault_Injection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Methodologies for Domain Ontology Development: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110515</link>
        <id>10.14569/IJACSA.2020.0110515</id>
        <doi>10.14569/IJACSA.2020.0110515</doi>
        <lastModDate>2020-05-30T15:41:49.6170000+00:00</lastModDate>
        
        <creator>Abdul Sattar</creator>
        
        <creator>Ely Salwana Mat Surin</creator>
        
        <creator>Mohammad Nazir Ahmad</creator>
        
        <creator>Mazida Ahmad</creator>
        
        <creator>Ahmad Kamil Mahmood</creator>
        
        <subject>Knowledge management; interlocking institutional worlds; domain ontology; comparison of methodologies; ontology development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Interlocking Institutional Worlds (IWs) is a concept explaining the need to interoperate between institutions (or players) to solve problems of common interest in a given domain. Managing knowledge in the IWs domain is complex, but promoting knowledge sharing based on standards and common terms agreeable to all players is essential and is something that must be established. In this sense, ontologies, as a conceptual tool and a key component of knowledge-based systems, have been used by organizations for effective knowledge management of the domain of discourse. Many methodologies have been proposed by several researchers during the last decade. However, designing a domain ontology for IWs needs a well-defined ontology development methodology. Therefore, in this article, a survey has been conducted to compare ontology development methodologies between 2015 and 2020. The purpose of this survey is to identify the limitations and benefits of previously developed ontology development methodologies. The criteria for the comparison of methodologies have been derived from evolving trends in the literature. Our findings give some guidelines that help to define a suitable methodology for designing any domain ontology under the domain of interlocking institutional worlds.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_15-Comparative_Analysis_of_Methodologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Approach for Storage of Big Data Streams in Distributed Stream Processing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110514</link>
        <id>10.14569/IJACSA.2020.0110514</id>
        <doi>10.14569/IJACSA.2020.0110514</doi>
        <lastModDate>2020-05-30T15:41:49.6000000+00:00</lastModDate>
        
        <creator>Sultan Alshamrani</creator>
        
        <creator>Quadri Waseem</creator>
        
        <creator>Abdullah Alharbi</creator>
        
        <creator>Wael Alosaimi</creator>
        
        <creator>Hamza Turabieh</creator>
        
        <creator>Hashem Alyami</creator>
        
        <subject>Distributed stream databases; storage optimization; stream archive storage; time expiration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Besides centralized management, processing, and querying, storage is one of the important components of big data management. There is always a huge requirement to store immense volumes of heterogeneous data in different formats. In big data stream processing applications, storage is given priority and always plays a big role in historical data analysis. During stream processing, some of the incoming data and the intermediate results are a good source of future samples. These samples can be used for future evaluation to eliminate the numerous mistakes of storing and maintaining big data streams. Hence, a big data stream application requires efficient storage support for historical queries. Researchers, scientists, and academicians are working hard to develop a sophisticated storage mechanism that keeps the most useful data for future reference by means of stream archive storage. However, a stream processing system cannot store the whole incoming data stream for future reference. A technique is needed to get rid of expired data and free space for more incoming data in archive storage. Hence, keeping in view the storage space limitation, integration issues, and the associated cost, we try to optimize the stream archive storage and free more space for future data. The proposed enhanced algorithm helps delete obsolete (retention-expired) data and free space for new incoming data on a distributed platform. Our paper presents an Enhanced Time Expired Algorithm (ETEA) for stream archive storage in a distributed environment, which removes obsolete data based on time expiration and provides space for new incoming data for historical data analysis during the skew time (hot spots). We also evaluated the efficiency of our algorithm using the skew factor. The experimental results show that our approach is 98% efficient and faster than other conventional techniques.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_14-An_Efficient_Approach_for_Storage_of_Big_Data_Streams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Difficulties in Teaching Online with Blackboard Learn: Effects of the COVID-19 Pandemic in the Western Branch Colleges of Qassim University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110512</link>
        <id>10.14569/IJACSA.2020.0110512</id>
        <doi>10.14569/IJACSA.2020.0110512</doi>
        <lastModDate>2020-05-30T15:41:49.5700000+00:00</lastModDate>
        
        <creator>Fahad Alturise</creator>
        
        <subject>COVID-19; blackboard learn; e-learning; learning management system; pandemic; difficulties; Qassim University</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The global COVID-19 pandemic has compelled educational institutions to shift from face-to-face teaching methods to fully online courses. This was possible with the help of information technology advances, which led to the creation of Blackboard Learn, a Learning Management System (LMS). By transitioning their systems to this newly developed LMS, the western branch colleges of Qassim University in the Kingdom of Saudi Arabia were able to support e-learning. To investigate the influence of online e-courses on educational institutions and learning outcomes, this paper surveys both faculty members and students. The survey mainly focuses on course objectives, practical skills, faculty members' responses to queries and discussion, explanations of applied courses, problem-solving, and improving teamwork skills. A comprehensive investigation of the faculty reveals that 59.08% of faculty members believe it is challenging to meet course objectives due to the lack of practical lab work and other detailed knowledge exchange in applied courses, which leaves the faculty unsatisfied with online courses when compared with traditional systems. Moreover, 77.17% of the students think it is difficult to have discussions during online courses in order to resolve queries, and this diminishes their problem-solving capability. In addition, with an online course system, there is no way to physically collaborate in teams and work on team projects to improve teamwork abilities.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_12-Difficulties_in_Teaching_Online_with_Blackboard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Geographical Information System for Mapping Public Schools Distribution in Jeddah City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110513</link>
        <id>10.14569/IJACSA.2020.0110513</id>
        <doi>10.14569/IJACSA.2020.0110513</doi>
        <lastModDate>2020-05-30T15:41:49.5700000+00:00</lastModDate>
        
        <creator>Abdulkader A. Murad</creator>
        
        <creator>Abdulmuakhir I. Dalhat</creator>
        
        <creator>Ammar A Naji</creator>
        
        <subject>GIS; school mapping; educational facilities; geodatabase; spatial and network analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Geographical Information System (GIS) remains a unique tool for school mapping, providing a clear understanding of the nature, planning, and distribution of educational facilities. This study carried out a GIS analysis of the distribution of male primary and secondary schools in Jeddah city, Saudi Arabia, to show the significance of using GIS tools to assist the educational planning authorities in understanding, re-planning, and addressing the location, distribution, and availability challenges of the schools in Jeddah city. A geodatabase for the study area was created, incorporating education and population data collected from the authorities. Spatial and network analyses are utilised to understand the location distribution, student density, and accessibility of the schools in the study region. The analysis results identified the services and student density, the directional growth of the schools, drive-time service areas, and the served and un-served populace, enabling the authorities in Saudi Arabia to make better planning decisions, address present and future challenges in the provision of primary schools to residents, and, most importantly, improve educational services. The findings revealed that shorter travel distances are found in the denser (central) part of the city and that some regions need more schools.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_13-Using_Geographical_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Satellite Image Database Search Engine which Allows Fuzzy Expression of Geophysical Parameters of Queries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110511</link>
        <id>10.14569/IJACSA.2020.0110511</id>
        <doi>10.14569/IJACSA.2020.0110511</doi>
        <lastModDate>2020-05-30T15:41:49.5530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Search engine; fuzzy expression; knowledge base system; membership function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>A satellite image database search engine which allows fuzzy expression of the geophysical parameters of queries is proposed. The search engine is based on a knowledge-based system that allows fuzzy expression of queries, and a prototype system is created and tested. Whereas conventional search systems require the user to know in advance the functions of the search system, information about search keys, and so on, the search engine proposed here guides the search conditions in a conversational form, allowing ambiguous expressions (six adverbial linguistic hedges), so that the user is freed from such annoyance. To make this possible, a membership function for each attribute is defined, and search condition refinement by fuzzy logic is introduced. The results show that the system accepts fuzzy expressions of queries as well as comprehensive dialogue between users and the system.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_11-Satellite_Image_Database_Search_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Prototype of Thai Blockchain-based Voting System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110510</link>
        <id>10.14569/IJACSA.2020.0110510</id>
        <doi>10.14569/IJACSA.2020.0110510</doi>
        <lastModDate>2020-05-30T15:41:49.5400000+00:00</lastModDate>
        
        <creator>Krittaphas Wisessing</creator>
        
        <creator>Phattaradon Ekthammabordee</creator>
        
        <creator>Thattapon Surasak</creator>
        
        <creator>Scott C.-H. Huang</creator>
        
        <creator>Chakkrit Preuksakarn</creator>
        
        <subject>Blockchain; internet election; hyperledger fabric; data integrity; voting reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In this paper, a prototype of a Thai voting system using blockchain technology (B-VoT) has been successfully designed and developed. Hyperledger Fabric was chosen as the major blockchain infrastructure. A web application was developed to allow voters to vote; in addition, real-time voting results are shown on the same website after the election period has passed. We connected the web application and the blockchain database in order to store the votes as blockchain transactions. With our blockchain Internet election, voters can easily vote on the website, and each vote is automatically stored in the blockchain database as a single blockchain transaction. This voting prototype can assure data integrity because no one can modify any information or voting results. Therefore, our system could have a huge impact on voting reliability as well as rebuild public trust in Thai elections.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_10-The_Prototype_of_Thai_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating Knowledge Environment during Lean Product Development Process of Jet Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110509</link>
        <id>10.14569/IJACSA.2020.0110509</id>
        <doi>10.14569/IJACSA.2020.0110509</doi>
        <lastModDate>2020-05-30T15:41:49.5070000+00:00</lastModDate>
        
        <creator>Zehra C. Araci</creator>
        
        <creator>Ahmed Al-Ashaab</creator>
        
        <creator>Muhammad Usman Tariq</creator>
        
        <creator>Jan H. Braasch</creator>
        
        <creator>M. C. Emre Simsekler</creator>
        
        <subject>Knowledge creation and visualization; knowledge management; new product development; lean product development; set-based concurrent engineering; trade-off curves; aircraft engine noise reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Organizations invest intense resources in their product development processes. This paper aims to create a knowledge environment using trade-off curves during the early stages of the set-based concurrent engineering (SBCE) process of an aircraft jet engine for a reduced noise level at takeoff. Data is collected from a range of products in the same family as the jet engine. Knowledge-based trade-off curves are used as a methodology to create and visualize knowledge from the collected data. Findings showed that this method provides designers with enough confidence to identify a set of design solutions during the SBCE applications.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_9-Creating_Knowledge_Environment_during_Lean_Product.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Hybrid Synchronization Primitives: A Reinforcement Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110508</link>
        <id>10.14569/IJACSA.2020.0110508</id>
        <doi>10.14569/IJACSA.2020.0110508</doi>
        <lastModDate>2020-05-30T15:41:49.4900000+00:00</lastModDate>
        
        <creator>Fadai Ganjaliyev</creator>
        
        <subject>Spinning; sleeping; blocking; spin-then-block; spin-then-park; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The choice of synchronization primitive used to protect shared resources is a critical aspect of application performance and scalability, which has become extremely unpredictable with the rise of multicore machines. Neither of the most commonly used contention management strategies works well for all cases: spinning provides quick lock handoff and is attractive in an undersubscribed situation but wastes processor cycles in oversubscribed scenarios, whereas blocking saves processor resources and is preferred in oversubscribed cases but adds to the critical path by lengthening the lock handoff phase. Hybrids, such as spin-then-block and spin-then-park, tackle this problem by switching between spinning and blocking depending on the contention level on the lock or the system load. Consequently, threads follow a fixed strategy and cannot learn and adapt to changes in system behavior. To this end, it is proposed to use principles of machine learning to formulate hybrid methods as a reinforcement learning problem that will overcome these limitations. In this way, threads can intelligently learn when they should spin or sleep. The challenges of the suggested technique and future work are also briefly discussed.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_8-Adaptive_Hybrid_Synchronization_Primitives.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technologies for Making Reliable Decisions on a Variety of Effective Factors using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110507</link>
        <id>10.14569/IJACSA.2020.0110507</id>
        <doi>10.14569/IJACSA.2020.0110507</doi>
        <lastModDate>2020-05-30T15:41:49.4600000+00:00</lastModDate>
        
        <creator>Yousef Ibrahim Daradkeh</creator>
        
        <creator>Irina Tvoroshenko</creator>
        
        <subject>Attainability; completeness of decision-making; conflict; factor; membership function; state space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The problem of choosing methods to increase the reliability of solutions while reducing the time required to explore the state space of a particular object is presented. Effective factors that can significantly affect the attainability of decision-making goals in a fuzzy process development environment are identified. The selected factors are presented in classical form in natural language, which significantly increases confidence in the decisions made, by determining the maximum and minimum values of the membership functions and the set of criteria, and by using manifest conflict situations when deciding. Information technologies and approaches for identifying the completeness of decision-making goals over a variety of effective factors are proposed. The effectiveness of the proposed solutions is estimated using the fuzzy inference procedure and the Zadeh-Mamdani approach. Software testing of the described methods for evaluating and deciding on a variety of factors was carried out using object-oriented programming. Experimental testing of the realized ideas has confirmed an increase in the reliability of management decision-making in various applied subject areas.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_7-Technologies_for_Making_Reliable_Decisions_on_a_Variety.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clash between Segment-level MT Error Analysis and Selected Lexical Similarity Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110506</link>
        <id>10.14569/IJACSA.2020.0110506</id>
        <doi>10.14569/IJACSA.2020.0110506</doi>
        <lastModDate>2020-05-30T15:41:49.4430000+00:00</lastModDate>
        
        <creator>Marija Brkic Bakaric</creator>
        
        <creator>Kristina Tonkovic</creator>
        
        <creator>Lucia Nacinovic Prskalo</creator>
        
        <subject>Machine translation; evaluation; error analysis; BLEU; CHRF++; MQM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>The aim of this paper is to evaluate the quality of popular machine translation engines on three texts of different genre in a scenario in which both source and target languages are morphologically rich. Translations are obtained from Google Translate and Microsoft Bing engines and German-Croatian is selected as the language pair. The analysis entails both human and automatic evaluation. The process of error analysis, which is time-consuming and often tiresome, is conducted in the user-friendly Windows 10 application TREAT. Prior to annotation, training is conducted in order to familiarize the annotator with MQM, which is used in the annotation task, and the interface of TREAT. The annotation guidelines elaborated with examples are provided. The evaluation is also conducted with automatic metrics BLEU and CHRF++ in order to assess their segment-level correlation with human annotations on three different levels–accuracy, mistranslation, and the total number of errors. Our findings indicate that neither the total number of errors, nor the most prominent error category and subcategory, show consistent and statistically significant segment-level correlation with the selected automatic metrics.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_6-Clash_between_Segment_Level_MT_Error_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Abnormal Behavior Detection Method using Optical Flow Model and OpenPose</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110505</link>
        <id>10.14569/IJACSA.2020.0110505</id>
        <doi>10.14569/IJACSA.2020.0110505</doi>
        <lastModDate>2020-05-30T15:41:49.4130000+00:00</lastModDate>
        
        <creator>Zhu Bin</creator>
        
        <creator>Xie Ying</creator>
        
        <creator>Luo Guohu</creator>
        
        <creator>Chen Lei</creator>
        
        <subject>Image sequence analysis; abnormal behavior recognition; fractional order variational optical flow model; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Abnormal behavior detection and recognition of pedestrians on escalators has always been a challenging task in intelligent video surveillance systems. To cope with this problem, a method combining the optical flow vector of the passenger with human skeleton extraction is proposed. First, an adaptive dual fractional-order optical flow model is used to estimate the optical flow field in scenes with illumination changes, low contrast, and uneven illumination. At the same time, the OpenPose deep convolutional neural network is used to extract the body skeleton, so that persons in the image can be located. Then, the optical flow field and the human skeleton are combined to obtain the optical flow vector of the passenger's head. After that, the optical flow field of the passenger's head and the escalator step under the passenger's feet are used for abnormal behavior detection and recognition, with a random forest employed as the behavior classifier. Experimental results show that our proposed method and its improvement strategy can accurately estimate the optical flow field in real time for low-contrast outdoor videos with insufficient illumination, uneven brightness, and illumination changes, and that the accuracy of abnormal action detection and recognition reaches 97.98% and 92.28%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_5-An_Abnormal_Behavior_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting C&amp;C Server in the APT Attack based on Network Traffic using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110504</link>
        <id>10.14569/IJACSA.2020.0110504</id>
        <doi>10.14569/IJACSA.2020.0110504</doi>
        <lastModDate>2020-05-30T15:41:49.3970000+00:00</lastModDate>
        
        <creator>Cho Do Xuan</creator>
        
        <creator>Lai Van Duong</creator>
        
        <creator>Tisenko Victor Nikolaevich</creator>
        
        <subject>Advanced Persistent Threat (APT); abnormal behavior; network traffic; machine learning; APT detection; Control Server (C&amp;C Server)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>An APT (Advanced Persistent Threat) attack is a dangerous form of attack with clear intentions and targets. APT attacks use a variety of sophisticated, complex methods and technologies to attack targets in order to gain confidential, sensitive information. Currently, the problem of detecting APT attacks still faces many challenges. The reason is that APT attacks are designed specifically for each target, so it is difficult to detect them based on experience or predefined rules. Many different methods have been researched and applied to detect early signs of APT attacks in an organization. Today, one method of great interest is analyzing connections to detect the control server (C&amp;C Server) of an APT attack campaign. This method has great practical significance because if the connection of malware to the control server is detected early, attack campaigns can be quickly prevented. In this paper, we propose a method to detect the C&amp;C Server based on network traffic analysis using machine learning.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_4-Detecting_CC_Server_in_the_APT_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forest Fire Detection System using LoRa Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110503</link>
        <id>10.14569/IJACSA.2020.0110503</id>
        <doi>10.14569/IJACSA.2020.0110503</doi>
        <lastModDate>2020-05-30T15:41:49.3830000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <creator>Paula HOJBOTA</creator>
        
        <subject>LoRa; real-time; long range wide area network; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Millions of hectares of forest worldwide are affected by fires annually, which can lead to the loss of human lives, the destruction of natural flora and fauna, and the loss of raw materials. The problem is even greater in forests that are not guarded and have no communication systems available. Thus, in recent years, various systems have been proposed that use devices based on the Internet of Things (IoT) for real-time forest fire detection. In this paper, a system capable of quickly detecting forest fires over long distances is proposed. The system is developed using LoRa (Long Range) technology based on the LoRaWAN (Long Range Wide Area Network) protocol, which can connect low-power devices distributed over large geographical areas, making it an innovative and highly efficient solution for low-data-rate, low-power transmissions over long ranges.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_3-Forest_Fire_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Speed Control for Networked DC Motor System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110502</link>
        <id>10.14569/IJACSA.2020.0110502</id>
        <doi>10.14569/IJACSA.2020.0110502</doi>
        <lastModDate>2020-05-30T15:41:49.3670000+00:00</lastModDate>
        
        <creator>Sokliep Pheng</creator>
        
        <creator>Luo Xiaonan</creator>
        
        <creator>Rachana Lav</creator>
        
        <creator>Zhongshuai Wang</creator>
        
        <creator>Zetao Jiang</creator>
        
        <subject>The Networked DC Motor System; Observer Design; Robust Speed Control; Data Dropout; LMI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>In this paper, we address the observer-based H∞ output feedback control problem for the communication from the controller to the DC motor, considering data packet dropout, characterized by a Bernoulli random binary distribution, as well as the disturbance. Uncertain parameters have also been considered in the networked DC motor system. Firstly, we use the robust H∞ output feedback control strategy to optimize the controller gain and observer gain to guarantee mean-square stability. The observer-based H∞ output feedback is designed to achieve robust speed control in the mean-square sense and to optimize the parameters of the control system while guaranteeing robust H∞ output feedback performance. Then, when data is transmitted in the control system, we illustrate that the system is stable and that robust speed control can be achieved, as the results demonstrate.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_2-Robust_Speed_Control_for_Networked_DC_Motor_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Service Scheduling Security Model for a Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110501</link>
        <id>10.14569/IJACSA.2020.0110501</id>
        <doi>10.14569/IJACSA.2020.0110501</doi>
        <lastModDate>2020-05-30T15:41:49.2730000+00:00</lastModDate>
        
        <creator>Abdullah Sheikh</creator>
        
        <creator>Malcolm Munro</creator>
        
        <creator>David Budgen</creator>
        
        <subject>Scheduling; security; model; cost; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(5), 2020</description>
        <description>Scheduling tasks on a standalone system can be complex, but scheduling in a cloud environment can be even more complex because of the large number of resources available. An added complexity in a cloud environment is that of security. This paper addresses scheduling from a security point of view, presents a Scheduling Security Model, and evaluates its effectiveness in meeting users' requirements through a number of worked examples with different scenarios.</description>
        <description>http://thesai.org/Downloads/Volume11No5/Paper_1-A_Service_Scheduling_Security_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved CoSaMP Multiuser Detection for Uplink Grant Free NOMA System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104108</link>
        <id>10.14569/IJACSA.2020.01104108</id>
        <doi>10.14569/IJACSA.2020.01104108</doi>
        <lastModDate>2020-05-01T10:35:12.9370000+00:00</lastModDate>
        
        <creator>Saifullah Adnan</creator>
        
        <creator>Yuli Fu</creator>
        
        <creator>Jameel Ahmed Bhutto</creator>
        
        <creator>Junejo Naveed Ur Rehman</creator>
        
        <creator>Raja Asif Wagan</creator>
        
        <creator>Abbas Ghulam</creator>
        
        <subject>MMSE; multi-user detection; CoSaMP; NOMA; MPA; SNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Non-Orthogonal Multiple Access (NOMA) is one of the most prominent technologies for enhancing massive connectivity and spectral efficiency in 5G cellular communication. It serves multiple users in the time, frequency, and code domains at distinct power levels. Message Passing Algorithm (MPA) detection in a multi-user uplink grant-free system requires user activity information at the receiver, which makes it impractical. To circumvent this problem, MPA is combined with Compressed Sensing (CS) based detection, which detects not only user activity but also the signal data. However, the Compressive Sampling Matching Pursuit (CoSaMP) algorithm uses a Zero Forcing (ZF) detector to estimate the signal, and its performance degrades as the Signal-to-Noise Ratio (SNR) increases. Therefore, this paper deploys a Minimum Mean Square Error (MMSE) detector in the CoSaMP algorithm, which enhances detection accuracy and BER performance. The simulation results validate that the proposed algorithm attains better performance than the MPA and the conventional CoSaMP algorithm at high SNR.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_108-An_Improved_CoSaMP_Multiuser_Detection_for_Uplink.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Word Recognition System for Historical Documents using Multiscale Representation Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104107</link>
        <id>10.14569/IJACSA.2020.01104107</id>
        <doi>10.14569/IJACSA.2020.01104107</doi>
        <lastModDate>2020-04-30T16:06:47.0530000+00:00</lastModDate>
        
        <creator>Said Elaiwat</creator>
        
        <creator>Marwan Abu-Zanona</creator>
        
        <subject>Word recognition; multiscale convexity concavity analysis; historical documents; dynamic time warping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In the last decades, huge efforts have been made to develop automated handwriting recognition systems. The task of recognition usually involves several complex processes, including image pre-processing, segmentation, feature extraction, and matching. The task becomes harder for historical documents, which involve skew, document degradation, and structural noise. Despite the success achieved for the English language, the recognition of handwritten Arabic still constitutes a major challenge for many reasons. The characteristics of Arabic, as a Semitic language, differ from those of other languages (e.g., European languages) in several aspects, such as complex structure, implicit characters, concatenation, and writing styles and direction. This work proposes a full recognition system for the task of word recognition from Arabic historical documents. In the proposed system, a novel feature extraction method is presented to define robust features from Arabic words. Prior to feature extraction, each input image is pre-processed and segmented, resulting in segmented words. After that, the features of each word/sub-word are defined based on Multiscale Convexity Concavity (MCC) analysis of the word's contour shape. For feature matching, a circular shift method is proposed to reduce the computational cost, instead of using traditional dynamic time warping (DTW), which exhibits a high computational cost. Finally, the proposed algorithm has been evaluated on a well-known dataset, namely Ibn Sina, and showed high performance for historical documents at a low computational cost.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_107-Arabic_Word_Recognition_System_for_Historical_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Recovery of Terrestrial Wireless Network using Cognitive UAVs in the Disaster Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104106</link>
        <id>10.14569/IJACSA.2020.01104106</id>
        <doi>10.14569/IJACSA.2020.01104106</doi>
        <lastModDate>2020-04-30T16:06:47.0370000+00:00</lastModDate>
        
        <creator>Najam Ul Hasan</creator>
        
        <creator>Prajoona Valsalan</creator>
        
        <creator>Umer Farooq</creator>
        
        <creator>Imran Baig</creator>
        
        <subject>Cognitive radio networks; spectrum allocation; sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Natural disasters such as earthquakes, floods and fires may cause the existing wireless network infrastructure to collapse, leaving behind several disconnected network parts. UAVs could help to establish communication between these disconnected parts using their ability to hover and fly across the affected region. However, UAV deployment faces several problems, including how many UAVs would be sufficient and where they could be placed. Such problems can be addressed centrally in a situation with verified information about the segmented network, such as the number of disconnected parts, the number of nodes in each part and the location of each node. However, a damaged network with unknown information (which is mostly the case) requires a distributed network establishment mechanism. Therefore, this paper proposes an algorithm to restore connectivity among the disconnected parts of the damaged network. Cognitive radio-based UAVs (CR-UAVs) fly into the affected area and try to connect the various parts of the damaged network using the proposed algorithm. The main objective of the proposed algorithm is to connect the different disconnected parts of the broken network with the fewest possible UAVs in the least possible time. The results of the MATLAB simulation illustrate the significance of the proposed algorithm in terms of the number of UAVs used and the total distance they fly.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_106-On_the_Recovery_of_Terrestrial_Wireless_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Muscle Strength Imbalances in Athletes using Motion Analysis Incorporated with Sensory Inputs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104105</link>
        <id>10.14569/IJACSA.2020.01104105</id>
        <doi>10.14569/IJACSA.2020.01104105</doi>
        <lastModDate>2020-04-30T16:06:47.0200000+00:00</lastModDate>
        
        <creator>Sameera S. Vithanage</creator>
        
        <creator>Maneesha S. Ratnadiwakara</creator>
        
        <creator>Damitha Sandaruwan</creator>
        
        <creator>Shiromi Arunathileka</creator>
        
        <creator>Maheshya Weerasinghe</creator>
        
        <creator>Chathuranga Ranasinghe</creator>
        
        <subject>Musculoskeletal imbalance; movement analysis; motion tracking; injury prevention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Movement analysis is one of the commonly used methods in the context of physiotherapy to identify dysfunctions in the human musculoskeletal system. The overhead squat is a popular movement pattern, approved by the NASM (National Academy of Sports Medicine, USA), among the various movement patterns used to identify muscle dysfunctions. It is commonly used in the clinical field to draw conclusions about an athlete's muscle imbalance based on observed compensations of the movement pattern. It is used by trainers as well as fitness enthusiasts to routinely assess their movement patterns. The correct execution of movements is crucial for every athlete, since incorrect biomechanics can result in injuries that would take a considerable amount of time to recover from through rehabilitation. Thus, there is a need to evaluate injury risks accurately. The primary purpose of this research is to propose a method of detecting muscle imbalances in collegiate athletes with the aid of a low-cost motion tracking device. The proposed method facilitates the detection of muscle imbalances in both the upper body and the lower body during the execution of the overhead squat.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_105-Identifying_Muscle_Strength_Imbalances_in_Athletes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing the Urdu-Sindhi Speech Emotion Corpus: A Novel Dataset of Speech Recordings for Emotion Recognition for Two Low-Resource Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104104</link>
        <id>10.14569/IJACSA.2020.01104104</id>
        <doi>10.14569/IJACSA.2020.01104104</doi>
        <lastModDate>2020-04-30T16:06:46.9900000+00:00</lastModDate>
        
        <creator>Zafi Sherhan Syed</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Muhammad Shehram Shah</creator>
        
        <creator>Abbas Shah Syed</creator>
        
        <subject>Speech emotion recognition; affective computing; social signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Speech emotion recognition is one of the most active areas of research in the fields of affective computing and social signal processing. However, most research is directed towards a select group of languages such as English, German, and French, mainly due to a lack of available datasets in other languages. Such languages are called low-resource languages, given the scarcity of publicly available datasets for them. In the recent past, there has been a concerted effort within the research community to create and introduce datasets for emotion recognition in low-resource languages. To this end, we introduce in this paper the Urdu-Sindhi Speech Emotion Corpus, a novel dataset consisting of 1,435 speech recordings for two widely spoken languages of South Asia, namely Urdu and Sindhi. Furthermore, we also trained machine learning models to establish a baseline for classification performance, with accuracy measured in terms of unweighted average recall (UAR). We report that the best-performing model for the Urdu language achieves a UAR of 65.00% on the validation partition and a UAR of 56.96% on the test partition. Meanwhile, the model for the Sindhi language achieved UARs of 66.50% and 55.29% on the validation and test partitions, respectively. This classification performance is considerably better than the chance-level UAR of 16.67%. The dataset can be accessed via https://zenodo.org/record/3685274.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_104-Introducing_the_Urdu_Sindhi_Speech_Emotion_Corpus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Criteria Recommendation Framework using Adaptive Linear Neuron</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104103</link>
        <id>10.14569/IJACSA.2020.01104103</id>
        <doi>10.14569/IJACSA.2020.01104103</doi>
        <lastModDate>2020-04-30T16:06:46.9730000+00:00</lastModDate>
        
        <creator>Mohammed Hassan</creator>
        
        <creator>Mohamed Hamada</creator>
        
        <creator>Saratu Yusuf Ilu</creator>
        
        <subject>Multi-criteria recommender systems; adaptive linear neuron; artificial neural network; singular value decomposition; prediction accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Recent developments in the field of recommender systems have led to renewed interest in employing sophisticated machine learning algorithms to combine multiple characteristics of items during the process of making recommendations. A considerable number of research papers have been published on multi-criteria recommendation techniques. Most of these studies have focused only on using basic statistical methods or simply on extending the similarity computation of the traditional heuristic-based techniques to model the system. Researchers have not treated the uncertainty that exists about the relationship between multi-criteria modelling approaches and the effectiveness of some of the complex and powerful machine learning techniques; in fact, no previous study has investigated the role of artificial neural networks in designing and developing the system using the aggregation function approach. This paper seeks to remedy these challenges by analysing the performance of multi-criteria recommender systems modelled by integrating an adaptive linear neuron, trained using the delta rule, with asymmetric singular value decomposition algorithms. The proposed model was implemented, trained, and tested using a multi-criteria dataset for recommending movies to users based on the action, story, direction, and visual effects of movies. Taken together, the empirical results of the study suggest that there is a strong association between artificial neural networks and the modelling approaches of multi-criteria recommendation techniques.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_103-A_Multi_Criteria_Recommendation_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clone Detection Techniques for JavaScript and Language Independence: Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104102</link>
        <id>10.14569/IJACSA.2020.01104102</id>
        <doi>10.14569/IJACSA.2020.01104102</doi>
        <lastModDate>2020-04-30T16:06:46.9430000+00:00</lastModDate>
        
        <creator>Danyah Alfageh</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Abdullah Baz</creator>
        
        <creator>Eisa Alanazi</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <subject>Clone detection; code clones; JavaScript; language independent clone detection; web applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Code clone detection is an active field of study in computer science. Despite its rich history, it lacks focus on web scripting languages. Due to the expansion of web applications and of web development among developers of varying education and experience levels, developers inevitably resort to cloning throughout the web. The spread of code clones is further increased by websites like StackOverflow and GitHub. In this paper, we focus on clone detection research targeting clones in JavaScript code and discuss its areas of concern. We also summarize language-independent research and the possibility of its application to JavaScript and web applications.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_102-Clone_Detection_Techniques_for_JavaScript.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Practical Tool in Pick-and-Place Tasks for Human Workers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104101</link>
        <id>10.14569/IJACSA.2020.01104101</id>
        <doi>10.14569/IJACSA.2020.01104101</doi>
        <lastModDate>2020-04-30T16:06:46.9130000+00:00</lastModDate>
        
        <creator>Yunan He</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Daisuke Sakaguchi</creator>
        
        <creator>Nobuhiko Yamaguchi</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Pick-and-place task; human-robot collaboration; cognitive system; hand tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>We introduce the smart hand, a practical tool for human workers in pick-and-place tasks. It is developed to avoid picking up the wrong thing from one location or placing things in an unexpected location. The smart hand features sensors (e.g., imaging sensors, motion sensors) to sense the world and offers suggestions or aid based on the sensed results while a human worker performs a pick-and-place task. A smart hand prototype was built in this study. In our design, the smart hand has an RGB-D sensor and an inertial measurement unit (IMU). The RGB-D sensor is used for object detection and distance/position estimation, while the IMU is used to track the motion of the smart hand. An experiment was conducted to compare the two working conditions in which a subject performs the pick-and-place tasks with and without the smart hand. The experimental results show that the smart hand can prevent human errors in pick-and-place tasks.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_101-Development_of_a_Practical_Tool_in_Pick_and_Place_Tasks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Nodes and Discretizing Movement to Increase the Effectiveness of HEFA for a CVRP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.01104100</link>
        <id>10.14569/IJACSA.2020.01104100</id>
        <doi>10.14569/IJACSA.2020.01104100</doi>
        <lastModDate>2020-04-30T16:06:46.8970000+00:00</lastModDate>
        
        <creator>Ubassy Abdillah</creator>
        
        <creator>Suyanto Suyanto</creator>
        
        <subject>Swarm intelligence; capacitated vehicle routing problem; firefly algorithm; differential evolution; hybrid evolutionary firefly algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The Capacitated Vehicle Routing Problem (CVRP) is an important problem in transportation and industry. It is challenging to solve with optimization algorithms, and it is not easy to achieve a globally optimal solution. Hence, many researchers use a combination of two or more optimization algorithms, based on swarm intelligence methods, to overcome the drawbacks of a single algorithm. In this research, a CVRP optimization model containing two main processes, clustering and optimization, based on a discrete hybrid evolutionary firefly algorithm (DHEFA), is proposed. Evaluations on three CVRP cases show that DHEFA produces an average effectiveness of 91.74%, which is much more effective than the original FA, which gives a mean effectiveness of 87.95%. This result shows that clustering nodes into several clusters effectively reduces the problem space, and that DHEFA quickly searches for the optimum solution in those partial spaces.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_100-Clustering_Nodes_and_Discretizing_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On-Road Deer Detection for Advanced Driver Assistance using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110499</link>
        <id>10.14569/IJACSA.2020.0110499</id>
        <doi>10.14569/IJACSA.2020.0110499</doi>
        <lastModDate>2020-04-30T16:06:46.8630000+00:00</lastModDate>
        
        <creator>W Jino Hans</creator>
        
        <creator>V Sherlin Solomi</creator>
        
        <creator>N Venkateswaran</creator>
        
        <subject>Computer vision; animal detection; deep learning; Animal Vehicle Collision (AVC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Animal-vehicle collision (AVC) is a major road-safety concern that affects human life, property, and wildlife. Most collisions happen with large animals, especially deer that enter the road suddenly. Furthermore, the threat is even more alarming in poor visibility conditions such as night-time, fog, and rain. Therefore, it is vital to detect the presence of deer on roadways to mitigate the severity of deer-vehicle collisions (DVC). This paper presents an efficient methodology to detect deer on roadways in both day and night-time conditions using a deep learning framework. A two-class CNN model differentiating a deer from its background is developed. The background comprises a few classes of objects, such as motorcycles, cars, and trees, which are frequently encountered on roadways. A self-constructed dataset with both RGB and thermal images is used to train the CNN model. A sliding-window technique is used to localize the spatial region of the deer in an image. The performance of the proposed CNN model is compared with state-of-the-art classifiers and pre-trained CNN models, and the results validate its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_99-On_Road_Deer_Detection_for_Advanced_Driver_Assistance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metamorphic Testing of AI-based Applications: A Critical Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110498</link>
        <id>10.14569/IJACSA.2020.0110498</id>
        <doi>10.14569/IJACSA.2020.0110498</doi>
        <lastModDate>2020-04-30T16:06:46.8500000+00:00</lastModDate>
        
        <creator>Muhammad Nadeem Khokhar</creator>
        
        <creator>Muhammad Bilal Bashir</creator>
        
        <creator>Muhammad Fiaz</creator>
        
        <subject>Metamorphic testing; metamorphic relation; test oracle problem; artificial intelligence; genetic algorithm; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Metamorphic testing is the youngest testing approach among the members of the testing family. It is designed to test software that is complex in nature and for which it is difficult to compute a test oracle against a given set of inputs. The metamorphic testing approach tests the software with the help of metamorphic relations that guide the tester to check whether the observed output can be produced after applying a certain input. Since its first appearance, a lot of research has been done to check its effectiveness on different complex families of software applications, such as search engines, compilers, and artificial intelligence (AI). Artificial intelligence has gained immense attention due to its successful application in many areas of computer science and even in other domains, such as medical science, social science, and economics. AI-based applications are quite complex compared to other conventional software applications, and because of that they are hard to test. We have selected the testing of AI-based applications specifically for this research study. Although all the researchers claim to propose the best set of metamorphic relations for testing AI-based applications, this still needs to be verified. In this study, we have performed a critical review supported by a rigorous set of parameters that we prepared after a thorough literature survey. The survey shows that researchers have applied metamorphic testing to applications that are based either on Genetic Algorithms (GA) or on Machine Learning (ML). Our analysis has helped us identify the strengths and weaknesses of the proposed approaches. Research still needs to be done to design a generalized set of metamorphic rules that can test a family of AI applications rather than just one. The findings are supported by strong arguments and justified with logical reasoning. The identified problem domains can be targeted by researchers in the future to further enhance the capabilities of metamorphic testing and its range of applications.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_98-Metamorphic_Testing_of_AI_based_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview of Fault Tolerance Techniques and the Proposed TMR Generator Tool for FPGA Designs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110497</link>
        <id>10.14569/IJACSA.2020.0110497</id>
        <doi>10.14569/IJACSA.2020.0110497</doi>
        <lastModDate>2020-04-30T16:06:46.8170000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <subject>FPGA designs; fault tolerance; TMR technique; Verilog HDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The FPGA has been involved in many safety- and mission-critical applications in the last few decades. FPGA designs are also susceptible to errors and failures due to radiation. Fault-tolerant systems should be designed to overcome the effect of faults or failures during the operation of the systems. The primary objective of any fault tolerance technique is to produce a dependable system. Fault tolerance techniques add the capability to function properly in the presence of a fault: they can detect and correct faults, or mask them. This paper gives an overview of the most standard techniques used for FPGA designs. Among them, the Triple Modular Redundancy (TMR) technique is found to be the most straightforward to implement and the easiest to use. The proposed TMR code generator for implementing FPGA designs is also described. These FPGA designs are written in Verilog Hardware Description Language (HDL) at different abstraction levels.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_97-Overview_of_Fault_Tolerance_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Hand Gesture Representation and Recognition through Deep Joint Distance Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110496</link>
        <id>10.14569/IJACSA.2020.0110496</id>
        <doi>10.14569/IJACSA.2020.0110496</doi>
        <lastModDate>2020-04-30T16:06:46.8170000+00:00</lastModDate>
        
        <creator>P. Vasavi</creator>
        
        <creator>Suman Maloji</creator>
        
        <creator>E. Kiran Kumar</creator>
        
        <creator>D. Anil Kumar</creator>
        
        <creator>N. Sasikala</creator>
        
        <subject>Gesture recognition; 3D motion capture; deep learning; joint relational distance maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Hand gestures with finger relationships are among the toughest features to extract for machine recognition. In this paper, this research challenge is addressed with 3D hand-joint features extracted from distance measurements, which are then colour-mapped as spatio-temporal features. The patterns are then learned using an 8-layer convolutional neural network (CNN) to estimate the hand gesture. The results showed a higher degree of recognition accuracy compared to similar 3D hand gesture methods. The recognition accuracy on our dataset, KL 3DHG, with 220 classes was around 94.32%. The robustness of the proposed method was validated on the only available benchmark 3D skeletal hand gesture dataset, DGH 14/28.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_96-3D_Hand_Gesture_Representation_and_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Development of Simulator Considering Behavioral Psychology of Japanese to Improve Evacuation Ratio in Flood</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110495</link>
        <id>10.14569/IJACSA.2020.0110495</id>
        <doi>10.14569/IJACSA.2020.0110495</doi>
        <lastModDate>2020-04-30T16:06:46.7870000+00:00</lastModDate>
        
        <creator>Tatsuki Fukuda</creator>
        
        <subject>Simulator; game theory; flood</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In Japan, natural disasters cause a great deal of damage to residents, for example, floods caused by heavy rain and house collapses due to earthquakes. No one can evacuate before an earthquake because it is not knowable when an earthquake will occur. Residents, however, often have a chance to evacuate before a flood caused by heavy rain because there is some time left before the flood occurs. In order to improve the evacuation ratio, a system to share the evacuation status of neighbors has been proposed. Although a survey showed that the system is effective in improving the evacuation ratio, the number of neighbors with whom to share the evacuation status has not been clear. The aim of this study is the development of a simulation of residents in order to find the optimal number of neighbors with whom to share evacuation status. In this paper, a simulator based on behavioral psychology for the evacuation ratio in a flood is considered. The main target of the simulation is human action, so game theory is applicable. The residents, who are players in game-theoretic terms or agents in the simulator, make decisions based on the statuses of their neighbors. In the experiments, the actual evacuation ratio is obtained by a simulation under the premise that residents can never know the evacuation status of their neighbors. As future work, the optimal number of neighbors with whom to share evacuation status should be simulated with a view to improving the evacuation ratio in a flood.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_95-A_Development_of_Simulator_Considering_Behavioral_Psychology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining Applications and Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110494</link>
        <id>10.14569/IJACSA.2020.0110494</id>
        <doi>10.14569/IJACSA.2020.0110494</doi>
        <lastModDate>2020-04-30T16:06:46.7570000+00:00</lastModDate>
        
        <creator>Fatima Alshareef</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <creator>Abdullah Baz</creator>
        
        <subject>Educational data mining; student performance; prediction; classification; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Educational data mining (EDM) uses data mining techniques to analyze huge amounts of student data in educational environments. The main purpose of EDM is to analyze and solve educational issues and, consequently, improve educational processes. With the emergence of EDM applications in educational environments, several techniques have been identified to implement these applications. This paper reviews the relevant studies in EDM, including the datasets and techniques used in those studies, and identifies the most effective techniques. The most prevalent applications include predicting student performance, detecting undesirable student behaviors, grouping students, and student modeling. These applications aim to help decision makers in educational institutions to understand student situations, improve students’ performance, identify learning priorities for different groups of students, and develop the learning process. Prediction accuracy is selected as the evaluation criterion for the effectiveness of educational data mining techniques. The results show that Bayesian Network and Random Forest are the most effective techniques for predicting student performance, Social Network Analysis is the best technique for detecting undesirable student behaviors, and Clustering and Social Network Analysis are the most effective techniques for grouping students and student modeling, respectively. This study recommends conducting more comprehensive and extended studies to evaluate the effectiveness of EDM techniques with extended evaluation criteria.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_94-Educational_Data_Mining_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recovery of Structural Controllability into Critical Infrastructures under Malicious Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110493</link>
        <id>10.14569/IJACSA.2020.0110493</id>
        <doi>10.14569/IJACSA.2020.0110493</doi>
        <lastModDate>2020-04-30T16:06:46.7400000+00:00</lastModDate>
        
        <creator>Bader Alwasel</creator>
        
        <subject>Structural controllability; control systems; cyber physical systems; power dominating set; recovery from attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The problem of network controllability can be seen in critical infrastructure systems, which are increasingly susceptible to random failures and/or malicious attacks. The ability to recover controllability quickly following an attack can be considered a major problem in control systems. If this is not ensured, it can enable the attacker to create more disruptions and, as in the case of electric power networks, violate real-time restrictions, degrading control of the network and significantly reducing its observability. Thus, the present paper examines the structural controllability problem through the equivalent problem of the Power Dominating Set (PDS), introduced in the context of electrical power network control. However, the controllability optimisation problem can be seen as computationally infeasible for large complex networks, because such problems are considered NP-hard and have low approximability. Hence, the recoverability of structural controllability is explored under the PDS formulation, especially following perturbations in which an attacker with sufficient knowledge of the network topology completely compromises the current driver control nodes of the original control network, leading to a degradation of the controllability of dependent nodes. The results highlight that the directed Laplacian matrix can be a useful approach for analysing the structural controllability of a network. The simulation results also show that increasing the connectivity probability of the distribution of links in directed ER networks can minimise the number of driver control nodes, which is highly desirable when monitoring the entire network.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_93-Recovery_of_Structural_Controllability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balochi Non Cursive Isolated Character Recognition using Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110492</link>
        <id>10.14569/IJACSA.2020.0110492</id>
        <doi>10.14569/IJACSA.2020.0110492</doi>
        <lastModDate>2020-04-30T16:06:46.7100000+00:00</lastModDate>
        
        <creator>Ghulam Jan Naseer</creator>
        
        <creator>Abdul Basit</creator>
        
        <creator>Imran Ali</creator>
        
        <creator>Arif Iqbal</creator>
        
        <subject>Convolutional neural network; data augmentation; character recognition; cursive character recognition; detection; text segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Text recognition research in artificial intelligence has enabled machines not only to recognize human spoken languages but also to interpret them. Optical character recognition is a subarea of AI that converts scanned text images into an editable document. Researchers have proposed various text recognition techniques to identify cursive and connected scripts written from left to right, but their correct recognition is still a challenging problem for visual methods. The Balochi language is one of them, spoken by a significant part of the world population, and no research has been conducted on the recognition of this regional language of Pakistan. In this paper, we propose a convolutional neural network based model for recognizing non-cursive Balochi characters. Our model optimizes a small VGGNet model and achieves exceptional precision and speed over state-of-the-art machine learning methods. We experimented and compared the proposed method with the baseline LeNet model; the results showed that the proposed method improved over the baseline with a precision of 96%. We additionally collected and processed a Balochi characters dataset and made it public to support further research.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_92-Balochi_Non_Cursive_Isolated_Character_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Authentication Protocol for IoT using Blockchain and Fog Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110491</link>
        <id>10.14569/IJACSA.2020.0110491</id>
        <doi>10.14569/IJACSA.2020.0110491</doi>
        <lastModDate>2020-04-30T16:06:46.6930000+00:00</lastModDate>
        
        <creator>Ahmed Nabil Abdalah</creator>
        
        <creator>Ammar Mohamed</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>Internet of Things; smart contract; blockchain; fog computing; authentication; access token</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The IoT offers enormous opportunities but also brings some challenges. Authentication is considered one of the main challenges introduced by the IoT. IoT devices are not able to protect themselves due to their limited processing and storage capabilities. Researchers have proposed authentication algorithms that either lack scalability or are vulnerable to cyberattacks. In this paper, we propose a decentralized token-based authentication protocol based on fog computing and blockchain. The protocol provides secure authentication using access tokens, ECC cryptography, and blockchain as decentralized identity storage. The blockchain provides cryptographic identifiers, record immutability, and provenance, which allow the implementation of a decentralized authentication protocol. These features ensure a light and secure identity management system. We evaluate the protocol's communication between controller, gateways, and devices using AVISPA/HLPSL, and the results obtained from the AVISPA simulation show that our protocol is safe with respect to secrecy and strong authentication criteria. The paper uses four test cases to test the Ethereum smart contract implementation and ensure the system functions properly.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_91-Proposed_Authentication_Protocol_for_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>General Variable Neighborhood Search for the Quote-Travelling Repairman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110490</link>
        <id>10.14569/IJACSA.2020.0110490</id>
        <doi>10.14569/IJACSA.2020.0110490</doi>
        <lastModDate>2020-04-30T16:06:46.6630000+00:00</lastModDate>
        
        <creator>Ha-Bang Ban</creator>
        
        <subject>Q-TRP; GVNS; AM; GRASP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The Quota-Travelling Repairman Problem (Q-TRP) tries to find a tour that minimizes the waiting time while the profit collected by the repairman is not less than a predefined value. The Q-TRP is an extended variant of the Travelling Repairman Problem (TRP). The problem is NP-hard; therefore, metaheuristics are a natural approach to provide near-optimal solutions for large instance sizes in a short time. Several algorithms have been proposed to solve the TRP. However, they do not include the quota constraint, and these algorithms cannot be adapted to the Q-TRP. Therefore, developing an efficient algorithm for the Q-TRP is necessary. In this paper, we suggest a General Variable Neighborhood Search (GVNS) combined with perturbation and Adaptive Memory (AM) techniques to prevent the search from becoming trapped in local optima. The algorithm is evaluated on a benchmark dataset. The results demonstrate that good solutions, and even the optimal solutions for problems with 100 vertices, can be reached in a short time. Moreover, the algorithm is comparable with other metaheuristic algorithms in terms of solution quality.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_90-General_Variable_Neighborhood_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Suicidal Intent in Spanish Language Social Networks using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110489</link>
        <id>10.14569/IJACSA.2020.0110489</id>
        <doi>10.14569/IJACSA.2020.0110489</doi>
        <lastModDate>2020-04-30T16:06:46.6470000+00:00</lastModDate>
        
        <creator>Kid Valeriano</creator>
        
        <creator>Alexia Condori-Larico</creator>
        
        <creator>Jos&#232; Sulla-Torres</creator>
        
        <subject>Spanish; suicide ideation; embeddings; machine learning; phrases classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Suicide is a considerable problem in our population, and early intervention for its prevention plays a very important role in counteracting the number of deaths from suicide. Today, just over half of the world’s population uses social networks, where they express ideas, feelings, and desires, including suicidal intentions. Motivated by these factors, the main objective is the automatic detection of suicidal ideation in social networks in the Spanish language, in order to serve as a base component for alerting and achieving early and specialized interventions. For this, a Spanish suicide phrase classification model has been implemented, since no related works in this language with a machine learning approach were found. However, there were some challenges in performing this task, such as understanding natural language, generating training data, and obtaining reliable accuracy in classifying these phrases. To construct our classification model, two opposite and popular types of phrase embeddings were chosen, and the most widely used classification algorithms in the literature were compared. As a result, we confirm that it is possible to classify phrases with suicidal ideation in the Spanish language with good accuracy using semantic representations.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_89-Detection_of_Suicidal_Intent_in_Spanish_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Search based on Words Extracted from Others’ Utterances for Effective Idea Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110488</link>
        <id>10.14569/IJACSA.2020.0110488</id>
        <doi>10.14569/IJACSA.2020.0110488</doi>
        <lastModDate>2020-04-30T16:06:46.6130000+00:00</lastModDate>
        
        <creator>Yutaka Yamaguchi</creator>
        
        <creator>Daisuke Shibata</creator>
        
        <creator>Chika Oshima</creator>
        
        <creator>Koichi Nakayama</creator>
        
        <subject>Autosuggest; brainstorming; search word; smartphone; SWISS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>People often engage in brainstorming because they want to develop attractive products that involve a new idea. Consequently, many studies, methods, and systems that aim to help people generate ideas have been proposed. We developed the search websites images using search suggestions (SWISS) system, which displays images based on a word extracted from brainstorming participants’ utterances and adds additional words using an autosuggest function to stimulate idea generation. We aimed to determine whether the images searched based on the participants’ own utterances or those of the other participants were more effective for idea generation. Sixteen university students participated in a brainstorming session using SWISS in two conditions. In Condition A, the participants could see the images searched based on the other participants’ utterances, projected onto a wide display behind each participant during the brainstorming session. In Condition B, the participants could see the images searched based on their own utterances, displayed on a smartphone. The results indicate that the rate at which the images were related to the ideas was higher in Condition A than in Condition B. SWISS could broaden the participants’ ideas through the images, using an autosuggest function and words extracted from the other participants’ utterances.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_88-Image_Search_based_on_Words_Extracted_from_Others_Utterances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Piecewise Linear Approximation Method for the Estimation of Origin-Destination Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110487</link>
        <id>10.14569/IJACSA.2020.0110487</id>
        <doi>10.14569/IJACSA.2020.0110487</doi>
        <lastModDate>2020-04-30T16:06:46.6000000+00:00</lastModDate>
        
        <creator>Miguel Fern&#225;ndez</creator>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Aldo Fern&#225;ndez</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Urban freight transport; origin-destination matrix; mixed-integer programming model; piecewise linear approximation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>This paper presents a Mixed-Integer Programming Model for the urban freight transport planning problem through the estimation of the Origin-Destination Matrix. The Origin-Destination Matrix is used to know the pattern of travel or vehicle flow between different zones of a city and is estimated from vehicle counts on the routes of a road network. For the estimation of the Origin-Destination Matrix, the Entropy Maximization approach is applied. This approach leads to a non-linear optimization model, which is difficult to solve. In order to overcome this difficulty, an optimization model based on the Piecewise Linear Approximation Method is proposed. To test the proposed model, an instance was built based on the road network of a real case. The proposed model obtained good results in a reduced computational time, demonstrating its usefulness for urban freight transport planning.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_87-Application_of_Piecewise_Linear_Approximation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Construction of a Low-Cost Device for the Evaluation of Redox Behaviour using Lineal Voltammetry Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110486</link>
        <id>10.14569/IJACSA.2020.0110486</id>
        <doi>10.14569/IJACSA.2020.0110486</doi>
        <lastModDate>2020-04-30T16:06:46.5670000+00:00</lastModDate>
        
        <creator>Kevin Rodriguez-Villarreal</creator>
        
        <creator>Alicia Alva</creator>
        
        <creator>Daniel Ramos-Sono</creator>
        
        <creator>Michael Cieza Terrones</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Potentiostat; blood lead; toxicity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Electrochemical techniques have been generating great interest due to their wide range of applications and their ease of use. For this reason, in recent years electrochemical techniques such as cyclic voltammetry, anodic stripping voltammetry, chronoamperometry, and linear voltammetry have been developed. Linear voltammetry is one of the most widely used electrochemical techniques: a voltage range is applied to a solution containing the analyte, and current data is then collected as the response. For this, an electrochemical cell with its 3 electrodes (working electrode, counter electrode, reference electrode) and a device for voltage control and current evaluation (a potentiostat) is used. A potentiostat is an electronic device that allows the voltage or current to be regulated according to the electrochemical technique to be performed. These devices are usually very expensive due to their high precision; for this reason, our project is focused on the development of a low-cost system that allows us to recognize redox systems by using linear voltammetry. Our potentiostat system was able to differentiate a redox salt (sodium chloride) from the supporting electrolytes (hydrochloric acid, nitric acid, sulfuric acid), allowing us to evaluate redox behavior at a cost of less than $40.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_86-Design_and_Construction_of_a_Low_Cost_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Profiling Patterns in Healthcare System: A Preliminary Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110485</link>
        <id>10.14569/IJACSA.2020.0110485</id>
        <doi>10.14569/IJACSA.2020.0110485</doi>
        <lastModDate>2020-04-30T16:06:46.5370000+00:00</lastModDate>
        
        <creator>Nicholas Khin-Whai Chan</creator>
        
        <creator>Angela Siew-Hoong Lee</creator>
        
        <creator>Zuraini Zainol</creator>
        
        <subject>Big data analytics; data mining; descriptive analysis; healthcare; pattern profiling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In the 21st century, our planet revolves around data and is known as a digital earth. The astonishing growth in data has resulted in an increased interest in Big Data Analytics to capture, store, process, analyze, and visualize unprecedented amounts of information. Big data has undoubtedly shaped, and will continue to shape, the modern information-driven society, where behind all the available data there is hidden potential to discover meaningful insights and patterns which may impact businesses in unexpected measures. This exponential growth of data is also present in the healthcare sector. In Malaysia, most employees are provided with medical benefits, ranging from general medical costs to hospitalization benefits and insurance coverage. With the healthcare data and information stored with Human Resources (HR), employers could potentially analyze and identify patterns in historical medical claims, which could then help in making specific decisions to understand their employee population's health and the usage of the premium coverage. Therefore, the aim of this research is to better understand the patterns present in employees’ healthcare data. Through the analysis and understanding of patterns in past medical claim history, potential strategies can be proposed to allow employers to take proactive and reactive measures to help sustain medical expenditure.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_85-Profiling_Patterns_in_Healthcare_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Model for Personalizing Online Arabic Journalism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110484</link>
        <id>10.14569/IJACSA.2020.0110484</id>
        <doi>10.14569/IJACSA.2020.0110484</doi>
        <lastModDate>2020-04-30T16:06:46.5070000+00:00</lastModDate>
        
        <creator>Nehad Omar</creator>
        
        <creator>Yasser M. K. Omar</creator>
        
        <creator>Fahima A. Maghraby</creator>
        
        <subject>Personalization; e-journalism; KNN (K-nearest Neighbors); dynamic user profile; NLP (Natural Language Processing); data driven organization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>This paper discusses a model for generating dynamic profiles for Arabic news users, capturing user preferences by analyzing their reading history and recommending news as soon as they create an account on a news mobile app. Preference is calculated based on the scores of an article's main keywords, which are extracted from the article headline &amp; body using NLP (Natural Language Processing); when a user reads an article, its keywords are added to the user's profile with a rate of interest. Machine learning techniques are used in the proposed model to recommend news relevant to the user's preferences and provide personalization. The model uses hybrid filtering techniques to recommend suitable articles, combining Content-Based, Collaborative, and Demographic filtering with KNN (K-nearest neighbors) to enhance the recommendation process. After a recommendation is made, the model tracks the user's behavior with the recommended articles, whether they reviewed them or not, and the actions taken on the article page, in order to calculate the rate of interest; it then dynamically updates the profile in real time with the scores of keywords of interest. By having a user profile and defined preferences, the model can help Arabic news publishers classify users into segments and track changes in their opinions and inclinations by observing which news is read by different user segments and which articles attract them. This leads publishers to visualize their data, raise their profitability, and follow the international trend in the e-journalism industry of becoming data-driven organizations.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_84-Machine_Learning_Model_for_Personalizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DMTree: A Novel Indexing Method for Finding Similarities in Large Vector Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110483</link>
        <id>10.14569/IJACSA.2020.0110483</id>
        <doi>10.14569/IJACSA.2020.0110483</doi>
        <lastModDate>2020-04-30T16:06:46.4900000+00:00</lastModDate>
        
        <creator>Phuc Do</creator>
        
        <creator>Trung Phan Hong</creator>
        
        <creator>Huong Duong To</creator>
        
        <subject>MTree; DMTree; spark; distributed k-NN query; distributed range query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>To find similarities in a vector set, we compute distances from the query vector to all other vectors. On a large vector set, computing this many distances takes a lot of time, so we need a way to compute fewer distances, and the MTree structure is the technique we need. The MTree is a structure for indexing vector sets based on a defined distance; we can effectively solve similarity-search problems by using it. However, the MTree structure is built on one computer, so its indexing capacity is limited. Today, vector sets too large to fit on one computer are more and more common, and the MTree structure fails to index them. Therefore, in this work, we present a novel indexing method, extended from the MTree structure, that can index large vector sets. We also perform experiments to demonstrate the performance of this novel method.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_83-DM_Tree_A_Novel_Indexing_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploratory Study of the Effect of Obstetric Psychoprophylaxis on the Cortisol Level in Pregnant Women, Huancavelica - Per&#250;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110482</link>
        <id>10.14569/IJACSA.2020.0110482</id>
        <doi>10.14569/IJACSA.2020.0110482</doi>
        <lastModDate>2020-04-30T16:06:46.4600000+00:00</lastModDate>
        
        <creator>Lina Carden&#225;s-Pineda</creator>
        
        <creator>Alicia Alva Mantari</creator>
        
        <creator>Rossibel Mu&#241;oz</creator>
        
        <creator>Gabriela Ordo&#241;ez-Ccora</creator>
        
        <creator>Tula Guerra</creator>
        
        <creator>Sandra Jurado-Condori</creator>
        
        <subject>Cortisol; obstetric; psychoprophylaxis; pregnant women; sessions; gestational; serum; observed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Objective: To determine the cortisol level of patients who use the Obstetric Psychoprophylaxis (OPP) service in a first-level health center, February - May 2018. Material and methods: Descriptive, prospective, cross-sectional. Results: 68.75% of the pregnant women have a stable conjugal relationship, while 25% are single and 6.25% separated; 50% have a higher education degree and 50% a secondary education degree. Cortisol does not appear to change with gestational age; however, the number of OPP sessions influences the cortisol level, with more sessions attended corresponding to lower cortisol. Conclusion: the greater the exposure to obstetric psychoprophylaxis, the lower the morning (am) serum cortisol levels observed. This may be because psychoprophylaxis includes a component that works on the mental state; further studies are recommended.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_82-Exploratory_Study_of_the_Effect_of_Obstetric_Psychoprophylaxis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-site Scripting Research: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110481</link>
        <id>10.14569/IJACSA.2020.0110481</id>
        <doi>10.14569/IJACSA.2020.0110481</doi>
        <lastModDate>2020-04-30T16:06:46.4430000+00:00</lastModDate>
        
        <creator>PMD Nagarjun</creator>
        
        <creator>Shaik Shakeel Ahamad</creator>
        
        <subject>Cross-site scripting; web security; web applications; XSS attacks; mobile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Cross-site scripting (XSS) is one of the most severe problems in web applications. With more connected devices using different web applications for every task, the risk of XSS attacks is increasing. In web applications, attackers steal victims&#39; session details or other important information by exploiting XSS vulnerabilities. We studied 412 research papers on cross-site scripting published between 2002 and 2019. Most existing XSS prevention methods are based on dynamic analysis, static analysis, proxy-based methods, or filter-based methods. We categorized the existing methods, discussed the solutions presented in these papers, and examined the impact of XSS attacks, the different defensive methods, and research trends in XSS attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_81-Cross_Site_Scripting_Research_A_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Security Particle Swarm Optimization (PSO) Algorithm to Detect Radio Jamming Attacks in Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110480</link>
        <id>10.14569/IJACSA.2020.0110480</id>
        <doi>10.14569/IJACSA.2020.0110480</doi>
        <lastModDate>2020-04-30T16:06:46.4130000+00:00</lastModDate>
        
        <creator>Ahmad K. Al Hwaitat</creator>
        
        <creator>Mohammed Amin Almaiah</creator>
        
        <creator>Omar Almomani</creator>
        
        <creator>Mohammed Al-Zahrani</creator>
        
        <creator>Rizik M. Al-Sayed</creator>
        
        <creator>Rania M. Asaifi</creator>
        
        <creator>Khalid K. Adhim</creator>
        
        <creator>Ahmad Althunibat</creator>
        
        <creator>Adeeb Alsaaidah</creator>
        
        <subject>Jamming attacks; Mobility; PSO; mobile networks; attack detection; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>A jamming attack is one of the most common threats to wireless networks: a high-power signal is sent into the network to corrupt legitimate packets. To address the jamming attack problem, the Particle Swarm Optimization (PSO) algorithm is used to describe and simulate the behavior of a large group of entities with similar characteristics or attributes as they converge toward an optimal group, or swarm. In this study, an enhanced version of PSO, called the Improved PSO algorithm, is proposed to improve the detection of jamming attack sources over randomized mobile networks. The simulation results show that the Improved PSO algorithm obtains the location in the given mobile network at which the coverage area is minimal, and hence central, faster than the other algorithms. The Improved PSO algorithm was evaluated in two experiments. In the first, it was compared with PSO, GWO, and MFO; the results show that the Improved PSO is the best of these algorithms at obtaining the location of a jamming attack. In the second, it was compared with PSO in a mobile network environment; the results show that the Improved PSO is better than PSO at obtaining the location in a mobile network where the coverage area is minimal and hence central.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_80-Improved_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Intelligent Surveillance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110479</link>
        <id>10.14569/IJACSA.2020.0110479</id>
        <doi>10.14569/IJACSA.2020.0110479</doi>
        <lastModDate>2020-04-30T16:06:46.3970000+00:00</lastModDate>
        
        <creator>Muhammad Ishtiaq</creator>
        
        <creator>Sultan H. Almotiri</creator>
        
        <creator>Rashid Amin</creator>
        
        <creator>Mohammed A. Al Ghamdi</creator>
        
        <creator>Hamza Aldabbas</creator>
        
        <subject>HMG; ALMD; PBoW; DPNs LOP; BoF; CT; LDA; EBT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In today&#39;s developing technologies, images play an important role. In almost all fields, image-based data is highly beneficial: in security, facial recognition, and medical imaging, images make life easier for people. In this paper, an approach for both human detection and classification of single-human activity recognition is proposed. We implement a pre-processing technique that fuses several methods: in the first step, we select the channel, apply a top-hat filter, adjust the intensity values, and apply contrast stretching with threshold values to enhance image quality. After pre-processing, a weight-based segmentation approach is implemented to detect humans and compute the frame difference using the cumulative mean. A hybrid feature extraction technique is used for the recognition of human actions. The extracted features are fused using serial-based fusion, and the fused features are then used for classification. To validate the proposed algorithm, four action recognition datasets are used: HOLLYWOOD, UCF101, HMDB51, and WEIZMANN. The proposed technique performs better than existing ones.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_79-Deep_Learning_Based_Intelligent_Surveillance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Design of Packet Scheduling Algorithm to Enhance QoS in High-Speed Downlink Packet Access (HSDPA) Core Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110478</link>
        <id>10.14569/IJACSA.2020.0110478</id>
        <doi>10.14569/IJACSA.2020.0110478</doi>
        <lastModDate>2020-04-30T16:06:46.3630000+00:00</lastModDate>
        
        <creator>Sohail Ahmed</creator>
        
        <creator>Mubashar Ali</creator>
        
        <creator>Abdullah Baz</creator>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Bilal Akbar</creator>
        
        <creator>Imran Ali Khan</creator>
        
        <creator>Adeel Ahmed</creator>
        
        <creator>Muhammad Junaid</creator>
        
        <subject>Packet Scheduling; Classification; DiffServ; LLQ; EURANE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Delivering Voice over Internet Protocol (VoIP) efficiently is a basic requirement of the modern era. Real-time and non-real-time traffic demand customized communication provisioning to guarantee service. For this, we propose a user-fulfillment design for facilitating packet switching in 3G cellular networks to ensure QoS (quality of service) provisioning in DiffServ (Differentiated Services) networks. To enhance QoS for real-time traffic by reducing delay, packet loss, and jitter, we propose a Low Latency Queuing (LLQ) algorithm. In this paper, we focus on packet scheduling, the mapping of DiffServ QoS classes onto Universal Mobile Telecommunication System (UMTS) classes, and buffering. To accommodate different types of real-time multimedia traffic, the QoS provisioning mechanism uses different DiffServ code points. The new idea in LLQ is to map video and voice traffic to two separate queues and to use priority queuing within LLQ for voice traffic. The simulation results show that the proposed algorithm meets the QoS requirements.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_78-A_Design_of_Packet_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection for Phishing Website Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110477</link>
        <id>10.14569/IJACSA.2020.0110477</id>
        <doi>10.14569/IJACSA.2020.0110477</doi>
        <lastModDate>2020-04-30T16:06:46.3330000+00:00</lastModDate>
        
        <creator>Shafaizal Shabudin</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Khairul Akram Zainal Ariffin</creator>
        
        <creator>Mohd Aliff</creator>
        
        <subject>Relevant features; phishing; web threat; classification; machine learning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Phishing is an attempt to obtain confidential information about a user or an organization. It is an act of impersonating a credible webpage to lure users into exposing sensitive data, such as usernames, passwords and credit card information. It has cost the online community and various stakeholders hundreds of millions of dollars. There is a need to detect and predict phishing, and the machine learning classification approach is a promising way to do so. However, it may take several phases to identify and tune the effective features from the dataset before the selected classifier can be trained to identify phishing sites correctly. This paper applies two feature selection techniques, known as Feature Selection by Omitting Redundant Features (FSOR) and Feature Selection by Filtering Method (FSFM), to the &#39;Phishing Websites&#39; dataset from the University of California, Irvine, and evaluates the performance of phishing webpage detection via three different machine learning techniques: Random Forest (RF), Multilayer Perceptron (MLP) and Naive Bayes (NB). The classification performance of these machine learning algorithms is further refined using subsets of features chosen by the various feature selection methods. The observational results show that the optimized Random Forest (RFPT) classifier with feature selection by FSFM achieves the highest performance among all the techniques.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_77-Feature_Selection_for_Phishing_Website.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method to Detect and Avoid Hardware Trojan for Network-on-Chip Architecture based on Error Correction Code and Junction Router (ECCJR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110476</link>
        <id>10.14569/IJACSA.2020.0110476</id>
        <doi>10.14569/IJACSA.2020.0110476</doi>
        <lastModDate>2020-04-30T16:06:46.3170000+00:00</lastModDate>
        
        <creator>Hafiz Ali Hamza Gondal</creator>
        
        <creator>Sajida Fayyaz</creator>
        
        <creator>Arooj Aftab</creator>
        
        <creator>Saira Nokhaiz</creator>
        
        <creator>Muhammad Bilal Arshad</creator>
        
        <creator>Waqas Saleem</creator>
        
        <subject>Hardware trojan; network-on-chip; intellectual property; error correcting code; junction router</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Modern technologies, such as ubiquitous computing, communication, and the Internet, have changed our lives. The number of transistors in a system is increasing day by day, and this trend will continue. Wired connections are easily broken and are not a reliable technology in the field of networks. In a conventional network, a dedicated wired path is used among the intellectual property (IP) cores for communication, and because of this wired connection the network is neither reliable nor scalable. The Network-on-Chip (NoC) architecture was introduced to solve these problems and offered notable improvements over conventional bus and crossbar communication architectures. Many companies prefer third-party vendors for the development of their designs in order to reduce cost. This offers advantages, but because of access to the design, anyone can make changes at any stage of the development cycle. This type of malicious modification of hardware during the design or fabrication process is known as a Hardware Trojan (HT). It can change the functional behavior of a system or leak the secret information of a critical application, resulting in degraded system performance. The proposed research combines an Error Correcting Code with a Junction Router to detect and avoid HTs, enabling reliable communication in NoC architectures. Simulation results show good performance of the proposed algorithm in terms of packet latency and reliability.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_76-A_Method_to_Detect_and_Avoid_Hardware_Trojan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Human Action Recognition and Behaviour Analysis Technique using SWFHOG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110475</link>
        <id>10.14569/IJACSA.2020.0110475</id>
        <doi>10.14569/IJACSA.2020.0110475</doi>
        <lastModDate>2020-04-30T16:06:46.2870000+00:00</lastModDate>
        
        <creator>Aditi Jahagirdar</creator>
        
        <creator>Manoj Nagmode</creator>
        
        <subject>Action recognition; behaviour analysis; HOG; salient wavelet feature; neural network; wavelet transform; SWFHOG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In this paper, a new local feature, called Salient Wavelet Feature with Histogram of Oriented Gradients (SWFHOG), is introduced for human action recognition and behaviour analysis. In the proposed approach, the regions with maximum information are selected based on their entropies. The SWF feature descriptor is formed from the wavelet sub-bands obtained by applying wavelet decomposition to the selected regions. To improve accuracy further, the SWF feature vector is combined with the Histogram of Oriented Gradients global feature descriptor to form the SWFHOG feature descriptor. The proposed algorithm is evaluated for action recognition using the publicly available KTH, Weizmann, UT Interaction, and UCF Sports datasets. The highest accuracy, 98.33%, is achieved on the UT Interaction dataset. The proposed SWFHOG feature descriptor is also tested for behaviour analysis, identifying actions as normal or abnormal: the actions from the SBU Kinect and UT Interaction datasets are divided into two sets, Normal Behaviour and Abnormal Behaviour. For this application, 95% recognition accuracy is achieved on the SBU Kinect dataset and 97% on the UT Interaction dataset. The robustness of the proposed SWFHOG algorithm is tested against camera view-angle changes and imperfect actions using the Weizmann robustness testing datasets. The proposed SWFHOG method shows promising results compared to earlier methods.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_75-A_Novel_Human_Action_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Techniques to Visualize and Predict Terrorist Attacks Worldwide using the Global Terrorism Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110474</link>
        <id>10.14569/IJACSA.2020.0110474</id>
        <doi>10.14569/IJACSA.2020.0110474</doi>
        <lastModDate>2020-04-30T16:06:46.2700000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Artificial intelligence; decision trees; machine learning; random forest; terrorist attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Terrorist attacks affect the confidence and security of citizens; they are a violent form of political struggle that ends in the destruction of order. In the current decade, alongside the growth of social networks, terrorist attacks around the world are still ongoing and have grown in recent years. Consequently, it is necessary to identify where attacks were committed and where possible areas for an attack lie, with the objective of providing assertive solutions to these events. As a solution, this research focuses on one branch of artificial intelligence (AI): machine learning. The idea is to use AI techniques to visualize and predict possible terrorist attacks using classification models, decision trees, and Random Forest. The input is a database that systematically records worldwide terrorist attacks from 1970 to the last recorded year, 2018. As a final result, it is necessary to know the number of terrorist attacks in the world, the most frequent types of attacks, and the number of kidnappings per region; furthermore, to predict what kind of terrorist attack will occur and in which areas of the world. Finally, this research aims to help the scientific community use artificial intelligence to provide various types of solutions related to global events.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_74-Machine_Learning_Techniques_to_Visualize.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Representing and Simulating Uncertainty of the Quality of Service of Web Services using Fuzzy Cognitive Map Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110473</link>
        <id>10.14569/IJACSA.2020.0110473</id>
        <doi>10.14569/IJACSA.2020.0110473</doi>
        <lastModDate>2020-04-30T16:06:46.2570000+00:00</lastModDate>
        
        <creator>Mamoon Obiedat</creator>
        
        <creator>Ahmad Khasawneh</creator>
        
        <creator>Mustafa Banikhalaf</creator>
        
        <creator>Ali Al-yousef</creator>
        
        <subject>Web services; quality of services; uncertainty; fuzzy cognitive maps; simulation; decision support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Web services are growing rapidly to provide clients, whether organizations or individuals, with multiple Internet services and to offer solutions for the integration of many applications. The Quality of Service (QoS) of a Web service is the key consideration of both service providers and users. Thus, measuring QoS requires, in addition to its normative requirements, engaging the views of clients and service providers as well as environmental factors. Human intervention and the environment may lead to uncertainty and result in uncertain factors in assessing QoS. In such a case, traditional computing and statistical techniques cannot provide an accurate representation of the inherent uncertainties, especially when uncertain variables are connected by ambiguous (fuzzy) relationships. An alternative is to use a soft computing approach. This paper proposes a Fuzzy Cognitive Map (FCM) model as a soft computing approach that can represent and simulate the uncertainty of QoS. An FCM represents the uncertain variables in the domain knowledge and their connections in the form of a signed directed graph, with nodes representing the variables and directed arrows representing the cause-effect relationships. In addition, it allows imprecise data to be represented either as numeric data, i.e. in the ranges [0, 1] and [-1, 1], or as linguistic data, i.e. &quot;low, medium, high&quot;. For calculations, the FCM is converted to an adjacency matrix to find the effects of the variables on each other. Scenario simulations can also be run to help decision makers investigate appropriate outcomes. Finally, the proposed approach is tested in an experiment to demonstrate the reasonableness and admissibility of its representation and simulation of the uncertainty of the QoS domain knowledge.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_73-Representing_and_Simulating_Uncertainty.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Data Lake Clustering Design based on K-means Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110472</link>
        <id>10.14569/IJACSA.2020.0110472</id>
        <doi>10.14569/IJACSA.2020.0110472</doi>
        <lastModDate>2020-04-30T16:06:46.2230000+00:00</lastModDate>
        
        <creator>Jabrane Kachaoui</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>Big data; Data Lake; NoSQL; MongoDB; K-means; metadata</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In recent years, Big Data requirements have evolved. Organizations are trying more than ever to focus their efforts on the industrial exploitation of all the data at their disposal and to move away from the underlying technologies. After investing in the Data Lake concept, organizations must now overhaul their data architecture to face the expansion of the IoT (Internet of Things) and AI (Artificial Intelligence). Efficient and effective data mapping can help in understanding the importance of the data being transformed and used to support decision-making. As current relational databases are unable to manage large amounts of data, organizations have turned to NoSQL (Not only Structured Query Language) databases. One well-known NoSQL database is MongoDB, which offers high scalability. This article puts forward a new data model able to extract, classify, and then map data in order to generate new, more structured data that meets organizational needs. This is carried out by calculating weights for various metadata attributes, which are considered important information. Data stored in MongoDB is also clustered; this categorization is based on the K-Means data mining clustering algorithm.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_72-Enhanced_Data_Lake_Clustering_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Authentication using Robust Primary PIN (Personal Identification Number), Multifactor Authentication for Credit Card Swipe and Online Transactions Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110471</link>
        <id>10.14569/IJACSA.2020.0110471</id>
        <doi>10.14569/IJACSA.2020.0110471</doi>
        <lastModDate>2020-04-30T16:06:46.2100000+00:00</lastModDate>
        
        <creator>S. Vaithyasubramanian</creator>
        
        <subject>Credit card fraud; online transaction; card swipe transaction; Personal Identification Number (PIN); Card Verification Value (CVV); One Time Password (OTP); Primary Personal Identification Number; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In view of its effectiveness, ease of access, and profitability, the advancement of e-Commerce is an immense step forward. This development, however, is accompanied by unusual new vulnerabilities. The most significant issue in credit card management throughout the world is credit card fraud. Because fraudsters persistently look for new ways to commit unlawful activities, organizations, users, and institutions suffer tremendous losses every year. In the common credit card fraud process, a fraudulent transaction is detected only after the transaction has been completed. Recent studies have addressed the security of credit card transactions against unauthorized access or usage through diverse access control methods. This paper illustrates a new scheme of authentication using a Primary PIN and multifactor authentication to secure credit card transactions.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_71-Authentication_using_Robust_Primary_PIN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Method for Taxonomy Development in Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110470</link>
        <id>10.14569/IJACSA.2020.0110470</id>
        <doi>10.14569/IJACSA.2020.0110470</doi>
        <lastModDate>2020-04-30T16:06:46.1770000+00:00</lastModDate>
        
        <creator>Badr Omair</creator>
        
        <creator>Ahmad Alturki</creator>
        
        <subject>Classification; design science; design science research; taxonomy; taxonomy development; typology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Theories of information systems (IS) can be categorized into five types: analysis, explanation, prediction, explanation and prediction, and design and action. A taxonomy can be considered a type of analysis theory, which specifies the dimensions and characteristics of objects of interest by defining their shared features. Developing a taxonomy is well suited to Design Science Research (DSR), since the primary goal of DSR is to develop an artifact. DSR is a scientific method that attempts to combine knowledge about the design and development of a solution to enhance existing systems, solve problems, and create new artifacts, such as a taxonomy. Taxonomy is crucial for understanding any phenomenon: it provides a holistic view of that phenomenon, and the classification of objects helps researchers and practitioners to understand complex domains. Nickerson, Varshney and Muntermann offered a method to develop a taxonomy based on well-established literature; their method is considered the only well-established taxonomy development method in the IS discipline. However, the literature reveals that the taxonomy development process in IS research often remains vague and that taxonomies are rarely evaluated. This paper aims to improve the taxonomy development method by adopting comprehensive steps from DSR. This includes developing an integration framework for all forms of reasoning logic that are used for developing taxonomy components. The improved method supports creativity by including abduction as a reasoning logic. It also facilitates the efforts of novice researchers in developing a taxonomy by providing a complete taxonomy development roadmap.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_70-An_Improved_Method_for_Taxonomy_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Robust Combined Deep Architecture for Speech Recognition: Experiments on TIMIT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110469</link>
        <id>10.14569/IJACSA.2020.0110469</id>
        <doi>10.14569/IJACSA.2020.0110469</doi>
        <lastModDate>2020-04-30T16:06:46.1470000+00:00</lastModDate>
        
        <creator>Hinda DRIDI</creator>
        
        <creator>Kais OUNI</creator>
        
        <subject>Automatic speech recognition; deep learning; phoneme recognition; convolutional neural network; long short-term memory; gated recurrent unit; deep neural network; recurrent neural network; CLDNN; CGDNN; TIMIT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Over recent years, many researchers have worked on improving accuracy on the Automatic Speech Recognition (ASR) task using deep learning. In state-of-the-art speech recognizers, both Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) based Recurrent Neural Networks (RNN) have achieved improved performance compared to Convolutional Neural Networks (CNN) and Deep Neural Networks (DNN). Due to the strong complementarity of CNN, LSTM-RNN and DNN, they may be combined in one architecture called Convolutional Long Short-Term Memory, Deep Neural Network (CLDNN). Similarly, we propose to combine CNN, GRU-RNN and DNN in a single deep architecture called Convolutional Gated Recurrent Unit, Deep Neural Network (CGDNN). In this paper, we present our experiments on the phoneme recognition task using the TIMIT data set. A phone error rate of 15.72% has been reached using the proposed CGDNN model. The achieved result confirms the superiority of CGDNN over all of its baseline networks used alone and also over the CLDNN architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_69-Towards_Robust_Combined_Deep_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantitative Exploratory Analysis of the Variation in Hemoglobin Between the Third Trimester of Pregnancy and Postpartum in a Vulnerable Population in VRAEM - Per&#250;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110468</link>
        <id>10.14569/IJACSA.2020.0110468</id>
        <doi>10.14569/IJACSA.2020.0110468</doi>
        <lastModDate>2020-04-30T16:06:46.1300000+00:00</lastModDate>
        
        <creator>Lina Carden&#225;s-Pineda</creator>
        
        <creator>Raquel Aron&#233;s-C&#225;rdenas</creator>
        
        <creator>Gabriela Ordo&#241;ez-Ccora</creator>
        
        <creator>Mariza C&#225;rdenas</creator>
        
        <creator>Doris Quispe</creator>
        
        <creator>Jenny Mendoza</creator>
        
        <creator>Alicia Alva Mantari</creator>
        
        <subject>Hemoglobin; pregnant; puerperium; childbirth; cesarean; anemia; vulnerable area</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The study determines the difference in hemoglobin between the third trimester of pregnancy and the immediate puerperium for childbirths attended at the Hospital de Apoyo San Francisco during 2018, located in the VRAEM area, which is vulnerable due to its level of poverty and exposure. The research was observational, retrospective and longitudinal with related samples, at the descriptive level, with a sample of 107 childbirths: 55 vaginal and 52 cesarean. The documentary review technique was used, the data were analyzed with the statistical program &quot;R&quot;, and the non-parametric Wilcoxon test was applied. Results: a difference of 1.52 g/dl was found between hemoglobin in the third trimester of pregnancy (11.89 g/dl) and the immediate puerperium (10.37 g/dl). In vaginal childbirths, hemoglobin was 11.90 g/dl in the third trimester and 10.65 g/dl in the immediate puerperium; in cesarean childbirths, it was 11.94 g/dl in the third trimester and 10.14 g/dl in the immediate puerperium, giving differences of 1.25 g/dl in vaginal childbirths and 1.8 g/dl in cesarean ones. In conclusion, there is a significant difference in hemoglobin between the third trimester and postpartum at a p-value of 0.05, the difference being greater in cesarean childbirths. The average postpartum hemoglobin denotes anemia even though blood losses were within normal parameters, which indicates that pregnant women should reach the third trimester of pregnancy with at least 13 g/dl of hemoglobin.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_68-Quantitative_Exploratory_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Traditional to Intelligent Academic Advising: A Systematic Literature Review of e-Academic Advising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110467</link>
        <id>10.14569/IJACSA.2020.0110467</id>
        <doi>10.14569/IJACSA.2020.0110467</doi>
        <lastModDate>2020-04-30T16:06:46.1130000+00:00</lastModDate>
        
        <creator>Abeer Assiri</creator>
        
        <creator>Abdullah AL-Malaise AL-Ghamdi</creator>
        
        <creator>Hani Brdesee</creator>
        
        <subject>Academic advising system; electronic advising system; higher education; intelligent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Academic advising plays a crucial role in achieving an educational institution's purposes. It is an essential element in solving students&#39; academic problems and maximizing their satisfaction and loyalty. Universities around the world have always tried to improve academic advising to personalize the student’s experience. In fact, technology has the power to improve the advising process and facilitate its corresponding tasks, and this has historically taken different forms. Accordingly, this paper provides an overview of academic advising and the technologies proposed to improve it. The authors present a systematic literature review of research papers that proposed electronic academic advising systems, in order to view the research trends and identify the major challenges of electronic academic advising. The main contribution of this paper is to survey the different aspects of and trends in the electronic systems that have been proposed to serve academic advising. This paper is part of larger research that aims to move from traditional academic advising to advising based on Artificial Intelligence, via the current phase of electronic academic advising.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_67-From_Traditional_to_Intelligent_Academic_Advising.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Emerging IoT Big Data and Cloud Computing for Real Time Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110466</link>
        <id>10.14569/IJACSA.2020.0110466</id>
        <doi>10.14569/IJACSA.2020.0110466</doi>
        <lastModDate>2020-04-30T16:06:46.0830000+00:00</lastModDate>
        
        <creator>Mamoona Humayun</creator>
        
        <subject>Internet of Things (IoT); big data (BGD); cloud computing (CC); sensors; actuators; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The Internet of Things (IoT), cloud computing (CC), and Big Data (BGD) are three different approaches that evolved independently of each other; with time, however, they are becoming increasingly interconnected. The convergence of IoT, CC, and BGD provides new opportunities in various real-time applications, including telecommunication, healthcare, business, education, science, and engineering. Together, these approaches face various challenges during data gathering, processing, and management. The focus of this research paper is to pinpoint the emerging trends in IoT, CC, and BGD; the convergence of these approaches and their impact on various real-time applications; the benefits and challenges associated with these approaches; current industry trends; and future research directions, with a special focus on the healthcare domain. The paper also provides a conceptual framework that integrates IoT, CC, and BGD and offers an IoT-centric cloud infrastructure using BGD. Finally, the paper concludes by providing directions for researchers and practitioners on how to leverage the benefits of combining these approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_66-Role_of_Emerging_IoT_Big_Data_and_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Two Level Edge Activated Carry Save Adder for High Speed Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110465</link>
        <id>10.14569/IJACSA.2020.0110465</id>
        <doi>10.14569/IJACSA.2020.0110465</doi>
        <lastModDate>2020-04-30T16:06:46.0670000+00:00</lastModDate>
        
        <creator>K Mariya Priyadarshini</creator>
        
        <creator>R.S Ernest Ravindran</creator>
        
        <creator>Ipseeta Nanda</creator>
        
        <subject>Carry Save Adder; digital integrated circuits; flip-flop; switching power dissipation; two level edge triggering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In today’s increasing demand for higher integration levels in VLSI and ULSI processors, memory capacity and ALU efficiency play a critical role in design. The chip size of a memory depends on the number of Flip-Flops (FF), which are the micro cells that store binary values. An efficient adder is always a parameter for estimating the cost effectiveness of the multipliers used by an ALU. In this paper, the authors focus on clock frequency utilization as well as low power consumption. The paper presents a novel Carry Save Adder (CSA) combined with the concept of two-level clock triggering for high-speed integrated circuits. The authors propose new Two Level Edge Triggered (TLET) FFs built with 14 transistors (14T) and 12 transistors (12T) that are efficient in terms of switching power dissipation and delay. CSAs using the 14T and 12T FFs are compared in terms of Switching Power Dissipation (SPD) from 0.8 V to 2.0 V. Over this supply voltage range, the difference in SPD is 132.0 nW for the CSA using 16T FFs, 85.6 nW for the CSA using 14T FFs, and only 70.3 nW for the CSA using 12T FFs. The design makes full utilization of the clock signal.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_65-A_Novel_Two_Level_Edge_Activated_Carry_Adder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Customer Satisfaction Factors on e-Commerce Payment System Methods in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110463</link>
        <id>10.14569/IJACSA.2020.0110463</id>
        <doi>10.14569/IJACSA.2020.0110463</doi>
        <lastModDate>2020-04-30T16:06:46.0370000+00:00</lastModDate>
        
        <creator>Hafidz Risqiadi Putra</creator>
        
        <creator>Sfenrianto</creator>
        
        <subject>e-Commerce; payment system methods; DeLone and McLean; Structural Equation Model (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>e-Commerce companies are currently competing to make it easier for customers to make transactions through the variety of payment system methods that have been provided and developed. This research aims to find the factors that influence customer satisfaction in using payment system methods. The variables used in the study are service, comfort, speed, convenience, benefits, active use, and security in conducting transactions. The study identifies which factors influence satisfaction so as to inform the development of payment system methods. The research model and questionnaire use a modified version of the DeLone and McLean information system success model and the technology acceptance model of Tella (2012); in analyzing the questionnaire results, the researchers used descriptive statistics and Structural Equation Model (SEM) analysis with AMOS V.26. From the analysis of these data, the researchers concluded that one variable, perceived comfort, does not significantly affect satisfaction. The results of this study are expected to provide a reference that can be used by digital businesses, particularly financial technology or e-commerce companies, in improving services when applying payment system methods according to the factors that influence the level of customer satisfaction, in order to maintain customer loyalty to the company.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_63-Analysis_of_Customer_Satisfaction_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Classification Considering Probability Density Function based on Simplified Beta Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110464</link>
        <id>10.14569/IJACSA.2020.0110464</id>
        <doi>10.14569/IJACSA.2020.0110464</doi>
        <lastModDate>2020-04-30T16:06:46.0370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Synthetic aperture radar (SAR); maximum likelihood classification (MLH); probability density function (PDF); simplified beta distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>A method for image classification that considers a Probability Density Function (PDF) based on simplified beta distributions is proposed. This paper is concerned with image classification for Synthetic Aperture Radar (SAR) data. In particular, the PDF of SAR data follows not a multivariate normal distribution but a Chi-Square-like distribution. It is, however, not always true that the PDF of SAR data follows a Chi-Square distribution, and due to the mismatch between the Chi-Square distribution and the actual distribution, classification performance degrades. In this paper, a simplified beta distribution is assumed for the PDF of the SAR data. Furthermore, it is used to add texture information to the SAR data when Maximum Likelihood classification is applied; specifically, the “Contrast” texture feature is added to the SAR data. Through experiments with real SAR data, it is found that the matching error between the real PDF and the proposed simplified beta distribution is smaller than that of the normal distribution. It is also found that the proposed distribution-adaptive maximum likelihood method using the simplified beta distribution achieves classification accuracy improvements of 94.7% and 12.1%.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_64-Image_Classification_Considering_Probability_Density.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BlockTrack-L: A Lightweight Blockchain-based Provenance Message Tracking in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110462</link>
        <id>10.14569/IJACSA.2020.0110462</id>
        <doi>10.14569/IJACSA.2020.0110462</doi>
        <lastModDate>2020-04-30T16:06:46.0070000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib Siddiqui</creator>
        
        <creator>Toqeer Ali Syed</creator>
        
        <creator>Adnan Nadeem</creator>
        
        <creator>Waqas Nawaz</creator>
        
        <creator>Sami S. Albouq</creator>
        
        <subject>Data provenance; cloud-based IoT; blockchain; attribute based encryption; light-weight signature generation; light-weight authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Data tracking is of great significance and a central part of digital forensics. In today&#39;s complex network designs, Internet of Things (IoT) devices communicate with each other and require strong security mechanisms. In maintaining an audit trail of IoT devices, or the provenance of IoT device data, it is important to know the origins of requests so as to ensure a certain level of trust in IoT data. Blockchain can provide traceability of records generated from IoT devices in a sensitive environment. In this paper, we present an application-layer data provenance model that works on an execute-order architecture for cloud-based IoT networks. It supports high transaction throughput on the blockchain network with lightweight security overhead by using outsourced encryption on edge nodes. All communications among the IoT devices are connected to a blockchain network and stored on permissioned blockchain peers. The proposed system is evaluated to have a lower cryptographic load, achieved by offloading work from the IoT nodes to edge nodes.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_62-BlockTrack_LA_Lightweight_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling a Hybrid Wireless/Broadband over Power Line (BPL) Communication in 5G</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110461</link>
        <id>10.14569/IJACSA.2020.0110461</id>
        <doi>10.14569/IJACSA.2020.0110461</doi>
        <lastModDate>2020-04-30T16:06:45.9900000+00:00</lastModDate>
        
        <creator>Mohammad Woli Ullah</creator>
        
        <creator>Mohammad Azazur Rahman</creator>
        
        <creator>Md. Humayun Kabir</creator>
        
        <creator>Muhammad Mostafa Amir Faisal</creator>
        
        <subject>5G; wireless; broadband over power line; hybrid; millimeter wave; small cell</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>5G will expand wireless communication dramatically, providing high-speed internet with low latency. Real-time communication will thus be possible, and a vast number of devices will connect from different points. 5G has introduced five enabling technologies, two of which are millimeter wave and the small cell. A small cell connects user devices using the millimeter wave. As this frequency range suffers from limited distance and high attenuation, it must be made reliable for uninterrupted communication, and a hybrid communication network can be a significant solution. In this research, a hybrid wireless and Broadband over Power Line (BPL) communication model is proposed that integrates both technologies at the small cell end. A simulation of BPL and a theoretical analysis of the wireless communication are also presented, from which the total throughput of the hybrid model has been calculated. Broadband over Power Line is chosen alongside wireless communication in this model because of its infrastructure availability in both city and rural areas, its cost-effectiveness, and its quick installation process. Moreover, the hybrid network will increase the throughput volume, and each communication path can act as a backup for the other in an emergency.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_61-Modelling_a_Hybrid_WirelessBroadband.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Soft Computing for Scalability in Context Aware Location based Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110460</link>
        <id>10.14569/IJACSA.2020.0110460</id>
        <doi>10.14569/IJACSA.2020.0110460</doi>
        <lastModDate>2020-04-30T16:06:45.9600000+00:00</lastModDate>
        
        <creator>Priti Jagwani</creator>
        
        <creator>Saroj Kaushik</creator>
        
        <subject>Context aware LBS; Fuzzy C Means; Genetic Algorithm; location privacy; K anonymity; scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Ubiquitous computing blended with context awareness gives users the facility of “anywhere anytime” computing. Location based services represent a class of context-aware computing. The involvement of location as the primary input in location based services has triggered concerns about users’ privacy. Most privacy work in the domain of location based services relies on an obfuscation strategy along with K anonymity. The proposed work builds on the idea of calculating the value of K for K anonymity using context factors in fuzzy format. However, as the number of these fuzzy context factors increases, resulting in more fuzzy rules, the system tends to get slower. To address this issue, the size of the rule base must be reduced without hampering performance much. The goal of the proposed work is to attain scalability and high performance for this system. Towards this, the number of rules in the rule base of the fuzzy inference system has been reduced using Fuzzy C Means and a Genetic Algorithm. The results with the reduced rule base have been compared with those of the exhaustive rule base. It was found that the number of rules can be reduced to a considerable extent with comparable performance and an acceptable level of error.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_60-Soft_Computing_for_Scalability_in_Context_Aware_Location.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Meta-Model for Strategic Educational Goals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110459</link>
        <id>10.14569/IJACSA.2020.0110459</id>
        <doi>10.14569/IJACSA.2020.0110459</doi>
        <lastModDate>2020-04-30T16:06:45.9270000+00:00</lastModDate>
        
        <creator>Mohammad Alhaj</creator>
        
        <creator>Ashraf Sharah</creator>
        
        <subject>Model-driven engineering; meta-model; goal model; ecore modeling framework; object constraint language; strategic educational goals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The metamodeling technique is widely adopted in different fields related to software and systems engineering. A meta-model represents the abstraction of a detailed design at multiple levels. It is used in any structured environment ruled by certain constraints and obligations, and it can instantiate different platform-specific domains from a single platform-independent domain. This paper proposes a new model-driven approach for automatically generating and analyzing the outcome measures of a strategic educational goals model. A new meta-model augmented with arithmetic semantics is created for Strategic Educational Goals, in which a set of outlines defines the enhancement framework of an academic organization. The vision, mission, program educational objectives, and student outcomes are the four common strategic educational goals. These goals support the performance roadmap for measuring the institution’s situation and progress. The proposed meta-model is used to evaluate strategic educational goals in a formal way, to improve the continuous improvement process in academic organizations, and to allow assessment at different levels of management.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_59-A_Meta_Model_for_Strategic_Educational_Goals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three Levels of Modeling: Static (Structure/Trajectories of Flow), Dynamic (Events) and Behavioral (Chronology of Events)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110458</link>
        <id>10.14569/IJACSA.2020.0110458</id>
        <doi>10.14569/IJACSA.2020.0110458</doi>
        <lastModDate>2020-04-30T16:06:45.9130000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>System behavior; static model; chronology of events; conceptual representation; dynamic specification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Constructing a conceptual model as an abstract representation of a portion of the real world involves capturing (1) the static (things/objects and trajectories of flow), (2) the dynamic (event identification), and (3) the behavior (e.g., acceptable chronology of events) of the modeled system. This paper focuses on examining the notion of behavior in modeling, and on current works in the “behavior space”, to illustrate that the problem of behavior and its related concepts in modeling lacks a clear-cut systematic basis. The purpose is to advance the understanding of system behavior in order to avoid ambiguity-related problems in system specification. It is proposed to base the notion of behavior on a new conceptual model, called the thinging machine, which is a modeling tool that establishes three levels of representation: (1) a static structural description constructed upon the flow of things through five generic operations (activities, i.e., create, process, release, transfer, and receive); (2) a dynamic representation that identifies hierarchies of events based on five generic events; and (3) a chronology of events. This is shown through examples that support the thinging machine as a new methodology suitable for all three levels of specification.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_58-Three_Levels_of_Modeling_Static.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BlockChain with IoT, an Emergent Routing Scheme for Smart Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110457</link>
        <id>10.14569/IJACSA.2020.0110457</id>
        <doi>10.14569/IJACSA.2020.0110457</doi>
        <lastModDate>2020-04-30T16:06:45.8970000+00:00</lastModDate>
        
        <creator>Sabir Hussain Awan</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Sozan Sulaiman Maghdid</creator>
        
        <creator>Khalid Zaman</creator>
        
        <creator>M. Yousaf Ali Khan</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <creator>Sohail Imran</creator>
        
        <subject>IoT; efficient; energy scheme; agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Blockchain is an emerging field of study in a number of applications and domains. Especially when combined with the Internet of Things (IoT), it becomes truly transformative, opening up new plans of action, improving engagement, and revolutionizing many sectors, including agriculture. IoT devices are intelligent and have highly critical capabilities, but they are low-powered, have little storage, and face many challenges when used in isolation. Maintaining the network while spending IoT energy on redundant or fabricated data transfers leads to high energy consumption and reduces the life of the IoT network. Therefore, an appropriate routing scheme should be in place to ensure consistency and energy efficiency in an IoT network. This research proposes an efficient routing scheme that integrates IoT with Blockchain for distributed nodes, which work in a distributed manner to use the communication links efficiently. The proposed protocol uses smart contracts within heterogeneous IoT networks to find a route to the Base Station (BS). Each node can ensure a route from an IoT node to the sink and then to the base station, and the protocol permits IoT devices to collaborate during transmission. The proposed routing protocol removes redundant data, blocks attacks on the IoT architecture, lowers energy consumption, and improves the life of the network. The performance of this scheme is compared with our existing IoT-based Agriculture scheme and with LEACH in Agriculture. Simulation results show that the integrated IoT-Blockchain scheme is more efficient: it uses less energy, improves throughput, and enhances network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_57-BlockChain_with_IoT_an_Emergent_Routing_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parkinson’s Disease Classification using Gaussian Mixture Models with Relevance Feature Weights on Vocal Feature Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110456</link>
        <id>10.14569/IJACSA.2020.0110456</id>
        <doi>10.14569/IJACSA.2020.0110456</doi>
        <lastModDate>2020-04-30T16:06:45.8800000+00:00</lastModDate>
        
        <creator>Ouiem Bchir</creator>
        
        <subject>Gaussian Mixture Models; relevance feature weights; Parkinson’s disease; acoustic feature sets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>To automatically perceive the manifestation of dysarthria in Parkinson’s disease, we propose a novel classifier that categorizes acoustic features and detects articulatory deficits. The proposed approach incorporates relevance feature weighting into the Gaussian mixture model in order to address the issue of high dimensionality. Moreover, it learns the relevance feature weights with respect to each model, along with the Gaussian mixture model parameters, to deal with the specificity of the class models. To assess the performance of the proposed approach, we used data collected by the Department of Neurology of the Cerrahpaşa Faculty of Medicine at Istanbul University. The results of the Gaussian mixture models with relevance feature weights algorithm are compared to the GMM results and to the most recent related work. The experimental results show the effectiveness of the proposed approach, with an accuracy of 0.89 and an MCC score of 0.7.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_56-Parkinsons_Disease_Classification_using_Gaussian_Mixture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acoustic Modeling in Speech Recognition: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110455</link>
        <id>10.14569/IJACSA.2020.0110455</id>
        <doi>10.14569/IJACSA.2020.0110455</doi>
        <lastModDate>2020-04-30T16:06:45.8500000+00:00</lastModDate>
        
        <creator>Shobha Bhatt</creator>
        
        <creator>Anurag Jain</creator>
        
        <creator>Amita Dev</creator>
        
        <subject>Acoustic modeling; speech recognition; systematic review; acoustic unit; MFCC; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>This paper presents a systematic review of acoustic modeling (AM) techniques in speech recognition (SR). Acoustic modeling establishes a relationship between acoustic information and language constructs in SR. Over the past decades, researchers have presented studies addressing specific concerns in AM; however, previous work lacks a systematic and comprehensive review of acoustic modeling issues. A systematic review is therefore introduced to understand acoustic modeling issues in speech recognition. This paper provides an extensive and comprehensive inspection of research performed since 1984. The investigation and analysis of AM was carried out using relevant data from 73 research works chosen after a screening process covering the years 1984 to 2020. The systematic review process was divided into parts to investigate acoustic modeling issues. The main issues in acoustic modeling, such as feature extraction techniques, acoustic modeling units, speech corpora, classification methods, tools used, language issues addressed, and evaluation parameters, were investigated. This study helps the reader understand various acoustic modeling issues in comprehensive detail. The research outcomes presented in this study depict research trends and shed light on new research topics in AM. The results of this review can be used to build better speech recognition systems by choosing a suitable acoustic modeling construct.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_55-Acoustic_Modeling_in_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Critical Research Areas under Information Diffusion in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110454</link>
        <id>10.14569/IJACSA.2020.0110454</id>
        <doi>10.14569/IJACSA.2020.0110454</doi>
        <lastModDate>2020-04-30T16:06:45.8170000+00:00</lastModDate>
        
        <creator>Surbhi Kakar</creator>
        
        <creator>Monica Mehrotra</creator>
        
        <subject>Information diffusion; influence maximization; retweet prediction; influence models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>An online social network is a network in which people exchange their ideas and opinions. The exchange of ideas between users leads to the spread of information at a larger scale across social networks; this spread of information is also called information diffusion. This work is dedicated to identifying research areas under the umbrella of information diffusion. The objective of this work is to present an extensive review of such areas, identify the existing research gaps and explore future directions of work. The review also identifies the methodologies, features and aspects studied in the current literature and proposes an optimal feature set to improve performance. This review will enable researchers to quickly identify the research areas, the current gaps and the possible future directions associated with them.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_54-A_Review_of_Critical_Research_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Vulnerability in Emergency Situations in Kindergarten and Primary School Education Centers in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110453</link>
        <id>10.14569/IJACSA.2020.0110453</id>
        <doi>10.14569/IJACSA.2020.0110453</doi>
        <lastModDate>2020-04-30T16:06:45.8030000+00:00</lastModDate>
        
        <creator>Witman Alvarado-D&#237;az</creator>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Meneses-Claudio Brian</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Geographic information systems; student-teacher relationship; students; teachers; maps; number of vulnerable students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Children are the people who require the greatest protection and safety, especially when they are in an educational center, where teachers are responsible for their care. It is therefore important to have teachers prepared to face emergency situations, since the sense of insecurity is greater in national schools in Peru due to the shortage of teachers prepared to handle emergencies. Studies mention that 98.2% of accidents in educational centers are traumas and falls, and that 1 of every 4 students suffers a fracture. Therefore, this study presents spatial data on kindergarten and primary education in Peru, relating the number of students per teacher for the year 2019. The regions whose student-teacher ratio puts the welfare of students at risk are presented and analyzed by georeference. These data are public and are provided by the Ministerio de Educaci&#243;n de Per&#250; (MINEDU), and using Geographic Information System (GIS) tools it was possible to generate maps at the district level. From the maps, it was possible to identify that the areas with the greatest risk are in the natural region of the jungle. Based on the spatial distribution of vulnerable points and outliers of the student-teacher ratio at the kindergarten and primary education levels, it is recommended that governmental and non-governmental institutions in Peru urgently allocate resources to reduce student vulnerability by lowering the ratio of students to teachers, in order to improve the response to any accident or natural disaster.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_53-Analysis_of_Vulnerability_in_Emergency_Situations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhance Medical Sentiment Vectors through Document Embedding using Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110452</link>
        <id>10.14569/IJACSA.2020.0110452</id>
        <doi>10.14569/IJACSA.2020.0110452</doi>
        <lastModDate>2020-04-30T16:06:45.7870000+00:00</lastModDate>
        
        <creator>Rami N. M. Yousef</creator>
        
        <creator>Sabrina Tiun</creator>
        
        <creator>Nazlia Omar</creator>
        
        <creator>Eissa M. Alshari</creator>
        
        <subject>Adverse drug reaction; medical sentiment analysis; recurrent neural network; support vector machine; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Adverse Drug Reaction (ADR) extraction is the process of identifying drug implications mentioned in social posts. Handling medical text for the identification of ADRs is vital to research in terms of configuring the side effects and other medical-related entities within any medical text. Investigating the role of such effects in a positive or negative context, however, is the responsibility of the sentiment classification task, in which every medical review document is categorized by its polarity; this is known as Medical Sentiment Analysis (MSA). Several studies have presented various techniques for MSA. Most recent studies have concentrated on architectures such as the Convolutional Neural Network (CNN) to obtain the document embedding. Yet such architectures focus only on the current input, without considering the previous or subsequent input. This can lead to weaker document embeddings in which some terms are not considered. Hence, this paper proposes a new document embedding approach based on the Recurrent Neural Network (RNN) to improve sentiment classification. Using a benchmark dataset of medical sentiments, the proposed method showed greater sentiment classification accuracy. This finding demonstrates the effectiveness of the RNN in producing document embeddings.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_52-Enhance_Medical_Sentiment_Vectors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontological Model of Hadith Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110451</link>
        <id>10.14569/IJACSA.2020.0110451</id>
        <doi>10.14569/IJACSA.2020.0110451</doi>
        <lastModDate>2020-04-30T16:06:45.7570000+00:00</lastModDate>
        
        <creator>Bendjamaa Fairouz</creator>
        
        <creator>Taleb Nora</creator>
        
        <creator>Arari Amina Nouha</creator>
        
        <subject>Ontology engineering; Islamic ontology; Methontology; semantic representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The Hadith, being the second source of legislation after the Holy Qur&#39;an in the religion of Islam, represents a large body of knowledge in unstructured textual form. The specific nature of Hadiths makes their automatic exploitation a rather difficult, almost impossible task. To enable different types of computer systems to exploit this knowledge, various researchers have used formal representations of the semantics of Hadith. The most widely used semantic representation is the ontology, defined as concepts and relations extracted from the Hadith in the form of a structure interpretable by both machine and human. In this article, we propose an ontology of the Hadith using an approach inspired by the &quot;METHONTOLOGY&quot; methodology. In this project, we deal with religious texts in traditional Arabic and face many difficulties in achieving complete precision and correctness. Hence, we decided to follow an entirely manual process to ensure the correctness of the results. Since manual ontology development is both time- and effort-consuming, we decided to focus only on Hadiths related to “Wudhu2”.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_51-An_Ontological_Model_for_the_Semantic_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FLA-IoT: Virtualization Enabled Architecture for Heterogeneous Systems in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110450</link>
        <id>10.14569/IJACSA.2020.0110450</id>
        <doi>10.14569/IJACSA.2020.0110450</doi>
        <lastModDate>2020-04-30T16:06:45.7400000+00:00</lastModDate>
        
        <creator>Irfan Latif Memon</creator>
        
        <creator>Shakila Memon</creator>
        
        <creator>Junaid Ahmed Bhatti</creator>
        
        <creator>Raheel Ahmed Memon</creator>
        
        <creator>Abdul Sattar Chan</creator>
        
        <subject>Internet of Things; virtualization; virtual mote; cloud; heterogeneous systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>A flexible architecture is always required when communicating with heterogeneous systems, and the IoT is the largest communication network in history, bringing life to everything around us. The currently available three- and four-layered communication architectures are the popular basic structures for implementing IoT: the three-layer architecture is composed of the perception, network and application layers, while the four-layer architecture is composed of the perception, network, service and application layers. The problem with existing architectures is that some layers are not well managed, are complex in structure, and lack interoperability between different kinds of devices. In this research, we present a virtualization-enabled architecture, the Flexible Layered Architecture for the Internet of Things (FLA-IoT), to overcome these challenges. FLA-IoT provides a simple structure with well-organized layers and introduces the creation of a Virtual Mote (virtual object) from every real-world device to enable communication between unlike devices. This results in indiscriminate communication between different real-world devices within a well-managed layered architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_50-FLA_IoT_Virtualization_Enabled_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Optimisation using Multithreading in Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110449</link>
        <id>10.14569/IJACSA.2020.0110449</id>
        <doi>10.14569/IJACSA.2020.0110449</doi>
        <lastModDate>2020-04-30T16:06:45.7230000+00:00</lastModDate>
        
        <creator>Wong Soon Fook</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Afzan Adam</creator>
        
        <subject>Robot vision; recognition; multithreading; real-time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Image processing is one of the most important features of vision-based robotics and is used in various applications to increase productivity. Various researchers have reported computational problems in detecting objects on low-cost devices such as vision-based robotic cars. In the fast-paced development of technology, a system that runs automatically with correct results is essential to the completion of a job. This study aims to propose effective multithreading for road sign recognition. We implemented a multithreading algorithm for the training and detection processes in SVM to utilise the multicore CPU and evaluated it under various conditions on a Raspberry Pi platform, aiming to solve the real-time computation issue when using a Pi camera. Experimental results show a significant improvement in detection accuracy and performance. In conclusion, multithreading significantly improves detection performance on Raspberry Pi processors across various image resolutions and numbers of SVM models.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_49-Resource_Optimisation_using_Multithreading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Retinal Blood Vessel Extraction using Wavelet Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110448</link>
        <id>10.14569/IJACSA.2020.0110448</id>
        <doi>10.14569/IJACSA.2020.0110448</doi>
        <lastModDate>2020-04-30T16:06:45.6930000+00:00</lastModDate>
        
        <creator>Diana Tri Susetianingtias</creator>
        
        <creator>Sarifuddin Madenda</creator>
        
        <creator>Fitrianingsih</creator>
        
        <creator>Dea Adlina</creator>
        
        <creator>Rodiah</creator>
        
        <creator>Rini Arianty</creator>
        
        <subject>Blood vessels; extraction; fundus retina; identification; wavelet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>One important part of the eye that is critical for processing visual information before it is sent through the optic nerve to the visual cortex is the retina. The retina of each individual has its own uniqueness that can be used as a characteristic feature in identification, verification, and authentication. The traditional authentication process has various weaknesses, such as forgetting the PIN code or losing the ID card used to obtain system authentication. The extracted retinal blood vessels can be used as a feature in the formation of an individual identification system. In images taken with a fundus camera, the retina’s blood vessels have a distinguishing shape and number of candidates from one human retina to another. In this research, we develop an algorithm for extracting the blood vessels of the retinal fundus image. Feature extraction is performed by taking the blood vessels, one of the unique characteristics of the fundus image, to form an individual identification system. The number of blood vessel candidates is then calculated from the extracted blood vessels. This research uses the wavelet function, examining the very complex texture of blood vessels through the approximation coefficient. The directional detail coefficients of the wavelet are also used to extract the retinal blood vessels, since the structure of the retinal blood vessels in the fundus image runs in all directions. The resulting blood vessel candidates will be used in further research to formulate a biometric system based on the unique features of the retinal fundus image for identifying individuals by their body traits.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_48-Retinal_Blood_Vessel_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Enhancing QoS of Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110447</link>
        <id>10.14569/IJACSA.2020.0110447</id>
        <doi>10.14569/IJACSA.2020.0110447</doi>
        <lastModDate>2020-04-30T16:06:45.6630000+00:00</lastModDate>
        
        <creator>Dar Masroof Amin</creator>
        
        <creator>Munishwar Rai</creator>
        
        <subject>IoT (internet of things); big data; DSDSS (Domain Specific Data Distribution Algorithm); AI (Artificial Intelligence); ML (Machine Learning)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The dramatic increase in the number of devices connected to the internet is driving inherent growth in the creation of data. The use of data science in research is creating opportunities for better business analytics and the generation of future trends. Data is growing at an ever-increasing rate, and this exponential growth creates opportunities for utilizing it in the process of analysis. The techniques and technology currently in place are not able to cater to the needs of the growing data on the Internet. The research work presented here provides a novel framework for improving the performance and management of big data clusters. The proposed research provides a detailed account of how big data can be handled in the respective domains. The prime aim of this research is to formulate and implement an algorithm, tested with different data sets, that can make the process of mining and handling big data easy for organizations. The framework provides optimized results compared to traditional systems.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_47-A_Novel_Framework_for_Enhancing_QoS_of_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Fuzzy-Logic in Decision Support System based on Personal Ratings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110446</link>
        <id>10.14569/IJACSA.2020.0110446</id>
        <doi>10.14569/IJACSA.2020.0110446</doi>
        <lastModDate>2020-04-30T16:06:45.6300000+00:00</lastModDate>
        
        <creator>Hmood Al-Dossari</creator>
        
        <creator>Sultan Alyahya</creator>
        
        <subject>Service recommendation; standard detection; user ratings; predication; fuzzy logic; decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The decision-making process of selecting a service is very complex. Current recommendation systems make generic recommendations to users regardless of their personal standards. This can result in misleading recommendations, because different users normally have different standards in evaluating services: some might be harsh in their assessments while others are lenient. In this paper, we propose a standard-based approach to assist users in selecting their preferred services. To do so, we develop a judgement model to detect users’ standards and then utilize them in the service recommendation process. To study the accuracy of our approach, 65536 service invocation results were collected from 3184 service users. The experimental results show that our proposed approach achieves better prediction accuracy than other approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_46-Using_Fuzzy_Logic_in_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Cooperative Activities in Teaching-Learning University Subjects: Elaboration of a Proposal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110445</link>
        <id>10.14569/IJACSA.2020.0110445</id>
        <doi>10.14569/IJACSA.2020.0110445</doi>
        <lastModDate>2020-04-30T16:06:45.6000000+00:00</lastModDate>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Arasay Padron-Alvarez</creator>
        
        <creator>Elisa Casta&#241;eda-Huaman</creator>
        
        <creator>V&#237;ctor Cornejo-Aparicio</creator>
        
        <subject>Cooperative learning; competency focus; cooperative techniques; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>University professors face the challenge of incorporating activities that promote student engagement, discussion, conflict resolution, and teamwork. In this context, cooperative learning emerges as a pedagogical model that fosters teamwork, organizing students into groups in which joint and coordinated work reinforces individual and collective learning. The proposal presented here facilitates the design of cooperative activities that consider the necessary interdependence between learning, teaching, content and context. In addition to explaining how to articulate all these aspects, it places the student at the center of the training process; to do so, it collects the main guidelines of cooperative learning and enriches the learning environment with the knowledge-management and communication potential provided by Information and Communication Technologies. To support the proposal, the results obtained in four subjects of a mathematical nature are presented, showing improvements in student learning.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_45-Design_of_Cooperative_Activities_in_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fermat Factorization using a Multi-Core System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110444</link>
        <id>10.14569/IJACSA.2020.0110444</id>
        <doi>10.14569/IJACSA.2020.0110444</doi>
        <lastModDate>2020-04-30T16:06:45.5830000+00:00</lastModDate>
        
        <creator>Hazem M. Bahig</creator>
        
        <creator>Hatem M. Bahig</creator>
        
        <creator>Yasser Kotb</creator>
        
        <subject>Integer factorization; fermat factorization; parallel algorithm; multi-core</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Factoring a composite odd integer into its prime factors is one of the security problems underlying some public-key cryptosystems, such as the Rivest-Shamir-Adleman cryptosystem. Many strategies have been proposed to solve the factorization problem in a fast running time. However, the main drawback of the algorithms used in such strategies is the high computational time needed to find the prime factors. Therefore, in this study, we focus on one of the factorization algorithms used when the two prime factors are of the same size, namely, the Fermat factorization (FF) algorithm. We investigate the performance of the FF method using three parameters: (1) the number of bits of the composite odd integer, (2) the size of the difference between the two prime factors, and (3) the number of threads used. The results of our experiments, in which we used different parameter values, indicate that the running time of the parallel FF algorithm is faster than that of the sequential FF algorithm. The maximum speedup achieved by the parallel FF algorithm is 6.7 times that of the sequential FF algorithm using 12 cores. Moreover, the parallel FF algorithm has near-linear scalability.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_44-Fermat_Factorization_using_a_Multi_Core_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Air Quality Prediction (PM2.5 and PM10) at the Upper Hunter Town - Muswellbrook using the Long-Short-Term Memory Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110443</link>
        <id>10.14569/IJACSA.2020.0110443</id>
        <doi>10.14569/IJACSA.2020.0110443</doi>
        <lastModDate>2020-04-30T16:06:45.5530000+00:00</lastModDate>
        
        <creator>Alexi Delgado</creator>
        
        <creator>Ramiro Ricardo Maque Acu&#241;a</creator>
        
        <creator>Chiara Carbajal</creator>
        
        <subject>Air quality; long-short term memory (LSTM); 2.5 particulate matter (PM₂․₅); 10 Particulate matter (PM₁₀)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Air quality is crucial for the environment and the quality of life of citizens. Therefore, in the present study a software application is developed to predict air quality on the basis of particulate matter 2.5 (PM2.5) and particulate matter 10 (PM10) in the Upper Hunter town of Muswellbrook, Australia, as it is considered to be one of the towns with the lowest air quality levels worldwide. For this purpose, the long-short term memory (LSTM) methodology was applied to data collected by the NSW Department of Planning, Industry and Environment during the period from 30 September 2012 to 30 September 2019, to predict the behavior of the aforementioned particulate matter during the month of October 2019. A comparison between the average and maximum values suggested by the software and the actual values shows that the predicted results of the study are quite close to reality. Finally, the results obtained in this study may serve as a basis for local authorities to proceed with the necessary protocols and measures in case an alarming prediction occurs.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_43-Air_Quality_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CASC 3N vs. 4N: Effect of Increasing Cellular Automata Neighborhood Size on Cryptographic Strength</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110442</link>
        <id>10.14569/IJACSA.2020.0110442</id>
        <doi>10.14569/IJACSA.2020.0110442</doi>
        <lastModDate>2020-04-30T16:06:45.5370000+00:00</lastModDate>
        
        <creator>Fatima Ezzahra Ziani</creator>
        
        <creator>Anas Sadak</creator>
        
        <creator>Charifa Hanin</creator>
        
        <creator>Bouchra Echandouri</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Stream ciphers; cellular automata; neighborhood size; dieharder; NIST STS; cryptographic properties; attacks on stream ciphers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Stream ciphers are symmetric cryptosystems that rely on pseudorandom number generators (PRNGs) as a primary building block to generate a keystream. Stream ciphers have been extensively studied, and many designs have been proposed throughout the years. One popular design is the combination of linear feedback shift registers (LFSRs) and nonlinear feedback shift registers (NFSRs). Although this design is suitable for both software and hardware implementation and provides good randomness behavior, it is still subject to attacks such as fault attacks and correlation attacks. Cellular automata (CAs) based stream ciphers are another design class that has been proposed. CAs display good cryptographic properties and good randomness behavior, as well as high computational speed and a higher level of security. The use of CAs as cryptographic primitives is not recent and has been thoroughly investigated, especially the use of three-neighborhood one-dimensional cellular automata. In this article, the authors investigate the impact of increasing the neighborhood size of CAs on the security level and the cryptographic properties provided. Thereafter, four-neighborhood one-dimensional CAs are studied and a stream cipher algorithm is proposed. The security of the proposed algorithm is demonstrated using the results of standard tests (i.e., the NIST Test Suite and the Dieharder Battery of Tests), by computing the cryptographic properties of the CAs used, and by showing the resistance of the suggested algorithm to most known attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_42-CASC_3N_vs_4N_Effect_of_Increasing_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Z Specification for Reliability Requirements of a Service-based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110441</link>
        <id>10.14569/IJACSA.2020.0110441</id>
        <doi>10.14569/IJACSA.2020.0110441</doi>
        <lastModDate>2020-04-30T16:06:45.5070000+00:00</lastModDate>
        
        <creator>Manoj Lall</creator>
        
        <creator>John A. Van Der Poll</creator>
        
        <subject>Reliability; non-functional requirements; Web services; Quality of Service; Formal specification; UML modelling; Z</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The utilization of a Web services based application depends not only on meeting its functional requirements but also its non-functional requirements. The non-functional requirements express the quality of service (QoS) expected from a system. The QoS describes the capability of the service to meet the requirements of its consumers. In the context of Web services, considerations of QoS are critical for a number of reasons. Reliability is among the important QoS requirements of such distributed components, as it enhances confidence in the services provided. Although the importance of QoS requirements is well established, they are often ignored until the end of the development cycle. Reasons cited for this are that they are difficult to define and represent precisely, and rely on entities that may not be known at early stages. This article aims to address the challenge of incorporating QoS at an early stage of service development and representing it in a precise manner. To achieve this goal, this paper makes use of a process model to facilitate the incorporation of the QoS attributes, and of Z as the specification language for its formalism. Reliability is used to exemplify the process. The Z schemas have been checked for syntax and type using the Fuzz type checker.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_41-A_Z_Specification_for_Reliability_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic based Anti-Slip Control of Commuter Train with FPGA Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110440</link>
        <id>10.14569/IJACSA.2020.0110440</id>
        <doi>10.14569/IJACSA.2020.0110440</doi>
        <lastModDate>2020-04-30T16:06:45.4900000+00:00</lastModDate>
        
        <creator>Fozia Hajano</creator>
        
        <creator>Tayab D Memon</creator>
        
        <creator>Farzana Rauf Abro</creator>
        
        <creator>Imtiaz Hussain Kalwar</creator>
        
        <creator>Burhan</creator>
        
        <subject>Wheel rail contact condition; anti-slip; railway wheelset fuzzy logic; FPGA hardware estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In the railway industry, slip control has always been essential due to the low friction and low adhesion between the wheels and the rail, and it has been an issue for the design and operation of railway vehicles. Slip is an unpredictable parameter that degrades the rail surface at the wheel-rail contact patch of the bogie wheel under the mechanical forces of traction; it destabilizes railway traction, which then fails to fulfill safety and punctuality requirements. In this paper, we present a fuzzy logic-based anti-slip controller for the commuter train with FPGA implementation, which minimizes slip parameters. The fuzzy logic-based anti-slip controller for the commuter train is designed in MATLAB and then tested for area-performance parameters on FPGA through the System Generator library. Simulation is performed to demonstrate the effectiveness of the proposed fuzzy logic control system for anti-slip control under various parameters; the simulation results prove the effectiveness of the proposed control system compared with a conventional PID controller and show high anti-slip control performance under the nonlinearity of brake dynamics.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_40-Fuzzy_Logic_based_Anti_Slip_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Translation Software on Improving the Performance of Translation Majors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110439</link>
        <id>10.14569/IJACSA.2020.0110439</id>
        <doi>10.14569/IJACSA.2020.0110439</doi>
        <lastModDate>2020-04-30T16:06:45.4600000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Ayman F. Khafaga</creator>
        
        <creator>Iman El-Nabawi Abdel Wahed Shaalan</creator>
        
        <subject>CAT; Saudi Universities; SDL Trados Studios; translation pedagogy; translation technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Recent years have witnessed a high demand for professional translation services due to the global nature of the world economy, the accessibility of data in different languages, and the development of unprecedented communication channels. Translators alone can no longer meet the growing needs of customers and businesses. As such, different translation technologies have been developed to help translation learners and professionals improve their performance in producing high-quality translations. These technologies are now widely integrated into translation programs in universities, institutes and training centers around the world for their usefulness and reliability in improving the performance of translation learners and professionals in delivering trustworthy and professional translation services. Nevertheless, surveys show that the Saudi labor market still has a serious shortage of qualified translators who are familiar with translation technologies, which has negative impacts on the quality and delivery of translation services. This study, therefore, seeks to explore the opportunities and challenges of incorporating translation software and the latest technologies into translation pedagogy in Saudi universities. Open-ended interviews with 37 translation instructors from 9 Saudi universities were conducted. Results indicate that the integration of translation software and technologies is still less than expected. This can be attributed to the fact that the majority of instructors prefer manual translation over computer-assisted translation (CAT), translation technologies are not provided by the institutions, and learning outcomes are not linked to labor market needs.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_39-The_Impact_of_Translation_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Rapid Method for Detection of Mutations in Deoxyribonucleic Acid - Sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110438</link>
        <id>10.14569/IJACSA.2020.0110438</id>
        <doi>10.14569/IJACSA.2020.0110438</doi>
        <lastModDate>2020-04-30T16:06:45.4600000+00:00</lastModDate>
        
        <creator>Wajih Rhalem</creator>
        
        <creator>Jamal El Mhamdi</creator>
        
        <creator>Mourad Raji</creator>
        
        <creator>Ahmed Hammouch</creator>
        
        <creator>Aqili Nabil</creator>
        
        <creator>Nassim Kharmoum</creator>
        
        <creator>Hassan Ghazal</creator>
        
        <subject>Alignment algorithms; genomic sequences; dynamic polynomial interpolation; mutations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The comparison of genomic sequences plays a key role in determining the structural and functional relationships between genes. This comparison is carried out by identifying the similarities, differences and mutations between genomic sequences, which makes it possible to study and analyze the genetic and evolutionary relationships between organisms. Alignment algorithms have been in the spotlight for the last few decades due to a vast explosion of genomic data. They have attracted a great deal of interest from researchers focused on developing practical solutions that ensure effective alignments with an optimal response time. In this paper, a novel algorithm based on the Discrete To Continuous &quot;DTC&quot; approach has been developed. The proposed methodology was compared against other existing methods, which are largely based on the concept of string matching. Experimental results show that the DTC algorithm delivers highly efficient alignment with a reduced response time.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_38-An_Efficient_and_Rapid_Method_for_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Analyses the Equivalent-Schema Models for OSRR and COSRR Coupled to Planar Transmission Lines by Scattering Bond Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110437</link>
        <id>10.14569/IJACSA.2020.0110437</id>
        <doi>10.14569/IJACSA.2020.0110437</doi>
        <lastModDate>2020-04-30T16:06:45.4270000+00:00</lastModDate>
        
        <creator>Islem Salem</creator>
        
        <creator>Hichem Taghouti</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Scattering Bond Graph (SBG); metamaterials; wave matrix [W]; matrix scattering [S]; transmission line; OSRR and OCSRR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Driven by consumer demand, international competition has become increasingly intense, and industrialists are mobilizing to meet the resulting requirements. Faced with this challenge, manufacturers are looking for techniques that allow them to gain productivity by increasing the rate of perfection before going to manufacturing. Modeling is the most important phase in a construction chain, since it allows not only the analysis and understanding of the physical system but also the improvement of its behavior according to the desired objective from the design phase. The results presented in this article concern the modeling of metamaterial transmission lines loaded with OSRR &quot;Open Split-Ring Resonators&quot; and COSRR &quot;Complementary Open Split-Ring Resonators&quot;, with the aim of improving the analysis, synthesis and understanding of this system. The Scattering Bond Graph technique is used, which improves impedance matching and reduces the bandwidth. This technique allows us to deduce the scattering parameters (matrix [S]) of the OSRR/COSRR TL elements from the wave matrix [W]; this matrix is determined through the specific properties of the equivalent Bond Graph representation based on the notion of causality.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_37-Modeling_and_Analyses_the_Equivalent_Schema_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Proximity Mobile Payment Acceptance among Saudi Individuals: An Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110436</link>
        <id>10.14569/IJACSA.2020.0110436</id>
        <doi>10.14569/IJACSA.2020.0110436</doi>
        <lastModDate>2020-04-30T16:06:45.4130000+00:00</lastModDate>
        
        <creator>Rana Alabdan</creator>
        
        <creator>Sulphey MM</creator>
        
        <subject>Awareness; mobile payment; Saudi; quantitative study; e-payment; banking; ease of use</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Traditional methods of payment such as cash, debit, or credit cards are giving way to mobile payment, a paradigm shift in payment patterns. Mobile payment is increasingly an essential electronic service in the banking sector and a source of competitive advantage among banks. Although mobile payment has gradually been accepted in Saudi Arabia, limited research has been conducted to explore the barriers to accepting mobile payment among Saudi nationals. This study examined the factors of mobile payment acceptance among Saudis. An online survey was conducted among 414 respondents. The study identified Ease of Use, Utility, Security, and Awareness as factors that could lead to the acceptance of mobile payment options. It was also found that male respondents show higher mobile payment acceptance than females. A few suggestions to enhance mobile payment acceptance among the Saudi population are also provided. It is anticipated that the present study will act as a trigger for further studies in this noteworthy area.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_36-Understanding_Proximity_Mobile_Payment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy, Security and Usability for IoT-enabled Weight Loss Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110435</link>
        <id>10.14569/IJACSA.2020.0110435</id>
        <doi>10.14569/IJACSA.2020.0110435</doi>
        <lastModDate>2020-04-30T16:06:45.3970000+00:00</lastModDate>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Valerie Gay</creator>
        
        <creator>Nabeela Awan</creator>
        
        <creator>Mohammad Alshehri</creator>
        
        <creator>Mohammed J. AlGhamdi</creator>
        
        <creator>Ateeq ur Rehman</creator>
        
        <subject>Internet of things; obesity; usability; data security; privacy; app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Obesity is considered a major health issue worldwide, and the obesity rate among Saudi citizens is rising alarmingly. Internet of Things (IoT)-enabled mobile apps can assist obese Saudi users in losing weight by collecting sensitive personal information and then providing accurate and personalized weight loss advice. These data can be collected using IoT devices embedded in a smartphone. However, these IoT-enabled apps should be usable and able to provide data security and user privacy protection. This paper continues our usability study of two Arabic weight loss IoT-enabled apps by performing a qualitative analysis of them. It discusses users’ and health professionals’ feedback, concerns and suggestions. Based on the analysis, a comprehensive usability guideline for developing a new Arabic weight loss IoT-enabled app for obese Saudi users is provided.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_35-Privacy_Security_and_usability_for_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multiple Linear Regressions Model for Crop Prediction with Adam Optimizer and Neural Network Mlraonn</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110434</link>
        <id>10.14569/IJACSA.2020.0110434</id>
        <doi>10.14569/IJACSA.2020.0110434</doi>
        <lastModDate>2020-04-30T16:06:45.3800000+00:00</lastModDate>
        
        <creator>M. Lavanya</creator>
        
        <creator>R. Parameswari</creator>
        
        <subject>Multiple Linear Regression; Adam Optimization; Neural Network; Keras; Machine learning algorithm; Root Mean Square Error (RMSE); Mean Square Error (MSE); Mean Absolute Error (MAE); presence of Hydrogen (pH); Electrical Conductivity (EC); Organic Matter (OM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Due to the increase in population, the demand for food is increasing day by day, and crop prediction is needed to fill the gap between demand and supply. Instead of following a traditional crop selection method, successful crop selection for the given soil properties will help farmers obtain the expected crop yield. The objective of the proposed work is to develop one such system. The proposed system is developed using real data with various soil parameters acquired from a soil laboratory located in Chennai. The system uses 16 soil parameters, which include all the micro- and macro-nutrients along with pH, EC and OM values, together with the recommended crop for those soil parameters. The proposed Mlraonn (Multiple Linear Regression with Adam Optimization in Neural Network) model is developed using Keras, a software library mainly used for deep learning. A neural network approach is used to construct a regression model. The model is evaluated with loss metrics such as RMSE, MSE, and MAE. The proposed algorithm is compared with existing standard machine learning algorithms. It is found that the proposed algorithm yields much smaller errors in all three loss metrics than standard algorithms such as Random Forest Regression and Multiple Linear Regression.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_34-A_Multiple_Linear_Regressions_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Concordance to Decode the Ideological Weight of Lexis in Learning Narrative Literature: A Computational Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110433</link>
        <id>10.14569/IJACSA.2020.0110433</id>
        <doi>10.14569/IJACSA.2020.0110433</doi>
        <lastModDate>2020-04-30T16:06:45.3630000+00:00</lastModDate>
        
        <creator>Ayman F. Khafaga</creator>
        
        <creator>Iman El-Nabawi Abdel Wahed Shaalan</creator>
        
        <subject>Concordance; computational linguistics; learning; ideology; narrative literature; lexical items</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In learning narrative literature, students find it difficult to comprehend and approach the ideological messages behind the use of recurrent lexical items in literary narrative texts. This problem arises from the huge number of lexical items in which literary texts, particularly narrative ones, abound. The use of computers and of computational linguistics makes it possible to process and examine large data sets for a variety of purposes and to investigate questions that could not feasibly be answered if the analysis were carried out manually. This paper, therefore, investigates the relevance of using concordance to decode the ideological weight of lexis in narrative literature. The main objective of the paper is to explore the extent to which certain ideologies and themes are decoded in literary narrative texts subjected to a computational concordance analysis. This is conducted by means of computational concordancing intended to provide two verifiable inputs: Frequency Distribution (FD) and Key Word in Context (KWIC). The paper is grounded in an experimental study in which 39 English-major students attending one novel course at Prince Sattam bin Abdulaziz University were voluntarily involved in an optional course addressing the study of a narrative literary text: Animal Farm. Participants were divided into two groups: an experimental group and a control group. The former was allowed to use concordance, whereas the latter was only permitted to use conventional methods of studying narrative texts (mere reading). Both groups were assigned to find the themes and ideological meanings inferred from a selected list of 13 words from the novel. Results show that the experimental group manages, by applying concordance, to decode the ideological weight of the selected words more accurately, credibly and quickly than the control group. This in turn facilitates the process of determining the intended message of the novel, either at the character-character level of discourse or at the author-readers one.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_33-Using_Concordance_to_Decode_the_Ideological_Weight.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis for Assessment of Hotel Services Review using Feature Selection Approach based-on Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110432</link>
        <id>10.14569/IJACSA.2020.0110432</id>
        <doi>10.14569/IJACSA.2020.0110432</doi>
        <lastModDate>2020-04-30T16:06:45.3330000+00:00</lastModDate>
        
        <creator>Dyah Apriliani</creator>
        
        <creator>Taufiq Abidin</creator>
        
        <creator>Edhy Sutanta</creator>
        
        <creator>Amir Hamzah</creator>
        
        <creator>Oman Somantri</creator>
        
        <subject>Sentiment analysis; feature selection; decision tree; support vector machines; Na&#239;ve Bayes; hotel services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Getting the best hotel accommodation with great services is what every tourist wants, and hotel reviews found on social media sometimes become a reference for booking a hotel room. The problem is that the reviewer’s sentiment is sometimes inaccurately understood; therefore, a sentiment analysis approach is used in this study. Three algorithms are used within this article: Na&#239;ve Bayes, support vector machines, and decision tree. The experiments show that decision tree is the best algorithm; however, its accuracy level remains a concern since it is not yet optimal. The purpose of this study is to find a hybrid sentiment analysis model for an intelligent application that can be used as decision support for the hotel service assessment recommendation problem. In this paper, we propose a model developed using a feature selection (FS) approach, whereas the improvement of model accuracy is achieved using information gain (IG). The experiment was carried out in five stages, namely collecting the research dataset in the form of hotel service assessment texts, data pre-processing, weighting, experimenting with models, and evaluation. Experiments were conducted to obtain the best accuracy for the proposed model, while evaluations were carried out to determine the accuracy of the model. Based on the experimental results, the best accuracy level of the model is 88.54%.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_32-Sentiment_Analysis_for_Assessment_of_Hotel_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Combined List Hierarchy and Headings of HTML Documents for Learning Domain-Specific Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110431</link>
        <id>10.14569/IJACSA.2020.0110431</id>
        <doi>10.14569/IJACSA.2020.0110431</doi>
        <lastModDate>2020-04-30T16:06:45.3030000+00:00</lastModDate>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>Binish Raza</creator>
        
        <creator>Taiba Jabeen</creator>
        
        <creator>Sehrish Raza</creator>
        
        <creator>Munnawar Abbas</creator>
        
        <subject>Ontology learning; semantic web; sports ontology; HTML documents; knowledge extraction; ontology engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>HTML pages contain unstructured and diverse information; such documents lack formal semantics and are not machine-understandable. The Semantic Web aims to add formal semantics to web data, and ontology provides formal semantics for a domain and is thus considered a foundation of the Semantic Web. Domain ontologies can be constructed manually, but this process is tedious and inefficient. Thus, this study presents an ontology learning (OL) model to create domain ontologies automatically from a set of HTML pages. The key insight of this research is that it combines the list structure and headings of HTML pages to recognize the ontology vocabulary. The approach also incorporates synonym relationships into the ontology and allows the semantic interpretation of ontology concepts. We implement the proposed OL approach to build a sports ontology from a collection of sports-domain HTML documents. The new sports ontology is tested using the FaCT++ reasoner; results show no inconsistency in the ontology. Furthermore, experts evaluate the successful mapping of HTML lists and headings to the ontology vocabulary. The proposed OL approach performs effectively, achieving precision values of 92.7% and 95.4% for list and heading mapping, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_31-Using_Combined_List_Hierarchy_and_Headings_of_HTML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Near Duplicate Image Retrieval using Multilevel Local and Global Convolutional Neural Network Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110430</link>
        <id>10.14569/IJACSA.2020.0110430</id>
        <doi>10.14569/IJACSA.2020.0110430</doi>
        <lastModDate>2020-04-30T16:06:45.2870000+00:00</lastModDate>
        
        <creator>Tejas Mehta</creator>
        
        <creator>C. K. Bhensdadia</creator>
        
        <subject>Near duplicate image retrieval; local CNN features; global CNN features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In this work, we present an approach based on matching multilevel local as well as global Convolutional Neural Network (CNN) features to retrieve near-duplicate images. CNN features are suitable for visual matching, but the CNN features of the entire image alone may not yield accurate retrieval because of various image editing/capturing operations. Our retrieval task therefore matches image pairs at both local and global levels. In local matching, an image is segmented into fixed-size blocks, and patches are then extracted by considering neighboring regions at different levels. Matching local image patches at different levels provides robustness to our retrieval model. In local patch extraction, we select only blocks containing SURF feature points instead of selecting all blocks. CNN features are extracted and stored for each image patch, followed by extraction of global CNN features. Finally, similarity between image pairs is computed by considering all extracted CNN features. Our similarity function is based on correlation and the number of blocks found in matching. We evaluated the proposed approach on the benchmark Holiday dataset. Retrieval results show remarkable improvement in mean average precision (mAP) on the dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_30-Near_Duplicate_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Study on Intelligent Android Malware Detection based on Supervised Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110429</link>
        <id>10.14569/IJACSA.2020.0110429</id>
        <doi>10.14569/IJACSA.2020.0110429</doi>
        <lastModDate>2020-04-30T16:06:45.2700000+00:00</lastModDate>
        
        <creator>Talal A.A Abdullah</creator>
        
        <creator>Waleed Ali</creator>
        
        <creator>Rawad Abdulghafor</creator>
        
        <subject>Android; malware applications; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The increasing number of mobile devices using the Android operating system in the market makes these devices the first target for malicious applications. In recent years, several Android malware applications were developed to perform certain illegitimate activities and harmful actions on mobile devices. In response, specific tools and anti-virus programs used conventional signature-based methods in order to detect such Android malware applications. However, the most recent Android malware apps, such as zero-day, cannot be detected through conventional methods that are still based on fixed signatures or identifiers. Therefore, the most recently published research studies have suggested machine learning techniques as an alternative method to detect Android malware due to their ability to learn and use the existing information to detect the new Android malware apps. This paper presents the basic concepts of Android architecture, Android malware, and permission features utilized as effective malware predictors. Furthermore, a comprehensive review of the existing static, dynamic, and hybrid Android malware detection approaches is presented in this study. More significantly, this paper empirically discusses and compares the performances of six supervised machine learning algorithms, known as K-Nearest Neighbors (K-NN), Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), Na&#239;ve Bayes (NB), and Logistic Regression (LR), which are commonly used in the literature for detecting malware apps.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_29-Empirical_Study_on_Intelligent_Android_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of an eHealth app: Privacy, Security and Usability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110428</link>
        <id>10.14569/IJACSA.2020.0110428</id>
        <doi>10.14569/IJACSA.2020.0110428</doi>
        <lastModDate>2020-04-30T16:06:45.2400000+00:00</lastModDate>
        
        <creator>Ryan Alturki</creator>
        
        <creator>Valerie Gay</creator>
        
        <creator>Nabeela Awan</creator>
        
        <creator>Mohammad Alshehri</creator>
        
        <creator>Mohammed J. AlGhamdi</creator>
        
        <creator>Mehwish Kundi</creator>
        
        <subject>Obesity; usability; data security; privacy; app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Obesity and overweight are considered a global health threat. Saudi Arabia is a country with a high percentage of people suffering from obesity. These people can be helped to lose weight through mobile apps, which can collect users' personal information and use the collected data to provide precise, personalized weight loss advice. However, weight loss apps must be user friendly and must provide data security and user privacy protection. In this paper, we analyze the usability, security, and privacy of a weight loss app. Our main aim is to clarify the data privacy and security procedures and to test the usability of the new Arabic weight loss app 'Akser Waznk', which was developed with the social and cultural norms of Saudi users in mind.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_28-Analysis_of_an_eHealth_App_Privacy_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arrhythmia Classification using 2D Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110427</link>
        <id>10.14569/IJACSA.2020.0110427</id>
        <doi>10.14569/IJACSA.2020.0110427</doi>
        <lastModDate>2020-04-30T16:06:45.2230000+00:00</lastModDate>
        
        <creator>Robby Rohmantri</creator>
        
        <creator>Nico Surantha</creator>
        
        <subject>Convolutional neural network; CNN; CNN 2D; image classifier; electrocardiogram; ECG; arrhythmia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Arrhythmia is an abnormal heartbeat rhythm that may put the body in a critical condition, and it becomes more dangerous as the cardiovascular system grows more vulnerable with age. To diagnose this abnormality, an arrhythmia expert or cardiologist analyzes the pattern of an electrocardiogram (ECG). An ECG is a heartbeat signal produced by an electrocardiograph sensor, which records the electrical impulses produced by the heart. Convolutional Neural Networks (CNNs) are often used by researchers to classify ECG signals into arrhythmia classes. State-of-the-art research has applied 2D CNNs with accuracy up to 99% on 128x128 images obtained by transforming the ECG signal. In this paper, the authors take a different approach, building a simpler 2D CNN image classifier with a variety of input sizes smaller than the state-of-the-art input and grouping classes based on transformed ECG signals from the MIT-BIH Arrhythmia database, with the aim of finding the optimal input size and the best accuracy for classifying ECG signal images. This research achieved an accuracy of up to 98.91% for 2 classes, 98.10% for 7 classes, and 98.45% for 8 classes.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_27-Arrhythmia_Classification_using_2D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regression Model and Neural Network Applied to the Public Spending Execution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110426</link>
        <id>10.14569/IJACSA.2020.0110426</id>
        <doi>10.14569/IJACSA.2020.0110426</doi>
        <lastModDate>2020-04-30T16:06:45.1930000+00:00</lastModDate>
        
        <creator>Jos&#233; Morales</creator>
        
        <creator>Jos&#233; Huanca</creator>
        
        <subject>Regression; neural network; multilayer perceptron; institutional budget; public spending</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Artificial Neural Networks are connectionist systems formed by numerous processing units called neurons, connected to each other, which adapt their structure through learning techniques to solve function approximation and pattern classification problems. They process the information supplied to them, either to find relationships between the data and the objective function to be approximated, or to classify the data into different categories. Regression analysis aims to determine the functional relationship between a dependent variable and one or more independent variables. The purpose of this research is to use regression methods (multiple regression) and artificial neural networks (multilayer perceptron) to determine the influence of spending execution on the regional government&#39;s public budget. 95% of the variability of the budget of the Moquegua region is determined and explained by the three sectors (primary, secondary and tertiary), and 5% is determined by other factors outside the regional government budget. The coefficients of determination are R2 = 95.9% for the regression model and R2 = 95.3% for the neural network (multilayer perceptron). Artificial neural networks and regression models obtained very similar results, both achieving well-fitted models.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_26-Regression_Model_and_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predict Students’ Academic Performance based on their Assessment Grades and Online Activity Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110425</link>
        <id>10.14569/IJACSA.2020.0110425</id>
        <doi>10.14569/IJACSA.2020.0110425</doi>
        <lastModDate>2020-04-30T16:06:45.1770000+00:00</lastModDate>
        
        <creator>Amal Alhassan</creator>
        
        <creator>Bassam Zafar</creator>
        
        <creator>Ahmed Mueen</creator>
        
        <subject>Predict student performance; learning management system; data mining; educational data mining; classification model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The ability to predict students’ academic performance is critical for any educational institution that aims to improve its students&#39; learning process and achievement. Although the student performance prediction problem has been studied widely, it remains a challenging and complex issue for educational institutions due to the different features that affect students’ learning process and achievement in courses. Moreover, the utilization of web-based learning systems in education provides opportunities to study how students learn and what learning behaviors lead them to success. The main objective of this research was to investigate the impact of assessment grades and online activity data in the Learning Management System (LMS) on students’ academic performance. Based on classification, one of the most commonly used data mining techniques for prediction, five algorithms were applied: decision tree, random forest, sequential minimal optimization, multilayer perceptron, and logistic regression. Experimental results revealed that assessment grades are the most important features affecting students&#39; academic performance. Moreover, prediction models that included assessment grades alone or in combination with activity data perform better than models based on activity data alone. The random forest algorithm performs well for predicting student academic performance, followed by the decision tree.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_25-Predict_Students_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Neural Networks Combined with STN for Multi-Oriented Text Detection and Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110424</link>
        <id>10.14569/IJACSA.2020.0110424</id>
        <doi>10.14569/IJACSA.2020.0110424</doi>
        <lastModDate>2020-04-30T16:06:45.1470000+00:00</lastModDate>
        
        <creator>Saif Hassan Katper</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Ahmad Waqas</creator>
        
        <creator>Aeshah Alsughayyir</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <subject>Spatial Transformer Networks (STNs); Deep Neural Networks (DNN); ICDAR dataset; multi-oriented text; STN-OCR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Developing systems for interpreting visuals such as images and videos is a challenging but important task, and such systems must be developed and evaluated on benchmark datasets. This study addresses this challenge using the STN-OCR model, which consists of deep neural networks (DNNs) and Spatial Transformer Networks (STNs). The network architecture comprises two stages: a localization network and a recognition network. The localization network finds and localizes text regions and generates a sampling grid, while the recognition network takes text regions as input and learns to recognize text, including low-resolution, curved, and multi-oriented text. Deep learning-based approaches require a lot of data for effective training; therefore, this study used two benchmark datasets, Street View House Numbers (SVHN) and International Conference on Document Analysis and Recognition (ICDAR) 2015, to evaluate the system. The STN-OCR model achieves better results on these datasets than those reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_24-Deep_Neural_Networks_Combined_with_STN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Investigation on the Impact of Public Expenditures on Inclusive Economic Growth in Morocco: Application of the Autoregressive Distributed Lag Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110423</link>
        <id>10.14569/IJACSA.2020.0110423</id>
        <doi>10.14569/IJACSA.2020.0110423</doi>
        <lastModDate>2020-04-30T16:06:45.1130000+00:00</lastModDate>
        
        <creator>Imad KHANCHAOUI</creator>
        
        <creator>Abdeslam EL MOUDDEN</creator>
        
        <creator>Sara El Aboudi</creator>
        
        <subject>Inclusive economic growth; public expenditures; human capital development expenditures; public investment expenditures; cointegration; ARDL model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Today more than ever, the international institutions (the IMF, the World Bank, the OECD and the UN) as well as the public authorities are interested in questions related to the development issue in general, and more particularly to inclusive growth. The reason is that in most developing countries, such as Morocco, the increase in economic growth does not necessarily and automatically have an effect on poverty and social disparities reduction. In this context, the study aims to analyse the impact of public expenditures, in particular the human capital development expenditure (education and health) and the public investment, on inclusive economic growth in Morocco through the use of the autoregressive distributed lag (ARDL) model on annual macroeconomic data from 1980 to 2018 and the bounds cointegration test of Pesaran. The results of the estimates show that, in the long term, public investment expenditures positively contribute to economic growth. Furthermore, they revealed that strong government action on human capital development expenditures is the most powerful instrument for enhancing inclusive economic growth in Morocco.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_23-Empirical_Investigation_on_the_Impact_of_Public_Expenditures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Health Services in Saudi Arabia-Challenges and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110422</link>
        <id>10.14569/IJACSA.2020.0110422</id>
        <doi>10.14569/IJACSA.2020.0110422</doi>
        <lastModDate>2020-04-30T16:06:45.0830000+00:00</lastModDate>
        
        <creator>Amr Jadi</creator>
        
        <subject>m-Health; IoT; Saudi Hospitals; challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In this work, a mobile health services (MHS) approach is introduced to encourage locals with different educational backgrounds. The work aims to minimize personal interaction hours between patients and doctors in a real-time healthcare environment. The increasing number of pilgrims to Saudi Arabia (SA) demands such an arrangement for the benefit of both the people and the service provider authorities. In particular, dealing with patients visiting at the time of Ramadan will be a challenging task for the authorities and healthcare service providers if a virus spreads in the Kingdom. The recent coronavirus threat has caused widespread panic, and almost all countries in the world are under pressure to tackle such a scenario. Because the Kingdom is a famous pilgrimage destination, managing the flow of visitors is always a challenging task. The proposed MHS therefore uses the latest applications of neural networks (NN), artificial intelligence (AI), big data (BD) and predictive data analytics (PDA) to improve the performance of healthcare operations. At the initial stage of this research, the risk prediction and mitigation process for various events achieved an accuracy of 95%. Applications of AI and BD are being used extensively to update patient records and information at a faster rate, enhancing the overall performance of healthcare services.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_22-Mobile_Health_Services_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Neural Network Conversation Model enables the Commonly Asked Student Query Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110421</link>
        <id>10.14569/IJACSA.2020.0110421</id>
        <doi>10.14569/IJACSA.2020.0110421</doi>
        <lastModDate>2020-04-30T16:06:45.0670000+00:00</lastModDate>
        
        <creator>Nittaya Muangnak</creator>
        
        <creator>Natakorn Thasnas</creator>
        
        <creator>Thapani Hengsanunkul</creator>
        
        <creator>Jakkarin Yotapakdee</creator>
        
        <subject>Automated conversational agent; chatbot; natural language processing; FAQs’ bot; artificial neural network; artificial intelligence; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>One of the challenges in academic counselling is to provide an automated service system for students. Each semester, faculty staff receive numerous queries about academic services. To make the communication interface more convenient, a novel approach based on a neural network model is introduced to build an automated conversational agent. Pre-defined dialogue sentences were collected manually from student queries and used as the training dataset. The questions were varied and were grouped by topic, as categorized from queries at the department's registration help desk. Artificial intelligence and machine learning were combined to build the conversational agent, called KUSE-ChatBOT, which plugs into the modern messenger application LINE. The system also includes a dialogue back-end management system for further updates to the deep learning model. TensorFlow, the machine learning development platform originated by Google, was used to train the learning model via Python development kits. The LINE Messaging API then serves as the user interface, through which users can hold FAQ conversations via the LINE application. KUSE-ChatBOT performs efficiently, providing precise automated consultation to students with an accuracy rate of over 75 percent. The system can help staff lessen the workload of answering the same questions repeatedly and respond to students in a timely manner.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_21-The_Neural_Network_Conversation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Extended Scratch-Build Concept Map to Enhance Students’ Understanding and Promote Quality of Knowledge Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110420</link>
        <id>10.14569/IJACSA.2020.0110420</id>
        <doi>10.14569/IJACSA.2020.0110420</doi>
        <lastModDate>2020-04-30T16:06:45.0370000+00:00</lastModDate>
        
        <creator>Didik Dwi Prasetya</creator>
        
        <creator>Tsukasa Hirashima</creator>
        
        <creator>Yusuke Hayashi</creator>
        
        <subject>Concept map; open-ended; extended; knowledge structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Many studies reported that an open-ended concept map technique is a standard for reflecting learners&#39; knowledge structure. However, little information has been provided that expands open-ended concept mapping to improve students&#39; learning outcomes and meaningful learning. This study aimed to investigate the effects of Extended Scratch-Build (ESB) concept mapping on students&#39; learning outcomes, consisting of understanding, map size, and quality of knowledge structure. ESB is an extended open-ended technique that requests students to connect a prior-existing original concept map with a new additional map on related material topics. ESB offers an expansion of concept maps by adding new propositions and linking them to previous existing maps to enhance meaningful learning. Twenty-five university students have participated in the present study. The collected data included a pre-test, post-test, delayed-test, map size, and quality of map proposition scores. The Wilcoxon signed-rank test was used to confirm the ESB performance. The statistical results indicated that ESB could improve meaningful learning through extended concept mapping approach and had a positive effect on students&#39; learning outcomes. This study also emphasized that there was a correlation between the original and additional maps on students&#39; learning outcomes.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_20-Study_on_Extended_Scratch_Build_Concept_Map.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marathi Document: Similarity Measurement using Semantics-based Dimension Reduction Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110419</link>
        <id>10.14569/IJACSA.2020.0110419</id>
        <doi>10.14569/IJACSA.2020.0110419</doi>
        <lastModDate>2020-04-30T16:06:45.0070000+00:00</lastModDate>
        
        <creator>Prafulla B. Bafna</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Cosine similarity; marathi; synset; term matrix; wordnet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Textual data is increasing exponentially, and different techniques are being researched to extract the required information from text. Some of these techniques require the data to be presented in a tabular or matrix format. A first approach designs a Document Term Matrix for the Marathi (DTMM) corpus and converts unstructured data into a tabular format. This approach, called DTMM in this paper, fails to consider the semantics of the terms. We therefore propose another approach that forms synsets and in turn reduces dimensions to formulate a Document Synset Matrix for the Marathi (DSMM) corpus. This also helps in better capturing the semantics and hence is context-based. We carry out experiments for document similarity measurement on a corpus of more than 1200 documents, comprising both verse and prose, in the Marathi language of India. Marathi text processing has been a largely untouched area. Precision, recall, accuracy, F1-score and error rate are used to demonstrate the superiority of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_19-Marathi_Document_Similarity_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Neighborhood-based Outlier Detection of High Dimensional Data using different Proximity Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110418</link>
        <id>10.14569/IJACSA.2020.0110418</id>
        <doi>10.14569/IJACSA.2020.0110418</doi>
        <lastModDate>2020-04-30T16:06:44.9730000+00:00</lastModDate>
        
        <creator>Mujeeb Ur Rehman</creator>
        
        <creator>Dost Muhammad Khan</creator>
        
        <subject>High dimensional data; density-based anomaly detection; local outlier; outlier detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>In recent times, dimension size has posed more challenges than data size. A serious concern with high dimensional data is the curse of dimensionality, which has caught the attention of data miners. Anomaly detection based on the local neighborhood, such as the local outlier factor, has been accepted as a state-of-the-art approach but fails when operated on a high number of dimensions for the reason mentioned above. In this paper, we determine the effects of different distance functions on an unlabeled dataset while detecting outliers through the density-based approach. Further, we also report findings regarding runtime and outlier score when the dimension size and the number of nearest neighbor points (min_pts) are varied. This analytic research is also highly relevant to the domains of big data and data science.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_18-Local_Neighborhood_based_Outlier_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Allocation Evaluation for Downlink Non-Orthogonal Multiple Access (NOMA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110417</link>
        <id>10.14569/IJACSA.2020.0110417</id>
        <doi>10.14569/IJACSA.2020.0110417</doi>
        <lastModDate>2020-04-30T16:06:44.9600000+00:00</lastModDate>
        
        <creator>Wajd Fahad Alghasmari</creator>
        
        <creator>Laila Nassef</creator>
        
        <subject>Non-Orthogonal Multiple Access; NOMA; Power Allocation; User Pairing; Spectral Efficiency; Sum Rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The fifth generation of wireless cellular systems has the potential to increase capacity, spectral efficiency, and fairness among users. Non-Orthogonal Multiple Access (NOMA) is the next-generation multiplexing technique for wireless networks. NOMA breaks the orthogonality of traditional multiple access to allow multiple users to share the same radio resource simultaneously. The main challenge in designing NOMA is the selection of resource allocation algorithms, since user pairing and power allocation are coupled. This paper compares the performance of three power allocation schemes: fixed power allocation, fractional transmit power allocation, and full search power allocation. The algorithms are analyzed in different simulation scenarios using three performance metrics: spectral efficiency, energy efficiency, and sum rate. Additionally, the impact of user pairing algorithms is studied through two user pairing schemes: random user pairing and channel-state-sorting-based user pairing. Results indicate the superiority of NOMA in increasing capacity compared to traditional orthogonal multiple access. Furthermore, full search power allocation gives the best performance among the power allocation schemes, though it is highly complex compared to fractional transmit power allocation, which gives suboptimal performance.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_17-Power_Allocation_Evaluation_for_Downlink.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Social Networks using Nature-inspired BAT Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110416</link>
        <id>10.14569/IJACSA.2020.0110416</id>
        <doi>10.14569/IJACSA.2020.0110416</doi>
        <lastModDate>2020-04-30T16:06:44.9270000+00:00</lastModDate>
        
        <creator>Seema Rani</creator>
        
        <creator>Monica Mehrotra</creator>
        
        <subject>Community detection; nature inspired optimization; bat algorithm; discrete particle swarm optimization; social network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The widespread extent of internet availability at low cost impels user activities on social media. As a result, a huge number of networks with a lot of varieties are easily accessible. Community detection is one of the significant tasks to understand the behavior and functionality of such real-world networks. Mathematically, community detection problem has been modeled as an optimization problem and various meta-heuristic approaches have been applied to solve the same. Progressively, many new nature-inspired algorithms have also been explored to handle the diverse optimization problems in the last decade. In this paper, nature-inspired Bat Algorithm (BA) is adopted and a new variant of Discrete Bat algorithm (NVDBA) is recommended to identify the communities from social networks. The recommended scheme does not require the number of communities as a prerequisite. The experiments on a number of real-world networks have been performed to assess the performance of the proposed approach which in turn confirms its validity. The results confirm that the recommended algorithm is competitive with other existing methods and offers promising results for identifying communities in social networks.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_16-Clustering_Social_Networks_using_Nature_Inspired_BAT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transformation of SysML Requirement Diagram into OWL Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110415</link>
        <id>10.14569/IJACSA.2020.0110415</id>
        <doi>10.14569/IJACSA.2020.0110415</doi>
        <lastModDate>2020-04-30T16:06:44.8970000+00:00</lastModDate>
        
        <creator>Helna Wardhana</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Anny Kartika Sari</creator>
        
        <subject>SysML Diagram; Requirement Diagram; ontology; OWL; transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Requirement Diagrams are used by the Systems Modeling Language (SysML) to depict and model non-functional requirements, such as response time, size, or system functionality, which cannot be accommodated in the Unified Modeling Language (UML). Nevertheless, SysML still lacks the capability to represent the semantic contexts within a design. The Web Ontology Language (OWL) can be used to capture the semantic context of a system design; hence, the transformation of SysML diagrams into OWL is needed. This transformation is currently done manually, which makes it vulnerable to errors and requires more time and effort from system engineers. This research proposes a model that automatically transforms a SysML Requirement Diagram into an OWL file so that system designs can be easily understood by both humans and machines. It also allows users to extract the knowledge contained in previous diagrams. The transformation process makes use of a transformation rule and an algorithm that convert a SysML Requirement Diagram into an OWL ontology file. XML Metadata Interchange (XMI) serialization is used as the bridge to perform the transformation. The produced ontology can be viewed in Prot&#233;g&#233;: the class and subclass hierarchy, as well as the object properties and data properties, are clearly shown. The experiment also shows that the model conducts the transformation correctly.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_15-Transformation_of_SysML_Requirement_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of Various Modes of Online Learning on Learning Results</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110414</link>
        <id>10.14569/IJACSA.2020.0110414</id>
        <doi>10.14569/IJACSA.2020.0110414</doi>
        <lastModDate>2020-04-30T16:06:44.8630000+00:00</lastModDate>
        
        <creator>Muhammad Rusli</creator>
        
        <subject>Online learning; web learning; blended learning; face to face learning; interactive multimedia learning; learning results</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Online learning, particularly in colleges, needs to be developed and implemented as an alternative method of delivering learning materials in this millennial era. Such developments are strongly supported by advances in Information and Communication Technology (ICT) and multimedia technology. Nevertheless, during the development or engineering of online learning, the principles of interactive, creative and effective learning deserve attention. The challenge is to decide which mode of online learning should be developed and applied so that the learning process is conducted effectively. Several factors must be considered, such as the proportion of online meetings relative to face-to-face meetings and the type of content. This study investigates the effects of various modes of online learning on learning results. Four teaching modes are considered: face-to-face, blended, web, and fully online learning. The experiment delivers the same learning materials, available online, across all four modes. The research subjects are students of ITB STIKOM BALI attending the Multimedia Learning course in the odd semester of 2019/2020: four classes with 108 students in total, each class given a different mode of online learning. The analysis method is univariate ANCOVA with one factor and four treatments. The results reveal that the students&#8217; learning results are equal across the four modes of online learning. Therefore, developing online learning for conceptual types of teaching material, or for student achievement at the level of understanding, is recommended.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_14-The_Effects_of_Various_Modes_of_Online_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Optimal Date and Time to Send Personalized Marketing Messages to Repeat Buyers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110413</link>
        <id>10.14569/IJACSA.2020.0110413</id>
        <doi>10.14569/IJACSA.2020.0110413</doi>
        <lastModDate>2020-04-30T16:06:44.8500000+00:00</lastModDate>
        
        <creator>Alexandros Deligiannis</creator>
        
        <creator>Charalampos Argyriou</creator>
        
        <creator>Dimitrios Kourtesis</creator>
        
        <subject>Personalized marketing automation; customer relationship management; conversion rate optimization; customer engagement; machine learning; XGBoost regression; cloud computing; data privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Most of today&#8217;s digital marketing campaigns sent through email and mobile messaging are bulk campaigns that deliver the same message at the same time to all customers, regardless of their needs and preferences. The outcomes are a poor customer experience, low engagement and low conversion rates. Modern marketing automation tools aim to facilitate personalized communications, such as scheduling individual marketing messages based on each subscriber&#8217;s profile. This research focuses on the problem of automatically deciding on the optimal date and time for sending consent-based personalized marketing messages. We specifically focus on the case of repeat consumers of consumer packaged goods (CPG), which require regular replacement or replenishment. The objective is to anticipate the needs of consumers in a timely manner in order to increase their level of engagement as well as the rate at which they repurchase products. The proposed solution is based on a regression model trained with transactional data and instant messaging metadata. We describe the way such a model can be created and deployed to a scalable high-performance environment and provide pilot evaluation results that suggest a significant improvement in marketing effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_13-Predicting_the_Optimal_Date_and_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Pneumonia Classification Approach based on Self-Paced Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110412</link>
        <id>10.14569/IJACSA.2020.0110412</id>
        <doi>10.14569/IJACSA.2020.0110412</doi>
        <lastModDate>2020-04-30T16:06:44.8170000+00:00</lastModDate>
        
        <creator>Sarpong Kwadwo Asare</creator>
        
        <creator>Fei You</creator>
        
        <creator>Obed Tettey Nartey</creator>
        
        <subject>Anterior-posterior chest images; self-paced learning; self-training; pneumonia classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>This study proposes a self-paced learning scheme that integrates self-training and deep learning to select and learn labeled and unlabeled data samples for classifying anterior-posterior chest images as either pneumonia-infected or normal. With this new approach, a model is first trained with labeled data. The model is then evaluated on unlabeled data to generate pseudo-labels for the unlabeled samples. Using a novel selection scheme, pseudo-labeled samples are selected to update the model in the next iteration of the semi-supervised training process. The pseudo-labeled images added at each training iteration are those with the most confident probabilities from every unlabeled class. Such a selection scheme prevents mistake reinforcement, a prevalent occurrence in self-training. Because deep models tend to latch onto well-represented class samples while ignoring less transferable and less represented classes, especially in the case of unbalanced data, the proposed method utilizes a novel algorithm for generating and selecting reliable top-K pseudo-labeled samples to be used in updating the model during the next training phase. Such an approach not only forces the model to learn the hard samples in the training data, it also helps enlarge the training set by generating enough samples to satisfy the data hunger of deep models. Extensive experimental evaluation of the proposed method yields higher accuracy than methods reported in the literature on the same dataset, an indication of its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_12-A_Robust_Pneumonia_Classification_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Dual Artificial Neural Networks for Emergency Load Shedding Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110411</link>
        <id>10.14569/IJACSA.2020.0110411</id>
        <doi>10.14569/IJACSA.2020.0110411</doi>
        <lastModDate>2020-04-30T16:06:44.7870000+00:00</lastModDate>
        
        <creator>Nghia. T. Le</creator>
        
        <creator>Anh. Huy. Quyen</creator>
        
        <creator>Au. N. Nguyen</creator>
        
        <creator>Binh. T. T. Phan</creator>
        
        <creator>An. T. Nguyen</creator>
        
        <creator>Tan. T. Phung</creator>
        
        <subject>Load shedding; Artificial Neural Network; AHP algorithm; emergency control; frequency stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>This paper proposes a new model for emergency load shedding control based on the combination of dual Artificial Neural Networks to implement load shedding, restore the power system frequency and prevent power system blackout. The first Artificial Neural Network (ANN1) quickly recognizes whether or not load shedding is required when a short circuit occurs in the electrical system. The second Artificial Neural Network (ANN2) identifies and controls the selection of load shedding strategies. These strategies consist of pre-designed rules built on the AHP algorithm to calculate the importance factor of the load units and select the priority of the load shedding. If ANN1 calls for load shedding, the load shedding control strategy is implemented immediately. Therefore, the decision-making time is much shorter than that of the under-frequency load shedding method. The proposed method is tested on the IEEE 39-bus system, which confirms its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_11-Application_of_Dual_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arduino based Smart Home Automation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110410</link>
        <id>10.14569/IJACSA.2020.0110410</id>
        <doi>10.14569/IJACSA.2020.0110410</doi>
        <lastModDate>2020-04-30T16:06:44.7870000+00:00</lastModDate>
        
        <creator>Daniel Chioran</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>Energy consumption; home automation; serial communication; microcontrollers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Around the world, massive quantities of energy are consumed in residential buildings, leading to a negative impact on the environment. In addition, the number of wirelessly connected devices in use around the world is constantly and rapidly increasing, leading to potential health risks due to overexposure to electromagnetic radiation. There is an opportunity to reduce the energy consumption of residential buildings by introducing smart home automation systems. Multiple such solutions are available on the market, most of them wireless, so the challenge is to design systems that limit the quantity of newly generated electromagnetic radiation. To this end, we look at several wired, serial communication methods and successfully test such a method using a simple protocol to exchange data between an Arduino microcontroller board and a Visual C# app running on a Windows computer. We aim to show that, if desired, smart home automation systems can still be built using simple, viable alternatives to wireless communication.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_10-Arduino_based_Smart_Home_Automation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Textile EEG Cap using Dry-Comb Electrodes for Emotion Detection of Elderly People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110409</link>
        <id>10.14569/IJACSA.2020.0110409</id>
        <doi>10.14569/IJACSA.2020.0110409</doi>
        <lastModDate>2020-04-30T16:06:44.7570000+00:00</lastModDate>
        
        <creator>Fangmeng ZENG</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Dongeun Choi</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>EEG monitoring; elderly people; emotion detection; textile EEG cap; wearing-comfort</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Emotions are fundamental to human life and can impact elderly healthcare encounters between caregiver and patient. Detecting emotions by monitoring physical signals with wearable smart devices offers new promise for care support. While there are multiple studies on wearable devices, few pertain to soft electroencephalogram (EEG) caps designed for long-time wear by elderly people. In this study, a 4-channel textile cap was designed with dry electrodes held by an ultra-soft gel holder, while fashion and ergonomic design features were introduced to enhance wearability and comfort. The dry-electrode textile cap performed well for monitoring EEG signals, closely matching wet-electrode equipment. All participants reported positive feedback, stating that the textile cap was softer, lighter, and more comfortable than other devices. Using the principal factor method (PFA), a cumulative contribution rate of 72.199% was achieved for the two factors influencing the usability of the wearable device: a materials-properties factor and a design-pattern factor. An average emotion classification accuracy of 81.32% was obtained from 5 healthy elderly subjects. It is thus concluded that the proposed method provides stable monitoring and a comfortable user experience, and can be used to detect emotions in elderly people with good results in the future.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_9-Textile_EEG_Cap_using_Dry_Comb_Electrodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality Application for Hand Motor Skills Rehabilitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110408</link>
        <id>10.14569/IJACSA.2020.0110408</id>
        <doi>10.14569/IJACSA.2020.0110408</doi>
        <lastModDate>2020-04-30T16:06:44.7400000+00:00</lastModDate>
        
        <creator>Alexandr Kolsanov</creator>
        
        <creator>Sergey Chaplygin</creator>
        
        <creator>Sergey Rovnov</creator>
        
        <creator>Anton Ivaschenko</creator>
        
        <subject>3D modeling; augmented reality; virtual reality; simulation; hand motor skills rehabilitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The paper presents an augmented reality based solution for hand movement rehabilitation using visual and tactile feedback. The proposed approach to the rehabilitation process combines effects on the visual, auditory and tactile channels of perception with simulation scenarios. To develop a hand movement model capable of supporting rehabilitation tasks, the concept of immersive virtual reality was studied. Original 3D models and scenes have been developed to simulate the basic hand positions and motor functions. To provide efficient hand motion fixation, it is recommended to implement a mechanical position tracking system based on a sensor glove enhanced with resistive transducers. Augmented reality is used to inspire and motivate the user to perform the required exercises by generating the corresponding imagery. Personalized comparative analysis of the dynamics of the patient&#39;s initial condition and rehabilitation results helps to study the motor function and restore everyday skills. The proposed solution achieves an accuracy and adequacy of finger movements sufficient to satisfy the requirements of medical rehabilitation applications.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_8-Augmented_Reality_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Variation of Aerosol Pollution in Peru during the Quarantine Due to COVID-19</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110407</link>
        <id>10.14569/IJACSA.2020.0110407</id>
        <doi>10.14569/IJACSA.2020.0110407</doi>
        <lastModDate>2020-04-30T16:06:44.7100000+00:00</lastModDate>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <subject>COVID-19; coronavirus; pollution; Sentinel-5P; Peru</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Due to COVID-19, a type of pneumonia produced by a virus of the coronavirus family, the Peruvian government has decreed mandatory social isolation, extended until 26 April 2020. Because of this, people must stay at home and only go out to make purchases covering basic needs. This situation, among other things, probably causes a reduction in pollution that is important for our ecosystem. In Peru, there is no measurable way to quantify the impact of social isolation on air pollution. The present work aims to show more objectively how much aerosol pollution in Peru has decreased. For this purpose, we use remote sensing data from the Copernicus Data Hub of the European Space Agency, specifically from the Sentinel-5 Precursor satellite. The results show a substantial reduction of aerosol pollution in different regions of Peru, especially in Lima and the Amazon regions.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_7-Variation_of_Aerosol_Pollution_in_Peru.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Framework of Moving Object Tracking based on Object Detection-Tracking with Removal of Moving Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110406</link>
        <id>10.14569/IJACSA.2020.0110406</id>
        <doi>10.14569/IJACSA.2020.0110406</doi>
        <lastModDate>2020-04-30T16:06:44.6630000+00:00</lastModDate>
        
        <creator>Ly Quoc Ngoc</creator>
        
        <creator>Nguyen Thanh Tin</creator>
        
        <creator>Le Bao Tuan</creator>
        
        <subject>Moving object tracking; object detection; camera localization; 3D environment reconstruction; tracking by detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Object Tracking (OT) with a moving camera, so-called Moving Object Tracking (MOT), is extremely vital in Computer Vision. While conventional tracking methods based on a fixed camera can only track objects within its range, a moving camera can tackle this issue by following the objects. Moreover, a single tracker is widely used to track objects, but it is not effective with a moving camera because of challenges such as sudden movements, blurring, and pose variation. The paper proposes a method based on the tracking-by-detection approach: it integrates a single tracker with an object detection method. The proposed tracking system can track objects efficiently and effectively because the object detection method can be used to find the tracked object again if the single tracker loses track. Three main contributions are presented in the paper, as follows. First, the proposed Unified Visual based-MOT system can perform Localization, 3D Environment Reconstruction and Tracking based on a Stereo Camera and an Inertial Measurement Unit (IMU). Second, it takes into account camera motion and the moving objects to improve the precision rate in localization and tracking. Third, the proposed tracking system integrates a single tracker, a Deep Particle Filter, with Yolov3 for object detection. The overall system is tested on the KITTI 2012 dataset and achieves a good accuracy rate in real time.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_6-A_New_Framework_of_Moving_Object_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>P System Framework for Ant Colony Algorithm in IoT Data Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110405</link>
        <id>10.14569/IJACSA.2020.0110405</id>
        <doi>10.14569/IJACSA.2020.0110405</doi>
        <lastModDate>2020-04-30T16:06:44.6470000+00:00</lastModDate>
        
        <creator>Aurimas Gedminas</creator>
        
        <creator>Liudas Duoba</creator>
        
        <creator>Dalius Navakauskas</creator>
        
        <subject>Membrane computing; P system; Ant Colony System; Internet of Things; energy consumption; load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The Internet of Things (IoT) is a critical part of current information technology. When designing IoT data routes, the limited resources of devices, such as computation speed, available memory, remaining battery power or channel bandwidth, to name a few, must be considered. Since the Ant Colony System (ACS) has been successfully applied to different routing problems, the implementation of ACS for routing in the IoT has also been considered. A P system, inspired by natural membrane processes, not only simplifies the annotation of complex system behavior but also delivers a good balance between performance, flexibility and scalability. For this reason, the P system framework for ACS in IoT data routing has been investigated. The research conducted shows that MMAPS, a combination of the P system and the Max-Min Ant System, performs better than the ACS.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_5-P_System_Framework_for_Ant_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content Delivery Networks in Cross-border e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110404</link>
        <id>10.14569/IJACSA.2020.0110404</id>
        <doi>10.14569/IJACSA.2020.0110404</doi>
        <lastModDate>2020-04-30T16:06:44.6170000+00:00</lastModDate>
        
        <creator>Artur Strzelecki</creator>
        
        <subject>e-Commerce; cross-border e-commerce; content delivery networks; DNS lookup time; time to first byte</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Cross-border e-commerce has been growing in recent years. Purchasing goods from abroad is getting easier due to global deliveries, well-known payment methods and decreasing language barriers. A Content Delivery Network (CDN) is a technical solution in this setting: deploying a CDN for cross-border e-commerce can improve performance and the consumer experience. In this paper, four sets of cross-border online stores, comprising 57 e-commerce stores in total, are examined. Each set is a group of online stores operating in many European markets with different Domain Name System (DNS) settings. Two sets use the Cloudflare CDN and its DNS server; the other two sets use country DNS server settings without a CDN. Results show that DNS lookup time decreases significantly for cross-border users when online stores use a CDN, improving overall website load time. Domain name resolution time drops on average from 40 ms to 5 ms. No significant improvement was observed for users in the same country.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_4-Content_Delivery_Networks_in_Cross_Border.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Framework for Finding Approximations to Minimum Weight Triangulation and Traveling Salesman Problem of Planar Point Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110403</link>
        <id>10.14569/IJACSA.2020.0110403</id>
        <doi>10.14569/IJACSA.2020.0110403</doi>
        <lastModDate>2020-04-30T16:06:44.5830000+00:00</lastModDate>
        
        <creator>Marko Dodig</creator>
        
        <creator>Milton Smith</creator>
        
        <subject>Computational geometry; minimum weight triangulation; combinatorial optimization; traveling salesman problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>We introduce a novel Conceptual Framework for finding approximations to both the Minimum Weight Triangulation (MWT) and the optimal Traveling Salesman Problem (TSP) tour of planar point sets. MWT is a classical problem of Computational Geometry with various applications, whereas TSP is perhaps the most researched problem in Combinatorial Optimization. We provide motivation for our research and introduce the fields of triangulation and polygonization of planar point sets as the theoretical bases of our approach. In particular, we present the Isoperimetric Inequality principle, measured via a Compactness Index, as the key link between the two stated problems. Our experiments show that the proposed framework yields tight approximations for both problems.</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_3-Conceptual_Framework_for_Finding_Approximations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issues and Challenges: Cloud Computing e-Government in Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110402</link>
        <id>10.14569/IJACSA.2020.0110402</id>
        <doi>10.14569/IJACSA.2020.0110402</doi>
        <lastModDate>2020-04-30T16:06:44.5670000+00:00</lastModDate>
        
        <creator>Naif Al Mudawi</creator>
        
        <creator>Natalia Beloff</creator>
        
        <creator>Martin White</creator>
        
        <subject>Challenges; issue; privacy; security; social; e-governance services; citizen</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>Cloud computing has become essential for IT resources that can be delivered as a service over the Internet. Many e-government services used worldwide provide communities with relatively complex applications and services. Governments, including Saudi Arabia&#8217;s, still face many challenges in their implementation of e-government services, such as poor IT infrastructure, lack of finance, and insufficient data security. This research paper investigates the challenges of e-government cloud service models in developing countries. It finds that governments in developing countries are influenced by how top management attends to the adoption of cloud computing. Further, organisational readiness in areas such as IT infrastructure, internet availability and social trust in new technologies such as cloud computing still limits the adoption of e-government cloud services. Based on the findings of the critical review, this paper identifies the issues and challenges affecting the adoption of cloud computing in e-government, such as IT infrastructure, internet availability, and trust in newly adopted technologies, thereby highlighting the benefits of cloud computing-based e-government services. Furthermore, we propose recommendations for developing IT systems focused on trust when adopting cloud computing in e-government services (CCEGov).</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_2-Issues_and_Challenges_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Neural Network Study of the ABIDE Repository on Autism Spectrum Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110401</link>
        <id>10.14569/IJACSA.2020.0110401</id>
        <doi>10.14569/IJACSA.2020.0110401</doi>
        <lastModDate>2020-04-30T16:06:44.5070000+00:00</lastModDate>
        
        <creator>Xin Yang</creator>
        
        <creator>Paul T. Schrader</creator>
        
        <creator>Ning Zhang</creator>
        
        <subject>DNN; ASD; rs-fMRI; ABIDE; CPAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(4), 2020</description>
        <description>The objective of this study is to implement deep neural network (DNN) models to classify autism spectrum disorder (ASD) patients and typically developing (TD) participants. The experimental design utilizes functional connectivity features extracted from resting-state functional magnetic resonance imaging (rs-fMRI) originating in the multisite repository Autism Brain Imaging Data Exchange (ABIDE) over a significant set of training samples. Our methodology and results have two main parts. First, we build DNN models using the TensorFlow framework in Python to classify ASD from TD. Here we acquired an accuracy of 75.27%. This is significantly higher than any known accuracy (71.98%) using the same data. We also obtained a recall of 74% and a precision of 78.37%. In summary, and based on our literature review, this study demonstrated that our DNN (128-64) model achieves the highest accuracy, recall, and precision on the ABIDE dataset to date. Second, using the same ABIDE data, we implemented an identical experimental design with four distinct hidden-layer-configuration DNN models, each preprocessed using four different industry-accepted strategies. These results aided in identifying the preprocessing technique with the highest accuracy, recall, and precision: the Configurable Pipeline for the Analysis of Connectomes (CPAC).</description>
        <description>http://thesai.org/Downloads/Volume11No4/Paper_1-A_Deep_Neural_Network_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Perceived Usability of Educational Chemistry Game Gathered via CSUQ Usability Testing in Indonesian High School Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110389</link>
        <id>10.14569/IJACSA.2020.0110389</id>
        <doi>10.14569/IJACSA.2020.0110389</doi>
        <lastModDate>2020-03-30T10:40:38.5670000+00:00</lastModDate>
        
        <creator>Herman Tolle</creator>
        
        <creator>Muhammad Hafis</creator>
        
        <creator>Ahmad Afif Supianto</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Usability testing; CSUQ; educational game; male students; female students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Educational games are now commonplace among students and teachers alike. Recent research shows that studies of the general effectiveness of educational games in the learning environment are nothing new. However, usability studies of educational games are rather rare compared to general non-game-related usability studies. This research synthesizes the results obtained from the Computer System Usability Questionnaire (CSUQ), separated across students&#39; pre-existing groupings, such as gender and prior knowledge, as well as experimental treatment setups, such as materials given before the game session. The metrics are tested in an Indonesian high school using an educational chemistry game on the topic of reaction rate, with a total of 53 participants. The general results show many differences in perceived usability between male and female students, between groups that did or did not receive learning materials before the game session, and between students with and without prior knowledge. Overall, the main findings of this research show that the perceived usability of the educational game is affected by gender, the presence of learning materials, and the presence of previous knowledge.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_89-Perceived_Usability_of_Educational_Chemistry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software-Defined Networking (SDN) based VANET Architecture: Mitigation of Traffic Congestion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110388</link>
        <id>10.14569/IJACSA.2020.0110388</id>
        <doi>10.14569/IJACSA.2020.0110388</doi>
        <lastModDate>2020-03-30T10:40:38.5500000+00:00</lastModDate>
        
        <creator>Tesfanesh Adbeb</creator>
        
        <creator>Wu Di</creator>
        
        <creator>Muhammad Ibrar</creator>
        
        <subject>Software-Defined Networking; VANET; congestion; feasible path; NS3; SUMO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In VANETs (Vehicular Ad-hoc Networks), the number of vehicles increases continuously, leading to significant traffic problems such as congestion, the search for a feasible path, and associated events like accidents. The Intelligent Transportation System (ITS) provides excellent services, such as safety applications and emergency warnings. However, ITS has limitations regarding traffic management tasks, scalability, and flexibility because of the enormous number of vehicles. Therefore, extending the traditional VANET architecture is indeed a must. Thus, in the recent period, the design of SD-VANETs (Software-Defined Networking based VANETs) has gained significant interest and made VANET more intelligent. The SD-VANET architecture can handle the aforesaid VANET challenges. The logically centralized SDN architecture is programmable and also has global information about the VANET architecture; therefore, it can effortlessly handle scalability, traffic management, and traffic congestion issues. The traffic congestion problem leads to longer trip times, decreases vehicle speeds, and prolongs the average end-to-end delay, even though some routes in the network still have spare capacity that could mitigate congestion. Therefore, we propose heuristic algorithms called Congestion-Free Path (CFP) and Optimized CFP (OCFP) in the SD-VANET architecture. The proposed algorithms address the traffic congestion issue and also provide a feasible path (less end-to-end delay) for a vehicle in VANET. We used the NS-3 simulator to evaluate the performance of the proposed algorithms and the SUMO module to generate a realistic VANET traffic scenario. The results show that the proposed algorithms decrease road traffic congestion drastically compared to existing approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_88-Software_Defined_Networking_SDN_based_VANET_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection for Learning-to-Rank using Simulated Annealing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110387</link>
        <id>10.14569/IJACSA.2020.0110387</id>
        <doi>10.14569/IJACSA.2020.0110387</doi>
        <lastModDate>2020-03-30T10:40:38.5200000+00:00</lastModDate>
        
        <creator>Mustafa Wasif Allvi</creator>
        
        <creator>Mahamudul Hasan</creator>
        
        <creator>Lazim Rayan</creator>
        
        <creator>Mohammad Shahabuddin</creator>
        
        <creator>Md. Mosaddek Khan</creator>
        
        <creator>Muhammad Ibrahim</creator>
        
        <subject>Information retrieval; learning-to-rank; feature selection; meta-heuristic optimization algorithm; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Machine learning is being applied to almost all corners of our society today. The inherent power of large amounts of empirical data coupled with smart statistical techniques makes it a perfect choice for almost all prediction tasks of human life. Information retrieval is a discipline that deals with fetching useful information from a large number of documents. Given that today millions, even billions, of digital documents are available, it is no surprise that machine learning can be tailored to this task. The task of learning-to-rank has thus emerged as a well-studied domain where the system retrieves the relevant documents from a document corpus with respect to a given query. To be successful in this retrieval task, machine learning models need a highly useful set of features. To this end, meta-heuristic optimization algorithms may be utilized. The aim of this work is to investigate the applicability of a notable meta-heuristic algorithm called simulated annealing to select an effective subset of features from the feature pool. To be precise, we apply the simulated annealing algorithm to the well-known learning-to-rank datasets to methodically select the best subset of features. Our empirical results show that the proposed framework achieves gains in accuracy while using a smaller subset of features, thereby reducing training time and increasing the effectiveness of learning-to-rank algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_87-Feature_Selection_for_Learning_to_Rank.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Invariant Feature Extraction for Component-based Facial Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110386</link>
        <id>10.14569/IJACSA.2020.0110386</id>
        <doi>10.14569/IJACSA.2020.0110386</doi>
        <lastModDate>2020-03-30T10:40:38.5030000+00:00</lastModDate>
        
        <creator>Adam Hassan</creator>
        
        <creator>Serestina Viriri</creator>
        
        <subject>Invariant features; facial components; facial recognition; age progression; HOG; LBP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>This paper proposes an alternative invariant feature extraction technique for facial recognition using facial components. Can facial recognition over age progression be improved by analyzing individual facial components? The individual facial components (eyes, mouth, nose) are extracted using face landmark points. The Histogram of Gradient (HOG) and Local Binary Pattern (LBP) features are extracted from the individually detected facial components, followed by random subspace principal component analysis and cosine distance. One of the preprocessing steps implemented is facial image alignment using the angle of inclination. The experimental results show that facial recognition over age progression can be improved by analyzing individual facial components. The entire facial image can change over time, but the appearance of some individual facial components is invariant.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_86-Invariant_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Interactive Tool based on Combining Graph Heuristic with Local Search for Examination Timetable Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110385</link>
        <id>10.14569/IJACSA.2020.0110385</id>
        <doi>10.14569/IJACSA.2020.0110385</doi>
        <lastModDate>2020-03-30T10:40:38.4870000+00:00</lastModDate>
        
        <creator>Ashis Kumar Mandal</creator>
        
        <subject>Examination timetable; graph heuristic; local search meta-heuristic; ITC2007 exam dataset; interactive tool; NP-hard problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Every university faces many challenges in solving the examination timetabling problem, because the problem is NP-hard and contains numerous institutional constraints. Although several attempts have been made to address the issue, there is a scarcity of interactive and automated tools in this domain that can schedule exams effectively by considering institutional resources, different constraints, and student enrolment in courses. This paper presents the development of a system as a graphical and interactive tool for the examination timetabling problem. To develop the system, a combination of graph coloring heuristics and local search meta-heuristic algorithms is employed. The graph heuristic ordering is incorporated for constructing initial solution(s), whereas the local search meta-heuristic algorithms are used to produce quality exam timetables. Different constraints and objective functions from the ITC2007 exam competition rules are adopted, as it is a complex real-world exam timetabling problem. Finally, the system is tested on the ITC2007 exam benchmark dataset, and test results are presented. The main aim of the system is to deliver an easy-to-handle tool that can generate quality timetables based on institutional demands and smoothly manage several key components: collecting data associated with the enrolment of students in exams, defining hard and soft constraints, and allocating times and resources. Overall, this software can be used as a commercial scheduler to provide institutions with automated, accurate, and quick exam timetables.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_85-Development_of_an_Interactive_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vehicle Routing Optimization for Surplus Food in Nonprofit Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110384</link>
        <id>10.14569/IJACSA.2020.0110384</id>
        <doi>10.14569/IJACSA.2020.0110384</doi>
        <lastModDate>2020-03-30T10:40:38.4570000+00:00</lastModDate>
        
        <creator>Ahmad Alhindi</creator>
        
        <creator>Abrar Alsaidi</creator>
        
        <creator>Waleed Alasmary</creator>
        
        <creator>Maazen Alsabaan</creator>
        
        <subject>Non-profit organization; vehicle routing problem; donor; surplus food; decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Non-profit organizations mitigate the problem of food insecurity by collecting surplus food from donors and delivering it to underprivileged people. In this paper, we focus on a single non-profit organization located in Makkah city (Saudi Arabia), referred to as Ekram. The current surplus food pickup/delivery and operational routing model of the Ekram organization has several apparent deficiencies. First, we model the surplus pickup/delivery and operational routing as a vehicle routing problem (VRP). Then, we optimize the pickup/delivery of different types of food groups through the different available routes. Finally, we solve the formulated VRP by minimizing the total route distances. Our proposed model reduces the total time and effort necessary to send the collecting vehicles to the donors of surplus food.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_84-Vehicle_Routing_Optimisation_for_Surplus_Food.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Students’ Performance of the Private Universities of Bangladesh using Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110383</link>
        <id>10.14569/IJACSA.2020.0110383</id>
        <doi>10.14569/IJACSA.2020.0110383</doi>
        <lastModDate>2020-03-30T10:40:38.4400000+00:00</lastModDate>
        
        <creator>Md. Sabab Zulfiker</creator>
        
        <creator>Nasrin Kabir</creator>
        
        <creator>Al Amin Biswas</creator>
        
        <creator>Partha Chakraborty</creator>
        
        <creator>Md. Mahfujur Rahman</creator>
        
        <subject>Prediction; machine learning; weighted voting approach; private universities of Bangladesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Every year thousands of students get admitted into different universities in Bangladesh. Among them, a large number of students complete their graduation with low-scoring results, which affects their careers. By predicting their grades before the final examination, they can take essential measures to ameliorate their grades. This article proposes different machine learning approaches for predicting the grade of a student in a course, in the context of the private universities of Bangladesh. Using different features that affect the result of a student, seven different classifiers have been trained, namely: Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Logistic Regression, Decision Tree, AdaBoost, Multilayer Perceptron (MLP), and Extra Tree Classifier, for classifying the students’ final grades into four quality classes: Excellent, Good, Poor, and Fail. Afterwards, the outputs of the base classifiers have been aggregated using the weighted voting approach to attain better results. This study achieved an accuracy of 81.73%, with the weighted voting classifier outperforming the base classifiers.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_83-Predicting_Students_Performance_of_the_Private_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Parallel Mixed Method Approach for Characterising Viral YouTube Videos in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110382</link>
        <id>10.14569/IJACSA.2020.0110382</id>
        <doi>10.14569/IJACSA.2020.0110382</doi>
        <lastModDate>2020-03-30T10:40:38.4100000+00:00</lastModDate>
        
        <creator>Abdullah Alshanqiti</creator>
        
        <creator>Ayman Bajnaid</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Shuaa Aljasir</creator>
        
        <creator>Aeshah Alsughayyir</creator>
        
        <creator>Sami Albouq</creator>
        
        <subject>Virality; text mining; sentiment analysis; social media analysis; mixed method approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In social networking platforms such as YouTube, comprehending virality is of great importance: it helps in understanding what characteristics are utilised to create content and what dynamics contribute to YouTube’s strength as a platform for sharing content. The current literature on the virality problem appears sparse concerning theory development, empirical investigation, and an understanding of what makes videos go viral. Our overarching objective is to understand deeply the phenomenon of viral YouTube videos in Saudi Arabia. Hence, we propose an intelligent convergent parallel mixed-methods approach that begins, as an internal step, with a qualitative thematic analysis method and an NLP-based quantitative method applied independently, followed by training an unsupervised clustering model that integrates the internal analysis outputs for deeper insights. We have empirically analysed some trending YouTube videos along with their contents to study this phenomenon. One of our main findings is that emphasizing entertainment, traditions, politics, and/or religious issues when making a video, associated in some way with sarcastic or rude remarks, is likely the preeminent impulse for making a regular video go viral.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_82-Intelligent_Parallel_Mixed_Method_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Candidate Generation for Pedestrian Detection using Background Modeling in Connected Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110381</link>
        <id>10.14569/IJACSA.2020.0110381</id>
        <doi>10.14569/IJACSA.2020.0110381</doi>
        <lastModDate>2020-03-30T10:40:38.3930000+00:00</lastModDate>
        
        <creator>Ghaith Al-Refai</creator>
        
        <creator>Osamah A. Rawashdeh</creator>
        
        <subject>Pedestrian detection; computer vision; image processing; machine learning; vehicle safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Pedestrian detection is widely used in today’s vehicle safety applications to avoid vehicle-pedestrian accidents. The current technology of pedestrian detection utilizes onboard sensors such as cameras, radars, and Lidars to detect pedestrians; the information is then used in a safety feature such as Automatic Emergency Braking (AEB). This paper proposes a pedestrian detection system using vehicle connectivity, image processing, and computer vision algorithms. In the proposed model, vehicles collect image frames using on-vehicle cameras; the frames are then transferred to the infrastructure database using Vehicle-to-Infrastructure (V2I) communication. Image processing and machine learning algorithms are used to process the infrastructure images for pedestrian detection. Background modeling is used to extract the foreground regions in an image to identify regions of interest for candidate generation. This paper explains the algorithms of the infrastructure pedestrian detection system, which include image registration, background modeling, image filtering, candidate generation, feature extraction, and classification. The paper explains the MATLAB implementation of the algorithm on a road-collected dataset and analyzes the detection results with respect to detection accuracy and runtime. The implementation results show an improvement in detection performance and algorithm runtime.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_81-Improved_Candidate_Generation_for_Pedestrian_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Performance of the Automatic Learning Style Detection Model using a Combination of Modified K-Means Algorithm and Naive Bayesian</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110380</link>
        <id>10.14569/IJACSA.2020.0110380</id>
        <doi>10.14569/IJACSA.2020.0110380</doi>
        <lastModDate>2020-03-30T10:40:38.3800000+00:00</lastModDate>
        
        <creator>Nurul Hidayat</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Azhari SN</creator>
        
        <creator>Herman Dwi Surjono</creator>
        
        <subject>Learning management system; log file; K-means; Davies-Bouldin Index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>A Learning Management System (LMS) is well designed and operated by an exceptional teaching team, but the LMS does not consider the needs and characteristics of each student’s learning style. The LMS does not yet provide a feature to detect student diversity, but it does keep a track record of student learning activities known as log files. This study proposes a model for detecting students’ learning styles by utilizing information in log file data, consisting of four processes. The first process is pre-processing to obtain 29 features that are used as the input to the clustering process. The second process is clustering using a modified K-Means algorithm to obtain a label for each test data set before the classification process is carried out. The third process is detecting learning styles from each data set using the Naive Bayesian classification algorithm; finally, the performance of the proposed model is analyzed. The test results using the validity value of the Davies-Bouldin Index (DBI) matrix indicate that the modified K-Means algorithm achieved 2.54 DBI, higher than the original K-Means with 2.39 DBI. Besides having high validity, the modification also makes the algorithm more stable than the original K-Means algorithm because the labels of each dataset do not change. The improved performance of the clustering algorithm also increases the precision, recall, and accuracy of the automatic learning style detection model proposed in this study. The average precision rises from 65.42% to 71.09%, recall increases from 72.09% to 80.23%, and accuracy increases from 67.06% to 71.60%.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_80-Enhanced_Performance_of_the_Automatic_Learning_Style.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Binning Approach based on Classical Clustering for Type 2 Diabetes Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110379</link>
        <id>10.14569/IJACSA.2020.0110379</id>
        <doi>10.14569/IJACSA.2020.0110379</doi>
        <lastModDate>2020-03-30T10:40:38.3470000+00:00</lastModDate>
        
        <creator>Hai Thanh Nguyen</creator>
        
        <creator>Nhi Yen Kim Phan</creator>
        
        <creator>Huong Hoang Luong</creator>
        
        <creator>Nga Hong Cao</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Unsupervised binning; K-means clustering algorithm; metagenomics; metagenome-based disease prediction; Type 2 diabetes diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In recent years, numerous studies have focused on metagenomic data to improve the ability to predict human disease. Although we face the complexity of disease, some proposed frameworks reveal promising performance in using metagenomic data to predict disease. Type 2 diabetes (T2D) diagnosis from metagenomic data is one of the more challenging tasks compared to other diseases: state-of-the-art prediction performance for T2D is usually poor, at around 65% accuracy. In this study, we propose a method combining the K-means clustering algorithm and unsupervised binning approaches to improve the performance of metagenome-based disease prediction. Experiments on metagenomic datasets related to Type 2 diabetes illustrate that the proposed method, with embedded clusters generated by K-means, increases prediction accuracy to approximately 70% or more.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_79-Binning_Approach_based_on_Classical_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimal Order Linear Functional Observers Design for Multicell Series Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110378</link>
        <id>10.14569/IJACSA.2020.0110378</id>
        <doi>10.14569/IJACSA.2020.0110378</doi>
        <lastModDate>2020-03-30T10:40:38.3330000+00:00</lastModDate>
        
        <creator>Mariem Jday</creator>
        
        <creator>Paul-Etienne Vidal</creator>
        
        <creator>Joseph Haggège</creator>
        
        <subject>Multicell converter; voltage capacitor; hybrid model; Z(TN)-Observability; functional observer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The requirement of high voltage and power levels in many applications, such as energy conversion systems, gives rise to the use of multilevel converter structures such as the multicell series converter. To benefit as much as possible from this power converter, an appropriate voltage distribution for each cell must be maintained, hence the need to estimate these voltages. This paper aims to design a minimal single linear functional observer for a multicell series converter to estimate the capacitor voltages. Based on its hybrid model, an observability study proves the ability to estimate these capacitor voltages. Linear functional observers are also proposed, using a direct procedure without solving the Sylvester equation and based on an operation mode classification approach. Simulations of a four-cell multicell converter are given in order to check the efficiency of the converter’s hybrid model and the performance of the proposed minimal single linear functional observers.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_78-Minimal_Order_Linear_Functional_Observers_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Machine Learning Techniques for Smart Agriculture: Comparison of Supervised Classification Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110377</link>
        <id>10.14569/IJACSA.2020.0110377</id>
        <doi>10.14569/IJACSA.2020.0110377</doi>
        <lastModDate>2020-03-30T10:40:38.3170000+00:00</lastModDate>
        
        <creator>Rhafal Mouhssine</creator>
        
        <creator>Abdoun Otman</creator>
        
        <creator>El khatir Haimoudi</creator>
        
        <subject>Support vector machine; K-nearest neighbor; deep neural networks; convolutional neural networks; smart agriculture; Cifar10</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Agriculture is one of the most important necessities of life: it currently feeds 7.7 billion people and is expected to supply more than 9.6 billion individuals by 2050. Classical farming has therefore become insufficient, giving birth to the notion of smart farming and to a race toward using the latest technologies in the field, integrating the Internet of Things (IoT), automation, Artificial Intelligence (AI), and more. As researchers from a country that depends heavily on agriculture, we decided to contribute to this evolution, choosing Machine Learning (ML) as our entry point to satisfy the need for automated classification of the different products produced by a farm. In this work, we address the problem of automatic classification of agricultural products without any human intervention, concentrating on the classification of red fruits due to our proximity to a region whose main product is red fruits. In other words, we conduct a comparative study among the well-known approaches used in image classification and apply the best-found method to correctly classify pictures of red fruits. This empirically leads to strong results, as shown in the numerical results section.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_77-Performance_Analysis_of_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CA-PCS: A Cellular Automata based Partition Ciphering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110376</link>
        <id>10.14569/IJACSA.2020.0110376</id>
        <doi>10.14569/IJACSA.2020.0110376</doi>
        <lastModDate>2020-03-30T10:40:38.2700000+00:00</lastModDate>
        
        <creator>Fatima Ezzahra Ziani</creator>
        
        <creator>Anas Sadak</creator>
        
        <creator>Charifa Hanin</creator>
        
        <creator>Bouchra Echandouri</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Partition ciphering system; partition problem; frequency analysis; cellular automata; avalanche effect; confusion; diffusion; statistical properties; cryptographic properties</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this paper, the authors present a modified version of the previously proposed Partition Ciphering System (PCS). The original PCS uses the partition problem to encrypt a message. The goals of the newly developed system are to avoid statistical and frequency attacks by providing a balance between 0s and 1s, to ensure a good level of entropy, and to achieve confidentiality through encryption. One of the novelties of the new design compared to its predecessor is the use of cellular automata (CAs) during encryption. The use of CAs is justified by their good cryptographic properties, which provide a level of security against attacks and better confusion and diffusion properties. The new design is first presented with details of the encryption and decryption mechanisms. Then, the results of the DIEHARDER battery of tests, the results of the avalanche test, a security analysis and the performance of the system are outlined. Finally, a comparison of CA-PCS with PCS as well as the AES encryption system is provided. The paper shows that the modified version of PCS displays better performance as well as a good level of security against attacks.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_76-CA_PCS_A_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Code Readability Management of High-level Programming Languages: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110375</link>
        <id>10.14569/IJACSA.2020.0110375</id>
        <doi>10.14569/IJACSA.2020.0110375</doi>
        <lastModDate>2020-03-30T10:40:38.2530000+00:00</lastModDate>
        
        <creator>Muhammad Usman Tariq</creator>
        
        <creator>Muhammad Bilal Bashir</creator>
        
        <creator>Muhammad Babar</creator>
        
        <creator>Adnan Sohail</creator>
        
        <subject>Source code; high-level programming languages; Java; C++; C#; code readability; code readability index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Quality can never be an accident; therefore, software engineers pay immense attention to producing quality software products. Source code readability is one of the important factors that play a vital role in producing quality software. Code readability is an internal quality attribute that directly affects future maintenance of the software and re-usability of the same code in similar projects. The literature shows that readability does not rely only on a programmer’s ability to write tidy code; it also depends on the programming language’s syntax. Syntax is the most visible part of any programming language and directly influences the readability of its code. If readability is a major factor for a given project, programmers should know which language to choose to achieve the required level of quality. To this end, we compare the readability of three of the most popular high-level programming languages: Java, C#, and C++. We propose a comprehensive framework for readability comparison among these languages. The comparison has been performed on the basis of readability parameters referenced in the literature. We have also implemented an analysis tool and performed extensive experiments that produced interesting results. Furthermore, to judge the effectiveness of these results, we performed statistical analysis using the SPSS (Statistical Package for the Social Sciences) tool, choosing Spearman’s correlation and the Mann-Whitney test for this purpose. The results show that among the three languages, Java has the most readable code. Programmers should use Java in projects that have code readability as a significant quality requirement.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_75-Code_Readability_of_High_Level_Programming_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction Intervals based on Doubly Type-II Censored Data from Gompertz Distribution in the Presence of Outliers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110374</link>
        <id>10.14569/IJACSA.2020.0110374</id>
        <doi>10.14569/IJACSA.2020.0110374</doi>
        <lastModDate>2020-03-30T10:40:38.2230000+00:00</lastModDate>
        
        <creator>S. F. Niazi Alil</creator>
        
        <creator>Ayed R. A. Alanzi</creator>
        
        <subject>Bayesian prediction; Gompertz distribution; predictive distribution; doubly Type-II censored data; Markov Chain Monte Carlo; single outliers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The study aims at obtaining Bayesian prediction intervals for some order statistics of future observations from the Gompertz distribution (Gomp(a, &#223;)). Doubly Type-II censored data are used in the presence of a single outlier arising from a member of the same family of distributions. Single outliers of type &#223;-&#223;0 and &#223;+&#223;0 are considered, and a bivariate independent prior density for a and &#223; is used. Since the double integral cannot be solved to obtain a closed form for a and &#223;, MCMC is used to calculate the Bayesian prediction intervals. Numerical examples and statistical data are used to present and describe the procedure. We conclude that the Bayesian prediction intervals are shorter for y1 than for y5 as the value of &#223;0 increases.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_74-Prediction_Intervals_based_on_Doubly_Type.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Web Content Quality Factors for Massive Open Online Course using the Rasch Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110373</link>
        <id>10.14569/IJACSA.2020.0110373</id>
        <doi>10.14569/IJACSA.2020.0110373</doi>
        <lastModDate>2020-03-30T10:40:38.2070000+00:00</lastModDate>
        
        <creator>Wan Nurhayati Wan Ab Rahman</creator>
        
        <creator>Hazura Zulzalil</creator>
        
        <creator>Iskandar Ishak</creator>
        
        <creator>Ahmad Wiraputra Selamat</creator>
        
        <subject>Web content; quality model; hierarchical model; Rasch Model; rating scale; survey reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The lack of understanding among content providers of MOOC quality motivates the development of several MOOC quality models. However, none has focused on web content from the perspective of content providers or experts, despite the fact that their views are important, particularly in the development phase. MOOC learners and instructors understand the functional external quality, but content providers have a better understanding of the internal qualities required during the development phase. An initial quality model for MOOC web content, based on the 7C&#39;s of Learning Design and the PDCA model for continuity, has been proposed, consisting of nine categories and 54 factors. This research focuses on validation of the proposed model by content providers and experts to provide systematic evidence of construct validity. This involved two main processes: a content validity test and a survey on acceptability. The content validity test was conducted to confirm the agreeability of the proposed categories and factors among respondents. The dichotomous Rasch model was utilized to explain the conditional probability of a binary outcome, given the person&#39;s agreeability level and the item&#39;s endorsability level. Subsequently, the survey on acceptability was conducted to obtain confirmation and verification from the expert group pertaining to MOOC web content quality factors. The Rasch Rating Scale model was used since it specifies the set of items that share the same rating scale structure. The use of the Rasch model in instrument development generally eases variable measurement by converting nonlinear raw data to a linear scale, while assisting researchers in tackling fitness validation and other instrumentation issues such as person reliability and unidimensionality. This paper demonstrates the strengths of applying the Rasch model in construct validation and instrument building, which provides a strong foundation for adaptation of the model as a methodological tool.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_73-Analysis_of_Web_Content_Quality_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Histogram Equalization based Enhancement and MR Brain Image Skull Stripping using Mathematical Morphology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110372</link>
        <id>10.14569/IJACSA.2020.0110372</id>
        <doi>10.14569/IJACSA.2020.0110372</doi>
        <lastModDate>2020-03-30T10:40:38.1900000+00:00</lastModDate>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Su-Hyun Lee</creator>
        
        <creator>Donghyeok An</creator>
        
        <subject>Contrast enhancement; skull stripping; magnetic resonance imaging; mathematical morphology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In brain image processing applications, skull stripping is an essential step. In numerous medical imaging applications the skull stripping stage acts as a pre-processing step, since it increases diagnostic accuracy manifold. The skull stripping stage removes non-brain tissues, such as the dura, skull, and scalp, from the MR brain image. Nowadays MRI is an emerging method for brain imaging. However, the presence of the skull region in the MR brain image and low contrast are the two main drawbacks of magnetic resonance imaging. Therefore, we propose a method for contrast enhancement of brain MRI using histogram equalization techniques, while a morphological image processing technique is used for skull stripping from the MR brain image. We implemented our proposed methodology on the MATLAB R2015a platform. Peak signal-to-noise ratio, signal-to-noise ratio, mean absolute error, and root mean square error have been used to evaluate the results of the presented method. The experimental results illustrate that our proposed method effectively enhances the image and removes the skull from the brain MRI.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_72-Histogram_Equalization_Based_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Educational Data Mining based ICT Competency among e-Learning Tutors using Statistical Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110371</link>
        <id>10.14569/IJACSA.2020.0110371</id>
        <doi>10.14569/IJACSA.2020.0110371</doi>
        <lastModDate>2020-03-30T10:40:38.1770000+00:00</lastModDate>
        
        <creator>Lalbihari Barik</creator>
        
        <creator>Ahmad AbdulQadir AlRababah</creator>
        
        <creator>Yasser Difulah Al-Otaibi</creator>
        
        <subject>Data mining; e-learning tutors; Naive Bayes Classifiers algorithms; ICT; QTS numeracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The implementation of computer-supported collaborative learning has come to play a pivotal role in e-learning platforms. Educational Data Mining (EDM) is a promising area for the exclusive skill development of e-learning tutors, the major concern being investigations over large datasets. Tutors possessing efficient and sufficient soft skills can teach students in less time and with greater productivity. EDM is an active research area that handles the development of methods to explore new ideas in the educational field. Computer-supported collaborative learning in e-learning and real-time competencies among teachers are evaluated using statistical classifiers. This paper aims to identify a feasible perspective on EDM-based ICT competency among e-learning tutors using statistical classifiers. A set of tutors from diverse e-learning centers of various universities was selected for evaluation. Teachers from the mathematics departments of the universities were selected to attend a professional Qualified Teacher Status numeracy skills test and a tutors&#8217; online test. The results of the online tests were collected and correlated using Naive Bayes classifier algorithms. Naive Bayes classifiers are used in this paper to obtain classification performance results among teachers. Naive Bayes based classification is beneficial for skill identification and improvement among teachers. Significantly, the data mining classifiers performed well on the large dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_71-Enhancing_Educational_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporal Analysis of GDOP to Quantify the Benefits of GPS and GLONASS Combination on Satellite Geometry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110370</link>
        <id>10.14569/IJACSA.2020.0110370</id>
        <doi>10.14569/IJACSA.2020.0110370</doi>
        <lastModDate>2020-03-30T10:40:38.1600000+00:00</lastModDate>
        
        <creator>Claudio Meneghini</creator>
        
        <creator>Claudio Parente</creator>
        
        <subject>GDOP (Geometric Dilution of Precision); GPS (Global Positioning System); GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema); Multi-GNSS (Global Navigation Satellite System) Constellation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Global Navigation Satellite Systems (GNSS) have developed rapidly over the last few years. At present, there are GNSS receivers that combine satellites from two or more different constellations. The geometry of the satellites in relation to the receiver location, i.e. how nearly or distantly they are disposed in the sky, impacts the quality of the survey, which is essential to achieve the highest level of position accuracy. A dimensionless number known as Geometric Dilution of Precision (GDOP) is used to represent the efficiency of the satellite distribution and can be easily calculated for each location and time using satellite ephemeris. This paper quantifies the influence of a multi-GNSS constellation, in particular the combination of GPS (Global Positioning System) and GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema), on satellite geometry over a specific period. A new index named Temporal Variability of Geometric Dilution of Precision (TVGDOP) is proposed and analyzed in different scenarios (different cut-off angles as well as real obstacles such as terrain morphology and buildings). The new index is calculated for each of the two satellite systems (GPS and GLONASS) as well as for their integration. The TVGDOP values enable the three cases to be compared and make it possible to quantify the benefits of GNSS integration on satellite geometry. The results confirm the efficiency of the proposed index in highlighting the better performance of the GPS+GLONASS combination, especially in the presence of obstacles.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_70-Temporal_Analysis_of_GDOP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Accuracy of Heart Disease Prediction using Machine Learning and Recurrent Neural Networks Ensemble Majority Voting Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110369</link>
        <id>10.14569/IJACSA.2020.0110369</id>
        <doi>10.14569/IJACSA.2020.0110369</doi>
        <lastModDate>2020-03-30T10:40:38.1430000+00:00</lastModDate>
        
        <creator>Irfan Javid</creator>
        
        <creator>Ahmed Khalaf Zager Alsaedi</creator>
        
        <creator>Rozaida Ghazali</creator>
        
        <subject>Deep learning; machine learning; heart disease; majority voting ensemble; University of California, Irvine (UCI) dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Machine Learning (ML) techniques, a branch of artificial intelligence, are commonly used to solve many problems in data science. The major use of ML is to predict an outcome based on existing data. The machine learns patterns from an established dataset and applies them to unfamiliar data sets to predict the outcome. The prediction accuracy of a few classification algorithms is satisfactory, while others perform with limited accuracy. Different ML and Deep Learning (DL) networks based on ANNs have been extensively recommended for the detection of heart disease in previous research. In this paper, we used the UCI Heart Disease dataset to test conventional ML techniques (i.e. random forest, support vector machine, K-nearest neighbor) as well as deep learning models (i.e. long short-term memory and gated recurrent unit neural networks). To improve the accuracy of weak algorithms, we explore a voting-based model that combines multiple classifiers. A systematic approach was used to determine how the ensemble technique can be applied to improve accuracy in heart disease prediction. The strength of the proposed ensemble approach, a voting-based model, is compelling in improving the prediction accuracy of weak classifiers, and it achieved adequate performance in analyzing the risk of heart disease. A maximum accuracy increase of 2.1% for weak classifiers was attained with the help of the ensemble voting-based model.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_69-Enhanced_Accuracy_of_Heart_Disease_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Climate Change Adaptation and Resilience through Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110368</link>
        <id>10.14569/IJACSA.2020.0110368</id>
        <doi>10.14569/IJACSA.2020.0110368</doi>
        <lastModDate>2020-03-30T10:40:38.1130000+00:00</lastModDate>
        
        <creator>Md Nazirul Islam Sarker</creator>
        
        <creator>Bo Yang</creator>
        
        <creator>Yang Lv</creator>
        
        <creator>Md Enamul Huq</creator>
        
        <creator>M M Kamruzzaman</creator>
        
        <subject>Disaster resilience; administrative resilience; community resilience; disaster management; environmental management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The adverse effects of climate change are gradually increasing all over the world, and developing countries suffer most. Big data can be an effective tool for forming appropriate adaptation strategies and enhancing people&#8217;s resilience. This study aims to explore the potential of big data for devising proper strategies against climate change effects as well as enhancing people&#8217;s resilience in the face of those adverse effects. A systematic review of the literature of the last ten years has been conducted. This study argues that resilience is a process of bouncing back to the previous condition after facing an adverse effect. It also focuses on the integrated function of the adaptive, absorptive and transformative capacities of a social unit, such as an individual, community or state, in facing a natural disaster. Big data technologies have the capacity to provide information regarding upcoming issues, current issues and the recovery stages of the adverse effects of climate change. The findings of this study will enable policymakers and related stakeholders to adopt appropriate adaptation strategies for enhancing the resilience of people in affected areas.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_68-Climate_Change_Adaptation_and_Resilience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining for Student Advising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110367</link>
        <id>10.14569/IJACSA.2020.0110367</id>
        <doi>10.14569/IJACSA.2020.0110367</doi>
        <lastModDate>2020-03-30T10:40:38.0970000+00:00</lastModDate>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Tahani Alsubait</creator>
        
        <creator>Abdullah Aljarallah</creator>
        
        <subject>Data mining; performance prediction; student analytics; academic advising; classification algorithms; decision tree; J48; neural network; Weka</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>This paper illustrates how to use data mining techniques to help in advising students and predicting their academic performance. Data mining is used to obtain previously unknown, hidden and perhaps vital knowledge from a large amount of data. It combines domain knowledge, advanced analytical skills, and a vast knowledge base to reveal hidden patterns and trends that are applicable in virtually any sector, ranging from engineering to medicine to business. Thus, it is possible for educational institutes to use data mining to find useful information in their databases. This is usually called Educational Data Mining (EDM). Advancing the field of EDM with new data analysis techniques and new machine learning algorithms is vital. Classification and clustering techniques are used in this project to study and analyse student performance. The key contribution of this project is that it discusses different data mining techniques in the literature review for studying student behaviour depending upon their performance. We tried to identify the most suitable algorithms from the existing research methods to predict the success of students. Various data mining approaches were discussed and their results were evaluated. In this paper, the J48 algorithm was applied to a data set gathered from Umm Al-Qura University in Makkah.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_67-Data_Mining_for_Student_Advising.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Approach in Requirements Change Management in Geographically Dispersed Environment (GDE)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110366</link>
        <id>10.14569/IJACSA.2020.0110366</id>
        <doi>10.14569/IJACSA.2020.0110366</doi>
        <lastModDate>2020-03-30T10:40:38.0830000+00:00</lastModDate>
        
        <creator>Shahid N. Bhatti</creator>
        
        <creator>Frnaz Akbar</creator>
        
        <creator>Mohammad A. Alqarni</creator>
        
        <creator>Amr Mohsen Jadi</creator>
        
        <creator>Abdulrahman A. Alshdadi</creator>
        
        <creator>Abdulah J. Alzahrani</creator>
        
        <subject>CM (Change Moderator); GDE (Geographically Dispersed Environment); MCR (Managing Changing Requirements); MCR in the GDE framework; RM (Requirements Management)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Managing requirements is an essential part of the engineering development process, as requirements change and emerge throughout development. In the following research work the primary focus is on managing requirements that change frequently in a geographically dispersed setup. Coping efficiently and effectively with changing requirements is key to fulfilling customers&#8217; requirements in a geographically dispersed environment (GDE); thus, an appropriate procedural model is presented in this work to deal with changing requirements, cut the overall cost of the project, and increase profitability by satisfying the customers and stakeholders. In the following research we propose an approach for tackling changing requirements in geographically dispersed software development, and we validate the presented procedural model through a case scenario. A comprehensive systematic literature review has been performed in Section II to derive an efficient methodology for GDE, its traits and risks, and to effectively manage a project&#8217;s evolving requirements in a geographically dispersed environment. Changing requirements in a geographically dispersed environment can be managed effectively if the proposed MCR model is followed; it mitigates the risks and challenges faced in global software development, and it is expected to cut the overall project cost and increase profitability.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_66-Optimized_Approach_in_Requirements_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based, a New Model for Video Captioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110365</link>
        <id>10.14569/IJACSA.2020.0110365</id>
        <doi>10.14569/IJACSA.2020.0110365</doi>
        <lastModDate>2020-03-30T10:40:38.0670000+00:00</lastModDate>
        
        <creator>Elif G&#252;sta &#214;zer</creator>
        
        <creator>Ilteber Nur Karapinar</creator>
        
        <creator>Sena Basbug</creator>
        
        <creator>S&#252;meyye Turan</creator>
        
        <creator>Anil Utku</creator>
        
        <creator>M. Ali Akcayol</creator>
        
        <subject>Video captioning; CNN; LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Visually impaired individuals face many difficulties in their daily lives. In this study, a video captioning system has been developed for visually impaired individuals to analyze events through real-time images and express them in meaningful sentences. The aim is to better understand the problems experienced by visually impaired individuals in their daily lives. For this reason, the opinions and suggestions of disabled individuals within the Altınokta Blind Association (a Turkish organization of blind people) have been collected to produce more realistic solutions to their problems. In this study, MSVD, which consists of 1970 YouTube clips, has been used as the training dataset. First, all clips were muted so that the sounds of the clips were not used in the sentence extraction process. The CNN and LSTM architectures have been used to generate sentences, and the experimental results have been compared using BLEU-4, ROUGE-L, CIDEr, and METEOR.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_65-Deep_Learning_Based_A_New_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TADOC: Tool for Automated Detection of Oral Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110364</link>
        <id>10.14569/IJACSA.2020.0110364</id>
        <doi>10.14569/IJACSA.2020.0110364</doi>
        <lastModDate>2020-03-30T10:40:38.0330000+00:00</lastModDate>
        
        <creator>Khalid Nazim Abdul Sattar</creator>
        
        <subject>Cancer; CT Scan; MRI Scan; Machine Learning; Deep learning; Convolutional Neural Network (CNN); Whole Slide Image (WSI); Residual Networks (ResNets)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Cancer is a group of related diseases, and it is necessary to classify its type and impact. In this paper, an automated learning-based system for the detection of oral cancer from Whole Slide Images (WSI) has been designed. The main challenges of the system were handling the huge dataset and training the machine learning model, as each iteration involved was time-consuming. This further increased the time needed to obtain a proper model and reduced the freedom for experimentation. Other important key features of the system were the implementation of a futuristic deep learning architecture to classify small patches from the large whole slide images and the use of carefully designed post-processing methods for slide-based classification.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_64-TADOC_Tool_for_Automated_Detection_of_Oral_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Producing Standard Rules for Smart Real Estate Property Buying Decisions based on Web Scraping Technology and Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110363</link>
        <id>10.14569/IJACSA.2020.0110363</id>
        <doi>10.14569/IJACSA.2020.0110363</doi>
        <lastModDate>2020-03-30T10:40:38.0030000+00:00</lastModDate>
        
        <creator>Haris Ahmed</creator>
        
        <creator>Tahseen Ahmed Jilani</creator>
        
        <creator>Waleej Haider</creator>
        
        <creator>Syed Noman Hasany</creator>
        
        <creator>Mohammad Asad Abbasi</creator>
        
        <creator>Ahsan Masroor</creator>
        
        <subject>Web scraping technology; HtmlAgilityPack; machine learning; C4.5 decision tree; Weka-J48</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Purchasing real estate property is a stressful and time-consuming activity, regardless of whether the individual in question is a buyer or a seller. It is also a major financial decision that can lead to numerous consequences if taken hastily. Therefore, a person is encouraged to properly invest their time and money in research relating to price demands, property type, location, etc. It can be a difficult task to assess which real estate property should be considered the best property to buy. The key idea of the current research study is to create a set of standard rules, based on web scraping technology and machine learning techniques, which should be embraced to make a smart real estate buying decision.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_63-Producing_Standard_Rules_for_Smart_Real_Estate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Measurement of Hepatic Fat in T1-Mapping and DIXON MRI as a Powerful Biomarker of Metabolic Profile and Detection of Hepatic Steatosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110362</link>
        <id>10.14569/IJACSA.2020.0110362</id>
        <doi>10.14569/IJACSA.2020.0110362</doi>
        <lastModDate>2020-03-30T10:40:37.9870000+00:00</lastModDate>
        
        <creator>Khouloud AFFI</creator>
        
        <creator>Mnaouer KACHOUT</creator>
        
        <subject>Nonalcoholic fatty liver disease; non-alcoholic steatohepatitis; image processing; metabolic diseases; magnetic resonance imaging; active contour</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Abnormal or excessive accumulation of intraperitoneal fat at different anatomical sites (heart, kidneys, liver, etc.) alters the metabolic profile by generating diseases that cause cardiovascular complications. These include hepatic steatosis, which requires increased surveillance before its severe progression to cirrhosis and its complications. Our objective in this (in-vivo) study was to propose a new approach to characterize and quantify hepatic fat, and then to differentiate patients with metabolic diseases, obesity, Type 2 diabetes (T2D), and metabolic syndrome from healthy subjects. This distinction was made not only according to traditional measurement tools such as body mass index (BMI) and waist circumference, but also according to the amount of fat measured from magnetic resonance imaging (MRI) DIXON images and T1-mapping at 1.5 Tesla (T). The evaluation results show that our proposed approach is reproducible, fast, and robust. The distribution of the amount of hepatic fat in a data cohort composed of four groups shows that hepatic fat is able to differentiate the metabolic population in the studied cohort. The relationship study of hepatic fat and cardiovascular parameters shows that hepatic fat has a negative influence on the heart as its amount increases.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_62-Automated_Measurement_of_Hepatic_Fat.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhance the Security and Prevent Vampire Attack on Wireless Sensor Networks using Energy and Broadcasts Threshold Values</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110361</link>
        <id>10.14569/IJACSA.2020.0110361</id>
        <doi>10.14569/IJACSA.2020.0110361</doi>
        <lastModDate>2020-03-30T10:40:37.9730000+00:00</lastModDate>
        
        <creator>Hesham Abusaimeh</creator>
        
        <subject>Network security; vampire attack; sensor nodes; energy; lifetime; power consumption; packets delivery ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Measuring and monitoring the surrounding environment are the main tasks of most battery-based Wireless Sensor Networks (WSNs). The main energy consumption in a WSN comes from communication and transferring data between nodes. There are many research works on how to reduce the energy consumption of the nodes inside the network. Most of these methods can save energy and make the WSN live longer. However, energy can also be consumed, and nodes lost from the network, through threats targeting this kind of network, such as the vampire attack, which loads the WSN with fake traffic. In this paper, a method is proposed for preventing the vampire attack from wasting the energy of the sensor nodes, based on the energy level of the intermediate nodes on the way to the destination.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_61-Enhance_the_Security_and_Prevent_Vampire_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recurrent Neural Networks for Meteorological Time Series Imputation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110360</link>
        <id>10.14569/IJACSA.2020.0110360</id>
        <doi>10.14569/IJACSA.2020.0110360</doi>
        <lastModDate>2020-03-30T10:40:37.9400000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Deymor Centty</creator>
        
        <subject>Recurrent neural network; long short-term memory; gated recurrent unit; univariate time series imputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The aim of the work presented in this paper is to analyze the effectiveness of recurrent neural networks in imputation processes for meteorological time series. For this, six different models based on recurrent neural networks, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), are implemented, and experiments are performed with hourly meteorological time series such as temperature, wind direction, and wind velocity. The implemented models have architectures of 2, 3, and 4 sequential layers, and their results are compared with each other, as well as with other imputation techniques for univariate time series mainly based on moving averages. The results show that, for temperature time series, on average the recurrent neural networks achieve better results than the imputation techniques based on moving averages; in the case of wind direction time series, on average only one RNN-based model manages to exceed the models based on moving averages; and finally, for wind velocity time series, on average no RNN-based model manages to exceed the results achieved by moving-average-based models.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_60-Recurrent_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control BLDC Motor Speed using PID Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110359</link>
        <id>10.14569/IJACSA.2020.0110359</id>
        <doi>10.14569/IJACSA.2020.0110359</doi>
        <lastModDate>2020-03-30T10:40:37.9400000+00:00</lastModDate>
        
        <creator>Md Mahmud</creator>
        
        <creator>S. M. A. Motakabber</creator>
        
        <creator>A. H. M. Zahirul Alam</creator>
        
        <creator>Anis Nurashikin Nordin</creator>
        
        <subject>PID controller; green technology; fuzzy logic control; speed control; BLDC motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>At present, green technology is a major concern in every country around the world, and electricity is a clean energy that encourages the adoption of this technology. The major applications of electrical energy are accomplished through electric motors, which convert electric power into mechanical energy. Brushless direct current (BLDC) motors have become very attractive in many applications due to their low maintenance costs and compact structure, and can be substituted in to make industries more dynamic. To get better performance, a BLDC motor requires a control drive to control its speed and torque. This paper describes the design of a BLDC motor control system in MATLAB/SIMULINK using a Proportional Integral Derivative (PID) algorithm that can more effectively improve the speed control of these types of motors. The purpose of the paper is to provide an overview of the functionality and design of the PID controller. Finally, the study undergoes some well-functioning tests that support that the PID regulator is far more applicable, better operational, and effective in achieving satisfactory control performance compared to other controllers.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_59-Control_BLDC_Motor_Speed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personality Classification from Online Text using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110358</link>
        <id>10.14569/IJACSA.2020.0110358</id>
        <doi>10.14569/IJACSA.2020.0110358</doi>
        <lastModDate>2020-03-30T10:40:37.9100000+00:00</lastModDate>
        
        <creator>Alam Sher Khan</creator>
        
        <creator>Hussain Ahmad</creator>
        
        <creator>Muhammad Zubair Asghar</creator>
        
        <creator>Furqan Khan Saddozai</creator>
        
        <creator>Areeba Arif</creator>
        
        <creator>Hassan Ali Khalid</creator>
        
        <subject>Personality recognition; re-sampling; machine learning; XGBoost; class imbalanced; MBTI; social networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Personality refers to the distinctive set of characteristics of a person that affect their habits, behaviors, attitude, and patterns of thought. Text available on social networking sites provides an opportunity to recognize an individual’s personality traits automatically. In this proposed work, a machine learning technique, the XGBoost classifier, is used to predict four personality traits based on the Myers-Briggs Type Indicator (MBTI) model, namely Introversion-Extroversion (I-E), iNtuition-Sensing (N-S), Feeling-Thinking (F-T), and Judging-Perceiving (J-P), from input text. A publicly available benchmark dataset from Kaggle is used in the experiments. The skewness of the dataset is the main issue associated with prior work; it is minimized by applying a re-sampling technique, namely random over-sampling, resulting in better performance. For further exploration of personality from text, pre-processing techniques including tokenization, word stemming, stop-word elimination, and feature selection using TF-IDF are also exploited. This work provides the basis for developing a personality identification system that could assist organizations in recruiting and selecting appropriate personnel and in improving their business by knowing the personality and preferences of their customers. The results obtained by all classifiers across all personality traits are good; however, the performance of the XGBoost classifier is outstanding, achieving more than 99% precision and accuracy for different traits.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_58-Personality_Classification_from_Online_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nabiha: An Arabic Dialect Chatbot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110357</link>
        <id>10.14569/IJACSA.2020.0110357</id>
        <doi>10.14569/IJACSA.2020.0110357</doi>
        <lastModDate>2020-03-30T10:40:37.8800000+00:00</lastModDate>
        
        <creator>Dana Al-Ghadhban</creator>
        
        <creator>Nora Al-Twairesh</creator>
        
        <subject>Artificial intelligence; natural language processing; chatbot; artificial intelligence markup language; Pandorabots; Arabic; Saudi dialect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Nowadays, we are living in an era of technology and innovation that impacts various fields, including the sciences. In computing and technology, many outstanding and attractive programs and applications have emerged, including programs that try to mimic human behavior. A chatbot is an example of an artificial intelligence-based computer program that tries to simulate human behavior by conducting a conversation and interacting with users using natural language. Over the years, various chatbots have been developed for many languages (such as English, Spanish, and French) to serve many fields (such as entertainment, medicine, education, and commerce). Unfortunately, Arabic chatbots are rare. To our knowledge, there is no previous work on developing a chatbot for the Saudi Arabic dialect. In this study, we have developed “Nabiha,” a chatbot that can support conversation with Information Technology (IT) students at King Saud University using the Saudi Arabic dialect. Therefore, Nabiha will be the first Saudi chatbot that uses the Saudi dialect. To facilitate access to Nabiha, we have made it available on different platforms: Android, Twitter, and the Web. When a student wants to talk with Nabiha, she can download an application, talk with her on Twitter, or visit her website. Nabiha was tested by the students of the IT department, and the results were somewhat satisfactory, considering the difficulty of the Arabic language in general and the Saudi dialect in particular.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_57-Nabiha_an_Arabic_Dialect_Chatbot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Mining of Maximal Bicliques in Graph by Pruning Search Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110356</link>
        <id>10.14569/IJACSA.2020.0110356</id>
        <doi>10.14569/IJACSA.2020.0110356</doi>
        <lastModDate>2020-03-30T10:40:37.8630000+00:00</lastModDate>
        
        <creator>Youngtae Kim</creator>
        
        <creator>Dongyul Ra</creator>
        
        <subject>Graph algorithms; maximal bicliques; maximal biclique mining; complete bipartite graphs; pruning search space; social networks; protein networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this paper, we present a new algorithm for mining or enumerating maximal biclique (MB) subgraphs in an undirected general graph. Our algorithm achieves improved theoretical time efficiency over the best existing algorithms. For an undirected graph with n vertices, m edges, and k maximal bicliques, our algorithm requires O(kn^2) time, which is state-of-the-art performance. Our main idea is based on a strategy of pruning the search space extensively. This strategy is made possible by storing maximal bicliques immediately after detection and allowing them to be looked up at runtime to make pruning decisions. The space complexity of our algorithm is O(kn) because of the space used for storing the MBs. However, a lot of space is saved by using a compact way of storing MBs, which is an advantage of our method. Experiments show that our algorithm outperforms other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_56-Efficient_Mining_of_Maximal_Bicliques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond Sentiment Classification: A Novel Approach for Utilizing Social Media Data for Business Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110355</link>
        <id>10.14569/IJACSA.2020.0110355</id>
        <doi>10.14569/IJACSA.2020.0110355</doi>
        <lastModDate>2020-03-30T10:40:37.8470000+00:00</lastModDate>
        
        <creator>Ibrahim Said Ahmad</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Mohd Ridzwan Yaakub</creator>
        
        <creator>Mohammad Darwich</creator>
        
        <subject>Purchase intention; sentiment analysis; lexicon; social media; product reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Extracting people’s opinions from social media has attracted a large number of studies over the years. This is a result of the growing popularity of social media. People share their sentiments and opinions via these social media platforms. Therefore, extracting and analyzing these sentiments is beneficial in many ways, for example, for business intelligence. However, despite the large number of studies on extracting and analyzing social media data, only a fraction of these studies focuses on its practical application. In this study, we focus on the use of product reviews for identifying whether a review signifies an intention of purchase or not. Therefore, we propose a novel lexicon-based approach for classifying product reviews into those that signify an intention of purchase and those that do not. We evaluated our proposed approach using a benchmark dataset based on accuracy, precision, and recall. The experimental results obtained prove the efficiency of our proposed approach to purchase intention identification.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_55-Beyond_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority based Energy Distribution for Off-grid Rural Electrification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110354</link>
        <id>10.14569/IJACSA.2020.0110354</id>
        <doi>10.14569/IJACSA.2020.0110354</doi>
        <lastModDate>2020-03-30T10:40:37.8170000+00:00</lastModDate>
        
        <creator>Siva Raja Sindiramutty</creator>
        
        <creator>Chong Eng Tan</creator>
        
        <creator>Sei Ping Lau</creator>
        
        <subject>PI (Panel Input); BP (Battery Power); critical appliances; non-critical appliances; prioritization; operating hour</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Rural off-grid electrification is always very challenging because it mostly relies on limited-output renewable energy sources such as solar power systems. Owing to a mode of power generation that depends on weather conditions, the reliability of power provision is often affected by uncontrolled overwhelming usage or bad weather. The total power system blackouts that frequently happen not only disturb night activity routines but can also be life-threatening if the rural community is unable to initiate telephony communication with the outside world during a state of emergency due to a power outage. In order to reduce the frequency of total system blackouts caused by the reasons mentioned, we propose a priority-based energy distribution scheme to assist the off-grid standalone solar power system in improving the overall operating hours of critical appliances in rural areas. The scheme takes into consideration the criticality of the home appliances as defined by the rural users, so that the system distributes power based on its current state, with the objective of prolonging the service availability of the critical appliances that matter most to the users. The scheme has been evaluated under a simulated scenario and has shown that 100% operation availability of the critical appliances is achievable even during a bad weather season with very low solar input.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_54-Priority_Based_Energy_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Assessment of Performance of Hospitals using Subjective Opinions for Sentiment Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110353</link>
        <id>10.14569/IJACSA.2020.0110353</id>
        <doi>10.14569/IJACSA.2020.0110353</doi>
        <lastModDate>2020-03-30T10:40:37.8000000+00:00</lastModDate>
        
        <creator>Muhammad Badruddin Khan</creator>
        
        <subject>Health informatics; Classification Algorithms; Sentiment Analysis; Sentiment Lexicons; Text Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Social media is the venue where opinions are shared by the public in the form of text, images, and videos. Hospitals’ performance can be judged from opinions written by patients or their relatives. Machine learning techniques can be used to detect the sentiments of the opinion givers. For the research work presented in this article, opinions about a few big hospitals were collected using Facebook, Twitter, and hospitals’ webpages. The corpus was constructed and the sentiment analysis was performed after a few preprocessing tasks. Resources such as the Stanford POS Tagger and WordNet were used to discover aspects. In this paper, the challenges of annotating subjective opinions are discussed in detail. Two sentiment lexicons, namely the NRC-Affect-Intensity lexicon and the SentiWordNet 3.0 lexicon, were used to calculate sentiment scores of the comments, which were then used by different machine learning classifiers. Moreover, the results of the experiments on the constructed dataset are provided. For the experiments that aimed to discover the overall sentiments of users towards a hospital, Random Forest outperformed other classifiers, achieving an accuracy of 76.49% using scores from the NRC-Affect-Intensity lexicon. For the experiments directed towards discovering the sentiments of users towards a particular aspect of a hospital, Random Forest overtook other classifiers, reaching an accuracy of 80.7339% using NRC-Affect-Intensity lexicon sentiment scores. The research results show that machine learning can be very helpful in identifying the sentiments of users from their textual comments, which are vastly available on different social media platforms. The results can be helpful in improving hospital performance and are expected to contribute to the growing field of health informatics.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_53-Automatic_Assessment_of_Performance_of_Hospitals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing Adoption at Higher Educational Institutions in the KSA for Sustainable Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110352</link>
        <id>10.14569/IJACSA.2020.0110352</id>
        <doi>10.14569/IJACSA.2020.0110352</doi>
        <lastModDate>2020-03-30T10:40:37.8000000+00:00</lastModDate>
        
        <creator>Ashraf Ali</creator>
        
        <subject>Cloud computing; higher educational institutions; Cloud Service Providers (CSP); Software-as-a-Service (SaaS); Platform-as-a-Service (PaaS); Infrastructure-as-a-Service (IaaS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Rapid changes in the advancement of information and communication technologies (ICT) have prompted Higher Educational Institutions (HEIs) to enhance teaching and learning. Over the years, cloud computing (CC) has become an emerging and adoptable paradigm in many industries, including healthcare, finance, and law, with its promising benefits. This trend is also growing in the field of education around the globe. Due to its inherent qualities of reliability, scalability, flexibility, and reasonable cost, the cloud is a solution that addresses the accessibility issue for quality education. CC plays an important role and will have major impacts on HEIs of the Kingdom of Saudi Arabia (KSA) in the near future. HEIs can utilize the benefits of CC-based services provided by cloud service providers (CSPs). A CSP can be owned by the KSA government, private companies, or third-party vendors. By using cloud-based services at HEIs, staff, faculty, and students can perform various academic responsibilities on demand. This paper aims to promote the adoption of CC at HEIs and to explore the prominent features and potential benefits of adopting cloud services in the HEIs of the KSA. This paper also reveals numerous challenges, impacts, and major issues involved in adopting cloud services for HEIs.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_52-Cloud_Computing_Adoption_at_Higher_Educational_Institutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Quality of Service of Cloud Computing in Big Data using Virtual Private Network and Firewall in Dense Mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110351</link>
        <id>10.14569/IJACSA.2020.0110351</id>
        <doi>10.14569/IJACSA.2020.0110351</doi>
        <lastModDate>2020-03-30T10:40:37.7700000+00:00</lastModDate>
        
        <creator>Hussain Shah</creator>
        
        <creator>Aziz ud Din</creator>
        
        <creator>Abizar</creator>
        
        <creator>Adil Khan</creator>
        
        <creator>Shams ud Din</creator>
        
        <subject>Cloud computing; big data; firewall; virtual private network; security; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Cloud computing entails accessing and storing programs and data over the internet instead of on the hard drive of a personal computer; it is the practice of delivering software and hardware as a service over the Internet. The cloud gives consumers the ability to access big data and use applications from any device that has internet access. However, the key problem is security, which can be addressed by a firewall and a Virtual Private Network (VPN). Recent research has examined deploying firewalls and VPNs with throughput and load parameters in sparse mode. In this paper, firewalls and VPNs are examined based on average throughput, average packet loss, and average end-to-end delay in dense mode. The research goal is to examine the performance of cloud computing without a firewall and VPN, with a firewall only, and with both a firewall and a VPN. The simulation results show that the firewall and VPN offer better security, with only a slight impact on cloud performance.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_51-Enhancing_the_Quality_of_Service_of_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Header-based Features on Accuracy of Classifiers for Spam Email Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110350</link>
        <id>10.14569/IJACSA.2020.0110350</id>
        <doi>10.14569/IJACSA.2020.0110350</doi>
        <lastModDate>2020-03-30T10:40:37.7530000+00:00</lastModDate>
        
        <creator>Priti Kulkarni</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Haridas Acharya</creator>
        
        <subject>Email classification; Chi-Square; correlation; relief feature selection; wrapper; information gain; Naive Bayes; J48; spam; support vector machine; random forest; NBTree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Emails are an integral part of communication in today’s world, but spam emails are a hindrance, leading to reduced efficiency, security threats, and wasted bandwidth. Hence, they need to be filtered at the first filtering station, so that employees are spared the drudgery of handling them. Most earlier approaches focused mainly on building content-based filters using the body of an email message. Using selected header features to filter spam is a better strategy, initiated by a few researchers. In this context, our research intends to find the minimum number of features required to classify spam and ham emails. A set of experiments was conducted with three datasets and five feature selection techniques, namely Chi-square, Correlation, Relief Feature Selection, Information Gain, and Wrapper. Five classification algorithms were used: Na&#239;ve Bayes, Decision Tree, NBTree, Random Forest, and Support Vector Machine. In most approaches, a trade-off exists between improper filtering and the number of features, so arriving at an optimum set of features is a challenge. Our results show that, to achieve satisfactory filtering, a minimum of 5 and a maximum of 14 features are required.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_50-Effect_of_Header_based_Features_on_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Design and Development of AI-based Mirror Neurons Agent towards Emotion and Empathy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110349</link>
        <id>10.14569/IJACSA.2020.0110349</id>
        <doi>10.14569/IJACSA.2020.0110349</doi>
        <lastModDate>2020-03-30T10:40:37.7370000+00:00</lastModDate>
        
        <creator>Faisal Rehman</creator>
        
        <creator>Adeel Munawar</creator>
        
        <creator>Aqsa Iftikhar</creator>
        
        <creator>Awais Qasim</creator>
        
        <creator>Jawad Hassan</creator>
        
        <creator>Fouzia Samiullah</creator>
        
        <creator>Muhammad Basit Ali Gilani</creator>
        
        <creator>Neelam Qasim</creator>
        
        <subject>Mirror neurons functionalities; emotions; empathy; machine learning; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2020.0110349.retraction</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_49-Design_and_Development_of_AI_based_Mirror_Neurons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accident Detection and Disaster Response Framework Utilizing IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110348</link>
        <id>10.14569/IJACSA.2020.0110348</id>
        <doi>10.14569/IJACSA.2020.0110348</doi>
        <lastModDate>2020-03-30T10:40:37.7070000+00:00</lastModDate>
        
        <creator>Shoaib ul Hassan</creator>
        
        <creator>Jingxia CHEN</creator>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Ali Akbar Shah</creator>
        
        <subject>Internet of things (IOT); accident detection; nearby places; nearby hospitals; cloud computing; intelligent transportation systems; information and communication technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The internet of things (IoT) offers noteworthy advantages over customary information and communication technologies (ICT) for Intelligent Transportation Systems (ITS). With the progression of transportation systems and the increase in vehicles, road accidents are accumulating to an alarming level. Moreover, 1.256 million people die in road accidents every year, and it is very difficult to find the precise location of an accident. If an accident occurs, the victim’s survival rate increases if instantaneous remedial assistance is given, which is possible only when the precise location of the accident is identified. The main purpose of this system is to detect an accident and find the location of the user. After tracing the location, the system searches for nearby hospitals for remedial treatment. In case of an emergency, the system sends a message containing the user&#39;s current location to the nearby hospitals. It also acquires recommended contacts from the cloud and sends them a message via an API to request the user’s support. If the user is safe, he can cancel the message being sent to the nearest hospital and the recommended contacts. This system will help users save their lives within minimal time.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_48-Accident_Detection_and_Disaster_Response_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mosques Smart Domes System using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110347</link>
        <id>10.14569/IJACSA.2020.0110347</id>
        <doi>10.14569/IJACSA.2020.0110347</doi>
        <lastModDate>2020-03-30T10:40:37.6900000+00:00</lastModDate>
        
        <creator>Mohammad Awis Al Lababede</creator>
        
        <creator>Anas H. Blasi</creator>
        
        <creator>Mohammed A. Alsuwaiket</creator>
        
        <subject>Decision tree; k-nearest neighbors; smart domes; weather prediction; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Millions of mosques around the world suffer from problems such as poor ventilation and difficulty getting rid of bacteria, especially during rush hours when congestion leads to air pollution, the spread of bacteria, unpleasant odors, and a state of discomfort during prayer times; most mosques do not have enough windows to ventilate well. This paper aims to solve these problems by building a model of smart mosque domes using weather features and outside temperatures. Machine learning algorithms such as k-Nearest Neighbors (k-NN) and Decision Tree (DT) were applied to predict the state of the domes (open or closed). The experiments of this paper were applied to the Prophet’s mosque in Saudi Arabia, which contains twenty-seven manually moving domes. Both machine learning algorithms were tested and evaluated using different evaluation methods. After comparing the results, the DT algorithm achieved a higher accuracy of 98%, compared with 95% for the k-NN algorithm. Finally, the results of this study are promising and will be helpful for mosques that use our proposed model to control domes automatically.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_47-Mosques_Smart_Domes_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Item Difficulty Estimates in a Basic Statistics Test using ltm and CTT Software Packages in R</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110346</link>
        <id>10.14569/IJACSA.2020.0110346</id>
        <doi>10.14569/IJACSA.2020.0110346</doi>
        <lastModDate>2020-03-30T10:40:37.6600000+00:00</lastModDate>
        
        <creator>Jonald L. Pimentel</creator>
        
        <creator>Marah Luriely A. Villaruz</creator>
        
        <subject>Classical test theory; indices; item calibration; item difficulty; item response theory; R software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Two free software packages, “ltm” and “CTT”, in the R environment were tested to demonstrate their usefulness in test item analysis. The item difficulty parameters were calibrated from the binary responses of two hundred five examinees to a fifteen-item multiple-choice test, analyzed using the Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies. The latent trait model package “ltm” employed the IRT framework, while the classical test theory package “CTT” operated under CTT. The IRT Rasch model was used to model the responses of the examinees, and the conditional maximum likelihood estimation method was used to estimate the item difficulty parameters for all items. All item difficulty indices were also calculated using the “CTT” package. Both statistical analyses were done in the R software. Results showed that, among the fifteen items, the estimated item difficulty parameters mostly differed in value between the two methods. In the IRT framework, items showed extremely difficult or easy cases compared to CTT. However, when the estimated values were categorized into intervals and labelled with verbal difficulty descriptions, both methodologies showed some similarities in item difficulty.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_46-Comparison_of_Item_Difficulty_Estimates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>“Onto-Computer-Project”, a Computer Project Domain Ontology : Construction and Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110345</link>
        <id>10.14569/IJACSA.2020.0110345</id>
        <doi>10.14569/IJACSA.2020.0110345</doi>
        <lastModDate>2020-03-30T10:40:37.6430000+00:00</lastModDate>
        
        <creator>Mejri Lassaad</creator>
        
        <creator>Hanafi Raja</creator>
        
        <creator>Henda Hajjami Ben Ghezala</creator>
        
        <subject>Domain ontology; ontology construction; ontology validation; computer project; project memory; knowledge representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Ontologies nowadays play a primordial role in representing, reusing, and sharing knowledge of a given domain in a consensual and explicit way, particularly in the computing field. In this context, we propose a domain ontology named Onto-Computer-Project, which is the key to our research goal: to elaborate, as a final step, a knowledge-based system for reusing computer projects. The aimed system is essentially based on the construction of a project memory, defined as a collection of historical, completed projects in the sphere of computing. This sphere is wide, spanning many subfields, from databases and software engineering to artificial intelligence, computer vision, and so on. This research work first requires constructing a well-defined ontology to structure and unify the vocabulary shared by multiple actors in the domain of computer projects. To concretize this goal, our paper describes a construction approach for the proposed domain ontology, mainly based on the existing methodology named “methontology”. The proposed construction approach, composed of seven steps, is the result of a comparative study of several ontology construction approaches belonging to different categories of methodology. In fact, four main categories of ontology development approaches can be distinguished: construction from scratch, text-based construction, building approaches based on the reuse of existing ontologies, and crowdsourcing-based approaches. In our research work, we follow the approach of building the ontology from scratch: the construction of the proposed ontology follows an autonomous approach that is not based on any other existing ontology or on the updating of an already constructed ontology. In addition, this paper addresses the problem of validating the content of a domain ontology, for which we propose an incremental validation approach composed of six steps. We studied several ontology validation approaches, some questionnaire-based, others based on question answering. The problem is that all the studied approaches are single-actor approaches, where one validation actor validates the entire ontology by applying semantic and structural validation definitively, with no feedback. The main originality of our validation approach consists of three criteria: incremental validation, multi-actor intervention, and respect for the “V” cycle. In fact, the passage from one validation step to another results in an update of the initial ontology through the intervention of three experts (a project management expert, a computer project expert, and a specialist in ontology engineering). Our approach requires feedback between all validation phases and can return to any expert for revalidation if needed. The result of this research is a validated ontology that allowed us to build our project memory and to feed the knowledge base that will serve to develop our knowledge-based system.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_45-Onto_Computer_Project_a_Computer_Project.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Place-based Uncertainty Prediction using IoT Devices for a Smart Home Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110344</link>
        <id>10.14569/IJACSA.2020.0110344</id>
        <doi>10.14569/IJACSA.2020.0110344</doi>
        <lastModDate>2020-03-30T10:40:37.6300000+00:00</lastModDate>
        
        <creator>Amr Jadi</creator>
        
        <subject>IoT; place-based approach; uncertainty prediction; MLP; SVM; BN; DTW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this work, an uncertainty prediction method for the home environment is proposed, using IoT devices (sensors) to predict uncertainties with a place-based approach. A neural network (NN) based smart communication system was implemented to test the results obtained from the place-based approach, using inputs from sensors linked to the internet of things (IoT). Many smart home-automation systems are available for alerting owners using IoT, but they communicate only after an accident happens. Predicting a hazard before it happens is very important for a safe home environment, given the presence of kids and pets at home in the absence of parents and guardians. Therefore, in this work, the uncertainty prediction component (UPC) using the place-based approach helps make suitable prediction decisions and plays a vital role in predicting uncertain events in the smart home environment. A comparison of different classifiers, including multi-layer perceptrons (MLP), Bayesian Networks (BN), Support Vector Machines (SVM), and Dynamic Time Warping (DTW), was made to assess the accuracy of the results obtained using the proposed approach. The results show that the place-based approach provides far better results than the global approach with respect to training and testing time as well: a difference of almost 10 times in computing time, which is a good improvement for predicting uncertainties at a faster rate.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_44-Place_Based_Uncertainty_Prediction_using_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach for the Detection of Family of Geometric Shapes in the Islamic Geometric Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110343</link>
        <id>10.14569/IJACSA.2020.0110343</id>
        <doi>10.14569/IJACSA.2020.0110343</doi>
        <lastModDate>2020-03-30T10:40:37.6130000+00:00</lastModDate>
        
        <creator>Ait Lahcen Yassine</creator>
        
        <creator>Jali Abdelaziz</creator>
        
        <creator>El Oirrak Ahmed</creator>
        
        <creator>Abdelmalek. Thalal</creator>
        
        <creator>Youssef. Aboufadil</creator>
        
        <creator>M. A. Elidrissi R</creator>
        
        <subject>Family geometric; shapes; Euclidean distance; ‘Hasba’; geometric art; Islamic patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>This article proposes a new approach to detect the family of geometric shapes in Islamic geometric patterns. This type of geometric pattern is constructed by tracing grids with respect to precise measurement criteria and the concept of symmetry, following a method called ‘Hasba’. Such patterns are generally found in the tiles that cover the floors or walls of many buildings around the Islamic world, such as mosques. This article describes a new method based on calculating the Euclidean distance between the different geometric shapes that constitute an Islamic geometric pattern, in order to detect similar regions in this type of pattern encountered in Islamic art.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_43-New_Approach_for_the_Detection_of_Family.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Social-Gamification for Interactive Learning in Tuberculosis Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110342</link>
        <id>10.14569/IJACSA.2020.0110342</id>
        <doi>10.14569/IJACSA.2020.0110342</doi>
        <lastModDate>2020-03-30T10:40:37.5830000+00:00</lastModDate>
        
        <creator>Dhana Sudana</creator>
        
        <creator>Andi W.R. Emanuel</creator>
        
        <creator>Suyoto</creator>
        
        <creator>Ardorisye S. Fornia</creator>
        
        <subject>Education; gamification; mobile application; social-media; tuberculosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>There are several methods of education for tuberculosis, one of which is the DOTS (Directly Observed Treatment, Short-course) program. Tuberculosis education through the DOTS program is delivered in clinics and hospitals only to patients and their families. The purpose of this study is to describe the development and testing of a prototype (social-game education) for interactive education of tuberculosis patients in particular and the general public. Data were collected through direct observation of tuberculosis patients and health professionals (doctors, nurses, and DOTS health workers). The game challenges in the prototype were provided with tuberculosis information that had been previously validated by a specialist. In addition to tuberculosis information as the main content, two important elements make up this prototype: gamification and social media. For the game elements, this study adopted the leaderboard, badge/achievement, challenge, and level; the social media elements include likes, comments, and shares. Prototype testing was conducted on two participant groups (N = 48), consisting of 23 tuberculosis patients and 25 random participants. Using the user experience questionnaire (UEQ) technique with a confidence interval of 5% (p = 0.05) per scale, this research focuses on identifying the user&#39;s motivation in capturing compositional information as well as the clarity of the prototype. The results indicate that participants have a high level of motivation towards the prototype, as seen in the stimulation rating scale with an average of 1.578. Likewise, the effectiveness of the information, measured on the perspicuity rating scale with a mean of 1.224, is quite high.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_42-Applying_Social_Gamification_for_Interactive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for Price Premium Prediction in Online Auctions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110341</link>
        <id>10.14569/IJACSA.2020.0110341</id>
        <doi>10.14569/IJACSA.2020.0110341</doi>
        <lastModDate>2020-03-30T10:40:37.5670000+00:00</lastModDate>
        
        <creator>Mofareah Bin Mohamed</creator>
        
        <creator>Mahmoud Kamel</creator>
        
        <subject>Classification; auction; CART; training; testing; preprocessing; noise; outlier; DSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The use of data mining techniques in the field of auctions has attracted considerable interest from the research community. In auctions, users try to achieve the highest gain and avoid loss as much as possible. Therefore, data mining techniques can be applied in the auctioning domain to develop an intelligent method for users in online auctions. However, determining the factors that affect the result of an auction, especially the initial price, is critical. In addition, the intelligent system must be built on clean data to ensure accurate results. In this paper, we propose an intelligent system (classifier) to predict the initial price of auctions. The proposed system uses the double smoothing method (DSM) for data cleaning in preprocessing. The system is implemented on a data set collected from the eBay website and cleaned using the proposed DSM. In the training phase, the CART technique is employed to construct the classifier. Compared to similar techniques, the proposed system exhibits better performance in terms of accuracy and robustness against noisy data, as determined using ROC curves.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_41-Intelligent_System_for_Price_Premium_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model of Tools for Requirements Elicitation Process for Children’s Learning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110340</link>
        <id>10.14569/IJACSA.2020.0110340</id>
        <doi>10.14569/IJACSA.2020.0110340</doi>
        <lastModDate>2020-03-30T10:40:37.5500000+00:00</lastModDate>
        
        <creator>Mira Kania Sabariah</creator>
        
        <creator>Paulus Insap Santosa</creator>
        
        <creator>Ridi Ferdiana</creator>
        
        <subject>Requirements elicitation; communication; children learning application; pedagogical aspect; learning style</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Requirements elicitation is the initial stage of the application development process, in which a set of needs for the system to be built is obtained by communicating with stakeholders who have a direct or indirect influence on those needs. Failure in the requirements elicitation process is often caused by weak communication, which is essential to carrying out the process. Selecting the right elicitation technique is not the only solution; the informants who serve as sources of requirements information also need to be considered. The choice of technique often fails because the tools are not useful, so the availability of suitable tools must be considered so that communication between the elicitation team and the informant goes well. Children have characteristics different from adults: their psychomotor, cognitive, and emotional limitations must be considered when choosing elicitation techniques and tools, and these limitations are also influenced by the age range of the child’s development. The use of digital elicitation devices is recommended for the requirements elicitation process, since interactive tools make it easier for children to convey their desires. In learning applications for children, the pedagogical aspects that need to be explored are learning styles and children&#39;s thinking abilities. Every child in every age range has a different learning-style preference, because children do not yet have learning experiences; the same applies to children’s levels of thinking ability. Therefore, these two things need to be appropriately explored during the learning application development process. The proposed elicitation tool model was made by taking both components of these pedagogical aspects into account. The test results of the built model show that the application is satisfactory, meaning that children can communicate well in conveying their needs as requirements for the learning application.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_40-Model_of_Tools_for_Requirements_Elicitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improve Speed Real-Time Rendering in Mixed Reality HOLOLENS during Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110339</link>
        <id>10.14569/IJACSA.2020.0110339</id>
        <doi>10.14569/IJACSA.2020.0110339</doi>
        <lastModDate>2020-03-30T10:40:37.5200000+00:00</lastModDate>
        
        <creator>Rafeek Mamdouh</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <creator>Alaa Riad</creator>
        
        <creator>Nashaat El-Khamisy</creator>
        
        <subject>Mixed reality; time interval; semantic segmentation; Microsoft HOLOLENS; computer visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Augmented reality (AR), virtual reality (VR), and mixed reality (MR) are advanced applications of computer visualization whose hybrid structures, used alongside other technologies in healthcare and other sectors, promise a future in which medical personnel can carry out surgical operations precisely. HOLOLENS 1, an MR product by Microsoft, is one of the first AR devices to be widely applied in medicine for the treatment of complex diseases, including operations that require great care, such as liver surgery. The main objective of this research is to use HOLOLENS in performing surgeries while improving the time interval and controlling semantic segmentation, maintaining the fidelity of the patient&#39;s liver data during surgery for the segmented 3D liver model. We describe a new technique that increases the number of light points: the greater the 3D intensity, the brighter the images and the easier they are to interact with. Holographic intensity also prevents the images seen through the transparent HOLOLENS lenses from blurring, which improves time-interval lens sensitivity and user detection in the environment. Finally, we describe a new framework for improving the speed of real-time rendering and model segmentation that uses hybrid visualization between VR and AR (MR), in which render time is reduced by increasing point light through color calculations and an energy function, enabling fast sending and receiving of data via a WiFi unit.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_39-Improve_Speed_Real_Time_Rendering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Sensing Satellite Image Clustering by Means of Messy Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110338</link>
        <id>10.14569/IJACSA.2020.0110338</id>
        <doi>10.14569/IJACSA.2020.0110338</doi>
        <lastModDate>2020-03-30T10:40:37.5030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Genetic Algorithm (GA); Messy GA; Simple GA; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The Messy Genetic Algorithm (GA) is applied to satellite image clustering. The Messy GA can maintain long schemata, because a schema can be expressed with a variable-length code, so more suitable clusters can be found than with existing Simple GA clustering. Results with simulation data show that the proposed Messy GA based clustering achieves four times better cluster separability than the Simple GA, while results with Landsat TM data of Saga show almost 65% better clustering performance.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_38-Remote_Sensing_Satellite_Image_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Weight Optimization for Artificial Higher Order Neural Networks in Physical Time Series</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110337</link>
        <id>10.14569/IJACSA.2020.0110337</id>
        <doi>10.14569/IJACSA.2020.0110337</doi>
        <lastModDate>2020-03-30T10:40:37.4870000+00:00</lastModDate>
        
        <creator>Noor Aida Husaini</creator>
        
        <creator>Rozaida Ghazali</creator>
        
        <creator>Nureize Arbaiy</creator>
        
        <creator>Norhamreeza Abdul Hamid</creator>
        
        <creator>Lokman Hakim Ismail</creator>
        
        <subject>Modified Cuckoo Search Markov Chain Mont&#233; Carlo; MCS-MCMC; neural networks; higher order; time series forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Many methods and approaches have been proposed for analyzing and forecasting time series data, and there are different Neural Network (NN) variations for specific tasks (e.g., Deep Learning, Recurrent Neural Networks). Time series forecasting is a crucial component of many important applications, from stock markets to energy load forecasts. Recently, Swarm Intelligence (SI) techniques, including Cuckoo Search (CS), have been established as among the most practical approaches for optimizing parameters in time series forecasting. Several modifications to the CS have been made, including the Modified Cuckoo Search (MCS), which adjusts the parameters of the current CS to improve algorithmic convergence rates. Motivated by the advantages of these MCS variants, we use an enhanced MCS known as the Modified Cuckoo Search-Markov Chain Mont&#233; Carlo (MCS-MCMC) learning algorithm for weight optimization in Higher Order Neural Network (HONN) models. The L&#233;vy flight function in the MCS is replaced with Markov Chain Mont&#233; Carlo (MCMC), since this reduces the complexity of generating the objective function. To demonstrate that MCS-MCMC is suitable for forecasting, its accuracy was compared with the standard Multilayer Perceptron (MLP), standard Pi-Sigma Neural Network (PSNN), Pi-Sigma Neural Network-Modified Cuckoo Search (PSNN-MCS), Pi-Sigma Neural Network-Markov Chain Mont&#233; Carlo (PSNN-MCMC), standard Functional Link Neural Network (FLNN), Functional Link Neural Network-Modified Cuckoo Search (FLNN-MCS), and Functional Link Neural Network-Markov Chain Mont&#233; Carlo (FLNN-MCMC) on various physical time series and benchmark datasets. The simulation results show that the HONN-based model combined with the MCS-MCMC learning algorithm improves accuracy by 0.007% to 0.079% on three physical time series datasets.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_37-A_Modified_Weight_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Multi-Level Evaluation of Strategic Educational Goals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110336</link>
        <id>10.14569/IJACSA.2020.0110336</id>
        <doi>10.14569/IJACSA.2020.0110336</doi>
        <lastModDate>2020-03-30T10:40:37.4730000+00:00</lastModDate>
        
        <creator>Mohammad Alhaj</creator>
        
        <creator>Mohammad Hassan</creator>
        
        <creator>Abdullah Al-Refai</creator>
        
        <subject>Evaluation process; goal model; multi-level modelling; goal requirement language; program educational goals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Educational organizations with multiple levels of management promote their strategic educational goals as correlated and clustered data. Typical assessment and feedback approaches are paper-based: word documents and flowcharts are used to evaluate strategic educational goals augmented with quantitative indicators. Unfortunately, the paper-based approach often neglects the relationships and dependencies between the educational goals defined at different levels, which may lead to complications in the analysis, a lack of clarity, and differing interpretations across management levels. We propose a multi-level model-driven approach that improves the assessment of strategic educational goals, handles clustered data efficiently, and allows individual and group-level assessment to take effect simultaneously. The approach also allows decision makers in academic institutions to extract valuable information from goal models at different academic levels and to measure the fulfilment of educational goals against target performance in a formal way.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_36-A_New_Approach_for_Multi_Level_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Genetic Algorithm Performance for Effective Traffic Lights Control using Balancing Technique (GABT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110335</link>
        <id>10.14569/IJACSA.2020.0110335</id>
        <doi>10.14569/IJACSA.2020.0110335</doi>
        <lastModDate>2020-03-30T10:40:37.4570000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Genetic algorithm; traffic lights; intelligent transportation systems; correlation; roulette wheel selection; boltzmann selection; selection pressure; population</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>A Genetic Algorithm (GA) is implemented and simulation-tested for adaptive traffic light management at a four-road intersection. The employed GA uses hybrid Boltzmann Selection (BS) and Roulette Wheel Selection techniques (BS-RWS). Selection Pressure (SP) and Population (Pop) parameters are used to tune and balance the designed GA to obtain optimized and correct control of passing vehicles. Successful tuning of these parameters yields a minimum number of iterations (IRN) across a wide spectrum of SP and Pop values. The algorithm is mathematically modeled and analyzed, and a proof is obtained for the condition of a balanced GA. Such a balanced GA is most useful in traffic management for optimized Intelligent Transportation Systems, as it requires minimum iterations for convergence with faster dynamic control time.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_35-Optimizing_Genetic_Algorithm_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JeddahDashboard (JDB): Visualization of Open Government Data in Kingdom of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110334</link>
        <id>10.14569/IJACSA.2020.0110334</id>
        <doi>10.14569/IJACSA.2020.0110334</doi>
        <lastModDate>2020-03-30T10:40:37.4400000+00:00</lastModDate>
        
        <creator>Mashael Khayyat</creator>
        
        <subject>Open Data; Open Government Data; visualization; Dashboard; Saudi Arabia; KSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Open data is data that anyone can freely use, access, and redistribute without financial, legal, or technical restrictions. Accordingly, governmental and non-governmental organizations may openly publish the data they own on the Internet for various purposes without restriction (e.g., climate statistics, education statistics, transportation, industry, water abstraction). Open Government Data (OGD) initiatives have proliferated in every country, including the Kingdom of Saudi Arabia (KSA). OGD is supposed to increase transparency, collaboration, and citizen participation. However, the format in which OGD is presented may not be attractive enough to users, and the data may not be easy for them to understand and interpret; these stumbling blocks may dampen the use of OGD among citizens. The problems can be addressed by visualizing the available datasets and representing the data according to user preference. This research focuses on an OGD visualization effort in KSA, the JeddahDashboard (JDB) website, whose aim is to visualize the published government data in KSA. The idea was inspired by the DublinDashBoard in Ireland, where data and real-time information, time-series index data, and interactive maps on many aspects of the city are provided, mostly as interactive and attractive charts that are easy to understand. Two tools were used to create JeddahDashBoard, first Tableau and then chart.js, because the latter was simple and flexible. Finally, this paper shares the researchers&#39; experience and the challenges of establishing JDB.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_34-JeddahDashboard_JDB_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Scheduling Design for Time Slotted Channel Hopping Enabled Mobile Adhoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110333</link>
        <id>10.14569/IJACSA.2020.0110333</id>
        <doi>10.14569/IJACSA.2020.0110333</doi>
        <lastModDate>2020-03-30T10:40:37.4100000+00:00</lastModDate>
        
        <creator>Sridhara S. B</creator>
        
        <creator>Ramesha M</creator>
        
        <creator>Veeresh Patil</creator>
        
        <subject>6TiSCH; access fairness; energy efficiency; MANET (Mobile Ad Hoc Network); TSCH (Time Slotted Channel Hopping); scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Industrial Internet of Things (IIoT) applications include wearable sensor devices for human activity monitoring; these battery-powered devices generate continuous data at high data rates, which restricts the use of wireless protocols such as IEEE 802.15.4 and BLE (Bluetooth Low Energy). A promising alternative is the TSCH (Time Slotted Channel Hopping) MAC (Medium Access Control), which can be deployed in different environments that are prone to interference. This work focuses on designing AS (Adaptive Scheduling) for TSCH-MANET (Mobile Ad Hoc Network). Designing a scheduling technique is difficult given the unpredictable nature of data source locations and wireless links, which can lead to wasted reserved resources. The proposed Adaptive Scheduling model supports both shared and dedicated slots and allows a communicating device to activate its assigned slots adaptively. Hence, the proposed AS model achieves higher overall access fairness, minimal idle-listening overhead, and a higher packet delivery rate; to cope with higher traffic loads, a MANET device can activate additional slots dynamically. The results show that the Adaptive Scheduling model yields higher data transmission and lower energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_33-Adaptive_Scheduling_Design_for_Time_Slotted.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Logical Intervention in the Form of Work Breakdown Strategy using Object Constraint Language for e-Commerce Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110332</link>
        <id>10.14569/IJACSA.2020.0110332</id>
        <doi>10.14569/IJACSA.2020.0110332</doi>
        <lastModDate>2020-03-30T10:40:37.3930000+00:00</lastModDate>
        
        <creator>Shikha Singh</creator>
        
        <creator>Manuj Darbari</creator>
        
        <subject>OCL; e-commerce; concurrent handling; work breakdown structure; augmented querying</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>This paper proposes a framework for rule-based inhibition on e-commerce websites to prevent double payment and to compute a time invariant for concurrent event handling. The authors analyze computational models in terms of customer segmentation replicating buying characteristics and dialogue-level constraint establishment through OCL. The tools used are MDT-OCL and matching logic for logic-level interpretation; the MDT tool generates a Context Syntax Tree. LPG grammar is applied to WBS codification by differentiating descriptive nouns containing product descriptions from the verbs associated with the products. The authors use Eclipse plug-ins to embed logic constraint mapping and to check for ontological errors as well as double-selection and double-payment errors.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_32-Logical_Intervention_in_the_Form_of_Work_Breakdown.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Application of Zipf&#39;s Law for Prose and Verse Corpora Neutrality for Hindi and Marathi Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110331</link>
        <id>10.14569/IJACSA.2020.0110331</id>
        <doi>10.14569/IJACSA.2020.0110331</doi>
        <lastModDate>2020-03-30T10:40:37.3800000+00:00</lastModDate>
        
        <creator>Prafulla B. Bafna</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Marathi; NLP; Synset; Zipf’s law</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The availability of text in different languages has become possible, as almost all websites offer a multilingual option. Hindi is the official language in one of the states of India, and Hindi text analysis is dominated by corpora of stories and poems. Before performing any text analysis, token extraction is an important step that supports many applications, such as text summarization and text categorization. Token extraction is part of Natural Language Processing (NLP), which includes steps such as corpus preprocessing and lemmatization. In this paper, tokens are extracted by two methods on two corpora: BaSa, a context-based term extraction technique combining several NLP activities, e.g., Term Frequency-Inverse Document Frequency (TF-IDF), and Zipf’s law are used to count and compare extracted tokens, and the tokens produced by the two methods are then compared. The corpora contain prose and verse in Hindi as well as Marathi. Common tokens from the prose and verse corpora of Marathi and Hindi are identified to show that both behave the same as far as NLP activities are concerned, and the superiority of BaSa over Zipf’s law is demonstrated. The Hindi corpus includes 820 stories and 710 poems, and the Marathi corpus includes 610 stories and 505 poems.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_31-An_Application_of_Zipfs_Law.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adapted Lesk Algorithm based Word Sense Disambiguation using the Context Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110330</link>
        <id>10.14569/IJACSA.2020.0110330</id>
        <doi>10.14569/IJACSA.2020.0110330</doi>
        <lastModDate>2020-03-30T10:40:37.3630000+00:00</lastModDate>
        
        <creator>Manish Kumar</creator>
        
        <creator>Prasenjit Mukherjee</creator>
        
        <creator>Manik Hendre</creator>
        
        <creator>Manish Godse</creator>
        
        <creator>Baisakhi Chakraborty</creator>
        
        <subject>Word Sense Disambiguation; natural language processing; WordNet; context; machine translation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The process of correctly identifying the meaning of a polysemous word from a given context is known as Word Sense Disambiguation (WSD) in natural language processing (NLP). A system based on the adapted Lesk algorithm is proposed that uses a knowledge-based approach, with WordNet as the knowledge source (lexical database). The proposed system has three units: input query, pre-processing, and the WSD classifier. The input query unit takes the input sentence (an unstructured query) from the user and passes it to the pre-processing unit. The pre-processing unit converts the received unstructured query into a structured query by adding features such as Part of Speech (POS) tags and grammatical roles (subject, verb, and object), and transfers the structured query to the WSD classifier. The WSD classifier uniquely identifies the sense of the polysemous word using the context information of the query and the lexical database.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_30-Adapted_Lesk_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis on the Requirements of Computational Thinking Skills to Overcome the Difficulties in Learning Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110329</link>
        <id>10.14569/IJACSA.2020.0110329</id>
        <doi>10.14569/IJACSA.2020.0110329</doi>
        <lastModDate>2020-03-30T10:40:37.3470000+00:00</lastModDate>
        
        <creator>Karimah Mohd Yusoff</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <creator>Noorazean Mohd Ali</creator>
        
        <subject>Problem-solving; STEM; difficulties in learning programming; cognitive; novice</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Programming has evolved as an effort to strengthen science, technology, engineering, and mathematics (STEM). Programming is a complex process, especially for novices, since it requires problem-solving skills to develop algorithms and program code. Problem-solving competencies, which are necessary as 21st-century skills, include a set of cognitive skills related to problem-solving and program development, known specifically as computational thinking (CT) skills. This study quantitatively assessed computational thinking skills in the context of programming, specifically regarding the difficulties in learning programming. From the instructors&#39; perspective, the survey results highlighted the need to adopt CT skills as an approach in teaching and learning programming. A model for teaching and learning programming is necessary as a guide for instructors in the teaching and learning process of programming.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_29-Analysis_on_the_Requirements_of_Computational_Thinking_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectrum Occupancy Measurement of Cellular Spectrum and Smart Network Sharing in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110328</link>
        <id>10.14569/IJACSA.2020.0110328</id>
        <doi>10.14569/IJACSA.2020.0110328</doi>
        <lastModDate>2020-03-30T10:40:37.3170000+00:00</lastModDate>
        
        <creator>Aftab Ahmed Mirani</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Saqib Hussain</creator>
        
        <creator>Muhammad Aamir Panhwar</creator>
        
        <creator>Syed Rizwan Ali Shah</creator>
        
        <subject>Cognitive radio; spectrum occupancy; cellular networks; spectrum analyzer; mobile network operators; sharing models; passive infrastructure sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In wireless communication, the radio spectrum is a rare and precious resource, and efficiently exploiting the underutilized portions of statically allocated licensed bands has become a major problem. Cognitive radio (CR) has recently emerged as a promising technology to overcome the spectrum crisis: a licensed band can be utilized by unlicensed users as long as this does not affect the transmissions of the licensed users. In this paper, the spectrum occupancy of three bands, GSM 900, 1800, and 2100, was measured with a spectrum analyzer in indoor and outdoor environments. The measurements for all three bands were processed in MATLAB as power spectral density versus frequency plots. The results show that the majority of the licensed spectrum is underutilized; CR can therefore play a pivotal role in efficiently utilizing the unused spectrum and overcoming the cellular wireless spectrum crisis in Pakistan. The second part of this paper deals with the emerging concept of network sharing among mobile operators and its impact on cost. Network sharing has become the standard among mobile operators worldwide, including in Pakistan; capital (CAPEX) and operational (OPEX) expenditure, together with rapid technological advancement, have encouraged operators to adopt sharing business models. In Pakistan, all four mobile operators (Jazz, Telenor, Zong, and Ufone) are actively adopting this model to maintain EBITDA (earnings before interest, taxes, depreciation, and amortization). There are two main types of network sharing, passive infrastructure sharing and active resource sharing; passive infrastructure sharing is widely used among operators in Pakistan.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_28-Spectrum_Occupancy_Measurement_of_Cellular_Spectrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved RDWT-based Image Steganography Scheme with QR Decomposition and Double Entropy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110327</link>
        <id>10.14569/IJACSA.2020.0110327</id>
        <doi>10.14569/IJACSA.2020.0110327</doi>
        <lastModDate>2020-03-30T10:40:37.3000000+00:00</lastModDate>
        
        <creator>Ke-Huey Ng</creator>
        
        <creator>Siau-Chuin Liew</creator>
        
        <creator>Ferda Ernawan</creator>
        
        <subject>Steganography; image steganography; transform domain; Redundant Discrete Wavelet Transform (RDWT); QR decomposition; entropy; human visual system (HVS); imperceptibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>This paper introduces an improved RDWT-based image steganography scheme with QR decomposition and a double-entropy system. The method hides a grayscale secret image in a grayscale cover image using RDWT, QR decomposition, and entropy calculation, exploiting the human visual system (HVS) in the embedding process. Both the cover and secret images are segmented into non-overlapping blocks of identical size, and the entropy values of the image blocks are sorted from lowest to highest. Embedding starts by placing the secret image block with the lowest entropy into the cover image block with the lowest entropy, and continues until all image blocks have been embedded. Embedding the secret image into the cover image according to entropy values produces differences that the HVS is less likely to detect, because the changes in image texture are small. With the double-entropy system, the proposed scheme achieves a higher PSNR of 60.3773, compared to 55.5771 in previous work; in terms of SSIM, it achieves 0.9998 compared to the previous work’s 0.9967. The proposed scheme eliminates the false-positive issue and requires low computational time: only 0.72 seconds for embedding and 1.14 seconds for extraction. It also shows better imperceptibility than previous work.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_27-An_Improved_RDWT_based_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Image in Different Cameras using an Improved Algorithm in Viola-Jones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110326</link>
        <id>10.14569/IJACSA.2020.0110326</id>
        <doi>10.14569/IJACSA.2020.0110326</doi>
        <lastModDate>2020-03-30T10:40:37.2830000+00:00</lastModDate>
        
        <creator>Washington Garcia-Quilachamin</creator>
        
        <creator>Luzmila Pro - Concepci&#243;n</creator>
        
        <subject>Algorithm; video surveillance; cameras; image of a person</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Technological evolution through computing tools has made possible recognition tasks that are impossible for an ordinary person, while also contributing to people&#39;s safety. Deep learning is considered a tool that uses images and video to detect and interpret real-world scenes. It is therefore necessary to validate the application of a person-recognition algorithm with different cameras, as a contribution to surveillance in domestic and corporate environments. This research presents an algorithm that detects the image of a person through a camera. The objective is to validate the image-recognition process with four cameras by applying the improved Viola-Jones algorithm. The validation was carried out through a mathematical analysis that grounded the recognition of the image using the four different cameras. The study obtained an effective and functional validation of the results achieved with the algorithm across the four cameras, including effectiveness in speed-based recognition over the different capture-and-recognition tests performed for each image, reducing recognition time and optimizing the software and hardware used.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_26-Recognition_of_Image_in_Different_Cameras.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Twitter Corpus for the Classification of Arabic Speech Acts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110325</link>
        <id>10.14569/IJACSA.2020.0110325</id>
        <doi>10.14569/IJACSA.2020.0110325</doi>
        <lastModDate>2020-03-30T10:40:37.2530000+00:00</lastModDate>
        
        <creator>Majdi Ahed</creator>
        
        <creator>Bassam H. Hammo</creator>
        
        <creator>Mohammad A. M. Abushariah</creator>
        
        <subject>Arabic speech acts; twitter; modern standard Arabic; speech act classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Twitter has gained wide attention as a major social media platform where many topics are discussed on a daily basis through millions of tweets. A tweet can be viewed as a speech act (SA), which is an utterance for presenting information, hiding indirect meaning, or carrying out an action. According to SA theory, an SA can represent an assertion, a question, a recommendation, or many other things. In this paper, we tackle the problem of constructing a reference corpus of Arabic tweets for the classification of Arabic speech acts. We refer to this corpus as the Arabic Tweets Speech Act Corpus (ArTSAC). It is an enhancement of a modern standard Arabic (MSA) tweet corpus of speech acts called ArSAS, and is more advantageous than ArSAS in terms of the richness of its annotated features. The goal of ArTSAC is twofold: firstly, to understand the purpose and intention of tweets in accordance with SA theory, and hence to positively influence the development of many natural language processing (NLP) applications; secondly, as a future goal, to serve as a benchmark annotated dataset for testing and evaluating state-of-the-art Arabic SA classification algorithms and applications. ArTSAC has been put into practice to classify Arabic tweets containing speech acts using the Support Vector Machine (SVM) classification algorithm. The results of the experiments show that the enhanced ArTSAC corpus achieved an average precision of 90.6% and an F-score of 89.6%, substantially outperforming the results of its predecessor, the ArSAS corpus.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_25-An_Enhanced_Twitter_Corpus_for_the_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Performance of the Multiplexed Colored QR Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110324</link>
        <id>10.14569/IJACSA.2020.0110324</id>
        <doi>10.14569/IJACSA.2020.0110324</doi>
        <lastModDate>2020-03-30T10:40:37.2230000+00:00</lastModDate>
        
        <creator>Islam M. El-Sheref</creator>
        
        <creator>Fatma A. El-Licy</creator>
        
        <creator>Ahmed H. Asad</creator>
        
        <subject>Quick response codes; colored quick response codes; data capacity enhancement; multiplexing quick response color coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The vast popularity and useful applications of the QR code have encouraged research towards improving its storage capacity and performance. A colored quick response (QR) algorithm is proposed, devised and tested to expand the capacity of the black and white modules by replacing them with colored modules that can hold as many values as are available in the 8-bit Red-Green-Blue (RGB) color code. A Fast Multiplexing Technique (FMT) is established to improve the performance and storage capacity of the QR color code by multiplexing black/white QR codes into RGB color shades and then folding them into the QR code modules. Comparative experiments with the classical multiplexing technique (the MUX system) showed that FMT has much better performance (exponentially faster), while maintaining a capacity of up to 24 times that of the classical QR code.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_24-Improving_Performance_of_the_Multiplexed_Colored_QR_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art Reformation of Web Programming Course Curriculum in Digital Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110323</link>
        <id>10.14569/IJACSA.2020.0110323</id>
        <doi>10.14569/IJACSA.2020.0110323</doi>
        <lastModDate>2020-03-30T10:40:37.2070000+00:00</lastModDate>
        
        <creator>Susmita Kar</creator>
        
        <creator>Md. Masudul Islam</creator>
        
        <creator>Mijanur Rahaman</creator>
        
        <subject>Web engineering; web development; outcome based learning; CDIO; web course curriculum; web ecosystem; digital Bangladesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>For the last 15 years, universities around the world have been continuously developing effective curricula for Web Engineering in order to create good opportunities for graduates to keep up with the IT-software industry. In this study we show the gap between the skill requirements of IT-software industries and universities’ web course curricula, and we provide a balanced and structured web course curriculum for any university. Nowadays there is rapid development in web-based applications everywhere, but most of our students are late bloomers in programming, so to ease their difficulties in the web sector we need a balanced web curriculum and an effective teaching method. Through this curriculum one can achieve an overall idea and a minimum view of web engineering, which can be beneficial for further web development. Students gain only a little knowledge of Web Engineering at university because of the vastness of the contents and the short duration of a semester. Our two-semester web course curricula help them overcome this problem and have a large impact on achieving the minimum skills required in the web development field by IT-software industries. They cover most areas of web-related content and also increase problem-solving skills and versatile knowledge of web engineering at the undergraduate level.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_23-State_of_the_Art_Reformation_of_Web_Programming_Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobility Management Challenges and Solutions in Mobile Cloud Computing System for Next Generation Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110322</link>
        <id>10.14569/IJACSA.2020.0110322</id>
        <doi>10.14569/IJACSA.2020.0110322</doi>
        <lastModDate>2020-03-30T10:40:37.1770000+00:00</lastModDate>
        
        <creator>L. Pallavi</creator>
        
        <creator>A. Jagan</creator>
        
        <creator>B. Thirumala Rao</creator>
        
        <subject>Mobility management; energy consumption; network resource management; traffic management; security management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In recent years there has been rapid development in the field of mobile computing, and mobile cloud computing (MCC) has emerged as a promising direction for mobile services. Mobile phones and their applications now have highly capable infrastructure for services anywhere and are growing ever more quickly, and MCC is expected to deliver fundamentally more innovative multi-purpose applications. Mobile computing encompasses mobile hardware, mobile communication, and mobile software, and many mobile cloud applications already exist. Mobility management, which supports location tracking and location-based services, is an important issue for smart cities, providing the means for the smooth transportation of people and goods. Mobility contributes to the development of both public and private transportation systems in smart cities. Because data resides in cloud computing and is accessed with mobile devices, all transactions traverse the network and are therefore vulnerable to attack. To keep this essential tool stable in a developing world, we present some solutions to the challenges that must be addressed in the field of MCC. In this paper, the main challenges faced by enterprises and their corresponding solutions in mobile cloud computing are discussed.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_22-Mobility_Management_Challenges_and_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Genre Classification using Convolutional Recurrent Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110321</link>
        <id>10.14569/IJACSA.2020.0110321</id>
        <doi>10.14569/IJACSA.2020.0110321</doi>
        <lastModDate>2020-03-30T10:40:37.1600000+00:00</lastModDate>
        
        <creator>K Prasanna Lakshmi</creator>
        
        <creator>Mihir Solanki</creator>
        
        <creator>Jyothi Swaroop Dara</creator>
        
        <creator>Avinash Bhargav Kompalli</creator>
        
        <subject>Convolutional recurrent neural networks; video classification; temporal and spatial aspects; machine learning; computer vision; images classification; audio classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>A large amount of media on the internet is in the form of video files with different formats and encodings. Identification and sorting of videos becomes a mammoth task if done manually. With an ever-increasing demand for video streaming and download, the video classification problem comes to the fore for managing such large and unstructured data over the internet and locally. We present a solution for classifying videos into genres and locality by training a Convolutional Recurrent Neural Network. It involves feature extraction from video files in the form of frames and audio. The neural network then makes a suitable prediction, and the final output layer places the video in a certain genre. This approach could be applied to a vast number of applications, including but not limited to search optimization, grouping, critic reviews, piracy detection, targeted advertisements, etc. We expect our fully trained model to identify, with acceptable accuracy, any video or video clip over the internet and thus eliminate the cumbersome problem of manual video classification.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_21-Video_Genre_Classification_using_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Energy Control Internet of Things based Agriculture Clustered Scheme for Smart Farming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110320</link>
        <id>10.14569/IJACSA.2020.0110320</id>
        <doi>10.14569/IJACSA.2020.0110320</doi>
        <lastModDate>2020-03-30T10:40:37.1430000+00:00</lastModDate>
        
        <creator>Sabir Hussain Awan</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <creator>Muhammad Yousaf Ali Khan</creator>
        
        <creator>Asif Nawaz</creator>
        
        <creator>Muhammad Fahad</creator>
        
        <creator>Muhammad Tayyab</creator>
        
        <creator>Atif Ishtiaq</creator>
        
        <subject>Agriculture; IoT; network; energy; scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The era of smart farming has already begun, and its consequences for society and the environment are expected to be massive. In this situation, Internet of Things (IoT) technologies have become a key route towards new agricultural practices. IoT nodes detect and track physical or environmental conditions and transmit data through multihop routing to their base station. However, these IoT nodes face energy constraints and complex routing processes due to their limited capacities, leading to data transmission failures and delays in IoT-based farming. Because of these limitations, IoT nodes distant from the base station depend on their cluster heads (CHs), placing additional load on the CHs, causing high energy consumption and shortening their lifetime. To address these issues, this research proposes a smart energy control IoT-based agriculture clustered scheme that reduces the load on CHs by introducing a novel clustering scheme. Simulations are conducted for validation, and a comparison is made with the LEACH protocol in agriculture; the results show that the proposed scheme has much lower energy consumption and a longer network life than its counterparts.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_20-Smart_Energy_Control_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WoT Communication Protocol Security and Privacy Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110319</link>
        <id>10.14569/IJACSA.2020.0110319</id>
        <doi>10.14569/IJACSA.2020.0110319</doi>
        <lastModDate>2020-03-30T10:40:37.1130000+00:00</lastModDate>
        
        <creator>Sadia Murawat</creator>
        
        <creator>Fahima Tahir</creator>
        
        <creator>Maria Anjum</creator>
        
        <creator>Mudasar Ahmed Soomro</creator>
        
        <creator>Saima Siraj</creator>
        
        <creator>Zojan Memon</creator>
        
        <creator>Anees Muhammad</creator>
        
        <creator>Khuda Bux</creator>
        
        <subject>Internet of Things; Web of Things; WoT; security issues; privacy issues; protocols MQTT CoAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this paper, we propose a novel approach for protecting the Internet of Things (IoT) from fake devices and highlight privacy issues arising from the use of third-party Application Program Interfaces (RestAPI) in the Web of Things (WoT). For ease of life, the usage of IoT devices, sensors, and Radio-Frequency Identification (RFID) tags has increased rapidly, for example in transport for monitoring vehicles and taxi services, in healthcare for monitoring patients’ health conditions, and in smart cars, smart grids, and smart homes. Because of this, attackers target these networks and protocols for financial gain, and adversaries try to damage the reputation of organizations or to steal intellectual property. For the last two decades or more, injection vulnerabilities have remained among the most threatening security risks for web applications. New security challenges confront security professionals and researchers in the implementation of IoT and WoT (Web of Things) communication protocols. The protocols Message Queuing Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), WebSockets, and RestAPI each have different security issues: respectively, the insertion of fake devices, the lack of authentication in WebSocket connections, and user privacy that can be leaked through the use of RestAPI without validation. We have developed a program in Personal Home Pages (PHP) for the detection of new devices in the IoT network. With this, the user’s privacy and data will be protected, along with addressing some critical security issues of the underlying WoT protocols.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_19-WoT_Communication_Protocol_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Prostate Cancer using Ensemble of Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110318</link>
        <id>10.14569/IJACSA.2020.0110318</id>
        <doi>10.14569/IJACSA.2020.0110318</doi>
        <lastModDate>2020-03-30T10:40:37.0970000+00:00</lastModDate>
        
        <creator>Oyewo O. A</creator>
        
        <creator>Boyinbode O.K</creator>
        
        <subject>Prostate cancer; machine learning; support vector; machine; decision tree; multilayer perceptron; diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Several diseases are associated with humans; some are specific to females and some to males. An example of a disease specific to the male gender is Prostate Cancer (PC). Prostate cancer occurs when cells in the prostate gland start to grow uncontrollably. Statistics show that prostate cancer is becoming an epidemic among men. Hence, several research works have tried to solve this problem using various methods. Although numerous medical research works are ongoing in the area, the need to introduce technology to battle the epidemic is paramount. Because of this, some researchers have developed several models to help solve issues of prostate cancer in men, but the area is still open to contribution. Recently, some researchers have adopted well-established Machine Learning (ML) techniques to predict and diagnose the occurrence of prostate cancer, but issues of low prediction accuracy, inability to implement the model, and low sensitivity, among others, still linger. This paper approaches these challenges by developing an ensemble model that combines three ML techniques, Support Vector Machine (SVM), Decision Tree (DT), and Multilayer Perceptron (MP), to predict PC in men. Our developed model was evaluated using sensitivity, specificity and accuracy as performance metrics, and our results showed a prediction accuracy of 99.06%, sensitivity of 98.09% and specificity of 99.54%, which is a relative improvement on existing systems.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_18-Prediction_of_Prostate_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart Rate Monitoring with Smart Wearables using Edge Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110317</link>
        <id>10.14569/IJACSA.2020.0110317</id>
        <doi>10.14569/IJACSA.2020.0110317</doi>
        <lastModDate>2020-03-30T10:40:37.0670000+00:00</lastModDate>
        
        <creator>Stephen Dewanto</creator>
        
        <creator>Michelle Alexandra</creator>
        
        <creator>Nico Surantha</creator>
        
        <subject>Internet-of-things; heart rate; smart wearables; real-time monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The heart is a vital component of human health. The development of wearables and their sensors enables the possibility of easy-to-use real-time monitoring. The goal of this study is to improve an IoT monitoring system by enabling real-time heart rate monitoring and analysis, and to assess the use of PPG sensors in smart wearables compared to other clinically tested heart rate sensors. The PPG sensor is used to record the user’s heart rate data physically. The measurements are then sent to the application for pre-processing. The application can then transmit the pre-processed measurements to the cloud server for monitoring or further analysis, i.e. to assess the health of the user’s heart. A comparison with measurements collected by a BCG sensor is carried out in this paper. While neither is a standard for heart rate measurement, the findings of the evaluation show that the PPG sensor achieves quite similar input data and assessment results during awake stages. The Fitbit sensor tested often underestimates, sometimes responds with delay, or fails to detect a sudden increase in heart rate during sleep.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_17-Heart_Rate_Monitoring_with_Smart_Wearables.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Project based Learning Application Experience in Engineering Courses: Database Case in the Professional Career of Systems Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110316</link>
        <id>10.14569/IJACSA.2020.0110316</id>
        <doi>10.14569/IJACSA.2020.0110316</doi>
        <lastModDate>2020-03-30T10:40:37.0200000+00:00</lastModDate>
        
        <creator>C&#233;sar Baluarte-Araya</creator>
        
        <subject>Project based learning; formative research; competences; skills; formative research report</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In many universities, formative research is applied in courses as a basic element of research that is fundamental to the professional training of every student; it strengthens and increases knowledge of certain areas and helps students achieve skills, competences, abilities and attitudes. The present work describes the formal application of the Project Based Learning (ABPr) methodology in the Database course (BD) at the Professional School of Systems Engineering (EPIS) of the National University of San Agust&#237;n (UNSA), Arequipa, Peru, accommodating the nature of the course, in which theory is taught by one teacher and laboratory practice by another. The goal is to apply an active teaching strategy to an engineering training course. The methodology used is Project Based Learning applied to a formative research project on a real problem in an organization, developed by each team during the semester, with deliverables evaluated on a grade scale and a formative research report assessed through a rubric; the input and feedback that the teacher provides on them serve to improve the training of the student. The results obtained show that the objectives in the training of students were achieved, as well as the development of the competencies related to the course. In addition, the application of ABPr gives good results for courses of an engineering career, serving as feedback for the continuous improvement of this course and as experience for the implementation of ABPr in other curriculum courses. We conclude that formative research, as a pillar of a basic level of research initiation, runs in a cross-cutting way through the curriculum courses, and that active teaching strategies, properly planned and applied to each reality, make it possible to achieve the desired results: increased knowledge of the area and strengthened skills, abilities and attitudes, as in the present case.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_16-Project_Based_Learning_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Problem based Learning with Information and Communications Technology Support: An Experience in the Teaching-Learning of Matrix Algebra</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110315</link>
        <id>10.14569/IJACSA.2020.0110315</id>
        <doi>10.14569/IJACSA.2020.0110315</doi>
        <lastModDate>2020-03-30T10:40:36.9870000+00:00</lastModDate>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>Olha Sharhorodska</creator>
        
        <creator>Doris Tupacyupanqui-Jaen</creator>
        
        <creator>Victor Corneko-Aparicio</creator>
        
        <subject>Problem-based learning; cooperative techniques; constructionism; matrix algebra</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Students and teachers face problems in the teaching-learning processes of matrix algebra, due to the level of abstraction required, the difficulty of calculation and the way in which the contents are presented. Problem-based Learning (PBL) arises as a solution to this problem, as it contextualizes the contents in everyday life, allows students to actively build that knowledge and contributes to the development of skills. The proposal describes a didactic sequence based on PBL, which uses cooperative techniques and MATLAB as instruments that facilitate the resolution of problems close to the student experience. The features of the Moodle platform are used to support the face-to-face educational process. The perception of students in relation to the activity shows that 83% believe that it contributed to the understanding of the topics covered and 79% think that it allowed them to develop their creativity and capacity for expression.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_15-Problem_based_Learning_with_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Control Strategies of Electric Vehicles Charging Station based on Grid Tied PV/Battery System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110314</link>
        <id>10.14569/IJACSA.2020.0110314</id>
        <doi>10.14569/IJACSA.2020.0110314</doi>
        <lastModDate>2020-03-30T10:40:36.9730000+00:00</lastModDate>
        
        <creator>Abdelilah Hassoune</creator>
        
        <creator>Mohamed Khafallah</creator>
        
        <creator>Abdelouahed Mesbahi</creator>
        
        <creator>Tarik Bouragba</creator>
        
        <subject>Electric vehicle charging station; solar energy; battery storage buffer; electrical grid; charging station management algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this paper, improved control strategies for a smart topology of an EV charging station (CS) based on a grid-tied PV/battery system are designed and analyzed. The proposed strategies consist of three operating modes: Pv2B, charging a battery storage buffer (BSB) of the CS from solar energy; V2G, discharging an EV battery via the grid; and Pv2G, injecting the power produced by the PV system into the energy distribution system. The BSB is connected to the PV system through a single-ended primary inductor converter, while the V2G operating mode is emulated by an EV lithium-ion battery tied to the grid via a high-frequency full-bridge inverter and a bidirectional dc/dc converter. The aim of this work is to improve the energy efficiency of the CS by using a hybrid energy system. Simulation studies are performed in Matlab/Simulink in order to operate the proposed solar CS with multiple control strategies for each case scenario based on a CS management algorithm (CSMA). To provide credible findings, a low-power prototype is developed in order to validate the proposed CSMA and its associated controls.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_14-Improved_Control_Strategies_of_Electric_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Study of Smart Phone Messaging for Elderly and Low-literate Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110313</link>
        <id>10.14569/IJACSA.2020.0110313</id>
        <doi>10.14569/IJACSA.2020.0110313</doi>
        <lastModDate>2020-03-30T10:40:36.9570000+00:00</lastModDate>
        
        <creator>Rajibul Anam</creator>
        
        <creator>Abdelouahab Abid</creator>
        
        <subject>Smartphone interface; smartphone messaging; visual color; adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Smartphones are electronic devices that people can carry around and on which they can install compatible third-party apps to extend their functionality. Smartphones were mainly developed for calling and messaging purposes, and all application interfaces are designed for current trends. Therefore, senior citizens and low-literate users face difficulties using smartphones due to the perceived complexity of the interface and functionality. This paper analyzes senior citizen and low-literate users&#39; requirements for reading and writing messages from the “memory load”, “navigation consistency”, “consistency and standard”, and “touch screen finger-based tapping” perspectives. A framework based on “visual representation”, “navigation” and “miss click avoidance” is then developed, and a comparison between the proposed application and other messaging applications is provided. This research work focuses on senior citizen and low-literate users to improve their user experience of the smartphone messaging application.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_13-Usability_Study_of_Smart_Phone_Messaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image-based Individual Cow Recognition using Body Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110311</link>
        <id>10.14569/IJACSA.2020.0110311</id>
        <doi>10.14569/IJACSA.2020.0110311</doi>
        <lastModDate>2020-03-30T10:40:36.9270000+00:00</lastModDate>
        
        <creator>Rotimi-Williams Bello</creator>
        
        <creator>Abdullah Zawawi Talib</creator>
        
        <creator>Ahmad Sufril Azlan Mohamed</creator>
        
        <creator>Daniel A. Olubummo</creator>
        
        <creator>Firstman Noah Otobo</creator>
        
        <subject>Cow; body patterns; convolutional neural network; image; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The existence of illumination variation, non-rigid objects, occlusion, non-linear motion, and real-time implementation requirements has made tracking in computer vision a challenging task. In order to recognize individual cows and to mitigate these challenges, an image processing system is proposed that uses body pattern images of cows. This system accepts an input image, performs processing operations on it, and outputs results in the form of classification under certain categories. Technically, a convolutional neural network is modeled for the training and testing of each pattern image from 1000 acquired images of 10 species of cow, passing each image through a series of convolution layers with filters, pooling, fully connected layers and a softmax function to classify the pattern images with probabilistic values between 0 and 1. The performance of the proposed system was evaluated for each cow’s identification on both training and testing data, achieving accuracies of 92.59% and 89.95%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_11-Image_based_Individual_Cow_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Network Security: Case Study of Email System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110312</link>
        <id>10.14569/IJACSA.2020.0110312</id>
        <doi>10.14569/IJACSA.2020.0110312</doi>
        <lastModDate>2020-03-30T10:40:36.9270000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <creator>Hadeel Alnasser</creator>
        
        <subject>Network security; conceptual model; diagrammatic representation; email system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>We study operational security in computer network security, including infrastructure, internal processes, resources, information, and the physical environment. Current works on developing a security framework focus on a security ontology that contributes a common vocabulary, but such an approach does not assist in constructing a foundation for a holistic security methodology. We focus on defining the bounds and creating a representation of a security system by developing a diagrammatic representation (i.e., a model) as a means to describe computer network processes. The model, referred to as a thinging machine, is a first step toward developing a security strategy and plan. The general aim is to demonstrate that the representation of the security system plays a key role in making thinking visible through a conceptual description of the operational environment, the region in which active security operations are undertaken. We apply the proposed model to email security by conceptually describing a real email system.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_12-Modeling_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Rainfall Rate Estimation with Satellite based Microwave Radiometer Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110310</link>
        <id>10.14569/IJACSA.2020.0110310</id>
        <doi>10.14569/IJACSA.2020.0110310</doi>
        <lastModDate>2020-03-30T10:40:36.8930000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Rainfall rate estimation; Advanced Microwave Scanning Radiometer: AMSR; geometric relation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>A method for rainfall rate estimation with satellite-based microwave radiometer data is proposed. The method accounts for the geometric relationship between the observed ice particles and the microwave radiometer in the estimation of precipitation, and its validity is shown by comparison with ground-based precipitation radar data. For observations at high altitudes, such as of ice particles, the location of the observation point projected on the ground surface differs greatly from the location in the upper troposphere where the observed particles actually exist. This effect was insignificant when precipitation was small, because ice particles were often absent, but it was found to be large when precipitation was large. In other words, the proposed method is effective for Advanced Microwave Scanning Radiometer (AMSR) data in Houston, shown as an example of a highly developed convective rain cloud, whereas in the case of Kwajalein the effect is insignificant. In addition, the proposed method requires an assumption of ice particle height, which must be based on climatic values. Moreover, microwaves in the 89 GHz band, which are considered sensitive to ice particles, are not sensitive to ice particles alone, so it must be taken into account that they are also affected by the presence of non-ice particles.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_10-Method_for_Rainfall_Rate_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Proof of Concept for a Blockchain-based Smart Contract for the Automotive Industry in Mauritius</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110309</link>
        <id>10.14569/IJACSA.2020.0110309</id>
        <doi>10.14569/IJACSA.2020.0110309</doi>
        <lastModDate>2020-03-30T10:40:36.8800000+00:00</lastModDate>
        
        <creator>Keshav Luchoomun</creator>
        
        <creator>Sameerchamd Pudaruth</creator>
        
        <creator>Somveer Kishnah</creator>
        
        <subject>Blockchain; smart contract; hyperledger fabric; vehicles; Mauritius</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In recent years, there has been a growth of interest in blockchain technology across a wide range of industries. Blockchain technology has the potential to transform the way businesses operate, especially in the automotive industry. The distributed infrastructure and the secure nature of blockchain technology encourage trust among businesses and consumers. In Mauritius, the automotive industry is facing challenges such as tampering of vehicle information, falsification of mileage and poor traceability, which lead to a lack of trust from customers. In this work, a proof of concept (POC) for a blockchain-based smart contract application has been proposed and implemented to mitigate these challenges. The automotive use cases (a) vehicle importation and (b) vehicle sale and registration have been implemented on the IBM blockchain platform, which provides a secure and transparent way to invoke transactions. Finally, the performance and benefits of the Hyperledger Fabric vehicle application have been assessed based on transparency, security, traceability and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_9-Implementation_of_a_Proof_of_Concept.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology Driven ESCO LOD Quality Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110308</link>
        <id>10.14569/IJACSA.2020.0110308</id>
        <doi>10.14569/IJACSA.2020.0110308</doi>
        <lastModDate>2020-03-30T10:40:36.8630000+00:00</lastModDate>
        
        <creator>Adham Kahlawi</creator>
        
        <subject>ESCO; linked open data; ontology; semantic web; data quality; SPARQL; OWL; metadata</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>The labor market is a complex system that is difficult to manage. To overcome this challenge, the European Union has launched the ESCO project, a language that aims to describe this labor market. To support the spread of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met: the LOD must be feasible and of high quality, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the LOD of ESCO, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries, which provides solutions to improve its quality via a set of rules built in first-order logic. This process was conducted based on a newly proposed ESCO ontology.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_8-An_Ontology_Driven_ESCO_LOD_Quality_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Solution to the Hyper Complex, Cross Domain Reality of Artificial Intelligence: The Hierarchy of AI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110307</link>
        <id>10.14569/IJACSA.2020.0110307</id>
        <doi>10.14569/IJACSA.2020.0110307</doi>
        <lastModDate>2020-03-30T10:40:36.8330000+00:00</lastModDate>
        
        <creator>Andrew Kear</creator>
        
        <creator>Sasha L. Folkes</creator>
        
        <subject>Artificial intelligence; classification; ground truth value; Hierarchy of AI; Model of AI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Artificial Intelligence (AI) is an umbrella term used to describe machine-based forms of learning. This can encapsulate anything from Siri, Apple’s smartphone-based assistant, to Tesla’s autonomous vehicles (self-driving cars). At present, there are no set criteria to classify AI, the implications of which include public uncertainty, corporate scepticism, diminished confidence, insufficient funding and limited progress. Substantial challenges currently exist with AI, such as combinatorially large search spaces, prediction errors against ground truth values, and the use of quantum error correction strategies. These are discussed in addition to fundamental data issues across collection, sample error and quality. The concept of cross realms and domains used to inform AI is considered. Furthermore, there is the issue of the confusing range of current AI labels. This paper aims to provide a more consistent form of classification, to be used by institutions and organisations alike as they endeavour to make AI part of their practice. In turn, this seeks to promote transparency and increase trust. This has been done through primary research, including a panel of data scientists and experts in the field, and through a literature review of existing research. The authors propose a model solution in that of the Hierarchy of AI.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_7-A_Solution_to_the_Hyper_Complex.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECG and EEG Pattern Classifications and Dimensionality Reduction with Laplacian Eigenmaps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110306</link>
        <id>10.14569/IJACSA.2020.0110306</id>
        <doi>10.14569/IJACSA.2020.0110306</doi>
        <lastModDate>2020-03-30T10:40:36.8170000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <creator>Liviu Goras</creator>
        
        <subject>Laplacian Eigenmaps; dimensionality reduction; biosignals; electrocardiographic signal (ECG); electroencephalogram (EEG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>In this paper, we investigate the effect of dimensionality reduction using Laplacian Eigenmaps (LE) on several classes of electroencephalogram (EEG) and electrocardiographic (ECG) signals. Classification results based on a boosting method for EEG signals exhibiting the P300 wave and k-nearest neighbour for ECG signals belonging to 8 classes are computed and compared. For EEG signals, the difference between the classification rate in the original space and in the space reduced with LE is relatively small, only several percent (at most 10% for the 3-dimensional space), the original EEG signals belonging to a 128-dimensional space. This means that, for classification purposes, the dimensionality of EEG signals can be reduced without significantly affecting the global and local arrangement of the data. Moreover, for EEG signals collected at high frequencies, a first stage of data preprocessing can be done by reducing the dimensionality. For ECG signals, for segmentation with and without centering of the R wave, there is a slight decrease in the classification rate at small data sizes. It is found that for an initial dimensionality of 301, the size of the signals can be reduced to 30 without significantly affecting the classification rate. Below this dimension the classification rate decreases, but the results remain very good even for very small dimensions, such as 3. The classification results in the reduced space are remarkably close to those obtained in the initial spaces, even for small dimensions.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_6-ECG_and_EEG_Pattern_Classifications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Malignant and Benign Lung Nodule and Prediction of Image Label Class using Multi-Deep Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110305</link>
        <id>10.14569/IJACSA.2020.0110305</id>
        <doi>10.14569/IJACSA.2020.0110305</doi>
        <lastModDate>2020-03-30T10:40:36.8000000+00:00</lastModDate>
        
        <creator>Muahammad Bilal Zia</creator>
        
        <creator>Zhao Juan Juan</creator>
        
        <creator>Xujuan Zhou</creator>
        
        <creator>Ning Xiao</creator>
        
        <creator>Jiawen Wang</creator>
        
        <creator>Ammad Khan</creator>
        
        <subject>Lung nodule classification; dilated blocks; dual DCNNs; multi-task learning; multi-deep model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Lung cancer has been listed as one of the world’s leading causes of death. Early diagnosis of lung nodules has great significance for the prevention of lung cancer. Despite major improvements in modern diagnosis and treatment, the five-year survival rate is only 18%. Before diagnosis, the classification of lung nodules is one important step, in particular because automatic classification may provide doctors with a valuable opinion. Although deep learning has shown improvement in image classification over traditional approaches that focus on handcrafted features, classifying lung nodules remains challenging due to the large number of intra-class variational images and inter-class similar images arising from various imaging modalities. In this paper, a multi-deep model (MD model) is proposed for lung nodule classification as well as to predict the image label class. This model is based on three phases: multi-scale dilated convolutional blocks (MsDc), dual deep convolutional neural networks (DCNN A/B), and a multi-task learning component (MTLc). Initially, multi-scale features are derived through the MsDc process by using different dilation rates to enlarge the receptive area. This technique is applied to a pair of images. Such images are accepted by the dual DCNNs, and both models can learn mutually from each other in order to enhance accuracy. To further improve the performance of the proposed model, the output from both DCNNs is split into two portions. The multi-task learning part is used to evaluate whether the input image pair is in the same group or not and also helps to classify the images as benign or malignant. Furthermore, it can provide positive guidance if there is an error. Both the intra-class and inter-class (variation and similarity) properties of the dataset itself increase the efficiency beyond that of a single DCNN. The effectiveness of the proposed technique is tested empirically using the popular Lung Image Database Consortium (LIDC) dataset. The results show that the strategy is highly efficient, with a sensitivity of 90.67%, specificity of 90.80%, and accuracy of 90.73%.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_5-Classification_of_Malignant_and_Benign_Lung_Nodule.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Sensor Node Deployment Strategy by using Graph Structure based on Estimation of Communication Connectivity and Movement Path</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110304</link>
        <id>10.14569/IJACSA.2020.0110304</id>
        <doi>10.14569/IJACSA.2020.0110304</doi>
        <lastModDate>2020-03-30T10:40:36.7830000+00:00</lastModDate>
        
        <creator>Koji Kawabata</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Wireless sensor networks; deployment strategy; communication connectivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>We propose a multiple-mobile sensor node (MSN) deployment strategy that considers wireless communication quality and operation time of underground wireless sensor networks. After an underground disaster, it is difficult to perform a rescue operation because the internal situation cannot be confirmed. Hence, gathering information using a teleoperated robot has been widely discussed. However, wireless communication is unstable and the corresponding wireless infrastructure to operate the teleoperated robot is unavailable underground. Therefore, we studied the disaster information-gathering support system using wireless sensor networks and a rescue robot. In this study, the movement path information of the teleoperated robot is fed to MSNs in a graph structure. MSNs are deployed in the underground environment by adding an evaluation of communication quality and operation status to a given graph structure. The simulation was evaluated in an assumed underground environment. The results confirmed that the wireless communication quality between each MSN was maintained and energy consumption was balanced during the deployment.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_4-Mobile_Sensor_Node_Deployment_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The New High-Performance Face Tracking System based on Detection-Tracking and Tracklet-Tracklet Association in Semi-Online Mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110303</link>
        <id>10.14569/IJACSA.2020.0110303</id>
        <doi>10.14569/IJACSA.2020.0110303</doi>
        <lastModDate>2020-03-30T10:40:36.7530000+00:00</lastModDate>
        
        <creator>Ngoc Q. Ly</creator>
        
        <creator>Tan T. Nguyen</creator>
        
        <creator>Tai C. Vong</creator>
        
        <creator>Cuong V. Than</creator>
        
        <subject>Face tracking; face re-identification; detection-tracking; tracklet-tracklet association</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Despite recent advances in multiple object tracking and pedestrian tracking, multiple-face tracking remains a challenging problem. In this work, the authors propose a framework to solve the problem in a semi-online manner (the framework runs at real-time speed with a two-second delay). The proposed framework consists of two stages: detection-tracking and tracklet-tracklet association. The detection-tracking stage creates short tracklets; the tracklet-tracklet association stage merges those tracklets and assigns identifications to them. To the best of the authors’ knowledge, the authors make contributions in three aspects: 1) the authors adopt a principle often used in online approaches as a part of the framework and introduce a tracklet-tracklet association stage to leverage future information; 2) the authors propose a motion affinity metric to compare trajectories of two tracklets; 3) the authors propose an efficient way to employ deep features in comparing tracklets of faces. The authors achieved 78.7% precision plot AUC and 68.1% success plot AUC on the MobiFace dataset (test set). On the OTB dataset, the authors achieved 78.2% and 72.5% precision plot AUC, and 51.9% and 43.9% success plot AUC, on the normal and difficult face subsets, respectively. The average speed was maintained at around 44 FPS. In comparison to the state-of-the-art methods, the proposed framework maintains high rankings in the top 3 on both datasets while keeping the processing speed higher than the other top-3 methods.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_3-The_New_High_Performance_Face_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Cryptocurrency Price Prediction by Exploiting IoT Concept and Beyond: Cloud Computing, Data Parallelism and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110302</link>
        <id>10.14569/IJACSA.2020.0110302</id>
        <doi>10.14569/IJACSA.2020.0110302</doi>
        <lastModDate>2020-03-30T10:40:36.7370000+00:00</lastModDate>
        
        <creator>Ajith Premarathne</creator>
        
        <creator>Malka N. Halgamuge</creator>
        
        <creator>R. Samarakody</creator>
        
        <creator>Ampalavanapillai Nirmalathas</creator>
        
        <subject>Internet of things; IoT; data parallelism; deep learning; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>Cryptocurrency has recently attracted considerable attention in the fields of economics, cryptography, and computer science, since it is an encrypted digital currency, a peer-to-peer virtual currency produced using code, serving as a medium of exchange much like real cash. This study mainly focuses on combining Deep Learning with data parallelism and a cloud computing machine learning engine as a “hybrid architecture” to predict new cryptocurrency prices from historical cryptocurrency data. The study exploited 266,776 cryptocurrency price values from the pilot experiment, and a Deep Learning algorithm was used for the price prediction. Four hybrid architecture models, namely (i) standalone PC, (ii) cloud computing without data parallelism (GPU-1), (iii) cloud computing with data parallelism (GPU-4), and (iv) cloud computing with data parallelism (GPU-8), were introduced and utilized for the analysis. The performance of each model is evaluated using different performance evaluation parameters. Then, the efficiency of each model was compared using different batch sizes. Experimental results reveal that cloud computing technology, by performing parallel computing, can reduce the computation time of the Deep Learning-based cryptocurrency price prediction model by up to 90%, a benefit that extends to many other IoT applications such as character recognition, the biomedical field, industrial automation, and natural disaster prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_2-Real_Time_Cryptocurrency_Price_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Apple Carving Algorithm to Approximate Traveling Salesman Problem from Compact Triangulation of Planar Point Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110301</link>
        <id>10.14569/IJACSA.2020.0110301</id>
        <doi>10.14569/IJACSA.2020.0110301</doi>
        <lastModDate>2020-03-30T10:40:36.6600000+00:00</lastModDate>
        
        <creator>Marko Dodig</creator>
        
        <creator>Milton Smith</creator>
        
        <subject>TSP; heuristics; combinatorial optimization; computational geometry; compact triangulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(3), 2020</description>
        <description>We propose a modified version of the Convex Hull algorithm for approximating minimum-length Hamiltonian cycle (TSP) in planar point sets. Starting from a full compact triangulation of a point set, our heuristic “carves out” candidate triangles with the minimal Triangle Inequality Measure until all points lie on the outer perimeter of the remaining partial triangulation. The initial candidate list consists of triangles on the convex hull of a given planar point set; the list is updated as triangles are eliminated and new triangles are thereby exposed. We show that the time and space complexity of the “apple carving” algorithm are O(n2) and O(n), respectively. We test our algorithm using a well-known problem subset and demonstrate that our proposed algorithm outperforms nearly all other TSP tour construction heuristics.</description>
        <description>http://thesai.org/Downloads/Volume11No3/Paper_1-Apple_Carving_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opportunistic use of Spectral Holes in Karachi using Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110297</link>
        <id>10.14569/IJACSA.2020.0110297</id>
        <doi>10.14569/IJACSA.2020.0110297</doi>
        <lastModDate>2020-03-01T14:53:21.9270000+00:00</lastModDate>
        
        <creator>Aamir Zeb Shaikh</creator>
        
        <creator>Shabbar Naqvi</creator>
        
        <creator>Minaal Ali</creator>
        
        <creator>Yamna Iqbal</creator>
        
        <subject>Cognitive Radio; Spectral hole; Deep Learning; Convolutional neural network (CNN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Wireless services appearing in the next-generation wireless standard, i.e. 6G, include the Internet of Everything (IoE), holographic communications, smart transportation and smart cities; these require an exponential rise in bandwidth in addition to other requirements. The current static spectrum allocation policy does not allow any new entrant to exploit the already grid-locked Radio Frequency (RF) spectrum. Hence, the quest for larger bandwidth must be fulfilled through other technologies. These include exploiting the sub-Terahertz band, Visible Light Communication, and the Cognitive Radio scheme for exploiting RF bands in an opportunistic fashion. Cognitive Radio is one of the engines for exploiting the RF spectrum in a secondary style, and it can use artificial-intelligence-driven algorithms to complete the task. Several intelligent algorithms can be used for better forecasting of spectral holes. The Convolutional Neural Network (CNN) is a Deep Learning algorithm that can be used to predict the presence of spectral holes that can be opportunistically exploited for efficient utilization of the RF spectrum in a secondary fashion. This paper investigates the performance of a CNN for the metropolitan city of Karachi, Pakistan, so that users can be provided with uninterrupted access to the network even during busy hours. The dataset for the proposed setup was collected for the 1805 MHz frequency band through NI 2901 Universal Software Radio Peripheral (USRP) devices. The root mean square error (RMSE) for the results predicted using the CNN is 81.02 at an epoch of 200 and a mini-batch loss of 3281.8. Based on the predicted results, it was concluded that a CNN can be useful for investigating the possible opportunistic usage of the RF spectrum; however, further investigation is required with different datasets.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_97-Opportunistic_Use_of_Spectral_Holes_in_Karachi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithmic Approach for Maritime Transportation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110296</link>
        <id>10.14569/IJACSA.2020.0110296</id>
        <doi>10.14569/IJACSA.2020.0110296</doi>
        <lastModDate>2020-03-01T14:53:21.8930000+00:00</lastModDate>
        
        <creator>Peri Pinakpani</creator>
        
        <creator>Aruna Polisetty</creator>
        
        <creator>G Bhaskar N Rao</creator>
        
        <creator>Harrison Sunil D</creator>
        
        <creator>B Mohan Kumar</creator>
        
        <creator>Dandamudi Deepthi</creator>
        
        <creator>Aneesh Sidhireddy</creator>
        
        <subject>Maritime transport; multimodal logistics; game theory; greedy algorithm; freight management; intermodal transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Starting from the 3rd millennium BC, Indian maritime trade has augmented the lives of common people and businesses alike. This study finds that India can leverage its 7,500-long coastline and derive holistic development in terms of interconnected ports with hinterland connectivity, realizing lower expenditure coupled with reduced carbon emissions. This research analyzed a decade of cargo data from origination to destination and found that around 82.95 per cent (953 MMTPA in 2017–18) of road-based consignments in India comprised Fertilizers, Hydrocarbons, Coal, Lubricants and Oil. Essentially, a quantum of this, i.e., 78.39 per cent of MMTPA cargo consignments (State-Owned Hydrocarbons), traverses Indian roads. The study drew parameters of this transportation paradigm and modeled them using Artificial Intelligence to depict a monumental opportunity to rationalize costs, improve efficiency and reduce carbon emissions, strengthening the argument for the employment of Multimodal Logistics in the Maritime Sector. Subsequent to model derivation, the same set of parameters is plotted as an efficient transit map of interstate transit lines connecting 16 major hubs which now handle bulk cargo shipped by all modes of transport. For the pollution segment, a collaborative game-theoretic approach, i.e., the Shapley value, is proposed for improved decision making. This study presents data-driven and compelling research evidence to portray the benefits of collaboration between firms in terms of time and cost. The study also proposes the need for, and a method to, improve hinterland connectivity using a scalable greedy algorithm, which is tested with real-time data on Coal and Bulk Cargo. As a scientific value addition, this study presents a mathematical model that can be implemented seamlessly across geographies using Information Communication Technology.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_96-An_Algorithmic_Approach_for_Maritime_Transportation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Intelligent Tutorial Systems in Computer and Web based Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110295</link>
        <id>10.14569/IJACSA.2020.0110295</id>
        <doi>10.14569/IJACSA.2020.0110295</doi>
        <lastModDate>2020-02-29T14:32:41.2970000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Elisa Castaneda</creator>
        
        <creator>Jesus Zuniga-Cueva</creator>
        
        <creator>Maria Rivera-Chavez</creator>
        
        <creator>Francisco Fialho</creator>
        
        <subject>Intelligent learning systems; computer assisted learning environments; web based education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>ITS (Intelligent Tutoring Systems) are integrated and complex systems, designed and developed using approaches and methods of artificial intelligence (AI), for resolving the problems and requirements of teaching/learning activities in the education and training of students and the workforce, based on computers and emerging web-based resources. These systems can establish the level of student knowledge and the learning strategies used to improve that knowledge, supporting the detection and correction of student misconceptions. Their purpose is to contribute to the process of teaching and learning in a given area of knowledge, respecting the individuality of the student. In this paper, a review of intelligent tutorial systems (ITS) is presented from the perspective of their application and usability in modern learning concepts. The methodology used was a bibliographical review of classic works from the printed and digital literature on ITS and e-Learning systems, as well as searches in diverse databases of theses and works in universities and digital repositories. The main weakness of the research lies in the fact that the search was limited to documents published in English, Spanish and Portuguese.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_95-A_Review_of_Intelligent_Tutorial_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Accuracy between Long Short-Term Memory-Deep Learning and Multinomial Logistic Regression-Machine Learning in Sentiment Analysis on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110294</link>
        <id>10.14569/IJACSA.2020.0110294</id>
        <doi>10.14569/IJACSA.2020.0110294</doi>
        <lastModDate>2020-02-29T14:32:41.2830000+00:00</lastModDate>
        
        <creator>Aries Muslim</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Rina Refianti</creator>
        
        <creator>Cut Maisyarah Karyati</creator>
        
        <creator>Galang Setiawan</creator>
        
        <subject>Sentiment analysis; deep learning; machine learning; Long Short-Term Memory (LSTM); Multinomial Logistic Regression (MLR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper presents sentiment analysis research on Twitter. In this research, data with the keyword ‘Russian Hacking’, concerning the 2016 US presidential election, was collected from Twitter as a dataset using the Twitter API with the Python programming language. The first process in sentiment analysis is the cleaning phase of the tweet data; the Lexicon-based method is then used to produce positive, negative, and neutral sentiment values for each tweet. Data that has been cleaned and classified is processed with the deep learning method using the Long Short-Term Memory (LSTM) algorithm and the machine learning method using the Multinomial Logistic Regression (MLR) algorithm. The accuracy of these two classification methods is calculated using the confusion-matrix method. The accuracy obtained from the LSTM classification method is 93% and from the MLR classification method 92%. Thus, it can be concluded that LSTM is better at classifying sentiments than MLR.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_94-Comparison_of_Accuracy_between_Long_Short_Term_Memory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technique for Panorama-Creation using Multiple Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110293</link>
        <id>10.14569/IJACSA.2020.0110293</id>
        <doi>10.14569/IJACSA.2020.0110293</doi>
        <lastModDate>2020-02-29T14:32:41.2670000+00:00</lastModDate>
        
        <creator>Moushumi Zaman Bonny</creator>
        
        <creator>Mohammad Shorif Uddin</creator>
        
        <subject>Panorama; image stitching; correlation; multiple images; image features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Image stitching, a process of integrating multiple images to create a panoramic image with all contents fitted into one frame, finds widespread applications in medical imaging, high-resolution digital maps, and satellite and video imaging. This paper proposes a framework to develop a panoramic image from multiple images. The framework is an automatic process that takes multiple images, checks the correlation of the sequential images, removes any overlapping area if it exists, and creates the panorama. We have carried out experiments using different image sets consisting of multiple images with and without overlapping, and obtained satisfactory results.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_93-A_Technique_for_Panorama_Creation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Map Reduce based REmoving Dependency on K and Initial Centroid Selection MR-REDIC Algorithm for clustering of Mixed Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110292</link>
        <id>10.14569/IJACSA.2020.0110292</id>
        <doi>10.14569/IJACSA.2020.0110292</doi>
        <lastModDate>2020-02-29T14:32:41.2500000+00:00</lastModDate>
        
        <creator>Khyati R. Nirmal</creator>
        
        <creator>K.V.V Satyanarayana</creator>
        
        <subject>Machine learning; clustering; similarity measurement; initial centroid selection; number of clusters; map reduce paradigm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In machine learning, clustering is recognized as a widely used task to find the hidden structure of data. While handling massive amounts of data, traditional clustering algorithms degrade in performance due to the size and mixed type of attributes. The REmoving Dependency on K and Initial Centroid Selection (REDIC) algorithm is designed to handle mixed data with frequency-based dissimilarity measurement for categorical attributes. The selection of initial centroids and a prior decision on the number of clusters improve the efficiency of the REDIC algorithm. To deal with large-scale data, the REDIC algorithm is migrated to the Map Reduce paradigm, and the Map Reduce based REDIC (MR-REDIC) algorithm is proposed. The large amount of data is divided into small chunks, and a parallel approach is used to reduce the execution time of the algorithm. The proposed algorithm inherits the features of the REDIC algorithm to cluster the data. The algorithm is implemented in a Hadoop environment with three different configurations and evaluated using five benchmark datasets. Experimental results show that the speed-up value gradually shifts towards linear as the number of data nodes increases from one to four. The algorithm also achieves a value close to linear for the scale-up parameter, while maintaining accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_92-Map_Reduce_based_REmoving_Dependency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge based Authentication Techniques and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110291</link>
        <id>10.14569/IJACSA.2020.0110291</id>
        <doi>10.14569/IJACSA.2020.0110291</doi>
        <lastModDate>2020-02-29T14:32:41.2200000+00:00</lastModDate>
        
        <creator>Hosam Alhakami</creator>
        
        <creator>Shouq Alhrbi</creator>
        
        <subject>Knowledge-based authentication; artifact-based authentication; biometric-based authentication; usability; vulnerabilities; memorability; performance; cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Knowledge-based Authentication (KBA) is an authentication approach that verifies the user’s identity when accessing services such as financial websites. KBA requests specific information to prove the personal identity of the owner. This paper discusses the challenges faced by KBA techniques. Memorability is the main obstacle in KBA, since users tend to use simple passwords or reuse the same password across various services, a practice that causes problems and issues with compliance with security policies. Furthermore, the technique of mixing username/password is considered another important challenge of KBA due to its recall-based authentication. The discussion includes a comparative analysis of KBA techniques based on trade-off criteria to support decision making. This study’s results can support the process of recommending a suitable KBA technique for organizations.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_91-Knowledge_based_Authentication_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lexical Variation and Sentiment Analysis of Roman Urdu Sentences with Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110290</link>
        <id>10.14569/IJACSA.2020.0110290</id>
        <doi>10.14569/IJACSA.2020.0110290</doi>
        <lastModDate>2020-02-29T14:32:41.2030000+00:00</lastModDate>
        
        <creator>Muhammad Arslan Manzoor</creator>
        
        <creator>Saqib Mamoon</creator>
        
        <creator>Song Kei Tao</creator>
        
        <creator>Ali Zakir</creator>
        
        <creator>Muhammad Adil</creator>
        
        <creator>Jianfeng Lu</creator>
        
        <subject>Sentiment analysis; Self-Attention Bidirectional LSTM (SA-BiLSTM); Roman Urdu language; review classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Sentiment analysis is the computational study of reviews, emotions, and sentiments expressed in text. In the past several years, sentiment analysis has attracted much attention from industry and academia, and deep neural networks have achieved significant results in the field. Current methods mainly focus on the English language, but for minority languages such as Roman Urdu, which has more complex syntax and numerous lexical variations, little research has been carried out. In this paper, for sentiment analysis of Roman Urdu, the novel “Self-Attention Bidirectional LSTM (SA-BiLSTM)” network is proposed to deal with the sentence structure and inconsistent manner of text representation. This network addresses the limitation of the unidirectional nature of the conventional architecture. In SA-BiLSTM, Self-Attention takes charge of the complex formation by correlating the whole sentence, and BiLSTM extracts context representations to tackle the lexical variation of attended embeddings in the preceding and succeeding directions. In addition, to measure and compare the performance of the SA-BiLSTM model, we preprocessed and normalized the Roman Urdu sentences. Due to its efficient design, SA-BiLSTM uses fewer computational resources and yields high accuracies of 68.4% and 69.3% on the preprocessed and normalized datasets, respectively, indicating that SA-BiLSTM achieves better efficiency than other state-of-the-art deep architectures.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_90-Lexical_Variation_and_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking System for Ordinal Longevity Risk Factors using Proportional-Odds Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110289</link>
        <id>10.14569/IJACSA.2020.0110289</id>
        <doi>10.14569/IJACSA.2020.0110289</doi>
        <lastModDate>2020-02-29T14:32:41.1900000+00:00</lastModDate>
        
        <creator>Nur Haidar Hanafi</creator>
        
        <creator>Puteri Nor Ellyza Nohuddin</creator>
        
        <subject>Longevity risk; ordinal risk factors; risk ranking; proportional-odds ratios; effect analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Longevity improvements have traditionally been analysed and extrapolated for future actuarial projections of longevity risk using a range of statistical methods with different combinations of statistical data types. These methods have shown great performance in explaining the trend movements of the longevity rate. However, actuaries believe that knowing the trend movements is not enough, especially for controlling the impact of longevity risk. Assessing the effects of each level of the risk factors, especially ordinal risk factors, on improvements in the longevity rate would provide significant additional knowledge beyond the trend movements. Therefore, this study was conducted to determine the potential of Proportional-Odds Logistic Regression for ranking the levels of ordinal risk factors based on their effects on longevity improvements. Based on the results, this method successfully reordered the levels of each risk factor according to their effects on improving the longevity rate. Hence, a more meaningful ranking system has been developed based on these newly ordered risk factors. This new ranking system will help improve the ability of any statistical method to project longevity risk when handling ordinal variables.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_89-Ranking_System_for_Ordinal_Longevity_Risk_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Missing Data Prediction using Correlation Genetic Algorithm and SVM Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110288</link>
        <id>10.14569/IJACSA.2020.0110288</id>
        <doi>10.14569/IJACSA.2020.0110288</doi>
        <lastModDate>2020-02-29T14:32:41.1730000+00:00</lastModDate>
        
        <creator>Aysh Alhroob</creator>
        
        <creator>Wael Alzyadat</creator>
        
        <creator>Ikhlas Almukahel</creator>
        
        <creator>Hassan Altarawneh</creator>
        
        <subject>Missing data; Support Vector Machine (SVM); genetic algorithm; hybrid algorithm; correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Data exists in large volumes in the modern world and becomes very useful when decoded correctly to inform decision making on real-world issues. However, when the data is conflicting, it becomes a daunting task to obtain information. Handling missing data has become a very important task in big data analysis. This paper considers the handling of missing data using the Support Vector Machine (SVM) based on a technique called Correlation-Genetic Algorithm-SVM. The data is subjected to the SVM classification technique after identifying the attributes’ correlation and applying the genetic algorithm. The application of the correlation gives a clear view of the attributes that are highly correlated within a particular dataset. The results indicate that, compared with the SVM alone, the proposed hybrid algorithm produces better outcomes when identification rate and accuracy are considered. The proposed approach is also compared with the mean identification rate obtained by applying a neural network; the results indicate a consistent accuracy, hence making it better.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_88-Missing_Data_Prediction_using_Correlation_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Beyond the Horizon: A Meticulous Analysis of Clinical Decision-Making Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110287</link>
        <id>10.14569/IJACSA.2020.0110287</id>
        <doi>10.14569/IJACSA.2020.0110287</doi>
        <lastModDate>2020-02-29T14:32:41.1570000+00:00</lastModDate>
        
        <creator>Bilal Saeed Raja</creator>
        
        <creator>Sohail Asghar</creator>
        
        <subject>Decision support system; clinical decision support; classification; clustering; association rule mining; multi-objective evolutionary optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Clinical advancements are one of the major outcomes of the technological phase shift of data sciences. The significance of information technology in the medical sciences, through the use of the Clinical Decision Support System (CDSS), has opened the spillways of exponentially improved predictive models. The latest classification algorithms are widely applied to clinical data for prognostic assessments. Medical experts have to make decisions that are crucial in nature, and if research can develop a mechanism that assists them in developing solid reasoning, inferring knowledge and clearly expressing their clinical decisions by justifying the assertions made, it will be a win-win situation. However, this field of science is still an unknown world for clinicians, despite the fact that the enormous amount of medical data cannot be exploited to its maximum without invoking technological support. The objective of this research is to introduce clinicians and policymakers of the medical domain to the renowned computer-based methodologies employed to construct a clinical decision support system. We expect that gaining technical insight into the medical domain will enable stakeholders to commission accurate and effective CDSSs for improved healthcare delivery.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_87-Beyond_the_Horizon_A_Meticulous_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Attribute-based Access Control for Modelling and Analysing Healthcare Professionals’ Security Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110286</link>
        <id>10.14569/IJACSA.2020.0110286</id>
        <doi>10.14569/IJACSA.2020.0110286</doi>
        <lastModDate>2020-02-29T14:32:41.1270000+00:00</lastModDate>
        
        <creator>Livinus Obiora Nweke</creator>
        
        <creator>Prosper Yeng</creator>
        
        <creator>Stephen D. Wolthusen</creator>
        
        <creator>Bian Yang</creator>
        
        <subject>Attribute-Based Access Control (ABAC); e-health systems; Personal Health Record (PHR); Electronic Health Record (EHR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In recent years, there has been an increase in the application of attribute-based access control (ABAC) in electronic health (e-health) systems. E-health systems are used to store a patient’s electronic version of medical records. These records are usually classified according to their usage, i.e., electronic health records (EHR) and personal health records (PHR). EHRs are electronic medical records held by healthcare providers, while PHRs are electronic medical records held by the patients themselves. Both EHRs and PHRs are critical assets that require an access control mechanism to regulate the manner in which they are accessed. ABAC has been demonstrated to be an efficient and effective approach for providing fine-grained access control to these critical assets. In this paper, we conduct a survey of the existing literature on the application of ABAC in e-health systems to understand the suitability of ABAC for e-health systems and the possibility of using ABAC access logs for observing, modelling and analysing the security practices of healthcare professionals. We categorize the existing works according to the application of ABAC in PHRs and EHRs. We then present a discussion of the lessons learned and outline future challenges. This can serve as a basis for selecting and further advancing the use of ABAC in e-health systems.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_86-Understanding_Attribute_based_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Visual Analytics System for Route Planning and Emergency Crowd Evacuation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110284</link>
        <id>10.14569/IJACSA.2020.0110284</id>
        <doi>10.14569/IJACSA.2020.0110284</doi>
        <lastModDate>2020-02-29T14:32:41.1100000+00:00</lastModDate>
        
        <creator>Saleh Basalamah</creator>
        
        <subject>Emergency evacuation; crowd management; visualization; intelligent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Emergency evacuation from crowded public spaces is of great importance to authorities around the world. Many systems have been developed by researchers to address the optimization of emergency evacuation routing and planning. This paper presents a visual analytics system for route planning and emergency crowd evacuation: a web-based visualization and simulation system that allows stakeholders to develop and assess route planning and evacuation procedures for emergency scenarios. The system takes advantage of the comprehensive OpenStreetMap spatial database to enable users to implement evacuation scenarios almost anywhere an OpenStreetMap dataset is available. Using multiple infrastructure-specific scenarios, such as adjusting the capacities of roads/pathways or closing them, the tool can identify bottleneck areas, thus allowing the assessment of potential improvements to the pedestrian and transportation network to relieve bottlenecks and improve evacuation time. As a case study, we use this system for the city of Makkah in Saudi Arabia and the city of Minnesota in the United States.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_84-A_Visual_Analytics_System_for_Route_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Video Surveillance Using VGG19 Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110285</link>
        <id>10.14569/IJACSA.2020.0110285</id>
        <doi>10.14569/IJACSA.2020.0110285</doi>
        <lastModDate>2020-02-29T14:32:41.1100000+00:00</lastModDate>
        
        <creator>Umair Muneer Butt</creator>
        
        <creator>Sukumar Letchmunan</creator>
        
        <creator>Fadratul Hafinaz Hassan</creator>
        
        <creator>Sultan Zia</creator>
        
        <creator>Anees Baqir</creator>
        
        <subject>Anomalous detection; surveillance video; VGG16; VGG19; ConvoNet; AlexNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The meteoric growth of data over the internet in recent years has created the challenge of mining and extracting useful patterns from large datasets. The growth of digital libraries and video databases makes it more challenging and important to extract useful information from raw data in order to prevent and detect crimes automatically. Street crime snatching and theft detection is a major challenge in video mining. The main target is to select features/objects which usually occur at the time of snatching. The number of moving targets affects the performance, speed and amount of motion in the anomalous video. The dataset used in this paper is Snatch 101; the videos in the dataset are divided into frames, which are labelled and segmented for training. We applied the VGG19 Convolutional Neural Network architecture, extracted the features of objects and compared them with the original video features and objects. The main contribution of our research is to create frames from the videos and then label the objects; the objects are selected from frames where anomalous activities can be detected. The proposed system has not been used before for crime prediction, and it is computationally efficient and effective compared to state-of-the-art systems, which it outperformed with 81% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_85-Detecting_Video_Surveillance_Using_VGG19.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KWA: A New Method of Calculation and Representation Accuracy for Speech Keyword Spotting in String Results</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110283</link>
        <id>10.14569/IJACSA.2020.0110283</id>
        <doi>10.14569/IJACSA.2020.0110283</doi>
        <lastModDate>2020-02-29T14:32:41.0930000+00:00</lastModDate>
        
        <creator>Nguyen Tuan Anh</creator>
        
        <creator>Hoang Thi Kim Dung</creator>
        
        <subject>Speech Keyword Spotting; KWS; keyword accuracy; Keyword Spotting Accuracy (KWA); speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This study proposes a new method to measure and represent accuracy for the Keyword Spotting (KWS) problem in non-aligned string results. Our approach, called Keyword Spotting Accuracy (KWA), improves on the Levenshtein distance algorithm, which is used to evaluate the accuracy of keywords in KWS by measuring the minimum distance between two strings. The main improvement is to show the status of each keyword during the training phase for predicted and true labels, indicating which words are correct and which ones need to be inserted, substituted or deleted when comparing the predicted labels with the true ones. In addition, a new method of presenting multiple keywords in the results is proposed to indicate the accuracy of each keyword. This method can display detailed results by keyword, from which we can obtain the accuracy, distribution, and balance of the keywords in the training dataset by actual speech variance, rather than by counting keywords in the true labels as usual.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_83-KWA_A_New_Method_of_Calculation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Dynamic Scalable IoT Computing Platform Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110282</link>
        <id>10.14569/IJACSA.2020.0110282</id>
        <doi>10.14569/IJACSA.2020.0110282</doi>
        <lastModDate>2020-02-29T14:32:41.0800000+00:00</lastModDate>
        
        <creator>Desoky Abdelqawy</creator>
        
        <creator>Amr Kamel</creator>
        
        <creator>Soha Makady</creator>
        
        <subject>Internet of Things (IoT); IoT platforms; IoT architecture; edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Internet of Things (IoT) has become an interesting topic among technology titans and different business groups. IoT platforms have been introduced to support the development of IoT applications and services. Such platforms connect the real and virtual worlds of objects, systems and people. Even though IoT platforms increasingly target various domains, they still suffer from various limitations. (1) Integrating hardware devices from different providers/vendors (hereafter referred to as heterogeneous hardware) is still a subtle task. (2) Providing a scalable solution without compromising end-user privacy (e.g., through the use of cloud platforms) is hard to achieve. (3) Handling IoT application reliability as well as platform reliability is still not fully supported. (4) The needs of safety-critical applications are still not covered by such platforms. A novel scalable dynamic computing platform architecture is proposed to address such limitations and provide simultaneous support for five non-functional requirements: scalability, reliability, privacy, timing for real-time systems, and safety. The proposed architecture uses a novel network topology design, virtualization and containerization concepts, along with a service-oriented architecture. We present and use a smart home case study to evaluate how traditional IoT platform architectures compare to the proposed architecture in terms of supporting the five non-functional requirements.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_82-Towards_a_Dynamic_Scalable_IoT_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Vision and Challenges of 6G Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110281</link>
        <id>10.14569/IJACSA.2020.0110281</id>
        <doi>10.14569/IJACSA.2020.0110281</doi>
        <lastModDate>2020-02-29T14:32:41.0470000+00:00</lastModDate>
        
        <creator>Faiza Nawaz</creator>
        
        <creator>Jawwad Ibrahim</creator>
        
        <creator>Muhammad Awais Ali</creator>
        
        <creator>Maida Junaid</creator>
        
        <creator>Sabila Kousar</creator>
        
        <creator>Tamseela Parveen</creator>
        
        <subject>Wireless communication; visions; 6G; cellular network; generations; digital technology; satellite networks; cell less architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>With the accelerated evolution of smart terminals and the rise of new applications, wireless data traffic has increased sharply, and current cellular networks (even 5G) cannot fully meet the rapidly emerging technical requirements. A new framework of wireless communication, the sixth generation (6G) system, supported by artificial intelligence, is anticipated to be deployed somewhere between 2027 and 2030. This paper presents a critical analysis of the vision of 6G wireless communication and its network structure; it also outlines a number of important technical challenges and some possible solutions related to 6G, including physical layer transmission procedures, network designs, and security methods.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_81-A_Review_of_Vision_and_Challenges_of_6G_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Tuning of Spade Card Antenna using Mean Average Loss of Backpropagation Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110280</link>
        <id>10.14569/IJACSA.2020.0110280</id>
        <doi>10.14569/IJACSA.2020.0110280</doi>
        <lastModDate>2020-02-29T14:32:41.0000000+00:00</lastModDate>
        
        <creator>Irfan Mujahidin</creator>
        
        <creator>Dwi Arman Prasetya</creator>
        
        <creator>Nachrowie</creator>
        
        <creator>Samuel Aji Sena</creator>
        
        <creator>Putri Surya Arinda</creator>
        
        <subject>Spade card antenna; mean average loss; neural network; performance tuning antenna</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Microstrip antennas take on different dimensions to achieve the desired performance, especially microstrip antennas with complex components and dimensions, with the following performance: a frequency range of 2.4 GHz to 3.6 GHz, a maximum gain of 5.83 dB and a minimum of 3 dB, and a maximum directivity of 6.22 and a minimum of 3.32. Consequently, there is a demand for a new, suitable design that provides adaptive matching to tune the antenna's operating frequency, which would otherwise require complex mathematical methods and simulation. This paper presents a novel design to tune the performance of a spade card microstrip antenna that can operate on single, dual or multiple bands and is able to produce circular or linear polarization, using a backpropagation neural network to obtain an optimum design and simplify the design process. As a result, after 20000 epochs the training loss is around 0.044 and the testing loss is around 0.058. The model has good performance despite using only a small number of training samples.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_80-Performance_Tuning_of_Spade_Card_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation of a Convolution Neural Network Architecture for Detecting Distracted Pedestrians</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110279</link>
        <id>10.14569/IJACSA.2020.0110279</id>
        <doi>10.14569/IJACSA.2020.0110279</doi>
        <lastModDate>2020-02-29T14:32:40.9700000+00:00</lastModDate>
        
        <creator>Igor Grishchenko</creator>
        
        <creator>El Sayed Mahmoud</creator>
        
        <subject>Convolutional neural networks; computer vision; cognitive load; distractive behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The risk of pedestrian accidents has increased due to the rise in distracted walking. Research in the autonomous vehicle industry aims to minimize this risk by enhancing route planning to produce safer routes. Detecting distracted pedestrians plays a significant role in identifying safer routes and hence decreases pedestrian accident risk. Thus, this research aims to investigate how to use convolutional neural networks to build an algorithm that significantly improves the accuracy of detecting distracted pedestrians based on gathered cues. In particular, this research involves the analysis of pedestrians' images to identify distracted pedestrians who are not paying attention when crossing the road. This work tested three different convolutional neural network architectures: Basic, Deep, and AlexNet. The performance of the three architectures was evaluated on two datasets. The first is a new training dataset called SCIT, created by this work from recorded videos of volunteers from the Sheridan College Institute of Technology. The second is a public dataset called PETA, which is made up of images with various resolutions. The ConvNet model with the Deep architecture outperformed the Basic and AlexNet architectures in detecting distracted pedestrians.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_79-An_Investigation_of_a_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Data Gathering Algorithms for Real-Time Processing in Internet of Things Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110278</link>
        <id>10.14569/IJACSA.2020.0110278</id>
        <doi>10.14569/IJACSA.2020.0110278</doi>
        <lastModDate>2020-02-29T14:32:40.9530000+00:00</lastModDate>
        
        <creator>Atheer A. Kadhim</creator>
        
        <creator>Norfaradilla Wahid</creator>
        
        <subject>Internet of Things (IoT); Wireless Sensor Network (WSN) and Data Gathering; Virtual Machine (VM); Virtualization Cloud (VC); Data Reduction (DR); Access point (AP); Mobile Ad hoc Network (MANET)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Today, Wireless Sensor Networks (WSNs) have become an enabler technology for Internet of Things (IoT) applications. The emergence of various applications has created the need for robust and efficient data collection and transfer algorithms. This paper presents a comprehensive review of the existing data gathering algorithms and the technologies adopted for those applications. Although these algorithms extend the physical reach of monitoring capability, they face several constraints, such as limited energy availability, small memory size, and low processing speed, which are the principal obstacles to designing efficient management protocols for WSN-IoT integration.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_78-A_Review_of_Data_Gathering_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Impact of GINI Index and Information Gain on Classification using Decision Tree Classifier Algorithm*</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110277</link>
        <id>10.14569/IJACSA.2020.0110277</id>
        <doi>10.14569/IJACSA.2020.0110277</doi>
        <lastModDate>2020-02-29T14:32:40.9230000+00:00</lastModDate>
        
        <creator>Suryakanthi Tangirala</creator>
        
        <subject>Supervised learning; classification; decision tree; information gain; GINI index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Decision tree is a supervised machine learning algorithm suitable for solving classification and regression problems. Decision trees are built recursively by applying split conditions at each node that divide the training records into subsets with output variables of the same class. The process starts from the root node of the decision tree and progresses by applying split conditions at each non-leaf node, resulting in homogeneous subsets. However, achieving perfectly homogeneous subsets is not possible. Therefore, the goal at each node is to identify an attribute and a split condition on that attribute that minimizes the mixing of class labels, thus resulting in nearly pure subsets. Several splitting indices have been proposed to evaluate the goodness of a split, common ones being the GINI index and information gain. The aim of this study is to conduct an empirical comparison of the GINI index and information gain. Classification models are built using the decision tree classifier algorithm by applying the GINI index and information gain individually. The classification accuracy of the models is estimated using different metrics such as the confusion matrix, overall accuracy, per-class accuracy, recall and precision. The results of the study show that, regardless of whether the dataset is balanced or imbalanced, the classification models built by applying the two different splitting indices, GINI index and information gain, give the same accuracy. In other words, the choice of splitting index has no impact on the performance of the decision tree classifier algorithm.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_77-Evaluating_the_Impact_of_GINI_Index.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Deep Autoencoder Network for Speech Emotion Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110276</link>
        <id>10.14569/IJACSA.2020.0110276</id>
        <doi>10.14569/IJACSA.2020.0110276</doi>
        <lastModDate>2020-02-29T14:32:40.9070000+00:00</lastModDate>
        
        <creator>Maria Andleeb Siddiqui</creator>
        
        <creator>Wajahat Hussain</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <creator>Danish-ur-Rehman</creator>
        
        <subject>Auto-encoder; emotions; DNN; classification accuracy; autism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Learning methods with multiple levels of representation are called deep learning methods. The composition of simple but non-linear modules results in a deep-learning model. Deep learning will see many more successes in the near future, because it requires very little hand engineering and can easily exploit ample amounts of data for computation. In this paper a deep learning network is used to recognize speech emotions. A deep autoencoder is constructed to learn the speech emotions (Angry, Happy, Neutral, and Sad) of normal and autistic children. Experimental results show that the categorical classification accuracy of speech is 46.5% and 33.3% for normal and autistic children's speech, respectively, whereas the autoencoder shows a very low classification accuracy of 26.1% for the happy emotion only and no classification accuracy for the Angry, Neutral and Sad emotions.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_76-Performance_Evaluation_of_Deep_Autoencoder_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Sequential Constructive Crossover Operator in a Genetic Algorithm for Solving the Traveling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110275</link>
        <id>10.14569/IJACSA.2020.0110275</id>
        <doi>10.14569/IJACSA.2020.0110275</doi>
        <lastModDate>2020-02-29T14:32:40.8770000+00:00</lastModDate>
        
        <creator>Zakir Hussain Ahmed</creator>
        
        <subject>Genetic algorithm; adaptive sequential constructive crossover; traveling salesman problem; NP-hard</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Genetic algorithms are widely used metaheuristic algorithms for solving combinatorial optimization problems and are built on the survival-of-the-fittest principle. They obtain a near-optimal solution in a reasonable computational time, but do not guarantee the optimality of the solution. They start with a random initial population of chromosomes and apply three different operators, namely selection, crossover and mutation, to produce new and hopefully better populations in consecutive generations. Of the three operators, the crossover operator is the most important. There are many existing crossover operators in the literature. In this paper, we propose a new crossover operator, named adaptive sequential constructive crossover, to solve the benchmark travelling salesman problem. We then compare the efficiency of the proposed crossover operator with some existing crossover operators, such as greedy crossover, sequential constructive crossover, and partially mapped crossover, under the same genetic settings on some benchmark TSPLIB instances. The experimental study shows the effectiveness of our proposed crossover operator for the problem, and it is found to be the best crossover operator.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_75-Adaptive_Sequential_Constructive_Crossover_Operator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measures of Organizational Training in the Capability Maturity Model Integration (CMMI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110274</link>
        <id>10.14569/IJACSA.2020.0110274</id>
        <doi>10.14569/IJACSA.2020.0110274</doi>
        <lastModDate>2020-02-29T14:32:40.8600000+00:00</lastModDate>
        
        <creator>Mahmoud Khraiwesh</creator>
        
        <subject>Organizational training; training; measures; CMMI; GQM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Training has a major impact on organizational commitment. Organizational objectives can be met by executing several training strategies and programs to enhance training. Organizational training aims at developing employees' knowledge and skills; it should enable employees to carry out their duties efficiently and effectively. Two goals and seven practices of the organizational training process area in the Capability Maturity Model Integration (CMMI) framework are analyzed in this study in order to define common measures for organizational training. CMMI is a framework for assessing and improving software systems. The researcher applied the Goal Question Metric (GQM) model to the two goals and seven specific practices of the organizational training process area in CMMI to define the measures. The researcher confirmed that the defined measures are true measures for each of the seven specific practices.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_74-Measures_of_Organizational_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Testing different Channel Estimation Techniques in Real-Time Software Defined Radio Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110273</link>
        <id>10.14569/IJACSA.2020.0110273</id>
        <doi>10.14569/IJACSA.2020.0110273</doi>
        <lastModDate>2020-02-29T14:32:40.8300000+00:00</lastModDate>
        
        <creator>Ponnaluru Sowjanya</creator>
        
        <creator>Penke Satyanarayana</creator>
        
        <subject>Channel Estimation; GNU Radio Companion (GRC); Orthogonal frequency-division multiplexing (OFDM); software-defined radio (SDR); Spectral Temporal Averaging (STA); Universal Software Radio Peripheral (USRP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In modern wireless communication, OFDM (orthogonal frequency-division multiplexing) is used to maximize spectral efficiency and minimize the bit error rate. OFDM is used broadly in networks using various protocols, including IEEE 802.11p wireless vehicular environments, IEEE 802.16d/e Wireless Metropolitan Area Networks, 3GPP Long-Term Evolution networks and IEEE 802.11a/g/n Wireless Local Area Networks. The main challenges when using OFDM for wireless communications are the short channel-coherence bandwidth and the narrow coherence time, both of which have a major effect on the reliability and latency of data packet communication. These properties increase the difficulty of channel equalization because the channel may change drastically over the period of a single packet. Spectral Temporal Averaging is an enhanced decision-directed channel equalization technique that improves communication performance (in terms of frame delivery ratio (FDR) and throughput) in typical channel conditions. This paper reports tests of Spectral Temporal Averaging channel equalization in an IEEE 802.11a network, compared with other channel equalization techniques in terms of FDR in a real-time environment. Here, a software-defined radio (SDR) platform was used to estimate the channel. The results show that the system can provide over 90% delivery ratio at 25 dB signal-to-noise ratio (SNR) for various digital modulation techniques. For this purpose, an experimental setup was used consisting of a software-defined radio, a Universal Software Radio Peripheral (USRP) N210 with a wide-bandwidth daughterboard as hardware, and GNU Radio.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_73-Testing_different_Channel_Estimation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Truncating and Statistical Stemming Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110272</link>
        <id>10.14569/IJACSA.2020.0110272</id>
        <doi>10.14569/IJACSA.2020.0110272</doi>
        <lastModDate>2020-02-29T14:32:40.8130000+00:00</lastModDate>
        
        <creator>Sanaullah Memon</creator>
        
        <creator>Ghulam Ali Mallah</creator>
        
        <creator>K.N.Memon</creator>
        
        <creator>AG Shaikh</creator>
        
        <creator>Sunny K.Aasoori</creator>
        
        <creator>Faheem Ul Hussain Dehraj</creator>
        
        <subject>Stemming; truncating; statistical; NLP; IR; Lovins; Porters; Paice/Husk; Dawson; N-gram;  HMM; YASS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Search and indexing systems rely on an important component called word stemming, which is a core part of text mining applications, IR frameworks and natural language processing systems. A fundamental goal in search and indexing has long been to improve retrieval through the automated reduction and conflation of words into their roots. Stemming derives the stem of an index term by removing any attached prefixes and suffixes, so the stem expresses a broader concept than the original word. In an IR framework, the number of retrieved documents is increased by the stemming process.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_72-Comparative_Study_of_Truncating_and_Statistical_Stemming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting White Spaces for Karachi through Artificial Intelligence: Comparison of NARX and Cascade Feed Forward Back Propagation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110271</link>
        <id>10.14569/IJACSA.2020.0110271</id>
        <doi>10.14569/IJACSA.2020.0110271</doi>
        <lastModDate>2020-02-29T14:32:40.7970000+00:00</lastModDate>
        
        <creator>Shabbar Naqvi</creator>
        
        <creator>Minaal Ali</creator>
        
        <creator>Aamir Zeb Shaikh</creator>
        
        <creator>Yamna Iqbal</creator>
        
        <creator>Abdul Rahim</creator>
        
        <creator>Saima Khadim</creator>
        
        <creator>Talat Altaf</creator>
        
        <subject>6G; cognitive radio; NARX; cascaded feed forward neural network; learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The marriage of the Internet of Everything (IoE) and cognitive-radio-driven technologies seems near under the umbrella of the 6G and 6G+ communication standards. The new services expected to be introduced in 6G communication will require high data rates for transmission. Learning-based algorithms will play a key role in the successful implementation of these novel technologies and in evolving next-generation wireless standards for providing ubiquitous connectivity. This paper investigates the performance of two artificial neural network (ANN) based algorithms for Karachi: the nonlinear autoregressive exogenous (NARX) algorithm and the cascade feed forward back propagation neural network (CFFBNN) scheme. A dataset for Karachi is also developed for 1805 MHz. Comparing the results of the two algorithms shows that the mean square error (MSE) for CFFBNN is 6.8877e-5 at epoch 16 and the MSE for NARX is 3.1506e-11 at epoch 26. Hence, in terms of computational performance, NARX performs much better than the classic CFFBNN algorithm.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_71-Exploiting_White_Spaces_for_Karachi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Sensitivity in Continuous Electroencephalogram Person Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110270</link>
        <id>10.14569/IJACSA.2020.0110270</id>
        <doi>10.14569/IJACSA.2020.0110270</doi>
        <lastModDate>2020-02-29T14:32:40.7830000+00:00</lastModDate>
        
        <creator>Rui-Zhen Wong</creator>
        
        <creator>Yun-Huoy Choo</creator>
        
        <creator>Azah Kamilah Muda</creator>
        
        <subject>Electroencephalogram; continuous authentication; task sensitivity; multimodal stimuli; Mahalanobis distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This research investigates the task sensitivity of multimodal stimulation tasks for continuous person authentication using electroencephalogram (EEG) signals. Pattern analysis aims to train on historical examples to make predictions on unseen data. However, data trials in EEG stimulation consist of inseparable cognitive information, making it difficult to ensure that the testing trials contain cognitive information matching the training data. Since EEG signals are unique across individuals, we assume that a multimodal stimulation task in EEG analysis is not sensitive to train-test data trial control: data trials that are inconsistent between training and testing can still be used as biometrics to authenticate a person. The EEG signals were collected using the 10-20 system from 20 healthy subjects. During data acquisition, subjects were asked to operate a computer and perform various computer-related tasks (e.g., mouse clicking, mouse scrolling, keyboard typing, browsing, reading, video watching, music listening, playing computer games, etc.) according to their preferences, without interruption. Features extracted from Welch's estimated power spectral density in different frequency bands were tested. The designed authentication approach computed intra- and inter-personal variability using the Mahalanobis distance to authenticate subjects. The proposed EEG continuous authentication approach succeeded: data collected from multimodal stimuli, regardless of task sensitivity, were able to authenticate subjects, with the highest verification performance shown in the low-beta frequency band. The effectiveness of the middle frequency region was anticipated because the collected data were based on subjects' voluntary actions. Future research will focus on the effect of subjects' voluntary and involuntary actions on the effective frequency region.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_70-Task_Sensitivity_in_Continuous_Electroencephalogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of Autonomous Pesticide Sprayer Robot for Fertigation Farm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110269</link>
        <id>10.14569/IJACSA.2020.0110269</id>
        <doi>10.14569/IJACSA.2020.0110269</doi>
        <lastModDate>2020-02-29T14:32:40.7500000+00:00</lastModDate>
        
        <creator>A. M. Kassim</creator>
        
        <creator>M. F. N. M. Termezai</creator>
        
        <creator>A. K. R. A. Jaya</creator>
        
        <creator>A. H. Azahar</creator>
        
        <creator>S Sivarao</creator>
        
        <creator>F. A. Jafar</creator>
        
        <creator>H.I Jaafar</creator>
        
        <creator>M. S. M. Aras</creator>
        
        <subject>Pesticide sprayer; autonomous robot; fertigation; farm; under crop leaves</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The management of insect pests is a critical component of agricultural production, especially in fertigation-based farms. Although fertigation farms in Malaysia benefit from advanced fertilization and irrigation management systems, they still lack a pest management system. Since most insects and pests live under the crop’s leaves, spraying under the leaves of the crop is difficult, labor-intensive work. Most agricultural plants are damaged, weakened, or killed by insect pests, resulting in reduced yields, lowered quality, and damaged plants or plant products that cannot be sold. Even after harvest, insects continue their damage in stored or processed products. Therefore, the aim of this study is to design and develop an autonomous pesticide sprayer for the chili fertigation system and to implement a flexible sprayer arm that sprays pesticide under the crop’s leaves. This study involves the development of an unmanned pesticide sprayer that can be mobilized autonomously, because pesticides are hazardous substances that can affect human health if workers are exposed to them during manual spraying, especially in closed areas such as greenhouses. The flexible sprayer boom can also be flexibly controlled in greenhouses and outdoor environments such as open-space farms. A successful pesticide management system in the fertigation-based farm is expected through the use of the autonomous pesticide sprayer robot. Besides, the proposed autonomous pesticide sprayer can also be used for various types of crops such as rockmelons, tomatoes, papayas, pineapples, and vegetables.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_69-Design_and_Development_of_Autonomous_Pesticide_Sprayer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Significance of Electronic Word of Mouth (e-WOM) in Opinion Formation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110268</link>
        <id>10.14569/IJACSA.2020.0110268</id>
        <doi>10.14569/IJACSA.2020.0110268</doi>
        <lastModDate>2020-02-29T14:32:40.7500000+00:00</lastModDate>
        
        <creator>Javaria Khalid</creator>
        
        <creator>Aneela Abbas</creator>
        
        <creator>Rida Akbar</creator>
        
        <creator>Muhammad Qasim Mahmood</creator>
        
        <creator>Rafia</creator>
        
        <creator>Arslan Tariq</creator>
        
        <creator>Madiha Khatoon</creator>
        
        <creator>Ayesha Akbar</creator>
        
        <creator>Samreen Azhar</creator>
        
        <creator>Asra Meer</creator>
        
        <creator>Muhammad Junaid Ud Din</creator>
        
        <subject>E-WOM (Electronic word of mouth); opinion formation; positive reviews; negative reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In the realm of the interconnected digital world, social ranking systems are readily used in different sections of society for several reasons. Both the private and public sectors use social ranking systems as a tool to engineer human behavior, crafting a digitally stimulated form of social control. Online reviews and ratings are one of the significant marketing strategies online sellers use to steer consumers’ opinions and, ultimately, their purchasing decisions. Buyers usually go through these reviews and ratings while purchasing products or hiring services online. Online consumer reviews, recommendations for products and services, and peer viewpoints play a significant role in the customer&#39;s opinion formation. Different online forums for product reviews, ratings, and recommendations differ in their objectives, functions, and characteristics. This paper presents a systematic literature review and comparative study of the influence that positive and negative reviews and ratings of products, automobile services, movies, restaurants, and products and services on OLX &amp; eBay, etc., have on opinion formation. Moreover, how these reviews influence others&#39; opinions about buying and using products, services, and apps is analyzed.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_68-Significance_of_Electronic_Word_of_Mouth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Method to Solve Cold Start Problem using Fuzzy user-based Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110267</link>
        <id>10.14569/IJACSA.2020.0110267</id>
        <doi>10.14569/IJACSA.2020.0110267</doi>
        <lastModDate>2020-02-29T14:32:40.7200000+00:00</lastModDate>
        
        <creator>Syed Badar Ud Duja</creator>
        
        <creator>Baoning Niu</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <creator>M. Umar Farooq Alvi</creator>
        
        <creator>Muhammad Amjad</creator>
        
        <creator>Usman Ali</creator>
        
        <creator>Zia Ur Rehman</creator>
        
        <creator>Waqar Hussain</creator>
        
        <subject>Recommender system; collaborative filtering; cold start problem; clustering; user based clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>With the elevation of online accessibility to almost everything, many logics, systems, and algorithms have to be revised to match the pace of trends among social networks. One such system, the recommendation system, has become very important as far as social networks are concerned, given such a paced and vibrant environment of online accessibility and the large amount of data uploaded to the internet, such as movies, books, research articles, and much more. While the recommendation method provides social connections between operators, it also, at the same time, provides references that allow users to assess other users, which affects their social relations directly or indirectly. Collaborative filtering is the technique of recommending picks matching the taste of the user, accomplished through users’ mutual collaboration; this technique is mostly used by social networking sites. Nowadays this technique is not only popular but common for recommending data to users; meanwhile, it also motivates researchers to find more effective systems and algorithms so that user satisfaction can be achieved by recommending data according to their search history. This paper suggests a CF (Collaborative Filtering) model based on the user’s truthful information and FCM (Fuzzy C-means) clustering. This study proposes that the fuzzy truthful information of the user be combined with other users’ ratings of the content to produce a recommender system formula with a coupled coefficient and new parameters. To obtain the results, the MovieLens dataset is used in the study, which shows significant improvement in recommendation under the cold start condition.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_67-A_Proposed_Method_to_Solve_Cold_Start_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Localization of Mobile Submerged Sensors using Lambert-W Function and Cayley-Menger Determinant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110266</link>
        <id>10.14569/IJACSA.2020.0110266</id>
        <doi>10.14569/IJACSA.2020.0110266</doi>
        <lastModDate>2020-02-29T14:32:40.7030000+00:00</lastModDate>
        
        <creator>Anirban Paul</creator>
        
        <creator>Miad Islam</creator>
        
        <creator>Md. Ferdousur Rahman</creator>
        
        <creator>Anisur Rahman</creator>
        
        <subject>Lambert-W function; Cayley-Menger determinant; submerged mobile sensor; single beacon; localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper demonstrates a new mechanism to localize mobile submerged sensors using only a single beacon node. In range-based localization, fast and accurate distance measurement is vital in underwater wireless sensor networks (UWSN). The knowledge of the exact coordinates of the sensors is as important as the actuated data in underwater wireless sensor networks. Mostly, the bouncing technique is used to determine the distance between the beacon and the sensors. Moreover, to determine the coordinates, trilateration and multilateration techniques are used, where using multiple beacons (usually three or more) is the most common approach. Nevertheless, because of many factors, this method gives less accurate distance measurements, which finally leads to erroneous coordinates. TDOA is very cumbersome to achieve in the underwater environment because of time synchronization, and using AOA is extremely difficult and challenging; TOA is the most common and most widely employed approach, yet it still needs precise synchronization. So, to determine the distances between the beacon and sensor nodes, we have used a method based on the Lambert-W function in this study, which is an RSS-based approach and avoids any synchronization. Besides, the coordinates of the mobile sensors are calculated using the Cayley-Menger determinant. In this paper, the method is derived and its accuracy is verified by simulation results.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_66-Localization_of_Mobile_Submerged_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geo Security using GPT Cryptosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110265</link>
        <id>10.14569/IJACSA.2020.0110265</id>
        <doi>10.14569/IJACSA.2020.0110265</doi>
        <lastModDate>2020-02-29T14:32:40.6730000+00:00</lastModDate>
        
        <creator>Eraj Khan</creator>
        
        <creator>Abbas Khalid</creator>
        
        <creator>Arshad Ali</creator>
        
        <creator>Muhammad Atif</creator>
        
        <creator>Ahmad Salman Khan</creator>
        
        <subject>Location based security; code based cryptosystem; cipher-text; GPT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper describes an implementation of location-based encryption using a public-key cryptosystem based on rank error-correcting codes. In any code-based cryptosystem, public and private keys are matrices over a finite field. This work proposes an algorithm for calculating public- and private-key matrices based on the geographic location of the intended receiver. The main idea is to calculate a location-specific parity-check matrix and then the corresponding public key. Data is encrypted using the public key. Some information about the parity-check matrix, along with other private keys, is sent to the receiver as cipher-text, encrypted with another instance of the public-key GPT cryptosystem using the public key of the receiver. The proposed scheme also introduces a method of calculating a different parity-check matrix for each user.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_65-Geo_Security_using_GPT_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Person Re-Identification System at Semantic Level based on Pedestrian Attributes Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110264</link>
        <id>10.14569/IJACSA.2020.0110264</id>
        <doi>10.14569/IJACSA.2020.0110264</doi>
        <lastModDate>2020-02-29T14:32:40.2030000+00:00</lastModDate>
        
        <creator>Ngoc Q. Ly</creator>
        
        <creator>Hieu N. M. Cao</creator>
        
        <creator>Thi T. Nguyen</creator>
        
        <subject>Person Re-Identification (Re-ID); Pedestrian Attributes Ontology (PAO); Deep Convolution Neuron Network (DCNN); Multi-task Deep Convolution Neuron Network (MDCNN); Local Multi-task Deep Convolution Neuron Network (Local MDCNN); Imbalanced Data Solver (IDS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Person Re-Identification (Re-ID) is a very important task in video surveillance systems, such as tracking people, finding people in public places, or analysing customer behavior in supermarkets. Although there have been many works addressing this problem, challenges remain, such as large-scale datasets, imbalanced data, viewpoint variation, and fine-grained data (attributes); furthermore, local features are not employed at the semantic level in the online stage of the Re-ID task, and the imbalanced data problem of attributes is not taken into consideration. This paper proposes a unified Re-ID system consisting of three main modules: Pedestrian Attribute Ontology (PAO), Local Multi-task DCNN (Local MDCNN), and Imbalanced Data Solver (IDS). The main novelty of our Re-ID system is the mutual support of PAO, Local MDCNN, and IDS to exploit the inner-group correlations of attributes and to pre-filter mismatched candidates from the gallery set based on semantic information such as fashion attributes and facial attributes, solving the imbalanced data problem of attributes without adjusting the network architecture or using data augmentation. We experimented on the well-known Market1501 dataset. The experimental results show the effectiveness of our Re-ID system, which achieves higher performance on the Market1501 dataset in comparison to some state-of-the-art Re-ID methods.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_64-Person_Re_Identification_System_at_Semantic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Producing Effective and Efficient Secure Code through Malware Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110263</link>
        <id>10.14569/IJACSA.2020.0110263</id>
        <doi>10.14569/IJACSA.2020.0110263</doi>
        <lastModDate>2020-02-29T14:32:40.0000000+00:00</lastModDate>
        
        <creator>Abhishek Kumar Pandey</creator>
        
        <creator>Ashutosh Tripathi</creator>
        
        <creator>Mamdouh Alenezi</creator>
        
        <creator>Alka Agrawal</creator>
        
        <creator>Rajeev Kumar</creator>
        
        <creator>Raees Ahmad Khan</creator>
        
        <subject>Malware analysis; reuse code; framework; static analysis; dynamic analysis; reverse engineering; manual analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Malware attacks are creating huge inconveniences for organizations and security experts. Due to insecure web applications, small businesses and personal systems are the most vulnerable targets of malware attacks. In the wake of this burgeoning cyber security threat, this article proposes a framework for a complete malware analysis process, including dynamic analysis, static analysis, and reverse engineering. Further, the article provides an approach to malicious code identification, mitigation, and management through a hybrid process of malware analysis, a priority-based vulnerability mitigation process, and various source code management approaches. The framework delivers a combined package of identification, mitigation, and management that simplifies the process of malicious code handling. The proposed framework also offers a solution for reused code in the software industry. Successful implementation of the framework will make code more robust in the face of unexpected behavior and deliver a revolutionary stage-wise process for malicious code handling in the software industry.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_63-A_Framework_for_Producing_Effective_and_Efficient_Secure_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ransomware Behavior Attack Construction via Graph Theory Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110262</link>
        <id>10.14569/IJACSA.2020.0110262</id>
        <doi>10.14569/IJACSA.2020.0110262</doi>
        <lastModDate>2020-02-29T14:32:39.8430000+00:00</lastModDate>
        
        <creator>Muhammad Safwan Rosli</creator>
        
        <creator>Raihana Syahirah Abdullah</creator>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Faizal M.A</creator>
        
        <creator>Wan Nur Fatihah Wan Mohd Zaki</creator>
        
        <subject>Ransomware; behavior analysis; graph theory; file activity system; Neo4j</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Ransomware has become a current trend of cyberattack, with a reputation among malware for causing victims a massive amount of recovery cost and time. Previous studies and solutions have shown that, when it comes to malware detection, malware behavior needs to be prioritized and analyzed in order to recognize malware attack patterns. Although current state-of-the-art solutions and frameworks use dynamic analysis approaches such as machine learning, which provide more impact than static approaches, there is no approachable way of representing the analysis, especially for detection that relies on malware behavior. Therefore, this paper proposes a graph theory approach in which the analysis of ransomware behavior is visualized into a graph-based pattern. An experiment was conducted with ten ransomware samples for malware analysis and verified using VirusTotal. The file system was selected among the features in the experiment as a medium to understand the behavior of ransomware using data capturing tools. After that, the result of the analysis was visualized in a graph pattern using Neo4j, a graph database tool. Using the graph as a base, the discussion recognizes how each type of ransomware acts differently in the file system and analyzes which nodes have the most impact during the analysis.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_62-Ransomware_Behavior_Attack_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology-Driven IoT based Healthcare Formalism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110261</link>
        <id>10.14569/IJACSA.2020.0110261</id>
        <doi>10.14569/IJACSA.2020.0110261</doi>
        <lastModDate>2020-02-29T14:32:39.8300000+00:00</lastModDate>
        
        <creator>Salwa Muhammad Akhtar</creator>
        
        <creator>Makia Nazir</creator>
        
        <creator>Kiran Saleem</creator>
        
        <creator>Hafiz Mahfooz Ul Haque</creator>
        
        <creator>Ibrar Hussain</creator>
        
        <subject>Internet of Things; BDI reasoning agents; ontology; smart healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Recent developments in Internet of Things (IoT) paradigms have significantly influenced human life, making it much more comfortable, secure, and relaxed. With the remarkable upsurge of smart systems and applications, people are becoming addicted to using these devices and depend on them heavily. The advent of modern smart healthcare systems and significant advancements in IoT-enabled technologies have allowed patients and physicians to be connected in real time, providing healthcare services whenever and wherever needed. These systems often consist of tiny sensors and usually run on smart devices using mobile applications. However, these systems become even more challenging when intelligent decisions must be made dynamically in a highly decentralized environment. In this paper, we propose a Belief-Desire-Intention (BDI) based multi-agent formalism for ontology-driven healthcare systems that performs BDI-based reasoning to make intelligent decisions dynamically in order to achieve the desired goals. We illustrate the use of the proposed approach with a simple case study and a prototypal implementation of a heart monitoring application.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_61-An_Ontology_Driven_IoT_based_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Twins Development Architectures and Deployment Technologies: Moroccan use Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110260</link>
        <id>10.14569/IJACSA.2020.0110260</id>
        <doi>10.14569/IJACSA.2020.0110260</doi>
        <lastModDate>2020-02-29T14:32:39.7970000+00:00</lastModDate>
        
        <creator>Mezzour Ghita</creator>
        
        <creator>Benhadou Siham</creator>
        
        <creator>Medromi Hicham</creator>
        
        <subject>Digital twins; industry 4.0; digital twins challenges and opportunities; Moroccan industrial context</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>With the initiation of the fourth industrial revolution and the advent of information and communication technologies that reinforce the development of advanced technological solutions engaging data science, artificial intelligence, and cyber-physical systems, many long-established research concepts have been revived with in-depth applications within manufacturing plants. Interest is thus turning more and more towards technologies and approaches that can combine the virtual world, with its increased capacities in computer science and processing, and the physical world, with its complex systems and constantly evolving requirements. A relevant concept in this context is that of digital twins. Digital twins, as defined by their founder Dr. Michael Grieves, are virtual replicas of a physical system that evolve within a virtual environment in order to mirror their real counterparts’ life cycle and evolution within the physical environment, with applications in numerous domains. This paper’s aim is to present a literature review of the digital twin concept, its different development and deployment architectures, and its potential applications across Moroccan industrial ecosystems.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_60-Digital_Twins_Development_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimal Prediction Model’s Credit Risk: The Implementation of the Backward Elimination and Forward Regression Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110259</link>
        <id>10.14569/IJACSA.2020.0110259</id>
        <doi>10.14569/IJACSA.2020.0110259</doi>
        <lastModDate>2020-02-29T14:32:39.7830000+00:00</lastModDate>
        
        <creator>Sara HALOUI</creator>
        
        <creator>Abdeslam El MOUDDEN</creator>
        
        <subject>Credit risk; prediction; optimal model; backward elimination; statistical modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The purpose of this paper is to verify whether there is a relationship between credit risk, the main threat to banks, and the demographic, marital, cultural, and socio-economic characteristics of a sample of 40 credit applicants, by using the optimal backward elimination model and the forward regression method. Following the statistical modeling, the final result identifies the variables with a significance level below 5%, and therefore a significant relationship with credit risk, namely the socio-occupational category (CSP), the amount of credit requested, the repayment term, and the type of credit. However, by implementing the second method, the place-of-residence variable was selected as an impacting variable for the chosen model. Overall, these features will help better predict the risk of bank credit.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_59-An_Optimal_Prediction_Models_Credit_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dataset Augmentation for Machine Learning Applications of Dental Radiography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110258</link>
        <id>10.14569/IJACSA.2020.0110258</id>
        <doi>10.14569/IJACSA.2020.0110258</doi>
        <lastModDate>2020-02-29T14:32:39.7670000+00:00</lastModDate>
        
        <creator>Shahid Khan</creator>
        
        <creator>Altaf Mukati</creator>
        
        <subject>Data augmentation; Cone-Beam Computed Tomography; dental X-Rays; panoramic; dataset; classification; deep convolutional neural network; benchmark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The performance of any machine learning algorithm heavily depends on the quality and quantity of the training data. Machine learning algorithms, driven by training data, can accurately predict and produce the right outcome when trained on a sufficient amount of quality data. In medical applications, being more critical, accuracy is of utmost importance. Obtaining enough medical imaging data to train a machine learning algorithm is difficult for a variety of reasons. An effort has been made to produce an augmented dental radiography dataset to train machine learning algorithms. 116 panoramic dental radiographs have been manually segmented for each tooth, producing 32 classes of teeth. Out of 3712 images of individual teeth, 2910 were used for machine learning through general augmentation methods that include rotation, intensity transformation, and flipping of the images, creating a massive dataset of 5.12 million unique images. The dataset is labeled and classified into 32 classes. This dataset can be used to train deep convolutional neural networks to perform classification and segmentation of teeth in X-rays, Cone-Beam CT scans, and other radiographs. We retrained AlexNet on a subset of 80,000 images of the entire dataset and obtained a classification accuracy of 98.88% on 10 classes; retraining on the original dataset yielded 88.31%. The result is evidence of nearly a 10% increase in the performance of the classifier trained on the augmented dataset. The training and validation datasets include teeth affected by metal objects. The manually segmented dataset can be used as a benchmark to evaluate the performance of machine learning algorithms for tooth segmentation and tooth classification.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_58-Dataset_Augmentation_for_Machine_Learning_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intellectual Detection System for Intrusions based on Collaborative Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110257</link>
        <id>10.14569/IJACSA.2020.0110257</id>
        <doi>10.14569/IJACSA.2020.0110257</doi>
        <lastModDate>2020-02-29T14:32:39.7500000+00:00</lastModDate>
        
        <creator>Dhikhi T</creator>
        
        <creator>M.S. Saravanan</creator>
        
        <subject>Intrusion detection; machine learning; deep learning; convolutional autoencoder; softmax classifier; NSL-KDD dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The necessity for the safety of information in a network has increased due to the impressive growth of web applications. Several methods of intrusion detection are used to detect irregularities; these depend on precision, detection frequency, and other parameters, and are expected to adapt to vigorously varying threat scenes. To accomplish consistent anomaly detection in a network, many machine learning algorithms have been formulated by researchers. A technique based on unsupervised machine learning that uses two separate machine learning algorithms, a convolutional autoencoder and a softmax classifier, is proposed to identify anomalies in a network. These deep models were trained on the NSL-KDD training dataset and evaluated on the NSL-KDD test sets. These machine learning models were assessed using well-known classification metrics such as accuracy, precision, and recall. The experimental findings of the developed intrusion detection system model showed promising outcomes for anomaly detection systems in real-world implementation, and the model is compared with prevailing definitive machine learning techniques. This strategy increases the detection of network intrusions and offers a renewed intrusion detection study method.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_57-An_Intellectual_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation LoRa-GPRS Integrated Electricity usage Monitoring System for Decentralized Mini-Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110256</link>
        <id>10.14569/IJACSA.2020.0110256</id>
        <doi>10.14569/IJACSA.2020.0110256</doi>
        <lastModDate>2020-02-29T14:32:39.7200000+00:00</lastModDate>
        
        <creator>Shaban Omary</creator>
        
        <creator>Anael Sam</creator>
        
        <subject>Internet of Things; LoRa; GPRS; decentralized mini-grid systems; electricity usage monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Emerging Internet of Things (IoT) technologies such as Long-Range (LoRa), combined with traditional cellular communications technologies such as General Packet Radio Service (GPRS), offer decentralized mini-grid companies the opportunity to have cost-effective monitoring systems for mini-grid resources. Nevertheless, most existing decentralized mini-grid companies still rely on traditional cellular networks to fully monitor electricity consumption information, which is not a feasible solution, especially for resource-constrained mini-grid systems. This paper presents the performance evaluation of the proposed LoRa-GPRS integrated power consumption monitoring system for decentralized mini-grid centers. Each mini-grid center consists of a network of custom-designed smart meters equipped with LoRa modules for local data collection, while a GPRS gateway is used to transmit the collected data from the local monitoring centre to the cloud server. Performance testing was conducted using five electrical appliances whose power consumption data from the cloud server was compared with the same data collected using a reference digital meter. The correlation between the two data sets was used as a key performance metric of the proposed system. The performance results show that the proposed system has good accuracy, hence providing a cost-effective framework for monitoring and managing power resources in decentralized mini-grid centers.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_56-Performance_Evaluation_LoRa_GPRS_Integrated_Electricity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Architecture for Modelling and Reasoning IoT Data Resources based on SPARK</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110255</link>
        <id>10.14569/IJACSA.2020.0110255</id>
        <doi>10.14569/IJACSA.2020.0110255</doi>
        <lastModDate>2020-02-29T14:32:39.7030000+00:00</lastModDate>
        
        <creator>Ahmed Salama</creator>
        
        <creator>Masoud E. Shaheen</creator>
        
        <creator>Haytham Al-Feel</creator>
        
        <subject>Big Data; Internet of Things; Semantic_Modelling; Semantic_Reasoning; Semantic_Rules; Sensors; Apache SPARK; SPARK_SQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The Internet of Things (IoT) is one of the most valuable technologies today. Through it, everything around the globe becomes connected and intelligent, eliminating the need for human-to-human interaction to perform tasks. This is achieved by representing objects such as humans, machines, and devices as Internet Protocol (IP) endpoints in the network environment through different sensor and actuator devices, which facilitate the interaction between all of them. These different types of sensors generate a large volume of varied information and data. Such sensor information is often rendered useless by its heterogeneity and lack of interoperability, which leave it in an unstructured form. Leveraging semantic web techniques can handle these main challenges facing IoT applications. Hence, the main contribution of this research is to boost the performance and quality of sensor information retrieved from IoT resources and applications by using semantic web technologies to resolve the problems of heterogeneity and interoperability, converting unstructured sensor data into a structured form and so reaching the next level of exploiting the sensors employed in IoT applications. This research also aims to improve the processing performance of the tremendous amount of IoT information by utilizing Big Data techniques such as Spark and its query language, SPARK-SQL, as a streaming query language for massive amounts of data. The proposed architecture demonstrates that utilizing semantic techniques to model streaming sensor data improves the value of the information and permits gathering new information. Moreover, the improvement from using SPARK extends the performance of utilizing this sensor information in terms of query retrieval time, particularly when compared with running the same queries using the conventional SPARQL query language.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_55-Semantic_Architecture_for_Modelling_and_Reasoning_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Powerful Solution for Data Accuracy Assessment in the Big Data Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110254</link>
        <id>10.14569/IJACSA.2020.0110254</id>
        <doi>10.14569/IJACSA.2020.0110254</doi>
        <lastModDate>2020-02-29T14:32:39.6900000+00:00</lastModDate>
        
        <creator>Mohamed TALHA</creator>
        
        <creator>Nabil ELMARZOUQI</creator>
        
        <creator>Anas ABOU EL KALAM</creator>
        
        <subject>Big data; data quality; data accuracy assessment; big data sampling; schema matching; record linkage; similarity measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Data Accuracy is one of the main dimensions of Data Quality; it measures the degree to which data are correct. Knowing the accuracy of an organization&#39;s data reflects the level of reliability it can assign to them in decision-making processes. Measuring data accuracy in a Big Data environment is a process that involves comparing the data to be assessed with some &quot;reference data&quot; considered by the system to be correct. However, such a process can be complex or even impossible in the absence of appropriate reference data. In this paper, we focus on this problem and propose an approach to obtain the reference data thanks to the emergence of Big Data technologies. Our approach is based on the upstream selection of a set of criteria that we define as &quot;Accuracy Criteria&quot;. We furthermore use a set of techniques such as Big Data Sampling, Schema Matching, Record Linkage, and Similarity Measurement. The proposed model and experimental results allow us to be more confident in the importance of a data quality assessment solution and in the configuration of the accuracy criteria to automate the selection of reference data in a Data Lake.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_54-Towards_a_Powerful_Solution_for_Data_Accuracy_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Machine Learning Tool: The First Stop for Data Science and Statistical Model Building</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110253</link>
        <id>10.14569/IJACSA.2020.0110253</id>
        <doi>10.14569/IJACSA.2020.0110253</doi>
        <lastModDate>2020-02-29T14:32:39.6730000+00:00</lastModDate>
        
        <creator>DeepaRani Gopagoni</creator>
        
        <creator>P V Lakshmi</creator>
        
        <subject>Automated machine learning; regression models; support vector machines; QSAR; QSPR; artificial neural networks; k-means clustering; R program; shiny web app; drug design; market analysis; supervised learning; Naive Bayes classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Machine learning techniques are designed to derive knowledge from existing data. Increased computational power and the use of natural language processing and image processing methods have made it easy to create rich data. Good domain knowledge is required to build useful models. Uncertainty remains around choosing the right sample data, variable reduction, and selection of the statistical algorithm. A suitable statistical method coupled with explanatory variables is critical for model building and analysis. There are multiple choices around each parameter. An automated system that could help scientists select an appropriate dataset coupled with a learning algorithm would be very useful. A freely available web-based platform, named the automated machine learning tool (AMLT), is developed in this study. AMLT automates the entire model building process. AMLT is equipped with the most commonly used variable selection methods and with statistical methods for both supervised and unsupervised learning. AMLT can also perform clustering. AMLT uses statistical principles such as R2 to rank the models and performs automatic test set validation. The tool is validated for connectivity and capability by reproducing two published works.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_53-Automated_Machine_Learning_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JobChain: An Integrated Blockchain Model for Managing Job Recruitment for Ministries in Sultanate of Oman</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110252</link>
        <id>10.14569/IJACSA.2020.0110252</id>
        <doi>10.14569/IJACSA.2020.0110252</doi>
        <lastModDate>2020-02-29T14:32:39.6430000+00:00</lastModDate>
        
        <creator>Vinu Sherimon</creator>
        
        <creator>Sherimon P.C</creator>
        
        <creator>Alaa Ismaeel</creator>
        
        <subject>Blockchain; permissioned; chaincode; hyperledger composer playground; job recruitment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Industries around the world have been revolutionized by the arrival of blockchain technology. Blockchain applications and use cases are under development in different domains. This research presents a blockchain platform, “JobChain”, to manage job recruitment. The case study covers job recruitment in various Ministries in the Sultanate of Oman. Currently, in Oman, citizens learn of job vacancies through advertisements posted in newspapers or on social media. A job seeker then applies for the desired job, and thereafter the qualified candidates are called for tests/interviews. To ease this process, this research proposes a blockchain-based solution that includes the various Ministries and the citizens/residents of the Sultanate of Oman. Ministries can post job vacancies on the blockchain, and qualified citizens can submit their applications. Relevant cryptographic functions are used to verify the authenticity of the participants in the blockchain network. Citizens experience a trusted, secure government, which is essential for the development of a country. Unlike traditional models, blockchain eliminates the need for intermediary agents (e.g. job consultancies), thereby providing direct communication between the participants of the blockchain. The proposed blockchain framework helps citizens in Oman stay updated about job vacancies. Hyperledger Composer Playground is used to design and test the proposed blockchain business network. Preliminary results show that the participants and assets are created successfully and that the transactions to approve a job vacancy and a job application are performed through the proposed blockchain network.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_52-JobChain_An_Integrated_Blockchain_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud based Power Failure Sensing and Management Model for the Electricity Grid in Developing Countries: A Case of Zambia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110251</link>
        <id>10.14569/IJACSA.2020.0110251</id>
        <doi>10.14569/IJACSA.2020.0110251</doi>
        <lastModDate>2020-02-29T14:32:39.6270000+00:00</lastModDate>
        
        <creator>Janet Nanyangwe Sinkala</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Cloud architecture; power failure sensing; low voltage network; electric grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In most developing countries, huge parts of the electric power grid are not monitored, making it difficult for the service provider to determine when there is a power failure in the electric grid, especially if the power failure occurs at the low-voltage level. Clients usually have to call the utility’s customer service centre to report a power failure. However, this system of addressing power outages is not very effective and usually results in long durations of system interruption. This paper proposes a cloud-based power failure sensing system to enable automatic power failure sensing and reporting as well as monitoring of the low-voltage power network in Zambia, a developing country in Southern Africa. A baseline study was conducted to determine the challenges faced by both the electric power utility company, the Zambia Electricity Supply Corporation (ZESCO), and the electricity consumers under the current power failure reporting management model. The results from the baseline study indicate that electricity consumers face challenges when reporting power failures. These include failure to get through to the customer call centre due to constantly engaged lines, unanswered calls, failed calls, and network failure. The challenge faced by the electricity service provider is the inability to attend to all customers through the call centre, as customer calls are rejected due to limited call centre system resources. To address these challenges, the proposed cloud-based power failure sensor model makes use of a voltage sensor circuit, an Arduino microcontroller board, a SIM808 GSM/GPRS/GPS module, cloud architecture, a web application, and the Google Maps API. Results from the proposed model show improved reporting time, location information, and quick response to power failures.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_51-Cloud_based_Power_Failure_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Trilateration Localization using RSSI in Indoor Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110250</link>
        <id>10.14569/IJACSA.2020.0110250</id>
        <doi>10.14569/IJACSA.2020.0110250</doi>
        <lastModDate>2020-02-29T14:32:39.5930000+00:00</lastModDate>
        
        <creator>Nur Diana Rohmat Rose</creator>
        
        <creator>Low Tan Jung</creator>
        
        <creator>Muneer Ahmad</creator>
        
        <subject>RSSI; RFID; Indoor Positioning System (IPS); trilateration; 3D localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Received Signal Strength Indicator (RSSI) is one of the most popular techniques for outdoor and indoor localization. There has been much previous research on RSSI-based indoor localization systems. However, most of these systems lack a solid classification method that reduces localization errors and improves accuracy. This paper focuses on indoor localization methods to provide a technological perspective on indoor positioning systems. It proposes an indoor localization approach that uses the 3D trilateration method to locate target tags from RFID readers, with RSSI measurements used for range determination. Six test cases are used for each reader. This system can track any target within the selected area with less localization error.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_50-3D_Trilateration_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Advanced Machine Learning Techniques for Predicting Hospital Readmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110249</link>
        <id>10.14569/IJACSA.2020.0110249</id>
        <doi>10.14569/IJACSA.2020.0110249</doi>
        <lastModDate>2020-02-29T14:32:39.5800000+00:00</lastModDate>
        
        <creator>Samah Alajmani</creator>
        
        <creator>Kamal Jambi</creator>
        
        <subject>Boosting; random forest; linear discriminant analysis; k-nearest neighbor; machine learning; hospital readmission; predictive analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Predicting the probability of hospital readmission is one of the most important healthcare problems for delivering satisfactory, high-quality service in chronic diseases such as diabetes, in order to identify needed resources such as rooms, medical staff, beds, and specialists. Unfortunately, not many studies in the literature address this issue; most involve forecasting the probability of diseases. Several machine learning methods can be implemented for prediction. Nonetheless, comparative studies that identify the most effective approaches for this prediction task are also insufficient. With this aim, our paper introduces a comparative study across five popular methods to predict the probability of hospital readmission in patients suffering from diabetes. The selected techniques include linear discriminant analysis, instance-based learning (k-nearest neighbors), and ensemble-based learning (random forest, AdaBoost, and gradient boosting). The study showed that the best performance was achieved by random forest, whereas the worst performance was shown by linear discriminant analysis.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_49-Assessing_Advanced_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending Tangible Interactive Interfaces for Education: A System for Learning Arabic Braille using an Interactive Braille Keypad</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110247</link>
        <id>10.14569/IJACSA.2020.0110247</id>
        <doi>10.14569/IJACSA.2020.0110247</doi>
        <lastModDate>2020-02-29T14:32:39.5630000+00:00</lastModDate>
        
        <creator>Hind Taleb Bintaleb</creator>
        
        <creator>Duaa Al Saeed</creator>
        
        <subject>Braille; tangible interface; e-learning; Arduino; accessibility; usability; visually impaired; blind</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Learning Braille for people with visual impairments means being able to read, write, and communicate with others. Several educational tools exist for learning Braille. Unfortunately, for Arabic Braille there is a lack of interactive educational tools, and what is mostly used are traditional learning tools such as the Braille block. Replacing those tools with more effective and interactive e-learning tools would help improve the learning process. This paper introduces a new educational system with a tangible and interactive interface. The system aims to help blind children learn Arabic Braille letters and numbers using an interactive tactile Braille keypad together with an educational website. The interactive tactile Braille keypad was built using an Arduino connected to the educational website. A usability test was conducted; the results showed that the system is easy to use and suggested that using the interactive Braille keypad with the educational website will improve learning outcomes for blind children.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_47-Extending_Tangible_Interactive_Interfaces_for_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Very High-Performance Echo Canceller for Digital Terrestrial Television in Single Frequency Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110248</link>
        <id>10.14569/IJACSA.2020.0110248</id>
        <doi>10.14569/IJACSA.2020.0110248</doi>
        <lastModDate>2020-02-29T14:32:39.5630000+00:00</lastModDate>
        
        <creator>El Miloud Ar-Reyouchi</creator>
        
        <creator>Yousra Lamrani</creator>
        
        <creator>Kamal Ghoumid</creator>
        
        <creator>Salma Rattal</creator>
        
        <subject>Gap filler; Digital Adaptive Equalizer (DAE); Doppler Enhanced Echo Canceller (DEEC); Single-Frequency Networks (SFN); Coded Orthogonal Frequency Division Multiplex (COFDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The principal aim of this paper is to cancel out natural and man-made echoes in single-frequency networks (SFN). The challenge is to detect and remove feedback echoes and enhance the intelligibility of the essential parameters in the SFN of digital terrestrial television broadcasting (DTTB) transmitter systems, especially the Modulation Error Ratio (MER), while optimizing coverage areas. We suggest a Digital Video Broadcasting (DVB) gap filler (GF) with two types of echo cancelling: a Digital Adaptive Equalizer (DAE) and a Doppler Enhanced Echo Canceller (DEEC). The proposed GF outperforms the standard GF (SGF), finite impulse response filter (FIR GF), and adaptive GF (AGF) techniques by 33%, 17%, and 13%, respectively. Furthermore, the obtained MER makes the proposed GF (PGF) ideal for operating in SFN using the Coded Orthogonal Frequency Division Multiplex (COFDM) technique.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_48-Very_High_Performance_Echo_Canceller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Attribution of Cyberattack using Association Rule Mining (ARM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110246</link>
        <id>10.14569/IJACSA.2020.0110246</id>
        <doi>10.14569/IJACSA.2020.0110246</doi>
        <lastModDate>2020-02-29T14:32:39.5330000+00:00</lastModDate>
        
        <creator>Md Sahrom Abu</creator>
        
        <creator>Siti Rahayu Selamat</creator>
        
        <creator>Robiah Yusof</creator>
        
        <creator>Aswami Ariffin</creator>
        
        <subject>CTI; association rule mining; Apriori Algorithm; attribution; interestingness measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>With the rapid development of computer networks and information technology, attackers have taken advantage of the situation to launch complicated cyberattacks. Such complicated cyberattacks cause many problems for organizations because mitigating them and reducing the infection rate requires effective cyberattack attribution. Cyber Threat Intelligence (CTI) has gained wide coverage from the media due to its capability to provide CTI feeds from various data sources that can be used for cyberattack attribution. In this paper, we study the relationships among basic Indicators of Compromise (IOC) in a network traffic dataset using a data mining approach. This dataset is obtained using a crawler deployed to pull security feeds from Shadowserver. An association analysis method using the Apriori Algorithm is then implemented to extract rules that can discover interesting relationships between large sets of data items. Finally, the extracted rules are evaluated against the interestingness measures of support, confidence, and lift to quantify the value of the association rules generated by the Apriori Algorithm. By applying the Apriori Algorithm to the Shadowserver dataset, we discover association rules among several IOCs that can help attribute cyberattacks.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_46-An_Attribution_of_Cyberattack_using_Association_Rule_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three-Dimensional Shape Reconstruction from a Single Image by Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110245</link>
        <id>10.14569/IJACSA.2020.0110245</id>
        <doi>10.14569/IJACSA.2020.0110245</doi>
        <lastModDate>2020-02-29T14:32:39.5170000+00:00</lastModDate>
        
        <creator>Kentaro Sakai</creator>
        
        <creator>Yoshiaki Yasumura</creator>
        
        <subject>Computer vision; 3D reconstruction; deep learning; convolutional neural network; feature learning; normal vector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Reconstructing a three-dimensional (3D) shape from a single image is one of the main topics in the field of computer vision. Some methods for 3D reconstruction adopt machine learning: they use it to acquire the relationship between the 3D shape and the 2D image, and reconstruct 3D shapes using the learned relationship. However, since only predefined features (pixels in the image) are used, it is not possible to obtain the features of the 2D image best suited for 3D reconstruction. Therefore, this paper presents a method for reconstructing 3D shapes by learning features of 2D images using deep learning. The method uses a Convolutional Neural Network (CNN) for feature learning to reconstruct a 3D shape. The pooling and convolutional layers of the CNN capture spatial information about images and automatically select valuable image features. This paper presents two types of reconstruction methods. The first estimates the normal vectors of the object and then reconstructs the 3D shape from the normal vectors by deep learning. The second is direct reconstruction of the 3D shape from an image by a deep neural network. Experimental results using human face images showed that the proposed method can reconstruct 3D shapes with higher accuracy than previous methods.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_45-Three_Dimensional_Shape_Reconstruction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Bitrate and Power Spectral Density of PPM TH-IR UWB Signals using a Sub-Slot Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110244</link>
        <id>10.14569/IJACSA.2020.0110244</id>
        <doi>10.14569/IJACSA.2020.0110244</doi>
        <lastModDate>2020-02-29T14:32:39.4870000+00:00</lastModDate>
        
        <creator>Bashar Al-haj Moh’d</creator>
        
        <creator>Nidal Qasem</creator>
        
        <subject>Bitrate; FFT; PPM; PSD; spectral estimation; sub-slot; TH-IR; UWB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Increasing the receiver’s bitrate and suppressing spectral lines are issues of major interest in the design of compliant Time-Hopping Impulse Radio (TH-IR) Ultra-Wide Band (UWB) systems. Suppression of spectral lines has commonly been addressed by randomizing the position of each pulse to make the period as large as possible. Our analysis suggests that this influences the overall shape of a signal’s Power Spectral Density (PSD) in a way that is useful for spectral line suppression or for diminishing the maximum peak power of the PSD. A method for utilizing the system to generate a Dynamic-Location Pulse-Position Modulated (DLPPM) signal for transmission across a UWB communications channel is presented, and an analytical derivation of the PSD of the proposed DLPPM TH-IR UWB signal is introduced. Our proposed method can be applied without affecting the users of other concurrent applications. The theoretical PSD model for DLPPM TH-IR is compared with the PSD of conventional PPM TH-IR. The results show that spectral estimation methods based on the Fast Fourier Transform (FFT) significantly overestimate the continuous part of the PSD for small and medium signal lengths, which has implications for assessing interference margins by means of simulation. Another purpose of this paper is to improve a predesigned system by increasing the receiver’s bitrate. This is achieved by using the bits that control the sub-slot technique as information and designing a receiver capable of detecting them; the bitrate is effectively doubled. Finally, the proposed DLPPM TH-IR system has been built in Simulink/MATLAB and tested against a conventional PPM TH-IR system.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_44-Enhancing_the_Bitrate_and_Power_Spectral_Density.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Access Control Framework for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110243</link>
        <id>10.14569/IJACSA.2020.0110243</id>
        <doi>10.14569/IJACSA.2020.0110243</doi>
        <lastModDate>2020-02-29T14:32:39.4700000+00:00</lastModDate>
        
        <creator>Aissam Outchakoucht</creator>
        
        <creator>Anas Abou El Kalam</creator>
        
        <creator>Hamza Es-Samaali</creator>
        
        <creator>Siham Benhadou</creator>
        
        <subject>Access control; internet of things; machine learning; security; smart city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The main challenge facing the Internet of Things (IoT) in general, and IoT security in particular, is that humans have never handled such a huge number of nodes and quantity of data. Fortunately, it turns out that Machine Learning (ML) systems are very effective in the presence of these two elements. However, can IoT devices support ML techniques? In this paper, we investigated this issue and proposed a twofold contribution: a thorough study of the IoT paradigm and its intersections with ML from a security perspective; and a holistic ML-based framework for access control, the first line of defense in modern IT systems. In addition to learning techniques, this second pillar is based on organization and attribute concepts to avoid role-explosion problems, and it is applied to a smart city case study to prove its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_43-Machine_Learning_based_Access_Control_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managing an External Depot in a Production Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110242</link>
        <id>10.14569/IJACSA.2020.0110242</id>
        <doi>10.14569/IJACSA.2020.0110242</doi>
        <lastModDate>2020-02-29T14:32:39.4530000+00:00</lastModDate>
        
        <creator>Bi Koua&#239; Bertin Kay&#233;</creator>
        
        <creator>Moustapha Diaby</creator>
        
        <creator>Tchimou N’Takp&#233;</creator>
        
        <creator>Souleymane Oumtanaga</creator>
        
        <subject>Production; inventory; distribution; transport; branch-and-cut; decomposition heuristic; MIP; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper addresses a production and distribution problem in a supply chain. The supply chain consists of a plant with no storage capacity that produces only one type of product. The manufactured products are then transported to a depot for storage. Customer demand is met by a homogeneous fleet of vehicles that begin and end their trips at the depot. The objective of the study is to minimize the overall cost of production, inventory, and transport throughout the supply chain. A Branch-and-Cut algorithm and a hybrid Two-Phase Decomposition Heuristic using Mixed Integer Programming and a Genetic Algorithm have been developed to solve the problem.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_42-Managing_an_External_Depot_in_a_Production_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Social Network Sites Acceptance in e-Learning System: Students Perspective at Palestine Technical University-Kadoorie</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110241</link>
        <id>10.14569/IJACSA.2020.0110241</id>
        <doi>10.14569/IJACSA.2020.0110241</doi>
        <lastModDate>2020-02-29T14:32:39.4400000+00:00</lastModDate>
        
        <creator>Mohannad Moufeed Ayyash</creator>
        
        <creator>Fadi A.T. Herzallah</creator>
        
        <creator>Waleed Ahmad</creator>
        
        <subject>Social network sites; e-Learning system; perceived usefulness; perceived ease of use; perceived enjoyment; social influence; perceived information security; Palestine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This study aims to examine the acceptance of social network sites in an e-Learning system and to propose a model encompassing the determining factors that affect students’ intentions to use social network sites in the e-Learning system. The proposed model was built on the Technology Acceptance Model (TAM), perceived enjoyment, social influence, and perceived information security from the literature review. A quantitative method of data collection using a questionnaire survey was employed. The data were analysed using a structural equation modeling (SEM) approach with partial least squares (PLS) software version 3. The results indicated that perceived ease of use, perceived usefulness, perceived enjoyment, social influence, and perceived information security have a significant and positive impact on students’ acceptance of social network sites in an e-Learning system at Palestine Technical University-Kadoorie. Theoretical and practical implications are discussed.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_41-Toward_Social_Network_Sites_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Extent to which Individuals in Saudi Arabia are Subjected to Cyber-Attacks and Countermeasures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110240</link>
        <id>10.14569/IJACSA.2020.0110240</id>
        <doi>10.14569/IJACSA.2020.0110240</doi>
        <lastModDate>2020-02-29T14:32:39.4230000+00:00</lastModDate>
        
        <creator>Abdullah A H Alzahrani</creator>
        
        <subject>Cybercrimes; cybersecurity; identity theft; cyberattacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In light of the rapid development of technology and the increase in the number of Internet users via computers and smart devices, the impact of cybercrimes on enterprises, organizations, governments, and individuals has been significant. Research and reports on the impact of cybercrime and on methods of prevention and protection are introduced regularly; however, the majority focus on the impact on organizations and governments. This paper uses a survey methodology to highlight the impact of cybercrimes on individuals in Saudi Arabia and to measure the awareness of cybersecurity among individuals. In addition, this research investigates the common cybercrimes that target individuals in Saudi Arabia and the countermeasures they take.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_40-The_Extent_to_which_Individuals_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Image Fusion Scheme using Wavelet Transform for Concealed Weapon Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110239</link>
        <id>10.14569/IJACSA.2020.0110239</id>
        <doi>10.14569/IJACSA.2020.0110239</doi>
        <lastModDate>2020-02-29T14:32:39.3930000+00:00</lastModDate>
        
        <creator>Hanan A. Hosni Mahmoud</creator>
        
        <subject>Concealed weapon detection; image fusion; pixel alignment; wave sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The aim of this paper is to detect concealed weapons, especially in high-security places such as airports, train stations, and places with large crowds, where concealed weapons are not allowed. We aim to identify suspicious persons who may be carrying a concealed weapon. In this paper, an image fusion technique using pixel alignment and the discrete wavelet transform is proposed, utilized mainly for concealed weapon detection. Image fusion can be defined as extracting information from two or more images into a single image to enhance detection. Image fusion allows detecting concealed weapons underneath a person’s clothing with imaging sensors such as infrared imaging or Passive Millimeter Wave sensors. A data fusion scheme for simpler sensors based on correlation coefficients is proposed and utilized. We propose an image fusion scheme that applies fusion dependency rules using the wavelet transform (WT) and inverse wavelet transform (IWT). The fusion rule is to select the coefficient with the highest correlation rate: the higher the correlation, the stronger the co-existing feature. Experimental results show the superiority of the proposed algorithm in both quality and real-time requirements. The proposed algorithm has a real-time response time 40% lower than that of comparable algorithms while retaining higher quality, as shown in the experimental results; on average, its PSNR exceeds that of comparable algorithms by more than 10%.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_39-A_Novel_Image_Fusion_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Short Poem Generation (SPG): A Performance Evaluation of Hidden Markov Model based on Readability Index and Turing Test</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110238</link>
        <id>10.14569/IJACSA.2020.0110238</id>
        <doi>10.14569/IJACSA.2020.0110238</doi>
        <lastModDate>2020-02-29T14:32:39.3770000+00:00</lastModDate>
        
        <creator>Ken Jon M. Tarnate</creator>
        
        <creator>May M. Garcia</creator>
        
        <creator>Priscilla Sotelo-Bator</creator>
        
        <subject>Evaluation metrics; Hidden Markov Model; poetry generation; readability test; turing test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>We developed a Hidden Markov Model (HMM) that automatically generates short poems. The HMM was trained using the forward-backward algorithm, also known as the Baum-Welch algorithm. The training process ran for hundreds of iterations using a recursive method. We then used the Viterbi algorithm to decode the most likely hidden states to predict the next word; from each previously predicted word, the model generates another word, and then another, until it reaches the desired word length set in the program. Afterwards, the model was evaluated using several readability metrics that measure the reading difficulty and comprehensibility of the generated poems. We then performed a Turing Test in which 75 college students, all well versed in poetry, participated. They determined whether each generated poem was created by a human or a machine. Based on the evaluation results, the highest readability index score of the generated short poems is at the 16th-grade level, and 69.2% of the Turing Test participants agreed that most of the machine-generated poems were likely created by well-known poets and writers.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_38-Short_Poem_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Internet of Things for Crowd Panic Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110237</link>
        <id>10.14569/IJACSA.2020.0110237</id>
        <doi>10.14569/IJACSA.2020.0110237</doi>
        <lastModDate>2020-02-29T14:32:39.3600000+00:00</lastModDate>
        
        <creator>Habib Ullah</creator>
        
        <creator>Ahmed B. Altamimi</creator>
        
        <creator>Rabie A. Ramadan</creator>
        
        <subject>VANET; smart cities; crowd behavior; deep learning; Recurrent Neural Networks (RNN); Convolution Neural Networks (CNN); Scale Invariant Feature Transform (SIFT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Crowd behavior detection is important for smart city applications such as people gathering for different events. However, it is a challenging problem due to the internal states of the crowd itself and the surrounding environment. This paper proposes a novel crowd behavior detection framework based on a number of parameters. We first exploit a computer vision approach based on the scale invariant feature transform (SIFT) to classify crowd behavior as either panic or normal. We then consider a number of other parameters from the surroundings, namely crowd coherency, social interaction, motion information, randomness in crowd speed, internal chaos level, crowd condition, crowd temporal history, and crowd vibration status, along with a time stamp. Subsequently, these parameters are fed to a deep learning model during the training stage, and the behavior of the crowd is detected during the testing stage. The experimental results show that the proposed method achieves significant performance in terms of crowd behavior detection.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_37-The_Internet_of_Things_for_Crowd_Panic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Intrusion Detection System for SDWSN using Random Forest (RF) Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110236</link>
        <id>10.14569/IJACSA.2020.0110236</id>
        <doi>10.14569/IJACSA.2020.0110236</doi>
        <lastModDate>2020-02-29T14:32:39.3430000+00:00</lastModDate>
        
        <creator>Indira K</creator>
        
        <creator>Sakthi U</creator>
        
        <subject>SDWSN; IDS; Salp Swarm Optimization; Random Forest Classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>It is an established fact that network security systems have technical weaknesses that often lead to security risks. Attackers continue to exploit security vulnerabilities and compromise systems and networks, and it is expensive, and sometimes extremely difficult, to resolve all design and computing faults. This suggests that methodologies relying solely on preventive measures are no longer sufficient, and that intrusion detection is necessary as a last line of defense. In this paper, a hybrid Intrusion Detection System (IDS) for Software Defined Wireless Sensor Networks (SDWSN) is designed that incorporates the benefits of the Salp Swarm Optimization (SSO) algorithm and a machine learning classification method based on Random Forest (RF). We apply the SSO optimization procedure to select the ideal features for the intrusion detector and to improve the detection efficiency of the RF classifier. To assess the reliability of the proposed approach, we use the generic NSL-KDD dataset. The proposed hybrid IDS-SSO-RF classifier further analyzes detected abnormal activities, identifying both known and unknown attacks. The experimental results show that the hybrid framework can reliably detect anomalous behavior and obtains better results in terms of delay, delivery ratio, drop overhead, energy consumption, and throughput.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_36-A_Hybrid_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Fuzzy based Decision Support System for the Management of Human Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110235</link>
        <id>10.14569/IJACSA.2020.0110235</id>
        <doi>10.14569/IJACSA.2020.0110235</doi>
        <lastModDate>2020-02-29T14:32:39.3300000+00:00</lastModDate>
        
        <creator>Blessing Ekong</creator>
        
        <creator>Idara Ifiok</creator>
        
        <creator>Ifreke Udoeka</creator>
        
        <creator>James Anamfiok</creator>
        
        <subject>Human diseases; fuzzy based decision support system; human disease; fuzzy logic; C#</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>To eliminate some of the inaccuracies in the diagnosis of human diseases, decision support systems based on algorithms and technologies such as Artificial Neural Networks and Fuzzy Logic have been used. The results of such diagnoses are used for treatment and management purposes. Inaccurate and imprecise diagnosis may lead to wrong treatment methods, which in turn may result in death or complications. Although treatments are widely carried out using drugs, other treatment methods exist, such as alternative and complementary medicine, which could be used for treatment. We propose an Integrated Fuzzy Based Decision Support System that focuses on integrating both alternative and pure medicine for the management of malaria. The results obtained showed that integrating these two treatment and management methods eliminates the limitations of the individual methods, thereby bridging the gap between alternative and pure medicine in the treatment and management of human diseases. The system is implemented in C#.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_35-Integrated_Fuzzy_based_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Home System based on Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110234</link>
        <id>10.14569/IJACSA.2020.0110234</id>
        <doi>10.14569/IJACSA.2020.0110234</doi>
        <lastModDate>2020-02-29T14:32:39.2970000+00:00</lastModDate>
        
        <creator>Rihab Fahd Al-Mutawa</creator>
        
        <creator>Fathy Albouraey Eassa</creator>
        
        <subject>Internet of Things (IoT); smart home; system; architecture; security; management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The Internet of Things (IoT) describes a network infrastructure of identifiable things that share data through the Internet. A smart home is one of the applications for the Internet of Things. In a smart home, household appliances could be monitored and controlled remotely. This raises a demand for reliable security solutions for IoT systems. Authorization and authentication are challenging IoT security operations that need to be considered. For instance, unauthorized access, such as cyber-attacks, to a smart home system could cause danger by controlling sensors and actuators, opening the doors for a thief. This paper applies an extra layer of security of multi-factor authentication to act as a prevention method for mitigating unauthorized access. One of those factors is face recognition, as it has recently become popular due to its non-invasive biometric techniques, which is easy to use with cameras attached to most trending computers and smartphones. In this paper, the gaps in existing IoT smart home systems have been analyzed, and we have suggested improvements for overcoming them by including necessary system modules and enhancing user registration and log-in authentication. We propose software architecture for implementing such a system. To the best of our knowledge, the existing IoT smart home management research does not support face recognition and liveness detection within the authentication operation of their suggested software architectures.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_34-A_Smart_Home_System_based_on_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Anomaly Detection Accuracy of Host-based Intrusion Detection Systems based on Different Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110233</link>
        <id>10.14569/IJACSA.2020.0110233</id>
        <doi>10.14569/IJACSA.2020.0110233</doi>
        <lastModDate>2020-02-29T14:32:39.2830000+00:00</lastModDate>
        
        <creator>Yukyung Shin</creator>
        
        <creator>Kangseok Kim</creator>
        
        <subject>Anomaly detection; host based intrusion detection system; system calls; cyber security; machine learning; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Among the different host-based intrusion detection systems, an anomaly-based intrusion detection system detects attacks based on deviations from normal behavior; however, such a system has a low detection rate. Therefore, several studies have been conducted to increase the accurate detection rate of anomaly-based intrusion detection systems; recently, some of these studies involved the development of intrusion detection models using machine learning algorithms to overcome the limitations of existing anomaly-based intrusion detection methodologies as well as signature-based intrusion detection methodologies. In a similar vein, in this study, we propose a method for improving the intrusion detection accuracy of anomaly-based intrusion detection systems by applying various machine learning algorithms for classification of normal and attack data. To verify the effectiveness of the proposed intrusion detection models, we use the ADFA Linux Dataset which consists of system call traces for attacks on the latest operating systems. Further, for verification, we develop models and perform simulations for host-based intrusion detection systems based on machine learning algorithms to detect and classify anomalies using the Arena simulation tool.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_33-Comparison_of_Anomaly_Detection_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Language Plagiarism Detection using Word Embedding and Inverse Document Frequency (IDF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110231</link>
        <id>10.14569/IJACSA.2020.0110231</id>
        <doi>10.14569/IJACSA.2020.0110231</doi>
        <lastModDate>2020-02-29T14:32:39.2500000+00:00</lastModDate>
        
        <creator>Hanan Aljuaid</creator>
        
        <subject>NLP; cross-language plagiarism detection; word embedding; similarity detection; IDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The purpose of cross-language textual similarity detection is to approximate the similarity of two textual units in different languages. This paper embeds the distributed representation of words in cross-language textual similarity detection using word embedding and IDF. The paper introduces a novel cross-language plagiarism detection approach constructed from the distributed representation of words in sentences. To improve textual similarity, a novel method called CL-CTS-CBOW is used. The approach is further improved by adding syntactic features through a novel method called CL-WES, and then by an IDF weighting method. The corpora used in this study are four Arabic-English corpora, specifically books, Wikipedia, EAPCOUNT, and MultiUN, which contain more than 10,017,106 sentences and include both parallel and comparable collections. The proposed method combines different methods to confirm their complementarity. In the experiments, the proposed system obtains 88% English-Arabic similarity detection at the word level and 82.75% at the sentence level across the various corpora.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_31-Cross_Language_Plagiarism_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of LoRa Performance in Monitoring of Patient’s SPO2 and Heart Rate based IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110232</link>
        <id>10.14569/IJACSA.2020.0110232</id>
        <doi>10.14569/IJACSA.2020.0110232</doi>
        <lastModDate>2020-02-29T14:32:39.2500000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>Pulse; heart rate; adaptive; data rate; long range; bitrate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>In this research, the sensor equipped with blood oxygen saturation (SpO2) and heart rate functions is the MH-ET Live MAX30102 sensor, used with the MAX30105 library. The advantage of this sensor is its compatibility with the ATmega328P, i.e., the Arduino board; the first experiment used an Arduino Uno. The MH-ET sensor data are integrated with Wireless Sensor Network (WSN) devices, e.g., LoRa (Long Range) at 915 MHz, and the WSN path loss is calculated when sending sensor data in mountainous areas. The model used to represent the signal analysis and measurements in this study is the Ground Reflection (2-ray) model. The scenario considered is one in which patients send their data over hilly areas to a hospital or medical facility, called the receiving node or coordinator node, located in a much lower area; in the same situation, routers are added to compare whether the data are sent faster or whether there is no impact. This study is thus expected to provide clear results on the function of the router as a relay for pulse sensor data. The point is that for patients in higher areas who cannot be moved because of their condition, the SpO2 and heart rate data are expected to be known quickly by the medical authorities through the sensor node device attached to the patient&#39;s body. The Adaptive Data Rate (ADR) algorithm is used to optimize the data rate, time on air (ToA), and energy consumption in the network; therefore, the End Device (ED) in the ADR algorithm must be static (non-mobile). The ADR algorithm is measured in the position of sending data (uplink) of n bits to n gateways. The application server used is ThingSpeak or The Things Network (TTN).</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_32-A_Study_of_LoRa_Performance_in_Monitoring_of_Patients_SPO2.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Social Network Analysis to Understand Public Discussions: The Case Study of #SaudiWomenCanDrive on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110230</link>
        <id>10.14569/IJACSA.2020.0110230</id>
        <doi>10.14569/IJACSA.2020.0110230</doi>
        <lastModDate>2020-02-29T14:32:39.2200000+00:00</lastModDate>
        
        <creator>Zubaida Jastania</creator>
        
        <creator>Mohammad Ahtisham Aslam</creator>
        
        <creator>Rabeeh Ayaz Abbasi</creator>
        
        <creator>Kawther Saeedi</creator>
        
        <subject>Social network analysis; twitter; public discussion; network science</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Social media analytics has experienced significant growth over the past few years due to the crucial importance of analyzing and measuring public social behavior on different social networking sites. Twitter is one of the most popular social networks and means of online news, allowing users to express their views and participate in a wide range of issues around the world. Opinions expressed on Twitter are based on diverse experiences and represent a broad set of valuable data that can be analyzed and used for many purposes. This study aims to understand the public discussions conducted on Twitter about essential topics and to develop an analytics framework for analyzing these discussions. The focus of this research is the analysis of Arabic public discussions using the hashtag #SaudiWomenCanDrive, one of the hottest trends in Twitter discussions. The proposed framework analyzed more than two million tweets using methods from social network analysis. The framework uses graph centrality metrics to reveal essential people in the discussion and community detection methods to identify the communities and topics involved. Results show that @SaudiNews50, @Algassabinasser, and @Abdulrahman were top users in two networks, while @KingSalman and @LoujainHathloul were the top two users in another network. Consequently, the “King Salman” and “Loujain Hathloul” Twitter accounts were identified as influencers, whereas “Saudi News” and “Algassabi Nasser” were the leading distributors of the news. Similar behavior in other public discussions could therefore be analyzed using the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_30-Using_Social_Network_Analysis_to_Understand_Public_Discussions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Morphological Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110229</link>
        <id>10.14569/IJACSA.2020.0110229</id>
        <doi>10.14569/IJACSA.2020.0110229</doi>
        <lastModDate>2020-02-29T14:32:39.2030000+00:00</lastModDate>
        
        <creator>Ameerah Alothman</creator>
        
        <creator>AbdulMalik Alsalman</creator>
        
        <subject>Arabic analyzer; Arabic lexicon; classification morphology; morphological analysis; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Recently, activity surrounding Arabic natural language processing has increased significantly. Morphological analysis is the basis of most tasks related to Arabic natural language processing. There are many scientific studies on Arabic morphological analysis, yet most of them lack an accurate classification of Arabic morphology and fail to cover both recent and traditional techniques. This paper aims to survey Arabic morphological analysis techniques from 2005 to 2019 and to organize them into a reasonable and expandable classification system. To facilitate and support new research, this paper compares the currently available Arabic morphological analyzers, reaches certain conclusions, and proposes some promising directions for future research in Arabic morphological analysis.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_29-Arabic_Morphological_Analysis_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Changes of Multiple Sclerosis Lesions on T2-FLAIR MRI using Digital Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110227</link>
        <id>10.14569/IJACSA.2020.0110227</id>
        <doi>10.14569/IJACSA.2020.0110227</doi>
        <lastModDate>2020-02-29T14:32:39.1730000+00:00</lastModDate>
        
        <creator>Marwan A. A. Hamid</creator>
        
        <creator>Walid Al-haidri</creator>
        
        <creator>Waseemullah</creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Bilal Ahmed Usmani</creator>
        
        <creator>Syed. M. Wasim Raza</creator>
        
        <subject>Multiple sclerosis; T2-FLAIR; magnetic resonance imaging; digital image processing; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Multiple Sclerosis (MS) is a complex autoimmune neurological disease affecting the myelin sheath of the nervous system. There are about 2.5 million MS patients in the world, with a high prevalence in South and East Asia. The disease affects young and middle-aged people. MS can be fatal, and the number and volume of MS lesions can be used to determine the severity of the disease and track its progression. Detecting multiple sclerosis in MRI images is a critical problem because MS frequently involves lesions that may appear on a scan at one time point and be absent at subsequent time points. Moreover, on T2-FLAIR MRI images, MS most often manifests as focal changes in the substance of the brain and spinal cord, which complicates their dynamic monitoring from MRI data. Detecting and extracting the features of MS lesions is not only a tedious and time-consuming process but also requires experts and trained physicians, so computer-aided tools are very important for overcoming these obstacles. In this paper, we present a novel computer-aided approach based on digital image processing methods for enhancing structures, removing undesired signals, segmenting the MS lesions from the background, and finally measuring the size of MS lesions to provide information about the current status of MS, i.e., whether lesions are new, growing, or shrinking. According to the presented results, the accuracy of the proposed methodology was 96%; the remaining inaccuracy is related to errors in segmentation.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_27-Dynamic_Changes_of_Multiple_Sclerosis_Lesions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scientific VS Non-Scientific Citation Annotational Complexity Analysis using Machine Learning Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110228</link>
        <id>10.14569/IJACSA.2020.0110228</id>
        <doi>10.14569/IJACSA.2020.0110228</doi>
        <lastModDate>2020-02-29T14:32:39.1730000+00:00</lastModDate>
        
        <creator>Hassan Raza</creator>
        
        <creator>M. Faizan</creator>
        
        <creator>Naeem Akhtar</creator>
        
        <creator>Ayesha Abbas</creator>
        
        <creator>Naveed-Ul-Hassan</creator>
        
        <subject>Classification; machine learning; sentiment analysis; scientific citations; non-scientific citation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper evaluates the annotation complexity of citation sentences in both scientific and non-scientific articles to identify the major causes of complexity, performing sentiment analysis on articles from each domain using corpora that we developed separately for the two domains. For this research, we selected different data sources to prepare our corpora for sentiment analysis. We then performed a manual annotation procedure to assign polarities according to our defined annotation guidelines. We developed a classification system to check the quality of the annotation work for both domains. The results show that the scientific domain yielded more accurate results than the non-scientific domain. We also explored the reasons for the less accurate results and concluded that non-scientific text, especially linguistic text, is of a complex nature that leads to poor understanding and incorrect annotation.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_28-Scientific_VS_Non_Scientific_Citation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Participation Model for Kuwait e-Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110226</link>
        <id>10.14569/IJACSA.2020.0110226</id>
        <doi>10.14569/IJACSA.2020.0110226</doi>
        <lastModDate>2020-02-29T14:32:39.1570000+00:00</lastModDate>
        
        <creator>Zainab M. Aljazzaf</creator>
        
        <creator>Sharifa Ayad Al-Ali</creator>
        
        <creator>Muhammad Sarfraz</creator>
        
        <subject>e-Government; e-participation; e-participation factors; e-participation model; e-information; e-consultation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The Internet influences every aspect of modern life. Increasing interest in e-government has led to an increase in public expenditure on communication technologies. Technology provides and facilitates opportunities for citizens to interact with e-government, so-called e-participation. In fact, it increases citizens’ involvement in service delivery, administration, and decision making. People need to engage and participate in e-government for it to achieve its objectives. The e-government literature has explored the factors that influence people to participate in e-government; however, the study of e-participation is new in Kuwait. Therefore, this paper aims to identify the critical factors affecting e-participation in Kuwait. To attain the purpose of the research study, a conceptual model was developed with the context of Kuwaiti society in view. A questionnaire was then designed and used to test the conceptual model. The results indicate that technical factors, social influence, political factors, perceived usefulness, and perceived ease of use are the significant factors that influence citizens’ intention to participate in Kuwait e-government. Consequently, the results of this study should be adopted by the government to enhance e-participation in Kuwait e-government.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_26-E_Participation_Model_for_Kuwait_E_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teachers’ Experiences in the Development of Digital Storytelling for Cyber Risk Awareness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110225</link>
        <id>10.14569/IJACSA.2020.0110225</id>
        <doi>10.14569/IJACSA.2020.0110225</doi>
        <lastModDate>2020-02-29T14:32:39.1430000+00:00</lastModDate>
        
        <creator>Fariza Khalid</creator>
        
        <creator>Tewfiq El-Maliki</creator>
        
        <subject>Cybersecurity; awareness; education; video; digital storytelling; media; case study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Although the Internet has positively impacted people&#39;s lives, it also has its dark side. There have been reports of an increase in cases of violence, racial abuse, cyber-bullying, online fraud, addiction to gaming and gambling, and pornography. A vital issue has emerged: Internet users still lack awareness of these online risks. In this study, our respondents were involved in developing educational videos on cyber risk topics using a storytelling approach. The participants in this study were 28 in-service teachers who took a master’s class on Resource and Information Technology. This study aims to examine the issues that participants took into consideration while planning and developing their digital stories, and their experiences developing digital stories about cyber risks. The data was collected using written reflections and then analyzed thematically using NVivo software. The findings indicate how the respondents valued their experience in planning, developing, and evaluating their storytelling videos. The impact of learning from the videos on the students’ affective domain is also discussed. We further discuss the benefits of the storytelling approach for behavior change.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_25-Teachers_Experiences_in_the_Development_of_Digital_Storytelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Exhaustive Evaluation of Eager Machine Learning Algorithms for Classification of Hindi Verses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110224</link>
        <id>10.14569/IJACSA.2020.0110224</id>
        <doi>10.14569/IJACSA.2020.0110224</doi>
        <lastModDate>2020-02-29T14:32:39.1100000+00:00</lastModDate>
        
        <creator>Prafulla B. Bafna</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <subject>Classification; eager machine learning algorithm; Hindi; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Implementing supervised machine learning on a Hindi corpus for the classification and prediction of verses is an untouched and useful area. Classification and prediction benefit many applications, such as organizing a large corpus and information retrieval. The metalinguistic facility provided by websites makes Hindi a major language in today’s digital domain of information technology. Text classification algorithms, together with Natural Language Processing (NLP), facilitate fast, cost-effective, and scalable solutions. Evaluating the performance of these predictors is a challenging task. Classification of text data is important for reducing the manual effort and time spent reading documents. In this paper, 697 Hindi poems are classified into four topics using four eager machine-learning algorithms. In the absence of any other technique that achieves prediction on a Hindi corpus, misclassification error is used for comparison to demonstrate the merit of the technique. The support vector machine performs best among all.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_24-On_Exhaustive_Evaluation_of_Eager_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of a 7-Level Inverter-based Electric Spring Subjected to Distribution Network Dynamics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110223</link>
        <id>10.14569/IJACSA.2020.0110223</id>
        <doi>10.14569/IJACSA.2020.0110223</doi>
        <lastModDate>2020-02-29T14:32:39.0930000+00:00</lastModDate>
        
        <creator>K. K. Deepika</creator>
        
        <creator>G. Kesava Rao</creator>
        
        <creator>J. Vijaya Kumar</creator>
        
        <creator>Satya Ravi Sankar Rai</creator>
        
        <subject>Electric spring; critical load; multilevel inverter; voltage balancing circuit; voltage regulation </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper aims to provide a solution to mitigate the voltage variations at a critical load caused by the high penetration of DGs into the distribution system, using Electric Springs (ES). In this regard, the ES needs to be explored with various converter circuits. The improved topology opens new avenues in renewable-energy-powered microgrids for implementing an ES with a Multi-Level Inverter (MLI) comprising a voltage balancing circuit, providing better power system stability and voltage regulation. This paper captures the voltage dynamics of a distribution system dominated by renewable variability, for varying reactive power of the DGs and constantly changing consumer demands. These are analyzed and explained using voltage profiles and power flows in the Matlab/Simulink environment. It is shown that, with the developed ES topology, the %THD in the system is conspicuously reduced and voltage regulation is seamlessly improved.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_23-Investigation_of_a_7_Level_Inverter_Based_Electric_Spring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Deep Learning Model for Financial Distress Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110222</link>
        <id>10.14569/IJACSA.2020.0110222</id>
        <doi>10.14569/IJACSA.2020.0110222</doi>
        <lastModDate>2020-02-29T14:32:39.0630000+00:00</lastModDate>
        
        <creator>Magdi El-Bannany</creator>
        
        <creator>Meenu Sreedharan</creator>
        
        <creator>Ahmed M. Khedr</creator>
        
        <subject>Financial distress prediction; multi-layer perceptron; long short-term memory; convolutional neural network; deep neural network; optimized deep learning model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This paper investigates the ability of deep learning networks to predict financial distress. The study uses three deep learning models, namely, the Multi-layer Perceptron (MLP), Long Short-term Memory (LSTM), and Convolutional Neural Networks (CNN). In the first phase of the study, different optimization techniques are applied to each model, creating different model structures, to generate the best model for prediction. The top results are presented and analyzed with various optimization parameters. In the second phase, MLP, the best classifier identified in the first phase, is further optimized through variations in architectural configuration. This study thus identifies a robust deep neural network model for financial distress prediction with the best optimization parameters. The prediction performance is evaluated using different real-world datasets, one containing samples from Kuwaiti companies and another with samples of companies from GCC countries. We used resampling in all experiments in this study to obtain the most accurate and unbiased results. The simulation results show that the proposed deep network model far exceeds classical machine learning models in predictive accuracy. Based on the experiments, guidelines are provided for practitioners to generate a robust model for financial distress prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_22-A_Robust_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Plant Disease and Insect Attack using EFFTA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110221</link>
        <id>10.14569/IJACSA.2020.0110221</id>
        <doi>10.14569/IJACSA.2020.0110221</doi>
        <lastModDate>2020-02-29T14:32:39.0470000+00:00</lastModDate>
        
        <creator>Kapilya Gangadharan</creator>
        
        <creator>G. Rosline Nesa Kumari</creator>
        
        <creator>D. Dhanasekaran</creator>
        
        <creator>K. Malathi</creator>
        
        <subject>Texture analysis; features; computer vision; inter-class; intra-class</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The diagnosis of plant disease by computer vision using digital image processing methodology is key to the timely intervention and treatment needed for a healthy agricultural procedure and to increasing yield by natural means. Timely treatment of these ailments can be the difference between the preservation and the perishing of an ecosystem. To make the system more efficient and feasible, we propose an algorithm called Enhanced Fusion Fractal Texture Analysis (EFFTA). The proposed method consists of a feature-fusion technique that combines SIFT (Scale Invariant Feature Transform) with DWT (Discrete Wavelet Transform)-based SFTA (Segment-Based Fractal Texture Analysis). An image as a whole can be recognized by shape, texture, and color. SIFT is used to detect texture features; it extracts a set of descriptors that are very useful in local texture recognition and captures accurate key points for detecting the diseased area. Further texture extraction is performed by the WSFTA method, which adopts both intra-class and inter-class analysis. The extracted features are trained using a back-propagation neural network. The approach improves the success rate and accuracy of extraction and provides higher precision and efficiency compared to other traditional methods.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_21-Automatic_Detection_of_Plant_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT and Blockchain in the Development of Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110220</link>
        <id>10.14569/IJACSA.2020.0110220</id>
        <doi>10.14569/IJACSA.2020.0110220</doi>
        <lastModDate>2020-02-29T14:32:39.0170000+00:00</lastModDate>
        
        <creator>Laith T. Khrais</creator>
        
        <subject>Internet of things; blockchain technologies; smart cities; emerging markets; electronic commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>With the advent and proliferation of the internet, the fourth industrial revolution is in full swing. As a result, different technologies have the potential to impact the course of human development. In other words, worldwide populations are moving towards growing urban centers and as a result, smart cities are emerging as the integration of human activities and technologies. These smart cities are built on top of different technologies such as blockchain and the Internet of Things (IoT). Consequently, the applications of these technologies in current and future smart cities will not only change the nature of human interaction and governance but also how business is conducted. This paper proposes an experimental study (qualitative and quantitative) that will determine the impact of blockchain and IoT technologies on the development of smart cities. It aims to derive insight from questions such as how current business models are preparing themselves for this disruption, the challenges they will face, and the potential contributions the two technologies will have on business development. The study’s outcomes will provide the rationale for why businesses should start paying attention to these technologies and start on an early adoption plan that will slowly transform their business models as smart cities mature.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_20-IoT_and_Blockchain_in_the_Development_of_Smart_Cities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Framework for Developing an ERP Module for Quality Management and Academic Accreditation at Higher Education Institutions: The Case of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110219</link>
        <id>10.14569/IJACSA.2020.0110219</id>
        <doi>10.14569/IJACSA.2020.0110219</doi>
        <lastModDate>2020-02-29T14:32:39.0000000+00:00</lastModDate>
        
        <creator>Mohammad Samir Abdel-Haq</creator>
        
        <subject>Quality assurance; academic accreditation framework; ERP module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Universities in Saudi Arabia give high priority to implementing quality systems and achieving international and local academic accreditation, especially NCAAA accreditation. Academic accreditation standards require a set of data, documents, reports, and evidence distributed among the various departments of the university (academic and non-academic), which must be provided periodically and annually. These universities therefore need an integrated system spanning these departments to cover the requirements of quality assurance and academic accreditation. On the other hand, although ERP systems exist that suit the organizational environment of a university, these systems do not implement a module covering quality assurance or academic accreditation requirements. This study proposes a framework that includes an ERP module for quality and academic accreditation requirements. The module facilitates the collection of required data from various sources within the university and helps provide the necessary reports and statistics, organizing the processes needed for quality and accreditation by providing a dedicated database and linking with the other subsystems at the university.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_19-Conceptual_Framework_for_Developing_an_ERP_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SentiFilter: A Personalized Filtering Model for Arabic Semi-Spam Content based on Sentimental and Behavioral Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110218</link>
        <id>10.14569/IJACSA.2020.0110218</id>
        <doi>10.14569/IJACSA.2020.0110218</doi>
        <lastModDate>2020-02-29T14:32:38.9870000+00:00</lastModDate>
        
        <creator>Mashael M. Alsulami</creator>
        
        <creator>Arwa Yousef AL-Aama</creator>
        
        <subject>Personalization; sentiment analysis; behavioral analysis; spam detection; recommendation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Unwanted content in online social network services is a substantial issue that is continuously growing and negatively affecting the user-browsing experience. Current practices do not provide personalized solutions that meet each individual’s needs and preferences. Therefore, there is a potential demand to provide each user with a personalized level of protection against what he/she perceives as unwanted content. Thus, this paper proposes a personalized filtering model, which we named SentiFilter. It is a hybrid model that combines both sentimental and behavioral factors to detect unwanted content for each user towards pre-defined topics. An experiment involving 80,098 Twitter messages from 32 users was conducted to evaluate the effectiveness of the SentiFilter model. The effectiveness was measured in terms of the consistency between the implicit feedback derived from the SentiFilter model towards five selected topics and the explicit feedback collected explicitly from participants towards the same topics. Results reveal that commenting behavior is more effective than liking behavior to detect unwanted content because of its high consistency with users’ explicit feedback. Findings also indicate that sentiment of users’ comments does not reflect users’ perception of unwanted content. The results of implicit feedback derived from the SentiFilter model accurately agree with users’ explicit feedback by the indication of the low statistical significance difference between the two sets. The proposed model is expected to provide an effective automated solution for filtering semi-spam content in favor of personalized preferences.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_18-SentiFilter_A_Personalized_Filtering_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review and Development Methodology of a LightWeight Security Model for IoT-based Smart Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110217</link>
        <id>10.14569/IJACSA.2020.0110217</id>
        <doi>10.14569/IJACSA.2020.0110217</doi>
        <lastModDate>2020-02-29T14:32:38.9530000+00:00</lastModDate>
        
        <creator>Mathuri Gurunathan</creator>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <subject>Lightweight; security mode; IoT; smart devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The Internet of Things (IoT) marks a new era of the Internet comprising smart objects connected over the Internet. IoT has numerous applications, for example, smart cities, smart homes, smart grids, and healthcare. In general, an IoT system comprises heterogeneous devices that produce and exchange vast amounts of safety-critical as well as privacy-sensitive data. Connected devices can give a business a genuine boost, yet anything connected to the Internet can be vulnerable to cyberattacks. Most present IoT solutions rely on a centralized architecture that connects to cloud servers through the Internet. The public cloud refers to computing services offered by third-party providers over the Internet, making them available to anyone who wants to use or purchase them. This solution provides excellent elastic computation and data management capabilities; however, as IoT systems grow increasingly complex, it still faces various security issues. One weakness is that data moving between IoT devices via the public cloud can be at risk even when the hacker is not explicitly targeting you, and with the public cloud you have minimal control over how quickly the cloud can be expanded. In this case, a secure protocol in IoT is vital to ensure optimum security for the information exchanged between connected devices. To overcome these limitations, in this paper, we conduct a comprehensive review of existing security protocols and propose a development methodology for a blockchain-based lightweight security model that provides end-to-end security. Using the lightweight model, an authenticated client can access the data of IoT sensors remotely. The performance analysis shows that the lightweight model offers better security, less overhead, and low communication cost.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_17-A_Review_and_Development_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Analysis of an Arithmetic and Logic Unit using Single Electron Transistor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110216</link>
        <id>10.14569/IJACSA.2020.0110216</id>
        <doi>10.14569/IJACSA.2020.0110216</doi>
        <lastModDate>2020-02-29T14:32:38.9230000+00:00</lastModDate>
        
        <creator>Aarthy M</creator>
        
        <creator>Sriadibhatla Sridevi</creator>
        
        <subject>Single electron transistor; reversible logic gates; low power; speed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The demand for low power dissipation and increasing speed elicits numerous research efforts in the field of nano-CMOS technology. The Arithmetic Logic Unit is the core of any central processing unit. In this paper, we design a 4-bit Arithmetic and Logic Unit (ALU) using the Single Electron Transistor (SET). The SET is a new type of switching nanodevice that uses controlled single-electron tunneling to amplify the current. It is highly scalable and possesses ultra-low power consumption when compared to conventional semiconductor devices. Reversible logic gates designed using SETs are used to perform the 4-bit arithmetic operations. We modelled a symmetric single-gate SET operating at room temperature using Verilog-A code. The design is carried out in the Cadence simulation environment. The 4-bit SET-based ALU design exhibits a power consumption of 0.52 nW and a delay of 350 ps.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_16-Design_and_Analysis_of_an_Arithmetic_and_Logic_Unit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast FPGA Prototyping based Real-Time Image and Video Processing with High-Level Synthesis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110215</link>
        <id>10.14569/IJACSA.2020.0110215</id>
        <doi>10.14569/IJACSA.2020.0110215</doi>
        <lastModDate>2020-02-29T14:32:38.9070000+00:00</lastModDate>
        
        <creator>Refka Ghodhbani</creator>
        
        <creator>Layla Horrigue</creator>
        
        <creator>Taoufik Saidani</creator>
        
        <creator>Mohamed Atri</creator>
        
        <subject>High-level synthesis; FPGA; fast prototyping; real-time image processing; video surveillance; computer-aided design; model-based design; HDL coder; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Programming at a high level of abstraction is known for its benefits: it can facilitate the development of digital image and video processing systems. Recently, high-level synthesis (HLS) has played a significant role in advancing this field of study. Real-time image and video processing solutions requiring high throughput rates are often implemented in dedicated hardware such as FPGAs. Previous studies relied on traditional design flows based on VHDL and Verilog to synthesize and validate the hardware. These flows are technically complex and time consuming. This paper introduces an alternative novel approach. It uses a Model-Based Design (MBD) workflow based on HDL Coder, Vision HDL Toolbox, Simulink, and MATLAB for the purpose of accelerating the design of image and video solutions. The main purpose of the present paper is to study the complexity of the design development and minimize the development time (time to market: TM) of conventional FPGA design. With this approach, the development time can effectively be reduced by 60% by automatically generating the IP cores and downloading the modeled design through the Xilinx tools, which also highlights the advantages of FPGAs over other devices such as ASICs and GPUs.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_15-Fast_FPGA_Prototyping_based_Real_Time_Image_and_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection in Text Clustering Applications of Literary Texts: A Hybrid of Term Weighting Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110214</link>
        <id>10.14569/IJACSA.2020.0110214</id>
        <doi>10.14569/IJACSA.2020.0110214</doi>
        <lastModDate>2020-02-29T14:32:38.8600000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <subject>Feature selection; frequency; PCA; term weight; text clustering; TF-IDF; variance; VSC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Recent years have witnessed an increasing use of automated text clustering approaches, and more particularly Vector Space Clustering (VSC) methods, in the computational analysis of literary data, including genre classification, theme analysis, stylometry, and authorship attribution. In spite of the effectiveness of VSC methods in resolving different problems in these disciplines and providing evidence-based research findings, the problem of feature selection remains a challenging one. For reliable text clustering applications, a clustering structure should be based on only and all the most distinctive features within a corpus. Although different term weighting approaches have been developed, the problem of identifying the most distinctive variables within a corpus remains challenging, especially in document clustering applications for literary texts. For this purpose, this study proposes a hybrid of statistical measures, including variance analysis, term frequency-inverse document frequency (TF-IDF), and Principal Component Analysis (PCA), for selecting only and all the most distinctive features, generating more reliable document clustering that can be usefully applied to authorship attribution tasks. The study is based on a corpus of 74 novels written by 18 novelists representing different literary traditions. Results indicate that the proposed model proved effective in extracting the most distinctive features within the datasets and thus generating reliable clustering structures that can be usefully applied in different computational analyses of literary texts.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_14-Feature_Selection_in_Text_Clustering_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering the Relationship between Heat-Stress Gene Expression and Gene SNPs Features using Rough Set Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110213</link>
        <id>10.14569/IJACSA.2020.0110213</id>
        <doi>10.14569/IJACSA.2020.0110213</doi>
        <lastModDate>2020-02-29T14:32:38.7370000+00:00</lastModDate>
        
        <creator>Heba Zaki</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <creator>Amr Ahmed Badr</creator>
        
        <creator>Ahmed Farouk Al-Sadek</creator>
        
        <subject>RNA sequencing (RNA-seq); variant calling; Single Nucleotide Polymorphisms (SNPs) analysis; rough sets; gene expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Over the years of applying machine learning in bioinformatics, we have learned that scientists working in many areas of the life sciences call for deeper knowledge of the modeled phenomenon than just the information used to classify objects with a certain quality. As a dynamic measure of gene activity, transcriptome profiling by RNA sequencing (RNA-seq) is becoming increasingly popular; it not only measures gene expression but also captures structural variations such as mutations and fusion transcripts. Moreover, Single Nucleotide Polymorphisms (SNPs) are of great potential in genetics, breeding, ecological, and evolutionary studies. Rough sets can be successfully employed to tackle various problems such as gene expression clustering and classification. This study provides general guidelines for accurate SNP discovery from RNA-seq data. Those SNP annotations are used to find the relationship between their biological features and the differential expression of the genes to which those SNPs belong. Rough sets are utilized to express this kind of relationship as a finite set of rules. A set of 32 generated rules showed good results in terms of strength, certainty, and coverage evaluation measures. This strategy is applied to the analysis of SNPs in the A. thaliana plant under heat stress.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_13-Discovering_the_Relationship_between_Heat_Stress.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Information Technologies on HR Effectiveness: A Case of Azerbaijan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110212</link>
        <id>10.14569/IJACSA.2020.0110212</id>
        <doi>10.14569/IJACSA.2020.0110212</doi>
        <lastModDate>2020-02-29T14:32:38.7200000+00:00</lastModDate>
        
        <creator>Aida Guliyeva</creator>
        
        <creator>Ulviyya Rzayeva</creator>
        
        <creator>Aygun Abdulova</creator>
        
        <subject>HR effectiveness; recruitment needs; maintenance and development tasks; management and planning tasks; performance of enterprises and banks; human capital; return on investment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This article explores the impact of implementing information technologies (IT) in the work of human resource departments to increase their effectiveness. Modern enterprise relations require this key network-based unit to be a strategic, flexible, cost-effective, and service-oriented division of the organization. Although the influence of IT on Human Resources Management (HRM) has been a focus of scientists’ attention, no empirical research has been conducted in this area in Azerbaijan. The authors use the experience and initiatives of enterprises and national banks to show the current state and results of the implementation of IT in HRM. The obtained data show that IT is not widely used in Azerbaijani organizations to perform HRM functions. The results also show that, although IT should have a certain impact on all sectors in terms of HRM, the types of IT used should vary significantly across recruitment, maintenance, and development tasks.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_12-Impact_of_Information_Technologies_on_HR_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Document Length Variation in the Vector Space Clustering of News in Arabic: A Comparison of Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110211</link>
        <id>10.14569/IJACSA.2020.0110211</id>
        <doi>10.14569/IJACSA.2020.0110211</doi>
        <lastModDate>2020-02-29T14:32:38.6900000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Wafya Ibrahim Hamouda</creator>
        
        <subject>Arabic; document length; news clustering; semantic similarity; TF-IDF; VSC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>This article is concerned with addressing the effect of document length variation on measuring semantic similarity in the text clustering of news in Arabic. Despite the development of different approaches for addressing the issue, there is no strong conclusion recommending any one approach. Furthermore, many of these have not been tested for the clustering of news in Arabic. The problem is that different length normalization methods can yield different analyses of the same data set, and there is no obvious way of selecting the best one. The choice of an inappropriate method, however, has negative impacts on the accuracy, and thus the reliability, of clustering performance. Given the lack of agreement and disparity of opinions, we set out to comprehensively evaluate the existing normalization techniques in order to establish empirically which one is best for normalizing text length and improving the text clustering performance of news in Arabic. For this purpose, a corpus of 693 stories representing different categories and of different lengths is designed. The data is analyzed using different document length normalization methods along with vector space clustering (VSC), and the analysis whose clustering structure agrees most closely with the bibliographic information of the news stories is selected. The analysis of the data indicates that the clustering structure based on the byte length normalization method is the most accurate one. One main problem with this method, however, is that the lexical variables within the data set are not ranked, which makes it difficult to retain only the most distinctive lexical features for generating clustering structures based on semantic similarity. Thus, the study proposes the integration of TF-IDF for ranking the words within all the documents so that only those with the highest TF-IDF values are retained. It can finally be concluded that the proposed model proved effective in improving the function of the byte normalization method and thus the performance and reliability of news clustering in Arabic. The findings of the study can also be extended to IR applications in Arabic. The proposed model can be usefully applied to support the performance of Arabic retrieval systems in finding the most relevant documents for a given query based on semantic similarity rather than document length.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_11-Document_Length_Variation_in_the_Vector_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Microservices based Approach for City Traffic Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110209</link>
        <id>10.14569/IJACSA.2020.0110209</id>
        <doi>10.14569/IJACSA.2020.0110209</doi>
        <lastModDate>2020-02-29T14:32:38.6570000+00:00</lastModDate>
        
        <creator>Toma Becea</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>Traffic simulation; microservices; distributed computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The paper proposes a city traffic software simulation based on actors which run independently of one another and have specific characters in their behavior. To run independently, actors are modeled as microservices running within an orchestration framework. Their behavior is modeled with specific algorithms for each actor type, embedded in each actor type’s code. Actors may act based on data about all the other actors, which is gathered by a single entity called the city simulator. An orchestration model is proposed, and all the actors use a communication protocol to offer data to the city simulator and request data from it.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_9-A_Microservices_based_Approach_for_City_Traffic_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genres and Actors/Actresses as Interpolated Tags for Improving Movie Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110210</link>
        <id>10.14569/IJACSA.2020.0110210</id>
        <doi>10.14569/IJACSA.2020.0110210</doi>
        <lastModDate>2020-02-29T14:32:38.6570000+00:00</lastModDate>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Quynh Nhut Nguyen</creator>
        
        <creator>Dung Ngoc Le Ha</creator>
        
        <creator>Xuan Son Ha</creator>
        
        <creator>Tan Tai Phan</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>MovieLens; movie recommender systems; tag interpolation; collaborative filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>A movie recommender system has been proven to be a convincing instrument for carrying out comprehensive and complicated recommendation, helping users find appropriate movies conveniently. It follows a mechanism whereby a user can be accurately recommended movies based on other users’ similar interests, e.g. collaborative filtering, or on the movies themselves, e.g. content-based filtering. Therefore, such systems require predetermined information either about users or about movies. One interesting research question should be asked: “what if this information is missing or not manually curated?” The problem has not been addressed in the literature, especially for the 100K and 1M variations of the MovieLens datasets. This paper exploits a movie recommender system based on movies’ genres and actors/actresses themselves as the input tags, or tag interpolation. We apply tag-based filtering and collaborative filtering that can effectively predict a list of movies similar to the movie that a user has watched. Because it does not depend on users’ profiles, our approach eliminates the effect of the cold-start problem. The experimental results obtained on the MovieLens datasets indicate that the proposed model may contribute adequate performance regarding efficiency and reliability, and thus provide better-personalized movie recommendations. A movie recommender system has been deployed to demonstrate our work. The collected datasets have been published on our Github repository to encourage further reproducibility and improvement.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_10-Genres_and_ActorsActresses_as_Interpolated_Tags.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Haptic Virtual Simulation for Suture Surgery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110208</link>
        <id>10.14569/IJACSA.2020.0110208</id>
        <doi>10.14569/IJACSA.2020.0110208</doi>
        <lastModDate>2020-02-29T14:32:38.6270000+00:00</lastModDate>
        
        <creator>Mee Young Sung</creator>
        
        <creator>Byeonghun Kang</creator>
        
        <creator>Jungwook Kim</creator>
        
        <creator>Taehoon Kim</creator>
        
        <creator>Hyeonseok Song</creator>
        
        <subject>Haptic; virtual simulation; suture surgery; artificial intelligence; assistant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The aim of this study is to develop an intelligent haptic virtual simulation for suture surgery, enabled with an AI assistant. This haptic VR suture simulation is mainly composed of three parts: visuo-haptic rendering for surgery, replica training for surgery, and an AI assistant for surgery. In the simulation, a trainee surgeon can practice the suturing procedure using tactile touches in a stereoscopic 3D virtual environment. The simulation adopts a “precise haptic collision detection method using subdivision surface and sphere clustering” developed in the authors’ previous studies. In addition, an expert surgeon&#39;s suture operations can be replicated in the medical trainee&#39;s simulation so that they can be experienced as they are. This method has the advantage of reproducing the expert’s ideal movements of surgical procedures. The recorded data of the skilled surgeon&#39;s motions and operations are normalized into a form suitable for AI learning. The AI assistant distinguishes five typical types of suture surgery methods and learns the proper methods for various wounds using deep learning techniques; it then suggests the most appropriate suture method. The suture simulation can reduce the cost and time required for surgical training and eventually support safe and accurate physical surgery.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_8-Intelligent_Haptic_Virtual_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JEPPY: An Interactive Pedagogical Agent to Aid Novice Programmers in Correcting Syntax Errors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110207</link>
        <id>10.14569/IJACSA.2020.0110207</id>
        <doi>10.14569/IJACSA.2020.0110207</doi>
        <lastModDate>2020-02-29T14:32:38.6100000+00:00</lastModDate>
        
        <creator>Julieto E. Perez</creator>
        
        <creator>Dante D. Dinawanao</creator>
        
        <creator>Emily S. Tabanao</creator>
        
        <subject>Pedagogical agent; error quotient; syntax-error correction; compile errors; human computer interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Programming is a complicated task, and correcting syntax errors is just one of the many tasks that make it difficult. Error messages produced by the compiler allow novice learners to know their errors. However, these messages are puzzling and, most of the time, misleading due to cascading errors, which can be detrimental to producing a syntax-error-free program. In most laboratory settings, it is the role of the teachers to assist their students while doing activities. However, in our experience, considering the large number of students in a class, it may be difficult for teachers to assist their students one-by-one given the time constraints. In this paper, the design and implementation of an interactive pedagogical agent named JEPPY is presented. It is intended to assist novice learners learning to program using C++ as a programming language. In order to see how students struggle or progress in dealing with errors, the proponents implemented the Error Quotient (EQ) developed by Jadud. The principles of the cognitive requirements of an agent-based learning environment were followed. The agent was put to the test by novice learners in a laboratory setting. Logs of interaction between the embodied agent and the participants were recorded, in addition to the compile errors and edit actions. These mechanisms give us some insight into the learners’ interaction behavior with the agent.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_7-JEPPY_An_Interactive_Pedagogical_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sea Breeze Damage Estimation Method using Sentinel of Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110206</link>
        <id>10.14569/IJACSA.2020.0110206</id>
        <doi>10.14569/IJACSA.2020.0110206</doi>
        <lastModDate>2020-02-29T14:32:38.5800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Sentinel; disaster relief; satellite remote sensing; flooding; oil spill; synthetic aperture radar; optical sensor;  vegetation index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>A sea breeze damage estimation method using Sentinel remote sensing satellite data is proposed. There are two kinds of sea breeze damage: vegetation degradation due to sea salt carried by the sea breeze, and leaf lodging due to strong winds from the sea breeze. Kyushu, Japan suffered a severe storm due to Typhoon #17 from 21 to 23 September 2019. Optical sensors and Synthetic Aperture Radar (SAR) onboard remote sensing satellites are used for disaster relief. NDVI and false-colored imagery data derived from the Sentinel-1 and Sentinel-2 data are used for disaster relief. Through experiments, it is found that sea salt damage on rice paddy fields in particular can be detected with NDVI and false-colored imagery data, while rice lodging can be detected with SAR data.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_6-Sea_Breeze_Damage_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Book Sales Trend using Deep Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110205</link>
        <id>10.14569/IJACSA.2020.0110205</id>
        <doi>10.14569/IJACSA.2020.0110205</doi>
        <lastModDate>2020-02-29T14:32:38.5630000+00:00</lastModDate>
        
        <creator>Tan Qin Feng</creator>
        
        <creator>Murphy Choy</creator>
        
        <creator>Ma Nang Laik</creator>
        
        <subject>Generative adversarial network; deep learning framework; book sales forecasting; regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Deep learning frameworks like the Generative Adversarial Network (GAN) have gained popularity in recent years for handling many different computer vision related problems. In this research, instead of focusing on generating near-real images using a GAN, the aim is to develop a comprehensive GAN framework for book sales rank prediction, based on the historical sales rankings and different attributes collected from the Amazon site. Different analysis stages have been conducted in the research. Comprehensive data preprocessing is required before the modeling and evaluation: extensive preparation of the data, selection of features relevant to predicting the sales rankings, and several data transformation techniques are applied before generating the models. Various models are then trained and evaluated on their prediction results. In the GAN architecture, the generator network used to generate the features is built, and the discriminator network used to differentiate between real and fake features is trained before making the predictions. Lastly, the regression GAN model’s prediction results are compared against different neural network models such as the multilayer perceptron, deep belief network, and convolutional neural network.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_5-Predicting_Book_Sales_Trend_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision-based Indoor Localization Algorithm using Improved ResNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110204</link>
        <id>10.14569/IJACSA.2020.0110204</id>
        <doi>10.14569/IJACSA.2020.0110204</doi>
        <lastModDate>2020-02-29T14:32:38.5330000+00:00</lastModDate>
        
        <creator>Zeyad Farisi</creator>
        
        <creator>Tian Lianfang</creator>
        
        <creator>Li Xiangyang</creator>
        
        <creator>Zhu Bin</creator>
        
        <subject>Deep learning; residual network; loss function; dropout; indoor localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The output of the residual network fluctuates greatly with changes in the weight parameters, which greatly affects the performance of the residual network. To deal with this problem, an improved residual network is proposed. Based on the classical residual network, batch normalization, an adaptive -dropout random deactivation function, and a new loss function are added to the proposed model. Batch normalization is applied to avoid vanishing/exploding gradients. The -dropout is applied to increase the stability of the model, adaptively selecting among different dropout methods by adjusting a parameter. The new loss function combines the cross-entropy loss function and the center loss function to enhance inter-class dispersion and intra-class aggregation. The proposed model is applied to the indoor positioning of a mobile robot in a factory environment. The experimental results show that the algorithm can achieve high indoor positioning accuracy with a small training dataset. In the real-time positioning experiment, the accuracy can reach 95.37%.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_4-Vision_based_Indoor_Localization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Drying Process Simulation Methodology based on Chemical Kinetics Laws</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110203</link>
        <id>10.14569/IJACSA.2020.0110203</id>
        <doi>10.14569/IJACSA.2020.0110203</doi>
        <lastModDate>2020-02-29T14:32:38.5170000+00:00</lastModDate>
        
        <creator>Vladimir M. Arapov</creator>
        
        <creator>Dmitriy A. Kazartsev</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Maria V. Babaeva</creator>
        
        <creator>Svetlana V. Zhukovskaya</creator>
        
        <creator>Svetlana N. Tefikova</creator>
        
        <creator>Galina V. Posnova</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <subject>Modeling; drying; chemical kinetics; activation energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>It is shown that the existing approaches to drying process modeling, based on a system of interconnected differential equations of heat and mass transfer or on statistical processing of experimental drying data, have significant drawbacks that greatly complicate the development of computer means for controlling production processes. It is proposed to consider the drying process from the standpoint of physical chemistry as a quasi-topochemical heterogeneous reaction and to perform mathematical modeling of this process based on the laws of chemical kinetics. The basic issues of the methodology of drying process modeling based on the laws of chemical kinetics are reviewed: the study of the drying-rate equation during the removal of free and bound moisture; methods for determining the composition of aqueous fractions with different forms and energies of moisture in materials; methods for determining the activation energy of moisture; and the influence of moisture concentration and other process factors on the drying speed. The methodological approach considered in the article allows developing reliable mathematical models of drying kinetics for computer technologies for managing production processes, avoiding the errors that the authors note in previously published works.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_3-Drying_Process_Simulation_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast and Accurate Fish Detection Design with Improved YOLO-v3 Model and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110202</link>
        <id>10.14569/IJACSA.2020.0110202</id>
        <doi>10.14569/IJACSA.2020.0110202</doi>
        <lastModDate>2020-02-29T14:32:38.5000000+00:00</lastModDate>
        
        <creator>Kazim Raza</creator>
        
        <creator>Song Hong</creator>
        
        <subject>Deep learning; computer vision; transfer learning; improved YOLOv3; anchor box; custom dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>Object detection is one of the most challenging Computer Vision (CV) problems, with countless applications. We propose a real-time object detection algorithm based on an improved You Only Look Once version 3 (YOLOv3) for detecting fish. The demand for a robust automated system for monitoring the marine ecosystem is increasing day by day, as such a system helps researchers collect information about marine life. This work mainly applies CV techniques to detect and classify marine life. In this paper, we improve YOLOv3 by increasing the detection scale from 3 to 4, applying k-means clustering to refine the anchor boxes, introducing a novel transfer learning technique, and improving the loss function. We performed object detection on a custom dataset of four fish species using the YOLOv3 architecture and obtained 87.56% mean Average Precision (mAP). Moreover, comparing the experimental analysis of the original YOLOv3 model with the improved one, we observed the mAP increase from 87.17% to 91.30%. This shows that the improved version outperforms the original YOLOv3 model.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_2-Fast_and_Accurate_Fish_Detection_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Asset-Centric Threat Modelling Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110201</link>
        <id>10.14569/IJACSA.2020.0110201</id>
        <doi>10.14569/IJACSA.2020.0110201</doi>
        <lastModDate>2020-02-29T14:32:38.4400000+00:00</lastModDate>
        
        <creator>Livinus Obiora Nweke</creator>
        
        <creator>Stephen D. Wolthusen</creator>
        
        <subject>Threat modelling; asset-centric; asset-centric threat modelling approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(2), 2020</description>
        <description>The threat landscape is constantly evolving. As attackers continue to evolve and seek better methods of compromising a system, defenders likewise continue to evolve and seek better methods of protecting it. Threats are events that could cause harm to the confidentiality, integrity, or availability of information systems through unauthorized disclosure, misuse, alteration, or destruction of information or information systems. The process of developing and applying a representation of those threats, in order to understand the possibility of the threats being realized, is referred to as threat modelling. Threat modelling approaches provide defenders with a tool to characterize potential threats systematically. They include the prioritization of threats and mitigations based on the probabilities of the threats being realized, the business impacts, and the cost of countermeasures. In this paper, we provide a review of asset-centric threat modelling approaches, i.e., threat modelling techniques that focus on the assets of the system being threat modelled. First, we discuss the most widely used asset-centric threat modelling approaches. Then, we present a gap analysis of these methods. Finally, we examine the features of asset-centric threat modelling approaches with a discussion of their similarities and differences.</description>
        <description>http://thesai.org/Downloads/Volume11No2/Paper_1-A_Review_of_Asset_Centric_Threat_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Delay-Aware and User-Adaptive Offloading of Computation-Intensive Applications with Per-Task Delay in Mobile Edge Computing Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110190</link>
        <id>10.14569/IJACSA.2020.0110190</id>
        <doi>10.14569/IJACSA.2020.0110190</doi>
        <lastModDate>2020-02-05T11:36:29.9800000+00:00</lastModDate>
        
        <creator>Tarik Chanyour</creator>
        
        <creator>Youssef HMIMZ</creator>
        
        <creator>Mohamed EL GHMARY</creator>
        
        <creator>Mohammed Oucamah CHERKAOUI MALKI</creator>
        
        <subject>Mobile edge computing; user-adaptive offloading; computation-intensive offloading; per-task delay; tasks satisfaction optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Mobile-edge computing (MEC) is a new paradigm with great potential to extend mobile users&#39; capabilities because of its proximity. It can contribute efficiently to optimizing energy consumption, preserving privacy, and reducing network traffic bottlenecks. In addition, computation-intensive offloading is an active research area that can lessen latencies and energy consumption. Nevertheless, within multi-user networks with a multi-task scenario, selecting the tasks to offload is complex and critical. These selections and the resource allocations have to be carefully considered, as they affect the resulting energies and delays. In this work, we study a scenario considering user-adaptive offloading where each user runs a list of heavy computation tasks. Every task has to be processed in its associated MEC server within a fixed deadline. Hence, the proposed optimization problem targets the minimization of a weighted-sum normalized function depending on three metrics: energy consumption, total processing delay, and unsatisfied processing workload. The solution of the general problem is obtained using the solutions of two sub-problems, and all solutions are evaluated using a set of simulation experiments. Finally, the execution times are very encouraging for moderate sizes, and the proposed heuristic solutions give satisfactory results in terms of the users&#39; cost function in pseudo-polynomial time.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_90-Delay_Aware_and_User_Adaptive_Offloading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LEA-SIoT: Hardware Architecture of Lightweight Encryption Algorithm for Secure IoT on FPGA Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110189</link>
        <id>10.14569/IJACSA.2020.0110189</id>
        <doi>10.14569/IJACSA.2020.0110189</doi>
        <lastModDate>2020-02-04T09:29:28.5730000+00:00</lastModDate>
        
        <creator>Bharathi R</creator>
        
        <creator>N. Parvatham</creator>
        
        <subject>IOT Devices; Security algorithm; Encryption; Decryption; Key generation; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The Internet of Things (IoT) is one of the emerging technologies of today&#39;s world, connecting billions of electronic devices; providing data security to these devices during transmission, against attacks, is a big challenge. These electronic devices are small and consume little power. Conventional security algorithms are computationally complex and not suitable for IoT environments. In this article, the hardware architecture of a new Lightweight Encryption Algorithm (LEA) for the Secure Internet of Things (SIoT) is designed, including encryption, decryption, and the key generation process. The new LEA-SIoT is a hybrid combination of Feistel networks and the Substitution-Permutation Network (SPN). The encryption/decryption architecture is a composition of logical operations, substitution transformations, and swapping, designed for 64-bit data input and 64-bit key input. The key generation process is designed with the help of the KHAZAD block cipher algorithm. The encryption and key generation processes execute in parallel in a pipelined architecture with five rounds to reduce the hardware and computational complexity in IoT systems. The LEA-SIoT is designed on the Xilinx platform and implemented on an Artix-7 FPGA. Hardware constraints such as area, power, and timing utilization are summarized, and a comparison of the LEA-SIoT with similar security algorithms is tabulated with improvements.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_89-LEA_SIoT_Hardware_Architecture_of_Lightweight_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Improvement of Fourier Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110188</link>
        <id>10.14569/IJACSA.2020.0110188</id>
        <doi>10.14569/IJACSA.2020.0110188</doi>
        <lastModDate>2020-02-04T09:29:28.5130000+00:00</lastModDate>
        
        <creator>Khalid Aznag</creator>
        
        <creator>Toufik Datsi</creator>
        
        <creator>Ahmed El Oirrak</creator>
        
        <creator>Essaid El Bachari</creator>
        
        <subject>Fourier transform; NUFT; noise; invariant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>With the development of information technology and the coming era of big data, image signals play an increasingly significant role in our lives thanks to the phenomenal development of communication technology, and correspondingly efficient image processing methods are urgently needed. The Fourier transform is an important image processing tool used in a wide range of applications, such as image filtering, image analysis, image compression, and image reconstruction. It is the simplest among the transformation methods used in mathematics, has low real-time cost, and is widely used in image processing, particularly for 2D and 3D object representation. This paper proposes a new Fourier transform called the Non-Uniform Fourier Transform (NUFT). The proposed descriptor takes into consideration the change of point index. An application is made on a 2D set of points and a real image. The main advantages of the proposed transform are invariance under change of index point and robustness to noise. Also, the extraction of invariants under rotation and affinity is immediate because linearity is assured. The proposed descriptor is tested on the MPEG-7 database and compared with the normal Fourier transform to show its efficiency. The experimental results prove the effectiveness of the proposed descriptor.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_88-An_Improvement_of_Fourier_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedestrian Crowd Detection and Segmentation using Multi-Source Feature Descriptors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110187</link>
        <id>10.14569/IJACSA.2020.0110187</id>
        <doi>10.14569/IJACSA.2020.0110187</doi>
        <lastModDate>2020-02-01T05:13:11.9800000+00:00</lastModDate>
        
        <creator>Saleh Basalamah</creator>
        
        <creator>Sultan Daud Khan</creator>
        
        <subject>Crowd detection; Fourier analysis; crowd analysis; crowd segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Crowd analysis is receiving much attention from the research community due to its widespread importance in public safety and security. In order to automatically understand crowd dynamics, it is imperative to detect and segment the crowd from the background. Crowd detection and segmentation serve as a pre-processing step in most crowd analysis applications, for example, crowd tracking, behavior understanding, and anomaly detection. Intuitively, crowd regions can be extracted using background modeling or motion cues. However, these models accumulate many false positives when the crowd is static. In this paper, we propose a novel framework that automatically detects and segments crowds by integrating appearance features from multiple sources. We evaluate our proposed framework using challenging images with varying crowd densities, camera viewpoints, and pedestrian appearances. From qualitative analysis, we observe that the proposed framework performs well, precisely segmenting crowds in complex scenes.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_87-Pedestrian_Crowd_Detection_and_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Deep Learning-based Techniques for Load Disaggregation, Low-Frequency Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110186</link>
        <id>10.14569/IJACSA.2020.0110186</id>
        <doi>10.14569/IJACSA.2020.0110186</doi>
        <lastModDate>2020-02-01T05:13:11.9470000+00:00</lastModDate>
        
        <creator>Abdolmaged Alkhulaifi</creator>
        
        <creator>Abdulah J. Aljohani</creator>
        
        <subject>NILM; deep learning; load disaggregation; recurrent long short-term memory; gated recurrent unit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Unlike sub-metering, which requires individual appliances to be equipped with their own meters, non-intrusive load monitoring (NILM) uses algorithms to discover individual appliance consumption from the aggregated overall energy reading. Approaches that use low-frequency sampled data are more applicable to real-world smart meters, which typically have a sampling capability of &lt;= 1 Hz. In this paper, a systematic literature review of deep-learning-based approaches to the NILM problem is conducted, analysing four key aspects of deep learning adoption: the deep learning model adopted, the features selected to train the model, the data set used, and the model accuracy. Our study analyses the performance of four different deep learning approaches, namely, the denoising autoencoder (DAE), recurrent long short-term memory (LSTM), gated recurrent unit (GRU), and sequence-to-point models. Our experiments are conducted using two data sets, namely, REDD and UK-DALE. According to our analysis, the sequence-to-point model achieved the best results, with an average mean absolute error (MAE) of 14.98 watts, compared to the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_86-Investigation_of_Deep_Learning_based_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Flooding Attack Mitigation in Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110185</link>
        <id>10.14569/IJACSA.2020.0110185</id>
        <doi>10.14569/IJACSA.2020.0110185</doi>
        <lastModDate>2020-02-01T05:13:11.9330000+00:00</lastModDate>
        
        <creator>Safaa MAHRACH</creator>
        
        <creator>Abdelkrim HAQIQ</creator>
        
        <subject>Software Defined Networks (SDN); Distributed Denial of Service (DDoS); network security; P4 language; DDoS mitigation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Distributed denial of service (DDoS) attacks, which have been extensively studied by the security community, today pose a potential new menace to the software defined networks (SDN) architecture. For example, the disruption of the SDN controller could interrupt data communication in the whole SDN network. DDoS attacks can produce a great number of new, short traffic flows (e.g., a series of TCP SYN requests), which may launch malicious flooding requests to overload the controller and cause flow-table overloading attacks at SDN switches. In this work, we propose a lightweight and practical mitigation mechanism to protect the SDN architecture against DDoS flooding threats and ensure a secure and efficient SDN-based networking environment. Our proposal extends the Data Plane (DP) with a classification and mitigation module to analyze new incoming packets, separate benign requests from SYN flood attacks, and perform adaptive countermeasures. The simulation results indicate that the proposed defense mechanism can efficiently tackle DDoS flood attacks in the SDN architecture and also in the downstream servers.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_85-DDoS_Flooding_Attack_Mitigation_in_Software_Defined_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Topology Generation for Linear Wireless Sensor Networks based on Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110184</link>
        <id>10.14569/IJACSA.2020.0110184</id>
        <doi>10.14569/IJACSA.2020.0110184</doi>
        <lastModDate>2020-02-01T05:13:11.9000000+00:00</lastModDate>
        
        <creator>Adil A. Sheikh</creator>
        
        <creator>Emad Felemban</creator>
        
        <subject>Ad hoc networks; network topology; genetic algorithms; computer simulation; computer networks management; network lifetime estimation; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>A linear network is a type of wireless sensor network in which sparse nodes are deployed along a virtual line, for example, on streetlights or the columns of a bridge, tunnel, or pipeline. The typical deployment of a Linear Wireless Sensor Network (LWSN) creates an energy hole around the sink node, since nodes near the sink deplete their energy faster than others. Optimal network topology is one of the key factors that can help improve LWSN performance and lifetime. Finding an optimal topology becomes difficult in large networks, where the number of possible combinations is very high. We propose an Optimal Topology Generation (OpToGen) framework based on a genetic algorithm for LWSNs. Network deployment tools can use OpToGen to configure and deploy LWSNs. Through a discrete event simulator, we demonstrate that the genetic algorithm achieves fast convergence to optimal topologies with less computational overhead than a brute-force search. We have evaluated OpToGen on the number of generations it took to achieve the best topology for LWSNs of various sizes. The trade-off between energy consumption and different network sizes is also reported.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_84-Optimal_Topology_Generation_for_Linear_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority-based Routing Framework for Image Transmission in Visual Sensor Networks: Experimental Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110182</link>
        <id>10.14569/IJACSA.2020.0110182</id>
        <doi>10.14569/IJACSA.2020.0110182</doi>
        <lastModDate>2020-02-01T05:13:11.8700000+00:00</lastModDate>
        
        <creator>Emad Felemban</creator>
        
        <creator>Atif Naseer</creator>
        
        <creator>Adil Amjad</creator>
        
        <subject>Priority-based routing; visual sensor networks; test-bed; framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>A Visual Sensor Network (VSN) is a specialized Wireless Sensor Network (WSN) equipped with cameras. Its primary function is to capture images and videos and send them to power-rich sink nodes for processing. As image data is much larger than the scalar data sensed by a typical WSN, VSN applications require a much larger amount of data to be transferred to the sink. Due to WSN constraints such as low energy, limited CPU power, and scarce memory, transmitting large amounts of data becomes challenging. On the other hand, some VSN applications require critical image features, rather than the entire image, as soon as possible in order to take action. In this paper, we provide the details of experiments done using our proposed Priority-based Routing Framework for Image Transmission (PRoFIT). PRoFIT is designed to deliver critical image features at high priority to the sink node for early processing. Peak signal-to-noise ratio (PSNR) analyses show that PRoFIT improves VSN application response time compared to priority-less routing. This paper also describes the design of our VSN testbed; multiple indoor and outdoor experiments were performed to validate the framework. The framework also improves the energy efficiency of the network: the results show that PRoFIT is 40% more efficient in terms of energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_82-Priority_based_Routing_Framework_for_Image_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach for Handwritten Arabic Names Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110183</link>
        <id>10.14569/IJACSA.2020.0110183</id>
        <doi>10.14569/IJACSA.2020.0110183</doi>
        <lastModDate>2020-02-01T05:13:11.8700000+00:00</lastModDate>
        
        <creator>Mohamed Elhafiz Mustafa</creator>
        
        <creator>Murtada Khalafallah Elbashir</creator>
        
        <subject>Deep learning; Arabic names recognition; holistic paradigm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Optical Character Recognition (OCR) has enabled many applications, having attained high accuracy for printed documents and for the handwriting of many languages. However, the state-of-the-art accuracy of Arabic handwritten word recognition is far behind. Arabic script is cursive (both printed and handwritten); therefore, Arabic recognition systems traditionally segment a word into characters before recognizing them. Arabic word segmentation is very difficult because Arabic letters contain many dots; moreover, Arabic letters are context-sensitive and some letters overlap vertically. A holistic recognizer that recognizes common words directly (without segmentation) seems a plausible model for recognizing common Arabic words. This paper presents the result of training a Convolutional Neural Network (CNN), holistically, to recognize Arabic names. Experimental results show that the proposed CNN is distinct and significantly superior to other recognizers used with the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_83-A_Deep_Learning_Approach_for_Handwritten_Arabic_Names.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling an Indoor Crowd Monitoring System based on RSSI-based Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110181</link>
        <id>10.14569/IJACSA.2020.0110181</id>
        <doi>10.14569/IJACSA.2020.0110181</doi>
        <lastModDate>2020-02-01T05:13:11.8400000+00:00</lastModDate>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Trio Adiono</creator>
        
        <creator>Prasetiyo</creator>
        
        <creator>Hartian Widhanto Shorful Islam</creator>
        
        <subject>Wi-Fi tracker system; RSSI-based distance; intersection density method; Nonlinear Least Square (NLS) method; Kalman Filter (KF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>This paper reports a real-time localization algorithm whose main function is to determine the location of devices accurately. The model can locate a smartphone&#39;s position passively (without requiring any setup on the smartphone) as long as its Wi-Fi is turned on. The algorithm uses Intersection Density and the Nonlinear Least Square (NLS) method based on the Levenberg-Marquardt algorithm. To minimize the localization error, a Kalman Filter (KF) is used. The algorithm is computed in Matlab. The best obtained model will be implemented in a Wi-Fi tracker system using RSSI-based distance for indoor crowd monitoring. According to the experimental results, the KF can improve the hit ratio to 81.15%. The hit ratio is the proportion of predicted locations that are less than 5 m from the actual location; it is obtained from several RSSI scans as the number of non-error results divided by the number of RSSI scans, multiplied by 100%.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_81-Modelling_an_Indoor_Crowd_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Decision Tree based Models in Combination with Filter Feature Selection Methods for Direct Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110180</link>
        <id>10.14569/IJACSA.2020.0110180</id>
        <doi>10.14569/IJACSA.2020.0110180</doi>
        <lastModDate>2020-02-01T05:13:11.8230000+00:00</lastModDate>
        
        <creator>Ruba Obiedat</creator>
        
        <subject>Direct marketing; data mining; decision tree; simpleCart; C4.5; reptree; random tree; weka; confusion matrix; class-imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Direct marketing is a form of advertising strategy which aims to communicate directly with the most likely potential customers for a certain product using the most appropriate communication channel. Banks spend a huge amount of money on their marketing campaigns, so they are increasingly interested in this topic in order to maximize the efficiency of their campaigns, especially given the high competition in the market. All marketing campaigns depend highly on the huge amount of available data about customers. Thus, special Data Mining techniques are needed to analyze these data, predict campaign efficiency, and give decision makers indications regarding the main marketing features affecting campaign success. This paper focuses on four popular and common Decision Tree (DT) algorithms: SimpleCart, C4.5, RepTree, and Random Tree. DT is chosen because the generated models are in the form of IF-THEN rules, which are easy to understand by decision makers with little technical background in banks and other financial institutions. Data was taken from a Portuguese bank&#39;s direct marketing campaign. A filter-based feature selection is applied in the study to improve the performance of the classification. Results show that SimpleCart gives the best results in predicting campaign success. Another interesting finding is that the five most significant features influencing direct marketing campaign success, to be focused on by decision makers, are: call duration, offered interest rate, number of employees making the contacts, customer confidence, and changes in price levels.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_80-Developing_Decision_Tree_based_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Cloud Data Security using Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110179</link>
        <id>10.14569/IJACSA.2020.0110179</id>
        <doi>10.14569/IJACSA.2020.0110179</doi>
        <lastModDate>2020-02-01T05:13:11.7930000+00:00</lastModDate>
        
        <creator>Afrah Albalawi</creator>
        
        <creator>Nermin Hamza</creator>
        
        <subject>Security; cloud computing; image steganography; data hiding; data storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Nowadays, cloud computing has proved its importance and is used by small and large organizations alike. This importance stems from the various services the cloud provides. One of these services is Storage as a Service (SaaS), which allows users to store their data in cloud databases. The drawback of this service is the security challenge, since a third party manages the data. Users need to feel safe storing their data in the cloud; consequently, models that enhance data security are needed. Image steganography is a way to protect data from unauthorized access by allowing users to conceal secret data in a cover image. In this paper, we review and compare some recent works proposed to protect cloud data using image steganography. The first comparison considers the models in terms of the algorithms they use, along with their advantages and drawbacks. The second comparison considers the models in terms of the aims of steganography: quality, where the model produces a stego-image of high quality; security, where the secret data is difficult to detect; and capacity, where the model allows large amounts of data to be hidden.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_79-A_Survey_on_Cloud_Data_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HarmonyMoves: A Unified Prediction Approach for Moving Object Future Path</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110178</link>
        <id>10.14569/IJACSA.2020.0110178</id>
        <doi>10.14569/IJACSA.2020.0110178</doi>
        <lastModDate>2020-02-01T05:13:11.7770000+00:00</lastModDate>
        
        <creator>Mohammed Abdalla</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <creator>Neveen ElGamal</creator>
        
        <subject>Trajectory prediction; machine learning; moving objects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Trajectory prediction plays a critical role in many location-based services such as proximity-based marketing, routing services, and traffic management. The vast majority of existing trajectory prediction techniques utilize the object’s motion history to predict future path(s), and they assume that objects move with recognized patterns or know their routes. However, these techniques fail when the history is unavailable. They also fail to predict the path when the query moving objects have lost their way or are moving with abnormal patterns. This paper introduces a system named HarmonyMoves to predict the future paths of moving objects on road networks without relying on their past trajectories. The system checks the harmony between the query object and other moving objects; if harmony exists, other objects in space are moving like the query object, and a Markov Model is adopted to analyze this set of similar motion patterns and generate the next potential road segments of the object together with their probabilities. If harmony does not exist, HarmonyMoves considers the query object abnormal (an object that has lost its way and needs support to return to known routes), and a new module is employed to handle this case. A fundamental aspect of HarmonyMoves lies in achieving highly accurate prediction while performing efficiently enough to return query answers promptly.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_78-HarmonyMoves_A_Unified_Prediction_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Innovative Smartphone-based Solution for Traffic Rule Violation Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110177</link>
        <id>10.14569/IJACSA.2020.0110177</id>
        <doi>10.14569/IJACSA.2020.0110177</doi>
        <lastModDate>2020-02-01T05:13:11.7430000+00:00</lastModDate>
        
        <creator>Waleed Alasmary</creator>
        
        <subject>Participatory sensing; traffic violation detection; automatic detection; applied computer vision; resources optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>This paper introduces a novel smartphone-based solution that detects different traffic rule violations using a variety of computer vision and networking technologies. We propose using smartphones as participatory sensors via their cameras to detect moving and stationary objects (e.g., cars and lane markers) and to understand the resulting driving behavior and traffic violation of each object. We propose a novel framework that uses a fast in-mobile traffic violation detector for rapid detection of traffic rule violations. The smartphone then transmits the data to the cloud, where more powerful computer vision and machine learning operations detect the traffic violation with higher accuracy. We show that the proposed framework is very accurate by combining a) a Haar-like feature cascade detector at the in-mobile level, and b) a deep learning-based classifier and support-vector machine-based classifiers in the cloud. The accuracy of the deep convolutional network is about 92% for true positives and 95% for true negatives. The proposed framework demonstrates the potential of mobile-based traffic violation detection, especially by combining accurate relative position and relative speed information. Finally, we propose a real-time scheduling scheme to optimize the use of battery and real-time bandwidth of the users given partially known navigation information among the different users in the network, which is the realistic case. We show that navigation information is very important for better utilizing the battery and bandwidth of each user when the number of users is small compared to the navigation trajectory length. That is, resource utilization is directly related to the number of available participants and the accuracy of the navigation information.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_77-An_Innovative_Smartphone_based_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Intelligent Approach for the Restitution of Physical Soil Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110176</link>
        <id>10.14569/IJACSA.2020.0110176</id>
        <doi>10.14569/IJACSA.2020.0110176</doi>
        <lastModDate>2020-02-01T05:13:11.7300000+00:00</lastModDate>
        
        <creator>Ibtissem HOSNI</creator>
        
        <creator>Lilia BENNACEUR FARAH</creator>
        
        <creator>Imed Riadh FARAH</creator>
        
        <creator>Raouf BENNACEUR</creator>
        
        <creator>Mohamed Saber NACEUR</creator>
        
        <subject>Inversion; air fractions; multi-layered; multiscale; SPM; genetic algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The analysis of the radar response on natural surfaces has been the subject of intense research in remote sensing during the last decades. Without accurate values of the surface roughness parameter, the restitution of soil moisture from the radar backscattering signal can consistently yield inaccurate estimates. The characterization of soil roughness is not fully understood, so a wide range of roughness values can be obtained for the same studied surface when different measurement methodologies are used. Various studies have shown weak agreement between experimental measurements of soil physical parameters and theoretical values under natural conditions. Due to this nonlinearity and ill-posedness, the inversion of the backscattered radar signal for the restitution of physical soil parameters is particularly complex. The aim of the present work is the restitution of soil physical parameters from the backscattered radar signal using a backscattering model adapted to the proposed soil description. As our study focuses on slightly rough soils, we adopt a multi-layered modified multiscale bi-dimensional Small Perturbation Model (2D MLS SPM). Subsequently, we propose a new way of describing the dielectric constant that includes air fractions in the multiscale multilayer description of the soil: the calculation considers a soil comprising two phases, a soil fraction and an air fraction. For the inversion, a methodology coupling neural networks (NN) and genetic algorithms (GA) was carried out to restitute the physical properties of the soil. Samples were generated by the original 2D MLS SPM, followed by a neural network to obtain the soil moisture and MLS roughness parameters. Thereafter, these restored values were refined by the genetic algorithms to resolve, in part or in whole, the disagreement between the retrieved and original values.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_76-Towards_an_Intelligent_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Angle Adjustment for Vertical and Diagonal Communication in Underwater Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110175</link>
        <id>10.14569/IJACSA.2020.0110175</id>
        <doi>10.14569/IJACSA.2020.0110175</doi>
        <lastModDate>2020-02-01T05:13:11.6970000+00:00</lastModDate>
        
        <creator>Aimen Anum</creator>
        
        <creator>Tariq Ali</creator>
        
        <creator>Shuja Akbar</creator>
        
        <creator>Iqra Obaid</creator>
        
        <creator>Muhammad Junaid Anjum</creator>
        
        <creator>Umar Draz</creator>
        
        <creator>Momina Shaheen</creator>
        
        <subject>Wireless Sensor Networks; Underwater wireless sensor network; DVRP; Terrestrial Wireless Sensor Networks; Depth Based Routing (DBR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Underwater wireless sensor networks (UWSNs) have been an area of interest for the past few decades. UWSNs consist of tiny sensors responsible for monitoring different underwater events and transmitting the collected data to a sink node. In the harsh and continuously changing underwater environment, achieving good communication and performance is more difficult than in terrestrial networks because of underwater characteristics such as end-to-end delays, node movement, and energy constraints. In this paper, a novel routing technique named angle adjustment for vertical and diagonal communication is proposed, which does not use any location information of nodes and is also efficient in terms of energy and end-to-end delays. In this approach, the source node evaluates the flooding zone based on the angle, using a basic formula, before forwarding the packet to the sink. After evaluating the flooding zone, the angles of the candidate nodes are compared and the packet is sent to the node closest to the vertical line. The proposed approach is evaluated using NS-2 with AquaSim. The results show better performance in data delivery, end-to-end delays, and energy consumption than DBR.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_75-Angle_Adjustment_for_Vertical_and_Diagonal_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Deep Learning Approach based on Variant Two-State Gated Recurrent Unit and Word Embeddings for Sentiment Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110174</link>
        <id>10.14569/IJACSA.2020.0110174</id>
        <doi>10.14569/IJACSA.2020.0110174</doi>
        <lastModDate>2020-02-01T05:13:11.6830000+00:00</lastModDate>
        
        <creator>Muhammad Zulqarnain</creator>
        
        <creator>Suhaimi Abd Ishak</creator>
        
        <creator>Rozaida Ghazali</creator>
        
        <creator>Nazri Mohd Nawi</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Yana Mazwin Mohmad Hassim</creator>
        
        <subject>RNN; GRU; LSTM; encoder; Two-state GRU; Long-term dependencies; Sentence Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Sentiment classification is an important but challenging task in natural language processing (NLP) and has been widely used for determining sentiment polarity from user opinions. Word embedding techniques, which learn from various contexts to produce similar vector representations for words appearing in similar contexts, have also been extensively used for NLP tasks. Recurrent neural networks (RNNs) are a common deep learning architecture widely used to address the classification of variable-length sentences. In this paper, we investigate a variant Gated Recurrent Unit (GRU) that includes an encoder method to preprocess data and improve the impact of word embeddings for sentiment classification. The main contributions of this paper are the proposal of a novel Two-State GRU and an encoder method, combined into an efficient architecture named E-TGRU for sentiment classification. The empirical results demonstrate that the GRU model can efficiently acquire word usage in the context of user opinions given large training data. We compared the performance with the traditional recurrent models GRU, LSTM, and Bi-LSTM on two benchmark datasets, IMDB and Amazon Product Reviews. The results show that: 1) the proposed approach (E-TGRU) obtained higher accuracy than the three state-of-the-art recurrent approaches; 2) Word2Vec is more effective as the word-vector representation in sentiment classification; 3) an imitation strategy in implementing the network shows that our proposed approach is robust for text classification.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_74-An_Improved_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HybridFatigue: A Real-time Driver Drowsiness Detection using Hybrid Features and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110173</link>
        <id>10.14569/IJACSA.2020.0110173</id>
        <doi>10.14569/IJACSA.2020.0110173</doi>
        <lastModDate>2020-02-01T05:13:11.6500000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <subject>Driver fatigue; image processing; deep learning; transfer learning; convolutional neural network; deep belief network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Road accidents are mainly caused by driver drowsiness. Detection of driver drowsiness (DDD) or fatigue is an important and challenging task for preventing road-side accidents. To help reduce the mortality rate, the “HybridFatigue” DDD system is proposed. HybridFatigue integrates visual features, through the PERCLOS measure, and non-visual features from heart-beat (ECG) sensors. A hybrid system was implemented to combine both visual and non-visual features; these hybrid features are extracted and classified as driver fatigue by advanced deep-learning architectures in real time. A multi-layer transfer learning approach using a convolutional neural network (CNN) and a deep-belief network (DBN) was used to detect driver fatigue from the hybrid features. To handle night-time driving and obtain accurate results, ECG sensors on the steering wheel analyze heartbeat signals when the camera alone cannot capture facial features. To accurately detect the driver&#39;s central head position, two cameras were mounted instead of one. As a result, the HybridFatigue system achieves high accuracy in detecting driver fatigue. Three online datasets were used to train and test the system. Compared to state-of-the-art DDD systems, HybridFatigue performs best: on average, it achieved 94.5% detection accuracy on 4250 images when tested on different subjects in variable environments. The experimental results indicate that the HybridFatigue system can be utilized to decrease accidents.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_73-HybridFatigue_A_Real_Time_Driver_Drowsiness_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Realization of CORDIC based GMSK System with FPGA Prototyping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110172</link>
        <id>10.14569/IJACSA.2020.0110172</id>
        <doi>10.14569/IJACSA.2020.0110172</doi>
        <lastModDate>2020-02-01T05:13:11.6370000+00:00</lastModDate>
        
        <creator>Renuka Kajur</creator>
        
        <creator>K V Prasad</creator>
        
        <subject>GMSK; CORDIC algorithm; FPGA; DFS; Gaussian Filter; pipelined; integrator; differentiator; channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The Gaussian Minimum Shift Keying (GMSK) modulation is a digital modulation scheme using frequency shift keying with no phase discontinuities, and it provides higher spectral efficiency in radio communication systems. In this article, a cost-effective hardware architecture of the GMSK system is designed using pipelined CORDIC and optimized CORDIC models. The GMSK system mainly consists of an NRZ encoder, integrator, and Gaussian filter followed by an FM modulator using CORDIC models and a Digital Frequency Synthesizer (DFS) for IQ modulation in the transmitter section, along with the channel; the receiver section has an FM demodulator followed by a differentiator and NRZ decoder. The CORDIC algorithms play a crucial role in GMSK systems for IQ generation and improve system performance on a single chip. Both the pipelined CORDIC and optimized CORDIC models are designed with 6 stages. The optimized CORDIC model is designed using a quadrature mapping method along with a pipeline structure. The GMSK systems are implemented on an Artix-7 FPGA via FPGA prototyping. The performance analysis is presented in terms of hardware constraints such as area, time, and power. The results show that the optimized CORDIC based GMSK system is a better option than the pipelined CORDIC based GMSK system for real-time scenarios.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_72-Performance_Realization_of_CORDIC_based_GMSK_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stemming Text-based Web Page Classification using Machine Learning Algorithms: A Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110171</link>
        <id>10.14569/IJACSA.2020.0110171</id>
        <doi>10.14569/IJACSA.2020.0110171</doi>
        <lastModDate>2020-02-01T05:13:11.6030000+00:00</lastModDate>
        
        <creator>Ansari Razali</creator>
        
        <creator>Salwani Mohd Daud</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Faezehsadat Shahidi</creator>
        
        <subject>Web page classification; stemming; machine learning; Na&#239;ve Bayes; k-NN; SVM; multilayer perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The research aim is to determine the effect of word stemming in web page classification using different machine learning classifiers, namely Na&#239;ve Bayes (NB), k-Nearest Neighbour (k-NN), Support Vector Machine (SVM) and Multilayer Perceptron (MP). Each classifier&#39;s performance is evaluated in terms of accuracy and processing time. This research uses the BBC dataset, which has five predefined categories. The results demonstrate that the classifiers perform better without word stemming: all classifiers show higher classification accuracy, with the highest accuracy produced by NB and SVM at a 97% F1 score, while NB takes a shorter training time than SVM. With word stemming, the effect on training and classification time is negligible, except for the Multilayer Perceptron, for which word stemming effectively reduced the training time.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_71-Stemming_Text_based_Web_Page_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EDUGXQ: User Experience Instrument for Educational Games’ Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110170</link>
        <id>10.14569/IJACSA.2020.0110170</id>
        <doi>10.14569/IJACSA.2020.0110170</doi>
        <lastModDate>2020-02-01T05:13:11.5900000+00:00</lastModDate>
        
        <creator>Vanisri Nagalingam</creator>
        
        <creator>Roslina Ibrahim</creator>
        
        <creator>Rasimah Che Mohd Yusoff</creator>
        
        <subject>User Experience (UX); framework; psychometrically; educational games; educational games’ evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>A significant increase in research on educational computer games in recent years has shown that demand for educational games has increased as well. However, producing unsuitable educational games wastes not only money but also the energy and time of game designers and developers. To produce a suitable educational game, it is important to understand the user’s needs as well as the educational needs. Therefore, this study aims to develop a User Experience (UX) framework for educational games (EDUGX) based on UX elements and to psychometrically validate a new instrument, the EDUGX questionnaire (EDUGXQ), appropriate for evaluating educational games. Based on a literature review, six main UX elements were identified to construct the framework: Flow, Immersion, Player Context, Game Usability, Game System, and Learnability. In this paper, we first discuss the development process of the EDUGX framework, followed by the EDUGXQ. This study also reviews and discusses several UX questionnaires for educational games in UX design evaluation, which at the same time support the framework’s elements used to develop the EDUGXQ.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_70-EDUGXQ_User_Experience_Instrument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Service-Oriented Architecture for Optimal Service Selection and Positioning in Extremely Large Crowds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110169</link>
        <id>10.14569/IJACSA.2020.0110169</id>
        <doi>10.14569/IJACSA.2020.0110169</doi>
        <lastModDate>2020-02-01T05:13:11.5570000+00:00</lastModDate>
        
        <creator>Mohammad A.R. Abdeen</creator>
        
        <subject>Large crowd management; Service-Oriented Architecture; multi-objective optimization; Hajj; Mena; WSDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The problem of managing large crowds has many aspects and has been widely reported in the literature. One of these aspects is the distribution of supplies such as food and water, especially when the targeted region is overcrowded. Among the challenges is planning the locations of food and water supply centres so as to achieve multiple objective functions, such as the type of food and the shortest distance to the customer. A practical example of this problem is food distribution and food cart location in the region of Mena (also known as Tent City, in Saudi Arabia) during the yearly pilgrimage season. In this work, we propose a Service-Oriented Architecture (SOA) for positioning services in the region of Mena (Mecca, Saudi Arabia), which covers an area of approximately 20 square kilometres, during the pilgrimage season. The architecture provides optimal service selection as well as a mobile food cart positioning algorithm based on pre-set client profiles to achieve multiple objective functions for clients as well as service providers. Among these objective functions are the least waiting time to be served, the shortest distance to service, the lowest cost, and the maximum profit for the service provider.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_69-A_Service_Oriented_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Translator System for Peruvian Sign Language Texts through 3D Virtual Assistant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110168</link>
        <id>10.14569/IJACSA.2020.0110168</id>
        <doi>10.14569/IJACSA.2020.0110168</doi>
        <lastModDate>2020-02-01T05:13:11.5570000+00:00</lastModDate>
        
        <creator>Gleny Paola Gamarra Ramos</creator>
        
        <creator>Mar&#237;a Elisabeth Farf&#225;n Choquehuanca</creator>
        
        <subject>Peruvian sign language; Lexical-Syntactic Analysis; Avatar 3D</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The population with hearing impairment in Peru is a community that receives neither the necessary support from the government nor the resources needed for the inclusion of its members as active participants in society. Few initiatives have been launched to achieve this goal, and for this reason we will create a resource that gives the deaf community greater access to the textual information of the hearing community. Our goal is to build a tool that translates general texts, as well as academic content such as encyclopedias, into Peruvian sign language, represented by a 3D avatar. This translation features different lexical-syntactic processing modules as well as disambiguation of terms using a small lexicon, similar to WordNet synsets. The project is developed in collaboration with the deaf community.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_68-Translator_System_for_Peruvian_Sign_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Deep Learning Techniques on SMS Spam Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110167</link>
        <id>10.14569/IJACSA.2020.0110167</id>
        <doi>10.14569/IJACSA.2020.0110167</doi>
        <lastModDate>2020-02-01T05:13:11.5270000+00:00</lastModDate>
        
        <creator>Wael Hassan Gomaa</creator>
        
        <subject>SMS Spam Filtering; Deep Learning; RNN; GRU; LSTM; CNN; RCNN; RMDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Over the past decade, phone calls and bulk SMS have remained popular. Although many advertisers assume that SMS has died, it is still alive: it is one of the simplest and most cost-effective marketing tools for companies to communicate with their customers on a personal level. The spread of SMS has brought with it the risk of spam. Most previous studies that attempted to detect spam were based on manually extracted features fed to classical machine learning classifiers. This paper explores the impact of applying various deep learning techniques to SMS spam filtering by comparing the results of seven different deep neural network architectures and six classical machine learning classifiers. The proposed methodologies are based on automatic extraction of the required features. On a benchmark dataset consisting of 5574 records, a remarkable accuracy of 99.26% was achieved using the Random Multimodel Deep Learning (RMDL) architecture.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_67-The_Impact_of_Deep_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robotic Technology for Figural Creativity Enhancement: Case Study on Elementary School</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110166</link>
        <id>10.14569/IJACSA.2020.0110166</id>
        <doi>10.14569/IJACSA.2020.0110166</doi>
        <lastModDate>2020-02-01T05:13:11.4930000+00:00</lastModDate>
        
        <creator>Billy Hendrik</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <creator>Norshita Mat Nayan</creator>
        
        <subject>Robotic technology; figural creativity; curriculum; TKF; education; KTSP; K-13</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Robotic technology is a field in great demand today and is very useful to human life, especially in education, where it can help students be more active in the learning process. Creativity can be stimulated by the use of robotic technology; one kind of creativity is Figural Creativity (FC). This study investigated the effect of robotic technology as a learning tool for improving the FC skills of students. Forty (40) elementary school students aged 10-11 years participated in this study. Students&#39; creativity skills were measured with the Figural Creativity Test (TKF), carried out before the intervention (pre-test) and after the intervention program (post-test). In the intervention program, students were taught about robotic technology. To analyze the test results, we used the Statistical Product and Service Solutions (SPSS) package. The findings showed that the creativity of students following the K-13 curriculum improved more: their FC scores improved by up to 23% (sig. 2-tailed = .000, p&lt;.05), whereas the FC scores of students following the KTSP curriculum improved by only 1.7% (sig. 2-tailed = .572, p&gt;.05). Thus, robotic technology learning is more effective in improving the FC of students under the K-13 curriculum. Based on these results, we recommend to the Ministry of Education that robotic technology be applied as an educational tool in the education sector.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_66-Robotic_Technology_for_Figural_Creativity_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting IoT Service Adoption towards Smart Mobility in Malaysia: SEM-Neural Hybrid Pilot Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110165</link>
        <id>10.14569/IJACSA.2020.0110165</id>
        <doi>10.14569/IJACSA.2020.0110165</doi>
        <lastModDate>2020-02-01T05:13:11.4800000+00:00</lastModDate>
        
        <creator>Waqas Ahmed</creator>
        
        <creator>Sheikh Muhamad Hizam</creator>
        
        <creator>Ilham Sentosa</creator>
        
        <creator>Habiba Akter</creator>
        
        <creator>Eiad Yafi</creator>
        
        <creator>Jawad Ali</creator>
        
        <subject>Smart Mobility; Internet of Things (IoT); Radio-Frequency Identification (RFID); Neural Networks; Technology Acceptance Model (TAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>A smart city is synchronized with the digital environment, and its transportation system is vitalized by RFID sensors, the Internet of Things (IoT) and Artificial Intelligence. However, without assessing users’ behavioral acceptance of the technology, the ultimate usefulness of smart mobility cannot be achieved. This paper aims to formulate a research framework for predicting the antecedents of smart mobility by using a hybrid SEM-Neural approach for preliminary data analysis. This research took smart mobility service adoption in Malaysia as its study perspective and applied the Technology Acceptance Model (TAM) as its theoretical basis. An extended TAM model was hypothesized with five external factors (digital dexterity, IoT service quality, intrusiveness concerns, social electronic word of mouth and subjective norm). The data was collected through a pilot survey in Klang Valley, Malaysia. Responses were then analyzed for reliability, validity and model accuracy. Finally, the causal relationships were explained by Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN). The paper shares a better understanding of road technology acceptance with all stakeholders so that they may refine, revise and update their policies. The proposed framework suggests a broader approach to investigating individual-level technology acceptance.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_65-Predicting_IoT_Service_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants of Interface Criteria Learning Technology for Disabled Learner using Analytical Hierarchy Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110164</link>
        <id>10.14569/IJACSA.2020.0110164</id>
        <doi>10.14569/IJACSA.2020.0110164</doi>
        <lastModDate>2020-02-01T05:13:11.4630000+00:00</lastModDate>
        
        <creator>Syazwani Ramli</creator>
        
        <creator>Hazura Mohamed</creator>
        
        <creator>Zurina Muda</creator>
        
        <subject>Selection design; learning technology; disabled learner; decision making; Analytical Hierarchy Process (AHP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The availability of learning technology is advancing rapidly. Given this rapid change, it is crucial for disabled learners to select a good technology design that may help them achieve better academic results. Selecting a good technology design involves a decision-making process over several designs of learning technology. In general, the abilities, capacities and achievements of disabled learners are lower than those of other children, and a good approach, assisted by the right selection of learning technology, may help disabled learners gain a better understanding and achievement in academic matters. In this study, the Analytical Hierarchy Process (AHP) approach was used to determine the most appropriate design of learning technology for disabled learners. Three hierarchy levels, made up of criteria, sub-criteria and alternatives, were considered. This study finds the best selection of design elements that can be used in the development of learning technology in a classroom of disabled learners.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_64-Determinants_of_Interface_Criteria_Learning_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Machine Learning Classifiers for Detecting PE Malware</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110163</link>
        <id>10.14569/IJACSA.2020.0110163</id>
        <doi>10.14569/IJACSA.2020.0110163</doi>
        <lastModDate>2020-02-01T05:13:11.4330000+00:00</lastModDate>
        
        <creator>ABM.Adnan Azmee</creator>
        
        <creator>Pranto Protim Choudhury</creator>
        
        <creator>Md. Aosaful Alam</creator>
        
        <creator>Orko Dutta</creator>
        
        <creator>Muhammad Iqbal Hossai</creator>
        
        <subject>Malware detection; machine learning; data protection; XGBoost; support vector machine; extra tree classifiers; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>In this modern era of technology, securing and protecting one’s data is a major concern. Malware is a program designed to cause harm, and malware analysis is one of the paramount focus points for cyber forensic professionals and network administrators. The degree of harm caused by malicious software varies greatly: at home it may lead to the loss of irrelevant or unimportant information, but on a corporate network it can lead to the loss of valuable business data. Existing research focuses on a few machine learning algorithms to detect malware, and very little of it works with Portable Executable (PE) files. In this paper, we focus on top classification algorithms, comparing their accuracy on our dataset to find out which one gives the best result. Top machine learning classification algorithms were used alongside neural networks, such as Artificial Neural Network, XGBoost, Support Vector Machine, and Extra Tree Classifier. The experimental results show that XGBoost achieved the highest accuracy, 98.62 percent, when compared with the other approaches. Thus, to provide a better solution for these kinds of anomalies, we are interested in researching malware detection and want to contribute to building strong and protective cybersecurity.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_63-Performance_Analysis_of_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Plant Disease Detection using Internet of Thing (IoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110162</link>
        <id>10.14569/IJACSA.2020.0110162</id>
        <doi>10.14569/IJACSA.2020.0110162</doi>
        <lastModDate>2020-02-01T05:13:11.4170000+00:00</lastModDate>
        
        <creator>Muhammad Amir Nawaz</creator>
        
        <creator>Tehmina khan</creator>
        
        <creator>Rana Mudassar Rasool</creator>
        
        <creator>Maryam Kausar</creator>
        
        <creator>Amir Usman</creator>
        
        <creator>Tanvir Fatima Naik Bukht</creator>
        
        <creator>Rizwan Ahmad</creator>
        
        <creator>Jaleel Ahmad</creator>
        
        <subject>Plant diseases; internet of things; temperature sensor; plants; farming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>This paper presents the idea of using Internet of Things (IoT) technology to perceive data, and discusses the role of IoT technology in agricultural disease and insect pest control, which incorporates an agricultural disease and pest monitoring system, the collection of disease and insect pest information using sensor nodes, data processing and mining, and so on. A disease and pest control system based on IoT is proposed, consisting of three levels and three subsystems. The system can provide a new way for farms to access agricultural information. In this paper an automated system has been developed to determine whether a plant is normal or diseased. The normal growth of plants, and the yield and quality of agricultural products, are seriously affected by plant disease. This paper attempts to build an automated system that identifies the presence of disease in plants. The automated disease detection system is developed using sensors such as temperature, humidity and color, based on variations in plant leaf health condition. The values of the temperature, humidity and color parameters are used to detect the presence of plant disease.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_62-Plant_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Situational Factors for Modern Code Review to Support Software Engineers’ Sustainability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110161</link>
        <id>10.14569/IJACSA.2020.0110161</id>
        <doi>10.14569/IJACSA.2020.0110161</doi>
        <lastModDate>2020-02-01T05:13:11.3870000+00:00</lastModDate>
        
        <creator>Sumaira Nazir</creator>
        
        <creator>Nargis Fatima</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Situational; modern code review; sustainable software engineer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Software engineers working in Modern Code Review (MCR) are confronted with a lack of competency in the identification of situational factors. MCR is a software engineering activity for identifying and fixing defects before the delivery of the software product. This issue can be a threat to the individual sustainability of software engineers, and it can be addressed by situational awareness. Therefore, the objective of this study is to identify situational factors concerning the MCR process. A Systematic Literature Review (SLR) has been used to identify situational factors. Data coding, along with the continuous comparison and memoing procedures of grounded theory and expert review, has been used to produce an exclusive and validated list of situational factors grouped under categories. The study results convey 23 situational factors grouped into 5 broad categories, i.e., People, Organization, Technology, Source Code and Project. The study is valuable for researchers wishing to extend the research and for software engineers seeking to identify situations and sustain their work for longer.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_61-Situational_Factors_for_Modern_Code_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Sharing Factors for Modern Code Review to Minimize Software Engineering Waste</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110160</link>
        <id>10.14569/IJACSA.2020.0110160</id>
        <doi>10.14569/IJACSA.2020.0110160</doi>
        <lastModDate>2020-02-01T05:13:11.3700000+00:00</lastModDate>
        
        <creator>Nargis Fatima</creator>
        
        <creator>Sumaira Nazir</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>knowledge sharing; modern code review; software engineering waiting waste</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Software engineering activities such as Modern Code Review (MCR) produce quality software by identifying defects in the code. MCR involves social coding and provides ample opportunities to share knowledge among MCR team members. However, MCR teams are confronted with the issue of waiting waste due to poor knowledge sharing among team members. As a result, project delivery is delayed and mental distress increases. To minimize waiting waste, this study aims to identify the factors that impact knowledge sharing in MCR. The methodology employed is a systematic literature review to identify knowledge sharing factors, followed by data coding with the continual comparison and memoing techniques of grounded theory to produce a unique and categorized list of factors influencing knowledge sharing. The identified factors were then assessed by an expert panel for their naming, expression, and categorization. The study findings report 22 factors grouped into 5 broad categories, i.e., Individual, Team, Social, Facility conditions, and Artifact. The study is useful for researchers wishing to extend the research and for MCR teams to consider these factors to enhance knowledge sharing and minimize waiting waste.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_60-Knowledge_Sharing_Factors_for_Modern_Code_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Multi-Objective Covering-based Security Quantification Model for Mitigating Risk of Web based Medical Image Processing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110159</link>
        <id>10.14569/IJACSA.2020.0110159</id>
        <doi>10.14569/IJACSA.2020.0110159</doi>
        <lastModDate>2020-02-01T05:13:11.3400000+00:00</lastModDate>
        
        <creator>Abdullah Algarni</creator>
        
        <creator>Masood Ahmad</creator>
        
        <creator>Abdulaziz Attaallah</creator>
        
        <creator>Alka Agrawal</creator>
        
        <creator>Rajeev Kumar</creator>
        
        <creator>Raees Ahmad Khan</creator>
        
        <subject>Web based medical image processing; fuzzy analytical hierarchy process; TOPSIS method; security management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Medical image processing is one of the most active research areas and has a big impact on the health sector. With the arrival of intelligent processes, web based medical image processing has become simple and error-free, and web based applications are now used extensively for medical image processing. A large amount of medical data is generated daily, with more and more data being shared over public and private networks for the diagnosis of diseases through web based image processing systems. Medical images such as CT (Computed Tomography) scans, MRI (Magnetic Resonance Imaging), X-Ray and Ultrasound images contain highly personal data of the patients. This data needs to be secured from intruders. Medical images are sensitive to external interference, and manipulation of the data may change the results: data breaches in medical cases can lead to wrong diagnoses or even more fatal, life threatening consequences. So, security in web based medical image processing is a major issue. However, ensuring security for medical images while preserving their characteristics of confidentiality, integrity, availability, etc., poses a major challenge. Working towards a feasible solution, in this study the authors use a list of criteria for checking the security level of web based image processing systems. We apply the Fuzzy Analytic Hierarchy Process (FAHP) combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to the list of criteria that affect security assessment in medical image processing. The results show that FAHP-TOPSIS produces good results for security checking in web based medical image processing systems. The data analysis section shows all the steps involved in our model.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_59-A_Fuzzy_Multi_Objective_Covering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Objectives Optimization to Develop the Mobile Dimension in a Small Private Online Course (SPOC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110158</link>
        <id>10.14569/IJACSA.2020.0110158</id>
        <doi>10.14569/IJACSA.2020.0110158</doi>
        <lastModDate>2020-02-01T05:13:11.3230000+00:00</lastModDate>
        
        <creator>Naima BELARBI</creator>
        
        <creator>Abdelwahed NAMIR</creator>
        
        <creator>Mohamed TALBI</creator>
        
        <creator>Nadia Chafiq</creator>
        
        <subject>Mobile dimension; MOOC/SPOC; multi-objective optimization; criteria; decision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The impact of the mobile technology trend is being felt in several sectors today, including education. In this paper, we present an analysis of the development of the mobile dimension in a Massive Open Online Course (MOOC) or a Small Private Online Course (SPOC) as a decision-making problem among various approaches which cannot be ordered incontestably from best to worst. This is because the various approaches to integrating the mobile dimension are different and each solution presents both advantages and shortcomings from a technological point of view. The decision must be made on the basis of the end-users&#39; requirements and usage. We propose to view this situation as a multi-objective optimization problem, as the decision is a compromise between several conflicting objectives/criteria. The various approaches to developing mobile access to a MOOC/SPOC are presented first and then compared using various criteria. We then provide an analysis of the alternatives to find the non-dominated Pareto solutions.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_58-A_Multi_Objectives_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure V2V Communication in IOV using IBE and PKI based Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110157</link>
        <id>10.14569/IJACSA.2020.0110157</id>
        <doi>10.14569/IJACSA.2020.0110157</doi>
        <lastModDate>2020-02-01T05:13:11.2930000+00:00</lastModDate>
        
        <creator>Satya Sandeep Kanumalli</creator>
        
        <creator>Anuradha Ch</creator>
        
        <creator>Patanala Sri Rama Chandra Murty</creator>
        
        <subject>Privacy; internet of vehicles; hashing; ibe; public key certificate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>We live in the world of the “Internet of Everything”, which has led to the advent of different applications; the Internet of Vehicles (IoV) is one of them, and a major step forward for the future of transportation systems. Vehicle to vehicle (V2V) communication plays a major role here: a vehicle may send sensitive or non-sensitive messages, and these messages are encrypted with public keys. The distribution of public keys is therefore a major problem, because a vehicle needs to remain anonymous, using pseudonyms that change frequently, which makes distribution more complicated. We propose a hybrid approach which uses an existing public key certificate for authorization of the vehicle and Identity Based Encryption to generate public keys from the pseudonyms, and uses them in secure V2V communication without compromising the anonymity of the vehicle.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_57-Secure_V2V_Communication_in_IOV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Analysis of Brain Magnetic Resonance Images Tumor Detection and Classification Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110156</link>
        <id>10.14569/IJACSA.2020.0110156</id>
        <doi>10.14569/IJACSA.2020.0110156</doi>
        <lastModDate>2020-02-01T05:13:11.2430000+00:00</lastModDate>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Su-Hyun Lee</creator>
        
        <creator>Donghyeok An</creator>
        
        <subject>Magnetic Resonance Imaging (MRI); Computed Tomography (CT); MRI Classification; Tumor Detection; Digital Image Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Image segmentation, tumor detection and extraction of the tumor area from brain MR images are the main concerns, but they are time-consuming and tedious tasks performed by clinical experts or radiologists, and the accuracy relies on their experience alone. To overcome these limitations, the use of computer-aided design (CAD) technology has become very important. Magnetic resonance imaging (MRI) and Computed Tomography (CT) are the two major imaging modalities used for brain tumor detection. In this paper, we carry out a critical review of different image processing techniques for brain MR images and critically evaluate these techniques for tumor detection from brain MR images, in order to identify their gaps and limitations. With these identified, the gaps can be filled and the limitations of the various techniques improved, so as to obtain precise and better results. We observe that most researchers employ the stages of pre-processing, feature extraction, feature reduction, and classification of MR images to distinguish benign from malignant images. We have made an effort in this area to open new dimensions for readers to explore the concerned field of research.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_56-Critical_Analysis_of_Brain_Magnetic_Resonance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Behavior of Learning Rules in Hopfield Neural Network for Odia Script</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110155</link>
        <id>10.14569/IJACSA.2020.0110155</id>
        <doi>10.14569/IJACSA.2020.0110155</doi>
        <lastModDate>2020-02-01T05:13:11.2300000+00:00</lastModDate>
        
        <creator>Ramesh Chandra Sahoo</creator>
        
        <creator>Sateesh Kumar Pradhan</creator>
        
        <subject>Hopfield network; Odia script; Hebbian; pseudo-inverse; Storkey; NIT dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Automatic character recognition is one of the most challenging fields in pattern recognition, especially for handwritten Odia characters, as many of these characters are similar and rounded in shape. In this paper, a comparative performance analysis of the Hopfield neural network for storing and recalling handwritten and printed Odia characters with three different learning rules, namely the Hebbian, pseudo-inverse and Storkey learning rules, is presented. An experimental exploration of these three learning rules in the Hopfield network has been performed in two different ways to measure the performance of the network on corrupted patterns. In the first experiment, we demonstrate the performance of storing and recalling Odia characters (vowels and consonants) as images of size 30 x 30 in the Hopfield network with different noise percentages. In the second experiment, recognition accuracy is observed by partitioning the dataset into training and separate testing datasets with the k-fold cross-validation method. The simulation results obtained in this study express the comparative performance of the network in recalling stored patterns and in recognizing a new set of testing patterns with various noise percentages for the different learning rules.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_55-Behavior_of_Learning_Rules_in_Hopfield_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG Emotion Signal of Artificial Neural Network by using Capsule Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110154</link>
        <id>10.14569/IJACSA.2020.0110154</id>
        <doi>10.14569/IJACSA.2020.0110154</doi>
        <lastModDate>2020-02-01T05:13:11.1970000+00:00</lastModDate>
        
        <creator>Usman Ali</creator>
        
        <creator>Haifang Li</creator>
        
        <creator>Rong Yao</creator>
        
        <creator>Qianshan Wang</creator>
        
        <creator>Waqar Hussain</creator>
        
        <creator>Syed Badar ud Duja</creator>
        
        <creator>Muhammad Amjad</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <subject>Emotion recognition; caps net; EEG signal; multidimensional feature; hybrid neural networks; CNN; Granger; motor imagery classification; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Human emotion recognition through electroencephalographic (EEG) signals is becoming attractive. Our mechanism combines the temporal, frequency and spatial attributes of the EEG signals and represents them as a two-dimensional image. Emotion recognition is an important effort in the brain-computer interface field, with applications in education, medicine, the military, and many other areas, where the classification problem arises. In this paper, a classification structure based on the CapsNet neural network is described. Algorithms such as Lasso are used to select a sparse group of the original EEG signal features, and this small subset of essential features is taken as input to the network for the final emotional classification. The results show that, with the best model parameters and network formats, CapsNet attains average classification accuracies of almost 80.22% and 85.41% on the two emotion dimensions, compared to the Support Vector Machine (SVM) and the convolutional neural network (CNN or ConvNet). This significant classification margin enhances the performance of EEG emotional classification. Deep learning approaches such as CNN have been widely used to improve the classification performance of motor imagery-based brain-computer interfaces (BCI). However, CNN&#39;s classification achievement degrades when essential data points are distorted; in the EEG case, signals from the same user are not consistent. So we implement Capsule Networks (CapsNet), which can extract many features, and thereby attain a much more powerful and robust performance than the old CNN approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_54-EEG_Emotion_Signal_of_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Computer-Aided Detection System based on Simple Statistical Features and SVM Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110153</link>
        <id>10.14569/IJACSA.2020.0110153</id>
        <doi>10.14569/IJACSA.2020.0110153</doi>
        <lastModDate>2020-02-01T05:13:11.1670000+00:00</lastModDate>
        
        <creator>Yahia Osman</creator>
        
        <creator>Umar Alqasemi</creator>
        
        <subject>Breast cancer; MIAS; features extraction; SVM; mammogram; clusters; computer-aided detection systems; KNN; ROI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Computer-Aided Detection (CADe) systems are becoming very helpful in supporting physicians in the early detection of breast cancer. In this paper, a CADe system that is able to detect abnormal clusters in mammographic images is implemented using different classifiers and features. The CADe system utilizes a Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) as classifiers. Adopting the mammographic database from the Mammographic Image Analysis Society (MIAS) for training and testing, the performance of the two types of classifiers is compared in terms of sensitivity, specificity, and accuracy. The obtained values for these parameters show the efficiency of the CADe system as a secondary screening method for detecting abnormal clusters given the Region of Interest (ROI). The best classifier is found to be the SVM, which showed 96% accuracy, 92% sensitivity, and 100% specificity.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_53-Breast_Cancer_Computer_Aided_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combining 3D Interpolation, Regression, and Body Features to build 3D Human Data for Garment: An Application to Building 3D Vietnamese Female Data Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110152</link>
        <id>10.14569/IJACSA.2020.0110152</id>
        <doi>10.14569/IJACSA.2020.0110152</doi>
        <lastModDate>2020-02-01T05:13:11.1500000+00:00</lastModDate>
        
        <creator>Tran Thi Minh Kieu</creator>
        
        <creator>Nguyen Tung Mau</creator>
        
        <creator>Le Van</creator>
        
        <creator>Pham The Bao</creator>
        
        <subject>Anthropometry; 3D scanning; human body modeling; interpolation; parametric modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Modeling the 3D human body is an advanced technique used in human motion analysis and the garment industry. In this paper, we propose a method for forming deformation functions so that we can rebuild the 3D human body from given anthropometric measurements. The key idea in our approach is to split the 3D body into small parts; in that way, we can specialize the set of parameters needed to interpolate each section. With this interpolation approach, we build 3D models for 593 female bodies with the corresponding body shapes while requiring fewer input measurements than 3D laser scans.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_52-Combining_3D_Interpolation_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Framework for Content-based Spamdexing Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110151</link>
        <id>10.14569/IJACSA.2020.0110151</id>
        <doi>10.14569/IJACSA.2020.0110151</doi>
        <lastModDate>2020-02-01T05:13:11.1200000+00:00</lastModDate>
        
        <creator>Asim Shahzad</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <creator>Nazri Mohd Nawi</creator>
        
        <subject>Information retrieval; Web spam detection; content spam; pos ratio; search spam; Keywords stuffing; machine generated content detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Spamdexing is one of the biggest threats to modern Search Engines (SEs). Nowadays, spammers use a wide range of techniques for content generation and employ content spam to fill the Search Engine Result Pages (SERPs) with low-quality web pages. Generally, spam web pages provide insufficient, irrelevant, and improper results to users. Many researchers from academia and industry are working on spamdexing to identify spam web pages. However, no universally efficient method has yet been developed for identifying all spam web pages. We believe that improved methods are needed for tackling content spam. This article is an attempt in that direction, proposing a framework for spam web page identification. The framework uses Stop Words, Keyword Density, a Spam Keywords Database, Part of Speech (POS) ratio, and Copied Content algorithms. The WEBSPAM-UK2006 and WEBSPAM-UK2007 datasets were used for conducting the experiments and obtaining threshold values. A promising F-measure of 77.38% illustrates the effectiveness and applicability of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_51-An_Improved_Framework_for_Content_based_Spamdexing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Deep Neural Network for the Binary Classification of Network Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110150</link>
        <id>10.14569/IJACSA.2020.0110150</id>
        <doi>10.14569/IJACSA.2020.0110150</doi>
        <lastModDate>2020-02-01T05:13:11.1030000+00:00</lastModDate>
        
        <creator>Shubair A. Abdullah</creator>
        
        <creator>Ahmed Al-Ashoor</creator>
        
        <subject>Deep learning; ANN; packet classification; binary classification; malicious traffic classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Classifying network packets is crucial in intrusion detection. As intrusion detection systems are the primary defense of the infrastructure of networks, they need to adapt to the exponential increase in threats. Despite the fact that many machine learning techniques have been devised by researchers, this research area is still far from finding perfect systems with high malicious packet detection accuracy. Deep learning is a subset of machine learning and aims to mimic the workings of the human brain in processing data for use in decision-making. It has already shown excellent capabilities in dealing with many real-world problems such as facial recognition and intelligent transportation systems. This paper develops an artificial deep neural network to detect malicious packets in network traffic. The artificial deep neural network is built carefully and gradually to confirm the optimum number of input and output neurons and the learning mechanism inside hidden layers. The performance is analyzed by carrying out several experiments on real-world open source traffic datasets using well-known classification metrics. The experiments have shown promising results for real-world application in the binary classification of network traffic.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_50-An_Artificial_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Flooding Attacks in Communication Protocol of Industrial Control Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110149</link>
        <id>10.14569/IJACSA.2020.0110149</id>
        <doi>10.14569/IJACSA.2020.0110149</doi>
        <lastModDate>2020-02-01T05:13:11.0730000+00:00</lastModDate>
        
        <creator>Rajesh L</creator>
        
        <creator>Penke Satyanarayana</creator>
        
        <subject>Supervisory Control and Data Acquisition (SCADA); Remote Telemetry Unit (RTU); Programmable Logic Controllers (PLC); Communication Protocol; MODBUS; Industrial Control Systems (ICS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Industrial Control Systems (ICS) are commonly used for monitoring and controlling various process plants around the world, such as oil and gas refineries, nuclear reactors, power generation and transmission, and chemical plants. MODBUS is the most widely used communication protocol in these ICS systems; it provides bi-directional transfer of sensor data between data acquisition servers and Intelligent Electronic Devices (IED) such as Programmable Logic Controllers (PLC) or Remote Telemetry Units (RTU). The security of ICS systems is a major concern for the safe and secure operation of these plants. The MODBUS protocol is especially vulnerable to cyber security attacks because security measures were not considered at the time of protocol design. The Denial-of-Service (DoS), or flooding, attack is one of the prominent attacks against MODBUS, affecting the availability of the control system. In this paper, a new method is proposed to detect application-level flooding or DoS attacks; it triggers an alarm annunciator and displays suitable alarms in the Supervisory Control and Data Acquisition (SCADA) system to draw the attention of administrators or engineers so they can take corrective action. The method detected the highest percentage of attacks in less time compared to other methods, and it considers all types of conditions that trigger flooding attacks in the MODBUS protocol.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_49-Detecting_Flooding_Attacks_in_Communication_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usefulness of Mobile Assisted Language Learning in Primary Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110148</link>
        <id>10.14569/IJACSA.2020.0110148</id>
        <doi>10.14569/IJACSA.2020.0110148</doi>
        <lastModDate>2020-02-01T05:13:11.0570000+00:00</lastModDate>
        
        <creator>Kashif Ishaq</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Fadhilah Rosdi</creator>
        
        <creator>Adnan Abid</creator>
        
        <creator>Qasim Ali</creator>
        
        <subject>Literacy and numeracy drive; monitoring and evaluation assistant; assessment; usability; content; design; infrastructure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Literacy &amp; Numeracy Drive (LND) is a mobile application used in public sector primary schools in Punjab province, Pakistan to teach Grade 3 students languages and mathematics on a tablet. A person designated as a Monitoring &amp; Evaluation Assistant (MEA) visits every school allocated by the authorities once a month and randomly selects 7-10 students, evaluating them on the MEA's own tablet by asking multiple questions related to English, Urdu, and mathematics. After the evaluation, the MEA uploads the result to the official portal for the respective school. This study aims to evaluate the effectiveness of LND in terms of usefulness, usability, accessibility, content, and assessment by involving students and teachers who use this application in different schools. A mixed-method study was adopted in which 57 teachers and nearly 300 students from different schools across the district were selected, and effectiveness was measured with the help of interviews and questionnaires. The results reveal that, in its current form, the LND application is not effective and needs improvement in usability, design, content, accessibility, infrastructure, and assessment. Furthermore, teachers recommend game-based learning with an interactive interface, phonics, and animations, as a more interactive and attractive presentation of the content and more variation in the assessments may increase students’ involvement, making the application more effective and producing better results.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_48-Usefulness_of_Mobile_Assisted_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Barchan Sand Dunes Collisions Detection in High Resolution Satellite Images based on Image Clustering and Transfer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110147</link>
        <id>10.14569/IJACSA.2020.0110147</id>
        <doi>10.14569/IJACSA.2020.0110147</doi>
        <lastModDate>2020-02-01T05:13:11.0430000+00:00</lastModDate>
        
        <creator>M. A. Azzaoui</creator>
        
        <creator>L. Masmoudi</creator>
        
        <creator>H. El Belrhiti</creator>
        
        <creator>I. E. Chaouki</creator>
        
        <subject>High resolution satellite images; remote sensing; transfer learning; image segmentation; sand dunes; desertification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Desertification is a core concern for populations living in arid and semi-arid areas. Specifically, barchan dunes, the fastest-moving sand dunes, put constant pressure on human settlements and infrastructure. Remote sensing was used to analyze sand dunes around Tarfaya city, located in the south of Morocco in the Sahara Desert. In this area, dunes form long corridors made of thousands of crescent-shaped dunes moving simultaneously, making data gathering in the field very difficult. A computer vision approach based on machine learning is proposed to automate the detection of barchan sand dunes and monitor their complex interactions. An IKONOS high resolution satellite image was used in combination with a clustering algorithm for image segmentation of the dune corridor and a transfer learning model trained to detect three classes of objects: barchan dunes, bare fields, and a newly introduced class consisting of dune collisions. Indeed, collisions were very difficult to model using classical digital image processing methods due to the large variability of their shapes. The model was trained on 1000 image patches, which were annotated and then augmented to generate a larger dataset. The detection results showed an accuracy of 84.01%. The interest of this research is to provide a relatively affordable approach for tracking sand dune locations in order to better understand their dynamics.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_47-Barchan_Sand_Dunes_Collisions_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Non-Discriminant ERD/ERS Comprising Motor Imagery Electroencephalography Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110146</link>
        <id>10.14569/IJACSA.2020.0110146</id>
        <doi>10.14569/IJACSA.2020.0110146</doi>
        <lastModDate>2020-02-01T05:13:11.0270000+00:00</lastModDate>
        
        <creator>Zaib unnisa Asi</creator>
        
        <creator>M. Sultan Zia</creator>
        
        <creator>Umair Muneer Butt</creator>
        
        <creator>Aneela Abbas</creator>
        
        <creator>Sadaf Ilyas</creator>
        
        <subject>MI EEG Signals; non-discriminant ERD/ERS; evoked potentials; common spatial pattern; Gaussian process classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Classification of Motor Imagery (MI) Electroencephalography (EEG) signals has always been an important aspect of Brain Computer Interface (BCI) systems. Event Related Desynchronization (ERD)/Event Related Synchronization (ERS) plays a significant role in finding discriminant features of MI EEG signals; ERD/ERS is one type of brain response, and the Evoked Potential (EP) is another. This study focuses on the classification of MI EEG signals by Removing the Evoked Potential (REP) from non-discriminant MI EEG data during filter band selection. This optimization is done to enhance classification performance. A comprehensive comparison of several pipelines is presented using well-known feature extraction methods, namely Common Spatial Pattern (CSP) and XDawn. The effectiveness of REP is demonstrated on the PhysioNet dataset, an online data resource. The performance of the pipelines, including the proposed one (Common Spatial Pattern (CSP) with a Gaussian Process Classifier (GPC)), is compared before and after applying REP. It is observed that the REP approach improves the classification accuracy of all subjects and all pipelines, including state-of-the-art algorithms, by up to 20%.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_46-Classification_of_Non_Discriminant_ERD_ERS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability of Mobile Assisted Language Learning App</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110145</link>
        <id>10.14569/IJACSA.2020.0110145</id>
        <doi>10.14569/IJACSA.2020.0110145</doi>
        <lastModDate>2020-02-01T05:13:10.9930000+00:00</lastModDate>
        
        <creator>Kashif Ishaq</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Fadhilah Rosdi</creator>
        
        <creator>Adnan Abid</creator>
        
        <creator>Qasim Ali</creator>
        
        <subject>Literacy and numeracy drive; usability; user experience; mobile app; assessment; public school</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The aim of this study is to evaluate the usability of a Mobile Assisted Language Learning application, the Literacy and Numeracy Drive (LND), which is a smartphone application for learning language and mathematics in public sector primary schools of Punjab, the biggest province of Pakistan. In this study, usability tests were conducted, including questionnaire surveys of teachers and students. The user experience, reliability, and performance of the mobile application were assessed, along with user satisfaction. The LND mobile application was not found to be successful; it has a poor user interface and requires improvement. The &quot;Using Experience,&quot; &quot;Ease of Use,&quot; and &quot;Usefulness&quot; variables were the lowest scorers in terms of user experience. The mobile device specifications were confusing rather than simple, and the services provided by the LND were neither appealing nor effective for students or teachers. Based on the assessed user experience, this research suggests several improvements in the usability and functionality of the LND application. Many schools have chosen to use mobile apps for the teaching and evaluation of language at school. The use of mobile-assisted learning in public sector schools in Punjab invites us to gauge the usability and effectiveness of this approach at such a large scale, which will make it more effective.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_45-Usability_of_Mobile_Assisted_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection and Correction of Blink Artifacts in Single Channel EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110144</link>
        <id>10.14569/IJACSA.2020.0110144</id>
        <doi>10.14569/IJACSA.2020.0110144</doi>
        <lastModDate>2020-02-01T05:13:10.9800000+00:00</lastModDate>
        
        <creator>G Bhaskar N Rao</creator>
        
        <creator>Anumala Vijay Sankar</creator>
        
        <creator>Peri Pinak Pani</creator>
        
        <creator>Aneesh Sidhireddy</creator>
        
        <subject>Electroencephalogram (EEG); ocular artifacts; wavelet transform; hybrid threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Ocular Artifacts (OAs) are inevitable during EEG acquisition and make signal analysis critical. Detection and correction of these artifacts is a major problem nowadays. In this paper, an energy detection method is used to detect the artifacts, and wavelet thresholding is performed within the detected zones to protect neural data in non-blink regions. Various combinations of Wavelet Transform (WT) techniques and threshold functions are compared to identify the optimum combination for OA separation. The outputs of these methods at blink regions are compared in terms of various standard metrics. The results of this study demonstrate that SWT+HT is better at rejecting the artifacts than the other methods in this paradigm.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_44-Automatic_Detection_and_Correction_of_Blink_Artifacts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Flipbook using Web Learning to Improve Logical Thinking Ability in Logic Gate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110143</link>
        <id>10.14569/IJACSA.2020.0110143</id>
        <doi>10.14569/IJACSA.2020.0110143</doi>
        <lastModDate>2020-02-01T05:13:10.9630000+00:00</lastModDate>
        
        <creator>Rizki Noor Prasetyono</creator>
        
        <creator>Rito Cipta Sigitta Hariyono</creator>
        
        <subject>Multimedia based learning; flipbook; web learning; logical thinking; logic gates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The multimedia-based learning process has great potential to change the way of learning. One example is the flipbook, a growing multimedia form of the textbook that offers ease of reading and learning without carrying a thick book. The purpose of this study is to produce a flipbook-assisted web learning product and to improve logical thinking ability through its use. This research uses the 4D model, consisting of define, design, develop, and disseminate. Expert validation of the material yielded an overall percentage of 83.92%, in the excellent category; validation by media experts yielded an overall percentage of 80%, also in the excellent category; and peer validation yielded an overall percentage of 84.78%, again in the excellent category. All analyses show that the flipbook-assisted web learning teaching materials are appropriate for use. The t-test calculations give a value of 10.25, higher than the critical value of 2.045, showing that the use of the logic-gate flipbook assisted with web learning increases logical thinking ability. The N-Gain of 0.39 falls within the medium criteria, confirming that the logic-gate flipbook assisted with web learning increases the ability to think logically.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_43-Development_of_Flipbook_using_Web_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cancelable Face Template Protection using Transform Features for Cyberworld Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110142</link>
        <id>10.14569/IJACSA.2020.0110142</id>
        <doi>10.14569/IJACSA.2020.0110142</doi>
        <lastModDate>2020-02-01T05:13:10.9470000+00:00</lastModDate>
        
        <creator>Firdous Kausar</creator>
        
        <subject>Cancelable biometrics; face authentication; feature extraction; Gabor filter; Principal Component Analysis (PCA); wavelet transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The cyber world has become a fundamental and vital component of the physical world with the increasing dependence on internet-connected devices in industry and government organizations. Providing privacy and security for users during online communication poses unique cybersecurity challenges for industry and government. Intrusion is one of the crucial issues in cybersecurity, which can be addressed by providing vigorous authentication solutions. Biometric authentication is used in different cybersecurity systems for user authentication purposes. Cancelable biometrics is a solution to the privacy problems of traditional biometric systems. This paper proposes a new cancelable face authentication method, which uses a Hybrid Gabor PCA (HGPCA) descriptor for cyberworld security. The proposed method uses the wavelet transform to extract features from face images using a Gabor filter and Principal Component Analysis (PCA). Both types of features are then ensembled using a simple concatenation scheme, and scrambling is applied to the fused features using a random key generated by the user. Finally, the scrambled fused features are stored in the database and used for cancelable biometric authentication as well as recovery. HGPCA achieves “cancelability” and increases authentication accuracy. The proposed method has been tested on three standard face datasets, and its experimental results have been compared with existing methods using standard quantitative measures, showing superiority over existing methods.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_42-Cancelable_Face_Template_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Multi-Class Classification for the First Six Surats of the Holy Quran</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110141</link>
        <id>10.14569/IJACSA.2020.0110141</id>
        <doi>10.14569/IJACSA.2020.0110141</doi>
        <lastModDate>2020-02-01T05:13:10.9330000+00:00</lastModDate>
        
        <creator>Nouh Sabri Elmitwally</creator>
        
        <creator>Ahmed Alsayat</creator>
        
        <subject>Text classification; machine learning; natural language processing; text pre-processing; feature selection; data mining; Holy Quran</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The Holy Quran is one of the holy books, revealed to the prophet Muhammad in the form of separate verses. These verses were written on tree leaves, stones, and bones during his life; as such, they were not arranged or grouped into one book until later. There is no intelligent system able to distinguish the verses of Quran chapters automatically. Accordingly, in this study we propose a model that can recognize and categorize Quran verses automatically and extract the essential features through chapter classification for the first six Surats of the Holy Quran, based on machine learning techniques. The classification of Quran verses into chapters using machine learning classifiers is considered an intelligent task. Classification algorithms like Na&#239;ve Bayes, SVM, KNN, and the J48 decision tree help to classify texts into categories or classes. The target of this research is to use machine learning algorithms for text classification of the Holy Quran verses. As the Quran consists of 114 chapters, we work only with the first six. In this paper, we build a multi-class classification model for the chapter names of the Quranic verses using a Support Vector Classifier (SVC) and GaussianNB. The results show a best overall accuracy of 80% for the SVC and 60% for Gaussian Na&#239;ve Bayes.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_41-The_Multi_Class_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Detecting Botnet Command and Control Communication over an Encrypted Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110140</link>
        <id>10.14569/IJACSA.2020.0110140</id>
        <doi>10.14569/IJACSA.2020.0110140</doi>
        <lastModDate>2020-02-01T05:13:10.9000000+00:00</lastModDate>
        
        <creator>Zahian Ismail</creator>
        
        <creator>Aman Jantan</creator>
        
        <creator>Mohd. Najwadi Yusoff</creator>
        
        <subject>Botnet; Botnet Analysis and Detection System (BADS); encrypted channel; machine learning; accuracy; autonomous</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Botnets employ advanced evasion techniques to avoid detection. One of these techniques is hiding their command and control communication over an encrypted channel such as SSL or TLS. This paper provides a Botnet Analysis and Detection System (BADS) framework for detecting Botnets. The BADS framework has been used as a guideline to devise the methodology, which we divided into six phases: i. data collection, customization, and conversion, ii. feature extraction and feature selection, iii. Botnet prediction and classification, iv. Botnet detection, v. attack notification, and vi. testing and evaluation. We use machine learning algorithms for Botnet prediction and classification. We also found several challenges in implementing this work. This research aims to detect Botnets over an encrypted channel with high accuracy and fast detection time, and to provide autonomous management to the network manager.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_40-A_Framework_for_Detecting_Botnet_Command.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Supported Chemiresistor Array System for Detection of NO2 Gas Pollution in Smart Cities (NN-CAS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110139</link>
        <id>10.14569/IJACSA.2020.0110139</id>
        <doi>10.14569/IJACSA.2020.0110139</doi>
        <lastModDate>2020-02-01T05:13:10.8700000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Gases; chemiresistors; neural networks; sensor array; correlation; road side unit; intelligent transportation systems; smart cities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>A Neural Network supported chemiresistor array system is designed and laboratory tested for the detection of emitted gases from vehicles and other sources of pollution. The designed and tested system is based on an integrated PbPc array of chemiresistors that sends signals corresponding to emitted NO2 gas to a signal processing unit. The process comprises using the relative conductivity values of the edge sensors to the central sensor for the detected gas as an indicator of response characteristics and profiling of the NO2 gas pollution level. The process continues up to the limit where the relative conductivity values of the edge sensors become equal; the relative conductivity of the edge sensors is then used as a control value to shut down the sampling system and send a warning message of excessive pollution. Pollution could be due to a number of factors besides vehicles, such as gas leaks. Optimization of the array elements' response is carried out using Neural Networks (Back Propagation Algorithm). The proposed system is promising and could further be developed to become a vital and integrated part of Intelligent Transportation Systems (ITS) in order to monitor the emission of hazardous gases, and could be integrated with Road Side Units (RSUs) of urban areas in smart cities.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_39-Neural_Network_Supported_Chemiresistor_Array_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Solution to Protect Encryption Keys when Encrypting Database at the Application Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110138</link>
        <id>10.14569/IJACSA.2020.0110138</id>
        <doi>10.14569/IJACSA.2020.0110138</doi>
        <lastModDate>2020-02-01T05:13:10.8700000+00:00</lastModDate>
        
        <creator>Karim El bouchti</creator>
        
        <creator>Soumia Ziti</creator>
        
        <creator>Fouzia Omary</creator>
        
        <creator>Nassim Kharmoum</creator>
        
        <subject>Database encryption; encryption key protection model; database encryption keys protection; data security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Encrypting databases at the application level (client level) is one of the most effective ways to secure data. This data security strategy has the advantage of resisting attacks performed by database administrators. However, the data and encryption keys will necessarily be stored in the clear at the client level, which implies a problem of trust vis-&#224;-vis the client, since it is not always a trusted site. The client can attack the encryption keys at any time. In this work, we propose an original solution that protects encryption keys against internal attacks when implementing database encryption at the application level. The principle of our solution is to transform the encryption keys defined in the application files into other keys, considered the real keys for encryption and decryption of the database, by using protection functions stored within the database server. Our proposed solution is an effective way to secure keys, especially if the server is a trusted site. The implementation results showed better protection of encryption keys and an efficient process of data encryption/decryption. In fact, any malicious attempt by the client to obtain encryption keys from the application level cannot succeed, since the real values of the keys are not defined there.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_38-A_New_Solution_to_Protect_Encryption_Keys.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Emotion Recognition using Neighborhood Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110137</link>
        <id>10.14569/IJACSA.2020.0110137</id>
        <doi>10.14569/IJACSA.2020.0110137</doi>
        <lastModDate>2020-02-01T05:13:10.8400000+00:00</lastModDate>
        
        <creator>Abdulaziz Salamah Aljaloud</creator>
        
        <creator>Habib Ullah</creator>
        
        <creator>Adwan Alownie Alanazi</creator>
        
        <subject>Haar features; feature integration; emotion recognition; face detection; localized features; multiclass SVM classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>We present a new method for recognizing human facial emotions. For this purpose, we initially detect faces in the images by using the well-known cascade classifiers. Subsequently, we extract a localized regional descriptor (LRD) which represents the features of a face based on regional appearance encoding. The LRD formulates and models various spatial regional patterns based on the relationships between local areas themselves instead of considering only raw and unprocessed intensity features of an image. To classify facial emotions into various classes, we train a multiclass support vector machine (M-SVM) classifier which recognizes these emotions during the testing stage. Our proposed method takes into account robust features and is independent of gender and facial skin color for emotion recognition. Moreover, our method is illumination and orientation invariant. We assessed our method on two benchmark datasets and compared it with four reference methods. Our proposed method outperformed them on both datasets.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_37-Facial_Emotion_Recognition_using_Neighborhood_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Categorizing Attributes in Identifying Learning Style using Rough Set Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110136</link>
        <id>10.14569/IJACSA.2020.0110136</id>
        <doi>10.14569/IJACSA.2020.0110136</doi>
        <lastModDate>2020-02-01T05:13:10.8070000+00:00</lastModDate>
        
        <creator>Dadang Syarif Sihabudin Sahid</creator>
        
        <creator>Riswan Efendi</creator>
        
        <creator>Emansa Hasri Putra</creator>
        
        <creator>Muhammad Wahyudi</creator>
        
        <subject>Learning style; rough set; categorizing attributes; conditional attributes; decision attributes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>In a learning process, learning style is one crucial factor that should be considered. However, it is still challenging to determine a student's learning style, especially in online learning activities. Data-driven methods such as artificial intelligence and machine learning are the latest and most popular approaches for predicting learning style. However, these methods involve complex data and attributes, which makes the computational process quite heavy. On the other hand, the literature-based approach suffers from inconsistency between its results and the observed learning behavior. Combining both approaches gives a better accuracy level, but still leaves issues such as ambiguity and a wide range of attribute values. These issues can be reduced by finding the right approach to categorizing attributes. Rough set theory offers a simple way to deal with ambiguity, vagueness, and uncertainty. Rough sets generate rules that can be used to predict or classify decision attributes. Yet, because the method is based on categorical data, care must be taken in determining the categories of attributes. Hence, this research investigated several ways of categorizing attributes in identifying learning style. The results showed that the approach gives a better prediction of learning style. Different categories give different results in terms of accuracy level, number of eliminated data, number of eliminated attributes, and number of generated rules. For decision making, these criteria can be balanced against each other.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_36-Categorizing_Attributes_in_Identifying_Learning_Style.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SBAG: A Hybrid Deep Learning Model for Large Scale Traffic Speed Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110135</link>
        <id>10.14569/IJACSA.2020.0110135</id>
        <doi>10.14569/IJACSA.2020.0110135</doi>
        <lastModDate>2020-02-01T05:13:10.7770000+00:00</lastModDate>
        
        <creator>Adnan Riaz</creator>
        
        <creator>Muhammad Nabeel</creator>
        
        <creator>Mehak Khan</creator>
        
        <creator>Huma Jamil</creator>
        
        <subject>Attention mechanism; large scale traffic prediction; Gated Recurrent Unit (GRU); Bidirectional Long-short term Memory (BiLSTM); Intelligent Transportation System (ITS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Accurate traffic speed prediction is a fundamental requirement of an Intelligent Transportation System (ITS). The proposed hybrid model, Stacked Bidirectional LSTM and Attention-based GRU (SBAG), is used for predicting large scale traffic speed. To capture bidirectional temporal dependencies and spatial features, a bidirectional LSTM (BDLSTM) and an attention-based GRU are exploited. It is the first time in traffic speed prediction that a bidirectional LSTM and an attention-based GRU are exploited as building blocks of a network architecture to measure the backward dependencies of a network. We have also examined the behaviour of the attention layer in our proposed model. We compared the proposed model with state-of-the-art models, e.g. Fully Convolutional Network, Gated Recurrent Unit, Long Short-Term Memory, and Bidirectional Long Short-Term Memory, and achieved superior performance in large scale traffic speed prediction.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_35-SBAG_A_Hybrid_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting the Future Transaction from Large and Imbalanced Banking Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110134</link>
        <id>10.14569/IJACSA.2020.0110134</id>
        <doi>10.14569/IJACSA.2020.0110134</doi>
        <lastModDate>2020-02-01T05:13:10.7600000+00:00</lastModDate>
        
        <creator>Sadaf Ilyas</creator>
        
        <creator>Sultan Zia</creator>
        
        <creator>Umair Muneer Butt</creator>
        
        <creator>Sukumar Letchmunan</creator>
        
        <creator>Zaib un Nisa</creator>
        
        <subject>Machine Learning (ML); banking; Santander; transactions; prediction; imbalanced; unbalanced; skewed; hyperparameter; oversampling; undersampling; EDA; dimensionality reduction; PCA; LDA; LR; RF; DT; MLP; GBM; CatBoost; XGBoost; AdaBoost; LightGBM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Machine learning (ML) algorithms are being adopted rapidly for a range of applications in the finance industry. In this paper, we used a structured dataset of Santander bank, published on a data science and machine learning competition site (kaggle.com), to predict whether a customer will make a transaction or not. The dataset consists of two classes, and it is imbalanced. To handle the imbalance as well as to achieve the goal of prediction with the least log loss, we used a variety of methods and algorithms. The provided dataset is partitioned into two sets of 200,000 entries each for training and testing. 50% of the data is kept hidden on their server for evaluation of the submission. A detailed exploratory data analysis (EDA) of the datasets is performed to check the distributions of values. Correlation between features and the importance of features are calculated. To calculate feature importance, random forests and decision trees are used. Furthermore, principal component analysis and linear discriminant analysis are used for dimensionality reduction. We used 9 different algorithms, including logistic regression (LR), Random forest (RF), Decision tree (DT), Multilayer perceptron (MLP), Gradient boosting method (GBM), Category boost (CatBoost), Extreme gradient boosting (XGBoost), Adaptive boosting (AdaBoost), and Light gradient boosting (LightGBM), on the dataset. We treated the task as a regression problem with LightGBM, which outperforms the state-of-the-art algorithms with 85% accuracy. Later, we fine-tuned the hyperparameters for our dataset and used them in combination with LightGBM. This tuning improves performance, and we achieved 89% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_34-Predicting_the_Future_Transaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Models for Determining Types of Academic Risk and Predicting Dropout in University Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110133</link>
        <id>10.14569/IJACSA.2020.0110133</id>
        <doi>10.14569/IJACSA.2020.0110133</doi>
        <lastModDate>2020-02-01T05:13:10.7300000+00:00</lastModDate>
        
        <creator>Norka Bedregal-Alpaca</creator>
        
        <creator>V&#237;ctor Cornejo-Aparicio</creator>
        
        <creator>Joshua Z&#225;rate-Valderrama</creator>
        
        <creator>Pedro Yanque-Churo</creator>
        
        <subject>Educational data mining; ID3 algorithm; C4.5 algorithm; artificial neural network; classification algorithms; student desertion; academic risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Academic performance is a topic studied not only to identify those students who could drop out of their studies, but also to classify them according to the type of academic risk in which they could find themselves. An application has been implemented that uses academic information provided by the university and generates classification models from three different algorithms: artificial neural networks, ID3, and C4.5. The models created use a set of variables and criteria for their construction and can be used to classify student desertion and, more specifically, to predict the type of academic risk. The performance of these models was compared to determine the one that provided the best results and that will serve to classify students. The decision tree algorithms, C4.5 and ID3, presented better measurements than the artificial neural network. The tree generated using the C4.5 algorithm presented the best performance metrics, with correctness, accuracy, and sensitivity equal to 0.83, 0.87, and 0.90, respectively. As a result of the classification to determine student desertion, it was concluded, according to the model generated using the C4.5 algorithm, that the ratio of credits approved by a student to the credits that they should have taken is the most significant variable. The classification by type of academic risk generated a tree model indicating that the number of abandoned subjects is the most significant variable. The admission modality through which the student entered the university did not turn out to be significant, as it does not appear in the generated decision tree.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_33-Classification_Models_for_Determining_Types_of_Academic_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crime Mapping Model based on Cloud and Spatial Data: A Case Study of Zambia Police Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110132</link>
        <id>10.14569/IJACSA.2020.0110132</id>
        <doi>10.14569/IJACSA.2020.0110132</doi>
        <lastModDate>2020-02-01T05:13:10.7130000+00:00</lastModDate>
        
        <creator>Jonathan Phiri</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Charles S. Lubobya</creator>
        
        <subject>Zambia police; web application; mobile application; cloud model; crime mapping; spatial data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Crime mapping is a strategy used to detect and prevent crime in the police service. The technique involves the use of geographical maps to help crime analysts identify and profile crimes committed in different residential areas, as well as to determine the best methods of responding. The development of geographic information system (GIS) technologies and spatial analysis applications, coupled with cloud computing, has significantly improved the ability of crime analysts to perform this crime mapping function. The aim of this research is to automate the processes involved in crime mapping using spatial data. A baseline study was conducted to identify the challenges in the current crime mapping system used by the Zambia Police Service. The results show that 85.2% of the stations conduct crime mapping using physical geographical maps and pins placed on the map, while 14.8% indicated that they don’t use any form of crime mapping technique. In addition, the study revealed that all stations that participated in the study collect and process crime reports and statistics manually and keep the results in books and papers. The results of the baseline study were used to develop the business processes and a crime mapping model, which was implemented successfully. The proposed model includes a spatial data visualization of crime data based on Google Maps, and is built on a cloud architecture, an Android mobile application, a web application, the Google Maps API, and the Java programming language. A prototype was successfully developed, and the test results of the proposed system show improved data visualization and reporting of crime data with reduced dependency on manual transactions. It also proved to be more effective than the current system.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_32-Crime_Mapping_Model_based_on_Cloud_and_Spatial_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT based Home Automation Integrated Approach: Impact on Society in Sustainable Development Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110131</link>
        <id>10.14569/IJACSA.2020.0110131</id>
        <doi>10.14569/IJACSA.2020.0110131</doi>
        <lastModDate>2020-02-01T05:13:10.6830000+00:00</lastModDate>
        
        <creator>Yasir Mahmood</creator>
        
        <creator>Nazri Kama</creator>
        
        <creator>Azri Azmi</creator>
        
        <creator>Suraya Ya’acob</creator>
        
        <subject>Internet of Things; smart home; sustainable development; home automation; environment sustainability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>In recent years, due to the substantial evolution in the field of consumer electronics, society is striving to optimize efficiency, energy savings, green technology, and environmental sustainability in daily life at home. Most people control and monitor home appliances manually and therefore face many problems in managing natural resources, cost, effort, and security, which leads to an uncomfortable and unreliable life. Numerous ‘intelligent’ devices such as smartphones, tablets, air-conditioners, etc. have promoted the key concept of Internet of Things (IoT) based home automation. Entrenched with technology, these devices can be remotely monitored and controlled over the Internet from home and anywhere in the world. Over the past few decades, global warming has become a severe worldwide challenge, and sustainable development and green technology play an important role in mitigating climate change. The primary purpose of this study is to save natural resources, reduce energy consumption, and understand the impact of home automation on society in order to achieve the goal of green technology and environmental sustainability. In this paper, an IoT based home automation approach is proposed, integrated with smart meters, solar, wind, and geothermal renewable energy resources, and a government green awareness program, to extensively optimize energy consumption, security, cost, convenience, and a cleaner environment for society. In addition, a survey was conducted among the target audience to identify and evaluate its impact on the environment and society from a sustainable development perspective. The results of this survey were statistically analyzed using IBM SPSS Statistics version 23. The results revealed that there is a significant impact of home automation on society, thereby contributing to the solution.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_31-An_IoT_based_Home_Automation_Integrated_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Language Resources for Hindi: An Aesthetics Text Corpus and a Comprehensive Stop Lemma List</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110130</link>
        <id>10.14569/IJACSA.2020.0110130</id>
        <doi>10.14569/IJACSA.2020.0110130</doi>
        <lastModDate>2020-02-01T05:13:10.6670000+00:00</lastModDate>
        
        <creator>Gayatri Venugopal-Wairagade</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Dhanya Pramod</creator>
        
        <subject>Hindi; corpus; aesthetics; stopwords; stoplemmas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>This paper is an effort to complement the contributions made by researchers working toward the inclusion of non-English languages in natural language processing studies. Two novel Hindi language resources have been created and released for public consumption. The first resource is a corpus consisting of nearly a thousand pre-processed fictional and non-fictional texts spanning over a hundred years. The second resource is an exhaustive list of stop lemmas created from 12 corpora across multiple domains, consisting of over 13 million words, from which more than 200,000 lemmas were generated, and 11 publicly available stop word lists comprising over 1000 words, from which nearly 400 unique lemmas were generated. This research lays emphasis on the use of stop lemmas instead of stop words owing to the presence of various, but not all, morphological forms of a word in stop word lists, as opposed to the presence of only the root form of the word, from which variations could be derived if required. It was also observed that stop lemmas were more consistent across multiple sources as compared to stop words. In order to generate the stop lemma list, the parts of speech of the lemmas were investigated but rejected, as it was found that there was no significant correlation between the rank of a word in the frequency list and its part of speech. The stop lemma list was assessed using a comparative method. A formal evaluation method is suggested as future work arising from this study.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_30-Novel_Language_Resources_for_Hindi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tuberculosis Prevention Model in Developing Countries based on Geospatial, Cloud and Web Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110129</link>
        <id>10.14569/IJACSA.2020.0110129</id>
        <doi>10.14569/IJACSA.2020.0110129</doi>
        <lastModDate>2020-02-01T05:13:10.6370000+00:00</lastModDate>
        
        <creator>Innocent Mwila</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Evidence-based; monitoring; cloud computing; geospatial data analysis; mapping; web technologies; information; decision-making; tuberculosis prevention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Information is important when making decisions. Decisions based on gut feeling and made in the absence of evidence tend to be less effective in most situations. This is also the case when it comes to Tuberculosis (TB) disease control and prevention intervention planning and implementation. The lack of evidence-based information upon which to base decisions for action has made efforts to prevent the spread of TB less effective, as TB keeps spreading. The aim of this paper was to design and develop a prototype system that would provide TB program managers with information and tools which can be used to make decisions that can effectively influence the fight against the spread of TB, through the application of cloud computing, geospatial data analysis, and web technologies. The system would improve disease monitoring and tracking through the use of the identified technologies, by displaying the geographical distribution of TB cases in the communities on a mapping application, as well as providing reports which TB program managers can use to make decisions when planning and implementing disease control and prevention activities.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_29-Tuberculosis_Prevention_Model_in_Developing_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalability Performance for Low Power Wide Area Network Technology using Multiple Gateways</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110128</link>
        <id>10.14569/IJACSA.2020.0110128</id>
        <doi>10.14569/IJACSA.2020.0110128</doi>
        <lastModDate>2020-02-01T05:13:10.6200000+00:00</lastModDate>
        
        <creator>N.A. Abdul Latiff</creator>
        
        <creator>I.S. Ismail</creator>
        
        <creator>M. H. Yusoff</creator>
        
        <subject>Low power wide area network; scalability; simulation; multiple gateways</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Low Power Wide Area Network is one of the leading technologies for the Internet of Things. The capability to scale is one of the key criteria for comparing such technologies. The technology uses a star network topology for communication between the end-nodes and the gateway. The star topology enables the network to support a large number of end-nodes, and with multiple gateways deployed in the network, the number of end-nodes can be increased even more. This paper aims to investigate the performance of Low Power Wide Area Network technology, focusing on the capability of the network to scale using multiple gateways as receivers. We model the network system based on the communication behaviours between the end-nodes and the gateways. We also include the communication range limit within which a data signal from an end-node can successfully be received by the gateways. The scalability performance of the Low Power Wide Area Network technology is shown by the successfully received packet data at the gateways. The simulation to study scalability was done based on several parameters, such as the number of end-nodes, gateways, and channels, as well as the application time. The results show that the amount of successfully received data at the gateways increased as the number of gateways and channels and the application time increased.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_28-Scalability_Performance_for_Low_Power_Wide_Area_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Critical Review on Adverse Effects of Concept Drift over Machine Learning Classification Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110127</link>
        <id>10.14569/IJACSA.2020.0110127</id>
        <doi>10.14569/IJACSA.2020.0110127</doi>
        <lastModDate>2020-02-01T05:13:10.5900000+00:00</lastModDate>
        
        <creator>Syed Muslim Jameel</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Hitham Alhussain</creator>
        
        <creator>Mobashar Rehman</creator>
        
        <creator>Arif Budiman</creator>
        
        <subject>Big data classification; machine learning; online supervised learning; concept drift; Adaptive Convolutional Neural Network Extreme Learning Machine (ACNNELM); Meta-Cognitive Online Sequential Extreme Learning Machine (MOSELM); Online Sequential Extreme Learning Machine (OSELM); Real Drift (RD); Virtual Drift (VD); Hybrid Drift (HD); Deep Learning (DL); Shallow Learning (SL); Concept Drift (CD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Big Data (BD) plays a major role in the current computing revolution. Industries and organizations utilize its insights for Business Intelligence using Machine Learning Models (ML-Models). Deep Learning Models (DL-Models) have proven to be a better choice than Shallow Learning Models (SL-Models). However, the dynamic characteristics of BD introduce many critical issues for DL-Models; Concept Drift (CD) is one of them. The CD issue frequently appears in Online Supervised Learning environments in which data trends change over time. The problem may worsen further in the BD environment due to veracity and variability factors. Due to CD, the accuracy of classification results in ML-Models degrades, which may render them inapplicable. Therefore, ML-Models need to adapt quickly to changes to maintain the accuracy of their results. Current solutions need substantial improvement in accuracy and adaptability to make ML-Models robust in non-stationary environments. The existing literature does not provide consolidated information on this issue. Therefore, in this study, we carried out a systematic critical literature review to discuss the Concept Drift taxonomy and identify its adverse effects and the existing approaches to mitigate CD.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_27-A_Critical_Review_on_Adverse_Effects_of_Concept_Drift.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT System for Sleep Quality Monitoring using Ballistocardiography Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110126</link>
        <id>10.14569/IJACSA.2020.0110126</id>
        <doi>10.14569/IJACSA.2020.0110126</doi>
        <lastModDate>2020-02-01T05:13:10.5730000+00:00</lastModDate>
        
        <creator>Nico Surantha</creator>
        
        <creator>C’zuko Adiwiputra</creator>
        
        <creator>Oei Kurniawan Utomo</creator>
        
        <creator>Sani Muhamad Isa</creator>
        
        <creator>Benfano Soewito</creator>
        
        <subject>Internet-of-Things; sleep quality; ballisto-cardiography; HRV; ECG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Sleep is very important for preserving physical and mental health. The development of the ballistocardiography (BCG) sensor enables day-to-day, portable monitoring at home. The goal of this study is to develop an IoT sleep quality monitoring system using BCG sensors, microcontrollers and cloud servers. The BCG sensor produces ECG data from the physical activity of the patient. The data from the sensor is read by the microcontroller, where it is collected and pre-processed. The microcontroller then transmits the data obtained from the BCG sensor to the cloud server for further analysis, i.e. to assess sleep quality. This paper evaluates the data transmission efficiency and resource consumption of the system. The findings of the evaluation show that the proposed method achieves higher efficiency and lower response time, and decreases memory usage by up to 77% compared to the conventional method.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_26-IoT_System_for_Sleep_Quality_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Link Breakage Time Prediction Algorithm for Efficient Power and Routing in Unmanned Aerial Vehicle Communication Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110125</link>
        <id>10.14569/IJACSA.2020.0110125</id>
        <doi>10.14569/IJACSA.2020.0110125</doi>
        <lastModDate>2020-02-01T05:13:10.5270000+00:00</lastModDate>
        
        <creator>Haque Nawaz</creator>
        
        <creator>Husnain Mansoor Ali</creator>
        
        <subject>UAV; link breakage; algorithm; power; RSPS; routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>UAV Communication Networks (UAVCN) fall under the umbrella of ad hoc network technology. They have critical differences from existing wireless networks: high mobility, high speed, dynamic updates, and frequent topology changes due to high movement, which cause link breakages and affect routing performance. This problem degrades UAVCN performance by decreasing throughput and reducing the packet delivery ratio. In this paper, we address this problem by considering the received signal power strength (RSPS). We propose an algorithm that uses the received signal power strength and time to predict the link breakage time using an interpolation method. We implemented the proposed technique by modifying the OLSR protocol. The extended protocol, termed EPOLSR, efficiently uses signal power strength and time to increase UAVCN performance. The extended protocol was implemented using the network simulator (v3) research tool. The metrics of received rate, number of received packets, throughput, and packet delivery ratio (PDR) are considered for evaluation. We compared the proposed EPOLSR with existing routing protocols and observed that the modified protocol performs better than all evaluated routing approaches.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_25-Link_Breakage_Time_Prediction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Measuring benefit of Government IT Investment using Fuzzy AHP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110124</link>
        <id>10.14569/IJACSA.2020.0110124</id>
        <doi>10.14569/IJACSA.2020.0110124</doi>
        <lastModDate>2020-02-01T05:13:10.5100000+00:00</lastModDate>
        
        <creator>Prih Haryanta</creator>
        
        <creator>Azhari Azhari</creator>
        
        <creator>Khabib Mustofa</creator>
        
        <subject>IT investment; government investment; ex-post evaluation; benefit creation; fuzzy AHP; analytic hierarchy process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Information Technology (IT) has become mandatory for every organization, including government. Investment in IT can help governments deliver services to citizens, and every IT investment should give the maximum result. Measuring the benefit of an IT investment is needed to make sure that it has delivered on its missions and goals. There are plenty of models for measuring the feasibility of an IT investment before implementation, but still few models for measuring the IT investment after implementation. This paper proposes a model to measure the benefit of an IT investment after implementation, especially in government organizations. The model uses a generic IS/IT business value classification consisting of 13 categories and 73 sub-categories. Each category is weighted according to organizational preference using the Fuzzy Analytic Hierarchy Process (FAHP). The model is applied to measure IT investments in the Ministry of Finance of the Republic of Indonesia, namely the SPAN and SAKTI applications. The weighted benefit score of SPAN is 76.39%, while the original score is 75.89%. The weighted benefit score of SAKTI is 68.08%, while the original score is 67.33%. The differences between the original and weighted scores indicate that the model accommodates the organization’s preference in the evaluation.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_24-Model_for_Measuring_Benefit_of_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Open Challenges for Crowd Density Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110123</link>
        <id>10.14569/IJACSA.2020.0110123</id>
        <doi>10.14569/IJACSA.2020.0110123</doi>
        <lastModDate>2020-02-01T05:13:10.4930000+00:00</lastModDate>
        
        <creator>Shaya A Alshaya</creator>
        
        <subject>Crowd density; count density; deep learning; CNN; datasets; metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Nowadays, many emergency and surveillance systems are related to crowd management. Supervising a crowded area presents a great challenge, especially when the size of the crowd is unknown. This issue is the starting point for the field of crowd estimation based on density or counts. The density of a crowded area is an important topic in many kinds of applications, such as surveillance, security, biology, and traffic. In this paper, we not only present a deep review of the different approaches and techniques used in previous works to estimate the size of a crowd, but also describe the different datasets used. A comparison of related works based on the weaknesses and strengths of each approach is highlighted to show the important research keys in the field of crowd estimation.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_23-Open_Challenges_for_Crowd_Density_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Cloud Learning based on User Acceptance using DeLone and McLean Model for Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110122</link>
        <id>10.14569/IJACSA.2020.0110122</id>
        <doi>10.14569/IJACSA.2020.0110122</doi>
        <lastModDate>2020-02-01T05:13:10.4800000+00:00</lastModDate>
        
        <creator>Kristiawan Nugroho</creator>
        
        <creator>Sugeng Murdowo</creator>
        
        <creator>Farid Ahmadi</creator>
        
        <creator>Tri Suminar</creator>
        
        <subject>Mobile learning; cloud computing; DeLone; McLean; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Mobile learning has been used in the learning process at several tertiary institutions in Indonesia. However, several universities have not been able to implement mobile learning due to limitations in computer and network infrastructure. Cloud computing is a solution for institutions with limited computer infrastructure, in the form of internet-based services for their customers. This paper discusses the implementation of mobile cloud learning, which combines mobile learning technology and cloud computing, using the updated DeLone and McLean model, a success model for measuring how important the implementation of a mobile cloud learning system is. The F test results give an Fcount of 13,222; since Fcount &gt; Ftable (13,222 &gt; 3.01), Ho is rejected and H1 is accepted. It can therefore be concluded that Information Quality, System Quality, and Service Quality together affect the Intensity of Use.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_22-Mobile_Cloud_Learning_based_on_user_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain-based Electronic Voting System with Special Ballot and Block Structures that Complies with Indonesian Principle of Voting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110121</link>
        <id>10.14569/IJACSA.2020.0110121</id>
        <doi>10.14569/IJACSA.2020.0110121</doi>
        <lastModDate>2020-02-01T05:13:10.4470000+00:00</lastModDate>
        
        <creator>Gottfried Christophorus Prasetyadi</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Rina Refianti</creator>
        
        <subject>Blockchain; voting; design; simulation; Python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Blockchain technology could be implemented not only in digital currency, but also in other fields. One such implementation is in democratic life, namely voting. This research focuses on designing a blockchain-based electronic voting system for medium to large-scale usage that complies with law, specifically voting principles in Indonesia. In this research, we proposed the following: a ballot design as block transaction employing UUID version 4, a modified block structure using SHA3-256 hash algorithm, and a voting protocol. The minimum length of a ballot is 43 bytes (excluding ECDSA signature) if one character is used as candidate’s identifier and timestamp is stored as integer. We built a simulation program using Python-based Django web framework to cast 10,000 votes and mine them into blocks. Tampered transactions in each block could be detected and restored by synchronizing data with another node. We also evaluated the proposed system. By using this system, voters can exercise voting principles in Indonesia: direct, public, free, confidential, honest, and fair.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_21-Blockchain_based_Electronic_Voting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise Reduction in Spatial Data using Machine Learning Methods for Road Condition Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110120</link>
        <id>10.14569/IJACSA.2020.0110120</id>
        <doi>10.14569/IJACSA.2020.0110120</doi>
        <lastModDate>2020-02-01T05:13:10.4330000+00:00</lastModDate>
        
        <creator>Dara Anitha Kumari</creator>
        
        <creator>A. Govardhan</creator>
        
        <subject>Spatial image moments; adaptive logistic de-noising; machine learning; noise removal; correlative corrections</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>With the growth of the road transportation system, safety concerns for road travel are also increasing. To ensure road safety, various government and non-government efforts are visible in maintaining road quality and the transport network system. Road condition maintenance is on the verge of being automated for the quick identification and repair of potholes, cracks and patch works. The automation process is taking place in the majority of countries with the help of ICT-enabled frameworks and devices. The primary device used for this purpose is the geo-location-enabled image capture device. Needless to say, the image capture process is always prone to noise, which must be removed for better further analysis. The spatial data collected from road networks is also prone to various errors, such as missing values or outliers, due to noise induced in the capture devices. Hence, the aim of the current research is to propose a complete solution for noise identification and removal from spatial road network data, making the automation process highly successful and highly accurate. In recent times, many parallel research attempts have been observed addressing noise reduction in various aspects of spatial data; nevertheless, none of these parallel research outcomes has provided a single solution for all noise issues. Henceforth, this work proposes three novel algorithms: solving the spatial image noise problem using adaptive moment filtration, removing missing-value noise from spatial data using adaptive logistic analysis, and removing outlier noise from the same spatial data using a corrective logistic machine learning method. The outcome of this work is nearly 70% accuracy in image noise reduction and 90% accuracy for missing value and outlier removal, while reducing information loss by nearly 50%. The final outcome of the work is to ensure higher accuracy for road maintenance automation.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_20-Noise_Reduction_in_Spatial_Data_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malicious URL Detection based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110119</link>
        <id>10.14569/IJACSA.2020.0110119</id>
        <doi>10.14569/IJACSA.2020.0110119</doi>
        <lastModDate>2020-02-01T05:13:10.4170000+00:00</lastModDate>
        
        <creator>Cho Do Xuan</creator>
        
        <creator>Hoa Dinh Nguyen</creator>
        
        <creator>Tisenko Victor Nikolaevich</creator>
        
        <subject>URL; malicious URL detection; feature extraction; feature selection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Currently, the risk of network information insecurity is increasing rapidly in both number and level of danger. The methods mostly used by hackers today attack end-to-end technology and exploit human vulnerabilities; these techniques include social engineering, phishing, pharming, etc. One of the steps in conducting these attacks is to deceive users with malicious Uniform Resource Locators (URLs). As a result, malicious URL detection is of great interest nowadays. Several scientific studies have presented methods to detect malicious URLs based on machine learning and deep learning techniques. In this paper, we propose a malicious URL detection method using machine learning techniques based on our proposed URL behaviors and attributes. Moreover, big data technology is also exploited to improve the capability of detecting malicious URLs based on abnormal behaviors. In short, the proposed detection system consists of a new set of URL features and behaviors, a machine learning algorithm, and big data technology. The experimental results show that the proposed URL attributes and behaviors significantly improve the ability to detect malicious URLs. This suggests that the proposed system may be considered an optimized and user-friendly solution for malicious URL detection.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_19-Malicious_URL_Detection_based_on_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sign Language Semantic Translation System using Ontology and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110118</link>
        <id>10.14569/IJACSA.2020.0110118</id>
        <doi>10.14569/IJACSA.2020.0110118</doi>
        <lastModDate>2020-02-01T05:13:10.3870000+00:00</lastModDate>
        
        <creator>Eman K Elsayed</creator>
        
        <creator>Doaa R. Fathy</creator>
        
        <subject>Deep Learning (DL); ontology; sign language translation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Translating and understanding sign language may be difficult for some. Therefore, this paper proposes a solution to this problem: an Arabic sign language translation system using ontology and deep learning techniques to interpret users’ signs into different meanings. This paper implements an ontology for the sign language domain to address some sign language challenges. In this first version, translation starts with simple static signs composed of Arabic alphabet letters and some Arabic words. A deep Convolutional Neural Network (CNN) architecture was trained and tested on a pre-made Arabic sign language dataset and on a dataset collected in this paper to obtain better recognition accuracy. Experimental results show that, on the pre-made Arabic sign language dataset, the classification accuracy on the training set (80% of the dataset) was 98.06% and the recognition accuracy on the testing set (20% of the dataset) was 88.87%. On the collected dataset, the classification accuracy on the training set was 98.6% and the semantic recognition accuracy on the testing set was 94.31%.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_18-Sign_Language_Semantic_Translation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain based Mobile Money Interoperability Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110117</link>
        <id>10.14569/IJACSA.2020.0110117</id>
        <doi>10.14569/IJACSA.2020.0110117</doi>
        <lastModDate>2020-02-01T05:13:10.3700000+00:00</lastModDate>
        
        <creator>Fickson Mvula</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Simon Tembo</creator>
        
        <subject>Blockchain; mobile money interoperability; clearing and settlement; blockchain security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Developing countries in Africa in general, and Zambia in particular, have seen a rapid rise in the use of mobile payment platforms. This has not only revolutionized access to finance for the poor but also allowed them access to other financial products such as savings or insurance. With a growing number of mobile money providers in Zambia, there is a need for a solution that would enable integration of the mobile money providers’ systems using a central clearinghouse for purposes of clearing and settlement, to achieve mobile money interoperability. In this study, we first reviewed the technical landscape and features of mobile payment systems in Zambia and then assessed the feasibility of using blockchain technology in proposing a settlement and clearing system that would facilitate mobile money interoperability. A prototype system was then designed in which amounts being interchanged between providers are managed as assets on a permissioned blockchain. The system runs a distributed shared ledger, which provides non-repudiation, data privacy and data origin authentication by leveraging the consistency features of blockchain technology.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_17-A_Blockchain_based_Mobile_Money.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Awareness of Ethical Issues when using an e-Learning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110116</link>
        <id>10.14569/IJACSA.2020.0110116</id>
        <doi>10.14569/IJACSA.2020.0110116</doi>
        <lastModDate>2020-02-01T05:13:10.3530000+00:00</lastModDate>
        
        <creator>Talib Ahmad Almseidein</creator>
        
        <creator>Omar Musa Klaif.Mahasneh</creator>
        
        <subject>Code of ethics; ethical issues; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Transformation to digital systems has made life easier, and the acceptance of e-learning systems in students&#39; academic life is a fact. Therefore, many educational organizations use the e-learning environment for teaching-learning activities. The present study evaluates undergraduate students&#39; awareness of ethics and determines whether there are differences according to gender and academic level when using an e-learning system at Al-Balqa Applied University in Jordan. A self-administered questionnaire was designed to measure participants&#39; awareness of ethics. It consists of 20 items classified into three ethical categories: intellectual property rights, vandalism and privacy. The results show that students&#39; awareness is low in all three categories regarding their commitment to ethical issues when using an e-learning system. Results also show that there are no significant differences in awareness of ethical issues by undergraduate students&#39; gender or academic level. Therefore, undergraduate students should be made fully knowledgeable about ethical issues to avoid unethical behavior while using the e-learning system.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_16-Awareness_of_Ethical_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge IoT Networking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110115</link>
        <id>10.14569/IJACSA.2020.0110115</id>
        <doi>10.14569/IJACSA.2020.0110115</doi>
        <lastModDate>2020-02-01T05:13:10.3230000+00:00</lastModDate>
        
        <creator>Majed Mohaia Alhaisoni</creator>
        
        <subject>Edge IoT; network centrality; communicability; degree; closeness </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Data transmission has witnessed a new wave of emerging technologies such as IoT. This new way of communication is carried out through smart devices such as smart sensors and actuators. Thus, data traffic keeps traversing to the main servers in order to accomplish the tasks on the sensor side. However, this way of communication encounters certain network issues due to the nature of routing back and forth between the end users and the main servers. Consequently, this incurs high delay and packet loss, which in turn degrade the overall Quality of Service (QoS). On the other hand, the new way of data transmission, called the “edge IoT network”, has not only helped reduce the load over the network but also made the nodes more self-managing at the edge. However, this approach has limitations in power consumption and efficiency, which could lead to node failure and data loss. Therefore, this paper presents a new model combining network science and computer networking to enhance edge IoT efficiency. Simulation results show clear evidence of improvement in efficiency, communicability, degree, and overall closeness.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_15-Edge_IoT_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of Stemming in the Stylometric Authorship Attribution in Arabic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110114</link>
        <id>10.14569/IJACSA.2020.0110114</id>
        <doi>10.14569/IJACSA.2020.0110114</doi>
        <lastModDate>2020-02-01T05:13:10.2930000+00:00</lastModDate>
        
        <creator>Abdulfattah Omar</creator>
        
        <creator>Wafya Ibrahim Hamouda</creator>
        
        <subject>Authorship attribution; cluster analysis; GOLD stemmer; Khoga stemmer; Light 10 stemmer; stemming; stylometry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Recent years have witnessed the development of numerous approaches to authorship attribution, including statistical and linguistic methods. Stylometric authorship attribution, however, remains among the most widely used due to its accuracy and effectiveness. Nevertheless, many authorship problems remain unresolved for Arabic. This can be attributed to different factors, including linguistic peculiarities that are not usually considered in standard authorship systems. In the case of Arabic, morphological features carry unique stylistic information that can be usefully exploited in testing the authorship of controversial texts and writings. The hypothesis is that many of these morphological features are lost through stemming. As such, this study investigates the effectiveness of stemming in stylometric applications of authorship attribution in Arabic. Three Arabic stemmers are used: the GOLD stemmer, the Khoga stemmer, and the Light 10 stemmer. By way of illustration, a corpus of 2400 news articles written by 97 different authors is designed. To evaluate the effectiveness of stemming, the selected articles (both stemmed and unstemmed texts) are clustered using cluster analysis methods, and comparisons are made between clustering structures based on stemmed and unstemmed datasets. The results indicate that stemming has negative impacts on the accuracy of the clustering performance and thus on the reliability of stylometric authorship testing in Arabic. The peculiar stylistic features of affixation processes in Arabic can thus be usefully exploited to improve the performance of authorship attribution applications in Arabic. It can finally be concluded that stemming is not effective in stylometric authorship applications in Arabic.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_14-The_Effectiveness_of_Stemming_in_the_Stylometric_Authorship.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review on Students’ Engagement in Classroom: Indicators, Challenges and Computational Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110113</link>
        <id>10.14569/IJACSA.2020.0110113</id>
        <doi>10.14569/IJACSA.2020.0110113</doi>
        <lastModDate>2020-02-01T05:13:10.2930000+00:00</lastModDate>
        
        <creator>Latha Subramainan</creator>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <subject>Classroom interaction; student engagement; engagement indicators; engagement challenges; computational techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Students’ engagement in a classroom is a key factor that influences several educational outcomes. Studies by the University of California, Los Angeles (UCLA) and British universities found that 40% of students frequently experience boredom and fewer than 20% of students ask questions in class due to poor engagement. A survey by Malaysia’s Program for International Student Assessment (PISA) found that 80% of the participating schools fell into the poor performance bracket. However, studies in this line of research are limited and scattered. To provide clear insight into this problem and support researchers, it is crucial to understand the current state of research in this area. Consequently, in this paper, a comprehensive review is conducted to map the literature studies to a consistent taxonomy. Search terms revealed 87 papers from several databases, which have been classified into seven categories. A systematic review method is applied, analysis is performed, and finally, findings, discussion, and recommendations are presented.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_13-A_Systematic_Review_on_Students_Engagement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent and Adaptive Model for Change Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110112</link>
        <id>10.14569/IJACSA.2020.0110112</id>
        <doi>10.14569/IJACSA.2020.0110112</doi>
        <lastModDate>2020-02-01T05:13:10.2600000+00:00</lastModDate>
        
        <creator>Ali M Alshahrani</creator>
        
        <subject>Change Management; External Environment; Internal Environment; Decision Support System (DSS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The continuous and rapid changes taking place in the world today make change management crucial to any organization. The existing change management models bridge the gap between motivation, planning, and implementation. These models are highly significant, as organizational change is no longer a rare event but an ongoing process, and the ‘business as usual’ model has become insignificant to most organizations. To automate the theoretical model of change management, an intelligent and adaptive model for change management is developed in this paper, which takes into consideration all the positive and negative effects (factors) that may take place at any time and any place, internally (internal factors) or externally (external factors). Based on these factors, the proposed model can efficiently find a reasonable solution that adapts to the existing situation to avoid any failure of organizational management. The proposed system is built on a decision support system (DSS) with inputs that represent the influencing factors and an output that represents feedback on the method of management. In this paper, the proposed change management model has been verified, and the results have been reported accordingly.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_12-An_Intelligent_and_Adaptive_Model_for_Change_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adjacency Effects of Layered Clouds by Means of Monte Carlo Ray Tracing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110111</link>
        <id>10.14569/IJACSA.2020.0110111</id>
        <doi>10.14569/IJACSA.2020.0110111</doi>
        <lastModDate>2020-02-01T05:13:10.2430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Monte Carlo simulation; top of the atmosphere radiance; cloud type; adjacency effects; layered clouds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Adjacency effects from layered, box-shaped clouds are clarified by means of Monte Carlo Simulation (MCS), taking into account a phase function of cloud particles and a multi-layered plane parallel atmosphere. MCS allows estimation of top-of-the-atmosphere radiance. Influences on adjacency effects of the phase function of the clouds in question and the number of layers of the plane parallel atmosphere are also clarified, together with the effects from the top and the bottom clouds. There are 10 cloud types in the meteorological definition. One-layer clouds for cumulus and cumulonimbus are investigated in this study.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_11-Adjacency_Effects_of_Layered_Clouds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Algorithm Naive Bayesian Classifier and Random Key Cuckoo Search for Virtual Machine Consolidation Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110110</link>
        <id>10.14569/IJACSA.2020.0110110</id>
        <doi>10.14569/IJACSA.2020.0110110</doi>
        <lastModDate>2020-02-01T05:13:10.2430000+00:00</lastModDate>
        
        <creator>Yasser Moaly</creator>
        
        <creator>Basheer A.Youssef</creator>
        
        <subject>Cloud computing; Naive Bayesian classifier; Random Key Cuckoo Search; Energy-efficiency; SLA-aware; Virtual Machine consolidation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The trade-off between energy consumption and SLA violation presents a serious challenge in cloud computing environments. A non-aggressive virtual machine consolidation algorithm is a good approach to reduce the consumed energy as well as SLA violations. A well-known strategy for dealing with the virtual machine consolidation problem consists of four steps: host overloading detection, host under-loading detection, virtual machine selection, and virtual machine placement. In this paper, this strategy is modified by merging the last two steps, virtual machine selection and virtual machine placement, to avoid poor solutions caused by solving both steps separately. In the host overloading/under-loading detection steps, we classified host status into five classes: Over-Utilized, Nearly Over-Utilized, Normal Utilized, Under-Utilized, and Switched Off; then an algorithm based on the Naive Bayesian Classifier was introduced to detect the future host state and minimize the number of virtual machine migrations; as a result, the energy consumption and performance degradation due to migrations are minimized. In the virtual machine selection and placement steps, we introduced an algorithm based on Random Key Cuckoo Search to reduce energy consumption and SLA violations. To assess the algorithms, real data traces covering 10 days were used to verify the proposed algorithms. The experimental results proved that the proposed algorithms can significantly reduce the consumed energy as well as SLA violations in data centers.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_10-Hybrid_Algorithm_Naive_Bayesian_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: An Enhanced K-Nearest Neighbor Predictive Model through Metaheuristic Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110109</link>
        <id>10.14569/IJACSA.2020.0110109</id>
        <doi>10.14569/IJACSA.2020.0110109</doi>
        <lastModDate>2020-02-01T05:13:10.2130000+00:00</lastModDate>
        
        <creator>Allemar Jhone P. Delima</creator>
        
        <subject>CIGAL-KNN; GA-KNN; IBAX operator; KNN algorithm; prediction models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2020.0110109.retraction</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_9-An_Enhanced_K_Nearest_Neighbor_Predictive_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Convolution Neural Network Attempts to Match Japanese Women’s Kimono and Obi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110108</link>
        <id>10.14569/IJACSA.2020.0110108</id>
        <doi>10.14569/IJACSA.2020.0110108</doi>
        <lastModDate>2020-02-01T05:13:10.1970000+00:00</lastModDate>
        
        <creator>Kumiko Komizo</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Digital fashion; kimono; obi; convolution neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Currently, the decline in kimono usage in Japan is serious. This has become an important problem for the kimono industry and kimono culture. The reason behind this lack of usage is that Japanese clothing has many strict rules attached to it. One of those difficult rules is that kimonos have status, and one must consider the proper kimono to wear depending on the place and type of event. At the same time, the obi (sash) also has status, and the status of the kimono and obi must match. The matching of the kimono and obi is called “obiawase” in Japanese, and it is not just a matter of the person wearing the kimono selecting a pair that she likes. Instead, the place where you first wear a kimono determines its status, and the obi must match that status and kimono. In other words, the color, material, and meaning behind the pattern must be matched with the obi. Kimono patterns may evoke the seasons or a celebratory event. All this must be considered. The kimono was originally everyday wear, and people were taught these things in their households, but with today’s increasingly nuclear families, the person who could teach these things isn’t nearby, adding to the lack of use of kimonos. Because of this, there has been interest in using CNNs (Convolution Neural Networks) in the digital fashion industry. We are attempting to use machine learning to tackle the difficult task of matching an obi to a kimono, using CNNs, the approach drawing the most attention today.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_8-Using_the_Convolution_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of CRUD Methods using NET Object Relational Mappers: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110107</link>
        <id>10.14569/IJACSA.2020.0110107</id>
        <doi>10.14569/IJACSA.2020.0110107</doi>
        <lastModDate>2020-02-01T05:13:10.1830000+00:00</lastModDate>
        
        <creator>Doina Zmaranda</creator>
        
        <creator>Lucian-Laurentiu Pop-Fele</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>George Pecherle</creator>
        
        <subject>ORM (Object Relational Mapper); domain-level development; performance evaluation; CRUD (Create Read Update Delete) operations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Most applications available nowadays use an Object Relational Mapper (ORM) to access and save data. The additional layer wrapped over the database induces a performance impact to the detriment of raw SQL queries; on the other side, the advantages of using ORMs, by focusing on the domain level throughout application development, represent a premise for easier development and simpler code maintenance. In this context, this paper makes a performance comparison between three of the most used ORM technologies from the .NET family: Entity Framework Core 2.2, nHibernate 5.2.3, and Dapper 1.50.5. The main objective of the paper is a comparative analysis of the impact that a specific ORM has on application performance when making database requests. In order to perform the analysis, a specific testing architecture was designed to ensure the consistency of tests. Performance evaluation of response times and memory usage for each technology was done using the same CRUD (Create Read Update Delete) operations on the database. The results obtained proved that the decision to use one or another depends on the most frequently used type of operation. A comprehensive discussion based on the analysis of the results is provided to support software engineers in choosing a specific ORM in the process of software design and development.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_7-Performance_Comparison_of_CRUD_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pathological Worrying and Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110106</link>
        <id>10.14569/IJACSA.2020.0110106</id>
        <doi>10.14569/IJACSA.2020.0110106</doi>
        <lastModDate>2020-02-01T05:13:10.1370000+00:00</lastModDate>
        
        <creator>Carlos Pelta</creator>
        
        <subject>Pathological worrying; artificial neural networks; cascade-correlation algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Worrying is a cognitive process that focuses on potential future negative events, where the outcome is often uncertain. Worries can arise in chains with one worry leading to another, often without solution. This may give rise to an uncontrollable worrying that may be associated with psychiatric disorders such as anxiety and depression. The generation of progressively more negative chains of worries can lead to a catastrophic phenomenon of pathological worrying. In this article we show that catastrophic worrying can be simulated by using a cascade-correlation algorithm for artificial neural networks.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_6-Pathological_Worrying_and_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of a Sustainable Agile Model for Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110105</link>
        <id>10.14569/IJACSA.2020.0110105</id>
        <doi>10.14569/IJACSA.2020.0110105</doi>
        <lastModDate>2020-02-01T05:13:10.1200000+00:00</lastModDate>
        
        <creator>Oscar Antonio Alvarez Gal&#225;n</creator>
        
        <creator>Jos&#233; Luis Cendejas Vald&#233;z</creator>
        
        <creator>Heberto Ferreira Medina</creator>
        
        <creator>Gustavo A. Vanegas Contreras</creator>
        
        <creator>Jes&#250;s Leonardo Soto Sumuano</creator>
        
        <subject>Sustainability; agile methodologies; software development; MDSIC; sustainability variables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Sustainability is more important now than ever in the context of organizational growth. It is necessary that technological products, such as software, be certified as green, environmentally friendly technology; this would mean a competitive advantage for an organization that implements an agile methodology for software development that takes sustainability into account, giving the organization new ways to market its software products as environmentally friendly. This study proposes a model for agile software development that takes into account that software development must be based on reusing old hardware, free non-proprietary software and code (open source), as well as virtualization of servers and machines, to create software that can be useful for over a decade. As a result, we expect a reduction of planned obsolescence in hardware, which means taking one step toward solving the worldwide problem posed by the large amount of electronic waste (e-waste) today.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_5-Proposal_of_a_Sustainable_Agile_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Machine Learning Algorithms for Predicting Academic Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110104</link>
        <id>10.14569/IJACSA.2020.0110104</id>
        <doi>10.14569/IJACSA.2020.0110104</doi>
        <lastModDate>2020-02-01T05:13:10.0900000+00:00</lastModDate>
        
        <creator>Phauk Sokkhey</creator>
        
        <creator>Takeo Okazaki</creator>
        
        <subject>Student performance; machine learning algorithms; k-fold cross-validation; principal component analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The large volume and complexity of data in educational institutions require support from information technologies. To facilitate this task, many researchers have focused on using machine learning to extract knowledge from education databases to support students and instructors in achieving better performance. In prediction models, the challenging task is to choose effective techniques that can produce satisfying predictive accuracy. Hence, in this work, we introduce a hybrid approach of principal component analysis (PCA) in conjunction with four machine learning (ML) algorithms: random forest (RF), the C5.0 decision tree (DT), na&#239;ve Bayes (NB), and support vector machine (SVM), to improve classification performance by solving the misclassification problem. Three datasets were used to confirm the robustness of the proposed models. On the given datasets, we evaluated classification accuracy and root mean square error (RMSE) as evaluation metrics of the proposed models. In this classification problem, 10-fold cross-validation was used to evaluate the predictive performance. The proposed hybrid models produced very good prediction results, showing themselves to be optimal prediction and classification algorithms.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_4-Hybrid_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Security Effectiveness Evaluation for Cloud Services Selection following a Risk-Driven Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110103</link>
        <id>10.14569/IJACSA.2020.0110103</id>
        <doi>10.14569/IJACSA.2020.0110103</doi>
        <lastModDate>2020-02-01T05:13:10.0730000+00:00</lastModDate>
        
        <creator>Sarah Maroc</creator>
        
        <creator>Jian Biao Zhang</creator>
        
        <subject>Cloud computing; cloud services selection; decision-making; risk-driven assessment; security evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>Cloud computing is gaining popularity, with an increasing number of services available in the market. This has made services selection and evaluation a difficult and challenging task, particularly for security-based evaluation. A key problem with much of the literature on cloud services security evaluation is that it fails to consider the overall evaluation context given the cloud characteristics and the underlying influence factors, including threats, vulnerabilities, and security controls. In this paper, we propose a holistic risk-driven security evaluation approach for cloud services selection. We first use the fuzzy DEMATEL method to jointly assess the likelihood and impact of threats with respect to the cloud service types, the exploitability of vulnerabilities to the identified threats, and the effectiveness of security controls in mitigating those vulnerabilities. Consequently, the overall diffusion of risk is captured via the relations across these concepts, which is leveraged to filter and prioritize the most critical security controls. The selected controls were then weighted using a combination of fuzzy DEMATEL and fuzzy ANP methods based on several factors, including their effectiveness in preventing the identified risks, the user’s preferences, and the level of control (i.e., responsibilities). The latter denotes how much control a cloud user is transferring to the cloud provider. To enhance the reliability of the results, the subjective weights were integrated with objective weights using the Entropy method. Finally, the TOPSIS method was employed for services ranking, and the Improvement Gap Analysis (IGA) method was leveraged to provide more insight into the strengths and weaknesses of the selected services. An illustrative example is given to demonstrate the application of the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_3-Towards_Security_Effectiveness_Evaluation_for_Cloud_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Search Space of Adversarial Perturbations against Image Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110102</link>
        <id>10.14569/IJACSA.2020.0110102</id>
        <doi>10.14569/IJACSA.2020.0110102</doi>
        <lastModDate>2020-02-01T05:13:10.0430000+00:00</lastModDate>
        
        <creator>Dang Duy Thang</creator>
        
        <creator>Toshihiro Matsui</creator>
        
        <subject>Deep neural networks; image filters; adversarial examples; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>The superior performance of deep learning is threatened by its own safety issues. Recent findings have shown that deep learning systems are highly vulnerable to adversarial examples, an attack form in which inputs are altered with the attacker’s intent to deceive the deep learning system. Many defensive methods have been proposed to protect deep learning systems against adversarial examples. However, there is still a lack of principled strategies for deceiving those defensive methods. Any time a particular countermeasure is proposed, a new powerful adversarial attack is invented to deceive that countermeasure. In this study, we focus on investigating the ability to create adversarial patterns in the search space against defensive methods that use image filters. Experimental results conducted on the ImageNet dataset with image classification tasks showed the correlation between the search space of adversarial perturbations and filters. These findings open a new direction for building stronger offensive methods against deep learning systems.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_2-Search_Space_of_Adversarial_Perturbations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Performance Computing in Resource Poor Settings: An Approach based on Volunteer Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2020</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2020.0110101</link>
        <id>10.14569/IJACSA.2020.0110101</id>
        <doi>10.14569/IJACSA.2020.0110101</doi>
        <lastModDate>2020-02-01T05:13:09.9930000+00:00</lastModDate>
        
        <creator>Adamou Hamza</creator>
        
        <creator>Azanzi Jiomekong</creator>
        
        <subject>Volunteer computing; resource poor settings; high performance computing; matrix multiplication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 11(1), 2020</description>
        <description>High Performance Computing (HPC) systems aim to solve, in a short amount of time, complex computing problems that are either too large for standard computers or would take too long. They are used to solve computational problems in many fields such as medical science (drug discovery, breast cancer detection in images, etc.), climate science, physics, and mathematical science. Existing solutions such as HPC Supercomputers, HPC Clusters, HPC Clouds, or HPC Grids are not suited to resource-poor settings (mainly in developing countries) because their fees are generally beyond available funding (particularly for academics), and the administrative complexity of accessing an HPC Grid creates a further barrier. This paper presents an approach for building a Volunteer Computing system for HPC in resource-poor settings. This solution does not require any additional investment in hardware but relies instead on voluntary machines already owned by private users. The experiment was performed on the mathematical problem of matrix multiplication using the Volunteer Computing system. Given the success of this experiment, the enrollment of other volunteers has already started, the goal being to create a powerful Volunteer Computing system with the maximum number of computers.</description>
        <description>http://thesai.org/Downloads/Volume11No1/Paper_1-High_Performance_Computing_in_Resource_Poor_Settings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Gated Recurrent Unit Predictions with Univariate Time Series Imputation Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101290</link>
        <id>10.14569/IJACSA.2019.0101290</id>
        <doi>10.14569/IJACSA.2019.0101290</doi>
        <lastModDate>2020-01-02T06:45:26.9700000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Deymor Centty</creator>
        
        <subject>Gated recurrent unit; local average of nearest neighbors; case based reasoning imputation; GRU+LANN; GRU+CBRi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The main objective of the work presented in this paper is to improve the quality of the predictions made with the recurrent neural network known as the Gated Recurrent Unit (GRU). To this end, instead of making different adjustments to the architecture of the neural network in question, univariate time series imputation techniques such as Local Average of Nearest Neighbors (LANN) and Case Based Reasoning Imputation (CBRi) are used. We experimented with different gap sizes, from 1 to 11 consecutive NAs, with the best gap size being six consecutive NA values for LANN and two NA values for CBRi. The results show that both imputation techniques improve the prediction quality of the Gated Recurrent Unit, with LANN performing better than CBRi; thus, the results of the best configurations of LANN and CBRi surpassed the techniques with which they were compared.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_90-Improving_Gated_Recurrent_unit_Predictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Predictive Performance of Dynamic Neural Network Models for Forecasting Financial Time Series</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101289</link>
        <id>10.14569/IJACSA.2019.0101289</id>
        <doi>10.14569/IJACSA.2019.0101289</doi>
        <lastModDate>2020-01-02T06:45:26.4070000+00:00</lastModDate>
        
        <creator>Haya Alaskar</creator>
        
        <subject>Dynamic neural network; financial time series; prediction stock market; financial forecasting; deep learning-based technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The study presents the high predictive performance of dynamic neural network models for noisy time series data; explicitly, forecasting financial time series from the stock market. Several dynamic neural networks with different architecture models are implemented for forecasting stock market prices and oil prices. A comparative analysis of eight architectures of dynamic neural network models was carried out and presented. The study explains the techniques used, involving the processing of data, the management of noisy data, and transformations to stationary time series. Experimental testing in this work uses mean square error and mean absolute percentage error to evaluate forecast accuracy. The results show that the different structures of the dynamic neural network models can be successfully used for the prediction of nonstationary financial signals, which is considered very challenging since the signals suffer from noise and volatility. The nonlinear autoregressive neural network with exogenous inputs (NARX) does considerably better than the other network models, as the accuracy in the comparative evaluation achieves a better performance in terms of profit return. For non-stationary signals, Long Short-Term Memory results are the best on mean square error and mean absolute percentage error.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_89-High_Predictive_Performance_of_Dynamic_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Cluster based Model for Fast Video Background Subtraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101288</link>
        <id>10.14569/IJACSA.2019.0101288</id>
        <doi>10.14569/IJACSA.2019.0101288</doi>
        <lastModDate>2019-12-31T12:06:05.4030000+00:00</lastModDate>
        
        <creator>Muralikrishna SN</creator>
        
        <creator>Balachandra Muniyal</creator>
        
        <creator>U Dinesh Acharya</creator>
        
        <subject>Background subtraction; Gaussian mixture model; K-means; clustering; object detection; transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Background subtraction (BGS) is one of the important steps in many automatic video analysis applications. Several researchers have attempted to address the challenges due to illumination variation, shadow, camouflage, dynamic changes in the background and bootstrapping requirements. In this paper, a method to perform BGS using dynamic clustering is proposed. A background model is generated using the K-means algorithm. The normalized γ-corrected distance values and an automatic threshold value are used to perform the background subtraction. The background models are updated online to handle slow illumination changes. The experiment was conducted on the CDNet2014 dataset. The experimental results show that the proposed method is fast and performs well for the baseline, camera-jitter and dynamic background categories of video.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_88-Adaptive_Cluster_based_Model_for_Fast_Video_Background.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded Mission Decision-Making based on Dynamic Decision Networks in SoPC Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101287</link>
        <id>10.14569/IJACSA.2019.0101287</id>
        <doi>10.14569/IJACSA.2019.0101287</doi>
        <lastModDate>2019-12-31T12:06:05.3870000+00:00</lastModDate>
        
        <creator>Hanen Chenini</creator>
        
        <subject>Bayesian Decision Making; Dynamic Bayesian Networks (DBN); Multiple-Criteria Decision-Making (MCDM); SoPC; practical case</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper tackles a Bayesian Decision Making approach for unmanned aerial vehicle (UAV) missions that allows the UAV to quickly react to unexpected events in dynamic environments. From online observations and the mission statement, the proposed approach is designed by means of Dynamic Bayesian Networks (DBN) arising from safety or performance failure analysis. After proposing a DBN model, a probabilistic approach based on Multiple-Criteria Decision-Making (MCDM) is then applied to find the best configuration reaching a balance between performance and energy consumption, and thus to decide which tasks will be implemented as SW and which as HW execution units, according to the mission requirements. The proposed UAV mission decision-making is three-pronged, providing: (1) real-time image pre-processing of sensor observations; (2) a temporal and probabilistic approach based on Bayesian Networks to continuously update the mission plan during the flight; and (3) low-power hardware and software implementations for online and real-time embedded Decision Making using a Xilinx System on Programmable Chip (SoPC) platform. The proposed approach is then validated on a practical-case UAV mission planning scenario using the proposed dynamic decision-maker implemented on an embedded system based on a hybrid device.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_87-Embedded_Mission_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Network Intrusion Detection System using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101286</link>
        <id>10.14569/IJACSA.2019.0101286</id>
        <doi>10.14569/IJACSA.2019.0101286</doi>
        <lastModDate>2019-12-31T12:06:05.3700000+00:00</lastModDate>
        
        <creator>Abdullah Alsaeedi</creator>
        
        <creator>Mohammad Zubair Khan</creator>
        
        <subject>Intrusion Detection System (IDS); classifiers; AI; machine learning; KDD99; CICIDS2017; DoS; U2R; R2L</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>With the coming of the Internet and the increasing number of Internet users in recent years, the number of attacks has also increased. Protecting computers and networks is a hard task. An intrusion detection system is used to detect attacks and to protect computers and network systems from these attacks. This paper aimed to compare the performance of Random Forests, Decision Tree, Gaussian Naïve Bayes, and Support Vector Machines in detecting network attacks. An up-to-date dataset was chosen to compare the performance of these classifiers. The results of the conducted experiments demonstrate that both Random Forests and Decision Tree performed effectively in detecting attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_86-Performance_Analysis_of_Network_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Knowledge Transformation for Context-aware Heterogeneous Formalisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101285</link>
        <id>10.14569/IJACSA.2019.0101285</id>
        <doi>10.14569/IJACSA.2019.0101285</doi>
        <lastModDate>2019-12-31T12:06:05.3570000+00:00</lastModDate>
        
        <creator>Hafiz Mahfooz Ul Haque</creator>
        
        <creator>Sajid Ullah Khan</creator>
        
        <creator>Ibrar Hussain</creator>
        
        <subject>Context-aware system; semantic knowledge transformation; ontology; interoperability; smart spaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>In recent years, an increasing social dependency has been observed on cell phones, which have now evolved into smart devices. Due to the rapid escalation of these smart devices, users are becoming accustomed to utilizing these services on smartphones and/or wearable devices, on which different applications run to assist and facilitate users in daily life routine activities. Mobility and context-awareness are the core features of pervasive computing. Context-awareness has the capability to identify the current situation and respond accordingly in the environment whenever and wherever needed. However, it is quite challenging to detect and sense the most appropriate contextual information when various interactive devices communicate among themselves. This paper presents semantic knowledge transformation techniques for ontology-driven context-aware formalisms to model heterogeneous systems. We propose theoretical as well as practical approaches to transform semantic knowledge into first-order Horn-clause rule format, which can be used by context-aware multi-agent systems to achieve their desired goals.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_85-Semantic_Knowledge_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BulkSort: System Design and Parallel Hardware Implementation Considerations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101284</link>
        <id>10.14569/IJACSA.2019.0101284</id>
        <doi>10.14569/IJACSA.2019.0101284</doi>
        <lastModDate>2019-12-31T12:06:05.3230000+00:00</lastModDate>
        
        <creator>Soukaina Ihirri</creator>
        
        <creator>Ahmed Errami</creator>
        
        <creator>Mohammed Khaldoun</creator>
        
        <creator>Essaid Sabir</creator>
        
        <subject>Sorting; FPGA; bulk-sort; parallel processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Algorithms are commonly perceived as difficult subjects. Many applications today require complex algorithms; however, researchers look for ways to make them as simple as possible. In highly time-demanding fields, sorting represents one of the foremost issues in data structures for searching and optimization algorithms. In parallel processing, program instructions are divided among multiple processors by breaking problems into modules that can be executed in parallel, to reduce the execution time. In this paper, we propose a novel parallel, reconfigurable and adaptive sorting network for the BulkSort algorithm. Our architecture is based on simple and elementary operations such as comparison and binary shifting. The main strength of the proposed solution is the ability to sort in parallel without memory usage. Experimental results show that our proposed model is promising with respect to the required resources and its ability to perform high-speed sorting. In this study, we take into account the analysis results of the Simulink design to establish the required hardware resources of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_84-BulkSort_System_Design_and_Parallel_Hardware_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Employing Takaful Islamic Banking through State of the Art Blockchain: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101283</link>
        <id>10.14569/IJACSA.2019.0101283</id>
        <doi>10.14569/IJACSA.2019.0101283</doi>
        <lastModDate>2019-12-31T12:06:05.2930000+00:00</lastModDate>
        
        <creator>Mohammad Abdeen</creator>
        
        <creator>Salman Jan</creator>
        
        <creator>Sohail Khan</creator>
        
        <creator>Toqeer Ali</creator>
        
        <subject>Takaful; hyperledger; blockchain; consensus; decentralized network; muzariba and wakalah</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Takaful – an Islamic alternative to conventional insurance – is fast becoming one of the most important constituents of the modern Islamic financial market. The fundamental difference between the two forms of risk mitigation lies in the type of contract selected. Conventional insurance works on the principle of bilateral contracts between the customer (insured) and the insurance provider, where the insured pays a regular premium in return for payment of compensation in case a predefined event occurs. On the other hand, Takaful works on the principle of mutual guarantee, cooperation and indemnity, where the participants in the scheme mutually insure each other. The Takaful providers are mainly responsible for managing, administering and investing the Takaful funds according to Islamic laws. This study provides a decentralized architecture that securely implements a Takaful risk mitigation system according to its principles. All major banking sectors are shifting towards Blockchain technology, as it is currently the only viable solution that offers security, transparency and integrity of resources and ensures trustworthiness among customers. The proposed study employs state-of-the-art Blockchain technology and focuses on providing a Takaful system that strictly follows the underlying Islamic laws for this risk mitigation system. Moreover, the proposed platform records all Takaful transactions over the Blockchain, which brings confidence and transparency to the community involved in the process.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_83-Employing_Takaful_Islamic_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Programmed Artificial Insemination for Cattle Production</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101282</link>
        <id>10.14569/IJACSA.2019.0101282</id>
        <doi>10.14569/IJACSA.2019.0101282</doi>
        <lastModDate>2019-12-31T12:06:05.2600000+00:00</lastModDate>
        
        <creator>Takuya Yoshihara</creator>
        
        <creator>Yunan He</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Kenji Endo</creator>
        
        <creator>Naoki Takenouchi</creator>
        
        <creator>Hideo Matsuda</creator>
        
        <creator>Tadayuki Yamanouchi</creator>
        
        <creator>Junki Egashira</creator>
        
        <creator>Kenichi Yamashita</creator>
        
        <subject>Bayesian network; programmed artificial insemination; cattle production</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Cattle productivity in Japan has been declining even though livestock farmers and breeders have tried to use artificial insemination regularly. The reason behind this declining productivity is the poor evaluation of the applicability of artificial insemination. To address this issue, this research proposes an objective evaluation method to estimate the applicability of programmed Artificial Insemination (pAI). The method estimates the applicability of pAI based on the analysis of various indices from dairy and beef cattle using a Bayesian Network Model (BNM). The estimation of pAI applicability considers 14 and 17 physiological indices for dairy and beef cattle, respectively. These indices include basic information (days after childbirth, parity, etc.), diagnosis of appearance, diagnosis of the genital organs and the veterinarians’ judgments. The overall success rate in estimating applicability is 89.8% for 1051 records of dairy cattle and 95.6% for 1128 records of beef cattle. The proposed method avoids subjective error in estimating the applicability of pAI. In addition, the experiments revealed that the applicability of pAI can be evaluated even when the number of measured indices is small.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_82-Evaluating_Programmed_Artificial_Insemination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UAV Path Planning for Civil Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101281</link>
        <id>10.14569/IJACSA.2019.0101281</id>
        <doi>10.14569/IJACSA.2019.0101281</doi>
        <lastModDate>2019-12-31T12:06:05.1670000+00:00</lastModDate>
        
        <creator>IDALENE Asmaa</creator>
        
        <creator>BOUKHDIR Khalid</creator>
        
        <creator>MEDROMI Hicham</creator>
        
        <subject>Unmanned Aerial Vehicle (UAV); path planning; path re-planning; computer simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>We present a simple and efficient algorithm for solving the path planning problem for civil UAVs operating in a dynamic or incomplete environment. This algorithm searches for a continuous waypoint sequence starting from the initial configuration, visiting all the desired locations and reaching the final position. The proposed algorithm proceeds in two steps: the first produces a sorted location set; the second generates an optimal path for the overall mission. The same algorithm constructs the initial path or re-plans a new one when changes occur in the configuration space. To prove the effectiveness of the proposed algorithm, we provide computer simulations. A comparison of many results shows that this algorithm yields good performance over a wide variety of examples.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_81-UAV_Path_Planning_for_Civil_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things Cyber Attacks Detection using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101280</link>
        <id>10.14569/IJACSA.2019.0101280</id>
        <doi>10.14569/IJACSA.2019.0101280</doi>
        <lastModDate>2019-12-31T12:06:05.1200000+00:00</lastModDate>
        
        <creator>Jadel Alsamiri</creator>
        
        <creator>Khalid Alsubhi</creator>
        
        <subject>Network anomaly detection; machine learning; Internet of Things (IoT); cyberattacks; bot-IoT dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The Internet of Things (IoT) combines hundreds of millions of devices which are capable of interacting with each other with minimum user interaction. IoT is one of the fastest-growing areas in computing; however, the reality is that in the extremely hostile environment of the internet, IoT is vulnerable to numerous types of cyberattacks. To resolve this, practical countermeasures need to be established to secure IoT networks, such as network anomaly detection. Although attacks cannot be wholly avoided, early detection of an attack is crucial for practical defense. Since IoT devices have low storage capacity and low processing power, traditional high-end security solutions to protect an IoT system are not appropriate. Also, IoT devices are now connected without human intervention for longer periods. This implies that intelligent network-based security solutions like machine learning solutions must be developed. Although many studies in recent years have discussed the use of Machine Learning (ML) solutions in attack detection problems, little attention has been given to the detection of attacks specifically in IoT networks. In this study, we aim to contribute to the literature by evaluating various machine learning algorithms that can be used to quickly and effectively detect IoT network attacks. A new dataset, Bot-IoT, is used to evaluate various detection algorithms. In the implementation phase, seven different machine learning algorithms were used, and most of them achieved high performance. New features were extracted from the Bot-IoT dataset during the implementation and compared with studies from the literature, and the new features gave better results.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_80-Internet_of_Things_Cyber_Attacks_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technical Guide for the RASP-FIT Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101279</link>
        <id>10.14569/IJACSA.2019.0101279</id>
        <doi>10.14569/IJACSA.2019.0101279</doi>
        <lastModDate>2019-12-31T12:06:05.1070000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <subject>Code-modifier; fault injection; FPGA designs; fault injection tool; Verilog HDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Fault injection tools are designed to serve various purposes, such as validating the design under test with respect to reliability requirements, finding sensitive/critical locations that require error mitigation, and determining the expected circuit response in the presence of faults. Fault Simulation/Emulation (S/E) applications are involved in Field Programmable Gate Array (FPGA) based design verification and simulation at the Hardware Description Language (HDL) code level. A tool named RASP-FIT has been developed to perform code modification of FPGA designs, testing of such designs, and finding the sensitive areas of designs. This tool works on FPGA designs written in Verilog HDL at various abstraction levels: gate, data-flow and behavioural. This paper presents the technical aspects and the user guide for the proposed tool in detail, including the generation of the standalone application (an executable file of the tool for the Windows operating system) and the installation method.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_79-A_Technical_Guide_for_the_RASP_FIT_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Construction by Immersion in Virtual Reality Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101278</link>
        <id>10.14569/IJACSA.2019.0101278</id>
        <doi>10.14569/IJACSA.2019.0101278</doi>
        <lastModDate>2019-12-31T12:06:05.0900000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <creator>Sofia Alfaro</creator>
        
        <creator>Francisco Fialho</creator>
        
        <subject>Computer assisted learning environments; immersive technologies; virtual reality; full immersion in virtual reality environments; knowledge construction by immersion in virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The objective of this work is to analyze the potential use of Immersive Virtual Reality technologies as a teaching/learning tool to enrich the organization of the learning environments of educational programs. The study and analysis of human cognition is theoretically grounded, also considering the Biology of Cognition and various approaches proposed by theoreticians and researchers of the Education Sciences. In this work, the state of the art of immersive technologies is established, and their contributions to the construction of knowledge by cognitive subjects are analyzed as a means for the development of teaching/learning activities, with the support of emerging immersive technologies. The methodology used is a bibliographic review of the classic printed works relating to the Biology of Cognition, and a search of diverse databases of theses and other works in universities and digital repositories. The main weakness of the research lies in the fact that the search was limited to documents in English, Spanish, and Portuguese. Finally, conclusions and recommendations for future work are established.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_78-Knowledge_Construction_by_Immersion_in_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Architectural Sustainability during Software Evolution using Package-Modularization Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101277</link>
        <id>10.14569/IJACSA.2019.0101277</id>
        <doi>10.14569/IJACSA.2019.0101277</doi>
        <lastModDate>2019-12-31T12:06:05.0730000+00:00</lastModDate>
        
        <creator>Mohsin Shaikh</creator>
        
        <creator>Dilshod Ibarhimov</creator>
        
        <creator>Baqir Zardari</creator>
        
        <subject>Software architecture; software modularity; software quality; packages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Sustainability of software architectures largely depends on cost-effective evolution and modular architecture. Careful modularization, characterizing the proper design of a complex system, is a cognitive and challenging task for ensuring improved sustainability. Moreover, failure to modularize a software system during its evolution phases often results in extra effort towards managing design deterioration and resolving unforeseen inter-dependencies. In this paper, we present an empirical perspective on the package-level modularization metrics proposed by Sarkar, Kak and Rama to characterize modularization quality through packages. In particular, we explore the impact of these design-based modularization metrics on other well-known modularity metrics and software quality metrics. Our experimental examination of open source Java software systems illustrates that package-level modularization metrics significantly correlate with architectural sustainability measures and quality metrics of software systems.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_77-Assessing_Architectural_Sustainability_during_Software_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Developing an Integrated Family Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101276</link>
        <id>10.14569/IJACSA.2019.0101276</id>
        <doi>10.14569/IJACSA.2019.0101276</doi>
        <lastModDate>2019-12-31T12:06:05.0570000+00:00</lastModDate>
        
        <creator>Subhieh El-Salhi</creator>
        
        <creator>Fairouz Farouq</creator>
        
        <creator>Randa Obeidallah</creator>
        
        <creator>Mo’taz Al-Hami</creator>
        
        <subject>Mobile technology; social apps; family mobile application; Augmented Reality (AR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Nowadays, mobile applications are seen as among the most effective, popular and powerful technologies, due to the widespread use of mobile devices. Moreover, the rising power of mobile devices has a great impact on people of all ages, and more specifically on social relationships, including interaction between parents and kids. Therefore, this paper presents a highly integrated Family Mobile Application (FMA) that provides a wide range of services to control, manage, organize and support the different daily tasks of family members effectively. The essential tasks of the FMA are mainly described in terms of facilitating the daily life routine and responsibilities, enhancing the communication between family members (in different aspects) and supporting Augmented Reality (AR), which is directed at the children of the family to support educational goals in particular. Moreover, a website has been established to enrich the functionality of the proposed FMA application. The FMA has been analysed, designed, implemented and evaluated with real-world users of the system. The evaluation was conducted in terms of usability testing that considers satisfaction, simplicity and ease of use. More details of the evaluation are illustrated and presented.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_76-On_Developing_an_Integrated_Family_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vulnerable Road User Detection using YOLO v3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101275</link>
        <id>10.14569/IJACSA.2019.0101275</id>
        <doi>10.14569/IJACSA.2019.0101275</doi>
        <lastModDate>2019-12-31T12:06:05.0430000+00:00</lastModDate>
        
        <creator>Saranya K C</creator>
        
        <creator>Arunkumar Thangavelu</creator>
        
        <subject>YOLO v3; Tsinghua-Daimler cyclist benchmark; cyclist detection; pedestrian detection; IoU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Detection and classification of vulnerable road users (VRUs) is one of the most crucial blocks in vision-based navigation systems used in Advanced Driver Assistance Systems. This paper seeks to evaluate the performance of the object classification algorithm You Only Look Once (YOLO v3) for the detection of a major subclass of VRUs, i.e. cyclists and pedestrians, using the Tsinghua-Daimler dataset. The YOLO v3 algorithm used here requires fewer computational resources and hence promises real-time performance when compared to its predecessors. The model has been trained using the training images in the mentioned benchmark and tested on the corresponding test images. The average IoU for all the ground-truth objects is calculated and the precision-recall graph for different thresholds is plotted.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_75-Vulnerable_Road_User_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed SDN Deployment in Backbone Networks for Low-Delay and High-Reliability Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101274</link>
        <id>10.14569/IJACSA.2019.0101274</id>
        <doi>10.14569/IJACSA.2019.0101274</doi>
        <lastModDate>2019-12-31T12:06:05.0270000+00:00</lastModDate>
        
        <creator>Mohammed J.F Alenazi</creator>
        
        <subject>Software Defined Networking (SDN); Controller Placement Problem (CPP); physical network; graph robustness metrics; reliability; resilience</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Internet applications, such as video streaming, mission-critical, and health applications, require real-time or near real-time data delivery. In this context, Software Defined Networking (SDN) has been introduced to simplify network management by providing a more dynamic and flexible configuration based on centralizing the network intelligence. One of the main challenges in SDN applications consists of selecting the number of deployed SDN controllers, and their locations, towards improving the network performance in terms of low delay and high reliability. Traditional k-center and k-median methods have been fairly successful in reducing propagation latency, but ignore other important network aspects such as reliability. This paper proposes a new approach for controller placement that addresses both network reliability and network delay reduction. The proposed heuristic algorithm focuses on four different robustness functions, viz. algebraic connectivity (AC), network criticality (NC), load centrality (LC), and communicability, and has been applied in four different real-world physical networks for performance evaluation based on degree-, closeness-, and betweenness-based centrality attacks. Experimental results show that the proposed controller selection algorithms based on AC, NC, LC, and communicability achieve high network resilience and low C2C delays, outperforming the latest, widely-used baseline methods, such as the k-median and k-center ones, especially when using the NC method.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_74-Distributed_SDN_Deployment_in_Backbone_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method to Find Image Recovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101273</link>
        <id>10.14569/IJACSA.2019.0101273</id>
        <doi>10.14569/IJACSA.2019.0101273</doi>
        <lastModDate>2019-12-31T12:06:04.9970000+00:00</lastModDate>
        
        <creator>Nouf Saeed Alotaibi</creator>
        
        <subject>Computer vision; image enhancement; digital image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Scattering media imagery is degraded during the physical process of image formation, which shifts contrast and color and turns the overall visibility white. With a computer vision system, visibility can be remarkably restored. Although the medium transmission in distant objects is small, it is vulnerable to noise amplification. Here we propose L0-gradient image recovery, which solves the issue discussed previously. Compared to raw images, the processed and recovered single image improves significantly while noise amplification is discarded. The state-of-the-art studies on dehazing are reviewed in this paper. In addition, L0-gradient-minimization image smoothing is studied in combination with the Koschmieder image formation physical model to solve the dehazing issue, as L0 smoothing approximates better results with a higher false discovery rate (FDR). Recovery using L0-gradient minimization is formalized in a depth chart that reduces noise adaptively to recover estimated structures marginally in spatially varying media delivery. The minimal gradient is non-zero; therefore, noise and blur in nearby objects with low measurement difficulty and impact are effectively removed, raising the transmission approximation and contributing to the enhancement of the recovered image. We experiment with atmospheric, underwater, night-time, and indoor turbid-medium images qualitatively and quantitatively.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_73-A_New_Method_to_Find_Image_Recovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adverse Impacts of Social Networking Sites on Academic Result: Investigation, Cause Identification and Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101272</link>
        <id>10.14569/IJACSA.2019.0101272</id>
        <doi>10.14569/IJACSA.2019.0101272</doi>
        <lastModDate>2019-12-31T12:06:04.9630000+00:00</lastModDate>
        
        <creator>Maruf Ahmed Tamal</creator>
        
        <creator>Maharunnasha Antora</creator>
        
        <creator>Karim Mohammed Rezaul</creator>
        
        <creator>Md. Abdul Aziz</creator>
        
        <creator>Pabel Miah</creator>
        
        <subject>Social networking sites; SNS; social media; SM; addiction; mental health; poor academic outcome; sleep disorder; social media tracker introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Social networking sites (SNS) have become more prevalent over the previous decade. Interactive design and addictive characteristics have made SNS an almost indispensable part of life, particularly among university learners. Previous studies have shown that excessive use of SNS adversely affects learners&#39; academic success as well as mental health. However, there is still a lack of clear evidence of the actual rationale behind these adverse effects. Concurrently, no significant preventive measures have yet been introduced to counter the excessive use of SNS, particularly for students. To bridge this gap, considering the views of 1862 students (male = 1183, female = 659), the current study investigates how and in which way spending time on SNS negatively influences students’ academic performance. Correlation and regression analyses showed that there is a strong negative correlation between students’ time spent in social media (STISM) and their educational outcome. Simultaneously, our investigation indicates that in-class social media use and late-night social media use result in poor educational outcomes for students. Based on the findings of the investigation, an Android-based application framework called SMT (Social Media Tracker) is designed and partially implemented to minimize the engagement between students and SNS.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_72-Adverse_Impacts_of_SNS_on_Academic_Result.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Algorithm to Find the Height of a Text Line and Overcome Overlapped and Broken Line Problem during Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101271</link>
        <id>10.14569/IJACSA.2019.0101271</id>
        <doi>10.14569/IJACSA.2019.0101271</doi>
        <lastModDate>2019-12-31T12:06:04.9500000+00:00</lastModDate>
        
        <creator>Sanjibani Sudha Pattanayak</creator>
        
        <creator>Sateesh Kumar Pradhan</creator>
        
        <creator>Ramesh Chandra Mallik</creator>
        
        <subject>Document image analysis; line segmentation; word segmentation; database creation; printed Odia document</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Line segmentation is a critical phase of Optical Character Recognition (OCR), which separates the individual lines from the image documents. The accuracy rate of the OCR tool is directly proportional to the line segmentation accuracy, followed by the word/character segmentation. In this context, an algorithm named height_based_segmentation is proposed for the text line segmentation of printed Odia documents. The proposed algorithm finds the average height of a text line, which helps to minimize the overlapped text line cases. The algorithm also includes post-processing steps to combine the modifier zone with the base zone. The performance of the algorithm is evaluated against the ground truth and also by comparing it with existing segmentation approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_71-An_Efficient_Algorithm_to_Find_the_Height.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KNN and SVM Classification for Chainsaw Sound Identification in the Forest Areas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101270</link>
        <id>10.14569/IJACSA.2019.0101270</id>
        <doi>10.14569/IJACSA.2019.0101270</doi>
        <lastModDate>2019-12-31T12:06:04.9330000+00:00</lastModDate>
        
        <creator>N’tcho Assoukpou Jean GNAMELE</creator>
        
        <creator>Yelakan Berenger OUATTARA</creator>
        
        <creator>Toka Arsene KOBEA</creator>
        
        <creator>Genevi&#232;ve BAUDOIN</creator>
        
        <creator>Jean-Marc LAHEURTE</creator>
        
        <subject>KNN Algorithm; SVM Algorithm; MFCC; sound recognition; forest monitoring; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>We present in this paper a comparative study of two classifiers, namely SVM (support vector machine) and KNN (K-Nearest Neighbors), which we combine with MFCC (Mel-Frequency Cepstral Coefficients) to enable the detection of chainsaw sounds in a forest environment. Optimized calculation of the relevant characteristics of the sounds recorded in the forest and a judicious choice of the key parameters of the classifiers allow us to obtain a true positive rate of 95.63% for the SVM-LOG-KERNEL and 94.02% for the KNN. The SVM-LOG-KERNEL classifier offers a better classification result and a processing time 30 times faster than KNN.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_70-KNN_and_SVM_Classification_for_Chainsaw_Sound.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection and Tracking using Deep Learning and Artificial Intelligence for Video Surveillance Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101269</link>
        <id>10.14569/IJACSA.2019.0101269</id>
        <doi>10.14569/IJACSA.2019.0101269</doi>
        <lastModDate>2019-12-31T12:06:04.9030000+00:00</lastModDate>
        
        <creator>Mohana </creator>
        
        <creator>HV Ravish Aradhya</creator>
        
        <subject>Artificial Intelligence (AI); Computer Vision (CV); Convolutional Neural Network (CNN); You Only Look Once (YOLOv3); Urban Vehicle Dataset; Common Objects in Context (COCO); Object detection; object tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Data is the new oil in current technological society. The impact of efficient data has changed benchmarks of performance in terms of speed and accuracy. This enhancement is visible because the processing of data is performed by two industry buzzwords, Computer Vision (CV) and Artificial Intelligence (AI). These two technologies have empowered major tasks such as object detection and tracking for traffic vigilance systems. As the number of features in an image increases, the demand for efficient algorithms to excavate hidden features also increases. A Convolutional Neural Network (CNN) model is designed for an urban vehicle dataset for single-object detection, and YOLOv3 is used for multiple-object detection on the KITTI and COCO datasets. Model performance is analyzed, evaluated and tabulated using performance metrics such as True Positive (TP), True Negative (TN), False Positive (FP), False Negative (FN), Accuracy, Precision, confusion matrix and mean Average Precision (mAP). Objects are tracked across the frames using YOLOv3 and Simple Online and Realtime Tracking (SORT) on traffic surveillance video. This paper upholds the uniqueness of state-of-the-art networks like DarkNet. Efficient detection and tracking on the urban vehicle dataset is witnessed. The algorithms give real-time, accurate, precise identifications suitable for real-time traffic applications.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_69-Object_Detection_and_Tracking_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification Performance of Violence Content by Deep Neural Network with Monarch Butterfly Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101268</link>
        <id>10.14569/IJACSA.2019.0101268</id>
        <doi>10.14569/IJACSA.2019.0101268</doi>
        <lastModDate>2019-12-31T12:06:04.8700000+00:00</lastModDate>
        
        <creator>Ashikin Ali</creator>
        
        <creator>Norhalina Senan</creator>
        
        <creator>Iwan Tri Riyadi Yanto</creator>
        
        <creator>Saima Anwar Lashari</creator>
        
        <subject>Deep learning; monarch butterfly; violence video; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Violence classification is perplexing due to the visible content dissimilarities among the positive instances displayed in media. Besides, the ever-increasing demand on the internet, with various types of videos and genres, makes a proper search of these videos difficult given the humongous volume of content. The task involves aiding users to choose movies or web videos suitable for an audience, in terms of classifying violent content. Nevertheless, this is a cumbersome job since the definition of violence is broad and subjective. Detecting such nuances from videos without human supervision is technically demanding and can lead to conceptual problems. Generally, violence classification is performed based on text, audio, and visual features; to be precise, it is more relevant to use audio and visual ones. From this perspective, the deep neural network is the current build-up in machine learning approaches to solving classification problems. In this research, audio and visual features are learned by a deep neural network for more specific violent content classification. This study explores the implementation of a deep neural network with monarch butterfly optimization (DNNMBO) to effectively perform the classification of violent content in web videos. Hence, the experiments are conducted using YouTube videos from the VSD2014 dataset, made publicly available by the Technicolor group. The results are compared with similar modified approaches such as DNNPSO and the original DNN. The outcome shows a 94% violence classification rate by DNNMBO.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_68-Classification_Performance_of_Violence_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HCAHF: A New Family of CA-based Hash Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101267</link>
        <id>10.14569/IJACSA.2019.0101267</id>
        <doi>10.14569/IJACSA.2019.0101267</doi>
        <lastModDate>2019-12-31T12:06:04.8400000+00:00</lastModDate>
        
        <creator>Anas Sadak</creator>
        
        <creator>Fatima Ezzahra Ziani</creator>
        
        <creator>Bouchra Echandouri</creator>
        
        <creator>Charifa Hanin</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Hash function; Boolean function; cellular automata; cryptography; information security; avalanche; NIST statistical suite; DIEHARDER battery of tests; generic attacks; dedicated attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Cryptographic hash functions (CHF) represent a core cryptographic primitive. They have applications in digital signature and message authentication protocols. Their main building blocks are Boolean functions. Those functions provide pseudo-randomness and sensitivity to the input. They also help prevent and lower the risk of attacks targeted at CHF. Cellular automata (CA) are a class of Boolean functions that exhibit good cryptographic properties and display chaotic behavior. In this article, a new hash function based on CA is proposed. A description of the algorithm and the security measures to increase the robustness of the construction are presented. A security analysis against generic and dedicated attacks is included. It shows that the hashing algorithm has good security features and meets the security requirements of a good hashing scheme. The results of the tests and the properties of the CA used demonstrate the good statistical and cryptographic properties of the hash function.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_67-HCAHF_A_New_Family_of_CA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge based Soil Classification Towards Relevant Crop Production</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101266</link>
        <id>10.14569/IJACSA.2019.0101266</id>
        <doi>10.14569/IJACSA.2019.0101266</doi>
        <lastModDate>2019-12-31T12:06:04.8400000+00:00</lastModDate>
        
        <creator>Waleej Haider</creator>
        
        <creator>M. Nouman Durrani</creator>
        
        <creator>Aqeel ur Rehman</creator>
        
        <creator>Sadiq ur Rehman</creator>
        
        <subject>Knowledge creation; agriculture; soil classification; random forest; knowledge distribution; crop relevancy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Pakistan’s economy is strongly associated with the agriculture sector. For a country where agriculture contributes 25% of GDP, there is a need to modernize agriculture by adopting contemporary approaches. Unfortunately, it has become a common trend among farmers to cultivate crops that are used in food items or that can easily be sold in the market, without using knowledge about the suitability or relevancy of crops to the soil environment. Consequently, the farmers face financial losses. Many researchers have proposed soil classification methods for various soil-related studies, but these have contributed very little towards guiding farmers to select the most suitable crops for cultivation on a particular soil type. Without the use of technology and computer-assisted approaches, the process of classifying soil environments could not help the farmers in taking decisions regarding appropriate crop selection in their respective fields. In this paper, an effective knowledge-oriented approach for soil classification in Pakistan is presented using crowd-sourced data obtained from 1557 users regarding 103 agricultural zones. The data were also obtained from AIMS (Govt. of Punjab) and the Ministry of National Food Security &amp; Research. In this work, a random forest classifier has been used for processing and predicting the complex tiered relationship among soil types belonging to agricultural zones and the major suitable crops for improving yield production. The proposed model helps in computing the degree of relevancy of a crop to an agricultural region, which helps farmers select suitable crops for their cultivated lands.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_66-Knowledge_based_Soil_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Cluster Head Selection using Hybrid Squirrel Harmony Search Algorithm in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101265</link>
        <id>10.14569/IJACSA.2019.0101265</id>
        <doi>10.14569/IJACSA.2019.0101265</doi>
        <lastModDate>2019-12-31T12:06:04.8070000+00:00</lastModDate>
        
        <creator>N Lavanya</creator>
        
        <creator>T. Shankar</creator>
        
        <subject>Cluster head selection; clustering; harmony search algorithm; squirrel search algorithm; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The Wireless Sensor Network (WSN) has found an extensive variety of applications, including battlefield surveillance, monitoring of environment and traffic, and modern agriculture, due to its effectiveness in communication. Clustering is one of the significant mechanisms for enhancing the lifespan of the network in WSN. This clustering scheme is exploited to improve the sensor network’s lifespan by decreasing the network’s energy consumption and increasing the stability of the network. Existing cluster head selection algorithms suffer from inconsistent tradeoffs between exploration and exploitation and from global search constraints. Therefore, in this research, a hybridization of two popular optimization algorithms, namely the Harmony Search Algorithm (HSA) and the Squirrel Search Algorithm (SSA), is executed for the optimal selection of cluster heads in WSN with respect to distance and energy. The proposed Hybrid Squirrel Harmony Search Algorithm (HSHSA) is found to be energy efficient when compared with the first node death (FND) and last node death (LND) of existing Cluster Head Selection (CHS) techniques. In addition, the proposed HSHSA enhances the overall throughput and residual energy of the wireless sensor network by 31.02% and 85.69%, respectively, compared with the existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_65-Energy_Efficient_Cluster_Head_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Label Classification using an Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101264</link>
        <id>10.14569/IJACSA.2019.0101264</id>
        <doi>10.14569/IJACSA.2019.0101264</doi>
        <lastModDate>2019-12-31T12:06:04.7930000+00:00</lastModDate>
        
        <creator>Yaya TRAORE</creator>
        
        <creator>Sadouanouan MALO</creator>
        
        <creator>Didier BASSOLE</creator>
        
        <creator>Abdoulaye SERE</creator>
        
        <subject>Multi-label classification (ML); Binary Relevance (BR); ontology; categorization; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>During the last few years, the problem of multi-label classification (ML) has been studied in several domains, such as text categorization. Multi-label classification is a challenging task because each instance can be assigned to multiple classes simultaneously. This paper studies the problem of multi-label classification in the context of web page categorization, where the categories are defined in an ontology. Among the weaknesses of multi-label classification methods is the number of positive and negative examples needed to build the training dataset for a specific label; a further challenge comes from the huge number of label combinations, which grows exponentially. In this paper, we present an ontology-based multi-label classification which exploits the dependence between labels. In addition, our approach uses the ontology to take into account relationships between labels and to guide the selection of positive and negative examples in the learning phase. In the prediction phase, if a label is not predicted, the ontology is used to prune the set of descendant labels. The results of the experimental evaluation show the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_64-Multi_Label_Classification_using_an_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Patients Identification in Emergency Cases using RFID based RADIO Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101263</link>
        <id>10.14569/IJACSA.2019.0101263</id>
        <doi>10.14569/IJACSA.2019.0101263</doi>
        <lastModDate>2019-12-31T12:06:04.7600000+00:00</lastModDate>
        
        <creator>Eman Galaleldin Ahmed Khalil</creator>
        
        <creator>Asim Seedahmed Ali Osman</creator>
        
        <subject>Medical records; radio frequency identification; magnetic card reader; patient; emergency; electronic health record; laboratory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Medical records play an important role in the process of providing health care in hospitals and in various types of medical institutions. They maintain the information of patients, including basic information, medical information, history of operations, medication, etc. These medical records are produced for the purpose of identifying a patient. In this paper, a novel method for the identification of patients using Radio Frequency Identification (RFID) technology is proposed. This paper explains the concept of the electronic medical record and how RFID-based technology can be used to create an electronic medical card for patients. The proposed methodology also aims to identify patients quickly in emergency cases using a magnetic card reader device, which provides detailed medical information from the patient's file. It also helps the doctors present in the ambulance with the patient. The proposed methodology is important in emergency cases where patients cannot provide their information to the hospital because their identity and medical history are unknown.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_63-A_Novel_Method_for_Patients_Identification_in_Emergency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Joint Demographic Features Extraction for Gender, Age and Race Classification based on CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101262</link>
        <id>10.14569/IJACSA.2019.0101262</id>
        <doi>10.14569/IJACSA.2019.0101262</doi>
        <lastModDate>2019-12-31T12:06:04.7470000+00:00</lastModDate>
        
        <creator>Zaheer Abbas</creator>
        
        <creator>Sajid Ali</creator>
        
        <creator>Muhammad Ashad Baloch</creator>
        
        <creator>Hamida Ilyas</creator>
        
        <creator>Moneeb Ahmad</creator>
        
        <creator>Mubasher H. Malik</creator>
        
        <creator>Noreen Javaid</creator>
        
        <creator>Tanvir Fatima Naik Bukht</creator>
        
        <subject>Appearance features; age; gender; wrinkle analysis; face angle; classification; race; LBP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Automatic verification and identification of a face from a facial image, achieving good accuracy on huge training and testing datasets using facial attributes, is still challenging. Hence, proposing efficient and accurate facial image identification and classification based on facial attributes is an important task. Prediction from a human face image is highly complex. The proposed research work for automatic gender, age and race classification is based on facial features and a Convolutional Neural Network (CNN). The proposed study uses the physical appearance of the human face to predict age, gender and race. The proposed methodology consists of three subsystems: gender, ageing and race. Different features are therefore extracted for every subsystem. These features are extracted using primary and secondary features, face angle, wrinkle analysis, LBP and WLD. The accuracy of classification is based on these features, which the CNN uses for classification. The proposed study has been evaluated and tested on the large databases MORPH II and UTKFace. The performance of the proposed system is compared with state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_62-Joint_Demographic_Features_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embracing Localization Inaccuracy with a Single Beacon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101261</link>
        <id>10.14569/IJACSA.2019.0101261</id>
        <doi>10.14569/IJACSA.2019.0101261</doi>
        <lastModDate>2019-12-31T12:06:04.7130000+00:00</lastModDate>
        
        <creator>Anisur Rahman</creator>
        
        <creator>Vallipuram Muthukkumarasamy</creator>
        
        <subject>Underwater localization; linearization; mobile beacon; Cayley-Menger determinant; bearing; underwater wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper illustrates a new mechanism to determine the coordinates of sensors using a beacon node and determines the definitive error associated with it. In UWSNs (underwater wireless sensor networks), the actual and precise location of the deployed sensors that accumulate data is vital, because accumulated data without location information has little significance and limited value in the domain of location-based services. In UWSNs, trilateration or multilateration is exploited to assess the location of the deployed hosts; however, having three or more reference nodes to localize a deployed sensor is not pragmatic at all. On the other hand, in the conventional method, non-linear equations are usually solved where the degrees of freedom are too uncertain to lead to a unique solution. In this paper, the associated localization inaccuracies are shown for a unique configuration where a single beacon is used to determine the coordinates of three deployed sensors simultaneously. The Cayley-Menger determinant is used for the configuration, and the system of nonlinear distance equations has been linearized for better accuracy and convergence. Simulations with Euclidean distances validate the proposed model and reflect the acquired accuracies in the sensors’ coordinates and bearings. Moreover, an experiment has been conducted with ultrasonic sensors in a terrestrial environment to validate the proposed model; the associated inaccuracies were found to originate from distance measurement errors, while considering Euclidean distances proves the model to be precise and accurate.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_61-Embracing_Localization_Inaccuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart Disease Prediction based on External Factors: A Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101260</link>
        <id>10.14569/IJACSA.2019.0101260</id>
        <doi>10.14569/IJACSA.2019.0101260</doi>
        <lastModDate>2019-12-31T12:06:04.7000000+00:00</lastModDate>
        
        <creator>Maruf Ahmed Tamal</creator>
        
        <creator>Md Saiful Islam</creator>
        
        <creator>Md Jisan Ahmmed</creator>
        
        <creator>Md. Abdul Aziz</creator>
        
        <creator>Pabel Miah</creator>
        
        <creator>Karim Mohammed Rezaul</creator>
        
        <subject>Heart disease; Risk prediction; Decision Tree (DT); Support Vector Machine (SVM); Naive Bayes (NB); Random Forest (RF); Logistic Regression (LR); Quadratic Discriminant Analysis (QDA); Machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Technology has immensely changed the world over the last decade. As a consequence, people's lives are undergoing multiple changes that have both positive and negative effects on health. Less physical activity and extensive virtual involvement are pushing people into various health-related issues, and heart disease is one of them. It has currently gained a great deal of attention among various life-threatening diseases. Heart disease can be detected or diagnosed by different medical tests that consider various internal factors. However, this type of approach is not only time-consuming but also expensive. At the same time, there are very few studies on heart disease prediction based on external factors. To bridge this gap, we propose a heart disease prediction model based on a machine learning approach that enables predicting heart disease with 95% accuracy. To acquire the best result, 6 distinct machine learning classifiers (Decision Tree, Random Forest, Naive Bayes, Support Vector Machine, Quadratic Discriminant Analysis, and Logistic Regression) were used. At the same time, sklearn.ensemble.ExtraTreesClassifier was used to extract relevant features to improve predictive accuracy and control over-fitting. Findings reveal that the Support Vector Machine (SVM) outperforms the others with the greatest accuracy (95%).</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_60-Heart_Disease_Prediction_based_on_External_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outlier Detection using Graphical and Nongraphical Functional Methods in Hydrology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101259</link>
        <id>10.14569/IJACSA.2019.0101259</id>
        <doi>10.14569/IJACSA.2019.0101259</doi>
        <lastModDate>2019-12-31T12:06:04.6670000+00:00</lastModDate>
        
        <creator>Insia Hussain</creator>
        
        <subject>Rainbow plot; bivariate bagplot; functional bagplot; bivariate boxplot; functional boxplot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Graphical methods are intended to be introduced in hydrology for visualizing functional data and detecting outliers as smooth curves. These proposed methods comprise a rainbow plot for visualizing large amounts of data, and bivariate and functional bagplots and boxplots for detecting outliers graphically. The bagplot and boxplot are constructed using the first two score series of robust principal components, following Tukey’s depth and highest-density regions. These proposed methods produce not only a graphical display of hydrological data but also the detected outliers. These outliers are compared with outliers obtained from several existing nongraphical methods of outlier detection in the functional context, so that the superiority of the proposed graphical methods for identifying outliers can be established. Hence, the present paper aims to demonstrate that graphical methods for outlier detection are authentic and reliable approaches compared to nongraphical methods of outlier detection.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_59-Outlier_Detection_using_Graphical_and_Nongraphical_Functional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Trends in Surveillance through Sousveillance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101258</link>
        <id>10.14569/IJACSA.2019.0101258</id>
        <doi>10.14569/IJACSA.2019.0101258</doi>
        <lastModDate>2019-12-31T12:06:04.6370000+00:00</lastModDate>
        
        <creator>Siraj Munir</creator>
        
        <creator>Syed Imran Jami</creator>
        
        <subject>Semantics; querying; profiling; IoT; surveillance; sousveillance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Collective intelligence is an immense research area with wide application across disciplines, such as social, legal, and computational fields. Research trends in surveillance find their place in this area, generating curated datasets helpful in answering complex queries. Sousveillance is a term recently coined by researchers and has been discussed in the literature. However, our findings suggest that the integration of surveillance with sousveillance datasets has not been given much importance in a collective fashion. In this paper, we introduce an effective model of collective intelligence by integrating surveillance with sousveillance in a campus environment. For the testbed, networking devices are used to generate sousveillance data, providing validation and cleaning to enable reliability and trust in the target object.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_58-Research_Trends_in_Surveillance_through_Sousveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition on Low-Resolution Image using Multi Resolution Convolution Neural Network and Antialiasing Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101257</link>
        <id>10.14569/IJACSA.2019.0101257</id>
        <doi>10.14569/IJACSA.2019.0101257</doi>
        <lastModDate>2019-12-31T12:06:04.6200000+00:00</lastModDate>
        
        <creator>Mario Imandito</creator>
        
        <creator>Suharjito</creator>
        
        <subject>Face recognition; low resolution; convolutional neural network; antialiasing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Video surveillance applications usually capture face images that have a low resolution (12x12) due to distance, lighting, and shooting angles. Most face recognition algorithms have poor accuracy and poor identification performance on low-resolution faces. Consequently, identifying a low-resolution query face against high-resolution (64x64) images proves to be a huge challenge. The aim of this research is to develop a new model for face recognition on low-resolution images in order to increase recognition accuracy. A Multi-Resolution Convolutional Neural Network (MRCNN) is proposed to address the problem. First, antialiasing is used in the preprocessing phase; then MRCNN is used to extract image features. LFW (Labeled Faces in the Wild) is used to evaluate the model. The result of this study is increased accuracy of face recognition on low-resolution images compared to the previous MRCNN model.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_57-Face_Recognition_on_Low_Resolution_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic TRMA Protocol for Yielding Secure Environment for Authentication and Privacy Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101256</link>
        <id>10.14569/IJACSA.2019.0101256</id>
        <doi>10.14569/IJACSA.2019.0101256</doi>
        <lastModDate>2019-12-31T12:06:04.5900000+00:00</lastModDate>
        
        <creator>Anusha R</creator>
        
        <creator>Veena Devi Shastrimath V</creator>
        
        <subject>Mutual Authentication; Modified Pad-Gen; Radiofrequency Identification (RFID); Privacy; Security; Tag-Reader Mutual Authentication (TRMA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>RFID is a system that uses radio waves to scrutinize and capture data pertaining to a tag attached to an object. In spite of RFID&#39;s wide application in industry, it poses severe security issues: RFID systems are highly susceptible to future attacks that invade the privacy and data of the system. To protect the RFID system against such attacks, the Pad-generation (Pad-Gen) function is used. This paper presents a mutual authentication scheme, Tag-Reader Mutual Authentication (TRMA), that is implemented using two approaches, the XOR operation and the MOD operation, by modifying the Pad-Gen function. The proposed framework is implemented on a low-cost Artix-7 FPGA XC7A100T-3CSG324, and its hardware verification is done with the ChipScope Pro tool.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_56-A_Systematic_TRMA_Protocol_for_Yielding_Secure_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Method for Speeding up Large-Scale Data Transfer Process to Database: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101255</link>
        <id>10.14569/IJACSA.2019.0101255</id>
        <doi>10.14569/IJACSA.2019.0101255</doi>
        <lastModDate>2019-12-31T12:06:04.5570000+00:00</lastModDate>
        
        <creator>Ginanjar Wiro Sasmito</creator>
        
        <creator>M. Nishom</creator>
        
        <subject>Big data; speeds up; data processing; data transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Among the characteristics of Big Data complexity, comprising volume, velocity, variety, and veracity (the 4Vs), this paper focuses on volume, to ensure better performance of data extract, transform, and load processes in the context of data migration from one server to another, necessitated by an update to the population data of Tegal City. An approach often used by programmers in the Department of Population and Civil Registration of Tegal City is to transfer all available data (in a specific file format) to the database server regardless of file size. This is prone to errors that may disrupt the data transfer process, such as timeouts, oversized data packages, or lengthy execution times due to the large data size. This research compares several approaches to extract, transform, and load large data into a new server database using the command line and the native PHP programming language (object-oriented and procedural style) with different file format targets, namely SQL, XML, and CSV. The performance analysis that we conducted showed that the large-scale data transfer method using the LOAD DATA INFILE statement with a comma-separated values (CSV) data source is the fastest and most effective, and is therefore recommended.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_55-An_Efficient_Method_for_Speeding_up_Large_Scale_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobile Agent Team Works based on Load-Balancing Middleware for Distributed Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101254</link>
        <id>10.14569/IJACSA.2019.0101254</id>
        <doi>10.14569/IJACSA.2019.0101254</doi>
        <lastModDate>2019-12-31T12:06:04.5430000+00:00</lastModDate>
        
        <creator>Fat&#233;ma Zahra Benchara</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <subject>Load balancing; middleware; parallel and distributed systems; parallel and distributed computing models; high performance computing; mobile agents; distributed computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The aim of this paper is to present a load-balancing middleware for parallel and distributed systems. The great challenge is to balance tasks between heterogeneous distributed nodes in distributed systems based on parallel and distributed computing models, in a way that ensures the high-performance computing (HPC) of these models. Accordingly, the proposed middleware is based on a mobile agent team work that implements an efficient method with two strategies: (i) a load-balancing strategy that determines node task assignment based on node performance, and (ii) a rebalancing strategy that detects unbalanced nodes and enables task migration. The paper focuses on the proposed middleware and its cooperative mobile agent team work model strategies to dynamically balance the nodes and scale up distributed computing systems. Finally, experimental results that highlight the performance and efficiency of the proposed middleware are presented.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_54-A_Mobile_Agent_Team_Works_based_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GPLDA: A Generalized Poisson Latent Dirichlet Topic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101253</link>
        <id>10.14569/IJACSA.2019.0101253</id>
        <doi>10.14569/IJACSA.2019.0101253</doi>
        <lastModDate>2019-12-31T12:06:04.5270000+00:00</lastModDate>
        
        <creator>Ibrahim Bakari Bala</creator>
        
        <creator>Mohd Zainuri Saringat</creator>
        
        <subject>Bag-of-word; generalized Poisson distribution; topic model; latent Dirichlet allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The earliest modification of Latent Dirichlet Allocation (LDA) in terms of word or document attributes relaxed its exchangeability assumption via the Bag-of-Words (BoW) matrix. Several authors have proposed modifications of the original LDA, focusing on models that assume the current topic depends on the words from the previous topic. Most of the earlier work ignored the document length distribution, since it is assumed to fizzle out at the modelling stage. Thus, in this paper, the Poisson document length distribution of the LDA model is replaced with the Generalized Poisson (GP) distribution, which has the strength of capturing complex structures. The main strengths of GP are in capturing overdispersed (variance larger than mean) and underdispersed (variance smaller than mean) count data. The Poisson distribution used by LDA strongly relies on the assumption that the mean and variance of document lengths are equal. This assumption is often unrealistic with most real-life text data, where the variance of document length may be greater than or less than the mean. Approximate estimates of the GPLDA model parameters were obtained using the Newton-Raphson approximation technique on the log-likelihood. Performance and comparative analysis of GPLDA with LDA using accuracy and F1 showed improved results.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_53-GPLDA_A_Generalized_Poisson_Latent_Dirichlet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of using Social Network on Academic Performance by using Contextual and Localized Data Analysis of Facebook Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101252</link>
        <id>10.14569/IJACSA.2019.0101252</id>
        <doi>10.14569/IJACSA.2019.0101252</doi>
        <lastModDate>2019-12-31T12:06:04.4970000+00:00</lastModDate>
        
        <creator>Muhammad Aqeel</creator>
        
        <creator>Mukarram Pasha</creator>
        
        <creator>Muhammad Saeed</creator>
        
        <creator>Muhammad Kamran Nishat</creator>
        
        <creator>Maryam Feroz</creator>
        
        <creator>Farhan Ahmed Siddiqui</creator>
        
        <creator>Nasir Touheed</creator>
        
        <subject>Social networks; data analysis; data mining; NLP; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Social networks, due to their intrinsically addictive nature, have become an integral part of our civilization and play an important role in our daily interactions. Facebook, being the largest global online network, is used as the primary platform for carrying out our study and hypothesis testing. We built a web crawler for data extraction and used that data for our analysis. The primary goal of this study is to identify patterns among members of a Facebook group using a contextual and localised approach. We also intend to validate some hypotheses using a data-driven approach, such as comparing a student’s social participation and activeness with actual class participation and its impact on his/her grades. We have also used user interactions in Facebook groups to identify close relationships. The polarity of content in a group’s comments and posts says a lot about that group and is also discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_52-The_Impact_of_using_Social_Network_on_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Performance of Synchronous Generator in Steam Power Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101251</link>
        <id>10.14569/IJACSA.2019.0101251</id>
        <doi>10.14569/IJACSA.2019.0101251</doi>
        <lastModDate>2019-12-31T12:06:04.4500000+00:00</lastModDate>
        
        <creator>Ramadoni Syahputra</creator>
        
        <creator>Andi Wahyu Nugroho</creator>
        
        <creator>Kunnu Purwanto</creator>
        
        <creator>Faaris Mujaahid</creator>
        
        <subject>Synchronous generator; steam power plant; dynamic performance; efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper presents the dynamic performance of a synchronous generator in a steam power plant. Steam power plants are the most popular power plants to date; as of the end of 2018, they accounted for 48.43% of the total installed power plant capacity in Indonesia. The largest steam power plant in Indonesia is in Paiton, Probolinggo, East Java, which is the object of this research. In operation, the generator in this plant experiences dynamics as the electricity load changes. This study discusses the analysis of the performance of synchronous generators under changes in electrical load. The analysis includes the voltage, active power, reactive power, power factor, and generator efficiency variables. The results showed that generator performance remained good despite serving a very dynamic electricity load.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_51-Dynamic_Performance_of_Synchronous_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Quality Evaluation for Electrical Installation of Hospital Building</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101250</link>
        <id>10.14569/IJACSA.2019.0101250</id>
        <doi>10.14569/IJACSA.2019.0101250</doi>
        <lastModDate>2019-12-31T12:06:04.4330000+00:00</lastModDate>
        
        <creator>Agus Jamal</creator>
        
        <creator>Sekarlita Gusfat Putri</creator>
        
        <creator>Anna Nur Nazilah Chamim</creator>
        
        <creator>Ramadoni Syahputra</creator>
        
        <subject>Power quality; power capacitor; hospital building; electrical installation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper presents improvements to the power quality of hospital building installations using power capacitors. Power quality in the distribution network is an important issue that must be considered in the electric power system. One important variable in the power quality of a distribution system is the power factor, which plays an essential role in determining the efficiency of a distribution network. A good power factor makes the distribution system very efficient in its use of electricity. The hospital building installation is a very important component of the distribution network to analyze. Nowadays, hospitals have a lot of computer-based medical equipment, which contains many electronic components that significantly affect the power factor of the system. In this study, a power quality analysis has been carried out on the building installation of one of the largest hospitals in Yogyakarta, Indonesia. In the initial condition, the power losses at the facility were quite high. Installing power capacitors in these installations can improve the power factor and ultimately improve the performance of the electrical installation system in the hospital building.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_50-Power_Quality_Evaluation_for_Electrical_Installation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accurate Speech Emotion Recognition by using Brain-Inspired Decision-Making Spiking Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101249</link>
        <id>10.14569/IJACSA.2019.0101249</id>
        <doi>10.14569/IJACSA.2019.0101249</doi>
        <lastModDate>2019-12-31T12:06:04.4030000+00:00</lastModDate>
        
        <creator>Madhu Jain</creator>
        
        <creator>Ms. Shilpi Shukla</creator>
        
        <subject>Brain-inspired decision-making spiking neural network (BDM-SNN); deep belief network; social ski-driver (SSD) optimization; emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Emotion recognition is an important extension of speech recognition, and improving it is of considerable value. Feature selection is an indispensable stage in the development of various schemes for classifying emotions in speech. The interaction among features extracted from the same audio source has rarely been considered, which may yield redundant features and increase computational costs. To resolve these defects, a deep learning-based feature extraction technique is used. A notable recent advance in speech recognition is the combination of machine learning techniques with deep structures for feature extraction. In this paper, the speech signal obtained from the SAVEE database is used as the input to a deep belief network. To pre-train the network, a layer-wise greedy feature extraction strategy is implemented, and back-propagation on systematic samples is applied for fine-tuning. A brain-inspired decision-making spiking neural network (SNN) is used to recognize different emotions; training deep SNNs remains a challenge, but it improves the results. To tune the parameters of the SNN, a social ski-driver (SSD) evolutionary optimization algorithm is used. The results of the SNN-SSD algorithm are compared with artificial neural networks and long short-term memory on different emotions to validate the classification.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_49-Accurate_Speech_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualising Image Data through Image Retrieval Concept using a Hybrid Technique: Songket Motif’s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101248</link>
        <id>10.14569/IJACSA.2019.0101248</id>
        <doi>10.14569/IJACSA.2019.0101248</doi>
        <lastModDate>2019-12-31T12:06:04.3700000+00:00</lastModDate>
        
        <creator>Nadiah Yusof</creator>
        
        <creator>Amirah Ismail</creator>
        
        <creator>Nazatul Aini Abd Majid</creator>
        
        <subject>Multimedia; image; content-based image retrieval (CBIR); image retrieval; near-duplicate; principal component analysis (PCA); geometric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Massive datasets are strictly complex for Content-Based Image Retrieval (CBIR), because present CBIR strategies face difficulties in extracting features from images. Moreover, technological constraints are encountered in analysing and extracting image arrays when the system customizes primitive geometric structures known as polygonal approximation structures. Hence, this study applies the Principal Component Analysis (PCA) technique for image feature extraction, which is based on the image representation matrix and improves similarity detection. The PCA approach needs to be enhanced owing to its limited feature extraction on songket motif images. Therefore, this study proposes a new hybrid model that integrates PCA with geometric techniques for image feature extraction to increase recall and precision. This paper employs a qualitative experimental design model that involves three phases of activities: first, the analysis and design phase; second, the development phase; and last, the testing and evaluation phase. This paper focuses on the design and development phases. The outcome of the empirical phase is followed by designing the algorithm and model based on the results of the literature review. This study found that the hybrid of the principal component analysis model and the geometric technique helps reduce the problems faced by the basic engineering technique model, namely the constraints in analysing and extracting image features to customize the geometric primitive structure.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_48-Visualising_Image_Data_through_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Identification of Student Learning Communities using Centrality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101247</link>
        <id>10.14569/IJACSA.2019.0101247</id>
        <doi>10.14569/IJACSA.2019.0101247</doi>
        <lastModDate>2019-12-31T12:06:04.3570000+00:00</lastModDate>
        
        <creator>Intissar Salhi</creator>
        
        <creator>Hanaa El Fazazi</creator>
        
        <creator>Mohammed Qbadou</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <subject>Student’s learning communities; complex network; learner activity; community detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The move of universities towards the “digital university” has been under way for some years, and digital tools are now widely used to ensure a good quality of education. Universities therefore use large-scale learning management systems to manage the interaction between learners and teachers. Teachers can provide online training and educational materials for students following their classes and courses, monitor their participation, and evaluate their performance. Students can use interactive features such as discussion threads, videoconferences, and discussion forums. These online tools make it possible to create new social networks or to connect online social interactions, allowing us to understand the structure of this complex network and extract useful information. In this article, we report our research on the detection of student learning communities based on learner activity. We found that it is possible to group students into communities through their messages and response structures using standard community detection algorithms, and that their behaviours can be strongly correlated with those of their closest peers belonging to the same community.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_47-Towards_the_Identification_of_Student_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of People with Parkinson&#39;s Suspicions through Voice Signal Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101246</link>
        <id>10.14569/IJACSA.2019.0101246</id>
        <doi>10.14569/IJACSA.2019.0101246</doi>
        <lastModDate>2019-12-31T12:06:04.3230000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Witman Alvarado-Diaz</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Voice signal processing; Parkinson Disease (PD); Fast Fourier Transform; speech signal segmentation; audio treatment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Parkinson&#39;s disease is considered a disease with a highly variable prognosis. It originates from a multisystemic neurodegenerative process affecting the central nervous system, which is responsible for motor control of the body; untreated patients also suffer chronic joint pain and states of depression. The disease currently has no cure, so patients&#39; families are advised to focus on providing quality of life. The age of incidence starts at 40 years; according to the INCN (Instituto Nacional de Ciencias Neurol&#243;gicas), there are 3,000 cases of Parkinson&#39;s in Peru annually. This research paper proposes a MATLAB algorithm capable of extracting the characteristics of the voice spectrum through voice signal processing, to provide early detection so that patients can receive treatment to appease and slow down Parkinson&#39;s disease. The processing consists of submitting the audio to the Fast Fourier Transform (FFT), identifying the signal bodies, separating them into frequency periods, and finally finding the average and maximum values. It was identified that the major differences occur at the lower frequencies; tests with patients suspected of having Parkinson&#39;s showed the same differences, in the frequency periods [9 Hz – 13 Hz], [20 Hz – 30 Hz] and [40 Hz – 54 Hz]. Notably, in the 20 Hz to 30 Hz period, amplitude values below 3.5 indicate suspicion of Parkinson&#39;s disease.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_46-Identification_of_People_with_Parkinsons_Suspicions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Framework for Potential Candidate Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101245</link>
        <id>10.14569/IJACSA.2019.0101245</id>
        <doi>10.14569/IJACSA.2019.0101245</doi>
        <lastModDate>2019-12-31T12:06:04.3070000+00:00</lastModDate>
        
        <creator>Farzana Yasmin</creator>
        
        <creator>Mohammad Imtiaz Nur</creator>
        
        <creator>Mohammad Shamsul Arefin</creator>
        
        <subject>Information extraction; named entity recognition; machine learning; skyline queries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Recruitment is the process of hiring the right person for the right job. In the current competitive world, recruiting the right person from thousands of applicants is tedious work. In addition, analyzing these huge numbers of applications manually might result in biased and erroneous output, which may eventually cause problems for the companies. If these pools of resumes can be analyzed automatically and presented to employers in a systematic way for choosing the appropriate person for their company, it may help both the applicants and the employers. To address this need, we have developed a framework that takes the candidates’ resumes, extracts information from them by recognizing named entities using machine learning, and scores the applicants according to predefined rules and employer requirements. Furthermore, employers can select the best-suited candidates for their jobs from these scores by using skyline filtering.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_45-Developing_a_Framework_for_Potential_Candidate_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Control for Distributed Smart Street Light Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101244</link>
        <id>10.14569/IJACSA.2019.0101244</id>
        <doi>10.14569/IJACSA.2019.0101244</doi>
        <lastModDate>2019-12-31T12:06:04.2770000+00:00</lastModDate>
        
        <creator>Pei Zhen Lee</creator>
        
        <creator>Sei Ping Lau</creator>
        
        <creator>Chong Eng Tan</creator>
        
        <subject>Traffic prediction; adaptive street lighting; smart cities; energy efficient; network congestion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>With the advent of smart cities embedded with smart technologies, namely smart streetlights, in urban development, the quality of living for citizens has vastly improved. TALiSMaN is one of the most promising smart streetlight schemes to date; however, it has limitations that lead to network congestion and packet drops during peak road traffic periods. Traffic prediction is vital in network management, especially for real-time decision-making and latency-sensitive applications. With that in mind, this paper analyses three real-time short-term traffic prediction models, specifically the simple moving average, exponential moving average and weighted moving average, to be embedded into TALiSMaN with the aim of easing network congestion. Additionally, the paper proposes a traffic categorisation and packet propagation control mechanism that uses historical road traffic data to protect the network from overload. In this paper, we evaluate the performance of these models with TALiSMaN in a simulated environment and compare them with TALiSMaN without a traffic prediction model. Overall, the weighted moving average showed promising results in reducing packet drops while maintaining the usefulness of the streetlights compared to the original TALiSMaN scheme, especially during rush hour.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_44-Predictive_Control_for_Distributed_Smart_Street_Light_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proof of Credibility: A Blockchain Approach for Detecting and Blocking Fake News in Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101243</link>
        <id>10.14569/IJACSA.2019.0101243</id>
        <doi>10.14569/IJACSA.2019.0101243</doi>
        <lastModDate>2019-12-31T12:06:04.2600000+00:00</lastModDate>
        
        <creator>Mohamed Torky</creator>
        
        <creator>Emad Nabil</creator>
        
        <creator>Wael Said</creator>
        
        <subject>Blockchain technology; social networks; fake news detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Detecting and preventing rumors and misleading information still represents a big challenge for social network developers and researchers. Since propagating newsworthy information is typical behavior for most users of social media, verifying information credibility and reliability is a vital security requirement for social network platforms. Due to its immutability, security, tamper-proof and P2P design, Blockchain is a powerful technology that can provide a solution to this challenge. This paper introduces a novel blockchain approach called Proof of Credibility (PoC) for detecting fake news and blocking its propagation in social networks. The functionality of the PoC protocol has been simulated on two datasets of newsworthy tweets collected from different news sources on Twitter. The results show satisfying performance and efficiency of the proposed approach in detecting rumors and blocking their propagation.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_43-Proof_of_Credibility_A_Blockchain_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Edge Network Data Processing based on User Requirements using Modify MapReduce Algorithm and Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101242</link>
        <id>10.14569/IJACSA.2019.0101242</id>
        <doi>10.14569/IJACSA.2019.0101242</doi>
        <lastModDate>2019-12-31T12:06:04.2130000+00:00</lastModDate>
        
        <creator>Methaq Kadhum</creator>
        
        <creator>Saher Manaseer</creator>
        
        <creator>Abdel Latif Abu Dalhoum</creator>
        
        <subject>Edge computing; cloud computing; data processing; data partitioning; MapReduce; machine learning; feature selection; user requirement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Edge computing extends cloud computing to enhance network performance, in terms of latency and network traffic, for many applications such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities. This extension aims at reducing data communication and transmission through the network. However, data processing is the main challenge facing edge computing. In this paper, we propose a data processing framework based on both edge computing and cloud computing, performed by partitioning (classification and restructuring) the data schema at the edge computing level based on feature selection. These features are detected using the MapReduce algorithm and a proposed machine learning subsystem built on user requirements. Our approach mainly relies on the assumption that the data sent by edge devices can be used in two forms: as control data (i.e., real-time analytics) and as knowledge extraction data (i.e., historical analytics). We evaluated the proposed framework based on the amount of transmitted and stored data and on data retrieval time; the results show that the amount of transmitted data was optimized and the data retrieval time was greatly decreased. Our evaluation was applied experimentally and theoretically on a hypothetical system in a kidney disease center.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_42-Cloud_Edge_Network_Data_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Multi-hop Wireless Sensor Networks using Probability Propagation Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101241</link>
        <id>10.14569/IJACSA.2019.0101241</id>
        <doi>10.14569/IJACSA.2019.0101241</doi>
        <lastModDate>2019-12-31T12:06:04.1830000+00:00</lastModDate>
        
        <creator>Komgrit Jaksukam</creator>
        
        <creator>Teerawat Tongloy</creator>
        
        <creator>Santad Chuwongin</creator>
        
        <creator>Siridech Boonsang</creator>
        
        <subject>Probabilistic modelling; wireless sensor network; multi-hop networks; data collection scheme; Monte Carlo simulation; probability propagation models; probabilistic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper presents a formula for estimating the probability of collecting a given amount of data from a propagation model and multi-hop wireless sensor networks (WSNs), based on Monte Carlo simulation with a cluster-tree topology. The probabilistic model is based on an analytical model of the IEEE 802.15.4 MAC protocol. The probability of successful node transmission is extended to the probabilities of successful collection at the cluster, P(X = k), and at the sink node, P(X ≥ k). A numerical example is provided for comparing the probabilities. We propose a model to calculate the probability from the ratio of the collection rate to the total number of nodes, and therefore provide the likelihood of complete data collection. Finally, the results from our analysis provide an estimation of the probability of achieving successful transmission in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_41-Analysis_of_Multi_Hop_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flooding and Oil Spill Disaster Relief using Sentinel of Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101240</link>
        <id>10.14569/IJACSA.2019.0101240</id>
        <doi>10.14569/IJACSA.2019.0101240</doi>
        <lastModDate>2019-12-31T12:06:04.1670000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Sentinel; disaster relief; satellite remote sensing; flooding; oil spill; synthetic aperture radar; optical sensor; vegetation index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Flooding and oil spill disaster relief using Sentinel remote sensing satellite data is conducted. Kyushu, Japan had severe heavy rain from 26 August to 30 August 2019. An optical sensor and a Synthetic Aperture Radar (SAR) onboard remote sensing satellites are used for disaster relief. NDVI and SWIR data derived from the Sentinel data are also used. Merits and demerits of the optical sensor and the SAR instrument are compared from the disaster relief point of view.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_40-Flooding_and_Oil_Spill_Disaster_Relief_using_Sentinel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering based Privacy Preserving of Big Data using Fuzzification and Anonymization Operation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101239</link>
        <id>10.14569/IJACSA.2019.0101239</id>
        <doi>10.14569/IJACSA.2019.0101239</doi>
        <lastModDate>2019-12-31T12:06:04.1530000+00:00</lastModDate>
        
        <creator>Saira Khan</creator>
        
        <creator>Khalid Iqbal</creator>
        
        <creator>Safi Faizullah</creator>
        
        <creator>Muhammad Fahad</creator>
        
        <creator>Jawad Ali</creator>
        
        <creator>Waqas Ahmed</creator>
        
        <subject>Big data; clustering; privacy preservation; reconstruction; perturbation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Big Data is used by data miners for analysis purposes and may contain sensitive information, which raises privacy challenges for researchers. Existing privacy preserving methods use algorithms that limit data reconstruction while securing the sensitive data. This paper presents a clustering-based privacy preservation probabilistic model of big data that secures sensitive information while attaining minimum perturbation and maximum privacy. In our model, sensitive information is secured by identifying the sensitive data within data clusters and modifying or generalizing it. The resulting dataset is analysed to calculate the accuracy level of our model in terms of hidden data and data lost as a result of reconstruction. Extensive experiments are carried out to demonstrate the results of our proposed model. Clustering-based privacy preservation of individual data in big data, with minimum perturbation and successful reconstruction, highlights the significance of our model, in addition to the use of standard performance evaluation measures.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_39-Clustering_based_Privacy_Preserving_of_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modification of Manual Raindrops Type Observatory Ombrometer with Ultrasonic Sensor HC-SR04</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101238</link>
        <id>10.14569/IJACSA.2019.0101238</id>
        <doi>10.14569/IJACSA.2019.0101238</doi>
        <lastModDate>2019-12-31T12:06:04.1070000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Jessy Rahmayanti</creator>
        
        <creator>Son Ali Akbar</creator>
        
        <creator>Subhas Mukhopadhyay</creator>
        
        <creator>Ismail Rakip Karas</creator>
        
        <subject>Observatory Ombrometer; rainfall; database; ultrasonic sensor; IoT; rain gauge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Water, in whatever form it comes, is important for the life of all living things. Indonesia is a tropical equatorial region with quite high variation in rainfall, and the regularity of the distribution of rainfall is one of the most important aspects for community activities. Rainfall intensity can be measured manually using the Observatory Ombrometer tool: samples are taken at 7.00 a.m. every day using a measuring cup to determine the height of the collected water. However, this type is prone to error at high rainfall intensity, since the samples are drained only every 24 hours and much water is therefore wasted. To solve the problem, a modified rain gauge was built: an Observatory Ombrometer with the HC-SR04 ultrasonic sensor. The height of the water in the container is sent to a server, where the data is stored in a database every ten minutes to reduce the risk of evaporation; this also minimizes the error in measuring rainfall intensity. The results were compared to those of BMKG (the Meteorology, Climatology, and Geophysics Agency), and the correlation value of the measurement ratio reached 0.9739, or 97.39%.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_38-Modification_of_Manual_Raindrops_Type_Observatory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Development of Collaborative Learning in Virtual Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101237</link>
        <id>10.14569/IJACSA.2019.0101237</id>
        <doi>10.14569/IJACSA.2019.0101237</doi>
        <lastModDate>2019-12-31T12:06:04.0730000+00:00</lastModDate>
        
        <creator>Benjamin Maraza-Quispe</creator>
        
        <creator>Nicol&#225;s Caytuiro-Silva</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Melina Alejandro-Oviedo</creator>
        
        <creator>Walter Choquehuanca-Quispe</creator>
        
        <creator>Walter Fernandez-Gambarini</creator>
        
        <creator>Luis Cuadros-Paz</creator>
        
        <creator>Betsy Cisneros-Chavez</creator>
        
        <subject>Wikis; forums; chat; learning; collaborative</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The objective of this research is to evaluate strategies such as Wikis, Forums and Chat for developing collaborative learning in higher education students. A collaborative experience was developed with 25 students in an asynchronous e-learning environment. The activities consisted of forum discussions, chat and project development in a wiki environment. The research method includes a quantitative analysis in which the forum contributions were rated by applying a rubric. The use of didactic strategies such as Wikis, Forums and Chat in the learning sessions promotes collaborative learning; the main factors for this to happen are the degree to which students appropriate these technologies and how well teachers master their use. It is not possible to affirm the superiority of one tool over another, because each has its own characteristics and could be used for different purposes; beyond having complementary functions, the tools must be organized and combined with each other to develop collaborative learning.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_37-Towards_the_Development_of_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cardiovascular Disease Diagnosis: A Machine Learning Interpretation Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101236</link>
        <id>10.14569/IJACSA.2019.0101236</id>
        <doi>10.14569/IJACSA.2019.0101236</doi>
        <lastModDate>2019-12-31T12:06:04.0570000+00:00</lastModDate>
        
        <creator>Hossam Meshref</creator>
        
        <subject>Heart diseases; machine learning; artificial neural networks; support vector machines; Na&#239;ve Bayes; decision trees; random forests; model interpretation; feature ranking cost index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Research on heart diseases has always been at the center of attention of the World Health Organization. More than 17.9 million people died from them in 2016, representing 31% of all deaths globally. Machine learning techniques have been used extensively in this area to help physicians develop a firm opinion about the condition of their heart disease patients. Some existing machine learning models still suffer from limited prediction ability, and the chosen analysis approaches are not suitable. It was also noticed that existing approaches pay more attention to building high-accuracy models while overlooking the ability to interpret and understand the recommendations of these models. In this research, several renowned machine learning techniques, namely Artificial Neural Networks, Support Vector Machines, Na&#239;ve Bayes, Decision Trees and Random Forests, are investigated to help in building, understanding and interpreting different heart disease diagnosis models. The Artificial Neural Networks model showed the best accuracy, 84.25%, compared to the other models. In addition, it was found that although some models have higher accuracies than others, it may be safer to choose a lower-accuracy model as the final design of this study. This sacrifice was essential to ensure that a more transparent and trusted model is used in the heart disease diagnosis process. This transparency validation was conducted using a newly suggested metric: the Feature Ranking Cost index. The use of this index showed promising results by making clear which machine learning model balances accuracy and transparency. It is expected that following the detailed analyses and using these research findings will be useful to the machine learning community, as they could be the basis for post-hoc interpretation of prediction models on different clinical data sets.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_36-Cardiovascular_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Carpooling Application based on k-NN Algorithm: A Case Study in Hashemite University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101235</link>
        <id>10.14569/IJACSA.2019.0101235</id>
        <doi>10.14569/IJACSA.2019.0101235</doi>
        <lastModDate>2019-12-31T12:06:04.0100000+00:00</lastModDate>
        
        <creator>Subhieh El Salhi</creator>
        
        <creator>Fairouz Farouq</creator>
        
        <creator>Randa Obeidallah</creator>
        
        <creator>Yousef Kilani</creator>
        
        <creator>Esraa Al Shdaifat</creator>
        
        <subject>Mobile Application; Carpooling; Data mining; Classification; k-NN algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The current revolution of mobile technology in different aspects of the community directs researchers and scientists to employ this technology to build practical mobile solutions for daily life problems. One of the major challenges in developing countries is the public transportation system. Public transportation is an essential requirement for the welfare of modern society and has a critical impact on people’s productivity, and thus on the entire economic development process. Therefore, different solutions have been investigated. “Carpooling” is one such initiative, based on the use of a single shared car by a group of people heading to the same location on a daily basis. Carpooling can also be considered an efficient alternative that overcomes the limitations of the conventional transportation system with easier, quicker and more environmentally friendly car journeys. This paper presents an intelligent carpooling mobile app for commuting students of the Hashemite University. The proposed solution is founded on a data mining technique, more specifically the k-Nearest-Neighbour (k-NN) technique.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_35-Real_Time_Carpooling_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Methodological Framework for Digital Transformation Strategy Building (IMFDS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101234</link>
        <id>10.14569/IJACSA.2019.0101234</id>
        <doi>10.14569/IJACSA.2019.0101234</doi>
        <lastModDate>2019-12-31T12:06:03.9970000+00:00</lastModDate>
        
        <creator>Zineb Korachi</creator>
        
        <creator>Bouchaib Bounabat</creator>
        
        <subject>Digital transformation strategy; digital transformation assessment; IT governance; IT framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>There is still conflict among the definitions, frameworks, and formulations of the digital transformation strategy in the literature. Despite extensive research on digital transformation strategies and digital transformation assessment, there is no clear, global meta-model describing the general concepts and guidelines of digital transformation to frame and drive a successful transformation. Several digital transformation approaches have been presented in the literature, but they focus on specific cases and specific concepts. The present paper describes digital transformation and its relationship with IT governance, and shows how IT governance can lead the digital transformation. A literature review was conducted on the best-known IT frameworks (COBIT, ITIL, CMMI) and their structure, in order to provide a standard framework known to practitioners. This paper proposes an Integrated Methodological Framework for Digital Transformation Strategy Building, called IMFDS, based on IT governance elements (Business Strategic Planning, IT Strategic Planning, IT Organizational Structure, IT Reporting, IT Budgeting, IT Investment Decisions, Steering Committee, IT Prioritization Process and IT Reaction Capacity). It provides specific guidelines to help organizations formulate, implement and monitor their transformation strategies. IMFDS is articulated across 9 blocks (steps) and 34 processes.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_34-Integrated_Methodological_Framework_for_Digital_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Students’ Motivation and Learning Strategies to Redesign Massive Open Online Courses based on Persuasive System Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101233</link>
        <id>10.14569/IJACSA.2019.0101233</id>
        <doi>10.14569/IJACSA.2019.0101233</doi>
        <lastModDate>2019-12-31T12:06:03.9630000+00:00</lastModDate>
        
        <creator>Mohamad Hidir Mhd Salim</creator>
        
        <creator>Nazlena Mohamad Ali</creator>
        
        <creator>Mohamad Taha Ijab</creator>
        
        <subject>Persuasive; MOOCs; motivation; learning strategies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Electronic learning, or e-learning, is currently flourishing in areas such as secondary and tertiary education, lifelong learning programs, and adult education. In recent years, massive open online courses (MOOCs) have received profound attention within the field of e-learning. Persuasive principles can be implemented to enhance system design and motivate students to engage with the system. The aim of this study is to identify the motivation and learning strategies that affect academic performance in using MOOCs among tertiary education students. Forty students enrolled in the Ethnic Relations course participated in the online survey. The Motivated Strategies for Learning Questionnaire (MSLQ) was the instrument used in this study, while Automatic Linear Modelling (ALM) and Multiple Linear Regression (MLR) were used in the analysis. The results show that there is a correlation between students’ motivation, learning strategies, and their academic performance. Resource management, cognitive and metacognitive strategies, and the value component were found to be the main scales that influence motivation and learning strategies toward excellent academic performance. The results can be used to fulfil the first phase of designing a persuasive system based on the Persuasive System Design (PSD) model, which is to understand the issues behind a system.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_33-Understanding_Students_Motivation_and_Learning_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Problems Solving of Cell Subscribers based on Expert Systems Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101232</link>
        <id>10.14569/IJACSA.2019.0101232</id>
        <doi>10.14569/IJACSA.2019.0101232</doi>
        <lastModDate>2019-12-31T12:06:03.9330000+00:00</lastModDate>
        
        <creator>Ahmad AbdulQadir AlRababah</creator>
        
        <subject>Neural network expert system; telecommunications companies; system analysis; cellular network; structured tasks; reliable information; human capabilities; telecommunications services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>With the growing demand for telecommunications services, the number of calls to telecommunications companies concerning the use of services, the setup and maintenance of equipment, and the resolution of problems arising in the course of using services is also growing. From the point of view of system analysis, a problem is a mismatch between the existing and the required (target) state of a system for a given state of the environment at a given moment in time. Based on this definition, we consider the problem of a cellular network subscriber to be a mismatch between the existing and the required state of the cellular network for the given state of the environment at that moment in time. The state of the cellular network is characterized by the functioning of all devices and the offered range of services. The short time available for analyzing problem situations and making decisions, the large amount of information characterizing the current situation, and the difficulty of solving poorly formalized and poorly structured tasks in the absence of complete and reliable information about the state of the cellular communication network and the functioning of its elements together exceed human capabilities to solve these problems effectively. In this regard, the development and implementation of a precedent-based neural network expert system for solving the problems of cellular communication network subscribers is an urgent scientific and technical task.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_32-Problems_Solving_of_Cell_Subscribers_based_on_Expert_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indonesian Words Error Detection System using Nazief Adriani Stemmer Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101231</link>
        <id>10.14569/IJACSA.2019.0101231</id>
        <doi>10.14569/IJACSA.2019.0101231</doi>
        <lastModDate>2019-12-31T12:06:03.9170000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Abdul Fadlil</creator>
        
        <creator>Muhamad Rosidin</creator>
        
        <subject>Indonesian; word error; stemming; Nazief and Adriani stemmer algorithm; detection system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Stemming in each language has a different process, determined by the structure of the language. Stemming is mostly used as a step in the processing of words and phrases. Many stemming algorithms are available, some of which are used for word processing. One application of stemming is to detect word errors in Indonesian. In this study, the researchers created an Indonesian word error detection system using the Nazief and Adriani algorithm. In the trials conducted, the system accepts text input from the user and then preprocesses the text. There are three stages of preprocessing in this study, namely tokenization, case folding, and filtering. After the preprocessing stages are finished, the system passes each word to the stemming process. The results of the stemming are compared with the base words available in the database; if a word does not match, it is highlighted and considered an error word. The first finding is that the Nazief and Adriani algorithm can detect word errors with 100% accuracy. The second finding is that the algorithm also detects non-word errors, with a detection accuracy of 97.464%.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_31-Indonesian_Words_Error_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study between Lean Six Sigma and Lean-Agile for Quality Software Requirement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101230</link>
        <id>10.14569/IJACSA.2019.0101230</id>
        <doi>10.14569/IJACSA.2019.0101230</doi>
        <lastModDate>2019-12-31T12:06:03.9030000+00:00</lastModDate>
        
        <creator>Narishah Mohamed Salleh</creator>
        
        <creator>Puteri NE Nohuddin</creator>
        
        <subject>Lean Six Sigma; Lean Agile; DMAIC; SCRUM; Requirement Elicitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Requirement elicitation is one of the most challenging phases in the entire software development life cycle. It is the process of extracting and analyzing requirements from customers in order to understand thoroughly what system needs to be built. Despite all the advances in methodologies and practice approaches, extracting and establishing the right requirements is still part of the research debate. The objective of this paper is to compare the characteristics of two hybrid development approaches: Lean Six Sigma and Lean Agile. Most existing comparative studies compare approaches within their own body of knowledge, such as Lean vs. Six Sigma, Define-Measure-Analyze-Improve-Control vs. Design-For-Six-Sigma, or Lean vs. Six Sigma vs. Lean Six Sigma. In the software industry, comparative studies have focused on Lean vs. Agile, Agile vs. Waterfall, or Lean vs. Kanban vs. Agile, comparing project size, process cycle time, and sequential versus iterative processes. The remainder of the study explores the differences and similarities in principles and practices. The study contributes significantly by helping business analysts systematically address the solutions and actions needed to ensure continuous improvement in producing quality software requirements.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_30-Comparative_Study_between_Lean_Six_Sigma.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Layered Security Model for Learning Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101229</link>
        <id>10.14569/IJACSA.2019.0101229</id>
        <doi>10.14569/IJACSA.2019.0101229</doi>
        <lastModDate>2019-12-31T12:06:03.8570000+00:00</lastModDate>
        
        <creator>Momeen Khan</creator>
        
        <creator>Tallat Naz</creator>
        
        <creator>Mohammad Awad Hamad Medani</creator>
        
        <subject>Multi-layered security model; designing a security model for learning management system; learning management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>A learning management system is a web-based software application used for the documentation, administration, tracking, reporting, and delivery of training programs and educational courses. It is an efficient and effective way to give valuable information to students in a short time. With the evolution of e-learning, the learning management system has been widely adopted in the education sector as well as in the corporate market. Thus, it has become a valued target for attackers, who focus their attacks on LMS platforms. Most of the popular learning management systems available nowadays do not pay enough attention to security mechanisms, which gives intruders the opportunity to gain unauthorized access by exploiting security gaps and breaching the system. The result is information leakage, unwanted data deletion or modification, and compromised data integrity. The aim of this research paper is to highlight these security concerns and to provide a solution that can make a learning management system secure from potential threats and attacks. In this paper, a complete multi-layered security model is proposed. The implementation of the proposed model will provide a very secure environment for any learning management system.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_29-A_Multi_Layered_Security_Model_for_Learning_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>5G Enabled Technologies for Smart Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101228</link>
        <id>10.14569/IJACSA.2019.0101228</id>
        <doi>10.14569/IJACSA.2019.0101228</doi>
        <lastModDate>2019-12-31T12:06:03.8230000+00:00</lastModDate>
        
        <creator>Delali Kwasi Dake</creator>
        
        <creator>Ben Adjei Ofosu</creator>
        
        <subject>5G Networks; smart education; smart campus; machine learning; artificial intelligence; big data; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>5G technology use cases depict the prospects of the 5G network model to revolutionize industry, and education is no exception. The 5G model in general is made up of three main blocks: Enhanced Mobile Broadband, Massive Machine Type Communication, and Ultra Reliable and Low Latency Communication. Within these blocks are the services 5G offers to users. In this paper, we focus on educational users as beneficiaries of 5G technologies. Modern-day educational institutions can benefit from the deployment of 5G-enabled services adapted to this sector. We propose frameworks relating 5G and its disruptive technologies to advancing tools that will propel the idea of a smart educational system. This paper hence provides a comprehensive discussion of the 5G technologies that will facilitate new teaching and learning trends in the educational environment.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_28-5G_Enabled_Technologies_for_Smart_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Arabic Text Recognition using Principal Component Analysis and Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101227</link>
        <id>10.14569/IJACSA.2019.0101227</id>
        <doi>10.14569/IJACSA.2019.0101227</doi>
        <lastModDate>2019-12-31T12:06:03.7930000+00:00</lastModDate>
        
        <creator>Faisal Al-Saqqar</creator>
        
        <creator>Atallah. M AL-Shatnawi</creator>
        
        <creator>Mofleh Al-Diabat</creator>
        
        <creator>Mesbah Aloun</creator>
        
        <subject>Handwritten Arabic text; holistic recognition; principal component analysis; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>In this paper, an offline holistic handwritten Arabic text recognition system based on Principal Component Analysis (PCA) and Support Vector Machine (SVM) classifiers is proposed. The proposed system consists of three primary stages: preliminary processing, feature extraction using PCA, and classification using the polynomial, linear, and Gaussian SVM classifiers. In the proposed system, the text skeleton is first extracted and the text images are normalized to a uniform size before the global features of the Arabic words are extracted using PCA. The recognition performance of the proposed system was evaluated on version 2 of the IFN/ENIT database of handwritten Arabic text using the polynomial, linear, and Gaussian SVM classifiers. The classification results of the proposed system were compared with the results produced by a benchmark TRS that depends on the Discrete Cosine Transform (DCT) method, using numerous normalization sizes of Arabic text images. The experimental results support the effectiveness of the proposed system in holistic recognition of handwritten Arabic text.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_27-Handwritten_Arabic_Text_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Cloud Security Risk Management based on the Business Objectives of Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101226</link>
        <id>10.14569/IJACSA.2019.0101226</id>
        <doi>10.14569/IJACSA.2019.0101226</doi>
        <lastModDate>2019-12-31T12:06:03.7470000+00:00</lastModDate>
        
        <creator>Ahmed E Youssef</creator>
        
        <subject>Information security; data privacy; cloud security risks; risk management; business objectives; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Security is considered one of the top-ranked risks of Cloud Computing (CC) due to the outsourcing of sensitive data to a third party. In addition, the complexity of the cloud model results in a large number of heterogeneous security controls that must be consistently managed. Hence, no matter how strongly the cloud model is secured, organizations continue to suffer from a lack of trust in CC and remain uncertain about its security risk consequences. Traditional risk management frameworks do not consider the impact of CC security risks on the business objectives of organizations. In this paper, we propose a novel Cloud Security Risk Management Framework (CSRMF) that helps organizations adopting CC identify, analyze, evaluate, and mitigate security risks on their cloud platforms. Unlike traditional risk management frameworks, CSRMF is driven by the business objectives of the organizations. It allows any organization adopting CC to be aware of cloud security risks and to align its low-level management decisions with high-level business objectives. In essence, it is designed to map the impacts of cloud-specific security risks onto the business objectives of a given organization. Consequently, organizations are able to conduct a cost-value analysis regarding the adoption of CC technology and gain an adequate level of confidence in cloud technology. On the other hand, Cloud Service Providers (CSPs) are able to improve productivity and profitability by managing cloud-related risks. The proposed framework has been validated and evaluated through a use-case scenario.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_26-A_Framework_for_Cloud_Security_Risk_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Global Threshold based on Two Dimension Otsu for Block Size Decision in Intra Prediction of H.264/AVC Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101225</link>
        <id>10.14569/IJACSA.2019.0101225</id>
        <doi>10.14569/IJACSA.2019.0101225</doi>
        <lastModDate>2019-12-31T12:06:03.7300000+00:00</lastModDate>
        
        <creator>Sawsan Morkos Gharghory</creator>
        
        <subject>H.264/AVC coding; intra prediction; block size decision; Otsu two dimensions method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Advanced Video Coding (H.264/AVC) has proved its ability to find the tradeoff between the compressed bit rate and the visual quality of video compared to other traditional coding methods. One of the most time-consuming encoder stages is intra prediction, in which different block sizes are exhaustively examined to select the block size suited to the best block mode decision. In this paper, an efficient approach is suggested to adaptively select the best block size for intra prediction in order to achieve high compression efficiency. The proposed approach exploits the idea of quad-tree decomposition for block partitioning based on a predefined threshold value. An optimal global threshold value based on the two-dimension Otsu technique is suggested for the block division decision in this work. The proposed technique is carried out on different sets of video resolutions with different quantization parameters using Matlab software. The proposed approach is compared with the reference JM18.6 video coding in terms of bit rate (BR), time saving, and peak signal-to-noise ratio (PSNR). A tangible acceleration of the running time is accomplished, besides improvements in both visual quality and bit rate for some QCIF and CIF video resolutions. The simulation results demonstrate time savings averaging 42% to 68% with CIF and QCIF videos. Concerning visual quality in terms of Bjontegaard Delta parameters, the PSNR improved by 0.2 to 1.6, while the BR was reduced by 0.79 to 15.3 for some videos of QCIF, CIF, and 720p resolutions. In addition, the suggested approach achieves minor improvement with some high-resolution videos while showing slight degradation with others.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_25-Optimal_Global_Threshold_based_on_Two_Dimension_Otsu.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Fake Images on Social Media using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101224</link>
        <id>10.14569/IJACSA.2019.0101224</id>
        <doi>10.14569/IJACSA.2019.0101224</doi>
        <lastModDate>2019-12-31T12:06:03.6830000+00:00</lastModDate>
        
        <creator>Njood Mohammed AlShariah</creator>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>Convolution Neural Network (CNN); Image forgery; Classification; Alexnet; Rectified Linear Unit (ReLU); SoftMax function; Features extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>In this technological era, social media plays a major role in people’s daily lives. Most people frequently share text, images, and videos on social media (e.g. Twitter, Snapchat, Facebook, and Instagram). Images are one of the most common types of media shared among users on social media, so there is a need to monitor the images posted there. It has become easy for individuals and small groups to fabricate images and disseminate them widely in a very short time, which threatens the credibility of the news and public confidence in the means of social communication. This research proposes an approach to extract image content, classify it, verify the authenticity of digital images, and uncover manipulation. Instagram is one of the most important websites and mobile image-sharing applications on social media. It allows users to take photos, add digital photographic filters, and upload pictures. There is much unwanted content in Instagram&#39;s posts, such as threats and forged images, which may cause problems for society and national security. This research aims to build a model that can be used to classify Instagram content (images) to detect threats and forged images. The model was built using deep learning algorithms, namely a Convolutional Neural Network (CNN), the Alexnet network, and transfer learning using Alexnet. The results showed that the proposed Alexnet network offers more accurate detection of fake images than the other techniques, with 97% accuracy. The results of this research will be helpful in monitoring and tracking images shared on social media to detect unusual content and forged images, and in protecting social media from electronic attacks and threats.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_24-Detecting_Fake_Images_on_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enrichment Ontology with Updated user Data for Accurate Semantic Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101223</link>
        <id>10.14569/IJACSA.2019.0101223</id>
        <doi>10.14569/IJACSA.2019.0101223</doi>
        <lastModDate>2019-12-31T12:06:03.6530000+00:00</lastModDate>
        
        <creator>Haytham Al-Feel</creator>
        
        <creator>Hanaa Ghareib Hendi</creator>
        
        <creator>Heba Elbeh</creator>
        
        <subject>Ontology; Semantic Web; Semantic Annotation; RSS News</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Annotation is considered one of the main applications of the semantic web. The idea behind annotation is to add metadata to existing information, which enables machines to deal with data that carry meaning and can be read by them. Semantic annotation is one of the techniques used for the semantic enrichment of web content; it facilitates writing comments and evaluating previously annotated resources, which can lead to better search results. Our framework aims to enrich an ontology by embedding user data directly into the ontology in order to obtain complete and accurate data.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_23-Enrichment_Ontology_with_Updated_user_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scientific Text Sentiment Analysis using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101222</link>
        <id>10.14569/IJACSA.2019.0101222</id>
        <doi>10.14569/IJACSA.2019.0101222</doi>
        <lastModDate>2019-12-31T12:06:03.6370000+00:00</lastModDate>
        
        <creator>Hassan Raza</creator>
        
        <creator>M. Faizan</creator>
        
        <creator>Ahsan Hamza</creator>
        
        <creator>Ahmed Mushtaq</creator>
        
        <creator>Naeem Akhtar</creator>
        
        <subject>Sentimental analysis; scientific citations; machine learning; scientific literature; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Over time, textual information on the World Wide Web (WWW) has increased exponentially, leading to potential research in the fields of machine learning (ML) and natural language processing (NLP). Sentiment analysis of scientific-domain articles is a very trendy and interesting topic nowadays. The main purpose of this research is to help researchers identify quality research papers through sentiment analysis. In this research, sentiment analysis of scientific articles using citation sentences is carried out on an existing annotated corpus. This corpus consists of 8736 citation sentences. Noise was removed from the data using different data normalization rules in order to clean the corpus. To perform classification on this data set, we developed a system in which six different machine learning algorithms, namely Na&#239;ve Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Random Forest (RF), are implemented. The accuracy of the system is then evaluated using different evaluation metrics, e.g. F-score and accuracy score. To improve the system’s accuracy, additional feature selection techniques, such as lemmatization, n-grams, tokenization, and stop-word removal, are applied; our system provided significant performance gains in every case compared to the base system. Our method achieved up to about 9% improved results compared to the base system.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_22-Scientific_Text_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Shadow Controllers based Moving Target Defense Framework for Control Plane Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101221</link>
        <id>10.14569/IJACSA.2019.0101221</id>
        <doi>10.14569/IJACSA.2019.0101221</doi>
        <lastModDate>2019-12-31T12:06:03.6070000+00:00</lastModDate>
        
        <creator>Muhammad Faraz Hyder</creator>
        
        <creator>Muhammad Ali Ismail</creator>
        
        <subject>Control plane security; moving target defense; shadow controllers; software defined networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Moving Target Defense (MTD) has drawn substantial attention from the research community in the recent past for designing secure networks. MTD significantly reduces the asymmetric advantage of attackers by constantly changing the attack surface. In this paper, a Software Defined Networking (SDN) based MTD framework, SMTSC (SDN based MTD framework using Shadow Controllers), is proposed. Although previous work in SDN-based MTD targets data plane security, we exploit MTD for the protection of the control plane of SDN. The proposed solution uses the concept of shadow controllers to produce the dynamism needed to secure the control plane of an SDN environment, and to throttle reconnaissance attacks targeting controllers. The advantages of our approach are manifold. First, it exploits the mechanism of MTD to provide security in the control plane. Second, the multi-controller approach provides higher availability in the SDN network. Another critical gain is the lower computational overhead of SMTSC. Mininet and the ONOS controller are used to implement the proposed framework. The effectiveness and overheads of the framework are evaluated in terms of attacker’s effort, defender cost, and the complexity introduced into the network. The results demonstrate promising trends for the protection of the control plane of an SDN environment.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_21-Distributed_Shadow_Controllers_based_Moving_Target.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Different Modulation Formats for Extended Reach NG-PON2 using RSOA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101220</link>
        <id>10.14569/IJACSA.2019.0101220</id>
        <doi>10.14569/IJACSA.2019.0101220</doi>
        <lastModDate>2019-12-31T12:06:03.5570000+00:00</lastModDate>
        
        <creator>S Rajalakshmi</creator>
        
        <creator>T. Shankar</creator>
        
        <subject>Fiber To The Home (FTTH); Passive Optical Network (PON); Next Generation-Passive Optical Networks Stage 2 (NG-PON2); Quality of Service (QoS); Reflective Semiconductor Optical Amplifier (RSOA); Time and Wavelength Division Multiplexing (TWDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Global market forecasts predicted that by 2020, more than 26 billion internet devices and connections interconnected worldwide would require nearly 3 times the data traffic generated in 2015. This increase in data traffic creates demand for enormous bandwidth capacity. The potential to deliver 10 Gbps of data to individual businesses and households is of paramount importance and a challenging issue for present-day service providers. An intensive study has been carried out on the Fiber-To-The-Home Passive Optical Network (FTTH PON) for use in optical communication, owing to its high data rates and greater bandwidth. The current evolution of the Next Generation-Passive Optical Networks Stage 2 (NG-PON2) network is the primary key technology for the growing demands for higher bandwidth and for the transmission of data from service providers to subscribers in the access network. The Time and Wavelength Division Multiplexing PON (TWDM-PON) architecture is the viable essential solution for NG-PON2, providing more bandwidth for bidirectional transmission. This article proposes a design for an extended reach TWDM-PON based on a reflective semiconductor optical amplifier (RSOA). The exclusive feature of the RSOA is wavelength conversion, which replaces the transmitters at the subscriber end. The Quality of Service (QoS) performance is critically analyzed for different optical modulation formats in the proposed extended reach TWDM-PON using RSOA. The TWDM-PON using RSOA is simulated and investigated for different photodetectors, and the analysis is also carried out for various distances and data rates. The results show that APD receivers perform better than PiN receivers, with a minimum bit error rate of 10^-11 and a minimum Q factor of 6.2. The comparative analysis of different modulation formats shows that Carrier Suppressed Return to Zero-Differential Phase Shift Keying (CSRZ-DPSK) gives the best performance for longer distances and large data rates, and Return to Zero (RZ) gives the least.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_20-Investigation_of_different_Modulation_Formats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Winning the Polio War in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101219</link>
        <id>10.14569/IJACSA.2019.0101219</id>
        <doi>10.14569/IJACSA.2019.0101219</doi>
        <lastModDate>2019-12-31T12:06:03.5100000+00:00</lastModDate>
        
        <creator>Toorab Khan</creator>
        
        <creator>Waheed Noor</creator>
        
        <creator>Junaid Babar</creator>
        
        <creator>Maheen Bakhtyar</creator>
        
        <subject>Prediction; visualization; regression; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Polio is one of the most important health issues that has caught global attention. It has been eradicated globally except in Pakistan and Afghanistan. It is quite alarming that, while the rest of the world is polio-free, polio cases are still emerging from Pakistan. The major motivation behind this research is to study and analyze past cases (trend analysis) and to predict the number of future cases and the obstacles hindering Pakistan from eliminating polio. The areas with the highest influx could be prioritized for effective tracking, planning, and monitoring of vaccination activities and for better utilization of human resources in targeted and controlled interventions. This shall support better management and resource allocation decisions for the speedy eradication of this epidemic disease. Polio cases are displayed on Google Maps for localization and clustering, and trend analysis is performed for future prediction using linear regression.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_19-Winning_the_Polio_War_in_Pakistan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Memory-based Collaborative Filtering: Impacting of Common Items on the Quality of Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101218</link>
        <id>10.14569/IJACSA.2019.0101218</id>
        <doi>10.14569/IJACSA.2019.0101218</doi>
        <lastModDate>2019-12-31T12:06:03.4970000+00:00</lastModDate>
        
        <creator>Hael Al-bashiri</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <creator>Mansoor Abdullateef Abdulgabber</creator>
        
        <creator>Awanis Romli</creator>
        
        <creator>Mohammad Adam Ibrahim Fakhreldin</creator>
        
        <subject>Collaborative filtering; memory-based; similarity method; data sparsity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>In this study, the impact of the common items between a pair of users on the accuracy of memory-based collaborative filtering (CF) is investigated. Although CF systems are widely used recommender systems, data sparsity remains an issue. As a result, the similarity weight between a pair of users with few ratings is almost a spurious relationship. In this work, the similarity weight of the traditional similarity methods is adjusted using exponential functions with various thresholds. These thresholds specify the number of common items between users. Exponential functions can devalue the similarity weight between a pair of users who have few common items and increase the similarity weight for users who have sufficient co-rated items. Therefore, a pair of users with sufficient co-rated items obtains a stronger relationship than one with few common items. Thus, the significance of this paper is to succinctly test the impact of common items on the quality of recommendation, creating an understanding for researchers through a discussion of the findings presented in this study. The MovieLens datasets are used as benchmark datasets to measure the effect of the ratio of common items on accuracy. The results verify the considerable impact exerted by the common-items factor.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_18-Memory_based_Collaborative_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Affective Educational Application of Fish Tank Hydroponics System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101217</link>
        <id>10.14569/IJACSA.2019.0101217</id>
        <doi>10.14569/IJACSA.2019.0101217</doi>
        <lastModDate>2019-12-31T12:06:03.4800000+00:00</lastModDate>
        
        <creator>Rodolfo Romero Herrera</creator>
        
        <creator>Francisco Gallegos Funes</creator>
        
        <subject>Hydroponics; affective interface; embedded system; gardens; educational software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This project develops algorithms for the design and implementation of an embedded system for home hydroponic gardens located on roofs, on terraces, or even in the kitchen, since a fishbowl is used. Vegetables, flowers, and other plants are contemplated, with one plant obtained per seed. The development and care of nature is also a special theme of this project, with the objective of educating people in the care of the environment, for which it is essential not to overlook the affection one has for life. Here it is important to ensure that plants can communicate the physical conditions in which they find themselves through an emotional interface that translates a lack of water into an emotional state of sadness, or sufficient moisture into a state of joy. Thus a technique is presented that, through affective or emotional interfaces, educates owners about the care of the plant and takes advantage of people’s emotional states for the development of educational software.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_17-Affective_Educational_Application_of_Fish_Tank.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Supervised Machine Learning Techniques for Diagnosing Mode of Delivery in Medical Sciences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101216</link>
        <id>10.14569/IJACSA.2019.0101216</id>
        <doi>10.14569/IJACSA.2019.0101216</doi>
        <lastModDate>2019-12-31T12:06:03.4630000+00:00</lastModDate>
        
        <creator>Syeda Sajida Hussain</creator>
        
        <creator>Tooba Fatima</creator>
        
        <creator>Rabia Riaz</creator>
        
        <creator>Sanam Shahla Rizvi</creator>
        
        <creator>Farina Riaz</creator>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Machine learning; supervised learning; bioinformatics; medical sciences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Machine learning techniques are very helpful tools in medical diagnosis nowadays. By using machine learning algorithms and techniques, many complex medical problems can be solved easily and quickly. Without these techniques, it was a difficult task to find the causes of a problem or to suggest the most appropriate solution with high accuracy. Machine learning techniques are used in almost every field of medical sciences, such as heart disease, diabetes, cancer prediction, blood transfusion, gender prediction, and many more. Both supervised and unsupervised machine learning techniques are applied in the field of medical and health sciences to find the best solution for any medical illness. In this paper, supervised machine learning techniques are implemented to classify data on pregnant women according to the mode of delivery, whether it will be a C-section or a normal delivery. This analysis allows classifying subjects into caesarean and normal delivery cases, hence providing insight for physicians to take precautionary measures to ensure the health of an expecting mother and the expected child.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_16-A_Comparative_Study_of_Supervised_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an Automated Intelligent e-Learning System to Enhance the Knowledge using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101215</link>
        <id>10.14569/IJACSA.2019.0101215</id>
        <doi>10.14569/IJACSA.2019.0101215</doi>
        <lastModDate>2019-12-31T12:06:03.4170000+00:00</lastModDate>
        
        <creator>G Deena</creator>
        
        <creator>K. Raja</creator>
        
        <subject>e-Learning; teaching learning process; pre-assessment and post-assessment; Bloom&#39;s taxonomy; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The modern digital world requires its users to learn continuously in order to enhance their knowledge in the working environment and the academic sector. This kind of learning is significantly facilitated by E-Learning platforms, which improve on traditional methods. As E-Learning offers benefits like time and space independence, many learners have made it their choice. However, since an abundance of E-Learning courses is available on websites, learners are confused as to which is the right one to choose. This paper proposes an Automated Intelligent Learning (AIL) methodology that covers the entire Teaching-Learning Process (TLP) to overcome this issue. It enables the selection of suitable topics and the framing of an appropriate course syllabus and assessment questions for users. In it, the learner performs topic selection based on Bloom&#39;s taxonomy, which enables high-quality knowledge outcomes in the learner. The subject curriculum is framed using hierarchical clustering techniques. This helps the user fix suitable topics and conveniently generate questions using machine learning techniques. The proposed methodology was evaluated by carrying out pre- and post-assessment tests on undergraduate students from computer science courses. The performance of the proposed methodology was compared with that of the existing methodology. It was observed that the proposed methodology is effective in applying the hierarchical topic selection method to produce a sound syllabus for the course and its assessment questions. Besides, it was found to enable the learner to learn without any confusion or distraction.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_15-Designing_an_Automated_Intelligent_e_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network Considering Physical Processes and its Application to Diatom Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101214</link>
        <id>10.14569/IJACSA.2019.0101214</id>
        <doi>10.14569/IJACSA.2019.0101214</doi>
        <lastModDate>2019-12-31T12:06:03.3700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Chlorophyl-a concentration; red tide; diatom; MODIS; satellite remote sensing; neural network; meteorological data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>A Convolutional Neural Network (CNN) that considers physical processes with time series of stages is proposed for diatom detection using remote sensing satellite-derived physical data (Chlorophyll-a, Photosynthesis Available Radiance (PAR), Turbidity, Sea Surface Temperature (SST)) and meteorological data. Diatoms bloom under the conditions of suitable sea water temperature, nutrient-rich water (Chlorophyll-a derived from river water flow), photosynthesis available radiance derived from solar irradiance, transparency of the sea water for photosynthesis (turbidity), and sea water convection between the bottom water and the surface water. Almost all of these conditions can be monitored by remote sensing satellite-based radiometers. The proposed CNN-based diatom prediction with remote sensing satellite and meteorological data is validated through experiments in the Ariake bay area, Kyushu, Japan, using time series of Moderate Resolution Imaging Spectroradiometer (MODIS)-derived turbidity as well as chlorophyll-a data estimated for the winter seasons (January to March) from 2010 to 2018, together with measured and acquired meteorological data for the same winter seasons.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_14-Convolutional_Neural_Network_considering_Physical_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Analysis for Malware Behavior Detection using Registry Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101213</link>
        <id>10.14569/IJACSA.2019.0101213</id>
        <doi>10.14569/IJACSA.2019.0101213</doi>
        <lastModDate>2019-12-31T12:06:03.3570000+00:00</lastModDate>
        
        <creator>Nur Adibah Rosli</creator>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Faizal M.A</creator>
        
        <creator>Siti Rahayu Selamat</creator>
        
        <subject>Malware; malware detection; behavior analysis; k-means clustering; data registry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The increase in malware attacks raises risk in the information technology industry, including Industrial Revolution 4.0, which spans multiple sectors, especially cyber security. Because of this, malware detection techniques play a vital role in detecting malware attacks that can have a high impact on the cyber world. One such technique, clustering, is an unsupervised machine learning approach able to detect malware attacks by identifying the behavior of the malware. Current research, however, shows a paucity of analysis in detecting malware behavior and limited sources that can be used in identifying malware attacks. Thus, this paper introduces a clustering detection model that uses the K-Means clustering approach to detect the malware behavior of registry data based on the features of the malware. Clustering techniques, which use unsupervised machine learning algorithms, play an important role in grouping similar malware characteristics by studying the behavior of the malware. Throughout the experiment, malware features were selected and extracted from computer registry data and then used in the proposed clustering detection model to be clustered as normal or suspicious behavior. The results of the experiment indicate that the proposed model is capable of clustering normal and suspicious data into two separate groups with a high detection rate of more than 90 percent accuracy. Ultimately, the main contribution based on the findings is a framework that can be used to cluster registry data to detect malware.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_13-Clustering_Analysis_for_Malware_Behavior_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network-based Diabetic Type II High-Risk Prediction using Photoplethysmogram Waveform Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101212</link>
        <id>10.14569/IJACSA.2019.0101212</id>
        <doi>10.14569/IJACSA.2019.0101212</doi>
        <lastModDate>2019-12-31T12:06:03.3230000+00:00</lastModDate>
        
        <creator>Yousef K Qawqzeh</creator>
        
        <subject>Diabetes; prediction; classification; photoplethysmogram; neural networks; diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This work aims to predict and classify patients into diabetic and nondiabetic subjects based on age and four independent variables extracted from the analysis of photoplethysmogram (PPG) morphology in the time domain. The study has two main stages. The first was the analysis of the PPG waveform to extract the b/a, RI, DiP, and SPt indices. These parameters contribute to the prediction of diabetes; they were statistically significant and correlated with the HbA1c test. The second stage was building a neural network-based classifier to predict diabetes. The model showed an accuracy of 90.2% in the training phase and 85.5% in the testing phase. The findings of this research work may contribute towards the prediction of diabetes in its early stages. The proposed classifier also showed high accuracy in predicting the existence of diabetes in the Saudi population.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_12-Neural_Network_Based_Diabetic_Type_II_High_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedding Adaptation Levels within Intelligent Tutoring Systems for Developing Programming Skills and Improving Learning Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101211</link>
        <id>10.14569/IJACSA.2019.0101211</id>
        <doi>10.14569/IJACSA.2019.0101211</doi>
        <lastModDate>2019-12-31T12:06:03.3070000+00:00</lastModDate>
        
        <creator>Mohamed A Elkot</creator>
        
        <subject>Intelligent Tutoring Systems (ITS); programming skills; adaptive e-learning; learning style; learning efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Intelligent Tutoring Systems (ITSs) represent a virtual learning environment that provides for learning needs and adapts to the characteristics of learners according to their cognitive and behavioral aspects, in order to reach the desired learning outcomes. The purpose of this study is to investigate the impact of embedding adaptation levels within intelligent tutoring systems on developing Object-Oriented Programming (OOP) skills, as well as on learning efficiency, for students of the computer science department, Faculty of Science and Arts, Qassem University. In this context, the author developed an Intelligent Tutoring System (ITS) that provides multiple levels of adaptation (learner level, links level) to support automatic adaptation to each of the students&#39; characteristics and investigated the effectiveness of the system on the dependent variables. The random sample consisted of (n=44) students, divided into two similar groups: Experimental (ITS) and Control (face-to-face, traditional). The findings revealed a noticeable improvement in the performance of the students in the experimental group over the control group, which used the face-to-face method, in both programming skills and learning efficiency.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_11-Embedding_Adaptation_Levels_within_Intelligent_Tutoring_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System for Monitoring People with Disabilities in the Event of an Accident using Mobile Terminals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101210</link>
        <id>10.14569/IJACSA.2019.0101210</id>
        <doi>10.14569/IJACSA.2019.0101210</doi>
        <lastModDate>2019-12-31T12:06:03.2770000+00:00</lastModDate>
        
        <creator>Alexandra Fanca</creator>
        
        <creator>Monica Cujerean</creator>
        
        <creator>Adela Puscasiu</creator>
        
        <creator>Dan-Ioan Gota</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>Smartphones; built-in smartphone sensors; monitoring people; android applications; accident detecting system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>In this fast-paced century, people around the world have busy schedules every day and therefore cannot spend enough time with the elderly and people with disabilities or chronic illnesses. These persons need much more attention and care because they cannot cope with daily activities the way a healthy person would. Daily monitoring and assistance of elderly or disabled people is a very important task, both in everyday activity and, especially, when emergencies occur. Fortunately, we can easily use today&#39;s constantly developing technologies to monitor them remotely. This paper seeks a solution to reduce fatalities due to accidents by using today&#39;s advanced technologies, e.g., smartphones and fast communications. The use of these technologies can provide permanent monitoring of the elderly and persons with disabilities without restricting their mobility or affecting their quality of life. In this way, if emergency situations arise for the elderly or for people with disabilities or chronic diseases, measures can be taken as soon as possible. The development of a mobile application capable of monitoring the occurrence of accidents for the above-mentioned persons is an obvious aid to the doctors involved in ensuring their health. Thus, the main objective of the application is to detect accidental falls of these persons in the shortest possible time. Another objective is to provide an application that runs in the background of the mobile operating system, using as little power as possible.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_10-System_for_Monitoring_People_with_Disabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatically Extract Vertebra and Compute the Cobb Angle based on Spine’s Features and Adaptive ASMs in Posteroanterior Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101209</link>
        <id>10.14569/IJACSA.2019.0101209</id>
        <doi>10.14569/IJACSA.2019.0101209</doi>
        <lastModDate>2019-12-31T12:06:03.2600000+00:00</lastModDate>
        
        <creator>Pham The Bao</creator>
        
        <subject>Spine detection; spine extraction; vertebrae detection; Cobb angle; adaptive active shape model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Nowadays, clinical diagnoses are increasingly supported by medical equipment, but doctors still need a lot of time and effort to diagnose. The construction of a system that can diagnose automatically will greatly help doctors who must handle many medical records. Many diseases, especially bone diseases, are diagnosed from radiographs and can be diagnosed automatically. The purpose of this paper is to introduce a new method to measure the curvature of the spine in X-ray images. We split this subject into two problems. The first is to extract the spine from the X-ray image. We use thresholding to remove redundant information from the images; then an automatic mask is created to store the position of the spine and smooth its boundary. Based on the spine extracted in the previous steps, an Active Shape Model (ASM), designed from the characteristics of the vertebrae, is used to extract each vertebra from the spine. Finally, we measure the Cobb angle formed by the vertebrae. The solution to the two problems was implemented on high-quality X-ray images: in more than 80% of cases the area of the spine is extracted and the Cobb angle is measured correctly. The accuracy of our method decreases if the quality of the image is low.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_9-Automatically_Extract_Vertebra_and_Compute_the_Cobb_Angle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resonance Mitigation and Performance Improvement in Distributed Generation based LCL Filtered Grid Connected Inverters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101208</link>
        <id>10.14569/IJACSA.2019.0101208</id>
        <doi>10.14569/IJACSA.2019.0101208</doi>
        <lastModDate>2019-12-31T12:06:03.2300000+00:00</lastModDate>
        
        <creator>Danish Khan</creator>
        
        <creator>Muhammad Mansoor Khan</creator>
        
        <creator>Yaqoob Ali</creator>
        
        <creator>Abdar Ali</creator>
        
        <creator>Imad Hussain</creator>
        
        <subject>LCL filter; high pass filter; grid connected inverters (GCI); stability analysis; robustness; SPWM technique; D-space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Resonance has become an issue of paramount importance for the stable operation of LCL-filtered grid-connected inverters. Active damping algorithms are widely adopted to restrain the resonance peak associated with LCL filters. The focus of this paper is to develop an improved active damping solution based on the filter capacitor current for better control performance of three-phase LCL grid-connected inverters. In the proposed solution, an improved compensator is included across the LCL filter system and the capacitor current feedback. A damping loop is implemented with the proposed combination, which is then fed back at a reference voltage point of the three-phase inverter to damp the aroused resonance peak. The substantial features of the proposed configuration are a wide damping range of the resonance frequency and high control bandwidth, which result in a faster dynamic response in comparison with conventional proportional capacitor-current feedback. Moreover, the stability of the current loop is examined in detail by implementing the proposed damping method under filter parametric variations. Finally, the efficacy of the proposed method is validated by illustrating steady-state and transient responses through simulations and experimental results from a laboratory prototype.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_8-Resonance_Mitigation_and_Performance_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Design using Genetic Quality Components Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101207</link>
        <id>10.14569/IJACSA.2019.0101207</id>
        <doi>10.14569/IJACSA.2019.0101207</doi>
        <lastModDate>2019-12-31T12:06:03.2000000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Aleksander Gusev</creator>
        
        <subject>Software design; selection of software components set; numerical quality criteria evaluated; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>The paper presents a software design methodology based on computational experiments for the effective selection of a software component set. The selection of components is performed with respect to numerical quality criteria evaluated in reproducible experiments with various sets of components in a virtual infrastructure simulating the operating conditions of the software system being developed. To reduce the number of experiments with unpromising sets of components, a genetic algorithm is applied. For representing the sets of components in the form of natural genotypes, an encoding mapping is introduced; the reverse mapping is used to decipher the genotype. In the first step of the technique, the genetic algorithm creates an initial population of random genotypes that are converted into the assessed sets of software components. The paper shows the application of the proposed methodology to find an effective choice of Node.js components. For this purpose, a MATLAB program for genetic search and an experimental scenario for a virtual machine running the Ubuntu 16.04 LTS operating system were developed. To guarantee proper reproduction of the experimental conditions, the Vagrant and Ansible configuration tools were used to create the virtual environment of the experiment.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_7-Software_Design_using_Genetic_Quality_Components_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Computer-Aided to Improve Industrial Productivity in Cement Factories by using a Novel Design of Quantitative Conveyor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101206</link>
        <id>10.14569/IJACSA.2019.0101206</id>
        <doi>10.14569/IJACSA.2019.0101206</doi>
        <lastModDate>2019-12-31T12:06:03.1830000+00:00</lastModDate>
        
        <creator>Anh Son Tran</creator>
        
        <creator>Ha Quang Thinh Ngo</creator>
        
        <subject>Motion control; cement industry; conveyor; automation; robotics system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>To keep pace with Industry 4.0, many cement enterprises must enhance their productive capacity. In this paper, a novel design of quantitative conveyor is introduced to improve industrial productivity. Firstly, the test hardware platform is set up based on the customer&#8217;s requirements. Then, the analysis and control of the mechanical platform for the quantitative conveyor are investigated. The experimental results show that the operation of the conveyor is stable and precise enough to ensure the output products for cement factories.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_6-Application_of_Computer_Aided_to_Improve_Industrial_Productivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Balanced Two-level Clustering for Large-scale Wireless Sensor Networks based on the Gravitational Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101205</link>
        <id>10.14569/IJACSA.2019.0101205</id>
        <doi>10.14569/IJACSA.2019.0101205</doi>
        <lastModDate>2019-12-31T12:06:03.1530000+00:00</lastModDate>
        
        <creator>Basilis Mamalis</creator>
        
        <creator>Marios Perlitis</creator>
        
        <subject>Gravitational search algorithm; wireless sensors; network lifetime; nodes clustering; data collection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Organizing sensor nodes in clusters is an effective method for energy preservation in a Wireless Sensor Network (WSN). In this research work, we present a novel hybrid clustering scheme that combines a typical gradient clustering protocol with an evolutionary optimization method based mainly on the Gravitational Search Algorithm (GSA). The proposed scheme aims at improved performance over large networks, where classical schemes in most cases lead to inefficient solutions. It first creates suitably balanced multihop clusters, in which the sensors&#8217; energy gets larger as they come closer to the cluster head (CH). In the next phase of the proposed scheme, a suitable protocol based on the GSA runs to associate sets of cluster heads with specific gateway nodes for the eventual relaying of data to the base station (BS). The fitness function was appropriately chosen, considering both the distance from the cluster heads to the gateway nodes and the remaining energy of the gateway nodes, and it was further optimized in order to gain more accurate results for large instances. Extended experimental measurements demonstrate the efficiency and scalability of the presented approach over very large WSNs, as well as its superiority over other known clustering approaches presented in the literature.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_5-Energy_Balanced_Two_Level_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Words Segmentation-based Scheme for Implicit Aspect Identification for Sentiments Analysis in English Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101204</link>
        <id>10.14569/IJACSA.2019.0101204</id>
        <doi>10.14569/IJACSA.2019.0101204</doi>
        <lastModDate>2019-12-31T12:06:03.1370000+00:00</lastModDate>
        
        <creator>Dhani Bux Talpur</creator>
        
        <creator>Guimin Huang</creator>
        
        <subject>Implicit aspect; explicit aspects; polarity; sentiments analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Implicit and explicit aspect extraction is a growing research area of natural language processing (NLP) and opinion mining. The method has become an essential part of a large collection of applications, including e-commerce, social media, and marketing. These applications aid customers in buying products online and collect feedback on products and their aspects. This feedback is qualitative (comments) and helps to enhance product quality and delivery service. However, the main problem is analyzing qualitative feedback based on comments, as performing this analysis manually requires a lot of effort and time. In this research paper, we develop and suggest an automatic solution for extracting implicit aspects and analyzing comments. The problem of implicit aspect extraction and sentiment analysis is solved by splitting each sentence at defined boundaries and extracting it into an isolated list; these isolated list elements are also known as complete sentences. The sentences are further separated into words, which are filtered to remove anonymous words and saved in a word list for aspect matching; this technique is used to measure polarity and perform sentiment analysis. We evaluate the solution using a dataset of online comments.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_4-Words_Segmentation_based_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Best-Choice Topology: An Optimized Array-based Maximum Finder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101203</link>
        <id>10.14569/IJACSA.2019.0101203</id>
        <doi>10.14569/IJACSA.2019.0101203</doi>
        <lastModDate>2019-12-31T12:06:03.1200000+00:00</lastModDate>
        
        <creator>Marina Prvan</creator>
        
        <creator>Julije Ožegovic</creator>
        
        <creator>Ivan Soco</creator>
        
        <creator>Duje Coko</creator>
        
        <subject>Array topology; best-choice topology; maximum finder; maximum magnitude generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>Extracting the maximum from an unsorted set of binary elements is important in many signal processing applications. Since only a few maximum-finder implementations can be found in the recent literature, in this paper we provide an update on the topic. Generally, maximum-finders are either array-based, with parallel bit-by-bit comparison of the elements, or more efficient tree-based structures, with hierarchical maximum extraction. In this paper, we concentrate on array-based topologies only, since our goal is to propose a new maximum-finder design called Best-Choice Topology (BCT), an optimized version of the standard Array Topology (AT). The usual bit-by-bit parallel comparison is applied for extracting the maximum and its one-of-N address. Boolean expressions are derived for the BCT logical design and the minimum-finder equivalent. The functionality of the proposed architecture and the reference designs is verified with Xilinx ISE Design Suite 14.5. Synthesis is done on Application Specific Integrated Circuit (ASIC) TSMC 65nm technology. The conclusion of the paper is two-fold. First, we confirm the timing efficiency of BCT compared to AT. Next, we show that BCT is more efficient than the recent maximum-finder design called Maximum Magnitude Generator (MaxMG) and has great potential for use in real-time signal processing applications.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_3-Best_Choice_Topology_an_Optimized_Array.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Activation and Spreading Sequence for Spreading Activation Policy Selection Method in Transfer Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101202</link>
        <id>10.14569/IJACSA.2019.0101202</id>
        <doi>10.14569/IJACSA.2019.0101202</doi>
        <lastModDate>2019-12-31T12:06:03.1070000+00:00</lastModDate>
        
        <creator>Hitoshi Kono</creator>
        
        <creator>Ren Katayama</creator>
        
        <creator>Yusaku Takakuwa</creator>
        
        <creator>Wen Wen</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Reinforcement learning; transfer learning; spreading activation theory; policy selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>This paper proposes an automatic policy selection method for transfer learning in reinforcement learning, using spreading activation theory from cognitive psychology. Intelligent robot systems have recently been studied for practical applications such as home robots, communication robots, and warehouse robots. Learning algorithms are key to building useful robot systems. For example, a robot can search for an optimal policy through trial and error using reinforcement learning. Moreover, transfer learning enables the reuse of prior policies and is effective for environmental adaptability. However, humans must determine which methods are applicable in transfer learning. A policy selection method for transfer learning in reinforcement learning has been proposed using the spreading activation model from cognitive psychology. In this paper, a novel activation function and spreading sequence are discussed for the spreading-activation policy selection method. Further, computer simulations are used to examine the effectiveness of the proposed method for automatic policy selection in a simplified shortest-path problem.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_2-Activation_and_Spreading_Sequence_for_Spreading_Activation_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local-Set Based-on Instance Selection Approach for Autonomous Object Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101201</link>
        <id>10.14569/IJACSA.2019.0101201</id>
        <doi>10.14569/IJACSA.2019.0101201</doi>
        <lastModDate>2019-12-31T12:06:03.0430000+00:00</lastModDate>
        
        <creator>Joel Luis Carbonera</creator>
        
        <creator>Joanna Isabelle Olszewska</creator>
        
        <subject>Machine learning; instance selection; autonomous systems; object modelling; visual object recognition; computer vision; machine vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(12), 2019</description>
        <description>With the increasing presence of robotic agents in our daily life, computationally efficient modelling of real-world objects by autonomous systems is of prime importance for enabling these artificial agents to automatically and effectively perform tasks such as visual object recognition. For this purpose, we introduce a novel machine-learning approach for instance selection called the Approach for Selection of Border Instances (ASBI). This method adopts the notion of local sets to select the most representative instances at the boundaries of the classes, in order to reduce the set of training instances and, consequently, the computational resources necessary for the artificial agents to learn real-world objects. Our new algorithm was validated on 27 standard datasets and applied to 2 challenging object-modelling datasets to test the automated object recognition task. ASBI&#8217;s performance was compared to that of 6 state-of-the-art algorithms, considering three standard metrics, namely accuracy, reduction, and effectiveness. All the obtained results show that the proposed method is promising for the autonomous recognition task, while presenting the best trade-off between classification accuracy and data size reduction.</description>
        <description>http://thesai.org/Downloads/Volume10No12/Paper_1-Local_Set_Based_on_Instance_Selection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Learning Styles and Automatic Assignment of Projects in an Adaptive e-Learning Environment using Project Based Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101191</link>
        <id>10.14569/IJACSA.2019.0101191</id>
        <doi>10.14569/IJACSA.2019.0101191</doi>
        <lastModDate>2019-12-05T12:45:57.7130000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Erick Apaza</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <creator>Claudia Rivera</creator>
        
        <subject>Adaptive e-Learning; project based collaborative learning; case-based reasoning; learning styles; back propagation neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The use of the project-based learning approach is one of the emerging trends in education and Adaptive e-Learning platforms. One of the main challenges in this line of research is to identify each student&#8217;s learning style; the results of that identification are used in this work to assign the course projects that best suit the characteristics of each particular student. These projects incorporate different types of learning strategies and objects in order to help facilitate and simplify the teaching/learning process of the adaptive e-Learning platform, which uses the project-based learning approach. In this work, after a review of the literature for the theoretical foundation and the establishment of the state of the art, an online module for automatic recognition of learning styles is proposed. It uses information from the student&#8217;s interaction with the system and is based on Neural Networks and Fuzzy Logic; its results are considered by the project selection and assignment module, which uses Case-Based Reasoning. Tests were then carried out and the obtained results analyzed.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_91-Identification_of_Learning_Styles_and_Automatic_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Semantic Categorization of News Headlines using Ensemble Machine Learning: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101190</link>
        <id>10.14569/IJACSA.2019.0101190</id>
        <doi>10.14569/IJACSA.2019.0101190</doi>
        <lastModDate>2019-11-30T11:51:53.7270000+00:00</lastModDate>
        
        <creator>Raghad Bogery</creator>
        
        <creator>Nora Al Babtain</creator>
        
        <creator>Nida Aslam</creator>
        
        <creator>Nada Alkabour</creator>
        
        <creator>Yara Al Hashim</creator>
        
        <creator>Irfan Ullah Khan</creator>
        
        <subject>Natural language processing; feature engineering; word embedding; text classification; ensemble learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Due to the widespread availability of the Internet, there is a huge number of sources that produce massive amounts of daily news. Moreover, users&#8217; need for information has been increasing unprecedentedly, so it is critical that news is automatically classified to permit users to access the required news instantly and effectively. One of the major problems with online news sets is the categorization of the vast number of news articles. To solve this problem, machine learning models along with Natural Language Processing (NLP) are widely used for automatic news classification, categorizing the topics of untracked news and individual opinions based on users&#8217; prior interests. However, the existing studies mostly rely on NLP but use large documents to train the prediction model, so it is hard to classify a short text without using semantics; few studies focus on classifying news headlines using semantics. Therefore, this paper attempts to use semantics and ensemble learning to improve short-text classification. The proposed methodology starts with a preprocessing stage and then applies feature engineering using word2vec with a TF-IDF vectorizer. Afterwards, the classification model was developed with different classifiers: KNN, SVM, Na&#239;ve Bayes and Gradient Boosting. The experimental results verify that Multinomial Na&#239;ve Bayes shows the best performance, with an accuracy of 90.12% and a recall of 90%.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_90-Automatic_Semantic_Categorization_of_News_Headlines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Management System Personalization based on Multi-Attribute Decision Making Techniques and Intuitionistic Fuzzy Numbers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101188</link>
        <id>10.14569/IJACSA.2019.0101188</id>
        <doi>10.14569/IJACSA.2019.0101188</doi>
        <lastModDate>2019-11-30T11:51:53.6970000+00:00</lastModDate>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <subject>Learning Management Systems (LMS); e-Learning; multi-attribute decision making; learning styles; content personalization; learning objects selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The personalization of Learning Management Systems is a fundamental task in the current context of e-Learning and the WWW. However, there are many controversies around the criteria used to select and present the most appropriate content for each user. The approaches most used in the last decade were the identification of learning styles, the analysis of history and navigational behavior, and the classification of user profiles, without conclusive evidence to determine a method that can be adopted universally, considering the complexity of the cognitive processes involved. This paper proposes an approach based on multi-attribute decision making techniques, which allows considering and combining the criteria most effectively used in the area, according to particular contexts, as a new approach to content personalization and appropriate learning object selection. The application of this approach aims to maximize the effectiveness and efficiency of the teaching process and enrich the user experience.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_88-Learning_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Closer Look at Arabic Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101189</link>
        <id>10.14569/IJACSA.2019.0101189</id>
        <doi>10.14569/IJACSA.2019.0101189</doi>
        <lastModDate>2019-11-30T11:51:53.6970000+00:00</lastModDate>
        
        <creator>Mohammad A R Abdeen</creator>
        
        <creator>Sami AlBouq</creator>
        
        <creator>Ahmed Elmahalawy</creator>
        
        <creator>Sara Shehata</creator>
        
        <subject>Arabic text classification; support vector machines; k-NN; Naive Bayesian; decision trees; C4.5; maximum entropy; feature selection; Arabic dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The world has witnessed an information explosion in the past two decades. Electronic devices are now available in many varieties, such as PCs, laptops, book readers, and mobile devices, at relatively affordable prices. Owing to this, to the ubiquitous use of software applications such as social media and cloud applications, and to the increasing trend towards digitalization, the amount of information on the global cloud has surged to an unprecedented level. Therefore, a dire need exists to mine this massively large amount of data and produce meaningful information. Text classification is one of the known and well-established data mining techniques reported in the literature. Text classification methods, including statistical and machine learning algorithms such as Naive Bayesian and Support Vector Machines, have been widely used. Many works have been reported regarding text classification of various languages, including English, Chinese, Russian, and many others. Arabic is the fifth most spoken language in the world, and there have been many works in the literature on Arabic text classification. However, to the best of our knowledge, there is no recent work that presents a good, critical and comprehensive survey of Arabic text classification over the past two decades. The aim of this paper is to present a concise and yet comprehensive review of Arabic text classification. We have covered over 50 research papers spanning the past two decades (2000 - 2019). The main focus of this paper is to address the following issues: 1) The techniques reported in the literature. 2) New techniques. 3) The technique most often claimed to be efficient. 4) The datasets used and which ones are most popular. 5) Which feature selection techniques are used. 6) Popular classes/categories used. 7) The effect of stemming techniques on classification results.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_89-A_Closer_Look_at_Arabic_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Combination of Iris-based Cancelable Biometrics and Biometric Cryptosystems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101187</link>
        <id>10.14569/IJACSA.2019.0101187</id>
        <doi>10.14569/IJACSA.2019.0101187</doi>
        <lastModDate>2019-11-30T11:51:53.6670000+00:00</lastModDate>
        
        <creator>Osama Ouda</creator>
        
        <creator>Norimichi Tsumura</creator>
        
        <creator>Toshiya Nakaguchi</creator>
        
        <subject>Biometric template protection; cancelable biometrics; biometric cryptosystems; BioEncoding; fuzzy commitment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The fuzzy commitment scheme (FCS) is one of the most effective biometric cryptosystems (BCs) that provide secure management of cryptographic keys using biometric templates. In this scheme, error correcting codes (ECCs) are firstly employed to encode a cryptographic key into a codeword which is then secured via linking (committing) it with a biometric template of the same length. Unfortunately, the key length is constrained by the size of the adopted biometric template as well as the employed ECC(s). In this paper, we propose a secure iris template protection scheme that combines cancelable biometrics with the FCS in order to secure long cryptographic keys without sacrificing the recognition accuracy. First, we utilize cancelable biometrics to derive revocable templates of large sizes from the most reliable bits in iris codes. Then, the FCS is applied to the obtained cancelable iris templates to secure cryptographic keys of the desired length. The revocability of cryptographic keys as well as true iris templates is guaranteed due to the hybridization of both techniques. Experimental results show that the proposed hybrid system can achieve high recognition accuracy regardless of the key size.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_87-Effective_Combination_of_Iris_based_Cancelable_Biometrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UAV Control Architecture: Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101186</link>
        <id>10.14569/IJACSA.2019.0101186</id>
        <doi>10.14569/IJACSA.2019.0101186</doi>
        <lastModDate>2019-11-30T11:51:53.6500000+00:00</lastModDate>
        
        <creator>IDALENE Asmaa</creator>
        
        <creator>BOUKHDIR Khalid</creator>
        
        <creator>MEDROMI Hicham</creator>
        
        <subject>Unmanned Aerial Vehicle; control architecture; deliberative approach; reactive approach; hybrid approach; behavior approach; hybrid behavior approach; subsumption approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Since civil Unmanned Aerial Vehicles (UAVs) are expected to perform a wide range of missions, designing an efficient control architecture for an autonomous UAV is a very challenging problem. Several contributions have been made towards implementing an autonomous UAV, and the key challenge of all of them is developing the global strategy. Robotic control approaches can be classified into six categories: deliberative, reactive, hybrid, behavior, hybrid behavior and subsumption. In this paper, we review the existing control architectures to extract the main features of civil UAVs. The definition, advantages and drawbacks of each architecture are highlighted, and finally a comparative study of the mentioned control approaches is provided.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_86-UAV_Control_Architecture_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rich Style Embedding for Intrinsic Plagiarism Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101185</link>
        <id>10.14569/IJACSA.2019.0101185</id>
        <doi>10.14569/IJACSA.2019.0101185</doi>
        <lastModDate>2019-11-30T11:51:53.6200000+00:00</lastModDate>
        
        <creator>Oumaima Hourrane</creator>
        
        <creator>El Habib Benlahmer</creator>
        
        <subject>Plagiarism detection; style embedding; deep neural network; stylometry; syntactic trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Stylometry plays an important role in intrinsic plagiarism detection, where the goal is to identify potential plagiarism by analyzing a document for undeclared changes in writing style. The purpose of this paper is to study the interaction between syntactic structures, attention mechanisms, and contextualized word embeddings, as well as their effectiveness for plagiarism detection. Accordingly, we propose a new style embedding that combines syntactic trees and the pre-trained Multi-Task Deep Neural Network (MT-DNN). Additionally, we use attention mechanisms to sum the embeddings, experimenting with both a Bidirectional Long Short-Term Memory (BiLSTM) and a Convolutional Neural Network (CNN) with max-pooling for sentence encoding. Our model is evaluated on two sub-tasks, style change detection and style breach detection, and compared with two baseline detectors based on classic stylometric features.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_85-Rich_Style_Embedding_for_Intrinsic_Plagiarism_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-tuning Resource Allocation of Apache Spark Distributed Multinode Cluster for Faster Processing of Network-trace Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101184</link>
        <id>10.14569/IJACSA.2019.0101184</id>
        <doi>10.14569/IJACSA.2019.0101184</doi>
        <lastModDate>2019-11-30T11:51:53.6030000+00:00</lastModDate>
        
        <creator>Shyamasundar L B</creator>
        
        <creator>V Anilkumar</creator>
        
        <creator>Jhansi Rani P</creator>
        
        <subject>Big data; packet data analysis; network security; distributed apache spark cluster; Yet Another Resource Negotiator (YARN); parameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In the field of network security, the task of processing and analyzing huge amounts of Packet CAPture (PCAP) data is of utmost importance for developing and monitoring the behavior of networks and for maintaining intrusion detection and prevention systems, firewalls, etc. In recent times, Apache Spark in combination with Hadoop Yet-Another-Resource-Negotiator (YARN) is evolving into a generic Big Data processing platform. While processing raw network packets, timely inference of network security is a primitive requirement. However, to the best of our knowledge, no prior work has focused on a systematic study of fine-tuning the resources, scalability and performance of a distributed Apache Spark cluster while processing PCAP data. For obtaining the best performance, various cluster parameters, such as the number of cluster nodes, the number of cores utilized from each node, the total number of executors run in the cluster, the amount of main memory used from each node, and the executor memory overhead allotted for each node to handle garbage collection issues, have been fine-tuned, which is the focus of the proposed work. Through the proposed strategy, we could analyze 85GB of data (provided by CSIR Fourth Paradigm Institute) in just 78 seconds, using a 32-node (256-core) Spark cluster. This would otherwise take around 30 minutes in traditional processing systems.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_84-Fine_Tuning_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Domain Specific Languages Implementation Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101183</link>
        <id>10.14569/IJACSA.2019.0101183</id>
        <doi>10.14569/IJACSA.2019.0101183</doi>
        <lastModDate>2019-11-30T11:51:53.5870000+00:00</lastModDate>
        
        <creator>Eman Negm</creator>
        
        <creator>Soha Makady</creator>
        
        <creator>Akram Salah</creator>
        
        <subject>Domain Specific Language (DSL); language workbench; language implementation aspects; software language engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Domain Specific Languages (DSLs) bridge the gap between the business model and the technical model. DSLs allow the technical developer to write programs using business domain notations. This leads to higher productivity and better quality than General Purpose Languages (GPLs). One of the main challenges of utilizing DSLs in the current software process is how to reduce the implementation cost and the knowledge required for building and maintaining DSLs. Language workbenches are environments that provide high-level tools for implementing different language aspects. The purpose of this paper is to provide a survey on the different aspects of implementing DSLs. The survey includes the structure, editor, semantics, and composability language aspects. Furthermore, it overviews the approaches used for each aspect and classifies the current workbenches according to these approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_83-Survey_on_Domain_Specific_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluate Metadata of Sparse Matrix for SpMV on Shared Memory Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101182</link>
        <id>10.14569/IJACSA.2019.0101182</id>
        <doi>10.14569/IJACSA.2019.0101182</doi>
        <lastModDate>2019-11-30T11:51:53.5570000+00:00</lastModDate>
        
        <creator>Nazmul Ahasan Maruf</creator>
        
        <creator>Waseem Ahmed</creator>
        
        <subject>Sparse matrix vector multiplication; sparse matrix metadata; sparse matrix vector multiplication parallelization; shared memory architecture; sparse matrix storage formats; high performance computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Sparse matrix operations are frequently used in scientific, engineering and high-performance computing (HPC) applications. Among them, sparse matrix-vector multiplication (SpMV) is a popular kernel and is considered an important numerical method in science, engineering and scientific computing. However, SpMV is a computationally expensive operation. To obtain better performance, SpMV depends on certain factors; choosing the right storage format for the sparse matrix is one of them. Data access patterns, the sparsity of the matrix data set, load balancing, and sharing of the memory hierarchy are other factors that affect performance. Metadata that describes the substructure of the sparse matrix, such as its shape, density, and sparsity, also affects performance efficiency for any sparse matrix operation. Various approaches presented in the literature over the last few decades have given good results for certain types of matrix structures but do not perform as well with others. Developers are thus faced with difficulty in choosing the most appropriate format. In this research, an approach is presented that evaluates the metadata of a given sparse matrix and suggests to developers the most suitable storage format to use for SpMV.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_82-Evaluate_Metadata_of_Sparse_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instagram Shopping in Saudi Arabia: What Influences Consumer Trust and Purchase Decisions?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101181</link>
        <id>10.14569/IJACSA.2019.0101181</id>
        <doi>10.14569/IJACSA.2019.0101181</doi>
        <lastModDate>2019-11-30T11:51:53.5400000+00:00</lastModDate>
        
        <creator>Taghreed Shaher Alotaibi</creator>
        
        <creator>Afnan Abdulrahman Alkhathlan</creator>
        
        <creator>Shaden Saad Alzeer</creator>
        
        <subject>Social commerce; Instagram; trust; purchase intention; Maroof; key opinion leaders; social media influencers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The recent developments of social networking sites (SNSs), along with the increasing usage of online shopping, have led to the emergence of social commerce platforms. Social commerce (s-commerce) is the use of Web 2.0 technologies and social media to deliver e-commerce services to consumers. The Kingdom of Saudi Arabia (KSA) has been witnessing rapid growth in s-commerce usage, with Instagram being the most popular network in the region. This paper is one of the few that investigates the factors affecting consumers’ trust and purchase intentions on Instagram as an s-commerce platform in Saudi Arabia. The proposed model explores a number of factors, such as Social Media Influencers (SMIs), Key Opinion Leaders (KOLs) and consumer feedback, in terms of their influence on consumers’ trust and purchase decisions, in addition to the effect of Maroof, an e-service provided by the Saudi Ministry of Commerce and Investment to evaluate the reliability of online stores. Following a quantitative approach and using Partial Least Squares Structural Equation Modeling (PLS-SEM), the findings of this study revealed a positive relationship between consumers’ trust and their purchase intentions. Additionally, the impact of SMIs and consumer feedback was shown to increase consumers’ trust, in turn affecting intent to buy from Instagram stores, while the effect of Maroof and KOLs was shown to directly influence consumers’ purchase intentions.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_81-Instagram_Shopping_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SQL to SPARQL Conversion for Direct RDF Querying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101180</link>
        <id>10.14569/IJACSA.2019.0101180</id>
        <doi>10.14569/IJACSA.2019.0101180</doi>
        <lastModDate>2019-11-30T11:51:53.5230000+00:00</lastModDate>
        
        <creator>Ahmed ABATAL</creator>
        
        <creator>Khadija Alaoui</creator>
        
        <creator>Larbi Alaoui</creator>
        
        <creator>Mohamed Bahaj</creator>
        
        <subject>Resource Description Framework (RDF); Structured Query Language (SQL); Simple Protocol and RDF Query Language (SPARQL); schema mapping; query conversion; Allegrograph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>With the advances in native storage means for RDF data and the associated querying capabilities using SPARQL, there is a need to let SQL users benefit from such capabilities for interoperability objectives and without any conversion of the RDF data into relational data. In this sense, this work presents SQL2SPARQL4RDF, an automatic conversion algorithm of SQL queries into SPARQL queries for querying RDF data, which extends the previously established algorithm with relevant SQL elements such as queries with INSERT, DELETE, GROUP BY and HAVING clauses. SQL users are provided with a relational schema of their RDF data against which they can formulate their SQL queries, which are then converted into equivalent SPARQL ones with respect to the provided schema. This avoids the burden of translating instances and replicating data, thus saving loading times and guaranteeing fast execution, especially in the case of massive amounts of data. In addition, the automatic mapping framework was developed in the Java programming language and implements many new mapping functionalities. Furthermore, to test and validate the efficiency of the mapping approach, a module was added for the automatic execution and evaluation of the various obtained SPARQL queries on Allegrograph.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_80-SQL_to_SPARQL_Conversion_for_Direct_RDF_Querying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Navigation of Unmanned Aerial Vehicles based on Android Smartphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101179</link>
        <id>10.14569/IJACSA.2019.0101179</id>
        <doi>10.14569/IJACSA.2019.0101179</doi>
        <lastModDate>2019-11-30T11:51:53.5100000+00:00</lastModDate>
        
        <creator>Talal Bonny</creator>
        
        <creator>Mohamed B. Abdelsalam</creator>
        
        <subject>UAV; drone; quadrotor; android; Arduino; GPS; GPRS; GSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In the past few years, the adoption of drone technology across industries has increased dramatically as more businesses started to recognize its potential, uses, and scale of global reach. This paper proposes a design solution for a smart GPS quadcopter aircraft navigation system, discusses its hardware and software implementation process, and finally analyzes and reports the test results. The flight path of the quadrotor is remotely manipulated via an Android-based Graphical User Interface. This outdoor handheld application allows the operator to select a point of interest through the Google Maps satellite view; consequently, the quadrotor takes off, hovers, and ultimately lands at the destination location. Instructions together with coordinates are sent and received through a web server, which handles the communication between the smartphone and the quadrotor. Experimental results demonstrate successful data communication and autonomous flight control with smooth and stable maneuvering.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_79-Autonomous_Navigation_of_Unmanned_Aerial_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Browser Extension based Hybrid Anti-Phishing Framework using Feature Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101178</link>
        <id>10.14569/IJACSA.2019.0101178</id>
        <doi>10.14569/IJACSA.2019.0101178</doi>
        <lastModDate>2019-11-30T11:51:53.4930000+00:00</lastModDate>
        
        <creator>Swati Maurya</creator>
        
        <creator>Harpreet Singh Saini</creator>
        
        <creator>Anurag Jain</creator>
        
        <subject>Anti-phishing; browser extension; machine learning; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Phishing is one of the socially engineered cybersecurity attacks where the attacker impersonates a genuine and legitimate website source and sends emails with the intention of stealing sensitive personal information. The phishing websites’ URLs are usually spread through emails by luring users to click on them or by embedding a link to a fake website replicating a genuine e-commerce website inside an invoice or other document. The phishing problem is very broad and no single solution exists to mitigate all the vulnerabilities properly. Thus, multiple techniques are often combined and implemented to mitigate specific attacks. The primary objective of this paper is to propose an efficient and effective anti-phishing solution that can be implemented at the client side in the form of a browser extension and is capable of handling real-time scenarios and zero-day attacks. The proposed approach works efficiently for any phishing link carrier mode, as execution upon clicking on any link or manually entering a URL in the browser does not proceed unless the proposed framework approves that the website associated with that URL is genuine. Also, the proposed framework is capable of handling DNS cache poisoning attacks even if the system’s DNS cache is somehow infected. This paper first presents a comprehensive review that broadly discusses the phishing life cycle and available anti-phishing countermeasures. The proposed framework considers the pros and cons of existing methodologies and presents a robust solution by combining the best features to ensure that a fast and accurate response is achieved. The effectiveness of the approach is tested on a real-time dataset consisting of live phishing and legitimate website URLs, and the framework is found to be 98.1% accurate in identifying websites correctly in very little time.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_78-Browser_Extension_based_Hybrid_Anti_Phishing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Budgets Balancing Algorithms for the Projects Assignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101177</link>
        <id>10.14569/IJACSA.2019.0101177</id>
        <doi>10.14569/IJACSA.2019.0101177</doi>
        <lastModDate>2019-11-30T11:51:53.4770000+00:00</lastModDate>
        
        <creator>Mahdi Jemmali</creator>
        
        <subject>Heuristic; scheduling algorithms; project assignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper focuses on the resolution of the project assignment problem. Several heuristics have been developed and proposed in this paper to serve as lower bounds for our studied problem. In a developing country, it is interesting to make an equitable distribution of projects among different cities in order to guarantee equality and regional development. Each project is characterized by its budget. The problem is to find an appropriate schedule to assign all projects to all cities. This schedule seeks to maximize the budget of the city having the minimum budget. In this paper, six heuristics are proposed to carry out the objective of resolving the studied problem. The experimental results show that the algorithm given by the heuristic P6r outperforms all other heuristics cited in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_77-Budgets_Balancing_Algorithms_for_the_Projects_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Pedagogical Model with Kinesthetic-Static Immersion based on the Neuro-Linguistic Programming Approach (NLP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101176</link>
        <id>10.14569/IJACSA.2019.0101176</id>
        <doi>10.14569/IJACSA.2019.0101176</doi>
        <lastModDate>2019-11-30T11:51:53.4630000+00:00</lastModDate>
        
        <creator>Sim&#243;n Choquehuayta Palomino</creator>
        
        <creator>Jos&#233; Herrera Quispe</creator>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Blas Choquehuayta Llamoca</creator>
        
        <subject>Haptic interaction; virtual immersion; learning styles; neuro-linguistic programming; educational data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In this paper, the authors propose a teaching/learning pedagogical model based on an approach that uses neuro-linguistic programming, educational data mining and haptic interaction. It also uses the theory of learning styles, which are identified with data mining techniques, clustering and the Farthest First algorithm, as well as a Neuro-Linguistic Programming test. Depending on the results obtained, the teaching/learning strategies are defined and the activities of an educational coaching are suggested, with the purpose of boosting the students&#39; attention in the classroom and stimulating their communicative and psychomotor skills. The proposal was evaluated with a sample of students of regular basic education, to whom an instrument was applied before and after carrying out the tests of the teaching/learning activities plan. For this purpose, a multifunctional learning kit was constructed, which is a didactic and playful resource applicable to the student&#39;s psychomotor area. The kit contains an application and a hardware device called &quot;Tusuna-pad 1.0&quot;, which was implemented in the Unity game engine and programmed using the C# language. The pedagogical model was validated with the participation of students of Regular Basic Education, considering pedagogical and computational aspects, and the results were duly analyzed. Finally, conclusions and recommendations for future work are established.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_76-Intelligent_Pedagogical_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Atmospheric Light Estimation using Particle Swarm Optimization for Dehazing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101175</link>
        <id>10.14569/IJACSA.2019.0101175</id>
        <doi>10.14569/IJACSA.2019.0101175</doi>
        <lastModDate>2019-11-30T11:51:53.4470000+00:00</lastModDate>
        
        <creator>Padmini T N</creator>
        
        <creator>Shankar. T</creator>
        
        <subject>Hazy images; particle swarm optimization; dark channel prior; transmission; atmospheric light</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>For the past decade, many researchers have been working towards improving the visibility of single hazy images using the haze image model. According to the haze image model, the haze-free image is restored by estimating the atmospheric light and transmission from a hazy image. The objective of this proposed work is to improve perceptibility by decreasing the density of haze in hazy images. The research work was carried out to estimate the optimal value of atmospheric light by tuning the weights using a bio-inspired technique called Particle Swarm Optimization (PSO), with the objective of minimizing the fog density. We have selected a fitness (objective) function that incorporates statistical features to differentiate a clear image from a hazy image. The results are validated against the state of the art by measuring the fog density of the restored image using the Fog Aware Density Evaluator (FADE). Also, the results are validated by measuring the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSI) using ground truth images from the Foggy Road Image Database (FRIDA). This research work demonstrates better results qualitatively and quantitatively.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_75-Atmospheric_Light_Estimation_using_Particle_Swarm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Scrum and Tactic Workflow Management System on Organization Performance (A Study on Animation Studios in Pakistan)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101174</link>
        <id>10.14569/IJACSA.2019.0101174</id>
        <doi>10.14569/IJACSA.2019.0101174</doi>
        <lastModDate>2019-11-30T11:51:53.4170000+00:00</lastModDate>
        
        <creator>Abdul Wahab Khan</creator>
        
        <creator>Usman Khan</creator>
        
        <creator>Maaz Bin Ahmad</creator>
        
        <creator>Farhan Shafique</creator>
        
        <subject>MIS (Management Information System); scrum management system; deployment; performance; animation studios</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Assessing the efficiency impact of a scrum management system is a significant and difficult issue for a researcher. We recommend that the effectiveness of scrum management system applications is best understood by analyzing them at the information-processing level in the animation studio. Since the flow of information within the organizational process is relatively loosely coupled, this constrains the measurement problem. Based on the results of this research, it can be concluded that after implementing the scrum management system framework, the organization becomes more efficient and effective in production activities. Moreover, its performance improves and most issues are resolved easily, leading to better productivity and a better reputation in the market. The study also attempted to emphasize the impact of the scrum management system on the animation studio and to determine how the scrum management system helps an animation studio operate effectively. The research suggests that animation studios should provide flexibility in implementing management information systems, should promote control of the organization&#39;s market, and should acquire suitable software and appropriate programs through communication media in order to meet the global scrum management system business environment amid growing market development and expansion.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_74-Impact_of_Scrum_and_Tactic_Workflow_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Optimization Approach of SQL-to-SPARQL Query Rewriting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101173</link>
        <id>10.14569/IJACSA.2019.0101173</id>
        <doi>10.14569/IJACSA.2019.0101173</doi>
        <lastModDate>2019-11-30T11:51:53.4000000+00:00</lastModDate>
        
        <creator>Ahmed Abatal</creator>
        
        <creator>Mohamed Bahaj</creator>
        
        <creator>Soussi Nassima</creator>
        
        <subject>SQL-to-SPARQL; outer join optimization; query transformation; SQL simplification; query optimization layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In order to ensure interoperability between the semantic web and relational databases, several approaches have been developed for the SQL-to-SPARQL query transformation direction, but all these approaches share the same weakness. In fact, they directly convert the input SQL query to its equivalent SPARQL one without any pre-processing phase enabling the optimization of the input query submitted by users before starting the conversion process. This weakness has motivated us to add a pretreatment phase aiming to optimize the most important SQL statements, which appear to have the biggest impact on the effectiveness of the transformed queries. Our main contribution is to enrich these rewriting systems by adding an optimization layer that integrates a set of simplification rules for Left, Right and Full Outer Join in order to avoid, firstly, unnecessary operations during the conversion process, and secondly, SPARQL queries with high complexity due to Optional patterns obtained from outer joins in this conversion context.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_73-A_Robust_Optimization_Approach_of_SQL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Double Gate Junctionless Tunnel Field Effect Transistor: RF Stability Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101172</link>
        <id>10.14569/IJACSA.2019.0101172</id>
        <doi>10.14569/IJACSA.2019.0101172</doi>
        <lastModDate>2019-11-30T11:51:53.3700000+00:00</lastModDate>
        
        <creator>Veerati Raju</creator>
        
        <creator>Sivasankaran K</creator>
        
        <subject>Junctionless tunnel FET; band to band tunnelling; High-k; RF stability; critical frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper investigates the RF stability performance of the Double Gate Junctionless Tunnel Field Effect Transistor (DGJL-TFET). The impact of geometrical parameters, material and bias conditions on key figures of merit (FoM) such as transconductance (gm) and gate capacitance (Cgg), and on RF parameters such as the Stern Stability Factor (K) and Critical Frequency (fk), is investigated. The analytical model provides the relation between fk and the small-signal parameters, which provides guidelines for optimizing the device parameters. The results show improvement in ON current, gm, ft and fk for the optimized device structure. The optimized device parameters provide guidelines for operating the DGJL-TFET in RF applications.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_72-Performance_Analysis_of_Double_Gate_Junctionless_Tunnel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of the Most Influential Learning Styles used in Adaptive Educational Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101171</link>
        <id>10.14569/IJACSA.2019.0101171</id>
        <doi>10.14569/IJACSA.2019.0101171</doi>
        <lastModDate>2019-11-30T11:51:53.3530000+00:00</lastModDate>
        
        <creator>Othmane ZINE</creator>
        
        <creator>Aziz DEROUICH</creator>
        
        <creator>Abdennebi TALBI</creator>
        
        <subject>Adaptive educational environments; learning style; Felder-Silverman; personalization; learner modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>E-learning has evolved from traditional content delivery approaches to personalized, adaptive and learner-centered knowledge transfer. In customizing the learning experience, learning styles represent key features that cannot be neglected. A learning style designates any representative characteristic of an individual while learning, i.e. a particular way of dealing with a given learning task, the preferred media, or the learning strategies adopted in order to achieve a task. The use of learning styles in adaptive educational environments has become controversial, as there is no empirical evidence of their usefulness. The main objective of our paper is to answer the question “What learning style model is most appropriate for use in adaptive educational environments?”</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_71-A_Comparative_Study_of_the_Most_Influential_Learning_Styles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Approach for Weight Assignment to the Decision Making Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101170</link>
        <id>10.14569/IJACSA.2019.0101170</id>
        <doi>10.14569/IJACSA.2019.0101170</doi>
        <lastModDate>2019-11-30T11:51:53.3200000+00:00</lastModDate>
        
        <creator>Md Zahid Hasan</creator>
        
        <creator>Shakhawat Hossain</creator>
        
        <creator>Mohammad Shorif Uddin</creator>
        
        <creator>Mohammad Shahidul Islam</creator>
        
        <subject>Multiple attribute decision problem; average term frequency; cosine similarity; weight setup for multiple attributes; decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Weight assignment to the decision parameters is a crucial factor in the decision-making process. Any imprecision in assigning weights to the decision attributes may render the whole decision-making process useless and ultimately mislead the decision-makers in finding an optimal solution. Therefore, the attribute weight allocation process should be flawless and rational, rather than assigning random values to the attributes without a proper analysis of the attributes’ impact on the decision-making process. Unfortunately, there is no sophisticated mathematical framework for analyzing an attribute’s impact on the decision-making process, and thus the weight allocation task is accomplished based on human judgment. To fill this gap, the present paper proposes a weight assignment framework that analyzes the impact of an attribute on the decision-making process and, based on that, evaluates each attribute with a justified numerical value. The proposed framework analyzes historical data to assess the importance of an attribute, organizes the decision problems in a hierarchical structure, and uses different mathematical formulas to derive weights at different levels. Weights of mid- and higher-level attributes are calculated based on the weights of root-level attributes. The proposed methodology has been validated with diverse data. In addition, the paper presents some potential applications of the proposed weight allocation scheme.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_70-A_Generic_Approach_for_Weight_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lizard Cipher for IoT Security on Constrained Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101169</link>
        <id>10.14569/IJACSA.2019.0101169</id>
        <doi>10.14569/IJACSA.2019.0101169</doi>
        <lastModDate>2019-11-30T11:51:53.3200000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Rakhmadhany Primananda</creator>
        
        <creator>Kalbuadi Joyo Saputro</creator>
        
        <subject>Lizard cipher; IoT security; Arduino; ANOVA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Over the past decades, security has become the most challenging task in the Internet of Things. Therefore, a convenient hardware cryptographic module is required to provide accelerated cryptographic operations such as encryption. This study investigates the implementation of the Lizard cipher on three Arduino platforms to determine its performance. The study successfully implements the Lizard cipher on the Arduino platform as a constrained device, resulting in 0.98 MB of memory utilization. The execution time of the Lizard cipher is compared among Arduino variants, i.e., Arduino Mega, Arduino Nano, and Arduino Uno, using an ANOVA test. Tukey’s HSD post-hoc test reveals that the execution time is significantly slower on the Arduino Mega than on the Arduino Nano and Arduino Uno. This result will help IoT security engineers select a lightweight cipher that suits the constraints of the target device.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_69-Lizard_Cipher_for_IoT_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Topic Cluster Models for Social Healthcare Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101168</link>
        <id>10.14569/IJACSA.2019.0101168</id>
        <doi>10.14569/IJACSA.2019.0101168</doi>
        <lastModDate>2019-11-30T11:51:53.3070000+00:00</lastModDate>
        
        <creator>K Rajendra Prasad</creator>
        
        <creator>Moulana Mohammed</creator>
        
        <creator>R M Noorullah</creator>
        
        <subject>Multi-viewpoint based metric; traditional topic models; hybrid topic models; topic visualization; health tendency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Social media, and in particular microblogs, are becoming an important data source for disease surveillance, behavioral medicine, and public healthcare. Topic models are widely used in microblog analytics for analyzing and integrating the textual data within a corpus. This paper uses health tweets as microblogs and attempts health data clustering with topic models. Traditional topic models, such as Latent Semantic Indexing (LSI), Probabilistic Latent Semantic Indexing (PLSI), Latent Dirichlet Allocation (LDA), Non-negative Matrix Factorization (NMF), and integer Joint NMF (intJNMF), are used for health data clustering; however, they cannot tractably assess the number of health topic clusters. Proper visualizations are essential for extracting information and identifying trends in the data, as it may include thousands of documents and millions of words. For visualization of topic clouds and health tendency in the document collection, we present hybrid topic models that integrate traditional topic models with VAT. The proposed hybrid topic models, viz., Visual Non-negative Matrix Factorization (VNMF), Visual Latent Dirichlet Allocation (VLDA), Visual Probabilistic Latent Semantic Indexing (VPLSI), and Visual Latent Semantic Indexing (VLSI), are promising methods for assessing the health tendency and visualizing topic clusters from benchmarked and Twitter datasets. Evaluation and comparison of the hybrid topic models are presented in the experimental section, demonstrating their efficiency with different distance measures, including Euclidean distance, cosine distance, and multi-viewpoint cosine similarity.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_68-Hybrid_Topic_Cluster_Models_for_Social_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Multi-Product Aggregate Production Planning using Hybrid Simulated Annealing and Adaptive Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101167</link>
        <id>10.14569/IJACSA.2019.0101167</id>
        <doi>10.14569/IJACSA.2019.0101167</doi>
        <lastModDate>2019-11-30T11:51:53.2730000+00:00</lastModDate>
        
        <creator>Gusti Eka Yuliastuti</creator>
        
        <creator>Agung Mustika Rizki</creator>
        
        <creator>Wayan Firdaus Mahmudy</creator>
        
        <creator>Ishardita Pambudi Tama</creator>
        
        <subject>Aggregate; genetic algorithm; hybrid; production planning; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In aggregate production planning, company stakeholders need a long time because of the many production variables that must be considered so that production can meet consumer demand at minimal cost. The case study concerns a company that produces more than one type of product, so several variables must be considered and considerable computational time is required. A genetic algorithm is applied because it has the advantage of searching a large solution space, but it is often trapped in locally optimal solutions. In this study, the authors propose a new mathematical model in the form of a fitness function aimed at assessing the quality of the solution. To overcome the local optimum problem, the authors refine the approach by combining the genetic algorithm with simulated annealing, a so-called hybrid approach. The function of simulated annealing is to improve every solution produced by the genetic algorithm. The proposed hybrid method is proven to produce better solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_67-Optimization_of_Multi_Product_Aggregate_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep-Learning Model for Predicting and Visualizing the Risk of Road Traffic Accidents in Saudi Arabia: A Tutorial Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101166</link>
        <id>10.14569/IJACSA.2019.0101166</id>
        <doi>10.14569/IJACSA.2019.0101166</doi>
        <lastModDate>2019-11-30T11:51:53.2600000+00:00</lastModDate>
        
        <creator>Maram Alrajhi</creator>
        
        <creator>Mahmoud Kamel</creator>
        
        <subject>LSTM for time-series forecasting; deep learning; RTA; data visualization; interactive map; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Around the world, road traffic accidents (RTAs) cause significant concern for decision makers and researchers in traffic safety. The diversity, rarity, and interconnectivity of historical data on factors causing car accidents point to the need for more focused studies for analyzing, predicting, and visualizing the risk of accidents over the short and long term for preventive purposes. There are many techniques and tools applied to analyze, forecast, and visualize risk. Most RTA studies have applied linear time-series methods to forecasting the risk, with limited studies applying machine-learning and deep-learning techniques, especially in Saudi Arabia. Recently, many global studies have applied long short-term memory (LSTM) networks, which can be used to automatically learn the temporal dependence structures for challenging time-series forecasting problems. This paper presents a tutorial for designing a prototype of an interactive analytical tool based on a multivariate LSTM model for time-series data to predict future car accidents, fatalities, and injuries in the Kingdom of Saudi Arabia (KSA). This interactive tool visualizes the real data with the predicted values regionally in a web browser with Python. The tutorial uses the annual data of the period between 1417 (1996) and 1433 (2013), together with contributing factors such as population, gender, nationality, number of vehicles, and length of road, to generate the input data and predict the future values of accidents, fatalities, and injuries up to the year 1452 (2030). After that, the real and predicted values are visualized regionally on an interactive map that represents the degree of risk. Finally, the paper discusses the evaluation and future utilization of the proposed prototype in the field of road safety.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_66-A_Deep_Learning_Model_for_Predicting_and_Visualizing_the_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Warning Device in Risk Situations for Children with Hearing Impairment at Low Cost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101165</link>
        <id>10.14569/IJACSA.2019.0101165</id>
        <doi>10.14569/IJACSA.2019.0101165</doi>
        <lastModDate>2019-11-30T11:51:53.2430000+00:00</lastModDate>
        
        <creator>Kevin Rodriguez-Villarreal</creator>
        
        <creator>Zumaeta-Mori Jhon</creator>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Hearing disability; device; low cost; algorithm; communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Hearing impairment is the partial or total loss of hearing. There are approximately 34 million hearing-impaired children in the world. The equipment used as a means of communication to improve interaction with society is very expensive, so in this study an electronic device was built with the ability to recognize certain words configured as an emergency message. This equipment will be used as a basic means of communication for hearing-impaired children at a very low price. The equipment consists of a transmitter and a receiver, which communicate over Wi-Fi 802.11 at distances between 0 m and 95 m using low-power electronic devices and recent technology such as WeMos D1 mini Lite boards. The device was tested on approximately 20 caregivers of hearing-impaired children, obtaining an approval rating of approximately 74%. This is the first step in research that we plan to continue in order to reduce health gaps and improve communication for children with disabilities. Our group works for preventive health, reducing health gaps in the most vulnerable population.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_65-Development_of_Warning_Device_in_Risk_Situations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-learning Benchmarking Adoption: A Case Study of Sur University College</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101164</link>
        <id>10.14569/IJACSA.2019.0101164</id>
        <doi>10.14569/IJACSA.2019.0101164</doi>
        <lastModDate>2019-11-30T11:51:53.2270000+00:00</lastModDate>
        
        <creator>Saleem Issa Al Zoubi</creator>
        
        <creator>Ahmad Issa Alzoubi</creator>
        
        <subject>e-learning; DOI; IS success; net benefit; extent of usage; LMS; benchmarking; quality assurance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>As an integral tool nowadays, e-learning presents a fresh paradigm in the fields of education and management. The effectiveness of e-learning in improving learning and teaching methods has been proven. Accordingly, countless works have been carried out to gain comprehension of the adoption of e-learning, particularly in terms of its extent of use. Furthermore, a new model of e-learning success based on McLean &amp; DeLone’s Information System Success (IS Success) and the Diffusion of Innovation (DOI), together with the effects of e-learning on students’ performance, are comprehensively highlighted. Appositely, e-learning benchmarking adoption among students is characterized and measured in this study by integrating DOI with the IS Success attributes of such adoption. In this quantitative study, data were gathered from the students enrolled in SUC. The results indicate that the variables of relative advantage, complexity, system quality, information quality, and service quality have a significant linkage to the adoption of e-learning, and that net benefit is significantly correlated with it. With regard to the methods used for examining the adoption of e-learning in particular, future studies could benefit from combining quantitative and qualitative methods.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_64-e_Learning_Benchmarking_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time and Frequency Analysis of Heart Rate Variability Data in Heart Failure Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101163</link>
        <id>10.14569/IJACSA.2019.0101163</id>
        <doi>10.14569/IJACSA.2019.0101163</doi>
        <lastModDate>2019-11-30T11:51:53.1970000+00:00</lastModDate>
        
        <creator>Galya N Georgieva-Tsaneva</creator>
        
        <subject>Heart rate variability; time domain analysis; frequency domain analysis; Welch periodogram; heart failure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The paper presents a mathematically based analysis of the heart rate variability of two groups of cardiology records: healthy individuals and patients diagnosed with heart failure. The main objective of the study is to perform a parametric evaluation of the cardiovascular system of the human body using time-domain and frequency-domain analysis of heart rate variability. Distinguishing between diseased and healthy individuals is an interesting challenge that contemporary researchers are working on. Cardiology records obtained through continuous Holter monitoring (24 hours) were used to address the issues in this study. The obtained results show significantly reduced values of most of the studied parameters in the time domain and the frequency domain in patients with heart failure compared to healthy individuals. The low values of the studied parameters indicate low heart rate variability and poor overall health status. The graphical results of the two study groups are shown when applying the modified Welch periodogram. These graphical results give a visual idea of the variability of the time series in healthy and diseased individuals. The obtained numeric and graphical results show that heart failure patients can be distinguished from healthy individuals. The applied mathematical methods for studying heart rate variability can be used as an aid in the cardiology practice of doctors.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_63-Time_and_Frequency_Analysis_of_Heart_Rate_Variability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mutual Authentication Security Scheme in Fog Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101161</link>
        <id>10.14569/IJACSA.2019.0101161</id>
        <doi>10.14569/IJACSA.2019.0101161</doi>
        <lastModDate>2019-11-30T11:51:53.1800000+00:00</lastModDate>
        
        <creator>Gohar Rahman</creator>
        
        <creator>Chuah Chai Wen</creator>
        
        <subject>Fog computing; mutual authentication; man in the middle attack; key exchange; ban logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The fog paradigm is a new and emerging technology that extends the services of cloud computing to the edge network. This paradigm aims to provide rich resources near edge devices and remove deficiencies of cloud computing such as latency. However, this paradigm is distributed in nature and does not guarantee the trustworthiness and good behavior of edge devices. Thus, authentication and key exchange are significant challenges facing this new paradigm. Researchers have worked on different authentication and key exchange protocols. Recently, Maged Hamada Ibrahim proposed an authentication scheme that permits a fog user to authenticate mutually with a fog server under the authority of a cloud service provider. Alongside this, Amor et al. proposed an anonymous mutual authentication scheme in which the fog user and fog server authenticate each other without disclosing the user’s real identity, using a public-key cryptosystem. However, we demonstrate that Ibrahim’s scheme does not preserve user anonymity and hence is exposed to a man-in-the-middle attack. The scheme of Amor et al. is computationally complex, as it uses a public-key cryptosystem that has low throughput and requires large memory, which is not suitable for fog computing connecting Internet of Things devices that have small memory and require high throughput. Therefore, to overcome the aforementioned security problems and Internet of Things constraints, an improved mutual authentication security scheme based on the Advanced Encryption Standard and a hashed message authentication code in fog computing is proposed. Our scheme provides mutual authentication between Internet of Things devices and fog servers. We prove that the proposed improved scheme provides secure mutual authentication using the widely accepted Burrows-Abadi-Needham logic. In this study, the performance, security, and functionality properties are analyzed and compared with existing and related mutual authentication schemes. Our scheme performs better in security, functionality, and communication and computation cost compared with the existing schemes.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_61-Mutual_Authentication_Security_Scheme_in_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Milk Purity Recognition Software through Image Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101162</link>
        <id>10.14569/IJACSA.2019.0101162</id>
        <doi>10.14569/IJACSA.2019.0101162</doi>
        <lastModDate>2019-11-30T11:51:53.1800000+00:00</lastModDate>
        
        <creator>Alvarado-D&#237;az Witman</creator>
        
        <creator>Meneses-Claudio Brian</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Milk; adulterated milk; milk with water; milk analysis; image processing; classification learner</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Currently in Peru, per capita milk consumption is 87 kg per year; however, the Food and Agriculture Organization of the United Nations (FAO) recommends a consumption of 120 kg per person. When the industry acquires milk from small livestock suppliers, it does not analyze the milk before buying it, so there is a high risk that the milk is adulterated with water. In this sense, this paper proposes an alternative way of preliminarily detecting the presence of water in milk, using only a laser and a photograph, which greatly reduces the costs of milk analysis. Milk contains different nutrients, vitamins, and minerals that are beneficial for people, so it is very important to know whether it is adulterated, in order to prevent diseases. This paper presents an alternative to the existing methods for the analysis of milk; the presented method applies the Matlab Classification Learner and the fine K-Nearest Neighbors (KNN) algorithm, with which a success rate of 95.4% was obtained.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_62-Milk_Purity_Recognition_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Model for Medical Data Classification using Gene Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101160</link>
        <id>10.14569/IJACSA.2019.0101160</id>
        <doi>10.14569/IJACSA.2019.0101160</doi>
        <lastModDate>2019-11-30T11:51:53.1670000+00:00</lastModDate>
        
        <creator>Kosaraju Chaitanya</creator>
        
        <creator>Rachakonda Venkatesh</creator>
        
        <creator>Thulasi Bikku</creator>
        
        <subject>Classification; Hadoop framework; biomedical documents; feature selection; gene features; medical datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>To address new issues in the medical field, novel approaches for managing relevant features using genomes are considered; the outcome of interest is analyzed using gene sub-sequences. In the implementation of the model, the MEDLINE and PubMed archives are given as inputs to the proposed model. A large number of MeSH terms with genes and proteins are utilized to characterize the patterns of a large number of medical documents from a large set of records. Standard datasets with different characteristics are used for the examination study. The characteristics and inadequacies of different techniques are noted. Feature selection techniques are presented with respect to data types and domain attributes by applying proper rules. Feature context extraction through named-entity identification is an essential task of online medical document classification for knowledge discovery databases. The parameters are identified to compare with other models implemented on these datasets, and the results prove that the proposed method is more effective than existing models. The primary aim of the proposed ensemble learning models is to classify high-dimensional information for gene/protein-based disease prediction in light of substantial biomedical databases. The proposed model uses an efficient ranking algorithm to select the relevant attributes from the set of all attributes; the attributes are given to the classifier to improve the accuracy based on the users’ interest.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_60-An_Efficient_Model_for_Medical_Data_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capturing Software Security Practices using CBR: Three Case Studies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101159</link>
        <id>10.14569/IJACSA.2019.0101159</id>
        <doi>10.14569/IJACSA.2019.0101159</doi>
        <lastModDate>2019-11-30T11:51:53.1330000+00:00</lastModDate>
        
        <creator>Ikram Elrhaffari</creator>
        
        <creator>Ounsa Roudies</creator>
        
        <subject>CBR; project features; case base; e-shop; mobiling; intranet; mutualize; security practices; security requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Generally, software security can be regarded as one of the most important issues in the software engineering field, since it may affect the effectiveness of a software product due to various technological vulnerabilities and threats. Most traditional software security approaches provide security activities throughout the software development lifecycle (SDLC), from requirements to design, implementation, testing, and deployment. This paper focuses on embedding security concerns in the SDLC using a bottom-up approach based on the case-based reasoning (CBR) paradigm. Thus, we study three high-security-focused cases of software projects, namely “e-shop”, “Mobiling”, and “intranet”, using a structured case study method. Then, we populate these three cases in the proposed framework, which is an excerpt of the case project base. Furthermore, this paper identifies the specificity of each case, discusses the completeness of the proposed framework, and proposes suggestions for improvement. Finally, usage scenarios are defined sustaining the use of the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_59-Capturing_Software_Security_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Password and Salt Combination Scheme To Improve Hash Algorithm Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101158</link>
        <id>10.14569/IJACSA.2019.0101158</id>
        <doi>10.14569/IJACSA.2019.0101158</doi>
        <lastModDate>2019-11-30T11:51:53.1200000+00:00</lastModDate>
        
        <creator>Sutriman</creator>
        
        <creator>Bambang Sugiantoro</creator>
        
        <subject>Security; hash; hashing scheme; salting; password</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In system security, hashes play an important role in protecting data, keeping it secure and ensuring that access rights are managed for those entitled to them. To increase the strength of hash algorithms, various methods are employed, one of which is the salting technique. Salt is usually attached as a prefix or postfix to the plaintext before hashing. But applying salt as a prefix or postfix is not enough: there are many ways to recover the plaintext from the resulting ciphertext. This research discusses combination schemes other than prefix and postfix for combining the password and the salt to increase the security of hash algorithms. There is no truly secure system and no algorithm without loopholes, but this technique strengthens the security of the algorithm, giving more time if an attacker wants to break into the system. To measure the strength of each combination scheme, a tool called Hashcat is used. In this way, the best composition for applying salt to passwords is determined.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_58-Analysis_of_Password_and_Salt_Combination_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of IoT Messaging Protocol Implementation for E-Health Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101157</link>
        <id>10.14569/IJACSA.2019.0101157</id>
        <doi>10.14569/IJACSA.2019.0101157</doi>
        <lastModDate>2019-11-30T11:51:53.0870000+00:00</lastModDate>
        
        <creator>M Zorkany</creator>
        
        <creator>K. Fahmy</creator>
        
        <creator>Ahmed Yahya</creator>
        
        <subject>E-health; IoT; IoT protocol; CoAP; MQTT and remote patient monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Nowadays, e-health and healthcare applications in the Internet of Things (IoT) are growing rapidly, ranging from remote monitoring of a patient&#39;s parameters at home to monitoring patients during daily activities at work, in transport, and elsewhere. Patients can thus be monitored anywhere outside hospitals and clinical settings; this technology can save lives and reduce the number of emergency visits to hospitals. There has been great progress in, and opportunity for, IoT-related e-health systems. Most IoT e-health platforms consist of three main parts: client nodes (patient or doctor), an IoT server, and an IoT messaging protocol. One challenge in designing e-health systems over IoT is choosing the most suitable IoT messaging protocol for e-health applications. In this paper, an IoT remote patient and e-health monitoring system was designed to monitor physiological medical signals of patients based on the two most popular IoT messaging protocols, MQTT and CoAP. These medical signals can include parameters such as heart rate, electrocardiograph (ECG), patient temperature, and blood pressure. This practical comparison between CoAP and MQTT aims to identify the protocol most suitable for e-health systems. The proposed approach was evaluated on the most significant protocol parameters, such as capability, efficiency, communication method, and message delay. Practical and simulation results show the performance of the proposed e-health system over IoT for different network infrastructures with different loss percentages.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_57-Performance_Evaluation_of_IoT_Messaging_Protocol_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Render Farm for Highly Realistic Images in a Beowulf Cluster using Distributed Programming Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101156</link>
        <id>10.14569/IJACSA.2019.0101156</id>
        <doi>10.14569/IJACSA.2019.0101156</doi>
        <lastModDate>2019-11-30T11:51:53.0700000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Patricia Condori</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Distributed programming; computational parallelism; Beowulf cluster; high-efficiency computing; render farm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Nowadays, photorealistic images are demanded for the realization of scientific models, so rendering tools are used to convert three-dimensional models into highly realistic images. The problem with generating photorealistic images arises when the three-dimensional model becomes larger and more complex: the time to generate an image grows considerably due to the limitations of hardware resources. To address this problem, a render farm is implemented, consisting of a set of computers interconnected by a high-speed network, in which each participating computer renders a strip of the global image, with the intention of reducing the processing time of highly complex computational images. The system was implemented on a high-performance Beowulf cluster of the Universidad de Ciencias y Humanidades using a total of 18 computers. To demonstrate the efficiency of the render farm implementation, scalability tests were performed using a 360&#176; equirectangular model with a total of 67 million pixels. The work achieves highly complex renderings in less time, to the benefit of the research.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_56-Render_Farm_for_Highly_Realistic_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Sanitization Framework for Computer Hard Disk Drive: A Case Study in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101155</link>
        <id>10.14569/IJACSA.2019.0101155</id>
        <doi>10.14569/IJACSA.2019.0101155</doi>
        <lastModDate>2019-11-30T11:51:53.0570000+00:00</lastModDate>
        
        <creator>Nooreen Ashilla Binti Yusof</creator>
        
        <creator>Siti Norul Huda Binti Sheikh Abdullah</creator>
        
        <creator>Mohamad Firham Efendy bin Md Senan</creator>
        
        <creator>Nor Zarina binti Zainal Abidin</creator>
        
        <creator>Monaliza Binti Sahri</creator>
        
        <subject>Data sanitization; anti-forensics; digital forensics; wiping technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In the digital forensics field, data wiping is considered an anti-forensic technique. From another perspective, data wiping, or data sanitization, is the technique used to ensure that deleted data cannot be accessed by any unauthorized person. This paper introduces a process for data sanitization of computer hard disk drives. The process was proposed and tested using commercial data sanitization tools, with multiple tests conducted at the accredited digital forensic laboratory of CyberSecurity Malaysia. The data sanitization process was performed using the overwriting method provided by state-of-the-art data sanitization tools. Each sanitization tool offers options for the wiping (overwriting) process: either a single-pass write or a multi-pass write. Logical data checking of the hard disk sectors was performed before and after the data disposal process for proper verification, ensuring that every sector had been replaced by the sanitization bit pattern corresponding to the selected wiping technique. In conclusion, verification of data sanitization will improve the process of ICT asset disposal.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_55-Data_Sanitization_Framework_for_Computer_Hard_Disk_Drive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Long Short-Term Memory Predictions with Local Average of Nearest Neighbors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101154</link>
        <id>10.14569/IJACSA.2019.0101154</id>
        <doi>10.14569/IJACSA.2019.0101154</doi>
        <lastModDate>2019-11-30T11:51:53.0400000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Deymor Centty</creator>
        
        <subject>Long Short-Term Memory; Local Average of Nearest Neighbors; univariate time series prediction; LANN; LANNc; Gated Recurrent Unit; GRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The study presented in this paper aims to improve the accuracy of meteorological time series predictions made with the recurrent neural network known as Long Short-Term Memory (LSTM). To achieve this, instead of only adjusting the LSTM architecture as in related work, it is proposed to adjust the LSTM results using the univariate time series imputation algorithm known as Local Average of Nearest Neighbors (LANN), as well as LANNc, a variation of LANN that avoids the leftward bias of the synthetic data generated by LANN. The results show that both LANN and LANNc improve the accuracy of the predictions generated by LSTM, with LANN being superior to LANNc. Likewise, on average, the best LANN and LANNc configurations outperform the predictions reached by another recurrent neural network, the Gated Recurrent Unit (GRU).</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_54-Improving_Long_Short_Term_Memory_Predictions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Contributing to the Success of Information Security Management Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101153</link>
        <id>10.14569/IJACSA.2019.0101153</id>
        <doi>10.14569/IJACSA.2019.0101153</doi>
        <lastModDate>2019-11-30T11:51:53.0230000+00:00</lastModDate>
        
        <creator>Mazlina Zammani</creator>
        
        <creator>Rozilawati Razali</creator>
        
        <creator>Dalbir Singh</creator>
        
        <subject>Information security; information security management; success factors; key factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Information Security Management (ISM) concerns shielding the integrity, confidentiality, availability, authenticity, reliability and accountability of the organisation’s information from unauthorised access in order to ensure business continuity and customers’ confidence. The importance of information security (IS) today should be given due attention. Recognising its importance, organisations have devoted wide efforts to protecting their information: they establish information security policies, processes, and procedures, and reengineer their organisational structures to align with ISM principles. Regardless of these efforts, security incidents continue to occur in many organisations. This phenomenon shows that the current implementation of ISM is still ineffective, largely because organisations are unaware of the factors contributing to the success of ISM. Thus, the objective of this paper is to identify ISM success factors and their elements through a large-scale survey. The survey involved 243 practitioners from statutory bodies and public and private organisations in Malaysia. The results indicate that top management, the IS coordinator team, the ISM team, the IS audit team, employees, third parties, IS policy, IS procedures, resource planning, competency development and awareness, risk management, business continuity management, IS audit and IT infrastructure are the factors that contribute to the success of ISM implementation. These factors shall guide practitioners in planning and refining ISM implementation in their organisations.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_53-Factors_Contributing_to_the_Success_of_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Adaptive Semi-Unsupervised Weighted Oversampling using Sparsity Factor for Imbalanced Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101152</link>
        <id>10.14569/IJACSA.2019.0101152</id>
        <doi>10.14569/IJACSA.2019.0101152</doi>
        <lastModDate>2019-11-30T11:51:52.9930000+00:00</lastModDate>
        
        <creator>Haseeb Ali</creator>
        
        <creator>Mohd Najib Mohd Salleh</creator>
        
        <creator>Kashif Hussain</creator>
        
        <subject>Data mining; imbalanced data; minority; majority; oversampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>With the incredible surge in data volumes, problems associated with data analysis have become increasingly complicated. In data mining algorithms, imbalanced data is a profound problem in the machine learning paradigm. It arises from the disparate nature of data, in which one class with a large number of instances forms the majority class, while the other class with only a few instances is the minority class. The classifier model biases towards the majority class and neglects the minority class, which may be the most essential class, resulting in costly misclassification of the minority class in real-world scenarios. The imbalanced data problem is commonly mitigated by re-sampling techniques, among which oversampling has proven more effective than undersampling. This study proposes an Improved Adaptive Semi-Unsupervised Weighted Oversampling (IA-SUWO) technique with a sparsity factor, which efficiently solves both between-class and within-class imbalance problems. Along with avoiding over-generalization and overfitting and removing noise from the data, this technique appropriately increases the number of synthetic instances in the minority sub-clusters. A comprehensive experimental setup is used to evaluate the performance of the proposed approach. The comparative analysis reveals that IA-SUWO performs better than existing baseline oversampling techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_52-Improved_Adaptive_Semi_Unsupervised_Weighted_Oversampling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep MRI Segmentation: A Convolutional Method Applied to Alzheimer Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101151</link>
        <id>10.14569/IJACSA.2019.0101151</id>
        <doi>10.14569/IJACSA.2019.0101151</doi>
        <lastModDate>2019-11-30T11:51:52.9770000+00:00</lastModDate>
        
        <creator>Hanane Allioui</creator>
        
        <creator>Mohamed Sadgal</creator>
        
        <creator>Aziz Elfazziki</creator>
        
        <subject>Computer-Assisted Diagnosis (CAD); Alzheimer&#39;s disease (AD); Image segmentation; Machine learning; Convolutional Neural Networks (CNN); Magnetic Resonance Imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Learning techniques are particularly needed for the detection of brain diseases that are not visible to the eye. Learning-based methods rely on MRI medical images to reconstruct a solution for detecting aberrant values or areas in the human brain. In this article, we present a method that automatically performs brain segmentation to detect brain damage and diagnose Alzheimer&#39;s disease (AD). To take advantage of the benefits of 3D while reducing complexity and computational cost, we present a 2.5D method for locating brain inflammation and detecting its classes. Our proposed system is evaluated on a public dataset. Preliminary results indicate the reliability and effectiveness of our Alzheimer&#39;s disease detection system and demonstrate that our method advances the state of the art in Alzheimer&#39;s disease diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_51-Deep_MRI_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring of Rainfall Level Ombrometer Observatory (Obs) Type using Android Sharp GP2Y0A41SK0F Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101150</link>
        <id>10.14569/IJACSA.2019.0101150</id>
        <doi>10.14569/IJACSA.2019.0101150</doi>
        <lastModDate>2019-11-30T11:51:52.9630000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Yunita Dwi Andriliana</creator>
        
        <creator>Son Ali Akbar</creator>
        
        <creator>Sunardi</creator>
        
        <creator>Subhas Mukhopadhyay</creator>
        
        <creator>Ismail Rakip Karas</creator>
        
        <subject>Arduino; infrared sensor; android; observatory; rainfall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Rainfall measurements are now generally automatic, yet few parties have carried out research using automatic rain gauge instruments. The rainfall level data obtained can be used to detect flooding earlier, reducing the impact of natural disasters. The working principle is that when rain falls, the water is collected and its height is detected. The Sharp GP2Y0A41SK0F sensor is read by an Arduino Uno through a signal conditioning circuit; the reading is then forwarded to a water pump that empties the tube, and the data are stored. The ESP8266 Wi-Fi module sends the data to an Android device. When the collector is full, the maximum reading of 53.2 indicates heavy rain. After the data were obtained, the standard error of the measurement instrument was determined over 7 days. The model was built using a funnel with a diameter of 14 cm and a height of 26 cm. Based on the design calculations, the developed measuring instrument is able to measure rainfall up to 26000 mm of rain height.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_50-Monitoring_of_Rainfall_Level_Ombrometer_Observatory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Micro-Services Model for Vehicle Routing using Ant Colony Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101149</link>
        <id>10.14569/IJACSA.2019.0101149</id>
        <doi>10.14569/IJACSA.2019.0101149</doi>
        <lastModDate>2019-11-30T11:51:52.9470000+00:00</lastModDate>
        
        <creator>Asmaa ROUDANE</creator>
        
        <creator>Mohamed YOUSSFI</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <subject>Road traffic; road traffic management systems; intelligent systems; complex systems; graphs; real-time; optimization methods; micro-services; ant colonies; vehicle routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In this paper, we propose a new model of vehicle routing based on micro-services, using the principle of selection and composition of paths. In this model, each micro-service is responsible for a road resource, a step in the driver&#39;s journey, implementing a specific objective of the road course. The micro-services are deployed in a cloud architecture in multiple instances, under a system of load scaling and fault tolerance. Drivers&#39; requests are sent to a proxy micro-service, with abstract structures of road courses represented by directed graphs. The proxy micro-service is responsible for analyzing the request to determine the driver&#39;s profile and context within the journey, in order to select the micro-service responsible for providing information on the most appropriate road resource. The meta-heuristic optimization method for road courses used in the proposed approach is based on the ant colony algorithm; we describe an adaptation of this optimization method that proposes to the driver, at each stage of the journey, the optimal resource to exploit.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_49-Semantic_Micro_Services_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Parking: Multi-agent Smart Parking Platform for Dynamic Pricing and Reservation Sharing Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101148</link>
        <id>10.14569/IJACSA.2019.0101148</id>
        <doi>10.14569/IJACSA.2019.0101148</doi>
        <lastModDate>2019-11-30T11:51:52.9300000+00:00</lastModDate>
        
        <creator>Bassma Jioudi</creator>
        
        <creator>Aroua Amari</creator>
        
        <creator>Fouad Moutaouakkil</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>Smart cities; smart mobility; smart parking; dynamic pricing policy; cruising traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Parking is a key element of a sustainable urban mobility policy. It plays a fundamental role in travel planning and transport management, as the foremost vector of modal choice, but also as a potential means of freeing up public space. In this article we define the smart parking concept as an application of smart mobility, present a historical analysis of the evolution of the smart parking framework, and show a statistical analysis of published patent applications in this field around the world using the ORBIT database. We then propose a new smart parking architecture based on multi-agent features. Finally, we introduce the e-Parking system, a platform to improve the driver experience in crowded cities. It provides real-time parking prices and offers reservation and guidance services. In addition, the system assigns an optimal parking spot to a driver based on the user’s requirements, combining proximity to the destination, parking cost, and dwell time, while ensuring fair sharing of public space among users and improving traffic conditions. Our approach is based on a dynamic pricing policy, and our scheme is suitable for mixed-usage areas, as it considers the presence of reserved and non-reserved drivers in the same parking area.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_48-E_Parking_Multi_Agent_Smart_Parking_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges in Wireless Body Area Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101147</link>
        <id>10.14569/IJACSA.2019.0101147</id>
        <doi>10.14569/IJACSA.2019.0101147</doi>
        <lastModDate>2019-11-30T11:51:52.9170000+00:00</lastModDate>
        
        <creator>Muhammad Asam</creator>
        
        <creator>Tauseef Jamal</creator>
        
        <creator>Muhammad Adeel</creator>
        
        <creator>Areeb Hassan</creator>
        
        <creator>Shariq Aziz Butt</creator>
        
        <creator>Aleena Ajaz</creator>
        
        <creator>Maryam Gulzar</creator>
        
        <subject>WBAN (Wireless Body Area Network); denial of service attacks; resource management; cooperation; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Wireless Body Area Network (WBAN) refers to short-range wireless communication in the vicinity of, or inside, a human body. WBAN is an emerging solution to cater to the needs of local and remote healthcare facilities. Medical and non-medical applications are under consideration for providing healthy and gratifying services to humanity. Because communicating data from the body is critical, WBAN faces many challenges that must be tackled for the safety of life and the benefit of the user. WBAN is also a favorite playground for attackers due to its use in various applications. This article provides a systematic overview of the challenges of WBAN from communication and security perspectives.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_47-Challenges_in_Wireless_Body_Area_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive e-Learning System with Simulation Capabilities for Understanding of Complex Equations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101146</link>
        <id>10.14569/IJACSA.2019.0101146</id>
        <doi>10.14569/IJACSA.2019.0101146</doi>
        <lastModDate>2019-11-30T11:51:52.8830000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>SCORM; e-learning; simulation; Physics subject; action script</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>A comprehensive e-learning system with simulation capabilities for the understanding of complex equations is proposed. Through experiments with the proposed e-learning system, it is found that it is 14.5% more effective and comprehensive than a conventional e-learning system lacking mathematical expressions of processing and simulation capabilities, although the time required for learning increased by 18.2%. In addition, the proposed system not only helps students understand subjects and complex equations but can also increase students’ motivation to learn.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_46-Comprehensive_e_learning_System_with_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Node Relocation Techniques for Wireless Sensor Networks: A Short Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101145</link>
        <id>10.14569/IJACSA.2019.0101145</id>
        <doi>10.14569/IJACSA.2019.0101145</doi>
        <lastModDate>2019-11-30T11:51:52.8830000+00:00</lastModDate>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Shahzad Ali</creator>
        
        <creator>Amin Al Awady</creator>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <subject>Node failure; sensor node; post-deployment; on-demand relocation; internode connectivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Sensor nodes in a sensor network often operate in harsh and challenging environments, which leads to frequent node failures. Failure of sensor nodes causes partitioning of the network connectivity. For sensor network applications to be effective, inter-sensor connectivity is vital; some sensors also sustain the flow of information from the network to otherwise unreachable end users. The network can be split into multiple disconnected blocks and cease working due to physical damage or onboard energy depletion. To deal with such scenarios, a plethora of node repositioning techniques have been proposed in the literature. This article discusses recent, up-to-date research on dynamic sensor repositioning in WSNs and classifies sensor repositioning methods into on-demand and post-deployment repositioning, based on whether the optimization is accomplished at deployment time or while the network is functioning.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_45-Node_Relocation_Techniques_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Methodology for Optimizing Menus in Personalized Nutrition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101144</link>
        <id>10.14569/IJACSA.2019.0101144</id>
        <doi>10.14569/IJACSA.2019.0101144</doi>
        <lastModDate>2019-11-30T11:51:52.8700000+00:00</lastModDate>
        
        <creator>Valery I Karpov</creator>
        
        <creator>Nikolay M. Portnov</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Yury I. Sidorenko</creator>
        
        <creator>Sergey M. Petrov</creator>
        
        <creator>Nadezhda M. Podgornova</creator>
        
        <creator>Mikhail Yu. Sidorenko</creator>
        
        <creator>Sergey V. Shterman</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <subject>Personalized nutrition; menu optimization; nutrition management; practical nutrition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In a personalized nutrition management system, the central practical task is to compile an optimized menu that provides the best value against a multi-criteria set of assessments: nutrient composition, cost (economic acceptability), energy value, food intolerances, individual preferences, etc. To solve the problem, a combined optimization method is used that includes a preliminary ordering of options and controlled enumeration. The result of developing a personalized nutrition menu is a diet that meets the needs of a particular person, taking into account their nutritional status, individual preferences and intolerances, and medical prescriptions. Because the outputs of dishes and recipes take discrete values, the task of optimizing the human diet is in practice a combinatorial integer problem, and it is solved by computer modeling and controlled enumeration of options. The effectiveness of the optimization is evaluated by external experts. The developed menu design system makes it possible to repeatedly re-solve the personalized menu optimization problem when the input data change, for reasons such as changed dietary tasks, the introduction of new products, or changed food preferences. With this approach, the system is a personalized food model that is regularly used for rational planning and reduces labor costs (compared to the “manual” compilation of a menu in a computer system) by 2-3 times. An additional use of such a model is the targeted design of functional product formulations, in which the properties of a product are evaluated not in isolation but as part of a specific diet.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_44-Automated_Methodology_for_Optimizing_Menus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Age Detection for Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101143</link>
        <id>10.14569/IJACSA.2019.0101143</id>
        <doi>10.14569/IJACSA.2019.0101143</doi>
        <lastModDate>2019-11-30T11:51:52.8370000+00:00</lastModDate>
        
        <creator>Manal Alghieth</creator>
        
        <creator>Jawaher Alhuthail</creator>
        
        <creator>Kholod Aldhubiay</creator>
        
        <creator>Rotan Alshowaye</creator>
        
        <subject>Age detection; deep learning; ear features; CNN; control social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Over recent years, there has been immense interest in age detection due to its growing adoption in various sectors, such as government regulation, security control, and human-computer interaction. Popular biometric features such as the face and fingerprints can change or be modified over time. The ear, however, has a stable structure that does not change with time and has unique features that satisfy the requirements of a biometric trait. This research presents a detailed analysis of extracting features from the human ear by applying deep learning techniques. In particular, a Convolutional Neural Network (CNN) with multiple layers is applied to large datasets to extract and classify features. The proposed methodology enlarged the dataset by collecting additional private children&#39;s data and, together with amendments to the architecture of the selected neural network, achieved a high accuracy of 98.75% compared to previous studies. This research can be used to help control the content of social media by detecting whether a user&#39;s age group is under or above 18.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_43-Smart_Age_Detection_for_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of a Visual Output Approach for Programming via the Application of Cognitive Load Theory and Constructivism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101142</link>
        <id>10.14569/IJACSA.2019.0101142</id>
        <doi>10.14569/IJACSA.2019.0101142</doi>
        <lastModDate>2019-11-30T11:51:52.8200000+00:00</lastModDate>
        
        <creator>Marini Abu Bakar</creator>
        
        <creator>Muriati Mukhtar</creator>
        
        <creator>Fariza Khalid</creator>
        
        <subject>Introductory programming; CS1; novices; Java programming; learning; objects-first</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Programming is a skill of the future. However, decades of experience and research indicate that the teaching and learning of programming are full of problems and challenges. As such, educators and researchers are always on the lookout for suitable approaches and paradigms that can be adopted for the teaching and learning of programming. In this article, a visual output approach is proposed as suitable, based on current millennials’ affinity for graphics and visuals. The proposed VJava Module is developed via the application of two main learning theories: cognitive load theory and constructivism. There are two submodules comprising eight chapters that cover the topics Introduction to Programming and Java, Object Using Turtle Graphics, Input and Output, Repetition Structure, Selection Structure, More Repetition Structures, Nested Loops and Arrays. To enable Java programs to produce graphical and animated outputs, the MJava library was developed and integrated into this module. The module was validated by three Java programming experts and an instructional design expert on its content, design and usability aspects.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_42-The_Development_of_a_Visual_Output_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Distributed Greenhouse Gases Monitoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101141</link>
        <id>10.14569/IJACSA.2019.0101141</id>
        <doi>10.14569/IJACSA.2019.0101141</doi>
        <lastModDate>2019-11-30T11:51:52.8070000+00:00</lastModDate>
        
        <creator>Adela Puscasiu</creator>
        
        <creator>Alexandra Fanca</creator>
        
        <creator>Dan-Ioan Gota</creator>
        
        <creator>Silviu Folea</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>Air quality; air monitoring systems; noise monitoring systems;  air pollution systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Monitoring air quality is a major task due to the direct impact of pollution on human health. Pollution has been further aggravated by the developments of the last decades: traffic growth, traffic noise in cities, growth of urban areas, increased energy consumption, industrialization, and economic development. Global warming and acid rain are among the results of these factors; thus, it is essential to monitor air quality. This survey presents research and applications related to air quality monitoring aimed at detecting, measuring, collecting and processing data aggregated from sensors, such as gas sensors for sensing the concentration of gases such as CO2 (carbon dioxide), as well as relative humidity, temperature, TVOC (Total Volatile Organic Compounds), PM (particulate matter) and noise level. Based on the processed data, users are able to see polluted areas on a map. The paper presents the state of the art of air monitoring systems, noise monitoring systems, and air pollution systems. The paper also proposes a distributed greenhouse gas monitoring system for pollution measurement and control.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_41-A_Survey_on_Distributed_Greenhouse_Gases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Partition Ciphering System: A Difficult Problem Based Encryption Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101139</link>
        <id>10.14569/IJACSA.2019.0101139</id>
        <doi>10.14569/IJACSA.2019.0101139</doi>
        <lastModDate>2019-11-30T11:51:52.7730000+00:00</lastModDate>
        
        <creator>Ziani Fatima Ezzahra</creator>
        
        <creator>Omary Fouzia</creator>
        
        <subject>Encryption scheme; partition problem; frequency analysis; avalanche effect; confusion; diffusion; statistical properties</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>In this article, a new encryption scheme called the Partition Ciphering System is proposed to adapt and process the message according to the partition problem. The objective of this system, which can be applied as a standalone system or as a building block in a larger system, is to achieve confidentiality and maintain a balance between ones and zeros in the output, so that attacks such as frequency cryptanalysis are avoided and good entropy is achieved. First, the authors describe the partition problem together with an adapted version. Secondly, the encryption and decryption processes are provided. Next, a comparison with other encryption schemes is presented in terms of statistical properties (using the DIEHARDER battery), security analysis and performance. From the results, the proposed cryptosystem is resistant to frequency analysis and shows good entropy in the output. Moreover, compared to the Advanced Encryption Standard, it exhibits random behavior and good confusion and diffusion (avalanche effect). It also displays better performance and resistance to brute-force attack on the key.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_39-Partition_Ciphering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of Business Intelligence and Analytics Integration for Organizational Performance Management: A Case Study in Public Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101140</link>
        <id>10.14569/IJACSA.2019.0101140</id>
        <doi>10.14569/IJACSA.2019.0101140</doi>
        <lastModDate>2019-11-30T11:51:52.7730000+00:00</lastModDate>
        
        <creator>Jamaiah Yahaya</creator>
        
        <creator>Nur Hani Zulkifli Abai</creator>
        
        <creator>Aziz Deraman</creator>
        
        <creator>Yusmadi Yah Jusoh</creator>
        
        <subject>Business intelligence; business analytics; organisational performance management; framework; case study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The literature shows that several works have been conducted on the implementation of BI in performance management, but the analytical aspects have not been considered. Business analytics is the activity of applying analytics to strengthen strategic and operational business activities. While performance management is important for determining organisational success, in the public sector it has become more challenging due to the generality of public sector objectives and the different levels of stakeholders involved. Existing frameworks were built separately, which limits the implementation of Business Intelligence and Analytics as an integrated component and cannot meet current performance management needs and expectations. The objective of this study is to establish a framework that integrates elements of business intelligence, analytics and performance management for comprehensive implementation in the public sector. This study identifies four main components of the integrated framework: Process, People, Governance and Ability. Each component consists of several key elements and sub-elements. The proposed framework is validated and implemented through a real case study conducted in one organisation in Malaysia. The implementation demonstrates the suitability and practicality of the framework in a real environment.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_40-The_Implementation_of_Business_Intelligence_and_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Correction of the Grammatical Case Endings Errors in Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101138</link>
        <id>10.14569/IJACSA.2019.0101138</id>
        <doi>10.14569/IJACSA.2019.0101138</doi>
        <lastModDate>2019-11-30T11:51:52.7430000+00:00</lastModDate>
        
        <creator>Chouaib MOUKRIM</creator>
        
        <creator>Abderrahim TRAGHA</creator>
        
        <creator>El Habib BENLAHMER</creator>
        
        <subject>Automatic correction; ontology; syntactic errors; case endings; natural language processing; Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Syntax plays a key role in natural language processing, but it does not always occupy an important position in applications. The main objective of this article is to solve the problem of grammatical case ending errors produced by Arabic learners, as well as certain common errors. Arabic can be considered more complex than English or French. It does not have vowel letters; diacritic signs (vowels) are placed above or below the letters, and these diacritic signs are omitted in most Arabic texts. This induces both grammatical and lexical ambiguities in Arabic. The present paper describes an automatic correction of this type of error using the “Stanford Parser” together with an ontology containing the rules of the Arabic language. We segment the text into sentences, extract the annotations of each word with the syntactic relations produced by our parser, and then process the resulting relations with our ontology. Finally, we compare the original sentence with the corrected one in order to detect the error. The implemented system achieved a detection rate of about 94%. It is concluded that the approach is clearly promising, given these results and the limited number of available Arabic grammar checkers.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_38-The_Correction_of_the_Grammatical_Case_Endings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Controlling High PAPR in Vehicular OFDM-MIMO using Downlink Optimization Model under DCT Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101137</link>
        <id>10.14569/IJACSA.2019.0101137</id>
        <doi>10.14569/IJACSA.2019.0101137</doi>
        <lastModDate>2019-11-30T11:51:52.7270000+00:00</lastModDate>
        
        <creator>Ahmed Ali</creator>
        
        <creator>Esraa Eldesouky</creator>
        
        <subject>Zadoff-Chu Sequence; Discrete Cosine Transform (DCT); Peak-to-Average Power Ratio (PAPR); Vehicular Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The persistent challenges of the radio channel in vehicular networks entail the use of multiple antennas, known as Multiple-Input Multiple-Output (MIMO). In order to obtain an efficient multi-user MIMO system, the power of the radio frequency (RF) components should be optimized. Practical solutions are essential to lower the complexity of vehicular nodes and support a robust Orthogonal Frequency Division Multiplexing (OFDM) scheme with simplified equalization at the receiver. In this paper, Zadoff-Chu Sequence (ZCS) pre-coding is employed along with the Discrete Cosine Transform (DCT) to control and optimize the high peak power, targeting transmission over multi-user MIMO downlink vehicular channels. Finally, convex optimization is utilized to guarantee peak-to-average power ratio (PAPR) minimization. Simulation results show that the proposed model can reduce the high PAPR compared to least-squares pre-coding. At the same time, it proves its effectiveness and accuracy, as it enhances the transmission quality over the multi-user MIMO-OFDM downlink vehicular channel.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_37-Controlling_High_PAPR_in_Vehicular_OFDM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PathGazePIN: Gaze and Path-based Authentication Entry Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101136</link>
        <id>10.14569/IJACSA.2019.0101136</id>
        <doi>10.14569/IJACSA.2019.0101136</doi>
        <lastModDate>2019-11-30T11:51:52.7130000+00:00</lastModDate>
        
        <creator>Bayan M AlBaradi</creator>
        
        <creator>Amani M. AlTowayan</creator>
        
        <creator>Maram M. AlAnazi</creator>
        
        <creator>Sadaf Ambreen</creator>
        
        <creator>Dina M. Ibrahim</creator>
        
        <subject>User authentication; shoulder surfing attacks; PIN; gaze-based; security; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Nowadays, smartphones are widely used, and people use them to store sensitive and private information. Password authentication is the most common authentication method; however, a password can be disclosed through a shoulder surfing attack, since users enter their passwords in public places and tend to make their passwords easy to remember, which increases the vulnerability to attacks. Many authentication schemes have been proposed to prevent shoulder surfing attacks, but they remain vulnerable to shoulder surfing, have accuracy problems, or lack usability. In this paper, we propose a gaze-based PIN entry method, which we call PathGazePIN. It utilizes the user&#39;s eye movement to enter the password. The main idea is to allow the user to authenticate by following numbers that move along fixed paths on the screen, using two authentication interfaces: a random interface and a sorted interface. The results show that the proposed system increases security against shoulder surfing attacks as well as usability and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_36-PathGazePIN_Gaze_and_Path_based_Authentication_Entry_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Comparison of Machine Learning Algorithms for Classification of Software Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101135</link>
        <id>10.14569/IJACSA.2019.0101135</id>
        <doi>10.14569/IJACSA.2019.0101135</doi>
        <lastModDate>2019-11-30T11:51:52.6970000+00:00</lastModDate>
        
        <creator>Law Foong Li</creator>
        
        <creator>Nicholas Chia Jin-An</creator>
        
        <creator>Zarinah Mohd Kasirun</creator>
        
        <creator>Chua Yan Piaw</creator>
        
        <subject>Machine learning; ensemble algorithms; requirements classification; functional requirements; non-functional requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Intelligent software engineering has emerged in recent years to address some difficult problems in requirements engineering. Requirements are crucial for software development. Moreover, the classification of natural language user requirements into functional and non-functional requirements is a fundamental challenge, as it defines the fulfillment criteria of users’ expected needs and wants. Therefore, this article aims to explore and compare the random forest algorithm and the gradient boosting algorithm, determining their accuracy in classifying functional and non-functional requirements through experiments. Random forest and gradient boosting are ensemble algorithms in machine learning that combine the decisions of several base models to improve prediction performance. Experimental results show that the gradient boosting algorithm yields better prediction performance than the random forest algorithm when classifying non-functional requirements. However, the random forest algorithm is more accurate at classifying functional requirements.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_35-An_Empirical_Comparison_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LCAHASH-1.1: A New Design of the LCAHASH System for IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101134</link>
        <id>10.14569/IJACSA.2019.0101134</id>
        <doi>10.14569/IJACSA.2019.0101134</doi>
        <lastModDate>2019-11-30T11:51:52.6800000+00:00</lastModDate>
        
        <creator>Anas Sadak</creator>
        
        <creator>Bouchra Echandouri</creator>
        
        <creator>Fatima Ezzahra Ziani</creator>
        
        <creator>Charifa Hanin</creator>
        
        <creator>Fouzia Omary</creator>
        
        <subject>Information security; hash function; cellular automata; IoT; data integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The present paper presents an extension of the LCAHASH system. LCAHASH is a previously developed lightweight hash algorithm based on cellular automata. It was developed as an alternative to existing hash functions to ensure data integrity and to meet the security requirements of Internet of Things devices. Due to the limited storage and computational capabilities of these devices, the algorithms they use should be as efficient and robust as possible. In this contribution, we propose an enhanced version of the original LCAHASH algorithm to improve its efficiency and robustness. A description of the proposed system is included, along with a security analysis and the results of a statistical battery of tests (Dieharder). These results show that the proposed system exhibits good statistical and cryptographic features.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_34-LCAHASH_1.1_A_New_Design_of_the_LCAHASH_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DesCom: Routing Decision using Estimation Time in VDTN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101133</link>
        <id>10.14569/IJACSA.2019.0101133</id>
        <doi>10.14569/IJACSA.2019.0101133</doi>
        <lastModDate>2019-11-30T11:51:52.6500000+00:00</lastModDate>
        
        <creator>Adnan Ali</creator>
        
        <creator>Jinlong Li</creator>
        
        <creator>Aqsa Tanveer</creator>
        
        <creator>Maryam Batool</creator>
        
        <creator>Nimra Choudhary</creator>
        
        <subject>Estimation time; VDTN; routing; vehicular delay- tolerant network; ONE simulator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>VDTN was proposed as a disruption-tolerant network established on the paradigm of the delay-tolerant network. VDTN uses vehicular nodes to convey messages, as it permits sparse opportunistic network connectivity, characterized by low node density where vehicular traffic is sporadic and no end-to-end paths exist between nodes. The message bundle is directed from the sender to the receiver node based on the routing protocol decision, and routing protocols take decisions based on different metrics such as time to live, location, remaining buffer size, meeting probability, etc. In this paper, a routing protocol named DesCom is proposed for the vehicular delay-tolerant network under a highly suppressed and sparse environment. DesCom takes its decision based on message TTL, transmission rate, and estimation time; the estimation time was calculated in our previous work. The protocol decides whether to direct the message to the requesting node or to search for a more suitable node to carry the data bundles. After running multiple simulations with different numbers of vehicles and comparing DesCom with other routing protocols, it is concluded that DesCom has the least buffer time, low latency, and good delivery probability.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_33-DesCom_Routing_Decision_using_Estimation_Time_in_VDTN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Feature Selection and Sentiment Analysis Technique in Issues of Propaganda</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101132</link>
        <id>10.14569/IJACSA.2019.0101132</id>
        <doi>10.14569/IJACSA.2019.0101132</doi>
        <lastModDate>2019-11-30T11:51:52.6330000+00:00</lastModDate>
        
        <creator>Siti Rohaidah Ahmad</creator>
        
        <creator>Muhammad Zakwan Muhammad Rodzi</creator>
        
        <creator>Nurlaila Syafira Shapiei</creator>
        
        <creator>Nurhafizah Moziyana Mohd Yusop</creator>
        
        <creator>Suhaila Ismail</creator>
        
        <subject>Sentiment analysis; feature selection; swarm algorithm; propaganda</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Propaganda is a form of communication used to influence communities, or people in general, to push forward an agenda toward a certain goal. Nowadays, different means are used to distribute propaganda, including postings on social media, illustrations, cartoons and animations, articles, and TV and radio shows. This paper focuses on election propaganda. Candidates in elections use propaganda as a form of communication to channel and deliver messages through social media. Sentiment analysis (SA) is then used to identify the positive and negative elements within the propaganda itself by analysing the related documents, social media, articles or forums. This paper presents the various techniques used by previous researchers on issues of propaganda using SA, including feature selection to remove irrelevant features and sentiment methods to identify sentiment in documents. Feature selection is a dominant aspect of sentiment analysis because textual content has high dimensionality, which can jeopardize the interpretation of SA classification. This paper also explores several SA techniques to identify sentiments in issues of propaganda. This study has also attempted to identify the use of swarm algorithms as a suitable feature selection method in SA for propaganda issues.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_32-A_Review_of_Feature_Selection_and_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Impact of Forest Cover at River Bank on Flood Spread by using Predictive Analytics and Satellite Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101131</link>
        <id>10.14569/IJACSA.2019.0101131</id>
        <doi>10.14569/IJACSA.2019.0101131</doi>
        <lastModDate>2019-11-30T11:51:52.6200000+00:00</lastModDate>
        
        <creator>Muhammad Aneeq Yusuf</creator>
        
        <creator>Muhammad Khalid Khan</creator>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Muhammad Umer</creator>
        
        <creator>Rafi Ullah Afridi</creator>
        
        <subject>Floods; forests; predictive modeling; satellite imagery; environmental modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Floods have been a recurring problem for a number of countries around the world, including Pakistan. It is believed that densely populated forests at river banks can prevent floods from spreading towards settlements and farmlands. The role of forests in flood spread has been an area of research for a while, but the role of predictive modeling in this area is yet to be investigated in detail. In this study, we use predictive analytics and satellite imagery to develop an environmental model that can predict flood spread by considering the forest cover at the river bank and the month of the year as parameters. We use satellite images of Dera Ghazi Khan, an area of Pakistan, from the USGS Landsat program. These images comprise a section of the Indus River and its adjoining areas, allowing us to analyze the forest bank at various sections of the river. We developed and trained our predictive model on the satellite imagery data and tested it on a separate dataset to determine the error percentage. The model showed significant promise, predicting flood spread with an average accuracy above 80%.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_31-Analyzing_the_Impact_of_Forest_Cover.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Support System for Employee Candidate Selection using AHP and PM Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101130</link>
        <id>10.14569/IJACSA.2019.0101130</id>
        <doi>10.14569/IJACSA.2019.0101130</doi>
        <lastModDate>2019-11-30T11:51:52.6030000+00:00</lastModDate>
        
        <creator>Soleman </creator>
        
        <subject>Prospective employees; decision support systems; Analytic Hierarchy Process (AHP); profile matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>PT. Prima Grafika is a digital printing company looking for the best prospective employees. Based on research and observations, the company uses only administrative data in selecting prospective employees. Processing large amounts of data and documents with identical applicant names slows down data processing and exceeds the time limit set for selecting the best employees. The solution to this problem is a web-based recruitment system: a decision support system combining the Analytic Hierarchy Process (AHP) method, which weights and analyses priority criteria through pairwise comparisons between pairs of criteria so that all criteria are covered, with the Profile Matching (PM) method for ranking. This study uses three testing methods: Black Box Testing, in which the system tested by 60 respondents was accepted by the company; User Acceptance Testing (UAT), in which 10 respondents gave an actual score of 779 out of an ideal score of 900, or 86.1%, meaning the system as a whole is acceptable; and the DeLone and McLean model test, in which 10 respondents gave an actual score of 726 out of an ideal score of 850, or 85.7%, which is very good. The ranking results were: A001 - Cantika Dewi = 4.12, A004 - Arif Yulkianto = 3.98, A002 - Eprriadi = 3.913, A003 - Rika Novriani = 3.467.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_30-Decision_Support_System_for_Employee_Candidate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Steganography using Combined Nearest and Farthest Neighbors Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101129</link>
        <id>10.14569/IJACSA.2019.0101129</id>
        <doi>10.14569/IJACSA.2019.0101129</doi>
        <lastModDate>2019-11-30T11:51:52.5870000+00:00</lastModDate>
        
        <creator>Farhana Sharmin</creator>
        
        <creator>Muhammad Ibrahim Khan</creator>
        
        <subject>Steganography; cryptography; Pixel Value Difference (PVD); image processing; cover image; information security; stego image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Security is invariably a significant concern during communication. With the ease of communication, there is always a pending threat of intrusion. Steganography is one way to achieve security by concealing confidential information within a more innocent-looking medium such as an image, audio, or video. In this paper, a new technique is proposed that uses the relationship of a pixel with its Nearest Neighbor and Farthest Neighbor to hide secret information in that pixel. The cover image is divided into 2x2 non-overlapping blocks. According to the vulnerability of the relationship among the pixels, blocks are labelled as Stable or Unstable. A Stable block hides ‘k’ secret bits while an Unstable block hides ‘n’ secret bits. 25 different combinations of ‘k’ and ‘n’ are examined to evaluate the performance of the proposed method. The 2k method is applied to improve the quality of the stego image. The experimental results show that the proposed technique hides a significant number of secret bits with high PSNR. When compared with other existing methods, the proposed method achieves much higher visual quality.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_29-Image_Steganography_using_Combined_Nearest_and_Farthest_Neighbors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Factors Affecting Knowledge Management Practices in Public Sectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101128</link>
        <id>10.14569/IJACSA.2019.0101128</id>
        <doi>10.14569/IJACSA.2019.0101128</doi>
        <lastModDate>2019-11-30T11:51:52.5570000+00:00</lastModDate>
        
        <creator>Subashini Ganapathy</creator>
        
        <creator>Zulkefli Mansor</creator>
        
        <creator>Kamsuriah Ahmad</creator>
        
        <subject>Critical success factors; influencing factors; knowledge management; public sector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Knowledge Management (KM) is a systematic approach to creating, sharing, using and managing information effectively to sustain knowledge in both public and private organizations. It helps organizations make better decisions in order to achieve their goals and increase productivity. However, many public organizations still face challenges in adopting knowledge management practices compared to private organizations due to a lack of awareness. They are not aware of influencing factors such as people, process, and technology. Therefore, this paper identifies the influencing factors that contribute to successful KM practices in public sectors. This study employs a quantitative approach, distributing a set of questionnaires to 83 IT practitioners in public organizations. The 63 returned responses were analyzed using the Rasch Measurement Model. The findings indicate a lack of participation amongst staff in practicing efficient knowledge management, because they are still not ready to accept changes to the new system and lack exposure and behavioral readiness. In addition, looking at critical success factors such as human resources (HR), there is a lack of encouragement, such as rewards and recognition, given to employees who practice KM in the organization. As a result, this paper highlights the most influential factors for effective knowledge management practices in terms of people, process and technology. We hope that the results can be used as a guideline to rectify the challenges in KM practices, especially in public organizations.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_28-Investigating_Factors_affecting_Knowledge_Management_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Hand Gesture to Text using Leap Motion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101127</link>
        <id>10.14569/IJACSA.2019.0101127</id>
        <doi>10.14569/IJACSA.2019.0101127</doi>
        <lastModDate>2019-11-30T11:51:52.5400000+00:00</lastModDate>
        
        <creator>Nur Aliah Nadzirah Jamaludin</creator>
        
        <creator>Ong Huey Fang</creator>
        
        <subject>Dynamic hand gesture; leap motion; American sign language; artificial neural network; cross-correlation; geometric template matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper presents a prototype for converting dynamic hand gestures to text using a device called Leap Motion. It is one of the motion tracking technologies that can recognise hand gestures without the need to wear any external devices or to capture any images or videos. In this study, five custom dynamic hand gestures of American Sign Language were created with Leap Motion to measure the recognition accuracy of the proposed prototype using the Geometric Template Matching, Artificial Neural Network, and Cross-Correlation algorithms. The experimental results showed that the prototype achieved recognition accuracy of more than 90% in the training phase and about 60% in the testing phase.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_27-Dynamic_Hand_Gesture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Crescent Sand Dunes in High Resolution Satellite Images using a Support Vector Machine for Allometry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101126</link>
        <id>10.14569/IJACSA.2019.0101126</id>
        <doi>10.14569/IJACSA.2019.0101126</doi>
        <lastModDate>2019-11-30T11:51:52.5230000+00:00</lastModDate>
        
        <creator>M A Azzaoui</creator>
        
        <creator>L. Masmoudi</creator>
        
        <creator>H. El Belrhiti</creator>
        
        <creator>I. E. Chaouki</creator>
        
        <subject>Image segmentation; support vector machines; high resolution satellite images; remote sensing; sand dunes; desertification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The study of sand dune movement is essential to understand and prevent the desertification phenomenon, yet collecting data from the field is a labor-intensive task, as deserts usually contain a large number of sand dunes. We propose to use computer vision and machine learning algorithms, combined with remote sensing and specifically high resolution satellite images, to collect data about the position and characteristics of moving sand dunes. We focused on the fastest moving sand dunes, called barchans, which are threatening the settlements in the region of Laayoune, Morocco. We developed a process with three stages: in the first stage, we used an image processing approach with cascading Haar features to detect dune locations; in the second stage, we used a support vector machine for the segmentation of contours; and in the third stage, we used an algorithm to measure the allometric features of barchan dunes. We explored the collected data and found relevant correlations between dune length, width, and horn sizes, which could be used as key indicators of dune growth and progression. This study is therefore of high interest for urban planners and geologists who study sand dunes and require technical methods, based on machine learning and computer vision, to collect large amounts of data from satellite images in order to understand sand dune progression and counter desertification problems. The use of cascading Haar features provided good accuracy, and the use of support vector machines, along with the high resolution satellite images, provided good precision for the segmentation of barchan dune contours, allowing the collection of morphological features which provide significant information on barchan sand dune dynamics.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_26-Segmentation_of_Crescent_Sand_Dunes_in_High_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Platform for Supporting Stream Ciphers Over Multi-core Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101125</link>
        <id>10.14569/IJACSA.2019.0101125</id>
        <doi>10.14569/IJACSA.2019.0101125</doi>
        <lastModDate>2019-11-30T11:51:52.4930000+00:00</lastModDate>
        
        <creator>Sally Almanasra</creator>
        
        <subject>Stream ciphers; parallel computing; multithreading; cryptographic primitives; multi-core processors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Designing secure and fast cryptographic primitives is one of the critical issues in the current era. Several domains, including the Internet of Things (IoT), military and banking, require fast and secure data encryption over public channels. Most existing stream ciphers are designed to work sequentially and therefore do not utilize the available computing power. Other stream ciphers are designed around complex mathematical problems, which makes these ciphers slower due to the complex computations. For this purpose, a novel parallel platform for enhancing the performance of stream ciphers is presented. The platform is designed to work efficiently on multi-core processors using multithreading techniques. The architecture of the platform relies on independent components that can operate over multiple cores available on the corresponding communication ends. Two groups of stream ciphers were considered as case studies in our experiments: the first category includes stream ciphers of a sequential design, while the second category includes parallelizable stream ciphers. Performance tests and analysis show that the parallel platform was able to dramatically maximize the encryption throughput of the selected stream ciphers. The enhancement in encryption throughput is relative to the constructional design of the stream ciphers. Parallelized stream ciphers (Salsa20, DSP-128, and ECSC-128) were able to achieve higher throughput than the sequentially designed stream ciphers.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_25-Parallel_Platform_for_Supporting_Stream_Ciphers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traditional Learning Problems of Computing Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101124</link>
        <id>10.14569/IJACSA.2019.0101124</id>
        <doi>10.14569/IJACSA.2019.0101124</doi>
        <lastModDate>2019-11-30T11:51:52.4770000+00:00</lastModDate>
        
        <creator>Saima Siraj</creator>
        
        <creator>Akhtar Hussain Jalbani</creator>
        
        <creator>Muhammad Ibrahim Channa</creator>
        
        <subject>Traditional learning problems; computing education; statistical analysis of questionnaire</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper aims to report the traditional learning problems of computing students. To identify the problems, a questionnaire was framed to focus on the problems and issues faced by students while interacting in a traditional learning environment. The study tested respondents’ attitudes with a five-point Likert scale and was analyzed using the NCSS program. Traditional learning problems were summarized by computing the mean, median, mode, standard deviation and IQR. This research highlights the problems of different computing courses, particularly problems with basic programming concepts, inability to write code, language barriers and confidence; besides these, it highlights various academic and non-academic problems. Reliability analysis using Cronbach’s Alpha gave an encouraging reliability coefficient of 80%.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_24-Traditional_Learning_Problems_of_Computing_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of C&#250;k Voltage Regulator Parameters for Better Performance and Better Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101123</link>
        <id>10.14569/IJACSA.2019.0101123</id>
        <doi>10.14569/IJACSA.2019.0101123</doi>
        <lastModDate>2019-11-30T11:51:52.4630000+00:00</lastModDate>
        
        <creator>Walid Emar</creator>
        
        <creator>Zakariya Al-omari</creator>
        
        <creator>Omar A. Saraereh</creator>
        
        <subject>C&#250;k regulator; smoothing filters; double-phase connection; overall current ripple</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper discusses the harmonic distortion and voltage-current ripple minimization of a C&#250;k regulator based on the design optimization of its parameters using a multichannel connection with uncoupled smoothing filters. The main attention is focused on the analysis and simulation of the fundamental and two-phase parallel connections of the C&#250;k regulator with uncoupled smoothing filters. A detailed analysis has been done to show the benefits of uncoupled smoothing filters and their positive impact on balancing the energy compensation between the capacitors and inductors of the double-phase C&#250;k regulator compared to the conventional one. As a result, the dc source current values do not swing from positive to negative and vice versa as in the fundamental connection, which avoids any saturation problem for the regulator. In general, the multichannel parallel connection of C&#250;k regulators with uncoupled smoothing filters has inherent benefits such as good current distribution characteristics, insensitivity to component tolerance, reduction of parasitic effects and relief of current control complexity. Specifically, by employing the double-phase connection with uncoupled smoothing filters, the overall current fluctuation can be effectively reduced by more than 25% compared to that of the simple connection. Moreover, it is shown that the output voltage ripple of the double-phase connection is also reduced by more than 25% from that of the fundamental connection. Computer simulations using Simplorer 7 or Matlab and Excel have been done to validate the concepts.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_23-Optimization_of_Cuk_Voltage_Regulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Course Recommender Model based on Collaborative Filtering for Course Registration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101122</link>
        <id>10.14569/IJACSA.2019.0101122</id>
        <doi>10.14569/IJACSA.2019.0101122</doi>
        <lastModDate>2019-11-30T11:51:52.4470000+00:00</lastModDate>
        
        <creator>Norazuwa Binti Salehudin</creator>
        
        <creator>Hasan Kahtan</creator>
        
        <creator>Mansoor Abdullateef Abdulgabber</creator>
        
        <creator>Hael Al-bashiri</creator>
        
        <subject>Course registration; recommender system; collaborative filtering; academic advisory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Students face issues and challenges in making decisions for course registration. Traditionally, students rely on suggestions from academic advisers prior to course registration. Therefore, students spend a considerable amount of time waiting for advisers to help them register for the right subjects. However, the number of students rises yearly, thereby increasing the responsibilities of lecturers. Moreover, academic advisers experience constraints in analysing data during consultations for course registration. Therefore, this study proposes a course recommender model based on collaborative filtering. Collaborative filtering is adopted because it provides recommendations based on students’ performance in previous subjects. A dataset from the Information &amp; Communication Technology Centre (ICT) of the University Malaysia Pahang is used to evaluate the proposed model. The evaluation is conducted based on two experiments. The first experiment is performed by calculating the difference between actual and predicted scores to verify prediction accuracy. Results show that the average of the mean absolute error of the proposed model is 0.319, which is highly accurate. The second experiment is conducted by comparing the recommendations of the proposed model with those of experts to validate the course recommendation accuracy of the proposed model. Results of the second experiment show that the proposed model has a 91.06% accuracy rate with an error rate of 8.94%. In addition, average precision is 0.68 and recall is 0.724, which are considered accurate. Therefore, the proposed model can play a vital role in assisting students and academic advisers to recommend the right courses during registration, thereby overcoming the limitations of academic advising.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_22-A_Proposed_Course_Recommender_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Key-Ordered Decisional Learning Parity with Noise (DLPN) Scheme for Public Key Encryption Scheme in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101121</link>
        <id>10.14569/IJACSA.2019.0101121</id>
        <doi>10.14569/IJACSA.2019.0101121</doi>
        <lastModDate>2019-11-30T11:51:52.4300000+00:00</lastModDate>
        
        <creator>Tarasvi Lakum</creator>
        
        <creator>B.Thirumala Rao</creator>
        
        <subject>LPN; DLPN; RSA; key-ordered; time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>A variation of decisional learning parity with noise (DLPN), named the key-ordered DLPN security algorithm, is presented in this work. The proposed scheme extends DLPN to an even-odd-order scheme that depends on the probability distribution of odd and even bits for encryption, where the odd and even bits are the input integer values for the key generation algorithm. Because the probability distribution of odd and even bits is ordered based on the key generation, resolving the odd and even bits is the solution to the DLPN attacker problem; thus, the proposed scheme provides stronger correctness and security proofs. The proposed system is evaluated against the learning parity with noise (LPN), DLPN and RSA algorithms by measuring encryption time, public key bits and ciphertext bits.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_21-A_Key_Ordered_Decisional_Learning_Parity_with_Noise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Traceable and Transparent Supply Chain Management for Agri-food Sector in Malaysia using Blockchain Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101120</link>
        <id>10.14569/IJACSA.2019.0101120</id>
        <doi>10.14569/IJACSA.2019.0101120</doi>
        <lastModDate>2019-11-30T11:51:52.4000000+00:00</lastModDate>
        
        <creator>Kok Yong Chan</creator>
        
        <creator>Johari Abdullah</creator>
        
        <creator>Adnan Shahid Khan</creator>
        
        <subject>Supply chain; blockchain; consensus algorithm; traceability; transparency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper presents a framework for a traceable and transparent supply chain management (SCM) system for the agri-food sector in Malaysia using blockchain technology. Numerous researchers believe that the current SCM system has several weak points, especially when multiple enterprise resource planning (ERP) systems utilize centralized SCM; thus, data transparency and traceability are limited. This study hypothesizes that if blockchain technology is applied to the transparency and traceability of SCM, the above limitation can be minimized, as blockchain technology works in a distributed manner. This research uses “pepper” as the agri-food domain. The research also recommends that a permissioned blockchain is a better fit than a permissionless blockchain.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_20-A_Framework_for_Traceable_and_Transparent_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Prediction of Properties of Benzene using MP4 Method Executed on High Performance Computing with Heterogeneous Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101119</link>
        <id>10.14569/IJACSA.2019.0101119</id>
        <doi>10.14569/IJACSA.2019.0101119</doi>
        <lastModDate>2019-11-30T11:51:52.3830000+00:00</lastModDate>
        
        <creator>Norma Alias</creator>
        
        <creator>Riadh Sahnoun</creator>
        
        <creator>Nur Fatin Kamila Zanalabidin</creator>
        
        <subject>MP4; parallel computing; homogenous platform; properties of benzene; high performance computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>High computational complexity, high computational cost and the need to deal with big data motivate this study of the physical and chemical properties of benzene. Given the limitations of system memory, processor speed and huge time-step computation, we propose the implementation of the parallel Gaussian suite of programs, particularly the program dealing with high-order M&#248;ller–Plesset perturbation theory, on a high performance homogeneous computing platform (HPC) for predicting the physical and chemical properties of small to medium size molecules, such as benzene, the subject of the present work. Besides the high accuracy of the geometrical parameters that MP4 simulation can offer, orbital shapes, HOMO-LUMO energy gaps and spectral properties of the molecule are among the properties that can be obtained with accurate prediction. In order to achieve high performance indicators, we execute the program in the multiple instruction, multiple data stream (MIMD) paradigm using a homogeneous processor architecture. At the end of this paper, it is shown that the parallel algorithm of the Gaussian program using the Linda software can be executed and is well suited to both homogeneous and heterogeneous processors. The performance evaluation is essentially based on run time, temporal performance, effectiveness, efficiency, and speedup.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_19-On_the_Prediction_of_Properties_of_Benzene.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of LoRa Network for Environmental Monitoring System in Bidong Island Terengganu, Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101117</link>
        <id>10.14569/IJACSA.2019.0101117</id>
        <doi>10.14569/IJACSA.2019.0101117</doi>
        <lastModDate>2019-11-30T11:51:52.3530000+00:00</lastModDate>
        
        <creator>Nur Aziemah Azmi Ali</creator>
        
        <creator>Nurul Adilah Abdul Latiff</creator>
        
        <creator>Idrus Salimi Ismail</creator>
        
        <subject>LPWAN; LoRa; environmental monitoring; WSN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The recent Low Power Wide Area Network (LPWAN) wireless communication technology brings huge potential to monitoring systems, as the integration between sensor applications and LPWAN technology contributes to greater efficiency, higher productivity, and better quality of life in many sectors such as healthcare, smart city applications, and monitoring systems. The project implements LPWAN while delving into the performance of LoRa technologies in the development of an environmental monitoring system on Bidong Island, located in the state of Terengganu, Malaysia. An Arduino Uno microcontroller stacked with a LoRa module shield via an SPI connection, in conjunction with a few sensors, works as the end device to capture environmental data and send it to the gateway over long range at a very low data rate with low power consumption for remote monitoring. In this paper, we identify the spreading factor that works best in the island region by varying the spreading factor from SF6 to SF12. The results show that SF11 provides the best packet delivery ratio and the most stable Received Signal Strength Indicator (RSSI) compared to the other SFs. Moreover, we evaluate the external factors that caused packet loss and provide ways to improve the signal quality and achieve optimal results for signal transmission and communication performance.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_17-Performance_of_LoRa_Network_for_Environmental_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vulnerability Analysis of E-voting Application using Open Web Application Security Project (OWASP) Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101118</link>
        <id>10.14569/IJACSA.2019.0101118</id>
        <doi>10.14569/IJACSA.2019.0101118</doi>
        <lastModDate>2019-11-30T11:51:52.3530000+00:00</lastModDate>
        
        <creator>Sunardi</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Pradana Ananda Raharja</creator>
        
        <subject>Vulnerability; e-voting; open web application security project framework; attacker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper reports on security concerns in the E-voting system used for the election of village heads. Analysis of the system and server uses two different tools to determine the accuracy of vulnerability scanning based on the OWASP Framework. Scanning with the ZAP tool found vulnerabilities at the following risk levels: one high, three medium, and eleven low. The Arachni tool found vulnerabilities at the following risk levels: one high, three medium, and two low. ZAP gives a more complex vulnerability view than Arachni. The most critical finding in this E-voting system is XSS, which impacts clients and can be exploited by attackers to bypass security. Directory Traversal allows attackers to access directories and execute commands outside of the web server’s base directory. The 2018 Hiscox Cyber Readiness Report, covering several countries such as the United States, Britain, Germany, Spain, and the Netherlands, notes that attackers target the most vulnerable security holes, such as Injection, Broken Authentication, Sensitive Data Exposure, XXE, Security Misconfiguration, XSS, Insecure Deserialization, Using Components with Known Vulnerabilities, and Insufficient Logging and Monitoring. Cyberattacks can threaten the stability of a country and disturb other sectors. E-voting, as part of an electronic government system, needs to be audited in terms of security, since such flaws can disrupt the system.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_18-Vulnerability_Analysis_of_E_voting_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Context based Content Delivery for M-Learning Scenarios using ANFIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101116</link>
        <id>10.14569/IJACSA.2019.0101116</id>
        <doi>10.14569/IJACSA.2019.0101116</doi>
        <lastModDate>2019-11-30T11:51:52.3200000+00:00</lastModDate>
        
        <creator>Sudhindra B Deshpande</creator>
        
        <creator>Shrinivas R.Mangalwede</creator>
        
        <subject>Personalization; adaptive learning; e-learning; distance learning; mobile learning; M-Learning; ANFIS; fuzzy system; neural networks; context awareness; content adaption; content delivery; expert system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Today&#39;s world is heavily connected through the internet and mobile devices in everyday life, offering unprecedented possibilities for learning with mobility. This kind of learning, possible at any point in the world, is called &quot;M-learning&quot; (Mobile Learning). Meeting learners&#39; needs is central to the current scenario, in which collaborative electronic and mobile education systems continue to evolve. Every learner&#39;s needs differ in terms of context, content, learning style, speed of learning, and even preferences and places. Mobile learning enables the learner to learn while moving, at any time and in any place. Learning styles have evolved along with advances in technology, specifically advances in mobile technology and mobile networks. Portable devices such as mobile phones, tablets, iPods, etc. are commonly used today by all, and the use of these devices in education has changed the way we learn. In an M-learning environment, the learner has access to content everywhere and at any time through mobile devices. Personalization and learner context awareness are the important factors in providing the learner with relevant content, and appropriate content delivery based on a learner&#39;s context is a complex process. A content delivery system is therefore needed that takes into account learners&#39; needs such as context, style, and device features. To model such a system, a neural network with fuzzy reasoning can be used to accommodate the dynamically changing learning styles, contexts, and characteristics of smart devices. &quot;If-then&quot; conditions can be formed to build the suggestion rules required for such a content delivery system. ANFIS, i.e., the Adaptive Neuro-Fuzzy Inference System, is a powerful tool for creating fuzzy systems with IF-THEN rules, and can be used to model and analyze this type of context-aware and adaptive content delivery system for an M-learning environment. In this article, the use of the ANFIS tool is demonstrated for various M-learning scenarios with different contexts. Four different contexts are constructed based on the inputs given by the student learners, and these four scenarios are analyzed empirically for their performance based on the RMSE of various membership functions.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_16-Empirical_Analysis_of_Context_based_Content_Delivery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computerized Drug Verification System: A Panacea for Effective Drug Verification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101115</link>
        <id>10.14569/IJACSA.2019.0101115</id>
        <doi>10.14569/IJACSA.2019.0101115</doi>
        <lastModDate>2019-11-30T11:51:52.3070000+00:00</lastModDate>
        
        <creator>Oketa Christian Kelechi</creator>
        
        <creator>Alo Uzoma Rita</creator>
        
        <creator>Okemiri Henry Anayo</creator>
        
        <creator>Richard-Nnabu Nneka Ernestecia</creator>
        
        <creator>Achi Ifeanyi Isaiah</creator>
        
        <creator>Chinazo I. Chima</creator>
        
        <creator>Afolabi Idris Yinka</creator>
        
        <creator>Mgbanya Praise Chinenye</creator>
        
        <subject>NAFDAC verify; security; drug verification; information system; drug authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Computerized Drug Verification System (CDVS) is a research work geared towards establishing means of identifying authentic drugs in Nigeria, with emphasis on identifying the manufacturing date and expiration date, using an interactive mobile app. At the point of drug purchase, only a few drugs have methods of verifying their authenticity using both a Personal Identification Number (PIN) and a National Agency for Food and Drug Administration and Control (NAFDAC) number. However, the production and expiry dates of such drugs are not always known, hence drugs that have long expired may still report as authentic, thereby endangering the lives of consumers. This research addresses these challenges by providing an interactive platform for drug verification, with particular strength in its ability to incorporate the manufacturing date, the manufacturer, the expiry date, and the authentication of drugs. The product of this research, “NAFDAC VERIFY”, an interactive mobile and web application, can be used to verify the authenticity of drugs in Nigeria in partnership with the Mobile Authentication Service (MAS). The system, which runs both on Android mobile phones and on the web, has been tested using raw data; it proves to be very robust and achieves the set objectives.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_15-Computerized_Drug_Verification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Designing Bee Inspired Routing Algorithm for Device-to-Device Communication in the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101114</link>
        <id>10.14569/IJACSA.2019.0101114</id>
        <doi>10.14569/IJACSA.2019.0101114</doi>
        <lastModDate>2019-11-30T11:51:52.2730000+00:00</lastModDate>
        
        <creator>Asmaa Mohammed Almazmoomi</creator>
        
        <creator>Muhammad Mostafa Monowar</creator>
        
        <subject>Device-to-device communication; internet-of-things; Bee-Inspire Algorithm; routing protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Device-to-device (D2D) communication is a popular research trend that enables ubiquitous information exchange in the Internet of Things (IoT). D2D communication provides data exchange without transiting through a base station, using direct communication between two devices. In such an environment, successful delivery of data to the receiver is essential. In this paper, we propose a Bee-Inspired Routing Algorithm (BIRA) for D2D communication in IoT that exploits the multiple interfaces of a “thing” supporting different wireless standards. BIRA is an on-demand routing algorithm that simulates the bee’s foraging behavior to find the optimal path between source and destination for multi-hop communication. The performance of BIRA is assessed through extensive simulations, which show that BIRA achieves a better packet delivery ratio and a lower average end-to-end delay under different traffic loads compared to the conventional AODV protocol. BIRA also consumes less energy than AODV and increases network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_14-On_Designing_Bee_Inspired_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Object-Oriented Software Test Suite Evolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101113</link>
        <id>10.14569/IJACSA.2019.0101113</id>
        <doi>10.14569/IJACSA.2019.0101113</doi>
        <lastModDate>2019-11-30T11:51:52.2600000+00:00</lastModDate>
        
        <creator>Nada Alsolami</creator>
        
        <creator>Qasem Obeidat</creator>
        
        <creator>Mamdouh Alenezi</creator>
        
        <subject>Software; test; code complexity; code coverage; test evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>A software system evolves over time; thus, its test suite must be repaired according to the changing code. Updating test cases manually is a time-consuming activity, especially for large test suites, which motivates the recent development of automatic test-repair techniques. To develop an effective automatic repair technique that reduces development effort and evolution cost, the developer should understand how test suites evolve in practice. This investigation conducts a comprehensive empirical study on eight Java systems, across many versions of these systems and their test suites, to find out how the test suite evolves and to find the relationship between changes in the program and the corresponding evolution of the test suite. The study shows that test suite size mostly increases, while test suite complexity remains stable. An increase (or decrease) in code size mostly increases (or decreases) the test suite size. However, increases or decreases in code complexity are offset by the stability of test suite complexity. Moreover, the percentage of code coverage tends to increase more often than decrease, but for mutation coverage the opposite is true.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_13-Empirical_Analysis_of_Object_Oriented_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Lexicon-based Algorithms and Sentiment Lexicon for Sentiment Analysis of Saudi Dialect Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101112</link>
        <id>10.14569/IJACSA.2019.0101112</id>
        <doi>10.14569/IJACSA.2019.0101112</doi>
        <lastModDate>2019-11-30T11:51:52.2270000+00:00</lastModDate>
        
        <creator>Waleed Al-Ghaith</creator>
        
        <subject>Sentiment analysis; opinion mining; lexicon-based; Arabic text mining; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The majority of studies in the sentiment analysis field, specifically the Arabic lexicon-based approach, focus on applying preprocessing methods to the targeted dataset text or textual data collected from Twitter (the Twitter dataset) rather than dealing with the lexicon itself. This study proposes a new method: we first concentrate on building a new sentiment lexicon with a reasonable number of words, and then apply adequate preprocessing methods to the lexicon’s words in addition to the Twitter dataset. The study presents a Saudi dialect sentiment lexicon called SaudiSentiPlus, containing 7,139 words mostly generated from Saudi tweets and other dictionaries. Moreover, this study presents two lexicon-based algorithms for the Saudi dialect that handle prefix and suffix letters in order to increase the performance of the proposed lexicon. The experiment conducted in this study to evaluate the performance of SaudiSentiPlus comprises four phases, with precision, recall, accuracy, and F-score measured in every phase. We built our testing dataset from Twitter by focusing on Saudi dialect hashtags (971 thousand tweets from 162 hashtags). The results show that the accuracy of SaudiSentiPlus with the two lexicon-based algorithms reached 81%.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_12-Developing_Lexicon_based_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Computational Efficiency of Monte Carlo Breakage of Particles using Serial and Parallel Processing: A Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101111</link>
        <id>10.14569/IJACSA.2019.0101111</id>
        <doi>10.14569/IJACSA.2019.0101111</doi>
        <lastModDate>2019-11-30T11:51:52.2130000+00:00</lastModDate>
        
        <creator>Jherna Devi</creator>
        
        <creator>Jagdesh Kumar</creator>
        
        <subject>Breakage of particles; Central Processing Unit (CPU); Graphics Processing Unit (GPU); CUDA; computational time; clock cycle; speedup factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This paper presents a GPU-based parallelized and a CPU-based serial Monte Carlo method for the breakage of a particle. We compare the efficiency of the graphics card’s graphics processing unit (GPU) and the general-purpose central processing unit (CPU) in a simulation using Monte Carlo (MC) methods for processing particle breakage. Three applications are used to compare computational times, clock cycles, and speedup factors, to find which platform is faster under which conditions. The architecture of the GPU is becoming increasingly programmable; it represents a potential speedup for many applications compared to the modern CPU. The objective of the paper is to compare the performance of the GPU and an Intel Core i7-4790 multicore CPU. The CPU implementation was written in the C programming language, and the GPU kernel was implemented using Nvidia’s CUDA (Compute Unified Device Architecture). Computational times, clock cycles, and speedup factors are compared for the GPU and the CPU under various simulation settings, such as the number of simulation entries (SEs), for a better understanding of GPU and CPU computational efficiency. It has been found that the number of SEs directly affects the speedup factor.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_11-The_Computational_Efficiency_of_Monte_Carlo_Breakage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Model to Detect 2D Hand based on Multi-feature Skin Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101110</link>
        <id>10.14569/IJACSA.2019.0101110</id>
        <doi>10.14569/IJACSA.2019.0101110</doi>
        <lastModDate>2019-11-30T11:51:52.1970000+00:00</lastModDate>
        
        <creator>Abdullah Shawan Alotaibi</creator>
        
        <subject>Hand detection; feature modeling; convolutional neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Hand gesture recognition is one of the fastest-growing human-computer interfaces. In most vision-based gesture recognition systems, the initial phase is hand detection and separation. Because hands are involved in a wide variety of daily activities, hand detection must cope with both extraordinary changes in illumination and the inherently varied appearance of the hand. To address these issues, we propose a new 2D hand detection model that can be seen as a combination of multi-feature hand proposal generation and cascaded convolutional neural network (CCNN) classification. To handle various luminances, we select color, Gabor, HOG, and SIFT features to separate the skin and produce hand proposals. We then introduce a cascaded CNN that retains the deep context information between the proposals. The proposed Multi-Feature Skin Cascaded CNN (MFS-CCNN) method is tested on a mix of datasets, including the Oxford Hands, VIVA Hand Detection, and EgoHands datasets as positive examples, with image patches from ImageNet 2012 and the FDDB dataset as negative examples. The proposed technique achieves competitive results: with average accuracies of 43.55 and 51.78 percent, our CCNN and MFS-CCNN models outperform DPM. The average accuracy of the CCNN model on a combined test set is 9.16% higher than that of the SSD model. Moreover, our model is faster than a DPM based on the statistical performance.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_10-A_New_Model_to_Detect_2D_Hand_based_on_Multi_Feature_Skin_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Different Short Path Algorithms to Improve Oil-Gas Pipelines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101109</link>
        <id>10.14569/IJACSA.2019.0101109</id>
        <doi>10.14569/IJACSA.2019.0101109</doi>
        <lastModDate>2019-11-30T11:51:52.1670000+00:00</lastModDate>
        
        <creator>Nabeel Naeem Almaalei</creator>
        
        <creator>Siti Noor Asyikin Mohd Razali</creator>
        
        <subject>PSPA; ACO; GA; Oil-Gas pipeline; performance; cost; short path</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The oil-gas pipeline is a complicated and expensive system in terms of construction, control, materials, monitoring, and maintenance, and involves economic, social, and environmental hazards. In the case study of Iraq, the pipeline system is above ground and is liable to disasters that may produce an environmental tragedy as well as loss of life and money. Hence, this article presents a performance evaluation of different short-path algorithms to improve oil-gas pipelines. The chosen algorithms were the Parallel Short Path Algorithm (PSPA), the Ant Colony Optimization (ACO) algorithm, and the Genetic Algorithm (GA). The main performance metric is the cost of the pipelines. Simulation trials were performed in MATLAB for the chosen algorithms. The performance comparison showed that the lowest cost of laying oil and gas pipelines was obtained with GA when the number of wells was set between 50 and 600. Conversely, PSPA showed the best performance in terms of required implementation time for all scenarios, and also had acceptable pipeline cost when the number of wells ranged between 50 and 600. Furthermore, PSPA showed the best performance for 700 and 840 wells in terms of the cost of laying the oil and gas pipelines compared to ACO and GA. It should be noted that ACO showed medium performance in terms of pipeline cost compared to PSPA and GA.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_9-Performance_Evaluation_of_different_Short_Path.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Social Networks on Students’ Academic Achievement in Practical Programming Labs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101108</link>
        <id>10.14569/IJACSA.2019.0101108</id>
        <doi>10.14569/IJACSA.2019.0101108</doi>
        <lastModDate>2019-11-30T11:51:52.1500000+00:00</lastModDate>
        
        <creator>Ayat Al Ahmad</creator>
        
        <creator>Randa Obeidallah</creator>
        
        <subject>e-Learning; Facebook; YouTube; social network; programming lab; course learning outcome</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Internet and Communication Technology (ICT) is applied extensively in education to allow students to obtain information at any time from anywhere. Social media is a new approach used to enhance and improve the delivery of education, with Facebook groups and YouTube channels among the most widely used networks for delivering information. In this paper, we study the impact of applying Facebook groups and YouTube videos on students’ academic achievement in computer programming labs, specifically the Object-Oriented Programming 2 (OOP2) lab at the Hashemite University. The practical programming lab plays an important role in understanding theoretical programming concepts, which is why a programming lab was chosen as the case study. The proposed methodology embeds social media networks as a new major dimension in the teaching process of the OOP2 lab, side by side with traditional lectures and e-learning tools such as Moodle. In this research, three surveys are utilized: to inspect the reasons for weakness in the OOP2 lab, to have students evaluate the course learning outcomes (CLOs) before and after applying the proposed methodology, and to investigate students’ opinions on using social media networks within the learning process. The results showed that operating through the social media sites students use on a daily basis creates a friendly and close educational environment and enhances students’ academic results.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_8-The_Impact_of_Social_Networks_on_Students_Academic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Learning System using Mashup based e-Learning Content Collection and an Attractive Avatar in OpenSimulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101107</link>
        <id>10.14569/IJACSA.2019.0101107</id>
        <doi>10.14569/IJACSA.2019.0101107</doi>
        <lastModDate>2019-11-30T11:51:52.1330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>OpenSimulator; Mashup; e-learning; mobile learning; avatar; search engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>A mashup-based e-learning content collection and provision system with an attractive avatar in OpenSimulator is proposed for making learning processes attractive and for finding students’ weak points through learning with an avatar. Through experiments with undergraduate students, it is found that the proposed e-learning system is useful for them: students can find their weak points through the learning process, and the avatar provides questions on each student’s weak subjects. The students can enjoy communicating with their avatar. Therefore, students receive the most appropriate e-learning content, provided by mashup-based information retrieval, and attractively take lessons with their own avatar using the proposed e-learning system.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_7-e_Learning_System_using_Mashup_based_e_Learning_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of E32 Long Range Radio Frequency 915 MHz based on Internet of Things and Micro Sensors Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101106</link>
        <id>10.14569/IJACSA.2019.0101106</id>
        <doi>10.14569/IJACSA.2019.0101106</doi>
        <lastModDate>2019-11-30T11:51:52.1200000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>Long range; microcontroller; internet of things; quality of service; micro sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This research discusses how to build and analyze a sensor-node network based on the 915 MHz Long Range (LoRa) E32 module with a micro sensor producing three outputs: temperature (DegC), air pressure (hPa), and humidity (%). The research succeeded in building a sensor node using the LoRa E32 915 MHz with a mini-type ATmega328P microcontroller and a 3.7 V, 1000 mAh battery. The display on the receiver uses an 8x2 LCD that shows the three sensor outputs. Furthermore, the analysis covers the LoRa chirp signal, obtained from a Tektronix spectrum analyzer in real time, Quality of Service (QoS), the Received Signal Strength Indicator (RSSI) (-dBm), and uplink and downlink data on the internet server. The micro sensor output is displayed on the application server as a sensor data graph; the application server used in this research is ThingSpeak from MathWorks.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_6-Performance_Evaluation_of_E32_Long_Range_Radio_Frequency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intuition, Accuracy, and Immersiveness Analysis of 3D Visualization Methods for Haptic Virtual Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101105</link>
        <id>10.14569/IJACSA.2019.0101105</id>
        <doi>10.14569/IJACSA.2019.0101105</doi>
        <lastModDate>2019-11-30T11:51:52.1030000+00:00</lastModDate>
        
        <creator>Taehoon Kim</creator>
        
        <creator>Chanwoo Kim</creator>
        
        <creator>Hyeonseok Song</creator>
        
        <creator>Mee Young Sung</creator>
        
        <subject>3D; stereoscopic; haptic; intuition; immersiveness; recognize; simulation; virtual reality; visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The purpose of this study is to analyze the usefulness of 3D immersive stereoscopic virtual reality technology in applications that provide tactile sensations. Diverse experiments show that haptic 3D visualization presented in a 3D stereoscopic space using headsets is more intuitive, accurate, and immersive than haptic 3D visualization displayed on flat 3D displays. In particular, intuitiveness is significantly improved with stereoscopic 3D visualization rather than flat 3D visualization. Despite the general superiority of stereoscopic 3D visualization, it is notable that precise operation performance was not greatly improved for elaborate object movements. This means that users can recognize and manipulate the position of objects more quickly in 3D stereoscopic immersive VR environments; however, precise operation does not benefit greatly from stereoscopic 3D visualization. Note that the degree of game immersion is remarkably augmented when using 3D stereoscopic visualization.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_5-Intuition_Accuracy_and_Immersiveness_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Microwave Drying Process to the Granular Materials</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101104</link>
        <id>10.14569/IJACSA.2019.0101104</id>
        <doi>10.14569/IJACSA.2019.0101104</doi>
        <lastModDate>2019-11-30T11:51:52.0700000+00:00</lastModDate>
        
        <creator>Francisc Ioan Hathazi</creator>
        
        <creator>Vasile Darie Soproni</creator>
        
        <creator>Mircea Nicolae Arion</creator>
        
        <creator>Carmen Otilia Molnar</creator>
        
        <creator>Simina Vicas (Coman)</creator>
        
        <creator>Olimpia Smaranda Mintas</creator>
        
        <subject>Microwave energy; microwave field; experimental models; laboratory models; drying and seed treatment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The use of electrothermal technologies based on microwave energy represents an important step in the development of new innovative solutions. Numerical modelling allows the influence of the high-frequency electromagnetic and thermal field on dielectric materials during the drying process to be studied before a practical installation is built. Thus, by the time the experimental model is developed, some of the phenomena that characterize the system are already known, eliminating a number of unknown issues. This paper describes experiments conducted to gather data on production parameters in order to improve stored corn seed quality. The interpretation and dissemination of the results leads to the description of &quot;recipes&quot; for drying corn seeds. The described method is flexible and can be applied to nearly any agricultural seed in further research.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_4-The_use_of_Microwave_Drying_Process_to_the_Granular_Materials.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model of Autoregressive Integrated Moving Average and Artificial Neural Network for Load Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101103</link>
        <id>10.14569/IJACSA.2019.0101103</id>
        <doi>10.14569/IJACSA.2019.0101103</doi>
        <lastModDate>2019-11-30T11:51:52.0570000+00:00</lastModDate>
        
        <creator>Lemuel Clark P Velasco</creator>
        
        <creator>Daisy Lou L. Polestico</creator>
        
        <creator>Gary Paolo O. Macasieb</creator>
        
        <creator>Michael Bryan V. Reyes</creator>
        
        <creator>Felicisimo B. Vasquez Jr</creator>
        
        <subject>Hybrid model; autoregressive integrated moving average; electric load forecasting; Artificial Neural Network (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>Pairing statistical modeling with machine learning to exploit their complementary strengths and weaknesses is an established technique for building forecasting models that analyze both the linear and the nonlinear components of a dataset to generate accurate predictions. In this paper, an autoregressive integrated moving average (ARIMA) model and artificial neural networks (ANN) were implemented as a hybrid forecasting model for a power utility’s dataset in order to predict the next day’s electric load consumption. ARIMA and ANN models were developed serially, with the finding that, of the twelve evaluated ARIMA models, ARIMA(8,1,2) exhibited the best forecasting performance. After identifying the optimal ANN layers and input neurons, this study showed that, of the six evaluated supervised feedforward ANN models, the model employing the Hyperbolic Tangent activation function and the Resilient Propagation training algorithm likewise exhibited the best forecasting performance. With Zhang’s ARIMA-ANN hybridization technique, this study showed that the hybrid model delivered a Mean Absolute Percentage Error (MAPE) of 4.09%, which is within the internationally accepted 5% forecasting error for electric load forecasting. Through the findings of this research, both the ARIMA statistical model and the ANN machine learning approach showed promising results as a forecasting model pair for analyzing the linear as well as non-linear properties of a power utility’s electric load data.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_3-A_Hybrid_Model_of_Autoregressive_Integrated_Moving_Average.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Possibility of Implementing High-Precision Calculations in Residue Numeral System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101102</link>
        <id>10.14569/IJACSA.2019.0101102</id>
        <doi>10.14569/IJACSA.2019.0101102</doi>
        <lastModDate>2019-11-30T11:51:52.0400000+00:00</lastModDate>
        
        <creator>Otsokov Sh A</creator>
        
        <creator>Magomedov Sh.G</creator>
        
        <subject>High-precision calculations; residue numeral system; positional numeral system; number conversion; rank determination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>This article proposes a method for accelerating high-precision calculations by parallelizing the arithmetic operations of addition, subtraction and multiplication. The proposed approach exploits the advantages of the residue numeral system: the absence of carry-overs when adding, subtracting and multiplying, and the reduction of high-precision calculations on numbers of high digit capacity to parallel, independent arithmetic operations on numbers of low digit capacity across many modules. Due to the complexity of performing non-modular operations, such as inverse transformation into a positional numeral system, number comparison, sign identification and number rank calculation in a residue numeral system, the acceleration of high-precision calculations is possible when solving computational problems with a small number of non-modular operations, for example: determination of the scalar product of vectors, the discrete Fourier transform, and iterative solution of systems of linear equations by the methods of Jacobi, Gauss-Seidel, etc. The implementation of the proposed method is demonstrated by the example of finding the scalar product of vectors.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_2-On_the_Possibility_of_Implementing_High_Precision_Calculations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection using Unsupervised Methods: Credit Card Fraud Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101101</link>
        <id>10.14569/IJACSA.2019.0101101</id>
        <doi>10.14569/IJACSA.2019.0101101</doi>
        <lastModDate>2019-11-30T11:51:51.9630000+00:00</lastModDate>
        
        <creator>Mahdi Rezapour</creator>
        
        <subject>Credit card fraud; anomaly detection; SVM; Mahalanobis distance; autoencoder; unsupervised techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(11), 2019</description>
        <description>The usage of credit cards has increased dramatically with their rapid development. Consequently, credit card fraud, and the losses to credit card owners and credit card companies, have also increased dramatically. Supervised learning has been widely used to detect anomalies in credit card transaction records, based on the assumption that the pattern of a fraud depends on past transactions. Unsupervised learning, however, does not ignore the fact that fraudsters can change their approaches based on customers’ behaviors and patterns. In this study, three unsupervised methods were presented: an autoencoder, a one-class support vector machine, and robust Mahalanobis outlier detection. The dataset used in this study is based on real-life credit card transaction data. Due to the availability of the response, the fraud labels, the performance of each model was evaluated after training. The performance of these three methods is discussed extensively in the manuscript. For the one-class SVM and the autoencoder, the normal transaction labels were used for training. The advantage of the robust Mahalanobis method over these methods is that it does not need any labels for its training.</description>
        <description>http://thesai.org/Downloads/Volume10No11/Paper_1-Anomaly_Detection_using_Unsupervised_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Poster to Mobile Calendar: An Event Reminder using Mobile OCR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101075</link>
        <id>10.14569/IJACSA.2019.0101075</id>
        <doi>10.14569/IJACSA.2019.0101075</doi>
        <lastModDate>2019-11-08T11:38:10.9900000+00:00</lastModDate>
        
        <creator>Fatiha Bousbahi</creator>
        
        <subject>OCR; API; mobile apps; reminder systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Technological innovations are the foundation of new services today. Successful services address real-life issues and help people manage life more conveniently using relevant technologies. Currently, images are a part of daily life. People often take pictures of posters for different events, such as exhibitions, workshops and conferences, with their mobile phones. Unfortunately, these pictures are sometimes forgotten and the events’ dates expire. As a consequence, people miss events they were interested in. Hence, with the vision of providing technology-powered services and affordable, turnkey applications, this paper presents Event-Reminder, a fully automated lightweight reminder system built upon a mobile offline OCR (Optical Character Recognition) engine with touch interaction, making some daily tasks easier. Event-Reminder is a mobile application that recognizes an image’s text content, extracts the event’s date and venue, and uploads this event information automatically to the mobile calendar in order to remind the user about the event at the proper time. A prototype system is introduced in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_75-From_Poster_to_Mobile_Calendar_an_Event_Reminder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Augmentation to Stabilize Image Caption Generation Models in Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101074</link>
        <id>10.14569/IJACSA.2019.0101074</id>
        <doi>10.14569/IJACSA.2019.0101074</doi>
        <lastModDate>2019-11-04T11:36:29.3600000+00:00</lastModDate>
        
        <creator>Hamza Aldabbas</creator>
        
        <creator>Muhammad Asad</creator>
        
        <creator>Mohammad Hashem Ryalat</creator>
        
        <creator>Kaleem Razzaq Malik</creator>
        
        <creator>Muhammad Zubair Akbar Qureshi</creator>
        
        <subject>Convolutional Neural Networks (CNN); image caption generation; data augmentation; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Automatic image caption generation is a challenging AI problem since it requires the utilization of several techniques from different computer science domains, such as computer vision and natural language processing. Deep learning techniques have demonstrated outstanding results in many different applications. Data augmentation in deep learning, which increases the amount and the variety of training data available to learning models without the burden of collecting new data, is a promising field in machine learning. Generating a textual description for a given image is a challenging task for computers. Nowadays, deep learning plays a significant role in the manipulation of visual data with the help of Convolutional Neural Networks (CNN). In this study, CNNs are employed to train prediction models that help in automatic image caption generation. The proposed method utilizes the concept of data augmentation to overcome the fuzziness of well-known image caption generation models. The Flickr8k dataset is used in the experimental work of this study, and the BLEU score is applied to evaluate the reliability of the proposed method. The results clearly show the stability of the outcomes generated through the proposed method when compared to others.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_74-Data_Augmentation_to_Stabilize_Image_Caption_Generation_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Architecture Solutions for the Internet of Things: A Taxonomy of Existing Solutions and Vision for the Emerging Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101073</link>
        <id>10.14569/IJACSA.2019.0101073</id>
        <doi>10.14569/IJACSA.2019.0101073</doi>
        <lastModDate>2019-10-31T13:23:17.1670000+00:00</lastModDate>
        
        <creator>Aakash Ahmad</creator>
        
        <creator>Sultan Abdulaziz</creator>
        
        <creator>Adwan Alanazi</creator>
        
        <creator>Mohammed Nazel Alshammari</creator>
        
        <creator>Mohammad Alhumaid </creator>
        
        <subject>Software and system architecture; Internet of Things; software engineering; software engineering for IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Recently, Internet of Things (IoT) systems have enabled an interconnection between systems, humans, and services to create an (autonomous) ecosystem of various computation-intensive things. Software architecture supports the effective modeling, specification, implementation, deployment, and maintenance of software-intensive things to engineer and operationalize IoT systems. In order to conceptualize and optimize the role of software architectures for the IoT, there is a dire need for research efforts that analyse the existing research and solutions to formulate a vision for future research and development. In this research, we propose to empirically analyse and taxonomically classify the impacts of research on designing, architecting, and developing IoT-driven software systems. We have conducted a survey-based study of the existing research – investigating challenges, solutions and required future efforts – on architecting IoT systems. The results of the survey highlight that software architecture solutions support various research themes for IoT systems, such as (i) cloud-based ecosystems, (ii) reference architectures, (iii) autonomous systems, and (iv) agent-based systems for IoT-based software. The results also indicate that any futuristic vision to architect IoT software should incorporate architectural processes, patterns, models and languages to support the reusable, automated, and efficient development of IoTs. The proposed research documents structured and systemised knowledge about software architecture for developing IoT systems. Such knowledge can help researchers and developers identify the key areas and understand existing solutions and their limitations, in order to conceptualize and propose innovative solutions for existing and emerging challenges related to the development of IoT software.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_73-Software_Architecture_Solutions_for_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Classification of Academic and Vocational Guidance Questions using Multiclass Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101072</link>
        <id>10.14569/IJACSA.2019.0101072</id>
        <doi>10.14569/IJACSA.2019.0101072</doi>
        <lastModDate>2019-10-31T13:23:17.1330000+00:00</lastModDate>
        
        <creator>Omar Zahour</creator>
        
        <creator>El Habib Benlahmar</creator>
        
        <creator>Ahmed Eddaoui</creator>
        
        <creator>Oumaima Hourrane</creator>
        
        <subject>Academic and vocational guidance; multiclass neural network; e-orientation; machine learning; Holland’s theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Educational and professional orientation is an essential phase for each student to succeed in life and in the curriculum. In this context, it is very important to take into account the interests, occupations, skills, and personality type of each student in order to make the right choice of training and to build a solid professional outline. This article deals with the problem of educational and vocational orientation, and we have developed a model for the automatic classification of orientation questions. “E-Orientation Data” is a machine learning method based on John L. Holland’s theory of RIASEC typology that uses a multiclass neural network algorithm. This model allows us to classify academic and professional orientation questions according to their four categories, and thus allows the automatic generation of questions in this area. As the algorithm gives good results, this model can serve E-Orientation practitioners and researchers in further research.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_72-Automatic_Classification_of_Academic_and_Vocational_Guidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Control of PV-FC Electric Vehicle using Lyapunov based Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101071</link>
        <id>10.14569/IJACSA.2019.0101071</id>
        <doi>10.14569/IJACSA.2019.0101071</doi>
        <lastModDate>2019-10-31T13:23:16.9800000+00:00</lastModDate>
        
        <creator>Saad Hayat</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Tanveer-ul-Haq</creator>
        
        <creator>Sadeeq Jan</creator>
        
        <creator>Mehtab Qureshi</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <creator>Zahid Wadud</creator>
        
        <subject>Energy Management System (EMS); Hybrid Electric Vehicle (HEV); DC-DC converter; Multiple Input-Multiple Output (MIMO) system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Lyapunov-based control is used to test whether a dynamical system is asymptotically stable. The control strategy is based on linearization of the system. A Lyapunov function is constructed to obtain a stabilizing feedback controller. This paper deals with Lyapunov-based control of a multiple-input single-output system for hybrid electric vehicles (HEVs). Generally, an electric vehicle has an energy management system (EMS), an inverter, a DC-DC converter and a traction motor for the operation of its wheels. The control action is applied to the DC-DC converter, which works side-by-side with the EMS of the electric vehicle. The input sources considered in this study are a photo-voltaic (PV) panel, a fuel cell and a high-voltage lithium-ion (Li-ion) battery. The PV cell and the fuel cell are considered the primary sources of energy, and the battery is considered the secondary source. The converter used is a DC-DC boost converter connected to all three sources. The idea follows the basic HEV principle in which multiple sources are incorporated to satisfy the power demands of the vehicle, using a DC-DC converter and an inverter, to operate its traction motor. The target is to achieve the necessary tracking of all input source currents and of the output voltage, and to fulfill the power demand of the HEV under severe load transients. The operation of the DC-DC converter is divided into three stages, each representing a different combination of the input sources. The analysis and proof of the stability of the HEV system is done using Lyapunov stability theory. The results are discussed in the conclusion.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_71-Hybrid_Control_of_PV_FC_Electric_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Analysis and Security Evaluation of Chaotic RC5-CBC Symmetric Key Block Cipher Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101070</link>
        <id>10.14569/IJACSA.2019.0101070</id>
        <doi>10.14569/IJACSA.2019.0101070</doi>
        <lastModDate>2019-10-31T13:23:16.9630000+00:00</lastModDate>
        
        <creator>Abdessalem Abidi</creator>
        
        <creator>Anissa Sghaier</creator>
        
        <creator>Mohammed Bakiri</creator>
        
        <creator>Christophe Guyeux</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>Cipher Block Chaining (CBC); Rivest Cipher 5 (RC5); chaotic dynamical system; sensibility; security; randomness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In some previous research works, it has been theoretically proven that the RC5-CBC encryption algorithm behaves as a Devaney topological chaos dynamical system. This unpredictable behavior has been experimentally illustrated through sensitivity test analyses encompassing the evaluation of the avalanche effect phenomenon. In this paper, which is an extension of our previous work, we aim to prove that the RC5 algorithm can guarantee a much better level of security and randomness while behaving chaotically, namely when embedded in the CBC mode of encryption. To do this, we began by evaluating the quality of images encrypted under the chaotic RC5-CBC symmetric key encryption algorithm. Then, we present the synthesis results of a hardware architecture that implements this chaotic algorithm in FPGA circuits.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_70-Statistical_Analysis_and_Security_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating a Cloud Service using Scheduling Security Model (SSM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101069</link>
        <id>10.14569/IJACSA.2019.0101069</id>
        <doi>10.14569/IJACSA.2019.0101069</doi>
        <lastModDate>2019-10-31T13:23:16.9470000+00:00</lastModDate>
        
        <creator>Abdullah Sheikh</creator>
        
        <creator>Malcolm Munro</creator>
        
        <creator>David Budgen</creator>
        
        <subject>Cloud computing; security; scheduling; evaluating; cloud models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Developments in technology have made cloud computing widely used in different sectors, such as academia and business, as well as for private purposes. It can provide convenient services via the Internet, allowing stakeholders to get all the benefits that the cloud can facilitate. With all the benefits of cloud computing, there are still some risks, such as security. This brings into consideration the need to improve the Quality of Service (QoS). A Scheduling Security Model (SSM) for Cloud Computing has been developed to address these issues. This paper discusses the evaluation of the SSM on some examples with different scenarios to investigate the cost and the effect on the service requested by customers.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_69-Evaluating_a_Cloud_Service_using_Scheduling_Security_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Informative Fuzzy Association Rules using Bayesian Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101068</link>
        <id>10.14569/IJACSA.2019.0101068</id>
        <doi>10.14569/IJACSA.2019.0101068</doi>
        <lastModDate>2019-10-31T13:23:16.9300000+00:00</lastModDate>
        
        <creator>Muhammad Fahad</creator>
        
        <creator>Khalid Iqbal</creator>
        
        <creator>Somaiya Khatoon</creator>
        
        <creator>Khalid Mahmood Awan</creator>
        
        <subject>Fuzzy association rules; privacy preservation; fuzzification; sensitive rules; Bayesian network; perturbation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In business, association rules, being considered important assets, play a vital role in productivity and growth. Different business partnerships share association rules in order to explore the capabilities to make effective decisions for the enhancement of the business and its core capabilities. The fuzzy association rule mining approach emerged out of the necessity to mine the quantitative data regularly present in databases. An association rule is sensitive when it violates rules and regulations for sharing a particular kind of information with a third party. As with classical association rules, some privacy measures need to be taken to retain the standards and importance of fuzzy association rules. Privacy preservation is used for valuable information extraction and for minimizing the risk of sensitive information disclosure. Our proposed model mainly focuses on securing the association rules that reveal sensitive information. In our model, sensitive fuzzy association rules are secured by identifying sensitive fuzzy items to perturb the fuzzified dataset. The resulting transformed FARs are analyzed to calculate the accuracy level of our model in the context of newly generated fuzzy association rules, hidden rules and lost rules. Extensive experiments are carried out in order to demonstrate the results of our proposed model. The privacy preservation of the maximum number of sensitive FARs with minimum perturbation highlights the significance of our model.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_68-Securing_Informative_Fuzzy_Association_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of People who Suffer Schizophrenia and Healthy People by EEG Signals using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101067</link>
        <id>10.14569/IJACSA.2019.0101067</id>
        <doi>10.14569/IJACSA.2019.0101067</doi>
        <lastModDate>2019-10-31T13:23:16.9000000+00:00</lastModDate>
        
        <creator>Carlos Alberto Torres Naira</creator>
        
        <creator>Cristian José López Del Alamo</creator>
        
        <subject>Convolutional Neural Network (CNN); electroencephalography; Electroencephalogram Signals (EEG); deep learning; schizophrenia; classification; Pearson Correlation Coefficient (PCC); Universidad Nacional de San Agustín (UNSA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>More than 21 million people worldwide suffer from schizophrenia. This serious mental disorder exposes people to stigmatization, discrimination, and violation of their human rights. Different works on the classification and diagnosis of mental illnesses use electroencephalogram (EEG) signals because they reflect brain functioning and how these diseases affect it. Due to the information provided by EEG signals and the performance demonstrated by deep learning algorithms, the present work proposes a model for the classification of schizophrenic and healthy people through EEG signals using deep learning methods. Considering the properties of an EEG, high-dimensional and multichannel, we applied the Pearson Correlation Coefficient (PCC) to represent the relations between the channels; this way, instead of using the large amount of data that an EEG provides, we used a smaller matrix as the input of a Convolutional Neural Network (CNN). Finally, the results demonstrated that the proposed EEG-based classification model achieved Accuracy, Specificity, and Sensitivity of 90%, 90%, and 90%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_67-Classification_of_People_who_Suffer_Schizophrenia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Reality Full Immersion Techniques for Enhancing Workers Performance, 20 years Later: A Review and a Reformulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101066</link>
        <id>10.14569/IJACSA.2019.0101066</id>
        <doi>10.14569/IJACSA.2019.0101066</doi>
        <lastModDate>2019-10-31T13:23:16.8830000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <creator>Sofía Alfaro</creator>
        
        <creator>Francisco Fialho</creator>
        
        <subject>Autopoiesis; knowledge construction; knowledge management; knowledge construction by full immersion in virtual reality environments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The principal aim of this article is to review and reformulate the work published by Alfaro-Casas, Bridi and Fialho [1] in 1997 on the use of virtual reality immersion techniques for enhancing workers’ performance. The challenge to be solved relates to the discussion of the advances that have occurred since the publication of the original work. The strength of the achievements relies on the open dialogue established with different theories of human cognition. We consider not only Humberto Maturana and Francisco Varela’s autopoiesis (theories of the biological foundations of humans) but also other approaches derived from the Education Sciences and Knowledge Management. The focus is on Artificial Intelligence and the use of immersive technologies. The state of the art is established and its contributions towards the construction of knowledge are investigated, as a means for the development of training and capacity-building activities for the workforce. The methodology used is a bibliographical revision of several databases and a search of theses at the main universities. The greatest weakness of the research is that we limited the search to documents in English, Spanish, or Portuguese. Some of the open problems of virtual immersion are also treated.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_66-Virtual_Reality_Full_Immersion_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Blockchain based Educational Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101065</link>
        <id>10.14569/IJACSA.2019.0101065</id>
        <doi>10.14569/IJACSA.2019.0101065</doi>
        <lastModDate>2019-10-31T13:23:16.8700000+00:00</lastModDate>
        
        <creator>Bushra Hameed</creator>
        
        <creator>Muhammad Murad Khan</creator>
        
        <creator>Abdul Noman</creator>
        
        <creator>M. Javed Ahmad</creator>
        
        <creator>M. Ramzan Talib</creator>
        
        <creator>Faiza Ashfaq</creator>
        
        <creator>Hafiz Usman</creator>
        
        <creator>M. Yousaf</creator>
        
        <subject>Blockchain; educational-project; education; digital-certification; record-keeping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Blockchain is a decentralized and shared distributed ledger that records the history of transactions performed by different nodes across the whole network. The technology is used in practice in the field of education for record-keeping, digital certification, etc. Several papers have already been published on this topic, but none covers blockchain-based educational projects as a whole, leaving a gap with respect to the latest trends in education. Blockchain-based educational projects resolve the issues of today's educators. On that basis, we conclude that there is a need to conduct a systematic literature review. This study therefore reviews the gap between blockchain technology and educational projects. For this purpose, the paper focuses on exploring some blockchain-based projects and the protocols used in these projects. It also analyses the blockchain features that are being used and the services offered by existing educational projects that use these features, in order to improve the adoption of this technology in education.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_65-A_Review_of_Blockchain_based_Educational_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Immersive Technologies in Marketing: State of the Art and a Software Architecture Proposal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101064</link>
        <id>10.14569/IJACSA.2019.0101064</id>
        <doi>10.14569/IJACSA.2019.0101064</doi>
        <lastModDate>2019-10-31T13:23:16.8530000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <creator>Juan Carlos Zúñiga</creator>
        
        <creator>Alonso Portocarrero</creator>
        
        <creator>Alberto Barbosa Raposo</creator>
        
        <subject>Marketing; experiential marketing; immersive technologies; immersive technologies in marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>After conducting a historical review of marketing, and especially of experiential marketing, which considers various types of experiences such as sensations, feelings, thoughts, actions and relationships, seeking greater consumer satisfaction and therefore greater effectiveness of marketing actions, and after establishing the state of the art of immersive technologies and their applications in marketing, the authors propose a software architecture model for hotel services, which includes a description of the hardware and software elements for its development and implementation. The model would make it possible to bring customers closer to experiences that are very close to reality, based on their profiles and characteristics, previously processed by a recommendation module included in the proposal, which supports the purchase decision with a high degree of adaptation to their needs and requirements. The proposed model, with its attributes of originality, aims to contribute to the development and technological innovation of marketing in the hotel industry. Finally, conclusions and recommendations for future work are established.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_64-Immersive_Technologies_in_Marketing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Designing Domain-Specific Document Retrieval Systems using Semantic Indexing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101063</link>
        <id>10.14569/IJACSA.2019.0101063</id>
        <doi>10.14569/IJACSA.2019.0101063</doi>
        <lastModDate>2019-10-31T13:23:16.8370000+00:00</lastModDate>
        
        <creator>ThanhThuong T. Huynh</creator>
        
        <creator>TruongAn PhamNguyen</creator>
        
        <creator>Nhon V. Do</creator>
        
        <subject>Document representation; document retrieval system; graph matching; semantic indexing; semantic search; domain ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Using domain knowledge and semantics to conduct effective document retrieval has attracted great attention from researchers in many different communities. Following that approach, we present a method for designing domain-specific document retrieval systems that manages semantic information related to document content and supports semantic processing in search. The proposed method integrates components such as an ontology describing domain knowledge, a database of the document repository, semantic representations for documents, and advanced search techniques based on measuring semantic similarity. In this article, a model of domain knowledge for various information retrieval tasks, called the Classed Keyphrase based Ontology (CK-ONTO), is presented in detail. We also present graph-based models for representing documents, together with measures for evaluating semantic relevance for use in searching. The above methodology has been used in designing many real-world applications, such as a job-posting retrieval system. In evaluations on a real-world-inspired dataset, our methods showed noticeable improvements over traditional retrieval solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_63-A_Method_for_Designing_Domain_Specific_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LSSCW: A Lightweight Security Scheme for Cluster based Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101062</link>
        <id>10.14569/IJACSA.2019.0101062</id>
        <doi>10.14569/IJACSA.2019.0101062</doi>
        <lastModDate>2019-10-31T13:23:16.8230000+00:00</lastModDate>
        
        <creator>Ganesh R Pathak</creator>
        
        <creator>M.S.Godwin Premi</creator>
        
        <creator>Suhas H. Patil</creator>
        
        <subject>Authentication; Automated Validation of Internet Security Protocols and Applications tool; Internet of Things (IoT); key management; security; Wireless Sensor Network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In the last two decades, Wireless Sensor Networks (WSNs) have been used for a large number of Internet of Things (IoT) applications, such as military surveillance, forest fire detection, healthcare, precision agriculture and smart homes. Because of the wireless nature of communication, a WSN suffers from various attacks such as Denial of Service (DoS) and replay attacks. Dealing with scalability and security issues is a challenging task in WSNs. In this paper, we present a Lightweight Security Scheme for Cluster based Wireless Sensor Networks (LSSCW). LSSCW has two phases: an initialization phase and a data transfer phase. The work focuses on secure data aggregation in a wireless sensor network with the help of a symmetric and session key generation technique. Data from sensor nodes are securely transferred to the base station. LSSCW is lightweight and satisfies security requirements including authenticity, confidentiality and integrity. Its performance is verified using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Results show that LSSCW is secure and efficient in terms of computation and communication overhead.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_62-LSSCW_A_Lightweight_Security_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis and Classification of Photos for 2-Generation Conversation in China</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101061</link>
        <id>10.14569/IJACSA.2019.0101061</id>
        <doi>10.14569/IJACSA.2019.0101061</doi>
        <lastModDate>2019-10-31T13:23:16.8070000+00:00</lastModDate>
        
        <creator>Zhou Xiaochun</creator>
        
        <creator>Choi Dong-Eun</creator>
        
        <creator>Panote Siriaraya</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Photo; 2-generation conversation; sentiment analysis; Principal Component Analysis (PCA); Hierarchical Clustering Analysis (HCA); China</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Appropriate photos can help the Chinese empty-nest elderly and young volunteers find common topics to promote communication. However, there is little research on such photos in China. This paper used 40 online photos across 160 conversation sessions between Chinese elderly and young people to analyze and classify these photos. Sentiment analysis of Chinese conversational texts was used to estimate the speakers' attitudes towards the photos. For each photo, we collected a data set comprising the average sentiment score, the number of words uttered by the speakers, the pulse of the elderly, and the stress level of the youth. Principal Component Analysis (PCA) was carried out as a data preprocessing step to improve classification accuracy, and we selected four Principal Components (PCs) that account for 85.20% of the total variance in the data. Next, we normalized these four PC scores for Hierarchical Clustering Analysis (HCA) of the photos and obtained four clusters with different features. The results showed that photos in cluster 2 were optimal only for the youth; cluster 3 only made the elderly participants speak more; and clusters 1 and 4 were not suitable for either the elderly or the young people. This paper is the first to classify photos for 2-generation conversation and describe their features in China. Although we did not find any photos suitable for both the elderly and the youth, this empirical study takes a step forward in the investigation of photos for 2-generation conversation in China.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_61-Sentiment_Analysis_and_Classification_of_Photos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Prototype of a Low-Priced Domestic Incubator with Telemetry System for Premature Babies in Lima, Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101060</link>
        <id>10.14569/IJACSA.2019.0101060</id>
        <doi>10.14569/IJACSA.2019.0101060</doi>
        <lastModDate>2019-10-31T13:23:16.7900000+00:00</lastModDate>
        
        <creator>Jason Chicoma-Moreno</creator>
        
        <subject>Preterm babies; incubator; Arduino; telemetry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Complications due to preterm birth are the main cause of death among children aged five years or less. Hence, thorough care for these babies is needed, especially during the first weeks or months after birth. Because not many families in Peru can afford to rent or buy an incubator, this work puts forward the design and construction of a low-priced domestic incubator with a telemetry system. The most important parameters to monitor are considered to be the temperature and humidity inside the incubator and the heart pulse of the baby. To maintain the levels of temperature and humidity according to medical standards, software was developed on an Arduino Uno. So that the parents can monitor the aforementioned parameters without necessarily being in the same room as the incubator, a Bluetooth module was used with the Arduino Uno to transmit the data to an app installed on a mobile phone. The first tests have shown that the humidity and temperature levels within the incubator are maintained as desired, and that the measured heart pulse is as expected. However, there is still some work to do regarding the upper limits of the humidity and temperature levels, which will be implemented in the next step of the project. It is expected that this incubator will serve Peruvian families, especially those living at the edge of poverty, who cannot afford an expensive incubator at home or pay for these services at hospitals for their premature babies.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_60-Towards_a_Prototype_of_a_Low_Priced_Domestic_Incubator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Project-based Learning in a Hybrid e-Learning System Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101059</link>
        <id>10.14569/IJACSA.2019.0101059</id>
        <doi>10.14569/IJACSA.2019.0101059</doi>
        <lastModDate>2019-10-31T13:23:16.7770000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <subject>Adaptative e-Learning; Project Based Learning (PBL); intelligent agents; back propagation neural networks; fuzzy logic; case base reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>After conducting a historical review and establishing the state of the art, the authors of this paper focus on the incorporation of Project Based Learning (PBL) into an adaptive e-Learning environment, a novel and emerging perspective that allows the application of what is today one of the most effective strategies for the teaching-learning process. In PBL, each project is defined as a complex task or real-world problem; to solve it, the student must carry out research, planning, design, development, validation, testing and other activities. For the proposed hybrid architecture of the e-Learning system model, the authors use artificial intelligence techniques that make it possible to identify Learning Styles (LS), with the purpose of automatically assigning projects according to the characteristics, interests, expectations and demands of the student, who will interact with an e-Learning environment with a high capacity for adaptation to each individual. Finally, the conclusions and recommendations of the research work are established.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_59-Using_Project_based_Learning_in_a_Hybrid_e_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Static Analysis on Floating-Point Programs Dealing with Division Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101058</link>
        <id>10.14569/IJACSA.2019.0101058</id>
        <doi>10.14569/IJACSA.2019.0101058</doi>
        <lastModDate>2019-10-31T13:23:16.7600000+00:00</lastModDate>
        
        <creator>MG Thushara</creator>
        
        <creator>K. Somasundaram</creator>
        
        <subject>Abstract interpretation; static analysis; forward analysis; abstract domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Numerical accuracy is a critical point for safe computation in floating-point programs. Given a certain accuracy for the inputs of a program, static analysis computes a safe approximation of the accuracy of the outputs. This accuracy depends on the propagation of errors in the data and on the round-off errors of the arithmetic operations performed during execution. Floating-point values provide a large dynamic range, but their main pitfall is the inaccuracy that occurs in floating-point computations. Based on the theory of abstract interpretation, this paper demonstrates an upper bound on the precision of the results of such computations in a program.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_58-Static_Analysis_on_Floating_Point_Programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Segmentation of Vietnamese Identification Card Text Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101057</link>
        <id>10.14569/IJACSA.2019.0101057</id>
        <doi>10.14569/IJACSA.2019.0101057</doi>
        <lastModDate>2019-10-31T13:23:16.7300000+00:00</lastModDate>
        
        <creator>Tan Nguyen Thi Thanh</creator>
        
        <creator>Khanh Nguyen Trong</creator>
        
        <subject>Optical Character Recognition (OCR); text identification; identification card detection and recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The development of deep learning in computer vision has motivated research in related fields, including Optical Character Recognition (OCR). Many models and pre-trained models proposed in the literature demonstrate their efficiency in optical text recognition. In this context, image processing techniques play an essential role in improving the accuracy of the recognition task, because, depending on the practical application, text images often suffer from degradations such as blur, uneven illumination, complex backgrounds, perspective distortion and so on. In this paper, we propose a method for pre-processing, text area extraction and segmentation of the Vietnamese Identification Card, in order to improve the accuracy of Region of Interest detection. The proposed method was evaluated on a large data set of varying practical quality. Experimental results demonstrate the efficiency of our method.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_57-A_Method_for_Segmentation_of_Vietnamese_Identification_Card.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Arabic Writing Styles in Ancient Arabic Manuscripts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101056</link>
        <id>10.14569/IJACSA.2019.0101056</id>
        <doi>10.14569/IJACSA.2019.0101056</doi>
        <lastModDate>2019-10-31T13:23:16.7130000+00:00</lastModDate>
        
        <creator>Mohamed Ezz</creator>
        
        <creator>Mohamed A. Sharaf</creator>
        
        <creator>Al-Amira A. Hassan</creator>
        
        <subject>Arabic manuscripts; classification; feature extraction; machine learning; GNB; DT; RF; K-NN classifiers; SURF; SIFT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This paper proposes a novel and effective approach to classify ancient Arabic manuscripts in the “Naskh” and “Reqaa” styles. This work applies the SIFT and SURF algorithms to extract features and then uses several machine learning algorithms: Gaussian Naïve Bayes (GNB), Decision Tree (DT), Random Forest (RF) and K-Nearest Neighbor (KNN) classifiers. The contribution of this work is the introduction of synthetic features that enhance the classification performance. The training phase encompasses four training models for each style. For testing purposes, two famous books from the Islamic literature are used: 1) Al-kouakeb Al-dorya fi Sharh Saheeh Al-Bokhary; and 2) Alfaiet Ebn Malek: Mosl Al-tolab Le Quaed Al-earab. The experimental results show that the proposed algorithm yields higher accuracy with SIFT than with SURF, which could be attributed to the nature of the dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_56-Classification_of_Arabic_Writing_Styles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Hoax News Detection and Analyzer used Rule-based Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101055</link>
        <id>10.14569/IJACSA.2019.0101055</id>
        <doi>10.14569/IJACSA.2019.0101055</doi>
        <lastModDate>2019-10-31T13:23:16.6800000+00:00</lastModDate>
        
        <creator>SY Yuliani</creator>
        
        <creator>Mohd Faizal Bin Abdollah</creator>
        
        <creator>Shahrin Sahib</creator>
        
        <creator>Yunus Supriadi Wijaya</creator>
        
        <subject>Hoax; news; framework; web crawling; detection; multilanguage; unsupervised algorithm; similarity algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Social media today offers facilities that answer the community's need for information and serve socio-economic interests. However, social media also opens ample space for hoax news about events, which troubles the public. Hoaxes also provide cynical provocation, inciting hatred and anger in many people and directly influencing behavior so that it responds as the hoax makers desire. Fake news plays an increasingly dominant role in spreading misinformation by influencing people's perceptions or knowledge to distort their awareness and decision-making. In this work, a framework is developed: a dataset of hoaxes is gathered using web crawlers from several websites and processed using classification techniques. Hoax news is categorized using several detection parameters, including page URL, hoax news title, publish date, author, and content. Each candidate hoax is matched word by word using a similarity algorithm, and the accuracy of hoax news detection is produced using a rule-based method. Experiments were carried out on eleven thousand hoax news items used as training and testing data sets; this data set was validated using similarity algorithms to produce the highest accuracy of hoax text similarity. In this study, each news item is labeled into one of four categories: Fact, Hoax, Information, or Unknown. The contributions are automatic hoax news detection, automatic multilanguage detection, and a self-gathered collection of datasets with validation resulting in the four hoax news categories, measured in terms of text similarity using similarity techniques. Further research can continue by adding hate speech and black campaigns as objects, using blockchain techniques to ward off hoaxes, or producing algorithms with better text accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_55-A_Framework_for_Hoax_News_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Acceleration Sensor for Movement Detection in Vehicle Security System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101054</link>
        <id>10.14569/IJACSA.2019.0101054</id>
        <doi>10.14569/IJACSA.2019.0101054</doi>
        <lastModDate>2019-10-31T13:23:16.6670000+00:00</lastModDate>
        
        <creator>A M Kassim</creator>
        
        <creator>A. K. R. A. Jaya</creator>
        
        <creator>A. H. Azahar</creator>
        
        <creator>H. I. Jaafar</creator>
        
        <creator>S Sivarao</creator>
        
        <creator>F. A. Jafar</creator>
        
        <creator>M. S. M. Aras</creator>
        
        <subject>Security system; acceleration sensor; movement detection; ADXL345</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The vehicle security system is a critical part of the entire car system for preventing unauthorized access to the car. Statistics show that the number of private cars being stolen is increasing while the recovery rate is decreasing sharply, indicating that car security systems fail to prevent unauthorized access. Most vehicle security systems simply consist of a few door-open detection switches, a siren, and a remote control to protect the car, which appears to be weak against experienced car thieves. Therefore, this project develops a vehicle security system that can measure the dynamic acceleration inside the vehicle using the ADXL345 accelerometer and locate the coordinates of the vehicle using a U-Blox Neo-6M GPS receiver. To evaluate the performance of the proposed system, an experiment was conducted to determine the most suitable of four positions inside a car to place the device. A performance analysis of the GPS receiver for accurate tracking was also carried out. The results showed that the most suitable position for the device is inside the center of the car dashboard, and that the GPS receiver has a mean cold start-up time of 5 minutes 47 seconds and a hot start-up time of 11.72 seconds, with a standard deviation of 0.000003706&#176; in latitude and 0.000002762&#176; in longitude for position tracking.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_54-Performance_Analysis_of_Acceleration_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speculating on Speculative Execution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101053</link>
        <id>10.14569/IJACSA.2019.0101053</id>
        <doi>10.14569/IJACSA.2019.0101053</doi>
        <lastModDate>2019-10-31T13:23:16.6500000+00:00</lastModDate>
        
        <creator>Jefferson Dinerman</creator>
        
        <subject>Speculative execution; hyperthreading; Spectre; meltdown; simultaneous multithreading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Threat actors continue to design exploits that specifically target physical weaknesses in processor hardware rather than more traditional software vulnerabilities. The now infamous attacks, Spectre and Meltdown, ushered in a new era of hardware-based security vulnerabilities that have caused some experts to question whether the potential cybersecurity risks associated with simultaneous multithreading (SMT), also known as hyperthreading (HT), are potent enough to outweigh its computational advantages. A small pool of researchers now touts the need to disable SMT completely. However, this appears to be an extreme reaction: while a more security-focused environment might be inclined to disable SMT, environments with a greater level of risk tolerance that need the performance advantages offered by SMT to facilitate business operations should not disable it by default and should instead evaluate software application-based patch mitigations. This paper provides insights that can help make informed decisions when determining the suitability of SMT by exploring key processes related to multithreading, reviewing the most common exploits, and describing why Spectre and Meltdown do not necessarily warrant disabling HT.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_53-Speculating_on_Speculative_Execution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Method for Diagnostic Energetic System with Bond Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101052</link>
        <id>10.14569/IJACSA.2019.0101052</id>
        <doi>10.14569/IJACSA.2019.0101052</doi>
        <lastModDate>2019-10-31T13:23:16.6200000+00:00</lastModDate>
        
        <creator>Belgacem Hamdouni</creator>
        
        <creator>Dhafer Mezghani</creator>
        
        <creator>Jamel Riahi</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Bond graph; diagnostic; fault detection; energy systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Surveillance and supervision systems play a major role in ensuring the safety and availability of industrial equipment and installations. Fault detection and diagnosis is highly important to facilitate the planning and implementation of curative and preventive actions. Industrial systems are usually governed by different physical phenomena and diverse technological components. The bond graph, a powerful tool based on energetic and multi-physical analysis, is well adapted to fault detection. The resulting bond graph model allows the application of model-based diagnosis methods to detect and, where possible, isolate faults. In this paper, diagnosis problems for energy systems are discussed by detailing existing diagnosis methods. The proposed modeling tool is then introduced with an illustration of different use cases and application examples. Diagnosis methods based on the bond graph model are presented, as well as the extension of these methods to models with uncertain parameters. Finally, the studied diagnosis method is applied to fault detection and isolation in the case study of an asynchronous motor.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_52-A_Robust_Method_for_Diagnostic_Energetic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Algorithm for Securing the Biometric Data Template in the Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101051</link>
        <id>10.14569/IJACSA.2019.0101051</id>
        <doi>10.14569/IJACSA.2019.0101051</doi>
        <lastModDate>2019-10-31T13:23:16.6030000+00:00</lastModDate>
        
        <creator>Taban Habibu</creator>
        
        <creator>Edith Talina Luhanga</creator>
        
        <creator>Anael Elikana Sam</creator>
        
        <subject>Biometric template; template-database; multiFernet; encryption-algorithm; decryption-algorithm; Twilio SMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>With current advances in technology, biometric templates provide a dependable solution to the problem of user verification in identity control systems. The template is saved in the database during enrollment and compared with the query information in the verification stage. Serious security and privacy concerns arise if a raw, unprotected template is saved in the database: an attacker can hack the template information to gain illicit access. A novel encryption-decryption algorithm utilizing the Model View Template (MVT) design pattern is developed to secure the biometric data template. The model manages information logically, the view shows the visualization of the data, and the template addresses the migration of data into pattern objects. The algorithm is based on the cryptographic module of the Fernet key instance. Fernet keys are combined into a multiFernet key to produce two encrypted files (a byte file and a text file). These files are incorporated with a Twilio message and securely preserved in the database. If an attacker tries to access the biometric data template in the database, the system alerts the user, stops the attacker from gaining unauthorized access, and cross-verifies the impersonator based on validation of ownership. This informs users and the authorities of how secure the individual biometric data template is, and provides a high level of security for individual data privacy.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_51-Developing_an_Algorithm_for_Securing_the_Biometric_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced, Modified and Secured RSA Cryptosystem based on n Prime Numbers and Offline Storage for Medical Data Transmission via Mobile Phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101050</link>
        <id>10.14569/IJACSA.2019.0101050</id>
        <doi>10.14569/IJACSA.2019.0101050</doi>
        <lastModDate>2019-10-31T13:23:16.5870000+00:00</lastModDate>
        
        <creator>Achi Harrisson Thiziers</creator>
        
        <creator>Haba Cisse Th&#233;odore</creator>
        
        <creator>J&#233;r&#233;mie T. Zoueu</creator>
        
        <creator>Babri Michel</creator>
        
        <subject>e-Health; medical data transmission; asymmetric cryptography; RSA algorithm; prime numbers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The transmission of medical data by mobile telephony is an innovation that constitutes m-health or, more generally, e-health. This form of telemedicine handles personal patient data that deserve protection when transmitted via an operator or private network, so that malicious parties cannot access them. This is where cryptography comes in, securing the transmitted medical data while preserving its confidentiality, integrity and authenticity. In this field of personal data security, public key (asymmetric) cryptography is becoming increasingly prevalent: it provides a public key to encrypt the transmitted message and a private key, linked to the first by formal mathematics, that only the final recipient holds to decrypt the message. The RSA algorithm of Rivest, Shamir and Adleman provides this asymmetric cryptography with a public key and a private key based on two prime numbers. However, the factorization of the RSA modulus N into these two primes can be discovered by a hacker, making the security of medical data vulnerable. In this article, we propose a more secure RSA algorithm with n primes and offline storage of the essential RSA parameters. We performed a triple encryption-decryption with these n prime numbers, which makes breaking the factorization of the modulus N more difficult. As a trade-off, the key generation time is longer than that of traditional RSA.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_50-Enhanced_Modified_and_Secured_RSA_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Time Series Imputation based on Average of Historical Vectors, Fitting and Smoothing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101049</link>
        <id>10.14569/IJACSA.2019.0101049</id>
        <doi>10.14569/IJACSA.2019.0101049</doi>
        <lastModDate>2019-10-31T13:23:16.5570000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Deymor Centty</creator>
        
        <subject>Univariate time series imputation; average of historical vectors; interpolation to nearest neighbors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This paper presents a novel model for univariate time series imputation of meteorological data based on three algorithms. The first, AHV (Average of Historical Vectors), estimates the set of NA values from historical vectors classified by seasonality. The second, iNN (Interpolation to Nearest Neighbors), adjusts the curve predicted by AHV so that it adequately fits the values immediately before and after the gap of NAs. The third, LANNf, smooths the curve interpolated by iNN so that the accuracy of the predicted data is improved. The results achieved by the model are very good, surpassing in several cases the algorithms against which it was compared.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_49-Model_for_Time_Series_Imputation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Understanding Internet of Things Security and its Empirical Vulnerabilities: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101048</link>
        <id>10.14569/IJACSA.2019.0101048</id>
        <doi>10.14569/IJACSA.2019.0101048</doi>
        <lastModDate>2019-10-31T13:23:16.5570000+00:00</lastModDate>
        
        <creator>Salim El Bouanani</creator>
        
        <creator>Omar Achbarou</creator>
        
        <creator>My Ahmed Kiram</creator>
        
        <creator>Aissam Outchakoucht</creator>
        
        <subject>Internet of things; IoT security; security audit; IoT architecture; IoT protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The Internet of Things is no longer a concept; it is a reality already changing our lives. It aims to interconnect almost all daily used devices so that they can exchange contextualized data and offer services adequately. Built on the existing Internet, IoT indisputably suffers from security issues that could threaten its evolution and its users’ interests. Starting from this fact, we try to define the main security threats within the IoT perimeter and propose pertinent solutions. To do so, we first establish a state of the art of the IoT definition, protocols, environment, architecture and security. Then, we expose a case study of a standard IoT platform to illustrate the impact of security on all IoT layers. Furthermore, the paper presents the results of a security audit of our implemented platform. Finally, based on our evaluation, we highlight several solutions as well as possible directions for future research.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_48-Towards_understanding_Internet_of_Things_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lexicon-based Bot-aware Public Emotion Mining and Sentiment Analysis of the Nigerian 2019 Presidential Election on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101047</link>
        <id>10.14569/IJACSA.2019.0101047</id>
        <doi>10.14569/IJACSA.2019.0101047</doi>
        <lastModDate>2019-10-31T13:23:16.5270000+00:00</lastModDate>
        
        <creator>Temitayo Matthew Fagbola</creator>
        
        <creator>Surendra Colin Thakur</creator>
        
        <subject>Nigeria; 2019 presidential_election; bots-awareness; EmoLex; lexicon_analysis; public_opinion; emotion_mining; sentiment_analysis; twitter; APC; PDP; win_prediction; muhammadu_buhari; atiku_abubaka</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Online social networks have been widely engaged as rich platforms to predict election outcomes in several countries of the world. The vast amount of readily available data on such platforms, coupled with the emerging power of natural language processing algorithms and tools, has made it possible to mine and generate foresight into the possible directions of an election’s outcome. In this paper, lexicon-based public emotion mining and sentiment analysis were conducted to predict the winner of the 2019 presidential election in Nigeria. 224,500 tweets associated with the two most prominent political parties in Nigeria, the People’s Democratic Party (PDP) and the All Progressive Congress (APC), and the two presidential candidates who represented these parties in the 2019 election, Atiku Abubakar and Muhammadu Buhari, were collected between 9th October 2018 and 17th December 2018 via Twitter’s streaming API. The tm and NRC libraries of the R environment were used for data cleaning and preprocessing. Botometer was used to detect automated bots in the preprocessed data, while the NRC Word-Emotion Association Lexicon (EmoLex) was used to generate distributions of the subjective public sentiments and emotions surrounding the election. Emotions were grouped into eight categories (sadness, trust, anger, fear, joy, anticipation, disgust, surprise) and sentiments into two (negative and positive) based on Plutchik’s emotion wheel. The results indicate a higher positive and a lower negative sentiment for APC than for PDP. Similarly, for the presidential candidates, Atiku has a slightly higher positive and a slightly lower negative sentiment than Buhari. These results identify APC as the predicted winning party and Atiku as the most preferred winner of the 2019 presidential election. These predictions were corroborated by the actual election results: APC emerged as the winning party, while Buhari and Atiku shared a very close vote margin. Hence, this research indicates that Twitter data can be used to predict election outcomes and other offline future events. Future research could investigate spatiotemporal dimensions of the prediction.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_47-Lexicon_based_Bot_Aware_Public_Emotion_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of User Awareness for the Detection of Phishing Emails</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101046</link>
        <id>10.14569/IJACSA.2019.0101046</id>
        <doi>10.14569/IJACSA.2019.0101046</doi>
        <lastModDate>2019-10-31T13:23:16.5100000+00:00</lastModDate>
        
        <creator>Mohammed I Alwanain</creator>
        
        <subject>Anti-phishing countermeasures; online fraud; evaluation experiments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Phishing attacks are among the most serious Internet criminal activities. They aim to make Internet users believe that they are dealing with a trusted entity, for the purpose of stealing sensitive information such as bank account or credit card details. Phishing costs Internet users millions of dollars each year. An effective method to prevent such attacks is improving the security awareness of Internet users, especially in light of the significant growth of online services. This paper discusses a real-world experiment that analyzes and monitors the phishing awareness of an organization’s users in order to improve it. The experiment targeted 1,500 users in the education sector. The results reveal that phishing awareness has a significant positive effect on users’ ability to distinguish phishing emails and websites, thereby avoiding attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_46-An_Evaluation_of_User_Awareness_for_the_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Sensitive Buses using the Firefly Algorithm for Optimal Multiple Types of Distributed Generations Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101045</link>
        <id>10.14569/IJACSA.2019.0101045</id>
        <doi>10.14569/IJACSA.2019.0101045</doi>
        <lastModDate>2019-10-31T13:23:16.4930000+00:00</lastModDate>
        
        <creator>Yuli Asmi Rahman</creator>
        
        <creator>Salama Manjang</creator>
        
        <creator>Yusran</creator>
        
        <creator>Amil Ahmad Ilham</creator>
        
        <subject>Firefly algorithm; time computation; real power loss index; voltage profile index; multi-type DG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Power loss is one performance indicator of an electric power system, and it can lead to poor voltage performance at the receiving end. Integrating distributed generation (DG) into the network has become one of the more powerful remedies. To get the maximum benefit from synchronizing the system with DG, it is necessary to determine the size, location, and type of DG. This study aims to determine the capacity and location of DG connections for DG of type I and type II. To this end, a metaheuristic solution based on the firefly algorithm (FA) is used; FA compensates for the long computational time that other metaheuristic algorithms require. To ensure that the selected load bus is the best DG connection location, the candidate load buses are filtered based on stability sensitivity. The proposed method is tested on the IEEE 30-bus system. The optimization results show a decrease in power loss and an increase in bus voltage, which improves system stability, by integrating three DG units. Validation of FA against an evolution-based algorithm shows a significant reduction in computational time.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_45-Selection_of_Sensitive_Buses_using_the_Firefly_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation Policy Statement on the Digital Evidence Storage using First Applicable Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101044</link>
        <id>10.14569/IJACSA.2019.0101044</id>
        <doi>10.14569/IJACSA.2019.0101044</doi>
        <lastModDate>2019-10-31T13:23:16.4800000+00:00</lastModDate>
        
        <creator>Achmad Syauqi</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Yudi Prayudi</creator>
        
        <subject>Testing; policy statement; rule; ABAC; digital evidence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Digital evidence storage is where digital evidence files are kept. Digital evidence is very vulnerable to damage, whether intentional or not, so digital evidence storage needs access control. Among the several access control models, ABAC (Attribute-Based Access Control) is a recent one that is widely used because of its flexibility: it allows policies to intersect across many attributes. This flexibility makes policies complex and can cause inconsistency and incompleteness. Testing access control before it is implemented is therefore a must, since access control is the main key to the security of a system, and this is especially true for digital evidence storage. Because the ABAC model intersects with many attributes, its policy statements must be tested to avoid inconsistencies and incompleteness. One tool for testing policy statements is ACPT (Access Control Policy Testing), which provides various algorithms for creating and testing policy statements. This study uses the first-applicable algorithm to test policy statements on digital evidence storage. The research successfully tested the policy statement and found no inconsistencies or incompleteness.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_44-Validation_Policy_Statement_on_the_Digital_Evidence_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MVC Frameworks Modernization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101043</link>
        <id>10.14569/IJACSA.2019.0101043</id>
        <doi>10.14569/IJACSA.2019.0101043</doi>
        <lastModDate>2019-10-31T13:23:16.4470000+00:00</lastModDate>
        
        <creator>Amine Moutaouakkil</creator>
        
        <creator>Samir Mbarki</creator>
        
        <subject>Framework; Architecture-Driven Modernization (ADM); Knowledge Discovery Model (KDM); Model-View-Controller (MVC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The use of web development frameworks has grown significantly, especially frameworks based on the Model-View-Controller (MVC) pattern. The ability to migrate web applications between the different available frameworks becomes more and more relevant, and automating the migration through transformations avoids the need to rewrite the code entirely. Architecture Driven Modernization (ADM) is the most successful approach for standardizing and automating the reengineering process. In this paper, we define an ADM approach to generate MVC web application models at the highest level of abstraction from Struts 2 and CodeIgniter models. To do this, we add the MVC concepts to the KDM metamodel and then specify a set of transformations to generate MVC KDM models. The proposal is validated by using our approach to transform CRUD (Create, Read, Update and Delete) application models from MVC frameworks to MVC KDM.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_43-MVC_Frameworks_Modernization_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues in Software Defined Networking (SDN): Risks, Challenges and Potential Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101042</link>
        <id>10.14569/IJACSA.2019.0101042</id>
        <doi>10.14569/IJACSA.2019.0101042</doi>
        <lastModDate>2019-10-31T13:23:16.4300000+00:00</lastModDate>
        
        <creator>Maham Iqbal</creator>
        
        <creator>Farwa Iqbal</creator>
        
        <creator>Fatima Mohsin</creator>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Fahad Ahmad</creator>
        
        <subject>SDN; Wireless SDN; Security Threats; AES; DES; FortNOX; TLS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>SDN (Software Defined Networking) is an architecture that aims to improve network control and flexibility. It is mainly associated with the OpenFlow protocol and with ODIN V2 for wireless communication. Its architecture is centralized, agile and programmatically configured. This paper presents a security analysis that enforces the protection of the GUI by requiring authentication, SSL/TLS integration and logging/security audit services. The role-based authorization of FortNOX and ciphers such as AES and DES are used to encrypt data and improve the security of the SDN environment. These techniques are useful for enhancing the security framework of the controller.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_42-Security_Issues_in_Software_Defined_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Weighted Associative Classification Algorithm without Preassigned Weight based on Ranking Hubs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101041</link>
        <id>10.14569/IJACSA.2019.0101041</id>
        <doi>10.14569/IJACSA.2019.0101041</doi>
        <lastModDate>2019-10-31T13:23:16.4170000+00:00</lastModDate>
        
        <creator>Siddique Ibrahim S P</creator>
        
        <creator>Sivabalakrishnan M</creator>
        
        <subject>Association rule mining; hub weight; classification; heart disease; attribute weight; associative classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Heart disease is among the leading causes of death worldwide: according to the WHO, more than 17 million people have died from heart disease in past years, and the mortality rate is expected to rise in the coming years. It is very difficult to diagnose a heart problem by merely observing the patient, so there is high demand for an efficient classifier model that helps physicians predict this threatening disease and save human lives. Many researchers have focused on novel classifier models based on Associative Classification (AC). However, most AC algorithms do not consider the significance of attributes in the database and treat every itemset equally. Moreover, weighted AC ignores the significance of itemsets, and rule evaluation suffers because of the support measure. In the proposed method, we introduce attribute weights that do not require manual assignment; instead, the weights are calculated from a link-based model. Finally, the performance of the proposed algorithm is verified on different medical datasets from the UCI repository against classical associative classification.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_41-An_Enhanced_Weighted_Associative_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Palm Vein Verification System based on Nonsubsampled Contourlet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101040</link>
        <id>10.14569/IJACSA.2019.0101040</id>
        <doi>10.14569/IJACSA.2019.0101040</doi>
        <lastModDate>2019-10-31T13:23:16.3830000+00:00</lastModDate>
        
        <creator>Amira Oueslati</creator>
        
        <creator>Kamel Hamrouni</creator>
        
        <creator>Nadia Feddaoui</creator>
        
        <creator>Safya Belghith</creator>
        
        <subject>Verification; palm-vein; nonsubsampled contourlet transform; region of interest; equal error rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This paper presents a new verification approach that verifies a person’s identity by an intrinsic characteristic, the palm vein, which is unique, universal and easy to capture. The first step of the system is to extract the region of interest (ROI), the most informative region of the palm. A coding step based on the nonsubsampled contourlet transform (NSCT) then produces a binary vector for each ROI, a matching step compares the representative vectors, and finally a decision is made in both identification and verification modes. The approach is tested on the CASIA multispectral database; the experiments prove the effectiveness of this coding system in verification mode, yielding an Equal Error Rate (EER) of 0.19%.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_40-Palm_Vein_Verification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Ant Colony Optimization for Multi-Agent based Intelligent Transportation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101039</link>
        <id>10.14569/IJACSA.2019.0101039</id>
        <doi>10.14569/IJACSA.2019.0101039</doi>
        <lastModDate>2019-10-31T13:23:16.3700000+00:00</lastModDate>
        
        <creator>Shamim Akhter</creator>
        
        <creator>Md. Nurul Ahsan</creator>
        
        <creator>Shah Jafor Sadeek Quaderi</creator>
        
        <subject>Intelligent Traffic Management System (ITMS); Simulation of Urban Mobility (SUMO); traffic simulation; Contraction Hierarchies Wrapper (CHWrapper); Dijkstra; A-star (A*); Deep-Neuro-Fuzzy Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This paper focuses on Simulation of Urban Mobility (SUMO) and real-time Traffic Management System (TMS) simulation for the evaluation, management, and design of Intelligent Transportation Systems (ITS). Such simulations are expected to offer prediction and on-the-fly feedback for better decision-making. In this regard, a new Intelligent Traffic Management System (ITMS) was proposed and implemented, in which a path from source to destination is selected by the Dijkstra algorithm and the road segment weights are calculated by real-time analysis (a Deep-Neuro-Fuzzy framework) of data collected from infrastructure systems, mobile and distributed technologies, and socially-built systems. We aim to simulate the ITMS in a pragmatic style with the microscopic, open-source traffic simulation model SUMO, and discuss the challenges related to modeling and simulation for ITMS. We also introduce a new model, Ant Colony Optimization (ACO), in the SUMO tool to support a multi-agent-based collaborative decision-making environment for ITMS. In addition, we evaluate the performance of the ACO model against the existing built-in optimum route-finding SUMO models, Contraction Hierarchies Wrapper (CHWrapper), A-star (A*), and Dijkstra, for optimum route choice. The results highlight that ACO performs better than the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_39-Modeling_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperspectral Image Classification using Support Vector Machine with Guided Image Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101038</link>
        <id>10.14569/IJACSA.2019.0101038</id>
        <doi>10.14569/IJACSA.2019.0101038</doi>
        <lastModDate>2019-10-31T13:23:16.3370000+00:00</lastModDate>
        
        <creator>Shambulinga M</creator>
        
        <creator>G. Sadashivappa</creator>
        
        <subject>Support Vector Machine (SVM); hyperspectral images; guided image filter; Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Hyperspectral images are used to identify and detect objects on the earth’s surface. Classifying these hyperspectral images is a difficult task due to the large number of spectral bands. This high-dimensionality problem is addressed using feature reduction and extraction techniques. However, many challenges remain in classifying the data with good accuracy and computational time. Hence, in this paper, a method is proposed for hyperspectral image classification based on the support vector machine (SVM) along with a guided image filter and principal component analysis (PCA). In this work, PCA is used for the extraction and reduction of spectral features in hyperspectral data. The extracted spectral features are classified with SVM under different kernels into classes such as vegetation fields and buildings. The experimental results show that SVM with the Radial Basis Function (RBF) kernel gives better classification accuracy than the other kernels. Moreover, classification accuracy is further improved with a guided image filter by incorporating spatial features.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_38-Hyperspectral_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology Learning from Relational Databases: Transforming Recursive Relationships to OWL2 Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101037</link>
        <id>10.14569/IJACSA.2019.0101037</id>
        <doi>10.14569/IJACSA.2019.0101037</doi>
        <lastModDate>2019-10-31T13:23:16.3070000+00:00</lastModDate>
        
        <creator>Mohammed Reda CHBIHI LOUHDI</creator>
        
        <creator>Hicham BEHJA</creator>
        
        <subject>Relational databases; ontologies; OWL2; recursive relationship; transitivity; concept hierarchy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Relational databases (RDB) are widely used as a backend for information systems and contain interesting structured data (schema and data). In ontology learning, RDB can be used as a knowledge source. Multiple approaches exist for building ontologies from RDB; they mainly use schema mapping to transform RDB components into ontologies. Most existing approaches do not deal with recursive relationships, which can encapsulate rich semantics. In this paper, two techniques are proposed for transforming recursive relationships into OWL2 components: (1) a transitivity mechanism and (2) a concept hierarchy. The main objective of this work is to build richer ontologies with deep taxonomies from RDB.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_37-Ontology_Learning_from_Relational_Databases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distance based Sweep Nearest Algorithm to Solve Capacitated Vehicle Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101036</link>
        <id>10.14569/IJACSA.2019.0101036</id>
        <doi>10.14569/IJACSA.2019.0101036</doi>
        <lastModDate>2019-10-31T13:23:16.2900000+00:00</lastModDate>
        
        <creator>Zahrul Jannat Peya</creator>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Tanzima Sultana</creator>
        
        <creator>M. M. Hafizur Rahman</creator>
        
        <subject>Capacitated vehicle routing problem; sweep algorithm; sweep nearest algorithm; genetic algorithm; ant colony optimization; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The Capacitated Vehicle Routing Problem (CVRP) is an optimization problem that aims to find minimal travel distances to serve customers with a homogeneous fleet of vehicles. Clustering customers and then assigning individual vehicles, known as the cluster-first route-second (CFRS) method, is a widely studied way of solving CVRP. Cluster formation, the first of the two CFRS phases, is important for a better CVRP solution. Sweep (SW) clustering is the pioneering CFRS method, and it depends solely on the customers’ polar angles: the customers are sorted by polar angle, and a cluster starts with the customer having the smallest polar angle and is completed by considering the others according to polar angle. The Sweep Nearest (SN) algorithm, an extension of Sweep, also initializes a cluster with the smallest-polar-angle customer but inserts other customers based on the nearest-neighbor approach. This study investigates a different way of clustering based on the nearest-neighbor approach. The proposed Distance-based Sweep Nearest (DSN) method starts clustering from the farthest customer point and grows each cluster based on the nearest-neighbor concept; unlike SW and SN, it does not rely on the customers’ polar angles. To identify the effectiveness of the proposed approach, SW, SN, and DSN were implemented in this study to solve benchmark CVRPs. For route optimization of the individual vehicles, Genetic Algorithm, Ant Colony Optimization, and Particle Swarm Optimization were applied to the clusters formed by SW, SN, and DSN. The experimental results show that the proposed DSN outperformed SN and SW in most cases, and that DSN with PSO was the best-suited method for CVRP.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_36-Distance_based_Sweep_Nearest_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Learning Proposal Supported by Reasoning based on Instances of Learning Objects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101035</link>
        <id>10.14569/IJACSA.2019.0101035</id>
        <doi>10.14569/IJACSA.2019.0101035</doi>
        <lastModDate>2019-10-31T13:23:16.2600000+00:00</lastModDate>
        
        <creator>Benjamin Maraza-Quispe</creator>
        
        <creator>Olga Melina Alejandro-Oviedo</creator>
        
        <creator>Walter Choquehuanca-Quispe</creator>
        
        <creator>Alejandra Hurtado-Mazeyra</creator>
        
        <creator>Walter Fernandez-Gambarini</creator>
        
        <subject>Learning; management; intelligence; styles; instances; objects; reasoning; model; personalized</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In recent years, new research has appeared in the area of education focused on the use of information technology and the Internet to promote online learning, breaking many barriers of traditional education such as space, time, quantity, and coverage. However, we have found that these new proposals present problems such as linear access to content, rigid teaching structures, and methods that are inflexible with respect to the user’s learning style. Therefore, we propose an intelligent model of personalized learning management in a virtual simulation environment based on instances of learning objects, using a similarity function through the weighted multidimensional Euclidean distance. The proposed model achieves an efficiency of 99.5%, which is superior to other models such as Simple Logistic with 98.99% efficiency, Naive Bayes with 97.98% efficiency, Tree J48 with 96.98% efficiency, and Neural Networks with 94.97% efficiency. For this, we have designed and implemented the experimental platform MIGAP (Intelligent Model of Personalized Learning Management), which focuses on the assembly of mastery courses in Newtonian Mechanics. Additionally, applying this model in other areas of knowledge will allow better identification of each student’s learning style, with the objective of providing resources, activities, and educational services that are flexible to each student’s learning style, improving the quality of current educational services.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_35-E_Learning_Proposal_Supported_by_Reasoning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CREeLS: Crowdsourcing based Requirements Elicitation for eLearning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101034</link>
        <id>10.14569/IJACSA.2019.0101034</id>
        <doi>10.14569/IJACSA.2019.0101034</doi>
        <lastModDate>2019-10-31T13:23:16.2430000+00:00</lastModDate>
        
        <creator>Nancy M Rizk</creator>
        
        <creator>Mervat H. Gheith</creator>
        
        <creator>Ahmed M. Zaki</creator>
        
        <creator>Eman S. Nasr</creator>
        
        <subject>Requirements engineering; requirements elicitation; crowdsourcing; eLearning systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Crowdsourcing is the process of having a task performed by the crowd. Because of the evolution of the Web, crowdsourcing has recently been used in the field of Requirements Engineering to help simplify its activities. Among the information systems highly affected by the Web’s evolution are eLearning Systems (eLS). eLS have special characteristics, such as a large number of diverse users who may be geographically dispersed. To the best of our knowledge, there is little evidence of a crowdsourcing-based requirements elicitation approach especially tailored for eLS that addresses these special characteristics. In this paper, we attempt to fill this gap. We present Crowdsourcing-based Requirements Elicitation for eLS (CREeLS), which comprises a framework of the necessary crowdsourcing elements, suggesting specific tools for each element, and a phased approach to implement the framework. We evaluated our approach by analyzing real-life user reviews and extracting keywords that represent user requirements using topic modeling techniques. The results were then evaluated by manual text review, and the extracted features were found to be coherent. CREeLS achieves 0.66 precision and 0.79 recall. Hence, we contend that CREeLS can help requirements engineers of eLS analyze user opinions and identify the most common user requirements for better software evolution.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_34-CREeLS_Crowdsourcing_based_Requirements_Elicitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Agriculture Land Mapping using Rapid Application Development (RAD): A Case Study from Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101033</link>
        <id>10.14569/IJACSA.2019.0101033</id>
        <doi>10.14569/IJACSA.2019.0101033</doi>
        <lastModDate>2019-10-31T13:23:16.2300000+00:00</lastModDate>
        
        <creator>Antonius Rachmat Chrismanto</creator>
        
        <creator>Halim Budi Santoso</creator>
        
        <creator>Argo Wibowo</creator>
        
        <creator>Rosa Delima</creator>
        
        <creator>Reinald Ariel Kristiawan</creator>
        
        <subject>Farmland; precision agriculture; land mapping system; dashboard; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The use of Information and Communication Technology (ICT) in agriculture has become one of the steps to improve agricultural efficiency, effectiveness, and productivity, and it is also expected to encourage the creation of precision agriculture. Precision agriculture improves the efficiency of operational costs, increasing margins in the production of agricultural products through ICT. One of the problems that often arises in agriculture is the management of agricultural land in each farmer group area. This information is closely related to the needs for agricultural production facilities and infrastructure, such as fertilizers, seeds, and other resources. A Web Mapping System is one of the systems that assist in land or area mapping. In this study, the Web Mapping System is expected to help map agricultural land owned by farmers who are members of farmer groups. The developed system stores spatial data on the farmland of members and farmer groups. The Web Mapping System was developed using the Rapid Application Development (RAD) method, which involves several iterative processes. The result of this study is a Web Mapping System for agricultural land. With this application, farmers can find out the status of the land being cultivated or owned. In addition, the Web Mapping System can record the status of the existing land in a farmer group and the need for agricultural production facilities and infrastructure. Further, the Web Mapping System also provides information in a dashboard that can help farmer groups manage the land owned by each farmer who is a member of the group.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_33-Developing_Agriculture_Land_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Efficient Steganography Model using Lifting based DWT and Modified-LSB Method on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101032</link>
        <id>10.14569/IJACSA.2019.0101032</id>
        <doi>10.14569/IJACSA.2019.0101032</doi>
        <lastModDate>2019-10-31T13:23:16.2130000+00:00</lastModDate>
        
        <creator>Mahesh A A</creator>
        
        <creator>Raja K.B</creator>
        
        <subject>Discrete Wavelet Transformation (DWT); steganography; Modified Least Significant Bit (MLSB) Method; XOR Method; FPGA; cover image; secret image; PSNR; MSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Data transmission with information hiding is a challenging task in today’s world. Steganography techniques are essential to protect secret data or images from attackers. Steganography is the process of hiding information within another medium in data communication. In this research work, the design of an efficient steganography model using lifting-based DWT and a Modified-LSB method on FPGA is proposed. The stegano module includes a Discrete Wavelet Transformation (DWT) with a lifting scheme for the cover image, encryption with bit mapping for the secret image, an embedding module using the Modified Least Significant Bit (MLSB) method, and an inverse DWT to generate the stegano image. The recovery module includes the DWT, a decoding module with pixel extraction and bit retrieval, and decryption to generate the recovered secret image. The steganography model is designed using Verilog-HDL on the Xilinx platform and implemented on an Artix-7 Field Programmable Gate Array (FPGA). The hardware resource constraints of the proposed model, such as area, time, and power utilization, are tabulated. The performance of the work is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) for different cover and secret images with better quality. The proposed steganography model operates at high speed, which improves communication performance.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_32-Design_of_an_Efficient_Steganography_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Packet Delivery Ratio during Rain Attenuation for Long Range Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101031</link>
        <id>10.14569/IJACSA.2019.0101031</id>
        <doi>10.14569/IJACSA.2019.0101031</doi>
        <lastModDate>2019-10-31T13:23:16.1800000+00:00</lastModDate>
        
        <creator>MD Hossinuzzaman</creator>
        
        <creator>Dahlila Putri Dahnil</creator>
        
        <subject>LoRa; packet delivery ratio; wireless mesh network; atmospheric attenuation; rain attenuation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Countries with tropical climates experience various weather changes throughout the year; the weather can drastically change from extremely hot and humid to a complete downpour within a twenty-four-hour cycle. Different atmospheric conditions such as atmospheric gas attenuation, cloud attenuation, and rain attenuation can interrupt electromagnetic signals and weaken radio signals. The amount of attenuation depends mostly on the raindrops: the rate of rain attenuation depends on the composition, temperature, orientation, shape, and fall velocity of the raindrops. In this paper, we measure the effect of different atmospheric attenuations, particularly due to rain, in non-line-of-sight environments and propose a LoRa (Long Range) based wireless mesh network to enhance the packet delivery ratio (PDR). We experimented with the LoRa-based wireless network by measuring the packet delivery ratio at different times of the day when there was no rain, and performed further experiments while it was raining. The experiments conclude that PDR is affected by different volumes of rain, decreasing significantly from 100% without rain to 89.5% during rain. The results also show that the LoRa device can successfully transmit up to 1.7 km in a line-of-sight environment and around 1.3 km in a non-line-of-sight environment without rain. The results show the effect of atmospheric attenuation on a LoRa wireless network, which becomes a factor to consider when designing any LoRa application for outdoor deployment.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_31-Enhancement_of_Packet_Delivery_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JsonToOnto: Building Owl2 Ontologies from Json Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101030</link>
        <id>10.14569/IJACSA.2019.0101030</id>
        <doi>10.14569/IJACSA.2019.0101030</doi>
        <lastModDate>2019-10-31T13:23:16.1670000+00:00</lastModDate>
        
        <creator>Sara Sbai</creator>
        
        <creator>Mohammed Reda Chbihi Louhdi</creator>
        
        <creator>Hicham Behja</creator>
        
        <creator>Rabab Chakhmoune</creator>
        
        <subject>JSON documents; OWL2 ontologies; ontology generation; transformation rules; information theory; classification; decision trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The amount of data circulating through the web has grown rapidly in recent years. This data is available as semi-structured or unstructured documents, especially JSON documents; however, these documents lack semantic description. In this paper, we present a method to automatically extract an OWL2 ontology from a JSON document. We propose a set of transformation rules to transform JSON elements into ontology components. Our approach also analyzes the content of JSON documents to discover categorizations in order to generate a class hierarchy. Finally, we evaluate our approach by conducting experiments on several JSON documents. The results show that the obtained ontologies are rich in terms of taxonomic relationships.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_30-JsonToOnto_Building_Owl2_Ontologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Social Media Utilization and Challenges in the Governmental Sector for Crisis Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101028</link>
        <id>10.14569/IJACSA.2019.0101028</id>
        <doi>10.14569/IJACSA.2019.0101028</doi>
        <lastModDate>2019-10-31T13:23:16.1330000+00:00</lastModDate>
        
        <creator>Waleed Afandi</creator>
        
        <subject>Component; civil protection; hajj; social media; Saudi Arabia; governmental sector; flood crisis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The use of social media applications, tools, and services enables advanced services in daily routines, activities, and work environments. Nowadays, disconnection from social media services is a disadvantage due to their increasing use and functionality. Social media applications and services have provided methods and routines for communication that range from posting, reposting, commenting, and interacting to live communication, reaching a mass population with minimal time, effort, and expense compared with traditional media systems and channels. The benefits of using social media can assist in providing better communication and guidance services for civil protection within governmental sectors, as reported by different research studies. Governmental agencies have found social media to be critically important in different situations for directing, educating, and engaging people during various events. This study investigates the use of social media services in governmental sectors in Saudi Arabia to outline the opportunities and challenges faced, given the challenging situations encountered annually during the Hajj and Ramadan rituals and sporadic flood crisis events. This research focuses on defining the current standing and challenges of using social media services for mass communication and civil engagement during hazardous and challenging events in Saudi Arabia. The results of this study will be used as a roadmap for future investigation in this regard.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_28-Investigating_Social_Media_Utilization_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Project Management Metamodel Construction Regarding IT Departments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101029</link>
        <id>10.14569/IJACSA.2019.0101029</id>
        <doi>10.14569/IJACSA.2019.0101029</doi>
        <lastModDate>2019-10-31T13:23:16.1330000+00:00</lastModDate>
        
        <creator>HAMZANE Ibrahim</creator>
        
        <creator>Belangour Abdessamad</creator>
        
        <subject>MDA; MDE; SCRUM; PRINCE 2; PMBOK; IT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Given the fast pace of technological progress, the need for project management continues to grow in terms of methodology and new concepts. In this article, we build a framework for generating metamodels and apply it to project management to produce a generic project management metamodel. This approach is based on two project management methodologies: a predictive method (e.g., PRINCE 2) and an Agile method (e.g., SCRUM). The goal of this research is to validate and apply this methodology to all the components of IT governance, and then to aggregate the resulting metamodels into a global metamodel covering all IT governance domains.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_29-Project_Management_Metamodel_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Achieving High Privacy Protection in Location-based Recommendation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101027</link>
        <id>10.14569/IJACSA.2019.0101027</id>
        <doi>10.14569/IJACSA.2019.0101027</doi>
        <lastModDate>2019-10-31T13:23:16.1030000+00:00</lastModDate>
        
        <creator>Tahani Alnazzawi</creator>
        
        <creator>Reem Alotaibi</creator>
        
        <creator>Nermin Hamza</creator>
        
        <subject>Recommender models; attacker; privacy protection; dummy; encryption; checkpoint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In recent years, privacy has received great attention in the research community. In Location-based Recommendation Systems (LbRSs), the user is constrained to build queries based on his or her actual position to search for the closest points of interest (POIs). An external attacker can analyze the sent queries or track the actual position of the LbRS user to reveal his or her personal information. Consequently, ensuring high privacy protection (which includes location privacy and query privacy) is fundamental. In this paper, we propose a model that guarantees high privacy protection for LbRS users. The model works through three components. The first component (the selector) uses a new location privacy protection approach, namely the smart dummy selection (SDS) approach, which generates a strong dummy position with high resistance to semantic position attacks. The second component (the encryptor) uses an encryption-based approach that guarantees a high level of query privacy against sampling query attacks. The last component (the constructor) constructs the protected query that is sent to the LbRS server. Our proposed model is supported by a checkpoint technique to ensure a high-availability quality attribute, and it yields competitive results compared to similar models under various privacy and performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_27-Achieving_High_Privacy_Protection_in_Location.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Built-in Criteria Analysis for Best IT Governance Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101026</link>
        <id>10.14569/IJACSA.2019.0101026</id>
        <doi>10.14569/IJACSA.2019.0101026</doi>
        <lastModDate>2019-10-31T13:23:16.0870000+00:00</lastModDate>
        
        <creator>HAMZANE Ibrahim</creator>
        
        <creator>Belangour Abdessamad</creator>
        
        <subject>IT Governance; COBIT; ISO 38500; CMMI; ITIL; TOGAF; PMBOK; PRINCE 2; SCRUM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The implementation of IT governance is important to lead and evolve the information system in agreement with stakeholders. This requirement is seriously amplified in the digital era, considering all the new technologies that have been launched recently (Big Data, Artificial Intelligence, Machine Learning, Deep Learning, etc.). Without a good rudder, every company risks getting lost in an endless sea of unreachable goals. This paper aims to provide a decision-making system that allows professionals to choose an IT governance framework suited to their desired criteria and the importance of each, based on a multi-criteria analysis method (WSM); we implemented a case study based on a Moroccan company. Moreover, we present a better understanding of IT governance aspects such as standards and best practices. This paper is part of a global objective that aims to build an integrated generated metamodel for a better approach to IT governance.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_26-A_Built_in_Criteria_Analysis_for_Best_IT_Governance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Respondent’s Haptic on Academic Universities Websites of Pakistan Measuring Usability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101025</link>
        <id>10.14569/IJACSA.2019.0101025</id>
        <doi>10.14569/IJACSA.2019.0101025</doi>
        <lastModDate>2019-10-31T13:23:16.0570000+00:00</lastModDate>
        
        <creator>Irum Naz Sodhar</creator>
        
        <creator>Baby Marina</creator>
        
        <creator>Azeem Ayaz Mirani</creator>
        
        <subject>Usability testing; survey; questionnaire; higher education websites; guidelines webcredible; operating system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This study is based on a survey in which four higher education (university) websites were selected for usability testing, drawing on the responses and experience of eighty students of the same age group, who were investigated through a pre-survey and a post-survey based on an eight-item questionnaire on website usability. The survey was conducted on laptops running the Windows 8.1 operating system. The questionnaire covered two factors: the first contained gender, nationality, and respondent information, and the second used the response options Strongly Agree, Agree, Undecided, Disagree, and Strongly Disagree. The factor structure replicated across the study with the data collected during the usability tests in the survey. There was evidence of usability from the existing questionnaires, including website usability testing conducted by applying Webcredible guidelines. The overall results were acceptable and meaningful for future researchers and web developers. The questionnaire can be used to understand website quality and how well websites work.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_25-The_Respondents_Haptic_on_Academic_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Immunity-based Error Containment Algorithm for Database Intrusion Response Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101024</link>
        <id>10.14569/IJACSA.2019.0101024</id>
        <doi>10.14569/IJACSA.2019.0101024</doi>
        <lastModDate>2019-10-31T13:23:16.0400000+00:00</lastModDate>
        
        <creator>Nacim YANES</creator>
        
        <creator>Ayman M. MOSTAFA</creator>
        
        <creator>Nasser ALSHAMMARI</creator>
        
        <creator>Saad A. ALANAZI</creator>
        
        <subject>Database security; artificial immune system; error containment algorithm; database auditing; apoptotic signal; necrotic signal; secret sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The immune system has received special attention as a potential source of inspiration for innovative approaches to solving database security issues and building artificial immune systems. Database security issues need to be correctly identified to ensure that suitable responses are taken. This paper proposes an immunity-based error containment algorithm for providing an optimal response to detected intrusions. The objective of the proposed algorithm is to reduce false-positive alarms to a minimum, since not all incidents are malicious in nature. The proposed algorithm is based on apoptotic and necrotic signals, which are part of the immune structure of the human immune system. Apoptotic signals define low-level alerts that could result from legitimate users but could also be the prerequisites for an attack, while necrotic signals define high-level alerts that result from actual successful attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_24-An_Immunity_based_Error_Containment_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bioinspired Immune System for Intrusions Detection System in Self Configurable Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101023</link>
        <id>10.14569/IJACSA.2019.0101023</id>
        <doi>10.14569/IJACSA.2019.0101023</doi>
        <lastModDate>2019-10-31T13:23:15.8830000+00:00</lastModDate>
        
        <creator>Noor Mohd</creator>
        
        <creator>Annapurna Singh</creator>
        
        <creator>H.S. Bhadauria</creator>
        
        <subject>Networks security; intrusion detection system; AIS algorithm; KMP algorithm; self-configurable networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In the last couple of years, computer systems have become more vulnerable to external attacks, and computer security has become a prime concern for every organization. Intrusion Detection Systems (IDS) in self-configurable networks have played a vital role over the last few decades in guarding LANs. In this work, an IDS for self-configurable networks is deployed based on a bio-inspired immune system. Such an IDS monitors data and network activity and alerts security heads whenever suspicious activity is observed. Computer security is a vital and common application domain for adaptive, swarm-based frameworks: a security system should protect a machine or a collection of machines from unauthorized intruders and be capable of counteracting external activity, much as the immune system defends the body against attacking microorganisms. An artificial immune system is a software system that mimics parts of the behavior of the human immune system to protect computer networks from viruses and similar cyber attacks. This paper demonstrates the need for a new substring-search algorithm based on bio-inspired algorithms, with experiments aimed at building a network intrusion detection system that helps secure a machine or clusters of machines from unauthorized intruders. In this paper, the IDS for self-configurable networks is implemented using a bio-inspired immune system together with the KMP algorithm as a model IDS.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_23-Bioinspired_Immune_System_for_Intrusions_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Band and Multi-Parameter Reconfigurable Slotted Patch Antenna with Embedded Biasing Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101022</link>
        <id>10.14569/IJACSA.2019.0101022</id>
        <doi>10.14569/IJACSA.2019.0101022</doi>
        <lastModDate>2019-10-31T13:23:15.8530000+00:00</lastModDate>
        
        <creator>Manoj Kumar Garg</creator>
        
        <creator>Jasmine Saini</creator>
        
        <subject>Multi-band; Multi-parameter reconfigurability; EBN; UWB; PIN diode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>RF PIN diodes are used to achieve reconfigurability in frequency, polarization, and radiation pattern. The antenna can operate in different bands by controlling the ON and OFF states of two PIN diodes through the embedded biasing network (EBN). The antenna can be used for ultra-wideband (UWB) applications (1.0 GHz to 15.2 GHz) with a resonant frequency of 9.2 GHz. Besides ultra-wideband, it can also be switched to other bands (C, X, and Ku) with different operating frequencies (5.75 GHz, 12.3 GHz, and 15.5 GHz) under other biasing combinations. With this type of antenna, both linear and circular polarization are achievable, and radiation-pattern reconfigurability in the vertical plane has also been achieved. A single design of the proposed antenna is thus optimized for multi-band and multi-parameter reconfigurable applications.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_22-Multi_Band_and_Multi_Parameter_Reconfigurable_Slotted_Patch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Faculty’s Social Media usage in Higher Education Embrace Change or Left Behind</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101021</link>
        <id>10.14569/IJACSA.2019.0101021</id>
        <doi>10.14569/IJACSA.2019.0101021</doi>
        <lastModDate>2019-10-31T13:23:15.8370000+00:00</lastModDate>
        
        <creator>Seren Basaran</creator>
        
        <subject>Academic staff; age; purpose; social media experience; social networking sites; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>This paper addresses faculty members’ (academic staff) viewpoints on the benefits, barriers, and concerns of utilizing social media, and also investigates differences with respect to their social media experience in teaching, their age, and their purpose for using social media. The data were collected through an adopted questionnaire from 324 faculty members of two public and two private universities in the northern part of Cyprus and were analyzed through descriptive statistics, independent-samples t-tests, and one-way ANOVA. Results revealed that, although faculty members appreciate the benefits of using social media, they have concerns and are aware of barriers almost to the same degree as benefits. Those who are familiar with social media and have used it before think more about the concerns than those who haven’t used it. Older faculty members have fewer concerns about using social media than their younger and middle-aged colleagues. Furthermore, the purpose of using social media (personal, educational, professional) has no effect on faculty members’ viewpoints on its benefits, concerns, and barriers. The abundant literature on social media usage from the students’ perspective, and the relatively limited number of studies examining teachers’ and instructors’ points of view, particularly in developing countries, constitute the primary motivation for this research. Faculty members should be encouraged to adopt social media for instructional and professional purposes, and misconceptions about using social media and its barriers should be eliminated to enhance the conscious utilization of social media for teaching.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_21-Facultys_Social_Media_usage_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Detection System to Help Navigating Visual Impairments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101020</link>
        <id>10.14569/IJACSA.2019.0101020</id>
        <doi>10.14569/IJACSA.2019.0101020</doi>
        <lastModDate>2019-10-31T13:23:15.8070000+00:00</lastModDate>
        
        <creator>Cahya Rahmad</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Rawansyah</creator>
        
        <creator>Tanggon Kalbu</creator>
        
        <subject>Object detection; corner detection; computer vision; visual impairments; blind people</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>As of 2018, there were 216.6 million people with severe visual impairments and 38.5 million blind people in the world, and these numbers increase every year. Computer vision technology, which became popular through its use in automatic driving systems to detect surrounding objects, can also be a solution to help blind people. This can be done by implementing the Harris Corner Detection method, which detects the corners of objects in a captured image. The number and location of the corners in the detection result can be used to predict an object&#39;s position and distance; to predict the distance, a triangle rule is used. From the results of the implementation, it was found that the accuracy of object detection using the Harris Corner Detection method is 88%. Therefore, this application, running on a smartphone, can help detect objects based on the number and location of the corners detected.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_20-Object_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Merge of X-ETL and XCube towards a Standard Hybrid Method for Designing Data Warehouses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101019</link>
        <id>10.14569/IJACSA.2019.0101019</id>
        <doi>10.14569/IJACSA.2019.0101019</doi>
        <lastModDate>2019-10-31T13:23:15.7900000+00:00</lastModDate>
        
        <creator>Nawfal El Moukhi</creator>
        
        <creator>Ikram El Azami</creator>
        
        <creator>Abdelaaziz Mouloudi</creator>
        
        <creator>Abdelali Elmounadi</creator>
        
        <subject>Data warehouse design; hybrid method; relational model; multidimensional model; star model; X-ETL; semantic analysis; XCube assist</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>There is no doubt that the hybrid approach is the best paradigm for designing effective multidimensional schemas. Its strength lies in its ability to combine the top-down and bottom-up approaches, thus exploiting the advantages of both. In this paper, the authors identify and analyze the different hybrid methods developed for building data warehouses. The analysis revealed that the existing methods are too complicated and time-consuming in the deployment phase. To solve this problem, the authors introduce a new hybrid method that is easy to use and saves a large amount of deployment time. This new method consists of two main steps: the first, data-driven step analyzes the source models using the X-ETL method and produces star models; the second, requirements-driven step performs a semantic analysis of the needs expressed in natural language using the XCube Assist method. This analysis improves the quality of the star models generated by the X-ETL method without the intervention of a designer.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_19-Merge_of_X_ETL_and_XCube.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis between a Photovoltaic System with Two-Axis Solar Tracker and One with a Fixed Base</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101018</link>
        <id>10.14569/IJACSA.2019.0101018</id>
        <doi>10.14569/IJACSA.2019.0101018</doi>
        <lastModDate>2019-10-31T13:23:15.7600000+00:00</lastModDate>
        
        <creator>Omar Freddy Chamorro Atalaya</creator>
        
        <creator>Dora Yvonne Arce Santillan</creator>
        
        <creator>Martin Diaz Choque</creator>
        
        <subject>Photovoltaic system; solar cells; displacement; two axes; performance; orientation; stored energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In this article, a comparative analysis is made of the energy stored by a photovoltaic system with an Arduino-controlled two-axis solar tracker and the energy stored by a fixed-base photovoltaic system. The aim is to use electrical energy efficiently, since the optimal installation of photovoltaic systems plays an important role in their efficiency. The comparative analysis determined that the performance of the photovoltaic system with the solar tracker is 24.06% higher than that of the fixed-base photovoltaic system. A correlational analysis was also carried out on the data collected for stored energy with respect to time. The system with the solar tracker shows a low correlation of 0.334, since its stored energy does not depend on the time at which the energy is captured: if the direction of the sun&#39;s rays varies during the day, the system always seeks to face the sun&#39;s rays as directly as possible, guaranteeing sustained and flexible energy storage. The fixed-base photovoltaic system, in contrast, has a moderate inverse correlation of -0.489: as the hours of the day pass, the orientation of the sun&#39;s rays changes and, since the solar cells cannot be reoriented (being fixed-base), the energy captured becomes limited as the day advances. Based on these results, it is expected that photovoltaic systems with solar trackers will be implemented in rural areas of Peru that lack electrical services, since they are more efficient than fixed-base photovoltaic systems.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_18-Comparative_Analysis_between_a_Photovoltaic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prediction-based Curriculum Analysis using the Modified Artificial Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101017</link>
        <id>10.14569/IJACSA.2019.0101017</id>
        <doi>10.14569/IJACSA.2019.0101017</doi>
        <lastModDate>2019-10-31T13:23:15.7430000+00:00</lastModDate>
        
        <creator>Reir Erlinda E Cutad</creator>
        
        <creator>Bobby D. Gerardo</creator>
        
        <subject>Binary artificial bee colony; rule-driven mechanism; prediction model; curriculum analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Due to the vast amount of student information and the need for quick retrieval, establishing databases is among the top priorities of IT infrastructure in learning institutions. However, most of these institutions do not utilize them for knowledge discovery, which can aid in informed decision-making, investigation of teaching and learning outcomes, and development of prediction models in particular. Prediction models have been utilized in almost all areas, and improving the accuracy of such a model is the aim of this study. Thus, the study presents a Scoutless Rule-driven binary Artificial Bee Colony (SRABC) as a searching strategy to enhance the accuracy of a prediction model for curriculum analysis. Experimental verification revealed that SRABC paired with K-Nearest Neighbor (KNN) increases prediction accuracy from 94.14% to 97.59%, outperforming pairings with Support Vector Machine (SVM) and Logistic Regression (LR). SRABC efficiently selects 14 out of 60 variables through a majority-voting scheme using data on the BSIT students of Davao del Norte State College (DNSC), Davao del Norte, Philippines.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_17-A_Prediction_based_Curriculum_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Collaborative-Filtering Approach with Implicit and Explicit Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101016</link>
        <id>10.14569/IJACSA.2019.0101016</id>
        <doi>10.14569/IJACSA.2019.0101016</doi>
        <lastModDate>2019-10-31T13:23:15.7130000+00:00</lastModDate>
        
        <creator>Fitri Marisa</creator>
        
        <creator>Sharifah Sakinah Syed Ahmad</creator>
        
        <creator>Zeratul Izzah Mohd Yusoh</creator>
        
        <creator>Tubagus Mohammad Akhriza</creator>
        
        <creator>Wiwin Purnomowati</creator>
        
        <creator>Rakesh Kumar Pandey</creator>
        
        <subject>Recommender system; collaborative-filtering; user-based; item-based; implicit-data; explicit-data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>A challenge in developing a collaborative filtering (CF)-based recommendation system is the item cold-start problem, which causes the data to be sparse and reduces the accuracy of recommendations. Therefore, to produce high accuracy, a match is needed between the type of data and the approach used. The two CF approaches, user-based and item-based, can both process two types of data: implicit and explicit. This work aims to find the combination of approach and data type that produces the highest accuracy. Cosine similarity is used to measure the similarity between users and between items, and the Mean Absolute Error (MAE) is measured to determine the accuracy of a recommendation. Testing three groups of data grouped by sparseness shows that the best accuracy is achieved by an explicit-data-based approach, which has the smallest MAE value. The average MAE values are 0.1032 for user-based (implicit data), 0.2320 for user-based (explicit data), 0.3495 for item-based (implicit data), and 0.0926 for item-based (explicit data). The best accuracy is thus achieved by the item-based (explicit-data) approach, which has the smallest average MAE value.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_16-Performance_Comparison_of_Collaborative_Filtering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering Gaps in Saudi Education for Digital Health Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101015</link>
        <id>10.14569/IJACSA.2019.0101015</id>
        <doi>10.14569/IJACSA.2019.0101015</doi>
        <lastModDate>2019-10-31T13:23:15.6800000+00:00</lastModDate>
        
        <creator>Adeeb Noor</creator>
        
        <subject>Health informatics; education; information technology; American Medical Informatics Association (AMIA); Saudi Arabia; vision 2030</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The growing complexity of healthcare systems worldwide and the medical profession’s increasing dependency on information technology for accurate practice and treatment call for specific standardized education in health informatics programming, and accreditation of health informatics programs based on core competencies is progressing at an international level. This study investigates the state of affairs in health informatics programs within the Kingdom of Saudi Arabia (KSA) to determine (1) how well international standards are being met and (2) what further development is needed in light of KSA’s recent eHealth initiatives. This descriptive study collected data from publicly available resources to investigate Health Informatics programs at 109 Saudi institutions. Information about coursework offered at each institution was compared with American Medical Informatics Association (AMIA) curriculum guidelines. Of 109 institutions surveyed, only a handful offered programs specifically in health informatics. Of these, most programs did not match the coursework recommended by AMIA, and the majority of programs mimicked existing curricula from other countries rather than addressing unique Saudi conditions. Education in health informatics in KSA appears to be scattered, non-standardized, and somewhat outdated. Based on these findings, there is a clear opportunity for greater focus on core competencies within health informatics programs. The Saudi digital transformation (eHealth) initiative, as part of Saudi Vision 2030, clearly calls for implementation of internationally accepted health informatics competencies in education programs and healthcare practice, which can only occur through greater collaboration between medical and technology educators and strategic partnerships with companies, medical centers, and governmental institutions.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_15-Discovering_Gaps_in_Saudi_Education_for_Digital_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Brain Imaging to Gauge Difficulties in Processing Ambiguous Text by Non-native Speakers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101014</link>
        <id>10.14569/IJACSA.2019.0101014</id>
        <doi>10.14569/IJACSA.2019.0101014</doi>
        <lastModDate>2019-10-31T13:23:15.6670000+00:00</lastModDate>
        
        <creator>Imtiaz Hussain Khan</creator>
        
        <subject>Prepositional-phrase attachment ambiguity; near-infrared spectroscopy; Arabic speakers; hierarchical clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Processing ambiguous text is an ever-challenging problem for humans. In this study, we investigate the problems native-Arabic speakers face in processing non-native English text that involves ambiguity. As a case study, we focus on prepositional-phrase (PP) attachment ambiguity, whereby a PP can be attached to the preceding noun (low attachment) or the preceding verb (high attachment). We set up an experiment in which human participants read text on a computer screen while their brain activity is monitored using near-infrared spectroscopy. Participants read two types of text: one involving PP-attachment ambiguity and an unambiguous text used as a control for comparison. The brain activity data for the ambiguous and control texts are clustered using the hierarchical-clustering technique available in Weka. The data reveal that Arabic speakers face more difficulty in processing ambiguous text compared to unambiguous text.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_14-Using_Brain_Imaging_to_Gauge_Difficulties.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Embedded Vision System based on FPGA-SoC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101013</link>
        <id>10.14569/IJACSA.2019.0101013</id>
        <doi>10.14569/IJACSA.2019.0101013</doi>
        <lastModDate>2019-10-31T13:23:15.6500000+00:00</lastModDate>
        
        <creator>Ahmed Alsheikhy</creator>
        
        <creator>Yahia Fahem Said</creator>
        
        <subject>Embedded vision; video processing architecture; real-time; All Programmable System on Chip (APSoC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Advances in microelectronics over the last decades have provided new tools and devices each year, making it possible to design ever more efficient artificial vision systems capable of meeting the constraints imposed on them. All the elements are thus in place to make artificial vision one of the most promising, even unifying, scientific &quot;challenges&quot; of our time. This is because the development of a vision system requires knowledge from several disciplines, from signal processing to computer architecture, through probability theory, linear algebra, computer science, artificial intelligence, and analog and digital electronics. The work proposed in this paper lies at the intersection of the embedded systems and image processing domains. The objective is to propose an embedded vision system for video acquisition and processing that adds hardware accelerators in order to extract image characteristics. With the introduction of reconfigurable platforms, such as the new All Programmable System on Chip (APSoC) platforms, and the advent of new high-level Electronic Design Automation (EDA) tools to configure them, FPGA-SoC-based image processing has emerged as a practical solution for most computer vision problems. In this paper, we are interested in the design and implementation of an embedded vision system. This design enables video streaming from the camera to the monitor and real-time hardware processing on the FPGA-SoC.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_13-Design_of_Embedded_Vision_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic Ontology for Disaster Trail Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101012</link>
        <id>10.14569/IJACSA.2019.0101012</id>
        <doi>10.14569/IJACSA.2019.0101012</doi>
        <lastModDate>2019-10-31T13:23:15.6330000+00:00</lastModDate>
        
        <creator>Ashfaq Ahmad</creator>
        
        <creator>Roslina Othman</creator>
        
        <creator>Mohamad Fauzan</creator>
        
        <creator>Qazi Mudassar Ilyas</creator>
        
        <subject>Semantic web; ontology; information retrieval; disaster trail management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Disasters, whether natural or human-made, leave a lasting impact on human lives and require mitigation measures. In the past, millions of human beings lost their lives and properties in disasters. Information and Communication Technology provides many solutions. The problem with disaster management systems developed so far is their lack of semantics, which causes them to fail to produce dynamic inferences. Here comes the role of semantic web technology, which helps to retrieve useful information. A semantic web-based intelligent and self-administered framework utilizes XML, RDF, and ontologies for a semantic presentation of data. The ontology establishes fundamental rules for searching data in the unstructured world, i.e., the World Wide Web; these rules are then utilized for data extraction and reasoning. Many disaster-related ontologies have been studied; however, none conceptualizes the domain comprehensively. Some of the domain ontologies are intended for a precise end goal, such as disaster plans; others have been developed for emergency operation centers or for the recognition and characterization of objects in a calamity scene. A few ontologies depend on upper ontologies that are excessively abstract and very difficult to grasp for individuals who are not conversant with upper-ontology theories. The semantic web-based disaster trail management ontology developed here covers almost all vital facets of disasters, such as disaster type, location, time, losses including casualties and infrastructure damage, services, service providers, relief items, and so forth. The objectives of this research were to identify the requirements of a disaster ontology, to construct the ontology, and to evaluate the ontology developed for Disaster Trail Management. The ontology was assessed effectively via competency questions: externally by domain experts and internally with the help of SPARQL queries.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_12-A_Semantic_Ontology_for_Disaster_Trail_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Haze Effects on Satellite Remote Sensing Imagery and their Corrections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101011</link>
        <id>10.14569/IJACSA.2019.0101011</id>
        <doi>10.14569/IJACSA.2019.0101011</doi>
        <lastModDate>2019-10-31T13:23:15.6200000+00:00</lastModDate>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Shaun Quegan</creator>
        
        <creator>Suliadi Firdaus Sufahani</creator>
        
        <creator>Hamzah Sakidin</creator>
        
        <creator>Mohd Mawardy Abdullah</creator>
        
        <subject>Haze effects; haze removal; remote sensing; accuracy; visibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Imagery recorded by satellite sensors operating at visible wavelengths can be contaminated by atmospheric haze that originates from large-scale biomass burning. Such contamination can reduce the reliability of the imagery, so having an effective method for removing it is crucial. The principal aim of this study is to investigate the effects of haze on remote sensing imagery and to develop a method for removing them. To gain a better understanding of the behaviour of haze, its effects on satellite imagery were initially studied. A methodology for removing haze based on haze subtraction and filtering was then developed, and evaluated by means of the signal-to-noise ratio (SNR) and classification accuracy. The results show that the haze removal method is able to improve haze-affected imagery both qualitatively and quantitatively.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_11-Haze_Effects_on_Satellite_Remote_Sensing_Imagery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Data Analytics Market Basket Model for Transactional Data Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101010</link>
        <id>10.14569/IJACSA.2019.0101010</id>
        <doi>10.14569/IJACSA.2019.0101010</doi>
        <lastModDate>2019-10-31T13:23:15.6030000+00:00</lastModDate>
        
        <creator>Aaron A Izang</creator>
        
        <creator>Nicolae Goga</creator>
        
        <creator>Shade O. Kuyoro</creator>
        
        <creator>Olujimi D. Alao</creator>
        
        <creator>Ayokunle A. Omotunde</creator>
        
        <creator>Adesina K. Adio</creator>
        
        <subject>Association rule mining; big data analytics; concept drift; market basket analysis; transactional data streams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Transactional data streams (TDS) are incremental in nature, which complicates the mining process. The complications arise from challenges such as infinite length, feature evolution, concept evolution and concept drift. Tracking concept drift is very difficult, yet very important for Market Basket Analysis (MBA) applications. Hence the need for a strategy to accurately determine the suitability of item pairs, among the billions available, to solve the concept drift challenge of TDS in MBA. In this work, a Scalable Data Analytics Market Basket Model (SDAMBM) that handles concept drift issues in MBA was developed. 1,112,000 transactional records were extracted from a grocery store using an Extraction, Transformation and Loading approach, and 556,000 instances of the data were simulated from a cloud database. The Calibev function was used to calibrate the data nodes. Lugui 7.2.9 and the Comprehensive R Archive Network were used for table pivoting between the simulated data and the data collected. The SDAMBM was developed using a combination of components from the elixir big data architecture, the research conceptual model and consumer behavior theories. Toad Modeler was then used to assemble the model. The SDAMBM was implemented using Monarch and Tableau to generate insights and visualizations of the transactions. Intelligent interpreters for the auto decision grid, selectivity mechanism and customer insights were used as metrics to evaluate the model. The result showed that 79% of the customers in the SDAMBM consumption pattern preferred buying snacks and drinks, as shown in the visualization report through the SDAMBM dashboard. Finally, this study provided a data analytics approach for managing the concept drift challenge in customers’ buying patterns, as well as a distinctive model for managing concept drift. It is therefore recommended that the SDAMBM be adopted by business ventures, organizations and retailers for enhancing the analysis of customers’ buying and consumption patterns.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_10-Scalable_Data_Analytics_Market_Basket_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crypto-Steganographic LSB-based System for AES-Encrypted Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101009</link>
        <id>10.14569/IJACSA.2019.0101009</id>
        <doi>10.14569/IJACSA.2019.0101009</doi>
        <lastModDate>2019-10-31T13:23:15.5870000+00:00</lastModDate>
        
        <creator>Mwaffaq Abu-Alhaija</creator>
        
        <subject>Steganography; cryptography; cryptographic steganography; crypto-steganographic system; Least-Significant Bit Replacement (LSB-method); stego-key; public-key cryptography; Advanced Encryption System (AES); Diffie-Hellman protocol; key exchange; concealment of information; PRNG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The purpose of this work is to increase the level of concealment of information from unauthorized access by pre-encrypting it and hiding it in multimedia files such as images. A crypto-steganographic information protection algorithm using the LSB method was implemented. The algorithm hides AES pre-encrypted confidential information, in the form of text or images, in target container image files. The method conceals data in the least significant pixel bits of the target image files. The proposed method relies on the Diffie-Hellman public key exchange protocol for securely exchanging the stego-key used for LSB as well as the public key used for encrypting the secret information. The algorithm ensures that the visual quality of the image remains unchanged, with no distortions perceived by the human eye. The algorithm also complicates the detection of concealed information embedded in the target image through the use of a PRNG as an enhancement to LSB. The proposed system achieved competitive results: on average, a Peak Signal-to-Noise Ratio (PSNR) of 96.3 dB and a Mean Square Error (MSE) of 0.00408. The results obtained demonstrate that the proposed system offers high payload capabilities with immunity against visual degradation of the resultant stego images.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_9-Crypto_Steganographic_LSB_based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Land use Detection in Nusajaya using Higher-Order Modified Geodesic Active Contour Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101008</link>
        <id>10.14569/IJACSA.2019.0101008</id>
        <doi>10.14569/IJACSA.2019.0101008</doi>
        <lastModDate>2019-10-31T13:23:15.5730000+00:00</lastModDate>
        
        <creator>N Alias</creator>
        
        <creator>M. N. Mustaffa</creator>
        
        <creator>F. Mustapha</creator>
        
        <subject>Higher-order geodesic active contour (GAC); segmentation; land use; finite difference method; numerical methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Urban development is a global phenomenon, and Nusajaya, in Johor, is one of the most rapidly developing cities due to increasing land demand and population growth. Moreover, land-use changes are considered one of the major components of current environmental monitoring strategies. In this context, image segmentation and mathematical modelling offer essential tools for analyzing land use detection. Image segmentation is known to be the most important and difficult task in image analysis. Nonlinear fourth-order models have been shown to perform well in recovering smooth regions, which motivates us to propose a fourth-order modified geodesic active contour (GAC) model. In the proposed model, a modified signed pressure force (SPF) function is defined to segment inhomogeneous satellite images. Simulations of the fourth-order modified GAC model using numerical methods based on the higher-order finite difference method (FDM) are illustrated. Matlab R2015a on Windows 7 Ultimate, running on an Intel (R) Core (TM) i5-3230M @ 2.60GHz CPU with 8 GB RAM, served as the computational platform for the simulation. Qualitative and quantitative differences between the modified SPF function and other SPF functions are shown as a comparison. Land use detection is thus very useful for local governments and urban planners in enhancing future sustainable development plans for Nusajaya.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_8-Land_use_Detection_in_Nusajaya_using_Higher_Order.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Region-wise Ranking for One-Day International (ODI) Cricket Teams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101007</link>
        <id>10.14569/IJACSA.2019.0101007</id>
        <doi>10.14569/IJACSA.2019.0101007</doi>
        <lastModDate>2019-10-31T13:23:15.5400000+00:00</lastModDate>
        
        <creator>Akbar Hussain</creator>
        
        <creator>Yan Qiang</creator>
        
        <creator>M. Abdul Qadoos Bilal</creator>
        
        <creator>Kun Wu</creator>
        
        <creator>Zijuan Zhao</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <subject>Batting; bowling and fielding strength; PageRank; region strength; team’s strength</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>In cricket, the region plays a significant role in ranking teams. The International Cricket Council (ICC) uses an ad-hoc points system to rank cricket teams, based entirely on the number of matches won and lost. The ICC ignores the strengths and weaknesses of teams across regions. Even though the relative accuracy of the ad-hoc ranking is high, it does not provide a clearly defined method of ranking. We propose a Region-wise Team Rank (RTR) and a Region-wise Weighted Team Rank (RWTR) to rank cricket teams. The intuition is to award more points to a team that wins a match against a stronger team than to a team that wins against a weaker team, and vice versa. The proposed method considers not only the number of region-wise wins and losses but also incorporates the region-wise strength and weakness of a team when assigning ranking scores. Finally, the resulting ranking list is compared to the official ICC ranking.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_7-Region_wise_Ranking_for_ODI_Cricket_Teams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Methods to Improve the Educational Process of Medicine in Bulgaria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101006</link>
        <id>10.14569/IJACSA.2019.0101006</id>
        <doi>10.14569/IJACSA.2019.0101006</doi>
        <lastModDate>2019-10-31T13:23:15.5270000+00:00</lastModDate>
        
        <creator>Galya N Georgieva-Tsaneva</creator>
        
        <subject>Serious educational games; learning process; gaming training; pedagogical methods; innovative technological means; medical education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The introduction of modern technologies into the educational process of medical students is a challenge of the new era in education, one that can increase students&#39; success and give them confidence in their capabilities. The paper considers the use of physiological clinical record databases as an effective means for students to gain prior experience that will be of use in their professional work. The paper describes the introduction of serious educational games into the learning process of students in Bulgaria. Serious games, and the pedagogical methods applied in them, are an innovative technological means of developing the individual, social and cognitive qualities on which a person&#39;s professional realization depends. The paper presents the results of a survey conducted at the universities of medical education in Bulgaria. Respondents&#39; opinions on their desire to use serious games in their training, and on how the games affect them, have been studied and are presented.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_6-Effective_Methods_to_Improve_the_Educational_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development Trends of Online-based Aural Rehabilitation Programs for Children with Cochlear Implant Coping with the Fourth Industrial Revolution and Implication in Speech-Language Pathology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101005</link>
        <id>10.14569/IJACSA.2019.0101005</id>
        <doi>10.14569/IJACSA.2019.0101005</doi>
        <lastModDate>2019-10-31T13:23:15.4930000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Cochlear implant; future promising technologies; smart diagnostics; Online-based aural rehabilitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The Korea Research Foundation selected the miniaturization and development of home care devices as future promising technologies in the biotechnology (BT) area along with the Fourth Industrial Revolution. Accordingly, innovative changes are expected in the rehabilitation field, including the development of smart diagnostic and treatment devices, and rehabilitation characterized by individualization, precision, miniaturization, portability, and accessibility is expected to draw attention. It has been continuously reported over the past decade that hearing-impaired toddlers who became able to hear speech through cochlear implantation and hearing rehabilitation before the age of 3, a critical period of language development, show a language development pattern similar to that of healthy toddlers. As a result, the need has emerged to develop language rehabilitation programs customized for cochlear implant recipients. In other words, since the improved hearing ability provided by a cochlear implant does not guarantee speech perception and language development, intensive rehabilitation and education are needed for patients to recognize the heard speech as meaningful language for communication. Nevertheless, a literature search on domestic and foreign cases revealed that there are insufficient language rehabilitation programs for cochlear implant patients, let alone programs customized for them, in clinical practice. This study examined the trends and marketability of online-based aural rehabilitation programs for cochlear implant patients and described the implications for language rehabilitation. This study suggested the following implications for developing a customized aural rehabilitation program. It is necessary to secure and develop content that can implement “a hand-held hospital” by using medical devices and mobile devices owned by consumers, transcending time and space. It is also necessary to develop a cochlear implant hearing rehabilitation training program suitable for native Korean speakers.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_5-Development_Trends_of_Online_based_Aural_Rehabilitation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Texture Mapping of In-Vehicle Camera Image in 3D Map Creation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101004</link>
        <id>10.14569/IJACSA.2019.0101004</id>
        <doi>10.14569/IJACSA.2019.0101004</doi>
        <lastModDate>2019-10-31T13:23:15.4800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Texture mapping; high spatial resolution of satellite imagery data; 3D map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>A method for texture mapping of in-vehicle camera images in 3D map creation is proposed. Top views of ground cover targets can be mapped easily: aerial photos and high-spatial-resolution satellite imagery allow the creation of top views of ground cover targets and of maps, which can be used for pedestrian navigation. Side views of ground cover targets, on the other hand, are not so easy to obtain. In this paper, two methods are proposed. One is to use photos acquired with cameras mounted on dedicated cars. The other is to use high-spatial-resolution satellite imagery such as IKONOS, Orbview, etc. Through experiments with the aforementioned two methods, it is found that texture mapping of ground cover targets can be done efficiently with both proposed methods.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_4-Method_for_Texture_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Cross-Platform Technologies for Data Delivery in Regional Web Surveys in the Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101003</link>
        <id>10.14569/IJACSA.2019.0101003</id>
        <doi>10.14569/IJACSA.2019.0101003</doi>
        <lastModDate>2019-10-31T13:23:15.4630000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Vladimir Belov</creator>
        
        <creator>Pavel Pushkin</creator>
        
        <creator>Pavel Kolyasnikov</creator>
        
        <creator>Sergey Malykh</creator>
        
        <subject>Web-surveys; mass regional polls; various platforms and browsers; cross-platform technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>Web surveys are a popular form of collecting primary data for various studies. However, mass regional polls have their own characteristics: various platforms and browsers must be taken into account, as well as network speeds when rural areas remote from large centers are involved. Ensuring guaranteed data delivery under these conditions requires the right choice of technology for implementing the surveys. The paper presents the results of analyzing the robustness of the chosen technologies to various regional conditions, for a web survey conducted within one week at schools throughout the Russian Federation. The survey involved 20,000 real educators. The paper describes the technologies used and provides information about the browsers and operating systems used by respondents. The absence of failures in data delivery confirms the effectiveness of the solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_3-Study_of_Cross_Platform_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interpolation of Single Beam Echo Sounder Data for 3D Bathymetric Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101002</link>
        <id>10.14569/IJACSA.2019.0101002</id>
        <doi>10.14569/IJACSA.2019.0101002</doi>
        <lastModDate>2019-10-31T13:23:15.4470000+00:00</lastModDate>
        
        <creator>Claudio Parente</creator>
        
        <creator>Andrea Vallario</creator>
        
        <subject>Interpolation; bathymetric model; 3D model; digital depth model; kriging; radial basis function; Geographic Information System (GIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>By transmitting sound waves into water and measuring the time interval between the emission and return of a pulse, a single beam echo sounder determines the depth of the sea. To obtain a bathymetric model representing the sea floor continuously, interpolation is necessary to process the irregularly spaced measured points resulting from echo sounder acquisition and to calculate depths in unsampled areas. Several interpolation methods are available in the literature, and the most suitable one cannot be chosen a priori but must be evaluated each time. This paper compares different interpolation methods for processing single beam echo sounder data of the Gulf of Pozzuoli (Italy) to achieve a 3D model. The experiments are carried out in a GIS (Geographic Information System) environment (software: ArcGIS 10.3 and its Geostatistical Analyst extension by ESRI). The most accurate digital depth model is chosen using automatic cross validation. Radial basis function and kriging prove to be the best interpolation methods for the considered dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_2-Interpolation_of_Single_Beam_Echo_Sounder_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Public Sentiment of Medicine by Mining Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0101001</link>
        <id>10.14569/IJACSA.2019.0101001</id>
        <doi>10.14569/IJACSA.2019.0101001</doi>
        <lastModDate>2019-10-31T13:23:15.2900000+00:00</lastModDate>
        
        <creator>Daisuke Kuroshima</creator>
        
        <creator>Tina Tian</creator>
        
        <subject>Twitter; social media; data mining; public health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(10), 2019</description>
        <description>The paper presents a computational method that mines, processes and analyzes Twitter data to detect public sentiment toward medicine. Self-reported patient data were collected over a period of three months by mining the Twitter feed, resulting in more than 10,000 tweets used in the study. Machine learning algorithms are used for automatic classification of public sentiment on selected drugs, and various learning models are compared. This work demonstrates a practical case of utilizing social media to identify customer opinions and build a drug effectiveness detection system. Our model has been validated on a tweet dataset with a precision of 70.7%. In addition, the study examines the correlation between patient symptoms and their choices of medication.</description>
        <description>http://thesai.org/Downloads/Volume10No10/Paper_1-Detecting_Public_Sentiment_of_Medicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Memory Parallel Fourth-Order IADEMF Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100979</link>
        <id>10.14569/IJACSA.2019.0100979</id>
        <doi>10.14569/IJACSA.2019.0100979</doi>
        <lastModDate>2019-10-03T09:43:42.3530000+00:00</lastModDate>
        
        <creator>Noreliza Abu Mansor</creator>
        
        <creator>Norma Alias</creator>
        
        <creator>Kamal Zulkifle</creator>
        
        <creator>Mohammad Khatim Hasan</creator>
        
        <subject>Fourth-order method; finite difference; red-black ordering; distributed memory architecture; parallel performance evaluations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The fourth-order finite difference Iterative Alternating Decomposition Explicit Method of Mitchell and Fairweather (IADEMF4) sequential algorithm has demonstrated its ability to perform with high accuracy and efficiency for the solution of a one-dimensional heat equation with Dirichlet boundary conditions.  This paper develops the parallelization of the IADEMF4, by applying the Red-Black (RB) ordering technique. The proposed IADEMF4-RB is implemented on multiprocessor distributed memory architecture based on Parallel Virtual Machine (PVM) environment with Linux operating system.  Numerical results show that the IADEMF4-RB accelerates the convergence rate and largely improves the serial time of the IADEMF4. In terms of parallel performance evaluations, the IADEMF4-RB significantly outperforms its counterpart of the second-order (IADEMF2-RB), as well as the benchmarked fourth-order classical iterative RB methods, namely, the Gauss-Seidel (GS4-RB) and the Successive Over-relaxation (SOR4-RB) methods.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_79-A_Distributed_Memory_Parallel_Fourth_Order.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards A Proactive System for Predicting Service Quality Degradations in Next Generation of Networks based on Time Series</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100978</link>
        <id>10.14569/IJACSA.2019.0100978</id>
        <doi>10.14569/IJACSA.2019.0100978</doi>
        <lastModDate>2019-10-03T09:43:42.2900000+00:00</lastModDate>
        
        <creator>Errais Mohammed</creator>
        
        <creator>Rachdi Mohamed</creator>
        
        <creator>Al Sarem Mohammed</creator>
        
        <creator>Abdel Hamid Mohamed Emara</creator>
        
        <subject>Next Generation of Network (NGN); network management; enhanced Telecom Operation Management (eTOM) frameworks; prediction; time series; Ip Multimedia Subsystem (IMS); Service Level Agreement (SLA); Quality Of Service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The architecture of the Next Generation of Networks (NGN) aims to diversify operators’ offers with value-added services. To do this, NGN provides a heterogeneous architecture for service deployment, which poses significant challenges for end-to-end service assurance. For this purpose, we propose in this work a proactive autonomous system capable of ensuring an acceptable quality level according to Service Level Agreement (SLA) requirements: a system able to predict any QoS degradation by means of a time-series prediction model adapted to NGN.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_78-Towards_A_Proactive_System_for_Predicting_Service_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Agent Platform based Wallet for Preventing Double Spending in Offline e-Cash</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100977</link>
        <id>10.14569/IJACSA.2019.0100977</id>
        <doi>10.14569/IJACSA.2019.0100977</doi>
        <lastModDate>2019-09-30T12:36:57.9430000+00:00</lastModDate>
        
        <creator>Irwan </creator>
        
        <creator>Armein Z. R. Langi</creator>
        
        <creator>Emir Husni</creator>
        
        <subject>e-Cash; double-spending; MAPW; CPN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Electronic cash (or e-cash) research has been going on for more than three decades since it was first proposed. Various schemes and methods have been proposed to improve privacy and security in e-cash, but one security issue remains less discussed, mainly in offline e-cash, namely double-spending. Generally, the mechanism for dealing with double-spending in offline e-cash is to perform double-spending identification when depositing the coin. Even though this mechanism succeeds in identifying the double-spender, it cannot prevent double-spending. This paper proposes the Mobile Agent Platform based Wallet (MAPW) to overcome the double-spending issue in offline e-cash. MAPW uses the autonomy and cooperation of agents to provide protection against malicious agents, counterfeit coins and duplicate coins. The model has been verified using Colored Petri Nets (CPN) and has proven successful in preventing double-spending and in countering malicious agents and counterfeit coins.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_77-Mobile_Agent_Platform_based_Wallet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thai Agriculture Products Traceability System using Blockchain and Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100976</link>
        <id>10.14569/IJACSA.2019.0100976</id>
        <doi>10.14569/IJACSA.2019.0100976</doi>
        <lastModDate>2019-09-30T12:36:57.9300000+00:00</lastModDate>
        
        <creator>Thattapon Surasak</creator>
        
        <creator>Nungnit Wattanavichean</creator>
        
        <creator>Chakkrit Preuksakarn</creator>
        
        <creator>Scott C.-H. Huang</creator>
        
        <subject>Blockchain; internet of things; supply chain management; product traceability; distributed database; data integrity; ourSQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In this paper, we successfully designed and developed a Thai agriculture products traceability system using blockchain and the Internet of Things. Blockchain, a distributed database, is used in our proposed traceability system to enhance transparency and data integrity. OurSQL is added as another layer to ease the query process of the blockchain database; the proposed system is therefore user-friendly, which cannot be found in an ordinary blockchain database. A website and an Android application have been developed to show the tracking information of the product. The blockchain database coupled with the Internet of Things gives a number of benefits to our traceability system because all of the collected information is gathered in real time and kept in a highly secure database. Our system could have a major impact on food traceability, make supply chain management more reliable, and rebuild public awareness in Thailand of food safety and quality control.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_76-Thai_Agriculture_Products_Traceability_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending Conditional Preference Networks to Handle Changes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100975</link>
        <id>10.14569/IJACSA.2019.0100975</id>
        <doi>10.14569/IJACSA.2019.0100975</doi>
        <lastModDate>2019-09-30T12:36:57.9130000+00:00</lastModDate>
        
        <creator>Eisa Alanazi</creator>
        
        <subject>AI; changes; CP-nets; preferences; decision making; product configuration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Conditional Preference Networks (CP-nets) are a compact and natural model to represent conditional qualitative preferences. In CP-nets, the set of variables is fixed in advance. That is, the set of alternatives available during the decision process is always the same no matter how long the process is. In many configuration and interactive problems, it is expected that some variables will be included or excluded during the configuration process as users show interest or boredom in some aspects of the problem. Representing and reasoning with such changes is critical to the success of the application; it is therefore important to have a model capable of dynamically including or excluding variables. In this work, we introduce active CP-nets (aCP-nets) as an extension of CP-nets where variable participation is governed by a set of activation requirements. In particular, we introduce an activation status to the CP-net variables and analyze two possible semantics of aCP-nets along with their consistency requirements.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_75-Extending_Conditional_Preference_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Academic Performance Applying NNs: A Focus on Statistical Feature-Shedding and Lifestyle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100974</link>
        <id>10.14569/IJACSA.2019.0100974</id>
        <doi>10.14569/IJACSA.2019.0100974</doi>
        <lastModDate>2019-09-30T12:36:57.8970000+00:00</lastModDate>
        
        <creator>Shithi Maitra</creator>
        
        <creator>Sakib Eshrak</creator>
        
        <creator>Md. Ahsanul Bari</creator>
        
        <creator>Abdullah Al-Sakin</creator>
        
        <creator>Rubana Hossain Munia</creator>
        
        <creator>Nasrin Akter</creator>
        
        <creator>Zabir Haque</creator>
        
        <subject>Educational Data Mining (EDM); Exploratory Data Analysis (EDA); median and mode imputation; inferential statistics; t-test; Chi-squared independence test; ANOVA-test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Automation has made it possible to garner and preserve students’ data, and modern advances in data science enthusiastically mine this data to predict performance, to the interest of both tutors and tutees. Academic excellence is a phenomenon resulting from a complex set of criteria originating in psychology, habits and, according to this study, lifestyle and preferences–justifying machine learning as ideal for classifying academic soundness. In this paper, computer science majors’ data have been gleaned consensually by surveying at Ahsanullah University, situated in Bangladesh. Visually aided exploratory analysis revealed interesting propensities as features, whose significance was further substantiated by statistically inferential Chi-squared (Χ2) independence tests and independent-samples t-tests for categorical and continuous variables respectively, on median/mode-imputed data. The initially relaxed p-value retained all exploratorily analyzed features, but gradual rigidification exposed the most powerful features by fitting neural networks of decreasing complexity, i.e., having 24, 20 and finally 12 hidden neurons. Statistical inference uniquely helped shed weak features prior to training, thus saving time and the generally large computational power required to train expensive predictive models. The k-fold cross-validated, hyper-parametrically tuned, robust models performed with average accuracies between 90% and 96% and an average 89.21% F1-score on the optimal model, with the incremental improvement in models proven by statistical ANOVA.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_74-Prediction_of_Academic_Performance_Applying_NNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Sessions Mechanism for Decentralized Cash on Delivery System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100973</link>
        <id>10.14569/IJACSA.2019.0100973</id>
        <doi>10.14569/IJACSA.2019.0100973</doi>
        <lastModDate>2019-09-30T12:36:57.8970000+00:00</lastModDate>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Xuan Son Ha</creator>
        
        <creator>Tan Tai Phan</creator>
        
        <creator>Phuong Nam Trieu</creator>
        
        <creator>Quoc Nghiep Nguyen</creator>
        
        <creator>Duy Pham</creator>
        
        <creator>Thai Tam Huynh</creator>
        
        <creator>Hai Trieu Le</creator>
        
        <subject>Blockchain; cash on delivery; multi-sessions; decentralized system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>To date, cash on delivery (COD) is one of the most popular payment methods in developing countries thanks to the blossoming of customer-to-customer e-commerce. With the widespread adoption of very small business models and the Internet, online shopping has become part of people’s daily activity. People browse for desirable products in the comfort of their homes and ask the online vendor to have a shipper deliver the merchandise to their doorstep. COD then allows customers to pay in cash when the product is delivered to their desired location. Since customers receive goods before making a payment, COD is therefore considered a payment system. However, the crucial issue that previous research has not yet addressed is that existing models only support a single delivering session at a time. More precisely, if the current buyer is not available to receive the goods, the shipper has to wastefully wait for the complete payment and cannot start shipping other merchandise. Tracking systems seem to handle this issue poorly. To address it, we propose a multi-session mechanism, which combines blockchain technology, smart contracts and the Hyperledger Fabric platform to achieve distributed and transparent processing across delivering sessions in decentralized markets. Our proposed mechanism ensures the efficiency of the delivering process. We release our source code for further reproducibility and development. We conclude that the integration of the multi-session mechanism and blockchain technology will yield significant efficiency gains across several disciplines.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_73-Multi_Sessions_Mechanism_for_Decentralized_Cash.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Ontology-Driven Information Retrieving Chatbot for Fashion Brands</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100972</link>
        <id>10.14569/IJACSA.2019.0100972</id>
        <doi>10.14569/IJACSA.2019.0100972</doi>
        <lastModDate>2019-09-30T12:36:57.8830000+00:00</lastModDate>
        
        <creator>Aisha Nazir</creator>
        
        <creator>Muhammad Yaseen Khan</creator>
        
        <creator>Tafseer Ahmed</creator>
        
        <creator>Syed Imran Jami</creator>
        
        <creator>Shaukat Wasi</creator>
        
        <subject>Artificial intelligence; semantic web; chatbots; fashion; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Chatbots or conversational agents are among the most prominent and widely employed artificial assistants on online social media. These bots converse with humans in audio, visual, or textual formats. It is quite understandable that users are keenly interested in swift and reasonably correct information when hunting for a desired product, so that their precious time is not wasted surfing multiple websites and business portals. In this paper, we present a novel incremental approach for building a chatbot for fashion brands based on the semantic web. We organized a dataset of 5,000 questions and answers about the top-10 brands in the fashion domain, which covers information about new arrivals, sales, packages, discounts, exchange/return policies, etc. We have also developed a dialogue interface for querying the system. The results generated against the queries are thoroughly evaluated on the criteria of time, context, history, duration, turns, significance, relevance, and fall-back questions.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_72-A_Novel_Approach_for_Ontology-Driven_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scale and Resolution Invariant Spin Images for 3D Object Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100971</link>
        <id>10.14569/IJACSA.2019.0100971</id>
        <doi>10.14569/IJACSA.2019.0100971</doi>
        <lastModDate>2019-09-30T12:36:57.8670000+00:00</lastModDate>
        
        <creator>Jihad H’roura</creator>
        
        <creator>Aissam Bekkari</creator>
        
        <creator>Driss Mammass</creator>
        
        <creator>Ali Bouzit</creator>
        
        <creator>Patrice Méniel</creator>
        
        <creator>Alamin Mansouri</creator>
        
        <creator>Michael Roy</creator>
        
        <subject>3D object; recognition; spin image; resolution; scaling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Until the last decades, researchers thought that teaching a computer how to recognize a bunny, for example, in a complex scene was almost impossible. Today, computer vision systems do it with a high score of accuracy. To bring the real world to the computer vision system, real objects are represented as 3D models (point clouds, meshes), which adds extra constraints that should be processed to ensure good recognition, for example the resolution of the mesh. In this work, based on the state-of-the-art method called Spin Image, we introduce our contribution to recognizing 3D objects. Our motivation is to ensure good recognition under different conditions such as rotation, translation and mainly scaling, resolution changes, occlusions and clutter. To that end we have analyzed the spin image algorithm to propose an extended version robust to scale and resolution changes, knowing that spin images fail to recognize 3D objects in that case. The key idea is to bring closer the representations of spin images of the same object under different conditions by means of normalization, whether these conditions result in linear or non-linear correlation between images. Our contribution, unlike the spin image algorithm, allows recognizing objects with different resolutions and scales. Moreover, it shows good robustness to occlusions up to 60% and clutter up to 50%, tested on two datasets: Stanford and ArcheoZoo3D.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_71-Scale_and_Resolution_Invariant_Spin_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Timed-Arc Petri-Nets based Agent Communication for Real-Time Multi-Agent Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100970</link>
        <id>10.14569/IJACSA.2019.0100970</id>
        <doi>10.14569/IJACSA.2019.0100970</doi>
        <lastModDate>2019-09-30T12:36:57.8500000+00:00</lastModDate>
        
        <creator>Awais Qasim</creator>
        
        <creator>Sidra Kanwal</creator>
        
        <creator>Adnan Khalid</creator>
        
        <creator>Syed Asad Raza Kazmi</creator>
        
        <creator>Jawad Hassan</creator>
        
        <subject>Formal verification; FIPA; multi-agent systems; timed-arc petri nets; real-time systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This research focuses on Timed-Arc Petri-nets-based agent communication in real-time multi-agent systems. The Agent Communication Language is a standard language for agents to communicate. The objective is to combine Timed-Arc Petri-nets and FIPA performatives in real-time multi-agent systems. FIPA standards provide a richer framework for the interaction of agents and make it easier to develop a well-defined system. They also ensure manageability by precisely specifying the agents’ interactions. Although the FIPA protocol has already been described with the help of Petri-nets, this specification lacks the timing aspect that real-time multi-agent systems direly need. The main objective of this research is to provide a method of modeling existing FIPA performatives by combining them with Timed-Arc Petri-nets in real-time multi-agent systems. We have used properties such as liveness, deadlock and reachability for the formal verification of the proposed modeling technique.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_70-Timed_Arc_Petri_Nets_based_Agent_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Secure Fingerprint-based Authentication System for Student’s Examination System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100968</link>
        <id>10.14569/IJACSA.2019.0100968</id>
        <doi>10.14569/IJACSA.2019.0100968</doi>
        <lastModDate>2019-09-30T12:36:57.8330000+00:00</lastModDate>
        
        <creator>Abdullah Alshbtat</creator>
        
        <creator>Nabeel Zanoon</creator>
        
        <creator>Mohammad Alfraheed</creator>
        
        <subject>Fingerprint; examination system; image processing; bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In fingerprint image processing, various methods have been suggested, such as band-pass filters, Fourier transform filters and fuzzy systems. In this paper, we present a useful and applicable fingerprint security system for student examinations using image processing, in which a well-organized algorithm is applied. As a university team, we have recently tested this security procedure on different samples of students in our institution. The experimental results show that a high level of accuracy is obtained. Due to the need to connect and manage the connection, we use an Ethernet card and an Arduino Uno card, which are combined to do so. Moreover, the administrator runs a special website on the PC to assign an ID to the scanned fingerprint. The computation of the proposed system is carried out by uploading a suitable Adafruit fingerprint library to the Arduino Uno card. Finally, the most important security point is that the PC has been used not only to send the developed software to the Uno card but also to disconnect the process electronically while the code is running.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_68-A_Novel_Secure_Fingerprint_based_Authentication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensemble and Deep-Learning Methods for Two-Class and Multi-Attack Anomaly Intrusion Detection: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100969</link>
        <id>10.14569/IJACSA.2019.0100969</id>
        <doi>10.14569/IJACSA.2019.0100969</doi>
        <lastModDate>2019-09-30T12:36:57.8330000+00:00</lastModDate>
        
        <creator>Adeyemo Victor Elijah</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ JhanJhi</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <creator>Balogun Abdullateef O</creator>
        
        <subject>Cyber-security; intrusion detection system; deep learning; ensemble methods; network attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Cyber-security, as an emerging field of research, involves the development and management of techniques and technologies for the protection of data, information and devices. Protection of network devices from attacks, threats and vulnerabilities, both internal and external, has led to ceaseless research into Network Intrusion Detection Systems (NIDS). Therefore, an empirical study was conducted on the effectiveness of deep learning and ensemble methods in NIDS, thereby contributing to knowledge by developing a NIDS through the implementation of machine- and deep-learning algorithms in various forms on a recent network dataset that contains more recent attack types and attacker behaviours (the UNSW-NB15 dataset). This research involves the implementation of a deep-learning algorithm–Long Short-Term Memory (LSTM)–and two ensemble methods (a homogeneous method–using an optimised bagged Random-Forest algorithm, and a heterogeneous method–an Averaged Probability method of Voting ensemble). The heterogeneous ensemble was based on four (4) standard classifiers with different computational characteristics (Na&#239;ve Bayes, kNN, RIPPER and Decision Tree). The respective model implementations were applied to the UNSW-NB15 dataset in two forms: as a two-classed attack dataset and as a multi-attack dataset. LSTM achieved a detection accuracy rate of 80% on the two-classed attack dataset and 72% on the multi-attack dataset. The homogeneous method had an accuracy rate of 98% and 87.4% on the two-class attack dataset and the multi-attack dataset, respectively. Moreover, the heterogeneous model had 97% and 85.23% detection accuracy rates on the two-class attack dataset and the multi-attack dataset, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_69-Ensemble_and_Deep_Learning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Geospatial Technology for Epidemiological Chagas Analysis in Bolivia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100967</link>
        <id>10.14569/IJACSA.2019.0100967</id>
        <doi>10.14569/IJACSA.2019.0100967</doi>
        <lastModDate>2019-09-30T12:36:57.8200000+00:00</lastModDate>
        
        <creator>Natalia I Vargas-Cuentas</creator>
        
        <creator>Alicia Alva Mantari</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Trypanosoma Cruzi; Vinchuca; Landsat 8; PCA; Normalized Difference Soil Index (NDSI); Modified Normalized Difference Water Index (MNDWI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Chagas disease is caused by the parasite Trypanosoma Cruzi and transmitted by the Vinchuca. Bolivia is the country with the highest prevalence in the South American region; for example, in 2015, there was a prevalence of 33.4%. This disease causes severe intestinal and cardiac problems in the long term: 30% of the cases register cardiac symptoms, and 10% have alterations in the esophagus or colon. This research aims to analyze the relationship between environmental factors and Chagas outbreaks in an area of Bolivia to identify the environmental conditions in which the disease is transmitted, using epidemiological and meteorological data as well as environmental indexes extracted from Landsat 8 satellite images. Through a Principal Components Analysis (PCA) of the environmental indexes extracted from the satellite images and the meteorological information, it has been found that the environmental conditions that correlate with the occurrence of cases are: temperature, relative humidity, visibility, Normalized Difference Soil Index (NDSI) and Modified Normalized Difference Water Index (MNDWI).</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_67-The_Use_of_Geospatial_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT based Temperature and Humidity Controlling using Arduino and Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100966</link>
        <id>10.14569/IJACSA.2019.0100966</id>
        <doi>10.14569/IJACSA.2019.0100966</doi>
        <lastModDate>2019-09-30T12:36:57.8030000+00:00</lastModDate>
        
        <creator>Lalbihari Barik</creator>
        
        <subject>IoT; Raspberry Pi; Arduino UNO; data transmission; sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The Internet of Things (IoT) plays a pivotal part in our mundane daily life by controlling electronic devices over networks. The controlling is done by minutely observing the important parameters which generate vital pieces of information concerning the functioning of these electronic devices. Simultaneously, this information is transmitted from the sensing device and saved on the cloud, to be accessed by applications and supplementary procedures. This study concerns the outcomes of environmental observations such as humidity and temperature measurements using sensors. The gathered information can be profitably used to produce actions like remotely controlling cooling and heating devices, or as long-term statistics useful for controlling the same. The sensed data are uploaded to cloud storage through the network and accessed via an Android application. The system employs an Arduino UNO with a Raspberry Pi, an HTU 211D sensor device, and an ESP8266 Wi-Fi module. The experimental results show the live temperature and humidity of the surroundings and the soil moisture of any plant using the Arduino UNO with the Raspberry Pi. The Raspberry Pi is mainly used here for checking the temperature and humidity through the HTU 211D sensor element. The sensors are used for measuring the temperatures from the surroundings and storing the displayed information with different devices. Here, the ESP8266 Wi-Fi module has been used for data storage purposes.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_66-IoT_based_Temperature_and_Humidity_Controlling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Support Vector Machine for Classification of Autism Spectrum Disorder based on Abnormal Structure of Corpus Callosum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100965</link>
        <id>10.14569/IJACSA.2019.0100965</id>
        <doi>10.14569/IJACSA.2019.0100965</doi>
        <lastModDate>2019-09-30T12:36:57.8030000+00:00</lastModDate>
        
        <creator>Jebapriya S</creator>
        
        <creator>Shibin David</creator>
        
        <creator>Jaspher W Kathrine</creator>
        
        <creator>Naveen Sundar</creator>
        
        <subject>Autism Spectrum Disorder (ASD); ASD screening data; ABIDE; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Autism Spectrum Disorder (ASD) is quite difficult to diagnose using traditional methods. Early prediction of Autism Spectrum Disorder enhances the overall psychological well-being of the child. These days, research on Autism Spectrum Disorder is performed at a much higher pace than in earlier days due to the increased rate of people affected by ASD. One possible way of diagnosing ASD is through behavioral changes of children at early ages. Structural imaging studies point to disturbances in various brain regions, yet the exact neuro-anatomical nature of these disruptions remains unclear. Characterization of brain structural differences in children with ASD is essential for the development of biomarkers that may eventually be used to improve diagnosis and monitor response to treatment. In this study we use machine learning to determine a set of conditions that together prove to be predictive of Autism Spectrum Disorder. This will be of great use to doctors, helping to identify Autism Spectrum Disorder at a much earlier stage.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_65-Support_Vector_Machine_for_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Privacy Awareness: A Survey for Smartphone User</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100964</link>
        <id>10.14569/IJACSA.2019.0100964</id>
        <doi>10.14569/IJACSA.2019.0100964</doi>
        <lastModDate>2019-09-30T12:36:57.7870000+00:00</lastModDate>
        
        <creator>Md Nawab Yousuf Ali</creator>
        
        <creator>Md. Lizur Rahman</creator>
        
        <creator>Ifrat Jahan</creator>
        
        <subject>Smartphone; Smartphone Problems; Level of Awareness (LoA); Security and Privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The smartphone has become one of the most popular devices in the last few years due to the integration of powerful technologies in it. Nowadays a smartphone can provide different services similar to those a computer provides. Smartphones hold our important personal information such as photos and videos, SMS, email, contact lists, social media accounts, etc. Therefore, the number of security- and privacy-related threats is also increasing. Our research aims at evaluating how aware smartphone users are of their security and privacy. In this study, we first conducted a survey of smartphone users to assess the level of smartphone security awareness displayed by the public. We also determine whether a general level of security complacency exists among smartphone users and measure the awareness of Android users regarding their privacy. From the survey results we found that most people are not aware of their smartphone security and privacy. Secondly, based on the survey results, we present a method to measure the level of awareness (LoA) of smartphone users. By using this method, a user can easily measure his/her smartphone security- and privacy-related level of awareness.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_64-Security_and_Privacy_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of IPv4 and IPv6 Networks with Different Modified Tunneling Techniques using OPNET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100963</link>
        <id>10.14569/IJACSA.2019.0100963</id>
        <doi>10.14569/IJACSA.2019.0100963</doi>
        <lastModDate>2019-09-30T12:36:57.7730000+00:00</lastModDate>
        
        <creator>Asif Khan Babar</creator>
        
        <creator>Zulfiqar Ali Zardari</creator>
        
        <creator>Nazish Nawaz Hussaini</creator>
        
        <creator>Sirajuddin Qureshi</creator>
        
        <creator>Song Han</creator>
        
        <subject>ISATAP; tunneling techniques; IPv4; IPv6; network performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Currently, most devices use Internet Protocol version 4 (IPv4) to access the internet. The IPv4 address pool was announced as depleted by IANA (Internet Assigned Numbers Authority) in February 2011. To solve this issue, Internet Protocol version 6 (IPv6) was launched. The main problem, however, is that current devices cannot support IPv6 directly, which causes various compatibility issues. Many researchers have proposed various transition techniques, but their efficiency and performance remain a significant challenge. This study examines several IPv6 transition mechanisms over a multiprotocol label switching (MPLS) backbone to evaluate and compare their performance, comparing different performance metrics and manual tunnel efficiency metrics. The main goal of this paper is to examine the different tunneling techniques and determine which tunneling method performs best overall, thereby increasing network performance. Experimental results show that ISATAP gives the best performance across all metrics.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_63-Assessment_of_IPv4_and_IPv6_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Simulation Study: An Impact of Roadside Illegal Parking at Signalised Intersection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100962</link>
        <id>10.14569/IJACSA.2019.0100962</id>
        <doi>10.14569/IJACSA.2019.0100962</doi>
        <lastModDate>2019-09-30T12:36:57.7570000+00:00</lastModDate>
        
        <creator>Noorazila Asman</creator>
        
        <creator>Munzilah Md Rohani</creator>
        
        <creator>Nursitihazlin Ahmad Termida</creator>
        
        <creator>Noor Yasmin Zainun</creator>
        
        <creator>Nur Fatin Lyana Rahimi</creator>
        
        <subject>Traffic simulation; traffic flow; signalized intersection; level of service; illegal parking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Traffic congestion is a serious road traffic problem, particularly at intersections, because of its potential impact on the risk of accidents, vehicle delays, and exhaust emissions. In addition, illegal parking by road users at intersections can further degrade intersection performance and create additional traffic flow interruptions. This paper presents an assessment of the impact of illegal parking on a signalized intersection at Parit Raja, Malaysia, using a simulation approach with the PTV VISSIM simulation software. The results showed that if illegal parking at the Parit Raja intersection were banned, traffic delay and the travel time of vehicles would improve, thus improving the Level of Service of the intersection.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_62-Computer_Simulation_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Gait Feature Extraction based-on Silhouette and Center of Mass</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100961</link>
        <id>10.14569/IJACSA.2019.0100961</id>
        <doi>10.14569/IJACSA.2019.0100961</doi>
        <lastModDate>2019-09-30T12:36:57.7400000+00:00</lastModDate>
        
        <creator>Miftahul Jannah</creator>
        
        <creator>Sarifuddin Madenda</creator>
        
        <creator>Tubagus Maulana Kusuma</creator>
        
        <creator>Hustinawaty</creator>
        
        <subject>Human gait; center of mass; silhouette; feature extraction; gait cycle; people identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>When someone walks, a repetitive movement or coordinated cycle forms a gait. Gait is distinctive, unique, and difficult to imitate. This characteristic makes gait one of the biometrics used to establish a person&#39;s identity. Gait analysis is needed in the development of biometric technology, such as in security surveillance and in the health sector to monitor gait abnormalities. The center of mass is a unique point of every object that plays a role in the study of human walking; each person has a different center of mass. In this research, through a series of image processing steps such as video acquisition, segmentation, silhouette formation, and feature extraction, the center of mass of the human body is identified using a webcam with a resolution of 640 x 480 pixels and a frame rate of 30 frames/second. The results obtained from this research were 510 gait frames from 17 pedestrian videos. The segmentation process, using background subtraction, separates the pedestrian object from the background. The gait silhouette was produced by a series of image enhancement processes to eliminate noise that interferes with image quality. Based on the silhouette, feature extraction provides the center of mass to distinguish each individual&#39;s gait. The sequence of centers of mass can be further processed to characterize the human gait cycle for various purposes.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_61-Human_Gait_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Semi-Latin Square Construct for Measuring Human Capital Intelligence in Recruitment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100960</link>
        <id>10.14569/IJACSA.2019.0100960</id>
        <doi>10.14569/IJACSA.2019.0100960</doi>
        <lastModDate>2019-09-30T12:36:57.7400000+00:00</lastModDate>
        
        <creator>Emmanuel C Ukekwe</creator>
        
        <creator>Francis S. Bakpo</creator>
        
        <creator>Mathew C.Okoronkwo</creator>
        
        <creator>Gregory E.Anichebe</creator>
        
        <subject>Memory recall ability; processing speed; Sternberg paradigm; Posner paradigm; human capital intelligence; semi-latin square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Processing speed and memory recall ability are two major human capital intelligence (HCI) attributes required for recruitment. Matzel identified five domains of intelligence; unfortunately, no means for measuring them were stated. This paper presents a framework for measuring the processing speed and memory intelligence domains using the Sternberg and Posner paradigms of the short-term memory scanning test. A Semi-Latin square was constructed and used as a competitive platform for n = 20 student-applicant contestants. The Cumulative Grade Point Average (CGPA) rankings of 20 randomly selected final-year student-applicants were used for the test. Results show that the CGPA performance ranking of the student-applicants differs from their HCI ranking obtained using the framework. A Wilcoxon Signed-Ranks Test was used to determine whether the disparity in performance ranking was significant; the results show that there is indeed a significant difference in the performance ranking of the student-applicants between the two approaches. The automated construct was implemented using PHP and MySQL and deployed at hcipredictor.eu3.org.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_60-An_Intelligent_Semi_Latin_Square_Construct.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Approach based on Transition Graph for Resolving Multimodal Urban Transportation Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100959</link>
        <id>10.14569/IJACSA.2019.0100959</id>
        <doi>10.14569/IJACSA.2019.0100959</doi>
        <lastModDate>2019-09-30T12:36:57.7270000+00:00</lastModDate>
        
        <creator>Mohamed El Moufid</creator>
        
        <creator>Younes Nadir</creator>
        
        <creator>Khalid Boukhdir</creator>
        
        <creator>Siham Benhadou</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>Multimodal transport; distributed approach; transition graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>All over the world, many research studies focus on developing and enhancing real-time communications between various transport stakeholders in urban environments. Such motivation can be justified by the growing importance of pollution caused by the transport sector in urban areas. In this work, we propose an approach for assisting movement in urban environments that takes advantage of multimodal urban transportation, where several modes of public transport are available. In addition, we also consider the possibility of using private modes of transport and city parking. The proposed distributed approach is based on an abstraction of a city as a multimodal graph, built according to the available modes of public transport and road traffic, and on a transition graph approach to move from one mode to another. Numerical results are presented to demonstrate the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_59-A_Distributed_Approach_based_on_Transition_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep CNN-based Features for Hand-Drawn Sketch Recognition via Transfer Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100958</link>
        <id>10.14569/IJACSA.2019.0100958</id>
        <doi>10.14569/IJACSA.2019.0100958</doi>
        <lastModDate>2019-09-30T12:36:57.7100000+00:00</lastModDate>
        
        <creator>Shaukat Hayat</creator>
        
        <creator>Kun She</creator>
        
        <creator>Muhammad Mateen</creator>
        
        <creator>Yao Yu</creator>
        
        <subject>Deep convolutional neural network; sketch recognition; transfer learning; global average pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Image-based object recognition is a well-studied topic in the field of computer vision. Feature extraction for hand-drawn sketch recognition and retrieval has become increasingly popular among computer vision researchers. The increasing use of touchscreens and portable devices has raised the challenge for the computer vision community to access sketches more efficiently and effectively. In this article, a novel deep convolutional neural network (DCNN)-based framework for hand-drawn sketch recognition is proposed, composed of three well-known pre-trained DCNN architectures in a transfer learning context with a global average pooling (GAP) strategy. First, augmented variants of natural images were generated and combined with the TU-Berlin sketch images for all 250 sketch object categories. Second, feature maps were extracted from the input images by three asymmetric DCNN architectures, namely the Visual Geometry Group network (VGGNet), Residual Networks (ResNet), and Inception-v3. Finally, the distinct feature maps were concatenated and feature reduction was carried out by the GAP layer. The resulting feature vector was fed into a softmax classifier to obtain the sketch classification results. The performance of the proposed framework is comprehensively evaluated on the augmented-variant TU-Berlin sketch dataset for sketch classification and retrieval tasks. Experimental outcomes reveal that the proposed framework brings substantial improvements over state-of-the-art methods for sketch classification and retrieval.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_58-Deep_CNN_based_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Usability Dimensions of Smartphone Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100956</link>
        <id>10.14569/IJACSA.2019.0100956</id>
        <doi>10.14569/IJACSA.2019.0100956</doi>
        <lastModDate>2019-09-30T12:36:57.6930000+00:00</lastModDate>
        
        <creator>Shabana Shareef</creator>
        
        <creator>M.N.A. Khan</creator>
        
        <subject>Usability; Jakob-Nielson usability heuristics; smartphone applications; ease of use; understandability; learning curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This study analyses different techniques used for the evaluation of various usability dimensions of software applications (apps) used on smartphones. The scope of this study is to evaluate various aspects of the usability techniques employed in the domain of smartphone apps. Usability assessment methodologies are evaluated for different types of applications running on different operating systems such as Android, BlackBerry, and iOS. Usability evaluation techniques and methodologies with respect to usability heuristics, such as field experiments, laboratory experiment models, and usability standards, are discussed in detail. The issues in evaluating the usability of smartphone apps are identified by considering the limitations and areas of improvement outlined in the contemporary literature. A conceptual framework for usability evaluation of smartphone apps is also designed, which will be validated through experimentation in the thesis work. This study is particularly useful for comprehending usability issues and their likely remedies in order to produce high-quality smartphone apps.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_56-Evaluation_of_Usability_Dimensions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Shoulder Surfing and Mobile Key-Logging Resistant Graphical Password Scheme for Smart-Held Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100957</link>
        <id>10.14569/IJACSA.2019.0100957</id>
        <doi>10.14569/IJACSA.2019.0100957</doi>
        <lastModDate>2019-09-30T12:36:57.6930000+00:00</lastModDate>
        
        <creator>Sundas Hanif</creator>
        
        <creator>Fahad Sohail</creator>
        
        <creator>Shehrbano</creator>
        
        <creator>Aneeqa Tariq</creator>
        
        <creator>Muhammad Imran Babar</creator>
        
        <subject>Authentication; graphical password; shoulder surfing; mobile key-logging; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In the globalization of information, the internet has played a vital role by providing remote users with easy and fast access to information and systems. However, along with this ease for authentic users, it has made information resources accessible to unauthorized users too. To authorize legitimate users to access information and systems, authentication mechanisms are applied. Many users enter their credentials or private information in public places to access accounts that are protected by passwords. These passwords are usually text-based, and their security and effectiveness can be compromised. An attacker can steal text-based passwords using techniques like shoulder surfing and various keylogger software that are freely available over the internet. To improve security, numerous sophisticated and secure authentication systems have been proposed that employ biometric authentication, token-based authentication, and so on. But these solutions, while providing such high-level security, require special modifications in design and hence imply additional cost. To replace textual passwords, which are easy to use but vulnerable to attacks like shoulder surfing, various image-based and textual graphical password schemes have been proposed. However, none of the existing textual graphical password schemes is resistant to shoulder surfing and, more importantly, to mobile key-logging. In this paper, an improved and robust textual graphical password scheme is proposed that uses sectors and colors, introducing randomization as the primary function for character display and selection. This property makes the proposed scheme resistant to shoulder surfing and, more importantly, to mobile key-logging. It can be useful for the authentication process of any smart-held device application.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_57-A_New_Shoulder_Surfing_and_Mobile_Key_Logging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Latin-Hyper-Cube-Hill-Climbing Method for Optimizing: Experimental Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100955</link>
        <id>10.14569/IJACSA.2019.0100955</id>
        <doi>10.14569/IJACSA.2019.0100955</doi>
        <lastModDate>2019-09-30T12:36:57.6800000+00:00</lastModDate>
        
        <creator>Calista Elysia</creator>
        
        <creator>Michelle Hartanto</creator>
        
        <creator>Ditdit Nugeraha Utama</creator>
        
        <subject>Hill-climbing; Latin-hyper-cube; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The objective of this work is to test an optimization problem experimentally by comparing the hill-climbing method with a hybrid method combining hill-climbing and Latin-hyper-cube. The two methods are tested on the same data-set in order to compare their results. The results show that the hybrid model performs better than hill-climbing: based on the number of global optimum value occurrences, the hybrid model performed 7.6% better than hill-climbing and produced a more stable average global optimum value. However, the hybrid model has a slightly longer running time due to an inherent characteristic of the model itself.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_55-Hybrid_Latin_Hyper_Cube_Hill_Climbing_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Vehicle for Driving with Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100954</link>
        <id>10.14569/IJACSA.2019.0100954</id>
        <doi>10.14569/IJACSA.2019.0100954</doi>
        <lastModDate>2019-09-30T12:36:57.6630000+00:00</lastModDate>
        
        <creator>Arbnor Pajaziti</creator>
        
        <creator>Xhevahir Bajrami</creator>
        
        <creator>Fatjon Beqa</creator>
        
        <creator>Blendi Gashi</creator>
        
        <subject>Image processing; traffic sign; object tracking; autonomous vehicle; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The aim of this paper is the design, simulation, construction, and programming of an autonomous vehicle capable of obstacle avoidance, object tracking, and image and video processing. The vehicle uses a built-in camera for evaluating and navigating the terrain, a six-axis accelerometer and gyroscope for calculating angular velocities and accelerations, an Arduino for interfacing with the motors, and a Raspberry Pi as the main on-board computer. The design of the vehicle was performed in Autodesk Fusion 360, and most of the mechanical parts were 3D printed. In order to control the chassis of the vehicle through the microcontrollers, the development of a PCB was required. On top of this, a camera was added to the vehicle in order to achieve obstacle avoidance and perform object tracking. The video processing required to achieve these goals is done using OpenCV and a convolutional neural network. A further objective of this paper is the detection of traffic signs. After several experiments, the convolutional neural network algorithm showed high precision in recognizing the STOP traffic sign at different positions and occlusion ratios, and in finding the path in the fastest time.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_54-Development_of_a_Vehicle_for_Driving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection System based on the SDN Network, Bloom Filter and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100953</link>
        <id>10.14569/IJACSA.2019.0100953</id>
        <doi>10.14569/IJACSA.2019.0100953</doi>
        <lastModDate>2019-09-30T12:36:57.6470000+00:00</lastModDate>
        
        <creator>Traore Issa</creator>
        
        <creator>Kone Tiemoman</creator>
        
        <subject>Distributed denial of service; intrusion detection software; software defined network; machine learning; synchronize; acknowledgment; bloom filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The scale and frequency of sophisticated distributed denial of service (DDoS) attacks are still growing. Urgency is warranted because, with the emerging paradigms of the Internet of Things (IoT) and Cloud Computing, billions of unsecured connected objects will be available. This paper deals with the detection and correction of DDoS attacks based on real-time behavioral analysis of traffic. The method is based on Software Defined Network (SDN) technologies, a Bloom filter, and automatic behaviour learning. Indeed, DDoS attacks are difficult to detect in real time, in particular when it comes to distinguishing between legitimate and illegitimate packets. Our approach outlines a supervised classification method based on machine learning that separates malicious from normal packets. Thus, we design and implement an intrusion detection system (IDS) with great precision. The results of the evaluation suggest that our proposal is timely and detects several abnormal DDoS-based cyber-attack behaviours.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_53-Intrusion_Detection_System_based_on_the_SDN_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How to Improve the IoT Security Implementing IDS/IPS Tool using Raspberry Pi 3B+</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100952</link>
        <id>10.14569/IJACSA.2019.0100952</id>
        <doi>10.14569/IJACSA.2019.0100952</doi>
        <lastModDate>2019-09-30T12:36:57.6470000+00:00</lastModDate>
        
        <creator>Ru&#237;z-Lagunas Juan Jes&#250;s</creator>
        
        <creator>Antolino-Hern&#225;ndez Anastacio</creator>
        
        <creator>Reyes-Guti&#233;rrez Mauricio Ren&#233;</creator>
        
        <creator>Ferreira-Medina Heberto</creator>
        
        <creator>Torres-Millarez Cristhian</creator>
        
        <creator>Paniagua-Villag&#243;mez Omar</creator>
        
        <subject>Security IoT; IDS/IPS software; Pentesting tools; smart cities; prototype Raspberry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This work proposes a methodology for implementing and testing an IDS/IPS, validated with a prototype constructed with sensors and actuators that allow monitoring the behavior of the system in an environment under threat. We used an IDS/IPS as a protection tool for IoT systems, based on a Raspberry Pi and the Raspbian operating system. The testing method used is described in a block diagram. We implemented the Snort IDS/IPS tool on an embedded Raspberry Pi platform. The state of the art of cloud frameworks with the same protection objective is also presented. The main contribution is the implemented testing method for Snort, which can be applied with security rules in other embedded IoT device applications.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_52-How_to_Improve_the_IoT_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Factors for Predicting the Life Dissatisfaction of South Korean Elderly using Soft Margin Support Vector Machine based on Communication Frequency, Social Network Health Behavior and Depression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100951</link>
        <id>10.14569/IJACSA.2019.0100951</id>
        <doi>10.14569/IJACSA.2019.0100951</doi>
        <lastModDate>2019-09-30T12:36:57.6330000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Seong-Tae Kim</creator>
        
        <subject>C-SVM; communication frequency; life satisfaction; social network; quality of life</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Since health and the quality of life result not from a single factor but from the interaction of multiple factors, it is necessary to develop a model that predicts the quality of life using multiple risk factors rather than to identify individual risk factors. This study aimed to develop a model predicting the quality of life based on C-SVM using big data and to provide baseline data for a successful old age. This study selected 2,420 elderly people (1,110 men, 1,310 women) who were 65 years or older and completed the Seoul Statistics Survey. Life satisfaction, a binary outcome variable (satisfied or dissatisfied), was evaluated using a self-report questionnaire. This study used a Gaussian kernel function among the SVM algorithms. To verify the predictive power of the developed model, this study compared the Gaussian kernel with the linear, polynomial, and sigmoid kernels. Additionally, C-SVM and Nu-SVM were applied to the four kernel types to create eight model variants, and the prediction accuracies of the eight SVM types were estimated and compared. Among the 2,420 subjects, 483 elderly people (19.9%) were not satisfied with their current lives. The final prediction accuracy of the SVM, using 625 support vectors, was 92.63%. The results showed that the difference between C-SVM and Nu-SVM was negligible in the models predicting life satisfaction in old age, while the Gaussian kernel had the highest accuracy and the sigmoid kernel the lowest. Based on the prediction model of this study, systematic management of local communities is required to enhance the quality of life in old age.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_51-Evaluating_Factors_for_Predicting_the_Life.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Impact of Relay Selection in WiMAX IEEE 802.16j Multi-hop Relay Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100950</link>
        <id>10.14569/IJACSA.2019.0100950</id>
        <doi>10.14569/IJACSA.2019.0100950</doi>
        <lastModDate>2019-09-30T12:36:57.6170000+00:00</lastModDate>
        
        <creator>Noman Mazhar</creator>
        
        <creator>Muhammad Zeeshan</creator>
        
        <creator>Anjum Naveed</creator>
        
        <subject>WiMAX; multi-hop; wireless broadband; relay; SNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Worldwide Interoperability for Microwave Access (WiMAX) networks take on the challenge of last-mile wireless internet access. The IEEE 802.16 standard, commercially known as WiMAX, provides a wireless broadband experience to end subscribers and challenges many wired solutions such as Digital Subscriber Line (DSL) and cable internet. Wireless networks have many inherent issues; coverage holes, capacity optimization, and mobility are a few of them. Adding relays in multi-hop WiMAX IEEE 802.16j networks presents an effective solution that addresses these to some extent, but the amendment does not specify any relay selection algorithm and provides no performance guarantees. In this work, we propose a linear model that fairly allocates wireless resources among subscribers in an 802.16j network. A relay selection algorithm is also presented that optimally selects nodes with a higher signal-to-noise ratio as relay stations for nodes with a lower signal-to-noise ratio, with the objective of maximizing overall network capacity. This scheme further extends the network coverage area and improves network availability. We also conducted an extensive performance evaluation of the proposed linear model. Results show that the optimal relay selection scheme provides a substantial increase of up to 66% in overall network capacity in the fixed WiMAX network. This improvement is greatest where network conditions are not optimal. Further investigation leads to the conclusion that the relay selection criterion is key to achieving maximum network capacity.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_50-Performance_Impact_of_Relay_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CBRm: Case based Reasoning Approach for Imputation of Medium Gaps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100949</link>
        <id>10.14569/IJACSA.2019.0100949</id>
        <doi>10.14569/IJACSA.2019.0100949</doi>
        <lastModDate>2019-09-30T12:36:57.6000000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Carlos Silva</creator>
        
        <subject>Case Based Reasoning; CBR; CBRm; univariate time series imputation; medium-gaps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This paper presents a new algorithm called CBRm for univariate time series imputation of medium gaps, inspired by the Case Based Reasoning Imputation (CBRi) algorithm for short gaps. The performance of the proposed algorithm is analyzed on meteorological time series of maximum temperatures and compared with several similar techniques. Although the algorithm did not surpass other proposals in precision in some cases, the results achieved are encouraging, considering that it overcomes some of the weaknesses of the proposals with which it was compared.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_49-CBRm_Case_based_Reasoning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internal Threat Defense using Network Access Control and Intrusion Prevention System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100948</link>
        <id>10.14569/IJACSA.2019.0100948</id>
        <doi>10.14569/IJACSA.2019.0100948</doi>
        <lastModDate>2019-09-30T12:36:57.5830000+00:00</lastModDate>
        
        <creator>Andhika Surya Putra</creator>
        
        <creator>Nico Surantha</creator>
        
        <subject>Attack; integration; Intrusion Prevention System (IPS); mitigation; Network Access Control (NAC); network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This study aims to create a network security system that can mitigate attacks carried out by internal users and reduce attacks from internal networks. Further, the network security system is expected to overcome the difficulty of mitigating attacks carried out by internal users and to improve network security. The method used is to integrate the capabilities of Network Access Control (NAC) and the Intrusion Prevention System (IPS), which were designed and implemented in this study; an analysis is then performed to compare the results of tests carried out using only the NAC with the results obtained using the integration of NAC and IPS capabilities. The results of these tests show that the security system using the integration of NAC and IPS capabilities performs better than the one using only the NAC.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_48-Internal_Threat_Defense_using_Network_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Authentication Modeling with Five Generic Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100947</link>
        <id>10.14569/IJACSA.2019.0100947</id>
        <doi>10.14569/IJACSA.2019.0100947</doi>
        <lastModDate>2019-09-30T12:36:57.5830000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <creator>MennatAllah Bayoumi</creator>
        
        <subject>Security; authentication; conceptual modeling; diagrammatic representation; generic processes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Conceptual modeling is an essential tool in many fields of study, including security specification in information technology systems. As a model, it restricts access to resources and identifies possible threats to the system. We claim that current modeling languages (e.g., Unified Modeling Language, Business Process Model and Notation) lack the notion of genericity, which refers to a limited set of elementary processes. This paper proposes five generic processes for modeling the structural behavior of a system: creating, releasing, transferring, receiving, and processing. The paper demonstrates these processes within the context of public key infrastructure, biometric, and multifactor authentication. The results indicate that the proposed generic processes are sufficient to represent these authentication schemes.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_47-Authentication_Modeling_with_Five_Generic_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison Review based on Classifiers and Regression Models for the Investigation of Flash Floods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100946</link>
        <id>10.14569/IJACSA.2019.0100946</id>
        <doi>10.14569/IJACSA.2019.0100946</doi>
        <lastModDate>2019-09-30T12:36:57.5700000+00:00</lastModDate>
        
        <creator>Talha Ahmed Khan</creator>
        
        <creator>Muhammad Alam</creator>
        
        <creator>Kushsairy Kadir</creator>
        
        <creator>Zeeshan Shahid</creator>
        
        <creator>M.S Mazliham</creator>
        
        <subject>Flash floods; classification; SVM; k-NN; logistic regression; quadratic SVN; ensemble bagged trees; exponential GPR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Several regions of the world have been affected by the natural disaster known as flash floods. Many villagers who live near streams or dams suffer heavy losses of property, cattle, and human lives. Conventional early warning systems are not up to the mark for issuing early warnings. Diversified approaches have been pursued for the identification of flash floods with a low false alarm rate. Forecasting approaches involve errors and ambiguity due to inadequate processing algorithms and measurement readings. Process variables such as stream flow, water level, water color, precipitation velocity, wind speed, wave patterns, and cloud-to-ground (CG) flashes have been measured for the robust identification of flash floods. A robust, competent algorithm is required for the investigation of flash floods with a low false alarm rate. In this paper, classifiers have been applied to the collected dataset so that any researcher can easily identify which classifier is competent and can be further enhanced by combining it with other algorithms. A novel, comprehensive parametric comparison has been performed to investigate the classification accuracy for the robust classification of false alarms. For better accuracy, more than one process variable was measured, but the results still contained some false alarms. An appropriate combination of sensors was therefore integrated to increase the accuracy of the results, and a multi-modal sensing device was designed to collect the data. Linear discriminant analysis, logistic regression, quadratic support vector machine, k-nearest neighbor, and ensemble bagged trees were applied to the collected dataset for data classification. Results were obtained in MATLAB and are discussed in detail in this paper. The worst classification accuracy (62%) was achieved by the coarse k-NN classifier, meaning coarse k-NN produced a 38% false negative rate, which is not acceptable for forecasting. Ensemble bagged trees produced the best classification results, achieving 99% accuracy and a 1% error rate. Furthermore, according to the comprehensive parametric comparison of regression models, quadratic SVM was found to be the worst, with a mean squared error of 0.5551 and an elapsed time of 13.159 seconds. On the other hand, exponential Gaussian process regression performed better than the other existing approaches, with the minimum root mean squared error of 0.0002 and a prediction speed of 35,000 observations per second.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_46-A_Comparison_Review_based_on_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Map-based Job Recommender Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100945</link>
        <id>10.14569/IJACSA.2019.0100945</id>
        <doi>10.14569/IJACSA.2019.0100945</doi>
        <lastModDate>2019-09-30T12:36:57.5530000+00:00</lastModDate>
        
        <creator>Manal Alghieth</creator>
        
        <creator>Amal A. Shargabi</creator>
        
        <subject>Recommender systems; content-based recommendation; location-based search; maps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Location is one of the most important factors to consider when looking for a new job. Currently, many job recommender systems exist to help match the right candidate with the right job. A review of the existing recommender systems, included in this article, reveals an absence of appropriate map support for job recommendation. This article proposes a general map-based job recommender model, which is implemented and applied within a system for job seekers in Saudi Arabia. The system adopts a content-based technique to recommend jobs using cosine similarity and helps Saudi job seekers find their desired jobs efficiently using interactive maps. This will ultimately contribute to Saudi Arabia's move toward digital transformation, one of the major objectives of Saudi Vision 2030.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_45-A_Map_based_Job_Recommender_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Visualization of Multidimensional Data by Ordering Parallel Coordinates Axes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100944</link>
        <id>10.14569/IJACSA.2019.0100944</id>
        <doi>10.14569/IJACSA.2019.0100944</doi>
        <lastModDate>2019-09-30T12:36:57.5530000+00:00</lastModDate>
        
        <creator>Ayman Nabil</creator>
        
        <creator>Karim M. Mohamed</creator>
        
        <creator>Yasser M. Kamal</creator>
        
        <subject>Parallel coordinates; visualization; correlation coefficient; entropy function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Every year, businesses are overwhelmed by the quantity and variety of data. Visualization of multidimensional data is counter-intuitive using conventional graphs. Parallel coordinates have been proposed as an alternative to explore multivariate data more effectively. However, it is difficult to extract relevant information through parallel coordinates when the data are multidimensional, with thousands of overlapping lines. The order of the axes determines the perception of information on parallel coordinates. This paper proposes three new techniques to arrange the axes according to the most significant relations within the datasets. The datasets used in this paper concern Egyptian patients, with many external factors and medical tests; these factors were collected via a questionnaire designed by medical researchers. The first technique calculates the correlation between each feature and the age at which the patient developed diabetes. The second technique is based on merging different features together and arranging the coordinates based on the correlation values. The third technique calculates the entropy value for each feature and then arranges the parallel coordinates in descending order based on the positive or negative values. Finally, based on the resulting graphs, we conclude that the second method is more readable and valuable than the other two.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_44-Enhancing_Visualization_of_Multidimensional_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fraud Detection using Machine Learning in e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100943</link>
        <id>10.14569/IJACSA.2019.0100943</id>
        <doi>10.14569/IJACSA.2019.0100943</doi>
        <lastModDate>2019-09-30T12:36:57.5370000+00:00</lastModDate>
        
        <creator>Adi Saputra</creator>
        
        <creator>Suharjito</creator>
        
        <subject>Machine learning; random forest; Na&#239;ve Bayes; SMOTE; neural network; e-commerce; confusion matrix; G-Mean; F1-score; transaction; fraud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The growing number of internet users is causing e-commerce transactions to increase as well, and we observe that fraud in online transactions is increasing too. Fraud prevention in e-commerce can be developed using machine learning; this work analyzes suitable machine learning algorithms, namely Decision Tree, Na&#239;ve Bayes, Random Forest, and Neural Network. The available data are imbalanced, so the Synthetic Minority Over-sampling Technique (SMOTE) is used to create balanced data. Evaluation using a confusion matrix shows that the neural network achieves the highest accuracy at 96 percent, followed by random forest at 95 percent, Na&#239;ve Bayes at 95 percent, and decision tree at 91 percent. SMOTE increases the average F1-score from 67.9 percent to 94.5 percent and the average G-Mean from 73.5 percent to 84.6 percent.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_43-Fraud_Detection_using_Machine_Learning_in_E_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Some Methods for Dimensionality Reduction of ECG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100942</link>
        <id>10.14569/IJACSA.2019.0100942</id>
        <doi>10.14569/IJACSA.2019.0100942</doi>
        <lastModDate>2019-09-30T12:36:57.5230000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <creator>Liviu Goras</creator>
        
        <subject>Dimensionality reduction; compressed sensing; electrocardiography (ECG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Dimensionality reduction with two methods, namely Laplacian Eigenmaps (LE) and Locality Preserving Projections (LPP), is studied for normal and pathological, noisy and noiseless ECG patterns. In addition, the possibility of using compressed sensing (CS) as a method of dimensionality reduction is analyzed. The classification rates for the initial domain as well as in manifolds of various dimensions for the three cases are presented and compared.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_42-On_Some_Methods_for_Dimensionality_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Performance Analysis of Decision Tree and Support Vector Machine based Classifiers on Biological Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100940</link>
        <id>10.14569/IJACSA.2019.0100940</id>
        <doi>10.14569/IJACSA.2019.0100940</doi>
        <lastModDate>2019-09-30T12:36:57.5070000+00:00</lastModDate>
        
        <creator>Muhammad Amjad</creator>
        
        <creator>Zulfiqar Ali</creator>
        
        <creator>Abid Rafiq</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Israr-Ur-Rehman</creator>
        
        <creator>Ali Abbas</creator>
        
        <subject>Classification; rules discovery; support vector machine; decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The classification and prediction of medical diseases is a cutting-edge research problem in the medical field, and machine learning experts are continuously proposing new classification methods for the prediction of diseases. The discovery of classification rules from medical databases for the classification and prediction of diseases is a challenging and non-trivial task, so it is important to investigate the most promising and efficient classification approaches for discovering such rules. This paper focuses on the problem of selecting the most efficient, promising, and suitable classifier for the prediction of specific diseases by performing empirical studies on benchmark medical databases. The work concentrates on the benchmark medical datasets arrhythmia, breast-cancer, diabetes, hepatitis, mammography, lymph, liver-disorders, sick, cardiotocography, heart-statlog, breast-w, and lung-cancer, obtained from the open-source UCI machine learning repository. It investigates the performance of decision tree classifiers (i.e., AdaBoost.NC, C45-C, CART, and ID3-C) and Support Vector Machines. For experimentation, Knowledge Extraction based on Evolutionary Learning (KEEL), a data mining tool, is used. This research provides an empirical performance analysis of decision tree-based classifiers and SVM on specific datasets. Moreover, it provides a statistical comparative performance analysis of the classification approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_40-Empirical_Performance_Analysis_of_Decision_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer-based Approach to Detect Wrinkles and Suggest Facial Fillers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100941</link>
        <id>10.14569/IJACSA.2019.0100941</id>
        <doi>10.14569/IJACSA.2019.0100941</doi>
        <lastModDate>2019-09-30T12:36:57.5070000+00:00</lastModDate>
        
        <creator>Amal Alrabiah</creator>
        
        <creator>Mai Alduailij</creator>
        
        <creator>Martin Crane</creator>
        
        <subject>Deep learning; classification; facial fillers; wrinkle detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Modern medical practice has embraced facial filler injections as part of the innumerable cosmetic procedures that characterize the current age of medicine. This study proposes a novel methodological framework with the Inception model at its core. By carefully classifying wrinkles, the model can be built into different applications to aid in wrinkle detection and objectively help decide whether the forehead area needs filler injections. The model achieved an accuracy of 85.3%. To build the Inception model, a database was prepared containing face forehead images, including both wrinkled and non-wrinkled foreheads. Face image pre-processing is the first step of the proposed framework and is important for reliable feature extraction. First, a Multi-task Cascaded Convolutional Networks model is used to detect the face and facial landmarks in the image. Before feeding the images into the deep learning Inception model to classify whether face foreheads have wrinkles, an image cropping step is required; given the bounding box and the facial landmarks, face foreheads can be cropped accurately. The last step of the proposed methodology is to retrain the Inception model on the new categories (Wrinkles, No Wrinkles) to predict whether a face forehead has wrinkles or not.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_41-Computer_based_Approach_to_Detect_Wrinkles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded System Interfacing with GNSS user Receiver for Transport Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100939</link>
        <id>10.14569/IJACSA.2019.0100939</id>
        <doi>10.14569/IJACSA.2019.0100939</doi>
        <lastModDate>2019-09-30T12:36:57.4900000+00:00</lastModDate>
        
        <creator>Mohmad Umair Bagali</creator>
        
        <creator>Thangadurai. N</creator>
        
        <subject>GNSS; GPS; IRNSS; embedded systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Real-time vehicle movement is traced using waypoint display on a base map, with the IRNSS/NavIC and GPS datasets shown simultaneously in the GUI. In this paper, a portable electronic device with application software has been designed and developed to capture the real-time positional information of a rover using an IRNSS user receiver (IRNSS-UR). It stores the positional information in a database and displays the real-time vehicle positional information, such as date, time, latitude, longitude, and altitude, using both GPS and IRNSS/NavIC receivers simultaneously. The designed hardware device, together with the developed application software, helps map real-time vehicle/rover movement, identify regions with data loss or varying positional information, compare the distance travelled by the rover, and retrieve past surveys, mapping the traces of both IRNSS and GPS simultaneously. The vehicle movement using both IRNSS/NavIC and GPS is tracked on the base map to find the similarities and differences between the two. From this research work, it can be concluded that the rover positions using GPS and IRNSS were accurate and continuous during our survey, except in a few places where data loss was observed because of satellite visibility variations. For the Indian region, IRNSS/NavIC can be a better replacement for GPS.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_39-Embedded_System_Interfacing_with_GNSS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Multimedia Sensor Networks based Quality of Service Sentient Routing Protocols: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100938</link>
        <id>10.14569/IJACSA.2019.0100938</id>
        <doi>10.14569/IJACSA.2019.0100938</doi>
        <lastModDate>2019-09-30T12:36:57.4770000+00:00</lastModDate>
        
        <creator>Ronald Chiwariro</creator>
        
        <creator>Thangadurai. N</creator>
        
        <subject>Quality of service; multipath routing; multi-channel media access control; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Improvements in nanotechnology have introduced contemporary sensory devices that are capable of gathering multimedia data in the form of images, audio, and video. Wireless multimedia sensor networks are designed to handle such heterogeneous traffic. The ability to handle scalar and non-scalar data has led to the development of various real-time applications such as security surveillance, traffic monitoring, and health systems. Since these networks are an outgrowth of wireless sensor networks, they inherit the constraints of those traditional networks. In particular, these networks suffer from quality of service and energy efficiency challenges due to the nature of the traffic. This paper presents the characteristics and requirements of wireless multimedia sensor networks and approaches to mitigate the existing challenges. Furthermore, a review of recent research on multipath routing protocols and multi-channel media access protocols that provide quality of service assurances and energy efficiency in handling multimedia data is included.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_38-Wireless_Multimedia_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Network Gateway Design for NoC based System on FPGA Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100937</link>
        <id>10.14569/IJACSA.2019.0100937</id>
        <doi>10.14569/IJACSA.2019.0100937</doi>
        <lastModDate>2019-09-30T12:36:57.4600000+00:00</lastModDate>
        
        <creator>Guruprasad S P</creator>
        
        <creator>Chandrasekar B.S</creator>
        
        <subject>Network gateway; network on chip; FPGA; routing; network interface; crossbar switch</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Network on Chip (NoC) is an emerging interconnect solution with reliable and scalable features beyond System on Chip (SoC) and helps overcome the drawbacks of bus-based interconnection in SoC. Multiple cores or other networks have a boundary and can communicate only with devices directly connected to them; to communicate with cores outside this boundary, the NoC requires gateway functionality. In this manuscript, a cost-effective Network Gateway (NG) model is designed, and the interconnection of the network gateway with multiple cores connected to the NoC-based system is prototyped on an Artix-7 FPGA. The NG mainly consists of a serializer and deserializer for transmitting and receiving data packets with proper synchronization, a temporary register to hold the network data, and an electronic crossbar switch connected to multiple cores and controlled by a switch controller. The NG, together with the router and NoC-based systems of different sizes, is designed using congestion-free adaptive-XY routing. The implementation results and performance are analyzed for the NG-based NoC in terms of average latency and maximum throughput for different Packet Injection Ratios (PIR). The proposed network gateway achieves low latency and high throughput in NoC-based systems for different PIRs.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_37-Performance_Evaluation_of_Network_Gateway_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Deep Learning Approach in Forecasting Banana Harvest Yields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100935</link>
        <id>10.14569/IJACSA.2019.0100935</id>
        <doi>10.14569/IJACSA.2019.0100935</doi>
        <lastModDate>2019-09-30T12:36:57.4430000+00:00</lastModDate>
        
        <creator>Mariannie A Rebortera</creator>
        
        <creator>Arnel C Fajardo</creator>
        
        <subject>Yield forecasting; Deep Learning; Long short-term memory; Banana harvest yield forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This work aims to build a deep, multifaceted system proficient in forecasting banana harvest yields, which is essential for extensive planning toward sustainable production in the agriculture sector. Recently, the deep learning (DL) approach has been used as a new alternative model in forecasting. In this paper, the enhanced DL approach incorporates multiple long short-term memory (LSTM) layers with multiple neurons in each layer, fully trained to build a state for forecasting. The enhanced model used banana harvest yield data from the agrarian reform beneficiary (ARB) cooperative of Dapco in Davao del Norte, Philippines. Model parameters such as epoch, batch size, and number of neurons were tuned to identify the optimal values to be used in the experiments. The root mean squared error (RMSE) is used to evaluate the performance of the model. Using the same set of training and testing data, experiments show that the enhanced model achieved an optimal RMSE of 34.805, outperforming the single and multiple LSTM layer models with 43.5 percent and 44.95 percent reductions in error rates, respectively. Since there is no evidence that LSTM recurrent neural networks have been applied to the same agricultural problem domain, no standard is available regarding the achievable level of error reduction in the forecast. Investigating the performance of the model on diverse datasets, specifically with multiple input features (multivariate), is suggested for future exploration. Furthermore, extending this approach to a web-based and handy mobile application is planned for the benefit of the region's medium-scale banana growers, supporting efficient and effective decision making and advance planning.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_35-An_Enhanced_Deep_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Dengue Forecasting Model: A Case Study in Iligan City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100936</link>
        <id>10.14569/IJACSA.2019.0100936</id>
        <doi>10.14569/IJACSA.2019.0100936</doi>
        <lastModDate>2019-09-30T12:36:57.4430000+00:00</lastModDate>
        
        <creator>Ian Lindley G Olmoguez</creator>
        
        <creator>Mia Amor C. Catindig</creator>
        
        <creator>Minchie Fel Lou Amongos</creator>
        
        <creator>Fatima G. Lazan</creator>
        
        <subject>Dengue; predictive models; Pearson’s correlation; multiple linear regression; Poisson regression; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Dengue is a viral mosquito-borne infection that is endemic and has become a major public health concern in the Philippines. Cases of dengue in the country have been recorded to be increasing; however, it is reported that the country lacks a predictive system that could aid in the formulation of an effective approach to combat the rise of dengue cases. Various studies have reported that climatic factors can influence the transmission rate of dengue. Thus, this study aimed to predict the probability of dengue incidence in Iligan City per barangay based on the relationship between climatic factors and dengue cases, using different predictive models with data from 2008 to 2017. Multiple Linear Regression, Poisson Regression, and Random Forest are integrated in a mini-system to automate the display of the prediction results. Results indicate that Random Forest performs best, with 73.0% accuracy and a 33.58% error percentage, with time period and mean temperature as predictive variables.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_36-Developing_a_Dengue_Forecasting_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilizing Feature Selection in Identifying Predicting Factors of Student Retention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100934</link>
        <id>10.14569/IJACSA.2019.0100934</id>
        <doi>10.14569/IJACSA.2019.0100934</doi>
        <lastModDate>2019-09-30T12:36:57.4300000+00:00</lastModDate>
        
        <creator>January D Febro</creator>
        
        <subject>Educational data mining; feature selection; data preprocessing; knowledge discovery; student retention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Student retention is an important issue faced by Philippine higher education institutions. It is a key concern that needs to be addressed because the knowledge students gain can contribute to the economic and community development of the country, aside from financial stability and employability. University databases contain substantial information that can be queried for knowledge discovery to aid the retention of students. This work aims to analyze factors associated with students’ success among first-year students through feature selection. This is a critical step prior to modelling in data mining, as a way to reduce the computational process and improve prediction performance. In this work, filter methods are applied to datasets queried from a university database. To demonstrate the applicability of this method as a pre-processing step prior to data modelling, a predictive model is built using the selected dominant features. The accuracy result jumps to 92.09%. Also, the feature selection technique revealed that post-admission variables are the dominant predictors. Recognizing these factors, the university could improve its intervention programs to help students persist and succeed. This shows that feature selection is an important step that should be done prior to designing any predictive model.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_34-Utilizing_Feature_Selection_in_Identifying_Predicting_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microcontroller-based Vessel Passenger Tracker using GSM System: An Aid for Search and Rescue Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100933</link>
        <id>10.14569/IJACSA.2019.0100933</id>
        <doi>10.14569/IJACSA.2019.0100933</doi>
        <lastModDate>2019-09-30T12:36:57.4130000+00:00</lastModDate>
        
        <creator>Joel I Miano</creator>
        
        <creator>Ernesto E. Empig</creator>
        
        <creator>Alexander R. Gaw</creator>
        
        <creator>Ofelia S. Mendoza</creator>
        
        <creator>Danilo C. Adlaon</creator>
        
        <creator>Sheena B. Ca&#241;edo</creator>
        
        <creator>Roan Duval A. Dangcal</creator>
        
        <creator>Angelie S. Sumalpong</creator>
        
        <subject>Global Positioning System (GPS); Global System for Mobile communications (GSM); Organic Light Emitting Diode (OLED); Arduino-Nano microcontroller; tracking system; life jacket; life jacket light</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The maritime transport industry in the Philippines has been growing through the years and has been a catalyst in the industrial development of the country. Although the maritime transport sector is one of the largest industries in the country, the safety devices and technology used have been slow to change. Natural hazards and human error are the main causes of maritime incidents, resulting in multiple casualties and missing persons every year, a safety problem in the maritime transport industry that this study seeks to address. The study aims to design and develop a system that will locate an overboard passenger whenever a vessel is in distress. The Floating Overboard Accident Tracking System (FLOATS) was conceptualized by combining Search Theory, the Theory of Planned Behavior (TPB) and Disaster Preparedness, the increasing availability of tracking and monitoring technologies, and the advancement of communication systems. The system consists of the Global Positioning System (GPS) for location data, Global System for Mobile (GSM) communications for the transmission and reception of emergency messages, an Arduino-Nano microcontroller to handle the processing, an inflatable life jacket with a signal light, and a rescue update display using an organic light-emitting diode (OLED) for search and rescue operations. Tests and surveys established the functionality, reliability, and acceptability of the system, which will greatly benefit maritime incident responders by securing vessel passengers from hazards and reducing the time needed through speedy search and rescue operations.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_33-Microcontroller_based_Vessel_Passenger_Tracker.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule-based Emotion AI in Arabic Customer Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100932</link>
        <id>10.14569/IJACSA.2019.0100932</id>
        <doi>10.14569/IJACSA.2019.0100932</doi>
        <lastModDate>2019-09-30T12:36:57.3970000+00:00</lastModDate>
        
        <creator>Mohamed M Abbassy</creator>
        
        <creator>Ayman Abo-Alnadr</creator>
        
        <subject>Component; rule-based; emotion; customer review; Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>E-commerce emotion analysis is a notable and pivotal advance since it captures the customer&#39;s emotion toward a product and uses those emotions to decide whether the customer attitude is negative, positive, or neutral. Posting customer reviews has become an increasingly popular way for people to share with other customers their emotions and feelings toward a product. These reviews have a significant impact on future sales. The proposed system utilizes word combinations of adjectives and adverbs to improve the emotion analysis process, using rule-based emotion analysis. The system extracts an Arabic customer review and computes the frequency of each word. It then computes the emotion and score of each customer review. The system likewise computes the emotion and score of simple Arabic sentences.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_32-Rule_based_Emotion_AI_in_Arabic_Customer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Different Data Mining Techniques for Social Media News Credibility Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100931</link>
        <id>10.14569/IJACSA.2019.0100931</id>
        <doi>10.14569/IJACSA.2019.0100931</doi>
        <lastModDate>2019-09-30T12:36:57.3970000+00:00</lastModDate>
        
        <creator>Sahar F Sabbeh</creator>
        
        <subject>Data mining; performance evaluation; news credibility; Twitter; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Social media has recently become a basic source of news consumption and sharing among millions of users. Social media platforms enable users to publish and share their own generated content with little or no restriction. However, this creates an opportunity for the spread of inaccurate or misleading content, which can badly affect users’ beliefs and decisions. This is why credibility assessment of social media content has recently received tremendous attention. The majority of studies in the literature focus on identifying features that provide high predictive power when fed to data mining models and on selecting the model with the highest predictive performance given those features. Results of these studies conflict regarding the best model. Additionally, they disregard the fact that real-time credibility assessment is needed, and thus time and resource consumption is crucial for model selection. This study tries to fill this gap by investigating the performance of different data mining techniques for credibility assessment in terms of both functional and operational characteristics, for a balanced evaluation that considers both model performance and interoperability.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_31-Performance_Evaluation_of_different_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Planning towards Automation of Fiber To The Home (FTTH) Considering Optic Access Network (OAN) Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100930</link>
        <id>10.14569/IJACSA.2019.0100930</id>
        <doi>10.14569/IJACSA.2019.0100930</doi>
        <lastModDate>2019-09-30T12:36:57.3830000+00:00</lastModDate>
        
        <creator>Abid Naeem</creator>
        
        <creator>Shahryar Shafique</creator>
        
        <creator>Zahid Wadud</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Nadeem Safwan</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <subject>Fiber To The Home; Passive Optical Networks; GPON; triple play; cost effective; customer satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>To meet the increasing demand of future higher-bandwidth applications, fiber-based Gigabit Passive Optical Network (GPON) access is considered the best solution to deliver triple-play services (voice, data, video). Hence, it becomes obligatory to migrate from traditional copper-based networks to fiber-based ones. Due to rapid technological evolution, tough competition, and budget limitations, service providers are struggling to provide a cost-effective solution that minimizes their operational cost while delivering extraordinary customer satisfaction. One of the factors that increases the cost of an overall Fiber To The Home (FTTH) network is unplanned deployment, which results in the utilization of extra components and resources. Hence, it is imperative to determine a suitable technique that helps to reduce the planning process, required time, and deployment cost through optimization. Automation-based planning is one of the possible ways to automate the network design at the lowest probable cost. In this research, a planning technique for migration from a copper to a fiber access network with a manageable and optimized Passive Optical Network (PON-FTTx) infrastructure is presented, identifying a cost-effective strategy for developing countries.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_30-Strategic_Planning_towards_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Issues and Challenges in Romanized Sindhi Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100929</link>
        <id>10.14569/IJACSA.2019.0100929</id>
        <doi>10.14569/IJACSA.2019.0100929</doi>
        <lastModDate>2019-09-30T12:36:57.3670000+00:00</lastModDate>
        
        <creator>Irum Naz Sodhar</creator>
        
        <creator>Akhtar Hussain Jalbani</creator>
        
        <creator>Muhammad Ibrahim Channa</creator>
        
        <creator>Dil Nawaz Hakro</creator>
        
        <subject>Romanized Sindhi Text (RST); Sindhi language; issues and challenges; transliterator; social networks communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Nowadays, the Sindhi language is widely used on the internet for various purposes, such as newspapers, Sindhi literature, books, educational/official websites, social network communication, and teaching and learning processes. Despite the development of computer technology, users face difficulties and problems in writing Sindhi script. In this study, various issues and challenges arising in Romanized Sindhi text produced by Roman transliteration (Sindhi text (ST) converted to Romanized Sindhi text) are identified. These acknowledged issues include noise, the written script of Romanized text and its style, spacing issues in Romanized script, characters not suitable for Romanized Sindhi, and paragraph, row, character, punctuation, row-break, and font-style issues. This study provides a summary of the issues and challenges of Romanized Sindhi text, along with detailed information on the problems people face when chatting in Romanized Sindhi text.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_29-Identification_of_Issues_and_Challenges_in_Romanized_Sindhi_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Nested Genetic Algorithm for Mobile Ad-Hoc Network Optimization with Fuzzy Fitness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100928</link>
        <id>10.14569/IJACSA.2019.0100928</id>
        <doi>10.14569/IJACSA.2019.0100928</doi>
        <lastModDate>2019-09-30T12:36:57.3500000+00:00</lastModDate>
        
        <creator>NourElDin S Eissa</creator>
        
        <creator>Ahmed Zakaria Talha</creator>
        
        <creator>Ahmed F. Amin</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>Broadcasting; DFCN; fuzzy logic; genetic algorithms; Madhoc simulator; MANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>One of the major challenges facing Mobile Ad-hoc Networks (MANET) is broadcasting, which constitutes a very important part of the infrastructure of such networks. This paper presents a nested genetic algorithm (GA) technique with fuzzy logic-based fitness that optimizes the broadcasting capability of such networks. While the optimization of broadcasting is normally considered a multi-objective problem with various output parameters that require tuning, the proposed system takes another approach that focuses on a single output parameter: the network reachability time. This is the time required for the data to reach a certain percentage of connected clients in the network. The time is optimized by tuning different decision parameters of the Delayed Flooding with Cumulative Neighborhood (DFCN) broadcasting protocol. The proposed system is developed and simulated with the help of the Madhoc network simulator and is applied to different realistic real-life scenarios. The results reveal that the reachability time responds well to the suggested system and show that each scenario responds differently to the tuning of the decision parameters.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_28-A_Nested_Genetic_Algorithm_for_Mobile_Ad_Hoc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Establishing News Credibility using Sentiment Analysis on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100927</link>
        <id>10.14569/IJACSA.2019.0100927</id>
        <doi>10.14569/IJACSA.2019.0100927</doi>
        <lastModDate>2019-09-30T12:36:57.3330000+00:00</lastModDate>
        
        <creator>Zareen Sharf</creator>
        
        <creator>Zakia Jalil</creator>
        
        <creator>Wajiha Amir</creator>
        
        <creator>Nudrat Siddiqui</creator>
        
        <subject>Sentiment analysis; tweets; opinion mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The widespread use of the Internet has resulted in a massive number of websites, blogs, and forums. People can easily discuss different topics and products with each other, and can leave reviews to help out others. This automatically leads to the necessity of a system that can automatically extract opinions from those comments or reviews to perform a strong analysis. Such a system can help businesses learn people’s opinions about their products/services so they can make further improvements. Sentiment Analysis, or Opinion Mining, is the system that intelligently performs classification of sentiments by extracting those opinions or sentiments from the given text (comments or reviews). This paper presents thorough research work carried out on tweet sentiment analysis. An area-specific analysis is done to determine the polarity of extracted tweets in order to automatically classify what recent news people have liked or disliked. The research is further extended to perform retweet analysis to describe the re-distribution of reactions to a specific Twitter post (or tweet).</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_27-Establishing_News_Credibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Learning Effectiveness Evaluation of Gamification in e-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100926</link>
        <id>10.14569/IJACSA.2019.0100926</id>
        <doi>10.14569/IJACSA.2019.0100926</doi>
        <lastModDate>2019-09-30T12:36:57.3330000+00:00</lastModDate>
        
        <creator>Mohammad T Alshammari</creator>
        
        <subject>Gamification; e-learning systems; interaction design; experimental evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This paper proposes a gamification design model that can be used to design and develop gamified e-learning systems. Furthermore, a controlled and carefully designed experimental evaluation of the learning effectiveness of gamification is offered. The experiment was conducted with 44 participants randomly assigned to an experimental ‘gamified’ condition and a control ‘non-gamified’ condition. In both conditions, the same learning material, teaching computer security, was used. The main difference between the two conditions was the integration of gamification in an e-learning system designed based on the proposed model. The results indicate that learning using the gamified version of the e-learning system produces better short-term and medium-term learning gains than learning using the non-gamified version. Future avenues of research are also provided.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_26-Design_and_Learning_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of C2C e-Commerce Product Images using Deep Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100925</link>
        <id>10.14569/IJACSA.2019.0100925</id>
        <doi>10.14569/IJACSA.2019.0100925</doi>
        <lastModDate>2019-09-30T12:36:57.3200000+00:00</lastModDate>
        
        <creator>Herdian </creator>
        
        <creator>Gede Putra Kusuma</creator>
        
        <creator>Suharjito</creator>
        
        <subject>Image classification; e-commerce; product images; deep learning; hyperparameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>C2C (consumer-to-consumer) is a business model where two individuals transact or conduct business with each other using a platform. A consumer acting as a seller puts their product on a platform, where it is later displayed to another consumer acting as a buyer. This condition encourages platforms to maintain high-quality product information, especially the images provided by sellers. Product images need to be relevant to the product itself, which can be controlled automatically using image classification. In this paper, we carried out research to find the best deep learning model for image classification of e-commerce products. A dataset of 12,500 product images was collected from various web sources to be used in the training and testing process. Five models were selected and fine-tuned using a uniform hyperparameter set-up. Those hyperparameters were found through a manual process of trying many hyperparameter combinations. The testing result for every model is presented and evaluated. The results show that NASNetLarge yields the best performance among all evaluated models, with 84% testing accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_25-Classification_of_C2C_E_Commerce_Product_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Defeasible Logic-based Framework for Contextualizing Deployed Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100923</link>
        <id>10.14569/IJACSA.2019.0100923</id>
        <doi>10.14569/IJACSA.2019.0100923</doi>
        <lastModDate>2019-09-30T12:36:57.3030000+00:00</lastModDate>
        
        <creator>Noor Sami Al-Anbaki</creator>
        
        <creator>Nadim Obeid</creator>
        
        <creator>Khair Eddin Sabri</creator>
        
        <subject>Context-awareness; nonmonotonicity; defeasible logic; distributed reasoning; argumentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In human-to-human communication, context increases the ability to convey ideas. However, in human-to-application and application-to-application communication, this property is difficult to attain. Context-awareness has become an emergent need to achieve the goal of delivering more user-centric personalized services, especially in ubiquitous environments. However, there is no agreed-upon generic framework that can be reused by deployed applications to support context-awareness. In this paper, a defeasible logic-based framework for context-awareness is proposed that can enhance the functionality of any deployed application. The nonmonotonic nature of defeasible logic has the capability of attaining justifiable decisions in dynamic environments. Classical defeasible logic is extended with meta-rules to increase its expressive power, facilitate its representation of complex multi-context systems, and permit distributed reasoning. The framework is able to produce justified decisions depending on both the basic functionality of the system, which is itself promoted by contextual knowledge, and any cross-cutting concerns that might be added by different authorities or due to further improvements to the system. Active concerns that are triggered in certain contexts are encapsulated in separate defeasible theories. A proof theory is defined along with a study of its formal properties. The framework is applied to a motivating scenario to demonstrate its feasibility, and the conclusions are analyzed using argumentation as an approach to reasoning.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_23-A_Defeasible_Logic_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Pitch and Duration Range in Speech of Sindhi Adults for Prosody Generation Module</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100924</link>
        <id>10.14569/IJACSA.2019.0100924</id>
        <doi>10.14569/IJACSA.2019.0100924</doi>
        <lastModDate>2019-09-30T12:36:57.3030000+00:00</lastModDate>
        
        <creator>Shahid Ali Mahar</creator>
        
        <creator>Mumtaz Hussain Mahar</creator>
        
        <creator>Shahid Hussain Danwar</creator>
        
        <creator>Javed Ahmed Mahar</creator>
        
        <subject>Prosody generation; speech analysis; pitch; duration; Sindhi sounds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Prosody refers to the structure of sound and rhythm, both of which are essential parts of speech processing applications. It comprises tone, stress, intonation, and rhythm. Pitch and duration are the core acoustic elements, and this information can ease the design and development of the application module. Through these two characteristics, the prosody module can be validated. These two factors have been investigated using the sounds of Sindhi adults and are presented in this paper. For the experiment and analysis, 245 male and female undergraduate students were selected as speakers, belonging to five different districts of upper Sindh, and categorized into groups according to their age. Particular sentences were given and recorded individually from the speakers. Afterward, these sentences were segmented into words and stored in a database consisting of 1960 sounds. The variation of pitch frequency was measured via the Standard Deviation (SD). The lowest mean SDs of 0.25 Hz and 0.28 Hz were obtained from the male and female groups of district Sukkur. The highest mean SDs, 0.42 Hz and 0.49 Hz, were measured for the male and female groups of district Ghotki. Generally, the pitch of female speakers was found to be higher than that of male speakers, by a variation of 0.072 Hz.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_24-Investigation_of_Pitch_and_Duration_Range.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Model of Game-based Learning in Fire Safety for Preschool Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100922</link>
        <id>10.14569/IJACSA.2019.0100922</id>
        <doi>10.14569/IJACSA.2019.0100922</doi>
        <lastModDate>2019-09-30T12:36:57.2870000+00:00</lastModDate>
        
        <creator>Nur Atiqah Zaini</creator>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <subject>Game-based learning; fire safety; user-centered design; effectiveness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The Model of Game-Based Learning in Fire Safety was developed for preschool children to educate them on fire safety issues. Due to the lack of awareness of fire hazards, several factors have arisen regarding this issue, such as children’s ages, experiences, and knowledge. The main objective of this study is to identify the user requirements of preschool children in developing the Model of Game-Based Learning in Fire Safety. This study involved six preschool children of Tabika Kemas Kampung Berawan, Limbang, Sarawak, using the User-Centered Design method. Cognitive, behavioral, and psychomotor skills are the main aspects used to develop the model. To lower the risk of injuries during practical training in real situations, there is a need to educate children using tablet technology. Therefore, a prototype known as APi Game-Based Learning has been developed as a platform for children to learn about fire safety issues. This APi prototype was developed to validate the Model of Game-Based Learning in Fire Safety for preschool children. The findings of the study showed that the engagement of children in learning fire safety through the game improved their knowledge, behavior, and psychomotor skills. Overall, this study makes an important contribution in determining the usability, in terms of effectiveness, of active learning for preschool children.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_22-The_Model_of_Game_based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Control and Design of Electrical Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100921</link>
        <id>10.14569/IJACSA.2019.0100921</id>
        <doi>10.14569/IJACSA.2019.0100921</doi>
        <lastModDate>2019-09-30T12:36:57.2730000+00:00</lastModDate>
        
        <creator>Wissem BEKIR</creator>
        
        <creator>Lilia EL AMRAOUI</creator>
        
        <creator>Fr&#233;d&#233;ric GILLON</creator>
        
        <subject>Optimal control; optimal sizing; Pontryagin’s maximum principle; permeances network; hybrid stepper motor; energetic efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This paper presents a global optimization approach aiming to improve the energy efficiency of electrical machines. The process is applied to a hybrid stepper motor, allowing design and control to be optimized simultaneously. The approach is centered on Pontryagin&#39;s maximum principle, applied to a magnetodynamic model based on a permeances network. The originality of the proposed approach is that it achieves, within a single process, energy minimization through both optimal control and optimal sizing.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_21-Optimal_Control_and_Design_of_Electrical_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Rank Text-based Essays using Pagerank Method Towards Student’s Motivational Element</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100920</link>
        <id>10.14569/IJACSA.2019.0100920</id>
        <doi>10.14569/IJACSA.2019.0100920</doi>
        <lastModDate>2019-09-30T12:36:57.2570000+00:00</lastModDate>
        
        <creator>M Zainal Arifin</creator>
        
        <creator>Naim Che Pee</creator>
        
        <creator>Nanna Suryana Herman</creator>
        
        <subject>Pagerank; learning outcomes; similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Learning outcomes are an important measure of student achievement during the learning process. Today’s learning focuses more on problem-solving and reasoning than on routine exercises, and most exams now include analysis questions that require students to think and synthesize. Because such questions are difficult for most students, unprepared students often produce answers that are almost identical to their classmates’. Teachers try to guide students to work professionally and originally, but they face difficulty assessing students’ work, especially when assignments are conducted online without face-to-face instruction or discussion. To bridge this gap, the teacher needs a method or algorithm to rank answers by similarity and thereby encourage students to produce original answers. This research provides a solution for calculating students&#39; ranks based on the similarity scores of their essay answers. PageRank, the ranking algorithm used by Google, is applied to a Markov matrix that contains the pairwise similarity scores among students&#39; answers; these scores are computed with the power method until convergence. The rank is displayed to the teacher to review the similarity level of students&#39; answers, presented as a line chart in which the x-axis lists the students and the y-axis depicts the level of similarity. The ranking computation in Matlab produces an eigenvector that acts as the rank measure: the higher the rank, the more similar an answer is to the others. Students with high ranks can thus be encouraged to work on their answers more seriously and ensure they reflect original thought. In conclusion, a similarity score matrix combined with the PageRank algorithm can help the teacher provide peer motivation and encourage students’ internal motivation through a ranked-answers presentation.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_20-A_Novel_Approach_to_Rank_Text_based_Essays.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Seam Carving by Changing Resizing Depending on the Object Size in Time and Space Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100919</link>
        <id>10.14569/IJACSA.2019.0100919</id>
        <doi>10.14569/IJACSA.2019.0100919</doi>
        <lastModDate>2019-09-30T12:36:57.2570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Seam carving; data compression in time and space domains; video data compression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>A modified seam carving method is proposed that switches from conventional seam carving to resizing depending on the object size. When the object is dominant in the scene of interest, conventional seam carving deforms components of the object; to avoid this, the proposed method applies resizing instead. In addition, a method for video data compression based on seam carving not only in the image space domain but also in the time domain is proposed. A specific feature is that the original quality of the video can be displayed when it is replayed. Using frame-to-frame similarity, defined by the histogram distance between neighboring frames, frames with great similarity can be carved, so the data is compressed in the time domain. Moreover, the carved frames can be recorded in the frame header so that they can be recovered when the compressed video is reproduced; video quality is thus fully maintained, with no degradation at all. The compression ratio is assessed on several video datasets, and the results show that the compression ratio of the proposed space- and time-domain seam carving is greater than that of conventional space-domain seam carving.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_19-Modified_Seam_Carving_by_Changing_Resizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customers Churn Prediction using Artificial Neural Networks (ANN) in Telecom Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100918</link>
        <id>10.14569/IJACSA.2019.0100918</id>
        <doi>10.14569/IJACSA.2019.0100918</doi>
        <lastModDate>2019-09-30T12:36:57.2400000+00:00</lastModDate>
        
        <creator>Yasser Khan</creator>
        
        <creator>Shahryar Shafiq</creator>
        
        <creator>Abid Naeem</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Nadeem Safwan</creator>
        
        <creator>Sabir Hussain</creator>
        
        <subject>Neural Network; ANN; prediction; churn management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>To survive the fierce competition of the telecommunication industry and retain existing loyal customers, predicting potential churn customers through predictive modeling techniques has become a crucial task for practitioners and academicians. Loyal customers can be identified through efficient predictive models, and allocating dedicated resources to retaining these customers would control the outflow of dissatisfied consumers considering leaving the company. This paper proposes an artificial neural network approach for predicting customers who intend to switch to other operators. The model works on multiple attributes, such as demographic data, billing information, and usage patterns, from telecom company datasets. Compared with other prediction techniques, the Artificial Neural Network (ANN) based approach predicts telecom churn in Pakistan with an accuracy of 79%. The results clearly indicate the churn factors, so the necessary steps can be taken to eliminate the causes of churn.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_18-Customers_Churn_Prediction_using_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Line Area Monitoring using Structural Similarity Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100917</link>
        <id>10.14569/IJACSA.2019.0100917</id>
        <doi>10.14569/IJACSA.2019.0100917</doi>
        <lastModDate>2019-09-30T12:36:57.2270000+00:00</lastModDate>
        
        <creator>Abderrahman AZI</creator>
        
        <creator>Abderrahim SALHI</creator>
        
        <creator>Mostafa JOURHMANE</creator>
        
        <subject>Bresenham’s Algorithm; Structural Similarity Index; SSI; motion detection; Line Monitoring Algorithm; LMA; OpenCV; surveillance; camera; video surveillance system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Real-time motion detection in a specific area is the most important task in every video surveillance system. In this paper, a novel real-time motion detection algorithm for processing line zones, called the Line Monitoring Algorithm (LMA), is introduced. The algorithm integrates Bresenham’s algorithm and the Structural Similarity Index (SSI) to achieve the best performance: Bresenham’s algorithm is used to collect the line pixels between two given points, and the SSI is then used for real-time similarity calculation for line motion detection. The most attractive aspect of the LMA is that it does not need to compare all pixels of whole images or regions to monitor line areas. The algorithm offers high capability, processing speed, and efficiency for motion detection, and demands less computation time from the hardware. The main objective of this paper is to use a video surveillance system implementing the LMA to supervise the Car Reverse Test (CRT) for the driving license exam in Morocco. The evaluation of the experimental results from implementing the proposed algorithm is reported in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_17-Line_Area_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Socialization of Information Technology Utilization and Knowledge of Information System Effectiveness at Hospital Nurses in Medan, North Sumatra</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100916</link>
        <id>10.14569/IJACSA.2019.0100916</id>
        <doi>10.14569/IJACSA.2019.0100916</doi>
        <lastModDate>2019-09-30T12:36:57.2100000+00:00</lastModDate>
        
        <creator>Roymond H Simamora</creator>
        
        <subject>Information systems; knowledge; nursing; socialization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The background of this research is globalization and the development of science, especially information and communication technology, which has influenced and driven changes and renewal in people&#39;s lives, including the field of nursing. The role of information and communication in this aspect of life is therefore very important; indeed, most futurists agree that information is one of the most important sources of future power. Purpose: to identify the use of information technology in nursing and determine the effectiveness of the use of nursing information systems; to identify nurses&#39; knowledge about the effectiveness of nursing information systems; and to identify nurses&#39; knowledge as seen from the socialization of the effectiveness of nursing information systems. Method: a quantitative study with a survey approach conducted on 220 nurses. Validity was significant at p &lt; 0.05, and Cronbach&#39;s alpha reliability exceeded 0.60. The data were then subjected to classic assumption tests consisting of multicollinearity, autocorrelation, heteroscedasticity, and normality tests, followed by multiple linear regression, t-tests, F-tests, and coefficient-of-determination tests. Results: the use of information technology affects the effectiveness of nursing information systems, while nurses&#39; knowledge does not, whether considered directly or through socialization. Taken together, the use of information technology and nurses&#39; knowledge influence the effectiveness of nursing information systems, with socialization as a control variable in the coefficient-of-determination results. Suggestion: hospital managers must pay attention to the quality of nursing human resources through training, certification, recognition of competencies, supervision, selection, and guidance aimed at providing safe, comfortable, and satisfying services for patients, families, and communities.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_16-Socialization_of_Information_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extended Consistent Fuzzy Preference Relation to Evaluating Website Usability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100915</link>
        <id>10.14569/IJACSA.2019.0100915</id>
        <doi>10.14569/IJACSA.2019.0100915</doi>
        <lastModDate>2019-09-30T12:36:57.1930000+00:00</lastModDate>
        
        <creator>Tenia Wahyuningrum</creator>
        
        <creator>Azhari Azhari</creator>
        
        <creator>Suprapto</creator>
        
        <subject>Usability; e-commerce; website quality; logarithmic fuzzy preference programming; consistent fuzzy preference relations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In the current era, website developers recognize usability evaluation as a significant factor in the quality and success of e-commerce websites. The Fuzzy Analytical Hierarchy Process (FAHP) is one method for measuring website usability. Several researchers have applied the Logarithmic Fuzzy Preference Programming (LFPP) approach to derive crisp weights from the fuzzy pairwise comparison matrices of the FAHP approach. However, the LFPP method has a shortcoming in determining the consistency index of decision-maker judgments: in some cases it produces a consistency value of 0 for consistent fuzzy comparison matrices, which contradicts previous research stating that the consistency value of such matrices should be greater than 0. This research proposes an extended Consistent Fuzzy Preference Relation (ECFPR) to assist regular judgment in specifying the weights for measuring e-commerce website usability. The CFPR method is used to form a new pairwise comparison matrix, and ECFPR calculates the lower and upper values of the triangular fuzzy numbers from only n-1 comparisons, where n is the number of criteria. A numerical experiment showed that the consistency index obtained by the extended CFPR method was significantly better than that of the LFPP method: the optimal value was always greater than 0, and the consistency index of ECFPR had a higher mean value than that of LFPP, so using the ECFPR method increases the number of consistent comparison matrices. The ECFPR method was also successfully applied in an experimental case evaluating e-commerce website usability.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_15-An_Extended_Consistent_Fuzzy_Preference_Relation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Making Systems for Managing Business Processes in Enterprises Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100914</link>
        <id>10.14569/IJACSA.2019.0100914</id>
        <doi>10.14569/IJACSA.2019.0100914</doi>
        <lastModDate>2019-09-30T12:36:57.1930000+00:00</lastModDate>
        
        <creator>Ali F Dalain</creator>
        
        <subject>Management systems; decision support systems; multi-agent systems; group policy; enterprise groups</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In current economic realities, forms of integrating business entities through the creation of enterprise groups (EGs), reorganized from industry structures or created anew by acquiring existing companies, are becoming increasingly relevant. Enterprises operate under conditions of economic instability and an evolving system of economic relations, which imposes fundamentally new requirements on managing the interaction of enterprises. Under these conditions, the successful development of enterprises, and often their very existence, depends both on the effective use of the management systems themselves and on the competence of the management decisions made. Consequently, for decision makers and managers of Group Policy (GP), the problem of evaluating the development of the EG and promptly making sound management decisions in an unstable and rapidly changing economic environment is of particular relevance. One promising way to solve this problem is the development of decision support systems (DSS) that use scientifically grounded decision-making methods based on modern mathematical apparatus and computing equipment. At present, the approach to managing the development of EGs represents them as a multi-agent system (MAS). The DSS does not replace but complements the existing management systems in the EGs, interacting with them and using information about the functioning of EG units.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_14-Decision_Making_Systems_for_Managing_Business_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DLBS: Decentralize Load-Balance Scheduling Algorithm for Real-Time IoT Services in Mist Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100913</link>
        <id>10.14569/IJACSA.2019.0100913</id>
        <doi>10.14569/IJACSA.2019.0100913</doi>
        <lastModDate>2019-09-30T12:36:57.1800000+00:00</lastModDate>
        
        <creator>Hosam E Refaat</creator>
        
        <creator>Mohamed A.Mead</creator>
        
        <subject>Cloud computing; fog computing; mist computing; IoT; load balancing; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The Internet of Things (IoT) has been industrially investigated as Platform as a Service (PaaS). The naive design of such services joins the classic centralized cloud computing infrastructure with IoT services, a combination also called the Cloud of Things (CoT). Despite the increasing resource utilization of cloud computing, it faces challenges such as high latency, network failure, resource limitations, fault tolerance, and security. To address these challenges, fog computing is used: an extension of the cloud system that provides resources closer to IoT devices. It is worth mentioning that the scheduling mechanisms for IoT services play a pivotal role in resource allocation for cloud and fog computing, as scheduling methods guarantee high availability and maximize the utilization of system resources. Most previous scheduling methods rely on a centralized scheduling node, which represents a bottleneck for the system. In this paper, we propose a new scheduling model for managing real-time and soft service requests in fog systems, called Decentralized Load-Balance Scheduling (DLBS). The proposed model provides a decentralized load-balancing control algorithm that distributes the load based on the type of service request and the load status of each fog node. Moreover, the model spreads the load between system nodes like wind flow, migrating tasks from highly loaded nodes to the closest lightly loaded nodes, so the load is dynamically balanced across the whole system. Finally, DLBS is simulated and evaluated in a realistic fog environment.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_13-DLBS_Decentralize_Load_Balance_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chemical Reaction Optimization Algorithm to Find Maximum Independent Set in a Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100912</link>
        <id>10.14569/IJACSA.2019.0100912</id>
        <doi>10.14569/IJACSA.2019.0100912</doi>
        <lastModDate>2019-09-30T12:36:57.1630000+00:00</lastModDate>
        
        <creator>Mohammad A Asmaran</creator>
        
        <creator>Ahmad A. Sharieh</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <subject>Chemical reaction optimization; graph; maximum independent set; metaheuristic algorithm; modified Wilf algorithm; optimization problems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Finding a maximum independent set (MIS) in a graph is one of the fundamental problems in computer science, with solutions applicable to various real-life applications such as scheduling and prioritization problems. Unfortunately, it is an NP-hard problem, which limits its use in providing solutions for large instances. This leads scientists to seek fast algorithms that provide near-optimal solutions, and one such technique is the use of metaheuristic algorithms. In this paper, a metaheuristic algorithm based on Chemical Reaction Optimization (CRO) is applied with various techniques to find the MIS of an application represented by a graph. The suggested CRO algorithm achieves accuracy percentages that reach 100% in some cases; this variation depends on the overall structure of the graph along with the chosen parameters and the colliding-molecule selection criteria used during the reaction operations of the CRO algorithm.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_12-Chemical_Reaction_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Criteria for Software Quality in Information System: Rasch Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100911</link>
        <id>10.14569/IJACSA.2019.0100911</id>
        <doi>10.14569/IJACSA.2019.0100911</doi>
        <lastModDate>2019-09-30T12:36:57.1470000+00:00</lastModDate>
        
        <creator>Wan Yusran Naim Wan Zainal Abidin</creator>
        
        <creator>Zulkefli Mansor</creator>
        
        <subject>Information system; quality of software; Rasch measurement model; evaluation; factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Most organizations use information systems to manage information and support better decision making in order to deliver high-quality services. The information system must therefore be reliable and fulfill quality requirements to accommodate the organization’s needs. However, some information systems still face problems such as slow response times, accessibility problems, and compatibility issues between hardware and software. These problems affect the acceptance and usage of the information system, especially for non-computing users. Therefore, this study aimed to investigate the factors that significantly contribute to the software quality of information systems. A survey was carried out by distributing a questionnaire to 174 respondents involved in developing software for information systems. The data were analyzed using the Rasch Measurement Model, which provides reliability estimates for both respondents and instruments. The results indicate that 30 factors significantly contributed to software quality: six under functionality, five under reliability, ten under usability, five under efficiency, two under compatibility, and two under security. By identifying these factors, system developers can seriously consider enhancing the software quality of information system projects. In the future, these factors can be used to develop an evaluation tool or metric for the quality aspects of software in information system projects.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_11-The_Criteria_for_Software_Quality_in_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating and Analyzing Chatbot Responses using Natural Language Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100910</link>
        <id>10.14569/IJACSA.2019.0100910</id>
        <doi>10.14569/IJACSA.2019.0100910</doi>
        <lastModDate>2019-09-30T12:36:57.1470000+00:00</lastModDate>
        
        <creator>Moneerh Aleedy</creator>
        
        <creator>Hadil Shaiba</creator>
        
        <creator>Marija Bezbradica</creator>
        
        <subject>Chatbot; deep learning; natural language processing; similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Customer support has become one of the most important communication tools companies use to provide pre-sale and after-sale services to customers, including communication through websites, phones, and social media platforms such as Twitter; with the support of today&#39;s technologies, the connection becomes much faster and easier. In the field of customer service, companies use virtual agents (chatbots) to provide customer assistance through desktop interfaces. The main focus of this research is the automatic generation of conversation (“chat”) between a computer and a human by developing an interactive artificial intelligence agent, using natural language processing and deep learning techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Convolutional Neural Networks to predict suitable, automatic responses to customers’ queries. Given the nature of this project, we apply sequence-to-sequence learning, mapping a sequence of words representing the query to another sequence of words representing the response, together with computational techniques for learning, understanding, and producing human language content. To achieve this goal, this paper discusses efforts in data preparation, then explains the model design, generates responses, and applies evaluation metrics such as the Bilingual Evaluation Understudy (BLEU) and cosine similarity. The experimental results on the three models are very promising, especially for LSTM and GRU: they are useful in responding to emotional queries and can provide general, meaningful responses suitable for customer queries. LSTM was chosen as the final model because it achieved the best results on all evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_10-Generating_and_Analyzing_Chatbot_Responses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Readiness Evaluation of Applying e-Government in the Society: Shall Citizens begin to Use it?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100909</link>
        <id>10.14569/IJACSA.2019.0100909</id>
        <doi>10.14569/IJACSA.2019.0100909</doi>
        <lastModDate>2019-09-30T12:36:57.1330000+00:00</lastModDate>
        
        <creator>Laith T Khrais</creator>
        
        <creator>Yara M. Abdelwahed</creator>
        
        <creator>Mohammad Awni Mahmoud</creator>
        
        <subject>e-Government; citizens; governmental transactions; Jordan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>As people live in the era of the web and most of society uses networks in daily tasks, governments have found it crucial to build an electronic entity, named e-government, to make transactions easier for citizens and to bring government closer to society. The objective of this study is to assess the extent of e-government adoption in different countries, particularly Jordan; in addition, several national experiences are presented. The examination was qualitative, interviewing governmental employees and extracting results from their answers, with the continuity of e-government use by citizens as the dependent variable. The conclusion is that policies are trending toward building e-government entities and making them available for citizens to use. Further, this study recommends that the government concentrate on building individuals’ trust and on using social influence to reinforce the idea of e-government services and grow their usage.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_9-A_Readiness_Evaluation_of_applying_E_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Reality App for Teaching OOP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100908</link>
        <id>10.14569/IJACSA.2019.0100908</id>
        <doi>10.14569/IJACSA.2019.0100908</doi>
        <lastModDate>2019-09-30T12:36:57.1170000+00:00</lastModDate>
        
        <creator>Sana Rizwan</creator>
        
        <creator>Arslan Aslam</creator>
        
        <creator>Sobia Usman</creator>
        
        <creator>Muhammad Moeez Ghauri</creator>
        
        <subject>Augmented reality; object-oriented programming; unity; visualization; human computer interaction; Vuforia; rendering; compiler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Nowadays, there is a pressing need to develop interactive mediums of study, as our conventional methods of learning are not very effective. Programming has become one of the core subjects in every field of study due to its wide use. However, introducing computer programming to students who are not familiar with programming is a tough task. Interactive learning through visual effects using AR (Augmented Reality) was developed to provide a platform for new students to engage more in the learning environment. As this learning environment becomes more effective, it is easier for newcomers to understand key concepts of programming in a more effective way.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_8-Augmented_Reality_App_for_Teaching_OOP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowd-Generated Data Mining for Continuous Requirements Elicitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100907</link>
        <id>10.14569/IJACSA.2019.0100907</id>
        <doi>10.14569/IJACSA.2019.0100907</doi>
        <lastModDate>2019-09-30T12:36:57.1000000+00:00</lastModDate>
        
        <creator>Ayed Alwadain</creator>
        
        <creator>Mishari Alshargi</creator>
        
        <subject>Requirements engineering; RE; crowd data mining; NLP; Twitter; continuous requirements elicitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In software development projects, the process of requirements engineering (RE) is one in which requirements are elicited, analyzed, documented, and managed. Requirements are traditionally collected using manual approaches, including interviews, surveys, and workshops. Employing traditional RE methods to engage a large base of users has always been a challenge, especially when the process involves users beyond the organization’s reach. Furthermore, emerging software paradigms, such as mobile computing, social networks, and cloud computing, require better automated or semi-automated approaches to requirements elicitation because of the growth in system users, the accessibility of crowd-generated data, and the rapid change of users’ requirements. This research proposes a methodology to capture and analyze crowd-generated data (e.g., user feedback and comments) to find potential requirements for a software system in use. It semi-automates some requirements-elicitation tasks using data retrieval and natural language processing (NLP) techniques to extract potential requirements. It supports requirements engineers’ efforts to gather potential requirements from crowd-generated data on social networks (e.g., Twitter). It is an assistive approach that taps into unused knowledge and experiences, emphasizing continuous requirements elicitation during systems use.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_7-Crowd_Generated_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Compact Broadband Antenna for Civil and Military Wireless Communication Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100906</link>
        <id>10.14569/IJACSA.2019.0100906</id>
        <doi>10.14569/IJACSA.2019.0100906</doi>
        <lastModDate>2019-09-30T12:36:57.0830000+00:00</lastModDate>
        
        <creator>Zaheer Ahmed Dayo</creator>
        
        <creator>Qunsheng Cao</creator>
        
        <creator>Yi Wang</creator>
        
        <creator>Saeed Ur Rahman</creator>
        
        <creator>Permanand Soothar</creator>
        
        <subject>Compact antenna; broadband; microstrip feeding; civil and military; peak realized gain and impedance bandwidth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>This paper presents a compact broadband antenna for civil and military wireless communication applications. Two prototypes of the antenna are designed and simulated. The proposed antenna is etched on a low-cost substrate material with compact electrical dimensions of 0.207λ&#215;0.127λ&#215;0.0094λ mm&#179; at a 2 GHz frequency. A simple microstrip feeding technique and suitable antenna dimensions are employed in the design to attain proper impedance matching. The design variables are optimized through multiple rigorous simulations. The designed antennas achieve broadband impedance bandwidths of 89.3% and 100% at a 10 dB return loss. The antennas exhibit an omnidirectional radiation pattern at lower resonances and strong surface current distribution across the radiator. A peak realized gain of 5.2 dBi is attained at the 10.9 GHz resonant frequency. The results reveal that the proposed broadband antenna is a good choice for WiMAX, UWB, and land, naval and airborne radar applications.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_6-A_Compact_Broadband_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing a Safe Travelling Technique to Avoid the Collision of Animals and Vehicles in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100905</link>
        <id>10.14569/IJACSA.2019.0100905</id>
        <doi>10.14569/IJACSA.2019.0100905</doi>
        <lastModDate>2019-09-30T12:36:57.0700000+00:00</lastModDate>
        
        <creator>Amr Mohsen Jadi</creator>
        
        <subject>LoRa; Sensor-based mobile applications; runtime monitoring; tracking; global positioning system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>In this work, a safe travelling technique is proposed and implemented as a LoRa-based application to avoid collisions between animals and vehicles on the highways of Saudi Arabia. For the last few decades, it has been a great challenge for the authorities to protect the lives of animals and human beings on the roads due to the sudden crossing of animals on the highways. In such situations, drivers are not aware of animal movement, and serious harm to the lives of both humans and animals is observed. A LoRaWAN-based architecture, with a variety of advantages in terms of low cost and high accuracy in detecting animal movement, is made possible by the proposed method and can deliver good results as well. The accuracy of this method is improved to a great extent compared to the existing system because the LoRa sensors implanted in the animals’ skin can easily be traced by the nodes and base stations.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_5-Implementing_a_Safe_Travelling_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smartphone Image based Agricultural Product Quality and Harvest Amount Prediction Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100904</link>
        <id>10.14569/IJACSA.2019.0100904</id>
        <doi>10.14569/IJACSA.2019.0100904</doi>
        <lastModDate>2019-09-30T12:36:57.0700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <creator>Satoshi Yatsuda</creator>
        
        <subject>Smartphone camera image; agricultural product quality and harvest prediction; fertilizer control; soy plantation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>A method for predicting agricultural product quality and harvest amount using smartphone camera images is proposed. It is desirable to predict agricultural product quality and harvest amount as soon as possible after sowing. For that purpose, satellite imagery, UAV camera-based images, and ground-based camera images have been used and tried. These methods cost significantly and do not work well for several reasons; in particular, most farmers cannot use them properly. The proposed method uses only images acquired with a smartphone camera and is therefore very easy to use. If the predicted product quality and harvest amount are not satisfactory, farmers can add some additional fertilizer at the appropriate time. The experimental results with soy plantations show the potential of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_4-Smartphone_Image_based_Agricultural_Product_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Authentication and Authorization Design in Honeybee Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100903</link>
        <id>10.14569/IJACSA.2019.0100903</id>
        <doi>10.14569/IJACSA.2019.0100903</doi>
        <lastModDate>2019-09-30T12:36:57.0530000+00:00</lastModDate>
        
        <creator>Nur Husna Azizul</creator>
        
        <creator>Abdullah Mohd Zin</creator>
        
        <creator>Ravie Chandren Muniyandi</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>HMAC-Sha; REST API; peer-to-peer; web service; honeybee computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>Honeybee computing is a concept based on advanced ubiquitous computing technology to support Smart City Smart Village (SCSV) initiatives. Advanced ubiquitous computing is a computing environment that contains many devices. There are two types of communication within Honeybee computing: client-server and peer-to-peer. One authorization technique is OAuth, whereby a user can access an application without creating an account, and the application can be accessed from multiple devices. OAuth is suitable for controlling limited access to resources on the server. The server uses a REST API as a web service to publish data from resources. However, since Honeybee computing also supports peer-to-peer communication, security problems can still arise. In this paper, we propose the design of a secure data transmission scheme for Honeybee computing by adopting the authorization process of OAuth 2.0 and Elliptic Curve Diffie-Hellman (ECDH) with HMAC-SHA. This article also discusses the communication flow after adopting OAuth 2.0 and ECDH in the computing environment.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_3-Authentication_and_Authorization_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Implementation of RISC-based Memory-centric Processor Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100902</link>
        <id>10.14569/IJACSA.2019.0100902</id>
        <doi>10.14569/IJACSA.2019.0100902</doi>
        <lastModDate>2019-09-30T12:36:57.0370000+00:00</lastModDate>
        
        <creator>Danijela Efnusheva</creator>
        
        <subject>FPGA; memory-centric computing; processor in memory; RISC architecture; VHDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The development of the microprocessor industry in terms of speed, area, and multi-processing has resulted in increased data traffic between the processor and the memory in a classical processor-centric Von Neumann computing system. To alleviate the processor-memory bottleneck, in this paper we propose a RISC-based memory-centric processor architecture that provides a stronger merge between the processor and the memory by adjusting the standard memory hierarchy model. Specifically, we develop a RISC-based processor that integrates the memory into the same chip die and thus provides direct access to the on-chip memory, without the use of general-purpose registers (GPRs) and cache memory. The proposed RISC-based memory-centric processor is described in VHDL and then implemented on a Virtex-7 VC709 Field Programmable Gate Array (FPGA) board by means of the Xilinx VIVADO Design Suite. The simulation timing diagrams and FPGA synthesis (implementation) reports are discussed and analyzed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_2-FPGA_Implementation_of_RISC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence Chatbots are New Recruiters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100901</link>
        <id>10.14569/IJACSA.2019.0100901</id>
        <doi>10.14569/IJACSA.2019.0100901</doi>
        <lastModDate>2019-09-30T12:36:56.9900000+00:00</lastModDate>
        
        <creator>Nishad Nawaz</creator>
        
        <creator>Anjali Mary Gomes</creator>
        
        <subject>Artificial intelligence; chatbots; recruitment process; candidates experiences; employer branding tool; recruitment industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(9), 2019</description>
        <description>The purpose of this paper is to assess the influence of artificial intelligence chatbots on the recruitment process. The authors explore how chatbots deliver services to attract candidates and engage them in the recruitment process. The aim of the study is to identify the impact of chatbots across the recruitment process. The study is based entirely on secondary sources, such as conceptual papers, peer-reviewed articles, and websites. The paper found that artificial intelligence chatbots are very productive tools in the recruitment process and are helpful in preparing recruitment strategies for the industry. Additionally, they help resolve complex issues in the recruitment process. Although the amalgamation of artificial intelligence into the recruitment process is receiving increasing attention among researchers, there is still opportunity to explore the field. The paper provides future research avenues in the field of chatbots and recruiters.</description>
        <description>http://thesai.org/Downloads/Volume10No9/Paper_1-Artificial_Intelligence_Chatbots_are_New_Recruiters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Various Frameworks and Solutions in all Branches of Digital Forensics with a Focus on Cloud Forensics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100880</link>
        <id>10.14569/IJACSA.2019.0100880</id>
        <doi>10.14569/IJACSA.2019.0100880</doi>
        <lastModDate>2019-09-06T12:13:24.1070000+00:00</lastModDate>
        
        <creator>Mohammed Khanafseh</creator>
        
        <creator>Mohammad Qatawneh</creator>
        
        <creator>Wesam Almobaideen</creator>
        
        <subject>Digital forensics; cloud forensics; investigation process; IoT forensics; examination stage; evidence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Digital forensics is a class of forensic science concerned with the use of digital information produced, stored and transmitted by various digital devices as a source of evidence in investigations and legal proceedings. Digital forensics can be divided into several classes, such as computer forensics, network forensics, mobile forensics, cloud computing forensics, and IoT forensics. In recent years, cloud computing has emerged as a popular computing model in various areas of human life. However, cloud computing systems lack support for computer forensic investigations. The main goal of digital forensics is to prove the presence of a particular document in a given digital device. This paper presents a comprehensive survey of various frameworks and solutions in all classes of digital forensics with a focus on cloud forensics. We start by discussing the different forensics classes, their frameworks, limitations and solutions. Then we focus on the methodological aspects and existing challenges of cloud forensics. Moreover, a detailed comparison discusses the drawbacks, differences and similarities of several suggested cloud computing frameworks, providing future research directions.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_80-A_Survey_of_Various_Frameworks_and_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Segmentation of Retinal Blood Vessels using Singular Value Decomposition and Morphological Operator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100879</link>
        <id>10.14569/IJACSA.2019.0100879</id>
        <doi>10.14569/IJACSA.2019.0100879</doi>
        <lastModDate>2019-08-31T09:55:35.0530000+00:00</lastModDate>
        
        <creator>N. C Santosh Kumar</creator>
        
        <creator>Y. Radhika</creator>
        
        <subject>Singular value decomposition; left singular vector matrix; feature extraction; average filter; ISODATA thresholding; morphological operators; STARE and DRIVE databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The extensive study of retinal fundus images has become an essential part of the medical domain for detecting pathologies including diabetic retinopathy, cataract, glaucoma, macular degeneration, etc., which are the major causes of blindness. Automatic extraction of the tree-shaped and unique retinal vascular structure from retinal fundus images is a most exigent task and, when achieved successfully, becomes a perfect tool helping ophthalmologists to follow appropriate diagnostic measures. In this work, a novel scheme to segment the retinal tree-like vascular structure from retinal images is proposed using the Singular Value Decomposition’s left singular vector matrix of the weighted l*a*b* color model of the input image. The left singular vector matrix, which captures the relevant and useful features, helps in effective conversion of the input RGB image to a gray image. Next, the converted gray image is contrast-enhanced using the CLAHE method, which enhances the tree-shaped vasculature of the retinal blood vessel structure, giving a rich-contrast gray image. Further processing normalizes the contrast-enhanced gray image by removing the image’s background with a mean filter, by which the blood vessels become brighter. Later, the difference between the gray image and the normalized filtered image is keyed in as a constraint to perform ISODATA thresholding, which globally segments the foreground vasculature from the image’s background; the resultant image is then converted into a binary image, upon which a morphological opening operation is applied to remove small and falsely segmented portions, producing accurate segmentation. This new technique was tested on images contained in the DRIVE and STARE databases, and a performance metric called “area covered” is also calculated in addition to common metrics for the sampled input images. This novel approach is empirically proven and has attained a segmentation accuracy of 97.48%.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_79-An_Efficient_Segmentation_of_Retinal_Blood_Vessels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Lexicon Learning to Analyze Sentiment in Microblogs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100878</link>
        <id>10.14569/IJACSA.2019.0100878</id>
        <doi>10.14569/IJACSA.2019.0100878</doi>
        <lastModDate>2019-08-31T09:55:35.0370000+00:00</lastModDate>
        
        <creator>Mahmoud B. Rokaya</creator>
        
        <creator>Ahmed S. Ghiduk</creator>
        
        <subject>Sentiment analysis; sentiment lexicon; social media; twitter; optimization; mathematical programming; genetic algorithm; evolutionary computation; arabic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The study and classification of opinions distilled from social media is called sentiment analysis. The goal of this study is to build an adaptive sentiment lexicon for the Arabic language. Based on these lexicons, sentiment polarity classification can be improved. The classification problem is stated as a mathematical programming problem in which we search for a lexicon that optimizes the classification accuracy. A genetic algorithm is presented to solve the optimization problem. A meta-level feature is generated based on the adaptive lexicons provided by the genetic algorithm. The algorithm’s performance is supported by using it alongside n-gram features and Bing Liu’s lexicon. In this work, lexicon-based and corpus-based approaches are integrated, and the lexicons are produced from the corpus. Five data sets are tested through experiments. The sentiments in all data sets are classified based on five polarity levels. A better understanding of words’ sentiment orientation, social media users’ culture and the Arabic language can be achieved based on the lexicons generated by the proposed algorithm. Since stop words can contribute to the sentiment polarity, they are retained and not deleted. The results show that the F-measure is greater than 80% on three data sets and the accuracy is greater than 80% for all data sets. The proposed method outperforms the current methods in the literature on two of the datasets. Finally, in terms of F-measure, the proposed method achieved better results for three datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_78-Arabic_Lexicon_Learning_to_Analyze_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Transport Layer Security using Metaheuristics and New Key Exchange Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100877</link>
        <id>10.14569/IJACSA.2019.0100877</id>
        <doi>10.14569/IJACSA.2019.0100877</doi>
        <lastModDate>2019-08-31T09:55:35.0200000+00:00</lastModDate>
        
        <creator>Mohamed Kaddouri</creator>
        
        <creator>Mohammed Bouhdadi</creator>
        
        <creator>Zakaria Kaddouri</creator>
        
        <creator>Driss Guerchi</creator>
        
        <subject>Transport Layer Security (TLS); metaheuristic; symmetric ciphering algorithm; private key exchange; hash function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The easiness of data transmission is one of the information security flaws that needs to be handled rigorously, as it makes eavesdropping, tampering and message forgery by malicious parties simpler. One of the protocols developed to secure communication between the client and the server is Transport Layer Security (TLS). TLS is a cryptographic protocol that provides encryption using the record protocol, authentication and data integrity. In this paper, a new TLS version is proposed, named Transport Layer Security with Metaheuristics (TLSM), which is based on a recently designed metaheuristic symmetric ciphering technique for data encryption, combined with the hash function SHA-SBOX and a new method for private key exchange. Compared to the existing TLS versions, the suggested protocol outperforms all of them in terms of the security level of the encrypted data, key management and execution time.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_77-New_Transport_Layer_Security_using_Metaheuristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Potential Field Algorithm Implementation for Quadrotor Path Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100876</link>
        <id>10.14569/IJACSA.2019.0100876</id>
        <doi>10.14569/IJACSA.2019.0100876</doi>
        <lastModDate>2019-08-31T09:55:35.0200000+00:00</lastModDate>
        
        <creator>Iswanto Iswanto</creator>
        
        <creator>Alfian Ma’arif</creator>
        
        <creator>Oyas Wahyunggoro</creator>
        
        <creator>Adha Imam Cahyadi</creator>
        
        <subject>Quadrotor; path planning; GNRON (Goal Nonreachable with Obstacles Nearby); artificial potential field; local minima</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The potential field algorithm introduced by Khatib is well known in path planning for robots. The algorithm is very simple, yet provides real-time path planning and is effective at avoiding the robot’s collisions with obstacles. The purpose of this paper is to implement and modify this algorithm for quadrotor path planning. The conventional potential method is first applied to illustrate challenging problems, such as goals that are not reachable due to local minima solutions or nearby obstacles (GNRON), which are solved later by the proposed modified algorithms. The first proposed modification adds a virtual force to the repulsive potential force to prevent local minima solutions. Meanwhile, the second one prevents the GNRON issue by adding a virtual force and considering the quadrotor’s distance to the goal point in the repulsive potential force. The simulation results show that the second modification is best applied to environments with the GNRON issue, whereas the first one is suitable only for environments with local minima traps. The first modification is able to reach goals in six random tests with a local minima environment. Meanwhile, the second one is able to reach goals in six random tests with a local minima environment, six random tests with a GNRON environment, and six random tests with both local minima and GNRON environments.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_76-Artificial_Potential_Field_Algorithm_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modular Aspect-Oriented Programming Approach of Join Point Interfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100875</link>
        <id>10.14569/IJACSA.2019.0100875</id>
        <doi>10.14569/IJACSA.2019.0100875</doi>
        <lastModDate>2019-08-31T09:55:35.0030000+00:00</lastModDate>
        
        <creator>Cristian Vidal</creator>
        
        <creator>Erika Madariaga</creator>
        
        <creator>Claudia Jim&#233;nez</creator>
        
        <creator>Luis Carter</creator>
        
        <subject>Aspect-Oriented Programming; AspectJ; JPI; class diagrams; UML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>This paper describes and analyzes the main differences and advantages of Join Point Interfaces (JPI) as an Aspect-Oriented Programming (AOP) approach for modular software production, with respect to the standard aspect-oriented programming methodology for Java (AspectJ), in order to propose a structural modeling approach aimed at modular software solutions. From a Software Engineering point of view, we highlight the relevance of structural and conceptual design for JPI software applications. We model and implement a classic example of AOP using AspectJ and JPI as an application example to review their main differences and highlight the JPI consistency between products (models and code). Our proposal of UML JPI class diagrams allows the definition of oblivious classes that know about their JPI connections, an essential element for adapting and transforming traditional AspectJ-like AOP solutions to their JPI version. Thus, for modular software production and education, JPI seems an ideal software development approach.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_75-A_Modular_Aspect_Oriented_Programming_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Distributed SPARQL Queries on Apache Spark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100874</link>
        <id>10.14569/IJACSA.2019.0100874</id>
        <doi>10.14569/IJACSA.2019.0100874</doi>
        <lastModDate>2019-08-31T09:55:34.9900000+00:00</lastModDate>
        
        <creator>Saleh Albahli</creator>
        
        <subject>Semantic web; RDF; SPARQL; SPARK; GraphX; triple patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>RDF is a widely accepted framework for describing metadata on the web due to its simplicity and universal graph-like data model. Owing to the abundance of RDF data, existing query techniques are rendered unsuitable. To this end, we adopt the processing power of Apache Spark to load and query a large dataset much more quickly than classical approaches. In this paper, we have designed experiments to evaluate the performance of several queries, ranging from single-attribute selection to selection, filtering and sorting over multiple attributes in the dataset. We further experimented with the performance of distributed SPARQL queries on Apache Spark GraphX and studied the different stages involved in this pipeline. Executing distributed SPARQL queries on Apache Spark GraphX helped us study its performance and gave insights into which stages of the pipeline can be improved. The query pipeline comprises graph loading, Basic Graph Pattern matching and result calculation. Our goal is to minimize the time spent in the graph loading stage in order to improve overall performance and cut the costs of data loading.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_74-Efficient_Distributed_SPARQL_Queries_on_Apache_Spark.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Preserving Data Mining Approach for IoT based WSN in Smart City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100873</link>
        <id>10.14569/IJACSA.2019.0100873</id>
        <doi>10.14569/IJACSA.2019.0100873</doi>
        <lastModDate>2019-08-31T09:55:34.9730000+00:00</lastModDate>
        
        <creator>Ahmed M Khedr</creator>
        
        <creator>Walid Osamy</creator>
        
        <creator>Ahmed Salim</creator>
        
        <creator>Abdel-Aziz Salem</creator>
        
        <subject>Distributed cluster-based algorithm; association rules; Internet of Things (IoT); privacy preserving; vertically and horizontally distributed databases; wireless sensor networks (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Wireless Sensor Network (WSN) is one of the most fundamental technologies of the Internet of Things (IoT). Various IoT devices are connected to the internet through WSNs composed of different sensor nodes and actuators, where these sensor nodes collaborate and accomplish their tasks dynamically. The main objective of deploying WSN-based applications is to make high-precision real-time observations. This is extremely challenging because of the limited computing power of sensors operating in constrained environments, resource constraints such as energy, computation speed, bandwidth and memory, and the huge volume of high-speed, heterogeneous and fast-changing WSN data. These challenges have encouraged researchers to explore data mining techniques to extract the required information from the fast-changing sensor data in WSNs and thereby efficiently handle the massive data generated by them. The increasing need for data mining techniques for WSNs has inspired us to propose a distributed data mining technique that effectively handles the data generated by the nodes in the WSN and prolongs the lifespan of the network. Our work provides a novel cluster-based scheme to mine the sensor data without moving it to the cluster head (CH) or base station (BS), to achieve maximum performance in a WSN environment. The basic idea of the proposed work is that local computations are performed by utilizing the computing power at each sensor node, and then only minimal higher-level statistical summaries are exchanged. This decreases the energy dissipated in communication, since the amount of sensor data transferred is considerably reduced; the sensor network lifetime is thereby maximized and the privacy of the sensor data is preserved.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_73-Privacy_Preserving_Data_Mining_Approach_for_IoT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VoIP QoS Analysis over Asterisk and Axon Servers in LAN Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100872</link>
        <id>10.14569/IJACSA.2019.0100872</id>
        <doi>10.14569/IJACSA.2019.0100872</doi>
        <lastModDate>2019-08-31T09:55:34.9570000+00:00</lastModDate>
        
        <creator>Naveed Ali Khan</creator>
        
        <creator>Abdul Sattar Chan</creator>
        
        <creator>Kashif Saleem</creator>
        
        <creator>Zuhaibuddin Bhutto</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <subject>VoIP; asterisk; axon; computational based QoS; LAN; packet loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Voice over IP (VoIP) is a developing technology, a key factor in emerging cyberspace engineering, and an accomplishment that has established its position in the telecom industry. VoIP is based on internet technology, using packet switching rather than circuit switching, with analog signals converted into digital signals for full-duplex transmission. VoIP technology is replacing the conventional public switched telephone network (PSTN) system due to its high flexibility and low cost. The purpose of this research work is to evaluate experimental and computational performance in terms of quality of service (QoS) parameters of VoIP over a local area network (LAN). The VoIP systems are implemented on two different operating system frameworks (Linux and Windows): the Linux-based and Windows-based private branch exchanges (PBXs) Asterisk (Linux, open source) and Axon (Windows, closed source) are configured, installed and verified. QoS factors (such as packet loss, delay and jitter) are observed over the Asterisk and Axon PBXs in a LAN domain with the assistance of the Paessler PRTG monitoring tool. The results for the QoS parameters are compared across both PBXs under data load (i.e., file transfer and HTTP traffic) during VoIP calls. The efficiency and performance of Axon and Asterisk have been compared and analyzed based on the experimental outcomes.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_72-VoIP_QoS_Analysis_over_Asterisk_and_Axon_Servers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Internet of Things (IOT) based Smart Parking Routing System for Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100870</link>
        <id>10.14569/IJACSA.2019.0100870</id>
        <doi>10.14569/IJACSA.2019.0100870</doi>
        <lastModDate>2019-08-31T09:55:34.9430000+00:00</lastModDate>
        
        <creator>Elgarej Mouhcine</creator>
        
        <creator>Karouani Yassine</creator>
        
        <creator>El Fazazi Hanaa</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <creator>Youssfi Mohamed</creator>
        
        <subject>Intelligent traffic system; internet of things; swarm intelligent; ant colony optimization; vehicle routing system; multi-agent system; smart parking system; smart cities; cloud computing system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Recently, the number of cars on the road has been growing due to the increase in car manufacturing, in parallel with customer services that help new drivers buy cars at affordable prices. On the other hand, the infrastructure of big cities cannot support this number of cars, and together with the disorganization of parking places this leads to serious problems, including a growing number of driver requests to find the nearest parking place in order to avoid traffic congestion in those areas. In parallel, the concept of smart cities raises the question of how the evolution of the Internet of Things (IoT) can be used to improve the quality of the smart city. Several efforts have been made with the Internet of Things to improve the reliability and productivity of public infrastructure. Many problems have been handled and controlled by the IoT, such as vehicle traffic congestion, road safety and the inefficient use of car parking spaces. This work introduces a novel technique based on a distributed cloud IoT architecture to manage parking systems, combined with a distributed swarm intelligence technique using the Ant System algorithm, to improve the process of finding the nearest car parking in minimum time based on the state of traffic on the road. This prototype will help drivers find the nearest car parking and improve the exploitation of the available car parking in the city.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_70-An_Internet_of_Things_IOT_based_Smart_Parking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Approach for Identification of Non-Functional Requirements using Word2Vec Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100871</link>
        <id>10.14569/IJACSA.2019.0100871</id>
        <doi>10.14569/IJACSA.2019.0100871</doi>
        <lastModDate>2019-08-31T09:55:34.9430000+00:00</lastModDate>
        
        <creator>Muhammad Younas</creator>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Dayang N. A. Jawawi</creator>
        
        <creator>Muhammad Arif Shah</creator>
        
        <creator>Ahmad Mustafa</creator>
        
        <subject>Identification; non-functional requirements; semantic similarity; Word2Vec model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Non-Functional Requirements (NFR) are embedded in functional requirements in the requirements specification document. Identification of NFR from the requirements document is a challenging task. Ignoring NFR identification in the early stages of development increases cost and can ultimately cause the failure of the system. The aim of this approach is to help analysts and designers in the architecture and design of the system by identifying NFR from the requirements document. Several supervised learning-based solutions have been reported in the literature. However, for accurate identification of NFR, a significant number of pre-categorized requirements are needed to train supervised text classifiers, and system analysts perform the categorization process manually. This study proposes an automated semantic-similarity-based approach that does not need pre-categorized requirements for identification of NFR from requirements documents. The approach uses an application of the Word2Vec model and popular keywords for identification of NFR. The performance of the approach is measured in terms of precision, recall and F-measure by applying it to the PROMISE-NFR dataset. The empirical evidence shows that the automated semi-supervised approach reduces manual human effort in the identification of NFR.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_71-An_Automated_Approach_for_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Beowulf Cluster and Analysis of its Performance in Applications with Parallel Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100869</link>
        <id>10.14569/IJACSA.2019.0100869</id>
        <doi>10.14569/IJACSA.2019.0100869</doi>
        <lastModDate>2019-08-31T09:55:34.9100000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Patricia Condori</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>High-performance cluster; distributed programming; computational parallelism; Beowulf cluster; high-efficiency computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In the Image Processing Research Laboratory (INTI-Lab) of the Universidad de Ciencias y Humanidades, permission to use the embedded systems laboratory was obtained. INTI-Lab researchers will use this laboratory for different research projects related to the processing of large-scale videos, climate prediction, climate change research and physical simulations, among others. These projects demand highly complex processes which, when carried out on ordinary computers, result in unfavorable times for the researcher. For this reason, we opted for the implementation of a high-performance cluster architecture: a set of computers interconnected on a local network that behaves as a single system to solve complex problems using parallel computing techniques. The intention is to reduce the processing time in direct proportion to the number of machines, approximating a low-cost supercomputer. Different performance tests were carried out, scaling from 1 to 28 computers, to measure the time reduction. The results will show whether it is feasible to use the architecture in future projects that demand processes of high scientific complexity.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_69-Implementation_of_a_Beowulf_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acquisition and Classification System of EMG Signals for Interpreting the Alphabet of the Sign Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100868</link>
        <id>10.14569/IJACSA.2019.0100868</id>
        <doi>10.14569/IJACSA.2019.0100868</doi>
        <lastModDate>2019-08-31T09:55:34.9100000+00:00</lastModDate>
        
        <creator>Alvarado-Diaz Witman</creator>
        
        <creator>Meneses-Claudio Brian</creator>
        
        <creator>Fiorella Flores-Medina</creator>
        
        <creator>Patricia Condori</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Electromyography; EMG; sign language; essential communication; recognize the letters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In Peru, there is an increase in the number of people with difficulties in speaking or communicating. According to the National Institute of Statistics and Informatics of Peru (INEI, for its acronym in Spanish), around 80,000 people use sign language. For this reason, this research proposes using electromyography (EMG) signals to detect hand movements and identify the alphabet of the sign language, in order to provide essential communication to people who need it. The idea is to classify the signals and recognize the letters of the Spanish alphabet as interpreted in Peruvian sign language. The results show the classification of the 27 letters of the alphabet with an overall success rate of 93.9%.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_68-Acquisition_and_Classification_System_of_EMG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Classification of Biomedical Text using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100867</link>
        <id>10.14569/IJACSA.2019.0100867</id>
        <doi>10.14569/IJACSA.2019.0100867</doi>
        <lastModDate>2019-08-31T09:55:34.8970000+00:00</lastModDate>
        
        <creator>Rozilawati Dollah</creator>
        
        <creator>Chew Yi Sheng</creator>
        
        <creator>Norhawaniah Zakaria</creator>
        
        <creator>Mohd Shahizan Othman</creator>
        
        <creator>Abd Wahid Rasib</creator>
        
        <subject>Convolutional neural network; biomedical text classification; compound term; Ohsumed dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In this digital era, document entries have been increasing day by day, causing a situation where their volume is overwhelming. This situation has caused people to encounter problems such as congestion of data, difficulty in searching for the intended information, and difficulty in managing databases, for example the MEDLINE database, which stores documents related to the biomedical field. This research focuses on text classification of biomedical abstracts. Text classification is the process of organizing documents into predefined classes. A standard text classification framework consists of feature extraction, feature selection and classification stages. The dataset used in this research is the Ohsumed dataset, a subset of the MEDLINE database, from which a total of 11,566 abstracts were selected. First, feature extraction is performed on the biomedical abstracts and a list of unique features is produced. All the features in this list are added to the multiword tokenizer lexicon for tokenizing phrases or compound words. After that, the classification of the biomedical texts is conducted using a deep learning network, the Convolutional Neural Network, an approach widely used in many domains such as pattern recognition and classification. The goal of classification is to accurately organize the data into the correct predefined classes. The Convolutional Neural Network achieved 54.79% average accuracy, 61.00% average precision, 60.00% average recall and 60.50% average F1-score. In short, it is hoped that this research will be beneficial to the text classification area.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_67-Deep_Learning_Classification_of_Biomedical_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Neural Network Classifier for Fault Management in Cloud Data Center</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100866</link>
        <id>10.14569/IJACSA.2019.0100866</id>
        <doi>10.14569/IJACSA.2019.0100866</doi>
        <lastModDate>2019-08-31T09:55:34.8630000+00:00</lastModDate>
        
        <creator>S Indirani</creator>
        
        <creator>C.Jothi Venkateswaran</creator>
        
        <subject>Deep learning; Cognitive Neural Network (CNN); FP-Score; fault tolerance; VM allocation; VM migration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Proactively handling faults in the data center means allocating VMs to hosts before failures occur, so that the SLA is met for the tasks running in the data center. The existing solution [1] for fault prediction in the data center is based on the single parameter of temperature, and fault tolerance is implemented as a reactive solution in terms of VM replication. Different from these works, proactive fault tolerance with fault prediction based on deep learning with multiple parameters is proposed in this work. A Cognitive Neural Network (CNN) is used to predict the failure of hosts and to initiate migration to, or avoid allocation on, hosts which have a high probability of failure. Hosts in the data center are scored on failure probability (FP-Score) based on parameters collected at various levels using the CNN. VM placement and migration policies are fine-tuned using the FP-Score to manage failures proactively.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_66-Cognitive_Neural_Network_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Guideline for Decision-making on Business Intelligence and Customer Relationship Management among Clinics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100865</link>
        <id>10.14569/IJACSA.2019.0100865</id>
        <doi>10.14569/IJACSA.2019.0100865</doi>
        <lastModDate>2019-08-31T09:55:34.8630000+00:00</lastModDate>
        
        <creator>Nur Izzati Yusof</creator>
        
        <creator>Norziha Megat Mohd. Zainuddin</creator>
        
        <creator>Noor Hafizah Hassan</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <creator>Suraya Yaacob</creator>
        
        <creator>Wan Azlan Wan Hassan</creator>
        
        <subject>Business intelligence; customer relationship management; decision-making; guideline</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Business intelligence offers the capability to gain insights and perform better in decision-making by using a particular set of technologies and tools. A company’s success depends, to a certain extent, on its customers. The complementarity of business intelligence and customer relationship management will improve the efficiency of organizations and hence increase productivity and revenue. Most research on the implementation of business intelligence and customer relationship management in organizations concentrates on architecture, frameworks, and maturity models. The process of implementing business intelligence and customer relationship management in an organization, especially in smaller domains, has not yet been clarified, which leaves some organizations unclear on how to implement them. Thus, this study investigates the process involved in the implementation of business intelligence and customer relationship management among clinics. An infographic guideline was developed based on the six processes of data mining known as the Cross Industry Standard Process for Data Mining. Four elements of the business intelligence decision-making process, namely gather, store, access, and analyze, were also included in developing the guideline. Findings from an expert review show that the Content Validity Index increased by 0.7, from 0.3 in the first iteration to 1.0 in the second iteration; this result is therefore acceptable. The guideline appears to be a useful instrument for practitioners to implement business intelligence and customer relationship management in their clinics; however, the process involved in developing the guideline could be improved over time.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_65-A_Guideline_for_Decision_making_on_Business_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Biotechnical System for Recording Phonocardiography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100864</link>
        <id>10.14569/IJACSA.2019.0100864</id>
        <doi>10.14569/IJACSA.2019.0100864</doi>
        <lastModDate>2019-08-31T09:55:34.8500000+00:00</lastModDate>
        
        <creator>Marwan Ahmed Ahmed Hamid</creator>
        
        <creator>Maria Abdullah</creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Yasmin Mohammed Ahmed AL-Zoom</creator>
        
        <subject>Phonocardiograph (PCG); cardiovascular diseases (CVD); heart failure (HF); electrocardiograph (ECG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Phonocardiography is a graphical method of recording the tones and noises generated by the heart with the help of a phonocardiogram machine. Cardiovascular disease (CVD) and heart failure (HF) are considered life-threatening and frequently cause death. The phonocardiograph (PCG) signal is considered an indicator of abnormalities in the cardiovascular system. It provides the ability to carry out qualitative and quantitative analysis of different tones and heart murmurs. PCG plays a major role in treatment, diagnosis and decision-making in clinical examination and biomedical research. Using a simple stethoscope to diagnose heart problems requires an experienced physician. Many people with CVD and HF die every day because of the lack of facilities to analyze heart defects. Most low-income countries suffer from a severe shortage of electrocardiogram (ECG) and PCG devices and of trained physicians. The poor healthcare systems in these countries need to be improved, especially regarding heart disease diagnostics. PCG is a technique for recording and monitoring cardiac acoustics using a transducer and microphone. This paper attempts to design a cheap and simple biotechnical system for recording and monitoring the PCG signal. The hardware was designed and implemented using a stethoscope, an electret microphone, amplifiers, a DC source, and a jack for transmitting the PCG signal to the computer. The software was coded for monitoring and processing the PCG signal.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_64-Biotechnical_System_for_Recording_Phonocardiography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Deep Learning Model for Olive Diseases Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100863</link>
        <id>10.14569/IJACSA.2019.0100863</id>
        <doi>10.14569/IJACSA.2019.0100863</doi>
        <lastModDate>2019-08-31T09:55:34.8330000+00:00</lastModDate>
        
        <creator>Madallah Alruwaili</creator>
        
        <creator>Saad Alanazi</creator>
        
        <creator>Sameh Abd El-Ghany</creator>
        
        <creator>Abdulaziz Shehab</creator>
        
        <subject>Deep learning; AlexNet; conventional neural networks; plant diseases; olive; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Worldwide, plant diseases adversely influence both the quality and quantity of crop production. Thus, the early detection of such diseases proves efficient in enhancing crop quality and reducing production loss. However, the detection of plant diseases, whether via the farmers&#39; naked eyes, their traditional tools, or even within laboratories, is still an error-prone and time-consuming process. The current paper presents a Deep Learning (DL) model with a view to developing an efficient detector of olive diseases. The proposed model is distinguishable from others in a number of novelties. It utilizes an efficient parameterized transfer learning model and smart data augmentation with a balanced number of images in every category, and it functions in more complex environments with an enlarged and enhanced dataset. Compared with recently developed state-of-the-art methods, the results show that our proposed method achieves higher measurements in terms of accuracy, precision, recall, and F1-Measure.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_63-An_Efficient_Deep_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Adaptive Auto-Correction Technique for Enhanced Fingerprint Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100861</link>
        <id>10.14569/IJACSA.2019.0100861</id>
        <doi>10.14569/IJACSA.2019.0100861</doi>
        <lastModDate>2019-08-31T09:55:34.8170000+00:00</lastModDate>
        
        <creator>Thejaswini P</creator>
        
        <creator>Srikantaswamy R S</creator>
        
        <creator>Manjunatha A S</creator>
        
        <subject>Minutiae; Euclidean distance; artificial neural network; CN (crossing number); reference auto-correction; adaptive method; ISO template; auto-correction algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Fingerprints are the most used biometric trait in applications where a high level of security is required. A fingerprint image may vary due to various environmental conditions such as temperature, humidity and weather. Hence, it is necessary to design a fingerprint recognition system that is robust against temperature variations. Existing techniques, both automated and non-automated, do not perform real-time (adaptive) analysis. In this paper, we propose an adaptive auto-correction technique called the Reference Auto-correction Algorithm. The proposed algorithm automatically corrects the user&#39;s reference fingerprint template based on the captured fingerprint template and the matching score obtained on a daily basis, to improve the recognition rate. Analysis is carried out on 250 fingerprint templates stored in the database for 10 users, captured at temperatures varying from 25&#176;C to 0&#176;C. The experimental results show a 40% improvement in the recognition rate after applying the auto-correction algorithm.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_61-Novel_Adaptive_Auto_Correction_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Study of Segment Particle Swarm Optimization and Particle Swarm Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100862</link>
        <id>10.14569/IJACSA.2019.0100862</id>
        <doi>10.14569/IJACSA.2019.0100862</doi>
        <lastModDate>2019-08-31T09:55:34.8170000+00:00</lastModDate>
        
        <creator>Mohammed Adam Kunna Azrag</creator>
        
        <creator>Tuty Asmawaty Abdul Kadir</creator>
        
        <subject>Se-PSO; PSO; sphere; Rosenbrock; Rastrigin; Griewank</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In this paper, the performance of the segment particle swarm optimization (Se-PSO) algorithm is compared with that of the original particle swarm optimization (PSO) algorithm. Four benchmark functions (Sphere, Rosenbrock, Rastrigin, and Griewank) with asymmetric initial range settings (upper and lower boundary values) were selected as test functions. The experimental results show that the Se-PSO algorithm converged faster than the original PSO algorithm in all test cases. The results further suggest that Se-PSO is a promising optimization method for a range of other fields.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_62-Empirical_Study_of_Segment_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of e-Commerce Platforms towards Increasing Merchant’s Income in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100860</link>
        <id>10.14569/IJACSA.2019.0100860</id>
        <doi>10.14569/IJACSA.2019.0100860</doi>
        <lastModDate>2019-08-31T09:55:34.8000000+00:00</lastModDate>
        
        <creator>M Hafiz Yusoff</creator>
        
        <creator>Mohammad Ahmed Alomari</creator>
        
        <creator>Nurul Adilah Abdul Latiff</creator>
        
        <creator>Motea S. Alomari</creator>
        
        <subject>Income increase; e-commerce; merchant; Malaysia; e-retailing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In their 2018 Digital Report, ‘Hootsuite’ and ‘We Are Social’ reported that the number of Malaysian Internet users had increased to 25.08 million, representing 79% of the Malaysian population. However, some reports indicate that, despite advances in digital technology, many merchants still do not use any e-commerce platform and remain sceptical about it. Alongside the pervasive growth in Internet users, many other reports have been published seeking to understand the relationship between e-commerce and the increase in merchants’ income. Therefore, the objective of this research is to study the involvement of Malaysian merchants in e-commerce platforms. A sample of 1060 respondents was selected randomly across Malaysia to participate in this research by answering an online survey. In general, the results show that many merchants are aware of e-commerce and familiar with it; however, they have mainly been using it to purchase goods and services. Only a small number of Malaysian merchants engage with e-commerce platforms to operate their businesses. In addition, the research identifies the impact of using e-commerce on Malaysian merchants’ income.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_60-Effect_of_E_Commerce_Platforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Transfer Learning Application for Automated Ischemic Classification in Posterior Fossa CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100859</link>
        <id>10.14569/IJACSA.2019.0100859</id>
        <doi>10.14569/IJACSA.2019.0100859</doi>
        <lastModDate>2019-08-31T09:55:34.7870000+00:00</lastModDate>
        
        <creator>Anis Azwani Muhd Suberi</creator>
        
        <creator>Wan Nurshazwani Wan Zakaria</creator>
        
        <creator>Razali Tomari</creator>
        
        <creator>Ain Nazari</creator>
        
        <creator>Mohd Norzali Hj Mohd</creator>
        
        <creator>Nik Farhan Nik Fuad</creator>
        
        <subject>Deep learning; ischemic stroke; posterior fossa; classification; convolutional neural network; computed tomography; medical diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Computed Tomography (CT) imaging is one of the conventional tools used to diagnose ischemia in the Posterior Fossa (PF). Radiologists commonly diagnose PF ischemia from CT images manually. However, such a procedure can be strenuous and time-consuming for large-scale images, depending on the radiologist&#39;s expertise and the visibility of the ischemia. With the rapid development of computer technology, automatic image classification based on Machine Learning (ML) has been widely developed as a second opinion for ischemic diagnosis. The practical performance of ML is challenged by the emergence of deep learning applications in healthcare. In this study, we evaluate the performance of deep transfer learning models of Convolutional Neural Networks (CNN), namely VGG-16, GoogleNet and ResNet-50, in classifying normal and abnormal (ischemic) brain CT images of the PF. This is the first study that intensively investigates the application of deep transfer learning for automated ischemic classification in the posterior part of brain CT images. The experimental results show that ResNet-50 achieves the highest accuracy among the proposed models. Overall, this automatic classification provides a convenient and time-saving tool for improving medical diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_59-Deep_Transfer_Learning_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Service Testing Techniques: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100858</link>
        <id>10.14569/IJACSA.2019.0100858</id>
        <doi>10.14569/IJACSA.2019.0100858</doi>
        <lastModDate>2019-08-31T09:55:34.7700000+00:00</lastModDate>
        
        <creator>Israr Ghani</creator>
        
        <creator>Wan M.N. Wan-Kadir</creator>
        
        <creator>Ahmad Mustafa</creator>
        
        <subject>Quality assurance; web service testing; web service testing techniques; web service component testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Nowadays, the continual demand for loosely coupled systems has made web services a basic necessity for delivering solutions that are adaptable and flexible enough to work at runtime while maintaining high system quality. One of the basic techniques for evaluating the quality of such systems is testing. Given the rapid and continuously increasing popularity of web services, testing has become essential to maintaining their quality, and the performance testing of web-service-based applications is attracting extensive attention. In order to evaluate the performance of web services, it is essential to assess QoS (Quality of Service) attributes such as interoperability, reusability, auditability, maintainability, accuracy and performance. The purpose of this study is to present a systematic literature review of web service testing techniques with respect to these QoS attributes, and to determine which QoS parameters are necessary for better quality assurance. The review also aims to promote testing quality both now and in the future. Consequently, the main focus of the study is to provide an overview of recent research efforts on web service testing techniques from the research community. For each testing technique, its apparent standards, benefits, and restrictions are identified. This systematic literature review offers industry a range of testing options, helping practitioners decide which technique is the most efficient and effective for a given testing assignment with the available resources. In terms of significance, web service testing techniques remain broadly open for improvement.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_58-Web_Service_Testing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Key Schedule Algorithm using 3-Dimensional Hybrid Cubes for Block Cipher</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100857</link>
        <id>10.14569/IJACSA.2019.0100857</id>
        <doi>10.14569/IJACSA.2019.0100857</doi>
        <lastModDate>2019-08-31T09:55:34.7700000+00:00</lastModDate>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Siti Radhiah B. Megat</creator>
        
        <creator>Urooj Akram</creator>
        
        <creator>Mustafa Mat Deris</creator>
        
        <subject>Encryption; decryption; key schedule algorithm; hybrid cube; block cipher</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>A key schedule algorithm is the mechanism that generates and schedules all session keys for the encryption and decryption process. The key space of the conventional key schedule algorithm using 2D hybrid cubes is not large enough to resist attacks and could easily be exploited. In this regard, this research proposes a new Key Schedule Algorithm based on the coordinate geometry of a Hybrid Cube (KSAHC), using the Triangular Coordinate Extraction (TCE) technique and 3-Dimensional (3D) rotation of the Hybrid Cube surface (HCs), for block ciphers, in order to achieve a key space large enough to resist attacks on the secret key. The strength of the keys and ciphertext is tested with the Hybrid Cube Encryption Algorithm (HiSea) using brute force, entropy, correlation assessment, avalanche effect and the NIST randomness test suite, which shows that the proposed algorithm is suitable for block ciphers. The results show that the proposed KSAHC algorithm performs better than existing algorithms, and we remark that the proposed model may find potential applications in information security systems.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_57-Key_Schedule_Algorithm_using_3_Dimensional_Hybrid_Cubes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Normalized Restricted Boltzmann Machine for Solving Multiclass Classification Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100856</link>
        <id>10.14569/IJACSA.2019.0100856</id>
        <doi>10.14569/IJACSA.2019.0100856</doi>
        <lastModDate>2019-08-31T09:55:34.7530000+00:00</lastModDate>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Nazri Mohd Nawi</creator>
        
        <creator>Fazli Wahid</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <subject>Multiclass classification; restricted Boltzmann machine; Polyak averaging; image classification; Modified National Institute of Standards and Technology Datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Multiclass classification based on unlabeled images using computer vision and image processing is currently an important issue. In this research, we focus on constructing high-level feature detectors for class-driven unlabeled data. We propose a normalized restricted Boltzmann machine (NRBM) to form a robust network model. The proposed NRBM is developed to achieve dimensionality reduction and provide better feature extraction, with enhanced learning of more appropriate features of the data. To increase the learning convergence rate and reduce the complexity of the NRBM, we add the Polyak averaging method when updating the training parameters. We train the proposed NRBM network model on five variants of the Modified National Institute of Standards and Technology (MNIST) benchmark dataset. The conducted experiments show that the proposed NRBM is more robust to noisy data than state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_56-An_Efficient_Normalized_Restricted_Boltzmann_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LUCIDAH Ligative and Unligative Characters in a Dataset for Arabic Handwriting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100855</link>
        <id>10.14569/IJACSA.2019.0100855</id>
        <doi>10.14569/IJACSA.2019.0100855</doi>
        <lastModDate>2019-08-31T09:55:34.7400000+00:00</lastModDate>
        
        <creator>Yousef Elarian</creator>
        
        <creator>Irfan Ahmad</creator>
        
        <creator>Abdelmalek Zidouri</creator>
        
        <creator>Wasfi G. Al-Khatib</creator>
        
        <subject>Arabic ligatures; automatic text recognition; handwriting datasets; Hidden Markov Models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Arabic script is inherently cursive, even when machine-printed. When connected to other characters, some Arabic characters may be optionally written in compact aesthetic forms known as ligatures. It is useful to distinguish ligatures from ordinary characters for several applications, especially automatic text recognition. Datasets that do not annotate these ligatures may confuse the recognition system training. Some popular datasets manually annotate ligatures, but no dataset (prior to this work) took ligatures into consideration from the design phase. In this paper, a detailed study of Arabic ligatures and a design for a dataset that considers the representation of ligative and unligative characters are presented. Then, pilot data collection and recognition experiments are conducted on the presented dataset and on another popular dataset of handwritten Arabic words. These experiments show the benefit of annotating ligatures in datasets by reducing error-rates in character recognition tasks.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_55-LUCIDAH_Ligative_and_Unligative_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Intelligent Parking Entrance Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100854</link>
        <id>10.14569/IJACSA.2019.0100854</id>
        <doi>10.14569/IJACSA.2019.0100854</doi>
        <lastModDate>2019-08-31T09:55:34.7230000+00:00</lastModDate>
        
        <creator>Sofia Belkhala</creator>
        
        <creator>Siham Benhadou</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>Urban mobility; smart parking; IoT; artificial intelligence; agent; multi agent system; queuing theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>To help improve the urban transport situation in the city of Casablanca, we have studied and set up a smart parking system. In this paper, we evaluate the management of the parking entrance using artificial intelligence. In addition, we aim to establish the limits of our solution and its ability to respond to different requests in real time.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_54-Real_Time_Intelligent_Parking_Entrance_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of ICT on Students’ Academic Performance: Applying Association Rule Mining and Structured Equation Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100852</link>
        <id>10.14569/IJACSA.2019.0100852</id>
        <doi>10.14569/IJACSA.2019.0100852</doi>
        <lastModDate>2019-08-31T09:55:34.7070000+00:00</lastModDate>
        
        <creator>Mohammad Aman Ullah</creator>
        
        <creator>Mohammad Manjur Alam</creator>
        
        <creator>Ahmed Shan-A-Alahi</creator>
        
        <creator>Mohammed Mahmudur Rahman</creator>
        
        <creator>Abdul Kadar Muhammad Masum</creator>
        
        <creator>Nasrin Akter</creator>
        
        <subject>Information and Communication Technology (ICT); student; academic; performance; association rule mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Information and communication technology (ICT) plays a significant role in university students’ academic performance. This research examined the effect of ICT on students’ academic performance at different private universities in Chittagong, Bangladesh. Primary data were collected from the students of those universities using a survey questionnaire. Descriptive statistics, reliability analysis, confirmatory factor analysis, OLS regression, Structured Equation Modeling (SEM) and data mining algorithms such as association rule mining and &#233;clat were employed to evaluate the comparative importance of the factors in identifying the academic performance of the students. From both statistical and mining perspectives, the overall results indicate a significant relationship between ICT use and students’ academic performance. Moreover, students’ addiction to ICT has a significant influence on the comparative measures of their academic performance. Finally, some recommendations are provided on the basis of the findings.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_52-Impact_of_ICT_on_Students_Academic_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WhatsApp as an Educational Support Tool in a Saudi University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100853</link>
        <id>10.14569/IJACSA.2019.0100853</id>
        <doi>10.14569/IJACSA.2019.0100853</doi>
        <lastModDate>2019-08-31T09:55:34.7070000+00:00</lastModDate>
        
        <creator>Ahmad J Reeves</creator>
        
        <creator>Salem Alkhalaf</creator>
        
        <creator>Mohamed A. Amasha</creator>
        
        <subject>Online learning; WhatsApp; e-learning; blackboard; communication; mobile learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>WhatsApp is a widely used social media app, growing in popularity across the Middle East, and the most popular in Saudi Arabia. In this paper, we investigate the usage of WhatsApp as an educational support tool in a Saudi university. An online survey was constructed to ascertain how students and staff feel about and utilize WhatsApp as part of their daily studies. It also aimed to gather their thoughts on other platforms offered by the university, such as Blackboard and email. The survey was tested and the results were analyzed for frequency distributions, mean scores, and standard deviations. Our results from nearly 200 students and staff members reveal that WhatsApp is heavily utilized for a variety of educational support tasks and greatly preferred over the other platforms. We propose that WhatsApp has good potential to support not only student coordination, information dissemination and simple enquiries but also formal teaching and out-of-class learning.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_53-WhatsApp_as_an_Educational_Support_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Monitoring System using Wi-Fi Economic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100851</link>
        <id>10.14569/IJACSA.2019.0100851</id>
        <doi>10.14569/IJACSA.2019.0100851</doi>
        <lastModDate>2019-08-31T09:55:34.6930000+00:00</lastModDate>
        
        <creator>Michael Ames Ccoa Garay</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Wemos D1 mini; Wi-Fi; sensor; internet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In this project, we present the implementation of an autonomous monitoring system powered by solar panels and connected to the network through Wi-Fi. The system collects meteorological data and transmits it in real time to the web for visualization and analysis of temperature, humidity, and atmospheric pressure. The system saves time and money and supports decision making and efficiency. The device is built on the small “Wemos D1” Internet of Things platform, which allows easy programming in the “Arduino IDE”.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_51-Autonomous_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedestrian Crossing Safety System at Traffic Lights based on Decision Tree Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100850</link>
        <id>10.14569/IJACSA.2019.0100850</id>
        <doi>10.14569/IJACSA.2019.0100850</doi>
        <lastModDate>2019-08-31T09:55:34.6770000+00:00</lastModDate>
        
        <creator>Denny Hardiyanto</creator>
        
        <creator>Iswanto</creator>
        
        <creator>Dyah Anggun Sartika</creator>
        
        <creator>Muamar Rojali</creator>
        
        <subject>Pedestrian safety; decision tree algorithm; traffic light; spraying water</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Pedestrians are street users who have the right to priority in terms of safety. Highway users such as vehicle drivers sometimes violate traffic lights, which endangers pedestrians and makes them feel insecure when crossing the street. To address this problem, a tool is designed to warn drivers or riders who violate the traffic lights and to prevent traffic accidents by spraying water. The system detects traffic violations based on changes in the vehicle&#39;s position relative to the stop line, obtained from an Ultrasonic HC-SR04 sensor. When a violation is detected, a decision tree algorithm turns on a pump that sprays water at the traffic violator as a deterrent. The results show that for the vehicle closest to the sensor the system achieves 94% precision, 88% recall and 85% accuracy; for the vehicle in the middle, 73% precision, 100% recall and 75% accuracy; and for the vehicle furthest from the sensor, 75% precision, 100% recall and 80% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_50-Pedestrian_Crossing_Safety_System_at_Traffic_Lights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Antennas of Circular Waveguides</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100849</link>
        <id>10.14569/IJACSA.2019.0100849</id>
        <doi>10.14569/IJACSA.2019.0100849</doi>
        <lastModDate>2019-08-31T09:55:34.6600000+00:00</lastModDate>
        
        <creator>Cusacani Guerrero</creator>
        
        <creator>Julio Agapito</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Circular waveguide antenna; mode; microwave oven</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The design of a circular waveguide antenna is proposed for displacement reflector antennas. The operating frequencies are chosen so that the waveguide generates the Transverse Electric (TE) mode, resulting in a high impedance bandwidth. The radiation pattern measured for the fabricated antenna agrees closely with the numerical data. Used as a primary feed in a compensated reflector, the antenna decreases cross-polarization.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_49-Antennas_of_Circular_Waveguides.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Immune-based Algorithm for Academic Leadership Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100848</link>
        <id>10.14569/IJACSA.2019.0100848</id>
        <doi>10.14569/IJACSA.2019.0100848</doi>
        <lastModDate>2019-08-31T09:55:34.6470000+00:00</lastModDate>
        
        <creator>Hamidah Jantan</creator>
        
        <creator>Nur Hamizah Syafiqah Che Azemi</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <subject>Immune-based algorithm; negative selection algorithm; academic leadership; performance assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The artificial immune-based algorithm is a computational intelligence approach to data analysis inspired by the biological immune system. The negative selection algorithm, a member of the immune-based algorithm family, is used to recognize pattern changes detected by gene detectors in a complementary state. Owing to this self-recognition ability, the algorithm is widely used to recognize abnormal (non-self) data, especially for fault diagnosis, pattern recognition, network security, etc. In this study, the self-recognition capability of the negative selection algorithm is considered as a potential technique for classifying employees&#39; competency. Assessing employee performance in an organization is an important task for human resource management in identifying the right candidate for job promotion. Thus, this study proposes an immune-based model for assessing academic leadership performance. The experiment involved three phases: data acquisition and preparation; model development; and analysis and evaluation. Data on academic leadership proficiency were prepared as a dataset for the learning and detection processes. Several experiments were conducted using cross-validation on different models to identify the most accurate one. The accuracy of the negative selection classifier is considered acceptable for this academic leadership assessment case study. As an enhancement, other immune-based or bio-inspired algorithms, such as the genetic algorithm, particle swarm optimization and ant colony optimization, could also be considered for performance assessment.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_48-Artificial_Immune_based_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hospital Queue Control System using Quick Response Code (QR Code) as Verification of Patient’s Arrival</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100847</link>
        <id>10.14569/IJACSA.2019.0100847</id>
        <doi>10.14569/IJACSA.2019.0100847</doi>
        <lastModDate>2019-08-31T09:55:34.6300000+00:00</lastModDate>
        
        <creator>Ridho Hendra Yoga Perdana</creator>
        
        <creator>Hudiono</creator>
        
        <creator>Mochammad Taufik</creator>
        
        <creator>Amalia Eka Rakhmania</creator>
        
        <creator>Rohman Muhamad Akbar</creator>
        
        <creator>Zainul Arifin</creator>
        
        <subject>Queue; hospital; patient; quick response code; registration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>A hospital is an organization that primarily provides services in the form of examinations, treatment, medical care and other diagnostic measures required by each patient, within the limits of the technology and facilities the hospital provides. For patients to receive the best possible care, hospitals have to provide high-quality services, and one service currently being highlighted is the queue system. This study proposes a hospital queue control system that uses a quick response code (QR Code) as verification of the patient&#39;s arrival, in order to speed up the administrative process, provide effective services, and make queuing (registration) at the hospital easier for patients. The queue control system uses a website for registration and as a database repository for patient data. The website has two kinds of display, one for patients and one for doctors. This design speeds up the administrative process: registration can be done at any time by patients, who enter their own data and complaints, with medical records automatically available on the website, so that patients do not need to come to the hospital carrying a medical record. Furthermore, the website filters the data entered by the patient, such as the patient&#39;s complaints, so that patients can choose a hospital that provides services for their illness; the website also shows hospital addresses to help patients find the hospital&#39;s location. After the administrative process on the website, the system provides a QR Code as verification for patients who have registered and whose data already exist in the database. When the patient arrives at the hospital, the patient only needs to scan the existing QR code, without repeating the administration, because the issued QR Code indicates that the patient already has a hospital destination along with a queue number. It is expected that this queue control system, using a QR Code as verification of the patient&#39;s arrival, will make the administrative process faster, more effective, and more efficient.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_47-Hospital_Queue_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Cardiotocography Data based on Rough Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100846</link>
        <id>10.14569/IJACSA.2019.0100846</id>
        <doi>10.14569/IJACSA.2019.0100846</doi>
        <lastModDate>2019-08-31T09:55:34.6130000+00:00</lastModDate>
        
        <creator>Belal Amin</creator>
        
        <creator>Mona Gamal</creator>
        
        <creator>A. A. Salama</creator>
        
        <creator>I.M. El-Henawy</creator>
        
        <creator>Khaled Mahfouz</creator>
        
        <subject>Accuracy rate; cardiotocography; data mining; rough neural network; WEKA tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Cardiotocography is a medical device that monitors fetal heart rate and uterine contractions during pregnancy. Doctors use it to diagnose and classify the state of a fetus, a task complicated by uncertainty in the data. The Rough Neural Network is one of the most common data mining techniques for classifying medical data and is a good solution to the uncertainty challenge. This paper presents a simulation of a Rough Neural Network classifying a cardiotocography dataset, measuring the accuracy rate and the time consumed during the classification process. The WEKA tool is used to analyse the cardiotocography data with different algorithms (neural network, decision table, bagging, nearest neighbour, decision stump, and least squares support vector machine). The comparison shows that the accuracy rate and time consumption of the proposed model are feasible and efficient.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_46-Classifying_Cardiotocography_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Student Clustering Model for the Learning Simplification in Educational Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100845</link>
        <id>10.14569/IJACSA.2019.0100845</id>
        <doi>10.14569/IJACSA.2019.0100845</doi>
        <lastModDate>2019-08-31T09:55:34.6130000+00:00</lastModDate>
        
        <creator>Khalaf Khatatneh</creator>
        
        <creator>Islam Khataleen</creator>
        
        <creator>Rami Alshwaiyat</creator>
        
        <creator>Mohammad Wedyan</creator>
        
        <subject>Clustering; student groups; interactive education methods; students’ clustering; student’s degree; valuable secondary classes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Students’ clustering is a method of granting students many different ways to learn, taking the student’s degree into account. Clustering can be defined as “a group of steps that divide a set of information or objects into a group of valuable secondary classes, which are named clusters”. This implies that a cluster is a group of things that are alike among themselves and unlike the things that belong to other clusters. In this study, we therefore partition students into different clusters or groups in order to organize students and lecturers into more interactive arrangements, to extract education related to particular information, and to produce comments and feedback for the academic teacher. The conclusion is the specific research effort that forms the final phase: it is reached when deviations in the artifact behaviour, derived from the repeatedly revised hypothetical predictions, are judged acceptable, at which point the results are declared ‘good enough’.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_45-A_Novel_Student_Clustering_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review of Identifying Issues in Software Cost Estimation Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100844</link>
        <id>10.14569/IJACSA.2019.0100844</id>
        <doi>10.14569/IJACSA.2019.0100844</doi>
        <lastModDate>2019-08-31T09:55:34.6000000+00:00</lastModDate>
        
        <creator>Muhammad Asif Saleem</creator>
        
        <creator>Rehan Ahmad</creator>
        
        <creator>Tahir Alyas</creator>
        
        <creator>Muhammad Idrees</creator>
        
        <creator>Asfandayar</creator>
        
        <creator>Asif Farooq</creator>
        
        <creator>Adnan Shahid Khan</creator>
        
        <creator>Kahawaja Ali</creator>
        
        <subject>Cost estimation; COCOMO; qualitative; use case point; PCA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Software cost estimation plays a vital role in software project management. It is the process of predicting the effort and cost, in terms of money and staff, required to develop a software system. A software project is much more likely to succeed if its estimated cost is close to the real cost. When the project is at the acquisition stage, only the least detail about the software project to be developed is available, which causes problems in cost estimation. As the project moves through later stages, more details about the software development become available, which is quite fruitful for cost estimation; nevertheless, estimating the software cost in the earliest phases can be considered to produce the most valuable results. This research discusses cost estimation techniques together with the issues in each particular technique, focusing on understanding the points that create hurdles in estimating the cost of a software project.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_44-Systematic_Literature_Review_of_Identifying_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation Model for Auto-generated Cognitive Scripts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100843</link>
        <id>10.14569/IJACSA.2019.0100843</id>
        <doi>10.14569/IJACSA.2019.0100843</doi>
        <lastModDate>2019-08-31T09:55:34.5830000+00:00</lastModDate>
        
        <creator>Ahmed M ELMougi</creator>
        
        <creator>Rania Hodhod</creator>
        
        <creator>Yasser M. K. Omar</creator>
        
        <subject>Autonomous intelligent agents; socio-cultural situations; cognitive scripts; conceptual blending; contextual structural retrieval algorithms; text coherence; sentence embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Autonomous intelligent agents have become a very important research area in Artificial Intelligence (AI). Socio-cultural situations are one challenging area in which autonomous intelligent agents can acquire new knowledge or modify existing knowledge. Socio-cultural situations can be best represented in the form of cognitive scripts, which allow different techniques to be used to facilitate knowledge transfer between scripts. Conceptual blending has proven successful in enhancing the social dynamics of cognitive scripts, where information is transferred from similar contextual scripts to a target script, resulting in a new blended script. To the best of our knowledge, there is no computational model available to evaluate these newly generated cognitive scripts. This work aims to develop a computational model to evaluate cognitive scripts resulting from blending two or more linear cognitive scripts. The evaluation process involves: 1) using GloVe similarity to check whether the transferred events conceptually fit the target script; and 2) using the semantic view of text coherence to decide on the optimal position(s) at which to place the transferred event(s) in the target script. Results show that GloVe similarity can be applied successfully to preserve the contextual meaning of cognitive scripts. Additional results show that GloVe embedding gives higher accuracy than Universal Sentence Encoder (USE) and Smooth Inverse Frequency (SIF) embeddings, but this comes at a high computational cost. Future work will look into reducing the computational cost and enhancing the accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_43-An_Evaluation_Model_for_Auto_Generated_Cognitive_Scripts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Methodology for Engineering Domain Ontology using Entity Relationship Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100842</link>
        <id>10.14569/IJACSA.2019.0100842</id>
        <doi>10.14569/IJACSA.2019.0100842</doi>
        <lastModDate>2019-08-31T09:55:34.5670000+00:00</lastModDate>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>M. Rahmah</creator>
        
        <creator>Sehrish Raza</creator>
        
        <creator>A. Noraziah</creator>
        
        <creator>Roslina Abd. Hamid</creator>
        
        <subject>Ontology engineering; semantic web; ontology validation; knowledge management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Ontology engineering is an important aspect of the semantic web vision, enabling a meaningful representation of data. Although various techniques exist for the creation of ontologies, most methods involve a number of complex phases, scenario-dependent ontology development, and poor validation of the ontology. This research work presents a lightweight approach to building domain ontologies using the Entity Relationship (ER) model. First, a detailed analysis of the intended domain is performed to develop the ER model. In the next phase, ER-to-ontology (EROnt) conversion rules are outlined, and finally a system prototype is developed to construct the ontology. The proposed approach investigates the domain of the information technology curriculum for the successful interpretation of concepts, attributes, relationships between concepts, and constraints among the concepts of the ontology. Expert evaluation of the accurate identification of the ontology vocabulary shows that the method performed well on the curriculum data, with 95.75% average precision and 90.75% average recall.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_42-A_Methodology_for_Engineering_Domain_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network Architecture for Plant Seedling Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100841</link>
        <id>10.14569/IJACSA.2019.0100841</id>
        <doi>10.14569/IJACSA.2019.0100841</doi>
        <lastModDate>2019-08-31T09:55:34.5530000+00:00</lastModDate>
        
        <creator>Heba A Elnemr</creator>
        
        <subject>Deep learning; convolutional neural network; plant seedling classification; weed control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Weed control is a challenging problem that can hamper crop productivity. Weeds are perceived as an important problem because they reduce crop yields through growing competition for nutrients, water, and sunlight, and because they serve as hosts for diseases and pests. It is therefore crucial to identify weeds early in their growth in order to avoid their side effects on crop growth. Conventional machine learning technologies previously exploited for discriminating crop and weed species faced challenges in the effectiveness and reliability of weed detection at preliminary stages of growth. This work proposes the application of deep learning to plant seedling classification. A new Convolutional Neural Network (CNN) architecture is designed to classify plant seedlings at their early growth stages. The presented technique is appraised using a plant seedlings dataset, with average accuracy, precision, recall, and F1-score as evaluation metrics. The results reveal the capability of the proposed technique to discriminate among 12 species (3 crops and 9 weeds), achieving 94.38% average classification accuracy. The proposed system is compared with existing plant seedling classification systems, and the results demonstrate that the proposed method outperforms the existing methods.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_41-Convolutional_Neural_Network_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping of Independent Tasks in the Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100840</link>
        <id>10.14569/IJACSA.2019.0100840</id>
        <doi>10.14569/IJACSA.2019.0100840</doi>
        <lastModDate>2019-08-31T09:55:34.5370000+00:00</lastModDate>
        
        <creator>Biswajit Nayak</creator>
        
        <creator>Sanjay Kumar Padhi</creator>
        
        <subject>Scheduling; mixed model; cloud computing; makespan; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Cloud computing is a technology that provides many resources and facilities for sharing data. Because of the open environment of cloud computing, the volume of requests and data increases quickly; this problem can be addressed by properly matching tasks to the available resources. Task scheduling algorithms play an immense role in the cloud computing environment by minimizing the time required to complete the tasks assigned to the available resources. Several algorithms have been introduced to solve the scheduling problem for tasks of various kinds, but the developed algorithms are all task-dependent. The major criterion of a task scheduling algorithm is to optimize resource utilization in a diverse computing environment, minimizing makespan and execution time, so that the accountability of a healthcare industry that uses cloud computing can be enhanced. The proposed algorithm is designed to deal with variable-length tasks by taking advantage of different heuristic algorithms, and it ensures optimal task scheduling across the available resources to enhance the quality of the healthcare system.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_40-Mapping_of_Independent_Tasks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Analysis with Faces using Harris Detector and Correlation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100839</link>
        <id>10.14569/IJACSA.2019.0100839</id>
        <doi>10.14569/IJACSA.2019.0100839</doi>
        <lastModDate>2019-08-31T09:55:34.5200000+00:00</lastModDate>
        
        <creator>Rodolfo Romero Herrera</creator>
        
        <creator>Francisco Gallegos Funes</creator>
        
        <creator>Jos&#233; Elias Romero Mart&#237;nez</creator>
        
        <subject>Viola and Jones; Harris detector; correlation; video</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>A procedure is presented for detecting changes in a video sequence, based on the Viola and Jones method for obtaining images of people's faces; the videos are taken from the web. The software obtains separate images or frames of the nose, mouth, and eyes; here the eyes are taken as an example. Change detection is performed using correlation and the Harris detector. The correlation results allow us to recognize changes in the person's position, or movement of the camera if the individual remains fixed, and vice versa. The Harris detector makes it possible to analyze a part of the face: with the detected reference points, very small changes in the video sequence of a particular organ, such as the human eye, can be recognized, even when the image is of poor quality, as is the case for videos downloaded from the Internet or taken with a low-resolution camera.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_39-Video_Analysis_with_Faces_using_Harris_Detector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Security based on the Homomorphic Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100838</link>
        <id>10.14569/IJACSA.2019.0100838</id>
        <doi>10.14569/IJACSA.2019.0100838</doi>
        <lastModDate>2019-08-31T09:55:34.5030000+00:00</lastModDate>
        
        <creator>Waleed T Al-Sit</creator>
        
        <creator>Qussay Al-Jubouri</creator>
        
        <creator>Hani Al-Zoubi</creator>
        
        <subject>Cloud computing; homomorphic encryption; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Cloud computing provides services rather than products, offering many benefits to clients who pay to use hardware and software resources. Cloud computing has many advantages, such as low cost, easy maintenance, and readily available resources. The main challenge in a cloud system is how to make it highly secure against attackers, and for this reason methods have been developed to increase the security level through different techniques. This paper aims to review these techniques and their security challenges by presenting the most popular cloud techniques and applications. Homomorphic encryption in cloud computing is presented in this paper as a solution for increasing data security: with this method, a client can perform an operation on encrypted data without decrypting it, obtaining the same result as the computation applied to the decrypted data. Finally, the reviewed security techniques are discussed, along with some recommendations that might be used to raise the security level in such systems.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_38-Cloud_Security_based_on_the_Homomorphic_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Steganography Performance over AWGN Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100837</link>
        <id>10.14569/IJACSA.2019.0100837</id>
        <doi>10.14569/IJACSA.2019.0100837</doi>
        <lastModDate>2019-08-31T09:55:34.4900000+00:00</lastModDate>
        
        <creator>Fahd Alharbi</creator>
        
        <subject>Steganography; robustness; noise; AWGN; Viterbi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Steganography can be performed in the frequency domain or the spatial domain. In the spatial domain, the least significant bit (LSB) method is the most widely used: the least significant bits of the binary representation of the image's pixels carry the confidential data bits. In the frequency domain, on the other hand, secret data bits are hidden using coefficients of the image's frequency representation, such as the discrete cosine transform (DCT). Robustness against image attacks or channel noise is a key requirement in steganography. In this paper, we study the performance of steganography methods over a channel with Additive White Gaussian Noise (AWGN). We use the bit error rate to evaluate the performance of each method over the channel at different noise levels. Simulation results show that the frequency domain technique is more robust and achieves a better bit error rate over a noisy channel than the spatial domain method. Moreover, we enhance the robustness of the steganography system by using a convolutional encoder and a Viterbi decoder, and we evaluate the effect of the encoder's parameters, such as rate and constraint length.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_37-Steganography_Performance_over_AWGN_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Approaches for Predicting the Severity Level of Software Bug Reports in Closed Source Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100836</link>
        <id>10.14569/IJACSA.2019.0100836</id>
        <doi>10.14569/IJACSA.2019.0100836</doi>
        <lastModDate>2019-08-31T09:55:34.4900000+00:00</lastModDate>
        
        <creator>Aladdin Baarah</creator>
        
        <creator>Ahmad Aloqaily</creator>
        
        <creator>Zaher Salah</creator>
        
        <creator>Mannam Zamzeer</creator>
        
        <creator>Mohammad Sallam</creator>
        
        <subject>Software engineering; software maintenance; bug tracking system; bug severity; data mining; machine learning; severity prediction; closed-source projects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In the Software Development Life Cycle, fixing bugs is one of the essential activities of the software maintenance phase. Bug severity indicates how major or minor the bug's impact on the execution of the system is and how rapidly the developer should fix it. Triaging the vast number of new bugs submitted to software bug repositories is a cumbersome and time-consuming process, and manual triage may lead to mistakes in assigning the appropriate severity level to each bug, delaying the fixing of severe software bugs. Therefore, the whole process of assigning severity levels to bug reports should be automated. In this paper, we aim to build prediction models that determine the severity class (severe or non-severe) of a reported bug. To validate our approach, we constructed a dataset from historical bug reports stored in the JIRA bug tracking system. These bug reports relate to different closed-source projects developed by the INTIX company located in Amman, Jordan. We compare eight popular machine learning algorithms, namely Naive Bayes, Naive Bayes Multinomial, Support Vector Machine, Decision Tree (J48), Random Forest, Logistic Model Trees, Decision Rules (JRip), and K-Nearest Neighbor, in terms of accuracy, F-measure, and Area Under the Curve (AUC). According to the experimental results, a decision tree algorithm called Logistic Model Trees achieved better performance than the other machine learning algorithms in terms of accuracy, AUC, and F-measure, with values of 86.31, 0.90, and 0.91, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_36-Machine_Learning_Approaches_for_Predicting_the_Severity_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Analytics Framework for Adaptive E-learning System to Monitor the Learner’s Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100835</link>
        <id>10.14569/IJACSA.2019.0100835</id>
        <doi>10.14569/IJACSA.2019.0100835</doi>
        <lastModDate>2019-08-31T09:55:34.4730000+00:00</lastModDate>
        
        <creator>Salma EL Janati</creator>
        
        <creator>Abdelilah Maach</creator>
        
        <creator>Driss El Ghanami</creator>
        
        <subject>e-Learning; adaptive e-learning system; learner model; learning analytics; business intelligence; data warehouse; content presentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Research on adaptive e-learning systems (AE-LS) has long focused on the learner model and learning activities to personalize the learner's experience. However, many unresolved issues make it difficult for trainee teachers to obtain appropriate information about the learner's behavior. The evolution of Learning Analytics (LA) offers new possibilities for solving these AE-LS problems. In this paper, we propose a business intelligence framework for AE-LS to monitor and manage learner performance more effectively. The suggested AE-LS architecture proposes a data warehouse model that responds to these problems, defining specific measures and dimensions that help teachers and educational administrators evaluate and analyze the learner's activities. By analyzing these interactions, the adaptive e-learning analytics system (AE-LAS) has the potential to provide a predictive view of upcoming challenges. These predictions are used to adapt the content presentation and improve the performance of the learning process.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_35-Learning_Analytics_Framework_for_Adaptive_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Greenhouses for the Reduction of the Cost of the Family Basket in the District of Villa El Salvador-Per&#250;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100834</link>
        <id>10.14569/IJACSA.2019.0100834</id>
        <doi>10.14569/IJACSA.2019.0100834</doi>
        <lastModDate>2019-08-31T09:55:34.4570000+00:00</lastModDate>
        
        <creator>Pedro Romero Huaroto</creator>
        
        <creator>Abraham Casanova Robles</creator>
        
        <creator>Nicolh Antony Ciriaco Susanibar</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Germination; I2C protocol; humus; ATmega</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Today, the cost of the family basket is gradually increasing, not only globally but also in our country. This increase includes the demand for fresh vegetables that allow people to improve their quality of life, as part of the search for a healthier and more natural diet supported by existing technologies. In response, we propose the implementation of an automated greenhouse, with sensors and actuators that control a microclimate for the correct and efficient growth of vegetables. With this proposal, we obtain a saving of 50%, equivalent to 8.00 dollars, on the planting of lettuce compared with market prices, thus achieving a reduction in the cost of the family basket.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_34-Automated_Greenhouses_for_the_Reduction_of_the_Cost_of_the_Family_Basket.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Video Content Authentication using Video Binary Pattern and Extreme Learning Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100833</link>
        <id>10.14569/IJACSA.2019.0100833</id>
        <doi>10.14569/IJACSA.2019.0100833</doi>
        <lastModDate>2019-08-31T09:55:34.4430000+00:00</lastModDate>
        
        <creator>Mubbashar Sadddique</creator>
        
        <creator>Khurshid Asghar</creator>
        
        <creator>Tariq Mehmood</creator>
        
        <creator>Muhammad Hussain</creator>
        
        <creator>Zulfiqar Habib</creator>
        
        <subject>Video forgery; spatial video forgery; passive forgery detection; Video Binary Pattern (VBP); feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Recently, owing to the easy accessibility of smartphones, digital cameras, and other video recording devices, the field of digital video technology has advanced radically. Digital videos have become very important in courts of law and in the media (print, electronic, and social). At the same time, the widespread availability of Video Editing Tools (VETs) has made video tampering very easy. Detecting this tampering is very important because it may affect the understanding and interpretation of video content. Existing techniques for detecting forgery in video content can be broadly categorized as active or passive. This research proposes a passive technique for video tampering detection in the spatial domain. The technique comprises two phases: 1) feature extraction with the proposed Video Binary Pattern (VBP) descriptor, and 2) Extreme Learning Machine (ELM) based classification. Experimental results on different datasets reveal that the proposed technique achieved an accuracy of 98.47%.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_33-Robust_Video_Content_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Muscle Electro Stimulator for the Reduction of Stretch Marks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100832</link>
        <id>10.14569/IJACSA.2019.0100832</id>
        <doi>10.14569/IJACSA.2019.0100832</doi>
        <lastModDate>2019-08-31T09:55:34.4430000+00:00</lastModDate>
        
        <creator>Paima-Sahuma Jaime</creator>
        
        <creator>Duran-Berrio Linder</creator>
        
        <creator>Palacios-Cusiyunca Chrisstopher</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Electro stimulator; stretch marks; muscle fibers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The problem of stretch marks arises because the skin stretches abruptly in a short time; this change causes the skin to deform and widen, forming a roughness known as stretch marks. This work arose from the need to reduce this skin deformation, which many people suffer, mainly due to overweight, pregnancy, or rapid growth during adolescence. In this paper, a device is designed with the task of reducing skin roughness, using electro-stimulation as the primary technique to apply electrical impulses. The device can limit and control the electrical signals produced in order to control the movement of muscle fibers and skin. The results obtained show a remarkable reduction of stretch marks in one person after the application of electrical stimuli with the device. The research shows promising results.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_32-Muscle_Electrostimulator_for_the_Reduction_of_Stretch_Marks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Dimensionality Reduction and Classification of Hyperspectral Images based on Normalized Synergy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100831</link>
        <id>10.14569/IJACSA.2019.0100831</id>
        <doi>10.14569/IJACSA.2019.0100831</doi>
        <lastModDate>2019-08-31T09:55:34.4270000+00:00</lastModDate>
        
        <creator>Asma Elmaizi</creator>
        
        <creator>Hasna Nhaila</creator>
        
        <creator>Elkebir Sarhrouni</creator>
        
        <creator>Ahmed Hammouch</creator>
        
        <creator>Nacir Chafik</creator>
        
        <subject>Hyperspectral images; target detection; pixel classification; dimensionality reduction; band selection; information theory; mutual information; normalized synergy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>During the last decade, hyperspectral images have attracted increasing interest from researchers worldwide. They provide more detailed information about an observed area and allow accurate target detection and precise discrimination of objects compared to classical RGB and multispectral images. Despite the great potential of hyperspectral technology, the analysis and exploitation of the large data volume remain a challenging task. The existence of irrelevant, redundant, and noisy images decreases the classification accuracy. As a result, dimensionality reduction is a mandatory step in order to select a minimal and effective image subset. In this paper, a new filter approach, normalized mutual synergy (NMS), is proposed to detect relevant bands that are complementary in class prediction better than the original hyperspectral cube data. The algorithm consists of two steps: image selection through normalized synergy information, and pixel classification. The proposed approach measures the discriminative power of the selected bands based on a combination of their maximal normalized synergic information, minimum redundancy, and maximal mutual information with the ground truth. A comparative study using the support vector machine (SVM) and k-nearest neighbor (KNN) classifiers is conducted to evaluate the proposed approach against state-of-the-art band selection methods. Experimental results on three NASA benchmark hyperspectral images, “Aviris Indiana Pine”, “Salinas”, and “Pavia University”, demonstrated the robustness, effectiveness, and discriminative power of the proposed approach over the literature approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_31-A_Novel_Approach_for_Dimensionality_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Security Model for Web Browser Local Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100830</link>
        <id>10.14569/IJACSA.2019.0100830</id>
        <doi>10.14569/IJACSA.2019.0100830</doi>
        <lastModDate>2019-08-31T09:55:34.4100000+00:00</lastModDate>
        
        <creator>Thamer Al-Rousan</creator>
        
        <creator>Bassam Al-Shargabi</creator>
        
        <creator>Hasan Abualese</creator>
        
        <subject>HTML5; security; local storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In recent years, the web browser has taken over many roles of the traditional operating system, such as acting as a host platform for web applications. Web browser storage, where web applications can save data locally, was one of the new functionalities added in HTML5. However, web functionality has increased significantly since HTML5 was introduced. As web functionality increased, so did the threats facing web users. One of the most prevalent threats is the violation of users’ privacy. This study examines the existing security issues related to the usage of web browser storage and proposes a new model to secure the data saved in the browser’s storage. The model was designed and implemented as a web browser extension to secure the saved data. The model was experimentally demonstrated and the results were evaluated.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_30-A_New_Security_Model_for_Web_Browser.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Positioning System using Regression-based Fingerprint Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100829</link>
        <id>10.14569/IJACSA.2019.0100829</id>
        <doi>10.14569/IJACSA.2019.0100829</doi>
        <lastModDate>2019-08-31T09:55:34.3970000+00:00</lastModDate>
        
        <creator>Reginald Putra Ghozali</creator>
        
        <creator>Gede Putra Kusuma</creator>
        
        <subject>Indoor positioning system; fingerprinting; regression machine learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Indoor positioning systems have the opportunity to be used in different business platforms. Based on past research, an optimized localization method for Bluetooth Low Energy (BLE) that predicts the position of a person or object with high accuracy has not yet been found. The most recent research that has addressed the inconsistency of Received Signal Strength (RSS) values uses the fingerprint method. This paper proposes a deep regression machine learning model using a convolutional neural network (CNN) with a regression-based fingerprint model to estimate the real position. The model uses the 5 nearest fingerprints as reference RSS values, with their location (x or y) labels as inputs, to produce a single position value (x or y); the process is then repeated to produce the second position value, creating the complete coordinate of the estimated position. To evaluate the proposed model, training data are compared with validation data using the Root Mean Squared Error (RMSE). Comparisons are made with a Multilayer Perceptron (MLP) model and with the weighted sum method as benchmarks. The experiments gave the mean distance and 90th percentile distance of the proposed model and the benchmarks. The CNN model achieved accuracies lower than 330 cm at the 90th percentile with a mean distance lower than 185 cm. The weighted sum model achieved accuracies lower than 360 cm at the 90th percentile with a mean distance higher than 185 cm, and the MLP model lies in between. The results demonstrate that the proposed method outperformed the benchmark methods.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_29-Indoor_Positioning_System_using_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey: Agent-based Software Technology Under the Eyes of Cyber Security, Security Controls, Attacks and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100828</link>
        <id>10.14569/IJACSA.2019.0100828</id>
        <doi>10.14569/IJACSA.2019.0100828</doi>
        <lastModDate>2019-08-31T09:55:34.3970000+00:00</lastModDate>
        
        <creator>Bandar Alluhaybi</creator>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Ahmed Alzhrani</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <subject>Agent; attack; cyber; security; requirement; maturity; protection goals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Recently, agent-based software technology has received wide attention by the research community due to its valuable benefits, such as reducing the load on networks and providing an efficient solution for the transmission challenge problem. However, the major concern in building agent-based systems is related to the security of agents. In this paper, we explore the techniques used to build controls that guarantee both the protection of agents against malicious destination machines and the protection of destination machines against malicious agents. In addition, statistical-based analyses are employed to evaluate the level of maturity of the protection techniques to preserve the protection goals (the code and data, state, and itinerary of the agent), with and without the threat of attacks. Challenges regarding the security of agents are presented and highlighted by seven research questions related to satisfying cyber security requirements, protecting the visiting agent and the visited host machine from each other, providing robustness against advanced attacks that target protection goals, quantifying the security in agent-based systems, and providing features of self-protection and self-communication to the agent itself.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_28-A_Survey_Agent_based_Software_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attractiveness Analysis of Quiz Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100827</link>
        <id>10.14569/IJACSA.2019.0100827</id>
        <doi>10.14569/IJACSA.2019.0100827</doi>
        <lastModDate>2019-08-31T09:55:34.3800000+00:00</lastModDate>
        
        <creator>Tara Khairiyah Md Zali</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Mohd Aliff</creator>
        
        <subject>Quiz games; game refinement theory; attractiveness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Quiz games are played on platforms such as television game shows, radio game shows, and, recently, mobile apps. In this study, HQ Trivia and SongPop 2 were chosen as benchmarks. Data for each game were collected for the analysis, and the game refinement measure was employed for the assessment, focusing on the different elimination tournament system of each sample. The results show that games such as HQ Trivia, which applies a single-round elimination tournament, have a lower game refinement value, indicating a highly skill-based game. Meanwhile, games that apply a round-robin system, such as SongPop 2, have a higher game refinement value, indicating a highly stochastic game. SongPop 2 and HQ Trivia both have more than 5 million downloads in the Google Play Store. It is concluded that different types of quiz games applying different kinds of tournament style have different game refinement values.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_27-Attractiveness_Analysis_of_Quiz_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Arm Analysis based on Master Device Pneumatic Actuators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100826</link>
        <id>10.14569/IJACSA.2019.0100826</id>
        <doi>10.14569/IJACSA.2019.0100826</doi>
        <lastModDate>2019-08-31T09:55:34.3630000+00:00</lastModDate>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <subject>Trajectory control; master-slave control; robot arm; pneumatic cylinder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Advances in technology have expanded the use of soft actuators in various fields, especially robotics, rehabilitation, and medicine. Soft actuator development provides many advantages, primarily simple structures, a high power-to-weight ratio, good compliance, high water resistance, and low production cost. However, most soft actuators suffer from being oversized, which could potentially hurt users, as they are often made of hard materials such as steel and rigid plastics. A current drawback of soft actuator implementation in robotic arms is excessive weight, which makes these robots difficult for patients to set up by themselves and in turn less applicable for home rehabilitation training programs. Hence, there is a need to design a soft actuator that is safe and more flexible, especially for applications in rehabilitation areas or in-house rehabilitation programs. In this paper, we propose the design of a robot arm using a master device pneumatic actuator and analyse its implementation for the above purpose. The system comprises primarily the master and slave arms, two accelerometers and two potentiometers providing references for attitude control, six quasi-servo valves, and an SH-7125 microcontroller. In our proposed design, actuation is generated from the elastic deformation of extension and contraction of the cylinder structure when high pneumatic pressure is supplied to the chamber. The control performance of the device is investigated using simulation, whereby the rational model of the robot arm and the quasi-servo valve with the embedded controller is implemented and analysed. It is found that the analysed results of the model agreed well with the desired values.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_26-Robot_Arm_Analysis_based_on_Master_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling the Enterprise Architecture Implementation in the Public Sector using HOT-Fit Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100825</link>
        <id>10.14569/IJACSA.2019.0100825</id>
        <doi>10.14569/IJACSA.2019.0100825</doi>
        <lastModDate>2019-08-31T09:55:34.3500000+00:00</lastModDate>
        
        <creator>Hasimi Sallehudin</creator>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <creator>Nur Azaliah Abu Bakar</creator>
        
        <creator>Rogis Baker</creator>
        
        <creator>Farashazillah Yahya</creator>
        
        <creator>Ahmad Firdause Md Fadzil</creator>
        
        <subject>Enterprise architecture; public sector; HOT-Fit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Enterprise architecture (EA) is very important to how the public sector’s IT systems are developed, organized, scaled up, maintained, and strategized. Despite an extensive literature, research on enterprise architecture in the public sector is still at an early stage, and the reasons explaining the acceptance, as well as the understanding of the implementation level, of EA services remain unclear. Therefore, this study examines the implementation of EA by measuring its influencing factors in the Malaysian public sector. Grounded in the Human-Organization-Technology (HOT-Fit) model, this study proposes a conceptual framework by decomposing human, organizational, and technological characteristics as the main categories for assessing the identified factors. A total of 92 respondents in the Malaysian public sector participated in this study. Structural Equation Modelling with Partial Least Squares is the main statistical technique used. The study has revealed that human characteristics, such as knowledge of and innovativeness towards EA, and technological characteristics, such as the relative advantage and complexity of EA, influence its implementation by the Malaysian public sector. Based on the findings, the theoretical and practical implications of the study, as well as its limitations and future work, are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_25-Modelling_the_Enterprise_Architecture_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Classification View of Personalized e-Learning with Social Collaboration Support</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100823</link>
        <id>10.14569/IJACSA.2019.0100823</id>
        <doi>10.14569/IJACSA.2019.0100823</doi>
        <lastModDate>2019-08-31T09:55:34.3330000+00:00</lastModDate>
        
        <creator>Amal Al-Abri</creator>
        
        <creator>Yassine Jamoussi</creator>
        
        <creator>Zuhoor AlKhanjari</creator>
        
        <creator>Naoufel Kraiem</creator>
        
        <subject>Classification review; collaboration; personalized e-learning; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>With the emergence of Web 2.0 technologies, interaction and collaboration support in the educational field has been augmented. This support encourages researchers to enrich the e-learning environment with personalized characteristics by utilizing the outputs of collaboration support. Achieving this requires understanding the existing environments and highlighting their strengths. As a result, there have been many attempts to state the current status of personalized e-learning environments from different perspectives. However, these attempts targeted a specific view and direction, and failed to provide a general view of the adoption of personalized e-learning environments supported by social collaboration tools. This paper provides a classified view of the current status of personalized e-learning environments that incorporate social collaboration tools for providing the personalization feature. The classification adopts four different views: subject, purpose, method, and tool. The findings show that the utilization of user-generated content and social interaction functionalities for personalization is limited and not fully exploited. In short, the potential of providing personalized learning with social interaction and collaboration features remains not fully explored.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_23-Towards_a_Classification_view_of_Personalized_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Design of RPL Objective Function for Routing in Internet of Things using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100824</link>
        <id>10.14569/IJACSA.2019.0100824</id>
        <doi>10.14569/IJACSA.2019.0100824</doi>
        <lastModDate>2019-08-31T09:55:34.3330000+00:00</lastModDate>
        
        <creator>Adeeb Saaidah</creator>
        
        <creator>Omar Almomani</creator>
        
        <creator>Laila Al-Qaisi</creator>
        
        <creator>Mohammed Kamel MADI</creator>
        
        <subject>RPL; Objective Function; IOT; fuzzy logic; LLNs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The nature of Low-power and Lossy Networks (LLNs) requires efficient protocols capable of handling resource constraints. LLNs consist of networks that connect different types of devices with constrained resources such as energy, memory, and battery life. Using standard routing protocols such as Open Shortest Path First (OSPF) is inefficient for LLNs due to these constraints, so the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was developed to accommodate them. RPL is a distance-vector protocol that uses an objective function (OF) to define the best tree path. However, choosing a single metric for the OF has been found unable to accommodate application requirements. In this paper, an enhanced OF, named OFRRT-FUZZY, is proposed, relying on several metrics combined using fuzzy logic. In order to overcome the limitations of using a single metric, the proposed OFRRT-FUZZY considers both node and link metrics, namely Received Signal Strength Indicator (RSSI), Remaining Energy (RE), and Throughput (TH). The proposed OFRRT-FUZZY is implemented in the Cooja simulator, and the results are compared with OF0 and MRHOF in order to find which OF provides more satisfactory results. Simulation results show that OFRRT-FUZZY outperformed OF0 and MRHOF.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_24-An_Efficient_Design_of_RPL_Objective_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-linear Dimensionality Reduction-based Intrusion Detection using Deep Autoencoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100822</link>
        <id>10.14569/IJACSA.2019.0100822</id>
        <doi>10.14569/IJACSA.2019.0100822</doi>
        <lastModDate>2019-08-31T09:55:34.3170000+00:00</lastModDate>
        
        <creator>S Sreenivasa Chakravarthi</creator>
        
        <creator>R. Jagadeesh Kannan</creator>
        
        <subject>Autoencoder; deep learning; principal component analysis; dense neural network; false alarm rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Intrusion detection has become a core part of any computer network due to the increasing amount of digital content available. In parallel, data breaches and malware attacks have also grown in large numbers, which makes the role of intrusion detection more essential. Even though many existing techniques are successfully used for detecting intruders, new variants of malware and attacks are released every day. To counter these new types of attacks, intrusion detection must be designed with state-of-the-art techniques such as deep learning. At present, deep learning techniques play a strong role in natural language processing, computer vision, and speech processing. This paper reviews the role of deep learning techniques for intrusion detection and proposes an efficient deep Auto Encoder (AE) based intrusion detection technique. The intrusion detection is implemented in two stages with a binary classifier and a multiclass classification algorithm (dense neural network). The performance of the proposed approach is presented and compared with parallel methods used for intrusion detection. The reconstruction error of the AE model is compared with that of PCA, and the performance of both anomaly detection and multiclass classification is analyzed using metrics such as accuracy and false alarm rate. The compressed representation of the AE model helps to lessen the false alarm rate of both anomaly detection and attack classification, using SVM and dense NN models respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_22-Non_Linear_Dimensionality_Reduction_based_Intrusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Most Valuable Player Algorithm for Solving Minimum Vertex Cover Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100821</link>
        <id>10.14569/IJACSA.2019.0100821</id>
        <doi>10.14569/IJACSA.2019.0100821</doi>
        <lastModDate>2019-08-31T09:55:34.3030000+00:00</lastModDate>
        
        <creator>Hebatullah Khattab</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <subject>Most valuable player algorithm; minimum vertex cover problem; metaheuristic algorithms; optimization problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The Minimum Vertex Cover Problem (MVCP) is a combinatorial optimization problem that is used to formulate multiple real-life applications. Owing to this fact, abundant research has been undertaken to discover valuable MVCP solutions. The Most Valuable Player Algorithm (MVPA) is a recently developed metaheuristic algorithm that takes its inspiration from team-based sports. In this paper, the MVPA_MVCP algorithm is introduced as an adaptation of the MVPA for the MVCP. The MVPA_MVCP algorithm is implemented in the Java programming language and tested on a Microsoft Azure virtual machine. The performance of the MVPA_MVCP algorithm is evaluated analytically in terms of run time complexity. Its average-case run time complexity is shown to be Θ(I(|V|+|E|)), where I is the size of the initial population, |V| is the number of vertices, and |E| is the number of edges of the tested graph. The MVPA_MVCP algorithm is also evaluated experimentally in terms of the quality of the obtained solutions and the run time. The experimental results over 15 instances of the DIMACS benchmark revealed that the MVPA_MVCP algorithm could, in the best case, reach the best known optimal solution for seven data instances. The experimental findings also showed that there is a direct relation between the number of edges of the graph under test and the run time.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_21-Most_Valuable_Player_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach towards Detecting Dementia based on its Modifiable Risk Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100820</link>
        <id>10.14569/IJACSA.2019.0100820</id>
        <doi>10.14569/IJACSA.2019.0100820</doi>
        <lastModDate>2019-08-31T09:55:34.2870000+00:00</lastModDate>
        
        <creator>Reem Bin-Hezam</creator>
        
        <creator>Tomas E. Ward</creator>
        
        <subject>Machine learning; classification; data mining; data preparation; dementia; modifiable risk factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Dementia is considered one of the greatest global health and social care challenges of the 21st century. Fortunately, dementia can be delayed or possibly prevented by changes in lifestyle as dictated through known modifiable risk factors. These risk factors include low education, hypertension, obesity, hearing loss, depression, diabetes, physical inactivity, smoking, and social isolation. Other risk factors are non-modifiable and include aging and genetics. The main goal of this study is to demonstrate how machine learning methods can help predict dementia based on an individual’s modifiable risk factor profile. We use publicly available datasets for training algorithms to predict a participant’s cognitive state diagnosis as cognitively normal, mild cognitive impairment, or dementia. Several approaches were implemented using data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) longitudinal study. The best classification results were obtained using both the Lancet and the Libra risk factor lists via longitudinal datasets, which outperformed cross-sectional baseline datasets. Moreover, using only the data of the most recent visits provided even better results than using the complete longitudinal set. Binary classification (dementia vs. non-dementia) yielded approximately 92% accuracy, while full multi-class prediction yielded 77% accuracy using logistic regression, followed by random forest with 92% and 70%, respectively. The results demonstrate the utility of machine learning in the prediction of cognitive impairment based on modifiable risk factors and may encourage interventions to reduce the prevalence or severity of the condition in large populations.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_20-A_Machine_Learning_Approach_towards_Detecting_Dementia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of the Vegetable Oil Blends Composition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100819</link>
        <id>10.14569/IJACSA.2019.0100819</id>
        <doi>10.14569/IJACSA.2019.0100819</doi>
        <lastModDate>2019-08-31T09:55:34.2700000+00:00</lastModDate>
        
        <creator>Olga S Voskanyan</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Marina A. Nikitina</creator>
        
        <creator>Maria V. Klokonos</creator>
        
        <creator>Daria A. Guseva</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <subject>Modeling; brute force; vegetable oil blends; omega-6; omega-3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The article presents computer modeling of blends of vegetable oils for treatment-and-prophylactic and healthy nutrition. To solve this problem, based on biomedical requirements, models of vegetable oil blends have been developed, taking into account the required chemical composition, mass fractions of the main components of the product, and structural correlations of the biological value of blends according to the criterion of fatty acid compliance (omega-6 to omega-3). The problem was solved by the method of complete enumeration (brute force), which belongs to the class of methods that find a solution by exhaustively evaluating all possible options. An automated research system has been developed and implemented to model the composition of mixtures of vegetable oils in accordance with a given target function of the ratio of omega-6:omega-3 polyunsaturated fatty acids (PUFAs). The use of an automated system allowed us to model prescription formulations of oil blends, taking into account the constraints set and a given goal function. In this study, three alternative blend compositions were obtained, with an omega-6:omega-3 ratio of 5:1 in the first and second variants and 10:1 in the third, which makes it possible to use them for healthy and therapeutic nutrition.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_19-Modeling_of_the_Vegetable_Oil_Blends_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Rubric-based Systematic Model for Evaluating and Prioritizing Academic Practices to Enhance the Education Outcomes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100818</link>
        <id>10.14569/IJACSA.2019.0100818</id>
        <doi>10.14569/IJACSA.2019.0100818</doi>
        <lastModDate>2019-08-31T09:55:34.2530000+00:00</lastModDate>
        
        <creator>Mohammed Al-Shargabi</creator>
        
        <subject>Systematic Model; Smart Rubric; NCAAA Good Practices; quality of academic programs and universities; improvement action plan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Recently, the impact of the free-market economy, globalization, and the knowledge economy has become a challenging focal point for higher educational institutions, resulting in radical change. Therefore, it became mandatory for academic programs to prepare highly qualified graduates to meet the new challenges through the implementation of well-defined academic standards. For this reason, the National Center for Academic Accreditation &amp; Evaluation (NCAAA) in the Kingdom of Saudi Arabia (KSA) defined a set of standards to ensure that the quality of education in KSA is equivalent to the highest international standards. The NCAAA standards contain good-practice criteria to guide universities in evaluating their quality performance for improvement and obtaining NCAAA accreditation. However, implementing NCAAA standards without supportive systems has been found to be a very complex task due to the existence of a large number of standard criteria, an evaluation process driven by personal opinions, a lack of quality evaluation expertise, and manual calculation. This, in turn, leads to inaccurate evaluation, inaccurate improvement plans, and difficulty in obtaining NCAAA accreditation. Therefore, this paper introduces a systematic model containing smart rubrics designed based on NCAAA quality performance evaluation elements, supported with algorithms and mathematical models to reduce the influence of personal opinions and provide accurate auto-evaluation and auto-prioritized action plans for NCAAA standards. The proposed model will support academics and administrators by facilitating their NCAAA quality tasks with ease, authentic self-assessment, accurate action plans, and simplified accreditation tasks. Finally, the implementation of the model proved to yield very efficient and effective results in supporting KSA education institutions in accreditation tasks, which will lead to enhanced quality of education and NCAAA accreditation.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_18-Smart_Rubric_based_Systematic_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Farmland Fertility Optimization Algorithm for Optimal Design of a Grid-connected Hybrid Renewable Energy System with Fuel Cell Storage: Case Study of Ataka, Egypt</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100817</link>
        <id>10.14569/IJACSA.2019.0100817</id>
        <doi>10.14569/IJACSA.2019.0100817</doi>
        <lastModDate>2019-08-31T09:55:34.2530000+00:00</lastModDate>
        
        <creator>Ahmed A. Zaki Diab</creator>
        
        <creator>Sultan I. EL-Ajmi</creator>
        
        <creator>Hamdy M. Sultan</creator>
        
        <creator>Yahia B. Hassan</creator>
        
        <subject>Photovoltaic; wind; fuel cell; renewable energy; hybrid energy system; modified farmland fertility optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>In this paper, a Modified Farmland Fertility Optimization algorithm (MFFA) is presented for optimal sizing of a grid-connected hybrid system including photovoltaic (PV), wind turbines, and fuel cells (FC). The system is optimally designed to provide clean, reliable, and affordable energy by adopting hybrid power systems. Such a system is very important for countries looking to achieve their sustainable development goals. MFFA is proposed in order to reduce the processing and implementation time. The optimization method depends on the high reliability of the hybrid power supply, small fluctuations in the energy injected into the grid, and high utilization of the complementary characteristics of wind and solar. Moreover, MFFA is applied to minimize the cost of energy while satisfying the operational constraints. A real case study of a hybrid power system for the Ataka region in Egypt is introduced to evaluate the performance of the proposed optimization method. Moreover, a comprehensive comparison between the proposed MFFA optimization technique and the conventional Farmland Fertility Algorithm (FFA) is presented to validate the proposed MFFA.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_17-Modified_Farmland_Fertility_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Return Donor and Analyzing Blood Donation Time Series using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100816</link>
        <id>10.14569/IJACSA.2019.0100816</id>
        <doi>10.14569/IJACSA.2019.0100816</doi>
        <lastModDate>2019-08-31T09:55:34.2400000+00:00</lastModDate>
        
        <creator>Anfal Saad Alkahtani</creator>
        
        <creator>Musfira Jilani</creator>
        
        <subject>Classification; machine learning; time series analysis; blood donation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Since blood centers in most countries typically rely on volunteer donors to meet hospitals&#39; needs, donor retention is critical for blood banks. Identifying regular donors is critical for the advance planning of blood banks to guarantee a stable blood supply. In this research, donor data was collected from a Saudi blood bank from 2017 to 2018. Machine learning algorithms such as logistic regression (LG), random forest (RF), and support vector classifier (SVC) were applied to develop and evaluate models for classifying blood donors as return and non-return donors. The natural imbalance of the donor distribution required extra attention and consideration to produce classifiers with good performance. Thus, SMOTE over-sampling was tested. Experiments with different classifiers showed very similar performance results. In addition to the donor return classification, a time series analysis on the donor dataset was also considered to find any seasonal variations that could be captured and delivered to blood banks for better planning and decision making. After aggregating the donation count by month, results showed that the number of donations each year was stable except for two discovered drops in June and September, which for the two observed years coincided with two religious periods: fasting and performing Hajj.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_16-Predicting_Return_Donor_and_Analyzing_Blood_Donation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Twitter Sentiment Analysis in Under-Resourced Languages using Byte-Level Recurrent Neural Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100815</link>
        <id>10.14569/IJACSA.2019.0100815</id>
        <doi>10.14569/IJACSA.2019.0100815</doi>
        <lastModDate>2019-08-31T09:55:34.2230000+00:00</lastModDate>
        
        <creator>Ridi Ferdiana</creator>
        
        <creator>Wiliam Fajar</creator>
        
        <creator>Desi Dwi Purwanti</creator>
        
        <creator>Artmita Sekar Tri Ayu</creator>
        
        <creator>Fahim Jatmiko</creator>
        
        <subject>Sentiment analysis; under-resourced problem; Indonesian dataset; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Sentiment analysis in a non-English language can be more challenging than in English because of the scarcity of publicly available resources to build a prediction model with high accuracy. To alleviate this under-resourced problem, this article introduces the leverage of a byte-level recurrent neural model to generate text representations for Twitter sentiment analysis in the Indonesian language. As the main part of the proposed model training is unsupervised and does not require much labeled data, this approach is scalable by using a huge amount of unlabeled data that is easily gathered on the Internet, without much dependency on human-generated resources. This paper also introduces an Indonesian dataset for general sentiment analysis. It consists of 10,806 tweets selected from a total of 454,559 gathered tweets, which were taken directly from Twitter using the Twitter API. The 10,806 tweets are then classified into three categories: positive, negative, and neutral. This Indonesian dataset could help the development of Indonesian sentiment analysis, especially general sentiment analysis, and encourage others to publish similar datasets in the future.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_15-Twitter_Sentiment_Analysis_in_Under_Resourced_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hadoop MapReduce for Parallel Genetic Algorithm to Solve Traveling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100814</link>
        <id>10.14569/IJACSA.2019.0100814</id>
        <doi>10.14569/IJACSA.2019.0100814</doi>
        <lastModDate>2019-08-31T09:55:34.2070000+00:00</lastModDate>
        
        <creator>Entesar Alanzi</creator>
        
        <creator>Hachemi Bennaceur</creator>
        
        <subject>Genetic algorithms; parallel genetic algorithms; Hadoop MapReduce; island model; traveling salesman problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Achieving an optimal solution for NP-complete problems is a big challenge nowadays. The paper deals with the Traveling Salesman Problem (TSP), one of the most important combinatorial optimization problems in this class. We investigated the Parallel Genetic Algorithm to solve TSP. We proposed a general platform based on the Hadoop MapReduce approach for implementing parallel genetic algorithms. Two versions of parallel genetic algorithms (PGA) are implemented: a Parallel Genetic Algorithm with Island Model (IPGA) and a new model named the Elite Parallel Genetic Algorithm using MapReduce (EPGA), which improves the population diversity of the IPGA. The two PGAs and the sequential version of the algorithm (SGA) were compared in terms of quality of solutions, execution time, speedup, and Hadoop overhead. The experimental study revealed that both PGA models outperform the SGA in terms of execution time and solution quality as the problem size increases. The computational results show that the EPGA model outperforms the IPGA in terms of solution quality, with almost similar running time for all the considered datasets and clusters. Genetic algorithms on the MapReduce platform provide better performance for solving large-scale problems.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_14-Hadoop_MapReduce_for_Parallel_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Chronic Kidney Disease using Machine Learning Algorithms with Least Number of Predictors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100813</link>
        <id>10.14569/IJACSA.2019.0100813</id>
        <doi>10.14569/IJACSA.2019.0100813</doi>
        <lastModDate>2019-08-31T09:55:34.1930000+00:00</lastModDate>
        
        <creator>Marwa Almasoud</creator>
        
        <creator>Tomas E Ward</creator>
        
        <subject>Chronic Kidney Disease (CKD); Random Forest (RF); Gradient Boosting (GB); Logistic Regression (LR); Support Vector Machines (SVM); Machine Learning (ML); prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Chronic kidney disease (CKD) is one of the most critical health problems due to its increasing prevalence. In this paper, we aim to test the ability of machine learning algorithms to predict chronic kidney disease using the smallest subset of features. Several statistical tests have been performed to remove redundant features, such as the ANOVA test, Pearson’s correlation, and Cramer’s V test. Logistic regression, support vector machines, random forest, and gradient boosting algorithms have been trained and tested using 10-fold cross-validation. We achieve 99.1% according to the F1-measure with the Gradient Boosting classifier. Also, we found that hemoglobin has high importance for both random forest and gradient boosting in detecting CKD. Finally, our results are among the highest compared to previous studies, but with fewer features than reached so far. Hence, we can detect CKD at a cost of only $26.65 by performing three simple tests.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_13-Detection_of_Chronic_Kidney_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relationship Analysis on the Experience of Hospitalised Paediatric Cancer Patient in Malaysia using Text Analytics Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100811</link>
        <id>10.14569/IJACSA.2019.0100811</id>
        <doi>10.14569/IJACSA.2019.0100811</doi>
        <lastModDate>2019-08-31T09:55:34.1770000+00:00</lastModDate>
        
        <creator>Zuraini Zainol</creator>
        
        <creator>Puteri N.E. Nohuddin</creator>
        
        <creator>Nadhirah Rasid</creator>
        
        <creator>Hamidah Alias</creator>
        
        <creator>A. Imran Nordin</creator>
        
        <subject>Patient experience; paediatric; keyword relationship analysis; bubble graph; text network analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The purpose of this study is to analyse the keyword relationships in paediatric cancer patients’ experiences whilst hospitalised during treatment sessions. This study collects data through 40 days of observations of 21 paediatric cancer patients. A combination of text analytical visualizations, such as a network analysis map and a bubble graph, is applied to analyse the data. Through the analysis, keywords such as “cri” (crying), “lay” (laying), “sleep” (sleeping) and “watch” (watching) are found to be the most common activities experienced by paediatric cancer patients when hospitalised. Based on this observation, it can be argued that these activities represent the experience they have whilst in the hospital. Based on the findings, the hospitalised paediatric cancer patient’s experience is limited due to the treatment protocol that requires them to be attached to an intravenous line. Therefore, most of their activities, such as sleeping, playing with their mobile phones, and watching videos, take place in bed. This study also offers a novel approach to transforming cancer patient data into useful knowledge about keyword relationships in paediatric cancer patients’ experiences during their stay in the hospital. The incorporation of these two text analytics techniques offers insights for researchers to understand the interesting hidden knowledge in a collection of unstructured data, and this information can be used by medical providers, psychologists, game designers, and others to develop applications that can address patients’ difficulties and ease their pain while warded in the hospital.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_11-Relationship_Analysis_on_the_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Potential-Diabetic Obese-Patients using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100812</link>
        <id>10.14569/IJACSA.2019.0100812</id>
        <doi>10.14569/IJACSA.2019.0100812</doi>
        <lastModDate>2019-08-31T09:55:34.1770000+00:00</lastModDate>
        
        <creator>Raghda Essam Ali</creator>
        
        <creator>Hatem El-Kadi</creator>
        
        <creator>Soha Safwat Labib</creator>
        
        <creator>Yasmine Ibrahim Saad</creator>
        
        <subject>Obesity; diabetes; nonalcoholic fatty liver disease; artificial neural network; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Diabetes is a chronic disease. Improper blood glucose control may cause serious complications in diabetic patients, such as heart and kidney disease, strokes, and blindness. Obesity is considered to be a massive risk factor for type 2 diabetes. Machine learning has been applied to many medical health aspects. In this paper, two machine learning techniques were applied, Support Vector Machine (SVM) and Artificial Neural Network (ANN), to predict diabetes mellitus. The proposed techniques were applied on a real dataset from Al-Kasr Al-Aini Hospital in Giza, Egypt. The models were examined using four-fold cross-validation. The results were obtained in two phases: in the first phase, forecasting patients with fatty liver disease using Support Vector Machine reached the highest accuracy of 95% when applied on 8 attributes. Then, an Artificial Neural Network technique to predict diabetic patients was applied on the output of phase 1 and another different 8 attributes to predict non-diabetic, pre-diabetic, and diabetic patients with an accuracy of 86.6%.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_12-Prediction_of_Potential_Diabetic_Obese_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Improving Usable-Security of Web based Healthcare Management System through Fuzzy AHP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100810</link>
        <id>10.14569/IJACSA.2019.0100810</id>
        <doi>10.14569/IJACSA.2019.0100810</doi>
        <lastModDate>2019-08-31T09:55:34.1470000+00:00</lastModDate>
        
        <creator>Nawaf Rasheed Alharbe</creator>
        
        <subject>Application security; application usability; usable-security; fuzzy analytic hierarchy process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2019.0100810.retraction</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_10-Improving_Usable_Security_of_Web_based_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Digital Data Access Control from Internal Threat in the Public Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100809</link>
        <id>10.14569/IJACSA.2019.0100809</id>
        <doi>10.14569/IJACSA.2019.0100809</doi>
        <lastModDate>2019-08-31T09:55:34.1300000+00:00</lastModDate>
        
        <creator>Haslidah Halim</creator>
        
        <creator>Maryati Mohd Yusof</creator>
        
        <subject>Information management; internal threats; control framework; risk; information security; personal data access</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Information management is one of the main challenges in the public sector because the information is often exposed to threat risks, particularly internal ones. Information theft or misuse, which is attributed to human factors, affects the reputation of public sector organizations due to the loss of public trust in the security and confidentiality of the information and personal data that are hacked by internal parties. Most studies focus on general problem solving related to internal threats instead of digital personal data protection. Therefore, this study identifies the main security control elements for personal data access in the public sector, including information security management, human resource security, operational security, access control, and compliance. A comprehensive framework is developed based on the identified security control elements and validated using a case study. Data are collected using interview, observation, and document analysis techniques. The findings contribute to the management of information system security through a systematic approach to controlling internal threats in the public sector. This framework can serve as a guideline for the public sector in managing internal threats to reduce security incidents involving unauthorized access to digital personal data.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_9-Framework_for_Digital_Data_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mortality Prediction based on Imbalanced New Born and Perinatal Period Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100808</link>
        <id>10.14569/IJACSA.2019.0100808</id>
        <doi>10.14569/IJACSA.2019.0100808</doi>
        <lastModDate>2019-08-31T09:55:34.1300000+00:00</lastModDate>
        
        <creator>Wafa M AlShwaish</creator>
        
        <creator>Maali Ibr. Alabdulhafith</creator>
        
        <subject>Component; machine learning; support vector machine; logistic regression; gradient boosting; random forest; deep learning; ensemble model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>This study used data collected by the New York State Department of Health between 2012 and 2016. The experiment evaluates six supervised machine learning methods: Support Vector Machine (SVM), Logistic Regression (LR), Gradient Boosting (GB), Random Forest (RF), Deep Learning (DL), and an Ensemble Model, all of which are used in the prediction of infant mortality. The experiment applied an ensemble model that assigns different weights to different models per output class in order to obtain better predictive performance for infant mortality. Efforts were made to measure the performance and compare the classifier accuracy of each model. Several criteria, including the area under the ROC curve, were considered when comparing the ensemble model (GB, RF and DL) with the other five models (SVM, LR, DL, GB and RF). In terms of these criteria, the ensemble model outperformed the others in predicting survival rates among infant patients given a balanced dataset (the areas under the ROC curve for minor, moderate, major, and extreme were 98%, 95%, 92%, and 97% respectively, giving a total accuracy of 80.65%). For the imbalanced dataset, the areas under the ROC curve for minor, moderate, major, and extreme were 98%, 98%, 99%, and 99% respectively, with total accuracy increasing to 97.44%. The results of these experiments showed that the ensemble model provided a better level of prediction for infant mortality than the other five models, based on the relative prediction accuracy of each model for each output class. Therefore, the ensemble model provides an extremely promising classifier for predicting infant mortality.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_8-Mortality_Prediction_based_on_Imbalanced_New_Born.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Average of Nearest Neighbors: Univariate Time Series Imputation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100807</link>
        <id>10.14569/IJACSA.2019.0100807</id>
        <doi>10.14569/IJACSA.2019.0100807</doi>
        <lastModDate>2019-08-31T09:55:34.1130000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Hugo Tito</creator>
        
        <creator>Carlos Silva</creator>
        
        <subject>Univariate time series; imputation; LANN; LANN+</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>The imputation of time series is one of the most important tasks in the homogenization process; the quality and precision of this process directly influence the accuracy of time series predictions. This paper proposes two simple yet quite powerful algorithms for the univariate time series imputation process, which are based on the means of the nearest neighbors for the imputation of missing data. The first, Local Average of Nearest Neighbors (LANN), calculates the missing value from the average of the neighbor preceding and the neighbor following the missing value. The second, Local Average of Nearest Neighbors+ (LANN+), considers a threshold parameter, which allows the calculation of missing values to be differentiated according to the difference between the neighbors: for differences less than or equal to the threshold, the missing value is calculated using LANN, and for larger differences, the missing value is calculated from the average of the four closest neighbors, two preceding and two following the missing value. Imputation results on different time series are promising.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_7-Local_Average_of_Nearest_Neighbors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sound user Interface with Touch Panel for Data and Information Expression and its Application to Meteorological Data Representation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100806</link>
        <id>10.14569/IJACSA.2019.0100806</id>
        <doi>10.14569/IJACSA.2019.0100806</doi>
        <lastModDate>2019-08-31T09:55:34.1000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Audible; meteorology; remote sensing satellite; musical scale; multi-layered data; SUI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>A Sound User Interface (SUI) with a touch panel for the representation of quantitative data and information is proposed, together with its application to meteorological data representation. The proposed SUI is not merely an earcon. Through experiments, it is found that the proposed SUI, combined with visual perception, enables meteorologists to understand meteorological data intuitively and in a far more comprehensible manner than before. It is also useful to hear “images”, in particular for blind persons.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_6-Sound_user_Interface_with_Touch_Panel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontological Model for Generating Complete, Form-based, Business Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100805</link>
        <id>10.14569/IJACSA.2019.0100805</id>
        <doi>10.14569/IJACSA.2019.0100805</doi>
        <lastModDate>2019-08-31T09:55:34.0830000+00:00</lastModDate>
        
        <creator>Daniel Strmecki</creator>
        
        <creator>Ivan Magdalenic</creator>
        
        <subject>Ontology; model; generative; automatic; programming; development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>This paper presents an ontological model for specifying and automatically generating complete business Web applications. First, a modular and expandable ontological model for specifying form-based, business Web applications is developed and presented. Next, the technology used for transforming the ontological specification to Java executable code is explained. Finally, the results of applying the proposed model for specifying and generating an order management application are presented. Results showed that the application of an ontological model in a generative programming approach increases the level of abstraction. This approach is especially suitable for development of software families, where similar features are reused in multiple products/applications.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_5-An_Ontological_Model_for_Generating_Complete.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Software Tools for Longitudinal Studies in Psychology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100804</link>
        <id>10.14569/IJACSA.2019.0100804</id>
        <doi>10.14569/IJACSA.2019.0100804</doi>
        <lastModDate>2019-08-31T09:55:34.0830000+00:00</lastModDate>
        
        <creator>Pavel Kolyasnikov</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Vladimir Belov</creator>
        
        <creator>Anastasia Silaeva</creator>
        
        <creator>Alexander Kosenkov</creator>
        
        <creator>Artem Malykh</creator>
        
        <creator>Zalina Takhirova</creator>
        
        <creator>Sergey Malykh</creator>
        
        <subject>Software tools for psychological research; choice of the tools for psychological research; longitudinal studies; criteria for software evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Longitudinal studies allow causal hypotheses to be assessed directly: they make it possible to relate the order of impacts (i.e., life events, educational effects, etc.) to the consequences that then occur. Long-term data storage places specific requirements on software and on methods of data storage and conversion. The paper introduces criteria for evaluating software tools in the context of their application in longitudinal studies in psychology. The study analyzes popular psychological research tools on the basis of the introduced criteria.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_4-Analysis_of_Software_Tools_for_Longitudinal_Studies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study and Analysis of Delay Sensitive and Energy Efficient Routing Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100803</link>
        <id>10.14569/IJACSA.2019.0100803</id>
        <doi>10.14569/IJACSA.2019.0100803</doi>
        <lastModDate>2019-08-31T09:55:34.0370000+00:00</lastModDate>
        
        <creator>Babar Ali</creator>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Muhammad Ayzed Mirza</creator>
        
        <creator>Saleemullah Memon</creator>
        
        <creator>Muhammad Rashid</creator>
        
        <creator>Ekang Francis Ajebesone</creator>
        
        <subject>Multi-hop (MH); CHN; WSNs; BS; Data Fusion Technology (DFT); LEACH RP; DSEE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Wireless Sensor Networks (WSNs) comprise large numbers of miniature, low-cost sensor nodes, which sense data from their surroundings and forward it toward the base station (BS) in a multi-hop fashion through cluster head nodes (CHNs). The random selection of CHNs in WSNs is based entirely on the nodes’ residual energy. Node residual energy and network sustainability are hot research issues of the day in WSNs. The Low-Energy Adaptive Clustering Hierarchy (LEACH) routing protocol (RP) has many deficiencies owing to the rapid energy usage of ordinary nodes and CHNs caused by direct communication with the base station. This rapid draining of node energy produces a large number of holes in the network, causing retransmission of data packets, route update cost, and end-to-end (E2E) delay. In this paper, the proposed Delay Sensitive and Energy Efficient (DSEE) RP selects CHNs by considering the distance difference and the remaining energy of neighboring nodes. In this approach, data fusion technology (DFT) was applied to address the problem of data redundancy, although no specific data fusion algorithm is designed. Finally, simulation experiments demonstrated the superiority of the improved protocol, LEACH-DSEE, and we compare it with existing protocols on metrics such as node death ratio, data packet delivery ratio, and node energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_3-Study_and_Analysis_of_Delay_Sensitive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Literature Review on Medicine Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100802</link>
        <id>10.14569/IJACSA.2019.0100802</id>
        <doi>10.14569/IJACSA.2019.0100802</doi>
        <lastModDate>2019-08-31T09:55:34.0200000+00:00</lastModDate>
        
        <creator>Benjamin Stark</creator>
        
        <creator>Constanze Knahl</creator>
        
        <creator>Mert Aydin</creator>
        
        <creator>Karim Elish</creator>
        
        <subject>Medicine; recommendation systems; healthcare; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Medicine recommender systems can assist medical care providers with the selection of an appropriate medication for their patients. The advanced technologies available nowadays can help develop such recommendation systems, which can lead to more concise decisions. Many existing medicine recommendation systems are developed based on different algorithms. Thus, it is crucial to understand the state-of-the-art developments of these systems, their advantages and disadvantages, as well as the areas that require more research. In this paper, we conduct a literature review on existing solutions for medicine recommender systems, describe and compare them based on various features, and present future research directions.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_2-A_Literature_Review_on_Medicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying CRISPR-Cas9 Off-Target Editing on DNA based Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100801</link>
        <id>10.14569/IJACSA.2019.0100801</id>
        <doi>10.14569/IJACSA.2019.0100801</doi>
        <lastModDate>2019-08-31T09:55:33.9430000+00:00</lastModDate>
        
        <creator>Hong Zhou</creator>
        
        <creator>Xiaoli Huan</creator>
        
        <subject>DNA; steganography; CRISPR; Cas9; sgRNA; off-target; substitution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(8), 2019</description>
        <description>Different from cryptography which encodes data into an incomprehensible format difficult to decrypt, steganography hides the trace of data and therefore minimizes attention to the hidden data. To hide data, a carrier body must be utilized. In addition to the traditional data carriers including images, audios, and videos, DNA emerges as another promising data carrier due to its high capacity and complexity. Currently, DNA based steganography can be practiced with either biological DNA substances or digital DNA sequences. In this article, we present a digital DNA steganography approach that utilizes the CRISPR-Cas9 off-target editing such that the secret message is fragmented into multiple sgRNA homologous sequences in the genome. Retrieval of the hidden message mimics the Cas9 off-target detection process which can be accelerated by computer software. The feasibility of this approach is analyzed, and practical concerns are discussed.</description>
        <description>http://thesai.org/Downloads/Volume10No8/Paper_1-Applying_CRISPR_Cas9_Off_Target_Editing_on_DNA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Coaching: Enhancing Weightlifting and Preventing Injuries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100789</link>
        <id>10.14569/IJACSA.2019.0100789</id>
        <doi>10.14569/IJACSA.2019.0100789</doi>
        <lastModDate>2019-07-31T10:57:49.1770000+00:00</lastModDate>
        
        <creator>Ammar Yasser</creator>
        
        <creator>Doha Tariq</creator>
        
        <creator>Radwa Samy</creator>
        
        <creator>Mennat Allah Hassan</creator>
        
        <creator>Ayman Atia</creator>
        
        <subject>Weightlifting; joints; KNN; fastDTW; Naive Bayes; SVM; detecting injuries; machine learning; IR camera</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Getting injured is one of the most devastating and dangerous challenges an athlete can face, and a serious injury can end his/her athletic career. In this paper, we propose a system that automates the coaching of an athlete by using an IR camera (Microsoft Kinect Xbox 360) to detect the athlete’s misplaced joints during a lift and alert the athlete before an injury can occur. The system detects whether the lift was correct or wrong, and what kind of mistake the athlete made, using the Fast Dynamic Time Warping (FastDTW) method. FastDTW outperformed other classification methods and achieves recognition with 100% accuracy for user-dependent movements.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_89-Smart_Coaching_Enhancing_Weightlifting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Associative Classification using Automata with Structure based Merging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100788</link>
        <id>10.14569/IJACSA.2019.0100788</id>
        <doi>10.14569/IJACSA.2019.0100788</doi>
        <lastModDate>2019-07-31T10:57:49.1470000+00:00</lastModDate>
        
        <creator>Mohammad Abrar</creator>
        
        <creator>Alex Tze Hiang Sim</creator>
        
        <creator>Sohail Abbas</creator>
        
        <subject>Associative classification; automata; ranking and pruning; rules merging; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Associative Classification (AC), a combination of two important and different fields (classification and association rule mining), aims at building accurate and interpretable classifiers by means of association rules. The process used to generate association rules is exponential by nature; thus, in AC, researchers have focused on reducing redundant rules via rule pruning and rule ranking techniques. These techniques play an important part in improving efficiency; however, pruning may negatively affect accuracy by discarding interesting rules. Further, these techniques are time-consuming in terms of processing and require domain-specific knowledge to select the best ranking and pruning strategy. To overcome these limitations, this research proposes an automata-based solution that improves the classifier’s accuracy while replacing ranking and pruning. A new merging concept is introduced that uses structure-based similarity to merge association rules. The merging not only helps reduce the classifier size but also minimizes the loss of information by avoiding pruning. Extensive experiments showed that the proposed algorithm outperforms AC, Naive Bayes, and rule- and tree-based classifiers in terms of accuracy, space, and speed. The merging takes advantage of repetition in the rule set and keeps the classifier as small as possible.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_88-Associative_Classification_using_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Image Encryption using Memetic Differential Expansion based Modified Logistic Chaotic Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100787</link>
        <id>10.14569/IJACSA.2019.0100787</id>
        <doi>10.14569/IJACSA.2019.0100787</doi>
        <lastModDate>2019-07-31T10:57:49.1170000+00:00</lastModDate>
        
        <creator>Anvita Gupta</creator>
        
        <creator>Dilbag Singh</creator>
        
        <creator>Manjit Kaur</creator>
        
        <subject>Image encryption; modified logistic chaotic map; memetic differential expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this paper, the initial conditions of a modified logistic chaotic map are created with the help of memetic differential expansion. First, the color image is decomposed into its red, green, and blue channels. The essential variables of the modified logistic chaotic map are then enhanced by executing the memetic differential expansion, whose fitness operation employs the correlation coefficient and entropy function. The private keys are produced by the modified logistic chaotic map, and the encrypted image is obtained by combining the individually encrypted color channels. Extensive experiments were carried out on different standard images against previously implemented image encryption techniques. Evaluation of the outcome shows that the proposed technique gives better security and efficiency than all the previously implemented image encryption techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_87-A_Novel_Image_Encryption_using_Memetic_Differential.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Peer Robot Communications using CryptoROS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100786</link>
        <id>10.14569/IJACSA.2019.0100786</id>
        <doi>10.14569/IJACSA.2019.0100786</doi>
        <lastModDate>2019-07-31T10:57:49.1000000+00:00</lastModDate>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>Afzan Adam</creator>
        
        <creator>Roham Amini</creator>
        
        <subject>Robot communication; encryption; robot operating system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The demand for cloud robotics makes data encryption essential for peer robot communications. Certain types of data, such as odometry, action controller, and perception data, need to be secured to prevent attacks. However, introducing data encryption increases the overhead of data stream communication. This paper presents an evaluation of the CryptoROS architecture on the Robot Operating System (ROS), focusing on peer-to-peer conversations between nodes under confidentiality and integrity violations. OpenSSL is used to create a private key and generate a Certificate Signing Request (CSR) that contains the public key and a signature. The CSR is submitted to a Certificate Authority (CA) to chain the root CA certificate, and the RSA private key is encrypted with AES-256 and a passphrase. The protected private key is securely backed up, transported, and stored. Experiments were carried out multiple times, with and without the proposed protocol intervention, to assess the performance impact of the Manager. As the number of messages transmitted each time increased from 100 to 250 and 500, the performance impact was 1.7%, 0.5%, and 0.2%, respectively. It is concluded that CryptoROS is capable of protecting messages and service requests from unauthorized intentional alteration, with authenticity verification in all components.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_86-Evaluation_of_Peer_Robot_Communications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization and Analysis in Bank Direct Marketing Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100785</link>
        <id>10.14569/IJACSA.2019.0100785</id>
        <doi>10.14569/IJACSA.2019.0100785</doi>
        <lastModDate>2019-07-31T10:57:49.0830000+00:00</lastModDate>
        
        <creator>Alaa Abu-Srhan</creator>
        
        <creator>Bara’a Alhammad</creator>
        
        <creator>Sanaa Al zghoul</creator>
        
        <creator>Rizik Al-Sayyed</creator>
        
        <subject>Bank direct marketing; prediction; visualization; oversampling; Naive Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Gaining the most benefits out of a certain data set is a difficult task because it requires an in-depth investigation into its different features and their corresponding values. This task is usually achieved by presenting data in a visual format to reveal hidden patterns. In this study, several visualization techniques are applied to a bank’s direct marketing data set. The data set obtained from the UCI machine learning repository website is imbalanced. Thus, some oversampling methods are used to enhance the accuracy of the prediction of a client’s subscription to a term deposit. Visualization efficiency is tested with the oversampling techniques’ influence on multiple classifier performance. Results show that the agglomerative hierarchical clustering technique outperforms other oversampling techniques and the Naive Bayes classifier gave the best prediction results.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_85-Visualization_and_Analysis_in_Bank_Direct_Marketing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NetMob: A Mobile Application Development Framework with Enhanced Large Objects Access for Mobile Cloud Storage Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100784</link>
        <id>10.14569/IJACSA.2019.0100784</id>
        <doi>10.14569/IJACSA.2019.0100784</doi>
        <lastModDate>2019-07-31T10:57:49.0530000+00:00</lastModDate>
        
        <creator>Yunus Parvej Faniband</creator>
        
        <creator>Iskandar Ishak</creator>
        
        <creator>Fatimah Sidi</creator>
        
        <creator>Marzanah A. Jabar</creator>
        
        <subject>Mobile cloud computing; data consistency; mobile back-end as a service; mobile apps; distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Mobile enterprise applications are primarily developed for existing backend enterprise systems, and some usage scenarios require storing large files on mobile devices. These files range from large PDFs to media files in various formats (e.g., MPEG videos), and they need to be used offline, sometimes updated, and shared among users. The present work studied different Mobile Backend as a Service (MBaaS) platforms to understand the techniques used to handle large files, and found that many either lack the feature or do not handle performance issues for large files. In this paper, we propose NetMob, a mobile synchronization platform that allows resource-limited mobile devices to access large objects from the cloud. The framework focuses on large file handling and supports both table and object data models, which can be tuned for three consistency semantics, namely strong, causal, and eventual consistency. Experimental results obtained using representative workloads showed that NetMob can handle access to large files ranging in size from 100 MB up to 1 GB and is able to reduce sync time with object chunking in our experimental settings.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_84-NetMob_A_Mobile_Application_Development_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ionospheric Anomalies before the 2015 Deep Earthquake Doublet, Mw 7.5 and Mw 7.6, in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100783</link>
        <id>10.14569/IJACSA.2019.0100783</id>
        <doi>10.14569/IJACSA.2019.0100783</doi>
        <lastModDate>2019-07-31T10:57:48.9130000+00:00</lastModDate>
        
        <creator>Carlos Sotomayor-Beltran</creator>
        
        <subject>Ionospheric anomalies; earthquakes; total electron content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Two major earthquakes separated by ∼5 minutes occurred on the same fault in Peru at depths of 606.2 and 620.6 km on November 24, 2015. Using Global Ionospheric Maps (GIMs) from the Center for Orbit Determination in Europe (CODE) and a broadly used statistical method, differential Vertical Total Electron Content (VTEC) maps were derived. Two positive ionospheric anomalies were clearly identified in the differential VTEC maps 2 days and 1 day prior to the day of the earthquakes. These anomalies were located inside the earthquakes’ preparation regions defined by the Dobrovolsky equation. On the other hand, due to the low-latitude nature of the seismic events, the shape of the Equatorial Ionization Anomaly (EIA) was also analyzed. A third positive disturbance was revealed between November 20 and 21, 2015. For this anomaly and the one on November 22 (2 days before the earthquakes), an enhancement of the VTEC was observed through the considerable modification of the EIA shape into a well-defined double crest with a trough. The Dst and Kp indices show that geomagnetic conditions from November 20 until November 24 were very quiet; thus, the three detected anomalies are considered precursors to the earthquake doublet. Moreover, it is suggested that the mechanism at work producing the positive disturbances is air ionization through the release of radon from the Earth’s crust.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_83-Ionospheric_Anomalies_before_the_2015_Deep_Earthquake.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Location Privacy-Preserving Mechanisms in Mobile Crowdsourcing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100782</link>
        <id>10.14569/IJACSA.2019.0100782</id>
        <doi>10.14569/IJACSA.2019.0100782</doi>
        <lastModDate>2019-07-31T10:57:48.8970000+00:00</lastModDate>
        
        <creator>Arwa Bashanfar</creator>
        
        <creator>Eman Al-Zahrani</creator>
        
        <creator>Maram Alutebei</creator>
        
        <creator>Wejdan Aljagthami</creator>
        
        <creator>Suhari Alshehri</creator>
        
        <subject>Mobile crowdsourcing; privacy; security; location privacy-preserving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Mobile Crowdsourcing (MCS) has surfaced as a rich new method for data collection and processing as a result of the booming popularity of sensor-rich mobile devices. MCS still has room for improvement, particularly in protecting workers’ private information, such as location. Therefore, installing privacy-preserving mechanisms that insulate sensitive information and prevent attackers from obtaining it is a necessity. In this paper, we discuss location privacy threats and analyze some recently proposed mechanisms that target location privacy in mobile crowdsourcing. Finally, we compare and evaluate these mechanisms according to specific criteria that we define in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_82-A_Survey_on_Location_Privacy_Preserving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Criteria for Comparing Global Stochastic Derivative-Free Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100781</link>
        <id>10.14569/IJACSA.2019.0100781</id>
        <doi>10.14569/IJACSA.2019.0100781</doi>
        <lastModDate>2019-07-31T10:57:48.8800000+00:00</lastModDate>
        
        <creator>Jonathan McCart</creator>
        
        <creator>Ahmad Almomani</creator>
        
        <subject>Derivative-free optimization; algorithm comparison; test problem benchmarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>For many situations, the function that best models a situation or data set can have a derivative that is difficult or impossible to find, leading to difficulties in obtaining information about the optimal values of the function. Thus, numerical methods for finding these important values without the direct involvement of the derivative have been developed, making the representation and interpretation of the results of these algorithms important to the researchers using them. This is the motivation to use and compare derivative-free optimization (DFO) algorithms. The comparison methods developed in this paper were tested using three global solvers: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Simulated Annealing (SA), on a set of 26 n-dimensional test problems of varying convexity, continuity, differentiability, separability, and modality. Each solver was run 100 times per problem at 2, 20, 50, and 100 dimensions. The formulation of each algorithm comes from the MATLAB Optimization Toolbox, used without edits or revisions. New criteria for comparing DFO solver performance are introduced, defined in terms of Speed, Accuracy, and Efficiency, taken at different levels of precision and dimensionality. The numerical results for these benchmark problems are analyzed using these methods.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_81-New_Criteria_for_Comparing_Global_Stochastic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Concerns in Online Social Networks: A Users’ Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100780</link>
        <id>10.14569/IJACSA.2019.0100780</id>
        <doi>10.14569/IJACSA.2019.0100780</doi>
        <lastModDate>2019-07-31T10:57:48.8500000+00:00</lastModDate>
        
        <creator>Ahmad Ali</creator>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <creator>Mansoor Ahmed</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Muhammad Ilyas</creator>
        
        <subject>Online social networks; information security; privacy; social networking; attribute disclosure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Social networking has elevated human life to new heights of interaction, response, and content sharing, and has been offering state-of-the-art facilities to its users for a long time. Although these systems have matured considerably over time, alongside the benefits there remain multiple user concerns regarding privacy and information security. The multidimensional threat spectrum facing the Internet also extends to social networking tools. A lot of work is being done to understand privacy concerns in social networks. In this scenario, a survey of privacy concerns in online social networks is conducted. Risks, privacy issues, and threats that have occurred in recent years are highlighted, with a focus on analyzing attackers’ targets, their methods of attack, and the measures taken to counter and manage these threats. A social network depends on the user, the social network site or application, and the communication medium provider, i.e., the Internet facility. Existing research contains domain-specific work on privacy issues in social networks; however, comprehensive research covering the overall infrastructure of online social networks is missing. The development of a taxonomy of threats and a categorization of frauds relevant to social networks is an important contribution of this survey. After completing a comprehensive research survey on privacy concerns in online social networks, a set of privacy guidelines is provided and open research challenges are highlighted.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_80-Privacy_Concerns_in_Online_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unmanned Ground Vehicle with Stereoscopic Vision for a Safe Autonomous Exploration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100779</link>
        <id>10.14569/IJACSA.2019.0100779</id>
        <doi>10.14569/IJACSA.2019.0100779</doi>
        <lastModDate>2019-07-31T10:57:48.8330000+00:00</lastModDate>
        
        <creator>Jes&#250;s Jaime Moreno-Escobar</creator>
        
        <creator>Oswaldo Morales-Matamoros</creator>
        
        <creator>Ricardo Tejeida-Padilla</creator>
        
        <subject>Unmanned ground vehicle; stereoscopic vision; computer vision; autonomous exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>At present, there are several systems in cars that provide assistance to the driver, and the tendency is for these systems to become increasingly efficient and to operate without requiring driver intervention. Computer vision is relevant in this sector due to the contribution it provides; for example, image-processing algorithms are designed for object detection, pedestrian detection, traffic signal detection, lane tracking, and parking assistance. In this work, we present an Unmanned Ground Vehicle system based on computer vision, specifically stereoscopic vision: through a sensor we obtain a disparity map that allows us to quantify the depth of each point of the captured image. The system captures an image of its environment and, through internal processing of the sensor, returns a disparity map, which is processed by an algorithm that allows the system to navigate the region of space in which it is positioned for autonomous exploration.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_79-Unmanned_Ground_Vehicle_with_Stereoscopic_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach based on Machine Learning for Short-Term Mortality Prediction in Neonatal Intensive Care Unit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100778</link>
        <id>10.14569/IJACSA.2019.0100778</id>
        <doi>10.14569/IJACSA.2019.0100778</doi>
        <lastModDate>2019-07-31T10:57:48.8200000+00:00</lastModDate>
        
        <creator>Zaineb Kefi</creator>
        
        <creator>Kamel Aloui</creator>
        
        <creator>Mohamed Saber Naceur</creator>
        
        <subject>Mortality prediction; neonates; Intensive Care Units; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Mortality remains one of the most important outcomes to predict in Intensive Care Units (ICUs). In fact, the sooner mortality is predicted, the better the critical decisions doctors can make based on a patient’s illness severity. In this paper, a new approach based on Machine Learning (ML) techniques for short-term mortality prediction in the Neonatal Intensive Care Unit (NICU) is proposed. This approach relies on several steps. First, relevant features are selected from data available upon neonates’ admission and from the time-series variables collected within the first two hours of stay in the NICU, drawn from the Medical Information Mart for Intensive Care III (MIMIC-III). After that, to predict mortality, several classifiers were tested: Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Classification and Regression Trees (CART), Logistic Regression (LR), Support Vector Machine (SVM), Na&#239;ve Bayes (NB), and Random Forest (RF). The experimental results showed that LDA was the best-performing classifier, with an accuracy of 0.947 and an AUROC of 0.97 using 31 features. The third step of this approach is mortality time prediction using the Galaxy-Random Forest method, achieving an F-score of 0.871. The proposed approach compared favorably in terms of time, accuracy, and AUROC with existing scoring systems and ML techniques. It is the first work to predict neonatal mortality based on ML techniques and time-series data after only two hours of admission to the NICU.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_78-New_Approach_based_on_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graduation Certificate Verification Model: A Preliminary Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100777</link>
        <id>10.14569/IJACSA.2019.0100777</id>
        <doi>10.14569/IJACSA.2019.0100777</doi>
        <lastModDate>2019-07-31T10:57:48.7870000+00:00</lastModDate>
        
        <creator>Omar S Sale</creator>
        
        <creator>Osman Ghazali</creator>
        
        <creator>Qusay Al Maatouk</creator>
        
        <subject>Graduation certificate verification; graduation certificate authentication; graduation certificate forgery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Graduation certificates issued by universities and other educational institutions are among the most important documents for a graduate. A certificate is proof of a graduate’s qualifications and can be used to advance in one’s career and life. However, due to advances in software, printing, and photocopying technologies, forgery of these certificates has become easy, and forgeries can be as good as the original, making them difficult to detect. Several universities, educational institutions, and businesses have started to dedicate resources to verifying certificates; however, this is usually a tedious and quite costly process, and there is no clear model that those institutions could adopt to minimize cost and speed up the process. Many techniques have been proposed for paper-based document verification, and this paper analyzes and expatiates on the issues with those techniques. Most verification techniques require a change in the process of certificate generation, whether by changing the template, paper, or printers, adding hardware, or adding extra information. Such a change may mean that the university or verifier needs specific knowledge to execute and run the proposed technique, and that older certificates may not work with the newly introduced techniques. In addition, some proposed techniques require a change that is not always easy or cheap, such as creating a third body to verify certificates.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_77-Graduation_Certificate_Verification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS Analysis to Optimize the Indoor Network IEEE 802.11 at UNTELS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100776</link>
        <id>10.14569/IJACSA.2019.0100776</id>
        <doi>10.14569/IJACSA.2019.0100776</doi>
        <lastModDate>2019-07-31T10:57:48.7730000+00:00</lastModDate>
        
        <creator>Jhon S Acu&#241;a-Aroni</creator>
        
        <creator>Brayan W. Alvarado-Gomez</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Access point; QoS; coverage; indoor; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This paper arose from the need to improve mobility and connectivity for network users of the Universidad Nacional Tecnol&#243;gica de Lima Sur and to address quality of service (QoS) problems such as signal intermittence, high latency, low received power, low coverage, and equipment that does not support heavy data traffic or exceeds its limit of connected users. An analysis is presented for the optimization of IEEE 802.11 wireless QoS in indoor environments. Coverage data were collected from the different equipment based on reception power measurements made with the Wi-Fi Network Analyzer software; the ideal approach to new coverage areas was mapped; and access points were located in areas with a high density of students, administrative personnel, and teaching staff through coverage simulations performed with the NETSPOT software. The results obtained show that the current design of the wireless network suffers interference from equipment configured on the same and adjacent transmission channels, as well as insufficient radiated power coverage for indoor environments due to the poor location and choice of access point models. When these are replaced with access points that support a high density of simultaneously connected users per device and are suitable for indoor environments, with technologies such as MU-MIMO and ARM, the results are favorable and optimized.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_76-QoS_Analysis_to_Optimize_the_Internal_IEEE_802_11_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-intrusive Driver Drowsiness Detection based on Face and Eye Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100775</link>
        <id>10.14569/IJACSA.2019.0100775</id>
        <doi>10.14569/IJACSA.2019.0100775</doi>
        <lastModDate>2019-07-31T10:57:48.7400000+00:00</lastModDate>
        
        <creator>Ameen Aliu Bamidele</creator>
        
        <creator>Kamilia Kamardin</creator>
        
        <creator>Nur Syazarin Natasha Abd Aziz</creator>
        
        <creator>Suriani Mohd Sam</creator>
        
        <creator>Irfanuddin Shafi Ahmed</creator>
        
        <creator>Azizul Azizan</creator>
        
        <creator>Nurul Aini Bani</creator>
        
        <creator>Hazilah Mad Kaidi</creator>
        
        <subject>Driver Drowsiness Detection (DDD); face tracking; eye tracking; K-nearest Neighbors (KNN); Support Vector Machine (SVM); Logistic Regression; Artificial Neural Networks (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The rate of annual road accidents attributed to drowsy driving is significantly high. Due to this, researchers have proposed several methods aimed at detecting drivers’ drowsiness. These methods include subjective, physiological, behavioral, vehicle-based, and hybrid methods. However, recent reports on road safety still indicate drowsy driving as a major cause of road accidents. This is plausible because current driver drowsiness detection (DDD) solutions are either intrusive or expensive, thus hindering their ubiquity. This research serves to bridge this gap by providing a test-bed for achieving a non-intrusive and low-cost DDD solution. A behavioral DDD solution is proposed based on tracking the face and eye state of the driver, with the aim of making this research a starting point toward DDD pervasiveness. To achieve this, the National Tsing Hua University (NTHU) Computer Vision Lab’s driver drowsiness detection video dataset was utilized. Several video and image processing operations were performed on the videos to detect the drivers’ eye state. From the eye states, three important drowsiness features were extracted: percentage of eyelid closure (PERCLOS), blink frequency (BF), and Maximum Closure Duration (MCD) of the eyes. These features were then fed as inputs into several machine learning models for drowsiness classification. Models based on the K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Logistic Regression, and Artificial Neural Network (ANN) algorithms were experimented with. These models were evaluated by calculating their accuracy, sensitivity, specificity, miss rate, and false alarm rate. Although all five metrics were evaluated, the focus was on obtaining optimal accuracies and miss rates. The results show that the best models were a KNN model with k = 31 and an ANN model that used an Adadelta optimizer with a 3-hidden-layer network of 3, 27, and 9 neurons, respectively. The KNN model obtained an accuracy of 72.25% with a miss rate of 16.67%, while the ANN model obtained an accuracy of 71.61% and a miss rate of 14.44%.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_75-Non_Intrusive_Driver_Drowsiness_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simultaneous Stream Transmission Methods for Free Viewpoint TV: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100774</link>
        <id>10.14569/IJACSA.2019.0100774</id>
        <doi>10.14569/IJACSA.2019.0100774</doi>
        <lastModDate>2019-07-31T10:57:48.7270000+00:00</lastModDate>
        
        <creator>Mudassar Hussain</creator>
        
        <creator>Abdurahman Hassan A Alhazmi</creator>
        
        <creator>Rashid Amin</creator>
        
        <creator>Muhammad Almas Anjum</creator>
        
        <creator>Ali Tahir</creator>
        
        <subject>Free viewpoint TV; comparison of stream switching methods; video stream switching; fast switching; simultaneous transmissions; linear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Free Viewpoint TV is a system for viewing natural videos that allows users to control the viewpoint interactively. The main idea is that users can switch between multiple video streams to find viewpoints of their own choice. The purpose of this research is to provide fast switching between video streams so that users experience less delay while switching viewpoints. In this paper, we discuss different stream switching methods in detail, including their transmission issues. In addition, we discuss various scenarios for fast stream switching in order to make services more interactive by minimizing delays. Quality of service is another factor that can be improved by assigning priorities to packets. We also discuss simultaneous stream transmission methods that are based on predictions and reduced-quality streams for fast switching. Finally, we propose a prediction algorithm (linear regression) and a system model for fast viewpoint switching, and evaluate simultaneous stream transmission methods for Free Viewpoint TV. The results indicate that the proposed system model improves viewpoint switching and performs fast switching.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_74-Simultaneous_Stream_Transmission_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Evaluation of Massive Open Online Course (MOOC) as a Supplementary Learning Tool: An Initial Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100773</link>
        <id>10.14569/IJACSA.2019.0100773</id>
        <doi>10.14569/IJACSA.2019.0100773</doi>
        <lastModDate>2019-07-31T10:57:48.7100000+00:00</lastModDate>
        
        <creator>Husna Hafiza R Azami</creator>
        
        <creator>Roslina Ibrahim</creator>
        
        <subject>MOOC; development; usability evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Massive Open Online Courses (MOOCs) are popular among researchers and practitioners as a new paradigm of open educational resources. Since the development of this technology may entail enormous investment, it is critical for institutions to clearly plan the process of designing, developing, and evaluating MOOCs that fulfill the needs of target users while keeping the investment to a minimum. Evaluation plays a vital role in ensuring that the developed product meets users’ satisfaction. This study presents the process of developing a MOOC as a supplementary learning tool for students in a higher education institution and its usability evaluation, which are rarely discussed in detail in the prior literature. Evaluation was done through a questionnaire whose items were adapted from the Computer System Usability Questionnaire (CSUQ). The MOOC development process in this research, which was based on the ADDIE (Analysis, Design, Development, Implementation and Evaluation) model, and the MOOC usability evaluation results enrich the existing literature on MOOCs. Overall, findings showed that users were satisfied with the developed MOOC, with most items gaining high mean scores above 4.00. When respondents were asked to comment on the strengths of the MOOC, the most prominent one turned out to be the MOOC’s ability to make students’ learning easier.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_73-Development_and_Evaluation_of_Massive_Open_Online_Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Red and Healthy Eyes using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100772</link>
        <id>10.14569/IJACSA.2019.0100772</id>
        <doi>10.14569/IJACSA.2019.0100772</doi>
        <lastModDate>2019-07-31T10:57:48.6770000+00:00</lastModDate>
        
        <creator>Sherry Verma</creator>
        
        <creator>Latika Singh</creator>
        
        <creator>Monica Chaudhry</creator>
        
        <subject>Bulbar conjunctiva; hyperemia; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The eye is one of the most vital organs of the human body. Despite its small size, humans cannot see the life around them without it. The human eye is protected by a thin covering termed the conjunctiva, which protects the eye from dust particles and acts as a lubricant that prevents friction when opening and closing the eye. Broadly, there are two kinds of conjunctiva: bulbar and palpebral. The membrane covering the inner portion of the eyelids is termed the palpebral conjunctiva, and the one covering the outside portion of the eye is called the bulbar conjunctiva (the white portion of the eye). Due to the dilation of blood vessels, the white portion of the eye, also termed the sclera, becomes red in color. This condition is termed hyperemia. The study of this development is vital in the diagnosis of various pathologies; it could be the result of trauma, injury, or other eye-related diseases, which need to be identified for timely treatment. An enormous number of studies have been conducted on the structure and functionality of the human eye. This paper highlights the work done so far on measuring the level of redness in the eye using various methodologies, ranging from statistical approaches to machine learning techniques, and proposes a methodology using MATLAB and a convolutional neural network to automate this evaluation process.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_72-Classifying_Red_and_Healthy_Eyes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Shopping Engines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100771</link>
        <id>10.14569/IJACSA.2019.0100771</id>
        <doi>10.14569/IJACSA.2019.0100771</doi>
        <lastModDate>2019-07-31T10:57:48.6630000+00:00</lastModDate>
        
        <creator>Ghizlane LAGHMARI</creator>
        
        <creator>Sanae KHALI ISSA</creator>
        
        <creator>M’hamed AIT KBIR</creator>
        
        <subject>Price comparators; shopbots; e-consumer; online consumption; shopping engines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Since the stimulation of both the feeling of need and the feeling of temptation has become excessive with the spread of Internet advertising, e-consumers have begun to feel increasingly lost and overwhelmed by offers in a purchasing cycle whose process is mostly unstructured, unguided, and unassisted, or, in other words, non-user-friendly. As a result, they display a confused and suspicious attitude and turn desperately to comparison shopping engines (CSEs) to save time and identify the offer that best matches their search request. This article therefore serves as an investigation of comparison shopping engines to determine whether they are up to the task of satisfying the needs of the e-consumer. The study adopts an exploratory approach to the history of online shopping engines, their operating modes, categories, and business plans, as well as how they are perceived, used, and evaluated. A detailed identification of the various shortcomings that CSEs manifest on the side of both e-consumers and e-merchants is then presented, in order to eventually discuss the numerous innovations and scientific research that have been developed on the subject.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_71-Comparison_Shopping_Engines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Fusion: H-ELM based Learned Features and Hand-Crafted Features for Human Activity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100770</link>
        <id>10.14569/IJACSA.2019.0100770</id>
        <doi>10.14569/IJACSA.2019.0100770</doi>
        <lastModDate>2019-07-31T10:57:48.6300000+00:00</lastModDate>
        
        <creator>Nouar AlDahoul</creator>
        
        <creator>Rini Akmeliawati</creator>
        
        <creator>Zaw Zaw Htike</creator>
        
        <subject>Hierarchical extreme learning machine; kernel extreme learning machine; deep learning; feature learning; human activity recognition; feature fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Recognizing human activities is one of the main goals of human-centered intelligent systems. Smartphone sensors produce a continuous sequence of observations that are noisy, unstructured, and high-dimensional. Therefore, efficient features have to be extracted in order to perform an accurate classification. This paper proposes a combination of Hierarchical and Kernel Extreme Learning Machine (HK-ELM) methods to learn features and map them to specific classes in a short time. Moreover, a feature fusion approach is proposed to combine H-ELM-based learned features with hand-crafted ones. Our proposed method was found to outperform the state of the art in terms of accuracy and training time, achieving an accuracy of 97.62% with a training time of 3.4 seconds on a normal Central Processing Unit (CPU).</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_70-Feature_Fusion_H_ELM_based_Learned_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Assistive Learning Technologies with Experimental Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100769</link>
        <id>10.14569/IJACSA.2019.0100769</id>
        <doi>10.14569/IJACSA.2019.0100769</doi>
        <lastModDate>2019-07-31T10:57:48.6170000+00:00</lastModDate>
        
        <creator>Gede Pramudya</creator>
        
        <creator>Aliza Che Amran</creator>
        
        <creator>Muhammad Suyanto</creator>
        
        <creator>Siti Nur Azrreen Ruslan</creator>
        
        <creator>Helmi Adly Mohd Noor</creator>
        
        <creator>Zuraida Abal Abas</creator>
        
        <subject>Assistive learning technology; disabilities; experimental design; mathematical learning; serious games; autism; visual perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Assistive learning technologies are generally computer-based instruments aimed at supporting individuals with disabilities in enhancing their learning sessions with minimal intervention from parents, guardians, and helpers. Assessments using experimental research designs have frequently been utilized to evaluate their efficacy and feasibility. An experimental design is characterized by the experimental units, the treatments to apply, the hypotheses that are tested, and the way treatments are assigned to units. The experimental or treatment units need a sufficient number of representative respondents or samples. Even so, due to the limited number of sample units or respondents, this type of experiment proves to be a subtle yet challenging experience. Drawing on our substantial experience, this article attempts to disclose such precious research experiences.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_69-Assessing_Assistive_Learning_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Behavioral Study of Task Scheduling Algorithms in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100768</link>
        <id>10.14569/IJACSA.2019.0100768</id>
        <doi>10.14569/IJACSA.2019.0100768</doi>
        <lastModDate>2019-07-31T10:57:48.6000000+00:00</lastModDate>
        
        <creator>Mohammad Riyaz Belgaum</creator>
        
        <creator>Shahrulniza Musa</creator>
        
        <creator>M. S. Mazliham</creator>
        
        <creator>Muhammad Alam</creator>
        
        <subject>Cloud computing; load balancing; service broker policy; scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>All the services offered by cloud computing are bundled into one service known as IT as a Service (ITaaS). The users’ processes are executed using these services. The scheduling techniques used in the cloud computing environment execute tasks at different datacenters considering the needs of the consumers. As requirements vary from one consumer to another, priorities also change. Jobs are executed in either a preemptive or non-preemptive way. Tasks in cloud computing also migrate from one datacenter to another for load balancing. This research mainly focuses on studying how the Round Robin (RR) and Throttled (TR) scheduling techniques function subject to the different tasks given for processing. An analysis is carried out to measure performance based on metrics such as response time and service time at different user bases and datacenters. The consumers have the option to select the service broker policy, as they are the ultimate users and payers.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_68-A_Behavioral_Study_of_Task_Scheduling_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cas-GANs: An Approach of Dialogue Policy Learning based on GAN and RL Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100766</link>
        <id>10.14569/IJACSA.2019.0100766</id>
        <doi>10.14569/IJACSA.2019.0100766</doi>
        <lastModDate>2019-07-31T10:57:48.5700000+00:00</lastModDate>
        
        <creator>Muhammad Nabeel</creator>
        
        <creator>Adnan Riaz</creator>
        
        <creator>Wang Zhenyu</creator>
        
        <subject>Generative Adversarial Networks (GANs); Graph Convolutional Network (GCN); Reinforcement Learning (RL); Dialogue policy learning; Maximum Log-Likelihood (MLL)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Dialogue management systems are commonly applied in daily life, such as in online shopping, hotel booking, and ride booking. An efficient dialogue management policy helps systems respond to the user in an effective way. Policy learning is a complex task in building a dialogue system. Different approaches have been proposed in the last decade to build goal-oriented dialogue agents and train systems with an efficient policy. The Generative Adversarial Network (GAN) has been used for dialogue generation in previous works to build dialogue agents by selecting the optimal learned policy. Efficient dialogue policy learning aims to improve the fluency and diversity of generated dialogues. Reinforcement learning (RL) algorithms are used to optimize the policies because the sequences are discrete. In this study, we propose a new technique called Cascade Generative Adversarial Network (Cas-GAN), a combination of GAN and RL for dialogue generation. Cas-GAN can model the relations between dialogues (sentences) by using Graph Convolutional Networks (GCN). The graph consists of different high-level and low-level nodes representing the vertices and edges of the graph. We then use the maximum log-likelihood (MLL) approach to train the parameters and choose the best nodes. The experimental results were compared with HRL and RL agents, and we obtained state-of-the-art results.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_66-Cas_GANs_An_Approach_of_Dialogue_Policy_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritization of Software Functional Requirements: Spanning Tree based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100767</link>
        <id>10.14569/IJACSA.2019.0100767</id>
        <doi>10.14569/IJACSA.2019.0100767</doi>
        <lastModDate>2019-07-31T10:57:48.5700000+00:00</lastModDate>
        
        <creator>Muhammad Yaseen</creator>
        
        <creator>Aida Mustapha</creator>
        
        <creator>Noraini Ibrahim</creator>
        
        <subject>Requirements prioritization; functional requirements; spanning tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Requirements prioritization plays a significant role in the effective implementation of requirements. Prioritizing requirements is not an easy process, particularly when the number of requirements is large. Current prioritization methods face limitations because existing techniques for functional requirements rely on the responses of stakeholders instead of prioritizing requirements on the basis of internal dependencies of one requirement on others. Moreover, there is a need to classify requirements on the basis of their importance, i.e., how much they are needed by other requirements or depend on other requirements. Requirements are first represented with spanning trees and then prioritized. The suggested spanning tree-based approach is evaluated on the requirements of the ODOO ERP. Requirements are assigned to four developers, and time estimates with and without prioritization are calculated. The difference between the two estimates shows the significance of prioritizing functional requirements.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_67-Prioritization_of_Software_Functional_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Seamless Connectivity for Adaptive Multimedia Provisioning over P2P-enabled IP Multimedia Subsystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100765</link>
        <id>10.14569/IJACSA.2019.0100765</id>
        <doi>10.14569/IJACSA.2019.0100765</doi>
        <lastModDate>2019-07-31T10:57:48.5370000+00:00</lastModDate>
        
        <creator>Adnane Ghani</creator>
        
        <creator>El Hassan Ibn Elhaj</creator>
        
        <creator>Ahmed Hammouch</creator>
        
        <creator>Abdelaali Chaoub</creator>
        
        <subject>Next generation networks; NS2; quality adaptation; scalable video coding; peer to peer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The IP Multimedia Subsystem (IMS) has been upgraded to support peer-to-peer content distribution services. Peer heterogeneity imposes the challenge of guaranteeing a certain level of video QoS for the different peers, and handover poses a challenge for maintaining quality of service (QoS) in IMS. In this article, we extend a Scalable Video Coding (SVC) peer-to-peer streaming adaptation model. Our extension adds another parameter to this adaptation scheme, the IP address of the peers, in order to manage multiple network accesses for a peer when the user changes access type due to a handover or a loss of connectivity. This model is used to adapt the video quality to the static resources of the peers in order to avoid long start-up times. To compare the results obtained when changing the network access type, we performed two simulation scenarios: one with multiple peers that remain connected to the network until the end of the video download, and another in which the address of a peer changes while the peers are downloading the video sequence. We quantified streaming performance using two evaluation metrics, peak signal-to-noise ratio (PSNR) and the video quality metric (VQM), and also extracted packet loss rate (PLR) values. Our results show that our model better adapts the quality to the network resources of the peers, in terms of bandwidth available in the network and the capabilities of the user devices (CPU, RAM, battery autonomy), and also allows continuity of service in the network by ensuring that the list of peers is updated after each change. The results show a clear quality adaptation with heterogeneous terminals.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_65-Seamless_Connectivity_for_Adaptive_Multimedia_Provisioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristic Evaluation of Serious Game Application for Slow-reading Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100764</link>
        <id>10.14569/IJACSA.2019.0100764</id>
        <doi>10.14569/IJACSA.2019.0100764</doi>
        <lastModDate>2019-07-31T10:57:48.5230000+00:00</lastModDate>
        
        <creator>Saffa Raihan Zainal Abidin</creator>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <subject>Serious game; brain-based learning; heuristic evaluation; literacy skills; slow-reading students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Preliminary studies found that conventional approaches were still relevant, but students, especially slow-reading students (SRS), showed weak to moderate interest and quickly lost focus compared with technology-based approaches such as serious games. Most teachers use interventions that are not specifically designed to help SRS; they usually use teaching aids below the literacy level of the SRS. Therefore, an easy and user-friendly game application called “Mari Membaca” (M2M) was developed. The objective is to ensure the application is free from design and interface problems by applying expert-based usability evaluation techniques such as heuristic evaluation. This paper reports the heuristic evaluation of M2M for SRS by expert evaluators, including remedial teachers and game developers. The study adopted the ten Usability Heuristics and the seven brain-compatible instructional phases of brain-based learning in the questionnaire. Overall, 14 of the 17 domains scored above average (3.41-5.00), while one domain was neutral (2.61-3.40). Several comments and feedback from the experts were essential for further improvement of the game application to ensure it meets user requirements and expectations.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_64-Heuristic_Evaluation_of_Serious_Game_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Tool for C++ Header Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100763</link>
        <id>10.14569/IJACSA.2019.0100763</id>
        <doi>10.14569/IJACSA.2019.0100763</doi>
        <lastModDate>2019-07-31T10:57:48.5070000+00:00</lastModDate>
        
        <creator>Patrick Hock</creator>
        
        <creator>Koichi Nakayama</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Development; C++; header file generation; feature generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This paper presents a novel approach in the field of C++ development for increasing performance by reducing cognitive overhead and complexity, which results in lower costs. C++ code is split into header and cpp files. This split induces code redundancy. In addition, there are commonly used features for classes in C++ that are not supported by recent compilers. The developer must maintain two different files for one single content and implement unsupported features by hand. This leads to unnecessary cognitive overhead and complex sources. The result is low development performance and high development cost. Our approach utilizes an enhanced syntax inside cpp files. It allows header file generation and therefore obsoletes the need to maintain a header file. It also enables the generation of features/methods for classes. It aims to decrease cognitive overhead and complexity, so developers can focus on more sophisticated tasks. This leads to increased performance and lower costs.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_63-A_Tool_for_C_Header_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Smart City Framework for Future Industrial City in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100762</link>
        <id>10.14569/IJACSA.2019.0100762</id>
        <doi>10.14569/IJACSA.2019.0100762</doi>
        <lastModDate>2019-07-31T10:57:48.4900000+00:00</lastModDate>
        
        <creator>Julius Galih Prima Negara</creator>
        
        <creator>Andi W. R. Emanuel</creator>
        
        <subject>Smart city; industrial city; smart industrial city; framework; Kulonprogo District</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In Indonesia, the growth of big cities and industrial cities poses many challenges. To face these challenges, policy makers can apply the concept of smart cities. This paper analyzes studies that discuss prospective industrial city planning from a smart city perspective. The research draws on existing research, models, frameworks, and tools concerning IoT, smart cities, and industrial cities, and provides up-to-date insight into smart city frameworks for industrial cities. This study identifies the pillars that form a smart city for industrial cities. The framework can also be used by governments such as that of Kulonprogo District in the Special Region of Yogyakarta, Indonesia, in preparation for transforming into a smart industrial city. The latest use of information technology in this concept, along with prioritized implementation steps, is recommended.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_62-A_Conceptual_Smart_City_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Algorithm for Maximal Clique Size Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100761</link>
        <id>10.14569/IJACSA.2019.0100761</id>
        <doi>10.14569/IJACSA.2019.0100761</doi>
        <lastModDate>2019-07-31T10:57:48.4600000+00:00</lastModDate>
        
        <creator>Ubaida Fatima</creator>
        
        <creator>Saman Hina</creator>
        
        <subject>Centrality measures; network analysis; maximal clique size</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>A large network dataset is considered for the computation of maximal clique size (MC). In addition, its link with popular centrality metrics is investigated to decrease uncertainty and complexity and to find influential points of any network. Previous studies focus on centrality metrics such as degree centrality (DC), closeness centrality (CC), betweenness centrality (BC), and eigenvector centrality (EVC) and compare them with maximal clique size; in this study, the Katz centrality measure is also considered and shows a fairly robust relation with maximal clique size (MC). Secondly, the maximal clique size (MC) algorithm is revised for network analysis to avoid computational complexity. The association between MC and the five centrality metrics is evaluated through recognized methods, namely Pearson’s correlation coefficient (PCC), Spearman’s correlation coefficient (SCC), and Kendall’s correlation coefficient (KCC). A strong association between them is observed through all three correlation coefficient measures.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_61-Efficient_Algorithm_for_Maximal_Clique_Size.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Power and High Reliable Triple Modular Redundancy Latch for Single and Multi-node Upset Mitigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100760</link>
        <id>10.14569/IJACSA.2019.0100760</id>
        <doi>10.14569/IJACSA.2019.0100760</doi>
        <lastModDate>2019-07-31T10:57:48.4430000+00:00</lastModDate>
        
        <creator>S Satheesh Kumar</creator>
        
        <creator>S Kumaravel</creator>
        
        <subject>Multiple Event Transient (MET); Single Event Upset (SEU); Single Event Transient (SET); Radiation hardening; Reliability; Transient fault; Triple Modular Redundancy (TMR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>CMOS-based circuits are more susceptible to the radiation environment as the critical charge (Qcrit) decreases with technology scaling. A single ionizing radiation particle is more likely to upset the sensitive nodes of the circuit and cause a Single Event Upset (SEU). Consequently, hardening latches against transient faults at control inputs, due to either single or multi-node upsets, is increasingly important. This paper proposes a Fully Robust Triple Modular Redundancy (FRTMR) latch. In the FRTMR latch, a novel majority voter circuit is proposed with a minimum number of sensitive nodes, making it highly immune to single and multi-node upsets. The proposed latch is implemented using a CMOS 45 nm process and simulated in the Cadence Spectre environment. Results demonstrate that the proposed latch achieves 17.83% lower power and 13.88% lower area compared to the existing Triple Modular Redundancy (TMR) latch. The current induced by transient fault occurrences at various sensitive nodes is modeled with a double exponential current source for circuit simulation, with a minimum threshold current value of 40 &#181;A.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_60-Low_Power_and_High_Reliable_Triple_Modular_Redundancy_Latch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Integrated Model in Agile Software Development (MDSIC/MDSIC–M)-Case Study and Practical Advice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100759</link>
        <id>10.14569/IJACSA.2019.0100759</id>
        <doi>10.14569/IJACSA.2019.0100759</doi>
        <lastModDate>2019-07-31T10:57:48.4130000+00:00</lastModDate>
        
        <creator>Jos&#233; L Cendejas Valdez</creator>
        
        <creator>Heberto Ferreira Medina</creator>
        
        <creator>Gustavo A. Vanegas Contreras</creator>
        
        <creator>Griselda Cortes Morales</creator>
        
        <creator>Alfonso Hiram Ginori Gonz&#225;lez</creator>
        
        <subject>Agile methodology; development Software (web–mobile); MDSIC / MDSIC – M; quality assurance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The rapid increase in mobile device users, driven by wider and easier internet access, has triggered the development of mobile applications (apps) and the web. Improvement and innovation have therefore become a top priority for businesses and consumer relations. The functional quality and interface aspects of applications (software) drive companies to succeed in mobile app market competition. This paper introduces an agile software development methodology denominated MDSIC and MDSIC-M, focused on the rapid application development required by small and medium software enterprises (SMEs), resulting in better quality and competitiveness. MDSIC and MDSIC-M propose several levels of best practices that should be followed in software development projects. This article also aims to show matching indicators and results of MDSIC and MDSIC-M implementations in software projects, by assessing the parameters needed to generate quality software and thus align technology with the goals of the organizations.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_59-Collaborative_Integrated_Model_in_Agile_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Boosted Constrained K-Means Algorithm for Social Networks Circles Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100758</link>
        <id>10.14569/IJACSA.2019.0100758</id>
        <doi>10.14569/IJACSA.2019.0100758</doi>
        <lastModDate>2019-07-31T10:57:48.3970000+00:00</lastModDate>
        
        <creator>Intisar M Iswed</creator>
        
        <creator>Yasser F. Hassan</creator>
        
        <creator>Ashraf S. Elsayed</creator>
        
        <subject>Constrained clustering; boosting; social networks; k-means; kernel matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The volume of information generated by the huge number of social network users is increasing every day. Social network analysis has gained intensive attention in the data mining research community as a means to identify circles of users based on the characteristics in individual profiles or the structure of the network. In this paper, we apply the boosting principle to find the circles of social networks. The constrained k-means clustering method is used as a weak learner within the boosting framework. This method generates a constrained clustering represented by a kernel matrix according to the priorities of the pair-wise constraints. The experimental results show that the proposed boosting-based algorithm for social network analysis improves clustering performance and outperforms the state-of-the-art.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_58-Boosted_Constrained_K_Means_Algorithm_for_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Let’s Code: A Kid-friendly Interactive Application Designed to Teach Arabic-speaking Children Text-based Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100757</link>
        <id>10.14569/IJACSA.2019.0100757</id>
        <doi>10.14569/IJACSA.2019.0100757</doi>
        <lastModDate>2019-07-31T10:57:48.3670000+00:00</lastModDate>
        
        <creator>Tahani Almanie</creator>
        
        <creator>Shorog Alqahtani</creator>
        
        <creator>Albatoul Almuhanna</creator>
        
        <creator>Shatha Almokali</creator>
        
        <creator>Shaima Guediri</creator>
        
        <creator>Reem Alsofayan</creator>
        
        <subject>Edutainment; mobile application; interactive application; Arabic; children; programming; coding for kids; Python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Programming is the cornerstone for the development of all of the technologies we encounter in our daily lives. It also plays an important role in enhancing creativity, problem-solving, and logical thinking. Due to the importance of programming in combination with the shortage of Arabic content that aims to teach children programming, we decided to develop Let’s Code, an interactive mobile-based application designed for Arabic-speaking children from 8 to 12 years old. The application focuses on the basics of programming such as data types, variables, and control structures using the Python programming language through a simple, attractive, and age-appropriate design. The application presents its users with an interesting storyline that involves a trip to space with “Labeeb”, a robot character designed to explain programming concepts to the child throughout the trip. Each planet represents a level in the application and introduces a programming concept through a set of lessons and exercises. The application can be used by educational institutions and parents to teach programming and will provide an opportunity through which Arabic-speaking children can keep up with the development and dissemination of technology.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_57-Lets_Code_A_Kid_Friendly_Interactive_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming Service Delivery with TOGAF and Archimate in a Government Agency in Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100756</link>
        <id>10.14569/IJACSA.2019.0100756</id>
        <doi>10.14569/IJACSA.2019.0100756</doi>
        <lastModDate>2019-07-31T10:57:48.3500000+00:00</lastModDate>
        
        <creator>Jorge Valenzuela Posadas</creator>
        
        <subject>Enterprise architecture; business architecture; digital government; The Open Group Architecture Framework (TOGAF); archimate; design science</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The application of The Open Group Architecture Framework (TOGAF) and Archimate to transform the citizen&#39;s service delivery by the Ministry of Labor and Employment Promotion of Peru is presented. The enterprise architecture development followed the phases of the TOGAF Architecture Development Method (ADM): Architecture Vision and the Business, Data, Application, and Technology Architecture definitions, to determine the source architecture, the target architecture, and the gaps in meeting the target architecture requirements. The strategic motivations, active structure, passive structure, behavior, and different viewpoint models in the business, data, application, and technology domains have been constructed with the Archimate descriptive language. The viewpoints achieved by combining these open standards have allowed the identification of fragmented and isolated business services, as well as duplicated data and duplicated application functions. A proposal for integrated and transversal services emerges as a result of the applied enterprise architecture approach. Design science is applied to obtain knowledge from the generated artifacts. The knowledge generated from this application can be useful for new initiatives to improve the delivery of services to citizens in the Peruvian government.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_56-Transforming_Service_Delivery_with_TOGAF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>YAWARweb: Pilot Study about the usage of a Web Service to Raise Awareness of Blood Donation Campaigns on University Campuses in Lima, Peru</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100755</link>
        <id>10.14569/IJACSA.2019.0100755</id>
        <doi>10.14569/IJACSA.2019.0100755</doi>
        <lastModDate>2019-07-31T10:57:48.3330000+00:00</lastModDate>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Lipa Cueva Alonso</creator>
        
        <creator>Trinidad Qui&#241;onez Oscar</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Zamora Benavente Isabel</creator>
        
        <creator>Arias Guzm&#225;n Belinda</creator>
        
        <creator>Delgado-Rivera Gerson</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Application; survey; blood donation; donor benefits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This document presents a preliminary study of a pilot deployment of a web service. The service is used as a means to raise awareness on university campuses prior to blood donation campaigns and to measure its effect on subsequent donor enrollment. To measure the level of awareness, a score ranging from zero to four inclusive was set and quantified before and after giving the information, which allowed evaluating the score change influenced by the received information. Another important metric was the contrast in community participation between the blood donation campaigns of 12 June 2018 and June 2017. During these campaigns, 41 and 25 blood units were collected following the new approach and the traditional way, respectively. This variation represents an increase of 64% with respect to the campaign carried out in 2017 by INSN-SB, where the only variation was the use of the YAWARweb application. Moreover, in 2018 there were 36 people interested in donating; nonetheless, donation was not possible because of insufficient hemoglobin, narrow veins, and other causes. The goal of this research is to evaluate the usage of our survey, delivered through a web service, as a tool to raise awareness on university campuses prior to blood donation campaigns. The survey provides participants with information about the benefits of blood donation, thereby creating an incentive to participate in the campaigns and resulting in an increased number of participants. Our group continues to work on preventive health and on changing the picture of blood donation, leveraged by technology development. The document starts with a general summary of the situation of blood donation in Peru, then analyzes the population where the tool is applied, proceeds to the methodology of implementation of YAWARweb, and finally presents the results of the use of the web application in the community as a method of raising awareness.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_55-YAWARweb_Pilot_Study_about_the_usage_of_a_Web_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Quintupling Point Arithmetic 5P Formulas for Lopez-Dahab Coordinate over Binary Elliptic Curve Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100754</link>
        <id>10.14569/IJACSA.2019.0100754</id>
        <doi>10.14569/IJACSA.2019.0100754</doi>
        <lastModDate>2019-07-31T10:57:48.3200000+00:00</lastModDate>
        
        <creator>Waleed K AbdulRaheem</creator>
        
        <creator>Sharifah Bte Md Yasin</creator>
        
        <creator>Nur Izura Binti Udzir</creator>
        
        <creator>Muhammad Rezal bin Kamel Ariffin</creator>
        
        <subject>Elliptic Curve Cryptosystem (ECC); scalar multiplication algorithm; point arithmetic; point quintupling; Lopez-Dahab (LD); binary curve</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In Elliptic Curve Cryptography (ECC), scalar multiplication comprises three computational levels: scalar arithmetic, point arithmetic, and field arithmetic. To achieve efficient ECC performance, precomputed points help realize faster computation by removing the need to repeat the addition process every time. This paper introduces new quintupling point (5P) formulas which can be precomputed once and reused at the scalar multiplication level. We considered mixed addition in Affine and Lopez-Dahab coordinates, since the mixed addition computation cost is lower than that of traditional addition in Lopez-Dahab coordinates over a binary curve. Two formulas are introduced for point quintupling, (Double Double Add) and (Triple Add Double); their costs are 17 multiplications + 12 squarings and 23 multiplications + 13 squarings, respectively. Both formulas are proven to yield valid points. The new quintupling point can be implemented with different scalar multiplication methods.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_54-New_Quintupling_Point_Arithmetic_5P_Formulas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of LoRa-based Air Pollution Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100753</link>
        <id>10.14569/IJACSA.2019.0100753</id>
        <doi>10.14569/IJACSA.2019.0100753</doi>
        <lastModDate>2019-07-31T10:57:48.3030000+00:00</lastModDate>
        
        <creator>Nael Abd Alfatah Husein</creator>
        
        <creator>Abdul Hadi Abd Rahman</creator>
        
        <creator>Dahlila Putri Dahnil</creator>
        
        <subject>Air pollution; carbon monoxide; wireless sensor network; LoRa; communication; gateway; transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Air pollution is a threat to human health and the environment. It is caused by harmful gases emitted from car exhausts, factories, forest fires, and other sources. Carbon monoxide, nitrogen oxides, and carbon dioxide are the main elements of air pollution. Since serious air pollution may harm our health, a real-time system that measures existing pollution is needed to classify the pollution level so that appropriate actions can be taken. In a high-density area, a large number of sensor nodes is deployed to cover such places and to allow long-range communication between the nodes and a gateway. This paper presents a real-time, long-range air pollution monitoring system for indoor and outdoor environments. The system implements a wireless sensor network using LoRa technology for data communication between all nodes and sensors. It consists of three nodes distributed within 900 m of the gateway that measure the concentration of carbon monoxide, carbon dioxide, and nitrogen oxide. Experimental results show the system is reliable in both indoor and outdoor applications. Coverage of up to 900 m was achieved, and measurements can be displayed through a web-based system. The experiments with LoRa transmission have shown that LoRa technology is very suitable for air pollution monitoring, especially for long-range transmission compared to other wireless transmission techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_53-Evaluation_of_LoRa_based_Air_Pollution_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal Models for Personalization of e-Learning based on Flow Theory and Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100752</link>
        <id>10.14569/IJACSA.2019.0100752</id>
        <doi>10.14569/IJACSA.2019.0100752</doi>
        <lastModDate>2019-07-31T10:57:48.2870000+00:00</lastModDate>
        
        <creator>Anibal Flores</creator>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Jos&#233; Herrera</creator>
        
        <creator>Edward Hinojosa</creator>
        
        <subject>Massive Online Open Course; MOOC; e-learning; flow-theory; learning resource sequence; case based reasoning; reinforcement learning; q-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This paper compares the results of two models for the personalization of learning resource sequences in a Massive Online Open Course (MOOC). The compared models are very similar and differ only in how they recommend learning resource sequences to each participant of the MOOC. In the first model, Case-Based Reasoning (CBR) with Euclidean distance is used to recommend learning resource sequences that were successful in the past, while in the second model, the Q-Learning algorithm from Reinforcement Learning is used to recommend optimal learning resource sequences. The design of the learning resources is based on flow theory, considering dimensions such as the knowledge level of the student versus the complexity level of the learning resource, with the aim of avoiding anxiety or boredom during the learning process of the MOOC.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_52-Proposal_Models_for_Personalization_of_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Entanglement Classification for a Three-qubit System using Special Unitary Groups, SU(2) and SU(4)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100751</link>
        <id>10.14569/IJACSA.2019.0100751</id>
        <doi>10.14569/IJACSA.2019.0100751</doi>
        <lastModDate>2019-07-31T10:57:48.2570000+00:00</lastModDate>
        
        <creator>Siti Munirah Mohd</creator>
        
        <creator>Bahari Idrus</creator>
        
        <creator>Hishamuddin Zainuddin</creator>
        
        <creator>Muriati Mukhtar</creator>
        
        <subject>Quantum entanglement; multiqubit entanglement; entanglement classification; special unitary group; three-qubit system; quantum information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Entanglement is a physical phenomenon that links a pair, or a set, of particles that correlate with each other regardless of the distance between them. Recent research on entanglement has mostly focused on measurement and classification in multiqubit systems. Classification of two qubits only distinguishes the quantum state as either separable or entangled, and it can be done by measurement. In a three-qubit system, classification becomes more complex because of the structure of the three qubits itself. Measurement alone is not sufficient because the states are divided into three types: fully separable states, biseparable states, and genuine entangled states. Therefore, classification is needed to distinguish the types of states in the three-qubit system. This study aims to classify the entanglement of three-qubit pure states using a combination model of the special unitary groups SU(2) and SU(4), by changing the angle of selected parameters in SU(4) and acting on a separable pure state. The matrix representing SU(2) is a 2&#215;2 matrix, while the matrix for SU(4) is a 4&#215;4 matrix; hence, the combination of SU(2) and SU(4) is represented by an 8&#215;8 matrix. The classification uses von Neumann entropy and the three-tangle measurement to identify the class. The results of this study indicate that three-qubit pure states have been successfully classified into different classes, namely A-B-C, A-BC, C-AB, GHZ, and W, with A-B-C being a fully separable state, A-BC and C-AB biseparable states, and GHZ and W genuine entangled states. The results show that this model can change separable pure states into other entanglement classes after the transformation is applied.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_51-Entanglement_Classification_for_a_Three_Qubit_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Group Cooperative Coding Model for Dense Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100750</link>
        <id>10.14569/IJACSA.2019.0100750</id>
        <doi>10.14569/IJACSA.2019.0100750</doi>
        <lastModDate>2019-07-31T10:57:48.2400000+00:00</lastModDate>
        
        <creator>El Miloud Ar-Reyouchi</creator>
        
        <creator>Ahmed Lichioui</creator>
        
        <creator>Salma Rattal</creator>
        
        <subject>Dense Wireless networks; Bit Error Rate (BER); Low-Density Parity-Check (LDPC) codes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Node groups in dense wireless networks (WNs) often pose the problem of communication between the central node and the rest of the nodes in a group. Adaptive Network Coded Cooperation (ANCC) for wireless central-node networks adapts precisely to extensive and dense WNs; at this level, a random linear network coding (RLNC) scheme with Low-Density Parity-Check (LDPC) codes is used as the essential ANCC evolution. This paper proposes an effective two-phase technique and then studies the influence on the bit error rate (BER) of the randomly chosen number of coded symbols that are correctly received in the second phase. The proposed technique also focuses on the role of the dispersion impact related to the LDPC code generating matrix. The simulation results allow selecting the best compromise for the best BER.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_50-A_Group_Cooperative_Coding_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agile Methods Selection Model: A Grounded Theory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100749</link>
        <id>10.14569/IJACSA.2019.0100749</id>
        <doi>10.14569/IJACSA.2019.0100749</doi>
        <lastModDate>2019-07-31T10:57:48.2270000+00:00</lastModDate>
        
        <creator>Mashal Kasem Alqudah</creator>
        
        <creator>Rozilawati Razali</creator>
        
        <creator>Musab Kasim Alqudah</creator>
        
        <subject>Agile methods selection; factors; model; grounded theory analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The adoption of Agile methods has increased in recent years because of their contribution to the success rate of project development. Nevertheless, the success rate of projects implemented using Agile methods has not completely reached its expected mark, and selecting the appropriate Agile methods is one of the reasons for this lag. Selecting the appropriate Agile methods is a challenging task because there are so many methods to select from; in addition, many organizations consider the selection of Agile methods a mammoth task. Therefore, to assist Agile team members, this study investigated how the appropriate Agile methods can be determined for different projects. In a Grounded Theory study, 23 Agile experts drawn from 19 teams across 13 countries were interviewed. Sixteen factors, grouped into five categories, were found to affect the selection of twenty Agile methods. The nature of the project (size, maturity, criticality, and decomposability), development team skills (communication skills, domain knowledge, technical skills, and maturity), project constraints (cost/value/ROI, cost of change, time, scope, and requirements volatility), customer involvement (collaboration, commitment, and domain knowledge), and organizational culture (type of organizational culture) are the key factors that should guide Agile team members in selecting an appropriate Agile method, based on the value these factors have for different organizations and/or different projects.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_49-Agile_Methods_Selection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Software Testing Technique based on Hybrid Database Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100748</link>
        <id>10.14569/IJACSA.2019.0100748</id>
        <doi>10.14569/IJACSA.2019.0100748</doi>
        <lastModDate>2019-07-31T10:57:48.1930000+00:00</lastModDate>
        
        <creator>Humma Nargis Aleem</creator>
        
        <creator>Mirza Mahmood Baig</creator>
        
        <creator>Muhammad Mubashir Khan</creator>
        
        <subject>Software testing; database testing; hypothetical database testing; traditional database testing; test case(s); grey box testing; software quality assurance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In computer science, software testing is regarded as a critical process executed to assess and analyze the performance of, and the risks existing in, software applications. There is an emphasis on integrating specific approaches to carry out testing activities effectively; the efficient strategy explored recently is the adoption of a hybrid database approach. For this purpose, a hybrid algorithm is proposed to ensure the functionality and outcomes of the testing procedure. The technical processes and their impact on the current methodology help to evaluate its effectiveness in software testing, from which specific conclusions can be drawn. The findings of the research elaborate the effectiveness of the proposed algorithm for use in software testing, suggesting that the new technique makes it easier and simpler to assess and analyze the reliability of software. Basically, the hybrid database approach comprises traditional and modern techniques that are deployed to achieve testing outcomes. Various testing methods have revealed challenges with purely traditional techniques, which is why the hybrid approach is now being developed in most areas. In addressing these concepts, the paper aims to investigate the complexity and efficiency of the hybrid database approach in software testing, as well as its scope in the IT industry.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_48-Efficient_Software_Testing_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Embedding Hexagonal Cells in the Circular and Hexagonal Region of Interest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100747</link>
        <id>10.14569/IJACSA.2019.0100747</id>
        <doi>10.14569/IJACSA.2019.0100747</doi>
        <lastModDate>2019-07-31T10:57:48.1770000+00:00</lastModDate>
        
        <creator>Marina Prvan</creator>
        
        <creator>Julije Ožegovic</creator>
        
        <creator>Arijana Burazin Mišura</creator>
        
        <subject>Detector design; hexagon tessellation; region of Interest; regular grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Hexagonal cells are applied in various fields of research. They exhibit many advantages, one of the most important being that they can be closely packed to form a hexagonal grid that fully covers the Region of Interest (ROI) without overlaps or gaps. The ROI can have various geometrical shapes, but this paper deals with circular or hexagonal ROI approximations. The main purpose of our research is to provide a short review of the literature on the hexagonal grid, summarizing the existing state-of-the-art approaches to embedding hexagonal cells in the targeted ROI shapes and their application-specific advantages. We report on the formulas and algebraic expressions given in the existing research for calculating the number of embedded inner hexagonal cells or their vertices and/or edges. We contribute by integrating all of this research in one place, finding connections between previously unrelated applications of the embedded hexagonal grid, and extracting commonalities between previous studies regarding whether they provide formulas for calculating the inner hexagonal cells. Where only the number of edges or vertices is provided for the targeted application, we derive formulas for calculating the number of inner hexagons. Our survey therefore results in an overview of solving the problem of embedding hexagonal cells in the desired circular or hexagonal ROI. The contribution of the review is twofold: first, it provides the existing and derived formulas for calculating the embedded hexagons; second, it provides the theoretical background necessary to encourage further research. Namely, our main motivation, the geometrical design of one of the world’s largest CERN particle detectors, the Compact Muon Solenoid (CMS), is analyzed as a source of future research directions.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_47-A_Review_of_Embedding_Hexagonal_Cells.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network for Diagnosing Skin Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100746</link>
        <id>10.14569/IJACSA.2019.0100746</id>
        <doi>10.14569/IJACSA.2019.0100746</doi>
        <lastModDate>2019-07-31T10:57:48.1470000+00:00</lastModDate>
        
        <creator>Mohammad Ashraf Ottom</creator>
        
        <subject>Convolutional neural network CNN; melanoma; skin cancer; image preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The diagnosis of melanoma (a skin cancer) is a challenging task in medical science due to the amount and nature of the data. Skin cancer datasets usually come in different formats and shapes, including medical images; hence, the data require tremendous preprocessing effort before the auto-diagnostic task itself. In this work, deep learning (a convolutional neural network) is used to build a computer model for predicting new cases of skin cancer. The first phase of this work is to prepare the image data; this includes image segmentation to find useful parts that are easier to analyze, detect regions of interest in digital images, reduce the amount of noise and image illumination, and easily detect the sharp edges (boundaries) of objects. The proposed approach then builds a convolutional neural network model consisting of three convolution layers, three max pooling layers, and four fully connected layers. Testing the model produced promising results, with an accuracy of 0.74. This result encourages future improvement and research on the online diagnosis of melanoma at early stages. Accordingly, a web application was built to utilize the model and provide online diagnosis of melanoma.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_46-Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Discovery based Framework for Enhancing the House of Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100745</link>
        <id>10.14569/IJACSA.2019.0100745</id>
        <doi>10.14569/IJACSA.2019.0100745</doi>
        <lastModDate>2019-07-31T10:57:48.1300000+00:00</lastModDate>
        
        <creator>Amira M Idrees</creator>
        
        <creator>Ahmed I. ElSeddawy</creator>
        
        <creator>Mohammed Ossama Zeidan</creator>
        
        <subject>Knowledge discovery; mining techniques; customer segmentation; Saaty’s method; customer satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Mining techniques have proved to have a successful impact in different fields for many targets; one of these targets is gaining customers’ satisfaction by enhancing product quality according to the voice of these customers. This research proposes a framework based on mining techniques and an adapted Saaty method, aiming to gain customers’ satisfaction and consequently a competitive advantage in the real estate market. The proposed framework is applied during the design phase of a real estate residential building project as an improvement tool, so that the building is designed according to the customers’ requirements, representing the voice of the customer (VOC). The proposed Saaty method adaptation increased the number of consistent samples that were incorrectly excluded by the traditional Saaty method. The Saaty method adaptation succeeded in enhancing the house of quality (HOQ) by achieving consistent technical customer requirements for residential buildings, while customer segmentation succeeded in focusing on homogeneous groupings of customers.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_45-Knowledge_Discovery_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Terrorist Detection by using Integration of Krill Herd and Simulated Annealing Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100744</link>
        <id>10.14569/IJACSA.2019.0100744</id>
        <doi>10.14569/IJACSA.2019.0100744</doi>
        <lastModDate>2019-07-31T10:57:48.1170000+00:00</lastModDate>
        
        <creator>Hassan Awad Hassan Al-Sukhni</creator>
        
        <creator>Azuan Bin Ahmad</creator>
        
        <creator>Madihah Mohd Saudi</creator>
        
        <creator>Najwa Hayaati Mohd Alwi</creator>
        
        <subject>Krill Herd; web content classification; cyber terrorists; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This paper presents a technique to detect cyber terrorists’ suspected activities over the net by integrating the Krill Herd and Simulated Annealing algorithms. Three new levels of categorization, namely low, high, and interleave, are introduced in this paper to optimize the accuracy rate. Two thousand datasets were used for training and testing with 10-fold cross-validation, and the simulations were performed using Matlab&#174;. Based on the conducted experiment, this technique produced a 73.01% accuracy rate for the interleave level, thus outperforming the benchmark work. The findings can be used as guidance and baseline work for other researchers with the same interest in this area.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_44-Cyber_Terrorist_Detection_by_using_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Model for Detecting Facebook News’ Credibility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100743</link>
        <id>10.14569/IJACSA.2019.0100743</id>
        <doi>10.14569/IJACSA.2019.0100743</doi>
        <lastModDate>2019-07-31T10:57:48.0830000+00:00</lastModDate>
        
        <creator>Amira M Idrees</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <creator>Ahmed I. ElSeddawy</creator>
        
        <subject>Social network; vector space model; correlation coefficient; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Social networks are currently one of the main news sources for most of their users. Moreover, news channels also consider social networks main channels not only for spreading the news but also for measuring feedback from their followers. Facebook followers can comment on or react to the news, which represents the followers’ feedback on a topic. It is therefore a fact that measuring news credibility is one of the important tasks that could control the propagation of fake news as well as the number of a news item’s followers. The proposed model in this research highlights the impact of news followers on detecting the news’ polarity, i.e., whether it is fake or not. The proposed model focuses on applying intelligent sentiment analysis using the Vector Space Model (VSM), one of the most successful techniques, to the users’ comments and reactions expressed through emoji. The degree of credibility is then determined according to the correlation coefficient. An experimental study was conducted using a Facebook news dataset, which included the news and the followers’ feedback.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_43-A_Proposed_Model_for_Detecting_Facebook_News_Credibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vectorization of Text Documents for Identifying Unifiable News Articles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100742</link>
        <id>10.14569/IJACSA.2019.0100742</id>
        <doi>10.14569/IJACSA.2019.0100742</doi>
        <lastModDate>2019-07-31T10:57:48.0830000+00:00</lastModDate>
        
        <creator>Anita Kumari Singh</creator>
        
        <creator>Mogalla Shashi</creator>
        
        <subject>Vectorization; news articles; tf-idf; word embeddings; document embeddings; text clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Vectorization is imperative for processing textual data in natural language processing applications. Vectorization enables machines to understand textual content by converting it into meaningful numerical representations. The proposed work targets the identification of unifiable news articles for performing multi-document summarization. A framework is introduced for identifying news articles related to top trending topics/hashtags and for multi-document summarization of unifiable news articles based on the trending topics, in order to capture opinion diversity on those topics. Text clustering is applied to the corpus of news articles related to each trending topic to obtain smaller unifiable groups. The effectiveness of various text vectorization methods, namely bag-of-words representations with tf-idf scores, word embeddings, and document embeddings, is investigated for clustering news articles using k-means. The paper presents a comparative analysis of the different vectorization methods on documents from the DUC 2004 benchmark dataset in terms of purity.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_42-Vectorization_of_Text_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Post Treatment of Guided Wave by using Wavelet Transform in the Presence of a Defect on Surface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100741</link>
        <id>10.14569/IJACSA.2019.0100741</id>
        <doi>10.14569/IJACSA.2019.0100741</doi>
        <lastModDate>2019-07-31T10:57:48.0530000+00:00</lastModDate>
        
        <creator>MARRAKH Rachid</creator>
        
        <creator>ALOUANE Houda</creator>
        
        <creator>BELHOUSSINE DRISSI Taoufiq</creator>
        
        <creator>NSIRI Benayad</creator>
        
        <creator>ZAYRIT Soumaya</creator>
        
        <subject>Lamb wave; defect; reflection; transmission; CWT; FFT2D</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This article presents Lamb wave processing using two methods: the two-dimensional Fast Fourier Transform (FFT2D) and the Continuous Wavelet Transform (CWT) with the Morlet wavelet. This treatment is carried out for a structure of two aluminum-copper plates in edge-to-edge contact at a perpendicular junction of thickness “e”, in the presence of a rectangular, symmetrical defect located on the surface of the junction with a depth “d”. The aim of this study is to calculate the transmission and reflection energy coefficients using the two methods. The simulation results, obtained with Comsol software for an incident S0 wave at F = 800 kHz, indicate good coherence between the two methods (FFT2D and CWT).</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_41-Post_Treatment_of_Guided_Wave_by_using_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Ontology Development Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100740</link>
        <id>10.14569/IJACSA.2019.0100740</id>
        <doi>10.14569/IJACSA.2019.0100740</doi>
        <lastModDate>2019-07-31T10:57:48.0370000+00:00</lastModDate>
        
        <creator>Nur Liyana Law Mohd Firdaus Law</creator>
        
        <creator>Moamin A. Mahmoud</creator>
        
        <creator>Alicia Y.C. Tang</creator>
        
        <creator>Fung-Cheng Lim</creator>
        
        <creator>Hairoladenan Kasim</creator>
        
        <creator>Marini Othman</creator>
        
        <creator>Christine Yong</creator>
        
        <subject>Component; ontology; semantics web; artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Although it is widely recognized that ontology is the main approach to semantic interoperability among information systems and services, researchers’ understanding of ontology aspects remains limited. To provide clear insight into this problem and support researchers, a background understanding of the various aspects related to ontology is needed. Consequently, in this paper, a comprehensive review is conducted to map the literature into a coherent taxonomy. The aspects covered include the benefits of ontology, types of ontology, application domains, development platforms, languages, tools, and methodologies. The paper also discusses the concept of ontology, the Semantic Web, and its contribution to several research fields such as Artificial Intelligence, Library Science, and shared knowledge. The fundamentals of ontology presented in this paper can benefit readers who wish to embark on ontology-based research and application development.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_40-A_Review_of_Ontology_Development_Aspects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Tree Approach for Predicting Student Grades in Research Project using Weka</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100739</link>
        <id>10.14569/IJACSA.2019.0100739</id>
        <doi>10.14569/IJACSA.2019.0100739</doi>
        <lastModDate>2019-07-31T10:57:48.0070000+00:00</lastModDate>
        
        <creator>Ertie C Abana</creator>
        
        <subject>Data mining; classification rules; decision tree; educational data mining; WEKA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Data mining in education is an emerging multidisciplinary research field, especially with the upsurge of new technologies in educational systems that has led to the storage of massive amounts of student data. This study used classification, a data mining process, to evaluate computer engineering students’ data and identify students who need academic counseling in the subject. Five attributes were considered in building the classification model, and the decision tree was chosen as the classifier. The accuracies of the decision tree algorithms Random Tree, REPTree, and J48 were compared using cross-validation, with Random Tree returning the highest accuracy of 75.188%. The Waikato Environment for Knowledge Analysis (WEKA) data mining tool was used to generate the classification model. The classification rules extracted from the decision tree were used in the algorithm of the Research Project Grade Predictor application, which was developed in Visual C#. The application will help research instructors and advisers easily identify students who need more attention because they are predicted to receive low grades.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_39-A_Decision_Tree_Approach_for_Predicting_Student_Grades.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability of “Traysi”: A Web Application for Tricycle Commuters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100738</link>
        <id>10.14569/IJACSA.2019.0100738</id>
        <doi>10.14569/IJACSA.2019.0100738</doi>
        <lastModDate>2019-07-31T10:57:48.0070000+00:00</lastModDate>
        
        <creator>Ertie C Abana</creator>
        
        <subject>Tricycle; usability metrics; web application; fare calculation; Google API</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This study measured the usability of a web application for tricycle commuters developed using Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript (JS) with the aid of Google Application Programming Interfaces (APIs). Toward this goal, effectiveness, efficiency, and user satisfaction were measured using common usability metrics. Effectiveness was measured in terms of task completion rate and user errors, while efficiency was measured in terms of time on task. For user satisfaction, the post-task questionnaire Single Ease Question (SEQ) was used. To check whether the web application would be usable even for first-time users, the usability test was conducted three times. The results revealed that the usability of the web application was acceptable on the first trial. However, usability improved with repeated use, as evidenced by the third trial, which yielded a 93.33% task completion rate with only one user error. The average time on task in every trial was lower than the maximum acceptable task time, and user satisfaction was high (x̅ = 6.00). Thus, the web application was highly usable for its intended purpose, especially when used repeatedly.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_38-Usability_of_Traysi_A_Tricycle_Commuting_Web_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Citizen Attention Web Application for the Municipality of Sabinas, Coahuila, Mexico</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100737</link>
        <id>10.14569/IJACSA.2019.0100737</id>
        <doi>10.14569/IJACSA.2019.0100737</doi>
        <lastModDate>2019-07-31T10:57:47.9770000+00:00</lastModDate>
        
        <creator>Griselda Cortes</creator>
        
        <creator>Alicia Valdez</creator>
        
        <creator>Laura Vazquez</creator>
        
        <creator>Alma Dominguez</creator>
        
        <creator>Cesar Gonzalez</creator>
        
        <creator>Ernestina Leija</creator>
        
        <creator>Jose Cendejas</creator>
        
        <subject>Web system; citizen attention; database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Information systems are fundamental to the daily activities of any organization, and organizations increasingly depend on information technology to achieve their objectives. This article presents a web information system developed and implemented to support the management of the administrative services required by the citizens of the municipality of Sabinas, Coahuila, M&#233;xico, seeking to serve them in the best way and to provide faster and more reliable follow-up information. Previously, the work was done manually, with records kept in Microsoft Excel. The system was developed using the agile XP methodology; the database was created in MySQL, and development was done in Visual Studio 2015, with web programming in ASP .NET and programming in C#. With the implementation of the system, there is now electronic control of the requests made by citizens, providing integrity, availability, and confidentiality of information, while streamlining the process of capturing and receiving applications in each department of the municipality of Sabinas, Coahuila, Mexico. In addition, the system provides statistics on the requests that were attended, those in process, and those not attended.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_37-Citizen_Attention_Web_Application_for_the_Municipality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thermal Pain Level Estimation Method with Heart Rate and Cerebral Blood Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100736</link>
        <id>10.14569/IJACSA.2019.0100736</id>
        <doi>10.14569/IJACSA.2019.0100736</doi>
        <lastModDate>2019-07-31T10:57:47.9600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Asato Mizoguchi</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Thermal pain; support vector machine; thermal stimulus; classification; cerebral blood flow; heart rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>A method for thermal pain level estimation from heart rate and cerebral blood flow using a support vector machine (SVM) is proposed. Through experiments, it is found that thermal pain level is much more sensitive to cerebral blood flow than to heart rate. It is also found that the performance of thermal pain estimation is much better than that of the previously proposed method based on the number of blinks and the enlarging rate of pupil size.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_36-Thermal_Pain_Level_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Mutual Authenticated Key Agreement Protocol for Anonymous Roaming Service in Global Mobility Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100735</link>
        <id>10.14569/IJACSA.2019.0100735</id>
        <doi>10.14569/IJACSA.2019.0100735</doi>
        <lastModDate>2019-07-31T10:57:47.9430000+00:00</lastModDate>
        
        <creator>Hyunsung Kim</creator>
        
        <subject>Information security; roaming security; anonymity; authenticated key agreement; cryptanalysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>With the rapid development of mobile intelligent terminals, users can enjoy ubiquitous life in global mobility networks (GLOMONET). Securing user information is essential to providing a secure roaming service in GLOMONET. Recently, Xu et al. proposed a mutual authentication and key agreement (MAKA) protocol as a basic security building block. The purpose of this paper is not only to show some security problems in Xu et al.’s MAKA protocol but also to propose an enhanced MAKA protocol as a remedy. The proposed protocol ensures higher security than the well-known authenticated key agreement protocols but incurs slightly more computational overhead due to the security enhancements.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_35-Enhanced_Mutual_Authenticated_Key_Agreement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complex Binary Adder Designs and their Hardware Implementations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100734</link>
        <id>10.14569/IJACSA.2019.0100734</id>
        <doi>10.14569/IJACSA.2019.0100734</doi>
        <lastModDate>2019-07-31T10:57:47.9270000+00:00</lastModDate>
        
        <creator>Tariq Jamil</creator>
        
        <creator>Medhat Awadalla</creator>
        
        <creator>Iftaquaruddin Mohammed</creator>
        
        <subject>Complex number; complex binary; adder; ripple carry; decoder; minimum delay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The Complex Binary Number System (CBNS) is a (-1+j)-based binary number system that allows both the real and imaginary components of a complex number to be represented as a single binary number. In this paper, we present three designs of nibble-size complex binary adders (ripple-carry, decoder-based, and minimum-delay) and implement them on various Xilinx FPGAs. A base-2 4-bit binary adder has also been implemented so that the statistics of the different adders can be compared.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_34-Complex_Binary_Adder_Designs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Existing IoT Architectures Security and Privacy Issues and Concerns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100733</link>
        <id>10.14569/IJACSA.2019.0100733</id>
        <doi>10.14569/IJACSA.2019.0100733</doi>
        <lastModDate>2019-07-31T10:57:47.9130000+00:00</lastModDate>
        
        <creator>Fatma Alshohoumi</creator>
        
        <creator>Mohammed Sarrab</creator>
        
        <creator>Abdulla AlHamadani</creator>
        
        <creator>Dawood Al-Abri</creator>
        
        <subject>Internet of things; IoT architecture stack; IoT layers; IoT privacy concerns; IoT security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The Internet of Things (IoT) has become one of the most prominent technologies the world is witnessing nowadays, providing great solutions to humanity in many significant fields of life. IoT refers to a collection of sensors or objects that can communicate with each other through the internet without human intervention. Currently, there is no standard IoT architecture. As the technology is still in its infancy, IoT is surrounded by numerous security and privacy concerns. Thus, to avoid concerns that may hinder its deployment, an IoT architecture has to be carefully designed to incorporate security and privacy solutions. In this paper, a systematic literature review was conducted to trace the evolution of IoT architectures from their initial development in 2008 until 2018. The comparison among these architectures is based on the architectural stack, covered issues, technologies used, and consideration of security and privacy aspects. The findings of the review show that the initial IoT architectures did not provide a comprehensive meaning of IoT that describes its nature, whereas recent IoT architectures convey a comprehensive meaning, starting from data collection, followed by data transmission and processing, and ending with data dissemination. Moreover, the findings reveal that IoT architecture has evolved gradually over the years by improving the architecture stack with new solutions to mitigate IoT challenges such as scalability, interoperability, extensibility, and management, but with little consideration of security solutions. The findings also disclose that none of the discussed IoT architectures considers privacy concerns, which are indeed a critical factor in IoT sustainability and success. Therefore, there is an inevitable need to consider security and privacy solutions when designing IoT architectures.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_33-Systematic_Review_of_Existing_IoT_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metric-based Measurement and Selection for Software Product Quality Assessment: Qualitative Expert Interviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100732</link>
        <id>10.14569/IJACSA.2019.0100732</id>
        <doi>10.14569/IJACSA.2019.0100732</doi>
        <lastModDate>2019-07-31T10:57:47.8800000+00:00</lastModDate>
        
        <creator>Zubaidah Bukhari</creator>
        
        <creator>Jamaiah Yahaya</creator>
        
        <creator>Aziz Deraman</creator>
        
        <subject>Software product metric; metric selection criteria; software quality; software measurement; selection process; qualitative study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>A systematic and efficient measurement process can assist in the production of a quality software product. Metric-based measurement methods are often used to assess product quality, and several hundred metrics have been proposed by previous researchers. However, there is no specific and structured mechanism for the metric selection process. Lack of awareness, knowledge, and experience leads practitioners and stakeholders in the industry to select inappropriate and unsuitable metrics for assessing software product quality. The literature shows that existing selection models are irrelevant and insufficient for assisting and supporting the metric selection process, which should consist of criteria together with systematic and practical selection methods. A qualitative interview study was conducted with 12 experts and practitioners to reveal current issues in software measurement, identify elements relevant to the software metric selection process, and identify appropriate and valid software metric selection criteria. The findings from these expert interviews revealed important input from industry: five main issues in software measurement, six elements associated with the metric selection process, and 13 criteria relevant to software metric selection.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_32-Metric_Based_Measurement_and_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Accuracy and Precision of WLAN Position Estimation System based on RSS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100731</link>
        <id>10.14569/IJACSA.2019.0100731</id>
        <doi>10.14569/IJACSA.2019.0100731</doi>
        <lastModDate>2019-07-31T10:57:47.8670000+00:00</lastModDate>
        
        <creator>Sutiyo</creator>
        
        <creator>Risanuri Hidayat</creator>
        
        <creator>I Wayan Mustika</creator>
        
        <creator>Sunarno</creator>
        
        <subject>WLAN position estimation system; WLAN distance estimation system; accuracy and precision wireless localization; efficient and flexible outdoor wireless localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The coordinates of the position of a wireless access point are the main goal of wireless localization techniques. Commonly, outdoor wireless localization techniques use the trilateration method, installing several devices as anchors, which raises installation and maintenance costs. An outdoor wireless localization system that is more efficient, effective, and flexible, yet still accurate and precise, therefore becomes indispensable. This paper proposes a wireless position estimation system based on received signal strength (RSS) values, called the WLAN Position Estimation System (WLAN PES), integrating the WLAN Distance Estimation System (WLAN DES) with the WLAN PES formula. To confirm the accuracy and precision of WLAN PES, this paper focuses on the analysis of WLAN PES testing conducted at ten points, each with different coordinates and angles. The test points circled the target WLAN access point, each at a distance of 1000 meters from it. At each test point, the WLAN finder read RSSfnd, and the system calculated the estimated distance using WLAN DES based on the RSSfnd value. Once the distance estimate was obtained, the estimated position was calculated with the WLAN PES formula. In addition to the distance estimate, the WLAN PES formula requires variables such as the latitude and longitude coordinates of the WLAN finder position and the bearing of the WLAN finder. WLAN PES was found to be capable of determining the estimated position coordinates of a WLAN access point with accuracy and precision values of 93.26% and 98.77%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_31-Analysis_of_Accuracy_and_Precision_of_WLAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Evaluation of Cue-Words based Features and In-text Citations based Features for Citation Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100730</link>
        <id>10.14569/IJACSA.2019.0100730</id>
        <doi>10.14569/IJACSA.2019.0100730</doi>
        <lastModDate>2019-07-31T10:57:47.8330000+00:00</lastModDate>
        
        <creator>Syed Jawad Hussain</creator>
        
        <creator>Sohail Maqsood</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Azeem Khan</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <creator>Usman Ahmed</creator>
        
        <subject>Cue-words; in-text citation; hybrid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Citation plays a vital role in the scientific community for evaluating the contributions of scientific authors. Citing sources delivers a measurable way of evaluating the impact of journals and authors and allows for the recognition of new research issues. Different techniques for classifying citations have been proposed. Citations that provide background knowledge in the citing document have been classified as non-important or incidental by previous researchers, while citations that extend previous work in the citing document are classified as important. The accuracy achieved by existing citation models is not very high, and better features need to be included for accurate predictions. A hybrid approach would present all possible combinations of cue-word-based and in-text-citation-based features for citation classification.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_30-A_Comprehensive_Evaluation_of_Cue_Words.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Students’ Acceptance of Online Courses at Al-Ahliyya Amman University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100729</link>
        <id>10.14569/IJACSA.2019.0100729</id>
        <doi>10.14569/IJACSA.2019.0100729</doi>
        <lastModDate>2019-07-31T10:57:47.8200000+00:00</lastModDate>
        
        <creator>Qasem Kharma</creator>
        
        <subject>TAM; UTAUT; online learning; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Online courses allow students to access course materials anytime and anywhere and are meant to enhance and improve the learning process. Unfortunately, analysis of data from an online course at Al-Ahliyya Amman University found that only 51% of enrolled students accessed the animated course material. This study proposed a model to understand the factors that affect students’ intention to use an online course by extending the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM). The proposed research model investigated the effects of experience, perceived usefulness, awareness, effort expectancy, cost, subjective (social) norms, and behavioral intention to use online courses on students’ adoption of online courses. In addition, the model investigated the moderating effects of college, college level, personal computer ownership, internet access, and online course enrollment on these relations. A questionnaire was distributed, and a structural equation modeling (SEM) approach was then used to analyze the responses using SmartPLS.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_29-Investigating_Students_Acceptance_of_Online_Courses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vietnamese Speech Command Recognition using Recurrent Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100728</link>
        <id>10.14569/IJACSA.2019.0100728</id>
        <doi>10.14569/IJACSA.2019.0100728</doi>
        <lastModDate>2019-07-31T10:57:47.7870000+00:00</lastModDate>
        
        <creator>Phan Duy Hung</creator>
        
        <creator>Truong Minh Giang</creator>
        
        <creator>Le Hoang Nam</creator>
        
        <creator>Phan Minh Duong</creator>
        
        <subject>Vietnamese speech command; voice control; recurrent neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Voice control is an important function in many mobile devices and in the smart home, especially in providing people with disabilities a convenient way to communicate with devices. Despite many studies on this problem worldwide, there has been no formal study for the Vietnamese language. In addition, many studies did not offer a solution that can be easily extended in the future. In this study, a dataset of Vietnamese speech commands is labeled and organized to be shared with the language research community in general and Vietnamese language research in particular. This paper provides speech collection and processing software, and it designs and evaluates Recurrent Neural Networks applied to the collected data. The average recognition accuracy on a set of 15 commands for controlling smart home devices is 98.19%.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_28-Vietnamese_Speech_Command_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Constraint-based Approach to Deal with Self-Adaptation: The Case of Smart Irrigation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100727</link>
        <id>10.14569/IJACSA.2019.0100727</id>
        <doi>10.14569/IJACSA.2019.0100727</doi>
        <lastModDate>2019-07-31T10:57:47.7730000+00:00</lastModDate>
        
        <creator>Asmaa Achtaich</creator>
        
        <creator>Nissrine Souissi</creator>
        
        <creator>Camille Salinesi</creator>
        
        <creator>Ra&#250;l Mazo</creator>
        
        <creator>Ounsa Roudies</creator>
        
        <subject>IoT; irrigation systems; smart; constraint; product lines; self-adaptive systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Smart irrigation is a specific application of the IoT in which devices composed of sensors and actuators collect environmental data, such as soil humidity, air temperature, and brightness, in order to launch or plan irrigation cycles. These systems operate according to a configuration that dictates how every component should behave. Static configurations are limited, as they only represent a set of fixed requirements. However, in domains such as the IoT, technology is continuously evolving, and various users, sometimes with varying needs, interact with the system. This leads to dynamic requirements, which are fulfilled by dynamic configurations. This paper uses the case of an irrigation system to illustrate such requirements and proposes a constraint-based approach to designing self-adaptive smart irrigation systems.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_27-A_Constraint_Based_Approach_to_Deal_with_Self_Adaptation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Multiple Sclerosis Lesion Segmentation Approach based on Cellular Learning Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100726</link>
        <id>10.14569/IJACSA.2019.0100726</id>
        <doi>10.14569/IJACSA.2019.0100726</doi>
        <lastModDate>2019-07-31T10:57:47.7400000+00:00</lastModDate>
        
        <creator>Mohammad Moghadasi</creator>
        
        <creator>Dr. Gabor Fazekas</creator>
        
        <subject>Multiple Sclerosis (MS); MATLAB; Magnetic Resonance Imaging (MRI); MS Lesions; Cellular Learning Automata (CLA); Segmentation; Support Vector Machines (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Multiple Sclerosis (MS) is a demyelinating nerve disease in which, for unknown reasons, the body’s immune system is affected and immune cells begin to destroy the myelin sheath of nerve cells. In pathology, the areas of distributed demyelination are called lesions, which are pathological characteristics of MS. In this research, the segmentation of the lesions from one another is studied using gray-scale features and the dimensions of the lesions. Brain Magnetic Resonance Imaging (MRI) images in three sequences (T1, T2, PD) containing MS lesions have been used. Cellular Learning Automata (CLA) is applied to the MRI images with a novel trial-and-error approach to set penalty and reward frames for each pixel. The images were analyzed in MATLAB, and the results show the MS lesions in white and the brain anatomy in red on a black background. The proposed approach can be considered a supplement or a superior alternative to other methods such as Graph Cuts (GC), fuzzy c-means, mean-shift, k-Nearest Neighbor (KNN), and Support Vector Machines (SVM).</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_26-An_Automatic_Multiple_Sclerosis_Lesion_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Micro Agent and Neural Network based Model for Data Error Detection in a Real Time Data Stream</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100725</link>
        <id>10.14569/IJACSA.2019.0100725</id>
        <doi>10.14569/IJACSA.2019.0100725</doi>
        <lastModDate>2019-07-31T10:57:47.7400000+00:00</lastModDate>
        
        <creator>Sidi Mohamed Snineh</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Abdelaziz Daaif</creator>
        
        <creator>Omar Bouattane</creator>
        
        <subject>Micro-agent; machine learning; errors; big data; multilayer perceptron; stream processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this paper, we present a model for learning and detecting the presence of data type errors in a real-time big data stream processing context. The proposed approach is based on a collection of micro-agents. Each micro-agent is trained to detect a specific type of error using an atomic neural network based on a simple multilayer perceptron. The supervised learning process is based on a binary classifier whose training data inputs are represented by data types and data values. The Micro-Agent for Error Detection (MAED) is deployed in several instances, depending on the number of error types to be handled. The orchestration of the data streams to be examined is performed by a special Host Micro-Agent (HMA). The latter receives a data stream in real time and splits the current record into several elementary fields. Each field value is streamed to an instance of the MAED agent, which responds with a signal indicating the presence or absence of a specific data type error in the corresponding data field. For each detected data type error, the HMA agent selects and performs the appropriate cleaning algorithm from a repository to correct the errors present in the data stream. To validate this approach, we propose an implementation based on the Deeplearning4j framework for the machine learning part and JADE as the Multi-Agent System (MAS) platform. The dataset used is generated by an events generator for smart highways.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_25-Micro_Agent_and_Neural_Network_based_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Flyweight and Proxy Design Patterns on Software Efficiency: An Empirical Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100724</link>
        <id>10.14569/IJACSA.2019.0100724</id>
        <doi>10.14569/IJACSA.2019.0100724</doi>
        <lastModDate>2019-07-31T10:57:47.7100000+00:00</lastModDate>
        
        <creator>Muhammad Ehsan Rana</creator>
        
        <creator>Wan Nurhayati Wan Ab Rahman</creator>
        
        <creator>Masrah Azrifah Azmi Murad</creator>
        
        <creator>Rodziah Binti Atan</creator>
        
        <subject>Software efficiency; design patterns; flyweight design pattern; proxy design pattern; measuring software efficiency; empirical evaluation of software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this era of technology, delivering quality software has become a crucial requirement for developers. Quality software can help an organization succeed and gain a competitive edge in the market. There are numerous quality attributes introduced by various quality models. Various studies show that the quality of object-oriented software can be improved by using design patterns. The main purpose of this research is to identify the relationships between design patterns and the software efficiency quality attribute. This research focuses on the impact of the Flyweight and Proxy design patterns on software efficiency. An example scenario is used to empirically evaluate the effectiveness of the applied design refinements on the efficiency of a system. The techniques used to measure software efficiency and the results obtained for each solution are elaborated in detail. At the end of this research, a comparative analysis is provided to show the relative impact of each selected design pattern on software efficiency.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_24-The_Impact_of_Flyweight_and_Proxy_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Double Diode Ideality Factor Determination using the Fixed-Point Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100723</link>
        <id>10.14569/IJACSA.2019.0100723</id>
        <doi>10.14569/IJACSA.2019.0100723</doi>
        <lastModDate>2019-07-31T10:57:47.6930000+00:00</lastModDate>
        
        <creator>Traiki Ghizlane</creator>
        
        <creator>Ouajji Hassan</creator>
        
        <creator>Bifadene Abdelkader</creator>
        
        <creator>Bouattane Omar</creator>
        
        <subject>Ideality factor; fixed point; double diode; solar cell</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this paper, we study the diode ideality factor of the double-exponential equivalent model, based on the properties of the fixed-point method. The optimal choice of this factor improves the profitability of a photovoltaic installation. The diode ideality factor is a crucial parameter for describing solar cell behavior. Different methods have been elaborated to determine its value; some are analytical, such as the Lambert function, and others are direct, such as the normal method of the coordinates of the parameters. In our case, we applied the fixed-point method, an iterative algorithm for solving non-linear equations. The values obtained by this method are compared with the values achieved by other methods to demonstrate its significance and effects.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_23-Double_Diode_Ideality_Factor_Determination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Perspective on External Value Creation Factors in Indonesia e-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100722</link>
        <id>10.14569/IJACSA.2019.0100722</id>
        <doi>10.14569/IJACSA.2019.0100722</doi>
        <lastModDate>2019-07-31T10:57:47.6630000+00:00</lastModDate>
        
        <creator>Sfenrianto Sfenrianto</creator>
        
        <creator>Hilda Oktavianni JM</creator>
        
        <creator>Hafid Prima Putra</creator>
        
        <creator>Khoerintus</creator>
        
        <subject>External environment factors; e-commerce; user perspective; value creation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Value creation is very important for e-commerce companies in order to reach customers and increase the company&#39;s value in the view of the customer. Value creation is mostly developed based on internal factors of the company, a statement supported by many studies that have researched value creation from within the company. The purpose of this research is to find out the customer’s perspective on the external environmental factors that can affect the value creation of e-commerce companies, especially in Indonesia. This research uses primary and secondary methods for data collection. A questionnaire is used as the primary research method to gather data from e-commerce users. For the secondary method, literature from previous research and existing journals is reviewed. The results of this research are the statements from respondents regarding external environmental factors, which can be classified into five factors: government policy and legal issues, telecommunication infrastructure, financial and capital investment, physical environment, and payment system.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_22-User_Perspective_on_External_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach of Automatic Modulation Classification based on in Phase-Quadrature Diagram Combined with Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100721</link>
        <id>10.14569/IJACSA.2019.0100721</id>
        <doi>10.14569/IJACSA.2019.0100721</doi>
        <lastModDate>2019-07-31T10:57:47.6470000+00:00</lastModDate>
        
        <creator>Jean Baptiste Bi Gouho</creator>
        
        <creator>D&#233;sir&#233; Mel&#232;dje</creator>
        
        <creator>Boko Aka</creator>
        
        <creator>Michel Babri</creator>
        
        <subject>Modulations; artificial neural network; clustering; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Automatic Modulation Classification (AMC) with intelligent systems is an attractive area of research due to the development of Software Defined Radio (SDR). This paper proposes a new algorithm based on a combination of k-means clustering and an Artificial Neural Network (ANN). We use the I-Q (In-phase, Quadrature) constellation diagram as the basic information. The k-means algorithm is used to normalize transmitted data polluted by Additive White Gaussian Noise (AWGN); the new diagram obtained is then treated as an image and coded in pixels before being fed into a Multi-Layer Perceptron (MLP) neural network. Simulation results show an improvement in the recognition rate under low SNR (Signal-to-Noise Ratio) compared to some results reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_21-New_Approach_of_Automatic_Modulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Social Feature Quality on the Social Commerce System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100720</link>
        <id>10.14569/IJACSA.2019.0100720</id>
        <doi>10.14569/IJACSA.2019.0100720</doi>
        <lastModDate>2019-07-31T10:57:47.6170000+00:00</lastModDate>
        
        <creator>Nona M Nistah</creator>
        
        <creator>Maryati Mohd. Yusof</creator>
        
        <creator>Suaini Sura</creator>
        
        <creator>Ook Lee</creator>
        
        <subject>Social feature quality; relationship support; social support; s-commerce; e-commerce; customer satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The emergence of social networks has triggered the evolution of e-commerce into what is now known as social commerce (s-commerce). However, s-commerce users experience problems related to its social features that affect s-commerce effectiveness. Therefore, this paper examines the effect of social feature quality (SFQ) determinants on s-commerce from the customer perspective by adapting the information systems success model. A total of 220 online survey responses were analyzed using confirmatory factor analysis and the structural equation model to test the proposed model. SFQ shows a significant effect on perceived usefulness and customer satisfaction with an s-commerce system, whereas relationship support quality shows a significant effect on perceived usefulness and customer satisfaction but not on social support. A significant relationship is also identified among perceived usefulness, customer satisfaction, and the net benefits of an s-commerce system.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_20-The_Effect_of_Social_Feature_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobile-based Tremor Detector Application for Patients with Parkinson’s Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100719</link>
        <id>10.14569/IJACSA.2019.0100719</id>
        <doi>10.14569/IJACSA.2019.0100719</doi>
        <lastModDate>2019-07-31T10:57:47.6000000+00:00</lastModDate>
        
        <creator>Pang Xin Yi</creator>
        
        <creator>Maryati Mohd. Yusof</creator>
        
        <creator>Kalaivani Chelappan</creator>
        
        <subject>Agile; mhealth; mobile application; Parkinson; tremor detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Parkinson’s disease affects millions of people worldwide, and its frequency is steadily increasing. No cure is currently available for Parkinson’s disease patients, and most medications only treat the symptoms. This treatment depends on the quantification of Parkinson’s symptoms, such as hand tremors. The most commonly used method to measure human tremors is a severity scale, which lacks accuracy because it is based on the subjective review of a neurologist. Furthermore, the use of severity scales prevents the extraction of information from tremor activity, such as speed, amplitude, and frequency. Therefore, a mobile application was developed to measure the hand tremor level of Parkinson’s patients using a mobile phone-based accelerometer. The Agile method was used to develop this application, and Android Studio and the Android Software Development Kit were utilized. The application runs on an Android smartphone and allows patients to identify their tremor activity and subsequently seek relevant medical advice. In addition, a neurologist can monitor the tremor activity of patients by analyzing the records generated by this application.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_19-A_Mobile_Based_Tremor_Detector_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication and Computation Aware Task Scheduling Framework Toward Exascale Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100718</link>
        <id>10.14569/IJACSA.2019.0100718</id>
        <doi>10.14569/IJACSA.2019.0100718</doi>
        <lastModDate>2019-07-31T10:57:47.5830000+00:00</lastModDate>
        
        <creator>Suhelah Sandokji</creator>
        
        <creator>Fathy Eassa</creator>
        
        <subject>Exascale computing; resource utilization; hybrid task scheduling; heterogeneous computing environment; task scheduler framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The race for Exascale Computing has naturally led computer architecture to transition from the multicore era into the heterogeneous era. Exascale Computing in a heterogeneous environment necessarily requires best-fit scheduling and improved resource utilization. Task scheduling is the most critical aspect of managing the challenges of Exascale in a heterogeneous computing environment. In this paper, a Communication and Computation Aware Task Scheduler Framework (CCATSF) is introduced. The CCATSF framework consists of four parts: the first is the resource monitor, the second is the resource manager, the third is the task scheduler, and the fourth is the dispatcher. The framework is based on a new hybrid task scheduling algorithm for a heterogeneous computing environment. Our results are based on the random job generator that we implemented, and they indicate that the CCATSF framework, based on the proposed dynamic variant heterogeneous early finish time (DVR-HEFT) algorithm, is able to reduce the scheduler&#39;s makespan and increase efficiency without increasing the algorithm&#39;s time complexity.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_18-Communication_and_Computation_Aware_Task_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Convolutional Neural Network for Automatic Identification and Classification of Fall Army Worm Moth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100717</link>
        <id>10.14569/IJACSA.2019.0100717</id>
        <doi>10.14569/IJACSA.2019.0100717</doi>
        <lastModDate>2019-07-31T10:57:47.5700000+00:00</lastModDate>
        
        <creator>Francis Chulu</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Phillip O.Y. Nkunika</creator>
        
        <creator>Mayumbo Nyirenda</creator>
        
        <creator>Monica M.Kabemba</creator>
        
        <creator>Philemon H.Sohati</creator>
        
        <subject>Augmentation; convolutional neural networks; classification; fall army worm; machine learning; tensorflow; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>To combat the problem caused by the Fall Army Worm in the country, there is a need to develop robust early warning and monitoring systems, as the current manual system is labor intensive and time consuming. The automation of the identification and classification of the insect is one of the novel methods that can be undertaken. Therefore, this paper presents the results of training a Convolutional Neural Network model using Google’s TensorFlow deep learning framework for the identification and classification of the Fall Army Worm moth. Due to the lack of a sufficiently large training dataset and of good computing power, we used transfer learning, which is the process of reusing a model trained on one task as the starting point for a model on a second task. Google’s pre-trained InceptionV3 model was used as the underlying model. Data was collected from four sources, namely the field, a lab setup, crawling the internet, and data augmentation. We present the results of the best three trials in terms of training accuracy, after several attempts to find the best learning rate and number of training steps. The best model gave an average prediction accuracy of 82% and a 32% average prediction accuracy on false positives. The results show that it is possible to automate the identification and classification of the Fall Army Worm moth using Convolutional Neural Networks.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_17-A_Convolutional_Neural_Network_for_Automatic_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cancer Classification from DNA Microarray Data using mRMR and Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100716</link>
        <id>10.14569/IJACSA.2019.0100716</id>
        <doi>10.14569/IJACSA.2019.0100716</doi>
        <lastModDate>2019-07-31T10:57:47.5370000+00:00</lastModDate>
        
        <creator>M A H Akhand</creator>
        
        <creator>Md. Asaduzzaman Miah</creator>
        
        <creator>Mir Hussain Kabir</creator>
        
        <creator>M. M. Hafizur Rahman</creator>
        
        <subject>Cancer classification; gene expression data; minimum redundancy maximum relevance method; artificial neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Cancer is the uncontrolled growth of abnormal cells in the body and is a major cause of death nowadays. Notably, cancer treatment is much easier in the initial stage than after the disease spreads. DNA microarray-based gene expression profiling has become an efficient technique for cancer identification at an early stage, and a number of studies are available in this regard. Existing methods use different feature selection methods to select relevant genes and then employ distinct classifiers to identify cancer. This study uses the information-theoretic minimum Redundancy Maximum Relevance (mRMR) method to select important genes and then employs an artificial neural network (ANN) for cancer classification. The proposed mRMR-ANN method has been tested on a suite of benchmark datasets of various cancers. Experimental results reveal the proposed method to be effective for cancer classification when its performance is compared with several related existing methods.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_16-Cancer_Classification_from_DNA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Integrated Cloud-based Framework for Securing Dataflow of Wireless Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100715</link>
        <id>10.14569/IJACSA.2019.0100715</id>
        <doi>10.14569/IJACSA.2019.0100715</doi>
        <lastModDate>2019-07-31T10:57:47.5370000+00:00</lastModDate>
        
        <creator>Habibah AL-Harbi</creator>
        
        <creator>Khalil H. A. Al-Shqeerat</creator>
        
        <creator>Ibrahim S. Alsukayti</creator>
        
        <subject>Framework; security; wireless sensor; cloud computing; data integrity; availability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The cloud computing environment has developed rapidly and has become a popular trend in recent years. It provides on-demand services to several applications with access to an unlimited number of resources such as servers, storage, and networks. Wireless Sensor Networks, on the other hand, have progressed enormously in various applications, producing a considerable amount of sensor data. Sensor networks are based on a group of interconnected small-size sensor nodes that can be distributed over different geographical areas to observe environmental and physical phenomena. Nevertheless, they have limitations concerning power, storage, and scalability that need to be addressed adequately. Integrating wireless sensor networks with cloud computing can overcome these problems, as cloud computing provides a more secure and highly available platform for the effective management of sensor data. This paper proposes a framework to secure the dataflow of sensor devices from wireless sensor networks to cloud computing using an integrated environment. The framework presents an authentication scheme to validate the identity of sensor devices connected to the cloud environment. Furthermore, it provides a secure environment with high availability and data integrity.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_15-Developing_an_Integrated_Cloud_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication Disconnection Prevention System by Bandwidth Depression-Type Traffic Measurement in a Multi-Robot Environment using an LCX Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100714</link>
        <id>10.14569/IJACSA.2019.0100714</id>
        <doi>10.14569/IJACSA.2019.0100714</doi>
        <lastModDate>2019-07-31T10:57:47.5070000+00:00</lastModDate>
        
        <creator>Kei Sawai</creator>
        
        <creator>Satoshi Aoyama</creator>
        
        <creator>Takumi Tamamoto</creator>
        
        <creator>Tatsuo Motoyoshi</creator>
        
        <creator>Hiroyuki Masuta</creator>
        
        <creator>Ken’ichi Koyanagi</creator>
        
        <creator>Toru Oshima</creator>
        
        <subject>Multi-robot; tele-operation; leaky coaxial cable; LCX networks; operability; broadcast packets; transmittability of broadcast packets; network disconnection prevention; disaster reduction activity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this paper, we propose and develop a method for determining the transmission amount of each mobile robot connected to a network constructed with a leaky coaxial cable (LCX) by using broadcast packets. Tele-operation of mobile robots using an LCX network is a more effective information collection method in closed spaces than existing methods, in terms of maintaining the mobile robots’ running performance and the stability of the communication quality for disaster reduction activity. However, when the transmission and reception of information exceed the maximum transmission amount, communication disconnection and transmission amount reduction occur because of band division in the communication path, and there is a risk that mobile robots will be separated from the LCX network. Therefore, to prevent network division and the decrease of the transmission amount during multi-robot operation on an LCX network, we propose a method for determining the transmission amount of each mobile robot using broadcast packets. The proposed method is evaluated on an LCX network, and its effectiveness is confirmed by evaluating the transmittability of broadcast packets and the operability of the mobile robots.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_14-Communication_Disconnection_Prevention_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobile Robot Teleoperation System with a Wireless Communication Infrastructure using a Leaky Coaxial Cable based on TCP/IP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100713</link>
        <id>10.14569/IJACSA.2019.0100713</id>
        <doi>10.14569/IJACSA.2019.0100713</doi>
        <lastModDate>2019-07-31T10:57:47.4900000+00:00</lastModDate>
        
        <creator>Kei Sawai</creator>
        
        <creator>Satoshi Aoyama</creator>
        
        <creator>Tatsuo Motoyoshi</creator>
        
        <creator>Toru Oshima</creator>
        
        <creator>Ken’ichi Koyanagi</creator>
        
        <creator>Hiroyuki Masuta</creator>
        
        <creator>Takumi Tamamoto</creator>
        
        <subject>Mobile robot; teleoperation; leaky coaxial cable; teleoperation infrastructure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In this study, we propose and develop a wireless teleoperation system for mobile robots using a leaky coaxial cable (LCX) as a wireless communication infrastructure. In closed spaces resulting from disasters, problems have been reported such as cable entanglement, disconnection of wired communications, and problems with teleoperation (e.g., unstable communication quality for wireless communications). In this paper, we propose a communication infrastructure system for the teleoperation of a mobile robot using LCXs that considers the above issues. In addition, the communication quality was measured to assess the operability of the mobile robot by constructing an IEEE 802.11b/g/n network using an LCX, and the effectiveness of the proposed system in an actual environment was confirmed. In the evaluation of the communication quality, bandwidth compression, throughput values, and packet jitter were measured as evaluation items at the packet level to objectively assess the teleoperation controllability.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_13-A_Mobile_Robot_Teleoperation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blood Diseases Detection using Classical Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100712</link>
        <id>10.14569/IJACSA.2019.0100712</id>
        <doi>10.14569/IJACSA.2019.0100712</doi>
        <lastModDate>2019-07-31T10:57:47.4900000+00:00</lastModDate>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <creator>Wael Hassan Gomaa</creator>
        
        <subject>Machine learning; classification algorithms; decision trees; KNN; k-means; blood disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Blood analysis is an essential indicator for many diseases; it contains several parameters that are signs of specific blood diseases. To predict a disease from a blood analysis, patterns that lead to identifying the disease precisely should be recognized. Machine learning is the field responsible for building models that predict an output based on previous data. The accuracy of machine learning algorithms depends on the quality of the data collected for the learning process; this research presents a novel benchmark dataset containing 668 records. The dataset was collected and verified by expert physicians from highly trusted sources. Several classical machine learning algorithms were tested and achieved promising results.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_12-Blood_Diseases_Detection_using_Classical_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traceability Establishment and Visualization of Software Artefacts in DevOps Practice: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100711</link>
        <id>10.14569/IJACSA.2019.0100711</id>
        <doi>10.14569/IJACSA.2019.0100711</doi>
        <lastModDate>2019-07-31T10:57:47.4600000+00:00</lastModDate>
        
        <creator>D A Meedeniya</creator>
        
        <creator>I. D. Rubasinghe</creator>
        
        <creator>I. Perera</creator>
        
        <subject>Software traceability; visualization; comparative study; DevOps; continuous software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The DevOps-based software process has become popular with the vision of effective collaboration between the development and operations teams that continuously integrates frequent changes. Traceability manages artefact consistency during a software process. This paper explores trace-link creation and visualization between software artefacts, existing tool support, quality aspects and applicability in a DevOps environment. As the novelty of this study, we identify the challenges that limit traceability considerations in DevOps and suggest research directions. Our methodology consists of concept identification, state-of-practice exploration and analytical review. Despite the existing related work, there is a lack of tool support for traceability management between heterogeneous artefacts in software development with DevOps practices. Although many existing studies have low industrial relevance, a few proprietary traceability tools have shown high relevance. The lack of evidence of related applications indicates the need for a generalized traceability approach. Accordingly, we conclude that software artefact traceability is maturing and being applied collaboratively in the software process. This work can be extended to explore features such as artefact change impact analysis, change propagation, and continuous integration to improve software development in DevOps environments.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_11-Traceability_Establishment_and_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fixation Detection with Ray-casting in Immersive Virtual Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100710</link>
        <id>10.14569/IJACSA.2019.0100710</id>
        <doi>10.14569/IJACSA.2019.0100710</doi>
        <lastModDate>2019-07-31T10:57:47.4430000+00:00</lastModDate>
        
        <creator>Najood Alghamdi</creator>
        
        <creator>Wadee Alhalabi</creator>
        
        <subject>Eye Movement; Eye Tracking; Virtual Reality; Fixation Detection; HMD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>This paper demonstrates the application of a proposed eye fixation detection algorithm to eye movement recorded during eye gaze input within immersive Virtual Reality and compares it with the standard frame-by-frame analysis for validation. Pearson correlations and a paired-samples t-test indicated strong correlations between the two analysis methods in terms of fixation duration. The results showed that the principle of eye movement event detection in 2D can be applied successfully in a 3D environment and ensures efficient detection when combined with ray-casting and event time.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_10-Fixation_Detection_with_Ray_Casting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision based Indoor Localization Method via Convolution Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100709</link>
        <id>10.14569/IJACSA.2019.0100709</id>
        <doi>10.14569/IJACSA.2019.0100709</doi>
        <lastModDate>2019-07-31T10:57:47.4270000+00:00</lastModDate>
        
        <creator>Zeyad Farisi</creator>
        
        <creator>Tian Lianfang</creator>
        
        <creator>Li Xiangyang</creator>
        
        <creator>Zhu Bin</creator>
        
        <subject>Indoor localization; VGG16NET; transfer learning; ArcFace classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Existing indoor localization methods face bottleneck constraints such as the multipath effect for Wi-Fi-based methods, high cost for ultra-wide-band-based methods, and poor anti-interference for Bluetooth-based methods. To avoid these problems, a vision-based indoor localization method is proposed. Firstly, the whole deployment environment is divided into several regions, and each region is assigned a location center. Then, in offline mode, VGG16NET is pre-trained on the ImageNet dataset and fine-tuned on a custom image dataset for indoor localization. In online mode, the fully trained and converged VGG16NET takes as input a video stream captured by the front RGB camera of a mobile robot and outputs features specific to the current location. These features are then fed to an ArcFace classifier, which outputs the current location of the mobile robot. Experimental results show that our method can accurately estimate the location of a mobile object with imaging capability in cluttered, unstructured scenes without any additional device. The localization accuracy reaches 94.7%.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_9-Vision_based_Indoor_Localization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Effect of Packet Corruption Ratio on Quality of Experience (QoE) in Video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100708</link>
        <id>10.14569/IJACSA.2019.0100708</id>
        <doi>10.14569/IJACSA.2019.0100708</doi>
        <lastModDate>2019-07-31T10:57:47.3970000+00:00</lastModDate>
        
        <creator>Jonah Joshua</creator>
        
        <creator>Akpovi Ominike</creator>
        
        <creator>Oludele Awodele</creator>
        
        <creator>Achimba Ogbonna</creator>
        
        <subject>Network Emulator; Packet Corruption Ratio (PCR); Quality of Experience (QoE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The volume of Internet video traffic, which consists of video downloaded or streamed from the Internet, is projected to increase from 42,029 PB per month in 2016 to 159,161 PB per month in 2021, representing a 31% Compound Annual Growth Rate (CAGR). The market for mobile network operators is unpredictable, fast-paced and very competitive. End users now have more options when choosing service providers, and with superior network Quality of Experience (QoE), service providers can increase margins by charging more for better quality. Packet corruption occurs when the receiver cannot correctly decode transmitted bits. This study identified the threshold at which the QoE of video streaming services becomes unacceptable due to the effect of packet corruption. Several experiments were carried out on video streaming services, introducing disturbances to evaluate user satisfaction using mean opinion scores. The Network Emulator (NetEm) tool was used to create the packet corruption experienced during the video sessions, and the QoE for different packet corruption percentages was established. From the experiments conducted, we found that user QoE decreased as the Packet Corruption Ratio (PCR) increased. With knowledge of this effect, service providers can keep the PCR within acceptable limits end-to-end, ultimately leading to superior QoE for end users, which in turn translates to an improved subscriber base and profitability.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_8-Measuring_the_Effect_of_Packet_Corruption_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FLACC: Fuzzy Logic Approach for Congestion Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100707</link>
        <id>10.14569/IJACSA.2019.0100707</id>
        <doi>10.14569/IJACSA.2019.0100707</doi>
        <lastModDate>2019-07-31T10:57:47.3800000+00:00</lastModDate>
        
        <creator>Mahmoud Baklizi</creator>
        
        <subject>Congestion; Network Result Performance; GREDFL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The popularity of network applications has increased the number of packets travelling through the routers in networks. This traffic consumes most of the resources in such networks and consequently leads to congestion, which worsens network performance measures such as delay, packet loss and bandwidth. This study proposes a new method called the Fuzzy Logic Approach for Congestion Control (FLACC), which uses fuzzy logic to decrease delay and packet loss and thereby improve network performance. In addition, FLACC employs average queue length (aql) and packet loss (PL) as input linguistic variables to control congestion at early stages. In this study, the proposed and compared methods were simulated and evaluated. Results reveal that Fuzzy Logic Gentle Random Early Detection (FLGRED) performed better than Gentle Random Early Detection (GRED) and GRED Fuzzy Logic in terms of delay and packet loss, particularly when the router buffer was heavily congested.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_7-FLACC_Fuzzy_Logic_Approach_for_Congestion_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart City Parking Lot Occupancy Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100706</link>
        <id>10.14569/IJACSA.2019.0100706</id>
        <doi>10.14569/IJACSA.2019.0100706</doi>
        <lastModDate>2019-07-31T10:57:47.3670000+00:00</lastModDate>
        
        <creator>Paula Tatulea</creator>
        
        <creator>Florina Calin</creator>
        
        <creator>Remus Brad</creator>
        
        <creator>Lucian Br&#226;ncovean</creator>
        
        <creator>Mircea Greavu</creator>
        
        <subject>Smart city; car parking; occupancy; monitoring system; image features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>In the context of Smart City projects, the management of parking lots is one of the main concerns of local administrations and industrial solution providers. In this respect, we present an image processing application that overcomes the issues of classical electro-mechanical solutions and employs the feed of a surveillance camera. The final web-based interface provides clients with the real-time availability and position of parking spaces. The proposed method uses a series of feature measures in order to speed up and accurately classify the occupancy of each space. Evaluated on a published benchmark, our method has proved to provide very accurate results and has been extensively tested on two proprietary parking locations.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_6-Smart_City_Parking_Lot_Occupancy_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Adoption of Smart Manufacturing Systems: A Development Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100705</link>
        <id>10.14569/IJACSA.2019.0100705</id>
        <doi>10.14569/IJACSA.2019.0100705</doi>
        <lastModDate>2019-07-31T10:57:47.3330000+00:00</lastModDate>
        
        <creator>Moamin A Mahmoud</creator>
        
        <creator>Jennifer Grace</creator>
        
        <subject>Component; smart manufacturing; industry 4.0; development framework; simulation-based evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Today, a new era of manufacturing innovation has been introduced as Smart Manufacturing Systems (SMS), or Industry 4.0. Many studies have discussed the different characteristics and technologies associated with SMS; however, little attention has been devoted to studying the development process when establishing a new SMS. The study&#39;s objective is to propose a development framework that increases the adoption and awareness of Industry 4.0 among manufacturers and aids decision-makers in designing better SMS capabilities. The framework consists of three phases: an iterative process of application modelling; evaluation to ensure optimal configuration and adoption; and finally implementation. The proposed framework is intended to assist industry management in planning for the adoption of technology, whether establishing a new SMS or assessing the needs of existing ones. Indirectly, more industries will gain benefits that support their initiatives to transform into Industry 4.0.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_5-Towards_the_Adoption_of_Smart_Manufacturing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Inter-Component Vulnerabilities in Event-based Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100704</link>
        <id>10.14569/IJACSA.2019.0100704</id>
        <doi>10.14569/IJACSA.2019.0100704</doi>
        <lastModDate>2019-07-31T10:57:47.3200000+00:00</lastModDate>
        
        <creator>Youn Kyu Lee</creator>
        
        <subject>Event-based system; program analysis; software security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Event-based systems (EBS) have become popular because of their high flexibility, scalability, and adaptability. These advantages are enabled by their communication mechanism: implicit invocation and implicit concurrency between components. This communication mechanism is based on non-determinism in event processing, which can introduce inherent security vulnerabilities into a system, referred to as event attacks. An event attack is a particular type of attack that can abuse, incapacitate, and damage a target system by exploiting the system&#39;s event-based communication model. Event attacks are hard to prevent because they are carried out in a way that does not differ from ordinary event-based communication. While a number of techniques have focused on security threats in EBS, they either do not appropriately resolve event attack issues or suffer from inaccuracy in detecting and preventing event attacks. To address the risk of event attacks, I present a novel vulnerability detection technique for EBSs implemented using a message-oriented middleware platform. My technique has been evaluated on 25 open-source benchmark apps and eight real-world EBSs. The evaluation exhibited my technique&#39;s higher accuracy in detecting event attack vulnerabilities compared with existing techniques, as well as its applicability to real-world EBSs.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_4-Detecting_Inter_Component_Vulnerabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Software Deformity Prone Datasets with Use of AttributeSelectedClassifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100703</link>
        <id>10.14569/IJACSA.2019.0100703</id>
        <doi>10.14569/IJACSA.2019.0100703</doi>
        <lastModDate>2019-07-31T10:57:47.3030000+00:00</lastModDate>
        
        <creator>Maaz Rasheed Malik</creator>
        
        <creator>Liu Yining</creator>
        
        <creator>Salahuddin Shaikh</creator>
        
        <subject>GainRatio; CFSSubsetEval; PCA; classification; defect prediction; deformity prone; defect model; classifier; bug model; software; attributesubsetclassifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Software deformity-prone dataset models are an interesting research direction in the software world. In this research study, the class of interest is defective-model datasets. There are different techniques for predicting deformity-prone dataset models. Our proposed technique is the AttributeSelectedClassifier with selected evaluators and search methods, which reduces the dimensionality of the training and testing data provided by the defect-model NASA datasets through attribute selection before the data are passed to classifiers. We used three evaluators and two search methods. The evaluators are CFSSubsetEval, GainRatio and Principal Component Analysis (PCA); the search methods are BestFirst and Ranker. We used 12 different classifiers to analyze the performance of these three evaluators with the search methods. The experimental results are measured by True Positive rate (TP-Rate), positive accuracy, Area Under the ROC Curve and correctly classified instances. The results showed that CFSSubsetEval and GainRatio perform better with almost all classifiers. The Hoeffding tree, Naive Bayes, Multiclass, IBK and Randomizable Filtered classifiers increased positive accuracy under all techniques. Stacking had the worst performance in positive accuracy and TP-Rate across all techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_3-Analysis_of_Software_Deformity_Prone_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Websites Detection using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100702</link>
        <id>10.14569/IJACSA.2019.0100702</id>
        <doi>10.14569/IJACSA.2019.0100702</doi>
        <lastModDate>2019-07-31T10:57:47.2730000+00:00</lastModDate>
        
        <creator>Arun Kulkarni</creator>
        
        <creator>Leonard L. Brown III</creator>
        
        <subject>Phishing websites; classification; features; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>Tremendous resources are spent by organizations guarding against and recovering from cybersecurity attacks by online hackers who gain access to sensitive and valuable user data. Many cyber infiltrations are accomplished through phishing attacks where users are tricked into interacting with web pages that appear to be legitimate. In order to successfully fool a human user, these pages are designed to look like legitimate ones. Since humans are so susceptible to being tricked, automated methods of differentiating between phishing websites and their authentic counterparts are needed as an extra line of defense. The aim of this research is to develop these methods of defense utilizing various approaches to categorize websites. Specifically, we have developed a system that uses machine learning techniques to classify websites based on their URL. We used four classifiers: the decision tree, Na&#239;ve Bayesian classifier, support vector machine (SVM), and neural network. The classifiers were tested with a data set containing 1,353 real world URLs where each could be categorized as a legitimate site, suspicious site, or phishing site. The results of the experiments show that the classifiers were successful in distinguishing real websites from fake ones over 90% of the time.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_2-Phishing_Websites_Detection_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Geo-Location Routing Protocol for Indoor and Outdoor Positioning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100701</link>
        <id>10.14569/IJACSA.2019.0100701</id>
        <doi>10.14569/IJACSA.2019.0100701</doi>
        <lastModDate>2019-07-31T10:57:47.2100000+00:00</lastModDate>
        
        <creator>Sania Mushtaq</creator>
        
        <creator>Dr. Gasim Alandjani</creator>
        
        <creator>Saba Farooq Abbasi</creator>
        
        <creator>Dr. Nasser Abosaq</creator>
        
        <creator>Dr. Adeel Akram</creator>
        
        <creator>Dr. Shahbaz Pervez</creator>
        
        <subject>Internet of Things (IoT); Network Simulator (NS-2); Routing; Geocast Adaptive Mesh Environment for Routing (GAMER); Mobility; Global Positioning System (GPS); Framework for Internal Navigation and Discovery (FIND)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(7), 2019</description>
        <description>The Internet of Things (IoT) essentially demands smart connectivity and contextual awareness in current networks, with low-power and cost-effective wireless solutions. Routing is the backbone of the system, controlling the flow of transmission. This work presents a comparison of performance investigations of a location-based routing protocol, the Geocast Adaptive Mesh Environment for Routing (GAMER), using contextual information collected from the Global Positioning System (GPS) and the Framework for Internal Navigation and Discovery (FIND), respectively. The systems are evaluated on various metrics, i.e., accuracy, packet delivery ratio and packet overhead, by means of Network Simulator (NS-2). FIND shows enhanced performance in most cases compared to GPS for indoor and outdoor environments. The results of this research can be deployed in areas such as in-building navigation, hospital patient tracking, Smart City context-aware service provisioning and Industry 4.0 deployments.</description>
        <description>http://thesai.org/Downloads/Volume10No7/Paper_1-Hybrid_Geo_Location_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Machine Learning Technique to Classify and Recognize Handwritten and Printed Digits of Sudoku Puzzle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100682</link>
        <id>10.14569/IJACSA.2019.0100682</id>
        <doi>10.14569/IJACSA.2019.0100682</doi>
        <lastModDate>2019-07-02T08:03:58.8670000+00:00</lastModDate>
        
        <creator>Sang C. Suh</creator>
        
        <creator>Aghalya Dharshni Manmatharaj</creator>
        
        <subject>Convolutional Neural Network (CNN); Artificial Neural Network (ANN); Deep Belief Network (DBN); Optical character recognition (OCR), Open source computer vision (OpenCV); Convolutional Deep Belief Network (CDBN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In this paper, we propose a convolutional neural network model to recognize and classify handwritten and printed digits present in Sudoku puzzles captured using smartphone cameras from various magazines and printed papers. The Sudoku puzzle grid is detected using image processing and filtering techniques such as adaptive thresholding. The system described in this paper was thoroughly tested on a set of 100 Sudoku images captured with smartphone cameras under varying conditions and shows promising results with 98% accuracy. Our model can handle the more complex conditions often present in images taken with phone cameras, as well as the complexity of mixed printed and handwritten digits.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_82-An_Efficient_Machine_Learning_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusing Identity Management, HL7 and Blockchain into a Global Healthcare Record Sharing Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100681</link>
        <id>10.14569/IJACSA.2019.0100681</id>
        <doi>10.14569/IJACSA.2019.0100681</doi>
        <lastModDate>2019-06-29T10:16:23.0630000+00:00</lastModDate>
        
        <creator>Mohammad A. R. Abdeen</creator>
        
        <creator>Toqeer Ali</creator>
        
        <creator>Yasar Khan</creator>
        
        <creator>M.C.E. Yagoub</creator>
        
        <subject>Healthcare; blockchain; electronic health record; identity management; Health Level Seven (HL7)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Healthcare record sharing among various medical roles is a critical and challenging research problem, especially in today’s ever-changing global IT solutions. The emergence of blockchain as a new enabling technology has brought radical changes to numerous business applications, including healthcare. Blockchain is a trusted distributed ledger that forms a decentralized infrastructure. There have been several proposals for sharing critical healthcare records over a blockchain infrastructure without requiring prior knowledge of or trust in the parties involved (patients, service providers, and insurance companies). Another important issue is securely sharing medical records across countries for travelling patients, to ensure an integrated and ubiquitous healthcare service. In this paper, we present a globally integrated healthcare record sharing architecture based on blockchain and an HL7 client. Healthcare records are stored in the hosting country and are not stored on the blockchain. The architecture makes the medical records of travelling patients temporarily available after the necessary authentication. The actual authorisation process is performed on a federated identity management system, such as Shibboleth. Though there are similarities with identity management systems, our system is unique in that it involves the patient in the permission process and discloses to them the identities of the entities that accessed their health records. Our solution also improves performance and guarantees privacy and security through the use of blockchain and the identity management system.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_81-Fusing_Identity_Management_HL7_and_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>School Manager System based on a Personal Information Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100680</link>
        <id>10.14569/IJACSA.2019.0100680</id>
        <doi>10.14569/IJACSA.2019.0100680</doi>
        <lastModDate>2019-06-29T10:16:23.0300000+00:00</lastModDate>
        
        <creator>Elena Fabiola Ruiz Ledesma</creator>
        
        <creator>Elizabeth Moreno Galv&#225;n</creator>
        
        <creator>Juan Jes&#250;s Guti&#233;rrez Garc&#237;a</creator>
        
        <creator>Chadwick Carreto Arellano</creator>
        
        <subject>Architecture; mobile computing; ubiquitous computing; information management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The current technological revolution has provided multiple benefits to human activities. For their part, organizations have needed to change their business requirements, which has led them to migrate to systems and services based on more complex models. Educational institutions have experienced the impact of technological progress because, regarding school management, information handling must be performed through automated processes that also protect data from any human or cyber attack. The purpose of this paper is to show the development and integration of a system dedicated to managing personal information within a school environment, through the implementation of an information management architecture whose main goal is to create certified documents that can be shared with other information systems in the same trust environment. The research is descriptive in nature, as it aims to detect abnormalities in the characteristics of PIMS, describe their associations, and prove or reject the hypothesis so that it can be compared with subsequent studies.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_80-School_Manager_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtualizing a Cluster to Optimize the Problems of High Scientific Complexity within an Organization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100679</link>
        <id>10.14569/IJACSA.2019.0100679</id>
        <doi>10.14569/IJACSA.2019.0100679</doi>
        <lastModDate>2019-06-29T10:16:23.0170000+00:00</lastModDate>
        
        <creator>Enrique Lee Huaman&#237;</creator>
        
        <creator>Patricia Condori</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>High-performance cluster; distributed programming; computational parallelism; supercomputer; high-efficiency computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The Image Processing Research Laboratory (INTI-Lab) of the Universidad de Ciencias y Humanidades has several computer science research projects that need high computational resources. Some of these projects are related to climate prediction, molecule modeling, physical simulations, and other applications that generate a significant amount of data, raising big data issues. Despite excellent hardware features, the final result is obtained only after hours or days of calculation, depending on the algorithm's complexity; for this reason, it is not possible to deliver optimal solutions in an acceptable time. In this work, we propose the virtualization and configuration of a high-performance cluster (HPC), known commercially as a &quot;supercomputer&quot;, composed of several computers connected over a high-speed network so that they behave like a single computer. The virtualized cluster is used to run a scientific algorithm, applying performance tests on four virtual computers to demonstrate that computation time is reduced by using more machines, so that the cluster can be implemented in the institution's laboratories.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_79-Virtualizing_A_Cluster_to_Optimize_the_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Sentiment Analysis Methods on Amazon Reviews of Mobile Phones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100678</link>
        <id>10.14569/IJACSA.2019.0100678</id>
        <doi>10.14569/IJACSA.2019.0100678</doi>
        <lastModDate>2019-06-29T10:16:23.0000000+00:00</lastModDate>
        
        <creator>Sara Ashour Aljuhani</creator>
        
        <creator>Norah Saleh Alghamdi</creator>
        
        <subject>Bag-of-words; TF-IDF; GloVe; word2vec; logistic regression; stochastic gradient descent; naive Bayes; Convolutional Neural Network; log loss; LIME</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Consumer reviews serve as feedback for businesses in terms of performance, product quality, and consumer service. In this research, we predict consumer opinion from mobile phone reviews, and in addition provide an analysis of the most important factors behind reviews being classified as positive, negative, or neutral. This insight could help companies improve their products as well as help potential buyers make the right decision. The research presented in this paper was carried out as follows: the data was pre-processed before being converted from text to vector representation using a range of feature extraction techniques such as bag-of-words, TF-IDF, GloVe, and word2vec. We study the performance of different machine learning algorithms, such as logistic regression, stochastic gradient descent, naive Bayes, and convolutional neural networks. In addition, we evaluate our models using accuracy, F1-score, precision, recall, and the log loss function. Moreover, we apply the LIME technique to provide analytical reasons for the reviews being classified as positive, negative, or neutral. Our experiments revealed that a convolutional neural network with word2vec as the feature extraction technique provides the best results for both the unbalanced and balanced versions of the dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_78-A_Comparison_of_Sentiment_Analysis_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-based Automatic Video Genre Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100677</link>
        <id>10.14569/IJACSA.2019.0100677</id>
        <doi>10.14569/IJACSA.2019.0100677</doi>
        <lastModDate>2019-06-29T10:16:22.9830000+00:00</lastModDate>
        
        <creator>Faryal Shamsi</creator>
        
        <creator>Sher Muhammad Daudpota</creator>
        
        <creator>Sarang Shaikh</creator>
        
        <subject>Motion detection; scene detection; shot boundary detection; video genre identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Video content is growing enormously with the heavy usage of the internet and social media websites, and proper searching and indexing of such video content is a major challenge. Existing video search largely relies on information provided by the user, such as the video caption, description, and subsequent comments on the video. If users provide insufficient or incorrect information about the video genre, the video may not be indexed correctly and is ignored during search and retrieval. This paper proposes a mechanism to understand the contents of a video and categorize it as Music Video, Talk Show, Movie/Drama, Animation, or Sports. For video classification, the proposed system uses audio and visual features such as audio signal energy, zero crossing rate, and spectral flux from audio, and shot boundary, scene count, and actor motion from video. The system is tested on popular Hollywood, Bollywood, and YouTube videos, giving an accuracy of 96%.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_77-Content_based_Automatic_Video_Genre_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing Multi Shippers Mechanism for Decentralized Cash on Delivery System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100676</link>
        <id>10.14569/IJACSA.2019.0100676</id>
        <doi>10.14569/IJACSA.2019.0100676</doi>
        <lastModDate>2019-06-29T10:16:22.9530000+00:00</lastModDate>
        
        <creator>Hai Trieu Le</creator>
        
        <creator>Ngoc Tien Thanh Le</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Ha Xuan Son</creator>
        
        <creator>Thai Tam Huynh</creator>
        
        <creator>The Phuc Nguyen</creator>
        
        <subject>Blockchain; smart contract; Cash on Delivery (COD); hyperledger fabric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>One of the major problems of e-commerce globally is the selling and buying of goods over the Internet among parties who may not trust their partners. Cash on delivery allows customers to pay in cash when the product is delivered to their home or a location they choose; customers thus receive the goods before making a payment. This paper investigates a critical verification-process issue in the cash on delivery system. In particular, we propose a multi-shipper mechanism, built on blockchain technology, smart contracts, and the Hyperledger Fabric platform, to achieve distributed and trustworthy verification across participants in decentralized markets. Our proposed mechanism not only ensures the benefits of the seller but also prevents shipper fraud. The solution leverages the consistency and robustness of decentralized markets, where trust is flexible and effectively controlled. To demonstrate the application and implementation of the proposed framework, we conduct several case studies on real-world transaction datasets from a local computer retailer. We also provide our source code for further reproducibility and development. Our conclusion is that the continued integration of the multi-shipper mechanism and blockchain technology in decentralized markets will cause significant transformations across several disciplines.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_76-Introducing_Multi_Shippers_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Competitive Algorithms for Online Conversion Problem with Interrelated Prices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100675</link>
        <id>10.14569/IJACSA.2019.0100675</id>
        <doi>10.14569/IJACSA.2019.0100675</doi>
        <lastModDate>2019-06-29T10:16:22.9370000+00:00</lastModDate>
        
        <creator>Javeria Iqbal</creator>
        
        <creator>Iftikhar Ahmad</creator>
        
        <creator>Asadullah Shah</creator>
        
        <subject>Time series search; one-way trading; online algorithms; update model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The classical uni-directional conversion algorithms are based on the assumption that prices are arbitrarily chosen from the fixed price interval [m,M], where m and M represent the estimated lower and upper bounds of possible prices, 0 &lt; m &lt;= M. The estimated interval is erroneous, and the algorithms make no attempt to update the erroneous estimates. We consider a real-world setting where prices are interrelated, i.e., each price depends on its preceding price. Under this assumption, we derive a lower bound on the competitive ratio of randomized non-preemptive algorithms. Motivated by the fixed and erroneous price bounds, we present an update model that progressively improves the bounds. Based on the update model, we propose a non-preemptive reservation price algorithm RP* and analyze it under competitive analysis. Finally, we report the findings of an experimental study conducted on real-world stock index data. We observe that RP* consistently outperforms the classical algorithm.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_75-Competitive_Algorithms_for_Online_Conversion_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Framework for Tweet Level Sentiment Classification using Recursive Text Pre-Processing Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100674</link>
        <id>10.14569/IJACSA.2019.0100674</id>
        <doi>10.14569/IJACSA.2019.0100674</doi>
        <lastModDate>2019-06-29T10:16:22.9070000+00:00</lastModDate>
        
        <creator>Muhammad Bux Alvi</creator>
        
        <creator>Naeem A. Mahoto</creator>
        
        <creator>Mukhtiar A. Unar</creator>
        
        <creator>M. Akram Shaikh</creator>
        
        <subject>Machine learning; recursive text pre-processing; sentiment analysis; sentiment classification framework; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Around 330 million people around the globe tweet 6,000 times per second to express their feelings about a product, policy, service, or event. Twitter messages consist mainly of thoughts, mostly expressed as text, and it is an open challenge to extract insight from free text. The scope of this work is to build an effective tweet-level sentiment classification framework that may use these thoughts to learn the collective sentiment of the public on a particular subject. Furthermore, this work also analyses the impact of the proposed tweet-level recursive text pre-processing approach on the overall classification results. This work achieved up to 4 points of accuracy improvement over the baseline approach, besides reducing the feature vector space.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_74-An_Effective_Framework_for_Tweet_Level_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of a Smart Diagnostic System for Parkinson’s Patients using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100673</link>
        <id>10.14569/IJACSA.2019.0100673</id>
        <doi>10.14569/IJACSA.2019.0100673</doi>
        <lastModDate>2019-06-29T10:16:22.8900000+00:00</lastModDate>
        
        <creator>Asma Channa</creator>
        
        <creator>Attiya Baqai</creator>
        
        <creator>Rahime Ceylan</creator>
        
        <subject>Parkinson patients; force sensors; machine learning; Wavelet Packet Transform (WPT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Gait disability detection is essential for the analysis of Parkinson's disease. The motivation behind this study is to objectively and automatically differentiate between healthy subjects and Parkinson's patients using an IoT-based diagnostic framework. In this study, a total of 16 force sensors were attached to the shoes of subjects, recording the multisignal vertical ground reaction force (VGRF). From the raw sensor signals, using a 1024-sample window and the Wavelet Packet Transform (WPT), five different characteristics were derived: entropy, energy, variance, standard deviation, and waveform length; a support vector machine (SVM) was then used to distinguish Parkinson's patients from healthy subjects. The SVM was trained on 85% of the dataset and tested on the remaining 15%. The cohort comprised 93 patients with idiopathic PD (mean age: 66.3 years; 63% men and 37% women) and 73 healthy controls (mean age: 66.3 years; 55% men and 45% women). The IoT framework included all 16 sensors, of which 8 force sensors were attached to the subject's left foot and the remaining 8 to the right foot. The results show that the fifth sensor, worn on the medial part of the dorsum of the right foot and denoted R5, gives 90.3% accuracy. Hence, this study provides the insight that a single wearable force sensor may suffice to differentiate between Parkinson's patients and healthy subjects.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_73-Design_and_Application_of_a_Smart_Diagnostic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Assessment of Open Data Sets Completeness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100672</link>
        <id>10.14569/IJACSA.2019.0100672</id>
        <doi>10.14569/IJACSA.2019.0100672</doi>
        <lastModDate>2019-06-29T10:16:22.8730000+00:00</lastModDate>
        
        <creator>Abdulrazzak Ali</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <creator>Siti A. Asmai</creator>
        
        <creator>Amelia R. Ismail</creator>
        
        <subject>Data completeness; missing values; open data; open data sources; data collection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The rapid growth of open data sources is driven by free-of-charge content and ease of accessibility. While it is convenient for public data consumers to use data sets extracted from open data sources, the decision to use these data sets should be based on the data sets' quality. Several data quality dimensions such as completeness, accuracy, and timeliness are common requirements to make data fit for use. More importantly, in many cases, high-quality data sets are desirable to ensure reliable outcomes of reports and analytics. Even though many open data sources provide data quality guidelines, the responsibility to ensure data of high quality requires commitment from data contributors. In this paper, an initial investigation of the quality of open data sets in terms of the completeness dimension was conducted. In particular, missing values were measured in 20 data sets extracted from open data sources. The analysis covered all representations of missing values, not limited to nulls or blank spaces. The results exhibited a range of missing-value ratios that indicate the level of completeness of the data sets. The limited coverage of this analysis does not hinder understanding of the current level of completeness of open data sets. The findings may motivate open data providers to design initiatives that will empower data quality policies and guidelines for data contributors. In addition, this analysis may assist public data users in deciding on the acceptability of open data sets by applying the simple methods proposed in this paper or performing data cleaning actions to improve the completeness of the data sets concerned.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_72-An_Assessment_of_Open_Data_Sets_Completeness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting the Interplay among Products for Efficient Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100671</link>
        <id>10.14569/IJACSA.2019.0100671</id>
        <doi>10.14569/IJACSA.2019.0100671</doi>
        <lastModDate>2019-06-29T10:16:22.8600000+00:00</lastModDate>
        
        <creator>Anbarasu Sekar</creator>
        
        <creator>Sutanu Chakraborti</creator>
        
        <subject>Preference-based feedback; case-based conversational recommender system; evidence; trade-offs; compromise; diversity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Recommender systems are built with the aim of reducing the cognitive load on the user. An efficient recommender system should ensure that a user spends minimal time in the process. Conversational Case-Based Recommender Systems (CCBR-RSs) depend on the feedback provided by the user to learn about the user's preferences. Our goal is to use this feedback effectively by exploiting the interplay among the products to build an efficient CCBR-RS. In this work, we propose two ways of achieving that goal. In the first method, we utilize the higher-order similarity and trade-off relationships among the products to propagate the evidence obtained through user feedback. In the second method, we utilize the diversity among cases/products, along with the similarity and trade-off relationships, to make the best use of the feedback provided by the user.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_71-Exploiting_the_Interplay_among_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heuristics Applied to Mutation Testing in an Impure Functional Programming Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100670</link>
        <id>10.14569/IJACSA.2019.0100670</id>
        <doi>10.14569/IJACSA.2019.0100670</doi>
        <lastModDate>2019-06-29T10:16:22.8430000+00:00</lastModDate>
        
        <creator>Juan Guti&#233;rrez-C&#225;rdenas</creator>
        
        <creator>Hernan Quintana-Cruz</creator>
        
        <creator>Diego Mego-Fernandez</creator>
        
        <creator>Serguei Diaz-Baskakov</creator>
        
        <subject>Mutation testing; heuristics; functional programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The task of elaborating accurate test suites for program testing can be computationally extensive. Mutation testing is not immune to the problem of being a computational and time-consuming task, so it has found relief in the use of heuristic techniques. The use of Genetic Algorithms in mutation testing has proved useful for probing test suites, but it has mainly been confined to imperative programming paradigms. Therefore, we decided to test the feasibility of using Genetic Algorithms to perform mutation testing in functional programming environments. We tested our proposal by making graph representations of four different functional programs and applying a Genetic Algorithm to generate a population of mutant programs. We found that it is possible to obtain a set of mutants that can find flaws in test suites in functional programming languages. Additionally, we observed that as the number of instructions in the source code increases, it becomes simpler for a genetic algorithm to find a mutant that evades all of the test cases.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_70-Heuristics_Applied_to_Mutation_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Modal Biometric: Bi-Directional Empirical Mode Decomposition with Hilbert-Hung Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100669</link>
        <id>10.14569/IJACSA.2019.0100669</id>
        <doi>10.14569/IJACSA.2019.0100669</doi>
        <lastModDate>2019-06-29T10:16:22.8130000+00:00</lastModDate>
        
        <creator>Gavisiddappa</creator>
        
        <creator>Chandrashekar Mohan Patil</creator>
        
        <creator>Shivakumar Mahadevappa</creator>
        
        <creator>Pramod KumarS</creator>
        
        <subject>Biometric Systems (BS); multimodal biometrics; bi-directional empirical mode decomposition; Hilbert-Huang transform; Multi-Class Support Vector Machines technique (MC-SVM); 2000 Mathematics Subject Classification: 92C55, 94A08, 92C10</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Biometric systems (BS) help in the recognition of individual persons based on biological traits such as ears, veins, signatures, voices, typing styles, gaits, etc. As unimodal BS do not give adequate security and recognition accuracy, multimodal BS were introduced. In this paper, biological characteristics such as face, fingerprint, and iris are used in a feature-level-fusion-based multimodal BS to overcome those issues. Feature extraction is performed by the Bi-directional Empirical Mode Decomposition (BEMD) and Grey Level Co-occurrence Matrix (GLCM) algorithms. The Hilbert-Huang transform (HHT) is applied after feature extraction to obtain local features such as local amplitude and phase. The combination of BEMD, HHT, and GLCM is used to achieve effective accuracy in the classification process. The MMB-BEMD-HHT method uses the Multi-Class Support Vector Machine technique (MC-SVM) as a classifier. The false rejection ratio has been improved using feature-level fusion (FLF) and the MC-SVM technique. The performance of the MMB-BEMD-HHT method is measured using parameters such as False Acceptance Ratio (FAR), False Rejection Ratio (FRR), and accuracy, and compared with an existing system. The MMB-BEMD-HHT method gave 96% accuracy in identifying the biometric traits of individual persons.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_69-Multi_Modal_Biometric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Networks in Predicting Missing Text in Arabic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100668</link>
        <id>10.14569/IJACSA.2019.0100668</id>
        <doi>10.14569/IJACSA.2019.0100668</doi>
        <lastModDate>2019-06-29T10:16:22.7970000+00:00</lastModDate>
        
        <creator>Adnan Souri</creator>
        
        <creator>Mohamed Alachhab</creator>
        
        <creator>Badr Eddine Elmohajir</creator>
        
        <creator>Abdelali Zbakh</creator>
        
        <subject>Natural Language Processing; Convolutional Neural Networks; deep learning; Arabic language; text prediction; text generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Missing text prediction is one of the major concerns attracting the attention of the Natural Language Processing deep learning community. However, the majority of text prediction research is performed on languages other than Arabic. In this paper, we take a first step in training a deep learning language model on the Arabic language. Our contribution is the prediction of missing text from text documents by applying Convolutional Neural Networks (CNN) to Arabic language models. We have built CNN-based language models responding to settings specific to the Arabic language. We prepared our dataset from a large quantity of text documents freely downloaded from the Arab World Books, Hindawi foundation, and Shamela datasets. To calculate prediction accuracy, we compared documents with complete text against the same documents with missing text. We performed training, validation, and test steps at three different stages, aiming to increase prediction performance. The model was trained at the first stage on documents by the same author, at the second stage on documents from the same dataset, and finally, at the third stage, on all documents combined. The training, validation, and test steps were repeated many times, changing each time the author, the dataset, and the author-dataset combination, respectively. We also used the technique of enlarging the training data by feeding the CNN model a larger quantity of text each time. The model gave a high performance in Arabic text prediction using Convolutional Neural Networks, with an accuracy that reached 97.8% in the best case.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_68-Convolutional_Neural_Networks_in_Predicting_Missing_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Satellite Image Enhancement using Wavelet-domain based on Singular Value Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100667</link>
        <id>10.14569/IJACSA.2019.0100667</id>
        <doi>10.14569/IJACSA.2019.0100667</doi>
        <lastModDate>2019-06-29T10:16:22.7670000+00:00</lastModDate>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Ziaur Rahman</creator>
        
        <creator>Yi-Fei Pu</creator>
        
        <creator>Waheed Ahmed Abro</creator>
        
        <creator>Kanza Gulzar</creator>
        
        <subject>Satellite Images; Image Enhancement; Singular Value Decomposition (SVD); Discrete Wavelet Transforms (DWT); Stationary Wavelet Transform (SWT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Improving the quality of satellite images has been considered an essential field of research in remote sensing and computer vision, and numerous techniques and algorithms have been proposed to enhance the quality of satellite images. Satellite image enhancement is nevertheless considered a challenging task and plays an integral role in a wide range of applications. Having received significant attention in recent years, this problem is addressed here by a methodology to enhance the resolution and contrast of satellite images. First, the resolution of the image is improved. For resolution enhancement, the input image is decomposed into four frequency components (LL, LH, HL, and HH) using the stationary wavelet transform (SWT). Second, singular value matrices (SVMs) U_A and V_A, which contain high-frequency elements of the input image, are obtained using singular value decomposition (SVD). Third, the high-frequency components (LH, HL) of the input image are obtained using the discrete wavelet transform (DWT) and corrected by the SVMs and SWT. Next, the interpolation factor is added and the high-resolution image is obtained using the inverse discrete wavelet transform (IDWT). Then the contrast of the image is optimized. For contrast enhancement, the image is decomposed using the DWT into sub-bands (LL, LH, HL, and HH). Next, the singular value matrix (SVM) of the LL sub-band, which contains the illumination information, is obtained. Then, the SVM is modified to enhance the contrast. Finally, the image is reconstructed using the IDWT. In this paper, the results of the above method are compared with existing approaches. The proposed method achieves high performance and yields more insightful results than conventional techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_67-Satellite_Image_Enhancement_using_Wavelet_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Collaborating Filtering Approach using Extended Matrix Factorization and Autoencoder in Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100666</link>
        <id>10.14569/IJACSA.2019.0100666</id>
        <doi>10.14569/IJACSA.2019.0100666</doi>
        <lastModDate>2019-06-29T10:16:22.7500000+00:00</lastModDate>
        
        <creator>Mahamudul Hasan</creator>
        
        <creator>Falguni Roy</creator>
        
        <creator>Tasdikul Hasan</creator>
        
        <creator>Lafifa Jamal</creator>
        
        <subject>Recommender system; deep learning; autoencoder; matrix factorization; similarity measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>A recommender system is an approach in which users receive suggestions based on their previous preferences. Nowadays, people are overwhelmed by the huge amount of information present in any system, and it is sometimes difficult for a user to find an appropriate item by searching for the desired content. A recommender system assists users by suggesting required information or items based on similar features among users. Collaborative filtering is one of the most well-known recommender system techniques, in which recommendations are made based on similar users or similar items. Matrix factorization is an approach that decomposes a matrix into two or more matrices to generate features, while an autoencoder is a deep-learning-based technique used to find the hidden features of an object. In this paper, features are calculated using extended matrix factorization and an autoencoder, and a new similarity metric is introduced that can efficiently calculate the similarity between each pair of users. An improved prediction method is then introduced that uses the proposed similarity measure to predict ratings accurately. The experimental section shows that the proposed method outperforms existing approaches in terms of mean absolute error, precision, recall, f-measure, and average reciprocal hit rank.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_66-A_Comprehensive_Collaborating_Filtering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Smoking Area based on Fuzzy Decision Tree Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100665</link>
        <id>10.14569/IJACSA.2019.0100665</id>
        <doi>10.14569/IJACSA.2019.0100665</doi>
        <lastModDate>2019-06-29T10:16:22.7330000+00:00</lastModDate>
        
        <creator>Iswanto </creator>
        
        <creator>Kunnu Purwanto</creator>
        
        <creator>Weni Hastuti</creator>
        
        <creator>Anis Prabowo</creator>
        
        <creator>Muhamad Yusvin Mustar</creator>
        
        <subject>Fuzzy decision tree algorithm; smart smoking room; microcontroller; smoke sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Cigarette smoke is very dangerous for both active and passive smokers who smoke inside a room, because nicotine from cigarette smoke can stick to the walls or furniture and produce carcinogenic substances when it reacts with air. The carcinogenic chemicals in cigarettes are even more dangerous when cigarette smoke is trapped in a confined space. An exhaust fan is usually used in a special room for smokers to remove cigarette smoke without exchanging the air inside. A smart smoking room designed specifically for smokers was built to address this problem. The room used an ‘in and out’ exhaust fan ventilator, which rotated based on the quantity of carbon monoxide (CO) gas in the room as detected by a smoke sensor. An Arduino Uno running a fuzzy decision tree algorithm was used to control the input voltage level of the fan ventilator. The results showed that, using this tool, the cigarette smoke in the room can be controlled effectively.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_65-Smart_Smoking_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Methods that Detect Levels of Lead and its Consequent Toxicity in the Blood</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100664</link>
        <id>10.14569/IJACSA.2019.0100664</id>
        <doi>10.14569/IJACSA.2019.0100664</doi>
        <lastModDate>2019-06-29T10:16:22.7030000+00:00</lastModDate>
        
        <creator>Kevin J. Rodriguez</creator>
        
        <creator>Alicia Alva</creator>
        
        <creator>Virginia T. Santos</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Blood lead; toxicity; voltammetry; absorption spectroscopy; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The present work studies the different methods used to determine the toxicity produced by the presence of a contaminating metal in the blood. The presence of lead in the blood was taken as the main reference to focus the work, bearing in mind that metals such as cadmium (Cd) and mercury (Hg) are also toxic to health and the environment. Although the literature on these methods is extensive and, in some cases, not detailed enough to define each process, a comparative study of the most relevant methods currently in use can be carried out, with the choice of method determined by the main characteristics of each one. Although all are electrochemical processes, details such as sensitivity, cost, or even structural factors, such as having a laboratory for their development, determine which method to choose. Environmental pollution by toxic elements is very harmful to health; even small quantities can be very dangerous. These elements can be present in rivers, soil, and even in the air, and these media are more than enough to contaminate human beings, since the particles persist in both cases for many years. This remains a problem today, and therein lies the importance of this study.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_64-Comparative_Study_of_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey Energy Management Approaches in Data Centres</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100663</link>
        <id>10.14569/IJACSA.2019.0100663</id>
        <doi>10.14569/IJACSA.2019.0100663</doi>
        <lastModDate>2019-06-29T10:16:22.6870000+00:00</lastModDate>
        
        <creator>Bouchra Morchid</creator>
        
        <creator>Siham Benhadou</creator>
        
        <creator>Mariam Benhadou</creator>
        
        <creator>Abdellah Haddout</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>Data center; power management; virtual machines; physical servers &quot;hosts&quot;; energy efficiency; SLA (Service Level Agreement); PUE (Power Usage Effectiveness); QoS (Quality of Service)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Data centers are today the technological backbone of any company. However, failure to control energy consumption leads to very high operating costs and carbon dioxide emissions. On the other hand, reducing power consumption in data centers can degrade application performance and quality of service in terms of the Service Level Agreement (SLA). It is therefore essential to find a compromise between energy efficiency and resource consumption. This paper highlights the different approaches to energy management, related studies, the algorithms used, and the advantages and weaknesses of each approach related to server virtualization and the consolidation and deconsolidation of virtual machines.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_63-Survey_Energy_Management_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Passenger and Luggage Weight Monitoring System for Public Transport based on Sensing Technology: A Case of Zambia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100662</link>
        <id>10.14569/IJACSA.2019.0100662</id>
        <doi>10.14569/IJACSA.2019.0100662</doi>
        <lastModDate>2019-06-29T10:16:22.6730000+00:00</lastModDate>
        
        <creator>Apolinalious Bwalya</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Monica M. Kalumbilo</creator>
        
        <creator>David Zulu</creator>
        
        <subject>Overloading; load weight; load cells; emerging technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Overloading, i.e., exceeding the maximum load weight, is rampant on public buses in Zambia because there is currently no system to measure and monitor load weight at bus stations, apart from weighbridges on a few selected roads located far from the loading points. The aim of this study was to design and develop a passenger and luggage weight monitoring system to mitigate the challenge of overloading on public buses. To achieve this, a baseline study was conducted to understand the challenges of the current system used to manage passenger and luggage weight (i.e., load weight) on public buses. The risk factors considered to contribute to compromised road safety leading to road traffic accidents were also established from all stakeholders as follows: 54 percent human, 39 percent road/environmental, 6 percent vehicle, and 1 percent attributed to other factors. The results were then used as a basis to design and develop a load weight monitoring system (LWMS) based on sensing and other emerging technologies such as load cells, Wireless Sensor Networks (WSN), the Internet of Things (IoT), and Cloud Computing concepts to automate the measurement of load weight and the capture and transmission of data.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_62-Passenger_and_Luggage_Weight_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Circular Polarization RFID Tag for Medical Uses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100661</link>
        <id>10.14569/IJACSA.2019.0100661</id>
        <doi>10.14569/IJACSA.2019.0100661</doi>
        <lastModDate>2019-06-29T10:16:22.6400000+00:00</lastModDate>
        
        <creator>Nada Jebali</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Radio Frequency Identification (RFID); circular polarization; metal cutting through laser ablation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The aim of this paper is to present a Radio Frequency Identification (RFID) tag. The use of this kind of antenna in the medical field is of great importance in making people&#39;s lives easier and improving the way medical information is obtained. This article explains the details of the method used to obtain circular polarization using a shaped cross slot. A simulation study with SAR values is performed to assess the effects of the electromagnetic waves. In this study, a specific PIN diode, CPINUC5206-HF, has been used in order to obtain a high frequency (3 GHz). Two fabrication methods have been adopted: the printed circuit board (PCB) method and the metal cutting through laser ablation (MCTLA) method. A comparison study between the two methods has also been conducted.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_61-A_Circular_Polarization_RFID_Tag.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Technical Analysis Indicators over Equity Market (NOMU) with R Programing Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100660</link>
        <id>10.14569/IJACSA.2019.0100660</id>
        <doi>10.14569/IJACSA.2019.0100660</doi>
        <lastModDate>2019-06-29T10:16:22.6230000+00:00</lastModDate>
        
        <creator>Mohammed A. Al Ghamdi</creator>
        
        <subject>Data mining; data analysis; R programing language; Chaikin Money Flow (CMF); Stochastic Momentum Index (SMI); Relative Strength Index (RSI); Bollinger Bands (BBands); Aroon indicator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The stock market is a potent, fickle, and fast-changing domain. Unanticipated market occurrences and unstructured financial information complicate predicting future market responses. A tool that continues to be advantageous when forecasting future market trends globally is correlation analysis against significant market events. Data analysis can be used for the difficult task of forecasting whether a stock price will rise or fall, and a high number of automated exchanges in the stock market are done with advanced prognostic software. Data analysis is centered on the main idea that previously recorded data can be used to predict future patterns; this advancement is aimed at helping speculators pinpoint hidden information in real evidence that would give them financial foresight when considering their ventures of choice. This paper aims to critically investigate, develop, and judge the different systems that predict and assess future stock trades, as these systems each have their own process for foretelling fluctuations in stock prices. Several technical analysis indicators have been applied in this study, including the Chaikin Money Flow (CMF), Stochastic Momentum Index (SMI), Relative Strength Index (RSI), Bollinger Bands (BBands), and Aroon indicator. The experiments were conducted using the R programing language on two companies’ real-world datasets, obtained over two years from the Saudi stock market (NOMU), a parallel stock market with lighter listing requirements that serves as an alternative platform for companies to go public before the main market. To the best of our knowledge, this is the first such work conducted on the NOMU stock market.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_60-The_Role_of_Technical_Analysis_Indicators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Mathematical Model of Hybrid Schema Matching based on Constraints and Instances Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100659</link>
        <id>10.14569/IJACSA.2019.0100659</id>
        <doi>10.14569/IJACSA.2019.0100659</doi>
        <lastModDate>2019-06-29T10:16:22.5930000+00:00</lastModDate>
        
        <creator>Edhy Sutanta</creator>
        
        <creator>Erna Kumalasari Nurnawati</creator>
        
        <creator>Rosalia Arum Kumalasanti</creator>
        
        <subject>Constraint-based; hybrid schema matching model; instance-based; mathematical model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Schema matching is a crucial issue in applications that involve multiple databases from heterogeneous sources. Schema matching has evolved from a manual process to a semi-automated process to effectively guide users in finding commonalities between schema elements. New models are generally developed using a combination of methods to improve the effectiveness of schema matching results. Our previous research developed a prototype of hybrid schema matching utilizing a combination of a constraint-based method and an instance-based method. The innovation of this paper is a mathematical formulation of the hybrid schema matching model, so that it can be run on different cases and serve as the basis for improving the effectiveness of the output and/or the efficiency of the schema matching process. The developed mathematical model performs the main tasks of the schema matching process: matching the similarity between attributes, calculating the similarity value of each attribute pair, and specifying the matching attribute pairs. Based on the test results, the hybrid schema matching model is more effective than the constraint-based method or the instance-based method run individually, and using more matching criteria in schema matching provides better mapping results. The model developed is limited to schema matching processes on relational model databases.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_59-The_Mathematical_Model_of_Hybrid_Schema.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Depth Limitation and Splitting Criteria Optimization on Random Forest for Efficient Human Activity Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100658</link>
        <id>10.14569/IJACSA.2019.0100658</id>
        <doi>10.14569/IJACSA.2019.0100658</doi>
        <lastModDate>2019-06-29T10:16:22.4700000+00:00</lastModDate>
        
        <creator>Syarif Hidayat</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>Activity; accuracy; classification; fall; optimization; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Random Forest (RF) is known as one of the best classifiers in many fields. It is parallelizable, fast to train and to predict, robust to outliers, handles unbalanced data, and has low bias and moderate variance. Despite these advantages, there are still opportunities to increase RF efficiency. The absence of recommendations regarding the number of trees in an RF ensemble can make the number of trees very large, which increases the computational complexity of RF; the recommendation not to prune the decision trees further aggravates the condition. This research attempts to build an efficient RF ensemble while maintaining its accuracy, especially for activity classification problems. Data collection is performed using the accelerometer sensor of a smartphone. The data used in this research are collected from five people performing 11 different activities; each activity is carried out five times to enrich the data. This study uses two steps to improve the efficiency of activity classification: 1) optimal splitting criteria for activity classification, and 2) measured pruning to limit tree depth in the RF ensemble. The first method can be applied to determine the splitting criteria most suitable for activity classification using Random Forest; in this case, decision models built using the Gini index produced the highest accuracy. The second method successfully builds less complex pruned trees without reducing classification accuracy. The results show that the method applied to Random Forest in this study produces decision models that are simple yet accurate in classifying activity.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_58-Depth_Limitation_and_Splitting_Criteria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IRPanet: Intelligent Routing Protocol in VANET for Dynamic Route Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100657</link>
        <id>10.14569/IJACSA.2019.0100657</id>
        <doi>10.14569/IJACSA.2019.0100657</doi>
        <lastModDate>2019-06-29T10:16:22.4530000+00:00</lastModDate>
        
        <creator>Rafi Ullah</creator>
        
        <creator>Shah Muhammad Emad</creator>
        
        <creator>Taha Jilani</creator>
        
        <creator>Waqas Azam</creator>
        
        <creator>Muhammad Zain uddin</creator>
        
        <subject>Intelligent routing protocol; heuristics based routing; applications of VANET; Vehicular Adhoc Network; VANET routing protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper presents a novel routing protocol, IRPANET (Intelligent Routing Protocol in VANET), for Vehicular Ad Hoc Networks (VANETs). Vehicular Ad Hoc Networks are a special class of Mobile Ad Hoc Network created by road vehicles equipped with wireless devices. Since the environment is dynamic due to high mobility and topology changes are very frequent, no stable connection or path can be established between nodes. These issues make the design of an effective and efficient protocol for such a dynamic environment challenging. The problem can be addressed using probabilistic, heuristic, and even machine-learning-based approaches combined with a store-and-forward mechanism. Here, we propose a design framework using heuristic and probabilistic approaches, composited with time series techniques, for selecting the best and most optimized path for forwarding packets using OpenStreetMap (OSM). Our proposed algorithm uses various parameters (heuristics-based routing) for calculating the optimal path for packets to be sent, such as geographical position (GPS installed in every vehicle), vehicle velocity/speed, packet priority, distances between vehicles (Euclidean, Haversine, vicinity), vehicle direction, vehicle communication range, free node buffer, and network congestion. These networks can be used for medical emergency, security, entertainment, and routing purposes (applications of VANET). Used in combination, these parameters provide a very strong and admissible heuristic. We have mathematically shown that the proposed technique is efficient for routing packets, especially in medical emergency situations.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_57-IRPanet_Intelligent_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Knowledge Sharing in Distributed Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100656</link>
        <id>10.14569/IJACSA.2019.0100656</id>
        <doi>10.14569/IJACSA.2019.0100656</doi>
        <lastModDate>2019-06-29T10:16:22.4370000+00:00</lastModDate>
        
        <creator>Sara Waheed</creator>
        
        <creator>Bushra Hamid</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Mamoona Humayun</creator>
        
        <creator>Nazir A Malik</creator>
        
        <subject>Distributed software development; knowledge sharing; knowledge management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Distributed Software Development has become an established software development paradigm that provides several advantages, but it presents significant challenges in sharing and understanding the knowledge required for developing software. Organizations are expected to implement appropriate practices to address knowledge management. Existing studies show that problems of collaboration between distributed team members affect knowledge sharing. Documentation problems (such as missing, poor, and outdated documents) and knowledge vaporization (much of the conversation and communication is done via chat, and retrieving it later is a great headache) are major challenges for knowledge sharing in Distributed Software Development. Our main objective is to improve knowledge sharing between distributed team members, prevent knowledge vaporization, and reduce documentation problems, which will help improve the software development process in a distributed environment. To address these challenges, we proposed a framework that deals with the documentation and knowledge vaporization problems and evaluated it through an industrial case study, assessing the framework&#39;s performance in the real-life context where the problems actually arise. We conducted interviews and analyzed the data using thematic analysis and a SUS questionnaire. Based on the team members&#39; responses, we concluded that they are satisfied with our proposed solution and that it improved their knowledge sharing process. Our intention was to improve the knowledge process with our proposed solution, and the evaluation showed that we resolved these problems.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_56-Improving_Knowledge_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Image Inpainting Approach based on Criminisi Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100655</link>
        <id>10.14569/IJACSA.2019.0100655</id>
        <doi>10.14569/IJACSA.2019.0100655</doi>
        <lastModDate>2019-06-29T10:16:22.4230000+00:00</lastModDate>
        
        <creator>Nouho Ouattara</creator>
        
        <creator>Georges Laussane Loum</creator>
        
        <creator>Ghislain Koffi Pandry</creator>
        
        <creator>Armand Kodjo Atiampo</creator>
        
        <subject>Image inpainting; Criminisi algorithm; priority function; data term; confidence term; identity function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In patch-based inpainting methods, the order of filling the areas to be restored is very important. This filling order is defined by a priority function that integrates two parameters: a confidence term and a data term. The priority, as initially defined, is negatively affected by the mutual influence of the confidence and data terms. In addition, the rapid decrease of the confidence term to zero leads to numerical instability of the algorithms. Finally, the data term depends only on the central pixel of the patch, without taking into account the influence of neighboring pixels. Our aim in this paper is to propose an algorithm that solves the problems mentioned above. This algorithm is based on a new definition of the priority function, a calculation of the average data term obtained from the elementary data terms in a patch, and an update of the confidence term that slows its decrease and avoids convergence to zero. We evaluated our method by comparing it with algorithms from the literature. The results show that our method provides better results both visually and in terms of the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM).</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_55-A_New_Image_Inpainting_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blood Vessels Segmentation in Retinal Fundus Image using Hybrid Method of Frangi Filter, Otsu Thresholding and Morphology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100654</link>
        <id>10.14569/IJACSA.2019.0100654</id>
        <doi>10.14569/IJACSA.2019.0100654</doi>
        <lastModDate>2019-06-29T10:16:22.3900000+00:00</lastModDate>
        
        <creator>Wiharto </creator>
        
        <creator>YS. Palgunadi</creator>
        
        <subject>Segmentation; morphology; frangi filter; retinal; blood vessels</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Computer-based diagnosis of hypertensive retinopathy is done by analyzing retinal images. The analysis is carried out through various stages, one of which is blood vessel segmentation in retinal images. Vascular segmentation of the retina is a complex problem, caused by non-uniform lighting, contrast variations, and the presence of abnormalities due to disease. As a result, segmentation is not successful if it relies on only one method. The aim of this study is to segment blood vessels in retinal images. The method used is divided into three stages: preprocessing, segmentation, and testing. The first stage, preprocessing, improves image quality with the CLAHE method and a median filter on the green channel image. The second stage segments the vessels using a number of methods: the Frangi filter, 2D convolution filtering, median filtering, Otsu&#39;s thresholding, morphology operations, and background subtraction. The last stage tests the system using the DRIVE and STARE datasets. The tests obtained a sensitivity of 91.187%, a specificity of 86.896%, and an area under the curve (AUC) of 89.041%. Given this performance, the proposed model can be used as an alternative for blood vessel segmentation of retinal images.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_54-Blood_Vessels_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electromyography Signal Acquisition and Analysis System for Finger Movement Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100653</link>
        <id>10.14569/IJACSA.2019.0100653</id>
        <doi>10.14569/IJACSA.2019.0100653</doi>
        <lastModDate>2019-06-29T10:16:22.3730000+00:00</lastModDate>
        
        <creator>Alvarado-D&#237;az Witman</creator>
        
        <creator>Meneses-Claudio Brian</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>EMG; muscles; disability; classification learner; Myoware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Electromyography (EMG) is very important for capturing muscle activity. Although many works establish data acquisition systems, it is also essential to demonstrate that these data are reliable. In this sense, we propose the design and implementation of a data acquisition system with the Myoware device and the ATmega329P microcontroller. We also proved its reliability by classifying the movement of the fingers of the hand, with the help of the k-Nearest Neighbors (KNN) algorithm and the Classification Learner application of Matlab. The results show a success rate of 99.1%.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_53-Electromyography_Signal_Acquisition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Watermarking System Architecture using the Cellular Automata Transform for 2D Vector Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100652</link>
        <id>10.14569/IJACSA.2019.0100652</id>
        <doi>10.14569/IJACSA.2019.0100652</doi>
        <lastModDate>2019-06-29T10:16:22.3600000+00:00</lastModDate>
        
        <creator>Saleh AL-ardhi</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <creator>Abdullah Basuhail</creator>
        
        <subject>Digital watermarking; spatial database; 2d vector map; linear cellular automata transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Technological advancement, paired with the emergence of increasingly open and sophisticated communication systems, has contributed to the growing complexity of copyright protection and ownership identification for digital content. The technique of digital watermarking has been receiving attention in the literature as a way to address these complexities. Digital watermarking involves covertly embedding a marker in a piece of digital data (e.g., a vector map, database, or audio, image, or video data) such that the marker cannot be edited, does not interfere with the quality or size of the data, and can be extracted accurately even under the deterioration of the watermarked data (e.g., as a consequence of malicious activity). The purpose of this paper is to describe a watermarking system architecture that can be applied to a 2D vector map. The proposed scheme involves embedding the watermark into the frequency domain, namely, the linear cellular automata transform (LCAT) algorithm. To evaluate the performance of the proposed scheme, the algorithm was applied to vector maps from the Riyadh Development Authority. The results indicate that the watermarking system architecture described here is efficient in terms of its computational complexity, reversibility, fidelity, and robustness against well-known attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_52-A_Watermarking_System_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fragile Watermarking based on Linear Cellular Automata using Manhattan Distances for 2D Vector Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100651</link>
        <id>10.14569/IJACSA.2019.0100651</id>
        <doi>10.14569/IJACSA.2019.0100651</doi>
        <lastModDate>2019-06-29T10:16:22.3430000+00:00</lastModDate>
        
        <creator>Saleh AL-ardhi</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <creator>Abdullah Basuhail</creator>
        
        <subject>Reversible watermarking; fragile watermarking; linear cellular automata; Manhattan distances; vector map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>There has been a growing demand for publishing maps in a secure digital format, since this ensures the integrity of the data. This has led us to put forward a method of detecting and locating modified data that is extremely accurate and simultaneously guarantees that the exact original content is recovered. More precisely, this method relies on a fragile watermarking algorithm that is developed in a frequency manner and, for every spatial feature, can embed hidden data in 2D vector maps. The current paper proposes a frequency data-hiding scheme, which is examined in accordance with the Linear Cellular Automata Transform using Manhattan distances. Various invertible integer mappings are applied in order to find the Manhattan distances from the coordinates. To begin with, the original map is transformed into LCA, after which the watermark insertion process is carried out to transform the coefficients of the transformation result frequency into the LSB. Lastly, a watermarked map is created by applying the inverse LCA transform, meaning that an LCA-transformed map is produced. Findings indicate that the suggested method is effective in terms of invisibility and the capacity to allow for modifications. The method also allows the detection of modified data and the addition and removal of some features, and enables the exact original content of the 2D vector map to be recovered.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_51-Fragile_Watermarking_based_on_Linear_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bio-inspired Think-and-Share Optimization for Big Data Provenance in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100650</link>
        <id>10.14569/IJACSA.2019.0100650</id>
        <doi>10.14569/IJACSA.2019.0100650</doi>
        <lastModDate>2019-06-29T10:16:22.3270000+00:00</lastModDate>
        
        <creator>Adel Alkhalil</creator>
        
        <creator>Rabie Ramadan</creator>
        
        <creator>Aakash Ahmad</creator>
        
        <subject>Big data systems; data provenance; fuzzy logic; bio-inspired computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Big data systems are being increasingly adopted by enterprises exploiting big data applications to manage data-driven processes, practices, and systems in an enterprise-wide context. Specifically, big data systems and their underlying applications empower enterprises with analytical decision making (e.g., recommender/decision support systems) to optimize organizational productivity, competitiveness, and growth. Despite these benefits, big data applications face challenges that include, but are not limited to, the security and privacy, authenticity, and reliability of critical data, which may result in the propagation of false information across systems. Data provenance, as an approach and enabling mechanism (to identify the origin, manage the creation, and track the propagation of information, etc.), can be a solution to the above-mentioned challenges for data management in an enterprise context. Data provenance solutions can help stakeholders and enterprises to assess the quality of data along with the authenticity, reliability, and trust of information on the basis of the identity, reproducibility, and integrity of data. Considering the widespread adoption of big data applications and the need for data provenance, this paper focuses on (i) analyzing the state-of-the-art for a holistic presentation of provenance in big data applications, and (ii) proposing a bio-inspired approach with an underlying algorithm that exploits the human thinking approach to support data provenance in Wireless Sensor Networks (WSNs). The proposed ‘Think-and-Share Optimization’ (TaSO) algorithm modularizes and automates data provenance in WSNs that are deployed and operated in enterprises. Evaluation of the TaSO algorithm demonstrates its efficiency in terms of connectivity, closeness to the sink node, coverage, and execution time. The proposed research contextualizes bio-inspired computation to enable and optimize data provenance in WSNs. Future research aims to exploit machine learning techniques (with underlying algorithms) to automate data provenance for big data systems in networked environments.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_50-Bio_Inspired_Think_and_Share_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blind Image Quality Evaluation of Stitched Image using Novel Hybrid Warping Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100649</link>
        <id>10.14569/IJACSA.2019.0100649</id>
        <doi>10.14569/IJACSA.2019.0100649</doi>
        <lastModDate>2019-06-29T10:16:22.2970000+00:00</lastModDate>
        
        <creator>Sanjay T. Gandhe</creator>
        
        <creator>Omkar S. Vaidya</creator>
        
        <subject>Blind image quality evaluation; hybrid warping; image stitching; panoramic image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Image stitching is the combination of sequential images, captured from a fixed camera center with a considerable amount of overlap, into an aesthetically pleasing seamless panoramic view. However, in practice it is very difficult to obtain a clean and pristine stitched panoramic image of a particular scene, as such images are apparently distorted. In this paper, a novel Hybrid Warping technique is used that combines two global warps and one local warp and helps to refine the image alignment stage. Our proposed method optimizes Homography Screening to rectify the problem of perspective distortion and the Edge Strength Similarity approach to quantify structural irregularities. Blind Image Quality Evaluation models such as the Blind Image Quality Index (BIQI), the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE), and the BLind Image Integrity Notator using DCT Statistics (BLIINDS-II) are employed to measure the objective quality of the stitched image. The experimental results showed that the blind image quality score of the proposed method is significantly better than that of the latest existing methods.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_49-Blind_Image_Quality_Evaluation_of_Stitched_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Suspicious of Diabetic Feet using Thermal Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100648</link>
        <id>10.14569/IJACSA.2019.0100648</id>
        <doi>10.14569/IJACSA.2019.0100648</doi>
        <lastModDate>2019-06-29T10:16:22.2800000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Witman Alvarado-D&#237;az</creator>
        
        <creator>Fiorella Flores-Medina</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Diabetic foot; thermal images; image processing; Roberts method; heat map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Diabetic foot is a chronic disease that occurs due to increased glucose levels, in addition to being the result of poorly controlled diabetes. In this case, the affected foot increases in temperature because it contains accumulated blood. The Alianza para el Salvataje del Pie Diab&#233;tico en el Per&#250; reported that cases of diabetic foot have been increasing, with 8% of the Peruvian population suffering from diabetic foot. Many research papers mention that the temperature difference between both feet has to be minimal due to the homogeneous distribution of the body, but when the temperature of one foot is higher than 2.2 &#176;C with respect to the other foot, it is indicative of diabetic foot. That is why the thermal evaluation of feet with suspected diabetic foot was proposed in this research, to prevent future damage or even amputation of the foot. First, a thermal image of both feet is captured using the FLIR ONE Pro thermal camera following a temperature range protocol; then the images are processed in the MATLAB software in order to obtain the zones where the variations are greater than or equal to 2.2 degrees of temperature; finally, these zones are superimposed on the foot with the higher temperature to determine the area where the highest temperature was detected. The results show that patients with diabetic foot do not have sensitivity in both feet, which, together with the difference in temperature between both feet, indicates a possible diabetic foot.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_48-Detection_of_Suspicious_of_Diabetic_Feet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dense Hand-CNN: A Novel CNN Architecture based on Later Fusion of Neural and Wavelet Features for Identity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100647</link>
        <id>10.14569/IJACSA.2019.0100647</id>
        <doi>10.14569/IJACSA.2019.0100647</doi>
        <lastModDate>2019-06-29T10:16:22.2670000+00:00</lastModDate>
        
        <creator>Elaraby A. Elgallad</creator>
        
        <creator>Wael Ouarda</creator>
        
        <creator>Adel M. Alimi</creator>
        
        <subject>Deep learning; fusion; palmprint; squeezenet; voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Biometric recognition, or biometrics, has emerged as the best solution for criminal identification and access control applications where resources or information need to be protected from unauthorized access. Biometric traits such as fingerprint, face, palmprint, iris, and hand geometry have been well explored, and mature approaches are available to perform personal identification. The work emphasizes the opportunities for obtaining texture information from a palmprint on the basis of such descriptors as Curvelet, Wavelet, Wave Atom, SIFT, Gabor, LBP, and AlexNet. The key contribution is the application of a mode voting method for accurate identification of a person at the fusion decision level. The proposed approach was tested in a number of experiments on the CASIA and IITD palmprint databases. The testing yielded positive results supporting the utilization of the described voting technique for human recognition purposes.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_47-Dense_Hand_CNN_A_Novel_CNN_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Matrix Control DMC using the Tuning Procedure based on First Order Plus Dead Time for Infant-Incubator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100646</link>
        <id>10.14569/IJACSA.2019.0100646</id>
        <doi>10.14569/IJACSA.2019.0100646</doi>
        <lastModDate>2019-06-29T10:16:22.2500000+00:00</lastModDate>
        
        <creator>J. ElHadj Ali</creator>
        
        <creator>E. Feki</creator>
        
        <creator>A. Mami</creator>
        
        <subject>Infant-incubator; DMC; MPC; higher-order; FOPDT; PRC and MIMO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The concept of Model Predictive Control (MPC) is considered one of the most important control strategies. It is used in several fields, such as petrochemical, oil refinery, fertilizer, and chemical plants. It is also widespread among clinicians and in the biomedical fields. In this context, our paper aims to investigate the thermal conditions inside the infant incubator for premature babies. In this study, we propose Dynamic Matrix Control (DMC) as a control strategy. The main particularity of this strategy is that it is applicable to Multi-input Multi-output (MIMO) systems. It aims to compare different coupled transfer functions achieved by two identification methods in previous work. Also, a simulation of the air temperature and humidity inside the care unit is achieved. In this work, we focus on tuning the controller parameters, because this is considered a key step in the successful performance of the DMC. To tune the DMC, we used an analytic tool, the Process Reaction Curve (PRC), because a higher-order transfer function requires a lot of work for this purpose; it should be approximated as a low-order transfer function with time delay, which is achieved by using First Order Plus Dead Time (FOPDT) process models. Finally, the result of the comparison for the infant incubator is provided to show the optimal and good performance of the thermal behavior of our proposed methodology, and to prove that a good identification ensures a better performance.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_46-Dynamic_Matrix_Control_DMC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Detection in Text using Nested Long Short-Term Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100645</link>
        <id>10.14569/IJACSA.2019.0100645</id>
        <doi>10.14569/IJACSA.2019.0100645</doi>
        <lastModDate>2019-06-29T10:16:22.2330000+00:00</lastModDate>
        
        <creator>Daniel Haryadi</creator>
        
        <creator>Gede Putra Kusuma</creator>
        
        <subject>Sentiment analysis; emotion detection; text mining; nested LSTM; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Humans have the capacity to feel different types of emotions, as human life is filled with many emotions. Humans’ emotions can be reflected through reading or writing a text. In recent years, studies on emotion detection in text have been developed. Most of these studies use machine learning techniques. In this paper, we classified 7 emotions, namely anger, fear, joy, love, sadness, surprise, and thankfulness, using deep learning techniques, namely Long Short-Term Memory (LSTM) and Nested Long Short-Term Memory (Nested LSTM). We have compared our results with a Support Vector Machine (SVM). We trained each model with 980,549 training samples and tested with 144,160 testing samples. Our experiments showed that Nested LSTM and LSTM give better performance than SVM for detecting emotions in text. Nested LSTM achieves the best accuracy of 99.167%, while LSTM achieves the best performance in terms of average precision at 99.22%, average recall at 98.86%, and f1-score at 99.04%.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_45-Emotion_Detection_in_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Causal Impact Analysis on Android Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100644</link>
        <id>10.14569/IJACSA.2019.0100644</id>
        <doi>10.14569/IJACSA.2019.0100644</doi>
        <lastModDate>2019-06-29T10:16:22.2030000+00:00</lastModDate>
        
        <creator>Hadiqa AmanUllah</creator>
        
        <creator>Mishal Fatima</creator>
        
        <creator>Umair Muneer</creator>
        
        <creator>Sadaf Ilyas</creator>
        
        <creator>Rana Abdul Rehman</creator>
        
        <creator>Ibraheem Afzal</creator>
        
        <subject>Android; Google; statistics; mobile applications; data visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The Google play store contains a large repository of apps for android users. The play store has two billion active users who have two million apps to download and use. App developers are competing to achieve a higher success rate and increase user satisfaction, but little information is known to developers about succeeding in the android market. This paper presents a comprehensive analytical study of Google play store app ratings, installs, and reviews. This study focuses on the evaluation of the parameters required for the success of an app in different categories. For this purpose, data on 10k apps and their reviews are analyzed using exploratory data analysis. This study focuses on finding a correlation between higher ratings, number of installs, and reviews with app info such as category, size, and price. We also analyze user reviews to get useful insights. The evaluation shows that the personalization, productivity, and games categories are performing very well in the android market, both in terms of ratings and installs. Most highly rated apps are sized below 40MB and priced below $30, except game apps, which are performing well even if they are bulky. Common customer complaints are functional errors and issues such as infrequent updates, excessive ads, limited functionality, and high purchase price.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_44-Causal_Impact_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hijaiyah Letter Interactive Learning for Mild Mental Retardation Children using Gillingham Method and Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100643</link>
        <id>10.14569/IJACSA.2019.0100643</id>
        <doi>10.14569/IJACSA.2019.0100643</doi>
        <lastModDate>2019-06-29T10:16:22.1870000+00:00</lastModDate>
        
        <creator>Irawan Afrianto</creator>
        
        <creator>Agung Faishal Faris</creator>
        
        <creator>Sufa Atin</creator>
        
        <subject>Hijaiyah; interactive learning; mild retarded child; Gillingham; VAKT; augmented reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Assistive technology for children with special needs is an interesting problem to study. Collaboration between methods and the latest technology can be used as a learning aid for them. Learning Hijaiyah letters is the first step to being able to read the Holy Qur&#39;an. Mentally retarded children have IQs below the average of normal children, so their learning process is slower and requires special methods. This study aims to develop an application using the Gillingham and augmented reality methods to help mentally retarded children recognize Hijaiyah letters. The Gillingham method uses a visual, auditory, kinesthetic, and tactile (VAKT) approach that can be used to facilitate mentally retarded children, while augmented reality is used to develop more interesting and interactive applications. Based on the results of research and testing, it can be concluded that the learning application that was built can improve children&#39;s memory and understanding of Hijaiyah letters. The results of the pretest and posttest showed an increase of 12% for children who found it difficult to absorb learning material and 6% for children who absorb learning material easily.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_43-Hijaiyah_Letter_Interactive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Analysis of Color Image Scrambling in the Spatial Domain and Transform Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100642</link>
        <id>10.14569/IJACSA.2019.0100642</id>
        <doi>10.14569/IJACSA.2019.0100642</doi>
        <lastModDate>2019-06-29T10:16:22.1730000+00:00</lastModDate>
        
        <creator>R. Rama Kishore</creator>
        
        <creator>Sunesh</creator>
        
        <subject>Color image scrambling; pixel position modification; spatial domain; Red Green Blue (RGB); transform domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper proposes two image-scrambling algorithms based on self-generated keys. The first color image scrambling method works in the spatial domain, and the second works in the transform domain. The proposed methods extract the R, G, and B planes from the color image and scramble each plane separately by utilizing the self-generated keys. The core of the security of the proposed methods is the keys, or parameters, used in the scrambling process. The experimental outcomes show that both proposed image scrambling techniques perform well in terms of Number of Pixel Change Rate (NPCR), Normalized Correlation (NC), entropy, and the time consumed in encoding and decoding. The adequacy of the proposed framework has been demonstrated on a data set of five images. Furthermore, the present paper gives a comparative performance analysis between the proposed image scrambling methods of the spatial domain and the transform domain. The paper also sheds some light on the scrambling work reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_42-Experimental_Analysis_of_Color_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Network user Behaviors and Profile Testing based on Anomaly Detection Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100641</link>
        <id>10.14569/IJACSA.2019.0100641</id>
        <doi>10.14569/IJACSA.2019.0100641</doi>
        <lastModDate>2019-06-29T10:16:22.1400000+00:00</lastModDate>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Mingchu Li</creator>
        
        <creator>Xiao Zheng</creator>
        
        <creator>Anil Carie</creator>
        
        <creator>Xing Jin</creator>
        
        <creator>Muhammad Azhar</creator>
        
        <creator>Naeem Ayoub</creator>
        
        <creator>Atif Wagan</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Liaquat Ali Jamali</creator>
        
        <creator>Muhammad Asif Imran</creator>
        
        <creator>Zahid Hussain Hulio</creator>
        
        <subject>Network user behaviors; profile testing; anomaly detection techniques; datasets; anomaly detection algorithms; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The proliferation of smart devices and computer networks has led to a huge rise in internet traffic and network attacks that necessitate efficient network traffic monitoring. There have been many attempts to address these issues; however, agile detection solutions are needed. This research work deals with malware infection detection, one of the most challenging tasks in modern computer security. In recent years, anomaly detection has been the first detection approach, followed by results from other classifiers. Anomaly detection methods are typically designed to model normal user behaviors and then search for deviations from this model. However, anomaly detection techniques may suffer from a variety of problems, including missing validations for verification and a large number of false positives. This work proposes and describes a new profile-based method for identifying anomalous changes in network user behaviors. Profiles describe user behaviors from different perspectives using different flags. Each profile is composed of information about what the user has done over a period of time. The symptoms extracted in the profile cover a wide range of user actions and try to analyze different actions. Compared to other symptom anomaly detectors, the profiles offer a higher level of abstraction over user behavior. It is assumed that it is possible to look for anomalies using high-level symptoms, producing fewer false positives while effectively finding real attacks. The problem of obtaining truly labeled data for training anomaly detection algorithms has also been addressed in this work. Datasets have been designed and created that contain real normal user actions while the user is infected with real malware. These datasets were used to train and evaluate anomaly detection algorithms, including, for example, the local outlier factor (LOF) and the one-class support vector machine (SVM). The results show that the proposed anomaly-based and profile-based algorithm causes very few false positives and a relatively high true positive detection rate. The two main contributions of this work are a new approach based on network anomaly detection and datasets containing a combination of genuine malware and actual user traffic. Finally, future directions will focus on applying the proposed approaches to protecting Internet of Things (IoT) devices.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_41-A_Novel_Network_User_Behaviors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Technology-Enabled Analytical Solution for Quality Assessment of Higher Education Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100640</link>
        <id>10.14569/IJACSA.2019.0100640</id>
        <doi>10.14569/IJACSA.2019.0100640</doi>
        <lastModDate>2019-06-29T10:16:22.1230000+00:00</lastModDate>
        
        <creator>Samiya Khan</creator>
        
        <creator>Xiufeng Liu</creator>
        
        <creator>Kashish Ara Shakil</creator>
        
        <creator>Mansaf Alam</creator>
        
        <subject>Education big data; educational intelligence; educational technology; higher education; quality education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Educational Intelligence is a broad area of big data analytical applications that make use of big data technologies to implement solutions for education and research. This paper demonstrates the design, development and deployment of an educational intelligence application for real-world scenarios. Firstly, a quality assessment framework for higher education systems was proposed that evaluates institutions on the basis of the performance of outgoing students. Secondly, a big data-enabled technological setup was used for its implementation. The literature was surveyed to evaluate existing quality frameworks. Most existing quality assessment systems take into account dimensions related to inputs, processes and outputs, but they tend to ignore the perspective that assesses the institution on the basis of the outcome of the educational process. This paper demonstrates the use of the outcome perspective to compute quality metrics and create visual analytics. In order to implement and test the framework, the R programming language and a cloud-based big data technology, Google BigQuery, were used.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_40-Big_Data_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Method of Faults Diagnostic based on Neuro-Dynamic Sliding Mode for Flat Nonlinear Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100639</link>
        <id>10.14569/IJACSA.2019.0100639</id>
        <doi>10.14569/IJACSA.2019.0100639</doi>
        <lastModDate>2019-06-29T10:16:22.1100000+00:00</lastModDate>
        
        <creator>O. Dhaou</creator>
        
        <creator>L.Sidhom</creator>
        
        <creator>A.Abdelkrim</creator>
        
        <subject>Flat system; fault detection and isolation; inputs/parameters estimator; higher order sliding mode differentiator; dynamic neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper addresses the problem of simultaneous actuator, process and sensor Fault Detection and Isolation (FDI) for nonlinear systems having flatness properties, operating in closed loop in the presence of disturbances. In particular, the nonlinear system is corrupted with additive actuator, process or sensor faults occurring simultaneously. In this case, the residual signals might be sensitive to all of the faults that can appear in the system. The proposed FDI method is based on both input and parameter estimators that are designed in parallel. With the flatness property of such systems, the design of these two estimators requires information on the measured outputs and their successive derivatives. To estimate the latter, a new scheme for a 2nd-order dynamic sliding mode differentiator is proposed. Residuals are then defined as the difference between the estimated and expected behavior. In order to isolate the faults, a dynamic neural network technique is employed. Besides, a comparative study between this new differentiator and the well-known 2nd-order Levant’s differentiator is provided to show the pros and cons of the proposed FDI method. The latter is validated by simulation results carried out on a three-tank system.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_39-New_Method_of_Faults_Diagnostic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperparameter Optimization in Convolutional Neural Network using Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100638</link>
        <id>10.14569/IJACSA.2019.0100638</id>
        <doi>10.14569/IJACSA.2019.0100638</doi>
        <lastModDate>2019-06-29T10:16:22.0770000+00:00</lastModDate>
        
        <creator>Nurshazlyn Mohd Aszemi</creator>
        
        <creator>P.D.D Dominic</creator>
        
        <subject>Hyperparameter; convolutional neural network; CNN; genetic algorithm; GA; random search; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Optimizing hyperparameters in a Convolutional Neural Network (CNN) is a tedious problem for many researchers and practitioners. To get hyperparameters with better performance, experts are required to configure a set of hyperparameter choices manually. The best results of this manual configuration are thereafter modeled and implemented in the CNN. However, different datasets require different models or combinations of hyperparameters, which can be cumbersome and tedious. To address this, several approaches have been proposed, such as grid search, which is limited to low-dimensional spaces, and random search, which uses random selection. Also, optimization methods such as evolutionary algorithms and Bayesian optimization have been tested on the MNIST dataset, which is less costly and requires fewer hyperparameters than the CIFAR-10 dataset. In this paper, the authors investigate hyperparameter search methods on the CIFAR-10 dataset. During the investigation, the performance of various optimization methods in terms of accuracy is tested and recorded. Although there is no significant difference between the proposed approach and the state-of-the-art on the CIFAR-10 dataset, the actual potency lies in the hybridization of genetic algorithms with a local search method to optimize both network structure and network training, which, to the best of the authors&#39; knowledge, is yet to be reported.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_38-Hyperparameter_Optimization_in_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Machine Learning Model to Predict Heart Failure Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100637</link>
        <id>10.14569/IJACSA.2019.0100637</id>
        <doi>10.14569/IJACSA.2019.0100637</doi>
        <lastModDate>2019-06-29T10:16:22.0630000+00:00</lastModDate>
        
        <creator>Fahd Saleh Alotaibi</creator>
        
        <subject>Machine learning model; medical data; heart failure diagnoses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In the current era, Heart Failure (HF) is one of the common diseases that can lead to a dangerous situation. Every year, almost 26 million patients are affected by this disease. From the heart consultant and surgeon’s point of view, it is complex to predict heart failure at the right time. Fortunately, classification and prediction models exist that can aid the medical field and illustrate how to use medical data in an efficient way. This paper aims to improve HF prediction accuracy using the UCI heart disease dataset. For this, multiple machine learning approaches were used to understand the data and predict the chances of HF in a medical database. Furthermore, the results and comparative study showed that the current work improved the previous accuracy score in predicting heart disease. The integration of the machine learning model presented in this study with medical information systems would be useful to predict HF or any other disease using live data collected from patients.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_37-Implementation_of_Machine_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Junction Point Detection and Identification of Broken Character in Touching Arabic Handwritten Text using Overlapping Set Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100636</link>
        <id>10.14569/IJACSA.2019.0100636</id>
        <doi>10.14569/IJACSA.2019.0100636</doi>
        <lastModDate>2019-06-29T10:16:22.0470000+00:00</lastModDate>
        
        <creator>Inam Ullah</creator>
        
        <creator>Mohd Sanusi Azmi</creator>
        
        <creator>Mohamad Ishak Desa</creator>
        
        <subject>Touching characters; segmentation and recognition; overlapping set theory; junction point; broken character</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Touching characters are formed when two or more characters share the same space. Segmentation of these touching characters is therefore a very challenging research topic, especially for degraded handwritten Arabic documents, and is one of the key issues in the recognition of handwritten Arabic text. In order to make the recognition system more effective, segmentation of these touching handwritten Arabic characters is considered a very important research area. In this research, a new method is proposed to identify the junction or common point of an Arabic touching word image by applying the overlapping or intersection set theory operation, which helps to trace the correct boundary of the touching characters, identify the broken characters and segment this touching handwritten text in an efficient way. The proposed method has been evaluated on Arabic touching handwritten characters taken from handwritten datasets. The results show the efficiency of the proposed method, which is applicable to both degraded handwritten documents and printed documents.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_36-Junction_Point_Detection_and_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optical Recognition of Isolated Machine Printed Sindhi Characters using Fourier Descriptors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100635</link>
        <id>10.14569/IJACSA.2019.0100635</id>
        <doi>10.14569/IJACSA.2019.0100635</doi>
        <lastModDate>2019-06-29T10:16:22.0300000+00:00</lastModDate>
        
        <creator>Nasreen Nizamani</creator>
        
        <creator>Mujtaba Shaikh</creator>
        
        <creator>Jawed Unar</creator>
        
        <creator>Ehsan Ali</creator>
        
        <creator>Ghulam Mustafa Bhutto</creator>
        
        <creator>Abdul Rafay</creator>
        
        <subject>Features extraction; Sindhi optical character recognition; Fourier Descriptors; machine printed Sindhi characters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Scale invariance characteristics play an essential role in pattern recognition applications, for example in computer vision, OCR (Optical Character Recognition), electronic publication, etc. In this paper, shape-based feature extraction techniques are used in terms of invariant properties, and region-based FD (Fourier Descriptors) have been used for the recognition of isolated printed Sindhi characters. There are 56 isolated characters in the Sindhi language that can be categorized into 20 different classes considering the shape of the base of each character. In this work, the dataset contains 4704 images of isolated printed Sindhi characters. The simulation results show that the proposed method is capable of discriminating similar Sindhi characters and can easily extract the scale-invariant features.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_35-Optical_Recognition_of_Isolated_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectral Classification of a Set of Hyperspectral Images using the Convolutional Neural Network, in a Single Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100634</link>
        <id>10.14569/IJACSA.2019.0100634</id>
        <doi>10.14569/IJACSA.2019.0100634</doi>
        <lastModDate>2019-06-29T10:16:22.0000000+00:00</lastModDate>
        
        <creator>Abdelali Zbakh</creator>
        
        <creator>Zoubida Alaoui Mdaghri</creator>
        
        <creator>Abdelillah Benyoussef</creator>
        
        <creator>Abdellah El Kenz</creator>
        
        <creator>Mourad El Yadari</creator>
        
        <subject>Classification; spectral; Convolutional Neural Network (CNN); deep learning; hyperspectral data; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Hyperspectral imagery has seen a great evolution in recent years. Consequently, several fields (medical, agriculture, geosciences) need automatic classification of these hyperspectral images at a high rate and in an acceptable time. The state-of-the-art presents several classification algorithms based on the Convolutional Neural Network (CNN), where each algorithm is trained on a part of an image and then performs the prediction on the rest. This article proposes a new fast spectral classification algorithm based on CNN, which builds a composite image from multiple hyperspectral images and then trains the model only once on the composite image. After training, the model can predict each image separately. To test the validity of the proposed algorithm, two freely available hyperspectral images are taken, and the training time obtained by the proposed model on the composite image is better than the time obtained by the state-of-the-art model.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_34-Spectral_Classification_of_A_Set_of_Hyperspectral_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Method for Service Composition in Heterogeneous Environments within Client Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100633</link>
        <id>10.14569/IJACSA.2019.0100633</id>
        <doi>10.14569/IJACSA.2019.0100633</doi>
        <lastModDate>2019-06-29T10:16:21.9700000+00:00</lastModDate>
        
        <creator>Saleh M. Altowaijri</creator>
        
        <subject>Workflow; robust regression; prediction; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Cloud computing is a new delivery model for Information Technology services. Many actors and parameters play an important role in the provisioning of dynamically elastic and virtualized resources at the levels of infrastructure, platform, and software. Nowadays, many cloud services are competing and often present similar offers. From the customer side, it is not always easy to select a suitable service according to customer requirements and cloud service scoring. In a real-world scenario, this is more complicated, since service scoring may change over time. Besides this, it depends on many parameters such as hardware, network infrastructure, customer demand, etc. To tackle this issue, this research work presents a novel approach for predicting the future score of any service in order to satisfy user requirements when executing service composition in cloud environments. This approach relies on regression techniques to predict the expected future offer of a service based on a sample of the service’s history as well as user expectations.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_33-Predictive_Method_for_Service_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Performance of the University Information Systems: Case of Moroccan Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100631</link>
        <id>10.14569/IJACSA.2019.0100631</id>
        <doi>10.14569/IJACSA.2019.0100631</doi>
        <lastModDate>2019-06-29T10:16:21.9530000+00:00</lastModDate>
        
        <creator>Ayoub Gacim</creator>
        
        <creator>Hicham Drissi</creator>
        
        <creator>Abdelwahed Namir</creator>
        
        <subject>Component; information system; performance; multicriteria modeling; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The purpose of this paper is to develop a conceptual model for measuring the performance of university information systems. To do this, the 3E-3P model was chosen. This model proposes a development under the spectrum of the systemic approach. The objective is to provide decision-makers with a tool to understand the dynamics of performance measurement. The model is based on a logic of decomposing global performance into three partial performances. The measurement is carried out at each pillar individually using a multi-criteria approach (MACBETH), and subsequently the consolidation of the three partial performances is carried out with the same multi-criteria logic.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_31-Evaluation_of_the_Performance_of_the_University_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Immuno-Computing-based Neural Learning for Data Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100632</link>
        <id>10.14569/IJACSA.2019.0100632</id>
        <doi>10.14569/IJACSA.2019.0100632</doi>
        <lastModDate>2019-06-29T10:16:21.9530000+00:00</lastModDate>
        
        <creator>Ali Al Bataineh</creator>
        
        <creator>Devinder Kaur</creator>
        
        <subject>AIS; CSA; MCSA; ACO; BBNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The paper proposes two new algorithms based on the artificial immune system of the human body, the Clonal Selection Algorithm (CSA) and a modified version of the Clonal Selection Algorithm (MCSA), and uses them to train a neural network. Conventional artificial neural network training algorithms such as backpropagation have the disadvantage that they can get trapped in local optima. Consequently, the neural network is usually incapable of obtaining the best solution to the given problem. In the proposed CSA algorithm, the initial random weights chosen for the neural network are considered a foreign body called an antigen. As the human body creates several antibodies to fight an antigen, similarly, in the CSA algorithm antibodies are created to fight the antigen. Each antibody is evaluated based on its affinity, and clones are generated for each antibody. The number of clones depends on the algorithm: in CSA, the number of clones is fixed, while in MCSA, the number of clones is directly proportional to the affinity of the antibody. Mutation is performed on the clones to improve their affinity. The best antibody that emerges becomes the antigen for the next round, and the process is repeated for several iterations until the best antibody that satisfies the chosen criterion is found. The best antibody is problem specific. For neural network training for data classification, the best antibody represents the set of weights and biases that gives the least error. The efficiency of the algorithm was analyzed using the Iris dataset. The prediction accuracy of the algorithms was compared with other nature-inspired algorithms, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and standard backpropagation. The performance of MCSA was ahead of the other algorithms, with an accuracy of 99.33%.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_32-Immuno_Computing_based_Neural_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Home Energy Management System Design: A Realistic Autonomous V2H / H2V Hybrid Energy Storage System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100630</link>
        <id>10.14569/IJACSA.2019.0100630</id>
        <doi>10.14569/IJACSA.2019.0100630</doi>
        <lastModDate>2019-06-29T10:16:21.9230000+00:00</lastModDate>
        
        <creator>Bassam Zafar</creator>
        
        <creator>Ben Slama Sami</creator>
        
        <creator>Sihem Nasri</creator>
        
        <creator>Marwan Mahmoud</creator>
        
        <subject>Home energy management system; fuel cell; super-capacitor; solar power; vehicle to home; home to vehicle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Powering the household from a hybrid fuel cell electric vehicle during peak use is another opportunity to reduce emissions and save money. For this reason, Vehicle-to-Home (V2H) and Home-to-Vehicle (H2V) systems were proposed as a new method of exchanging smart energy. The main goal of this paper is to develop a smart home energy management system based on IoT, generate more energy efficiency and share production between home and vehicle. In fact, the hybrid fuel cell electric vehicle will be used simultaneously to power household appliances during peak demand for electricity to address energy consumption. The household&#39;s energy is derived from an accurate autonomous hybrid power system. Several technologies, such as a Proton Exchange Membrane Fuel Cell (PEMFC), a solar panel, a Supercapacitor (SC) device and a water electrolyzer, are incorporated into the proposed system. Two-way electrical energy between the PEMFC hybrid electric vehicle and household power will be exchanged by discharging the vehicle energy storage to balance energy demand and supply. To this end, a smart energy management unit (EMU) will be developed and discussed in order to meet the estimated fuel consumption mitigation goals during peak periods when demand is highest, coordinating between household power and vehicle energy storage. The presented design will be simulated for one day in Matlab/Simulink, based on an experimental database extracted from household power, to demonstrate the effectiveness of the proposed strategy and its effects on V2H/H2V operations.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_30-Smart_Home_Energy_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moving Object Detection in Highly Corrupted Noise using Analysis of Variance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100629</link>
        <id>10.14569/IJACSA.2019.0100629</id>
        <doi>10.14569/IJACSA.2019.0100629</doi>
        <lastModDate>2019-06-29T10:16:21.9070000+00:00</lastModDate>
        
        <creator>Asim ur Rehman Khan</creator>
        
        <creator>Muhammad Burhan Khan</creator>
        
        <creator>Haider Mehdi</creator>
        
        <creator>Syed Muhammad Atif Saleem</creator>
        
        <subject>Analysis of variance (ANOVA); image motion analysis; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper implements a three-way nested design to mark moving objects in a sequence of images. The algorithm performs object detection through image motion analysis. The inter-frame changes (level-A) are marked as temporal contents, while the intra-frame variations identify critical information. The spatial details are marked at two granular levels, comprising level-B and level-C. The segmentation is performed using analysis of variance (ANOVA). This algorithm gives excellent results in situations where images are corrupted with heavy Gaussian noise ~N(0,100). The sample images are selected in four categories: ‘baseline’, ‘dynamic background’, ‘camera jitter’, and ‘shadows’. Results are compared with previously published results on four measures: false positive rate (FPR), false negative rate (FNR), percentage of wrong classification (PWC), and an F-measure. The qualitative and quantitative results prove that the technique outperforms the previously reported results by a significant margin.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_29-Moving_Object_Detection_in_Highly_Corrupted_Noise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weld Defect Categorization from Welding Current using Principle Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100628</link>
        <id>10.14569/IJACSA.2019.0100628</id>
        <doi>10.14569/IJACSA.2019.0100628</doi>
        <lastModDate>2019-06-29T10:16:21.8900000+00:00</lastModDate>
        
        <creator>Hayri Arabaci</creator>
        
        <creator>Salman Laving</creator>
        
        <subject>Arc weld defects; feature extraction; PCA; classification techniques; on-line monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Real-time welding quality control still remains a challenging task due to the dynamic characteristics of welding. The welding current of gas metal arc welding possesses valuable information that can be analyzed for weld quality assessment purposes. On-line monitoring of the welding current can provide information about the welding process. In this study, current signals obtained during welding in the short-circuit metal transfer mode were used for real-time categorization of deliberately induced weld defects and good welds. A Hall-effect current sensor was employed on the ground wiring of the welding machine to acquire the welding current signals during the welding process. Vector reduction of the current signals in the time domain was achieved by principal component analysis. The reduced vector was then classified by various classification techniques, such as support vector machines, decision trees and nearest neighbor, to categorize the arc weld defects or pass the weld as a good weld. The proposed technique proved to be successful, with accurate classification of the welding categories using all three classifiers. The classification technique is fast enough to be used for real-time weld quality control, as all the signal processing is carried out in the time domain.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_28-Weld_Defect_Categorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Intelligent Cluster-Head (ICH) to Mitigate the Handover Problem of Clustering in VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100627</link>
        <id>10.14569/IJACSA.2019.0100627</id>
        <doi>10.14569/IJACSA.2019.0100627</doi>
        <lastModDate>2019-06-29T10:16:21.8730000+00:00</lastModDate>
        
        <creator>A. H. Abbas</creator>
        
        <creator>Mohammed I. Habelalmateen</creator>
        
        <creator>L. Audah</creator>
        
        <creator>N.A.M. Alduais</creator>
        
        <subject>Vehicular Ad-Hoc networks; ITS; clustering; overlapping area; handover; ICH</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The huge growth in the number of vehicles has resulted in many people losing their lives in accidents, which has made vehicular ad-hoc networks (VANETs) a hot topic for enabling improved communication between vehicles aimed at reducing the loss of life. The main challenge in this area is vehicle mobility, which has a direct effect on network stability. Thus, most previous studies on clustering focused on cluster formation, cluster-head selection and the stability of the cluster to reduce the impact of mobility in the network, with little attention given to clusters passing from a base station to a neighboring base station. Therefore, this study focused on the handover problem that occurs, after cluster formation and cluster-head election, while a cluster passes from base station to base station through what is known as the overlapping area. As the cluster in an overlapping area receives two signals from different base stations, the signal arriving at the cluster becomes weak due to interference between the two frequencies, resulting in loss of cluster information in the overlapping area. This study proposes a novel method named Intelligent Cluster-Head (ICH), a controller over two clusters that is used to change the uplink between clusters to solve the handover problem in the overlapping area. The proposed method was evaluated against the VMaSC-1hop method and achieved a packet loss of up to 0.8%, a packet delivery ratio (PDR) of 99%, a percentage of disconnected links of 0.12% and a network efficiency of 99% at the cell edge.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_27-A_Novel_Intelligent_Cluster_Head.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Aspect Oriented Programming Framework to Support Transparent Runtime Monitoring of Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100626</link>
        <id>10.14569/IJACSA.2019.0100626</id>
        <doi>10.14569/IJACSA.2019.0100626</doi>
        <lastModDate>2019-06-29T10:16:21.8430000+00:00</lastModDate>
        
        <creator>Abdullah O. AL-Zaghameem</creator>
        
        <subject>Runtime state monitoring; application behavior; aspect oriented programming technique; statistical analysis; bytecode transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Monitoring the runtime state and behavior of applications is very important for evaluating their performance and inspecting their behavior. For legacy applications developed without monitoring capabilities, accomplishing runtime state monitoring is a real challenge. This research redefines the concept of runtime monitoring and then presents an Aspect Oriented Programming (AOP) framework to equip applications with the capability to monitor their runtime state transparently. The framework, called RM Framework, supports three monitoring modes: Invasive-mode, Controlled-mode/(Functionality and Attribute), and Controlled-mode/Selective. The framework is applied to a Java application as a case study. The results show smooth integration between the application and runtime monitoring capabilities without affecting the target application's consistency.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_26-An_Aspect_Oriented_Programming_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android Security Development: Spyware Detection, Apps Secure Level and Data Encryption Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100625</link>
        <id>10.14569/IJACSA.2019.0100625</id>
        <doi>10.14569/IJACSA.2019.0100625</doi>
        <lastModDate>2019-06-29T10:16:21.8270000+00:00</lastModDate>
        
        <creator>Lim Wei Xian</creator>
        
        <creator>Chan Shao Hong</creator>
        
        <creator>Yap Ming Jie</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <subject>Android; spyware detection; security level index; data encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Most Android users are unaware that their smartphones are as vulnerable as any computer, and that permission granting by Android users is an important part of maintaining the security of Android smartphones. We present a method that uses manifest files to determine the presence of spyware and the security level of apps. Furthermore, to ensure that no data leakage occurs on Android smartphones, we propose a new method for the encryption of data from Google Suite applications.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_25-Android_Security_Development_SpywareDetection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Control the Positional Accuracy of Point Features in Volunteered Geographic Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100624</link>
        <id>10.14569/IJACSA.2019.0100624</id>
        <doi>10.14569/IJACSA.2019.0100624</doi>
        <lastModDate>2019-06-29T10:16:21.8130000+00:00</lastModDate>
        
        <creator>Mennatallah H. Ibrahim</creator>
        
        <creator>Nagy Ramadan Darwish</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>Volunteered geographic information; quality control; positional accuracy; point features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Volunteered geographic information (VGI) is a huge source of user-generated geographic information. There is an enormous potential to use VGI in different mapping activities due to its significant advantages. VGI is found to be richer and more up-to-date than authoritative geographic information. However, VGI quality is an obvious challenge that needs to be addressed in order to get the full potential of VGI. Positional accuracy is one of the important aspects of VGI quality. Although VGI positional accuracy can be high in some contexts, VGI datasets are characterized by a large spatial heterogeneity. This paper proposes an approach for controlling positional accuracy as well as decreasing the spatial heterogeneity of point features in VGI systems. A case study has been conducted in order to ensure the applicability and effectiveness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_24-An_Approach_to_Control_the_Positional_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS-based Semantic Micro Services Discovery and Composition using ACO Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100623</link>
        <id>10.14569/IJACSA.2019.0100623</id>
        <doi>10.14569/IJACSA.2019.0100623</doi>
        <lastModDate>2019-06-29T10:16:21.7970000+00:00</lastModDate>
        
        <creator>Ahmed ESSAYAH</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <creator>Elhocein Illoussamen</creator>
        
        <subject>Semantic micro service; quality of service; learning path; e-learning platform; service discovery and composition; ant colony optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In this paper, we present a new model of e-Learning platforms based on semantic micro services, using discovery, selection and composition methods to generate learning paths. In this model, each semantic micro service represents an elementary educational resource that can be a course, an exercise, a tutorial or an evaluation implementing a precise learning path objective. The semantic micro services are described using ontologies and deployed in multiple instances in a cloud environment with load balancing and a fault tolerance system. Learners' requests are sent to a proxy micro service that holds abstract learning-path structures represented as a directed graph. The proxy micro service analyses the request to determine the learner's profile and context in order to provide the semantic micro services responsible for the educational resources satisfying the learner's functional and non-functional needs. In this model, optimal learning path generation is achieved through a two-step process: local optimization uses semantic discovery and selection based on a matchmaking algorithm and a quality-of-service measurement, and global optimization adopts an ant colony optimization algorithm to select the best resource combination. Our experimental results show that the proposed model can effectively return optimized learning paths considering individual, collective and pedagogical factors.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_23-QoS_based_Semantic_Micro_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Detection, Recognition and Tracking Rates of the different Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100622</link>
        <id>10.14569/IJACSA.2019.0100622</id>
        <doi>10.14569/IJACSA.2019.0100622</doi>
        <lastModDate>2019-06-29T10:16:21.7670000+00:00</lastModDate>
        
        <creator>Meghana Kavuri</creator>
        
        <creator>Kolla Bhanu Prakash</creator>
        
        <subject>Detection; recognition; tracking; local binary pattern histogram; Kalman filter; particle filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This article discusses an approach to human detection and tracking in a homogeneous domain using surveillance cameras, a vast area in which significant research has been taking place for more than a decade. The paper concerns the detection of a human and their face in a given video, storing Local Binary Pattern Histogram (LBPH) features of the detected faces. Once a human is detected in the video, that person is given a label and is tracked across videos taken by multiple cameras through the application of machine learning and image processing with the help of OpenCV. Many algorithms have been used to date for detection, recognition and tracking; thus, the main contribution of this paper is a comparison of the proposed algorithm with several state-of-the-art algorithms, showing how the proposed algorithm improves on the other chosen algorithms.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_22-Performance_Comparison_of_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on the Verification Approaches and Tools used to Verify the Correctness of Security Algorithms and Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100621</link>
        <id>10.14569/IJACSA.2019.0100621</id>
        <doi>10.14569/IJACSA.2019.0100621</doi>
        <lastModDate>2019-06-29T10:16:21.7500000+00:00</lastModDate>
        
        <creator>Mohammed Abdulqawi Saleh Al-humaikani</creator>
        
        <creator>Lukman Bin Ab Rahim</creator>
        
        <subject>Security algorithms; security protocols; formal verification approaches; model checking; theorem proving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Security algorithms and protocols are essential components that must be incorporated into systems and their structures to provide the best performance. These protocols and systems should go through verification and testing processes in order to be more efficient and accurate. In software testing, traditional methods are used for accuracy checking; however, they cannot satisfy all testing requirements. Formal verification approaches are best applied to checking security properties. The available literature discusses several approaches to developing robust formal verification methods for addressing and analyzing the errors that systems face, whether during the implementation process, under unknown attacks, or against a nondeterministic adversary on security protocols and algorithms. In this paper, a comprehensive review of the main formal verification approaches, such as model checking and theorem proving, has been conducted. Moreover, the use of verification tools is briefly presented and explained thoroughly. These formal verification methods can be involved in the design and redesign of security protocols and algorithms based on standards and problem sizes determined by these techniques' analysis. The critical analysis of the methods used in verifying the security of systems showed that model checking approaches and their tools were the most used among all the reviewed methods.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_21-A_Review_on_the_Verification_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Feature Selection based on Single Exponential Smoothing using Wrapper Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100620</link>
        <id>10.14569/IJACSA.2019.0100620</id>
        <doi>10.14569/IJACSA.2019.0100620</doi>
        <lastModDate>2019-06-29T10:16:21.7200000+00:00</lastModDate>
        
        <creator>Ani Dijah Rahajoe</creator>
        
        <subject>Single exponential smoothing; forecasting; feature selection; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Feature selection is one way to simplify the classification process: only the selected features are used for classification, without decreasing performance compared with classification without feature selection. This research uses a new feature matrix as the basis for selection, containing forecasting results obtained using Single Exponential Smoothing (FMF(SES)). The method uses the wrapper method of GASVM and is named FMF(SES)-GASVM. The result of this research is compared with other methods such as GA Bayes, Forward Bayes and Backward Bayes. The results show that FMF(SES)-GASVM achieves the highest accuracy compared with FMF(SES)-GA Bayes, FMF(SES)-Forward Bayes and FMF(SES)-Backward Bayes; however, it selects more features than FMF(SES)-GA Bayes and FMF(SES)-Forward Bayes.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_20-Forecasting_Feature_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Collaborative Filtering Recommender System Model for Recommending Intervention to Improve Elderly Well-being</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100619</link>
        <id>10.14569/IJACSA.2019.0100619</id>
        <doi>10.14569/IJACSA.2019.0100619</doi>
        <lastModDate>2019-06-29T10:16:21.7030000+00:00</lastModDate>
        
        <creator>Aini Khairani Azmi</creator>
        
        <creator>Noraswaliza Abdullah</creator>
        
        <creator>Nurul Akmar Emran</creator>
        
        <subject>Collaborative filtering; elderly well-being; k-nearest neighbor; recommender system; successful ageing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In improving elderly well-being nowadays, caregivers at home or at health care centres mostly focus on guarding and monitoring the elderly using tools such as CCTV, robots and other appliances that involve considerable cost and require neat fixtures to prevent damage. Recommender systems have been applied to elderly observation, but they focus on only one aspect, such as nutrition or health. However, it is important to give interventions to the elderly by concentrating on the multiple aspects of successful ageing, such as social, environmental, health, physical and mental aspects, so that elderly people can be helped to achieve successful ageing and improve their well-being. In this paper, two recommender system models are proposed to recommend interventions for improving elderly well-being across the multiple aspects of successful ageing. These models use a Collaborative Filtering (CF) technique to recommend interventions to an elderly person based on the interventions given to other elderly people whose conditions are similar to the user's. The process of recommending interventions involves the generation of user profiles presenting the elderly person's conditions in multiple aspects of successful ageing. It also applies the k-Nearest Neighbor (kNN) method to find users with similar conditions and recommends interventions based on those given to the similar users. An experiment was conducted to determine the performance of the proposed Collaborative Filtering (CF) recommender system and Collaborative Filtering and Profile Matching (CFS) compared to Basic Search (BS). The results showed that both the CF recommender system and CFS outperformed BS in terms of precision, recall and F1 measure. This result shows that the proposed models are effective at recommending interventions using elderly profiles based on many aspects of successful ageing.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_19-A_Collaborative_Filtering_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Spatially Modelled High Temperature Polymer Electrolyte Membrane Fuel Cell under Dynamic Load Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100618</link>
        <id>10.14569/IJACSA.2019.0100618</id>
        <doi>10.14569/IJACSA.2019.0100618</doi>
        <lastModDate>2019-06-29T10:16:21.6730000+00:00</lastModDate>
        
        <creator>Jagdesh Kumar</creator>
        
        <creator>Jherna Devi</creator>
        
        <creator>Ghulam Mustafa Bhutto</creator>
        
        <creator>Sajida Parveen</creator>
        
        <creator>Muhammad Shafiq</creator>
        
        <subject>Current-voltage characteristics; energy conversion; fuel cells; power system modeling; power system simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper presents an approach to observing the effects of load variations on the performance of a high temperature polymer electrolyte membrane fuel cell system, covering hydrogen and air flow rate, output voltage, power and efficiency. The main advantage of this approach is the ability to analyse the internal behaviour of the fuel cell, such as its current-voltage characteristics during energy conversion, while the load varies dynamically. This power system simulation approach models the fuel cell system by integrating a 3D-COMSOL model of the high temperature polymer electrolyte membrane fuel cell with a MATLAB/Simulink model of the fuel cell system. The MATLAB/Simulink model of the fuel cell system includes the fuel cell stack (single cell), load (a sequence of currents), air supply system (air compressor), fuel supply system (hydrogen tank), and a power-efficiency block. The MATLAB/Simulink model is developed such that one part behaves as an input model to the 3D-COMSOL model of the fuel cell system, whereas the second part behaves as an output model that recovers the results obtained from the 3D-COMSOL model of the fuel cell. This approach to power system modelling shows the performance of a high temperature polymer electrolyte membrane fuel cell in a better and more accurate way.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_18-Analysis_of_Spatially_Modelled_High_Temperature_Polymer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain-Controlled for Changing Modular Robot Configuration by Employing Neurosky’s Headset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100617</link>
        <id>10.14569/IJACSA.2019.0100617</id>
        <doi>10.14569/IJACSA.2019.0100617</doi>
        <lastModDate>2019-06-29T10:16:21.6570000+00:00</lastModDate>
        
        <creator>Muhammad Haziq Hasbulah</creator>
        
        <creator>Fairul Azni Jafar</creator>
        
        <creator>Mohd. Hisham Nordin</creator>
        
        <creator>Kazutaka Yokota</creator>
        
        <subject>Dtto robot; motor imagery; OpenVibe; modular robot; configuration; communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Currently, Brain Computer Interface (BCI) systems are designed mostly for control or navigation purposes and are typically employed on mobile robots, manipulator robots and humanoid robots using Motor Imagery. This study presents an implementation of a BCI system on a Modular Self-Reconfigurable (MSR) Dtto robot so that the robot is able to propagate multiple configurations based on EEG brain signals. In this paper, a Neurosky Mindwave Mobile EEG headset is used, and a framework for controlling the Dtto robot with EEG signals processed by the OpenViBE software is built. A connection is established between the Neurosky headset and OpenViBE, where a Motor Imagery BCI is created to receive and process the EEG data in real time. The main idea of the developed system is to associate a direction (Left, Right, Up and Down), derived from hand and feet Motor Imagery, with a command for Dtto robot control. The directions from OpenViBE are sent via Lab Streaming Layer (LSL) and transmitted through Python software to the Arduino controllers in the robots. To test the system's performance, real-time experiments were conducted, and the results are discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_17-Brain_Controlled_for_Changing_Modular_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Use of Digital Games as a Persuasive Tool in Teaching Islamic Knowledge for Muslim Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100616</link>
        <id>10.14569/IJACSA.2019.0100616</id>
        <doi>10.14569/IJACSA.2019.0100616</doi>
        <lastModDate>2019-06-29T10:16:21.6400000+00:00</lastModDate>
        
        <creator>Madihah Sheikh Abdul Aziz</creator>
        
        <creator>Panadda Auyphorn</creator>
        
        <creator>Mohd Syarqawy Hamzah</creator>
        
        <subject>Digital games; persuasive tool; Islamic knowledge; Islamic values</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Various digital games have been developed that focus on providing a sense of enjoyment and excitement for their players, serving as a modern tool for relieving stress or simply for pleasure. In recent years, digital games have also been used for teaching and learning. For example, in the subject of History, games have been used to retell historical stories and, at the same time, to preserve history for the next generation to learn, understand and appreciate. Similarly, digital games with Islamic values have been developed to teach Islamic values and knowledge to players, in other words to persuade players to learn about or improve their knowledge of Islam. Many designers assume that games can be used as a persuasive tool to influence players to learn and understand Islam as a way of life. However, no prior research has examined the perceptions of players before and after playing Islamic digital games. To this end, this paper investigates and reports whether Islamic digital games can persuade gamers to understand Islam by exploring the use of these games among gamers. A total of 20 school children voluntarily participated in the experiment, and the findings are reported in this paper. The study found positive effects on the users' perceptions of playing digital games embedded with Islamic values.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_16-Exploring_the_use_of_Digital_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Students Attitudes on the Integration of m-Learning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100615</link>
        <id>10.14569/IJACSA.2019.0100615</id>
        <doi>10.14569/IJACSA.2019.0100615</doi>
        <lastModDate>2019-06-29T10:16:21.6230000+00:00</lastModDate>
        
        <creator>Abdulmohsin S. Alkhunaizan</creator>
        
        <subject>e-Learning; m-Learning; mobile applications; computer science; WhatsApp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Technology plays an important role in our lives nowadays, particularly in the field of education, because of its accessibility and affordability. Mobile learning (m-Learning), a form of e-learning, is a novel approach in the arena of educational technology that offers personal, informal, blended and flexible learning opportunities to learners and instructors. The present study attempts to determine computer students' attitudes toward the integration of an m-Learning app (WhatsApp). A total of 143 students participated in the experiment. The study used a quasi-experimental research design. The learners formed intact groups at a public university and were asked to complete assignments through the WhatsApp application. Two questionnaires were used to gather the data, which were analyzed descriptively. The findings showed that learners had positive attitudes towards the integration of m-Learning apps. The study also reports suggestions for future research and implications for teachers.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_15-Computer_Students_Attitudes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Preservation of Cultural Heritage: Terengganu Brassware Craft Knowledge Base</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100614</link>
        <id>10.14569/IJACSA.2019.0100614</id>
        <doi>10.14569/IJACSA.2019.0100614</doi>
        <lastModDate>2019-06-29T10:16:21.6100000+00:00</lastModDate>
        
        <creator>Wan Malini Wan Isa</creator>
        
        <creator>Nor Azan Mat Zin</creator>
        
        <creator>Fadhilah Rosdi</creator>
        
        <creator>Hafiz Mohd Sarim</creator>
        
        <subject>Cultural heritage; digital preservation; intangible cultural heritage; knowledge acquisition; knowledge base; knowledge representation; ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Early exposure to cultural heritage is necessary to preserve it from extinction. One form of cultural heritage that is now on the brink of extinction is the Terengganu brassware craft. Today's young generations are mostly not interested in this heritage. Furthermore, intangible heritage in the form of knowledge and skills is stored only in the memory of its practitioners. Lack of documentation has led to sole reliance on practitioners, such that the knowledge is lost upon their demise. Hence, intangible heritage knowledge has to be acquired and stored in a knowledge base system to keep it in a systematic and permanent form. Manipulating and transferring the knowledge and skills will also ensure the continuity of this heritage and its accessibility to future generations. This paper discusses the development of the knowledge base of the Terengganu brassware craft as a digital preservation of cultural heritage. Knowledge acquisition was carried out using interview and observation techniques. Then, the knowledge is represented using an ontology. This knowledge, in digital form, can be manipulated and disseminated to the community to ensure the continuity of the knowledge.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_14-Digital_Preservation_of_Cultural_Heritage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Graph-theoretic Clustering Algorithm for Mining International Linkages of Philippine Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100613</link>
        <id>10.14569/IJACSA.2019.0100613</id>
        <doi>10.14569/IJACSA.2019.0100613</doi>
        <lastModDate>2019-06-29T10:16:21.5770000+00:00</lastModDate>
        
        <creator>Sheila R. Lingaya</creator>
        
        <creator>Bobby D. Gerardo</creator>
        
        <creator>Ruji P. Medina</creator>
        
        <subject>MST-based clustering; Small World Network; von Neumann Neighborhood; internationalization; Prim’s MST</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Graph-theoretic clustering either uses a limited neighborhood or constructs a minimum spanning tree to aid the clustering process. The latter is challenged by the need to identify, and consequently eliminate, inconsistent edges to achieve the final clusters, detect outliers and partition substantially. This work focused on mining the data of the International Linkages of Philippine Higher Education Institutions by employing a modified graph-theoretic clustering algorithm in which Prim's Minimum Spanning Tree algorithm was used to construct a minimum spanning tree for the internationalization dataset, infusing the properties of a small world network. These properties are invoked by computing the local clustering coefficient for the data elements in the limited neighborhood of data points established using the von Neumann Neighborhood. The overall result of cluster validation using the Silhouette Index, with a score of 0.69, indicates that there is an acceptable structure in the clustering result, demonstrating the potential of the modified MST-based clustering algorithm. The per-cluster Silhouette scores, with 0.75 being the lowest, mean that each cluster derived for r=5 by the von Neumann Neighborhood has a strong clustering structure.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_13-Modified_Graph_Theoretic_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Hand Movements based on Discrete Wavelet Transform and Enhanced Feature Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100612</link>
        <id>10.14569/IJACSA.2019.0100612</id>
        <doi>10.14569/IJACSA.2019.0100612</doi>
        <lastModDate>2019-06-29T10:16:21.5630000+00:00</lastModDate>
        
        <creator>Jingwei Too</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Norhashimah Mohd Saad</creator>
        
        <subject>Electromyography; feature extraction; discrete wavelet transform; classification; pattern recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Extraction of potential electromyography (EMG) features has become one of the important tasks in EMG pattern recognition. In this paper, two EMG features, namely enhanced wavelength (EWL) and enhanced mean absolute value (EMAV), are proposed. EWL and EMAV are modified versions of wavelength (WL) and mean absolute value (MAV), which aim to enhance the prediction accuracy for the classification of hand movements. Initially, the proposed features are extracted from the EMG signals via the discrete wavelet transform (DWT). The extracted features are then fed into machine learning algorithms for the classification process. Four popular machine learning algorithms, namely k-nearest neighbor (KNN), linear discriminant analysis (LDA), Na&#239;ve Bayes (NB) and support vector machine (SVM), are used in the evaluation. To examine the effectiveness of EWL and EMAV, several conventional EMG features are used in a performance comparison. In addition, the efficacy of EWL and EMAV when combined with other features is also investigated. Based on the results obtained, the combination of EWL and EMAV with other features can improve the classification performance. Thus, EWL and EMAV can be considered valuable tools for rehabilitation and clinical applications.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_12-Classification_of_Hand_Movements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IOT): Research Challenges and Future Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100611</link>
        <id>10.14569/IJACSA.2019.0100611</id>
        <doi>10.14569/IJACSA.2019.0100611</doi>
        <lastModDate>2019-06-29T10:16:21.5470000+00:00</lastModDate>
        
        <creator>AbdelRahman H. Hussein</creator>
        
        <subject>Internet of Things; IoT applications; IoT challenges; future technologies; smart cities; smart environment; smart agriculture; smart living</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>With the Internet of Things (IoT) gradually evolving as the subsequent phase of the evolution of the Internet, it becomes crucial to recognize the various potential domains for the application of IoT, and the research challenges that are associated with these applications. Ranging from smart cities to health care, smart agriculture, logistics and retail, and even smart living and smart environments, IoT is expected to infiltrate virtually all aspects of daily life. Even though the current IoT enabling technologies have greatly improved in recent years, there are still numerous problems that require attention. Since the IoT concept ensues from heterogeneous technologies, many research challenges are bound to arise. The fact that IoT is so expansive and affects practically all areas of our lives makes it a significant research topic for studies in various related fields such as information technology and computer science. Thus, IoT is paving the way for new dimensions of research to be carried out. This paper presents the recent development of IoT technologies and discusses future applications and research challenges.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_11-Internet_of_Things_IOT_Research_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forensic Analysis using Text Clustering in the Age of Large Volume Data: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100610</link>
        <id>10.14569/IJACSA.2019.0100610</id>
        <doi>10.14569/IJACSA.2019.0100610</doi>
        <lastModDate>2019-06-29T10:16:21.5300000+00:00</lastModDate>
        
        <creator>Bandar Almaslukh</creator>
        
        <subject>Digital investigation; forensic analysis; text clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Exploring digital devices in order to generate digital evidence related to an incident being investigated is essential in modern digital investigation. The emergence of text clustering methods plays an important role in developing effective digital forensics techniques. However, the increase in the number of text sources and in the volume of digital devices seized for analysis has become a significant issue over the years, and many studies have indicated that it should be resolved urgently. In this paper, a comprehensive review of digital forensic analysis using text clustering methods is presented, investigating the challenges that large-volume data poses for digital forensic techniques. Moreover, a meaningful classification and comparison of the text clustering methods that have been frequently used for forensic analysis are provided. The major challenges, with solutions and future research directions, are also highlighted to open the door for researchers in the area of digital forensics in the age of large-volume data.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_10-Forensic_Analysis_using_Text_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Two Dimensional Electronic Nose for Vehicular Central Locking System (E-Nose-V)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100609</link>
        <id>10.14569/IJACSA.2019.0100609</id>
        <doi>10.14569/IJACSA.2019.0100609</doi>
        <lastModDate>2019-06-29T10:16:21.5000000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Odors; E-Nose; neural networks; security; correlation; smart vehicles; intelligent transportation systems; smart cities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>A new approach to vehicle security is proposed, tried, and tested. The designed and tested system comprises an odor detection system (E-Nose) that sends signals corresponding to selected odors to the smart vehicle Electronic Control Unit (ECU), which is interfaced to a smart system with neural networks. The signal is interpreted in time and space, whereby a certain ordered number of samples must be obtained before the vehicle functions are unlocked. Correlation of the rise and decay times and amplitudes of the signal is carried out to ensure security. The proposed system is highly secure and could be further developed to become a vital and integrated part of Intelligent Transportation Systems (ITS) through the addition of the driver’s body odor as an extra measure of security and, in case of an accident, to automatically call emergency services with driver identification and some diagnostics. Such a system can be utilized in smart cities.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_9-Two_Dimensional_Electronic_Nose.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Medical Internet of Things Framework based on Parkerian Hexad Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100608</link>
        <id>10.14569/IJACSA.2019.0100608</id>
        <doi>10.14569/IJACSA.2019.0100608</doi>
        <lastModDate>2019-06-29T10:16:21.4830000+00:00</lastModDate>
        
        <creator>Nidal Turab</creator>
        
        <creator>Qasem Kharma</creator>
        
        <subject>MIoT; Parkerian Hexad; PRMS; Lightweight Encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Medical Internet of Things (MIoT) applications enhance medical services by collecting data using devices connected to the IoT. The collected data, which may include personal data and location, is transmitted to a mobile device and then to the health care provider via an Internet Service Provider (ISP). Unfortunately, connecting a device to a network or sending data over a wide area network may make those devices and data vulnerable to unauthorized access. In this research, a secure 3-tier MIoT framework is proposed. Tier 1 includes the devices and sensors that collect data. These devices and sensors have limited resources; therefore, they cannot apply complex security and privacy algorithms. Tier 2 includes the devices that collect data from Tier 1 and submit it to Tier 3 via an Internet Service Provider (ISP). Tier 3 includes the Health Information System. The framework defines the controls needed between layers to secure user privacy and data, based on the Parkerian Hexad Model.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_8-Secure_Medical_Internet_of_Things_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Metaheuristics-based Tuning of Effective Design Parameters for Model Predictive Control Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100607</link>
        <id>10.14569/IJACSA.2019.0100607</id>
        <doi>10.14569/IJACSA.2019.0100607</doi>
        <lastModDate>2019-06-29T10:16:21.4700000+00:00</lastModDate>
        
        <creator>Mohamed Lotfi Derouiche</creator>
        
        <creator>Soufiene Bouall&#232;gue</creator>
        
        <creator>Joseph Hagg&#232;ge</creator>
        
        <creator>Guillaume Sandou</creator>
        
        <subject>Model predictive control; parameters tuning; advanced metaheuristics; MAGLEV 33-006; DTS200 three-tank process; LabVIEW implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>This paper presents a systematic tuning approach for Model Predictive Control (MPC) parameters using an original LabVIEW implementation of advanced metaheuristic algorithms. The perturbed Particle Swarm Optimization (pPSO), Gravitational Search Algorithm (GSA), Teaching-Learning Based Optimization (TLBO) and Grey Wolf Optimizer (GWO) metaheuristics are proposed to solve the formulated MPC tuning problem under operational constraints. The MPC tuning strategy is performed offline for the selection of both the prediction and control horizons as well as the weighting matrices. All proposed algorithms are first evaluated and validated on a benchmark of standard test functions. The same algorithms are then used to solve the formulated MPC tuning problem for two dynamical systems: the magnetic levitation system MAGLEV 33-006 and the three-tank DTS200 process. Demonstrative results, in terms of statistical metrics and closed-loop system responses, are presented and discussed in order to show the effectiveness and superiority of the proposed metaheuristics-based tuning approach. The developed CAD interface for the LabVIEW implementation of the proposed metaheuristics is provided and is freely accessible for extended optimization purposes.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_7-Advanced_Metaheuristics_based_Tuning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Muscles Heating Analysis in Sportspeople to Prevent Muscle Injuries using Thermal Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100606</link>
        <id>10.14569/IJACSA.2019.0100606</id>
        <doi>10.14569/IJACSA.2019.0100606</doi>
        <lastModDate>2019-06-29T10:16:21.4530000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Witman Alvarado-D&#237;az</creator>
        
        <creator>Fiorella Flores-Medina</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Thermal image; muscle heating; heat map; muscle injuries; temperature range</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Muscle warm-up is the process that every athlete follows before any physical activity or sport. The legs are where the greatest force is exerted and, if a good warm-up routine is not practiced, the muscles can suffer tears, cramps or fractures due to sudden movements while the muscles are cold. According to the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the United States, the most common injuries occur in the ankles, a central point on which great force is exerted by the weight of the athlete, which makes careful attention to the muscles important. That is why this research work evaluates muscle warm-up in athletes to prevent muscle damage. First, two thermal images, from before and after warming up, are obtained using the FLIR ONE Pro thermal camera following a protocol for distance, position and temperature range. The images are then processed in the MATLAB software to map them to the temperature range, and one is subtracted from the other to obtain the zones where temperature variations occur, indicating where adequate warm-up has been carried out. As a result, the areas where the subtraction of the two images was positive were obtained; this subtraction image is superimposed on the real image, showing the areas where an optimal warm-up has taken place.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_6-Muscles_Heating_Analysis_in_Sportspeople.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Means of Medical Students Teaching through Graphical Methods for Cardiac Data Estimating and Serious Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100605</link>
        <id>10.14569/IJACSA.2019.0100605</id>
        <doi>10.14569/IJACSA.2019.0100605</doi>
        <lastModDate>2019-06-29T10:16:21.3130000+00:00</lastModDate>
        
        <creator>Galya N. Georgieva-Tsaneva</creator>
        
        <subject>Serious games; medical education; cardiovascular disease; heart rate variability; time domain analysis; spectrogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Nowadays, non-traditional methods and tools mediated by the rapid development of information technologies are being introduced into the training of medical students: software training systems, serious games, and video materials. The paper presents a software system for the processing and analysis of physiological data, which is suitable for use in the medical students’ training process, and a study has been made of the use of serious games in medical education. For optimal work with the software system, a database of cardiac data has been created for healthy individuals and individuals with various cardiac diseases. The established system and the cardiology database can be used by medical doctors to study statistical parameters and graphical representations of cardiac data in various diseases. The results obtained from the analysis of the data through graphical methods can be used as an effective visual means of increasing the success of medical students&#39; training. The paper presents the results of a survey of the interest of medical students at higher education universities in Bulgaria in the inclusion of serious games in their medical training. The results show a high interest in game-based learning: by including serious games as an innovative form of training, it will be possible to build on theoretical education and to increase the efficiency of the education process.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_5-Innovative_Means_of_Medical_Students_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert System for Milk and Animal Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100604</link>
        <id>10.14569/IJACSA.2019.0100604</id>
        <doi>10.14569/IJACSA.2019.0100604</doi>
        <lastModDate>2019-06-29T10:16:21.2970000+00:00</lastModDate>
        
        <creator>Todor Todorov</creator>
        
        <creator>Juri Stoinov</creator>
        
        <subject>Expert system; milk quality; animal health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>Expert systems (ES) are one of the prominent research domains of artificial intelligence (AI). They are applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise. This paper presents the design and development of an expert system for data collection, analysis and decision making for early mastitis detection. It focuses on both milk quality and animal health.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_4-Expert_System_for_Milk_and_Animal_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Scattering Bond Graph Methodology for Composite Right/Left Handed Transmission Lines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100603</link>
        <id>10.14569/IJACSA.2019.0100603</id>
        <doi>10.14569/IJACSA.2019.0100603</doi>
        <lastModDate>2019-06-29T10:16:21.2670000+00:00</lastModDate>
        
        <creator>Islem Salem</creator>
        
        <creator>Hichem Taghouti</creator>
        
        <creator>Ahmed Rahmani</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Scattering Bond Graph (SBG); metamaterials; wave matrix [W]; modelization; transmission line; scattering matrix [S]; CRLH; OSCRR; OSRR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The metamaterial approach based on transmission line theory has influenced the development of the radiofrequency domain, and the types and techniques for elaborating these artificial lines are quite diversified. This paper presents two models of the CRLH (Composite Right/Left Handed) artificial transmission line composed of a combination of OSRR (Open Split Ring Resonators) and OSCRR (Complementary Open Split Ring Resonators). It also shows how the Scattering Bond Graph (SBG) study methodology provides electromagnetic simulation results (scattering parameters, phase response) well suited to Bond Graph (BG) modeling, and demonstrates the possibility of obtaining an ideally behaved line (solving the problem of impedance matching) and a better understanding of the behavior of radiofrequency systems.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_3-Application_of_the_Scattering_Bond_Graph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Visibility for Color Scheme on the Web Browser with Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100602</link>
        <id>10.14569/IJACSA.2019.0100602</id>
        <doi>10.14569/IJACSA.2019.0100602</doi>
        <lastModDate>2019-06-29T10:16:21.2330000+00:00</lastModDate>
        
        <creator>Miki Yamaguchi</creator>
        
        <creator>Yoshihisa Shinozawa</creator>
        
        <subject>Visibility prediction; color recognition; pairwise comparison experiment; human color vision; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>In this study, we propose neural networks for predicting the visibility of color schemes. In recent years, most of us have come to access websites owing to the spread of the Internet, so it is necessary to design web pages that allow users to access information easily. The color scheme is one of the most important elements of website design, and we therefore focus on the visibility of background and character colors in this study. Methods for predicting the visibility of color schemes have been proposed previously; in one of these methods, neural networks are used to forecast pairwise comparison tables that indicate the visibility of background and character colors. Our model employs neural networks for color recognition and visibility prediction. The neural networks used for color recognition include functions that forecast the color class name from a color and extract the features of the color. The neural networks used for visibility prediction include functions that employ the features of background and character colors extracted by the color recognition networks and forecast the visibility of a color scheme. Pairwise comparison tables are forecasted from the prediction results of the neural networks for visibility prediction. We conducted a pairwise comparison experiment on a web browser, as well as a color recognition experiment, and evaluated our model. The results of the experiments suggest that our model can improve the accuracy of pairwise comparison tables compared to existing methods. Thus, the proposed model can be used to predict the visibility of color schemes.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_2-Prediction_of_Visibility_for_Color_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AHP-based Security Decision Making: How Intention and Intrinsic Motivation Affect Policy Compliance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100601</link>
        <id>10.14569/IJACSA.2019.0100601</id>
        <doi>10.14569/IJACSA.2019.0100601</doi>
        <lastModDate>2019-06-29T10:16:21.1230000+00:00</lastModDate>
        
        <creator>Ahmed Alzahrani</creator>
        
        <creator>Christopher Johnson</creator>
        
        <subject>Analytic Hierarchy Process; behavioural intention; autonomy; competence; relatedness; information security policy compliance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(6), 2019</description>
        <description>The analytic hierarchy process is a multiple-criteria tool used in decision-making applications. In this paper, the analytic hierarchy process is used to guide information security policy decision-making by identifying influencing factors and their weights for information security policy compliance. The weights for intrinsic motivators are identified based on self-determination theory as essential criteria, namely autonomy, competence and relatedness, along with behavioural intention towards compliance and four awareness focus areas. A survey of cyber-security decision-makers at a Fortune 600 organisation provided the data. The results suggest that behavioural intention (52% of the weight of influencing factors) is more important than autonomy (21%), competence (21%) or relatedness (6%) in influencing behaviour towards information security policy compliance. Determining the weights of intrinsic motivation, intention, and awareness focus areas can help security decision-making and compliance with policy, and support the design of effective security awareness programmes. However, these weights may in turn be affected by local organisational and cultural factors.</description>
        <description>http://thesai.org/Downloads/Volume10No6/Paper_1-AHP_based_Security_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Value-Driven use Cases Triage for Embedded Systems: A Case Study of Cellular Phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100590</link>
        <id>10.14569/IJACSA.2019.0100590</id>
        <doi>10.14569/IJACSA.2019.0100590</doi>
        <lastModDate>2019-05-31T10:11:47.2830000+00:00</lastModDate>
        
        <creator>Neunghoe Kim</creator>
        
        <creator>Younkyu Lee</creator>
        
        <creator>Vijayan Sugumaran</creator>
        
        <creator>Soojin Park</creator>
        
        <subject>Value-based software engineering; use case triage; embedded system; cost-benefit analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A well-defined and prioritized set of use cases enables the enhancement of an entire system by focusing on the more important use cases identified in the previous iteration. These use cases are given more opportunities to be refined and tested. Until now, use case prioritization has been done from a user perspective and through balanced measurement of actor/object usage. The lack of cost consideration for realization, however, renders it ineffective for economic purposes. Hence, this study incorporates the ‘value’ concept, based on cost-benefit analysis, into use case prioritization for embedded systems. The use case satisfaction level is used as the surrogate for ‘benefit’, and the complexity of implementation for ‘cost’. Use cases are then prioritized based on this value. As a proof of concept, we apply our value-based prioritization method to the development of a camera system in a cellular phone.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_90-Value_Driven_use_Cases_Triage_for_Embedded_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Navigation Application with Safety Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100589</link>
        <id>10.14569/IJACSA.2019.0100589</id>
        <doi>10.14569/IJACSA.2019.0100589</doi>
        <lastModDate>2019-05-30T13:34:33.9530000+00:00</lastModDate>
        
        <creator>Andrew Usama</creator>
        
        <creator>Moustafa Waly</creator>
        
        <creator>Habiba Elwany</creator>
        
        <creator>Mohamed H. ElGazzar</creator>
        
        <creator>Monica Medhat</creator>
        
        <creator>Youssef Mobarak</creator>
        
        <subject>Traffic safety; navigation systems; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In 2017, the number of car accidents that occurred was astronomically high, even though infrastructural road systems are continuously being built and renewed to make them more efficient. A significant problem that still remains is this staggering number of accidents, which is exactly what should be avoided. In order to address this issue, this paper surveys and discusses some of the proposed solutions, both software and hardware, for this problem. This includes some of the implemented safety features, while also exploring how to make the system more interactive and smooth to meet user needs.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_89-Navigation_Application_with_Safety_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain-based Value Added Tax (VAT) System: Saudi Arabia as a Use-Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100588</link>
        <id>10.14569/IJACSA.2019.0100588</id>
        <doi>10.14569/IJACSA.2019.0100588</doi>
        <lastModDate>2019-05-30T13:34:33.9230000+00:00</lastModDate>
        
        <creator>Ahmad Alkhodre</creator>
        
        <creator>Toqeer Ali</creator>
        
        <creator>Salman Jan</creator>
        
        <creator>Yazed Alsaawy</creator>
        
        <creator>Shah Khusro</creator>
        
        <creator>Muhammad Yasar</creator>
        
        <subject>VAT; hyperledger; blockchain; consensus; decentralized network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Businesses need trust to confidently trade with each other. Centralized business models are the only mature solutions available for performing trades over the Internet. However, they have many problems, which include, but are not limited to, the fact that they create a bottleneck on the server and require trusted third parties. Recently, decentralized solutions have gained significant popularity and acceptance for future businesses. The wide acceptance of such systems is largely due to the trust management they provide among various untrusted business stakeholders. Many solutions have been proposed in this regard to provide decentralized infrastructure for various business models, but a standard solution that is acceptable to the industry is still in demand. Blockchain projects under the Hyperledger umbrella, which are supported by IBM and many other big industry players, are gaining popularity due to their efficient and pluggable design. In this study, the authors present the idea of utilizing blockchain to design a Value-Added Tax (VAT) system for Saudi Arabia’s newly introduced tax system. The reason for selecting this business model for VAT is twofold. First, it provides an untampered distributed ledger that cannot be deceived by any party: each transaction in the system cannot go unnoticed by the smart contract. Secondly, it provides a transparent record and updates all involved parties on each activity performed by stakeholders. The proposed system will provide a transparent database of VAT transactions according to our smart contract design, and at each stage of the supply chain, tax will be deducted and stored on the peer-to-peer network via a consensus process. The authors believe that the proposed solution will have a significant impact on VAT collection in the Kingdom of Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_88-A_Blockchain_based_Value_Added_Tax.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Media Cyberbullying Detection using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100587</link>
        <id>10.14569/IJACSA.2019.0100587</id>
        <doi>10.14569/IJACSA.2019.0100587</doi>
        <lastModDate>2019-05-30T13:34:33.9070000+00:00</lastModDate>
        
        <creator>John Hani</creator>
        
        <creator>Mohamed Nashaat</creator>
        
        <creator>Mostafa Ahmed</creator>
        
        <creator>Zeyad Emad</creator>
        
        <creator>Eslam Amer</creator>
        
        <creator>Ammar Mohammed</creator>
        
        <subject>Cyberbullying; machine learning; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>With the exponential increase in social media users, cyberbullying has emerged as a form of bullying through electronic messages. Social networks provide a rich environment that bullies exploit to attack vulnerable victims. Given the consequences of cyberbullying for victims, it is necessary to find suitable actions to detect and prevent it. Machine learning can help detect the language patterns of bullies and hence generate a model to automatically detect cyberbullying actions. This paper proposes a supervised machine learning approach for detecting and preventing cyberbullying. Several classifiers are used to train on and recognize bullying actions. The evaluation of the proposed approach on a cyberbullying dataset shows that the Neural Network performs best, achieving an accuracy of 92.8%, while SVM achieves 90.3%. The NN also outperforms classifiers from similar work on the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_87-Social_Media_Cyberbullying_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm Robotics and Rapidly Exploring Random Graph Algorithms Applied to Environment Exploration and Path Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100586</link>
        <id>10.14569/IJACSA.2019.0100586</id>
        <doi>10.14569/IJACSA.2019.0100586</doi>
        <lastModDate>2019-05-30T13:34:33.8930000+00:00</lastModDate>
        
        <creator>Cindy Calderon-Arce</creator>
        
        <creator>Rebeca Solis-Ortega</creator>
        
        <subject>Swarm robotics; cellular automata; path planning; Rapidly-exploring Random Graph (RRG); scalarized multiobjective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>We propose an efficient scheme based on a swarm robotics approach for exploring unknown environments. The initial goal is to trace a map which is later used to find optimal paths. The algorithm minimizes distance and danger. The proposed scheme consists of three phases: exploration, mapping and path optimization. A cellular automata approach is used for the simulation of the first two phases. For the exploration phase, a stigmergy approach is applied in order to allow swarm communication in an implicit way. For the path planning phase, a hybrid method is proposed: first an adapted Rapidly-exploring Random Graph algorithm is used, and then a scalarized multiobjective technique is applied to find the shortest path.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_86-Swarm_Robotics_and_Rapidly_Exploring_Random_Graph_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Blockchain in Governance: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100585</link>
        <id>10.14569/IJACSA.2019.0100585</id>
        <doi>10.14569/IJACSA.2019.0100585</doi>
        <lastModDate>2019-05-30T13:34:33.7530000+00:00</lastModDate>
        
        <creator>Asad Razzaq</creator>
        
        <creator>Muhammad Murad Khan</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Arslan Dawood Butt</creator>
        
        <creator>Noman Hanif</creator>
        
        <creator>Sultan Afzal</creator>
        
        <creator>Muhammad Razeen Raouf</creator>
        
        <subject>Blockchain; governance; voting; security; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Blockchain is a distributed, network-based ledger that is secured by methods of cryptographic proof. It enables the creation of self-executable digital contracts, i.e., smart contracts. This technology is being applied in collaboration with major areas of research including governance, IoT, health, banking and education. It offers revolutionary approaches that help us overcome problems of governance such as human error, voting, data privacy, security and food safety. In governance, there is a need to ameliorate services and facilities with the assistance of blockchain technology. This paper aims to explore the issues of governance which can be resolved with the assistance of Blockchain features. Furthermore, this paper also provides future work directions.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_85-Use_of_Blockchain_in_Governance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assuring Non-fraudulent Transactions in Cash on Delivery by Introducing Double Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100584</link>
        <id>10.14569/IJACSA.2019.0100584</id>
        <doi>10.14569/IJACSA.2019.0100584</doi>
        <lastModDate>2019-05-30T13:34:33.7200000+00:00</lastModDate>
        
        <creator>Ngoc Tien Thanh Le</creator>
        
        <creator>Quoc Nghiep Nguyen</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Thai Tam Huynh</creator>
        
        <creator>The Phuc Nguyen</creator>
        
        <creator>Ha Xuan Son</creator>
        
        <subject>Cash on Delivery (COD); Blockchain; smart contract; Ethereum; e-commerce; online payment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The adoption of decentralized cryptocurrency platforms is growing fast, thanks to the implementation of Blockchain technology and smart contracts. It encourages novel frameworks in a wide range of applications, including finance and payment methods such as cash on delivery. However, a large number of smart contracts developed for cash on delivery suffer from fraudulent transactions which enable malicious participants to break the signed contracts without sufficient penalties. In our system, a shipper participates and places a mortgage to ensure reliability, and a buyer also pledges an amount of money when making the order. Our process not only ensures the interests of the seller but also prevents a fraudulent shipper. Penalties are imposed in two scenarios: (i) the buyer refuses to receive the commodities without any reliable reason; and (ii) the shipper attempts to make any modification to the delivered goods during transportation. To help developers create more secure and reliable cash on delivery systems, we introduce double smart contracts, a framework rooted in Blockchain technology and Ethereum, to tackle the mentioned problems. We also contribute our solution as open source software that developers can easily add to their implementations to enhance functionality.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_84-Assuring_Non_Fraudulent_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact Study and Evaluation of Higher Modulation Schemes on Physical Layer of Upcoming Wireless Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100583</link>
        <id>10.14569/IJACSA.2019.0100583</id>
        <doi>10.14569/IJACSA.2019.0100583</doi>
        <lastModDate>2019-05-30T13:34:33.6900000+00:00</lastModDate>
        
        <creator>Heba Haboobi</creator>
        
        <creator>Mohammad R Kadhum</creator>
        
        <subject>Orthogonal Frequency Division Multiplexing (OFDM); Quadrature Amplitude Modulation (QAM); Bit Error Rate (BER); Signal to Noise Ratio (SNR); Bandwidth (BW); Additive White Gaussian Noise (AWGN); Rayleigh noise; physical layer (PHY)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In this paper, higher modulation formats (128 and 256) of Quadrature Amplitude Modulation (QAM), for modulating/demodulating the digital signal of the currently used Orthogonal Frequency Division Multiplexing (OFDM) system, are proposed, explored and evaluated in a wireless transmission system. The proposed modulation schemes are utilized to study the impact of adding extra bits for each transmitted sample on system performance in terms of channel capacity, Bit Error Rate (BER) and Signal to Noise Ratio (SNR). As such, the key purpose of this research is to identify the advantages and disadvantages of using higher modulation schemes on the physical layer (PHY) of future mobile networks. In addition, the trade-off between the achieved bit rate and the required receiver power is examined in the presence of Additive White Gaussian Noise (AWGN) and Rayleigh noise channels. Besides, the currently employed waveform (OFDM) is considered herein as an essential environment to test the effect of receiving additional complex numbers on the constellation table, and thus to investigate the ability to recognize both the phase and amplitude of the intended constellations for the upcoming design of wireless transceivers. Moreover, a MATLAB simulation is employed to evaluate the proposed system mathematically and physically in an electrical back-to-back transmission system.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_83-Impact_Study_and_Evaluation_of_Higher_Modulation_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Gated Recurrent and Convolutional Network Hybrid Model for Univariate Time Series Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100582</link>
        <id>10.14569/IJACSA.2019.0100582</id>
        <doi>10.14569/IJACSA.2019.0100582</doi>
        <lastModDate>2019-05-30T13:34:33.6730000+00:00</lastModDate>
        
        <creator>Nelly Elsayed</creator>
        
        <creator>Anthony S Maida</creator>
        
        <creator>Magdy Bayoumi</creator>
        
        <subject>GRU-FCN; LSTM; fully convolutional neural network; time series; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Hybrid LSTM-fully convolutional networks (LSTM-FCN) for time series classification have produced state-of-the-art classification results on univariate time series. We empirically show that replacing the LSTM with a gated recurrent unit (GRU) to create a GRU-fully convolutional network hybrid model (GRU-FCN) can offer even better performance on many time series datasets without further changes to the model. Our empirical study showed that the proposed GRU-FCN model also outperforms the state-of-the-art classification performance on many univariate time series datasets without requiring additional supporting algorithms. Furthermore, since the GRU uses a simpler architecture than the LSTM, it has fewer training parameters, less training time, smaller memory storage requirements, and a simpler hardware implementation, compared to LSTM-based models.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_82-Deep_Gated_Recurrent_and_Convolutional_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Blockchain in Healthcare: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100581</link>
        <id>10.14569/IJACSA.2019.0100581</id>
        <doi>10.14569/IJACSA.2019.0100581</doi>
        <lastModDate>2019-05-30T13:34:33.6430000+00:00</lastModDate>
        
        <creator>Sobia Yaqoob</creator>
        
        <creator>Muhammad Murad Khan</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Arslan Dawood Butt</creator>
        
        <creator>Sohaib Saleem</creator>
        
        <creator>Fatima Arif</creator>
        
        <creator>Amna Nadeem</creator>
        
        <subject>Issues; healthcare; blockchain; systematic review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Blockchain is an emerging field which works on the concept of a digitally distributed ledger and consensus algorithms, removing the threats posed by intermediaries. Its early applications were related to the finance sector, but the concept has now been extended to almost all major areas of research including education, IoT, banking, supply chain, defense, governance and healthcare. In the field of healthcare, stakeholders (providers, patients, payers, research organizations, and supply chain bearers) demand interoperability, security, authenticity, transparency, and streamlined transactions. Blockchain technology, built over the internet, has the potential to manage current healthcare data in a peer-to-peer and interoperable manner by using a patient-centric approach that eliminates third parties. Using this technology, applications can be built to manage and share secure, transparent and immutable audit trails with reduced systematic fraud. This study reviews existing literature in order to identify the major issues of various healthcare stakeholders and to explore the features of blockchain technology that could resolve the identified issues. However, there are some challenges and limitations of this technology which need to be addressed in future research.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_81-Use_of_Blockchain_in_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Fusion for Negation Scope Detection in Sentiment Analysis: Comprehensive Analysis over Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100580</link>
        <id>10.14569/IJACSA.2019.0100580</id>
        <doi>10.14569/IJACSA.2019.0100580</doi>
        <lastModDate>2019-05-30T13:34:33.6270000+00:00</lastModDate>
        
        <creator>Nikhil Kumar Singh</creator>
        
        <creator>Deepak Singh Tomar</creator>
        
        <subject>Sentiment analysis; feature fusion; negation cues; scope detection; conjunction analysis; punctuation mark; grammatical dependency tree; Naive Bayes; linear regression; random forest; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Negation control for sentiment analysis is essential for an effective decision support system. Negation control includes identification of negation cues, the scope of negation, and their influence within it. Negation can either shift or change the polarity score of an opinionated word. This paper presents a framework fusing text feature extraction with negation cue and scope detection techniques to enhance the performance of recent sentiment classifiers under negation. We explore the text features POS, BOW and HT, together with negation cue and scope detection techniques, for classification over a social media data set. The paper evaluates four sentiment classifiers (Support Vector Machine, Naive Bayes, Linear Regression and Random Forest) and nine feature fusions over the presented preprocessing framework, and yields interesting results about the collective response of feature fusion for negation scope detection and classification. Feature fusion vectors significantly increase the polarity classification accuracy of the sentiment classification techniques, and POS with a grammatical dependency tree detects negation with better accuracy than the other feature fusions.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_80-Feature_Fusion_for_Negation_Scope_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approaches for Data Augmentation and Classification of Breast Masses using Ultrasound Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100579</link>
        <id>10.14569/IJACSA.2019.0100579</id>
        <doi>10.14569/IJACSA.2019.0100579</doi>
        <lastModDate>2019-05-30T13:34:33.5970000+00:00</lastModDate>
        
        <creator>Walid Al-Dhabyani</creator>
        
        <creator>Mohammed Gomaa</creator>
        
        <creator>Hussien Khaled</creator>
        
        <creator>Aly Fahmy</creator>
        
        <subject>Generative Adversarial Networks (GAN); Convolutional Neural Network (CNN); deep learning; breast cancer; Transfer Learning (TL); data augmentation; ultrasound (US) imaging; cancer diagnosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Breast mass classification and detection using ultrasound imaging is considered a significant step in computer-aided diagnosis systems. Over the previous decades, researchers have demonstrated opportunities to automate initial tumor classification and detection. The shortage of popular datasets of ultrasound images of breast cancer prevents researchers from obtaining good performance from classification algorithms. Traditional augmentation approaches are firmly limited, especially in tasks where the images follow strict standards, as in the case of medical datasets. Therefore, besides traditional augmentation, we use a new methodology for data augmentation based on Generative Adversarial Networks (GAN). We achieved higher accuracies by integrating traditional with GAN-based augmentation. This paper uses two breast ultrasound image datasets obtained from two different ultrasound systems. The first dataset is our own, collected from Baheya Hospital for Early Detection and Treatment of Women’s Cancer, Cairo (Egypt); we name it the Breast Ultrasound Images (BUSI) dataset. It contains 780 images (133 normal, 437 benign and 210 malignant). Dataset (B) is obtained from related work and has 163 images (110 benign and 53 malignant). To overcome the shortage of public datasets in this field, the BUSI dataset will be made publicly available for researchers. Moreover, in this paper, deep learning approaches are proposed for breast ultrasound classification. We examine two different methods, a Convolutional Neural Network (CNN) approach and a Transfer Learning (TL) approach, and compare their performance with and without augmentation. The results confirm an overall enhancement using augmentation methods with deep learning classification methods (especially transfer learning) when evaluated on the two datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_79-Deep_Learning_Approaches_for_Data_Augmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intruder Attacks on Wireless Sensor Networks: A Soft Decision and Prevention Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100578</link>
        <id>10.14569/IJACSA.2019.0100578</id>
        <doi>10.14569/IJACSA.2019.0100578</doi>
        <lastModDate>2019-05-30T13:34:33.5800000+00:00</lastModDate>
        
        <creator>Iftikhar Hussain</creator>
        
        <creator>Samman Zahra</creator>
        
        <creator>Abrar Hussain</creator>
        
        <creator>Hayat Dino Bedru</creator>
        
        <creator>Shahzad Haider</creator>
        
        <creator>Diana Gumzhacheva</creator>
        
        <subject>MAC protocols; S-MAC; wireless sensor networks; intrusion detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Because of the wide range of applications in a variety of fields, such as medicine, environmental studies, robotics, warfare and security, research on wireless sensor networks (WSNs) has attracted much attention recently. WSNs offer economical, flexible, scalable and pragmatic solutions in many situations. Sensor nodes are tiny and have a limited, non-rechargeable battery source, small memory/computational abilities and low transmitter power. Energy resources are vital, as once the battery is depleted, the node is no longer usable. Multiple medium access control (MAC) protocols are designed to increase the life cycle of a node by minimizing its unnecessary energy consumption. In some critical applications, like the surveillance of enemy movements on a battlefield, opponents deploy adversary nodes to disturb the performance of WSNs, mainly by depleting the battery sources of legitimate nodes. In this work, an intrusion detection mechanism has been adapted to detect different kinds of intruder attacks on the MAC protocols of WSNs. A soft decision mechanism has been implemented to detect collision and exhaustion attacks. A preventative mechanism has also been introduced, which helps a node to avoid these intrusive attacks. Results show how the lifetime of a node increases and network performance improves, with better throughput and reduced delay.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_78-Intruder_Attacks_on_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Framework for Drug Synergy Prediction using Differential Evolution based Multinomial Random Forest</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100577</link>
        <id>10.14569/IJACSA.2019.0100577</id>
        <doi>10.14569/IJACSA.2019.0100577</doi>
        <lastModDate>2019-05-30T13:34:33.5630000+00:00</lastModDate>
        
        <creator>Jaspreet Kaur</creator>
        
        <creator>Dilbag Singh</creator>
        
        <creator>Manjit Kaur</creator>
        
        <subject>Machine learning; random forest; drug synergy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Efficient prediction of drug synergy plays a significant role in the medical domain. Examination of different drug-drug interactions can be achieved by considering the drug synergy score. With the rapid increase in cancer disease, it becomes difficult for doctors to predict the appropriate amount of drug synergy, because each cancer patient’s infection level varies; too little or too much of a drug may harm these patients. Machine learning techniques are extensively used to estimate the drug synergy score. However, machine learning based drug synergy prediction approaches suffer from the parameter tuning problem. To overcome this issue, in this paper, an efficient Differential Evolution based multinomial Random Forest (DERF) model is designed and implemented. Extensive experiments are carried out on both existing machine learning models and the proposed DERF model. The comparative analysis reveals that DERF outperforms existing techniques in terms of coefficient of determination, root mean squared error and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_77-A_Novel_Framework_for_Drug_Synergy_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Survey on the Performance Analysis of Underwater Wireless Sensor Networks (UWSN) Routing Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100576</link>
        <id>10.14569/IJACSA.2019.0100576</id>
        <doi>10.14569/IJACSA.2019.0100576</doi>
        <lastModDate>2019-05-30T13:34:33.5330000+00:00</lastModDate>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Faheem Akhtar</creator>
        
        <creator>Sher Daudpota</creator>
        
        <creator>Khalil ur Rehman</creator>
        
        <creator>Saqib Ali</creator>
        
        <creator>Fawaz Mahiuob Mokbal</creator>
        
        <subject>Underwater Wireless Sensor Networks (UWSN); routing protocols; end-to-end delay; energy consumptions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The search for innovative technologies to improve underwater wireless sensor network devices is a pressing issue today. The undersea is a remarkable and mystical region which is still unexplored and inaccessible on earth. Interest has been increasing in monitoring the underwater medium for oceanographic data collection, surveillance applications, offshore exploration, disaster prevention, commercial and scientific investigation, attack avoidance, and other military purposes. In underwater milieus, sensor networks face a dangerous situation due to the intrinsic nature of water. Significant challenges in this regard are the high power consumption of acoustic modems, high propagation latency in data transmission, and dynamic node topology due to wave movements. Routing protocols working in UWSNs have a low stability period due to increased data flooding, which causes nodes to expire quickly through unnecessary data forwarding and high energy consumption. The quick energy consumption of nodes creates large coverage holes in the core network. To keep sensor nodes functional in an underwater network, dedicated routing protocols are needed that maintain path connectivity; maintaining this connectivity consumes more energy and incurs a high route update cost with a high end-to-end delay for the retransmission of packets. So, in this paper, we provide a comprehensive survey of different routing protocols employed in UWSNs. The UWSN routing protocols are studied and evaluated with respect to the network environment and quality measures such as end-to-end delay, dynamic network topology, energy consumption and packet delivery ratio. The merits and demerits of each routing protocol are also highlighted.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_76-A_Comprehensive_Survey_on_the_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature based Algorithmic Analysis on American Sign Language Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100575</link>
        <id>10.14569/IJACSA.2019.0100575</id>
        <doi>10.14569/IJACSA.2019.0100575</doi>
        <lastModDate>2019-05-30T13:34:33.5170000+00:00</lastModDate>
        
        <creator>Umair Muneer Butt</creator>
        
        <creator>Basharat Husnain</creator>
        
        <creator>Usman Ahmed</creator>
        
        <creator>Arslan Tariq</creator>
        
        <creator>Iqra Tariq</creator>
        
        <creator>Muhammad Aadil Butt</creator>
        
        <creator>Dr. Muhammad Sultan Zia</creator>
        
        <subject>Hand gesture recognition; pre-processing; weka; rapid miner; HOG; LBP; auto model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Physical disability is one of the factors in human beings which cannot be ignored. A person who cannot hear by nature is called a deaf person. For the representation of their knowledge, a special language called ‘Sign Language’ is adopted. American Sign Language (ASL) is one of the most popular sign languages used in the learning process of deaf persons. American Sign Language contains a set of digital images of hands in different shapes, or hand gestures. In this paper, we present a feature based algorithmic analysis to prepare a significant model for the recognition of American Sign Language hand gestures. This model can be used to make a machine learn efficiently. For effective machine learning, we generate a list of useful features from digital images of hand gestures. For feature extraction, we use Matlab 2018a. For training and testing, we use weka-3-9-3 and Rapid Miner 9 1.0. Both application tools are used to build effective data models. Rapid Miner outperforms with 99.9% accuracy in auto model.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_75-Feature_based_Algorithmic_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Alerts Clustering for Intrusion Detection Systems: Overview and Machine Learning Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100574</link>
        <id>10.14569/IJACSA.2019.0100574</id>
        <doi>10.14569/IJACSA.2019.0100574</doi>
        <lastModDate>2019-05-30T13:34:33.5030000+00:00</lastModDate>
        
        <creator>Wajdi Alhakami</creator>
        
        <subject>Intrusion detection systems; alert clustering; taxonomy; survey; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The tremendous number of security alerts generated by high-speed networks makes the management of intrusion detection computationally expensive. Moreover, the high rate of false alerts degrades the performance of Intrusion Detection Systems (IDS) and decreases their capability to prevent cyber-attacks, leading to a tedious alert analysis task. It is therefore important to develop new tools to understand intrusion data and to represent it in a compact form using, for example, an alert clustering process. This hot research topic is studied here: an understandable taxonomy, followed by a deep survey of the main published works related to intrusion alert management, is presented in this paper. The second part of this work exposes the different steps useful for designing a unified IDS on the basis of machine learning techniques, which are considered among the most powerful tools for solving problems related to alert management and outlier detection.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_74-Alerts_Clustering_for_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bayesian Network Analysis for the Questionnaire Investigation on the Needs at Fuji Shopping Street Town under the View Point of Service Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100573</link>
        <id>10.14569/IJACSA.2019.0100573</id>
        <doi>10.14569/IJACSA.2019.0100573</doi>
        <lastModDate>2019-05-30T13:34:33.4870000+00:00</lastModDate>
        
        <creator>Tsuyoshi Aburai</creator>
        
        <creator>Akane Okubo</creator>
        
        <creator>Daisuke Suzuki</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>Fuji city; area rebirth; regional vitalization; Bayesian network; back propagation; service engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Shopping streets in local Japanese cities have aged and are generally declining. This paper deals with the area rebirth and/or regional revitalization of such shopping streets, focusing on Fuji city in Japan. Four big festivals are held at Fuji city (two for Fuji Shopping Street Town and two for Yoshiwara Shopping Street Town), and many people, including residents of the area, visit them. A questionnaire investigation of residents and visitors was therefore conducted during these periods in order to clarify their needs for the shopping street and to utilize the findings in planning the area rebirth and/or regional revitalization. Since there is a big difference between Fuji Shopping Street Town and Yoshiwara Shopping Street Town, this paper focuses on Fuji Shopping Street Town. The questionnaire results are analyzed using a Bayesian network, followed by a sensitivity analysis; odds ratios are calculated for the sensitivity analysis results in order to obtain much clearer findings. The analysis using the Bayesian network enabled us to visualize the causal relationships among items, and the sensitivity analysis, performed by the back propagation method, allowed us to estimate and predict the prospective visitors. These results are utilized for constructing a much more effective and useful plan. Fruitful results are obtained. Confirming the findings using new consecutive visiting records remains future work to be investigated.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_73-Bayesian_Network_Analysis_for_the_Questionnaire_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SentiNeural: A Depression Clustering Technique for Egyptian Women Sentiments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100572</link>
        <id>10.14569/IJACSA.2019.0100572</id>
        <doi>10.14569/IJACSA.2019.0100572</doi>
        <lastModDate>2019-05-30T13:34:33.4700000+00:00</lastModDate>
        
        <creator>Doaa Mohey ElDin</creator>
        
        <creator>Mohamed Hamed N. Taha</creator>
        
        <creator>Nour Eldeen M. Khalifa</creator>
        
        <subject>Sentiment analysis; negation handling; depression analysis; neural network; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Online sentiment analysis is a trending research domain based on natural language processing, artificial intelligence, and computational linguistics. Negation sentiments are usually not included in the sentiment analysis process, yet depression analysis can be improved by processing negative sentiments, which may help classify depression problems and their causes. The proposed clustering technique can detect female sentiments from sentiment text through cause classification and the written sentiment style. The combination of sentiment analysis and neural networks is a promising solution for creating a new clustering algorithm. According to the Egypt Independent journal in 2018, 7% of Egyptians suffer from mental illness, as reported by the Public Health Ministry in Egypt; the real figure is higher than this percentage, which causes major social problems such as divorce, avoidance of responsibilities, or non-marriage. This paper addresses these statistics and clusters the depression causes and social status for each sentiment. Online women's sentiments are the essential focus of this research. The proposed technique consists of two algorithms: a clustering algorithm for the user's sex and a classification algorithm for the causes and responsibilities of women. The proposed clustering algorithm can automatically recognize the sentiment user's sex (female or male) and the level of depression. The neural network clustering approach produces accurate analysis results. The hardness of depression analysis is implicitly and explicitly demonstrated in the different classifications of sentiments. This paper introduces a new technique for clustering sentiments and evaluating Egyptian women's depression based on social sentiments.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_72-SentiNeural_A_Depression_Clustering_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Bandwidth Allocation in LAN using Dynamic Excess Rate Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100571</link>
        <id>10.14569/IJACSA.2019.0100571</id>
        <doi>10.14569/IJACSA.2019.0100571</doi>
        <lastModDate>2019-05-30T13:34:33.4530000+00:00</lastModDate>
        
        <creator>Muhammad Abubakar Muhammad</creator>
        
        <creator>Muhammad Azhar Mushatq</creator>
        
        <creator>Abid Sultan</creator>
        
        <creator>Muhammad Afrasayab</creator>
        
        <subject>DER; ISPs; PIR; DBA; MRT; CIR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Today, both humans and information processing systems need rapid access to anything they want on the internet. To fulfill these needs, more and more internet service providers with large amounts of bandwidth are entering the market. For these providers, a lot of bandwidth is free during off-peak hours, while during peak hours the total available bandwidth might be insufficient. The primary purpose of our research is to divide and distribute the excess bandwidth among users during off-peak hours to attain maximum user satisfaction. To do this, a dynamic excess rate (DER) scheme and its framework are proposed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_71-Dynamic_Bandwidth_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reengineering Framework to Enhance the Performance of Existing Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100570</link>
        <id>10.14569/IJACSA.2019.0100570</id>
        <doi>10.14569/IJACSA.2019.0100570</doi>
        <lastModDate>2019-05-30T13:34:33.4400000+00:00</lastModDate>
        
        <creator>Jaswinder Singh</creator>
        
        <creator>Kanwalvir Singh Dhindsa</creator>
        
        <creator>Jaiteg Singh</creator>
        
        <subject>Reengineering; maintenance; decision tree; agile methodology; scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The term reengineering refers to improving the quality of a system. Continuous maintenance and aging degrade the performance of a software system. The right approach and methodology must be adopted to perform reengineering; without them, reengineering itself will be costly and time-consuming. The main concerns of the reengineering process are when to reengineer, how to estimate the cost, which approach to use, and how to validate the software enhancement. This paper proposes a framework to identify the need for reengineering, to estimate the cost of reengineering, and to validate software quality improvement. The research uses the agile methodology to perform the reengineering tasks. Reengineering needs are identified using a prediction-based decision tree approach, and reengineering is applied using the agile Scrum methodology. Cost estimation is done using story point estimation. Performance analysis is done using complexity measures of the internal design metrics and the mean time to execute metric. The research used various automated tools such as CKJM ver1.9, RapidMiner Studio ver7.1, and the NetBeans 7.3 framework.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_70-Reengineering_Framework_to_Enhance_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Agile Method and Scrum Method with Software Quality Affecting Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100569</link>
        <id>10.14569/IJACSA.2019.0100569</id>
        <doi>10.14569/IJACSA.2019.0100569</doi>
        <lastModDate>2019-05-30T13:34:33.4070000+00:00</lastModDate>
        
        <creator>Muhammad Asaad Subih</creator>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Imran Mazhar</creator>
        
        <creator>Izaz-ul-Hassan</creator>
        
        <creator>Usman Sabir</creator>
        
        <creator>Tamoor Wakeel</creator>
        
        <creator>Wajid Ali</creator>
        
        <creator>Amina Yousaf</creator>
        
        <creator>Bilal-bin-Ijaz</creator>
        
        <creator>Hadiqa Nawaz</creator>
        
        <creator>Muhammad Suleman</creator>
        
        <subject>Component; SDLC; Software Quality Affecting Factors; Agile methodologies; Scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The software industry uses the software development lifecycle (SDLC) to design, develop, and produce high-quality, reliable, and cost-effective software products. To develop an application, a project team uses a methodology that may include artifacts and pre-defined specific deliverables. Different SDLC process models, such as the waterfall, iterative, spiral, and agile models, are available to develop a quality product. In this paper we focus only on the agile software development model, the Scrum model, and their techniques. Many papers and books have been written on agile methodologies, and we also draw on their knowledge in this paper. To collect data for the comparison of agile methods with software quality affecting factors, an online questionnaire survey was conducted. The survey sample consisted of software developers with several years of industry experience using agile methodologies. The main purpose of this study is to compare software quality affecting factors between the agile and Scrum models.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_69-Comparison_of_Agile_Method_and_Scrum_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing Hybrid Tool for Static and Dynamic Object-Oriented Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100568</link>
        <id>10.14569/IJACSA.2019.0100568</id>
        <doi>10.14569/IJACSA.2019.0100568</doi>
        <lastModDate>2019-05-30T13:34:33.3930000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Javaria Khalid</creator>
        
        <creator>Hafsa Arif</creator>
        
        <creator>Ayesha Sadiqa</creator>
        
        <creator>Amara Tanveer</creator>
        
        <creator>Asia Mumtaz</creator>
        
        <creator>Zartashiya Afzal</creator>
        
        <creator>Samreen Azhar</creator>
        
        <creator>Muhammad Numan Ali</creator>
        
        <subject>Software metrics; static metrics; dynamic metrics; Object Oriented (OO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Software metrics are created and used by different software organizations for evaluating and ensuring program quality, activity, and software recovery. Software metrics have become a basic part of software development and are utilized in every phase of the product development life cycle. They essentially measure software artifacts, such as the design and source code, and help us make technical and administrative decisions. The aim of this study is to perform a comparative investigation of static and dynamic metrics. Software quality characteristics such as performance, execution time, and dependability depend on the dynamic behavior of the software artifact; due to all these factors, we favor dynamic metrics over customary static metrics, since static metrics alone cannot capture the various facets of running software. There are various types of OO static and dynamic metrics tools. In this paper we have performed a comparative investigation of several OO static and dynamic metrics tools and find that a hybrid tool, which extracts both static and dynamic characteristics from mobile Android applications, is the best option. This open source tool uses the source code and a Docker container in only three phases: pre-static, static, and dynamic analysis.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_68-Comparing_Hybrid_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast and Efficient In-Memory Big Data Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100567</link>
        <id>10.14569/IJACSA.2019.0100567</id>
        <doi>10.14569/IJACSA.2019.0100567</doi>
        <lastModDate>2019-05-30T13:34:33.3770000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Maliha Maryam</creator>
        
        <creator>Myda Khalid</creator>
        
        <creator>Javaria Khlaid</creator>
        
        <creator>Najam Ur Rehman</creator>
        
        <creator>Syeda Iqra Sajjad</creator>
        
        <creator>Tanveer Islam</creator>
        
        <creator>Umair Ahmed Butt</creator>
        
        <creator>Ali Raza</creator>
        
        <creator>M. Saad Nasr</creator>
        
        <subject>Big data processing; indexing techniques; R-tree; B-tree; X-tree; hashing; inverted index; graph query tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>With the passage of time, data is growing exponentially, and the most affected areas are social media networks, media hosting applications, and servers. They hold thousands of terabytes of data, and even efficient systems still confront the issue of managing such volumes of information, whose size grows each day. In-memory data systems retrieve information in less time. Above all other factors, data systems are required to make good use of the cache and of fast memory access with the help of optimization. The proposed technique to solve this problem is an optimal indexing technique with better and more efficient utilization of the cache and less DRAM overhead, so that energy can also be saved on high-end servers.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_67-Fast_and_Efficient_In_Memory_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploratory Analysis of the Total Variation of Electrons in the Ionosphere before Telluric Events Greater than M7.0 in the World During 2015-2016</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100566</link>
        <id>10.14569/IJACSA.2019.0100566</id>
        <doi>10.14569/IJACSA.2019.0100566</doi>
        <lastModDate>2019-05-30T13:34:33.3600000+00:00</lastModDate>
        
        <creator>Alva Mantari Alicia</creator>
        
        <creator>Zarate Segura Guillermo Wenceslao</creator>
        
        <creator>Sotomayor Beltran Carlos</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Roman-Gonzalez Avid</creator>
        
        <subject>Total number of electrons; ionosphere; earthquakes; prevention; risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This exploratory observational study analyzes the variation of the vertical total electron content (vTEC) in the ionosphere during the 17 days before telluric events of magnitude greater than M7.0 between 2015 and 2016. Thirty telluric events with these characteristics have been analyzed. The data was obtained from 55 satellites and 300 GPS receivers and downloaded from the Center for Orbit Determination in Europe (CODE). Variations are considered significant only if they fall outside the &quot;normal&quot; ranges established by the statistical analysis performed. The data was downloaded by a program developed in our laboratory, then processed, and maps of vTEC variations were generated with a periodicity of 2 hours. The analysis area was a circle with a radius of 1000 km centered on the epicenter of each earthquake. Variation of vTEC was found during 2015-2016 in 100% of the earthquakes in the range from day 1 to day 17 before the event, over the circular area of 1000 km radius centered on the epicenter. Of these, 96.55% show positive variations and 68.97% show negative ones. In the range from day 3 to day 17 before the event, a variation was recorded in 100% of the cases, and from day 8 to day 17 before the event in 93.10% of the cases. It is important to emphasize that the earlier such evidence appears before an event, the more useful it is for developing an early warning tool for earthquake prevention. This study explores the variation of vTEC as a precursor to each earthquake during 2015-2016; it is a preliminary analysis that shows the feasibility of analyzing this information as a preamble to an exhaustive association study later. The final objective is to calculate the risk of telluric events, which would benefit the population worldwide.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_66-Exploratory_Analysis_of_the_Total_Variation_of_Electrons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing Adoption in Small and Medium- Sized Enterprises (SMEs) of Asia and Africa</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100565</link>
        <id>10.14569/IJACSA.2019.0100565</id>
        <doi>10.14569/IJACSA.2019.0100565</doi>
        <lastModDate>2019-05-30T13:34:33.3470000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Jazba Asad</creator>
        
        <creator>Sabila Kousar</creator>
        
        <creator>Faiza Nawaz</creator>
        
        <creator>Zainab</creator>
        
        <creator>Farania Hayder</creator>
        
        <creator>Sehresh Bibi</creator>
        
        <creator>Amina Yousaf</creator>
        
        <creator>Ali Raza</creator>
        
        <subject>Cloud computing; adoption; Asia; Africa; small and medium-sized enterprises; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Cloud computing is a technology that has emerged rapidly over the last few years and has removed the burden of purchasing heavy hardware and licensed software. Cloud computing has been advantageous to Small and Medium-sized Enterprises (SMEs), but numerous SMEs have still not adopted it to delve into its appealing benefits. Asia and Africa vary notably in their innovative capability: Asia has been able to advance and sustain world leadership in technological innovations, whereas Africa has not developed significantly in these terms. Few comparative studies have examined the reasons for the innovation gap between these two continents. This article examines and compares cloud computing adoption from a geo-regional perspective, Asia and Africa, using a comparative study to organize the findings from China in Asia and Nigeria in Africa. The article identifies the probable benefits, the usage of cloud computing, and the level of cloud computing adoption among SMEs in Nigeria and China. The paper explores the gap that subsists between the levels of cloud computing adoption in SMEs of these two countries, specifies the challenges particular to each country that impede full cloud computing adoption, and proposes solutions for Nigerian SMEs to overcome these challenges. Furthermore, the article contributes evidence-supported interventions for cloud service providers, governments, and businesses to enhance cloud computing adoption among SMEs and eventually position enterprises for probable financial advantage.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_65-Cloud_Computing_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Concatenated LDPC Codes with LTE Modulation Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100564</link>
        <id>10.14569/IJACSA.2019.0100564</id>
        <doi>10.14569/IJACSA.2019.0100564</doi>
        <lastModDate>2019-05-30T13:34:33.3300000+00:00</lastModDate>
        
        <creator>Mohanad Alfiras</creator>
        
        <creator>Wael A. H. Hadi</creator>
        
        <creator>Amjad Ali Jassim</creator>
        
        <subject>Coding; modulation; Hybrid; concatenation; low density parity check</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In a communication system, the LDPC code is considered a good error correcting code whose performance reaches near the Shannon limit. In this paper a hybrid LDPC code is proposed; the term hybrid here refers to the serial concatenation of a group of parallel LDPC codes and a single serial LDPC code. The two outer parallel LDPC encoders represent the outer encoder, while the single LDPC encoder represents the inner encoder. This study also focuses on the performance of the hybrid coding system with three modulation schemes: quadrature phase shift keying (QPSK) and two types of quadrature amplitude modulation, 16-QAM and 64-QAM. These modulation schemes are selected due to their importance in modern communication applications such as long term evolution (LTE), where they are the standard modulation schemes. This study investigates different LDPC code rates, such as 1/2 and 1/3, and simulates the AWGN communication channel using MATLAB. The simulation results show an improvement in bit error rate (BER) when using the 1/3 LDPC code rate in the designed system rather than 1/2, but it also increases the system complexity. Finally, all simulation results and the comparison between the different LDPC code rate cases and the system performance are summarized.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_64-Hybrid_Concatenated_LDPC_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Calibrating Six-Port Compact Circuit using a New Technique Program</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100563</link>
        <id>10.14569/IJACSA.2019.0100563</id>
        <doi>10.14569/IJACSA.2019.0100563</doi>
        <lastModDate>2019-05-30T13:34:33.3130000+00:00</lastModDate>
        
        <creator>Traii Moubarek</creator>
        
        <creator>Mohannad Almanee</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Calibration technique; digital receiver; explicit method; reflectometer; S parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In this paper, a calibration of a six-port reflectometer using a new technique program is presented. The calibration procedure is based on an explicit method, which captures the output waveforms of the six-port junction and determines the complex relationship between the two waves present at the input from the values of the four outputs. The number of calibrating standards and the computational effort required are the most important parameters in selecting a calibration technique. Comparison between the results obtained from the new calibration method program and measurement results shows the validity of the proposed method. This calibration technique can be used in a general six-port direct digital receiver.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_63-A_Calibrating_Six_Port_Compact_Circuit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Four-Class Motor Imagery EEG Signal Classification using PCA, Wavelet and Two-Stage Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100562</link>
        <id>10.14569/IJACSA.2019.0100562</id>
        <doi>10.14569/IJACSA.2019.0100562</doi>
        <lastModDate>2019-05-30T13:34:33.2830000+00:00</lastModDate>
        
        <creator>Md. Asadur Rahman</creator>
        
        <creator>Farzana Khanam</creator>
        
        <creator>Md. Kazem Hossain</creator>
        
        <creator>Mohammad Khurshed Alam</creator>
        
        <creator>Mohiuddin Ahmad</creator>
        
        <subject>Brain-computer interface; electroencephalogram; motor imagery; principal component analysis; wavelet packet transformation; artificial neural network; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Electroencephalogram (EEG) is the most significant signal for brain-computer interfaces (BCI). Nowadays, motor imagery (MI) movement based BCI is a widely accepted approach. This paper proposes a novel method based on the combined use of principal component analysis (PCA), wavelet packet transformation (WPT), and a two-stage machine learning algorithm to classify four-class MI EEG signals. This work covers four MI classes: imaginary lifting of the left hand, right hand, left foot, and right foot. The main challenge of this work is to discriminate between similar-lobe EEG signal patterns, such as left foot vs. left hand. Another critical problem is identifying the MI movements of the two feet, because their activation levels are very low and show almost identical patterns. This work first uses PCA to reduce the signal dimensions of the left and right lobes of the brain. Then, WPT is used to extract features from the EEG signals of the different classes. Finally, an artificial neural network is trained in two stages: the 1st stage identifies the lobe from the signal pattern, and the 2nd stage identifies whether the signal corresponds to an MI hand or MI foot movement. The proposed method is applied to the four-class MI EEG signals of 15 participants and achieves excellent classification accuracy (&gt;74% on average). The outcomes of the proposed method prove its effectiveness for practical BCI implementation.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_62-Four_Class_Motor_Imagery_EEG_Signal_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Scheduling of Bag-of-Tasks Applications in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100561</link>
        <id>10.14569/IJACSA.2019.0100561</id>
        <doi>10.14569/IJACSA.2019.0100561</doi>
        <lastModDate>2019-05-30T13:34:33.2670000+00:00</lastModDate>
        
        <creator>Preethi Sheba Hepsiba</creator>
        
        <creator>Grace Mary Kanaga E</creator>
        
        <subject>Bag-of-tasks applications; intelligent agent; reinforcement learning; scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The need for efficient provisioning of resources in cloud computing is imperative to meet performance requirements. The design of any resource allocation algorithm depends on the type of workload. BoT (Bag-of-Tasks) workloads, which are made up of batches of independent tasks, are predominant in large-scale distributed systems such as the cloud, and efficiently scheduling BoTs on heterogeneous resources is a known NP-complete problem. In this work, an intelligent agent uses reinforcement learning to learn the best scheduling heuristic to use in a given state. The primary objective of BISA (BoT Intelligent Scheduling Agent) is to minimize makespan. BISA is deployed as an agent in a cloud testbed, and synthetic workloads and different configurations of a private cloud are used to test its effectiveness. The normalized makespan is compared against 15 batch-mode and immediate-mode scheduling heuristics. At its best, BISA produces a 72% lower average normalized makespan than the traditional heuristics, and in most cases it is comparable to the best traditional scheduling heuristic.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_61-Intelligent_Scheduling_of_Bag_of_Tasks_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Recognition System based on Discrete Wave Atoms Transform Partial Noisy Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100560</link>
        <id>10.14569/IJACSA.2019.0100560</id>
        <doi>10.14569/IJACSA.2019.0100560</doi>
        <lastModDate>2019-05-30T13:34:33.2530000+00:00</lastModDate>
        
        <creator>Mohamed Walid</creator>
        
        <creator>Bousselmi Souha</creator>
        
        <creator>Cherif Adnen</creator>
        
        <subject>WAT; SVM; HMM; thresholding; noise; MFCC; MLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Automatic speech recognition is one of the most active research areas, as it offers a dynamic platform for human-machine interaction. The robustness of speech recognition systems is often degraded in real-time applications, which are often accompanied by environmental noise. In this work, we investigated the efficiency of combining the wave atoms transform (WAT) with Mel-Frequency Cepstral Coefficients (MFCC), using a Support Vector Machine (SVM) as classifier, under different noisy conditions. A full experimental evaluation of the proposed model was conducted using an Arabic speech database (ARADIGIT) corrupted with noises from the NOISEUS database at SNR levels ranging from -5 to 15 dB. The simulation results indicate that the proposed algorithm improves the recognition rate (99.9%) at an SNR of 15 dB. A comparative study was conducted by applying the proposed WAT-MFCC features to a multilayer perceptron (MLP) and a hidden Markov model (HMM) in order to prove the efficiency and robustness of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_60-Speech_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Size Reduction and Performance Enhancement of Pi Shaped Patch Antenna using Superstrate Configuration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100559</link>
        <id>10.14569/IJACSA.2019.0100559</id>
        <doi>10.14569/IJACSA.2019.0100559</doi>
        <lastModDate>2019-05-30T13:34:33.2200000+00:00</lastModDate>
        
        <creator>Pir Saadullah Shah</creator>
        
        <creator>Shahryar Shafique Qureshi</creator>
        
        <creator>Muhammad Haneef</creator>
        
        <creator>Sohail Imran Saeed</creator>
        
        <subject>Radiating patch; ground plane; slotted; bandwidth; gain; directivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Patch antennas are modern elements of today’s communication technology. They have unique characteristics and features, including power-handling capability and a lightweight structure. This paper focuses on a superstrate configuration of a patch antenna with a defected ground plane and a Pi-slotted radiating patch. Three different cases, in terms of wavelength distance, were considered to observe the performance characteristics of the patch structure. The antenna designed in this study can be used for S- and C-band applications.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_59-Size_Reduction_and_Performance_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experience in Asteroid Search using Astrometrica Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100558</link>
        <id>10.14569/IJACSA.2019.0100558</id>
        <doi>10.14569/IJACSA.2019.0100558</doi>
        <lastModDate>2019-05-30T13:34:33.2030000+00:00</lastModDate>
        
        <creator>Junior Ascencio-Moran</creator>
        
        <creator>Jhon Calero-Juarez</creator>
        
        <creator>Maria del Carmen Pajares-Acu&#241;a</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>SGAC; IASC; INTI-Lab; UCH; MPC; asteroids</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This work consists of the analysis of telescopic images provided by the International Astronomical Search Collaboration (IASC) to find asteroids that can be named. The search for asteroids helps the scientific community, which promotes collaboration among young students and astronomy fans so that they gain experience in finding asteroids through related campaigns. The Space Generation Advisory Council (SGAC) campaign, in partnership with IASC, has found around 1500 asteroids since the beginning of October 2006, as each year more than 1000 teams from different countries participate. The Astrometrica software, which receives the images in FITS format, was used. The configuration of the selected telescope is carried out so that the images can later be analyzed in greater detail. Finally, a clean and precise Minor Planet Center (MPC) report is produced, which is what the campaign requires so that it can go on to a preliminary phase and subsequently be accepted by the International Astronomical Union. Asteroids that are named will be registered in the world catalog of official minor planets. In the campaign related to this study, 28 possible asteroids were found.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_58-Experience_in_Asteroid_Search_using_Astrometrica_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low-Cost and Portable Ground Station for the Reception of NOAA Satellite Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100557</link>
        <id>10.14569/IJACSA.2019.0100557</id>
        <doi>10.14569/IJACSA.2019.0100557</doi>
        <lastModDate>2019-05-30T13:34:33.1730000+00:00</lastModDate>
        
        <creator>Antony E Quiroz-Olivares</creator>
        
        <creator>Ntalia I. Vargas-Cuentas</creator>
        
        <creator>Guillermo W. Zarate Segura</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Software defined radio; Raspberry Pi; meteorological images; antenna; dipoles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Currently, in Peru, the study of satellite images is increasing because the country has the Earth observation satellite PeruSat-1. However, the cost of implementing a ground station is very high; for this reason, it is difficult for each university to have its own station. In the present work, the design and implementation of a low-cost portable ground station for the automatic reception of meteorological satellite images is proposed, using accessible electronic devices such as a Raspberry Pi 3B+, Software Defined Radio (SDR), and a double-cross antenna with four dipoles, thereby encouraging the study of satellite images in schools and universities. The results obtained show the viability of this project.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_57-Low_Cost_and_Portable_Ground_Station.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT-Enabled Door Lock System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100556</link>
        <id>10.14569/IJACSA.2019.0100556</id>
        <doi>10.14569/IJACSA.2019.0100556</doi>
        <lastModDate>2019-05-30T13:34:33.1430000+00:00</lastModDate>
        
        <creator>Trio Adiono</creator>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Sinantya Feranti Anindya</creator>
        
        <creator>Irfan Gani Purwanda</creator>
        
        <creator>Maulana Yusuf Fathany</creator>
        
        <subject>Internet of Things; smart home; smart lock</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This paper covers the design of a prototype IoT- and GPS-enabled door lock system. The aim of this research is to design a door lock system that does not need manual input from the user, for convenience, while also remaining secure. The system primarily consists of an STM32L100 microcontroller as its core, a TIP102 transistor that controls a 12 VDC solenoid, and an XBee module to communicate with the smart home’s host and receive status updates regarding the user’s GPS position. The system is tested by measuring the user’s distance from a predetermined location using GPS coordinates captured by an Android application, which serves to test whether the system is able to operate as intended and to measure the device’s power usage. The test results show that the device is able to work based on the GPS coordinate data received, drawing 42.3 mA and 587 mA in idle and active modes, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_56-IoT_Enabled_Door_Lock_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skyline Path Queries for Location-based Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100555</link>
        <id>10.14569/IJACSA.2019.0100555</id>
        <doi>10.14569/IJACSA.2019.0100555</doi>
        <lastModDate>2019-05-30T13:34:33.1270000+00:00</lastModDate>
        
        <creator>Nishu Chowdhury</creator>
        
        <creator>Mohammad Shamsul Arefin</creator>
        
        <subject>Skyline queries; trip planning; location-based services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A skyline query finds objects that are not dominated by any other object in a given set of objects. Skyline queries help us filter unnecessary information efficiently and provide clues for various decision-making tasks. In this paper, we consider skyline queries for location-based services and propose a framework that can efficiently compute all non-dominated paths in road networks. A path p is said to dominate another path q if p is not worse than q in any of the k dimensions and p is better than q in at least one of the k dimensions. Our proposed skyline framework considers several features related to road networks and returns all non-dominated paths. In our work, we compute skylines from two different perspectives: a business perspective and an individual user’s perspective. We have conducted several experiments to show the effectiveness of our method. The experimental results show that our system can perform efficient computation of skyline paths in road networks.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_55-Skyline_Path_Queries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Suspicions of Varicose Veins in the Legs using Thermal Imaging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100554</link>
        <id>10.14569/IJACSA.2019.0100554</id>
        <doi>10.14569/IJACSA.2019.0100554</doi>
        <lastModDate>2019-05-30T13:34:33.1100000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Witman Alvarado-Diaz</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Thermal image; varicose veins; detection; image processing; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Varicose veins, also known as venous insufficiency, are dilated veins caused by an accumulation of blood that occurs in different parts of the body, most commonly the legs; their incidence is higher in women because of the clothing styles they wear. Varicose veins are classified by grades ranging from I to IV and can cause pain, itching, cramps, and even ulcers if they are not treated in time. Not all varicose veins are visible superficially; many of them begin beneath the skin. According to the WHO (World Health Organization), 10% of the world population has varicose veins. That is why the detection of suspected varicose veins in the legs was addressed in this research work. First, a thermal image is obtained using the FLIR ONE Pro thermal camera, following a necessary protocol for distance and temperature range. The thermal image is then processed in MATLAB to identify the segments of its histogram and obtain the area of highest temperature, indicating the presence of a varicose vein in the subject&#39;s leg. As a result, the segmentation of the areas with the highest temperature is overlaid on the real image, showing the real image with the varicose vein segment found through thermal image processing.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_54-Detection_of_Suspicions_of_Varicose_Veins.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study and Design of a Magnetic Levitator System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100553</link>
        <id>10.14569/IJACSA.2019.0100553</id>
        <doi>10.14569/IJACSA.2019.0100553</doi>
        <lastModDate>2019-05-30T13:34:33.0970000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Zeila Torres Santos</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Magnetic levitator; electromagnet; electronic circuit; differential potential</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Magnetic levitation is one of the mechanisms at the forefront of technology. In its most basic form it is used in educational teaching, where principles of physics based on electromagnetism converge: the fields created by like poles repel depending on the initial current, giving instructive ideas of how the theoretical formulas work and bringing a practical visual system to life. Current large-scale uses include Japan’s Maglev trains and superconductivity, where quantum effects become visible when the sample is cooled. The electronic circuit tends to be stable because, when using a high-power current, a Triac is needed to compensate the electrical flow provided by the operational amplifier and thus to stabilize with the photodiode when it is activated by the LED. Our purpose is to create a circuit that identifies the values of the electronic components that allow equilibrium to be reached, with input and output variables that indicate the position and height of the object to be levitated.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_53-Study_and_Design_of_a_Magnetic_Levitator_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accuracy Performance Degradation in Image Classification Models due to Concept Drift</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100552</link>
        <id>10.14569/IJACSA.2019.0100552</id>
        <doi>10.14569/IJACSA.2019.0100552</doi>
        <lastModDate>2019-05-30T13:34:33.0800000+00:00</lastModDate>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Syed Muslim Jameel</creator>
        
        <creator>Hitham Alhussain</creator>
        
        <creator>Mobashar Rehman</creator>
        
        <creator>Arif Budiman</creator>
        
        <subject>Pre-trained networks; deep learning; concept drift; fine tuning; transfer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Big data (BD) is playing a significant role in the current computing revolution. Industries and organizations are utilizing its insights for business intelligence by using Deep Learning Networks (DLN). However, the dynamic characteristics of BD introduce many critical issues for DLN; Concept Drift (CD) is one of them. The CD issue appears frequently in online supervised learning environments in which data trends change over time. The problem may even worsen in a BD environment due to the veracity and variability factors. The CD issue may render a DLN inapplicable by degrading the accuracy of its classification results, which is a very serious issue that needs to be addressed. Therefore, these DLN need to adapt quickly to changes in order to maintain the accuracy level of their results, which requires some dynamic changes in the existing DLN. In this paper, we examine some of the existing shallow learning and deep learning models and their behavior before and after Concept Drift (in experiment 1) and validate a pre-trained deep learning network (ResNet-50). In future work, this experiment will examine the most recent pre-trained DLN (AlexNet, VGG16, VGG19) and identify their suitability for overcoming Concept Drift using fine-tuning and transfer learning approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_52-Accuracy_Performance_Degradation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Frequency Reconfigurable Vivaldi Antenna with Switched Resonators for Wireless Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100551</link>
        <id>10.14569/IJACSA.2019.0100551</id>
        <doi>10.14569/IJACSA.2019.0100551</doi>
        <lastModDate>2019-05-30T13:34:33.0630000+00:00</lastModDate>
        
        <creator>Rabiaa Herzi</creator>
        
        <creator>Mohamed-Ali Boujemaa</creator>
        
        <creator>Fethi Choubani</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Frequency reconfigurable; Vivaldi Antenna (VA); Ultra-Wideband (UWB); slot ring resonator; wireless applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In this paper, a frequency-reconfigurable Vivaldi antenna with switched slot ring resonators is presented. The method used to reconfigure the Vivaldi antenna is based on perturbing the surface current distribution. Switched ring resonators that act as a bandpass filter are printed at specific positions on the antenna metallization. This structure has the ability to reconfigure between a wideband mode and four narrow-band modes that cover significant wireless applications. The combination of the bandpass filters and the tapered slot antenna characteristics yields an agile antenna capable of operating in UWB mode from 2 to 8 GHz and of generating multiple narrow bands at 3.5 GHz, 4 GHz, 5.2 GHz, 5.5 GHz, 5.8 GHz, and 6.5 GHz. The measurement and simulation results show good agreement. This antenna is an appropriate solution for wireless applications that require a reconfigurable wideband/multi-narrow-band antenna.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_51-Frequency_Reconfigurable_Vivaldi_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interaction between Learning Style and Gender in Mixed Learning with 40% Face-to-face Learning and 60% Online Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100550</link>
        <id>10.14569/IJACSA.2019.0100550</id>
        <doi>10.14569/IJACSA.2019.0100550</doi>
        <lastModDate>2019-05-30T13:34:33.0500000+00:00</lastModDate>
        
        <creator>Anthony Anggrawan</creator>
        
        <creator>Nurdin Ibrahim</creator>
        
        <creator>Suyitno Muslim</creator>
        
        <creator>Christofer Satria</creator>
        
        <subject>Online; face-to-face; mixed learning; algorithm and programming; learning outcome; interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Student learning styles are important factors that have a strong impact on student learning outcomes. That is why each learning method will produce different learning outcomes for students who have different learning styles. A previous study concluded that mixed learning produces learning outcomes superior to those of online and face-to-face learning models, but two questions remain: how do learning outcomes differ between student learning styles in mixed learning, and is there an interaction between mixed learning models and student learning styles with respect to learning outcomes? This study provides a scientific answer by conducting experimental research on mixed learning composed of 40% face-to-face learning and 60% online learning for the subject of Algorithms and Programming. Based on two-way ANOVA, t-tests, and Scheffé tests of student learning outcomes, this study found that there are differences in learning outcomes between students who have different learning styles, that male students achieve better learning outcomes than female students, and that there is an interaction between student gender and student learning style with respect to learning outcomes. Further tests found no difference in learning outcomes based on learning style, except that male students with a visual learning style achieve learning outcomes superior to those of students with auditory and kinesthetic learning styles.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_50-Interaction_between_Learning_Style_and_Gender.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Real Time Energy Management Strategy for Hybrid Wind-PV Power System based on Hierarchical Distribution of Loads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100549</link>
        <id>10.14569/IJACSA.2019.0100549</id>
        <doi>10.14569/IJACSA.2019.0100549</doi>
        <lastModDate>2019-05-30T13:34:33.0170000+00:00</lastModDate>
        
        <creator>Abdelhadi Raihani</creator>
        
        <creator>Tajeddine Khalili</creator>
        
        <creator>Mohamed Rafik</creator>
        
        <creator>Mohammed Hicham Zaggaf</creator>
        
        <creator>Omar Bouattane</creator>
        
        <subject>Energy management; hybrid renewable energy sources; grid injection; loads distribution; energy forecasting; load forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Energy management is a crucial aspect of achieving energy efficiency within a hybrid renewable energy power station. Since the load is unbalanced throughout the day, reasonable power management can avoid energy dissipation and unnecessary grid solicitation. This article presents an energy management strategy for a real case scenario of a hybrid wind-solar power station on the ENSET campus. The approach manages energy provided by wind turbines and multiple photovoltaic panels, using a power bank as a backup source. In this study, actual data on wind speed, solar radiation, load profile, and energy generation were collected. Different scenarios were simulated in order to synthesize an efficient energy management and load balancing system with a possible load forecasting capability. In all the simulated scenarios, the study emphasizes minimal solicitation of the grid.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_49-Toward_a_Real_Time_Energy_Management_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sea Lion Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100548</link>
        <id>10.14569/IJACSA.2019.0100548</id>
        <doi>10.14569/IJACSA.2019.0100548</doi>
        <lastModDate>2019-05-30T13:34:32.9870000+00:00</lastModDate>
        
        <creator>Raja Masadeh</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <subject>Optimization; Metaheuristic optimization algorithms; Benchmarks; Sea Lion Optimization Algorithm (SLnO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This paper proposes a new nature-inspired metaheuristic optimization algorithm called the Sea Lion Optimization (SLnO) algorithm. The SLnO algorithm imitates the hunting behavior of sea lions in nature. Moreover, it is inspired by sea lions&#39; whiskers, which are used to detect prey. The SLnO algorithm is tested on 23 well-known test functions (benchmarks). Optimization results show that the SLnO algorithm is very competitive compared to Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), Grey Wolf Optimization (GWO), the Sine Cosine Algorithm (SCA), and the Dragonfly Algorithm (DA).</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_48-Sea_Lion_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MHealth for Decision Making Support: A Case Study of EHealth in the Public Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100547</link>
        <id>10.14569/IJACSA.2019.0100547</id>
        <doi>10.14569/IJACSA.2019.0100547</doi>
        <lastModDate>2019-05-30T13:34:32.9700000+00:00</lastModDate>
        
        <creator>Majed Kamel Al-Azzam</creator>
        
        <creator>Malik Bader Alazzam</creator>
        
        <creator>Majida Khalid al-Manasra</creator>
        
        <subject>Mobile health application; UTAUT1; UTAUT2; trust factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This paper explores the factors that determine patients’ acceptance of an MHealth application. The research relied on the Unified Theory of Acceptance and Use of Technology (UTAUT2) to assess the level of acceptance of a new mobile health application by patients. The study involved conducting test surveys across medical hospitals in Jordan with the goal of collecting data from hospital visitors and their patients concerning their intention to use the new mobile health application. 98 questionnaires were collected, and 44 valid responses were drawn from them for onward data analysis. The UTAUT2 research model was the most appropriate one for evaluating MHealth’s user acceptance. Its results would support the government’s goal of building m-health solutions that meet user needs. The model also enhances the role of DSS in facilitating the adoption of MHealth applications. This study provides a theoretical framework for future research on the rates of adoption of m-health applications by patients.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_47-MHealth_for_Decision_Making_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Technology Transfer from Grid power to Photovoltaic: An Experimental Case Study for Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100546</link>
        <id>10.14569/IJACSA.2019.0100546</id>
        <doi>10.14569/IJACSA.2019.0100546</doi>
        <lastModDate>2019-05-30T13:34:32.9530000+00:00</lastModDate>
        
        <creator>Umer Farooq</creator>
        
        <creator>Habib Ullah Manzoor</creator>
        
        <creator>Aamir Mehmood</creator>
        
        <creator>Awais Iqbal</creator>
        
        <creator>Rida Younis</creator>
        
        <creator>Amina Iqbal</creator>
        
        <creator>Fan Yang</creator>
        
        <creator>Muhammad Arshad Shehzad Hassan</creator>
        
        <creator>Nouman Faiz</creator>
        
        <subject>Solar energy; photovoltaic technologies; module efficiency; power demand satisfaction; economics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Pakistan is located in a region of the world where enough solar irradiance strikes the ground to be harnessed to eliminate the country’s existing blackout problems. The government is focusing on renewable integration, especially solar photovoltaic (PV) technology. This work assesses the techno-economic viability of different PV technologies with the aim of recommending the optimal type for the domestic sector in a high-solar-irradiance region of the country. For this purpose, standalone PV systems were installed using monocrystalline (m-Si), polycrystalline (p-Si), and amorphous crystalline (a-Si) modules on a rooftop at 31.4°N latitude. The performance of the PV modules is evaluated based on average output power, normalized power output efficiency, module conversion efficiency, and performance ratio. Results show that the m-Si module is the optimal type for this application, with a 23.01% average normalized power output efficiency. The economics of the system has also been evaluated in terms of the price of the power produced by the PV modules relative to the cost of consuming that power from the grid in the base case. The integration of such domestic PV systems is needed to make the future sustainable.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_46-Assessment_of_Technology_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Testing-as-a-Service: A New Dimension of Automation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100545</link>
        <id>10.14569/IJACSA.2019.0100545</id>
        <doi>10.14569/IJACSA.2019.0100545</doi>
        <lastModDate>2019-05-30T13:34:32.9230000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Myda Khalid</creator>
        
        <creator>Maliha Maryam</creator>
        
        <creator>M. Nauman Ali</creator>
        
        <creator>Sheraz Yousaf</creator>
        
        <creator>Mudassar Mehmood</creator>
        
        <creator>Hammad Saleem</creator>
        
        <subject>Testing automation; IoT; interoperability testing; conformance testing; security testing; semantic testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Internet of Things (IoT) systems have become a global trend, enhancing the capabilities of the smart computing era and involving a variety of distributed end-devices and multi-scalable applications. The collaborative nature of IoT systems connected through the Internet increases the heterogeneity of incoming data streams that need to be processed for correct decision making in a real-time environment. The processing of huge data streams for remotely distributed IoT systems creates openings for data breaches and poses new challenges for the security and scalability of system testing. Thus, testing IoT systems is becoming a necessity and requires an automated testing framework, because the sheer number of IoT devices and data events to process makes traditional software testing error-prone. An automated IoT testing-service-based framework is proposed in this paper to test distributed IoT systems while reducing the cost and scalability issues of software testing. The infrastructure of IoT systems demands that a large number of platforms be developed, which requires a systematic testing approach. Therefore, the proposed automated IoT testing-as-a-service model performs distributed interoperability testing, oneM2M-based conformance testing, security testing of distributed systems, and validation of semantic/syntactic testing of IoT devices in a systematic way. Lastly, to strengthen the work, we discuss and analyze existing IoT testing models to evaluate our proposed model.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_45-IoT_Testing_as_a_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Variable Reduction-based Prediction through Modified Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100544</link>
        <id>10.14569/IJACSA.2019.0100544</id>
        <doi>10.14569/IJACSA.2019.0100544</doi>
        <lastModDate>2019-05-30T13:34:32.9070000+00:00</lastModDate>
        
        <creator>Allemar Jhone P. Delima</creator>
        
        <creator>Ariel M. Sison</creator>
        
        <creator>Ruji P. Medina</creator>
        
        <subject>Enhanced prediction model; IBAX operator; modified genetic algorithm; prediction accuracy enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Due to the massive influence of prediction models in different sectors of society, many researchers have employed hybrid algorithms to increase the accuracy of their prediction models. The literature suggests that Genetic Algorithms (GAs) can substantially improve the performance of other prediction models; hence this study. This paper introduces a new avenue of prediction, integrating a GA with the novel Inversed Bi-segmented Average Crossover (IBAX) operator, paired with a rank-based selection function, into the KNN algorithm. 70% of the data from 597 records of student-respondents in the evaluation of faculty instructional performance from four State Universities and Colleges (SUC) in the Caraga Region, Philippines, were used as the training set, while 30% were used for testing. The simulation results showed that the proposed prediction model with the modified GA outperformed the KNN prediction model that used a GA with average crossover and a roulette wheel selection function. KNN with a k value of three (3) was identified as the optimal model for prediction, with 95.53% prediction accuracy, compared to KNN with k values of 1, 5, and 7.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_44-Variable_Reduction_based_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BHA-160: Constructional Design of Hash Function based on NP-hard Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100543</link>
        <id>10.14569/IJACSA.2019.0100543</id>
        <doi>10.14569/IJACSA.2019.0100543</doi>
        <lastModDate>2019-05-30T13:34:32.8930000+00:00</lastModDate>
        
        <creator>Ali Al Shahrani</creator>
        
        <subject>Hash function; integrity message; cryptanalysis; attack; NP-hard problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A secure hash function is used to protect the integrity of messages transferred over an unsecured network. Changes to the bits of the sender’s message are recognized through the message digest produced by the hash function. A hash function is mainly concerned with data integrity: the data receiver verifies whether the message has been altered by an eavesdropper by checking the hash value appended to the message. To achieve this purpose, we have to use a secure hash function that can calculate the hash value of any message. In this paper, we introduce an alternative hash function based on an NP-hard problem, namely the Braid Conjugacy problem. This problem has proven to be secure against cryptanalysis attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_43-BHA_160_Constructional_Design_of_Hash_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Reducing the Speckle Noise in Ultrasound Medical Images using Discrete Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100542</link>
        <id>10.14569/IJACSA.2019.0100542</id>
        <doi>10.14569/IJACSA.2019.0100542</doi>
        <lastModDate>2019-05-30T13:34:32.8600000+00:00</lastModDate>
        
        <creator>Asim ur Rehman Khan</creator>
        
        <creator>Farrokh Janabi-Sharifi</creator>
        
        <creator>Mohammad Ghahramani</creator>
        
        <creator>Muhammad Ahsan Rehman Khan</creator>
        
        <subject>Discrete wavelet transform; image quality assessment; ultrasound medical image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Speckle noise in ultrasound (US) medical images is the prime factor that undermines their full utilization. This noise is added by the constructive/destructive interference of sound waves travelling through a patient’s hard and soft tissues; it is therefore generally accepted that the noise is unavoidable. As an alternative, researchers have proposed several algorithms to mitigate the effect of speckle noise. The discrete wavelet transform (DWT) has been used by several researchers; however, the performance of only a few transforms has been demonstrated. This paper provides a comparison of several DWTs. The algorithm comprises a pre-processing stage using a Wiener filter and a post-processing stage using a Median filter. The processed image is compared with the original image on four metrics: two are based on full-reference (FR) image quality assessment (IQA), and the remaining two are no-reference (NR) IQA metrics. The FR-IQA metrics are peak signal-to-noise ratio (PSNR) and mean structural similarity index measure (MSSIM). The two NR-IQA techniques are the blind pseudo-reference image (BPRI) and blind multiple pseudo-reference images (BMPRI). It is demonstrated that some of these wavelet transforms outperform others by a significant margin.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_42-Comparison_of_Reducing_the_Speckle_Noise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Model of Serious Game for Flood Safety Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100541</link>
        <id>10.14569/IJACSA.2019.0100541</id>
        <doi>10.14569/IJACSA.2019.0100541</doi>
        <lastModDate>2019-05-30T13:34:32.8470000+00:00</lastModDate>
        
        <creator>Nursyahida Mokhtar</creator>
        
        <creator>Amirah Ismail</creator>
        
        <creator>Zurina Muda</creator>
        
        <subject>Serious game model; flood safety training; flood awareness; intrinsic motivation; psychology readiness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Serious games have the potential to increase users’ motivation in the context of safety training. Additionally, serious games can positively impact training outcomes when the knowledge and skills acquired during serious game training are transferred to a real-world application. The development of a serious game is based on game elements and theories determined according to the goals or objectives of the game being developed. Some existing serious games have been used for training, but their limited use of scenario and feedback elements renders them less effective for training purposes. Besides that, existing serious games for training purposes fail to deliver domain content that achieves the game objectives, since they are more focused on entertainment; this is because the games do not involve experts in providing the game domain content. The objective of this paper is to design a serious game model for flood safety training. A preliminary study and a literature review are used as the research method. The result of this study is a model of a serious game for flood safety training. In conclusion, this study focuses on the design of a serious game model for flood safety training that includes serious game elements identified and adapted to psychological readiness based on the flood training module of the Malaysian Defense Force (APM). This makes the serious game more attractive and able to give intrinsic motivation to players. In future studies, every serious game element and theory of psychological readiness in the model developed here will be validated with game experts and psychology experts.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_41-Designing_Model_of_Serious_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Crude Oil Prices using Hybrid Guided Best-So-Far Honey Bees Algorithm-Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100540</link>
        <id>10.14569/IJACSA.2019.0100540</id>
        <doi>10.14569/IJACSA.2019.0100540</doi>
        <lastModDate>2019-05-30T13:34:32.8300000+00:00</lastModDate>
        
        <creator>Nasser Tairan</creator>
        
        <creator>Habib Shah</creator>
        
        <creator>Aliya Aleryani</creator>
        
        <subject>Bio inspired; best so far; crude oil prices; KSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The objective of this paper is to apply a new hybrid meta-heuristic method, called the Guided Best-So-Far Honey Bees Inspired Algorithm, with an Artificial Neural Network (ANN) to the prediction of the crude oil prices of the Kingdom of Saudi Arabia (KSA). The very high volatility of crude oil prices is one of the main hurdles to economic development; it is therefore essential to predict crude oil prices, especially for oil-rich countries such as KSA. Hence, in this paper we propose a hybrid algorithm named the Guided Best-So-Far Artificial Bee Colony (GBABC) algorithm. The proposed algorithm has been trained and tested with an ANN to find the optimal weight values, balancing the exploration and exploitation processes to obtain accurate predictions of crude oil prices. KSA crude oil prices for the five years 2013 to 2017 were used to train the ANN with different topologies and learning parameters of the proposed method for next-day price prediction. The simulation results for the proposed algorithm are very promising and encouraging when compared and analyzed against the ABC, GABC (Gbest-Guided ABC), and Best-So-Far ABC methods for prediction purposes. In most cases, the actual prices and the KSA crude oil prices predicted by the proposed GBABC method, based on the optimal ANN weight values and minimum prediction error, are very close.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_40-Prediction_of_Crude_Oil_Prices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Evaluation of the Virtual Environment Efficiency for Distributed Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100539</link>
        <id>10.14569/IJACSA.2019.0100539</id>
        <doi>10.14569/IJACSA.2019.0100539</doi>
        <lastModDate>2019-05-30T13:34:32.8000000+00:00</lastModDate>
        
        <creator>Pavel Kolyasnikov</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Iliy Silakov</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Alexander Gusev</creator>
        
        <subject>Distributed software development; virtual development environment; increase development efficiency; virtual machines; vagrant; Docker; NFS; webpack </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>At every software design stage nowadays, there is an acute need to solve the problem of effectively choosing libraries, development technologies, data exchange formats, virtual environment systems, and virtual machine characteristics. Due to the spread of various kinds of devices and the popularity of Web platforms, many systems are developed not for universal installation on a device (box version) but for a specific architecture with the subsequent provision of web services. Under these conditions, the only way to estimate efficiency parameters at the design stage is to conduct various kinds of experiments to evaluate the parameters of a particular solution. Using the example of a Web platform of digital psychological tools, methods for experimental parameter evaluation are developed in this article. Mechanisms and technologies for improving the efficiency of the Vagrant and Docker cloud virtual environments are also proposed. A set of basic criteria for evaluating the effectiveness of the virtual development environment configuration has been determined: rapid deployment; an increase in speed and a decrease in the volume of resources used; and an increase in the speed of data exchange between the host machine and the virtual machine. The results of experimental estimates of the parameters that define the formulated efficiency criteria are given as: processor utilization (percentage); the amount of RAM involved (GB); initialization time of virtual machines (seconds); and the time to assemble a component completely (Build) and to reassemble it (Watch) (seconds). To improve efficiency, a file system access driver based on the NFS protocol is also studied.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_39-Experimental_Evaluation_of_the_Virtual_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Machine Learning Techniques on Software Defect Prediction using NASA Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100538</link>
        <id>10.14569/IJACSA.2019.0100538</id>
        <doi>10.14569/IJACSA.2019.0100538</doi>
        <lastModDate>2019-05-30T13:34:32.7830000+00:00</lastModDate>
        
        <creator>Ahmed Iqbal</creator>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Umair Ali</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <creator>Laraib Sana</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Arif Husen</creator>
        
        <subject>Software defect prediction; software metrics; data mining; machine learning; classification; class imbalance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Defect prediction at the early stages of the software development life cycle is a crucial activity of the quality assurance process and has been broadly studied over the last two decades. The early prediction of defective modules in software under development can help the development team utilize the available resources efficiently and effectively to deliver a high-quality software product in limited time. To date, many researchers have developed defect prediction models using machine learning and statistical techniques. Machine learning is an effective way to identify defective modules, working by extracting hidden patterns among software attributes. In this study, several machine learning classification techniques are used to predict software defects in twelve widely used NASA datasets. The classification techniques include: Na&#239;ve Bayes (NB), Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Support Vector Machine (SVM), K Nearest Neighbor (KNN), kStar (K*), One Rule (OneR), PART, Decision Tree (DT), and Random Forest (RF). The performance of these classification techniques is evaluated using various measures, such as Precision, Recall, F-Measure, Accuracy, MCC, and ROC Area. The detailed results in this research can be used as a baseline by other researchers, so that any claim of improved prediction through a new technique, model, or framework can be compared and verified.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_38-Performance_Analysis_of_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Carrier based PWM Techniques Reduce Common Mode Voltage for Six Phase Induction Motor Drives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100537</link>
        <id>10.14569/IJACSA.2019.0100537</id>
        <doi>10.14569/IJACSA.2019.0100537</doi>
        <lastModDate>2019-05-30T13:34:32.7670000+00:00</lastModDate>
        
        <creator>Ngoc Thuy Pham</creator>
        
        <creator>Nho Van Nguyen</creator>
        
        <subject>Six-phase induction motor; six-phase voltage source inverter; common mode voltage; carrier based pulse width modulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This paper proposes a novel carrier-based pulse width modulation (CBPWM) technique for reducing the common mode voltage of a six-phase induction motor (SPIM) drive. The proposed CBPWM technique relies on setting up offset functions and the phase shift of the carrier waves. The common mode voltage, which arises under the effect of the DC supply Vd, always stays within Vd/6 limits. Several ways of designing the offset function are proposed; these strategies permit reducing either the mean value or the instantaneous value of the common mode voltage. The features of the proposed CBPWM solutions have been compared. Simulation and experimental results demonstrate the feasibility of the proposed solution.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_37-Novel_Carrier_based_PWM_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualizing Code Bad Smells</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100536</link>
        <id>10.14569/IJACSA.2019.0100536</id>
        <doi>10.14569/IJACSA.2019.0100536</doi>
        <lastModDate>2019-05-30T13:34:32.7370000+00:00</lastModDate>
        
        <creator>Maen Hammad</creator>
        
        <creator>Sabah Alsofriya</creator>
        
        <subject>Software visualization; program comprehension; data modeling; bad smells</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Software visualization is an effective way to support human comprehension of large software systems. In software maintenance, most of the time is spent on understanding code in order to change it. This paper presents a visualization approach that helps maintainers locate and understand code bad smells, which they need to do in order to remove the smells via code refactoring. Object-oriented code elements are visualized along with their bad smells, if any exist. The proposed visualization shows classes as buildings and bad smells as letter avatars based on the initials of the bad smell names; these avatars appear as warning signs on the buildings. A framework is proposed to automatically analyze code to identify bad smells and to generate the proposed visualizations. The evaluation showed that the proposed visualizations reduce the comprehension time needed to understand bad smells.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_36-Visualizing_Code_Bad_Smells.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LBPH-based Enhanced Real-Time Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100535</link>
        <id>10.14569/IJACSA.2019.0100535</id>
        <doi>10.14569/IJACSA.2019.0100535</doi>
        <lastModDate>2019-05-30T13:34:32.7200000+00:00</lastModDate>
        
        <creator>Farah Deeba</creator>
        
        <creator>Hira Memon</creator>
        
        <creator>Fayaz Ali Dharejo</creator>
        
        <creator>Aftab Ahmed</creator>
        
        <creator>Abddul Ghaffar</creator>
        
        <subject>Face recognition; feature extraction; Local Binary Pattern Histogram (LBPH)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Facial recognition has consistently been an active research area due to its non-modelling nature and its diverse applications. As a result, day-to-day activities are increasingly carried out electronically rather than with pencil and paper. Today, computer vision is a comprehensive field that deals with high-level programming by feeding in input images/videos to automatically perform tasks such as detection, recognition, and classification; with deep learning techniques, such systems can even outperform the normal human visual system. In this article, we develop a facial recognition system based on the Local Binary Pattern Histogram (LBPH) method to handle real-time recognition of human faces in low- and high-level images. We aspire to maximize the variation that is relevant to facial expression and open edges, so as to encode edges in a very cheap way. These highly successful features are called the Local Binary Pattern Histogram (LBPH).</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_35-LBPH_based_Enhanced_Real_Time_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of a Three-Phase Tetrahedral High Voltage Transformer used in the Power Supply of Microwave</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100534</link>
        <id>10.14569/IJACSA.2019.0100534</id>
        <doi>10.14569/IJACSA.2019.0100534</doi>
        <lastModDate>2019-05-30T13:34:32.6900000+00:00</lastModDate>
        
        <creator>Mouhcine Lahame</creator>
        
        <creator>Mohammed Chraygane</creator>
        
        <creator>Hamid Outzguinrimt</creator>
        
        <creator>Rajaa Oumghar</creator>
        
        <subject>Optimization; tetrahedral; voltage-doubling; transformer; magnetrons</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>This article deals with the optimization of a three-phase tetrahedral-type high voltage transformer sized to supply three voltage-doubling cells and three magnetrons per phase. The optimization method is based on an algorithm implemented in Matlab/Simulink to study the influence of the transformer’s geometrical parameters on the electrical operation of the power supply. This study makes it possible to find a reduced transformer volume while respecting the current constraints imposed by the magnetron manufacturer. The optimal solution is chosen by calculating the magnetron powers so as to respect nominal operation.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_34-Optimization_of_a_Three_Phase_Tetrahedral_High_Voltage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Joint Subcarrier and Power Allocation Method in SWIPT for WSNs Employing OFDM System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100533</link>
        <id>10.14569/IJACSA.2019.0100533</id>
        <doi>10.14569/IJACSA.2019.0100533</doi>
        <lastModDate>2019-05-30T13:34:32.6570000+00:00</lastModDate>
        
        <creator>Saleemullah Memon</creator>
        
        <creator>Kamran Ali Memon</creator>
        
        <creator>Zulfiqar Ali Zardari</creator>
        
        <creator>Muhammad Aamir Panhwar</creator>
        
        <creator>Sijjad Ali Khuhro</creator>
        
        <creator>Asiya Siddiqui</creator>
        
        <subject>Simultaneous wireless information &amp; power transfer (SWIPT); Energy harvesting (EH); Information decoding (ID); power allocation;  subcarrier allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In recent research trends, simultaneous wireless information and power transfer (SWIPT) has proved to be an innovative technique for dealing with limited-energy problems in energy harvesting (EH) technologies for wireless sensor networks (WSNs). In this paper, a method of subcarrier and power allocation for both EH and information decoding (ID) operations is proposed under orthogonal frequency division multiplexing (OFDM) systems, with improved quality of service (QoS) parameters. The proposed method assigns one group of subcarriers to ID and the remaining group to EH, without applying any splitting schemes. We achieve maximum EH under the ID and power constraints with an effective algorithm that, for the first time, incorporates the dual decomposition technique to deal with the power and subcarrier allocation problems jointly. The simulation outcomes for the power allocation ratio, subcarrier allocation ratio, and energy harvested at the destination node prove better than those of schemes based on water filling, time switching (TS), or power splitting (PS) approaches under different target transmission rates and transmitter-receiver distances.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_33-Novel_Joint_Subcarrier_and_Power_Allocation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of an Industrial Solver for Integrated Planning of Production and Logistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100532</link>
        <id>10.14569/IJACSA.2019.0100532</id>
        <doi>10.14569/IJACSA.2019.0100532</doi>
        <lastModDate>2019-05-30T13:34:32.6430000+00:00</lastModDate>
        
        <creator>Yassine El Khayyam</creator>
        
        <creator>Brahim Herrou</creator>
        
        <subject>MRP; VRP; production planning; transports planning; integration; MLRP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Faced with increasingly hard competition, an increasingly unstable economic environment, and ever-increasing customer requirements, companies should optimize costs and lead times not only at their own level but across the entire supply chain to which they belong. In such a situation, integrated supply chain management is necessary. In this paper, we discuss one of the essential building blocks of integrated supply chain management: integrated planning of the supply chain. We introduce a new method for integrated planning of production and logistics, MLRP (Manufacturing and Logistics Requirement Planning). This method allows supply chain planners to determine in advance, for the entire planning horizon, the manufacturing orders, the supplier orders, and the transport orders, as well as vehicle routing for the distribution of finished products and the collection of components and raw materials. We also discuss in this article the design and development of the solver that executes the MLRP method, the SMLRP, which will be used to implement the proposed method on the different industrial cases encountered.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_32-Design_and_Development_of_an_Industrial_Solver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling and Implementation of Proactive Risk Management in e-Learning Projects: A Step Towards Enhancing Quality of e-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100531</link>
        <id>10.14569/IJACSA.2019.0100531</id>
        <doi>10.14569/IJACSA.2019.0100531</doi>
        <lastModDate>2019-05-30T13:34:32.6100000+00:00</lastModDate>
        
        <creator>Haneen Hijazi</creator>
        
        <creator>Bashar Hammad</creator>
        
        <creator>Ahmad Al-Khasawneh</creator>
        
        <subject>e-Learning; technology-enhanced learning; quality; proactive risk management; risk factors; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The introduction of e-Learning to higher education institutions has been evolving drastically. However, the quality of e-Learning has become a central issue in providing all stakeholders with the confidence needed to compete with traditional learning methods. Risk management plays a vital role in the successful implementation of e-Learning projects and in attaining high-quality e-Learning courses, yet little research has been conducted on implementing risk management in e-Learning projects. This work proposes a quality assurance framework for e-Learning projects. The framework comprises a proactive risk management model that integrates risk management into the e-Learning process. This integration helps in obtaining high-quality e-Learning courses by preventing negative e-Learning risks from materializing. The model's effectiveness is verified through a Renewable Energy course that was converted from a traditional face-to-face course into an e-Learning course. Quantitative and qualitative measures are used to analyze the data collected during the implementation of the project. The results show that the proposed model managed to mitigate the majority of probable risk factors, leading to high-quality e-Course development and delivery.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_31-Modelling_and_Implementation_of_Proactive_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Query Expansion based on Explicit-Relevant Feedback and Synonyms for English Quran Translation Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100530</link>
        <id>10.14569/IJACSA.2019.0100530</id>
        <doi>10.14569/IJACSA.2019.0100530</doi>
        <lastModDate>2019-05-30T13:34:32.5970000+00:00</lastModDate>
        
        <creator>Nuhu Yusuf</creator>
        
        <creator>Mohd Amin Mohd Yunus</creator>
        
        <creator>Norfaradilla Wahid</creator>
        
        <subject>Query expansion; search engine; relevant feedbacks; explicit relevant feedback; synonyms; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Search engines are common information retrieval applications that help to retrieve relevant information from different domain areas. A crucial part of improving search engine quality is query expansion, which expands the query with additional information to match additional important documents. This paper presents a query expansion approach that utilizes explicit relevant feedback with word synonyms and semantic relatedness. We describe its feasibility and demonstrate, through experimental work on search engines, how relevance judgments and word synonyms can improve search quality. To show the level of improvement of the proposed approach, we compared the results obtained from experiments on the Yusuf Ali, Arberry, and Sarwar Quran datasets. The proposed approach shows improvement over other methods.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_30-Query_Expansion_based_on_Explicit_Relevant_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Data Provenance in Internet of Things based Networks by Outsourcing Attribute based Signatures and using Bloom Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100529</link>
        <id>10.14569/IJACSA.2019.0100529</id>
        <doi>10.14569/IJACSA.2019.0100529</doi>
        <lastModDate>2019-05-30T13:34:32.5630000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib Siddiqui</creator>
        
        <creator>Atiqur Rahman</creator>
        
        <creator>Adnan Nadeem</creator>
        
        <creator>Ali M. Alzahrani</creator>
        
        <subject>Data provenance; bloom filter; ciphertext policy attribute based encryption; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>With the dawn of autonomous organization and of network and service management, the integration of existing networks with Internet of Things (IoT) based networks is becoming a reality. With minimal human interaction, the security of IoT data moving through the network becomes prone to attacks. IoT networks require a secure provenance mechanism that is efficient and lightweight because of the scarce computing and storage resources at the IoT nodes. In this paper, we propose a secure mechanism to sign and authenticate provenance messages using Ciphertext-Policy Attribute Based Encryption (CP-ABE) based signatures. The proposed technique uses Bloom filters to reduce storage requirements and an outsourced ABE mechanism to lessen the computational requirements at the IoT devices, thereby reducing both storage requirements and computation time on IoT devices. The performance of the proposed mechanism is evaluated, and the results show that the proposed solution is well suited for resource-constrained IoT networks.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_29-Secure_Data_Provenance_in_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Adaptive Thresholding Method using Cuckoo Search Algorithm for Detecting Surface Defects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100528</link>
        <id>10.14569/IJACSA.2019.0100528</id>
        <doi>10.14569/IJACSA.2019.0100528</doi>
        <lastModDate>2019-05-30T13:34:32.5500000+00:00</lastModDate>
        
        <creator>Yasir Aslam</creator>
        
        <creator>Santhi N</creator>
        
        <creator>Ramasamy N</creator>
        
        <creator>K. Ramar</creator>
        
        <subject>Thresholding; surface defect; optimization; image processing; coated surface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Various mathematical optimization problems can be effectively solved by meta-heuristic algorithms. The strength of these algorithms is that they carry out iterative search processes that effectively balance exploration and exploitation in a spatial domain containing global and local optima. An innovative, robust Cuckoo Optimization Algorithm (COA) with adaptive thresholding is proposed to solve the problem of detecting and estimating surface defects on metal-coated surfaces. The proposed method is developed by modifying COA to improve its performance. To improve the local search capability while retaining the global search effect, enhancements such as the level set method are incorporated into the proposed method. The method also adopts a dynamic step size that changes adaptively with the search process, improving the rate of convergence and the local search ability. The algorithm's performance is scrutinized through experimental analysis, and the segmentation effectiveness is further enhanced by adopting suitable preprocessing and postprocessing methods. Comparison of the results of the proposed method with those of earlier methods shows the superior performance of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_28-A_Modified_Adaptive_Thresholding_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Domain and Schema Independent Semantic Model Verbalization: A Conceptual Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100527</link>
        <id>10.14569/IJACSA.2019.0100527</id>
        <doi>10.14569/IJACSA.2019.0100527</doi>
        <lastModDate>2019-05-30T13:34:32.5170000+00:00</lastModDate>
        
        <creator>Kaneeka Vidanage</creator>
        
        <creator>Noor Maizura Mohamad Noor</creator>
        
        <creator>Rosmayati Mohemad</creator>
        
        <creator>Zuriana Abu Bakar</creator>
        
        <subject>Ontology; OWL; RDF; Verbalize; Schema</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Semantic Web-based technologies have become extremely popular, and their success has spread across many domains beyond computer science. Nevertheless, the reusability of the semantic knowledge models created so far is very low. The main bottleneck is the difficulty of understanding the complex schema of a created knowledge model and the barriers to querying knowledge models using SPARQL or SQWRL query formulations. This research proposes a verbalizer that goes beyond existing Controlled Natural Language (CNL) type verbalizers and verbalizes knowledge stored in a knowledge model file written in either RDF or OWL format, regardless of its domain and schematics.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_27-Domain_and_Schema_Independent_Semantic_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Emergency Warning Message Scheme based on Vehicles Speed and Traffic Densities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100526</link>
        <id>10.14569/IJACSA.2019.0100526</id>
        <doi>10.14569/IJACSA.2019.0100526</doi>
        <lastModDate>2019-05-30T13:34:32.5030000+00:00</lastModDate>
        
        <creator>Mustafa Banikhalaf</creator>
        
        <creator>Saleh Ali Alomari</creator>
        
        <creator>Mowafaq Salem Alzboon</creator>
        
        <subject>Warning message; the broadcast storm problem; emergency vehicles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>In intelligent transportation systems, broadcasting Warning Messages (WMs) over Vehicular Ad hoc Network (VANET) communication is a significant task. Designing efficient dissemination schemes for fast and reliable delivery of WMs is still an open research question. In this paper, we propose a novel messaging scheme, Advanced Speed and Density Warning Message (ASDWM). ASDWM is a broadcast-based scheme that meets its design objectives, achieving a high saved-rebroadcast rate and high reachability as well as low end-to-end latency of WM delivery. ASDWM uses vehicle speeds and vehicle density degrees to help emergency vehicles send WMs adaptively according to road conditions. Simulation results demonstrate the effectiveness and superiority of ASDWM over its counterparts.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_26-An_Advanced_Emergency_Warning_Message_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Effective Service Discovery using Feature Selection and Supervised Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100525</link>
        <id>10.14569/IJACSA.2019.0100525</id>
        <doi>10.14569/IJACSA.2019.0100525</doi>
        <lastModDate>2019-05-30T13:34:32.4700000+00:00</lastModDate>
        
        <creator>Heyam H Al-Baity</creator>
        
        <creator>Norah I. AlShowiman</creator>
        
        <subject>Web service discovery; Web Service Description Language (WSDL); supervised machine learning; classification; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>With the rapid development of web service technologies, the number and variety of web services available on the internet are rapidly increasing. Currently, service registries support human classification, which has been observed to have certain limitations, such as poor query results with low precision and recall rates. With the huge amount of available web services, efficient web service discovery has become a challenging issue. Therefore, to support the effective application of web services, automatic web service classification is required. In recent years, many researchers have approached web service classification problems by applying machine learning methods to automatically classify web services. The ultimate goal of our work is to construct a classifier model that can accurately classify previously unseen web services into the proper categories. This paper presents an intensive investigation on the impact of incorporating feature selection methods (filter and wrapper) on the performance of four state-of-the-art machine learning classifiers. The purpose of employing feature selection is to find a subset of features that maximizes classification accuracy and improves the speed of traditional machine learning classifiers. The effectiveness of the proposed classification method has been evaluated through comprehensive experiments on real-world web service datasets. The results demonstrated that our approach outperforms other state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_25-Toward_Effective_Service_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis of RPL for Low Power and Lossy Networks based on Different Objective Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100524</link>
        <id>10.14569/IJACSA.2019.0100524</id>
        <doi>10.14569/IJACSA.2019.0100524</doi>
        <lastModDate>2019-05-30T13:34:32.4570000+00:00</lastModDate>
        
        <creator>Mah Zaib Jamil</creator>
        
        <creator>Danista Khan</creator>
        
        <creator>Adeel Saleem</creator>
        
        <creator>Kashif Mehmood</creator>
        
        <creator>Atif Iqbal</creator>
        
        <subject>ETX; ELT; HOP; internet of things; IP; networks; network performance; routing; RPL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The Internet of Things (IoT) is an extensive network between people and people, people and things, and things and things. Along with growing opportunities come many challenges, proportional to the number of things connected to the network. IPv6 allows us to connect a huge number of things. For resource-constrained IoT devices, routing is very challenging, and for this purpose the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) has been proposed. Multi-hop paths connect nodes to the root node. A Destination Oriented Directed Acyclic Graph (DODAG) is created taking into account different parameters such as link costs, node attributes, and objective functions. RPL is flexible and can be tuned as per application demands; therefore, the network can be optimized by using different objective functions. This paper presents a novel energy-efficiency analysis of RPL by performing a set of simulations in the COOJA simulator. RPL's performance is evaluated and compared by introducing different objective functions (OFs) with multiple metrics for the network.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_24-Comparative_Performance_Analysis_of_RPL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Framework of Pedagogy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100523</link>
        <id>10.14569/IJACSA.2019.0100523</id>
        <doi>10.14569/IJACSA.2019.0100523</doi>
        <lastModDate>2019-05-30T13:34:32.4230000+00:00</lastModDate>
        
        <creator>Tallat Naz</creator>
        
        <creator>Momeen Khan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <subject>Learning management system; learning styles; learning path; Plan-Do-Check-Act (PDCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Learning paths drive learners to proficiency by using a selected sequence of training activities under time constraints. Learners can thus regulate their learning and give feedback for pedagogy improvements. Studying learning path evaluation provides a useful conceptual reference for pedagogical enhancement. This paper proposes an approach based on the Plan-Do-Check-Act improvement cycle to systematically evaluate learning paths in learning management systems. The framework is a valuable resource that consolidates existing practices in learning management evaluation. Our approach integrates learning styles and learning profiles along with cognitive activities. The proposed framework was compared with current learning path methods, and the results were competitive with related works.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_23-Effective_Framework_of_Pedagogy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low Power Consuming Model of Parallel Programming for HPC System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100522</link>
        <id>10.14569/IJACSA.2019.0100522</id>
        <doi>10.14569/IJACSA.2019.0100522</doi>
        <lastModDate>2019-05-30T13:34:32.4070000+00:00</lastModDate>
        
        <creator>Mohammed Nawaf Altouri</creator>
        
        <creator>Abdullah M. Algarni</creator>
        
        <subject>HPC; parallel computation; power consumption; hybrid programming; MVAPIC2; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>For most of the past five decades, the growing computational power of supercomputers has come primarily from a doubling of clock frequency every 18 months. Over this time period, the clock rate increased by six orders of magnitude, while the number of processors increased by three orders of magnitude. The major challenge caused by the increasing scale and complexity of HPC systems is their massive power consumption. Due to constraints on heat and the power requirements of today&#39;s microprocessors, vendors have shifted to putting multiple processors (cores) on a chip. The number of cores per chip is expected to continue increasing exponentially over the next decade. One promising strategy is the correct usage of parallel programming models that decrease power consumption and increase system performance through massive parallelism (concurrency). In the current study, we have proposed a Hybrid MVAPICH-2 + CUDA (HMC) parallel programming model that outperformed other state-of-the-art dual- and tri-hierarchy-level approaches with respect to power consumption and execution time. The HMC model was evaluated by implementing a matrix multiplication benchmarking application. Consequently, it can be considered a leading model for emerging Exascale computing systems.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_22-A_Low_Power_Consuming_Model_of_Parallel_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tennis Player Training Support System based on Sport Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100521</link>
        <id>10.14569/IJACSA.2019.0100521</id>
        <doi>10.14569/IJACSA.2019.0100521</doi>
        <lastModDate>2019-05-30T13:34:32.3770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Toshiki Nishimura</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Sport vision; static eyesight; dynamic visual acuity; contrast sensitivity; eye movement; deep vision; instant vision; cooperative action of eye, hand and foot; peripheral field</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A sports-vision-based tennis player training support system is proposed. In sports, gaze, dynamic visual acuity, eye movement, and viewing place are important. In sports vision, static eyesight, dynamic visual acuity, contrast sensitivity, eye movement, deep vision, instant vision, cooperative action of eye, hand and foot, and the peripheral field all have to be treated. For tennis in particular, all of these items are very important. Furthermore, the trajectory of the gaze location and of the tennis racket stroke gives useful instruction for improving tennis play. Therefore, a sports-vision-based tennis player training system is proposed. Through experiments, it is found that the proposed system works well for improving tennis players’ skills.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_21-Tennis_Player_Training_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedestrian Safety with Eye-Contact between Autonomous Car and Pedestrian</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100520</link>
        <id>10.14569/IJACSA.2019.0100520</id>
        <doi>10.14569/IJACSA.2019.0100520</doi>
        <lastModDate>2019-05-30T13:34:32.3600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Akihito Yamashita</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Autonomous driving car; eye-contact; self driving car; pedestrian safety; Yolo; OpenCV; GazeRecorder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A method for eye-contact between an autonomous car and a pedestrian is proposed for pedestrian safety. The method detects pedestrians who would like to cross a street through eye-contact between an autonomous driving car and the pedestrian. Through experiments, it is found that the proposed method works well for finding such pedestrians and for signaling to them from the autonomous driving car, with a comprehensive representation of a face image displayed onto the front glass of the car.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_20-Pedestrian_Safety_with_Eye_Contact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Touching Arabic Characters in Handwritten Documents by Overlapping Set Theory and Contour Tracing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100519</link>
        <id>10.14569/IJACSA.2019.0100519</id>
        <doi>10.14569/IJACSA.2019.0100519</doi>
        <lastModDate>2019-05-30T13:34:32.3300000+00:00</lastModDate>
        
        <creator>Inam Ullah</creator>
        
        <creator>Mohd Sanusi Azmi</creator>
        
        <creator>Mohamad Ishak Desa</creator>
        
        <creator>Yazan M. Alomari</creator>
        
        <subject>Offline handwritten characters; touching characters; segmentation; overlapping set theory; morphological operation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Segmentation of handwritten words into characters is one of the challenging problems in the field of OCR, and the presence of touching characters makes it more difficult. There are many obstacles in segmenting touching Arabic handwritten text. Although researchers are working on segmenting these touching characters, unsolved problems remain in the segmentation of touching offline Arabic handwritten characters, owing to the large variety of characters and their shapes. In this research, a new method for segmenting touching Arabic handwritten characters has been developed. The main idea of the proposed method is to segment touching characters by identifying the touching point using overlapping set theory and the ending points of the Arabic word by applying standard morphological operations. After identifying all the points, the segmentation method traces the boundaries of the characters to separate the touching characters. Experiments were conducted on touching characters taken from different datasets. The results show the accuracy of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_19-Segmentation_of_Touching_Arabic_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service and Power Consumption Optimization on the IEEE 802.15.4 Pulse Sensor Node based on Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100518</link>
        <id>10.14569/IJACSA.2019.0100518</id>
        <doi>10.14569/IJACSA.2019.0100518</doi>
        <lastModDate>2019-05-30T13:34:32.3130000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>RPL; RSS; Pathloss; Zigbee; Pulse; DODAGs; IoT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The purpose of this research is to determine the Quality of Service (QoS) of Zigbee (IEEE 802.15.4) sensor nodes using indicators such as the Received Signal Strength and path loss (attenuation, -dB) during communication from an end-device sensor node to a router sensor node or coordinator sensor node (sink). The power consumption of a sensor node is important for maintaining the sensor node's lifetime; the sensor data in this research come from a pulse sensor. The Wireless Sensor Network communication system is developed for multi-hop communication, with the aim of obtaining low power consumption on each sensor node. This study uses the Routing Protocol for Low-Power and Lossy Networks (RPL) with position management of the sensor nodes in DODAGs, so that the average power consumption of each sensor node is low. The sensor nodes send pulse sensor data from various nodes interconnected at different distances in multi-hop fashion, so that the power consumption and Quality of Service (QoS) of the sensor nodes can be identified. From the research results, obtained by comparing various simulation values with field experiments, the average path loss of IEEE 802.15.4 (Zigbee) in free space at a distance of 50 m is -75.4 dB, and the average Received Signal Strength (RSS), comparing equations and field experiments with a minimum transmitter power of 0 dBm and a maximum transmitter power of +20 dBm, is -66.6 dBm at a distance of 50 m. The pulse sensor data are displayed on a web page and stored in a MySQL database using a Raspberry Pi 3 as the Internet gateway.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_18-Quality_of_Service_and_Power_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Method of Computer-Aided Design of a Bread Composition with Regard to Biomedical Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100517</link>
        <id>10.14569/IJACSA.2019.0100517</id>
        <doi>10.14569/IJACSA.2019.0100517</doi>
        <lastModDate>2019-05-30T13:34:32.3000000+00:00</lastModDate>
        
        <creator>Natalia A Berezina</creator>
        
        <creator>Andrey V. Artemov</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Alexander A. Budnik</creator>
        
        <subject>Modeling; polycomposite mixture; bread; gerodietic nutrition; quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>A method for the efficient software implementation of optimized multicomponent bread mixtures has been developed. These polycomposite mixtures have a chemical composition that meets modern physiological nutrition standards for elderly people. The developed algorithm was implemented in the high-level programming language Object Pascal using the Borland Delphi 7.0 IDE. Unconventional raw materials were selected that satisfy the necessary quality requirements of the finished bread in all modeled mixtures. Modeling the composition of flour mixtures for gerodietic nutrition with the software made it possible to obtain compositions with a specific ratio of prescribed components, balanced in accordance with their intended purpose.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_17-The_Method_of_Computer_Aided_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CWNN-Net: A New Convolution Wavelet Neural Network for Gender Classification using Palm Print</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100516</link>
        <id>10.14569/IJACSA.2019.0100516</id>
        <doi>10.14569/IJACSA.2019.0100516</doi>
        <lastModDate>2019-05-30T13:34:32.2830000+00:00</lastModDate>
        
        <creator>Elaraby A Elgallad</creator>
        
        <creator>Wael Ouarda</creator>
        
        <creator>Adel M. Alimi</creator>
        
        <subject>Deep learning; feature extraction; gender; voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The human hand is one of the body parts with special characteristics that are unique to every individual. These distinctive features can give some information about an individual, making the hand a suitable body part for biometric identification and, specifically, gender recognition. Several studies have suggested that the hand has unique traits that help in gender classification. Human hands form part of soft biometrics, as they have distinctive features that can give information about a person, and the information retrieved from soft biometrics can be used to identify an individual’s gender. Furthermore, soft biometrics can be combined with the main biometric characteristics to improve the quality of biometric detection. Gender classification using hand features such as the palm contributes significantly to the biometric identification domain and hence presents itself as a valuable research topic. This study explores the use of the Discrete Wavelet Transform (DWT) in gender identification, with SqueezeNet serving as a feature extractor and a Support Vector Machine (SVM) operating as the discriminative classifier. Inference is made using a mode-voting approach. The two datasets crucial for the fulfillment of the study were the 11k database and CASIA. The outcome of the tests substantiated the use of the voting technique for gender recognition.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_16-CWNN_Net_A_New_Convolution_Wavelet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical and Comparative Study of Different Types of Two-Leg Chopping Up Regulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100515</link>
        <id>10.14569/IJACSA.2019.0100515</id>
        <doi>10.14569/IJACSA.2019.0100515</doi>
        <lastModDate>2019-05-30T13:34:32.2530000+00:00</lastModDate>
        
        <creator>Walid Emar</creator>
        
        <creator>Omar A. Saraereh</creator>
        
        <subject>Chopping up regulator with a flattering inductive smoother; magnetic coupling; connection with interphasing centre-tap transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The main focus of this article is to analyze and simulate the two-leg parallel connection of a chopping up regulator with flattering inductive smoothers or with an interphasing centre-tap transformer, supplied by a three-phase diode rectifier with a DC link in between. The article deals with the problem of reducing total harmonic distortion (THD) and electromagnetic interference (EMI) at a low switching frequency. A simulated three-phase a.c. load model is added at the end to investigate the current and voltage harmonics. The main objective of this paper is to investigate the impact of replacing the flattering inductive smoothers, used to reduce the fluctuating voltage and current waveforms of the chopping up regulator, with a new topology known as the interphasing centre-tap transformer with magnetic coupling. The two variants are then compared on the basis of their technical parameters and from an economic investment viewpoint. The technical parameters considered are the current distribution into the individual legs, the amount of voltage and current ripple, and the region of discontinuous currents. The investment costs, governed by the material requirements, are essential for the transformer and smoother production design. The outcome of using the interphasing centre-tap transformer is the successive cancellation of the fluctuating voltage and current waveforms generated at the output of the chopping up regulator. This is demonstrated by an experiment on a 400-V/90-A chopping up regulator. Software simulations in Simplorer and Matlab/Simulink, together with laboratory prototypes, have been arranged to confirm the results.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_15-Analytical_and_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reconstruction of Fingerprint Shape using Fractal Interpolation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100514</link>
        <id>10.14569/IJACSA.2019.0100514</id>
        <doi>10.14569/IJACSA.2019.0100514</doi>
        <lastModDate>2019-05-30T13:34:32.2370000+00:00</lastModDate>
        
        <creator>Abdullah Bajahzar</creator>
        
        <creator>Hichem Guedri</creator>
        
        <subject>Fingerprint images; enhancement; thresholding process; thinning algorithms; minutiae extraction; Douglas-Peucker algorithm; fractal interpolation; iterated function system (IFS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>One of the severe problems in a fingerprint-based system is retaining the fingerprint images. In this paper, we propose a method to minimize the size of fingerprint images while retaining the reference points. The method is divided into three parts. The first part covers digital image preprocessing, which allows us to eliminate noise, improve the image, convert it into a binary image, detect the skeleton and locate the reference point. The second part concerns the detection of critical points by the Douglas-Peucker method. The final part presents the methodology for reconstructing the fingerprint curves using fractal interpolation. The experimental results show the accuracy of this reconstruction method. For a small number of iterations, the relative error (ER) is between 2.007% and 5.627% and the mean squared error (MSE) is between 0.009 and 0.126. For a greater number of iterations, the ER is between 0.415% and 1.64% and the MSE is between 0.000124 and 0.0167. This clearly indicates that the interpolated curves and the original curves are virtually identical.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_14-Reconstruction_of_Fingerprint_Shape.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Sustainable Agriculture (SSA) Solution Underpinned by Internet of Things (IoT) and Artificial Intelligence (AI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100513</link>
        <id>10.14569/IJACSA.2019.0100513</id>
        <doi>10.14569/IJACSA.2019.0100513</doi>
        <lastModDate>2019-05-30T13:34:32.2070000+00:00</lastModDate>
        
        <creator>Eissa Alreshidi</creator>
        
        <subject>Smart Agriculture; Internet of Things; IoT; Artificial Intelligence; AI; Fragmentation; Smart Sustainable Agriculture solutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The Internet of Things (IoT) and Artificial Intelligence (AI) have been employed in agriculture over a long period of time, alongside other advanced computing technologies; however, increased attention is currently being paid to the use of such smart technologies. Agriculture has provided an important source of food for human beings over many thousands of years, including the development of appropriate farming methods for different types of crops. The emergence of new advanced IoT technologies has the potential to monitor the agricultural environment to ensure high-quality products. However, there remains a lack of research and development in relation to Smart Sustainable Agriculture (SSA), accompanied by complex obstacles arising from the fragmentation of agricultural processes, i.e. the control and operation of IoT/AI machines; data sharing and management; interoperability; and the analysis and storage of large amounts of data. This study firstly explores existing IoT/AI technologies adopted for SSA and secondly identifies an IoT/AI technical architecture capable of underpinning the development of SSA platforms. As well as contributing to the current body of knowledge, this research reviews research and development within SSA and provides an IoT/AI architecture for establishing a Smart Sustainable Agriculture platform as a solution.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_13-Smart_Sustainable_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Legacy: Posterity Rights Analysis and Proposed Model for Digital Memorabilia Adoption using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100512</link>
        <id>10.14569/IJACSA.2019.0100512</id>
        <doi>10.14569/IJACSA.2019.0100512</doi>
        <lastModDate>2019-05-30T13:34:32.1730000+00:00</lastModDate>
        
        <creator>Amit Sudan</creator>
        
        <creator>Dr. Munish Sabharwal</creator>
        
        <subject>Digital assets; digital legacy; digital posterity; digital executers; digital memorabilia; SNP (Social Networking Profiles); SNS (Social Networking Sites)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The paper discusses digital legacy and the related concepts of posterity rights and digital memorabilia. It also deals with the right to exercise digital posterity concerning social networking profiles (SNP) on social networking sites (SNS). Digital memorabilia is the compendium of people’s social profiles and the digital artifacts accumulated in their name in the online or virtual world; it can give people an online space to connect to and be remembered in even after their demise, showing the many dimensions of their real-world personality. The paper proposes a model using the multiple logistic regression technique of machine learning to predict which users will opt for a digital memorial, depending on factors such as age, frequency of SNP use, awareness of digital assets and digital legacy, awareness of privacy rights concerning digital assets, and awareness of rights to posterity.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_12-Digital_Legacy_Posterity_Rights_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Accuracy between Convolutional Neural Networks and Na&#239;ve Bayes Classifiers in Sentiment Analysis on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100511</link>
        <id>10.14569/IJACSA.2019.0100511</id>
        <doi>10.14569/IJACSA.2019.0100511</doi>
        <lastModDate>2019-05-30T13:34:32.1570000+00:00</lastModDate>
        
        <creator>P.O. Abas Sunarya</creator>
        
        <creator>Rina Refianti</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Wiranti Octaviani</creator>
        
        <subject>Sentiment-analysis; convolutional neural network; deep learning; Na&#239;ve Bayes classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The needs and demands of the community for easy access to information encourage the increasing use of social media tools such as Twitter to share, deliver and search for information. The large number of tweets shared by Twitter users every second means that collections of tweets can be processed into useful information using sentiment analysis. The need for a large number of tweets to produce information motivates a classifier model that can perform the analysis quickly and provide accurate results. One algorithm that is currently popular and widely used to build classifier models is deep learning. Sentiment analysis in this research was conducted on English-language tweets on the topic &quot;Turkey Crisis 2018&quot; using one of the deep learning algorithms, the Convolutional Neural Network (CNN). The resulting CNN classifier model is then compared with a Na&#239;ve Bayes Classifier (NBC) model to find out which classifier model provides better accuracy in sentiment analysis. The research methods carried out in this research are data retrieval, preprocessing, model design and training, model testing, and visualization. The results obtained indicate that the CNN classifier model produces an accuracy of 0.88 (88%) while the NBC classifier model produces an accuracy of 0.78 (78%) in the testing phase. Based on these results, it can be concluded that the classifier model with the deep learning algorithm produces better accuracy in sentiment analysis than the Na&#239;ve Bayes classifier model.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_11-Comparison_of_Accuracy_between_Convolutional_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent Architecture of Intelligent and Distributed Platform of Governance, Risk and Compliance of Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100510</link>
        <id>10.14569/IJACSA.2019.0100510</id>
        <doi>10.14569/IJACSA.2019.0100510</doi>
        <lastModDate>2019-05-30T13:34:32.1270000+00:00</lastModDate>
        
        <creator>Soukaina Elhasnaoui</creator>
        
        <creator>Saadia Drissi</creator>
        
        <creator>Hajar Iguer</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>IT governance, risk and compliance; information system; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Governance, risk management and compliance of information technologies (IT GRC) is the responsibility of the company’s executives. IT GRC responds to the important concerns of information systems managers: ensuring the necessary changes in the Information System (IS) over time, and enabling it to meet the needs of risk mitigation, regulatory compliance, value creation and strategic alignment. Like a large number of organizations&#39; activities, IT GRC needs a solution supported by IS applications. Although such tools do exist, they have never been developed by considering the IT GRC processes as a whole. We respond to this lack of consideration by proposing an intelligent and distributed platform for governance, risk and compliance of information systems that deploys a variety of IT GRC best practices and frameworks and makes an intelligent choice, under constraints and parameters, of the best framework to evaluate the objectives and processes in question. EAS-COM (the communication system dedicated to the IT GRC platform) is our second proposal in this work: it ensures end-to-end communication between the different layers of the proposed IT GRC platform. This approach is based on Multi-Agent System (MAS) intelligence to manage the interactions between the distributed systems of the IT GRC platform.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_10-Multi_Agent_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Emotions’ Brainwaves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100509</link>
        <id>10.14569/IJACSA.2019.0100509</id>
        <doi>10.14569/IJACSA.2019.0100509</doi>
        <lastModDate>2019-05-30T13:34:32.1100000+00:00</lastModDate>
        
        <creator>Witman Alvarado-D&#237;az</creator>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Electroencephalogram; EEG; emotions; amyotrophic lateral sclerosis; degenerative diseases; classification learner</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Currently in Peru, patients with degenerative diseases such as Amyotrophic Lateral Sclerosis (ALS) have lost the ability to communicate. Many research papers establish basic communication systems for these patients, but it is also essential to know their feelings or state of mind through their emotions. In this study, we present an analysis of electroencephalographic (EEG) signals applied to emotions such as fear, tenderness, happiness and surprise; linear discriminant analysis (LDA) was used for the identification and classification of the four emotions, with an average success rate of 63.36%.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_9-Analysis_of_the_Emotions_Brainwaves.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Customer Voice of Project Portfolio Management Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100508</link>
        <id>10.14569/IJACSA.2019.0100508</id>
        <doi>10.14569/IJACSA.2019.0100508</doi>
        <lastModDate>2019-05-30T13:34:32.0800000+00:00</lastModDate>
        
        <creator>Maruthi Rohit Ayyagari</creator>
        
        <creator>Issa Atoum</creator>
        
        <subject>Project Portfolio Management (PPM); software reviews; sentiment analytics; text summarization; LDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Project Portfolio Management (PPM) software has gained success in many projects due to its large number of features, covering effective scheduling, risk management, collaboration, and third-party software integrations, to mention a few. A broad range of PPM software is available; however, it is essential to select the PPM software with minimum usage issues over time. While many companies use surveys and market research to get user feedback, PPM software reviews carry the voice of users: the positive and negative sentiments about the PPM software. This paper collected 4,775 reviews of ten PPM software products from Capterra.com. Our approach has four phases: text preprocessing, sentiment analysis, summarization, and categorization. The software reviews are filtered and cleaned, then the negative sentiments of user reviews are summarized into a set of factors that identify issues of the adopted PPM software. We report the most important issues of PPM software, which were related to missing technological features and lack of training. Results using the Latent Dirichlet Allocation (LDA) model showed that the top ten common issues are related to software complexity and lack of required features.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_8-Understanding_Customer_Voice_of_Project_Portfolio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of Software Anti-Ageing Model towards Green and Sustainable Products</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100507</link>
        <id>10.14569/IJACSA.2019.0100507</id>
        <doi>10.14569/IJACSA.2019.0100507</doi>
        <lastModDate>2019-05-30T13:34:32.0630000+00:00</lastModDate>
        
        <creator>Zuriani Hayati Abdullah</creator>
        
        <creator>Jamaiah Yahaya</creator>
        
        <creator>Siti Rohana Ahmad Ibrahim</creator>
        
        <creator>Sazrol Fadzli</creator>
        
        <creator>Aziz Deraman</creator>
        
        <subject>Software ageing factor; ageing prevention; software anti-ageing model; SEANA model; SeRIS Prototype System; Green and Sustainable Product; Empirical Study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Software ageing is a phenomenon that normally occurs in long-running software. Progressive degradation of software performance is a symptom that shows the software is getting old. Researchers believe that the ageing phenomenon can be delayed by applying anti-ageing techniques to the software, also known as software rejuvenation. Software ageing factors are classified into two categories: internal and external. This study focuses on the external factors of software ageing, which are categorized into three main factors: environment, human and functional. These three factors were derived from an empirical study involving fifty software practitioners in Malaysia. The anti-ageing model (SEANA model) is proposed to help prevent software from ageing prematurely, thus prolonging its usage and keeping it sustainable in its environment. The SEANA model was implemented in collaboration with a government agency in Malaysia to verify and validate the model in a real environment. A prototype of the SEANA model was developed and applied in a real case study. Furthermore, anti-ageing guidelines and actions are suggested for the ageing factors to delay the ageing phenomenon in application software and further support the greenness and sustainability of software products.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_7-The_Implementation_of_Software_Anti_Ageing_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Iris Partial Recognition based on Legendre Wavelet Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100506</link>
        <id>10.14569/IJACSA.2019.0100506</id>
        <doi>10.14569/IJACSA.2019.0100506</doi>
        <lastModDate>2019-05-30T13:34:32.0500000+00:00</lastModDate>
        
        <creator>Muktar Danlami</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Sofia Najwa Ramli</creator>
        
        <creator>Mustafa Mat Deris</creator>
        
        <subject>Iris recognition; partial recognition; wavelet; Legendre wavelet filter; biometric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The need for biometric recognition systems has grown substantially to address the issues of recognition and identification, especially in highly dense areas such as airports and train stations and for financial transactions. Evidence of this can be seen in some airports and in the implementation of these technologies in our mobile phones. Among the most popular biometric technologies are facial, fingerprint and iris recognition. Iris recognition is considered by many researchers to be the most accurate and reliable form of biometric recognition, because the iris can neither be surgically operated on without a risk of losing sight, nor does it change due to ageing. However, most iris recognition systems available at present can only recognize frontal-looking, high-quality iris images. Angular and partially captured images cannot be authenticated with existing methods of iris recognition. This research investigates the possibility of developing a framework for recognizing partially captured iris images. The research also adopts the Legendre wavelet filter for iris feature extraction. Selected iris images from the CASIA, UBIRIS and MMU databases were used to test the accuracy of the introduced framework. A threshold for the minimum iris image required was established.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_6-A_Framework_for_Iris_Partial_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Storage Consumption Reduction using Improved Inverted Indexing for Similarity Search on LINGO Profiles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100505</link>
        <id>10.14569/IJACSA.2019.0100505</id>
        <doi>10.14569/IJACSA.2019.0100505</doi>
        <lastModDate>2019-05-30T13:34:32.0170000+00:00</lastModDate>
        
        <creator>Muhammad Jaziem bin Javeed</creator>
        
        <creator>Nurul Hashimah Ahamed Hassain Malim</creator>
        
        <subject>Simplified Molecular-Input Line-Entry System (SMILES); LINGO profiles; similarity searching; inverted indexing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Millions of compounds which exist in huge datasets are represented using the Simplified Molecular-Input Line-Entry System (SMILES) representation. Fragmenting SMILES strings into overlapping substrings of a defined size, called LINGO Profiles, avoids the otherwise time-consuming conversion process. One drawback of this process is the generation of numerous identical LINGO Profiles. Introduced by Kristensen et al., the inverted indexing approach represents a modification intended to deal with the large number of molecules residing in the database. Implementing this technique effectively reduced the storage space requirement of the dataset by half, while also achieving significant speedup and a favourable accuracy value when performing similarity searching. This report presents an in-depth analysis of results, with conclusions about the effectiveness of the working prototype for this study.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_5-Storage_Consumption_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ZigBee Radio Frequency (RF) Performance on Raspberry Pi 3 for Internet of Things (IoT) based Blood Pressure Sensors Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100504</link>
        <id>10.14569/IJACSA.2019.0100504</id>
        <doi>10.14569/IJACSA.2019.0100504</doi>
        <lastModDate>2019-05-30T13:34:31.9870000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>Zigbee; Raspberry Pi 3; IoT; blood pressure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Wireless Sensor Networks (WSNs) have grown rapidly, e.g. using the ZigBee RF module combined with the Raspberry Pi 3; a goal of this research is to build such a WSN. This research discusses how the sensor nodes work, analyzes the Quality of Service (QoS) of the sensor nodes, and examines the role of the Raspberry Pi 3 as an internet gateway that sends blood pressure data to a database for real-time display on the internet. It is expected that patients can check their blood pressure from home without needing to visit the hospital, while the data is quickly and accurately received by hospital officers, doctors, and medical personnel. The purpose of this research is to build a prototype providing real-time blood pressure (mmHg) data, with systolic and diastolic readings that can indicate whether a patient suffers from symptoms of certain diseases, e.g. anemia, hypertension, or even more chronic diseases. Furthermore, ZigBee has the task of sending the blood pressure (mmHg) data in real time to the database and then to the internet, communicating from the ZigBee end-device to the ZigBee coordinator. For ZigBee communication at a distance of 5 meters, RSSI simulations show a value of -29 dBm and the experiment shows -40 dBm; at a distance of 100 m, RSSI shows -55 dBm (simulation) and -86 dBm (experiment).</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_4-ZigBee_Radio_Frequency_RF_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Factors Associated with Voucher Program for Speech Language Therapy for the Preschoolers of Parents with Communication Disorder using Weighted Random Forests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100503</link>
        <id>10.14569/IJACSA.2019.0100503</id>
        <doi>10.14569/IJACSA.2019.0100503</doi>
        <lastModDate>2019-05-30T13:34:31.9570000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Sulki Cha</creator>
        
        <creator>KyoungYuel Lim</creator>
        
        <subject>Weighted random forests; CART; speech language therapy; prediction model; voucher program</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>It is necessary to identify the demand level of consumers and to recognize the support target priority based on it in order to provide efficient services with a limited budget. This study provided baseline data for spreading the use of consumer-oriented voucher services by exploring factors associated with the demand for the Voucher Program for Speech Language Therapy for preschool children. This study analyzed 212 guardians living with children (≤5 years old) who resided in Seoul, surveyed from Aug 11 to Oct 9, 2015. The outcome variable was defined as the demand (i.e., required or not required) for the Voucher Program for Speech Language Therapy. The results of the developed prediction model were compared with the results of a decision tree based on classification and regression trees (CART). The prediction performance of the developed model was evaluated using a confusion matrix. Among the 212 subjects, 112 (52.8%) responded that the Voucher Program for Speech Language Therapy was necessary. The weighted random forest-based model identified five variables (i.e., whether preschooler caregiving services were used or not, economic activity after childbirth, awareness of Seoul’s welfare counselor operation, mean monthly living expenses, and whether welfare-related information was obtained) as the variables associated with the demand for the Voucher Program for Speech Language Therapy, and the accuracy was 72.1%. Systematic policies are needed to expand consumer-oriented language therapy services based on the developed prediction model for the Voucher Program for Speech Language Therapy.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_3-Exploring_Factors_Associated_with_Voucher_Program.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High-Speed FPGA-based of the Particle Swarm Optimization using HLS Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100502</link>
        <id>10.14569/IJACSA.2019.0100502</id>
        <doi>10.14569/IJACSA.2019.0100502</doi>
        <lastModDate>2019-05-30T13:34:31.9400000+00:00</lastModDate>
        
        <creator>Ali Al Bataineh</creator>
        
        <creator>Amin Jarrah</creator>
        
        <creator>Devinder Kaur</creator>
        
        <subject>FPGA; High Level Synthesis; Particle Swarm Optimization; Travelling Salesman Problem (TSP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>Particle Swarm Optimization (PSO) is a heuristic search method inspired by the swarming and collaborative behavior of biological populations. This work implements PSO for the Travelling Salesman Problem (TSP) using high-level synthesis to reduce the computational latency. The high-level synthesis design generates an estimate of the hardware resources needed to implement the PSO algorithm for the TSP on an FPGA. The targeted FPGA for this algorithm is the Xilinx Zynq family. The algorithm has been implemented to find the best route between 5 given cities with given distances. The research used 7 particles over different numbers of iterations to generate the best route between those 5 cities. The overall latency has been reduced due to the applied optimization techniques. This paper also implemented and parallelized the same algorithm on an Intel Core i7 CPU; the results show that the FPGA implementation outperforms the CPU implementation.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_2-High_Speed_FPGA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Personality Traits and External Factors for Stem Education Awareness using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100501</link>
        <id>10.14569/IJACSA.2019.0100501</id>
        <doi>10.14569/IJACSA.2019.0100501</doi>
        <lastModDate>2019-05-30T13:34:31.8770000+00:00</lastModDate>
        
        <creator>Sang C Suh</creator>
        
        <creator>Anusha Upadhyaya B.N</creator>
        
        <creator>Ashwin Nadig N.V</creator>
        
        <subject>Educational Data Mining (EDM); Science Technology Engineering Management (STEM); Machine Learning (ML); K-Nearest Neighbor (KNN); One class–Support Vector Machine (one class - SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(5), 2019</description>
        <description>The purpose of this paper is to present the personality traits and the factors that influence a student to pursue STEM education, using machine learning techniques. STEM courses are held in high regard because they play a vital role in global technology, invention and the economy. Educational Data Mining helps us to identify patterns and relationships in a large educational database. Machine Learning, in turn, facilitates the decision-making process by enabling learning from the dataset. A survey comprising an extensive variety of questions regarding STEM education was conducted, and the opinions of students from various backgrounds and disciplines were collected. A dataset was generated based on the responses from students. Machine Learning algorithms (one-class SVM and KNN) applied to this dataset indicate that the variety of courses offered, research-oriented learning, a problem-solving approach, and a good career with a high-paying job are some of the factors which may influence a student to choose a STEM course.</description>
        <description>http://thesai.org/Downloads/Volume10No5/Paper_1-Analyzing_Personality_Traits_and_External_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Energy Utilization in Cloud Fog Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100476</link>
        <id>10.14569/IJACSA.2019.0100476</id>
        <doi>10.14569/IJACSA.2019.0100476</doi>
        <lastModDate>2019-05-03T12:12:14.3400000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Muhammad Nauman Ali</creator>
        
        <creator>Sheraz Yousaf</creator>
        
        <creator>Mudassar Mehmood</creator>
        
        <creator>Hammad Saleem</creator>
        
        <subject>Energy efficiency; fog computing; cloud computing; load balancing; resources allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Cloud computing provides various kinds of services, like storage and processing, that can be accessed on demand when required. Despite its countless benefits, it also involves issues that limit the full adoption of the cloud and the enjoyment of its various benefits. The main issues faced during the adoption of cloud infrastructure are high latency and lack of location awareness. To overcome these issues, the concept of fog computing was introduced to reduce the load on the cloud and improve the allocation of resources. The fog provides the same services as the cloud. The main features of fog are location awareness, low latency, and mobility. However, the increasing use of IoT devices also increases the usage of the cloud-fog environment, and this heavy usage of fog has drawn researchers&#39; attention to its energy consumption. In this paper, we address the problem of energy consumption in terms of resource allocation by applying load balancing algorithms and comparing their results with the energy models.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_76-Efficient_Energy_Utilization_in_Cloud_Fog_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Cryptanalysis of Provable Certificateless Generalized Signcryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100475</link>
        <id>10.14569/IJACSA.2019.0100475</id>
        <doi>10.14569/IJACSA.2019.0100475</doi>
        <lastModDate>2019-05-03T12:12:14.3100000+00:00</lastModDate>
        
        <creator>Abdul Waheed</creator>
        
        <creator>Jawaid Iqbal</creator>
        
        <creator>Nizamud Din</creator>
        
        <creator>Shahab Ul Islam</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Noor Ul Amin</creator>
        
        <subject>Digital signature; certificateless encryption; certificateless generalized signcryption; malicious-but-passive KGC; random oracle model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Certificateless generalized signcryption works adaptively as a certificateless signcryption, signature, or encryption scheme with a single algorithm, making it suitable for storage-constrained environments. Recently, Zhou et al. proposed a novel certificateless generalized signcryption scheme and proved its ciphertext indistinguishability under adaptive chosen ciphertext attacks (IND-CCA2) using the Gap Bilinear Diffie-Hellman and Computational Diffie-Hellman assumptions, as well as its existential unforgeability against chosen message attacks (EUF-CMA) under the same assumptions in the random oracle model. In this paper, we analyze the scheme of Zhou et al. and show that, unfortunately, it is IND-CCA2 insecure in the encryption and signcryption modes within the defined security model. We also present a practical and improved scheme, provably secure in the random oracle model.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_75-Improved_Cryptanalysis_of_Provable_Certificateless.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach based on Model Driven Engineering for Processing Complex SPARQL Queries on Hive</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100474</link>
        <id>10.14569/IJACSA.2019.0100474</id>
        <doi>10.14569/IJACSA.2019.0100474</doi>
        <lastModDate>2019-05-03T12:12:14.2930000+00:00</lastModDate>
        
        <creator>Mouad Banane</creator>
        
        <creator>Abdessamad Belangour</creator>
        
        <subject>SPARQL; big data; model driven engineering; RDF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Semantic web technologies are increasingly used in different domains. The core technology of the Semantic Web is the RDF standard. Today, the growth of RDF data requires systems capable of handling large volumes of data and responding to very complex queries at the join level. Several RDF data processing systems have been proposed, but they are not dedicated to handling complex SPARQL queries. In this paper, we present a new approach based on model driven engineering for processing complex SPARQL queries using one of the big data processing tools, named Hive. We evaluate our system using three datasets from the LUBM benchmark. The results of this evaluation show the performance and scalability of our approach, which also gives very interesting results when compared with existing works.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_74-New_Approach_based_on_Model_Driven_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Certificate Exchange of Agricultural Products using SOAP Web Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100473</link>
        <id>10.14569/IJACSA.2019.0100473</id>
        <doi>10.14569/IJACSA.2019.0100473</doi>
        <lastModDate>2019-05-03T12:12:14.2330000+00:00</lastModDate>
        
        <creator>Miyanda Chilikwela</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Digital; certificate; Rivest, Shamir and Adleman (RSA); Simple Object Access Protocol (SOAP); web service; model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Developing countries have continued to experience a number of challenges in managing import and export certificates for various goods. In this paper, we propose a model for digital certificate exchange in an effort to improve the security of data exchange among government organizations and the business community. With the increase of various information systems being used in many organizations, data exchange between systems has become critical. The model developed uses SOAP web services for data exchange and RSA encryption for secure data exchange. The Ministry of Agriculture is responsible for the issuance of import and export certificates in Zambia, while the Zambia Revenue Authority is responsible for ensuring that goods imported into or exported out of the country have a valid, authentic certificate. The results show that the model provides a secure and timely exchange of information between the ministries and the government agencies.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_73-Digital_Certificate_Exchange_of_Agricultural_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Usability Guidelines for the Design of Effective Arabic Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100472</link>
        <id>10.14569/IJACSA.2019.0100472</id>
        <doi>10.14569/IJACSA.2019.0100472</doi>
        <lastModDate>2019-04-30T10:21:00.2030000+00:00</lastModDate>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ahmad B. Alkhodre</creator>
        
        <subject>Arabic websites; design principles; font type; font size; readability; legibility; images; graphics; site performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Arabic websites constitute 1% of the web content, with more than 225 million viewers and 41% Internet penetration. However, there is a lack of design guidelines related to the selection and use of appropriate font types, font sizes and images in Arabic websites. Both text and images are vital multimedia components of websites and were therefore selected for investigation in this study. This paper performed an in-depth inspection of font and image design practices within the 73 most visited Arabic websites in Saudi Arabia, according to the Alexa Internet ranking in the first quarter of 2019. Our exhaustive analysis showed discrepancies between international design recommendations and the actual design of Arabic websites. There was considerable variation and inconsistency in the use of font types and sizes between and within the Arabic websites. Arabic Droid Kufi was mostly used for styling page titles and navigation menus, whilst Tahoma was used for styling paragraphs. The font size of the Arabic text ranged from 12 to 16 pixels, which may lead to poor readability. Images were used heavily in the Arabic websites, causing prolonged site loading times. Moreover, the images strongly reflected the dimensions of Saudi culture, especially collectivism and masculinity. Current Arabic web design practices are compared against findings from past studies of international designs, and lessons aimed at improving Arabic web design are drawn.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_72-Towards_Usability_Guidelines_for_the_Design_of_Effective_Arabic_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Context-Dependent Approach for Evaluating Data Quality Cost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100471</link>
        <id>10.14569/IJACSA.2019.0100471</id>
        <doi>10.14569/IJACSA.2019.0100471</doi>
        <lastModDate>2019-04-30T10:21:00.1870000+00:00</lastModDate>
        
        <creator>Meryam Belhiah</creator>
        
        <creator>Boucha&#239;b Bounabat</creator>
        
        <subject>Data quality improvement project; cost of data quality; data quality assessment and improvement; cost/benefit analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Data-related expertise is a central and determining factor in the success of many organizations. Big Tech companies have developed an operational environment that extracts benefit from collected data to increase the efficiency and effectiveness of daily operations and the services offered. However, in a complex economic environment, with transparent accounting and financial management, it is not possible to solve data quality issues with “dollars” without justifications and measurable indicators beforehand. The overall goal is not to improve data quality by any means, but to plan cost-effective data quality projects that benefit the organization. This knowledge is particularly relevant for organizations with little or no experience in the field of data quality assessment and improvement. Indeed, it is important that the costs and benefits associated with data quality are explicit and, above all, quantifiable for both business managers and IT analysts. Organizations must also evaluate the different scenarios related to the implementation of data quality projects. The optimal scenario must provide the best financial and business value and meet the specifications in terms of time, resources and cost. The approach presented in this paper is an evaluation-oriented approach. For data quality projects, it evaluates the positive impact on the organization&#39;s financial and business objectives, which can be linked to the positive value of quality improvement, and the implementation complexity, which can be coupled with the costs of quality improvement. This paper also attempts to translate the implementation complexity empirically into costs expressed in monetary terms.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_71-Towards_a_Context_Dependent_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Person Detection from Overhead View: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100470</link>
        <id>10.14569/IJACSA.2019.0100470</id>
        <doi>10.14569/IJACSA.2019.0100470</doi>
        <lastModDate>2019-04-30T10:21:00.1700000+00:00</lastModDate>
        
        <creator>Misbah Ahmad</creator>
        
        <creator>Imran Ahmed</creator>
        
        <creator>Kaleem Ullah</creator>
        
        <creator>Iqbal khan</creator>
        
        <creator>Ayesha Khattak</creator>
        
        <creator>Awais Adnan</creator>
        
        <subject>Person detection; background subtraction; segmentation; blob based techniques; feature based techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In recent years, overhead-view-based person detection has gained importance due to its handling of the occlusion problem and its better scene coverage compared to the frontal view. In computer vision, overhead-based person detection holds significant importance in many applications, including person detection, person counting, person tracking, behavior analysis and occlusion-free surveillance systems. This paper aims to provide a comprehensive survey of recent developments and challenges related to person detection from the top view. To the best of our knowledge, this is the first attempt to survey different overhead person detection techniques. This paper provides an overview of state-of-the-art overhead-based person detection methods and guidelines for choosing the appropriate method in real-life applications. The techniques are divided into two main categories: blob-based techniques and feature-based techniques. Various detection factors such as field of view, region of interest, color space and image resolution are also examined, along with a variety of top-view datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_70-Person_Detection_from_Overhead_View_A_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Abstractions for Large-Scale Deep Learning Models in Big Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100469</link>
        <id>10.14569/IJACSA.2019.0100469</id>
        <doi>10.14569/IJACSA.2019.0100469</doi>
        <lastModDate>2019-04-30T10:21:00.1570000+00:00</lastModDate>
        
        <creator>Ayaz H Khan</creator>
        
        <creator>Ali Mustafa Qamar</creator>
        
        <creator>Aneeq Yusuf</creator>
        
        <creator>Rehanullah Khan</creator>
        
        <subject>Big data; deep learning; deep auto-encoders; Restricted Boltzmann Machines (RBM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The goal of big data analytics is to analyze datasets with high volume, velocity, and variety for large-scale business intelligence problems. These workloads are normally processed in a distributed fashion on massively parallel analytical systems. Deep learning is part of a broader family of machine learning methods based on learning representations of data. Deep learning plays a significant role in information analysis by adding value to massive amounts of unsupervised data. A core domain of research is the development of deep learning algorithms for the automatic extraction of complex data formats at a higher level of abstraction from massive volumes of data. In this paper, we present the latest research trends in the development of parallel algorithms, optimization techniques, tools and libraries related to big data analytics and deep learning on various parallel architectures. The basic building blocks for deep learning, such as Restricted Boltzmann Machines (RBM) and Deep Belief Networks (DBN), are identified and analyzed for the parallelization of deep learning models. We propose a parallel software API based on PyTorch, the Hadoop Distributed File System (HDFS), Apache Hadoop MapReduce and MapReduce Job (MRJob) for developing large-scale deep learning models. We obtained about a 5-30% reduction in the execution time of the deep auto-encoder model, even on a single-node Hadoop cluster. Furthermore, the complexity of code development is significantly reduced when creating multi-layer deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_69-Software_Abstractions_for_Large_Scale_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Citation Service for Wikipedia Articles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100468</link>
        <id>10.14569/IJACSA.2019.0100468</id>
        <doi>10.14569/IJACSA.2019.0100468</doi>
        <lastModDate>2019-04-30T10:21:00.1400000+00:00</lastModDate>
        
        <creator>Arif Bramantoro</creator>
        
        <creator>Ahmad A. Alzahrani</creator>
        
        <subject>Scientific dataset; web services; wikipedia; pangaea; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The citation of big scientific data is crucial not only for scientific activity but also for scientific discovery and dissemination within scientist networks. The main objective of this research is to develop a service-oriented data citation system using data mining techniques for Middle East and North Africa scientists. A novel service-oriented framework is proposed to prototype the development of the system, which includes query formalization, service discovery, service composition design, service selection, search space, and service optimization. In this research, Wikipedia scientific-related articles are connected with more than 35 petabytes of Pangaea datasets. The output of this work is a web service that takes Wikipedia article information as input and provides the relevant datasets (if they exist) related to the article. The evaluation of this research is based on a quantitative assessment of web service quality metrics, such as the number of accesses and bandwidth utilization, which shows that the framework is robust enough to handle both big data access and its citation.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_68-Data_Citation_Service_for_Wikipedia_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smartphones-Based Crowdsourcing Approach for Installing Indoor Wi-Fi Access Points</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100467</link>
        <id>10.14569/IJACSA.2019.0100467</id>
        <doi>10.14569/IJACSA.2019.0100467</doi>
        <lastModDate>2019-04-30T10:21:00.1230000+00:00</lastModDate>
        
        <creator>Ahmad Abadleh</creator>
        
        <creator>Wala Maitah</creator>
        
        <creator>Hamzeh Eyal Salman</creator>
        
        <creator>Omar Lasassmeh</creator>
        
        <creator>Awni Hammouri</creator>
        
        <subject>Crowdsourcing; indoor localization system; accelerometer sensors; Wi-Fi access point; smartphones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This study provides a new crowdsourcing-based approach to identify the most crowded places in an indoor environment. The Crowdsourcing Indoor Localization system (CSI) has been one of the most widely used techniques in location-based applications. However, many applications suffer from the inability to locate the most crowded locations for various purposes such as advertising. These applications usually need to perform a survey before identifying target places, which requires additional cost and time. For example, Access Point (AP) installation can rely on an automated system to identify the best places where APs should be placed, without the need for primitive ways of determining the best locations. In this work, we present a new approach for Wi-Fi designers and advertising companies to recognize the proper positions for placing APs and advertisement activities in indoor buildings. The recorded data of the accelerometer sensors are analyzed and processed to detect users’ steps and thereby predict the most crowded places in a building. Our experiments show promising results in terms of the most widely used metrics in the field: the accuracy of detecting users’ steps reaches 95.8%, and the accuracy of detecting crowded places is 90.4%.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_67-Smartphones_Based_Crowdsourcing_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient MRI Segmentation and Detection of Brain Tumor using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100466</link>
        <id>10.14569/IJACSA.2019.0100466</id>
        <doi>10.14569/IJACSA.2019.0100466</doi>
        <lastModDate>2019-04-30T10:20:59.9830000+00:00</lastModDate>
        
        <creator>Alpana Jijja</creator>
        
        <creator>Dr. Dinesh Rai</creator>
        
        <subject>Brain tumor; segmentation; convolutional neural network; water cycle algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Brain tumor is one of the most life-threatening diseases in its advanced stages. Hence, detection at early stages is crucial in treatment for improving the life expectancy of patients. Magnetic resonance imaging (MRI) is used extensively nowadays for the detection of brain tumors, which requires segmenting huge volumes of 3D MRI images, a task that is very challenging if done manually. Thus, automatic segmentation of the images will significantly lessen the burden and also improve the process of diagnosing tumors. This paper presents an efficient method based on convolutional neural networks (CNNs) for the automatic segmentation and detection of brain tumors using MRI images. The water cycle algorithm is applied to the CNN to obtain an optimal solution. The developed technique has an accuracy of 98.5%.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_66-Efficient_MRI_Segmentation_and_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Delphi Method for Evaluating HyTEE Model (Hybrid Software Change Management Tool with Test Effort Estimation)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100465</link>
        <id>10.14569/IJACSA.2019.0100465</id>
        <doi>10.14569/IJACSA.2019.0100465</doi>
        <lastModDate>2019-04-30T10:20:59.9530000+00:00</lastModDate>
        
        <creator>Mazidah Mat Rejab</creator>
        
        <creator>Nurulhuda Firdaus Mohd Azmi</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Fuzzy Delphi Method; software traceability; test effort estimation; regression testing; software changes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>When changes are made to a software system during development and maintenance, it needs to be tested again, i.e., regression tested, to ensure that the changes behave as intended and have not impacted software quality. This research will produce an automated tool that can help a software manager or maintainer search for the coverage artifact before and after a change request. A software quality engineer can determine the test coverage of new changes, which can support cost, effort, and schedule estimation. Therefore, this study is intended to gather the views and consensus of experts on the elements of the proposed model using the Fuzzy Delphi Method. Through purposive sampling, a total of 12 experts from academia and industry participated in the verification of items through 5-point linguistic scales of the questionnaire instrument. The outcomes show that for 90% of the elements in the proposed model, consisting of change management, traceability support, test effort estimation support, regression testing support, report, and GUI, the threshold value (d construct) is less than 0.2 and the percentage of expert group agreement is above 75%. This shows that, based on the consensus of experts, all the items contained therein are needed in the HyTEE Model (Hybrid Software Change Management Tool with Test Effort Estimation).</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_65-Fuzzy_Delphi_Method_for_Evaluating_HyTEE_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content based Document Classification using Soft Cosine Measure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100464</link>
        <id>10.14569/IJACSA.2019.0100464</id>
        <doi>10.14569/IJACSA.2019.0100464</doi>
        <lastModDate>2019-04-30T10:20:59.9370000+00:00</lastModDate>
        
        <creator>Md Zahid Hasan</creator>
        
        <creator>Shakhawat Hossain</creator>
        
        <creator>Md. Arif Rizvee</creator>
        
        <creator>Md. Shohel Rana</creator>
        
        <subject>Classification; similarity; feature extraction; cosine similarity; soft cosine measure; content; document</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Document classification is a deep-rooted issue in information retrieval and an imperative part of an assortment of applications for the effective management of text documents and substantial volumes of unstructured data. Automatic document classification can be defined as the content-based assignment of documents to predefined categories, which makes it less demanding to fetch relevant data at the right time as well as to filter and route documents directly to users. To recover data effortlessly in minimum time, scientists around the globe are trying to build content-based classifiers, and as a consequence an assortment of classification frameworks has been developed. Unfortunately, because they use conventional algorithms, almost all of these frameworks fail to classify documents into the proper categories. This paper proposes the Soft Cosine Measure as a document classification method for classifying text documents based on their contents. This classification method considers the similarity of the features of the texts rather than their exact physical match. For example, traditional systems consider ‘emperor’ and ‘king’ as two different words, whereas the proposed method extracts the same meaning for both of these words. With its feature extraction capability and content-based similarity measure technique, the proposed system achieves a classification accuracy of up to 98.60%, better than existing systems.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_64-Content_based_Document_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Space Syntax and Information Visualization for Spatial Behavior Analysis and Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100463</link>
        <id>10.14569/IJACSA.2019.0100463</id>
        <doi>10.14569/IJACSA.2019.0100463</doi>
        <lastModDate>2019-04-30T10:20:59.9070000+00:00</lastModDate>
        
        <creator>Sheng Ming Wang</creator>
        
        <creator>Chieh-Ju Huang</creator>
        
        <subject>Spatial behavior; space syntax; information visualization; generative adversarial network (GAN); user movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This study used space syntax to discuss user movement dynamics and crowded hot spots in a commercial area. Moreover, it developed personas according to its onsite observations, visualized user movement data, and performed a deep-learning simulation using the generative adversarial network (GAN) to simulate user movement in an urban commercial area as well as the influence such movement might engender. From a pedestrian perspective, this study examined the crowd behavior in a commercial area, conducted an onsite observation of people’s spatial behaviors, and simulated user movement through data-science-driven approaches. Through the analysis process, we determined the spatial differences among various roads and districts in the commercial area, and according to the user movement simulation, we identified key factors that influence pedestrian spatial behaviors and pedestrian accessibility. Moreover, we used the deformed wheel theory to investigate the spatial structure of the commercial area and the synergetic relationship between the space and pedestrians; deformed wheel theory presents the user flow differences in various places and the complexity of road distribution, thereby enabling relevant parties to develop design plans that integrate space and service provision in commercial areas. This research contributes to the interdisciplinary study of spatial behavior analysis and simulation with machine learning applications.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_63-Using_Space_Syntax_an_Information_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Genetic-FSM Technique for Detection of High-Volume DoS Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100462</link>
        <id>10.14569/IJACSA.2019.0100462</id>
        <doi>10.14569/IJACSA.2019.0100462</doi>
        <lastModDate>2019-04-30T10:20:59.9070000+00:00</lastModDate>
        
        <creator>Mohamed Samy Nafie</creator>
        
        <creator>Khaled Adel</creator>
        
        <creator>Hassan Abounaser</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>Denial of Service (DoS); Evolutionary Algorithms (EA); Finite State Machine (FSM); Genetic Algorithm (GA); Genetic Programming (GP); Hill-Climbing Search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Insecure networks are vulnerable to cyber-attacks, which may result in catastrophic damage on both local and global scopes. Nevertheless, one of the tedious tasks in detecting any type of attack in a network, including DoS attacks, is to determine the thresholds required to discover whether an attack is occurring or not. In this paper, a hybrid system that incorporates different heuristic techniques along with a Finite State Machine is proposed to detect and classify DoS attacks. In the proposed system, a Genetic Programming technique combined with a Genetic Algorithm is designed and implemented to represent the system core, which evolves an optimized tree-based detection model. A Hill-Climbing technique is also employed to enhance the system by providing a reference point value for evaluating the optimized model and gaining better performance. Several experiments with different configurations are conducted to test the system performance using a synthetic dataset that mimics real-world network traffic with different features and scenarios. The developed system is compared to many state-of-the-art techniques with respect to several performance metrics. Additionally, a Mann-Whitney Wilcoxon test is conducted to validate the accuracy of the proposed system. The results show that the developed system succeeds in achieving higher overall performance and proves to be statistically significant.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_62-Hybrid_Genetic_FSM_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assistive Technologies for Bipolar Disorder: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100461</link>
        <id>10.14569/IJACSA.2019.0100461</id>
        <doi>10.14569/IJACSA.2019.0100461</doi>
        <lastModDate>2019-04-30T10:20:59.8730000+00:00</lastModDate>
        
        <creator>Yumna Anwar</creator>
        
        <creator>Dr. Arshia Khan</creator>
        
        <subject>Bipolar disorder; mobile applications; electrodermal activity; heart rate variability; behavioral intervention technologies; depression; mania</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Bipolar disorder is a severe mental illness characterized by periodic manic and depressive episodes. The current mode of assessment of a patient’s bipolar state is subjective clinical diagnosis influenced by the patient’s self-reporting. There are many intervention technologies available to help manage the illness, and many researchers have worked on objective diagnosis and state prediction. Most of the recent work is focused on sensor-based objective prediction to minimize the delay between a relapse and the patient’s visit to the clinic for diagnosis and treatment. Due to the severity of the societal and economic burden caused by bipolar disorder, this research has been given great emphasis. In this paper, we start with a discussion of the global severity of the disorder and the economic and family burden inflicted by it; we then describe the existing mechanisms in place to identify the current state of a bipolar patient, and go on to discuss the behavioral intervention technologies available and researched to help patients manage the disorder. Next, we cover the shift in focus of current research, i.e., towards sensor-based predictive systems for patients and clinical professionals, highlighting some of the preliminary research and clinical studies and their outcomes.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_61-Assistive_Technologies_for_Bipolar_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact Factors of IT Flexibility within Cloud Technology on Various Aspects of IT Effectiveness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100460</link>
        <id>10.14569/IJACSA.2019.0100460</id>
        <doi>10.14569/IJACSA.2019.0100460</doi>
        <lastModDate>2019-04-30T10:20:59.8600000+00:00</lastModDate>
        
        <creator>Salameh A Mjlae</creator>
        
        <creator>Zarina Mohamad</creator>
        
        <creator>Wan Suryani</creator>
        
        <subject>Cloud Computing Adoption (CCA); IT Flexibility (ITF);  IT Effectiveness (ITE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Cloud computing adoption has reached an essential inflection point, affecting IT and business models and strategies throughout industries. There is a lack of empirical evidence on how the adoption of cloud technology, and certain capabilities of cloud technology specifically, affect various aspects of information technology effectiveness, such as IT helpfulness to users, user satisfaction, and IT quality of service. The intent of this paper is to add to the body of knowledge that could be used by researchers, companies, and businesses alike to accomplish optimal outcomes, with a focus on the main useful capabilities of cloud-based services, solutions, and the components inside them. The research findings present statistical evidence confirming that factors of IT flexibility within cloud computing have a much stronger correlation with several aspects of information technology effectiveness than the remainder. The new awareness gained would improve the decision-making process for IT executives and IT managers when considering the adoption of cloud-computing-based services and solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_60-Impact_Factors_of_IT_Flexibility_within_Cloud_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extreme Learning Machine and Particle Swarm Optimization for Inflation Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100459</link>
        <id>10.14569/IJACSA.2019.0100459</id>
        <doi>10.14569/IJACSA.2019.0100459</doi>
        <lastModDate>2019-04-30T10:20:59.8270000+00:00</lastModDate>
        
        <creator>Adyan Nur Alfiyatin</creator>
        
        <creator>Agung Mustika Rizki</creator>
        
        <creator>Wayan Firdaus Mahmudy</creator>
        
        <creator>Candra Fajri Ananda</creator>
        
        <subject>Extreme learning machine; particle swarm optimization; inflation; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Inflation is one indicator for measuring the development of a nation. If inflation is not controlled, it will have many negative impacts on the people of a country. There are many ways to control inflation, one of which is forecasting. Forecasting is an activity to determine future events based on past data. There are various kinds of artificial intelligence methods for forecasting, one of which is the extreme learning machine (ELM). ELM has a weakness in determining initial weights by trial and error. Therefore, the authors propose an optimization method to overcome the problem of determining initial weights. Based on the testing carried out, the proposed method achieves an error value of 0.020202758 with a computation time of 5 seconds.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_59-Extreme_Learning_Machine_and_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Comparative Analysis of Two Novel Underwater Routing Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100458</link>
        <id>10.14569/IJACSA.2019.0100458</id>
        <doi>10.14569/IJACSA.2019.0100458</doi>
        <lastModDate>2019-04-30T10:20:59.8130000+00:00</lastModDate>
        
        <creator>Umar Draz</creator>
        
        <creator>Tariq Ali</creator>
        
        <creator>Khurshid Asghar</creator>
        
        <creator>Sana Yasin</creator>
        
        <creator>Zubair Sharif</creator>
        
        <creator>Qasim Abbas</creator>
        
        <creator>Shagufta Aman</creator>
        
        <subject>Data transmission; throughput; end-to-end delay; energy consumption; L2-ABF; DVRP; Delay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Water covers roughly 71.9% of the total area of this planet and shelters its most unexplored regions, in which a large quantity of marine life is present; that is why underwater research remains bounded, with benefits still unexplored. With the addition of sensors and growing interest in the exploration and monitoring of marine life, Underwater Wireless Sensor Networks (UWSNs) can play an important role. A variety of routing protocols has been deployed in order to exchange information between deployed nodes. Providing stable data transmission, maximum throughput, minimum energy consumption, and minimum delay are challenging tasks in a UWSN. Two such routing protocols are Layer-by-Layer Angle-Based Flooding (L2-ABF) and the Diagonal and Vertical Routing Protocol (DVRP). For stable data transmission, node density plays a role in both shallow and deep water. Several parameters are employed to evaluate the efficiency of these routing protocols; in this paper, end-to-end delay, loss of data packets during transmission, and data delivery ratio are considered the major parameters for evaluation. For this, the network simulator is used with the Aqua-Sim package. The results produced during this study indicate the best routing protocol for data transmission: L2-ABF performs better than DVRP in different situations, and tradeoff relationships are further achieved across multiple situations.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_58-A_Comprehensive_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cybercrime in Morocco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100457</link>
        <id>10.14569/IJACSA.2019.0100457</id>
        <doi>10.14569/IJACSA.2019.0100457</doi>
        <lastModDate>2019-04-30T10:20:59.7970000+00:00</lastModDate>
        
        <creator>M EL Hamzaoui</creator>
        
        <creator>Faycal Bensalah</creator>
        
        <subject>Cybercrime; cyberspace; information system; information and communication technologies; social psychology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Cybercrime encompasses all illegal actions and facts that target cyberspaces and cause enormous economic and financial damage to organizations and individuals. A cyberspace is essentially composed of digital information as well as its storage and communication instruments/platforms. To remedy this phenomenon, attention has focused particularly on both computer security and legislation, in an area where human behavior is also decisive. Social psychology has well defined the concept of behavior and has also studied its relations with attitude in human action. This paper aims to broaden the scope of cybercrime to also discuss marginal phenomena which do not attract enough attention but could easily convert into digital crimes once circumstances become appropriate. The main objective of this work is the study of ‘human’-‘digital world’ interactivity in a specific geographical area, or more precisely, the study of human behavior towards digital crimes. The proposed study targets young people in a small Moroccan city that lies in the south of the country’s central region and constitutes its economic barycenter. The study dealt specifically with a sample of young Moroccans living in El Jadida city, which coincidentally included individuals from other Moroccan cities, further enriching this study.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_57-Cybercrime_in_Morocco.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm for Enhancing the QoS of Video Traffic over Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100456</link>
        <id>10.14569/IJACSA.2019.0100456</id>
        <doi>10.14569/IJACSA.2019.0100456</doi>
        <lastModDate>2019-04-30T10:20:59.7630000+00:00</lastModDate>
        
        <creator>Abdul Nasser A Moh</creator>
        
        <creator>Radhwan Mohamed Abdullah</creator>
        
        <creator>Abedallah Zaid Abualkishik</creator>
        
        <creator>Borhanuddin Bin Moh. Ali</creator>
        
        <creator>Ali A. Alwan</creator>
        
        <subject>Wireless Mesh Networks (WMNs); Medium Access Control (MAC); Quality of Service (QoS); video traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>One of the major issues in wireless mesh networks (WMNs) that needs to be solved is the lack of a viable protocol for medium access control (MAC). In fact, the main concern is to expand the use of limited wireless resources while simultaneously retaining the quality of service (QoS) of all types of traffic, in particular the video service for real-time variable bit rate (rt-VBR) traffic. As such, this study attempts to enhance QoS with regard to packet loss, average delay, and throughput by controlling the transmitted video packets; the packet loss and average delay of QoS for video traffic can thereby be controlled. Simulation results show that Optimum Dynamic Reservation-Time Division Multiplexing Access (ODR-TDMA) achieves excellent resource utilization, improving the QoS of video packets. This study has also proven the adequacy of the proposed algorithm in minimizing packet delay and packet loss, in addition to enhancing throughput in comparison to results reported in previous studies.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_56-Algorithm_for_Enhancing_the_QoS_of_Video_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CMMI-DEV Implementation Simplified</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100455</link>
        <id>10.14569/IJACSA.2019.0100455</id>
        <doi>10.14569/IJACSA.2019.0100455</doi>
        <lastModDate>2019-04-30T10:20:59.7500000+00:00</lastModDate>
        
        <creator>Maruthi Rohit Ayyagari</creator>
        
        <creator>Issa Atoum</creator>
        
        <subject>CMMI; Spiral model; Capability Maturity Model Integration; process improvement; CMMI Implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With advancing technology and increasing customer requirements, software organizations pursue reduced cost and increased productivity by using standards and best practices. The Capability Maturity Model Integration (CMMI) is a software process improvement model that enhances productivity and reduces the time and cost of running projects. As a reference model, CMMI does not specify systematic steps for how to implement the model practices, leaving room for organization-specific development approaches. Small organizations with low budgets, and those not seeking CMMI appraisals, cannot cope with the high price of CMMI implementation; however, they need to manage the risk of CMMI implementation under their own administration. Therefore, this paper proposes a simplified plan using the spiral model to implement CMMI to reach level 2. The objective is to make the implementation more straightforward and fit the CMMI specification without hiring external experts. Compared to related implementation frameworks, the proposed model is deemed competitive and applicable under such organizations’ conditions.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_55-CMMI_DEV_Implementation_Simplified.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time RNA Sequence Edition with Matrix Insertion Deletion for Improved Bio Molecular Computing using Template Match Measure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100454</link>
        <id>10.14569/IJACSA.2019.0100454</id>
        <doi>10.14569/IJACSA.2019.0100454</doi>
        <lastModDate>2019-04-30T10:20:59.7170000+00:00</lastModDate>
        
        <creator>Kotteeswaran C</creator>
        
        <creator>Khanaa V</creator>
        
        <creator>Rajesh A</creator>
        
        <subject>Bio Molecular Computing; RNA Sequence; SDM; SMM; Templates; TMM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>RNA sequence editing has become a challenging task in molecular computing. A number of approaches for the RNA editing problem in bio-molecular computing have been discussed earlier, but they fail to achieve high performance. To improve performance, a real-time approach is presented which uses a sequence depth measure (SDM). The method receives the RNA sequence and estimates the depth measure for the different subsequences generated. Based on the SDM value, a cumulative sequence match measure (SMM) is computed to classify the sequence into the available classes. Matrix insertion and deletion are performed based on the template match measure (TMM), which is computed from the matches found in the templates available for the different classes. The experimental results show that our approach outperforms in terms of accuracy, risk detection accuracy, time complexity, and false classification ratio, which in turn increases the performance of bio-molecular computing and matrix insertion-deletion.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_54-Real_Time_RNA_Sequence_Edition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How Volunteering Affects the Offender’s Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100453</link>
        <id>10.14569/IJACSA.2019.0100453</id>
        <doi>10.14569/IJACSA.2019.0100453</doi>
        <lastModDate>2019-04-30T10:20:59.6870000+00:00</lastModDate>
        
        <creator>Momina Shaheen</creator>
        
        <creator>Tayyaba Anees</creator>
        
        <creator>Muhammad Imran Manzoor</creator>
        
        <creator>Shuja Akbar</creator>
        
        <creator>Iqra Obaid</creator>
        
        <creator>Aimen Anum</creator>
        
        <subject>Offender’s behavior; spreading of criminal behavior; agent-based modelling; simulation; norm violation; criminology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Agent-based modelling is widely used for presenting and evaluating social phenomena. It helps researchers analyze and evaluate a social model and its related hypothetical theories by simulating a real situation. This research presents a model showing how an offender’s behavior is influenced by the volunteering of people against offending tendencies. It examines how one person’s offending behavior urges others to commit the same criminal act or norm violation, and how volunteering can make the offender feel ashamed of his actions and motivate others to volunteer in similar situations in the future. An agent-based model is presented and simulated to evaluate and validate the conceptualization of the presented social dilemma. The model is simulated by asking questions with a particular focus on the offending behavior of an agent. This study evaluates the simulated results of the presented model to describe the theoretical foundation of the spread of offending or criminal behavior. Moreover, it validates the role of volunteering in decreasing people’s offending tendencies: the simulations present an understandable situation in which one person’s offending increases the offending tendencies of the audience, and the results show that volunteering decreases the offending tendencies not only of the offender but also of the audience.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_53-How_Volunteering_affects_the_Offenders_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster based Hybrid Approach to Task Scheduling in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100452</link>
        <id>10.14569/IJACSA.2019.0100452</id>
        <doi>10.14569/IJACSA.2019.0100452</doi>
        <lastModDate>2019-04-30T10:20:59.6700000+00:00</lastModDate>
        
        <creator>Y. Home Prasanna Raju</creator>
        
        <creator>Nagaraju Devarakonda</creator>
        
        <subject>Task scheduling; cloud computing; clustering; k-means; particle swarm optimization; makespan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Cloud computing technology enables the sharing of computer system resources among users through the internet. Many users may request sharable resources from a cloud, and these resources must be distributed effectively among the requesting users within a short amount of time. Task scheduling is one way of handling user requests effectively in a cloud environment. Many existing biologically inspired optimization techniques have been applied to task scheduling problems. This paper aims at combining clustering techniques with biologically inspired optimization algorithms to derive better results. A new hybrid methodology, KPSOW (K-means with PSO using weights), is proposed, which makes use of the strengths of both the K-means and PSO algorithms together with the concept of weights. The results show that KPSOW considerably reduces the makespan and improves the utilization of computing resources in the cloud.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_52-Cluster_based_Hybrid_Approach_to_Task_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Academic Emotions Affected by Robot Eye Color: An Investigation of Manipulability and Individual-Adaptability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100450</link>
        <id>10.14569/IJACSA.2019.0100450</id>
        <doi>10.14569/IJACSA.2019.0100450</doi>
        <lastModDate>2019-04-30T10:20:59.6400000+00:00</lastModDate>
        
        <creator>Kento Koike</creator>
        
        <creator>Yuya Tsuji</creator>
        
        <creator>Takahito Tomoto</creator>
        
        <creator>Daisuke Katagami</creator>
        
        <creator>Takenori Obo</creator>
        
        <creator>Yuta Ogai</creator>
        
        <creator>Junji Sone</creator>
        
        <creator>Yoshihisa Udagawa</creator>
        
        <subject>Robot lecturer; academic emotions; lecture behavior; human-robot interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>We investigate whether academic emotions are affected by the color of a robot’s eyes during lecture behaviors. Conventional human-robot interaction research on robot lecturers has emphasized robots assisting or replacing human lecturers. We expand on these ideas and examine whether robots can lecture using behaviors that are impossible for humans. Psychological research has shown that color affects emotions, and because emotion is strongly related to learning, a framework for emotion control is required. Thus, we considered whether emotions related to the learner’s academic work, called “academic emotions,” can be controlled by the color of a robot’s illuminated eye light. In this paper, we find that the robot’s eye light color affects academic emotions and that the effect can be manipulated and adapted to individuals. Furthermore, the manipulability of academic emotions by color was confirmed in a situation mimicking a real lecture.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_50-Academic_Emotions_Affected_by_Robot_Eye_Color.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Concept Analysis based Framework for Evaluating Information System Success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100451</link>
        <id>10.14569/IJACSA.2019.0100451</id>
        <doi>10.14569/IJACSA.2019.0100451</doi>
        <lastModDate>2019-04-30T10:20:59.6400000+00:00</lastModDate>
        
        <creator>Ansar Daghouri</creator>
        
        <creator>Khalifa Mansouri</creator>
        
        <creator>Mohammed Qbadou</creator>
        
        <subject>Formal concept analysis; process; multi criteria; indicator; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This paper proposes a methodology for evaluating information system success. It is based on two main fields: formal concept analysis and multi-criteria decision-making methods. The methodology is exploited by a framework whose main objective is to visualize the synchronization between company processes and information system indicators via process mapping and formal concept analysis. Moreover, by applying multi-criteria decision-making methods, we can rank the information system among other systems in order to improve system performance. In practice, we apply the steps of this framework to a Moroccan bank by choosing a combination of processes and indicators.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_51-Formal_Concept_Analysis_based_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Mining of Association Rules based on Clustering from Distributed Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100449</link>
        <id>10.14569/IJACSA.2019.0100449</id>
        <doi>10.14569/IJACSA.2019.0100449</doi>
        <lastModDate>2019-04-30T10:20:59.6100000+00:00</lastModDate>
        
        <creator>Marwa Bouraoui</creator>
        
        <creator>Amel Grissa Touzi</creator>
        
        <subject>Distributed data; association rules mining; clustering; meta-items; meta-rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Data analysis techniques need to be improved to allow the processing of ever-growing data. One of the most commonly used techniques is association rule mining. These rules are used to detect facts that often occur together within a dataset. Unfortunately, existing methods generate a large number of association rules without emphasis on their relevance and utility, complicating the task of interpreting the results. In this paper, we propose a new approach for mining association rules with an emphasis on ease of assimilation and exploitation of the carried knowledge. Our approach addresses these shortcomings while efficiently and intelligently minimizing the size of the rule set. In fact, we propose to optimize the size of the extraction contexts by taking advantage of clustering techniques. We then extract frequent itemsets and rules in the form of meta-itemsets and meta-rules, respectively. Experiments on benchmark datasets show that our approach leads to a significant reduction in the number of generated rules, thereby speeding up the execution time.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_49-Efficient_Mining_of_Association_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel Hybrid-Testing Tool Architecture for a Dual-Programming Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100448</link>
        <id>10.14569/IJACSA.2019.0100448</id>
        <doi>10.14569/IJACSA.2019.0100448</doi>
        <lastModDate>2019-04-30T10:20:59.5770000+00:00</lastModDate>
        
        <creator>Ahmed Mohammed Alghamdi</creator>
        
        <creator>Fathy Elbouraey Eassa</creator>
        
        <subject>Software testing; OpenACC run-time error classifications; hybrid testing tool; dual-programming model; OpenACC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>High-Performance Computing (HPC) has recently become important in several sectors, including the scientific and manufacturing fields. The continuous growth in building ever more powerful supercomputers has become noticeable, and Exascale supercomputers will be feasible in the next few years. As a result, building massively parallel systems becomes even more important to keep up with upcoming Exascale-related technologies. Building such systems requires a combination of programming models to increase system parallelism, especially dual- and tri-level programming models that increase parallelism in heterogeneous systems comprising CPUs and GPUs. There are several combinations of the dual-programming model; one of them is MPI + OpenACC. This combination has several features that increase an application’s parallelism on heterogeneous architectures and supports different platforms with greater performance, productivity, and programmability. However, systems built with different programming models are error-prone, difficult to build, and hard to test. Testing parallel applications is already a difficult task because parallel errors are hard to detect due to the non-deterministic behavior of parallel applications, and integrating more than one programming model inside the same application makes testing even more difficult because the integration can introduce new types of errors. Our main contribution is to identify and categorize OpenACC run-time errors and determine their causes, with a brief explanation, for the first time in the literature. We also propose a solution for detecting run-time errors in applications implemented with the dual-programming model, based on hybrid testing techniques that discover real and potential run-time errors. Finally, to the best of our knowledge, no parallel testing tool has been built to detect run-time errors in applications programmed using the dual-programming model MPI + OpenACC, any tri-level programming model, or even the OpenACC programming model alone. Moreover, OpenACC errors have not been classified or identified before.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_48-A_Parallel_Hybrid_Testing_Tool_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantitative Analysis of Healthy and Pathological Vocal Fold Vibrations using an Optical Flow based Waveform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100447</link>
        <id>10.14569/IJACSA.2019.0100447</id>
        <doi>10.14569/IJACSA.2019.0100447</doi>
        <lastModDate>2019-04-30T10:20:59.5630000+00:00</lastModDate>
        
        <creator>Heyfa Ammar</creator>
        
        <subject>Quantification; vocal fold vibrations; optical flow based waveform; pathology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The objective assessment of vocal fold vibrations is important in diagnosing several vocal diseases. Given the high speed of the vibrations, high-speed videoendoscopy is commonly used to capture vocal fold movements in video recordings. Two steps are commonly carried out to automatically quantify laryngeal parameters and assess the vibrations. The first step maps the spatial-temporal information contained in the video recordings into a representation that facilitates analysis of the vibrations. Numerous techniques are reported in the literature, but the majority of them require segmenting all the images of the video, which is a complex task. The second step quantifies laryngeal parameters in order to assess the vibrations. To this end, most existing approaches require additional processing of the representation in order to deduce those parameters. Furthermore, for some reported representations, assessing the symmetry and periodicity of the vocal fold dynamics requires setting parameters that are specific to the representation under consideration, which makes comparison between existing techniques difficult. To alleviate these problems, the present study investigates the use of a recently proposed representation, the optical flow based waveform, to objectively quantify laryngeal parameters. This waveform is retained in this study because it does not require segmenting all the images of the video. Furthermore, the present work shows that the automatic quantification of the vibrations using this waveform can be carried out without any additional processing. Moreover, common laryngeal parameters are exploited; hence, no representation-specific parameters need to be defined for the automatic assessment of the vibrations. Experiments conducted on healthy and pathological phonation show the accuracy of the waveform, which is also more sensitive to pathological phonation than state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_47-Quantitative_Analysis_of_Healthy_and_Pathological_Vocal_Fold.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Completed Local Ternary Pattern (CLTP) for Face Image Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100446</link>
        <id>10.14569/IJACSA.2019.0100446</id>
        <doi>10.14569/IJACSA.2019.0100446</doi>
        <lastModDate>2019-04-30T10:20:59.5470000+00:00</lastModDate>
        
        <creator>Sam Yin Yee</creator>
        
        <creator>Taha H. Rassem</creator>
        
        <creator>Mohammed Falah Mohammed</creator>
        
        <creator>Nasrin M. Makbol</creator>
        
        <subject>Face recognition; recognition accuracy; Local Binary Pattern (LBP); Completed Local Binary Pattern (CLBP); Completed Local Ternary Pattern (CLTP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Feature extraction is the most important step affecting the recognition accuracy of face recognition. Among these features, texture descriptors play an important role as local feature descriptors in many face recognition systems. Recently, many types of texture descriptors have been proposed and used for the face recognition task. The Completed Local Ternary Pattern (CLTP) is a texture descriptor that was proposed for texture image classification and has been tested on different image classification tasks. It was proposed to overcome the drawbacks of the Local Binary Pattern (LBP): the CLTP is more robust to noise and has shown better discriminative properties than other descriptors. In this paper, a comprehensive study of the performance of the CLTP for the face recognition task is carried out. The aim of this study is to investigate and evaluate CLTP performance using eight different face datasets and to compare it with previous texture descriptors. In the experimental results, the CLTP showed good recognition rates and outperformed the other texture descriptors on this task. Several face datasets are used in this paper, namely the Georgia Tech Face, Collection Facial Images, Caltech Pedestrian Faces, JAFFE, FEI, YALE, ORL, and UMIST datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_46-Performance_Evaluation_of_Completed_Local_Ternary_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Age-Group Recognition for Opinion Video Logs using Ensemble of Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100445</link>
        <id>10.14569/IJACSA.2019.0100445</id>
        <doi>10.14569/IJACSA.2019.0100445</doi>
        <lastModDate>2019-04-30T10:20:59.5300000+00:00</lastModDate>
        
        <creator>Sadam Al Azani</creator>
        
        <creator>El-Sayed M. El-Alfy</creator>
        
        <subject>Multimodal recognition; opinion mining; age groups; word embedding; acoustic features; visual features; information fusion; ensemble learning; Arabic speakers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With the widespread usage of smartphones and social media platforms, video logging is gaining increasing popularity, especially after the advent of YouTube in 2005, with hundreds of millions of views per day. It has attracted the interest of many people, with immense emerging applications for filmmakers, journalists, product advertisers, entrepreneurs, educators, and many others. Nowadays, people express and share their opinions online on various daily issues using different forms of content, including text, audio, images, and video. This study presents a multimodal approach for recognizing a speaker’s age group from social media videos. Several structures of Artificial Neural Networks (ANNs) are presented and evaluated using standalone modalities. Moreover, a two-stage ensemble network is proposed to combine multiple modalities. In addition, a corpus of videos has been collected and prepared for multimodal age-group recognition with a focus on Arabic language speakers. The experimental results demonstrate that combining different modalities can mitigate the limitations of unimodal recognition systems and lead to significant improvements in the results.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_45-Multimodal_Age_Group_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual-Cross-Polarized Antenna Decoupling for 43 GHz Planar Massive MIMO in Full Duplex Single Channel Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100444</link>
        <id>10.14569/IJACSA.2019.0100444</id>
        <doi>10.14569/IJACSA.2019.0100444</doi>
        <lastModDate>2019-04-30T10:20:59.5000000+00:00</lastModDate>
        
        <creator>Muhsin Muhsin</creator>
        
        <creator>Rina Pudji Astuti</creator>
        
        <subject>Massive MIMO; dual polarized; mm-Wave; coupling; self-interference; full duplex single channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Massive Multiple Input Multiple Output (MIMO) and Full Duplex Single Channel (FDSC) at mm-Wave are key technologies of future advanced wireless communications. Self-interference is the main problem in this technique because of the large number of antennas. This paper proposes a dual-cross-polarized configuration to reduce self-interference between antennas. We conduct computer simulations to design the antenna and to verify the self-interference effect of the designed antenna. The simulations show that the proposed design has a lower Envelope Correlation Coefficient (ECC), a result achieved because the dual-cross-polarized technique reduces coupling between antennas. We also found that the bit-error-rate (BER) performance of the proposed antenna is better than that of a single-polarized antenna, indicating that the designed antenna is well designed to reduce the self-interference effect between antennas.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_44-Dual_Cross_Polarized_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techniques, Tools and Applications of Graph Analytic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100443</link>
        <id>10.14569/IJACSA.2019.0100443</id>
        <doi>10.14569/IJACSA.2019.0100443</doi>
        <lastModDate>2019-04-30T10:20:59.4830000+00:00</lastModDate>
        
        <creator>Faiza Ameer</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Zahid Khan</creator>
        
        <creator>Khawar Zulfiqar</creator>
        
        <creator>Ahmad Riasat</creator>
        
        <subject>Graph; graph analytic; big data; graph tools; analytical techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Graphs have acute significance because of their polytropic nature and have widespread real-world big data applications, e.g., search engines, social media, knowledge discovery, and network systems. A major challenge is to develop efficient systems to store, process, and analyze the large graphs generated by these applications. Graph analytics is an important research area in big data graphs, dealing with the efficient extraction of useful knowledge and interesting patterns from rapidly growing big data streams. The tremendously large and complex data of graph applications requires specially designed graph databases with special data structures and effective features for data modeling and querying. Manipulating data of such size requires effective, scalable, and distributed computational techniques for efficient graph partitioning and communication. Researchers have proposed different analytical techniques, storage structures, and processing models. This study provides insight into different graph analytical techniques and compares existing graph storage and computational technologies. This work also assesses the performance, strengths, and limitations of various graph databases and processing models.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_43-Techniques_Tools_and_Applications_of_Graph_Analytic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-criteria Intelligent Algorithm for Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100442</link>
        <id>10.14569/IJACSA.2019.0100442</id>
        <doi>10.14569/IJACSA.2019.0100442</doi>
        <lastModDate>2019-04-30T10:20:59.4530000+00:00</lastModDate>
        
        <creator>Mahdi Jemmali</creator>
        
        <creator>Loai Kayed B.Melhim</creator>
        
        <creator>Mafawez Alharbi</creator>
        
        <subject>Decision making; supplier; scoring; algorithms; supply chain management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In most societies, supply chain management and e-procurement processes are among the cornerstones of any economy and a primary influence on people’s lives. Providing these communities with their various commodity needs depends on a wide range of suppliers. Due to the variety and diversity of suppliers, choosing a suitable supplier is a difficult and critical process; if this process is performed in the traditional way, decision making consumes considerable time and effort to reach the desired results. To solve the problem of choosing a suitable supplier, this research suggests an intelligent algorithm based on given determinants specified by the decision maker or the customer. The proposed algorithm employs a set of intelligent formulas that convert the predefined preferences into quantitative measurements, which are then used to differentiate between different suppliers or between the set of given offers. The experimental results showed that the D3 model employed the given preferences and succeeded in selecting the most appropriate offer among the many offers presented by the available suppliers.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_42-Multi_Criteria_Intelligent_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gene Optimized Deep Neural Round Robin Workflow Scheduling in Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100441</link>
        <id>10.14569/IJACSA.2019.0100441</id>
        <doi>10.14569/IJACSA.2019.0100441</doi>
        <lastModDate>2019-04-30T10:20:59.4370000+00:00</lastModDate>
        
        <creator>Shanmugasundaram M</creator>
        
        <creator>Kumar R</creator>
        
        <creator>Kittur H M</creator>
        
        <subject>Bandwidth capacity; processor time; energy; fitness function; memory; user task; virtual machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Workflow scheduling is a key problem to be solved in the cloud to increase the quality of services. A few research works have been designed to perform workflow scheduling using different techniques, but the scheduling performance of existing techniques is not effective when considering a large number of user tasks, and the makespan of workflow scheduling is high. In order to address these limitations, the Gene Optimized Deep Neural Round Robin Scheduling (GODNRRS) technique is proposed. The designed GODNRRS technique contains three layers, namely input, hidden, and output layers, to efficiently perform workflow scheduling in the cloud. The GODNRRS technique initially takes the user tasks as input in the input layer and forwards them to the hidden layer. It then initializes the gene population with the assistance of virtual machines on an Amazon cloud server at the first hidden layer. Next, it determines a fitness function for each virtual machine using its energy, memory, CPU time, and bandwidth capacity at the second hidden layer. Afterward, it defines a weight for each virtual machine at the third hidden layer depending on its fitness function estimation. Consequently, it distributes the user tasks to optimal virtual machines according to their weight values at the fourth hidden layer in a cyclic manner. At last, the output layer renders the scheduled task results. Thus, the GODNRRS technique handles workflows in the cloud with improved scheduling efficiency and lower energy consumption and makespan. The experimental evaluation uses metrics such as scheduling efficiency, makespan, and energy consumption with respect to different numbers of user tasks from the LIGO, Montage, and CyberShake real-time applications. The experimental results show that the GODNRRS technique increases the efficiency and reduces the makespan of workflow scheduling in the cloud compared to state-of-the-art works.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_41-Gene_Optimized_Deep_Neural_Round_Robin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion-Miner: A Hybrid Classifier for Intrusion Detection using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100440</link>
        <id>10.14569/IJACSA.2019.0100440</id>
        <doi>10.14569/IJACSA.2019.0100440</doi>
        <lastModDate>2019-04-30T10:20:59.4070000+00:00</lastModDate>
        
        <creator>Samra Zafar</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Xiaopeng Hu</creator>
        
        <subject>Intrusion detection system; principal component analysis; intrusion-miner; Fisher discriminant ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With the rapid growth in internet usage, the number of network attacks has increased dramatically within the past few years. The problem faced nowadays is how to observe these attacks efficiently, a security concern that matters because of the value of the data involved. Consequently, it is important to monitor and handle these attacks, and an intrusion detection system (IDS) has the potential diagnostic ability to handle them and secure the network. Numerous intrusion detection approaches have been presented, but the main hindrance is their performance, which can be improved by increasing the detection rate as well as decreasing the false-positive rate. Optimizing the performance of an IDS is a serious and challenging issue that is attracting growing attention from the research community. In this paper, we propose a hybrid classification approach, ‘Intrusion-Miner’, built with the help of two classifier algorithms for network anomaly detection, to obtain an optimum result and make it possible to detect network attacks. Thus, principal component analysis (PCA) and Fisher Discriminant Ratio (FDR) have been implemented for feature selection and noise removal. This hybrid approach is compared with J48, BayesNet, JRip, SMO, and IBK, and its performance is evaluated using the KDD99 dataset. The experimental results revealed that the precision of the proposed approach is 96.1%, with low false-positive and high false-negative rates compared to other state-of-the-art algorithms. The simulation result evaluation shows that perceptible progress and real-time intrusion detection can be attained when the suggested models are applied to identify diverse kinds of network attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_40-Intrusion_Miner_A_Hybrid_Classifier_for_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid of Multiple Linear Regression Clustering Model with Support Vector Machine for Colorectal Cancer Tumor Size Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100439</link>
        <id>10.14569/IJACSA.2019.0100439</id>
        <doi>10.14569/IJACSA.2019.0100439</doi>
        <lastModDate>2019-04-30T10:20:59.3900000+00:00</lastModDate>
        
        <creator>Muhammad Ammar Shafi</creator>
        
        <creator>Mohd Saifullah Rusiman</creator>
        
        <creator>Shuhaida Ismail</creator>
        
        <creator>Muhamad Ghazali Kamardan</creator>
        
        <subject>Colorectal cancer; multiple linear regression; support vector machine; fuzzy c-means; clustering; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This study proposes a new hybrid model of Multiple Linear Regression Clustering (MLRC) combined with a Support Vector Machine (SVM) to predict the tumor size of colorectal cancer (CRC). Three models, Multiple Linear Regression (MLR), MLRC, and the hybrid MLRC with SVM, were compared using two statistical error measurements to find the best model for predicting the tumor size of colorectal cancer. The proposed hybrid MLRC with SVM model found two significant clusters, containing 15 and three significant variables for clusters 1 and 2, respectively. The experiments found that the proposed model is the best model, with the lowest values of Mean Square Error (MSE) and Root Mean Square Error (RMSE). This finding sheds light for health practitioners on the factors that contribute to colorectal cancer.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_39-A_Hybrid_of_Multiple_Linear_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Conceptual Model to Evaluate usability of Digital Government Services in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100438</link>
        <id>10.14569/IJACSA.2019.0100438</id>
        <doi>10.14569/IJACSA.2019.0100438</doi>
        <lastModDate>2019-04-30T10:20:59.3730000+00:00</lastModDate>
        
        <creator>Rini Yudesia Naswir</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <creator>Mahmudul Hasan</creator>
        
        <creator>Salwani Daud</creator>
        
        <creator>Ganthan Narayana Samy</creator>
        
        <creator>Pritheega Magalingam</creator>
        
        <subject>Digital government; citizen-centric; quantitative; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The Malaysian government is committed to providing comprehensive digital government services, as reflected in policies and strategic plans such as the 11th Malaysia Plan 2016-2020 (RMKe-11) for digital government transformation. However, although most Malaysian government services are online, they are still inadequate, and the majority of users are unhappy with the current services. Usability is a critical aspect of the success of digital government. Thus, this research aims to develop and validate a usability conceptual model of digital government services in the Malaysian context, identifying the key factors that influence perceived usability and thereby encourage usage of, and satisfaction with, digital government services. This research applied a quantitative-deductive approach and employed PLS-SEM analysis. Empirical results indicate that Effectiveness, Efficiency, Learnability, Satisfaction, Usefulness, and Citizen Centric are key factors of the perceived usability of digital government services. The evaluation of the proposed conceptual model showed that three of the six factors, namely Effectiveness, Satisfaction, and Citizen Centric, have a significant positive influence on the perceived usability of digital government in the Malaysian context.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_38-Towards_a_Conceptual_Model_to_Evaluate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare Management using ICT and IoT based 5G</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100437</link>
        <id>10.14569/IJACSA.2019.0100437</id>
        <doi>10.14569/IJACSA.2019.0100437</doi>
        <lastModDate>2019-04-30T10:20:59.3430000+00:00</lastModDate>
        
        <creator>Vijey Thayananthan</creator>
        
        <subject>Information communication technology; e_Health; customer relationship management; Radio Frequency Identification (RFID); Internet of Things based fifth generation (IoT based 5G)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In healthcare management, all patients need to be looked after properly with the latest technology. Although the treatment facilities of healthcare management are available wirelessly, many treatments are still pending and delayed because the number of patients is increasing. This research focuses on two problems: the availability of treatment facilities and an efficient way of handling healthcare administration records. In healthcare management, e_Health applications focus on medical treatment and administration. However, these applications depend on Information and Communication Technology (ICT) and Radio Frequency Identification (RFID) systems. Using IoT-based 5G and the latest technologies, this research provides an efficient method to solve these problems. In this method, ICT based on 5G networks and IoT-based 5G are the major components, which include efficient management protocols for treating patients and elders through the appropriate e_Health applications. Although some patients and older adults visit healthcare homes or hospitals regularly, they are never fully satisfied because they always expect better services. Some healthcare management organizations treat these people as customers and maintain customer relationships. Improving the accuracy and quality of healthcare services and customer satisfaction depends on Customer Relationship Management (CRM) through evolving technologies. As a result, ICT based on RFID and other latest technologies enhances the quality of e_Health applications and healthcare services while satisfying CRM values. These enhancements, together with their profits and benefits, form the conclusions of this work on healthcare service and management.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_37-Healthcare_Management_using_ICT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature-based Sentiment Analysis for Slang Arabic Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100436</link>
        <id>10.14569/IJACSA.2019.0100436</id>
        <doi>10.14569/IJACSA.2019.0100436</doi>
        <lastModDate>2019-04-30T10:20:59.3270000+00:00</lastModDate>
        
        <creator>Emad E Abdallah</creator>
        
        <creator>Sarah A. Abo-Suaileek</creator>
        
        <subject>Sentiment analysis; Arabic features; opinion mining; emotional features; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The increased number of Arab users on microblogging services who use the Arabic language to write and read has prompted several researchers to study the posted data and discover users’ opinions and feelings to support decision making. In this paper, a sentiment analysis framework is presented for slang Arabic text. A new dataset in the Jordanian dialect is presented. Numerous Arabic-specific features are shown along with their impact on slang Arabic tweets. The new set of features consists of lexicon, writing style, grammatical, and emotional features. Several experiments are conducted to test the performance of the proposed scheme, which produces better results in comparison with others. The experiments show that the system performs well without translating the tweets into English or standard Arabic.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_36-Feature_based_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Security Mechanism for Automotive Controller Area Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100435</link>
        <id>10.14569/IJACSA.2019.0100435</id>
        <doi>10.14569/IJACSA.2019.0100435</doi>
        <lastModDate>2019-04-30T10:20:59.3130000+00:00</lastModDate>
        
        <creator>Mabrouka Gmiden</creator>
        
        <creator>Mohamed Hedi Gmiden</creator>
        
        <creator>Hafedh Trabelsi</creator>
        
        <subject>Automotive CAN security; cryptographic algorithms; analysis methodology; real-time performances</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The connectivity of modern cars has led to security issues. A number of contributions have proposed the use of cryptographic algorithms in order to provide automotive Controller Area Network (CAN) security. However, due to the characteristics of the CAN protocol, real-time requirements are not guaranteed within cryptographic schemes. In this work, the effects of implementing cryptographic approaches have been investigated by proposing a performance analysis methodology for cryptographic algorithms. Pending the implementation of the proposed method in a real vehicle, a platform based on an STMicroelectronics STM32F407 microcontroller board has been deployed to test the proposed methodology. The experiments show that the implementation of a cryptographic algorithm has an impact on the number of clock cycles and, therefore, on real-time performance.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_35-Performance_Analysis_of_Security_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100434</link>
        <id>10.14569/IJACSA.2019.0100434</id>
        <doi>10.14569/IJACSA.2019.0100434</doi>
        <lastModDate>2019-04-30T10:20:59.2800000+00:00</lastModDate>
        
        <creator>Alicia Valdez</creator>
        
        <creator>Griselda Cortes</creator>
        
        <creator>Sergio Castaneda</creator>
        
        <creator>Laura Vazquez</creator>
        
        <creator>Angel Zarate</creator>
        
        <creator>Yadira Salas</creator>
        
        <creator>Gerardo Haces Atondo</creator>
        
        <subject>Technological strategy; big data; Hadoop; data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The importance of data analysis in companies grows every day, in a global market that generates large amounts of transactions. Industry 4.0, one of the current technological trends, is a set of diverse technologies whose objective is the digitalization and technological connectivity of the entire value chain of organizations. Data analysis and decision making in real time have a positive impact on efficiency. One of the technologies that supports this concept is big data, which can help companies use and manage large volumes of data to support decision making. In this research project, the computational environment of the Apache Hadoop software has been analyzed to create a technological strategy that supports companies in building a roadmap for understanding and implementing big data technology. As a result, a big data computer laboratory has been created at the Autonomous University of Coahuila, Mexico, to support medium-sized manufacturing companies in their data analysis strategy for decision making.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_34-Big_Data_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Software-Defined Network Approach of Flexible Network Adaptive for VPN MPLS Traffic Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100433</link>
        <id>10.14569/IJACSA.2019.0100433</id>
        <doi>10.14569/IJACSA.2019.0100433</doi>
        <lastModDate>2019-04-30T10:20:59.2630000+00:00</lastModDate>
        
        <creator>Faycal Bensalah</creator>
        
        <creator>Najib El Kamoun</creator>
        
        <subject>SDN; QoS; VPN; MPLS; Adaptive network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Multi-Protocol Label Switching VPN (MPLS-VPN) is a technology for connecting multiple remote sites across an operator’s private infrastructure. MPLS VPN offers advantages that traditional solutions cannot guarantee in terms of security and quality of service. As this technology becomes more prevalent among businesses, banks, and public institutions, managing the paths on which these tunnels are deployed has become a priority need for Internet service providers (ISPs). Through the principle of controller orchestration, ISPs can overcome this difficulty: software-defined networking is a paradigm that allows the entire network infrastructure to be managed through orchestration. In this paper, we propose a new approach called FNA-TE &quot;Flexible Network Adaptive - Traffic Engineering&quot;, which manages MPLS VPN tunnels to meet the QoS requirements of those with the highest priority.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_33-Novel_Software_Defined_Network_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Performance of {0,1,3}-NAF Recoding Algorithm for Elliptic Curve Scalar Multiplication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100432</link>
        <id>10.14569/IJACSA.2019.0100432</id>
        <doi>10.14569/IJACSA.2019.0100432</doi>
        <lastModDate>2019-04-30T10:20:59.2500000+00:00</lastModDate>
        
        <creator>Waleed K AbdulRaheem</creator>
        
        <creator>Sharifah Bte Md Yasin</creator>
        
        <creator>Nur Izura Binti Udzir</creator>
        
        <creator>Muhammad Rezal bin Kamel Ariffin</creator>
        
        <subject>Elliptic Curve Cryptosystem (ECC); scalar multiplication algorithm; {0, 1, 3}-NAF method; Non-Adjacent Form (NAF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Although scalar multiplication is highly fundamental to elliptic curve cryptography (ECC), it is the most time-consuming operation. The performance of scalar multiplication depends on the performance of its scalar recoding, which can be measured in terms of the time and memory consumed, as well as its level of security. This paper focuses on the conversion of the binary scalar key representation into the {0, 1, 3}-NAF non-adjacent form. We propose an improved {0, 1, 3}-NAF lookup table and mathematical formula algorithm that improves the performance of the {0, 1, 3}-NAF algorithm. This is achieved by reducing the number of rows from 15 to 6 and by reading two (instead of three) digits to produce one. Furthermore, the improved lookup table reduces the recoding time of the algorithm by over 60%, with a significant reduction in memory consumption even as the key size increases. Specifically, the improved lookup table reduces memory consumption by as much as 75% for large keys, which indicates a higher level of resilience to side-channel attacks.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_32-Improving_the_Performance_of_0_1_3_NAF_Recoding_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ABCVS: An Artificial Bee Colony for Generating Variable T-Way Test Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100431</link>
        <id>10.14569/IJACSA.2019.0100431</id>
        <doi>10.14569/IJACSA.2019.0100431</doi>
        <lastModDate>2019-04-30T10:20:59.2170000+00:00</lastModDate>
        
        <creator>Ammar K Alazzawi</creator>
        
        <creator>Helmi Md Rais</creator>
        
        <creator>Shuib Basri</creator>
        
        <subject>T-way testing; variable-strength interaction; combinatorial testing; covering array; test suite generation; artificial bee colony algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>To achieve acceptable quality and performance in any software product, it is crucial to assess the various software components of the application. Various software-testing techniques exist, such as combinatorial testing and covering arrays. However, problems such as the t-way combinatorial explosion remain challenging in any combinatorial testing strategy, as it must consider the entire set of combinations of input variables. To overcome this problem, several optimization and metaheuristic strategies have been suggested. One of the most effective optimization-algorithm-based techniques is the Artificial Bee Colony (ABC) algorithm. This paper presents a t-way generation strategy for both uniform- and variable-strength test suites by applying the ABC strategy (ABCVS) to reduce the size of the test suite and subsequently enhance test suite generation. To assess both the effectiveness and performance of the presented ABCVS, several experiments were conducted using various sets of benchmarks. The results revealed that the proposed ABCVS outperforms existing strategies and supports wider interaction between components than AI-search-based and computational-based strategies. The results also revealed the strong prospects of ABCVS in terms of effectiveness and performance, as observed in the majority of the case studies.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_31-ABCVS_An_Artificial_Bee_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Images Steganography Approach Supporting Chaotic Map Technique for the Security of Online Transfer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100430</link>
        <id>10.14569/IJACSA.2019.0100430</id>
        <doi>10.14569/IJACSA.2019.0100430</doi>
        <lastModDate>2019-04-30T10:20:59.2030000+00:00</lastModDate>
        
        <creator>Yasser Mohammad Al-Sharo</creator>
        
        <subject>Security; steganography; chaotic map; encryption; network data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>One of the most important issues in online data transfer is security. Data transferred online may be accessed illegally by attacking the communication gateway between servers and users. The main aim of this study is to enhance the security level of online data transfer using two integrated methods: image steganography to hide the transferred data in image media, and a chaotic map to remap the original format of the transferred data. The integration of these two methods is effective for securing data in several formats, such as text, audio, and images. The proposed algorithm was prototyped in Java, and 20 images and text messages of usable sizes (plain data) were tested on the dataset using the developed program. A simulation using a local server was carried out to analyze the security performance based on two factors: the plain data size and the data transfer distance. Many attacking attempts were performed on the simulation test using known attacking techniques, such as observing the quality of the stego images. The experimental results show that about 85% of the attacking attempts failed to detect the stego images, and 95% of the attacks failed to remap meaningful parts of the chaotic data. The results indicate a very good level of security for the proposed methods in protecting online data transfer. The contribution of this study is the effective integration of the steganography and chaotic map approaches to assure a high security level for online data transfer.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_30-Images_Steganography_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Correlating Image Threshold Values with Image Gradient Field on Damage Detection in Composite Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100429</link>
        <id>10.14569/IJACSA.2019.0100429</id>
        <doi>10.14569/IJACSA.2019.0100429</doi>
        <lastModDate>2019-04-30T10:20:59.1870000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Gradient norm; edge detection; gray level mapping; segmentation; threshold; histogram; image processing; composites; impact damage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The effect of image threshold level variation is studied and shown to be a critical factor in the damage detection and characterization of impacted composite Reaction Injection Molding (RIM) structures. The threshold variation is used as an input to both a gradient field algorithm and a segmentation algorithm. The optimum threshold for a tested composite type is chosen by correlating the resulting gradient field images with the segmented images. The type and extent of damage are also analyzed using the detailed pixel distribution as a function of both impact energy and threshold level variation. The demonstrated cascading-based technique is shown to be promising for the accurate testing and classification of damage in composite structures in many critical areas, such as the medical, aerospace, and automotive fields.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_29-Effect_of_Correlating_ImageThreshold_Values.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced e-Learning Experience using Case based Reasoning Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100428</link>
        <id>10.14569/IJACSA.2019.0100428</id>
        <doi>10.14569/IJACSA.2019.0100428</doi>
        <lastModDate>2019-04-30T10:20:59.1700000+00:00</lastModDate>
        
        <creator>Swati Shekapure</creator>
        
        <creator>Dipti D. Patil</creator>
        
        <subject>K-nearest neighbour method; eLearning; learning objects; learning style; case based reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In recent years, improvements in innovation have brought new capabilities for verifying data that will prompt essential changes in eLearning. Users can view e-learning material based on the references given to them and select the best approach to view the resources. The proposed system addresses the retrieval, reuse, revise, and retain phases of case-based reasoning (CBR). For building personalized e-Learning, this work identifies different feature sets such as learning style, learning object, knowledge level, and problem list. The model is constructed using case-based reasoning along with the k-nearest neighbour method, whose role is to identify the ideal k factor for an accurate retrieval process. New cases are further added based on the simulation of new user history, limited to a certain threshold value. This model acquires a dynamically incremental dataset for classification. Furthermore, time and accuracy comparisons on the dataset are carried out using the k-nearest neighbour, decision tree, and support vector machine classifiers. Ultimately, eLearning saves time, upgrades the learning experience, and supports scholarly achievement.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_28-Enhanced_eLearning_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Cow Disease Diagnosis Expert System using Bayesian Network and Dempster-Shafer Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100427</link>
        <id>10.14569/IJACSA.2019.0100427</id>
        <doi>10.14569/IJACSA.2019.0100427</doi>
        <lastModDate>2019-04-30T10:20:59.1570000+00:00</lastModDate>
        
        <creator>Aristoteles Aristoteles</creator>
        
        <creator>Kusuma Adhianto</creator>
        
        <creator>Rico Andrian</creator>
        
        <creator>Yeni Nuhricha Sari</creator>
        
        <subject>Expert system; Bayesian network; Dempster-Shafer; cow disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Livestock is a source of animal protein containing essential acids that improve human intelligence and health. A popular form of livestock in Indonesia is the cow. Meat consumption increases by 0.1 kg per capita per year. The high demand for beef in Indonesia is due to the Indonesian population increasing by 1.49% per year. More than 90% of cows are reared by rural communities with little knowledge about livestock and low economic capabilities. In addition, the number of experts and veterinarians is also limited. One solution for disseminating the knowledge of experts and veterinarians is to use an expert system. Among the methods that can be used in expert systems are the Bayesian network and Dempster-Shafer methods. The purpose of this research is to compare cow disease diagnosis using the Bayesian network and the Dempster-Shafer method, in order to determine which method is better at diagnosing cow disease. The data used comprise 21 cow diseases with 77 symptoms. Each method is tested with the same 10 cases. Both methods give the same diagnosis results but with different percentages: the mean diagnosis percentage of the Dempster-Shafer method is 87.2%, while that of the Bayesian network method is 75.3%. Thus, it can be said that the Dempster-Shafer method is better at diagnosing cow disease.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_27-Comparative_Analysis_of_Cow_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Wavelet Families for the Classification of Finger Motions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100426</link>
        <id>10.14569/IJACSA.2019.0100426</id>
        <doi>10.14569/IJACSA.2019.0100426</doi>
        <lastModDate>2019-04-30T10:20:59.1230000+00:00</lastModDate>
        
        <creator>Jingwei Too</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Norhashimah Mohd Saad</creator>
        
        <subject>Mother wavelet; discrete wavelet transform; wavelet packet transform; electromyography; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Wavelet transform (WT) has been widely used in biomedical, rehabilitation and engineering applications. Due to the nature of WT, its performance depends largely on the selection of the mother wavelet function. A proper mother wavelet ensures optimum performance; however, the selection of the mother wavelet is mostly empirical and varies according to the dataset. Hence, this paper aims to investigate the best mother wavelet for the discrete wavelet transform (DWT) and the wavelet packet transform (WPT) in the classification of different finger motions. In this study, twelve mother wavelets are evaluated for both DWT and WPT. The electromyography (EMG) data of 12 finger motions are acquired from an online database. Four useful features are extracted from each recorded EMG signal via DWT and WPT transformation. Afterward, a support vector machine (SVM) and linear discriminant analysis (LDA) are employed for performance evaluation. Our experimental results demonstrate Bior3.3 to be the most suitable mother wavelet for DWT, while WPT with Bior2.2 outperforms the other mother wavelets in the classification of finger motions. The results suggest that the Biorthogonal families are more suitable for accurate EMG signal classification.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_26-A_Comparative_Analysis_of_Wavelet_Families.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hospital Readmission Prediction using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100425</link>
        <id>10.14569/IJACSA.2019.0100425</id>
        <doi>10.14569/IJACSA.2019.0100425</doi>
        <lastModDate>2019-04-30T10:20:59.0930000+00:00</lastModDate>
        
        <creator>Samah Alajmani</creator>
        
        <creator>Hanan Elazhary</creator>
        
        <subject>Decision tree; hospital readmission; logistic regression; machine learning; multi-layer perceptron; Na&#239;ve Bayesian classifier; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>One of the most critical problems in healthcare is predicting the likelihood of hospital readmission for chronic diseases such as diabetes, in order to allocate the necessary resources, such as beds, rooms, specialists, and medical staff, for an acceptable quality of service. Unfortunately, relatively few research studies in the literature have attempted to tackle this problem; the majority are concerned with predicting the likelihood of the diseases themselves. Numerous machine learning techniques are suitable for prediction. Nevertheless, there is also a shortage of adequate comparative studies that identify the most suitable techniques for the prediction process. Toward this goal, this paper presents a comparative study of five common techniques in the literature for predicting the likelihood of hospital readmission for diabetic patients: logistic regression (LR) analysis, multi-layer perceptron (MLP), Na&#239;ve Bayesian (NB) classifier, decision tree, and support vector machine (SVM). The comparative study is based on realistic data gathered from a number of hospitals in the United States. It revealed that the SVM showed the best performance, while the NB classifier and LR analysis were the worst.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_25-Hospital_Readmission_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network based for Automatic Text Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100424</link>
        <id>10.14569/IJACSA.2019.0100424</id>
        <doi>10.14569/IJACSA.2019.0100424</doi>
        <lastModDate>2019-04-30T10:20:59.0770000+00:00</lastModDate>
        
        <creator>Wajdi Homaid Alquliti</creator>
        
        <creator>Norjihan Binti Abdul Ghani</creator>
        
        <subject>Automatic text summarization; extracts summarization; information retrieval; deep learning; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In recent times, applications for natural language processing have been developed using intelligent and soft computing methods that allow computer systems to practically mimic human text-processing tasks such as plagiarism detection, pattern determination, and machine translation. Text summarization is the procedure of condensing writing into a consolidated structure; automatic text summarization (ATS) is the creation of such a summary by a computer system. In this study, the researchers introduce a novel ATS system, CNN-ATS, a convolutional neural network that performs automatic text summarization using a text matrix representation. CNN-ATS is a deep learning system used to evaluate the improvements resulting from increased depth, determine the better CNN configurations, assess the sentences, and identify the most informative ones. Sentences deemed important are extracted for document summarization. The researchers investigated this novel convolutional network depth to determine its accuracy in selecting informative sentences from each input text document. The experimental findings of the proposed method are based on a convolutional neural network with 26 different configurations and demonstrate that the resulting summaries have the potential to be better than other summaries. DUC 2002 served as the data source, with news articles used as input, and a new matrix representation was utilized for every sentence. The system summaries were examined using the ROUGE toolkit at 95% confidence intervals, with results extracted using the average recall, F-measure and precision from ROUGE-1, 2, and L.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_24-Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Congestion Control Techniques in WSNs: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100423</link>
        <id>10.14569/IJACSA.2019.0100423</id>
        <doi>10.14569/IJACSA.2019.0100423</doi>
        <lastModDate>2019-04-30T10:20:59.0470000+00:00</lastModDate>
        
        <creator>Babar Nawaz</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Jahangir Khan</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <subject>WSNs; congestion control; congestion preventing; reliable data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Congestion control is of great importance in wireless sensor networks (WSNs), where efficient application of congestion control mechanisms can prolong the network lifetime. Thus, proper examination is needed to find more refined ways to address congestion occurrence and resolution. When designing congestion control techniques, maximum output can be achieved through efficient utilization of the required resources within the WSN. Over the last few years, several approaches have been introduced, consisting of routing protocols that provide support for congestion control, congestion prevention, and reliable data routing. In older schemes, topology resets and extensive traffic drops take place because the sink node executes the congestion avoidance. Therefore, node-level congestion avoidance, detection, prevention, and resolution mechanisms have been proposed in recent years. Our paper provides a brief overview and performance comparison of centralized and distributed congestion control algorithms in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_23-Congestion_Control_Techniques_in_WSNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instrument Development for Measuring the Acceptance of UC&amp;C: A Content Validity Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100422</link>
        <id>10.14569/IJACSA.2019.0100422</id>
        <doi>10.14569/IJACSA.2019.0100422</doi>
        <lastModDate>2019-04-30T10:20:59.0300000+00:00</lastModDate>
        
        <creator>Emy Salfarina Alias</creator>
        
        <creator>Muriati Mukhtar</creator>
        
        <creator>Ruzzakiah Jenal</creator>
        
        <subject>Acceptance model; content validity; diffusion of innovation; service science; unified communications and collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Studies on the acceptance of Unified Communications and Collaboration (UC&amp;C) tools such as instant messaging and video conferencing have been around for some time. Adoption and acceptance of UC&amp;C tools and services have boosted productivity and improved communications by integrating voice, video and data into one platform. UC&amp;C also allows collaboration by enabling users to interact with each other through different media. However, their acceptance rate by individuals in developing nations has been low. It is hypothesized that the factors that contribute to acceptance derive from two underlying theories: diffusion of innovation and service-dominant logic. Items are constructed based on eight constructs: relative advantage, compatibility, ease of use, trialability, observability, improved service, value co-creation capacity, and coordination efficiency. In order to validate the items, content validity ratios are calculated on a set of questionnaires; the ratios determine which items should be included in or removed from the questionnaire. The paper concludes with a discussion of the implications of the findings from the experts&#8217; evaluation and from the content validity ratios. The new items are used in designing the survey instrument to measure the acceptance of UC&amp;C.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_22-Instrument_Development_for_Measuring_the_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliability and Connectivity Analysis of Vehicular Ad Hoc Networks for a Highway Tunnel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100421</link>
        <id>10.14569/IJACSA.2019.0100421</id>
        <doi>10.14569/IJACSA.2019.0100421</doi>
        <lastModDate>2019-04-30T10:20:59.0130000+00:00</lastModDate>
        
        <creator>Saajid Hussain</creator>
        
        <creator>Di Wu</creator>
        
        <creator>Wang Xin</creator>
        
        <creator>Sheeba Memon</creator>
        
        <creator>Naadiya Khuda Bux</creator>
        
        <creator>Arshad Saleem</creator>
        
        <subject>Minimal safe headway; reliability; highway tunnel; vehicular ad hoc networks; connectivity probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Vehicular ad-hoc networks (VANETs) use the mobile internet to facilitate communication between vehicles, with the goal of ensuring road safety and achieving secure communication. Thus the reliability of this type of network is of paramount significance. Safety-related messages are disseminated in VANETs over the wireless medium through vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications; hence, network reliability is an essential requirement. This paper considers the effect of the vehicle transmission range R_tran and vehicle density &#961; on the connectivity probability. In addition, a reliability model is specified that takes into account the minimal safe headway S_h among nearby vehicles in a highway tunnel. The reason is that the Global Positioning System (GPS), a component of the onboard unit (OBU), needs a clear line of sight for proper service, and inside a tunnel signal interference prevents the GPS from working properly. Moreover, even in a fully connected network there is a risk between vehicles that are close to each other, since accidents and collisions can happen at any time, so the network is not inherently safe. Hence, maintaining the minimal safe headway distance in the tunnel is interesting and useful for VANETs. The obtained results show that a small difference in the minimal safe headway in the tunnel can cause a serious change in the reliability of the entire network, suggesting that the safe headway S_h cannot be ignored when designing network reliability models.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_21-Reliability_and_Connectivity_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voltage Variation Signals Source Identification and Diagnosis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100420</link>
        <id>10.14569/IJACSA.2019.0100420</id>
        <doi>10.14569/IJACSA.2019.0100420</doi>
        <lastModDate>2019-04-30T10:20:58.9830000+00:00</lastModDate>
        
        <creator>Weihown Tee</creator>
        
        <creator>Mohd Rahimi Yusoff</creator>
        
        <creator>Abdul Rahim Abdullah</creator>
        
        <creator>Muhamad Faizal Yaakub</creator>
        
        <subject>Power quality; voltage variation; spectrogram; source identification; average time frequency representation phase power</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Power quality (PQ) problems have become an important issue, generating a negative impact on users. It is important to detect and identify the source of a PQ problem. This paper presents a source identification and diagnosis method for voltage variation signals based on determining the average time-frequency representation (TFR) phase power of the impedance. The signals considered in this study are voltage variation signals, which include voltage sag, swell and interruption. The voltage variation signals from different source locations (upstream, downstream, and both up- and downstream) are generated according to IEEE Standard 1159 using mathematical models. The signals are first analyzed using the spectrogram, which acts as the feature-producing tool. Then, the average power of the phase-domain TFR of each signal is calculated and tabulated. Finally, the performance of the method is evaluated using a support vector machine (SVM) and k-nearest neighbor (kNN). The results show that this method is an effective and suitable technique for identifying the source of voltage variation.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_20-Voltage_Variation_Signals_Source_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image(s) Watermarking and its Optimization using Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100419</link>
        <id>10.14569/IJACSA.2019.0100419</id>
        <doi>10.14569/IJACSA.2019.0100419</doi>
        <lastModDate>2019-04-30T10:20:58.9670000+00:00</lastModDate>
        
        <creator>Rafi Ullah Habib</creator>
        
        <creator>Hani Ali Alquhayz</creator>
        
        <subject>Capacity; imperceptibility; genetic programming; image sequence; watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In this paper, a medical image watermarking technique is proposed in which intelligence is incorporated into the encoding and decoding structure. The motion vectors of the medical image sequence are used for embedding the watermark. Instead of manual selection of the candidate motion vectors, a generalized approach is used to select the most suitable motion vectors for embedding the watermark. A genetic programming (GP) module is employed to develop a function in accordance with imperceptibility and watermarking capacity. Incorporating intelligence in the system improves its imperceptibility, capacity, and resistance to the different attacks that can occur during communication and storage. The motion vectors are generated by applying a block-based motion estimation algorithm; in this work, the Full-Search method is used for its better performance compared to the other methods. Experimental results show marked improvement in capacity and visual similarity compared to the conventional approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_19-Medical_Images_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Recurrent Neural Network and a Discrete Wavelet Transform to Predict the Saudi Stock Price Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100418</link>
        <id>10.14569/IJACSA.2019.0100418</id>
        <doi>10.14569/IJACSA.2019.0100418</doi>
        <lastModDate>2019-04-30T10:20:58.9530000+00:00</lastModDate>
        
        <creator>Mutasem Jarrah</creator>
        
        <creator>Naomie Salim</creator>
        
        <subject>Recurrent Neural Network (RNN); Discrete Wavelet Transform (DWT); deep learning; prediction; stock market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Stock markets can be characterised as complex, dynamic and chaotic environments, making the prediction of stock prices very difficult. In this research work, we attempt to predict Saudi stock price trends from earlier price history by combining a discrete wavelet transform (DWT) and a recurrent neural network (RNN). The DWT technique helped remove the noise in the data gathered from the Saudi stock market for a few chosen sample companies. A designed RNN was then trained via the Back Propagation Through Time (BPTT) method to predict the Saudi market&#8217;s closing stock prices for the next seven days for the chosen sample of companies. The obtained results were then analysed and compared with those of traditional prediction algorithms such as the auto-regressive integrated moving average (ARIMA). Based on the comparison, the proposed method (DWT+RNN) predicted the day&#8217;s closing price more accurately than the ARIMA method under the mean squared error (MSE), mean absolute error (MAE) and root mean squared error (RMSE) criteria.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_18-A_Recurrent_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended Fuzzy Analytical Hierarchy Process Approach in Determinants of Employees’ Competencies in the Fourth Industrial Revolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100417</link>
        <id>10.14569/IJACSA.2019.0100417</id>
        <doi>10.14569/IJACSA.2019.0100417</doi>
        <lastModDate>2019-04-30T10:20:58.7970000+00:00</lastModDate>
        
        <creator>Phuc Van Nguyen</creator>
        
        <creator>Phong Thanh Nguyen</creator>
        
        <creator>Quyen Le Hoang Thuy To Nguyen</creator>
        
        <creator>Vy Dang Bich Huynh</creator>
        
        <subject>Employees’ competency; fuzzy logic; Extended Fuzzy Analytical Hierarchy Process  (EFAHP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This paper explored education factors and ranked their impact on the development of employees&#8217; competencies in Vietnam. Factors contributing to employees&#8217; competencies in the Vietnamese context are proposed based on the literature review, justified by experts&#8217; interviews. Then, the extended fuzzy analytical hierarchy process (EFAHP) approach was used to prioritize the importance of the factors affecting employees&#8217; competency. The research findings confirmed the decisive role of teachers, with the greatest weight of impact on employees&#8217; competency, though there was a shift of the teacher&#8217;s role to that of facilitator in Fourth Industrial Revolution education.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_17-Extended_Fuzzy_Analytical_Hierarchy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protection of Ultrasound Image Sequence: Employing Motion Vector Reversible Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100416</link>
        <id>10.14569/IJACSA.2019.0100416</id>
        <doi>10.14569/IJACSA.2019.0100416</doi>
        <lastModDate>2019-04-30T10:20:58.7800000+00:00</lastModDate>
        
        <creator>Rafi Ullah Habib</creator>
        
        <creator>Fayez Al-Fayez</creator>
        
        <subject>Reversible watermarking; ultrasound sequence; full-search; motion vectors; side information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In healthcare information systems, medical data is very important for diagnosis. Most health institutions store their patients&#8217; data on third-party servers, so its security is very important, especially since the advent of advanced multimedia and communication technology, whereby digital contents can be manipulated, copied, and duplicated without leaving any trace. In this paper, a reversible watermarking technique is applied to patients&#8217; data (an ultrasound image sequence). Traditional watermarking schemes can introduce permanent distortions that are not acceptable in medical applications; thus, a reversible watermarking technique is used, which can not only secure the ultrasound image sequence but also restore the original sequence. For watermark embedding, the magnitudes and phase angles of the motion vectors of the image sequence are used, obtained using a Full-Search block-based motion estimation algorithm. Before applying the motion estimation algorithm and embedding the watermark, histogram pre-processing is performed to avoid underflow/overflow. The experimental results show that, unlike other state-of-the-art watermarking schemes reported in recent decades, the proposed algorithm is simple and provides a much larger embedding capacity and better quality of the watermarked image sequence.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_16-Protection_of_Ultrasound_Image_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Compression of Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100415</link>
        <id>10.14569/IJACSA.2019.0100415</id>
        <doi>10.14569/IJACSA.2019.0100415</doi>
        <lastModDate>2019-04-30T10:20:58.7630000+00:00</lastModDate>
        
        <creator>Rafi Ullah Habib</creator>
        
        <subject>Medical images; wavelet transform; JPEG2000; genetic programming; compression; quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In today&#8217;s healthcare system, medical images play a vital role in diagnosis. The challenge facing hospital management systems (HMS) is to store and communicate the large volume of medical images generated by various imaging modalities. Efficient compression of medical images is required to reduce the bit rate, increase the storage capacity and speed up transmission without affecting image quality. Over the past few decades, several compression standards have been proposed. In this paper, an intelligent JPEG2000 compression scheme is presented to compress medical images efficiently. Unlike traditional compression techniques, genetic programming (GP)-based quantization matrices are used to quantize the wavelet coefficients of the input image. Experimental results validate the usefulness of the proposed intelligent compression scheme.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_15-Optimal_Compression_of_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compound Mapping and Filter Algorithm for Hybrid SSD Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100414</link>
        <id>10.14569/IJACSA.2019.0100414</id>
        <doi>10.14569/IJACSA.2019.0100414</doi>
        <lastModDate>2019-04-30T10:20:58.7500000+00:00</lastModDate>
        
        <creator>Jin Young Kim</creator>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>PRAM; hybrid architecture; QLC NAND flash memory; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With the recent development of byte-addressable non-volatile random access memory (RAM), various methods utilizing quad-level cell (QLC) NAND flash memory with non-volatile RAM have been proposed. However, tests have shown that these hybrid structures lead to a reduction in the performance of a hybrid solid state disk (SSD) owing to issues regarding space efficiency. This study proposes a compound address method and filter algorithm suitable for the next generation of NAND flash, called hybrid storage media, where QLCs and phase-change memory (PCM) are used together. The filter-mapping algorithm includes a management method that stores data in phase-change memory or flash memory according to the next command, which is accessed when a write command that is half a page or less in length is received from the file system. Tests have shown that the compound mapping and filter algorithm reduces the wasted pages by more than half and significantly decreases the number of merge operations. This leads to a decrease in the number of delete operations and improves the overall processing speed of the hardware.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_14-Compound_Mapping_and_Filter_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secured Multi-Hop Clustering Protocol for Location-based Routing in VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100413</link>
        <id>10.14569/IJACSA.2019.0100413</id>
        <doi>10.14569/IJACSA.2019.0100413</doi>
        <lastModDate>2019-04-30T10:20:58.7170000+00:00</lastModDate>
        
        <creator>K Sushma Eunice</creator>
        
        <creator>I. Juvanna</creator>
        
        <subject>VANET; tamper-proof device; location-based routing protocols; intelligent transportation system; GPSR algorithm; trusted authority; roadside unit; routing protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In today&#8217;s world, with the rising number of vehicles and the lack of proper navigation, congestion has become a major problem. In this scenario, VANETs play a very important part in improving traffic conditions and providing proper navigation. An improved navigation system reduces congestion, thereby reducing the possibility of accidents. In this research work, we use a position-based routing protocol, GPSR (Greedy Perimeter Stateless Routing), to effectively analyze the geographical positions of the vehicles in the network and to provide updated navigation information. The system uses a security mechanism to identify valid and invalid messages for secure V2V and V2I communications; it drops all invalid messages, keeping the VANET secure and reducing the possibility of attacks on its wireless communications. NS2 simulations show that this system achieves better safety features and network performance than other hybrid schemes.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_13-Secured_Multi_Hop_Clustering_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A GRASP-based Solution Construction Approach for the Multi-Vehicle Profitable Pickup and Delivery Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100412</link>
        <id>10.14569/IJACSA.2019.0100412</id>
        <doi>10.14569/IJACSA.2019.0100412</doi>
        <lastModDate>2019-04-30T10:20:58.7030000+00:00</lastModDate>
        
        <creator>Abeer I Alhujaylan</creator>
        
        <creator>Manar I. Hosny</creator>
        
        <subject>Selective pickup and delivery problem; multi-vehicle profitable pickup and delivery problem; greedy randomized adaptive search procedure; metaheuristic algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With the advancement of e-commerce and Internet shopping, the high competition between carriers has made many companies rethink their service mechanisms to customers, in order to ensure that they stay competitive in the market. Therefore, companies with limited resources focus on serving only customers who provide high profits at the lowest possible cost. The Multi-Vehicle Profitable Pickup and Delivery Problem (MVPPDP) is a vehicle routing problem and a variant of the Selective Pickup and Delivery Problem (SPDP) that is used to plan the services of these types of companies. The MVPPDP aims to serve only the profitable customers, where products are transferred from a selection of pickup customers to the corresponding delivery customers, within a given travel time limit. In this paper, we utilize the construction phase of the well-known Greedy Randomized Adaptive Search Procedure (GRASP) to build initial solutions for the MVPPDP. The performance of the proposed method is compared with two greedy construction heuristics that were previously used in the literature to build initial solutions for the MVPPDP. The results proved the effectiveness of the proposed method, with eight new initial solutions obtained for the problem. Our approach is especially beneficial for building a population of solutions that combines both diversity and quality, which can help to obtain good solutions in the improvement phase of the problem.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_12-A_GRASP_based_Solution_Construction_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Artefacts Consistency Management towards Continuous Integration: A Roadmap</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100411</link>
        <id>10.14569/IJACSA.2019.0100411</id>
        <doi>10.14569/IJACSA.2019.0100411</doi>
        <lastModDate>2019-04-30T10:20:58.6700000+00:00</lastModDate>
        
        <creator>D A Meedeniya</creator>
        
        <creator>I. D. Rubasinghe</creator>
        
        <creator>I. Perera</creator>
        
        <subject>Consistency management; traceability; continuous integration;  DevOps; comparative study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Software development in DevOps practices has become popular with the collaborative intersection between development and operations teams. The notion of DevOps practices drives software artefact changes towards the continuous integration and continuous delivery pipeline. Subsequently, traceability management is essential to handle frequent changes with rapid software evolution. This study explores the process and approaches to manage traceability, ensuring artefact consistency towards CICD in DevOps practice. We address the key notions in the traceability management process, including artefact change detection, change impact analysis, consistency management, change propagation and visualization. Consequently, we assess the applicability of existing change impact analysis models in DevOps practice. This study identifies the conceptualization of the traceability management process, explores state-of-the-art solutions and suggests possible research directions. This study shows that there is a lack of support for heterogeneous artefact consistency management with well-defined techniques. Most of the related models have limited industry-level applicability in DevOps practice. Accordingly, there is inadequate tool support to manage traceability between heterogeneous artefacts. This study identifies the challenges in managing software artefact consistency and suggests possible research directions that can be applied to manage traceability in the process of software development in DevOps practice.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_11-Software_Artefacts_Consistency_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industrial Financial Forecasting using Long Short-Term Memory Recurrent Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100410</link>
        <id>10.14569/IJACSA.2019.0100410</id>
        <doi>10.14569/IJACSA.2019.0100410</doi>
        <lastModDate>2019-04-30T10:20:58.6400000+00:00</lastModDate>
        
        <creator>Muhammad Mohsin Ali</creator>
        
        <creator>Muhammad Imran Babar</creator>
        
        <creator>Muhammad Hamza</creator>
        
        <creator>Muhammad Jehanzeb</creator>
        
        <creator>Saad Habib</creator>
        
        <creator>Muhammad Sajid Khan</creator>
        
        <subject>Financial forecasting; prediction; long-short term memory; recurrent neural networks; artificial neural networks; IBM SPSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>This research deals with industrial financial forecasting in order to calculate the yearly expenditure of an organization. Forecasting helps in the estimation of future trends and provides valuable information for making industrial decisions. With growing economies, the financial world spends billions in terms of expenses. These expenditures are also defined as budgets or operational resources for a functional workplace. These expenses fluctuate rather than follow linear or constant growth, and this information, if extracted, can reshape the future in terms of effective spending of finances and give insight for future budgeting reforms. It is a challenge to grasp the changing trends with effective accuracy, and for this purpose machine learning approaches can be utilized. In this study, Long Short-Term Memory (LSTM), which is a variant of the Recurrent Neural Network (RNN) from the family of Artificial Neural Networks (ANN), is used for forecasting purposes, along with the statistical tool IBM SPSS for comparative analysis. The experiments are performed on the dataset of Pakistan GDP by type of expenditure at current prices - national currency (1970-2016), produced by the Economic Statistics Branch of the United Nations Statistics Division (UNSD). Results of this study demonstrate that the proposed model predicted the expenses with better accuracy than the classical statistical tools.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_10-Industrial_Financial_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Text Classification using Feature-Reduction Techniques for Detecting Violence on Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100409</link>
        <id>10.14569/IJACSA.2019.0100409</id>
        <doi>10.14569/IJACSA.2019.0100409</doi>
        <lastModDate>2019-04-30T10:20:58.6230000+00:00</lastModDate>
        
        <creator>Hissah ALSaif</creator>
        
        <creator>Taghreed Alotaibi</creator>
        
        <subject>Violence; text mining; classification; feature-reduction techniques; Arabic; Twitter posts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>With the current increase in the number of online users, there has been a concomitant increase in the amount of data shared online. Techniques for discovering knowledge from these data can provide us with valuable information when it comes to detecting different problems, including violence. Violence is one of the significant problems humanity has faced in recent years all over the world, and this is especially a problem in Arabic countries. To address this issue, this research focuses on detecting violence-related tweets to help in solving this problem. Text mining is an important technique that can be used to find and predict information from text. In this study, a text classification model is built for detecting violence in Arabic dialects on Twitter using different feature-reduction approaches. The experiment comprises bagging, K-nearest neighbors (KNN), and Bayesian boosting using different extraction features, namely, root-based stemming, light stemming, and n-grams. In addition, the study used the following feature-reduction techniques: support vector machine (SVM), Chi-squared (CHI), the Gini index, correlation, rules, information gain (IG), deviation, symmetrical uncertainty, and the IG ratio. The experiment showed that the bagging with tri-gram approach has the highest accuracy at 86.61%, and a combination of IG with SVM among the reduction techniques registers an accuracy of 90.59%.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_9-Arabic_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Image Haze Removal Algorithm based on New Accurate Depth and Light Estimation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100408</link>
        <id>10.14569/IJACSA.2019.0100408</id>
        <doi>10.14569/IJACSA.2019.0100408</doi>
        <lastModDate>2019-04-30T10:20:58.6100000+00:00</lastModDate>
        
        <creator>Samia Haouassi</creator>
        
        <creator>Wu Di</creator>
        
        <creator>Meryem Hamidaoui</creator>
        
        <creator>Tobji Rachida</creator>
        
        <subject>Image dehazing; Image Formation Model (IFM); depth map; transmission map; atmospheric light; image blur; image energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Single image dehazing has become a challenging task for a variety of image processing and computer vision applications. Many attempts have been made to recover faded colors and improve image contrast. Such methods, however, do not achieve maximum restoration, as images are often subject to color distortion. This paper proposes an efficient single image dehazing algorithm that offers satisfactory scene radiance restoration. The proposed method rests on the estimation of two key indices, image blur and atmospheric light, which can be employed in the Image Formation Model (IFM) to recover the scene radiance of the hazy image. More precisely, we propose an efficient depth estimation method using image blur. Most existing algorithms treat atmospheric light as a constant, which often leads to inaccurate estimations; we propose a new algorithm, “A-Estimate”, based on blur and energy to estimate the atmospheric light accurately. An adaptive transmission map has also been proposed. Experimental results on real and synthesized hazy images demonstrate an improved performance of the proposed method when compared to existing state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_8-An_Efficient_Image_Haze_Removal_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Injection and Test Approach for Behavioural Verilog Designs using the Proposed RASP-FIT Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100407</link>
        <id>10.14569/IJACSA.2019.0100407</id>
        <doi>10.14569/IJACSA.2019.0100407</doi>
        <lastModDate>2019-04-30T10:20:58.5770000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <creator>Ali Hayek</creator>
        
        <creator>Josef B&#246;rcs&#246;k</creator>
        
        <subject>Behavioural designs; code parsing; fault injection; test approach; Verilog HDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Soft-core processors and complex Field Programmable Gate Array (FPGA) designs are described in an algorithmic manner, i.e. at the behavioural abstraction level, in Hardware Description Languages (HDL). Lower abstraction levels add complexity and delays to the design cycle as well as to the fault injection approach. Therefore, fault simulation/emulation techniques are needed to develop an approach for testing designs and to evaluate the dependability of FPGA designs at this abstraction level. Broadly, fault injection techniques for FPGA-based designs at the HDL code level are categorised into emulation-based and simulation-based techniques. This work is an extension of our previous methodologies developed for FPGA designs written at the data-flow and gate abstraction levels under the proposed RASP-FIT tool. These methodologies include fault injection by code parsing of the SUT, a test approach for finding test vectors using dynamic and static compaction techniques, fault coverage, and the compaction ratio directly at the code level of the design. In this paper, we describe the proposed approaches briefly and present in detail the enhancement of a Verilog code modifier for behavioural designs.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_7-Fault_Injection_and_Test_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Modification of Activation Function using the Backpropagation Algorithm in the Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100406</link>
        <id>10.14569/IJACSA.2019.0100406</id>
        <doi>10.14569/IJACSA.2019.0100406</doi>
        <lastModDate>2019-04-30T10:20:58.5630000+00:00</lastModDate>
        
        <creator>Marina Adriana Mercioni</creator>
        
        <creator>Alexandru Tiron</creator>
        
        <creator>Stefan Holban</creator>
        
        <subject>Artificial neural networks; activation function; sigmoid function; WEKA; multilayer perceptron; instance; classifier; gradient; rate metric; performance; dynamic modification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>The paper proposes the dynamic modification of the activation function in a learning technique, more exactly the backpropagation algorithm. The modification consists in changing the slope of the sigmoid activation function according to the increase or decrease of the error in a learning epoch. The study was carried out using the Waikato Environment for Knowledge Analysis (WEKA) platform, adding this feature to the Multilayer Perceptron class. In this study, the dynamic modification of the activation function is driven by the relative gradient error; neural networks with hidden layers were not used for it.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_6-Dynamic_Modification_of_Activation_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100404</link>
        <id>10.14569/IJACSA.2019.0100404</id>
        <doi>10.14569/IJACSA.2019.0100404</doi>
        <lastModDate>2019-04-30T10:20:58.5300000+00:00</lastModDate>
        
        <creator>Abdullah Sheikh</creator>
        
        <creator>Malcolm Munro</creator>
        
        <creator>David Budgen</creator>
        
        <subject>Cloud computing; security; resource scheduling; systematic literature review; SLR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>Resource scheduling in cloud computing is a complex task due to the number and variety of resources available and the volatility of resource usage patterns, given that the resource setting rests with the service provider. This is compounded further when security issues are also factored in. This paper provides a Systematic Literature Review (SLR) that helps to identify as much prior relevant research as possible in the area of the research topic. In addition, all papers found in the search are classified into groups to establish the current situation and to identify possible existing gaps.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_4-Systematic_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Mechanism for Protecting Seller’s Interest of Cash on Delivery by using Smart Contract in Hyperledger</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100405</link>
        <id>10.14569/IJACSA.2019.0100405</id>
        <doi>10.14569/IJACSA.2019.0100405</doi>
        <lastModDate>2019-04-30T10:20:58.5300000+00:00</lastModDate>
        
        <creator>Ha Xuan Son</creator>
        
        <creator>Minh Hoang Nguyen</creator>
        
        <creator>Nguyen Ngoc Phien</creator>
        
        <creator>Hai Trieu Le</creator>
        
        <creator>Quoc Nghiep Nguyen</creator>
        
        <creator>Van Dai Dinh</creator>
        
        <creator>Phu Thinh Tru</creator>
        
        <creator>Phuc Nguyen</creator>
        
        <subject>Blockchain; fintech; smart contract; customer; seller; shipper; cash on delivery; hyperledger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In emerging economies, with the explosion of e-commerce, payment methods have increasingly enhanced security. However, the Cash-on-Delivery (COD) payment method still prevails in cash-based economies. Although COD allows consumers to be more proactive in making payments, it still appears to be vulnerable due to the involvement of a third party (shipping companies). In this paper, we propose a payment system based on a “smart contract” implemented on top of blockchain technology to minimize risks for the parties. The platform consists of a set of rules that each party must follow, including the specific delivery time and place, the cost of delivery, and the mortgage money, thereby forcing the parties to be responsible for their tasks in order to complete the contract. We also provide a detailed implementation to illustrate the efficiency of our model.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_5-Towards_a_Mechanism_for_Protecting_Sellers_Interest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Learning Tools on the Healthcare Professional Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100403</link>
        <id>10.14569/IJACSA.2019.0100403</id>
        <doi>10.14569/IJACSA.2019.0100403</doi>
        <lastModDate>2019-04-30T10:20:58.5000000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Bladimir Belov</creator>
        
        <creator>Pavel Kolyasnikov</creator>
        
        <creator>Alexander Kosenkov</creator>
        
        <subject>Healthcare professional social networks; advanced digital tools; e-learning; test constructor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>According to many studies, professional social networks are not widespread in the health care environment, especially among doctors. The article is devoted to two advanced digital tools that can improve the image of professional social networks and increase the motivation to use them. The first tool is the inclusion of e-learning, both to increase the level of knowledge and to confirm the qualification skills of professionals. The second tool is the developed test constructor system. The article describes the solution of the Internet resources being developed for mass use in professional health care.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_3-E_learning_Tools_on_the_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adoption of the Internet of Things (IoT) in Agriculture and Smart Farming towards Urban Greening: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100402</link>
        <id>10.14569/IJACSA.2019.0100402</id>
        <doi>10.14569/IJACSA.2019.0100402</doi>
        <lastModDate>2019-04-30T10:20:58.4830000+00:00</lastModDate>
        
        <creator>A. A Raneesha Madushanki</creator>
        
        <creator>Malka N Halgamuge</creator>
        
        <creator>W. A. H. Surangi Wirasagoda</creator>
        
        <creator>Ali Syed</creator>
        
        <subject>Internet of Things; IoT; agricultural; smart farming; business; sensor data; automation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>It is essential to increase the productivity of agricultural and farming processes to improve yields and cost-effectiveness with new technology such as the Internet of Things (IoT). In particular, IoT can make agricultural and farming industry processes more efficient by reducing human intervention through automation. In this study, the aim is to analyze recently developed IoT applications in the agriculture and farming industries to provide an overview of sensor data collection, technologies, and sub-verticals such as water management and crop management. In this review, data is extracted from 60 peer-reviewed scientific publications (2016-2018) with a focus on IoT sub-verticals and sensor data collection for measurements to make accurate decisions. Our results from the reported studies show that water management is the highest sub-vertical (28.08%), followed by crop management (14.60%) and then smart farming (10.11%). From the data collection, livestock management and irrigation management resulted in the same percentage (5.61%). Regarding sensor data collection, the highest results were for the measurement of environmental temperature (24.87%) and environmental humidity (19.79%). There are also other sensor data regarding soil moisture (15.73%) and soil pH (7.61%). Research indicates that of the technologies used in IoT application development, Wi-Fi is the most frequently used (30.27%), followed by mobile technology (21.10%). As per our review of the research, we can conclude that the agricultural sector (76.1%) is researched considerably more than the farming sector (23.8%). This study should be used as a reference for members of the agricultural industry to improve and develop the use of IoT to enhance agricultural production efficiencies. This study also provides recommendations for future research to include IoT systems&#39; scalability, heterogeneity aspects, IoT system architecture, data analysis methods, the size or scale of the observed land or agricultural domain, IoT security and threat solutions/protocols, operational technology, data storage, cloud platforms, and power supplies.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_2-Adoption_of_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Global and Local Characterization of Rock Classification by Gabor and DCT Filters with a Color Texture Descriptor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100401</link>
        <id>10.14569/IJACSA.2019.0100401</id>
        <doi>10.14569/IJACSA.2019.0100401</doi>
        <lastModDate>2019-04-30T10:20:58.4200000+00:00</lastModDate>
        
        <creator>J Wognin Vangah</creator>
        
        <creator>Si&#233; Ouattara</creator>
        
        <creator>Gb&#233;l&#233; Ouattara</creator>
        
        <creator>Alain Clement</creator>
        
        <subject>Rock; classification; G-ALBPCSF; D-ALBPCSF; LBP; gabor; DCT; RGB; HSV; color texture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(4), 2019</description>
        <description>In the automatic classification of colored natural textures, the idea of proposing methods that reflect human perception arouses the enthusiasm of researchers in the field of image processing and computer vision. Therefore, the color space and the methods of analysis of color and texture must be discriminating in order to correspond to human vision. Rock images are a typical example of natural images, and their analysis is of major importance in the rock industry. In this paper, we combine statistical descriptors (Local Binary Pattern (LBP) with a fusion of the Hue Saturation Value (HSV) and Red Green Blue (RGB) color spaces) and frequency descriptors (Gabor filter and Discrete Cosine Transform (DCT)), named respectively Gabor Adjacent Local Binary Pattern Color Space Fusion (G-ALBPCSF) and DCT Adjacent Local Binary Pattern Color Space Fusion (D-ALBPCSF), for the extraction of visual textural and colorimetric features from direct view images of rocks. The textural images from the two approaches, G-ALBPCSF and D-ALBPCSF, are evaluated through similarity metrics such as Chi2 and the intersection of histograms, which we have adapted to color histograms. The results obtained allowed us to highlight the discrimination of the rock classes. The proposed extraction method provides better classification results for various direct view rock texture images. It is then validated by a confusion matrix giving a low classification error rate of 0.8%.</description>
        <description>http://thesai.org/Downloads/Volume10No4/Paper_1-Global_and_Local_Characterization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Talent Model based on Publication Performance using Apriori Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100381</link>
        <id>10.14569/IJACSA.2019.0100381</id>
        <doi>10.14569/IJACSA.2019.0100381</doi>
        <lastModDate>2019-04-01T11:42:40.8500000+00:00</lastModDate>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator> Noraini Ismail</creator>
        
        <creator>Mohd Zakree Ahmad Nazri</creator>
        
        <creator>Hamidah Jantan</creator>
        
        <subject>Human resource management; apriori based association rules; promotion’s guideline</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The main problem or challenge faced by Human Resource Management (HRM) is to recognize, develop and manage talent efficiently and effectively. This is because HRM is responsible for selecting the correct talent for suitable positions at the right time, aligned with their existing qualifications, talents and achievements. Furthermore, the decision in identifying talent for a position must be fair, truthful and appropriate. In the academic field, publication is a core component in the evaluation of academic talent, affected by research, supervision and conferences. Therefore, this study proposes an academic talent model based on the publication factor, using the Apriori technique, for the purpose of promotion. This study applies the Apriori-based Association Rules algorithm to identify a set of meaningful rules for assessing the talents relevant to the promotion of academic staff in local universities. The findings have successfully developed a model based on the acquired knowledge related to publication, which has been evaluated by comparison against the guidelines for the promotion of academic experts. This knowledge helps to improve the quality of the academic talent evaluation process and future planning in HRM.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_81-Development_of_Talent_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Density based Clustering Algorithm for Distributed Datasets using Mutual k-Nearest Neighbors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100380</link>
        <id>10.14569/IJACSA.2019.0100380</id>
        <doi>10.14569/IJACSA.2019.0100380</doi>
        <lastModDate>2019-04-01T11:42:40.8030000+00:00</lastModDate>
        
        <creator>Ahmed Salim</creator>
        
        
        <subject>Privacy; mutual k-nearest neighbor; Density-based; clustering; security; DDBC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Privacy and security have always been concerns that prevent the sharing of data and impede the success of many projects. Distributed knowledge computing, if done correctly, plays a key role in solving such problems. The main goal is to obtain valid results while ensuring the non-disclosure of data. Density-based clustering is a powerful algorithm for analyzing uncertain data that occur naturally and affect the performance of many applications, like location-based services. Nowadays, a huge number of datasets are available to researchers, involving high-dimensional data points with varying densities: such datasets contain high-density regions surrounded by sparse data points. Existing clustering approaches handle these situations inefficiently, especially in the context of distributed data. In this paper, we design a new decomposable density-based clustering algorithm for distributed datasets (DDBC). DDBC utilizes the concept of the mutual k-nearest neighbor relationship to cluster distributed datasets with different densities. The proposed DDBC algorithm is capable of preserving the privacy and security of the data on each site by requiring a minimal number of transmissions to other sites.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_80-Density_based_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Growth Characteristics of Age and Gender-based Anthropometric Data from Human Assisted Remote Healthcare System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100379</link>
        <id>10.14569/IJACSA.2019.0100379</id>
        <doi>10.14569/IJACSA.2019.0100379</doi>
        <lastModDate>2019-03-30T12:17:13.2070000+00:00</lastModDate>
        
        <creator>Mehdi Hasan</creator>
        
        <creator>Rafiqul Islam</creator>
        
        <creator>Fumuhiko Yokota</creator>
        
        <creator>Mariko Nishikitani</creator>
        
        <creator>Akira Fukuda</creator>
        
        <creator>Ashir Ahmed</creator>
        
        <subject>Age and gender-based growth characteristics; portable health clinic; human assisted remote healthcare system; eHealth record</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Growth monitoring and promotion of optimal growth are essential components of primary health care. The most popular approach to this topic has been developed and utilized for decades by the CDC (Centers for Disease Control and Prevention) in the United States, resulting in its well-known clinical growth pattern charts for boys and girls. This metric comprises a series of percentile curves that illustrate the distribution of selected body measurements by age. The results show a clear uptrend of three traditional measures: height, weight, and BMI. The chart also shows a trend with the corresponding 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentile data variations. Unfortunately, the CDC metric system only addresses ages 2–20 years; no other studies appear to show correspondingly systematic growth characteristic patterns for humans more than 20 years old. Our Portable Health Clinic system has for many years been archiving remote health care data records collected from different age groups and socioeconomic levels in many locations throughout Bangladesh. This data provides an important resource with which to study the age-related evolving nature of anthropometric data. We aim to see if there are any significant clinical growth patterns, specifically regarding height, weight, BMI, waist, and hip, for humans over the age of 20 years. We could not determine a specific age at which the significant change from growth to decline occurs.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_79-Growth_Characteristics_of_Age_and_Gender.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triangle Hyper Hexa-cell Interconnection Network: A Novel Interconnection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100378</link>
        <id>10.14569/IJACSA.2019.0100378</id>
        <doi>10.14569/IJACSA.2019.0100378</doi>
        <lastModDate>2019-03-30T12:17:13.1770000+00:00</lastModDate>
        
        <creator>Asmaa Aljawawdeh</creator>
        
        <creator>Esraa Emriziq</creator>
        
        <creator>Saher Manaseer</creator>
        
        <subject>Interconnection network; hyper hexa-cell; parallel system; radix sort; triangle hyper hexa-cell; triangle topology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Interconnection networks play a central role in many applications because they have a direct influence on performance. Nowadays, the challenge is to find a suitable topology that meets requirements at minimum cost. One of the most famous interconnection structures is the cube; it is used to build many interconnection networks, such as the cube hyper hexa-cell topology. This work proposes a new topology: a hybrid of the hyper hexa-cell topology and the triangle topology. The proposed interconnection network focuses on the diameter, which is smaller than that of the cube hyper hexa-cell by one in any dimension; this affects many parameters, such as execution time. In the simulation environment, radix sort was applied to the suggested interconnection network, using dimension number two, on both the Triangle Hyper Hexa-cell and the Cube Hyper Hexa-cell. Comparing both topologies theoretically and practically, the results show the best performance for the Triangle Hyper Hexa-cell. Practically, the measured parameter was the execution time in the simulation environment. Theoretically, the topological properties of both have been measured and expressed as equations: the number of nodes in every dimension, the number of edges, the network degree, and the diameter. This architecture promises to be useful, more powerful for new-generation parallel architectures, and more effective for many applications across different fields.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_78-Triangle_Hyper_Hexa_Cell_Interconnection_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spin-Then-Sleep: A Machine Learning Alternative to Queue-based Spin-then-Block Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100377</link>
        <id>10.14569/IJACSA.2019.0100377</id>
        <doi>10.14569/IJACSA.2019.0100377</doi>
        <lastModDate>2019-03-30T12:17:13.1770000+00:00</lastModDate>
        
        <creator>Fadai Ganjaliyev</creator>
        
        <subject>Spinlock; spin-then-block; reinforcement learning; queue-based lock; intelligent sleeping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>One of the issues with spinlock protocols is excessive spinning, which results in a waste of CPU cycles. Some protocols use a hybrid, spin-then-block approach to avoid this problem. In this case, the contending thread may prefer relinquishing the CPU instead of spinning, and resume execution once notified. This paper presents a machine learning framework for intelligent sleeping and spinning as an alternative to the spin-then-block strategy. This framework can be used to address one of the challenges faced by that strategy: the delay in the critical path. The work suggests a reinforcement-learning-based approach for queue-based locks that aims at having threads learn whether to spin or sleep. The challenges of the suggested technique and future work are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_77-Spin_Then_Sleep_a_Machine_Learning_Alternative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Agent-based Simulation for Studying Air Pollution from Traffic in Urban Areas: The Case of Hanoi City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100376</link>
        <id>10.14569/IJACSA.2019.0100376</id>
        <doi>10.14569/IJACSA.2019.0100376</doi>
        <lastModDate>2019-03-30T12:17:13.1470000+00:00</lastModDate>
        
        <creator>KAFANDO Rodrique</creator>
        
        <creator>HO Tuong Vinh</creator>
        
        <creator>NGUYEN Manh Hung</creator>
        
        <subject>Air pollution; agent-based simulation; traffic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>In urban areas, traffic is one of the main causes of air pollution. Establishing an effective solution to raise public awareness of this phenomenon could help to significantly reduce the level of pollution in urban areas. In this study, we design and implement an agent-based simulation for studying the principles of production and dispersion of pollutants from road traffic in urban areas. The simulation takes into account different factors that can produce pollutants in an urban zone (the case of Hanoi city in Vietnam): roads and streets, vehicles (types, quantity), traffic, wind direction, etc. With this simulation, one can observe and study the emission and dispersion of pollutants from traffic by conducting experiments with various scenarios and parameters. This work is an interesting way to raise public awareness of air pollution from traffic in urban areas, so that people can change their behavior to reduce it.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_76-An_Agent_based_Simulation_for_Studying_Air_Pollution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android based Receptive Language Tracking Tool for Toddlers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100375</link>
        <id>10.14569/IJACSA.2019.0100375</id>
        <doi>10.14569/IJACSA.2019.0100375</doi>
        <lastModDate>2019-03-30T12:17:13.1300000+00:00</lastModDate>
        
        <creator>Sadia Firdous</creator>
        
        <creator>Madhia Wahid</creator>
        
        <creator>Amad Ud Din</creator>
        
        <creator>Khush Bakht</creator>
        
        <creator>Muhammad Yousuf Ali Khan</creator>
        
        <creator>Rehna Batool</creator>
        
        <creator>Misbah Noreen</creator>
        
        <subject>Receptive language; mobile application; hearing impairment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Today, Android-based applications are gaining popularity among users, especially among kids. Many Android-based applications related to children’s speech therapy are available, but they have left some loopholes. ‘Talking Kids’ is the solution to those shortcomings. It is an Android-based receptive language tracking tool for toddlers that aims to improve a child’s hearing capability and helps the child learn, understand and develop receptive language vocabulary. It includes colourful images of daily-routine objects with their sounds in a native accent, so that a child can learn everyday items. Child assessment is also included in this application for monitoring the child’s performance. On the basis of this assessment, an activity log is maintained to keep track of the child’s performance. The collected results show the successful development of receptive language vocabulary in toddlers with the help of ‘Talking Kids’.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_75-Android_based_Receptive_Language_Tracking_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Multilayer Perceptron Neural Network Models in Week-Ahead Rainfall Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100374</link>
        <id>10.14569/IJACSA.2019.0100374</id>
        <doi>10.14569/IJACSA.2019.0100374</doi>
        <lastModDate>2019-03-30T12:17:13.1130000+00:00</lastModDate>
        
        <creator>Lemuel Clark P. Velasco</creator>
        
        <creator>Ruth P. Serqui&#241;a</creator>
        
        <creator>Mohammad Shahin A. Abdul Zamad</creator>
        
        <creator>Bryan F. Juanico</creator>
        
        <creator>Junneil C. Lomocso</creator>
        
        <subject>Multilayer perceptron neural network; performance analysis; rainfall forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The multilayer perceptron neural network (MLPNN) is considered one of the most efficient forecasting techniques that can be implemented for the prediction of weather occurrence. As with any machine learning implementation, the challenge in utilizing MLPNN for rainfall forecasting lies in developing and evaluating MLPNN models that deliver optimal forecasting performance. This research conducted a performance analysis of MLPNN models through data preparation, model design, and model evaluation in order to determine which parameters are the best-fit configurations for MLPNN model implementation in rainfall forecasting. During rainfall data preparation, the imputation process and a spatial correlation evaluation of weather variables from various weather stations showed that the geographical locations of the chosen weather stations did not yield a direct correlation between stations with respect to rainfall behavior, leading to the decision to feed the MLPNN with the weather station having the most complete weather data. By analyzing the performance of MLPNN models with different combinations of training algorithms, activation functions, learning rates, and momentum, it was found that the MLPNN model with 100 hidden neurons, the Scaled Conjugate Gradient training algorithm and the Sigmoid activation function delivered the lowest RMSE of 0.031537, while another MLPNN model with the same number of hidden neurons and the same activation function but Resilient Propagation as the training algorithm had the lowest MAE of 0.0209. The results of this research show that performance analysis of MLPNN models is a crucial process in implementing MLPNN for week-ahead rainfall forecasting.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_74-Performance_Analysis_of_Multilayer_Perceptron.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Load Balancing in Cloud Computing using Multi-Layered Mamdani Fuzzy Inference Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100373</link>
        <id>10.14569/IJACSA.2019.0100373</id>
        <doi>10.14569/IJACSA.2019.0100373</doi>
        <lastModDate>2019-03-30T12:17:13.1000000+00:00</lastModDate>
        
        <creator>Naila Samar Naz</creator>
        
        <creator>Sagheer Abbas</creator>
        
        <creator>Muhammad Adnan Khan</creator>
        
        <creator>Benish Abid</creator>
        
        <creator>Nadeem Tariq</creator>
        
        <creator>Muhammad Farrukh Khan</creator>
        
        <subject>PITB; IoT; Virtual-Machine; Data-center; ML; ELB; MFIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>In this article, a new Multi-Layered Mamdani Fuzzy Inference System (ML-MFIS) is proposed for the assessment of Efficient Load Balancing (ELB). The proposed ELB-ML-MFIS expert system can categorise the level of ELB in cloud computing as Excellent, Normal or Low. The ELB-ML-MFIS expert system for ELB in cloud computing was developed under guidelines from the Microsoft Organization and the standard of Pakistan’s Punjab Information Technology Board (PITB). The expert system uses input cloud computing parameters, such as Data-Center, Virtual-Machine, and Internet-of-Things (IoT), for its different layers. This article also analyses the intensities of the parameters and the results achieved by the proposed ELB-ML-MFIS expert system. All these parameters and results were discussed with experts of the PITB, Lahore. The proposed ELB-ML-MFIS expert system is more accurate than the other approaches used for this task.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_73-Efficient_Load_Balancing_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Mechanisms for Pattern Formation through Coupled Bulk-Surface PDEs in Case of Non-linear Reactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100372</link>
        <id>10.14569/IJACSA.2019.0100372</id>
        <doi>10.14569/IJACSA.2019.0100372</doi>
        <lastModDate>2019-03-30T12:17:13.0670000+00:00</lastModDate>
        
        <creator>Muflih Alhazmi</creator>
        
        <subject>Bulk-surface; Reaction-diffusion; Finite-Element-Method (FEM); Partial Differential Equations (PDEs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This work explores mechanisms for pattern formation through coupled bulk-surface partial differential equations of reaction-diffusion type. Reaction-diffusion systems posed both in the bulk and on the surface of stationary volumes are coupled through linear Robin-type boundary conditions. The work presented in this paper studies the case of non-linear reactions in the bulk and on the surface, respectively. The investigated system is non-dimensionalised, and a rigorous linear stability analysis is carried out to determine the necessary and sufficient conditions for pattern formation. Appropriate parameter spaces are generated, from which model parameters are selected. To exhibit pattern formation, a coupled bulk-surface finite element method is developed and implemented. The numerical algorithm is implemented using the open-source software package deal.II, and computational results are shown on spherical and cuboid domains. The theoretical predictions of the linear stability analysis are verified and supported by the numerical simulations. The results show that non-linear reactions in the bulk and surface generate patterns everywhere.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_72-Exploring_Mechanisms_for_Pattern_Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Linear EH Relaying in Delay-Transmission Mode over n-&#181; Fading Channels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100371</link>
        <id>10.14569/IJACSA.2019.0100371</id>
        <doi>10.14569/IJACSA.2019.0100371</doi>
        <lastModDate>2019-03-30T12:17:13.0500000+00:00</lastModDate>
        
        <creator>Ayaz Hussain</creator>
        
        <creator>Inayat Ali</creator>
        
        <creator>Ramesh Kumar</creator>
        
        <creator>Zahoor Ahmed</creator>
        
        <subject>EH relay; non-linear EH model; n-&#181; fading; TSR protocol; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Energy harvesting (EH) is a technique to harvest energy from RF (radio frequency) waves. RF signals have the ability to convey energy and information concurrently. EH in cooperative relaying systems may increase the capacity and coverage of wireless networks. In this work, we study a dual-hop (two-hop) relaying system with three nodes: a source, a relay, and a destination. The source and destination have multiple antennas. We consider a non-linear EH model and the TSR (time-switching-based relaying) protocol at the single-antenna relay node, and we evaluate the system performance over η−&#181; fading channels. A non-linear EH receiver restrains the harvested power with a saturation threshold. In the TSR protocol, the relay switches between EH and information processing, with a fraction of time used for each process. The η−&#181; fading model incorporates several fading models as notable cases, viz., Nakagami-m, One-sided Gaussian, Nakagami-q (Hoyt), and Rayleigh. The system performance is analyzed in terms of average capacity and throughput for different saturation threshold power levels, diverse antenna arrangements, and different values of the parameters η and &#181;.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_71-Non_Linear_EH_Relaying_in_Delay_Transmission_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of ECG Signal Processing and Filtering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100370</link>
        <id>10.14569/IJACSA.2019.0100370</id>
        <doi>10.14569/IJACSA.2019.0100370</doi>
        <lastModDate>2019-03-30T12:17:13.0370000+00:00</lastModDate>
        
        <creator>Zia-ul-Haque </creator>
        
        <creator>Rizwan Qureshi</creator>
        
        <creator>Mehmood Nawaz</creator>
        
        <creator>Faheem Yar Khuhawar</creator>
        
        <creator>Nazish Tunio</creator>
        
        <creator>Muhammad Uzair</creator>
        
        <subject>Electrocardiogram; power line interference; electromyography; adaptive filter; Least Mean Square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Electrocardiography (ECG) is a common technique for recording the electrical activity of the human heart. Accurate computer analysis of the ECG signal is challenging, as it is exceedingly prone to high-frequency noise and various other artifacts due to its low amplitude. In remote health care systems, computer-based high-level understanding of ECG signals is performed using advanced machine learning algorithms, whose accuracy relies on the signal-to-noise ratio (SNR) of the input ECG signal. In this paper, we analyse various methods for removing the high-frequency noise components from the ECG signal and evaluate the performance of several adaptive filtering algorithms. The results suggest that the Normalized Least Mean Square (NLMS) algorithm achieves high SNR and that Sign LMS is computationally efficient.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_70-Analysis_of_ECG_Signal_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Pragmatic Clustering for Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100369</link>
        <id>10.14569/IJACSA.2019.0100369</id>
        <doi>10.14569/IJACSA.2019.0100369</doi>
        <lastModDate>2019-03-30T12:17:13.0200000+00:00</lastModDate>
        
        <creator>Suzan Basloom</creator>
        
        <creator>Nadine Akkari</creator>
        
        <creator>Ghadah Aldabbagh</creator>
        
        <subject>Clustering; genetic algorithm; K-means; wireless networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Clustering of nodes in wireless networks is one of the solutions used to improve network performance. This paper discusses clustering in wireless networks and presents a novel clustering algorithm named the Pragmatic Genetic Algorithm (PGA). It combines two well-known artificial intelligence techniques: K-means and the genetic algorithm. The proposed algorithm aims at minimizing the execution time of clustering, especially in time-sensitive wireless network applications. The performance of PGA has been compared with the classical clustering algorithms K-means and KGA. The experiments were conducted using synthetic and real data from public repositories. PGA obtained excellent execution times and stable accuracy, even when the number of nodes was increased.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_69-Optimal_Pragmatic_Clustering_for_Wireless_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Gender-neutral Approach to Detect Early Alzheimer’s Disease Applying a Three-layer NN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100368</link>
        <id>10.14569/IJACSA.2019.0100368</id>
        <doi>10.14569/IJACSA.2019.0100368</doi>
        <lastModDate>2019-03-30T12:17:12.9900000+00:00</lastModDate>
        
        <creator>Shithi Maitra</creator>
        
        <creator>Tonmoy Hossain</creator>
        
        <creator>Abdullah Al-Sakin</creator>
        
        <creator>Sheikh Inzamamuzzaman</creator>
        
        <creator>Md. Mamun Or Rashid</creator>
        
        <creator>Syeda Shabnam Hasan</creator>
        
        <subject>Alzheimer’s disease; dementia; exploratory data analysis; synthetically imputed data; socio-economic factors; specialized imputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Early diagnosis of the neurodegenerative, irreversible disease Alzheimer’s is crucial for effective disease management. Dementia from Alzheimer’s is an agglomerated result of complex criteria rooted in medical, social and educational backgrounds. With multiple predictive features for the mental state of a subject, machine learning methodologies are ideal for classification due to their extremely powerful feature-learning capabilities. This study primarily attempts to classify subjects as having or not having the early symptoms of the disease and, on the sidelines, endeavors to detect whether a subject has already transformed towards Alzheimer’s. The research utilizes the OASIS (Open Access Series of Imaging Studies) longitudinal dataset, which has a uniform distribution of demented and non-demented subjects, and establishes the use of novel features such as socio-economic status and educational background for early detection of dementia, supported by exploratory data analysis. The research exploits three data-engineered versions of the OASIS dataset: one eliminating the incomplete cases, another with synthetically imputed data and, lastly, one that eliminates gender as a feature, which eventually produces the best results and makes the model a gender-neutral unique piece. The neural network applied has three layers: two ReLU hidden layers and a third softmax classification layer. The best accuracy of 86.49%, obtained on the cross-validation set with the trained parameters, is greater than that of the traditional learning algorithms previously applied to the same data. Drilling down to two classes, namely demented and non-demented, 100% accuracy has remarkably been achieved. Additionally, perfect recall and a precision of 0.8696 for the ‘demented’ class have been achieved. The significance of this work consists in endorsing educational and socio-economic factors as useful features and eliminating gender bias using a simple neural network model, without the need for complete MRI tuples, which can be compensated for using specialized imputation methods.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_68-A_Gender_Neutral_Approach_to_Detect_Early_Alzheimers_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Camera Solution for Video Surveillance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100367</link>
        <id>10.14569/IJACSA.2019.0100367</id>
        <doi>10.14569/IJACSA.2019.0100367</doi>
        <lastModDate>2019-03-30T12:17:12.9730000+00:00</lastModDate>
        
        <creator>Misbah Ahmad</creator>
        
        <creator>Imran Ahmed</creator>
        
        <creator>Kaleem Ullah</creator>
        
        <creator>Iqbal Khan</creator>
        
        <creator>Ayesha Khattak</creator>
        
        <creator>Awais Adnan</creator>
        
        <subject>Energy efficient; video surveillance; overhead camera</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>As video surveillance grows rapidly, new problems and issues come into view that need serious and urgent attention. A video surveillance system requires a beneficial, energy-efficient camera solution. In this paper, a single overhead camera solution is introduced which overcomes the problems existing in various frontal and overhead based surveillance systems, increasing the efficiency and accuracy of both frontal and overhead surveillance. Two energy-efficient overhead camera models are presented. The first model consists of a single overhead camera with a wide-angle lens covering a wide field of view, addressing problems present in traditional surveillance systems. The second model presents a single smart centralized overhead camera which controls various frontal cameras. Several factors associated with the camera models, such as field of view, focal length and distortion, are also discussed. Finally, the impact of the surveillance cameras is discussed, showing that a single energy-efficient overhead camera surveillance system can solve many problems present in traditional surveillance systems, such as power consumption, storage, time, human resources, installation cost and small coverage area.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_67-Energy_Efficient_Camera_Solution_for_Video_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Framework for Analyzing Heterogeneous Data from Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100366</link>
        <id>10.14569/IJACSA.2019.0100366</id>
        <doi>10.14569/IJACSA.2019.0100366</doi>
        <lastModDate>2019-03-30T12:17:12.9270000+00:00</lastModDate>
        
        <creator>Aritra Paul</creator>
        
        <creator>Mohammad Shamsul Arefin</creator>
        
        <creator>Rezaul Karim</creator>
        
        <subject>Heterogeneous data; recommendation systems; cosine similarity; video categorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Due to the rapid growth of internet technologies, online social networks have become a part of people’s everyday life. People share their thoughts, feelings, likes, dislikes and many other issues on social networks by posting messages, videos and images and commenting on them. This makes social networks a great source of heterogeneous data. Heterogeneous data is a kind of unstructured data which comes in a variety of forms at an uncertain speed. In this paper, we develop a framework to collect and analyze a significant amount of heterogeneous data obtained from a social network in order to understand the behavioural patterns of its users. In our framework, we first crawl data from a well-known social network through its Graph API, comprising posts, comments, images and videos. We compute keywords from the users’ comments and posts and separate the keywords into nouns, verbs and adjectives with the help of an XML-based part-of-speech tagger. We analyze the images related to each user to find out how a user likes to move; for this purpose, we count the number of users in an image using a frontal face detection classifier. We also analyze the users’ video files to find the categories of the videos. For this purpose, we divide each video into frames and measure the RGB properties, speed, duration, and frame height and width. Finally, for each user we combine the information from text, images and videos, and based on the combined information we develop the user’s profile. We then generate recommendations for each user based on the user’s activities and the cosine similarity between users. We perform several experiments to show the effectiveness of our developed system. From the experimental evaluation, we can say that our framework can generate results up to a satisfactory level.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_66-Developing_a_Framework_for_Analyzing_Heterogeneous_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Exam Scheduling Technique based on Graph Coloring and Genetic Algorithms Targeted towards Student Comfort</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100365</link>
        <id>10.14569/IJACSA.2019.0100365</id>
        <doi>10.14569/IJACSA.2019.0100365</doi>
        <lastModDate>2019-03-30T12:17:12.9100000+00:00</lastModDate>
        
        <creator>Osama Al-Haj Hassan</creator>
        
        <creator>Osama Qtaish</creator>
        
        <creator>Maher Abuhamdeh</creator>
        
        <creator>Mohammad Al-Haj Hassan</creator>
        
        <subject>Exam scheduling; optimization; graph coloring; genetic algorithms; time tabling; fitness value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Scheduling is one of the vital activities needed in various aspects of life. It is also a key factor in generating exam schedules for academic institutions. In this paper, we propose an exam scheduling technique that combines graph coloring and genetic algorithms. On one hand, graph coloring is used to order sections such that sections that are difficult to schedule come first and are accordingly scheduled first, which helps in increasing the probability of generating valid schedules. On the other hand, we use genetic algorithms to search more effectively for more optimized schedules within the large search space. We propose a two-stage fitness function targeted toward increasing student comfort. We also investigate the effect and potency of the crossover and mutation operators. Our experiments are conducted on a realistic dataset, and the results show that a mutation-only hybrid approach has a low cost and converges faster toward more optimized schedules.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_65-A_Hybrid_Exam_Scheduling_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impacts of Unbalanced Test Data on the Evaluation of Classification Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100364</link>
        <id>10.14569/IJACSA.2019.0100364</id>
        <doi>10.14569/IJACSA.2019.0100364</doi>
        <lastModDate>2019-03-30T12:17:12.8800000+00:00</lastModDate>
        
        <creator>Manh Hung Nguyen</creator>
        
        <subject>Supervised machine learning evaluation; accuracy; F1 score; unbalanced factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The performance of a classifier in a supervised machine learning problem is popularly evaluated using accuracy, precision, recall, and F1-score. These parameters evaluate classifiers very well when the numbers of positive-label and negative-label samples in the testing set are balanced or nearly balanced. However, they may mis-evaluate classifiers in cases where the positive and negative samples in the testing set are unbalanced. This paper proposes updates to these parameters by taking into account the unbalance factor, which represents the ratio of positive to negative samples in the testing set. The updated parameters are then experimentally evaluated and compared to the traditional parameters.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_64-Impacts_of_Unbalanced_Test_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Analysis of Crowd Behaviour for Automatic and Accurate Surveillance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100363</link>
        <id>10.14569/IJACSA.2019.0100363</id>
        <doi>10.14569/IJACSA.2019.0100363</doi>
        <lastModDate>2019-03-30T12:17:12.8630000+00:00</lastModDate>
        
        <creator>E Padmalatha</creator>
        
        <creator>Karedla Anantha Sashi Sekhar</creator>
        
        <creator>Dasarada Ram Reddy Mudiam</creator>
        
        <subject>Real time surveillance; violent flow descriptors; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Surveillance in this modern era is a necessity. Creating an alert in case of emergencies and disturbances is of great importance. As the number of simultaneous camera feeds increases, the burden on the human supervisor also increases. The proposed system is a way to aid the human supervisor in the surveillance job. Creating alerts in real time will help in responding quickly to crucial situations. With this in mind, we propose the following: (1) generation of ViF (Violent Flow) descriptors as high-level features in real time; (2) using the generated ViFs of a video dataset for training a neural net and testing its accuracy; (3) developing a system that can detect signs of disturbance among a crowd in real time and can learn from the decisions it makes.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_63-Real_Time_Analysis_of_Crowd_Behaviour.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Particle Swarm Optimization Algorithm with Chi-Square Mutation Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100362</link>
        <id>10.14569/IJACSA.2019.0100362</id>
        <doi>10.14569/IJACSA.2019.0100362</doi>
        <lastModDate>2019-03-30T12:17:12.8470000+00:00</lastModDate>
        
        <creator>Waqas Haider Bangyal</creator>
        
        <creator>Hafiz Tayyab Rauf</creator>
        
        <creator>Hafsa Batool</creator>
        
        <creator>Saad Abdullah Bangyal</creator>
        
        <creator>Jamil Ahmed</creator>
        
        <creator>Sobia Pervaiz</creator>
        
        <subject>Particle Swarm Optimization; Chi-Square Mutation; Population Initialization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The Particle Swarm Optimization (PSO) algorithm is a population-based stochastic search strategy inspired by the way bee swarms or animal herds seek their food. Owing to its flexibility in numerical experimentation, PSO has been used to solve diverse kinds of optimization problems. However, PSO is frequently caught in local optima while handling complex real-world problems. Considering this, a novel modified PSO is introduced by proposing a chi-square mutation method. The main function of the mutation operator in PSO is quick convergence and escape from local minima. Population initialization also plays a critical role in meta-heuristic algorithms. Therefore, in this work, to improve convergence, instead of applying a random distribution for initialization, two quasi-random sequences, Halton and Sobol, have been applied and properly combined with the chi-square mutated PSO (Chi-Square PSO) algorithm. The promising experimental results suggest the superiority of the proposed technique. The results offer insight into how the proposed mutation operator influences the value of the cost function and divergence. The proposed mutation strategy is applied to eight (8) benchmark functions extensively used in the literature. The simulation results verify that Chi-Square PSO provides efficient results compared to the other tested algorithms implemented for function optimization.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_62-An_Improved_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval using Visual Phrases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100361</link>
        <id>10.14569/IJACSA.2019.0100361</id>
        <doi>10.14569/IJACSA.2019.0100361</doi>
        <lastModDate>2019-03-30T12:17:12.8330000+00:00</lastModDate>
        
        <creator>Benish Anwar</creator>
        
        <creator>Junaid Baber</creator>
        
        <creator>Atiq Ahmed</creator>
        
        <creator>Maheen Bakhtyar</creator>
        
        <creator>Sher Muhammad Daudpota</creator>
        
        <creator>Anwar Ali Sanjrani</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <subject>Image processing; image retrieval; visual phrases; apriori algorithm; SIFT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Keypoint-based descriptors are widely used for various computer vision applications. During this process, keypoints are initially detected from the given images and are later represented by some robust and distinctive descriptor such as the scale-invariant feature transform (SIFT). Keypoint-based image-to-image matching has gained significant accuracy for image retrieval applications such as image copy detection, similar image retrieval and near-duplicate detection. Local keypoint descriptors are quantized into visual words to reduce the feature space, which makes image-to-image matching possible for large-scale applications. Bag-of-visual-word quantization makes matching efficient at the cost of accuracy. In this paper, the bag-of-visual-word model is extended to detect frequent pairs of visual words, known as frequent itemsets in text processing and also called visual phrases. Visual phrases increase the accuracy of image retrieval without increasing the vocabulary size. Experiments are carried out on benchmark datasets that depict the effectiveness of the proposed scheme.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_61-Image_Retrieval_using_Visual_Phrases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement in Classification Algorithms through Model Stacking with the Consideration of their Correlation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100360</link>
        <id>10.14569/IJACSA.2019.0100360</id>
        <doi>10.14569/IJACSA.2019.0100360</doi>
        <lastModDate>2019-03-30T12:17:12.8170000+00:00</lastModDate>
        
        <creator>Muhammad Azam</creator>
        
        <creator>Dr. Tanvir Ahmed</creator>
        
        <creator>Dr. M. Usman Hashmi</creator>
        
        <creator>Rehan Ahmad</creator>
        
        <creator>Abdul Manan</creator>
        
        <creator>Muhammad Adrees</creator>
        
        <creator>Fahad Sabah</creator>
        
        <subject>Classification algorithms; model stacking; correlation; k-nearest neighbor; pre-processing; meta classifiers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>In this research we analyzed the performance of some well-known classification algorithms in terms of their accuracy and proposed a methodology for model stacking on the basis of their correlation, which improves the accuracy of these algorithms. We selected Support Vector Machines (svm), Na&#239;ve Bayes (nb), k-Nearest Neighbors (knn), Generalized Linear Model (glm), Latent Discriminant Analysis (lda), gbm, Recursive Partitioning and Regression Trees (rpart), rda, Neural Networks (nnet) and Conditional Inference Trees (ctree), and performed analyses on four textual datasets of different sizes: Scopus with 50,000 instances, IMDB Movie Reviews with 10,000 instances, Amazon Product Reviews with 1,000 instances, and a Yelp dataset with 1,000 instances. We used R-Studio for performing the experiments. Results show that the performance of all algorithms increased at the meta level. Neural Networks achieved the best results, with more than 25% improvement at the meta level, and outperformed the other evaluated methods with an accuracy of 95.66%; altogether, our model gives far better results than the individual algorithms’ performance.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_60-Improvement_in_Classification_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finding Attractive Research Areas for Young Scientists</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100359</link>
        <id>10.14569/IJACSA.2019.0100359</id>
        <doi>10.14569/IJACSA.2019.0100359</doi>
        <lastModDate>2019-03-30T12:17:12.7870000+00:00</lastModDate>
        
        <creator>Nouman Malik</creator>
        
        <creator>Hikmat Ullah Khan</creator>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Muhammad Shahzad Faisal</creator>
        
        <creator>Ahsan Mahmood</creator>
        
        <subject>Research field; scientometrics; attractive areas; g-index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The selection of a research area is vital for new researchers. One of the major issues for a researcher is choosing the domain in which he or she can carry out research. This choice is crucial because it decides the researcher’s future in that research area, yet finding hot and attractive research areas has not been considered in the relevant Scientometrics literature. In this regard, the correct choice of research domain helps researchers to perform better as well as to build a good academic career. The main aim of this research study is to identify attractive research areas for researchers, especially those at the starting stage of their research life. To the best of our knowledge, work in this area is still very limited. In order to identify attractive research fields for new researchers, new rising fields are identified by applying the well-known g-index, which is widely used for finding top authors in academic networks. In addition, we compute diverse, relevant features of the research fields which help us to identify top research areas. The results demonstrate that the proposed methodology is capable of recommending attractive research fields for potential future research work. An extensive empirical analysis has been carried out using the widely used DBLP academic database.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_59-Finding_Attractive_Research_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Arnold and Singular Value Decomposition based Chaotic Image Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100358</link>
        <id>10.14569/IJACSA.2019.0100358</id>
        <doi>10.14569/IJACSA.2019.0100358</doi>
        <lastModDate>2019-03-30T12:17:12.7700000+00:00</lastModDate>
        
        <creator>Ashraf Afifi</creator>
        
        <subject>Encryption; Arnold transform; singular value decomposition; chaotic image encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This paper proposes an efficient image encryption method based on the Arnold transform (AT) and singular value decomposition (SVD). The proposed method applies AT to a plain image to permute the positions of all image pixels; a diffusion process is then applied to the resulting image by decomposing it into three components via SVD. The decryption process derives the plain image from the cipher image. Matlab simulation experiments are carried out to examine the suggested method. The achieved results show the superiority of the suggested approach with respect to encryption quality.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_58-Efficient_Arnold_and_Singular_Value_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microcontroller-based RFID, GSM and GPS for Motorcycle Security System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100357</link>
        <id>10.14569/IJACSA.2019.0100357</id>
        <doi>10.14569/IJACSA.2019.0100357</doi>
        <lastModDate>2019-03-30T12:17:12.7530000+00:00</lastModDate>
        
        <creator>Kunnu Purwanto</creator>
        
        <creator>Iswanto</creator>
        
        <creator>Tony Khristanto Hariadi</creator>
        
        <creator>Muhammad Yusvin Muhtar</creator>
        
        <subject>Microcontroller; GPS; GSM; RFID; motorcycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Crime, including motorcycle theft, has been increasing, and it occurs regardless of time and place. The owner of a motorcycle needs to ensure its security by adding either a manual or an electronic lock. However, both manual and electronic locks are still incapable of protecting the motorcycle from theft. Based on this problem, this research created an automatic motorcycle safety system named Germs Narcissist. Germs Narcissist is a key innovation in automatic vehicle security using GPS (global positioning system), GSM (global system for mobile communication), and RFID (Radio Frequency Identification). The system uses the short message service (SMS) to provide vehicle information such as time, position, and alarms to the owner of the motorcycle. The combination of these technologies can be used as a practical and effective safety lock for a motorcycle.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_57-Microcontroller_based_RFID_GSM_and_GPS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Concept based Approach for user Centered Health Information Retrieval to Address Readability Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100356</link>
        <id>10.14569/IJACSA.2019.0100356</id>
        <doi>10.14569/IJACSA.2019.0100356</doi>
        <lastModDate>2019-03-30T12:17:12.7400000+00:00</lastModDate>
        
        <creator>Ibrahim Umar Kontagora</creator>
        
        <creator>Isredza Rahmi A. Hamid</creator>
        
        <creator>Nurul Aswa Omar</creator>
        
        <subject>Concept-based approach; medical discharge reports; clinical reports; query expansion; latent semantic indexing; query likelihood model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Searching for relevant medical guidance has turned out to be a common and notable task performed by internet users. This diversity of information seekers indicates an enormous range of information needs and, consequently, a key prerequisite for the development of clinical retrieval systems that satisfy the clinical information needs of non-clinical professionals and their caregivers. This study focused on designing an enhanced model for consumer-centered medical information retrieval, and also proposed an improved system model that provides simpler medical meanings for clinical grammar(s) found in released clinical documents and clinical search results online. We evaluated and compared the enhanced model with current models in the clinical domain, namely QLM (“Query Likelihood Model”), LSI (“Latent Semantic Indexing”) and CBA (“Concept Based Approach”), using the MeSH, Metamap and UMLS databases. The outcomes obtained from the experimental study confirmed that the enhanced model (ECBA) achieved 0.9145, 0.9170 and 0.9156 on MAP (“Mean Average Precision”), P@10 (“Precision @ 10”) and NDCG@10 (“Normalized Discounted Cumulative Gain @ 10”), respectively. Hence, the best model to deploy for addressing readability issues is the enhanced concept-based method.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_56-An_Enhanced_Concept_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Categorical Model of Process Co-Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100355</link>
        <id>10.14569/IJACSA.2019.0100355</id>
        <doi>10.14569/IJACSA.2019.0100355</doi>
        <lastModDate>2019-03-30T12:17:12.7230000+00:00</lastModDate>
        
        <creator>Daniel-Cristian Craciunean</creator>
        
        <creator>Dimitris Karagiannis</creator>
        
        <subject>Process modeling; metamodel; modeling grammars; categorical grammars; category theory; categorical sketch; co-simulation; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>A set of dynamic systems in which some entities undergo transformations, or receive certain services in successive phases, can be modeled by processes. The specification of a process consists of a description of the properties of this process as a mathematical object in a suitable modeling language. The language chosen for specifying a process should facilitate the writing of this specification in a very clear and simple form. This raises the need for various types of formalisms that are faithful to the component subsystems of such a system and capable of mimicking their varied dynamics. In practice, domain-specific languages are often developed to provide building blocks adapted to the processes. Thus arises the concept of multi-paradigm modeling, which involves the combination of different types of models, the decomposition and composition of heterogeneously specified models, and their simulation. Multi-paradigm modeling presents a variety of challenges, such as coupling and transforming models described in various formalisms, relating models at different levels of abstraction, and creating metamodels to facilitate the rapid development of varied formalisms for model specification. A simulation can be seen as a set of state variables that evolve over time. Co-simulation is a synthesis of all simulations of the components of the system, coordinated and synchronized based on the interactions between them. Category theory provides a framework for organizing and structuring formal systems in which heterogeneous information can be transferred, thus allowing rigorous cohesion bridges to be built between heterogeneous components. This paper proposes a new model of process co-simulation based on category theory.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_55-A_Categorical_Model_of_Process_Co_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Melanoma Skin Cancer using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100353</link>
        <id>10.14569/IJACSA.2019.0100353</id>
        <doi>10.14569/IJACSA.2019.0100353</doi>
        <lastModDate>2019-03-30T12:17:12.7070000+00:00</lastModDate>
        
        <creator>Rina Refianti</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Rachmadinna Poetri Priyandini</creator>
        
        <subject>Convolutional neural network; deep learning; image classification; LeNet-5; melanoma skin cancer; python</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Melanoma is a type of skin cancer and the most dangerous one, because it causes most skin cancer deaths. Melanoma arises from melanocytes, the melanin-producing cells, so melanomas are generally brown or black in colour. Melanomas are mostly caused by exposure to ultraviolet radiation that damages the DNA of skin cells. Diagnosis of melanoma is often performed manually, relying on the visual skill of trained doctors who analyze the results of a dermoscopy examination and match them with medical knowledge. The weakness of manual detection is that it is highly influenced by human subjectivity, which makes it inconsistent under certain conditions. Therefore, computer-assisted technology is needed to help classify the results of dermoscopy examinations and to reach conclusions more accurately in a relatively faster time. The making of this application started with problem analysis, design, implementation, and testing. The application uses deep learning technology with the Convolutional Neural Network method and the LeNet-5 architecture for classifying image data. Experiments using 44 test images with varying numbers of training images and epochs yielded the highest success rates of 93% in training and 100% in testing, obtained with 176 training images and 100 epochs. The application was created using the Python programming language with the Keras library and a Tensorflow back-end.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_53-Classification_of_Melanoma_Skin_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging A Multi-Objective Approach to Data Replication in Cloud Computing Environment to Support Big Data Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100354</link>
        <id>10.14569/IJACSA.2019.0100354</id>
        <doi>10.14569/IJACSA.2019.0100354</doi>
        <lastModDate>2019-03-30T12:17:12.7070000+00:00</lastModDate>
        
        <creator>Mohammad Shorfuzzaman</creator>
        
        <creator>Mehedi Masud</creator>
        
        <subject>Big data applications; data cloud; replication; dynamic programming; QoS requirement; workload constraint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Increased data availability and high data access performance are of utmost importance in a large-scale distributed system such as a data cloud. To address these issues, data can be replicated in various locations in the system where applications are executed. Replication not only improves data availability and access latency but also improves system load balancing. While data replication in distributed cloud storage is addressed in the literature, the majority of current techniques do not consider the different costs and benefits of replication from a comprehensive perspective. In this paper, we investigate the replica management problem (formulated using dynamic programming) in cloud computing environments to support big data applications. To this end, we propose a new highly distributed replica placement algorithm that provides cost-effective replication of huge amounts of geographically distributed data into the cloud to meet the quality of service (QoS) requirements of data-intensive (big data) applications while ensuring that the workload among the replica data centers is balanced. In addition, the algorithm takes into account the consistency among replicas due to update propagation. Thus, we build a multi-objective optimization approach for replica management in the cloud that seeks a near-optimal solution by balancing the trade-offs among the stated issues. To verify the effectiveness of the algorithm, we evaluated its performance and compared it with two baseline approaches from the literature. The evaluation results demonstrate the usefulness and superiority of the presented algorithm for the conditions of interest.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_54-Leveraging_a_Multi_Objective_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Ant Colony Optimization for Automatic Social Media Comments Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100352</link>
        <id>10.14569/IJACSA.2019.0100352</id>
        <doi>10.14569/IJACSA.2019.0100352</doi>
        <lastModDate>2019-03-30T12:17:12.6930000+00:00</lastModDate>
        
        <creator>Lucky </creator>
        
        <creator>Abba Suganda Girsang</creator>
        
        <subject>Automatic text summarization; social media; ant colony optimization; multi-objective</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Summarizing social media comments automatically can help users capture important information without reading all of the comments. Automatic text summarization can be framed as a Multi-Objective Optimization (MOO) problem that must satisfy two conflicting objectives: retaining as much information from the source text as possible while keeping the summary as short as possible. To solve this problem, an undirected graph is created to capture the relations between social media comments. Then, the Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to generate summaries by selecting concise and important comments from the graph based on the desired summary size. The quality of the generated summaries is compared to that of other text summarization algorithms such as TextRank, LexRank, SumBasic, Latent Semantic Analysis, and KL-Sum. The results showed that MOACO can produce informative and concise summaries with a small cosine distance to the source text and fewer words than the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_52-Multi_Objective_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Performance Investigation of Automatic Melanoma Diagnosis Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100351</link>
        <id>10.14569/IJACSA.2019.0100351</id>
        <doi>10.14569/IJACSA.2019.0100351</doi>
        <lastModDate>2019-03-30T12:17:12.6600000+00:00</lastModDate>
        
        <creator>Amna Asif</creator>
        
        <creator>Iram Fatima</creator>
        
        <creator>Adeel Anjum</creator>
        
        <creator>Saif U. R. Malik</creator>
        
        <subject>Smartphones; computer based systems; melanoma diagnosis; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Melanoma is a type of skin cancer, one of the fatal diseases, that appears as an abnormal growth of skin cells; the lesion often looks like a mole on the skin. Early detection of melanoma from skin lesions by means of screening is an important step towards a reduction in mortality. For this purpose, numerous automatic melanoma diagnosis models based on image processing and machine learning techniques are available for computer-based applications (CBA) and smartphone-based applications (SBA). Since smartphones with built-in cameras are the most accessible and easiest means of screening, SBA are preferred over CBA. In this paper, we explored the available literature and highlighted the challenges of SBA in terms of execution time due to the limited computing power of smartphones. To resolve this limitation, we proposed to develop an SBA that can seamlessly process image data on the cloud instead of on the smartphone’s local hardware. Therefore, we designed a study to build a machine learning model for melanoma diagnosis, measured the time taken by preprocessing, segmentation, feature extraction, and classification on the cloud, and compared the results with the processing time on the smartphone’s local machine. The results showed a statistically significant difference (p value &lt;0.001) in the average processing time between the two environments, with processing on the cloud being more efficient. The findings of the proposed research will help developers decide on the processing platform when developing smartphone applications for automatic melanoma diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_51-Towards_the_Performance_Investigation_of_Automatic_Melanoma_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Implementing Framework to Generate Myopathic Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100350</link>
        <id>10.14569/IJACSA.2019.0100350</id>
        <doi>10.14569/IJACSA.2019.0100350</doi>
        <lastModDate>2019-03-30T12:17:12.6470000+00:00</lastModDate>
        
        <creator>Amira Dridi</creator>
        
        <creator>Jassem Mtimet</creator>
        
        <creator>Slim Yacoub</creator>
        
        <subject>Surface Electromyography (sEMG); myopathy; root mean square; Power Spectral Density (PSD); skeletal muscles; biceps brachii; interosseous dorsalis; tibialis anterior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>In this paper, we describe a simulation system for myopathic surface electromyography (sEMG) signals. The architecture of the proposed system consists of two cascading modules. sEMG signals of three pathological skeletal muscles (biceps brachii, interosseous dorsalis, tibialis anterior) were generated. The Root Mean Square (RMS) envelope and Power Spectral Density (PSD) were used to validate our system.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_50-Towards_Implementing_Framework_to_Generate_Myopathic_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Parking Architecture based on Multi Agent System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100349</link>
        <id>10.14569/IJACSA.2019.0100349</id>
        <doi>10.14569/IJACSA.2019.0100349</doi>
        <lastModDate>2019-03-30T12:17:12.6300000+00:00</lastModDate>
        
        <creator>SOFIA BELKHALA</creator>
        
        <creator>SIHAM BENHADOU</creator>
        
        <creator>KHALID BOUKHDIR</creator>
        
        <creator>HICHAM MEDROMI</creator>
        
        <subject>Smart parking; IoT; multi agent system; artificial intelligence; parking availability; IoT application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Finding a parking space in big cities is becoming increasingly difficult. In addition, the proliferation of cars has created several urban mobility problems for cities. With the development of technology, however, these problems can be addressed. In this paper, the parking problem is tackled by proposing an architecture that automates the parking process using the Internet of Things, artificial intelligence, and multi agent systems.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_49-Smart_Parking_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition and Classification of Power Quality Disturbances by DWT-MRA and SVM Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100348</link>
        <id>10.14569/IJACSA.2019.0100348</id>
        <doi>10.14569/IJACSA.2019.0100348</doi>
        <lastModDate>2019-03-30T12:17:12.5970000+00:00</lastModDate>
        
        <creator>Fayyaz Jandan</creator>
        
        <creator>Suhail Khokhar</creator>
        
        <creator>Syed Abid Ali Shaha</creator>
        
        <creator>Farhan Abbasi</creator>
        
        <subject>Power quality disturbances; discrete wavelet transform; multi resolution analysis; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The electrical power system is a large and complex network in which power quality disturbances (PQDs) must be monitored, analyzed and mitigated continuously in order to preserve and re-establish the normal power supply without even slight interruption. In practice, the huge volume of disturbance data is difficult to manage, and its analysis and monitoring demand a high level of accuracy and time. Thus, automatic and intelligent algorithm-based methodologies are in practice for the detection, recognition and classification of power quality events. This approach may help to take preventive measures against abnormal operations, and sudden fluctuations in supply can be handled accordingly. Disturbance types and causes, proper extraction of features in single and multiple disturbances, classification model type and classifier performance are still the main concerns and challenges. In this paper, an attempt has been made to present a different approach for the recognition of PQDs using synthetic model-based generated disturbances, which are frequent in power system operations, and a proposed unique feature vector. Disturbances are generated in the Matlab workspace environment, whereas distinctive features of events are extracted through the discrete wavelet transform (DWT) technique. A machine learning based support vector machine classifier is implemented for the classification and recognition of disturbances. The results show that the proposed methodology recognizes PQDs with high accuracy, sensitivity and specificity. This study illustrates that the proposed approach is valid, efficient and applicable.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_48-Recognition_and_Classification_of_Power_Quality_Disturbances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Random Early Detection using Responsive Congestion Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100347</link>
        <id>10.14569/IJACSA.2019.0100347</id>
        <doi>10.14569/IJACSA.2019.0100347</doi>
        <lastModDate>2019-03-30T12:17:12.5670000+00:00</lastModDate>
        
        <creator>Ahmad Adel Abu-Shareha</creator>
        
        <subject>Congestion; random early detection; active queue management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Random Early Detection (RED) is an Active Queue Management (AQM) method proposed in the early 1990s to reduce the effects of network congestion on the router buffer. Although various AQM methods have extended RED to enhance network performance, RED is still the most commonly utilized method because it provides stable performance under various network statuses. Indeed, RED maintains a manageable buffer queue length and avoids congestion resulting from an increase in traffic load; this is accomplished using an indicator that reflects the status of the buffer and a stochastic technique for packet dropping. Although RED predicts congestion, reduces packet loss and avoids unnecessary packet dropping, it reacts slowly to an increase in buffer queue length, making it inadequate for detecting and reacting to sudden heavy congestion. Due to this limitation, RED is found to be significantly influenced by the way in which the congestion indicator is calculated and used. In this paper, RED is modified to enhance its performance under various network statuses and to overcome several disadvantages of the original method. The results indicate that the proposed Enhanced Random Early Detection (EnRED) and Time-window Augmented RED (Windowed-RED) methods—compared to the original RED, ERED and BLUE methods—enhance network performance in terms of loss, dropping and packet delay.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_47-Enhanced_Random_Early_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Security Issues and their Impact on Hybrid Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100346</link>
        <id>10.14569/IJACSA.2019.0100346</id>
        <doi>10.14569/IJACSA.2019.0100346</doi>
        <lastModDate>2019-03-30T12:17:12.5500000+00:00</lastModDate>
        
        <creator>Mohsin Raza</creator>
        
        <creator>Ayesha Imtiaz</creator>
        
        <creator>Umar Shoaib</creator>
        
        <subject>Hybrid cloud; migration; security issues; security techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The evolution of cloud infrastructures toward hybrid cloud models enables innovative business outcomes, driven by the twin pressures of greater IT agility requirements and overall cost containment. Hybrid cloud solutions combine the capabilities of public clouds with those of on-premises private cloud environments. To realize the key benefits of the hybrid cloud model, however, several security issues must be addressed. In this paper, we explain these security issues in detail, such as maintaining the trust and authenticity of information, identity management and compliance, which affect enterprises as they increasingly turn to hybrid clouds in migrating their IT to cloud technologies. The work concludes with a comparative study of different existing solutions and identifies the common problem domains and security threats.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_46-A_Review_on_Security_Issues_and_their_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Multi-Agent based Digital Rights Management System for Distance Education (DRMSDE) using JADE</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100345</link>
        <id>10.14569/IJACSA.2019.0100345</id>
        <doi>10.14569/IJACSA.2019.0100345</doi>
        <lastModDate>2019-03-30T12:17:12.5200000+00:00</lastModDate>
        
        <creator>Ajit Kumar Singh</creator>
        
        <creator>Akash Nag</creator>
        
        <creator>Sunil Karforma</creator>
        
        <creator>Sripati Mukhopadhyay</creator>
        
        <subject>Distance Education (DE); Intellectual Property Rights (IPR); Digital Rights Management (DRM); Multi-Agent System (MAS); JADE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The main objective of Distance Education (DE) is to spread quality education regardless of time and space. This objective is easily achieved with the help of technology. With the development of the World Wide Web and high-speed internet, the quality of DE has improved because Digital Content (DC) can now be distributed easily and instantly to many learners at different locations in text, audio and video formats. However, the main obstacle in digital publishing is the protection of the Intellectual Property Rights (IPR) of DC. Digital Rights Management (DRM), which manages rights over any digital creation, is the only solution to this problem. In this paper, we have made an attempt to implement a Digital Rights Management System for Distance Education, known as DRMSDE. We have identified that Multi-Agent System (MAS) based technology is very popular for such implementations. With that in mind, we have chosen one of the most popular multi-agent based tools, namely the JAVA Agent Development Framework (JADE), for our system. This paper presents an overview and the system architecture for the proposed implementation.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_45-Implementation_of_Multi_Agent_based_Digital_Rights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Approach to Analyze Algorithms with Linear O(n) Worst-Case Asymptotic Complexity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100344</link>
        <id>10.14569/IJACSA.2019.0100344</id>
        <doi>10.14569/IJACSA.2019.0100344</doi>
        <lastModDate>2019-03-30T12:17:12.5030000+00:00</lastModDate>
        
        <creator>Qazi Haseeb Yousaf</creator>
        
        <creator>Muhammad Arif Shah</creator>
        
        <creator>Rashid Naseem</creator>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Ghufran Ullah</creator>
        
        <subject>Asymptotic complexity; interval analysis; in-depth analysis; Big-Oh; crossover point </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The theoretical approach of asymptotic analysis estimates the approximate time complexity of algorithms. The worst-case asymptotic complexity classifies an algorithm into a certain class. The asymptotic complexity of an algorithm returns the degree variable of the algorithmic function while ignoring the lower-order terms. From a programming perspective, asymptotic analysis considers only the number of iterations in a loop, ignoring the statements inside and outside it. However, every statement must have some execution time. This paper provides an effective approach to analyzing algorithms belonging to the same asymptotic class. The theoretical analysis of algorithmic functions shows that the difference between the theoretical outputs of two algorithmic functions depends upon the difference between their coefficients of ‘n’ and their constant terms. This difference marks the point of behavioral change between the algorithms. The approach is applied to algorithms with linear asymptotic complexity: two algorithms are considered, having different numbers of statements outside and inside the loop. The results positively indicate the effectiveness of the proposed approach, as the tables and graphs validate the results of the derived formula.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_44-An_Effective_Approach_to_Analyze_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ATAM: Arabic Traffic Analysis Model for Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100343</link>
        <id>10.14569/IJACSA.2019.0100343</id>
        <doi>10.14569/IJACSA.2019.0100343</doi>
        <lastModDate>2019-03-30T12:17:12.4730000+00:00</lastModDate>
        
        <creator>Amani AlFarasani</creator>
        
        <creator>Tahani AlHarthi</creator>
        
        <creator>Sarah AlHumoud</creator>
        
        <subject>Data mining; machine learning; sentiment analysis; unsupervised learning; lexicon-based; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Harvesting Twitter for insight and meaning, in what is called sentiment analysis (SA), is a major trend stemming from computational linguistics and AI. Industry and academia are interested in maximizing efficiency while mining text to attain the most current available data and crowdsourced opinions. In this study, we present the ATAM model for traffic analysis using data available on Twitter. The model comprises five components, starting with data streaming and collection and ending with road incident prediction through classification. The classification of data is done using a lexicon-based method. The predicted classes are: safe, needs attention, dangerous, and neutral. The data were collected over three months in the city of Riyadh, Saudi Arabia. The model was applied to 10k tweets, achieving an overall classification accuracy of 82% across all four classes.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_43-ATAM_Arabic_Traffic_Analysis_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speaker Identification based on Hybrid Feature Extraction Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100342</link>
        <id>10.14569/IJACSA.2019.0100342</id>
        <doi>10.14569/IJACSA.2019.0100342</doi>
        <lastModDate>2019-03-30T12:17:12.4270000+00:00</lastModDate>
        
        <creator>Feras E. Abualadas</creator>
        
        <creator>Akram M. Zeki</creator>
        
        <creator>Muzhir Shaban Al-Ani</creator>
        
        <creator>Az-Eddine Messikh</creator>
        
        <subject>Speaker identification; biometrics; speaker verification; speaker recognition; text-independent; text-dependent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>One of the most exciting areas of signal processing is speech processing; speech contains many features or characteristics that can discriminate the identity of a person. The human voice is considered one of the important biometric characteristics that can be used for person identification. This work studies the effect on speaker identification of appropriate features extracted from various levels of the discrete wavelet transform (DWT), of the concatenation of two techniques (discrete wavelet and curvelet transforms), and of reducing the number of features using principal component analysis (PCA). A backpropagation (BP) neural network is introduced as the classifier.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_42-Speaker_Identification_based_on_Hybrid_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Design of a Variable Coefficient Fractional Order PID Controller by using Heuristic Optimization Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100341</link>
        <id>10.14569/IJACSA.2019.0100341</id>
        <doi>10.14569/IJACSA.2019.0100341</doi>
        <lastModDate>2019-03-30T12:17:12.4100000+00:00</lastModDate>
        
        <creator>Omer Aydogdu</creator>
        
        <creator>Mehmet Korkmaz</creator>
        
        <subject>Artificial immune system; automatic voltage regulator; particle swarm optimization; variable coefficient fractional order PID controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This paper deals with the optimal design of a new type of Variable coefficient Fractional Order PID (V-FOPID) controller using heuristic optimization algorithms. Although many studies have mainly paid attention to correcting the performance of the system’s transient and steady state responses together, few studies treat the transient and steady state performances separately. It is obvious that handling these two cases independently will yield a better control response. However, there are no studies using different controller parameters for the transient and steady state responses of the system in fractional order control systems. The major contribution of this paper is to fill this gap by presenting a novel approach. To justify the claimed efficiency of the proposed V-FOPID controller, variable coefficient controllers and classical ones are tested through a set of simulations on the control of an Automatic Voltage Regulator (AVR) system. According to the obtained results, the proposed V-FOPID controller was first observed to be superior to the classical PID, Variable coefficient PID (V-PID) and classical Fractional Order PID (FOPID) controllers. Secondly, the Particle Swarm Optimization (PSO) algorithm showed its advantage over the Artificial Immune System (AIS) algorithm for the controller design.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_41-Optimal_Design_of_a_Variable_Coefficient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Opportunistic Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100340</link>
        <id>10.14569/IJACSA.2019.0100340</id>
        <doi>10.14569/IJACSA.2019.0100340</doi>
        <lastModDate>2019-03-30T12:17:12.3970000+00:00</lastModDate>
        
        <creator>Saleh A. Khawatreh</creator>
        
        <creator>Mustafa Abdullah</creator>
        
        <creator>Enas N. Alzubi</creator>
        
        <subject>Opportunistic routing; PSR; ExOR; proactive routing; source routing; tree-based routing; lightweight routing; wireless networks; ad hoc networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Opportunistic Routing (OR) has attracted much attention in the research field of multi-hop wireless networks because, unlike traditional routing protocols such as Distance Vector (DV) and Link State (LS), it needs only a lightweight protocol with a strong source-routing basis. In this paper, the development of opportunistic routing is presented, covering Selection Diversity Forwarding (SDF), Extremely Opportunistic Routing (ExOR), Proactive Source Routing (PSR), the Cooperative Opportunistic Routing scheme in Mobile Ad Hoc Networks (CORMAN), and Zone-based Proactive Source Routing (ZPSR). Simulation tests using Network Simulator 2 (ns2) show the effectiveness of opportunistic routing protocols across various metrics, including control overhead, Packet Delivery Ratio (PDR), throughput, and end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_40-A_Survey_on_Opportunistic_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Artificial Neural Network and Information Gain in Building Case-Based Reasoning for Telemarketing Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100339</link>
        <id>10.14569/IJACSA.2019.0100339</id>
        <doi>10.14569/IJACSA.2019.0100339</doi>
        <lastModDate>2019-03-30T12:17:12.3800000+00:00</lastModDate>
        
        <creator>S.M.F.D Syed Mustapha</creator>
        
        <creator>Abdulmajeed Alsufyani</creator>
        
        <subject>Artificial neural network; prediction model; telemarketing; shannon entropy; feature selection; case-based reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Traditionally, case-based reasoning (CBR) has been used as an advanced technique for representing expert knowledge and reasoning. However, for stochastic business data such as customer behavior and user preferences, the knowledge cannot be extracted directly from the data to build the cases used in reasoning for prediction. An Artificial Neural Network (ANN), known for its ability to build models that predict unprecedented business data, is used together with Shannon entropy and Information Gain (IG) to identify the key features. Eight of the 17 attributes in the telemarketing data have been identified as key features, and these are used in building the CBR. The weightage of the key features in the cases is obtained from the IG values. The mechanism for creating cases based on the input from the ANN is discussed, and the integration process between ANN and CBR is given. This integration shows that the two techniques complement each other in building a model for predicting which customers would subscribe to a newly promoted banking service called a “term deposit”.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_39-Application_of_Artificial_Neural_Network..pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Process Capability Indices under Non-Normality Conditions using Johnson Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100338</link>
        <id>10.14569/IJACSA.2019.0100338</id>
        <doi>10.14569/IJACSA.2019.0100338</doi>
        <lastModDate>2019-03-30T12:17:12.3470000+00:00</lastModDate>
        
        <creator>Suboohi Safdar</creator>
        
        <creator>Dr. Ejaz Ahmed</creator>
        
        <creator>Dr. Tahseen Ahmed Jilani</creator>
        
        <creator>Dr. Arfa Maqsood</creator>
        
        <subject>Johnson curve; percentiles; simulation; exact method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Process capability indices (PCIs) quantify the ability of a process to produce on-target and within-specification performance. Basic indices designed for normal processes give flawed results for non-normal processes. Numerous methods have been proposed to estimate PCIs for non-normal processes, some of which are based on transformation methods. The Johnson system comprises three curve types that translate a continuous non-normal distribution to a normal one. The aim of this paper is to estimate four basic indices for non-normal processes using the Johnson system with a single straightforward procedure. The efficacy of the proposed approach can be assessed for all three Johnson curves (SB, SU, SL), but only the results for SU are presented in this paper. PCIs for a data set are estimated, and percentiles are obtained by the proposed exact method based on the selected Johnson density function; earlier approaches relied on approximate methods without any prior knowledge of the density function of the non-normal process. The results are compared with other existing methods for estimating PCIs for non-normal processes. Statistical analysis shows that this modification improves the process capability indices.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_38-Process_Capability_Indices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Assessment to Achieve Maximum Efficiency in Optimizing Software Failures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100337</link>
        <id>10.14569/IJACSA.2019.0100337</id>
        <doi>10.14569/IJACSA.2019.0100337</doi>
        <lastModDate>2019-03-30T12:17:12.3330000+00:00</lastModDate>
        
        <creator>Jagadeesh Medapati</creator>
        
        <creator>Prof Anand Chandulal J</creator>
        
        <creator>Prof Rajinikanth T V</creator>
        
        <subject>Software reliability; failure rate; reviews; software cost; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Software reliability is a specialized area of software engineering that deals with the identification of failures during software development. Effective reliability analysis helps to quantify the number of failures occurring during the development phase, which in turn aids in correcting them. This paper presents a novel assessment to detect and eliminate actual software failures efficiently. The approach fits an exponential log-normal distribution within a Generalized Gamma Mixture Model (GGMM) and estimates two parameters using the Maximum Likelihood Estimate (MLE). Standard evaluation metrics such as Mean Square Error (MSE), Coefficient of Determination (R2), Sum of Squares (SSE), and Root Mean Square Error (RMSE) were calculated. The experimentation was carried out on five benchmark datasets, and the results show that the proposed technique identifies actual failures on par with existing models. This novel software reliability growth model is more effective in identifying failures and can facilitate software organizations in releasing bug-free software just in time.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_37-A_Novel_Assessment_to_Achieve_Maximum_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimentation for Modular Robot Simulation by Python Coding to Establish Multiple Configurations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100336</link>
        <id>10.14569/IJACSA.2019.0100336</id>
        <doi>10.14569/IJACSA.2019.0100336</doi>
        <lastModDate>2019-03-30T12:17:12.3170000+00:00</lastModDate>
        
        <creator>Muhammad Haziq Hasbulah</creator>
        
        <creator>Fairul Azni Jafar</creator>
        
        <creator>Mohd. Hisham Nordin</creator>
        
        <creator>Kazutaka Yokota</creator>
        
        <subject>Dtto robot; simulation; configuration; locomotion; orientation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Most Modular Self-reconfigurable (MSR) robots are developed to achieve different locomotion gaits. MSR robotics is an approach in which a group of identical robotic modules connect together and perform specific tasks. In this study, a 3D-printed MSR robot named Dtto–Explorer Modular Robot was used to investigate the propagations achievable with three Dtto robot modules. As the Dtto robot is developed based on the Modular Transformer (M-TRAN), it has the same number of Degrees of Freedom (DOF) as M-TRAN, namely 2 DOF. Hence, this study was conducted to determine the variety of configurations that can be formed by multiple robot modules with only 2 DOF. The robot propagation was simulated in the Virtual Robot Experimentation Platform (V-REP) software. The simulation results show that the Dtto MSR robot can propagate multiple configurations once the robot modules are connected to each other in different attachment orientations, making it suitable for further research on MSR robot architecture.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_36-Experimentation_for_Modular_Robot_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low-fidelity Prototype Design for Serious Game for Slow-reading Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100335</link>
        <id>10.14569/IJACSA.2019.0100335</id>
        <doi>10.14569/IJACSA.2019.0100335</doi>
        <lastModDate>2019-03-30T12:17:12.2870000+00:00</lastModDate>
        
        <creator>Saffa Raihan Zainal Abidin</creator>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Noraidah Sahari Ashaari</creator>
        
        <subject>Serious game; brain-based learning; low-fidelity prototype; paper prototype; chauffeured prototype; think aloud protocol; slow-reading students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Serious games are an alternative teaching aid increasingly adopted by teachers and parents. Their widespread use has fundamentally changed the way children live and learn, with a positive impact on achievement and increased motivation in learning. However, not all serious game designs are suitable for slow-reading students, who differ slightly from other students in cognitive potential and struggle to meet academic demands in class. Therefore, the main objective of this study is to produce low-fidelity prototypes involving target users as early as the design process. The study focuses on producing storyboard content suitable for slow-reading students in order to save time and cost in game model development. It uses a child-centered design (CCD) method involving paper prototypes, a chauffeured prototype, think-aloud protocols, and observations. The results of this study are low-fidelity prototypes in the form of computerized storyboards that have been verified and will be used for heuristic assessments. These low-fidelity prototypes are expected to give an early look at the game and help researchers develop high-fidelity prototypes.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_35-Low_Fidelity_Prototype_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location Prediction in a Smart Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100334</link>
        <id>10.14569/IJACSA.2019.0100334</id>
        <doi>10.14569/IJACSA.2019.0100334</doi>
        <lastModDate>2019-03-30T12:17:12.2700000+00:00</lastModDate>
        
        <creator>Wael Ali Alosaimi</creator>
        
        <creator>Ahmed Binmahfoudh</creator>
        
        <creator>Roobaea Alroobaea</creator>
        
        <creator>Atef Zaguia</creator>
        
        <subject>Location prediction; context; pattern; Bayesian network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Context prediction, and especially location prediction, is an important feature for improving the performance of smart systems. Predicting the next location or context of the user makes the system proactive, so it can offer suitable services to the user without the user’s involvement. In this paper, a new approach is presented based on the combination of a pattern technique and a Bayesian network to predict the user’s next location. The approach was tested on a real data set, and the model achieved 89% next-location prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_34-Location_Prediction_in_a_Smart_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microsatellite’s Detection using the S -Transform Analysis based on the Synthetic and Experimental Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100333</link>
        <id>10.14569/IJACSA.2019.0100333</id>
        <doi>10.14569/IJACSA.2019.0100333</doi>
        <lastModDate>2019-03-30T12:17:12.2530000+00:00</lastModDate>
        
        <creator>Soumaya Zribi</creator>
        
        <creator>Imen Messaoudi</creator>
        
        <creator>Afef Elloumi Oueslati</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>DNA sequence; microsatellites; synthetic and experimental coding; s-transform; bioinformatic tools; Empirical Mode and Wavelet Decomposition (EMWD); Parametric Spectral Estimation (PSE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>A microsatellite in a genomic DNA sequence, or Short Tandem Repeat (STR), is a class of tandem repeat with a repeated pattern of 2-6 base pairs adjacent to each other. Detecting specific tandem repeats is an important part of genetic disease identification and is also used in DNA fingerprinting and evolutionary studies. Many tools based on string matching have been developed to detect microsatellites. However, these tools rely on prior information about repetitions in the sequence, which is not always obtainable. Signal processing techniques were therefore suggested to overcome the limitations of the bioinformatic tools. In this paper, we use a new variant of the S-Transform, which we apply to short-tandem-repeat signals. These signals are first obtained by applying different coding techniques to the DNA sequences. To further study the performance of the proposed method, we establish a comparison with different bioinformatics approaches (TRF, Mreps, Etandem) and three other signal processing methods: the Adaptive S-Transform (AST), the Empirical Mode and Wavelet Decomposition (EMWD), and the Parametric Spectral Estimation (PSE) considering the AR model. This study indicates that our approach outperforms the earlier methods in identifying short tandem repeats; in fact, our method detects the exact number and positions of the trinucleotides present in the tested real DNA sequence.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_33-Microsatellites_Detection_using_the_S_Transform_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Aware Routing Hole Detection Algorithm in the Hierarchical Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100332</link>
        <id>10.14569/IJACSA.2019.0100332</id>
        <doi>10.14569/IJACSA.2019.0100332</doi>
        <lastModDate>2019-03-30T12:17:12.2230000+00:00</lastModDate>
        
        <creator>Najm Us Sama</creator>
        
        <creator>Kartinah Bt Zen</creator>
        
        <creator>Atiq Ur Rahman</creator>
        
        <creator>Aziz Ud Din</creator>
        
        <subject>Wireless sensor network; routing protocol; hierarchical routing; routing hole problem; routing hole detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Minimizing communication overhead through optimal path selection is a challenging issue in Wireless Sensor Network (WSN) routing protocols. Hierarchical routing optimizes energy utilization by distributing the workload among different clusters, but many-to-one multi-hop hierarchical routing results in excessive energy expenditure near the sink and leads to early energy exhaustion of those nodes, which can cause a routing hole around the base station. Data routed along the hole boundary nodes then exhausts their energy prematurely, enlarging the hole in the network. Detecting holes saves the additional energy consumed around the hole and limits its size. In this paper, a novel energy-efficient routing hole detection (EEHD) algorithm is presented; on detection of a routing hole, periodic re-clustering is performed to avoid long detour paths. Extensive simulations in MATLAB reveal that EEHD performs better than other conventional routing hole detection techniques, such as BCP and BDCIS.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_32-Energy_Aware_Routing_Hole_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regularization Activation Function for Extreme Learning Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100331</link>
        <id>10.14569/IJACSA.2019.0100331</id>
        <doi>10.14569/IJACSA.2019.0100331</doi>
        <lastModDate>2019-03-30T12:17:12.2070000+00:00</lastModDate>
        
        <creator>Noraini Ismail</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Noor Azah Samsudin</creator>
        
        <subject>Extreme learning machine; prediction; neural networks; regularization; time series</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The Extreme Learning Machine (ELM) algorithm, based on single-hidden-layer feedforward neural networks, has been shown to be among the best time series prediction techniques, offering good generalization performance with extremely fast learning speed. However, ELM faces an overfitting problem that can affect model quality because it is implemented using an empirical risk minimization scheme. Therefore, this study aims to improve ELM by introducing Activation Function Regularization in ELM, called RAF-ELM. The experiment was conducted in two phases. First, the performance of the modified RAF-ELM was investigated using four activation functions: Sigmoid, Sine, Tribas, and Hardlim. In this study, input weights and biases for the hidden layer are randomly selected, whereas the best number of hidden-layer neurons is determined in the range 5 to 100, using UCI benchmark datasets. The best performance was obtained with 99 neurons and the Sigmoid activation function. The proposed method improved accuracy and learning speed by up to 0.016205 MAE and 0.007 seconds of processing time, respectively, compared with conventional ELM, and improved accuracy by up to 0.0354 MSE compared with the state-of-the-art algorithm. The second experiment validates the proposed RAF-ELM using 15 regression benchmark datasets, comparing it with four neural network techniques: conventional ELM, Back Propagation, Radial Basis Function, and Elman. The results show that RAF-ELM obtains the best accuracy among these techniques on various time series data from various domains.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_31-Regularization_Activation_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Agglomerative Hierarchical Clustering with Association Rules for Discovering Climate Change Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100330</link>
        <id>10.14569/IJACSA.2019.0100330</id>
        <doi>10.14569/IJACSA.2019.0100330</doi>
        <lastModDate>2019-03-30T12:17:12.1770000+00:00</lastModDate>
        
        <creator>Mahmoud Sammour</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Zurina Muda</creator>
        
        <creator>Roliana Ibrahim</creator>
        
        <subject>Hierarchical clustering; dynamic time warping; ground-level ozone; Apriori Association Rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Ozone analysis is the process of identifying meaningful patterns that would facilitate the prediction of future trends. One of the common techniques used for ozone analysis is clustering, a popular method that contributes significant knowledge to time series data mining by aggregating similar data into specific groups. However, identifying significant patterns in ground-level ozone is quite a challenging task, especially after the clustering step. This paper presents pattern discovery for ground-level ozone using a proposed method, Agglomerative Hierarchical Clustering with Dynamic Time Warping (DTW) as the distance measure, with patterns extracted using the Apriori Association Rules (AAR) algorithm. The experiment is conducted on a Malaysian ozone dataset collected from Putrajaya for the year 2006. The results show 20 patterns influencing high ozone with high confidence (1.00), which can be grouped into four meaningful patterns: high temperature with low nitrogen oxide; high nitrogen oxide and nitrogen dioxide; high nitrogen oxide with high carbon oxide; and high carbon oxide. These patterns support decision making on how much carbon oxide and nitrogen oxide must be reduced in order to avoid high surface ozone.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_30-An_Agglomerative_Hierarchical_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supply Chain Modeling and Simulation using SIMAN ARENA a Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100329</link>
        <id>10.14569/IJACSA.2019.0100329</id>
        <doi>10.14569/IJACSA.2019.0100329</doi>
        <lastModDate>2019-03-30T12:17:12.1600000+00:00</lastModDate>
        
        <creator>Azougagh Yassine</creator>
        
        <creator>Benhida Khalid</creator>
        
        <creator>Elfezazi Said</creator>
        
        <subject>Supply Chain; transport; simulation; modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Controlling a supply chain often requires identifying its various constraints and optimizing the different links and parameters associated with its functioning. To attain these goals, it is vital to understand the diversity and complexity of the supply chain and to anticipate its behavior, which requires pertinent modeling that provides the information necessary to evaluate supply chain performance. This paper focuses on modeling and simulating a case study of a supply chain using Rockwell’s SIMAN ARENA software, covering mainly transport and the different operations in this chain. The purpose is to create simulation models and show how to use them in a case study to diagnose and master the operation and functioning of this supply chain. The simulation models determine the performance of the supply chain by calculating the transportation time of each trip, the number of trips, the number of transported fertilizer and sulfur wagons and unloaded acid tanks, and finally the waiting times in the train station, in order to optimize these performance indicators.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_29-Supply_Chain_Modeling_and_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Grading Systems for Programming Assignments: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100328</link>
        <id>10.14569/IJACSA.2019.0100328</id>
        <doi>10.14569/IJACSA.2019.0100328</doi>
        <lastModDate>2019-03-30T12:17:12.1300000+00:00</lastModDate>
        
        <creator>Hussam Aldriye</creator>
        
        <creator>Asma Alkhalaf</creator>
        
        <creator>Muath Alkhalaf</creator>
        
        <subject>Automated grading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Automated grading of programming assignments is becoming more and more important, especially with the emergence of Massive Open Online Courses. Many techniques and systems are currently used for automated grading in educational institutions. This article provides a literature review of these automated grading systems and techniques, highlighting the differences between them and addressing their issues, advantages, and disadvantages. The review shows that these systems have limitations due to difficulty of use by students, as noticed by some course instructors. Some of these problems stem from UI/UX difficulties, while others are due to beginner syntax errors and language barriers. Finally, the review shows the need to fill this gap by building new systems that are friendlier towards beginner programmers, have better localization, and offer an easier user experience.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_28-Automated_Grading_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Doppler Effects in Underwater Acoustic Channels using Parabolic Expansion Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100327</link>
        <id>10.14569/IJACSA.2019.0100327</id>
        <doi>10.14569/IJACSA.2019.0100327</doi>
        <lastModDate>2019-03-30T12:17:12.1130000+00:00</lastModDate>
        
        <creator>Ranjani G</creator>
        
        <creator>Sadashivappa G</creator>
        
        <subject>Doppler effect; underwater communication; acoustic channel models; parabolic expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Underwater communication systems play an important role in understanding various phenomena that take place within our vast oceans. They can be used as an integral tool in countless applications ranging from environmental monitoring to the gathering of oceanographic data, marine archaeology, and search and rescue missions. Acoustic communication is the viable solution for communication in the highly attenuating underwater environment. However, these systems pose a number of challenges for reliable data transmission, among which the non-negligible Doppler effect emerges as a major factor. Supporting reliable high-data-rate communication requires understanding the channel behavior, and as sea trials are expensive, simulators are needed to study it. Modeling this channel involves solving wave equations and validating against experimental data for that portion of the sea. The parabolic expansion model is a wave-theory-based acoustic channel model that applies Padé coefficients and Fourier coefficients as expansion functions to solve the wave equations. This work attempts to characterize the impact of the Doppler effect on the underwater acoustic channel using parabolic expansion models.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_27-Analysis_of_Doppler_Effects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>V-ITS: Video-based Intelligent Transportation System for Monitoring Vehicle Illegal Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100326</link>
        <id>10.14569/IJACSA.2019.0100326</id>
        <doi>10.14569/IJACSA.2019.0100326</doi>
        <lastModDate>2019-03-30T12:17:12.0830000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <subject>Computer vision; intelligent traffic management system; traffic monitoring; vehicle tracking from video; image processing; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Vehicle monitoring is a challenging task for a video-based intelligent transportation system (V-ITS). Nowadays, V-ITS systems have a significant socioeconomic impact on the development of smart cities, and there is a constant demand to monitor different traffic parameters. Traffic accidents have increased throughout the world by 1.7%, and the rise in accidents and deaths is largely due to people who do not abide by traffic rules. To address these challenges, an improved V-ITS system is developed in this paper to detect and track vehicles and drivers’ activities during highway driving. This improved V-ITS system performs automatic traffic management that helps prevent traffic accidents, providing real-time detection of immediate line overrun, speed limit overrun, and yellow-line driving. To develop the V-ITS system, a pre-trained convolutional neural network (CNN) model with a 4-layer architecture was built, and a deep belief network (DBN) model was then utilized to recognize illegal activities. The system was implemented mainly with OpenCV and Python tools, and the free online GRAM-RTM data sets were used to test its performance. The overall significance of this intelligent V-ITS system is comparable to other state-of-the-art systems. The real-time experimental results indicate that the V-ITS system can be used to reduce the number of accidents and ensure the safety of passengers as well as pedestrians.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_26-V_ITS_Video_based_Intelligent_Transportation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Telemedicine Implementations in Ghana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100325</link>
        <id>10.14569/IJACSA.2019.0100325</id>
        <doi>10.14569/IJACSA.2019.0100325</doi>
        <lastModDate>2019-03-30T12:17:12.0500000+00:00</lastModDate>
        
        <creator>E. T. Tchao</creator>
        
        <creator>Isaac Acquah</creator>
        
        <creator>S. D. Kotey</creator>
        
        <creator>C. S. Aggor</creator>
        
        <creator>J. J. Kponyo</creator>
        
        <subject>Telemedicine; Ghana; m-health; store-and-forward; information and communication technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Most Sub-Saharan African countries, including Ghana, experience a shortage of medical professionals, especially in rural areas. This is mainly caused by the low intake of students into medical schools, due to inadequate training facilities, and by the emigration of medical graduates to foreign countries in search of new opportunities and better living standards. To reduce the effect of this shortage, telemedicine is being implemented in certain areas to provide healthcare. Although advances are being made in information and communication technologies, telemedicine in developing countries still needs to be upgraded and extended to cover more areas. Some categories of telemedicine have little to no implementation in Ghana due to a lack of resources, limited government support, and the absence of structured frameworks and policies to ensure their implementation. This paper presents telemedicine applications and implementations in Ghana to date and suggests recommendations to mitigate some of the challenges impeding the advancement of telemedicine.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_25-On_Telemedicine_Implementations_in_Ghana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Opportunities and the Limitations of Using the Independent Post-Editor Technology in Translation Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100324</link>
        <id>10.14569/IJACSA.2019.0100324</id>
        <doi>10.14569/IJACSA.2019.0100324</doi>
        <lastModDate>2019-03-30T12:17:12.0500000+00:00</lastModDate>
        
        <creator>Burcu T&#220;RKMEN</creator>
        
        <creator>Muhammed Zahit CAN</creator>
        
        <subject>Post-editors; machine translation; translation education; computer-assisted translation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>A new mechanical function known as post-editing, which helps to correct the imperfections of raw machine translation output, has been introduced to the translation market. While this function is commonly used as an integral part of machine translation, it can also be used on its own for correcting non-translated texts. The main purpose of this study is to determine what contributions the use of independent post-editors during academic translation education could make to the competences of translation students. A model course application and survey were conducted with the participation of students from translation studies departments, and interrater reliability was used in this study. The results show that most of the students were not familiar with the use of independent post-editors during the translation process. The research provides new insights into the contributions of post-editor technology to translation education. The findings also reflect the contributions of post-editor technology to translation quality in terms of speed and time management and the accuracy of punctuation, abbreviations, and grammar. It is also determined that post-editors contribute to the competencies of translation students. As a result, it is suggested that post-editors may be used as educational material in translation education. Indeed, the results call for further studies on the use of post-editors as educational materials.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_24-The_Opportunities_and_the_Limitations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Approach for Predicting Nicotine Dependence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100323</link>
        <id>10.14569/IJACSA.2019.0100323</id>
        <doi>10.14569/IJACSA.2019.0100323</doi>
        <lastModDate>2019-03-30T12:17:12.0370000+00:00</lastModDate>
        
        <creator>Mohammad Kharabsheh</creator>
        
        <creator>Omar Meqdadi</creator>
        
        <creator>Mohammad Alabed</creator>
        
        <creator>Sreenivas Veeranki</creator>
        
        <creator>Ahmad Abbadi</creator>
        
        <creator>Sukaina Alzyoud</creator>
        
        <subject>Machine learning; nicotine dependency; Women; Waterpipe; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>An examination of the ability of machine learning methodologies to classify women waterpipe (WP) smokers&#8217; level of nicotine dependence is proposed in this work. In this study, we developed a classifier that predicts the level of nicotine dependence of female WP tobacco smokers using a set of novel smoker-relevant features, including age, residency, and educational level. The evaluation results show that our approach achieves a recall of 82% when applied to a dataset of female WP smokers in Jordan.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_23-A_Machine_Learning_Approach_for_Predicting_Nicotine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Physical Document Management using NFC with Verification for Security and Privacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100322</link>
        <id>10.14569/IJACSA.2019.0100322</id>
        <doi>10.14569/IJACSA.2019.0100322</doi>
        <lastModDate>2019-03-30T12:17:12.0370000+00:00</lastModDate>
        
        <creator>Z. Zainal Abidin</creator>
        
        <creator>N.A. Zakaria</creator>
        
        <creator>Z. Abal Abas</creator>
        
        <creator>A.A. Anuar</creator>
        
        <creator>N. Harum</creator>
        
        <creator>M.R. Baharon</creator>
        
        <creator>Z. Ayop</creator>
        
        <subject>Document file management system; physical document files detection; near-field communication (NFC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This study focuses on the implementation of physical document management for an organization using Near-Field Communication (NFC), since it provides faster detection when tracking items by location. Current physical document management operates using barcodes. However, barcodes can be duplicated, which makes them insecure and can lead to forgery and unauthorized modification. Therefore, the purpose of the proposed physical document management system is to provide better administrative control in an organization through the use of a verification mechanism. Current NFC-based systems, however, lack a verification process. Thus, an enhancement of physical document management with a verification process is proposed, and a self-developed system is built using C#, SQLite, Visual Studio, an NFC tag and an NFC reader (ACR122U-A9). The new system requires an employee to log in by scanning an ID tag followed by the physical document File tag, with both tags scanned at the NFC reader. Then, information about the physical file coordinator and the location status of the physical document file is displayed. The significance of this study is to protect confidential documents and improve administrative control through dual verification, and to produce a database for monitoring real-time detection data.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_22-Enhanced_Physical_Document_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Server Security using Bio-Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100321</link>
        <id>10.14569/IJACSA.2019.0100321</id>
        <doi>10.14569/IJACSA.2019.0100321</doi>
        <lastModDate>2019-03-30T12:17:12.0200000+00:00</lastModDate>
        
        <creator>Zarnab Khalid</creator>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Aysha Shabbir</creator>
        
        <creator>Maryam Shabbir</creator>
        
        <creator>Fahad Ahmad</creator>
        
        <creator>Jaweria Manzoor</creator>
        
        <subject>Cloud computing; biometrics; fingerprints; encryption and decryption methods; cryptography keys; bio-cryptography; blockchain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Data security is becoming more important in cloud computing. Biometrics is a computerized method of identifying a person based on physiological characteristics; among the features measured are the face, fingerprints, hand geometry, DNA, etc. Biometrics can fortify cloud server storage through bio-cryptography. A bio-cryptography key is used to secure the scrambled data in the cloud environment; the bio-cryptography technique uses a fingerprint, voice or iris as a key factor to secure data encryption and decryption on the cloud server. In this paper, the security of the biometric system through cloud computing is discussed, along with improvements to its performance to prevent criminals from accessing the data. Biometrics is a genuine authentication feature for the cloud provider. A cryptography algorithm is explained using blockchain technology to overcome security issues; the blockchain technology will provide more protection through cryptographic keys to secure biometric data.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_21-Cloud_Server_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opinion Mining: An Approach to Feature Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100320</link>
        <id>10.14569/IJACSA.2019.0100320</id>
        <doi>10.14569/IJACSA.2019.0100320</doi>
        <lastModDate>2019-03-30T12:17:12.0030000+00:00</lastModDate>
        
        <creator>Shafaq Siddiqui</creator>
        
        <creator>M. Abdul Rehman</creator>
        
        <creator>Sher M. Daudpota</creator>
        
        <creator>Ahmad Waqas</creator>
        
        <subject>Opinion mining; feature engineering; machine learning; classification; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Sentiment analysis, or opinion mining, refers to a process of identifying and categorizing the subjective information in source materials using natural language processing (NLP), text analytics and statistical linguistics. The main purpose of opinion mining is to determine the writer&#8217;s attitude towards a particular topic under discussion. This is done by identifying the polarity of a particular text paragraph using different feature sets. Feature engineering in the pre-processing phase plays a vital role in improving the performance of a classifier. In this paper we empirically evaluated various feature weighting mechanisms against well-established classification techniques for opinion mining, i.e. Naive Bayes-Multinomial for binary polarity cases and SVM-LIN for multiclass cases. To evaluate these classification techniques we trained the classifiers on the publicly available Rotten Tomatoes movie reviews dataset, which is widely used by the research community for this purpose. The empirical experiment concludes that the feature set containing noun, verb, adverb and adjective lemmas with the feature-frequency (FF) function performs best among all feature settings, with 84% and 85% correctly classified test instances for Na&#239;ve Bayes and SVM, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_20-Opinion_Mining_an_Approach_to_Feature_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhancement on Mobile Social Network using Social Link Prediction with Improved Human Trajectory Internet Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100318</link>
        <id>10.14569/IJACSA.2019.0100318</id>
        <doi>10.14569/IJACSA.2019.0100318</doi>
        <lastModDate>2019-03-30T12:17:11.9900000+00:00</lastModDate>
        
        <creator>B. Suryakumar</creator>
        
        <creator>Dr. E. Ramadevi</creator>
        
        <subject>Mobile social network; improved multi-context trajectory embedding model with service usage classification model; social link prediction; machine learning; cosine coefficient; Jaccard coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Generally, mobile social networks have missing and unauthentic links. Predicting those links is one of the major problems in understanding the relationship between two nodes and recommending potential links to users, derived from the history of user-link interactions and their contextual information. The recommendation problem can be modeled as prediction of future links between users. Many research works have been developed to understand the relationship between nodes and to construct models for predicting missing or suspicious links. Among those, the Improved Multi-Context Trajectory Embedding Model with Service Usage Classification Model (IMC-TEM-SUCM) provides better enhancement of human trajectory data mining by classifying internet traffic. However, this method still requires predicting the relationship between nodes and social links. Hence, in this article, IMC-TEM-SUCM is extended with a Social Link Prediction (SLP) mechanism for identifying the relationship between two nodes and predicting stable links. In this technique, a number of nodal features are considered and their influence on the link prediction problem for Foursquare and Gowalla is examined. The extended network is used to compute two features, optimism and reputation, that depict a node&#8217;s characteristics in a signed network. After that, meta-path-based features are considered and the influence of route length on the link prediction problem is examined. Moreover, link prediction is performed using machine learning classification algorithms on the extracted node-based and meta-path-based features. Also, cosine coefficient and Jaccard coefficient similarity-based techniques are used to compute the similarity index between any two nodes; a higher similarity indicates a higher chance of a link forming between them. Finally, the performance effectiveness of the proposed model is evaluated through experimental results using different real-world datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_18-An_Enhancement_on_Mobile_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using FDD for Small Project: An Empirical Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100319</link>
        <id>10.14569/IJACSA.2019.0100319</id>
        <doi>10.14569/IJACSA.2019.0100319</doi>
        <lastModDate>2019-03-30T12:17:11.9900000+00:00</lastModDate>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <creator>Faiza Anwer</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Ahmed Iqbal</creator>
        
        <creator>Ashfaq Ahmad Jan</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <subject>Agile models; feature driven development; FDD; empirical evaluation; comparative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Empirical analysis evaluates a proposed system through practical experience and reveals its pros and cons. This type of evaluation is one of the most widely used validation approaches in software engineering. Conventional software process models performed well until the mid-1990s but were then gradually replaced by agile methodologies. This happened due to the various features the agile family offered that conventional models failed to provide. However, besides their advantages, agile models also have weaknesses in some areas. To get the most benefit from any agile model, it is necessary to eliminate its weaknesses by customizing its development structure. Feature Driven Development (FDD) is one of the widely used agile models in the software industry, particularly for large-scale projects. This model has been criticized by many researchers due to weaknesses such as explicit dependency on experienced staff, little or no guidance for requirement gathering, a rigid structure for accommodating requirement changes, and a heavy development structure. All these weaknesses make FDD suitable only for large-scale projects where requirements are less likely to change. This paper deals with the empirical evaluation of FDD during the development of a small-scale web project, so that the areas and practices of this model that make it suitable only for large projects can be identified with empirical proof. For effective evaluation, the results of the FDD case study are compared with a published case study of Extreme Programming (XP), which is widely used for small-scale projects.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_19-Using_FDD_for_Small_Project.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managing and Reducing Handoffs Latency in Wireless Local Area Networks using Multi-Channel Virtual Access Points</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100317</link>
        <id>10.14569/IJACSA.2019.0100317</id>
        <doi>10.14569/IJACSA.2019.0100317</doi>
        <lastModDate>2019-03-30T12:17:11.9730000+00:00</lastModDate>
        
        <creator>Kamran Javed</creator>
        
        <creator>Fowad Talib</creator>
        
        <creator>Mubeen Iqbal</creator>
        
        <creator>Asif Hussain Khan</creator>
        
        <subject>Technology; mobility; heterogeneous; bandwidth; video streaming; mobile network; VoIP; handoffs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This is the era of computer technology and related hybrid disciplines emerging as a multi-impact force in the technological world. As the number of technology users increases, expectations of the technology rise as well: users need high-speed networks to support high-speed devices, especially as handheld computers flood the market. High data transfer rates should be sufficient to support next-generation wireless network environments. Mobility is a further factor added to high-speed connectivity requirements. For many applications, users would like a network that is heterogeneous in nature, with high availability and high bandwidth, to avoid issues in real-time applications and video streaming, including VoIP and multimedia over mobile networks. Mobile communication access thus relies massively on continuous network availability, which is achieved through handoffs that ensure the seamless transfer of a device from one AP to another. In this paper, we present a novel approach of multi-channel virtual access points that will nullify or reduce handoff latency.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_17-Managing_and_Reducing_Handoffs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosis of Parkinson’s Disease based on Wavelet Transform and Mel Frequency Cepstral Coefficients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100315</link>
        <id>10.14569/IJACSA.2019.0100315</id>
        <doi>10.14569/IJACSA.2019.0100315</doi>
        <lastModDate>2019-03-30T12:17:11.9570000+00:00</lastModDate>
        
        <creator>Taoufiq BELHOUSSINE DRISSI</creator>
        
        <creator>Soumaya ZAYRIT</creator>
        
        <creator>Benayad NSIRI</creator>
        
        <creator>Abdelkrim AMMOUMMOU</creator>
        
        <subject>Parkinson disease; discrete wavelet transform; MFCC; Support Vector Machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The aim of the study presented in this paper is to determine the appropriate analyzing wavelet, combined with the extraction of MFCC coefficients, to assist in the diagnosis of Parkinson&#39;s disease. The analysis is based on a database of 18 healthy subjects and 20 Parkinsonian patients. The suggested processing transforms the speech signal by the wavelet transform, testing several sorts of wavelets, extracts Mel Frequency Cepstral Coefficients (MFCC) from the signals, and applies a support vector machine (SVM) as the classifier. The test results reveal that the best recognition rate, 86.84%, is obtained by the wavelets of level 2 at the 3rd scale (Daubechies, Symlet, ReverseBior or BiorSpline) in combination with MFCC and SVM.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_15-Diagnosis_of_Parkinsons_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Factors for Software Requirements Change Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100316</link>
        <id>10.14569/IJACSA.2019.0100316</id>
        <doi>10.14569/IJACSA.2019.0100316</doi>
        <lastModDate>2019-03-30T12:17:11.9570000+00:00</lastModDate>
        
        <creator>Marfizah A. Rahman</creator>
        
        <creator>Rozilawati Razali</creator>
        
        <creator>Fatin Filzahti Ismail</creator>
        
        <subject>Requirements change; risk factor; structural equation modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Requirements change has been regarded as a substantial risk in software development projects. The factors that contribute to the risk are identified through impact analysis, which later determines the planning of the change implementation. The analysis is, however, not straightforward, as the risk factors that constitute requirements change implementation are currently little explored. This paper identifies the risk factors by first collating them qualitatively through a review of related work and a focus group study. The factors are then confirmed quantitatively through a survey whose data is analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The survey comprised 276 practitioners from the software industry who are involved in impact analysis. The results indicate that User, Project Team, Top Management, Third Party, Organisation, Identification of Change, Existing Product and Planning of Change Implementation are the significant risk factors in planning requirements change implementation.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_16-Risk_Factors_for_Software_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Reference Adaptive Control Design for Nonlinear Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100314</link>
        <id>10.14569/IJACSA.2019.0100314</id>
        <doi>10.14569/IJACSA.2019.0100314</doi>
        <lastModDate>2019-03-30T12:17:11.9430000+00:00</lastModDate>
        
        <creator>Wafa Ghozlane</creator>
        
        <creator>Jilani Knani</creator>
        
        <subject>Model reference adaptive control; nonlinear dynamic plants; relative degree; unknown parameters; robot manipulator; input-output feedback linearization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>In this paper, the basic theory of model reference adaptive control design and issues of particular relevance to controlling nonlinear dynamic plants with unknown parameters and a relative degree greater than or equal to one are detailed. The analysis was motivated by its application to a robot manipulator with six degrees of freedom. After linearization using the input-output feedback linearization and decoupling algorithm, the nonlinear multi-input multi-output system was transformed into six independent single-input single-output linear subsystems, each with a relative degree equal to two. The results obtained in different simulations show that the augmented model reference adaptive controller has been successfully implemented.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_14-Model_Reference_Adaptive_Control_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Recommender by Exploiting Domain based Expert for Enhancing Collaborative Filtering Algorithm :PReC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100313</link>
        <id>10.14569/IJACSA.2019.0100313</id>
        <doi>10.14569/IJACSA.2019.0100313</doi>
        <lastModDate>2019-03-30T12:17:11.9270000+00:00</lastModDate>
        
        <creator>Mrs.M Sridevi</creator>
        
        <creator>Dr.R.Rajeswara Rao</creator>
        
        <subject>Recommender system; collaborative filtering; domain based experts; demographic data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The large amount of information available on the internet has prompted various recommender algorithms to act as intermediaries between the many available choices and internet users. Collaborative filtering is one of the most traditional and intensively used recommendation approaches for many commercial services. Despite providing satisfying outcomes, it has issues that include source diversity, reliability, data sparsity, scalability and cold start. Thus, there is a need for further improvement in the current generation of recommender systems to achieve more effective human decision support in a wide variety of applications and scenarios. The Personalized Expert-based Collaborative filtering (PReC) approach is proposed to identify domain-specific experts, and the use of experts&#8217; preferences enhances the performance of collaborative filtering recommender systems. A unified framework is proposed that integrates similar users&#8217; rating data, experts&#8217; ratings and demographic data to reduce the number of pairwise computations in the search space, ensuring scalability and enabling fine-grained recommendations. The proposed method is evaluated using the accuracy metrics MAE and RMSE on data collected from the MovieLens datasets.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_13-Personalized_Recommender_by_Exploiting_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Domains, Techniques, Delivery Modes and Validation Methods for Intelligent Tutoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100312</link>
        <id>10.14569/IJACSA.2019.0100312</id>
        <doi>10.14569/IJACSA.2019.0100312</doi>
        <lastModDate>2019-03-30T12:17:11.9270000+00:00</lastModDate>
        
        <creator>Aized Amin Soofi</creator>
        
        <creator>Moiz Uddin Ahmed</creator>
        
        <subject>ITS; intelligent tutoring system; intelligent learning; adaptive learning; intelligent tutoring; ITS review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>An Intelligent Tutoring System (ITS) is computer software that helps students learn educational or academic concepts in a customized environment. ITSs are instructional systems capable of providing instantaneous feedback and instructions without any human intervention. The advancement of new technologies has integrated computer-based learning with artificial intelligence methods, with the aim of developing better custom-made education systems referred to as ITSs. One of the important factors that affect the student learning process is self-learning; not all students have a similar experience of learning scholastic concepts from the same educational material, because individual differences make some topics of a given subject easier or harder to understand. These systems can improve the teaching and learning process in different educational domains while respecting individual learning needs. In this study, an attempt is made to review the research in the field of ITSs and highlight the educational areas or domains in which ITSs have been introduced. The techniques, delivery modes and evaluation methodologies used in developed ITSs are also discussed. This work will be helpful for both academia and newcomers to the field of ITSs in further strengthening the basis of tutoring systems in educational domains.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_12-A_Systematic_Review_of_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Human Activity Recognition based on Acceleration Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100311</link>
        <id>10.14569/IJACSA.2019.0100311</id>
        <doi>10.14569/IJACSA.2019.0100311</doi>
        <lastModDate>2019-03-30T12:17:11.9100000+00:00</lastModDate>
        
        <creator>Salwa O. Slim</creator>
        
        <creator>Ayman Atia</creator>
        
        <creator>Marwa M.A. Elfattah</creator>
        
        <creator>Mostafa-Sami M.Mostafa</creator>
        
        <subject>Human activity recognition; accelerometer; online system; offline system; traditional machine learning; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Human activity recognition (HAR) is an important area of machine learning research, as it has many uses in different areas such as sports training, security, entertainment, ambient-assisted living, and health monitoring and management. The literature on human activity recognition shows that researchers are interested mostly in the daily activities of humans. Therefore, the general architecture of a HAR system is presented in this paper, along with a description of its main components, and the state of the art in human activity recognition based on accelerometers is surveyed. According to this survey, most recent studies have used deep learning for HAR, but they have focused on CNNs even though other deep learning approaches have achieved satisfactory accuracy. The paper presents a two-level taxonomy according to the machine learning approach (traditional or deep learning) and the processing mode (online or offline). Forty-eight studies are compared in terms of recognition accuracy, classifier, activity types, and devices used. Finally, the paper concludes with the challenges and issues of online versus offline processing, and of deep learning versus traditional machine learning, for human activity recognition based on accelerometer sensors.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_11-Survey_on_Human_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Hyperparameter of Feature Extraction and Machine Learning Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100309</link>
        <id>10.14569/IJACSA.2019.0100309</id>
        <doi>10.14569/IJACSA.2019.0100309</doi>
        <lastModDate>2019-03-30T12:17:11.8970000+00:00</lastModDate>
        
        <creator>Sani Muhammad Isa</creator>
        
        <creator>Rizaldi Suwandi</creator>
        
        <creator>Yosefina Pricilia Andrean</creator>
        
        <subject>Sentiment analysis; word2vec; TF-IDF (terms frequency-inverse document frequency); Doc2vec; grid search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Sentiment analysis is the process of assigning a quantitative value to a piece of text expressing a mood or affect. Several machine learning and feature extraction approaches, together with parameter optimization, were compared to achieve the best accuracy. This paper proposes an approach to comparing sentiment-review classification using three feature extraction methods: Word2vec, Doc2vec, and Term Frequency-Inverse Document Frequency (TF-IDF), combined with machine learning classification algorithms such as Support Vector Machine (SVM), Naive Bayes, and Decision Tree. A grid search algorithm is used to optimize the feature extraction and classifier parameters. The performance of these classification algorithms is evaluated based on accuracy. The approach used in this research succeeded in increasing the classification accuracy for all feature extraction methods and classifiers by using grid search hyperparameter optimization on varied pre-processed data.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_9-Optimizing_the_Hyperparameter_of_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stabilizing Average Queue Length in Active Queue Management Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100310</link>
        <id>10.14569/IJACSA.2019.0100310</id>
        <doi>10.14569/IJACSA.2019.0100310</doi>
        <lastModDate>2019-03-30T12:17:11.8970000+00:00</lastModDate>
        
        <creator>Mahmoud Baklizi</creator>
        
        <subject>Congestion control methods; GRED; dynamic GRED; random; simulation; active queue management method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This paper proposes the Stabilized Dynamic Gentle Random Early Detection (SDGRED) method for congestion detection at the router buffer. This method aims to stabilize the average queue length between the allocated minimum threshold and double maximum threshold positions to increase network performance. The SDGRED method is simulated and compared with the Gentle Random Early Detection (GRED) and Dynamic GRED active queue management methods. The comparison is based on several important measures, such as dropping probability, throughput, average delay, packet loss, and mean queue length. The evaluation aims to identify which method presents better simulation performance when non-congestion or congestion situations occur at the router buffers. The results show that, at high packet arrival probability, the proposed algorithm provides lower queue length, delay, and packet loss compared with the existing methods. Furthermore, SDGRED generates adequate throughput at high packet arrival probability.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_10-Stabilizing_Average_Queue_Length.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Diffie-Hellman Algorithm to Solve the Key Agreement Problem in Mobile Blockchain-based Sensing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100308</link>
        <id>10.14569/IJACSA.2019.0100308</id>
        <doi>10.14569/IJACSA.2019.0100308</doi>
        <lastModDate>2019-03-30T12:17:11.8800000+00:00</lastModDate>
        
        <creator>Nsikak Pius Owoh</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <subject>Internet of Things; mobile crowd sensing; edge computing; sensor data encryption; mining; smart contract</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Mobile blockchain has achieved huge success with the integration of edge computing services. This concept, when applied in mobile crowd sensing, enables the transfer of sensor data from blockchain clients to edge nodes. Edge nodes perform proof-of-work on sensor data from blockchain clients and append validated data to the chain. With this approach, blockchain operations can be performed pervasively. However, securing sensitive sensor data in a mobile blockchain (client/edge node architecture) becomes imperative. To this end, this paper proposes an integrated framework for mobile blockchain that ensures key agreement between clients and edge nodes using the Elliptic Curve Diffie-Hellman algorithm. The framework also provides efficient encryption of sensor data using the Advanced Encryption Standard algorithm. Finally, the key agreement processes in the framework were analyzed, and the results show that key pairing between the blockchain client and the edge node is a non-trivial process.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_8-Applying_Diffie_Hellman_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Assessment of Ensemble based Approaches to Classify Imbalanced Data in Binary Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100307</link>
        <id>10.14569/IJACSA.2019.0100307</id>
        <doi>10.14569/IJACSA.2019.0100307</doi>
        <lastModDate>2019-03-30T12:17:11.8630000+00:00</lastModDate>
        
        <creator>Prabhjot Kaur</creator>
        
        <creator>Anjana Gosain</creator>
        
        <subject>Ensemble approaches; boosting; bagging; hybrid ensembles; imbalanced data-sets; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Classifying imbalanced data with traditional classifiers is a major challenge nowadays. Imbalanced data is a situation wherein the ratio of data between classes is not the same. Many real-life situations involve such problems, e.g., web spam detection, credit card fraud, and fraudulent telephone calls. The problem exists wherever the objective is to identify exceptional cases. Researchers handle the problem either by modifying existing classification methods or by developing new ones. This paper reviews ensemble-based approaches (Boosting- and Bagging-based) designed to address class imbalance, focusing on binary classification. We compared 6 Boosting-based, 7 Bagging-based, and 2 hybrid ensembles for their performance in the imbalanced domain. We used the KEEL tool to evaluate the performance of these methods on seven imbalanced datasets with class imbalance ratios ranging from 1.82 to as high as 129.44. The Area Under the Curve (AUC) is recorded as the performance metric. We also statistically analyzed the methods using the Friedman rank test and the Wilcoxon matched-pairs signed-rank test to strengthen the visual interpretations. The analysis shows that the RUSBoost ensemble outperformed every other ensemble in imbalanced data situations.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_7-Empirical_Assessment_of_Ensemble_based_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Recurrent Cerebellar Model Articulation Controller for Non-Linear MIMO Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100306</link>
        <id>10.14569/IJACSA.2019.0100306</id>
        <doi>10.14569/IJACSA.2019.0100306</doi>
        <lastModDate>2019-03-30T12:17:11.8470000+00:00</lastModDate>
        
        <creator>Xuan-Kien Dang</creator>
        
        <creator>Van-Phuong Ta</creator>
        
        <subject>Robust controller; MIMO non-linear systems; linear piezoelectric motor (LPM); recurrent network; Cerebellar model articulation controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This research proposes a robust recurrent cerebellar model articulation control system (RRCMACS) for MIMO non-linear systems to achieve robustness during operation. In this system, the superior properties of the recurrent cerebellar model articulation controller (RCMAC) are incorporated to imitate an ideal sliding mode controller. The robust controller efficiently attenuates the effects of uncertainties, external disturbances, and noise to maintain the robustness of the system. The parameters of the controller were updated in the sense of the Lyapunov-like lemma; therefore, the stability and robustness of the system were guaranteed. Simulation results for a micro-motion stage system are given to demonstrate the effectiveness and applicability of the proposed control system for model-free non-linear systems.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_6-Robust_Recurrent_Cerebellar_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Open Source Solution &quot;ntop&quot; for Active and Passive Packet Analysis Relating to Application and Transport Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100304</link>
        <id>10.14569/IJACSA.2019.0100304</id>
        <doi>10.14569/IJACSA.2019.0100304</doi>
        <lastModDate>2019-03-30T12:17:11.8170000+00:00</lastModDate>
        
        <creator>Sirajuddin Qureshi</creator>
        
        <creator>Dr Gordhan Das</creator>
        
        <creator>Saima Tunio</creator>
        
        <creator>Faheem Ullah</creator>
        
        <creator>Ahsan Nazir</creator>
        
        <creator>Ahsan Wajahat</creator>
        
        <subject>ntop; network monitoring; packet analysis; the application layer; transport layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>A key issue facing operators around the globe is the most appropriate way to spot black spots in networks. For this purpose, the technique of passive network monitoring is very appropriate; it can be used to address problems within individual network devices, as well as problems relating to the whole LAN (Local Area Network) or core network. This technique, however, is not only relevant for troubleshooting; it can also be used for gathering network statistics and analyzing network performance. In real-time network scenarios, many applications and/or processes simultaneously download and upload data, and it is sometimes very difficult to keep track of all of it. Wireshark is a tool normally used to track packets for analysis between two particular hosts during two particular sessions on the same network. However, Wireshark has some limitations: for example, it is not a good tool for keeping track of bulky network data transferred among various endpoints. On the other hand, the open source solution &quot;ntop&quot; offers active as well as passive packet analysis, which can be handy for system administrators, network engineers, and IT managers. Additionally, VoIP traffic can also be monitored with ntop. In this research work, the ntop solution has been deployed in a network facility, and the performance of various application-layer processes such as HTTP and SSDP (based on HTTPU), together with their associated transport protocols such as TCP/IP, UDP, and VoIP, has been analyzed. These processes and protocols have been comprehensively analyzed with regard to their client/server breakdown, connection duration, actual throughput, total bytes (received and sent), and total bandwidth consumed. This study has been helpful in identifying the weakest and strongest areas of a particular network when analyzing and deploying network policies. This research work will help the research community to deploy the ntop solution for real-time active and passive monitoring.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_4-Performance_Analysis_of_Open_Source_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Achieving Flatness: Honeywords Generation Method for Passwords based on user behaviours</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100305</link>
        <id>10.14569/IJACSA.2019.0100305</id>
        <doi>10.14569/IJACSA.2019.0100305</doi>
        <lastModDate>2019-03-30T12:17:11.8170000+00:00</lastModDate>
        
        <creator>Omar Z Akif</creator>
        
        <creator>Ann F. Sabeeh</creator>
        
        <creator>G. J. Rodgers</creator>
        
        <creator>H. S. Al-Raweshidy</creator>
        
        <subject>Honeywords; user behaviours; worst password list; dictionary attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Honeywords (decoy passwords) have been proposed to detect attacks against hashed password databases. For each user account, the original password is stored with many honeywords in order to thwart any adversary. The honeywords are selected deliberately such that a cyber-attacker who steals a file of hashed passwords cannot be sure whether a given entry is the real password or a honeyword for any account. Moreover, logging in with a honeyword will trigger an alarm notifying the administrator of a password file breach. At the expense of increasing the storage requirement by 24 times, the authors introduced a simple and effective solution for detecting password file disclosure events. In this study, we scrutinise the honeyword system and highlight possible weak points. We also suggest an alternative approach that selects the honeywords from existing user information, a generic password list, and a dictionary attack, and by shuffling the characters. Four sets of honeywords that resemble the real passwords are added to the system, thereby achieving an extremely flat honeyword generation method. To measure human behaviour in attempting to crack the password, a testbed engaging 820 people was created to determine the appropriate words for the traditional and proposed methods. The results show that under the new method it is harder to obtain any indication of the real password (high flatness) compared with traditional approaches, and the probability of choosing the real password is 1/k, where k is the number of honeywords plus the real password.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_5_Achieving_Flatness_Honeywords_Generation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Control of Colonoscope Movement for Modern Colonoscopy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100303</link>
        <id>10.14569/IJACSA.2019.0100303</id>
        <doi>10.14569/IJACSA.2019.0100303</doi>
        <lastModDate>2019-03-30T12:17:11.8000000+00:00</lastModDate>
        
        <creator>Helga Silaghi</creator>
        
        <creator>Viorica Spoiala</creator>
        
        <creator>Alexandru Marius Silaghi</creator>
        
        <creator>Tiberia Ioana Ilias</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Claudiu Costea</creator>
        
        <creator>Sanda Dale</creator>
        
        <creator>Ovidiu Cristian Fratila</creator>
        
        <subject>Intelligent control; colonoscope movement; classical colonoscopy; mathematical model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>The paper presents the mathematical realization of the trajectory that the colonoscope should follow during a medical intervention, as well as the mathematical demonstration of the functions that make up the colonoscope. The goal of this work is to find a method for reducing the medical doctor&#39;s effort by using intelligent control of colonoscope movement, thereby improving the comfort of the patient subjected to a classical colonoscopy and reducing the risk of perforation of the colon. Finally, some experimental results are presented, validating the model and the control solutions adopted in the paper.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_3-Automatic_Control_of_Colonoscope_Movement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Deep Learning Models to Simulate Human Declarative Episodic Memory Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100302</link>
        <id>10.14569/IJACSA.2019.0100302</id>
        <doi>10.14569/IJACSA.2019.0100302</doi>
        <lastModDate>2019-03-30T12:17:11.7870000+00:00</lastModDate>
        
        <creator>Abu Kamruzzaman</creator>
        
        <creator>Charles C. Tappert</creator>
        
        <subject>Convolutional neural network; long short term memory; Variational Autoencoder; deep learning; memory model; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>Human-like visual and auditory sensory devices have become very popular in recent years through deep learning models that incorporate aspects of brain processing, such as the edge and line detectors found in the visual cortex. However, very little work has been done on human memory, and thus our aim is to model human long-term declarative episodic memory storage using deep learning methods. An innovative deep neural network was created for a supervised feature learning dataset, MNIST, to achieve high accuracy while storing the model&#39;s hidden layers for future extraction. Convolutional Neural Network (CNN) models with transfer learning were trained to imitate the long-term declarative episodic memory storage of humans. A Recurrent Neural Network (RNN) in the form of a Long Short-Term Memory (LSTM) model was assembled in layers and then trained and evaluated. A Variational Autoencoder was also trained and evaluated to mimic the human memory model. Frameworks were constructed using TensorFlow for training and testing the deep learning models.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_2-Developing_Deep_Learning_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LSB based Image Steganography by using the Fast Marching Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100301</link>
        <id>10.14569/IJACSA.2019.0100301</id>
        <doi>10.14569/IJACSA.2019.0100301</doi>
        <lastModDate>2019-03-30T12:17:11.7700000+00:00</lastModDate>
        
        <creator>Xiaoli Huan</creator>
        
        <creator>Hong Zhou</creator>
        
        <creator>Jiling Zhong</creator>
        
        <subject>Image steganography; LSB; the fast marching method; coordinates; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(3), 2019</description>
        <description>This paper presents a novel approach for image steganography based on the Least Significant Bit (LSB) method. Most traditional LSB methods choose the initial embedding location in the cover image randomly, and the secret messages are embedded sequentially without considering the image pixels’ values and positions. Our approach utilizes user-selected seeds in the cover image to avoid the smooth/flat areas that cause a higher detection rate. The fast marching method is then used to calculate T (the arrival time of the front from the seeds) and to propagate the seeds by computational dynamics. The front propagation process decides the embedding positions of the secret messages. The same algorithm can be used to retrieve the hidden information as well. The coordinates of the seeds are used as a shared key known only to the sender and receiver, adding additional security protection. Peak Signal-to-Noise Ratio (PSNR) is evaluated to measure the quality of the resulting images. The experiments show that the proposed approach generates results with high payload capacity and satisfactory imperceptibility.</description>
        <description>http://thesai.org/Downloads/Volume10No3/Paper_1-LSB_based_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Watermarking System for Copyright Protection based on Moving Parts and Silence Deletion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100279</link>
        <id>10.14569/IJACSA.2019.0100279</id>
        <doi>10.14569/IJACSA.2019.0100279</doi>
        <lastModDate>2019-02-28T12:12:03.7830000+00:00</lastModDate>
        
        <creator>Shahad Almuzairai</creator>
        
        <creator>Nisreen Innab</creator>
        
        <subject>Watermark; audio stream; visual stream; moving block; silence deletion; DWT; DCT; attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In recent years, video watermarking has emerged as a powerful technique for ensuring copyright protection. A watermarking system should satisfy several important properties: the lowest possible level of distortion, high transparency and transparency control, integrity of the watermarked video, and robustness against attacks that can be applied to destroy the embedded watermark. In this paper, we propose a video watermarking system that hides a watermark in both the visual and audio streams to ensure the integrity of the watermarked video. Specifically, we propose the moving block detection (MBD) algorithm for hiding the watermark in the moving parts of the original visual stream of the video. The MBD algorithm uses entropy to find the moving parts of the visual stream in which to hide the watermark, ensuring that a minimal amount of distortion is caused by the embedding. Hiding in the visual stream is performed using DWT to ensure both transparency and resistance against attacks, and we employ the power factors of DWT to control the level of transparency. In addition, we propose the silence deletion algorithm (SDA), which generates a pure original audio stream by removing the noise from the original audio stream to form the hiding place of the watermark within the audio stream. DCT is employed to hide the watermark within the pure original audio stream to ensure resistance against attacks. Under a threat model that includes bilinear, curved, and LPF geometric attacks, as well as compression and Gaussian noise non-geometric attacks, the experimental results demonstrate that the proposed system outperformed four similar systems: key-frame-, I-frame-, spread-spectrum-, and LSB-based systems.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_79-Video_Watermarking_System_for_Copyright_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm for Data Exchange Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100278</link>
        <id>10.14569/IJACSA.2019.0100278</id>
        <doi>10.14569/IJACSA.2019.0100278</doi>
        <lastModDate>2019-02-28T12:12:03.7500000+00:00</lastModDate>
        
        <creator>Medhat H A Awadalla</creator>
        
        <subject>Genetic algorithm; dynamic architectures; forward data exchange; virtually inverse data exchange; and hybrid data exchange method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Dynamic architectures have emerged to be a promising implementation platform to provide flexibility, high performance, and low power consumption for computing devices. They can bring unique capabilities to computational tasks and offer the performance and energy efficiency of hardware with the flexibility of software. This paper proposes a genetic algorithm to develop an optimum configuration that optimizes the routing among its communicating processing nodes by minimizing the path length and maximizing possible parallel paths. In addition, this paper proposes forward, virtually inverse, and hybrid data exchange approaches to generate dynamic configurations that achieve data exchange optimization. Intensive experiments and qualitative comparisons have been conducted to show the effectiveness of the presented approaches. Results show significant performance improvement in terms of total execution time of up to 370%, 408%, 477%, and 550% when using configurations developed based on genetic algorithm, forward, virtually inverse, and hybrid data exchange techniques, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_78-Genetic_Algorithm_for_Data_Exchange_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thinging for Computational Thinking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100277</link>
        <id>10.14569/IJACSA.2019.0100277</id>
        <doi>10.14569/IJACSA.2019.0100277</doi>
        <lastModDate>2019-02-28T12:12:03.7370000+00:00</lastModDate>
        
        <creator>Sabah Al Fedaghi</creator>
        
        <creator>Ali Abdullah Alkhaldi</creator>
        
        <subject>Computational thinking; conceptual modeling; abstract machine; thinging; abstraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper examines conceptual models and their application to computational thinking. Computational thinking is a fundamental skill for everybody, not just for computer scientists; it has been promoted as a skill as fundamental for all as numeracy and literacy. According to authorities in the field, the best way to characterize computational thinking is by the way in which computer scientists think and the manner in which they reason. Core concepts in computational thinking include such notions as algorithmic thinking, abstraction, decomposition, and generalization. This raises several issues and challenges that still need to be addressed, including the fundamental characteristics of computational thinking and its relationship with the modeling patterns (e.g., object-oriented) that lead to programming/coding. A thinking pattern refers to a recurring template used by designers in thinking. In this paper, we propose a representation of thinking activity by adopting a thinking pattern called thinging, which utilizes a diagrammatic technique called the thinging machine (TM). We claim that thinging is a valuable process and a fundamental skill for everybody in computational thinking. The viability of this proclamation is illustrated through examples and a case study.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_77-Thinging_for_Computational_Thinking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Smart Sewerbot for the Identification of Sewer Defects and Blockages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100276</link>
        <id>10.14569/IJACSA.2019.0100276</id>
        <doi>10.14569/IJACSA.2019.0100276</doi>
        <lastModDate>2019-02-28T12:12:03.7370000+00:00</lastModDate>
        
        <creator>Ghulam E Mustafa Abro</creator>
        
        <creator>Bazgha Jabeen</creator>
        
        <creator>Ajodhia</creator>
        
        <creator>Kundan Kumar</creator>
        
        <creator>Abdul Rauf</creator>
        
        <creator>Ali Noman</creator>
        
        <creator>Syed Faiz ul Huda</creator>
        
        <creator>Amjad Ali Qureshi</creator>
        
        <subject>Internet of robotic things (IoRT); GPS; humidity; Internet Protocol; temperature; wireless communication; sewer defects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The Internet of Things (IoT) is a concept in which the term ‘thing’ refers to configurable sensors and devices, whether domestic or industrial; bridging these things with the Internet Protocol constitutes the Internet of Things. The same concept has been introduced in the field of robotics as the ‘Internet of Robotic Things (IoRT)’, which is mainly concerned with the active sensorization of sensors duly interfaced with any type of robot, e.g., an autonomous unmanned ground vehicle (UGV). This paper describes the prototyping of an autonomous sewerbot that not only identifies defects in sewerage pipelines but also identifies the type of blockage using digital image processing. Furthermore, the deployed configurable sensors share the attributes of a particular sewerage line over IoT, such as temperature, humidity, the presence of hazardous gases, the exact depth at which a blockage is located, and global positioning using a GPS module. The paper also briefly presents the construction of this mechatronic, amphibian system, through which it can extricate blockages from sewerage lines along with wireless camera surveillance.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_76-Designing_Smart_Sewerbot_for_the_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Mining Techniques for Intelligent Grievances Handling System: WECARE Project Improvements in EgyptAir</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100275</link>
        <id>10.14569/IJACSA.2019.0100275</id>
        <doi>10.14569/IJACSA.2019.0100275</doi>
        <lastModDate>2019-02-28T12:12:03.7030000+00:00</lastModDate>
        
        <creator>Shahinaz M. Al-Tabbakh</creator>
        
        <creator>Hanaa M. Mohammed</creator>
        
        <creator>Hayam El-zahed</creator>
        
        <subject>Knowledge base; Grievances; NLP; SVM; KNN; Na&#239;ve Bayesian; Decision Tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This work aims to provide quick responses and minimize the processing time of incoming grievances through automated categorization that analyses English text content and predicts the category. A model was built using text mining and NLP to extract useful information from customer grievance data, to serve as guidelines for the air transport industry. The customer grievances system in EGYPTAIR, called WECARE, receives large feeds of data collected through various channels such as e-mail, the website, and mobile apps. Incoming data sets are analyzed and assessed by the organization’s staff and assigned to the related department through manual classification, after which a solution is proposed for the issue. Because manual categorization of grievances is a time-consuming process, this work proposes a model to improve the WECARE system at Egypt Airlines. Classification-based data mining techniques are used to group data into categories across the various touch points. The system has 166 categories of problems, but for experimental purposes only 6 categories were studied. Four commonly used classifiers, namely Support Vector Machine (SVM), K-Nearest Neighbours (KNN), Naïve Bayesian, and Decision Tree, were applied to the data set to classify the grievances, and the best of them was selected as the candidate grievance classifier for the enhanced WECARE system. Among the four classifiers applied to the dataset, KNN achieved the highest average accuracy (97.5%) with acceptable running time. The work was also extended to hint to the system user how to solve a grievance issue based on previous issues saved in a Knowledge Base (KB). Several experiments were conducted to test the solution-hint module by varying the similarity score. The benefits of performing a thorough analysis of problems include a better understanding of service performance.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_75-Text_Mining_Techniques_for_Intelligent_Grievances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Community Detection over Social Media: Graph Prospective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100274</link>
        <id>10.14569/IJACSA.2019.0100274</id>
        <doi>10.14569/IJACSA.2019.0100274</doi>
        <lastModDate>2019-02-28T12:12:03.6870000+00:00</lastModDate>
        
        <creator>Pranita Jain</creator>
        
        <creator>Deepak Singh Tomar</creator>
        
        <subject>Community detection; social media; social media mining; homophily; influence; confounding; social theory; community detection algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>A community on social media is a group of globally distributed end users with a similar attitude towards a particular topic or product. Community detection algorithms identify the social atoms that are more densely interconnected relative to the rest of the social media platform. Researchers have recently focused on group-based and member-based algorithms for community detection over social media. This paper presents a comprehensive overview of community detection techniques based on recent research, and subsequently explores a graph perspective on social media mining and social theory (balance theory, status theory, correlation theory) for community detection. It also presents a comparative analysis of three state-of-the-art community detection algorithms available in the igraph package for Python, i.e., Walktrap, edge betweenness, and fast greedy, over six different social media data sets, which yields interesting facts about the capabilities and deficiencies of community analysis methods.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_74-Review_of_Community_Detection_over_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Data Aggregation Scheme for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100273</link>
        <id>10.14569/IJACSA.2019.0100273</id>
        <doi>10.14569/IJACSA.2019.0100273</doi>
        <lastModDate>2019-02-28T12:12:03.6570000+00:00</lastModDate>
        
        <creator>Syed Gul Shah</creator>
        
        <creator>Atiq Ahmed</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <creator>Waheed Noor</creator>
        
        <subject>Wireless Sensor Networks; energy consumption; energy-aware routing; clustering; data aggregation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Wireless sensor networks (WSNs) consist of diverse, minute sensor nodes that are widely employed in different applications, for example, atmosphere monitoring, search and rescue activities, disaster management, and wildlife monitoring. A WSN is an accumulation of clusters in which information exchange occurs with the assistance of a cluster head (CH). A great deal of sensor node energy is consumed in procedures such as detection, information exchange, and cluster formation using various protocols. In a cluster-based WSN, it is profitable to segregate the tasks performed by cluster heads, as a fair amount of energy can be conserved. Accordingly, this work proposes including a supplementary node, named the ‘super node’, alongside the cluster heads in a cluster-based WSN. This node is in charge of all the clusters in the WSN and maintains the energy information of every cluster, managing the cluster heads from their creation to the end. All the clusters in the network send their respective information to this node, which eliminates redundant information and forwards the aggregated information towards the sink. This not only saves CH energy but also conserves the energy of individual cluster nodes by properly monitoring their energy levels. The mechanism enhances the lifetime of the network by minimizing the number of communications between nodes and the sink. To evaluate the performance of the proposed mechanism, various parameters such as packet delay, communication overhead, and energy consumption are used, showing the optimality of the approach.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_73-A_Novel_Data_Aggregation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Network Libraries for Offloading Efficiency in Mobile Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100272</link>
        <id>10.14569/IJACSA.2019.0100272</id>
        <doi>10.14569/IJACSA.2019.0100272</doi>
        <lastModDate>2019-02-28T12:12:03.6400000+00:00</lastModDate>
        
        <creator>Farhan Sufyan</creator>
        
        <creator>Amit Banerjee</creator>
        
        <subject>Android; Mobile Cloud Computing (MCC); network libraries; offloading; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In the modern era, smartphones are increasingly becoming an integral and essential part of our daily lives. Although the hardware capabilities of smartphones (i.e., processing, memory, battery, and communication) are improving every day, they are not enough to handle computation-intensive applications such as image processing, data analytics, and encryption. To overcome these limitations, mobile cloud computing (MCC) was introduced, which combines the capabilities of smartphones and the resources of the cloud to provide better QoS performance to the user. The idea is to save resources on the smartphone by offloading computationally intensive tasks to the cloud. In this context, researchers have proposed several offloading frameworks, mainly addressing the challenges of why, what, when, and where to offload. In this paper, however, we explore another challenging issue of offloading, i.e., how to offload. More specifically, we analyze different networking libraries (HttpURLConnection, OkHttp, Volley, Retrofit) and study their performance under various dynamic factors such as data size, communication medium, and the hardware and software of the smartphone. Our objective is to explore whether an application can use the same networking library for all smartphones and all purposes, or whether an adaptive decision must be made based on local constraints. To understand this, we performed a comprehensive analysis of the networking libraries on different Android smartphones in a real environment and found that adaptive network library selection is needed, because library performance varies across scenarios.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_72-Comparative_Analysis_of_Network_Libraries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overlapped Apple Fruit Yield Estimation using Pixel Classification and Hough Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100271</link>
        <id>10.14569/IJACSA.2019.0100271</id>
        <doi>10.14569/IJACSA.2019.0100271</doi>
        <lastModDate>2019-02-28T12:12:03.6100000+00:00</lastModDate>
        
        <creator>Zartash Kanwal</creator>
        
        <creator>Abdul Basit</creator>
        
        <creator>Muhammad Jawad</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <creator>Anwar Ali Sanjrani</creator>
        
        <subject>Apple detection; pixel classification; curvature estimation; Hough circle transform; visual tracking; color segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Researchers have proposed various vision-based methods for estimating fruit quantity and performing qualitative analysis, using aerial and ground vehicles to capture fruit images in orchards. Fruit yield estimation is a challenging task under environmental noise such as illumination changes, color variation, overlapped fruits, cluttered environments, and shading from branches or leaves. In this paper, we propose a learning-free, fast, vision-based method to correctly count apple fruits tightly overlapped in a complex outdoor orchard environment. We first carefully build a color-based HS model to perform color-based segmentation. This step extracts the apple fruits from the complex orchard background and produces blobs representing apples, along with additional noisy regions. We use fine-tuned morphological operators to refine the blobs from the previous step and remove the noisy regions, followed by Gaussian smoothing. Finally, we apply the Hough transform algorithm to the circular-shaped blobs to calculate the center coordinates of each apple edge, and the method correctly locates the apples in the images. The results confirm that the proposed algorithm successfully detects and counts apple fruits in images captured in an apple orchard and outperforms the standard state-of-the-art contour-based method.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_71-Overlapped_Apple_Fruit_Yield_Estimation_using_Pixel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Backpropagation Neural Network Training Techniques using Graphics Processing Unit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100270</link>
        <id>10.14569/IJACSA.2019.0100270</id>
        <doi>10.14569/IJACSA.2019.0100270</doi>
        <lastModDate>2019-02-28T12:12:03.5930000+00:00</lastModDate>
        
        <creator>Muhammad Arslan Amin</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Abdur Rehman</creator>
        
        <creator>Fiaz Waheed</creator>
        
        <creator>Haseeb Rehman</creator>
        
        <subject>Artificial neural network; backpropagation; SIMD; CPU; GPU; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Training an artificial neural network using backpropagation is a computationally expensive process in machine learning. Parallelization of neural networks using a Graphics Processing Unit (GPU) can help reduce the time needed to perform the computations. A GPU uses a Single Instruction Multiple Data (SIMD) architecture to perform high-speed computing, and its use shows remarkable performance gains compared to a CPU. This work discusses different parallel techniques for the backpropagation algorithm using GPUs; most of the techniques include a comparative analysis between CPU and GPU.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_70-Parallel_Backpropagation_Neural_Network_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Architecture for Handling Big Data in Oil and Gas Industries: Service-Oriented Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100269</link>
        <id>10.14569/IJACSA.2019.0100269</id>
        <doi>10.14569/IJACSA.2019.0100269</doi>
        <lastModDate>2019-02-28T12:12:03.5800000+00:00</lastModDate>
        
        <creator>Farag Azzedin</creator>
        
        <creator>Mustafa Ghaleb</creator>
        
        <subject>Service-oriented architecture; big data; Hadoop; oil and gas; big data architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Existing architectures for handling big data in the oil and gas industry are based on industry-specific platforms and are hence limited to specific tools and technologies. With these architectures, we are confined to single-provider big data solutions, yet the idea of multi-provider big data solutions is essential: when building big data solutions, organizations should embrace the best-in-class technologies and tools that different providers offer. In this article, we hypothesize that the limitations of the proposed big data architectures for oil and gas industries can be addressed by a Service-Oriented Architecture (SOA) approach. We propose breaking complex systems into simple, separate, yet reliable distributed services, with loose coupling between the interacting services. Thus, our proposed architecture enables petroleum industries to select the necessary services from the SOA-based ecosystem and create viable big data solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_69-Towards_an_Architecture_for_Handling_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Disease Outbreak Notification Systems with an Optimized Federation Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100268</link>
        <id>10.14569/IJACSA.2019.0100268</id>
        <doi>10.14569/IJACSA.2019.0100268</doi>
        <lastModDate>2019-02-28T12:12:03.5630000+00:00</lastModDate>
        
        <creator>Farag Azzedin</creator>
        
        <creator>Mustafa Ghaleb</creator>
        
        <creator>Salahadin Adam Mohammed</creator>
        
        <creator>Jaweed Yazdani</creator>
        
        <subject>Disease outbreak notification system; database federation; web services; service oriented architecture; health systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Data needed to detect outbreaks of known and unknown diseases is often gathered from sources scattered across many geographical locations, and often exists in a wide variety of formats, structures, and models. The collection, pre-processing, and analysis of these data to detect potential disease outbreaks is very challenging, time-consuming, and error-prone. To fight disease outbreaks, healthcare practitioners, epidemiologists, and researchers need to access the scattered data in a secure and timely manner. They also require a uniform and logical framework or methodology to access the relevant data. In this paper, the authors propose a federated framework for Disease Outbreak Notification Systems (DONSFed). Using an advanced design and an XML technique patented in the US in 2016 by our team, the framework was tested and validated as part of this work. The proposed approach enables healthcare professionals to quickly and uniformly access the data required to detect potential disease outbreaks. This research focuses on implementing a cloud-based prototype as a proof of concept to demonstrate the functionality and verify the concept of the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_68-Framework_for_Disease_Outbreak_Notification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Wandering Behavior Management Systems for Individuals with Dementia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100267</link>
        <id>10.14569/IJACSA.2019.0100267</id>
        <doi>10.14569/IJACSA.2019.0100267</doi>
        <lastModDate>2019-02-28T12:12:03.5330000+00:00</lastModDate>
        
        <creator>Arshia Zernab Hassan</creator>
        
        <creator>Arshia Khan</creator>
        
        <subject>Dementia; wandering behavior; technology; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Alzheimer’s and related dementias are associated with a gradual decline in an individual’s cognitive abilities, impairing independent living. Wandering, a purposeless, disoriented locomotion tendency of dementia patients, requires constant caregiver supervision to reduce the risk of physical harm to patients. Integrating technology into the care ecology has the potential to alleviate stress and expense. An automatic wandering detection system integrated with an intervention module may provide warnings and assistive suggestions in times of abnormal behavior. In this study, we survey existing research on technology-aided methodologies and algorithms used in the detection and management of wandering behavior in individuals affected by dementia. Our study provides insights into mechanisms for collecting movement data and finding patterns that distinguish wandering from normal behavior.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_67-A_Survey_on_Wandering_Behavior_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Routing Protocols and Layer 2 Mediums on Bandwidth Utilization and Latency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100266</link>
        <id>10.14569/IJACSA.2019.0100266</id>
        <doi>10.14569/IJACSA.2019.0100266</doi>
        <lastModDate>2019-02-28T12:12:03.5170000+00:00</lastModDate>
        
        <creator>Ghulam Mujtaba</creator>
        
        <creator>Babar Saeed</creator>
        
        <creator>Furhan Ashraf</creator>
        
        <creator>Fiaz Waheed</creator>
        
        <subject>Routing protocols; layer 2 technologies; FDDI; latency rate; bandwidth utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Computer networks (CNs) are an emerging field in information and communication technology (ICT). Many network-related problems depend on the performance of the network, specifically bandwidth utilization and latency. Routing protocols play a vital role in managing network resources and network performance, but they can also adversely affect that performance; the routing protocols, bandwidth, and latency of any computer network are tightly bound to each other with respect to performance. This research analyzes the relationship between the performance of different routing protocols and their effect on bandwidth utilization and network latency over layer 2 mediums. Based on this analysis, suggestions are made for enhancing network performance over layer 2 mediums.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_66-Effect_of_Routing_Protocols_and_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Fine-Grained Access Control Mechanism for Privacy Protection and Policy Conflict Resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100265</link>
        <id>10.14569/IJACSA.2019.0100265</id>
        <doi>10.14569/IJACSA.2019.0100265</doi>
        <lastModDate>2019-02-28T12:12:03.4870000+00:00</lastModDate>
        
        <creator>Ha Xuan Son</creator>
        
        <creator>En Chen</creator>
        
        <subject>ABAC; privacy; JSON; policy conflict resolving; document store; fine-grained access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Access control is a security technique that specifies access rights to resources in a computing environment. As information systems become more complex, it plays an important role in authenticating and authorizing users and preventing attackers from targeting sensitive information. However, privacy protection has not yet been fully investigated: while many studies have acknowledged the issue, recent work has not provided a fine-grained access control system for data privacy protection. As data sets become larger, we face more privacy challenges. For example, the access control mechanism must guarantee fine-grained access control and privacy protection while handling conflicts and redundancies between rules of the same policy or between different policies. In this paper, we propose a comprehensive framework for enforcing attribute-based security policies stored in JSON documents, together with data privacy protection, incorporating a policy structure based on the prioritization of functions to resolve conflicts at a fine-grained level, called the “Privacy-aware access control model for policy conflict resolution”. We also use Polish notation to model conditional expressions, which combine subject, action, resource, and environment attributes, so that privacy policies are flexible, dynamic, and fine-grained. Experiments are carried out on two aspects: (i) the relationship between the processing time for an access decision and the complexity of the policies; and (ii) the relationship between the processing time of the traditional approach (single policy, or multi-policy without priority) and our approach (multi-policy with priority). Experimental results show that the evaluation performance satisfies the privacy requirements defined by the user.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_65-Towards_a_Fine_Grained_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Analysis of DNA Encryption and Decryption Technique based on Asymmetric Cryptography System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100264</link>
        <id>10.14569/IJACSA.2019.0100264</id>
        <doi>10.14569/IJACSA.2019.0100264</doi>
        <lastModDate>2019-02-28T12:12:03.4530000+00:00</lastModDate>
        
        <creator>Hassan Al-Mahdi</creator>
        
        <creator>Meshrif Alruily</creator>
        
        <creator>Osama R.Shahin</creator>
        
        <creator>Khalid Alkhaldi</creator>
        
        <subject>DNA cryptography; asymmetric encryption; block cipher; data dependency; dynamic encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Security of sensitive information during transmission over public channels is one of the critical issues in the digital society. DNA-based cryptography is a new paradigm in the cryptography field used to protect data during transmission. In this paper, we introduce an asymmetric DNA cryptography technique for encrypting and decrypting plaintext. The technique is based on the concepts of data dependency, dynamic encoding, and an asymmetric cryptosystem (the RSA algorithm). The asymmetric cryptosystem is used solely to initiate the encryption and decryption processes, which are conducted entirely using DNA computing. The basic idea is to create a dynamic DNA table based on the plaintext, using multi-level security and data dependency, and to generate 14 dynamic round keys. The proposed technique is implemented on the Java platform and its efficiency is examined in terms of the avalanche property. The evaluation shows that the proposed technique outperforms the RSA algorithm in terms of the avalanche property.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_64-Design_and_Analysis_of_DNA_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Browsing Behaviour Analysis using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100263</link>
        <id>10.14569/IJACSA.2019.0100263</id>
        <doi>10.14569/IJACSA.2019.0100263</doi>
        <lastModDate>2019-02-28T12:12:03.4370000+00:00</lastModDate>
        
        <creator>Hamid Mukhtar</creator>
        
        <creator>Farhana Seemi</creator>
        
        <creator>Hania Aslam</creator>
        
        <creator>Sana Khattak</creator>
        
        <subject>Pattern discovery; visualization; behavior modeling; web usage mining; browsing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Nowadays, most of our time is spent online using some form of digital technology, such as search engines, news portals, or social media websites. Our online presence keeps us engaged most of the time and leads us to neglect important work, resulting in a form of procrastination that significantly decreases our productivity. Some desktop and mobile applications have recently emerged to counter the problem by introducing various means of self-tracking to reduce wasted time and encourage productive activities. However, these systems suffer several shortcomings, being static or providing a limited view of actions along one aspect only. To promote the self-awareness that helps bring positive changes in an individual’s performance, the data needs to be presented in more persuasive ways, with interaction, and shown in different views along both temporal and categorical dimensions. We describe a framework that collects and processes browsing data and creates a user behavior model to extract valuable and interesting temporal and categorical patterns regarding a user’s online behavior and interests. Different web usage mining techniques are used to discover valuable behavior patterns from an individual’s browsing data. Finally, we demonstrate interactive visualizations for the analysis and monitoring of web browsing behavior patterns, with the goal of giving the individual a detailed understanding of his or her behavior. We also present a small-scale study with university students, which demonstrates the importance of our work.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_63-Browsing_Behaviour_Analysis_using_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Service-Oriented Context-Aware Messaging System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100262</link>
        <id>10.14569/IJACSA.2019.0100262</id>
        <doi>10.14569/IJACSA.2019.0100262</doi>
        <lastModDate>2019-02-28T12:12:03.4230000+00:00</lastModDate>
        
        <creator>Alaa Omran Almagrabi</creator>
        
        <creator>Arif Bramantoro</creator>
        
        <subject>Context-Awareness; messaging service; service ontology; semantic web service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In service-oriented computing, location or spatial models are required to model the domain environment whenever location or spatial relationships are utilised by users and/or services. This research presents an ontology-based methodology for a context-aware messaging service. There are five main contributions in this research. First, the research provides a service-oriented methodology for modelling and building context-aware messaging systems based on ontological principles. Second, it describes a method that assists in understanding the domain’s spatial environment. Third, it includes a proposal of the generic Mona-ServOnt core service ontology that offers context-aware reasoning for the capture and use of context. Mona-ServOnt is able to support the deployment of context-aware messaging services in both indoor and outdoor environments. Fourth, a novel generic architecture that captures the requirements for context-aware messaging services is given. Fifth, the generic messaging protocols that describe the exchange of messages within context-aware messaging services are modelled. A few experiments were completed to measure the performance of the peer-to-peer services using actual smartphones with Bluetooth capability. In addition, the methodology’s main steps have been validated individually in various context-aware messaging domains. It has been evaluated using competency questions that gauge the scope of the proposed ontology. Furthermore, the generic architecture and messaging protocols have been verified through construction in each domain.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_62-Service_Oriented_Context_Aware_Messaging_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hypercube Graph Decomposition for Boolean Simplification: An Optimization of Business Process Verification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100261</link>
        <id>10.14569/IJACSA.2019.0100261</id>
        <doi>10.14569/IJACSA.2019.0100261</doi>
        <lastModDate>2019-02-28T12:12:03.3900000+00:00</lastModDate>
        
        <creator>Mohamed NAOUM</creator>
        
        <creator>Outman EL HICHAMI</creator>
        
        <creator>Mohammed AL ACHHAB</creator>
        
        <creator>Badr eddine EL MOHAJIR</creator>
        
        <subject>Business process verification; minimal disjunctive normal form; Boolean reduction; hypercube graph; Karnaugh map; Quine-McCluskey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper deals with the optimization of business process (BP) verification by simplifying the equivalent algebraic expressions. Current approaches to business process verification use formal methods such as automated theorem proving and model checking to verify the accuracy of the business process design. Those processes are abstracted to mathematical models in order to make the verification task possible. However, the structure of those mathematical models is usually a Boolean expression of the business process variables and gateways, leading to a combinatorial explosion when the number of literals rises above a certain threshold. This work aims at optimizing the verification task by managing the problem size. A novel algorithm for Boolean simplification is proposed. It uses hypercube graph decomposition to find the minimal equivalent formula of a business process model given in its disjunctive normal form (DNF). Moreover, the optimization method is totally automated and can be applied to any business process having the same formula, due to the independence of the Boolean simplification rules from the studied processes. This new approach has been numerically validated by comparing its performance against the state-of-the-art Quine-McCluskey (QM) method through the optimization of several processes with various types of branching.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_61-Hypercube_Graph_Decomposition_for_Boolean.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forensic Analysis of Docker Swarm Cluster using Grr Rapid Response Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100260</link>
        <id>10.14569/IJACSA.2019.0100260</id>
        <doi>10.14569/IJACSA.2019.0100260</doi>
        <lastModDate>2019-02-28T12:12:03.3770000+00:00</lastModDate>
        
        <creator>Sunardi</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Andi Sugandi</creator>
        
        <subject>Forensics; Network; Docker Swarm; Grr Rapid Response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>An attack on an Internet network does not only happen in web applications that run natively under a web server on an operating system, but also in web applications that run inside containers. Currently popular container platforms such as Docker are not always secure from Internet attacks, which can disable servers through DoS/DDoS. Therefore, to improve the performance of servers running such web applications and to provide application logs, DevOps engineers build an advanced method by transforming the system into a cluster of computers. Currently this method can be easily implemented using Docker Swarm. This research has successfully investigated digital evidence in the log file of a containerized web application running on a cluster system built with Docker Swarm. The investigation was carried out using the Grr Rapid Response (GRR) framework.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_60-Forensic_Analysis_of_Docker_Swarm_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Value Proposition for E-Commerce: A Case Study Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100259</link>
        <id>10.14569/IJACSA.2019.0100259</id>
        <doi>10.14569/IJACSA.2019.0100259</doi>
        <lastModDate>2019-02-28T12:12:03.3430000+00:00</lastModDate>
        
        <creator>Nurhizam Safie Mohd Satar</creator>
        
        <creator>Omkar Dastane</creator>
        
        <creator>Muhamad Yusnorizam Ma’arif</creator>
        
        <subject>Online consumer; perceived value; e-commerce; value proposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>E-Commerce tools have become a human need everywhere and are important not only to customers but also to industry players. The intention to use E-Commerce tools among practitioners, especially in the Malaysian retail sector, is not comprehensive, as many businesses still choose expensive traditional marketing. The research applies academic models and frameworks to a real-life situation to develop a value proposition in the practical world, considering 11Street as the company under study and comparing it with Lazada as a leading competitor in the market. The objectives include identification of customers’ perception of value for E-Commerce businesses, followed by a critical evaluation of the existing value proposition of 11Street against that of Lazada to identify gaps, and finally a proposal of a new value proposition for 11Street. This paper first identifies customer perceived value in E-Commerce, followed by a critical review of the existing value proposition of 11Street, comparing and contrasting it with the leading player Lazada. By the end of this research, a new consumer value proposition for 11Street is proposed for consideration, matching the Malaysian consumers’ value criteria.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_59-Customer_Value_Proposition_for_e_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Scheme for Address Assignment in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100258</link>
        <id>10.14569/IJACSA.2019.0100258</id>
        <doi>10.14569/IJACSA.2019.0100258</doi>
        <lastModDate>2019-02-28T12:12:03.3130000+00:00</lastModDate>
        
        <creator>Ghulam Bhatti</creator>
        
        <subject>Wireless sensor networks; address assignment; logical network topology; routing; address conflict; IEEE 802.15.4; address space; ZigBee</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Assigning network addresses to nodes in a wireless sensor network is a crucial task that has implications for the functionality, scalability, and performance of the network. Since sensor nodes generally have scarce resources, the address assignment scheme must be efficient in terms of communication and storage. Most addressing schemes reported in the literature or employed in standard specifications have weak aspects. In this paper, a distributed addressing scheme is proposed that first organizes the raw address space into a regular structure and then maps it into a logical tree structure that is subsequently used to assign addresses in a distributed but conflict-free manner. As an additional benefit, this approach allows the underlying tree structure to be used as a default routing mechanism in the network, thus avoiding costly route discovery mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_58-A_Novel_Scheme_for_Address_Assignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Academy Awards to Predict Success of Bollywood Movies using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100257</link>
        <id>10.14569/IJACSA.2019.0100257</id>
        <doi>10.14569/IJACSA.2019.0100257</doi>
        <lastModDate>2019-02-28T12:12:03.2970000+00:00</lastModDate>
        
        <creator>Salman Masih</creator>
        
        <creator>Imran Ihsan</creator>
        
        <subject>Machine learning; supervised learning; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Motion picture production has always been a risky and pricey venture. Bollywood alone released approximately 120 movies in 2017. It is disappointing that only 8% of those movies succeeded at the box office, while the remaining 92% failed to return their total cost of production. Studies have explored several determinants that make a motion picture a box office success for Hollywood movies, including academy awards. However, the same cannot be said for Bollywood movies, as significantly less research has been conducted to predict their success. Research also shows no evidence of using academy awards to predict a Bollywood movie’s success. This paper investigates the possibility: does an academy award such as ZeeCine or IIFA, previously won by an actor playing an important role in the movie, impact its success or not? In order to measure the importance of these academy awards for a movie’s success, the movie’s possible revenue is predicted using the academy award information, categorizing the movie into different revenue range classes. We have collected data from multiple sources like Wikipedia, IMDB and BoxOfficeIndia. Various machine-learning algorithms such as Decision Tree, Random Forest, Artificial Neural Networks, Na&#239;ve Bayes and Bayesian Networks are used for this purpose. Experiments and their results show that academy awards only slightly increase the accuracy, making an academy award a non-dominating ingredient in predicting a movie’s success at the box office.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_57-Using_Academy_Awards_to_Predict_Success.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IoT Technological Development: Prospect and Implication for Cyberstability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100256</link>
        <id>10.14569/IJACSA.2019.0100256</id>
        <doi>10.14569/IJACSA.2019.0100256</doi>
        <lastModDate>2019-02-28T12:12:03.2830000+00:00</lastModDate>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Mohd Zaki Masu’d</creator>
        
        <creator>Zulkiflee Muslim</creator>
        
        <creator>Norharyati Harum</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <subject>Internet of things; cybersecurity; geopolitical; artificial intelligence; technology adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Failure to address the risks posed by future technological development could cause devastating damage to public trust in the technologies. Therefore, ascendant technologies such as artificial intelligence are the key components for providing solutions to new cybersecurity threats and strengthening the capabilities of future technological developments. In effect, the ability of these technologies to prevent and withstand a cyber-attack could become the new deterrence. This paper identifies gaps to guide the government, industry, and the research community in pursuing Internet of Things (IoT) technological development that may be in need of improvement. The contribution of this paper is as follows: first, a roadmap that outlines the security requirements and concerns of future technology and the significance of IoT technology in addressing those concerns; second, an assessment that illustrates the expected and unexpected impact of future technology adoption and its significant geopolitical implications for potentially impacted areas such as the regulatory, legal, political, military, and intelligence domains.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_56-IoT_Technological_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unique Analytical Modelling of Secure Communication in Wireless Sensor Network to Resist Maximum Threats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100255</link>
        <id>10.14569/IJACSA.2019.0100255</id>
        <doi>10.14569/IJACSA.2019.0100255</doi>
        <lastModDate>2019-02-28T12:12:03.2670000+00:00</lastModDate>
        
        <creator>Manjunath B. E</creator>
        
        <creator>Dr. P.V. Rao</creator>
        
        <subject>Encryption; energy; secure communication; threats; traffic environment; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Security problems in Wireless Sensor Networks (WSN) are still open-ended problems. Qualitative evaluation of the existing approaches to security in WSN shows adoption of either complex cryptographic schemes or attack-specific solutions. As WSN is an integral part of the upcoming Internet-of-Things (IoT), the attack scenario becomes more complicated owing to the integration of two different forms of networks, and so it does for the attackers. Therefore, this paper introduces a novel secure communication technique that considers time, energy, and the traffic environment as prominent constraints in security modeling. The proposed solution, designed using an analytical methodology, has the unique capability to resist any form of illegitimate query for network participation while maintaining a superior form of communication service. The simulated outcome shows that the proposed system offers reduced end-to-end delay and the highest energy retention compared to other existing security approaches.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_55-Unique_Analytical_Modelling_of_Secure_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling, Command and Treatment of a PV Pumping System Installed in Tunisia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100254</link>
        <id>10.14569/IJACSA.2019.0100254</id>
        <doi>10.14569/IJACSA.2019.0100254</doi>
        <lastModDate>2019-02-28T12:12:03.2500000+00:00</lastModDate>
        
        <creator>Nejib Hamrouni</creator>
        
        <creator>Sami Younsi</creator>
        
        <creator>Moncef Jraidi</creator>
        
        <subject>Stand-alone PV systems; PV pumping; modelling; louata pumping system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper studies the modeling, command and optimization of a photovoltaic (PV) pumping system using well-performing command-law strategies. The system is formed by a PV generator, a DC-DC converter with a maximum power point tracking (MPPT) command, a DC-AC converter with a V/f command law and a submersed motor-pump. The first part of this paper presents the obtained models of the various components of the PV pumping system. Dynamic commands composed of V/f and MPPT laws are calculated around the converters. The MPPT command ensures the power adaptation between the PV generator and the load, whereas the V/f command ensures PWM control of the asynchronous motor and a sinusoidal output signal. Some important simulation results for the PV pumping system, obtained in the MATLAB/SIMULINK environment, are presented. In the second part of the paper, experimental results from a PV pumping system installed in Tunisia are presented. Those results are used to validate the simulation model and to test the performance of the command approach.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_54-Modelling_Command_and_Treatment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontological Model to Predict user Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100253</link>
        <id>10.14569/IJACSA.2019.0100253</id>
        <doi>10.14569/IJACSA.2019.0100253</doi>
        <lastModDate>2019-02-28T12:12:03.2370000+00:00</lastModDate>
        
        <creator>Atef Zaguia</creator>
        
        <creator>Roobaea Alroobaea</creator>
        
        <subject>Context prediction; pervasive system; context-aware system; pattern; ontology; ontological model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>With the remarkable technological evolution of mobile devices, the use of computing resources has become possible at any time and independent of the geographical position of the user. This phenomenon has various names such as omnipresent diffuse computing, pervasive computing, or ubiquitous systems. This new form of computing allows users to have access to shared and ubiquitous services focused on their needs, and it is based on context prediction, especially the prediction of the user’s location. This paper presents a new approach for predicting a user’s next probable location, using an ontological model based on the pattern technique. This is carried out with an ontological model that comprises different user behaviors and presents details about the environment where the user is located. The results, after testing on real data, show that the presented ontological model was able to achieve 85% future location-prediction accuracy (in the case of no similar patterns). Future work will focus on the integration of a Bayesian network to improve the results. This approach will be implemented in smart homes or smart cities to reduce energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_53-Ontological_Model_to_Predict_User_Mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Active and Reactive Power Control of Wind Turbine based on Doubly Fed Induction Generator using Adaptive Sliding Mode Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100252</link>
        <id>10.14569/IJACSA.2019.0100252</id>
        <doi>10.14569/IJACSA.2019.0100252</doi>
        <lastModDate>2019-02-28T12:12:03.2030000+00:00</lastModDate>
        
        <creator>Othmane Zamzoum</creator>
        
        <creator>Youness El Mourabit</creator>
        
        <creator>Mustapha Errouha</creator>
        
        <creator>Aziz Derouich</creator>
        
        <creator>Abdelaziz El Ghzizal</creator>
        
        <subject>Wind turbine; DFIG; OP-MPPT; ASMC; adaptive sliding gains</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In this work, a robust Adaptive Sliding Mode Controller (ASMC) is proposed to improve the dynamic performance of a Doubly Fed Induction Generator (DFIG)-based wind system under variable wind speed conditions. Firstly, dynamic modeling of the main components of the system is performed. Thereafter, the ASMC is designed to control the active and reactive powers of the machine stator. The structure of these controllers is improved by adding two integral terms. Their sliding gains are determined using the Lyapunov stability theorem so that they are automatically adjusted in order to tackle external disturbances. A Maximum Power Point Tracking (MPPT) strategy is also applied to enhance the power system efficiency. Then, a comparative study with Field Oriented Control (FOC) based on conventional PI control is conducted to assess the robustness of this technique under DFIG parameter variations. Finally, a computer simulation is carried out in the MATLAB/SIMULINK environment using a 2 MW wind system model. Satisfactory performance of the proposed strategy is clearly confirmed under variable operating conditions.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_52-Active_and_Reactive_Power_Control_of_Wind_Turbine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cervical Cancer Prediction through Different Screening Methods using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100251</link>
        <id>10.14569/IJACSA.2019.0100251</id>
        <doi>10.14569/IJACSA.2019.0100251</doi>
        <lastModDate>2019-02-28T12:12:03.1870000+00:00</lastModDate>
        
        <creator>Talha Mahboob Alam</creator>
        
        <creator>Muhammad Milhan Afzal Khan</creator>
        
        <creator>Muhammad Atif Iqbal</creator>
        
        <creator>Abdul Wahab</creator>
        
        <creator>Mubbashar Mushtaq</creator>
        
        <subject>Boosted decision tree; cervical cancer; data mining; decision trees; decision forest; decision jungle; screening methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Cervical cancer remains an important cause of death worldwide because effective access to cervical screening methods is a big challenge. Data mining techniques, including decision tree algorithms, are used in biomedical research for predictive analysis. The imbalanced dataset was obtained from the dataset archive of the University of California, Irvine. The Synthetic Minority Oversampling Technique (SMOTE) has been used to balance the dataset by increasing the number of instances. The dataset consists of patient age, number of pregnancies, contraceptive usage, smoking patterns and chronological records of sexually transmitted diseases (STDs). The Microsoft Azure machine learning tool was used for simulation of the results. This paper mainly focuses on cervical cancer prediction through different screening methods using data mining techniques like boosted decision tree, decision forest and decision jungle algorithms; performance evaluation was done on the basis of the AUROC (area under receiver operating characteristic) curve, accuracy, specificity and sensitivity. The 10-fold cross-validation method was utilized to authenticate the results, and the boosted decision tree gave the best results, achieving a very high prediction of 0.978 on the AUROC curve when the Hinselmann screening method was used. The results obtained by the other classifiers were significantly worse than those of the boosted decision tree.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_51-Cervical_Cancer_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification using Global Discriminate Features in Mammographic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100250</link>
        <id>10.14569/IJACSA.2019.0100250</id>
        <doi>10.14569/IJACSA.2019.0100250</doi>
        <lastModDate>2019-02-28T12:12:03.1570000+00:00</lastModDate>
        
        <creator>Nadeem Tariq</creator>
        
        <creator>Beenish Abid</creator>
        
        <creator>Khawaja Ali Qadeer</creator>
        
        <creator>Imran Hashim</creator>
        
        <creator>Zulfiqar Ali</creator>
        
        <creator>Ikramullah Khosa</creator>
        
        <subject>Breast cancer; mammography; pattern recognition; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Breast cancer has become a rapidly prevailing disease among women all over the world. In terms of mortality, it is considered the second leading cause of death. The risk of death can be reduced by early-stage detection followed by a suitable treatment procedure. Contemporary literature shows that mammographic imaging is widely used for the early detection of breast cancer. In this paper, we propose an efficient Computer Aided Diagnostic (CAD) system for the detection of breast cancer using mammography images. The CAD system extracts largely discriminating features at the global level to represent the target categories in two sets: all 20 extracted features, and the top 7 ranked features among them. Texture characteristics using co-occurrence matrices are calculated via a single offset vector. A multilayer perceptron neural network with an optimized architecture is fed with the individual feature sets and results are produced. A data division of 60%, 20%, and 20% is used for training, cross-validation, and testing, respectively. Robust results are achieved and presented after rotating the data up to five times, showing higher than 99% accuracy for both target categories and hence outperforming existing solutions.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_50-Breast_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization and Deployment of Femtocell: Operator’s Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100249</link>
        <id>10.14569/IJACSA.2019.0100249</id>
        <doi>10.14569/IJACSA.2019.0100249</doi>
        <lastModDate>2019-02-28T12:12:03.1270000+00:00</lastModDate>
        
        <creator>Javed Iqbal</creator>
        
        <creator>Zuhaibuddin Bhutto</creator>
        
        <creator>Zahid latif</creator>
        
        <creator>M. Zahid Tunio</creator>
        
        <creator>Ramesh Kumar</creator>
        
        <creator>Murtaza Hussain Shaikh</creator>
        
        <creator>Muhammad Nawaz</creator>
        
        <subject>Femtocell; deployment; optimization; service level agreement; fixed mobile convergence; cellular networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This study examines the deployment issues of femtocells, which require user satisfaction with the available bandwidth. Femtocells are small base stations installed in homes to improve the coverage and capacity of cellular networks. Femtocells are connected to the network over traditional DSL or FTTH (fiber to the home). Optimization of the cellular network is required for efficient utilization of the available bandwidth and resources. In this paper, we present deployment issues, optimizations of femtocells, operator-perspective survey results, and the service level agreement (SLA) between cellular operators, which meets users’ desires and supports the deployment of a femtocell network.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_49-Optimization_and_Deployment_of_Femtocell.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Sentiment Analysis Techniques of Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100248</link>
        <id>10.14569/IJACSA.2019.0100248</id>
        <doi>10.14569/IJACSA.2019.0100248</doi>
        <lastModDate>2019-02-28T12:12:03.1100000+00:00</lastModDate>
        
        <creator>Abdullah Alsaeedi</creator>
        
        <creator>Mohammad Zubair Khan</creator>
        
        <subject>Twitter; sentiment; Web data; text mining; SVM; Bayesian algorithm; hybrid; ensembles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The entire world is transforming quickly under present innovations. The Internet has become a basic requirement for everybody, with the Web being utilized in every field. With the rapid increase in social network applications, people are using these platforms to voice their opinions on daily issues. Gathering and analyzing people’s reactions toward buying a product, public services, and so on are vital. Sentiment analysis (or opinion mining) is a common natural language processing task that aims to discover the sentiments behind opinions in texts on varying subjects. In recent years, researchers in the field of sentiment analysis have been concerned with analyzing opinions on different topics such as movies, commercial products, and daily societal issues. Twitter is an enormously popular microblog on which users may voice their opinions. Opinion investigation of Twitter data is a field that has been given much attention over the last decade and involves dissecting “tweets” (comments) and the content of these expressions. As such, this paper surveys the various sentiment analysis techniques applied to Twitter data and their outcomes.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_48-A_Study_on_Sentiment_Analysis_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Sentiment Lexicons on Movies Transcripts to Detect Violence in Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100247</link>
        <id>10.14569/IJACSA.2019.0100247</id>
        <doi>10.14569/IJACSA.2019.0100247</doi>
        <lastModDate>2019-02-28T12:12:03.0930000+00:00</lastModDate>
        
        <creator>Badriya Murdhi Alenzi</creator>
        
        <creator>Muhammad Badruddin Khan</creator>
        
        <subject>Sentiment lexicons; sentiment analysis; video transcript; part-of-speech tagging; English SentiWordNet; Vader Package; violence detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In the modern era of technological development and the emergence of Web 2.0 social media applications, disseminating opinions and feelings and participating in discussions on various issues have become very easy, which has led to a boom in text mining and natural language processing research. YouTube is one of the most popular social sites for video sharing. Its videos may contain different types of unwanted content such as violence, which is the cause of many social problems among children, such as aggression and bullying at home, in school, and in public places. This research work reports the performance of two different sentiment lexicons applied to video transcripts to detect violence in YouTube videos. Automating the detection of violence in videos can help censor boards restrict violent videos to a certain age group or block a video entirely regardless of age. The models were built using existing sentiment lexicons. The dataset consists of 100 English video transcripts collected from the web and annotated manually as violent or non-violent. Various experiments were performed on the dataset using English SentiWordNet (ESWN) and the Vader package with different text preprocessing settings. The Vader package outperformed ESWN, achieving 75% accuracy. ESWN results with all-POS tagging (66% accuracy) were better than its results with adjectives-only POS tagging (58% accuracy).</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_47-Application_of_Sentiment_Lexicons_on_Movies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flood Analysis in Peru using Satellite Image: The Summer 2017 Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100246</link>
        <id>10.14569/IJACSA.2019.0100246</id>
        <doi>10.14569/IJACSA.2019.0100246</doi>
        <lastModDate>2019-02-28T12:12:03.0800000+00:00</lastModDate>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <creator>Brian A. Meneses-Claudio</creator>
        
        <creator>Natalia I. Vargas-Cuentas</creator>
        
        <subject>Overflow; landslide; chosica; piura; satellite image processing; sentinel 1</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>At the beginning of 2017, different regions of Peru suffered from heavy rains, mainly due to the &#39;El Ni&#241;o&#39; and &#39;La Ni&#241;a&#39; phenomena. As a result of these massive storms, several cities were affected by overflows and landslides; Chosica and Piura were the most affected. Satellite images have many applications, one of which is aiding better management of natural disasters (post-disaster management). In this sense, the present work proposes the use of radar satellite images from the Sentinel constellation to analyse the areas most affected by floods in the cities of Chosica and Piura. The applied methodology is to analyse and compare two images (one before and one after the disaster) to identify the affected areas based on differences between them. The analysis process includes radiometric calibration, speckle filtering, terrain correction, histogram plotting, and image binarization. The results show maps of the analysed cities and identify a significant number of flooded areas according to satellite images from March 2017. Using the resulting maps, authorities can make better decisions. The satellite images used were from the Sentinel 1 satellite belonging to the European Union.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_46-Flood_Analysis_in_Peru_using_Satellite_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JWOLF: Java Free French Wordnet Library</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100245</link>
        <id>10.14569/IJACSA.2019.0100245</id>
        <doi>10.14569/IJACSA.2019.0100245</doi>
        <lastModDate>2019-02-28T12:12:03.0630000+00:00</lastModDate>
        
        <creator>Morad HAJJI</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <subject>JAVA; API; WordNet; WOLF; JAXB; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Electronic lexical databases (WordNets) have become essential for many computer applications, especially in linguistic research. Free French WordNet is an XML lexical database for the French language based on Princeton WordNet for the English language and other multilingual resources. So far, research on Free French WordNet has focused on the construction and relevance of lexico-semantic information; however, little effort has been made to facilitate the exploitation of this database in the Java language. In this context, this paper proposes our approach for the development of a new Java API based on Java Architecture for XML Binding (JAXB). This Java API will make it easier for developers to exploit Free French WordNet to create applications for natural language processing. To assess the usefulness of our API, its performance was evaluated in the context of a browser that we developed to extract the semantic and lexical relations connecting synsets contained in this database, such as the tree of hypernymy, the tree of hyponymy, and synonyms. The results showed that our API fully meets the needs of programmatic exploitation, exploration, and consultation of this database in a Java application.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_45-JWOLF_Java_Free_French_Wordnet_Library.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Qualitative Comparison of NoSQL Data Stores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100244</link>
        <id>10.14569/IJACSA.2019.0100244</id>
        <doi>10.14569/IJACSA.2019.0100244</doi>
        <lastModDate>2019-02-28T12:12:03.0330000+00:00</lastModDate>
        
        <creator>Sarah H. Kamal</creator>
        
        <creator>Hanan H. Elazhary</creator>
        
        <creator>Ehab E. Hassanein</creator>
        
        <subject>Document datastores; graph datastores; key-value datastores; MongoDB; Neo4j; NoSQL datastores; Redis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Due to the proliferation of big data with large volume, velocity, complexity, and distribution among remote servers, it became obvious that traditional relational databases are unsuitable for meeting the requirements of such data. This led to the emergence of a novel technology among organizations and business enterprises: NoSQL datastores. Today such datastores have become popular alternatives to traditional relational databases, since their schema-less data models can manipulate and handle huge amounts of structured, semi-structured, and unstructured data with high speed and immense distribution. These datastores are of four basic types, and numerous instances have been developed under each type. This implies the need to understand the differences among them and how to select the most suitable one for any given data. Unfortunately, research efforts in the literature either consider differences from a theoretical point of view (without real use cases) or address performance issues such as speed and storage, which is insufficient to give researchers deep insight into the mapping of a given data structure to a given NoSQL datastore type. Hence, this paper provides a qualitative comparison among three popular datastores of different types (Redis, Neo4j, and MongoDB) using a real use case of each type, translated to the others. It thus highlights the inherent differences among them, and hence which data structures each of them suits most.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_44-A_Qualitative_Comparison_of_NoSQL_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Street Actions Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100243</link>
        <id>10.14569/IJACSA.2019.0100243</id>
        <doi>10.14569/IJACSA.2019.0100243</doi>
        <lastModDate>2019-02-28T12:12:03.0000000+00:00</lastModDate>
        
        <creator>Salah Alghyaline</creator>
        
        <subject>Online human action detection; group behavior analysis; CCTV cameras; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Human action detection in real time is one of the most important and challenging problems in computer vision. Nowadays, CCTV cameras exist everywhere in our lives; however, the contents of these cameras are monitored and analyzed by human operators. This paper proposes a real-time human action detection approach which efficiently detects basic and common actions in the street such as stopping, walking, running, group stopping, group walking, and group running. The proposed approach measures the type of object movement based on three techniques: YOLO object detection, the Kalman filter, and homography. Real videos from a CCTV camera and the BEHAVE dataset are used to test the proposed method. The experimental results show that the proposed method is very effective and accurate at detecting basic human actions in the street. The accuracies of the proposed method on the tested videos are 96.9% and 88.4% for the BEHAVE and the created CCTV datasets, respectively. The proposed approach runs in real time at more than 50 fps on the BEHAVE dataset and 32 fps on the created CCTV dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_43-A_Real_Time_Street_Actions_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Neural Network State Estimator for Quadrotor Unmanned Air Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100242</link>
        <id>10.14569/IJACSA.2019.0100242</id>
        <doi>10.14569/IJACSA.2019.0100242</doi>
        <lastModDate>2019-02-28T12:12:02.9870000+00:00</lastModDate>
        
        <creator>Jiang Yuning</creator>
        
        <creator>Muhammad Ahmad Usman Rasool</creator>
        
        <creator>Qian Bo</creator>
        
        <creator>Ghulam Farid</creator>
        
        <creator>Sohaib Tahir Chaudary</creator>
        
        <subject>Neural network observer; quadrotor; nonlinear systems; state estimator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>An adaptive neural observer design is presented for the nonlinear quadrotor unmanned aerial vehicle (UAV). The proposed observer design is motivated by practical quadrotors for which the whole dynamical model of the system is unavailable. In this paper, the dynamics of the quadrotor UAV system and its state space model are discussed, and a neural observer design using a back-propagation algorithm is presented. The steady-state error is reduced by the neural network term in the estimator design, and the transient performance of the system is improved. The proposed methodology reduces the number of sensors and the weight of the quadrotor, which decreases manufacturing cost. A Lyapunov-based stability analysis is utilized to prove the convergence of the error to a neighborhood of zero. The performance and capabilities of the design procedure are demonstrated by simulation results.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_42-An_Adaptive_Neural_Network_State_Estimator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of APi Interface Design by Applying Cognitive Walkthrough</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100241</link>
        <id>10.14569/IJACSA.2019.0100241</id>
        <doi>10.14569/IJACSA.2019.0100241</doi>
        <lastModDate>2019-02-28T12:12:02.9530000+00:00</lastModDate>
        
        <creator>Nur Atiqah Zaini</creator>
        
        <creator>Siti Fadzilah Mat Noor</creator>
        
        <creator>Tengku Siti Meriam Tengku Wook</creator>
        
        <subject>Cognitive Walkthrough; interface design; usability evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The usability evaluation of the APi interface design was conducted through the Cognitive Walkthrough method. APi is a mobile game application designed specifically for preschool children of Tabika Kemas Kampung Berawan, Limbang Sarawak to teach fire safety education. Existing fire safety games exhibit interaction-style and interface-design issues when tested on preschool children. A key ingredient in encouraging preschool children to learn the basic skills of fire safety is to provide them with interactive learning as a new learning method. A low-fidelity APi prototype was designed based on the user requirements of the preschool children, focusing on cognitive, psychomotor, and behavioural aspects. The Cognitive Walkthrough method applied to the APi interface design involved a small group of professional designers and developers. As a result, a high-fidelity APi prototype interface design was developed for the preschool children.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_41-Evaluation_of_APi_Interface_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Modified Grey Relational Method for Vertical Handover in Heterogeneous Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100240</link>
        <id>10.14569/IJACSA.2019.0100240</id>
        <doi>10.14569/IJACSA.2019.0100240</doi>
        <lastModDate>2019-02-28T12:12:02.9370000+00:00</lastModDate>
        
        <creator>Imane Chattate</creator>
        
        <creator>Mohamed El Khaili</creator>
        
        <creator>Jamila Bakkoury</creator>
        
        <subject>Component; vertical handover; network selection; Quality of Service (QoS); Multi Criteria Decision-Making (MCDM); Grey Relational Analysis (GRA); Fuzzy Analytic Hierarchy Process (FAHP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>With the advent of next-generation wireless network technologies, vertical handover has become indispensable to keep the mobile user always best connected (ABC) in a heterogeneous environment, especially given the significant number of multimedia applications that require good quality of service (QoS) for users. To handle this issue, an improvement of the modified Grey Relational Analysis (MGRA) for selecting the Always-Suitable-Connection (ASC) network has been proposed. The Fuzzy Analytic Hierarchy Process (FAHP) method has been used to determine the weights of the criteria. To validate our contribution, the proposed method, called E-MGRA, has been applied to obtain a ranking of suitable networks. Finally, a simulation is presented to demonstrate the ability of our developed approach to reduce the number of handovers compared to the classical method.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_40-Improving_Modified_Grey_Relational_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimizing Load Shedding in Electricity Networks using the Primary, Secondary Control and the Phase Electrical Distance between Generator and Loads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100239</link>
        <id>10.14569/IJACSA.2019.0100239</id>
        <doi>10.14569/IJACSA.2019.0100239</doi>
        <lastModDate>2019-02-28T12:12:02.9070000+00:00</lastModDate>
        
        <creator>Nghia. T. Le</creator>
        
        <creator>Anh. Huy. Quyen</creator>
        
        <creator>Binh. T. T. Phan</creator>
        
        <creator>An. T. Nguyen</creator>
        
        <creator>Hau. H. Pham</creator>
        
        <subject>Load shedding; primary control; secondary control; phase electrical distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper proposes a method for determining the location and calculating the minimum amount of load that needs to be shed in order to recover the frequency back to the allowable range. Based on consideration of the primary control of the turbine governor and the reserve power of the generators for secondary control, the minimum amount of load shedding was calculated to recover the frequency of the power system. The phase electrical distance between the outage generator and the loads is computed and analyzed to prioritize the distribution of the load shedding amount among the load bus positions: the nearer a load bus is to the outage generator, the larger the amount of load that will be shed, and vice versa. With this technique, a large amount of load shedding could be avoided, and hence economic losses and customer service interruptions are reduced. The case study simulation was verified using the PowerWorld software system. Tests on the IEEE 37-bus, 9-generator standard power system have demonstrated the effectiveness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_39-Minimizing_Load_Shedding_in_Electricity_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Resource Utilization on GPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100238</link>
        <id>10.14569/IJACSA.2019.0100238</id>
        <doi>10.14569/IJACSA.2019.0100238</doi>
        <lastModDate>2019-02-28T12:12:02.8900000+00:00</lastModDate>
        
        <creator>M. R. Pimple</creator>
        
        <creator>S.R. Sathe</creator>
        
        <subject>Dynamic kernel; GPU; Multithreading; occupancy; parallel computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The problems arising from massive data storage and data analysis can be handled by recent technologies like cloud computing and parallel computing. MapReduce, MPI, CUDA, OpenMP, and OpenCL are some of the widely available tools and techniques that use a multithreading approach. However, it is a challenging task to use these technologies effectively to handle compute-intensive problems in fields like life science, the environment, fluid dynamics, and image processing. In this paper, we have used many-core platforms with graphics processing units (GPUs) to implement sequence alignment, one of the most important and fundamental problems in the field of bioinformatics. The dynamic and concurrent kernel features offered by the graphics card are used to speed up performance; with these features, we achieved speedups of around 120X and 55X. We coupled the well-known tiling technique with these features and observed a performance improvement of up to 4X and 2X compared to non-tiled execution. The paper also analyses resource parameters and GPU occupancy and proposes their relationship with the design parameters for the chosen algorithm. These observations have been quantified, and the relationship between the parameters is presented. The results of the study can be extended further to study similar algorithms in this area.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_38-Analysis_of_Resource_Utilization_on_GPU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>One-Lead Electrocardiogram for Biometric Authentication using Time Series Analysis and Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100237</link>
        <id>10.14569/IJACSA.2019.0100237</id>
        <doi>10.14569/IJACSA.2019.0100237</doi>
        <lastModDate>2019-02-28T12:12:02.8770000+00:00</lastModDate>
        
        <creator>Sugondo Hadiyoso</creator>
        
        <creator>Suci Aulia</creator>
        
        <creator>Achmad Rizal</creator>
        
        <subject>ECG; biometric; Hjorth; sample entropy; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In this research, a person identification system has been simulated using electrocardiogram (ECG) signals as biometrics. Ten adults participated as subjects in this research; their ECG signals were recorded using a one-lead ECG machine. A total of 65 raw ECG waves from the 10 subjects were analyzed. The raw signal was then processed using the Hjorth descriptor and sample entropy (SampEn) to obtain the signal features. The Support Vector Machine (SVM) algorithm was used as the classifier for subject authentication based upon the recorded ECG signal. The results showed that the highest accuracy of 93.8% was achieved with the Hjorth descriptor. Compared to SampEn, this method is quite promising for implementation, having good performance and fewer features.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_37-One_Lead_Electrocardiogram_for_Biometric_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting 30-Day Hospital Readmission for Diabetes Patients using Multilayer Perceptron</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100236</link>
        <id>10.14569/IJACSA.2019.0100236</id>
        <doi>10.14569/IJACSA.2019.0100236</doi>
        <lastModDate>2019-02-28T12:12:02.8430000+00:00</lastModDate>
        
        <creator>Ti’jay Goudjerkan</creator>
        
        <creator>Manoj Jayabalan</creator>
        
        <subject>Readmission; diabetes; multilayer perceptron; feature engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Hospital readmission is considered a key metric for assessing health center performance. Indeed, readmissions involve different consequences, such as the patient’s health condition and hospital operational efficiency, but also a cost burden from a wider perspective. Prediction of 30-day readmission for diabetes patients is therefore of prime importance. The existing models are characterized by their limited prediction power, generalizability, and pre-processing. For instance, the benchmarked LACE (Length of stay, Acuity of admission, Charlson comorbidity index and Emergency visits) index trades prediction performance for ease of use by the end user. As such, this study proposes a comprehensive pre-processing framework to improve the model’s performance while exploring and selecting prominent features for 30-day unplanned readmission among diabetes patients. To deal with readmission prediction, this study also proposes a Multilayer Perceptron (MLP) model on data collected from 130 US hospitals. More specifically, the pre-processing technique includes comprehensive data cleaning, data reduction, and transformation; the Random Forest algorithm for feature selection and the SMOTE algorithm for data balancing are examples of methods used in the proposed pre-processing framework. The proposed combination of data engineering and MLP abilities was found to outperform existing research when implemented and tested on health center data. The performance of the designed model was, in this regard, particularly balanced across the different metrics of interest, with accuracy and Area Under the Curve (AUC) of 95% and a recall of 99%, close to the optimum.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_36-Predicting_30_Day_Hospital_Readmission_for_Diabetes_Patients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Query Expansion in Information Retrieval using Frequent Pattern (FP) Growth Algorithm for Frequent Itemset Search and Association Rules Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100235</link>
        <id>10.14569/IJACSA.2019.0100235</id>
        <doi>10.14569/IJACSA.2019.0100235</doi>
        <lastModDate>2019-02-28T12:12:02.8300000+00:00</lastModDate>
        
        <creator>Lasmedi Afuan</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Yohanes Suyanto</creator>
        
        <subject>IR; query expansion; association rules; support; confidence; recall; precision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Documents on the Internet have increased in number exponentially; this has made it difficult for users to find the documents or information they need. Special techniques are needed to retrieve documents that are relevant to user queries. One technique that can be used is Information Retrieval (IR). IR is the process of finding data (generally documents) in the form of text that matches the information needed from a collection of documents stored on a computer. A problem that often appears in IR is incorrect user queries, caused by users’ limitations in representing their needs in the query. Researchers have proposed various solutions to overcome these limitations, one of which is Query Expansion (QE). Various methods have been applied to QE, including Ontology, Latent Semantic Indexing (LSI), Local Co-Occurrence, Relevance Feedback, Concept Based, and WordNet/Synonym Mapping. However, these methods still have limitations, one of them being the display of the connection, or relevance, of the appearance of words or phrases in the document collection. To overcome this limitation, in this study we propose an approach to QE using the FP-Growth algorithm for frequent itemset search and Association Rules (AR) mining. We apply AR to QE to display the relevance of the appearance of a word or term with another word or term in the collection of documents, where the resulting terms are used to expand user queries. The main contribution of this study is the use of association rules mined with FP-Growth from the document collection to find connections between words, which are then used to expand the user’s original query in IR. For the evaluation of QE performance, we use recall, precision, and f-measure. Based on the research that has been done, it can be concluded that the use of AR in QE can improve the relevance of the documents retrieved. This is indicated by the average recall, precision, and f-measure values of 94.44%, 89.98%, and 92.07%, respectively. After comparing the IR process without QE to IR using QE, recall increased by 25.65%, precision by 1.93%, and F-measure by 15.78%.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_35-Query_Expansion_in_Information_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis of Arabic Jordanian Dialect Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100234</link>
        <id>10.14569/IJACSA.2019.0100234</id>
        <doi>10.14569/IJACSA.2019.0100234</doi>
        <lastModDate>2019-02-28T12:12:02.7970000+00:00</lastModDate>
        
        <creator>Jalal Omer Atoum</creator>
        
        <creator>Mais Nouman</creator>
        
        <subject>Sentiment analysis; Arabic Jordanian dialect; tweets; machine learning; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Sentiment Analysis (SA) of social media content has become one of the growing areas of research in data mining. SA enables text mining of public opinion on a subject in real time. This paper proposes an SA model for Arabic Jordanian dialect tweets. Tweets are annotated with three classes: positive, negative, and neutral. Support Vector Machines (SVM) and Na&#239;ve Bayes (NB) are used as supervised machine learning classifiers. Preprocessing of the tweets for SA includes cleaning noisy tweets, normalization, tokenization, named entity recognition, stop-word removal, and stemming. Experiments conducted on this model showed encouraging results when an Arabic light stemmer/segmenter was applied to the Arabic Jordanian dialect tweets. The results also showed that SVM outperforms NB in classifying such tweets.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_34-Sentiment_Analysis_of_Arabic_Jordanian_Dialect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Book Reader for Visual Impairment Person using IoT Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100233</link>
        <id>10.14569/IJACSA.2019.0100233</id>
        <doi>10.14569/IJACSA.2019.0100233</doi>
        <lastModDate>2019-02-28T12:12:02.7830000+00:00</lastModDate>
        
        <creator>Norharyati binti Harum</creator>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <subject>Internet of Things; Raspberry Pi; image processing; wellness; IR4.0; smart book reader</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper focuses on the development of a Smart Book Reader that helps blind people, and those with low vision, read books without using Braille. The project utilises IoT technology, combining an IoT device with IoT infrastructure and services. The IoT device, a Raspberry Pi, is very energy efficient, requiring only 5V to run, and is highly portable, being only the size of a credit card. The book reader captures pictures of book pages with a camera and processes the images using Optical Character Recognition software. Once the text is recognised, the book reader reads it aloud, so blind and low-vision users can hear the content without needing to touch it with their fingertips. With this book reader, the user can enjoy both softcopy and hardcopy books through an online text-to-voice converter, with the help of IoT connectivity protocols such as Wi-Fi and 4G services; for hardcopy books, an embedded camera captures each page. The motivation for developing this product is to encourage all blind people to read ordinary books, helping them gain knowledge from reading without needing to learn Braille.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_33-Smart_Book_Reader_for_Visual_Impairment_Person.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework to Automate Cloud based Service Attacks Detection and Prevention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100232</link>
        <id>10.14569/IJACSA.2019.0100232</id>
        <doi>10.14569/IJACSA.2019.0100232</doi>
        <lastModDate>2019-02-28T12:12:02.7670000+00:00</lastModDate>
        
        <creator>P Ravinder Rao</creator>
        
        <creator>Dr. V. Sucharita</creator>
        
        <subject>Data breach; HoA; insider threat; malware injection; ACS; insecure APIs; DoS; automated attack detection; automated prevention; characteristics based detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>With the increasing demand for high availability, scalability, and cost minimization, the adoption of cloud computing is also increasing. Driven by demand from data consumers and application customers, service providers and application owners are migrating their applications to the cloud. These migrations of traditional applications, together with new deployments, benefit both consumers and service providers: consumers get higher application availability, while providers reduce costs through optimal scalability and can deploy additional features at least cost, which in turn provides better customer satisfaction. Nevertheless, these migrations and new deployments also attract the attention of hackers and attackers. In the recent past, several attacks have been reported on popular services such as search engines and storage services, and on critical applications ranging from healthcare to defence. Some attacks are limited to data exploration, where the attackers only consume data; in other cases the attackers destroy crucial services. The major challenge in detecting these attacks is identifying the nature of the connection request. Moreover, detection alone is not sufficient to secure cloud services; security must be deployed as a service, in the applications, the services, or the data centre, as an automatic and continuous measure. Various research endeavours have shown significant recent improvements in recognizing security attacks; however, these attempts have not provided solutions for preventing them, and the existing methods are not automated and cannot be included in the services. This work therefore provides a unique automated framework that detects application traffic patterns and generates rule sets for detecting anomalies in request types. The major outcome of this work is the identification of attack types and the prevention of further damage to cloud services with minimal computational load; an additional benefit is a set of preventive measures for popular attack types. The work also demonstrates the ability to detect new attack types through traffic pattern analysis, helping make the cloud application hosting industry a safer place.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_32-A_Framework_to_Automate_Cloud_based_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Structured Abstract for Research Papers Supported by Tabular Format using NLP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100231</link>
        <id>10.14569/IJACSA.2019.0100231</id>
        <doi>10.14569/IJACSA.2019.0100231</doi>
        <lastModDate>2019-02-28T12:12:02.7500000+00:00</lastModDate>
        
        <creator>Zainab Almugbel</creator>
        
        <creator>Nahla El Haggar</creator>
        
        <creator>Neda Bugshan</creator>
        
        <subject>Natural language processing (NLP); Na&#239;ve Bayes (NB) classifier; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The abstract is an extensive summary of a scientific paper that supports a quick decision about whether to read it. A structured abstract is useful for representing the major components of a paper, which in turn makes it easier to extract information about the study. Despite the importance of the structured abstract, many computer science research papers do not use it, which may lead to weak abstracts. This paper applies natural language processing (NLP) techniques and machine learning to conventional abstracts to automatically generate structured abstracts in the IMRaD (Introduction, Methods, Results, and Discussion) format, which is predominant in medical and scientific writing. The effectiveness of this sentence classification, i.e., the capability of a method to classify the sentences of unstructured computer science abstracts into IMRaD sections, depends on both feature selection and the classification algorithm. This is achieved via an IMRaD Classifier that measures the similarity of sentences between structured and unstructured abstracts of different research papers, and then classifies each sentence into one of the IMRaD tags based on the measured similarity value. Finally, the IMRaD Classifier is evaluated by applying Na&#239;ve Bayes (NB) and Support Vector Machine (SVM) classifiers to the same dataset. The dataset contains 250 conventional computer science abstracts covering 2015 to 2018, collected from two main sources: DBLP and the IOS Press content library. 200 XML-based files are used for training and 50 for testing; the dataset thus comprises 4x250 files, where each file contains sentences that belong to different abstracts but to the same IMRaD section. The experimental results show that NB predicts better outcomes for each class (Introduction, Methods, Results, Discussion, and Conclusion) than SVM. Furthermore, classifier performance depends on selecting an appropriate number of representative features from the text.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_31-Automatic_Structured_Abstract.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MINN: A Missing Data Imputation Technique for Analogy-based Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100230</link>
        <id>10.14569/IJACSA.2019.0100230</id>
        <doi>10.14569/IJACSA.2019.0100230</doi>
        <lastModDate>2019-02-28T12:12:02.7200000+00:00</lastModDate>
        
        <creator>Muhammad Arif Shah</creator>
        
        <creator>Dayang N. A. Jawawi</creator>
        
        <creator>Mohd Adham Isa</creator>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Muhammad Younas</creator>
        
        <creator>Ahmed Mustafa</creator>
        
        <subject>Analogy-based estimation; effort estimation; missing data imputation; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The success or failure of a complex software project is strongly associated with accurate estimation of development effort. Numerous estimation models have been developed, but the most widely used is Analogy-Based Estimation (ABE). The ABE model follows human nature, estimating a future project&#39;s effort by making analogies with past projects&#39; data. Since ABE relies on historical datasets, the quality of those datasets affects the accuracy of estimation. Most software engineering datasets have missing values; researchers either delete the projects containing missing values or leave them untreated, which reduces ABE performance. In this study, the Numeric Cleansing (NC), K-Nearest Neighbor Imputation (KNNI), and Median Imputation of the Nearest Neighbor (MINN) methods are used to impute the missing values in the Desharnais and DesMiss datasets for ABE; the MINN technique is introduced in this study. A comparison among these imputation methods is performed to identify the most suitable missing data imputation method for ABE. The results suggest that MINN imputes more realistic values than NC and KNNI, and that imputation treatment helps the ABE model better predict software development effort.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_30-MINN_A_Missing_Data_Imputation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Photometric Stereo Approach and the Visualization of 3D Face Reconstruction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100229</link>
        <id>10.14569/IJACSA.2019.0100229</id>
        <doi>10.14569/IJACSA.2019.0100229</doi>
        <lastModDate>2019-02-28T12:12:02.7030000+00:00</lastModDate>
        
        <creator>Muhammad Sajid Khan</creator>
        
        <creator>Zabeeh Ullah</creator>
        
        <creator>Maria Shahid Butt</creator>
        
        <creator>Zohaib Arshad</creator>
        
        <creator>Sobia Yousaf</creator>
        
        <subject>3D face; photometric stereo; reconstruction; recognition; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>3D morphable models of the human face have enabled a myriad of applications in computer vision, human-computer interaction, and security surveillance. However, due to variation in size and the complexity of training datasets, landmark mapping, real-time representation, and the rendering or synthesis of images in three dimensions remain limited. In this paper, we extend the photometric stereo approach to reconstruct the human face in three dimensions. The proposed method consists of two steps. First, it automatically detects the face and segments the iris, along with statistical features of the pupil location within it. Second, it selects a minimum of six features, which, together with the processed iris, are used to generate the 3D face. Compared with existing manual methods, our approach provides automation that produces better and more efficient results.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_29-The_Photometric_Stereo_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graphic User Interface Design Principles for Designing Augmented Reality Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100228</link>
        <id>10.14569/IJACSA.2019.0100228</id>
        <doi>10.14569/IJACSA.2019.0100228</doi>
        <lastModDate>2019-02-28T12:12:02.6870000+00:00</lastModDate>
        
        <creator>Afshan Ejaz</creator>
        
        <creator>Dr Syed Asim Ali</creator>
        
        <creator>Muhammad Yasir Ejaz</creator>
        
        <creator>Dr Farhan Ahmed Siddiqui</creator>
        
        <subject>GUI; augmented reality; metaphors; affordance; perception; satisfaction; cognitive burden</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Reality is a combination of perception, reconstruction, and interaction. Augmented Reality (AR) is a technology layered over everyday life that encompasses text-based, voice-based, and map- or gesture-based interfaces, so designing AR application interfaces is a difficult task for the designer. A user interface should be not only easy to use and easy to learn but also interactive and self-explanatory, with high perceived affordance, perceived usefulness, consistency, and discoverability, so that users can easily recognize and understand the design. Many interface design principles have been introduced for this purpose, such as learnability, affordance, simplicity, memorability, feedback, visibility, and flexibility, but none of them identify the most appropriate principles for designing AR application interfaces. The basic goal of introducing design principles for AR interfaces is to match user effort to the computer display (&#8220;plot user input onto computer output&#8221;) using appropriate interface action symbols (&#8220;metaphors&#8221;), making the application easy to use, easy to understand, and easy to discover. In this study, by observing AR systems and interfaces, several well-known GUI design principles (&#8220;user-centered design&#8221;) are identified, and through them several issues are exposed that these principles can resolve. Drawing on multiple studies, our study suggests interface design principles that make designing AR application interfaces easier and more helpful for the designer, as these principles make the interface more interactive, learnable, and usable. To test our findings, Pok&#233;mon Go, an Augmented Reality game, was selected, and all the suggested principles were applied and tested on its interface. From the results, our study concludes that the identified principles are the most important ones when developing and testing any AR application interface.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_28-Graphic_User_Interface_Design_Principles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Growing Role of Complex Sensor Systems and Algorithmic Pattern Recognition for Vascular Dementia Onset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100227</link>
        <id>10.14569/IJACSA.2019.0100227</id>
        <doi>10.14569/IJACSA.2019.0100227</doi>
        <lastModDate>2019-02-28T12:12:02.6600000+00:00</lastModDate>
        
        <creator>Janna Madden</creator>
        
        <creator>Arshia Khan</creator>
        
        <subject>Vascular dementia; pattern recognition; machine learning; artificial intelligence; algorithmic disease detection; vascular dementia onset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Vascular dementia is often clinically diagnosed only once the effects of the disease are already prevalent in a person&#8217;s daily living routines. However, previous research has linked various behavioral and physiological changes to the development of vascular dementia, with these changes beginning to present earlier than clinical diagnosis is currently possible. This review highlights works focused on these early signs of vascular dementia, which are difficult to recognize. Many computational systems have been proposed for evaluating them; the works reviewed here largely utilize sensor systems or algorithmic evaluation that can be incorporated into a person&#8217;s environment to measure behavioral and physiological metrics. The resulting raw data can then be analyzed computationally to draw conclusions about the patterns of change surrounding the onset of vascular dementia. This compilation of works presents a current framework for investigating the various behavioral and physiological metrics, as well as potential avenues for further investigation of sensor system and algorithmic design, with the goal of enabling earlier detection of vascular dementia.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_27-The_Growing_Role_of_Complex_Sensor_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Networking Issues for Security and Privacy in Mobile Health Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100225</link>
        <id>10.14569/IJACSA.2019.0100225</id>
        <doi>10.14569/IJACSA.2019.0100225</doi>
        <lastModDate>2019-02-28T12:12:02.6400000+00:00</lastModDate>
        
        <creator>Yasser Mohammad Al-Sharo</creator>
        
        <subject>Wireless networks; security; privacy; mobile; analyses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>It is highly important to take great care with the personal information collected by mobile health applications. With rapid technological advancement worldwide, mobile applications have spread to almost every sector, yet many developers neglect the privacy of the information their applications collect and release insecure apps. The aim of this report is to analyze the state of privacy and security in mobile health. The analysis was conducted through a review of the academic literature and a study of the laws regulating mobile health in the EU and the USA, concluding with recommendations for mobile application developers on how to maintain privacy and security. As a result, certifications and standards are proposed for app developers, along with a guide for researchers and developers.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_25-Networking_Issues_for_Security_and_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Techniques to Detect Malicious Activities on Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100226</link>
        <id>10.14569/IJACSA.2019.0100226</id>
        <doi>10.14569/IJACSA.2019.0100226</doi>
        <lastModDate>2019-02-28T12:12:02.6400000+00:00</lastModDate>
        
        <creator>Abdul Rahaman Wahab Sait</creator>
        
        <creator>Dr. M. Arunadevi</creator>
        
        <creator>Dr. T. Meyyappan</creator>
        
        <subject>Malware detection; malicious behavior; spam detection; web terrorism; Sql injection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The World Wide Web is increasingly vulnerable to malicious activities. Spam advertisements, Sybil attacks, rumour propagation, financial fraud, malware dissemination, and SQL injection are some of the malicious activities on the web. Terrorists use the web as a weapon to propagate false information, and many innocent young people have been trapped by web terrorists. It is very difficult to trace the footprint of malicious activities on the web. Much research is under way to find mechanisms that protect web users and prevent malicious activities. The aim of this survey is to provide a study of recent techniques for detecting malicious activities on the web.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_26-A_Survey_on_Techniques_to_Detect_Malicious_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Privacy Issues on Smart City Services in a Model Smart City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100224</link>
        <id>10.14569/IJACSA.2019.0100224</id>
        <doi>10.14569/IJACSA.2019.0100224</doi>
        <lastModDate>2019-02-28T12:12:02.6100000+00:00</lastModDate>
        
        <creator>Nasser H. Abosaq</creator>
        
        <subject>IoT; public Wi-Fi; privacy; D2D; D2U; Industry 4.0; 5G; secrecy; FIDO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>With recent technological development, there is a prevalent trend of deploying smart infrastructure with the intention of providing smart services to inhabitants. City governments of the current era are under huge pressure to facilitate their residents by offering state-of-the-art services equipped with modern technology gadgets. To achieve this goal, they have been forced into massive investment in IT infrastructure, and consequently they collect huge amounts of data from users with the intention of providing better or improved services. These services are very exciting, but on the other side they also pose a big threat to the privacy of individuals. This paper designs and simulates a smart city model connected to mandatory communication devices that also produce data for different sensors. Based on the simulation results and the possible threats of alteration of this data, it suggests solutions for the privacy issues that must be considered a top priority to ensure the secrecy and privacy of smart city residents.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_24-Impact_of_Privacy_Issues_on_Smart_City_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart City and Smart-Health Framework, Challenges and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100223</link>
        <id>10.14569/IJACSA.2019.0100223</id>
        <doi>10.14569/IJACSA.2019.0100223</doi>
        <lastModDate>2019-02-28T12:12:02.5800000+00:00</lastModDate>
        
        <creator>Majed Kamel Al-Azzam</creator>
        
        <creator>Malik Bader Alazzam</creator>
        
        <subject>Smart city; challenges; opportunities; smart health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The new age of mobile health is accompanied by the wider implementation of ubiquitous and pervasive mobile communication and computing, which in turn has brought enormous opportunities for organizations and governments to reconsider their concept of healthcare. Alongside this, the global process of urbanization poses a daunting test and draws expert attention towards cities that must absorb significantly large populations and serve people in a humane and efficient manner. The convergence of these two trends has led to the evolution of the concept of smart cities together with mobile healthcare. This article is intended to provide an overview of smart health, understood as context-aware mobile health within smart cities. Its purpose is to offer a standpoint on the main fields of research and knowledge involved in establishing this new idea. Furthermore, the article focuses on the major opportunities and challenges implied by s-health and offers common ground for future research.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_23-Smart_City_and_Smart_Health_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Document Similarity Detection using K-Means and Cosine Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100222</link>
        <id>10.14569/IJACSA.2019.0100222</id>
        <doi>10.14569/IJACSA.2019.0100222</doi>
        <lastModDate>2019-02-28T12:12:02.5630000+00:00</lastModDate>
        
        <creator>Wendi Usino</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Khalid Hamed S. Allehaibi</creator>
        
        <creator>Arif Bramantoro</creator>
        
        <creator>Hasniaty A</creator>
        
        <creator>Wahyu Amaldi</creator>
        
        <subject>K-means; cosine distance; cluster; document similarity; document frequency; inverse document frequency; preprocessing; vector space model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>A two-year study by the Ministry of Research, Technology and Education in Indonesia evaluated most universities in Indonesia. The evaluation found that the softcopies of various doctoral dissertations contained passages similar to texts available on the Internet. This suspected plagiarism has a negative effect on both students and faculty members, and its main cause is the lack of standardized awareness of plagiarism among faculty members. Therefore, this study proposes a computerized system able to detect plagiarism using the K-means and cosine distance algorithms. The process starts with preprocessing, which includes a novel step of checking against the Indonesian Big Dictionary, followed by vector space model design and the combined K-means and cosine distance calculation over 17 documents used as test data. The results show that the system achieves a detection accuracy of 93.33%.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_22-Document_Similarity_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self Adaptable Deployment for Heterogeneous Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100221</link>
        <id>10.14569/IJACSA.2019.0100221</id>
        <doi>10.14569/IJACSA.2019.0100221</doi>
        <lastModDate>2019-02-28T12:12:02.5330000+00:00</lastModDate>
        
        <creator>Umesh M. Kulkarni</creator>
        
        <creator>Harish H. Kenchannavar</creator>
        
        <creator>Umakant P. Kulkarni</creator>
        
        <subject>Wireless sensor network (WSN); deployment strategy; self-adaptable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Wireless Sensor Networks (WSNs) are becoming a crucial component of most fields of engineering. A heterogeneous WSN (HWSN) is characterized by wireless sensor nodes having link (communication), computation, or energy heterogeneity for a specific application. WSN applications are constrained by the availability of power; hence, conserving energy in a sensor network is a major challenge. A literature survey shows that node deployment can have a significant impact on energy conservation, and that self-adaptable nodes can save considerably more energy than other types of deployment. This work uses the concept of node self-adaptation to conserve energy in an HWSN, since a deployment strategy driven by dynamic decision-making capability can boost the overall performance of a WSN. The work presents an analysis of three types of deployment: all nodes fixed, all nodes moving, and only high-energy nodes moving, with respect to throughput, delay, and energy consumption. Experimental results show that self-adaptable dynamic deployment gives 10% better throughput and 6% better energy conservation than static deployment strategies.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_21-Self_Adaptable_Deployment_for_Heterogeneous_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic Driven Expert System for the Assessment of Software Projects Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100220</link>
        <id>10.14569/IJACSA.2019.0100220</id>
        <doi>10.14569/IJACSA.2019.0100220</doi>
        <lastModDate>2019-02-28T12:12:02.5170000+00:00</lastModDate>
        
        <creator>Mohammad Ahmad Ibraigheeth</creator>
        
        <creator>Syed Abdullah Fadzli</creator>
        
        <subject>Risk assessment; critical success factors; fuzzy expert systems; fuzzy rule-base; risk probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>This paper presents an expert risk evaluation system based on an up-to-date empirical study that uses real data from a large number of software projects to identify the factors that most affect project success. A software project can be affected by a range of risk factors throughout all phases of the development process; it has therefore become necessary to consider risk while developing the project. Risk assessment and management play a significant role in avoiding software project failure and can help mitigate the effects of undesirable events on project outcomes. In this paper, the researchers develop a novel fuzzy-logic expert tool that project decision makers can use to evaluate expected risks. The tool estimates risk probability based on the software project’s critical success factors. A user-friendly interface enables project managers to perform general risk evaluation during any stage of the software development process. The proposed tool can help achieve effective risk control and therefore improve overall project outcomes.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_20-Fuzzy_Logic_Driven_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Efficient Cognitive Radio MAC Protocol for Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100219</link>
        <id>10.14569/IJACSA.2019.0100219</id>
        <doi>10.14569/IJACSA.2019.0100219</doi>
        <lastModDate>2019-02-28T12:12:02.4870000+00:00</lastModDate>
        
        <creator>Muhammad Yaseer</creator>
        
        <creator>Haseeb Ur Rehman</creator>
        
        <creator>Amir Usman</creator>
        
        <creator>Muhammad Tayyab Shah</creator>
        
        <subject>Ad Hoc networks; cognitive radio (CR); backup channel; energy efficient protocols; MAC protocol; primary users; secondary users</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Cognitive Radio (CR) is an emerging technology for exploiting the existing spectrum dynamically; it can intelligently access vacant spectrum frequency bands. Although a number of methodologies have been suggested for improving the performance of CR networks, little attention has been given to efficient spectrum usage, management, and energy efficiency. In this paper, a modern paradigm for spectrum allotment and usage, manifested as CR, is introduced as a potential solution to this problem: CR (unlicensed) users can opportunistically use the available free licensed spectrum bands in a way that restricts interference to the extent that the primary (licensed) users can allow. We analyze and compare various CR MAC protocols, evaluating the CREAM MAC, RMC MAC, SWITCH MAC, and EECR MAC protocols in terms of parameters such as throughput, data transmission, and time efficiency. We conclude by proposing the most efficient protocol, which combines their shared features, named the Proposed Efficient Cognitive Radio MAC (PECR-MAC) protocol.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_19-Analysis_of_Efficient_Cognitive_Radio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Usability Model for Mobile Applications Generated with a Model-Driven Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100218</link>
        <id>10.14569/IJACSA.2019.0100218</id>
        <doi>10.14569/IJACSA.2019.0100218</doi>
        <lastModDate>2019-02-28T12:12:02.4700000+00:00</lastModDate>
        
        <creator>Lassaad Ben Ammar</creator>
        
        <subject>Usability; mobile apps; model-driven engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Usability evaluation of mobile applications (referred to as apps) is an emerging research area in the field of Software Engineering. Several research studies have focused on the challenge of usability evaluation in the mobile context. Typically, usability is measured once the mobile app is implemented; at this stage of the development process, it is costly to go back and make the design changes required to overcome usability problems. Model-Driven Engineering (MDE) has proven to be a promising solution to this problem. In such an approach, a model can be built and analyzed early in the design cycle to identify key characteristics like usability. The traceability established between this model and the final application by means of model transformation plays a key role in preserving or even improving its usability. This paper reviews existing usability studies and subsequently proposes a usability model for conducting early usability evaluation of mobile apps generated with an MDE tool.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_18-A_Usability_Model_for_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Existing Trends of Digital Watermarking and its Significant Impact on Multimedia Streaming: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100217</link>
        <id>10.14569/IJACSA.2019.0100217</id>
        <doi>10.14569/IJACSA.2019.0100217</doi>
        <lastModDate>2019-02-28T12:12:02.4370000+00:00</lastModDate>
        
        <creator>R. Radha Kumari</creator>
        
        <creator>V. Vijaya Kumar</creator>
        
        <creator>K.Rama Naidu</creator>
        
        <subject>Authentication; copyright-protection; digital information; digital watermark; robustness; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Nowadays, digital media has become a general-purpose resource-sharing medium and a convenient way to share large amounts of information among individuals. However, these digital data are stored and shared over the internet, which is entirely unsecured and frequently attacked, resulting in massive losses and creating severe issues of copyright protection, ownership protection, authentication, and secure communication. In recent years, digital watermarking technology has received extensive attention from users and researchers for content protection and digital data authentication. However, before digital watermarking techniques can be implemented in practical applications, many problems still need to be solved technically and efficiently. The purpose of this manuscript is to provide a detailed survey of current digital watermarking techniques for all media formats, together with their applications and operational processes. Its prime objective is to reveal the research problems and the requirements for implementing robust watermarking techniques, after analyzing the progress of watermarking schemes and current research trends.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_17-Existing_Trends_of_Digital_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Advice Seeking and Filtering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100216</link>
        <id>10.14569/IJACSA.2019.0100216</id>
        <doi>10.14569/IJACSA.2019.0100216</doi>
        <lastModDate>2019-02-28T12:12:02.4070000+00:00</lastModDate>
        
        <creator>Reham Alskireen</creator>
        
        <creator>Dr. Said Kerrache</creator>
        
        <creator>Dr. Hafida Benhidour</creator>
        
        <subject>Recommender system; hidden metric; advice; Bayesian framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Advice seeking and knowledge exchange over the Internet and social networks have become very common activities. The system proposed in this work aims to assist users in choosing the best possible advice and allows them to exchange advice automatically without knowing each other. The approach is based on a newly proposed dynamic version of the hidden metric model, where the distance between each pair of users is computed and used to represent the users in a d-dimensional Euclidean space. In addition to a position, each user is assigned a degree, which represents his/her popularity or how much he/she is trusted by the system. The two factors, distance and degree, are used to select advice providers. Both the positions of the users and their degrees are adjusted according to user feedback. The proposed feedback algorithm is based on a Bayesian framework and aims to obtain more accurate advice in the future. The system was evaluated and tested using simulation: the mean square error was measured for different parameters, and all experiments were performed on a varying number of users (100, 500, and 1000). The results show that the system can scale to a large number of users.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_16-An_Automated_Advice_Seeking_and_Filtering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Depots Vehicle Routing Problem with Simultaneous Delivery and Pickup and Inventory Restrictions: Formulation and Resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100215</link>
        <id>10.14569/IJACSA.2019.0100215</id>
        <doi>10.14569/IJACSA.2019.0100215</doi>
        <lastModDate>2019-02-28T12:12:02.3900000+00:00</lastModDate>
        
        <creator>BOUANANE Khaoula</creator>
        
        <creator>BENADADA Youssef</creator>
        
        <creator>BENCHEIKH Ghizlane</creator>
        
        <subject>Reverse logistic; inventory restrictions; VRPSDP; multi-depots version; Genetic Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Reverse logistics can be defined as a set of practices and processes for managing returns from the consumer to the manufacturer, alongside direct flow management. In this context, we study an important variant of the Vehicle Routing Problem (VRP): the Multi-Depot Vehicle Routing Problem with Simultaneous Delivery and Pickup and Inventory Restrictions (MD-VRPSDP-IR). This problem involves designing routes from multiple depots that simultaneously satisfy delivery and pickup requests from a set of customers, while taking depot stock levels into account. This study proposes a hybrid Genetic Algorithm incorporating three different procedures: a newly developed K-Nearest Depot heuristic to assign customers to depots, the Sweep algorithm for route construction, and the Farthest Insertion heuristic to improve solutions. Computational results show that our methods outperform previous ones for the MD-VRPSDP.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_15-Multi_Depots_Vehicle_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Industrial Modeling and Harmonic Mitigation of a Grid Connected Steel Plant in Libya</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100214</link>
        <id>10.14569/IJACSA.2019.0100214</id>
        <doi>10.14569/IJACSA.2019.0100214</doi>
        <lastModDate>2019-02-28T12:12:02.3770000+00:00</lastModDate>
        
        <creator>Abeer Oun</creator>
        
        <creator>Ibrahim Benabdallah</creator>
        
        <creator>Adnen Cherif</creator>
        
        <subject>Industry 4.0; distribution systems; THD; harmonic load flow; passive filters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>We are currently in a new transition towards the fourth phase of industrialization, widely known as Industry 4.0. This development presupposes sustainable manufacturing, where the optimal functioning of factory components, especially energy rationalization and enhanced power quality, is no longer a privilege but an obligation for efficiently introducing artificial intelligence (AI), smart metering (SM), and automated decision making (ADM). In the same vein of mitigating power quality issues, this paper first builds an innovative virtual reality (VR) model of a complex grid-connected steel power plant and then characterizes its harmonic sources in order to moderate them; these are caused essentially by nonlinear installed loads that degrade power system quality and produce periodic signal distortion. Accordingly, it was essential to examine the diverse origins of harmonic problems and to present the most suitable and economical solution techniques. The voltage and current harmonic flows at the 30 kV level of the General Electricity Company of Libya (GECOL), located in Tripoli, are examined, and their harmful effects on plant components are then investigated. To attenuate the distortion, a harmonic analysis was conducted; appropriate filters were then sized, designed, simulated, and added to the panel. Simulation results are presented and validated using the ETAP industrial software against real measurements.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_14-Improved_Industrial_Modeling_and_Harmonic_Mitigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring Privacy Protection in Location-based Services through Integration of Cache and Dummies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100213</link>
        <id>10.14569/IJACSA.2019.0100213</id>
        <doi>10.14569/IJACSA.2019.0100213</doi>
        <lastModDate>2019-02-28T12:12:02.3430000+00:00</lastModDate>
        
        <creator>Sara Alaradi</creator>
        
        <creator>Nisreen Innab</creator>
        
        <subject>Privacy protection; dummy; cache; safe cycle; location homogeneity attack; semantic location attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Location-Based Services (LBS) have recently gained much attention from the research community due to the openness of wireless networks and the rapid development of mobile devices. However, using LBS is not risk free: location privacy protection is a major issue that concerns users. Since users provide their real location to obtain the benefits of LBS, an attacker has the chance to track their real location and collect sensitive personal information about them. If the attacker is the LBS server itself, privacy issues may reach dangerous levels, because all information related to the user&#39;s activities is stored and accessible on the LBS server. In this paper, we propose a novel location privacy protection method called the Safe Cycle-Based Approach (SCBA). Specifically, the SCBA ensures location privacy by generating strong dummy locations that are far away from each other and belong to different sub-areas at the same time. This ensures robustness against advanced inference attacks such as location homogeneity attacks and semantic location attacks. To achieve location privacy protection as well as high performance, we integrate the SCBA with a cache; the key performance enhancement is storing the responses of historical queries to answer future ones using a bloom filter-based search technique. Compared to the well-known ReDS, RaDS, and HMC approaches, experimental results show that the proposed SCBA produces better results in terms of privacy protection level, robustness against inference attacks, communication cost, cache hit ratio, and response time.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_13-Ensuring_Privacy_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Efficient Speech Recognition System on Mobile Device for Hindi and English Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100212</link>
        <id>10.14569/IJACSA.2019.0100212</id>
        <doi>10.14569/IJACSA.2019.0100212</doi>
        <lastModDate>2019-02-28T12:12:02.3300000+00:00</lastModDate>
        
        <creator>Gulbakshee Dharmale</creator>
        
        <creator>Dipti D. Patil</creator>
        
        <creator>V. M. Thakare</creator>
        
        <subject>Automatic Speech Recognition (ASR); Mel Frequency Cepstral Coefficient (MFCC); Vector Quantization (VQ); Gaussian Mixture Model (GMM); Hidden Markov Model (HMM); Receiver Operating Characteristics (ROC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Speech recognition, or speech-to-text conversion, has rapidly gained interest among large organizations as a way to ease human-to-machine communication. Optimizing the speech recognition process is of utmost importance, because real-time users want to perform actions based on their spoken input, and these actions can affect their daily lives; speech-to-text conversion should therefore be carried out accurately. This work aims to improve the accuracy of this process with the help of natural language processing and speech analysis. Existing speech recognition software from Google, Amazon, and Microsoft tends to achieve an accuracy of more than 90% in real-time speech detection. The proposed system combines the speech recognition approach used by such software with language processing to improve overall accuracy through phonetic analysis. The proposed phonetic model supports multilingual speech recognition, and the observed accuracy of the system is 90% for Hindi and English speech-to-text recognition. The Hindi WordNet database provided by IIT Mumbai is used in this work for Hindi speech-to-text conversion.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_12-Implementation_of_Efficient_Speech_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Multilevel Wavelet Packet Entropy using Various Entropy Measurement for Lung Sound Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100211</link>
        <id>10.14569/IJACSA.2019.0100211</id>
        <doi>10.14569/IJACSA.2019.0100211</doi>
        <lastModDate>2019-02-28T12:12:02.2970000+00:00</lastModDate>
        
        <creator>Achmad Rizal</creator>
        
        <creator>Risanuri Hidayat</creator>
        
        <creator>Hanung Adi Nugroho</creator>
        
        <subject>Wavelet packet entropy; lung sound; Shannon entropy; Renyi entropy; Tsallis entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Wavelet Entropy (WE) is an entropy measurement method based on the subbands of the discrete wavelet transform (DWT). Developments of WE include wavelet packet entropy (WPE) and wavelet time entropy. WPE has several variations, such as calculating the Shannon entropy on each subband of the wavelet packet decomposition, which produces 2N entropy values, or WPE proper, which yields a single entropy value. One improvement of WPE is multilevel wavelet packet entropy (MWPE), which yields as many entropy values as the N decomposition levels. In previous research, MWPE was calculated using the Shannon method; in this research, MWPE is also calculated using the Renyi and Tsallis methods. The results show that MWPE using the Shannon calculation yields the highest accuracy of 97.98% for N = 4 decomposition levels, whereas MWPE using Renyi entropy yields a highest accuracy of 93.94% and MWPE using Tsallis entropy yields 57.58%. The test was performed on five classes of lung sound data using a multilayer perceptron as the classifier.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_11-Comparison_of_Multilevel_Wavelet_Packet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Home Network Sustainable Interface Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100210</link>
        <id>10.14569/IJACSA.2019.0100210</id>
        <doi>10.14569/IJACSA.2019.0100210</doi>
        <lastModDate>2019-02-28T12:12:02.2670000+00:00</lastModDate>
        
        <creator>Erman Hamid</creator>
        
        <creator>Nazrulazhar Bahaman</creator>
        
        <creator>Azizah Jaafar</creator>
        
        <creator>Ang Mei Choo</creator>
        
        <creator>Akhdiat Abdul Malek</creator>
        
        <subject>Home network; visualization; sustainable interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The home network has become a norm in today&#39;s life. Previous studies have shown that home network management is a problem for users who are not versed in network technology: existing network management tools are far too difficult for ordinary home network users to understand, and their interfaces are complex and do not address home users&#39; needs in daily use. This paper presents an interactive network management tool that emphasizes support features for home network users. The tool combines an interactive visual appearance with a persuasive approach that supports sustainability. It is not only understandable to all categories of home network users, but also serves as a means for users to achieve usability.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_10-Development_of_Home_Network_Sustainable_Interface_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of Automatic Methods for the Reuse of Software Components in a Library</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100208</link>
        <id>10.14569/IJACSA.2019.0100208</id>
        <doi>10.14569/IJACSA.2019.0100208</doi>
        <lastModDate>2019-02-28T12:12:02.2500000+00:00</lastModDate>
        
        <creator>Koffi Kouakou Ive Arsene</creator>
        
        <creator>Samassi Adama</creator>
        
        <creator>Kimou Kouadio Prosper</creator>
        
        <creator>Brou Konan Marcellin</creator>
        
        <subject>Method development; reuse; software component; quality of component; functional size; functional processes; financial cost; adaptation effort</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The increasing complexity of applications is driving developers to use reusable components from component markets, mainly free software components. However, the selected components may only partially satisfy users&#39; requirements. In this article, we propose an approach for optimizing the selection of software components based on their quality. It consists of: (1) selecting components that satisfy the customer&#39;s non-functional needs; (2) calculating the quality score of each of these candidate components; (3) selecting the best component meeting the customer&#39;s non-functional needs using linear programming with constraints. Our aim is to optimize this selection by considering the financial cost of a component and the adaptation effort. The literature is unanimous that software component reuse reduces development cost and maintenance time and increases software quality; however, existing models for evaluating component quality do not simultaneously take the financial cost and adaptation effort factors into account. In our research, we therefore connect the financial cost and the adaptation time of the selected component through a linear programming model with constraints. To validate our work, we propose an algorithm that supports the developed theory. Users will then be able to choose the relevant software component for their system from the available components.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_8-Proposal_of_Automatic_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extracting the Features of Modern Web Applications based on Web Engineering Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100209</link>
        <id>10.14569/IJACSA.2019.0100209</id>
        <doi>10.14569/IJACSA.2019.0100209</doi>
        <lastModDate>2019-02-28T12:12:02.2500000+00:00</lastModDate>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Dayang N.A. Jawawi</creator>
        
        <subject>Modern web applications; MWA; web engineering; extracting features; web versions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>With the information revolution, advanced versions of the web have been proposed, from Web 1.0 to Web 4.0, and many web applications have appeared in each version. In the newer versions, modern web applications (MWAs) have emerged. These applications have specific and distinctive features, which pose a new challenge for web engineering methods. The problem is that web engineering methods have limitations for MWAs, and the gap is that developers cannot capture the new features using these methods. In this paper, we extract the features of MWAs based on web engineering methods: we extract web application modules to show the interaction and structure of their features based on the models and elements of web engineering methods. The result of this work helps developers design MWAs using web engineering methods, and guides researchers in improving web engineering methods for developing MWA features.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_9-Extraction_the_Features_of_Modern_Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Field Oriented Control Design by Multi Objective Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100207</link>
        <id>10.14569/IJACSA.2019.0100207</id>
        <doi>10.14569/IJACSA.2019.0100207</doi>
        <lastModDate>2019-02-28T12:12:02.2200000+00:00</lastModDate>
        
        <creator>H&#252;seyin Oktay ERKOL</creator>
        
        <subject>Permanent magnet synchronous motor; field oriented control; speed controller; tree-seed algorithm; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Permanent Magnet Synchronous Motors (PMSMs) are popular electrical machines in industry because they have high efficiency, a low weight-to-power ratio, and smooth torque with little or no ripple. At the same time, controlling a synchronous motor is a complex process. Vector control techniques are widely used for the control of synchronous motors because they simplify the control of AC machines. In this study, the Field Oriented Control technique is used as the speed controller of a Permanent Magnet Synchronous Motor. The controller must be well tuned for applications that require high performance, and classical methods are insufficient or need considerable time to achieve the requested performance criteria. Optimization algorithms are good options for the controller tuning process: they guarantee finding one of the best solutions and need less time to solve the problem. Therefore, in this study, the Tree-Seed Algorithm is used to tune the controller parameters, and the results show that the Tree-Seed Algorithm is a good tool for controller tuning. The controller is also tuned by the Particle Swarm Optimization algorithm for comparison. The results show that the system optimized by the Tree-Seed Algorithm performs well for applications with changing speed and load torque, and that it performs better than the system optimized by the Particle Swarm Optimization algorithm.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_7-Optimized_Field_Oriented_Control_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering of Multidimensional Objects in the Formation of Personalized Diets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100206</link>
        <id>10.14569/IJACSA.2019.0100206</id>
        <doi>10.14569/IJACSA.2019.0100206</doi>
        <lastModDate>2019-02-28T12:12:02.1870000+00:00</lastModDate>
        
        <creator>Valentina N. Ivanova</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Natalia A. Zhuchenko</creator>
        
        <creator>Marina A. Nikitina</creator>
        
        <creator>Yury I. Sidorenko</creator>
        
        <creator>Vladimir I. Karpov</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <subject>Multidimensional objects clustering method; integral assessment of reliable risks; nutritional needs of the body; personalized nutrition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>When developing personalized diets (personalized nutrition), it is necessary to take into account the individual physiological nutritional needs of the body associated with the presence of gene polymorphisms among consumers. This greatly complicates the development of rations and increases their cost. A methodology for the formation of target diets based on the multidimensional objects clustering method is proposed. Clustering in the experimental group was carried out on the basis of an integral assessment of the reliable risks of developing disease conditions according to selected metabolic processes, taking the genetic data of the participants into account. The use of the proposed method reduced the number of typical individual diet solutions needed for the experimental group from 10 to 3.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_6-Clustering_of_Multidimensional_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Several Jamming Attacks in Wireless Networks: A Game Theory Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100205</link>
        <id>10.14569/IJACSA.2019.0100205</id>
        <doi>10.14569/IJACSA.2019.0100205</doi>
        <lastModDate>2019-02-28T12:12:02.1570000+00:00</lastModDate>
        
        <creator>Moulay Abdellatif Lmater</creator>
        
        <creator>Majed Haddad</creator>
        
        <creator>Abdelillah Karouit</creator>
        
        <creator>Abdelkrim Haqiq</creator>
        
        <subject>Wireless communications; game theory; jamming attacks; stackelberg game; nash game</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Wireless jamming attacks have recently been the subject of several studies, due to the exposed nature of the wireless medium. This paper studies anti-jamming resistance in the presence of several attackers. Two kinds of jammers are considered: smart jammers, which have the ability to sense the legitimate signal power, and regular jammers, which do not have this ability. An Anti Multi-Jamming Power Control problem, modeled as a non-zero-sum game, is proposed to study how the transmitter can adjust its signal power against several jamming attacks. A closed-form expression of the Nash Equilibrium is derived when the players' actions are taken simultaneously. In addition, a closed-form expression of the Stackelberg Equilibrium is derived when a hierarchical behavior between the transmitter and the jammers is assumed. Simulation results show that the proposed scheme can enhance anti-jamming resistance against several attackers. Furthermore, this study proves that, from the transmitter's point of view, the most dangerous jammer is the one with the highest ratio between channel gain and jamming cost. Finally, based on the Q-Learning technique, the transmitter can learn autonomously without knowing the patterns of the attackers.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_5-Several_Jamming_Attacks_in_Wireless_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Impact of Mobility Models on MANET Routing Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100204</link>
        <id>10.14569/IJACSA.2019.0100204</id>
        <doi>10.14569/IJACSA.2019.0100204</doi>
        <lastModDate>2019-02-28T12:12:02.1400000+00:00</lastModDate>
        
        <creator>Ako Muhammad Abdullah</creator>
        
        <creator>Emre Ozen</creator>
        
        <creator>Husnu Bayramoglu</creator>
        
        <subject>MANET; routing protocols; AODV; OLSR; GRP; node mobility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>A mobile ad hoc network (MANET) is a type of multi-hop network operating under different movement patterns without requiring any fixed infrastructure or centralized control. The mobile nodes in this network move arbitrarily, and the topology changes frequently. In MANETs, routing protocols play an important role in establishing reliable communication between nodes. Several issues affect the performance of MANET routing protocols, and mobility is one of the most significant factors that has an impact on the routing process. In this paper, the FCM, SCM, RWM and HWM mobility models are designed to analyze the performance of the AODV, OLSR and GRP protocols, with ten pause time values. These models are based on varying speeds and pause times of MANET participants. Different node parameters such as data drop rate, average end-to-end delay, media access delay, network load, retransmission attempts and throughput are used to compare the performance of the mobility models. The simulation results show that in most cases the OLSR protocol provides better performance than the other two routing protocols and that it is more suitable for networks that require low delay, few retransmission attempts, and high throughput.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_4-Investigating_the_Impact_of_Mobility_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Building’s Elevator with Intelligent Control Algorithm based on Bayesian Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100203</link>
        <id>10.14569/IJACSA.2019.0100203</id>
        <doi>10.14569/IJACSA.2019.0100203</doi>
        <lastModDate>2019-02-28T12:12:02.1270000+00:00</lastModDate>
        
        <creator>Yerzhigit Bapin</creator>
        
        <creator>Vasilios Zarikas</creator>
        
        <subject>Bayesian network; smart city; smart building; elevator control algorithm; intelligent elevator system; decision theory; decision support systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>The implementation of intelligent elevator control systems based on machine-learning algorithms should play an important role in improving the sustainability and convenience of multi-floor buildings. Traditional elevator control algorithms are not capable of operating efficiently in the presence of uncertainty caused by the random flow of people. As opposed to the conventional elevator control approach, the proposed algorithm utilizes information about passenger group sizes and their waiting times, provided by an image acquisition and processing system. This information is then used by a probabilistic decision-making model to conduct Bayesian inference and update the variable parameters. The proposed algorithm utilizes the variable elimination technique to reduce the computational complexity associated with the calculation of marginal and conditional probabilities, and the Expectation-Maximization algorithm to ensure the completeness of the data sets. The proposed algorithm was evaluated by assessing the correspondence level of the resulting decisions with the expected ones. A significant improvement in correspondence level was obtained by adjusting the probability distributions of the variables affecting the decision-making process. The aim was to construct a decision engine capable of controlling the elevator's actions in a way that improves user satisfaction. Both a sensitivity analysis and an evaluation study of the implemented model, according to several scenarios, are presented. The overall algorithm exhibited the desired behavior in 94% of the scenarios tested.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_3-Smart_Buildings_Elevator_with_Intelligent_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Generalized Gaussian Distribution Oriented Thresholding Function for Image De-Noising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100202</link>
        <id>10.14569/IJACSA.2019.0100202</id>
        <doi>10.14569/IJACSA.2019.0100202</doi>
        <lastModDate>2019-02-28T12:12:02.1100000+00:00</lastModDate>
        
        <creator>Noorbakhsh Amiri Golilarz</creator>
        
        <creator>Hasan Demirel</creator>
        
        <creator>Hui Gao</creator>
        
        <subject>Adaptive generalized Gaussian distribution; thresholding function; image de-noising; high frequency sub-bands</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>In this paper, an Adaptive Generalized Gaussian Distribution (AGGD) oriented thresholding function for image de-noising is proposed. This technique utilizes a unique threshold function derived from the generalized Gaussian function obtained from the HH sub-band in the wavelet domain. Two-dimensional discrete wavelet transform is used to generate the decomposition. Having the threshold function formed by using the distribution of the high frequency wavelet HH coefficients makes the function data dependent, hence adaptive to the input image to be de-noised. Thresholding is performed in the high frequency sub-bands of the wavelet transform in the interval [-t, t], where t is calculated in terms of the standard deviation of the coefficients in the HH sub-band. After thresholding, inverse wavelet transform is applied to generate the final de-noised image. Experimental results show the superiority of the proposed technique over other alternative state-of-the-art methods in the literature.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_2-Adaptive_Generalized_Gaussian_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hazard Detection and Tracking System for People with Peripheral Vision Loss using Smart Glasses and Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100201</link>
        <id>10.14569/IJACSA.2019.0100201</id>
        <doi>10.14569/IJACSA.2019.0100201</doi>
        <lastModDate>2019-02-28T12:12:02.0470000+00:00</lastModDate>
        
        <creator>Ola Younis</creator>
        
        <creator>Waleed Al-Nuaimy</creator>
        
        <creator>Mohammad H. Alomari</creator>
        
        <creator>Fiona Rowe</creator>
        
        <subject>Peripheral vision loss; vision impairment; computer vision; assistive technology; motion compensation; optical flow; smart glasses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(2), 2019</description>
        <description>Peripheral vision loss is the lack of ability to recognise objects and shapes in the outer area of the visual field. This condition can affect people’s daily activities and reduce their quality of life. In this work, a smart technology that implements computer vision algorithms in real time to detect and track moving hazards around people with peripheral vision loss is presented. Using smart glasses, the system processes real-time captured video and produces warning notifications based on predefined hazard danger levels. Unlike other obstacle avoidance systems, this system can track moving objects in real time and classify them based on their motion features (such as speed, direction, and size) to display early warning notifications. A camera motion compensation method was used to overcome artificial motion caused by camera movement before the object detection phase. The detected moving objects were tracked to extract the motion features used to check whether a moving object is a hazard. A detection system for camera motion states was implemented and tested on real street videos as the first step before the object detection phase. The system shows promising results in the motion detection, motion tracking, and camera motion detection phases. Initial tests have been carried out on Epson’s smart glasses to evaluate the real-time performance of the system. The proposed system will be implemented as an assistive technology that can be used in daily life.</description>
        <description>http://thesai.org/Downloads/Volume10No2/Paper_1-A_Hazard_Detection_and_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EMMCS: An Edge Monitoring Framework for Multi-Cloud Environments using SNMP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100178</link>
        <id>10.14569/IJACSA.2019.0100178</id>
        <doi>10.14569/IJACSA.2019.0100178</doi>
        <lastModDate>2019-02-04T11:37:42.0130000+00:00</lastModDate>
        
        <creator>Saad Khoudali</creator>
        
        <creator>Karim Benzidane</creator>
        
        <creator>Abderrahim Sekkaki</creator>
        
        <subject>Simple network management protocol; multi-cloud monitoring; edge computing; edge monitoring; microservices; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Multi-cloud computing is no different from other cloud computing (CC) models when it comes to providing users with self-service IT resources. For instance, a company can use the services of one specific Cloud Service Provider (CSP) for its business, or it can use more than one CSP to get the best of each without any vendor lock-in. However, the situation is different when it comes to monitoring a multi-cloud environment. CSPs provide in-house monitoring tools that are natively compatible with their environment but lack support for other CSPs&#39; environments, which is problematic for any company that wants to use more than one CSP. In addition, third-party cloud monitoring tools often use agents installed on each monitored virtual machine (VM) to collect monitoring data and send it to a central monitoring server hosted on premises or on a cloud, which increases bottlenecks and latency while transmitting or processing data. Therefore, this paper presents a monitoring framework for multi-cloud environments that implements edge computing and RESTful microservices for highly efficient and scalable monitoring. The monitoring framework, “EMMCS”, uses SNMP agents to collect metrics and performs all monitoring tasks at the edge of each cloud to reduce the load of network transmission and data processing at the central monitoring server level. The implementation of the framework is tested on different public cloud environments, namely Amazon AWS and Microsoft Azure, to show the efficiency of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_78-EMMCS_An_Edge_Monitoring_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation and Comparison of Text-Based Image Retrieval Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100177</link>
        <id>10.14569/IJACSA.2019.0100177</id>
        <doi>10.14569/IJACSA.2019.0100177</doi>
        <lastModDate>2019-02-02T10:59:53.9270000+00:00</lastModDate>
        
        <creator>Syed Ali Jafar Zaidi</creator>
        
        <creator>Attaullah Buriro</creator>
        
        <creator>Mohammad Riaz</creator>
        
        <creator>Athar Mahboob</creator>
        
        <creator>Mohammad Noman Riaz</creator>
        
        <subject>Image retrieval; image filtering; cosine similarity; sequence matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Search engines, e.g., Google and Yahoo, provide various libraries and APIs to assist programmers and researchers in easier and more efficient access to their collected data. When a user generates a search query, the dedicated Application Programming Interface (API) returns a JavaScript Object Notation (JSON) file which contains the desired data. Scraping techniques help image descriptors to separate the image’s URL and the web host’s URL into different documents for easier implementation of different algorithms. The aim of this paper is to propose a novel approach to effectively filter out the desired image(s) from the retrieved data. More specifically, this work primarily focuses on applying simple yet efficient techniques to achieve accurate image retrieval. We compare two algorithms, Cosine similarity and Sequence Matcher, to obtain the accuracy with a minimum of irrelevant results. The obtained results show that Cosine similarity is more accurate than its counterpart in finding the most relevant image(s).</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_77-Implementation_and_Comparison_of_Text_based_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Product Line Test List Generation based on Harmony Search Algorithm with Constraints Support </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100176</link>
        <id>10.14569/IJACSA.2019.0100176</id>
        <doi>10.14569/IJACSA.2019.0100176</doi>
        <lastModDate>2019-02-01T13:02:25.4570000+00:00</lastModDate>
        
        <creator>AbdulRahman A. Alsewari</creator>
        
        <creator>Muhammad N. Kabir</creator>
        
        <creator>Kamal Z. Zamli</creator>
        
        <creator>Khalid S. Alaofi</creator>
        
        <subject>Harmony search; computational intelligence; combinatorial testing problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In a software product line (SPL), selecting the product features to be tested is an essential issue that enables manufacturers to release new products earlier than their competitors. Practically, it is impossible to test all the products’ features (i.e., exhaustive testing). Evidence has shown that several SPL strategies have been proposed to generate test lists for testing purposes. Nevertheless, none of the existing strategies produces an optimal test list for all cases. Thus, the current study aims to develop a new SPL test list generation strategy based on the Harmony Search (HS) algorithm, namely SPL-HS. SPL-HS generates a minimum number of test cases that cover all of the features required to be tested based on the required interaction degree (t). The results demonstrate that the performance of SPL-HS is competitive with existing SPL strategies in terms of generated test list size.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_76-Software_Product_Line_Test_List_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Factor Authentication for Student and Staff Access Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100175</link>
        <id>10.14569/IJACSA.2019.0100175</id>
        <doi>10.14569/IJACSA.2019.0100175</doi>
        <lastModDate>2019-02-01T13:02:25.3970000+00:00</lastModDate>
        
        <creator>Consuela Simukali</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Stephen Namukolo</creator>
        
        <subject>Security and access control; authentication; RFID; ISO 27002; barcode technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This paper proposes a model to improve security by controlling who accesses the University of Zambia (UNZA) campus, student hostels and offices. The proposed model combines barcode, RFID, and biometric technologies to automatically identify students and staff. A component to track visitors’ physical location and movements in real time is also included to ensure visitors go only to authorized places. A baseline study based on the International Organization for Standardization (ISO) 27002 standard was conducted to measure the level of security at UNZA. The results show that UNZA has uncontrolled access to the campus environment, student hostels and offices. The results from this study were used to develop the proposed model. When the RFID reader installed at any of the entrances detects an RFID tag number, the system requests a fingerprint scan and searches the database for a match. If both the RFID card and the fingerprint belong to a registered student or staff member, the entrance door or turnstile is released open and access is granted; otherwise, access is denied. In the case of a visitor, the national ID number is tied to the RFID tag number. The visitor’s RFID tag has a GPS module fixed to it, so once the visitor is granted access, their movements and physical location are tracked in real time.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_75-Multi_Factor_Authentication_for_Student_and_Staff_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Two Factor Authentication for Vehicle Parking Space Control based on Automatic Number Plate Recognition and Radio Frequency Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100174</link>
        <id>10.14569/IJACSA.2019.0100174</id>
        <doi>10.14569/IJACSA.2019.0100174</doi>
        <lastModDate>2019-01-31T10:29:16.6130000+00:00</lastModDate>
        
        <creator>Friday Chisowa Chazanga</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Sebastian Namukolo</creator>
        
        <subject>RFID; ANPR; Vehicle access control; two-factor authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This paper proposes a two-factor authentication system for vehicle access control using Automatic Number Plate Recognition (ANPR) and Radio Frequency Identification (RFID) for the University of Zambia (UNZA) vehicle access points. The University of Zambia is experiencing increasing challenges with car parking space and with vehicle access control to and within the campus premises. The survey that was conducted revealed that members of staff had difficulty finding parking spaces due to intrusion. The survey also revealed that vehicles had been stolen within campus parking areas without detection. An access control system using integrated ANPR and RFID technologies was developed to provide five authentication states that meet the requirements of different vehicle access points. It was built with ‘ORed’ and ‘ANDed’ logic settings to achieve five different authentication levels, each suited to a particular access point. The ANPR system used the vehicle number plate to authenticate the vehicle through the use of a camera, while the RFID system used the driver’s card/tag, read by the RFID card reader, to authenticate the user. Daily transaction records were sent to the security center, where the information could easily be retrieved. Illegal access to restricted areas, the threat of motor vehicle theft, and the lack of a transaction recording system are addressed by this proposal.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_74-Development_of_a_Two_Factor_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Technologies in Decision based Internet of Things, Internet of Everythings and Cloud Computing for Smart City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100173</link>
        <id>10.14569/IJACSA.2019.0100173</id>
        <doi>10.14569/IJACSA.2019.0100173</doi>
        <lastModDate>2019-01-31T10:29:16.5830000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Zunaira Zainab</creator>
        
        <creator>Husnain Mushtaq</creator>
        
        <creator>Amina Yousaf</creator>
        
        <creator>Sohaib Latif</creator>
        
        <creator>Hafiz Zubair</creator>
        
        <creator>Sayyam Malik</creator>
        
        <creator>Palwesha Sehar</creator>
        
        <subject>IoT; IOET; technologies; cloud computing; WSN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The idea of a Smart City highlights the need to upgrade the quality, interconnection and performance of various urban services through the use of information and communication technologies (ICT). A Smart City promotes cloud-based and Internet of Things (IoT) based services in which real-world user interfaces make use of smartphones, sensors and RFIDs. Cloud computing and IoT are presently the two most important ICT paradigms shaping the next generation of computing. Cloud computing represents a new method for delivering hardware and software resources to users, while the Internet of Things (IoT) is currently one of the most popular ICT paradigms. At the same time, the IoT concept envisions a new generation of devices (sensors, both virtual and physical) that are connected to the Internet and provide diverse services for value-added applications. This study focuses on the integration of Cloud, IoT and IoE technologies for smart city services, and a review has been conducted to support the development of a better smart city that utilizes IoT and IoE as a platform for smart city services. This paper addresses the combined domain of cloud computing, IoT and IoE for the deployment of any smart city application.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_73-Investigating_Technologies_in_Decision_based_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LQR Robust Control for Active and Reactive Power Tracking of a DFIG based WECS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100172</link>
        <id>10.14569/IJACSA.2019.0100172</id>
        <doi>10.14569/IJACSA.2019.0100172</doi>
        <lastModDate>2019-01-31T10:29:16.5670000+00:00</lastModDate>
        
        <creator>Sana Salhi</creator>
        
        <creator>Salah Salhi</creator>
        
        <subject>LQR robust tracking; LPV system; lyapunov stability; LMI; DFIG based wind energy conversion systems; optimal control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This research work puts forward a new formulation of the Linear Quadratic Regulator (LQR) problem applied to a Wind Energy Conversion System (WECS). A new necessary and sufficient condition for Lyapunov asymptotic stability is also established. The problem is mathematically described in the form of Linear Matrix Inequalities (LMIs). The considered WECS is based on a Doubly Fed Induction Generator (DFIG). An appropriate Linear Parameter Varying (LPV) model is designed. This model provides a realistic representation of the randomly time-varying wind velocity. Stability and robustness of the controller over the admissible values of the time-varying parameter are investigated. The newly lifted Lyapunov condition gives less conservative conditions for the LMI approach in the case of parameter-dependent Lyapunov functions (PDLF). The considered PDLF has the same variation dynamics as the system matrix. The intrinsic objective of our research is to offer more degrees of freedom to the control problem and to improve the efficiency of the controller in the presence of uncertainties or parametric variations. The performance of the proposed theorems is validated by achieving active and reactive power tracking of the WECS over the admissible range of wind speeds. The interesting features of the proposed solution are its simpler implementation and larger robustness margin. It also has the advantage of providing a linear control for the considered nonlinear system without resorting to linearization. The LMIs are implemented with the Yalmip Matlab toolbox, and the proposed controller is verified on a Matlab Simulink emulator. This work presents an extension of the LQR control problem to LPV systems.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_72-LQR_Robust_Control_for_Active_and_Reactive_Power_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges of Medical Records Interoperability in Developing Countries: A Case Study of the University Teaching Hospital in Zambia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100171</link>
        <id>10.14569/IJACSA.2019.0100171</id>
        <doi>10.14569/IJACSA.2019.0100171</doi>
        <lastModDate>2019-01-31T10:29:16.5530000+00:00</lastModDate>
        
        <creator>Danny Leza</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>EHR; surgery; ICT; paper based; adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The University Teaching Hospital (UTH) is an integral national referral hospital made up of eight departments. Standardized systems and semantic interoperability are key to the successful flow of patient information from one department to another and from section to section within a department. The lack of a SNOMED CT EHR system in the surgery department causes inefficient scheduling of surgical procedures, insufficient and inaccurate pertinent patient historical information, and misconceptions and errors arising from ambiguities in terminology usage. The result is an unhealthy clinician working environment leading to high death rates among patients. A baseline survey was conducted using a questionnaire to establish the major drawbacks of the current manual system in use at the department. Record inspection was done, followed by a roundtable discussion with stakeholders. Convenience sampling was used: out of 40 respondents, 72.5% had computers in their section and 27.5% did not; 60% were using partly electronic and partly paper-based records, 37.5% were using a manual system, and 2.5% reported that they were using an electronic record system. The results revealed that more than 50% of the medical practitioners, ranging from nurses to surgeons, were dissatisfied with the current system. In addition, record inspection was conducted by visiting each section of the department to understand the business process and the form and format of data storage; this exercise revealed redundancy in the capture, storage and management of patient records, because in every section a patient passes through while undergoing diagnosis, basic details are collected afresh for the same patient. This situation has brought about unnecessary duplication of work. Another drawback concerns the storage of patient records, arising from lack of storage space: records which are ten years old are destroyed to create space for new ones. This destruction of records robs researchers of much-needed data for trend analysis and patient disease history. Because of these drawbacks, it is very apparent that a standardized EHR should be implemented.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_71-Challenges_of_Medical_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Biometric Recognition using Area under Curve Analysis of Electrocardiogram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100169</link>
        <id>10.14569/IJACSA.2019.0100169</id>
        <doi>10.14569/IJACSA.2019.0100169</doi>
        <lastModDate>2019-01-31T10:29:16.5200000+00:00</lastModDate>
        
        <creator>Anita Pal</creator>
        
        <creator>Yogendra Narain Singh</creator>
        
        <subject>Electrocardiogram; biometric; area under curve features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In this paper, we introduce a human recognition system that utilizes the lead I electrocardiogram (ECG). It proposes an efficient method for ECG analysis that corrects the signal and extracts all major features of its waveform. An FIR equiripple high-pass filter is used for denoising the ECG signal. The R peak is detected using the Haar wavelet transform. A novel class of features, called area under curve features, is computed from dominant fiducials of the ECG waveform, along with other known classes of features such as interval features, amplitude features and angle features. The feasibility of the electrocardiogram as a new biometric is tested on selected features, reporting authentication performance of 99.49% on the QT database, 98.96% on the PTB database and 98.48% on the MIT-BIH arrhythmia database. The results obtained from the proposed approach surpass those of other conventional methods of biometric applications.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_69-Biometric_Recognition_using_Area_under_Curve_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification and Formal Representation of Change Operations in LOINC Evolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100170</link>
        <id>10.14569/IJACSA.2019.0100170</id>
        <doi>10.14569/IJACSA.2019.0100170</doi>
        <lastModDate>2019-01-31T10:29:16.5200000+00:00</lastModDate>
        
        <creator>Anny Kartika Sari</creator>
        
        <subject>LOINC; ontology; evolution; change operation; formal representation; health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>LOINC (Logical Observation Identifiers Names and Codes) is one of the standardized health ontologies that is widely used by practitioners in the health sector. Like other ontologies in the health field, LOINC evolves. This research focuses on formally representing the conceptual changes in LOINC. Four steps are taken to achieve this goal. First, the releases of LOINC are studied to get an overview of the changes in LOINC. Second, the change operations that occur in LOINC are classified. Third, the changes are represented formally, considering the need to keep the history of changes to concepts. Finally, a few algorithms are developed to identify the changes that occur between two releases of LOINC. The evaluation shows that the algorithms are able to identify change operations in LOINC with a 100% success rate. With a formal representation of the changes that occur in LOINC, it is expected that adjustments to applications that use LOINC can be performed more straightforwardly. The history of references to concepts in LOINC can also be traced back, so that information about changes to a reference can be obtained easily.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_70-Identification_and_Formal_Representation_of_Change_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reviewing Diagnosis Solutions for Valid Product Configurations in the Automated Analysis of Feature Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100168</link>
        <id>10.14569/IJACSA.2019.0100168</id>
        <doi>10.14569/IJACSA.2019.0100168</doi>
        <lastModDate>2019-01-31T10:29:16.4900000+00:00</lastModDate>
        
        <creator>Cristian L. Vidal-Silva</creator>
        
        <subject>AAFM; feature model; valid product; valid configuration; FMDiag; FlexDiag</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A Feature Model (FM) is an information model to represent commonalities and variabilities for all the products of a Software Product Line (SPL). The complexity and large scale of real feature models make their manual analysis for determining the validity of product configurations a tedious or even infeasible task. Efficient solutions for the diagnosis of errors in the Automated Analysis of Feature Models (AAFM), such as FMDiag and FlexDiag, already exist. Thus, this work describes the fundamental basis of both diagnosis algorithms and applies the first of them to the validity of FM product configurations. The results highlight the applicability and efficiency of FMDiag and invite us to look for additional applications in AAFM scenarios.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_68-Reviewing_Diagnosis_Solutions_for_Valid_Product_Configurations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BioPay: Your Fingerprint is Your Credit Card</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100167</link>
        <id>10.14569/IJACSA.2019.0100167</id>
        <doi>10.14569/IJACSA.2019.0100167</doi>
        <lastModDate>2019-01-31T10:29:16.4900000+00:00</lastModDate>
        
        <creator>Fahad Alsolami</creator>
        
        <subject>Fingerprint; credit/debit card; cybersecurity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In recent years, credit and debit cards have become a very convenient method of payment. The growing use of card payments, hereafter referred to as credit cards, is evident in daily use across many applications, such as withdrawing money from an Automated Teller Machine (ATM) and making payments in a store. Online payment has also become very common, where the transaction is made across a great distance, allowing for online shopping. This has increased the risk of credit cards suffering cybersecurity attacks, particularly if the transaction amount is large enough. Another problem that arises is potential fraud, should a thief try to impersonate the credit card owner’s identity. To overcome these obstacles, we propose the BioPay scheme, which uses a fingerprint biotoken to replace the current plastic credit card. The BioPay scheme uses biometric data (fingerprint), revocable fingerprint biotokens (Biotope), and a Bipartite token to provide strong authentication, non-repudiation, security and privacy for all payment transactions, including money withdrawal from an ATM. The BioPay scheme collects biometric data (i.e., a fingerprint) from users, embeds a four-digit authentication number inside the encoded biometric data, and finally distributes them over clouds. In the payment/withdrawal process, a user provides his/her fingerprint to complete the transaction. The BioPay scheme ensures that the transaction process is performed in encrypted form to provide security and privacy for the customer’s bank information. Our experiment shows that BioPay has comparable accuracy and a significant performance gain for the transaction process.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_67-BioPay_Your_Fingerprint_is_Your_Credit_Card.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Infected Leaves and Botanical Diseases using Curvelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100166</link>
        <id>10.14569/IJACSA.2019.0100166</id>
        <doi>10.14569/IJACSA.2019.0100166</doi>
        <lastModDate>2019-01-31T10:29:16.4730000+00:00</lastModDate>
        
        <creator>Nazish Tunio</creator>
        
        <creator>Abdul Latif Memon</creator>
        
        <creator>Faheem Yar Khuhawar</creator>
        
        <creator>Ghulam Mustafa Abro</creator>
        
        <subject>Region Of Interest (ROI); Support Vector Machine (SVM); feature extraction; curvelet transform; alternata; anthracnose; blight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The study of plants is known as botany, and for any botanist it is routine work to examine various plants in the research lab. This research presents an image processing-based algorithm for extracting the region of interest (ROI) from a plant leaf in order to classify the species and to recognize the particular botanical disease as well. Moreover, this paper addresses the implementation of the curvelet transform on subdivided leaf images in order to compute the related information and train a support vector machine (SVM) classifier to produce better results. Furthermore, the paper presents a comparative analysis of existing and proposed algorithms for species and botanical disease recognition over a dataset of leaves. The proposed multi-dimensional curvelet transform based algorithm provides a relatively greater accuracy of 93.5% on the leaves dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_66-Detection_of_Infected_Leaves_and_Botanical_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Co-Segmentation via Examples Guidance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100165</link>
        <id>10.14569/IJACSA.2019.0100165</id>
        <doi>10.14569/IJACSA.2019.0100165</doi>
        <lastModDate>2019-01-31T10:29:16.4430000+00:00</lastModDate>
        
        <creator>Rachida Es-Salhi</creator>
        
        <creator>Imane Daoudi</creator>
        
        <creator>Hamid El Ouardi</creator>
        
        <subject>Co-segmentation; image segmentation; segmentation propagation; MRF based segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Given a collection of images which contains objects from the same category, co-segmentation methods aim at simultaneously segmenting the common objects in each image. Most existing co-segmentation approaches rely on computing similarities between the regions representing foregrounds in these images. However, region similarity measurement is challenging due to the large appearance variations among objects in the same category. In addition, for real-world images which have cluttered backgrounds, existing co-segmentation approaches lack sufficient robustness to extract the common object from the background. In this paper, we propose a new co-segmentation method which takes advantage of the reliable segmentation of a few selected images in order to guide the segmentation of the remaining images in the collection. A random sample of images is first selected from the image collection. Then, the selected images are segmented using an interactive segmentation method. These segmentation results are used to construct positive/negative samples of the targeted common object and background regions, respectively. Finally, these samples are propagated to the remaining images in the collection by computing both local and global consistency. Experiments on the iCoseg and MSRC datasets demonstrate the performance and robustness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_65-Image_Co_Segmentation_via_Examples_Guidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Community Detection in Dynamic Social Networks: A Multi-Agent System based on Electric Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100164</link>
        <id>10.14569/IJACSA.2019.0100164</id>
        <doi>10.14569/IJACSA.2019.0100164</doi>
        <lastModDate>2019-01-31T10:29:16.4270000+00:00</lastModDate>
        
        <creator>E. A Abdulkreem</creator>
        
        <creator>H. Zardi</creator>
        
        <creator>H. Karamti</creator>
        
        <subject>Community detection; dynamic social networks; network evolution; multi-agent system; electric field; attractive force; repulsive force; attributes similarity; overlapping communities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In recent years, several approaches have been proposed to detect communities in social networks. Most of them suffer from recurrent problems: no detection of overlapping communities, exponential running time, no detection of all possible community transformations, no consideration of the properties of social members, inability to deal with large-scale networks, etc. Multi-agent systems are very suitable for modeling phenomena in which various autonomous entities in interaction are able to evolve in a dynamic environment. Considering the advantages of multi-agent simulations for social networks, in the present study an incremental multi-agent system based on an electric field is proposed. In this approach, a group of autonomous agents work together to discover the dynamic communities. Indeed, an agent is associated with each detected community. To update its community according to the dynamics of its members, each agent creates an electric field around it. It applies an attractive force to add highly connected and similar members and neighboring communities. At the same time, it applies a repulsive force to reject some members and to move away from other communities. These forces are based on structural and attribute similarity. To study the performance of this approach, a set of different experiments is performed. The obtained results show the efficiency of the proposed model, which was able to overcome all the mentioned problems.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_64-Community_Detection_in_Dynamic_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Adaptive Language Model for Bahasa Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100163</link>
        <id>10.14569/IJACSA.2019.0100163</id>
        <doi>10.14569/IJACSA.2019.0100163</doi>
        <lastModDate>2019-01-31T10:29:16.4100000+00:00</lastModDate>
        
        <creator>Satria Nur Hidayatullah</creator>
        
        <creator>Suyanto</creator>
        
        <subject>Adaptive Language Model; Bahasa Indonesia; Collapsed Gibbs Sampling; Latent Dirichlet Allocation; text corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A language model is one of the important components in a speech recognition system. It is commonly developed using a statistical method called n-gram. However, a standard n-gram cannot be used for general domains with many ambiguous sentence semantics. This paper focuses on developing an adaptive n-gram language model for Bahasa Indonesia. First, a text corpus of ten million distinct sentences is crawled from hundreds of websites of news, magazines, personal blogs, and writing forums. The text corpus is then used to construct an adaptive language model using Latent Dirichlet Allocation (LDA) with the Collapsed Gibbs Sampling (CGS) training method. Compared to the standard n-gram, the adaptive language model gives better performance in word selection to produce the best sentence.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_63-Developing_an_Adaptive_Language_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation, Verification and Validation of an OpenRISC-1200 Soft-core Processor on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100162</link>
        <id>10.14569/IJACSA.2019.0100162</id>
        <doi>10.14569/IJACSA.2019.0100162</doi>
        <lastModDate>2019-01-31T10:29:16.3800000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <subject>FPGA Design; HDLs; Hw-Sw Co-design; OpenRISC 1200; Soft-core processors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>An embedded system is a dedicated computer system in which hardware and software are combined to perform some specific tasks. Recent advancements in Field Programmable Gate Array (FPGA) technology make it possible to implement a complete embedded system on a single FPGA chip. The fundamental component of an embedded system is a microprocessor. Soft-core processors are written in hardware description languages and are functionally equivalent to an ordinary microprocessor. These soft-core processors are synthesized and implemented on FPGA devices. In this paper, the OpenRISC 1200 processor is used, which is a 32-bit soft-core processor written in the Verilog HDL. Xilinx ISE tools perform synthesis and design implementation and configure/program the FPGA. For verification and debugging purposes, a software toolchain from GNU is configured and installed. The software is written in the C and Assembly languages. The communication between the host computer and the FPGA board is carried out through the serial RS-232 port.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_62-Implementation_Verification_and_Validation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auto-Scaling Approach for Cloud based Mobile Learning Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100161</link>
        <id>10.14569/IJACSA.2019.0100161</id>
        <doi>10.14569/IJACSA.2019.0100161</doi>
        <lastModDate>2019-01-31T10:29:16.3630000+00:00</lastModDate>
        
        <creator>Amani Nasser Almutlaq</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Auto-scaling; reinforcement learning; fuzzy Q-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In the last decade, mobile learning applications have attracted a significant amount of attention. Huge investments have been made to develop educational applications that can be implemented on mobile devices. However, mobile learning applications have some limitations, such as storage space and battery life. Cloud computing provides a new idea to solve some limitations of mobile learning applications. However, there are other limitations, like scalability, that must be solved before mobile cloud learning can become completely operational. There are two main problems with scalability. The first occurs when the application server’s performance declines due to an increase in the number of requests, which affects usability. The second is that a decrease in the number of requests makes most application servers idle and therefore wastes money. These two problems can be avoided or minimized by provisioning auto-scaling techniques that permit the acquisition and release of resources dynamically to accommodate demand. In this paper, we propose an intelligent neuro-fuzzy reinforcement learning approach to solve the scalability problem in mobile cloud learning applications, and evaluate the proposed approach against some of the existing approaches via MATLAB. The large state space and long training time required to find the optimal policy are the main problems of reinforcement learning. We use fuzzy Q-learning to solve the large state space problem by grouping similar variables in the same state; there is then no need to use large look-up tables. The use of parallel learning agents reduces the training time needed to determine optimal policies. The experimental results prove that the proposed approach is able to increase learning speed and reduce the training time needed to determine optimal policies.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_61-Auto_Scaling_Approach_for_Cloud_based_Mobile_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Help Tetraplegic People by Means of a Computational Neuronal Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100160</link>
        <id>10.14569/IJACSA.2019.0100160</id>
        <doi>10.14569/IJACSA.2019.0100160</doi>
        <lastModDate>2019-01-31T10:29:16.3500000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <creator>Oswaldo Morales</creator>
        
        <creator>Ricardo Tejeida</creator>
        
        <creator>America Gonzalez</creator>
        
        <creator>Dario Rodriguez</creator>
        
        <subject>Computational applications; Computer Human Interaction; Neuronal Interactions; Tetraplegic or Quadriplegic People; Neural Response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In the present document we introduce an interface called BrainMouse whose main task is to help people with motor disabilities, especially tetraplegic or quadriplegic people, so that they can move the computer mouse by means of blinking or any neural response. The interface uses data obtained from a neuronal system which is responsible for taking reliable readings of the electrical signals generated in the human brain, through non-intrusive neuronal interfaces. The recorded data is used by the BrainMouse interface so that the mouse can perform functions such as moving up, down, left and right, left click, right click and double click. Thus, this interface has all the options that a conventional mouse would have.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_60-Help_Tetraplegic_People_by_means_of_a_Computational_Neuronal_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Algorithm for Enumerating all Minimal Paths of a Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100159</link>
        <id>10.14569/IJACSA.2019.0100159</id>
        <doi>10.14569/IJACSA.2019.0100159</doi>
        <lastModDate>2019-01-31T10:29:16.3330000+00:00</lastModDate>
        
        <creator>Khalid Housni</creator>
        
        <subject>Minimal path; network reliability; linked path structure; recursive algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The enumeration of all minimal paths between a terminal pair of a given graph is widely used in many applications, such as network reliability assessment. In this paper, we present a new and efficient algorithm to generate all minimal paths in a graph G(V, E). The proposed algorithm builds the set of minimal paths gradually, starting from the source nodes. We present two versions of our algorithm: the first version determines all feasible paths between a pair of terminals in a directed graph without cycles, and runs in linear time O(|V| + |E|); the second version determines all minimal paths in a general graph (directed or undirected). To show the process and the effectiveness of our method, an illustrative example is presented for each case.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_59-An_Efficient_Algorithm_for_Enumerating_all_Minimal_Paths.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three Dimensional Agricultural Land Modeling using Unmanned Aerial System (UAS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100158</link>
        <id>10.14569/IJACSA.2019.0100158</id>
        <doi>10.14569/IJACSA.2019.0100158</doi>
        <lastModDate>2019-01-31T10:29:16.3170000+00:00</lastModDate>
        
        <creator>Faisal Mahmood</creator>
        
        <creator>Khizar Abbas</creator>
        
        <creator>Asif Raza</creator>
        
        <creator>Muhammad Awais Khan</creator>
        
        <creator>Prince Waqas Khan</creator>
        
        <subject>Image processing; structure from motion (SFM); unmanned aerial system (UAS); unmanned aerial vehicles (UAVs); camera calibration; change detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Nowadays, unmanned aerial vehicles (UAVs), or drones, are mostly used in civil and military fields for security and monitoring purposes. They are also involved in the development of electronic communication and navigation systems. UAVs are aerial vehicles with a built-in power system that can be controlled by a remote control system or fly automatically. Their use has increased rapidly owing to the sensor mobility afforded by their small size, which enables UAVs to fly at lower altitudes, and owing to their significant contributions to image processing studies, where photogrammetric surveys of small-scale areas are important for landslide and erosion monitoring. This paper considers agricultural activities such as detecting crop diseases, finding crop patterns, and conducting small-scale agriculture policies for study and research. In our study, a UAV drone is used for image data collection, and a structure from motion (SfM) algorithmic approach is utilized to produce the volumetric, or 3-D, structure of images. These three-dimensional structures are further used for building information modeling systems and for performing different operations such as image classification, enhancement, and segmentation. Our approach yields better and more efficient results than other approaches applied to agricultural images captured by UAVs at high altitude.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_58-Three_Dimensional_Agricultural_Land_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG based Brain Alertness Monitoring by Statistical and Artificial Neural Network Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100157</link>
        <id>10.14569/IJACSA.2019.0100157</id>
        <doi>10.14569/IJACSA.2019.0100157</doi>
        <lastModDate>2019-01-31T10:29:16.2870000+00:00</lastModDate>
        
        <creator>Md. Asadur Rahman</creator>
        
        <creator>Md. Mamun or Rashid</creator>
        
        <creator>Farzana Khanam</creator>
        
        <creator>Mohammad Khurshed Alam</creator>
        
        <creator>Mohiuddin Ahmad</creator>
        
        <subject>Alertness monitoring; Electroencephalography (EEG); Principal Component Analysis (PCA); Analysis of Variance (ANOVA); Discrete Wavelet Transformation (DWT); Band Relative Power; Artificial Neural Network (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Since many tasks, such as driving and learning, require continuous alertness, efficient measurement of alertness states from neural activity is a crucial challenge for researchers. This work reports a practical method to investigate the alertness state from electroencephalography (EEG) of the human brain. We propose a novel idea to monitor brain alertness from the EEG signal that can discriminate the alertness state from the resting state with a simple statistical threshold. We investigated two different types of mental tasks, alphabet counting and virtual driving, to monitor alertness levels. The EEG signals were acquired from several participants during alphabet counting and virtual motor driving tasks. A 9-channel wireless EEG system was used to acquire their EEG signals from the frontal, central, and parietal lobes of the brain. After suitable preprocessing, the signal dimensions were reduced by principal component analysis, and features of the signals were extracted by the discrete wavelet transformation method. Using these features, alertness states were classified with an artificial neural network. Additionally, the relative power of the frequency band responsible for alertness was analyzed with statistical inference. We found that the beta relative power increases at a significant level due to alertness, which is sufficient to differentiate the alertness state from the control state. We also found that the increase in beta relative power for virtual driving is much greater than for alphabet-counting mental alertness. We hope that this work will be very helpful for monitoring constant alertness for efficient driving and learning.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_57-EEG_based_Brain_Alertness_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation Results for a Daily Activity Chain Optimization Method based on Ant Colony Algorithm with Time Windows</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100156</link>
        <id>10.14569/IJACSA.2019.0100156</id>
        <doi>10.14569/IJACSA.2019.0100156</doi>
        <lastModDate>2019-01-31T10:29:16.2700000+00:00</lastModDate>
        
        <creator>Imad SABBANI</creator>
        
        <creator>Bouattane Omar</creator>
        
        <creator>Domokos Eszetergar-Kiss</creator>
        
        <subject>Ant colony optimization; daily activity chain; travelling salesman problem; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In this paper, a new approach based on an ant colony algorithm with time windows is presented to optimize daily activity chains with flexible mobility solutions. This flexibility is realized through temporal and spatial changes of the activities performed by travellers during one day. With the injection of flexibility in time and locations, the requirements for such a transport system are high. Nevertheless, our method has shown promising results, decreasing the total travel time of travellers by 10 to 20% by combining and comparing different transport modes, including private as well as public transport, and by choosing the optimal set of activities.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_56-Simulation_Results_for_a_Daily_Activity_Chain_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating a Highlight Moments Summary Video of a Political Event using Ontological Analysis on Social Media Speech Sentiment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100155</link>
        <id>10.14569/IJACSA.2019.0100155</id>
        <doi>10.14569/IJACSA.2019.0100155</doi>
        <lastModDate>2019-01-31T10:29:16.2570000+00:00</lastModDate>
        
        <creator>Abid Mehdi</creator>
        
        <creator>Benayad Nsiri</creator>
        
        <creator>Yassine Serhane</creator>
        
        <creator>Miyara Mounia</creator>
        
        <subject>Debate summary; API; hashtags; twitter; highlight moments; ontologies; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Numerous viewers nowadays choose to watch highlights of political or presidential debates on TV or the internet rather than watching the whole debate, which requires a lot of time. However, producing a debate summary that can be considered neutral, giving out neither a negative nor a positive image of the speaker, has never been easy, due to the personal or political bias of the video maker. This study proposes a solution that generates highlights of a political event based on the Twitter social network flow. The Twitter streaming API is used to detect an event&#39;s tweet stream via specific hashtags and to detect, on a timescale, the extreme changes in tweet volume that determine the highlight moments of the video summary. A process based on a group of ontologies then analyzes each tweet of these moments to calculate the positivity percentage of each sentiment and classifies those moments by category (positive, negative, or neutral).</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_55-Generating_a_Highlight_Moments_Summary_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Geographic Information System using Participatory GIS Concept of Spatial Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100154</link>
        <id>10.14569/IJACSA.2019.0100154</id>
        <doi>10.14569/IJACSA.2019.0100154</doi>
        <lastModDate>2019-01-31T10:29:16.2400000+00:00</lastModDate>
        
        <creator>Nizar Rabbi Radliya</creator>
        
        <creator>Rauf Fauzan</creator>
        
        <creator>Hani Irmayanti</creator>
        
        <subject>Participatory GIS; regional spatial planning; geographic information system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Spatial management of the Bandung Regency area is regulated by Regional Regulation (PERDA) Bandung Regency Number 27 of 2016. Currently there are no facilities that can serve as media for disseminating information about the Regional Layout Planning (RTRW) so that it can be easily accessed by the community who will utilize space in the Bandung Regency area. Dissemination of Regional Layout Planning information is very important to avoid mistakes in the use of the area by the community. Participatory GIS is used with the purpose of producing an appropriate spatial plan in accordance with the established rules. Implementing the participatory GIS concept in a regional spatial geographic information system allows all communities to participate in making decisions on the use of an area.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_54-The_Development_of_Geographic_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Airport Network in Pakistan Utilizing Complex Network Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100153</link>
        <id>10.14569/IJACSA.2019.0100153</id>
        <doi>10.14569/IJACSA.2019.0100153</doi>
        <lastModDate>2019-01-31T10:29:16.2230000+00:00</lastModDate>
        
        <creator>Hafiz Abid Mahmood Malik</creator>
        
        <creator>Nadeem Mahmood</creator>
        
        <creator>Mir Hammal Usman</creator>
        
        <creator>Kashif Rziwan</creator>
        
        <creator>Faiza Abid</creator>
        
        <subject>Transportation network; Airport network analysis; Complex network; Scale-free network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The field of complex networks covers various social, technological, biological, scientific-collaboration, communication, and other networks. Among these, the transportation network is an important indicator of economic growth in any country. In this study, different dynamics of the Airport Network in Pakistan are analyzed using complex network methodology. The air transportation dataset was collected from the Civil Aviation Authority of Pakistan (CAA) and formatted to meet complex network requirements. The network is formed to observe its different properties and compare them with their topological counterparts. Here, network nodes represent the airports of Pakistan, while flights between them within a week are considered edges. The degree distribution exhibits preferential attachment of nodes, showing that a few nodes control the overall network, which indicates that the Airport Network in Pakistan (ANP) follows a power law. The clustering coefficient shows that the network is clustered, and the short average path length highlights that ANP is a small-world network. The study also examined the average nearest-neighbour degree, which shows that ANP exhibits disassortative mixing: high-degree nodes (airports) tend to connect to low-degree nodes (airports). Interestingly, it has been observed that the most connected node is not necessarily the most central node in terms of degree centrality.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_53-Analysis_of_Airport_Network_in_Pakistan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Knowledge Acquisition Framework for Supply Chain Management based on Hybridization of Case based Reasoning and Intelligent Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100152</link>
        <id>10.14569/IJACSA.2019.0100152</id>
        <doi>10.14569/IJACSA.2019.0100152</doi>
        <lastModDate>2019-01-31T10:29:16.2100000+00:00</lastModDate>
        
        <creator>Mohammad Zayed Almuiet</creator>
        
        <creator>Maryam Mohamad Al-zawahra</creator>
        
        <subject>Knowledge acquisition; supply chain management; supply chain knowledge; case-based reasoning; intelligent agent component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Throughout the past few years, notable research effort has been directed towards automating knowledge acquisition (KA) in Supply Chain Management (SCM) applications. Methods utilized for this automation include Intelligent Agents (IA) and Case-Based Reasoning (CBR). This paper combines both approaches to achieve automated knowledge acquisition that assists decision-making in SCM applications. When a new case arrives, prior cases are retrieved from the database and potential solutions are laid out. After acquisition is complete, the case and solution outcome are analyzed and evaluated according to functional similarity. Finally, after evaluating the new case along with the problem details and the chosen solution, the case is retained in the database for issues that may arise in future applications.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_52-Automated_Knowledge_Acquisition_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture for Information Security using Division and Pixel Matching Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100150</link>
        <id>10.14569/IJACSA.2019.0100150</id>
        <doi>10.14569/IJACSA.2019.0100150</doi>
        <lastModDate>2019-01-31T10:29:16.1770000+00:00</lastModDate>
        
        <creator>Abdulrahman Abdullah Alghamdi</creator>
        
        <subject>Information security; steganography; pixel pattern matching; key segmentation; division method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Computer users have to safeguard the information they handle. An information-hiding algorithm has to ensure that such information is undecipherable, since it may contain sensitive data. This paper proposes a steganography method that conceals a message behind an image while providing greater security than other existing methods. In this system, the information to be hidden is first encrypted by an advanced cryptography technique: the data is divided using arithmetic division, and the information is stored in the form of the divisor, the quotient &amp; the remainder. The secret key is also encrypted and stored across several pixels. A pixel-matching algorithm is then used to hide the information of the secret image in the carrier image. With this system, the embedding time is reduced compared to other existing algorithms. Different types of images were used to test the proposed algorithm, and the peak signal-to-noise ratio obtained is higher for all pixels present in the image.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_50_A_Novel_Architecture_for_Information_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Uncertainty Evaluation of Vicarious Calibration of Spaceborne Visible to Near Infrared Radiometers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100151</link>
        <id>10.14569/IJACSA.2019.0100151</id>
        <doi>10.14569/IJACSA.2019.0100151</doi>
        <lastModDate>2019-01-31T10:29:16.1770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Wahyudi Hasbi</creator>
        
        <creator>A Hadi Syafrudin</creator>
        
        <creator>Patria Rachman Hakim</creator>
        
        <creator>Sartika Salaswati</creator>
        
        <creator>Lilik Budi Prasetyo</creator>
        
        <creator>Yudi Setiawan</creator>
        
        <subject>Field experiment; vicarious calibration; image quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A method for uncertainty evaluation of vicarious calibration of the solar reflection channels (visible to near infrared) of spaceborne radiometers is proposed. A reflectance-based at-sensor radiance estimation method for the solar reflection channels of radiometers onboard remote sensing satellites is also proposed. An example of vicarious calibration of LISA (Line Imager Space Application) onboard LISAT (the LAPAN-IPB Satellite) is described. A preliminary analysis shows that the proposed uncertainty evaluation method is appropriate. It is also found that the percent difference between the DN (Digital Number)-derived radiance and the estimated TOA (Top of the Atmosphere, i.e., at-sensor) radiance ranges from 3.5 to 9.6%, and that the percent difference at the shorter wavelength (Blue) is greater than at the longer wavelength (Near Infrared: NIR). Compared with the corresponding results for Terra/ASTER/VNIR, these findings are natural and reasonable.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_51-Method_for_Uncertainty_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Heart Disease Behavior-Based Prediction System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100149</link>
        <id>10.14569/IJACSA.2019.0100149</id>
        <doi>10.14569/IJACSA.2019.0100149</doi>
        <lastModDate>2019-01-31T10:29:16.1600000+00:00</lastModDate>
        
        <creator>O. E. Emam</creator>
        
        <creator>A. Abdo</creator>
        
        <creator>Mona. M. Mahmoud</creator>
        
        <subject>Chest pain; risk factors; coronary; cholesterol; neural networks; decision tree; naive Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Heart disease prediction is a complex process influenced by several factors: the combination of attributes indicating the possibility of heart disease and the availability of these attributes in the database, an accurate selection of these attributes and determination of the priority and impact of each of them on the prediction model, and finally the selection of the appropriate classification technique to build the model. Most previous studies have used some heart disease symptoms as major risk factors to build a heart disease prediction system, leading to inaccurate prediction results. The main objective of this study is to build an Adaptive Heart Disease Behavior-Based Prediction System (AHDBP) based on risk factors and behaviors that may lead to heart disease. Different classification algorithms were deployed to obtain the most accurate results, with eighteen attributes used to build the prediction system. The accuracy of the classification techniques was as follows: Decision Tree 90.34%, Naive Bayes 91.54%, and Neural Networks 94.91%; neural networks thus predict heart disease better than the other techniques. The Chi square method has also been applied to determine the difference between the expected and the observed results, and the proposed system proved its accuracy at 86.54%.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_49-An_Adaptive_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Malware Detection Techniques based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100148</link>
        <id>10.14569/IJACSA.2019.0100148</id>
        <doi>10.14569/IJACSA.2019.0100148</doi>
        <lastModDate>2019-01-31T10:29:16.1300000+00:00</lastModDate>
        
        <creator>Hoda El Merabet</creator>
        
        <creator>Abderrahmane Hajraoui</creator>
        
        <subject>Malware; anti-malware; machine learning; feature extraction; feature selection; random forest; SVM; neural networks; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Diverse malware programs are created daily to attack computer systems without the knowledge of their users. While some authors of these programs intend to steal secret information, others quietly try to prove their competence and aptitude. Anti-malware programs primarily use the traditional signature-based static technique to counter these malicious codes. Although this technique excels at blocking known malware, it can never intercept new malware. The dynamic technique, often based on running the executable in a virtual environment, is employed by a number of anti-malware programs; its major drawbacks are the long scanning time and the high consumption of resources. Nowadays, recent programs may utilize a third, heuristic technique based on machine learning, which has proven successful in several areas involving the processing of huge amounts of data. In this paper we provide a survey of the available research utilizing this latter technique to counter cyber-attacks. We explore the different training phases of machine learning classifiers for malware detection. The first phase is the extraction of features from the input files according to previously chosen feature types. The second phase is the rejection of less important features and the selection of the most important ones, which better represent the data contained in the input files. The last phase is the injection of the selected features into a chosen machine learning classifier, so that it can learn to distinguish between benign and malicious files and give accurate predictions when confronted with previously unseen files. The paper ends with a critical comparison of the studied approaches according to their performance in malware detection.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_48_A_Survey_of_Malware_Detection_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Qualitative Analysis to Evaluate Key Characteristics of Web Mining based e-Commerce Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100147</link>
        <id>10.14569/IJACSA.2019.0100147</id>
        <doi>10.14569/IJACSA.2019.0100147</doi>
        <lastModDate>2019-01-31T10:29:16.1130000+00:00</lastModDate>
        
        <creator>Sohail Tariq</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Muhammad Umar Sarwar</creator>
        
        <creator>Hafiz Muhammad Rashid</creator>
        
        <creator>Muhammad Zaman Khalid</creator>
        
        <subject>Web Mining; e-Commerce Applications; User Interface; Interactivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>E-Commerce applications play a vital role by providing a competitive advantage over business peers. It is important to extract interesting patterns from e-commerce transactions to analyze customer experience and customer likelihood. For this purpose, web mining based e-commerce applications are being developed for various e-businesses. Characteristics such as user interface and interactivity can make these applications more efficient and effective, and well-defined criteria are needed to prioritize such key characteristics. The primary intention of this work is to identify and prioritize the key characteristics and their impact on the design of these applications. This paper provides a qualitative, survey-based evaluation and prioritization of key characteristics.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_47-A_Qualitative_Analysis_to_Evaluate_Key_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Cross-lingual Sentiment Analysis of Malay Twitter Data Using Lexicon-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100146</link>
        <id>10.14569/IJACSA.2019.0100146</id>
        <doi>10.14569/IJACSA.2019.0100146</doi>
        <lastModDate>2019-01-31T10:29:16.1000000+00:00</lastModDate>
        
        <creator>Nur Imanina Zabha</creator>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Erman Hamid</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <subject>Opinion Mining; Sentiment Analysis; Lexicon-based Approach; Cross-lingual</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Sentiment analysis is the process of detecting and classifying sentiments as positive, negative, or neutral. Most sentiment analysis research focuses on English lexicon vocabularies, whereas Malay is still under-resourced. Sentiment analysis of Malaysian social media is challenging due to the mixed usage of English and Malay. The objective of this study was to develop a cross-lingual sentiment analysis system using a lexicon-based approach. Lexicons of the two languages are combined in the system; Twitter data were then collected and the results presented graphically. The results showed that the classifier was able to determine the sentiments. This study is significant for companies and governments seeking to understand people’s opinions on social networks, especially in Malay-speaking regions.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_46-Developing_Cross_Lingual_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Connection Time Estimation between Nodes in VDTN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100145</link>
        <id>10.14569/IJACSA.2019.0100145</id>
        <doi>10.14569/IJACSA.2019.0100145</doi>
        <lastModDate>2019-01-31T10:29:16.0830000+00:00</lastModDate>
        
        <creator>Adnan Ali</creator>
        
        <creator>Muhammad Shakil</creator>
        
        <creator>Hamaad Rafique</creator>
        
        <creator>Sehrish Munawar Cheema</creator>
        
        <subject>Vehicular delay-tolerant network; delay tolerant network; smart transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Vehicular delay-tolerant network (VDTN) is a widely used communication standard for scenarios where no end-to-end path is available between nodes. Data is sent from one node to another using VDTN routing protocols, which rely on different decision metrics to choose whether to send data to a connected node or find another suitable candidate. These metrics include time to live (TTL), geographical information, destination utility, relay utility, meeting prediction, and total and remaining buffer size, among others; different routing protocols use different combinations of metrics. In this paper, a metric called “estimation-time” is introduced. The “estimation-time” is assessed at the encounter of two nodes, and a node may decide on that basis whether to send data. This metric can be used in routing decisions. The simulation results show accuracy above 88%, indicating that the “estimation-time” metric is calculated correctly.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_45-Connection_Time_Estimation_between_Nodes_in_VDTN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Control by Multi-Model Approach of the Greenhouse Dynamical System with Multiple Time-delays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100143</link>
        <id>10.14569/IJACSA.2019.0100143</id>
        <doi>10.14569/IJACSA.2019.0100143</doi>
        <lastModDate>2019-01-31T10:29:16.0530000+00:00</lastModDate>
        
        <creator>Marwa Hannachi</creator>
        
        <creator>Ikbel Bencheikh Ahmed</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <subject>Variable time-delay; multivariable systems; greenhouse system; renewable energy; multi-model approach; commutation’s technique; internal model control; discrete-time case; stability; disturbances; robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This paper presents the Internal Multi-Model Control (IMMC) for a multivariable discrete-time system with variable multiple delays. This work focuses on the greenhouse climate model as a multivariable time-delay system. In fact, greenhouse technology is an interesting subject for sustainable crop production in regions with disadvantageous climatic conditions. In addition, high summer temperature is an important setback for successful greenhouse crop production throughout the year. The main intent of this work is to present a new control of the greenhouse during the summer months using the Internal Multi-Model approach. First, the plant and the model are discretized with the bilinear approximation, and then they are controlled with an Internal Multi-Model Control. The chosen system is modeled only for the summer season. The simulation results prove the robustness of this Internal Multi-Model Control in preserving system stability despite the uncertainty of the chosen model and the external disturbances.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_43-Modeling_and_Control_by_Multi_Model_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Classification Framework for Community Pedagogical Content Management using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100144</link>
        <id>10.14569/IJACSA.2019.0100144</id>
        <doi>10.14569/IJACSA.2019.0100144</doi>
        <lastModDate>2019-01-31T10:29:16.0530000+00:00</lastModDate>
        
        <creator>Husnain Mushtaq</creator>
        
        <creator>Imran Siddique</creator>
        
        <creator>Dr. Babur Hayat Malik</creator>
        
        <creator>Muhammad Ahmed</creator>
        
        <creator>Umair Muneer Butt</creator>
        
        <creator>Rana M. Tahir Ghafoor</creator>
        
        <creator>Hafiz Zubair</creator>
        
        <creator>Umer Farooq</creator>
        
        <subject>Text mining; educational data mining; social learning; course design and delivery; technology supported learning; crowdsourced educational data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Recent years have witnessed a significant surge in the awareness and exploitation of social media, especially community Question and Answer (Q&amp;A) websites, by academicians and professionals. These sites are large repositories of vast data, paving the way to new avenues for research through applications of data mining and data analysis that investigate trending topics and the topics receiving the most user attention. Educational Data Mining (EDM) techniques can be used to unveil the potential of community Q&amp;A websites. Conventional EDM approaches are concerned with generating data through systematic means and mining it for knowledge discovery to improve educational processes. This paper gives a novel idea to explore data already generated by millions of users with a variety of expertise in their particular domains on a common platform like StackOverFlow (SO), a community Q&amp;A website where users post questions and receive answers about particular problems. This study presents an EDM framework to classify community data into Software Engineering subjects. The framework classifies SO posts according to academic courses, along with their best solutions, to accommodate learners. Moreover, it gives teachers, instructors, educators and other EDM stakeholders insight into commonly occurring subject-related problems, so that they can pay more attention to them and design and manage their course delivery and teaching accordingly. The data mining framework performs preprocessing of data using NLP techniques and applies machine learning algorithms to classify the data. Amongst all classifiers, SVM performs best with 72.06% accuracy. Evaluation measures like precision, recall and F1 score were also used to evaluate the best performing classifier.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_44-Educational_Data_Classification_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: A Defected Ground based Fractal Antenna for C and S Band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100142</link>
        <id>10.14569/IJACSA.2019.0100142</id>
        <doi>10.14569/IJACSA.2019.0100142</doi>
        <lastModDate>2019-01-31T10:29:16.0200000+00:00</lastModDate>
        
        <creator>Muhammad Noman Riaz</creator>
        
        <creator>Attaullah Buriro</creator>
        
        <creator>Athar Mahboob</creator>
        
        <subject>Antenna gain; voltage standing wave ratio (VSWR); coefficient of reflection; antenna radiation pattern; defected ground</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA’s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2019.0100142.retraction</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_42_A_Defected_Ground_based_Fractal_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Maximizing Energy Harvesting from RF Signals using T-Shaped Microstrip Patch Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100141</link>
        <id>10.14569/IJACSA.2019.0100141</id>
        <doi>10.14569/IJACSA.2019.0100141</doi>
        <lastModDate>2019-01-31T10:29:16.0070000+00:00</lastModDate>
        
        <creator>Muhammad Salman Iqbal</creator>
        
        <creator>Tariq Jameel Khanzada</creator>
        
        <creator>Faisal A. Dahri</creator>
        
        <creator>Asif Ali</creator>
        
        <creator>Mukhtiar Ali</creator>
        
        <creator>Abdul Wahab Khokhar</creator>
        
        <subject>Microstrip patch antenna; radio frequency; energy harvesting; ISM band; gain; return loss and directivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The advancement of the modern world requires addressing the power crisis. New methodologies for energy harvesting have been considered, but their success in different environments is still to be explored. This paper deals with antenna design to harvest energy from radio signals. Scavenging energy from surrounding sources is considered energy harvesting, and it would be an alternative approach for low-energy applications. Compared with the well-known energy harvesting sources such as wind and solar, radio frequency signals can provide a continuous supply of energy. However, extracting the maximum usable energy is challenging because the amplitude of the arriving signal is very low, while the operating requirements of available antennas are proportionally higher. The microstrip patch antenna suits such limited energy resources because it is low profile, easy to configure, and simple in design at the lowest cost. Furthermore, the combined configuration and the proposed antenna design provide maximum energy efficiency. Multiple simulation iterations are performed to maximize the gain in the 2.4 GHz ISM band. The microstrip patch antenna operates at 2.4 GHz, providing a gain of 7.2 dB, a return loss of -20 dB and a directivity of 7.44 dB. The achieved source voltage is 900 mV, and after rectification the output voltage is 2.5 V. The results are efficient and suitable for mitigating the power crisis to some extent.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_41-Analysis_and_Maximizing_Energy_Harvesting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Coin Passcode: A Shoulder-Surfing Proof Graphical Password Authentication Model for Mobile Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100140</link>
        <id>10.14569/IJACSA.2019.0100140</id>
        <doi>10.14569/IJACSA.2019.0100140</doi>
        <lastModDate>2019-01-31T10:29:16.0070000+00:00</lastModDate>
        
        <creator>Teoh Joo Fong</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <subject>Mobile graphical password; multi-elemental passcode; shoulder-surfing proof passcode; mobile authentication model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Swiftness, simplicity, and security are crucial for mobile device authentication. Currently, most mobile devices are protected by a six-pin numerical passcode authentication layer, which is extremely vulnerable to shoulder-surfing attacks and spyware attacks. This paper proposes a multi-elemental graphical password authentication model for mobile devices that is resistant to shoulder-surfing attacks and spyware attacks. The proposed Coin Passcode model simplifies the complex user interface issues that previous graphical password models have, working as a swift passcode security mechanism for mobile devices. The Coin Passcode model also has a high memorability rate compared to existing numerical and alphanumerical passwords, as psychology studies suggest that humans are better at remembering graphics than words. The results show that the Coin Passcode is able to overcome the shoulder-surfing and spyware attack vulnerabilities that existing mobile application numerical passcode authentication layers suffer from.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_40-The_Coin_Passcode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explore the Major Characteristics of Learning Management Systems and their Impact on e-Learning Success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100139</link>
        <id>10.14569/IJACSA.2019.0100139</id>
        <doi>10.14569/IJACSA.2019.0100139</doi>
        <lastModDate>2019-01-31T10:29:15.9730000+00:00</lastModDate>
        
        <creator>Mohammad Shkoukani</creator>
        
        <subject>Learning management system; e-learning; educational process; Moodle; educational institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Today, many educational institutions and organizations around the world, especially universities, have adopted e-learning and learning management system concepts to enhance and support their educational processes, since the number of students who would like to attend universities and educational institutions is increasing. This paper has several objectives. The first is to compare different types of the most popular learning management system (LMS) software, such as Moodle and Blackboard, based on their unique features. The second is to present learning management systems and their benefits in e-learning. Finally, this paper presents a proposed model, which consists of six independent variables (application and integration, communication, assessment, content, cost, and security) and one dependent variable, e-learning success. A questionnaire was developed and distributed to 450 respondents, and data was then collected from 418 valid questionnaires. The results showed that there is a statistically significant impact of learning management system major characteristics on e-learning success.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_39-Explore_the_Major_Characteristics_of_Learning_Management_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Cognitive Radio Vehicular Ad Hoc Network with Fog Node based Distributed Blockchain Cloud Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100138</link>
        <id>10.14569/IJACSA.2019.0100138</id>
        <doi>10.14569/IJACSA.2019.0100138</doi>
        <lastModDate>2019-01-31T10:29:15.9600000+00:00</lastModDate>
        
        <creator>Sara Nadeem</creator>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Fahad Ahmad</creator>
        
        <creator>Jaweria Manzoor</creator>
        
        <subject>Cognitive radio vehicular ad-hoc network (CRVANET); cloud computing; blockchain; security; software defined networking (SDN); edge computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Applications of cognitive radio ad hoc networks are continuously increasing in wireless communication globally. In vehicular environments, cognitive radio technology combined with mobile ad hoc networks (MANETs) enables vehicles to monitor the available channels and to function effectively on these frequencies by sharing ongoing information with drivers and different frameworks to enhance traffic safety on roads. To overcome the computational and storage limitations of an individual vehicle, Vehicular Cloud Computing (VCC) is used by merging VANETs with cloud computing. Cloud computing requires high security and protection because authenticated users and attackers have the same rights in VCC. Security is enhanced in CRVANETs, but the distributed nature of the cloud opens a door for various attacks, such as trust model, data security, connection fault and query tracking attacks. This paper proposes an effective and secure blockchain-based distributed cloud architecture in place of the conventional cloud architecture to protect drivers’ privacy with low cost and on-demand sensing procedures in the CRVANET ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_38-Securing_Cognitive_Radio_Vehicular_Ad_Hoc_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Gateway-based Context-Aware and Self-Adaptive Security Management Model for IoT-based eHealth Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100137</link>
        <id>10.14569/IJACSA.2019.0100137</id>
        <doi>10.14569/IJACSA.2019.0100137</doi>
        <lastModDate>2019-01-31T10:29:15.9430000+00:00</lastModDate>
        
        <creator>Waqas Aman</creator>
        
        <creator>Firdous Kausar</creator>
        
        <subject>Internet of things; security; self-adaptation; context awareness; ehealth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>IoT-based systems have considerable dynamic behavior and heterogeneous technology participants. The corresponding threats and security operations are also complex to handle. Traditional security solutions may not be appropriate and effective in such ecosystems, as they recognize and assess a limited context, work well only with high-end and specific computing platforms, and implement manual response mechanisms. We have identified the security objectives of a potential IoT-eHealth system and have proposed a security model that can efficiently achieve them. The proposed model is a context-aware and self-adaptive security management model for IoT from an eHealth perspective that will monitor, analyze, and respond to a multitude of security contexts autonomously. As these operations are planned at the gateway level, the model exploits the advantages of computing in the Fog layer. Moreover, the proposed model offers flexibility and open connectivity to allow any smart device or thing to be managed irrespective of its native design. We have also explained how our model can establish and serve the essential security objectives of an IoT-based environment.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_37-Towards_a_Gateway_based_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Automatic Discrimination Multimedia Documents for Indexing using Hybrid GMM-SVM Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100136</link>
        <id>10.14569/IJACSA.2019.0100136</id>
        <doi>10.14569/IJACSA.2019.0100136</doi>
        <lastModDate>2019-01-31T10:29:15.9100000+00:00</lastModDate>
        
        <creator>Debabi Turkia</creator>
        
        <creator>Bousselmi Souha</creator>
        
        <creator>Cherif Adnen</creator>
        
        <subject>Audio indexing; classification; GMM; SVM; entropy; Speech-music discrimination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In this paper, a new parameterization method for sound discrimination in multimedia documents based on phase entropy is presented to facilitate the indexing of audio documents, to speed up their search in digital libraries and their retrieval over the network, to detect speakers for judicial purposes, and to translate films into a specific language. Four procedures of an indexing method (parameterization, training, modeling and classification) are developed to solve these problems. In the first step, new temporal characteristics and descriptors are extracted; the GMM and SVM classifiers are then associated with the other procedures. The proposed algorithm is simulated in the MATLAB environment, and system performance is evaluated on a database of music containing several speech segments.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_36-Innovative_Automatic_Discrimination_Multimedia_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rab-KAMS: A Reproducible Knowledge Management System with Visualization for Preserving Rabbit Farming and Production Knowledge</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100135</link>
        <id>10.14569/IJACSA.2019.0100135</id>
        <doi>10.14569/IJACSA.2019.0100135</doi>
        <lastModDate>2019-01-31T10:29:15.8970000+00:00</lastModDate>
        
        <creator>Temitayo Matthew Fagbola</creator>
        
        <creator>Surendra Colin Thakur</creator>
        
        <creator>Oludayo Olugbara</creator>
        
        <subject>Knowledge archiving; knowledge management; mobile design; starUML; prot&#233;g&#233;-OWL; rabbit production; reproducibility; ubiquitous ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The sudden rise in rural-to-urban migration has been a key challenge threatening food security and, most especially, the survival of Rabbit Farming and Production (RFP) in Sub-Saharan Africa. Currently, significant knowledge of RFP is going into extinction, as evidenced by the drastic fall in commercial rabbit farming and production indices. Hence, the need for a system to proactively preserve RFP knowledge for future potential farmers cannot be overemphasized. To this end, knowledge archiving and management are key concepts for ensuring long-term digital storage of conceptual blueprints and specifications of systems, methods and frameworks, with capacity for future updates, while making such information readily accessible to relevant stakeholders on demand. Therefore, a reproducible Rabbit production Knowledge Archiving and Management System (Rab-KAMS) is developed in this paper. A three-stage approach was adopted to develop Rab-KAMS. This includes a knowledge gathering and conceptualization stage; a knowledge revision stage to validate the authenticity and relevance of the gathered knowledge for its intended purpose; and a prototype design stage adopting unified modelling language conceptual workflows, ontology graphs and a frame system. For seamless accessibility and ubiquity, the design was implemented as a mobile application with interactive end-user interfaces developed using XML and Java in the Android Studio 3.0.2 development environment, while adopting the V-shaped software development model. The qualitative evaluation results obtained for Rab-KAMS, based on users’ ratings and reviews, indicate a high level of acceptability and reliability by the users. They also indicate that relevant RFP knowledge was correctly captured and provided in a user-friendly manner. The developed Rab-KAMS could offer seamless acquisition, representation, organization and mining of new and existing verified knowledge about RFP, in turn contributing to food security.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_35-Rab_KAMS_A_Reproducible_Knowledge_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Visual Positive Sentiment using PCNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100134</link>
        <id>10.14569/IJACSA.2019.0100134</id>
        <doi>10.14569/IJACSA.2019.0100134</doi>
        <lastModDate>2019-01-31T10:29:15.8630000+00:00</lastModDate>
        
        <creator>Samar H. Ahmed</creator>
        
        <creator>Emad Nabil</creator>
        
        <creator>Amr A. Badr</creator>
        
        <subject>Visual sentiment analysis; pulse coupled neural network (PCNN); Viola et al. algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Many people all over the world use online social networks to express their feelings and share their experiences, and the easiest way from their perspective is to use images and videos to do so. This paper shows the utilization of two techniques (the Viola et al. algorithm and the Pulse Coupled Neural Network) in visual sentiment analysis using a hand-labeled dataset. The proposed system, which uses the PCNN with an NN classifier, achieves 96% correct classification, whereas the Viola algorithm achieves 94% for the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_34-Detection_of_Visual_Positive_Sentiment_using_PCNN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Website Detection: An Improved Accuracy through Feature Selection and Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100133</link>
        <id>10.14569/IJACSA.2019.0100133</id>
        <doi>10.14569/IJACSA.2019.0100133</doi>
        <lastModDate>2019-01-31T10:29:15.8500000+00:00</lastModDate>
        
        <creator>Alyssa Anne Ubing</creator>
        
        <creator>Syukrina Kamilia Binti Jasmi</creator>
        
        <creator>Azween Abdullah</creator>
        
        <creator>NZ Jhanjhi</creator>
        
        <creator>Mahadevan Supramaniam</creator>
        
        <subject>Phishing; feature selection; classification models; random forest; prediction model; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This research focuses on evaluating whether a website is legitimate or phishing. Our research contributes to improving the accuracy of phishing website detection. Hence, a feature selection algorithm is employed and integrated with an ensemble learning methodology based on majority voting, and compared with different classification models including Random Forest, Logistic Regression and a prediction model. Our research demonstrates that current phishing detection technologies have an accuracy rate between 70% and 92.52%. The experimental results prove that the accuracy rate of our proposed model can yield up to 95%, which is higher than the current technologies for phishing website detection. Moreover, the learning models used during the experiment indicate that our proposed model has a promising accuracy rate.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_33_Phishing_Website_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Scheme for Detection and Prevention of Black Hole Attacks in AODV-Based MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100132</link>
        <id>10.14569/IJACSA.2019.0100132</id>
        <doi>10.14569/IJACSA.2019.0100132</doi>
        <lastModDate>2019-01-31T10:29:15.8330000+00:00</lastModDate>
        
        <creator>Muhammad Salman Pathan</creator>
        
        <creator>Jingsha He</creator>
        
        <creator>Nafei Zhu</creator>
        
        <creator>Zulfiqar Ali Zardari</creator>
        
        <creator>Muhammad Qasim Memon</creator>
        
        <creator>Aneeka Azmat</creator>
        
        <subject>Mobile ad hoc network; denial of service; black hole; fake route request packet; AD-hoc on-demand distance vector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A mobile ad hoc network (MANET) is a set of independent mobile nodes that connect to each other over a wireless channel without any centralized infrastructure or integrated security. MANET is an easy target for many Denial of Service (DoS) attacks, which seriously harm its functionality and connectivity. A black hole attack is a type of DoS attack in which the malevolent node tries to capture all the data packets from a source node by sending a fabricated fake route reply (RREP) packet, falsely pretending that it possesses the shortest path towards the destination node, and then drops all the packets it receives. In this paper, the AODV (Ad-hoc On-demand Distance Vector) routing protocol is improved by incorporating an efficient and simple mechanism to mitigate black hole attacks. The Mechanism to Detect Black hole attacks from MANET (MDBM) uses fake route request (RREQ) packets with an unreal destination address in order to detect black hole nodes prior to the actual routing process. The simulation experiments conducted have verified the performance of the proposed detection and prevention scheme. The results demonstrated that the proposed mechanism performed well in terms of packet delivery ratio, end-to-end delay and throughput under black hole attack.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_32-An_Efficient_Scheme_for_Detection_and_Prevention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Concept based Approach for User Centered Health Information Retrieval to Address Presentation Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100131</link>
        <id>10.14569/IJACSA.2019.0100131</id>
        <doi>10.14569/IJACSA.2019.0100131</doi>
        <lastModDate>2019-01-31T10:29:15.8030000+00:00</lastModDate>
        
        <creator>Ibrahim Umar Kontagora</creator>
        
        <creator>Isredza Rahmi A. Hamid</creator>
        
        <creator>Nurul Aswa Omar</creator>
        
        <subject>Enhanced concept based approach; existing concept based approach; medical discharge reports; medical expert form; layman’s form</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The diversity of health information seekers signifies the enormous variety of information needs among numerous users. Existing health information retrieval systems have failed to address the information needs of both medical experts and lay patients. This study focused on designing an enhanced information retrieval approach using the concept based approach that would address the information needs of both medical experts and lay patients. We evaluated and compared the performance of the proposed enhanced concept based approach with the existing approaches, namely the concept based approach (CBA), the query likelihood model (QLM) and the latent semantic indexing (LSI) approach, using the Diagnosia 7, Medical Subject Heading (MeSH), Khresmoi Project 6 and Genetic Home Reference datasets. The experimental results show that the proposed enhanced concept based approach achieved similarity scores of 1.0 (100%) with respect to maxSim values for all runs in all four datasets, and idf weighting values between 3.82 and 3.86 for all runs in all four datasets. The existing approaches (CBA, QLM, LSI) scored maxSim values of 0.5 (50%) for all their runs in all four datasets and idf weighting values between 1.40 and 1.47, as a result of their inability to generate and display medical search results in both medical expert and layman’s forms. These results show that the proposed enhanced concept based approach is the approach best suited to addressing presentation issues.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_31-An_Enhanced_Concept_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Repository System for Geospatial Software Development and Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100130</link>
        <id>10.14569/IJACSA.2019.0100130</id>
        <doi>10.14569/IJACSA.2019.0100130</doi>
        <lastModDate>2019-01-31T10:29:15.7870000+00:00</lastModDate>
        
        <creator>Basem Y Alkazemi</creator>
        
        <subject>Open-Source software; geographic information system; repository system; specification language; components integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The integration of geospatial software components has recently received considerable attention due to the rapid growth of GIS applications and development environments. However, finding appropriate source code components that can be incorporated into a system under development requires considerable verification to ensure the source code will work correctly. This paper therefore describes the design of a repository system that employs a new specification language, namely SpecJ2, to address the challenges involved in integrating and operating software components. SpecJ2 was designed to represent the architectural attributes of source code components and to abstract their complexity by applying the notion of separation of concerns, a key consideration when designing software systems. The results of the experiment showed that SpecJ2 is capable of defining the different architectural attributes of source code components and can facilitate their integration and interaction at run-time. Thus, SpecJ2 can classify software components according to their identified types.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_30-Repository_System_for_Geospatial_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stress Detection of the Employees Working in Software Houses using Fuzzy Inference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100129</link>
        <id>10.14569/IJACSA.2019.0100129</id>
        <doi>10.14569/IJACSA.2019.0100129</doi>
        <lastModDate>2019-01-31T10:29:15.7570000+00:00</lastModDate>
        
        <creator>Rabia Abid</creator>
        
        <creator>Nageen Saleem</creator>
        
        <creator>Hafiza Ammaraa Khalid</creator>
        
        <creator>Fahad Ahmad</creator>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Jaweria Manzoor</creator>
        
        <creator>Kashaf Junaid</creator>
        
        <subject>Stress; fuzzy inference system; stress detection; software house</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In the modern era, where the use of computer systems has become mandatory in software houses and has increased in various other organizations, the stress level of employees working for hours at these systems has risen as well. Employees working in software houses are prone to increased stress and anxiety levels. It is important to detect the stress level of employees so that various measures can be applied in the working environment to obtain better output. This paper is beneficial for detecting the stress level of employees working on the computer using various inputs, i.e., heart rate, pupil contraction, facial expressions, skin temperature, blood pressure, age and the number of hours spent working on the computer. This research indicates the raised stress level of employees, and this indication can be used to increase the quality of work and the satisfaction of employees working in a particular organization. According to the detected stress levels within the working environment, various remedial steps can be taken and applied during employees' break hours to ensure maximum satisfaction and improved quality of work.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_29-Stress_Detection_of_the_Employees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radial basis Function Neural Network for Predicting Flow Bottom Hole Pressure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100128</link>
        <id>10.14569/IJACSA.2019.0100128</id>
        <doi>10.14569/IJACSA.2019.0100128</doi>
        <lastModDate>2019-01-31T10:29:15.7400000+00:00</lastModDate>
        
        <creator>Medhat H A Awadalla</creator>
        
        <subject>Radial basis function neural network; neuro-fuzzy system; feedforward neural networks; empirical model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The ability to monitor the flow bottom hole pressure in pumping oil wells provides important information regarding both reservoir and artificial lift performance. This paper proposes an iterative approach to optimize the spread constant and root mean square error goal of the radial basis function neural network. In addition, the optimized network is utilized to estimate this oil well pressure. Simulated experiments and qualitative comparisons with the most related techniques such as feedforward neural networks, neuro-fuzzy system, and the empirical model have been conducted. The achieved results show that the proposed technique gives better performance in estimating the flow of bottom hole pressure. Compared with the other developed techniques, an improvement of 7.14% in the root mean square error and 3.57% in the standard deviation of relative error has been achieved. Moreover, 90% and 95% accuracy of the proposed network are attained by 99.6% and 96.9% of test data, respectively.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_28-Radial_Basis_Function_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Economical Motivation and Benefits of using Load Shedding in Energy Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100127</link>
        <id>10.14569/IJACSA.2019.0100127</id>
        <doi>10.14569/IJACSA.2019.0100127</doi>
        <lastModDate>2019-01-31T10:29:15.7230000+00:00</lastModDate>
        
        <creator>Walid Emar</creator>
        
        <creator>Ghazi Suhail Al-Barami</creator>
        
        <subject>Demand side management; load shedding; energy management system; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>With declining fossil fuel reserves and rising demand for renewable energy, the need for integration of these sources into the electricity system increases. At the same time, there is a rise in the price of energy, which increases the willingness of consumers to change their behavior in order to reduce costs, or at least to keep them at an acceptable level. One of the options for optimizing energy savings on the consumer side is to use the principle of demand response. This principle enables the consumer, for example, to have the necessary information to optimize the consumption of electricity so as to minimize it when the energy price is high. In view of the constantly changing conditions in the electricity system, this optimization needs to be implemented automatically, without requiring intervention from the users of the system. The main focus of this paper is the formulation and optimization of Demand Side Management as a mixed-integer quadratic programming (MIQP) problem. The result of such optimization is a schedule for the use of individual devices that takes into account the cost of electricity, the working cycle of the installation, the requirements of the user, system constraints and other input information. After implementation into the individual component - the energy manager - the proposed method will ensure the optimal utilization of appliances and other devices controlled by the switches of a smart house.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_27-Economical_Motivation_and_Benefits_of_using_Load_Shedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>English-Arabic Hybrid Machine Translation System using EBMT and Translation Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100126</link>
        <id>10.14569/IJACSA.2019.0100126</id>
        <doi>10.14569/IJACSA.2019.0100126</doi>
        <lastModDate>2019-01-31T10:29:15.6930000+00:00</lastModDate>
        
        <creator>Rana Ehab</creator>
        
        <creator>Eslam Amer</creator>
        
        <creator>Mahmoud Gadallah</creator>
        
        <subject>Hybrid machine translation system; translation memory; internal medicine publications; google translate; BLEU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A machine translation system that translates from English to Arabic with high accuracy is not yet available because of the complex morphology of the Arabic language. A hybrid machine translation system combining the Example Based Machine Translation technique and Translation Memory is introduced in this paper. Two datasets were used in the experiments, constructed from internal medicine publications and the Worldwide Arabic Medical Translation Guide Common Medical Terms sorted by Arabic. To examine the accuracy of the constructed system, four experiments were carried out: the first using an Example Based Machine Translation system, the second using Google Translate, the third using Example Based Machine Translation with Google Translate, and the fourth using the proposed system combining Example Based Machine Translation with Translation Memory. The constructed system achieved a BLEU score of 77.17 on the first dataset and 63.85 on the second, which were the highest scores.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_26-English_Arabic_Hybrid_Machine_Translation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Trapezoidal Cross-Section Stacked Gate FinFET with Gate Extension for Improved Gate Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100125</link>
        <id>10.14569/IJACSA.2019.0100125</id>
        <doi>10.14569/IJACSA.2019.0100125</doi>
        <lastModDate>2019-01-31T10:29:15.6770000+00:00</lastModDate>
        
        <creator>Sangeeta Mangesh</creator>
        
        <creator>Pradeep Chopra</creator>
        
        <creator>Krishan K. Saini</creator>
        
        <subject>Drain induced barrier lowering (DIBL); gate induced drain leakage (GIDL); subthreshold swing (SS); silicon on-insulator (SOI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>An improved trapezoidal stacked gate bulk FinFET device is implemented with an extension in the gate for enhanced performance. The novelty of the design is a trapezoidal cross-section FinFET with a stacked metal gate along with an extension on both sides. This improved device structure, at additional process cost, exhibits significant enhancement in performance metrics, especially in terms of leakage current behavior. The simulation study proves the suitability of the device for low power applications, with an improved on/off current ratio, subthreshold swing (SS), drain induced barrier lowering (DIBL), gate induced drain leakage (GIDL), uniform distribution of electron charge density along the channel, and the effects of Auger recombination within the channel.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_25-A_Trapezoidal_Cross_Section_Stacked_Gate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized K-Means Clustering Model based on Gap Statistic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100124</link>
        <id>10.14569/IJACSA.2019.0100124</id>
        <doi>10.14569/IJACSA.2019.0100124</doi>
        <lastModDate>2019-01-31T10:29:15.6470000+00:00</lastModDate>
        
        <creator>Amira M. El-Mandouh</creator>
        
        <creator>Laila A. Abd-Elmegid</creator>
        
        <creator>Hamdi A. Mahmoud</creator>
        
        <creator>Mohamed H. Haggag</creator>
        
        <subject>Big data; mapreduce; k-means; gap statistic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Big data technologies have become popular for processing, storing and managing massive volumes of data. Clustering is an essential phase in big data analysis, and many real-life application areas use clustering methodology for result analysis. Clustering large data sets has become a challenging issue in the field of big data analytics. Among all clustering algorithms, the K-means algorithm is the most widely used unsupervised clustering approach. The K-means algorithm is best suited to determining similarities between objects based on distance measures over small datasets, whereas existing clustering algorithms require scalable solutions to manage large datasets. Moreover, for a particular domain-specific problem, the initial selection of K is still a significant concern. In this paper, an optimized clustering approach is presented that calculates the optimal number of clusters (k) for specific domain problems. The proposed approach is an optimal solution based on analysis of a cluster performance measure, the gap statistic. The experimental results show that the proposed model can efficiently enhance the speed and accuracy of the clustering process by reducing the computational complexity of the standard k-means algorithm, achieving 76.3%.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_24-Optimized_K_Means_Clustering_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Deep Learning Approach for Breast Cancer Mass Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100123</link>
        <id>10.14569/IJACSA.2019.0100123</id>
        <doi>10.14569/IJACSA.2019.0100123</doi>
        <lastModDate>2019-01-31T10:29:15.6300000+00:00</lastModDate>
        
        <creator>Wael E. Fathy</creator>
        
        <creator>Amr S. Ghoneim</creator>
        
        <subject>Convolutional Neural Networks (CNNs); breast cancer; Global Average Pooling (GAP); mass classification and localization; Class Activation Map (CAM); Receiver Operating Characteristics Curve (ROC); Deep Learning; Computer Aided Detection And Diagnosis (CAD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Breast cancer is the most widespread type of cancer among women. The diagnosis of breast cancer in its early stages is still a significant problem worldwide. Accurate classification and localization of breast masses help in the early detection of the disease, so in the last few years a variety of CAD systems have been developed to enhance breast cancer classification and localization accuracy, but most of them are fully based on handcrafted feature extraction techniques, which affects their efficiency. Currently, deep learning approaches are able to automatically learn a set of high-level features and consequently are achieving remarkable results in object classification and detection tasks. In this paper, the pre-trained ResNet-50 architecture and the Class Activation Map (CAM) technique are employed in breast cancer classification and localization, respectively. The CAM technique exploits Convolutional Neural Network (CNN) classifiers with a Global Average Pooling (GAP) layer for object localization without any supervised information about object location. According to the experimental results, the proposed approach achieved 96% Area under the Receiver Operating Characteristics (ROC) curve in classification, with 99.8% sensitivity and 82.1% specificity. Furthermore, it is able to localize 93.67% of the masses at an average of 0.122 false positives per image on the Digital Database for Screening Mammography (DDSM) dataset. It is worth noting that the pre-trained CNN is able to automatically learn the most discriminative features in the mammogram, and thereby achieves superior results in breast cancer classification (normal or mass). Additionally, CAM exhibits the concrete relation between the mass located in the mammogram and the discriminative features learned by the CNN.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_23-A_Deep_Learning_Approach_for_Breast_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New PHP Discoverer for Modisco</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100122</link>
        <id>10.14569/IJACSA.2019.0100122</id>
        <doi>10.14569/IJACSA.2019.0100122</doi>
        <lastModDate>2019-01-31T10:29:15.6130000+00:00</lastModDate>
        
        <creator>Abdelali Elmounadi</creator>
        
        <creator>Nawfal El Moukhi</creator>
        
        <creator>Naoual Berbiche</creator>
        
        <creator>Nacer Sefiani</creator>
        
        <subject>MDRE; ADM; modisco; model discovery; PHP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>MoDisco is an Eclipse Generative Modeling Technologies project (GMT Project) intended to ease the design and building of model-based solutions dedicated to Model-Driven Reverse Engineering (MDRE) of legacy systems. It offers an open source, generic and extensible MDRE framework. Indeed, MDRE applies Model-Driven Engineering (MDE) principles to enhance traditional reverse engineering processes, and thus facilitates their understanding and manipulation. In the same context, Architecture-Driven Modernization (ADM) is an OMG (Object Management Group) standard that addresses the integration of MDA (Model-Driven Architecture) and reverse engineering with the aim of understanding and evolving existing software assets. MoDisco has thus succeeded in standing out as the reference implementation in the MDRE and ADM field. Currently, MoDisco handles only some technologies, such as Java and XML. Unfortunately, no adapted way to handle PHP (Hypertext Preprocessor) web-based projects with MoDisco has been available so far. This paper proposes a new model discovery tool intended for the PHP language. The tool constitutes an extension to the MoDisco framework that allows managing application assets written in PHP. Thus, this work aims at enhancing the capabilities of the MoDisco platform in managing more software development technologies.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_22-A_New_PHP_Discoverer_for_Modisco.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OntoDI: The Methodology for Ontology Development on Data Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100121</link>
        <id>10.14569/IJACSA.2019.0100121</id>
        <doi>10.14569/IJACSA.2019.0100121</doi>
        <lastModDate>2019-01-31T10:29:15.5830000+00:00</lastModDate>
        
        <creator>Arda Yunianta</creator>
        
        <creator>Ahmad Hoirul Basori</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Arif Bramantoro</creator>
        
        <creator>Irfan Syamsuddin</creator>
        
        <creator>Norazah Yusof</creator>
        
        <creator>Alaa Omran Almagrabi</creator>
        
        <creator>Khalid Alsubhi</creator>
        
        <subject>Data integration; methodology; ontology development; semantic issues; semantic approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Current implementations of data integration have many issues to be solved. Heterogeneity of non-standardized data, data conflicts between various data sources, data with different representations, as well as semantic problems are among the challenges that remain open to research. Semantic data integration using an ontology approach is considered an appropriate solution to deal with semantic problems in data integration. However, most methodologies for ontology development are developed to cover a specific purpose and are less suitable for common data integration implementations. This research offers an improved methodology for ontology development on data integration to deal with semantic problems, called OntoDI. It is a continuation and improvement of previous work on ontology development methods for agent systems. OntoDI consists of three main parts, namely pre-development, core-development and post-development, in which every part contains several phases. This paper describes an experiment with OntoDI in the electronic learning system domain. Using OntoDI, the development of ontology knowledge involves simpler phases, complete steps, and clear documentation for the ontology client. In addition, this ontology knowledge is also capable of overcoming semantic issues that arise in the sharing and integration process in the education area.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_21-OntoDI_The_Methodology_for_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finger Vein Recognition using Straight Line Approximation based on Ensemble Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100120</link>
        <id>10.14569/IJACSA.2019.0100120</id>
        <doi>10.14569/IJACSA.2019.0100120</doi>
        <lastModDate>2019-01-31T10:29:15.5530000+00:00</lastModDate>
        
        <creator>Roza Waleed Ali</creator>
        
        <creator>Junaidah Mohamed Kassim</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <subject>Finger vein recognition; SLA; ELM; SVM; HOG; straight line approximate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Human identity recognition and the protection of information security are current global concerns in this age of increasing information growth. The biometrics approach to establishing identity is considered one of the most promising approaches because it relies on internal features that are difficult to artificially recreate, steal and/or forget. Finger vein recognition is a unique method that depends on the physiological traits and parameters of human vein patterns. Published works on finger vein identification have hitherto ignored the power of aggregating different types of features and classifiers to improve the performance of a biometric recognition system. In this paper, we develop a novel feature approach named the straight line approximator (SLA) for extending the feature space of vein patterns, using the public dataset SDUMLA-HMT comprising about 3,816 finger vein images from 160 persons. Furthermore, we apply a set of extreme learning machine (ELM) and support vector machine (SVM) classifiers with different kernels. Then, we use combination rules to improve the performance of the system. The experimental results show that the proposed method achieved an accuracy of 87% using the DS and GWAR rules at rank 1, while the DS rule achieved 93% and the GWAR rule 92% at rank 5.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_20-Finger_Vein_Recognition_using_Straight_Line.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Investigation of VoIP Over Mobile WiMAX Networks through OPNET Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100119</link>
        <id>10.14569/IJACSA.2019.0100119</id>
        <doi>10.14569/IJACSA.2019.0100119</doi>
        <lastModDate>2019-01-31T10:29:15.5200000+00:00</lastModDate>
        
        <creator>Ilyas Khudhair Yalwi Dubi</creator>
        
        <creator>Ravie Chandren Muniyandi</creator>
        
        <subject>Voice over Internet Protocol (VoIP); R-score; Worldwide Interoperability of Microwave Access (WiMAX); quality of service (QoS); OPNET 14.5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Worldwide Interoperability for Microwave Access (WiMAX) is regarded as a promising technology for wireless communication because of its advantages, which include high-speed data rates, high coverage and low cost of development and maintenance. WiMAX also supports Voice over Internet Protocol (VoIP), which is expected to replace conventional circuit-switched voice services. VoIP requires accurate design of QoS configurations over WiMAX networks. This paper focuses on studying and analyzing the performance of VoIP over mobile WiMAX networks. Aspects of WiMAX networks and VoIP technology such as mobility, WiMAX service classes, number of nodes and VoIP codecs are studied and analyzed. The WiMAX network is simulated under different scenarios using the simulation program known as OPNET Modeler. Simulation results established that the Unsolicited Grant Service (UGS) service class is the most appropriate for VoIP service because it has the best standard and performance. It was also observed that the G.723.1 codec demonstrated the lowest delay and the highest customer satisfaction rate, while also maintaining the minimum consumption of capacity.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_19-Performance_Investigation_of_VoIP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Fire Fighting Robot (QRob)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100118</link>
        <id>10.14569/IJACSA.2019.0100118</id>
        <doi>10.14569/IJACSA.2019.0100118</doi>
        <lastModDate>2019-01-31T10:29:15.5070000+00:00</lastModDate>
        
        <creator>Mohd Aliff</creator>
        
        <creator>Nor Samsiah Sani</creator>
        
        <creator>MI Yusof</creator>
        
        <creator>Azavitra Zainal</creator>
        
        <subject>Firefighting robot; compact size robot; ultrasonic sensor; flame sensor; remote control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A fire incident is a disaster that can potentially cause loss of life, property damage and permanent disability to the affected victims, who can also suffer from prolonged psychological trauma. Fire fighters are primarily tasked to handle fire incidents, but they are often exposed to high risks when extinguishing fires, especially in hazardous environments such as nuclear power plants, petroleum refineries and gas tanks. They also face other difficulties, particularly when fires occur in narrow and restricted places, as it is necessary to explore the ruins of buildings and obstacles to extinguish the fire and save the victims. Given the high barriers and risks in fire extinguishing operations, technological innovations can be utilized to assist firefighting. Therefore, this paper presents the development of a firefighting robot dubbed QRob that can extinguish fires without the need for fire fighters to be exposed to unnecessary danger. QRob is designed to be more compact in size than other conventional firefighting robots in order to ease entry into small locations for deeper reach when extinguishing fires in narrow spaces. QRob is also equipped with an ultrasonic sensor to prevent it from hitting obstacles and surrounding objects, while a flame sensor is attached for fire detection. As a result, QRob demonstrates the capability to identify fire locations automatically and to extinguish fires remotely at a particular distance. QRob is programmed to find the fire location and stop at a maximum distance of 40 cm from the fire. A human operator can monitor the robot by using a camera which connects to a smartphone or remote devices.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_18-Development_of_Fire_Fighting_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Categorization and Model Weighting Approach for Language Model Adaptation in Statistical Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100117</link>
        <id>10.14569/IJACSA.2019.0100117</id>
        <doi>10.14569/IJACSA.2019.0100117</doi>
        <lastModDate>2019-01-31T10:29:15.4900000+00:00</lastModDate>
        
        <creator>Mohammed AbuHamad</creator>
        
        <creator>Masnizah Mohd</creator>
        
        <subject>Language model adaptation; Statistical machine translation; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A language model encapsulates semantic, syntactic, and pragmatic information about a specific task. Intelligent systems, especially natural language processing systems, can show different results in terms of performance and precision when moving among genres and domains. Therefore, researchers have explored different language model adaptation strategies to overcome this effectiveness issue. There are two main categories of language model adaptation techniques. The first category includes techniques based on data selection, where a task-oriented corpus is extracted and used to train and generate models for specific translations. The second category focuses on developing a weighting criterion to assign the test data to a specific model corpus. The purpose of this research is to introduce a language model adaptation approach that combines both categories (data selection and weighting criterion) of language model adaptation. This approach applies data selection for task-specific translations by dividing the corpus into smaller, topic-related corpora using a clustering process. We investigate the effect of different approaches for clustering the bilingual data on the language model adaptation process in terms of translation quality, using the Europarl corpus WMT07, which includes bilingual data for English-Spanish, English-German, and English-French. A mixture of language models should assign any given data to the right language model to be used in the translation process using a specific weighting criterion. The proposed language model adaptation achieves better translation quality compared to the baseline model in Statistical Machine Translation (SMT).</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_17-Data_Categorization_and_Model_Weighting_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Individual Readiness for Change in the Pre-Implementation Phase of Campus Enterprise Resource Planning (ERP) Project in Malaysian Public University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100116</link>
        <id>10.14569/IJACSA.2019.0100116</id>
        <doi>10.14569/IJACSA.2019.0100116</doi>
        <lastModDate>2019-01-31T10:29:15.4600000+00:00</lastModDate>
        
        <creator>Adiel Harun</creator>
        
        <creator>Zulkefli Mansor</creator>
        
        <subject>Campus ERP; ERP pre-implementation phase; individual readiness for change; IRFC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In recent years, globalization has transformed the landscape and ecosystem of institutions of higher education, demanding that universities transition from legacy systems to Enterprise Resource Planning (ERP) systems to enhance their competitiveness. This shift requires the entire organization to be ready for change as early as the pre-implementation phase to ensure the successful implementation of ERP and to reduce resistance among staff. Past studies related to readiness for change have focused more on the ERP implementation phase for Human Resources, Finance, and Manufacturing. However, studies on the individual readiness for change (IRFC) among public university staff in the pre-implementation phase are limited, especially in Malaysia. Therefore, this study aims to analyze IRFC factors among public university staff by combining theoretical and empirical results. Data were obtained from a questionnaire administered to 117 public university staff who were in the pre-implementation phase of a Campus ERP project. The findings show that appropriateness, management support, change-specific efficacy, and personal valence contribute to the IRFC of public university staff in the pre-implementation phase of the Campus ERP project. In addition, there are 24 items representing these four factors in measuring IRFC. In the future, studies can be conducted from a variety of perspectives, such as those of students, and on other ERP systems, such as Human Resource and Financial Systems, which are also core systems for the university. Additionally, this study paves the way for further study of the implementation and post-implementation phases of the Campus ERP project.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_16-Individual_Readiness_for_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Requirements Prioritization and using Iteration Model for Successful Implementation of Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100115</link>
        <id>10.14569/IJACSA.2019.0100115</id>
        <doi>10.14569/IJACSA.2019.0100115</doi>
        <lastModDate>2019-01-31T10:29:15.4430000+00:00</lastModDate>
        
        <creator>Muhammad Yaseen</creator>
        
        <creator>Noraini Ibrahim</creator>
        
        <creator>Aida Mustapha</creator>
        
        <subject>Requirements prioritization; iteration model; user requirements; spanning trees; directed acyclic graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Requirements prioritization is the ranking of software requirements in a particular order. Prioritized requirements are easy to manage and implement, while un-prioritized requirements are costly and time-consuming, as the total estimated time of a project can be exceeded. Because requirements depend on each other, the total estimated time is exceeded when requirements wait for pre-requisite requirements. The priority of a requirement also increases when other requirements wait for it, but assigning low priority to needed requirements will delay the project. The iteration model is a software engineering (SE) process model in which all requirements are developed not at one time but in phases. Only sufficient information, or sub-requirements of a particular user requirement (UR), may be needed by other user requirements (URs), so implementing only those sufficient requirements in the first phase reduces waiting time. Hence, the total estimated time of the project is also reduced. In this research work, the iteration model approach is used during prioritization to reduce the total estimated time of the project and to assure its timely delivery. From the results, it is concluded that not all sub-requirements of a particular UR receive the same priority; rather, only a few requirements are important and should be given higher priority.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_15-Requirements_Prioritization_and_using_Iteration_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Gabor-Based Recognition for Handwritten Arabic-Indic Digits</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100114</link>
        <id>10.14569/IJACSA.2019.0100114</id>
        <doi>10.14569/IJACSA.2019.0100114</doi>
        <lastModDate>2019-01-31T10:29:15.4100000+00:00</lastModDate>
        
        <creator>Emad Sami Jaha</creator>
        
        <subject>Digit recognition; Gabor filters; OCR; k-nearest neighbor; artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In daily life, the need to automatically digitize paper documentation and recognize textual images persists, with existing and potential room for improvement, especially for languages like Arabic, which, unlike English, has a more complex context and has not been extensively supported by research in this domain. To date, the available online and offline optical character recognition (OCR) systems have utilized functional techniques and achieved high performance mainly on machine-printed data images. However, in the case of handwritten script, the recognition task becomes highly unconstrained and much more challenging. Among a large variety of recognizable multi-lingual characters, handwritten digit recognition is a considerably useful task for different purposes and countless applications. In this research, the focus is on Arabic (known today as Indic or Indian) digit recognition using different proposed Gabor-based approaches in several combinations with different classification methods. The proposed approaches are trained and tested using 91120 digit samples from two independent standard databases (Arabic-Handwritten-Digits and AHDBase), allowing performance variability assessments and comparisons not only between the different combinations of features and classifiers but also between different datasets. The proposed Arabic-Indic digit recognition system achieves high recognition rates reaching up to 99.87%. This research practically shows that one of the proposed approaches, with significantly dimensionality-reduced features, still attains a high recognition rate with low time complexity and can hence be recommended for online digit recognition systems.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_14-Efficient_Gabor_based_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Network Analysis of Twitter to Identify Issuer of Topic using PageRank</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100113</link>
        <id>10.14569/IJACSA.2019.0100113</id>
        <doi>10.14569/IJACSA.2019.0100113</doi>
        <lastModDate>2019-01-31T10:29:15.3970000+00:00</lastModDate>
        
        <creator>Sigit Priyanta</creator>
        
        <creator>I Nyoman Prayana Trisna</creator>
        
        <subject>Twitter ranking; social network analysis; graph-based algorithm; PageRank; graph centrality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Twitter, as the most widespread micro-blogging and social media platform, produces billions of tweets from many users. Each tweet carries its own topic, and a tweet can be retweeted by other users. Social network analysis is needed to reach the original issuer of a topic. A topic-specific Twitter network can be represented to identify the main issuer of a topic with a graph-based ranking algorithm. One such algorithm is PageRank, which ranks each node based on the in-degree of that node, inversely proportional to the out-degree of the other nodes that point to it. In the proposed methodology, a network graph is built from Twitter where users act as nodes and tweet-retweet relations as directed edges. A user who retweets a tweet points to the original user who tweeted it. From the formed graph, each node’s PageRank is calculated, along with other node properties such as centrality, degree, followers, and average time retweeted. The results show that the PageRank score of a node is directly proportional to its closeness centrality and in-degree. However, ranking by PageRank, closeness centrality, and in-degree yields different ranking results.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_13-Social_Network_Analysis_of_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimizing Information Asymmetry Interference using Optimal Channel Assignment Strategy in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100112</link>
        <id>10.14569/IJACSA.2019.0100112</id>
        <doi>10.14569/IJACSA.2019.0100112</doi>
        <lastModDate>2019-01-31T10:29:15.3630000+00:00</lastModDate>
        
        <creator>Gohar Rahman</creator>
        
        <creator>Chuah Chai Wen</creator>
        
        <creator>Sadiq Shah</creator>
        
        <creator>Misbah Daud</creator>
        
        <subject>Wireless mesh network; information asymmetry interference; channel assignment; integer linear programming; coordinated interference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Multi-radio multi-channel wireless mesh networks (MRMC-WMNs) have in recent years been considered the prioritized choice for users due to their low cost and reliability. MRMC-WMNs have recently been deployed widely across the world, but these networks still face interference problems among WMN links. One of the well-known interference issues is information asymmetry (IA). In the case of information asymmetry interference, the source mesh nodes of different mesh links cannot sense each other before transmitting data on the same frequency channel. This lack of coordination leads to data collision and packet loss in data flows and hence degrades network capacity. To maximize MRMC-WMN capacity and minimize IA interference, various schemes for optimal channel assignment have already been proposed. In this research, a novel and near-optimal channel assignment model called the Information Asymmetry Minimization (IAM) model is proposed based on integer linear programming. The proposed IAM model optimally assigns orthogonal or non-overlapping channels from IEEE 802.11b technology to various MRMC-WMN links. Through extensive simulations we show that our proposed model gives a 28.31% improvement in aggregate network capacity over the existing channel assignment model.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_12-Minimizing_Information_Asymmetry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel ABCD Formula to Diagnose and Feature Ranking of Melanoma</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100111</link>
        <id>10.14569/IJACSA.2019.0100111</id>
        <doi>10.14569/IJACSA.2019.0100111</doi>
        <lastModDate>2019-01-31T10:29:15.3330000+00:00</lastModDate>
        
        <creator>Reshma M</creator>
        
        <creator>B. Priestly Shan</creator>
        
        <subject>Karl Pearson’s method; gray level co-occurrence matrix (GLCM); wavelet transform; melanoma; dermoscopy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>A prototype skin cancer detection system for diagnosing melanoma in its early stages is very important. In this paper, a novel technique is proposed for skin malignant growth identification based on feature parameters and color shading histograms, improving the diagnosis method by optimizing the ABCD formula. Features such as shape, statistical, GLCM texture, color, wavelet transform, and texture features are extracted. Once the features are extracted, the most prominent features are identified by assigning ranks. Parameters such as sensitivity, specificity, and accuracy are calculated to check the imperceptibility and robustness of the proposed approach. In addition, a correlation analysis is made between the traditional and the proposed TDS equations using Karl Pearson’s method.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_11-Novel_ABCD_Formula_to_Diagnose.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain: Securing Internet of Medical Things (IoMT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100110</link>
        <id>10.14569/IJACSA.2019.0100110</id>
        <doi>10.14569/IJACSA.2019.0100110</doi>
        <lastModDate>2019-01-31T10:29:15.3170000+00:00</lastModDate>
        
        <creator>Nimra Dilawar</creator>
        
        <creator>Muhammad Rizwan</creator>
        
        <creator>Fahad Ahmad</creator>
        
        <creator>Saima Akram</creator>
        
        <subject>Blockchain; IoMT; peer-to-peer; security; proof of work (PoW)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The internet of medical things (IoMT) is playing a substantial role in improving health and providing medical facilities to people around the globe. With its exponential growth, IoMT is having a huge influence on our everyday lifestyle. Instead of the patient going to the hospital, clinical data is remotely observed and processed in a real-time data system and then transferred to a third party, such as the cloud, for future use. IoMT is a data-intensive domain with a continuously growing rate, which means that we must secure a large amount of sensitive data against tampering. Blockchain is a tamper-proof digital ledger that provides peer-to-peer communication. Blockchain enables communication between non-trusting members without any intermediary. In this paper we first discuss the technology behind Blockchain and then propose an IoMT-based security architecture employing Blockchain to ensure the security of data transmission between connected nodes.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_10-Blockchain_Securing_Internet_of_Medical_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Algorithmic Detection of Artistic Style</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100109</link>
        <id>10.14569/IJACSA.2019.0100109</id>
        <doi>10.14569/IJACSA.2019.0100109</doi>
        <lastModDate>2019-01-31T10:29:15.2870000+00:00</lastModDate>
        
        <creator>Jeremiah W. Johnson</creator>
        
        <subject>Artificial intelligence; neural networks; style transfer; representation learning; deep learning; computer vision; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The artistic style of a painting can be sensed by the average observer, but algorithmically detecting a painting’s style is a difficult problem. We propose a novel method for detecting the artistic style of a painting that is motivated by the neural-style algorithm of Gatys et al. and is competitive with other recent algorithmic approaches to artistic style detection.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_9-Towards_the_Algorithmic_Detection_of_Artistic_Style.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying FireFly Algorithm to Solve the Problem of Balancing Curricula</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100108</link>
        <id>10.14569/IJACSA.2019.0100108</id>
        <doi>10.14569/IJACSA.2019.0100108</doi>
        <lastModDate>2019-01-31T10:29:15.2700000+00:00</lastModDate>
        
        <creator>Jose Miguel Rubio</creator>
        
        <creator>Cristian L. Vidal-Silva</creator>
        
        <creator>Ricardo Soto</creator>
        
        <creator>Erika Madariaga</creator>
        
        <creator>Franklin Johnson</creator>
        
        <creator>Luis Carter</creator>
        
        <subject>Balanced Academic Curriculum; Attraction of Fireflies Meta-heuristic; Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The problem of assigning a balanced academic curriculum to the academic periods of a curriculum, that is, balancing curricula, represents a traditional challenge for every educational institution seeking a match between students and professors. This article proposes a solution to the curricula balancing problem using an optimization technique based on the attraction of fireflies (FA) meta-heuristic. We evaluate our proposal on a set of test and real instances to measure its performance, aiming to deliver a system that simplifies the process of designing a curricular network in higher education institutions. The obtained results show that our solution achieves fairly fast convergence and finds the known optimum in most of the tests carried out.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_8-Applying_FireFly_Algorithm_to_Solve_the_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Deep Transferability for Several Agricultural Classification Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100107</link>
        <id>10.14569/IJACSA.2019.0100107</id>
        <doi>10.14569/IJACSA.2019.0100107</doi>
        <lastModDate>2019-01-31T10:29:15.2570000+00:00</lastModDate>
        
        <creator>Nghia Duong-Trung</creator>
        
        <creator>Luyl-Da Quach</creator>
        
        <creator>Chi-Ngon Nguyen</creator>
        
        <subject>Medicinal Plant Classification; Grain Discoloration Classification; Transfer Learning; Deep Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>This paper addresses several critical agricultural classification problems in Vietnam, e.g. grain discoloration and medicinal plant identification and classification, by combining the idea of knowledge transferability with state-of-the-art deep convolutional neural networks. Grain discoloration disease of rice is an emerging threat to the rice harvest in Vietnam as well as all over the world, and it deserves specific attention as it results in qualitative loss of the harvested crop. Medicinal plants are an important element of indigenous medical systems. These resources are usually regarded as part of a culture’s traditional knowledge. Accurate classification is preliminary to any kind of intervention and recommendation of services. Hence, leveraging technology for the automatic classification of these problems has become essential. Unfortunately, building and training a machine learning model from scratch is next to impossible due to the lack of hardware infrastructure and financial support, which painfully restricts the ability to deliver rapid solutions to meet the demand. For this purpose, the authors have exploited the idea of transfer learning, which is the improvement of learning in a new prediction task through the transferability of knowledge from a related prediction task that has already been learned. By utilizing state-of-the-art deep networks re-trained upon our collected data, our extensive experiments show that the proposed combination performs very well, achieving classification accuracies of 98.7% and 98.5% on our collected datasets within an acceptable training time on a normal laptop. A mobile application is also deployed to facilitate further integrated recommendations and services.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_7-Learning_Deep_Transferability_for_Several_Agricultural_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ant Colony Optimization of Interval Type-2 Fuzzy C-Means with Subtractive Clustering and Multi-Round Sampling for Large Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100106</link>
        <id>10.14569/IJACSA.2019.0100106</id>
        <doi>10.14569/IJACSA.2019.0100106</doi>
        <lastModDate>2019-01-31T10:29:15.2230000+00:00</lastModDate>
        
        <creator>Sana Qaiyum</creator>
        
        <creator>Izzatdin Aziz</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <creator>Adam Kai Leung Wong</creator>
        
        <subject>Interval type-2 fuzzy c-means; ant colony optimization; subtractive clustering; multi-round sampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Fuzzy C-Means (FCM) is widely accepted as a clustering technique. However, it often cannot manage the different uncertainties associated with data. Interval Type-2 Fuzzy C-Means (IT2FCM) is an improvement over FCM since it can model and minimize the effect of uncertainty efficiently. However, for large data, IT2FCM often gets trapped in local optima and fails to find optimal cluster centers. To overcome this challenge, an Ant Colony-based Optimization (ACO) is proposed. Another challenge encountered is determining the number of clusters for clustering. Subtractive clustering (SC) is an efficient technique for estimating the appropriate number of clusters. However, for large datasets the convergence time of ACO and SC becomes high, and thus it becomes challenging to cluster the data and evaluate the correct number of clusters. To meet the challenges of large datasets, a Multi-Round Sampling (MRS) technique is proposed. The IT2FCM-ACO with SC and MRS technique performs clustering on subsets of the data and determines suitable cluster centers and cluster numbers. The obtained clusters are then extended to the entire dataset, eliminating the need for IT2FCM to work on the complete dataset. Thus, the objective of this paper is to optimize IT2FCM using the ACO algorithm and to estimate the optimal number of clusters using SC, while employing MRS to handle the challenges of voluminous data. Results obtained from several clustering evaluation measures show the improved performance of IT2FCM-ACO-MRS compared to IT2FCM-ACO and IT2FCM. Speed-up for different sample sizes of the dataset is computed, and it is found that IT2FCM-ACO-MRS is ≈1–5 times faster than IT2FCM and IT2FCM-ACO for medium datasets, whereas for large datasets it is ≈30–150 times faster.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_6-Ant_Colony_Optimization_of_Interval_Type_2_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Categorical Grammars for Processes Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100105</link>
        <id>10.14569/IJACSA.2019.0100105</id>
        <doi>10.14569/IJACSA.2019.0100105</doi>
        <lastModDate>2019-01-31T10:29:15.2100000+00:00</lastModDate>
        
        <creator>Daniel-Cristian Craciunean</creator>
        
        <subject>Process modeling; metamodel; modeling grammars; categorical grammars; category theory; categorical sketch</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>The diversity and heterogeneity of real-world systems make it impossible to model them naturally with existing modeling languages alone. For this reason, models are often constructed using domain-specific modeling languages as metamodels, which must themselves be specified by meta-metamodels. In this paper we present a new approach, based on category theory, to specify metamodels. A grammar for modeling processes (PN, CSP, EPC, etc.) syntactically defines processes and then presents a set of reaction rules that model the behavior of the system. We will see that the categorical sketch is sufficiently expressive to support the constructions needed to visually define the syntax of a graphical modeling language. Category theory also provides appropriate structures to model the behavioral rules of a real system.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_5-Categorical_Grammars_for_Processes_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cookies and Sessions: A Study of what they are, how they can be Stolen and a Discussion on Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100104</link>
        <id>10.14569/IJACSA.2019.0100104</id>
        <doi>10.14569/IJACSA.2019.0100104</doi>
        <lastModDate>2019-01-31T10:29:15.1770000+00:00</lastModDate>
        
        <creator>Young B. Choi</creator>
        
        <creator>Yin L. Loo</creator>
        
        <creator>Kenneth LaCroix</creator>
        
        <subject>AED; ARP spoofing; cookies; CSP; CSRF; HSTS; man-in-the-middle attack; newton; session hijack; web session; XSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Cookies and sessions are common and vital to a person’s experience on the Internet. Cookies were originally introduced to overcome a memoryless protocol while using a tiny amount of the system’s resources. Cookies make for a cohesive experience when shopping online, enjoying customized content, and even receiving personalized advertisements when casually surfing the Web. However, by design, cookies lack security. Our research begins by giving a background on cookies and sessions. It then introduces session hijacking, and a lab was constructed to test and show how a cookie can be stolen and replayed to gain authenticated access. Finally, the paper presents various countermeasures for common attacks and tools for checking for authentication cookie vulnerabilities.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_4-Cookies_and_Sessions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Many-Objective Cooperative Co-evolutionary Linear Genetic Programming Applied to the Automatic Microcontroller Program Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100103</link>
        <id>10.14569/IJACSA.2019.0100103</id>
        <doi>10.14569/IJACSA.2019.0100103</doi>
        <lastModDate>2019-01-31T10:29:15.1770000+00:00</lastModDate>
        
        <creator>Wildor Ferrel Serruto</creator>
        
        <creator>Luis Alfaro</creator>
        
        <subject>Many-objective optimization; cooperative coevolution; linear genetic programming; program synthesis; microcontroller-based systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>In this article, a methodology for the generation of programs in assembly language for microcontroller-based systems is proposed, applying many-objective cooperative co-evolutionary linear genetic programming based on the decomposition of a program into segments, which evolve simultaneously, collaborating with each other in the process. The starting point for the program generation is a table of input/output examples. Two methods of fitness evaluation are also proposed. When the objective is to find a binary combination, the authors propose fitness evaluation with an exhaustive search for the output of each bit of the binary combination in the genetic program. On the other hand, when the objective is to generate specific variations of the logical values on the pins of the microcontroller’s port, the authors propose calculating the fitness by comparing the timing diagrams generated by the genetic program with the desired timing diagrams. The methodology was tested in the generation of drivers for the 4x4 matrix keyboard and character LCD module devices. The experimental results demonstrate that, for certain tasks, the use of the proposed method allows for the generation of programs capable of competing with programs written by human programmers.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_3-Many_Objective_Cooperative_Co_evolutionary_Linear_Genetic_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linking Context to Data Warehouse Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100102</link>
        <id>10.14569/IJACSA.2019.0100102</id>
        <doi>10.14569/IJACSA.2019.0100102</doi>
        <lastModDate>2019-01-31T10:29:15.1470000+00:00</lastModDate>
        
        <creator>Aadil Bouchra</creator>
        
        <creator>Kzaz Larbi</creator>
        
        <creator>Ait Wakrime Abderrahim</creator>
        
        <creator>Sekkaki Abderrahim</creator>
        
        <subject>Business intelligence; data warehouse; context; data mart</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Data warehouses are now widely used for analysis and decision support purposes. The availability of software solutions that are increasingly user-friendly and easy to manipulate has made it possible to extend their use to end users who are not specialists in the field of business intelligence. The purpose of this article is to provide an approach that assists non-expert users in the data warehouse design process while integrating their contextual data. Our proposal consists of a context model and a comprehensive data warehouse construction method that attaches the context to data warehouses and uses it to produce customized data marts adapted to the decision makers’ context.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_2-Linking_Context_to_Data_Warehouse_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dynamic Partitioning Algorithm for Sip Detection using a Bottle-Attachable IMU Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2019</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2019.0100101</link>
        <id>10.14569/IJACSA.2019.0100101</id>
        <doi>10.14569/IJACSA.2019.0100101</doi>
        <lastModDate>2019-01-31T10:29:15.1000000+00:00</lastModDate>
        
        <creator>Henry Griffith</creator>
        
        <creator>Yan Shi</creator>
        
        <creator>Subir Biswas</creator>
        
        <subject>Hydration management; online activity classification; dynamic time windowing; inertial measurement unit sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 10(1), 2019</description>
        <description>Hydration tracking technologies are a promising tool for improving health outcomes across a variety of populations. As a non-wearable solution that is reconfigurable across containers, bottle-attachable inertial measurement unit (IMU) sensors offer numerous advantages versus alternative tracking approaches. This paper proposes a novel dynamic temporal partitioning and classification algorithm for spotting drinks within the streaming data generated by such sensors. By exploiting the distinguishing characteristics of the container’s estimated inclination during drinking, the algorithm identifies candidate drink intervals for subsequent classification using a Threshold-Merge-Discard framework. The proposed approach is benchmarked against a slight variation of a previously introduced sliding window classifier for a series of experiments replicating the intended use case of the device. The new algorithm is shown to increase the true-positive detection rate by 23.7%, while reducing the number of required classification operations by more than an order of magnitude.</description>
        <description>http://thesai.org/Downloads/Volume10No1/Paper_1-A_Dynamic_Partitioning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Underwater Optical Fish Classification System by Means of Robust Feature Decomposition and Analysis using Multiple Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091286</link>
        <id>10.14569/IJACSA.2018.091286</id>
        <doi>10.14569/IJACSA.2018.091286</doi>
        <lastModDate>2019-01-01T16:33:53.7970000+00:00</lastModDate>
        
        <creator>Mohcine Boudhane</creator>
        
        <creator>Benayad Nsiri</creator>
        
        <creator>Taoufiq Belhoussine Drissi</creator>
        
        <subject>Fish recognition; Optical image analysis; scene understanding; principal component analysis; non-linear artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Live fish recognition and classification play a pivotal role in underwater understanding, because they help scientists monitor the subsea inventory in order to aid fishery management. However, despite technological progress, fish recognition systems still have many limitations on observing fish. Difficulties in visualizing optical images can arise due to external attenuation and the scattering properties of water. Optical underwater imaging systems can also have detection problems, such as changing appearance/orientation of objects and changes in the scene. In this paper, we propose a new object classification system for underwater optical images. The proposed method is based on robust feature extraction from fish patterns. A specific pre-processing method is used in order to improve the recognition accuracy. A mean-shift algorithm is used to segment the images and to isolate objects from the background in the raw images. The training data is processed by Principal Component Analysis (PCA), where we calculate the prior probability inter-features. The decision is given using combined Bayesian Artificial Neural Networks (ANNs). The ANNs calculate the non-linear relationships of the extracted features and the posterior probabilities. These probabilities are verified in the last step in order to keep (or reject) the decision. The comparison of results with state-of-the-art methods shows that the proposed system outperforms most of the solutions in different environmental conditions. The solution simultaneously deals with artificial and real environments. The results obtained in the simulation indicate that the proposed approach provides good precision in distinguishing between different fish species. An average accuracy of 94.6% is achieved using the proposed recognition method.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_86-Underwater_Optical_Fish_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation WPAN of RN-42 Bluetooth based (802.15.1) for Sending the Multi-Sensor LM35 Data Temperature and RaspBerry Pi 3 Model B for the Database and Internet Gateway</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091285</link>
        <id>10.14569/IJACSA.2018.091285</id>
        <doi>10.14569/IJACSA.2018.091285</doi>
        <lastModDate>2019-01-01T16:33:53.7630000+00:00</lastModDate>
        
        <creator>Puput Dani Prasetyo Adi</creator>
        
        <creator>Akio Kitagawa</creator>
        
        <subject>RSSI; Bluetooth; Raspberry pi 3; Internet Gateway</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This research tests multi-sensor data transmission over a Wireless Sensor Network based on the RN-42 Bluetooth module. Two LM35 temperature sensors are installed on an Arduino board, and their readings are processed with the Arduino Integrated Development Environment (IDE) using the C++ language. The Arduino sends all LM35 temperature sensor data from an RN-42 Bluetooth module in slave configuration to an RN-42 module in master configuration. The temperature data is then forwarded to a Raspberry Pi 3 acting as an Internet gateway, sent to the Internet, and stored in a MySQL database. The sensor data can be accessed by other computers on the Internet using PuTTY with the Raspberry Pi 3 IP address 192.168.1.145. Testing also measures the signal power of the Wireless Personal Area Network using the Received Signal Strength Indicator (RSSI), so the Bluetooth signal strength while sending multi-sensor data can be determined accurately.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_85-Performance_Evaluation_WPAN_of_RN_42_Bluetooth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Data Synchronization and Consistency Frameworks for Mobile Cloud Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091284</link>
        <id>10.14569/IJACSA.2018.091284</id>
        <doi>10.14569/IJACSA.2018.091284</doi>
        <lastModDate>2019-01-01T16:33:53.7000000+00:00</lastModDate>
        
        <creator>Yunus Parvej Faniband</creator>
        
        <creator>Iskandar Ishak</creator>
        
        <creator>Fatimah Sidi</creator>
        
        <creator>Marzanah A. Jabar</creator>
        
        <subject>Mobile cloud computing; data consistency; mobile back-end as a service; distributed systems; mobile apps</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Mobile devices are rapidly becoming the predominant means of accessing the Internet due to advances in wireless communication techniques. The development of mobile applications (“apps”) for various platforms is on the rise due to growth in the number of connected devices. Numerous apps rely on cloud infrastructure for data storage and sharing. Apart from advances in wireless communication and device technology, there is a lot of research on special data management techniques that address the limitations of mobile wireless computing to make data access and retrieval appear seamless. This paper surveys the frameworks that support data consistency and synchronization for mobile devices. These frameworks offer a solution for the unreliable connection problem with customized synchronization and replication processes, and hence help in synchronizing with multiple clients. The frameworks are compared on the parameters of consistency and data model (table, objects, or both) support, along with their synchronization protocols and conflict resolution techniques. The review has produced interesting results from the selected studies in areas such as data consistency, handling offline data, data replication, and synchronization strategy. The paper focuses on client-centric data consistency and the offline data synchronization features of the various frameworks.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_84-A_Review_of_Data_Synchronization_and_Consistency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm Eye: A Distributed Autonomous Surveillance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091283</link>
        <id>10.14569/IJACSA.2018.091283</id>
        <doi>10.14569/IJACSA.2018.091283</doi>
        <lastModDate>2018-12-31T17:33:50.9370000+00:00</lastModDate>
        
        <creator>Faisal Khan</creator>
        
        <creator>J&#246;rn Mehnen</creator>
        
        <creator>Tarapong Sreenuch</creator>
        
        <creator>Syed Alam</creator>
        
        <creator>Paul Townsend</creator>
        
        <subject>Swarm intelligence; distributed surveillance system; bio-inspired algorithm; cooperative UAVs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Conventional means such as Global Positioning System (GPS) and satellite imaging are important information sources but provide only limited and static information. In tactical situations, rich 3D images and dynamically self-adapting information are needed to overcome this restriction; this information should be collected where it is available. Swarms are sets of interconnected units that can be arranged and coordinated in any flexible way to execute a specific task in a distributed manner. This paper introduces Swarm Eye – a concept that provides a platform for combining the powerful techniques of swarm intelligence, emergent behaviour and computer graphics in one system. It allows the testing of new image processing concepts for a better and well-informed decision making process. By using advanced collaboratively acting eye units, the system can observe, gather and process images in parallel to provide high-value information. To capture visual data from an autonomous UAV unit, the unit has to be in the right position in order to get the best visual sight. The developed system also provides autonomous adoption of formations for UAVs in a distributed manner in accordance with the tactical situation.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_83-Swarm_Eye_A_Distributed_Autonomous_Surveillance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Assessment of Libyan Government e-Government Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091282</link>
        <id>10.14569/IJACSA.2018.091282</id>
        <doi>10.14569/IJACSA.2018.091282</doi>
        <lastModDate>2018-12-31T17:33:50.9200000+00:00</lastModDate>
        
        <creator>Mohd Zamri Murah</creator>
        
        <creator>Abdullah Ahmed Ali</creator>
        
        <subject>Libya; e-Government; web security assessment; information security; website vulnerability; penetration testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Libya has started transferring traditional government services into e-government services. The e-government initiative involves the use of websites to offer various services such as civil registration, financial transactions and private information handling. Currently, there have not been many studies on the security assessment of Libyan government websites. Therefore, in this paper, we performed a web security assessment of 16 Libyan government websites. The main purpose of this study is to determine the security level of these websites. The web security assessment was done in four phases: Reconnaissance, Enumeration and Scanning, Vulnerability Assessment (web vulnerabilities and SSL encryption evaluation) and Content Analysis (security and privacy policies). The results showed that 9 websites have high- and medium-level vulnerabilities. Only 3 websites have an A SSL rating. Also, only 3 websites have published security and privacy policies. We found 1 highly unsafe website, 6 unsafe websites, 8 somewhat safe websites and 1 safe website. Overall, the study indicated that the Libyan government websites are not adequately secured against major security issues. Since these websites deal with sensitive data, adequate security measures should be implemented to reduce the vulnerabilities and to mitigate future cyber security attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_82-Web_Assessment_of_Libyan_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Model of Cloud based e-Learning for Najran University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091281</link>
        <id>10.14569/IJACSA.2018.091281</id>
        <doi>10.14569/IJACSA.2018.091281</doi>
        <lastModDate>2018-12-31T17:33:50.9030000+00:00</lastModDate>
        
        <creator>Ibrahim Abdulrab Ahmed</creator>
        
        <creator>Zakir Hussain</creator>
        
        <subject>E-learning; cloud computing; E-learning Embracing Cloud Computing Model (ELECCM); SaaS; PaaS; IaaS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Educational institutions are currently keen to use e-learning in their educational environments. This, in turn, supports the learning process and allows learners to access services, learning materials and information whenever they need them. Despite all its advantages, e-learning still suffers from many problems, which are explained clearly in this paper. Meanwhile, cloud computing technology has emerged as a new paradigm in the IT world. With the establishment of cloud computing, numerous services for numerous fields (e.g., education, business, and government) have been introduced that have greatly facilitated e-learning. This paper demonstrates how including the cloud computing paradigm in the e-learning environment helps remedy many of the obstacles introduced by e-learning. By combining cloud computing with the e-learning system, the proposed E-learning Embracing Cloud Computing Model (ELECCM) has been fully developed and implemented with all the essential components needed for its architecture. The study presents, in order, all the procedures run by the proposed system. Then, a fully functional e-learning system based on cloud computing, with low cost and low technical barriers, is demonstrated and explained clearly.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_81-A_Proposed_Model_of_Cloud_based_e_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neighbour-Cooperation Heterogeneity-Aware Traffic Engineering for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091280</link>
        <id>10.14569/IJACSA.2018.091280</id>
        <doi>10.14569/IJACSA.2018.091280</doi>
        <lastModDate>2018-12-31T17:33:50.8730000+00:00</lastModDate>
        
        <creator>Christopher Mumpe</creator>
        
        <creator>Da Tang</creator>
        
        <creator>Muhammad Asad</creator>
        
        <creator>Muhammad Aslam</creator>
        
        <creator>Jing Chen</creator>
        
        <creator>Jinsi Zhu</creator>
        
        <creator>Luyuan Jin</creator>
        
        <subject>Wireless Sensor Networks; energy efficient; clustering; multi-hop; routing protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Extending the operational duration is a major field of interest in Wireless Sensor Networks (WSNs). This lifetime enhancement task challenges researchers to design energy-efficient traffic engineering which minimizes dissipation energy and retains the expected quality of routing protocols. Network lifetime can be prolonged by balancing the energy optimization throughout the network period over which sensors relay data traffic towards the Base Station (BS). Existing techniques of continuous and autonomous reporting sensor nodes offer an opportunity to design sensing and reporting co-operation between sensor nodes. Nearby nodes with similar reading environments can co-operate with each other to avoid transmitting redundant information. In this paper we propose the “Adaptive Inter-Networking Improved (AINI)” multi-hop routing protocol with co-operative sensing of inter- and intra-cluster communication by exploiting the concept of tripling the sensor nodes. The proposed routing protocol improves the reliability of the whole network by improving the reliability of inter-cluster multi-hopping. Sensor nodes use the shortest path to deliver data to the CH using intra-cluster multi-hopping, and these CHs are accountable for forwarding this data to the BS using inter-cluster multi-hop communication. The proposed routing protocol resolves certain issues of WSNs such as network lifetime, network stability and the CH selection technique. To prove the efficiency of our proposed model, we compared the simulation results with existing state-of-the-art routing protocols such as LEACH, LEACH-C, SEP, ESEP and DEEC. Experimental results show the benefits of neighbour cooperation and heterogeneity awareness through the performance of the proposed protocol over existing state-of-the-art routing protocols.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_80-Neighbour_Cooperation_Heterogeneity_Aware_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Android Phone Rooting on User Data Integrity in Mobile Forensics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091279</link>
        <id>10.14569/IJACSA.2018.091279</id>
        <doi>10.14569/IJACSA.2018.091279</doi>
        <lastModDate>2018-12-31T17:33:50.8570000+00:00</lastModDate>
        
        <creator>Tahani Almehmadi</creator>
        
        <creator>Omar Batarfi</creator>
        
        <subject>Android; rooting; data integrity; mobile forensics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Modern cellular phones are potent computing devices, and their capabilities are constantly progressing. The Android operating system (OS) is widely used, and the number of accessible apps for Android OS phones is unprecedented. The increasing capabilities of these phones imply that they have distinctive software, memory designs, and storage mechanisms. Furthermore, they are increasingly being used to commit crimes at an alarming rate. This aspect has heightened the need for digital mobile forensics. Because of the rich user data they store, they may be relevant in forensic investigations, and the data must be extracted. However, as this study will show, most of the available tools for mobile forensics rely greatly on rooted (Android) devices to extract data. Rooting, as some of the selected papers in this research will show, poses a key challenge for forensic analysts: user data integrity. Rooting per se, as will be seen, is disadvantageous. It is possible for forensic analysts to extract useful data from Android phones via rooting, but user data integrity during data acquisition from Android devices is a prime concern. In suggesting an alternative rooting technique for data acquisition from an Android handset, this paper determines whether rooting is forensically sound. This is particularly due to the device modification that rooting often requires, which may violate data integrity.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_79-Impact_of_Android_Phone_Rooting_on_User_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovery of Corrosion Patterns using Symbolic Time Series Representation and N-gram Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091278</link>
        <id>10.14569/IJACSA.2018.091278</id>
        <doi>10.14569/IJACSA.2018.091278</doi>
        <lastModDate>2018-12-31T17:33:50.8400000+00:00</lastModDate>
        
        <creator>Shakirah Mohd Taib</creator>
        
        <creator>Zahiah Akhma Mohd Zabidi</creator>
        
        <creator>Izzatdin Abdul Aziz</creator>
        
        <creator>Farahida Hanim Mousor</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Ainul Akmar Mokhtar</creator>
        
        <subject>Pipelines corrosion analysis; Symbolic Aggregation Approximation (SAX) representation; corrosion patterns; corrosion factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>There are many factors that can contribute to corrosion in the pipeline. Therefore, it is important for decision makers to analyze and identify the main factor of corrosion in order to take appropriate actions. The factor of corrosion can be analyzed using data mining based on historical datasets collected from monitoring sensors. The purpose of this study is to analyze the trends of corroding agents for pipeline corrosion based on symbolic representation of time series corrosion dataset using Symbolic Aggregation Approximation (SAX). The paper presents the analysis and evaluation of the patterns using N-gram model. Text mining using N-gram model is proposed to mine trend changes from corrosion time series dataset that are transformed as symbolic representation. N-gram was applied for the analysis in order to find significant symbolic patterns that are represented as text. Pattern analysis is performed and the results are discussed according to each environmental factor of pipeline corrosion.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_78-Discovery_of_Corrosion_Patterns_using_Symbolic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>scaleBF: A High Scalable Membership Filter using 3D Bloom Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091277</link>
        <id>10.14569/IJACSA.2018.091277</id>
        <doi>10.14569/IJACSA.2018.091277</doi>
        <lastModDate>2018-12-31T17:33:50.8270000+00:00</lastModDate>
        
        <creator>Ripon Patgiri</creator>
        
        <creator>Sabuzima Nayak</creator>
        
        <creator>Samir Kumar Borgohain</creator>
        
        <subject>Bloom filter; membership filter; scalable Bloom filter; duplicate key filter; hashing; data structure; membership query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The Bloom Filter has been an extensively deployed data structure in various applications and research domains since its inception. A Bloom Filter is able to reduce space consumption by an order of magnitude; thus, Bloom Filters are used to keep information about very large-scale data. There are numerous variants of the Bloom Filter available; however, scalability has been a serious dilemma of the Bloom Filter for years. To solve this dilemma, there are also diverse variants of the Bloom Filter. However, time complexity and space complexity then become the key issue again. In this paper, we present a novel Bloom Filter, called scaleBF, that addresses the scalability issue without compromising performance. scaleBF deploys many 3D Bloom Filters to filter the set of items. In this paper, we theoretically compare contemporary Bloom Filters for scalability, and scaleBF outperforms them in terms of time complexity.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_77-scaleBF_A_High_Scalable_Membership_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The SMH Algorithm : An Heuristic for Structural Matrix Computation in the Partial Least Square Path Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091276</link>
        <id>10.14569/IJACSA.2018.091276</id>
        <doi>10.14569/IJACSA.2018.091276</doi>
        <lastModDate>2018-12-31T17:33:50.8100000+00:00</lastModDate>
        
        <creator>Odilon Yapo M Achiepo</creator>
        
        <creator>Edoete Patrice Mensah</creator>
        
        <creator>Edi Kouassi Hilaire</creator>
        
        <subject>Structural equations modeling; PLS algorithm; latent variables; structural matrix; R programming language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Structural equation modeling with latent variables (SEMLV) is a class of statistical methods for modeling the relationships between unobservable concepts called latent variables. In this type of model, each latent variable is described by a number of observable variables called manifest variables. The most widely used method in this category is partial least squares path modeling (PLS Path Modeling). In PLS Path Modeling, the specification of the relationships between the unobservable concepts, known as structural relationships, is the most important element for practical purposes. In general, this specification is obtained manually using a lower triangular binary matrix. To obtain this lower triangular matrix, the modeler must put the latent variables in a very precise order; otherwise, the matrix obtained will not be lower triangular. Indeed, the construction of such a matrix only reflects the cause-and-effect links between the latent variables, so each ordering of the latent variables corresponds to a precise matrix. The real problem is that as the number of studied concepts increases, the search for an order of the latent variables that yields a lower triangular matrix becomes more and more tedious. For five concepts, the modeler must test 5! = 120 possibilities. In practice, it is common to study more than ten variables, making the manual search for an adequate order extremely difficult for the modeler. In this article, we propose a heuristic that makes automatic computation of the structural matrix possible, avoiding the usual manual specifications and the related errors.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_76-The_SMH_Algorithm_an_Heuristic_for_Structural.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NB-IoT Pervasive Communications for Renewable Energy Source Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091275</link>
        <id>10.14569/IJACSA.2018.091275</id>
        <doi>10.14569/IJACSA.2018.091275</doi>
        <lastModDate>2018-12-31T17:33:50.7930000+00:00</lastModDate>
        
        <creator>Farooque Hassan Kumbhar</creator>
        
        <subject>NB-IoT; smart off-grid; RACH; 4G LTE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Renewable sources like solar and wind energy have seen a drastic increase in the market, especially in developing countries where electricity prices are high and both QoS and QoE are at their lowest. In this paper, we propose a smart off-grid paradigm, from sensing with an Internet of Things (IoT) based smart meter for continuous monitoring, to daily reporting to users on their smart devices via IoT middleware. Our proposed smart off-grid system keeps track of the performance and faults of the off-grid equipment. On the communication technology side, we model 3GPP Narrow Band IoT (NB-IoT) collision and success probabilities when grouping smart meter communications to avoid random access channel (RACH) congestion. The proposed smart off-grid communications outperform existing systems, achieving 1.3 to 20 times higher SINR, more than 30 Mbps data rate in 4G, three times higher data rate in NB-IoT, 25% fewer collisions and a 25% higher success rate.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_75-NB_IoT_Pervasive_Communications_for_Renewable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Partial Greedy Algorithm to Extract a Minimum Phonetically-and-Prosodically Rich Sentence Set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091274</link>
        <id>10.14569/IJACSA.2018.091274</id>
        <doi>10.14569/IJACSA.2018.091274</doi>
        <lastModDate>2018-12-31T17:33:50.7630000+00:00</lastModDate>
        
        <creator>Fahmi Alfiansyah</creator>
        
        <creator>Suyanto</creator>
        
        <subject>Automatic speech recognition; minimum sentence set; prosody; speech corpus; triphone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>A phonetically-and-prosodically rich sentence set is important for collecting a read-speech corpus to develop phoneme-based speech recognition. The sentence set is usually searched from a huge text corpus of millions of sentences using optimization methods. One of the methods commonly used for this task is the Least-to-Most Greedy (LTMG) algorithm. It is effective in minimizing the number of phoneme units but, unfortunately, does not distribute their frequencies. In this paper, a new method called the Partial LTMG algorithm (PLTMG) is proposed to search for an optimum set containing triphones and prosodies that are distributed in a near-uniform fashion. Testing on an Indonesian text corpus of ten million sentences crawled from newspaper and novel websites shows that the proposed method is not only capable of minimizing both phoneme units and prosodies but also effective in distributing their frequencies.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_74-Partial_Greedy_Algorithm_to_Extract_a_Minimum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA based Hardware-in-the-Loop Simulation for Digital Control of Power Converters using VHDL-AMS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091273</link>
        <id>10.14569/IJACSA.2018.091273</id>
        <doi>10.14569/IJACSA.2018.091273</doi>
        <lastModDate>2018-12-31T17:33:50.7470000+00:00</lastModDate>
        
        <creator>Abdelouahab Djoubair Benhamadouche</creator>
        
        <creator>Adel Ballouti</creator>
        
        <creator>Farid Djahli</creator>
        
        <creator>Abdeslem Sahli</creator>
        
        <subject>Hardware-in-the-Loop (HIL) simulation; Field-Programmable Gate Array (FPGA); VHDL-AMS; power converter; digital controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This paper presents a new approach to complex system design, allowing rapid, efficient and low-cost prototyping. This approach simplifies design tasks and speeds the path from system modeling to effective hardware implementation. Designing multi-domain systems requires different engineering competences and several tools; our approach provides a unified design environment based on the VHDL-AMS modeling language and an FPGA device within a single design tool. The approach is intended to enhance hardware-in-the-loop (HIL) practices with a more realistic simulation, which improves the verification process in the system design flow. This paper describes the implementation of a software/hardware platform as effective support for our methodology. The feasibility and benefits of the presented approach are demonstrated through a practical case study of power converter control. The obtained results show that the developed method achieves significant speed-up compared with conventional simulation methods, using minimum resources and minimum latency.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_73-FPGA_based_Hardware_in_the_Loop_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Dynamic Programming Problem by Pipeline Implementation on GPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091272</link>
        <id>10.14569/IJACSA.2018.091272</id>
        <doi>10.14569/IJACSA.2018.091272</doi>
        <lastModDate>2018-12-31T17:33:50.7170000+00:00</lastModDate>
        
        <creator>Susumu Matsumae</creator>
        
        <creator>Makoto Miyazaki</creator>
        
        <subject>Dynamic programming; pipeline implementation; GPGPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In this paper, we show the effectiveness of a pipeline implementation of Dynamic Programming (DP) on GPU. As an example, we explain how to solve a matrix-chain multiplication (MCM) problem by DP on GPU. This problem can be sequentially solved in O(n^3) steps by DP, where n is the number of matrices, because its solution table is of size n &#215; n and each element of the table can be computed in O(n) steps. A typical speedup strategy is to parallelize the O(n)-step computation of each element, which can be easily achieved by parallel prefix computation, i.e., an O(log n) step computation with n threads in a tournament fashion. By such a standard parallelizing method, we can solve the MCM problem in O(n^2 log n) steps with n threads. In our approach, we solve the MCM problem on GPU in a pipeline fashion, i.e., we use GPU cores to support pipeline stages so that many elements of the solution table are partially computed in parallel at one time. Our implementation determines one output value per computational step with n threads in a pipeline fashion and constructs the solution table in O(n^2) steps in total with n threads.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_72-Solving_Dynamic_Programming_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded Feature Selection Method for a Network-Level Behavioural Analysis Detection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091271</link>
        <id>10.14569/IJACSA.2018.091271</id>
        <doi>10.14569/IJACSA.2018.091271</doi>
        <lastModDate>2018-12-31T17:33:50.7000000+00:00</lastModDate>
        
        <creator>Mohammad Hafiz Mohd Yusof</creator>
        
        <creator>Mohd Rosmadi Mokhtar</creator>
        
        <creator>Abdullah Mohd. Zain</creator>
        
        <creator>Carsten Maple</creator>
        
        <subject>Feature selection; intrusion detection; behavioural analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Feature selection in network-level behavioural analysis studies is used to represent the network datasets of a monitored space. However, recent studies have shown that current behavioural analysis methods at the network level have several issues. The reduction of millions of instances, disregarded parameters, the removal of similarities among most traffic flows to reduce information noise, an insufficient number of optimised features, and the disregard of instances that are not an entity have been identified as the main issues contributing to the inability to predict zero-day attacks. Therefore, this paper aims to select the optimal features that will improve prediction and behavioural analysis. The training dataset is trained using an embedded feature selection method which incorporates both the filter and wrapper methods, using the correlation coefficient r and a weighted score wj. The selected features are then optimised using Beta distribution functions, β, to find the maximum likelihood, ℓmax. The final selected features are trained by a Bayesian Network classifier and tested on several testing datasets. Finally, this method was compared to several other feature selection methods. The final results show that the proposed selection method consistently outperforms the other methods across the datasets.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_71-Embedded_Feature_Selection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recurrence Relation for Projectile Simulation Project and Game based Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091270</link>
        <id>10.14569/IJACSA.2018.091270</id>
        <doi>10.14569/IJACSA.2018.091270</doi>
        <lastModDate>2018-12-31T17:33:50.6870000+00:00</lastModDate>
        
        <creator>Humera Tariq</creator>
        
        <creator>Tahseen Jilani</creator>
        
        <creator>Ebad Ali</creator>
        
        <creator>Syed Faraz</creator>
        
        <creator>Usman Amjad</creator>
        
        <subject>Projectile; game programming; simulation; angry birds; linear drag; trajectory; impulse</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>A significant gap has been observed in studies of projectile simulation models relating them to camera speed or frames per second. The objective of this paper is to explore and investigate time-driven simulation models that mimic projectile trajectories, with the intent of highlighting the importance of game programming on native platforms. The proposed projectile recurrence relation and the extensive mathematical modeling based on a triangular series are outcomes of project- and game-based learning used in the BSCS-514 Computer Graphics course at the Department of Computer Science (DCS), University of Karachi (UOK). A Box2D replica of the popular 2D mobile game Angry Birds has been created on desktop to give an in-depth mathematical and programming insight into a commercial physics engine and discrete event simulation. Analysis has also been performed to answer certain key questions about a progressive projectile trajectory: (1) At what angle should the projectile be launched? (2) What is the maximum height it will reach? (3) How long will it take to land? (4) What velocity is needed to reach a desired height? (5) Where will it hit? (6) How will it bounce? These questions are important to answer so that projectile motion within engineering, gaming and other CAD applications can be taught and programmed correctly, especially on native platforms like OpenGL. Besides reporting numerical results, a successful projectile-based game has been compiled and reported to validate the significance of project-based learning in classrooms and labs.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_70-Recurrence_Relation_for_Projectile_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of TVET M-Learning Model based on Student Learning Style</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091269</link>
        <id>10.14569/IJACSA.2018.091269</id>
        <doi>10.14569/IJACSA.2018.091269</doi>
        <lastModDate>2018-12-31T17:33:50.6700000+00:00</lastModDate>
        
        <creator>Azmi S</creator>
        
        <creator>Mat Noor S.F</creator>
        
        <creator>Mohamed H</creator>
        
        <subject>M-learning; technical and vocational education and training (TVET); user-centered design (UCD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Mobile learning, or m-learning, is emerging as an innovation in virtual learning that uses mobile devices for teaching and learning, readily accessible anywhere, whether in a classroom or in a group. A preliminary study showed that Technical and Vocational Education and Training (TVET) institutions were still using conventional learning, where students got little exposure to m-learning. This research discusses the development and usability validation of a TVET m-learning model based on user requirements categorized into three main aspects: devices, users and social. The research scope focused on TVET students as the target users. The user-centered design (UCD) method was used in this research through four phases: analyzing the user requirements, designing the model, developing the prototype and evaluating usability. The usability evaluation results showed that the TVET m-learning model is acceptable and suitable as a guideline for m-learning development for TVET students. This TVET m-learning model brings benefits in improving the quality of teaching and learning in TVET institutions, especially public training skills institutions, helping to achieve national goals of becoming a successful developing country and producing skilled workers in the future.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_69-Construction_of_TVET_M_Learning_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Model Architecture for Time-Frequency Images Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091268</link>
        <id>10.14569/IJACSA.2018.091268</id>
        <doi>10.14569/IJACSA.2018.091268</doi>
        <lastModDate>2018-12-31T17:33:50.6400000+00:00</lastModDate>
        
        <creator>Haya Alaskar</creator>
        
        <subject>Convolutional neural network; time-frequency; spectrogram; scalograms; Hilbert-Huang transform; deep learning; sound signals; biomedical signals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Time-frequency analysis is an initial step in the design of invariant representations for any type of time series signal. Time-frequency analysis has been studied and developed widely for decades, but accurate analysis using deep learning neural networks has only been presented in the last few years. In this paper, a comprehensive survey of deep learning neural network architectures for time-frequency analysis is presented, and these networks are compared with previous approaches to time-frequency analysis based on feature extraction and other machine learning algorithms. The results highlight the improvements achieved by deep learning networks, critically review the application of deep learning to time-frequency analysis and provide a holistic overview of current works in the literature. Finally, this work facilitates discussion of research opportunities for deep learning algorithms in future research.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_68-Deep_Learning_based_Model_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Romance Scam Victimization Analysis using Routine Activity Theory Versus Apriori Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091267</link>
        <id>10.14569/IJACSA.2018.091267</id>
        <doi>10.14569/IJACSA.2018.091267</doi>
        <lastModDate>2018-12-31T17:33:50.6230000+00:00</lastModDate>
        
        <creator>Mohd Ezri Saad</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Mohd Zamri Murah</creator>
        
        <subject>Cybercrime; love-scam; routine activity theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The new digital era has led to increasing cases of cyber romance scams in Malaysia. These technologies offer both opportunities and challenges, depending on the purpose of the user. To face this challenge, the key factors that influence susceptibility to cyber romance scams need to be identified. Therefore, this study proposes cyber romance scam models using statistical methods and Apriori techniques to explore the key factors of cyber romance scam victimization, based on real police reports lodged by the victims. The relationships between demographic variables such as age, education level, marital status and monthly income, and independent variables such as the level of computer skills and the level of cyber-fraud awareness, have been investigated. The results of this study were then compared with Routine Activity Theory (RAT). This study found that those between the ages of 25 and 45 were likely to be victims of cyber romance scams in Malaysia. The majority of the victims are educated, holding a diploma. In addition, this research shows that married people are more likely to be victims of cyber romance scams. Non-income individuals are also vulnerable, accounting for 17 percent of the victim respondents. As expected, those who work and have a monthly income of RM2001 and above are more likely to be targeted and become victims of cyber romance scams. The study also shows that those who lack computer skills and have lower levels of cyber-fraud awareness are more likely to be victims of cyber romance scams.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_67-Cyber_Romance_Scam_Victimization_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommender System based on Empirical Study of Geolocated Clustering and Prediction Services for Botnets Cyber-Intelligence in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091266</link>
        <id>10.14569/IJACSA.2018.091266</id>
        <doi>10.14569/IJACSA.2018.091266</doi>
        <lastModDate>2018-12-31T17:33:50.6070000+00:00</lastModDate>
        
        <creator>Nazri Ahmad Zamani</creator>
        
        <creator>Aswami Fadillah Mohd Ariffin</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <subject>Botnets; recommender system; predictive analytics; Big Data; cyber-threat intelligence; K-Means; DBSCAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>A recommender system is becoming a popular platform that predicts ratings or preferences when studying human behaviors and habits. Such predictive systems are widely used, especially in marketing, retailing and product development. The system responds to users' preferences in goods and services and gives recommendations via Machine Learning algorithms deployed specifically for such services. The same kind of recommender system can be built for predicting botnet attacks. Via our Integrated Cyber-Evidence (ICE) Big Data system, we build a recommender system based on collected telemetric botnet network traffic data. The recommender system is trained periodically on cyber-threat-enriched data from the Coordinated Malware Eradication &amp; Remedial Platform system (CMERP), specifically the geolocations and timestamps of the attacks. The machine learning is based on K-Means and DBSCAN clustering. The result is a recommendation of the top potential attacks, ranked for given geolocation coordinates. The recommendation also includes alerts on locations with a high density of certain botnet types.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_66-Recommender_System_based_on_Empirical_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing Auditing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091265</link>
        <id>10.14569/IJACSA.2018.091265</id>
        <doi>10.14569/IJACSA.2018.091265</doi>
        <lastModDate>2018-12-31T17:33:50.5900000+00:00</lastModDate>
        
        <creator>Mohammad Moghadasi</creator>
        
        <creator>Seyed Majid Mousavi</creator>
        
        <creator>G&#225;bor Fazekas</creator>
        
        <subject>Cloud computing; cloud auditing; IT outsourcing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Cloud Computing is a new form of IT system and infrastructure outsourcing, serving as an alternative to traditional IT Outsourcing (ITO). Hence, migration to cloud computing is rapidly growing among organizations. Adopting this technology brings numerous positive aspects, although it imposes various risks and concerns on the organization. An organization that officially deputes its cloud computing services to outside (offshore or inshore) providers effectively outsources its IT functions and processes to external BPO service providers. Therefore, cloud customers must evaluate and manage the IT infrastructure construction and the organization's IT control environment of BPO vendors [25]. Since the cloud is an internet-based technology, cloud auditing is critical and challenging in such an environment. This paper focuses on practices related to auditing processes, methods, techniques, standards and frameworks in cloud computing environments.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_65-Cloud_Computing_Auditing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards end-to-end Continuous Monitoring of Compliance Status Across Multiple Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091264</link>
        <id>10.14569/IJACSA.2018.091264</id>
        <doi>10.14569/IJACSA.2018.091264</doi>
        <lastModDate>2018-12-31T17:33:50.5600000+00:00</lastModDate>
        
        <creator>Danny C Cheng</creator>
        
        <creator>Jod B. Villamarin</creator>
        
        <creator>Gregory Cu</creator>
        
        <creator>Nathalie Rose Lim-Cheng</creator>
        
        <subject>Compliance management; continuous compliance monitoring; ontology mapping; natural language processing; secure systems development lifecycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Monitoring compliance status has historically been difficult for organizations due to the growing number of compliance requirements imposed by various standards, frameworks and regulatory requirements. Existing practices, even with the assistance of security tools and appliances, are mostly manual in nature, as a human expert is still needed to interpret and map the reports generated by various solutions to the actual requirements stated in compliance documents. As the number of requirements increases, this process becomes either too costly or impractical for the organization to manage. Aside from the numerous requirements, many of these documents actually overlap in terms of domains and actual requirements. However, since current tools do not directly map and highlight overlaps or generate detailed gap reports, an organization may perform compliance activities redundantly across multiple requirements, thereby increasing cost as well. In this paper, we present an approach that attempts to provide an end-to-end solution, from compliance document requirements to actual verification and validation of implementation for audit purposes, with the intention of automating compliance status monitoring, enabling continuous compliance monitoring, and reducing the redundant efforts that an organization expends on multiple compliance requirements. By enhancing existing security ontologies to model compliance documents and applying information extraction practices, this research allows overlapping requirements to be identified and gaps to be clearly explained to the organization. Through the use of a secure systems development lifecycle and heuristics, the research also provides a mechanism to automate the technical validation of compliance statuses, thereby allowing continuous monitoring as well as mapping to the enhanced ontology to enable reusability via conceptual mapping of multiple standards and requirements. Practices such as unit testing and continuous integration from the secure systems development life cycle are incorporated to allow flexibility of the automation process while at the same time supporting the mapping between compliance requirements.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_64-Towards_End_to_End_Continuous_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Repetitive Control based on Integral Sliding Mode Control of Matched Uncertain Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091263</link>
        <id>10.14569/IJACSA.2018.091263</id>
        <doi>10.14569/IJACSA.2018.091263</doi>
        <lastModDate>2018-12-31T17:33:50.5430000+00:00</lastModDate>
        
        <creator>Nizar TOUJENI</creator>
        
        <creator>Chaouki MNASRI</creator>
        
        <creator>Moncef GASMI</creator>
        
        <subject>Repetitive control; 2D systems; matched uncertainties; integral sliding mode control; sliding surface; linear matrix inequality; reachability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This paper proposes an integral sliding mode control scheme based on repetitive control for uncertain repetitive processes in the presence of matched uncertainties, external disturbances, and norm-bounded nonlinearities. A new method combining repetitive control and the sliding mode approach is studied in order to exploit the robustness of sliding mode control against matched uncertainties and disturbances and to gradually cancel the tracking error of periodic processes. A sufficient condition for the existence of the sliding mode is derived based on basic repetitive control, and a sliding mode controller is synthesized through linear matrix inequalities, which guarantees stability along the periods of the controlled closed-loop process and ensures the reachability of the sliding surface. Then, an adaptive integral sliding mode controller is synthesized to improve the performance of the proposed control scheme. The effectiveness of the proposed control design schemes is demonstrated on a third-order uncertain mechanical system, and the simulation results using the new approaches show good performance.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_63-Repetitive_Control_based_on_Integral_Sliding_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Image Encryption Approach for Cloud Computing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091262</link>
        <id>10.14569/IJACSA.2018.091262</id>
        <doi>10.14569/IJACSA.2018.091262</doi>
        <lastModDate>2018-12-31T17:33:50.5300000+00:00</lastModDate>
        
        <creator>Saleh ALTOWAIJRI</creator>
        
        <creator>Mohamed AYARI</creator>
        
        <creator>Yamen EL TOUATI</creator>
        
        <subject>Cloud computing; image encryption; fourier transform; random phase function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In this paper, a novel image encryption approach is proposed in the context of cloud computing applications. A fast special transform based on a non-equispaced grid technique is introduced and applied for the first time in image encryption applications. By combining it with the Fractional Fourier Transform (FRFT) instead of the Discrete Fourier Transform (DFT), a good framework for image encryption is opened up to enhance the degree of data security. Both the image encipherment and decipherment processes are analyzed based on a random phase matrix. The time complexity of this novel approach is examined and evaluated. A comparative study with traditional encryption algorithms demonstrates the efficiency and robustness of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_62-A_Novel_Image_Encryption_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Discrete Differential Evolution Algorithm in Solving Quadratic Assignment Problem for best Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091261</link>
        <id>10.14569/IJACSA.2018.091261</id>
        <doi>10.14569/IJACSA.2018.091261</doi>
        <lastModDate>2018-12-31T17:33:50.5130000+00:00</lastModDate>
        
        <creator>Asaad Shakir Hameed</creator>
        
        <creator>Burhanuddin Mohd Aboobaider</creator>
        
        <creator>Ngo Hea Choon</creator>
        
        <creator>Modhi Lafta Mutar</creator>
        
        <subject>Quadratic assignment problem; combinatorial optimization problems; differential evolution algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Combinatorial optimization problems are very important in the branch of optimization and in the field of operations research in mathematics. The quadratic assignment problem (QAP) belongs to the category of facility location problems and is considered one of the most significant and complex combinatorial optimization problems, since it has many applications in the real world. The QAP involves allocating N facilities to N locations with specified distances between the locations and flows between the facilities. A modified discrete differential evolution algorithm is presented in this study, based on the uniform-like crossover (ULX). The proposed algorithm is used to enhance QAP solutions by finding the best assignment of the N facilities to the N locations with the minimal total cost. The criterion employed in this study for evaluating the algorithm was its accuracy, measured by the relative percent deviation (PRD). The proposed algorithm was applied to 41 different instances of the benchmark QAPLIB, and the obtained results indicated that the proposed algorithm was more efficient and accurate compared with Tabu Search, Differential Evolution, and the Genetic algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_61-Improved_Discrete_Differential_Evolution_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring, Detection and Control Techniques of Agriculture Pests and Diseases using Wireless Sensor Network: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091260</link>
        <id>10.14569/IJACSA.2018.091260</id>
        <doi>10.14569/IJACSA.2018.091260</doi>
        <lastModDate>2018-12-31T17:33:50.4830000+00:00</lastModDate>
        
        <creator>S. Azfar</creator>
        
        <creator>A.Nadeem</creator>
        
        <creator>A.B. Alkhodre</creator>
        
        <creator>K.Ahsan</creator>
        
        <creator>N. Mehmood</creator>
        
        <creator>T.Alghmdi</creator>
        
        <creator>Y.Alsaawy</creator>
        
        <subject>Component; pest monitoring and detection; Wireless Sensor Network; pests; agriculture; sensing technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Wireless sensor network technology is widely used in the western world for improving agricultural output. However, in developing countries, the adoption of the technology is very slow due to various factors such as cost and farmers' unfamiliarity with the technology. There are reports in the literature related to precision agriculture, and this paper aims to add to the knowledge on the use of wireless sensor networks (WSNs) for monitoring agricultural fields for pest detection. The literature related to pest monitoring and detection using wireless sensor networking technologies is reviewed. Then, the advanced sensing technologies currently in use for pest detection are described. The existing techniques for pest detection and disease monitoring are evaluated on the basis of some key parameters, such as the type of sensors used, their cost, and processing tools. Finally, the sensing technologies and the possibility of using third-generation sensing technology for monitoring and detection in cotton crops are analyzed.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_60-Monitoring_Detection_and_Control_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Design of an Adaptive Massive MIMO Spherical Antenna Array</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091259</link>
        <id>10.14569/IJACSA.2018.091259</id>
        <doi>10.14569/IJACSA.2018.091259</doi>
        <lastModDate>2018-12-31T17:33:50.4670000+00:00</lastModDate>
        
        <creator>Mouloud Kamali</creator>
        
        <creator>Adnen Cherif</creator>
        
        <subject>Adaptive spherical antenna; beam division multiple access BDMA; massive multiple input multiple output MIMO; millimeter-wave mm-wave</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Massive capacity and connectivity are the main barriers to establishing the Internet of Everything (IoE) and to defining the requirements of the new wireless generation. These needs cannot be met by already deployed phased array antennas in terms of distributed and oriented geometry, dimensions, and design. We propose in the present paper an innovative massive multiple input multiple output (MIMO) spherical array network that draws a new three-dimensional configuration to enhance beam steering and improve bandwidth, total capacity, and scan flexibility. Issues resolved in accordance with 5G requirements include adaptive massive MIMO using millimeter-wave antenna arrays, small cell design, and the definition of the recommended operational frequency considering the norms and directives of the International Telecommunication Union (ITU). The new geometric form of the spherical smart antenna can easily scan the whole 3D space, ensure higher capacity, and reach tens of gigabits per second (Gbps), besides eliminating the energy wastage aspect of Beam Division Multiple Access (BDMA) in base stations. The mathematical design is detailed, and simulation results obtained using MATLAB are presented.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_59-Improved_Design_of_an_Adaptive_Massive_MIMO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Attributes Web Objects Classification based on Class-Attribute Relation Patterns Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091258</link>
        <id>10.14569/IJACSA.2018.091258</id>
        <doi>10.14569/IJACSA.2018.091258</doi>
        <lastModDate>2018-12-31T17:33:50.4500000+00:00</lastModDate>
        
        <creator>Sridhar Mourya</creator>
        
        <creator>P.V.S. Srinivas</creator>
        
        <creator>M. Seetha</creator>
        
        <subject>Classification; multi-attributes; web objects; attribute learning; distinct-class relation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The amount of Web data increases with the proliferation of a variety of Web objects, primarily in the form of text, image, video, and music data files. Each of these published objects has some attributes that help define its class. Because of their diversity, using these attributes to learn and generate patterns for precise classification is very complicated, and learning a set of attributes that clearly discriminates between the classes is very important. Existing attribute learning methods only learn attributes that are closely related to multiple similar objects, but when objects of the same class have different attributes, such methods struggle to learn and classify them. In this paper, a Multi-attributes Web Objects Classification (MA-WOC) approach based on class-attribute relation pattern learning is proposed, which generates class-attribute patterns with their multiple relations. MA-WOC calculates the relationship probabilities between the attributes and the associated class values to determine the degree of association used in constructing the classification patterns. To evaluate the effectiveness of MA-WOC, a comparison with existing classifiers that support multi-attribute datasets is performed, measuring accuracy and Hamming loss; the results show an improvement in precision with a significantly lower Hamming loss.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_58-Multi_Attributes_Web_Objects_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New 3D Objects Retrieval Approach using Multi Agent Systems and Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091257</link>
        <id>10.14569/IJACSA.2018.091257</id>
        <doi>10.14569/IJACSA.2018.091257</doi>
        <lastModDate>2018-12-31T17:33:50.4200000+00:00</lastModDate>
        
        <creator>Basma Sirbal</creator>
        
        <creator>Mohcine Bouksim</creator>
        
        <creator>Khadija Arhid</creator>
        
        <creator>Fatima Rafii Zakani</creator>
        
        <creator>Taoufiq Gadi</creator>
        
        <subject>3D object retrieval; 3D image processing; distributed artificial intelligence; multi-agent systems; artificial neural network (ANN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Content-based 3D object retrieval is a substantial research area that has drawn a significant number of scientists over the last couple of decades. Due to the rapid advancement of technology, 3D models are more and more accessible, yet it is hard to find the models we are searching for. This created the need for efficient and robust retrieval methods allowing the extraction of matches that are relevant from the human perspective. Hence, in this paper we propose a new framework for 3D object retrieval that starts with a pre-treatment consisting of an Artificial Neural Network (ANN) algorithm with a histogram of features, allowing us to extract a representative value for each category of the database. These values are then used by a Multi-Agent System (MAS), which classifies the categories according to their relevance to the query object. This assigns a distinguishing weight to each object in the database, allowing us to extract the right matches. Experiments have demonstrated the robustness of this approach.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_57-New_3D_Objects_Retrieval_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Investigation on a Tool-Based Boilerplate Technique to Improve Software Requirement Specification Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091256</link>
        <id>10.14569/IJACSA.2018.091256</id>
        <doi>10.14569/IJACSA.2018.091256</doi>
        <lastModDate>2018-12-31T17:33:50.3900000+00:00</lastModDate>
        
        <creator>Umairah Anuar</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <creator>Nurul Akmar Emran</creator>
        
        <subject>Empirical investigation; usability; software requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The process of producing a software requirements specification (SRS) is known to be challenging due to the amount of effort, skill, and experience needed to write a good-quality SRS. A tool-based boilerplate technique is introduced to assist in identifying essential requirements for a generic information management system and translating them into standard requirements statements in the SRS. This paper presents an empirical investigation to evaluate the usability of the prototype. The results showed that the tool-based boilerplate technique has high usability, usefulness, and ease of use.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_56-An_Empirical_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster Based Routing Protocols for Wireless Sensor Networks: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091255</link>
        <id>10.14569/IJACSA.2018.091255</id>
        <doi>10.14569/IJACSA.2018.091255</doi>
        <lastModDate>2018-12-31T17:33:50.3730000+00:00</lastModDate>
        
        <creator>Muhammad Nadeem Akhtar</creator>
        
        <creator>Arshad Ali</creator>
        
        <creator>Zulfiqar Ali</creator>
        
        <creator>Muhammad Adnan Hashmi</creator>
        
        <creator>Muhammad Atif</creator>
        
        <subject>Wireless sensor networks; network structure; clustering protocols; energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Energy consumption of nodes in Wireless Sensor Networks (WSNs) is a very critical issue, particularly in scenarios where the energy of nodes cannot be recharged. Optimal routing approaches play a key role in energy utilization, so energy-efficient routing protocols are of great importance in WSNs. Energy-efficient routing protocols in WSNs are categorized into four schemes, namely (i) communication model, (ii) topology-based model, (iii) reliable routing, and (iv) network structure. The network structure category is further divided into flat and cluster-based approaches. This work focuses on a subtype of the “network structure” scheme known as cluster-based routing protocols, which are mainly used in WSNs to reduce energy consumption. This work reviews and provides an overview of prominent cluster-based energy-efficient routing protocols on the basis of some primary performance metrics, such as (i) energy efficiency, (ii) algorithm complexity, (iii) scalability, (iv) data delivery delay, and (v) clustering approach. Finally, this work discusses some of the latest research trends with respect to cluster-based energy-efficient routing protocols in WSNs.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_55-Cluster_based_Routing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectral Efficiency of Massive MIMO Communication Systems with Zero Forcing and Maximum Ratio Beamforming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091254</link>
        <id>10.14569/IJACSA.2018.091254</id>
        <doi>10.14569/IJACSA.2018.091254</doi>
        <lastModDate>2018-12-31T17:33:50.3570000+00:00</lastModDate>
        
        <creator>Asif Ali</creator>
        
        <creator>Imran Ali Qureshi</creator>
        
        <creator>Abdul Latif Memon</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Erum Saba</creator>
        
        <subject>Massive MIMO; Base station; channel capacity; Spectral efficiency; latency; cellular communication; beamforming techniques; throughput; mobile communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Massive multiple-input-multiple-output (MIMO) is a key enabling technology for 5G cellular communication systems. In massive MIMO (M-MIMO) systems, a few hundred antennas are deployed at each base station (BS) to serve a relatively small number of single-antenna terminals in a multiuser setting, providing higher data rates and lower latency. In this paper, an M-MIMO communication system with a large number of BS antennas and zero-forcing beamforming is proposed to improve the spectral efficiency of the system. The zero-forcing beamforming technique is used to overcome the interference that limits the spectral efficiency of M-MIMO communication systems. The simulation results confirm the improvement in the spectral efficiency of the M-MIMO system: the spectral efficiency achieved with zero-forcing beamforming is close to that of the no-interference scenario.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_54-Spectral_Efficiency_of_Massive_MIMO_Communication_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of REST-Based Web Service and Browser Extension for Instagram Spam Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091253</link>
        <id>10.14569/IJACSA.2018.091253</id>
        <doi>10.14569/IJACSA.2018.091253</doi>
        <lastModDate>2018-12-31T17:33:50.3400000+00:00</lastModDate>
        
        <creator>Antonius Rachmat Chrismanto</creator>
        
        <creator>Willy Sudiarto Raharjo</creator>
        
        <creator>Yuan Lukito</creator>
        
        <subject>Instagram; spam comments; REST service; web service testing; browser extension</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In this paper, a REST-based Web Service developed in previous work was integrated with a newly developed browser extension that works in modern browsers (Firefox and Google Chrome) using Greasemonkey. It uses previously collected datasets comprising 17,000 postings and comments from 10 Indonesian actresses whose followers on Instagram number more than 10 million. The performance of the developed web services has been evaluated; the average response time is 1678.133 ms on the AWS platform located in Ohio (US East 2). The proposed work performs as expected: in accuracy tests it reached 63.125% overall, 72% for non-stemmed data, and 70% for stemmed data using 1000 test items, with a classification processing time under 2 s. The new extension works in Firefox and Chrome and can utilize the web services to classify spam comments on Instagram.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_53-Integration_of_REST_based_Web_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Personalized Hybrid Recommendation Procedure for Internet Shopping Support</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091252</link>
        <id>10.14569/IJACSA.2018.091252</id>
        <doi>10.14569/IJACSA.2018.091252</doi>
        <lastModDate>2018-12-31T17:33:50.3100000+00:00</lastModDate>
        
        <creator>R. Shanthi</creator>
        
        <creator>S.P. Rajagopalan</creator>
        
        <subject>Web mining; web search; products; ranking; recommendation system; hybrid approach; e-commerce; online shopping market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Lately, recommender systems (RS) have offered a remarkable breakthrough to users, reducing the time users spend and thereby delivering faster and better results. After purchasing a product, recommendations are made according to the different comments provided by users, so that within a short span the users receive product recommendations based on product utilization and quality. However, this alone does not work well, so to improve it, feedback, comments, and reviews are fetched on the basis of in-depth comments, global likes, and normal keys. Recommendation systems are crucially important for the delivery of personalized products to users, and with personalized product recommendation, users can enjoy a variety of targeted recommendations such as online products. The current paper suggests a hybrid recommendation system (HRS) that makes use of ratings and reviews to recommend products to users. The main objective of this paper is to personalize product recommendations, which have become extremely effective revenue drivers for the online shopping business. Despite the great benefits, deploying personalized recommendation services typically requires the collection of users' personal data for processing and analytics, which undesirably raises users' privacy concerns. To implement product recommendations, the following are incorporated: retrieving personal data, Logical Language based Rule Generation (LLRG), ranking, and the hybrid recommendation system. The stages in the suggested recommendation system include data gathering, preprocessing, filtering, and ranking. The ranking algorithm ranks the products in relation to their sales count, and the top of the list displays the product with the greatest count. In the LLRG strategy, the logic rule generation methodology retrieves useful and mandatory data from reviews, comments, and the product's original state, after which the recommendation follows. The HRS enforces two techniques, namely location-based and heterogeneous-domain-based recommendation.
The recommendations presented to the user are in the context of the user's activities, choices, and conduct, are in accord with the user's personal likings, and aid in decision making. The comparison of outcomes makes it clear that the suggested method is superior to the traditional ones with regard to clarity, effective recommendation, and coverage rate. The hybrid recommendation system is evaluated to yield greater results compared with the rest of the existing recommendation techniques. We also identify some future research directions.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_52-A_Personalized_Hybrid_Recommendation_Procedure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Type-2 Fuzzy in Image Extraction for DICOM Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091251</link>
        <id>10.14569/IJACSA.2018.091251</id>
        <doi>10.14569/IJACSA.2018.091251</doi>
        <lastModDate>2018-12-31T17:33:50.2930000+00:00</lastModDate>
        
        <creator>D Nagarajan</creator>
        
        <creator>M.Lathamaheswari</creator>
        
        <creator>J.Kavikumar</creator>
        
        <creator>Hamzha</creator>
        
        <subject>Feature extraction; MRI image; type-2 fuzzy; MATLAB; triangular norms; mathematical properties</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Extraction of a desired portion of an image plays a very important role in image processing and is called feature extraction. It is mainly concerned with reducing the number of resources required to represent a large set of data, which also reduces the memory space requirement and the data processing power needed. Perfectly optimized feature extraction is an essential process for an effective design. Though many tools are available for extracting features, Type-2 fuzzy logic plays a vital role in producing good results. In this paper, a weighted arithmetic operator is proposed using Yager triangular norms, and the properties of the triangular norms are proved using the proposed operator. The paper also relates these properties to feature extraction. Furthermore, the brain has been extracted from a patient's MRI DICOM image using MATLAB in a Type-2 fuzzy setting.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_51-A_Type_2_Fuzzy_in_Image_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Rule based Decision Support System using Semantic Web Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091250</link>
        <id>10.14569/IJACSA.2018.091250</id>
        <doi>10.14569/IJACSA.2018.091250</doi>
        <lastModDate>2018-12-31T17:33:50.2800000+00:00</lastModDate>
        
        <creator>Jawed Naseem</creator>
        
        <creator>S. M. Aqil Burney</creator>
        
        <creator>Nadeem Mehmood</creator>
        
        <subject>Agriculture information; semantic web; rule based system; intelligent DSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Semantic Web technology is an efficient mechanism to query or infer knowledge on a global scale over the internet by providing logical reasoning through a rule-based system. In this paper, the application of semantic web technology is discussed in the context of agricultural knowledge management and delivery. In agriculture, the adoption of newly developed technology is essential to enhance crop production. However, the timely dissemination of authenticated agricultural information for decision making at larger scale to diversified end users has always been a challenge for several reasons, one of which is storing and delivering agricultural knowledge in machine-readable form. In this paper, a framework based on the semantic web is presented for collecting, storing, and updating agricultural information at a centralized location and delivering knowledge through an intelligent decision support system via the semantic web. The framework utilizes a rule-based system for querying information from the agricultural knowledge base.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_50-An_Efficient_Rule_based_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Applicability of a Social Content Management Framework: A Case Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091249</link>
        <id>10.14569/IJACSA.2018.091249</id>
        <doi>10.14569/IJACSA.2018.091249</doi>
        <lastModDate>2018-12-31T17:33:50.2630000+00:00</lastModDate>
        
        <creator>Wan Azlin Zurita Wan Ahmad</creator>
        
        <creator>Muriati Mukhta</creator>
        
        <creator>Yazrina Yahya</creator>
        
        <subject>Service science; social content; social content management; social media; value co-creation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Social media platforms play an important role in engaging customers. The social content resulting from social media interactions between organisations and customers needs proper management. Therefore, in this work, a framework for social content management is introduced. This framework is developed based on two main concepts: the first comprises the existing concepts present in content management, whilst the second is derived from the theory of service science. This approach is adopted to cater for existing concepts in enterprise content management that are relevant to social content management, and also for the concept of value co-creation, which forms the basis of engagement between organisations and customers. The applicability of the proposed social content management framework needs to be evaluated in order to determine the extent of its applicability in practical situations. Therefore, the main focus of this article is to report the usability of the proposed framework against the practices of the government agencies of Malaysia in managing social content. The evaluation method is based on the score of the system usability scale. The results of the evaluation revealed that the proposed framework is usable and is deemed practical for use in organisations.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_49-Evaluating_the_Applicability_of_a_Social_Content_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shape based Image Retrieval Utilising Colour Moments and Enhanced Boundary Object Detection Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091248</link>
        <id>10.14569/IJACSA.2018.091248</id>
        <doi>10.14569/IJACSA.2018.091248</doi>
        <lastModDate>2018-12-31T17:33:50.2330000+00:00</lastModDate>
        
        <creator>Jehad Q Alnihoud</creator>
        
        <subject>Boundary Based Shape Descriptor (BBSD); Region Based Shape Descriptor (RBSD); CBIR; EBOD; edge detectors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The need for automatic object recognition and retrieval has increased rapidly in the last decade. In content-based image retrieval (CBIR), visual cues such as colour, texture, and shape are the most prominent features used. Texture features are not considered a significant discriminator unless integrated with colour features. Colour-based image retrieval using global and/or local features has proved its ability to retrieve images with a high degree of accuracy. In contrast, shape-based retrieval still suffers from numerous unsolved problems such as precise edge detection, overlapping objects, and the high cost of feature extraction. In this paper, global colour features are utilized to discriminate unrelated images. Furthermore, a novel hybrid approach for image retrieval is proposed, consisting of a combination of a boundary-based shape descriptor (BBSD) and a region-based shape descriptor (RBSD). An enhanced object boundary detection (EBOD) technique is proposed, which uses the Canny edge detector to detect shape boundaries, with morphological opening to remove isolated nodes. Subsequently, morphological closing is utilized to solidify objects within the target image to enhance the representation of shape-based features. Finally, shape features are extracted, and the Euclidean distance measure with different threshold values is adopted to measure the similarity between feature vectors. Five semantic categories of the WANG image database are selected to test the proposed approach. The experimental results are promising when compared with the most common related approaches.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_48-Shape_Based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Risk-Adaptable Access Control in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091247</link>
        <id>10.14569/IJACSA.2018.091247</id>
        <doi>10.14569/IJACSA.2018.091247</doi>
        <lastModDate>2018-12-31T17:33:50.2170000+00:00</lastModDate>
        
        <creator>Salasiah Abdullah</creator>
        
        <creator>Khairul Azmi Abu Bakar</creator>
        
        <subject>Security; privacy; cloud computing; risk-adaptable access control; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The emergence of pervasive cloud computing has supported the transition of physical data and machines into virtualized environments. However, security threats and privacy have been identified as challenges to the widespread adoption of cloud among users. Moreover, user awareness of the importance of cloud computing has increased the need to safeguard the cloud by implementing access control that works in a dynamic environment. Therefore, the emergence of Risk-Adaptable Access Control (RAdAC) as a flexible medium for handling exceptional access requests is a strong countermeasure against security and privacy challenges. However, the rising problem of safeguarding users' privacy in the RAdAC model has not been discussed in depth by other researchers. This paper explores the architecture of cloud computing and describes the existing solutions influencing the adoption of cloud among users. At the same time, the obscurity factor in protecting user privacy is identified within the RAdAC framework. Accordingly, a two-tier authentication scheme in RAdAC is proposed in response to security and privacy challenges, as shown through informal security analysis.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_47-Towards_Secure_Risk_Adaptable_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Schedule Optimization of Ant Colony Optimization to Arrange Scheduling Process at Certainty Variables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091246</link>
        <id>10.14569/IJACSA.2018.091246</id>
        <doi>10.14569/IJACSA.2018.091246</doi>
        <lastModDate>2018-12-31T17:33:50.2000000+00:00</lastModDate>
        
        <creator>Rangga Sidik</creator>
        
        <creator>Mia Fitriawati</creator>
        
        <creator>Syahrul Mauluddin</creator>
        
        <creator>A. Nursikuwagus</creator>
        
        <subject>Ant colony; optimization; scheduling; process; certainty variables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This research aims to obtain an optimal schedule, minimizing collisions, by using certainty variables. Course scheduling is conducted with an ant colony algorithm. The parameters are set as follows: intensity greater than 0, track visibility greater than 0, and ant track evaporation of 0.03. The variables used are the number of lecturers, courses, classes, timeslots, and time. The performance of the ant colony algorithm is measured by how many schedules collide in the same time and class. Based on the executions, with a total of 175 schedules, the average is 9 cycles (exactly 9.2 cycles) and the average processing time is 29.98 seconds. Across nine experiments, scheduling has an average processing time of 19.99 seconds. The ant colony algorithm makes the scheduling process more efficient and predicts schedule collisions.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_46-Schedule_Optimization_of_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BYOD Implementation Factors in Schools: A Case Study in Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091245</link>
        <id>10.14569/IJACSA.2018.091245</id>
        <doi>10.14569/IJACSA.2018.091245</doi>
        <lastModDate>2018-12-31T17:33:50.1870000+00:00</lastModDate>
        
        <creator>Yusri Hakim bin Yeop</creator>
        
        <creator>Zulaiha Ali Othman</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Umi Asma’ Mokhtar</creator>
        
        <creator>Wan Fariza Paizi Fauzi</creator>
        
        <subject>Cybersecurity awareness; cybersecurity education; safety; school cybersecurity policy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The Bring Your Own Device (BYOD) initiative has been implemented widely in developed countries as a mechanism to prepare students for the 4th industrial revolution. Success stories of the initiative vary depending on factors pertaining to its implementation. This study aims to identify the key factors for implementing BYOD in schools for educational purposes. The research employed a mixed-method approach by means of a survey. The data was collected from teachers through questionnaires and from school management through interviews. The respondents included 204 teachers from 5 schools in Putrajaya and Dengkil. Principals, senior assistants, ICT teachers, and technicians from three schools were interviewed; they represented the school management group. A descriptive statistical analysis was conducted using SPSS statistical software. The research identified four key factors for the successful implementation of BYOD in Malaysian schools. Two of the factors are related to the cyber security policy at the schools: enforcing a secure network infrastructure and meeting safety control requirements in the implementation of BYOD. These security-related factors are important for the schools from the very beginning. They can be further categorized according to the implementation stages (pre-, during, and post-adoption); cost allocation, preparation of controls, and training to support BYOD's implementation at schools are the corresponding factors for each stage. On the other hand, the other two key factors are related to the schools' readiness, ensuring the successful implementation of BYOD whereby the school management group is willing and prepared to tackle any arising BYOD-related issues in the future.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_45-BYOD_Implementation_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted Minkowski Similarity Method with CBR for Diagnosing Cardiovascular Disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091244</link>
        <id>10.14569/IJACSA.2018.091244</id>
        <doi>10.14569/IJACSA.2018.091244</doi>
        <lastModDate>2018-12-31T17:33:50.1700000+00:00</lastModDate>
        
        <creator>Edi Faizal</creator>
        
        <creator>Hamdani Hamdani</creator>
        
        <subject>CBR; cardiovascular; similarity; weighted Minkowski</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This study implements Case-Based Reasoning (CBR) for the early diagnosis of cardiovascular disease based on the calculation of feature similarity with old cases. The features used to match old cases with new ones were age, gender, risk factors, and symptoms. The diagnostic process was carried out by entering the case features into the system, which then retrieved cases with features similar to the new case. The level of similarity of each similar case was calculated using the weighted Minkowski method. The case with the highest level of similarity would be adopted as the new case solution. If the similarity value was &lt;0.8, a revision would be conducted by an expert. The test results evaluated by the expert showed that the system was able to perform the diagnosis correctly, with a sensitivity of 100% and a specificity of 83.33%. Meanwhile, with an accuracy of 95.83% and an error rate of 4.17%, this research is relevant enough to be implemented in the medical area.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_44-Weighted_Minkowski_Similarity_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSSI and Public Key Infrastructure based Secure Communication in Autonomous Vehicular Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091243</link>
        <id>10.14569/IJACSA.2018.091243</id>
        <doi>10.14569/IJACSA.2018.091243</doi>
        <lastModDate>2018-12-31T17:33:50.1400000+00:00</lastModDate>
        
        <creator>K Balan</creator>
        
        <creator>L. F. Abdulrazak</creator>
        
        <creator>A. S. Khan</creator>
        
        <creator>A. A. Julaihi</creator>
        
        <creator>S. Tarmizi</creator>
        
        <creator>K. S. Pillay</creator>
        
        <creator>H. Sallehudin</creator>
        
        <subject>Autonomous; vehicular ad hoc networks; public key infrastructure; received signal strength indicator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Autonomous Vehicular Ad hoc Networks (A-VANET) are also known as intelligent transportation systems. A-VANET ensures timely and accurate communications between vehicles and between vehicles and Roadside Units (RSU) to improve road safety and enhance the efficiency of traffic flow. Due to the open wireless boundary and high mobility, A-VANET is vulnerable to several security threats, especially impersonation, denial of service, and pollution attacks. This paper presents a novel Received Signal Strength Indicator (RSSI) based public key infrastructure (PKI) to address the above-mentioned attacks. Each incoming signal is authenticated based on its RSSI value, and a digital signature (obtained using PKI) is utilized for cryptography and communication within the insecure channel. The proposed solution is verified with and without the presence of an attacker by evaluating the packet delivery ratio and packet overhead.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_43-RSSI_and_Public_Key_Infrastructure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation Lecture Scheduling Information Services through the Email Auto-Reply Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091242</link>
        <id>10.14569/IJACSA.2018.091242</id>
        <doi>10.14569/IJACSA.2018.091242</doi>
        <lastModDate>2018-12-31T17:33:50.1400000+00:00</lastModDate>
        
        <creator>Syahrul Mauluddin</creator>
        
        <creator>Leonardi Paris Hasugian</creator>
        
        <creator>Andri Sahata Sitanggang</creator>
        
        <subject>Email; auto-reply; service center; lecture schedule; java programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The information systems study program is one of the largest study programs at Indonesian Computer University (UNIKOM). For scheduling lectures, the program already uses a desktop-based lecture scheduling information system. Lecture schedules that have been created are then announced through various media such as trust online, social media, email, and bulletin boards. With so many media used to deliver lecture schedules, it is expected that lecturers, students, and laboratory staff can obtain schedule information properly. However, this also frequently causes problems in learning activities, such as mistakes about room, time, or class. This usually occurs because the schedule in one of the communication media is not updated when there is a change, so schedule information differs among lecturers, students, and laboratory staff. To overcome these problems, a service center for lecture schedule information is needed to help lecturers, students, and laboratory staff obtain the latest schedules. Accordingly, in this study we propose the design of an email auto-reply application that serves as the service center for lecture schedule information. The research method combines an object-oriented approach with prototype system development. The email auto-reply application is built using the Java programming language with a MySQL database. With this application, lecturers, students, and laboratory staff are expected to obtain the latest lecture schedule easily and from the same source, so that inconsistent schedules among lecturers, students, and laboratory staff no longer occur.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_42-Automation_Lecture_Scheduling_Information_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Analytical Hierarchy Process for U-Learning with Near Field Communication (NFC) Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091241</link>
        <id>10.14569/IJACSA.2018.091241</id>
        <doi>10.14569/IJACSA.2018.091241</doi>
        <lastModDate>2018-12-31T17:33:50.1070000+00:00</lastModDate>
        
        <creator>Huzaifa Marina Osman</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Manuel Serafin Plasencia</creator>
        
        <creator>Azizul Rahman Mohd Shariff</creator>
        
        <creator>Anizah Abu Bakar</creator>
        
        <subject>Ubiquitous learning (U-Learning); virtual learning; multi criteria decision making (MCDM); Analytical Hierarchy Process (AHP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Integration of a current Virtual Learning Environment (VLE) system with Near Field Communication (NFC) technology provides a Ubiquitous Learning Environment (ULE) in education. The utilization of NFC technology in the U-Learning concept will help improve accessibility and encourage collaborative learning methods in the education sector. In this paper, we conduct a study to investigate eleven (11) adoption factors of U-Learning with NFC and rank them using the Analytical Hierarchy Process (AHP), a multi-criteria decision-making (MCDM) approach. We also utilized the Technology Acceptance Model (TAM), Technology Readiness (TR), and the combination of TAM and TR (TRAM) as theoretical frameworks. We identified TRAM as the best tool based on a literature review and utilized the theory to propose an NFC-Enabled Ubiquitous Technology model. The model was used to design a questionnaire for a survey on user acceptance. Results from the online survey were analyzed using AHP with an absolute measurement approach. The AHP results show that optimism is the most influential factor in the adoption of U-Learning using NFC technology, followed by innovativeness and accessibility. Finally, this paper contributes to designing an NFC research model.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_41-Enhanced_Analytical_Hierarchy_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Morphological Features Analysis for Erythrocyte Classification in IDA and Thalassemia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091240</link>
        <id>10.14569/IJACSA.2018.091240</id>
        <doi>10.14569/IJACSA.2018.091240</doi>
        <lastModDate>2018-12-31T17:33:50.0900000+00:00</lastModDate>
        
        <creator>Izyani Ahmad</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Raja Zahratul Azma Raja Sabudin</creator>
        
        <subject>(Iron Deficiency Anemia) IDA; Thalassemia; erythrocyte; morphological features; classifier; information gain; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Iron Deficiency Anemia (IDA) and Thalassemia are common diseases in the world population. In hospital routine, these diseases are recognized based on the level of hemoglobin in the Complete Blood Count (CBC) result. Then, experts conduct a visual examination under the light microscope, which is subject to human error. In this research, we propose a machine learning methodology to classify and characterize erythrocytes related to IDA and Thalassemia. We employ image pre-processing techniques on the blood smear images, such as gamma correction and morphological processing, to enhance edges and reduce image noise. Then, each single erythrocyte image is segmented into background and foreground using Otsu's threshold method. Nine types of erythrocyte (teardrop, echinocyte, elliptocyte, microcytic, hypochromic, target cell, acanthocyte, sickle cell, and normal cell) are considered for classification based on their morphological features. Later, 24 and 31 features from Hu's moments, Zernike moments, Fourier descriptors, and geometrical features are confirmed as potential features for each condition by calculating one-way ANOVA. Next, subsets of features are ranked by their information gain value from maximum to minimum, with each subset separated by increments of five features. We compare the performance of each subset with five selected classifiers, namely logistic regression, radial basis function network, multilayer perceptron, Na&#239;ve Bayes classifier, and Classification and Regression Tree. The best subset from the 31 features provides the highest classification result, with 83.5% accuracy, 83.5% sensitivity, and 83.3% positive predictive value via logistic regression, compared to the other classifiers. This study could be extended using image datasets from other blood-based diseases in future work.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_40-Morphological_Features_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification based on Clustering Model for Predicting Main Outcomes of Breast Cancer using Hyper-Parameters Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091239</link>
        <id>10.14569/IJACSA.2018.091239</id>
        <doi>10.14569/IJACSA.2018.091239</doi>
        <lastModDate>2018-12-31T17:33:50.0600000+00:00</lastModDate>
        
        <creator>Ahmed Attia Said</creator>
        
        <creator>Laila A.Abd-Elmegid</creator>
        
        <creator>Sherif Kholeif</creator>
        
        <creator>Ayman Abdelsamie Gaber</creator>
        
        <subject>Breast cancer; Survival Rate (SR); Disease Free Survival (DFS); recurrence detection; egy; prediction; data mining; classification; clustering; hyper-parameters optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Breast cancer is a deadly disease in women. Predicting breast cancer outcomes is very useful in determining an efficient treatment plan for new breast cancer patients. Predicting breast cancer outcomes (also called prognosis) is done based on previous patients' data, which show the patients' characteristics and how the doctors treated them. In this paper, we propose a new efficient model for predicting the main outcomes of breast cancer: survival rate, disease-free survival, and recurrence detection. The proposed model utilizes two techniques to increase the accuracy of the predictive results. The first technique is applying the classification model on data clusters rather than the full dataset: the data is grouped into clusters according to the similarity of the main characteristics, and the classification model is then applied to these clusters. The second technique is using hyper-parameter optimization (also called hyper-parameter tuning) to increase the accuracy of the classification model: the proposed model searches for a tuple of hyper-parameters that yields the optimal model, minimizing a predefined loss function on the given dataset. The experimental study shows in detail how utilizing these two techniques results in an efficient prediction model producing accurate results.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_39-Classification_based_on_Clustering_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyper Parameter Optimization using Genetic Algorithm on Machine Learning Methods for Online News Popularity Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091238</link>
        <id>10.14569/IJACSA.2018.091238</id>
        <doi>10.14569/IJACSA.2018.091238</doi>
        <lastModDate>2018-12-31T17:33:50.0430000+00:00</lastModDate>
        
        <creator>Ananto Setyo Wicaksono</creator>
        
        <creator>Ahmad Afif Supianto</creator>
        
        <subject>Hyper parameter; genetic algorithm; online news; popularity; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Online news is a medium for people to get new information. There are many online news media, and many people will only read news that interests them. Such news tends to be popular and brings profit to the media owner. That is why it is necessary to predict whether a news article will be popular using prediction methods. Machine learning is one of the popular prediction methods. In order to achieve higher prediction accuracy, the best hyper-parameters of the machine learning methods need to be determined. Determining the hyper-parameters can be time consuming with the grid search method, because grid search tries all possible combinations of hyper-parameters. This is a problem because a quicker prediction of online news popularity is needed. Hence, a genetic algorithm is proposed as the alternative solution, because it can obtain near-optimal hyper-parameters in reasonable time. The implementation results show that the genetic algorithm can find hyper-parameters yielding almost the same result as grid search, but with faster computational time. The reduction in computational time is as follows: Support Vector Machine 425.06%, Random Forest 17%, Adaptive Boosting 651.06%, and K-Nearest Neighbour 396.72%.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_38-Hyper_Parameter_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight and Optimized Multi-Layer Data Hiding using Video Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091237</link>
        <id>10.14569/IJACSA.2018.091237</id>
        <doi>10.14569/IJACSA.2018.091237</doi>
        <lastModDate>2018-12-31T17:33:50.0300000+00:00</lastModDate>
        
        <creator>Samar kamil</creator>
        
        <creator>Masri Ayob</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Zulkifli Ahmad</creator>
        
        <subject>Video steganography; least significant bit technique; optimized data hiding; cloud computing; boron cipher</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The ever-escalating attacks on the internet are due to rapid technological growth. To surmount such challenges, multi-layer security algorithms have been developed by hybridizing cryptography and steganography techniques. Consequently, the overall memory size becomes enormous when hybridizing these techniques. On the other hand, the least significant bit (LSB) and modified LSB replacement approaches introduce variability that can be detected by steganalysis techniques, and most have been found susceptible to attack for numerous reasons. To overcome these issues, this paper proposes a lightweight and optimized data hiding algorithm that consumes less memory, introduces less variability, and is robust against histogram attacks. The proposed steganography system works in two stages. First, data is encrypted using the lightweight BORON cipher, which consumes less memory than conventional algorithms such as 3DES and AES. Second, the encrypted data is hidden in complemented or non-complemented form to obtain minimal variability. The performance of the proposed technique was evaluated in terms of avalanche effect, visual quality, embedding capacity, and peak signal-to-noise ratio (PSNR). The results revealed that the lightweight BORON cipher produces approximately the same avalanche effect as the AES algorithm. Furthermore, the PSNR value shows much improvement in comparison to the GA optimization algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_37-Lightweight_and_Optimized_Multi_Layer_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Methodology on Predicting Visitor’s Behavior based on Web Mining Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091236</link>
        <id>10.14569/IJACSA.2018.091236</id>
        <doi>10.14569/IJACSA.2018.091236</doi>
        <lastModDate>2018-12-31T17:33:50.0130000+00:00</lastModDate>
        
        <creator>Abdel Karim Kassem</creator>
        
        <creator>Bassam Daya</creator>
        
        <creator>Pierre Chauvet</creator>
        
        <subject>Web server; log file; web mining; behavior; pattern; web usage mining; visualizations; vulnerabilities; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The evolution of the internet in recent decades has enlarged websites' reports with records of user activities and behaviors registered in the web server, which can be created automatically in the web access log file. Feedback concerning user activities, performance and any problems that may occur, including the cyber security posture of the web server, is the principal reason for applying web mining techniques. In this paper, we propose a methodology for predicting user behavior based on web mining by creating and executing analysis applications with the Deep Log Analyzer tool, applied to the web server access log of our faculty website. Furthermore, an associated application has been developed that turns the extracted data into dynamic visualization reports (tables, graphs, charts) to help the web system administrator increase the website's effectiveness. We created suitable access patterns that permit identifying interacting user behaviors and interesting usage patterns such as occurring errors, potential visitors, navigation activities, behavioral analysis, diagnostic study, and security alerts for intrusion prevention. Moreover, the obtained results achieved the aim of producing dynamic monitoring by extracting investigation summaries that analyze the discovered access patterns registered in the faculty web server, in order to improve website usability by tracking user behaviors and browsing activities. Our proposed tool also provides security alerts against malicious users by predicting malicious behaviors, taking into consideration all the discovered vulnerabilities and detecting the corrupted links used by abnormal visitors.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_36-A_Proposed_Methodology_on_Predicting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CNNSFR: A Convolutional Neural Network System for Face Detection and Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091235</link>
        <id>10.14569/IJACSA.2018.091235</id>
        <doi>10.14569/IJACSA.2018.091235</doi>
        <lastModDate>2018-12-31T17:33:49.9830000+00:00</lastModDate>
        
        <creator>Lionel Landry SOP DEFFO</creator>
        
        <creator>Elie TAGNE FUTE</creator>
        
        <creator>Emmanuel TONYE</creator>
        
        <subject>Convolutional neural network; face recognition; VGG model; CReLU module; deep learning; architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In recent years, face recognition has become more and more appreciated and is considered one of the most promising applications in the field of image analysis. However, the existing models have a high level of complexity, use a lot of computational resources and need a lot of time to train. That is why this has become a promising field of research, where new methods are being proposed every day to overcome these difficulties. We propose in this paper a convolutional neural network system for face recognition with several contributions. First, we propose a CReLU module; second, we use the module to propose a new architecture based on the VGG deep neural network model; third, we propose a two-stage training strategy improved by a large-margin inner product and a small dataset; and finally, we propose a real-time face recognition system where face detection is done by a multi-cascade convolutional neural network and recognition is done by the proposed deep convolutional neural network.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_35-CNNSFR_A_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heterogeneous Ensemble Pruning based on Bee Algorithm for Mammogram Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091234</link>
        <id>10.14569/IJACSA.2018.091234</id>
        <doi>10.14569/IJACSA.2018.091234</doi>
        <lastModDate>2018-12-31T17:33:49.9670000+00:00</lastModDate>
        
        <creator>Ashwaq Qasem</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Shahnorbanun Sahran</creator>
        
        <creator>Dheeb Albashish</creator>
        
        <creator>Rizuana Iqbal Hussain</creator>
        
        <creator>Shantini Arasaratnam</creator>
        
        <subject>Ensemble learning; ensemble pruning; bee algorithm; mammogram; breast cancer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In mammograms, masses are a primary indication of breast cancer, and it is necessary to classify them as malignant or benign. In this classification task, a Computer Aided Diagnostic (CAD) system using ensemble learning is able to assist radiologists in better diagnosing mammogram images. Ensemble learning consists of two steps: generating multiple base classifiers and then combining them. However, combining all base classifiers in the ensemble model increases computational cost and time. Therefore, ensemble pruning is an important step in ensemble learning for selecting the ensemble's members. Given the huge number of candidate subsets in the ensemble, selecting the proper ensemble subset is desirable. The objective of this paper is to select the optimal ensemble subset using the Bee Algorithm (BA). A pool of different classifier models, such as support vector machine, k-nearest neighbour and linear discriminant analysis classifiers, was trained using different samples of training data and 12 groups of features. Then, using this pool of classifier models, BA was used to exploit and explore random uniform combination subsets of the trained classifiers. As a result, the best subset is selected as the optimal ensemble. The mammogram image dataset used to test the model was collected from Hospital Kuala Lumpur (HKL) and consists of 263 benign and malignant masses. The proposed method gives 77% Area Under the Curve (AUC), 83% accuracy, 93% specificity and 60% sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_34-Heterogeneous_Ensemble_Pruning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Power-Based Indoor Tracking System for Wireless Sensor Networks using ZigBee</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091233</link>
        <id>10.14569/IJACSA.2018.091233</id>
        <doi>10.14569/IJACSA.2018.091233</doi>
        <lastModDate>2018-12-31T17:33:49.9500000+00:00</lastModDate>
        
        <creator>Ahmad H. Mahafzah</creator>
        
        <creator>Hesham Abusaimeh</creator>
        
        <subject>Indoor tracking; ZigBee; wireless personal area network; localization; trilateration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The evolution of wireless and digital communication has given birth to smaller but powerful battery-equipped devices that are easy to maintain and perform the desired tasks. ZigBee is a Wireless Personal Area Network (WPAN) technology used for home or indoor automation and for collecting data for medical research, using low-power digital radios to handle low data rates. In a ZigBee network, sensor nodes are heterogeneously deployed and continuously moving. Detecting and tracking those sensor nodes is challenging in terms of accuracy, calculation time and energy consumption. In this paper, the proposed system uses the Received Signal Strength Indication (RSSI) protocol for localization and trilateration for fetching the exact coordinates of sensor nodes; these protocols help overcome the problem, leading to a prolonged sensor network lifetime and accurate localization.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_33-Optimizing_Power_based_Indoor_Tracking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Game Theory to Handle Missing Data at Prediction Time of ID3 and C4.5 Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091232</link>
        <id>10.14569/IJACSA.2018.091232</id>
        <doi>10.14569/IJACSA.2018.091232</doi>
        <lastModDate>2018-12-31T17:33:49.9370000+00:00</lastModDate>
        
        <creator>Halima Elaidi</creator>
        
        <creator>Zahra Benabbou</creator>
        
        <creator>Hassan Abbar</creator>
        
        <subject>Decision tree; ID3; C4.5; missing data; game theory; Nash equilibrium</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The raw material of our paper is a well-known and commonly used type of supervised algorithm: decision trees. Using training data, they provide useful rules to classify new data sets. But a data set with missing values is always the bane of a data scientist. Decision tree algorithms such as ID3 and C4.5 (the two algorithms with which we work in this paper) represent some of the simplest pattern classification algorithms that can be applied in many domains, but with missing data the task becomes harder, because they may have to deal with unknown values in two major steps: at training time and at prediction time. This paper is concerned with the processing step of databases, using trees already constructed to classify the objects of these data sets. It proposes to overcome the disturbance of missing values using the most famous and central concept of game theory: the Nash equilibrium.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_32-Using_game_theory_to_handle_missing_data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear Intensity-Based Image Registration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091231</link>
        <id>10.14569/IJACSA.2018.091231</id>
        <doi>10.14569/IJACSA.2018.091231</doi>
        <lastModDate>2018-12-31T17:33:49.9200000+00:00</lastModDate>
        
        <creator>Yasmin Mumtaz Ahmad</creator>
        
        <creator>Shahnorbanun Sahran</creator>
        
        <creator>Afzan Adam</creator>
        
        <creator>Syazarina Sharis Osman</creator>
        
        <subject>Linear geometric transformation; image registration; point correspondence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Accurate detection and localization of lesions within the prostate could greatly benefit the planning of surgery and radiation therapy. Although T2-Weighted Imaging (T2WI) Magnetic Resonance Imaging (MRI) provides a wealth of anatomical information, which eases and improves diagnosis and patient treatment, modality-specific image artifacts, such as intensity inhomogeneity, are still evident and can adversely affect quantitative image analysis. Conventional high-resolution T2WI has been restricted in this respect. In contrast, the Apparent Diffusion Coefficient (ADC) map is seen as capable of tackling the T2WI limitation, since a functional assessment of the prostate can provide added value compared to T2WI alone. Likewise, it has been shown that diagnosis using the ADC map combined with T2WI significantly outperforms T2WI alone. Therefore, to obtain high-accuracy detection and localization, a combination of high-resolution anatomic and functional imaging is extremely important in clinical practice. This strategy relies on accurate intensity-based image registration. However, registration of anatomical and functional MR imaging is challenging due to missing correspondences and intensity inhomogeneity. To address this problem, this study investigates applying a linear geometric transform to corresponding points to accurately map the images for precise alignment and accurate detection. The transformation type is crucial for the success of image registration. Its selection is influenced by the type and severity of the geometric differences between corresponding images, the accuracy of the control points between images, and the density and organization of the control points. A transformation type is selected to reflect the geometric differences between two images. Often, selecting a suitable transformation type for image registration is undeniably challenging. To make this selection as effective as possible, an experimental mechanism has to be carried out to determine its suitability. The transformation types considered are affine, similarity, rigid and translation. Additionally, intensity-based image registration is implemented to optimize the mean-square-error similarity metric through a regular step gradient descent optimizer. Accuracy evaluation for each transformation type has been carried out through mean square error (MSE) and peak signal-to-noise ratio (PSNR). The results are presented in chart form together with a comparison table.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_31-Linear_Intensity_based_Image_Registration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormal Region Extraction from MR Brain Images using Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091230</link>
        <id>10.14569/IJACSA.2018.091230</id>
        <doi>10.14569/IJACSA.2018.091230</doi>
        <lastModDate>2018-12-31T17:33:49.8900000+00:00</lastModDate>
        
        <creator>Nikhil Gala</creator>
        
        <creator>Kamalakar Desai</creator>
        
        <subject>Clustering algorithm; hybrid approach; MR brain image segmentation; level set; pillar k-means; segmentation errors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Automatic brain abnormality segmentation from magnetic resonance images is a key task performed by computer-aided algorithms or by manual extraction by a medical expert. Regions are often partitioned based on the similarities of intensities that persist in a particular region. MR brain image segmentation is a critical step that helps to identify the abnormal region. Accurate identification of this abnormal region helps radiologists and surgeons in the surgical process and in research. In this paper we present a hybrid approach that combines clustering-based region and edge-based algorithms for segmenting the abnormal region from MR brain images. The method is an integration of a region-based (pillar K-means) and an edge-based (level set) segmentation algorithm that aims to segment the abnormal region precisely. Experimental results show that the proposed approach attains a segmentation efficiency of 89.2%, mitigating the segmentation errors that were prevalent with region- or edge-based algorithms alone.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_30-Abnormal_Region_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Data Sending Methods for XML and JSON Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091229</link>
        <id>10.14569/IJACSA.2018.091229</id>
        <doi>10.14569/IJACSA.2018.091229</doi>
        <lastModDate>2018-12-31T17:33:49.8730000+00:00</lastModDate>
        
        <creator>Anca-Raluca Breje</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Doina Zmaranda</creator>
        
        <creator>George Pecherle</creator>
        
        <subject>XML; JSON; data model; data transfer; application programming interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Data exchange between different devices and applications has become a necessity nowadays. Data is no longer stored locally on the device, but in the cloud. In order to communicate with the cloud and exchange data, web services are used. To keep the communication consistent across different devices and platforms, the data needs to be formatted using a standard data format, such as JSON or XML. This paper compares both standards and provides an in-depth analysis of their performance. In order to perform the analysis, a web API was built in the PHP framework Laravel, which was then tested with the help of the API development environment Postman for different numbers of transferred items.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_29-Comparative_study_of_data_sending_methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Position-based Selective Neighbors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091228</link>
        <id>10.14569/IJACSA.2018.091228</id>
        <doi>10.14569/IJACSA.2018.091228</doi>
        <lastModDate>2018-12-31T17:33:49.8570000+00:00</lastModDate>
        
        <creator>Sofian Hamad</creator>
        
        <creator>Taoufik Yeferny</creator>
        
        <creator>Salem Belhaj</creator>
        
        <subject>Mobile Ad-hoc network (MANET); routing protocol; energy aware; link life time; AODV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In this paper, we propose a routing protocol, named Position-based Selective Neighbors (PSN), for controlling Route Request (RREQ) propagation in Mobile Ad-hoc Networks (MANETs). PSN relies on the Residual Energy (RE) and Link Lifetime (LLT) factors to select better end-to-end paths between mobile nodes. The key concept is to consider the RE and the LLT to find the best neighboring nodes to forward the received RREQs. A simulation has been performed to compare PSN with other pioneering routing protocols. Experimental results show that PSN performs better than its competitors. Indeed, our protocol increases the network lifetime and reduces the network overhead generated by redundant RREQs, while maintaining good reachability among the mobile nodes.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_28-Position_based_Selective_Neighbors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Fuzzy Analytical Hierarchy Process based on Geometric Mean Method to Prioritize Social Capital Network Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091227</link>
        <id>10.14569/IJACSA.2018.091227</id>
        <doi>10.14569/IJACSA.2018.091227</doi>
        <lastModDate>2018-12-31T17:33:49.8400000+00:00</lastModDate>
        
        <creator>Vy Dang Bich Huynh</creator>
        
        <creator>Phuc Van Nguyen</creator>
        
        <creator>Quyen Le Hoang Thuy To Nguyen</creator>
        
        <creator>Phong Thanh Nguyen</creator>
        
        <subject>Education; human resource; fuzzy analytic hierarchy process; geometric mean; social capital network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Vietnam is striving to develop dynamically and overcome many human resource challenges. As the economy expands, the demand for jobs and human resource development has become increasingly critical. The pressures from reform and international integration are forcing many changes. New graduates must provide proof of their academic capabilities, while also actively developing their social capital to support their job search process. In fact, social capital is an essential capital for personal development as well as professional development of new graduates. This paper applies Fuzzy Analytical Hierarchy Process based on Geometric Mean Methodology to evaluate factors for measuring the social capital of graduates at Ho Chi Minh City Open University in Vietnam. The research results highlight the important role of social networks for graduates, in which a linking network is the most important, a bonding network is the second most important, and a bridging network is the third most important. In addition, the research shows that trust plays an even more important role than networks; and specific belief is more important than general belief.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_27-Application_of_Fuzzy_Analytical_Hierarchy_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Behaviour of Web Users Through Expectation Maximization Algorithm and Mixture of Normal Distributions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091226</link>
        <id>10.14569/IJACSA.2018.091226</id>
        <doi>10.14569/IJACSA.2018.091226</doi>
        <lastModDate>2018-12-31T17:33:49.8270000+00:00</lastModDate>
        
        <creator>R. Sujatha</creator>
        
        <creator>D. Nagarajan</creator>
        
        <creator>P. Saravanan</creator>
        
        <creator>J. Kavikumar</creator>
        
        <subject>EM algorithm; maximum likelihood; mixture normal distribution; web page frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The proposed work analyses user behaviour in web access. Worldwide, web users browse different websites every second. The aim of this paper is to identify user behaviour within a time bound using an Expectation Maximization (EM) algorithm and the maximum likelihood estimates of the model parameters. A novel approach based on a mixture of normal distributions is used to discuss the percentage of users along with web page frequency.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_26-Optimizing_the_Behaviour_of_Web_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Reduction of Overgeneration Errors for Automatic Controlled Indexing with an Application to the Biomedical Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091225</link>
        <id>10.14569/IJACSA.2018.091225</id>
        <doi>10.14569/IJACSA.2018.091225</doi>
        <lastModDate>2018-12-31T17:33:49.8100000+00:00</lastModDate>
        
        <creator>Samassi Adama</creator>
        
        <creator>Brou Konan Marcellin</creator>
        
        <creator>Goor&#233; Bi Tra</creator>
        
        <creator>Prosper Kimou</creator>
        
        <subject>Concept extraction; concept recognition; automatic controlled indexing; controlled vocabulary; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Studies on MetaMap and MaxMatcher have shown that both concept extraction systems suffer from overgeneration problems. Overgeneration occurs when the extraction system mistakenly selects an irrelevant concept. One of the reasons for these errors is that these systems use words to weight the terms of the concepts. In this paper, an Integer Linear Programming model is used to select the optimal subset of extracted concept mentions covering the largest number of important words in the document to be indexed. Then each concept mention in this set is mapped to a unique concept in UMLS using an information retrieval model.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_25-Efficient_Reduction_of_Overgeneration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triangle Shape Feature based on Selected Centroid for Arabic Subword Handwriting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091224</link>
        <id>10.14569/IJACSA.2018.091224</id>
        <doi>10.14569/IJACSA.2018.091224</doi>
        <lastModDate>2018-12-31T17:33:49.7930000+00:00</lastModDate>
        
        <creator>Nur Atikah Arbain</creator>
        
        <creator>Mohd Sanusi Azmi</creator>
        
        <creator>Azah Kamilah Muda</creator>
        
        <creator>Amirul Ramzani Radzid</creator>
        
        <creator>Azrina Tahir</creator>
        
        <subject>Arabic subword; feature extraction; random forest; support vector machine; triangle geometry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Features are normally modelled based on color, texture and shape. However, some features may have different constraints based on the type, style and pattern of an image. Arabic subword handwriting, for example, cannot be recognized by color and is not suitable to be characterized by texture. Therefore, shape-based features are suitable for recognizing Arabic subword handwriting, since each character has various characteristics such as diacritics, thinning and strokes. These characteristics can contribute to a particular shape that is unique and can represent Arabic subword handwriting. Currently, a geometric shape such as the triangle has been adopted to extract useful features based on triangle properties without implicating any triangle form. In order to increase classification accuracy, these properties have been categorized into several zones, where the number of features produced is directly proportional to the number of zones. Nevertheless, shape representation does not implicate triangle properties such as ratio of sides, angle and gradient. By using shape representation, the number of features is reduced. Thus, this paper presents a feature based on triangle shape that can represent the identity of Arabic subword handwriting. The method identifies the three main coordinates of the triangle formed from selected centroids. The AHDB dataset is used as testing data. Support Vector Machine (SVM) and Random Forest (RF) classifiers were used to measure the accuracy of the proposed method using the triangle shape as a feature. The accuracy results show good outcomes of 77.65% (SVM) and 76.43% (RF), which prove that the triangle-shape feature is applicable to Arabic subword handwriting recognition.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_24-Triangle_Shape_Feature_Based_on_Selected_Centroid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Mapping based-on Integration of UAV Platform and Ground Surveying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091223</link>
        <id>10.14569/IJACSA.2018.091223</id>
        <doi>10.14569/IJACSA.2018.091223</doi>
        <lastModDate>2018-12-31T17:33:49.7630000+00:00</lastModDate>
        
        <creator>Muhammad Yazid Abu Sari</creator>
        
        <creator>Abd Wahid Rasib</creator>
        
        <creator>Hamzah Mohd Ali</creator>
        
        <creator>Abdul Razak Mohd Yusoff</creator>
        
        <creator>Muhammad Imzan Hassan</creator>
        
        <creator>Khairulnizam M.Idris</creator>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Rozilawati Dollah</creator>
        
        <subject>3D mapping; UAV platform; ground survey; aerial photo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Development in aerial photogrammetry technology has had a notable impact on the area of large-scale mapping. Nowadays, the unmanned aerial vehicle (UAV) platform has become a significant tool in aerial mapping. Generating 3D mapping using photos acquired from a UAV is preferable due to its low cost and flexible operation. Therefore, this study aims to develop a technique for 3D mapping by integrating UAV aerial photos with a detailed ground survey. The produced 3D mapping has RMSE(x) = 0.279, RMSE(y) = 0.215, and RMSE(z) = 1.341 using 25 randomly selected sample points. Besides that, the results show that the location parameters, i.e. x, y and z, were also positively correlated: t-test(x) = 0.961, t-test(y) = 0.250 and t-test(z) = 1.885, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_23-3D_Mapping_based_on_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WordNet based Implicit Aspect Sentiment Analysis for Crime Identification from Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091222</link>
        <id>10.14569/IJACSA.2018.091222</id>
        <doi>10.14569/IJACSA.2018.091222</doi>
        <lastModDate>2018-12-31T17:33:49.7470000+00:00</lastModDate>
        
        <creator>Hajar El Hannach</creator>
        
        <creator>Mohammed Benkhalifa</creator>
        
        <subject>Implicit aspect based sentiment analysis; information retrieval; machine learning; supervised approaches; frequency model; WordNet; crime detection; hate crime twitter sentiment (HCTS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Crime analysis has become an interesting field that deals with serious public safety issues recognized around the world. Today, investigating Twitter Sentiment Analysis (SA) is a continuing concern within this field. Aspect-based SA, the process by which information is extracted, analyzed, and classified, is applied to tweet datasets for sentiment polarity classification to predict crimes. This paper addresses the aspect identification task for implicit aspects implied by adjectives and verbs in crime tweets. The proposed hybrid model is based on WordNet semantic relations and a term-weighting scheme to enhance the training data for (1) crime Implicit Aspect Sentences Detection (IASD) and (2) crime Implicit Aspect Identification (IAI). The performance is evaluated using three classifiers, Multinomial Na&#239;ve Bayes, Support Vector Machine, and Random Forest, on three Twitter crime datasets. The obtained results demonstrate the effectiveness of the WordNet synonym and definition relations and prove the importance of verbs in training data enhancement for crime IASD and IAI.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_22-WordNet_based_Implicit_Aspect_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Efficient Security Threshold Determination Method for the Enhancement of Interleaved Hop-By-Hop Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091221</link>
        <id>10.14569/IJACSA.2018.091221</id>
        <doi>10.14569/IJACSA.2018.091221</doi>
        <lastModDate>2018-12-31T17:33:49.7330000+00:00</lastModDate>
        
        <creator>Ye Lim Kang</creator>
        
        <creator>Tae Ho Cho</creator>
        
        <subject>Wireless sensor networks; false report injection attack; network security; interleaved hop-by-hop authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Wireless sensor networks allow attackers to inject false reports by compromising sensor nodes, owing to the use of wireless communication, the limited energy resources of the sensor nodes, and deployment in open environments. The forwarding of false reports causes false alarms at the Base Station and consumes the energy of the sensor nodes unnecessarily. As a defense against false report injection attacks, interleaved hop-by-hop authentication was proposed. In interleaved hop-by-hop authentication, the security threshold is a design parameter that determines the number of Message Authentication Codes the sensor nodes must verify; it is set based on the security requirements of the application and the node density of the network. However, interleaved hop-by-hop authentication fails to defend against false report injection attacks when the number of compromised sensor nodes exceeds the security threshold. To solve this problem, in this paper we propose a security scheme that adjusts the security threshold according to the network situation using an evaluation function. The proposed scheme minimizes the energy consumption of the sensor nodes and reinforces security.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_21-Energy_Efficient_Security_Threshold_Determination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FTL Algorithm using Warm Block Technique for QLC+SLC Hybrid NAND Flash Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091220</link>
        <id>10.14569/IJACSA.2018.091220</id>
        <doi>10.14569/IJACSA.2018.091220</doi>
        <lastModDate>2018-12-31T17:33:49.7170000+00:00</lastModDate>
        
        <creator>Wanil Kim</creator>
        
        <creator>Seok-Bin Seo</creator>
        
        <creator>Jin-Young Kim</creator>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Quad level cell; single level cell; composed flash memory; flash translation layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>When the existing flash translation layer technique is applied to a hybrid NAND flash storage device composed of Quad Level Cells and Single Level Cells, the characteristics of the semiconductor chips are not taken into consideration: data are stored indiscriminately, and thus performance and stability are not guaranteed. Therefore, this study proposes a flash translation layer algorithm using the warm block technique in a NAND flash storage device that combines a large-capacity Quad Level Cell and a high-performance Single Level Cell. The warm block technique avoids overloading the read/write/erase operations of the Quad Level Cell flash memory by efficiently placing hot data that are frequently updated on the longer-lived Single Level Cell. Experiments confirmed that the warm block technique extends the lifetime and improves the performance of hybrid NAND flash memory.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_20-FTL_Algorithm_using_Warm_Block_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Degree to which Private Education Students at Princess Nourah Bint Abdulrahman University have Access to Soft Skills from their Point of View and Educational Body</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091219</link>
        <id>10.14569/IJACSA.2018.091219</id>
        <doi>10.14569/IJACSA.2018.091219</doi>
        <lastModDate>2018-12-31T17:33:49.7000000+00:00</lastModDate>
        
        <creator>Saeb Kamel Allala</creator>
        
        <creator>Ola Mohy Aldeen Abusukkar</creator>
        
        <subject>Soft skills; special education students; faculty members; Princess Nourah Bint Abdulrahman University</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The study aimed to identify the degree to which special education students in the Department of Special Education, Faculty of Education, Princess Nourah University possess soft skills, from their own point of view and from that of the educational body, and its relation to some variables (level of study, specialization, teaching experience). The first sample consisted of (26) faculty members in the Department of Special Education, and the second consisted of (287) female students of the Department of Special Education at different levels and specializations; the data were analyzed using the Statistical Package for the Social Sciences (SPSS). The results of the statistical analysis indicated that the students&#39; total score for possession of soft skills, according to their own point of view, was low, except for some items for which the estimate was high. The degree of possession of soft skills from the point of view of the faculty members was also low. The results also indicated differences between the students attributable to specialization and level of study, favoring the higher level of study, and differences according to the faculty members&#39; view favoring higher experience, while the results did not indicate a difference attributed to R faculty members. The study recommended increasing attention to soft skills for special education students in particular, and for female students in general, by including these skills in the study courses and through various student activities.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_19-The_Degree_to_which_Private_Education_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Surveillance System using Background Subtraction Technique in IoT Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091218</link>
        <id>10.14569/IJACSA.2018.091218</id>
        <doi>10.14569/IJACSA.2018.091218</doi>
        <lastModDate>2018-12-31T17:33:49.6870000+00:00</lastModDate>
        
        <creator>Norharyati binti Harum</creator>
        
        <creator>Mohanad Faeq Ali</creator>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Syahrulnaziah Anawar</creator>
        
        <subject>Internet of things; raspberry pi; motion detection; home security system; surveillance system; IR4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>This paper presents the development of a security system based on Internet-of-Things (IoT) technology, using an IoT device, the Raspberry Pi. In the developed surveillance system, a camera works as a sensor to detect motion and automatically captures video of the area where the motion is detected. Motion is detected by an image processing technique, background subtraction, which is applied by comparing two different images captured with the Pi NoIR camera. The system can be controlled from anywhere using the Telegram application, and users receive an alert message with video through the application. Once a suspicious object is detected, the user can also play a siren from anywhere and can access the images and videos using the Telegram application; this can frighten a thief if a crime is suspected at home or in the office. Users can also activate and deactivate the system from anywhere at any time using Telegram. Functionality tests have been performed to ensure that the developed product works properly. In addition, tests were performed to identify a suitable video length to transmit to the user and an adequate location for the security camera, in order to minimize false detections as well as false alerts. The project is IoT-based and significantly in line with the Industrial Revolution 4.0, supporting the infrastructure of Cyber-Physical Systems.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_18-Smart_Surveillance_System_using_Background.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation of Combinatorial Interaction Test (CIT) Case Generation and Execution for Requirements based Testing (RBT) of Complex Avionics Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091217</link>
        <id>10.14569/IJACSA.2018.091217</id>
        <doi>10.14569/IJACSA.2018.091217</doi>
        <lastModDate>2018-12-31T17:33:49.6530000+00:00</lastModDate>
        
        <creator>P Venkata Sarla</creator>
        
        <creator>Balakrishnan Ramadoss</creator>
        
        <subject>Avionics; combinatorial interaction testing; requirement specifications; requirements based testing; safety critical; validation; verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In the field of avionics, most software systems are either safety critical or mission critical. These systems are developed to high quality standards, strictly following the relevant guidelines and procedures. Due to the high criticality of the systems, it is mandatory that their verification and validation be carried out with the utmost rigor, and only then is a system cleared for flight trials. The verification and validation activities need to be very exhaustive and hence take a considerable amount of time in the software development lifecycle. This paper describes an innovative approach to the automation of Combinatorial Interaction Test case generation and execution for Requirements Based Testing of complex avionics systems, achieving test adequacy in a highly time-efficient and cost-efficient manner.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_17-Automation_of_Combinatorial_Interaction_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mapping Approach for Fully Virtual Data Integration System Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091216</link>
        <id>10.14569/IJACSA.2018.091216</id>
        <doi>10.14569/IJACSA.2018.091216</doi>
        <lastModDate>2018-12-31T17:33:49.6400000+00:00</lastModDate>
        
        <creator>Ali Z El Qutaany</creator>
        
        <creator>Osman M. Hegazi</creator>
        
        <creator>Ali H. El Bastawissy</creator>
        
        <subject>Data integration; inconsistency detection; inconsistency resolution; mapping; virtual data integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Nowadays, organizations cannot satisfy their information needs from a single data source, and the presence of multiple data sources across the organization fuels the need for data integration. Users of a data integration system pose queries in terms of an integrated schema and expect accurate, unambiguous, and complete answers. The data integration system is therefore not limited to retrieving the answers to queries from the sources; it must also detect and resolve the data quality problems that arise from the integration process. The most crucial component in any data integration system is the set of mappings constructed between the data sources and the integrated schema. In this paper, a new mapping approach is proposed that maps not only the elements of the integrated schema, as done by existing approaches, but also the other elements required to detect and resolve duplicates. It provides a means to facilitate future extensibility and changes to both the sources and the integrated schema. The proposed approach links the fundamental components required to provide accurate and unambiguous answers to the users’ queries from the integration system.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_16-A_Mapping_Approach_for_Fully_Virtual_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptography using Random Rc4 Stream Cipher on SMS for Android-Based Smartphones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091214</link>
        <id>10.14569/IJACSA.2018.091214</id>
        <doi>10.14569/IJACSA.2018.091214</doi>
        <lastModDate>2018-12-31T17:33:49.6070000+00:00</lastModDate>
        
        <creator>Rifki Rifki</creator>
        
        <creator>Anindita Septiarini</creator>
        
        <creator>Heliza Rahmania Hatta</creator>
        
        <subject>Cryptography; SMS security; RC4 stream cipher; random initial state; correlation value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Messages sent using the default Short Message Service (SMS) application have to pass through the SMS Center (SMSC), which records the communication between the sender and recipient. Therefore, message security is not guaranteed, because a message may be read by irresponsible people. This research proposes the RC4 stream cipher method for securing sent SMS. However, RC4 has limitations in its Key Scheduling Algorithm (KSA) and Pseudo Random Generation Algorithm (PRGA) phases. Therefore, this research developed RC4 with a random initial state to increase the randomness of the keystream. This SMS cryptography method applies encryption to the sent SMS followed by decryption of the received SMS. The performance of the proposed method is evaluated based on the encryption and decryption times as well as the average correlation value. The timing results show that the number of SMS characters sent affects the encryption and decryption times. Meanwhile, the best correlation value achieved was 0.00482.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_14-Cryptography_using_Random_RC4_Stream_Cipher.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of the Handover and Quality of Service on Software Defined Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091215</link>
        <id>10.14569/IJACSA.2018.091215</id>
        <doi>10.14569/IJACSA.2018.091215</doi>
        <lastModDate>2018-12-31T17:33:49.6070000+00:00</lastModDate>
        
        <creator>Fatima Laassiri</creator>
        
        <creator>Mohamed Moughit</creator>
        
        <creator>Noureddine Idboufker</creator>
        
        <subject>SDN; Wi-Fi; QoS; OpenFlow protocol; handover; SDN controller; OpenFlow switch</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Wireless Fidelity (WiFi) is the commercial name given to the IEEE 802.11b and 802.11g standards by the WiFi Alliance, formerly known as WECA, an industry group with more than 200 member companies dedicated to supporting the growth of wireless LANs. This standard is currently one of the most widely used in the world; the theoretical data rates are 11 Mb/s for 802.11b and 54 Mb/s for 802.11g. This article presents an improvement of handover performance and of the quality of service (QoS) parameters, namely end-to-end delay, latency, jitter, lost packets, and Mean Opinion Score (MOS), in Wi-Fi networks, simulated with OMNeT++ 4.6, by implementing a new algorithm at the level of the SDN controller that allows handover management without breaking the connection while respecting the priority of each traffic class. This work is based on intra-Wi-Fi mobility combined with level-3 macro mobility using MIPv6; it also exploits the SIP Voice over IP protocol and implements the SDN rules on the OpenFlow protocol.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_15-Improvement_the_Handover_and_Quality_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improvement of Performance Handover in Worldwide Interoperability for Microwave Access using Software Defined Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091213</link>
        <id>10.14569/IJACSA.2018.091213</id>
        <doi>10.14569/IJACSA.2018.091213</doi>
        <lastModDate>2018-12-31T17:33:49.5900000+00:00</lastModDate>
        
        <creator>Fatima Laassiri</creator>
        
        <creator>Mohamed Moughit</creator>
        
        <creator>Noureddine Idboufker</creator>
        
        <subject>WiMAX; SDN; QoS; handover; openflow; OMNeT 4.6 ++</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In common usage, WiMAX designates a set of standards and techniques from the world of Wireless Metropolitan Area Networks (WMAN). The IEEE 802.16 standard, or WiMAX, allows the wireless connection of companies or individuals over long distances at high speed. WiMAX provides an appropriate response for some rural or hard-to-reach areas that today lack access to broadband Internet for cost reasons. This technology aims to introduce a solution complementary to Digital Subscriber Line (DSL) and cable networks on the one hand, and to interconnect WiFi hotspots on the other. WiMAX is mainly based on a star topology, although a mesh topology is possible, and communication can take place in Line of Sight (LOS) or Non-Line of Sight (NLOS) conditions. Software Defined Networking (SDN) is a new network paradigm used to simplify network management and reduce the complexity of network technology. This article presents a simulation implemented under OMNeT++ 4.6 to improve handover performance and QoS (end-to-end delay, latency, jitter, MOS, and lost packets) by implementing an algorithm in the SDN controller. The simulation is tested on a WiMAX architecture, and results were collected from two scenarios, with and without the SDN controller, to show that this algorithm performs better and guarantees better QoS during handover.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_13-An_Improvement_of_Performance_Handover.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Correlation based Approach to Differentiate between an Event and Noise in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091212</link>
        <id>10.14569/IJACSA.2018.091212</id>
        <doi>10.14569/IJACSA.2018.091212</doi>
        <lastModDate>2018-12-31T17:33:49.5770000+00:00</lastModDate>
        
        <creator>Dina ElMenshawy</creator>
        
        <creator>Waleed Helmy</creator>
        
        <subject>Anomaly detection; event; IoT; noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Internet of Things (IoT) is considered a huge enhancement in the field of information technology. IoT is the integration of physical devices embedded with electronics, software, sensors, and connectivity that allow them to interact and exchange data. IoT is still in its infancy, so it faces many obstacles ranging from data management to security concerns. Regarding data management, sensors generate huge amounts of data that need to be handled efficiently for successful deployment of IoT applications. Detecting data anomalies is a great challenge in the IoT environment because the notion of anomaly in IoT is domain dependent. Also, the IoT environment is susceptible to a high noise rate. There are two main sources of anomalies, namely events and noise. An event refers to a certain incident which occurred at a specific time, whereas noise denotes an error. Both events and noise are considered anomalies, as they deviate from the remaining data points, but they have two different interpretations. To the best of our knowledge, no existing research addresses the question of how to differentiate between an event and noise in IoT. As a result, in this paper an algorithm is proposed to differentiate between an event and noise in the IoT environment. First, anomalies are detected using the exponential moving average technique; then the proposed algorithm is applied to differentiate between an event and noise. The algorithm uses the sensors’ values and the existence of correlation between sensors to decide whether an anomaly is an event or noise. Moreover, the algorithm was applied to a real traffic dataset of 5000 records to evaluate its effectiveness, and the experiments showed promising results.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_12-A_Correlation_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Utilization of Feature based Viola-Jones Method for Face Detection in Invariant Rotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091211</link>
        <id>10.14569/IJACSA.2018.091211</id>
        <doi>10.14569/IJACSA.2018.091211</doi>
        <lastModDate>2018-12-31T17:33:49.5430000+00:00</lastModDate>
        
        <creator>Tioh Keat Soon</creator>
        
        <creator>Abd Samad Hasan Basari</creator>
        
        <creator>Nuzulha Khilwani Ibrahim</creator>
        
        <creator>Burairah Hussin</creator>
        
        <creator>Ariff Idris</creator>
        
        <creator>Noorayisahbe Mohd Yaacob</creator>
        
        <creator>Mustafa Almahdi Algaet</creator>
        
        <creator>Norazira A. Jalil</creator>
        
        <subject>Face detection; V-J face detection; unconstrained images; bicubic interpolation; SIFT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Faces in an image consist of complex structures for object detection. The components of a face, which include the eyes, nose, and mouth, differ from those of ordinary objects, making face detection a complex process. The challenges posed by face detection in unconstrained images include background variation, pose variation, facial expression, occlusion, and noise. Current Viola-Jones (V-J) face detection is limited to only 45 degrees of in-plane rotation. This paper proposes a technique for V-J face detection in unconstrained images: V-J face detection with invariant rotation. The technique rotates the given image in steps of 30 degrees up to 360 degrees and applies V-J face detection at each step, thereby covering more angles of a rotated face in unconstrained images. This rotation-invariant detection aids in detecting rotated faces in images. The images used for testing and evaluation in this paper are from the CMU dataset, with 12 rotations applied to each image; therefore, 12 test patterns are generated. These images have been measured in terms of correct detection rate, true positives, and false positives. This paper shows that the proposed V-J face detection technique for unconstrained images is able to detect rotated faces with a high correct detection rate. In summary, the proposed rotation variation enhances the current V-J face detection method and further increases the accuracy of face detection in unconstrained images.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_11-The_Utilization_of_Feature_based_Viola_Jones_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of the Vertical Handover Decision and Quality of Service in Heterogeneous Wireless Networks using Software Defined Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091210</link>
        <id>10.14569/IJACSA.2018.091210</id>
        <doi>10.14569/IJACSA.2018.091210</doi>
        <lastModDate>2018-12-31T17:33:49.5300000+00:00</lastModDate>
        
        <creator>Fatima Laassiri</creator>
        
        <creator>Mohamed Moughit</creator>
        
        <creator>Noureddine Idboufker</creator>
        
        <subject>Heterogeneous network; Vertical Handover; WiMAX; WiFi; IEEE 802.16; IEEE 802.11; OMNeT 4.6</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>The development of wireless networks brings people great convenience, and state-of-the-art communication protocols for wireless networks are maturing. People attach increasing importance to connections between heterogeneous wireless networks as well as to transparent guarantees of transmission quality. Wireless networks are an emerging solution for users&#39; access to information and services regardless of their geographic location, and their success in recent years has generated great interest from individuals, businesses, and industry. Several access technologies are available to the user, such as the IEEE 802.11 and 802.16 standards. SDN is a new network paradigm used to simplify network management and reduce the complexity of network technology. This work presents a simulation implemented under OMNeT++ 4.6 to improve handover performance between two technologies, WiFi and WiMAX. This paper proposes a decision algorithm for a heterogeneous vertical handover between WiFi access points and a WiMAX network. The inputs to the algorithm are the WiFi RSS, bit rate, jitter, and estimated TCP end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_10-Improvement_the_Vertical_Handover_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reading the Moving Text in Animated Text-Based CAPTCHAs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091209</link>
        <id>10.14569/IJACSA.2018.091209</id>
        <doi>10.14569/IJACSA.2018.091209</doi>
        <lastModDate>2018-12-31T17:33:49.4970000+00:00</lastModDate>
        
        <creator>Syed Safdar Ali Shah</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <creator>Rafaqat Hussain Arain</creator>
        
        <subject>Bots; CAPTCHAs; ANNs; animations; image processing; HIPs; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Being based on hard AI problems, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a hot research topic in the fields of computer vision and artificial intelligence. A CAPTCHA is a challenge-response test conducted to distinguish humans from bots, and it has been ubiquitously implemented on the web since its introduction. As text-based CAPTCHAs have been successfully broken by various researchers, several design variants have been proposed and implemented to further strengthen them. Animated text-based CAPTCHAs are one of these design variants and rely on the difficulty of reading moving text; they are based on the zero knowledge per frame principle. Although it is still easy for humans to read animated text, it remains a challenge for machines. As proposals for animated CAPTCHAs are on the rise, there is a strong need to scrutinize their strength against automated attacks. In this research, such CAPTCHAs are investigated to verify their robustness against automated attacks. The proposed methods prove that these CAPTCHAs are vulnerable and do not guarantee robustness against automated attacks. The proposed frame selection, noise removal, segmentation and recognition methods successfully decoded these CAPTCHAs with an overall precision, segmentation accuracy and recognition rate of up to 53.8%, 92.9% and 93.5%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_9-Reading_the_Moving_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensual Semantic Analysis for Effective Query Expansion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091208</link>
        <id>10.14569/IJACSA.2018.091208</id>
        <doi>10.14569/IJACSA.2018.091208</doi>
        <lastModDate>2018-12-31T17:33:49.4830000+00:00</lastModDate>
        
        <creator>Muhammad Ahsan Raza</creator>
        
        <creator>M. Rahmah</creator>
        
        <creator>A. Noraziah</creator>
        
        <creator>Mahmood Ashraf</creator>
        
        <subject>Semantic computing; information retrieval; computational intelligence; ontology; term sense disambiguation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Information on the World Wide Web has grown rapidly in the past few years. To satisfy their information needs, users mostly submit queries via traditional search engines, which retrieve results on the basis of the keyword-matching principle. However, a keyword-based search cannot recognize the meanings of keywords or the semantic relationships among the terms in the user’s query; thus, this technique cannot retrieve satisfactory results. The expansion of an initial query with relevant, meaningful terms can solve this issue and enhance information retrieval. Generally, query expansion methods consider concepts that are semantically related to query terms within the ontology as candidates for expanding the initial query. An analysis of the correct sense of query terms, rather than only considering semantic relations, is necessary to overcome language ambiguity problems. In this work, we propose a query expansion framework based on query sense analysis and semantics mining using a computer science domain ontology, followed by a working prototype of the system. Experts analyzed the results of the system prototype over a test dataset and Web data, and found a remarkable improvement in overall search performance. Furthermore, the proposed framework demonstrated better mean average precision and recall values than the baseline method.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_8-Sensual_Semantic_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Level Fault-Tolerance Technique for High Performance Computing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091207</link>
        <id>10.14569/IJACSA.2018.091207</id>
        <doi>10.14569/IJACSA.2018.091207</doi>
        <lastModDate>2018-12-31T17:33:49.4670000+00:00</lastModDate>
        
        <creator>Aishah M. Aseeri</creator>
        
        <creator>Mai A. Fadel</creator>
        
        <subject>High performance computing; fault tolerance; graphics processing units (GPUS); error detection; n-version programming (NVP); multi-GPU; reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Reliability is the biggest concern facing future extreme-scale, high performance computing (HPC) systems. Based on the current generation of HPC systems, projections suggest that errors will occur at very high rates in future systems. Thus, it is fundamental that we detect errors that can cause the failure of important applications, such as scientific ones. In this paper, we present a two-level fault-tolerance approach for the detection and classification of errors for Compute Unified Device Architecture (CUDA)-based Graphics Processing Units (GPUs). In the first level, it detects the existence of errors by using software redundancy that applies design diversity. In the second level, it investigates the problematic software version and re-executes it on a different hardware component to classify whether the error is a permanent hardware error or a software error. We implemented our approach on GPUs and conducted proof-of-concept experiments by running three versions of matrix multiplication under different error scenarios; the results show the feasibility of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_7-A_Two_Level_Fault_Tolerance_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software vs Hardware Implementations for Real-Time Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091206</link>
        <id>10.14569/IJACSA.2018.091206</id>
        <doi>10.14569/IJACSA.2018.091206</doi>
        <lastModDate>2018-12-31T17:33:49.4500000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <creator>Ioan Ungurean</creator>
        
        <subject>Embedded system; real time operating systems; microcontrollers; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In the development of embedded systems, a very important role is played by the real-time operating system (RTOS). RTOSs provide basic services for multitasking on small microcontrollers and support for meeting the deadlines imposed by critical systems. The RTOS used can have important consequences for the performance of the embedded system. In order to eliminate the overhead generated by an RTOS, RTOS primitives have begun to be implemented in hardware. One such solution is the nMPRA architecture (Multi Pipeline Register Architecture - n degree of multiplication), which implements all the primitives of an RTOS in hardware. This article compares software RTOSs and nMPRA systems in terms of response time to an external event. For the comparison, we use three of the most commonly used RTOSs in embedded system development: FreeRTOS, uC/OS-III and Keil RTX. These RTOSs are executed on a microcontroller that works at the same frequency as the implementations of the nMPRA architecture on an FPGA system.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_6-Software_vs_Hardware_Implementations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Printing of Personalized Archwire Groove Model for Orthodontics: Design and Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091205</link>
        <id>10.14569/IJACSA.2018.091205</id>
        <doi>10.14569/IJACSA.2018.091205</doi>
        <lastModDate>2018-12-31T17:33:49.4370000+00:00</lastModDate>
        
        <creator>Gang Liu</creator>
        
        <creator>He Qin</creator>
        
        <creator>Haiyan Zhen</creator>
        
        <creator>Bin Liu</creator>
        
        <creator>Xiaolong Wang</creator>
        
        <creator>Xinyao Tao</creator>
        
        <subject>3D Printing; personalized; archwire groove model; orthodontic treatment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>In traditional dental treatment, archwires are bent by orthodontists using standard methods. However, the standard models cater to patients with common oral problems, and are unsuitable for personalized orthodontic treatment, which is highly desired in many cases. A method to prepare a personalized archwire groove model is, undoubtedly, useful for orthodontic treatment in clinical diagnosis. In this study, a three-dimensional (3D) printing technology is demonstrated to achieve the personalized archwire groove model in a rapid, computed tomography image compatible manner, to assist orthodontists. This method is expected to improve the efficiency and accuracy of archwire bending and the resultant product can distribute the uniform dentofacial stress, improve the wearing comfort of the patient and further shorten the period of treatment and repair of the tooth.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_5-3D_Printing_of_Personalized_Archwire.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of the Consensus in the Allocation of Resources in Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091204</link>
        <id>10.14569/IJACSA.2018.091204</id>
        <doi>10.14569/IJACSA.2018.091204</doi>
        <lastModDate>2018-12-31T17:33:49.4200000+00:00</lastModDate>
        
        <creator>Federico Agostini</creator>
        
        <creator>David L. La Red Mart&#237;nez</creator>
        
        <creator>Julio C. Acosta</creator>
        
        <subject>Aggregation operators; communication between groups of processes; mutual exclusion; operating systems; processor scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>When processes distributed across process nodes access critical resources shared in the modality of distributed mutual exclusion, it is important to know how these resources are managed and the order in which the processes&#39; demands for resources are resolved. In a shared environment, certain rules must be complied with; for instance, access to resources must be achieved through mutual exclusion. In this work, a consensus mechanism based on an aggregation operator is proposed to establish the order of allocation of resources to the processes. Consensus is understood as the agreement that must be achieved for the allocation of all the resources requested by each process. To model this consensus, it must be taken into account that the processes can form groups of processes or be independent, along with the state of the nodes where each of them is located, the computational load, the number of processes, the priorities of the processes, CPU usage, use of main memory, virtual memory, etc. These characteristics allow the evaluation of the conditions to agree on the order in which resources will be allocated to processes.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_4-Modeling_of_the_Consensus_in_the_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilization of a Neuro Fuzzy Model for the Online Detection of Learning Styles in Adaptive e-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091202</link>
        <id>10.14569/IJACSA.2018.091202</id>
        <doi>10.14569/IJACSA.2018.091202</doi>
        <lastModDate>2018-12-31T17:33:49.3570000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Claudia Rivera</creator>
        
        <creator>Jorge Luna-Urquizo</creator>
        
        <creator>Elisa Castaneda</creator>
        
        <creator>Francisco Fialho</creator>
        
        <subject>e-Learning; learning style identification; backpropagation neural network; fuzzy logic; neuro fuzzy systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>After conducting a historical review and establishing the state of the art of the various approaches to the design and implementation of adaptive e-learning systems, the authors propose a system model for the classification of user interactions within an adaptive e-learning platform. Unlike traditional e-learning systems, which are designed for the generic user irrespective of individual knowledge and learning styles, the proposed model takes into consideration the characteristics of the user, in particular their learning styles and preferences, in order to personalize the ways learning materials and objects are used. The interactions are analyzed through a mechanism based on backpropagation neural networks and fuzzy logic, which allows for automatic, online identification of the learning styles of the users in a manner which is transparent to them, and which can also be of great utility as a component of the architecture of adaptive e-learning systems and knowledge-management systems. Finally, conclusions and recommendations for future work are established.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_2-Utilization_of_a_Neuro_Fuzzy_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimal Control Load Demand Sharing Strategy for Multi-Feeders in Islanded Microgrid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091203</link>
        <id>10.14569/IJACSA.2018.091203</id>
        <doi>10.14569/IJACSA.2018.091203</doi>
        <lastModDate>2018-12-31T17:33:49.3570000+00:00</lastModDate>
        
        <creator>Muhammad Zahid Khan</creator>
        
        <creator>Muhammad Mansoor Khan</creator>
        
        <creator>Xu Xiangming</creator>
        
        <creator>Umar Khalid</creator>
        
        <creator>Muhammad Ahmed Usman Rasool</creator>
        
        <subject>Optimal control; power sharing; voltage regulation; MG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>For the operation of an autonomous microgrid (MG), an essential task is to meet the load demand sharing using multiple distributed generation (DG) units. Conventional droop control methods and their numerous variations have been developed in the literature to realize proportional power sharing among such multiple DG units. However, conventional droop control strategies are subject to power sharing error because of the non-trivial feeder impedances of medium-voltage MGs. Further, complex MG configurations (mesh or looped networks) usually make reactive power sharing and system voltage regulation more challenging. This paper presents an optimal control strategy to perform proportional power sharing and voltage regulation for multiple feeders in islanded AC MGs. The case study simulation for optimizing power sharing and voltage regulation in the proposed control strategy has been verified using MATLAB/Simulink.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_3-An_Optimal_Control_Load_Demand_Sharing_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of the Proposed Hardness Analysis Technique for FPGA Designs to Improve Reliability and Fault-Tolerance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091201</link>
        <id>10.14569/IJACSA.2018.091201</id>
        <doi>10.14569/IJACSA.2018.091201</doi>
        <lastModDate>2018-12-31T17:33:49.2630000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <creator>Ali Hayek</creator>
        
        <creator>Josef B&#246;rcs&#246;k</creator>
        
        <subject>Dependability; fault injection; fault tolerance; reliability; single event effects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(12), 2018</description>
        <description>Reliability and fault tolerance of FPGA systems are a major concern nowadays. The continuous increase in system complexity makes reliability evaluation extremely difficult and costly. Redundancy techniques are widely used to increase the reliability of such systems, but they introduce large area and time overheads, which cause more power consumption and delay, respectively. An experimental evaluation method, named the “hardness analysis technique”, is proposed under the proposed RASP-FIT tool to find the critical nodes of FPGA-based designs. After finding the critical nodes, the proposed redundant model is applied to those locations of the design and the code is modified. The modified code is functionally equivalent and is more hardened against soft errors. An experimental set-up is developed to verify and validate the criticality of the locations found using hardness analysis. After applying redundancy to those locations, the reliability is evaluated with respect to failure rate reduction. Experimental results on ISCAS’85 combinational benchmarks show that a min-max range of failure reduction (14%-85%) is achieved compared to the circuit without redundancy under the same faulty conditions, which improves reliability.</description>
        <description>http://thesai.org/Downloads/Volume9No12/Paper_1-Validation_of_the_Proposed_Hardness_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Analysis and Classification of Cardiac Rate Variability using Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911106</link>
        <id>10.14569/IJACSA.2018.0911106</id>
        <doi>10.14569/IJACSA.2018.0911106</doi>
        <lastModDate>2018-12-03T16:55:47.2400000+00:00</lastModDate>
        
        <creator>Azizullah Kakar</creator>
        
        <creator>Naveed Sheikh</creator>
        
        <creator>Bilal Ahmed</creator>
        
        <creator>Saleem Iqbal</creator>
        
        <subject>Electrocardiogram (ECG); cardiology; P-QRS-T wave; autonomic nervous system; heart rate variability; artificial neural network; time and frequency domain; pattern recognition; disease classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Electrocardiogram (ECG) is the acquisition of electrical activity signals in cardiology. It contains important information about the condition and diseases of the heart. The pattern, size and shape of an ECG wave, and the time intervals between different peaks of the P-QRS-T wave, provide useful information about the diseases which afflict the heart. Heart rate signals vary, and this variation contains important indicators of cardiac disease. Heart rate variability is a popular and non-invasive tool for assessing the autonomic nervous system. The indicators contained in the ECG wave may appear throughout the day or occur randomly during the day, so computer-based analysis over day-long intervals is very useful for diagnosing heart disease. Thus, this paper deals with the classification of heart diseases on the basis of heart rate variability using an artificial neural network. The feed-forward neural network classified approximately 85% of the test results correctly.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_106-Systematic_Analysis_and_Classification_of_Cardiac_Rate_Variability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Steganography Technique using JPEG Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911107</link>
        <id>10.14569/IJACSA.2018.0911107</id>
        <doi>10.14569/IJACSA.2018.0911107</doi>
        <lastModDate>2018-12-03T16:55:46.6930000+00:00</lastModDate>
        
        <creator>Rand A. Watheq</creator>
        
        <creator>Fadi Almasalha</creator>
        
        <creator>Mahmoud H. Qutqut</creator>
        
        <subject>Steganography; hide secret message; JPEG image; lossy compression; frequency domain; zigzag</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Steganography is a security technique that uses obscurity to hide a secret message within an ordinary message exchanged between senders and receivers. In this paper, we propose a new steganography technique for hiding data in Joint Photographic Experts Group (JPEG) images, the best-known lossy image compression format. Our proposed work is based on lossy compression (frequency domain) in images. This type of compression is susceptible to even the smallest change, which makes it difficult to find a proper location to embed data without affecting the image quality and without allowing anyone to notice the hidden message. On the sender’s side, first, we divide the image into 8*8 blocks, then apply the Discrete Cosine Transform (DCT), quantization, and zigzag processes, respectively. Second, the secret message is embedded at the end of each selected zigzag block array using the best method from our experimental results. Third, the rest of the code applies Run Length Coding (RLC), Differential Pulse Code Modulation (DPCM) and a Huffman encoder to obtain the compressed image that includes the embedded message. On the receiver’s side, the previous steps are reversed to extract the secret message using an encrypted shared key exchanged via a secure channel. Our experimental results show that the best array content size of zigzag computed coefficients is between 1 and 20. This selection allows us to utilize more than half of the image blocks to embed the secret message, and the difference between the cover image that holds the secret message and the original cover image is very minimal and hard to detect.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_107-A_New_Steganography_Technique_using_JPEG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Cryptanalysis of Arabic Transposition Ciphers using Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911105</link>
        <id>10.14569/IJACSA.2018.0911105</id>
        <doi>10.14569/IJACSA.2018.0911105</doi>
        <lastModDate>2018-12-01T12:27:14.9070000+00:00</lastModDate>
        
        <creator>Noor R. Al-Kazaz</creator>
        
        <creator>William J. Teahan</creator>
        
        <subject>Automatic cryptanalysis; Arabic transposition ciphers; compression; PPM; word segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper introduces a compression-based method adapted for the automatic cryptanalysis of Arabic transposition ciphers. More specifically, it presents how a Prediction by Partial Matching (‘PPM’) compression scheme, a method that performs well when applied to different natural language processing tasks, can also be used for the automatic decryption of transposition ciphers for the Arabic language. Another well-known compression scheme, Gzip, is also investigated in this paper, with less efficient performance demonstrated by this method. In order to achieve readability, two further compression-based approaches for space insertion are evaluated as well. The results of our experiments with 125 Arabic cryptograms of different lengths show that 97% of the cryptograms are successfully decrypted without any errors using PPM compression models. Additionally, in a post-processing step, we can effectively segment the output by the automatic insertion of spaces, resulting in only a few errors overall. As far as we know, this is the first work to demonstrate effective automatic cryptanalysis of transposition ciphers in Arabic.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_105-An_Automatic_Cryptanalysis_of_Arabic_Transposition_Ciphers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Authentication using PLEXUS Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911104</link>
        <id>10.14569/IJACSA.2018.0911104</id>
        <doi>10.14569/IJACSA.2018.0911104</doi>
        <lastModDate>2018-12-01T12:27:13.9700000+00:00</lastModDate>
        
        <creator>Hala Bahjat Abdulwahab</creator>
        
        <creator>Khaldoun L. Hameed</creator>
        
        <creator>Nawaf Hazim Barnouti</creator>
        
        <subject>PLEXUS; video authentication; video tampering; temporal attacks </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Digital video authentication is a very important issue in daily life. Many devices are able to record or capture digital videos, and these videos can be transmitted over the internet as well as many other non-secure channels. Illegal updating or manipulation of digital video has become a problem because of developments in video editing software. Therefore, video authentication techniques are required in order to ensure the trustworthiness of the video. Many techniques, such as digital signatures and watermarking, are used to address this issue; these solutions have been successfully applied for copyright purposes, but they remain difficult to implement in many other situations, especially video surveillance. In this paper, a new method called PLEXUS is proposed for digital video authentication against temporal attacks. In the authentication process, the sender generates a signature from a video and a private key according to the method’s steps. In the verification process, the receiver also generates a signature using the same video and private key, and the two signatures are compared. If the two signatures match, the video has not been tampered with; otherwise, it has been tampered with. The method is implemented using 10 different videos and proved to be efficient.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_104-Video_Authentication_using_PLEXUS_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KASP: A Cognitive-Affective Methodology for Designing Serious Learning Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911103</link>
        <id>10.14569/IJACSA.2018.0911103</id>
        <doi>10.14569/IJACSA.2018.0911103</doi>
        <lastModDate>2018-12-01T12:27:13.0030000+00:00</lastModDate>
        
        <creator>Tahiri Najoua</creator>
        
        <creator>El Alami Mohamed</creator>
        
        <subject>Methodology; Affective Computing; Serious Games; Learning Disabilities; Game Design; Knowledge; Pedagogy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Many research studies agree on the existence of a close link between emotion and cognition. In fact, much research has demonstrated that students with learning disabilities (LD) experience emotional distress related to their difficulties. In this regard, this article proposes a new methodology for designing intelligent games, called the KASP Methodology, a new approach applied to the serious games (SGs) design field. It includes new decisive factors for designing SGs for children with LD. The proposed methodology is based on four pillars: Knowledge, Affect, Sensory and Pedagogy. It aims to help designers of serious games build suitable serious learning games for children with LD, taking into account the cognitive and emotional aspects of the child learner in order to improve their learning rhythm and foster their emotional state related to learning in a playful and interactive environment.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_103-KASP_A_Cognitive_Affective_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Event-Based Epidemic Surveillance Systems that Support the Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911102</link>
        <id>10.14569/IJACSA.2018.0911102</id>
        <doi>10.14569/IJACSA.2018.0911102</doi>
        <lastModDate>2018-12-01T12:27:12.0370000+00:00</lastModDate>
        
        <creator>Meshrif Alruily</creator>
        
        <subject>Public health; infectious disease; event extraction; disease surveillance system; arabic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>With the revolution of the internet, many event-based systems have been developed for monitoring epidemic threats. These systems rely on unstructured data gathered from various online sources. Moreover, some systems are able to handle more than one language to cover all news reports related to disease outbreaks worldwide. The aim of this paper is to examine existing systems in terms of supporting the Arabic language. The 28 identified systems were evaluated based on different criteria. The results of this evaluation show that only 5 systems support the Arabic language using translation tools; hence, disease outbreaks in news reports written in Arabic are not directly processed. In other words, no existing event-based system in the literature has yet been developed specifically for Arabic health news reports to monitor epidemic diseases.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_102-A_Review_on_Event_Based_Epidemic_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Proposal of a Distributed Algorithm for Solving the Multiple Constraints Parking Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911101</link>
        <id>10.14569/IJACSA.2018.0911101</id>
        <doi>10.14569/IJACSA.2018.0911101</doi>
        <lastModDate>2018-12-01T12:27:11.0370000+00:00</lastModDate>
        
        <creator>Khaoula Hassoune</creator>
        
        <creator>Wafaa Dachry</creator>
        
        <creator>Fouad Moutaouakkil</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>ant colony optimization (ACO); multi-agent system; parking monitoring; intelligent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The parking problem in big cities has become one of the key causes of city traffic congestion, driver frustration and air pollution. To avoid these problems, parking monitoring is an important solution. Recently, many new technologies have been developed that allow vehicle drivers to effectively find free parking places in the city, but these systems are still limited because they do not take road network constraints into consideration. In this paper, we design a distributed system that helps drivers find the optimal route between their position and an indoor parking facility in the city, taking into consideration a set of constraints such as distance, traffic, amount of fuel in the car, available places in the parking facility, and parking cost. We propose a distributed technique based on multi-objective Ant Colony Optimization (ACO). The proposed method aims to manage the multi-objective parking problem in real time, using the behavior of real ants and multi-agent systems to decrease the traffic flow and to find the optimal route for drivers.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_101-The_Proposal_of_a_Distributed_Algorithm_for_Solving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Non-Reference QoE Prediction Model for 3D Video Streaming Over Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.0911100</link>
        <id>10.14569/IJACSA.2018.0911100</id>
        <doi>10.14569/IJACSA.2018.0911100</doi>
        <lastModDate>2018-11-30T11:48:48.1830000+00:00</lastModDate>
        
        <creator>Ibrahim Alsukayti</creator>
        
        <creator>Mohammed Alreshoodi</creator>
        
        <subject>QoE; QoS; Video; Fuzzy Logic; Prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>With the rapid growth in mobile device users and the increasing demand for video applications, traffic from 2D/3D video services is expected to account for the largest proportion of internet traffic. The user’s perceived quality of experience (QoE) and the quality of service (QoS) are the most important key factors for the success of video delivery. In this regard, predicting the QoE is highly important for the provisioning of 3D video services in the wireless domain due to limited resources and bandwidth constraints. This study presents a cross-layer no-reference quality prediction model for wireless 3D video streaming. The model is based on fuzzy inference systems (FIS) and exploits several QoS key factors that are mapped to the QoE. The performance of the model was validated with unseen datasets and shows high prediction accuracy. The results show a high correlation between the objectively measured QoE and the QoE predicted by the FIS model.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_100-Hybrid_Non_Reference_QoE_Prediction_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Case Study for the IONEX CODE-Database Processing Tool Software: Ionospheric Anomalies before the Mw 8.2 Earthquake in Mexico on September 7, 2017</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091199</link>
        <id>10.14569/IJACSA.2018.091199</id>
        <doi>10.14569/IJACSA.2018.091199</doi>
        <lastModDate>2018-11-30T11:48:47.1230000+00:00</lastModDate>
        
        <creator>Guillermo Wenceslao Zarate Segura</creator>
        
        <creator>Carlos Sotomayor-Beltran</creator>
        
        <subject>Earthquake; IONEX CODE-Database Processing Tool; ionospheric anomalies; geomagnetic storm; global ionospheric maps (GIM); IONosphere map EXchange (IONEX)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A software tool was developed in the Imaging Processing Research laboratory (INTI-Lab) that automatically downloads several IONEX files around a specific user input date and performs statistical calculations to look for ionospheric anomalies through the generation of differential vertical total electron content (∆VTEC) maps. The IONEX CODE-Database Processing Tool (ICPT) software saves a considerable amount of time otherwise spent gathering the IONosphere map EXchange (IONEX) files necessary for the production of differential VTEC maps. Using the ICPT software, we were able to detect ionospheric anomalies before the devastating earthquake that happened in Mexico on September 7, 2017. Positive and negative ionospheric anomalies were detected nine days and one day before the seismic event, respectively. Due to the stable geomagnetic conditions, we suggest that the anomalies are associated with the earthquake. Furthermore, it is very likely that the collision between the North American and Cocos plates is producing the ionization of the air necessary to generate the observed disturbances.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_99-A_Case_Study_for_the_IONEX_CODE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between Commensurate and Non-commensurate Fractional Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091198</link>
        <id>10.14569/IJACSA.2018.091198</id>
        <doi>10.14569/IJACSA.2018.091198</doi>
        <lastModDate>2018-11-30T11:48:46.0930000+00:00</lastModDate>
        
        <creator>Khaled HCHEICHI</creator>
        
        <creator>Faouzi BOUANI</creator>
        
        <subject>discretization; state-space; fractional; calculation time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This article deals with fractional systems, which represent physical processes better and require a very small number of parameters, which can reduce the computation time. It focuses in particular on the state-space representation, which highlights the state variables and allows studying the internal behavior of the system while taking into account the initial state. Moreover, this representation adapts better to the multiple-input multiple-output case. The article also discusses the discretization of fractional systems and finally adapts Model Predictive Control to apply it, showing its efficiency and performance on these systems. The main objective of this article is to compare commensurate and non-commensurate fractional models in terms of performance, calculation time and ease of use.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_98-Comparison_between_Commensurate_and_Non_Commensurate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Automatic Image Annotation Method using Association Rules Mining and Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091197</link>
        <id>10.14569/IJACSA.2018.091197</id>
        <doi>10.14569/IJACSA.2018.091197</doi>
        <lastModDate>2018-11-30T11:48:45.0470000+00:00</lastModDate>
        
        <creator>Mounira Taileb</creator>
        
        <creator>Eman Alahmadi</creator>
        
        <subject>Automatic image annotation; association rules mining; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Effective and fast retrieval of images from image datasets is not an easy task, especially with the continuous and fast growth of digital images added every day by users to the web. Automatic image annotation is an approach that has been proposed to facilitate the retrieval of images semantically related to a query image. A multimodal image annotation method is proposed in this paper. The goal is to benefit from both the visual features extracted from images and their associated user tags. The proposed method relies on clustering to regroup the text and visual features into clusters, and on association rules mining to generate the rules that associate text clusters with visual clusters. In the experimental evaluation, two datasets from the photo annotation tasks are considered: ImageCLEF 2011 and ImageCLEF 2012. The results achieved by the proposed method are better than those of all the multimodal methods of participants in the ImageCLEF 2011 photo annotation task and state-of-the-art methods. Moreover, the MiAP of the proposed method is better than the MiAP of 7 participants out of 11 when using ImageCLEF 2012 in the evaluation.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_97-Multimodal_Automatic_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Potential Banking Customer Churn using Apache Spark ML and MLlib Packages: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091196</link>
        <id>10.14569/IJACSA.2018.091196</id>
        <doi>10.14569/IJACSA.2018.091196</doi>
        <lastModDate>2018-11-30T11:48:44.0030000+00:00</lastModDate>
        
        <creator>Hend Sayed</creator>
        
        <creator>Manal A. Abdel-Fattah</creator>
        
        <creator>Sherif Kholief</creator>
        
        <subject>Churn prediction; Big data; Machine learning; Apache Spark; ML package; MLlib package; Decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This study was conducted based on the assumption that the Spark ML package has much better performance and accuracy than the Spark MLlib package in dealing with big data. The dataset used in the comparison consists of bank customer transactions. The decision tree algorithm was used with both packages to generate a model for predicting the churn probability of bank customers based on their transaction data. Detailed comparison results were recorded, and it was concluded that the ML package and its new DataFrame-based APIs have better evaluation performance and prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_96-Predicting_Potential_Banking_Customer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Evaluation of SVM for Facial Expression Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091195</link>
        <id>10.14569/IJACSA.2018.091195</id>
        <doi>10.14569/IJACSA.2018.091195</doi>
        <lastModDate>2018-11-30T11:48:42.9570000+00:00</lastModDate>
        
        <creator>Saeeda Saeed</creator>
        
        <creator>Junaid Baber</creator>
        
        <creator>Maheen Bakhtyar</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <creator>Naveed Sheikh</creator>
        
        <creator>Imam Dad</creator>
        
        <creator>Anwar Ali Sanjrani</creator>
        
        <subject>Facial Expression Recognition; Support Vector Machine (SVM); Histogram of Oriented Gradients (HoG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Support Vector Machines (SVMs) have shown better generalization and classification capabilities in different applications of computer vision; an SVM classifies the underlying data by a hyperplane that separates the two classes while maintaining the maximum margin between the support vectors of the respective classes. An empirical analysis of SVMs on the facial expression recognition task, which exhibits high intra-class and low inter-class variations, is reported by conducting an extensive set of experiments on the large-scale FER 2013 dataset. Three different SVM kernel functions are used: the linear kernel, the quadratic kernel and the cubic kernel, whereas the Histogram of Oriented Gradients (HoG) is used as the feature descriptor. The cubic kernel achieves the highest accuracy on the FER 2013 dataset using HoG.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_95-Empirical_Evaluation_of_SVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Iris Pattern Recognition Method by using Adaptive Hamming Distance and 1D Log-Gabor Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091194</link>
        <id>10.14569/IJACSA.2018.091194</id>
        <doi>10.14569/IJACSA.2018.091194</doi>
        <lastModDate>2018-11-30T11:48:41.9130000+00:00</lastModDate>
        
        <creator>Rachida Tobji</creator>
        
        <creator>Wu Di</creator>
        
        <creator>Naeem Ayoub</creator>
        
        <creator>Samia Haouassi</creator>
        
        <subject>Iris recognition; bio-metric; Hamming Distance; iris recognition matching; Adaptive Hamming Distance; 1D Log-Gabor Filter; segmentation; normalization; feature encoding; genuine acceptance rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Iris recognition is one of the most reliable security methods compared to other biometric security techniques. The iris is an internal organ whose texture is randomly determined during embryonic gestation and is amenable to remote examination by a computerized machine vision system. Previously, researchers utilized approaches such as the Hamming Distance in their iris recognition algorithms. In this paper, we propose a new method to improve the performance of the iris recognition matching system. Firstly, a 1D Log-Gabor Filter is used to encode the unique features of the iris into a binary template. The efficiency of the algorithm can be increased by taking into account the location of fragile bits coinciding with the 1D Log-Gabor filter. Secondly, an Adaptive Hamming Distance is used to examine the affinity of two templates. The main steps of the proposed iris recognition algorithm are segmentation using the Hough circular transform, normalization by Daugman’s rubber sheet model, which provides a high percentage of accuracy, feature encoding and matching. Simulation studies are carried out to test the validity of the proposed algorithm. The results obtained confirm the superior performance of our algorithm against several state-of-the-art iris matching algorithms. Experiments are performed on the CASIA V1.0 iris database, and the proposed method achieves a genuine acceptance rate of 99.92%.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_94-Efficient_Iris_Pattern_Recognition_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Bloom Filter in Big Data Research: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091193</link>
        <id>10.14569/IJACSA.2018.091193</id>
        <doi>10.14569/IJACSA.2018.091193</doi>
        <lastModDate>2018-11-30T11:48:40.8500000+00:00</lastModDate>
        
        <creator>Ripon Patgiri</creator>
        
        <creator>Sabuzima Nayak</creator>
        
        <creator>Samir Kumar Borgohain</creator>
        
        <subject>Bloom Filter; Big Data; Database; Membership Filter; Deduplication; Big Data Storage; Flash memory; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Big Data is one of the most popular emerging trends; it has become a blessing for humankind and a necessity of day-to-day life, with Facebook as a prominent example. Every person is involved in producing data, either directly or indirectly. Thus, Big Data is a high volume of data with an exponential growth rate that consists of a variety of data. Big Data touches all fields, including the government sector, the IT industry, business, economics, engineering, bioinformatics, and other basic sciences. Thus, Big Data forms a data silo. Most of the data are duplicate and unstructured. To deal with such a data silo, the Bloom Filter is a precious resource to filter out duplicate data. The Bloom Filter is also inevitable in a Big Data storage system to optimize memory consumption. Undoubtedly, the Bloom Filter uses a tiny amount of memory space to filter a very large volume of data, and it stores information about a large set of data. Although the functionality of the Bloom Filter is limited to membership filtering, it can be adapted to various applications. Besides, the Bloom Filter is deployed in diverse fields and is also used in interdisciplinary research areas, bioinformatics for instance. In this article, we expose the usefulness of the Bloom Filter in Big Data research.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_93-Role_of_Bloom_Filter_in_Big_Data_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Streaming Analytics for Traffic Monitoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091192</link>
        <id>10.14569/IJACSA.2018.091192</id>
        <doi>10.14569/IJACSA.2018.091192</doi>
        <lastModDate>2018-11-30T11:48:40.2900000+00:00</lastModDate>
        
        <creator>Muhammad Arslan Amin</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kamran Sarwar</creator>
        
        <creator>Ayesha Kanwal</creator>
        
        <creator>Muhammad Azeem</creator>
        
        <subject>Video streaming analytics; Traffic monitoring system; Video streams; Hadoop; GPU; CNN; Deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Keeping a check on traffic during rush hours is considered a difficult task. Traditional applications are manual, costly, time-consuming, and involve human factors. Large-scale data is being generated from different resources, and advancements in technology make it possible to store, process, analyze, and communicate large-scale video data. Manual applications are being wiped out by the invention of automatic applications. Automatic video streaming analytics applications help to reduce computational resources, providing cost-efficient and accurate predictions while monitoring traffic on roads. This study reviews previously developed applications of video streaming analytics for traffic monitoring systems using Hadoop that are able to efficiently analyze video streams.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_92-Video_Streaming_Analytics_for_Traffic_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning based Fine-Tuned and Stacked Model: Predictive Analysis on Cancer Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091191</link>
        <id>10.14569/IJACSA.2018.091191</id>
        <doi>10.14569/IJACSA.2018.091191</doi>
        <lastModDate>2018-11-30T11:48:39.2300000+00:00</lastModDate>
        
        <creator>Ravi Aavula</creator>
        
        <creator>R. Bhramaramba</creator>
        
        <subject>Machine learning; Cancer prediction; Data mining and Knowledge discovery; Supervised learning; Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The early forecasting and location of diseased cells can be useful in curing illness in medical applications. Knowledge discovery plays many significant roles in the health sector, bioinformatics, etc. Plenty of hidden information is available in the datasets of various domains, such as medical information, textual analysis, and image attribute exploration. Predictive analytics and modeling encompass a variety of statistical methodologies from machine learning that can analyze present and historical facts to make predictions about future events. Breast cancer research has already made a good amount of progress in the recent decade, but due to advancements in technology, there are still possibilities for improvement. In this paper, a fine-tuned and stacked model procedure is presented and experimented on a standard breast cancer dataset. The obtained results show improvement over state-of-the-art algorithms in performance parameters such as disease prediction accuracy, sensitivity and F1 score.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_91-A_Machine_Learning_based_Fine_Tuned_and_Stacked_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Rehabilitation Using Sequential Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091190</link>
        <id>10.14569/IJACSA.2018.091190</id>
        <doi>10.14569/IJACSA.2018.091190</doi>
        <lastModDate>2018-11-30T11:48:38.2000000+00:00</lastModDate>
        
        <creator>Gladys Calle Condori</creator>
        
        <creator>Eveling Castro-Gutierrez</creator>
        
        <creator>Luis Alfaro Casas</creator>
        
        <subject>Kinect Skeletal; Sequential Learning Algorithms; Virtual Rehabilitation; Virtual Reality Therapy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Rehabilitation systems are becoming more important now because patients can access motor skills recovery treatment from home, reducing the limitations of time, space and cost of treatment in a medical facility. Traditional rehabilitation systems served as movement guides, later as movement mirrors, and in recent years research has sought to generate feedback messages to the patient based on the evaluation of his or her movements. Currently, the algorithms most commonly used for exercise evaluation are Dynamic Time Warping (DTW), the Hidden Markov Model (HMM) and the Support Vector Machine (SVM). However, the larger the set of exercises to be evaluated, the less accurate the recognition becomes, generating confusion between exercises that have similar posture descriptors. This research paper compares two classifiers, HMM and Hidden Conditional Random Fields (HCRF), plus two types of posture descriptors, one based on points and one based on angles. The point representation proves to be superior to the angle representation, although the latter is still acceptable. Similar results are found for HCRF and HMM.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_90-Virtual_Rehabilitation_using_Sequential_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tele-Ophthalmology Android Application: Design and Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091189</link>
        <id>10.14569/IJACSA.2018.091189</id>
        <doi>10.14569/IJACSA.2018.091189</doi>
        <lastModDate>2018-11-30T11:48:37.1870000+00:00</lastModDate>
        
        <creator>Rachid Merzougui</creator>
        
        <creator>Mourad Hadjila</creator>
        
        <creator>Nadia Benmessaoud</creator>
        
        <creator>Mokhtaria Benaouali</creator>
        
        <subject>Diabetic retinopathy; Screening; Fundus; Tele-ophthalmology; Android; Mathematical morphology; OpenCV; Database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Diabetic retinopathy is the leading cause of blindness in the world population. Early detection and appropriate treatment can significantly reduce the risk of loss of sight. Medical authorities recommend an annual examination of the fundus for diabetic patients, and several screening programs for diabetic retinopathy around the world have been established to implement this recommendation. The purpose of this paper is to facilitate the detection of this disease using tele-ophthalmology and the latest telecommunications technologies. The present paper aims to develop an Android app named &quot;RETINA&quot; which captures retinal photographs using a microscopic lens, processes them via several operators based on mathematical morphology after a filtering step by exploiting the &quot;OpenCV&quot; library, and stores them in a local or remote MySQL database. The application also facilitates the task of the doctor and ophthalmologist by allowing the inspection of the files of remote patients.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_89-Tele_Ophthalmology_Android_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Quality of UCP-Based Framework using CK Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091188</link>
        <id>10.14569/IJACSA.2018.091188</id>
        <doi>10.14569/IJACSA.2018.091188</doi>
        <lastModDate>2018-11-30T11:48:36.1230000+00:00</lastModDate>
        
        <creator>Zhamri Che Ani</creator>
        
        <creator>Nor Laily Hashim</creator>
        
        <creator>Hazaruddin Harun</creator>
        
        <creator>Shuib Basri</creator>
        
        <creator>Aliza Sarlan</creator>
        
        <subject>ucp-based framework; use case points; ck metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Software effort estimation is one of the most important concerns in the software industry. It has received much attention over the last 40 years in order to improve the accuracy of effort estimates at the early stages of software development. For this reason, many software estimation models have been proposed, such as COCOMO, ObjectMetrix, Use Case Points (UCP) and many more. However, some of these estimation methods were not designed for object-oriented technology, which actively encourages reuse strategies. Therefore, given the popularity of the UCP model and the evolution of the object-oriented paradigm, a UCP-based framework and a supporting program were developed to assist software developers in building good-quality software effort estimation programs. This paper evaluates the quality of the UCP-based framework using CK Metrics. The results show that by implementing the UCP-based framework, the quality of the UCP-based program has improved in terms of understandability, testability, maintainability, and reusability.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_88-Evaluating_the_Quality_of_UCP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of an IoT-Based Flood Alert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091187</link>
        <id>10.14569/IJACSA.2018.091187</id>
        <doi>10.14569/IJACSA.2018.091187</doi>
        <lastModDate>2018-11-30T11:48:35.0800000+00:00</lastModDate>
        
        <creator>Wahidah Md. Shah</creator>
        
        <creator>F. Arif</creator>
        
        <creator>A.A. Shahrin</creator>
        
        <creator>Aslinda Hassan</creator>
        
        <subject>Flood Alert System; Internet of Things; Cyber-Physical System; IR4.0</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Floods are among the most damaging natural disasters in the world. A heavy flood can destroy communities and claim many lives, and governments spend billions of dollars to recover the affected areas. It is therefore crucial to develop a flood control system as a mechanism to reduce flood risk. Providing quick feedback on the occurrence of a flood is necessary to alert residents to take early action, such as evacuating quickly to a safer, higher place. As a solution, this paper proposes a system that is not only able to detect the water level but also to measure the rise speed of the water level and alert residents. The waterfall model is adopted as the methodology in this project. A Raspberry Pi is used to collect data from the water sensor and transmit it to a GSM module, which sends an alert via SMS. An analysis is presented to show how the Raspberry Pi is integrated with the smartphone to deliver the alert. The system is tested in experiments consisting of two different environments in order to ensure that it provides accurate and reliable data. The project is IoT-based, in line with the Industrial Revolution 4.0, and supports the infrastructure of Cyber-Physical Systems.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_87-The_Implementation_of_an_IoT_based_Flood_Alert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Round the Clock Vehicle Emission Monitoring using IoT for Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091186</link>
        <id>10.14569/IJACSA.2018.091186</id>
        <doi>10.14569/IJACSA.2018.091186</doi>
        <lastModDate>2018-11-30T11:48:33.9870000+00:00</lastModDate>
        
        <creator>Jagadish Nayak</creator>
        
        <subject>IoT; Emission; Sensor; Carbon; Tracking; Smartcity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Vehicle emissions contribute a major part of the world’s pollution. Most countries have stringent rules requiring their transport authorities to check emission levels. To achieve zero emissions, continuous monitoring of the emission level is required, and smart cities need to maintain zero pollution throughout the year. In this paper, an IoT (Internet of Things) based system is proposed for continuous tracking and warning. The prototype developed is connected to the exhaust of the vehicle, and the data are collected in the cloud, where they can be further processed by a warning system. The device was tested with several vehicles, and the results are comparable with those of existing emission testing systems on the market. Vehicle manufacturers could adopt this device by embedding it in their products.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_86-Round_the_Clock_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure User Authentication Scheme with Biometrics for IoT Medical Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091185</link>
        <id>10.14569/IJACSA.2018.091185</id>
        <doi>10.14569/IJACSA.2018.091185</doi>
        <lastModDate>2018-11-30T11:48:32.9270000+00:00</lastModDate>
        
        <creator>YoHan Park</creator>
        
        <subject>IoT medical environments; Cryptanalysis; User authentication; BAN logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The Internet of Things (IoT) is a ubiquitous network in which devices are interconnected and users can access those devices through the Internet. Recently, medical healthcare systems have been combined with these IoT networks to provide efficient and effective medical services to medical staff and patients. However, security threats increase as the requirements of medical services in IoT medical environments grow, so it is essential to protect these networks from malicious attacks. In 2018, Roy et al. proposed a remote user authentication and key agreement scheme with biometrics for IoT medical environments. Unfortunately, we analyze Roy et al.’s scheme and demonstrate that it does not withstand various attacks, such as replay attacks and password guessing attacks. We then propose a user authentication scheme to overcome these security drawbacks. The proposed scheme withstands various attacks from adversaries in IoT medical environments and provides better security functionality than the scheme of Roy et al. We prove the authentication and session key agreement of the proposed scheme using BAN logic and show that it is secure against various attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_85-A_Secure_User_Authentication_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Uncertainty Measure in Belief Entropy Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091184</link>
        <id>10.14569/IJACSA.2018.091184</id>
        <doi>10.14569/IJACSA.2018.091184</doi>
        <lastModDate>2018-11-30T11:48:31.9600000+00:00</lastModDate>
        
        <creator>Moise Digrais Mambe</creator>
        
        <creator>Tchimou N’Takp&#233;</creator>
        
        <creator>Nogbou Georges Anoh</creator>
        
        <creator>Souleymane Oumtanaga</creator>
        
        <subject>Dempster-Shafer Theory; Belief entropy; Uncertainty; Information management; Deng entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Belief entropy, which represents the uncertainty measure between several pieces of evidence in the Dempster-Shafer framework, is attracting increasing interest in research. It has been used in many applications and is mainly based on the theory of evidence. To quantify uncertainty, several measures have been proposed in the literature. These measures, sometimes in extended or hybrid forms, use the Shannon entropy principle to determine the degree of uncertainty. However, the failure to consider the scale of the frame of discernment remains an open issue in quantifying uncertainty. In this paper, we propose a new uncertainty measure that takes into account the power set of the frame of discernment. After analysing the different existing methods, we show the performance and effectiveness of our proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_84-A_New_Uncertainty_Measure_in_Belief_Entropy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TokenVote: Secured Electronic Voting System in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091183</link>
        <id>10.14569/IJACSA.2018.091183</id>
        <doi>10.14569/IJACSA.2018.091183</doi>
        <lastModDate>2018-11-30T11:48:30.9600000+00:00</lastModDate>
        
        <creator>Fahad Alsolami</creator>
        
        <subject>Cloud; Fingerprint; Voting; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>With the spread of democracy around the world, voting is considered a way to make decisions collectively. Recently, many government offices and private organizations have used voting to make decisions when the opinions of multiple decision makers must be accounted for. Meanwhile, cloud computing attracts many individuals and organizations due to its low cost, scalability, and ability to leverage big data. These considerations motivate our proposal of the TokenVote scheme. TokenVote is an electronic voting system in the cloud that uses revocable fingerprint biotokens with a secret sharing scheme to provide privacy, non-repudiation, and authentication. The TokenVote scheme splits the secret (the vote) into shares, embeds them inside the encoded biometric data (i.e., the fingerprint), and distributes them over multiple clouds. During the voting process, each voter must provide his or her fingerprint, after which the TokenVote scheme collects all voting shares from all voters to compute the final voting result. TokenVote performs the voting process with parallel computing in the cloud in an encoded mode to prevent disclosure of the voting shares and the fingerprint itself. Our experiments show that TokenVote achieves significant performance and comparable accuracy when compared with two baselines.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_83-TokenVote_Secured_Electronic_Voting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conditional Text Paraphrasing: A Survey and Taxonomy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091182</link>
        <id>10.14569/IJACSA.2018.091182</id>
        <doi>10.14569/IJACSA.2018.091182</doi>
        <lastModDate>2018-11-30T11:48:30.4130000+00:00</lastModDate>
        
        <creator>Ahmed H. Al-Ghidani</creator>
        
        <creator>Aly A. Fahmy</creator>
        
        <subject>Natural Language Processing; Text Paraphrasing; Conditional Text Paraphrasing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This work introduces a survey of the text paraphrasing task. The survey covers the different types of tasks around text paraphrasing and describes the techniques and models that are regularly used to approach it, alongside the datasets used for training and evaluating the models. Text paraphrasing has an effective impact when used in other applications, so the paper also mentions some text paraphrasing applications. In addition, this work proposes a new taxonomy called Conditional Text Paraphrasing. To the best of our knowledge, this is the first work that shows the varieties and sub-problems of the original text paraphrasing task. The goal of this taxonomy is to expand the definition of text paraphrasing by adding conditional constraints as features that either control paraphrase generation or discrimination. This expanded definition opens a new domain for research in Natural Language Processing (NLP) and Machine Learning. Finally, some useful applications of conditional text paraphrasing are presented.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_82-Conditional_Text_Paraphrasing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving K-Means Algorithm by Grid-Density Clustering for Distributed WSN Data Stream</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091181</link>
        <id>10.14569/IJACSA.2018.091181</id>
        <doi>10.14569/IJACSA.2018.091181</doi>
        <lastModDate>2018-11-30T11:48:29.3700000+00:00</lastModDate>
        
        <creator>Yassmeen Alghamdi</creator>
        
        <creator>Manal Abdullah</creator>
        
        <subject>WSNs; data mining; clustering; data stream; grid density</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In recent years, Wireless Sensor Networks (WSNs) have had a wide range of applications in many fields, including military surveillance, health monitoring, habitat observation, and more. WSNs contain individual nodes that interact with the environment by sensing and processing physical parameters. Sensor nodes often generate large amounts of sequential, tuple-oriented, small data items called data streams. Data streams are usually huge data that arrive online, flow rapidly at very high speed, are unlimited, and cannot be controlled in an orderly fashion during arrival. Due to WSN limitations, several challenges are faced and need to be solved. Extending network lifetime and reducing energy consumption are the main challenges, and they can be addressed with data mining techniques. Clustering is a common data mining technique that effectively organizes the WSN structure. It has proven its efficiency for network performance by extending network lifetime and saving the energy of sensor nodes. This paper develops a grid-density clustering algorithm that enhances clustering in WSNs by combining grid and density techniques. The algorithm helps to face the limitations found in WSNs that carry data streams. The grid-density algorithm is proposed based on the well-known K-Means clustering algorithm in order to enhance it. Using Matlab, the grid-density clustering algorithm is compared with the K-Means algorithm. The simulation results show that the grid-density algorithm outperforms K-Means by 15% in network lifetime and by 13% in energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_81-Improving_K_Means_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Trivium on Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091180</link>
        <id>10.14569/IJACSA.2018.091180</id>
        <doi>10.14569/IJACSA.2018.091180</doi>
        <lastModDate>2018-11-30T11:48:28.3230000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Syahifudin Shahid</creator>
        
        <creator>Harin Puspa Ayu Catherina</creator>
        
        <creator>Yazid Samanhudi</creator>
        
        <subject>Trivium; Raspberry; Kruskal-Wallis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The high connectivity of billions of IoT devices leads to many security issues, and Trivium is designed for IoT to overcome these security challenges. The objective of this study is to implement a security service that provides confidentiality for the communication of IoT devices. Furthermore, this study aims to analyze Trivium performance in terms of keystream generation time and memory utilization on the Raspberry Pi Zero, Raspberry Pi 2B, and Raspberry Pi 3B. The results show a statistically significant difference in keystream generation time and memory utilization across the Raspberry Pi Zero, Raspberry Pi 2B, and Raspberry Pi 3B based on the Kruskal-Wallis H test. A further Jonckheere-Terpstra test indicates that the fastest keystream generation time was on the Raspberry Pi 3B, and the smallest memory utilization was on the Raspberry Pi 2B. The implementation of Trivium on three versions of the Raspberry Pi shows promising results: less than 27 MB of memory utilization for cryptography leaves more resources available to applications.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_80-Performance_Evaluation_of_Trivium_on_Raspberry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Networking Sites Habits and Addiction Among Adolescents in Klang Valley</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091179</link>
        <id>10.14569/IJACSA.2018.091179</id>
        <doi>10.14569/IJACSA.2018.091179</doi>
        <lastModDate>2018-11-30T11:48:27.7630000+00:00</lastModDate>
        
        <creator>Yazriwati Yahya</creator>
        
        <creator>Nor Zairah Ab. Rahim</creator>
        
        <creator>Roslina Ibrahim</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <creator>Haslina Md Sarkan</creator>
        
        <creator>Suriayati Chuprat</creator>
        
        <subject>Social networking sites; habit behavior; addiction behavior; SNS usage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Social networking sites (SNS) are very popular applications in today’s society and have, to a certain extent, changed the way people communicate with each other. This kind of technology has become a trend among users regardless of its impact on them, whether positive or negative. The level of SNS usage among adolescents has started to raise concern among parents and society. SNS addiction is becoming problematic in certain countries, especially the United States, and lately this issue has started to spread all over the world; Malaysia is also one of the countries affected by SNS addiction. SNS addiction is not an isolated phenomenon, as it starts from high engagement with SNS and originates from habitual behavior. Therefore, it is important to understand SNS habits and addiction among adolescents in Malaysia. The purpose of this study is to analyze and explore SNS usage among adolescents in Malaysia, specifically in the Klang Valley, examining SNS usage behavior in terms of habit and addiction. The data were collected from a sample of 60 respondents using an online survey and analyzed using SPSS for descriptive analysis. The analysis found that most of the adolescents used SNS on a daily basis, and the majority of them used it for more than two hours per day. Patterns of habit and addiction in SNS usage show that some adolescents experienced certain habit and addiction behaviors.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_79-Social_Networking_Sites_Habits_and_Addiction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electronically Reconfigurable Two-Stage Schiffman Phase Shifter for Ku Band Beam Steering Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091178</link>
        <id>10.14569/IJACSA.2018.091178</id>
        <doi>10.14569/IJACSA.2018.091178</doi>
        <lastModDate>2018-11-30T11:48:26.6700000+00:00</lastModDate>
        
        <creator>Rawia Wali</creator>
        
        <creator>Lotfi Osman</creator>
        
        <creator>Tchanguiz Razban</creator>
        
        <creator>Yann Mah&#233;</creator>
        
        <subject>Schiffman phase shifter; reconfigurable; varactor diode; beam steerability; Ku-band</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>An electronically reconfigurable phase shifter using two Schiffman sections is presented for beam steering applications in the Ku band. The proposed phase shifter consists of only two cascaded coupled-line sections with the reference line removed. The circuit is loaded with varactor diodes that ensure its tunability over a wide bandwidth. By supplying these varactor diodes with suitable bias voltages, the phase shift can be continuously adjusted, reaching up to 168&#176; at 12.7 GHz with low insertion losses according to the simulations. Thus, the proposed two-stage phase shifter is able to reach a beam steering angle of 28.6&#176; at 12.7 GHz with only one control voltage. The proposed structure shows that our phase shifter has a compact size and a large phase-shifting range throughout the Ku band. The tunable phase shifter is prototyped, and the measurement results are presented.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_78-Electronically_Reconfigurable_Two_Stage_Schiffman_Phase.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Adaptive user Interfaces for Mobile-Phone in Smart World</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091177</link>
        <id>10.14569/IJACSA.2018.091177</id>
        <doi>10.14569/IJACSA.2018.091177</doi>
        <lastModDate>2018-11-30T11:48:25.5930000+00:00</lastModDate>
        
        <creator>Muhammad Waseem Iqbal</creator>
        
        <creator>Nadeem Ahmad</creator>
        
        <creator>Syed Khuram Shahzad</creator>
        
        <creator>Irum Feroz</creator>
        
        <creator>Natash Ali Mian</creator>
        
        <subject>Adaptive features; smart-phone; usability experience; user interface; user context; usability engineering; UCD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Applications are developed for context adaptation and communicate with users through their interfaces. These applications offer new opportunities for developers as well as users by collecting context data and adapting system behavior accordingly. In mobile devices in particular, these mechanisms increase usability tremendously, whereas a rigid, non-adaptive interface blocks the benefits of context awareness. In this paper, we study the methods, technologies, and criteria that have been proposed specifically for adaptive interfaces. Based on these guidelines, we elaborate on the intelligence of adaptivity and the use of context according to the user’s mental model. Further, we propose a model that develops a user context ontology (UCO) and an adaptive interface ontology (AIO) to optimize the use of adaptive mobile interfaces according to user preferences. These ontologies organize the perceptions and thoughts of the user. The philosophy of User Centered Design (UCD) is used to analyze the usability and validity of mobile device interfaces according to user contexts.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_77-Towards_Adaptive_User_Interfaces_for_Mobile_Phone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Processing based Task Allocation for Autonomous Multi Rotor Unmanned Aerial Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091176</link>
        <id>10.14569/IJACSA.2018.091176</id>
        <doi>10.14569/IJACSA.2018.091176</doi>
        <lastModDate>2018-11-30T11:48:24.5500000+00:00</lastModDate>
        
        <creator>Akif Durdu</creator>
        
        <creator>Mehmet Celalettin Ergene</creator>
        
        <creator>Onur Demircan</creator>
        
        <creator>Hasan Uguz</creator>
        
        <creator>Mustafa Mahmutoglu</creator>
        
        <creator>Ender Kurnaz</creator>
        
        <subject>UAV; multi rotor; quad rotor; image processing; search and rescue; task allocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Nowadays, studies based on unmanned aerial vehicles draw attention, and image processing based tasks are especially important. In this study, several tasks were performed based on the autonomous flight, image processing, and load drop capabilities of an Unmanned Aerial Vehicle (UAV). Two main tasks were tested with an autonomous UAV, and the performance of the whole system was measured according to the duration and the methods of the image processing. In the first mission, the UAV flew over a 4x4 color matrix. The 16 tiles of the matrix carried three main colors, and the pattern was changed three times. The UAV was sent to the matrix, recognized the 48 colors of the matrix, and returned to the launch position autonomously. The second mission tested the load drop and image processing abilities of the UAV. In this mission, the UAV flew over the matrix, read the pattern, and went to the parachute drop area. There, the UAV dropped the load according to the recognized pattern and then returned to the launch position.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_76-Image_Processing_based_Task_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Mobile Health Application for Cardiovascular Disease Prevention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091175</link>
        <id>10.14569/IJACSA.2018.091175</id>
        <doi>10.14569/IJACSA.2018.091175</doi>
        <lastModDate>2018-11-30T11:48:23.4870000+00:00</lastModDate>
        
        <creator>Vitri Tundjungsari</creator>
        
        <creator>Abdul Salam M Sofro</creator>
        
        <creator>Heri Yugaswara</creator>
        
        <creator>Adhika Trisna Dwi Putra</creator>
        
        <subject>Mobile health; cardiovascular; disease; human-centered design; standard; user-centered design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Cardiovascular diseases (CVDs) are one of the major causes of death in the world, including in Indonesia. Despite this, CVDs can be prevented with healthy behavior and lifestyle, such as regular health check-ups, healthy eating and drinking habits, stress management, sleep management, and regular physical activity. In this paper, we develop a mobile health application as a tool to record daily behavior and lifestyle. Mobile health is chosen because mobile devices are nowadays the most popular means of communication, so we believe that mobile health (mHealth) is a promising tool to promote a healthy lifestyle and behavior. The method used for developing the application is Human-Centered Design (HCD). The application is evaluated iteratively from the first prototype (low-fidelity) to the final prototype (high-fidelity). Feedback collected with the User Experience Questionnaire (UEQ) shows that the application has above-average scores for all of the components, i.e., attractiveness, clarity, efficiency, accuracy and dependability, stimulation, and novelty. The best score is for stimulation (excellent), while the worst score is for accuracy and dependability (above average). This shows that mHealth is a potential tool to stimulate users to adopt a healthy lifestyle; however, further validation by health experts is still required to ensure the accuracy of the application’s results.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_75-Development_of_Mobile_Health_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using an Integrated Framework for Conceptual Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091174</link>
        <id>10.14569/IJACSA.2018.091174</id>
        <doi>10.14569/IJACSA.2018.091174</doi>
        <lastModDate>2018-11-30T11:48:22.9430000+00:00</lastModDate>
        
        <creator>Lindita Nebiu Hyseni</creator>
        
        <creator>Zamir Dika</creator>
        
        <subject>Integrated framework for conceptual modeling (IFCMod); joint approval requirements (JAR); system requirements; information system; mixed method case studies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The Integrated Framework for Conceptual Modeling (IFCMod) was created to contribute to the quality of information systems through the integration of functional and non-functional requirements. This paper explores the outcomes of IFCMod usage through mixed method case studies at a higher education institution and a central bank. The case study at the South East European University (SEEU) covered the analysis and design of the improvement of the e-Schedule system, while the case study at the Central Bank of the Republic of Kosovo (CBK) covered the analysis and design of the Data Collection System for Enterprise Surveys (DCSES). Based on the institutional perspective of community participation during the semi-structured interviews at the end of the Joint Approval Requirements (JAR) meetings, the outcomes in both cases showed that IFCMod usage increases the quality of the information system by increasing the quality of the system requirements.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_74-Using_an_Integrated_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three Levels Quality Analysis Tool for Object Oriented Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091173</link>
        <id>10.14569/IJACSA.2018.091173</id>
        <doi>10.14569/IJACSA.2018.091173</doi>
        <lastModDate>2018-11-30T11:48:22.3800000+00:00</lastModDate>
        
        <creator>Mustafa Ghanem Saeed</creator>
        
        <creator>Maher Talal Alasaady</creator>
        
        <creator>Fahad Layth Malallah</creator>
        
        <creator>Kamaran HamaAli Faraj</creator>
        
        <subject>Software quality models; software measurements; clean code; source code complexity metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>As software engineering methods have evolved to handle complex software development techniques, new concepts have emerged in software languages that are used to develop software quality models. In this research, the Multi Levels Quality Analysis Tool (MLQA) is proposed as a computer-aided software engineering tool that classifies software complexity into three levels of analysis: program package analysis, class analysis (program class), and analysis at the level of the program method. MLQA supports visual analysis of software contents with color alerts and a recommendation system, which gives a quick view of the software under development and its complexity. The methodology of this work is a newly proposed software quality model based on standard object-oriented programming complexity metrics and threshold limits. In addition, a new quality attribute, named the clean code attribute, is proposed and integrated with the proposed software quality model in a way that enables users of the model to rely on this attribute and reduces dependence on software experience, which is expensive and sometimes rare.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_73-Three_Levels_Quality_Analysis_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Information Technology on Teaching Process in Education; An Analytical Prospective Study at University of Sulaimani</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091172</link>
        <id>10.14569/IJACSA.2018.091172</id>
        <doi>10.14569/IJACSA.2018.091172</doi>
        <lastModDate>2018-11-30T11:48:21.3200000+00:00</lastModDate>
        
        <creator>Mohammad Esmail Ahmad</creator>
        
        <creator>Ameer Sardar K. Rashid</creator>
        
        <creator>Amanj Anwar Abdullah</creator>
        
        <creator>Raza M. Abdulla</creator>
        
        <subject>Information and communication technology; education; evaluation of information technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Nowadays, Information Technology (IT) is engaged in all spheres of life. It plays an important role in developing and processing work in all types of organizations, especially in the teaching process in institutions and universities. The purpose of this paper is to present the impact of information technology on learning progress and teaching improvement at the University of Sulaimani from both the lecturers&#39; and students&#39; perspectives, and to determine the common key factors by which the teaching process relies upon the information technology framework. The researchers created an online questionnaire survey and took a sample of academic staff and students in different colleges and departments of the University of Sulaimani in 2017, totaling 320 questionnaires. The paper shows that information technology has become a basic need that cannot be dispensed with in the teaching process in universities and institutions in this era, and emphasizes that different levels of understanding of information technology serve different learning and teaching processes.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_72-The_Role_of_Information_Technology_on_Teaching_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Secured Transaction of Images and Text on Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091171</link>
        <id>10.14569/IJACSA.2018.091171</id>
        <doi>10.14569/IJACSA.2018.091171</doi>
        <lastModDate>2018-11-30T11:48:20.2600000+00:00</lastModDate>
        
        <creator>John Jeya Singh. T</creator>
        
        <creator>Dr E.Baburaj</creator>
        
        <subject>Threefish; neural network; security; multimedia; cloud storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Implementation of privacy preservation of data on cloud storage is tedious and complex. The cloud is a third-party, on-demand service that holds data for a specific period. There is no assurance from cloud storage providers about the security of data, so it is necessary to provide some security for it. Cryptographic algorithms are required to secure data on the cloud. The aim of this research is to develop a method that combines an Artificial Neural Network with the Threefish algorithm for the secured transaction of images. Images are large in size and more sensitive compared to normal text. The proposed method secures images with low computation cost compared to existing methods. The research is implemented on private and public clouds. The generated results prove that the proposed research is more efficient in terms of compression ratio, mean square error, normalized absolute error, time, and space efficiency.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_71-A_Novel_Method_for_Secured_Transaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Relationship of Liver Enzymes with Viral Load of Hepatitis C in HCV Infected Patients by Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091170</link>
        <id>10.14569/IJACSA.2018.091170</id>
        <doi>10.14569/IJACSA.2018.091170</doi>
        <lastModDate>2018-11-30T11:48:19.1970000+00:00</lastModDate>
        
        <creator>Fahad Ahmad</creator>
        
        <creator>Kashaf Junaid</creator>
        
        <creator>Ata ul Mustafa</creator>
        
        <subject>Hepatic; hepatitis virus; liver markers; UCINET analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2018.091170.retraction</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_70-Relationship_of_Liver_Enzymes_with_Viral_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Biological Feedback Controller Design for Handwriting Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091169</link>
        <id>10.14569/IJACSA.2018.091169</id>
        <doi>10.14569/IJACSA.2018.091169</doi>
        <lastModDate>2018-11-30T11:48:18.1530000+00:00</lastModDate>
        
        <creator>Mariem BADRI</creator>
        
        <creator>Ines CHIHI</creator>
        
        <creator>Afef ABDELKRIM</creator>
        
        <subject>Handwriting system; biological system; feedback control; PD controller; muscle force signals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper deals with a feedback controller of PD (proportional, derivative) type applied to the process of handwriting. The considered model describes the behavior of the “hand and pen” system in response to the forearm muscle forces applied to produce handwriting. The applied approach uses memory recall of the error signal between model outputs and experimental data to reach a desired trajectory position, a rapid dynamic, and a stable model response. The control technique is applied in order to extend the handwriting model response to a larger database of graphic traces. The obtained results illustrate the reliability of closed-loop control to command the handwriting system and to ensure its robustness against unknown inputs such as muscle forces, which can vary from one individual to another and increase model complexity.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_69-Biological_Feedback_Controller_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Model for Measuring Transparency of Inter-Organizational Information Systems in Supply Chain: Case Study of Cosmetic Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091168</link>
        <id>10.14569/IJACSA.2018.091168</id>
        <doi>10.14569/IJACSA.2018.091168</doi>
        <lastModDate>2018-11-30T11:48:17.0930000+00:00</lastModDate>
        
        <creator>Maryam Toofani</creator>
        
        <creator>Alireza Hassanzadeh</creator>
        
        <creator>Ali Rajabzadeh Ghatari</creator>
        
        <subject>Transparency of information systems; supply chain; cosmetic industry; measuring transparency; inter-organizational</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The role of information systems in organizational performance has changed markedly, and today information systems create value for organizations. This study aims to provide a conceptual model for measuring the transparency of inter-organizational information systems in the supply chain. The statistical population of this research includes all managers and staff of cosmetics companies in Tehran; these companies are engaged in different sectors and number 500. About 218 people were surveyed, a sample size calculated with the Cochran formula. A conceptual model for measuring the transparency of inter-organizational information systems in the supply chain was developed based on a review of the theoretical concepts. A researcher-made questionnaire was used to measure the variables of the research model. The validity of the research tool was confirmed by experts, and its reliability was reported as 0.85 by Cronbach&#39;s alpha. According to the T-statistics, transparency of resources, inter-organizational trust, and environmental assurance are positive and significant in measuring the transparency of inter-organizational information systems at the 0.01 level, and they are above average.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_68-Model_for_Measuring_Transparency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differentiation of Brain Waves from the Movement of the Upper and Lower Extremities of the Human Body</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091167</link>
        <id>10.14569/IJACSA.2018.091167</id>
        <doi>10.14569/IJACSA.2018.091167</doi>
        <lastModDate>2018-11-30T11:48:16.0300000+00:00</lastModDate>
        
        <creator>Brian Meneses-Claudio</creator>
        
        <creator>Witman Alvarado-Diaz</creator>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>OpenBCI; cyton board; extremity movement; brain wave differentiation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Currently, the study of brain waves has revealed a type of alternative communication, in addition to the different applications that can be built with the brain waves obtained from each individual. OpenBCI is an open-source platform for electroencephalography (EEG), with a device called the Cyton Board capable of collecting brain waves, which can be sent to a computer to be processed. In this research work, a computer-machine interface is presented that collects the brain waves of individuals and processes them, in order to indicate the differences in the brain signals between the thoughts of imaginary movement of the left and right arm, as well as of the left and right leg. These brain wave differences can then be used in applications focused on people with physical disabilities.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_67-Differentiation_of_Brain_Waves.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content Analysis of Privacy Management Features in Geosocial Networking Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091166</link>
        <id>10.14569/IJACSA.2018.091166</id>
        <doi>10.14569/IJACSA.2018.091166</doi>
        <lastModDate>2018-11-30T11:48:15.0470000+00:00</lastModDate>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Yeoh Wai Hong</creator>
        
        <creator>Erman Hamid</creator>
        
        <creator>Zakiah Ayop</creator>
        
        <subject>Privacy management; communication privacy management theory; social network; geosocial network; content analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Geosocial networking applications allow users to share information and communicate with other people within a virtual neighborhood or community. Although most geosocial networking applications include privacy management features, one challenge is to improve the design of these features. To overcome this challenge, the adaptation of privacy-related theories offers a concrete way to comprehend and analyze how privacy management features are used, producing tangible research results that help users and system developers understand privacy management. This paper proposes standardized privacy management features for geosocial networking applications from a market perspective that researchers and application developers can utilize to demonstrate or measure privacy management features. The objective of this paper is two-fold: first, to map the theoretical constructs guided by Communication Privacy Management (CPM) theory into privacy management features in geosocial networking applications; second, to evaluate the reliability of the proposed features using content analysis. Content analysis was performed on 1326 geosocial networking apps in the market (Google Play store and App Store) to determine the reliability of the proposed privacy management features through inter-coder reliability analysis. The primary findings of the content analysis show that many of the privacy management features with low reliability are from the Boundary Turbulence construct. Furthermore, only 6 of the 13 proposed features are deemed reliable, namely specific grouping, visibility setting, privacy policy, violation, imprecision, and inaccuracy. The proposed privacy management features may aid researchers and system developers in focusing on the best privacy management features for improving geosocial networking application design.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_66-Content_Analysis_of_Privacy_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Detection on DICOM Image using Triangular Norms in Type-2 Fuzzy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091165</link>
        <id>10.14569/IJACSA.2018.091165</id>
        <doi>10.14569/IJACSA.2018.091165</doi>
        <lastModDate>2018-11-30T11:48:14.3770000+00:00</lastModDate>
        
        <creator>D. Nagarajan</creator>
        
        <creator>M.Lathamaheswari</creator>
        
        <creator>R.Sujatha</creator>
        
        <creator>J.Kavikumar</creator>
        
        <subject>Aggregation operators; T norm; T conorm; triangular interval type-2 fuzzy number (TIT2FN); fuzzy morphology; gray scale Image; medical image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In image processing, edge detection is an important task. Fuzzy logic plays a vital role in image processing when dealing with images that are lacking in quality or imprecise in nature. The present study contributes an authentic method of fuzzy edge detection through image segmentation. The gradient of the image is computed with triangular norms to extract the information. Triangular norms (T-norms) and triangular conorms (T-conorms) are specialized in dealing with uncertainty; therefore, triangular norms with minimum and maximum operators are chosen for the morphological operations. In addition, mathematical properties of the aggregation operators that represent the role of the morphological operations, using the Triangular Interval Type-2 Fuzzy Yager Weighted Geometric (TIT2FYWG) and Triangular Interval Type-2 Fuzzy Yager Weighted Arithmetic (TIT2FYWA) operators, are derived. These properties represent the components of image processing. Here, edge detection is performed on a DICOM image by converting it into a 2D gray scale image using Type-2 fuzzy MATLAB, which is the novelty of this work.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_65-Edge_Detection_on_DICOM_image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Page Collection Scheme for QLC NAND Flash Memory using Cache</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091164</link>
        <id>10.14569/IJACSA.2018.091164</id>
        <doi>10.14569/IJACSA.2018.091164</doi>
        <lastModDate>2018-11-30T11:48:13.8300000+00:00</lastModDate>
        
        <creator>Seok-Bin Seo</creator>
        
        <creator>Wanil Kim</creator>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Solid state drive; storage systems; cache; flash translation layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Recently, semiconductor companies such as Samsung, Hynix, and Micron have focused on quad-level cell (QLC) NAND flash memory chips because of the increase in the capacity of storage systems. A QLC NAND flash memory chip stores 4 bits per cell. A page in QLC NAND flash memory consists of 16 sectors, which is two to four times larger than that of conventional triple-level cell NAND flash memory. Because of this large page size, when QLC NAND flash memory is applied directly to a current storage system, each page space is not used efficiently, resulting in low space utilization across the storage system. To solve this problem, an efficient page collection scheme using cache for QLC NAND flash memory (PCS) is proposed. The main role of PCS is to manage the data transmitted from the file system efficiently (according to the data pattern and size) and to reduce the number of unnecessary write operations. The efficiency of PCS was evaluated using an SNIA IOTTA NEXUS5 trace-driven simulation on QLC NAND flash memory. According to close observation, PCS significantly reduces write operations by 50% compared with previous page collection algorithms by efficiently collecting small data into a page. Furthermore, a cache idle-time determination algorithm is proposed to further increase the space utilization of each page, thereby reducing the overall number of write operations on the QLC flash memory.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_64-Efficient_Page_Collection_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Interactive Ophthalmology Hologram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091163</link>
        <id>10.14569/IJACSA.2018.091163</id>
        <doi>10.14569/IJACSA.2018.091163</doi>
        <lastModDate>2018-11-30T11:48:12.8330000+00:00</lastModDate>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Nazreen Abdullasim</creator>
        
        <creator>Wan Sazli Nasaruddin Saifudin</creator>
        
        <creator>Raja Norliza Raja Omar</creator>
        
        <subject>3D animation; hologram; ophthalmology; interactive; eye</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Ophthalmology is the medical branch that deals with the eye; the field is associated with the anatomy, physiology, and diseases of the eye. The main objective of this paper is to develop a novel interactive three-dimensional (3D) simulation of eye anatomy using a holography environment approach in order to create better visualization of the structures of the eye. Currently, the public can access medical information through conventional methods such as brochures, pamphlets, and booklets, in addition to better ways such as 2D and 3D video. However, these methods do not offer the interactive visualization of medical information that helps create an engaging presentation for users. Moreover, medical doctors are unable to show and explain in detail how diseases occur through these methods. An interactive method is therefore required to assist doctors in conveying disease information effectively. Since the human eye is the most complex organ in our body, an advanced technology, i.e., the hologram, should be used to visualize the visual system in an effective and interactive way and to produce an effective explanation of eye diseases. A hologram is three-dimensional and offers interactivity, and the proposed technology will help doctors examine vital organs using 3D displays. It is envisaged that the proposed interactive holography technique for clinical purposes would greatly contribute to and assist the management of eye diseases and other diseases as well. It is hoped that the developed interactive holography technique for the eye will assist clinicians in delivering disease information efficiently and attractively.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_63-Development_of_Interactive_Ophthalmology_Hologram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Tuning and Overload Management of Thread Pool System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091162</link>
        <id>10.14569/IJACSA.2018.091162</id>
        <doi>10.14569/IJACSA.2018.091162</doi>
        <lastModDate>2018-11-30T11:48:12.2870000+00:00</lastModDate>
        
        <creator>Faisal Bahadur</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Fahad Khurshid</creator>
        
        <subject>Distributed system; distributed thread pool; thread pool system; performance; overload management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Distributed applications have been developed using thread pool systems (TPSs) in order to improve system performance. The dynamic optimization and overload management of a TPS are two crucial factors that affect the overall performance of a distributed thread pool (DTP). This paper presents a DTP based on a central management system, in which a central manager forwards clients’ requests in round-robin fashion to an available set of TPSs running on servers. The dynamic tuning of each TPS is based on the request rate at that TPS. An overload condition at a TPS is detected by the TPS itself through a decline in throughput. The overload condition is resolved by reducing the size of the thread pool to its previous value, at which it produced throughput matching the request rate. By reducing the size of the thread pool at high request rates, context switches and thread contention overheads are eliminated, enabling system resources to be utilized effectively by the threads available in the pool. The evaluation results demonstrate the validity of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_62-Dynamic_Tuning_and_Overload_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Modeling of Inventory Management Processes as a Thinging Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091161</link>
        <id>10.14569/IJACSA.2018.091161</id>
        <doi>10.14569/IJACSA.2018.091161</doi>
        <lastModDate>2018-11-30T11:48:11.1930000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <creator>Nourah Al-Huwais</creator>
        
        <subject>Conceptual model; diagrammatic representation; inventory control; inventory management; workflow; thinging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A control model is typically classified into three forms: conceptual, mathematical, and simulation (computer). This paper analyzes a conceptual modeling application with respect to an inventory management system. Today, most organizations utilize computer systems for inventory control that provide protection when interruptions or breakdowns occur within work processes. Modeling inventory processes is an active area of research that utilizes many diagrammatic techniques, including data flow diagrams, Unified Modeling Language (UML) diagrams, and Integration DEFinition (IDEF). We claim that current conceptual modeling frameworks lack uniform notions and fail to appeal to designers and analysts. We propose modeling an inventory system as an abstract machine, called a Thinging Machine (TM), with five operations: creation, processing, receiving, releasing, and transferring. The paper provides side-by-side contrasts of some existing examples of conceptual modeling methodologies as they apply to the TM. Additionally, the TM is applied in a case study of an actual inventory system that uses IBM Maximo. The resulting conceptual depictions point to the viability of the TM as a valuable tool for developing a high-level representation of inventory processes.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_61-Conceptual_Modeling_of_Inventory_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three-Phase Approach for Developing Suitable   Business Models for Exchanging Federated ERP Components as Web Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091160</link>
        <id>10.14569/IJACSA.2018.091160</id>
        <doi>10.14569/IJACSA.2018.091160</doi>
        <lastModDate>2018-11-30T11:48:10.6500000+00:00</lastModDate>
        
        <creator>Evan Asfoura</creator>
        
        <creator>Mohammad Samir Abdel-Haq</creator>
        
        <creator>Houcine Chatti</creator>
        
        <subject>FERP system; ERP web services; FERP mall; ERP workflow; developing approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The importance of business models has increased significantly in the last decade, especially on the Internet. The cause of this increase is the effect of the Internet and its associated applications and business processes on the business model. These effects include, for example, the emerging technical and economic aspects of a business model on the Internet, the support and transformation of traditional business models, and the rise of new business ideas based on that technology. One of these new ideas is how distributed Enterprise Resource Planning (ERP) systems, or federated ERP (FERP) systems offered as web services (WSs), can cover the increasing demand of small and medium-sized enterprises (SMEs) for business software. This paper aims to provide a derived development approach with three phases leading to three suitable concepts that identify a suitable business model for the FERP system under different scenarios of value exchange. The results of this work are conceptual models that describe the character, role, and revenue models that identify the FERP exchange business model.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_60-Three_Phase_Approach_for_Developing_Suitable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority-Aware Virtual Machine Selection Algorithm in Dynamic Consolidation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091159</link>
        <id>10.14569/IJACSA.2018.091159</id>
        <doi>10.14569/IJACSA.2018.091159</doi>
        <lastModDate>2018-11-30T11:48:10.0870000+00:00</lastModDate>
        
        <creator>Hanan A. Nadeem</creator>
        
        <creator>Hanan Elazhary</creator>
        
        <creator>Mai A. Fadel</creator>
        
        <subject>Cloud computing; energy efficiency; service level agreement; VM consolidation; VM selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In the past few years, many researchers have attempted to tackle the problem of decreasing energy consumption in cloud data centers. One of the widely adopted techniques for this purpose is dynamic Virtual Machine (VM) consolidation. Consolidation moves VMs between hosts to decrease energy consumption. However, it has a negative impact on performance, leading to Service Level Agreement (SLA) violations. Accordingly, selecting which VM to migrate from one host to another is a challenging task since it can affect performance. Researchers have come up with several solutions and policies for efficient VM selection. In this paper, we exploit the fact that many tasks and users may tolerate some performance degradation, which means the tasks running on the VMs can be of different priorities. Accordingly, we propose augmenting consolidation with the priority concept, where low priority tasks are always selected first for migration. Towards this goal, we modified the popular Minimum Migration Time VM selection algorithm using the priority concept. The efficiency of the proposed algorithm is confirmed through extensive simulations using the CloudSim toolkit and a real workload. The results show that priority awareness has a positive impact on decreasing energy consumption as well as maximizing SLA compliance.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_59-Priority_Aware_Virtual_Machine_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Evaluation of Security Requirements Engineering Benefits</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091158</link>
        <id>10.14569/IJACSA.2018.091158</id>
        <doi>10.14569/IJACSA.2018.091158</doi>
        <lastModDate>2018-11-30T11:48:09.0270000+00:00</lastModDate>
        
        <creator>Jaouad Boutahar</creator>
        
        <creator>Ilham Maskani</creator>
        
        <creator>Souha&#239;l El Ghazi El Houssa&#239;ni</creator>
        
        <subject>Software security; security requirements engineering; security evaluation; security testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Security Requirements Engineering (SRE) approaches are designed to improve information system security by considering security requirements at the beginning of the software development lifecycle. This paper presents a quantitative evaluation of the benefits of applying such an SRE approach. The methodology followed was to develop two versions of the same web application, with and without using SRE, and then to compare the level of security in each version by running different test tools. The subsequent results clearly support the benefits of the early use of SRE, with a 38% security improvement in the secure version of the application. This security benefit reaches 67% for high severity vulnerabilities, leaving only non-critical and easy-to-fix vulnerabilities.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_58-Experimental_Evaluation_of_Security_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of user Involvement in the Success of Project Scope Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091157</link>
        <id>10.14569/IJACSA.2018.091157</id>
        <doi>10.14569/IJACSA.2018.091157</doi>
        <lastModDate>2018-11-30T11:48:08.4500000+00:00</lastModDate>
        
        <creator>Maha Alkhaffaf</creator>
        
        <subject>Gathering requirements; defining scope; verifying scope; controlling scope; and user involvement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Greater emphasis is now being placed on User Involvement as a factor imperative to success in Project Scope Management. Although Project Scope Management processes tend to centre on various factors pertaining to collecting requirements, defining scope and verifying scope, controlling scope is viewed as fundamental to the management process as a whole. Furthermore, success in Project Scope Management in the modern-day competitive business setting is recognised as resting on efficient and effective processes applied across Project Scope Management. One essential factor in achieving success in this arena is User Involvement. In this regard, the point is presented that Project Scope Management and User Involvement may be implemented in such a way as to enhance successful Project Scope Management. A questionnaire-based survey relating Project Scope Management processes and User Involvement to successful Project Scope Management, encompassing 295 management- and strategy-level employees across four different governmental IT departments, was applied in order to establish both the direct and indirect links between the particular elements involved. The data gathered were analysed using Smart Partial Least Squares (SmartPLS). This work provides a valuable contribution for professionals in the field, both researchers and practitioners, and further highlights the different ways in which project managers can arrange and modify Project Scope Management processes in pursuit of their efforts to enhance the mediation of successful Project Scope Management through User Involvement.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_57-The_Role_of_User_Involvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Saudi Citizens&#39; Acceptance of Mobile Government Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091156</link>
        <id>10.14569/IJACSA.2018.091156</id>
        <doi>10.14569/IJACSA.2018.091156</doi>
        <lastModDate>2018-11-30T11:48:07.3900000+00:00</lastModDate>
        
        <creator>Adnan Mustafa AlBar</creator>
        
        <creator>Mashael A. Hddas</creator>
        
        <subject>M-government; adoption; acceptance; citizen</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Mobile government is considered an emerging technology that has been used in Saudi Arabia to enhance communication between the government and its citizens; it can also be considered a mechanism through which the government can effectively respond to their needs and expectations. The current study seeks to propose and validate an M-government adoption model to fully understand the varied variables affecting adoption behavior. This model is based on the Technology Acceptance Model (TAM) and the DeLone and McLean Information Systems Success Model. The researchers will depend on a descriptive survey approach using structured questionnaires to investigate the extent of M-government acceptance by Saudi users. Structural equation modeling will be used as the method for statistical data analysis.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_56-Exploring_Saudi_Citizens_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handling Class Imbalance in Credit Card Fraud using Resampling Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091155</link>
        <id>10.14569/IJACSA.2018.091155</id>
        <doi>10.14569/IJACSA.2018.091155</doi>
        <lastModDate>2018-11-30T11:48:06.3430000+00:00</lastModDate>
        
        <creator>Nur Farhana Hordri</creator>
        
        <creator>Siti Sophiayati Yuhaniz</creator>
        
        <creator>Nurulhuda Firdaus Mohd Azmi</creator>
        
        <creator>Siti Mariyam Shamsuddin</creator>
        
        <subject>Credit card; imbalanced dataset; misclassification error; resampling methods; random undersampling; random oversampling; synthetic minority oversampling technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Credit card based online payment has grown intensely, compelling financial organisations to implement and continuously improve their fraud detection systems. However, credit card fraud datasets are heavily imbalanced, and different types of misclassification errors may have different costs, so it is essential to control them to a certain degree. Classification techniques are promising solutions for detecting fraud and non-fraud transactions. Unfortunately, classification techniques do not perform well when there is a huge difference between the numbers of minority and majority cases. Hence, in this study, the resampling methods Random Under Sampling, Random Over Sampling and Synthetic Minority Oversampling Technique were applied to the credit card dataset to overcome the rare events in the dataset. The three resampled datasets were then classified using classification techniques. The performances were measured by sensitivity, specificity, accuracy, precision, area under the curve (AUC) and error rate. The findings disclosed that by resampling the dataset, the models were more practicable, gave better performance and were statistically better.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_55-Handling_Class_Imbalance_in_Credit_Card_Fraud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Features Optimization for ECG Signals Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091154</link>
        <id>10.14569/IJACSA.2018.091154</id>
        <doi>10.14569/IJACSA.2018.091154</doi>
        <lastModDate>2018-11-30T11:48:05.2830000+00:00</lastModDate>
        
        <creator>Alan S. Said Ahmad</creator>
        
        <creator>Majd Salah Matti</creator>
        
        <creator>Adel Sabry Essa</creator>
        
        <creator>Omar A.M. Alhabib</creator>
        
        <creator>Sabri Shaikhow</creator>
        
        <subject>Features optimization; cuttlefish; ECG; ANN-SCG; ID3; KNN; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A new method is used in this work to classify ECG beats. The method uses an optimization algorithm to select the features of each beat and then classify them. For each beat, twenty-four higher order statistical features and three timing interval features are obtained. Five beat classes are used for classification in this work: atrial premature contractions (APC), normal (NOR), premature ventricular contractions (PVC), left bundle branch block (LBBB) and right bundle branch block (RBBB). The Cuttlefish Algorithm (CFA), a new bio-inspired optimization algorithm, is used for feature selection. Four classifiers are used within the CFA: Scaled Conjugate Gradient Artificial Neural Network (SCG-ANN), K-Nearest Neighbors (KNN), Iterative Dichotomiser 3 (ID3) and Support Vector Machine (SVM). The final results show an accuracy of 97.96% for ANN, 95.71% for KNN, 94.69% for ID3 and 93.06% for SVM. These results were tested on fourteen signal records from the MIT-BIH database, from which 1400 beats were extracted.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_54-Features_Optimization_for_ECG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain Traffic Offence Demerit Points Smart Contracts: Proof of Work</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091153</link>
        <id>10.14569/IJACSA.2018.091153</id>
        <doi>10.14569/IJACSA.2018.091153</doi>
        <lastModDate>2018-11-30T11:48:04.2370000+00:00</lastModDate>
        
        <creator>Aditya Pradana</creator>
        
        <creator>Goh Ong Sing</creator>
        
        <creator>Yogan Jaya Kumar</creator>
        
        <creator>Ali A. Mohammed</creator>
        
        <subject>Blockchain; proof of work; smart contract; demerit points; decentralized system; distributed ledger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In Malaysia, a new regulation on traffic offence demerit points has been under debate. Therefore, a blockchain model is formulated to address this issue, serving as a Proof of Work (PoW) of a blockchain system. The model contains an application layer and a blockchain layer with smart contracts inside. The smart contracts act as a conditional filter that follows the regulation rules. There are three contracts, ranging from the declaration of each offence’s demerit points and fines to the penalties applied when a certain number of demerit points is accumulated, including revocation of the driver license. The contracts are automatically executed when these conditions are fulfilled. A transaction schema is also designed to match the schema of a traffic offence system. The model is deployed in an online environment with two servers synced to each other to demonstrate the decentralized characteristic of blockchain. It is developed using NodeJS while preserving the JSON format for transactions between server and client. A user interface is also provided as a simulation medium in which a traffic officer can input offences and send them to the blockchain server, while public users or the drivers themselves can check the status of a driver license recorded on the blockchain. Government officers can monitor the records through a dashboard containing graphs and charts based on the records. This interface is used as the medium for evaluation, which produces satisfactory results. The evaluation shows that the smart contracts are executed properly in accordance with the real regulations.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_53-Blockchain_Traffic_Offence_Demerit_Points.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Functionality Gaps in the Design of Learning Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091152</link>
        <id>10.14569/IJACSA.2018.091152</id>
        <doi>10.14569/IJACSA.2018.091152</doi>
        <lastModDate>2018-11-30T11:48:03.1770000+00:00</lastModDate>
        
        <creator>Tallat Naz</creator>
        
        <creator>Momeen Khan</creator>
        
        <subject>Learning Management System (LMS); shortcomings of LMS; functional gaps in LMS; LMS design issues; remedies for gaps in design of LMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This research paper focuses on various gaps associated with Learning Management Systems (LMSs) and their remedies. An LMS is a software application platform upon which multiple tasks related to online tutoring are created. For organizations, it is crucial that the risks associated with any automated process are kept as low as possible; this also applies to selecting an LMS platform for educating professionals new to the organization. To this end, organizations should carry out due research before incorporating any system as their primary LMS, even though such platforms provide many benefits to the organizations integrating them. Choosing a faulty LMS for training recruits can lead to a variety of issues later on. Thus, it becomes essential to select the best LMS platform available in the market, and the one that suits the organization’s needs. The work proposed in this paper lists together a number of problems that exist in any given LMS framework and attempts to address them according to the needs of the organization so that they provide a feasible solution and deliver better guidance to recruits.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_52-Functionality_Gaps_in_the_Design_of_Learning_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smile Detection Tool using OpenCV-Python to Measure Response in Human-Robot Interaction with Animal Robot PARO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091151</link>
        <id>10.14569/IJACSA.2018.091151</id>
        <doi>10.14569/IJACSA.2018.091151</doi>
        <lastModDate>2018-11-30T11:48:02.1170000+00:00</lastModDate>
        
        <creator>Winal Zikril Zulkifli</creator>
        
        <creator>Syamimi Shamsuddin</creator>
        
        <creator>Fairul Azni Jafar</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <creator>Azizah Abdul Manaf</creator>
        
        <creator>Alaa Abdulsalam Alarood</creator>
        
        <creator>Lim Thiam Hwee</creator>
        
        <subject>Human-robot interaction; OpenCV; PARO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Human-robot interaction (HRI) is a field of study that defines the relationship between humans and robots. In robot-assisted mental healthcare, there is still a lack of methodology, especially in evaluating the outcome. In this study, PARO, a robot in the shape of a cute baby seal, is introduced as an adjunct therapy tool for six rehabilitation patients with post-stroke depression. Currently, the therapy outcome is measured using psychological tools. When a robot is introduced, a new measurement tool is needed to analyse the patients’ response. Thus, this study constructs a tool using OpenCV-Python to detect the number of smiles as each patient interacts with PARO. A smile is an indicator of positive emotion and a sign that PARO helps to uplift a patient’s mood. The results were then compared with psychological evaluations, and both tools show congruent results. The number of smiles increased when patients were holding PARO, and PARO helped all patients to manage their psychological distress. This indicates that smile detection is an effective supporting tool for measuring response in human-robot interaction.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_51-Smile_Detection_Tool_using_OpenCV_Python.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Gated Recurrent Unit in Arabic Diacritization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091150</link>
        <id>10.14569/IJACSA.2018.091150</id>
        <doi>10.14569/IJACSA.2018.091150</doi>
        <lastModDate>2018-11-30T11:48:01.0530000+00:00</lastModDate>
        
        <creator>Rajae Moumen</creator>
        
        <creator>Raddouane Chiheb</creator>
        
        <creator>Rdouan Faizi</creator>
        
        <creator>Abdellatif El Afia</creator>
        
        <subject>Gated recurrent unit; long short-term memory; Arabic diacritization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Recurrent neural networks are powerful tools giving excellent results in various tasks, including Natural Language Processing tasks. In this paper, we use the Gated Recurrent Unit (GRU), a recurrent neural network implementing a simple gating mechanism, to improve the diacritization process of Arabic. The evaluation of the GRU for diacritization is performed in comparison with the state-of-the-art results obtained with Long Short-Term Memory (LSTM), a powerful RNN architecture giving the best-known results in diacritization. The evaluation covers two performance aspects: error rate and training runtime.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_50-Evaluation_of_Gated_Recurrent_Unit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi Supervised Method for Detection of Ambiguous Word and Creation of Sense: Using WordNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091149</link>
        <id>10.14569/IJACSA.2018.091149</id>
        <doi>10.14569/IJACSA.2018.091149</doi>
        <lastModDate>2018-11-30T11:47:59.9470000+00:00</lastModDate>
        
        <creator>Sheikh Muhammad Saqib</creator>
        
        <creator>Fazal Masud Kundi</creator>
        
        <creator>Asif Hassan Syed</creator>
        
        <creator>Shakeel Ahmad</creator>
        
        <subject>Word sense disambiguation; machine translation; information retrieval and knowledge acquisition; target word; WordNet; bag of words</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Machine Translation, Information Retrieval and Knowledge Acquisition are the three main applications of Word Sense Disambiguation (WSD). The sense of a target word can be identified from a dictionary using a ‘bag of words’, i.e. the neighbours of the target word. A target word is a word with a single spelling but different meanings, e.g. chair, light, etc. In WSD, the key input sources are sentences and target words. However, instead of the target word being provided, it should be detected automatically. If a sentence has more than one target word, the filtration process will require further processing. In this study, the proposed framework, consisting of buzz words and query words, has been developed to detect target words using the WordNet dictionary. Buzz words are defined as a ‘bag-of-words’ using POS tags, and query words are those words having multiple meanings. The proposed framework endeavours to find the sense of the detected target word using its gloss and examples containing buzz words. This is a semi-supervised approach because 266 words with multiple meanings have been labelled from various sources and used with an unsupervised approach to detect the target word and its sense (meaning). After experimenting on a dataset consisting of 300 hotel reviews, 100% of the target words in each sentence were detected, with 84% related to the sense of each sentence or phrase.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_49-Semi_Supervised_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self Interference Cancellation in Co-Time-Co-Frequency Full Duplex Cellular Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091148</link>
        <id>10.14569/IJACSA.2018.091148</id>
        <doi>10.14569/IJACSA.2018.091148</doi>
        <lastModDate>2018-11-30T11:47:59.4000000+00:00</lastModDate>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Faisal Ahmed Dahri</creator>
        
        <creator>Farzana Rauf Abro</creator>
        
        <creator>Faisal Karim Shaikh</creator>
        
        <subject>Cellular; Co-Time Co-Frequency Full duplex (CCFD); Self-Interference Cancellation (SIC); Communication system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The performance of co-time co-frequency full duplex (CCFD) communication systems is limited by self-interference (SI), which results from using the same frequency for transmission and reception. Current communication systems instead use separate frequencies for transmission and reception. SI is therefore an important issue to be fixed for future-generation systems, as the radio frequency (RF) spectrum is very scarce and a CCFD system has the potential to reduce current spectrum use by half. In this paper, a CCFD communication system is modeled and a combination of RF and digital cancellations is used to mitigate the SI. The simulation results reveal that the proposed combination of RF and digital cancellation achieves a bit-error-rate of 10^-11 at an interference-to-signal ratio of 10 dB, which is a satisfactory value for CCFD communication. The achieved spectral efficiency of the proposed system is 13 bits/sec/Hz at a signal-to-noise ratio of 50 dB. An antenna separation of 35 dB is considered for the proposed model to keep the data loss to a minimum. The performance can be improved further by increasing the digital-to-analog converter bits, but with added complexity.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_48-Self_Interference_Cancellation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues in Cloud Computing and their Solutions: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091147</link>
        <id>10.14569/IJACSA.2018.091147</id>
        <doi>10.14569/IJACSA.2018.091147</doi>
        <lastModDate>2018-11-30T11:47:58.3400000+00:00</lastModDate>
        
        <creator>Sabiyyah Sabir</creator>
        
        <subject>Cloud computing data protection; encryption; digital signature; security issues</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Cloud computing is an emerging Internet-based technology that is becoming prevalent in our environment, especially in the computer science and information technology fields, which require network computing on a large scale. Cloud computing is a shared pool of services that is gaining popularity due to its cost effectiveness, availability and high productivity. Along with its numerous benefits, cloud computing brings challenging issues regarding data privacy, data protection, authenticated access, etc. Due to these issues, the adoption of cloud computing is becoming difficult in today’s era. In this research, various security issues regarding data privacy and reliability, the key factors affecting cloud computing, are addressed, and suggestions in particular areas are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_47-Security_Issues_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control of Grid Connected Three-Phase Inverter for Hybrid Renewable Systems using Sliding Mode Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091146</link>
        <id>10.14569/IJACSA.2018.091146</id>
        <doi>10.14569/IJACSA.2018.091146</doi>
        <lastModDate>2018-11-30T11:47:57.2800000+00:00</lastModDate>
        
        <creator>Sami Younsi</creator>
        
        <creator>Nejib Hamrouni</creator>
        
        <subject>Grid connected systems; sliding controller; hybrid renewable systems; SVPWM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper presents a power control approach for a grid connected 3-phase inverter in hybrid renewable energy systems consisting of a wind generator, a flywheel energy storage system and a diesel generator. A sliding mode controller is developed around the grid connected inverter to control the injected currents, which in turn controls the active and reactive powers requested by the grid and/or isolated loads. In series with the controller, a Space Vector Pulse Width Modulation (SVPWM) method is used to drive the six inverter switches to generate 3-phase voltages and currents transferring the desired powers requested by the AC side. Simulations of the hybrid renewable energy system under Matlab-Simulink demonstrate the performance of the developed sliding mode controller.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_46-Control_of_Grid_Connected_Three_Phase_Inverter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Fundamentals of Unimodal Palmprint Authentication based on a Biometric System: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091145</link>
        <id>10.14569/IJACSA.2018.091145</id>
        <doi>10.14569/IJACSA.2018.091145</doi>
        <lastModDate>2018-11-30T11:47:56.2330000+00:00</lastModDate>
        
        <creator>Inass Shahadha Hussein</creator>
        
        <creator>Shamsul Bin Sahibuddin</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <subject>Biometric system; palmprint; palmprint features; unimodal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A biometric system can be defined as the automated method of identifying or authenticating the identity of a living person based on physiological or behavioral traits. Palmprint-based biometric authentication has gained considerable attention in recent years. Globally, enterprises have been exploring biometric authorization for some time, for the purposes of security, payment processing, law enforcement CCTV systems, and even access to offices, buildings and gyms via entry doors. Palmprint biometric systems can be divided into unimodal and multimodal systems. This paper investigates the biometric system and provides a detailed overview of palmprint technology with existing recognition approaches. Finally, we present a review of previous works based on unimodal palmprint systems using different databases.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_45-The_Fundamentals_of_Unimodal_Palmprint.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multivariate Copula Modeling with Application in Software Project Management and Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091144</link>
        <id>10.14569/IJACSA.2018.091144</id>
        <doi>10.14569/IJACSA.2018.091144</doi>
        <lastModDate>2018-11-30T11:47:55.2030000+00:00</lastModDate>
        
        <creator>Syed Muhammad Aqil Burney</creator>
        
        <creator>Osama Ajaz</creator>
        
        <creator>Shamaila Burney</creator>
        
        <subject>T-Copula; COCOMO – II; software development schedule; risk analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper discusses the application of copulas in software project management and information systems. Successful software projects depend on accurate estimation of the software development schedule. In this research, three major risk factors and their impact on the software development schedule are considered. The software development schedule is calculated with the COCOMO-II model. Two models are each simulated 100,000 times: model-I accounts for dependence among the risk factors via a T-copula, while model-II treats the risk factors as independent. The comparison of the two risk models revealed that model-II always underestimates the software development schedule, while model-I evaluates the schedule risk accurately. It is therefore necessary for software development experts to consider dependence among the various risk factors. The R package copula is employed to implement the algorithm for the multivariate T-copula. A multiplier goodness-of-fit test shows that the T-copula is a good choice for characterizing the dependence among the three risk factors.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_44-Multivariate_Copula_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Support System for Agriculture Industry using Crowd Sourced Predictive Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091143</link>
        <id>10.14569/IJACSA.2018.091143</id>
        <doi>10.14569/IJACSA.2018.091143</doi>
        <lastModDate>2018-11-30T11:47:54.1600000+00:00</lastModDate>
        
        <creator>Remya S</creator>
        
        <creator>Dr.R.Sasikala</creator>
        
        <subject>Predictive analytics; coir fiber; fuzzy-C4.5; crowdsourcing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Raw data is difficult to examine manually, so data mining techniques are used to extract relevant information from it. Data mining algorithms are efficient at retrieving specific patterns. Among these techniques, decision trees are the most commonly used methods for predicting the outcome or behavior of a pattern because they can visualize the facts effectively and efficiently. Several decision tree algorithms have been developed for predictive analysis. Here we gathered a dataset on rubberized mattresses from the Coir Board CCRI, applied several decision tree algorithms to the dataset, and compared each of them. Each algorithm yields a unique decision tree from the input data. This paper focuses in particular on the Fuzzy C4.5 algorithm and compares different decision tree algorithms for predictive analysis. Using predictive analytics, a decision can then be made for each rubberized-mattress firm.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_43-Decision_Support_System_for_Agriculture_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Student Risk Identification Model using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091142</link>
        <id>10.14569/IJACSA.2018.091142</id>
        <doi>10.14569/IJACSA.2018.091142</doi>
        <lastModDate>2018-11-30T11:47:53.1130000+00:00</lastModDate>
        
        <creator>Nityashree Nadar</creator>
        
        <creator>Dr.R.Kamatchi</creator>
        
        <subject>Classification; imbalanced data; machine learning; virtual learning environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This research work aims at detecting students who are at risk of failing to complete their course. The conceptual design presents a solution for efficient learning in the absence of data from previous courses, which is generally used for training state-of-the-art machine learning (ML) models. This scenario typically occurs when a university introduces new courses. To address it, we build a novel learning model trained on data constructed from the present course. The proposed model uses data about already submitted tasks, which introduces the issue of imbalanced data for both training and testing the classification model. The contributions of the proposed model are: the design of a learning model for detecting at-risk students using information from present courses; tackling the challenge of imbalanced data present in both the training and testing sets; formulating the problem as a classification task; and, lastly, developing a novel non-linear support vector machine (NL-SVM) classification model. Experimental outcomes show that the proposed model attains significant improvements compared with state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_42-A_Novel_Student_Risk_Identification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Short-Term Load Forecasting for Electrical Dispatcher of Baghdad City based on SVM-FA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091141</link>
        <id>10.14569/IJACSA.2018.091141</id>
        <doi>10.14569/IJACSA.2018.091141</doi>
        <lastModDate>2018-11-30T11:47:52.0830000+00:00</lastModDate>
        
        <creator>Aqeel S. Jaber</creator>
        
        <creator>Kosay A. Satar</creator>
        
        <creator>Nadheer A. Shalash</creator>
        
        <subject>SVM; FA; Load forecasting; PSO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Improving load forecasting accuracy is an important issue in the scientific optimization of power systems. Accurate statistical data and a suitable scientific method are necessary for a reliable prediction of future occurrences. This research applies a regression forecast model (Support Vector Machine, SVM) to predict electrical power load and temperature data for Baghdad city. The Firefly algorithm (FA) was used to optimize the parameters of the SVM to improve its prediction accuracy. The mean absolute percentage error (MAPE) was used as the quantitative statistical performance measure to evaluate the optimization methods. The results show that the proposed method is more accurate than both the basic method and PSO-SVM.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_41-Short_Term_Load_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Methodology for Identification of the Guilt Agent based on IP Binding with MAC using Bivariate Gaussian Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091140</link>
        <id>10.14569/IJACSA.2018.091140</id>
        <doi>10.14569/IJACSA.2018.091140</doi>
        <lastModDate>2018-11-30T11:47:51.0230000+00:00</lastModDate>
        
        <creator>B. Raja Koti</creator>
        
        <creator>Dr. G. V. S. Raj Kumar</creator>
        
        <creator>Dr. Y. Srinivas</creator>
        
        <creator>Dr. K. Naveen Kumar</creator>
        
        <subject>Data leakage; sensitive information; data leakage detection; bivariate normal distribution; probability density function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The enormous increase in data in the current world presents a major threat to organizations. Most organizations maintain some data that is sensitive and must be protected against loss and leakage. In the IT field, large amounts of data are exchanged between multiple points at every moment. During this transfer of data from the organization to a third party, there are considerable probabilities of data loss, leakage, or alteration. Email is widely used for correspondence in the working environment and for web-based activities such as logins to ledgers, and it is thereby becoming a standard business application. An email can be abused to leave an organization&#39;s sensitive information open to compromise. It is therefore of little surprise that attacks on email messages are common, and these issues need to be addressed. This paper focuses on the concept of data leakage, techniques to detect it, and its prevention.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_40-A_Methodology_for_Identification_of_the_Guilt_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simple Approach for Representation of Gene Regulatory Networks (GRN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091139</link>
        <id>10.14569/IJACSA.2018.091139</id>
        <doi>10.14569/IJACSA.2018.091139</doi>
        <lastModDate>2018-11-30T11:47:49.9630000+00:00</lastModDate>
        
        <creator>Raza ul Haq</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Shahid Hussain</creator>
        
        <subject>Graph theory; graph database; gene regulatory networks; RNA-seq; Genes Co-Expression; Neo4j</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Gene expression is controlled by a series of processes known as gene regulation, and their abstract mapping is represented by a Gene Regulatory Network (GRN), a descriptive model of gene interactions. Reverse engineering GRNs can reveal the complexity of gene interactions, whose comprehension can lead to several further insights. RNA-seq data provides better measurement of gene expression; however, it is difficult to infer GRNs from it because of its discreteness. Several methods have already been proposed to infer GRNs using RNA-seq data, but these methodologies are difficult to grasp. In this paper, a simple model is presented to infer GRNs using the RNA-seq-based co-expression map provided by the GeneFriends database, and a graph database tool is used to create the regulatory network. The obtained results show that it is convenient to use graph database tools to work with regulatory networks instead of developing a new model from scratch.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_39-A_Simple_Approach_for_Representation_of_Gene.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Genetic Algorithm with Tabu Search for Optimization of the Traveling Thief Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091138</link>
        <id>10.14569/IJACSA.2018.091138</id>
        <doi>10.14569/IJACSA.2018.091138</doi>
        <lastModDate>2018-11-30T11:47:48.7930000+00:00</lastModDate>
        
        <creator>Saad T Alharbi</creator>
        
        <subject>Combinatorial; hybrid approaches; genetic algorithm; optimization; tabu search; TTP </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Until now, several approaches such as evolutionary computing and heuristic methods have been presented to optimize the traveling thief problem (TTP). However, most of these approaches consider the TTP components independently, usually solving the traveling salesman problem (TSP) first and then tackling the knapsack problem (KP), despite their interdependent nature. In this paper, we investigate the use of a hybrid genetic algorithm (GA) and tabu search (TS) for the TTP. A novel hybrid genetic approach called GATS is proposed and compared with state-of-the-art approaches. The key aspect of GATS is that TTP solutions are constructed by firmly taking into account the interdependent nature of the TTP subcomponents, where all its operators are simultaneously applied to TSP and KP solutions. A comprehensive set of TTP benchmark datasets was adopted to investigate the effectiveness of GATS. We selected 540 instances for our investigation, comprising five different groups of cities (51, 52, 76, 100 and 150 cities) and different groupings of items, from 50 to 745 items. All knapsack types (uncorrelated, uncorrelated with similar weights, and bounded strongly correlated) with all different knapsack capacities were also taken into consideration. Different initialization methods were empirically investigated as well. The results of the computational experiments demonstrate that GATS is capable of surpassing the state-of-the-art results for various instances.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_38-A_Hybrid_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Lightweight Cryptographic Algorithm to Secure Resource-Constrained Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091137</link>
        <id>10.14569/IJACSA.2018.091137</id>
        <doi>10.14569/IJACSA.2018.091137</doi>
        <lastModDate>2018-11-30T11:47:47.5430000+00:00</lastModDate>
        
        <creator>Sohel Rana</creator>
        
        <creator>Saddam Hossain</creator>
        
        <creator>Hasan Imam Shoun</creator>
        
        <creator>Dr. Mohammod Abul Kashem</creator>
        
        <subject>Lightweight cryptography; IoT; RFID tags; genetic algorithm; feistel architecture; SP network; FELICS; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In recent years, small computing devices like embedded devices, wireless sensors, RFID (Radio Frequency Identification) tags, and Internet of Things (IoT) devices have been increasing rapidly. They are expected to generate massive amounts of sensitive data for controlling and monitoring purposes, but their resources and capabilities are limited. Since they also handle valuable private data, the security of these devices is of paramount importance, and a secure encryption algorithm is needed to protect them. Conventional encryption ciphers like RSA or AES are computationally expensive and require large memory, hindering the performance of such devices; simple encryption techniques, on the other hand, are easy to crack, compromising security. In this paper, a secure and efficient lightweight cryptographic algorithm for small computing devices is proposed. It is a symmetric-key block cipher employing a custom substitution-permutation (SP) network and a modified Feistel architecture, and it uses two basic concepts from genetic algorithms. FELICS, a Linux-based benchmark tool, is used for measurement, and MATLAB for encryption-quality testing. As an improvement over the existing algorithm, the proposed algorithm reduces the number of processing cycles while providing sufficient security.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_37-An_Effective_Lightweight_Cryptographic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noble Method for Data Hiding using Steganography Discrete Wavelet Transformation and Cryptography Triple Data Encryption Standard: DES</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091136</link>
        <id>10.14569/IJACSA.2018.091136</id>
        <doi>10.14569/IJACSA.2018.091136</doi>
        <lastModDate>2018-11-30T11:47:46.4670000+00:00</lastModDate>
        
        <creator>Cahya Rahmad</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Arief Prasetyo</creator>
        
        <creator>Novriza Arizki</creator>
        
        <subject>Data hiding; steganography; DWT; cryptography; 3DES</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A method for data hiding using steganography with Discrete Wavelet Transformation (DWT) and cryptography with the triple Data Encryption Standard (3DES) is proposed. In the current era, information technology has become inseparable from human life, especially regarding the processing and dissemination of information. Alongside advances in information technology, there are also parties who seek to abuse information by altering or even damaging it. To prevent this, the data first needs to be hidden inside other media using the DWT method, chosen because the image after data insertion closely resembles the original image. The triple DES method is also required to encrypt the data and provide additional security, so that hidden data is difficult to recover; it is chosen because it is resistant against brute-force, chosen-plaintext, and known-plaintext attacks. Based on the tests, the image insertion results are 100% immune to brightness and contrast manipulation, but not as resistant to cropping, resizing, and rotation. Other tests also indicate that the data embedded in the picture can be extracted again without any changes.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_36-Noble_Method_for_Data_Hiding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Amharic based Knowledge-Based System for Diagnosis and Treatment of Chronic Kidney Disease using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091135</link>
        <id>10.14569/IJACSA.2018.091135</id>
        <doi>10.14569/IJACSA.2018.091135</doi>
        <lastModDate>2018-11-30T11:47:45.9070000+00:00</lastModDate>
        
        <creator>Siraj Mohammed</creator>
        
        <creator>Tibebe Beshah</creator>
        
        <subject>Knowledge-based system; kidney diseases; machine learning; knowledge engineering; knowledge representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Chronic kidney disease is an important challenge for health systems around the world and consumes a huge proportion of health care finances. Around 85% of the world population lives in developing countries, where chronic kidney disease prevention programs are undeveloped. Treatment options for chronic kidney disease are not readily available in most countries of sub-Saharan Africa, including Ethiopia. Many rural and urban communities in Ethiopia have extremely limited access to medical advice, as medical experts are not readily available. To address such a problem, a medical knowledge-based system can play a significant role. Therefore, the aim of this research was to develop a self-learning knowledge-based system for diagnosis and treatment of the first three stages of kidney disease that can update its knowledge without the involvement of a knowledge engineer. In the development of this system, the following procedures were followed: a knowledge engineering research design was used to develop the prototype system; purposive sampling strategies were utilized to choose specialists; the knowledge was acquired using both structured and unstructured interviews and represented using production rules; the production rules were modeled using a decision tree approach; implementation employed Prolog tools; and testing and evaluation were performed through test cases and user acceptance methods. Finally, we extensively evaluated the prototype system through visual interactions and test cases. The results show that our approach outperforms current ones.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_35-Amharic_based_Knowledge_based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Signal Classification using Genetic Algorithm for Right-Left Motion Pattern</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091134</link>
        <id>10.14569/IJACSA.2018.091134</id>
        <doi>10.14569/IJACSA.2018.091134</doi>
        <lastModDate>2018-11-30T11:47:44.8470000+00:00</lastModDate>
        
        <creator>Cahya Rahmad</creator>
        
        <creator>Rudy Ariyanto</creator>
        
        <creator>Dika Rizky Yunianto</creator>
        
        <subject>Brain wave; EEG; genetic algorithm; classification; left right movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Brain signals, or EEG, are non-stationary signals that are difficult to analyze visually. The brain signal comprises five waves: alpha, beta, delta, gamma, and theta. Each wave has its own frequency band, describing the level of attention, alertness, character, and external stimuli. These five waves can be used to analyze stimulation patterns when turning left and right. A genetic algorithm is used to weight the five brain waves and combine them into one signal, since genetic algorithms can find the best signal for classification. In this paper, the EEG signal is classified to determine the right or left movement pattern. After the five brain waves are combined with the genetic algorithm, classification is performed using Logistic Regression, Linear Discriminant Analysis, K-Neighbors Classifier, Decision Tree, Gaussian Na&#239;ve Bayes, and Support Vector Machine. Among these six methods, the highest accuracy achieved is 56%, with SVM performing better than the others on this problem.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_34-Brain_Signal_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bound Model of Clustering and Classification (BMCC) for Proficient Performance Prediction of Didactical Outcomes of Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091133</link>
        <id>10.14569/IJACSA.2018.091133</id>
        <doi>10.14569/IJACSA.2018.091133</doi>
        <lastModDate>2018-11-30T11:47:44.2830000+00:00</lastModDate>
        
        <creator>Anoopkumar M</creator>
        
        <creator>A. M. J. Md. Zubair Rahman</creator>
        
        <subject>Classification; clustering; precision rate; accuracy; j48 decision tree; bagging; educational data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In this era of high-performance computing systems, large-scale data mining methodologies in the field of education have become a convenient way to discover and extract knowledge from the databases of educational archives. Typically, all educational institutions around the world maintain student data repositories containing attributes of students such as name, gender, age group (date of birth), religion, eligibility details, academic assessment details, etc. With this knowledge, this paper uses didactical data mining (DDM) to leverage the prediction of student performance and to analyse it proactively. Classification and clustering are the liveliest techniques for mining the required data. Hence, a Bound Model of Clustering and Classification (BMCC) is proposed in this research for more proficient educational data mining. Classification is one of the distinguished options in data mining for assigning an object to one of several pre-defined classes according to its attributes, and hence it is a supervised learning problem. On the other side, clustering is considered a non-supervised learning problem that involves grouping objects with respect to some similarities. Moreover, this paper uses a dataset collected from Kerala Technological University-SNG College of Engineering (KTU_SNG) for performing the BMCC. An efficient J48 decision tree algorithm is used for classification, the k-means algorithm is incorporated for clustering, and the model is optimised with Bootstrap Aggregation (Bagging). The implementation was done and analysed with the data mining tool WEKA (Waikato Environment for Knowledge Analysis), and the results are compared with some of the most used classifiers: the Bayes classifier (NB), a neural network (Multilayer Perceptron, MLP), and J48. The results show that the proposed model provides a high precision rate (PR), accuracy, and robustness with less computational time, even though the sample dataset includes some missing values.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_33-Bound_Model_of_Clustering_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Machine Learning Techniques for Classifying Cyclin-Dependent Kinase Inhibitors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091132</link>
        <id>10.14569/IJACSA.2018.091132</id>
        <doi>10.14569/IJACSA.2018.091132</doi>
        <lastModDate>2018-11-30T11:47:43.2400000+00:00</lastModDate>
        
        <creator>Ibrahim Z. Abdelbaky</creator>
        
        <creator>Ahmed F. Al-Sadek</creator>
        
        <creator>Amr A. Badr</creator>
        
        <subject>CDK inhibitors; random forest classification; genetic programming classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The importance of protein kinases has made them a target for many drug design studies. They play an essential role in cell cycle development and many other biological processes. Kinases are divided into different subfamilies according to the type and mode of their enzymatic activity. Computational studies targeting kinase inhibitor identification are widely considered for modelling kinase-inhibitor interactions. This modelling is expected to help in solving the selectivity problem arising from the high similarity between kinases and their binding profiles. In this study, we explore the ability of two machine-learning techniques to classify compounds as inhibitors or non-inhibitors for two members of the cyclin-dependent kinases, a subfamily of protein kinases. Random forest (RF) and genetic programming (GP) were used to classify CDK5 and CDK2 kinase inhibitors. This classification is based on calculated values of chemical descriptors. In addition, the response of the classifiers to adding prior information about compound promiscuity was investigated. The results from each classifier on the datasets were analyzed by calculating different accuracy measures and metrics: confusion matrices, accuracy, ROC curves, AUC values, F1 scores, and the Matthews correlation coefficient were obtained for the outputs. The analysis of these accuracy measures showed a better performance for the RF classifier in most cases. In addition, the results show that promiscuity information improves the classification accuracy, though its significant effect was notably clear with the GP classifiers.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_32-Applying_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview of Service and Deployment Models Offered by Cloud Computing, based on International Standard ISO/IEC 17788</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091131</link>
        <id>10.14569/IJACSA.2018.091131</id>
        <doi>10.14569/IJACSA.2018.091131</doi>
        <lastModDate>2018-11-30T11:47:42.1930000+00:00</lastModDate>
        
        <creator>Washington Garcia Quilachamin</creator>
        
        <creator>Igor Aguilar Alonso</creator>
        
        <creator>Jorge Herrera-Tapia</creator>
        
        <subject>Cloud computing service models; IT demand management; deployment models; applications; platform; infrastructure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Cloud computing offers services over the Internet to support business processes, based on deployment and service models, meeting business requirements in an efficient and cost-effective manner. A general picture of the types of service models it offers, as well as of the deployment models, is not widely known, so the following research questions are stated: Q1) How many studies refer to service models and deployment models in cloud computing? Q2) How are the service models classified in relation to the Application, Infrastructure, and Platform capability types in a cloud? Q3) What types of cloud computing deployment models currently exist? The objective of this paper is to investigate the service and deployment models that currently exist in cloud computing, for which a systematic literature review has been used as the research methodology. The results show that 45 service models and 4 deployment models were found in cloud computing, allowing us to conclude that the offered models provide many diverse solutions for business processes.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_31-Overview_of_Service_and_Deployment_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Flows Management and Control in Computer Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091130</link>
        <id>10.14569/IJACSA.2018.091130</id>
        <doi>10.14569/IJACSA.2018.091130</doi>
        <lastModDate>2018-11-30T11:47:41.2100000+00:00</lastModDate>
        
        <creator>Ahmad AbdulQadir AlRababah</creator>
        
        <subject>Data transmission; data stream; input output buffers; telecommunication devices; data packets; blocks of memory; switching matrix; high priority packets; bit stuffing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In computer networks, loss of data packets is inevitable because of buffer memory overflow at one or more of the nodes located on the path from the source to the receiver, including the latter. Such losses associated with overflows are hereinafter referred to as congestion of network nodes. There are many ways to prevent and eliminate overloads; these methods are, for the most part, based on the management of data flows. A special place is occupied by servicing packets according to their priorities. The ideas behind these solutions are simple enough to implement in the development of appropriate software and hardware for telecommunication devices. The article considers a number of original solutions to these problems at a level sufficient for the development of new generations of telecommunication devices and systems, such as allowing the transmission of a low-priority packet to be interrupted at practically any stage in order to transmit a high-priority packet and only then resume the interrupted transfer, as well as warning the data source in time about the threat of overloading one or several nodes along the route of data packet propagation.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_30-Data_Flows_Management_and_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent-Based Co-Modeling of Information Society and Wealth Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091129</link>
        <id>10.14569/IJACSA.2018.091129</id>
        <doi>10.14569/IJACSA.2018.091129</doi>
        <lastModDate>2018-11-30T11:47:40.6630000+00:00</lastModDate>
        
        <creator>Fay&#231;al Yahyaoui</creator>
        
        <creator>Mohamed Tkiouat</creator>
        
        <subject>Agent-based modeling; computational economics; multi-level; complex systems; parallel computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>With empirical studies suggesting that information technology influences wealth distribution in different ways, and with economic interactions and information technology adoption being two complex phenomena, there is a need for a simulation approach that addresses the whole complexity of the issue without being too computationally costly and without ignoring relevant empirical facts in defining the behavior of different agents. While this problem seems to require a bottom-up approach using agent-based modeling, further complexity in managing heterogeneous agents in space and time, and the need for an appropriate separation into domain areas, show its limitations in practice. In this paper we illustrate the use of novel multi-level agent-based concepts on this socio-economic issue by considering the studied phenomenon as an interference of multiple simpler phenomena, namely a basic producer/consumer economy and a diffusion-of-information model. Such an approach involves writing models in a formalism that allows compatibility and exchange of variables, in addition to implementing appropriate synchronization algorithms. Our simulation used LevelSpace, a recent extension of the NetLogo simulation tool, combined with data exploration tools, but the patterns described are generic and can be implemented in other simulation tools. Indeed, our case study offers a building block for a framework that can investigate wealth dynamics and other analogous cases with influence between models. Our approach validates successfully against empirical macro-trends in the distribution of wealth and other social patterns. Thanks to its flexibility in conducting experiments, we could relax the hypotheses that restricted previous models from conducting a multi-dimensional analysis of the Gini index, enabling the resolution of conflicting research issues.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_29-Agent_based_Co_Modeling_of_Information_Society.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Image Cipher using 2D Logistic Mapping and Singular Value Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091128</link>
        <id>10.14569/IJACSA.2018.091128</id>
        <doi>10.14569/IJACSA.2018.091128</doi>
        <lastModDate>2018-11-30T11:47:39.6030000+00:00</lastModDate>
        
        <creator>Mohammed A. AlZain</creator>
        
        <subject>Image cipher; 2D-CL; SVD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper proposes an efficient image cryptosystem that depends on the utilization of a two-dimensional (2D) chaotic logistic map (CLM) and singular value decomposition (SVD). The encryption process starts with a confusion stage that applies the 2D-CLM to the input plainimage. The resulting logistic-transformed image is then decomposed using the SVD technique into three ciphered components: the horizontal, vertical, and diagonal components. These ciphered components are then transmitted to the destination, which applies the reverse procedure to reconstruct the original plainimage. A suite of encryption quality tests is performed to investigate the proposed 2D-CLM-based SVD image cipher. The obtained test results confirm the efficiency of the proposed 2D-CLM-based SVD image cipher.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_28-Efficient_Image_Cipher_using_2D_Logistic_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between Merge and Quick Sort Algorithms in Data Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091127</link>
        <id>10.14569/IJACSA.2018.091127</id>
        <doi>10.14569/IJACSA.2018.091127</doi>
        <lastModDate>2018-11-30T11:47:38.5730000+00:00</lastModDate>
        
        <creator>Irfan Ali</creator>
        
        <creator>Haque Nawaz</creator>
        
        <creator>Imran Khan</creator>
        
        <creator>Abdullah Maitlo</creator>
        
        <creator>M. Ameen Chhajro</creator>
        
        <creator>M. Malook Rind</creator>
        
        <subject>Performance; analysis of algorithm; merge sort; quick sort; complexity; time and space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In the field of computer science, one of the basic operations is sorting, and many sorting operations use intermediate steps. Sorting is the procedure of ordering a list of elements in ascending or descending order with the help of a key value. Many sorting algorithms have been designed and are in use. This paper presents a performance comparison between two sorting algorithms, merge sort and quick sort, and produces an evaluation based on their performance with respect to time and space complexity. Both algorithms are vital and have been studied for a long period, but the question remains which of them to use and when; therefore this research study was carried out. Each algorithm resolves the problem of sorting data with a unique method. This study offers a complete examination of how both algorithms perform their operation and then distinguishes them based on various constraints to arrive at an outcome.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_27-Performance_Comparison_between_Merge_and_Quick.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Context-Sensitive Approach to Find Optimum Language Model for Automatic Bangla Spelling Correction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091126</link>
        <id>10.14569/IJACSA.2018.091126</id>
        <doi>10.14569/IJACSA.2018.091126</doi>
        <lastModDate>2018-11-30T11:47:37.5300000+00:00</lastModDate>
        
        <creator>Muhammad Ifte Khairul Islam</creator>
        
        <creator>Md. Tarek Habib</creator>
        
        <creator>Md. Sadekur Rahman</creator>
        
        <creator>Md. Riazur Rahman</creator>
        
        <creator>Farruk Ahmed</creator>
        
        <subject>Spelling correction; non-word error; N-gram; edit distance; magnifying search; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Automated spelling correction is an important phenomenon in typing that has a significant effect in aiding both literate and semi-literate people using a keyboard or other similar devices. Such automated spelling correction also helps students significantly in the learning process by applying proper words during word processing. A lot of work has been conducted for the English language, but for Bangla it is still not adequate, and all work done so far in Bangla is context-free. Bangla is one of the most widely spoken languages (3.05% of the world population) and is considered the seventh language in the world. In this paper, we propose a context-sensitive approach to automated spelling correction in Bangla. We make combined use of edit distance and a stochastic language model, i.e., an N-gram model, using six N-gram models in total. A novel approach is deployed in order to find the optimum language model in terms of performance. In addition, to obtain better performance, a large Bangla corpus of different word types is used. We have achieved a satisfactory and promising accuracy of 87.58%.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_26-A_Context_Sensitive_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Classification Using Neural Network and Na&#239;ve Bayes Methods for Network Forensics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091125</link>
        <id>10.14569/IJACSA.2018.091125</id>
        <doi>10.14569/IJACSA.2018.091125</doi>
        <lastModDate>2018-11-30T11:47:36.4830000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Faizin Ridho</creator>
        
        <subject>DDoS; IDS; neural network; na&#239;ve bayes; network forensics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Distributed Denial of Service (DDoS) is a network security problem that continues to grow dynamically and has increased significantly to date. DDoS is a type of attack carried out by draining the available resources in the network, flooding it with packets at a significant intensity so that the system becomes overloaded and stops. These attacks result in enormous losses for institutions and companies engaged in online services. Prolonged downtime and substantial recovery costs are additional losses for the company due to loss of integrity. Damaging, disrupting, or stealing data, and anything else detrimental to the system owner on a computer network, are illegal acts that can be prosecuted in court, and criminals can be punished based on the evidence found through network forensics mechanisms. In this study, DDoS attacks are classified based on network traffic activity using the neural network and na&#239;ve Bayes methods. Based on the experiments conducted, the artificial neural network achieved an accuracy of 95.23% and na&#239;ve Bayes 99.9%. The experimental results show that the na&#239;ve Bayes method performs better than the neural network. The results of the experiment and analysis can be used as evidence in the trial process.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_25-DDoS_Classification_using_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Augmentation for Traffic Signs: A Case Study of Pakistani Traffic Signs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091124</link>
        <id>10.14569/IJACSA.2018.091124</id>
        <doi>10.14569/IJACSA.2018.091124</doi>
        <lastModDate>2018-11-30T11:47:35.9370000+00:00</lastModDate>
        
        <creator>Abdul Wahab</creator>
        
        <creator>Aurangzeb Khan</creator>
        
        <creator>Ihsan Rabbi</creator>
        
        <creator>Khairullah Khan</creator>
        
        <creator>Nasir Gul</creator>
        
        <subject>Augmented reality; traffic sign detection; traffic sign recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Augmented Reality (AR) extends the appearance of the real world by adding digital information to the scene using computer graphics and image processing techniques. Various approaches have been used to detect, identify, and track objects in the real environment, depending on the application, the shape of the tracked object, and the type of environment. Marker-based tracking is the most commonly used method in augmented reality applications, in which fiducial markers are placed in the real world for tracking. In this work we propose a model to detect and identify traffic signs through a marker-based technique to improve the usability of marker-based detection in augmented reality applications. We developed an AR application that can detect and recognize markers designed for Pakistani traffic signs and augment them with a voice alert to the driver so that the driver can prepare for the upcoming hazard on the road. As identified in the literature, no work has been performed on the augmentation of voice for traffic signs. Experiments show that the model outperforms baseline techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_24-Audio_Augmentation_for_Traffic_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture for 5G Ultra Dense Heterogeneous Cellular Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091123</link>
        <id>10.14569/IJACSA.2018.091123</id>
        <doi>10.14569/IJACSA.2018.091123</doi>
        <lastModDate>2018-11-30T11:47:34.9230000+00:00</lastModDate>
        
        <creator>Sabeen Tahir</creator>
        
        <subject>Massive MIMO-OFDMA; 5G; IP-based interoperability; heterogeneous</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The mounting use of wireless devices and the wide range of applications in ultra-dense heterogeneous wireless networks have led to challenging circumstances that could not be handled up to 4G. In order to deal with these critical challenges, the fifth generation (5G) wireless network architecture requires an efficient, well-organized wireless network. In this paper, a novel architecture for the 5G ultra-dense heterogeneous cellular network is proposed. The proposed architecture considers two main aspects: massive MIMO-OFDMA and IP-based vertical handover. In order to achieve full network coverage, the whole macro area network is divided into microcells, and each microcell is further divided into smaller cells. The heterogeneity of different types of base stations (macro area network base station, micro-cell base station, and small-cell base station) provides efficient network coverage. By reducing the cell area, frequency efficiency and network coverage are improved. All base stations are equipped with massive MIMO-OFDMA antennas and different radio access technologies, so a single wireless device can switch from one radio access technology to another. In order to prevent link disconnection and preserve the IP address, whenever a wireless device needs to perform a vertical handover, a new connection is first established with the new radio access technology; notably, the same IP address is kept while the current connection is disconnected. By utilizing IP-based vertical handover, the new 5G wireless network can achieve the principal goal of service continuity and minimize handover processing delay. The simulation results show an improvement in network performance.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_23-A_Novel_Architecture_for_5G_Ultra_Dense.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability Testing for Crop and Farmer Activity Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091122</link>
        <id>10.14569/IJACSA.2018.091122</id>
        <doi>10.14569/IJACSA.2018.091122</doi>
        <lastModDate>2018-11-30T11:47:34.3630000+00:00</lastModDate>
        
        <creator>Halim Budi Santoso</creator>
        
        <creator>Rosa Delima</creator>
        
        <creator>Emylia Intan Listyaningsih</creator>
        
        <creator>Argo Wibowo</creator>
        
        <subject>Usability testing; crop and activity information system; improvement recommendation; precision farming; information technology for agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The usability level of an information system depends on its acceptance by users and how convenient it is to operate. One method to measure usability is usability testing. This article elaborates usability testing for the Crop and Farmer Activity Information System. This system is one of the agriculture information systems developed to record activities for each farm field, and it exemplifies the important role of Information and Communication Technology (ICT) in agriculture. The system has been under development since 2017 and needs to be assessed and tested. To assess the system, usability testing was conducted by sampling from two regions in Central Java, Temanggung and Gombong. The respondents are system administrators, farmers, and general users, each group with different criteria. There were 58 respondents in this research: 49 farmers, 3 system administrators, and 6 general users. Usability testing was carried out by giving respondents several test tasks based on the system, with each respondent receiving a different kind of test task in accordance with the system functionality for that user. The test found that the system administrators&#8217; user-interface assessment gained an average percentage of 69%, while the farmers gained 76% and the general users 79%. The test also produced some recommendations for system refinement, taken from user inputs and user test results. The recommendations have been made to bring about a better system environment.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_22-Usability_Testing_for_Crop_and_Farmer_Activity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implicit Thinking Knowledge Injection Framework for Agile Requirements Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091121</link>
        <id>10.14569/IJACSA.2018.091121</id>
        <doi>10.14569/IJACSA.2018.091121</doi>
        <lastModDate>2018-11-30T11:47:33.8170000+00:00</lastModDate>
        
        <creator>Kaiss Elghariani</creator>
        
        <creator>Nazri Kama</creator>
        
        <creator>Nurulhuda Firdaus Mohd Azmi</creator>
        
        <creator>Nur Azaliah Abu bakar</creator>
        
        <subject>Software development methodology; agile methodology; requirements engineering; requirements documentation; implicit thinking documentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Agile has become a commonly used software development methodology, and its success depends on the face-to-face communication of software developers and faster software product delivery. Implicit thinking knowledge is considered very significant for organizational self-learning. The main goal of paying attention to managing implicit thinking knowledge is to retrieve valuable information about how the software is developed. However, requirements documentation is a challenging task for Agile software engineers. Current Agile requirements documentation does not incorporate implicit thinking knowledge with the values it intends to achieve in the software project. This research addresses this issue and introduces a framework that assists in injecting implicit thinking knowledge into Agile requirements engineering. An experiment using a survey questionnaire and a case study of a real project was implemented for the framework evaluation. The results show that the framework enables software engineers to share and document their implicit thinking knowledge during Agile requirements documentation.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_21-Implicit_Thinking_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BAAC: Bangor Arabic Annotated Corpus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091120</link>
        <id>10.14569/IJACSA.2018.091120</id>
        <doi>10.14569/IJACSA.2018.091120</doi>
        <lastModDate>2018-11-30T11:47:32.7870000+00:00</lastModDate>
        
        <creator>Ibrahim S Alkhazi</creator>
        
        <creator>William J. Teahan</creator>
        
        <subject>Arabic language; corpus; annotated corpora; analysis results</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper describes the creation of the new Bangor Arabic Annotated Corpus (BAAC), a Modern Standard Arabic (MSA) corpus that comprises 50K words manually annotated with part-of-speech tags. To evaluate the quality of the corpus, the Kappa coefficient and a direct percent agreement for each tag were calculated for the new corpus; a Kappa value of 0.956 was obtained, with an average observed agreement of 94.25%. The corpus was used to evaluate the widely used Madamira Arabic part-of-speech tagger and to further investigate compression models for text compressed using part-of-speech tags. In addition, a new annotation tool was developed and employed for the annotation process of BAAC.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_20-BAAC_Bangor_Arabic_Annotated_Corpus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment Method for Insider Threats in Cyber Security: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091119</link>
        <id>10.14569/IJACSA.2018.091119</id>
        <doi>10.14569/IJACSA.2018.091119</doi>
        <lastModDate>2018-11-30T11:47:32.2400000+00:00</lastModDate>
        
        <creator>Nurul Akmal Hashim</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>A.P. Puvanasvaran</creator>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <subject>Insider threats; manufacturing; risk assessment; cyber security; threats; risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A major challenge in manufacturing today is managing large-scale cybersecurity systems, which are potentially exposed to a multitude of threats. The riskiest of these are insider threats. An insider threat arises when a person authorized to perform certain actions in an organization decides to mishandle that trust and harm the organization. To overcome these risks, this study evaluates various risk assessment methods for assessing the impact of insider threats and analyses the current gaps in risk assessment methods. Based on a manual literature search, we compare four methods: NIST, FRAP, OCTAVE, and CRAMM. The result of the study shows that the method most used by organizations is the NIST method, because NIST combines the involvement of human and system in terms of data collection. The significance of this study is its contribution to developing a new method for analyzing threats that can be used in any organization.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_19-Risk_Assessment_Method_for_Insider_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Overcurrent Relays Coordination using an Improved Grey Wolf Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091118</link>
        <id>10.14569/IJACSA.2018.091118</id>
        <doi>10.14569/IJACSA.2018.091118</doi>
        <lastModDate>2018-11-30T11:47:31.1800000+00:00</lastModDate>
        
        <creator>Noor Zaihah Jamal</creator>
        
        <creator>Mohd Herwan Sulaiman</creator>
        
        <creator>Omar Aliman</creator>
        
        <creator>Zuriani Mustaffa</creator>
        
        <subject>Time multiplier setting (TMS); plug setting (PS); grey wolf optimization algorithm (GWO); overcurrent relay coordination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Recently, nature-inspired algorithms (NIA) have been applied to various fields of optimization problems. In this paper, the implementation of an NIA to solve the overcurrent relay coordination problem is reported. The purpose is to find the optimal values of the Time Multiplier Setting (TMS) and Plug Setting (PS) in order to minimize the primary relays&#8217; operating time for near-end faults. The optimization is performed using the Improved Grey Wolf Optimization (IGWO) algorithm. Some modifications to the original GWO have been made to improve the candidates&#8217; exploration ability. Comprehensive simulation studies have been performed to demonstrate the reliability and efficiency of the proposed modification compared to the conventional GWO and some well-known algorithms. The generated results confirm that the proposed IGWO is able to optimize the objective function of the overcurrent relay coordination problem.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_18-Optimal_Overcurrent_Relays_Coordination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Extraction of Large Scale Scanned Document Images using Google Vision OCR in Apache Hadoop Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091117</link>
        <id>10.14569/IJACSA.2018.091117</id>
        <doi>10.14569/IJACSA.2018.091117</doi>
        <lastModDate>2018-11-30T11:47:30.1330000+00:00</lastModDate>
        
        <creator>Rifiana Arief</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <creator>Tubagus Maulana Kusuma</creator>
        
        <creator>Hustinawaty</creator>
        
        <subject>Automation; extraction; google vision OCR; hadoop; scanned document images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Digitalization of documents is now being done in all fields to reduce paper usage. The availability of modern technology in the form of scanners and cameras supports the growth of multimedia data, especially documents stored as image files. Searching for a particular text in large-scale scanned document images is a difficult task if the document is in image form and the text has not been extracted. In this research, a text extraction method for large-scale scanned document images using Google Vision OCR on the Hadoop architecture is proposed. The objects of the research are student thesis documents, including the cover page, the approval page, and the abstract, all stored in the university&#39;s digital library. The extraction process begins by preparing the input folder that contains image documents (in JPEG format) in Apache Hadoop HDFS, followed by reading each image document. The image document is then extracted using Google Vision OCR to obtain a text document (in TXT format), and the result is saved to an output folder in the Hadoop Distributed File System (HDFS). The same process is repeated for all documents in the folder. Test results have shown that the proposed method was able to extract all test documents successfully. The recognition process achieved 100% accuracy, and extraction is twice as fast as manual extraction. Google Vision OCR also shows better extraction performance compared to other OCR tools. The proposed automated extraction system can recognize text in large-scale image documents accurately and can be operated in a real-time environment.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_17-Automated_Extraction_of_Large_Scale_Scanned_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study of App Permissions: A User Protection Motivation Behaviour</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091116</link>
        <id>10.14569/IJACSA.2018.091116</id>
        <doi>10.14569/IJACSA.2018.091116</doi>
        <lastModDate>2018-11-30T11:47:29.0730000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Harin Puspa Ayu Catherina</creator>
        
        <subject>Protection motivation theory (pmt); application permission; smartphone; structural equation modeling (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The smartphone is a telecommunications medium that can be used anytime and anywhere. To support their activities, smartphone users install applications on their devices. When installing an application, permissions are presented describing the data that will be collected. However, many users choose to ignore and not read the app permissions because they are too long or difficult to understand; hence they accept the permissions without thinking, which consequently leads to security problems. This study aims to determine the factors that affect whether users read the app permissions provided by an application before installing it. Data were collected from 292 respondents who were active smartphone users. The data analysis method used is Structural Equation Modeling (SEM). The results of the study show that the factors that influence users in reading app permissions before installing an application are coping self-efficacy and personal responsibility.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_16-An_Empirical_Study_of_App_Permissions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Technique for Tunneling Mechanism of IPv6 using Teredo and 6RD to Enhance the Network Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091115</link>
        <id>10.14569/IJACSA.2018.091115</id>
        <doi>10.14569/IJACSA.2018.091115</doi>
        <lastModDate>2018-11-30T11:47:28.5270000+00:00</lastModDate>
        
        <creator>Zulfiqar Ali Zardari</creator>
        
        <creator>Munwar Ali</creator>
        
        <creator>Reehan Ali Shah</creator>
        
        <creator>Ladha Hussain Zardari</creator>
        
        <subject>Hybrid network; IPv4; IPv6; 6RD; teredo tunneling mechanism; network performance jumbo frames</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Internet Protocol version 4 (IPv4) addresses have now been depleted. Many Internet Service Providers (ISPs), researchers, and end users are migrating from IPv4 to IPv6 due to the strong features of IPv6 and the limitations of IPv4. Different tunneling techniques have been deployed to migrate ordinary users to IPv6. However, these techniques create many issues, such as compatibility, complexity, connectivity, and traffic. Due to the dissimilar header structures and incompatibility of IPv4 and IPv6, devices are unable to communicate with each other because they do not support IPv6 addresses directly. Network performance is also compromised by the huge increase in data transmission traffic. In this paper, we propose a technique that provides full IPv6 connectivity and enhances network performance by combining two tunneling techniques, IPv6 Rapid Deployment (6RD) and Teredo. To increase the throughput of the network, jumbo frames are used to carry large amounts of data. The main objective of using both techniques is to provide a hybrid network offering full IPv6 connectivity. The proposed technique provides not only IPv6 connectivity but also better network performance. Simulation results show that throughput and packet delivery ratio achieve maximum gains of 9000 bytes and 98%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_15-A_Hybrid_Technique_for_Tunneling_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking Method in Group Decision Support to Determine the Regional Prioritized Areas and Leading Sectors using Garrett Score</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091114</link>
        <id>10.14569/IJACSA.2018.091114</id>
        <doi>10.14569/IJACSA.2018.091114</doi>
        <lastModDate>2018-11-30T11:47:27.4970000+00:00</lastModDate>
        
        <creator>Heru Ismanto</creator>
        
        <creator>Suharto</creator>
        
        <creator>Azhari</creator>
        
        <creator>Lincolin Arsyad</creator>
        
        <subject>Priority area; leading sector; garrett; decision group support; spearman correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The main objective of regional development is to achieve equal development across different regions. However, the long duration and complexity of the process may leave some regions underdeveloped. To achieve a fair development process for each region, a standard approach must be developed to select suitable priority areas that can support other underdeveloped regions requiring attention. One such approach is to determine the prioritized areas and the leading sectors within them, so that these areas can support other regions that still need attention and handling in development priorities. This research provides a new alternative for determining prioritized areas, not only by observing development data but also by involving decision-making components consisting of government and community (including non-governmental organizations and academicians). The study used a group decision support approach with the Garrett ranking technique. The results of determining the prioritized areas using the Garrett score show that 5 of the 29 Regencies/Municipalities in Papua Province can serve as prioritized areas, namely Jayapura Regency, Jayapura Municipality, Mimika, Merauke, and Nabire. There are three leading sectors for development: agriculture, mining, and industry and processing. The ranking results were tested by calculating Spearman&#39;s correlation coefficient on the Garrett ranking results, yielding a coefficient of 0.807, which means the ranking results are very strong.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_14-Ranking_Method_in_Group_Decision_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing A Model for Predicting the Speech Intelligibility of South Korean Children with Cochlear Implantation using a Random Forest Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091113</link>
        <id>10.14569/IJACSA.2018.091113</id>
        <doi>10.14569/IJACSA.2018.091113</doi>
        <lastModDate>2018-11-30T11:47:26.4670000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Random forest; hearing impairment; vocabulary index; speech intelligibility; risk factor; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The random forest technique, a tree-based learning model, predicts outcomes using random decision trees built on the bootstrap technique; it therefore offers high predictive power and fewer errors. This study aimed to provide baseline data for language therapy after cochlear implantation by identifying the factors associated with the speech intelligibility of children with cochlear implants. The study evaluated the factors associated with the articulation accuracy of 82 hearing-impaired children who lived in the Seoul, Incheon, and Suwon areas, were between 4 and 8 years old, and had worn a cochlear implant for at least one year and less than five years. Explanatory variables included gender, age, household income, the wear time of a cochlear implant, vocabulary index, and corrected hearing. Speech intelligibility was analyzed using the &#39;speech intelligibility test tool&#39; composed of nine sentences. The predictive model for the speech intelligibility of children with cochlear implants was developed using random forest. The major predictors of articulation accuracy were the wear time of a cochlear implant, the time since cochlear implantation, vocabulary, household income, age, and gender, in order of magnitude. The final error rate of the random forest model, developed by generating 500 bootstrap samples, was 0.22, and the prediction rate was 78.8%. These results suggest that it is necessary to implement cochlear implantation and to develop a customized aural rehabilitation program that considers the linguistic ability of the subject in order to enhance the speech intelligibility of children with cochlear implants.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_13-Developing_a_Model_for_Predicting_the_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice Pathology Recognition and Classification using Noise Related Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091112</link>
        <id>10.14569/IJACSA.2018.091112</id>
        <doi>10.14569/IJACSA.2018.091112</doi>
        <lastModDate>2018-11-30T11:47:25.9070000+00:00</lastModDate>
        
        <creator>HAMDI Rabeh</creator>
        
        <creator>HAJJI Salah</creator>
        
        <creator>CHERIF Adnane</creator>
        
        <subject>HTK; MFCC; MEEI; SVD; pathological voices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Nowadays, diseases of the voice are increasing because of bad social habits and misuse of the voice. These pathologies should be treated from the beginning, before they affect the quality of the voice as heard by a listener. The most useful tool for diagnosing such diseases is acoustic analysis. In this work, we present new expression parameters to clarify the description of the vocal signal. These parameters help to classify unhealthy voices. They essentially describe the fundamental frequency F0, the Harmonics-to-Noise Ratio (HNR), the Noise-to-Harmonics Ratio (NHR), and Detrended Fluctuation Analysis (DFA). Classification is performed on two pathological databases, Saarbruecken Voice and MEEI, using HTK classifiers. We consider two different classification types: the first is a binary classification into normal and pathological voices; the second is a four-category classification into spasmodic, polyp, nodule, and normal voices for female and male speakers. We also studied the effects of these new parameters when combined with the MFCC, Delta, Delta-second, and Energy coefficients.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_12-Voice_Pathology_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing Locations of Mobile Nodes in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091111</link>
        <id>10.14569/IJACSA.2018.091111</id>
        <doi>10.14569/IJACSA.2018.091111</doi>
        <lastModDate>2018-11-30T11:47:24.8130000+00:00</lastModDate>
        
        <creator>Sultan Alkhliwi</creator>
        
        <subject>Wireless mesh networks; hierarchical mobile IPv6 protocol; authentication; secret key; Scyther tool; OPNET simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>The current deployment of wireless mesh networks requires mobility management to track the current locations of mobile nodes around the network without service interruption. To do so, the Hierarchical Mobile IPv6 protocol has been chosen, which minimises the required signalling by introducing a new entity called the mobile anchor point, acting as a local home agent for all visiting mobile nodes in a specific domain. It allows a mobile node to register its local/regional care-of addresses with a mobile anchor point by sending a local binding update message. However, the local binding update is quite sensitive, since it modifies the routing that enables mobility in wireless mesh networks. When a local binding update message is spoofed, an attacker can redirect traffic destined for a legitimate mobile node either to itself or to another node. This situation leads to an increased risk of attacks. Therefore, this paper addresses this security issue in wireless mesh networks through cryptographic generation and verification of a mobile node’s local and regional care-of addresses, along with a novel method to verify the reachability of a mobile node at its claimed local care-of address. This is called the enhanced mobile anchor point registration protocol. The Scyther tool has been used to verify the accuracy of the proposed protocol. Furthermore, its performance, in terms of mobile anchor point registration delay and signalling overhead, is evaluated using the OPNET modeller simulator.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_11-Securing_Locations_of_Mobile_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Internet of Things-Based Air Quality Device for Smart Pollution Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091110</link>
        <id>10.14569/IJACSA.2018.091110</id>
        <doi>10.14569/IJACSA.2018.091110</doi>
        <lastModDate>2018-11-30T11:47:23.7870000+00:00</lastModDate>
        
        <creator>Nurul Azma Zakaria</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Norharyati Harum</creator>
        
        <creator>Low Chen Hau</creator>
        
        <creator>Nabeel Saleh Ali</creator>
        
        <creator>Fairul Azni Jafar</creator>
        
        <subject>Internet of things (IoT); single-board computer; cloud storage; smart pollution monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Living in a healthy environment, whether indoor or outdoor, is a need for every human being. However, pollution occurs everywhere, and most people are mindful only of the importance of having clean outdoor air to breathe and are not concerned about indoor air quality. Indoor air quality refers to the air quality within a building and relates to the health and comfort of the building occupants. Dangerous particles exist in the outside air, pollute the indoor environment, and produce harmful conditions as the polluted air travels into the house or building through windows or doors. Therefore, a wireless Internet of Things-based air quality device is developed to monitor air quality in the indoor environment. The proposed system integrates a low-cost air quality sensor, temperature and humidity sensors, a single-board computer (Raspberry Pi 2 microprocessor), and cloud storage. The system provides real-time air quality readings, transfers the data through a wireless network to the Internet, and displays the data on a dedicated webpage. Furthermore, it stores records in cloud storage and sends an e-mail notification message to the user when an unhealthy condition is met. The study has a significant impact on promoting an affordable and portable smart pollution monitoring system, as the device is developed using low-cost, off-the-shelf components.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_10-Wireless_Internet_of_Things_based_Air_Quality_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coverage-Aware Protocols in Wireless Sensor Networks: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091109</link>
        <id>10.14569/IJACSA.2018.091109</id>
        <doi>10.14569/IJACSA.2018.091109</doi>
        <lastModDate>2018-11-30T11:47:22.7700000+00:00</lastModDate>
        
        <creator>Jahangir Khan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Babar Nawaz</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <subject>Wireless sensor network; coverage area; energy efficiency; coverage protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Coverage plays a vital role in wireless sensor networks (WSNs), since it is one of the important measures of sensor network performance. Sensor nodes in a WSN have limited power and energy resources, so energy efficiency is an essential factor that should be considered alongside coverage when designing coverage protocols. During the past few years, researchers have made many efforts to design different coverage-aware protocols. Different coverage-aware protocols may take different approaches to solving the coverage problem and efficiently utilizing energy among sensor nodes. In this paper, we present a review of coverage-aware protocols, highlighting their functionalities.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_9-Coverage_Aware_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Industrial Internet of Things as a Challenge for Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091108</link>
        <id>10.14569/IJACSA.2018.091108</id>
        <doi>10.14569/IJACSA.2018.091108</doi>
        <lastModDate>2018-11-30T11:47:21.7570000+00:00</lastModDate>
        
        <creator>Corneliu Octavian Turcu</creator>
        
        <creator>Cristina Elena Turcu</creator>
        
        <subject>Industry 4.0; industrial internet of things; internet of things; higher education; skills gap</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>This paper aims to examine the adoption of the Internet of Things (IoT) in industry (the so-called Industrial Internet of Things, or IIoT) and the requirements for higher education in the times of the fourth industrial revolution. The addition of the letter &quot;I&quot; in front of &quot;IoT&quot; coins the name of the new concept, &quot;IIoT&quot;, in relation to another term, &quot;Industry 4.0&quot;. Because these concepts have no precise and widely accepted definitions, we present some considered relevant in the scientific literature. The paper also highlights the most important similarities and differences between these concepts. IIoT is a very dynamic concept: it will constantly bring changes in digital technologies, requirements, and markets, and it will transform industries and business practices. According to manifold studies, there is currently a skills gap that may widen in the future if no action is taken. Higher education must adopt the latest related technologies and adapt to the new ways in which people, machines, services, and data can interact. Consequently, employees, students, graduates, etc., have to be equally dynamic in learning and acquiring new skills. The transition from higher education to employment is a challenge that could be more easily addressed through the efforts of all stakeholders, from individuals to organizations, and from businesses to governments. As changes in higher education take time, all stakeholders must now act to prepare for the Industrial Internet of Things.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_8-Industrial_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Study on an Efficient Dengue Disease Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091107</link>
        <id>10.14569/IJACSA.2018.091107</id>
        <doi>10.14569/IJACSA.2018.091107</doi>
        <lastModDate>2018-11-30T11:47:20.7270000+00:00</lastModDate>
        
        <creator>J M.M.C Jayasuriya</creator>
        
        <creator>G.K.K.T.Galappaththi</creator>
        
        <creator>M.A.Dilupa Sampath</creator>
        
        <creator>H.N.Nipunika</creator>
        
        <creator>W.H. Rankothge</creator>
        
        <subject>Optimization; genetic algorithm; iterated local search; algorithm comparison; nurse scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Dengue has become a serious health hazard in Sri Lanka, with increasing cases and loss of human lives. It is necessary to develop an efficient dengue disease management system that can predict dengue outbreaks, plan countermeasures accordingly, and allocate resources for those countermeasures. We have proposed a platform for dengue disease management with the following modules: (1) a prediction module to predict dengue outbreaks, and (2) an optimization algorithm module to optimize hospital staffing according to the predictions made on future dengue patient counts. This paper focuses on the optimization algorithm module, which has been developed based on two approaches: (1) Genetic Algorithm (GA) and (2) Iterated Local Search (ILS). We present the performance of our optimization algorithm module with a comparison of the two approaches. Our results show that the GA approach is much more efficient and faster than the ILS approach.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_7-Experimental_Study_on_an_Efficient_Dengue_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a basic Sonar of Echolocation for Education in Telecommunications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091106</link>
        <id>10.14569/IJACSA.2018.091106</id>
        <doi>10.14569/IJACSA.2018.091106</doi>
        <lastModDate>2018-11-30T11:47:19.6970000+00:00</lastModDate>
        
        <creator>Fredy Criollo-S&#225;nchez</creator>
        
        <creator>Rodr&#237;guez-Villarreal, Kevin</creator>
        
        <creator>Mosquera-Sanchez, Cristian</creator>
        
        <creator>Medina-Alvarez, Marco</creator>
        
        <creator>Chavarry-Polanco, Danny</creator>
        
        <creator>Alvarado-D&#237;az, Witman</creator>
        
        <creator>Meneses-Claudio, Brian</creator>
        
        <creator>Roman-Gonzalez, Avid</creator>
        
        <subject>Arduino; processing software; sonar; ultrasonic sensor; servomotor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Currently, having an echolocation sonar in an electronics lab is complicated due to the high cost of its implementation, which is why the implementation of a basic sonar is proposed using accessible technologies such as the Arduino, together with a servomotor and an ultrasonic sensor, which is responsible for detecting the distance at which objects are located. The Arduino controls the servomotor movements, which range between 15 and 165 degrees, and sends the information through the serial port to a computer, where the data are processed and displayed using the Processing software.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_6-Implementation_of_a_Basic_Sonar_of_Echolocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Intelligent Model for Enhancing Healthcare Services on Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091105</link>
        <id>10.14569/IJACSA.2018.091105</id>
        <doi>10.14569/IJACSA.2018.091105</doi>
        <lastModDate>2018-11-30T11:47:18.6530000+00:00</lastModDate>
        
        <creator>Ahmed Abdelaziz</creator>
        
        <creator>Ahmed S. Salama</creator>
        
        <creator>A.M. Riad</creator>
        
        <subject>Chronic kidney disease; linear regression; neural network; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Cloud computing plays a major role in addressing the challenges of healthcare services, such as diagnosis of diseases, telemedicine, and maximizing the utilization of medical resources. Early detection of chronic kidney disease in a cloud environment is a big challenge facing healthcare providers. This paper concentrates on the use of intelligent techniques such as Decision Tree, Clustering, Linear Regression, Modular Neural Network, and Back Propagation Neural Network to address this challenge. The researchers propose a hybrid intelligent model based on cloud computing for the early detection of chronic kidney disease. Two intelligent techniques were used: linear regression and a neural network. Linear regression was used to identify the crucial factors that have an impact on chronic kidney disease, and the model for early detection of chronic kidney disease was built using a neural network. The accuracy of the proposed model is 97.8%. This model outperforms the models in previous works in terms of accuracy, precision, recall, and F1 score.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_5-A_Hybrid_Intelligent_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anomaly Detection with Machine Learning and Graph Databases in Fraud Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091104</link>
        <id>10.14569/IJACSA.2018.091104</id>
        <doi>10.14569/IJACSA.2018.091104</doi>
        <lastModDate>2018-11-30T11:47:17.6230000+00:00</lastModDate>
        
        <creator>Shamil Magomedov</creator>
        
        <creator>Sergei Pavelyev</creator>
        
        <creator>Irina Ivanova</creator>
        
        <creator>Alexey Dobrotvorsky</creator>
        
        <creator>Marina Khrestina</creator>
        
        <creator>Timur Yusubaliev</creator>
        
        <subject>Data analysis; machine learning; graph database; fraud detection; anti-money laundering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>In this paper, the task of fraud detection using methods of data analysis and machine learning based on social and transaction graphs is considered. Algorithms for feature calculation, outlier detection, and identifying specific sub-graph patterns are proposed. A software realization of the proposed algorithms is described, and the results of an experimental study of the algorithms on sets of real and synthetic data are presented.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_4-Anomaly_Detection_with_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intellectual Paradigm of Artificial Vision: from Video-Intelligence to Strong Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091103</link>
        <id>10.14569/IJACSA.2018.091103</id>
        <doi>10.14569/IJACSA.2018.091103</doi>
        <lastModDate>2018-11-30T11:47:16.5630000+00:00</lastModDate>
        
        <creator>E. M. Yarichin</creator>
        
        <creator>V. M. Gruznov</creator>
        
        <creator>G. F. Yarichina</creator>
        
        <subject>Gauge approach; fibration space; informational hypergraph; video-intelligence; strong artificial intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>A new (post-Shannon) informational approach is suggested in this paper, which allows a deep analysis of the nature of information. It was found that information can be presented as an aggregate of quantitative (physical) and qualitative (structural) components to be considered together. Such a full information theory can be efficiently used as the guiding theory for modeling video-information recognition, perception, and understanding. These hierarchical processes solve intellectual tasks step by step to form the corresponding video-information evaluations, and also represent strong interaction-measurements of video-information that ensure the adequacy of these assessments. That is why corresponding video-information macro-objects (video-thesauruses) must be built at every level of the hierarchy of an artificial vision system; these are formed by training (self-training) and together form an upward hierarchy of qualitative measuring scales. The top of this hierarchy is video-intelligence. The information theory of artificial intelligence is a logical development of the new informational approach from analysis to synthesis. Further “analysis through synthesis” allows establishing the informational nature and structure not only of video-intelligence but also of strong artificial intelligence, which constitutes an intellectual suprasystem for video-intelligence.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_3-Intellectual_Paradigm_of_Artificial_Vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Implementing Probabilistic Entity Resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091102</link>
        <id>10.14569/IJACSA.2018.091102</id>
        <doi>10.14569/IJACSA.2018.091102</doi>
        <lastModDate>2018-11-30T11:47:15.5800000+00:00</lastModDate>
        
        <creator>Awaad Alsarkhi</creator>
        
        <creator>John R. Talburt</creator>
        
        <subject>Entity resolution; probabilistic matching; deterministic matching; boolean rules; scoring rule; missing values</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Deterministic and probabilistic matching are two approaches commonly used in Entity Resolution (ER) systems. While many users are familiar with writing and using Boolean rules for deterministic matching, fewer are as familiar with the scoring rule configuration used to support probabilistic matching. This paper describes a method that uses deterministic matching to “bootstrap” probabilistic matching. It also examines the effectiveness of three commonly used strategies for mitigating the effect of missing values when using probabilistic matching. The results are based on experiments with different sets of synthetically generated data processed using the OYSTER open source entity resolution system.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_2-A_Method_for_Implementing_Probabilistic_Entity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring Identifiers of Research Articles Related to Food and Disease using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091101</link>
        <id>10.14569/IJACSA.2018.091101</id>
        <doi>10.14569/IJACSA.2018.091101</doi>
        <lastModDate>2018-11-30T11:47:14.5330000+00:00</lastModDate>
        
        <creator>Marco Ross</creator>
        
        <creator>El Sayed Mahmoud</creator>
        
        <creator>El-Sayed M. Abdel-Aal</creator>
        
        <subject>Natural language processing; text classification; ngrams; bioinformatics; knowledge extraction; nutrition assessment; health promotion; research uptake</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(11), 2018</description>
        <description>Hundreds of studies in the literature have shown the link between food and a reduced risk of chronic diseases. This study investigates the use of natural language processing and artificial intelligence techniques to develop a classifier that can identify, extract and analyze food-health articles automatically. In particular, this research focuses on the automatic identification of health articles pertinent to the roles of food in lowering the risk of cardiovascular disease, type-2 diabetes and cancer. Three hundred food-health articles on these topics were analyzed to help identify a unique key (identifier) for each set of publications. These keys were employed to construct a classifier capable of performing online searches to identify and extract the requested scientific articles. The classifier showed promising results for the automatic analysis of food-health articles, which in turn would help food professionals and researchers carry out efficient literature search and analysis in a timely fashion.</description>
        <description>http://thesai.org/Downloads/Volume9No11/Paper_1-Exploring_Identifiers_of_Research_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Spin / Promela Application for Model checking UML Sequence Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091071</link>
        <id>10.14569/IJACSA.2018.091071</id>
        <doi>10.14569/IJACSA.2018.091071</doi>
        <lastModDate>2018-10-31T16:39:13.1430000+00:00</lastModDate>
        
        <creator>Cristian L. Vidal-Silva</creator>
        
        <creator>Rodolfo Villarroel</creator>
        
        <creator>José Rubio</creator>
        
        <creator>Franklin Johnson</creator>
        
        <creator>Erika Madariaga</creator>
        
        <creator>Camilo Campos</creator>
        
        <creator>Luis Carter</creator>
        
        <subject>Spin / Promela; UML Sequence Diagrams; Fault Tolerance; LTL formulas; Combined Fragment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>UML sequence diagrams usually represent the behavior of system execution. Automated verification of the correctness of UML sequence diagrams is necessary because they can model critical algorithmic behaviors of information systems. UML sequence diagrams are often applied in the requirements and design phases of the software development process, and their correctness guarantees the accurate and transparent implementation of software products. The primary goal of this article is to review and improve the translation of basic and complex UML sequence diagrams into Spin / Promela code, taking into account behavioral properties and elements of combined fragments of UML sequence diagrams for synchronous and asynchronous messages. This article also redefines a previous proposal for a transition system for UML sequence diagrams by specifying Linear Temporal Logic (LTL) formulas to verify model correctness. We present an application example of our modeling proposal on a modified version of a traditional case study, using UML sequence diagrams translated into Promela code to verify their properties and correctness.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_71-An_Spin_Promela_Application_for_Model_Checking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure IoT Communication with Smart Contracts in a Blockchain Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091070</link>
        <id>10.14569/IJACSA.2018.091070</id>
        <doi>10.14569/IJACSA.2018.091070</doi>
        <lastModDate>2018-10-31T16:39:12.5500000+00:00</lastModDate>
        
        <creator>Jawad Ali</creator>
        
        <creator>Toqeer Ali</creator>
        
        <creator>Shahrulniza Musa</creator>
        
        <creator>Ali Zahrani</creator>
        
        <subject>IoT; Blockchain Authorization; Hyperledger Fabric; BC; Blockchain Integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The Internet of Things (IoT) is undergoing rapid growth in the IT industry, but it continues to be associated with several security and privacy concerns as a result of its massive scale, decentralised topology, and resource-constrained devices. Blockchain (BC), a distributed ledger technology used in cryptocurrency, has attracted significant attention in the realm of IoT security and privacy. However, adopting BC for IoT is not straightforward in most cases, due to the overheads and delays caused by BC operations. In this paper, we apply a BC technology known as Hyperledger Fabric to an IoT network. This technology introduces an execute-order technique for transactions that separates transaction execution from consensus, resulting in increased efficiency. We demonstrate that our proposed IoT-BC architecture is sufficiently secure with regard to the fundamental security goals, i.e., confidentiality, integrity, and availability. Finally, the simulation results show that the performance overheads associated with our approach are as minimal as those associated with the Hyperledger Fabric framework and are negligible in view of the security and privacy benefits.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_70-Towards_Secure_IoT_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>KNN and ANN-based Recognition of Handwritten Pashto Letters using Zoning Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091069</link>
        <id>10.14569/IJACSA.2018.091069</id>
        <doi>10.14569/IJACSA.2018.091069</doi>
        <lastModDate>2018-10-31T16:39:11.9730000+00:00</lastModDate>
        
        <creator>Sulaiman Khan</creator>
        
        <creator>Hazrat Ali</creator>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Nasru Minallah</creator>
        
        <creator>Shahid Maqsood</creator>
        
        <creator>Abdul Hafeez</creator>
        
        <subject>KNN; deep neural network; OCR; zoning technique; Pashto; character recognition; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>This paper presents an intelligent recognition system for handwritten Pashto letters. Handwritten character recognition is challenging due to variations in shape and style; in addition, these characters naturally vary among individuals. Identification becomes even more daunting due to the lack of standard datasets of inscribed Pashto letters. In this work, we have designed a database of moderate size, which encompasses a total of 4488 images, stemming from 102 distinct samples for each of the 44 letters in Pashto. Furthermore, the recognition framework extracts zoning features and applies K-Nearest Neighbour (KNN) and Neural Network (NN) classifiers to individual letters. Based on the evaluation, the proposed system achieves an overall classification accuracy of approximately 70.05% using KNN, and an accuracy of 72% using NN at the cost of increased computation time.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_69-KNN_and_ANN_based_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TokenSign: Using Revocable Fingerprint Biotokens and Secret Sharing Scheme as Electronic Signature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091068</link>
        <id>10.14569/IJACSA.2018.091068</id>
        <doi>10.14569/IJACSA.2018.091068</doi>
        <lastModDate>2018-10-31T16:39:10.9430000+00:00</lastModDate>
        
        <creator>Fahad Alsolami</creator>
        
        <subject>Signature; Fingerprint; Electronic; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>An electronic signature is a quick and convenient tool used for legal documents and payments, as business practices have shifted from traditional paper-based to computer-based systems. The growing use of electronic signatures means they appear in many daily applications, in both government and private organizations such as financial services, where an electronic signature is taken from a group of people at once to cash checks or approve a transaction. However, non-repudiation and authentication remain highlighted concerns for electronic signatures. To overcome these obstacles, we propose TokenSign, a system that uses revocable fingerprint biotokens with secret sharing as an electronic signature. TokenSign maintains two layers of security. First, the TokenSign scheme transforms and encrypts a user’s fingerprint data. Second, TokenSign embeds a shared secret inside the encrypted fingerprints. The TokenSign scheme then distributes all shares of the electronic signatures over multiple clouds. During the matching/signing process, TokenSign utilizes threading to match the fingerprints in parallel in their secure encrypted form, without decrypting the data. Finally, the TokenSign scheme applies a secret sharing scheme to compute the shared secret, producing an electronic signature. Our experiments show that the TokenSign scheme achieves comparable accuracy and improved performance compared to the two baselines.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_68-TokenSign_Using_Revocable_Fingerprint_Biotokens.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Routing Calculus with Distance Vector Routing Updates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091067</link>
        <id>10.14569/IJACSA.2018.091067</id>
        <doi>10.14569/IJACSA.2018.091067</doi>
        <lastModDate>2018-10-31T16:39:09.8200000+00:00</lastModDate>
        
        <creator>Priyanka Gupta</creator>
        
        <creator>Manish Gaur</creator>
        
        <subject>Routing Calculi; Routing Protocols; Well Formed Configuration; Reduction Semantics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>We propose a routing calculus in a process algebraic framework to implement dynamic updates of routing tables using distance vector routing. This calculus is an extension of an existing routing calculus, DRωπ, in which routing tables are fixed except when new nodes are created, in which case the routing tables are appended with the relevant entries. The main objective of implementing dynamic routing updates is to demonstrate a formal modeling of distributed networks that is closer to networks in practice. We justify our calculus by showing its reduction equivalence with its specification Dπ (distributed π-calculus) after abstracting away the unnecessary details from our calculus, which is in fact one of the implementations of Dπ. We name our calculus with routing table updates DRϕπ.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_67-A_Routing_Calculus_with_Distance_Vector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Modeling Guidelines for NoSQL Document-Store Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091066</link>
        <id>10.14569/IJACSA.2018.091066</id>
        <doi>10.14569/IJACSA.2018.091066</doi>
        <lastModDate>2018-10-31T16:39:08.7130000+00:00</lastModDate>
        
        <creator>Abdullahi Abubakar Imam</creator>
        
        <creator>Shuib Basri</creator>
        
        <creator>Rohiza Ahmad</creator>
        
        <creator>Junzo Watada</creator>
        
        <creator>Maria T. Gonzlez-Aparicio</creator>
        
        <creator>Malek Ahmad Almomani</creator>
        
        <subject>Big Data; NoSQL; Logical and Physical Design; Data Modeling; Modeling Guidelines; Document-Stores; Model Quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Good database design is key to high data availability and consistency in traditional databases, and numerous techniques exist to aid designers in modeling schemas appropriately. These schemas are strictly enforced by traditional database engines. However, with the emergence of schema-free (NoSQL) databases coupled with voluminous and highly diversified datasets (big data), such aid becomes even more important, as schemas in NoSQL are enforced by application developers, which requires a high level of competence. Existing modeling techniques and guides used in traditional databases are insufficient for big-data storage settings. As a synthesis, new modeling guidelines for NoSQL document-store databases are proposed. These guidelines cut across both the logical and physical stages of database design. Each is developed based on solid empirical insights, yet they are prepared to be intuitive to developers and practitioners. To realize this goal, we employ an exploratory approach involving the investigation of techniques, empirical methods and expert consultations. We analyze how industry experts prioritize requirements, and we analyze the relationships between datasets on the one hand and error prospects and awareness on the other. A few proprietary guidelines were extracted from a heuristic evaluation of 5 NoSQL databases. In this regard, the proposed guidelines have great potential to function as an instrument of knowledge transfer from academia to NoSQL database modeling practice.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_66-Data_Modeling_Guidelines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Evaluating Web Spam Threats and Countermeasures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091065</link>
        <id>10.14569/IJACSA.2018.091065</id>
        <doi>10.14569/IJACSA.2018.091065</doi>
        <lastModDate>2018-10-31T16:39:08.1370000+00:00</lastModDate>
        
        <creator>Lina A. Abuwardih</creator>
        
        <subject>SEO; web spam threats; phishing; malware; web attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Web spam is a deceptive technique that aims to achieve high ranks for targeted web pages at the top of Search Engine Result Pages (SERPs). This paper provides an evaluation of web spam threats and countermeasures. It begins by presenting the different types of web spam threats, which aim to deceive users with incorrect information, distribute phishing, and propagate malware. It then gives a detailed description of the proposed anti-web-spam tools, systems and countermeasures, and conducts a comparison between them. The results indicate that online real-time tools are highly recommended solutions against web spam threats.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_65-Towards_Evaluating_Web_Spam_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable and Energy Efficient MAC Mechanism for Patient Monitoring in Hospitals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091064</link>
        <id>10.14569/IJACSA.2018.091064</id>
        <doi>10.14569/IJACSA.2018.091064</doi>
        <lastModDate>2018-10-31T16:39:07.0430000+00:00</lastModDate>
        
        <creator>Madiha Fatima</creator>
        
        <creator>Adeel Baig</creator>
        
        <creator>Irfan Uddin</creator>
        
        <subject>Medical Body Area Network; MAC Protocols; Zig-Bee MAC Mechanism; Guaranteed Time Slot Allocation Scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In a medical body area network (MBAN), sensors are attached to a patient’s body for continuous, real-time monitoring of biomedical vital signs. Sensors send the patient’s data to a hospital base station so that doctors/caregivers can access it and be informed in a timely manner if the patient’s condition becomes critical. These tiny sensors have low data rates, small transmission ranges, and limited battery power and processing capabilities. Ensuring reliability in an MBAN is important due to the critical nature of the patient’s data, because any wrong, missing or delayed data can lead doctors to take wrong decisions about the patient’s health, with potentially fatal results. Data transmission reliability in an MBAN can be ensured by retransmissions, acknowledgments or a guaranteed time slot mechanism, but this causes more power consumption. We propose an efficient MAC mechanism that achieves both reliability and energy efficiency at an acceptable trade-off level. The proposed MAC mechanism not only overcomes limitations of the ZigBee MAC mechanism, such as inefficient CSMA/CA and underutilization of guaranteed time slots, but also adapts to different traffic types such as emergency and normal traffic. Our results show that application-level throughput and packet delivery ratio increase while packet loss decreases. We also optimize energy utilization by tuning the macMaxCSMABackoffs and macMinBE parameters of the ZigBee MAC mechanism.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_64-Reliable_and_Energy_Efficient_MAC_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Switching based Workflow Scheduling Framework for Networked Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091063</link>
        <id>10.14569/IJACSA.2018.091063</id>
        <doi>10.14569/IJACSA.2018.091063</doi>
        <lastModDate>2018-10-31T16:39:05.9370000+00:00</lastModDate>
        
        <creator>Hamid Tabatabaee</creator>
        
        <creator>Mohamad Reza Mohebbi</creator>
        
        <creator>Hosein Salami</creator>
        
        <subject>Scheduling framework; workflow scheduling; grid; switching based framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In non-dedicated computing environments such as the Grid, the power of resources is dynamic; moreover, the autonomy of these environments makes it impossible to repeat operating scenarios in order to compare the algorithms created in this context. An environment that provides such conditions is therefore necessary. In this paper, a framework for evaluating workflow-scheduling algorithms has been created, focusing on the dynamics of resource power in distributed environments. This framework is based on a switching model that can represent changes in the processing power of resources with high precision. Using this framework, the effectiveness of several different workflow scheduling algorithms has been evaluated.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_63-Designing_a_Switching_based_Workflow_Scheduling_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fixed Point Implementation of Tiny-Yolo-v2 using OpenCL on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091062</link>
        <id>10.14569/IJACSA.2018.091062</id>
        <doi>10.14569/IJACSA.2018.091062</doi>
        <lastModDate>2018-10-31T16:39:05.3430000+00:00</lastModDate>
        
        <creator>Yap June Wai</creator>
        
        <creator>Zulkalnain bin Mohd Yussof</creator>
        
        <creator>Sani Irwan bin Salim</creator>
        
        <creator>Lim Kim Chuan</creator>
        
        <subject>FPGA; CNN; Tiny-Yolo-v2; OpenCL; detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The deep Convolutional Neural Network (CNN) algorithm has recently gained popularity in many applications such as image classification, video analytics and object detection. Being compute-intensive and memory-expensive, CNN-based algorithms are hard to implement on embedded devices. Although recent studies have explored the hardware implementation of CNN-based object classification models such as AlexNet and VGG, implementations of CNN-based object detection models on Field Programmable Gate Arrays (FPGAs) remain rare. Consequently, this study proposes a fixed-point (16-bit) implementation of the CNN-based object detection model Tiny-Yolo-v2 on a Cyclone V PCIe Development Kit FPGA board using the High-Level-Synthesis (HLS) tool OpenCL. Considering FPGA resource constraints in terms of computational resources, memory bandwidth, and on-chip memory, a data pre-processing approach is proposed to merge batch normalization into the convolution layer. To the best of our knowledge, this is the first implementation of the Tiny-Yolo-v2 object detection algorithm on an FPGA using the Intel FPGA Software Development Kit (SDK) for OpenCL. Finally, the proposed implementation achieves a peak performance of 21 GOPs at a 100 MHz working frequency.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_62-Fixed_Point_Implementation_of_Tiny_Yolo_v2.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Roadmap to Project Management Office (PMO) and Automation using a Multi-Stage Fuzzy Rules System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091061</link>
        <id>10.14569/IJACSA.2018.091061</id>
        <doi>10.14569/IJACSA.2018.091061</doi>
        <lastModDate>2018-10-31T16:39:04.2370000+00:00</lastModDate>
        
        <creator>Magdi Amer</creator>
        
        <creator>Noha Elayoty</creator>
        
        <subject>Roadmap to build a PMO; automating the PMO; multi-stage Fuzzy System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The Project Management Office (PMO) has proven to be a successful approach to enhancing control over projects and improving their success rate. One of the main functions of the PMO is monitoring projects and ensuring that adequate processes are applied if a project starts to slip. Due to the high complexity of the parameters involved in choosing the actions to take depending on the type and status of a project, organizations face difficulties in applying the same standards and processes to all projects across the organization. In this paper, the authors provide an overview of the main functions of the PMO, suggest a roadmap for starting a PMO function within an organization, and propose an architecture to automate the monitoring and control function of a PMO using a multi-stage fuzzy rules system.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_61-Roadmap_to_Project_Management_Office.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Undergraduate’s Perception on Massive Open Online Course (MOOC) Learning to Foster Employability Skills and Enhance Learning Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091060</link>
        <id>10.14569/IJACSA.2018.091060</id>
        <doi>10.14569/IJACSA.2018.091060</doi>
        <lastModDate>2018-10-31T16:39:03.6570000+00:00</lastModDate>
        
        <creator>Cheong Kar Mee</creator>
        
        <creator>Sazilah binti Salam</creator>
        
        <creator>Linda Khoo Mei Sui</creator>
        
        <subject>MOOCs; mandarin; employability skills; perception; undergraduates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The Massive Open Online Course (MOOC) is a very recent development in higher education institutions in Malaysia. As of September 2015, Universiti Teknikal Malaysia Melaka (UTeM) has introduced a Mandarin course under Malaysia MOOCs. The study focuses on undergraduates’ perception of the Mandarin MOOC in fostering their employability skills as a research variable. The researcher used qualitative and quantitative methods as the research design. Interviews were used to investigate undergraduates’ perception of the Mandarin MOOC in fostering their employability skills, and an online survey was also conducted to investigate the effectiveness of MOOC learning. Undergraduates at UTeM were selected as the respondents of this study. The findings show that, among all the employability skills, students believe the Mandarin MOOC fosters two: ‘information gaining skill’ and ‘system and technology skill’. This study of MOOCs is important for helping the government and relevant institutions make sound decisions. This research is also significant for its contribution to teaching practices in higher education institutions.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_60-Undergraduates_Perception_on_Massive_Open_Online_Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opinion Mining and thought Pattern Classification with Natural Language Processing (NLP) Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091059</link>
        <id>10.14569/IJACSA.2018.091059</id>
        <doi>10.14569/IJACSA.2018.091059</doi>
        <lastModDate>2018-10-31T16:39:02.5670000+00:00</lastModDate>
        
        <creator>Sayyada Muntaha Azim Naqvi</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Muhammad Yahya Saeed</creator>
        
        <creator>Muhammad Mohsin Ashraf</creator>
        
        <subject>Opinion mining; sentiment analysis; natural language processing; think pattern; GATE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Opinion mining from digital media is becoming the easiest way to obtain basic aspects of thinking trends. Currently, there exists no hard and fast modeling or classification of this for any society or the global community. Marketing companies currently rely on sentiment analysis for their products. In this paper, social sentiment is considered in the form of collective sentiment and individual sentiment; we intend to classify these as macro- and micro-social sentiment. Sentiment varies among groups, sects, and various classes of society, depending on many other characteristics of the society. Social media makes it possible to explore certain ideas, various trends, and their significance; this significance requires further exploration of more patterns, and the cycle continues, converging on a research outcome. Based on all of the above, this study focuses on opinion classes with respect to general think patterns. Think Patterns (TP) are developed over time due to social traditions, fashions, family norms, etc. The think patterns of specific communities, such as women in restricted or rural societies of our country, are very difficult to classify. Such trends and patterns are the focus of this study, based on various defined parameters. The opinion and sentiment data will be analyzed using natural language processing (NLP) tools, Twitter, GATE, Google APIs, etc.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_59-Opinion_Mining_and_Thought_Pattern_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Support Platform based on Cross-Sorting Methods for the Selection of Modeling Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091058</link>
        <id>10.14569/IJACSA.2018.091058</id>
        <doi>10.14569/IJACSA.2018.091058</doi>
        <lastModDate>2018-10-31T16:39:01.4730000+00:00</lastModDate>
        
        <creator>Manal Tamir</creator>
        
        <creator>Fatima Ouzayd</creator>
        
        <creator>Raddouane Chiheb</creator>
        
        <subject>Hospital supply chain; graphical modeling and performance analysis techniques; multi-criteria decision analysis; fuzzy pairwise comparisons; support decision process; computer platform; cross-sorting methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Hospital supply chain performance is a concept that captures the good governance, continuous improvement, and optimization of the human and material resources of a hospital system. Accordingly, several performance analysis methods have been proposed for characterizing organizational flows and resource management. The main goal of the present study is to provide a literature review of the main graphical modeling and performance analysis techniques used in different research projects in the hospital field. The literature review is analyzed and complemented by a classification study of these techniques. On this basis, a computer platform based on Multi-Criteria Decision Analysis is proposed; this platform uses fuzzy pairwise comparisons and cross-sorting methods. Finally, the classification study is used to highlight the techniques best adapted to the different characteristics and components of the hospital system as part of the overall decision support process.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_58-A_Decision_Support_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Algorithm for Tracking Astray Pilgrim based on in-Memory Data Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091057</link>
        <id>10.14569/IJACSA.2018.091057</id>
        <doi>10.14569/IJACSA.2018.091057</doi>
        <lastModDate>2018-10-31T16:39:00.3670000+00:00</lastModDate>
        
        <creator>Mohammad A.R. Abdeen</creator>
        
        <creator>Ahmad Taleb</creator>
        
        <subject>In-Memory structure; real-time; tracking algorithm for astray pilgrim; large crowd management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Large crowd management presents a significant challenge to organizers, both for the success of the event and for achieving its objectives. One of the biggest events with the largest crowds in the world is the Muslim pilgrimage to Mecca, which takes place every year and lasts for five days. The event hosts over two million people from over 80 countries, with men, women, and children of various age groups speaking many languages. One of the challenges facing the authorities in Saudi Arabia is that many pilgrims become astray during the event due to the relative complexity of the rituals, the mainly mountainous landscape, and the language barrier. This results in them being unable to perform the required rituals at the prescribed time(s), with the possibility of invalidating the whole pilgrimage and jeopardizing their once-in-a-lifetime journey. Last year over 20,000 pilgrims went astray during the pilgrimage season. In this paper, we present a tracking algorithm to help track, alarm, and report astray pilgrims. The algorithm is implemented on a server that stores pilgrims’ data such as geolocation, timestamp, and personal information such as name, age, gender, and nationality. Each pilgrim is equipped with a wearable device that reports geolocation and timestamp to the centralized server. Pilgrims are organized in groups of at most 20 persons. By computing the distance of a pilgrim to the group’s centroid and checking whether the pilgrim’s geolocation matches where it should be according to the pilgrimage schedule, the algorithm determines if the pilgrim is astray or on the verge of becoming astray. Algorithm complexity analysis is performed; for better performance and a shorter real-time response in determining a pilgrim’s status, the algorithm employs an in-memory data structure. The analysis showed that the time complexity is O(n). The algorithm has also been tested using simulation runs based on synthesized data that is randomly generated within a specified geographical zone and according to the pilgrimage plan. The simulation results showed good agreement with the analytical performance analysis.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_57-A_Real_Time_Algorithm_for_Tracking_Astray_Pilgrim.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Method for Detecting the Shaded Images of the Car License Plates based on Histogram Equalization and Probabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091056</link>
        <id>10.14569/IJACSA.2018.091056</id>
        <doi>10.14569/IJACSA.2018.091056</doi>
        <lastModDate>2018-10-31T16:38:59.7430000+00:00</lastModDate>
        
        <creator>Mohammad Faghedi</creator>
        
        <creator>Behrang Barekatain</creator>
        
        <creator>Kaamran Raahemifar</creator>
        
        <subject>Automatic license plate detection; shadowed images; histogram equalization; cumulative distribution; pixel probability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Shadow is one of the major challenges for detection algorithms that track objects such as license plates. The quality of images captured by cameras is influenced by weather conditions, low ambient light, and low camera resolution. Shadow in images reduces the reliability of the device’s vision algorithms as well as the visual quality of the images. Previous papers indicate that no effective method has been presented to improve license plate detection accuracy in shaded images. In other words, the methods presented so far for automatic license plate detection in shadowed images use a combination of color and texture features of the image. In all these methods, detecting the shadow boundary and the image texture requires sufficient light in the image, a condition not met by most regular images captured by road cameras. To solve this problem, an improved license plate detection method is presented in this research which is able to detect the license plate area in shadowed images effectively. It is a contrast-improving method which utilizes the dual binary method for automatic plate detection and is introduced to analyze interior images with low contrast, as well as night shots and blurred or shadowed images. In this method, the histogram of the image is first calculated for each dimension, and then the probability of each pixel in the whole image is obtained. After calculating the cumulative distribution of the pixels and substituting it back into the image, the shadow can be removed from the image easily. This new detection method was tested and simulated on 1000 images of vehicles under different conditions. The results indicated detection accuracies of 90.30, 97.87, and 98.70 percent for license plate detection on three databases, from the University of Zagreb, Numberplates.com, and the National Technical University of Athens, respectively. Comparing the performance of the proposed method with two similar recent methods, namely Hommos and Azam, shows an average improvement of 26.70 and 72.95 percent in plate detection, and of 32.38 and 36.53 percent in the time required for rapid and correct license plate detection, even in shaded images.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_56-An_Enhanced_Method_for_Detecting_the_Shaded_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Liver Extraction Method from Magnetic Resonance Cholangio-Pancreatography (MRCP) Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091055</link>
        <id>10.14569/IJACSA.2018.091055</id>
        <doi>10.14569/IJACSA.2018.091055</doi>
        <lastModDate>2018-10-31T16:38:58.6330000+00:00</lastModDate>
        
        <creator>Sajid Ur Rehman Khattak</creator>
        
        <creator>Dr. Mushtaq Ali</creator>
        
        <creator>Faqir Gul</creator>
        
        <creator>Nadir Hussian Khan</creator>
        
        <creator>Amanullah Baloch</creator>
        
        <creator>M. Shoaib Ahmed</creator>
        
        <subject>Liver extraction; graph cut algorithm; MRCP images; liver surgery; medical images; liver mask; adaptive thresholding method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Liver extraction from medical images such as CT scans and MR images is a challenging task. There are many manual, semi-automatic, and automatic methods available to extract the liver from computerized tomography (CT) scan images and MR images. However, no method is available in the literature to extract the liver from Magnetic Resonance Cholangio-pancreatography (MRCP) images. Accurate liver extraction from MRCP images is needed so that physicians can diagnose disease easily and plan preoperative liver surgery accordingly. In this paper, we propose a liver extraction method based on the Graph Cut algorithm for MRCP images. The experimental results show that the proposed method is very effective for liver extraction from MRCP images.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_55-Liver_Extraction_Method_from_Magnetic_Resonance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Processing in EventWeb through Detection and Analysis of Connections between Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091054</link>
        <id>10.14569/IJACSA.2018.091054</id>
        <doi>10.14569/IJACSA.2018.091054</doi>
        <lastModDate>2018-10-31T16:38:57.5430000+00:00</lastModDate>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Shaukat Wasi</creator>
        
        <creator>Khalid Khan</creator>
        
        <creator>Syed Hammad Ahmed</creator>
        
        <creator>Zubair. A. Shaikh</creator>
        
        <subject>EventWeb; information processing; event algebra; operators; link detection; link analysis; information analysis; context-match</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Information over the Web is rapidly becoming event-centric, with the next age of the WWW projected to be an EventWeb in which nodes are interconnected through diverse types of links. These nodes represent events carrying informational and experiential content, and analysis of these events has a substantial semantic impact on the enhancement of information search, visualization, and story link detection. Information regarding the semantics of EventWeb connections is also important for event planning and web management tasks. In this paper, we devise and implement an event algebra for the detection and analysis of event connections. Compared to traditional solutions, we process both context-match operators and analytical operators, cater for all event information attributes, and define the strength of connections. We implement a tool to evaluate our algebra over events occurring in the academic domain, and demonstrate almost perfect precision and recall for context-match operators and high precision and recall for analytical operators.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_54-Information_Processing_in_EventWeb.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Zynq FPGA based and Optimized Design of Points of Interest Detection and Tracking in Moving Images for Mobility System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091053</link>
        <id>10.14569/IJACSA.2018.091053</id>
        <doi>10.14569/IJACSA.2018.091053</doi>
        <lastModDate>2018-10-31T16:38:56.4370000+00:00</lastModDate>
        
        <creator>Abdelkader BEN AMARA</creator>
        
        <creator>Mohamed ATRI</creator>
        
        <creator>Edwige PISSALOUX</creator>
        
        <creator>Richard GRISEL</creator>
        
        <subject>Feature detection; harris &amp; stephens corner detector; tracking; extended kalman filter; HW/SW partitioning; zynq SoC; computer vision; memory access; ARM A9; HLS; interlacing; blanking; progressive video</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In this paper, an FPGA-based mobile feature detection and tracking solution is proposed for complex video processing systems. The presented algorithms include feature (corner) detection and a robust memory allocation solution for tracking corners in real time using the extended Kalman filter. The target implementation environment is based on a Xilinx Zynq SoC FPGA. Using the HW/SW partitioning flexibility of the Zynq, the performance of the dual-core ARM processor and the hardware accelerators generated by the Xilinx SDSoC and Vivado HLS tools improve the system’s ability to process video accurately at a high frame rate. Several original innovations improve the processing time of the whole system (detection and tracking) by 50%, as shown in the experimental validation (tracking of visually impaired people during outdoor navigation).</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_53-Zynq_FPGA_based_and_Optimized_Design_EP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Energetic Modeling Approach based on Bond Graph Applied to In-Wheel-Motor Drive System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091052</link>
        <id>10.14569/IJACSA.2018.091052</id>
        <doi>10.14569/IJACSA.2018.091052</doi>
        <lastModDate>2018-10-31T16:38:55.3600000+00:00</lastModDate>
        
        <creator>Sihem Dridi</creator>
        
        <creator>Ines Ben Salem</creator>
        
        <creator>Lilia El Amraoui</creator>
        
        <subject>Multi-energetic approach; bond graph tool; PWM; 20-Sim environment; in-wheel motor drive system; mechatronic system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>This paper proposes a multi-energetic modeling approach based on the Bond Graph tool for modeling a mechatronic system. This approach allows a better understanding of the real behavior of such a system and expresses the interactions between its elements and their environments. Firstly, the dynamic model of the In-Wheel-Motor Drive System is built using the Bond Graph tool, which is well suited to multi-energetic modeling where several types of energy are involved. Secondly, the control system is established, based on the Pulse Width Modulation (PWM) technique. Finally, the dynamic model is coupled to the control system. They are then successfully implemented and simulated in the 20-Sim environment. The simulation results demonstrate the performance and efficiency of the adopted tool, not only for the dynamic modeling of such multi-energetic systems but also for elaborating their control systems.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_52-A_Multi_Energetic_Modeling_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Management in Cloud Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091051</link>
        <id>10.14569/IJACSA.2018.091051</id>
        <doi>10.14569/IJACSA.2018.091051</doi>
        <lastModDate>2018-10-31T16:38:54.3300000+00:00</lastModDate>
        
        <creator>Aisha Shabbir</creator>
        
        <creator>Kamalrulnizam Abu Bakar</creator>
        
        <creator>Raja Zahilah Raja Mohd. Radzi</creator>
        
        <creator>Muhammad Siraj</creator>
        
        <subject>Big data; cloud data center; MapReduce; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Vast volumes of big data are a consequence of data from diverse sources. Conventional computational frameworks and platforms are incapable of processing complex big data sets at a fast pace. Cloud data centers, with their massive virtual and physical resources and computing platforms, can support big data processing. In addition, MapReduce, the most well-known framework, in conjunction with cloud data centers provides fundamental support for scaling up and speeding up the classification, investigation, and processing of huge and complex big data sets. Inappropriate handling of cloud data center resources will not yield significant results and eventually leads to poor overall system utilization. This research aims at analyzing and optimizing the number of compute nodes following the MapReduce framework on computational resources in a cloud data center, focusing on the key issue of computational overhead due to inappropriate parameter selection, and on reducing overall execution time. The evaluation has been carried out experimentally by varying the number of compute nodes, that is, map and reduce units. The results show clearly that appropriate handling of compute nodes has a significant effect on the overall performance of the cloud data center in terms of total execution time.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_51-Resource_Management_in_Cloud_Data_Centers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameters Affecting Underwater Channel Communication Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091050</link>
        <id>10.14569/IJACSA.2018.091050</id>
        <doi>10.14569/IJACSA.2018.091050</doi>
        <lastModDate>2018-10-31T16:38:53.2200000+00:00</lastModDate>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Malik Taimur Ali</creator>
        
        <creator>Saqib Shahid Rahim</creator>
        
        <creator>Zahid Farid</creator>
        
        <creator>Owais Amanullah Khan</creator>
        
        <creator>Zeeshan Najam</creator>
        
        <subject>UWSN; doppler effect; attenuation; noise; salinity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Underwater acoustic propagation is characterized by three major factors: attenuation that increases with signal frequency, time-varying multipath propagation, and the low speed of sound. The background noise, often characterized as Gaussian, is not white but has a decaying power spectral density. Channel capacity depends on distance and may be extremely limited. As acoustic propagation is best supported at low frequencies, an acoustic communication system is inherently wideband, and the bandwidth is not negligible with respect to its center frequency. The channel has a sparse impulse response, where each physical path acts as a time-varying low-pass filter, and motion introduces Doppler spreading and shifting. Surface waves, internal turbulence, and fluctuations in sound speed contribute to random signal variations. To date, there are no standardized models for acoustic channel fading; experimental measurements are often made to assess the statistical properties of the underwater channel.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_50-Parameters_Affecting_Underwater_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Recommendation Techniques by Deep Learning and Large Scale Graph Partitioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091049</link>
        <id>10.14569/IJACSA.2018.091049</id>
        <doi>10.14569/IJACSA.2018.091049</doi>
        <lastModDate>2018-10-31T16:38:52.6430000+00:00</lastModDate>
        
        <creator>Gourav Bathla</creator>
        
        <creator>Rinkle Rani</creator>
        
        <creator>Himanshu Aggarwal</creator>
        
        <subject>Social big data; social recommendation; deep learning; graph partitioning; social trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Recommendation is a crucial technique for social networking sites and business organizations. It provides suggestions based on users’ personalized interests, offering users the movies, books, and topic links most suitable for them. If performed intelligently, it can improve user effectiveness and business revenue by approximately 30%. Social recommendation systems for traditional datasets have already been analyzed in detail by researchers and practitioners, and several researchers have improved recommendation accuracy and throughput using various innovative approaches. Deep learning, a machine learning technique in which hidden layers are used to improve outcomes, has been proven to provide significant improvements in image processing and object recognition. In traditional recommendation techniques, sparsity and cold start are limitations caused by too few user-item interactions; these can be alleviated by deep learning models, which can enrich the user-item matrix entries through feature learning. In this paper, various models are explained with their applications, so that readers can identify the deep learning model best suited to their recommendation needs and incorporate it into their techniques. When these recommendation systems are deployed on large-scale data, accuracy degrades significantly, and a social big graph is most suitable for large-scale social data; further improvements to recommendation through large-scale graph partitioning are therefore explained. MAE (Mean Absolute Error) and RMSE (Root Mean Squared Error) are used as evaluation parameters to demonstrate better recommendation accuracy, and Epinions, MovieLens, and FilmTrust are presented as the datasets most commonly used for recommendation.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_49-Improving_Recommendation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Short Answer Scoring based on Paragraph Embeddings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091048</link>
        <id>10.14569/IJACSA.2018.091048</id>
        <doi>10.14569/IJACSA.2018.091048</doi>
        <lastModDate>2018-10-31T16:38:51.5370000+00:00</lastModDate>
        
        <creator>Sarah Hassan</creator>
        
        <creator>Aly A. Fahmy</creator>
        
        <creator>Mohammad El-Ramly</creator>
        
        <subject>Automatic scoring; short answer; Pearson correlation coefficient; RMSE; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Automatic scoring systems for students’ short answers can relieve instructors of the burden of grading large numbers of test questions and facilitate performing even more assessments during lectures, especially when the number of students is large. This paper presents a supervised learning approach for automatic short answer scoring based on paragraph embeddings. We review significant deep learning based models for generating paragraph embeddings and present a detailed empirical study of how the choice of paragraph embedding model influences accuracy in the task of automatic scoring.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_48-Automatic_Short_Answer_Scoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pipeline Hazards Resolution for a New Programmable Instruction Set RISC Processor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091047</link>
        <id>10.14569/IJACSA.2018.091047</id>
        <doi>10.14569/IJACSA.2018.091047</doi>
        <lastModDate>2018-10-31T16:38:50.4130000+00:00</lastModDate>
        
        <creator>Hajer Najjar</creator>
        
        <creator>Riad Bourguiba</creator>
        
        <creator>Jaouhar Mouine</creator>
        
        <subject>Processor; RISC; hardware; instruction set; pipeline; hazards; branch predictor; bypass</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The work presented in this paper is part of a project that aims to design and implement a hardwired programmable processor. A 32-bit RISC processor with a customizable ALU (Arithmetic and Logic Unit) is designed, and the pipeline technique is then implemented in order to reach better performance. However, the use of this technique can lead to several problems, called hazards, that can affect the correct execution of a program. In this context, this paper identifies and analyzes all the different hazards that can occur in this processor’s pipeline stages. Detailed solutions are then proposed, implemented, and validated.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_47-Pipeline_Hazards_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting and Classifying Crimes from Arabic Twitter Posts using Text Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091046</link>
        <id>10.14569/IJACSA.2018.091046</id>
        <doi>10.14569/IJACSA.2018.091046</doi>
        <lastModDate>2018-10-31T16:38:49.3070000+00:00</lastModDate>
        
        <creator>Hissah AL-Saif</creator>
        
        <creator>Hmood Al-Dossari</creator>
        
        <subject>Crimes; text mining; classification; feature extraction techniques; Arabic posts; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Crime analysis has become a critical area for helping law enforcement agencies to protect civilians. As a result of a rapidly increasing population, crime rates have increased dramatically, and appropriate analysis has become a time-consuming effort. Text mining is an effective tool that may help to solve this problem by classifying crimes in an effective manner. The proposed system aims to detect and classify crimes in Twitter posts written in the Arabic language, one of the most widespread languages today. In this paper, classification techniques are used to detect crimes and identify their nature with different classification algorithms. The experiments evaluate different algorithms, such as SVM, DT, CNB, and KNN, in terms of accuracy and speed in the crime domain. Different feature extraction techniques are also evaluated, including root-based stemming, light stemming, and n-grams. The experiments revealed the superiority of n-grams over other techniques; specifically, the results indicate the superiority of SVM with tri-grams over other classifiers, with 91.55% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_46-Detecting_and_Classifying_Crimes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Topology-Aware Mapping Techniques for Heterogeneous HPC Systems: A Systematic Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091045</link>
        <id>10.14569/IJACSA.2018.091045</id>
        <doi>10.14569/IJACSA.2018.091045</doi>
        <lastModDate>2018-10-31T16:38:48.1970000+00:00</lastModDate>
        
        <creator>Saad B. Alotaibi</creator>
        
        <creator>Fathy alboraei</creator>
        
        <subject>Virtual topology; physical topology; topology-aware mapping; parallel applications; communication pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>At present, modern high-performance computing (HPC) platforms consist of heterogeneous computing devices connected through complex hierarchical networks. Moreover, the field is moving towards the Exascale era, which increases both the number of nodes and the number of cores within a node. As a consequence, communication costs and data movement are increasing. Efficient topology-aware process mapping has therefore become vital for optimizing data locality management in order to improve system performance and energy consumption. It also decreases the communication cost of processes by matching the application's virtual topology (used by the system to assign processes to physical processors) to the underlying hardware architecture, called the physical topology, and it improves locality, one of the most challenging issues faced by current parallel applications. In this survey paper, we study various topology-aware mapping techniques and algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_45-Topology_Aware_Mapping_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Missing Values Imputation using Similarity Matching Method for Brainprint Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091044</link>
        <id>10.14569/IJACSA.2018.091044</id>
        <doi>10.14569/IJACSA.2018.091044</doi>
        <lastModDate>2018-10-31T16:38:47.0900000+00:00</lastModDate>
        
        <creator>Siaw-Hong Liew</creator>
        
        <creator>Yun-Huoy Choo</creator>
        
        <creator>Yin Fen Low</creator>
        
        <subject>Similarity matching; data imputation; wavelet phase stability; missing values; artefact rejection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>This paper proposes a similarity matching imputation method to deal with missing values in electroencephalogram (EEG) signals. EEG signals with unusually high amplitude can be considered noise and are normally removed. The occurrence of missing values after this artefact rejection process increases the complexity of computational modelling due to incomplete input data for model training. The fundamental concept of the proposed similarity matching imputation method is founded on the assumption that similar stimulation of a particular subject will produce comparable EEG signal responses over the related EEG channels. Hence, in this study we replaced the missing values using the highest-similarity amplitude measure across different trials. Next, wavelet phase stability (WPS) was used to evaluate the performance of the proposed method, since WPS conveys signal information better than the amplitude measure in this situation. The statistical paired-sample t-test was used to compare the proposed similarity matching imputation method against the preceding mean substitution imputation method; the lower the mean difference, the better the approximation of the imputed data to its original form. The proposed method is able to treat 9.75% more missing-value trials, with significantly better imputation values, than the mean substitution method. Future work will focus on evaluating the robustness of the proposed method in dealing with different rates of missing data.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_44-Missing_Values_Imputation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Intelligent Automated Gate System with QR Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091043</link>
        <id>10.14569/IJACSA.2018.091043</id>
        <doi>10.14569/IJACSA.2018.091043</doi>
        <lastModDate>2018-10-31T16:38:45.9830000+00:00</lastModDate>
        
        <creator>Erman Hamid</creator>
        
        <creator>Lim Chong Gee</creator>
        
        <creator>Nazrulazhar Bahaman</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Zakiah Ayob</creator>
        
        <creator>Akhdiat Abdul Malek</creator>
        
        <subject>Component; internet of things; gate system; VB.NET; QR code</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>This paper presents a QR code-based Intelligent Automated Gate System (IAGS). The aim of the research is to develop and implement a medium-security gate system, especially for small companies that cannot afford to install high-tech automated gate systems. IAGS uses valid staff QR code pass cards to activate the gate without triggering the alarm. It connects to the internet and provides real-time email notification if any unauthorized activity is detected. It is also designed to record all incoming and outgoing activities of all staff. All QR code pass cards generated for staff are encrypted to provide data integrity. The system is built from components such as a PIR motion sensor, servo motor, Arduino microcontroller, Piezo buzzer, and camera. The software is implemented using VB.NET, and the QR recognition accuracy is about 99%.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_43-Implementation_of_Intelligent_Automated_Gate_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HOG-AdaBoost Implementation for Human Detection Employing FPGA ALTERA DE2-115</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091042</link>
        <id>10.14569/IJACSA.2018.091042</id>
        <doi>10.14569/IJACSA.2018.091042</doi>
        <lastModDate>2018-10-31T16:38:44.8770000+00:00</lastModDate>
        
        <creator>Trio Adiono Adiono</creator>
        
        <creator>Kevin Shidqi Prakoso</creator>
        
        <creator>Christoporus Deo Putratama</creator>
        
        <creator>Bramantio Yuwono</creator>
        
        <creator>Syifaul Fuada</creator>
        
        <subject>FPGA; human detection; adaboost classifier; ALTERA DE2-115; Histogram Oriented Gradients (HOG) feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>A human detection system using the Histogram of Oriented Gradients (HOG) feature and an AdaBoost classifier (HOG-AdaBoost) on the FPGA ALTERA DE2-115 is presented in this paper. This work is an expanded version of our previous study. This paper discusses: 1) the HOG performance in detecting humans in passive images from additional points of view (30 deg., 40 deg., 50 deg., 60 deg., and up to 70 deg.); 2) FPS tests with various image sizes (320 x 240, 640 x 480, 800 x 600, and 1280 x 1024); 3) re-measurement of the FPGA's power consumption; and 4) simulation of the architecture in RTL. We used three databases for test purposes, i.e., INRIA, MIT, and Daimler.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_42-HOG_AdaBoost_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Retrieval Methods of Multi-Dimensional Images in Different Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091041</link>
        <id>10.14569/IJACSA.2018.091041</id>
        <doi>10.14569/IJACSA.2018.091041</doi>
        <lastModDate>2018-10-31T16:38:43.7830000+00:00</lastModDate>
        
        <creator>Shruti Garg</creator>
        
        <subject>Image retrieval techniques; 3D image retrieval; image retrieval survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>A large number of multi-dimensional images are being created, and most of them are available on the internet free of cost. 3D images have three characteristics, namely width, height, and depth. Images created in 3D describe geometry in terms of 3D coordinates, which make extracting an object from an image easier and more accurate. In this paper, we present a review of multi-dimensional image retrieval, i.e., the process of extracting relevant 2D or 3D images from a huge database. To perform image retrieval on large databases, several methods such as text-based, content-based, annotation-based, semantic-based, and sketch-based retrieval have been used. Image retrieval techniques are mostly applied in fields such as digital libraries, medicine, forensic science, and so on. A systematic literature review is presented for image retrieval methods reported from 2010 to 2017. The aim of this article is to show the various concepts and efforts of different authors on image retrieval techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_41-A_Study_of_Retrieval_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Coauthorship Network in Political Science using Centrality Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091040</link>
        <id>10.14569/IJACSA.2018.091040</id>
        <doi>10.14569/IJACSA.2018.091040</doi>
        <lastModDate>2018-10-31T16:38:42.6770000+00:00</lastModDate>
        
        <creator>Adeel Ahmed</creator>
        
        <creator>Muhammad Fahad Khan</creator>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Khalid Saleem</creator>
        
        <subject>Social networks; undirected graph; centrality measures; community detection; data visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In the recent era, networks of data are growing massively and forming complex structures. Data scientists analyze various complex networks in order to understand their structure in a meaningful way. There is a need to detect and identify such complex networks in order to know how they provide means of communication within their complex structure. Social network analysis provides methods to explore and analyze such complex networks using graph theory, network properties, and community detection algorithms. In this paper, an analysis of the co-authorship network of the Public Relations and Public Administration subjects of the Microsoft Academic Graph (MAG) is presented, using common centrality measures. The authors belong to different research and academic institutes all over the world. Cohesive groups of authors have been identified and ranked on the basis of centrality measures such as betweenness, degree, PageRank, and closeness. Experimental results reveal authors who are strong in a specific domain, have deep field knowledge, and maintain collaboration with their peers in the fields of Public Relations and Public Administration.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_40-Analysis_of_Coauthorship_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Components’ Coupling Detection for Software Reusability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091039</link>
        <id>10.14569/IJACSA.2018.091039</id>
        <doi>10.14569/IJACSA.2018.091039</doi>
        <lastModDate>2018-10-31T16:38:42.0830000+00:00</lastModDate>
        
        <creator>Zakarya A. Alzamil</creator>
        
        <subject>Software component coupling; software component dependence; software component reusability; components interdependence; components dependence testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Most software system design and modeling techniques concentrate on capturing the functional aspects that comprise a system's architecture; non-functional aspects are rarely considered. One of the most important aspects of a software component is reusability. Software reusability may be understood by identifying components' dependences, which can be measured by the coupling between a system's components. In this paper, an approach to detect the coupling between a software system's components is introduced for the purpose of identifying software components' reusability, which may help in refining the system design. The proposed approach uses a dynamic notion of the sequence diagram to understand the dynamic behavior of a software system, and the notions of data and control dependence to detect the dependences among software components. Components' dependences are identified where one component contributes to the output computation of another. The results of the experiments show that the proposed algorithm can help software engineers understand the dependences among software components and optimize the software system model by eliminating unnecessary dependences, thereby enhancing the components' cohesiveness. Such detection provides a better understanding of the software system model in terms of its components' dependences and their influence on reusability, in which eliminating unnecessary dependences may enhance software reusability.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_39-Software_Components_Coupling_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adapted Speed Mechanism for Collision Avoidance in Vehicular Ad hoc Networks Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091038</link>
        <id>10.14569/IJACSA.2018.091038</id>
        <doi>10.14569/IJACSA.2018.091038</doi>
        <lastModDate>2018-10-31T16:38:40.9770000+00:00</lastModDate>
        
        <creator>Said Benkirane</creator>
        
        <creator>Ahmed Jadir</creator>
        
        <subject>VANET; multi-agent systems; safety distances; stopping distance; JADE framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Failure to respect the safety distance between vehicles is the cause of several road accidents. This distance certainly cannot be estimated at random, since physical rules govern its calculation: the higher the speed, the longer the stopping distance, especially in dangerous situations. Thus, the distance between two vehicles must be calculated accordingly. In this paper, we present a mechanism called the Adapted Speed Mechanism (ASM) that adapts vehicle speed to keep the necessary safety distance between vehicles. This mechanism is based on VANET network operation and Multi-Agent System integration to ensure communication and collaboration between vehicles. Real-time calculations are therefore necessary to make adequate and relevant decisions.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_38-Adapted_Speed_Mechanism_for_Collision_Avoidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigational Study and Analysis of Cloud-based Content Delivery Network: Perspectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091037</link>
        <id>10.14569/IJACSA.2018.091037</id>
        <doi>10.14569/IJACSA.2018.091037</doi>
        <lastModDate>2018-10-31T16:38:39.8670000+00:00</lastModDate>
        
        <creator>Suman Jayakumar</creator>
        
        <creator>Prakash .S</creator>
        
        <creator>C.B Akki</creator>
        
        <subject>Content delivery network; cloud computing; distribution network; mobility; scalability; distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Content management on the internet relies on a major technical strategy called the Content Delivery Network (CDN). The design and deployment of a CDN should ensure optimal Quality of Service (QoS). This paper presents a taxonomy of CDNs along with their typical architecture. The latest advances in content-hungry smartphones and smart devices require a more efficient and reliable mechanism for cost-effective content delivery irrespective of bottleneck constraints, which has led to redesigning the entire CDN architecture on the cloud as a CCDN, or to a new business model of CCDN as a service. The design challenges of CCDN, along with its evolved architecture, are discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_37-An_Investigational_Study_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Governance Cloud Security Checklist at Infrastructure as a Service (IaaS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091036</link>
        <id>10.14569/IJACSA.2018.091036</id>
        <doi>10.14569/IJACSA.2018.091036</doi>
        <lastModDate>2018-10-31T16:38:38.7770000+00:00</lastModDate>
        
        <creator>Kamariah Abu Saed</creator>
        
        <creator>Norshakirah Aziz</creator>
        
        <creator>Said Jadid Abdulkadir</creator>
        
        <creator>Noor Hafizah Hassan</creator>
        
        <creator>Izzatdin A Aziz</creator>
        
        <subject>IaaS; security checklist; guidelines; threats; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>A security checklist is an important element in measuring the level of computing security, especially in cloud computing. Vulnerabilities in cloud computing are a major concern because they lead to security issues. While security awareness and training can educate users on the severe impact of malware, implementing data governance and a security checklist can also help reduce the risk of attack. Since a security checklist is an important element in measuring the security level of cloud computing, data governance can help manage data properly with correct procedures. Due to increasing threats and attacks, service providers and service consumers need to adhere to guidelines and/or checklists when measuring the security level of services and to be prepared for unforeseen circumstances, especially on the IaaS platform. As the IaaS platform lies at the lowest level of cloud computing, where data are stored, it is vital that IaaS security be given serious consideration to prevent not only data breaches but also data losses. The objective of this paper is to discuss the implementation of a security checklist at the IaaS layer. This paper also discusses several studies related to security assessments and checklists developed by previous researchers and professional bodies, as well as the results of interview sessions conducted by the authors with several data centers (DCs) and experts regarding the implementation of security measures in small cloud DCs.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_36-Data_Governance_Cloud_Security_Checklist.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Greedy Algorithms to Optimize a Sentence Set Near-Uniformly Distributed on Syllable Units and Punctuation Marks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091035</link>
        <id>10.14569/IJACSA.2018.091035</id>
        <doi>10.14569/IJACSA.2018.091035</doi>
        <lastModDate>2018-10-31T16:38:37.6670000+00:00</lastModDate>
        
        <creator>Bagus Nugroho Budi Nurtomo</creator>
        
        <creator>Suyanto</creator>
        
        <subject>read-speech corpus; optimum sentence set; syllable; punctuation marks; Modified Least-to-Most Greedy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>An optimum sentence set that is near-uniformly distributed on syllable units and punctuation marks is important for developing a syllable-based automatic speech recognition (ASR) system. It is usually extracted from a mother set of millions of unique sentences using the Modified Least-to-Most (LTM) Greedy algorithm. The Modified LTM Greedy algorithm is capable of minimizing the number of syllables but ignores the distribution of their frequencies. Hence, two schemes are proposed to minimize the number of syllables as well as to distribute their frequencies near-uniformly. Testing on a mother set of 10 million Indonesian sentences shows that both schemes perform better than the Modified LTM Greedy algorithm for two syllable units: monosyllables and bisyllables.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_35-Greedy_Algorithms_to_Optimize_a_Sentence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Distance Measures for Feature based Image Registration using AlexNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091034</link>
        <id>10.14569/IJACSA.2018.091034</id>
        <doi>10.14569/IJACSA.2018.091034</doi>
        <lastModDate>2018-10-31T16:38:37.0900000+00:00</lastModDate>
        
        <creator>K. Kavitha</creator>
        
        <creator>B. Sandhya</creator>
        
        <creator>B. Thirumala Rao</creator>
        
        <subject>Distance measures; deep learning; feature detection; feature descriptor; image matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Image registration is a classic problem of computer vision with several applications in areas such as defence, remote sensing, and medicine. Feature-based image registration methods have traditionally used hand-crafted feature extraction algorithms, which detect key points in an image and describe them using a region around each point. Such features are matched using a threshold either on distances or on the ratio of distances computed between the feature descriptors. The evolution of deep learning, in particular convolutional neural networks (CNNs), has enabled researchers to address several vision problems such as recognition, tracking, and localization. Outputs of convolution layers or fully connected layers of CNNs trained for applications like visual recognition have proved effective when used as features in other applications such as retrieval. In this work, a deep CNN, AlexNet, is used in place of hand-crafted features for feature extraction in the first stage of image registration. However, a suitable distance measure and matching method must be identified for effective results. Several distance metrics have been evaluated in the framework of nearest-neighbour and nearest-neighbour-ratio matching methods using a benchmark dataset. Evaluation is done by comparing matching and registration performance using metrics computed from the ground truth.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_34-Evaluation_of_Distance_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Background Subtraction and Artificial Neural Networks for Movement Recognition in Memorizing Quran</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091033</link>
        <id>10.14569/IJACSA.2018.091033</id>
        <doi>10.14569/IJACSA.2018.091033</doi>
        <lastModDate>2018-10-31T16:38:36.0000000+00:00</lastModDate>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Ismatul Maula</creator>
        
        <creator>Wendi Usino</creator>
        
        <creator>Arif Bramantoro</creator>
        
        <subject>Movement recognition; computer vision; Quran memorization movement; background subtraction; back propagation; artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Movement change over time and variations in object appearance are interesting topics for research in computer vision. Object behavior can be recognized through movement changes in video. To recognize object behavior, the target and the trace of an object in a video must be determined across the sequence of frames. To date, object detection in video has been widely used in different areas such as surveillance, robotics, agriculture, health, sports, education, and traffic. This research focuses on the field of education by recognizing the movements of Quantum Maki Quran memorization in a video. The purpose of this study is to enhance existing computer vision techniques in detecting the Quantum Maki Quran memorization movement in a video. It combines the background subtraction method and artificial neural networks, and evaluates the combination to optimize system accuracy. Background subtraction is used as the object detection method, and back propagation in artificial neural networks is used for object classification. Nine videos were obtained from three different volunteers and divided into six training and three testing videos. The experimental results show a system accuracy of 91.67%. It can be concluded that several factors influence the accuracy, such as video capture conditions, video enhancement, the models, feature extraction, and parameter definitions during artificial neural network training.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_33-A_Hybrid_Background_Subtraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E2-Invisible Watermarking for Protecting Intellectual Rights of Medical Images and Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091032</link>
        <id>10.14569/IJACSA.2018.091032</id>
        <doi>10.14569/IJACSA.2018.091032</doi>
        <lastModDate>2018-10-31T16:38:34.9070000+00:00</lastModDate>
        
        <creator>Kavitha K. J.</creator>
        
        <creator>Dr. B. Priestly Shan</creator>
        
        <subject>Myhealthrecord; SVD; IWT; RSA; QR code; encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In today’s digital era, the practice of telemedicine has become common; it involves the transmission of medical images and Myhealthrecord (MHR) data for further diagnosis in emergencies, so maintaining the integrity, robustness, authentication, and confidentiality of such patient data becomes necessary. Many works have shown that digital watermarking is one solution, but it is also known that no single algorithm fulfils all the requirements of the field. Until watermarking techniques become sufficiently robust, encryption can be considered one of the best solutions for protecting the data. Encoding is used to transform information into another form; in the proposed digital watermarking (DWM) scheme, encoding is combined with encryption and DWM to enhance data protection while maintaining the above constraints. In this paper, DWM for medical images is implemented by a joint combination of spatial- and frequency-domain techniques, namely Singular Value Decomposition and Integer Wavelet Transform (SVD-IWT), the 64-bit Rivest-Shamir-Adleman (RSA) crypto-technique, and a new encoding procedure. To avoid degradation of the medical image, which is essential in the medical field, the data payload should be small; this is achieved by using a quick response (QR) code, which consumes less space for large amounts of information. Finally, the proposed system is compared with other traditional methods and evaluated against various image processing and geometric attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_32-E2_Invisible_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convolutional Neural Network Hyper-Parameters Optimization based on Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091031</link>
        <id>10.14569/IJACSA.2018.091031</id>
        <doi>10.14569/IJACSA.2018.091031</doi>
        <lastModDate>2018-10-31T16:38:33.7670000+00:00</lastModDate>
        
        <creator>Sehla Loussaief</creator>
        
        <creator>Afef Abdelkrim</creator>
        
        <subject>Machine learning; computer vision; image classification; convolutional neural network; CNN hyper parameters; enhanced E-CNN-MP; genetic algorithms; learning accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>In machine learning for computer vision applications, the Convolutional Neural Network (CNN) is the most widely used technique for image classification. Despite the efficiency of these deep neural networks, choosing their optimal architecture for a given task remains an open problem. In fact, CNN performance depends on many hyper-parameters, namely CNN depth, the number of convolutional layers, the number of filters, and their respective sizes. Many CNN structures have been manually designed by researchers and then evaluated to verify their efficiency. In this paper, our contribution is an innovative approach, labeled Enhanced Elite CNN Model Propagation (Enhanced E-CNN-MP), to automatically learn the optimal structure of a CNN. To traverse the large search space of candidate solutions, our approach is based on Genetic Algorithms (GA), meta-heuristic algorithms well known for non-deterministic problem resolution. Simulations demonstrate the ability of the designed approach to compute optimal CNN hyper-parameters for a given classification task. The classification accuracy of the CNN designed with the Enhanced E-CNN-MP method exceeds that of public CNNs, even when they use the Transfer Learning technique. Our contribution advances the current state of the art by offering scientists, regardless of their field of research, the ability to design optimal CNNs for any particular classification problem.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_31-Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heterogeneous HW/SW FPGA-Based Embedded System for Database Sequencing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091030</link>
        <id>10.14569/IJACSA.2018.091030</id>
        <doi>10.14569/IJACSA.2018.091030</doi>
        <lastModDate>2018-10-31T16:38:32.5670000+00:00</lastModDate>
        
        <creator>Talal Bonny</creator>
        
        <subject>Database; sequence comparison; dynamic programming; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Database sequencing applications, including sequence comparison, searching, and analysis, are among the most computation- and time-intensive tasks. Heuristic algorithms suffer from sensitivity, while traditional sequencing methods require searching the whole database to find the most closely matched sequences, which demands high computation power and time. This paper introduces a dynamic programming technique based on a measure of similarity between two sequential objects in the database using two components, namely frequency and mean. Additionally, database sequences with the lowest scores in the comparison process are excluded, so that the similarity algorithm between a query sequence and the other database sequences is applied only to meaningful parts of the database. The proposed technique was implemented and validated on a heterogeneous HW/SW FPGA-based embedded system platform. The implementation was partitioned into (1) a hardware part (running on the logic gates of the FPGA) and (2) a software part (running on the ARM processor of the FPGA). The validation study showed a significant reduction in computation time, accelerating the database sequencing processes by 60% compared to traditional methods.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_30-Heterogeneous_HWSW_FPGA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Scheduling Frameworks for Heterogeneous Computing Toward Exascale</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091029</link>
        <id>10.14569/IJACSA.2018.091029</id>
        <doi>10.14569/IJACSA.2018.091029</doi>
        <lastModDate>2018-10-31T16:38:31.9900000+00:00</lastModDate>
        
        <creator>Suhelah Sandokji</creator>
        
        <creator>Fathy Eassa</creator>
        
        <subject>Exascale computing; heterogenous computing; task scheduler framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The race for Exascale Computing has naturally led computer architecture to transition from the multicore era into the heterogeneous era. Many systems ship with integrated CPUs and graphics processing units (GPUs). Moreover, various applications need to utilize both CPU and GPU execution resources, as the unique features of each processing unit (PU) demonstrate its particular strengths. Several research studies consider partitioning applications, scheduling their execution, and allocating them onto the PU resources. They investigate the important role of optimization and of intelligently scheduling tasks across combined CPU/GPU architectures in achieving the peak performance and power-consumption targets of Exascale Computing. In this paper, the evolution of heterogeneous computing architectures, the approaches and challenges toward achieving Exascale Computing, and the various algorithms and techniques used to partition and schedule tasks are all reviewed. The existing frameworks and runtime systems used to optimize performance and improve energy efficiency on discrete and fused chips in order to attain the objectives of Exascale Computing are also reviewed.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_29-Task_Scheduling_Frameworks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction Project Quality Management using Building Information Modeling 360 Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091028</link>
        <id>10.14569/IJACSA.2018.091028</id>
        <doi>10.14569/IJACSA.2018.091028</doi>
        <lastModDate>2018-10-31T16:38:30.8800000+00:00</lastModDate>
        
        <creator>Phong Thanh Nguyen</creator>
        
        <creator>Thu Anh Nguyen</creator>
        
        <creator>Tin Minh Cao</creator>
        
        <creator>Khoa Dang Vo</creator>
        
        <creator>Vy Dang Bich Huynh</creator>
        
        <creator>Quyen Le Hoang Thuy To Nguyen</creator>
        
        <creator>Phuong Thanh Phan</creator>
        
        <creator>Loan Phuc Le</creator>
        
        <subject>BIM 360 field; cloud computing; project management; quality management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>A quality management process plays a vital role in the success of engineering and construction projects. The management process needs to be effective and efficient if projects are to be completed on time and within the project’s budget. Many construction projects’ quality management processes are paper-based, which makes them time-consuming and inefficient. The next generation of Building Information Modeling (BIM) is the BIM-cloud. The BIM-cloud can help to enhance the effectiveness of a quality management process; it can also save an organization time and money. This paper proposes a quality management model based on cloud computing, mobile devices, and the Autodesk BIM 360 Field software. This software functions as a platform for gathering, managing and controlling the quality of the management data. The process is then applied to a real project in Vietnam to verify the benefits and barriers of using the BIM 360 Field for a construction project.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_28-Construction_Project_Quality_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Development for Predicting the Occurrence of Benign Laryngeal Lesions using Support Vector Machine: Focusing on South Korean Adults Living in Local Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091027</link>
        <id>10.14569/IJACSA.2018.091027</id>
        <doi>10.14569/IJACSA.2018.091027</doi>
        <lastModDate>2018-10-31T16:38:30.2900000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Support vector machine; SVM; dysphonia; voice disorder; prediction model; risk factor; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Disease is a consequence of interactions between many complex risk factors rather than of a single cause. Therefore, it is necessary to develop disease prediction models that use multiple risk factors instead of a single risk factor. The objective of this study was to develop a model for predicting the occurrence of benign laryngeal lesions based on a support vector machine (SVM), using ear, nose, and throat (ENT) data from a national-level survey, and to provide a basis for selecting high-risk groups and preventing voice disorders. This study targeted 16,938 adults (≥19 years) who participated in the ENT examination among the people who completed the Korea National Health and Nutrition Examination Survey from 2010 to 2012. The study compared the prediction power of the Gauss function, which was used for this study, with those of a linear algorithm, a polynomial algorithm, and a sigmoid algorithm. Moreover, the four kernels were each divided into C-SVM and Nu-SVM variants to compare the prediction accuracy of C-SVM with that of Nu-SVM. The ‘benign laryngeal lesion prediction model’ based on the SVM could derive preventive factors and risk factors. The final prediction rate of this SVM, using 479 support vectors, was 97.306. The fitness results indicated that the difference between C-SVM and Nu-SVM was not large in the benign laryngeal lesion prediction model. In terms of kernel type, the prediction accuracy of the Gauss kernel was the highest and that of the sigmoid kernel was the lowest. The results of this study will provide an important basis for preventing and managing benign laryngeal lesions.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_27-Model_Development_for_Predicting_the_Occurrence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of the Proposed Framework for Access Control in the Cloud and BYOD Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091026</link>
        <id>10.14569/IJACSA.2018.091026</id>
        <doi>10.14569/IJACSA.2018.091026</doi>
        <lastModDate>2018-10-31T16:38:29.6970000+00:00</lastModDate>
        
        <creator>Khalid Almarhabi</creator>
        
        <creator>Kamal Jambi</creator>
        
        <creator>Fathy Eassa</creator>
        
        <creator>Omar Batarfi</creator>
        
        <subject>Bring your own device; access control; policy; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware, and other malicious downloads tricking their way onto personal devices, organizations need to reconsider their information security policies. Malicious programs can download onto a personal device without the user even knowing, which can have disastrous results for both the organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect, with huge financial and legal implications and loss of productivity for organizations. This is a difficult challenge: organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that BYOD use poses to organizations. An analysis of a large body of research showed that previous studies addressed single issues; we integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Preliminary results of the study are positive, with the framework reducing access control issues.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_26-An_Evaluation_of_the_Proposed_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moving from Heterogeneous Data Sources to Big Data: Interoperability and Integration Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091025</link>
        <id>10.14569/IJACSA.2018.091025</id>
        <doi>10.14569/IJACSA.2018.091025</doi>
        <lastModDate>2018-10-31T16:38:28.6030000+00:00</lastModDate>
        
        <creator>Mohamed Osman Hegazi</creator>
        
        <creator>Dinesh Kumar Saini</creator>
        
        <creator>Kashif Zia</creator>
        
        <subject>Heterogeneous databases; interoperability; integration; big data; analytics and intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Heterogeneous databases are now facing the emerging challenge of moving towards big data. These databases are ad hoc, complex polyglot systems and NoSQL tools that are semantically annotated. Integrating these heterogeneous databases is becoming very challenging because big data analytics integrates human and machine contexts. In this paper, an attempt is made to study heterogeneous databases, their interoperability and integration issues, and the impact of these issues on data analysis. Data science has grown exponentially, and a new paradigm has emerged: the integration of heterogeneous data into big data. Information, knowledge, and decision making have become easier, but the size of databases has grown until they became big data.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_25-Moving_from_Heterogeneous_Data_Sources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Mobile Forensic Tools Evaluation on Android-Based LINE Messenger</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091024</link>
        <id>10.14569/IJACSA.2018.091024</id>
        <doi>10.14569/IJACSA.2018.091024</doi>
        <lastModDate>2018-10-31T16:38:28.0100000+00:00</lastModDate>
        
        <creator>Imam Riadi</creator>
        
        <creator>Abdul Fadlil</creator>
        
        <creator>Ammar Fauzan</creator>
        
        <subject>Forensic; investigation; mobile; evaluation; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The limitations of forensic tools and of mobile devices’ operating systems are two problems for researchers in the mobile forensics field. Nevertheless, testing forensic tools on several devices can be helpful in an investigation, so the evaluation of forensic tools is one gateway to the goals of digital forensics research. Mobile forensics, the branch of digital forensics that focuses on data recovery from mobile devices, faces problems in analytical ability because of the differing features of forensic tools. In this research, the researchers present studies and techniques on tool capability and evaluate the tools based on digital evidence from LINE analysis. The experiment combined VV methods and NIST standard forensic methods to produce a model of forensic tool evaluation steps. As a result of the experiment, in messenger application analysis Oxygen Forensic achieved an index of 61.90%, while MOBILedit Forensic achieved the highest index, at 76.19%. This research successfully assessed the performance of these forensic tools.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_24-A_Study_of_Mobile_Forensic_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Effectiveness of Decision Support System: Findings and Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091023</link>
        <id>10.14569/IJACSA.2018.091023</id>
        <doi>10.14569/IJACSA.2018.091023</doi>
        <lastModDate>2018-10-31T16:38:27.4330000+00:00</lastModDate>
        
        <creator>Ayman G. Fayoumi</creator>
        
        <subject>DSS; effectiveness of decisions; framework; measurement phase</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Nowadays, despite the popularity and credibility of Decision Support Systems (DSS), measuring the efficacy of the decisions taken by a DSS remains an open problem. As previous work identifies, the complexities involved in measuring the efficiency of a DSS mean that, most of the time, DSS efficiency is case dependent. The methods for collecting and analyzing data, building models, deploying models, integrating data and models, and finally taking decisions are some of the major issues related to measuring DSS effectiveness. This paper focuses on measuring the effectiveness of DSS and highlights the issues that still need to be addressed with efficient frameworks. Based on the literature review and discussion presented in Sections I and II, this study proposes a framework and its implementation, and presents how the proposed model improves on previous work. The major findings of this study reflect that every decision made by a DSS is based on the collected data, is analyzed by DSS tools, and depends on the developed models. Therefore, this study illustrates that each component of a DSS plays a vital role in measuring its effectiveness, whatever the case and problem for which the DSS has been built and implemented. In addition, the supporting methods and measuring factors for each component are further findings of this study. Any decision taken by the DSS is evaluated separately in order to measure the effectiveness of the system. The proposed framework provides a new framework for decision makers working in any industry.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_23-Evaluating_the_Effectiveness_of_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CryptoROS: A Secure Communication Architecture for ROS-Based Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091022</link>
        <id>10.14569/IJACSA.2018.091022</id>
        <doi>10.14569/IJACSA.2018.091022</doi>
        <lastModDate>2018-10-31T16:38:26.3430000+00:00</lastModDate>
        
        <creator>Roham Amini</creator>
        
        <creator>Rossilawati Sulaiman</creator>
        
        <creator>Abdul Hadi Abd Rahman Kurais</creator>
        
        <subject>Robotics; ROS; cyber security; cryptography; access control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Cyber-attacks are a growing threat to future robots. The shift towards automation has increased the relevance of, and reliance on, robots. Securing robots has been a secondary or tertiary priority, and thus robots are vulnerable to cyber-attacks. Securing robots must become an essential (built-in) part of the design rather than being considered a subsequent (later) add-on. ROS is a widely used and popular open-source framework, and robots using ROS are increasing in popularity. However, ROS is vulnerable to cyber-attacks and needs to be secured before robots using ROS reach the mass market. This study proposes an architecture to secure ROS using cryptographic mechanisms, addressing the most common ROS security issues. The advantages of our proposed secure architecture, CryptoROS, are that no changes to ROS software libraries and tools are required, it works with all ROS client libraries (e.g. rospy, roscpp), and rebuilding nodes is not necessary.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_22-CryptoROS_a_Secure_Communication_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Groundwater Vulnerability to Pollution using DRASTIC Model and Fuzzy Logic in Herat City, Afghanistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091021</link>
        <id>10.14569/IJACSA.2018.091021</id>
        <doi>10.14569/IJACSA.2018.091021</doi>
        <lastModDate>2018-10-31T16:38:25.2330000+00:00</lastModDate>
        
        <creator>Nasir Ahmad Gesim</creator>
        
        <creator>Takeo Okazaki</creator>
        
        <subject>Afghanistan; DRASTIC; de-fuzzification; groundwater; modeling; vulnerability; herat</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Groundwater (GW) vulnerability maps have become a standard tool for protecting groundwater resources from pollution because, on the one hand, groundwater represents the main source of drinking water and, on the other hand, high concentrations of human activities such as industrial, agricultural, and household activities represent real or potential sources of groundwater contamination. The main objective of this study is to assess the groundwater-vulnerable zones in Herat city, the second fastest growing big city in Afghanistan, using the DRASTIC model and fuzzy logic. DRASTIC is based on seven data layers, i.e. Depth of water, net Recharge, Aquifer media, Soil media, Topography, Impact of vadose zone, and hydraulic Conductivity, that provide the input to the modeling. The study shows that 51% of the city’s groundwater is highly vulnerable to water pollution. Validation of the model showed that the vulnerability map integrated from kriging-interpolated layers has better accuracy than the inverse distance weighting (IDW) method. The study suggests that the proposed model, with the rating values of the DRASTIC parameters assigned using a fuzzy-logic inference system, can be an effective tool for local authorities responsible for managing groundwater resources, especially in Afghanistan.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_21-Assessment_of_Groundwater_Vulnerability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Negotiation as a Collaborative Tool for Determining Permissions and Detection of Malicious Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091020</link>
        <id>10.14569/IJACSA.2018.091020</id>
        <doi>10.14569/IJACSA.2018.091020</doi>
        <lastModDate>2018-10-31T16:38:24.1270000+00:00</lastModDate>
        
        <creator>Rabia Riaz</creator>
        
        <creator>Sanam Shahla Rizvi</creator>
        
        <creator>Mubashar Ahmad</creator>
        
        <creator>Sana Shokat</creator>
        
        <creator>Se Jin Kwon</creator>
        
        <subject>Collaborative learning; intrusion detection; mobile applications; information security; web based learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Users of the Android OS find it very difficult to understand and comprehend its permission mechanism. Frequently, users tend to ignore permission negotiation dialogs during installation of an application. Users who do pay attention to the permission negotiation dialogs find it tough to comprehend the description and evaluation of the permission procedure, and they do not know the impact of granting these permissions on their data. One major issue is that users are unaware of how an application uses their data: they have no insight, after granting permissions to the application, into the effect of these permissions on their data’s privacy and security. This research reveals that discrete permission settings help users secure their device resources and data. This study uses a distinct technique to detect the danger of unnecessary permissions. It helps end users of the Android OS to understand the problems, provides them with a better way to deal with those problems, and gives them grounds to explore alternatives.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_20-Negotiation_as_a_Collaborative_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ABJAD Arabic-Based Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091019</link>
        <id>10.14569/IJACSA.2018.091019</id>
        <doi>10.14569/IJACSA.2018.091019</doi>
        <lastModDate>2018-10-31T16:38:23.5330000+00:00</lastModDate>
        
        <creator>Ahmad H. Al-Omari</creator>
        
        <subject>Arabic-based cryptography; classical encryption; Arabic language encryption; shared key; keyword</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The researcher introduces an enhanced classical Arabic-based encryption technique that is essentially designed for Arab nations. The new algorithm uses the shared-key technique, where the Keyword system Modulus is employed to add randomness and confusion to the table of alphabets being used. The results proved that the technique is resistant to brute-force and cryptanalysis attacks: the time needed to break the algorithm is huge, and decrypting the ciphertext using language frequency and language characteristics is hard and infeasible. The technique assumes the existence of a secure channel for the keyword exchange.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_19-ABJAD_Arabic_based_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Design and Evaluation of a User-Centric Information Security Risk Assessment and Response Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091018</link>
        <id>10.14569/IJACSA.2018.091018</id>
        <doi>10.14569/IJACSA.2018.091018</doi>
        <lastModDate>2018-10-31T16:38:22.5200000+00:00</lastModDate>
        
        <creator>Manal Alohali</creator>
        
        <creator>Nathan Clarke</creator>
        
        <creator>Steven Furnell</creator>
        
        <subject>Risk; analysis; security behavior; BFI; correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The risk of sensitive information disclosure and modification through the use of online services has increased considerably and may result in significant damage. While the management and assessment of such risks is a well-known discipline for organizations, it is a challenge for users from the general public. Users have difficulties in using, understanding, and reacting to security-related threats; moreover, users only try to protect themselves from risks salient to them. Motivated by the lack of risk assessment solutions and the limited impact of awareness programs tailored for users of the general public, this paper aims to develop a structured approach to help protect users from threats and vulnerabilities and, thus, reduce overall information security risks. By focusing on the user, and on the fact that different users react differently to the same stimuli, the authors developed a user-centric risk assessment and response framework that assesses and communicates risk at both the user and system level in an individualized, timely, and continuous way. Three risk assessment models were proposed that depend on user-centric and behavior-related factors when calculating risk. The framework was evaluated using a scenario-based simulation of a number of users, and the results were analyzed. The analysis demonstrated the effectiveness and feasibility of the proposed approach. Encouragingly, the analysis indicated that risk can be assessed differently for the same behavior based upon a number of user-centric and behavior-related factors, resulting in an individualized, granular risk score/level. This granular risk assessment provided a more insightful evaluation of both risk and response. The analysis of results was also useful in demonstrating that risk is not the same for all users and that the proposed model is effective in adapting to differences between users, offering a novel approach to assessing information security risks.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_18-The_Design_and_Evaluation_of_a_User_Centric_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Strategic Management System for Northern Border University using Unified Modeling Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091017</link>
        <id>10.14569/IJACSA.2018.091017</id>
        <doi>10.14569/IJACSA.2018.091017</doi>
        <lastModDate>2018-10-31T16:38:21.3800000+00:00</lastModDate>
        
        <creator>Shahrin Azuan Nazeer</creator>
        
        <subject>Strategy management; strategic management system; object-oriented analysis and design; unified modeling language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>All organizations engage in the strategy management process, either formally or informally. Strategy management refers to the entire scope of strategic decision-making activity in an organization to ensure its continuous success; hence, a strategic management system is viewed as an important tool for strategy management. Northern Border University initiated its first five-year strategy plan for the years 1435-1439H (2013-2018). However, the strategy plan is managed without a strategic management system. Thus, the university has a fundamental disconnect between the formulation of the strategy and the execution of that strategy into useful action: there is no integration between strategy formulation and implementation, which are treated separately instead of as an integrated system. Therefore, it is difficult for the university to translate its strategies into operational objectives, processes, and activities. This paper presents the design process of the strategic management system for the university, whose main purpose is to manage the university’s strategy plan throughout its life cycle. The design of the strategic management system is based on an object-oriented approach using the Unified Modeling Language. The system will be used to formulate, implement, monitor, and control an appropriate university strategy plan and to support strategic decision-making for the university. The solution will thus contribute to the improvement of the university’s performance.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_17-Design_of_Strategic_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Runtime Reasoning of Requirements for Self-Adaptive Systems Using AI Planning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091016</link>
        <id>10.14569/IJACSA.2018.091016</id>
        <doi>10.14569/IJACSA.2018.091016</doi>
        <lastModDate>2018-10-31T16:38:20.7870000+00:00</lastModDate>
        
        <creator>Zara Hassan</creator>
        
        <creator>Nauman Qureshi</creator>
        
        <creator>Muhammad Adnan Hashmi</creator>
        
        <creator>Arshad Ali</creator>
        
        <subject>Self-Adaptive Systems (SAS); reasoning; requirement engineering; AI planning; CARE framework; runtime reasoning of requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Over the years, the domain of Self-Adaptive Systems (SAS) has gained significant importance in the software engineering community. Such SAS must ensure high customizability and, at the same time, effective reasoning so that end-user goals are met more effectively and efficiently. In this context, techniques related to Automated Planning have acquired substantial precedence owing to their adaptability to diverse scenarios, based upon enhanced knowledge extraction from the available Knowledge Base. These AI planning techniques help support the self-adaptation mechanism of SAS. We have investigated these techniques to perform runtime reasoning of SAS requirements. This paper proposes an architecture for implementing the reasoning component of the previously proposed Continuous Adaptive Requirement Engineering (CARE) framework. The proposed architecture has been experimentally verified by the implementation of a prototype application using JSHOP2 (a Java implementation of SHOP2, an HTN planner).</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_16-Runtime_Reasoning_of_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Rule-Based Root Extraction Algorithm for Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091015</link>
        <id>10.14569/IJACSA.2018.091015</id>
        <doi>10.14569/IJACSA.2018.091015</doi>
        <lastModDate>2018-10-31T16:38:19.6630000+00:00</lastModDate>
        
        <creator>Nisrean Thalji</creator>
        
        <creator>Nik Adilah Hanin</creator>
        
        <creator>Walid Bani Hani</creator>
        
        <creator>Sohair Al-Hakeem</creator>
        
        <creator>Zyad Thalji</creator>
        
        <subject>Root; stem; rules; affix; pattern; corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Non-vocalized Arabic words are ambiguous because they may have different meanings and, therefore, more than one root. Many Arabic root extraction algorithms have been developed to extract the roots of non-vocalized Arabic words. However, most of them return only one root and produce lower accuracy than reported when tested on different datasets. An Arabic root extraction algorithm is an urgent need for applications such as information retrieval systems, indexing, text mining, text classification, data compression, spell checking, text summarization, question answering systems and machine translation. In this work, a new rule-based Arabic root extraction algorithm is developed that focuses on overcoming the limitations of previous works. The proposed algorithm is compared to the algorithm of Khoja, a well-known Arabic root extraction algorithm that produces high accuracy. Testing was conducted on the corpus of Thalji, which was built mainly to test and compare Arabic root extraction algorithms. It contains 720,000 word-root pairs from 12000 roots, 430 prefixes, 320 suffixes, and 4320 patterns. The experimental results show that the algorithm of Khoja achieved 63% accuracy, while the proposed algorithm achieved 94%.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_15-A_Novel_Rule_based_Root_Extraction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Systems as Thinging Machine: A Case Study of a Service Company</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091014</link>
        <id>10.14569/IJACSA.2018.091014</id>
        <doi>10.14569/IJACSA.2018.091014</doi>
        <lastModDate>2018-10-31T16:38:19.0870000+00:00</lastModDate>
        
        <creator>Sabah S. Al-Fedaghi</creator>
        
        <creator>Yousef Atiyah</creator>
        
        <subject>Tracking systems; system documentation; system control; abstract machine; conceptual model; thinging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Object tracking systems play important roles in tracking moving objects and in addressing concerns such as safety, security and other location-related applications. Problems arise from the difficulty of creating a well-defined and understandable description of tracking systems. Nowadays, describing such processes results in fragmentary representations that often lead to difficulties in creating documentation. Additionally, once tasks are learned by assigned personnel, repetition leads them to continue on autopilot in a way that often degrades their effectiveness. This paper proposes modeling tracking systems in terms of a new diagrammatic methodology to produce engineering-like schemata. The resulting diagrams can be used in documentation, explanation, communication, education and control.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_14-Tracking_Systems_as_Thinging_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formalization of UML Composite Structure using Colored Petri Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091013</link>
        <id>10.14569/IJACSA.2018.091013</id>
        <doi>10.14569/IJACSA.2018.091013</doi>
        <lastModDate>2018-10-31T16:38:18.5100000+00:00</lastModDate>
        
        <creator>Rao Sohail Iqbal</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Haseeb Ur Rehman</creator>
        
        <creator>Wajid Raza</creator>
        
        <subject>Design specification; UML (Unified Modeling Language); semantic; transformation; deadlocks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Design specification and requirement analysis, during the development process of transforming real-world problems into software systems, are subject to severe issues owing to the involvement of semantics. Although the Unified Modeling Language (UML) is now recognized as the standard language for the design and specification of object-oriented systems, its structures have numerous drawbacks, which include a lack of semantics definition and unidentified deadlocks. This research work proposes a model to avoid deadlocks, specifically in the composite structure of UML. Verification of system models by formal methods holds significance, particularly at the requirement specification and design level, to ensure the accuracy of models and highlight design problems before implementation. The paper proposes rules that allow software engineers to formalize the behavior of the UML 2.0 composite structure using Colored Petri nets. Using these rules, the research shall analyze the corresponding Colored Petri nets and conclude the properties of the original workflow, using theoretical outcomes in the Colored Petri nets domain.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_13-Formalization_of_UML_Composite_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Purchasing Module for Agriculture E-Commerce using Dynamic System Development Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091012</link>
        <id>10.14569/IJACSA.2018.091012</id>
        <doi>10.14569/IJACSA.2018.091012</doi>
        <lastModDate>2018-10-31T16:38:17.4030000+00:00</lastModDate>
        
        <creator>Rosa Delima</creator>
        
        <creator>Halim Budi Santoso</creator>
        
        <creator>Novan Andriyanto</creator>
        
        <creator>Argo Wibowo</creator>
        
        <subject>Agricultural e-commerce; dynamic system development method; DSDM; purchase module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The trading model has been changing since the vast implementation of Information and Communication Technology in every sector. This model is known as e-Commerce. However, there are still few companies that specifically trade agricultural products. Agriculture e-Commerce is known as a platform to buy and sell agricultural products. It has an important role in supporting economic development and market expansion for farmers in particular and people in rural areas in general. There is still limited access to providers that buy and sell agricultural products to farmers and their representatives. Therefore, this research develops a specific agriculture e-commerce. There are two main modules for agriculture e-commerce: the purchasing and buying modules. In this article, we develop the first module, the purchasing module. The purchasing module was developed using the Dynamic System Development Method (DSDM). The development phase includes a feasibility study, business study, functional model iteration, and design-and-build model iteration. At the end of the phase, testing is conducted. The result of this study is a prototype of an agriculture e-Commerce product with predefined functions. The purchasing module of the system gives farmers the opportunity to buy tools and materials. The system has two main functions: purchasing system management and reporting management. System testing was also conducted to test the system.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_12-Development_of_Purchasing_Module_for_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Normalization of Unstructured and Informal Text in Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091011</link>
        <id>10.14569/IJACSA.2018.091011</id>
        <doi>10.14569/IJACSA.2018.091011</doi>
        <lastModDate>2018-10-31T16:38:16.2970000+00:00</lastModDate>
        
        <creator>Muhammad Javed</creator>
        
        <creator>Shahid Kamal</creator>
        
        <subject>Informal; normalization; opinion mining; roman; sentiment analysis; text preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Sentiment analysis is a natural language processing problem which deals with the extraction and analysis of public sentiments shared about target entities over microblogging websites. This field has gained great attention due to the huge availability of decision-making textual content. Sentiment analysis has enormous application areas such as market analysis, service analysis and showbiz analysis; movies, sports and even the popularity and acceptance rate of political policies can also be predicted via sentiment analysis systems. Although a tremendous volume of opinionative text is available, it is unstructured and noisy, due to which sentiment classifiers cannot achieve good outcomes. Normalization is the process used to clean noise from unstructured text for sentiment analysis. In this study we propose a mechanism for the normalization of informal and unstructured text. The proposed mechanism comprises four essential phases: noise reduction, part-of-speech tagging, stop word removal, and stemming and lemmatization. Numerous experiments were performed on a Twitter data set with unsupervised lexicons and dictionaries. Python and the Natural Language Toolkit were used for performing all four essential steps. This study demonstrates that normalization of informal tokens in tweets improved the overall classification accuracy from 75.42 to 82.357.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_11-Normalization_of_Unstructured_and_Informal_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Weight Dropping Policy for Improve High-Priority Message Delivery Delay in Vehicular Delay-Tolerant Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091010</link>
        <id>10.14569/IJACSA.2018.091010</id>
        <doi>10.14569/IJACSA.2018.091010</doi>
        <lastModDate>2018-10-31T16:38:15.1870000+00:00</lastModDate>
        
        <creator>GBALLOU Yao Th&#233;ophile</creator>
        
        <creator>GOORE Bi Tra</creator>
        
        <creator>Brou Konan Marcelin</creator>
        
        <subject>Vehicular delay-tolerant network; dropping policies; traffic differentiation; message weight; high priority message</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Vehicular Delay-Tolerant Network (VDTN) is a special case of Delay-Tolerant Network (DTN) in which connectivity is provided by the movement of vehicles, with traffic prioritization to meet the requirements of different applications. Due to high node mobility, short contact times and intermittent connectivity, VDTNs use multi-copy routing protocols to increase message delivery rates and reduce delay. However, due to limited resources (bandwidth and storage capacity), these protocols cause rapid buffer overflow and therefore the degradation of overall network performance. In this paper, we propose a buffer drop policy based on message weight, including traffic prioritization, to improve the delivery delay of high-priority messages. Thus, the memory is subdivided into a high-weight queue and a low-weight queue. When the buffer is overflowing and a new message arrives, the algorithm determines the message to be dropped from the queues, considering whether the current node is the destination of the message, the position of the current node with respect to the destination of the message, and the age of the messages in the network.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_10-Dynamic_Weight_Dropping_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determination of Weighting Assessment on DREAD Model using Profile Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091009</link>
        <id>10.14569/IJACSA.2018.091009</id>
        <doi>10.14569/IJACSA.2018.091009</doi>
        <lastModDate>2018-10-31T16:38:14.1100000+00:00</lastModDate>
        
        <creator>Didit Suprihanto</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Khabib Mustofa</creator>
        
        <subject>DREAD; risk; assessment; profile matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Web application creators often lack an understanding of the security threats that can occur in the applications they create, while security threats can create new, more complex problems. These security threats pose risks and can even result in large losses. Determining risk ratings in a web application software development team is still a source of problems or debate. The problem is that not all team members agree in the risk rating assessment process. This is caused by differences in the opinions and assumptions of team members about threats and the fact that assessors have different types of expertise, whereas the DREAD model places each expert in the same position, meaning there are no differences in weight at the time of assessment. DREAD stands for five aspects related to security threats in web applications: D (Potential Damage), R (Reproducibility), E (Exploitability), A (Affected User), and D (Discoverability). The proposal gives weight to each assessor by using the profile matching method to produce an assessment involving assessors with different types of expertise, weighting each assessor according to their relevance to the assessed aspects, and rating each type of expertise according to the aspects assessed for the DREAD model. The result of the study shows that the proposed method can produce assessment weights close to the target.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_9-Determination_of_Weighting_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence based Fertilizer Control for Improvement of Rice Quality and Harvest Amount</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091008</link>
        <id>10.14569/IJACSA.2018.091008</id>
        <doi>10.14569/IJACSA.2018.091008</doi>
        <lastModDate>2018-10-31T16:38:12.9870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <subject>Nitrogen content; protein content; rice paddy field; remote sensing; regression analysis; rice crop quality; harvest amount; fertilizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Artificial Intelligence (AI) based fertilizer control for improvement of rice quality and harvest amount is proposed, together with an intelligent drone-based rice field monitoring system. Through experiments at rice paddy fields situated at the Saga Prefectural Research Institute of Agriculture (SPRIA) in Saga city, Japan, it is found that the proposed system allows controlling rice crop quality and harvest amount by changing the fertilizer type and supply amount. The most appropriate fertilizer supply management method, which maximizes rice crop quality and harvest amount, is also found. Furthermore, rice crop quality and harvest amount can be predicted in the early stage of rice leaf growth. Therefore, rice crop quality and harvest amount become controllable.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_8-Artificial_Intelligence_based_Fertilizer_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The User Behavior Analysis Based on Text Messages Using Parafac and Block Term Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091007</link>
        <id>10.14569/IJACSA.2018.091007</id>
        <doi>10.14569/IJACSA.2018.091007</doi>
        <lastModDate>2018-10-31T16:38:12.4100000+00:00</lastModDate>
        
        <creator>Bilius Laura Bianca</creator>
        
        <subject>Parafac decomposition; block term decomposition; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Tensor decompositions represent a starting point for big data analysis and for dimensionality reduction, object detection, clustering and so on. This paper presents a method to study the behavior of users in the online environment and beyond. A starting point for analyzing this type of data is combining the Parafac tensor decomposition and the Block Term Decomposition.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_7-The_User_Behavior_Analysis_based_on_Text_Messages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Changes Detection for Dementia People with Spectrograms from Physiological Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091006</link>
        <id>10.14569/IJACSA.2018.091006</id>
        <doi>10.14569/IJACSA.2018.091006</doi>
        <lastModDate>2018-10-31T16:38:11.3200000+00:00</lastModDate>
        
        <creator>Zeng Fangmeng</creator>
        
        <creator>Liao Peijia</creator>
        
        <creator>Miyuki Iwamoto</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Emotion classification; people with dementia; EEG and ECG; spectrograms; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Due to the aging of society, there has recently been an increasing percentage of people with serious cognitive decline and dementia around the world. Such patients often lose their diversity of facial expressions and even their ability to speak, rendering them unable to express their feelings to their caregivers. However, emotions and feelings are strongly correlated with physiological signals, detectable with EEG, ECG, etc. Therefore, this research develops an emotion prediction system for people with dementia using bio-signals to support their interaction with their caregivers. In this paper, we focused on a previous study of binary classification of emotional changes using spectrograms of EEG and RRI by CNN, verifying the effectiveness of the method. Firstly, the participants were required to watch stimulating videos while their EEG and ECG data were collected. Then, STFT was performed, processing the raw data signals by extracting time-frequency domain features to get the spectrograms. Finally, deep learning was used to detect the emotional changes. A CNN was used for arousal classification, with an accuracy of 90.00% with EEG spectrograms, 91.67% with RRI spectrograms, and 93.33% with EEG and RRI spectrograms.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_6-Emotional_Changes_Detection_for_Dementia_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A P System for K-Medoids-Based Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091005</link>
        <id>10.14569/IJACSA.2018.091005</id>
        <doi>10.14569/IJACSA.2018.091005</doi>
        <lastModDate>2018-10-31T16:38:10.2270000+00:00</lastModDate>
        
        <creator>Ping Guo</creator>
        
        <creator>Jingya Xie</creator>
        
        <subject>P systems; clustering; k-medoids-based clustering; membrane computing; parallel and distributed computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>The membrane computing model, also known as the P system, is a parallel and distributed computing system. The k-medoids algorithm is one of the most famous partition-based clustering algorithms and has been widely used in data analysis and modern scientific research. By combining the P system with the k-medoids algorithm, the maximal parallelism of the P system can effectively reduce the time complexity of the k-medoids clustering algorithm. Based on this, this paper proposes a cell-like P system with promoters and inhibitors based on k-medoids clustering, and an instance is then given to illustrate the practicability and effectiveness of the designed P system.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_5-A_p_System_for_k_Medoids_based_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RASP-FIT: A Fast and Automatic Fault Injection Tool for Code-Modification of FPGA Designs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091004</link>
        <id>10.14569/IJACSA.2018.091004</id>
        <doi>10.14569/IJACSA.2018.091004</doi>
        <lastModDate>2018-10-31T16:38:09.6330000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <creator>Ali Hayek</creator>
        
        <creator>Josef Borcsok</creator>
        
        <subject>Code generator; Fault emulation; Fault injection; Fault simulation; Instrumentation; Parser</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Fault Injection (FI) is the most popular technique used in the evaluation of fault effects and the dependability of a design. Fault Simulation/Emulation (S/E) is involved in several applications such as test data generation, test set evaluation, circuit testability, fault detection &amp; diagnosis, and many others. These applications require a faulty module of the original design for fault injection testing. Currently, Hardware Description Languages (HDL) are involved in improving methodologies related to digital system testing for Field Programmable Gate Arrays (FPGA). Designers can perform advanced testing and fault S/E methods directly on HDL. However, modifying the HDL design is a very cumbersome and time-consuming task. Therefore, a fault injection tool (RASP-FIT) is developed and presented, which consists of a code-modifier, a fault injection control unit and a result analyser. In this paper, the code modification techniques of RASP-FIT are explained for Verilog code at different abstraction levels. Code modification means that a faulty module of the original design is generated which includes different permanent and transient faults at every possible location. The RASP-FIT tool is automatic and fast and does not require much user intervention. To validate these claims, various faulty modules for different benchmark designs are generated and presented.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_4-RASP_FIT_A_Fast_and_Automatic_Fault_Injection_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Isolated Automatic Speech Recognition of Quechua Numbers using MFCC, DTW and KNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091003</link>
        <id>10.14569/IJACSA.2018.091003</id>
        <doi>10.14569/IJACSA.2018.091003</doi>
        <lastModDate>2018-10-31T16:38:09.0570000+00:00</lastModDate>
        
        <creator>Hernan Faustino Chacca Chuctaya</creator>
        
        <creator>Rolfy Nixon Montufar Mercado</creator>
        
        <creator>Jeyson Jesus Gonzales Gaona</creator>
        
        <subject>Automatic Speech Recognition; MFCC; DTW; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>Automatic Speech Recognition (ASR) is defined as the transformation of acoustic signals into strings of words. This area has been developed for many years, facilitating people’s lives, and so has been implemented in several languages. However, the development of ASR in languages with few database resources but large speaking populations is very low. The development of ASR for the Quechua language is almost null, which leads to the isolation of its culture and population from technology and information. In this work, an ASR system for isolated Quechua numbers is developed, where Mel-Frequency Cepstral Coefficients (MFCC), Dynamic Time Warping (DTW) and K-Nearest Neighbor (KNN) methods are implemented using a database composed of recorded audio of the numbers from one to ten in Quechua. The recorded audios used to build the database were uttered by native male and female speakers of Quechua. The recognition accuracy reached in this research work was 91.1%.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_3-Isolated_Automatic_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Related-Health Actions Detection using Android Camera based on TensorFlow Object Detection API</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091002</link>
        <id>10.14569/IJACSA.2018.091002</id>
        <doi>10.14569/IJACSA.2018.091002</doi>
        <lastModDate>2018-10-31T16:38:07.9500000+00:00</lastModDate>
        
        <creator>Fadwa Al-Azzo</creator>
        
        <creator>Arwa Mohammed Taqi</creator>
        
        <creator>Mariofanna Milanova</creator>
        
        <subject>Android camera; TensorFlow object detection API; emergency actions; detection accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>A new method is presented to detect human health-related actions (HHRA) from a video sequence using an Android camera. The Android platform works not only to capture video images through its camera, but also to detect emergency actions. An application of HHRA is to help monitor unattended children, individuals with special needs or the elderly. The application has been investigated based on the TensorFlow Object Detection Application Program Interface (API) with Android Studio. This paper fundamentally focuses on the comparison in terms of improving speed and detection accuracy. In this work, two promising new approaches for HHRA detection have been proposed: the SSD Mobilenet and Faster RCNN Resnet models. The proposed approaches are evaluated on the NTU RGB+D dataset, known as the largest publicly accessible 3D action recognition dataset at present. The dataset has been split into training and testing sets. The total confidence score detection quality (total mAP) for all the action classes is 95.8% based on the SSD-Mobilenet model and 93.8% based on the Faster-R-CNN-Resnet model. The detection performance is evaluated using two methods, an Android camera (Galaxy S6) and the TensorFlow Object Detection Notebook, in terms of accuracy and detection speed. Experimental results have demonstrated valuable improvements in terms of detection accuracy and efficiency for human health-related action identification. The experiments were executed on an Ubuntu 16.04LTS GTX1070 @ 2.80GHZ x8 system.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_2-Human_Related_Health_Actions_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coronary Heart Disease Diagnosis using Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.091001</link>
        <id>10.14569/IJACSA.2018.091001</id>
        <doi>10.14569/IJACSA.2018.091001</doi>
        <lastModDate>2018-10-31T16:38:07.2930000+00:00</lastModDate>
        
        <creator>Kathleen H Miao</creator>
        
        <creator>Julia H. Miao</creator>
        
        <subject>Cardiovascular disease; heart disease; coronary artery disease; classification; accuracy; diagnosis; diagnostic odds ratio; deep learning; deep neural network; machine learning; F-score; global health; public health; K-S test; precision; prediction; prognosis; ROC curve; specificity; sensitivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(10), 2018</description>
        <description>According to the World Health Organization, cardiovascular disease (CVD) is the top cause of death worldwide. In 2015, over 30% of global deaths were due to CVD, leading to over 17 million deaths, a global health burden. Of those deaths, over 7 million were caused by heart disease, and greater than 75% of deaths due to CVD were in developing countries. In the United States alone, 25% of deaths are attributed to heart disease, killing over 630,000 Americans annually. Among heart disease conditions, coronary heart disease is the most common, causing over 360,000 American deaths due to heart attacks in 2015. Thus, coronary heart disease is a public health issue. In this research paper, an enhanced deep neural network (DNN) learning model was developed to aid patients and healthcare professionals and to increase the accuracy and reliability of heart disease diagnosis and prognosis in patients. The developed DNN learning model is based on a deeper multilayer perceptron architecture with regularization and dropout using deep learning. The developed DNN learning model includes a classification model based on training data and a prediction model for diagnosing new patient cases, using a data set of 303 clinical instances from patients diagnosed with coronary heart disease at the Cleveland Clinic Foundation. The testing results showed that the DNN classification and prediction model achieved the following: diagnostic accuracy of 83.67%, sensitivity of 93.51%, specificity of 72.86%, precision of 79.12%, F-score of 0.8571, area under the ROC curve of 0.8922, Kolmogorov-Smirnov (K-S) test of 66.62%, diagnostic odds ratio (DOR) of 38.65, and a 95% confidence interval for the DOR test of [38.65, 110.28]. Therefore, clinical diagnoses of coronary heart disease were reliably and accurately derived from the developed DNN classification and prediction models. Thus, the models can be used to aid healthcare professionals and patients throughout the world to advance both public health and global health, especially in developing countries and resource-limited areas with fewer cardiac specialists available.</description>
        <description>http://thesai.org/Downloads/Volume9No10/Paper_1-Coronary_Heart_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urdu Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090981</link>
        <id>10.14569/IJACSA.2018.090981</id>
        <doi>10.14569/IJACSA.2018.090981</doi>
        <lastModDate>2018-10-01T10:19:11.4170000+00:00</lastModDate>
        
        <creator>Khairullah Khan</creator>
        
        <creator>Wahab Khan</creator>
        
        <creator>Atta Ur Rahman</creator>
        
        <creator>Aurangzeb Khan</creator>
        
        <creator>Asfandyar Khan</creator>
        
        <creator>Ashraf Ullah Khan</creator>
        
        <creator>Bibi Saqia</creator>
        
        <subject>Urdu; sentiment analysis; social media; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The Internet is the most significant source for gathering thoughts, surveys of products, and reviews of any type of service or activity. A bulky amount of reviews is produced on a daily basis in cyberspace about online products and objects. For example, many individuals share their remarks, reviews, and feelings in their own language utilizing social media networks such as Twitter. Considering their colossal quantity and size, it is exceedingly difficult to examine and interpret such reviews. Sentiment Analysis (SA) aims at extracting people's opinions, feelings, and thoughts from their reviews on social websites. SA has recently gained significant consideration; however, the vast majority of the resources and frameworks constructed so far are tailored to English and English-like Western languages. The requirement for designing frameworks for other languages is expanding, particularly as blogging and micro-blogging sites are becoming popular. This paper presents a comprehensive review of approaches to Urdu sentiment analysis and outlines the relevant gaps in the literature.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_81-Urdu_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Agent Cellular Residential Mobility Model : From Functional and Conceptual View</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090980</link>
        <id>10.14569/IJACSA.2018.090980</id>
        <doi>10.14569/IJACSA.2018.090980</doi>
        <lastModDate>2018-09-29T12:00:52.3700000+00:00</lastModDate>
        
        <creator>Elarbi Elalaouy</creator>
        
        <creator>Khadija Rhoulami</creator>
        
        <creator>Moulay Driss Rahmani</creator>
        
        <subject>Residential mobility; land use change; computer model; multi-agent systems; cellular automata </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Residential mobility poses a great challenge to sustainable cities. Developing computer-model-based simulation could be a powerful tool to support urban decision-making, especially given that half of the world's population now lives in cities. The present paper presents our detailed model of residential mobility, which uses a combined Multi-agent systems and Cellular automata (MAS-CA) approach. Conventional urban modelling approaches are presented first, with a distinct light shed on the combined CA-MAS approach. The model is then exposed in its two views, functional and conceptual. Finally, the results of a population growth scenario are simulated and discussed. The simulation results show significant conformity with the underlying model hypothesis.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_80-An_Agent_Cellular_Residential_Mobility_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Duplicates Detection Within Incomplete Data Sets Using Blocking and Dynamic Sorting Key Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090979</link>
        <id>10.14569/IJACSA.2018.090979</id>
        <doi>10.14569/IJACSA.2018.090979</doi>
        <lastModDate>2018-09-29T12:00:51.8270000+00:00</lastModDate>
        
        <creator>Abdulrazzak Ali</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <creator>Siti A. Asmai</creator>
        
        <creator>Awsan Thabet</creator>
        
        <subject>Duplicate detection; Incomplete Data Set; Blocking Methods; Sorting key; Attribute Selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In database record duplicate detection, the blocking method is commonly used to reduce the number of comparisons between candidate record pairs. The main procedure in this method requires selecting attributes that will be used as sorting keys. Selection accuracy is essential for clustering candidate records that are likely matched into the same block. Nevertheless, the presence of missing values affects the creation of sorting keys, which is particularly undesirable if it involves the attributes used as sorting keys, because records that are supposed to be included in the duplicate detection procedure will consequently be excluded from examination. Thus, in this paper, we propose a method that can deal with the impact of missing values by using a dynamic sorting key. Dynamic sorting is an extension of the blocking method that essentially works on two functions, namely a uniqueness calculation function (UF) (to choose unique attributes) and a completeness function (CF) (to search for missing values). We experimented with a particular blocking method, called sorted neighborhood, with a dynamic sorting key on a restaurant data set (consisting of duplicate records) obtained from earlier research in order to evaluate the method's accuracy and speed. Hypothetical missing values were applied to the testing data set used in the experiment, where we compared the results of duplicate detection with (and without) the dynamic sorting key. The results show that, even though missing values are present, there is a promising improvement in the partitioning of duplicate records into the same block.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_79-Duplicates_Detection_Within_Incomplete_Data_Sets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Light but Effective Encryption Technique based on Dynamic Substitution and Effective Masking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090978</link>
        <id>10.14569/IJACSA.2018.090978</id>
        <doi>10.14569/IJACSA.2018.090978</doi>
        <lastModDate>2018-09-29T12:00:50.7330000+00:00</lastModDate>
        
        <creator>Muhammed Jassem Al-Muhammed</creator>
        
        <subject>Encryption techniques; dynamic substitution; key expansion; directive based manipulation; block masking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Cryptography and cryptanalysis are locked in an everlasting struggle. As encryption techniques advance, cryptanalysis techniques advance as well. To properly face the great danger of cryptanalysis techniques, we must diligently look for more effective encryption techniques. These techniques must properly handle any weaknesses that may be exploited by hacking tools. We address this problem by proposing an innovative encryption technique. Our technique has unique features that make it different from other standard encryption methods. Our method advocates the use of dynamic substitution and tricky manipulation operations that introduce tremendous confusion and diffusion into the ciphertext. All this is augmented with an effective key expansion that not only allows for implicit embedment of the key in all of the encryption steps but also produces very different versions of this key. Experiments with our proof-of-concept prototype showed that our method is effective and passes very important security tests.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_78-Light_but_Effective_Encryption_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Object Distance Measurement Method for Binocular Stereo Vision Blind Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090977</link>
        <id>10.14569/IJACSA.2018.090977</id>
        <doi>10.14569/IJACSA.2018.090977</doi>
        <lastModDate>2018-09-29T12:00:49.6870000+00:00</lastModDate>
        
        <creator>Jiaxu Zhang</creator>
        
        <creator>Shaolin Hu</creator>
        
        <creator>Haoqiang Shi</creator>
        
        <subject>deep learning; computer vision; binocular stereo vision; intelligent transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Visual field occlusion is one of the causes of urban traffic accidents during reversing. In order to meet the requirements of vehicle safety and intelligence, a method of target distance measurement based on deep learning and binocular vision is proposed. The method first establishes a binocular stereo vision model and calibrates its intrinsic and extrinsic parameters, uses the Faster R-CNN algorithm to identify and locate obstacle objects in the image, and then substitutes the obtained matching points into the calibrated binocular stereo model to obtain the spatial coordinates of the target object. Finally, the obstacle distance is calculated by the formula. Physical tests were conducted by taking pictures of obstacles at different positions and from different angles. Experimental results show that this method can effectively achieve obstacle object identification and positioning, and mitigate the adverse effect of visual field blindness on driving safety.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_77-Deep_Learning_based_Object_Distance_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Disease Classification System based on Keyword Extraction and Supervised Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090976</link>
        <id>10.14569/IJACSA.2018.090976</id>
        <doi>10.14569/IJACSA.2018.090976</doi>
        <lastModDate>2018-09-29T12:00:49.1270000+00:00</lastModDate>
        
        <creator>Muhammad Suffian</creator>
        
        <creator>Muhammad Yaseen Khan</creator>
        
        <creator>Shuakat Wasi</creator>
        
        <subject>Natural language processing; Machine Learning; Multi-Class Classification; Patient descriptions; Keyword Extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Evidence-Based Medicine (EBM) has emerged as a helpful practice for medical practitioners to make decisions based on the available shreds of evidence along with their professional expertise. In EBM, medical practitioners suggest medication on the basis of the underlying information in patients' descriptions and medical records (mostly available in textual form). This paper presents a novel and efficient method for predicting the correct disease. Since these types of tasks are generally treated as multi-class classification problems, a large number of records is needed, which would otherwise have to be handled in a higher n-dimensional space. Our system, as proposed in this paper, utilises key-phrase extraction techniques to scoop out the meaningful information and reduce the size of the textual dimension, together with a suite of machine learning algorithms for classifying the diseases efficiently. We have tested the proposed approach on 6 different diseases, i.e., asthma, hypertension, diabetes, fever, abdominal issues, and heart problems, over a dataset of 690 patients. With key-phrases tested in the range of [3,7] features, SVM showed the highest F1-score and accuracy (93.34%, 95%).</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_76-Developing_Disease_Classification_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>First Out First Served Algorithm for Mobile Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090975</link>
        <id>10.14569/IJACSA.2018.090975</id>
        <doi>10.14569/IJACSA.2018.090975</doi>
        <lastModDate>2018-09-29T12:00:48.0800000+00:00</lastModDate>
        
        <creator>Anouar Bouirden</creator>
        
        <creator>Maryam El Azhari</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <creator>Ahmed Aharoune</creator>
        
        <subject>MWSNs; WSN; Packet reception; energy consumption; convex area; Mobility Manager</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Wireless Sensor Networks (WSNs) have recently gained tremendous attention as they cover a vast range of applications requiring an important number of sensor nodes deployed in the area of interest to measure physiological types of data and send them back to the base station for further analysis and treatment. Many routing protocols have been proposed to perform data routing towards the destination with regard to energy consumption, end-to-end delay, and throughput. In this paper, the First Out First Served algorithm for cluster-based routing in Mobile Wireless Sensor Networks is presented. The algorithm aims to increase packet reception within the cluster in a highly constrained environment. The results prove the efficiency of the proposed algorithm in increasing the reception of data packets by the cluster head and enhancing the Radio_Coefficient Diff parameter of the network.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_75-First_Out_First_Served_Algorithm_for_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Distinction of Subjectivity and Objectivity of Emotions in Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090974</link>
        <id>10.14569/IJACSA.2018.090974</id>
        <doi>10.14569/IJACSA.2018.090974</doi>
        <lastModDate>2018-09-29T12:00:47.0200000+00:00</lastModDate>
        
        <creator>Manh Hung Nguyen</creator>
        
        <subject>Text classification; Emotion classification; subjective emotion; objective emotion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Emotion classification in texts is an instance of the text classification problem. It could therefore apply some existing text classifiers by considering each emotion as a label of the text. However, most recent works do not differentiate between the subjectivity and objectivity of the same emotion in the text. This paper first builds some datasets whose labels are emotions, in which the subject and object of the same emotion are considered as two separate labels. Secondly, this paper evaluates some existing classifiers via several scenarios on the built datasets. Some difficulties of this kind of problem are then discussed based on the results.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_74-On_the_Distinction_of_Subjectivity_and_Objectivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Controlled Environment Model for Dealing with Smart Phone Addiction </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090973</link>
        <id>10.14569/IJACSA.2018.090973</id>
        <doi>10.14569/IJACSA.2018.090973</doi>
        <lastModDate>2018-09-29T12:00:46.4730000+00:00</lastModDate>
        
        <creator>Irfan Uddin</creator>
        
        <creator>Adeel Baig</creator>
        
        <creator>Abid Ali Minhas</creator>
        
        <subject>Smart phone addiction; Abnormal use of smart phone; Healthy society; Dealing with smart phone addiction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Smart phones are commonly used in most parts of the world, and it is difficult to find a society that is not affected by smart phone culture. But smart phone usage is crossing the limit from being a facility towards a high level of abnormal dependency on the phone. This dependency can reach the point where we no longer have control over the over-use and hence the negative impacts it can cause to our lives. The worst situation is that people do not even consider that this dependency is actually a type of addiction and that we need to find solutions to deal with it. In this research paper, we identify symptoms that show the existence of smart phone addiction and demonstrate that this addiction has an effect on the quality and even quantity of people's lives and can ultimately affect the whole society. We propose solutions to deal with smart phone addiction and propose the design of a smart phone application to reduce the level of abnormal dependency on smart phones.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_73-A_Controlled_Environment_Model_for_Dealing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Level of Confidence in Software Effort Estimation by an Intelligent Fuzzy – Neuro - Genetic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090972</link>
        <id>10.14569/IJACSA.2018.090972</id>
        <doi>10.14569/IJACSA.2018.090972</doi>
        <lastModDate>2018-09-29T12:00:45.4430000+00:00</lastModDate>
        
        <creator>Poonam Rijwani</creator>
        
        <creator>Sonal Jain</creator>
        
        <subject>COCOMO II; artificial neural networks; genetic algorithm; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Organizations are struggling to deliver the expected software functionality and quality in the scheduled time and prescribed budget. Despite the availability of numerous advanced effort estimation techniques, overestimation and underestimation occur on a vast scale and result in project failures and significant loss to the organization. This paper proposes a machine learning based approach to calculate the optimized effort and a level of confidence. A genetically trained neural network evaluates the optimum effort for given COCOMO II variables. The level of confidence is evaluated by fuzzy logic and indicates the probability, as a percentage, that the predicted effort will not exceed the limits.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_72-Level_of_Confidence_in_Software_Effort_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized E-Learning Recommender System using Multimedia Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090971</link>
        <id>10.14569/IJACSA.2018.090971</id>
        <doi>10.14569/IJACSA.2018.090971</doi>
        <lastModDate>2018-09-29T12:00:44.3670000+00:00</lastModDate>
        
        <creator>Hayder Murad</creator>
        
        <creator>Linda Yang</creator>
        
        <subject>E-Learning; recommender system; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Due to the huge amounts of online learning materials, e-learning environments are becoming very popular as a means of delivering lectures. One of the most common e-learning challenges is how to recommend quality learning materials to students. Personalized e-learning recommender systems, which tailor learning material to meet individual students' learning needs, help to reduce information overload. This research focuses on using various recommendation and data mining techniques for personalized learning in e-learning environments.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_71-Personalized_E_Learning_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interface of an Automatic Recognition System for Dysarthric Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090970</link>
        <id>10.14569/IJACSA.2018.090970</id>
        <doi>10.14569/IJACSA.2018.090970</doi>
        <lastModDate>2018-09-29T12:00:43.8230000+00:00</lastModDate>
        
        <creator>Brahim- Fares Zaidi</creator>
        
        <creator>Malika Boudraa</creator>
        
        <creator>Sid-Ahmed Selouani</creator>
        
        <creator>Djamel Addou</creator>
        
        <subject>Automatic Recognition System of Continuous Pathological Speech (ARSCPS); ETSI standard Mel frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Hidden Markov Model Toolkit (HTK); Hidden Models of Markov (HMM); Human/Machine (H/M); Technologies of</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This paper addresses the realization of a Human/Machine (H/M) interface including an Automatic Recognition System for Continuous Pathological Speech (ARSCPS) and several communication tools in order to help frail people with speech problems (dysarthric speech) to access services provided by new technologies of information and communication (TIC), while making it easier for doctors to reach a first diagnosis of the patient's disease. In addition, an ARSCPS has been improved and developed for normal and pathological voices while establishing a link with our graphical interface, which is based on the Hidden Markov Model Toolkit (HTK) and Hidden Markov Models (HMM). In our work we used different feature extraction techniques for the speech recognition system in order to improve dysarthric speech intelligibility while developing an ARSCPS that can perform well for pathological and normal speakers. These techniques are based on the coefficients of the ETSI standard Mel Frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Perceptual Linear Prediction coefficients (PLP), Mel Frequency Cepstral Coefficients (MFCC), and the recently proposed Power Normalized Cepstral Coefficients (PNCC) have been used as a basis for comparison. In this context we used the Nemours database, which contains 11 speakers representing dysarthric speech and 11 speakers representing normal speech.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_70-Interface_of_an_Automatic_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Calculation of Pressure Loss Coefficients in Combining Flows of a Solar Collector using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090969</link>
        <id>10.14569/IJACSA.2018.090969</id>
        <doi>10.14569/IJACSA.2018.090969</doi>
        <lastModDate>2018-09-29T12:00:43.2600000+00:00</lastModDate>
        
        <creator>Shahzad Yousaf</creator>
        
        <creator>Imran Shafi</creator>
        
        <creator>Jamil Ahmad</creator>
        
        <subject>Artificial neural network; pressure loss coefficients for solar collector; combining flow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This paper presents a novel technique for the determination of pressure loss coefficients in tee junctions using an artificial neural network (ANN). Geometry and flow parameters are fed into the ANN as inputs for the purpose of training the network. The efficacy of the network is demonstrated by comparing the ANN-derived and experimentally obtained pressure loss coefficients for combining flows in a tee junction. Reynolds numbers ranging from 200 to 14000 and discharge ratios varying from minimum to maximum flow have been used for the calculation of pressure loss coefficients. Pressure loss coefficients calculated using the ANN are compared to models from the literature used in junction flows. The results achieved after the application of the ANN agree reasonably with the experimental values.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_69-Calculation_of_Pressure_Loss_Coefficients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IMouse: Eyes Gesture Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090968</link>
        <id>10.14569/IJACSA.2018.090968</id>
        <doi>10.14569/IJACSA.2018.090968</doi>
        <lastModDate>2018-09-29T12:00:42.6830000+00:00</lastModDate>
        
        <creator>Syed Muhammad Tahir Saleem</creator>
        
        <creator>Sammat Fareed</creator>
        
        <creator>Farzana Bibi</creator>
        
        <creator>Arsalan Khan</creator>
        
        <creator>Shahzad Gohar</creator>
        
        <creator>Hafiz Hamza Ashraf</creator>
        
        <subject>IMouse; eyes gesture control system; eye tracking systems; mouse cursor; eye mouse; webcam; eye movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>A high number of people affected by neuro-locomotor disabilities or paralyzed by injury cannot use computers for basic tasks such as sending or receiving messages, browsing the internet, or watching their favorite TV shows or movies. A previous research study concluded that the eyes are an excellent candidate for ubiquitous computing since they move anyway during interaction with computing machinery. Using this underlying information from eye movements could allow bringing the use of computers back to such patients. For this purpose, we propose the IMouse gesture control system, which is operated completely by human eyes only. The purpose of this work is to design an open-source generic eye-gesture control system that can effectively track eye movements and enable the user to perform actions mapped to specific eye movements/gestures by using a computer webcam. It detects the pupil on the user's face and then tracks its movements. It needs to be accurate in real time so that the user is able to use it like other everyday devices with comfort.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_68-IMouseEyes_Gesture_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Data Collector Routing Protocol Scheme for Scalable Dense Wireless Sensor Network to Optimize Node’s Life</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090967</link>
        <id>10.14569/IJACSA.2018.090967</id>
        <doi>10.14569/IJACSA.2018.090967</doi>
        <lastModDate>2018-09-29T12:00:41.6530000+00:00</lastModDate>
        
        <creator>Farhan A. Siddiqui</creator>
        
        <creator>Jibran R. Khan</creator>
        
        <creator>Muhammad Saeed</creator>
        
        <creator>M. Arshad</creator>
        
        <creator>Nasir Touheed</creator>
        
        <subject>WSN; MDC; node density; energy efficient sensor networks; agriculture; robust networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Wireless Sensor Networks (WSNs) are a special kind of network communication architecture with a very wide range of applications, and the cost-effectiveness of this architecture boosts its adaptability and usability. The widespread use of WSNs and their rapid advancement have encouraged the research community to report several standing problems, among them the concern of network lifetime and node energy management in dense networks. This paper presents the experimental outcomes of using the MDC multi-tier approach in dense network environments. Besides node density, the experiments also consider the inter-agricultural-field hurdles that cause communication disturbance among nodes located at ground level or at some height above the farming field. The simulated experiments show noteworthy results, comparatively enhancing the network lifetime, efficiently utilizing individual node energy, and maximizing content delivery.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_67-Mobile_Data_Collector_Routing_Protocol_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Support Vector Machine, Maximum Likelihood and Neural Network Classification on Multispectral Remote Sensing Data </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090966</link>
        <id>10.14569/IJACSA.2018.090966</id>
        <doi>10.14569/IJACSA.2018.090966</doi>
        <lastModDate>2018-09-29T12:00:41.1070000+00:00</lastModDate>
        
        <creator>Asmala Ahmad</creator>
        
        <creator>Ummi Kalsom Mohd Hashim</creator>
        
        <creator>Othman Mohd</creator>
        
        <creator>Mohd Mawardy Abdullah</creator>
        
        <creator>Hamzah Sakidin</creator>
        
        <creator>Abd Wahid Rasib</creator>
        
        <creator>Suliadi Firdaus Sufahani</creator>
        
        <subject>Land cover; change detection; remote sensing; training set; supervised classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Land cover classification is an essential process in many remote sensing applications. Classification based on supervised methods has been preferred by many due to its practicality, accuracy and objectivity compared to unsupervised methods. Nevertheless, the performance of different supervised methods, particularly for classifying land covers in tropical regions such as Malaysia, has not been evaluated thoroughly. The study reported in this paper aims to detect land cover changes using multispectral remote sensing data. The data come from the Landsat satellite covering part of Klang District, located in Selangor, Malaysia. Landsat bands 1, 2, 3, 4, 5 and 7 are used as the input for three supervised classification methods, namely support vector machines (SVM), maximum likelihood (ML) and neural network (NN). The accuracy of the generated classifications is then assessed by means of classification accuracy. Land cover change analysis is also carried out to identify the most reliable method for detecting land changes, showing that SVM gives more stable and realistic outcomes compared to ML and NN.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_66-Comparative_Analysis_of_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on using Neural Network based Algorithms for Hand Written Digit Recognition </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090965</link>
        <id>10.14569/IJACSA.2018.090965</id>
        <doi>10.14569/IJACSA.2018.090965</doi>
        <lastModDate>2018-09-29T12:00:40.0630000+00:00</lastModDate>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Hikmat Ullah Khan</creator>
        
        <creator>Shahid Mehmood Awan</creator>
        
        <creator>Waseem Akhtar</creator>
        
        <creator>Mahwish Ilyas</creator>
        
        <creator>Ahsan Mahmood</creator>
        
        <creator>Ammara Zamir</creator>
        
        <subject>Neural network; digit recognition; segmentation; supervised learning; image classification; computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The detection and recognition of handwritten content is the process of converting non-intelligent information such as images into machine-editable text. This research domain has become an active research area due to vast applications in a number of fields, such as handwritten filing of forms or documents in banks, exam forms filled in by students, and user authentication applications. Generally, the handwritten content recognition process consists of four steps: data preprocessing, segmentation, feature extraction and selection, and the application of supervised learning algorithms. In this paper, a detailed survey of existing techniques used for Handwritten Digit Recognition (HWDR) is carried out. This review is novel in that it is focused on HWDR and discusses only the application of Neural Networks (NN) and their modified algorithms. We give an overview of NN and the different algorithms which have been adapted from NN. In addition, this research study presents a detailed survey of the use of NN and its variants for digit recognition. For each existing work, we elaborate its steps, novelty, use of datasets, and advantages and limitations. Moreover, we present a scientometric analysis of HWDR, identifying top journals and sources of research content in this domain. We also present research challenges and potential future work.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_65-A_Survey_on_using_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Background Subtraction using Generalized Rayleigh Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090964</link>
        <id>10.14569/IJACSA.2018.090964</id>
        <doi>10.14569/IJACSA.2018.090964</doi>
        <lastModDate>2018-09-29T12:00:39.0330000+00:00</lastModDate>
        
        <creator>Pavan Kumar Tadiparthi</creator>
        
        <creator>Srinivas Yarramalle</creator>
        
        <creator>Nagesh Vadaparthi</creator>
        
        <subject>Background subtraction; segmentation; generalized rayleigh distribution (GRD); quantitative evaluation; image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Identification of foreground objects in dynamic-scene video images is an exigent task compared to static scenes. In contrast to motionless images, video sequences offer more information concerning how items and circumstances change over time. Pixel-based comparisons are carried out to categorize the foreground and the background based on a frame-difference methodology. The threshold value is kept static in both cases for more precise object identification; to improve recognition accuracy, adaptive threshold values are then estimated for both methods. The current article also highlights a methodology using the Generalized Rayleigh Distribution (GRD). Experimentation is conducted using benchmark video images, and the derived outputs are evaluated using a quantitative approach.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_64-A_Novel_Approach_for_Background_Subtraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal for a Feature Automation Solution for an IMS-KMS-IoT Platform based on SDN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090963</link>
        <id>10.14569/IJACSA.2018.090963</id>
        <doi>10.14569/IJACSA.2018.090963</doi>
        <lastModDate>2018-09-29T12:00:37.9870000+00:00</lastModDate>
        
        <creator>Samba DIOUF</creator>
        
        <creator>K&#233;ba GUEYE</creator>
        
        <creator>Samuel OUYA</creator>
        
        <creator>Gervais MENDY</creator>
        
        <subject>IMS; IoT; SDN; automation; KMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The Internet of Things is a paradigm that is gaining more and more ground, and the number of connected objects will soon be counted in billions. This will transform our lives and pose new challenges. To meet these challenges, several platforms have been proposed in previous work: some are based on an IMS-IoT platform, while others integrate IMS, IoT and SDN technologies. Our article proposes an architecture that integrates SDN into an IMS-IoT platform and automates the control layer (IMS-IoT platform) and the transport layer of the functional architecture, in order to meet future requirements related to the configuration of these many objects and the mobility and diversity of user terminals (smartphones, tablets, computers, etc.). The proposed architecture makes it possible to benefit from the simplicity and efficiency provided by an SDN network as well as the services offered by the IMS-IoT platform.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_63-Proposal_for_a_Feature_Automation_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing A Model to Predict the Occurrence of the Cardio-Cerebrovascular Disease for the Korean Elderly using the Random Forests Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090962</link>
        <id>10.14569/IJACSA.2018.090962</id>
        <doi>10.14569/IJACSA.2018.090962</doi>
        <lastModDate>2018-09-29T12:00:37.0030000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>Prediction model; data mining; random forest; risk factors; cardio-cerebrovascular disease; stroke</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This study aimed to develop a model for predicting cardio-cerebrovascular disease in the South Korean elderly using the random forests technique. This study analyzed 2,111 respondents (879 males and 1,232 females) aged 60 or older, out of a total of 7,761 respondents who completed the Seoul Welfare Panel Study. The outcome variable was defined as cardio-cerebrovascular disease (e.g., hypertension, cerebral infarction, hyperlipidemia, cardiac infarction, and angina). As a result of developing a random forest-based model, the major determinants of cardio-cerebrovascular disease in the South Korean elderly were mean monthly household income, highest level of education, subjective health condition, subjective friendship, subjective family relationship, smoking, regular exercise, age, marital status, gender, depression experience, economic activity, and high-risk drinking. Among these, mean monthly household income was the most important predictor of cardio-cerebrovascular disease. Based on the developed prediction model, a systematic program for preventing cardio-cerebrovascular disease in the Korean elderly needs to be developed.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_62-Developing_a_Model_to_Predict_the_Occurrence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards A Framework for Multilayer Computing of Survivability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090961</link>
        <id>10.14569/IJACSA.2018.090961</id>
        <doi>10.14569/IJACSA.2018.090961</doi>
        <lastModDate>2018-09-29T12:00:36.4430000+00:00</lastModDate>
        
        <creator>Abolghasem Sadeghi</creator>
        
        <creator>Mohammad Reza Valavi</creator>
        
        <creator>Morteza Barari</creator>
        
        <subject>Network survivability; survivability quantification; survivability computation; system survivability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The notion of survivability has an important position in today's enterprise systems and critical functions. This notion has been defined in different ways; however, the lack of a comprehensive, multilayer model for computing survivability quantitatively is the major gap in research in this field, namely a model that is general enough to be applicable in various applications. This research aims to design a comprehensive, multilayer and general model for modeling and computing survivability. Since the Markov property holds in our proposed model, we used the Markov model. Using the proposed three-layer architecture and designing a Markov structure, we were able to compute the survivability initially for each infrastructure component separately, regardless of their functional dependencies on each other. The computations were then generalized to account for component dependencies, as well as the dependencies that the upper layers introduce into the Markov model, so that the survivability of each vital function at the highest architectural layer can be computed based on the underlying layers. Finally, a common structure of crisis management has been studied and its results analyzed. We were able to examine the abilities of our model by successfully computing the survivability of a whole crisis management system.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_61-Towards_a_Framework_for_Multilayer_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Malay Named Entity Recognition using Combination Approach for Crime Textual Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090960</link>
        <id>10.14569/IJACSA.2018.090960</id>
        <doi>10.14569/IJACSA.2018.090960</doi>
        <lastModDate>2018-09-29T12:00:35.4130000+00:00</lastModDate>
        
        <creator>Siti Azirah Asmai</creator>
        
        <creator>Muhammad Sharilazlan Salleh</creator>
        
        <creator>Halizah Basiron</creator>
        
        <creator>Sabrina Ahmad</creator>
        
        <subject>Named entity recognition; information extraction; fuzzy c-means; k-nearest neighbors; malay language; crime data  </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Named Entity Recognition (NER) is one of the tasks in information extraction. NER is used for extracting and classifying words or entities that belong to the proper noun category in text data, such as a person's name, location, organization, date and others. Today, social media such as web pages, blogs, Facebook, Twitter, Instagram and online newspapers are among the major contributors to the generation of information. This paper presents an enhanced Malay Named Entity Recognition model using a combination of the fuzzy c-means and k-Nearest Neighbours algorithms for crime analysis. The results showed that this combined method could improve the accuracy of entity recognition on Malay crime data. The model is expected to provide a better method for recognizing named entities for text analysis, particularly in Malay.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_60-An_Enhanced_Malay_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location-aware Event Attendance System using QR Code and GPS Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090959</link>
        <id>10.14569/IJACSA.2018.090959</id>
        <doi>10.14569/IJACSA.2018.090959</doi>
        <lastModDate>2018-09-29T12:00:34.3670000+00:00</lastModDate>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Chan Yee Lin</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Erman Hamid</creator>
        
        <creator>Muhammad Syahrul Azhar</creator>
        
        <subject>Event attendance system; quick response (QR) code; global positioning system (GPS); android mobile application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The attendance process at a university event is time-consuming, and tracking attendance can be even harder. In this paper, a smart event attendance system for a university using QR code and GPS technology is proposed, with the objective of speeding up the process of taking students’ attendance and tracking full attendance. The system is developed from two views: the user view, a mobile application used by the students, and the admin view, a web administration system used by the event organizer. The evaluation shows that students’ attendance can be traced from the GPS location combined with the QR code. The results indicate that full attendance increases as the system validates attendance through users’ identification, location and timestamp during user login and logout. The proposed system yields high satisfaction among users, who report that the mobile application helps speed up the event registration process.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_59-Location_aware_Event_Attendance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Floyd’s Inductive Assertions Method for Verification of Generalized Net Models Without Temporal Components</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090958</link>
        <id>10.14569/IJACSA.2018.090958</id>
        <doi>10.14569/IJACSA.2018.090958</doi>
        <lastModDate>2018-09-29T12:00:33.2770000+00:00</lastModDate>
        
        <creator>Magdalina Todorova</creator>
        
        <creator>Nora Angelova</creator>
        
        <subject>Floyd’s inductive assertions method; generalized nets; verification; formal methods; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Generalized Nets are extensions of Petri Nets. They are a suitable tool for describing real sequential and parallel processes in different areas. The implementation of correct Generalized Net models is a task of great importance for the creation of a number of applications such as transportation management, e-business, medical systems, telephone networks, etc. The cost of an error in the models of some of these applications can be very high, so the implementation of models of such applications has to use formal approaches to prove that the developed models are correct. A foundation stone of software verification, suitable for verifying Generalized Net models whose transitions have no temporal component, is Floyd’s inductive assertion method. This article presents a modification of Floyd’s inductive assertion method for the verification of flowcharts, which allows Generalized Nets without a temporal component to be verified. Using an illustrative example, we show that the offered adaptation is appropriate for training university students in Informatics and Computer Sciences in formal methods of verification.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_58-Applying_Floyds_Inductive_Assertions_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Forward Chaining and Certainty Factor Method on Android-Based Expert System of Tomato Diseases Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090957</link>
        <id>10.14569/IJACSA.2018.090957</id>
        <doi>10.14569/IJACSA.2018.090957</doi>
        <lastModDate>2018-09-29T12:00:32.7300000+00:00</lastModDate>
        
        <creator>Kurnia Muludi</creator>
        
        <creator>Radix Suharjo</creator>
        
        <creator>Admi Syarif</creator>
        
        <creator>Fitria Ramadhani</creator>
        
        <subject>Expert system; forward chaining; certainty factor; tomato diseases; android</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Plant disease is one of the causes of plant destruction and affects plant productivity and quality. Many farmers make mistakes in coping with this problem because of a lack of knowledge. An expert system is a solution that has been widely used for identifying diseases. This paper presents an Android-based expert system to help identify tomato diseases. The data used in this expert system consist of 16 tomato diseases, 53 symptoms, and 20 rules. This paper implements the forward chaining and certainty factor methods. Forward chaining is used as a reasoning method to obtain the disease identification result. The certainty factor is used as a calculation method to obtain the degree of accuracy of the identification results. Testing has been done in two stages, internal and external. The internal testing shows that the tomato expert system works properly and fits various Android devices. External testing was done by giving a questionnaire to 44 respondents, whose responses show that they categorize the tomato expert system as “good”.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_57-Implementation_of_Forward_Chaining_and_Certainty_Factor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things and Healthcare Analytics for Better Healthcare Solution: Applications and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090956</link>
        <id>10.14569/IJACSA.2018.090956</id>
        <doi>10.14569/IJACSA.2018.090956</doi>
        <lastModDate>2018-09-29T12:00:31.6700000+00:00</lastModDate>
        
        <creator>Zuraida Abal Abas</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Ahmad Fadzli Nizam Abdul Rahman</creator>
        
        <creator>Hidayah Rahmalan</creator>
        
        <creator>Gede Pramudya</creator>
        
        <creator>Mohd Hakim Abdul Hamid</creator>
        
        <subject>Internet of things; analytics; healthcare; applications; challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The world’s population will keep increasing, which will eventually pose challenges to quality of life, for example issues related to healthcare. Hence, proper solutions need to be devised in order to face these challenges. The Internet of Things (IoT), one of the digital technologies that is becoming a trend, can offer a promising solution. This paper serves as a short communication introducing IoT and its applications in the healthcare domain, as well as the analytics combined with the technology. Some examples are presented according to the categories of application. It must be noted that analytics plays an important role in making IoT healthcare a comprehensive solution. At the end of the paper, the challenges in making this digital technology an accessible solution are discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_56-Internet_of_Things_and_Healthcare_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securing and Monitoring of Bandwidth Usage in Multi-Agents Denial of Service Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090955</link>
        <id>10.14569/IJACSA.2018.090955</id>
        <doi>10.14569/IJACSA.2018.090955</doi>
        <lastModDate>2018-09-29T12:00:30.6870000+00:00</lastModDate>
        
        <creator>Ogunleye G. O.</creator>
        
        <creator>Fashoto S.G</creator>
        
        <creator>Mbunge Elliot</creator>
        
        <creator>Arekete S.A</creator>
        
        <creator>Ojewumi T.O.</creator>
        
        <subject>Bandwidth; mobile agent; multi-agents; DoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The primary purpose of a Denial of Service (DoS) attack is to cripple resources so that they are made unavailable to legitimate users. Inadequate monitoring of activities on the network has resulted in huge financial losses. Bandwidth is one of the resources used on the network and, if not properly monitored, is open to misuse and attack. This paper proposes a real-time system for securing and monitoring the amount of bandwidth consumed on the network using multi-agent framework technology. It also keeps a record of the internet protocol (IP) addresses visiting the network and may be used as a starting point for the response aspect of a comprehensive solution to DoS attacks. The bandwidth limit is pre-entered, and an agent is assigned to monitor the bandwidth consumption rate against the set threshold. If bandwidth is consumed above the limit within the set time, a DoS attack is suspected, taking the DoS attack framework into consideration. This framework can be used as a replica of what happens in a real network environment.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_55-Securing_and_Monitoring_of_Bandwidth_Usage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Irrigation Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090954</link>
        <id>10.14569/IJACSA.2018.090954</id>
        <doi>10.14569/IJACSA.2018.090954</doi>
        <lastModDate>2018-09-29T12:00:30.1230000+00:00</lastModDate>
        
        <creator>Wafa Difallah</creator>
        
        <creator>Khelifa Benahmed</creator>
        
        <creator>Fateh Bounnama</creator>
        
        <creator>Belkacem Draoui</creator>
        
        <creator>Ahmed Saaidi </creator>
        
        <subject>Internet of things; irrigation; soil; solar; water; wireless sensor network; intelligent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>It is widely known that water resources are decreasing around the world. Rapid urbanization, population growth, industry and the expansion of agriculture are increasing the demand for freshwater. In most countries, including Algeria, irrigation is the largest consumer of water, with about 70% of all freshwater withdrawals being used for irrigation. Therefore, solving the problem of water scarcity depends on the adjustment of irrigation. The aim of this paper is to shed light on irrigation systems, how they can be applied, and what their benefits are. Solar energy is adopted to feed the system, since this energy source is abundantly available in arid zones.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_54-Intelligent_Irrigation_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Printed Arabic Script Recognition: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090953</link>
        <id>10.14569/IJACSA.2018.090953</id>
        <doi>10.14569/IJACSA.2018.090953</doi>
        <lastModDate>2018-09-29T12:00:29.0970000+00:00</lastModDate>
        
        <creator>Mansoor Alghamdi</creator>
        
        <creator>William Teahan</creator>
        
        <subject>Optical character recognition; arabic printed OCR; arabic text recognition; arabic OCR survey; feature extraction; segmentation; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Optical character recognition (OCR) is essential in various real-world applications, such as digitizing learning resources to assist visually impaired people and transforming printed resources into electronic media. However, the development of OCR for printed Arabic script is a challenging task. These challenges are due to the specific characteristics of Arabic script. Therefore, different methods have been proposed for developing Arabic OCR systems, and this paper aims to provide a comprehensive review of these methods. This paper also discusses relevant issues of printed Arabic OCR including the challenges of printed Arabic script and performance evaluation. It concludes with a discussion of the current status of printed Arabic OCR, analyzing the remaining problems in the field of printed Arabic OCR and providing several directions for future research.  </description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_53-Printed_Arabic_Script_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Smartphone-Based Accident Reporting and Guidance Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090952</link>
        <id>10.14569/IJACSA.2018.090952</id>
        <doi>10.14569/IJACSA.2018.090952</doi>
        <lastModDate>2018-09-29T12:00:28.0330000+00:00</lastModDate>
        
        <creator>Alexandra Fanca</creator>
        
        <creator>Adela Puscasiu</creator>
        
        <creator>Honoriu Valean</creator>
        
        <creator>Silviu Folea</creator>
        
        <subject>Smartphone; accident; detection algorithm; reporting accident; mobile application; sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Every day, around the world, a large percentage of people die from road accidents and falls. One of the reasons for a person's death during accidents is the unavailability of first aid, due to the delay in reporting the accident. Thus, in the case of incidents involving vehicles or falls, response time is crucial for the timely provision of emergency medical services. An effective approach to reducing the number of traffic-related deaths is the use of a system for detecting and reporting accidents, thereby reducing the time between the occurrence of an accident and the dispatch of the first emergency responders to the scene. This paper presents a recent study on mobile terminal (smartphone) solutions for detecting and preventing accidents (road, falls, bicycles) and systematic comparisons of existing solutions.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_52-A_Survey_on_Smartphone_based_Accident.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Routing Protocols on CBR and VBR Applications in VANET Scenario</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090951</link>
        <id>10.14569/IJACSA.2018.090951</id>
        <doi>10.14569/IJACSA.2018.090951</doi>
        <lastModDate>2018-09-29T12:00:26.9900000+00:00</lastModDate>
        
        <creator>Pooja Sharma</creator>
        
        <subject>VANETS; routing protocols; qualnet; traffic types</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Vehicular Ad hoc Networks (VANETs) are a special type of Mobile Ad hoc Networks (MANETs) in which node movement follows a pre-ordered pattern but at high velocity, in comparison to MANETs where nodes move in a random manner. Due to the high mobility of nodes, reliable data streaming in vehicular networks is a complex and challenging task. Moreover, transmission of data is difficult because of the varying requirements of different applications in terms of resources such as time, energy and bandwidth. This paper gives an overview of the performance evaluation of four types of routing protocols on CBR and VBR applications. It emphasizes packet delivery ratio, packet loss and packet loss ratio for CBR and VBR applications in different scenarios, such as varying node density, node speed, pause times and packet size. The effectiveness of the various routing protocols varies under different conditions. The performance of different applications in terms of Quality of Service (QoS) parameters such as packet delivery ratio, packet loss and packet loss ratio has been studied by varying the conditions of CBR and VBR traffic, which gives insight into improving the packet delivery ratio and, in turn, can be utilized to improve application performance in the future.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_51-Study_of_Routing_Protocols_on_CBR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Pavement Cracks Detection using Image Processing Techniques and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090950</link>
        <id>10.14569/IJACSA.2018.090950</id>
        <doi>10.14569/IJACSA.2018.090950</doi>
        <lastModDate>2018-09-29T12:00:26.4430000+00:00</lastModDate>
        
        <creator>Nawras Shatnawi</creator>
        
        <subject>Artificial neural network (ANN); feature extraction; image processing; pavement crack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Feature extraction methods and subsequent neural network performance were used in this research to provide a proper assessment of distressed roads for a case study area in the north of Jordan. An object recognition method was used to extract road cracks from airborne images acquired by drones. After the images were thresholded and the noise removed, digital image processing algorithms were applied to detect the presence of different crack types in the pavement surface. In addition, the process was capable of automatically determining the length and orientation of the cracks, which were used as input for a neural network pattern recognition function designed for this purpose. An Artificial Neural Network was used, tested and verified for crack extraction. Different patterns and numbers of hidden layers were also investigated. The results revealed that using image processing techniques and a neural network could detect pavement cracks with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_50-Automatic_Pavement_Cracks_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Novel Approach to Search Resources in IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090949</link>
        <id>10.14569/IJACSA.2018.090949</id>
        <doi>10.14569/IJACSA.2018.090949</doi>
        <lastModDate>2018-09-29T12:00:25.3830000+00:00</lastModDate>
        
        <creator>Nisar Hussain</creator>
        
        <creator>Tayyaba Anees</creator>
        
        <creator>Azeem Ullah</creator>
        
        <subject>IoT; IoT resources; search engine for IoT; SensorM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Internet of Things (IoT) refers to an interconnected world of things, such as physical devices, cars, sensors, home appliances, actuators and machines embedded with software, at any time and any location. The increasing number of IoT devices raises challenges in registration, integration, sensor description, interoperability, semantics, security, discovery and searching. Current systems are suitable for a limited number of devices, while our ecosystem changes day by day, meaning that billions and even trillions of devices will be connecting to the Internet in the future. One major challenge in current systems is searching for suitable Smart Things among millions or even billions of devices in IoT. For the purpose of searching and indexing, some discovery methods and techniques are discussed and compared; these techniques and methods are studied to identify the limitations and issues of the current systems. Another challenge in searching for Smart Things is the variety of description models used to describe them. In this piece of work, a novel search engine is proposed to search Smart Things across a variety of description models. A web interface is implemented in this research with HTML, JSON and XML formats. The description models of Smart Things SensorML, SensorThings API and W3C JSON-LD are implemented in the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_49-Development_of_a_Novel_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Expert Comparison of Accreditation Support Tools for the Undergraduate Computing Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090948</link>
        <id>10.14569/IJACSA.2018.090948</id>
        <doi>10.14569/IJACSA.2018.090948</doi>
        <lastModDate>2018-09-29T12:00:24.3370000+00:00</lastModDate>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ahmad Taleb</creator>
        
        <creator>Mohamed Benaida</creator>
        
        <subject>Accreditation support tools; continuous quality improvement; undergraduate programs; assessment; student outcomes; software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Realizing continuous quality improvement within educational programs is a challenging task. However, there exist various assessment tools and models that help in this regard. This paper explores the features and capabilities of three major international accreditation support tools and compares their strengths and weaknesses. The investigated tools include EvalTools, CLOSO, and WEAVEonline. Two education quality experts performed a thorough comparison of the three tools across a range of criteria including coverage of the continuous quality improvement cycle, usability of the system, learning curve of faculty, data entry, data protection and privacy, among others. The paper highlights the advantages offered by each tool and identifies the gaps in respect to the continuous quality improvement cycle.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_48-An_Expert_Comparison_of_Accreditation_Support_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection System with Correlation Engine and Vulnerability Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090947</link>
        <id>10.14569/IJACSA.2018.090947</id>
        <doi>10.14569/IJACSA.2018.090947</doi>
        <lastModDate>2018-09-29T12:00:23.3070000+00:00</lastModDate>
        
        <creator>D.W.Y.O. Waidyarathna</creator>
        
        <creator>W.V.A.C. Nayantha</creator>
        
        <creator>W.M.T.C. Wijesinghe</creator>
        
        <creator>Kavinga Yapa Abeywardena</creator>
        
        <subject>Intrusion detection system (IDS); intrusion detection message exchange format (IDMEF); network intrusion detection system (NIDS); host intrusion detection system (HIDS); security information and event management (SIEM); correlation; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The proposed Intrusion Detection System (IDS), which is implemented with modern technologies to address certain prevailing problems in existing intrusion detection systems, is capable of giving an advanced output to the security analyst. Even though the network of an organization has been secured internally as well as externally, intruders find ways to penetrate the network. With the proposed system, the activities of those intruders can be identified with a higher probability even if they manage to bypass the security controls of the network. The goal of this project is to give a reliable output to the system users, where all the alerts are more accurate and correlated using HIDS alerts and NIDS alerts, similar to the modern SIEM concept. The system performs as a centralized IDS by taking inputs from both HIDS and NIDS, which provide data regarding host activities and network traffic. With these implementations, the system is capable of monitoring host activities, monitoring network traffic with existing tools, and giving a correlated output which is more accurate, advanced and reliable, prioritizing possible attacks by using machine learning and rule-based correlation techniques. With all these capabilities, the final product is a fully automated Intrusion Detection System which gives correlated alerts as outputs with a lower rate of false positives compared to existing systems.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_47-Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Secured Software Framework using Vulnerability Patterns and Flow Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090946</link>
        <id>10.14569/IJACSA.2018.090946</id>
        <doi>10.14569/IJACSA.2018.090946</doi>
        <lastModDate>2018-09-29T12:00:22.2770000+00:00</lastModDate>
        
        <creator>Nor Hafeizah Hassan</creator>
        
        <creator>Nazrulazhar Bahaman</creator>
        
        <creator>Burairah Hussin</creator>
        
        <creator>Shahrin Sahib</creator>
        
        <subject>Software secured framework; security classification; software security; common vulnerabilities and exposures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This article describes the process of simplifying software security classification. The inputs of this process include a reference model from previous research and the existing Common Vulnerabilities and Exposures (CVE) database. An interesting aim is to find out how the secured software framework can be made implementable in practice. In order to answer this question, some inquiries were set out regarding the reference model and the meta-process for classification to form a workable measurement system. The outputs of the process are a discussion of the experimental results and experts' validation. The experiments use the existing CVE database, analyzing a) the framework applied on three mixed datasets, and b) the framework applied on two focused datasets. The first explains the result when the framework is applied on CVE data drawn randomly from a mix of vendors, and the latter when it is applied on CVE data drawn randomly but from selected vendors. The metrics used in this assessment are precision and recall. The results show a strong indication that the framework can produce acceptable output accuracy. Apart from that, several experts' views were discussed to show the correctness of the classification rules, eliminate their ambiguity, and validate the whole framework process.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_46-Enhancing_the_Secured_Software_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Generation of a Stable Walking Trajectory of a Biped Robot based on the COG based-Gait Pattern and ZMP Constraint</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090945</link>
        <id>10.14569/IJACSA.2018.090945</id>
        <doi>10.14569/IJACSA.2018.090945</doi>
        <lastModDate>2018-09-29T12:00:21.2330000+00:00</lastModDate>
        
        <creator>Arbia Ayari</creator>
        
        <creator>Jilani Knani</creator>
        
        <subject>Biped robot; COG; ZMP; stability; LIPM; LPM; walking gait</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The research work contained in this paper focuses on the generation of a stable walking pattern for a biped robot and the study of its dynamic equilibrium while controlling the two following criteria: the centre of gravity (COG) and the zero-moment point (ZMP). Stability was controlled such that the biped has to avoid collision with obstacles. The kinematic constraints were also taken into consideration during the walking of the biped robot. The generation of the walking patterns is composed of several stages. First, we used the Kajita method for the generation of the COG trajectory, based on the linear inverted pendulum model (LIPM) during the simple support phase (SSP) and the linear pendulum model (LPM) during the double support phase (DSP). After that, we used two 4th-order spline functions to generate the swing foot trajectory during the SSP and exact formulas for the foot trajectory during the DSP. Finally, Newton's algorithm was applied (at the level of the inverse geometric model) in order to calculate the different joint values according to the desired trajectories of the hip and the feet. Ground reaction forces were also determined from the dynamic model to satisfy the kinematic constraints on both feet of the biped. The walking generation is performed for two different speeds. To study the biped's balance, a ZMP generation algorithm was run during the different walking phases and the results obtained for the two cases were compared.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_45-The_Generation_of_a_Stable_Walking_Trajectory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of End-to-End Packet Delay for Internet of Things in Wireless Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090944</link>
        <id>10.14569/IJACSA.2018.090944</id>
        <doi>10.14569/IJACSA.2018.090944</doi>
        <lastModDate>2018-09-29T12:00:20.2030000+00:00</lastModDate>
        
        <creator>Imane Maslouhi</creator>
        
        <creator>El Miloud Ar-reyouchi</creator>
        
        <creator>Kamal Ghoumid</creator>
        
        <creator>Kaoutar Baibai</creator>
        
        <subject>End to end delay; internet of things; multi hop; wireless communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Accurate and efficient estimators for End-to-End Packet Delay (E2EPD) play a significant and critical role in Quality of Service (QoS) provisioning in Internet of Things (IoT) wireless communications. The purpose of this paper is, on one hand, to propose novel real-time evaluation metrics and, on the other hand, to address the effects of varying packet payload (PP) size. These two objectives rely on the analysis of E2EPD for QoS provisioning in multi-hop wireless IoT networks across multiple hop counts from source to destination. The results of this study show the critical effect of PP size, hop count and interface speed on improving E2EPD for applications requiring real-time IoT communications.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_44-Analysis_of_End_to_End_Packet_Delay.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Classification Model for Diagnosing Multiple Disease Condition from Chest X-Ray</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090943</link>
        <id>10.14569/IJACSA.2018.090943</id>
        <doi>10.14569/IJACSA.2018.090943</doi>
        <lastModDate>2018-09-29T12:00:19.1430000+00:00</lastModDate>
        
        <creator>Savitha S K</creator>
        
        <creator>N.C. Naveen</creator>
        
        <subject>Chest x-ray; classification; supervised learning; radiographs; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Classification plays a significant role in the diagnosis of any form of radiological image in the healthcare sector. After reviewing existing classification approaches applied to chest radiographs, it was found that existing techniques are largely restricted to binary classification, which is not comprehensive enough to assist an effective diagnosis process for chest disease conditions. This paper presents a novel approach to classifying chest x-rays on the basis of the practical disease condition. Harnessing the potential features of content-based image retrieval, the proposed system introduces a novel concept of an attribute map that not only performs comprehensive classification but also makes the complete computational model extremely lightweight. The study outcome proved to offer better accuracy with the proposed non-iterative process in contrast to existing classifier designs.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_43-Comprehensive_Classification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applied Artificial Intelligence in 3D-game (HYSTERIA) using UNREAL ENGINE4</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090942</link>
        <id>10.14569/IJACSA.2018.090942</id>
        <doi>10.14569/IJACSA.2018.090942</doi>
        <lastModDate>2018-09-29T12:00:18.5970000+00:00</lastModDate>
        
        <creator>Muhammad Muzammul</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Muhammad Umer Ghani</creator>
        
        <creator>Muhammad Imran Manzoor</creator>
        
        <creator>Muhammad Kashif</creator>
        
        <creator>Muhammad Yahya Saeed</creator>
        
        <subject>Applied AI; UNREAL ENGINE 4; technology adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The game development industry is spreading its roots at a wider level. With advancements in gaming technologies, industries have adopted the latest trends for developing modern games. Artificial intelligence (AI) with programming has provided countless support for adopting the latest technologies in the game industry. This paper aims to highlight some major points of our research, "Creation of a third person shooter game in Unreal Engine 4". We discuss how we can use one of the most powerful current-generation game engines in an attempt to create our own game, "Hysteria". We endeavored to replicate the major game production cycle as it is used by modern gaming industries. We attempted to create an action-adventure shooting game with its own original storyline. The game Hysteria is played from a third-person perspective in which the player must go through multiple environments fighting hordes of enemies and try to reach the end of the level. Depending on the difficulty level that the player sets, the number of enemies and their fighting intensity will vary. The game has been developed but is running at an initial stage; further enhancement will be required to give it a much more professional impression so that in the near future it could be successfully commercialized.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_42-Applied_Artificial_Intelligence_in_3D_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Error Output Feedback Digital Delta Sigma Modulator with In–Stage Dithering for Spur–Free Output Spectrum</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090941</link>
        <id>10.14569/IJACSA.2018.090941</id>
        <doi>10.14569/IJACSA.2018.090941</doi>
        <lastModDate>2018-09-29T12:00:17.5500000+00:00</lastModDate>
        
        <creator>Sohail Imran Saeed</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mehr e Munir</creator>
        
        <subject>Digital delta sigma modulator; fractional N – frequency synthesizer; phase locked loop; error feedback modulator; spur; dither; MASH; HK – MASH</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>A Digital Delta Sigma Modulator (DDSM) is responsible for the generation of spurious tones at the output of a fractional-N frequency synthesizer due to its inherent periodicity. This results in an impure output spectrum when DDSMs are used to generate the fractional numbers in the divider of a Phase Locked Loop (PLL) based frequency synthesizer. This paper presents the design of an Error-Output feedback modulator based third-order Multi-stage noise Shaping (MASH) structure with less hardware and an effective error compensation network to break the underlying periodicity of the DDSM. The DDSM is also analyzed using non-shaped, shaped and self-dithering mechanisms to achieve a pure output spectrum and reduced quantization noise.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_41-Design_of_an_Error_Output_Feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Fusion of Statistical and Texture Features on HSI based Leaf Images with Both Dorsal and Ventral Sides</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090940</link>
        <id>10.14569/IJACSA.2018.090940</id>
        <doi>10.14569/IJACSA.2018.090940</doi>
        <lastModDate>2018-09-29T12:00:16.5200000+00:00</lastModDate>
        
        <creator>Poonam Saini</creator>
        
        <creator>Arun Kumar</creator>
        
        <subject>Dorsal; ventral; leaf classification; random forest; texture features; statistical features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The present work involves statistically analyzing and studying the overall classification accuracy results using Hue channel images of different plant species using their dorsal and ventral sides, and then subjecting them to the process of feature extraction using first order statistical features and texture based features. These extracted features have been subjected to the classification process using KNN and Random Forest algorithms. Further, this work studies the fusion of two different kinds of features extracted for dorsal and ventral plant leaf images and studying the effect of fusion on the overall classification accuracy results. This work also delves into the feature selection task using random forest algorithm and studies the effect of reduced dataset with unique features on the overall classification accuracy results. The most important outcome of this investigation is that the ventral leaf images can be a suitable alternative for plant species classification using digital images and further, the fusion of features does improve the classification accuracy results.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_40-Effect_of_Fusion_of_Statistical_and_Texture_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validating Antecedents of Customer Engagement in Social Networking Sites using Fuzzy Delphi Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090939</link>
        <id>10.14569/IJACSA.2018.090939</id>
        <doi>10.14569/IJACSA.2018.090939</doi>
        <lastModDate>2018-09-29T12:00:15.4770000+00:00</lastModDate>
        
        <creator>Noraniza Md Jani</creator>
        
        <creator>Mohd Hafiz Zakaria</creator>
        
        <creator>Zulisman Maksom</creator>
        
        <creator>Md. Shariff M. Haniff</creator>
        
        <creator>Ramlan Mustapha</creator>
        
        <subject>Customer engagement; antecedents; fuzzy delphi; SNS; online community</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The concept of online customer engagement is becoming imperative in modern business due to uncontrolled conversation via cyber avenues. This study validates the antecedents of customer engagement conceptualized in Social Networking Sites (SNS) by employing the Fuzzy Delphi method. Through purposive sampling, a total of 12 experts from academia and practice participated in the verification of items through 7-point linguistic scales of the questionnaire instrument. The findings show that the invited experts reached agreement on the elements within the framework, with 75% agreement for each construct. The analysis highlights the implications of the relevant theories for the direction and new dimensions of the customer engagement concept, especially in SNS, for future researchers. Businesses are clearly able to gain stronger knowledge and information about their customer-related factors and their prospects on SNS.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_39-Validating_Antecedents_of_Customer_Engagement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Consequences of Customer Engagement in Social Networking Sites: Employing Fuzzy Delphi Technique for Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090938</link>
        <id>10.14569/IJACSA.2018.090938</id>
        <doi>10.14569/IJACSA.2018.090938</doi>
        <lastModDate>2018-09-29T12:00:14.9300000+00:00</lastModDate>
        
        <creator>Noraniza Md Jani</creator>
        
        <creator>Mohd Hafiz Zakaria</creator>
        
        <creator>Zulisman Maksom</creator>
        
        <creator>Md. Shariff M. Haniff</creator>
        
        <creator>Ramlan Mustapha</creator>
        
        <subject>Customer engagement; fuzzy delphi; SNS; consequences; brand page</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The consequences of customer engagement in the Social Networking Sites (SNS) community have a direct impact on the brand. This research examines cohesive mechanisms for verifying items on the most influential consequences of participating in the brand community and joining electronic Word-of-Mouth (eWOM) as manifestations of the behavior of such communities. Using the Fuzzy Delphi technique, a total of 12 heterogeneous experts were involved in the verification process through a 7-point linguistic scale of the questionnaire survey. The results show good evidence of expert consensus, reaching 75% for each consequence of the engaged customers. On the SNS platform, further aspects of the inspected effects can be expanded and studied in relevant domains. Practitioners will be more strategic in maintaining and fostering customer relationships, and in consistently influencing new customers when interacting actively through SNS brand pages.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_38-Consequences_of_Customer_Engagement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Designing of Adaptive Self-Assessment Activities in Second Language Learning using Massive Open Online Courses (MOOCs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090937</link>
        <id>10.14569/IJACSA.2018.090937</id>
        <doi>10.14569/IJACSA.2018.090937</doi>
        <lastModDate>2018-09-29T12:00:13.9000000+00:00</lastModDate>
        
        <creator>Hasmaini Hashim</creator>
        
        <creator>Sazilah Salam</creator>
        
        <creator>Siti Nurul Mahfuzah Mohamad</creator>
        
        <creator>Nur Syafiatun Safwana Sazali</creator>
        
        <subject>MOOCs; adaptive self-assessment; learning styles; cognitive styles; second language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Massive Open Online Courses (MOOCs) provide an effective learning platform with various high-quality educational materials accessible to learners from all over the world. In this paper, the types of learner characteristics in MOOCs second language learning are discussed; however, problems and challenges remain, including assessment. A quantitative research method has been utilized in this study, and the results are used to implement suitable adaptive self-assessment activities in MOOCs learning. The findings of this study are twofold: (1) the dimensions of learner characteristics (learning styles and cognitive style) for improving student performance in MOOCs learning, and (2) suitable self-assessment activities that consider learners' requirements, i.e., are adaptive to learner characteristics, for improving MOOCs learning performance. Based on the findings, the data indicate that the visual, active, thinking, and intuitive learner types are the dimensions proposed in this study. Our aim is to propose adaptive self-assessment activities for improving MOOCs learning in the second language course. In a future study, students' engagement with MOOC assessment in the second language will be investigated.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_37-The_Designing_of_Adaptive_Self_Assessment_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Linear Phase High Pass FIR Filter using Weight Improved Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090936</link>
        <id>10.14569/IJACSA.2018.090936</id>
        <doi>10.14569/IJACSA.2018.090936</doi>
        <lastModDate>2018-09-29T12:00:12.7300000+00:00</lastModDate>
        
        <creator>Adel Jalal Yousif</creator>
        
        <creator>Ghazwan Jabbar Ahmed</creator>
        
        <creator>Ali Subhi Abbood</creator>
        
        <subject>Finite impulse response filter; evolutionary optimization; particle swarm optimization; fitness function; genetic algorithm; high pass filter; impulse response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The design of a Finite Impulse Response (FIR) digital filter involves multi-parameter optimization, for which traditional gradient-based methods are not effective enough to achieve a precise design. The aim of this paper is to present a method of designing a 24th-order high pass FIR filter using an evolutionary heuristic search technique called Weight Improved Particle Swarm Optimization (WIPSO). A new function of the weight parameters is constructed to obtain a better optimal solution with faster computation. The performance of the proposed algorithm is compared with two other search optimization algorithms, namely the standard Genetic Algorithm (GA) and conventional Particle Swarm Optimization (PSO). The simulation results show that the proposed WIPSO algorithm is better than GA and PSO in terms of magnitude response accuracy and convergence speed for the design of the 24th-order high pass FIR filter.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_36-Design_of_Linear_Phase_High_Pass_FIR_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Protocol using Fuzzy Logic and Grids with Two-Dimensional Techniques for Saving Energy in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090935</link>
        <id>10.14569/IJACSA.2018.090935</id>
        <doi>10.14569/IJACSA.2018.090935</doi>
        <lastModDate>2018-09-29T12:00:12.1530000+00:00</lastModDate>
        
        <creator>Emad M Ibbini</creator>
        
        <creator>Kweh Yeah Lun</creator>
        
        <creator>Mohamed Othman</creator>
        
        <creator>Zurina Mohd Hanapi</creator>
        
        <creator>Amir Abbas Baradaran</creator>
        
        <subject>Fuzzy logic; fuzzy inference engine; first node die; last node die; energy efficiency; lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This work proposes an energy-saving protocol for wireless sensor networks (WSNs) using fuzzy logic and grids with two-dimensional techniques, namely, gravity and energy centers, to address the pressing issue of energy efficiency in WSNs. The optimal cluster head is chosen in two stages of the proposed protocol to prolong the network lifetime and reduce energy consumption. The proposed protocol evaluates the cluster-head radius according to the residual energy of the sensor nodes and their distance to the base station (BS). A fuzzy inference engine (Mamdani&#39;s rule) is used to select the node with the best chance of being elected cluster head. The proposed scheme shows better improvements than other related protocols, extending the lifetime of the Two Dimensional Technique Based on Center of Gravity and Energy Center (TDTCGE) protocol by 54% and saving more energy. Results derived from the MATLAB simulator show that the proposed protocol performs better than the TDTCGE protocol, and simulation results also show that our protocol offers a much better network lifetime and energy efficiency than other existing protocols.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_35-An_Efficient_Protocol_using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG Signals based Brain Source Localization Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090934</link>
        <id>10.14569/IJACSA.2018.090934</id>
        <doi>10.14569/IJACSA.2018.090934</doi>
        <lastModDate>2018-09-29T12:00:11.5600000+00:00</lastModDate>
        
        <creator>Anwar Ali Gaho</creator>
        
        <creator>Sayed Hyder Abbas Musavi</creator>
        
        <creator>Munsif Ali Jatoi</creator>
        
        <creator>Muhammad Shafiq</creator>
        
        <subject>Electroencephalograph; brain source localization; forward problem; inverse problem; bayesian framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This article focuses on an overview of the functionality of neurons and an investigation of the current research and algorithms used for brain source localization. The human brain is made up of active neurons that continuously generate electrical impulses on the scalp surface. The neurons, called pyramidal cells, transmit messages through their dendrites. The active parts of the brain are addressed and measured by various neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG). These techniques help diagnose pathological, physiological, mental, and functional abnormalities of the brain. EEG is a high-temporal-resolution but low-spatial-resolution technique that non-invasively yields potential difference measurements between pairs of electrodes over the scalp. It is used to understand the behavior of the brain, which is further used to analyze various brain disorders. EEG brain source localization has remained an active area of research in neurophysiology over the last couple of decades and is still being investigated in terms of processing time, resolution, localization error, free energy, and the integrated techniques and algorithms applied. In this paper, several approaches to the forward problem, the inverse problem, and the Bayesian framework are explored to address the uncertainties and issues of localizing the neural activities occurring in the brain.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_34-EEG_Signals_based_Brain_Source_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hashtag Generator and Content Authenticator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090933</link>
        <id>10.14569/IJACSA.2018.090933</id>
        <doi>10.14569/IJACSA.2018.090933</doi>
        <lastModDate>2018-09-29T12:00:10.3130000+00:00</lastModDate>
        
        <creator>Kavinga Yapa Abeywardana</creator>
        
        <creator>Ginige A.R.</creator>
        
        <creator>Herath N.</creator>
        
        <creator>Somarathne H.P.</creator>
        
        <creator>Thennakoon T.M.N.S.</creator>
        
        <subject>Hashtags; social media; NLP; machine learning; REST API; content authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In the recent past, online marketing applications have been a focus of research, but there are still enormous challenges to the accuracy and authenticity of content posted through social media. Considering social media business platforms, the majority of users who try to add market value to their own products face the problem of not getting enough attention from their target audience. The purpose of this research is to develop a safe and efficient trending-hashtag-generating application for social media business users that generates trending and relevant hashtags for user content in order to reach a broad target audience, automatically generates a meaningful caption for the relevant posts, and guarantees the authenticity of the product at the same time. The application analyzes the user content, filters the important keywords, generates a meaningful caption, suggests related trending keywords, and generates trending hashtags to obtain the required reach for online marketers. Additionally, the authentication of the marketing products' content is ensured. The application uses Natural Language Processing, Machine Learning, API technologies, and Java and Python technologies. A unique database is assigned to users which contains rankings for each user, so that the target audience who engage in buying products get to know the status of the sellers with respect to the authenticity of their content. It is believed that the application provides a promising solution to the existing audience-reach problems of online marketers and buyers. The significance of this system is to help marketers and buyers engage in online buying and selling in more effective, reliable, and safer ways. This mitigates the vulnerability to bad social media marketing influences and helps establish a safe and reliable online marketing practice that makes both sellers and buyers happy. This paper provides a brief description of how to perform an organized online marketing discipline via the Trending Hashtag Generator &amp; Image Authenticator application.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_33-Hashtag_Generator_and_Content_Authenticator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Serious Game for Healthcare Industry: Information Security Awareness Training Program for Hospital Universiti Kebangsaan Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090932</link>
        <id>10.14569/IJACSA.2018.090932</id>
        <doi>10.14569/IJACSA.2018.090932</doi>
        <lastModDate>2018-09-29T12:00:09.2030000+00:00</lastModDate>
        
        <creator>Arash Ghazvini</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>Serious game; information security; awareness training program</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This paper aims to develop an information security awareness training program for the healthcare industry to ensure the appropriate protection of electronic health systems. Serious games are designed primarily for training purposes rather than pure entertainment and have proven to be an effective approach for awareness programs: they benefit learning because they are fun to play and motivate learners to participate and interact with learning activities. Developing a serious game requires reviewing adequate guidelines that identify all the characteristics to be incorporated in such games. Thus, this paper reviews serious game models that have been constructed as game development guidelines. To this end, a serious game is developed and implemented at a selected healthcare organization.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_32-A_Serious_Game_for_Healthcare_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-organized Population Segmentation for Geosocial Network Neighborhood</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090931</link>
        <id>10.14569/IJACSA.2018.090931</id>
        <doi>10.14569/IJACSA.2018.090931</doi>
        <lastModDate>2018-09-29T12:00:08.6130000+00:00</lastModDate>
        
        <creator>Low Shen Loong</creator>
        
        <creator>Syarulnaziah Anawar</creator>
        
        <creator>Zakiah Ayop</creator>
        
        <creator>Mohd Rizuan Baharon</creator>
        
        <creator>Erman Hamid</creator>
        
        <subject>Segmentation; geosocial network; virtual neighbourhood; density-based clustering; Dunbar’s number</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Geosocial network neighborhood applications allow users to share information and communicate with other people within a virtual neighborhood or community. A large and crowded neighbourhood degrades social quality within the community. Therefore, optimal population segmentation is an essential part of a geosocial network neighborhood: it specifies access rights and privileges to resources and increases social connectivity. In this paper, we propose an extension of the density-based clustering method to allow self-organized segmentation of neighbourhood boundaries in a geosocial network. The objective of this paper is two-fold: first, to improve the distance calculation in population segmentation in a geosocial network neighbourhood; second, to implement self-organized population segmentation algorithms using a threshold value and Dunbar's number. The effectiveness of the proposed algorithms is evaluated via experimental scenarios using GPS data. The proposed algorithms show improvement in segmenting large clusters into smaller ones to maintain the stability of social relationships in the neighbourhood.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_31-Self_Organized_Population_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication System Design of Remote Areas using Openbts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090930</link>
        <id>10.14569/IJACSA.2018.090930</id>
        <doi>10.14569/IJACSA.2018.090930</doi>
        <lastModDate>2018-09-29T12:00:07.5370000+00:00</lastModDate>
        
        <creator>Winarno Sugeng</creator>
        
        <creator>Theta Dinnarwaty Putri</creator>
        
        <subject>OpenBTS; GSM; communication; remote areas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>OpenBTS is a software-based GSM BTS that allows GSM cell phone users to make phone calls or send SMS (short messages) without using a commercial service provider network. OpenBTS is known as the first open-source implementation of the GSM industry-standard protocol. An OpenBTS network is easy to implement and inexpensive to maintain and install. Communication using a cell phone is needed not only in urban areas but also in remote areas; the problem, however, is that not all remote areas receive services from commercial cellular operators. By implementing mobile phone communication using OpenBTS, remote communication becomes very feasible. In this research, a communication design for GSM mobile phones was delivered using OpenBTS with telephone and SMS services.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_30-Communication_System_Design_of_Remote_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT based Warehouse Intrusion Detection (E-Perimeter) and Grain Tracking Model for Food Reserve Agency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090929</link>
        <id>10.14569/IJACSA.2018.090929</id>
        <doi>10.14569/IJACSA.2018.090929</doi>
        <lastModDate>2018-09-29T12:00:06.4600000+00:00</lastModDate>
        
        <creator>Sipiwe Chihana</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Douglas Kunda</creator>
        
        <subject>Internet of things; motion sensing; RFID; cloud storage; GSM/GPRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Zambia’s agricultural sector, through the Food Reserve Agency (FRA), while still underdeveloped, faces many challenges that range from marketing, spoilage, infestation, and theft at site to spillage and storage, among others. The methods used by FRA in its business processes are largely manual, as there are no systems in place. In order to help curb these problems, this paper proposed and developed novel methods that can be used to sense warehouse intrusion in real time and track grain within the FRA circulation. The IoT-based prototype model made use of the APC220 transceiver, GSM, GPRS, RFID, PIR sensors, and cloud storage. To curb theft of grain at storage points, the system uses motion sensing through PIR sensors, a wireless radio communication module, and GSM/GPRS technologies, such that when anyone comes within range of a PIR sensor, the sensor sends a logic signal to the microcontroller. Lastly, RFID combined with GSM and an Arduino microcontroller is responsible for grain tracking. From the results obtained in the experiment conducted, it is believed that once this technology is adopted, theft will be reduced and grain management in the FRA satellite depots dotted around the country will improve.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_29-An_IoT_based_Warehouse_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NADA: New Arabic Dataset for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090928</link>
        <id>10.14569/IJACSA.2018.090928</id>
        <doi>10.14569/IJACSA.2018.090928</doi>
        <lastModDate>2018-09-29T12:00:05.4300000+00:00</lastModDate>
        
        <creator>Nada Alalyani</creator>
        
        <creator>Souad Larabi Marie-Sainte</creator>
        
        <subject>Data collection; arabic natural language processing; arabic text categorization; dewey decimal classification; synthetic minority over-sampling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In recent years, Arabic Natural Language Processing, including text summarization, text simplification, text categorization, and other natural-language-related disciplines, has been attracting more researchers. Appropriate resources for Arabic Text Categorization are becoming a big necessity for the development of this research. The few existing corpora are not ready for use: they require preprocessing and filtering operations. In addition, most of them are not organized based on standard classification methods, which creates unbalanced classes and thus reduces classification accuracy. This paper proposes a New Arabic Dataset (NADA) for Text Categorization purposes. The corpus is composed of two existing corpora, OSAC and DAA. The new corpus is preprocessed and filtered using recent state-of-the-art methods. It is also organized based on the Dewey decimal classification scheme and the Synthetic Minority Over-Sampling Technique. The experimental results show that NADA is an efficient dataset ready for use in Arabic Text Categorization.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_28-NADA_New_Arabic_Dataset_for_Text_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Algorithm for Cyberbullying Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090927</link>
        <id>10.14569/IJACSA.2018.090927</id>
        <doi>10.14569/IJACSA.2018.090927</doi>
        <lastModDate>2018-09-29T12:00:04.4300000+00:00</lastModDate>
        
        <creator>Monirah Abdullah Al-Ajlan</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <subject>Cyberbullying; convolutional neural network; CNN; detection; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Cyberbullying is a crime in which one person becomes the target of harassment and hate. Many cyberbullying detection approaches have been introduced; however, they were largely based on textual and user features. Most of the research found in the literature aimed to improve detection by introducing new features; however, as the number of features increases, the feature extraction and selection phases become harder. Moreover, no study has examined the meaning of words and semantics in cyberbullying. In order to bridge this gap, we propose a novel algorithm, CNN-CB, that eliminates the need for feature engineering and produces better predictions than traditional cyberbullying detection approaches. The proposed algorithm adopts the concept of word embedding, where similar words have similar embeddings; therefore, bullying tweets will have similar representations, and this will advance detection. CNN-CB is based on a convolutional neural network (CNN) and incorporates semantics through the use of word embeddings. Experiments showed that the CNN-CB algorithm outperforms traditional content-based cyberbullying detection with an accuracy of 95%.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_27-Deep_Learning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Profile-Based Semantic Method using Heuristics for Web Search Personalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090926</link>
        <id>10.14569/IJACSA.2018.090926</id>
        <doi>10.14569/IJACSA.2018.090926</doi>
        <lastModDate>2018-09-29T12:00:03.3870000+00:00</lastModDate>
        
        <creator>Hikmat A. M. Abdeljaber</creator>
        
        <subject>Semantic search method; user profile; heuristics; web search personalization; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>User profiles play a critical role in personalizing user search: they assist search systems in retrieving information from the web that is relevant to the user's needs. Researchers have presented a vast number of profile-based approaches that aim to improve the effectiveness of information retrieval. However, these approaches are syntactic-based and fail to achieve user satisfaction, in the sense that the search results do not meet user preferences because the search is keyword-based rather than semantic-based. Exploiting user profiles with the application of semantic web technology to personalization might produce a step forward in future retrieval systems. By adopting a profiling approach and using ontology base characteristics, a semantic-based method using heuristics and the KNN algorithm is proposed. It searches ontology base domains horizontally and vertically to discover and extract the concept closest to the meaning of the query keyword. The extracted concept is used to expand the user query to personalize the search result and present customized information for individuals.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_26-Profile_based_Semantic_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Clustering using Ensemble Clustering Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090925</link>
        <id>10.14569/IJACSA.2018.090925</id>
        <doi>10.14569/IJACSA.2018.090925</doi>
        <lastModDate>2018-09-29T12:00:02.3230000+00:00</lastModDate>
        
        <creator>Muhammad Mateen</creator>
        
        <creator>Junhao Wen</creator>
        
        <creator>Mehdi Hassan</creator>
        
        <creator>Sun Song</creator>
        
        <subject>Agglomerative; document clustering; ensemble clustering; gustafson kessel; inverse documents frequency; text clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Clustering is used in different fields of research, including data mining, taxonomy, document retrieval, image segmentation, and pattern classification. Text clustering is a technique through which texts/documents are divided into a particular number of groups so that the text within each group is related in content. In this paper, the idea of ensemble text clustering by majority voting is defined. For this purpose, different clustering methods such as fuzzy c-means, k-means, agglomerative, Gustafson-Kessel, and k-medoid are used. After preprocessing the documents, the inverse document frequency (IDF) is computed from the provided dataset, and the resulting IDF is used as input to the clustering algorithms. The Dunn Index and the Davies-Bouldin Index are calculated to analyze the usefulness of the proposed ensemble clustering. In this work, a dataset &quot;Textclus&quot;, which contains four different text classes (history, education, politician, and art), is used; additionally, another dataset, &quot;20newsgroups&quot;, is also used for analysis. The clustering quality measures have also been calculated from the proposed ensemble clustering results. The attained results show that the proposed ensemble clustering outperforms other state-of-the-art clustering techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_25-Text_Clustering_using_Ensemble_Clustering_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing Trends of Existing Research Contribution Towards Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090924</link>
        <id>10.14569/IJACSA.2018.090924</id>
        <doi>10.14569/IJACSA.2018.090924</doi>
        <lastModDate>2018-09-29T12:00:01.7630000+00:00</lastModDate>
        
        <creator>Bhagyashree Ambore</creator>
        
        <creator>Suresh L</creator>
        
        <subject>Bandwidth; cloud computing; energy; internet-of-things; security; sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>With the growing demand for system automation, technology integration, and non-human-intervention techniques, the Internet-of-Things (IoT) has evolved as a boon, offering value-added services over pervasive computing. IoT comprises a highly complex system that integrates ubiquitous computing with low-powered data-capturing devices via a gateway. Along with various forms of unimaginable advantages, IoT is also associated with a long list of ongoing problems. The prime objective of the paper is to gauge the effectiveness of the existing literature on mitigating the issues of IoT. The paper illustrates the most and the least frequently explored research topics in IoT to provide a true picture of existing research trends. The paper also identifies some of the research gaps extracted after reviewing the existing literature.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_24-Assessing_Trends_of_Existing_Research_Contribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval System based on Color Global and Local Features Combined with GLCM for Texture Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090923</link>
        <id>10.14569/IJACSA.2018.090923</id>
        <doi>10.14569/IJACSA.2018.090923</doi>
        <lastModDate>2018-09-29T12:00:00.6230000+00:00</lastModDate>
        
        <creator>Jehad Q. Alnihoud</creator>
        
        <subject>CBIR; color histogram; GLCM; K-means; WANG database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In CBIR (content-based image retrieval), features are extracted based on color, texture, and shape. Many factors affect the accuracy (precision) of retrieval, such as the number of features, the type of features (local or global), the color model, and the distance measure. In this paper, a two-phase approach to retrieve similar images from a dataset based on color and texture is proposed. In the first phase, a global color histogram is utilized with the HSV (hue, saturation, and value) color model, and an automatic cropping technique is proposed to accelerate feature extraction and enhance the accuracy of retrieval. A joint histogram and the GLCM (gray-level co-occurrence matrix) are deployed in phase two, where color features and texture features are combined to further enhance retrieval accuracy. Finally, a new way of using K-means as a clustering algorithm is proposed to classify and retrieve images. Two experiments are conducted using the WANG database, which consists of 10 different classes, each with 100 images. Results of comparing the proposed approach with the most relevant approaches are promising.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_23-Image_Retrieval_System_based_on_Color.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interest Reduction and PIT Minimization in Content Centric Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090922</link>
        <id>10.14569/IJACSA.2018.090922</id>
        <doi>10.14569/IJACSA.2018.090922</doi>
        <lastModDate>2018-09-29T11:59:59.4700000+00:00</lastModDate>
        
        <creator>Aadil Zia Khan</creator>
        
        <subject>Content centric networks; congestion control; scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Content Centric Networking aspires to a more efficient use of the Internet through in-path caching, multi-homing, and provisions for state maintenance and intelligent forwarding at the CCN routers. However, these benefits of CCN’s communication model come at the cost of large Pending Interest Table (PIT) sizes and Interest traffic overhead. Reducing PIT size is essential since larger memories have slower access speeds, which would become a bottleneck in high-speed networks. Similarly, Interest traffic may fill up the upload capacity, which would be inefficient as well as problematic for traffic with bidirectional data transfers, such as video conferencing. Our contribution in this paper is threefold. Firstly, we reduce PIT size by eliminating the need for maintaining PIT entries at all routers: we include the return path in the packets and maintain PIT entries at the egress routers only. Further, we use Persistent Interests (PIs), where one Interest suffices for retrieving multiple data segments, to reduce PIT entries at the egress routers as well as Interest overhead. This is especially useful for live and interactive traffic types, where packet sizes are small, leading to a large number of pipelined Interests at any given time. Lastly, since using PIs affects CCN’s original transport model, we address the affected aspects, namely congestion and flow control and multi-path content retrieval. For our congestion scheme, we show that it achieves max-min fairness.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_22-Interest_Reduction_and_PIT_Minimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Product Feature Ranking and Popularity Model based on Sentiment Comments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090921</link>
        <id>10.14569/IJACSA.2018.090921</id>
        <doi>10.14569/IJACSA.2018.090921</doi>
        <lastModDate>2018-09-29T11:59:58.4400000+00:00</lastModDate>
        
        <creator>Siti Rohaidah Ahmad</creator>
        
        <creator>Azuraliza Abu Bakar</creator>
        
        <creator>Mohd Ridzwan Yaakub</creator>
        
        <creator>Nurhafizah Moziyana Mohd Yusop</creator>
        
        <creator>Muslihah Wook</creator>
        
        <creator>Arniyati Ahmad</creator>
        
        <subject>Product feature ranking; sentiment analysis; feature selection; sentiment word</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This paper proposes the development of a model to determine the feature popularity ranking for products in the market. Each feature reviewed by a customer is related to the sentiment words present in the sentences of the review. The quantity of a product feature, derived from a customer review dataset, cannot be used as a benchmark to determine customers’ preferences, since each feature is influenced by sentiment words that give it either a positive or a negative meaning. A positive meaning shows that the feature is liked by the user; a negative meaning shows that it is disliked. This study finds that users’ sentiment assessments play an important role in determining the feature popularity ranking and affect each product feature. Thus, this study proposes a model that takes into account the sentiment assessments present in each sentence of a customer review of a product feature. A case study has been conducted to prove that the developed model is able to produce a list of product features ranked by popularity. Results of this experimental model are also put into a simple comparative analysis with a few models from previous studies.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_21-Product_Feature_Ranking_and_Popularity_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Wearable Patch Antenna for Wireless Body Area Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090920</link>
        <id>10.14569/IJACSA.2018.090920</id>
        <doi>10.14569/IJACSA.2018.090920</doi>
        <lastModDate>2018-09-29T11:59:57.8630000+00:00</lastModDate>
        
        <creator>Saqib Hussain</creator>
        
        <creator>Saima Hafeez</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Nasrullah Pirzada</creator>
        
        <subject>High-Frequency structure simulator (HFSS); return loss; voltage standing wave ratio (VSWR); gain; specific absorption rate (SAR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Wireless body area networks are widely used due to the increase in the use of wireless networks and various electrical devices. A wearable patch antenna can enhance various WBAN applications. In this paper, a low-profile wearable microstrip patch antenna is designed and proposed for the continuous observation of human vital signs, such as blood pressure, pulse rate, and body temperature, using wireless body area network (WBAN) technology. The operating frequency of the antenna is taken as 2.45 GHz, which lies in the industrial, scientific and medical (ISM) frequency band. Polyester textile fabric with a relative permittivity of 1.44 and a thickness of 2.85 mm is used as the substrate material. The proposed antenna is designed to achieve better return loss, VSWR, and gain, and a lower specific absorption rate (SAR), compared to other existing wearable antennas. The achieved return loss at 2.45 GHz is about -10.52 dB, with a gain of 7.81 dB. The VSWR value achieved at 2.45 GHz is 1.84, which indicates good impedance matching. Other antenna field parameters, such as 2D and 3D gain, radiation pattern, and SAR value, have been calculated. The High-Frequency Structure Simulator (HFSS) is used to design and simulate the proposed antenna.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_20-Design_of_Wearable_Patch_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Connectivity Restoration Techniques for Wireless Sensor and Actor Network (WSAN), A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090919</link>
        <id>10.14569/IJACSA.2018.090919</id>
        <doi>10.14569/IJACSA.2018.090919</doi>
        <lastModDate>2018-09-29T11:59:56.8030000+00:00</lastModDate>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <subject>Wireless sensor networks; wireless sensor and actor networks; node failure; network partitioning; connectivity restoration; node movements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Wireless sensor and actor networks (WSANs) are among the most promising research areas in the field of wireless communication. A WSAN consists of a large number of small independent sensor nodes and powerful actor nodes equipped with communication and computation capabilities. Actors gather the sensors’ data and react collaboratively to attain application-specific assignments. A strongly connected inter-actor network is required to coordinate their operations. An actor node may fail due to battery depletion or hardware failure, and this failure may divide the network into disjoint segments. This problem not only degrades the network performance but also reduces the efficiency and effectiveness of the network. To restore the network to its original state, researchers have proposed many connectivity restoration techniques during the last few years. This paper provides a brief review of the existing connectivity restoration techniques for WSANs along with their advantages and limitations.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_19-Connectivity_Resotration_Techniques_for_Wireless_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Railroad Networks in SAR Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090918</link>
        <id>10.14569/IJACSA.2018.090918</id>
        <doi>10.14569/IJACSA.2018.090918</doi>
        <lastModDate>2018-09-29T11:59:55.7570000+00:00</lastModDate>
        
        <creator>Safak Altay A&#231;ar</creator>
        
        <creator>Safak Bayir</creator>
        
        <subject>Remote sensing; synthetic aperture radar; railroad networks detection; perceptual grouping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In this study, a railroad network detection method for synthetic aperture radar (SAR) images is proposed. The proposed method consists of three steps. Firstly, railroad segments are detected; an existing line detector is modified by defining some rules for this process. Then, the segments are connected by utilizing perceptual grouping. Finally, a new line analysis algorithm is applied to determine the real parts of the railroad networks. A software tool is developed to implement and evaluate the proposed method. Completeness and correctness values obtained after the different steps are computed to evaluate the proposed method. Two different TerraSAR-X images are used in the experiments, and the obtained results are discussed in detail.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_18-Detection_of_Railroad_Networks_in_SAR_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Quantum based Evolutionary Algorithm for Stock Index and Bitcoin Price Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090917</link>
        <id>10.14569/IJACSA.2018.090917</id>
        <doi>10.14569/IJACSA.2018.090917</doi>
        <lastModDate>2018-09-29T11:59:55.2100000+00:00</lastModDate>
        
        <creator>Usman Amjad</creator>
        
        <creator>Tahseen Ahmed Jilani</creator>
        
        <creator>Humera Tariq</creator>
        
        <creator>Amir Hussain</creator>
        
        <subject>Quantum evolutionary algorithm; fuzzy time series; nature inspired computing; fuzzy logic; crypto currency; bitcoin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Quantum computing has emerged as a new dimension with various applications in different fields such as robotics, cryptography, and uncertainty modeling. On the other hand, nature-inspired techniques play a vital role in solving complex problems through evolutionary approaches. While evolutionary approaches are good at solving stochastic problems in unbounded search spaces, predicting uncertain and ambiguous problems in real life is of immense importance: with improved forecasting accuracy, many unforeseen events can be managed well. In this paper, a novel algorithm for Fuzzy Time Series (FTS) prediction using quantum concepts is proposed. A Quantum Evolutionary Algorithm (QEA) is used along with fuzzy logic for the prediction of time series data. The QEA is applied to interval lengths to find the optimized interval lengths that produce the best forecasting accuracy. The algorithm is applied to forecasting the Taiwan Futures Exchange (TAIFEX) index as well as Bitcoin cryptocurrency time series data as a new approach. The model results were compared with many preceding algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_17-A_Quantum_based_Evolutionary_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Singkat: A Keyword-Based URL Shortener and Click Tracker Package for Django Web Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090916</link>
        <id>10.14569/IJACSA.2018.090916</id>
        <doi>10.14569/IJACSA.2018.090916</doi>
        <lastModDate>2018-09-29T11:59:54.1800000+00:00</lastModDate>
        
        <creator>Gottfried Prasetyadi</creator>
        
        <creator>Utomo Tri Hantoro</creator>
        
        <creator>Achmad Benny Mutiara</creator>
        
        <subject>Url shortener; click tracker; python; django framework; open source</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In recent years, Python has been gaining popularity as a web scripting/programming language. In this research, we propose Singkat, an open-source uniform resource locator (URL) shortener package with a web user interface, built using the Python-based Django web framework. It can be deployed and customized in any web project based on the Django framework, ensuring that administrators retain control over the data in their own environment. Users can create shortened links using base62 values to generate pseudo-random keywords. To minimize phishing and other abuses, only registered users can create a shortened link with a chosen keyword, and it is possible to preview a link before accessing it. Registered users can also monitor each click and obtain useful information. We also ran some tests to measure Singkat’s performance and functionality.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_16-Singkat_a_Keyword_based_URL_Shortener.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical-Based Trustful Access Control Framework for Smart Campuses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090915</link>
        <id>10.14569/IJACSA.2018.090915</id>
        <doi>10.14569/IJACSA.2018.090915</doi>
        <lastModDate>2018-09-29T11:59:53.1200000+00:00</lastModDate>
        
        <creator>AHMAD B. ALKHODRE</creator>
        
        <subject>Internet of things; trust management; confidence interval; confidentiality; smart cities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The vision of the Internet of Things (IoT) is based on the idea of offering connectivity to every physical object (e.g., thermometers, banknotes, smart TVs, bicycles, etc.). This connectivity ensures that immediate information about these objects and their surroundings can be obtained, and therefore decisions can be taken based on real-time information, allowing increased productivity and efficiency. One of the most important implementations of the IoT is smart (or digital) cities, where the information collected from the connected devices is used, for instance, in configuring energy systems, enhancing traffic, controlling pollution, or ensuring security. However, there is no guarantee that all objects will provide information because, for example, some may be out of service or have lost connectivity, bearing in mind that many objects in an IoT network are characterized by their limited resources (e.g., battery life, computing, and connection capacity). Moreover, a decision in an IoT network is mostly based on the information provided by a subset of the objects rather than all of them. In addition, the obtained information can be contradictory for many reasons, such as a defect in the object or malicious interference either in the object itself or during the communication process. Therefore, it is necessary to provide a measure that reflects to what extent a decision in an IoT network is trustful. In this paper, an approach based on statistical science is proposed to measure the trustworthiness of information collected from heat sensors. An architecture and an algorithm, based on confidence interval measurement, are proposed to reduce the time taken to verify and check the trustworthiness of network sensors or any other type of IoT device.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_15-Statistical_based_Trustful_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thinging Machine Applied to Information Leakage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090914</link>
        <id>10.14569/IJACSA.2018.090914</id>
        <doi>10.14569/IJACSA.2018.090914</doi>
        <lastModDate>2018-09-29T11:59:52.1230000+00:00</lastModDate>
        
        <creator>Sabah S. Al-Fedaghi</creator>
        
        <creator>Mahmoud BehBehani</creator>
        
        <subject>Thinging; bank system; abstract machine; software development cycle; heidegger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This paper introduces a case study that involves data leakage in a bank, applying the so-called Thinging Machine (TM) model. The aim is twofold: (1) presenting a systematic conceptual framework for the leakage problem that provides a foundation for the description and design of a data leakage system; (2) developing the aim in (1) in the context of experimentation with the TM as a new modeling methodology. The TM model is based on slicing the domain of interest (a part of the world) to reveal data leakage. The case study concentrates on leakage during the internal operations of the bank. The leakage spots are exposed by surveying the data territory throughout the bank. All streams of information flow are identified, so that points of possible leakage can be traced with appropriate evidence. The modeling of flow may uncover hidden points of leakage and provide a base for a comprehensive information flow policy. We conclude that a TM based on the Heideggerian notion of thinging can serve as a foundation for the early stages of software development and as an alternative to the dominant object-orientation paradigm.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_14-Thinging_Machine_applied_to_Information_Leakage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crypt-Tag Authentication in NFC Implementation for Medicine Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090913</link>
        <id>10.14569/IJACSA.2018.090913</id>
        <doi>10.14569/IJACSA.2018.090913</doi>
        <lastModDate>2018-09-29T11:59:51.5600000+00:00</lastModDate>
        
        <creator>Z. Zainal Abidin</creator>
        
        <creator>N. A. Zakaria</creator>
        
        <creator>N. Harum</creator>
        
        <creator>M. R. Baharon</creator>
        
        <creator>Ee-Song Hong</creator>
        
        <creator>Z. Abal Abas</creator>
        
        <creator>Z. Ayop</creator>
        
        <creator>N. A. Mat Ariff</creator>
        
        <subject>Expiry date notification; Radio-Frequency Identification (RFID); Near-Field Communication (NFC); internet of things; insider threats; health care</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>This study focuses on the implementation of expiry date detection for medicine using RFID in the health care industry. The motivation for this research is that the process of searching for expired medicine is time-consuming, and current NFC implementations lack security features. Therefore, the objectives of this research are to study the RFID technology used for detecting expired medicine and to develop a new system that integrates NFC with an authentication feature. Moreover, current medicine data management still uses manual or barcode systems, which leads to inconsistency, easy duplication, and human error. Here, NFC is chosen due to its smaller signal coverage distance, which means less interference and less time available for sniffing activity by a hacker. The system is developed using C#, SQLite, Visual Studio, an NFC tag, and an NFC reader (ACR122U-A9). Experiments have shown that the proposed system produces a medicine expiry date system in which only an authorized person in charge can monitor the medicine. The impact of the proposed system is a safer, greener, and easier environment for better medicine data management. The significance of this study is a medicine expiry date detection system for health care.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_13-Crypt_Tag_Authentication_in_NFC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model for Predicting Educational Domain Rate based on the Regional Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090912</link>
        <id>10.14569/IJACSA.2018.090912</id>
        <doi>10.14569/IJACSA.2018.090912</doi>
        <lastModDate>2018-09-29T11:59:50.5000000+00:00</lastModDate>
        
        <creator>Arslan Tariq</creator>
        
        <creator>Iqra Tariq</creator>
        
        <creator>Aneela Abbas</creator>
        
        <creator>Muhammad Aadil Butt</creator>
        
        <creator>Maimoona Shahid</creator>
        
        <creator>Umair Muneer Butt</creator>
        
        <subject>Geographic information system; education; domain; technology; region</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The geographic information system (GIS) is rapidly becoming part of current technology trends. GIS can be used to identify the factors that lead an individual to adopt a field or subject. We used GIS as a major tool, together with other technologies, to identify these key factors. This research finds that people mostly migrate to other cities due to the unavailability of resources in their own region. Data were collected with the help of Survey123, through which we were able to collect the location coordinates of participants. A pilot study approach was then used to conduct this research. Results show that most users preferred to move to other cities due to the unavailability of programs in local institutes. The overall idea can be used to improve local institutes, and this research can also be used for the proper and efficient allocation of facilities and resources in a region, which in turn can save money and time.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_12-Model_for_Predicting_Educational_Domain_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interactive Content Development for Depression Awareness among Tertiary Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090911</link>
        <id>10.14569/IJACSA.2018.090911</id>
        <doi>10.14569/IJACSA.2018.090911</doi>
        <lastModDate>2018-09-29T11:59:49.4700000+00:00</lastModDate>
        
        <creator>Sarni Suhaila Rahim</creator>
        
        <creator>Thum Wei Ching</creator>
        
        <subject>2D animation; awareness; depression; interactive content; tertiary students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>“2D Animation: Depression among Tertiary Students” is a novel interactive content development that provides information about depression to the public. It consists of seven modules on depression: Introduction, Statistics, Types, Symptoms, Causes, Treatments, and Video of Depression Information. The objectives of this project are to study the causes and effects of depression among tertiary-level students in Malaysia, to design and develop a 2D animation that raises viewers’ awareness of depression, and to investigate the effectiveness of the depression animation with users. The methodology used for this project is the Multimedia Production Process, which consists of three stages: pre-production, production, and post-production. The testing results show that the interactive depression animation content is accepted by and effective for the public in fostering a better understanding and awareness of depression among tertiary students.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_11-An_Interactive_Content_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble Approach to Big Data Security (Cyber Security)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090910</link>
        <id>10.14569/IJACSA.2018.090910</id>
        <doi>10.14569/IJACSA.2018.090910</doi>
        <lastModDate>2018-09-29T11:59:48.4100000+00:00</lastModDate>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>Syed Muslim Jameel</creator>
        
        <creator>Aidarus M. Ibrahim</creator>
        
        <creator>Maryam Zaffar</creator>
        
        <creator>Kamran Raza</creator>
        
        <subject>Big data; cyber security; benign; malicious; ensemble approach; Support Vector Machine (SVM); Receiver Operating Characteristic (ROC); Features (F)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>In the past, information security was centered on event correlation designed for observing and spotting previously identified attacks. Due to the dynamic nature of multidimensional cyber-attacks, these models are no longer adequate. Specifically, these attacks use different strategies and procedures to find their way into and out of an organization. Traditional methods have reached their limit, and thus new approaches are needed to address the arising issues and challenges of big data security. To understand the current problem, we critically reviewed the literature related to big data security and the solutions proposed by the scientific community. In this paper, an ensemble approach for big data cybersecurity is proposed. To evaluate our approach, the given benchmark data is fed to three different classifiers, namely a k-nearest neighbor (KNN), a support vector machine (SVM), and a multilayer perceptron (MLP), and the outputs of the single classifiers are compared to the ensemble of the three classifiers. The reported results show that the ensemble approach for big data cybersecurity performs better than the single classifiers.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_10-An_Ensemble_approach_to_Big_Data_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality Flag of GOSAT/FTS Products Taking into Account Estimation Reliability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090909</link>
        <id>10.14569/IJACSA.2018.090909</id>
        <doi>10.14569/IJACSA.2018.090909</doi>
        <lastModDate>2018-09-29T11:59:47.3330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takashi Higuchi</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Hirofumi Ohyama</creator>
        
        <creator>Shuji Kawakami</creator>
        
        <creator>Kei Shiomi</creator>
        
        <subject>Levenberg-Marquardt; FTS; GOSAT; aerosol; cirrus cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Quality and cloud flags for products of the FTS (Fourier Transform Spectrometer) onboard GOSAT (Greenhouse gases Observing Satellite) that take cirrus clouds and thick aerosols into account are considered and proposed. The influence of cirrus and thick aerosols on the estimation of column CO2 and CH4 with GOSAT/FTS data is clarified. Relatively large estimation errors are observed in column CO2 and CH4 retrievals with FTS data under some atmospheric conditions. In order to find such cases, the retrieval results and the quality/cloud flags in the GOSAT/FTS data products are checked. Through this investigation, it is found that the relatively large errors are caused by a convergence problem due to cirrus clouds and thick aerosols. In this paper, some of the cases in which relatively large estimation errors occurred at the Saga TCCON (Total Carbon Column Observing Network) site are investigated. Also, a comparative study of column CO2 and CH4 retrieval is conducted between the standard products provided by NASA/JPL and the Levenberg-Marquardt based least squares method. It is suggested that some improvement in the estimation accuracy of column CO2 and CH4 retrieval with GOSAT/FTS data can be expected.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_9-Quality_Flag_of_GOSAT_FTS_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Mechanism of Classifying the Brain Tumor for Identifying its Critical State</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090908</link>
        <id>10.14569/IJACSA.2018.090908</id>
        <doi>10.14569/IJACSA.2018.090908</doi>
        <lastModDate>2018-09-29T11:59:46.7870000+00:00</lastModDate>
        
        <creator>Deepthi Murthy T S</creator>
        
        <creator>Sadashivappa G</creator>
        
        <creator>Ravi Shankar D</creator>
        
        <subject>Brain tumor; classification; categorization; clustering; identification; segmentation; MRI; Brain X-Ray</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Classification of brain tumors is one of the most challenging tasks in clinical and radiological research. Upon investigating the existing research contributions, we find that there is still wide-open scope for addressing the classification problem pertaining to brain tumors. Therefore, this manuscript presents a simple mechanism for classifying brain tumors in order to categorize their state of criticality. The proposed system applies multi-level preprocessing to enhance the input image, followed by image thresholding for feature extraction and decomposition using the wavelet transform. The extracted features are further subjected to a process of dimensionality reduction that, using a statistical approach, maintains a balance between a good number of enriched features and a smaller amount of redundant features. Further, a supervised learning approach is implemented that further optimizes the classification process. The study outcome is benchmarked against different classification processes to show the efficient computational environment of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_8-Novel_Mechanism_of_Classifying_the_Brain_Tumor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Intelligent Methods of SOC Estimation for Battery of Photovoltaic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090907</link>
        <id>10.14569/IJACSA.2018.090907</id>
        <doi>10.14569/IJACSA.2018.090907</doi>
        <lastModDate>2018-09-29T11:59:45.7570000+00:00</lastModDate>
        
        <creator>Tae-Hyun Cho</creator>
        
        <creator>Hye-Rin Hwang</creator>
        
        <creator>Jong-Hyun Lee</creator>
        
        <creator>In-Soo Lee</creator>
        
        <subject>Lead-acid battery; SOC; FFNN; RNN; ANFIS; gradient descent; levenberg-marquardt; scaled conjugate gradient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>It is essential to estimate the state of charge (SOC) of lead-acid batteries to improve the stability and reliability of photovoltaic systems. In this paper, we propose SOC estimation methods for a lead-acid battery using a feed-forward neural network (FFNN) and a recurrent neural network (RNN) trained with gradient descent (GD), Levenberg–Marquardt (LM), and scaled conjugate gradient (SCG) algorithms. Additionally, an adaptive neuro-fuzzy inference system (ANFIS) with a hybrid method is proposed. The voltage and current are used as input data to the neural networks to estimate the battery SOC. Experimental results show that the RNN with LM has the best performance in terms of mean squared error, but the ANFIS has the highest convergence speed.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_7-Comparison_of_Intelligent_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method of Graph Mining based on the Topological Anomaly Matrix and its Application for Discovering the Structural Peculiarities of Complex Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090906</link>
        <id>10.14569/IJACSA.2018.090906</id>
        <doi>10.14569/IJACSA.2018.090906</doi>
        <lastModDate>2018-09-29T11:59:44.7600000+00:00</lastModDate>
        
        <creator>Artem Potebnia</creator>
        
        <subject>Topological anomaly matrix; complex network; graph topology; closeness centrality; betweenness centrality; power grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The article introduces the mathematical concept of the topological anomaly matrix, which provides a foundation for the qualitative assessment of the topological organization underlying large-scale complex networks. The basic idea of the proposed concept consists in translating the distributions of individual vertex-level characteristics (such as degree, closeness, and betweenness centrality) into integrative properties of the overall graph. The article analyzes the lower bounds imposed on the items of the topological anomaly matrix and obtains new fundamental results enriching graph theory. With a view to improving the interpretability of these results, the article introduces and proves a theorem regarding the smoothness of the closeness centrality distribution over the graph’s vertices. By performing a series of experiments, the article illustrates the application of the proposed matrix for evaluating the topology of a real-world power grid network and its post-attack damage.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_6-Method_of_Graph_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Literacy of SME Managers’ on Access to Finance and Performance: The Mediating Role of Financial Service Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090905</link>
        <id>10.14569/IJACSA.2018.090905</id>
        <doi>10.14569/IJACSA.2018.090905</doi>
        <lastModDate>2018-09-29T11:59:43.6970000+00:00</lastModDate>
        
        <creator>Juma Buhimila Mabula</creator>
        
        <creator>Han Dong Ping</creator>
        
        <subject>Financial literacy; use of financial services; access to financial services; firm performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Considering financial literacy as a central factor in consumer demand for financial services, we analyze its impact on access to and actual use of financial services, and its ultimate consequences for SME performance in developing economies. Recognizing the important distinction between access to and actual use of financial services, this study uses partial least squares structural equation modelling (PLS-SEM) to estimate the conceptual model. The study reveals a significant positive impact of financial literacy on financial access and firm performance. It was also discovered that access to financial services has a significant positive direct impact on the actual use of financial services, and that the use of financial services has a significant positive effect on firm performance. The firm’s use of financial services plays a significant mediating role in the relationship between access to financial services and firm performance. The implications of these findings offer insights into the need to deepen and widen the scope of SME managers’ financial literacy for effective financial management and financing decisions. We argue that the distinct contributions of the access and actual-use constructs to firm performance have to be given attention in an attempt to avoid overgeneralizing the phenomenon.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_5-Financial_Literacy_of_SME_Managers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Amplitude Modulation of Cerebral Rhythms based Method in a Motor Task BCI Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090904</link>
        <id>10.14569/IJACSA.2018.090904</id>
        <doi>10.14569/IJACSA.2018.090904</doi>
        <lastModDate>2018-09-29T11:59:43.1530000+00:00</lastModDate>
        
        <creator>Oana Diana Eva</creator>
        
        <creator>Anca Mihaela Lazar</creator>
        
        <subject>Brain computer interface; motor tasks; electroencephalographic signal; amplitude modulation analysis; classifiers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>Quantitative evaluation based on amplitude modulation analysis of electroencephalographic signals is proposed for a brain computer interface paradigm. The method allows characterization of the interaction effects of different frequency bands in the electroencephalographic rhythms during motor tasks. A new index is proposed and computed as a measure of the amplitude modulation. Built on this index, feature vectors are established for training different classification algorithms. Signals recorded from 50 subjects revealed important differences in amplitude modulation between motor tasks. Most notably, Theta modulation of the Theta and Alpha rhythms proved to be a reliable discriminant feature between different mental tasks.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_4-An_Amplitude_Modulation_of_Cerebral_Rhythms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-Defined Financial Functions for MS SQL Server</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090903</link>
        <id>10.14569/IJACSA.2018.090903</id>
        <doi>10.14569/IJACSA.2018.090903</doi>
        <lastModDate>2018-09-29T11:59:42.5730000+00:00</lastModDate>
        
        <creator>Jolana Gubalova</creator>
        
        <creator>Petra Medvedova</creator>
        
        <subject>Financial economics; user-defined functions; financial functions; database management system; structure query language; transact-SQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The paper deals with the mathematical preparation and subsequent programming of various types of financial functions using Transact-SQL in the Database Management System MS SQL Server. Financial functions are used to automate calculations in the area of financial economics. MS SQL Server does not offer financial functions for financial data processing, as programs such as MS Excel do. We emphasize that we have used different calculation methods to create the financial formulas, not those used in Excel. If users want to work with some special functions, there is the possibility to prepare User-Defined Functions (UDFs). The use of UDFs makes it easier to carry out financial calculations on large databases.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_3-User-Defined_Financial_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aesthetics Versus Readability of Source Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090902</link>
        <id>10.14569/IJACSA.2018.090902</id>
        <doi>10.14569/IJACSA.2018.090902</doi>
        <lastModDate>2018-09-29T11:59:41.5300000+00:00</lastModDate>
        
        <creator>Ron Coleman</creator>
        
        <subject>Programming style; fractal geometry; readability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The relationship between programming style and program readability has never been examined empirically, although the association has substantial importance for both pedagogical and industry best practices. This paper studies a fractal, relativistic measure of programming style called the beauty factor or “beauty” and puts forward two new hypotheses of beauty. First, code with increasing beauty tends to be more readable. Second, beauty measures a unique property in code, called aesthetic value, that is distinct from readability. These hypotheses are tested on a corpus of 53,000 lines of open source system code written by experienced Linux programmers. Statistical correlation analysis is applied to 11 different beauty factors versus eight different readability models (i.e., 88 experiments total). As the primary finding, the data show that the maximum absolute statistically significant correlation is |ρ|=0.59, whereas the absolute median correlation is |ρ|=0.33. In other words, at least 65% of statistically significant variation in beauty cannot be explained by variations in readability; approximately 90% of statistically significant variation in beauty typically cannot be explained by variations in readability. These results lend support to both hypotheses. The data further show that indentation is more reliably correlated with readability than mnemonics or comments, and that GNU style is more correlated with readability than K&amp;R, BSD, or Linux styles.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_2-Aesthetics_versus_Readability_of_Source_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposal for A High Availability Architecture for VoIP Telephone Systems based on Open Source Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090901</link>
        <id>10.14569/IJACSA.2018.090901</id>
        <doi>10.14569/IJACSA.2018.090901</doi>
        <lastModDate>2018-09-29T11:59:40.9200000+00:00</lastModDate>
        
        <creator>Alejandro Martin</creator>
        
        <creator>Eric Gamess</creator>
        
        <creator>Dedaniel Urribarri</creator>
        
        <creator>Jes&#250;s G&#243;mez</creator>
        
        <subject>Cluster; high availability; load balancer; VoIP; SIP; kamailio; corosync; asterisk; SIPp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(9), 2018</description>
        <description>The inherent need of organizations to improve and amplify their technological platforms entails large expenses aimed at enhancing their performance. Hence, they have to contemplate mechanisms for optimizing and improving their operational infrastructure. From this arises the need to guarantee the correct operation and non-degradation of the services provided by the platform during periods of significant workload. This type of scenario is perfectly applicable to the field of VoIP technologies, where users generate elevated workloads on critical points of the infrastructure while interacting with their peers. In this research work, we propose a high availability solution with the goal of maintaining the continuity of operation of communication environments based on the SIP protocol under high load. We validate our proposal through numerous experiments. Also, we compare our solution with other classical VoIP scenarios and show the advantages of a high availability and fault tolerance architecture for organizations.</description>
        <description>http://thesai.org/Downloads/Volume9No9/Paper_1-A_Proposal_for_a_High_Availability_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Technology Disorder: Justification and a Proposed Model of Treatment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090882</link>
        <id>10.14569/IJACSA.2018.090882</id>
        <doi>10.14569/IJACSA.2018.090882</doi>
        <lastModDate>2018-09-01T08:50:12.1900000+00:00</lastModDate>
        
        <creator>Andrew Kear</creator>
        
        <creator>Sasha L. Folkes</creator>
        
        <subject>Addiction; digital; treatment; data; smartphone; behaviour; overuse; interventions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Due to advances in technology being made at an exponential rate, organisations are attempting to compete with one another by utilising state-of-the-art technology to provide innovative products and services that encourage use. However, there is no moral code to inform sensitive technology design, a consequence of which is the emergence of so-called technology addiction. While addiction as a term is problematic, increasing evidence suggests that related conditions present implications for the individual, for organisations and for wider society. In this research, a consideration of the potentially addictive elements of technology indicates that it can be possible to reverse engineer these systems, as it were, to promote the development of new behaviours which can enable the individual to abstain from overuse. Utilising smartphones to deliver digital behavioural change interventions can leverage abundant data touchpoints to provide highly tailored treatment, in addition to allowing for enhanced monitoring and accuracy. To inform understanding of this contemporary phenomenon, the literature on addiction has been reviewed, along with the literature on persuasion architecture, to build an understanding of the techniques that lend themselves to overuse and how these can be leveraged to promote recovery. From this, the authors have developed a proposed model to inform the practice of those operating in the domains of computer science.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_82-Digital_Technology_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Specification of Memory Coherence Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090881</link>
        <id>10.14569/IJACSA.2018.090881</id>
        <doi>10.14569/IJACSA.2018.090881</doi>
        <lastModDate>2018-09-01T08:50:11.6270000+00:00</lastModDate>
        
        <creator>Jahanzaib Khan</creator>
        
        <creator>Muhammad Atif</creator>
        
        <creator>Muhammad Khurram Zahoor Bajwa</creator>
        
        <creator>Muhammad Sohaib Mahmood</creator>
        
        <creator>Sobia Usman</creator>
        
        <subject>Memory coherence; formal specification; shared memory; address space; analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Memory coherence is the most fundamental requirement in a shared virtual memory system where there are concurrent as well as loosely coupled processes. These processes can demand a page for reading or writing. The memory is called coherent if the last update to a page remains the same for each process until the owner of that page changes it. The ownership is transferred to a process interested in updating that page. In [Kai Li and Paul Hudak. Memory Coherence in Shared Virtual Memory Systems, 1986. Proc. of the Fifth Annual ACM Symposium on Principles of Distributed Computing], algorithms ensuring memory coherence are given. We formally specify these protocols and report the improvements through formal analysis. The protocols are specified in UPPAAL, i.e., a tool for the modeling, validation and verification of real-time systems.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_81-Formal_Specification_of_Memory_Coherence_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Initialization Method for Communication and Data Sharing in P2P Environment Between Wireless Sensor Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090880</link>
        <id>10.14569/IJACSA.2018.090880</id>
        <doi>10.14569/IJACSA.2018.090880</doi>
        <lastModDate>2018-09-01T08:50:10.5370000+00:00</lastModDate>
        
        <creator>M. Asif Jamal</creator>
        
        <creator>Aziz Ur Rehman</creator>
        
        <creator>Moonisa Ahsan</creator>
        
        <creator>M. S. Riaz</creator>
        
        <creator>M. S. Zafar</creator>
        
        <subject>TinyOS; peer-to-peer; motes; testbed; nesC; MICAz; MIB520; handshaking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Wireless Sensor Networks (WSNs) have gained noteworthy attention nowadays, in contrast to wired sensor systems, by introducing multi-functional wireless nodes that are smaller in size. However, WSN communication is prone to negative effects from the physical environment, such as physical obstacles and interference. The purpose of this work is to design a testbed that introduces a method for communication startup and data sharing in a peer-to-peer (P2P) environment between wireless sensor nodes. The work addresses both the IEEE 802.15.4 physical layer and the application layer. In this testbed, one channel from the IEEE 802.15.4 channel range is dedicated as an “emergency channel”, which is used for handshaking or in case of communication failure between the Transmitter (Tx) and Receiver (Rx) nodes. The remaining 15 channels are called “data channels” and are used for actual data transmission and control signals. The Linux-based TinyOS-2.x is used as the operating system for the low-power sensors. MICAz motes are used as nodes, and a MIB520 programming board is used for programming the motes and as a gateway.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_80-Initialization_Method_for_Communication_and_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimization of Information Asymmetry Interference using Partially Overlapping Channel Allocation in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090879</link>
        <id>10.14569/IJACSA.2018.090879</id>
        <doi>10.14569/IJACSA.2018.090879</doi>
        <lastModDate>2018-09-01T08:50:09.4430000+00:00</lastModDate>
        
        <creator>Sadiq Shah</creator>
        
        <creator>Khalid Saeed</creator>
        
        <creator>Mustafa Khan</creator>
        
        <creator>Rafi Ullah Khan</creator>
        
        <subject>Wireless Mesh Network (WMN); information asymmetry; Optimal Partially Overlapping Channel Assignment (OPOCA); NOC; Information Asymmetry Minimization (IAM) model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Wireless Mesh Network (WMN) is a developing technology that has a great impact on improving the performance, flexibility and reliability of traditional wireless networks. Using a multi-hop communication facility, these networks are installed as a solution to extend last-mile access to the Internet. WMNs have already been deployed, but they still face certain issues regarding channel assignment and interference. One of the well-known interference issues is Information Asymmetry (IA) interference, which results in an increased retransmission ratio and end-to-end delay, and thus decreases the overall network capacity of the WMN. Various studies have been done in the past to minimize information asymmetry interference using a limited number of orthogonal or non-overlapping channels, i.e., channels 1, 6 and 11 of IEEE 802.11b radio technology. Recent studies mention that partially overlapping channels (POCs) can be used to maximize network capacity. The purpose of this research is to minimize the Information Asymmetry (IA) interference problem by proposing a channel assignment model called Optimal Partially Overlapping Channel Assignment (OPOCA). In this research, a comparison has been made between OPOCA and the existing Information Asymmetry Minimization (IAM) model. Through extensive simulations it has been verified that the proposed OPOCA model gives 8% better results compared to the existing IAM model.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_79-Minimization_of_Information_Asymmetry_Interference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis, Visualization and Classification of Summarized News Articles: A Novel Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090878</link>
        <id>10.14569/IJACSA.2018.090878</id>
        <doi>10.14569/IJACSA.2018.090878</doi>
        <lastModDate>2018-09-01T08:50:08.8670000+00:00</lastModDate>
        
        <creator>Siddhaling Urologin</creator>
        
        <subject>Summarization; sentiment analysis; 3-D visualization; sentiment classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Due to advancements in technology, an enormous amount of data is generated every day. One of the main challenges of this large amount of data is that users are overloaded with huge volumes of information. Hence, effective methods are highly required to help users comprehend large amounts of data. This research work proposes effective methods to extract and represent the data. Summarization is applicable for obtaining a brief overview of a text, and sentiment analysis can computationally obtain the emotions expressed in the text. Combined text summarization and sentiment analysis is proposed on BBC news articles. A pronoun-replacement based text summarization method is developed, and the VADER sentiment analyzer is used to determine sentiment information. 3-D visualization schemes are provided to represent the sentiment information. Sentiment analysis and classification are performed on original BBC news articles as well as on summarized articles using classifiers such as Logistic Regression, Random Forest and AdaBoost. On the original news articles, the highest classification rate is 84.93%; using summarization ratios of 25%, 50% and 75%, the highest classification rates are 78.73%, 83.06% and 83.23%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_78-Sentiment_Analysis_Visualization_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis of Efficient MIMO Detection Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090877</link>
        <id>10.14569/IJACSA.2018.090877</id>
        <doi>10.14569/IJACSA.2018.090877</doi>
        <lastModDate>2018-09-01T08:50:08.2900000+00:00</lastModDate>
        
        <creator>Muhammad Faisal</creator>
        
        <creator>Fazal Wahab Karam</creator>
        
        <creator>Ali Zahir</creator>
        
        <creator>Sajid Bashir</creator>
        
        <subject>Multiple input multiple output antennas; MIMO detection approaches; performance analysis; semi-definite programming; zero forcing maximum likelihood</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Massive MIMO (multiple-input multiple-output) systems based on very large antenna arrays have become a hot topic in wireless communication systems. This paper assesses the performance of a quasi-optimal MIMO detection approach based on semi-definite programming (SDP). The study also investigates the gain obtained when using the SDP detector by comparing its Bit Error Rate (BER) performance with that of linear detectors. The near-optimal Zero Forcing Maximum Likelihood (ZFML) detector is also implemented and included in the comparison. The ZFML detector reduces exhaustive ML searching using the multi-step reduced constellation (MSRC) detection technique, efficiently combining linear processing with a local ML search. Complexity is bounded by keeping the search areas small, while performance is maximized by relaxing this constraint and increasing the cardinality of the search space. The near-optimality of SDP is analyzed through BER performance with different antenna configurations using a 16-QAM signal constellation operating in a flat fading channel. Simulation results indicate that the SDP detector achieves better BER performance, in addition to a significant decrease in computational complexity, across different system/antenna configurations.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_77-Comparative_Performance_Analysis_of_Efficient_MIMO_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Linear Time Varying Flatness-Based Control for Single-Input Single-Output Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090876</link>
        <id>10.14569/IJACSA.2018.090876</id>
        <doi>10.14569/IJACSA.2018.090876</doi>
        <lastModDate>2018-09-01T08:50:07.1970000+00:00</lastModDate>
        
        <creator>Marouen Sleimi</creator>
        
        <creator>Mohamed Ben Abdallah</creator>
        
        <creator>Mounir Ayadi</creator>
        
        <subject>Flatness theory; discrete-time systems; linear time varying; single-input single-output; dead-beat observer; two degree of freedom controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>In this paper, the control of linear discrete-time varying Single-Input Single-Output systems is tackled. By using flatness theory combined with a dead-beat observer, a two-degree-of-freedom controller is designed with high performance in terms of trajectory tracking. The aim of this work is to avoid the choice of closed-loop poles in the linear discrete-time varying framework, which poses a serious problem in system control. The effectiveness of this control law is highlighted by simulation results.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_76-Design_of_Linear_Time_Varying_Flatness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RASP-TMR: An Automatic and Fast Synthesizable Verilog Code Generator Tool for the Implementation and Evaluation of TMR Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090875</link>
        <id>10.14569/IJACSA.2018.090875</id>
        <doi>10.14569/IJACSA.2018.090875</doi>
        <lastModDate>2018-09-01T08:50:06.1830000+00:00</lastModDate>
        
        <creator>Abdul Rafay Khatri</creator>
        
        <creator>Ali Hayek</creator>
        
        <creator>Josef Borcsok</creator>
        
        <subject>Fault injection; fault tolerance; reliability; single event effects; triple modular redundancy; Verilog HDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Triple Modular Redundancy (TMR) is one of the best-known techniques for error masking and Single Event Effects (SEE) protection in FPGA designs. These FPGA designs are mostly expressed in hardware description languages such as Verilog and VHDL. The TMR technique involves triplicating the design module and adding a majority voter circuit for each output port. Building this triplication scheme is a non-trivial task and requires considerable time and effort to alter the design code. In this paper, the RASP-TMR tool is developed and presented; it takes a synthesizable Verilog design file as input, parses the design and triplicates it. The tool also generates a top-level module in which all three modules are instantiated, and finally adds the proposed majority voter circuit. The tool, with its graphical user interface, is implemented in MATLAB and is simple, fast and user-friendly. It generates a synthesizable design that allows the user to evaluate and verify the TMR design for FPGA-based systems. A simulation scenario is created using Xilinx ISE tools and the ISim simulator. Different fault models, such as bit-flip and stuck-at 1/0, are examined during the simulations. The results on various benchmark designs demonstrate that the tool produces synthesizable code and that the proposed majority voter logic perfectly masks the error/failure.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_75-RASP_TMR_An_Automatic_and_Fast_Synthesizable.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complex Shear Modulus Estimation using Integration of LMS/AHI Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090874</link>
        <id>10.14569/IJACSA.2018.090874</id>
        <doi>10.14569/IJACSA.2018.090874</doi>
        <lastModDate>2018-09-01T08:50:05.1070000+00:00</lastModDate>
        
        <creator>Quang-Hai Luong</creator>
        
        <creator>Manh-Cuong Nguyen</creator>
        
        <creator>TonThat-Long</creator>
        
        <creator>Duc-Tan Tran</creator>
        
        <subject>Shear wave; elasticity; viscosity; CSM estimation; least mean square; Algebraic Helmholtz Inversion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Elasticity and viscosity of tissues are two important parameters that can be used to investigate tissue structure, and in particular to detect tumors. Using a force excitation, the shear wave is acquired to extract its amplitude and phase. This information is then used, directly or indirectly, to compute the Complex Shear Modulus (CSM), which comprises elasticity and viscosity. Among these methods, the Algebraic Helmholtz Inversion (AHI) algorithm can be combined with the Finite Difference Time Domain (FDTD) model to estimate the CSM effectively. However, this algorithm is strongly affected by measurement noise when acquiring the particle velocity. We therefore propose an LMS/AHI algorithm that can correctly estimate the CSM. A simulation scenario is built to confirm the performance of the proposed LMS/AHI algorithm, which achieves an average error of 3.14%.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_74-Complex_Shear_Modulus_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Artificial Neural Networks for Detecting Damage on Tobacco Leaves Caused by Blue Mold</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090873</link>
        <id>10.14569/IJACSA.2018.090873</id>
        <doi>10.14569/IJACSA.2018.090873</doi>
        <lastModDate>2018-09-01T08:50:04.0000000+00:00</lastModDate>
        
        <creator>Himer Avila-George</creator>
        
        <creator>Topacio Valdez-Morones</creator>
        
        <creator>Humberto Pérez-Espinosa</creator>
        
        <creator>Brenda Acevedo-Juárez</creator>
        
        <creator>Wilson Castro</creator>
        
        <subject>Nicotiana tabacum L.; Peronospora tabacina Adam; image processing; artificial neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Worldwide, the monitoring of pests and diseases plays a fundamental role in agricultural sustainability, making the development of new tools for early pest detection necessary. In this sense, we present a software application for detecting damage to tobacco (Nicotiana tabacum L.) leaves caused by the blue mold fungus (Peronospora tabacina Adam). The application processes images of tobacco leaves using a pattern recognition technique known as an Artificial Neural Network. For the training and testing stages, a total of 40 images of tobacco leaves were used. The experiments carried out show that the developed model has an accuracy higher than 97% and that there is no significant difference with a visual analysis carried out by experts in the tobacco crop.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_73-Using_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Message Encryption Method based on Amino Acid Sequences and Genetic Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090872</link>
        <id>10.14569/IJACSA.2018.090872</id>
        <doi>10.14569/IJACSA.2018.090872</doi>
        <lastModDate>2018-09-01T08:50:02.9070000+00:00</lastModDate>
        
        <creator>Ahmed Mahdee Abdo</creator>
        
        <creator>Adel Sabry Essa</creator>
        
        <creator>Abdullah A. Abdullah</creator>
        
        <subject>Information; secure; encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>As the use of technology increases rapidly, the amount of shared, sent, and received information is increasing in the same way. As a result, techniques are needed to save and secure information over the network. Many methods have been used to protect information, such as information hiding and encryption. In this study, we propose a new encryption method making use of amino acid and DNA sequences. In addition, several criteria, including data size, key size and the probability of cracking, are used to evaluate the proposed method. The results show that, in terms of these evaluation criteria, the proposed method performs better than many common encryption methods such as RSA.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_72-A_New_Message_Encryption_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aspect-Combining Functions for Modular MapReduce Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090871</link>
        <id>10.14569/IJACSA.2018.090871</id>
        <doi>10.14569/IJACSA.2018.090871</doi>
        <lastModDate>2018-09-01T08:50:01.8000000+00:00</lastModDate>
        
        <creator>Cristian Vidal Silva</creator>
        
        <creator>Rodolfo Villarroel</creator>
        
        <creator>José Rubio</creator>
        
        <creator>Franklin Johnson</creator>
        
        <creator>Erika Madariaga</creator>
        
        <creator>Alberto Urzúa</creator>
        
        <creator>Luis Carter</creator>
        
        <creator>Camilo Campos-Valdés</creator>
        
        <creator>Xaviera A. López-Cortés</creator>
        
        <subject>Combining; Hadoop; MapReduce; AOP; AspectJ; aspects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>MapReduce is a programming framework for modular Big Data computation that uses a map function to identify and target intermediate data in the mapping phase, and a reduce function to summarize the output of the map function and produce a final result. Because the inputs of the reduce function depend on the output of the map function, MapReduce permits defining a combining function for local aggregation in the mapping phase, which decreases the communication traffic between the output of map functions and the input of reduce functions. However, Hadoop MapReduce solutions do not guarantee that the combining function is applied. Although proposals exist to guarantee the execution of the combining function, they break the modular nature of MapReduce solutions. Because Aspect-Oriented Programming (AOP) is a programming paradigm that aims at modular software production, this article proposes and applies the Aspect-Combining function, an AOP-based combining function, to obtain a modular MapReduce solution. The results of applying Aspect-Combining in Hadoop MapReduce experiments highlight computing performance and modularity improvements, with the execution of the combining function guaranteed by an AOP framework such as AspectJ as a mandatory requisite.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_71-Aspect_Combining_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of QEC Network based JAVA Application and Web based PHP Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090870</link>
        <id>10.14569/IJACSA.2018.090870</id>
        <doi>10.14569/IJACSA.2018.090870</doi>
        <lastModDate>2018-09-01T08:50:01.2370000+00:00</lastModDate>
        
        <creator>Sanaullah Memon</creator>
        
        <creator>Rasool Bux Palh</creator>
        
        <creator>Muniba Memon</creator>
        
        <creator>Hina Siddique Memon</creator>
        
        <subject>QEC; network based JAVA; web based PHP; server; apache Jmeter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Every organization wants to automate manual systems for moving and storing its data in a particular format. The QEC department of the university collects teacher-evaluation feedback from students manually, which makes it difficult to maintain teachers' records, is costly, and leaves little chance of generating accurate and optimized reports. A computerized system has been developed that generates accurate and optimized reports and makes it easy to maintain teachers' records. Many options are available to design and develop such an application using different programming languages. We have developed a network-based JAVA application and a web-based PHP application to automate the manual teacher-evaluation system. The GUI of the application contains 18 questions, as per HEC policy, which are answered by the students. After the answers are submitted to the server, an Excel report is ready to be generated. Our primary focus is to measure the server performance of the network-based JAVA application and the web-based PHP application. Both forms cover the same scenario, but we determine which is more suitable and beneficial for an organization in terms of server performance parameters such as average response time, throughput, standard deviation and data transfer rate.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_70-Performance_Comparison_of_QEC_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Hybrid Task Scheduling Algorithm with Fuzzy Logic Controller in Grid Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090869</link>
        <id>10.14569/IJACSA.2018.090869</id>
        <doi>10.14569/IJACSA.2018.090869</doi>
        <lastModDate>2018-09-01T08:50:00.1470000+00:00</lastModDate>
        
        <creator>Younes Hajoui</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Elhocein Illoussamen</creator>
        
        <subject>Distributed systems; computational problems; load balancing; Q-learning; ACO; fuzzy hybrid framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Distributed heterogeneous architectures are extensively applied to a diversity of large-scale research projects to solve complex computational problems. Such distributed systems consist of multiple heterogeneous linked processing units that handle continuously arriving jobs. The task scheduling problem is concerned with resource allocation strategies that assign jobs to available computing resources. Load balancing of the linked resources becomes a main issue when selecting the adequate computing resource in each scheduling step. Our proposal combines Q-learning with ACO (Ant Colony Optimization) to solve the task allocation dilemma. In our proposed Fuzzy Hybrid Framework, fuzzy ants are used to calculate new reward values at each scheduling operation, whereas Q-learning is used to select the suitable worker machine. The simulation findings confirm the efficiency of the proposed framework through a significant decrease in the makespan.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_69-New_Hybrid_Task_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learner Cognitive Behavior and Influencing Factors in Web-based Learning Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090868</link>
        <id>10.14569/IJACSA.2018.090868</id>
        <doi>10.14569/IJACSA.2018.090868</doi>
        <lastModDate>2018-09-01T08:49:58.9900000+00:00</lastModDate>
        
        <creator>Kalla Madhusudhana</creator>
        
        <subject>Learning environment; cognitive behavior; influencing factors; pedagogical; knowledge; curriculum content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>In educational institutions, information and communication technology has enabled the adoption of web-based learning approaches to improve student learning outcomes and performance. The traditional web-based learning environment in higher education aims to provide users with most of the learning content prescribed by the course curriculum. However, in modeling the course curriculum and content, the motivational factors through which the learner's cognitive skills develop have been left out. Therefore, this issue needs to be addressed in e-learning courses. It can be resolved by subsuming suitable learning objectives and appropriate skills-based interactive learning resources, which can enhance the thinking skills and cognitive behavior of the learner. This paper provides a theoretical framework on the pedagogical factors that can influence the quality of students' learning experience and cognitive learning skills in a web-based learning environment. Furthermore, this study discusses the role of prior knowledge and the learner's thought-process model in cognitive-based learning environments.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_68-Learner_Cognitive_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Chatbots: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090867</link>
        <id>10.14569/IJACSA.2018.090867</id>
        <doi>10.14569/IJACSA.2018.090867</doi>
        <lastModDate>2018-09-01T08:49:58.4000000+00:00</lastModDate>
        
        <creator>Sarah AlHumoud</creator>
        
        <creator>Asma Al Wazrah</creator>
        
        <creator>Wafa Aldamegh</creator>
        
        <subject>Artificial intelligence; Arabic chatbot; conversational agent; ArabChat; human-machine interaction; utterance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>A chatbot is a programmed entity that conducts human-like conversations between an artificial agent and humans. Such conversations have attracted the attention of researchers interested in human-machine interaction, with the goal of making the conversation more rational and hence passing the Turing test. Available research in the field of Arabic chatbots is comparably scarce. This paper presents a review of published Arabic chatbot studies to identify the knowledge gap and to highlight the areas that need more study and research. This study concludes that available research on Arabic chatbots is rare and that all available works are retrieval-based.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_67-Arabic_Chatbots_A_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Mutation Strategies in Bat Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090866</link>
        <id>10.14569/IJACSA.2018.090866</id>
        <doi>10.14569/IJACSA.2018.090866</doi>
        <lastModDate>2018-09-01T08:49:57.8200000+00:00</lastModDate>
        
        <creator>Waqas Haider Bangyal</creator>
        
        <creator>Jamil Ahmad</creator>
        
        <creator>Hafiz Tayyab Rauf</creator>
        
        <creator>Sobia Pervaiz</creator>
        
        <subject>Bat algorithm; optimization; local optima; mutation strategies; premature convergence; swarm intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The Bat algorithm (BA) is a population-based stochastic search technique inspired by the echolocation behaviour of bats searching for food. BA has been widely used to solve diverse kinds of optimization problems, and one of the major issues it faces is being frequently trapped in local optima while handling complex real-world problems. Many authors have improved the standard BA with different mutation strategies, but an exhaustive, comprehensive overview of these mutation strategies is still lacking. This paper aims to furnish a concise and comprehensive study of the problems and challenges that limit the performance of BA, and tries to provide guidelines for researchers active in the area of BA and its mutation strategies. The objective of this study is twofold: primarily, to present the improvements of BA based on mutation strategies, which may enhance the performance of standard BA to a great extent; and secondly, to motivate researchers and developers to use BA to solve complex real-world problems. This study presents a comprehensive survey of the various BA variants based on mutation strategies. It is anticipated that this survey will help researchers study the BA algorithm in detail.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_66-An_Overview_of_Mutation_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New E-Health Tool for Early Identification of Voice and Neurological Pathologies by Speech Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090865</link>
        <id>10.14569/IJACSA.2018.090865</id>
        <doi>10.14569/IJACSA.2018.090865</doi>
        <lastModDate>2018-09-01T08:49:57.2430000+00:00</lastModDate>
        
        <creator>Bouafif Lamia</creator>
        
        <creator>Ellouze Noureddine</creator>
        
        <subject>E-Health; voice disorder; HMM classification; feature extraction; MFCC; pathology recognition rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The objective of this study is to develop a non-invasive method for the early identification and classification of voice pathologies and neurological diseases by speech processing. We present a new automatic medical diagnosis tool that can assist specialists in their medical diagnosis. The developed strategy is based on speech acquisition from the patient, followed by audio feature extraction, training and recognition using the HTK toolkit. The computed parameters are compared to standard values from a codebook database. The experiments and tests are conducted using the MEEI pathological database of KEY Pentax. The obtained results give good discrimination, with a mean pathology recognition ratio of about 95%. Finally, this E-Health application is helpful for the prevention of specific diseases, improving the quality of patient care and reducing the costs of healthcare.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_65-A_New_E_Health_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Fuzzy Clustering Powered by Weighted Feature Matrix to Establish Hidden Semantics in Web Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090864</link>
        <id>10.14569/IJACSA.2018.090864</id>
        <doi>10.14569/IJACSA.2018.090864</doi>
        <lastModDate>2018-09-01T08:49:56.6830000+00:00</lastModDate>
        
        <creator>Pramod D Patil</creator>
        
        <creator>Parag Kulkarni</creator>
        
        <subject>Fuzzy; clustering; web document; feature matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Digital data is growing exponentially on the World Wide Web. Orthodox clustering algorithms have various challenges to tackle, of which the most often faced is uncertainty. Web documents have become heterogeneous and very complex, and multiple relations exist between one web document and others in the form of embedded links. This can be viewed as a one-to-many (1-M) relationship; for example, a particular web document may fit many cross domains, viz. politics, sports, utilities, technology, music, weather forecasting, links to e-commerce products, etc. Therefore, there is a need for efficient, effective and constructive context-driven clustering methods. Orthodox, well-established clustering algorithms classify the given data sets into exclusive clusters; that is, we can clearly state which cluster an object belongs to. But such a partition is not sufficient to represent real data. So, a fuzzy clustering method is presented that builds clusters with indeterminate limits and allows one object to belong to overlapping clusters with some membership degree. In other words, the crux of fuzzy clustering is to consider which clusters an object fits, as well as to what degree it belongs to each cluster. The aim of this study is to devise a fuzzy clustering algorithm which, with the help of a feature-weighted matrix, increases the probability of multi-domain overlapping of web documents, overlapping in the sense that one document may fall into multiple domains. The use of features gives an option, or a filter, on the basis of which data is extracted from the document. The matrix allows us to compute a threshold value, which in turn helps to calculate the clustering result.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_64-Using_Fuzzy_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information System Quality: Managers Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090863</link>
        <id>10.14569/IJACSA.2018.090863</id>
        <doi>10.14569/IJACSA.2018.090863</doi>
        <lastModDate>2018-09-01T08:49:56.1070000+00:00</lastModDate>
        
        <creator>Sarah Aouhassi</creator>
        
        <creator>Mostafa Hanoune</creator>
        
        <subject>Information system; quality; managers; measurement indicator; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>To evaluate Information System Quality (ISQ) quantitatively, a model was constructed from sub-models related to the five Information System (IS) components, namely Human Resources, Hardware, Software and Applications, Procedures and Data, and the perspectives of all IS players are considered: Managers, Technical Staff, Functional Staff and Users. This paper focuses on the survey designed for managers, first to form variable indicators from variable questions via appropriate formulas, and second to analyze data collected from IS managers of Moroccan universities. This approach allows precise diagnosis of the malfunctioning areas of ISQ by emphasizing the components with the lowest quality level. It also enables comparison of ISQ across different organizations by means of standardized values.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_63-Information_System_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Security in QoS Signaling in NGN: Registration Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090862</link>
        <id>10.14569/IJACSA.2018.090862</id>
        <doi>10.14569/IJACSA.2018.090862</doi>
        <lastModDate>2018-09-01T08:49:55.0130000+00:00</lastModDate>
        
        <creator>RAOUYANE Brahim</creator>
        
        <creator>BELMEKKI Elmostafa</creator>
        
        <creator>KHAIRI sara</creator>
        
        <creator>BELLAFKIH mostafa</creator>
        
        <subject>Quality of Service (QoS); Security; New Generation Network (NGN); IP Multimedia Subsystem (IMS); Session Initiation Protocol (SIP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>New Generation Networks (NGN) use an IP base to transmit their services, such as voice, video and other services. The IP Multimedia Subsystem (IMS), which represents the network core, controls access to various services through a set of signalling protocols, the most common of which is the Session Initiation Protocol (SIP). After securing the most vulnerable interfaces in the core of the NGN/IMS architecture, the idea is to improve QoS in SIP signalling, especially in authentication and registration, which represent the first step of access. The proposed approach uses asymmetric encryption in the SIP registration process and studies the performance of the system in terms of QoS parameters.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_62-Impact_of_Security_in_QoS_Signaling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role Term-Based Semantic Similarity Technique for Idea Plagiarism Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090861</link>
        <id>10.14569/IJACSA.2018.090861</id>
        <doi>10.14569/IJACSA.2018.090861</doi>
        <lastModDate>2018-09-01T08:49:54.4200000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <subject>Plagiarism detection; semantic similarity; semantic role; term frequency; idea</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Most text mining systems are based on statistical analysis of term frequency. The statistical analysis of term (phrase or word) frequency captures the importance of the term within a document, but the techniques proposed so far still need to be improved in their ability to detect plagiarized parts, especially in capturing the importance of a term within a sentence. Two terms can have the same frequency in their documents, yet one term contributes more to the meaning of its sentences than the other. In this paper, we aim to discriminate between terms that are important and unimportant to the meaning of sentences, in order to adapt this distinction for idea plagiarism detection. This paper introduces an idea plagiarism detection method based on the semantic meaning frequency of important terms in sentences. The suggested method analyses and compares text based on a semantic allocation for each term inside the sentence; Semantic Role Labeling (SRL) offers significant advantages when generating arguments for each sentence semantically. Promising experiments were conducted on the CS11 dataset, and the results revealed that the proposed technique&#39;s performance surpasses its recent peer methods of plagiarism detection in terms of Recall, Precision and F-measure.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_61-Role_Term_based_Semantic_Similarity_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Incremental Technique of Improving Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090860</link>
        <id>10.14569/IJACSA.2018.090860</id>
        <doi>10.14569/IJACSA.2018.090860</doi>
        <lastModDate>2018-09-01T08:49:53.8600000+00:00</lastModDate>
        
        <creator>Aasim Ali</creator>
        
        <creator>Arshad Hussain</creator>
        
        <subject>Statistical machine translation; incremental learning algorithm; English; Urdu</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Statistical machine translation (SMT) refers to using probabilistic methods to learn the translation process, primarily from parallel text. In SMT, linguistic information such as morphology and syntax can be added to the parallel text for improved results. However, adding such linguistic material is costly in terms of time and expert effort. Here, we introduce a technique that can learn better shapes (morphological processes) and more appropriate positioning (syntactic realization) of target words, without linguistic annotations. Our method improves the result iteratively over multiple passes of translation. Our experiments showed better translation accuracy, measured using a well-known scoring tool. There is no language-specific step in this technique.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_60-An_Incremental_Technique_of_Improving_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Neural Network based Weather Prediction using Back Propagation Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090859</link>
        <id>10.14569/IJACSA.2018.090859</id>
        <doi>10.14569/IJACSA.2018.090859</doi>
        <lastModDate>2018-09-01T08:49:53.2830000+00:00</lastModDate>
        
        <creator>Saboor Ahmad Kakar</creator>
        
        <creator>Naveed Sheikh</creator>
        
        <creator>Adnan Naseem</creator>
        
        <creator>Saleem Iqbal</creator>
        
        <creator>Abdul Rehman</creator>
        
        <creator>Aziz ullah Kakar</creator>
        
        <creator>Bilal Ahmad Kakar</creator>
        
        <creator>Hazrat Ali Kakar</creator>
        
        <creator>Bilal Khan</creator>
        
        <subject>Weather forecasting; artificial neural network; classification; prediction; backpropagation; hidden layers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Weather is a natural phenomenon that undergoes chaotic changes with the passage of time, and its forecasting has become an essential topic of research due to abrupt weather scenarios. As forecast data is nonlinear and follows irregular trends and patterns, many traditional techniques (in the literature, e.g. nonlinear statistics) have worked on the efficiency of models to make predictions better than previous models. However, the Artificial Neural Network (ANN) has evolved as a better way to improve accuracy and reliability. The ANN is one of the fastest growing machine learning techniques, used as a non-linear predictive model to classify weather and predict the maximum temperature forecast for all 365 days of the year. Therefore, a multi-layered neural network was designed and trained with the existing dataset, obtaining a relationship between the existing non-linear weather parameters. Eleven weather features were used to classify weather into four types. Furthermore, twenty training examples from 1997-2015 were used to predict the eleven weather features. The results revealed that by increasing the number of hidden layers, the trained neural network can classify and predict the weather variables with less error.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_59-Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Risk Management Tool: A Case Study of the Moodle Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090858</link>
        <id>10.14569/IJACSA.2018.090858</id>
        <doi>10.14569/IJACSA.2018.090858</doi>
        <lastModDate>2018-09-01T08:49:52.7030000+00:00</lastModDate>
        
        <creator>Nadia Chafiq</creator>
        
        <creator>Mohammed Talbi</creator>
        
        <creator>Mohamed Ghazouani</creator>
        
        <subject>Risk management; e-learning; mehari; platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>In recent years, the distinctive feature of our society has been the rapid pace of technological change. In the Moroccan context, universities have put digital learning at the heart of their development projects through a wide range of hybrid training devices, Small Private Online Courses (SPOC) and Massive Open Online Courses (MOOCs) via the Virtual Work Environment (ENT, Environnement Num&#233;rique de Travail). On the one hand, the purpose of using these devices is to help improve universities&#39; performance and enhance their attractiveness; on the other hand, it is aimed at meeting increasingly diverse student needs, thanks to infrastructure reorganization and a renovated pedagogy. However, the extensive use of information and communication technologies at universities exposes them to risks related to the information system (IS) in general and e-learning in particular. Risk assessment is complicated and multidimensional: it must take into account many components, including assets, threats, vulnerabilities, controls already in place, and analyses. In this work, we first present risk management methods; we then present the risk analysis related to the Moodle platform.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_58-Design_and_Implementation_of_a_Risk_Management_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defects Prediction and Prevention Approaches for Quality Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090857</link>
        <id>10.14569/IJACSA.2018.090857</id>
        <doi>10.14569/IJACSA.2018.090857</doi>
        <lastModDate>2018-09-01T08:49:52.1430000+00:00</lastModDate>
        
        <creator>Mashooque Ahmed Memon</creator>
        
        <creator>Mujeeb-Ur-Rhman Magsi Baloch</creator>
        
        <creator>Muniba Memon</creator>
        
        <creator>Syed Hyder Abbas Musavi</creator>
        
        <subject>Software; defects; predictions; preventions; software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The demand for distributed and complex business applications in the enterprise requires error-free, high-quality application systems. Unfortunately, most developed software contains defects which cause system failure. Such failures are unacceptable in the development of critical or sensitive applications, which makes the development of high-quality, defect-free software extremely important. It is important to better understand and compute the association between software defects and failures for the effective prediction and elimination of these defects, in order to reduce failures and improve software quality. This paper presents a review of software defect prediction and prevention approaches for quality software development. It also reviews the potential and constraints of those mechanisms in quality product development and maintenance.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_57-Defects_Prediction_and_Prevention_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Blockchain Technology Evolution Between Business Process Management (BPM) and Internet-of-Things (IoT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090856</link>
        <id>10.14569/IJACSA.2018.090856</id>
        <doi>10.14569/IJACSA.2018.090856</doi>
        <lastModDate>2018-09-01T08:49:51.0030000+00:00</lastModDate>
        
        <creator>Doaa Mohey El-Din M. Hussein</creator>
        
        <creator>Mohamed Hamed N. Taha</creator>
        
        <creator>Nour Eldeen M. Khalifa</creator>
        
        <subject>Blockchain; bitcoin; business process; cryptography; decentralization; consensus; applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The blockchain is considered the main mechanism behind the Bitcoin currency. A blockchain is a public ledger of public transactions stored in a chain. The properties of blockchain include decentralization through distributed blocks, stability, anonymity, and auditability. Blockchain can enhance network efficiency and improve network security. It can also be applied in several fields, such as financial and banking services, healthcare systems, and public services. However, research in this area is still open: a large number of technical challenges, for example the scalability problem and privacy leakage, prevent the wide application of blockchain. This paper presents a comprehensive study of blockchain technology and examines the research efforts in blockchain. It presents a proposed blockchain lifecycle which constitutes an evolution and a linking ring between business process management improvement and Internet-of-Things concepts. The paper then presents a practical proof of this relationship for the smart city, introducing a new algorithm and a proposed blockchain framework for 38 blocks (recognized as smart houses). Finally, future directions in the blockchain field are presented.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_56-A_Blockchain_Technology_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of the Decisional Needs Engineering Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090855</link>
        <id>10.14569/IJACSA.2018.090855</id>
        <doi>10.14569/IJACSA.2018.090855</doi>
        <lastModDate>2018-09-01T08:49:50.3970000+00:00</lastModDate>
        
        <creator>OUTFAROUIN Ahmad</creator>
        
        <creator>ZAHID Noureddine</creator>
        
        <creator>ABDALI Abdelmouna&#239;m</creator>
        
        <subject>Decisional information systems; decisional needs engineering; needs engineering approaches; goal; scenario; model of needs representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Requirements Engineering (RE) is an important phase in a systems development project. It helps design analysts to design and model the expression of end-user needs and expectations vis-a-vis their future system. This engineering studies two major issues: what the system should do, in order to have a complete needs specification, and why, reasoning on &quot;Why do we need to build this system?&quot; without looking at how to build it. The vast majority of needs engineering approaches are based on two concepts, scenario and goal, and there are generally three types of approaches: scenario-oriented approaches, goal-oriented approaches, and approaches driven by goals and scenarios at the same time. In the remainder of this paper, we present a comparative study of the three types of RE approaches, then models of needs representation, and finally our conclusions.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_55-A_Comparative_Study_of_the_Decisional_Needs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Formal Software Requirements Ambiguity Prevention Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090854</link>
        <id>10.14569/IJACSA.2018.090854</id>
        <doi>10.14569/IJACSA.2018.090854</doi>
        <lastModDate>2018-09-01T08:49:49.3030000+00:00</lastModDate>
        
        <creator>Rasha Alomari</creator>
        
        <creator>Hanan Elazhary</creator>
        
        <subject>Software requirements; requirements ambiguity; natural language ambiguity; ambiguity prevention; controlled languages; finite state machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The success of the software engineering process depends heavily on clear, unambiguous software requirements. Ambiguity refers to the possibility of understanding a requirement in more than one way. Unfortunately, ambiguity is an inherent property of the natural languages used to write software user requirements. This can lead to a faulty final system implementation, which is too expensive to correct. The basic requirements ambiguity resolution approaches in the literature are ambiguity detection, ambiguity avoidance, and ambiguity prevention. Ambiguity prevention is the least tackled approach because it requires designing formal languages and templates, which are hard to implement. The main goal of this paper is to provide a full implementation of an ambiguity prevention tool and then study its effectiveness using real requirements. Towards this goal, we developed a set of Finite State Machines (FSMs) implementing templates for various requirement types. We then used Python to implement the ambiguity prevention tool based on those FSMs. We also collected a benchmark of 2460 real requirements and selected a random set of forty of them to test the effectiveness of the developed tool. The experiment showed that the implemented ambiguity prevention tool can prevent critical requirements ambiguity issues such as missing information or domain ambiguity. Nevertheless, there is a tradeoff between ambiguity prevention and the effort needed to write the requirements using the imposed templates.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_54-Implementation_of_a_Formal_Software_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Information Security Policy based on Content Coverage and Online Presentation in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090853</link>
        <id>10.14569/IJACSA.2018.090853</id>
        <doi>10.14569/IJACSA.2018.090853</doi>
        <lastModDate>2018-09-01T08:49:48.2100000+00:00</lastModDate>
        
        <creator>Arash Ghazvini</creator>
        
        <creator>Zarina Shukur</creator>
        
        <creator>Zaihosnita Hood</creator>
        
        <subject>Information security policy; policy development; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Policies are high-level statements that are equivalent to organizational law and drive the decision-making process within the organization. An information security policy is not easy to develop unless the organization clearly identifies the steps required in the development process, particularly in institutions of higher education that rely heavily on IT. An inappropriate development process, or replication of security policy content from other organizations, can fail in execution. The execution of a duplicated policy can fail to act in accordance with enforceable rules and regulations even if it is well developed. Hence, organizations need to develop appropriate policies in compliance with their regulatory requirements. This paper aims to review policies from selected universities with regard to the ISO 27001:2013 minimum requirements as well as effective online presentation. The online presentation review covers the elements of aesthetics, navigation, and content presentation. The information security policy documents reside on the universities’ websites.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_53-Review_of_Information_Security_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skew Detection and Correction of Mushaf Al-Quran Script using Hough Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090852</link>
        <id>10.14569/IJACSA.2018.090852</id>
        <doi>10.14569/IJACSA.2018.090852</doi>
        <lastModDate>2018-09-01T08:49:47.6200000+00:00</lastModDate>
        
        <creator>Salem Saleh Bafjaish</creator>
        
        <creator>Mohd Sanusi Azmi</creator>
        
        <creator>Mohammed Nasser Al-Mhiqani</creator>
        
        <creator>Amirul Ramzani Radzid</creator>
        
        <creator>Hairulnizam Mahdin</creator>
        
        <subject>Skew detection; skew correction; Hough transform; preprocessing; binarization; image analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Document skew detection and correction is one of the basic preprocessing steps in document analysis. Correction of skewed scanned images is critical because it has a direct impact on image quality. In this paper, the authors propose a method for skew detection and correction of Mushaf Al-Quran image pages based on the Hough transform. The technique uses Hough transform line detection to calculate the skew angle. It works for different versions of Mushaf Al-Quran image pages with skewed text zones, and can detect and correct skew angles within a range of 20 degrees. Experiments conducted on different Mushaf Al-Quran image pages show the accuracy of the method.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_52-Skew_Detection_and_Correction_of_Mushaf_Al_Quran_Script.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation Method for Pathological Brain Tumor and Accurate Detection using MRI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090851</link>
        <id>10.14569/IJACSA.2018.090851</id>
        <doi>10.14569/IJACSA.2018.090851</doi>
        <lastModDate>2018-09-01T08:49:46.4800000+00:00</lastModDate>
        
        <creator>Khurram Ejaz</creator>
        
        <creator>Mohd Shafry Mohd Rahim</creator>
        
        <creator>Amjad Rehman</creator>
        
        <creator>Huma Chaudhry</creator>
        
        <creator>Tanzila Saba</creator>
        
        <creator>Anmol Ejaz</creator>
        
        <creator>Chaudhry Farhan Ej</creator>
        
        <subject>Brain tumor; level set; Hybrid Fuzzy K Mean (Hybrid FKM); Discrete Wavelet Transformation (DWT); Support Vector Machine (SVM); Magnetic Resonance Image (MRI); Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Image segmentation is a challenging task in the field of medical image processing. Magnetic resonance imaging helps doctors detect human brain tumors in three image views (axial, coronal, sagittal). MR images are noisy, and detecting the brain tumor location as a feature is complicated. Level set methods have been applied, but they are affected by human interaction; therefore, an appropriate contour is generated in discontinuous regions, and the pathological human brain tumor portion is highlighted after applying binarization and removing unessential objects, from which the contour is generated. To classify the tumor for segmentation, a hybrid Fuzzy K-Means-Self Organizing Map (FKM-SOM) is used to handle the variation of intensities. For improved segmentation accuracy, classification is performed: features are extracted using the Discrete Wavelet Transformation (DWT) and then reduced using Principal Component Analysis (PCA). Thirteen features from every image of the dataset have been classified using Support Vector Machine (SVM) kernels (RBF, linear, polynomial), and the results have been assessed using evaluation parameters such as F-score, precision, accuracy, specificity and recall.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_51-Segmentation_Methods_for_Pathological_Brain_Tumor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendations for Building Adaptive Cognition-based E-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090850</link>
        <id>10.14569/IJACSA.2018.090850</id>
        <doi>10.14569/IJACSA.2018.090850</doi>
        <lastModDate>2018-09-01T08:49:45.2930000+00:00</lastModDate>
        
        <creator>Mostafa Saleh</creator>
        
        <creator>Reda Mohamed Salama</creator>
        
        <subject>Adaptive e-learning; learning objects, learning styles; student models; open source LMS; Moodle; personalized teaching model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Adaptive e-learning systems try to adapt the learning material to the student’s preferences. Course authors should design their courses with their students’ styles in mind, course delivery should match the student’s style, and student assessment should also be adapted to each specific student’s learning style, while the student portfolio helps identify the student model. To the best of our knowledge, there is no clear recommendation for building community-wide adapted and personalized e-learning systems. This paper presents recommendations for adding adaptation and personalization to one of the most common open source Learning Management Systems (LMS), Moodle. The adaptation features are based on using learning styles, ontology, and the cognitive Bloom taxonomy in building and presenting the e-learning material (Learning Objects). This helps establish adaptable, cognition-based Learning Object repositories and course development centers.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_50-Recommendations_for_Building_Adaptive_Cognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Sab-Iomha for an Alpha Channel based Image Forgery Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090849</link>
        <id>10.14569/IJACSA.2018.090849</id>
        <doi>10.14569/IJACSA.2018.090849</doi>
        <lastModDate>2018-09-01T08:49:44.1870000+00:00</lastModDate>
        
        <creator>Muhammad Shahid Bhatti</creator>
        
        <creator>Syed Asad Hussain</creator>
        
        <creator>Abdul Qayyum</creator>
        
        <creator>Abdul Karim Shahid</creator>
        
        <creator>Muhammad Usman Akram</creator>
        
        <creator>Sajid Ibrahim Hashmi</creator>
        
        <subject>Digital images; tamper; steganography; metadata; forgery detection; cipher; image authentication; image validation; watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Digital images are a very popular way of transferring media. However, their integrity remains challenging because these images can easily be manipulated with software tools, and such manipulations cannot be verified by the naked eye. Although some techniques exist to validate digital images, in practice this is not a trivial task, as the existing approaches to forgery detection are not very effective. Therefore, there is a need for a simple and efficient solution to this challenge. Digital image steganography, on the other hand, is the concealing of a message within an image file; the secret message can be retrieved afterwards by the author to check the image file for its veracity. This paper proposes Sabiomha, an image forgery detection technique that makes use of image steganography. The proposed technique is also supported by a software tool to demonstrate its usefulness. Sabiomha works by inserting an invisible watermark into certain alpha bits of the image file. The watermark we use to steganograph an image is composed of a combination of text inputs the author can use to sign the image. Any attempt to tamper with the image would distort the sequence of the bits of the image pixels. Hence, the proposed technique can easily validate the originality of a digital image by exposing any tampering. The usability of our contribution is demonstrated using the software tool we developed to automate the proposed technique. The experiment we performed to further validate our technique suggested that Sabiomha can be flawlessly applied to image files.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_49-Using_Sab_Iomha_for_an_Alpha_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Impact of Usability in Arabic University Websites: Comparison between Saudi Arabia and the UK</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090848</link>
        <id>10.14569/IJACSA.2018.090848</id>
        <doi>10.14569/IJACSA.2018.090848</doi>
        <lastModDate>2018-09-01T08:49:43.0930000+00:00</lastModDate>
        
        <creator>Mohamed Benaida</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <creator>Ahmad Taleb</creator>
        
        <subject>Usability; usability evaluation; factor analysis; student satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Today, usability is a crucial factor that can affect any website. The purpose of this study is to explore major usability defects within Saudi university websites in comparison to British university websites from a Saudi student perspective. In addition, students are expected to achieve their goals when surfing a Saudi Arabian university website comfortably and efficiently, without any complications. This study uses two methods to evaluate and measure usability problems: user testing and thinking aloud. Both methods are very useful and effective for collecting data from participants. Based on the ranking of the universities, 60 students were split evenly into three groups; each group was asked to evaluate a different pair of university websites from different ranking levels, one from the UK and the other from KSA. The evaluation performed by each group was gathered using the SUS (System Usability Scale) questionnaire to find flaws in the usability of the websites. During the experiment, the participants&#8217; opinions were collected using the thinking-aloud method. The findings of this research showed that all Saudi universities in all tiers had significant problems with the usability of their websites. The most frequent problems found were inconsistency, integration, confidence, and satisfaction. Other, less frequent problems found during this study concerned design concepts, ease of use of the websites, and student comfort. Saudi universities can learn from the differences in quality between both sides to upgrade and redesign their websites to achieve user satisfaction, thereby increasing the confidence of their users.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_48-Evaluation_of_the_Impact_of_usability_in_Arabic_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Access Control Model for Modern Virtual e-Government Services: Saudi Arabian Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090847</link>
        <id>10.14569/IJACSA.2018.090847</id>
        <doi>10.14569/IJACSA.2018.090847</doi>
        <lastModDate>2018-09-01T08:49:42.5170000+00:00</lastModDate>
        
        <creator>Rand Albrahim</creator>
        
        <creator>Hessah Alsalamah</creator>
        
        <creator>Shada Alsalamah</creator>
        
        <creator>Mehmet Aksoy</creator>
        
        <subject>Access control; cloud infrastructure; data classification scheme; data exchange; e-government; fine-grained access; implementation framework; persistent control; XML security technologies; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>e-Government services require intensive information exchange and interconnection among governmental agencies to provide specialized online services and allow informed decision-making. This could compromise the integrity, confidentiality, and/or availability of the information being exchanged. Government agencies are accountable and liable for the protection of the information they possess and use, on a least-privilege security principle basis, even after dissemination. However, traditional access control models fall short of achieving this, as they do not allow dynamic access for users unknown to the system, do not provide security controls at a fine-grained level, and do not provide persistent control over this information. This paper proposes a novel secure access control model for cross-governmental agencies. The secure model deploys a Role-centric Mandatory Access Control (R-MAC) model, suggests a classification scheme for e-Government information, and enforces its application using XML security technologies. By using the proposed model, privacy can be preserved through dynamic, persistent, and fine-grained control over the shared information.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_47-Access_Control_Model_for_Modern_Virtual_e_Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Processing Sampled Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090846</link>
        <id>10.14569/IJACSA.2018.090846</id>
        <doi>10.14569/IJACSA.2018.090846</doi>
        <lastModDate>2018-09-01T08:49:41.4100000+00:00</lastModDate>
        
        <creator>Waleed Albattah</creator>
        
        <creator>Rehan Ullah Khan</creator>
        
        <subject>Deep learning; content analysis; machine learning; support vector machines; random forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Big data processing requires an extremely powerful and large computing setup. This creates a bottleneck not only in processing infrastructure, but also means many researchers do not have the freedom to analyze large datasets. This paper therefore analyzes the processing of large amounts of data with machine-learned models that are built on smaller sets of data samples. This work analyzes more than 40 GB of data by testing different strategies for reducing the processed data without losing or compromising detection and model learning in machine learning. Many alternatives are analyzed, and it is observed that a 50% reduction does not drastically harm machine learning model performance. On average, performance is reduced by only 3.6% for SVM and only 1.8% for Random Forest if only 50% of the data is used. The 50% reduction in instances means that, in most cases, the data will fit in RAM and processing times will be considerably reduced, benefitting execution times and/or resources. From the incremental training and testing experiments, it is found that, in special cases, smaller sub-sampled data can be used for model generation in machine learning problems. This is useful in cases where there are either limitations on hardware or one has to select among many available machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_46-Processing_Sampled_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Dynamic Topics of Interest across Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090845</link>
        <id>10.14569/IJACSA.2018.090845</id>
        <doi>10.14569/IJACSA.2018.090845</doi>
        <lastModDate>2018-09-01T08:49:40.8470000+00:00</lastModDate>
        
        <creator>Mohamed Salaheldin Aly</creator>
        
        <creator>Abeer Al Korany</creator>
        
        <subject>Information propagation; topic modelling; dynamic user modelling; user behavior; machine learning; topic classification; social networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Information propagation plays a significant role in online social networks; mining the latent information produced has become crucial to understanding how information is disseminated. It can be used for market prediction, rumor control, and opinion monitoring, among other things. Thus, in this paper, an information dissemination model based on dynamic individual interest is proposed. The basic idea of this model is to extract the effective topics of interest of each user over time and identify the most relevant topics with respect to seed users. A set of experiments on a real Twitter dataset showed that the proposed dynamic prediction model, which applies machine learning techniques, outperformed traditional models that rely only on words extracted from tweets.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_45-Identifying_Dynamic_Topics_of_Interest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced and Improved Hybrid Model to Prediction of User Awareness in Agriculture Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090844</link>
        <id>10.14569/IJACSA.2018.090844</id>
        <doi>10.14569/IJACSA.2018.090844</doi>
        <lastModDate>2018-09-01T08:49:39.7400000+00:00</lastModDate>
        
        <creator>A.V.S. Pavan Kumar</creator>
        
        <creator>Dr. R. Bhramaramba</creator>
        
        <subject>Agriculture products; e-agriculture; classification; clustering; ensemble model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Agriculture is the backbone of the Indian economy and the main source of income for most of the population in India, so farmers are always curious about yield prediction. Crop yield depends on various factors such as soil, weather, rain, fertilizers, and pesticides. These factors have different impacts on agriculture, which can be quantified using appropriate statistical methodologies. By applying such methodologies and techniques to the historical yield of crops, it is possible to obtain information or knowledge that can help farmers and government organizations make better decisions and policies, leading to increased production. A main drawback for Indian farmers is that they do not have proper knowledge of crop yield based on soil requirements. So, in this paper, we propose and develop an Improved Hybrid Model (a combination of a classification approach, i.e., Artificial Neural Networks, and a clustering approach, i.e., k-means, which works based on Euclidean distance) to provide awareness, usage, and prediction to each farmer by classifying different crop yield representations based on soil requirements. For this, we collected farmers&#8217; data from standard repositories such as http://www.tropmet.res.in/static_page.php?page_id=52#data and then used that data to provide awareness and other parameter sequences to all farmers in India. Our experimental results show efficient e-agriculture with respect to user awareness, usage, and prediction, measured in terms of precision, recall, and f-measure, for supporting real-time marketing of different agriculture products.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_44-Enhanced_and_Improved_Hybrid_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG-Based Emotion Recognition using 3D Convolutional Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090843</link>
        <id>10.14569/IJACSA.2018.090843</id>
        <doi>10.14569/IJACSA.2018.090843</doi>
        <lastModDate>2018-09-01T08:49:38.6630000+00:00</lastModDate>
        
        <creator>Elham S. Salama</creator>
        
        <creator>Reda A.El-Khoribi</creator>
        
        <creator>Mahmoud E.Shoman</creator>
        
        <creator>Mohamed A.Wahby Shalaby</creator>
        
        <subject>Electroencephalogram; emotion recognition; deep learning; 3D convolutional neural networks; data augmentation; single-label classification; multi-label classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Emotion recognition is a crucial problem in Human-Computer Interaction (HCI). Various techniques have been applied to enhance the robustness of emotion recognition systems using electroencephalogram (EEG) signals, especially for the problem of spatiotemporal feature learning. In this paper, a novel EEG-based emotion recognition approach is proposed. In this approach, the use of 3-Dimensional Convolutional Neural Networks (3D-CNN) is investigated on multi-channel EEG data for emotion recognition. A data augmentation phase is developed to enhance the performance of the proposed 3D-CNN approach, and a 3D data representation is formulated from the multi-channel EEG signals, which is used as the data input for the proposed 3D-CNN model. Extensive experimental work is conducted using the DEAP (Dataset for Emotion Analysis using EEG, Physiological and Video Signals) data. It is found that the proposed method achieves recognition accuracies of 87.44% and 88.49% for the valence and arousal classes respectively, outperforming state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_43-EEG_based_Emotion_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Artificial Intelligence Approaches to Categorise Lecture Notes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090842</link>
        <id>10.14569/IJACSA.2018.090842</id>
        <doi>10.14569/IJACSA.2018.090842</doi>
        <lastModDate>2018-09-01T08:49:37.5730000+00:00</lastModDate>
        
        <creator>Naushine Bibi Baijoo</creator>
        
        <creator>Khusboo Bharossa</creator>
        
        <creator>Somveer Kishnah</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>Classification; lecture materials; machine learning; support vector machines; decision trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Lecture materials cover a broad variety of documents, ranging from e-books, lecture notes, handouts, and research papers to lab reports, among others. Downloaded from the Internet, these documents generally go into the Downloads folder or other folders specified by the students. Over a certain period of time, the folders become so messy that it becomes quite difficult to find one&#8217;s way through them. Sometimes files downloaded from the Internet are saved without the certainty that they will be used or returned to in the future. Documents are scattered all over the computer system, making it very troublesome and time-consuming for the user to search for a particular file. Another issue that adds to the difficulty is improper naming conventions. Certain files bear names that are totally irrelevant to their contents. Therefore, the user has to open these documents one by one and go through them to know what the files are about. One solution to this problem is a file classifier. In this paper, a file classifier is used to organise the lecture materials into eight different categories, thus easing the tasks of the students and helping them organise the files and folders on their workstations. Modules, each containing about 25 files, were used in this study. Two machine learning techniques were used, namely decision trees and support vector machines. For most categories, it was found that decision trees outperformed SVM.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_42-Using_Artificial_Intelligence_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining Models Comparison for Diabetes Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090841</link>
        <id>10.14569/IJACSA.2018.090841</id>
        <doi>10.14569/IJACSA.2018.090841</doi>
        <lastModDate>2018-09-01T08:49:36.4970000+00:00</lastModDate>
        
        <creator>Amina Azrar</creator>
        
        <creator>Yasir Ali</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Khurram Zaheer</creator>
        
        <subject>Diabetes; data mining; classification; decision tree; Na&#239;ve Bayes; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Over the past few years, data mining has received a lot of attention for extracting information from large datasets to find patterns and establish relationships to solve problems. Well-known data mining techniques include classification, association, Na&#239;ve Bayes, clustering, and decision trees. In the field of medical science, these algorithms help to predict a disease at an early stage for future diagnosis. Diabetes mellitus is a fast-growing disease that needs to be predicted at an early stage, as it is a lifelong disease with no cure. This research is intended to compare different data mining algorithms on the PID dataset for early prediction of diabetes.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_41-Data_Mining_Models_Comparison_for_Diabetes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Safety and Performance Evaluation Method for Wearable Artificial Kidney Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090840</link>
        <id>10.14569/IJACSA.2018.090840</id>
        <doi>10.14569/IJACSA.2018.090840</doi>
        <lastModDate>2018-09-01T08:49:35.4030000+00:00</lastModDate>
        
        <creator>YeJi Ho</creator>
        
        <creator>SangHoon Park</creator>
        
        <creator>KyungMin Jo</creator>
        
        <creator>Barum Choi</creator>
        
        <creator>SangEun Park</creator>
        
        <creator>Jaesoon Choi</creator>
        
        <subject>Wearable artificial kidney; safety; hemodialysis; peritoneal dialysis; accelerometer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper focuses on international standards and guidelines related to evaluating the safety and performance of wearable dialysis systems and devices. The applicable standard and evaluation indices for safety and performance are determined, and the relevant international standards and guidelines are provided in a table. In addition, example experiments using a triaxial accelerometer and robot arm are presented for testing the endurance and safety of wearable artificial kidneys. The findings in this paper can be used to suggest new guidelines for the mechanical safety and performance evaluation of wearable artificial kidney systems.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_40-Safety_and_Performance_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Soft Error Tolerance in Memory Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090839</link>
        <id>10.14569/IJACSA.2018.090839</id>
        <doi>10.14569/IJACSA.2018.090839</doi>
        <lastModDate>2018-09-01T08:49:34.3730000+00:00</lastModDate>
        
        <creator>Muhammad Sheikh Sadi</creator>
        
        <creator>Md. Shamimur Rahman</creator>
        
        <creator>Shaheena Sultana</creator>
        
        <creator>Golam Mezbah Uddin</creator>
        
        <creator>Kazi Md. Bodrul Kabir</creator>
        
        <subject>Soft error tolerance; bit-per-byte; majority logic decodable codes; clustering; adjacent errors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper proposes a new method to detect and correct multi-bit errors in memory applications using a combination of a clustering approach, a Bit-Per-Byte error detection technique, and Majority Logic Decodable (MLD) codes. The likelihood of soft errors increases with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies, breakdown of memory reliability, and device shrinking. Memories are a sensitive part of a computer system, and soft errors in memories may cause an instruction to malfunction. Several techniques are already in practice to mitigate soft errors. Majority logic decodable codes have proved effective for memory applications because of their ability to correct a massive number of errors. Since memories hold a large number of bits, which is a restraint of the MLD code method, we emphasize the size of the data word in this method. The proposed method aims to detect and correct up to seven-bit errors with less computational time. It works efficiently in the case of adjacent errors, which is not possible with MLD codes alone. Experimental reviews show that the proposed approach outperforms the existing dominant approach with respect to the number of erroneous bits detected and corrected, and computational time overhead.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_39-Soft_Error_Tolerance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mapping Wheat Crop Phenology and the Yield using Machine Learning (ML)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090838</link>
        <id>10.14569/IJACSA.2018.090838</id>
        <doi>10.14569/IJACSA.2018.090838</doi>
        <lastModDate>2018-09-01T08:49:33.2970000+00:00</lastModDate>
        
        <creator>Muhammad Adnan</creator>
        
        <creator>Abaid-ur-Rehman</creator>
        
        <creator>M. Ahsan Latif</creator>
        
        <creator>Naseer Ahmad</creator>
        
        <creator>Maria Nazir</creator>
        
        <creator>Naheed Akhter</creator>
        
        <subject>RBNN; PCA; stepwise regression; attributes; yield</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Wheat has been a prime source of food for mankind for centuries. The final wheat grain yield is the result of the complex interaction among various yield attributes such as kernels per plant, spikes per plant, NSpt/s, Spike Dry Weight (SDW), etc. Different approaches have been followed to understand the non-linear relationship between these attributes and the yield, in order to manage the crop better in the context of precision agriculture. In this study, Principal Component Analysis (PCA) and stepwise regression were used to reduce the dimension of the original data to obtain the critical attributes under study. The reduced dataset is then modeled using a Radial Basis Neural Network (RBNN). The RBNN provides a regression value of more than 0.95, which indicates the strong dependence of the yield on the critical traits.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_38-Mapping_Wheat_Crop_Phenology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of a Smart Remote Patient Monitoring System based Heterogeneous WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090837</link>
        <id>10.14569/IJACSA.2018.090837</id>
        <doi>10.14569/IJACSA.2018.090837</doi>
        <lastModDate>2018-09-01T08:49:32.7200000+00:00</lastModDate>
        
        <creator>Mohamed EDDABBAH</creator>
        
        <creator>Mohamed MOUSSAOUI</creator>
        
        <creator>Yassin LAAZIZ</creator>
        
        <subject>WSN; body sensor networks; remote patient monitoring; e-health; SOA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper investigates the development of a remote patient monitoring system based on a WBAN (Wireless Body Sensor Network). The main purpose of this design is to interconnect heterogeneous sensor networks not equipped with the HTTP/TCP/UDP stack. A novel gateway architecture is proposed to ensure interoperability and facilitate seamless access to data from different types of body sensors that communicate via different technologies, namely Bluetooth, IEEE 802.15.4/Zigbee, and IEEE 802.15.6. Moreover, an application-layer approach for a Web Service Gateway is also developed for interaction with heterogeneous WSNs. The gateway communicates with the server via the SOAP protocol and manages service consumption. Since the proposed platform is targeted at monitoring patient health status, a preliminary link test between the sensor and the server is unavoidable in terms of quality of service. To evaluate the performance of the proposed platform, a comparison of results was conducted based on different communication scenarios (3G, ADSL, and local). The results illustrate the QoS constraints, namely latency, packet loss, and jitter.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_37-Performance_Evaluation_of_a_Smart_Remote.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Piezoelectric based Biosignal Transmission using Xbee</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090836</link>
        <id>10.14569/IJACSA.2018.090836</id>
        <doi>10.14569/IJACSA.2018.090836</doi>
        <lastModDate>2018-09-01T08:49:31.6270000+00:00</lastModDate>
        
        <creator>Mohammed Jalil</creator>
        
        <creator>Mohamed Al Hamadi</creator>
        
        <creator>Abdulla Saleh</creator>
        
        <creator>Omar Al Zaabi</creator>
        
        <creator>Soha Ahmed</creator>
        
        <creator>Walid Shakhatreh</creator>
        
        <creator>Mahmoud Al Ahmad</creator>
        
        <subject>Piezoelectric; XBee; medical sensors; vital signs; remote health monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper showcases the development of an innovative healthcare solution that allows patients to be monitored remotely. The system utilizes a piezoelectric sheet sensor and the XBee wireless communication protocol to collect and transmit the heartbeat pressure signal from the human subject&#8217;s neck to a receiving node. Then, using signal processing techniques, a set of important vital parameters such as heart rate and blood pressure is extracted from the received signal. These extracted parameters are needed to assess the human subject&#8217;s health continuously and in a timely manner. The architecture of our developed system, which enables wireless transmission of the raw acquired physiological signal, has three advantages over existing systems. First, it increases the user&#8217;s mobility, because we employed the XBee wireless communication protocol for signal transmission. Second, it increases the system&#8217;s usability, since the user has to carry only a single unit for signal acquisition while preprocessing is performed remotely. Third, it gives us more flexibility in acquiring various vital parameters with great accuracy, since processing is done remotely on powerful computers.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_36-Piezoelectric_based_Biosignal_Transmisson_Using_Xbee.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Consumption Evaluation of AODV and AOMDV Routing Protocols in Mobile Ad-Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090835</link>
        <id>10.14569/IJACSA.2018.090835</id>
        <doi>10.14569/IJACSA.2018.090835</doi>
        <lastModDate>2018-09-01T08:49:30.6000000+00:00</lastModDate>
        
        <creator>Fawaz Mahiuob Mohammed Mokbal</creator>
        
        <creator>Khalid Saeed</creator>
        
        <creator>Wang Dan</creator>
        
        <subject>MANETs; routing protocols; AODV; AOMDV; energy efficiency; routing performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Mobile Ad-hoc Networks (MANETs) are mobile, multi-hop wireless networks that can be set up anytime, anywhere, without the need for pre-existing infrastructure. Due to their dynamic topology, the main challenge in such networks is to design dynamic routing protocols that are efficient in terms of energy consumption and produce less overhead. The main emphasis of this research is on the prominent issues of MANETs, such as energy efficiency and scalability, along with some traditional metrics for performance evaluation. The two reactive routing protocols used in this research are single-path AODV and multi-path AOMDV. Extensive simulation has been done in the NS2 simulator, covering ten scenarios. The simulation results revealed that AOMDV outperforms AODV in terms of throughput, packet delivery fraction, and end-to-end delay. However, in terms of energy consumption and NRL, the AODV protocol performed better than AOMDV.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_35-Energy_Consumption_Evaluation_of_AODV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Measurement of Rare Plants Learning Media using Backward Chaining Integrated with Context-Input-Process-Product Evaluation Model based on Mobile Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090834</link>
        <id>10.14569/IJACSA.2018.090834</id>
        <doi>10.14569/IJACSA.2018.090834</doi>
        <lastModDate>2018-09-01T08:49:29.4770000+00:00</lastModDate>
        
        <creator>Nyoman Wijana</creator>
        
        <creator>Ni Nyoman Parmithi</creator>
        
        <creator>I Gede Astra Wesnawa</creator>
        
        <creator>I Made Ardana</creator>
        
        <creator>I Wayan Eka Mahendra</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Rare plants species; backward chaining; evaluation; CIPP; mobile technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This research aimed to determine the effectiveness of learning media for introducing rare plants in the Alas Kedaton tourism forest in Tabanan-Bali, based on backward chaining, for students and the general public. The research is both explorative and evaluative. The plant population in this study comprised the plant species in the Alas Kedaton tourism forest, and the human population comprised the entire community in its surrounding area. Plant species were sampled using the quadratic method, while human samples were selected by purposive sampling. The collected data were analyzed descriptively. The results indicate that, through the utilization of the learning media, 48 rare plant species from 26 families were identified in the Alas Kedaton tourism forest, along with the factors causing the scarcity of those species. Using the CIPP (Context-Input-Process-Product) evaluation model assisted by mobile technology, the overall average effectiveness of the learning media was 88.20%, which falls into the good category.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_34-The_Measurement_of_Rare_Plants_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Blended Learning in Teaching at the Higher Education Institutions of Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090833</link>
        <id>10.14569/IJACSA.2018.090833</id>
        <doi>10.14569/IJACSA.2018.090833</doi>
        <lastModDate>2018-09-01T08:49:28.3830000+00:00</lastModDate>
        
        <creator>Saira Soomro</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <creator>Tariq Bhatti</creator>
        
        <creator>Najma Imtiaz Ali</creator>
        
        <subject>Blended learning; teaching-learning; university teachers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Blended learning has emerged as one of the solutions to address the various needs of Higher Education Institutions around the world. Blended learning combines traditional classroom teaching with online activities, providing the advantages of both face-to-face learning and e-learning. The main purpose of this study is to assess the level of adoption of blended learning in the teaching process at Higher Education Institutions. The study used a mixed-method approach following an explanatory sequential design. Teachers of general public universities formed the sample, and questionnaires and interviews were used as data-gathering tools. The main findings showed that teachers have a positive perception of technology usage in the teaching process. Most of the teachers possessed expertise in the use of different software and were equipped with internet skills. The study concluded that universities are still at the awareness level of blended learning implementation, and considerable effort is required for effective implementation. It is recommended that university administrations provide additional computing infrastructure (e.g., servers, bandwidth, and storage capacity) to run courses in blended format, and that blended learning be well defined and highlighted in the universities’ strategic plans.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_33-Implementation_of_Blended_Learning_in_Teaching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact and Challenges of Requirements Management in Enterprise Resource Planning (ERP) via ERP Thesaurus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090832</link>
        <id>10.14569/IJACSA.2018.090832</id>
        <doi>10.14569/IJACSA.2018.090832</doi>
        <lastModDate>2018-09-01T08:49:27.8070000+00:00</lastModDate>
        
        <creator>Rahat Izhar</creator>
        
        <creator>Dr. Shahid Nazir Bhatti</creator>
        
        <creator>Saba Izhar</creator>
        
        <creator>Dr. Amr Mohsen Jadi</creator>
        
        <subject>Enterprise Resource Planning (ERP); Product Owner (PO); requirements elicitation; requirements management; change management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Managing requirements efficiently helps the system design team understand the existence and significance of each individual requirement. There are numerous requirements management practices that support decision making, but many fail to account for the factors that substantially influence requirements management in the context of ERP systems in particular. As highlighted comprehensively in the literature review section, requirements management failure is one of the pivotal causes of project failure. A prime shortcoming in software design and development is that, when it comes to requirements management, the most vital activity, thinking before acting, is often ignored, even though it should be the main step to save time, money and effort. Further questions arise when a business needs an ERP system and when requirements change or new requirements emerge: what obstacles are faced, and how are they overcome? ERP systems are becoming a necessity for industries, as many face problems such as data loss, difficulty for owners in fetching all the information when they need it, and slow, time-consuming accounting systems. This paper illustrates in detail the important traits and issues businesses may face when ERP is implemented, the problems encountered by requirements engineering teams and industries when requirements change or are not managed professionally, and how to resolve them.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_32-Impact_and_Challenges_of_Requirements_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of PMSG Controllers for Variable Wind Turbine Power Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090831</link>
        <id>10.14569/IJACSA.2018.090831</id>
        <doi>10.14569/IJACSA.2018.090831</doi>
        <lastModDate>2018-09-01T08:49:26.7130000+00:00</lastModDate>
        
        <creator>Asma Hammami</creator>
        
        <creator>Imen Saidi</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <subject>Wind turbine; internal model control; PI controller Permanent Magnet Synchronous Generator (PMSG); vector control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>With the large increase in wind power generation, the direct-driven Permanent Magnet Synchronous Generator (PMSG) is the most promising technology for variable-speed operation, and it also fulfills grid requirements with high efficiency. This paper studies and compares a conventional PI-based controller with a proposed control technique for a direct-driven PMSG wind turbine. The generator model is established in the Park synchronous rotating d-q reference frame. To achieve maximum power capture, the aeroturbine is controlled through Maximum Power Point Tracking (MPPT), while the PMSG is controlled through field orientation, in which the two current control loops are regulated. A direct-current-based d-q vector control scheme is designed by integrating an Internal Model Controller. An optimal control is then developed for integrated control of PMSG power optimization and Voltage Source Converter control. The system was designed using the SimWindFarm Matlab/Simulink toolbox to evaluate the performance of the conventional and proposed control techniques for the PMSG wind turbine. The analysis and simulation results prove the effectiveness and robustness of the proposed control strategy.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_31-Comparative_Study_of_PMSG_Controllers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Features and Potential Security Challenges for IoT Enabled Devices in Smart City Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090830</link>
        <id>10.14569/IJACSA.2018.090830</id>
        <doi>10.14569/IJACSA.2018.090830</doi>
        <lastModDate>2018-09-01T08:49:25.6230000+00:00</lastModDate>
        
        <creator>Gasim Alandjani</creator>
        
        <subject>IoT; privacy; smart city; smart society; actuators; sensors; industrial 4.0;5G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The introduction of the Internet of Things into our lives has brought drastic changes in social norms, working habits, ways of completing tasks and planning for the future. Data about our interactions with everyday objects can be effectively transmitted to their destinations by many communicating tags, which also often provide specific location information. The risk of potential eavesdropping is always a major concern of data owners. Since the Internet of Things is primarily responsible for carrying data of smart objects that are mostly connected over wireless technologies, securing the information carried by these wireless links to safeguard private information is of utmost importance. Cryptographic techniques to encipher data carried by IoT networks are one possibility, but they are often infeasible due to the lack of sufficient computing resources at the sensor end of IoT devices. In this paper, we discuss various security issues that haunt secure IoT deployments and propose a layered solution model that prevents breaches of security during data transmission.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_30-Features_and_Potential_Security_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method of Automatic Domain Extraction of Text to Facilitate Retrieval of Arabic Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090829</link>
        <id>10.14569/IJACSA.2018.090829</id>
        <doi>10.14569/IJACSA.2018.090829</doi>
        <lastModDate>2018-09-01T08:49:24.5300000+00:00</lastModDate>
        
        <creator>Mohammad Khaled A. Al-Maghasbeh</creator>
        
        <creator>Mohd Pouzi bin Hamzah</creator>
        
        <subject>Arabic information retrieval; text classification; Arabic text mining; Arabic language processing; text clustering; text classification; text categorization and classification algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Arabic content on the web has increased with the growth in the number of Arabic-speaking internet users worldwide. Accordingly, this study introduces an automatic approach to domain extraction for information retrieval from such content based on text classification. Text classification makes the search domain-specific, which facilitates the searching process. This paper discusses how to enhance information retrieval in Arabic documents by automatically classifying unlabelled Arabic text using text classification algorithms. The classification of documents and texts is an important field in computer science and information retrieval; it aims to enhance the retrieval process by identifying the search domain of retrieval systems.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_29-A_Method_of_Automatic_Domain_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Resource Consumption by Dynamic Clustering and Optimized Routes in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090828</link>
        <id>10.14569/IJACSA.2018.090828</id>
        <doi>10.14569/IJACSA.2018.090828</doi>
        <lastModDate>2018-09-01T08:49:23.4230000+00:00</lastModDate>
        
        <creator>Farzad Kiani</creator>
        
        <subject>Energy efficiency; data-driven; spanning tree; sleep/wake up mode; power management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Energy is an important parameter in wireless sensor networks and should be managed across different applications. We propose a new routing algorithm that is energy-efficient and combines several approaches: dynamic clustering, spanning trees, self-configurable routing, and control of energy consumption through data-driven and power-management schemes. It has two main phases: the first consists of establishing clusters, electing cluster heads and creating a spanning tree in each cluster, and the second is data transmission. The proposed protocol is compared with four other protocols in terms of network lifetime, network balance, average packet delay and packet delivery. Simulation results show that the proposed protocol's network lifetime is about 6 per cent higher than Improved-LEACH, 21.5 per cent higher than EESR and 5.8 per cent higher than DHCO. Its improvement in packet delivery is about 3.5 per cent over Improved-LEACH, 6.5 per cent over EESR and 3 per cent over DHCO. In addition, its packet delay performance is about 17 per cent better than EESR and 6 per cent better than DHCO, although Improved-LEACH outperforms our protocol by about 4 per cent.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_28-Efficient_Resource_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECG Abnormality Detection Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090827</link>
        <id>10.14569/IJACSA.2018.090827</id>
        <doi>10.14569/IJACSA.2018.090827</doi>
        <lastModDate>2018-09-01T08:49:22.8600000+00:00</lastModDate>
        
        <creator>Soha Ahmed</creator>
        
        <creator>Ali Hilal-Alnaqbi</creator>
        
        <creator>Mohamed Al Hemairy</creator>
        
        <creator>Mahmoud Al Ahmad</creator>
        
        <subject>Cross-correlation; abnormalities detection; electrocardiogram (ECG); cardiac cycle; eHealth; remote monitoring; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The monitoring and early detection of abnormalities in the cardiac cycle morphology have significant impact on the prevention of heart diseases and their associated complications. Electrocardiogram (ECG) is very effective in detecting irregularities of the heart muscle functionality. In this work, we investigate the detection of possible abnormalities in ECG signal and the identification of the corresponding heart disease in real-time using an efficient algorithm. The algorithm relies on cross-correlation theory to detect abnormalities in ECG signal. The algorithm incorporates two cross-correlations steps. The first step detects abnormality in a real-time ECG signal trace while the second step identifies the corresponding disease. The optimization of search-time is the main advantage of this algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_27-ECG_Abnormality_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Aware Resource Selection in IaaS Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090826</link>
        <id>10.14569/IJACSA.2018.090826</id>
        <doi>10.14569/IJACSA.2018.090826</doi>
        <lastModDate>2018-09-01T08:49:21.8300000+00:00</lastModDate>
        
        <creator>Uzma Bibi</creator>
        
        <subject>Cloud computing; pay-as-you-go; infrastructure as a service; cost aware resource selection; virtual machines; hypervisor; instance based approach; quantity based approach; single cloud; multi cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>One of the main challenges in cloud computing is coping with the selection of cost-efficient resources. Various cloud computing service providers dynamically provide resources to customers through different pricing policies. Given the different APIs and pricing policies of the service providers, it becomes difficult for customers to select the best service provider in terms of cost. In some cases, if the usage of the resources provided by a datacenter exceeds a certain limit, the provider cannot offer more resources to customers because new VMs cannot be created. Hence, even if the customer chooses the best provider based on least cost, there is still no guarantee that the provider can allocate complete resources to the customer. For this reason, I present a system architecture that selects the best service provider based on customer requirements, mainly cost. The proposed architecture also performs resource management by automatically provisioning new VMs from the available service providers in the inter-cloud. The proposed system is based on five clouds: Amazon EC2, CloudSigma, Google, GoGrid, and Windows Azure. An interface is designed for obtaining the user requirements, which are matched against the design database of the five cloud providers; based on the matched values, a catalog of optimal costs for each cloud is shown to the user. A Cost Aware Resource Selection algorithm is then run to determine the lowest optimal cost for the instance-based and quantity-based approaches. The algorithm handles two cloud domains: Single Cloud and Multi Cloud.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_26-Cost_Aware_Resource_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Model Conception Proposal for Adaptive Hypermedia Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090825</link>
        <id>10.14569/IJACSA.2018.090825</id>
        <doi>10.14569/IJACSA.2018.090825</doi>
        <lastModDate>2018-09-01T08:49:20.7400000+00:00</lastModDate>
        
        <creator>Mehdi TMIMI</creator>
        
        <creator>Mohamed BENSLIMANE</creator>
        
        <creator>Mohammed BERRADA</creator>
        
        <creator>Kamar OUAZZANI</creator>
        
        <subject>Adaptive hypermedia; artificial intelligence; deep learning; learner model; domain model; adaptation model; brain; neuron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The context of this article is to study and propose solutions for the major problems of adaptive hypermedia systems. The works and models proposed for these systems follow the tradition of first studying theories and rules, then modeling and designing a system that implements them. As a result, the adaptive hypermedia systems designed reflect and support only the elements and information studied during the design phase. These systems also require a huge amount of data to power their architecture before they can start operating; this well-known problem is called the “cold start” problem and remains a challenge to this day. In this paper, we propose an intelligent and flexible model, inspired by human nature, that offers a promising solution to these problems of adaptive hypermedia systems.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_25-Intelligent_Model_Conception_Proposal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Cloud Computing Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090824</link>
        <id>10.14569/IJACSA.2018.090824</id>
        <doi>10.14569/IJACSA.2018.090824</doi>
        <lastModDate>2018-09-01T08:49:19.6470000+00:00</lastModDate>
        
        <creator>Muhammad Sajjad</creator>
        
        <creator>Arshad Ali</creator>
        
        <creator>Ahmad Salman Khan</creator>
        
        <subject>Cloud computing; OpenStack; windows azure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Cloud computing is an emerging, rapidly growing information technology. However, measuring the performance of cloud-based applications in real environments is a challenging task for the research as well as the business community. In this work, we focus on the Infrastructure as a Service (IaaS) facility of cloud computing and evaluate the performance of two renowned public and private cloud platforms. Several performance metrics, such as integer and floating-point performance, GFLOPS, read, random read, write, random write, bandwidth, jitter and throughput, were used to analyze the performance of cloud resources. The motivation of this analysis is to help cloud providers adjust their data center parameters under different working conditions, and to help cloud customers monitor their hired resources. We analyzed and compared the performance of the OpenStack and Windows Azure platforms, considering resources like CPU, memory, disk and network in a real cloud setup. To evaluate each feature, we used related benchmarks: Geekbench &amp; LINPACK for CPU performance, RAMspeed &amp; STREAM for memory performance, IOzone for disk performance and Iperf for network performance. Our experimental results showed that the performance of both clouds is almost the same; however, OpenStack seems to be the better option compared to Windows Azure, keeping in view its cost as well as network performance.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_24-Performance_Evaluation_of_Cloud_Computing_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OpenSimulator based Multi-User Virtual World: A Framework for the Creation of Distant and Virtual Practical Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090823</link>
        <id>10.14569/IJACSA.2018.090823</id>
        <doi>10.14569/IJACSA.2018.090823</doi>
        <lastModDate>2018-09-01T08:49:18.5530000+00:00</lastModDate>
        
        <creator>MOURDI Youssef</creator>
        
        <creator>SADGAL Mohamed</creator>
        
        <creator>BERRADA FATHI Wafaa</creator>
        
        <creator>EL KABTANE Hamada</creator>
        
        <subject>E-Learning; multi-user virtual world; practical activities; OpenSimulator; virtual reality; virtual laboratories</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The exponential growth of technology has contributed to the positive revolution of distance learning. E-learning is increasingly used in the transfer of knowledge, where instructors can model and script their courses in several formats such as files, videos and quizzes. Practical activities are very important for completing these courses. Several instructors have joined Multi-User Virtual World (MUVW) communities such as Second Life, as they offer a degree of realism and interaction between users. However, the modeling and scenarization of practical activities in MUVWs remains a very difficult task, given the technologies these MUVWs use and the prerequisites they require. In this paper, we propose a framework for OpenSimulator MUVWs that simplifies the scenarization of practical activities using the OpenSpace3D software, without requiring designers to have expertise in programming or coding.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_23-Opensimulator_based_Multi_User_Virtual_World.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Communication Strategy for Multi-Agent Systems with Incremental Fuzzy Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090822</link>
        <id>10.14569/IJACSA.2018.090822</id>
        <doi>10.14569/IJACSA.2018.090822</doi>
        <lastModDate>2018-09-01T08:49:17.4800000+00:00</lastModDate>
        
        <creator>Sam Hamzeloo</creator>
        
        <creator>Mansoor Zolghadri Jahromi</creator>
        
        <subject>Multi-agent systems; decentralized partially observable Markov decision process; communication; planning under uncertainty; fuzzy inference systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Communication can guarantee coordinated behavior in multi-agent systems. However, in many real-world problems, communication may not be available at all times because of limited bandwidth, noisy environments or communication cost. In this paper, we introduce an algorithm to develop a communication strategy for cooperative multi-agent systems in which communication is limited. The method employs a fuzzy model to estimate the benefit of communication for each possible situation, which specifies the minimal communication necessary for successful joint behavior. An incremental method is also presented to create and tune the fuzzy model, reducing the high computational complexity of multi-agent systems. We use several standard benchmark problems to assess the performance of the proposed method. Experimental results show that the generated communication strategy performs as well as a full-communication strategy while the agents use little communication.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_22-Developing_Communication_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality Assurance for Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090821</link>
        <id>10.14569/IJACSA.2018.090821</id>
        <doi>10.14569/IJACSA.2018.090821</doi>
        <lastModDate>2018-09-01T08:49:16.9000000+00:00</lastModDate>
        
        <creator>Rakesh Kumar</creator>
        
        <creator>Birth Subhash</creator>
        
        <creator>Maria Fatima</creator>
        
        <creator>Waqas Mahmood</creator>
        
        <subject>Software Quality Assurance (SQA); data analytical softwares; data driven softwares; real time analytics; data analytics; quality issues; quality control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Quality Assurance is a technique for ensuring overall software quality, as recommended by global standards bodies like IEEE. Quality Assurance for data analytics requires more time and a very different set of skills, because software products used for data analytics differ from traditional ones. These products require more complex algorithms to operate, and ensuring their quality therefore demands more advanced techniques. According to our survey, data analytical software products require more work because of their more complex nature; one possible reason is the volume and variety of the data. This research also emphasizes the testing of data analytical software products, which raises many issues because such testing requires real data. In practice, however, the testing of these products is based on dummy data or simulations, and the products fail when they operate in real time. To make these products work well before and after deployment, we have to define certain quality standards, so that analytics software products produce better results.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_21-Quality_Assurance_for_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acoustic Classification using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090820</link>
        <id>10.14569/IJACSA.2018.090820</id>
        <doi>10.14569/IJACSA.2018.090820</doi>
        <lastModDate>2018-09-01T08:49:15.8100000+00:00</lastModDate>
        
        <creator>Muhammad Ahsan Aslam</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Usama Khalid</creator>
        
        <subject>Acoustics; deep learning; machine learning; neural networks; audio sounds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Acoustic classification is an important methodology for perceiving sounds from the environment. Machines in different settings, such as smartphones, software systems, and security systems, can be given a hearing capability. This can be implemented through conventional machine learning or deep learning models, which have revolutionized speech recognition and the understanding of general environmental sounds. This work focuses on acoustic classification and improves the performance of deep neural networks by using hybrid feature extraction methods. The study improves the efficiency of classification for feature extraction and prediction. We adopted a hybrid feature extraction scheme consisting of a DNN and a CNN. The results show a 12% improvement over previous results using the mixed feature extraction scheme.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_20-Acoustic_Classification_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Acceptance of Mobile Health Application User Interface Cultural-Based Design to Assist Arab Elderly Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090819</link>
        <id>10.14569/IJACSA.2018.090819</id>
        <doi>10.14569/IJACSA.2018.090819</doi>
        <lastModDate>2018-09-01T08:49:14.7170000+00:00</lastModDate>
        
        <creator>Ahmed Alsswey</creator>
        
        <creator>Irfan Naufal Bin Umar</creator>
        
        <creator>Brandford Bervell</creator>
        
        <subject>TAM; elderly users; mobile health applications; user interface; culture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Mobile health (m-health) applications offer a way to compensate for the non-availability of physical health services in the Arab world. However, end users of m-health around the world have cultural and personal differences that distinguish them from others. Studies suggest that culture is an essential component of the success of any product or technology. In view of this, the study investigated acceptance of a mobile health application User Interface (UI) designed for Arab elderly users based on their culture. The TAM model formed the theoretical basis for a quantitative design, with a questionnaire as the data collection instrument for 134 participants. The findings showed that Perceived Ease of Use (PEOU) and Attitude Towards Use (ATU) had a significant positive influence on Behavioural Intention (BI) to use the mobile health application UI. Overall, the results indicated that Arab elderly users found the UI acceptable due to its cultural-based design. To improve the designs of mobile application UIs targeting elderly users, it is vital to gain insight into the cultural aspects that influence the usability of m-health application UIs, as well as into users' personal characteristics and experiences.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_19-Investigating_the_Acceptance_of_Mobile_Health_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Simulated Evolution based Approach for Cluster Optimization in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090818</link>
        <id>10.14569/IJACSA.2018.090818</id>
        <doi>10.14569/IJACSA.2018.090818</doi>
        <lastModDate>2018-09-01T08:49:13.6100000+00:00</lastModDate>
        
        <creator>Abdulaziz Alsayyari</creator>
        
        <subject>Clustering algorithm; cluster optimization; network lifetime; simulated evolution; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Energy consumption minimization is crucial for the resource-constrained sensors in wireless sensor networks (WSNs). Partitioning a WSN into an optimal set of clusters is a promising technique for minimizing energy consumption and increasing network lifetime. However, finding such an optimal set of clusters is an NP-hard problem, and the time needed to solve it increases exponentially with the number of sensors. In this paper, the simulated evolution (SimE) algorithm is engineered to tackle cluster optimization in WSNs. A goodness measure is developed to assess the accuracy of assigning nodes to clusters and to evaluate the clustering quality of the overall network. SimE was designed so that the number of clusters and cluster heads adapts to the number of alive nodes in the network. Extensive simulation results demonstrate that SimE provides near-optimal clustering and improves network lifetime by about 21% compared to the traditional LEACH-C protocol.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_18-Adaptive_Simulated_Evolution_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Designing Scalable Microservice-based Application Systematically: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090817</link>
        <id>10.14569/IJACSA.2018.090817</id>
        <doi>10.14569/IJACSA.2018.090817</doi>
        <lastModDate>2018-09-01T08:49:12.5170000+00:00</lastModDate>
        
        <creator>Ahmad Tarmizi Abdul Ghani</creator>
        
        <creator>Mohamad Shanudin Zakaria</creator>
        
        <subject>Microservice; systematic method; scalable microservice design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Microservices are a new evolution of Service-Oriented Architecture (SOA) that is gaining momentum in both academia and industry. Their success began when giant companies such as Netflix adopted them as a service architecture for serving customers. The monolithic architecture Netflix used previously could no longer cope with business growth and was difficult to scale to meet user demand. Although Netflix has succeeded with a microservice architecture, no systematic method has been introduced for producing microservices. Academic studies on microservices are still at an early stage and have not yet reached maturity. A method is therefore needed to help organizations design microservices systematically and replicate the success achieved by Netflix. To form such a systematic method, existing approaches to building microservices were studied. Based on the Design Science Research method, two research artefacts were produced. The first artefact is a systematic microservice design method with four main steps. The second artefact is an instantiation that applies the proposed design method to a case study, namely MyFlix. The new method was then evaluated by obtaining expert opinions through demonstration and interviews. The expert assessment found that the proposed method can produce a systematic microservice design based on the six proposed principles and the four main steps. The method can also produce microservices with complete features, such as cohesion, loose coupling, distribution, and decentralization, which contribute to the production of scalable systems.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_17-Method_for_Designing_Scalable_Microservice_based_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Implementation of Computer based Test on BYOD and Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090816</link>
        <id>10.14569/IJACSA.2018.090816</id>
        <doi>10.14569/IJACSA.2018.090816</doi>
        <lastModDate>2018-09-01T08:49:11.9230000+00:00</lastModDate>
        
        <creator>Ridi Ferdiana</creator>
        
        <creator>Obert Hoseanto</creator>
        
        <subject>Computer-based test; cloud computing; Bring-Your-Own-Devices (BYOD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Computer-based testing promises several benefits, such as automatic grading, assessment features, and paper efficiency. However, beyond these benefits, an organization must prepare sufficient infrastructure, network connectivity, and user education. The problem grows when hundreds of users join a computer-based test. This article proposes a Bring-Your-Own-Device (BYOD) and cloud computing approach to accommodate hundreds of exam participants. Through an experiment with 393 students, the article identifies five central practices that can be used by organizations that want to implement large-scale computer-based testing.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_16-The_Implementation_of_Computer_Based_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Tor Encrypted Traffic Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090815</link>
        <id>10.14569/IJACSA.2018.090815</id>
        <doi>10.14569/IJACSA.2018.090815</doi>
        <lastModDate>2018-09-01T08:49:11.3470000+00:00</lastModDate>
        
        <creator>Mohamad Amar Irsyad Mohd Aminuddin</creator>
        
        <creator>Zarul Fitri Zaaba</creator>
        
        <creator>Manmeet Kaur Mahinderjit Singh</creator>
        
        <creator>Darshan Singh Mahinder Singh</creator>
        
        <subject>Encrypted traffic monitoring; Tor; machine learning; security; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Tor (The Onion Router) is an anonymity tool that is widely used worldwide. Tor protects its users’ privacy against surveillance and censorship using strong encryption and obfuscation techniques, which makes it extremely difficult to monitor and identify user activity on the Tor network. It also implements strong defenses to protect users against traffic feature extraction and website fingerprinting. However, this strong anonymity has also become a haven for criminals seeking to avoid network tracing. Therefore, considerable research has been performed on analyzing and classifying encrypted traffic using machine learning techniques. This paper presents a survey of existing approaches for the classification of Tor and other encrypted traffic. It begins with a preliminary discussion of machine learning approaches and the Tor network, followed by a comparison of the surveyed traffic classification approaches and a discussion of their classification properties.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_15-A_Survey_on_Tor_Encrypted_Traffic_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Cloud Computing Adoption Framework for Iraqi e-Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090814</link>
        <id>10.14569/IJACSA.2018.090814</id>
        <doi>10.14569/IJACSA.2018.090814</doi>
        <lastModDate>2018-09-01T08:49:10.7530000+00:00</lastModDate>
        
        <creator>Ban Salman Shukur</creator>
        
        <creator>Mohd Khanapi Abd Ghani</creator>
        
        <creator>M.A. Burhanuddin</creator>
        
        <subject>e-Government; cloud computing; framework; Iraq; Iraqi e-government</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper presents an analysis of the factors that could affect the adoption of cloud computing in the Iraqi e-government. A conceptual framework model for cloud computing within the Iraqi e-government is proposed, analyzed, evaluated, and discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_14-An_Analysis_of_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Hash Function Algorithms Against Attacks: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090813</link>
        <id>10.14569/IJACSA.2018.090813</id>
        <doi>10.14569/IJACSA.2018.090813</doi>
        <lastModDate>2018-09-01T08:49:09.6630000+00:00</lastModDate>
        
        <creator>Ali Maetouq</creator>
        
        <creator>Salwani Mohd Daud</creator>
        
        <creator>Noor Azurati Ahmad</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <creator>Nilam Nur Amir Sjarif</creator>
        
        <creator>Hafiza Abas</creator>
        
        <subject>Hash function algorithms; MD5; RIPEMD-160; SHA-1; SHA-2; SHA-3</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Hash functions are considered key components of nearly all cryptographic protocols, as well as of many security applications such as message authentication codes, data integrity, password storage, and random number generation. Many hash function algorithms have been proposed to ensure the authentication and integrity of data, including MD5, SHA-1, SHA-2, SHA-3, and RIPEMD. This paper provides an overview of these standard algorithms, with a focus on their limitations against common attacks. The study shows that these standard hash function algorithms suffer from collision attacks and time inefficiency. Other types of hash functions are also compared with the standard algorithms in terms of their resistance against common attacks, showing that these algorithms are still weak in resisting collision attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_13-Comparison_of_Hash_Function_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Return of e-Training (ROT) based on Communication Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090812</link>
        <id>10.14569/IJACSA.2018.090812</id>
        <doi>10.14569/IJACSA.2018.090812</doi>
        <lastModDate>2018-09-01T08:49:08.5870000+00:00</lastModDate>
        
        <creator>Fahad Alotaibi</creator>
        
        <subject>Return of e-Training (ROT); evaluation models; blackboard; e-learning; Key Performance Indicator (KPI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Persistent economic insecurity and harsh austerity measures across the world push businesses either to cut training costs or to be very careful in choosing a training program that delivers tangible outcomes in a short period of time. Nevertheless, in most cases businesses are still unable to estimate the Return of e-Training (ROT) in advance, which would allow better allocation of the training budget and the selection of a proper training plan in line with business policy. The purpose of this paper is to appraise the practical applicability and usability of the Adaptive ROT in enterprises, with particular regard to evaluating the impact of e-training in companies. A case study measuring the benefit of e-training in the Blackboard system has been conducted. The outcome of this study is judged to be positive, given the efficacy of the Adaptive ROT Evaluation Model for e-training in companies.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_12-Adaptive_Return_of_e_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>El Ni&#241;o / La Ni&#241;a Identification based on Takens Reconstruction Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090811</link>
        <id>10.14569/IJACSA.2018.090811</id>
        <doi>10.14569/IJACSA.2018.090811</doi>
        <lastModDate>2018-09-01T08:49:07.4800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kaname Seto</creator>
        
        <subject>Time series analysis; Takens; sea surface temperature (SST); Southern Oscillation Index (SOI); El Ni&#241;o-Southern Oscillation (ENSO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>An identification method for earth observation data exhibiting chaotic behavior, based on Takens’ reconstruction theory, is proposed. The proposed method is examined using observed time series of SST (Sea Surface Temperature) and SOI (Southern Oscillation Index) data. The experimental results show that the identification time of the proposed method is no later than that of the existing method. Using the definitions of the Japan Meteorological Agency and the corresponding equations, the proposed method can identify El Ni&#241;o / La Ni&#241;a at an earlier time. In other words, the proposed method does not require ten months of numerical values for identification; the time required for its identification judgment is about one month. The proposed method is not based on extrapolation with a numerical model or governing equation, but on interpolation using only actual observed time series.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_11-El_Ni&#241;o_La_Ni&#241;a_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Improvement of Web Proxy Cache Replacement using Intelligent Greedy-Dual Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090810</link>
        <id>10.14569/IJACSA.2018.090810</id>
        <doi>10.14569/IJACSA.2018.090810</doi>
        <lastModDate>2018-09-01T08:49:06.9000000+00:00</lastModDate>
        
        <creator>Waleed Ali</creator>
        
        <subject>Cache replacement; Greedy-Dual approaches; machine learning; proxy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>This paper reports on how intelligent Greedy-Dual approaches based on supervised machine learning were used to improve the web proxy caching performance. The proposed intelligent Greedy-Dual approaches predict the significant web objects’ demand for web proxy caching using Na&#239;ve Bayes (NB), decision tree (C4.5), or support vector machine (SVM) classifiers. Accordingly, the proposed intelligent Greedy-Dual approaches effectively make the cache replacement decision based on the trained classifiers. The trace-driven simulation results indicated that in terms of byte hit ratio and/or hit ratio, the performance of each of the conventional Greedy-Dual-Size-Frequency (GDSF) and Greedy-Dual-Size (GDS) was noticeably enhanced by applying the proposed Greedy-Dual approaches on five real datasets.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_10-Performance_Improvement_of_Web_Proxy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Scream Classification for Situation Understanding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090809</link>
        <id>10.14569/IJACSA.2018.090809</id>
        <doi>10.14569/IJACSA.2018.090809</doi>
        <lastModDate>2018-09-01T08:49:06.3230000+00:00</lastModDate>
        
        <creator>Saba Nazir</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Sheraz Malik</creator>
        
        <creator>Fatima Nazir</creator>
        
        <subject>Scream classification; scream detection; acoustic parameters; surveillance; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>In our living environment, non-speech audio signals provide significant evidence for situation awareness and complement the information obtained from video signals. Among non-speech audio events, screaming is one in which people such as security guards, caretakers, and family members are particularly interested for care and surveillance, because screams are automatically considered a sign of danger. In contrast to this view, this review specifically targets automated acoustic systems for the non-speech scream class, on the premise that screams can be further classified into various classes such as happiness, sadness, fear, and danger. Inspired by the prevalent scream detection and classification field, a taxonomy is projected to highlight the target applications, significant sound features, classification techniques, and their impact on classification problems over the last few decades. This review will assist researchers in retrieving the most appropriate scream detection and classification techniques, and the acoustic parameters for scream classification that can help in understanding the vocalization condition of the speaker.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_9-A_Review_on_Scream_Classification_for_Situation_Understanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Hyperspectral Imaging: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090808</link>
        <id>10.14569/IJACSA.2018.090808</id>
        <doi>10.14569/IJACSA.2018.090808</doi>
        <lastModDate>2018-09-01T08:49:05.2330000+00:00</lastModDate>
        
        <creator>Muhammad Mateen</creator>
        
        <creator>Junhao Wen</creator>
        
        <creator>Nasrullah</creator>
        
        <creator>Muhammad Azeem Akbar</creator>
        
        <subject>Deep learning; electromagnetic spectrum; hyperspectral imaging; imaging spectroscopy; multispectral imaging; remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Optical analysis techniques have recently been used to detect and identify objects from large-scale image collections, and hyperspectral imaging is one of them. Human vision is based on three basic color bands (red, green, and blue), whereas spectral imaging divides the vision into many more bands. Hyperspectral remote sensors acquire imagery data in the form of hundreds of adjoining spectral bands. In this paper, our purpose is to illustrate the fundamental concepts of hyperspectral remote sensing, remotely sensed information, methods for hyperspectral imaging, and applications based on hyperspectral imaging. Moreover, novel methods involving deep neural networks in the forensic context are elaborated. The presented overview can be useful for further research in the field of hyperspectral imaging using deep learning.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_8-The_Role_of_Hyperspectral_Imaging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Prediction of Disease Trends using Big Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090807</link>
        <id>10.14569/IJACSA.2018.090807</id>
        <doi>10.14569/IJACSA.2018.090807</doi>
        <lastModDate>2018-09-01T08:49:04.1570000+00:00</lastModDate>
        
        <creator>Diellza Nagavci</creator>
        
        <creator>Mentor Hamiti</creator>
        
        <creator>Besnik Selimi</creator>
        
        <subject>Big data; algorithms; data analytics; healthcare; disease prediction; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Big data technologies promise to have a transformative impact on healthcare, public health, and medical research, among other application areas. Several intelligent machine learning techniques have been designed and used to provide big data predictive analytics solutions for different illnesses. Nevertheless, no research has been published on the prediction of allergy and respiratory system diseases, and findings from different cases would be conducive to progress and further development in this area. One of the goals of this paper is to devise a systematic mapping study to explore and analyze existing research on disease prediction from healthcare information. Based on an investigation of research published from 2012 to today, we focus our research on studies that have been published around big data analytics. Given this high number of secondary studies, it is important to conduct a review and provide an overview of the research situation and current developments in this area.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_7-Review_of_Prediction_of_Disease_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Programming Technologies for the Development of Web-Based Platform for Digital Psychological Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090806</link>
        <id>10.14569/IJACSA.2018.090806</id>
        <doi>10.14569/IJACSA.2018.090806</doi>
        <lastModDate>2018-09-01T08:49:03.1270000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Pavel Kolyasnikov</creator>
        
        <creator>Vladimir Belov</creator>
        
        <creator>Ilya Zakharov</creator>
        
        <creator>Sergey Malykh</creator>
        
        <subject>Psychological research tools; web-based platform; choice of the tools and programming technologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>The choice of tools and programming technologies for creating information systems is a relevant problem. For every projected system, a number of criteria must be defined for the development environment and the libraries and technologies used. This paper describes the choice of technological solutions using the example of a web-based platform developed for the Russian Academy of Education. The platform provides information support for the activities of psychologists in their research (including population and longitudinal studies). The system has the following features: a large scale and a significant development time, requiring implementation and guaranteed computing reliability of a wide range of digital tools used in psychological research; operation in different environments when conducting mass research in schools with varying computing resources and communication channels; the possibility of scaling services; security and privacy of data; and the use of technologies and programming tools that ensure the compatibility and conversion of data with other tools for processing psychological research. Criteria that take into account the features of the functioning and life cycle of the software were introduced for the developed system, and a specific example shows the selection of appropriate technological solutions.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_6-Programming_Technologies_for_the_Development_of_Web_based_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location-based E-Commerce Services: (Re-) Designing using the ISO9126 Standard</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090805</link>
        <id>10.14569/IJACSA.2018.090805</id>
        <doi>10.14569/IJACSA.2018.090805</doi>
        <lastModDate>2018-09-01T08:49:02.5500000+00:00</lastModDate>
        
        <creator>Antonia Stefani</creator>
        
        <creator>Bill Vassiliadis</creator>
        
        <creator>Theofanis Efthimiades</creator>
        
        <subject>E-commerce; location based services; software quality; software design; ISO9126</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>E-commerce services based on user geographic location have emerged as a particularly important segment of modern information services. In these user-intensive applications, quality of service is important, and design methods increasingly rely on software standards to achieve quality. In this paper, we propose an evaluation model for location-based e-services that provides insights on how overall system quality can be strengthened by identifying the most important quality characteristics of specific user-system interaction facets. The model categorizes location-based services into taxonomies of components/functions, which are further analyzed in terms of interaction facets and significance levels. A further mapping to external qualitative sub-characteristics of the ISO9126 quality standard is used to formally decompose design quality into quality attributes. The view of software design through quality attributes is supported by a mathematical model, which calculates significance weights on service components, defined either by designers or by the end users. An experiment where this method is used to assess functionality is presented.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_5-Location_based_e_Commerce_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Event Choreography and Orchestration Techniques in Microservice Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090804</link>
        <id>10.14569/IJACSA.2018.090804</id>
        <doi>10.14569/IJACSA.2018.090804</doi>
        <lastModDate>2018-09-01T08:49:01.4570000+00:00</lastModDate>
        
        <creator>Chaitanya K. Rudrabhatla</creator>
        
        <subject>Microservice architecture; database per service pattern; Saga pattern; orchestration; event choreography; No-SQL database; 2 phase commit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Microservice Architecture (MSA) is an architectural design pattern which was introduced to solve the challenges involved in achieving horizontal scalability, high availability, modularity, and infrastructure agility for traditional monolithic applications. Though MSA comes with a large set of benefits, it is challenging to design isolated services using the independent Database per Service pattern. We observed that with each microservice having its own database, when transactions span multiple services, it becomes challenging to ensure data consistency across databases, particularly in the case of rollbacks. In monolithic applications using RDBMS databases, these distributed transactions and rollbacks can be handled efficiently using two-phase commit techniques. These techniques cannot be applied to isolated No-SQL databases in microservices. This research paper aims to address three things: 1) elucidate the challenges with distributed transactions and rollbacks in isolated No-SQL databases with dependent collections in MSA, 2) examine the application of event choreography and orchestration techniques for the Saga pattern implementation, and 3) present fact-based recommendations on Saga pattern implementations for these use cases.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_4-Comparison_of_Event_Choreography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Camera Convergence in Stereoscopic Video See-through Augmented Reality Displays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090803</link>
        <id>10.14569/IJACSA.2018.090803</id>
        <doi>10.14569/IJACSA.2018.090803</doi>
        <lastModDate>2018-09-01T08:49:00.4430000+00:00</lastModDate>
        
        <creator>Fabrizio Cutolo</creator>
        
        <creator>Vincenzo Ferrari</creator>
        
        <subject>Augmented reality and visualization; stereoscopic display; stereo overlap; video see-through</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user’s perception of the three-dimensional space. This paper seeks to put forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among the possible perceptual artefacts, the most significant one relates to diplopia arising from reduced stereo overlap and excessively large screen disparities. Two state-of-the-art solutions are reviewed. The first suggests a dynamic change, via software, of the virtual camera convergence, whereas the second suggests a matched hardware/software solution based on a series of predefined focus/vergence configurations. The potentialities and limits of both solutions are outlined so as to provide the AR community a yardstick for developing new stereoscopic video see-through systems suitable for different working distances.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_3-The_Role_of_Camera_Convergence_in_Stereoscopic_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Ironic Sentences in Twitter using Attention-Based LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090802</link>
        <id>10.14569/IJACSA.2018.090802</id>
        <doi>10.14569/IJACSA.2018.090802</doi>
        <lastModDate>2018-09-01T08:48:59.3330000+00:00</lastModDate>
        
        <creator>Andrianarisoa Tojo Martini</creator>
        
        <creator>Makhmudov Farrukh</creator>
        
        <creator>Hongwei Ge</creator>
        
        <subject>Irony detection; attention; attention mechanism; sentiment analysis; long-short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Analyzing written language is an interesting topic that has been studied by many disciplines. Recently, due to the explosive growth of the Internet, social media has become an attractive source for searching and gathering information in research on written communication. Different words in a sentence serve different purposes in conveying meaning and carry different degrees of significance. Therefore, this paper employs the attention mechanism to find the relative contribution, or significance, of every word in a sentence. In this work, we address the problem of detecting whether a tweet is ironic or not by using an Attention-Based Long Short-Term Memory Network. The results show that the proposed method achieves competitive performance on average recall and F1 score compared to state-of-the-art results.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_2-Recognition_of_Ironic_Sentences_in_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework Utilizing Machine Learning to Facilitate Gait Analysis as an Indicator of Vascular Dementia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090801</link>
        <id>10.14569/IJACSA.2018.090801</id>
        <doi>10.14569/IJACSA.2018.090801</doi>
        <lastModDate>2018-09-01T08:48:58.2100000+00:00</lastModDate>
        
        <creator>Arshia Khan</creator>
        
        <creator>Janna Madden</creator>
        
        <creator>Kristine Snyder</creator>
        
        <subject>Gait; machine learning; vascular dementia; early diagnosis; indicators; gait analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(8), 2018</description>
        <description>Vascular dementia (VD), the second most common type of dementia, affects approximately 13.9% of people over the age of 71 in the United States alone. 26% of individuals develop VD after being diagnosed with congestive heart failure. Memory and cognition are increasingly affected as dementia progresses. However, these are not the first symptoms to appear in some types of dementia. Alterations in gait and executive functioning have been associated with Vascular Cognitive Impairment (VCI). Research findings suggest that gait may be one of the earliest affected systems during the onset of VCI, immediately following a vascular episode. The diagnostic tools currently utilized for VD focus on memory impairment, which is only observed in the later stages of VD. Hence, we propose a framework that isolates gait and executive functioning analysis by applying machine learning to predict VD before cognition is affected, so that pharmacological treatments can be used to postpone the onset of cognitive impairment. Over time, we hope to develop algorithms that will not only identify but also predict vascular dementia.</description>
        <description>http://thesai.org/Downloads/Volume9No8/Paper_1-Framework_Utilizing_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bits and Bricks: Tangible Interactive Matrix for Real-time Computation and 3D Projection Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901160</link>
        <id>10.14569/SpecialIssue.2018.0901160</id>
        <doi>10.14569/SpecialIssue.2018.0901160</doi>
        <lastModDate>2018-08-20T10:10:24.8530000+00:00</lastModDate>
        
        <creator>Ira Winder</creator>
        
        <subject>Interactive displays; tangible user interface; projection mapping; computer vision; decision-support systems; collaboration; Lego</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The proliferation of projection mapping and computer vision techniques has made it possible to create a multiplicity of dynamic, illuminated environments that adapt to user intervention. This paper describes a unique system for an illuminated, machine-readable matrix of objects that performs real-time computation and dynamic projection mapping. Illuminated, tangible-interactive matrices have immediate applications as collaborative computation tools for users who want to leverage matrix-based mathematical modeling techniques within a friendly and accessible environment. The system is designed as an open-source kit of both off-the-shelf items (such as Lego) and components that are inexpensively fabricated with standard equipment (such as laser cutters). This paper outlines 1) a system of hardware and software for the tangible-interactive matrix, 2) case study applications of the tangible-interactive matrix in various disciplines such as urban planning and logistics, and 3) discussion of possible directions for future research and experimental design.</description>
        <description>http://thesai.org/Downloads/FTC2017/160_Paper_469-Bits_and_Bricks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Additive Manufacturing and Collaborative Learning for Pre-hospital Care Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901159</link>
        <id>10.14569/SpecialIssue.2018.0901159</id>
        <doi>10.14569/SpecialIssue.2018.0901159</doi>
        <lastModDate>2018-08-20T10:10:23.9170000+00:00</lastModDate>
        
        <creator>Cody Fell</creator>
        
        <creator>Annette Sobel</creator>
        
        <creator>Marc Ordonez</creator>
        
        <subject>Pre-hospital; experiential; additive manufacturing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A new initiative at Texas Tech University uses hybrid education and collaborative learning modules to integrate new designs that solve real-world medical challenge problems. Pre-hospital care encompasses everything from the point of injury in the field to the receiving definitive care facility, and accounts for 87% of combat fatalities. The environments span rural settings, wilderness, and extreme environments, both civilian and military, and provide a broad opportunity for innovation in technology to optimize outcomes. Our target audience is elementary schools, and our focus is STEM mentoring and critical problem solving. Through rapid prototyping of new concepts that enhance quality of life and ultimately accelerate to full functionality and sustainability in relatively austere environments, students gain an appreciation for the scientific process. We will demonstrate our process and a minimum of two devices that incorporate green energy technologies applicable to this interdisciplinary domain.</description>
        <description>http://thesai.org/Downloads/FTC2017/159_Paper_245-Additive_Manufacturing_and_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Amusing Tools for Teaching Lesser-Loved Languages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901158</link>
        <id>10.14569/SpecialIssue.2018.0901158</id>
        <doi>10.14569/SpecialIssue.2018.0901158</doi>
        <lastModDate>2018-08-20T10:10:23.2170000+00:00</lastModDate>
        
        <creator>Peter Juel Henrichsen</creator>
        
        <subject>Computer assisted language learning (CALL); dialogue systems; gaming</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>TALERUM is an interactive second-language learning environment, designed for use throughout the vast and sparsely populated West-Nordic area where Danish is taught as a foreign language (and, for historical reasons, not always a popular one). In a town-like environment, the pupil moves between shops, schools, cafes, and a home base where the session begins and the pupil is received as an exchange student by the parents of her ‘roomy’. Through user-initiated dialogues with the game characters, the pupil learns about her secret mission. Talerum thus uses elements of informal dialogue, game logic, and relatively deep semantic analysis to keep the attention of the language student of generation IT. The Talerum software is open source, and the application is portable to other language locales. An English version is in preparation.</description>
        <description>http://thesai.org/Downloads/FTC2017/158_Paper_207-Amusing_Tools_for_Teaching_Lesser-Loved_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm AI to Detect Deceit in Facial Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901157</link>
        <id>10.14569/SpecialIssue.2018.0901157</id>
        <doi>10.14569/SpecialIssue.2018.0901157</doi>
        <lastModDate>2018-08-20T10:10:22.2630000+00:00</lastModDate>
        
        <creator>Louis Rosenberg</creator>
        
        <creator>Niccolo Pescetelli</creator>
        
        <subject>Swarm intelligence; artificial swarm intelligence; collective intelligence; human swarming; artificial intelligence</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the natural world, many species amplify their intellectual abilities by working together in closed-loop systems. Known as Swarm Intelligence (SI), this process has been deeply studied in schools of fish, flocks of birds, and swarms of bees. The present research employs artificial swarming algorithms to create “human swarms” of online users and explores whether swarming can amplify a group’s ability to detect deceit. Researchers recruited 168 participants and divided them randomly into five online swarms, each comprising 30 to 35 members. Working alone and in networked groups, participants were tasked with evaluating a set of 20 video clips of smiling people. Each video clip depicted either 1) an authentic smile generated in response to a humorous cue, or 2) a deceitful smile generated falsely upon command. Across the population of 168 participants, the average individual incorrectly identified the deceitful smiles in 33% of the trials. When making evaluations as real-time swarms, the error rate dropped to 18% of trials. This large reduction in error rate suggests that by swarming, human groups can significantly amplify their ability to detect deceit in facial expressions. These results also suggest that swarming should be explored for use in amplifying other forms of social intelligence.</description>
        <description>http://thesai.org/Downloads/FTC2017/157_Paper_88-Swarm_AI_to_Detect_Deceit_in_Facial_Expressions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Use of the Qualtrics Offline App to Collect Comprehensive Student Performance Data for Candidate Assessment and Program Improvement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901156</link>
        <id>10.14569/SpecialIssue.2018.0901156</id>
        <doi>10.14569/SpecialIssue.2018.0901156</doi>
        <lastModDate>2018-08-20T10:10:21.5630000+00:00</lastModDate>
        
        <creator>Chris Boosalis</creator>
        
        <creator>Oddmund R. Myhre</creator>
        
        <subject>Qualtrics; Qualtrics App; evaluation; assessment; program assessment</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>New credentialing standards issued by the California Commission on Teacher Credentialing require that program administrators maintain a great deal of assessment data during fieldwork, including supervision activities, hours, training, and teaching performance expectation evaluations. These data, particularly the evaluation elements, are to be used for candidate assessment and program improvement. The challenge for programs is not only to archive these data points, but also to make the best possible use of the data. Program leadership must also document that university supervisors and cooperating teachers have received appropriate training prior to conducting visits or hosting students. The Qualtrics Offline App provides a simple way for all stakeholders involved in supervising fieldwork experiences to maintain compliance with these new requirements, while also supporting meaningful candidate assessment and data-informed program improvement.</description>
        <description>http://thesai.org/Downloads/FTC2017/156_Paper_2-A_Novel_Use_of_the_Qualtrics_Offline_App.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSI: A High-Efficient and Fault-Tolerant Interconnection for Resources-Pooling in Rack Scale</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901155</link>
        <id>10.14569/SpecialIssue.2018.0901155</id>
        <doi>10.14569/SpecialIssue.2018.0901155</doi>
        <lastModDate>2018-08-20T10:10:20.5930000+00:00</lastModDate>
        
        <creator>Mingche Lai</creator>
        
        <creator>Xiangxi Zou</creator>
        
        <creator>Shi Xu</creator>
        
        <creator>Jie Jian</creator>
        
        <creator>Xingyun Qi</creator>
        
        <creator>Jiaqing Xu</creator>
        
        <subject>Rack scale; resource pooling; interconnection; adaptive routing; fault-tolerant</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Data centers were originally composed of servers handling different services, and have since developed into rack-mounted computers and blade server systems. At the same time, the traffic between different servers is increasing, and the performance of the interconnection network between servers has a significant influence on the overall performance of the system. Against the background of resource pooling in recent data center development, this paper proposes a rack-scale interconnection network structure named RSI. According to the features of the interconnection structure, the routing tables are designed with two levels, which reduces the size of the tables and makes adaptive routing convenient to realize. Then a low-cost adaptive routing scheme called LAR, with a threshold to decide the choice of routing, is proposed. We present a deadlock prevention mechanism by which deadlock can be prevented in LAR with only 2 VCs. In addition, a fault tolerance algorithm is used to deal with potential failures. Compared to current interconnection topologies in a rack, the RSI hierarchical topology can support a larger scale of nodes with comparable performance. Finally, the evaluation results show that under extreme traffic, LAR achieves about 6 times the throughput of minimal routing, which performs well under uniform random traffic.</description>
        <description>http://thesai.org/Downloads/FTC2017/155_Paper_381-RSI_A_High-Efficient_and_Fault-Tolerant_Interconnection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simulation Framework for Decentralized Formation Control of Non-Holonomic Differential Drive Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901154</link>
        <id>10.14569/SpecialIssue.2018.0901154</id>
        <doi>10.14569/SpecialIssue.2018.0901154</doi>
        <lastModDate>2018-08-20T10:10:19.8930000+00:00</lastModDate>
        
        <creator>Muhammad Bilal Kadri</creator>
        
        <creator>Fahad Tanveer</creator>
        
        <subject>Decentralized control; cooperative control; formation control; obstacle avoidance; differential drive robots; artificial potential field; virtual structures; behavioral structures; non-holonomic; sliding mode control</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Decentralized cooperative control schemes are a prime research focus due to their resemblance to biological systems and their many advantages over centralized schemes. This paper presents a simulation framework for a decentralized cooperative control scheme for differential drive mobile robots, with a focus on formation control and obstacle avoidance. The framework employs a hierarchical three-layered model. The highest layer is responsible for defining intermediate waypoints, followed by a navigation layer and a trajectory tracking layer. The navigation layer employs virtual and behavioral structures along with artificial potential field functions, using non-linear systems theory to generate robot trajectories. Due to the non-holonomic nature of differential drive robots, a robust sliding mode controller is employed for trajectory tracking. Simulation results for each individual layer and for the integrated platform are presented, for formation control with obstacle avoidance in a practical scenario with reasonable assumptions. The simulation results validate the working of the proposed scheme.</description>
        <description>http://thesai.org/Downloads/FTC2017/154_Paper_299-A_Simulation_Framework_for_Decentralized_Formation_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Distance Transformation in the Processing Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901153</link>
        <id>10.14569/SpecialIssue.2018.0901153</id>
        <doi>10.14569/SpecialIssue.2018.0901153</doi>
        <lastModDate>2018-08-20T10:10:18.9270000+00:00</lastModDate>
        
        <creator>Sudhanshu K Semwal</creator>
        
        <creator>Rama Prasad Reddy Peddireddy</creator>
        
        <subject>City-block; Euclidean distance transformations; image processing; Processing language</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In this paper, we describe three different approaches for determining the distance map of a binary image. The algorithms that solve such problems are known as Distance Transforms. These algorithms operate on binary images but can be extended to accept any type of digital image if a conversion algorithm that converts a digital image into a binary digital image is executed prior to executing the Distance Transform algorithm. Therefore, we also examine how to transform any regular digital image into a binary image, that is, into a black-and-white image. A Distance Transform algorithm operates on a binary image consisting of featured pixels and non-featured pixels. It outputs a distance map or distance matrix in which each cell matches a pixel of the input image and contains a value indicating the distance to the nearest featured pixel. Distance Transforms represent a natural way to blur feature locations geometrically, and they enable other image effects such as skeletonizing, image matching, object recognition, path planning, and navigation. Five test cases are presented and the execution times of the three techniques are compared.</description>
        <description>http://thesai.org/Downloads/FTC2017/153_Paper_265-Implementation_of_Distance_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling the NLP Research Domain using Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901152</link>
        <id>10.14569/SpecialIssue.2018.0901152</id>
        <doi>10.14569/SpecialIssue.2018.0901152</doi>
        <lastModDate>2018-08-20T10:10:18.3630000+00:00</lastModDate>
        
        <creator>Randa Aljably</creator>
        
        <creator>Auhood Abdullah Alfaries</creator>
        
        <creator>Muna Saleh Al-Razgan</creator>
        
        <subject>NLP; ontologies style; knowledge domain; concepts</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Natural language processing is an active research field that will benefit greatly from a shared understanding of its concept terms. Modeling this research domain using an ontological semantic representation that defines and links its terms will benefit the research community. This paper presents an ontology model for NLP. This ontological model serves as a tool for analyzing and understanding the knowledge of the domain, and helps practitioners extract and aggregate information and then generate an explicit, formal, and reusable representation of the NLP concepts along with their interrelationships.</description>
        <description>http://thesai.org/Downloads/FTC2017/152_Paper_164-Modeling_the_NLP_Research_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Constraint-based Approach for Service Aggregation and Selection in Cloud E-Marketplaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901151</link>
        <id>10.14569/SpecialIssue.2018.0901151</id>
        <doi>10.14569/SpecialIssue.2018.0901151</doi>
        <lastModDate>2018-08-20T10:10:17.6630000+00:00</lastModDate>
        
        <creator>Azubuike Ezenwoke</creator>
        
        <creator>Olawande Daramola</creator>
        
        <creator>Matthew Adigun</creator>
        
        <subject>cloud computing; ecosystem; e-marketplace; feature model; constraint programming</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Service providers leverage cloud ecosystems and cloud e-marketplaces to increase the business value of their services and reach a wider range of service users. The operations of commercial e-marketplaces can be further enhanced by enabling service composition mechanisms that allow automatic aggregation of atomic services into composite offerings that meet complex user requirements. Existing approaches to cloud service selection are yet to achieve this. Currently, users are constrained to make choices only from a set of predefined atomic services, or at best to manually configure their desired features and QoS requirements in order to realize their complex requirements, provided that they have deep knowledge of the service domain. In this paper, a constraint-based approach for service composition and selection is proposed to address this problem. The proposed approach applies constraint-based automated reasoning on feature models to formally guide the aggregation of atomic services into composite services in order to satisfy complex requirements with minimal user involvement. The plausibility of the proposed approach is demonstrated via an illustrative customer relationship management (CRM) service ecosystem. The study offers a credible way to replicate, in cloud service e-marketplaces, the kind of user experience that is currently available on e-commerce platforms.</description>
        <description>http://thesai.org/Downloads/FTC2017/151_Paper_93-Towards_a_Constraint-based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Class Engagement Analyzer using Facial Feature Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901150</link>
        <id>10.14569/SpecialIssue.2018.0901150</id>
        <doi>10.14569/SpecialIssue.2018.0901150</doi>
        <lastModDate>2018-08-20T10:10:16.7100000+00:00</lastModDate>
        
        <creator>Roy Manseras</creator>
        
        <creator>Thelma D. Palaoag</creator>
        
        <creator>Alvin Malicdem</creator>
        
        <subject>Class engagement; affective computing; facial feature extraction; action units</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>An effective education system can be evaluated through its Input-Process-Output framework implementation. Quality instruction is one of the input component indicators, which includes student engagement as its binding measure. In the classroom environment, facial expressions are used by teachers to measure the affect state of the class. Incorporating technology in education helps students prepare for life-long learning. Emerging technologies such as Affective Computing are among today’s trends to improve quality instruction delivery by analyzing the affect states of students. This paper proposes a system for classifying student engagement using facial features. The conceptual framework of the study includes multiple face detection, facial action unit extraction and a classification model. Different algorithms were tested and compared to best configure the proposed predictive classification model. Varied test datasets were also used during experiments to gauge the accuracy and overall performance of this class engagement analyzer prototype.</description>
        <description>http://thesai.org/Downloads/FTC2017/150_Paper_156-Class_Engagement_Analyzer_using_Facial_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Survey Study on LPWA Networks: LoRa and NB-IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901149</link>
        <id>10.14569/SpecialIssue.2018.0901149</id>
        <doi>10.14569/SpecialIssue.2018.0901149</doi>
        <lastModDate>2018-08-20T10:10:15.9930000+00:00</lastModDate>
        
        <creator>Munguakonkwa Emmanuel Migabo</creator>
        
        <creator>Karim Djouani</creator>
        
        <creator>Anish Kurien</creator>
        
        <creator>Thomas Olwal</creator>
        
        <subject>Low Power Wide Area (LPWA); Long Range (LoRa); Narrowband IoT (NB-IoT); comparative; energy efficiency; quality of service (QoS)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>According to recent studies, billions of objects are expected to be connected wirelessly by 2020. Low Power Wide Area (LPWA) networks are already playing an important role in connecting the billions of new devices making up the IoT. Long Range (LoRa) and Narrowband-IoT (NB-IoT) are currently the two leading technologies triggering considerable research interest. This paper provides a comprehensive comparative survey of the current research and industrial states of NB-IoT and LoRa in terms of power efficiency, capacity, quality of service (QoS), reliability and range of coverage. The outcome of this survey demonstrates that the unlicensed LoRa is more advantageous than NB-IoT in terms of energy efficiency, capacity and cost, while NB-IoT offers benefits in terms of reliability, resilience to interference, latency and QoS. It is further shown that, despite the considerable research and development that has so far been carried out on existing LPWA technologies, there are still challenges to be addressed. This paper therefore proposes potential future research directions to address the identified challenges.</description>
        <description>http://thesai.org/Downloads/FTC2017/149_Paper_340-A_Comparative_Survey_Study_on_LPWA_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Analysis of 360 Degree Surround Photos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901148</link>
        <id>10.14569/SpecialIssue.2018.0901148</id>
        <doi>10.14569/SpecialIssue.2018.0901148</doi>
        <lastModDate>2018-08-20T10:10:15.0100000+00:00</lastModDate>
        
        <creator>Madhawa Vidanapathirana</creator>
        
        <creator>Lakmal Buddika Meegahapola</creator>
        
        <creator>Indika Perera</creator>
        
        <subject>360 photography; image processing; photosphere; cognition; 360 degree surround photography</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>360 degree surround photographs, or photospheres, have taken the world by storm as the new medium for content creation, providing viewers a rich, immersive experience compared to conventional photography. With the emergence of Virtual Reality as a mainstream trend, 360 degree photography is increasingly important in offering a practical approach for the general public to capture virtual-reality-ready content from their mobile phones without explicit tool support or knowledge. Even though the amount of 360 degree surround content being uploaded to the Internet continues to grow, there is no proper way to index it or to process it for further information. This is because of the difficulty of image processing on photospheres due to the distorted nature of the embedded objects, a challenge that lies in the way 360 degree panoramic photospheres are saved. This paper presents a unique and innovative technique named Photosphere to Cognition Engine (P2CE), which allows cognitive analysis of 360 degree surround photos using existing image cognitive analysis algorithms and APIs designed for conventional photos. We have optimized the system using a wide variety of indoor and outdoor samples and extensive evaluation approaches. On average, P2CE provides up to 100% growth in accuracy on image cognitive analysis of photospheres over direct use of conventional non-photosphere-based Image Cognition Systems.</description>
        <description>http://thesai.org/Downloads/FTC2017/148_Paper_322-Cognitive_Analysis_of_360_Degree_Surround_Photos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Service for Professional Predictive Learning of Skills based on the Patent Analysis of Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901147</link>
        <id>10.14569/SpecialIssue.2018.0901147</id>
        <doi>10.14569/SpecialIssue.2018.0901147</doi>
        <lastModDate>2018-08-20T10:10:14.2130000+00:00</lastModDate>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Gregory Bubnov</creator>
        
        <creator>Egor Mateshuk</creator>
        
        <subject>Online social networks; social networking sites; technology life cycle; predictive learning; patent activity analysis; professional skills; topic detection; LinkedIn; ResearchGate; labor market; scalable services; decision making support; Node.JS</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the high-tech and intensively developing sectors of the economy, including the IT industry, the composition of the professional skills and competencies necessary for the successful work of specialists is changing rapidly. This is due to the dynamics of the composition and capabilities of key and applied technologies. Experts working in the IT industry must constantly monitor changes in approaches to building architectures and in the computational/software systems themselves. Under such conditions, an information-computing service that can automatically identify trends in the development of technologies and the corresponding professional skills is important. Taking these trends into account, it is possible to purposefully improve the skills of specialists, adjust the curricula of universities and centers for professional advancement, etc. As the basis for identifying trends in technology development, the authors of the article propose to use information from international patent databases. The reason is that large IT companies file many patents before releasing a new high-tech solution (hardware or software) to the market. The subsequent use of these patents in products leads to changes in the composition of the professional skills of specialists in demand in the labor market. The article analyzes the current demand for IT professionals by employers in the labor market through queries in the relevant information systems. The possibilities of using professional social networking sites for recruiting personnel by organizations, and for job searching by specialists, are considered. A prototype of the predictive learning service for a professional social networking site was developed. It provides the following: monitoring of the demand for professional skills in the labor market; and the analysis of patents on technologies underlying each of the existing and projected professional skills.
The developed service will allow determining the levels of demand for professional skills; actualizing and improving job seekers’ professional skills; organizing professional social networking sites; and forming personal training programs on promising technologies.</description>
        <description>http://thesai.org/Downloads/FTC2017/147_Paper_100-Service_for_Professional_Predictive_Learning_of_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Disaster Relief with Satellite based Synthetic Aperture Radar: SAR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901146</link>
        <id>10.14569/SpecialIssue.2018.0901146</id>
        <doi>10.14569/SpecialIssue.2018.0901146</doi>
        <lastModDate>2018-08-20T10:10:13.2630000+00:00</lastModDate>
        
        <creator>Shogo Kajiki</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>Synthetic Aperture Radar (SAR); interferometry; disaster relief; earthquake; water flooding</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Disaster relief with satellite-based Synthetic Aperture Radar (SAR) is conducted. SAR can be used for disaster relief. Interferometric SAR (InSAR) allows elevation estimation; therefore, it is applicable to estimating seismic changes and elevation changes due to earthquakes, landslides, slope collapses, etc. Experiments were conducted for the earthquake disaster that occurred in Kumamoto, Japan, and for the river flooding due to typhoon heavy rain that occurred in Oita, Japan. Sentinel-1 SAR imagery data show an enormous potential for disaster relief.</description>
        <description>http://thesai.org/Downloads/FTC2017/146_Paper_521-Disaster_Relief_with_Satellite_based_Synthetic_Aperture_Radar_SAR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval using Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901145</link>
        <id>10.14569/SpecialIssue.2018.0901145</id>
        <doi>10.14569/SpecialIssue.2018.0901145</doi>
        <lastModDate>2018-08-20T10:10:12.6700000+00:00</lastModDate>
        
        <creator>Daniel Valdes-Amaro</creator>
        
        <creator>Carlos Guillen-Galvan</creator>
        
        <subject>Content-based image retrieval; interest point detection algorithms; graph theory; Delaunay triangulations</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Content-Based Image Retrieval has been a fast-advancing area since the 1990s, with the growth of the Internet and the available technology providing easy access to images. Hence, image data needs to be organized so that image database queries can be answered as fast as possible, even though many possible topics are available nowadays. In this paper, we introduce a novel technique based on global image features obtained by interest point detection, using graph theory, particularly Delaunay triangulations, to obtain a graph that can be measured for comparison. The technique shows promising results and can be regarded as flexible in the sense that parts can be adapted or upgraded to achieve better performance.</description>
        <description>http://thesai.org/Downloads/FTC2017/145_Paper_486-Image_Retrieval_using_Graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart System to Prevent Child Vehicular Heatstroke</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901144</link>
        <id>10.14569/SpecialIssue.2018.0901144</id>
        <doi>10.14569/SpecialIssue.2018.0901144</doi>
        <lastModDate>2018-08-20T10:10:11.8600000+00:00</lastModDate>
        
        <creator>Dylan Howell</creator>
        
        <creator>Sara Talley</creator>
        
        <creator>Samwoo Seong</creator>
        
        <subject>Vehicular heatstroke; Bluetooth; microcontroller</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This project is a microcontroller based system to prevent children from being left unattended in a hot vehicle. Some products claiming to help prevent vehicular heatstroke are available, but they are marketed to end users rather than to automotive OEMs. The undergraduate engineering students created a system to raise the parent’s awareness and trigger multiple alarms to prevent this situation from occurring. The circuit reads the voltage from the car battery/alternator to see if it is running and uses pressure sensors to detect if a child is in the car-seat. The system sounds an alarm through a local speaker and uses a Bluetooth connection to a smartphone to give a secondary alert in case the parent leaves the child unattended. The Bluetooth connection is not limited to phones, and could be easily integrated directly into the car’s onboard computer with the help of OEMs.</description>
        <description>http://thesai.org/Downloads/FTC2017/144_Paper_296-Smart_System_to_Prevent_Child_Vehicular_Heatstroke.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Gaussian Word Embeddings for Document Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901143</link>
        <id>10.14569/SpecialIssue.2018.0901143</id>
        <doi>10.14569/SpecialIssue.2018.0901143</doi>
        <lastModDate>2018-08-20T10:10:11.0170000+00:00</lastModDate>
        
        <creator>Inzamam Rahaman</creator>
        
        <creator>Patrick Hosein</creator>
        
        <subject>Document clustering; word embeddings; information retrieval</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Every day, millions of documents in the form of articles, tweets, and blog posts are generated by Internet users. Many of these documents are not annotated with labels making the task of retrieving similar documents non-trivial. The process of grouping similar unannotated documents together is called document clustering. When properly done, a cluster should only contain documents that are related to each other in some manner. In this paper we propose a clustering technique based on Gaussian word embeddings which we illustrate using word2gauss. We demonstrate that the method produces coherent clusters that approach the performance of K-means on Purity and Entropy scores while achieving higher recall as measured by Inverse Purity.</description>
        <description>http://thesai.org/Downloads/FTC2017/143_Paper_276-Exploiting_Gaussian_Word_Embeddings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Generators and Double Motors Measurement and Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901142</link>
        <id>10.14569/SpecialIssue.2018.0901142</id>
        <doi>10.14569/SpecialIssue.2018.0901142</doi>
        <lastModDate>2018-08-20T10:10:10.1900000+00:00</lastModDate>
        
        <creator>Wenlun CAO</creator>
        
        <creator>Bei CHEN</creator>
        
        <subject>Supervisory control and data acquisition (SCADA); human machine interface (HMI); testing system; LabVIEW; CAN 2.0B</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In this paper, a SCADA (supervisory control and data acquisition) HMI (human machine interface) software package was implemented for a drive system consisting of dual generators and dual motors. The software was implemented in NI LabVIEW on top of the CAN bus, and dynamic monitoring and alarms are implemented. The data collected for the HMI include the bus voltage and current; the speed; and the temperatures of the IGBTs, bearings and windings. The running status and fault status of the generator and motor controllers are also sent to the HMI. The commands sent to the dual generators and dual motors include start/stop, speed, bus voltage, PID parameters for the DSP (Kp and Ki for the speed loop, and Kp and Ki in the D axis for the current loop), etc. The main work focuses on the design of the CAN communication protocol; multichannel CAN bus control implemented in LabVIEW; and the combination and unpacking of bit data and multi-byte data. In practical engineering applications, this system can realize the automatic supervision and control process efficiently and reliably.</description>
        <description>http://thesai.org/Downloads/FTC2017/142_Paper_217-Dual_Generators_and_Double_Motors_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of a Modified OFDM System Minimizing Frequency Offset by Dividing Subchannels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901141</link>
        <id>10.14569/SpecialIssue.2018.0901141</id>
        <doi>10.14569/SpecialIssue.2018.0901141</doi>
        <lastModDate>2018-08-20T10:10:09.3470000+00:00</lastModDate>
        
        <creator>Deock-Ho Ha</creator>
        
        <creator>Kyu-il Han</creator>
        
        <subject>Modified CPD-OFDM; diversity structure; sub-channel division; minimizing frequency offset; improving performance</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper reviews and introduces a modified CPD-OFDM (Orthogonal Frequency Division Multiplexing with cross polarization diversity structure) system that mitigates the performance degradation caused by frequency offset. In the conventional OFDM system, frequency offset causes ICI (inter-channel interference) and degrades the signal-to-noise ratio at the receiving end. The cross polarization diversity structure, composed of 2-pair cross polarized circular antennas in each transceiver, has the characteristic that it can remarkably remove the odd-time reflected waves at each receiving end. Because of this effect of removing delayed multipath waves, the cross circular polarization diversity structure can reduce the time delay spread and ICI. Therefore, the modified CPD-OFDM system can improve system performance as well as spectrum efficiency. In order to investigate the performance degradation of the CPD-OFDM system due to frequency offset, computer simulation and theoretical analysis were conducted. The simulation results clearly show that the CPD-OFDM system achieves a 1 to 3 dB improvement over the conventional OFDM system.</description>
        <description>http://thesai.org/Downloads/FTC2017/141_Paper_204-Analysis_of_a_Modified_OFDM_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Vision System for Monitoring Specialty Crops</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901140</link>
        <id>10.14569/SpecialIssue.2018.0901140</id>
        <doi>10.14569/SpecialIssue.2018.0901140</doi>
        <lastModDate>2018-08-20T10:10:08.5200000+00:00</lastModDate>
        
        <creator>Duke M. Bulanon</creator>
        
        <subject>Digital image processing; machine vision; remote sensing; specialty crops</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Precision agriculture involves observing the crop production process and applying appropriate actions to improve production efficiency. In this paper, a smart vision system is developed to monitor specialty crops, which include fruits and vegetables. The smart vision system is composed of an image acquisition module and an image processing element. The image acquisition module is a modified point-and-shoot camera that can detect both visible and near-infrared wavebands, while the image processing element takes the multispectral image as input and processes it using a customized image processing algorithm for crop assessment. The smart vision system was tested using an experimental apple orchard, a commercial onion field, and a peach orchard. Results showed that the smart vision system was able to differentiate watering inputs in the apple orchard, recognize the blossoms in the peach orchard, and detect the variation in the onion field.</description>
        <description>http://thesai.org/Downloads/FTC2017/140_Paper_138-A_Smart_Vision_System_for_Monitoring_Specialty_Crops.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized E-Learning Recommender System using Multimedia Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901139</link>
        <id>10.14569/SpecialIssue.2018.0901139</id>
        <doi>10.14569/SpecialIssue.2018.0901139</doi>
        <lastModDate>2018-08-20T10:10:07.9430000+00:00</lastModDate>
        
        <creator>Hayder Murad</creator>
        
        <subject>eLearning; recommender system; data mining</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Due to the huge amounts of online learning materials, e-learning environments are becoming very popular as means of delivering lectures. One of the most common e-learning challenges is how to recommend quality learning materials to the students. Personalized e-learning recommender systems help to reduce information overload, which tailor learning material to meet individual student’s learning needs. This research focuses on using various recommendation and data mining techniques for personalized learning in e-learning environment.</description>
        <description>http://thesai.org/Downloads/FTC2017/139_Paper_45-Personalized_E-Learning_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Routing Protocols for Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901138</link>
        <id>10.14569/SpecialIssue.2018.0901138</id>
        <doi>10.14569/SpecialIssue.2018.0901138</doi>
        <lastModDate>2018-08-20T10:10:07.3670000+00:00</lastModDate>
        
        <creator>Samera Batool</creator>
        
        <creator>Muazzam A. Khan Khattak</creator>
        
        <creator>Nazar Abbas Saqib</creator>
        
        <creator>Saad Rehman</creator>
        
        <subject>Underwater Wireless Sensor Networks; Wireless Sensor Networks</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Advancement in network sensor technology has contributed greatly towards a better society and has opened new avenues of research. Underwater Wireless Sensor Networks (UWSN) attract a lot of attention from researchers due to their military applications, environmental monitoring and prediction of natural disasters. Vibrant underwater weather conditions and node movement make designing an efficient routing protocol for underwater wireless sensor networks a challenging task. This paper presents a comprehensive survey and analysis of existing routing protocols for underwater wireless sensor networks. The main contribution of this paper is a classification of the existing routing techniques based on their routing mechanisms. It compares and analyzes the existing routing techniques based on various important features and highlights the major issues that are obstacles to designing an efficient routing protocol for UWSN.</description>
        <description>http://thesai.org/Downloads/FTC2017/138_Paper_38-A_Survey_of_Routing_Protocols_for_Underwater_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Traffic Observations in Data Centers and Forecasting Techniques for Resource Utilization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901137</link>
        <id>10.14569/SpecialIssue.2018.0901137</id>
        <doi>10.14569/SpecialIssue.2018.0901137</doi>
        <lastModDate>2018-08-20T10:10:06.8030000+00:00</lastModDate>
        
        <creator>Samar Raza Talpur</creator>
        
        <subject>Network engineering; traffic optimization; exponential smoothing method; forecasting techniques</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Data centers play a decisive role in the online corporate world. At present, data centers not only house servers, switches and routers, but also provide speedy services to vendors and uninterrupted network connectivity to clients’ websites. Managing data center traffic and applying forecasting techniques for resource utilization are increasingly challenging in effective data centers. This paper focuses on observing and analyzing live traffic in real-world data center networks and on applying forecasting techniques for traffic optimization and proper resource utilization in data centers. We propose a forecasting model for data centers to predict and estimate proper bandwidth utilization in real-world situations. Our model can identify upcoming network trends, bandwidth demand and the growth essential to forming futuristic assessments. The paper is based on day-to-day network traffic engineering and observation through the exponential smoothing method of time series; this approach optimizes upcoming network traffic for data centers.</description>
        <description>http://thesai.org/Downloads/FTC2017/137_Paper_21-Network_Traffic_Observations_in_Data_Centers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybridized Optimization Framework for Routing Calls in Call Centres</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901136</link>
        <id>10.14569/SpecialIssue.2018.0901136</id>
        <doi>10.14569/SpecialIssue.2018.0901136</doi>
        <lastModDate>2018-08-20T10:10:06.2270000+00:00</lastModDate>
        
        <creator>Mughele Ese Sophia</creator>
        
        <creator>Stella Chiemeke</creator>
        
        <creator>Konyeha Susan</creator>
        
        <creator>Kingsley Ukaoha</creator>
        
        <subject>Optimization; hybrid rule; routing rule; queue; call centre; call resolution</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The major challenge in a call centre with respect to customer satisfaction is that customers wait in the queue for long periods of time before they are attended to. Beyond the problem of the queue is the nature of the service itself: the effective resolution of customer issues. The challenge is therefore to determine a routing rule that can reduce waiting time while enhancing call resolution rates. In this study, we conducted simulation analysis using a Java simulation library on seven routing rules, four oriented to waiting time and three to call resolution (CR) rate, in a bid to determine the optimal rule. The data used for the simulation were collected from a call centre of a telecommunications outfit in Nigeria. The simulation yielded the optimal rules for waiting time and for CR rate, and a hybrid framework was developed from the simulation results. The proposed routing rule will be able to achieve low wait time and enhanced call resolution, which will improve and optimize call centre operations as well as increase customer satisfaction and brand loyalty.</description>
        <description>http://thesai.org/Downloads/FTC2017/136_Paper_5-Hybridized_Optimization_Framework_for_Routing_Calls.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stochastic Power Modeling of Wireless Sensor Networks for Mission Critical Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901135</link>
        <id>10.14569/SpecialIssue.2018.0901135</id>
        <doi>10.14569/SpecialIssue.2018.0901135</doi>
        <lastModDate>2018-08-20T10:10:05.6330000+00:00</lastModDate>
        
        <creator>P. Venkata Krishna</creator>
        
        <creator>T. S. Pradeepkumar</creator>
        
        <creator>Mohammad S. Obaidat</creator>
        
        <creator>V. Saritha</creator>
        
        <subject>Wireless sensor networks; simulation; Semi Markov Decision Process (SMDP); Markov process; dynamic programming</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Wireless sensor nodes consist of communication devices, physical devices (environmental sensors), a processing unit, memory and a radio. Optimizing the power consumed by the sensor nodes is always a challenge, and the power consumed during communication is high; optimizing the power and energy used during communication is therefore necessary. This paper addresses the issue by implementing a stochastic power model of wireless sensor nodes to handle Mission Critical Systems (MCS). Mission Critical Systems are systems that handle tasks and must meet real-time deadlines. If a deadline is not met, something catastrophic may occur, and sensor nodes that sleep during critical times will lead to an unstable system. So instead of going to the sleep state, the node changes to the idle state to handle critical tasks. In this paper, the motes are characterized using a Semi Markov Decision Process (SMDP). Various policies were framed for Non-Critical and Mission Critical Systems. Mission Critical Systems use nodes that meet the deadlines, thereby optimizing the power and energy used. Our experimental setup improves the energy efficiency of MCS by at least 25%. The model is validated using Crossbow sensor motes. The model also selects the action in the node in order to suggest the best policy for better energy optimization. The SMDP model is solved by dynamic programming using the value iteration function with discounted rewards. Our results show that nodes can go from the active to the sleep state for non-critical applications and from the active to a semi-sleep state for mission-critical applications, and our performance results show that 25% more power saving is achieved.</description>
        <description>http://thesai.org/Downloads/FTC2017/135_Paper_502-Stochastic_Power_Modeling_of_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation and Performance Analysis of Probabilistic Cognitive Relaying Communication Demo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901134</link>
        <id>10.14569/SpecialIssue.2018.0901134</id>
        <doi>10.14569/SpecialIssue.2018.0901134</doi>
        <lastModDate>2018-08-20T10:10:05.0570000+00:00</lastModDate>
        
        <creator>Amith Khandakar</creator>
        
        <creator>Amr Mohamed</creator>
        
        <subject>Cognitive Relaying; GNU Radio; Probabilistic Relaying; USRP2</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Cognitive relaying is the concept whereby secondary users (SUs) help primary users (PUs) by relaying their traffic. This paper studies and compares the performance of cognitive relaying using an experimental framework developed with USRP2. Probabilistic relaying and scheduling are practically implemented, and their effects on the delay and throughput of the PU and SU are investigated. We present a step-by-step implementation of a cooperative protocol in which the SU relays either the PU's data or its own data based on an adjustable scheduling probability. Adjustable admission control is also introduced in the protocol, so that PU data to be relayed enters the queue of the SU with a certain probability. The effect of varying both of these probabilities, service and admission, is studied in the setup. Finally, the practical results are verified against the simulation and theoretical results, and conclusions are drawn about the combined effects of the relaying and scheduling probabilities on the performance of the PU and SU in a cognitive relay environment.</description>
        <description>http://thesai.org/Downloads/FTC2017/134_Paper_510-Implementation_and_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Communication Quality in Mobile Broadband Access Networks: Radio Propagation Environment Impact and End-User Achievements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901133</link>
        <id>10.14569/SpecialIssue.2018.0901133</id>
        <doi>10.14569/SpecialIssue.2018.0901133</doi>
        <lastModDate>2018-08-20T10:10:04.4800000+00:00</lastModDate>
        
        <creator>Joseph Isabona</creator>
        
        <creator>Anthony Osaigbovo Igbinovia</creator>
        
        <subject>Mobile broadband; radio propagation channel; radio propagation environment; end-user location; channel throughput</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The need for efficient radio channel quality measurement to support the planning, operation and management of data communication networks has increased in recent years. An important parameter for measuring the data communication quality of a radio network is channel throughput. In this research work, the impact of end-user location and radio propagation channel environmental parameters on channel throughput performance has been experimentally investigated in an operational 3G network with upgraded HSDPA technology. Firstly, the results show that the near-far effect has an enormous impact on channel throughput, especially as the end-user moves towards the cell edges; the packet drop rate on the packet data communication links therefore increases as the user moves towards the cell edges. Secondly, it is shown how end-user data throughput drops as the propagation loss exponent increases, which implies that data communication quality in HSDPA mobile broadband is environment dependent. Hence, to provide a good end-user experience, the influence of different radio propagation environments on mobile data communication quality must be considered in the design and deployment of cellular networks.</description>
        <description>http://thesai.org/Downloads/FTC2017/133_Paper_484-Data_Communication_Quality_in_Mobile_Broadband_Access_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Computational Complexity of SC Polar Decoder in MIMO Fading Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901132</link>
        <id>10.14569/SpecialIssue.2018.0901132</id>
        <doi>10.14569/SpecialIssue.2018.0901132</doi>
        <lastModDate>2018-08-20T10:10:03.9030000+00:00</lastModDate>
        
        <creator>Alaa Abdulameer Hasan</creator>
        
        <subject>Polar codes; MIMO fading channel; Hermite polynomials; channel capacity</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Motivated by the large capacity gains in multiple antenna systems when ideal channel state information (CSI) is available at both receiver and transmitter and quadrature amplitude modulation (QAM) is applied, we examine the achievable rates of Rayleigh fading channel measurement based optimization techniques. We consider complex-valued Gaussian noise and seek the optimal input distribution over fixed signalling points. Using Hermite polynomials and under an even-moment constraint, the simulation results show that the information rate is achieved with a unique and optimal input distribution. It is also shown that the computational complexity can be reduced by factorizing the optimal distribution into a product of symmetrical distributions.</description>
        <description>http://thesai.org/Downloads/FTC2017/132_Paper_473-Low_Computational_Complexity_of_SC_Polar.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Markov Decision Processes for Bitrate Harmony in Adaptive Video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901131</link>
        <id>10.14569/SpecialIssue.2018.0901131</id>
        <doi>10.14569/SpecialIssue.2018.0901131</doi>
        <lastModDate>2018-08-20T10:10:03.3270000+00:00</lastModDate>
        
        <creator>Koffka Khan</creator>
        
        <creator>Wayne Goodridge</creator>
        
        <subject>Bottleneck links; DASH; Markov Decision Process; adaptive streaming; Quality of Experience (QoE)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>It has become common for many devices to share bottleneck links when users watch streaming video. When the DASH standard is used for adaptive video streaming over HTTP, maintaining good Quality of Experience (QoE) among video players becomes a critical issue. A Markov Decision Process (MDP) is one attempt at optimizing the streaming process by adopting a policy that maximizes particular QoE parameters. This paper proposes a novel approach called SHARE that uses a state-array representation consisting of a quality measurement called the Data Rate Ratio (DRR) from each player in the network. A third-party network device collects the DRR values of the players and uses an MDP based on discretized DRRs, with a unique reward function, to generate policies for better bitrate selection at runtime. A three-player model is presented. Comparisons with other methods show that players adopting these policies obtain good QoE across various metrics, such as bandwidth utilization, unfairness, re-buffering ratio, instability and average quality, with minimal trade-offs.</description>
        <description>http://thesai.org/Downloads/FTC2017/131_Paper_430-Markov_Decision_Processes_for_Bitrate_Harmony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scheme to Make MANET Selfheal Stable Routing Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901130</link>
        <id>10.14569/SpecialIssue.2018.0901130</id>
        <doi>10.14569/SpecialIssue.2018.0901130</doi>
        <lastModDate>2018-08-20T10:10:02.7470000+00:00</lastModDate>
        
        <creator>Ashwani Kush</creator>
        
        <creator>C.J. Hwang</creator>
        
        <creator>Rosy Pawar</creator>
        
        <subject>MANET; AODV; security; stable; routing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Wireless networking is the latest trend, with enormous applications in unpredictable and changing environments. Business organizations and users in many fields choose wireless because it allows flexibility of location; the attributes that support this are mobility, portability, and ease of installation. In a mobile ad-hoc network, nodes are almost continuously moving from one location to another, so the MANET topology can change often and unpredictably. Node mobility is one of the major issues of concern in mobile ad-hoc networks because it causes link failures. In this paper a new scheme is suggested that helps mobile nodes maintain routes to the destination with stable route selection, making the recovery phase very efficient and fast. The performance of the proposed routing protocol, named the Selfheal Stable Routing Protocol (SSRP), is evaluated using performance metrics such as Packet Delivery Ratio, Throughput and End-to-End Delay. The study is based on simulation runs adopting a CBR traffic pattern and taking care of node failure scenarios.</description>
        <description>http://thesai.org/Downloads/FTC2017/130_Paper_371-Scheme_to_Make_MANET_Selfheal_Stable_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Real Time MIMO testBED using NI-2922 Universal Software Radio Peripheral</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901129</link>
        <id>10.14569/SpecialIssue.2018.0901129</id>
        <doi>10.14569/SpecialIssue.2018.0901129</doi>
        <lastModDate>2018-08-20T10:10:02.1700000+00:00</lastModDate>
        
        <creator>Aliyu Buba Abdullahi</creator>
        
        <creator>Akram Hammoudeh</creator>
        
        <creator>Rafael F. S.</creator>
        
        <subject>Universal Software Radio Peripheral (USRP); Multiple Input Multiple Output (MIMO); testBED; Spatial Diversity (SD); Space Time Block Coding (STBC); bit error rate (BER)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In recent years, the integration of wireless communication with the mm-Wave spectrum has brought fifth generation (5G) systems tremendous research interest as a result of scarce spectrum resources, challenging the design paradigm of the previous fourth generation (4G) radio access technology. As a key enabler of future 5G systems, Multiple Input Multiple Output (MIMO) offers promising performance enhancement with effective spectrum utilization, although it increases the cost of deploying the system as the number of antennas grows. Large-scale simulation assessments prevail in the literature, and these require further practical implementation, assessment, and validation in real time. This article presents the experimental design, implementation, and evaluation of a MIMO testBED, measuring the system's bit error rate (BER) performance and channel capacity using spatial diversity. The prototype uses Universal Software Radio Peripheral hardware (USRP NI-2922) together with LabVIEW software toolkits. The results obtained show that the MIMO system's spatial diversity improves BER as well as system channel capacity.</description>
        <description>http://thesai.org/Downloads/FTC2017/129_Paper_331-Performance_Evaluation_of_Real_Time_MIMO_Testbed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Angular and the Trending Frameworks of Mobile and Web-based Platform Technologies: A Comparative Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901128</link>
        <id>10.14569/SpecialIssue.2018.0901128</id>
        <doi>10.14569/SpecialIssue.2018.0901128</doi>
        <lastModDate>2018-08-20T10:10:01.5930000+00:00</lastModDate>
        
        <creator>Mohamed Sultan</creator>
        
        <subject>Angular; mobile and web-based applications; front-end; JavaScript frameworks</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Recently, numerous new mobile and web-based application frameworks have been released and adopted both in the software development industry and in research. For example, the KnockoutJS, BackboneJS and ReactJS frameworks currently compete together with the different (entirely different) versions of the ‘Angular’ frameworks. While some of these new frameworks are more popular than others, some are specialised in certain types of applications, and others have specific advanced features or outstanding capabilities that set them above the rest. Moreover, with the increased usage of and demand for mobile applications, the need for cross-platform frameworks has significantly increased as well. In this paper, we discuss the different criteria that identify the strengths and weaknesses of using each framework in developing mobile and web applications. We highlight and discuss 12 different features of the latest application frameworks as ‘points of comparison’. Then we compare 5 of the trending frameworks (KnockoutJS, BackboneJS, AngularJS, React and Angular2) in a thorough analysis based on our earlier defined points of comparison, i.e. features. Finally, we focus more deeply on the newly released Angular2 framework, showing the eminent capabilities and values added over the different trending frameworks and over its own earlier versions. Overall, our comparative analysis yields a few interesting findings regarding different current frameworks, leading us to believe that a new generation might soon emerge from the exponential path of MVC, MV*/MVW and MVVM.</description>
        <description>http://thesai.org/Downloads/FTC2017/128_Paper_264-Angular_and_the_Trending_Frameworks_of_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Propagation Models Calibration in Mobile Cellular Networks: A Case Study in Togo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901127</link>
        <id>10.14569/SpecialIssue.2018.0901127</id>
        <doi>10.14569/SpecialIssue.2018.0901127</doi>
        <lastModDate>2018-08-20T10:10:00.9070000+00:00</lastModDate>
        
        <creator>Adekunl&#233; A. Salami</creator>
        
        <creator>Ayit&#233; S. A. Ajavon</creator>
        
        <creator>Koffi A. Dotche</creator>
        
        <creator>Koffi Sa-Bedja</creator>
        
        <subject>Propagation models tuning; COST-231; SUI-Model; received signals; drive testing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Many efforts have been made to develop propagation models suited to the area of interest, which raises the issue of the environment and its characteristics. This perspective has led to classifying environments with specific models, which in turn are embedded in planning and optimization software. Such software is very expensive, and the calibration of the embedded prediction models may require an extra cost for the network operator(s). This has prompted the work developed in this paper, which presents a calibration of the European Cooperation in the field of Scientific and Technical Research (COST) COST-231 model and of the Stanford University Interim (SUI) model against field data from selected environments in Togo. The work not only proposes optimized fitting parameters suited to the environmental conditions in Togo, but also provides consistent statistical parameters for these models for future electromagnetic applications at the cellular operating frequency considered in this work.</description>
        <description>http://thesai.org/Downloads/FTC2017/127_Paper_256-Propagation_Models_Calibration_in_Mobile_Cellular_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Defined Radio and Long Term Evolution (LTE) for Community Benefit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901126</link>
        <id>10.14569/SpecialIssue.2018.0901126</id>
        <doi>10.14569/SpecialIssue.2018.0901126</doi>
        <lastModDate>2018-08-20T10:10:00.3000000+00:00</lastModDate>
        
        <creator>Emil Salib</creator>
        
        <creator>Andrew M. Funkhousher</creator>
        
        <subject>Software Defined Radio (SDR); Long Term Evolution (LTE); Radio Frequency (RF); drive test; heat map; LTESub</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Long Term Evolution (LTE) is the current leading data transfer technology in the cellular industry and is used daily by people nationwide on multiple carriers to access their online services and networked applications. However, the service quality and access to this technology differ among cellular service providers due to the distance from the cellular towers and the number of users on the network at any given point in time. While it may be easy for a consumer to find the service provider with the best coverage nationwide, it is not as easy to find which has the strongest cellular signal in their favorite and frequented areas. To provide the residents of a city or town with an accurate representation of the cellular service coverage for different carriers in their area, the authors set out to test a number of cost-effective, open-source based Software Defined Radio (SDR) drive test systems, select one and calibrate it against an industry-standard LTE coverage survey instrument for a number of LTE frequency bands. This paper describes a cost-effective SDR drive test system (referred to in this paper as LTESub) that the authors developed and adopted in conducting Radio Frequency (RF) drive tests around a specific geographical area in their school town. To provide both an accurate representation and a consumer-friendly way to test LTE signal strength, the authors selected the RTL-2832U NESDR Nano USB and Delorme Earthmate LT-20 GPS USB dongles and the RTL-SDR Scanner software. They calibrated the newly created LTESub drive test system against the Anritsu LinkMaster ML87110A Drive Test multi-band receiver. The use of the LTESub system led to the successful development of a set of highly accurate heat maps at the 866.3 MHz and 751 MHz bands that can be used by potential cellular service subscribers to select the right frequency bands for their area and to map them to the cellular service provider by consulting publicly available online tools.</description>
        <description>http://thesai.org/Downloads/FTC2017/126_Paper_166-Software_Defined_Radio_and_Long_Term_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatio-Temporal Proximity Assistance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901125</link>
        <id>10.14569/SpecialIssue.2018.0901125</id>
        <doi>10.14569/SpecialIssue.2018.0901125</doi>
        <lastModDate>2018-08-20T10:09:59.5500000+00:00</lastModDate>
        
        <creator>Kashif Rizwan</creator>
        
        <creator>Nadeem Mahmood</creator>
        
        <creator>S A K Bari</creator>
        
        <creator>Zain Abbas</creator>
        
        <subject>Spatio-temporal; geo-fence; smart personal assistant; push notification; IoT; cloud</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>An ample amount of time is spent locating, finding and procuring the desired items on the wish lists of our daily-life necessities. Usually the list of desirable items is prepared when the person is not near the desired items. It would be convenient to have an assistance application that lures people on the go to procure a desired item. This requires a two-tier information sharing system, one at the procurer and the other at the vendor: the procurer adds items to the wish list, and the vendor adds offers that must comply with the transaction, valid and existence times with respect to a determined geo-location. Therefore, a system with a mobile application named ‘Zambeel’ is presented that enables procurers to locate vendors for the things they require. The idea is to produce a geographically aware application for mobile devices that automatically notifies users and shows the location on a map, even on the go.</description>
        <description>http://thesai.org/Downloads/FTC2017/125_Paper_112-Spatio-Temporal_Proximity_Assistance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Android-based Student Information System Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901124</link>
        <id>10.14569/SpecialIssue.2018.0901124</id>
        <doi>10.14569/SpecialIssue.2018.0901124</doi>
        <lastModDate>2018-08-20T10:09:58.9730000+00:00</lastModDate>
        
        <creator>Tariq Jamil</creator>
        
        <creator>Iftaquaruddin Mohammed</creator>
        
        <subject>Smartphones; mobile; android; SIS</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Today’s smartphones have developed into advanced computer systems with enormous photo and video capabilities, larger touch-screens, ubiquitous internet access, and powerful global positioning services. There are numerous applications available nowadays that provide a wide range of services to smartphone users in almost every area of daily life, such as communication, news, entertainment, maps, and education. The Student Information System, or SIS, is a desktop computer-level service provided by institutions of higher education to students and faculty, through which students can gain access to their transcripts, get their semester timetable, register or drop courses, find out about the courses’ final examination timetable, or get general academic information. The faculty, using SIS, can access information pertaining to the students registered in their courses, get detailed information about their advisees, or audit degrees when their advisees are close to graduation. In this paper, we describe the design of an SIS application developed for the Android operating system so that, instead of using a computer system, both students and faculty can gain access to academic information on the go through smartphones on which this application has been installed. Initial tests have indicated a positive experience for both students and faculty upon using this application, and efforts are underway to develop a similar application for Apple’s iOS as well.</description>
        <description>http://thesai.org/Downloads/FTC2017/124_Paper_95-Development_of_an_Android-based_Student_Information_System_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable High Density Stacked Memristor Memory Designs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901123</link>
        <id>10.14569/SpecialIssue.2018.0901123</id>
        <doi>10.14569/SpecialIssue.2018.0901123</doi>
        <lastModDate>2018-08-20T10:09:58.4100000+00:00</lastModDate>
        
        <creator>Selvakumaran Vadivelmurugan</creator>
        
        <subject>Random access storage; very-large-scale integration (VLSI); analog circuits</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A conventional design has an insulator layer for every crossbar layer stacked. A methodology for alternately selecting memristor layers, and for their proper operation, is proposed. It helps increase the vertical density of stacked memristor crossbar arrays and yields the maximum possible memory density for crossbar stacks. While it still suffers a few shortcomings, concurrent access in particular, the new design proves itself an interesting alternative because of the increase in memory density. For a 2 nm insulator thickness in the conventional design, at least a 27.50 percent increase in vertical crossbar density is expected in the proposed design. Alternative designs and approaches have also been proposed to address the shortcomings.</description>
        <description>http://thesai.org/Downloads/FTC2017/123_Paper_480-Reliable_High_Density_Stacked_Memristor_Memory_Designs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FSR based Force Myography (FMG) Stability Throughout Non-Stationary Upper Extremity Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901122</link>
        <id>10.14569/SpecialIssue.2018.0901122</id>
        <doi>10.14569/SpecialIssue.2018.0901122</doi>
        <lastModDate>2018-08-20T10:09:57.8500000+00:00</lastModDate>
        
        <creator>Carlo Menon</creator>
        
        <creator>Mona Lisa Delva</creator>
        
        <subject>Activities of daily living; age-related rehabilitation; assistive technology; biomedical devices; human factors; independent living; prosthetic control; sensors/sensor application; force myography</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Force Myography (FMG) is a method of tracking movement and functional activity that is based on the volumetric changes that occur in a limb during muscle contraction. There are several advantages of FMG over other myographic modalities that support its implementation in rehabilitative and assistive technology to track upper extremity movement during activities of daily living. The aim of the current work is to explore the stability of FMG sensors during non-static upper extremity activities. Twenty-one participants with varying age and gender were recruited to perform a set of tasks while wearing a custom FMG band. The participants were required to move between two extremes of range of motion (wrist flexion/extension and forearm pronation/supination) or between two extremes of grasp force (squeeze and relax). FMG presented low variability (&lt;6%) and demonstrated little to no drift with ongoing task duration (Spearman’s |R| &lt; 0.3). FMG variability did not present any relationship to differences in anthropometry or grip strength (Spearman’s |R| &lt; 0.3), suggesting that FMG wearers will present a stable FMG signal despite differing musculoskeletal characteristics. Finally, variability in FMG presented no significant relationship between user variables and the testing accuracies of machine learning models trained on FMG data. The results of this study demonstrate the stability of FMG signals during non-stationary tasks and support the potential of implementing FMG into user-machine interface technology.</description>
        <description>http://thesai.org/Downloads/FTC2017/122_Paper_328-FSR_based_Force_Myography_FMG_Stability_Throughout.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Augmented Efficiency of CLA Logic through Multiple CMOS Configurations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901121</link>
        <id>10.14569/SpecialIssue.2018.0901121</id>
        <doi>10.14569/SpecialIssue.2018.0901121</doi>
        <lastModDate>2018-08-20T10:09:57.2730000+00:00</lastModDate>
        
        <creator>Naga Spandana Muppaneni</creator>
        
        <creator>Steve C. Chiu</creator>
        
        <subject>Carry-lookahead adder (CLA); performance optimization; simulations; transistor sizing; power dissipation; Tanner EDA tools</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Adders are basic building blocks in digital systems. Addition is an indispensable operation in digital, analog, and control systems. Performance optimization of a digital adder relies on parameters such as power, speed, and area. Much research has gone into optimizing the delay and power dissipation of adders. The carry-lookahead adder (CLA) is considered one of the fastest digital adders; it emerges from the concept of computing all incoming carries in parallel. This paper introduces various design implementations of CLA logic using CMOS transistors. Each design implementation is analyzed by assessing the power dissipation and the delay at every possible state through transistor sizing. Simulations have been performed with Tanner EDA tools based on 250 nm technology at a 2.5 V supply voltage. Previous work on CLAs has been examined, and the delay and power dissipation of an 8-bit CLA design have been evaluated.</description>
        <description>http://thesai.org/Downloads/FTC2017/121_Paper_222-Augmented_Efficiency_of_CLA_Logic_through_Multiple_CMOS_Configurations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Efficient Look-Up Table based Binary Coded Decimal Adder Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901120</link>
        <id>10.14569/SpecialIssue.2018.0901120</id>
        <doi>10.14569/SpecialIssue.2018.0901120</doi>
        <lastModDate>2018-08-20T10:09:56.6930000+00:00</lastModDate>
        
        <creator>Zarrin Tasnim Sworna</creator>
        
        <creator>Mubin Ul Haque</creator>
        
        <creator>Hafiz Md. Hasan Babu</creator>
        
        <creator>Lafifa Jamal</creator>
        
        <subject>Adder; Binary Coded Decimal (BCD); Field Programmable Gate Array (FPGA); Look-Up Table (LUT); correction</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The Binary Coded Decimal (BCD) representation, being more accurate and human-readable with ease of conversion, is prevalent in computing and electronic communication. In this paper, a tree-structured parallel BCD addition algorithm is proposed with reduced time complexity O(⌈log2 b⌉ + (n − 1)), where n = number of digits and b = number of bits in a digit. A BCD adder is more effective with a LUT (Look-Up Table)-based design, owing to the enumerable benefits and applications of FPGA (Field Programmable Gate Array) technology. A size-minimal and depth-minimal LUT-based BCD adder circuit construction is the main contribution of this paper. The proposed parallel BCD adder achieves a radical improvement over the best-known existing LUT-based BCD adders, providing a 20.0% reduction in area and a 41.32% reduction in delay in post-layout simulation. Since the proposed circuit is improved in both area and delay, it is 53.06% more efficient in terms of area-delay product than the best-known existing BCD adder, which is surely a significant achievement.</description>
        <description>http://thesai.org/Downloads/FTC2017/120_Paper_105-A_Cost-Efficient_Look-Up_Table_based_Binary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NanoRFID/Computers: Developments and Implications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901119</link>
        <id>10.14569/SpecialIssue.2018.0901119</id>
        <doi>10.14569/SpecialIssue.2018.0901119</doi>
        <lastModDate>2018-08-20T10:09:56.1330000+00:00</lastModDate>
        
        <creator>Mario Cardullo</creator>
        
        <creator>Robert Meagley</creator>
        
        <subject>Nanoscale; Internet of Things (IoT); Internet of Everything; RFID; nano-computers; nano RFID/Computer (NR)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>True nano RFID/Computers (NRs) will represent a major change in the way many things are done. This technology will truly enable fulfillment of the promise of the Internet of Things (IoT).
NRs could have as transformative an impact on the world as the Internet and the personal computer have had. They could provide the foundation for realizing the vision of the “Internet of Things” or “Internet of Everything” by “wiring up” the world, from inanimate objects to living organisms. From the environment of the individual to the global ecosystem, we could “know” and interact with our environment on a real-time basis down to the nanoscale, such as human cells and the molecular structures of objects. This would enable humans to have a far greater understanding, awareness, and control of the world around them, viewing our world at much “higher resolution”. The benefits could range from huge gains in improving human health while reducing costs, to far greater efficiencies in the use of natural resources to enhance prosperity and environmental sustainability.
These nanoscale devices can do much more than tracking. They can be embedded in any material and thus serve as a platform both to acquire data from the material and to send information or instructions to it. Beyond two-way data transmission, NRs would not just be tracking devices but complete computing and data acquisition systems. Through the application of ultra-miniaturization, distributed computing, and nano antennas, the acquisition and processing of analytical data can become massively parallel.
</description>
        <description>http://thesai.org/Downloads/FTC2017/119_Paper_31-NanoRFIDComputers–Developments_and_Implications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Achieving Flatness: Honeywords Generation Method for Passwords based on User Behaviours</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901118</link>
        <id>10.14569/SpecialIssue.2018.0901118</id>
        <doi>10.14569/SpecialIssue.2018.0901118</doi>
        <lastModDate>2018-08-20T10:09:55.5570000+00:00</lastModDate>
        
        <creator>Omar Akif</creator>
        
        <creator>H. S. Al-Raweshidy</creator>
        
        <creator>G. J. Rodgers</creator>
        
        <subject>Honeywords; user behaviours; worst password list; dictionary attack</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Honeywords (decoy passwords) have been proposed to detect attacks against hashed password databases. For each user account, the original password is stored with many honeywords in order to thwart any adversary. The honeywords are selected deliberately, such that a cyber-attacker who steals a file of hashed passwords cannot be sure whether a given entry is the real password or a honeyword for any account. Moreover, logging in with a honeyword will trigger an alarm notifying the administrator of a password file breach. At the expense of increasing the storage requirement by 24 times, the authors introduce a simple and effective solution to the detection of password file disclosure events. In this study, we scrutinise the honeyword system and highlight possible weak points. We also suggest an alternative approach that selects the honeywords from existing user information, a generic password list, a dictionary attack, and by shuffling the characters. Four sets of honeywords that resemble the real passwords are added to the system, thereby achieving an extremely flat honeywords generation method. To measure human behaviour in relation to attempts to crack the password, a testbed engaged with by 820 people was created to determine the appropriate words for the traditional and proposed methods. The results show that under the new method it is harder to obtain any indication of the real password (high flatness) when compared with traditional approaches, and the probability of choosing the real password is 1/k, where k = number of honeywords plus the real password.</description>
        <description>http://thesai.org/Downloads/FTC2017/118_Paper_485-Achieving_Flatness_Honeywords_Generation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying and Scoring Vulnerability in SCADA Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901117</link>
        <id>10.14569/SpecialIssue.2018.0901117</id>
        <doi>10.14569/SpecialIssue.2018.0901117</doi>
        <lastModDate>2018-08-20T10:09:54.9470000+00:00</lastModDate>
        
        <creator>Abdullah Abuhussein</creator>
        
        <creator>Parves Kamal</creator>
        
        <creator>Sajjan Shiva</creator>
        
        <subject>Supervisory Control and Data Acquisition (SCADA) security; critical infrastructure security; SCADA; risk assessment; vulnerability scoring</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Supervisory Control and Data Acquisition (SCADA) systems form a critical component of industries such as national power grids, manufacturing automation, nuclear power production, and more. By interacting with control machines and providing real-time support to monitor, gather, and record data, SCADA systems have a major impact in industrial environments. Along with the countless benefits of SCADA systems, considerable risks have arisen. Moreover, SCADA operators, production staff, and sometimes systems experts have little or no knowledge of applying security due diligence. In this paper, we systematically review SCADA security from different aspects (i.e., SCADA components, vulnerability, severity, impact, etc.). Our goal is to provide an all-inclusive reference for future SCADA users and researchers. We also use a time-based heuristic approach to evaluate vulnerabilities and show the importance of this evaluation. We aim to establish a fundamental level of security due diligence to ensure SCADA risks are well comprehended and managed.</description>
        <description>http://thesai.org/Downloads/FTC2017/117_Paper_398-Identifying_and_Scoring_Vulnerability_in_SCADA_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Two-Factor Authentication with SwissPass Crypto Card: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901116</link>
        <id>10.14569/SpecialIssue.2018.0901116</id>
        <doi>10.14569/SpecialIssue.2018.0901116</doi>
        <lastModDate>2018-08-20T10:09:54.3870000+00:00</lastModDate>
        
        <creator>Annett Laube</creator>
        
        <creator>Reto Koenig</creator>
        
        <subject>Authentication; mobile applications; smart cards; privacy</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Two-factor authentication requires two pieces of independent evidence, mostly one based on possession and the second based on knowledge. The major drawbacks of these methods are usability and the costs to procure and (re)place the hardware token. The SwissPass is a contactless crypto card used mainly by the Swiss Federal Railways to inspect travel tickets (GA and Half-Fare travel cards). This paper presents an authentication protocol using the widely spread SwissPass that allows users to log in to web and mobile applications in a secure and intuitive way via smartphone. The protocols are further developed to create the SwissPass Authenticator, providing federated authentication on the smartphone.</description>
        <description>http://thesai.org/Downloads/FTC2017/116_Paper_393-Secure_Two-Factor_Authentication_with_SwissPass.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Management of Privacy When Photos and Videos are Stored or Shared</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901115</link>
        <id>10.14569/SpecialIssue.2018.0901115</id>
        <doi>10.14569/SpecialIssue.2018.0901115</doi>
        <lastModDate>2018-08-20T10:09:53.8230000+00:00</lastModDate>
        
        <creator>Srinivas Madhisetty</creator>
        
        <subject>Privacy; photos and videos conceptual framework</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Extensive research has been conducted on the technical side, managing privacy using mechanisms such as encryption, passwords, etc. However, the core issues of privacy are not addressed. This is particularly evident when photos and videos are shared via social media. The main problem is that the actual meaning of privacy is difficult to define. Though there are definitions of privacy and acts defined to protect it, there is no clear consensus as to what ‘privacy’ actually means. It is quite often challenging to manage as it is an ill-defined concept. This research is motivated by the question of what privacy means in relation to photos and videos, and methods have been used to obtain a crowd truth and arrive at a general consensus. The outcome of this research is a conceptual framework of privacy, particularly for sharing photo and video content via social media.</description>
        <description>http://thesai.org/Downloads/FTC2017/115_Paper_347-Management_of_Privacy_When_Photos_and_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Threats and Techniques in Social Networking Sites: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901114</link>
        <id>10.14569/SpecialIssue.2018.0901114</id>
        <doi>10.14569/SpecialIssue.2018.0901114</doi>
        <lastModDate>2018-08-20T10:09:53.2470000+00:00</lastModDate>
        
        <creator>Azah Norman</creator>
        
        <creator>Maw Maw</creator>
        
        <creator>Suraya Hamid</creator>
        
        <creator>Suraya Ika Tamrin</creator>
        
        <subject>Social networking sites (SNS); security; privacy; security techniques; systematic literature review (SLR)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>During the past decade, interactions among people have gradually changed as a result of the popularity, availability, and accessibility of social networking sites (SNSs). SNSs enhance our lives in terms of relaxation, knowledge, and communication. On the other hand, the information security and privacy of SNS users have been threatened, with most users not aware of this fact. The rate of cyber-attacks committed via SNSs is dramatically high. Finding a solution to provide better security for social network users has become a major challenge. This review is conducted with the objective of collecting and investigating all credible and effective studies that have examined security problems and solutions on SNSs. We aim to extract and discuss the prominent security features and techniques in the selected research articles to provide researchers and practitioners with a concise collection of security solutions. In this review, we conduct a secondary study by accessing previous studies devoted to the security threats of SNSs and the new security techniques to protect them from attacks. We apply the standard guidelines of systematic literature review by working thoroughly through 84 previous studies, including journal papers and conference proceedings published in high-impact journals. The results show that 2013 was the peak period in which security problems on SNSs received attention from researchers, and 23 significant security problems in SNSs were identified. Facebook and Twitter are the two SNSs most often referred to by researchers regarding security problems. We found that people (users) and SNSs themselves are the two main causes of today’s security and privacy issues on SNSs. In conclusion, the security and privacy issues on SNSs remain an unsolved problem, and there is as yet no solid and complete solution for absolutely removing those issues from SNSs.</description>
        <description>http://thesai.org/Downloads/FTC2017/114_Paper_274-Security_Threats_and_Techniques_in_Social_Networking_Sites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Biometric based on Neural Representations of Synergistic Hand Grasps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901113</link>
        <id>10.14569/SpecialIssue.2018.0901113</id>
        <doi>10.14569/SpecialIssue.2018.0901113</doi>
        <lastModDate>2018-08-20T10:09:52.6700000+00:00</lastModDate>
        
        <creator>Vrajeshri Patel</creator>
        
        <creator>Martin Burns</creator>
        
        <creator>Ionut Florescu</creator>
        
        <creator>Rajaratnam Chandramouli</creator>
        
        <subject>Biometrics; hand synergies; quadratic discriminant classifier; electroencephalography (EEG); feature extraction</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>To meet the growing need for robust and secure identity verification systems, a new biometric based on neural representations of synergistic hand grasps is proposed here. In this preliminary study, five subjects were asked to perform six synergistic hand grasps that are shared most often in common activities of daily living. Their scalp electroencephalographic (EEG) signals were analyzed using 20 scalp electrodes. In our previous work, we found that the hand kinematics of these synergistic grasps showed potential as a biometric. In the current work, we asked whether the neural representations of these synergistic grasps can provide a signature unique enough to serve as a biometric. The results show that across 300 entries, the system, in its best configuration, achieved an accuracy of 92.2% and an EER of ~4.7% when tasked with identifying these five individuals. The implications of these preliminary results and applications in the near future are discussed. We believe that this study could lead to the development of a novel biometric as a potential future technology.</description>
        <description>http://thesai.org/Downloads/FTC2017/113_Paper_271-A_Novel_Biometric_based_on_Neural_Representations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensor-based Ransomware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901112</link>
        <id>10.14569/SpecialIssue.2018.0901112</id>
        <doi>10.14569/SpecialIssue.2018.0901112</doi>
        <lastModDate>2018-08-20T10:09:52.0770000+00:00</lastModDate>
        
        <creator>Mitch Thornton</creator>
        
        <creator>Michael A. Taylor</creator>
        
        <creator>Kaitlin N. Smith</creator>
        
        <subject>Ransomware detection; physical sensor side channel; feature vector; encryption</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A new method for detecting ransomware that is present in an infected host during its payload execution is proposed and evaluated. Data streams from on-board sensors present in modern computing systems are monitored, and appropriate criteria are used that enable the sensor data to effectively detect the presence of ransomware infections. Encryption detection depends upon the use of small yet distinguishable changes in the physical state of a system as reported through on-board sensor readings. A feature vector is formulated consisting of various sensor outputs, coupled with detection criteria for the binary states of “ransomware present” versus “normal operation”. Preliminary experimental results indicate that ransomware is detected with an overall accuracy in excess of 95% and with corresponding false positive rates of less than 6% for four different types of encryption methods over two candidate systems with different operating systems. An advantage of this approach is that previously unknown or “zero-day” versions of ransomware are vulnerable to our detection method, since no prior knowledge of the malware, such as a data signature, is required for our method to be deployed and used.</description>
        <description>http://thesai.org/Downloads/FTC2017/112_Paper_254-Sensor-based_Ransomware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distortion Search – A Web Search Privacy Heuristic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901111</link>
        <id>10.14569/SpecialIssue.2018.0901111</id>
        <doi>10.14569/SpecialIssue.2018.0901111</doi>
        <lastModDate>2018-08-20T10:09:51.5170000+00:00</lastModDate>
        
        <creator>Kato Mivule</creator>
        
        <creator>Kenneth Hopkinson</creator>
        
        <subject>Web search privacy; query obfuscation; user profile privacy; user intent obfuscation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Search engines have vast technical capabilities to retain Internet search logs for each user and thus present major privacy vulnerabilities to both individuals and organizations by revealing user intent. Additionally, many of the web search privacy-enhancing tools available today require that the user trust a third party, which makes confidentiality of user intent even more challenging. The user is left at the mercy of the third party without control over his or her own privacy. In this article, we suggest a user-centric heuristic, Distortion Search, a web search query privacy methodology that works by forming obfuscated search queries via the permutation of query keyword categories, and by strategically applying k-anonymised web navigational clicks on URLs and ads to generate a distorted user profile, thus providing confidentiality of specific user intent and queries. We provide empirical results via the evaluation of distorted web search queries in terms of retrieved search results and the resulting web ads from search engines. Preliminary experimental results indicate that web search query and specific user intent privacy might be achievable from the user side without the involvement of the search engine or other third parties.</description>
        <description>http://thesai.org/Downloads/FTC2017/111_Paper_239-Distortion_Search–A_Web_Search_Privacy_Heuristic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Piezoelectric based Biosignal Transmission using Xbee</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901110</link>
        <id>10.14569/SpecialIssue.2018.0901110</id>
        <doi>10.14569/SpecialIssue.2018.0901110</doi>
        <lastModDate>2018-08-20T10:09:50.9370000+00:00</lastModDate>
        
        <creator>Mahmoud Al Ahmad</creator>
        
        <creator>Soha Ahmed</creator>
        
        <creator>Walid Shakhatreh</creator>
        
        <creator>Mohammed Jalil</creator>
        
        <subject>Piezoelectric; XBee; medical sensors; vital signs; remote health monitoring</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper showcases the development of an innovative healthcare solution that allows patients to be monitored remotely. The system utilizes a piezoelectric sheet sensor and the XBee wireless communication protocol to collect the heartbeat pressure signal from a human subject’s neck and transmit it to a receiving node. Then, using signal processing techniques, a set of important vital parameters such as heart rate and blood pressure is extracted from the received signal. These extracted parameters are needed to assess the subject’s health continuously and in a timely manner. The architecture of our developed system, which enables wireless transmission of the raw acquired physiological signal, has three advantages over existing systems. First, it increases the user’s mobility because we employ the XBee wireless communication protocol for signal transmission. Second, it increases the system’s usability, since the user has to carry only a single unit for signal acquisition while preprocessing is performed remotely. Third, it gives us more flexibility in acquiring various vital parameters with great accuracy, since processing is done remotely on powerful computers.</description>
        <description>http://thesai.org/Downloads/FTC2017/110_Paper_202-Piezoelectric_based_Biosignal_Transmisson_Using_Xbee.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Scan2Pass Architecture for Enhancing Security towards E Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901109</link>
        <id>10.14569/SpecialIssue.2018.0901109</id>
        <doi>10.14569/SpecialIssue.2018.0901109</doi>
        <lastModDate>2018-08-20T10:09:50.3600000+00:00</lastModDate>
        
        <creator>Hamzah F. Zmezm</creator>
        
        <creator>Hareth Zmezm</creator>
        
        <creator>Mustafa S.Khalefa</creator>
        
        <creator>Hamid Ali Abed Alasadi</creator>
        
        <subject>Electronic commerce; authentication; one time password; performance and reliability</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Widely deployed web services facilitate and enrich several applications, such as e-commerce, social networks, and online banking. This study proposes an optical challenge-response user authentication system model based on the One Time Password (OTP) principle (Scan2Pass) that uses multi-factor authentication and leverages the camera-equipped mobile phone of the legitimate user as a secure hardware token. The methodology designed and implemented to evaluate the proposed idea is explored and explained throughout this paper. The chosen method presents a brief overview of the steps required to design an efficient and practical system. The requirements are also discussed, along with our assumptions, to give a simple yet adequate understanding of the security of the proposed system in general. We then give an overview of the basic architecture of the proposed system, explaining the role of the shared secret and the challenge-response protocol in completing the authentication procedure and providing mutual authentication between the user and the server, by adopting multiple factors such as time and the OTP algorithm, and describing the operation flows of users during each phase of the system.</description>
        <description>http://thesai.org/Downloads/FTC2017/109_Paper_152-A_Novel_Scan2Pass_Architecture_for_Enhancing_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Dependent Voice Recognition System using MFCC and VQ for Security Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901108</link>
        <id>10.14569/SpecialIssue.2018.0901108</id>
        <doi>10.14569/SpecialIssue.2018.0901108</doi>
        <lastModDate>2018-08-20T10:09:49.7830000+00:00</lastModDate>
        
        <creator>Ashwin Nair Anil Kumar</creator>
        
        <creator>Senthil Arumugam Muthukumaraswamy</creator>
        
        <subject>Speaker identification; voice recognition; mel frequency cepstral coefficients (MFCCs); vector quantization (VQ); spectral subtraction</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper presents the implementation of a practical voice recognition system using MATLAB (R2014b) to secure a given user’s system so that only the user may access it. Voice recognition systems have two phases, training and testing. During the training phase, the characteristic features of the speaker are extracted from the speech signal and stored in a database. In the testing phase, the audio features of the test voice sample are compared with the stored voice samples in the database to determine whether a match exists. For this research, Mel Frequency Cepstral Coefficients (MFCCs) were chosen to represent the feature vectors of the user’s voice, as they accurately simulate the behavior of the human ear. This characteristic of the MFCCs makes them an excellent measure of speaker characteristics. The feature matching process is then performed by subjecting the MFCCs to vector quantization using the LBG (Linde-Buzo-Gray) algorithm. In practical scenarios, noise is a major factor that adversely influences a voice recognition system. The paper addresses this issue by utilizing spectral subtraction to remove environmental noise affecting the speech signal, thereby increasing the robustness of the system.</description>
        <description>http://thesai.org/Downloads/FTC2017/108_Paper_151-Text_Dependent_Voice_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Fast Fourier Transform using Fully Homomorphic Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901107</link>
        <id>10.14569/SpecialIssue.2018.0901107</id>
        <doi>10.14569/SpecialIssue.2018.0901107</doi>
        <lastModDate>2018-08-20T10:09:49.2230000+00:00</lastModDate>
        
        <creator>Thomas Shortell</creator>
        
        <creator>Ali Shokoufandeh</creator>
        
        <subject>Image processing; computer security; Fast Fourier Transforms</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Secure signal processing is becoming a de facto model for preserving privacy. We propose a model based on the Fully Homomorphic Encryption (FHE) technique to mitigate security breaches. Our framework provides a method to perform a Fast Fourier Transform (FFT) on a user-specified signal. Using encryption of individual binary values and FHE operations over addition and multiplication, we enable a user to perform the FFT in a fixed point fractional representation in binary. Our approach bounds the error of the implementation to enable user-selectable parameters based on the specific application. We verified our framework against test cases for one dimensional signals and images (two dimensional signals).</description>
        <description>http://thesai.org/Downloads/FTC2017/107_Paper_89-Secure_Fast_Fourier_Transform_using_Fully.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Continuous Authentication in Smartphones: An Analysis on Robust Security Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901106</link>
        <id>10.14569/SpecialIssue.2018.0901106</id>
        <doi>10.14569/SpecialIssue.2018.0901106</doi>
        <lastModDate>2018-08-20T10:09:48.6300000+00:00</lastModDate>
        
        <creator>Sajjad Ahmad</creator>
        
        <creator>Munam Shah</creator>
        
        <creator>Adnan Zeb</creator>
        
        <creator>Hussain Ahmad Madni</creator>
        
        <subject>Continuous authentication; security; mobile sharing; TIPS; SenGuard; SilentSense; GeoTouch; gestures; key strokes</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Current authentication systems in smartphones are classified as static or one-shot authentication schemes, in which the user is validated at a single point. These systems cannot distinguish between an intruder and a legitimate user if security credentials such as passwords have been leaked. This issue is addressed by continuous authentication schemes, in which the system constantly monitors the user through different procedures to detect whether the user is genuine or an intruder. Continuous authentication schemes can be deployed using different methods, such as behavioral, gestural, and facial recognition. In this paper, we critically analyze different continuous authentication schemes and evaluate the robustness and failure-free operation of each approach. We aim to provide precise knowledge about the different continuous authentication schemes, helping users determine the appropriateness of the underlying model adopted by each approach.</description>
        <description>http://thesai.org/Downloads/FTC2017/106_Paper_70-Continuous_Authentication_in_Smartphones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Relationship between Biometric Technology and Privacy: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901105</link>
        <id>10.14569/SpecialIssue.2018.0901105</id>
        <doi>10.14569/SpecialIssue.2018.0901105</doi>
        <lastModDate>2018-08-20T10:09:48.0670000+00:00</lastModDate>
        
        <creator>Zibusiso Dewa</creator>
        
        <subject>Biometric technology; privacy; legislation; evolving practices; invasiveness; conflicting interests; European Union</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>As the demand for biometric technology grows, its implementation appears poised for broader use, and concerns with regard to privacy have increased. Biometric recognition is promoted in a variety of private and government domains: helping to identify individuals or criminals, providing access control systems to enable efficient access to services, and helping keep patient data safe, among other functions. However, new advances in biometrics have brought forth widespread debate among researchers, with concerns surrounding the effectiveness and management of biometric systems. Further questions arise about their appropriateness and the societal impacts of their use. This review begins by providing an overview of past and present uses of biometric technology and the serious problems they pose to privacy. It then examines the factors that play a part in the implementation of privacy in biometrics. The cultural differences that affect legislative approaches are explored by comparing the approaches adopted by the European Union and the United States. Furthermore, possible methods of remediating the concerns raised by the implementation of biometrics are discussed. It is concluded that governments and organisations must be transparent and cooperate with legislators; this combined effort may eliminate many of the perceived risks in the technology and help elucidate clearer methods for governing biometrics, ensuring that future developments hold privacy in high regard.</description>
        <description>http://thesai.org/Downloads/FTC2017/105_Paper_62-The_Relationship_between_Biometric_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An SDN-based Architecture for Security Provisioning in Fog-to-Cloud (F2C) Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901104</link>
        <id>10.14569/SpecialIssue.2018.0901104</id>
        <doi>10.14569/SpecialIssue.2018.0901104</doi>
        <lastModDate>2018-08-20T10:09:47.4900000+00:00</lastModDate>
        
        <creator>Sarang Kahvazadeh</creator>
        
        <creator>Vitor Barbosa</creator>
        
        <creator>Xavier Masip-Bruin</creator>
        
        <creator>Eva Mar&#237;n-Tordera</creator>
        
        <subject>IoT; cloud computing; fog computing; fog-to-cloud computing; security; Software Defined Network (SDN); critical infrastructures</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The unstoppable adoption of cloud and fog computing is paving the way for innovative services, some requiring features not yet covered by either fog or cloud computing. Simultaneously, today’s technology evolution is easing the monitoring of any kind of infrastructure, be it large or small, private or public, static or dynamic. The fog-to-cloud computing (F2C) paradigm recently emerged to support foreseen and unforeseen service demands while simultaneously benefiting from the smart capacities of edge devices. Inherited from cloud and fog computing, a challenging aspect of F2C is security provisioning. Unfortunately, the security strategies employed in cloud computing require computational power not available on devices at the edge of the network, whereas security strategies in fog are still in their infancy. In this paper, we therefore propose a Software Defined Network (SDN)-based security management architecture built on a master/slave strategy. The proposed architecture is conceptually applied to a critical infrastructure (CI) scenario, thus analyzing the benefits F2C may bring for security provisioning in CIs.</description>
        <description>http://thesai.org/Downloads/FTC2017/104_Paper_3-An_SDN-based_Architecture_for_Security_Provisioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Review of Trends and Gaps in Collaborative Software Engineering in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901103</link>
        <id>10.14569/SpecialIssue.2018.0901103</id>
        <doi>10.14569/SpecialIssue.2018.0901103</doi>
        <lastModDate>2018-08-20T10:09:46.8970000+00:00</lastModDate>
        
        <creator>Stanley Ewenike</creator>
        
        <creator>Elhadj Benkhelifa</creator>
        
        <creator>Claude Chibelushi</creator>
        
        <subject>Collaborative software engineering; software development process; models; trends; cloud computing; collaboration; systematic review</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper presents a review of trends and challenges in collaborative Software Engineering. Due to the nature and size of large-scale Software Engineering projects, effective collaboration is important and necessary. Hence, it is not uncommon to see the adoption of a remix of practices, models, methodologies, tools, and skills. However, this remix, alongside the adoption of emerging paradigms such as Cloud computing, results in factors that undermine collaborative Software Engineering projects. This paper aims to provide a systematic review and analysis of existing trends, models, and challenges, with a view towards fostering a better understanding of the factors undermining the collaborative Software Engineering process, as well as identifying motivations, gaps, and issues pertinent to this research area for a more effective process in the Cloud. A systematic approach was employed in this research; it is instrumental in identifying relevant primary studies, and its design provides a means for continuity in any future extension of this review.</description>
        <description>http://thesai.org/Downloads/FTC2017/103_Paper_515-Systematic_Review_of_Trends_and_Gaps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systems Software for Fast Inter-Machine Page Faults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901102</link>
        <id>10.14569/SpecialIssue.2018.0901102</id>
        <doi>10.14569/SpecialIssue.2018.0901102</doi>
        <lastModDate>2018-08-20T10:09:46.3200000+00:00</lastModDate>
        
        <creator>Joel Nider</creator>
        
        <creator>Mike Rapoport</creator>
        
        <creator>Yiftach Binyamini</creator>
        
        <subject>cloud; post-copy; migration; page fault; low latency; interconnect</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Cloud computing abstracts the underlying hardware details from the user. As long as the customer Service Level Agreements (SLAs) are satisfied, cloud providers and operators are free to make infrastructural decisions to optimize business objectives, such as the operational efficiency of cloud data centers. By adopting a holistic view of the data center and treating it as a single system, a cloud provider can migrate application components and virtual machines within the system according to policies such as load balancing and power consumption. We contribute to this vision by removing architectural barriers to workload migration and reducing the downtime of migrating processes. We combine the post-copy approach to workload migration with a novel specialized low-latency interconnect for handling the resulting remote page faults. In this work, we introduce a cross-architecture workload migration system, specify the requirements for the specialized interconnect, discuss design trade-offs and issues, and present our proposed SW-HW co-design.</description>
        <description>http://thesai.org/Downloads/FTC2017/102_Paper_446-Systems_Software_for_Fast_Inter-Machine_Page_Faults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defining a DSL for Transmission Pipeline Systems Metamodeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901101</link>
        <id>10.14569/SpecialIssue.2018.0901101</id>
        <doi>10.14569/SpecialIssue.2018.0901101</doi>
        <lastModDate>2018-08-20T10:09:45.7430000+00:00</lastModDate>
        
        <creator>Bunakiye Japheth</creator>
        
        <creator>Acheme I. David</creator>
        
        <subject>Formal specifications; semantic mappings; petroleum industry; pipeline design; modeling platform</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Transmission pipeline systems metamodeling is the reengineering of pre-constructed notations and abstractions of the pipeline engineering domain into a form that offers the domain expert the expressive power to create designs suited to the intended transmission pipeline project. The required formality that can provide such expressive power is a domain specific language (DSL); the domain specific modeling approach is therefore adopted to create a domain specific platform where the specification primitives represent abstractions and conceptual modeling processes in the design and implementation of transmission pipeline configurations. Domain specific languages, which are centered on metamodeling, raise the level of abstraction beyond programming by specifying the solution directly using domain concepts. The conceptual DSL definition brings to bear domain abstractions and expressive power restricted to the domain of transmission pipelines for the related products in the petroleum industry and in water supply. Consequently, this can be achieved only by taking advantage of the specific properties of the pipeline engineering application domain that pertain to transmission. The description of these specific properties therefore represents the domain concepts, which will be useful in creating the abstractions and in the semantic mappings of the elements of the DSL modeling platform.</description>
        <description>http://thesai.org/Downloads/FTC2017/101_Paper_437-Defining_a_DSL_for_Transmission_Pipeline_Systems_Metamodeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Trust in the Mobile User Experience: System Quality Characteristics Influencing Trust</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.0901100</link>
        <id>10.14569/SpecialIssue.2018.0901100</id>
        <doi>10.14569/SpecialIssue.2018.0901100</doi>
        <lastModDate>2018-08-20T10:09:45.1500000+00:00</lastModDate>
        
        <creator>Philip Lew</creator>
        
        <creator>Luis Olsina</creator>
        
        <subject>Trust; quality model; system/system-in-use quality view; mobile user experience</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>As the mobile world continues to expand into an Internet of Things and Networks of Everything, we find that our lives, while becoming more convenient, also come with ties. These ties are based on many interconnections between people and software, and it is critical to ensure that we can trust these ties in the software that we use. Modeling, evaluating, and improving the end user’s trust in mobile apps requires systematic frameworks and strategies. To this end, we propose an adaptable and flexible quality model and framework – borrowing from generally accepted ISO 25010 modeling concepts while enhancing our previous work on quality-in-use modeling – that represents the specific system quality characteristics that may influence trust from the quality-in-use standpoint. The resulting trust modeling framework can be used for the evaluation and improvement of trust in different mobile apps.</description>
        <description>http://thesai.org/Downloads/FTC2017/100_Paper_199-Modeling_Trust_in_the_Mobile_User_Experience_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pair Programming: Collocated Vs. Distributed</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090199</link>
        <id>10.14569/SpecialIssue.2018.090199</id>
        <doi>10.14569/SpecialIssue.2018.090199</doi>
        <lastModDate>2018-08-20T10:09:44.5570000+00:00</lastModDate>
        
        <creator>Mark Rajpal</creator>
        
        <subject>Agile; scrum; pair programming; extreme programming</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Collocation is almost always preferred over distribution: it makes sense that collocated team members are likely to perform better than distributed ones. In today’s world, however, distributed work is either the norm or quickly becoming the norm. That is not to say that collocation no longer exists, but rather that it is becoming less and less pronounced. Pair programming is a technique that can be performed in either a collocated or a distributed fashion. Not all software development projects use this practice, and the projects that do typically perform either collocated or distributed pair programming, but very rarely both. This paper examines a project where both types of pair programming were used. At the completion of the project, all developers were asked to complete a survey, whose results allowed us to compare various attributes of collocated and distributed pair programming. What may be surprising is that in some cases the differences between the two are minimal.</description>
        <description>http://thesai.org/Downloads/FTC2017/99_Paper_173-Pair_Programming_Collocated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Applying Software Design Patterns on Real Time Software Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090198</link>
        <id>10.14569/SpecialIssue.2018.090198</id>
        <doi>10.14569/SpecialIssue.2018.090198</doi>
        <lastModDate>2018-08-20T10:09:43.9970000+00:00</lastModDate>
        
        <creator>Wan Nurhayati Wan Ab Rahman</creator>
        
        <creator>Muhammad Ehsan Rana</creator>
        
        <subject>Design patterns; real time software; real time applications; software performance; software efficiency</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Real time applications are among the fastest growing applications in the market due to their popularity, business value, and the fact that the web is their native environment. As a result, enhancing the performance of these applications has always been a concern for the IT industry. In this research, we took a closer look at the effect of design patterns on the performance of these applications, using simulation as the research method. Two of the design patterns used by the researchers, namely the Observer and the State design patterns, proved to be more effective in terms of software efficiency.</description>
        <description>http://thesai.org/Downloads/FTC2017/98_Paper_117-The_Effect_of_Applying_Software_Design_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Herbert Test Helping Hire Software Developers: A Complex Algorithmic Problem Solving Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090197</link>
        <id>10.14569/SpecialIssue.2018.090197</id>
        <doi>10.14569/SpecialIssue.2018.090197</doi>
        <lastModDate>2018-08-20T10:09:43.4200000+00:00</lastModDate>
        
        <creator>Soraya Cardenas</creator>
        
        <subject>Algorithms; evaluations; problem-solving and hiring software developers</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Companies face challenges when hiring software developers. As hiring a developer can cost up to hundreds of thousands of dollars, depending on the expertise of the desired candidate, companies must rely on a multi-method approach to secure competent developers. This paper evaluates the “Herbert” test, an algorithmic problem-solving program currently being used to help evaluate potential software developers at Fast Track, a web development company. There appears to be a positive correlation between performance on the test and subsequent performance on the job. This longitudinal study reaffirms the need for a multi-method approach to hiring, with a special emphasis on problem-solving. Furthermore, as technical systems continue to emerge, increasing the demand for technical jobs, testing tools like “Herbert” are becoming more relevant.</description>
        <description>http://thesai.org/Downloads/FTC2017/97_Paper_65-Herbert_Test_Helping_Hire_Software_Developers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Goal based Tailoring of Quality Models for Quality Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090196</link>
        <id>10.14569/SpecialIssue.2018.090196</id>
        <doi>10.14569/SpecialIssue.2018.090196</doi>
        <lastModDate>2018-08-20T10:09:42.8400000+00:00</lastModDate>
        
        <creator>Arfan Mansoor</creator>
        
        <creator>Detlef Streitferdt</creator>
        
        <creator>Elena Rozova</creator>
        
        <creator>Qaiser Abbas</creator>
        
        <subject>Requirements; goal models; quality models; meta model</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Context and motivation: To elicit means to gather, acquire, or extract; requirements elicitation means to gather or discover requirements. This activity is performed to determine the system requirements from stakeholders, system documents, domain knowledge, and other requirements sources. Question/problem: Requirements engineering is the most important part of building a successful software system, because here we decide ‘what’ is going to be built. Wrong decisions during this phase have a negative impact on the final product. Idea: The objective is to develop a better understanding of requirements. At the start of the requirements engineering process we have only a few requirements (along with system vision statements), but at the end of this activity most of the requirements, or in the ideal scenario all of them, need to be known at the appropriate level. The idea is to propose an integrative goal-quality model for requirements. The success of a software product is highly dependent on Non-functional Requirements (NFRs). In this paper, an integrative goal model of influencing factors is presented. This helps guide the tailoring of a software quality model based on various project requirements, organizational needs, individual goals of developers, and constraints of the environment. Contribution: The influencing factors help to integrate the goal model with the quality model, and therefore support a systematic elicitation of project-specific requirements.</description>
        <description>http://thesai.org/Downloads/FTC2017/96_Paper_61-Goal_based_Tailoring_of_Quality_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Calculation of Pressure Loss Coefficients in Combining Flows of a Solar Collector using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090195</link>
        <id>10.14569/SpecialIssue.2018.090195</id>
        <doi>10.14569/SpecialIssue.2018.090195</doi>
        <lastModDate>2018-08-20T10:09:42.2800000+00:00</lastModDate>
        
        <creator>Shahzad Yousaf</creator>
        
        <creator>Imran Shafi</creator>
        
        <subject>Artificial neural network; pressure loss coefficients for solar collector; combining flow</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The paper presents a novel technique for determining pressure loss coefficients in tee junctions using an artificial neural network (ANN). Geometry and flow parameters are fed into the ANN as inputs for training the network. The efficacy of the network is demonstrated by comparing the ANN-derived and experimentally obtained pressure loss coefficients for combining flows in a tee junction. Reynolds numbers ranging from 200 to 14000 and discharge ratios varying from minimum to maximum flow were used for the calculation of pressure loss coefficients. The pressure loss coefficients calculated using the ANN are compared with models from the literature on junction flows. The results achieved after the application of the ANN agree reasonably with the experimental values.</description>
        <description>http://thesai.org/Downloads/FTC2017/95_Paper_86-Calculation_of_Pressure_Loss_Coefficients_in_Combining_Flows.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reinforced Experience Generator based on Query Nature and Data Bulking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090194</link>
        <id>10.14569/SpecialIssue.2018.090194</id>
        <doi>10.14569/SpecialIssue.2018.090194</doi>
        <lastModDate>2018-08-20T10:09:41.7030000+00:00</lastModDate>
        
        <creator>Mazhar Hameed</creator>
        
        <creator>Hiba Khalid</creator>
        
        <creator>Usman Qamar</creator>
        
        <subject>Big data; Online Analytical Processing (OLAP); Online Transactional Processing (OLTP); contextual query; scientific data management</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Technological advancement has given a more enlightened perspective on developing modern solutions. Efforts to imitate the human brain have achieved milestones that are providing promising results. However, the technicalities of, and dependencies between, different computing entities remain a concern. This research provides a module-based framework that creates a brain, or experience imitation, for itself using reinforcement learning agents. The research establishes a framework design that can connect and validate user requirements and map them according to functionality, with reasonable data retrieval and performance measures observed. Data is a concern for modern technologies, as are customer requirements. This research combines these two broad areas of research and presents a design framework that allows the agent to communicate using knowledge from data and instructions, or queries, in the form of user requests or business requirements. The experience generator then enhances subsequent performance, lessens cost and expense, and improves the overall performance of two widely separated modules through reinforcement learning and an experience or knowledge base.</description>
        <description>http://thesai.org/Downloads/FTC2017/94_Paper_249-Reinforced_Experience_Generator_based_on_Query_Nature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Cropping Patterns of Underutilized Crops using Online Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090193</link>
        <id>10.14569/SpecialIssue.2018.090193</id>
        <doi>10.14569/SpecialIssue.2018.090193</doi>
        <lastModDate>2018-08-20T10:09:41.0800000+00:00</lastModDate>
        
        <creator>Ayman Salama Mohamed</creator>
        
        <creator>Ebrahim Jahanshiri</creator>
        
        <creator>Tomas Henrique Maul</creator>
        
        <subject>Underutilised crops; data mining; Google keywords; cropping patterns</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Crops that are currently underutilized can play a major role in diversifying food sources and combating climate variability. One major obstacle to the wider adoption of these species is the lack of information on the geographic areas where they are currently grown; these crops are typically grown on marginal lands through subsistence agriculture. At present, there is no global database and no efficient procedure that allows users to acquire cropping patterns of underutilised crops. The proposed solution identifies underutilised cropping patterns using online search engine data. The target is to determine global public interest in underutilised crops over time through search engine data, identifying possible crop utilisation patterns, trends, and interest pertaining to underutilised crops over time. As a proof of concept, we collected a set of keywords synonymous with Bambara groundnut (BG) from local and international databases and research publications. Using the Google AdWords service and 40 different terms for BG, we were able to gather search event data for two years at the city level. Preliminary analysis done through a software prototype shows that the data provides new insights into how BG search events are distributed and how this data can be used to delineate the current areas where BG grows and to characterize their value chains. For evaluation purposes, we compared our BG results with the crop’s known network of BG growers and researchers, and confirmed that the results not only matched known regions of the network but also proposed several new ones that need to be evaluated. The results suggest that the proposed solution provides significant indicators of possible cropping patterns and/or research interests around the world.</description>
        <description>http://thesai.org/Downloads/FTC2017/93_Paper_229-Detecting_Cropping_Patterns_of_Underutilized_Crops.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Innovative Business Model for Online Trading of Machines’ Parameters in the Automation and Manufacturing Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090192</link>
        <id>10.14569/SpecialIssue.2018.090192</id>
        <doi>10.14569/SpecialIssue.2018.090192</doi>
        <lastModDate>2018-08-20T10:09:40.4870000+00:00</lastModDate>
        
        <creator>Ghaidaa Shaabany</creator>
        
        <creator>Reiner Anderl</creator>
        
        <subject>Manufacturing processes; machines’ parameters; new added-value; B2B business; usage control policy; licensing models; security; protection measurements; online marketplace; business model; electronic goods</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>New manufacturing processes, materials and/or environmental conditions require new machine parameters. In general, finding the right parameters for a machine demands long work effort and high costs, yet it is essential for executing every manufacturing process accurately. A company’s know-how is characterized by its ability to develop the appropriate machines’ parameters. A third party can develop a new business model based on a managed exchange of this know-how between companies. Thereby, an online marketplace is provided for trading machines’ parameters. As a result, the revenues of these companies increase because new added value is generated from existing resources, namely the previously developed and used machines’ parameters. At the same time, the high costs of finding the right machines’ parameters can be saved by purchasing them on the marketplace. These innovative ideas can be realized only by ensuring the security of the traded machines’ parameters. In this paper, a reliable, innovative business model for online trading of machines’ parameters in the automation and manufacturing sector is presented. It is based on the concept of B2B e-business. Thereby, it is essential to determine a usage control policy for traded machines’ parameters even after purchase. Thus, operation of purchased machines’ parameters on a machine is regulated by using different licensing models. Licenses are generated according to the order’s conditions for a specific machine after the purchasing process on the marketplace is completed. Before executing purchased machines’ parameters on a machine, a verification process of the machine’s ID and the parameters’ validity is required. The protection requirements over the whole trading process, as well as several security measures for transmitting the machines’ parameters to customers, are demonstrated in this paper.</description>
        <description>http://thesai.org/Downloads/FTC2017/92_Paper_499-Reliable_Innovative_Business_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meeting Halfway: Is Driverless-cars the Solution to Saudi Women Driving Ban?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090191</link>
        <id>10.14569/SpecialIssue.2018.090191</id>
        <doi>10.14569/SpecialIssue.2018.090191</doi>
        <lastModDate>2018-08-20T10:09:39.9100000+00:00</lastModDate>
        
        <creator>Mariam Elhussein</creator>
        
        <creator>Mohammed Abdulrahman Alqahtani</creator>
        
        <subject>Driverless-cars; social computing; technology readiness; autonomous cars</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The Driverless-car is a technology from the future that seems likely to become a reality sooner than expected. Countries around the world are assessing their readiness to adopt the change. For many, it is a technology that promises safer roads, a better environment, and a huge economic shift. For Saudi women, this technology might be the solution to a social dilemma that prevents women from driving themselves. This paper explores Saudi Arabia’s readiness for Driverless-cars by highlighting the unique situation of the women driving ban. More research needs to be conducted to arrive at conclusions regarding how Saudis will receive Driverless-cars.</description>
        <description>http://thesai.org/Downloads/FTC2017/91_Paper_400-Meeting_Halfway.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Transformation and Industry 4.0 as a Complex and Eclectic Change</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090190</link>
        <id>10.14569/SpecialIssue.2018.090190</id>
        <doi>10.14569/SpecialIssue.2018.090190</doi>
        <lastModDate>2018-08-20T10:09:39.3300000+00:00</lastModDate>
        
        <creator>Frank Otto</creator>
        
        <creator>Christian-Andreas Schumann</creator>
        
        <creator>Jens Baum</creator>
        
        <creator>Eric Forkel</creator>
        
        <subject>Digitalization; Digital Transformation; industry 4.0; interoperability; semantic database; quality assurance; innovation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>New technologies and possibilities summarized under the keyword Digital Transformation induce massive changes in organizations’ processes and will lead to tremendous challenges for almost all businesses. Existing business models will be expanded or replaced by digitally driven services. Industry 4.0, with its autonomous cyber-physical production systems and its intelligent products, requires holistic approaches and will lead to sustainable changes in industrial manufacturing. This paper addresses the main fields of Industry 4.0 and Digital Transformation as well as their core concepts. Further, the paper describes the key success factors: interoperability and the organizations’ ability to innovate. These key success factors are illustrated by three case studies in the fields of information supply in Facility Management, digital logistics, and intrapreneurial behavior as an important source of intra-organizational innovations.</description>
        <description>http://thesai.org/Downloads/FTC2017/90_Paper_225-Digital_Transformation_and_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Computing in Geometric Algebra Formalism: Light Beam Guide Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090189</link>
        <id>10.14569/SpecialIssue.2018.090189</id>
        <doi>10.14569/SpecialIssue.2018.090189</doi>
        <lastModDate>2018-08-20T10:09:38.7530000+00:00</lastModDate>
        
        <creator>Alexander Soiguine</creator>
        
        <subject>Quantum computing; geometric algebra; states; observables; measurements; light polarization</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Keeping in mind that existing problems of conventional quantum mechanics could arise due to a wrong mathematical structure, I suggest an alternative basic structure. The critical part of it is modifying the commonly used terms “state”, “observable”, and “measurement” and giving them a clear, unambiguous definition. This concrete definition, along with the use of a variable complex plane, is quite natural in geometric algebra terms and helps establish a feasible language for quantum computing. The suggested approach is then applied in a fiber optics quantum information transferring/processing scenario.</description>
        <description>http://thesai.org/Downloads/FTC2017/89_Paper_155-Quantum_Computing_in_Geometric_Algebra_Formalism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Management Metamodel from Social Analysis of Lessons Learned Recorded in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090188</link>
        <id>10.14569/SpecialIssue.2018.090188</id>
        <doi>10.14569/SpecialIssue.2018.090188</doi>
        <lastModDate>2018-08-20T10:09:38.1930000+00:00</lastModDate>
        
        <creator>JOSE FERNANDO LOPEZ QUINTERO</creator>
        
        <creator>CARLOS MONTENEGRO</creator>
        
        <creator>Medina V</creator>
        
        <creator>Yuri Nieto</creator>
        
        <subject>Knowledge management; personnel knowledge management; lessons learned; semantic analysis; cloud computing; social networks; machine learning</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This article describes the development of a prototype metamodel for Personal Knowledge Management (GCP), defined and implemented based on “Lessons Learned” registered on a mass-use social network. The functional architecture is applied in the implementation of a system for registering personal lessons learned in the cloud, through a social network: Facebook. The process begins with the acquisition of data from a connection to a non-relational database (NoSQL), on which a complementary algorithm has been set up for the semantic analysis of the recorded information on the lessons learned, in order to study the generation of Organizational Knowledge Management (GCO) from the GCP. The end result is the actual implementation of a functional architecture that integrates a web 2.0 application and a semantic analysis algorithm for unstructured information, using machine learning techniques, to demonstrate a way to perform organizational knowledge management through personal knowledge management.</description>
        <description>http://thesai.org/Downloads/FTC2017/88_Paper_6-Knowledge_Management_Metamodel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Biosecurity Concentration for Interdisciplinary Majors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090187</link>
        <id>10.14569/SpecialIssue.2018.090187</id>
        <doi>10.14569/SpecialIssue.2018.090187</doi>
        <lastModDate>2018-08-20T10:09:37.6170000+00:00</lastModDate>
        
        <creator>Hongmei Chi</creator>
        
        <creator>Satyanarayan Dev</creator>
        
        <subject>Biosecurity; biosafety; cybersecurity; information assurance; active learning</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This manuscript deals with the design and expansion of a highly successful cybersecurity program in a minority-serving university through contemporary education and training methods in biosecurity for students who major in other disciplines, such as biological systems engineering and biotechnology. The key efforts will focus on curriculum development by means of hands-on laboratory-based instruction as well as research-based technological development for biosecurity. This is accomplished by collaborative work involving two academic departments, namely, Computer &amp; Information Sciences (CIS) and Biological Systems Engineering (BSE). The product is a cross-disciplinary concentration in biosecurity leading to a professional certification, which is ideal for undergraduate students and professionals with a biology background. Although this approach is modelled for a minority-serving university, it can be replicated in other institutions of higher learning as well.</description>
        <description>http://thesai.org/Downloads/FTC2017/87_Paper_332-Designing_Biosecurity_Concentration_for_Interdisciplinary_Majors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Requirements Model for an Integrated Attendance Monitoring System (IAMS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090186</link>
        <id>10.14569/SpecialIssue.2018.090186</id>
        <doi>10.14569/SpecialIssue.2018.090186</doi>
        <lastModDate>2018-08-20T10:09:37.0370000+00:00</lastModDate>
        
        <creator>Joshua C. Nwokeji</creator>
        
        <creator>Anugu Apoorva</creator>
        
        <creator>Ayodele Olagunju</creator>
        
        <creator>Steve Frezza</creator>
        
        <subject>Learning management system; attendance management systems; feature tree model; student engagement</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Attendance monitoring systems (AMS) are important educational systems used to monitor student attendance and interest in a given course. The expected benefits of AMS are, among others, to improve student engagement, performance, and retention. The functionalities of most traditional AMS are, however, limited to recording and reporting attendance. Beyond these, they provide little or no other functionality capable of streamlining student engagement, performance, and retention. To fully realize their expected benefits and meet contemporary pedagogical needs, traditional AMS can benefit from extended innovative functionalities such as ‘Automatic Disengagement Notification’ and ‘Attendance Grader’. But the implementation of these functionalities would depend on predefined systems requirements, which unfortunately are very scarce, if available at all, in extant software engineering literature. The significant amount of work, resources, and cost required to develop systems requirements, especially for the optimization of expected benefits, can discourage software/education systems developers from developing such innovative functionalities. We contribute to addressing these limitations by identifying, modeling, and describing functional requirements for an integrated AMS. These requirements can be adopted and re-used by AMS developers, thus reducing the time, cost, and other resources expended in requirements development.</description>
        <description>http://thesai.org/Downloads/FTC2017/86_Paper_329-Requirements_Model_for_an_Integrated_Attendance_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Virtual Breadboard: Helping Students to Learn Electrical Engineering at a Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090185</link>
        <id>10.14569/SpecialIssue.2018.090185</id>
        <doi>10.14569/SpecialIssue.2018.090185</doi>
        <lastModDate>2018-08-20T10:09:36.4600000+00:00</lastModDate>
        
        <creator>Lori Scarlatos</creator>
        
        <creator>Ahmad Pratama</creator>
        
        <creator>Tatiana Tchoubar</creator>
        
        <subject>Virtual breadboard; circuit design; guide-on-the-side; distance learning; digital hints; user interface</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper presents an interactive virtual breadboard system that provides automated guidance to electrical engineering students working on electronic circuits labs. The primary contribution of the paper is the unique invariant representation of the state of the breadboard, which enables instructors to develop their own lab assignments with a set of customized hints. The paper describes the invariant state representation and its implementation in the virtual breadboard system. It also presents results of a pilot test demonstrating that using this system achieves the goal of reducing the workload of teaching assistants.</description>
        <description>http://thesai.org/Downloads/FTC2017/85_Paper_272-The_Virtual_Breadboard_Helping_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework to Implement a U-Learning Service based on Software-Defined Television</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090184</link>
        <id>10.14569/SpecialIssue.2018.090184</id>
        <doi>10.14569/SpecialIssue.2018.090184</doi>
        <lastModDate>2018-08-20T10:09:35.9000000+00:00</lastModDate>
        
        <creator>Gustavo Moreno L&#243;pez</creator>
        
        <creator>Jovani Jim&#233;nez</creator>
        
        <subject>U-learning; cloud computing; framework; software-defined TV; multi-screen</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper aims to propose a framework to implement a ubiquitous learning service based on software-defined television (Sw-de TV) under the approach of software-defined everything and cloud computing. The lack of u-learning frameworks and the limited convergence of infrastructure and flexibility in educational contexts are some of the challenges to overcome. Here, we present the general framework and an experimental test. The experimental results indicated a satisfactory performance of video display on different screens, and a very high relevance for application in an educational context. One of the test conclusions is that video processing platforms defined by software offer more scalability and flexibility than a conventional television (TV) infrastructure. Such platforms make it possible to adapt content to different screens, favoring the implementation of a ubiquitous learning service in which users can choose the moment, place, and device for performing a learning activity, with video as its main content.</description>
        <description>http://thesai.org/Downloads/FTC2017/84_Paper_257-A_Framework_to_Implement_a_U-Learning_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Higher Education Experiment to Motivate the Use of Gamification Technique in Agile Development Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090183</link>
        <id>10.14569/SpecialIssue.2018.090183</id>
        <doi>10.14569/SpecialIssue.2018.090183</doi>
        <lastModDate>2018-08-20T10:09:35.3230000+00:00</lastModDate>
        
        <creator>Rula Al Azawi</creator>
        
        <creator>Dawood Al Ghatarifi</creator>
        
        <creator>Aladdin Ayesh</creator>
        
        <subject>Gamification; Agile methodology; mobile application</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Student motivation and engagement difficulties are present in higher education. Among the many technologies for increasing student motivation and engagement, we found the Gamification technique to be the most suitable. This paper presents our experiment of using Gamification in the learning process, based on the use of the Agile methodology, in order to obtain the best results and engagement from the students. Applying Gamification in software engineering is not as straightforward as it may appear. Current research in the area has already recognized the possible use of Gamification in the context of software development, but how to design and use Gamification in this context remains an open area of research. Higher education universities, especially in the Middle East, sometimes face problems in achieving student engagement and motivation in group structures. In support of the proposed idea, we present a preliminary experiment that shows the effect of Gamification on the performance and involvement of students in a project funded by TRC (The Research Council) in the Sultanate of Oman.</description>
        <description>http://thesai.org/Downloads/FTC2017/83_Paper_246-A_Higher_Education_Experiment_to_Motivate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>To Flip or Not to Flip</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090182</link>
        <id>10.14569/SpecialIssue.2018.090182</id>
        <doi>10.14569/SpecialIssue.2018.090182</doi>
        <lastModDate>2018-08-20T10:09:34.7600000+00:00</lastModDate>
        
        <creator>Steven Billis</creator>
        
        <creator>Nada Anid</creator>
        
        <subject>Just-in-Time Teaching (JiTT); Livescribe™; minute paper; muddiest point</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The flipped classroom is gaining popularity as a teaching strategy that allows instructors to create an active learning environment. It focuses the responsibility of learning on the students and changes their role from listeners to learners. In a previous paper the authors presented an example of a flipped-classroom approach to a one-semester “Fundamentals of Digital Design” required course for Electrical and Computer Engineering majors, in order to lower its failure rate and to further motivate students so as to reduce overall attrition. The authors used the Livescribe™ paper-based computing platform, which consists of a digital pen, Anoto™ digital paper, software applications, and developer tools, to create the online recorded lectures and problems, which were uploaded to “Blackboard” for students to view and solve at home. The authors used this technology, as well as the concept of “Just-in-Time Teaching” (JiTT), to provide the “feedback loop” that shapes what happens during the subsequent in-class time together. The authors concluded that while the flipped version of EENG 125 “Fundamentals of Digital Logic” succeeded in improving student retention, and while the approach was popular with students, with respect to class averages and standard deviations the results were not much better than in a traditional classroom that incorporated a high level of active learning activities. As a result, the authors decided to incorporate heterogeneous student groupings, so as to provide each student an opportunity to work through problems both independently and in collaboration with their peers, as well as Out-of-Class Assessment Techniques (OoCATs) such as the “Minute Paper” and the “Muddiest Point” to provide the authors with useful feedback on the recorded lectures and problem-solving assignments. The authors would then assess this new flipped version of EENG 125 against the traditional and active learning versions of the course.</description>
        <description>http://thesai.org/Downloads/FTC2017/82_Paper_213-To_Flip_or_Not_to_Flip.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Some Aspects of Teaching Processes Computerization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090181</link>
        <id>10.14569/SpecialIssue.2018.090181</id>
        <doi>10.14569/SpecialIssue.2018.090181</doi>
        <lastModDate>2018-08-20T10:09:34.1670000+00:00</lastModDate>
        
        <creator>Svetsky Stefan</creator>
        
        <creator>Moravcik Oliver</creator>
        
        <subject>automation of teaching processes; knowledge representation; technology-enhanced learning; educational technology; cybernetics</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>From the point of view of teaching processes computerization, there is an absence of state-of-the-art approaches to their automation derived from “knowledge” as the key element of teaching and learning (including human knowledge transmission taking place in the framework of communication and feedback). This contribution presents such an approach, dealing with how automation is solved when computerizing teaching processes. It has been developed within the published long-term research on technology-enhanced learning and works in teaching practice (it is applied to teaching bachelor students). The approach is based on the design of the “virtual knowledge unit” as the default data structure, which is both human and machine readable. This enables the teacher not only to process learning content but also to automate communication and feedback, thanks to the possibility of transmitting the knowledge, i.e. the teaching and learning content, between offline and online environments by using the virtual knowledge unit. The virtual knowledge unit isomorphically joins the mental processes of humans with the physical processes of machines. From the practical point of view, the data structure is handled and controlled by the in-house software BIKEE. This database application enables a teacher to solve any kind of teaching and learning process tailor-made for him. Based on teaching practice, this approach seems to be beyond the state of the art. This can be concluded because a registered utility model, based on the use of virtual knowledge, is used for the knowledge transfer. Some aspects of teaching processes computerization are discussed regarding the automation of mental processes. In this context, current research is focused on the development of an educational robot.</description>
        <description>http://thesai.org/Downloads/FTC2017/81_Paper_103-Some_Aspects_of_Teaching_Processes_Computerization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Leap in Accelerated Computing: The Quest of the Missing Links between Quantum Annealer and HPC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090180</link>
        <id>10.14569/SpecialIssue.2018.090180</id>
        <doi>10.14569/SpecialIssue.2018.090180</doi>
        <lastModDate>2018-08-20T10:09:33.5730000+00:00</lastModDate>
        
        <creator>Liwen Shih</creator>
        
        <subject>Quantum annealing; adaptive parallel software/hardware mapping; algorithm-specific processor scheduling; topology-aware network scheduling; optimization</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>With the arrival of the Adiabatic Quantum Annealing Computer (QAC) era, there are ample opportunities to search for the missing links between QACs and HPC (High-Performance Computing). QACs such as D-Wave 2000Q systems are analog quantum annealers capable of instantly zooming in on optimal solutions. We are optimistically aiming at broadening the perspective and impact of QAC by harvesting QAC progress to potentially benefit nearly every HPC application through optimized software/hardware mapping. The narrowed-down, fittest processor schedules found through quantum (or hybrid classical simulated) annealing search enhancement can then be further compared and fine-tuned to run computation more efficiently in production mode on target HPC systems. With our novel perspective of linking QAC and HPC for a broader application impact, we hope to encourage more and varied development of emerging quantum computing endeavors, eventually making manual tweaking of various problem solving, including parallel programming, unnecessary.</description>
        <description>http://thesai.org/Downloads/FTC2017/80_Paper_501-Quantum_Leap_in_Accelerated_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Quantum Refactoring: Self-Organizing Parallel Resource Mapping with Computation Matrix Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090179</link>
        <id>10.14569/SpecialIssue.2018.090179</id>
        <doi>10.14569/SpecialIssue.2018.090179</doi>
        <lastModDate>2018-08-20T10:09:32.9970000+00:00</lastModDate>
        
        <creator>Liwen Shih</creator>
        
        <subject>Automatic Software-Hardware Resource Mapping; Self-Organized Code Refactoring; Permutation; Causal Matrix Model of Computation; Data-Flow Discovery; Quantum Annealing Optimization</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>As Quantum Annealing Computers (QACs) like D-Wave 2000Q Adiabatic Quantum Systems emerge, we aim to investigate the potential synergy between QAC and HPC as we push toward exascale supercomputing, where manual parallel programming for millions of processor cores will become prohibitive. Quantum Refactoring is proposed here only as a possible future concept (not yet implemented on QACs) to automatically tweak the code sequence more efficiently than through repeated manual pair-wise operation swaps, optimizing computation speed, memory storage, hit ratio, cost, reliability, power and/or energy saving. To facilitate auto code refactoring suitable for such annealing optimization, a self-organizing matrix transform is proposed in this paper, so that QAC can be applied to automatic code sequence permutation via a computation matrix transform model, toward optimized matching between computation and parallel processor cores. The mathematical model to achieve these goals draws on the causal set properties of the Matrix Model of Computation (MMC). A sequence of transformations acts as the code refactoring to compact code regions, serving as computation decomposition for parallel multi-core/multi-thread execution. Besides the improved software/hardware matching, the self-organizing matrix approach also serves as a novel paradigm for automatic parallel programming, as well as a systematic tool for formal design modeling.</description>
        <description>http://thesai.org/Downloads/FTC2017/79_Paper_500-Toward_Quantum_Refactoring_Self-Organizing_Parallel_Resource_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Multi-Dimensional Metrics through Task Scheduling in Cloud Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090178</link>
        <id>10.14569/SpecialIssue.2018.090178</id>
        <doi>10.14569/SpecialIssue.2018.090178</doi>
        <lastModDate>2018-08-20T10:09:32.4370000+00:00</lastModDate>
        
        <creator>Deepak Puthal</creator>
        
        <subject>Cloud computing; task scheduling; energy consumption; makespan; throughput; simulation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Cloud-based data centers consume a considerable amount of energy, which makes them expensive to operate. The virtualization technique helps to overcome various issues, including the energy issue. Because of the dynamic nature of workloads, task consolidation is an effective technique to decrease the total number of servers and unnecessary migrations, and consequently to optimize energy. Effective task allocation techniques are key to optimizing several performance parameters in the cloud system. This paper presents a novel task consolidation technique to achieve an optimal energy-makespan-throughput balance in the cloud data center. We evaluate the performance of our proposed algorithm through simulation analysis in the Java-based CloudSim simulator environment. The performance evaluation results certify that our proposed algorithm reduces energy consumption compared to existing standard algorithms, and optimizes the makespan and throughput of the cloud data center.</description>
        <description>http://thesai.org/Downloads/FTC2017/78_Paper_490-Optimization_of_Multi-Dimensional_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Graph Mapping of General Purpose Applications on a Neuromorphic Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090177</link>
        <id>10.14569/SpecialIssue.2018.090177</id>
        <doi>10.14569/SpecialIssue.2018.090177</doi>
        <lastModDate>2018-08-20T10:09:31.8430000+00:00</lastModDate>
        
        <creator>Indar Sugiarto</creator>
        
        <creator>Pedro Campos</creator>
        
        <creator>Nizar Dahir</creator>
        
        <creator>Gianluca Tempesti</creator>
        
        <subject>Task graph; mapping; neuromorphic; Spiking Neural Network Architecture (SpiNNaker)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A task graph is an intuitive way to represent the execution of parallel processes in many modern computing platforms. It can also be used for performance modeling and simulation in a network of computers. Common implementations of task graphs usually involve a form of message passing protocol, which depends on a standard message passing library in the existing operating system. Not every emerging platform has such support from mainstream operating systems; one example is the Spiking Neural Network Architecture (SpiNNaker) system, a neuromorphic computer originally intended as a brain-style information processing system. As a massive many-core computing system, SpiNNaker not only offers abundant processing resources, but also a low-power and flexible application-oriented platform. In this paper, we present an efficient mapping strategy for a task graph on a SpiNNaker machine. The method relies on the existing low-level SpiNNaker kernel that provides direct access to the SpiNNaker elements. As a result, a fault-tolerance-aware task graph framework suitable for high-performance computing can be achieved. The experimental results show that SpiNNaker offers very low communication latency and demonstrate that the mapping strategy is suitable for large task graph networks.</description>
        <description>http://thesai.org/Downloads/FTC2017/77_Paper_318-Task_Graph_Mapping_of_General_Purpose.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Green Programming Model for Cloud Software Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090176</link>
        <id>10.14569/SpecialIssue.2018.090176</id>
        <doi>10.14569/SpecialIssue.2018.090176</doi>
        <lastModDate>2018-08-20T10:09:31.2800000+00:00</lastModDate>
        
        <creator>Ah-Lian Kor</creator>
        
        <creator>Colin Pattinson</creator>
        
        <subject>Energy efficiency; green computing; programming model; energy efficient cloud</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Cloud computing aims to deliver more energy efficient computing provision. The potential advantages are primarily based on the opportunities to achieve economies of scale through resource sharing: in particular, by concentrating data storage and processing within data centres, where energy efficiency and measurement are well-established activities. However, this addresses only a part of the overall energy cost of the totality of the cloud, because energy is also required to power the networking connections and the end user systems through which access to the data centre is provided. The impact of application software behaviour on the overall system's energy use within a cloud is less understood. This is of particular concern when one considers the current trend towards "off the shelf" applications accessed from application stores. This mass market for complete applications, or for code segments which are included within other applications, creates a very real need for that code to be as efficient as possible, since even small inefficiencies, when massively duplicated, will result in significant energy loss. This position paper identifies this problem and proposes a supporting tool which will indicate to software developers the energy efficiency of their software as it is developed. Fundamental to the delivery of any workable solution is the measurement and selection of suitable metrics; we propose appropriate metrics and indicate how they may be derived and applied within our proposed system. Addressing the potential cost of application development is fundamental to achieving energy saving within the cloud, particularly as the application store model gains acceptance.</description>
        <description>http://thesai.org/Downloads/FTC2017/76_Paper_297-A_Green_Programming_Model_for_Cloud_Software_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Topological Structure for Parallelizing Multicomputer Cluster</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090175</link>
        <id>10.14569/SpecialIssue.2018.090175</id>
        <doi>10.14569/SpecialIssue.2018.090175</doi>
        <lastModDate>2018-08-20T10:09:30.7370000+00:00</lastModDate>
        
        <creator>Deepak Sharma</creator>
        
        <subject>Parallel cluster; wireless communication network; broadcasting data; access point; multi-computers; Multiple Ethernet Interface Card</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Parallel computation, an extension to multiprogramming architectures, is usually structured as a tightly coupled organization of multiple CPU cores. Systems under such configurations require considerable effort to manage multiple tasks simultaneously. Operating systems for such hardware follow several real-time constraints in order to enhance system performance. Normally, the operating system designates one processor as a controller, which acts as a load scheduler for the others and performs balancing when system performance degrades due to overloading of some of the CPU cores. Apart from tightly coupled systems, another cost-effective way to achieve parallelism is to interconnect multi-computers as a network cluster. The advantage of this loosely coupled system is that it is under programmatic control: low-level socket connections are created to make machine-to-machine communication possible. This work focuses on the Multi-Ethernet Wired LAN Cluster (MEWC) and the Broadcasting Wireless Access LAN Cluster (BWAC) for executing multiple tasks like a grid. Further, the work analyzes both wired and wireless clusters along with some factors considered in the communication network. Wireless networks of multi-computers have the advantage of transmission speed: in a wireless cluster, enhanced data transmission speed is achieved because of the effect of broadcasting links, whereas a wired network relies on a single communication link, although Multi-Ethernet distribution may be adopted for improvement. Even so, a wireless LAN offers many advantages.</description>
        <description>http://thesai.org/Downloads/FTC2017/75_Paper_203-Topological_Structure_for_Parallelizing_Multicomputer_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ways of Development of Computer Technologies to Perspective Nano</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090174</link>
        <id>10.14569/SpecialIssue.2018.090174</id>
        <doi>10.14569/SpecialIssue.2018.090174</doi>
        <lastModDate>2018-08-20T10:09:30.1900000+00:00</lastModDate>
        
        <creator>Kateryna Lavryshcheva</creator>
        
        <creator>I.B. Petrov</creator>
        
        <subject>Nano elements; computer technology; engineering; technology systems; assembling modules; molecules; small computers; Internet of Things</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The ways of development from the first computers to modern computers, which are assembled from transistors, chips, integrated microcircuits on crystals, etc., are considered. New substances, materials, electronic devices, and manipulators from new micro- and nano-elements are created. It is noted that computer technologies enable people to communicate (mail, video, Skype, etc.) and improve the everyday life of each member of the world community. An analysis of the development of computer engineering is given, and the technologies of modern supercomputers and their future development are determined. A number of operating supercomputers for simulation and digital manufacturing in industry, energy, transport, and construction are presented. The ways of development of the Internet, life-cycle technology lines, ontologies, and smart products in the fields of e-science (biology, genetics, physics, medicine, energy, etc.) and industry are discussed.</description>
        <description>http://thesai.org/Downloads/FTC2017/74_Paper_79-Ways_of_Development_of_Computer_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Service Protection in Fog-to-Cloud (F2C) Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090173</link>
        <id>10.14569/SpecialIssue.2018.090173</id>
        <doi>10.14569/SpecialIssue.2018.090173</doi>
        <lastModDate>2018-08-20T10:09:29.6430000+00:00</lastModDate>
        
        <creator>Vitor Barbosa Souza</creator>
        
        <creator>Wilson Ram&#237;rez</creator>
        
        <creator>Xavier Masip-Bruin</creator>
        
        <creator>Eva Mar&#237;n-Tordera</creator>
        
        <subject>Cloud computing; fog computing; fog-to-cloud computing; Internet of Things; service protection</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Internet of Things (IoT) services are unstoppably demanding more computing and storage resources. Aligned with this trend, cloud and fog computing emerged as the appropriate paradigms to meet such IoT service demands. More recently, a new paradigm, so-called fog-to-cloud (F2C) computing, promises to make the most out of both fog and cloud, paving the way to new IoT service development. Nevertheless, the benefits of F2C architectures may be diminished by failures affecting the computing commodities. In order to withstand possible failures, the design of novel protection strategies specifically tailored to distributed computing scenarios is required. In this paper, we study the impact of distinct protection strategies on several key performance aspects, including service response time and usage of computing resources. Numerical results indicate that under distinct failure scenarios, F2C significantly outperforms the conventional cloud.</description>
        <description>http://thesai.org/Downloads/FTC2017/73_Paper_4-Towards_Service_Protection_in_Fog-to-Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Response-Aware Scheduling of Big Data Applications in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090172</link>
        <id>10.14569/SpecialIssue.2018.090172</id>
        <doi>10.14569/SpecialIssue.2018.090172</doi>
        <lastModDate>2018-08-20T10:09:29.1130000+00:00</lastModDate>
        
        <creator>Deepak Puthal</creator>
        
        <subject>Scheduling; cloud computing; task allocation; allocation time; execution time</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Cloud infrastructures afford a proper environment for the execution of large-scale big data applications. The scheduling of a substantial number of tasks in a heterogeneous multi-tenant cloud environment is one of the most significant research challenges in the current era. The major challenges of task allocation are to optimize the overall completion time, cost of execution, and tardiness, and to utilize idle cloud resources effectively. In this paper, we propose a novel scheduling algorithm for task allocation of cloud resources that optimizes the overall execution time by minimizing response time. In order to assess the effectiveness of our proposed algorithm, we compare our solution with six standard competing algorithms for the optimization of performance metrics in the cloud environment. The results confirm that our proposed algorithm performs better than the other state-of-the-art algorithms in terms of response time (allocation time), makespan, and total execution time.</description>
        <description>http://thesai.org/Downloads/FTC2017/72_Paper_488-Response-Aware_Scheduling_of_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using PseudoGravity to Attract People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090171</link>
        <id>10.14569/SpecialIssue.2018.090171</id>
        <doi>10.14569/SpecialIssue.2018.090171</doi>
        <lastModDate>2018-08-20T10:09:28.5670000+00:00</lastModDate>
        
        <creator>Soo Ling Lim</creator>
        
        <creator>Peter J Bentley</creator>
        
        <subject>Social media; Twitter; artificial intelligence; automated engagement; marketing; online survey; e-publishing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>We introduce the PseudoGravity tool, an automated social media system that establishes a social media presence in the area of interest of a target audience, identifies target users that are open to connect, engages with them, and elicits a complex response and time investment from them. In this work, we use Twitter as the social media platform and an extensive survey as the activity requiring time investment. We evaluate the tool by using it to find and survey a challenging target, science fiction authors, and compare its results with other methods of automated online surveys. In 28 months, the Twitter account managed by the tool gained more than 12,000 followers and achieved monthly Tweet impressions of more than 250,000. The tool also achieved a high survey response rate of 71% and a completion rate of 83%, compared to the 30% and 47% achieved by typical online surveys, and high numbers of words and characters entered for questions that required free-text input. In addition, this work successfully surveyed more than 500 science fiction writers and gained new understanding of the challenges that e-publishing is bringing to their profession.</description>
        <description>http://thesai.org/Downloads/FTC2017/71_Paper_433-Using_PseudoGravity_to_Attract_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>I Know What You Felt Last Festival</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090170</link>
        <id>10.14569/SpecialIssue.2018.090170</id>
        <doi>10.14569/SpecialIssue.2018.090170</doi>
        <lastModDate>2018-08-20T10:09:28.0370000+00:00</lastModDate>
        
        <creator>Miguel Nunez-del-Prado</creator>
        
        <creator>Juandiego Morzan-Samame</creator>
        
        <creator>Hugo Alatrista-Salas</creator>
        
        <subject>Sentiment analysis; machine learning algorithms; festival; survey analysis</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Festivals are an important leisure activity in human life. Festival organizers are interested in offering quality activities that allow them to position themselves in the entertainment market. To achieve this aim, organizers use surveys to obtain a global opinion of the participants, focusing on three key points: motivation, perception, and valuation. This method is tedious to perform and time-consuming. In this work, we present a complete process for automatically obtaining an overall appreciation of a festival from tweets shared by participants. The aim of this contribution is to replace surveys with the textual analysis of messages posted on social networks. The precision obtained in our experiments highlights the relevance of our proposal.</description>
        <description>http://thesai.org/Downloads/FTC2017/70_Paper_284-I_Know_What_You_Felt_Last_Festival.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Technology Forecasting Framework Enhanced via Twitter Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090169</link>
        <id>10.14569/SpecialIssue.2018.090169</id>
        <doi>10.14569/SpecialIssue.2018.090169</doi>
        <lastModDate>2018-08-20T10:09:27.4900000+00:00</lastModDate>
        
        <creator>Anthony Breitzman</creator>
        
        <creator>Patrick Thomas</creator>
        
        <subject>Twitter mining; text mining; emerging technologies</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The amount of information available on new technologies has risen sharply in recent years. In turn, this has increased interest in automated tools to mine this information for useful insights into technology trends, with a particular focus on locating emerging, breakthrough technologies. This paper first outlines an automated framework for technology forecasting developed for the Department of Defense. It then proposes various enhancements to this framework, focusing in particular on utilizing social media data more effectively. Specific topics covered include technology forecasting via Twitter trusted sources and via identification of authoritative Twitter handles. Beyond improving the framework itself, the techniques described in this paper may also be of general interest to researchers using social media data, particularly for technology forecasting.</description>
        <description>http://thesai.org/Downloads/FTC2017/69_Paper_228-A_Technology_Forecasting_Framework_Enhanced.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Vocabulary for Big Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090168</link>
        <id>10.14569/SpecialIssue.2018.090168</id>
        <doi>10.14569/SpecialIssue.2018.090168</doi>
        <lastModDate>2018-08-20T10:09:26.9300000+00:00</lastModDate>
        
        <creator>Lyublyana Turiy</creator>
        
        <subject>Big data analytics; domain; controlled vocabulary; keyword; content analysis</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The explosion in Big Data Analytics research provides a massive amount of software capabilities, publications, and conference proceedings, making it difficult to sift through and inter-relate it all. A vast amount of new terminology and professional jargon has been created and adopted for use in recent years. It is not only important to comprehend the meaning of terms, but also to understand how they contrast and synergize with one another. This paper addresses the need for a consistent vocabulary for the newly growing domain of Big Data Analytics. Understanding and adoption of a common, consistent vocabulary promotes interdisciplinary communication and collaboration and removes entrance barriers for anyone entering the growing world of Big Data Analytics. Using a step-by-step algorithm based on bibliometric and content analyses of existing peer-reviewed literature, a sample Big Data Analytics vocabulary is built. The approach includes storing terms in a relational database and being able to retrieve and visualize co-related terms, thus establishing connections between them. The step-by-step procedure described in the paper involves: 1) collection of data; 2) data manipulation, such as elimination of duplicates and identification of synonyms, grammatical forms of the same root words, and variations in spelling; 3) calculation of the frequency of use of each term; and finally 4) generation of various reports, including most frequently used terms per paper or per narrower category, or, in a future release, identification of the most likely category based on the combination of co-located terms. For this current proof-of-concept effort, due to complexities and exceptions when dealing with natural language (English), some steps of this process cannot be fully automated and hence require manual verification or adjustment, although considerable effort was made to minimize the amount of human intervention.
The procedure can be repeated periodically with relative ease to observe and report possible changes in the dynamic field of Big Data Analytics and discover newly created vocabulary. Big Data Analytics was chosen for this project because it is a not yet thoroughly documented but fast growing field with a critical mass of published works already accumulated. This paper hopes to help with the creation of educational materials and the demarcation of the domain, while encouraging full research coverage in Big Data Analytics, by promoting the discovery and articulation of common principles and solutions.</description>
        <description>http://thesai.org/Downloads/FTC2017/68_Paper_205-Building_Vocabulary_for_Big_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intent Detection through Text Mining and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090167</link>
        <id>10.14569/SpecialIssue.2018.090167</id>
        <doi>10.14569/SpecialIssue.2018.090167</doi>
        <lastModDate>2018-08-20T10:09:26.3830000+00:00</lastModDate>
        
        <creator>El Sayed Mahmoud</creator>
        
        <creator>Samantha Akulick</creator>
        
        <subject>Intent detection; text mining; support vector machines; N-grams; parts of speech</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This work investigated the use of n-grams, parts of speech, and support vector machines for detecting customer intents in user-generated content. The work demonstrated a categorization of customer intents that is concise and useful for business purposes. We examined possible sources of text posts to be analyzed using three text mining algorithms. We presented the three algorithms and the results of testing them in detecting six different intents. This work established that intent detection can be performed on text posts with approximately 61% accuracy.</description>
        <description>http://thesai.org/Downloads/FTC2017/67_Paper_148-Intent_Detection_through_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Data Structure for Fast Join Query Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090166</link>
        <id>10.14569/SpecialIssue.2018.090166</id>
        <doi>10.14569/SpecialIssue.2018.090166</doi>
        <lastModDate>2018-08-20T10:09:25.8370000+00:00</lastModDate>
        
        <creator>Mohammed Hamdi</creator>
        
        <creator>Feng Yu</creator>
        
        <creator>Sarah Alswedani</creator>
        
        <creator>Wen-Chi Hou</creator>
        
        <subject>Query processing; join queries; equi-join; semi-join; outer-join; anti-join; set operations</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In this research, we propose to store the equi-join relationships of tuples on inexpensive and space-abundant devices, such as disks, to facilitate query processing. The equi-join relationships are captured, grouped, and stored as various tables on disks, which are collectively called the Join Core. Queries involving arbitrary legitimate sequences of equi-joins, semi-joins, outer-joins, anti-joins, unions, differences, and intersections can all be answered quickly by merely merging these tables, without having to perform joins. The Join Core can also be updated dynamically. Preliminary experimental results showed that all test queries began to generate results instantly, and many completed instantly too. The proposed methodology can be very useful for queries with complex joins of large relations, as there are fewer or even no relations or intermediate results that need to be retrieved or generated.</description>
        <description>http://thesai.org/Downloads/FTC2017/66_Paper_63-An_Efficient_Data_Structure_for_Fast_Join_Query_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Super Generalized Central Limit Theorem: Limit Distributions for Sums of Non-Identical Random Variables with Power-Laws</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090165</link>
        <id>10.14569/SpecialIssue.2018.090165</id>
        <doi>10.14569/SpecialIssue.2018.090165</doi>
        <lastModDate>2018-08-20T10:09:25.3070000+00:00</lastModDate>
        
        <creator>Masaru Shintani</creator>
        
        <creator>Ken Umeno</creator>
        
        <subject>Power-law; big data; limit distribution</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In nature and societies, the power-law is ubiquitously present, so it is important to investigate the characteristics of power-laws in the recent era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power-laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws, such that the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.</description>
        <description>http://thesai.org/Downloads/FTC2017/65_Paper_42-Super_Generalized_Central_Limit_Theorem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lifting the Veil: Visualizing Sentient Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090164</link>
        <id>10.14569/SpecialIssue.2018.090164</id>
        <doi>10.14569/SpecialIssue.2018.090164</doi>
        <lastModDate>2018-08-20T10:09:24.7600000+00:00</lastModDate>
        
        <creator>Andreas Bueckle</creator>
        
        <creator>Katy Borner</creator>
        
        <creator>Philip Beesley</creator>
        
        <creator>Matthew Spremulli</creator>
        
        <subject>Intelligent interactive systems; engineering; information visualization; microprocessor; 3D; Internet of Things; mobile applications</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Increasingly, our everyday environments are becoming more connected and "smart". Intelligent Interactive Systems (IIS) is an umbrella term describing environments characterized by their ability to process data and generate responsive behavior using sensors, actuators, and microprocessors. Sentient Architecture generates an artful, imaginative, and engaging environment in which we can experiment with and observe human behavior and capabilities when confronted with IIS. This paper outlines a user study to test the value of a 3D augmented reality visualization which shows data flows and bursts of activity in a Sentient Architecture sculpture named "Sentient Veil" in Boston, MA. Hence, our visualization is fittingly titled Lifting the Veil.</description>
        <description>http://thesai.org/Downloads/FTC2017/64_Paper_481-Lifting_the_Veil.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Randomized Heuristic Algorithm for Cyclic Routing of UAVs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090163</link>
        <id>10.14569/SpecialIssue.2018.090163</id>
        <doi>10.14569/SpecialIssue.2018.090163</doi>
        <lastModDate>2018-08-20T10:09:24.2130000+00:00</lastModDate>
        
        <creator>Cheng Siang Lim</creator>
        
        <creator>Shell Ying Huang</creator>
        
        <subject>Single Unmanned Aerial Vehicle (UAV); cyclic routing; randomization; heuristic</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Unmanned Aerial Vehicles (UAVs) have been increasingly used in military and civilian applications. Even though UAV routing problems have similarities with Vehicle Routing Problems, there are still many problems for which effective and efficient solutions are lacking. We propose a randomized heuristic algorithm for the cyclic routing of a single UAV. The UAV is required to visit a set of target areas where the time interval between consecutive visits to each area cannot exceed its relative deadline. This PSPACE-complete problem has solutions whose length may be exponential. Our algorithm tries to compute a feasible cyclic route while keeping the cycle time short. Tests on 57 instances of the problem show that the algorithm has good effectiveness and efficiency.</description>
        <description>http://thesai.org/Downloads/FTC2017/63_Paper_453-A_Randomized_Heuristic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leveraging Different Learning Rules in Hopfield Nets for Multiclass Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090162</link>
        <id>10.14569/SpecialIssue.2018.090162</id>
        <doi>10.14569/SpecialIssue.2018.090162</doi>
        <lastModDate>2018-08-20T10:09:23.6370000+00:00</lastModDate>
        
        <creator>Pooja Agarwal</creator>
        
        <creator>Abhijit J. Thophilus</creator>
        
        <creator>Arti Arya</creator>
        
        <creator>Suryaprasad Jayadevappa</creator>
        
        <subject>Customer loyalty; Hopfield Neural Networks; learning rate; momentum; Storkey learning; Hebbian learning; Softmax activation function</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Retaining existing customers and determining their loyalty is an important aspect of today’s business industry. In this paper, the behavior of different machine learning rules on Hopfield Nets is studied. This work continues earlier work on classifying a real customer dataset into four classes: Super Premium Loyal Customer (SPL), Premium Loyal Customer (PL), Valued Customer (VC), and Normal Customer (NC). The model enhances the approach of determining customer loyalty using Hebbian learning and Storkey learning in a Hopfield Neural Network (HNN). HNN is reported to give good accuracy on image datasets, and with some preprocessing of the customer dataset it shows a reasonable accuracy of around 85%. The proposed framework is also tested on the Breast Cancer dataset, and the results are tabulated in the paper.</description>
        <description>http://thesai.org/Downloads/FTC2017/62_Paper_426-Leveraging_Different_Learning_Rules_in_Hopfield.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Avoidance Approach for Multiple General-Aviation Conflicts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090161</link>
        <id>10.14569/SpecialIssue.2018.090161</id>
        <doi>10.14569/SpecialIssue.2018.090161</doi>
        <lastModDate>2018-08-20T10:09:23.0600000+00:00</lastModDate>
        
        <creator>Yousra Almathami</creator>
        
        <creator>Reda Ammar</creator>
        
        <subject>Switching Kalman filters; controlled airspace; aircraft infringements; ground based safety system; polygon triangulation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Air traffic controllers (ATCs) face a major daily concern: controlled airspace (CAS) infringements. An infringement occurs when a general aviation (GA) aircraft penetrates a CAS without advance clearance from the ATC. Such infringements could cause a mid-air collision with authorized aircraft inside the CAS whose conflicts were not resolved ahead of time. They also disrupt ATC operations by creating additional workload and requiring revised manoeuvre tactics. Our last two papers focused on predicting future aircraft locations and finding their probability of infringement in order to alert the ATC in advance. So far, we have dealt with a single aircraft approaching a CAS; this paper, however, focuses on the scenario in which multiple aircraft infringe a CAS, in case the ATC does not react quickly enough. As of 2020, all GA aircraft must be equipped with a transponder that sends information such as flight ID, exact location, and altitude. Using this assumption, we investigate a model that alerts and directs multiple GA aircraft out of a CAS without interfering with commercial traffic. The kinetic triangulation method is used as an automated manoeuvring tactic, leaving the ATC to focus only on directing commercial flights.</description>
        <description>http://thesai.org/Downloads/FTC2017/61_Paper_358-An_Automated_Avoidance_Approach_for_Multiple.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization–Minimization Algorithm Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090160</link>
        <id>10.14569/SpecialIssue.2018.090160</id>
        <doi>10.14569/SpecialIssue.2018.090160</doi>
        <lastModDate>2018-08-20T10:09:22.5300000+00:00</lastModDate>
        
        <creator>Hien Duy Nguyen</creator>
        
        <creator>Geoffrey McLachlan</creator>
        
        <subject>Iteratively-reweighted least-squares; support vector machines; majorization–minimization algorithm</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Support vector machines (SVMs) are an important tool in modern data analysis. Traditionally, support vector machines have been fitted via quadratic programming, either using purpose-built or off-the-shelf algorithms. An alternative approach to SVM fitting is presented via the majorization–minimization (MM) paradigm. Algorithms that are derived via MM algorithm constructions can be shown to monotonically decrease their objectives at each iteration, as well as be globally convergent to stationary points. Constructions of iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm, for SVM risk minimization problems involving the hinge, least-square, squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net penalizations are presented. Successful implementations of the algorithms are demonstrated via some numerical examples.</description>
        <description>http://thesai.org/Downloads/FTC2017/60_Paper_290-Iteratively-Reweighted_Least-Squares_Fitting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Chaos for Fun and Profit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090159</link>
        <id>10.14569/SpecialIssue.2018.090159</id>
        <doi>10.14569/SpecialIssue.2018.090159</doi>
        <lastModDate>2018-08-20T10:09:21.9670000+00:00</lastModDate>
        
        <creator>Tjeerd V. olde Scheper</creator>
        
        <subject>Chaos; control of chaos; bio-inspired computing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Chaos in dynamical systems is still considered to be a somewhat curious, and generally undesirable, property of non-linear systems. Despite the plethora of chaotic control methods published over the last decades, only in a few instances has the control of chaos been used to address real-world problems in engineering or medicine. This is partly due to the limitations of the control methods used, which either require specific analytical knowledge of the system, or require the system to have specific characteristics that make it controllable. The lack of solutions for engineering and biomedical problems may also be due to specific requirements that prevent the implementation of control methods and the, as yet unproven, benefits that controlled chaos may bring to these problems. The aim of a practical application of chaos control is to fully control chaos in theoretical problems first, and then show applicable solutions to physical problems of stability and control. This controlled chaotic state should then have clear and distinct dynamic advantages over uncontrolled chaos and steady state systems. The application of the Rate Control of Chaos (RCC) method, which is derived from metabolic control processes, has already been shown to be effective in controlling several engineering problems. RCC allows non-linear systems to be stabilised into controlled oscillations, even across bifurcations, and it also allows the system to operate in regions of the parameter space that are inaccessible without this method of control. For fun, I will show that RCC controls the N-Body problem; for profit, it can control a bioreactor model to greatly improve yield. The RCC method promises to, finally, permit the control of complex dynamic systems.</description>
        <description>http://thesai.org/Downloads/FTC2017/59_Paper_260-Exploiting_Chaos_for_Fun_and_Profit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Control of a 2-DoF Helicopter Via Model Matching H∞ Matrix Modulation Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090158</link>
        <id>10.14569/SpecialIssue.2018.090158</id>
        <doi>10.14569/SpecialIssue.2018.090158</doi>
        <lastModDate>2018-08-20T10:09:21.4230000+00:00</lastModDate>
        
        <creator>Parthish Kumar Paul</creator>
        
        <creator>Jeevamma Jacob</creator>
        
        <subject>Matrix modulation; Twin Rotor MIMO System (TRMS); singularity; generalized plant; H∞ control; output error regulation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper addresses the convergence issues of the H∞ control algorithm by a matrix modulation technique on the mathematical generalized plant model of a system. This paper further presents a general solution to the robust control algorithm convergence problem of MIMO systems. The proposed controller is optimized for output error regulation by comparing the outputs of a higher order MIMO system to those of a slightly underdamped second order plant. The matrix modulation approach considers two singularities, viz., 1) control singularity; and 2) sensor singularity. The corresponding controller is tested on a laboratory model of a helicopter, known as the Twin Rotor MIMO System (TRMS), for its take-off and hovering.</description>
        <description>http://thesai.org/Downloads/FTC2017/58_Paper_250-Real-Time_Control_of_a_2-Dof_Helicopter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Monitoring of Centrifugal Compressor System using LSTM based Deep RNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090157</link>
        <id>10.14569/SpecialIssue.2018.090157</id>
        <doi>10.14569/SpecialIssue.2018.090157</doi>
        <lastModDate>2018-08-20T10:09:20.8900000+00:00</lastModDate>
        
        <creator>Harsh Purohit</creator>
        
        <creator>Karmvir Phogat</creator>
        
        <creator>P.S.V. Nataraj</creator>
        
        <subject>Performance monitoring; LSTM-DRNN; anomaly detection; compressor control system</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This work presents the results of applying an advanced performance monitoring technique to a centrifugal compressor system using a deep recurrent neural network (DRNN). In reality, due to different kinds of disturbances, the compressor system may run into catastrophic situations. Therefore, performance monitoring has become an issue of primary importance in modern process engineering automation. Detecting anomalies in such scenarios becomes challenging using standard statistical approaches. In this article, we discuss a Long Short-Term Memory (LSTM) based DRNN technique to predict the faulty behavior of the compressor system. Due to the ability of LSTM to maintain memory, these networks have proven effective for learning patterns in time series data of unknown length. This motivates us to propose a performance monitoring scheme based on LSTM-DRNN. To validate the proposed approach, we simulated the compressor model in Simulink and trained the LSTM-DRNN model on time series data obtained from the compressor system running under ideal conditions. The trained network has then been used to detect anomalies in time series data generated by introducing disturbances as inlet temperature changes.</description>
        <description>http://thesai.org/Downloads/FTC2017/57_Paper_247-Performance_Monitoring_of_Centrifugal_Compressor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Fractional order Power System Stabilizer design using Bacteria Foraging Algorithm for Multi-machine Power System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090156</link>
        <id>10.14569/SpecialIssue.2018.090156</id>
        <doi>10.14569/SpecialIssue.2018.090156</doi>
        <lastModDate>2018-08-20T10:09:20.3470000+00:00</lastModDate>
        
        <creator>Haseena. K A</creator>
        
        <creator>Jeevamma Jacob</creator>
        
        <creator>Abraham T Mathew</creator>
        
        <subject>Stability of synchronous machines; robust control; power system stabilizer; fractional order control; bacteria foraging algorithm</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Power System Stabilizers (PSSs) are supplementary controllers that enhance the damping of electromechanical oscillations in synchronous generators. A fractional order supplementary controller, which features broad bandwidth, a memory effect, and flatness in phase contribution, is proposed in this paper. The fractional parameter enables the stabilizer to perform well against a wide range of disturbance uncertainties in the power system. The fractional order PSS parameter tuning problem is formulated as an optimization problem that is solved using the Bacteria Foraging Algorithm (BFA) in a multi-machine environment. Since BFA is a highly efficient optimization technique with fast global search and convergence, it is popular in power system application domains for solving real-world optimization problems. The robustness of the proposed BFA-based fractional order PSS (BFA-FoPSS) is verified in a multi-machine power system under a wide range of operating conditions and by introducing faults of different sizes at different locations. The efficiency of the proposed BFA-FoPSS is demonstrated through time domain simulations, eigenvalue analysis, and a performance index. The results are also compared with a PSO-based conventional PSS (PSO-CPSS) and a PSO-based FoPSS (PSO-FoPSS) to establish the effect of the fractional parameter on the improvement of the system dynamic response and the relevance of the proposed PSS for extending the dynamic stability limit of the system under various loading and generating conditions.</description>
        <description>http://thesai.org/Downloads/FTC2017/56_Paper_238-Robust_Fractional_Order_Power_System_Stabilizer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cellular Automaton Based Simulation in Panic and Normal Situations: A Case Study on the University Lecture Hall</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090155</link>
        <id>10.14569/SpecialIssue.2018.090155</id>
        <doi>10.14569/SpecialIssue.2018.090155</doi>
        <lastModDate>2018-08-20T10:09:19.8000000+00:00</lastModDate>
        
        <creator>Fadratul Hafinaz Hassan</creator>
        
        <creator>Najihah Ibrahim</creator>
        
        <creator>Nur Shazreen Nabiha Mat Tan Salleh</creator>
        
        <subject>Cellular automata approach; microscopic movement; panic situation; normal situation; university lecture hall; pedestrian flow rate</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The university lecture hall is among the most crowded places on campus, and its pedestrians are mostly students. Students have daily schedules that require them to move from one place to another in the shortest possible time. However, the unbalanced and scattered locations of important places (lecture and tutorial halls, general labs, the students’ center, etc.) cause uneven use of the lecture hall’s exits and high population density in the hall. Hence, during a panic situation, the evacuation process leads to heavy physical contact between pedestrians due to the heavy usage of a few exits, causing crowd bottlenecks. This research studies and simulates pedestrian movement in the university lecture hall to determine the most used exit and the reasons for its heavy usage. The simulation uses the cellular automata approach as a discrete model of microscopic pedestrian movement. Based on the results, the university will be offered solutions to overcome this situation, and building design and construction planning are highlighted for future enhancement towards a sustainable and prudent learning space for the university’s students.</description>
        <description>http://thesai.org/Downloads/FTC2017/55_Paper_237-Cellular_Automaton_Based_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090154</link>
        <id>10.14569/SpecialIssue.2018.090154</id>
        <doi>10.14569/SpecialIssue.2018.090154</doi>
        <lastModDate>2018-08-20T10:09:19.2370000+00:00</lastModDate>
        
        <creator>Mohammad Javad Shafiee</creator>
        
        <creator>Elnaz Barshan</creator>
        
        <creator>Alexander Wong</creator>
        
        <subject>EvoNet; deep learning; evolution; deep neural network; embedded systems</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks, which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. This process results in highly sparse offspring networks which are particularly tailored for parallel computational devices such as GPUs and deep neural network accelerator chips. Comprehensive experimental results using four well-known deep neural network architectures (LeNet-5, AlexNet, ResNet-56, and DetectNet) on two different tasks (object categorization and object detection) demonstrate the efficiency of the proposed method. The cluster-driven genetic encoding scheme synthesizes networks that can achieve state-of-the-art performance with a significantly smaller number of synapses than the original ancestor network (∼125-fold decrease in synapses for MNIST). Furthermore, the improved cluster efficiency in the generated offspring networks (∼9.71-fold decrease in clusters for MNIST and ∼8.16-fold decrease in clusters for KITTI) is particularly useful for accelerated performance on parallel computing hardware architectures such as those in GPUs and deep neural network accelerator chips.</description>
        <description>http://thesai.org/Downloads/FTC2017/54_Paper_220-Evolution_in_Groups_A_Deeper_Look.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Event-B Control Flow Modeling based on iUML-B State Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090153</link>
        <id>10.14569/SpecialIssue.2018.090153</id>
        <doi>10.14569/SpecialIssue.2018.090153</doi>
        <lastModDate>2018-08-20T10:09:18.6930000+00:00</lastModDate>
        
        <creator>Han Peng</creator>
        
        <creator>Chenglie Du</creator>
        
        <creator>Haobin Wang</creator>
        
        <subject>Event-B; control flow modeling; iUML-B state machine; atomicity decomposition; event refinement structure</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>There are some limitations in expressing the order of actions using Event-B. To address this problem, the event refinement structure (ERS) method was proposed to facilitate modeling of the system’s control flow. However, the event refinement structure cannot be translated directly to a behavioral semantic model such as communicating sequential processes (CSP) or a labeled transition system (LTS), so it is not convenient for engineers to verify the control flow. In this paper, we first propose a general method to model the control flow of an Event-B model with various iUML-B state machines. Then we show by simulation that the event trace of the iUML-B state machine is the same as that of the event refinement structure method. Finally, we use a case study of a lift control system to demonstrate the practicality of our method.</description>
        <description>http://thesai.org/Downloads/FTC2017/53_Paper_219-Event-B_Control_Flow_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Deep Machine Learning for Psycho-Demographic Profiling of Internet Users using O.C.E.A.N. Model of Personality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090152</link>
        <id>10.14569/SpecialIssue.2018.090152</id>
        <doi>10.14569/SpecialIssue.2018.090152</doi>
        <lastModDate>2018-08-20T10:09:18.1470000+00:00</lastModDate>
        
        <creator>Iaroslav Omelianenko</creator>
        
        <subject>Deep machine learning; O.C.E.A.N. personality model; psycho-demographic profiling; TensorFlow; R programming language</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the modern era, each Internet user leaves enormous amounts of auxiliary digital residuals (footprints) by using a variety of online services. All this data has already been collected and stored for many years. Recent works demonstrated that it is possible to apply simple machine learning methods to analyze collected digital footprints and to create psycho-demographic profiles of individuals. However, while these works clearly demonstrated the applicability of machine learning methods for such analysis, the simple prediction models created still lack the accuracy necessary for practical use. We assumed that using advanced deep machine learning methods may considerably increase the accuracy of predictions. We started with simple machine learning methods to estimate baseline prediction performance and moved further by applying advanced methods based on shallow and deep neural networks. We then compared the predictive power of the studied models and drew conclusions about their performance. Finally, we formed hypotheses about how prediction accuracy can be further improved. As a result of this work, we provide the full source code used in the experiments for all interested researchers and practitioners in the corresponding GitHub repository. We believe that applying deep machine learning to psycho-demographic profiling may have an enormous impact on society (for good or worse) and provides means for Artificial Intelligence (AI) systems to better understand humans by creating their psychological profiles. Thus, AI agents may achieve the human-like ability to participate in conversation (communication) flow by anticipating human opponents’ reactions, expectations, and behavior. By providing the full source code of our research, we hope to intensify further research in this area by a wider circle of scholars.</description>
        <description>http://thesai.org/Downloads/FTC2017/52_Paper_195-Applying_Deep_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualized Financial Performance Analysis: Self-Organizing Maps (SOM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090151</link>
        <id>10.14569/SpecialIssue.2018.090151</id>
        <doi>10.14569/SpecialIssue.2018.090151</doi>
        <lastModDate>2018-08-20T10:09:17.6000000+00:00</lastModDate>
        
        <creator>Manchuna Shanmuganathan</creator>
        
        <subject>Self-organizing map (SOM); financial performance; International Financial Reporting Standards (IFRS); Canadian Generally Accepted Accounting Principles (CGAAP); Management Decision and Analysis (MD&amp;A)</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This study reviews the expediency of the self-organizing map (SOM) in financial performance management and analysis of a Canadian bank; financial data of the Royal Bank of Canada (RBC) has been analysed with a SOM application. The SOM is widely used in many financial applications, including by financial institutions around the financial crisis of 2008. It is an automatic data-analysis technique, mostly applied to visualization and clustering in data exploration. The objective of this study is to evaluate financial performance and to understand the influence of International Financial Reporting Standards (IFRS) after the convergence in 2011. Effects of Management Decision and Analysis (MD&amp;A) on financial performance were also analyzed from reported financial data gathered from RBC’s financial statements. SOM Ward clustering for visualization facilitates assessing the fundamental nature of the data based on a set of maps and clustering attribute solutions for measuring financial performance. The results of this study indicate that SOM is a practicable application for financial performance analysis and measurement in many financial sectors.</description>
        <description>http://thesai.org/Downloads/FTC2017/51_Paper_55-Visualized_Financial_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smartphone based Robust Hierarchical Framework for Activity Recognition based on Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090150</link>
        <id>10.14569/SpecialIssue.2018.090150</id>
        <doi>10.14569/SpecialIssue.2018.090150</doi>
        <lastModDate>2018-08-20T10:09:17.0530000+00:00</lastModDate>
        
        <creator>Rida Ghafoor Hussain</creator>
        
        <creator>Muhammad Awais Azam</creator>
        
        <creator>Mustansar Ali Ghazanfar</creator>
        
        <creator>Usman Naeem</creator>
        
        <subject>Bayes Net; Na&#239;ve Bayes; dynamic; ADL; classifiers</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Learning human behavior from daily life activities is a complex and challenging task. Dependent persons can be neglected by society. Besides infants, elderly people are observed to have higher accident rates when performing daily life activities. Alzheimer’s disease is a common impairment that leads to dementia in elderly people; due to forgetfulness, they become unable to live an independent life. Continuous care and monitoring are required for Alzheimer’s patients to live a healthy life, as it generally becomes difficult for people suffering from this progressive disease to live independently. To support elderly people who wish to live independently and perform their daily activities smoothly and safely at home, their daily life activities must be identified so that appropriate aid can be provided. A heuristic approach has been developed to recognize human behavior and intentions with the help of sensor events. A smart environment is created for monitoring volunteers conducting activities of daily life. This research aims to develop machine learning algorithms for identifying a person’s daily activities. The proposed model is flexible, adaptable, and scales well with data.</description>
        <description>http://thesai.org/Downloads/FTC2017/50_Paper_33-Smartphone_based_Robust_Hierarchical_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A wearable general-purpose solution for Human-Swarm Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090149</link>
        <id>10.14569/SpecialIssue.2018.090149</id>
        <doi>10.14569/SpecialIssue.2018.090149</doi>
        <lastModDate>2018-08-20T10:09:16.5070000+00:00</lastModDate>
        
        <creator>Eduardo Castell&#243; Ferrer</creator>
        
        <subject>Swarm robotics; human-swarm interaction; human-robot interaction; wearable technology</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. Controlling the motion and behavior of these swarms presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions to a group of robots in application. This position paper proposes a new human-swarm interface based on novel wearable gesture-control and haptic-feedback devices. This position paper seeks to combine a wearable gesture recognition device that can detect high-level intentions, a portable device that can detect Cartesian information and finger movements, and a wearable advanced haptic device that can provide real-time feedback. This project is the first to envisage a wearable Human-Swarm Interaction (HSI) interface that separates the input and feedback components of the classical control loop (input, output, feedback), as well as being the first of its kind suitable for both indoor and outdoor environments.</description>
        <description>http://thesai.org/Downloads/FTC2017/49_Paper_477-A_Wearable_General-Purpose_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control of Robotic Crawler Cranes in Tandem Lifting Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090148</link>
        <id>10.14569/SpecialIssue.2018.090148</id>
        <doi>10.14569/SpecialIssue.2018.090148</doi>
        <lastModDate>2018-08-20T10:09:15.9630000+00:00</lastModDate>
        
        <creator>Sima Rishmawi</creator>
        
        <creator>William Singhose</creator>
        
        <subject>Crawler; crane; tandem; robotic; tip-over stability; control; crane safety</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Moving heavy and over-sized loads poses significant control challenges. A single crawler crane may be insufficient for such lifting tasks if the payload exceeds the capacity, or if the payload’s size and shape make it difficult to secure it to a single crane hook. To solve these problems, it may be necessary to manipulate such items by tandem lifting with two cranes. These cranes are usually driven by two operators whose actions are coordinated by a lift director. In this paper, a pseudo-dynamic model describing the behavior of such a system, when the bases of the cranes are moving in a straight line, is derived. The paper also sets basic guidelines that prevent tip-over accidents. Finally, it presents a control system that eliminates the need for a second crane operator by making one crane mimic the behavior of the other crane, thus reducing the possibility of human errors.</description>
        <description>http://thesai.org/Downloads/FTC2017/48_Paper_438-Control_of_Robotic_Crawler_Cranes_in_Tandem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prototyping of BoBi Secretary Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090147</link>
        <id>10.14569/SpecialIssue.2018.090147</id>
        <doi>10.14569/SpecialIssue.2018.090147</doi>
        <lastModDate>2018-08-20T10:09:15.4170000+00:00</lastModDate>
        
        <creator>Bilan Zhu</creator>
        
        <creator>Jiansheng Liu</creator>
        
        <subject>Intelligent robot system; personal assistant robot; portable robot; transformable robot; movable robot</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>We describe here a prototype of an intelligent personal robot named BoBi secretary. When closed, BoBi is a rectangular box the size of a smartphone. The owner can call BoBi to open and transform from the box into a movable robot, which then performs many human-like functions such as moving, talking, emoting, singing, dancing, and conversing with people to make them happy, enhance their lives, facilitate relationships, have fun with them, connect them with the outside world, and assist and support them as an intelligent personal assistant. We consider BoBi a treasure, so we call the box the moonlight box (“月光宝盒”, Moonlight Treasure Box, in Chinese). BoBi speaks with people, tells jokes, sings and dances for people, understands the owner, and recognizes people’s voices. It can do all the work a secretary does, including scheduling, schedule reminders, sending emails, making phone calls, booking, making reservations, searching for information, etc. BoBi has three main functions: intelligent meeting recording, multilingual interpretation, and reading papers. BoBi is a portable, transformable, movable, and intelligent robot.</description>
        <description>http://thesai.org/Downloads/FTC2017/47_Paper_315-A_Prototyping_of_BoBi_Secretary_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control System of a Terrain Following Quadcopter Under Uncertainty and Input Constraints: A Review and Research Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090146</link>
        <id>10.14569/SpecialIssue.2018.090146</id>
        <doi>10.14569/SpecialIssue.2018.090146</doi>
        <lastModDate>2018-08-20T10:09:14.8530000+00:00</lastModDate>
        
        <creator>Nasser Alqahtani</creator>
        
        <creator>Homayoun Najjaran</creator>
        
        <subject>Quadrotor; Terrain Following; GPS Denied Environment</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In modern society, autonomous quadrotors can be used to perform tasks and collect data in dangerous and inaccessible environments where human involvement would traditionally be necessary. Unmanned Aerial Vehicles (UAVs), and especially the quadrotor, still face obstacles in following a trajectory and flying autonomously in enclosed, complex or GPS-denied areas. This paper focuses on presenting the literature on the quadrotor’s ability to follow a terrain. It starts with the current research framework and its advantages and disadvantages. Next, a new research framework is proposed. The new method develops a novel navigation framework which would allow the UAV to autonomously follow unknown terrain while maintaining a certain distance from it, within environmental and energy consumption constraints. The proposed method involves connecting a single-beam LiDAR sensor to the base of the quadrotor in order to retrieve reliable and detailed information about the undulations of the terrain ahead. The sensor then feeds this information back to the quadrotor so that its controller can create a suitable trajectory and ensure a smooth flight path.</description>
        <description>http://thesai.org/Downloads/FTC2017/46_Paper_277-Control_System_of_a_Terrain_Following_Quadcopter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Edge Based Adaptive Interpolation Algorithm for Image Scaling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090145</link>
        <id>10.14569/SpecialIssue.2018.090145</id>
        <doi>10.14569/SpecialIssue.2018.090145</doi>
        <lastModDate>2018-08-20T10:09:14.2930000+00:00</lastModDate>
        
        <creator>Hongjian Shi</creator>
        
        <creator>Wanli Chen</creator>
        
        <subject>Image scaling; edge detection; Sobel operator; Canny operator; bilinear interpolation; bicubic interpolation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In this paper, the Canny and Sobel operators are combined to scale an image and to produce an image with high resolution and clear edges. The Canny operator is used first to detect edges of the objects inside the original image. After that, four Sobel operators in different directions are applied to detect the edge directions. The direction of the edge at one edge point is determined by comparing the first derivatives of the intensity at edge pixels. The image interpolation is carried out adaptively according to the determined edge direction. For the homogeneous areas, the bilinear interpolation is applied. After image scaling, a method to suppress zig-zag noise is applied to enhance the output image. The first and second steps of this zig-zag suppression are the same as in the proposed image scaling algorithm. Then the pixels around the edge pixels are specially modified. The experimental results show that our proposed algorithm can produce scaled images with high resolution and well-preserved edges.</description>
        <description>http://thesai.org/Downloads/FTC2017/45_Paper_392-An_Edge_based_Adaptive_Interpolation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scene Classification Using Hidden Markov Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090144</link>
        <id>10.14569/SpecialIssue.2018.090144</id>
        <doi>10.14569/SpecialIssue.2018.090144</doi>
        <lastModDate>2018-08-20T10:09:13.7470000+00:00</lastModDate>
        
        <creator>BENRAIS Lamine</creator>
        
        <creator>BAHA Nadia</creator>
        
        <subject>Scene classification; object’s weight; hidden Markov models</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In common multiclass classification problems, the main difficulties occur when classes are not mutually exclusive. In order to solve problems such as document classification, medical diagnosis or scene classification, we need robust and reliable tools. In this paper, we consider the problem of scene classification treated by hidden Markov models (HMMs) using a novel and intuitive classification process. We introduce a modeling system that maps the parameters of the HMM (observations and hidden states) to the variables of the scene classification problem (scene categories and objects belonging to the scene). The HMM is constructed with the support of object-weight ranking functions. Inference algorithms are developed to extract the most suitable scene category from the generated discrete Markov chain. In order to confirm the efficiency of the proposed method, we used the MIT Indoor dataset (2700 scenes distributed into 67 scene categories) to evaluate the classification accuracy. We also compared the obtained results with current state-of-the-art methods. Our approach distinguishes itself by correctly classifying up to 76% of scenes.</description>
        <description>http://thesai.org/Downloads/FTC2017/44_Paper_333-Scene_Classification_Using_Hidden_Markov_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Surendra Algorithm for Moving Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090143</link>
        <id>10.14569/SpecialIssue.2018.090143</id>
        <doi>10.14569/SpecialIssue.2018.090143</doi>
        <lastModDate>2018-08-20T10:09:13.2000000+00:00</lastModDate>
        
        <creator>Fang Dai</creator>
        
        <creator>Dan Yang</creator>
        
        <creator>Tong Dang</creator>
        
        <subject>Moving object detection; Surendra algorithm; background update coefficient; motion mask</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A vehicle detection algorithm developed by Surendra (called the Surendra algorithm) is composed of three parts: segmentation, adaptive background updating and background extraction. The Surendra algorithm is sensitive to dynamic environments and is easily influenced by noise and illumination when detecting moving objects. For this reason, we present an improved Surendra algorithm (called the Surendra_αInst algorithm). Instead of combining two binary images with a Boolean AND operator, where each binary image is derived by subtracting the background from one of two adjacent frames and thresholding the result, the frame-difference method is applied to calculate the motion mask. The motion mask is then employed to calculate the instantaneous background. At the same time, according to the change rate of background pixels, the background update coefficient is calculated to obtain a stable background image. Experimental results on five different types of image sequences show that our Surendra_αInst algorithm, compared with the Surendra, Surendra_AvgInit and Surendra_α algorithms, has a higher DR and a lower FAR, and the detected moving objects are more complete.</description>
        <description>http://thesai.org/Downloads/FTC2017/43_Paper_323-An_Improved_Surendra_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forming a Random Field via Stochastic Cliques: From Random Graphs to Fully Connected Random Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090142</link>
        <id>10.14569/SpecialIssue.2018.090142</id>
        <doi>10.14569/SpecialIssue.2018.090142</doi>
        <lastModDate>2018-08-20T10:09:12.6230000+00:00</lastModDate>
        
        <creator>Mohammad Javad Shafiee</creator>
        
        <creator>Alexander Wong</creator>
        
        <creator>Paul Fieguth</creator>
        
        <subject>Fully connected random field; random graph; stochastic cliques; graph cuts; Markov random fields</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Random fields have remained a topic of great interest over past decades for the purpose of structured inference, especially for problems such as image segmentation. The local nodal interactions commonly used in such models often suffer the short-boundary bias problem, which are tackled primarily through the incorporation of long-range nodal interactions. However, the issue of computational tractability becomes a significant issue when incorporating such long-range nodal interactions, particularly when a large number of long-range nodal interactions (e.g., fully-connected random fields) are modeled. In this work, we introduce a generalized random field framework based around the concept of stochastic cliques, which addresses the issue of computational tractability when using fully-connected random fields by stochastically forming a sparse representation of the random field. The proposed framework allows for efficient structured inference using fully-connected random fields without any restrictions on the potential functions that can be utilized. Several realizations of the proposed framework using graph cuts are presented and evaluated, and experimental results demonstrate that the proposed framework can provide competitive performance for the purpose of image segmentation when compared to existing fully-connected and principled deep random field frameworks.</description>
        <description>http://thesai.org/Downloads/FTC2017/42_Paper_317-Forming_a_Random_Field_via_Stochastic_Cliques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Camera Convergence in Stereoscopic Video See-through Augmented Reality Displays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090141</link>
        <id>10.14569/SpecialIssue.2018.090141</id>
        <doi>10.14569/SpecialIssue.2018.090141</doi>
        <lastModDate>2018-08-20T10:09:12.0630000+00:00</lastModDate>
        
        <creator>Vincenzo Ferrari</creator>
        
        <creator>Fabrizio Cutolo</creator>
        
        <subject>Augmented reality and visualization; stereoscopic display; stereo overlap; video see-through</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user’s perception of the three-dimensional space. This paper seeks to put forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among the possible perceptual artefacts, the most significant one relates to diplopia arising from reduced stereo overlaps and too large screen disparities. Two state-of-the-art solutions are reviewed. The first one suggests a dynamic change, via software, of the virtual camera convergence, whereas the second one suggests a matched hardware/software solution based on a series of predefined focus/vergence configurations. Potentialities and limits of both solutions are outlined so as to provide the AR community with a yardstick for developing new stereoscopic video see-through systems suitable for different working distances.</description>
        <description>http://thesai.org/Downloads/FTC2017/41_Paper_310-The_Role_of_Camera_Convergence_in_Stereoscopic_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Scale VR: Automatically Creating Indoor 3D Maps and its Application to Simulation of Disaster Situations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090140</link>
        <id>10.14569/SpecialIssue.2018.090140</id>
        <doi>10.14569/SpecialIssue.2018.090140</doi>
        <lastModDate>2018-08-20T10:09:11.5170000+00:00</lastModDate>
        
        <creator>Katashi Nagao</creator>
        
        <creator>Yusuke Miyakawa</creator>
        
        <subject>Virtual reality; 3D map; autonomous mobile robot; disaster simulation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>It is useful to simulate disaster situations by reconstructing actual buildings in a virtual space to enable people using the buildings to learn how to act in a disaster situation before it occurs. Therefore, we are developing a disaster-simulation system that simulates various disaster situations by virtually reproducing the situation inside buildings to allow individuals to experience disaster situations by using the latest virtual reality (VR) system. We use a mobile robot equipped with multiple laser-range sensors that measure the distance to objects in a building and an RGB-depth camera to collect distance and image data while the robot automatically travels along a route suitable for 3D measurement. We also manually scan physical objects individually by using a handheld 3D sensor. We then arrange the objects in a 3D map and manipulate them. We have also developed a VR system called “Building-Scale VR” that consists of indoor 3D maps filled with manipulable virtual objects that we call “operation targets” and a VR headset capable of position tracking within the building. In this paper, we explain how to implement Building-Scale VR and its applications to disaster simulations.</description>
        <description>http://thesai.org/Downloads/FTC2017/40_Paper_200-Building_Scale_VR_Automatically_Creating_Indoor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Style Representations and the Large-Scale Classification of Artistic Style</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090139</link>
        <id>10.14569/SpecialIssue.2018.090139</id>
        <doi>10.14569/SpecialIssue.2018.090139</doi>
        <lastModDate>2018-08-20T10:09:10.9700000+00:00</lastModDate>
        
        <creator>Jeremiah Johnson</creator>
        
        <subject>Artificial intelligence; neural network; style transfer; deep learning; computer vision; machine learning</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The artistic style of a painting can be sensed by the average observer, but algorithmically classifying the artistic style of an artwork is a difficult problem. The recently introduced neural-style algorithm uses features constructed from the low-level activations of a pretrained convolutional neural network to merge the artistic style of one image or set of images with the content of another. This paper investigates the effectiveness of various representations based on the neural style algorithm for use in algorithmically classifying the artistic style of paintings. This approach is compared with other neural network based approaches to artistic style classification. Results that are competitive with other recent work on this challenging problem are obtained.</description>
        <description>http://thesai.org/Downloads/FTC2017/39_Paper_197-Neural_Style_Representations_and_the_Large-Scale.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blink Detection for Residential Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090138</link>
        <id>10.14569/SpecialIssue.2018.090138</id>
        <doi>10.14569/SpecialIssue.2018.090138</doi>
        <lastModDate>2018-08-20T10:09:10.4400000+00:00</lastModDate>
        
        <creator>Adriano Pontone Nanes</creator>
        
        <creator>Clauber Ces&#225;rio de Souza</creator>
        
        <creator>Diego Yudi Miamoto</creator>
        
        <creator>Murilo de Oliveira Lima</creator>
        
        <subject>Domotics; image processing; computer vision; artificial intelligence</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>“Domotics” is a term formed from the fusion of the words “domus” (home in Latin) and “robotics”, and refers to the automation of the home. Using facial recognition to detect eye blinks and movements, it becomes possible to perform various domestic tasks, such as turning a lamp on and off, opening windows and other activities. In this scenario, an application was developed for the Android platform that connects to the Particle Photon microcontroller, which is focused on the Internet of Things and is also responsible for Wi-Fi modularization, functioning as the embedded communication channel and making it possible to control the functions that trigger the routine tasks of the residence. For image processing, we adopted an open-source library called OpenCV. The application contains an Accessibility Mode, selected by default, which enables the blink-recognition functions that can be used by all people, in addition to the traditional touch-screen interaction.</description>
        <description>http://thesai.org/Downloads/FTC2017/38_Paper_64-Blink_Detection_for_Residential_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowd Behavior Categorization using Live Stream based on Motion Vector Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090137</link>
        <id>10.14569/SpecialIssue.2018.090137</id>
        <doi>10.14569/SpecialIssue.2018.090137</doi>
        <lastModDate>2018-08-20T10:09:09.8930000+00:00</lastModDate>
        
        <creator>Sajid Gul Khawaja</creator>
        
        <creator>Amna Sajid</creator>
        
        <creator>Mehak Tofiq</creator>
        
        <subject>Motion vector; crowd behavior; real time processing</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The detection of anomalies in large crowds is a cognitive task. A proactive approach is required to effectively manage crowd flow and to accurately detect erratic crowd behavior. In this paper, we present an algorithm which observes crowd optical flow in real time and automatically detects abnormal events in crowds. The system takes frames at regular intervals through a video camera and processes these frames using image processing techniques. The proposed system further uses certain rules to classify crowd activities as normal or abnormal. We propose a novel motion-vector-based technique to detect the behavior of the cluster of interest. The features of the motion vectors are analyzed to characterize the crowd behavior. The evaluation of the system is performed using different videos with different crowd behaviors, and the results on simulated crowds demonstrate the effectiveness of the proposed system.</description>
        <description>http://thesai.org/Downloads/FTC2017/37_Paper_49-Crowd_Behavior_Categorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Isolating Bone and Gray Matter in MRI Images using 3D Slicer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090136</link>
        <id>10.14569/SpecialIssue.2018.090136</id>
        <doi>10.14569/SpecialIssue.2018.090136</doi>
        <lastModDate>2018-08-20T10:09:09.3470000+00:00</lastModDate>
        
        <creator>Sudhanshu K Semwal</creator>
        
        <creator>Ashley Whiteside</creator>
        
        <subject>3D slicer; medical visualization; thresholding</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Slicer has been used in the medical community for several years now. This paper describes a Python extension imported into the Slicer application. The main contribution of the paper is to outline and explain how to import and test a Python extension which we created to isolate gray matter and bone in MR brain volume images. Our future plans include both qualitative and quantitative analysis, validation and comparison with other similar techniques, and extensions to 3D surface extraction and interpretation using Slicer.</description>
        <description>http://thesai.org/Downloads/FTC2017/36_Paper_517-Isolating_Bone_and_Gray_Matter_in_MRI_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-class Alzheimer Disease Classification using Hybrid Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090135</link>
        <id>10.14569/SpecialIssue.2018.090135</id>
        <doi>10.14569/SpecialIssue.2018.090135</doi>
        <lastModDate>2018-08-20T10:09:08.8170000+00:00</lastModDate>
        
        <creator>Syed Muhammad Anwar</creator>
        
        <subject>Alzheimer; hybrid features; classification; multi-class</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Alzheimer’s disease (AD) is one of the most common forms of dementia. Accurate detection of AD and its initial stage, i.e., mild cognitive impairment (MCI), is a challenging task. In this study, a computer-aided diagnosis (CAD) system is implemented on clinical and diagnostic imaging data from the OASIS database. The amygdala and hippocampus, located inside the grey matter region of the brain, are the regions most affected by Alzheimer’s. Features used for classification are calculated using the grey level co-occurrence matrix (GLCM), such as entropy, energy, homogeneity, and correlation. The ratios of the grey matter and white matter volume to the cerebrospinal fluid volume are also used. Clinical features are also used, improving the classification accuracy to 94.6% for binary classification. The proposed algorithm is also used for multi-class classification, where three classes, namely normal (N), Alzheimer’s disease (AD), and mild cognitive impairment (MCI), are considered. An accuracy of 79.8% on these classes is achieved, which is significant since the classes considered are highly similar. We have achieved improved results in comparison to state-of-the-art techniques for binary classification and have also performed multi-class classification.</description>
        <description>http://thesai.org/Downloads/FTC2017/35_Paper_489-Multi-class_Alzheimer_Disease_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Soft Tissue Variations using a 3D Scanning System for Orthodontic Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090134</link>
        <id>10.14569/SpecialIssue.2018.090134</id>
        <doi>10.14569/SpecialIssue.2018.090134</doi>
        <lastModDate>2018-08-20T10:09:08.2700000+00:00</lastModDate>
        
        <creator>Mauren Abreu de Souza</creator>
        
        <creator>Cristiane Schmitz</creator>
        
        <creator>Melissa Galarza Rodrigues</creator>
        
        <creator>Giovanna Simi&#227;o Ferreira</creator>
        
        <subject>Three-dimensional measurements; laser scanning system; facial geometry; orthodontic applications</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>There are different three-dimensional (3D) imaging systems which allow obtaining measurements from surface geometries. However, for medical applications, such methods need to be safe, causing no harm and using no radiation. Therefore, this paper proposes a methodology for obtaining facial surfaces that can be compared over time for orthodontic applications. Specifically, a 3D laser scanner was employed for facial geometry acquisition, and dedicated software was used to perform a rigid surface registration among three different times: (T1) before the treatment, (T2) 15 days after wearing the orthodontic appliance, and (T3) three months after ending the treatment. A change in the face was observed, especially in the region of the nostrils (1.47 mm) and maxilla (1.78 mm). The use of a 3D scanner for facial scanning makes it possible to measure the slight volumetric changes caused by wearing a palatal expander over time, producing significant facial changes, which will provide social and health impacts.</description>
        <description>http://thesai.org/Downloads/FTC2017/34_Paper_487-Evaluation_of_Soft_Tissue_Variations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecting and Building the Future of Healthcare Informatics: Cloud, Containers, Big Data and CHIPS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090133</link>
        <id>10.14569/SpecialIssue.2018.090133</id>
        <doi>10.14569/SpecialIssue.2018.090133</doi>
        <lastModDate>2018-08-20T10:09:07.7230000+00:00</lastModDate>
        
        <creator>Rudolph Pienaar</creator>
        
        <creator>Ata Turk</creator>
        
        <creator>Jorge Bernal</creator>
        
        <creator>Nicolas Rannou</creator>
        
        <subject>Web based neuroimaging; big data; applied containerization; telemedicine; cloud-storage</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>New trends in software engineering are reshaping the computing landscape – computation is increasingly portable, storage is increasingly elastic, and data accessibility is increasingly “always on” and “always available” to an exponentially increasing variety of applications and devices. While the effects of these trends in the larger “compute-verse” are profound, this paper will discuss and consider how these trends are affecting healthcare informatics specifically. Indeed, end users will experience this trend in applications that are web-centric and mobile-friendly. Such apps will be increasingly used as gateways to powerful backend services (such as analytics and deep learning), while offering local client-side specialization (rich, immersive visualizations and collaborations). The paper offers some perspectives, presents some unmet needs in medical informatics, and seeks to provide a viewpoint into how the “next wave” of computing might present itself. In particular, the paper presents a web-based medical image data and information management software platform called CHIPS (Cloud Healthcare Image Processing Service). This cloud-based service uniquely provides an end-to-end service that can securely connect data from deep within a hospital to the cloud and allow for powerful collaboration – both on medical image data and on image processing pipelines – enable complex processing and computational research, and provide a vision of decentralized, large-scale data analysis that can fuel Big Data on medical bioinformatics.</description>
        <description>http://thesai.org/Downloads/FTC2017/33_Paper_468-Architecting_and_Building_the_Future_of_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Musical Perception using EEG and Functional Connectivity in the Brain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090132</link>
        <id>10.14569/SpecialIssue.2018.090132</id>
        <doi>10.14569/SpecialIssue.2018.090132</doi>
        <lastModDate>2018-08-20T10:09:07.1930000+00:00</lastModDate>
        
        <creator>Lavanya Krishna</creator>
        
        <creator>Geethanjali B</creator>
        
        <creator>Chandramouli Ramesh</creator>
        
        <creator>Mahesh Veezhinathan</creator>
        
        <subject>Information processing; data reduction; prediction; brain activation; familiarity</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Music has various effects on the brain and body. Multiple studies have suggested a marked difference in information processing between musicians and non-musicians when listening to music. However, where these changes occur within the brain is yet to be established. The amount of data obtained to study information processing in the brain is huge, and surplus features may be redundant; there is thus an increased need to reduce the amount of data. In this study, information processing in the brain is determined by obtaining optimum features using data reduction processes. The features obtained are compared with brain activation in an attempt to predict behavioral data. Twenty healthy subjects were considered, of which 10 were musicians and 10 were non-musicians. All subjects listened to a music stimulus. Electroencephalography was used to record responses, and behavioral data were also obtained. The major predictors for musicians were frontal and temporal lobe electrodes, and this was absent in non-musicians. Although some electrodes have high node strengths, they were not indicated as predictors. Enhanced inter- and intra-hemispheric functional connectivity was seen for the musicians, which was due to familiarity and music learning.</description>
        <description>http://thesai.org/Downloads/FTC2017/32_Paper_444-Prediction_of_Musical_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Data Mining from Social Media for Improved Public Health Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090131</link>
        <id>10.14569/SpecialIssue.2018.090131</id>
        <doi>10.14569/SpecialIssue.2018.090131</doi>
        <lastModDate>2018-08-20T10:09:06.6500000+00:00</lastModDate>
        
        <creator>Mohammed Saeed Jawad</creator>
        
        <subject>Data mining; social media; medical data; end user feedbacks; positive terms; negative terms; symptoms</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>To improve public health care outcomes at reduced cost, this research proposes a framework which focuses on the positive and negative symptoms of illnesses and the side effects of treatments. Previous studies have been limited as they neither identified influential users nor discussed how to model the forms of relationships that affect network dynamics and determine the accurate ranking of certain end users’ feedback. In this research, a two-step analysis framework is proposed. In the first level, the system applies exploratory analysis and clusters users and their useful feedback through self-organizing maps (SOM). In the second level, the system develops three lists of negative and positive feedback and treatment symptoms by implementing the SOM, with accurate ranking obtained by calculating the frequency of each term of interest. The feasibility of the proposed solution is confirmed through performance evaluations of the system in terms of computational costs. The results showed that these solutions incur reasonable computational costs relative to memory and processor usage.</description>
        <description>http://thesai.org/Downloads/FTC2017/31_Paper_423-Implementation_of_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of Non-Invasive Prototype to Measure Pulse Rate, Blood Glucose and Oxygen Saturation Level in Arterial Blood</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090130</link>
        <id>10.14569/SpecialIssue.2018.090130</id>
        <doi>10.14569/SpecialIssue.2018.090130</doi>
        <lastModDate>2018-08-20T10:09:06.1030000+00:00</lastModDate>
        
        <creator>Nazo Haroon</creator>
        
        <creator>Mohsin Islam Tiwana</creator>
        
        <subject>Pulse oximeter; photoplethysmography; absolute relative difference; non-invasive; chemistry analyzer</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This article proposes a system which can measure heart rate, blood glucose, and oxygen saturation ratio non-invasively with the maximum possible accuracy. The design is easy to use, real-time, and pain-free. Preliminary results are acquired on a prototype comprising amplification and filter circuitry, a sensor consisting of two LEDs, red (660 nm) and near-infrared (940 nm), as transmission spectrums, and an Arduino controller board. Fingertip photoplethysmographic (PPG) signal analysis is performed. Furthermore, the results have been compared to a commercially available pulse oximeter, and a measurement accuracy of ±3% for pulse rate and ±1% for oxygen saturation is observed. In non-invasive glucose measurement, accuracy plays a vital role. Hence, a 3-day clinical trial was conducted in a hospital on various diabetic and non-diabetic test specimens of different ages. A total of 132 specimens were analyzed during the trial period, and the results were compared with the Beckman Coulter AU-480 chemistry analyzer in the pathology laboratory of the hospital. Clarke Error Grid (CEG) analysis shows that 94.70% of the readings (across all 3 days) fall in the clinically accepted zone A. The Absolute Relative Differences (ARDs) yield mean and median error values of 9.51% and 8.05%, respectively.</description>
        <description>http://thesai.org/Downloads/FTC2017/30_Paper_410-Design_and_Development_of_Non-Invasive_Prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interface of an Automatic Recognition System for Dysarthric Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090129</link>
        <id>10.14569/SpecialIssue.2018.090129</id>
        <doi>10.14569/SpecialIssue.2018.090129</doi>
        <lastModDate>2018-08-20T10:09:05.5570000+00:00</lastModDate>
        
        <creator>Fares Zaidi</creator>
        
        <creator>Malika Boudraa</creator>
        
        <creator>Sid-Ahmed Selouani</creator>
        
        <creator>Ghania Hamdani</creator>
        
        <subject>Automatic Recognition System of Continuous Pathological Speech (ARSCPS); ETSI standard Mel frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Hidden Markov Model Toolkit (HTK); Hidden Models of Markov (HMM); Human/Machine (H/M); Technologies of</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper addresses the realization of a Human/Machine (H/M) interface including a system for automatic recognition of continuous pathological speech (ARSCPS) and several communication tools in order to help frail people with speech problems (dysarthric speech) access services provided by new technologies of information and communication (TIC), while making it easier for doctors to reach a first diagnosis of the patient’s disease. In addition, an ARSCPS has been improved and developed for normal and pathological voice while establishing a link with our graphic interface, which is based on the Hidden Markov Model Toolkit (HTK) and on Hidden Markov Models (HMM). In our work, we used different feature extraction techniques for the speech recognition system in order to improve dysarthric speech intelligibility while developing an ARSCPS which can perform well for pathological and normal speakers. These techniques are based on the coefficients of the ETSI standard Mel Frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Perceptual Linear Prediction coefficients (PLP), Mel Frequency Cepstral Coefficients (MFCC), and the recently proposed Power Normalized Cepstral Coefficients (PNCC) have been used as a basis for comparison. In this context, we used the Nemours database, which contains 11 speakers representing dysarthric speech and 11 speakers representing normal speech.</description>
        <description>http://thesai.org/Downloads/FTC2017/29_Paper_397-Interface_of_an_Automatic_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Business Intelligence Techniques Using SAS on Open Data: Analysing Health Inequality in English Regions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090128</link>
        <id>10.14569/SpecialIssue.2018.090128</id>
        <doi>10.14569/SpecialIssue.2018.090128</doi>
        <lastModDate>2018-08-20T10:09:05.0270000+00:00</lastModDate>
        
        <creator>Ah-Lian Kor</creator>
        
        <creator>Sanela Lazarevski</creator>
        
        <subject>SAS; BI techniques; open health data; data mining; health inequality</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Health inequality is a widely reported problem. There is an existing body of work that links health inequality and geographical location. This means that one might be more disadvantaged health-wise if one was born in one region rather than another. Existing health inequality related work in various developed and developing countries relies on population census or survey data. Drawing effective conclusions requires large-scale data with multiple parameters. There is a new phenomenon in countries (e.g. the UK) where governments are opening up citizen-centric data for transparency purposes and to facilitate data-informed policy making. Many health organisations, including the NHS and sister organisations (e.g. HSCIC), participate in this drive to open up data. These health-related datasets can be exploited for health inequality analytics. This work presents a novel approach to analysing health inequality in English regions based solely on open data. A methodological and systematic approach grounded in the CRISP-DM methodology is followed for the analyses of the datasets. The analysis utilises a well-cited work on health inequality in children and the corresponding parameters such as preterm birth, low birth weight, infant mortality, excessive weight in children, breastfeeding prevalence and children in poverty. An authority in health datasets, the Public Health Outcomes (PHO) Framework, is chosen as a data source that contains data with these parameters. The analysis is carried out using various SAS data mining techniques such as clustering and time series analysis. The results show the presence of health inequality in English regions. The work clearly identifies the English regions on the right and wrong side of the divide. The policy and future work recommendations based on these findings are articulated in this research. The work presented in this paper is novel as it applies SAS based BI techniques to analyse health inequality for children in the UK solely based on open data.</description>
        <description>http://thesai.org/Downloads/FTC2017/28_Paper_336-Application_of_Business_Intelligence_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Musical Guidance System for Ergogenic Benefits in Workouts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090127</link>
        <id>10.14569/SpecialIssue.2018.090127</id>
        <doi>10.14569/SpecialIssue.2018.090127</doi>
        <lastModDate>2018-08-20T10:09:04.4500000+00:00</lastModDate>
        
        <creator>Prakhyath Kumar Hegde</creator>
        
        <creator>Shoaib Sheriff</creator>
        
        <creator>Shiva Murthy Busetty</creator>
        
        <creator>Jinmook Lim</creator>
        
        <subject>Music; heart rate; workout zones; music guidance; ergogenic benefits</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Music has become an essential component of indoor and outdoor workouts. Studies show that a budding runner can find a significant increase in motivation simply by listening to music. In this paper, we propose a system to further enhance the ergogenic benefits of music by providing real-time musical guidance derived from variations in the user’s physiological factors. This system helps the user increase the time spent in the desired heart rate zone, thus helping the user reach their goal faster.</description>
        <description>http://thesai.org/Downloads/FTC2017/27_Paper_309-A_Musical_Guidance_System_for_Ergogenic_Benefits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Session Transition System Towards Low Power Walking Step Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090126</link>
        <id>10.14569/SpecialIssue.2018.090126</id>
        <doi>10.14569/SpecialIssue.2018.090126</doi>
        <lastModDate>2018-08-20T10:09:03.9030000+00:00</lastModDate>
        
        <creator>Dipankar Das</creator>
        
        <creator>Vishal Bharti</creator>
        
        <creator>Prakhyath Kumar Hegde</creator>
        
        <creator>MoonBae Song</creator>
        
        <subject>Software pedometer; step count; step estimation; accelerometer; fitness device; smartphone</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Activity tracking has gained prominence with the phenomenal growth of fitness devices and ever increasing fitness awareness, and step count has emerged as the primary fitness parameter. Step count is typically derived from continuous motion signal analysis. The common implementation choice is either a dedicated hardware processing unit or a software signal processing unit that continuously counts repetitions in the motion signal. While dedicated hardware increases the BOM (Bill of Material), a software pedometer incurs high power consumption. In this paper, we propose a power efficient step estimation algorithm that avoids a continuous high power requirement even during motion, at an acceptable accuracy, using a smartphone’s tri-axial accelerometer data.</description>
        <description>http://thesai.org/Downloads/FTC2017/26_Paper_308-An_Intelligent_Session_Transition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECG Abnormality Detection Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090125</link>
        <id>10.14569/SpecialIssue.2018.090125</id>
        <doi>10.14569/SpecialIssue.2018.090125</doi>
        <lastModDate>2018-08-20T10:09:03.3400000+00:00</lastModDate>
        
        <creator>Mahmoud Al Ahmad</creator>
        
        <creator>Soha Ahmed</creator>
        
        <creator>Ali Alnaqbi</creator>
        
        <creator>Mohamed Al Hemairy</creator>
        
        <subject>Cross-correlation; abnormalities detection; ECG; cardiac cycle; eHealth; remote monitoring; algorithm</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The monitoring and early detection of abnormalities in the cardiac cycle morphology have a significant impact on the prevention of heart diseases and their associated complications. The electrocardiogram (ECG) is very effective in detecting irregularities of heart muscle functionality. In this work, we investigate the detection of possible abnormalities in the ECG signal and the identification of the corresponding heart disease in real time using an efficient algorithm. The algorithm relies on cross-correlation theory to detect abnormalities in the ECG signal and incorporates two cross-correlation steps. The first step detects an abnormality in a real-time ECG signal trace, while the second step identifies the corresponding disease. The optimization of search time is the main advantage of this algorithm.</description>
        <description>http://thesai.org/Downloads/FTC2017/25_Paper_288-ECG_Abnormality_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Future Augmented Reality in Endosurgery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090124</link>
        <id>10.14569/SpecialIssue.2018.090124</id>
        <doi>10.14569/SpecialIssue.2018.090124</doi>
        <lastModDate>2018-08-20T10:09:02.7970000+00:00</lastModDate>
        
        <creator>Bojan Nokovic</creator>
        
        <creator>Tian Zhang</creator>
        
        <subject>Augmented reality; image processing; endosurgery</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>We present technology that uses a computer-generated 3-D image inserted into a real-time endoscopic view. The generated image represents the safe zone, the area in which the tip of the surgical tool should stay during the operation. Movements of tools can be tracked using existing surgical tool navigation techniques based on tracking fiduciary markers. We describe a spatial measurement system and explain the process of virtual zone creation, the challenges related to the accuracy of the augmented reality system, and a calibration process that can be improved by machine learning techniques. We discuss possible future usage of this system in telesurgery, both on the ground and in space.</description>
        <description>http://thesai.org/Downloads/FTC2017/24_Paper_267-Future_Augmented_Reality_in_Endosurgery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost-Effective System for the Classification of Muscular Intent using Surface Electromyography and Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090123</link>
        <id>10.14569/SpecialIssue.2018.090123</id>
        <doi>10.14569/SpecialIssue.2018.090123</doi>
        <lastModDate>2018-08-20T10:09:02.2500000+00:00</lastModDate>
        
        <creator>Anmol Khanna</creator>
        
        <creator>Senthil Arumugam Muthukumaraswamy</creator>
        
        <subject>Electromyography; orthosis; prosthesis; neural network; muscular intent; Arduino; cost-effective</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper presents the implementation of a cost-effective system to classify muscular intent. A neural network is used for this purpose. After skin preparation, feature extraction, network training and real-time testing, an average overall classification accuracy of 93.3% over three possible gestures was obtained. Ultimately, the results obtained speak to the suitability of an Arduino-based system for the acquisition and decoding of muscular intent. This result is indicative of the potential of the Arduino microcontroller in this application, to provide effective performance at a far lower price-point than its competition.</description>
        <description>http://thesai.org/Downloads/FTC2017/23_Paper_149-Cost-Effective_System_for_the_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personal Food Computer: A New Device for Controlled-Environment Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090122</link>
        <id>10.14569/SpecialIssue.2018.090122</id>
        <doi>10.14569/SpecialIssue.2018.090122</doi>
        <lastModDate>2018-08-20T10:09:01.7200000+00:00</lastModDate>
        
        <creator>Eduardo Castell&#243; Ferrer</creator>
        
        <creator>Jake Rye</creator>
        
        <creator>Gordon Brander</creator>
        
        <creator>Tim Savas</creator>
        
        <subject>Controlled-environment agriculture; agricultural robotics; educational robotics; decentralised farming; open-source hardware; open-source software; citizen science</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Due to their interdisciplinary nature, devices for controlled-environment agriculture have the potential to become ideal tools not only for conducting research on plant phenology but also for creating curricula in a wide range of disciplines. Controlled-environment devices are increasing their functionalities as well as improving their accessibility. Traditionally, building one of these devices from scratch requires knowledge in fields such as mechanical engineering, digital electronics, programming, and energy management. However, the requirements of an effective controlled-environment device for personal use bring new constraints and challenges. This paper presents the OpenAg™ Personal Food Computer (PFC); a low cost, desktop size platform which targets not only plant phenology researchers but also hobbyists, makers, and teachers from elementary to high-school levels (K-12). The PFC is completely open-source and is intended to become a tool that can be used for collective data sharing and plant growth analysis. Thanks to its modular design, the PFC can be used in a large spectrum of activities.</description>
        <description>http://thesai.org/Downloads/FTC2017/22_Paper_472-Personal_Food_Computer_A_New_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fog-to-Cloud (F2C) Data Management for Smart Cities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090121</link>
        <id>10.14569/SpecialIssue.2018.090121</id>
        <doi>10.14569/SpecialIssue.2018.090121</doi>
        <lastModDate>2018-08-20T10:09:01.1730000+00:00</lastModDate>
        
        <creator>Amir Sinaeepourfard</creator>
        
        <creator>JORDI GARCIA ALMINANA</creator>
        
        <creator>Xavier Masip-Bruin</creator>
        
        <creator>Eva Mar&#237;n-Tordera</creator>
        
        <subject>Smart city; fog-to-cloud (F2C) computing; data management; data lifecycle model (DLC); data aggregation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Smart cities are the current technological solution to handle the challenges and complexity of growing urban density. Traditionally, smart city resource management relies on cloud based solutions where sensor data are collected to provide a centralized and rich set of open data. The advantages of cloud-based frameworks are their ubiquity, as well as an (almost) unlimited resource capacity. However, accessing data from the cloud implies large network traffic, high latencies usually not appropriate for real-time or critical solutions, as well as higher security risks. Alternatively, fog computing emerges as a promising technology to absorb these inconveniences. It proposes the use of devices at the edge to provide closer computing facilities, thereby reducing network traffic and latencies drastically while improving security. We have defined a new framework for data management in the context of a smart city through a global fog-to-cloud resources management architecture. This model has the advantages of both fog and cloud technologies, as it allows reduced latencies for critical applications while being able to use the high computing capabilities of cloud technology. In this paper, we present the data acquisition block of our framework and discuss its advantages. As a first experiment, we estimate the network traffic in this model during data collection and compare it with a traditional real system.</description>
        <description>http://thesai.org/Downloads/FTC2017/21_Paper_396-Fog-to-Cloud_F2C_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cooperative Human-Machine Interaction Warning Strategy for the Semi-Autonomous Driving Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090120</link>
        <id>10.14569/SpecialIssue.2018.090120</id>
        <doi>10.14569/SpecialIssue.2018.090120</doi>
        <lastModDate>2018-08-20T10:09:00.6270000+00:00</lastModDate>
        
        <creator>Pedro Arezes</creator>
        
        <creator>Susana Costa</creator>
        
        <creator>Paulo Sim&#245;es</creator>
        
        <creator>Nelson Costa</creator>
        
        <subject>Warning; driver; autonomous; human-machine interaction (HMI); vehicle</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This study is part of ongoing work regarding possible scenarios during autonomous driving, and it takes into consideration not only academic literature and industry updates but also the aspects described in the standards already disclosed. As autonomous driving systems become a more tangible reality, the development of an efficient warning strategy within the human-machine interaction (HMI) is paramount, for a range of reasons that include trust in these emerging systems. Several researchers have noted that a particular moment of semi-autonomous driving is of special interest: the driver’s role shift from passive monitoring of the vehicle to active control of the autonomous driving system. This study presents a cooperative approach to the vehicle-driver communication strategy, accounting for both human factors and the complexity of AD systems. A nexus diagram has been developed that comprehensively provides an alternative to the conventional static warning strategy, able to be customized in some specific traits, which programmers can later use to expeditiously implement this much needed strategy in the real context of semi-autonomous driving.</description>
        <description>http://thesai.org/Downloads/FTC2017/20_Paper_385-A_Cooperative_Human-Machine_Interaction_Warning_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Sensor for Monitoring Acoustic Induced Vibration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090119</link>
        <id>10.14569/SpecialIssue.2018.090119</id>
        <doi>10.14569/SpecialIssue.2018.090119</doi>
        <lastModDate>2018-08-20T10:09:00.0970000+00:00</lastModDate>
        
        <creator>Gymama Slaughter</creator>
        
        <subject>Acoustic; vibration; monitoring; wireless; sensors</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Acoustic induced vibration can cause museum art objects to deteriorate at a very rapid pace, thereby endangering the cultural heritage they conserve. Here, wireless triaxial MEMS accelerometer nodes were used to monitor the effect of loud, live music played during social events at the Walters Art Museum on mock art objects. During the social event, continuous wireless vibration activity monitoring was performed from sensors placed on respective blocks with the art object under test, providing real-time vibration activity information. The vibration induced by the music, as well as the vibration propagation through the specific object via the mounting or display mechanism, were evaluated. The loud music generated high acoustic energy that excited the object’s vibration modes and placed the object at risk of fatigue cracking and “walking”, all of which can potentially cause catastrophic damage in extreme cases. The acoustic field with an intensity of 90 dB induced the highest level of vibration. The display block orientations were observed to contribute to the vibration activity, wherein the vertical orientation induced higher levels of vibration than the horizontal orientation.</description>
        <description>http://thesai.org/Downloads/FTC2017/19_Paper_346-Wireless_Sensor_for_Monitoring_Acoustic_Induced_Vibration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blended Cryptography for Secured Data Transfer in Medical IoT Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090118</link>
        <id>10.14569/SpecialIssue.2018.090118</id>
        <doi>10.14569/SpecialIssue.2018.090118</doi>
        <lastModDate>2018-08-20T10:08:59.5500000+00:00</lastModDate>
        
        <creator>Revanesh M</creator>
        
        <creator>V. Sridar</creator>
        
        <subject>Biomedical; Internet of Things; cryptography; symmetric encryption; asymmetric encryption</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>To make the best use of the vast potential opportunities that accompany medical IoT sensor devices, security and privacy measures are expected to be built in as a fundamental requirement of these systems. Although cryptography proves to be a promising solution to the data breach problem, there is an inconclusive debate between symmetric encryption schemes, which provide reduced computation overhead, and asymmetric encryption schemes, which deliver better authentication in IoT devices. This research paper presents a brief analysis of various existing symmetric and asymmetric lightweight algorithms for IoT. Using simulation tests, we provide important analysis and considerations on the practical feasibility of these cryptographic algorithms in IoTs built using sensor networks, to help designers predict security performance under a set of constraints. We also propose a new methodology that blends symmetric and asymmetric encryption schemes for enhanced data transfer in healthcare-related IoT devices.</description>
        <description>http://thesai.org/Downloads/FTC2017/18_Paper_325-Blended_Cryptography_for_Secured_Data_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Recognition of Sign Language Protocol using Motion Sensing Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090117</link>
        <id>10.14569/SpecialIssue.2018.090117</id>
        <doi>10.14569/SpecialIssue.2018.090117</doi>
        <lastModDate>2018-08-20T10:08:59.0200000+00:00</lastModDate>
        
        <creator>Rita Tse</creator>
        
        <creator>AoXuan Li</creator>
        
        <creator>Zachary Chui</creator>
        
        <creator>Marcus Im</creator>
        
        <subject>Leap motion sensing device; American Sign Language detection and recognition; support vector regression; Internet of Things</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>This paper explores the possibility of implementing a gesture/motion detection and recognition system to recognize the American Sign Language (ASL) protocol for communications and control. Gestures are captured using a Leap Motion sensing device, with recognition based on a Support Vector Regression algorithm. There is a high correlation between the measured and predicted values in those samples; the system is able to recognize the sample data with almost 100% accuracy. Development of network connectivity and establishment of a communication protocol enable “smart” objects to collect and exchange data. With encouraging results given the limitations of the hardware, this work can be used to collect and exchange data with devices, sensors and information nodes, and provide a new solution for the Internet of Things in the future.</description>
        <description>http://thesai.org/Downloads/FTC2017/17_Paper_313-Detection_and_Recognition_of_Sign_Language_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Mining Architecture for Localized Optimization of Smart Heating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090116</link>
        <id>10.14569/SpecialIssue.2018.090116</id>
        <doi>10.14569/SpecialIssue.2018.090116</doi>
        <lastModDate>2018-08-20T10:08:58.4900000+00:00</lastModDate>
        
        <creator>Bolatzhan Kumalakov</creator>
        
        <creator>Lyazzat Ashikbayeva</creator>
        
        <subject>Distributed knowledge management; context recognition; smart heating</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Improvement in resource consumption is among the many important targets that smart heating systems aim to achieve. Such a system automatically manipulates a household’s physical artefacts (such as radiators, the heating boiler, etc.) and changes their operational regimes to achieve this goal. This can formally be represented as a parameter optimization problem. Although substantial research in this area has already been conducted, there is room for improvement on a collective scale. We adapted the context-aware parameter optimization architecture for geographically distributed machines to integrate multiple-peer knowledge into local optimization. This approach is novel because it redefines knowledge mining and interpretation functionality, and it employs clustering and machine learning algorithms. The current paper explores the sensitivity of a heating system’s local optimization to the mined knowledge, as this indicates whether the method is applicable at all. A computational experiment confirms such sensitivity and provides a basis for future research.</description>
        <description>http://thesai.org/Downloads/FTC2017/16_Paper_307-Knowledge_Mining_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Elevators in a Smart Building</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090115</link>
        <id>10.14569/SpecialIssue.2018.090115</id>
        <doi>10.14569/SpecialIssue.2018.090115</doi>
        <lastModDate>2018-08-20T10:08:57.9430000+00:00</lastModDate>
        
        <creator>Vasileios Zarikas</creator>
        
        <creator>Nurislam Tursynbek</creator>
        
        <subject>Smart buildings; smart houses; elevator; Bayesian networks; intelligent decision making</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Smart Cities, including Smart Buildings, are a fascinating, fast-developing research area. The present work describes an intelligent elevator integrated in the context of a smart building. A Bayesian network approach was designed to drive the decision actions of an elevator, according to information provided by fuzzy rules and by cameras with image recognition software. The aim was to build a decision engine capable of controlling the elevator’s actions in a way that improves user satisfaction. Both a sensitivity analysis and an evaluation study of the implemented model, according to several scenarios, are presented. The final algorithm proved to exhibit the desired behavior in 95% of the scenarios tested.</description>
        <description>http://thesai.org/Downloads/FTC2017/15_Paper_266-Intelligent_Elevators_in_a_Smart_Building.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Preliminary Investigation into Infrared Sensors in Wearables for Upper Extremity Motion Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090114</link>
        <id>10.14569/SpecialIssue.2018.090114</id>
        <doi>10.14569/SpecialIssue.2018.090114</doi>
        <lastModDate>2018-08-20T10:08:57.3970000+00:00</lastModDate>
        
        <creator>Carlo Menon</creator>
        
        <creator>Jordan Lui</creator>
        
        <creator>Kevin Andrews</creator>
        
        <creator>Andrea Ferrone</creator>
        
        <subject>Rehabilitation; stroke; spasticity; telerehabilitation; telerehab; rehabilitation devices; medical devices; wearable technology; biomedical engineering; machine learning; physical rehabilitation; wireless technology; optical sensing; sensor fusion; transf</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Studies demonstrate that monitoring and recording the movement of rehabilitation exercises can improve the degree of recovery of the patient. Technologies exist to track user movements, but they are often large, expensive, or require multiple units to be mounted on the user in different locations. Each of these can be a barrier to patient adoption of rehabilitation technologies inside the home. We propose a single unit which incorporates an inertial measurement unit (IMU) and infrared sensors to determine the orientation of the arm for various movements. The infrared sensors compensate for IMU drift errors, providing a sensor fusion solution. A novel optical wearable was created for detection of arm movement exercises in three-dimensional space that are consistent with stroke survivor exercises for spasticity rehabilitation. A study of five participants yielded a high average accuracy of 98% across participants, without requiring any normalization of results to the varying body sizes of participants. These findings indicate a strong inter-patient similarity in arm movement patterns. This inter-patient similarity implies the possibility of a transfer learning application, where various patient data can be used to collectively improve the accuracy of the predictive machine learning model. This could allow development of a medical device that is easily donned by the user for rehabilitation in the comfort of their own home, allowing more effective telerehabilitation.</description>
        <description>http://thesai.org/Downloads/FTC2017/14_Paper_241-A_Preliminary_Investigation_into_Infrared_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microgrid Stability Improvement by Optimization of Virtual Droop Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090113</link>
        <id>10.14569/SpecialIssue.2018.090113</id>
        <doi>10.14569/SpecialIssue.2018.090113</doi>
        <lastModDate>2018-08-20T10:08:56.8500000+00:00</lastModDate>
        
        <creator>Binu Krishnan U</creator>
        
        <creator>Mija S J</creator>
        
        <creator>Elizabeth P Cheriyan</creator>
        
        <subject>Droop controller; microgrid; eigenvalue analysis; particle swarm optimization</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The concept of the microgrid has become more relevant due to the increased use of distributed energy resources. However, the important factors to be considered are the ability to control the power flow and the stability of the system. The droop control strategy requires no communication system and realizes the “plug and play” function between source and load. The virtual impedance type droop control method is preferred to improve the transient response and power decoupling of the conventional droop method. In this paper, a state space model of such a system is developed based on small signal disturbances. Eigenvalue analysis of the system is also performed, and the parameters that determine the stability of the system are identified. The optimum values of these parameters are found by maximizing the stability of the system using the particle swarm optimization (PSO) technique. The simulation is done in MATLAB, and the results show that the optimized parameter values improve the stability of the system by shifting the eigenvalues as far as possible away from the imaginary axis on the left half of the s-plane.</description>
        <description>http://thesai.org/Downloads/FTC2017/13_Paper_240-Microgrid_Stability_Improvement_by_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Can Blockchain Protect Internet-of-Things?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090112</link>
        <id>10.14569/SpecialIssue.2018.090112</id>
        <doi>10.14569/SpecialIssue.2018.090112</doi>
        <lastModDate>2018-08-20T10:08:56.3200000+00:00</lastModDate>
        
        <creator>Hiroshi Watanabe</creator>
        
        <subject>Internet-of-Things (IoT); blockchain; security; physical chip identification (Physical Chip-ID); datachain; connected devices; logical address; physical address</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the Internet-of-Things, the number of connected devices is expected to be extremely large, i.e., more than tens of billions. It is, however, well known that security for the Internet-of-Things is still an open problem. In particular, it is difficult to certify the identification of connected devices and to prevent illegal spoofing. This is because conventional security technologies have advanced mainly to protect logical networks, not physical networks like the Internet-of-Things. In order to protect the Internet-of-Things with advanced security technologies, we propose a new concept (a datachain layer), which is a well-designed combination of physical chip identification and blockchain. With the proposed physical chip identification solution, the physical addresses of connected devices are uniquely connected to logical addresses that are protected by blockchain.</description>
        <description>http://thesai.org/Downloads/FTC2017/12_Paper_224-Can_Blockchain_Protect_Internet-of-Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Concept of a Generic Co-Simulation Platform for Energy Systems Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090111</link>
        <id>10.14569/SpecialIssue.2018.090111</id>
        <doi>10.14569/SpecialIssue.2018.090111</doi>
        <lastModDate>2018-08-20T10:08:55.7730000+00:00</lastModDate>
        
        <creator>Jianlei Liu</creator>
        
        <creator>Clemens Duepmeier</creator>
        
        <creator>Veit Hagenmeyer</creator>
        
        <subject>Smart grid; agent-based co-simulation platform; microservice; multi-domain energy system; communication; Representational State Transfer (REST); big data</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In order to model, design, execute, control and analyze co-simulations for, e.g., Smart Grids, a new scalable and generic system architecture of an agent-based co-simulation platform framework is presented in this article. Not only different kinds of simulators, e.g. for power grids and technical plants, but also various types of data sources and real hardware nodes, such as wind turbines, photovoltaic cells, electrical power grid equipment, etc., which are instrumented by measurement devices, can be seamlessly integrated into the co-simulation platform in order to model large transdisciplinary, multi-domain energy systems. By integrating Apache Kafka as the message exchange infrastructure into this configurable co-simulation platform, the realistic simulation of SCADA communication via standard communication network protocols and services for Smart Grids, including big data scenarios, can be realized. As basic approaches, container virtualization and microservices, namely Docker containers and a Representational State Transfer (REST) application programming interface (API), are used as an automated runtime environment to control and manage different simulation nodes on a (larger) computing cluster. The co-simulation platform also provides an easy-to-use web browser-based user interface, implemented using Angular2, to allow users to model, implement, perform and operate co-simulations for future energy system solutions without any setup or configuration on their local PC and without extensive IT knowledge. Furthermore, after the completion of simulations by the individual nodes, the results of a co-simulation run can be automatically stored in databases and then analyzed and visualized via a web user interface.</description>
        <description>http://thesai.org/Downloads/FTC2017/11_Paper_211-A_New_Concept_of_a_Generic_Co-Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CR-MEGA: Mutually Exclusive Guaranteed Access Control for Cognitive Radio Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090110</link>
        <id>10.14569/SpecialIssue.2018.090110</id>
        <doi>10.14569/SpecialIssue.2018.090110</doi>
        <lastModDate>2018-08-20T10:08:55.1830000+00:00</lastModDate>
        
        <creator>Muhammad Shafiq</creator>
        
        <creator>Seonghun Son</creator>
        
        <creator>Jin-Ghoo Choi</creator>
        
        <creator>Heejung Yu</creator>
        
        <subject>Cognitive radio networks; dynamic spectrum access; spectrum sensing; carrier sensing; CSMA/CA</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In Cognitive Radio (CR) networks, unlicensed Secondary Users (SUs) can occupy the white spaces of spectrum channels when licensed Primary Users (PUs) do not use them. Hence, for the operation of CR networks, it is critical to coordinate spectrum accesses of SUs and protect the ongoing communication of PUs. In this paper, we propose a new medium access control protocol for SUs, called Mutually Exclusive Guaranteed Access for Cognitive Radio networks (CR-MEGA). CR-MEGA adopts a dual sensing approach (i.e., carrier sensing and spectrum sensing) to avoid packet collisions with faraway PUs as well as nearby SUs. Our scheme performs well even in harsh conditions with highly active PUs, but the advantage comes with increased sensing delay. We analyze the throughput and delay of CR-MEGA using the Markov chain model, and investigate the impacts of various parameters with numerical results.</description>
        <description>http://thesai.org/Downloads/FTC2017/10_Paper_452-CR-MEGA_Mutually_Exclusive_Guaranteed_Access.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning-based Advanced Localization Method in Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090109</link>
        <id>10.14569/SpecialIssue.2018.090109</id>
        <doi>10.14569/SpecialIssue.2018.090109</doi>
        <lastModDate>2018-08-20T10:08:54.6200000+00:00</lastModDate>
        
        <creator>San Hlaing Myint</creator>
        
        <creator>Takuro Sato</creator>
        
        <subject>Fingerprint localization; machine learning; wireless communication</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>In the existing localization research area, most localization methods suffer from propagation loss because of multi-path effects and the sensitivity of wireless technology components, e.g. the Received Signal Strength Indicator (RSSI). This leads to misestimation by localization methods and degradation of estimation accuracy. In this paper, a new advanced localization method is proposed by considering the existing fingerprinting method from a different point of view. The K-nearest neighbor (KNN) algorithm is adopted to obtain the best estimation accuracy, and RSSI is used as the main feature for estimating the unknown position of a mobile user. As a new idea, we extend the estimation to a circle region instead of a single coordinate position with the aid of circle theory. The objective of this research is to improve the accuracy of localization methods and to mitigate the sensitivity of existing methods. In addition, the radius value optimization problem is investigated to analyze the effects of different radius values on localization accuracy. Finally, the sensitivity of the estimation accuracy of the proposed method is analyzed in terms of different norm functions.</description>
        <description>http://thesai.org/Downloads/FTC2017/9_Paper_409-Machine_Learning-based_Advanced_Localization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Secure Interoperability between Heterogeneous Blockchains using Smart Contracts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090108</link>
        <id>10.14569/SpecialIssue.2018.090108</id>
        <doi>10.14569/SpecialIssue.2018.090108</doi>
        <lastModDate>2018-08-20T10:08:54.0270000+00:00</lastModDate>
        
        <creator>Gaby G. Dagher</creator>
        
        <creator>Chandra L. Adhikari</creator>
        
        <creator>Tyler Enderson</creator>
        
        <subject>Blockchain; Ethereum; smart contract</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Achieving data confidentiality and privacy while maintaining secure access is essential in various fields, including in the medical sector. Implementing a blockchain-based technology to secure sensitive data ensures that the users own their data and have control over who can access it. While blockchain technology is still in its infancy, it is the cutting-edge of research in many industries and institutions. The decentralized nature of blockchain technology and the presence of smart contracts in Ethereum are two major features that can be utilized to create a novel data sharing and access system that is secure, flexible, and more reliable. In this paper, we investigate the use of smart contracts between heterogeneous blockchains for the purpose of achieving secure interoperability for data sharing and access control. As a proof of concept, we propose and implement a record management system for healthcare data, where access to healthcare providers’ databases is managed through a private blockchain, only available to healthcare providers, and patients access their medical records through a public blockchain. Additionally, we develop a set of smart contracts for each blockchain to control access, manage storage, and enable interoperability between the two blockchains.</description>
        <description>http://thesai.org/Downloads/FTC2017/8_Paper_491-Towards_Secure_Interoperability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Blockchain: A New Framework for Robotic Swarm Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090107</link>
        <id>10.14569/SpecialIssue.2018.090107</id>
        <doi>10.14569/SpecialIssue.2018.090107</doi>
        <lastModDate>2018-08-20T10:08:53.4200000+00:00</lastModDate>
        
        <creator>Eduardo Castell&#243; Ferrer</creator>
        
        <subject>Swarm robotics; decentralised systems; blockchain; distributed decision making; new business models</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. However, several of the heterogeneous characteristics that make them ideal for certain future applications — robot autonomy, decentralized control, collective emergent behavior, etc. — hinder the evolution of the technology from academic institutions to real-world problems. Blockchain, an emerging technology that originated in the Bitcoin field, demonstrates that by combining peer-to-peer networks with cryptographic algorithms, a group of agents can reach an agreement on a particular state of affairs and record that agreement without the need for a controlling authority. The combination of blockchain with other distributed systems, such as robotic swarm systems, can provide the necessary capabilities to make robotic swarm operations more secure, autonomous, flexible and even profitable. This position paper explains how blockchain technology can provide innovative solutions to four emergent issues in the swarm robotics research field. New security, decision making, behavior differentiation and business models for swarm robotic systems are described through case scenarios and examples. Finally, limitations and possible future problems that arise from the combination of these two technologies are described.</description>
        <description>http://thesai.org/Downloads/FTC2017/7_Paper_476-The_Blockchain_A_New_Framework_for_Robotic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain based Wine Supply Chain Traceability System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090106</link>
        <id>10.14569/SpecialIssue.2018.090106</id>
        <doi>10.14569/SpecialIssue.2018.090106</doi>
        <lastModDate>2018-08-20T10:08:52.7970000+00:00</lastModDate>
        
        <creator>KAMANASHIS BISWAS</creator>
        
        <creator>Vallipuram Muthukkumarasamy</creator>
        
        <creator>Wee Lum Tan</creator>
        
        <subject>Supply chain traceability; blockchain; consensus; miner; transparency</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>A wine supply chain traceability system has become a necessity due to increases in counterfeiting, adulteration, and the use of excessive preservatives and hazardous chemicals. To overcome these issues, the wine industry needs a traceability system that enables a consumer to verify the composition of each batch of wine from the grape growers to the retailers. However, most current systems are RFID- and web-based, and thus it is possible to counterfeit stored information as required. This study proposes a blockchain-based wine supply chain traceability system in which every transaction is recorded as a block in the chain and is visible to the relevant participants. These blocks of information are immutable, since any change to the recorded information will break the chain. In addition to providing a quality information management framework, the proposed traceability system enables transparency, safety, and security in the overall process from the grape to the bottle.</description>
        <description>http://thesai.org/Downloads/FTC2017/6_Paper_425-Blockchain_based_Wine_Supply_Chain_Traceability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Smart Contracts Activities: A Tensor based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090105</link>
        <id>10.14569/SpecialIssue.2018.090105</id>
        <doi>10.14569/SpecialIssue.2018.090105</doi>
        <lastModDate>2018-08-20T10:08:52.2170000+00:00</lastModDate>
        
        <creator>Jeremy Charlier</creator>
        
        <creator>Radu State</creator>
        
        <creator>Jean Hilger</creator>
        
        <subject>Tensors; CANDECOMP/PARAFAC decomposition; stochastic processes simulation</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Smart contracts are autonomous software executing predefined conditions. Two of the biggest advantages of smart contracts are secured protocols and transaction cost reduction. On the Ethereum platform, an open-source blockchain-based platform, smart contracts implement a distributed virtual machine on the distributed ledger. To avoid denial of service attacks and to monetize the services, payment transactions are executed whenever code is executed between contracts. It is thus natural to investigate whether predictive analysis is capable of forecasting these interactions. We address this issue and propose an innovative application of the tensor decomposition CANDECOMP/PARAFAC to the temporal link prediction of smart contracts. We introduce a new approach leveraging stochastic processes for series predictions based on the tensor decomposition that can be used for smart contract predictive analytics.</description>
        <description>http://thesai.org/Downloads/FTC2017/5_Paper_402-Modeling_Smart_Contracts_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain and Distributed Ledgers as Trusted Recordkeeping Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090104</link>
        <id>10.14569/SpecialIssue.2018.090104</id>
        <doi>10.14569/SpecialIssue.2018.090104</doi>
        <lastModDate>2018-08-20T10:08:51.6570000+00:00</lastModDate>
        
        <creator>Victoria L. Lemieux</creator>
        
        <subject>Archival science; authenticity; blockchain; distributed ledger; integrity; reliability; trust</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>Blockchains and distributed ledger technology promise trusted and immutable records in a wide variety of use cases involving recordkeeping, including real estate and healthcare. This paper presents a novel framework for evaluating the capability of innovative blockchain-based systems to deliver trustworthy recordkeeping based on archival science, an ancient science aimed at the long-term preservation of authentic records.</description>
        <description>http://thesai.org/Downloads/FTC2017/4_Paper_279-Blockchain_and_Distributed_Ledgers_as_Trusted.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-Controlled Privacy-Preserving User Profile Data Sharing based on Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090103</link>
        <id>10.14569/SpecialIssue.2018.090103</id>
        <doi>10.14569/SpecialIssue.2018.090103</doi>
        <lastModDate>2018-08-20T10:08:51.0800000+00:00</lastModDate>
        
        <creator>Ajay Kumar Shrestha</creator>
        
        <creator>Julita Vassileva</creator>
        
        <creator>Ralph Deters</creator>
        
        <subject>Privacy; user modeling; blockchain; data sharing; stream; latency; memory consumption</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The tremendous technological advancement of the last few decades has enabled many enterprises to collaborate better while making intelligent decisions. The use of information technology tools to obtain data about people’s everyday lives from various autonomous data sources, allowing unrestricted access to user data, has emerged as an important practical issue and has given rise to legal implications. Various innovative models for data sharing and management have privacy and centrality issues. To alleviate these limitations, we have incorporated blockchain into user modeling. In this paper, we construct a decentralized data sharing architecture with the MultiChain blockchain in the travel domain, which is also applicable to other similar domains including education, health, and sports. Businesses that operate in the tourism industry, such as travel and tour agencies, hotels and resorts, and shopping malls, are connected to the MultiChain and share their user profile data via streams in the MultiChain. The paper presents the hotel booking service of an imaginary hotel as one of the enterprise nodes, which collects users’ profile data with proper validation and allows users to decide which of their data to share, thus ensuring user control over their data and the preservation of privacy. The data from the repository is converted into an open data format while being shared via a stream in the blockchain, so that other enterprise nodes, after receiving the data, can easily convert and store it in their own repositories. The paper presents an evaluation of the performance of the model by measuring latency and memory consumption in three test scenarios that most affect the user experience. The node responded quickly in all of these cases, building a better and more engaging user experience. The paper also proposes a concept of the smart contract in the form of a finite state in the expanding domain of privacy-preserving data sharing and management.</description>
        <description>http://thesai.org/Downloads/FTC2017/3_Paper_127-User-Controlled_Privacy-Preserving_User_Profile_Data_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coinspermia: A Cryptocurrency Unchained</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090102</link>
        <id>10.14569/SpecialIssue.2018.090102</id>
        <doi>10.14569/SpecialIssue.2018.090102</doi>
        <lastModDate>2018-08-20T10:08:50.4700000+00:00</lastModDate>
        
        <creator>Thomas Portegys</creator>
        
        <subject>Cryptocurrency; Bitcoin; peer-to-peer network; distributed transactions; quorum system; blockchain</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The latency and throughput of blockchain-based cryptocurrencies are a major concern for their suitability as mainstream currencies and as transaction processors in general. The prevalent proof-of-work scheme, exemplified by Bitcoin, is a deliberately laborious effort: the time and energy required to mine blocks make the blockchain virtually immutable and assist in the consensus-reaching process. Coinspermia (coin=money + spermia=seed) is a different approach: transactions are concurrently seeded throughout a network of peer nodes to an extent sufficient to achieve high reliability of essential currency operations, including the fast transfer of coins from an owner to a recipient, and the prevention of double spending. A number of Bitcoin features are retained in Coinspermia, including transaction input-outputs and cryptographic addresses and signing, but no special proof-of-work is required to commit transactions. Instead, a client can be assured of an operation's completion when a quorum of network nodes acknowledges the operation, which can occur before the transaction finishes propagating through the network. Simulation substantiates improved latency and throughput.</description>
        <description>http://thesai.org/Downloads/FTC2017/2_Paper_76-Coinspermia_A_Cryptocurrency_Unchained.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blockchain and Git Repositories for Sticky Policies Protected OOXML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2018.090101</link>
        <id>10.14569/SpecialIssue.2018.090101</id>
        <doi>10.14569/SpecialIssue.2018.090101</doi>
        <lastModDate>2018-08-20T10:08:49.8930000+00:00</lastModDate>
        
        <creator>Grzegorz Spyra</creator>
        
        <creator>William J Buchanan</creator>
        
        <creator>Elias Ekonomou</creator>
        
        <subject>Sticky policies; git repositories; Blockchain</subject>
        <description>Special Issue(SpecialIssue), 9(1), 2018</description>
        <description>The paper discusses a possible cloud-based Information Rights Management (IRM) model extension with enhanced accountability for both a sticky policy and the attached data. This work complements research on secure data sharing with an Office Open XML (OOXML) package extended by a sticky policy in eXtensible Access Control Mark-up Language (XACML) format. The research used an Identity Based Encryption (IBE) primitive to securely bind the policy and the data together. The high availability required of a cloud service is here achieved using distributed system components. The leveraged technologies, the Git repository and the Blockchain, are not new; however, their application to an IRM system is novel and brings it closer to a universal approach and an open architectural construct.</description>
        <description>http://thesai.org/Downloads/FTC2017/1_Paper_44-Blockchain_and_Git_Repositories.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Parkinson&#39;s Disease through Acoustic Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090742</link>
        <id>10.14569/IJACSA.2018.090742</id>
        <doi>10.14569/IJACSA.2018.090742</doi>
        <lastModDate>2018-08-10T10:22:48.3600000+00:00</lastModDate>
        
        <creator>Imen Daly</creator>
        
        <creator>Zied Hajaiej</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Parkinson’s disease; LF model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Parkinson’s disease is a neurological disorder. It is the second most common disease after Alzheimer’s. Incidence rates for this disease are increasing rapidly with increasing life expectancy. The search for measures to diagnose the disease and monitor its symptoms is an important step, despite the number of challenges it presents. Among the symptoms related to this disease are voice disturbances, which notably occur in a form called hypokinetic dysarthria, characterized by poverty of gesture across all the characteristics of speech (phonatory, prosodic, articulatory, and rhythmic). Our goal is to conduct a study based on voice analysis at the level of the glottis, examining some early parameters measured using the LF model together with clinical manifestations, to help diagnose the disease.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_42-Detection_of_Parkinson_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context Aware SmartHealth Cloud Platform for Medical Diagnostics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090741</link>
        <id>10.14569/IJACSA.2018.090741</id>
        <doi>10.14569/IJACSA.2018.090741</doi>
        <lastModDate>2018-08-03T10:46:26.1370000+00:00</lastModDate>
        
        <creator>Sarah Shafqat</creator>
        
        <creator>Almas Abbasi</creator>
        
        <creator>Muhammad Naeem Ahmad Khan</creator>
        
        <creator>Muhammad Ahsan Qureshi</creator>
        
        <creator>Tehmina Amjad</creator>
        
        <creator>Hafiz Farooq Ahmad</creator>
        
        <subject>Healthcare analytics; medical diagnostics; HL7; cloud platform; SmartHealth; big data </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Healthcare has seen a great evolution in the current era in terms of new computer technologies. Intensive medical data is generated, which opens up research in healthcare analytics. Coping with this intensive data while making it meaningful, so as to deliver knowledge and support decision making, is the most important task. Deducing the authenticity of the data on the basis of precision, correctness, associations, and true meaning is important to validate the understanding of correct semantics. In the case of medical diagnosis, forming an accurate understanding of associations while removing ambiguity and building a correct picture of the case is of utmost importance. To arrive at the right metrics for the diagnostic solution, we have explored the known criteria for validating the healthcare analytics techniques involved in forming a diagnosis, which results in the betterment and safety of patients under observation and heading towards possible treatments. In this work, we propose a thematic taxonomy for the comparison of existing healthcare analytics techniques, with emphasis on diabetes and its underlying diseases. This analysis led us to propose a data model for a hybrid distributed simulation model for a future Context Aware SmartHealth cloud platform for diagnostics. The platform is designed to inherit the smartness of unsupervised learning, which in turn would keep updating itself under supervised learning by qualified experts. Finally, the accuracy would be determined using the HUM approach with biomarkers, or a better accuracy model than AUC. A recommended action plan is also presented.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_41-Context_Aware_SmartHealth_Cloud_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing in Cloud Complex Systems using Adaptive Fuzzy Neural Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090740</link>
        <id>10.14569/IJACSA.2018.090740</id>
        <doi>10.14569/IJACSA.2018.090740</doi>
        <lastModDate>2018-08-03T10:46:25.0130000+00:00</lastModDate>
        
        <creator>Mohammad Taghi Sadeghi</creator>
        
        <subject>Load balancing; cloud computing; adaptive fuzzy neural system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Load balancing, reliability, and traffic are among the service-oriented issues in software engineering; cloud computing is no exception to this rule and poses many challenges for experts in this field. Considering the importance of the load balancing process in cloud computing, the purpose of this paper is to provide an appropriate solution for load balancing in complex cloud systems using an adaptive fuzzy neural system. This system consists of four layers, and a particular operation is performed at each layer. The results of the experiments show that the system performs better on the criteria mentioned above (balancing, traffic, and reliability).</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_40-Load_Balancing_in_Cloud_Complex_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insights on Car Relocation Operations in One-Way Carsharing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090739</link>
        <id>10.14569/IJACSA.2018.090739</id>
        <doi>10.14569/IJACSA.2018.090739</doi>
        <lastModDate>2018-08-01T11:30:24.5300000+00:00</lastModDate>
        
        <creator>Rabih Zakaria</creator>
        
        <creator>Mohammad Dib</creator>
        
        <creator>Laurent Moalic</creator>
        
        <creator>Alexandre Caminada</creator>
        
        <subject>Carsharing; car relocation; ILP; greedy algorithm; CPLEX; green city</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>A one-way carsharing system is a mobility service that offers short-term car rental to its users in an urban area. This kind of service is attractive since users can pick up a car from a station and return it to any other station, unlike round-trip carsharing systems, where users have to return the car to the station of departure. Nevertheless, uneven user demand for cars and for parking places throughout the day poses a challenge for the carsharing operator, who must rebalance the cars among stations to satisfy the maximum number of user requests. We refer to a rebalancing operation as a car relocation. These operations increase the cost of operating the carsharing system; as a result, optimizing them is crucial in order to reduce the operator's cost. In this paper, the problem is modeled as an Integer Linear Programming (ILP) model. We then present three different car relocation policies that we implement in a greedy search algorithm. The comparison between the three policies shows that car relocation operations that do not consider future demand do not effectively decrease rejected demands; on the contrary, they can generate more rejected demands. Results show that the solutions provided by our greedy algorithm, when using a good policy, are competitive with CPLEX solutions. Furthermore, adding stochastic modifications to the input data shows that the results of the two presented approaches are highly affected by the input demand, even after adding threshold value constraints.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_39-Insights_on_Car_Relocation_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection and Prevention Systems as a Service in Cloud-based Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090738</link>
        <id>10.14569/IJACSA.2018.090738</id>
        <doi>10.14569/IJACSA.2018.090738</doi>
        <lastModDate>2018-07-31T18:23:54.1230000+00:00</lastModDate>
        
        <creator>Khalid Alsubhi</creator>
        
        <creator>Hani Moaiteq AlJahdali</creator>
        
        <subject>Facility Location Problem; Intrusion detection and Prevention Systems; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Intrusion Detection and Prevention Systems (IDPSs) are standalone, complex hardware, expensive to purchase, change, and manage. The emergence of Network Function Virtualization (NFV) and Software Defined Networking (SDN) mitigates these challenges and delivers middlebox functions as virtual instances. Moreover, cloud computing has become a very cost-effective model for sharing large-scale services in recent years. Features such as portability, isolation, live migration, and customizability of virtual machines for high-performance computing have attracted enterprise customers to move their in-house IT data centers to the cloud. In this paper, we formulate the placement of Intrusion Detection and Prevention Systems (IDPS) and introduce a model called the Incremental Mobile Facility Location Problem (IMFLP) to study the IDPP problem. Moreover, we propose a novel and efficient solution called Adaptive Facility Location (AFL) to efficiently solve the optimization problem introduced in the IMFLP model. The effectiveness of our solution is evaluated through realistic simulation studies and compared with other popular online facility location algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_38-Intrusion_Detection_and_Prevention_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Fault Tolerance Technique for Through-Silicon-Vias in 3-D ICs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090737</link>
        <id>10.14569/IJACSA.2018.090737</id>
        <doi>10.14569/IJACSA.2018.090737</doi>
        <lastModDate>2018-07-31T18:23:53.5600000+00:00</lastModDate>
        
        <creator>Mohamed BENABDELADHIM</creator>
        
        <creator>Wael DGHAIS</creator>
        
        <creator>Fakhreddine ZAYER</creator>
        
        <creator>Belgacem HAMDI</creator>
        
        <subject>Fault tolerance; 3D-IC; TSV; IBIST; BISR; TDMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Three-dimensional integrated circuits (3D-ICs) based on Through-Silicon-Via (TSV) interconnection technology have appeared as a viable solution to overcome problems of cost, reliability, yield, and stacking area. In order to be commercially feasible, the 3D-IC yield must be as high as possible, which requires tested and repairable TSVs. To meet this challenge, an interconnect built-in self-test (IBIST) methodology for 3D-ICs is integrated with the aim of testing defective TSVs. Once the interconnections have been tested, the results extracted from IBIST initiate the repair of defective TSVs based on a built-in self-repair (BISR) structure. This paper superposes two fault tolerance techniques, namely redundant TSVs and time division multiplexing access (TDMA), for the case of multiple defective TSVs. This novel repair architecture increases the yield of 3D-ICs under high failure rates, leading to 100% reparability at a 30% failure rate. A parallel processing approach for the proposed structure is presented to accelerate the test and repair operations. Experimental results achieved with the proposed methodology show good performance in terms of repair rate and yield.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_37-An_Efficient_Fault_Tolerance_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scientific Articles Exploration System Model based in Immersive Virtual Reality and Natural Language Processing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090736</link>
        <id>10.14569/IJACSA.2018.090736</id>
        <doi>10.14569/IJACSA.2018.090736</doi>
        <lastModDate>2018-07-31T18:23:52.6100000+00:00</lastModDate>
        
        <creator>Luis Alfaro</creator>
        
        <creator>Ricardo Linares</creator>
        
        <creator>Jose Herrera</creator>
        
        <subject>Immersive virtual environment; human computer interaction; natural user interfaces; natural language processing; Oculus Rift</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>After carrying out a historical review and identifying the state of the art in relation to interfaces for the exploration of scientific articles, the authors propose a model based on an immersive virtual environment, natural user interfaces, and natural language processing, which provides an excellent experience for the user and allows for better use of some of the user's capabilities, for example, intuition and cognition in 3-dimensional environments. In this work, the Oculus Rift and Leap Motion hardware devices are used. This work aims to contribute a tool that would facilitate and optimize the arduous task of reviewing literature in scientific databases. The case study is the exploration and information retrieval of scientific articles using ALICIA (a scientific database of Peru). Finally, conclusions and recommendations for future work are laid out and discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_36-Scientific_Articles_Exploration_System_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Global Citation Impact rather than Citation Count</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090735</link>
        <id>10.14569/IJACSA.2018.090735</id>
        <doi>10.14569/IJACSA.2018.090735</doi>
        <lastModDate>2018-07-31T18:23:51.9230000+00:00</lastModDate>
        
        <creator>Gohar Rehman Chughtai</creator>
        
        <creator>Jia Lee</creator>
        
        <creator>Muhammad Mehran Arshad khan</creator>
        
        <creator>Rashid Abbasi</creator>
        
        <creator>Asif Kabir</creator>
        
        <creator>Muhammad Arshad Shehzad Hassan</creator>
        
        <subject>Citation weighting; popular; global citations; prestigious; Global Citation Impact (GCI); research impact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The ongoing explosion in the volume of scientific literature available today prevents researchers from efficiently discerning relevant from irrelevant content. Researchers are persistently interested in impactful papers, authors, and venues in their respective fields. The impact of an article depends on the citations it receives, but a citation count alone cannot give readers in-depth information about the article. For that reason, some articles are quantified unfairly on the basis of a citation count. In this paper, the Global Citation Impact (GCI) is proposed, which addresses the issue of treating all citations of a paper equally. Intuitively, the papers citing a paper are not all of the same worth. The proposed index not only considers the number of citations (popularity), as many existing methods do, but also considers the worth of those citations (prestige). Results and discussion show that a researcher whose work is cited by prestigious papers receives a higher rank, which is a fairer crediting of research impact.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_35-Global_Citation_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Resource Recommendation Approach based on Co-Working History</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090734</link>
        <id>10.14569/IJACSA.2018.090734</id>
        <doi>10.14569/IJACSA.2018.090734</doi>
        <lastModDate>2018-07-31T18:23:51.3630000+00:00</lastModDate>
        
        <creator>Nada Mohammed Abdulhameed</creator>
        
        <creator>Iman M. A. Helal</creator>
        
        <creator>Ahmed Awad</creator>
        
        <creator>Ehab Ezat</creator>
        
        <subject>Business process; process instance; co-working history; human-resource recommender criteria; harmony</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Recommending the right resource to execute the next activity of a running process instance is of utmost importance for the overall performance of the business process, the resource, and the whole organization. Several approaches have recommended a resource based on the task requirements and the resource's capabilities. Moreover, the process execution history and logs have been used to better recommend a resource based on different human-resource recommender criteria, such as frequency and speed of execution. These approaches based the recommendation on the individual execution history of the task to be allocated to the resource. In this paper, a novel approach based on the co-working history of resources is proposed. This approach considers the resources that executed the previous tasks in the currently running process instance, and then recommends a resource that has the best harmony with the rest of the resources.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_34-A_Resource_Recommendation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Cyberbullying Detection in Spanish-language Social Networks using Sentiment Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090733</link>
        <id>10.14569/IJACSA.2018.090733</id>
        <doi>10.14569/IJACSA.2018.090733</doi>
        <lastModDate>2018-07-31T18:23:50.6430000+00:00</lastModDate>
        
        <creator>Rolfy Nixon Montufar Mercado</creator>
        
        <creator>Hernan Faustino Chacca Chuctaya</creator>
        
        <creator>Eveling Gloria Castro Gutierrez</creator>
        
        <subject>Cyberbullying; social media analytics; sentiment analysis; tokenization; stemming; bag of words</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Cyberbullying is a growing problem in our society that can have fatal consequences and can appear in digital text, for example on online social networks. Nowadays there is a wide variety of work focused on detecting it in English-language digital texts; however, in the Spanish language there are few studies that address this issue. This paper aims to detect this cybernetic harassment in Spanish-language social networks. Sentiment analysis techniques are used, such as bag of words, elimination of signs and numbers, tokenization, and stemming, as well as a Bayesian classifier. The data used for training the Bayesian classifier were obtained from the Spanish Dictionary of Affect in Language (SDAL), a database of more than 2500 words manually evaluated in three affective dimensions (pleasantness, activation, and imagery), together with 595 additional words obtained following the same procedure as SDAL with the help of the members of the Research Center, Technology Transfer and Software Development. As a result, the software developed achieves 93% success in the validation tests carried out.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_33-Automatic_Cyberbullying_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wakes-Ship Removal on High-Resolution Optical Images based on Histograms in HSV Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090732</link>
        <id>10.14569/IJACSA.2018.090732</id>
        <doi>10.14569/IJACSA.2018.090732</doi>
        <lastModDate>2018-07-31T18:23:49.7700000+00:00</lastModDate>
        
        <creator>Fidel Indalecio Mamani Maquera</creator>
        
        <creator>Eveling Gloria Castro Gutierrez</creator>
        
        <subject>Wakes-ship removal; optical remote sensing; ship detection; HSV color space; histograms; intersection over union</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Ship detection in optical remote sensing images is receiving great attention; however, some images, called wakes-ship images, have not yet been taken into account. Current work on ship detection focuses on in-shore detection, where ships are at rest; furthermore, these methods achieve a high Intersection over Union (IoU), above 70%, but when computing IoU using only wakes-ship images the ratio is 22%. In this paper, a new framework is presented to improve ship segmentation in wakes-ship images. To achieve this goal, the HSV color space and histograms were considered. First, ship detection was performed using state-of-the-art ship detection methods. Second, bin histograms in HSV color space were used to obtain the colors belonging to wakes. Finally, the wakes were removed from the ships using some discriminative properties. In this way, the IoU performance for wake-ship segmentation increases from 22% to 63%, an improvement of 186%.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_32-Wakes_Ship_Removal_on_High_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Quantification of Non-Calcified Coronary Plaques in Cardiac CT Angiographic Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090731</link>
        <id>10.14569/IJACSA.2018.090731</id>
        <doi>10.14569/IJACSA.2018.090731</doi>
        <lastModDate>2018-07-31T18:23:49.2400000+00:00</lastModDate>
        
        <creator>M Moazzam Jawaid</creator>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Nasrullah Pirzada</creator>
        
        <creator>Junaid Baloch</creator>
        
        <creator>C.C. Reyes-Aldasoro</creator>
        
        <creator>Greg Slabaugh</creator>
        
        <subject>Coronary segmentation; non-calcified plaques; vascular quantification; coronary wall analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The high mortality rate associated with coronary heart disease (CHD) has driven intensive research in cardiac image analysis. The advent of computed tomography angiography (CTA) has turned non-invasive diagnosis of cardiovascular anomalies into reality, as calcified coronary plaques can be easily identified due to their high intensity values. However, detection and quantification of non-calcified plaques in CTA is still a challenging problem because of their lower intensity values, which are often similar to those of nearby blood and muscle tissues. In this work, we propose a Bayesian posterior-based model for precise quantification of non-calcified plaques in CTA imagery. The only indicator of non-calcified plaques in CTA is their relatively lower intensity; hence, we exploited intensity variations to discriminate voxels into lumen and plaque classes. In the first step, we computed the vessel-wall thickness from normal coronary segments. In the subsequent step, we removed the vessel wall from the segmented tree and employed a Gaussian Mixture Model to compute optimal distribution parameters. In the final step, the distribution parameters were employed in a Bayesian posterior model to classify voxels into lumen or plaque. A total of 18 CTA volumes were analyzed in this work using two different approaches. According to the experimental results, the mean Jaccard overlap is around 88% with respect to the manual expert. In terms of sensitivity, specificity and accuracy, the proposed method achieves 84.13%, 79.15% and 82.02%, respectively. The experimental results show that the proposed plaque quantification method achieves accuracy equivalent to that of human experts.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_31-Automated_Quantification_of_Non_Calcified_Coronary_Plaques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Textual Password Scheme for Better Security and Memorability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090730</link>
        <id>10.14569/IJACSA.2018.090730</id>
        <doi>10.14569/IJACSA.2018.090730</doi>
        <lastModDate>2018-07-31T18:23:48.6630000+00:00</lastModDate>
        
        <creator>Hina Bhanbhro</creator>
        
        <creator>Shah Zaman Nizamani</creator>
        
        <creator>Syed Raheel Hassan</creator>
        
        <creator>Sheikh Tahir Bakhsh</creator>
        
        <creator>Madini O.Alassafi</creator>
        
        <subject>Security; usability; alphanumeric passwords; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The traditional textual password scheme provides a large number of password combinations, but users generally use only a small portion of the available password space. Complex textual passwords are difficult to remember, so most users choose short passwords that contain dictionary words. Because of the short length and dictionary words, textual passwords become easy to crack through offline guessability attacks. The traditional textual password scheme is also weak against keystroke logger attacks because alphanumeric characters are inserted directly into the password field. In this paper, enhancements are proposed to the registration and login screens of the traditional textual password scheme to improve security against offline guessability attacks and keystroke logger attacks. The proposed registration screen also improves the memorability of traditional textual passwords through visual cues and a pattern-based approach. In the proposed login screen, passwords are inserted indirectly into the password field to resist keystroke logger attacks. A comparative analysis between passwords created with the traditional and the proposed pattern-based approach is presented. The testing results show that users create stronger, higher-entropy passwords with the proposed pattern-based approach than with the traditional textual password approach.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_30-Enhanced_Textual_Password_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm that Prevents SPAM Attacks using Blockchain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090729</link>
        <id>10.14569/IJACSA.2018.090729</id>
        <doi>10.14569/IJACSA.2018.090729</doi>
        <lastModDate>2018-07-31T18:23:47.6800000+00:00</lastModDate>
        
        <creator>Koichi Nakayama</creator>
        
        <creator>Yutaka Moriyama</creator>
        
        <creator>Chika Oshima</creator>
        
        <subject>Cryptocurrency; wallet account; Mail Send Coin (MSC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>There are many systems and methods for preventing spam attacks. However, at present there is no specific tried-and-true method for preventing such attacks. In this paper, we propose an algorithm, “SAGA BC”, to prevent spam attacks using a blockchain technique and demonstrate its effectiveness in a simulation experiment. A person who sends an e-mail using the “SAGA BC” must pay the processing cost in cryptocurrency. If an e-mail sent using this algorithm is received normally at the destination e-mail account, this fee is refunded. However, much spam is not received normally, because the addresses of spam e-mails are indiscriminate. If a spammer sends spam using the “SAGA BC”, he/she will lose the cryptocurrency fee for each such message. Thus, if using the “SAGA BC” to send e-mail becomes standard practice for the general public, receiving e-mail servers and/or mailers will be able to easily judge incoming messages sent without the “SAGA BC”, because spammers cannot use the “SAGA BC” without losing their cryptocurrency.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_29-An_Algorithm_that_Prevents_SPAM_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Sentiment Polarity of Unstructured Multi-Language Text from Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090728</link>
        <id>10.14569/IJACSA.2018.090728</id>
        <doi>10.14569/IJACSA.2018.090728</doi>
        <lastModDate>2018-07-31T18:23:47.0400000+00:00</lastModDate>
        
        <creator>Saad Ahmed</creator>
        
        <creator>Saman Hina</creator>
        
        <creator>Raheela Asif</creator>
        
        <subject>Social media; sentiment analysis; data mining; cellular networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>In recent years, Twitter has caught the attention of many researchers because it is growing very rapidly in terms of the number of users, and because all the data present as tweets on Twitter is public in nature, whereas on other social media networks such as Facebook the data is not completely public, as users can restrict their posts to the users on their friend list. In this research study, aspect-based sentiment analysis (ABSA) was performed on data acquired from social media related to the major cellular network companies of Pakistan (Telenor Pakistan, Mobilink Jazz, Zong, Warid and Ufone). For this research, we specifically selected tweets that are not only in English and Roman Urdu but also in a mixture of the two languages. We employed natural language processing (NLP) techniques for pre-processing the dataset and machine learning (ML) techniques to detect the sentiments present in the data. The results are interesting and informative, especially for the policy makers of cellular companies, who can use this information to improve the performance of their services. In comparison with state-of-the-art algorithms, the bagging algorithm with this framework on the acquired dataset produced an F-score of 92.25, which is a very encouraging outcome of this research work.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_28-Detection_of_Sentiment_Polarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation and Analysis of Bio-Inspired Optimization Techniques for Bill Estimation in Fog Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090727</link>
        <id>10.14569/IJACSA.2018.090727</id>
        <doi>10.14569/IJACSA.2018.090727</doi>
        <lastModDate>2018-07-31T18:23:46.1670000+00:00</lastModDate>
        
        <creator>Hafsa Arshad</creator>
        
        <creator>Hasan Ali Khattak</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Assad Abbas</creator>
        
        <creator>Zoobia Ameer</creator>
        
        <subject>Cloud computing; fog computing; bio-inspired algorithms; pricing; cloudlets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>In light of constant developments in the realm of information and communication technologies, large-scale businesses and Internet service providers have realized the limitations of the data storage capacity available to them. This led organizations to cloud computing, a concept of sharing resources among different service providers by renting these resources through service level agreements. Fog computing is an extension of the cloud computing architecture in which resources are brought closer to the consumers. Fog computing is distinct from cloud computing in that it provides storage services along with computing resources. To use these services, organizations have to pay according to their usage. In this paper, two nature-inspired algorithms, i.e., Pigeon Inspired Optimization (PIO) and the Binary Bat Algorithm (BBA), are compared to regulate the effective management of resources so that the cost of resources can be curtailed and billing can be achieved by calculating the resources utilized under the service level agreement. PIO and BBA are used to evaluate the energy utilization of cloudlets or edge nodes, which can subsequently be used to approximate utilization and the bill through a Time of Use pricing scheme. We appraise the above-mentioned techniques to evaluate their performance with respect to bill estimation based on the usage of fog servers. With respect to the utilization of resources and reduction of the bill, simulation results reveal that BBA gives markedly better results than PIO.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_27-Evaluation_and_Analysis_of_Bio_Inspired_Optimization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Practical Approach for Evaluating and Prioritizing Situational Factors in Global Software Project Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090726</link>
        <id>10.14569/IJACSA.2018.090726</id>
        <doi>10.14569/IJACSA.2018.090726</doi>
        <lastModDate>2018-07-31T18:23:45.4330000+00:00</lastModDate>
        
        <creator>Kanza Gulzar</creator>
        
        <creator>Jun Sang</creator>
        
        <creator>Adeel Akbar Memon</creator>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Xiaofeng Xia</creator>
        
        <creator>Hong Xiang</creator>
        
        <subject>Global software development (GSD); Situational Factors (SFs); Fuzzy Analytic Hierarchy Process (FAHP); Multi criteria decision-making (MCDM); fuzzy set theory; sensitivity analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>There has been an enormous increase in globalization, which has led to more cooperation and competition across boundaries. Software engineering, particularly distributed software development (DSD) and global software development (GSD), is evolving rapidly and presents several challenges, such as geographical separation, temporal differences, cultural variations, and management strategies. As a result, a variety of situational factors (SFs) arise that cause challenging problems in software development. Both the literature and a real-world software industry study revealed that the extent of the effect of SFs may vary for a given software project. Project executives need to concentrate on the right SFs for the successful development of a specific project. This work first examines the optimal and most well-balanced GSD-related SFs and then presents a mechanism for prioritizing the SFs to better understand the extent to which an SF generally affects GSD. A set of 56 SFs in 11 categories is identified and analyzed in this research. A fuzzy set theory based multi-criteria decision making (MCDM) technique, the fuzzy analytic hierarchy process (FAHP), is proposed to extract the SFs that have the strongest effects on GSD. The proposed technique is intelligent and automated and can be customized to suit specific conditions and environments. Thus, it can provide support for the much-needed variation that is the hallmark of such software development environments. A case study of a global company working in collaboration on a project JKL was selected to identify and prioritize the most challenging SFs. A sensitivity analysis is carried out to evaluate the extent of the impact of highly ranked SFs related to the JKL project.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_26-A_Practical_Approach_for_Evaluating_and_Prioritizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data-driven based Fault Diagnosis using Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090725</link>
        <id>10.14569/IJACSA.2018.090725</id>
        <doi>10.14569/IJACSA.2018.090725</doi>
        <lastModDate>2018-07-31T18:23:44.6700000+00:00</lastModDate>
        
        <creator>Shakir M. Shaikh</creator>
        
        <creator>Imtiaz A. Halepoto</creator>
        
        <creator>Nazar H. Phulpoto</creator>
        
        <creator>Muhammad S. Memon</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <creator>Asif A. Laghari</creator>
        
        <subject>Fault Diagnosis; Principal Component Analysis; Multivariate Statistical Approach; Tennessee Eastman Chemical Plant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Modern industrial systems are growing day by day and, accordingly, their complexity is also increasing. At the same time, design and operations have become a key focus of researchers seeking to improve production systems. To cope with these challenges, data-driven techniques such as principal component analysis (PCA) are popular aids to working systems. Bulk data from sensor measurements is often available in such industrial systems. Considering modern industrial systems and their economic benefits, fault diagnostic techniques have been studied in depth, for example, techniques that consider the process data as the key element. In this paper, faults are detected with a data-driven approach using PCA. In particular, faults are detected using the T^2 and Q statistics. In this process, PCA projects large data into smaller dimensions while preserving all the important information of the process. In order to understand the impact of the technique, the Tennessee Eastman chemical plant is considered for the performance evaluation.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_25-Data_Driven_based_Fault_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effects of Modulation Index on Harmonics of SP-PWM Inverter Supplying Universal Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090724</link>
        <id>10.14569/IJACSA.2018.090724</id>
        <doi>10.14569/IJACSA.2018.090724</doi>
        <lastModDate>2018-07-31T18:23:43.8430000+00:00</lastModDate>
        
        <creator>Asif A. Solangi</creator>
        
        <creator>Mehr Gul</creator>
        
        <creator>Rameez Shaikh</creator>
        
        <creator>Farhana Umer</creator>
        
        <creator>Noman Khan Pathan</creator>
        
        <creator>Zeeshan Anjum Memon</creator>
        
        <subject>Harmonics; modulation index; SP-PWM inverter; universal motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>This manuscript presents the effects of changing modulation indices on the current and voltage harmonics of a universal motor supplied by a single-phase PWM (SP-PWM) inverter; the effects have been analyzed with both simulation and an experimental setup. For variable-speed applications, a universal motor can be controlled either by a phase angle control drive or by an SP-PWM inverter drive. The SP-PWM inverter-fed drive is a common technique used to adjust the voltage applied to the motor so that variable speed can be obtained. With the application of an SP-PWM inverter-fed drive, harmonics are generated because of the power electronic devices. According to IEEE Standard 519, the total harmonic distortion (THD) must be within 5%. In this paper, the modulation index (MI) is used to analyze the THD content, and its variation alters the harmonic content. The effects are also analyzed through an experimental setup in order to validate the system performance. In future work, keeping the modulation index constant, different PWM strategies can be employed in order to decrease harmonics.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_24-Effects_of_Modulation_Index_on_Harmonics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Bat Algorithm based on Novel Initialization Technique for Global Optimization Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090723</link>
        <id>10.14569/IJACSA.2018.090723</id>
        <doi>10.14569/IJACSA.2018.090723</doi>
        <lastModDate>2018-07-31T18:23:43.1700000+00:00</lastModDate>
        
        <creator>Waqas Haider Bangyal</creator>
        
        <creator>Jamil Ahmad</creator>
        
        <creator>Hafiz Tayyab Rauf</creator>
        
        <creator>Sobia Pervaiz</creator>
        
        <subject>Bat algorithm; local optima; exploration and exploitation; quasi-random sequence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The bat algorithm (BA) is a nature-inspired metaheuristic algorithm which is widely used to solve real-world global optimization problems. BA is a population-based intelligent stochastic search technique that emerged from the echolocation features of bats and mimics their foraging behavior. One of the major issues faced by BA is that it is frequently trapped in local optima while handling complex real-world problems. In this study, a new variant of BA named the improved bat algorithm (I-BAT) is proposed. The improved bat algorithm modifies the standard BA by enhancing its exploitation capabilities and, secondly, by applying the quasi-random Torus sequence for swarm initialization to overcome issues of convergence and diversity. Population initialization is a vital factor in BA, which considerably influences the diversity and convergence of the swarm. In order to improve diversity and convergence, quasi-random sequences are more useful for initializing the population than a random distribution. The proposed strategy is applied to standard benchmark functions that are extensively used in the literature. The experimental results illustrate the superiority of the proposed technique, and the simulation results verify its efficiency over the benchmark algorithm implemented for function optimization.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_23-An_Improved_Bat_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Data Envelopment Analysis in Development and Assessment of Sustainability Across Economic, Environmental and Social Dimensions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090722</link>
        <id>10.14569/IJACSA.2018.090722</id>
        <doi>10.14569/IJACSA.2018.090722</doi>
        <lastModDate>2018-07-31T18:23:42.6400000+00:00</lastModDate>
        
        <creator>Hamid Hosseini</creator>
        
        <creator>Abbas Ali Noura</creator>
        
        <creator>Sara Fanati Rashidi</creator>
        
        <subject>Data envelopment analysis; desirable and undesirable outputs; strong and weak disposability; sustainability efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Recently, senior managers have been paying much more attention to the environmental aspects of decision-making units. Technically, the global economy is inextricably connected to the environment, as it is heavily dependent on the extraction and exploitation of natural resources. In this article, we propose a number of models for efficiency evaluation that combine the growing concepts in environmental areas with social and economic subjects. Generally speaking, if economic growth is to be continuous and effective in the long term, it must be based on a combination of economic, environmental and social components. The existing literature on data envelopment analysis (DEA) is often based on economic efficiency. However, due to environmental pollution at a global level, there have been recent studies on sustainability efficiency with a focus on environmental and social aspects; these studies were limited, however, and left much room for further research. The present study evaluates the efficiency of decision-making units using social, economic and environmental indicators, and tries to minimize the flaws of DEA in the proposed models by making relative comparisons to previous models.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_22-Applications_of_Data_Envelopment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Polynomial Pool-Based Key Pre-Distribution Protocol for Wireless Sensor Network Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090721</link>
        <id>10.14569/IJACSA.2018.090721</id>
        <doi>10.14569/IJACSA.2018.090721</doi>
        <lastModDate>2018-07-31T18:23:41.6730000+00:00</lastModDate>
        
        <creator>Malek Ben Amira</creator>
        
        <creator>Mayssa Bouraoui</creator>
        
        <creator>Noureddine Boulajfen</creator>
        
        <subject>Key management; wireless sensor network (WSN); key pre-distribution schemes; polynomial pool-based KPDS; PIKE; group-based KPDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Nowadays, the wireless sensor network (WSN) has been established as a leading emerging technology in the field of remote-area distributed sensing due to its diverse application areas. Key pre-distribution is an important task in WSNs because, after the deployment of sensor nodes, neighboring nodes are initially strangers to each other. To secure communication, neighboring nodes have to generate a secret shared key, or a key path must exist between them. In this paper, we discuss and present various key pre-distribution protocols, namely polynomial pool-based key pre-distribution, a scheme for creating pairwise keys between sensors built on the foundation of a polynomial-based key pre-distribution protocol, with two effective instantiations: a random subset assignment key pre-distribution scheme and a grid-based key pre-distribution scheme. Other studied key pre-distribution schemes (KPDS) are Peer Intermediaries for Key Establishment (PIKE) and a group-based key pre-distribution scheme. The performance of these schemes has been assessed through the simulation of different grids in the TinyOS environment.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_21-Performance_Evaluation_of_Polynomial_Pool_based_Key.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relationship Strength Based Privacy for the Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090720</link>
        <id>10.14569/IJACSA.2018.090720</id>
        <doi>10.14569/IJACSA.2018.090720</doi>
        <lastModDate>2018-07-31T18:23:41.0970000+00:00</lastModDate>
        
        <creator>Javed Ahmed</creator>
        
        <creator>Adnan Manzoor</creator>
        
        <creator>Nazar H. Phulpoto</creator>
        
        <creator>Imtiaz A. Halepoto</creator>
        
        <creator>Muhammad Sulleman Memon</creator>
        
        <subject>Online social networks; privacy; social relationships</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The trend of communication is shifting from mobile messages to online social networks such as Facebook. Social networking applications and websites provide many features, such as personal photo sharing, and on the positive side many individuals form social relationships through them. However, online social networks may lead to the misuse and disclosure of personal information. These social networks are static and assume equal values for individuals who are directly connected. In real life, on the other hand, social relationships are dynamic and are based on different attributes such as location, family background, neighborhood and many more. In order to be secure from the undesirable consequences of personal information leakage, effective mechanisms are required. In this paper, a model is proposed for privacy in online social networks. The proposed model restricts the disclosure of personal information to individuals; the information of one individual may be disclosed based on the relationship strength and the context. The implementation of this model on social networks reduces the percentage of information disclosed to less known individuals.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_20-Relationship_Strength_based_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of NOGIE and NOWGIE for Human Skin Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090719</link>
        <id>10.14569/IJACSA.2018.090719</id>
        <doi>10.14569/IJACSA.2018.090719</doi>
        <lastModDate>2018-07-31T18:23:40.1430000+00:00</lastModDate>
        
        <creator>M. Omer Aftab</creator>
        
        <creator>Junaid Javed</creator>
        
        <creator>M. Bilal</creator>
        
        <creator>Arfa Hassan</creator>
        
        <creator>M. Adnan Khan</creator>
        
        <subject>Skin detection; Digital Image Processing (DIP); Noise Object Global Image Enhancement (NOGIE); Noise Object with Global Image Enhancement (NOWGIE); Hue Saturation and Value (HSV); RGB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Digital image processing is one of the most widely implemented fields worldwide. Its most applied applications are facial recognition, fingerprint recognition, medical imaging, law enforcement, cyber-crime investigation, and the identification of various diseases and criminals. The subject discussed in this article is skin detection. Skin detection has solved many serious problems related to digital image processing and is one of the main features in building an intelligent image processing system. The proposed methodology conducts improved and well-enhanced skin detection: the skin and non-skin parts are separated in an input image or video, noise is removed, and HSV is applied as a color model, which generates better results than RGB or YCbCr for skin and face identification. The algorithms NOGIE (Noise Object Global Image Enhancement) and NOWGIE (Noise Object with Global Image Enhancement) are applied separately to the input, and the results can be compared for better perception and understanding of the applied skin detection techniques; the skin parts are highlighted as “White” while the non-skin parts are highlighted as “Black”. The results differ: NOWGIE gives better results than NOGIE due to its image enhancement technique. This methodology is intended to be implemented in special security drones for the identification of suspects, terrorists and spies; the algorithms provide the ability to detect humans against a non-skin background, making for an autonomous and excellent security system.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_19-Implementation_of_NOGIE_and_NOWGIE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ASSA: Adaptive E-Learning Smart Students Assessment Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090718</link>
        <id>10.14569/IJACSA.2018.090718</id>
        <doi>10.14569/IJACSA.2018.090718</doi>
        <lastModDate>2018-07-31T18:23:39.4900000+00:00</lastModDate>
        
        <creator>Dalal Abdullah Aljohany</creator>
        
        <creator>Reda Mohamed Salama</creator>
        
        <creator>Mostafa Saleh</creator>
        
        <subject>Adaptive e-learning; e-assessments; adaptive assessment; smart assessment; Learning Style (LS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Adaptive e-learning can be improved through measured e-assessments that provide accurate feedback to instructors. E-assessments not only provide a basis for evaluating the different pedagogical methods used in teaching and learning but can also be used to determine the materials most suitable for students according to their skills, abilities, and prior knowledge. This paper presents the Adaptive Smart Student Assessment (ASSA) model. With ASSA, instructors worldwide can define their tests, and their students can take these tests online. ASSA determines students’ abilities, skills, and preferred Learning Style (LS) with greater accuracy, generates appropriate questions adaptively, and presents them in the student’s preferred learning style. It facilitates the evaluation process, measures each student’s knowledge level more accurately, and stores it in the student’s profile for later use in the learning process, so that course material content can be adapted to individual student abilities.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_18-ASSA_Adaptive_E_Learning_Smart_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Green Cloud Computing: Efficient Energy-Aware and Dynamic Resources Management in Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090717</link>
        <id>10.14569/IJACSA.2018.090717</id>
        <doi>10.14569/IJACSA.2018.090717</doi>
        <lastModDate>2018-07-31T18:23:38.6330000+00:00</lastModDate>
        
        <creator>Sara DIOUANI</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <subject>Cloud Computing; Green Cloud; Data Center; Energy Consumption; Resource Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The use of cloud computing has increased constantly over recent years, as it has become a very important technology in the computing landscape. It provides clients with decentralized services and a pay-as-you-go model for consuming resources. The growing need for cloud services obliges providers to adopt large data center infrastructures that run thousands of hosts and servers to store and process data. As a result, these large server farms generate a lot of heat and visible carbon emissions, as well as high energy consumption and operating costs. This is why research in energy economics continues to progress, including energy-saving techniques for servers, networks, cooling, renewable energies, and more. In this paper, we review the existing energy-efficient methods in the green cloud computing field and put forward our green cloud solution for dynamic resource management in data centers. Our proposed approach aims to reduce the infrastructure’s energy consumption while maintaining the required performance.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_17-Green_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preference in using Agile Development with Larger Team Size</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090716</link>
        <id>10.14569/IJACSA.2018.090716</id>
        <doi>10.14569/IJACSA.2018.090716</doi>
        <lastModDate>2018-07-31T18:23:37.8830000+00:00</lastModDate>
        
        <creator>Ahmed Zia</creator>
        
        <creator>Waleed Arshad</creator>
        
        <creator>Waqas Mahmood</creator>
        
        <subject>Agile Development; Ideal Team Size; Larger Team Size Problems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Agile software development comprises a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between cross-functional, self-organizing teams. Different software houses in a developing country were visited to document the experiences of people working on real-world projects using different variants of Agile methodology in different team sizes, and to determine the preference for using Agile development with larger teams. Of the people surveyed, several responded that Agile development should not be used in teams exceeding 25 members; in their experience, the ideal team size was 5 to at most 10. According to the survey, an increase in the number of individuals creates communication issues, as it is not possible to keep everyone on the same track in larger teams, especially during scrum meetings, which are usually held daily. It also creates problems with taking responsibility, as everyone becomes reluctant, believing someone else will take it, and with sub-teams: the more individuals there are, the more sub-teams arise, which indirectly increases dependency among the teams by breaking tasks into much smaller chunks. The findings also suggest that customer feedback increases when the team size is below 25, which in turn increases software quality. As this study focused only on the software companies of one developing country, it is recommended that further studies be carried out surveying practitioners in developed countries.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_16-Preference_in_using_Agile_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Series Analysis for Shortened Labor Mean Interval of Dairy Cattle with the Data of BCS, RFS, Weight, Amount of Milk and Outlook</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090715</link>
        <id>10.14569/IJACSA.2018.090715</id>
        <doi>10.14569/IJACSA.2018.090715</doi>
        <lastModDate>2018-07-31T18:23:37.3070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kenji Endo</creator>
        
        <creator>Kenichi Yamashita</creator>
        
        <subject>Body Condition Score (BCS); Rumen Fill Score (RFS); Dairy cattle; Time-Series Analysis; Cattle Productivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Time series analysis for a shortened mean labor interval of dairy cattle with the data of Body Condition Score (BCS), Rumen Fill Score (RFS), weight, amount of milk, and outlook is conducted. A method for shortening the mean labor interval of Japanese dairy cattle, based on time-series analysis of the visual indices BCS and RFS together with weight, amount of milk, and outlook, is proposed; shortening the mean labor interval of dairy cattle is the purpose of this research. Through experiments with 17 Japanese anestrus Holstein dairy cattle, it is found that the combination of weight, BCS, and amount of milk is a good indicator for identifying productive cattle. Therefore, the cattle that need hormone treatments can be identified.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_15-Time_Series_Analysis_for_Shortened_Labor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Criteria Decision Making to Rank Android based Mobile Applications for Mathematics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090714</link>
        <id>10.14569/IJACSA.2018.090714</id>
        <doi>10.14569/IJACSA.2018.090714</doi>
        <lastModDate>2018-07-31T18:23:36.6970000+00:00</lastModDate>
        
        <creator>Seren Basaran</creator>
        
        <creator>Oluwatobi John Aduradola</creator>
        
        <subject>ELECTRE; mobile applications for mathematics; multi-criteria decision making; pedagogical requirements; technical requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Exponential growth in the number of mobile applications for Mathematics has led to confusion and difficulty for users in manually selecting an application that suits their needs. There is therefore an imperative need for automated and efficient selection of mobile applications for Mathematics, where users currently rely heavily on either application store ratings or content rated by the application developer. In this study, fuzzy scale weights together with ELECTRE I (ELimination and Choice Expressing REality) were used to solve a typical multi-criteria decision making problem: ranking selected mobile applications for Mathematics with respect to a given set of criteria. The alternatives are mobile applications for Mathematics chosen from the Google Play Store by considering the top five highest user ratings and high usage frequencies. Ten criteria covering technical and pedagogical aspects specific to mobile applications and five alternatives were used in the ranking process. The findings suggest that ELECTRE I with fuzzy scale weights is remarkably practical for outranking and selection processes; particularly in the case of unclear and imprecise ratings, this method can offer a substantial solution.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_14-A_Multi_Criteria_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Success Factors Affecting Implementation of Agile Software Development Methodologies in Software Industry of Pakistan: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090713</link>
        <id>10.14569/IJACSA.2018.090713</id>
        <doi>10.14569/IJACSA.2018.090713</doi>
        <lastModDate>2018-07-31T18:23:36.1200000+00:00</lastModDate>
        
        <creator>Muhammad Noman Riaz</creator>
        
        <creator>Athar Mahboob</creator>
        
        <creator>Attaullah Buriro</creator>
        
        <subject>Agile methodologies; social factors; congruence value; visionary leadership; software developers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>During the past few years, the implementation of Agile software development methodologies has become part and parcel of software development projects, not only in large, developed organizations but also in small ones, despite the misapprehension that Agile methodologies are valid only for large-scale projects and established organizations. Keeping in view the potential of Agile software methodologies, and with the aim of eliminating this misconception, a mixed-method methodology was adopted to determine the social factors that contribute to, or influence, the successful implementation of Agile software development methodologies. In this study, face-to-face interviews were conducted with 271 software professionals, including Portfolio/Program/Project Managers, Scrum Masters, and Product Owners, representing 28 software development companies operating in Pakistan, to gauge the influence of social factors on the success of Agile software projects. The study concluded that the size of the project has nothing to do with the success of a project; rather, certain other factors, such as visionary leadership, the degree of adoption of Agile software practices, and value congruence, contribute significantly to a project’s success.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_13-Social_Success_Factors_Affecting_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Motivator and Demotivator Factors on Agile Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090712</link>
        <id>10.14569/IJACSA.2018.090712</id>
        <doi>10.14569/IJACSA.2018.090712</doi>
        <lastModDate>2018-07-31T18:23:35.5730000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Ghayyur</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Saeed Ullah</creator>
        
        <creator>Waqar Ahmed</creator>
        
        <subject>Agile software development; motivators; demotivators; success factors; barriers; agile methods; software development life cycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Over the last decade, Agile software development has emerged as a widely used software development method in the developing countries of South Asia. The literature reports significant challenges and barriers to Agile in the software industry, and the area thus still has significant problems in this domain. This study reports an industrial survey of Pakistani software industry practices and practitioners to elicit the indigenous motivators and demotivators of the Agile paradigm in Pakistan, and provides a concrete ranking of the motivator and demotivator factors that influence it. A lack of proper training and other identified issues indicate that the adoption of Agile is in its preliminary phases and that serious effort is required to set the right direction for the success of the Agile paradigm and its adopting institutions. The survey was conducted in 23 companies practicing Agile and involved 90 Agile practitioners; reports from 67 practitioners were finally selected after careful screening against the study’s selection criteria. The results indicate various alarming factors that differ from those reported in the literature on the subject: tolerance to work is the most important motivating factor among Pakistani Agile practitioners, while lack of resources is the strongest demotivating factor. A detailed ranking list of motivators and demotivators and a comprehensive data analysis of the factors that strongly influence Agile software development in Pakistan are provided in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_12-The_Impact_of_Motivator_and_Demotivator_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Effect of Use Web 2.0 Technology on Saudi Students’ Motivation to Learn in a Blended Learning Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090711</link>
        <id>10.14569/IJACSA.2018.090711</id>
        <doi>10.14569/IJACSA.2018.090711</doi>
        <lastModDate>2018-07-31T18:23:34.9970000+00:00</lastModDate>
        
        <creator>Sarah M. Bin-jomman</creator>
        
        <creator>Mona Al-Khattabi</creator>
        
        <subject>Web2.0 tools; blended learning; motivation; ARCS model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Motivating students to learn is a goal of the educational process around the world, and there is a close link between learning outcomes and students’ motivation to learn. Thus, the success of blended learning in Saudi higher education depends not only on using different teaching methods and large expenditures on technology but also on students’ motivation to learn. The main objective of this study is to measure the effect of using Web 2.0 technology on students’ motivation to learn in a blended learning environment through their attention, relevance, confidence, and satisfaction within this environment. The study used a randomized experimental research design to examine differences in student motivation based on the use of Web 2.0 tools in a blended environment in Computer Science at Al-Imam Muhammad Ibn Saud Islamic University (IMSIU). It adopted Keller’s ARCS model of motivation to develop a comprehensive framework of factors that affect the use of Web 2.0 tools in a blended learning environment, and a questionnaire was used to collect data from students. Our investigation found a statistically significant difference at the 0.05 level in overall student motivation between the experimental and control groups resulting from the use of Web 2.0 tools; moreover, students using Web 2.0 tools exhibited a statistically significantly higher degree of motivation. The results of this study can help decision makers readjust the learning strategy by recognizing the importance of using Web 2.0 tools as a main platform in Saudi higher education.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_11-Measuring_the_Effect_of_Use_Web_2_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Data Aggregation Approach for Sensor-Based Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090710</link>
        <id>10.14569/IJACSA.2018.090710</id>
        <doi>10.14569/IJACSA.2018.090710</doi>
        <lastModDate>2018-07-31T18:23:34.0470000+00:00</lastModDate>
        
        <creator>Mohammed S. Al-kahtani</creator>
        
        <creator>Lutful Karim</creator>
        
        <subject>Data aggregation; big data; sensor networks; energy efficiency; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Sensors are used in thousands of applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control. As these applications collect zettabytes of data every day, sensors play an integral role in big data. However, much of this data is redundant and useless; thus, efficient data aggregation and processing are significantly important for reducing redundant and useless data in sensor-based big data frameworks. Current studies on big data analytics do not focus on aggregating and filtering data at multiple layers of big data frameworks, especially at the lower level at the data collecting nodes (sensors), which would reduce the processing overhead at the upper layer, i.e., the big data server. Thus, this paper introduces a multi-tier data aggregation technique for sensor-based big data frameworks, focusing mainly on data aggregation in sensor networks. To achieve energy efficiency, it also demonstrates that efficient data processing at the lower (sensor) layers significantly reduces the overall energy consumption of the network and the data transmission latency.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_10-Dynamic_Data_Aggregation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking Attribution: A Novel Method for Stylometric Authorship Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090709</link>
        <id>10.14569/IJACSA.2018.090709</id>
        <doi>10.14569/IJACSA.2018.090709</doi>
        <lastModDate>2018-07-31T18:23:33.3600000+00:00</lastModDate>
        
        <creator>Marwa Taha Jamil</creator>
        
        <creator>Tareef Kamil Mustafa</creator>
        
        <subject>Data mining; text mining; Stylometric Authorship Attribution; SARA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Stylometric authorship attribution is one of the essential approaches in text mining. The present research introduces a Stylometric method called Stylometric Authorship Ranking Attribution (SARA) that overcomes the usual problems of processing time and prediction accuracy, without relying on human opinion from a domain expert. The method uses the most effective attributes in Stylometric authorship prediction, frequent word-bag counts, whether of frequent single, pair, or trio words; these are the most successful attributes in Stylometric prediction and provide strong evidence of an author’s writing style for the proposed authorship recognition and prediction technique. The experiments show that the proposed method produces superior prediction accuracy and even provides a completely correct result at the final stage of our experimental tests within the dataset’s scope.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_9-Ranking_Attribution_A_Novel_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Probabilistic Neural Network and Word Embedding for Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090708</link>
        <id>10.14569/IJACSA.2018.090708</id>
        <doi>10.14569/IJACSA.2018.090708</doi>
        <lastModDate>2018-07-31T18:23:32.5330000+00:00</lastModDate>
        
        <creator>Saqib Alam</creator>
        
        <creator>Nianmin Yao</creator>
        
        <subject>Deep learning; probabilistic neural network; word embedding; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>At present, Artificial Intelligence (AI) is an attractive area of research with numerous practical applications and active research topics and tasks, such as speech understanding, natural language processing, medical diagnosis, and support for basic research. In this study, deep learning (DL) techniques, namely a Probabilistic Neural Network (PNN) and Word Embedding (WE), are used for sentiment analysis. The proposed framework is divided into three phases: (a) normalization, (b) word vectorization, and (c) execution of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_8-Probabilistic_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Features Fusion with Classical Image Features for Image Access</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090707</link>
        <id>10.14569/IJACSA.2018.090707</id>
        <doi>10.14569/IJACSA.2018.090707</doi>
        <lastModDate>2018-07-31T18:23:31.7370000+00:00</lastModDate>
        
        <creator>Rehan Ullah Khan</creator>
        
        <subject>Deep learning; content based filtering; content analysis; machine learning; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Depending on the society, access to adult content can create social problems. This paper therefore proposes a fusion approach for image-based adult content filtering. The proposed approach merges a Deep Learning (DL) architecture with classical hand-crafted feature extraction: from DL, the rich feature extraction capabilities of Convolutional Neural Networks (CNNs) are fused with Correlogram features, and classification is optimized by integrating and modifying the Correlograms into skin Correlograms. The results show increased performance from combining the DL-learnt features with the classical hand-crafted features; in evaluation, the proposed approach achieves an accuracy of 0.93. This work thus motivates the exploitation of classical hand-crafted features in DL architectures for segmentation and detection scenarios.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_7-Deep_Learning_Features_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Candidate Registration System for Zambia School Examinations using the Cloud Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090706</link>
        <id>10.14569/IJACSA.2018.090706</id>
        <doi>10.14569/IJACSA.2018.090706</doi>
        <lastModDate>2018-07-31T18:23:31.1730000+00:00</lastModDate>
        
        <creator>Banji Milumbe</creator>
        
        <creator>Jackson Phiri</creator>
        
        <creator>Monica M Kalumbilo</creator>
        
        <creator>Mayumbo Nyirenda</creator>
        
        <subject>Cloud computing; candidate registration; online registration; Zambia; school examinations; bulk SMS; automation; information communication technology; ICT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Cloud computing has gained considerable ground in the recent past in this digital age. The use of cloud technologies in business has broken down barriers to sharing information, making the world one big global village: regardless of where one is, data or information can be received or sent instantly, irrespective of distance. In this research, we investigated the challenges of registering candidates for school examinations and the availability of internet services in various parts of Zambia. We then present a candidate registration process based on the cloud model, aimed at resolving the challenge of the distances between examination centres and the examining body, improving timelines, and cutting down the back-and-forth movements in the whole process. The web-based registration system was developed and tested, and the testing ascertained the connectivity, functionality, and scalability of the system.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_6-Developing_a_Candidate_Registration_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Visual Decision Tree for Developing Detection Rules of Attacks on Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090705</link>
        <id>10.14569/IJACSA.2018.090705</id>
        <doi>10.14569/IJACSA.2018.090705</doi>
        <lastModDate>2018-07-31T18:23:30.6130000+00:00</lastModDate>
        
        <creator>Tran Tri Dang</creator>
        
        <creator>Tran Khanh Dang</creator>
        
        <creator>Truong-Giang Nguyen Le</creator>
        
        <subject>Interactive Analytics; Security Visualization; Visual Decision Tree; Web Application Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Creating detection rules for attacks on web applications is not a trivial task, especially when the attacks are launched by experienced hackers. In such situations, human expertise is essential to produce effective results. However, human users are easily overloaded by the huge amount of input data that must be analyzed, learned from, and used to develop appropriate detection rules. To support human users in dealing with this information overload while developing detection rules for web application attacks, we propose a novel technique and tool called the Interactive Visual Decision Tree (IVDT). IVDT is a variant of the popular decision tree learning technique from research fields such as machine learning and data mining, with two additional important features: visually supported data analysis and user-guided tree growing. Visually supported data analysis helps human users cope with the high volume of training data while analyzing each node of the tree being built; user-guided tree growing allows them to apply their own expertise and experience to create a custom split condition for each tree node. A prototype implementation of IVDT was built and evaluated in terms of the detection accuracy achieved by its users and its ease of use. The experimental results demonstrate some advantages of IVDT over the traditional decision tree learning method, but also point out problems that should be handled in future improvements.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_5-Interactive_Visual_Decision_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Forecasting using Autoregressive Integrated Moving Average and Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090704</link>
        <id>10.14569/IJACSA.2018.090704</id>
        <doi>10.14569/IJACSA.2018.090704</doi>
        <lastModDate>2018-07-31T18:23:30.0500000+00:00</lastModDate>
        
        <creator>Lemuel Clark P. Velasco</creator>
        
        <creator>Daisy Lou L. Polestico</creator>
        
        <creator>Gary Paolo O. Macasieb</creator>
        
        <creator>Michael Bryan V. Reyes</creator>
        
        <creator>Felicisimo B. Vasquez Jr</creator>
        
        <subject>Electric load forecasting; autoregressive integrated moving average; artificial neural network </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Electric load forecasting is a challenging research problem due to the complicated nature of its dataset, which involves both linear and nonlinear properties. Various studies have attempted to develop forecasting models that combine statistical and machine learning approaches to deal with the dataset’s linear and nonlinear components and obtain close to accurate predictions. In this paper, autoregressive integrated moving average (ARIMA) and artificial neural networks (ANN) were implemented as forecasting models for a power utility’s dataset in order to predict day-ahead electric load. Electric load data preparation, model implementation and forecasting evaluation were conducted to assess whether the predictions of the models met the acceptable error tolerance for day-ahead electric load forecasting. A Java-based system made use of R Statistical Software to implement ARIMA(8,1,2), while the Encog Library was used to implement the ANN model, composed of Resilient Propagation as the training algorithm and Hyperbolic Tangent as the activation function. The ANN+ARIMA hybrid model was found to deliver a Mean Absolute Percentage Error (MAPE) of 4.09%, which proves it a viable technique in electric load forecasting while showing better forecasting results than solely using ARIMA or ANN. Through this research, both statistical and machine learning approaches were implemented as a forecasting model combination to address the linear and non-linear properties of electric load data.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_4-Load_Forecasting_using_Autoregressive_Integrated_Moving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Training Difficulties in Deductive Methods of Verification and Synthesis of Program</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090703</link>
        <id>10.14569/IJACSA.2018.090703</id>
        <doi>10.14569/IJACSA.2018.090703</doi>
        <lastModDate>2018-07-31T18:23:29.4900000+00:00</lastModDate>
        
        <creator>Magdalina Todorova</creator>
        
        <creator>Daniela Orozova</creator>
        
        <subject>Program verification; deductive verification methods; automated theorem provers; proof assistants; education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>The article analyzes the difficulties which Bachelor Degree in Informatics and Computer Sciences students encounter in the process of being trained in applying deductive methods of verification and synthesis of procedural programs. Education in this field is an important step towards moving from classical software engineering to formal software engineering. The training in deductive methods is done in the introductory courses in programming in some Bulgarian universities. It includes: Floyd’s method for proving partial and total correctness of flowchart programs; Hoare’s method of verification of programs; and Dijkstra’s method of transforming predicates for verification and synthesis of Algol-like programs. The difficulties which occurred during the defining of the specification of the program, which is subjected to verification or synthesis; choosing a loop invariant and loop termination function; finding the weakest precondition; and proving the formulated verification conditions, are discussed in the paper. Means of overcoming these difficulties are proposed. Conclusions are drawn in order to improve the training in the field. Special attention is dedicated to motivating the use of specific tools for software analysis, such as the interactive theorem proving system HOL, the software analyzer Frama-C and its WP plug-in, as well as the formal language ACSL, which allows formal specification of properties of C/C++ programs.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_3-Training_Difficulties_in_Deductive_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Data Mining for Autism Classification of Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090702</link>
        <id>10.14569/IJACSA.2018.090702</id>
        <doi>10.14569/IJACSA.2018.090702</doi>
        <lastModDate>2018-07-31T18:23:28.9430000+00:00</lastModDate>
        
        <creator>Mofleh Al-diabat</creator>
        
        <subject>Autistic traits; data mining; fuzzy rules; statistical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>Autism is a developmental condition linked with healthcare costs; therefore, early screening of autism symptoms can cut down on these costs. The autism screening process involves presenting a series of questions for parents, caregivers, and family members to answer on behalf of the child to determine the potential of autistic traits. Existing autism screening tools, such as the Autism Quotient (AQ), often involve many questions and require careful design of those questions, which makes the autism screening process lengthy. One potential solution to improve the efficiency and accuracy of screening is the adoption of fuzzy rules in data mining. Fuzzy rules can be extracted automatically from past controls and cases to form a screening classification system. This system can then be utilized to forecast whether individuals have any autistic traits instead of relying on conventional domain expert rules. This paper evaluates fuzzy rule-based data mining for forecasting autistic symptoms of children to address the aforementioned problem. Empirical results demonstrate high performance of the fuzzy data mining model in regard to predictive accuracy and sensitivity rates, and surprisingly lower than expected specificity rates, when compared with other rule-based data mining models.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_2-Fuzzy_Data_Mining_for_Autism_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet/PSO-Based Segmentation and Marker-Less Tracking of the Gallbladder in Monocular Calibration-free Laparoscopic Cholecystectomy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090701</link>
        <id>10.14569/IJACSA.2018.090701</id>
        <doi>10.14569/IJACSA.2018.090701</doi>
        <lastModDate>2018-07-31T18:23:28.3830000+00:00</lastModDate>
        
        <creator>Haroun Djaghloul</creator>
        
        <creator>Mohamed Batouche</creator>
        
        <creator>Jean-Pierre Jessel</creator>
        
        <creator>Abdelhamid Benhocine</creator>
        
        <subject>Medical image segmentation; monocular laparoscopic cholecystectomy; deformable structures tracking; gallbladder segmentation and tracking; markerless augmented reality; wavelets; particles swarm optimisation; minimally invasive surgery (MIS); computer aided surgery (CAS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(7), 2018</description>
        <description>This paper presents an automatic segmentation and monocular marker-less tracking method of the gallbladder in minimally invasive laparoscopic cholecystectomy intervention that can be used for the construction of an adaptive calibration-free medical augmented reality system. In particular, the proposed method consists of three steps, namely, a segmentation of 2D laparoscopic images using a combination of a photometric population-based statistical approach and edge detection techniques, a PSO-based detection of the targeted anatomical structure (the gallbladder) and, finally, the 3D model wavelet-based multi-resolution analysis and adaptive 2D/3D registration. The proposed population-based statistical segmentation approach of 2D laparoscopic images differs from classical approaches (histogram thresholding) in that we consider anatomical structures and surgical instruments in terms of distributions of RGB color triples. This allows efficient handling, superior robustness, and ready integration of current intervention information. The result of this step consists in a set of point clouds with loose gradient information that can cover various anatomical structures. In order to enhance both sensitivity and specificity, the detection of the targeted structure (the gallbladder) is based on a modified PSO (particle swarm optimization) scheme which maximizes both the internal feature density and the divergence with neighboring structures such as the liver. Finally, a multi-particle based representation of the targeted structure is constructed, thanks to a proposed wavelet-based multi-resolution analysis of the 3D model of the targeted structure, which is registered adaptively with the 2D particles generated during the previous step. Results are shown on both synthetic and real data.</description>
        <description>http://thesai.org/Downloads/Volume9No7/Paper_1-WaveletPSO_based_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensions of Open Government Data Web Portals: A Case of Asian Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090663</link>
        <id>10.14569/IJACSA.2018.090663</id>
        <doi>10.14569/IJACSA.2018.090663</doi>
        <lastModDate>2018-06-30T11:31:14.3730000+00:00</lastModDate>
        
        <creator>Sanad Aarshi</creator>
        
        <creator>Usman Tariq</creator>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Fariha Habib</creator>
        
        <creator>Kinza Ashfaq</creator>
        
        <creator>Irm Saleem</creator>
        
        <subject>Transparency; accountability; portal activities; adoption; eight principles of open government data; benefits of open government data; recommendations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This study explores citizen factors of open government data in selected Asian countries. Countries were selected based on open data availability in the Global Open Data Index and the presence of well-structured open government data portals. The differences among the selected Asian countries are identified and analyzed through the eight principles of open government data, their portal activities are analyzed, and the benefits of open government data are observed. In the analysis, the datasets of the selected countries were examined for the purpose of defining the portal activities. These activities include visitors, suppliers, applications, developments, generation of knowledge and overall resource utilization. The open government data of these countries are examined through web content analysis in order to understand the status of open government data. This study also describes different challenges concerning how adoption, promotion and acceptance of open government data and portals have been carried out by Asian countries. Moreover, some recommendations are given according to the key problems and status of the open government data initiatives. The study has limitations regarding the number of countries, and future directions emphasize the need for open government data analysis in less developed countries as well.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_63-Dimensions_of_Open_Government_Data_Web_Portals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust and Security Concerns of Cloud Storage: An Indonesian Technology Acceptance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090662</link>
        <id>10.14569/IJACSA.2018.090662</id>
        <doi>10.14569/IJACSA.2018.090662</doi>
        <lastModDate>2018-06-30T11:31:14.2630000+00:00</lastModDate>
        
        <creator>Nurudin Santoso</creator>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Harin Puspa Ayu Catherina</creator>
        
        <creator>Yustiyana April Lia Sari</creator>
        
        <subject>Cloud drive; structural equation modeling (SEM); trust; security; risk; behavior intention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Cloud drive is a service that offers data storage on the cloud. With the rapid worldwide growth of cloud drives, there are ongoing concerns about trust, privacy and security, in particular about how users’ personal information and data may be visible to other users or even abused by the cloud drive provider. This study provides empirical evidence about the factors affecting the acceptance of cloud drive users, using seven construct variables: Trust, Perceived Risk, Perceived Ease of Use, Perceived Usefulness, Security, Behavioural Intention and Subjective Norm. Data were collected from 294 respondents using an online questionnaire. The data analysis method used was Structural Equation Modelling (SEM). The results of this study show that the factors affecting the intention of using a cloud drive are trust, perceived risk and subjective norm.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_62-Trust_and_Security_Concerns_of_Cloud_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing a Cybersecurity Mindset into Software Engineering Undergraduate Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090661</link>
        <id>10.14569/IJACSA.2018.090661</id>
        <doi>10.14569/IJACSA.2018.090661</doi>
        <lastModDate>2018-06-30T11:31:14.1700000+00:00</lastModDate>
        
        <creator>Ingrid A. Buckley</creator>
        
        <creator>Janusz Zalewski</creator>
        
        <creator>Peter J. Clarke</creator>
        
        <subject>Cybersecurity; security education; software testing; computer security; defect detection; software maintenance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Cybersecurity is a growing problem globally. Software helps to drive and optimize businesses in every aspect of modern life. Software systems have been under continued attacks by malicious entities, and in some cases, the consequences have been catastrophic. In order to tackle this pervasive problem, emphasis has been placed on educating software developers on how to develop secure systems. The majority of attacks on software systems have been largely due to negligence, lack of education, or incorrect application of cybersecurity defenses. As a result, there is a movement to increase cybersecurity education at all levels: novice, intermediate and expert. At the college level, students can be exposed to cybersecurity skills and principles that will better equip them as they transition into the workforce. A case study is presented which assesses the cybersecurity knowledge of juniors and seniors in a software engineering degree program taught over a one-semester period.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_61-Introducing_a_Cybersecurity_Mindset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Machine Learning Algorithms for Missing Value Imputation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090660</link>
        <id>10.14569/IJACSA.2018.090660</id>
        <doi>10.14569/IJACSA.2018.090660</doi>
        <lastModDate>2018-06-30T11:31:14.1070000+00:00</lastModDate>
        
        <creator>Nadzurah Zainal Abidin</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <creator>Nurul A. Emran</creator>
        
        <subject>Data Mining; Imputation; Machine Learning; KNearest Neighbors; Decision Tree; Bayesian Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Data mining requires a pre-processing task in which the data are prepared, cleaned, integrated, transformed, reduced and discretized to ensure quality. Missing values are a universal problem in many research domains and are commonly encountered in the data cleaning process. Missing values usually occur when a value of stored data is absent for a variable of an observation. The missing values problem imposes undesirable effects on analysis results, especially when it leads to biased parameter estimates. Data imputation is a common way to deal with missing values, where substitutes for the missing values are discovered through statistical or machine learning techniques. Nevertheless, examining the strengths (and limitations) of these techniques is important to aid understanding of their characteristics. In this paper, the performance of three machine learning classifiers (K-Nearest Neighbors (KNN), Decision Tree, and Bayesian Networks) is compared in terms of data imputation accuracy. The results show that, among the three classifiers, Bayesian Networks have the most promising performance.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_60-Performance_Analysis_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Marine Engine Room Alarm Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090659</link>
        <id>10.14569/IJACSA.2018.090659</id>
        <doi>10.14569/IJACSA.2018.090659</doi>
        <lastModDate>2018-06-29T12:21:47.2430000+00:00</lastModDate>
        
        <creator>Isaac Tawiah</creator>
        
        <creator>Usman Ashraf</creator>
        
        <creator>Yinglei Song</creator>
        
        <creator>Aleena Akhtar</creator>
        
        <subject>Alarm monitoring system; engine control room; OPC communication; PLCs; SCADA systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Alarms affect operations in most parts of the ship. Their impact on modern Engine Control Room operations is no less significant. The state of an alarm system serves as an indication of the extent to which the ship’s operations are under management control. Thus, the design of an efficient and reliable alarm monitoring system is vital for safe and sound operations. Although several design techniques have been proposed, all the proposed design methods employ sophisticated and expensive approaches to resolving alarm issues. In this paper, a cheap, yet reliable and efficient alarm design method for engine room device monitoring is presented. The design method employs a PLC- and SCADA-based system and adopts certain basic design requirements of alarm monitoring systems presented in the literature. Reasons for such a design method are highlighted, and the programming platforms for the design are given. The strengths and weaknesses of some design methods presented in published works are reported and solutions to such problems are proposed. The proposed design technique, including a fault diagnostic algorithm, has been subjected to real-time online testing at the shipyard, specifically Changjiang Waterway Bureau, China (ship name: Ning Dao 501). The testing results proved that this design technique is reliable, efficient and effective for online engine control room device monitoring.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_59-Marine_Engine_Room_Alarm_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Scraper Revealing Trends of Target Products and New Insights in Online Shopping Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090658</link>
        <id>10.14569/IJACSA.2018.090658</id>
        <doi>10.14569/IJACSA.2018.090658</doi>
        <lastModDate>2018-06-29T12:21:47.2100000+00:00</lastModDate>
        
        <creator>Habib Ullah</creator>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Shahid Maqsood</creator>
        
        <creator>Abdul Hafeez</creator>
        
        <subject>Django QuerySet (DQS); e-commerce; Hamming distance algorithm (HDA); Levenshtein distance algorithm (LDA); scraper; scheduling mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Trillions of posts on Facebook, tweets on Twitter, photos on Instagram and e-mails on exchange servers are overwhelming the Internet with big data. This necessitates the development of tools that can detect frequent updates and select the required information instantly. This research work aims to implement scraper software that is capable of collecting updated information about target products hosted on popular online e-commerce websites. The software is implemented using the Scrapy and Django frameworks. The software is configured and evaluated across different e-commerce websites. Each website generates a large amount of data about the products that needs to be scraped. The proposed software provides the ability to search for a target product in a single consolidated place instead of searching across various websites, such as amazon.com, alibaba.com and daraz.pk. Furthermore, the scheduling mechanism enables the scraper to execute at a required frequency within a specified time frame.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_58-Web_Scraper_Revealing_Trends_of_Target_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Linear Energy Harvesting Dual-hop DF Relaying System over η-μ Fading Channels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090657</link>
        <id>10.14569/IJACSA.2018.090657</id>
        <doi>10.14569/IJACSA.2018.090657</doi>
        <lastModDate>2018-06-29T12:21:47.1970000+00:00</lastModDate>
        
        <creator>Ayaz Hussain</creator>
        
        <creator>Nazar Hussain Phulpoto</creator>
        
        <creator>Ubaidullah Rajput</creator>
        
        <creator>Fizza Abbas</creator>
        
        <creator>Zahoor Ahmed Baloch</creator>
        
        <subject>Energy harvesting relay; non-linear energy harvester; η-μ fading; power-splitting-based relaying; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In this work, we analyze a wireless energy harvesting decode-and-forward (DF) relaying network with beamforming that is based on a practical non-linear energy harvesting model over η-μ fading channels. We consider a dual-hop relaying system having multiple antennas at the source and destination only. The single-antenna energy-constrained relay assists the source to communicate with the destination. At the relay node, we assume a non-linear energy harvesting receiver which limits the harvested power level with a saturation threshold. By considering a power-splitting based relaying (PSR) protocol and a non-linear energy harvesting receiver, we analyze the system performance in terms of the outage probability and throughput for various antenna combinations and for various values of the fading parameters, η and μ. The η-μ fading model has a few particular cases, viz., Rayleigh, Nakagami-m, and Hoyt. These results are general and can be reduced to different fading scenarios as well as to linear energy harvesting relaying.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_57-Non_Linear_Energy_Harvesting_Dual_hop_DF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of TCP Buffer Size on the Internet Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090656</link>
        <id>10.14569/IJACSA.2018.090656</id>
        <doi>10.14569/IJACSA.2018.090656</doi>
        <lastModDate>2018-06-29T12:21:47.1800000+00:00</lastModDate>
        
        <creator>Imtiaz A. Halepoto</creator>
        
        <creator>Nazar H. Phulpoto</creator>
        
        <creator>Adnan Manzoor</creator>
        
        <creator>Sohail A. Memon</creator>
        
        <creator>Umair A. Qadir</creator>
        
        <subject>TCP; sender buffer; receiver buffer; stream control transmission protocol (SCTP); error detection and correction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>The development of applications such as online video streaming, collaborative writing, VoIP, and text and video messengers is increasing. The number of such TCP-based applications is growing due to the increasing availability of the Internet. The TCP protocol works at the 4th layer of the Internet model and provides many services, such as congestion control, reliable communication, and error detection and correction. Many new protocols have been proposed, such as the stream control transmission protocol (SCTP), with more features compared to TCP. However, due to its wide deployment, TCP is still the most widely used. TCP creates segments and transmits them to the receiver. In order to recover from errors, TCP saves the segments in the sender buffer. Similarly, the data is also saved in the receiver buffer before its delivery to the application layer. The selection of the TCP sender and receiver buffer sizes may vary. It is very important because many applications run on smartphones that are equipped with a small amount of memory. In many applications, such as online video streaming, some errors are tolerable and it is not necessary to retransmit the data. In such a case, a small buffer is useful. However, for text transmission the complete reassembly of the message is required by TCP before delivery to the application layer. In such a case, a large buffer size is useful, which also minimizes the buffer blocking problem of TCP. This paper provides a detailed study of the impact of TCP buffer size on smartphone applications. A simple scenario is proposed in the NS2 simulator for the experimentation.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_56-Effect_of_TCP_Buffer_Size_on_the_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Analysis and Verification of Agent-Oriented Supply-Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090655</link>
        <id>10.14569/IJACSA.2018.090655</id>
        <doi>10.14569/IJACSA.2018.090655</doi>
        <lastModDate>2018-06-29T12:21:47.1630000+00:00</lastModDate>
        
        <creator>Muhammad Zubair Shoukat</creator>
        
        <creator>Muhammad Atif</creator>
        
        <creator>Imran Riaz Hasrat</creator>
        
        <creator>Nadia Mushtaq</creator>
        
        <creator>Ijaz Ahmed</creator>
        
        <subject>Supply chain management; agent-oriented supply-chain; model checking; formal specification and verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Managing the various relationships among supply chain processes is known as Supply Chain Management (SCM). SCM is the oversight of finance, information and material as they flow from suppliers to manufacturers, wholesalers, retailers and customers. The main problem with such a software architecture is coordination and reliability while performing activities. Moreover, a continuously changing market makes this coordination challenging: for example, failure of production facilities, irregularities in meeting deadlines, or unavailability of workers at required times. However, in the Agent-Oriented Supply-Chain Management described in [Mark S. Fox, Mihai Barbuceanu, and Rune Teigen, “Agent-Oriented Supply-Chain Management”, The International Journal of Flexible Manufacturing Systems, 12 (2000)], the proposed solution claims remarkable coordination on the basis of an agent-oriented software architecture. In this paper, we formally specify the architecture and verify it using model checking. We use UPPAAL to formally specify the behaviour of the agents involved in SCM. By model checking, we prove that the given SCM architecture partially fulfills its functional requirements.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_55-Formal_Analysis_and_Verification_of_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Open-Domain Neural Conversational Agents: The Step Towards Artificial General Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090654</link>
        <id>10.14569/IJACSA.2018.090654</id>
        <doi>10.14569/IJACSA.2018.090654</doi>
        <lastModDate>2018-06-29T12:21:47.1330000+00:00</lastModDate>
        
        <creator>Sasa Arsovski</creator>
        
        <creator>Sze Hui Wong</creator>
        
        <creator>Adrian David Cheok</creator>
        
        <subject>Artificial intelligence; deep learning; neural networks; open domain chatbots; conversational agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Development of conversational agents started half a century ago and since then it has transformed into a technology that is accessible in various aspects of everyday life. This paper presents a survey of the current state of the art in open-domain neural conversational agent research and of future research directions towards the creation of Artificial General Intelligence (AGI). In order to create a conversational agent able to pass the Turing Test, numerous research efforts are focused on open-domain dialogue systems. This paper presents the latest research in the domain of neural network reasoning and logical association, sentiment analysis, and real-time learning approaches applied to open-domain neural conversational agents. As an effort to provide future research directions, current cutting-edge approaches applied to open-domain neural conversational agents, current cutting-edge approaches in rationale generation, and state-of-the-art research directions in alternative training methods are discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_54-Open_Domain_Neural_Conversational_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposal for a Technological Solution to Improve user Experience in a Shopping Center based on Indoor Geolocation Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090653</link>
        <id>10.14569/IJACSA.2018.090653</id>
        <doi>10.14569/IJACSA.2018.090653</doi>
        <lastModDate>2018-06-29T12:21:47.1170000+00:00</lastModDate>
        
        <creator>Luinel Andrade</creator>
        
        <creator>Johan Quintero</creator>
        
        <creator>Eric Gamess</creator>
        
        <creator>Antonio Russoniello</creator>
        
        <subject>Mobile application; web application; geolocation; shopping center; WiFi; access points</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>For shopping centers, mobile devices and their associated technologies represent great business opportunities and a way to improve the user experience within their facilities. These buildings are usually quite large, multi-story, and host a significant number of shops and services, so visitors may find it difficult to obtain a complete and up-to-date list of the stores, determine which stores and services meet the characteristics or specifications they seek, or know the location of the shops and how to reach them. This research studies and compares different technologies, tools, and approaches for the development of a technological solution for shopping centers that offers a geolocation system for the interior spaces of the buildings. Our technological solution includes a mobile application for the Android operating system implemented using the native development approach, and a web application for managing data, from which the contents and settings of the mobile application are obtained following the client/server model through a private API. It is worth mentioning that the indoor geolocation system is implemented using WiFi technology and the different Access Points installed in the shopping center, through which users can obtain their position, locate the stores or services of their interest, and receive indications on how to reach them.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_53-A_Proposal_for_a_Technological_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Airline Sentiment Visualization, Consumer Loyalty Measurement and Prediction using Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090652</link>
        <id>10.14569/IJACSA.2018.090652</id>
        <doi>10.14569/IJACSA.2018.090652</doi>
        <lastModDate>2018-06-29T12:21:47.0870000+00:00</lastModDate>
        
        <creator>Rida Khan</creator>
        
        <creator>Siddhaling Urolagin</creator>
        
        <subject>Consumer loyalty measurement; consumer loyalty prediction; sentimental visualization; airline consumer analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Social media today is an integral part of people’s daily routines and the livelihood of some. As a result, it is abundant in user opinions, and the analysis of brand-specific opinions can inform companies about the level of satisfaction among consumers. This research focuses on the analysis of tweets related to airlines based in four regions, Europe, India, Australia and America, for consumer loyalty prediction. Sentiment analysis is carried out using the TextBlob analyzer. The tweets are used to calculate and graphically represent the positive and negative mean sentiment scores and the variation of the mean sentiment score over time for each airline. The terms with complaints and compliments are depicted using visualization methods. A novel method is proposed to measure consumer loyalty using the data gathered from Twitter. Furthermore, consumer loyalty prediction is performed using Twitter data. Three classifiers are employed, namely Random Forest, Decision Tree and Logistic Regression. A maximum classification accuracy of 99.05% is observed for Random Forest on 10-fold cross validation.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_52-Airline_Sentiment_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implication of Genetic Algorithm in Cryptography to Enhance Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090651</link>
        <id>10.14569/IJACSA.2018.090651</id>
        <doi>10.14569/IJACSA.2018.090651</doi>
        <lastModDate>2018-06-29T12:21:47.0700000+00:00</lastModDate>
        
        <creator>Muhammad Irshad Nazeer</creator>
        
        <creator>Ghulam Ali Mallah</creator>
        
        <creator>Noor Ahmed Shaikh</creator>
        
        <creator>Rakhi Bhatra</creator>
        
        <creator>Raheel Ahmed Memon</creator>
        
        <creator>Muhammad Ismail Mangrio</creator>
        
        <subject>Secure transmission; symmetric cryptosystems; invertible functions; genetic algorithms; efficient encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In today’s age of information technology, secure transmission of information is a big challenge. Symmetric and asymmetric cryptosystems are not appropriate for a high level of security. Modern hash-function-based systems are better than traditional systems, but the complex algorithms for generating invertible functions are very time consuming. In traditional systems, data is encrypted with a key, but there are still possibilities of eavesdropping on the key and altering the text. Therefore, the key must be strong and unpredictable, so a method has been proposed which takes advantage of the theory of natural selection. Genetic Algorithms are used to solve many problems by modeling simplified genetic processes and are considered a class of optimization algorithms. By using a Genetic Algorithm, the strength of the key is improved, which ultimately makes the whole algorithm good enough. In the proposed method, data is encrypted in a number of steps. First, a key is generated through a random number generator and by applying genetic operations. Next, the data is diffused by genetic operators, and then logical operators are applied between the diffused data and the key to encrypt the data. Finally, a comparative study has been carried out between our proposed method and two other cryptographic algorithms. It has been observed that the proposed algorithm has better results in terms of key strength but is less computationally efficient than the other two.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_51-Implication_of_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Predicting Model for Dynamic Spectrum Sharing Over 5G Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090650</link>
        <id>10.14569/IJACSA.2018.090650</id>
        <doi>10.14569/IJACSA.2018.090650</doi>
        <lastModDate>2018-06-29T12:21:47.0400000+00:00</lastModDate>
        
        <creator>Ahmed Alshaflut</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <subject>Component; traffic predictions; software defined multiple access; dynamic spectrum sharing; 5G networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Recently, wireless networks and traffic requirements have grown rapidly across diverse applications in 5G environments. For this reason, researchers have investigated the influences of this growth based on users’ requirements inside these networks. The stream of traffic plays a crucial role in meeting users’ needs over 5G networks. In this paper, gigantic data traffic is considered for enabling dynamic spectrum sharing over 5G networks, and various accessing plans are covered to manage the overall network traffic. Additionally, the paper proposes a traffic predicting model as a technique for managing traffic when multiple requests are received, in order to decrease delays. It considers different significances related to large volumes of traffic. This work will also guide us in enhancing traffic solutions for massive requests over outsized networks. Systematically, it focuses on the traffic flow, starting from the accessing steps until requests are passed on to suitable spectrum carriers.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_50-Traffic_Predicting_Model_for_Dynamic_Spectrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dist-Coop: Distributed Cooperative Transmission in UWSNs using Optimization Congestion Control and Opportunistic Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090649</link>
        <id>10.14569/IJACSA.2018.090649</id>
        <doi>10.14569/IJACSA.2018.090649</doi>
        <lastModDate>2018-06-29T12:21:47.0070000+00:00</lastModDate>
        
        <creator>Malik Taimur Ali</creator>
        
        <creator>Saqib Shahid Rahim</creator>
        
        <creator>Mian Ahmed Jan</creator>
        
        <creator>Atif Ishtiaq</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Mukhtar Ahmad</creator>
        
        <creator>Mukhtaj Khan</creator>
        
        <creator>M. Ayub Khan</creator>
        
        <subject>Opportunistic routing; cooperation; congestion control; signal-to-noise ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>One of the real issues in UWSNs is congestion control. The need is to design an optimized congestion control scheme which enhances the network lifetime and also limits the energy used in data transmission from source to destination. In this paper, we propose a routing protocol for UWSNs called Dist-Coop. Dist-Coop is a distributed cooperation-based routing scheme which uses a mechanism for optimized congestion control in the noisy links of the underwater environment. It is a compact, energy-proficient and high-throughput opportunistic routing scheme for UWSNs. In the proposed protocol architecture, we present congestion control with cooperative transmission of data packets utilizing relay sensors. The final objective is to enhance the network lifetime and forward information utilizing a cooperation procedure, limiting energy consumption during the transmission of information. At the destination node, the combining strategy utilized is based on Signal-to-Noise Ratio (SNRC). Simulation results of the Dist-Coop scheme indicate better outcomes in terms of energy consumption, throughput and network lifetime in contrast with the Co-UWSN and EH-UWSN routing protocols. Dist-Coop expends substantially less energy and achieves better throughput when contrasted with these protocols.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_49-Dist_Coop_Distributed_Cooperative_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accident Detection and Smart Rescue System using Android Smartphone with Real-Time Location Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090648</link>
        <id>10.14569/IJACSA.2018.090648</id>
        <doi>10.14569/IJACSA.2018.090648</doi>
        <lastModDate>2018-06-29T12:21:46.9930000+00:00</lastModDate>
        
        <creator>Arsalan Khan</creator>
        
        <creator>Farzana Bibi</creator>
        
        <creator>Muhammad Dilshad</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Zia Ullah</creator>
        
        <creator>Haider Ali</creator>
        
        <subject>Traffic accidents; accident detection; on-board sensor; accelerometer; android smartphones; real-time tracking; emergency services; emergency responder; emergency victim; SOSafe; SOSafe Go; firebase</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>A large number of deaths worldwide are caused by traffic accidents. The global crisis of road safety can be seen by observing the significant number of deaths and injuries caused by road traffic accidents. In many situations, the family members or emergency services are not informed in time. This results in delayed emergency service response times, which can lead to an individual’s death or severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. Utilizing the on-board sensors of a smartphone to detect vehicular accidents, report them to the nearest available emergency responder, and provide real-time location tracking for responders and emergency victims will drastically increase the chances of survival for emergency victims, and also help save emergency services time and resources.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_48-Accident_Detection_and_Smart_Rescue_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student’s Opinions on Online Educational Games for Learning Programming Introductory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090647</link>
        <id>10.14569/IJACSA.2018.090647</id>
        <doi>10.14569/IJACSA.2018.090647</doi>
        <lastModDate>2018-06-29T12:21:46.9600000+00:00</lastModDate>
        
        <creator>Roslina Ibrahim</creator>
        
        <creator>Nor Zairah A. Rahim</creator>
        
        <creator>Doris Wong H. Ten</creator>
        
        <creator>Rasimah C.M Yusoff</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <creator>Suraya Yaacob</creator>
        
        <subject>Educational games; programming introductory; undergraduate; games evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Use of educational games is an approach that has the potential to change existing educational methods, owing to the popularity of games among the younger generation as well as the engagement and fun features of games compared to conventional learning methods. In addition, games are among the most widespread media amongst the younger generation, the so-called “digital natives”, apart from movies, music and internet technology. Game play activity is an important issue to be thoroughly understood, given that many of this generation are addicted to it. In contrast, conventional learning approaches are not interesting enough to the younger generation. Thus, the integration of games technology into education is believed to potentially increase student interest and motivation to learn. This study developed and evaluated an online educational game for learning an introductory programming course at a university in Malaysia. A total of 180 undergraduate students from computer and engineering backgrounds participated in the study. Findings show that about 80% of students have a positive attitude towards the game, with around 84% of them finding that the game is a fun way to learn; at the same time, an average of 80% agreed that the game provides them with an opportunity to learn. Furthermore, about 75% of the students agreed that the game enables them to do self-assessment for the programming course. It was interesting to find that almost 85% of the students said that they would want to use educational games as their future learning approach. Although more evidence will be needed, especially in the Malaysian context, this study is important in rationalizing that games can be one of the new learning approaches of the future.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_47-Students_Opinions_on_Online_Educational_Games.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Engineering Students Pedagogical Progress</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090646</link>
        <id>10.14569/IJACSA.2018.090646</id>
        <doi>10.14569/IJACSA.2018.090646</doi>
        <lastModDate>2018-06-29T12:21:46.9470000+00:00</lastModDate>
        
        <creator>Khalid Mahboob</creator>
        
        <creator>Syed Abbas Ali</creator>
        
        <creator>Danish Ur Rehman Khan</creator>
        
        <creator>Fayyaz Ali</creator>
        
        <subject>Pedagogical progress; classification; k-nearest neighbor; Na&#239;ve Bayes; decision trees; engineering students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Students’ pedagogical progress plays a pivotal role in any educational institute in pursuing imperative education. Educational institutes, universities and colleges implement various performance measures in order to keep analyzing and tracking the progress of students so as to cultivate the benefits of education in a better way. There are several data mining techniques to apply to education in order to build constructive educational strategies and solutions. This study aims to analyze and track engineering undergraduate students’ records to judge quality of education, student motivation towards learning, and student pedagogical progress, in order to maintain education at a high quality level and predict engineering students’ forthcoming progress. Data of students from different engineering disciplines (in three different cohorts) have been analyzed to trace current as well as future pedagogical progress based on their sessional (pre-examination) marks. In this research, the classification techniques of k-nearest neighbor, Naïve Bayes and decision trees are applied to evaluate the performance of students from different engineering technologies, and different methodologies that can be used for data classification are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_46-A_Comparative_Study_of_Engineering_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Class Breast Cancer Classification using Deep Learning Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090645</link>
        <id>10.14569/IJACSA.2018.090645</id>
        <doi>10.14569/IJACSA.2018.090645</doi>
        <lastModDate>2018-06-29T12:21:46.9300000+00:00</lastModDate>
        
        <creator>Majid Nawaz</creator>
        
        <creator>Adel A. Sewissy</creator>
        
        <creator>Taysir Hassan A. Soliman</creator>
        
        <subject>Breast cancer classification; Convolutional Neural Network (CNN); deep learning; medical image processing; histopathological images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Breast cancer continues to be among the leading causes of death for women, and much effort has been expended in the form of screening programs for prevention. Given the exponential growth in the number of mammograms collected by these programs, computer-assisted diagnosis has become a necessity. Computer-assisted detection techniques developed to date to improve diagnosis without multiple systematic readings have not resulted in a significant improvement in performance measures. In this context, the use of automatic image processing techniques based on deep learning represents a promising avenue for assisting in the diagnosis of breast cancer. In this paper, we present a deep learning approach based on a Convolutional Neural Network (CNN) model for multi-class breast cancer classification. The proposed approach aims to classify breast tumors not just as benign or malignant, but also to predict the subclass of the tumors, such as Fibroadenoma, Lobular carcinoma, etc. Experimental results on histopathological images from the BreakHis dataset show that the DenseNet CNN model achieved high performance, with 95.4% accuracy in the multi-class breast cancer classification task when compared with state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_45-Multi_Class_Breast_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ant Colony System for Dynamic Vehicle Routing Problem with Overtime</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090644</link>
        <id>10.14569/IJACSA.2018.090644</id>
        <doi>10.14569/IJACSA.2018.090644</doi>
        <lastModDate>2018-06-29T12:21:46.9000000+00:00</lastModDate>
        
        <creator>Khaoula OUADDI</creator>
        
        <creator>Youssef BENADADA</creator>
        
        <creator>Fatima-Zahra MHADA</creator>
        
        <subject>Dynamic vehicle routing problem (DVRP); multi-tours; mathematical modeling; hybrid; Ant Colony System (ACS); overtime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Traditionally, in a VRP the vehicles return to the depot before the end of the working time. However, in reality several constraints can occur and prevent the vehicles from being at the depot on time. In the dynamic case, we are supposed to answer requests on the same day as their arrival. Nevertheless, it is not always easy to find a solution which ensures the service while respecting the normal working time. Therefore, allowing the vehicles to use additional time to complete their service may be very useful, especially with large demand and a limited number of vehicles. In this context, this article proposes a mathematical model with an Ant Colony System (ACS) based approach to solve the multi-tour dynamic vehicle routing problem (DVRP) with overtime. To test the algorithm, we propose new data sets inspired by literature benchmarks. The competitiveness of the algorithm is demonstrated on the classical DVRP.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_44-Ant_Colony_System_for_Dynamic_Vehicle_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Energy Aware Cloud’s Resource Allocation Techniques for Virtual Machine Consolidation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090643</link>
        <id>10.14569/IJACSA.2018.090643</id>
        <doi>10.14569/IJACSA.2018.090643</doi>
        <lastModDate>2018-06-29T12:21:46.8670000+00:00</lastModDate>
        
        <creator>Asif Farooq</creator>
        
        <creator>Tahir Iqbal</creator>
        
        <creator>Muhammad Usman Ali</creator>
        
        <creator>Zunnurain Hussain</creator>
        
        <subject>Cloud computing; energy aware; green cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>As the demand for cloud computing environments increases, new techniques for making cloud computing more environment-friendly are being proposed, with the aim of converting traditional cloud computing into green cloud computing. One of the most important complications in cloud computing is the optimization of energy utilization, whose importance is increasing rapidly. There are numerous strategies and algorithms used to limit energy utilization in the cloud; methods include DVFS, UP-VMC, utility-based MFF, HCT, AVVMC, ACO, and ESWCT. In this survey, a review of energy-aware techniques for making virtual machines more energy efficient in cloud computing is presented. The working of each technique is briefly explained, and a comparative analysis of multiple efficient techniques with respect to performance metrics is also given.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_43-A_Survey_of_Energy_Aware_Clouds_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information System Evaluation based on Multi-Criteria Decision Making: A Comparison of Two Sectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090642</link>
        <id>10.14569/IJACSA.2018.090642</id>
        <doi>10.14569/IJACSA.2018.090642</doi>
        <lastModDate>2018-06-29T12:21:46.8530000+00:00</lastModDate>
        
        <creator>Ansar DAGHOURI</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <subject>Information system success; multi criteria decision; AHP and TOPSIS methods; criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In this article, our purpose is to introduce the results of a new approach to assessing information system success. It is based on the DeLone and McLean model and was applied to two domains: the banking sector, as the heaviest user of information technology, and the construction industry, as the least computer-intensive sector. The methodology used to evaluate information system performance is a combined approach of the two most popular multi-criteria decision making techniques, AHP and TOPSIS. Based on the results of this technique applied to the studied sectors, we can obtain a horizontal comparison at the sector level and optimize the choice of the best system.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_42-Information_System_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Evaluation of Modified Agile Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090641</link>
        <id>10.14569/IJACSA.2018.090641</id>
        <doi>10.14569/IJACSA.2018.090641</doi>
        <lastModDate>2018-06-29T12:21:46.8200000+00:00</lastModDate>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <creator>Faiza Anwer</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Madiha Anwar</creator>
        
        <subject>Agile models; SXP; SFDD; Modified XP; modified FDD; empirical evaluation; comparative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Empirical evaluation is one of the widely accepted validation methods in the domain of software engineering; it investigates a proposed technique via practical experience and reflects its benefits and limitations. Due to their various advantages, agile models have been taking over conventional software development methodologies for the last two decades. However, besides the benefits, various limitations in the agile family have been noticed as well by researchers and the software industry. To achieve the maximum benefits, it is vital to fix these limitations by customizing the development structure of agile models. This paper deals with the empirical analysis of the modified agile models called Simplified Extreme Programming (SXP) and Simplified Feature Driven Development (SFDD), which are the modified forms of Extreme Programming (XP) and Feature Driven Development (FDD). SXP was presented to eliminate the issues of conventional XP, such as lack of documentation, poor architectural structure and less focus on design. SFDD was proposed to take care of reported issues in FDD, such as explicit dependency on experienced staff, little or no guidance for requirement gathering, a rigid nature in accommodating requirement changes and a heavy development structure. This study evaluates SXP and SFDD through implementing client-oriented projects and discusses the results with empirical analysis.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_41-Empirical_Evaluation_of_Modified_Agile_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Evaluation of Dotted Raster-Stereography and Feature-Based Techniques for Automated Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090640</link>
        <id>10.14569/IJACSA.2018.090640</id>
        <doi>10.14569/IJACSA.2018.090640</doi>
        <lastModDate>2018-06-29T12:21:46.8070000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>S. Talha Ahsan</creator>
        
        <creator>Lubaid Ahmed</creator>
        
        <creator>Syed Faisal Ali</creator>
        
        <creator>Fauzan Saeed</creator>
        
        <subject>Raster-stereography; dotted raster-stereography; feature based; face recognition; IPRL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Automated face recognition systems are fast becoming a necessity for security-related applications. Developing a fool-proof and efficient face recognition system is a challenging domain for researchers. This paper presents a comparative evaluation of two candidate techniques for automated face recognition: dotted Raster-stereography and a feature-based system. The relevant performance parameters (accuracy, precision, sensitivity, and specificity) measured for the two techniques using the IPRL database of images are reported. The results suggest that the dotted Raster-stereography based face recognition system has better accuracy, precision, sensitivity, and specificity, and hence is the preferred choice over the feature-based system for sensitive applications where high face recognition accuracy is required. On the other hand, the feature-based technique is faster in terms of the training and testing times required. Hence, for applications where the volume of face recognition work is large, high speed is required, and some compromise in accuracy is acceptable, the feature-based technique may be the technique of choice.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_40-A_Comparative_Evaluation_of_Dotted_Raster_Stereography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA based Synthesize of PSO Algorithm and its Area-Performance Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090639</link>
        <id>10.14569/IJACSA.2018.090639</id>
        <doi>10.14569/IJACSA.2018.090639</doi>
        <lastModDate>2018-06-29T12:21:46.7900000+00:00</lastModDate>
        
        <creator>Bharat Lal Harijan</creator>
        
        <creator>Farrukh Shaikh</creator>
        
        <creator>Burhan Aslam Arain</creator>
        
        <creator>Tayab Din Memon</creator>
        
        <creator>Imtiaz Hussain Kalwar</creator>
        
        <subject>Particle swarm optimization (PSO); Remez Exchange Algorithm; FPGA implementation; FIR filter </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Digital filters are a significant part of signal processing, used in numerous applications such as speech recognition, acoustics, adaptive equalization, and noise and interference reduction. Implementing an adaptive FIR filter is of great benefit because of its self-optimization property, linearity, and frequency stability. Designing an FIR filter involves multi-modal optimization problems for which conventional gradient-based optimization techniques are not suitable. The Particle Swarm Optimization (PSO) algorithm, a flexible optimization technique based on a population of particles in a search space, is an alternative approach for linear-phase FIR filter design. PSO improves the solution characteristics by providing a novel method for updating the swarm's position and velocity vectors, and it generates a set of optimized filter coefficients. In this paper, a PSO-based FIR low-pass filter is designed in MATLAB, and the Xilinx System Generator tool is then used to design, synthesize, and implement the filter on an FPGA using a Spartan-3E kit. For an example specification, the PSO algorithm produces a set of optimized coefficients whose response approximates the ideal response. Functional verification of the proposed algorithm has been performed, and the error between the obtained filter and the ideal filter is successfully minimized. This work demonstrates the effectiveness of the PSO algorithm in a parallel processing environment as compared to the Remez Exchange algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_39-FPGA_based_Synthesize_of_PSO_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heterogeneous Buffer Size Impact on UDP Performance for Real-Time Video Streaming Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090638</link>
        <id>10.14569/IJACSA.2018.090638</id>
        <doi>10.14569/IJACSA.2018.090638</doi>
        <lastModDate>2018-06-29T12:21:46.7730000+00:00</lastModDate>
        
        <creator>Sarfraz Ahmed Soomro</creator>
        
        <creator>M. Mujtaba Shaikh</creator>
        
        <creator>Nasreen Nizamani</creator>
        
        <creator>Ehsan Ali Buriro</creator>
        
        <creator>Khalil M. Zuhaib</creator>
        
        <subject>Real-time communication; buffer size; user datagram protocol; video streaming </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Real-time communication (RTC) refers to any live media transmission that occurs within time limits. In this paper, randomly chosen heterogeneous buffer sizes are used on different routers and over different ranges to examine their effect on network performance for a user datagram protocol (UDP) video streaming application. Numerical results show that heterogeneous buffer sizes in packet switches generally influence the overall performance of the network. With a larger range of buffer sizes, throughput improves but end-to-end delay also increases, which is typically undesirable for RTC applications. On the contrary, with a lower range of buffer sizes, throughput decreases but end-to-end delay also diminishes. Therefore, an intermediate range of buffer sizes, from 20 to 30, is recommended for ideal throughput and an acceptably low end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_38-Heterogeneous_Buffer_Size_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Link Prediction Technique in Social Networks based on Node Neighborhoods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090637</link>
        <id>10.14569/IJACSA.2018.090637</id>
        <doi>10.14569/IJACSA.2018.090637</doi>
        <lastModDate>2018-06-29T12:21:46.7430000+00:00</lastModDate>
        
        <creator>Gypsy Nandi</creator>
        
        <creator>Anjan Das</creator>
        
        <subject>Link prediction; online social networks; common neighbors; Jaccard’s coefficient; Adamic/Adar; preferential attachment; FriendLink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>The unparalleled success of social networking sites such as Facebook, LinkedIn, and Twitter has modernized and transformed the way people communicate with each other. Nowadays, a huge amount of information is shared by online users through these sites. Online friendship sites such as Facebook and Orkut allow friends to share their thoughts or opinions, comment on others’ timelines or photos, and, most importantly, meet new online friends who were not known to them before. However, the question remains of how to quickly grow one’s online network by adding more and more new friends. One easy method is the list of ‘Suggested Friends’ provided by these social networking sites. To suggest friends, links must be predicted for each online user based on the structural properties of the network. Link prediction is one of the key research directions in social network analysis and has attracted much attention in recent years. This paper presents a novel, efficient link prediction technique, LinkGyp, discusses many other commonly used prediction techniques for suggesting friends to online users of a social network, and carries out experimental evaluations for a comparative analysis of each technique. Our results on three real social network datasets show that the novel LinkGyp technique yields more accurate results than several existing link prediction techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_37-An_Efficient_Link_Prediction_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Swarm Optimization based Radio Resource Allocation for Dense Devices D2D Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090636</link>
        <id>10.14569/IJACSA.2018.090636</id>
        <doi>10.14569/IJACSA.2018.090636</doi>
        <lastModDate>2018-06-29T12:21:46.7100000+00:00</lastModDate>
        
        <creator>O. Hayat</creator>
        
        <creator>R. Ngah</creator>
        
        <creator>Siti Z. Mohd Hashim</creator>
        
        <subject>Device to device (D2D) communication; radio resources (RR) allocation; OFDMA networks; sub-channels and sub-carriers; cellular users and D2D users</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In Device to Device (D2D) communication, two or more devices communicate directly with each other within the in-band cellular network. This enhances spectral efficiency because cellular radio resources (RR) are shared among cellular users and D2D users. If RR sharing is not managed properly, it causes interference and inefficient use. Therefore, management of RR between cellular users and D2D users is required to control interference and inefficient use of RR. In a D2D-enabled cellular network, D2D users have a good signal-to-noise ratio (SNR) compared with cellular users due to the short distances and dedicated paths. Exploiting this advantage, an efficient RR allocation algorithm based on swarm optimization is proposed in this paper, which allows maximal spatial reuse in multi-user OFDMA networks. The algorithm determines the required RR on the request of D2D users according to an indicator variable. It enhances the capacity (bit/Hz), overall system throughput, and spectral efficiency with respect to sub-carriers in OFDMA networks. The performance of the proposed algorithm is evaluated via MATLAB simulations.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_36-Swarm_Optimization_based_Radio_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Concept Feedback in Lectures for Botho University Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090635</link>
        <id>10.14569/IJACSA.2018.090635</id>
        <doi>10.14569/IJACSA.2018.090635</doi>
        <lastModDate>2018-06-29T12:21:46.6970000+00:00</lastModDate>
        
        <creator>Alpheus Wanano Mogwe</creator>
        
        <subject>Real-time feedback systems; interactive technology; e-learning; information technology; understanding of concepts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This mixed-methodology study focused on developing a real-time concept feedback system for Botho University students. The study takes advantage of the tablets distributed freely by the institution to ameliorate the lack of understanding of module concepts during lectures. The system provides real-time feedback while the lecture is ongoing without disturbing other students, thus supporting effective class participation and interaction without the need to voice one’s concerns aloud; in turn, the lecturer is able to view the students’ interactions and address them. The real-time concept feedback system was used to test student comprehension of concepts and to improve participation, engagement, and attendance. The study identified many factors affecting students’ participation and interaction in a traditional class that inhibit understanding of concepts; the application was developed to address them. It concluded that real-time concept feedback systems are vital for addressing students’ understanding in lecture sessions, thus upholding the importance of ICTs in education.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_35-Real_Time_Concept_Feedback_in_Lectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Multiple Session Payment System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090634</link>
        <id>10.14569/IJACSA.2018.090634</id>
        <doi>10.14569/IJACSA.2018.090634</doi>
        <lastModDate>2018-06-29T12:21:46.6630000+00:00</lastModDate>
        
        <creator>Mohanad Faeq Ali</creator>
        
        <creator>Nur Azman Abu</creator>
        
        <creator>Norharyati Harum</creator>
        
        <subject>Easy payment system; internet of things; secure payment system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>A wireless smartphone can be designed to process a financial payment efficiently: a user can simply swipe a credit/debit card over the counter and all the required processing is done seamlessly. A smartphone is a popular device to carry around, whereas it is a hassle to carry and keep track of many physical debit/credit cards in a wallet. An electronic debit/credit card on a smartphone is a more convenient alternative. This research project embarks on an electronic debit/credit card on a smartphone and migrates to IoT money. A novel session payment system using IoT money has been introduced to minimise debit/credit card risk. The scope of this paper is confined to the security model for an easy payment system based on the Internet of Things (IoT). Previously, each IoT money note was unique and used only once for a one-time payment. The session payment system eases the burden of protecting the payment system’s database. This paper extends the use of one-time payment to a multiple session payment system using an IoT money note.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_34-A_Novel_Multiple_Session_Payment_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Anaphora Resolution on Opinion Target Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090633</link>
        <id>10.14569/IJACSA.2018.090633</id>
        <doi>10.14569/IJACSA.2018.090633</doi>
        <lastModDate>2018-06-29T12:21:46.6500000+00:00</lastModDate>
        
        <creator>BiBi Saqia</creator>
        
        <creator>Khairullah Khan</creator>
        
        <creator>Aurangzeb Khan</creator>
        
        <creator>Wahab Khan</creator>
        
        <creator>Fazali Subhan</creator>
        
        <creator>Muhammad Abid</creator>
        
        <subject>Opinion mining; machine learning; evaluative expression; anaphora resolution; opinion targets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Opinion mining is an interesting area of research because of its wide applications in the decision-making process. Opinion mining aims to extract users’ perceptions from text and to create a fast and accurate summary of people’s opinions about anything. In this study, we have worked on opinion target identification and the impact of anaphora resolution on opinion target extraction. Anaphora resolution can be utilized to detect opinion targets in sentences containing pronouns instead of nouns. We empirically evaluated the impact of anaphora resolution using benchmark datasets and achieved a precision of 88.14, a recall of 71.45, and an F-score of 72.12.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_33-Impact_of_Anaphora_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Separation from Graphics by Analyzing Stroke Width Variety in Persian City Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090632</link>
        <id>10.14569/IJACSA.2018.090632</id>
        <doi>10.14569/IJACSA.2018.090632</doi>
        <lastModDate>2018-06-29T12:21:46.6170000+00:00</lastModDate>
        
        <creator>Ali Ghafari-Beranghar</creator>
        
        <creator>Ehsanollah Kabir</creator>
        
        <creator>Kaveh Kangarloo</creator>
        
        <subject>Document image analysis; text/graphics separation; stroke width; raster map; Farsi; Persian; text segmentation; text label</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Text segmentation is an active research field with vast new areas to be explored. Separating the text layer from graphics is a fundamental step in exploiting text and graphics information. The language used in a map is a challenging issue in the text layer separation problem, and all current methods have been proposed for non-Persian-language maps. In Persian, text strings are composed of one or more subwords, and each subword is composed of one to several letters connected together. Therefore, the components of text strings in Persian are more diverse in size and geometric form than in English. Thus, the overlapping of Persian text and lines usually produces a complex structure that existing methods cannot handle with the necessary efficiency. To address this, the stroke width variety of the input map is calculated, and the average line width of the graphics is then estimated by analyzing the stroke width content. After finding the average width of graphical lines, we classify the complex structure into text and graphics at the pixel level. We evaluate our method on a variety of fully crossing text and graphics in Persian maps and obtain promising results in terms of precision and recall (above 80% and 90%, respectively).</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_32-Text_Separation_from_Graphics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Management of a Stand-Alone Hybrid (Wind/Solar/Battery) Energy System: An Experimental Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090631</link>
        <id>10.14569/IJACSA.2018.090631</id>
        <doi>10.14569/IJACSA.2018.090631</doi>
        <lastModDate>2018-06-29T12:21:46.5870000+00:00</lastModDate>
        
        <creator>Saindad Mahesar</creator>
        
        <creator>Mazhar H. Baloch</creator>
        
        <creator>Ghulam S. Kaloi</creator>
        
        <creator>Mahesh Kumar</creator>
        
        <creator>Aamir M. Soomro</creator>
        
        <creator>Asif A. Solangi</creator>
        
        <creator>Yasir A. Memon</creator>
        
        <subject>Hybrid; stand-alone; wind; solar; battery; power management; Pakistan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In this manuscript, a hybrid wind/solar/battery energy system is proposed for stand-alone applications. Wind and solar energy sources are used as the power generation sources in the proposed hybrid energy system (HES), whereas a battery is used as the energy storage system to manage the power flow among the generation sources and the storage system. A power management control strategy is also presented for the suggested hybrid system. Using real load demand and practical weather data for the proposed site (Jamshoro, Sindh, Pakistan), the system performance is verified under different situations. It is observed that the hybrid system produces maximum power in the summer season compared to the other seasons of the year, and that wind and solar energy contribute 77.88% and 22.12% of the generated power, respectively. Moreover, it is clearly observed that the HES is cost-effective and can be used in remote rural areas isolated from the power grid. In future work, the HES can be integrated with the power grid to meet load demand during power shortages.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_31-Power_Management_of_a_Stand_Alone_Hybrid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining: Web Data Mining Techniques, Tools and Algorithms: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090630</link>
        <id>10.14569/IJACSA.2018.090630</id>
        <doi>10.14569/IJACSA.2018.090630</doi>
        <lastModDate>2018-06-29T12:21:46.5570000+00:00</lastModDate>
        
        <creator>Muhammd Jawad Hamid Mughal</creator>
        
        <subject>Web data mining; hyperlinks; usage logs; contents; patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Web data mining has become an easy and important platform for the retrieval of useful information, and users increasingly prefer the World Wide Web for uploading and downloading data. With the increasing growth of data on the internet, discovering informative knowledge and patterns is becoming difficult and time-consuming. Digging knowledgeable and user-queried information out of unstructured and inconsistent data on the web is not an easy task. Different mining techniques are used to fetch relevant information from the web (hyperlinks, contents, web usage logs). Web data mining is a sub-discipline of data mining that mainly deals with the web. It is divided into three types: web structure, web content, and web usage mining. All these types use different techniques, tools, approaches, and algorithms to discover information from the huge bulk of data on the web.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_30-Data_Mining_Web_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Review of Cyber Security and Classification of Attacks in Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090629</link>
        <id>10.14569/IJACSA.2018.090629</id>
        <doi>10.14569/IJACSA.2018.090629</doi>
        <lastModDate>2018-06-29T12:21:46.5230000+00:00</lastModDate>
        
        <creator>Muhammad Kashif</creator>
        
        <creator>Sheraz Arshad Malik</creator>
        
        <creator>Muhammad Tahir Abdullah</creator>
        
        <creator>Muhammad Umair</creator>
        
        <creator>Prince Waqas Khan</creator>
        
        <subject>Cyber security; internet; intranet; network security; cybercrime and security alludes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Cyber security plays an important role in securing people who use the internet via different electronic devices in their daily lives. Incidents all over the world show that people face problems when they connect their devices and systems to the internet. Some highly sensitive data, such as biotechnology and military assets, are highly threatened by hackers, and cyber security plays a vital role in securing such data. Misuse of the internet has become a current issue in different sectors of life, especially in social media, universities, and government organizations. The internet is very useful for students in educational institutes and for employees who work in different organizations, and it gives people the facility to fetch information. However, they must be protected when using the internet and secured against any unauthorized access. In this paper we cover different aspects of cyber security and network security in the modern era. We also cover threats to the intranets of organizations.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_29-A_Systematic_Review_of_Cyber_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urdu Word Segmentation using Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090628</link>
        <id>10.14569/IJACSA.2018.090628</id>
        <doi>10.14569/IJACSA.2018.090628</doi>
        <lastModDate>2018-06-29T12:21:46.5100000+00:00</lastModDate>
        
        <creator>Sadiq Nawaz Khan</creator>
        
        <creator>Khairullah Khan</creator>
        
        <creator>Wahab Khan</creator>
        
        <creator>Asfandyar Khan</creator>
        
        <creator>Fazali Subhan</creator>
        
        <creator>Aman Ullah Khan</creator>
        
        <creator>Burhan Ullah</creator>
        
        <subject>Part-of-speech (POS); NER; word segmentation; information retrieval; Natural Language Processing (NLP); conditional random fields (CRF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Word segmentation is considered a basic NLP task, and it plays a significant role in diverse NLP areas. The main areas that benefit from word segmentation are IR, POS tagging, NER, sentiment analysis, etc. Urdu word segmentation is a challenging task for a number of reasons, but the space insertion and space omission problems are the major ones. Compared to Urdu, the tools and resources developed for word segmentation of English and other Western languages have record-setting performance. Some languages, such as English, provide a clear indication of word boundaries through spaces or capitalization of the first character of a word, but many languages, e.g., Thai, Lao, and Urdu, do not have proper delimitation between words. The objective of this research work is to present a machine learning based approach for Urdu word segmentation; we adopt conditional random fields (CRF) to achieve this task. Other challenges in Urdu text include compound words and reduplicated words. In this paper, we try to overcome such challenges in Urdu text using a machine learning methodology.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_28-Urdu_Word_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Android-Based Remote Patient Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090627</link>
        <id>10.14569/IJACSA.2018.090627</id>
        <doi>10.14569/IJACSA.2018.090627</doi>
        <lastModDate>2018-06-29T12:21:46.4770000+00:00</lastModDate>
        
        <creator>Salman ul Mouzam Abbasi</creator>
        
        <creator>Muhammad Daud</creator>
        
        <creator>Salman Ali</creator>
        
        <creator>Abdul Qadir Ansari</creator>
        
        <subject>Monitoring system; SPO2; temperature; android application; bio instrumentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Efficient real-time monitoring systems for patients in critical health condition have always been helpful for making timely decisions to save lives. In such systems, the useful monitored factors include SpO2 (blood oxygen saturation), heart rate, and temperature. Further, hundreds of patients in ICUs are under monitoring systems in different hospitals and regions, attended by comparatively few doctors/consultants who are often on the move. Given these facts, a prototype has been developed for continuous monitoring of patients’ health statistics, such as SpO2 and temperature, together with a bedside desk using a PC/laptop (bio-instrumentation) working as a server database, with an application layer to transfer data to an Android application server. The Android application accesses the real-time monitored factors from the server database, allowing a consultant to monitor a patient’s vital data on a smartphone while on the move at any hospital or city, which makes it easier to handle emergencies and reduces patient risk.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_27-Design_of_Android_based_Remote_Patient_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure user Authentication and File Transfer in Wireless Sensor Network using Improved AES Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090626</link>
        <id>10.14569/IJACSA.2018.090626</id>
        <doi>10.14569/IJACSA.2018.090626</doi>
        <lastModDate>2018-06-29T12:21:46.4470000+00:00</lastModDate>
        
        <creator>Ishu Gupta</creator>
        
        <creator>S.N Panda</creator>
        
        <creator>Harsh SadaWarti</creator>
        
        <creator>Jatin Gupta</creator>
        
        <subject>Wireless sensor networks (WSN); centralized server; black node; encryption; security; key distribution technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>WSN technology is a highly efficient and effective way of gathering highly sensitive information and is often deployed in mission-critical applications, which makes the security of its data transmission vitally important. However, previous research failed to account for the role of the centralized server as the main controller of the entire network: the decision for nodes to communicate with each other was based only on information received from adjacent nodes. The proposed research instead uses the centralized server to develop a new technique that prevents black nodes from joining the wireless sensor network. A key distribution technique, together with an improved AES algorithm using double-key encryption, plays an important role in transferring data securely between authorized nodes and preventing unauthorized users from accessing it.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_26-Secure_User_Authentication_and_File_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Movement Direction Estimation on Video using Optical Flow Analysis on Multiple Frames</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090625</link>
        <id>10.14569/IJACSA.2018.090625</id>
        <doi>10.14569/IJACSA.2018.090625</doi>
        <lastModDate>2018-06-29T12:21:46.4300000+00:00</lastModDate>
        
        <creator>Achmad Solichin</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>Video analysis; movement direction; optical flow; Histograms of Oriented Optical Flow (HOOF); multiple frames</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This study proposes a model for determining the movement direction of an object based on optical flow features. To reduce computation time, the optical flow features are converted into Histograms of Oriented Optical Flow (HOOF), extracted locally on a grid of fixed size. Moreover, to determine the movement direction, multiple frames are analyzed at once. The experimental results show good movement-detection performance: 93% accuracy, 73.07% precision, and 84.25% recall. Furthermore, testing with the best parameters yields 98.1% accuracy, 35.6% precision, 41.2% recall, and a direction detection error rate (DDER) of 25.28%. The results of this study are expected to benefit video analysis applications such as detecting riots and abnormal movement in public places.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_25-Movement_Direction_Estimation_on_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Miniaturized Multiband Microstrip Patch Antenna using Defected Ground Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090624</link>
        <id>10.14569/IJACSA.2018.090624</id>
        <doi>10.14569/IJACSA.2018.090624</doi>
        <lastModDate>2018-06-29T12:21:46.4170000+00:00</lastModDate>
        
        <creator>Mudasar Rashid</creator>
        
        <creator>Mehre E Munir</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Jehanzeb Khan</creator>
        
        <subject>Miniaturization; multiband; defected ground structure (DGS); defected patch structure (DPS); directivity; gain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Recent developments in communication and antenna engineering demand compact, multiband antennas. The microstrip antenna is one of the most useful antennas for wireless communication because of its inherent features such as low profile, light weight, and easy fabrication. This design aims at a miniaturized Microstrip Patch Antenna (MSA) that does not compromise its other parameters, such as gain, bandwidth, directivity, and return loss. A significant miniaturization of 89% was achieved through careful and meticulous investigation of slot insertion in the patch and ground of the MSA. The dielectric substrate used in this design is polyester, which showed better results. As the focus of this design is miniaturization of the MSA, the technique used is the Defected Ground Structure (DGS) together with the Defected Patch Structure (DPS), which shifts the resonant frequencies to a lower range without increasing the antenna’s physical dimensions. In addition, a shorting pin is introduced between the patch and ground, which further enhances parameters such as gain and return loss; the position of the pin plays an important role in achieving better performance and radiation in the desired frequency band. Different shapes were designed on the ground and patch to obtain enhanced results. With the use of DGS, the designed antenna radiates in multiple frequency bands, which lie within the L and S bands of the IEEE standard, making it suitable for a variety of applications.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_24-Design_of_Miniaturized_Multiband_Microstrip_Patch_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Virtual Reality based Remedy for Acrophobia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090623</link>
        <id>10.14569/IJACSA.2018.090623</id>
        <doi>10.14569/IJACSA.2018.090623</doi>
        <lastModDate>2018-06-29T12:21:46.3830000+00:00</lastModDate>
        
        <creator>Maria Abdullah</creator>
        
        <creator>Zubair Ahmed Shaikh</creator>
        
        <subject>HMI; virtual reality (VR); HMD; acrophobia; VR exposure therapy; cognitive behavioral therapy; 3D VR environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Virtual reality (VR) exposure therapy with sophisticated technology has been used in psychological treatment. The goal is to design a virtual environment, using an HCI (HMD) device, with interactive and immersive realistic 3D scenes for exposure therapy of acrophobia, allowing the patient to sense height and become accustomed to the fearful feelings. The degree of fear before and after therapy is then compared to evaluate the effectiveness of the system. Upon exposure to height, one may feel uneasy; an accelerated heart rate, excessive sweating, and shortness of breath are among the most common physical symptoms of anxiety. This extreme or irrational fear of heights is called “acrophobia”. The HMI-based simulation uses these bodily sensations, as physical symptoms of anxiety upon exposure to height, to predict the results. The tests reveal that the anxiety level decreases from 16% at the first exposure level to 8% at the last exposure level. It is concluded from the results that VR exposure therapy is more effective than real exposure therapy, and the post-test results for VR exposure therapy were significantly better than those for real exposure. The system provides a cost-effective solution for a rehabilitation environment.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_23-An_Effective_Virtual_Reality_based_Remedy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verifiable Search Over Updatable Encrypted Data in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090622</link>
        <id>10.14569/IJACSA.2018.090622</id>
        <doi>10.14569/IJACSA.2018.090622</doi>
        <lastModDate>2018-06-29T12:21:46.3670000+00:00</lastModDate>
        
        <creator>Selasi Kwame Ocansey</creator>
        
        <creator>Changda Wang</creator>
        
        <creator>Wolali Ametepe</creator>
        
        <creator>Qinbao Xu</creator>
        
        <creator>Yu Zeng</creator>
        
        <subject>Cloud computing; verification; outsourced data; update; correctness; search results</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Despite all the benefits of cloud computing, there are negative influences on data trust and integrity, since clients lose control over the data they outsource to clouds. We propose a verification scheme that supports keyword-based search over updatable encrypted data. During the verification process, the outsourced cloud data are protected from being inferred by the cloud server. Additionally, if the cloud server returns wrong or incomplete search results, the clients are able to detect such failures. A novel aspect of our scheme is the clients’ ability to update their outsourced data and to ensure the data’s correctness. With our scheme, update efficiency is high and the client’s computational cost is low, which makes our scheme very suitable for resource-constrained devices.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_22-Verifiable_Search_over_Updatable_Encrypted_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State Transition Testing Approach for Ad hoc Networks using Ant Colony Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090621</link>
        <id>10.14569/IJACSA.2018.090621</id>
        <doi>10.14569/IJACSA.2018.090621</doi>
        <lastModDate>2018-06-29T12:21:46.3370000+00:00</lastModDate>
        
        <creator>Ahmed Redha Mahlous</creator>
        
        <creator>Anis Zarrad</creator>
        
        <creator>Taghreed Alotaibi</creator>
        
        <subject>Component; ant colony; simulation; optimization; state transition; ad hoc routing protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Nowadays, telecommunication software organizations are challenged to deliver high-quality software to customers within their estimated time and budget in order to stay competitive in the market. Because quality is a defining aspect of the product, it is essential for a project manager to stay alert throughout the project lifecycle. Quality has a direct bearing on customer satisfaction: if a company produces high-quality products, satisfied customers will rank it highly in customer satisfaction surveys, while dissatisfied customers tend to be more vocal in their criticism. Testing is therefore an important step in producing more reliable systems. In this paper, we address two important aspects of software testing for ad hoc network protocols. The first is the integration of a high-level testing approach based on state transitions on top of a network simulator, to fill a perceived gap in existing network simulators. The second is the reduction of testing effort by eliminating redundant test cases, in order to effectively improve the result accuracy of existing network simulators. We implemented an automated state transition testing approach for wireless network routing protocols using an improved Ant Colony Optimization (ACO) algorithm. The expected result is maximum coverage in terms of states and transitions.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_21-State_Transition_Testing_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating Relational Database using Ontology Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090620</link>
        <id>10.14569/IJACSA.2018.090620</id>
        <doi>10.14569/IJACSA.2018.090620</doi>
        <lastModDate>2018-06-29T12:21:46.3200000+00:00</lastModDate>
        
        <creator>Christina Khnaisser</creator>
        
        <creator>Luc Lavoie</creator>
        
        <creator>Anita Burgun</creator>
        
        <creator>Jean-Francois Ethier</creator>
        
        <subject>Ontology; relational database; database modeling; knowledge model; ontology to relational database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>A huge amount of data is generated every day from different sources. Access to these data can be very valuable for decision-making; nevertheless, extracting information of interest remains a major challenge given the large number of heterogeneous databases. Building shareable and (re)usable data access mechanisms, including automated verification and inference mechanisms for knowledge discovery, requires a common knowledge model backed by a secure, coherent, and efficient database. For this purpose, an ontology provides an interesting knowledge model and a relational database provides an interesting storage solution. Many papers propose methods for converting an ontology to a relational database. This paper describes issues, challenges, and trends derived from the evaluation of 10 such methods against 23 criteria. The study shows that none of the methods is complete, and that the conversion processes do not exploit the full expressivity of ontologies to derive a complete relational schema including advanced constraints and modification procedures. Thus, more work must be done to narrow the gap between ontologies and relational databases.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_20-Generating_Relational_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for the use of the Scientific Research Information System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090619</link>
        <id>10.14569/IJACSA.2018.090619</id>
        <doi>10.14569/IJACSA.2018.090619</doi>
        <lastModDate>2018-06-29T12:21:46.2900000+00:00</lastModDate>
        
        <creator>Khaoula Benmoussa</creator>
        
        <creator>Majida Laaziri</creator>
        
        <creator>Samira Khoulji</creator>
        
        <creator>Mohamed Larbi Kerkeb</creator>
        
        <subject>Moroccan Information System for Scientific Research (SIMARECH); intelligent system; E-learning systems; learning process; interactive learning environments; intelligent tutoring systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>As part of the digital governance of scientific research at Moroccan universities and national research institutions, the Ministry of Higher Education, Scientific Research and Executive Training has shown great interest in setting up the Moroccan Information System for Scientific Research (SIMARECH). Despite the great effort made to implement SIMARECH in Moroccan universities, difficulties have appeared in the use of this information system. This prompted Abdelmalek Essaadi University to consider developing an intelligent system providing remote training for SIMARECH users, in order to facilitate the learning process, reduce the administrative costs of travel to national universities, and save training time. This article fits within a new, rapidly expanding learning paradigm in the field of artificial intelligence for education. It encompasses the design and development of an intelligent learning system covering the global use and features of SIMARECH, providing customized learning and adapting to different environments and types of learning scenarios for user training. This system is named ISSIMA (Intelligent System for the use of SIMARECH).</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_19-Intelligent_System_for_the_Use_of_the_Scientific_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Demand Response Programs Significance, Challenges and Worldwide Scope in Maintaining Power System Stability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090618</link>
        <id>10.14569/IJACSA.2018.090618</id>
        <doi>10.14569/IJACSA.2018.090618</doi>
        <lastModDate>2018-06-29T12:21:46.2600000+00:00</lastModDate>
        
        <creator>Muhammad Faizan Tahir</creator>
        
        <creator>Chen Haoyong</creator>
        
        <creator>Idris Ibn Idris</creator>
        
        <creator>Nauman Ali Larik</creator>
        
        <creator>Saif ullah Adnan</creator>
        
        <subject>Demand side management (DSM); demand response (DR); renewable energy (RE); DR programs; wind; PV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>To cope with continuously increasing electric demand, governments are forced to invest in Renewable Energy (RE) sources because of the scarcity of fossil fuels (such as coal, gas, and oil), their high associated costs, and the emission of greenhouse gases. However, the stochastic nature of RE sources such as wind and PV threatens the reliability and stability of the power system. Demand Response (DR) is an alternative solution that addresses economic constraints, the integration challenges of RE, and dependency on fossil fuels. It is an aspect of Demand Side Management (DSM) that converts the consumer’s passive role into an active one by changing energy consumption patterns to reduce peak load. DR helps defer investment in new power plants, eliminate transmission losses, and make society greener. This work analyzes the initialization of different DR programs driven by falling technology costs and growing recognition of user behavior in the electricity market. Moreover, this paper points out the problems associated with DR and its project implementation across the USA, China, and developed cities of Europe.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_18-Demand_Response_Programs_Significance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MapReduce Performance in MongoDB Sharded Collections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090617</link>
        <id>10.14569/IJACSA.2018.090617</id>
        <doi>10.14569/IJACSA.2018.090617</doi>
        <lastModDate>2018-06-29T12:21:46.2600000+00:00</lastModDate>
        
        <creator>Jaumin Ajdari</creator>
        
        <creator>Brilant Kasami</creator>
        
        <subject>NoSQL; big data; MapReduce; sharding; MongoDB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In the modern era of computing, with countless online services gathering and serving huge amounts of data around the world, processing and analyzing Big Data has rapidly developed into an area of its own. In this paper, we focus on the MapReduce programming model and its associated implementation for processing and analyzing large datasets in a NoSQL database such as MongoDB. Furthermore, we analyze the performance of MapReduce on sharded collections with a huge dataset and measure how the execution time scales as the number of shards increases. As a result, we explain when MapReduce is an appropriate processing technique in MongoDB and give some measures and alternatives to consider when MapReduce is used.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_17-MapReduce_Performance_in_MongoDB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedestrian Detection Approach for Driver Assisted System using Haar based Cascade Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090616</link>
        <id>10.14569/IJACSA.2018.090616</id>
        <doi>10.14569/IJACSA.2018.090616</doi>
        <lastModDate>2018-06-29T12:21:46.2270000+00:00</lastModDate>
        
        <creator>M. Ameen Chhajro</creator>
        
        <creator>Kamlesh Kumar</creator>
        
        <creator>M. Malook Rind</creator>
        
        <creator>Aftab Ahmed Shaikh</creator>
        
        <creator>Haque Nawaz</creator>
        
        <creator>Rafaqat Hussain Arain</creator>
        
        <subject>Pedestrian; Haar based classifier; positive and negative samples; computer vision; object detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Object detection and tracking with the aid of computer vision is a most challenging task in the context of Driver Assistant Systems (DAS) for vehicles. This paper presents a pedestrian detection technique using Haar-like features. The main aim of this research is to develop a detection system for vehicle drivers that warns them in advance of pedestrian movement when pedestrians are crossing the zebra region or passing nearby along the road. For this purpose, a dataset of 1000 images was collected via a CCTV camera mounted for road monitoring. Haar-based cascade classifiers were implemented over the images, and the system was trained on positive (with people) and negative (without people) image samples, respectively. After testing, the obtained results show 90% accuracy in pedestrian detection. The proposed work makes a significant contribution toward reducing road accidents and ensuring safety measures for road management.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_16-Pedestrian_Detection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule Based Artificial Intelligent System of Cucumber Greenhouse Environment Control with IoT Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090615</link>
        <id>10.14569/IJACSA.2018.090615</id>
        <doi>10.14569/IJACSA.2018.090615</doi>
        <lastModDate>2018-06-29T12:21:46.2130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoshikazu Saitoh</creator>
        
        <subject>Temperature; relative humidity; CO2 content; water supply; liquid fertilizer; rule-based system; IoT; artificial intelligence; expert system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>The method proposed here allows control of a cucumber greenhouse environment based on IoT technology. IoT sensors measure room and air temperature, relative humidity, CO2 content, water supply, liquid fertilizer, and water content. The basic system is a rule-based system, and all the rules required to control the cucumber greenhouse environment are proposed here. Through regression analysis between the IoT sensor data and the quality of the harvested cucumbers, it is found that the proposed rule-based system is appropriate for controlling the cucumber greenhouse environment.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_15-Rule_based_Artificial_Intelligent_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Sensor Network and Internet of Things in Precision Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090614</link>
        <id>10.14569/IJACSA.2018.090614</id>
        <doi>10.14569/IJACSA.2018.090614</doi>
        <lastModDate>2018-06-29T12:21:46.1970000+00:00</lastModDate>
        
        <creator>Farzad Kiani</creator>
        
        <creator>Amir Seyyedabbasi</creator>
        
        <subject>Wireless sensor network; internet of things; smart agriculture applications; precision agriculture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>The Internet of Things is one of the most popular subjects nowadays, with sensors and smart devices facilitating the provision of information and communication. One of the main concepts in IoT is the wireless sensor network, in which data is collected from all sensors in a network characterized by low power consumption and a wide communication range. In this study, an architecture to monitor soil moisture, temperature, and humidity on small farms is provided. The main motivation is to decrease water consumption while increasing productivity and precision on small agricultural farms. This motivation is reinforced by the fact that agriculture is the backbone of some towns and most villages in many countries, and some countries depend on farming as their main source of income. With these factors in mind, the farm is divided into regions; the proposed system monitors soil moisture, humidity, and temperature in the respective regions using wireless sensor networks and the Internet of Things, and sends a report to the end user. The report includes, among other information, a 10-day weather forecast. We believe that with this information, the end user (farmer) can more efficiently schedule farm cultivation, harvesting, irrigation, and fertilization.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_14-Wireless_Sensor_Network_and_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Langley and Ratio Langley Methods for Improving Sky-Radiometer Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090613</link>
        <id>10.14569/IJACSA.2018.090613</id>
        <doi>10.14569/IJACSA.2018.090613</doi>
        <lastModDate>2018-06-29T12:21:46.1800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Calibration; Langley plot; improved Langley method; ratio Langley method; aerosol optical depth; volume spectrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>An Improved Langley Method (ILM) is proposed to improve the calibration accuracy of the sky-radiometer. The ILM exploits the fact that the calibration coefficients at arbitrary wavelengths can be estimated from the calibration coefficient at a certain reference wavelength, and improves the calibration accuracy over the full wavelength region by applying the Ratio Langley Method (RLM) at long wavelengths, where calibration accuracy is comparatively good. Specifically, the calibration coefficients at other wavelengths are estimated by the RLM from the calibration coefficient obtained by the ILM at 0.87 micrometers. In a numerical simulation based on measured data of direct solar and aureole radiation, the calibration error of the proposed method was evaluated for cases where measurement errors of &#177;3% and &#177;5% are superimposed on the measurement data; the maximum errors were 0.0014 and 0.0428, while those of the ILM were 0.011 and 0.0489. Therefore, the proposed calibration method is more robust against measurement error than the ILM, and highly precise calibration is possible over the full wavelength range. The standard deviation of the calibration coefficients estimated by the proposed method, based on sky-radiometer data from the 15 days best suited for calibration out of more than four years of measurements, was 0.02016; since this is smaller than the standard deviation of 0.03858 for the calibration coefficients obtained by the ILM, the superiority of the proposed calibration method is confirmed.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_13-Improved_Langley_and_Ratio_Langley_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Study of Spatial Cognition Capability Enhancement with Building Block Learning Contents for Disabled Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090612</link>
        <id>10.14569/IJACSA.2018.090612</id>
        <doi>10.14569/IJACSA.2018.090612</doi>
        <lastModDate>2018-06-29T12:21:46.1670000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Taiki Ishigaki</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Experiment; slope surfaces; interaction between two surfaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In this research, we develop learning materials using building blocks for children with disabilities and verify their learning effect. It is important to provide input equipment suited to each child&#8217;s disability and learning materials suited to each child&#8217;s ability. We therefore developed teaching materials using building blocks to improve spatial recognition capability, with a touch pad and a tablet as input devices. The effect is measured by comparing the scores obtained with different combinations of input device and learning material. Through experiments with disabled children as participants, we found that the learning contents are effective and appropriate for improving their spatial recognition capability.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_12-Experimental_Study_of_Spatial_Cognition_Capability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Technology and Financial Literacy on SMEs Practices and Performance in Developing Economies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090611</link>
        <id>10.14569/IJACSA.2018.090611</id>
        <doi>10.14569/IJACSA.2018.090611</doi>
        <lastModDate>2018-06-29T12:21:46.1500000+00:00</lastModDate>
        
        <creator>Juma Buhimila Mabula</creator>
        
        <creator>Han Dong Ping</creator>
        
        <subject>Technology use; financial literacy; book keeping; risk management; developing economies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Micro, small and medium enterprises (SMEs) in developing economies face a unique set of challenges in attaining success. To analyze the joint impact of SME financial literacy and use of technology on record-keeping and risk-management practices, as reflected in firm performance, partial least squares structural equation modelling was used to configure the perceived impact of these variables. The results show a significant relationship between a firm&#8217;s use of technology and both its record-keeping practice and its performance, and a significant positive association between financial literacy and firm risk-management practices. The study found no significant association between financial literacy and firm book-keeping practice; nevertheless, it highlights the dual practical role of financial literacy and use of technology in improving SMEs&#8217; financial practices in developing economies.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_11-Use_of_Technology_and_Financial_Literacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Arabic Image Captioning using RNN-LSTM-Based Language Model and CNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090610</link>
        <id>10.14569/IJACSA.2018.090610</id>
        <doi>10.14569/IJACSA.2018.090610</doi>
        <lastModDate>2018-06-29T12:21:46.1030000+00:00</lastModDate>
        
        <creator>Huda A. Al-muzaini</creator>
        
        <creator>Tasniem N. Al-yahya</creator>
        
        <creator>Hafida Benhidour</creator>
        
        <subject>AI; image caption; natural language processing; neural network; deep learning convolutional neural network; recurrent neural network; long short-term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>The automatic generation of syntactically and semantically correct image captions is an essential problem in artificial intelligence. The existence of large image-caption corpora such as Flickr and MS COCO has contributed to the advance of image captioning in English. However, progress still lags for Arabic, given the scarcity of image-caption corpora for the Arabic language. In this work, an Arabic version of part of the Flickr and MS COCO caption datasets is built. Moreover, a generative merge model for Arabic image captioning based on a deep RNN-LSTM and CNN model is developed. The results of the experiments are promising and suggest that the merge model can achieve excellent results for Arabic image captioning if a larger corpus is used.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_10-Automatic_Arabic_Image_Captioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating M-Learning in Saudi Arabia Universities using Concerns-Based Adoption Model Level of use Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090609</link>
        <id>10.14569/IJACSA.2018.090609</id>
        <doi>10.14569/IJACSA.2018.090609</doi>
        <lastModDate>2018-06-29T12:21:46.0730000+00:00</lastModDate>
        
        <creator>Mohammed Al Masarweh</creator>
        
        <subject>Concern Based Adoption Model (CBAM); evaluation; M-learning; Saudi Universities; level of use; mobile phone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Numerous studies have evaluated aspects of m-learning use in Saudi Arabia, mostly focused on technology use and its impact on students, or on technology challenges and promises. Few studies have explored m-learning use and engagement among university faculty members. This paper presents a new methodology for evaluating the status of m-learning from faculty members&#8217; perspectives in Saudi Arabia by investigating the level of use with the Concerns-Based Adoption Model (CBAM) framework. CBAM is well established in the United States of America and in research investigating the adoption of innovations in education, including recent efforts in the Middle East (Jordan and Saudi Arabia). The outcome of such research, including this study, promotes better use of and engagement with m-learning and provides a better understanding of its advantages, disadvantages, and barriers. The outcomes of this study can reflect positively on universities&#8217; standing in the future and help reform policies and practices for developing the use of m-learning in Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_9-Evaluating_M_Learning_In_Saudi_Arabia_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling of Thermal Storage in Damaged Composite Structures using Time Displaced Gradient Field Technique (TDGF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090608</link>
        <id>10.14569/IJACSA.2018.090608</id>
        <doi>10.14569/IJACSA.2018.090608</doi>
        <lastModDate>2018-06-29T12:21:46.0570000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Iskandarani</creator>
        
        <subject>Gradient norm; edge detection; gray level mapping; segmentation; rate-dependent; lag; thermal images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This paper presents a new approach to composite surface characterization using Gradient Field time displacement. The new technique calculates the thermally charged regions within a composite structure from each area&#8217;s gradient and then correlates these regions (storage areas) using a time-displaced (lag) model. The resulting data show that a rate-dependent model is well suited to describing the behavior of damaged areas within a composite structure, which act as energy storage elements. The rate of dissipation of stored energy per region contributes to the shape and area of the resulting correlated lag curve.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_8-Modelling_of_Thermal_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MapReduce Programs Simplification using a Query Criteria API</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090607</link>
        <id>10.14569/IJACSA.2018.090607</id>
        <doi>10.14569/IJACSA.2018.090607</doi>
        <lastModDate>2018-06-29T12:21:46.0270000+00:00</lastModDate>
        
        <creator>Boulchahoub Hassan</creator>
        
        <creator>Khalil Namir</creator>
        
        <creator>Amina Rachiq</creator>
        
        <creator>Labriji Elhoussin</creator>
        
        <creator>Benabbou Fouzia</creator>
        
        <subject>Hadoop; HDFS; MapReduce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>A Hadoop HDFS is an organized, distributed collection of files, created to store huge volumes of data and then retrieve and analyze them efficiently in little time. To retrieve and analyze data from HDFS, MapReduce jobs must be created either directly, using programming languages such as Java, or indirectly, using high-level languages such as HiveQL and Pig Latin. Creating MapReduce programs in a general-purpose programming language is a difficult task that requires remarkable effort for both development and maintenance: writing MapReduce code by hand takes a lot of time, introduces bugs, harms readability, and impedes optimization. Practitioners in the field of big data tend to avoid long, difficult programs and look for much simpler alternatives such as graphical interfaces or compact scripts like Pig Latin, or even SQL queries. This article proposes a MapReduce Query API, inspired by Hibernate Criteria, to simplify the code of MapReduce programs. This API provides a set of predefined methods for expressing restrictions, projections, logical conditions, and so on. An implementation of the Word Count example using the Query Criteria API is illustrated in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_7-MapReduce_Programs_Simplification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Face Recognition Techniques: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090606</link>
        <id>10.14569/IJACSA.2018.090606</id>
        <doi>10.14569/IJACSA.2018.090606</doi>
        <lastModDate>2018-06-29T12:21:45.9930000+00:00</lastModDate>
        
        <creator>Madan Lal</creator>
        
        <creator>Kamlesh Kumar</creator>
        
        <creator>Rafaqat Hussain Arain</creator>
        
        <creator>Abdullah Maitlo</creator>
        
        <creator>Sadaquat Ali Ruk</creator>
        
        <creator>Hidayatullah Shaikh</creator>
        
        <subject>Face recognition; illuminations; partial occlusion; pose invariance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>With the rapid growth of multimedia content, face recognition has attracted much attention, especially in the past few years. A face consists of distinct features to detect; therefore, face recognition remains a most challenging research area for scholars in the fields of computer vision and image processing. In this survey paper, we address the most demanding facial characteristics, such as pose invariance, aging, illumination, and partial occlusion, which are indispensable factors in a face recognition system operating on facial images. The paper also reviews state-of-the-art face recognition techniques and approaches, viz. Eigenfaces, Artificial Neural Networks (ANN), Support Vector Machines (SVM), Principal Component Analysis (PCA), Independent Component Analysis (ICA), Gabor wavelets, Elastic Bunch Graph Matching, the 3D Morphable Model, and Hidden Markov Models. In addition to the aforementioned works, we describe the face databases used for evaluation, including AT &amp; T (ORL), AR, FERET, LFW, YTF, and Yale. The aim of this research is to provide a comprehensive literature review of face recognition along with its applications; after an in-depth discussion, the major findings are given in the conclusion.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_6-Study_of_Face_Recognition_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Focus Image Fusion using Combined Median and Average Filter based Hybrid Stationary Wavelet Transform and Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090605</link>
        <id>10.14569/IJACSA.2018.090605</id>
        <doi>10.14569/IJACSA.2018.090605</doi>
        <lastModDate>2018-06-29T12:21:45.9770000+00:00</lastModDate>
        
        <creator>Tian Lianfang</creator>
        
        <creator>Jameel Ahmed Bhutto</creator>
        
        <creator>Du Qiliang</creator>
        
        <creator>Bhawani Shankar</creator>
        
        <creator>Saifullah Adnan</creator>
        
        <subject>Image fusion; Heisenberg’s uncertainty principle; combined median and average filter; Haar wavelet; Stationary Wavelet Transform and Principal Component Analysis (SWT-PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Poor illumination, low background contrast, and blurring effects make medical, satellite, and camera images difficult to visualize. Image fusion plays a vital role in enhancing image quality by resolving these issues while reducing the quantity of image data. The combined spatial and spectral technique of Discrete Wavelet Transform and Principal Component Analysis (DWT-PCA) decreases processing time and reduces the number of dimensions, but its down-sampling causes a lack of shift invariance that results in a poor-quality fused image. This work first applies a combined median and average filter at the preprocessing stage to eliminate noise in the image caused by illumination, camera circuitry, and the sensor. Then, a hybrid Stationary Wavelet Transform and Principal Component Analysis (SWT-PCA) technique is implemented to increase the accuracy of the output image by eliminating down-sampling, so the result is not degraded by artifacts and blurring effects. Furthermore, the technique mitigates the trade-off of Heisenberg&#8217;s uncertainty principle by improving accuracy in both the time (spatial) and frequency (spectral) domains. The proposed combined median and average filter with the hybrid SWT-PCA algorithm is evaluated with quality metrics such as peak signal-to-noise ratio (PSNR), mean squared error (MSE), and normalized cross-correlation (NCC), and the improved results demonstrate the superiority of the algorithm over existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_5-Multi_Focus_Image_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methodology for Selecting the Preferred Networked Computer System Solution for Dynamic Continuous Defense Missions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090604</link>
        <id>10.14569/IJACSA.2018.090604</id>
        <doi>10.14569/IJACSA.2018.090604</doi>
        <lastModDate>2018-06-29T12:21:45.9470000+00:00</lastModDate>
        
        <creator>San Diego</creator>
        
        <creator>Jeff Tian</creator>
        
        <creator>Jerrell T. Stracener</creator>
        
        <subject>Mission reliability; sustainment reliability; operational availability; basic reliability; networked computer system; system of systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This paper presents a methodology for addressing the challenges and opportunities in defining and selecting the preferred Networked Computer System (NCS) solution in response to specified United States defense mission planning requirements. The identified mission requirements are aligned with existing computer system capabilities, allowing those systems to be acquired and processed as candidates for inclusion in the preferred NCS solution. Selecting the preferred NCS requires a decision-making process supported by associated analysis models. These models are then applied to NCS mission planning to analyze a candidate solution&#8217;s effectiveness in terms of operational availability, mission reliability, capability sustainment, and lifecycle cost. The analysis and models were developed in response to the need to build defense mission planning capability solutions from existing computer systems, enabling Department of Defense acquisition professionals to take a practical approach to selecting and defining the preferred NCS for satisfying a mission.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_4-Methodology_for_Selecting_the_Preferred_Networked.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Link Prediction Schemes Contra Weisfeiler-Leman Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090603</link>
        <id>10.14569/IJACSA.2018.090603</id>
        <doi>10.14569/IJACSA.2018.090603</doi>
        <lastModDate>2018-06-29T12:21:45.9170000+00:00</lastModDate>
        
        <creator>Katie Brodhead</creator>
        
        <subject>Weisfeiler-Leman; link prediction; machine learning; linear regression; common walk; path-based; random walk; stochastic block; matrix factorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>Link prediction is of particular interest to the data mining and machine learning communities. Until recently, all approaches to the problem used embedding-based methods, which leverage either node similarities or latent group memberships for link prediction. Chen and Zhang recently developed a class of non-embedding approaches called Weisfeiler-Leman (WL) models. WL models extract subgraphs around links and then encode subgraph patterns via adjacency matrices using the so-called Palette-WL algorithm. A training stage then learns nonlinear graph topological features for link prediction. Chen and Zhang compared two WL models – a linear regression model (&#8220;WLLR&#8221;) and a neural network model (&#8220;WLNM&#8221;) – against 12 different common link prediction schemes. In this paper, all of the authors&#8217; claims are validated for WLLR. Additionally, WLLR is tested against 22 further embedding-based link prediction techniques arising from common-neighbor-, path-, and random-walk-based schemes. WLLR is shown not to be superior when calculable; in fact, in 80% of the datasets where comparisons were possible, one of our added implementations proved superior.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_3-Link_Prediction_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Control of a 3D Space Robot with an Initial Angular Momentum based on the Nonlinear Model Predictive Control Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090602</link>
        <id>10.14569/IJACSA.2018.090602</id>
        <doi>10.14569/IJACSA.2018.090602</doi>
        <lastModDate>2018-06-29T12:21:45.8530000+00:00</lastModDate>
        
        <creator>Tatsuya Kai</creator>
        
        <subject>3D space robot; universal joint; initial angular momentum; nonlinear model predictive control; robustness; attitude stabilization control; trajectory tracking control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>This paper considers robust control problems for a 3D space robot consisting of two rigid bodies connected by a universal joint, with an initial angular momentum. The initial angular momentum is a particularly difficult parameter of a space robot to measure, since its value depends on the situation. Hence, the main purpose of this paper is to develop a controller for the 3D space robot that is robust with respect to initial angular momenta. First, a mathematical model, some characteristics, and two types of control problems for the 3D space robot are presented. Next, for the robust attitude stabilization control problem, a numerical simulation is performed using the nonlinear model predictive control method. Then, for the robust trajectory tracking control problem, another numerical simulation is carried out. The results show that this approach achieves control that is robust to initial angular momenta in both control problems. In addition, the approach reduces the amount of computation, so real-time control of the 3D space robot can be achieved.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_2-Robust_Control_of_a_3D_Space_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating Classification Rules from Training Samples</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090601</link>
        <id>10.14569/IJACSA.2018.090601</id>
        <doi>10.14569/IJACSA.2018.090601</doi>
        <lastModDate>2018-06-29T12:21:45.7900000+00:00</lastModDate>
        
        <creator>Arun D. Kulkarni</creator>
        
        <subject>Fuzzy membership functions; classification; rule extraction; multispectral images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(6), 2018</description>
        <description>In this paper, we describe an algorithm to extract classification rules from training samples using fuzzy membership functions. The algorithm includes steps for generating classification rules, eliminating duplicate and conflicting rules, and ranking extracted rules. We have developed software to implement the algorithm using MATLAB scripts. As an illustration, we have used the algorithm to classify pixels in two multispectral images representing areas in New Orleans and Alaska. For each scene, we randomly selected 10 per cent of the samples from our training set data for generating an optimized rule set and used the remaining 90 per cent of samples to validate the extracted rules. To validate extracted rules, we built a fuzzy inference system (FIS) using the extracted rules as a rule base and classified samples from the training set data. The results in terms of confusion matrices are presented in the paper.</description>
        <description>http://thesai.org/Downloads/Volume9No6/Paper_1-Generating_Classification_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Feature Selection Algorithms for Predicting Students Academic Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090569</link>
        <id>10.14569/IJACSA.2018.090569</id>
        <doi>10.14569/IJACSA.2018.090569</doi>
        <lastModDate>2018-06-02T14:58:55.8270000+00:00</lastModDate>
        
        <creator>Maryam Zaffar</creator>
        
        <creator>Manzoor Ahmed Hashmani</creator>
        
        <creator>K.S. Savita</creator>
        
        <creator>Syed Sajjad Hussain Rizvi</creator>
        
        <subject>Educational data mining; feature selection algorithms; classifiers; CFS; relief feature selection algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The main aim of all educational organizations is to improve the quality of education and elevate the academic performance of students. Educational Data Mining (EDM) is a growing research field that helps academic institutions improve the performance of their students, and institutions are most often judged by the grades their students achieve in examinations. EDM offers different practices for predicting the academic performance of students. Within EDM, Feature Selection (FS) plays a vital role in improving the quality of prediction models for educational datasets: FS algorithms eliminate irrelevant data from educational repositories and thus increase the accuracy of the classifiers used in EDM practices to support decision-making in educational settings. A good-quality educational dataset produces better results, and decisions based on such a dataset can improve the quality of education through student performance prediction; in light of this, a feature selection algorithm must be chosen carefully. This paper presents an analysis of the performance of filter feature selection algorithms and classification algorithms on two different student datasets. The results obtained from different FS algorithms and classifiers on two student datasets with different numbers of features will also help researchers find the best combinations of filter feature selection algorithms and classifiers. Highlighting the relevance of feature selection to student performance prediction is important, as constructive educational strategies can be derived from a relevant set of features. The results of our study show a 10% difference in prediction accuracy between the two datasets with different numbers of features.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_69-A_Study_of_Feature_Selection_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Search Manager: A Framework for Hybridizing Different Search Strategies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090568</link>
        <id>10.14569/IJACSA.2018.090568</id>
        <doi>10.14569/IJACSA.2018.090568</doi>
        <lastModDate>2018-05-31T15:24:05.2030000+00:00</lastModDate>
        
        <creator>Yousef Abdi</creator>
        
        <creator>Yousef Seyfari</creator>
        
        <subject>Global optimization; metaheuristic; organization management; hybridizing search methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In the last decade, many metaheuristic search methods have been proposed for solving tough optimization problems. Each of these algorithms uses its own learn-by-example mechanism, in terms of a &#8220;movement strategy&#8221;, to evolve the candidate solutions. In this paper, a framework called Search Manager is proposed for hybridizing different learn-by-example methods in one algorithm. It is inspired by organizational management, in which managers change their management method when they observe a performance reduction in their organization. The proposed framework is verified on standard benchmark functions and real-world optimization problems, and is compared with some well-known heuristic search methods. The obtained results indicate not only the optimization capability of the proposed framework, but also its ability to obtain accurate solutions and to achieve higher convergence precision.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_68-Search_Manager_A_Framework_for_Hybridizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Indefinite Cycle Traffic Light Timing Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090567</link>
        <id>10.14569/IJACSA.2018.090567</id>
        <doi>10.14569/IJACSA.2018.090567</doi>
        <lastModDate>2018-05-31T15:24:05.1730000+00:00</lastModDate>
        
        <creator>Ping Guo</creator>
        
        <creator>Daiwen Lei</creator>
        
        <creator>Lian Ye</creator>
        
        <subject>Traffic light; timing strategy; adaptive control; signal control; intelligent transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Intelligent transportation signal control plays an important role in reducing traffic congestion and improving road capacity. The key to signal control is to adjust the traffic lights appropriately according to the traffic flow, i.e., adaptive control. In this paper, we propose a new timing strategy comprising green-time optimization and lane-combination calculation. According to the real-time traffic flow, we optimize the green time and calculate the lane combination to adjust the cycle, from which we obtain the timing plan. Simulation results on random data and actual traffic data show that the proposed strategy can increase traffic efficiency at intersections by more than 15%, reduce vehicle detention, and relieve traffic congestion.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_67-An_Indefinite_Cycle_Traffic_Light_Timing_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Community Detection Algorithm with Label Propagation using Node Importance and Link Weight</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090566</link>
        <id>10.14569/IJACSA.2018.090566</id>
        <doi>10.14569/IJACSA.2018.090566</doi>
        <lastModDate>2018-05-31T15:24:05.1570000+00:00</lastModDate>
        
        <creator>Mohsen Arab</creator>
        
        <creator>Mahdieh Hasheminezhad</creator>
        
        <subject>Label propagation; node importance; link weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Community detection is a principal tool for analysing and studying the structure of a network. The Label Propagation Algorithm (LPA) is a simple and fast community detection algorithm, but it is not accurate enough because of its randomness. Although several advanced versions of LPA have been presented in recent years, their accuracy still needs to be improved. In this paper, an improved version of the label propagation algorithm for community detection, called WILPAS, is presented. The proposed algorithm considers the importance of both nodes and links. WILPAS is parameter-free and therefore requires no prior knowledge. Experiments and benchmarks demonstrate that WILPAS is a fast algorithm that outperforms other representative community detection methods on both synthetic and real-world networks. More specifically, experiments show that the proposed method can detect the true community structure of real-world networks with higher accuracy than other representative label propagation-based algorithms. Finally, experimental results on networks with millions of links reveal that the proposed algorithm preserves the nearly linear time complexity of the traditional LPA. Therefore, the proposed algorithm can efficiently detect communities in large-scale social networks.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_66-Efficient_Community_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A High-Performing Similarity Measure for Categorical Dataset with SF-Tree Clustering Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090565</link>
        <id>10.14569/IJACSA.2018.090565</id>
        <doi>10.14569/IJACSA.2018.090565</doi>
        <lastModDate>2018-05-31T15:24:05.1400000+00:00</lastModDate>
        
        <creator>Mahmoud A. Mahdi</creator>
        
        <creator>Samir E. Abdelrahman</creator>
        
        <creator>Reem Bahgat</creator>
        
        <subject>Algorithm; clustering; similarity; measurement; categorical; F-Tree; SF-Tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Tasks such as clustering and classification assume the existence of a similarity measure to assess the similarity (or dissimilarity) of a pair of observations or clusters. The key difference between most clustering methods lies in their similarity measures. This article proposes a new similarity measure function called PWO (“Probability of the Weights between Overlapped items”), which can be used in clustering categorical datasets; proves that PWO is a metric; presents a framework implementation to detect the best similarity value for different datasets; and improves the F-tree clustering algorithm with a semi-supervised method to refine the results. The experimental evaluation on real categorical datasets, such as Mushrooms, KrVskp, Congressional Voting, Soybean-Large, Soybean-Small, Hepatitis, Zoo, Lenses, and Adult-Stretch, shows that PWO is more effective in measuring the similarity between categorical data than state-of-the-art algorithms; that clustering based on PWO with a pre-defined number of clusters results in a good separation of classes, with high purity and on average 80% coverage of the real classes; and that the overlap estimator accurately estimates the value of the overlap threshold using a small sample of around 5% of the dataset.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_65-A_High_Performing_Similarity_Measure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Divide and Conquer Approach for Solving Security and Usability Conflict in User Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090564</link>
        <id>10.14569/IJACSA.2018.090564</id>
        <doi>10.14569/IJACSA.2018.090564</doi>
        <lastModDate>2018-05-31T15:24:05.1100000+00:00</lastModDate>
        
        <creator>Shah Zaman Nizamani</creator>
        
        <creator>Waqas Ali Sahito</creator>
        
        <creator>Shafique Awan</creator>
        
        <subject>Authentication; alphanumeric passwords; security; passwords memorability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Knowledge-based authentication schemes are divided into textual password schemes and graphical password schemes. Textual password schemes are easy to use but have well-known security issues, such as weakness against online attacks. Graphical password schemes are generally weak against shoulder-surfing attacks, and usability is another issue with most of them. To improve the security of knowledge-based authentication schemes, complex password-entry procedures are used, which improve security but weaken the usability of the authentication scheme. In order to resolve this conflict between security and usability, a user authentication scheme is proposed that contains one registration screen and two login screens, called the easy and secure login screens. The easy login screen provides a quick and convenient way of authenticating, while the secure login screen is resilient to different online security attacks. A user decides, based on the authentication environment, which login screen to use. For secure environments, where the chances of security attacks are low, the easy login screen is recommended; for insecure environments, where the chances of security attacks are high, the secure login screen is recommended. In the proposed scheme, image-based passwords can also be set along with alphanumeric passwords. Results suggest that the proposed scheme improves security against offline and online attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_64-Divide_and_Conquer_Approach_for_Solving_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Results on Agent-Based Indoor Localization using WiFi Signaling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090563</link>
        <id>10.14569/IJACSA.2018.090563</id>
        <doi>10.14569/IJACSA.2018.090563</doi>
        <lastModDate>2018-05-31T15:24:05.0930000+00:00</lastModDate>
        
        <creator>Stefania Monica</creator>
        
        <creator>Federico Bergenti</creator>
        
        <subject>WiFi-based localization; indoor localization; particle swarm optimization; agent technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This paper discusses experimental results on the possibility of accurately estimating the position of smart devices in known indoor environments using agent technology. The discussed localization approaches are based on WiFi signaling, which can be considered a ubiquitous technology in the large majority of indoor environments. The use of WiFi signaling ensures that no specific infrastructure or special on-board sensors are required to support localization. Localization is performed using range estimates from the fixed access points of the WiFi network, which are assumed to have known positions. The performance of two range-based localization algorithms is discussed. The first, the Two-Stage Maximum-Likelihood algorithm, is well known in the literature, while the second is a recent optimization-based algorithm that uses particle swarm techniques. Results discussed in the last part of the paper show that proper processing of WiFi-based range estimates yields accurate position estimates, especially when the optimization-based algorithm is used.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_63-Experimental_Results_on_Agent_based_Indoor_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobility Management Using the IP Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090562</link>
        <id>10.14569/IJACSA.2018.090562</id>
        <doi>10.14569/IJACSA.2018.090562</doi>
        <lastModDate>2018-05-31T15:24:05.0630000+00:00</lastModDate>
        
        <creator>Imtiaz A. Halepoto</creator>
        
        <creator>Adnan Manzoor</creator>
        
        <creator>Nazar H. Phulpoto</creator>
        
        <creator>Sohail A. Memon</creator>
        
        <creator>Muzamil Hussain</creator>
        
        <subject>Mobility; Mobile IP; 4G; foreign network; permanent IP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Time-critical applications, such as VoIP and video conferencing, require continuous Internet connectivity for good performance. Moreover, in vehicular networks it is very common for mobile devices to move from one network to another. In such scenarios, sudden changes in network connectivity may cause problems that affect the data transmission rate. The movement of a mobile node from one network to another is also a challenge for routers, which must maintain routing information and forward data to the corresponding node. In all of the aforementioned scenarios, switching between networks with minimum latency improves performance in terms of mobility and network availability. The Mobile IP protocol serves the purpose of seamless handover of mobile devices from one network to another. A mobile node maintains its permanent IP address using the Mobile IP protocol while moving to a foreign network. When a mobile node establishes a connection with the foreign network, data packets transmitted from the home network are redirected to the foreign network. The Mobile IP protocol establishes a tunnel between the home network and the foreign network. Tunneling continues until the mobile node moves back to the home network or the foreign network advertises the new IP address of the mobile node. With the increasing number of wireless devices, mobility is a key challenge, and for devices with multiple interfaces, such as mobile phones that use both 4G and WiFi, the demand for continuous Internet availability is also high. This paper provides an in-depth discussion of the Mobile IP protocol and its implementation. A network scenario is proposed with the configuration of Mobile IP. According to the obtained simulation results, the Mobile IP protocol increases the availability of the network connection and achieves higher throughput compared with the scenario without Mobile IP.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_62-Mobility_Management_using_the_IP_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control of Industrial Systems to Avoid Failures: Application to Electrical System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090561</link>
        <id>10.14569/IJACSA.2018.090561</id>
        <doi>10.14569/IJACSA.2018.090561</doi>
        <lastModDate>2018-05-31T15:24:05.0470000+00:00</lastModDate>
        
        <creator>Yamen EL TOUATI</creator>
        
        <creator>Saleh ALTOWAIJRI</creator>
        
        <creator>Mohamed AYARI</creator>
        
        <subject>Dynamic hybrid systems; supervisory control; hybrid automata; electrical systems; safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>We resolve the control problem for a class of dynamic hybrid systems (DHS), considering electrical systems as a case study. The objective is to guarantee that the plant never reaches unsafe states. We consider a subclass of DHS called Cumulative Preemptive Event-driven DHS (CPE-DHS). This class is distinguished by the dominance of its discrete aspect, characterized by features such as cumulative continuous variables combined with actions that may be interrupted and restarted. We utilize a subclass of Rectangular Hybrid Automata (RHA), named Constant Slope RHA (CSRHA), as a framework to resolve the control problem. The main contribution is a control algorithm for the class of systems described above. This algorithm ensures that the system meets the requirement specifications by forcing certain events; the forcing action takes the form of restrictions on the transition guards of the CSRHA. The termination/decidability and correctness of the algorithm are established by theorems and formal proofs. This contribution ensures that the system always remains in safe states, avoiding failures due to the reachability of unsafe states. Our approach can be applied to a large category of industrial systems, especially the electrical systems that we consider as a case study.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_61-Control_of_Industrial_Systems_to_Avoid_Failures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Saudi Parents’ Intention to Adopt Technical Mediation Tools to Regulate Children’s Internet Usage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090560</link>
        <id>10.14569/IJACSA.2018.090560</id>
        <doi>10.14569/IJACSA.2018.090560</doi>
        <lastModDate>2018-05-31T15:24:05.0170000+00:00</lastModDate>
        
        <creator>Ala’a Bassam Al-Naim</creator>
        
        <creator>Md Maruf Hasan</creator>
        
        <subject>Child and family safety online; parental control and mediation; technology mediation; Unified Theory of Acceptance and Use of Technology (UTAUT); Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The adverse and harmful effects of the Internet on young children have become a global concern. Parents tend to use different strategies to ensure their children’s online safety, and many studies have suggested that parental mediation may play a positive role in controlling children’s online behavior. The purpose of this study is to identify the factors that shape Saudi parents’ intention to regulate their children’s online practices using technical mediation tools. An integrated model is proposed, based on well-known Information Systems theories and models, to investigate parental intention to adopt technical mediation tools. A questionnaire-based survey was conducted for data collection. Basic descriptive statistical analysis, reliability, and validity assessments were used to analyze the data at the preliminary stage, followed by advanced analysis using Structural Equation Modeling to test the research hypotheses. The results indicate that effort expectancy, performance expectancy, general computer self-efficacy, perceived severity, and perceived vulnerability are the main predictors of Saudi parents’ intention to regulate their children’s online behaviors using technical mediation tools.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_60-Investigating_Saudi_Parents_Intention_to_Adopt_Technical_Mediation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Koch Island Fractal Patch Antenna (KIFPA) for Wideband Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090559</link>
        <id>10.14569/IJACSA.2018.090559</id>
        <doi>10.14569/IJACSA.2018.090559</doi>
        <lastModDate>2018-05-31T15:24:04.9870000+00:00</lastModDate>
        
        <creator>Meryem HADJI</creator>
        
        <creator>Sidi Mohammed MERIAH</creator>
        
        <creator>Djamila ZIANI</creator>
        
        <subject>Fractal antenna; Koch Island fractal-shape; microstrip patch antenna; wideband antenna</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In this paper, a new modified printed Koch Island Fractal Patch Antenna (KIFPA) is studied. The design of this antenna is based on a combination of techniques: the first concerns the fractal geometry of the patch, while the second comprises a modified ground plane. The patch is etched according to the Koch Island geometry with different iteration numbers (n = 1, 2, and 3) as inductive loading, and is proximity-fed by a 50 Ω microstrip line. The proposed antenna operates in the frequency band 6.03–12.62 GHz, a 70.7% bandwidth for S11 ≤ −10 dB. The antenna gain and radiation patterns within the operating band are simulated. The design was carried out using the CST Microwave Studio software, and the results are presented, compared, and discussed. Finally, the proposed antenna is fabricated and its reflection coefficient is measured to validate the simulation results.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_59-Koch_Island_Fractal_Patch_Antenna_KIFPA_for_Wideband_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Arduino-based Prepaid Energy Meter using GSM Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090558</link>
        <id>10.14569/IJACSA.2018.090558</id>
        <doi>10.14569/IJACSA.2018.090558</doi>
        <lastModDate>2018-05-31T15:24:04.9530000+00:00</lastModDate>
        
        <creator>Uzair Ahmed Rajput</creator>
        
        <creator>Khalid Rafique</creator>
        
        <creator>Abdul Sattar Saand</creator>
        
        <creator>Mujtaba Shaikh</creator>
        
        <creator>Muhammad Tarique</creator>
        
        <subject>Arduino; energy meter; smart meters; RFID; GSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>One of the defective subsystems contributing to the enormous financial losses of power supply companies is the conventional metering and billing framework. Errors are introduced at every stage of billing energy consumption: faults in conventional meters, human reading errors when recording the consumed energy, and mistakes in processing paid and due bills. The solution to this drawback is a prepaid billing framework for consumed energy. Many developing countries are shifting from conventional energy management practices to modern ones by replacing old conventional energy meters with smart meters equipped with a prepaid facility to measure power consumption, in order to reduce the revenue deficits faced by utilities due to customers’ unwillingness to pay for consumed energy on time. Our proposed design, built on Arduino and GSM technology, is an advancement over the conventional energy meter and enables consumers to effectively manage their electricity usage. The system performs well, as shown by the acquired results. Charging in advance eliminates the problems of unpaid bills and human mistakes in meter readings, thereby guaranteeing justified revenue for the utility.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_58-Modeling_of_Arduino_based_Prepaid_Energy_Meter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Rumors Detection in Social Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090557</link>
        <id>10.14569/IJACSA.2018.090557</id>
        <doi>10.14569/IJACSA.2018.090557</doi>
        <lastModDate>2018-05-31T15:24:04.9400000+00:00</lastModDate>
        
        <creator>Rehana Moin</creator>
        
        <creator>Zahoor-ur-Rehman</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mohammad Eid Alzahrani</creator>
        
        <creator>Muhammad Qaiser Saleem</creator>
        
        <subject>Social networks; rumors; inquiry comments; question identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The development of social networks has made it easy for the general public to communicate rapidly with each other at any time. Such services provide quick transmission of information, which is their positive side, but their negative side must also be kept in mind: misinformation can spread. In this era of digitalization, validating such information has become a real challenge due to the lack of information authentication methods. In this paper, we design a framework for rumor detection from Facebook event data, based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments using a rule-based approach that entails regular expressions to categorize as inquiries those sentences starting with an intransitive verb (like is, am, was, will, would, and so on) and those sentences ending with a question mark. We set a threshold value to compare with the ratio of inquiry comments to English comments and thereby identify rumors. We verified the proposed ICDM on labeled data collected from snopes.com. Our experiments revealed that the proposed method performs considerably well in comparison to existing machine learning techniques, attaining 89% precision, 77% recall, and an 82% F-measure. We believe the experimental findings of this study will be useful for worldwide adoption.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_57-Framework_for_Rumors_Detection_in_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monitoring Vaccine Cold Chain Model with Coloured Petri Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090556</link>
        <id>10.14569/IJACSA.2018.090556</id>
        <doi>10.14569/IJACSA.2018.090556</doi>
        <lastModDate>2018-05-31T15:24:04.9230000+00:00</lastModDate>
        
        <creator>Fatima Ouzayd</creator>
        
        <creator>Hajar Mansouri</creator>
        
        <creator>Manal Tamir</creator>
        
        <creator>Raddouane Chiheb</creator>
        
        <creator>Zied Benhouma</creator>
        
        <subject>Vaccine cold chain; monitoring; World Health Organization (WHO); colored Petri net (CPN); performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>To protect vaccines from excessively high or low temperatures throughout the supply chain, from manufacturing to administration, it is necessary to monitor and evaluate vaccine cold chain performance in real time. Today, therefore, smart tracking is a requirement, one that is accentuated in critical systems such as the vaccine supply chain. In this article, we propose a model for real-time cold chain monitoring using a colored Petri net (CPN). The model focuses on the central storage of vaccines and takes into account certain WHO (World Health Organization) recommendations. The simulation and the key performance indicators obtained can be useful to decision-makers in measuring the effectiveness and efficiency of vaccine storage.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_56-Monitoring_Vaccine_Cold_Chain_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Usability Awareness in Local IT Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090555</link>
        <id>10.14569/IJACSA.2018.090555</id>
        <doi>10.14569/IJACSA.2018.090555</doi>
        <lastModDate>2018-05-31T15:24:04.8930000+00:00</lastModDate>
        
        <creator>Mahmood Ashraf</creator>
        
        <creator>Lal Khan</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Ahmed Alghamdi</creator>
        
        <creator>Mohammed Alqarni</creator>
        
        <creator>Thabit Sabbah</creator>
        
        <creator>Muzafar Khan</creator>
        
        <subject>Usability; usability awareness; human-computer interaction (HCI); HCI practitioners; Pakistan IT industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Usability awareness receives considerable attention from industry professionals and researchers throughout the world, but it is limited in Pakistan. This study reports survey results on the current state of usability awareness in the local Information Technology (IT) industry. Forty participants – IT practitioners from the IT industry – were involved in the study. We used the Usability Maturity Model (UMM) and content analysis methodology to determine the current status of usability awareness. The results indicate that 1) almost half (18 out of 40) of the participants were unaware of the term usability and related concepts, 2) there is a shortage of HCI/usability professionals in organizations, 3) most of the software companies were at the unrecognized level of the UMM, and 4) they were also not interested in usability because of a limited or non-existent budget for it. The study also reveals a gap between usability awareness and its perceived usefulness among IT professionals.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_55-A_Study_on_Usability_Awareness_in_Local_IT_Industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gaze Direction based Mobile Application for Quadriplegia Wheelchair Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090554</link>
        <id>10.14569/IJACSA.2018.090554</id>
        <doi>10.14569/IJACSA.2018.090554</doi>
        <lastModDate>2018-05-31T15:24:04.8770000+00:00</lastModDate>
        
        <creator>Muayad Sadik Croock</creator>
        
        <creator>Salih Al-Qaraawi</creator>
        
        <creator>Rawan Ali Taban</creator>
        
        <subject>Gaze direction detection; mobile application; obstacle detection; quadriplegia; Raspberry Pi microcomputer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>People with quadriplegia have attracted the interest of researchers in introducing automated movement systems for adapted special-purpose wheelchairs. Such systems ease the independent movement of people with this type of disability. This paper proposes a comprehensive control system that can control the movement of quadriplegia wheelchairs using gaze direction and blink detection. The presented system includes two main parts. The first part consists of a smartphone running the proposed gaze-direction-detection mobile application, which sends direction commands to the second part via a Wi-Fi connection. The second part is a prototype representing the quadriplegia wheelchair, containing a robotic car (a two-wheel-drive car), a Raspberry Pi III, and ultrasound sensors. The gaze direction commands sent from the smartphone are received by the Raspberry Pi, which processes them and produces the control signals. The ultrasound sensors are fixed at the front and back of the car to perform an emergency stop when obstacles are detected. The proposed system is based on gaze tracking and direction detection without requiring calibration with additional sensors and instruments. The obtained results show the superior performance of the proposed system, supporting the authors’ claims. The accuracy ranges between 66% and 82%, depending on the environment (indoor or outdoor), the surrounding lighting, and the smartphone type.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_54-Gaze_Direction_based_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geographical Distance and Communication Challenges in Global Software Development: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090553</link>
        <id>10.14569/IJACSA.2018.090553</id>
        <doi>10.14569/IJACSA.2018.090553</doi>
        <lastModDate>2018-05-31T15:24:04.8470000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Saeed Faroom</creator>
        
        <creator>Muhammad Nauman Ali</creator>
        
        <creator>Nasir Shehzad</creator>
        
        <creator>Sheraz Yousaf</creator>
        
        <creator>Hammad Saleem</creator>
        
        <subject>Global Software Development (GSD); distributed software development; geographical distance challenges; communication and collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Owing to its numerous advantages, global software engineering is now a major trend in the software development industry. The basic drivers of this trend are flexibility, faster development, and expected cost savings. Software development has moved from traditional development to global software development (GSD), which is now a common practice in the software industry. In GSD, developers are distributed across different sites and countries, and many problems arise due to physical, social, and cultural barriers. GSD faces a number of challenges, including geographical distance, communication and collaboration, time, culture, trust, task distribution, and requirements gathering. In this paper, the authors conduct a detailed study of the geographical distance and communication challenges in GSD, their interdependencies, and the proposed solutions and guidelines to address these challenges, which are critical to the success of GSD projects. A detailed literature review is provided, combined results are summarized, and on the basis of these studies a comparative study is made. This research will help other researchers to draw up new strategies to tackle these challenges.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_53-Geographical_Distance_and_Communication_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Sign Language Recognition: Performance Comparison of Word based Approach with Spelling based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090552</link>
        <id>10.14569/IJACSA.2018.090552</id>
        <doi>10.14569/IJACSA.2018.090552</doi>
        <lastModDate>2018-05-31T15:24:04.8300000+00:00</lastModDate>
        
        <creator>Shazia Saqib</creator>
        
        <creator>Syed Asad Raza Kazmi</creator>
        
        <creator>Khalid Masood</creator>
        
        <creator>Saleh Alrashed</creator>
        
        <subject>Feature extraction; human computer interaction; image segmentation; object recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Computer-based interaction has evolved through a number of phases. From the command-line interface to menu-driven environments to the graphical user interface, communication has evolved into a more user-friendly environment. A new form of communication on the rise is gesture-based communication, which is essentially a touch-free environment. Although its applications are mainly for the deaf community, smartphones, laptops, and other similar devices are encouraging this new kind of communication. Sign languages all over the world have dictionaries of several thousand signs. Most of these signs are word-based, which means that they do not make use of basic alphabet signs; rather, a new sign has to be designed for every new word added to the dictionary. This paper suggests the use of spelling-based gestures, especially when communicating with smartphones and laptops.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_52-Automatic_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technical and Perceived Usability Issues in Arabic Educational Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090551</link>
        <id>10.14569/IJACSA.2018.090551</id>
        <doi>10.14569/IJACSA.2018.090551</doi>
        <lastModDate>2018-05-31T15:24:04.8130000+00:00</lastModDate>
        
        <creator>Mohamed Benaida</creator>
        
        <creator>Abdallah Namoun</creator>
        
        <subject>Arabic educational websites; perceived usability; automatic evaluation; student perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Educational websites are often used as effective communication mediums to provide useful information for students and course instructors. The current study explores the perceived usability of three top-ranked Arabic educational websites across seven key usability components: effectiveness, efficiency, learnability, memorability, errors, satisfaction, and content. Moreover, the study also identifies the key technical and usability issues that currently exist within Arabic educational websites. A two-phase process encompassing automated tools and user testing was adopted to evaluate the technical performance and student acceptance of Arabic educational websites. In the automatic evaluation, two tools, namely, Web Page Analyser and GTMetrix, assessed the websites against a number of well-known performance guidelines and criteria. The student evaluation entailed 150 students completing three interaction tasks and evaluating the sites using the CSUQ questionnaire. The findings indicate that Arabic educational websites suffered from various technical issues, such as a high number of HTML objects and their large size and, consequently, slow loading speed. Moreover, the websites failed to satisfy all usability components, and students rated them negatively. Relevant guidelines for the effective design of Arabic educational websites are also discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_51-Technical_and_Perceived_Usability_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Task Scheduling Algorithms in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090550</link>
        <id>10.14569/IJACSA.2018.090550</id>
        <doi>10.14569/IJACSA.2018.090550</doi>
        <lastModDate>2018-05-31T15:24:04.7970000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Mehwashma Amir</creator>
        
        <creator>Bilal Mazhar</creator>
        
        <creator>Shehzad Ali</creator>
        
        <creator>Rabiya Jalil</creator>
        
        <creator>Javaria Khalid</creator>
        
        <subject>Task scheduling; algorithms; cloud computing; min-max; genetic algorithm; load balancing; resource utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Cloud computing can be viewed as an enhanced form of client-server, cluster, and grid computing. Cloud users can virtually access resources over the internet. The tasks submitted by cloud users determine the efficiency and performance of cloud computing services. One of the most essential factors in increasing the efficiency and performance of the cloud environment by maximizing resource utilization is task scheduling. This paper presents a survey of the different scheduling algorithms used by cloud providers. Different scheduling algorithms are available to achieve quality of service, improve performance, and minimize execution time. Task scheduling is an essential problem in cloud computing that has to be optimized by combining different parameters. This paper compares several job scheduling techniques with respect to parameters such as response time, load balance, execution time, and makespan, in order to find the best and most efficient task scheduling algorithm under these parameters. The comparison of scheduling algorithms is also presented in tabular form, which helps in identifying the best algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_50-Comparison_of_Task_Scheduling_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Security as a Service to Protect the Critical Resources of Mobile Computing Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090549</link>
        <id>10.14569/IJACSA.2018.090549</id>
        <doi>10.14569/IJACSA.2018.090549</doi>
        <lastModDate>2018-05-31T15:24:04.7830000+00:00</lastModDate>
        
        <creator>Abdulrahman Alreshidi</creator>
        
        <subject>Software engineering; mobile computing; cloud computing; computer security; mobile cloud computing; security as a service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Mobile computing is fast replacing traditional computing paradigms by offering its users portable computation and context-aware communication. Despite the benefits of mobile computing, such as portability and context-sensitivity, there are critical challenges that must be addressed, such as the resource poverty of mobile devices and the security of mobile users’ data. Implementing security mechanisms that execute on mobile devices can be challenging, as mobile devices lack the processor, memory, and battery resources required to support the continuous, long-term execution of computation-intensive tasks. The cloud computing model can provide virtually unlimited hardware, software, and service resources to compensate for the resource poverty of mobile devices. In recent years, there has been much research and development on solutions and frameworks that preserve the security and privacy of mobile devices and their data. However, there has been little effort to secure mobile devices while also supporting efficient utilization of their limited resources. In this paper, we propose Security as a Service for mobile devices (SeaaS for mobile), which integrates mobile computing and cloud computing technologies to secure the critical resources of mobile devices. The proposed solution aims to support 1) security for the data-critical resources of mobile devices, and 2) security as a service provided by cloud servers for efficient utilization of mobile device resources. We demonstrate security as a service based on a practical scenario for the security of mobile devices. The evaluation results show that the proposed solution is 1) accurate in detecting potential security threats, and 2) computationally efficient for mobile devices. The proposed solution, as part of ongoing research, provides the foundations for developing a framework to address SeaaS for mobile. It aims to advance the research state of the art in software engineering and mobile cloud computing, focusing specifically on exploiting cloud-based services to secure mobile devices.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_49-Towards_Security_as_a_Service_to_Protect_the_Critical_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Privacy Preserving Commutative Encryption-Based Matchmaking in Mobile Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090548</link>
        <id>10.14569/IJACSA.2018.090548</id>
        <doi>10.14569/IJACSA.2018.090548</doi>
        <lastModDate>2018-05-31T15:24:04.7670000+00:00</lastModDate>
        
        <creator>Fizza Abbas</creator>
        
        <creator>Ubaidullah Rajput</creator>
        
        <creator>Adnan Manzoor</creator>
        
        <creator>Imtiaz Ali Halepoto</creator>
        
        <creator>Ayaz Hussain</creator>
        
        <subject>Privacy; security; matchmaking; interests; mobile social network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The last decade or so has witnessed a sharp rise in the growth of mobile devices. These mobile devices and wireless communication technologies enable people around the globe to communicate with each other instantaneously. This has led to the emergence of a new type of social networking known as the Mobile Social Network (MSN). MSN offers a wide range of useful applications, such as group text services, social gaming, and location-based services, to name a few. One popular application of MSN is matchmaking, where people match their interests and hobbies to find like-minded people for a possible friendship. However, revealing personal hobbies can pose significant threats to a user’s privacy. Therefore, a privacy-preserving evaluation method is needed to find the similarity between users’ interests. There are various techniques to achieve privacy-preserving matchmaking, such as commutative encryption, oblivious transfer, and homomorphic encryption. This paper discusses the feasibility of commutative encryption by evaluating recently proposed schemes. The paper attempts to identify various shortcomings in the present work and discusses future directions.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_48-Towards_Privacy_Preserving_Commutative_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Heart Failure Prediction Models using Big Data Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090547</link>
        <id>10.14569/IJACSA.2018.090547</id>
        <doi>10.14569/IJACSA.2018.090547</doi>
        <lastModDate>2018-05-31T15:24:04.7370000+00:00</lastModDate>
        
        <creator>Heba F. Rammal</creator>
        
        <creator>Ahmed Z. Emam</creator>
        
        <subject>Big data; hadoop; healthcare; heart failure; prediction model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Big Data technologies have great potential to transform healthcare, as they have revolutionized other industries. In addition to reducing costs, they could save millions of lives and improve patient outcomes. Heart failure (HF) is the leading cause of death, both nationally and internationally. The social and individual burden of this disease can be reduced by its early detection. However, the signs and symptoms of HF in the early stages are not clear, so it is relatively difficult to prevent or predict it. The main objective of this research is to propose a model to predict patients with HF using a multi-structure dataset integrated from various resources. Our proposed model is underpinned by a study of the current analytical techniques that support heart failure prediction, followed by the construction of an integrated model based on Big Data technologies using the WEKA analytics tool. To achieve this, we extracted different important heart failure factors from the King Saud Medical City (KSUMC) system, Saudi Arabia, which are available in structured, semi-structured, and unstructured formats. Unfortunately, a lot of information is buried in the unstructured data format. We applied pre-processing techniques to enhance the parameters and integrated the different data sources in the Hadoop Distributed File System (HDFS) using the distributed-WEKA-spark package. Then, we applied data-mining algorithms to discover patterns in the dataset to predict heart risks and causes. Finally, the analyzed report is stored and distributed to obtain the insight needed from the prediction. Our proposed model achieved an accuracy and an Area Under the Curve (AUC) of 93.75% and 94.3%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_47-Heart_Failure_Prediction_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Measurement Model of Mobile User Connectivity in Femtocell/Macrocell Networks using Fractional Frequency Re-use Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090546</link>
        <id>10.14569/IJACSA.2018.090546</id>
        <doi>10.14569/IJACSA.2018.090546</doi>
        <lastModDate>2018-05-31T15:24:04.7030000+00:00</lastModDate>
        
        <creator>Mehrin Anannya</creator>
        
        <creator>Riad Mashrub Shourov</creator>
        
        <subject>Femtocell; macrocell; cross-tier interferences; co-tier interferences; closed access methods; open access methods; connectivity probability; mobility factor; outage probability; fractional frequency re-use scheme </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Technologies are advancing to new dimensions every day. As part of this progression, the mobile cellular system is at the summit of constant advancement. The usage of femtocells in mobile cellular systems has had a massive impact on their architecture. The incorporation of femtocells into macrocells for 4G mobile network communication services (such as voice calls and data services) among mobile stations within a few meters has been one of the promising approaches. A femto access point (FAP) in the femtocell handles the authorization of the users around it. Among the three access methods, in the Closed Access Method (CAM) the FAP allows only authorized users, excluding macrocell users, whereas in the Open Access Method (OAM) any crossing macrocell user within the radio coverage of the femtocell, as well as the femtocell users, can get FAP access. OAM is more efficient at reducing cross-tier interference because it deals with both types of users within the femtocell coverage. This paper proposes a performance measurement model for the mobile connection probability depending on the mobility factor of mobile users and the communication range in femtocell/macrocell networks. Furthermore, a derivation is performed to obtain the optimum result for the outage and connectivity probabilities under different numbers of femtocells and mobile users. Finally, to maximize the spectral efficiency of the probable frequency allocation, a fractional frequency re-use scheme among the networks is proposed.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_46-Performance_Measurement_Model_of_Mobile_User_Connectivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Methods of Resource Provisioning Mechanisms in Cloud: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090544</link>
        <id>10.14569/IJACSA.2018.090544</id>
        <doi>10.14569/IJACSA.2018.090544</doi>
        <lastModDate>2018-05-31T15:24:04.6730000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Talia Anwar</creator>
        
        <creator>Sadaf Ilyas</creator>
        
        <creator>Farheen Jafar</creator>
        
        <creator>Munazza Iftikhar</creator>
        
        <creator>Maryam Malik</creator>
        
        <creator>Noreen Islam Deen</creator>
        
        <subject>Resource provisioning; resource provisioning mechanisms; cloud computing; systematic review; comparison between resource provisioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Delivering information through cloud computing has become a modern form of computation. For this purpose, an electronic device with access to an active web server is required. To deliver different resources, the cloud supplier provides computing power for cloud users to deploy their multiple types of applications at any time on different platforms. In cloud computing, the main difficulty relates to the best use of resources, i.e., resource provisioning. Because the desired resources can be scarce in the cloud, resource provisioning becomes a demanding task. To maintain quality of service, workloads need to be provisioned with suitable resources. The main problem is to find the appropriate workload, which depends on the cloud user and is related to the resource-application requirement pairing. This paper examines cloud resource provisioning and identification, in general and in specific terms, respectively. A methodical analysis of resource provisioning in cloud computing is presented, in which resource provisioning, the different types of resource provisioning mechanisms, their comparisons, and their benefits are described.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_44-Investigating_Methods_of_Resource_Provisioning_Mechanisms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>University Notification Subscription System using Amazon Web Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090545</link>
        <id>10.14569/IJACSA.2018.090545</id>
        <doi>10.14569/IJACSA.2018.090545</doi>
        <lastModDate>2018-05-31T15:24:04.6730000+00:00</lastModDate>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Zaheer Mehmood Dar</creator>
        
        <creator>Sabah Mubarik Kayani</creator>
        
        <creator>Mahnoor Dar</creator>
        
        <creator>Muhammad Hassan Shafiq</creator>
        
        <creator>Imran Kabir</creator>
        
        <creator>Fatima Masood</creator>
        
        <creator>Hamna Zakriya</creator>
        
        <creator>Asad Ali</creator>
        
        <subject>Publish-Subscribe system; content and topic-based; university notification; Amazon web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The Publish-Subscribe (Pub-Sub) system is an asynchronous communication service widely used in serverless and micro-services architectures. In a Pub-Sub system, publishers publish messages to a topic, and those messages are immediately received by all of the subscribers to that topic. Nowadays, students face a number of problems regarding admission details, assignments, offered courses, fee schedules, etc. They often miss deadlines, which affects their studies. This paper focuses on the issues students face regarding message delivery, duplication of data, heavy traffic, etc. These issues can be overcome by using Amazon Web Services to optimize the product and make the university Pub-Sub system flexible. The cloud services are implemented using a hybrid technique, i.e., a content-based and topic-based architecture. The paper also explains the multitude of use cases of the university notification system, which makes it more adaptable, as subscriptions are identified by particular data content.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_45-University_Notification_Subscription_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motif Detection in Cellular Tumor p53 Antigen Protein Sequences by using Bioinformatics Big Data Analytical Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090543</link>
        <id>10.14569/IJACSA.2018.090543</id>
        <doi>10.14569/IJACSA.2018.090543</doi>
        <lastModDate>2018-05-31T15:24:04.6430000+00:00</lastModDate>
        
        <creator>Tariq Ali</creator>
        
        <creator>Sana Yasin</creator>
        
        <creator>Umar Draz</creator>
        
        <creator>M. Ayaz Arshad</creator>
        
        <creator>Tayyaba Tariq</creator>
        
        <creator>Sarah Javaid</creator>
        
        <subject>Bio-informatics; motif detection; guardian protein Tp53; DNA; tumor antigen; cancer; un-gapped motifs; MEME</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Due to the rapid growth of data in the fields of big data and bioinformatics, the analysis and management of data is a very difficult task for scientists and researchers. Data exist in many formats, such as groups and clusters. Data that exist in group form and contain repeated patterns are called motifs. Many tools and techniques are available in the literature to detect motifs in different fields, such as neural networks, antigen/antibody proteins, metabolic pathways, DNA/RNA sequences, and protein-protein interactions (PPI). In this paper, motif detection is performed on a tumor antigen protein, namely the cellular tumor antigen p53 (guardian of the protein and genome), which regulates the cell cycle and suppresses tumor growth in the human body. Since tumors are a deadly disease and give rise to many other conditions in human beings, such as brain stroke and brain hemorrhage, there is a need to investigate the relationships of this tumor protein, which protects humans not only from brain tumors but also from many other diseases derived from them. To find the gaps between the motifs in the tumor antigen, GLAM2 is used, which detects the distances between motifs very efficiently. The same tumor antigen protein is evaluated with different tools, namely MEME, TOMTOM, Motif Finder, and DREME, to analyze the results critically. As the tumor protein exists in multiple species, a comparison of the homologous tumor antigen protein across different species is also carried out to check the diversity level of this protein. Our proposed approach gives better results and lower computational time than other approaches for different types of user characteristics.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_43-Motif_Detection_in_Cellular_Tumor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Mass Panic using Internet of Things and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090542</link>
        <id>10.14569/IJACSA.2018.090542</id>
        <doi>10.14569/IJACSA.2018.090542</doi>
        <lastModDate>2018-05-31T15:24:04.6270000+00:00</lastModDate>
        
        <creator>Gehan Yahya Alsalat</creator>
        
        <creator>Mohammad El-Ramly</creator>
        
        <creator>Aly Aly Fahmy</creator>
        
        <creator>Karim Said</creator>
        
        <subject>Internet of Things; IoT; Mobile Crowd Sensing (MCS); wearables;  mass panic; mass gatherings; accelerometer;  Optical Heart Rate (HR) sensor; abnormal crowd behaviour; deep learning;  Recurrent Neural Network (RNN);  Long Short Term Memory (LSTM); Gated Recurrent Unit (GRU); time series</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The increase in emergency situations that cause mass panic at mass gatherings, such as terrorist attacks, random shootings, stampedes, and fires, sheds light on the fact that advancements in technology should contribute to the timely detection and reporting of serious abnormal crowd behaviour. The new paradigm of the Internet of Things (IoT) can contribute to this. In this study, a method for the real-time detection of abnormal crowd behaviour at mass gatherings is proposed. This system is based on advanced wireless connections, wearable sensors, and machine learning technologies. It is a new crowdsourcing approach that considers humans themselves as the surveillance devices that exist everywhere. A sufficient number of an event’s attendees are supposed to wear an electronic wristband which contains a heart rate sensor, motion sensors, and assisted GPS, and has a wireless connection. It detects abnormal behaviour by detecting heart rate increases and abnormal motion. Due to the unavailability of a public bio-dataset on mass panic, the dataset for this study was collected from 89 subjects wearing the above-mentioned wristband, generating 1054 data samples. Two types of data were collected: first, data on normal daily activities, and second, data on abnormal activities resembling escape-panic behaviour. Moreover, another abnormal dataset was synthetically generated to simulate panic with limited motion. In our proposed approach, two phases of data analysis are carried out. Phase I is a deep machine learning model used to analyze the collected sensor readings from the wristband and detect whether the person has indeed panicked, in order to send alerting signals. Phase II data analysis takes place on the monitoring server, which receives the alerting signals and concludes whether it is a mass panic incident or a false positive. Our experiments demonstrate that the proposed system can offer a reliable, accurate, and fast solution for panic detection. The experiments use the Hajj pilgrimage as a case study.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_42-Detection_of_Mass_Panic_using_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Accurate Multi-Biometric Personal Identification Model using Histogram of Oriented Gradients (HOG)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090541</link>
        <id>10.14569/IJACSA.2018.090541</id>
        <doi>10.14569/IJACSA.2018.090541</doi>
        <lastModDate>2018-05-31T15:24:04.5970000+00:00</lastModDate>
        
        <creator>Mostafa A. Ahmad</creator>
        
        <creator>Ahmed H. Ismail</creator>
        
        <creator>Nadir Omer</creator>
        
        <subject>Biometric identifiers; personal identification; multi-biometric systems; face recognition; digital signature; Histogram of Oriented Gradients (HOG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Biometrics is the detection and description of individuals’ physiological and behavioral features. Many systems require reliable personal identification schemes to either confirm or determine the identity of an individual requesting their services. Multi-biometrics are required in the current context of large worldwide biometric databases and to meet newly developing security demands. Distinctive and measurable features used to distinguish individuals are known as biometric identifiers. Multi-biometric systems integrate multiple identifiers to increase recognition accuracy. Face and digital signature identifiers remain a challenge in many applications, especially in security systems. The fundamental objective of this paper is to integrate both identifiers in an accurate personal identification model. A reliable multi-biometric model based on Histogram of Oriented Gradients (HOG) features of the face and digital signature, able to identify individuals accurately, is proposed. The methodology tunes several parameters, such as the weights of the HOG features in the merging process, the HOG parameters themselves, and the distance method in the matching process, to gain higher accuracy. The proposed model achieves excellent results in personal identification using HOG features of the digital signature and face together. The results show that the HOG feature descriptor performs target matching at an average accuracy of 100% for face recognition together with the digital signature. It outperforms existing feature sets, with an accuracy of 84.25% for the face only and 97.42% for the digital signature only.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_41-An_Accurate_Multi_Biometric_Personal_Identification_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tuning of Customer Relationship Management (CRM) via Customer Experience Management (CEM) using Sentiment Analysis on Aspects Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090540</link>
        <id>10.14569/IJACSA.2018.090540</id>
        <doi>10.14569/IJACSA.2018.090540</doi>
        <lastModDate>2018-05-31T15:24:04.5800000+00:00</lastModDate>
        
        <creator>Hamed AL-Rubaiee</creator>
        
        <creator>Khalid Alomar</creator>
        
        <creator>Renxi Qiu</creator>
        
        <creator>Dayou Li</creator>
        
        <subject>Opinion mining; customer relationship management; customer experience management; sentiment analysis; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This study proposes a framework that combines a supervised machine learning and a semantic orientation approach to tune Customer Relationship Management (CRM) via Customer Experience Management (CEM). The framework first extracts data from social media and then integrates CRM and CEM by tuning and optimising CRM to reflect the needs and expectations of users on social media. In other words, in order to reduce the gap between users’ predicted opinions in CRM and their opinions on social media, the existing data from CEM are applied to determine similar behavioural patterns of customers towards similar outcomes within CRM. CRM data and data extracted from social media are consolidated by an unsupervised data mining method (association). The framework leads to a quantitative approach to uncovering relationships between the data extracted from social media and the CRM data. The results show that changing some aspects of the e-learning criteria that students requested in their social media posts can help to enhance the classification accuracy on the learning management system (LMS) data and to better understand students’ study statuses. Furthermore, the results show a match between students’ opinions in CRM and CEM, especially in the negative and neutral classes.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_40-Tuning_of_Customer_Relationship_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relative Humidity Profile Estimation Method with AIRS (Atmospheric Infrared Sounder) Data by Means of SDM (Steepest Descend Method) with the Initial Value Derived from Linear Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090539</link>
        <id>10.14569/IJACSA.2018.090539</id>
        <doi>10.14569/IJACSA.2018.090539</doi>
        <lastModDate>2018-05-31T15:24:04.5630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Atmospheric Infrared Sounder (AIRS); Steepest Descend Method (SDM); LED; MODTRAN; relative humidity; atmospheric model; Infrared sounder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>A relative humidity profile estimation method using AIRS (Atmospheric Infrared Sounder) data by means of the SDM (Steepest Descent Method), with the initial value derived from LED (a linear estimation method), is proposed. Through experiments, it is found that the relative humidity estimation error is almost 15%. Therefore, relative humidity remains a tough issue for retrieval. It is also found that the estimation error does not depend on the designated atmospheric model: Mid-Latitude Summer/Winter or Tropic. Even if the assigned atmospheric model is not correct, the proposed SDM-based method yields almost the same estimated relative humidity. In other words, it is robust against the choice of atmospheric model.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_39-Relative_Humidity_Profile_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Segmentation Algorithm for Solar Filaments in H-Alpha Images using a Context-based Sliding Window</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090538</link>
        <id>10.14569/IJACSA.2018.090538</id>
        <doi>10.14569/IJACSA.2018.090538</doi>
        <lastModDate>2018-05-31T15:24:04.5500000+00:00</lastModDate>
        
        <creator>Ibrahim A. Atoum</creator>
        
        <subject>Solar image processing; solar filament; segmentation; sliding window; Coronal mass ejections</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>There are many features which appear on the surface of the sun. One feature that appears clearly is the dark threads in Hydrogen-alpha (Hα) spectrum solar images. These ‘filaments’ have a definite correlation with Coronal Mass Ejections (CMEs). A CME is a large release of plasma into space; it can be hazardous to astronauts and spacecraft if it is ejected towards the Earth. Knowing the exact attributes of solar filaments may open the way towards predicting the occurrence of CMEs. In this paper, an efficient and fully automated algorithm for solar filament segmentation that does not compromise accuracy is proposed. The algorithm uses statistical measures to design the thresholding equations and is written in the C++ programming language. The square root of the range, as a measure of the variability of image intensity values, is used to determine the size of the sliding window at run time. There are many previous studies in this area, but no single segmentation method could previously claim to be fully automatic. Samples were taken from several representative regions in low-contrast and high-contrast solar images to verify the viability and efficacy of the method.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_38-An_Automatic_Segmentation_Algorithm_for_Solar_Filaments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cascades Neural Network based Segmentation of Fluorescence Microscopy Cell Nuclei</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090537</link>
        <id>10.14569/IJACSA.2018.090537</id>
        <doi>10.14569/IJACSA.2018.090537</doi>
        <lastModDate>2018-05-31T15:24:04.5330000+00:00</lastModDate>
        
        <creator>Sofyan M. A. Hayajneh</creator>
        
        <creator>Mohammad H. Alomari</creator>
        
        <creator>Bassam Al-Shargabi</creator>
        
        <subject>Artificial neural networks; machine learning; DSP; fluorescence microscopy; biomedical imaging; cell nuclei; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The visual extraction of cellular, nuclear and tissue components from medical images is vital in the diagnostic routine for various health-related abnormalities and diseases. The objective of this work is to modify and efficiently combine different image processing methods, supported by cascaded artificial neural networks, in an automated system that performs segmentation analysis of medical microscopy images to extract nuclei located in either simple or complex clusters. The proposed system is applied to a publicly available dataset of microscopy cell nuclei. A GUI is designed and presented in this work to ease the analysis and screening of these images. The proposed system shows promising performance and reduced computational cost. It is hoped that this system and the corresponding GUI will serve as a platform for several biomedical studies in the field of cellular imaging, where further complex investigations and modelling of microscopy images could take place.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_37-Cascades_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The P System Design Method based on the P Module</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090536</link>
        <id>10.14569/IJACSA.2018.090536</id>
        <doi>10.14569/IJACSA.2018.090536</doi>
        <lastModDate>2018-05-31T15:24:04.5170000+00:00</lastModDate>
        
        <creator>Ping Guo</creator>
        
        <creator>Xixi Peng</creator>
        
        <creator>Lian Ye</creator>
        
        <subject>P module; P System; P system design; membrane computing; biocomputing models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Membrane computing is a kind of biocomputing model. At present, the main research areas of membrane computing are computational models and P system design. With the expansion of the P system scale, how to rapidly construct a P system has become a prominent issue. Designing P systems based on P modules is a design method proposed in recent years. This method provides information hiding and can build a P system through recursive combination. However, current P module design lacks a unified design method and a standard process for building P systems from P modules. This paper studies the structural characteristics of cell-like P systems and proposes an improved P module design method together with a process for assembling P systems from P modules. To fully expound the P module design method, a P system for computing the square root of a large number is analyzed and designed, and the correctness of the P system built with the proposed method is verified by an instance.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_36-The_P_System_Design_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Binary PSOGSA for Load Balancing Task Scheduling in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090535</link>
        <id>10.14569/IJACSA.2018.090535</id>
        <doi>10.14569/IJACSA.2018.090535</doi>
        <lastModDate>2018-05-31T15:24:04.5030000+00:00</lastModDate>
        
        <creator>Thanaa S. Alnusairi</creator>
        
        <creator>Ashraf A. Shahin</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Gravitational search algorithm; load balancing; particle swarm optimization; task scheduling; task-to-virtual machine mapping; virtual machine load</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In cloud environments, load balancing task scheduling is an important issue that directly affects resource utilization. Load balancing scheduling must be considered in the cloud research field because of its significant impact on both the back end and the front end. An effective load balance means distributing the submitted workload over cloud VMs in a balanced way, leading to high resource utilization and high user satisfaction. In this paper, we propose a load balancing algorithm, Binary Load Balancing – Hybrid Particle Swarm Optimization and Gravitational Search Algorithm (Bin-LB-PSOGSA), a bio-inspired load balancing scheduling algorithm that efficiently enables the scheduling process to improve the load balance level on VMs. The proposed algorithm finds the best task-to-virtual-machine mapping, which is influenced by the length of the submitted workload and the VM processing speed. Results show that the proposed Bin-LB-PSOGSA achieves a better VM load average than the pure Bin-LB-PSO and other benchmark algorithms in terms of load balance level.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_35-Binary_PSOGSA_for_Load_Balancing_Task_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>M/M/1/n+Flush/n Model to Enhance the QoS for Cluster Heads in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090534</link>
        <id>10.14569/IJACSA.2018.090534</id>
        <doi>10.14569/IJACSA.2018.090534</doi>
        <lastModDate>2018-05-31T15:24:04.4700000+00:00</lastModDate>
        
        <creator>Aleem Ali</creator>
        
        <creator>Neeta Singh</creator>
        
        <creator>Poonam Verma</creator>
        
        <subject>Mobile Ad hoc Network (MANET); Cluster Head (CH); queueing approach; Quality of Services (QoS); flushing technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Clustering in MANETs is important for achieving scalability in the presence of large networks and high mobility, in order to maintain the Quality of Service (QoS) of the network. Improving QoS is a crucial issue in the MANET area. With this in mind, this paper presents an M/M/1/n+Flush/n queueing model to obtain better parametric results for cluster heads in MANETs. In developing the M/M/1/n+Flush/n queueing model, the paper establishes expressions for the utilization (Uₜ) of the Cluster Head (CH), the mean queue length (Lq), the mean busy period (Eᵨ), the mean waiting time (Q), and the average response time (R) of the CH. The analytical results are further verified using MATLAB simulations, which confirm the improved outcomes.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_34-MM1n_Flushn_Model_to_Enhance_the_QoS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Ensemble Framework for Heart Disease Detection and Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090533</link>
        <id>10.14569/IJACSA.2018.090533</id>
        <doi>10.14569/IJACSA.2018.090533</doi>
        <lastModDate>2018-05-31T15:24:04.4570000+00:00</lastModDate>
        
        <creator>Elham Nikookar</creator>
        
        <creator>Ebrahim Naderi</creator>
        
        <subject>Data mining; hybrid ensemble; base classifier; classification accuracy; sensitivity; specificity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Data mining techniques have been widely used in clinical decision support systems for the detection and prediction of various diseases. As heart disease is the leading cause of death for both men and women, its detection and prediction is one of the most important issues in the medical domain, and many researchers have developed intelligent medical decision support systems to improve the ability of CAD systems to diagnose heart disease. However, there are almost no studies investigating the capabilities of hybrid ensemble methods in building a detection and prediction model for heart disease. In this work, we investigate a hybrid ensemble model that is more reliable than basic ensemble models and leads to better performance than other heart disease prediction models. To evaluate the performance of the proposed model, a dataset containing 278 samples from the SPECT heart disease database is used. After applying the model to the data, a classification accuracy of 96%, a sensitivity of 80%, and a specificity of 93% are obtained, indicating acceptable performance of the proposed hybrid ensemble model in comparison with basic ensemble models as well as other state-of-the-art models.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_33-Hybrid_Ensemble_Framework_for_Heart_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Inset Feed Circular Cross Strip Antenna Design for C-Band Satellite Links</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090532</link>
        <id>10.14569/IJACSA.2018.090532</id>
        <doi>10.14569/IJACSA.2018.090532</doi>
        <lastModDate>2018-05-31T15:24:04.4230000+00:00</lastModDate>
        
        <creator>Faisal Ahmed Dahri</creator>
        
        <creator>Riaz A. Soomro</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Zeeshan Memon</creator>
        
        <creator>Majid Hussain Memon</creator>
        
        <subject>Circular slot; high gain; C-band; satellite communication; efficiency </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This paper proposes and investigates an inset-fed wideband circular slotted patch antenna suitable for 5.2 GHz satellite C-band applications. A circularly shaped slot is etched on the diminutive square patch (4.4 cm × 5.64 cm) of the inset-feed antenna. The objective of this work is to develop an efficient and inexpensive transducer system that is compatible with monolithic microwave integrated circuits, minimizes fabrication cost, and maintains a low profile for C-band satellite links. This paper focuses on the circular profile of the microstrip patch antenna, intended to provide sufficient gain to enhance the performance of satellite communication. A return loss of -21.79 dB with a directivity of 8.22 dB and a gain of 8.17 dB is estimated. An efficiency of 97% with a VSWR of 1.22 is achieved, consistent with the simulation results.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_32-An_Optimized_Inset_Feed_Circular_Cross_Strip_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-shape Multiband Patch Antenna for 4G, C-band and S-band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090531</link>
        <id>10.14569/IJACSA.2018.090531</id>
        <doi>10.14569/IJACSA.2018.090531</doi>
        <lastModDate>2018-05-31T15:24:04.4070000+00:00</lastModDate>
        
        <creator>Mehr e Munir</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Saad Hassan Kiani</creator>
        
        <subject>Minowaki island patch; miniaturization; E shape; gain; directivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In this study, a new E-shape mounted on a minowaki island patch antenna on an FR4 substrate is presented for communication system applications. With the insertion of a shorting pin between the patch and the ground plane, the proposed structure resonates at six frequencies, producing a hex-band response with good realized gain and directivity values and radiation patterns. A coaxial cable is used to excite the proposed structure with minimum impedance mismatch losses. The proposed design is miniaturized by up to 60.66% and can be used for GSM, GPRS, 4G, WLAN and other S-band and C-band applications.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_31-E_Shape_Multiband_Patch_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Service Broker Policies and Load Balancing Algorithms on the Performance of Large Scale Internet Applications in Cloud Datacenters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090529</link>
        <id>10.14569/IJACSA.2018.090529</id>
        <doi>10.14569/IJACSA.2018.090529</doi>
        <lastModDate>2018-05-31T15:24:04.3770000+00:00</lastModDate>
        
        <creator>Ali Meftah</creator>
        
        <creator>Ahmed E. Youssef</creator>
        
        <creator>Mohammad Zakariah</creator>
        
        <subject>Cloud computing; datacenters; load balancing algorithms; service broker policies; CloudAnalyst</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Cloud computing is advancing rapidly. With such advancement, it has become possible to develop and host large scale distributed applications on the Internet more economically and more flexibly. However, the geographical distribution of user bases, the available Internet infrastructure within those geographical areas, and the dynamic nature of usage patterns of the user bases are critical factors that affect the performance of these applications. Therefore, it is necessary to compromise between datacenters, service broker policies, and load balancing algorithms to optimize the performance of the application and the cost to the owners. This paper aims at studying the effect of service broker policies and load balancing algorithms on the performance of large-scale Internet applications under different configurations of datacenters. To achieve this goal, we modeled the behavior of the popular Facebook application with the most recent worldwide users’ statistics. Then, we evaluated the performance of this application under different configurations of datacenters using: 1) two different service broker policies, namely, closest datacenter and optimum response time; and 2) three load-balancing algorithms, namely, round robin, equally spread current execution, and throttled load balancer. The overall average response time of the application and the overall average time spent for processing a user request by a datacenter are measured and the results are discussed. This study would help service providers generate valuable insights on coordination between datacenters, service policies, and load balancing algorithms when designing Cloud infrastructure services in geographically distributed areas. In addition, application designers would benefit greatly from this study in identifying the optimal arrangement for their applications.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_29-Effect_of_Service_Broker_Policies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple Trips Pattern Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090530</link>
        <id>10.14569/IJACSA.2018.090530</id>
        <doi>10.14569/IJACSA.2018.090530</doi>
        <lastModDate>2018-05-31T15:24:04.3770000+00:00</lastModDate>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <creator>Kamelsh Kumar</creator>
        
        <creator>Rafaqat Hussain Arain</creator>
        
        <creator>Hidayatullah Shaikh</creator>
        
        <creator>Imran Memon</creator>
        
        <creator>Safdar Ali Shah</creator>
        
        <subject>Multiple trips pattern mining; multiple trips classification; geo-tagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In recent years, photograph sharing has become one of the most popular web services, with examples including Flickr, TripAdvisor, and numerous others. Photograph sharing services provide capabilities to attach geo-coordinates, tags, and user IDs to photographs to make organizing them easy. This study focuses on geotagged photographs and discusses an approach to recognizing users’ multiple-trip patterns, i.e., common arrangements of visits to towns and durations of stay, as well as elucidating labels that describe the multiple-trip patterns. First, we segment a collection of photos into multiple trips and manually categorize the trips into themes such as Landmark, Nature, Business, Neutral, and Event. Our method mines multiple-trip patterns for these theme categories. Experimentally, our technique outperforms other methods, segmenting photo collections into trips with 85% accuracy. Multiple trips are categorized about 91% correctly using tags, photo IDs, titles of digital photos, user IDs, and visited cities as features. Finally, we present motivating examples showing an application with which one can find multiple-trip patterns from our datasets, along with other queries on visit duration, destination, and trip theme.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_30-Multiple_Trips_Pattern_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Heterogeneous Requirements using Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090528</link>
        <id>10.14569/IJACSA.2018.090528</id>
        <doi>10.14569/IJACSA.2018.090528</doi>
        <lastModDate>2018-05-31T15:24:04.3470000+00:00</lastModDate>
        
        <creator>Ahmad Mustafa</creator>
        
        <creator>Wan M.N. Wan-Kadir</creator>
        
        <creator>Noraini Ibrahim</creator>
        
        <creator>Muhammad Arif Shah</creator>
        
        <creator>Muhammad Younas</creator>
        
        <subject>Heterogeneous requirements; requirement engineering; local ontologies; global ontologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Ontology-driven approaches are used to support the requirements engineering process. Ontologies can be used to define information and knowledge semantics during the requirements engineering phases, such as analysis, specification, validation and management of requirements. However, requirement analysts face difficulties in using ontologies for requirements engineering. In this study, a framework is proposed to integrate heterogeneous requirements by using local and global ontologies.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_28-Integration_of_Heterogeneous_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality Aspects of Continuous Delivery in Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090527</link>
        <id>10.14569/IJACSA.2018.090527</id>
        <doi>10.14569/IJACSA.2018.090527</doi>
        <lastModDate>2018-05-31T15:24:04.3000000+00:00</lastModDate>
        
        <creator>Maryam Shahzeydi</creator>
        
        <creator>Taghi Javdani Gandomani</creator>
        
        <creator>Rasool Sadeghi</creator>
        
        <subject>Continuous delivery; quality model; agile software development; agile methods; agile practice</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Continuous Delivery has recently been used in software projects to facilitate the process of product delivery in Agile software development. As an Agile practice, it is mainly used to achieve better quality in the software development process and higher customer satisfaction. However, less attention has been paid to exploring the quality factors related to Continuous Delivery, as well as its quality model. The main aim of this paper is to identify the quality aspects and factors of Continuous Delivery. Initial data analysis showed that this practice is influenced by people-related factors, organizational issues, and tool- and process-related factors as well.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_27-Quality_Aspects_of_Continuous_Delivery_in_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TPACK Adaptation among Faculty Members of Education and ICT Departments in University of Sindh, Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090526</link>
        <id>10.14569/IJACSA.2018.090526</id>
        <doi>10.14569/IJACSA.2018.090526</doi>
        <lastModDate>2018-05-31T15:24:04.2670000+00:00</lastModDate>
        
        <creator>Saira Soomro</creator>
        
        <creator>Arjumand Bano Soomro</creator>
        
        <creator>Najma Imtiaz Ali</creator>
        
        <creator>Tariq Bhatti</creator>
        
        <creator>Nazish Basir</creator>
        
        <creator>Nazia Parveen Gill</creator>
        
        <subject>TPACK; teaching-learning; circumstantial and contextual factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The Technological Pedagogical Content Knowledge (TPACK) framework has been used to investigate the technological and instructive knowledge of teachers. Many researchers have found this framework a useful tool to explore teachers’ awareness of TPACK and how they relate it to the learning and teaching process in different educational settings. During its first generation, from 2006 to 2016, the TPACK constructs took a decade to be explained and interpreted by researchers. The framework has now entered its second generation, but its contextual aspect has not yet been explored in detail. This study addresses two areas: firstly, measuring the TPACK of faculty members of the ICT and Education departments of the University of Sindh; and secondly, uncovering the impact of four circumstantial/contextual factors (Technological, Culture of Institute, Interpersonal, and Intrapersonal) on the selected faculty members in applying TPACK in their own subject domains. The results showed that both faculties already integrate technology into their teaching practices despite limited technological resources. Besides this, they were found to be collaborative in teaching and open to technology. This study reports TPACK framework adaptation among higher education faculty members at the University of Sindh. It also helps in understanding the intrapersonal beliefs of faculty members regarding technology integration with pedagogical and content knowledge.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_26-TPACK_Adaptation_among_Faculty_Members_of_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communicator for Hearing-Impaired Persons using Pakistan Sign Language (PSL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090525</link>
        <id>10.14569/IJACSA.2018.090525</id>
        <doi>10.14569/IJACSA.2018.090525</doi>
        <lastModDate>2018-05-31T15:24:04.2370000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>Adnan Ahmed Siddiqui</creator>
        
        <creator>Abdulbasit Shaikh</creator>
        
        <creator>Lubaid Ahmed</creator>
        
        <creator>Syed Faisal Ali</creator>
        
        <creator>Fauzan Saeed</creator>
        
        <subject>Communicator; hearing-impaired; Pakistan Sign Language (PSL); hand gesture; special person; token</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Communication with a hearing-impaired individual is a big challenge for a normal person. Hearing-impaired people use hand gesture language (sign language) to communicate with each other, which is not easy for a normal person to understand because he/she is not trained in sign language. This communication gap creates significant problems for hearing-impaired individuals during shopping, hospitalization, and at school and home. Especially in emergencies, it is very difficult to understand the statements of a hearing-impaired person who uses sign language. In the last few years, researchers and developers from all over the world have presented different ideas and systems to solve this problem, but no solution is yet available that resolves this issue and enables two-way communication between hearing-impaired and normal persons. This paper presents a detailed description of a two-way communication system based on Pakistan Sign Language (PSL). This duplex system converts text in simple English into hand gestures and vice versa. Moreover, conversion from hand gestures is available not only as text but also as voice, providing more convenience to the normal person. The main objective is to serve a large population and make hearing-impaired persons a vital part of our society. A normal person can enter text (a sentence) into the application; after spelling and grammar checking, the text is divided into tokens and sub-tokens. A token is a gesture for each word of the text, while sub-tokens are the gestures for each character of a word. The combination of tokens creates the gestures of the text. Conversely, when gestures are input into the application, image processing techniques recognize the nature of the hand gesture and convert it into the corresponding text or voice.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_25-Communicator_for_Hearing_Impaired_Persons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Stage Algorithms for Solving a Generalized Capacitated P-median Location Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090524</link>
        <id>10.14569/IJACSA.2018.090524</id>
        <doi>10.14569/IJACSA.2018.090524</doi>
        <lastModDate>2018-05-31T15:24:04.2200000+00:00</lastModDate>
        
        <creator>Mohammed EL AMRANI</creator>
        
        <creator>Youssef BENADADA</creator>
        
        <subject>Location; p-median; multi-capacity; heuristic; LNS; lagrangian relaxation; lower bound</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The capacitated p-median location problem is widely discussed in the literature, but its generalization to the multi-capacity case has not been. This generalization, called the multi-capacitated location problem, is characterized by allowing facilities to use one of several capacity levels; for this purpose, a predefined list of capacity levels supported by all potential facilities is established. In this paper, we detail the mathematical formulation and propose a new solution method. We construct a multi-stage heuristic algorithm called BDF (Biggest Demand First). This new method comes in two variants, Integrated BDF (IBDF) and Hybridized BDF (HBDF), both improved by a local search optimization. A valid lower bound on the optimal solution value is obtained by solving a Lagrangian relaxation dual of the exact formulation. Computational results are presented at the end, using new instances with a higher ratio between the numbers of customers, facilities and capacity levels, or instances adapted from p-median instances drawn from the literature. The obtained results show that IBDF is much faster with medium-quality solutions, while HBDF is slower but provides very good solutions close to optimality.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_24-Multi_Stage_Algorithms_for_Solving_A_Generalized_Capacitated_P_Median.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Winnowing Algorithm with Dictionary English-Indonesia Technique to Detect Plagiarism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090523</link>
        <id>10.14569/IJACSA.2018.090523</id>
        <doi>10.14569/IJACSA.2018.090523</doi>
        <lastModDate>2018-05-31T15:24:04.1900000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Sunardi</creator>
        
        <creator>Iif Alfiatul Mukaromah</creator>
        
        <subject>Plagiarism; winnowing algorithm; fingerprint; dictionary English-Indonesia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The ease of obtaining information quickly and cheaply from all over the world through the internet can encourage plagiarism. Plagiarism is an intellectual crime that often occurs in the writing world, where perpetrators take the work of others without declaring the original source; left unchecked, it negatively impacts the academic community and can become a chronic disease in the progress of a nation. At present, plagiarism detection is done both manually and automatically with the help of technology, but most available automatic checkers only compare the letter characters contained in a document and cannot detect cases where a plagiarist takes a quotation from a foreign language and translates it into his own language. Plagiarism detection in this study uses the winnowing algorithm, which checks every character in two samples by a hashing method that generates fingerprints of the two documents, while the English-Indonesian dictionary method translates the text from English into Indonesian. This research produces plagiarism detection using the winnowing algorithm with the English-Indonesian dictionary technique.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_23-Implementation_of_Winnowing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formalization of Behavior Change Theories to Accomplish a Health Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090522</link>
        <id>10.14569/IJACSA.2018.090522</id>
        <doi>10.14569/IJACSA.2018.090522</doi>
        <lastModDate>2018-05-31T15:24:04.1730000+00:00</lastModDate>
        
        <creator>Adnan Manzoor</creator>
        
        <creator>Imtiaz Ali Halepoto</creator>
        
        <creator>Sohail Khokhar</creator>
        
        <creator>Nazar Hussain Phulpoto</creator>
        
        <creator>Muhammad Sulleman Memon</creator>
        
        <subject>Behavior monitoring; healthy lifestyle; behavior change; physical activity; computational model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The objective of this paper is to study theories behind behavior change and the adaptation of behavior. Humans often live according to habitual behavior. Changing an existing behavior or adopting a new (healthier) behavior is not an easy task. A number of things are important when considering adapting physical activity behavior. A behavior is affected by various cognitive processes, for example those involving beliefs, intentions, goals, and impediments. A conceptual and computational model is discussed based on state-of-the-art theories about behavior change. The model combines different theories: the social cognitive theory and the theory of self-regulation. In addition, health behavior interventions are discussed that may be used in a coaching system. The paper consists of two parts: the first part describes a computational model of behavior change, and the second part discusses the formalization of evidence-based techniques for behavior change and questions to measure the various states of mind in order to provide tailored and personalized support.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_22-Formalization_of_Behavior_Change.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Chatbot for Automatic Processing of Learner Concerns in an Online Learning Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090521</link>
        <id>10.14569/IJACSA.2018.090521</id>
        <doi>10.14569/IJACSA.2018.090521</doi>
        <lastModDate>2018-05-31T15:24:04.1430000+00:00</lastModDate>
        
        <creator>Mamadou BAKOUAN</creator>
        
        <creator>Beman Hamidja KAMAGATE</creator>
        
        <creator>Tiemoman KONE</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <creator>Michel BABRI</creator>
        
        <subject>Metadata; ontologies; semantic similarity; natural language; semantic web; chatbot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In this article, we present a chatbot model that can automatically respond to learners’ concerns on an online training platform. The proposed chatbot model is based on an adaptation of the Dice similarity to understand the concerns of learners. The first phase of this approach selects, from a knowledge base of pre-established concerns provided by the teacher, those closest to the one posed by the learner. The second phase selects, among these, the k most appropriate concerns based on a similarity measure built on the concept of domain keywords. Experiments with the chatbot prototype show that it finds adequate answers. When the identified question corresponds to one of the teacher’s questions, the learner is asked whether it is the one he was referring to. If he answers in the affirmative, the instructions associated with his request are sent to him; if not, the learner’s concern is sent to the human tutor. The hybridization of this chatbot with the human agent enriches the chatbot’s initial knowledge base. The results obtained with the domain-keyword concept are encouraging: the learner comprehension rate is above 50% when applying the concept of domain keywords, while with the Dice measure it is below 50%.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_21-A_Chatbot_for_Automatic_Processing_of_Learner_Concerns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Multi-Message and Multi-Receiver Heterogeneous Hybrid Signcryption Scheme based on Hyper Elliptic Curve</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090520</link>
        <id>10.14569/IJACSA.2018.090520</id>
        <doi>10.14569/IJACSA.2018.090520</doi>
        <lastModDate>2018-05-31T15:24:04.1130000+00:00</lastModDate>
        
        <creator>Abid ur Rahman</creator>
        
        <creator>Insaf Ullah</creator>
        
        <creator>Muhammad Naeem</creator>
        
        <creator>Rehan Anwar</creator>
        
        <creator>Noor-ul-Amin</creator>
        
        <creator>Hizbullah Khattak</creator>
        
        <creator>Sultan Ullah</creator>
        
        <subject>Multi-receiver heterogeneous hybrid signcryption; multi-message and multi-receiver heterogeneous hybrid signcryption; hyper elliptic curve; Automated Validation of Internet Security Protocols and Applications (AVISPA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Hybrid encryption is a suitable means of securing multi-message communication. It splits encryption into two parts: one part uses a public key system to scramble a one-time symmetric key, and the other part uses the symmetric key to scramble the actual message. The rapid advancement of internet technology requires distinct message communications over wider areas to enhance heterogeneous network security. In this paper, we present a lightweight multi-message and multi-receiver heterogeneous hybrid signcryption scheme based on the hyper elliptic curve. We choose the hyper elliptic curve for our scheme because an 80-bit key gives a level of security equivalent to that of other cryptosystems such as RSA and bilinear pairing with 1024-bit keys and the elliptic curve with a 160-bit key, respectively. Further, we validate the security requirements of our scheme, such as confidentiality, resistance against replay attack, integrity, authenticity, non-repudiation, public verifiability, forward secrecy and unforgeability, through a well-known security validation tool called Automated Validation of Internet Security Protocols and Applications (AVISPA). In addition, our approach has low computational costs, which is attractive for low-resource devices and heterogeneous environments.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_20_A_Lightweight_Multi_Message_and_Multi_Receiver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Bio-Inspired Algorithm for the Faculty Scheduling Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090519</link>
        <id>10.14569/IJACSA.2018.090519</id>
        <doi>10.14569/IJACSA.2018.090519</doi>
        <lastModDate>2018-05-31T15:24:04.0970000+00:00</lastModDate>
        
        <creator>Sarah Al-Negheimish</creator>
        
        <creator>Fai Alnuhait</creator>
        
        <creator>Hawazen Albrahim</creator>
        
        <creator>Sarah Al-Mogherah</creator>
        
        <creator>Maha Alrajhi</creator>
        
        <creator>Manar Hosny</creator>
        
        <subject>Faculty scheduling; faculty assignment problem; Bees Algorithm; Demon algorithm; timetabling; scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>All universities have faculty members who need to be assigned to teach courses. These members have various specialties, preferences and different levels of experience. The manual assignment of courses is a very tedious and time-consuming task that the scheduling committee frequently faces in every department. To solve this timetabling problem, we propose a novel approach using the Bees Algorithm (BA), which is inspired by bees’ foraging behavior, hybridized with the Demon algorithm and Hill Climbing for a more extensive search. The scheduling process took into consideration all constraints and variables associated with scheduling courses, according to the requirements of the Computer Science department in our college. The results showed that the schedules produced by the algorithm outperformed the manual schedules in terms of achieving the objective function and satisfying the constraints. In addition, the hybridized version produced better results than the standard BA version without hybridization. The hybridized algorithm is designed for faculty scheduling, but can be further generalized to solve various timetabling problems.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_19-An_Intelligent_Bio_Inspired_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rainfall Prediction using Data Mining Techniques: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090518</link>
        <id>10.14569/IJACSA.2018.090518</id>
        <doi>10.14569/IJACSA.2018.090518</doi>
        <lastModDate>2018-05-31T15:24:04.0500000+00:00</lastModDate>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Noureen Hameed</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <creator>Iftikhar Ali</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <subject>Rainfall prediction; data mining techniques; SLR; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Rainfall prediction is one of the challenging tasks in weather forecasting. Accurate and timely rainfall prediction can be very helpful for taking effective security measures in advance regarding ongoing construction projects, transportation activities, agricultural tasks, flight operations, flood situations, etc. Data mining techniques can effectively predict rainfall by extracting the hidden patterns among the available features of past weather data. This research contributes by providing a critical analysis and review of the latest data mining techniques used for rainfall prediction. Published papers from 2013 to 2017 from renowned online search libraries are considered for this research. This review will help researchers analyze the latest work on rainfall prediction, with a focus on data mining techniques, and will also provide a baseline for future directions and comparisons.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_18-Rainfall_Prediction_using_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Affective States via EEG and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090517</link>
        <id>10.14569/IJACSA.2018.090517</id>
        <doi>10.14569/IJACSA.2018.090517</doi>
        <lastModDate>2018-05-31T15:24:04.0330000+00:00</lastModDate>
        
        <creator>Jason Teo</creator>
        
        <creator>Lin Hou Chew</creator>
        
        <creator>Jia Tian Chia</creator>
        
        <creator>James Mountstephens</creator>
        
        <subject>Neuroinformatics; emotion classification; preference classification; excitement classification; electroencephalography (EEG); deep learning; virtual reality; dropouts.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Human emotions play a key role in numerous decision-making processes. The ability to correctly identify likes and dislikes as well as excitement and boredom would facilitate novel applications in neuromarketing, affective entertainment, virtual rehabilitation and forensic neuroscience that leverage sub-conscious human affective states. In this neuroinformatics investigation, we seek to recognize human preferences and excitement passively through the use of electroencephalography (EEG) when a subject is presented with 3D visual stimuli. Our approach employs machine learning in the form of deep neural networks to classify brain signals acquired using a brain-computer interface (BCI). In the first part of our study, we attempt to improve upon our previous work, which showed that EEG preference classification is possible although accuracy rates remain relatively low at 61%-67% using conventional deep learning neural architectures; the challenge mainly lies in the accurate classification of unseen data from a cohort-wide sample that introduces inter-subject variability on top of the existing intra-subject variability. Such an approach is significantly more challenging and is known as subject-independent EEG classification, as opposed to the more commonly adopted but more time-consuming and less general approach of subject-dependent EEG classification. In this new study, we employ deep networks that allow dropouts in the architecture of the neural network. The results obtained through this simple modification achieved a classification accuracy of up to 79%. Therefore, this study has shown that a deep learning classifier was able to increase emotion classification accuracy by between 13% and 18% through the simple adoption of dropouts, compared to a conventional deep learner for EEG preference classification.
In the second part of our study, users are exposed to a roller-coaster experience as the emotional stimulus expected to evoke excitement, while simultaneously wearing virtual reality goggles, which deliver the immersive experience, and an EEG headset, which acquires the raw brain signals detected during exposure to this excitement stimulus. Here, a deep learning approach is used to improve the excitement detection rate to well above the 90% accuracy level. In a prior similar study, conventional machine learning approaches involving k-Nearest Neighbour (kNN) classifiers and Support Vector Machines (SVM) only achieved prediction accuracy rates of between 65% and 89%. Using a deep learning approach here, rates of 78%-96% were achieved. This demonstrates the superiority of a deep learning approach over other machine learning approaches for detecting human excitement when immersed in an immersive virtual reality environment.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_17-Classification_of_Affective_States_via_EEG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Improvement in Elliptic Curve Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090516</link>
        <id>10.14569/IJACSA.2018.090516</id>
        <doi>10.14569/IJACSA.2018.090516</doi>
        <lastModDate>2018-05-31T15:24:04.0030000+00:00</lastModDate>
        
        <creator>Kawther Esaa Abdullah</creator>
        
        <creator>Nada Hussein M. Ali</creator>
        
        <subject>Elliptic curve cryptography; elliptic curve discrete logarithm problem; dual encryption/decryption; Elliptic Curve Diffie Hellman</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This paper proposes different approaches to enhance the performance of the Elliptic Curve Cryptography (ECC) algorithm. ECC is vulnerable to attacks that exploit the public parameters of ECC to solve the Discrete Logarithm Problem (DLP). Therefore, these public parameters should be selected safely to obviate all recognized attacks. This paper presents a new generator function to produce the domain parameters for creating the elliptic curve; a secure mechanism is used in the proposed function to avoid all possible known attacks that attempt to solve the Elliptic Curve Discrete Logarithm Problem (ECDLP). Moreover, an efficient algorithm has been proposed for choosing two base points from the curve in order to generate two subgroups in a secure manner. The purpose of the aforementioned algorithm is to offer more confidence to the user, since it is not built upon a hidden weakness that could subsequently be exploited to retrieve the user’s private key. The Elliptic Curve Diffie Hellman (ECDH) algorithm is implemented to exchange a session key between the communicating parties in a secure manner. Besides, a preprocessing operation is performed on the message to enhance the diffusion property, which consequently increases the strength against cryptanalysis attacks. Finally, the dual encryption/decryption algorithm is implemented using different session keys in each stage of the encryption to boost immunity against any attack on the digital audio transmission. The results obtained show the positive effect of the dual elliptic curve system in terms of speed and confidentiality without needing any extra time for encryption.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_16-Security_Improvement_in_Elliptic_Curve_Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Techniques to Enhance Data Deduplication using Content based-TTTD Chunking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090515</link>
        <id>10.14569/IJACSA.2018.090515</id>
        <doi>10.14569/IJACSA.2018.090515</doi>
        <lastModDate>2018-05-31T15:24:03.9870000+00:00</lastModDate>
        
        <creator>Hala AbdulSalam Jasim</creator>
        
        <creator>Assmaa A. Fahad</creator>
        
        <subject>Data deduplication; big data compression; data reduction; Two Threshold Two Divisor (TTTD); chunking algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Due to the rapid, indiscriminate increase of digital data, data reduction has attracted increasing attention and has become a popular approach in large-scale storage systems. One of the most effective approaches for data reduction is the Data Deduplication technique, in which redundant data at the file or sub-file level is detected and identified by using a hash algorithm. Data Deduplication has been shown to be much more efficient than conventional compression techniques in large-scale storage systems in terms of space reduction. The Two Threshold Two Divisor (TTTD) chunking algorithm is one of the popular chunking algorithms used in deduplication. This algorithm needs time and many system resources to compute its chunk boundaries. This paper presents new techniques to enhance the TTTD chunking algorithm using a new fingerprint function, a multi-level hashing and matching technique, and a new indexing technique to store the Metadata. These new techniques consist of four hashing algorithms to solve the collision problem, and a new chunk condition is added to the TTTD chunking conditions in order to increase the number of small chunks, which leads to an increased Deduplication Ratio. This enhancement improves the Deduplication Ratio produced by the TTTD algorithm and reduces the system resources the algorithm needs. The proposed algorithm is tested in terms of Deduplication Ratio, execution time, and Metadata size.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_15-New_Techniques_to_Enhance_Data_Deduplication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Mobile-Interfaced Machine Learning-Based Predictive Models for Improving Students’ Performance in Programming Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090514</link>
        <id>10.14569/IJACSA.2018.090514</id>
        <doi>10.14569/IJACSA.2018.090514</doi>
        <lastModDate>2018-05-31T15:24:03.9400000+00:00</lastModDate>
        
        <creator>Fagbola Temitayo Matthew</creator>
        
        <creator>Adeyanju Ibrahim Adepoju</creator>
        
        <creator>Oloyede Ayodele</creator>
        
        <creator>Obe Olumide</creator>
        
        <creator>Olaniyan Olatayo</creator>
        
        <creator>Esan Adebimpe</creator>
        
        <creator>Omodunbi Bolaji</creator>
        
        <creator>Egbetola Funmilola</creator>
        
        <subject>Student-performance; predictive-modeling; M5P-Decision-Tree; mobile-interface; linear-regression-classifier; programming-courses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Student performance modelling (SPM) is a critical step in assessing and improving students’ performance in their learning discourse. However, most existing SPM approaches are statistical: on one hand, they are based on probability, so their results are estimates; on the other hand, the actual influence of hidden factors peculiar to students, lecturers, the learning environment and the family, together with their overall effect on student performance, has not been exhaustively investigated. In this paper, student performance models for improving students’ performance in programming courses were developed using an M5P Decision Tree (MDT) and a Linear Regression Classifier (LRC). The data were gathered using a structured questionnaire from 295 students at the 200 and 300 levels of study who offered Web Programming, C or Java at Federal University, Oye-Ekiti, Nigeria between 2012 and 2016. Hidden factors significant to students’ performance in programming were identified. The gathered data were normalized, coded and prepared as variable and factor datasets, and fed into the MDT algorithm and LRC to develop the predictive models. The developed models were validated and afterwards implemented in an Android Studio 1.0.1 environment. Extensible Markup Language (XML) and Java were used for the design of the graphical user interface (GUI) and the logical implementation of the developed models as a mobile calculator, respectively. Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE) were the metrics used to evaluate the robustness of the MDT and LRC models. The evaluation results indicate that the variable-based LRC produced the best model, having yielded the least values of MAE, RMSE, RAE and RRSE in all the evaluations conducted. Further results established the strong significance of the attitude of students and lecturers, fearful perception of students, erratic power supply, university facilities, student health and students’ attendance to the performance of students in programming courses. The variable-based LRC model presented in this paper could provide baseline information about students’ performance, thereby supporting better decision making towards improving teaching/learning outcomes in programming courses.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_14-Development of Mobile Interfaced Multivariate Predictive Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BLOT: A Novel Phase Privacy Preserving Framework for Location-Based Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090513</link>
        <id>10.14569/IJACSA.2018.090513</id>
        <doi>10.14569/IJACSA.2018.090513</doi>
        <lastModDate>2018-05-31T15:24:03.8470000+00:00</lastModDate>
        
        <creator>Abdullah Albelaihy</creator>
        
        <creator>Jonathan Cazalas</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <subject>Privacy; location-based services (LBS); oblivious transfer; BLoom Filter Oblivious Transfer (BLOT); bloom filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The inherent challenge within the domain of location-based services is finding a delicate balance between user privacy and the efficiency of answering queries. Inevitably, security issues can and will arise, as the server must be informed of the query location in order to provide accurate responses. Despite the many security advancements in wireless communication, servers may become jeopardized or infected with malicious software. That said, it is possible to ensure queries do not generate fake responses that appear real; in fact, if a fake response is used, mechanisms can be employed for the user to verify the query’s authenticity. Towards this end, this paper proposes BLoom Filter Oblivious Transfer (BLOT), a novel phase privacy preserving framework for LBS that combines a Bloom filter hash function and the oblivious transfer protocol. These methods are shown to be useful in securing a user’s private information. An analysis of the results revealed that BLOT performed markedly better and achieved enhanced entropy when compared to the referenced approaches.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_13-Blot_A_Novel_Phase_Privacy_Preserving_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Student Facial Authentication Model based on OpenCV’s Object Detection Method and QR Code for Zambian Higher Institutions of Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090512</link>
        <id>10.14569/IJACSA.2018.090512</id>
        <doi>10.14569/IJACSA.2018.090512</doi>
        <lastModDate>2018-05-31T15:24:03.8000000+00:00</lastModDate>
        
        <creator>Lubasi Kakwete Musambo</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Biometrics; authentication; model; integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Facial biometrics captures human facial physiological data and converts it into a data variable, so that this stored variable may be used to provide information security services, such as authentication, integrity management or identification, granting privileged access or control to the owner of that data variable. In this paper, we propose a model for student authentication based on facial biometrics. We recommend a secure model that can be used in the authentication and management of student information for the registration and access of resources, such as bursaries, student accommodation and library facilities, at the University of Zambia. Since the model is based on biometrics, a baseline study was carried out to collect data from the general public, government entities, commercial banks, students, ICT regulators and schools on their understanding, use and acceptance of biometrics as an authentication tool. Factor analysis was used to analyze the findings. The study establishes that performance expectancy, effort expectancy, social influence and user privacy are key determinants for the application of biometric multimode authentication. The study further demonstrates that education and work experience are regulating factors in the acceptance and expectancy of a biometric authentication system. Based on these results, we developed a biometric model that can be used to authenticate students in higher learning institutions in Zambia. The results of our proposed model show a 66% acceptance rate using OpenCV.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_12-Student_Facial_Authentication_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Opportunistic Dissemination Protocol for VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090511</link>
        <id>10.14569/IJACSA.2018.090511</id>
        <doi>10.14569/IJACSA.2018.090511</doi>
        <lastModDate>2018-05-31T15:24:03.7700000+00:00</lastModDate>
        
        <creator>Amina SEDJELMACI</creator>
        
        <creator>Fedoua DIDI</creator>
        
        <creator>Ahmed ABDUL RAHUMAN</creator>
        
        <subject>Flooding; DHVN; SNF; opportunistic; VANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In this article, we propose an opportunistic information dissemination protocol that mixes flooding with an enhanced DHVN (Dissemination protocol for heterogeneous Cooperative Vehicular Network) protocol, allowing them to run opportunistically in a Manhattan plan. Additional logic is added to the existing version of the DHVN protocol to disseminate information efficiently in two steps: 1) by adding three tags, Initial Diffusion, Standard DHVN and DHVN Near Intersection, where the Initial Diffusion tag is used for the first flooding transmission only; and 2) by making the SNF (Store and Forward) period adaptive depending on the region. Detailed simulation results obtained with the VNS integrated framework show that our opportunistic protocol outperforms the DHVN protocol.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_11-An_Opportunistic_Dissemination_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing Optimization in WBAN using Bees Algorithm for Overcrowded Hajj Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090510</link>
        <id>10.14569/IJACSA.2018.090510</id>
        <doi>10.14569/IJACSA.2018.090510</doi>
        <lastModDate>2018-05-31T15:24:03.7230000+00:00</lastModDate>
        
        <creator>Ghassan Ahmed Ali</creator>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <subject>Wireless Body Area Network (WBAN); Bees algorithm; routing optimization; Hajj environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Crowded places like the Hajj environment in Makkah, which hosts two to three million people in a specific area and time, can pose health challenges for pilgrims who need medical care. One solution to overcome such difficulties is to use Wireless Body Area Networks (WBANs). WBAN is a new technology that uses a wireless sensor network to gather data about a patient’s status and then forward the collected data for processing. However, various challenges in WBAN must be considered. Power consumption is critical within a WBAN system. Furthermore, delays in data transfer may lead to a wrong diagnosis or an incorrect report that may lead to death; therefore, the transferred data must be reliable to ensure measurement accuracy. In this paper, we propose a framework for routing optimization in a medical wireless network. The proposed framework optimizes the shortest path at different stages of data collection to lower energy consumption and reduce transmission time. The proposed work is based on the Bees Algorithm to overcome such challenges and find the shortest path for data within the shortest time during the overcrowded Hajj environment. Matlab simulation results show good performance of the Bees Algorithm in terms of transmission time, energy consumption, delay, and throughput.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_10-Routing_Optimization_in_WBAN_using_Bees_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Energy Efficient Mobility Aware MAC Protocol for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090509</link>
        <id>10.14569/IJACSA.2018.090509</id>
        <doi>10.14569/IJACSA.2018.090509</doi>
        <lastModDate>2018-05-31T15:24:03.6900000+00:00</lastModDate>
        
        <creator>Zain ul Abidin Jaffri</creator>
        
        <creator>Asif Kabir</creator>
        
        <creator>Gohar Rehman Chughtai</creator>
        
        <creator>S. Sabahat H. Bukhari</creator>
        
        <creator>Muhammad Arshad Shehzad Hassan</creator>
        
        <subject>Wireless sensor networks; energy efficiency; Media Access Control (MAC); mobility aware; cluster head</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Dealing with mobility at the link layer in an efficient and effective way is a formidable challenge in Wireless Sensor Networks due to the recent boom in mobile applications and complex network scenarios. Most current MAC protocols proposed for WSNs focus on stationary networks and usually provide feeble network performance in situations where mobile nodes are involved. Many MAC protocols and techniques have been developed to support mobility, but they suffer from massive energy consumption and latency problems due to frequent connection setup and breakup. In this paper, we propose a new energy-efficient mobility-aware MAC protocol (EEMA-MAC), which works efficiently in both stationary and mobile scenarios with less energy consumption. In this protocol, the member nodes have sleep and wake times like the existing S-MAC protocol, but connection setup and efficiency are expedited because the Cluster Head (CH) has extended wake-up time and less sleep time. Simulation results show that this mechanism effectively avoids frequent disconnection of nodes and performs well in terms of energy consumption, throughput and packet loss compared with existing protocols such as S-MAC and MS-MAC.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_9-A_Novel_Energy_Efficient_Mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Visualization of Sentiment Measures and Sentiment Classification using Combined Classifier for Customer Product Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090508</link>
        <id>10.14569/IJACSA.2018.090508</id>
        <doi>10.14569/IJACSA.2018.090508</doi>
        <lastModDate>2018-05-31T15:24:03.6430000+00:00</lastModDate>
        
        <creator>Siddhaling Urologin</creator>
        
        <creator>Sunil Thomas</creator>
        
        <subject>Sentiment analysis; 3D visualization; sentiment classification; natural language processing; product reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The Internet’s wide reach has led many users to buy products online through e-commerce websites. Users commonly provide their opinions, comments, and reviews about products on social media, e-commerce websites, blogs, etc. The review comments provided by customers contain rich information about the usage of the products they bought and their sentiments towards those products. In this research, we collected reviews from Amazon.com and performed sentiment analysis to extract sentiment information. We propose 3D visualizations to represent sentiment information, such as sentiment scores and statistics about words used in the reviews. The 3D visualizations are useful for representing large amounts of sentiment-related information and for gaining an in-depth understanding of users’ sentiments. We developed a combined classifier using Logistic Regression, Decision Tree and Support Vector Machine. From the reviews, we formed N-gram features using a bag of words and performed sentiment classification using the combined classifier. On 10-fold cross-validation, a maximum classification rate of 90.22% was obtained for the combined classifier on sentiment classification.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_8-3D_Visualization_of_Sentiment_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Hierarchy Analysis Method at the Foodstuffs Quality Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090507</link>
        <id>10.14569/IJACSA.2018.090507</id>
        <doi>10.14569/IJACSA.2018.090507</doi>
        <lastModDate>2018-05-31T15:24:03.6130000+00:00</lastModDate>
        
        <creator>Marina A. Nikitina</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Natalya G. Semenkina</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <creator>Andrey V. Goncharov</creator>
        
        <subject>Effective functionality; hierarchy analysis method; gluten-free flour confectionery products; organoleptic evaluation of the quality; food product quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>In Russia, as in other countries of the world, national programs are implemented to improve the health of the population. An integral part of those programs is measures to improve the structure of food processes as well as the quality of the food itself. New types of functional and specialized food products that meet the physiological needs of specific groups of the population, with a therapeutic and therapeutic-prophylactic action spectrum, are becoming more widespread. The article proposes a concept for determining the quality of food products through the indicator of “effective functionality” on the basis of a multicriteria approach using the hierarchy analysis method. Using the example of gluten-free flour confectionery products, the determination of the organoleptic evaluation of the quality of a food product is shown as a particular solution for finding one of the complex indicators of the first level. The use of T. Saaty’s method in making technological decisions over a large number of criteria is substantiated. Analysis of the obtained data leads to the conclusion that the greatest weight among the alternatives belonged to the sample containing three kinds of flour: buckwheat, amaranth and linen in the ratio 60:30:10.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_7-Application_of_the_Hierarchy_Analysis_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Traffic Flow Simulation System to Minimize Intersection Waiting Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090506</link>
        <id>10.14569/IJACSA.2018.090506</id>
        <doi>10.14569/IJACSA.2018.090506</doi>
        <lastModDate>2018-05-31T15:24:03.5800000+00:00</lastModDate>
        
        <creator>Jang Seung-Ju</creator>
        
        <subject>Traffic flow simulation; SUMO simulator; reduce traffic time; intersection traffic flow; simulation design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This paper designs a traffic simulation system for minimizing intersection waiting time. We use the SUMO simulator, a widely used traffic flow simulation tool, to set the route from the source to the destination and measure the time required under the existing intersection signal system. Through this simulation, we measure how much the proposed system can minimize the waiting time. To minimize the intersection waiting time, it is assumed that there is a loop sensor that can recognize whether there is a waiting vehicle in each direction of the intersection. Using this information, the signal lamp shows a waiting signal for a direction with no waiting vehicle, and a driving signal when there is a waiting or entering vehicle. In this paper, we try to reduce the time required for vehicles to arrive at their destination by making traffic flow smoothly, without any expense such as road expansion, through this limited system.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_6-Design_of_Traffic_Flow_Simulation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel E-Mail Network Evolution Model based on user Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090505</link>
        <id>10.14569/IJACSA.2018.090505</id>
        <doi>10.14569/IJACSA.2018.090505</doi>
        <lastModDate>2018-05-31T15:24:03.5330000+00:00</lastModDate>
        
        <creator>Lejun ZHANG</creator>
        
        <creator>Tongxin ZHOU</creator>
        
        <creator>Chunhui ZHAO</creator>
        
        <creator>Zilong JIN</creator>
        
        <subject>Information characteristics; e-mail; network evolution; complex network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>E-mail is one of the main means of communication in society today, and it forms a typical social network. Studying the evolution of the social network structure by constructing an e-mail network evolution model is of great significance to the literature. In this paper, we first analyze the e-mail network by constructing an e-mail network communication model; this mainly includes analysis of the structure of the e-mail network and of the user information in the e-mail network. Then, we propose an e-mail network evolution model based on the characteristics of user information and give the specific evolutionary steps. Finally, simulation experiments are carried out to analyze the characteristics of the model. Experiments show that the nodes are characterized by a power-law distribution and that, compared with other models, the proposed model is closer to the real network, so it has important practical significance.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_5-A_Novel_E_mail_Network_Evolution_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mixed Profile Method of Speed and Location for Robotic Arms Motion used for Precise Positioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090504</link>
        <id>10.14569/IJACSA.2018.090504</id>
        <doi>10.14569/IJACSA.2018.090504</doi>
        <lastModDate>2018-05-31T15:24:03.5030000+00:00</lastModDate>
        
        <creator>Liliana Marilena Matica</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Helga Silaghi</creator>
        
        <creator>Andrei Silaghi</creator>
        
        <subject>Sampling period of time; waypoints; location matrix for a robotic arm; acceleration; deceleration; motion stage; mixt profile of speed; trapezoidal profile; parabolic profile</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>The paper describes a new real-time computation method named Mixt Profile of Speed (MPS), which is used to obtain the value of the speed, at every sampling period of time, during the acceleration and deceleration stages, where the motion has three stages: 1) acceleration, 2) motion with an imposed constant speed, and 3) deceleration. The method determines the location of a robotic arm at every sampling period of time. The originality of this new computation method concerns the deceleration stage; it achieves accurate positioning at the end of the motion within a well-determined interval of time. During the imposed constant-speed stage, the trajectory is imposed and is linear or circular. The ADNIA algorithm (numerical differential analysis interpolation algorithm) can be implemented at this stage (during the motion with the imposed constant speed of the robotic arm) to ensure maximum precision in computing the waypoints’ Cartesian coordinates.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_4-Mixed_Profile_Method_of_Speed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic-Controlled 6-DOF Robotic Arm Color-based Sorter with Machine Vision Feedback</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090503</link>
        <id>10.14569/IJACSA.2018.090503</id>
        <doi>10.14569/IJACSA.2018.090503</doi>
        <lastModDate>2018-05-31T15:24:03.4870000+00:00</lastModDate>
        
        <creator>Alexander C. Abad</creator>
        
        <creator>Dino Dominic Ligutan</creator>
        
        <creator>Elmer P. Dadios</creator>
        
        <creator>Levin Jaeron S. Cruz</creator>
        
        <creator>Michael Carlo D.P. Del Rosario</creator>
        
        <creator>Jho Nathan Singh Kudhal</creator>
        
        <subject>Color-based sorter; degrees of freedom; fuzzy logic; joint controller; machine vision; robotic arm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>This study demonstrates the application of a fuzzy logic-based joint controller (FLJC) to a 6-DOF robotic arm as a color-based sorter system. The robotic arm with FLJC is integrated with a machine vision system that can discriminate different colors. The machine vision system, composed of a Kinect camera and a computer, was used to extract the coordinates of the gripper and of the objects within the image of the workspace. A graphical user interface with an underlying sorting algorithm allows the user to control the sorting process. Once the system is configured, the joint angles computed by the FLJC are transmitted serially to the microcontroller. The results show that the absolute error of the gripper coordinates is less than 2 cm and that the machine vision achieves at least 95% accuracy in color discrimination for both first- and second-level stacked color objects.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_3-Fuzzy_Logic_Controlled_6_DOF_Robotic_Arm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flow-Length Aware Cache Replacement Policy for Packet Processing Cache</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090502</link>
        <id>10.14569/IJACSA.2018.090502</id>
        <doi>10.14569/IJACSA.2018.090502</doi>
        <lastModDate>2018-05-31T15:24:03.4570000+00:00</lastModDate>
        
        <creator>Hayato Yamaki</creator>
        
        <subject>Router; packet processing; cache replacement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Recent core routers are required to process packets not only at high throughput but also with low power consumption due to the increase in network traffic. Packet processing cache (PPC) is one of the effective approaches to meeting these requirements. PPC allows a packet to be processed without accessing a ternary content addressable memory (TCAM) by storing the TCAM lookup results of a flow in a cache. Because the cache miss rate of PPC directly impacts the packet-processing throughput and the power consumption of core routers, it is important for PPC to reduce the number of cache misses. In this study, we focus on the characteristics of flows and propose an effective cache replacement policy for PPC. The proposed policy, named Hit Dominance Cache (HDC), divides the cache into two areas and assigns flows to the appropriate area so as to evict mice flows rapidly and retain elephant flows preferentially. Simulation results with 15 real network traces show that HDC can reduce the number of cache misses in PPC by up to 29.1%, and by 12.5% on average, compared to the 4-way LRU conventionally used in PPC. Furthermore, a hardware implementation in Verilog-HDL shows that the hardware costs of HDC are comparable to those of 4-way LRU, even though HDC performs as if the cache had 8-way set associativity. Finally, we show that HDC can achieve 503 Gbps with 88.8% of the energy of conventional PPC (20.5% of the energy of a TCAM-only architecture).</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_2-Flow_Length_Aware_Cache_Replacement_Policy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cardiotocographic Diagnosis of Fetal Health based on Multiclass Morphologic Pattern Predictions using Deep Learning Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090501</link>
        <id>10.14569/IJACSA.2018.090501</id>
        <doi>10.14569/IJACSA.2018.090501</doi>
        <lastModDate>2018-05-31T15:24:03.3800000+00:00</lastModDate>
        
        <creator>Julia H. Miao</creator>
        
        <creator>Kathleen H. Miao</creator>
        
        <subject>Activation function; deep learning; deep neural network; dropout; ensemble learning; multiclass; regularization;  cardiotocography; complications during pregnancy; fetal heart rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(5), 2018</description>
        <description>Medical complications of pregnancy and pregnancy-related deaths remain a major global challenge today. Internationally, about 830 maternal deaths occur every day due to pregnancy-related or childbirth-related complications; in fact, almost 99% of all maternal deaths occur in developing countries. In this research, an alternative and enhanced artificial intelligence approach is proposed for cardiotocographic diagnosis in fetal assessment based on multiclass morphologic pattern predictions, covering 10 target classes with imbalanced samples, using deep learning classification models. The developed model is used to distinguish and classify the presence or absence of multiclass morphologic patterns for predicting complications during pregnancy. The testing results showed that the developed deep neural network model achieved an accuracy of 88.02%, a recall of 84.30%, a precision of 85.01%, and an F-score of 0.8508 on average. Thus, the developed model can provide highly accurate and consistent diagnoses for fetal assessment regarding complications during pregnancy, thereby preventing and/or reducing fetal mortality as well as maternal mortality during and following pregnancy and childbirth, especially in low-resource settings and developing countries.</description>
        <description>http://thesai.org/Downloads/Volume9No5/Paper_1-Cardiotocographic_Diagnosis_of_Fetal_Health.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Algorithm for Wireless Sensor Network using Fuzzy C-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090465</link>
        <id>10.14569/IJACSA.2018.090465</id>
        <doi>10.14569/IJACSA.2018.090465</doi>
        <lastModDate>2018-05-03T13:12:13.5700000+00:00</lastModDate>
        
        <creator>Abhilasha Jain</creator>
        
        <creator>Ashok Kumar Goel</creator>
        
        <subject>WSN, clustering; sleep-awake; virtual grids; multi-hop; routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Energy efficiency is a vital issue in wireless sensor networks. In this paper, an energy efficient routing algorithm is proposed with the aim of enhancing network lifetime. Fuzzy C-Means clustering is used to form an optimum number of static clusters. A concept of coherence is used to eliminate redundant data generation and transmission, which avoids undue loss of energy. Intra-cluster and inter-cluster gateways are used to keep nodes from transmitting data over long distances. A new strategy is proposed to select robust nodes near the sink for direct data transmission. The proposed algorithm is compared with LEACH, MR-LEACH, MH-LEACH, and OCM-FCM in terms of lifetime, average energy consumption, and throughput. The results confirm that the proposed algorithm performs considerably better than the other algorithms and is more suitable for implementation in wireless sensor networks.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_65-Energy_Efficient_Algorithm_for_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Segmentation of Whole Cardiac CT Images based on Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090464</link>
        <id>10.14569/IJACSA.2018.090464</id>
        <doi>10.14569/IJACSA.2018.090464</doi>
        <lastModDate>2018-05-03T13:12:13.5070000+00:00</lastModDate>
        
        <creator>Rajpar Suhail Ahmed</creator>
        
        <creator>Jie Liu</creator>
        
        <creator>Muhammad Zahid Tunio</creator>
        
        <subject>Cardiac CT; segmentation; deep learning; automatic location; contour inference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Segmentation of whole-cardiac CT image sequences is key to computer-aided diagnosis and the study of lesions in the heart. Due to dilation, contraction, and the flow of blood, cardiac CT images are prone to weak boundaries and artifacts. Traditional manual segmentation methods are time-consuming, labor-intensive, and prone to over-segmentation. Therefore, an automatic cardiac CT image sequence segmentation technique is proposed. The technique employs a deep learning algorithm to learn the segmentation function from ground truth data. A convolutional neural network (CNN) locates the center of the heart and filters out ribs, muscles, and other tissue that is not part of the heart region. Stacked denoising auto-encoders are then used to automatically infer the contours of the heart. Nine cardiac CT image sequence datasets are used to validate the method. The results show that the proposed algorithm performs best on cardiac CT images with a complex background, low distinctness between the background and the target area, and diverse internal structure. It can filter out most of the non-heart tissue, which helps doctors observe a patient’s heart health.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_64-Automated_Segmentation_of_Whole_Cardiac.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combating the Looping Behavior: A Result of Routing Layer Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090463</link>
        <id>10.14569/IJACSA.2018.090463</id>
        <doi>10.14569/IJACSA.2018.090463</doi>
        <lastModDate>2018-04-30T09:30:57.7930000+00:00</lastModDate>
        
        <creator>David Samuel Bhatti</creator>
        
        <creator>Kinza Sardar</creator>
        
        <creator>Meh Jabeen</creator>
        
        <creator>Umair B. Chaudhry</creator>
        
        <subject>Wormholes attack; wireless routing layer attack; detection time; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The routing layer is one of the most important layers of the network stack. In wireless ad hoc networks, it becomes even more significant because nodes act as relay nodes or routers in the network. This characteristic puts them at risk of routing attacks. A wormhole is the most treacherous attack on the routing layer of wireless ad hoc networks. Existing techniques require extra hardware or clock synchronization, or make restrictive assumptions, to deal with this attack. We propose a simple behavior-based approach that uses a small amount of memory to record a few packets received and sent by neighboring nodes. From this information, the behavior of these nodes is classified as benign or malicious. Nodes exhibiting malicious behavior are placed in a blocked-node list, and their identities are broadcast in the network; no legal node then entertains any packet from these nodes. The approach has been simulated and verified in ns2.30, where it detects and isolates wormhole nodes successfully. The current study focuses on the looping behavior of this attack.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_63-Combating_the_Looping_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing based on Bee Colony Algorithm with Partitioning of Public Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090462</link>
        <id>10.14569/IJACSA.2018.090462</id>
        <doi>10.14569/IJACSA.2018.090462</doi>
        <lastModDate>2018-04-30T09:30:57.7630000+00:00</lastModDate>
        
        <creator>Pouneh Ehsanimoghadam</creator>
        
        <creator>Mehdi Effatparvar</creator>
        
        <subject>Cloud computing; load balancing; bee colony algorithm; public cloud; cloud partitioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Cloud computing is an emerging trend in the IT industry that provides new opportunities to control costs associated with the creation and maintenance of applications. Of prevalent issues in cloud computing, load balancing is a primary one as it has a significant impact on efficiency and plays a leading role in improved management. In this paper, by using a heuristic search technique called the bee colony algorithm, tasks are balanced on a virtual machine such that their waiting time in the queue is minimized. In the proposed model, the cloud is partitioned into several sectors with many nodes as resources of distributed computing. Furthermore, the indices of speed and cost are considered to prioritize virtual machines. The results of a simulation show that the proposed model outperforms prevalent algorithms as it balances the prioritization of tasks on the virtual machine as well as the entire cloud system and minimizes the waiting times of tasks in the queue. It also reduces the completion time of tasks in comparison with the HBB-LB, WRR, and FCFS algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_62-Load_Balancing_based_on_Bee_Colony_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Project Risk Management Model based on Scrum Framework and Prince2 Methodology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090461</link>
        <id>10.14569/IJACSA.2018.090461</id>
        <doi>10.14569/IJACSA.2018.090461</doi>
        <lastModDate>2018-04-30T09:30:57.7300000+00:00</lastModDate>
        
        <creator>Mahdi Mousaei</creator>
        
        <creator>Taghi Javdani Gandomani</creator>
        
        <subject>Agile software development; risk management; risk management hybrid model; prince2; scrum; agile risk management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>With increasing competition in the software industry, software companies need to manage the risks of software projects effectively, with minimal time and cost, to deliver high-quality products. The high frequency of errors and failures in software projects is indicative of the human and financial costs borne by software projects and teams. One reason for the failure of software projects is the lack of a risk management mechanism in the software development process; properly implemented risk management can increase the success rate of such projects. In most projects, risk management activities are strongly confined to the adopted software methodology, so a solution or model is needed to overcome this constraint. Scrum is one of the most popular software development methodologies and has recently received considerable attention from software teams, yet it seems to have paid little attention to risk management. Focusing on this weakness, this research provides a model for risk management, developed with the participation of 52 Agile experts from six different countries, that applies the Prince2 project management framework within the Scrum methodology. The main goals of this model are to improve the coverage and appropriateness of the risk management mechanism in software projects, increase the project success rate, provide good estimates of the required time, and improve product quality and quality parameters such as usability, flexibility, efficiency, and reliability.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_61-A_New_Project_Risk_Management_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure and Efficient Routing Mechanism in Mobile Ad-Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090460</link>
        <id>10.14569/IJACSA.2018.090460</id>
        <doi>10.14569/IJACSA.2018.090460</doi>
        <lastModDate>2018-04-30T09:30:57.7000000+00:00</lastModDate>
        
        <creator>Masroor Ali</creator>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Meharban Khan</creator>
        
        <creator>Abdul Hafeez</creator>
        
        <subject>Ad-hoc on-demand distance vector; control ACK; mobile ad-hoc network; network throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Securing crucial information is considered one of the most complex, critical, and time-consuming tasks, and protecting information is a central challenge in the design of an organization’s information system. This research investigates a significant threat to network security: the selective forwarding attack. The work proposes a framework that detects selective forwarding attacks and identifies the harmful hosts residing in an ad-hoc structure. Our solution is split into two phases: the first phase detects selective forwarding attacks, and the second phase identifies the malicious nodes. The performance of the proposed model is evaluated based on network throughput, with the goal of enhancing security. Simulation of the proposed model is performed using NetLogo, and the results show an improvement of 20% in network throughput.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_60-Secure_and_Efficient_Routing_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Penalized-Likelihood Image Reconstruction Algorithm for Positron Emission Tomography Exploiting Root Image Size</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090459</link>
        <id>10.14569/IJACSA.2018.090459</id>
        <doi>10.14569/IJACSA.2018.090459</doi>
        <lastModDate>2018-04-30T09:30:57.6670000+00:00</lastModDate>
        
        <creator>Munir Ahmad</creator>
        
        <creator>H. M. Tanveer</creator>
        
        <creator>Z.A. Shaikh</creator>
        
        <creator>Furkh Zeshan</creator>
        
        <creator>Usman Sharif Bajwa</creator>
        
        <subject>Penalized-Likelihood expectation maximization; median root priors; maximum-likelihood expectation maximization; full-width-at-half-maximum  </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Iterative image reconstruction methods are considered better than analytical reconstruction methods in terms of their noise characteristics and quantification ability. Penalized-Likelihood Expectation Maximization (PLEM) image reconstruction methods can incorporate prior information about the object being imaged and have the flexibility to include various prior functions based on different image descriptions. Median root priors intrinsically take into account salient image features, such as edges, which are smoothed away by quadratic priors. Generally, a 3×3-pixel neighborhood support, or root image size, is used to evaluate the median. We evaluate different root image sizes to observe their effect on the final reconstructed image. Our results show that, at higher parameter values, root image size has a pronounced effect on the evaluated image quality parameters, such as the bias of the reconstructed image relative to the phantom image, and the contrast and resolution of the reconstructed object. Our results also show that a small root image is better for small objects, whereas for objects with a diameter more than two to three times the resolution of the reconstructed object, a larger root image size is preferable in terms of reconstruction speed and image quality.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_59-A_Penalized_Likelihood_Image_Recontrcuction_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Plants Application for Smart Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090458</link>
        <id>10.14569/IJACSA.2018.090458</id>
        <doi>10.14569/IJACSA.2018.090458</doi>
        <lastModDate>2018-04-30T09:30:57.6530000+00:00</lastModDate>
        
        <creator>Khurshid Aliev</creator>
        
        <creator>Mohammad Moazzam Jawaid</creator>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Eros Pasero</creator>
        
        <creator>Alim Pulatov</creator>
        
        <subject>Internet of Things; wireless sensor networks; smart agriculture; smartphone applications; artificial neural network; nonlinear autoregressive model; temperature forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Nowadays, the Internet of Things (IoT) is receiving great attention due to its potential strength and its ability to be integrated into any complex system. The IoT delivers data acquired from the environment to the Internet through service providers, which lets users view the data numerically or as plots. In addition, it allows objects located far away to be sensed and controlled remotely through embedded devices, which is important in the agriculture domain. Developing such a system for the IoT is a complex task due to the wide variety of devices, link-layer technologies, and services. This paper proposes a practical approach to acquiring temperature, humidity, and soil moisture data for plants. To accomplish this, we developed a prototype device and an Android application that acquires physical data and sends it to the cloud. In the subsequent part of the current research, we focus on a temperature forecasting application. Meteorological parameters have a profound influence on crop growth, development, and agricultural yields. In response to this fact, an application is developed that forecasts maximum and minimum temperatures 10 days ahead using a type of recurrent neural network.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_58-Internet_of_Plants_Application_for_Smart_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recovery of User Interface Web Design Patterns using Regular Expressions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090457</link>
        <id>10.14569/IJACSA.2018.090457</id>
        <doi>10.14569/IJACSA.2018.090457</doi>
        <lastModDate>2018-04-30T09:30:57.6200000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Faiza Tariq</creator>
        
        <creator>Dr. Ghulam Rasool</creator>
        
        <subject>Design patterns; user interface patterns; web applications; web reverse engineering; regular expressions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>User Interface Web Design Patterns are standard solutions for the development of web applications. The recovery of these patterns from web applications supports program comprehension, reusability, reverse engineering, re-engineering, and maintenance of legacy web applications, but it becomes arduous due to the heterogeneous nature of web applications. Authors have presented different catalogs and recovery approaches for extracting User Interface Web Design Patterns from source code over the last decade and a half. There is still a lack of formal specifications for web design patterns, which are important for their recovery from source code. The objective of this paper is to specify User Interface Web Design Patterns (UIWDPs) using a semiformal specification technique and to use these specifications to recover patterns from the source code of web applications using regular expressions. Fifty-five feature types are identified for the specification of 15 UIWDPs. We evaluated our approach on 75 randomly selected web applications and recovered 15 UIWDPs. Standard deviation, precision, recall, and F-score measures are used to evaluate the accuracy of our approach.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_57-Recovery_of_User_Interface_Web_Design_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defining Network Exposure Metrics in Security Risk Scoring Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090456</link>
        <id>10.14569/IJACSA.2018.090456</id>
        <doi>10.14569/IJACSA.2018.090456</doi>
        <lastModDate>2018-04-30T09:30:57.6070000+00:00</lastModDate>
        
        <creator>Eli Weintraub</creator>
        
        <creator>Yuval Cohen</creator>
        
        <subject>Security; cyber-attack; risk scoring; vulnerability; exposure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Organizations are exposed to cyber-attacks on a regular basis, and managers in these organizations use scoring systems to evaluate the risks of the attacks they face. Information security methodologies define three major security objectives: confidentiality, integrity, and availability. This work focuses on defining new network exposure measures affecting availability. In existing security scoring models, network exposure risks are assessed by assigning availability measures on an ordinal scale based on users’ subjective assessments. In this work, quantitative objective measures are defined and presented, based on the specific organizational network, thus improving the accuracy of the scores computed by current security risk scoring models.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_56-Defining_Network_Exposure_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVM Optimization for Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090455</link>
        <id>10.14569/IJACSA.2018.090455</id>
        <doi>10.14569/IJACSA.2018.090455</doi>
        <lastModDate>2018-04-30T09:30:57.5900000+00:00</lastModDate>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <creator>Noureen Hameed</creator>
        
        <creator>Iftikhar Ali</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <subject>Sentiment analysis; polarity detection; machine learning technique; support vector machine (SVM); optimized SVM; grid search technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Exponential growth in mobile technology and mini computing devices has led to a massive increase in social media users, who continuously post their views and comments about the products and services they use. These views and comments can be extremely beneficial for companies interested in public opinion regarding their offered products or services. Such opinion could otherwise be obtained via questionnaires and surveys, which is without doubt a difficult and complex undertaking. The valuable information in comments and posts on micro-blogging sites can thus be used by companies to eliminate flaws and to improve products or services according to customer needs. However, manually extracting a general opinion from a staggering number of user comments is not feasible; a solution is to use an automatic method for sentiment mining. Support Vector Machine (SVM) is one of the most widely used classification techniques for polarity detection in textual data. This study proposes a technique to tune SVM performance using the grid search method for sentiment analysis. Three datasets are used for the experiments, and the performance of the proposed technique is evaluated using three information retrieval metrics: precision, recall, and F-measure.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_55-SVM_Optimization_for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of IPv4/IPv6 Transition Mechanisms for Real-Time Applications using OPNET Modeler</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090454</link>
        <id>10.14569/IJACSA.2018.090454</id>
        <doi>10.14569/IJACSA.2018.090454</doi>
        <lastModDate>2018-04-30T09:30:57.5600000+00:00</lastModDate>
        
        <creator>Khalid EL KHADIRI</creator>
        
        <creator>Ouidad LABOUIDYA</creator>
        
        <creator>Najib ELKAMOUN</creator>
        
        <creator>Rachid HILAL</creator>
        
        <subject>Dual stack; manual tunnel; 6to4; OPNET; VoIP; video conferencing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The potential depletion of IPv4 addresses has given rise to the development of a new version of the Internet Protocol, named IPv6. This version of the protocol offers many improvements, including an increase in the address space from 2^32 to 2^128 addresses and improvements in security, mobility, and quality of service. However, the transition from the current version to the new version (IPv4 to IPv6) is complicated and cannot be performed in a short time. The size and complexity of the Internet make this migration extremely difficult and time-consuming. The Internet Engineering Task Force (IETF) took this migration problem into account and proposed transition mechanisms as temporary solutions that allow IPv4 to coexist and operate in parallel with IPv6 networks. The dual stack, manual tunnel, and 6to4 automatic tunnel appear to be promising solutions, depending on their characteristics and benefits. In this paper, we study the performance of these transition mechanisms for real-time applications (VoIP and video conferencing) using the network simulator OPNET Modeler. Performance parameters such as delay, delay variation, jitter, MOS, and packet loss are measured for each transition mechanism. The results show that the dual stack transition mechanism gives better network performance than the tunneling mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_54-Performance_Evaluation_of_IPv4IPv6_Transition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Ads Detection in TV Transmissions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090453</link>
        <id>10.14569/IJACSA.2018.090453</id>
        <doi>10.14569/IJACSA.2018.090453</doi>
        <lastModDate>2018-04-30T09:30:57.5270000+00:00</lastModDate>
        
        <creator>Waseemullah </creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Umair Amin</creator>
        
        <subject>TV ads; video segmentation; semantic analysis; ad segmentation; unsupervised segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>A novel framework is presented that can segment semantic videos and detect commercials (ads) in a broadcast TV transmission. The proposed technique combines SURF features and color histograms in a weighted combination framework, detecting individual TV ads from the transmission after segmenting semantic videos and thereby achieving better results. The framework is designed for TV transmissions that do not use the black-frame technique between the ad and non-ad parts of the transmission, which is common in Pakistani TV channel transmissions; television transmission standards in Pakistan differ from those used in other countries. The framework uses an unsupervised technique to segment the semantic videos.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_53-Unsupervised_Ads_Detection_in_TV_Transmissions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid Intelligent System for Prediction of Medical Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090452</link>
        <id>10.14569/IJACSA.2018.090452</id>
        <doi>10.14569/IJACSA.2018.090452</doi>
        <lastModDate>2018-04-30T09:30:57.4970000+00:00</lastModDate>
        
        <creator>Sultan Noman Qasem</creator>
        
        <creator>Monirah Alsaidan</creator>
        
        <subject>Artificial neural network; galactic swarm optimization; particle swarm optimization; genetic algorithm; hybrid intelligent system; medical decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This paper proposes a hybrid intelligent system as a medical decision-support tool for data classification, based on a Neural Network with Galactic Swarm Optimization (NN-GSO) and a classification model. The goal of the hybrid intelligent system is to retain the advantages and reduce the disadvantages of its constituent models; the system is capable of learning from data sets and achieving strong classification performance. Accordingly, several algorithms were developed, including the Neural Network based on Galactic Swarm Optimization (NN-GSO), the Neural Network based on Particle Swarm Optimization (NN-PSO), and the Neural Network based on a Genetic Algorithm (NN-GA), to improve the network structure and accuracy rates. For the evaluation, the hybrid intelligent system was applied to multiple benchmark medical data sets obtained from the UCI Repository of Machine Learning. Three performance metrics useful for medical applications were calculated: accuracy, sensitivity, and specificity. The proposed algorithm was tested on various data sets representing binary and multi-class medical disease problems, and its performance was analyzed and compared with other methods using k-fold cross-validation. The significance test results show that the proposed algorithm is effective in training neural networks with good generalization ability and network structure for medical disease detection.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_52-A_New_Hybrid_Intelligent_System_for_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Jamming Attacks in Wireless Networks During a Transmission Cycle: Stackelberg Game with Hierarchical Learning Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090451</link>
        <id>10.14569/IJACSA.2018.090451</id>
        <doi>10.14569/IJACSA.2018.090451</doi>
        <lastModDate>2018-04-30T09:30:57.4670000+00:00</lastModDate>
        
        <creator>Moulay Abdellatif LMATER</creator>
        
        <creator>Majed Haddad</creator>
        
        <creator>Abdelillah Karouit</creator>
        
        <creator>Abdelkrim Haqiq</creator>
        
        <subject>Wireless networks; jamming attacks; game theory; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Due to the broadcast nature of the shared medium, wireless communications are vulnerable to malicious attacks. In this paper, we tackle the problem of jamming in wireless networks when both the jammer and the transmitter transmit at a non-zero cost. We focus on a jammer who keeps track of the re-transmission attempts of a packet until it is dropped. Firstly, we consider a power control problem following a Nash game model, in which all players act simultaneously. Secondly, we consider a Stackelberg game model, in which the transmitter is the leader and the jammer is the follower: since the jammer is able to sense the transmission power, the transmitter adjusts its transmission power accordingly, knowing that the jammer will react. We provide closed-form expressions for the equilibrium strategies when both the transmitter and the jammer have complete information. Thereafter, we consider a worst-case scenario in which the transmitter has incomplete information while the jammer has complete information. We introduce a reinforcement learning method so that the transmitter can act autonomously in a dynamic environment without knowing the above game model. It turns out that, despite the jammer's ability to sense the active channel, the transmitter can enhance its efficiency by predicting the jammer's reaction to its own strategy.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_51-Smart_Jamming_Attacks_in_Wireless_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object-Oriented Context Description for Movie Based Context-Aware Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090450</link>
        <id>10.14569/IJACSA.2018.090450</id>
        <doi>10.14569/IJACSA.2018.090450</doi>
        <lastModDate>2018-04-30T09:30:57.3870000+00:00</lastModDate>
        
        <creator>Hazriani </creator>
        
        <creator>Tsuneo Nakanishi</creator>
        
        <creator>Kenji Hisazumi</creator>
        
        <creator>Akira Fukuda</creator>
        
        <subject>Movie based context-aware language learning (MBCALL); object-oriented context model (OOCM); context description language; case grammar</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Context-aware ubiquitous learning is a promising way to learn languages; however, constructing, deploying, and using such a specialized system requires much effort from developers and operators. As an alternative, this paper proposes movie-based context-aware language learning (MBCALL), which enables learners to learn languages through quizzes generated from the virtual contexts occurring in the movie being replayed. Since fully automatic context capture from a movie is impossible, the authors define an object-oriented context model (OOCM) and a textual context description language conforming to the OOCM, so that the movie context can easily be described by human annotators. The OOCM adopts the case grammar concept from natural language processing, which enables quiz generation based on the types of words for objects, actions, and modes found in the movie. An evaluation with a short movie and three subjects shows that the OOCM can guide them to enrich the information included in the movie context; therefore, more types of quizzes can be generated from the movie context.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_50-Object_Oriented_Context_Description_for_Movie.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convex Hybrid Restoration and Segmentation Model for Color Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090449</link>
        <id>10.14569/IJACSA.2018.090449</id>
        <doi>10.14569/IJACSA.2018.090449</doi>
        <lastModDate>2018-04-30T09:30:57.3570000+00:00</lastModDate>
        
        <creator>Matiullah </creator>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Noor Badshah</creator>
        
        <creator>Fahim ullah</creator>
        
        <creator>Ziaulla</creator>
        
        <subject>Color images; image restoration; image segmentation; noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Image restoration and segmentation are important areas in digital image processing and computer vision. In this paper, a new convex hybrid model is proposed for joint restoration and segmentation of colour images. The proposed Convex Hybrid model is compared with existing state-of-the-art variational models, namely the Cai model, the Chan-Vese Vector-Valued (CV-VV) model, and the Local Chan-Vese (LCV) model, under noise such as Salt &amp; Pepper and Gaussian. Four additional experiments were performed with increasing combinations of Salt &amp; Pepper, Gaussian, Speckle, and Poisson noise to thoroughly verify the performance of the Convex Hybrid model. The results reveal that the Convex Hybrid model qualitatively outperforms the others, successfully removing the noise and segmenting the required object properly. The Convex Hybrid model uses the colour Total Variation (TV) as a regularizer for denoising the corrupted image. Being convex, the model attains a global minimum. The PDEs obtained from the minimisation of the Convex Hybrid model are solved numerically using an explicit scheme.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_49-Convex_Hybrid_Restoration_and_Segmentation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Near-Hidden and Information Asymmetry Interference Problems in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090448</link>
        <id>10.14569/IJACSA.2018.090448</id>
        <doi>10.14569/IJACSA.2018.090448</doi>
        <lastModDate>2018-04-30T09:30:57.3270000+00:00</lastModDate>
        
        <creator>Sadiq Shah</creator>
        
        <creator>Muhammad Atif</creator>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Misbah Daud</creator>
        
        <creator>Fahim Khan Khalil</creator>
        
        <subject>Wireless Mesh Network (WMN); near hidden and information asymmetry interference; non-overlapping channels</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Multi-Radio Multi-Channel (MRMC) Wireless Mesh Networks (WMNs) have made rapid progress in recent years to become a preferred option for end users due to their reliability and low cost. Although WMNs are already in use, their capacity is limited by information asymmetry (IA) and near hidden (NH) interference among frequency channels. In the past, various research studies have investigated both of these issues. To minimise both interference types and maximise network capacity, channel assignment has always been a critical area of research in WMNs. In this research, a comparative analysis of NH and IA interference is performed based on their impact on network capacity, using the existing Optimal Channel Assignment Model (OCAM). Extensive simulations show that NH interference plays a major role in degrading overall network capacity, by up to 4% in comparison to IA interference. Further, this research proposes an optimal channel assignment model, the Information Asymmetry and Near Hidden Minimization (INM) model, which jointly minimises both NH and IA interference. The proposed model considers the three non-overlapping channels 1, 6, and 11 of the IEEE 802.11b standard. For the simulations, four different multi-radio multi-channel wireless mesh topologies were considered to find the average network capacity. Extensive simulation in OPNET shows that the proposed INM model performs 7% better than the existing OCAM model in maximising WMN net capacity.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_48-A_Comparison_of_Near_Hidden_and_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Quadrotor Roll Loop using Frequency Identification Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090447</link>
        <id>10.14569/IJACSA.2018.090447</id>
        <doi>10.14569/IJACSA.2018.090447</doi>
        <lastModDate>2018-04-30T09:30:57.2930000+00:00</lastModDate>
        
        <creator>Mizouri Walid</creator>
        
        <creator>Najar Slaheddine</creator>
        
        <creator>Aoun Mohamed</creator>
        
        <creator>Bouabdallah Lamjed</creator>
        
        <subject>Quadrotor; modelling; frequency domain; closed loop identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Model estimation is an important step in quadrotor control design because model uncertainties can cause unstable behavior, especially with non-robust control methods. In this paper, a modeling approach for a quadrotor prototype is proposed. First, an initial dynamic model of the quadrotor UAV based on the Euler-Lagrange formalism was developed. Then the roll system was estimated using a closed-loop identification method and frequency domain analysis. Experimental tests were performed on the roll system to validate the estimated dynamic model.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_47-Modeling_of_Quadrotor_Roll_Loop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outsourcing of Secure k-Nearest Neighbours Interpolation Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090446</link>
        <id>10.14569/IJACSA.2018.090446</id>
        <doi>10.14569/IJACSA.2018.090446</doi>
        <lastModDate>2018-04-30T09:30:57.2630000+00:00</lastModDate>
        
        <creator>Muhammad Rifthy Kalideen</creator>
        
        <creator>Bulent Tugrul</creator>
        
        <subject>Cloud computing; k-Nearest neighbour; spatial interpolation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Cloud computing has become essential for enterprises. Most large companies are moving their services and data to cloud servers, which offer flexibility and efficiency. A data owner (DO) hires a cloud service provider (CSP) to store its data and carry out the related computation. The query owner (QO) sends the CSP a request that is crucial for its future plans; the CSP performs all necessary calculations and returns the result to the QO. Neither the data owner nor the query owner wants to reveal its private data to anyone. k-Nearest Neighbour (k-NN) interpolation is one of the essential algorithms for producing a prediction value at an unmeasured location: it finds the k nearest neighbours around the query point to produce an output. Oblivious RAM (ORAM) has been used to protect privacy in cloud computing. In our work, we perform the k-NN method using a kd-tree and ORAM without revealing the data owner's or the query owner's confidential data to each other or to third parties. The proposed solution is analysed to ensure that it provides accurate and reliable predictions while preserving the privacy of all parties.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_46-Outsourcing_of_Secure_k_Nearest_Neighbours.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulated Annealing with Levy Distribution for Fast Matrix Factorization-Based Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090445</link>
        <id>10.14569/IJACSA.2018.090445</id>
        <doi>10.14569/IJACSA.2018.090445</doi>
        <lastModDate>2018-04-30T09:30:57.2470000+00:00</lastModDate>
        
        <creator>Mostafa A. Shehata</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <creator>Amr A. Badr</creator>
        
        <subject>Simulated annealing; levy distribution; matrix factorization; collaborative filtering; recommender systems; metaheuristic optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Matrix factorization is one of the best approaches for collaborative filtering because of its high accuracy in representing the latent factors of users and items. Its main disadvantages are its computational complexity and the difficulty of parallelizing it, especially for very large matrices. In this paper, we introduce a new method for collaborative filtering based on matrix factorization that combines simulated annealing with the Levy distribution. Using this method, good solutions are achieved in acceptable time with low computational cost, compared to methods such as stochastic gradient descent, alternating least squares, and weighted non-negative matrix factorization.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_45-Simulated_Annealing_with_Levy_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Important Sets by using K-Skyband Query for Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090444</link>
        <id>10.14569/IJACSA.2018.090444</id>
        <doi>10.14569/IJACSA.2018.090444</doi>
        <lastModDate>2018-04-30T09:30:57.2170000+00:00</lastModDate>
        
        <creator>Md. Anisuzzaman Siddique</creator>
        
        <creator>Asif Zaman</creator>
        
        <creator>Yasuhiko Morimoto</creator>
        
        <subject>Set selection; skyline query; skyline set query; skyband query; skyband-set query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In this paper, we consider the problem of selecting “sets” from a database. For the conventional selection problem, which selects “objects”, the skyline query has been utilized, since it retrieves the important objects that are not dominated by any other object in the database. However, it is not effective when we have to select important sets, each of which contains more than one object. Thus, we consider a “set skyline query” that retrieves non-dominated sets of objects, which we call “object sets”, from a database. The K-skyband query is a popular variant of the skyline query: it retrieves the objects that are each not dominated by K or more other objects. In this paper, we propose the “K-skyband set query”, which retrieves important sets instead of objects. We investigated the properties of the query and developed pruning strategies to avoid unnecessary enumeration of object sets and comparisons among them. Intensive experiments were performed to examine the implemented algorithm, and the results demonstrate the effectiveness and efficiency of the proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_44-Selection_of_Important_Sets_by_using_K_Skyband.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Traffic Congestion Framework for Smart Riyadh City based on IoT Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090443</link>
        <id>10.14569/IJACSA.2018.090443</id>
        <doi>10.14569/IJACSA.2018.090443</doi>
        <lastModDate>2018-04-30T09:30:57.1830000+00:00</lastModDate>
        
        <creator>Hailah Ghanem Al-Majhad</creator>
        
        <creator>Arif Bramantoro</creator>
        
        <creator>Irfan Syamsuddin</creator>
        
        <creator>Arda Yunianta</creator>
        
        <creator>Ahmad Hoirul Basori</creator>
        
        <creator>Anton Satria Prabuwono</creator>
        
        <creator>Omar M. Barukab</creator>
        
        <subject>Traffic congestion framework; Internet of Things; smart city; business process execution language; Software as a Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The Internet of Things (IoT) has become one of the most challenging research topics, aiming to connect physical things through the internet by creating a virtual identity for everything. Traffic congestion in Riyadh city is chosen as a case study because the proliferation of vehicles on Riyadh roads has led to complaints from citizens, and the traffic department currently offers no reliable service that enables citizens to access traffic information. A new traffic congestion framework for Riyadh is therefore proposed to support the development of traffic congestion services. The framework aims to build on the current Riyadh road infrastructure and apply the IoT paradigm to detect traffic congestion. Sensing devices identify congestion in the traffic flow, and multiple services are provided to citizens, such as vehicle counting, live video streaming, and rerouting. Citizens can access all the services through a proposed mobile application connected to the internet, as all the services are integrated with a public map service. Using these services, citizens can identify the exact location where congestion occurs, and an alternative route can easily be provided. To achieve this, the Business Process Execution Language (BPEL) is embedded in one of the framework layers; owing to the effectiveness of BPEL, workflows are built to combine the proposed services with the legacy Riyadh services. This approach clearly defines how the services are executed through BPEL models.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_43-A_Traffic_Congestion_Framework_for_Smart_Riyadh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trajectory based Arabic Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090442</link>
        <id>10.14569/IJACSA.2018.090442</id>
        <doi>10.14569/IJACSA.2018.090442</doi>
        <lastModDate>2018-04-30T09:30:57.1370000+00:00</lastModDate>
        
        <creator>Ala addin I. Sidig</creator>
        
        <creator>Sabri A. Mahmoud</creator>
        
        <subject>Trajectory processing; sign language recognition; ensemble classifier; polygon description; parameters tuning; signer independent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Deaf and hearing-impaired people use their hands to convey their thoughts, performing descriptive gestures that form a sign language. A sign language recognition system translates these gestures into a form of spoken language. Such systems face several challenges, such as the high similarity between different signs, the difficulty of determining the start and end of signs, and the lack of comprehensive benchmarking databases. This paper proposes a system for the recognition of Arabic sign language using the 3D trajectory of the hands. The proposed system models the trajectory as a polygon, extracts features that describe this polygon, and feeds them to a classifier to recognize the signed word. The system is tested on a database of 100 words collected using Kinect. The work is also compared with other published works on a publicly available dataset, which shows the superiority of the proposed technique. The system is tested for both signer-dependent and signer-independent recognition.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_42-Trajectory_based_Arabic_Sign_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Specification and Analysis of Termination Detection by Weight-throwing Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090441</link>
        <id>10.14569/IJACSA.2018.090441</id>
        <doi>10.14569/IJACSA.2018.090441</doi>
        <lastModDate>2018-04-30T09:30:57.1230000+00:00</lastModDate>
        
        <creator>Imran Riaz Hasrat</creator>
        
        <creator>Muhammad Atif</creator>
        
        <creator>Muhammad Naeem</creator>
        
        <subject>Termination detection; weight-throwing protocol; formal specification and verification; model checking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Termination detection is a critical problem in distributed systems. A distributed computation is called terminated if all of its processes are idle and there are no in-transit messages in the communication channels. A distributed termination detection protocol is used to detect the state of a process at any time, i.e., terminated, idle, or active. A termination detection protocol based on a weight-throwing scheme is described in Yu-Chee Tseng, “Detecting Termination by Weight-throwing in a Faulty Distributed System”, JPDC, 15 February 1995. We apply model checking techniques to verify this protocol, using the UPPAAL tool-set for formal specification and verification. Our results show that the protocol fails to fulfil some of its functional requirements.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_41-Formal_Specification_and_Analysis_of_Termination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Concentrated Solar Power Site Suitability using GIS-MCDM Technique taken UAE as a Case Study </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090440</link>
        <id>10.14569/IJACSA.2018.090440</id>
        <doi>10.14569/IJACSA.2018.090440</doi>
        <lastModDate>2018-04-30T09:30:57.0770000+00:00</lastModDate>
        
        <creator>Mohammad Basheer Alqaderi</creator>
        
        <creator>Walid Emar</creator>
        
        <creator>Omar A. Saraereh</creator>
        
        <subject>Analytic hierarchy process (AHP); concentrated solar power (CSP); multi-criteria decision making (MCDM); direct normal irradiance (DNI); United Arab Emirates (UAE); hot-spot locations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In recent years, countries have begun to reduce the consumption of fossil fuels and replace them with renewable energy resources, in order to mitigate the environmental effects of fossil fuels, save money, and increase energy security. This paper investigates a suitability map for large-scale concentrated solar power (CSP) projects in the United Arab Emirates (UAE) using GIS data and a multi-criteria decision making (MCDM) technique. The suitability map is composed of multiple map layers: solar irradiation [the Direct Normal Irradiance (DNI) component], land slope, protected areas, land use, and proximity to water bodies, the power grid, and roads. The Analytic Hierarchy Process (AHP) method is then applied to determine the weights of the ranking criteria. The paper highlights the most suitable as well as unsuitable locations in the UAE for installing CSP projects. The study's results show that the UAE has multiple hot-spot locations that can be exploited for CSP projects.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_40-Concentrated_Solar_Power_Site_Suitability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rainfall Prediction in Lahore City using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090439</link>
        <id>10.14569/IJACSA.2018.090439</id>
        <doi>10.14569/IJACSA.2018.090439</doi>
        <lastModDate>2018-04-30T09:30:57.0430000+00:00</lastModDate>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Noureen Hameed</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <creator>Iftikhar Ali</creator>
        
        <creator>Zahid Nawaz</creator>
        
        <subject>Rainfall prediction; data mining; classification techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Rainfall prediction is highly significant in many domains. It can help reduce the effects of sudden and extreme rainfall by enabling effective safety measures to be taken in advance. Due to climate variations, accurate rainfall prediction has become more complex than before. Data mining techniques can predict rainfall by extracting hidden patterns among the weather attributes of past data. This research contributes by exploring the use of various data mining techniques for rainfall prediction in Lahore city: Support Vector Machine (SVM), Na&#239;ve Bayes (NB), k-Nearest Neighbor (kNN), Decision Tree (J48), and Multilayer Perceptron (MLP). The dataset is obtained from a weather forecasting website and consists of several atmospheric attributes. For effective prediction, a pre-processing step consisting of cleaning and normalization is applied. The performance of the data mining techniques is analyzed in terms of precision, recall, and f-measure for various ratios of training and test data.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_39-Rainfall_Prediction_in_Lahore_City.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Helitron’s Periodicities Identification in C.Elegans based on the Smoothed Spectral Analysis and the Frequency Chaos Game Signal Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090438</link>
        <id>10.14569/IJACSA.2018.090438</id>
        <doi>10.14569/IJACSA.2018.090438</doi>
        <lastModDate>2018-04-30T09:30:57.0130000+00:00</lastModDate>
        
        <creator>Rabeb Touati</creator>
        
        <creator>Imen Messaoudi</creator>
        
        <creator>Afef Elloumi Ouesleti</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Helitrons; C.elegans; Frequency Chaos Game Signal (FCGS) coding; spectral analysis; tandem periodicities </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Helitrons are typical rolling-circle transposons which make up the bulk of eukaryotic genomes. Unlike other DNA transposons, these transposable elements (TEs) do not create target site duplications or end in inverted repeats, which makes them particularly challenging to identify and more difficult to annotate. To date, these elements are not well studied; they have mainly attracted the interest of researchers in biology. The focus of this paper is oriented towards identifying the helitrons in the C.elegans genome from the perspective of signal processing. Aiming at helitron identification, a novel methodology comprising two steps is proposed: coding and spectral analysis. The first step consists of converting DNA into a 1-D signal based on the statistical features of the sequence (FCGS coding). The second step aims to identify the global periodicities in helitrons using the Smoothed Fourier Transform. The resulting spectrum and spectrogram are shown to present a specific signature of each helitron type.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_38-Helitrons_Periodicities_Identification_in_C_elegans.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Educational Assistant based on Multiagent System and Context-Aware Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090437</link>
        <id>10.14569/IJACSA.2018.090437</id>
        <doi>10.14569/IJACSA.2018.090437</doi>
        <lastModDate>2018-04-30T09:30:56.9830000+00:00</lastModDate>
        
        <creator>Fern&#227;o Reges dos Santos</creator>
        
        <creator>Pollyana Notargiacomo</creator>
        
        <subject>Context-aware computing; multiagent systems; artificial intelligence; internet of things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This paper provides an overview of the current stage of the EducActiveCore research, an orchestrated computational model formed by different areas of artificial intelligence, combined to support personalized assistance to students in the distance education process, mainly in interaction with context-aware environments. The context-aware environment applied in this research is observed in conjunction with IoT technologies. IoT is enabled by the latest developments in smart sensors, RFID, communication technologies, and Internet protocols. The basic premise is to have resource availability and use arrangements managed directly, without human involvement, to deliver a new class of smart environments for students. To support this central idea, a multiagent model is proposed to assist students in interaction with the context, autonomously determining access to resources useful to students. This article introduces the overall research in progress and the methods of an experiment that tested the basic concepts of this scenario, implemented and used by a group of students in real locations. Results obtained during tests indicate 93% successful operation performed by this intelligent model on the prediction of resource use and scheduling reservation.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_37-Intelligent_Educational_Assistant_based_on_Multiagent_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model of Interoperability of Multiple Different Information Systems using SOA Middleware Layer and Ontological Database on the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090436</link>
        <id>10.14569/IJACSA.2018.090436</id>
        <doi>10.14569/IJACSA.2018.090436</doi>
        <lastModDate>2018-04-30T09:30:56.9500000+00:00</lastModDate>
        
        <creator>Meryem FAKHOURI AMR</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <creator>Bouchaib RIYAMI</creator>
        
        <subject>Interoperability; middleware layer; SOA; information system (IS); cloud computing; ontological database (ODB)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The exponential evolution of technology and of the environment surrounding information systems (IS) forces companies to act quickly to follow the trend of business workflows, using advanced computer technologies well adapted to the needs of the market. Currently, the performance of information systems is considered a problem; we must intervene to make them more agile and responsive in order to better support the company’s strategy. Exchange and communication between information systems are a must. To address these issues and new requirements, businesses are looking to integrate their information systems and make them interact in order to interconnect applications. Interoperability between information systems is essential: it promotes alignment between the company’s business strategy and its IT strategy while respecting the company’s existing technical heritage. Interoperability solutions between information systems face major problems, since each IS is independently developed and designed differently, and solutions must meet certain criteria, namely autonomy, scalability and the resolution of data exchange problems. The MDA approach is the most suitable solution to our problem because it ensures a degree of independence between the company’s business logic and the technology platform. Moreover, service-oriented architecture (SOA) is used in this sense; it encapsulates the components of the information system into editable and reusable services. Through this article, we want to contribute to the development of a model for the interoperability of several different IS, founded on a middleware layer composed of services according to the SOA architecture. The special feature of our model is that it uses an ontological database in the Cloud, which stores the concepts exchanged among interconnected IS, and uses several layers for the transformation, integration, homogenization and adaptation of services.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_36-Model_of_Interoperability_of_Multiple_Different_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between NFC/RFID and Bar Code Systems for Halal Tags Identification: Paired Sample T-Test Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090435</link>
        <id>10.14569/IJACSA.2018.090435</id>
        <doi>10.14569/IJACSA.2018.090435</doi>
        <lastModDate>2018-04-30T09:30:56.8870000+00:00</lastModDate>
        
        <creator>Mohsen Khosravi</creator>
        
        <creator>Najma Imtiaz Ali</creator>
        
        <creator>Mostafa Karbasi</creator>
        
        <creator>Imtiaz Ali Brohi</creator>
        
        <creator>Irfan Ahmed Shaikh</creator>
        
        <creator>Asadullah Shah</creator>
        
        <subject>Halal products; RFID/NFC; JAKIM; paired sample T-test; Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Malaysia is a modern Muslim country where research on Halal product identification is at its peak. In this study, the authors developed a mobile application based on radio-frequency identification and near-field communication (RFID/NFC). The authors first built a database from the data of Jabatan Kemajuan Islam Malaysia (JAKIM), the Malaysian Halal logo identification authority, and then the mobile application, which uses near-field communication to detect Halal food using radio-frequency identification. In this paper, the authors performed an experimental analysis comparing the barcode system, comprised of parallel lines detected by a simple webcam for Halal logo identification, with the newly developed RFID/NFC mobile application. A paired sample T-test was performed using SPSS version 23.0. The results revealed that there is a significant difference in usability, efficiency, affordability, security and satisfaction. The users are more satisfied with the newly developed mobile application as compared to the old Halal logo system in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_35-Comparison_between_NFCRFID_and_Bar_Code_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Stable Clustering Approach based on Gaussian Distribution and Relative Velocity in VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090434</link>
        <id>10.14569/IJACSA.2018.090434</id>
        <doi>10.14569/IJACSA.2018.090434</doi>
        <lastModDate>2018-04-30T09:30:56.8730000+00:00</lastModDate>
        
        <creator>Mohammed Saad Talib</creator>
        
        <creator>Aslinda Hassan</creator>
        
        <creator>Burairah Hussin</creator>
        
        <creator>Z.A. Abas</creator>
        
        <creator>Zaniab Saad Talib</creator>
        
        <creator>Zainab Sabah Rasoul</creator>
        
        <subject>VANET; clustering; cluster stability; Gaussian; relative velocity; staying duration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Vehicles in Vehicular Ad-hoc Networks (VANETs) are characterized by their highly dynamic mobility (velocity). VANET topology changes frequently, which causes continuous network communication failures. Clustering is one of the solutions applied to reduce VANET topology changes. Stable clusters are required and indispensable to control, improve and analyze VANETs. In this paper, we introduce a new analytical VANET clustering approach that aims to enhance network stability. The proposed grouping process depends on the mean and standard deviation of the vehicles&#39; velocities. The principle of the normal (Gaussian) distribution is utilized and merged with the relative velocity to propose two clustering levels. The staying duration of vehicles in a cluster is also calculated and used as an indication. The first level represents a very highly stable cluster: to form this cluster, only the vehicles having velocities within the range of mean &#177; one standard deviation are collected in one cluster (i.e. only 68% of the vehicles are allowed to compose this cluster). The cluster head is selected from the vehicles having velocities close to the average cluster velocity. The second level creates a stable cluster by grouping about 95% of the vehicles: only the vehicles having velocities within the range of mean &#177; two standard deviations are collected in one cluster. This type of clustering is less stable than the first one. The analytical analysis shows that the stability and the staying duration of vehicles in the first clustering approach are better than in the second clustering approach.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_34-A_Novel_Stable_Clustering_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Body Area Network Security and Privacy Issue in E-Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090433</link>
        <id>10.14569/IJACSA.2018.090433</id>
        <doi>10.14569/IJACSA.2018.090433</doi>
        <lastModDate>2018-04-30T09:30:56.8400000+00:00</lastModDate>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Muhammad Ahmed</creator>
        
        <creator>Tahir Abdullah</creator>
        
        <creator>Naila Kousar</creator>
        
        <creator>Mehak Nigar Shumaila</creator>
        
        <creator>Muhammad Awais</creator>
        
        <subject>E-Health; privacy; security; wireless body area networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>A Wireless Body Area Network (WBAN) is a collection of wireless sensor nodes which can be placed inside or outside the body of a human or a living person and which, as a result, observe or monitor the functionality and adjoining conditions of the body. Utilizing a Wireless Body Area Network, the patient enjoys greater physical versatility and is no longer constrained to remain in the hospital. As Wireless Body Area Network sensor devices are utilized for gathering sensitive data and may run into antagonistic situations, they require a sophisticated and very secure security medium or structure to avoid malicious communications within the system. These devices raise various issues for the security and privacy protection of sensitive and private patient medical data. The protection of patients’ medical records is a significant and unsolved problem, due to which alteration or exploitation of the system is possible. In this research paper, we first present an overview of WBANs, how they are utilized for healthcare monitoring, and their architecture; then we highlight the major security and privacy requirements and attacks at the different network layers of a WBAN; and we finally discuss various cryptographic algorithms and laws for providing solutions for security and privacy in WBANs.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_33-Wireless_Body_Area_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Knowledge Management on Organizational Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090432</link>
        <id>10.14569/IJACSA.2018.090432</id>
        <doi>10.14569/IJACSA.2018.090432</doi>
        <lastModDate>2018-04-30T09:30:56.8270000+00:00</lastModDate>
        
        <creator>Hayfa.Y. Abuaddous</creator>
        
        <creator>Abdullah A.M. Al Sokkar</creator>
        
        <creator>Blaqees I. Abualodous</creator>
        
        <subject>Knowledge management; infrastructure capabilities; process capabilities; organizational performance; learning organizations </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In today’s business, knowledge is considered a core asset in any organization; it can even be considered as important as technological capital. It is part of human abilities and thus of human capital. Knowledge management (KM) is becoming increasingly popular, so many organizations are trying to apply it in order to enhance their organizational performance. In this paper, the literature is investigated critically in order to show the real influence of knowledge management and some of its practices on organizational performance. It has been found that KM, including knowledge process and infrastructure capabilities, strongly and positively affects all aspects of organizational performance, directly or indirectly. In the same vein, there is a great need to continuously train and educate learning organizations’ CEOs about the importance of KM through group work and training programs.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_32-The_Impact_of_Knowledge_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Harvesting for Remote Wireless Sensor Network Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090431</link>
        <id>10.14569/IJACSA.2018.090431</id>
        <doi>10.14569/IJACSA.2018.090431</doi>
        <lastModDate>2018-04-30T09:30:56.7930000+00:00</lastModDate>
        
        <creator>Engr. Syed Ashraf Ali</creator>
        
        <creator>Engr. Syed Haider Ali</creator>
        
        <creator>Engr. Sajid Nawaz Khan</creator>
        
        <creator>Engr. Muhammad AAmir Aman</creator>
        
        <subject>Wireless sensor network (WSN); photovoltaic (PV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Wireless Sensor Network (WSN) technology is widely used for controlling and monitoring purposes. Advances accomplished in the past era in wireless communications and microsystems have enabled the development of small-scale, low-cost sensor nodes equipped with the wireless communication capabilities needed to build a WSN. Each sensor node is ordinarily equipped with one or a few sensing units, data processing units, a wireless communication interface and a battery. WSNs have found application in an extensive variety of domains such as home and biomedical health monitoring. Providing a continuous supply of energy to these nodes at remote locations is a major concern. The aim of this paper is to provide an uninterrupted supply of energy to remote WSN nodes. Solar energy is expected to provide the required energy; however, photovoltaic (PV) based systems are not able to operate at night. This may affect the operation of WSN nodes, rendering them useless for that period. Several techniques have been proposed to provide satisfactory energy storage; however, the selection of a suitable device to provide the required energy storage and operate a WSN node all day long is an open issue. A complete WSN node is developed for flood monitoring, with sensing capacity along with energy harvesting using a PV system and a storage unit, which is able to harvest and store energy for the uninterrupted operation of the WSN node at remote sites.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_31-Energy_Harvesting_for_Remote_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Modeling and Linearization of MIMO RF Power Amplifiers for 4G and 5G Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090430</link>
        <id>10.14569/IJACSA.2018.090430</id>
        <doi>10.14569/IJACSA.2018.090430</doi>
        <lastModDate>2018-04-30T09:30:56.7800000+00:00</lastModDate>
        
        <creator>Imene ZEMZEMI</creator>
        
        <creator>Souhir LEJNEF</creator>
        
        <creator>Noureddine BOULEJFEN</creator>
        
        <creator>M.Fadhel GHANNOUCHI</creator>
        
        <subject>MIMO transmitter; RF power amplifiers; orthogonal polynomials; nonlinear transmitters; digital predistortion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In this paper, a novel set of orthogonal crossover polynomials for the baseband modelling and linearization of MIMO RF power amplifiers (PAs) is presented. The proposed solution is applicable to WCDMA and LTE applications. The new modelling approach considerably reduces the numerical instability problem associated with conventional polynomial model identification. In order to validate the efficiency and robustness of the proposed solution, a 2x2 MIMO LDMOS RF power amplifier has been measured, modelled and linearized. A comparison with conventional polynomial MIMO models clearly showed the superiority of the proposed solution in a fixed-point calculation environment such as DSP and FPGA boards.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_30-Robust_Modeling_and_Linearization_of_MIMO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed QoS Constraint based Resource Adaptation Strategy for Cognitive Radio Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090429</link>
        <id>10.14569/IJACSA.2018.090429</id>
        <doi>10.14569/IJACSA.2018.090429</doi>
        <lastModDate>2018-04-30T09:30:56.7630000+00:00</lastModDate>
        
        <creator>Yakubu S. Baguda</creator>
        
        <subject>Cognitive radio network; outage probability; quality of service (QoS); network utility; power adaptation; primary user; secondary user</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This paper primarily investigates and addresses the optimal power adaptation strategy problem for multi-user cognitive radio networks. The need to determine the optimal transmission power for secondary users (SUs) in a distributed network environment is challenging and important. This requires an efficient adaptive strategy with high convergence capability in order to meet the multi-user quality of service (QoS) requirements in a cognitive network and ultimately ensure more efficient resource utilization amongst users. In this paper, a distributed QoS constraint scheme which considers both the transmission power and the outage probability is proposed to maximize the network performance and to control the SUs’ transmission power. Firstly, the QoS constraint optimization problem for distributed secondary users is formulated and solved in order to adapt to the network dynamics of cognitive networks. Subsequently, the solution is used to dynamically adjust the radio power of the SUs to conform with the stringent network constraint and ensure coexistence amongst the users. More importantly, the simulation results show that the scheme increases the overall network utility when compared to the scheme without adaptation.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_29-Distributed_QoS_Constraint_based_Resource_Adaptation_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommendation using Rule based Implicative Rating Measure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090428</link>
        <id>10.14569/IJACSA.2018.090428</id>
        <doi>10.14569/IJACSA.2018.090428</doi>
        <lastModDate>2018-04-30T09:30:56.7470000+00:00</lastModDate>
        
        <creator>Lan Phuong Phan</creator>
        
        <creator>Hung Huu Huynh</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Model evaluation; recommendation model; rule based implicative rating measure; ruleset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The paper presents a rule based implicative rating measure to calculate the ratings of users on items. The paper also presents a new model using the ruleset with a rule length of 2 and the proposed measure to suggest to users the list of items with the highest ratings. The new model is compared to three existing models that use items (such as the popular items, the items with the highest similarities, and the items with strong relationships) to make the suggestion. The experiments on the MSWeb dataset and the MovieLens dataset indicate that the proposed recommendation model has higher performance (via the Precision-Recall and the ROC curves) than the compared models for most of the given cases.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_28-Recommendation_using_Rule_based_Implicative_Rating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Thermal Pain Level Prediction with Eye Motion using SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090427</link>
        <id>10.14569/IJACSA.2018.090427</id>
        <doi>10.14569/IJACSA.2018.090427</doi>
        <lastModDate>2018-04-30T09:30:56.7330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Eye motion; thermal pain; support vector machine; thermal stimulus; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>A method for thermal pain level prediction with eye motion using SVM is proposed. Through experiments, it is found that the thermal pain level is much more sensitive to the rate of change of pupil size than to the pupil size itself. Also, it is found that the number of blinks shows better classification performance than the other features. Furthermore, the eye size is not a good indicator of thermal pain. Moreover, it is also found that users respond to the thermal stimulus very quickly (0 to 3 sec.) while the thermal pain remains for a while (4 to 17 sec.) after the thermal stimulus is removed.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_27-Method_for_Thermal_Pain_Level_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Simulation of a Rectangular E-Shaped Microstrip Patch Antenna for RFID based Intelligent Transportation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090426</link>
        <id>10.14569/IJACSA.2018.090426</id>
        <doi>10.14569/IJACSA.2018.090426</doi>
        <lastModDate>2018-04-30T09:30:56.7170000+00:00</lastModDate>
        
        <creator>Asif Ali</creator>
        
        <creator>Nasrullah Pirzada</creator>
        
        <creator>Muhammad Moazzam Jawaid</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <subject>E-shaped microstrip patch antenna; antenna gain; return loss; voltage standing wave ratio (VSWR); high-frequency structure simulator (HFSS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>A low-profile, rectangular E-shaped microstrip patch antenna is designed and proposed in this paper for radio-frequency identification (RFID) based intelligent transportation systems (ITS). The proposed antenna design aims to achieve high gain and low return loss at 0.96 GHz, making it suitable for ultra-high frequency (UHF) RFID tags. The proposed antenna is composed of a radiating patch on one side of the dielectric substrate and a ground plane on the other side; copper is used to produce the main radiator. The simulation of the proposed antenna is performed employing the high-frequency structure simulator (HFSS). The dielectric substrate used for the suggested antenna is an FR4 substrate with a dielectric constant of 4.3 and a height of 1.5 mm. The performance of the proposed antenna is measured in terms of gain, return loss, voltage standing wave ratio (VSWR), radiation pattern and bandwidth. The antenna gain and the return loss of the suggested antenna at 0.96 GHz are 7.3 dB and -12.43 dB, respectively.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_26-Design_and_Simulation_of_a_Rectangular_E_shaped_Microstrip.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of Cloud-Based E-Learning towards Quality of Academic Services: An Omanis’ Expert View</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090425</link>
        <id>10.14569/IJACSA.2018.090425</id>
        <doi>10.14569/IJACSA.2018.090425</doi>
        <lastModDate>2018-04-30T09:30:56.6700000+00:00</lastModDate>
        
        <creator>Qasim A. Alajmi</creator>
        
        <creator>Adzhar Kamaludin</creator>
        
        <creator>Ruzaini Abdullah Arshah</creator>
        
        <creator>Mohammed A. Al-Sharafi</creator>
        
        <subject>Cloud computing; e-learning; cloud-based e-learning; higher education institutions; quality of academic services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>The purpose of this paper is to understand the importance, relevance and need of implementing Cloud-Based E-Learning (CBEL) in higher education institutions (HEIs) in Oman. The paper maintains its emphasis on addressing the effectiveness of the cloud-based e-learning system and also compares and contrasts the before and after effects of the implementation of CBEL on the higher education sector in Oman. Method: The methodological approach of this paper follows a qualitative approach to data collection, and the data analysis techniques used are an interpretivist approach and thematic analysis. Results and Findings: The data analysis techniques used in this research paper helped in understanding and gathering meaningful insights relating to the need for and significance of implementing CBEL in educational sectors.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_25-The_Effectiveness_of_Cloud_based_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimization of Audio Classification and Segmentation using GASOM Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090424</link>
        <id>10.14569/IJACSA.2018.090424</id>
        <doi>10.14569/IJACSA.2018.090424</doi>
        <lastModDate>2018-04-30T09:30:56.5930000+00:00</lastModDate>
        
        <creator>Dabbabi Karim</creator>
        
        <creator>Cherif Adnen</creator>
        
        <creator>Hajji Salah</creator>
        
        <subject>Audio segmentation and classification; features extraction; features discrimination; GASOM algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Nowadays, multimedia content analysis occupies an important place in widely used applications. It may depend on audio segmentation, which is one of the many tools used in this area. In this paper, we present optimized audio classification and segmentation algorithms that segment a superimposed audio stream according to its content into 10 main audio types: speech, non-speech, silence, male speech, female speech, music, environmental sounds, and music genres such as classical music, jazz, and electronic music. We tested the KNN, SVM, and GASOM algorithms on two audio classification systems. In the first system, the audio stream is discriminated into speech/non-speech, pure speech/silence, male speech/female speech, and music/environmental sounds. In the second system, the audio stream is segmented into music/speech, pure speech/silence, and male speech/female speech. Pure speech/silence discrimination is performed in both systems by a rule-based classifier. The music segments in both systems are discriminated into different music genres using a decision tree classifier. The first audio classification system achieved higher performance than the second: in the first system, using the GASOM algorithm with the leave-one-out validation technique, the average accuracy reached 99.17% for music/environmental sounds discrimination. Moreover, in both systems the GASOM algorithm always achieved the best performance compared to the KNN and SVM algorithms. In the first system, the GASOM algorithm also achieved a lower computation time than the HMM and MLP methods.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_24-An_Optimization_of_Audio_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Developed Collaborative Filtering Similarity Method to Improve the Accuracy of Recommendations under Data Sparsity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090423</link>
        <id>10.14569/IJACSA.2018.090423</id>
        <doi>10.14569/IJACSA.2018.090423</doi>
        <lastModDate>2018-04-30T09:30:56.4830000+00:00</lastModDate>
        
        <creator>Hael Al-bashir</creator>
        
        <creator>Mansoor Abdullateef Abdulgabber</creator>
        
        <creator>Awanis Romli</creator>
        
        <creator>Norazuwa Binti Salehudin</creator>
        
        <subject>Recommendation system; collaborative filtering; similarity method; data sparsity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This paper presents a new similarity method to improve the accuracy of traditional Collaborative Filtering (CF) under data sparsity. CF provides users with the items they need by analyzing the preferences of users whose preferences correlate strongly with theirs. However, accuracy is influenced by the method used to find neighbors. The Pearson correlation coefficient and Cosine measures, the most widely used methods, depend on the ratings of only co-rated items to find the correlations between users. Consequently, these methods lack the ability to address sparsity. This paper presents a new similarity method based on global user preference to address the sparsity issue and improve recommendation accuracy. The novelty of this method is its ability to solve the similarity issue while finding relationships among non-correlated users. Furthermore, to determine the right neighbors when computing the similarity between a pair of users, the developed method considers two main factors: fairness and the proportion of co-rated items. The MovieLens 100K benchmark dataset is used to evaluate the accuracy of the developed method. The experimental results show that the accuracy of the developed method is improved compared to the traditional CF similarity methods, as measured by common CF evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_23-A_Developed_Collaborative_Filtering_Similarity_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Level Encryption Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090422</link>
        <id>10.14569/IJACSA.2018.090422</id>
        <doi>10.14569/IJACSA.2018.090422</doi>
        <lastModDate>2018-04-30T09:30:56.4200000+00:00</lastModDate>
        
        <creator>Ahmad Habboush</creator>
        
        <subject>Multi-level encryption; Advanced Encryption Standard (AES); Feistel encryption; symmetric encryption algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Multi-level encryption approaches are becoming more popular as they combine the strengths of multiple basic/traditional approaches into a complex one. Many multi-level encryption approaches have been introduced for different systems, such as the Internet of Things, sensor networks, big data, and the web. The main obstacle in building such approaches is making the multi-level encryption approach both secure and computationally efficient. In this paper, we propose a computationally efficient multi-level encryption framework that combines the strengths of the symmetric encryption algorithm AES (Advanced Encryption Standard), the Feistel network, the Genetic Algorithm's crossover and mutation techniques, and HMAC. The framework was evaluated and compared to a set of benchmark symmetric encryption algorithms, such as RC5, DES, and 3-DES. The evaluation was carried out on an identical platform, and the algorithms were compared using the throughput and running-time performance metrics and the avalanche-effect security metric. The results show that the proposed framework achieves the highest throughput and the lowest running time compared to the benchmarked symmetric encryption algorithms and passes the avalanche effect criterion.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_22-Multi_Level_Encryption_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Cloud Computing Adoption in Saudi Arabia’s Private and Public Organizations: A Qualitative Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090421</link>
        <id>10.14569/IJACSA.2018.090421</id>
        <doi>10.14569/IJACSA.2018.090421</doi>
        <lastModDate>2018-04-30T09:30:56.3900000+00:00</lastModDate>
        
        <creator>Mohammed Ateeq Alanezi</creator>
        
        <subject>Cloud computing; private and public organization adoption; qualitative evaluation; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Cloud computing is becoming an important tool for improving productivity and efficiency and for reducing costs. Hence, the advantages and potential benefits of cloud computing can no longer be ignored by organizations. However, organizations must evaluate the factors that influence their decisions before adopting cloud computing technologies. Many studies have investigated cloud computing adoption in developed countries, while few have concentrated on the factors that influence cloud computing adoption in developing countries. It is not clear whether the factors identified by those studies apply in developing countries. The motive of this study is to contribute to the adoption of cloud computing and to raise awareness of cloud computing technology among authorities, researchers, administrators, business managers, and service providers, particularly within the Saudi Arabian context. This study explores factors that encourage, or have the capacity to detract from, the adoption of cloud computing in private and public organizations in Saudi Arabia. A qualitative approach based on interviews with IT professional representatives was adopted, which explored two categories: a) the negative impact category, which includes security and privacy, government policy, lack of knowledge, and loss of control; and b) the positive impact category, which includes three factors: reduced expenses, improved IT performance, and promotion of scalability and flexibility.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_21-Factors_Influencing_Cloud_Computing_Adoption_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Linear Array for Short Range Radio Location and Application Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090420</link>
        <id>10.14569/IJACSA.2018.090420</id>
        <doi>10.14569/IJACSA.2018.090420</doi>
        <lastModDate>2018-04-30T09:30:56.3730000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Ahsan Altaf</creator>
        
        <subject>Linear array; gain; Rogers 5880; voltage standing wave ratio; directivity; radiolocation; short range radio applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Patch array antennas have long been good candidates for achieving high performance in communication systems. This paper presents a study of a linear 1x4 patch antenna array constructed on a 1.575 mm thick Rogers 5880 substrate, with a high gain of 12.8 dB and a focused directivity of 12.9 dBi. The array network is fed using the T-junction method and shows well-matched input impedance. With its high performance parameters, reflection coefficient, and voltage standing wave ratio, the proposed antenna array is suited for short-range radiolocation and radio service applications.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_20-A_Linear_Array_for_Short_Range_Radio_Location.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Artificial Neural Networks Training Algorithms and Transfer Functions for Medium-Term Water Consumption Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090419</link>
        <id>10.14569/IJACSA.2018.090419</id>
        <doi>10.14569/IJACSA.2018.090419</doi>
        <lastModDate>2018-04-30T09:30:56.3570000+00:00</lastModDate>
        
        <creator>Lemuel Clark P. Velasco</creator>
        
        <creator>Angelie Rose B. Granados</creator>
        
        <creator>Jilly Mae A. Ortega</creator>
        
        <creator>Kyla Veronica D. Pagtalunan</creator>
        
        <subject>Artificial neural network; backpropagation; water consumption forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Artificial Neural Network (ANN) is a widely used machine learning pattern recognition technique for predicting water resources based on historical data. An ANN can produce close-to-accurate forecasts given an appropriate training algorithm and transfer function, along with suitable learning rate and momentum values. In this study, using the Neuroph Studio platform, six models with different combinations of training algorithms (Backpropagation, Backpropagation with Momentum, and Resilient Propagation) and transfer functions (Sigmoid and Gaussian) were compared. After determining the ANN model's input, hidden, and output neurons for the respective layers, this study compared data normalization techniques and showed that Min-Max normalization yielded better results in terms of Mean Square Error (MSE) than Max normalization. Of the six models tested, Model 1, composed of the Backpropagation training algorithm and the Sigmoid transfer function, yielded the lowest MSE. Moreover, a learning rate of 0.2 and a momentum value of 0.9 resulted in very minimal error in terms of MSE. The results obtained in this research clearly suggest that ANN can be a viable technique for medium-term water consumption forecasting.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_19-Performance_Analysis_of_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extraction of ERP Selection Criteria using Critical Decisions Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090418</link>
        <id>10.14569/IJACSA.2018.090418</id>
        <doi>10.14569/IJACSA.2018.090418</doi>
        <lastModDate>2018-04-30T09:30:56.3430000+00:00</lastModDate>
        
        <creator>Motaki Noureddine</creator>
        
        <creator>Kamach Oualid</creator>
        
        <subject>ERP selection; criteria; critical decisions; implementation; information system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Companies use ERP systems to automate business processes in order to increase productivity, reduce costs, and meet customer requirements. ERP selection is a decision-making project that is both risky and expensive; a wrong choice of system or of project partners can lead to the failure of the ERP implementation project. In this paper, we combine the theoretical findings on the ERP selection issue with experts' practical recommendations to determine the critical decisions that need to be made in the pre-implementation phase. We then present a methodology to determine ERP selection criteria based on critical decisions analysis. Part of this work was performed within a company that had just launched an ERP implementation project.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_18-Extraction_of_ERP_Selection_Criteria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Explorative Study for Laundry Mobile Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090417</link>
        <id>10.14569/IJACSA.2018.090417</id>
        <doi>10.14569/IJACSA.2018.090417</doi>
        <lastModDate>2018-04-30T09:30:56.3100000+00:00</lastModDate>
        
        <creator>Doaa M. Bamasoud</creator>
        
        <creator>Asma M. Alqahtani</creator>
        
        <creator>Eman A. Aljdea</creator>
        
        <creator>Reem A. Alshomrani</creator>
        
        <creator>Maha S. Alshahrani</creator>
        
        <creator>Zohoor A. Alghamdi</creator>
        
        <creator>Ameerah M. Alghamdi</creator>
        
        <creator>Shahd F. Almaawi</creator>
        
        <creator>Asrar D. Alshahrani</creator>
        
        <subject>Business process change; smartphones; mobile application; laundry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>With the current rapid development of technology, many services need redesigning in order to keep up with customer demands. Organizations therefore resort to redesigning services and business processes in order to maintain their competitiveness and success. With recent advances in smartphone capabilities, and their growing penetration rate among individuals, organizations intend to take advantage of these devices by designing mobile applications to help evolve their business. The laundry business is one sector with great potential for further development. Turning the ordinary routine of laundry into a service obtainable through a mobile application will help reduce the burden of laundry tasks on individuals. This paper reviews the relevant literature and designs an instrument that investigates individuals' need for such mobile applications.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_17-An_Explorative_Study_for_Laundry_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cobit 5-Based Approach for IT Project Portfolio Management: Application to a Moroccan University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090416</link>
        <id>10.14569/IJACSA.2018.090416</id>
        <doi>10.14569/IJACSA.2018.090416</doi>
        <lastModDate>2018-04-30T09:30:56.2970000+00:00</lastModDate>
        
        <creator>Souad AHRIZ</creator>
        
        <creator>Abir EL YAMAMI</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <subject>Component; IT governance; project portfolio management; Cobit 5; AHP; TOPSIS; prioritization; university</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In managing IT project portfolios in universities, university managers face many uncertainties when prioritizing the projects that make up their portfolios. Alignment with strategy becomes a major challenge and constitutes one of the essential elements of a governance approach. To overcome this challenge, the implementation of a project prioritization approach adapted to the university's strategy, vision, and culture is essential. In this context, this paper provides a multi-criteria approach based on a combination of the AHP and TOPSIS methodologies for the selection and prioritization of IT projects in universities. The main feature of our approach is the use of COBIT 5, its principles, and its enablers as prioritization criteria. To validate our model, project portfolio managers of a Moroccan public university were involved in evaluating the criteria and prioritizing their projects. This research demonstrates that the combined use of Multi-Criteria Decision Making (MCDM) methodologies is suitable for the implementation of COBIT sub-process APO05.03.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_16-Cobit_5_based_Approach_for_IT_Project.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>eHealth WBAN: Energy-Efficient and Priority-Based Enhanced IEEE802.15.6 CSMA/CA MAC Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090415</link>
        <id>10.14569/IJACSA.2018.090415</id>
        <doi>10.14569/IJACSA.2018.090415</doi>
        <lastModDate>2018-04-30T09:30:56.2630000+00:00</lastModDate>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Abdelzahir Abdelmaboud</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <subject>Wireless Body Area Network (WBAN); node; IEEE802.15.6; MAC; CSMA/CA; Mobility Link Table (MLT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This paper provides a general study of Wireless Body Area Networks (WBANs) in health monitoring systems, as well as of wearable and implanted Bio-Medical Sensors (BMS) used to monitor a patient's vital signs for early detection. Energy efficiency is a significant issue in WBANs, which can be addressed by reducing the overhead of control packets, prioritizing sensor nodes, and selecting the sink node. Moreover, uncertainty in network topology, such as the distance and link quality between sensor nodes, arises from human mobility. In this research, we propose a scheme that reduces the overhead of control packets and prioritizes the threshold values of vital signs by assigning low and high transmission power with an enhanced IEEE 802.15.6 CSMA/CA, and we introduce a Mobility Link Table (MLT) for selecting a sink node to communicate with the coordinator. We compare the scheme with the existing IEEE 802.15.6 CSMA/CA technique, and the results show that the proposed approach improves mean power consumption, network delay, and network throughput.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_15-eHealth_WBAN_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pakistan Sign Language Detection using PCA and KNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090414</link>
        <id>10.14569/IJACSA.2018.090414</id>
        <doi>10.14569/IJACSA.2018.090414</doi>
        <lastModDate>2018-04-30T09:30:56.2330000+00:00</lastModDate>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Naila Kousar</creator>
        
        <creator>Tahir Abdullah</creator>
        
        <creator>Muhammad Ahmed</creator>
        
        <creator>Faiqa Rasheed</creator>
        
        <creator>Muhammad Awais</creator>
        
        <subject>Deaf and dumb; hand gesture recognition; k-nearest neighbors; Pakistan sign language; principal component analysis; Urdu alphabets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Every society includes a large group of people with disabilities. Although technology is developing day by day, few significant developments have been undertaken for the benefit of these people. Sign language is an efficient means of information exchange for special-needs people such as the deaf and mute, who communicate with each other through sign language but find it difficult to communicate with the outside world; sign language recognition can serve this purpose. Research on this topic has been done in America, Indonesia, and India, but not much work has been done in Pakistan. In this paper, the authors introduce a system for recognizing Pakistan Sign Language (PSL), including the alphabet, to facilitate communication between special-needs people and others. The system captures input through a webcam without any additional hardware, separates the hand from the background using a segmentation approach, extracts the required features from the image using Principal Component Analysis (PCA), and finally classifies the gesture features using K-Nearest Neighbors (KNN). This research aims to fill the communication gap between deaf and hearing people in Pakistan.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_14-Pakistan_Sign_Language_Detection_using_PCA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Creative Educational Drama to Enhance Self-Development Skills for the Students at University Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090413</link>
        <id>10.14569/IJACSA.2018.090413</id>
        <doi>10.14569/IJACSA.2018.090413</doi>
        <lastModDate>2018-04-30T09:30:56.2170000+00:00</lastModDate>
        
        <creator>Hisham Saad Zaghloul</creator>
        
        <subject>Creative drama; creative educational drama; self-development skills; communication skills; thinking skills</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This study used creative drama in teaching in order to improve the communication and thinking skills of preparatory-year students at Northern Border University. It aimed to measure the differences in skill acquisition between the experimental and control groups. The study was conducted on 140 students of both genders (70 males and 70 females), divided into four groups of 35 students each. The study adopted an experimental approach, observing students' behavior and the effect of drama on their communication and thinking skills. The findings confirmed that using drama in teaching had a significantly greater effect on the experimental group than on the control group, which was taught with traditional methods: the experimental group achieved better results than the control group. Furthermore, the study stressed the possibility of benefiting from drama in teaching other practical courses at the university level and provided several recommendations in this regard.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_13-Using_Creative_Educational_Drama_to_Enhance_Self_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Cost Estimation using Enhanced Artificial Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090412</link>
        <id>10.14569/IJACSA.2018.090412</id>
        <doi>10.14569/IJACSA.2018.090412</doi>
        <lastModDate>2018-04-30T09:30:56.1230000+00:00</lastModDate>
        
        <creator>Sevgi Yigit-Sert</creator>
        
        <creator>Pinar Kullu</creator>
        
        <subject>Software cost estimation; enhanced artificial bee colony algorithm; NASA software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Cost estimation is very important in software development so that resource and time planning can be performed successfully. Accurate cost estimation is directly related to the decision-making mechanism in the software development process. An underestimated cost might lead to insufficient resources and budget problems; conversely, an overestimated cost might diminish customer satisfaction due to wasted resources. This study presents a model for estimating the effort required for the development of software projects using a variant of the artificial bee colony (ABC) algorithm. The proposed model is evaluated on a dataset of NASA software projects and performs better than previous studies.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_12-Software_Cost_Estimation_using_Enhanced_Artificial_Bee_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Terman-Merril Application for Intelligence Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090411</link>
        <id>10.14569/IJACSA.2018.090411</id>
        <doi>10.14569/IJACSA.2018.090411</doi>
        <lastModDate>2018-04-30T09:30:56.0770000+00:00</lastModDate>
        
        <creator>Alicia Valdez</creator>
        
        <creator>Griselda Cortes</creator>
        
        <creator>Laura Vazquez</creator>
        
        <creator>Andrea de la Pena</creator>
        
        <creator>Blanca Montano</creator>
        
        <subject>Human resources; Terman-Merril test; intelligence test; ASP.Net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Computational applications support different processes in organizations; among these are human resources processes, one of whose activities is the hiring of new personnel. The human talent to be integrated into a company can be measured through different tests, one of which is the Terman-Merril intelligence test, which measures a candidate's intelligence quotient through a series of sub-tests. In this project, the waterfall method was used to develop a web application for the Terman-Merril intelligence test, including management of users and of the results obtained, which can be visualized in spreadsheets for subsequent analysis and graphing. ASP.Net and a SQL Server 2014 database were used for programming and information storage. The application has been applied successfully in several companies, obtaining measurable and evaluable results on candidates. It has also been installed in a computer lab for students enrolled in the Bachelor of Human Resources at the Faculty of Accounting and Administration in Coahuila, Mexico.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_11-Terman_Merril_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Study of Algorithms for Solving Inverse Kinematic Problems in Robot Motion Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090410</link>
        <id>10.14569/IJACSA.2018.090410</id>
        <doi>10.14569/IJACSA.2018.090410</doi>
        <lastModDate>2018-04-30T09:30:56.0470000+00:00</lastModDate>
        
        <creator>Dr. Osama Ahmad Salim Safarini</creator>
        
        <subject>Robot motion control systems; inverse kinematic problems; iterative methods; algorithm convergence; regularization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>This article covers general formulations of inverse kinematic problems for robot motion control systems. We discuss the difficulties of solving such problems using analytical and numerical methods, and analyze the convergence of iterative algorithms with regularization on trajectories containing points outside the gripper's reachable workspace. An example of the iterative calculation of joint trajectories for a 3-link robot, using a recursive algorithm for computing the Jacobi matrix, is presented.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_10-Analytical_Study_of_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Contour in Low Quality Medical Images in Curvelet Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090409</link>
        <id>10.14569/IJACSA.2018.090409</id>
        <doi>10.14569/IJACSA.2018.090409</doi>
        <lastModDate>2018-04-30T09:30:55.9830000+00:00</lastModDate>
        
        <creator>Vo Thi Hong Tuyet</creator>
        
        <creator>Nguyen Thanh Binh</creator>
        
        <subject>Object contour; curvelet transform; augmented lagrangian</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Diagnosis and treatment are very important for extending the lives of patients, and small abnormalities may be manifestations of disease. One such abnormality concerns the contour of each object in medical images, so contour detection is especially important for low-quality medical images. In this paper, we propose a new method to detect object contours in low-quality medical images based on the self-affine snake, one type of active contour model. The method consists of two stages. First, we use the augmented Lagrangian method to remove noise and detect edges in low-quality medical images in the curvelet domain. Second, the active contour model is improved to extract the object contours. Comparing the appearance of the contours and the processing time with other algorithms confirms that the proposed method performs better.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_9-Object_Contour_in_Low_Quality_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Cruise Control Model based on PDLCA for Efficient Lane Selection and Collision Avoidance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090408</link>
        <id>10.14569/IJACSA.2018.090408</id>
        <doi>10.14569/IJACSA.2018.090408</doi>
        <lastModDate>2018-04-30T09:30:55.9530000+00:00</lastModDate>
        
        <creator>Khawaja Iftekhar Rashid</creator>
        
        <creator>Ding Nan</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Anil Ahmed</creator>
        
        <subject>Automatic vehicle; collision avoidance; lane selection; patience; threshold; traffic density</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Safe, collision-free traveling is essential in the present world. Because of the particular needs of different applications of Vehicular Ad Hoc Networks (VANETs), mainly in lane changing, designing continuous Collision Avoidance (CA) and secure lane changing has become the foundation of controlling vehicles in dense environments. We propose an efficient lane selection and collision-free vehicle transportation system based on patience, speed, and the distance between all types of vehicles. To maximize the efficiency of the existing transportation infrastructure and reduce collisions, it is necessary to improve safety and communication among all vehicles. In this paper, we first define the patience-based method and simulate a model for the safety and efficiency of traffic flow using a game-theoretic technique. We then develop a new algorithm for lane selection and collision control using the distance and speed of all vehicles, gathered through vehicular communication. Two methodologies for lane changing and collision-free driving, the random patience-based method and the car-following method, are then compared under different traffic densities. As part of our contribution, an experimental simulation is executed on highway road traffic scenarios to demonstrate the capacity of all types of vehicles to select the best-fit lane, identify collisions, and navigate around them to maintain a safe distance. Finally, we conclude that the proposed algorithm outperforms the alternatives in lane selection and collision avoidance, satisfying safety and efficiency requirements in Vehicular Ad Hoc Networks.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_8-An_Adaptive_Cruise_Control_Model_based_on_PDLCA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ethical Issues and Related Considerations Involved with Artificial Intelligence and Autonomous Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090407</link>
        <id>10.14569/IJACSA.2018.090407</id>
        <doi>10.14569/IJACSA.2018.090407</doi>
        <lastModDate>2018-04-30T09:30:55.9200000+00:00</lastModDate>
        
        <creator>Saud S. Alotaibi</creator>
        
        <subject>Ethical issues; artificial intelligence; automated systems; human values; evolutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Applications of artificial intelligence (AI) and automated systems (AS) demonstrate excellent outcomes in various industrial sectors, replacing humans in many jobs. However, in a competitive world of rapid technological advancement, managements' drive for industrial gains often neglects the interests and benefits of the larger society. In this paper, various ethical issues related to the implementation of AI/AS are examined from different perspectives. Ongoing developments in the area of AI/AS are critically evaluated, and the related advantages and serious societal concerns are discussed. Various global initiatives and legal amendments across the globe intended to limit the excessive use of AI/AS are examined with critical assessments.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_7-Ethical_Issues_and_Related_Considerations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Soft Atherosclerotic Plaques in Cardiac Computed Tomography Angiography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090406</link>
        <id>10.14569/IJACSA.2018.090406</id>
        <doi>10.14569/IJACSA.2018.090406</doi>
        <lastModDate>2018-04-30T09:30:55.8900000+00:00</lastModDate>
        
        <creator>Muhammad Moazzam Jawaid</creator>
        
        <creator>Sajjad Ali Memon</creator>
        
        <creator>Imran Ali Qureshi</creator>
        
        <creator>Nasrullah Pirzada</creator>
        
        <subject>Coronary tree segmentation; support vector machines; non-calcified plaque detection; mean radial profiles; Rotterdam CTA dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Computed tomography angiography (CTA) has turned non-invasive diagnosis of cardiovascular anomalies into a reality, as state-of-the-art imaging equipment is capable of recording sub-millimeter details. Based on their high intensity, calcified plaques are easily identified in cardiac CTA; however, detecting low-density non-calcified plaques has been a challenging problem in recent years. In this work, we propose an efficient method for automated detection of non-calcified plaques using discrete radial profiles. Plaque detection is accomplished using a support vector machine to differentiate abnormal coronary segments. We investigated a total of 32 CTA volumes and achieved a mean detection accuracy of 84.6%, which is in line with the reported literature.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_6-Detection_of_Soft_Atherosclerotic_Plaques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mutual Coupling Reduction of MIMO Antenna for Satellite Services and Radio Altimeter Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090405</link>
        <id>10.14569/IJACSA.2018.090405</id>
        <doi>10.14569/IJACSA.2018.090405</doi>
        <lastModDate>2018-04-30T09:30:55.8130000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Ahsan Altaf</creator>
        
        <creator>Alex J. Cole</creator>
        
        <subject>Multiple input multiple output (MIMO); mutual coupling; defected ground structures (DGS); fixed satellite services (FSS); radio satellite services (RSS); radio altimeters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Ground irregularities, also known as defected ground structures (DGS), are a recently introduced innovative approach in patch antenna design to boost antenna performance. This study presents a novel DGS design for suppressing mutual coupling effects in a 2x1 multiple input multiple output patch array designed on Rogers Duroid 5880. Two adjacent M-shaped structures surrounding a dumbbell-shaped structure, sandwiched between dumbbell-shaped patterns, achieved significant surface wave suppression of up to -42 dB while maintaining a gain of 4.7 dB and a directivity of 5.6 dBi. The patch array operates at 4 to 4.3 GHz for fixed satellite services (FSS), radio satellite services (RSS), and radio altimeter application systems.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_5-Mutual_Coupling_Reduction_of_MIMO_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Photo Contents of Conversation Support System with Protocol Analysis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090404</link>
        <id>10.14569/IJACSA.2018.090404</id>
        <doi>10.14569/IJACSA.2018.090404</doi>
        <lastModDate>2018-04-30T09:30:55.7500000+00:00</lastModDate>
        
        <creator>Zhou Xiaochun</creator>
        
        <creator>Miyuki Iwamoto</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Protocol analysis method; the elderly; the young volunteer; photo contents; conversation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>With the deepening of population aging and the low birth rate in China, more and more elderly people live alone or as elderly couples, and they face a higher risk of senile dementia caused by disuse of cognitive function due to loneliness and lack of communication. Because of the shortage of care workers, young volunteers are expected to become communication partners for them. However, it is difficult for young volunteers without experience communicating with the elderly, and for the two generations generally, to find common topics. A Conversation Support System was therefore proposed so that the elderly and young volunteers can talk smoothly using common photo contents. To evaluate the utility of the system's photo contents in China, we conducted a conversation experiment using photos in China and analyzed subjects' expressions and stress during the conversation. As a result, the photos that made conversation feel easy or difficult for the elderly and the young volunteers were identified. We then analyzed subjects' utterance data with the protocol analysis method to discuss the common features of these photos.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_4-Evaluation_of_Photo_Contents_of_Conversation_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear Quadratic Regulator Design for Position Control of an Inverted Pendulum by Grey Wolf Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090403</link>
        <id>10.14569/IJACSA.2018.090403</id>
        <doi>10.14569/IJACSA.2018.090403</doi>
        <lastModDate>2018-04-30T09:30:55.6870000+00:00</lastModDate>
        
        <creator>H&#252;seyin Oktay ERKOL</creator>
        
        <subject>Grey wolf optimizer; inverted pendulum; position controller; linear quadratic regulator; optimized controller design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>In this study, a linear quadratic regulator (LQR) based position controller is designed and optimized for an inverted pendulum system. Two parameters, the vertical pendulum angle and the horizontal cart position, must be controlled together to move the pendulum to a desired position. PID controllers are conventionally used for this purpose, and two different PID controllers are required to move the pendulum. The LQR is an alternative: both the angle and the position of the inverted pendulum can be controlled using a single LQR. Determining the Q and R matrices is the main problem when designing an LQR, since they must minimize a defined performance index. The Q and R matrices are generally determined by trial and error, but finding the optimum parameters this way is difficult and not guaranteed. An optimization algorithm can be used instead to obtain optimum controller parameters and high performance. For this reason, an optimization method, the grey wolf optimizer, is used to tune the controller parameters in this study.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_3-Linear_Quadratic_Regulator_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unifying Modeling Language-Merise Integration Approach for Software Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090402</link>
        <id>10.14569/IJACSA.2018.090402</id>
        <doi>10.14569/IJACSA.2018.090402</doi>
        <lastModDate>2018-04-30T09:30:55.5630000+00:00</lastModDate>
        
        <creator>Issar Arab</creator>
        
        <creator>Safae Bourhnane</creator>
        
        <creator>Fatiha Kafou</creator>
        
        <subject>UML; Merise; modeling; design; databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Software design is the most crucial step in the software development process, which is why it must be given great care. Software designers go through many modeling steps to arrive at a good design that allows a smooth development process later. For this, designers usually choose between two main modeling methodologies: Merise and UML. Both methodologies are widely used; however, each has its own advantages and disadvantages. This paper combines both techniques and merges their advantages into an approach that helps software designers make the best of both methodologies. The integration mainly targets the software design step in general but can be specifically applied to database design. The paper presents the weaknesses and strengths of UML and Merise as two techniques used in database modeling and design. A comparison of UML and Merise diagrams is then conducted, and based on it, a decision is made on which of the two is best at each step of the modeling process.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_2-Unifying_Modeling_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Prospects of Development of Telecommunication Systems and Services based on Virtual Reality Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090401</link>
        <id>10.14569/IJACSA.2018.090401</id>
        <doi>10.14569/IJACSA.2018.090401</doi>
        <lastModDate>2018-04-30T09:30:55.4830000+00:00</lastModDate>
        
        <creator>Andrey Zuev</creator>
        
        <creator>Roman Bolbakov</creator>
        
        <subject>Virtual reality; cellular network; mobile internet; telecommunication service; digital economy; informational society</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(4), 2018</description>
        <description>Virtual reality technologies are considered a basis and a promising development trend for telecommunication systems and services. New opportunities, and the scientific and technical problems that must be solved, are currently being analyzed. The possibility of creating, in the near future, a new segment of the international telecommunication services market is being justified, oriented toward providing communication services to the mass consumer and the entertainment industry, new tools for promoting goods and delivering educational content, and solving experimental problems in managing state structures and socio-economic systems. In studying the prerequisites for implementing this opportunity, the following is done: an approach to creating and charging for the corresponding telecommunication services is given; opportunities for using the existing national and international cellular infrastructure are researched; the prospects of shifting toward fifth-generation cellular network standards are analyzed, employing new ways of connecting to the Internet and new formats of transmitting and processing multimedia, including immersion in virtual reality environments; and safety aspects of end-user equipment and the use of VR-based telecommunication services are updated. By consolidating this material, the authors build grounds for the rapid development of this direction of telecommunication services and options provided by Russian national and transnational cellular network operators. The paper was created as part of the project 2.7178.2017/БЧ “Researching Cognitive Semiotics in the Multimedia Virtual Reality Environment”.</description>
        <description>http://thesai.org/Downloads/Volume9No4/Paper_1-On_Prospects_of_Development_of_Telecommunication_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Half Mode Substrate Integrated Waveguide Cavity based RFID Chipless Tag</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090356</link>
        <id>10.14569/IJACSA.2018.090356</id>
        <doi>10.14569/IJACSA.2018.090356</doi>
        <lastModDate>2018-04-03T05:27:06.8270000+00:00</lastModDate>
        
        <creator>Soumaya Sakouhi</creator>
        
        <creator>Hedi Ragad</creator>
        
        <creator>Mohamed Latrach</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Radio frequency identification (RFID); chipless tag; substrate integrated waveguide (SIW); ultra wide band (UWB)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This study presents the design of a compact Radio Frequency Identification chipless tag with reduced size, a high quality factor, and improved results. The proposed prototype is based on Substrate Integrated Waveguide technology, using the Half-Mode Substrate Integrated Waveguide technique to reduce the tag size by half while preserving the same response. The operating frequency band is 5.5 to 8 GHz within a compact surface of 1.3 &#215; 5.7 cm&#178;. A frequency-domain approach is adopted, based on the frequency shift coding technique.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_56-Half_Mode_Substrate_Integrated_Waveguide_Cavity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Media Content Access: Image-based Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090355</link>
        <id>10.14569/IJACSA.2018.090355</id>
        <doi>10.14569/IJACSA.2018.090355</doi>
        <lastModDate>2018-03-30T13:24:16.6970000+00:00</lastModDate>
        
        <creator>Rehan Ullah Khan</creator>
        
        <creator>Ali Alkhalifah</creator>
        
        <subject>Skin detection; content based filtering; content analysis; machine learning; random forest; neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>As content on the internet includes sensitive adult material, filtering and blocking this content is essential for the social and ethical values of many societies and organizations. In this paper, content filtering is explored from the perspective of still images. This article investigates and analyzes content-based filtering, which can help flag images as adult or safe. The proposed approach is based on chroma (colour) based skin segmentation and detection for identifying objectionable content in images; it therefore follows classical machine learning approaches and uses two well-known classifiers, the Random Forest and the Neural Network. Their fusion is also investigated. Skin colour is analyzed in the YCbCr colour space using blob analysis. For the “Adult vs Safe” classification, an accuracy of 0.88 and a low RMSE of 0.313 are achieved, indicating the usefulness of the detection model.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_55-Media_Content_Access_Image_based_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Quality of Lossy Compressed Images using Minimum Decreasing Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090353</link>
        <id>10.14569/IJACSA.2018.090353</id>
        <doi>10.14569/IJACSA.2018.090353</doi>
        <lastModDate>2018-03-30T13:24:16.6800000+00:00</lastModDate>
        
        <creator>Ahmed L. Alshami</creator>
        
        <creator>Mohammed Otair</creator>
        
        <subject>Image compression; lossy technique; lossless technique; image quality measurement; RIFD and JPEG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The acceleration in technology development has come with an urgent need to use large amounts of information, and storing or transferring this information over digital networks has become a very important issue, particularly in terms of compressed size and quality preservation. As digital image usage has become widespread in many sectors, it has become important to manipulate these images so they can be used in optimal form while maintaining quality, especially during compression with lossy techniques. This paper presents a new technique to enhance the quality of compressed images while preserving the compression ratio by adding pre-processing steps before applying any existing lossy compression technique. The proposed technique subtracts the minimum elements from the image pixel values in each row, column, and 2x2 block, respectively. These steps minimize the number of bits required to represent each pixel in the compressed images. To demonstrate the efficiency of the proposed technique, two lossy compression techniques (one novel and one classical) were implemented with it and applied to a wide range of well-known images with different dimensions, sizes, and types. The experimental results show that the quality of the decompressed images with the proposed technique is enhanced in terms of MSE, MAE, and PSNR as quality evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_53-Enhancing_Quality_of_Lossy_Compressed_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Quantum Computing on Present Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090354</link>
        <id>10.14569/IJACSA.2018.090354</id>
        <doi>10.14569/IJACSA.2018.090354</doi>
        <lastModDate>2018-03-30T13:24:16.6800000+00:00</lastModDate>
        
        <creator>Vasileios Mavroeidis</creator>
        
        <creator>Kamer Vishi</creator>
        
        <creator>Mateusz D. Zych</creator>
        
        <creator>Audun J&#248;sang</creator>
        
        <subject>Quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The aim of this paper is to elucidate the implications of quantum computing for present cryptography and to introduce the reader to basic post-quantum algorithms. In particular, the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), the public key encryption schemes affected, the symmetric schemes affected, the impact on hash functions, and post-quantum cryptography. Specifically, the section on post-quantum cryptography deals with different quantum key distribution methods and mathematical-based solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures, and code-based cryptography.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_54-The_Impact_of_Quantum_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Biometric Technology Adaption and Acceptance in Canada</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090352</link>
        <id>10.14569/IJACSA.2018.090352</id>
        <doi>10.14569/IJACSA.2018.090352</doi>
        <lastModDate>2018-03-30T13:24:16.6670000+00:00</lastModDate>
        
        <creator>Eesa Al Solami</creator>
        
        <subject>Adaption; biometric technology; organizational</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This study analyzes biometric technology adoption and acceptance in Canada. The introduction shows that biometric technology has existed for many decades despite rising to popularity only in the last two decades, and that Canada is highly advanced in information technology. The three main sectors for the adoption and acceptance of biometric technologies are financial services, immigration, and law enforcement. The study uses judgment sampling and questionnaires for data collection. Given the high rate of adoption and acceptance of biometric technologies in Canada, the paper concludes that the adoption of these technologies is at the adaptation stage. Age and experience also influence the rate at which individuals accept biometric technologies, with the most experienced participants showing the highest rate of approval.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_52-Analysis_of_Biometric_Technology_Adaption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QTID: Quran Text Image Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090351</link>
        <id>10.14569/IJACSA.2018.090351</id>
        <doi>10.14569/IJACSA.2018.090351</doi>
        <lastModDate>2018-03-30T13:24:16.6500000+00:00</lastModDate>
        
        <creator>Mahmoud Badry</creator>
        
        <creator>Hesham Hassan</creator>
        
        <creator>Hanaa Bayomi</creator>
        
        <creator>Hussien Oakasha</creator>
        
        <subject>HDF5 dataset; Arabic script; Holy Quran text
image; handwritten text recognition; Arabic OCR; text image
datasets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Improving the accuracy of Arabic text recognition in imagery requires a large, modern dataset, as data is the fuel for many modern machine learning models. This paper proposes a new dataset, called QTID (Quran Text Image Dataset), the first Arabic dataset that includes Arabic marks. It consists of 309,720 different 192x64 annotated Arabic word images containing 2,494,428 characters in total, taken from the Holy Quran. These finely annotated images were randomly divided into 90%, 5%, and 5% sets for training, validation, and testing, respectively. To characterize QTID, various dataset statistics are presented. Experimental evaluation shows that state-of-the-art Arabic text recognition engines such as Tesseract and ABBYY FineReader do not work well with word images from the proposed dataset.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_51-QTID_Quran_Text_Image_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Detection and Elimination Mechanism from Cooperative Black Hole Threats in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090350</link>
        <id>10.14569/IJACSA.2018.090350</id>
        <doi>10.14569/IJACSA.2018.090350</doi>
        <lastModDate>2018-03-30T13:24:16.6330000+00:00</lastModDate>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Faqir Usman</creator>
        
        <creator>Matiullah</creator>
        
        <creator>Fahim Khan Khalil</creator>
        
        <subject>Mobile Ad-hoc Networks (MANETs); black hole
attack; AODV; malicious node; cooperative attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Malicious node invasion in the form of the black hole attack is a burning issue in MANETs. Black hole attacks with a single malicious node are easy to detect and prevent. Collaborative attacks with multiple cooperative malicious nodes are a challenging issue in MANET security, as they are difficult to detect due to their complex and sophisticated mechanism. This study proposes a novel signature-based technique to detect and handle cooperative black hole attacks in MANETs. For this purpose, diverse simulation scenarios with increasing numbers of nodes are used. Parameters such as average throughput, average packet drop, average end-to-end delay, average processing time, and malicious node detection rate are used to measure the impact of the signature-based malicious node detection scheme. AODV is used as the routing protocol in this study. The study revealed that the performance of MANETs degrades as the number of malicious nodes increases: average throughput decreases while average end-to-end delay and average packet drop increase. The signature-based detection mechanism is used to counter the cooperative black hole attack, and it has enhanced the detection and elimination of cooperative black hole attacks in MANETs, yielding a comparative increase in average throughput and a decrease in packet delay and packet drop.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_50-Enhanced_Detection_and_Elimination_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synthetic Loads Analysis of Directed Acyclic Graphs for Scheduling Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090348</link>
        <id>10.14569/IJACSA.2018.090348</id>
        <doi>10.14569/IJACSA.2018.090348</doi>
        <lastModDate>2018-03-30T13:24:16.6200000+00:00</lastModDate>
        
        <creator>Apolinar Velarde Martinez</creator>
        
        <subject>Directed acyclic graph; distributed heterogeneous
computing system (DHCS); Algorithm for scheduling and allocation
tasks in a DHCS; parallel tasks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Graphs are structures used in different areas of scientific research because of the ease with which they represent different real-life models. There is a great variety of algorithms that build graphs of very dissimilar characteristics and types to model the relationships between the objects of the problem to be solved. To model these relationships, characteristics such as depth, width, and density are used in directed acyclic graphs (DAGs) to find the solution to the target problem. These characteristics are rarely analyzed and taken into account before being used in the design of a solution. In this work, we present a set of methods for the random generation of DAGs. DAGs are produced with three of these methods, representing three synthetic loads. Each of the three characteristics above is evaluated and analyzed in each of the DAGs. The synthetic loads are generated and evaluated with the objective of predicting the behavior of each DAG, based on its characteristics, in an algorithm for scheduling and assigning parallel tasks in a distributed heterogeneous computing system (DHCS).</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_48-Synthetic_Loads_Analysis_of_Directed_Acyclic_Graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive IoT Attacks Survey based on a Building-blocked Reference Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090349</link>
        <id>10.14569/IJACSA.2018.090349</id>
        <doi>10.14569/IJACSA.2018.090349</doi>
        <lastModDate>2018-03-30T13:24:16.6200000+00:00</lastModDate>
        
        <creator>Hezam Akram Abdul-Ghani</creator>
        
        <creator>Dimitri Konstantas</creator>
        
        <creator>Mohammed Mahyoub</creator>
        
        <subject>Internet of Things (IoT); building block; security
and privacy; reference model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The Internet of Things (IoT) has not yet reached a definitive definition. A generic understanding of IoT is that it offers numerous services in many domains, utilizing conventional internet infrastructure by enabling different communication patterns such as human-to-object and object-to-object. Integrating IoT objects into the standard Internet, however, has unlocked several security challenges, as most internet technologies and connectivity protocols have been specifically designed for unconstrained objects, whereas IoT objects have their own limitations in terms of computation power, memory, and bandwidth. The IoT vision has therefore suffered from unprecedented attacks targeting not only individuals but also enterprises; examples include loss of privacy, organized crime, mental suffering, and the possibility of jeopardizing human lives. Hence, a comprehensive classification of IoT attacks and their available countermeasures is an indispensable requirement. In this paper, we propose a novel four-layered IoT reference model based on a building-block strategy, on which we develop a comprehensive IoT attack model composed of four key phases. First, we propose an IoT asset-based attack surface, which consists of four main components: 1) physical objects, 2) protocols covering the whole IoT stack, 3) data, and 4) software. Second, we describe a set of IoT security goals. Third, we identify an IoT attack taxonomy for each asset. Finally, we show the relationship between each attack and its violated security goals, and identify a set of countermeasures to protect each asset. To the best of our knowledge, this is the first paper that attempts to provide a comprehensive IoT attack model based on a building-blocked reference model.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_49-A_Comprehensive_IoT_Attacks_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A User-Based Trust Model for Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090347</link>
        <id>10.14569/IJACSA.2018.090347</id>
        <doi>10.14569/IJACSA.2018.090347</doi>
        <lastModDate>2018-03-30T13:24:16.6030000+00:00</lastModDate>
        
        <creator>Othman Saeed</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <subject>Trust management model; trust value of provider feedback; trust value of third-party feedback; user trust value; availability; reliability; integrity; confidential; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>There are many trust management models for the cloud environment, and selecting an appropriate trust model is not an easy job for a user. This work presents a new trust model, called the ARICA model, which helps a user reduce reliance on the trust value of provider and third-party feedback. Simultaneously, the ARICA model increases the dependence on the user trust value. Furthermore, the proposed model measures trust based on five attributes: Availability, Reliability, Integrity, Confidentiality, and Authentication. This paper presents a comparison of the proposed ARICA trust model with two existing schemes. Results show that the proposed model provides more accurate results.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_47-A_User_based_Trust_Model_for_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Double Authentication Model using Smartphones to Enhance Student on-Campus Network Access</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090346</link>
        <id>10.14569/IJACSA.2018.090346</id>
        <doi>10.14569/IJACSA.2018.090346</doi>
        <lastModDate>2018-03-30T13:24:16.5870000+00:00</lastModDate>
        
        <creator>Zakaria Saleh</creator>
        
        <creator>Ahmed Mashhour</creator>
        
        <subject>Bluetooth; double authentication; campus network; computer unit security; model design; smartphone; system design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Computers are widely used by all universities to provide network access to students; therefore, the security of these computers plays a major role in protecting the network. In light of that, strong access control is required to ensure that sensitive information will only be accessed through a firm authentication mechanism. Smartphones are widely used by students and can be employed to further enhance student authentication by storing partial access information on the smartphone. And while students should not leave their computer systems unattended, some do. Therefore, daily network access requires the computer unit to be configured in a way that includes password authentication and an access code stored on a device (the smartphone) which must be presented by the user during the authentication process. It is a fact that software and hardware may fail to fully secure and protect systems, but users' negligence in safeguarding their systems by logging out of the computer unit before moving away is a far more serious security issue. The system developed in this research locks the computer unit once the student moves away from it and the unit loses communication with the smartphone.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_46-Double_Authentication_Model_using_Smartphones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Mechanism Protocol for Wireless Sensor Networks by using Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090345</link>
        <id>10.14569/IJACSA.2018.090345</id>
        <doi>10.14569/IJACSA.2018.090345</doi>
        <lastModDate>2018-03-30T13:24:16.5730000+00:00</lastModDate>
        
        <creator>Emad Ibbini</creator>
        
        <creator>Kweh Yeah Lun</creator>
        
        <creator>Mohamed Othman</creator>
        
        <creator>Zurina Mohd Hanapi</creator>
        
        <subject>Multilevel; WSN; reliable; heterogeneous; routing </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Multilevel short-distance clustering communication is an important scheme for reducing lost data packets on the path to the sink, particularly when nodes are deployed in a dense WSN (wireless sensor network). Our proposed protocol solves the single-hop path problems of the TDTCGE (two-dimensional technique based on center of gravity and energy) method, which addresses only single-hop paths and does not minimize distances between nodes; instead, we use multi-hop nodes with multilevel clustering grids to avoid dropped packets and to guarantee reliable paths without failures. In multilevel clustering grids, transmitted data are aggregated from lower-level grids to upper-level grids. In this paper, the proposed protocol obtains the optimal path for data transmission between cluster heads and the sink for heterogeneous WSNs. The cluster head nodes play an important role in forwarding data originating from other normal nodes, aggregating the data up to higher-level cluster heads. This routing approach is more efficient than other routing approaches, and it provides a reliable protocol for avoiding data loss. In addition, the proposed protocol sends sleep and wakeup signals to the nodes and cluster heads via an MD (mediation device), thereby reducing energy consumption. Simulation results demonstrate the efficiency of the proposed method in terms of fewer dropped packets and high energy efficiency. The network environment overcomes the drawbacks of failure paths and provides reliable transmission to the sink.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_45-An_Efficient_Mechanism_Protocol_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Solution for the Uniform Integration of Field Devices in an Industrial Supervisory Control and Data Acquisition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090344</link>
        <id>10.14569/IJACSA.2018.090344</id>
        <doi>10.14569/IJACSA.2018.090344</doi>
        <lastModDate>2018-03-30T13:24:16.5730000+00:00</lastModDate>
        
        <creator>Simona-Anda TCACIUC(GHERASIM)</creator>
        
        <subject>SCADA system; uniform device integration; EDS files; communication protocols; distributed database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Supervisory Control and Data Acquisition (SCADA) systems are increasingly used for monitoring and controlling various industrial processes. The existence of a large number of communication protocols helps to deploy complex systems that enable users to access data from one or more processes at a distance, and even to control those processes. This article presents a solution for the uniform integration of field devices in an industrial SCADA system. This uniform integration is based on the CANopen communication protocol and the EDS files containing detailed descriptions of the devices in a CANopen network. Based on the information and structure of the EDS files, we have designed and developed a database aimed at storing these data in an organized structure that enables them to be safely and efficiently processed. This database is the basis of a web application that enables the user to inspect the characteristics of the field devices connected to the local industrial networks in a SCADA system.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_44-A_Solution_for_the_Uniform_Integration_of_Field_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The use of  Harmonic Balance in Wave Concept Iterative Method for Nonlinear  Radio Frequency Circuit Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090343</link>
        <id>10.14569/IJACSA.2018.090343</id>
        <doi>10.14569/IJACSA.2018.090343</doi>
        <lastModDate>2018-03-30T13:24:16.5570000+00:00</lastModDate>
        
        <creator>Hicham MEGNAFI</creator>
        
        <creator>Noureddine BOUKLI-HACENE</creator>
        
        <creator>Nathalie RAVUE</creator>
        
        <creator>Henri BAUDRAND</creator>
        
        <subject>WCIP; harmonic balance; nonlinear circuits; planar radio frequency circuits; radio frequency diode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This paper presents a new hybrid method for the simulation of nonlinear radio frequency circuits. The method is based on the combination of the wave concept iterative procedure (WCIP) and harmonic balance (HB), exploiting the advantages of both. The paper also covers the development of an application based on this method for the simulation of nonlinear planar radio frequency circuits. A radio frequency diode implemented in a microstrip line is simulated. Validation is obtained by comparing the results of the new hybrid method with harmonic balance results obtained using the Advanced Design System (ADS) software.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_43-The_Use_of_Harmonic_Balance_in_Wave_Concept_Iterative_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding the Factors Affecting the Adoption of Green Computing in the Gulf Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090342</link>
        <id>10.14569/IJACSA.2018.090342</id>
        <doi>10.14569/IJACSA.2018.090342</doi>
        <lastModDate>2018-03-30T13:24:16.5400000+00:00</lastModDate>
        
        <creator>ARWA IBRAHIM AHMED</creator>
        
        <subject>Success factors; green computing; TOE model and universities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Many universities worldwide have adopted, or intend to adopt, green computing on their campuses to improve environmental sustainability and energy-efficient consumption. However, the factors affecting the adoption of green computing are still not well understood. The aim of this study is to investigate the key success factors affecting the adoption of green computing in developing countries' universities, specifically in Gulf States universities. The study draws mainly on the TOE model suggested by Tornatzky and Fleischer [1] to understand the key areas for the success factors of green computing adoption. Data were collected using a survey research design from 118 Gulf universities. The findings revealed that the top five success factors affecting the adoption of green computing in Gulf universities are awareness, relative advantage, top management support, adequate resources, and government policy. Moreover, among the three proposed contexts, the most important is the organizational context, followed by the environmental and, finally, the technological context. This study contributes to theory by providing a taxonomy of the factors affecting the adoption of green computing. For practitioners, this study will assist decision makers in Gulf countries in making well-informed decisions by focusing on the key factors that have the greatest impact on adopting green computing.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_42-Understanding_the_Factors_Affecting_the_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Design of a Carbon Dioxide Emission Prediction Model using Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090341</link>
        <id>10.14569/IJACSA.2018.090341</id>
        <doi>10.14569/IJACSA.2018.090341</doi>
        <lastModDate>2018-03-30T13:24:16.5270000+00:00</lastModDate>
        
        <creator>Abdel Karim Baareh</creator>
        
        <subject>Fossil fuels; carbon emission; forecasting; genetic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Air pollution is considered one of the most important and dangerous problems affecting our lives and the security of society from different sides. The global warming problem affecting the atmosphere is related to carbon dioxide (CO2) emissions from different fossil fuels, along with temperature. In this paper, this phenomenon is studied to find a solution for preventing and reducing poisonous CO2 emissions and the associated smoke pollution. The developed model consists of four input attributes, namely the global oil, natural gas, coal, and primary energy consumption, and one output, the CO2 emissions. The stochastic search algorithm Genetic Programming (GP) was used as an effective and robust tool for building the forecasting model. The model data for the training and testing cases were taken from the years 1982 to 2000 and 2003 to 2010, respectively. According to the results obtained from the different evaluation criteria, GP performed very well and efficiently in carbon gas emission estimation and in dealing with climate pollution problems.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_41-Evolutionary_Design_of_a_Carbon_Dioxide_Emission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Standardization of Cloud Security using Mamdani Fuzzifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090340</link>
        <id>10.14569/IJACSA.2018.090340</id>
        <doi>10.14569/IJACSA.2018.090340</doi>
        <lastModDate>2018-03-30T13:24:16.5270000+00:00</lastModDate>
        
        <creator>Shan e Zahra</creator>
        
        <creator>Muhammad Adnan Khan</creator>
        
        <creator>Muhammad Nadeem Ali</creator>
        
        <creator>Sabir Abbas</creator>
        
        <subject>CC; CS; FIS; FRBS; MIE; standards; compliance; data protection; availability and recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Cloud health has consistently been a major issue in information technology. In the cloud computing environment it becomes particularly serious because the data are located in different places, even across the entire globe. Organizations are moving their information onto the cloud as they feel their information is more secure and effectively evaluated; however, several organizations moving to the cloud still feel insecure. As the modern world pushes ahead with innovation, one must be aware of the risks that come along with cloud health. Cloud service standardization is important for cloud security services. There are several limitations regarding cloud security, as it is never 100% secure, and uncertainties will always exist in a cloud with regard to security. Standardization of cloud security services will play a major part in securing cloud services and in building the trust needed to move onto the cloud, provided that security is tight and the service providers can guarantee that any intrusion attempt on their data can be monitored, tracked, and verified. In this paper, we propose a ranking system using the Mamdani fuzzifier. After evaluating different ranking conditions, for example, if compliance is 14.3, data protection is 28.2, availability is 19.7, and recovery is 14.7, then cloud health is 85%, and the system responds with the best cloud health services.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_40-Standardization_of_Cloud_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review of Success Factors and Barriers of Agile Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090339</link>
        <id>10.14569/IJACSA.2018.090339</id>
        <doi>10.14569/IJACSA.2018.090339</doi>
        <lastModDate>2018-03-30T13:24:16.5100000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Ghayyur</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Mukhtar Ali</creator>
        
        <creator>Adnan Naseem</creator>
        
        <creator>Abdul Razzaq</creator>
        
        <creator>Naveed Ahmed</creator>
        
        <subject>Agile methods; systematic literature review; motivator; demotivator; success factors; barriers; scrum; ASD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Motivators and demotivators play an important role in the software industry. They affect software performance and productivity, which are essential for agile software development (ASD) projects. Existing studies cover motivators and demotivators of ASD, but in dispersed form; a detailed systematic literature review is therefore needed to review the factors and sub-factors acting as motivators and demotivators in ASD. A comprehensive review is executed to gather the critical success factors among the motivators and demotivators of agile software development. The study classifies motivator and demotivator factors into four classes: people, organization, technical, and process. A sub-classification is also performed to further clarify the motivators of agile. In addition, the motivators and demotivators of the Scrum process are categorized to provide a clear overview.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_39-A_Systematic_Literature_Review_of_Success_factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Automatic Image Annotation Model Via Attention Model and Data Equilibrium</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090338</link>
        <id>10.14569/IJACSA.2018.090338</id>
        <doi>10.14569/IJACSA.2018.090338</doi>
        <lastModDate>2018-03-30T13:24:16.4930000+00:00</lastModDate>
        
        <creator>Amir Vatani</creator>
        
        <creator>Milad Taleby Ahvanooey</creator>
        
        <creator>Mostafa Rahimi</creator>
        
        <subject>Automatic image annotation; attention model; skewed learning; deep learning, word embedding; log-entropy auto encoder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Nowadays, a huge number of images are available. However, retrieving a required image is a challenging task for an ordinary user of computer vision systems. During the past two decades, much research has been introduced to improve the performance of automatic image annotation, which has traditionally focused on content-based image retrieval. However, recent research demonstrates that there is a semantic gap between content-based image retrieval and the image semantics understandable by humans. As a result, existing research in this area has sought to bridge the semantic gap between low-level image features and high-level semantics. The conventional way of bridging the semantic gap is through automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, we propose a novel AIA model based on deep learning feature extraction. The proposed model has three phases: a feature extractor, a tag generator, and an image annotator. First, the model automatically extracts high- and low-level features based on the dual-tree complex wavelet transform (DT-CWT), singular value decomposition, the distribution of color tone, and a deep neural network. Next, the tag generator balances the dictionary of annotated keywords using a new log-entropy auto-encoder (LEAE) and then describes these keywords by word embedding. Finally, the annotator works based on a long short-term memory (LSTM) network in order to obtain the importance degree of specific features of the image. Experiments conducted on two benchmark datasets confirm the superiority of the proposed model compared to previous models in terms of performance criteria.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_38-An_Effective_Automatic_Image_Annotation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Arabic Essay Grading System based on Text Similarity Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090337</link>
        <id>10.14569/IJACSA.2018.090337</id>
        <doi>10.14569/IJACSA.2018.090337</doi>
        <lastModDate>2018-03-30T13:24:16.4770000+00:00</lastModDate>
        
        <creator>Abdulaziz Shehab</creator>
        
        <creator>Mahmoud Faroun</creator>
        
        <creator>Magdi Rashad</creator>
        
        <subject>Auto-grading systems; string-based similarity; corpus-based similarity; N-Gram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The manual grading process has many problems: it is time-consuming, costly, demands enormous resources and effort, and places huge pressure on instructors. These problems leave the educational community in dire need of auto-grading systems to address the shortcomings of manual grading. Auto-grading systems are widespread across the world because they play a critical role in educational technology, and they offer many advantages, saving cost, effort, and time in comparison to manual grading. This research compares the different algorithms used in automatic grading systems for the Arabic language, applying string-based and corpus-based algorithms separately. This is a challenging task given the necessity of inclusive assessment to evaluate the answers precisely; the challenge is heightened when working with the Arabic language, characterized by complexities in morphology, semantics, and syntax. The research applies multiple similarity measures and introduces an Arabic dataset that contains 210 students’ answers. The results obtained show that an automatic grading system can provide the teacher with an effective solution for essay grading.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_37-An_Automatic_Arabic_Essay_Grading_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Circular Calibration of Depth Extraction in Stereo Configuration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090336</link>
        <id>10.14569/IJACSA.2018.090336</id>
        <doi>10.14569/IJACSA.2018.090336</doi>
        <lastModDate>2018-03-30T13:24:16.4770000+00:00</lastModDate>
        
        <creator>Zulfiqar Ibrahim</creator>
        
        <creator>Zulfiqar Ali Bangash</creator>
        
        <creator>Muhammad Zeeshan</creator>
        
        <subject>Stereo imaging; depth extraction; triangulation; radial distortion; lenses; rectilinear projection component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Lens distortion is defined as a departure from the rectilinear projection of an imaging system, and it affects the accuracy of almost all vision applications. This work addresses the problem of distortion by investigating the effects of the camera’s view angle and the spherical nature of the lens on the image, and then derives a closed-form solution for correcting a distorted pixel’s angle in the image according to the geometric shape of the lens. We first propose a technique that explores the linear relation between the lens and the charge-coupled device in the intrinsic environment of the camera, through an analysis of pixel angles in the field of view. A second technique extracts depth through a linear transformation in a rectangular configuration by considering the camera’s background in the field of view, which provides optimal results in closed environments. As an object moves away from the center of the image plane, depth accuracy starts to deteriorate due to radial distortion. To rectify this problem, we finally propose a circular calibration methodology that addresses this inaccuracy and accommodates radial distortion, achieving accuracy of up to 98% at great depth with a very large baseline. Results show the improvement over established stereo imaging techniques for depth extraction in which the presented considerations are not observed. This methodology ensures high accuracy of triangulated depth with a very large baseline.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_36-Circular_Calibration_of_Depth_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A High Gain MIMO Antenna for Fixed Satellite and Radar Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090335</link>
        <id>10.14569/IJACSA.2018.090335</id>
        <doi>10.14569/IJACSA.2018.090335</doi>
        <lastModDate>2018-03-30T13:24:16.4630000+00:00</lastModDate>
        
        <creator>Ahsan Altaf</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mehre Munir</creator>
        
        <creator>Saad Hassan Kiani</creator>
        
        <subject>Mutual coupling; isolating structure; multiple input multiple output; radar application; finite difference time domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Patch antennas have advanced rapidly with the development of communication technology. For antenna design purposes, the finite-difference time-domain (FDTD) method is commonly used. This paper focuses on the interaction among the elements of a MIMO antenna, also known as mutual coupling, using the FDTD method. An M-shaped isolating structure is introduced; with its placement, approximately 12 dB of additional isolation is achieved without degradation of the performance parameters. The proposed antenna design can be used for radar and fixed satellite service applications.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_35-A_High_Gain_MIMO_Antenna_for_Fixed_Satellite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation for Feature Driven Development Paradigm in Context of Architecture Design Augmentation and Perspective Implications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090334</link>
        <id>10.14569/IJACSA.2018.090334</id>
        <doi>10.14569/IJACSA.2018.090334</doi>
        <lastModDate>2018-03-30T13:24:16.4470000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Gahyyur</creator>
        
        <creator>Abdul Razzaq</creator>
        
        <creator>Syed Zeeshan Hasan</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Rafi Ullah</creator>
        
        <subject>Software architecture; agile; architecture and agile; integration of architecture and agile; agile architecting practices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Agile is a lightweight software development methodology useful for rapid application development, which is the need of the current software industry. Although the focus of agile software development is the customer, it does not provide detailed information about the application’s architecture and documentation; software architecture has its own benefits, and its use has many positive effects. The focus of this paper is to provide a systematic mapping of the emerging issues in feature driven development (FDD) that arise due to the lack of architecture support in agile methodology, together with a model of the proposed solution. The results of this mapping provide a guideline for researchers to improve agile methodology by achieving the benefits of having an architecture in place that is aligned with agile values and principles. This research implements the SEI architecture-centric methods in the FDD methodology in an adapted form, such that the burden of architecture does not affect the agility of FDD. The de-motivators of agile were also identified, which helps in understanding the internal cycle and reduces the issues of implementing the architecture. This study helps to clarify the difference between architecture and FDD, and the mapping creates awareness of process improvement through the combination of architecture and FDD.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_34-Evaluation_for_Feature_Driven_Development_Paradigm_in_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2-D Object Recognition Approach using Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090333</link>
        <id>10.14569/IJACSA.2018.090333</id>
        <doi>10.14569/IJACSA.2018.090333</doi>
        <lastModDate>2018-03-30T13:24:16.3830000+00:00</lastModDate>
        
        <creator>Kamelsh Kumar</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <creator>Rafaqat Hussain Arain</creator>
        
        <creator>Safdar Ali Shah</creator>
        
        <subject>Wavelet transforms; db10; edge detection; object recognition; shape moments</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Humans have a natural ability to observe, analyze, and describe the layout of the 3D world with the help of their visual system. For machine vision systems, by contrast, recognizing various objects in images captured by cameras remains a most difficult task. This paper presents a 2-D image object recognition approach using the Daubechies (Db10) wavelet transform. First, edge detection is carried out to delineate objects in the images. Second, shape moments are used for object recognition. For testing purposes, different geometrical shapes such as rectangles, circles, triangles, and patterns were selected for image analysis. Simulation was performed using MATLAB, and the obtained results show that the approach accurately identifies the objects. The research goal was to test 2-D images for object recognition.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_33-2_D_Object_Recognition_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Text Categorization using Machine Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090332</link>
        <id>10.14569/IJACSA.2018.090332</id>
        <doi>10.14569/IJACSA.2018.090332</doi>
        <lastModDate>2018-03-30T13:24:16.3700000+00:00</lastModDate>
        
        <creator>Riyad Alshammari</creator>
        
        <subject>Arabic text; categorization; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Arabic text categorization is considered one of the challenging problems in classification using machine learning algorithms. Achieving high accuracy in Arabic text categorization depends on the preprocessing techniques used to prepare the dataset. Thus, in this paper, an investigation of the impact of preprocessing methods on the performance of three machine learning algorithms, namely Naïve Bayes, DMNBtext, and C4.5, is conducted. Results show that the DMNBtext learning algorithm achieved higher performance than the other machine learning algorithms in categorizing Arabic text.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_32-Arabic_Text_Categorization_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Continuous Path Planning of Kinematically Redundant Manipulator using Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090330</link>
        <id>10.14569/IJACSA.2018.090330</id>
        <doi>10.14569/IJACSA.2018.090330</doi>
        <lastModDate>2018-03-30T13:24:16.3530000+00:00</lastModDate>
        
        <creator>Affiani Machmudah</creator>
        
        <creator>Setyamartana Parman</creator>
        
        <creator>M.B. Baharom</creator>
        
        <subject>Path planning; redundant manipulator; genetic algorithm; particle swarm optimization; grey wolf optimizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This paper addresses the problem of continuous path planning for a redundant manipulator whose end-effector needs to follow a desired path. Based on a geometrical analysis, feasible postures of a self-motion are mapped into an interval, so that there is an angle domain boundary, and a redundancy resolution tracking the desired path lies within this boundary. To choose the best solution among many possible ones, meta-heuristic optimizations, namely a Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and a Grey Wolf Optimizer (GWO), are employed with the objective of minimizing the joint angle travelling distance. To achieve n-connectivity of sampling points, the angle domain trajectories are modelled using a sinusoidal function generated inside the angle domain boundary. Complex geometrical paths obtained from Bezier and algebraic curves are used as the traced paths to be followed by a 3-Degree-of-Freedom (DOF) arm robot manipulator and a hyper-redundant manipulator. The path from the PSO yields better results than those of the GA and GWO.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_30-Continuous_Path_Planning_of_Kinematically_Redundant_Manipulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Sampling and the Performance Comparison of Controlled LTI Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090331</link>
        <id>10.14569/IJACSA.2018.090331</id>
        <doi>10.14569/IJACSA.2018.090331</doi>
        <lastModDate>2018-03-30T13:24:16.3530000+00:00</lastModDate>
        
        <creator>Sirine FEKIH</creator>
        
        <creator>Boutheina SFAIHI</creator>
        
        <creator>Mohamed BENREJEB</creator>
        
        <subject>Sampled-data systems; discretization; finite time stabilization; dead-beat control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>In this paper, the impact of discretization techniques and the sampling time on the finite-time stabilization of sampled-data controlled Linear Time Invariant (LTI) systems is investigated. To stabilize the process in finite time, a discrete-time feedback dead-beat controller is designed for the sampled-data system. Checkable conditions on the approximate discrete-time plant model and the associated controller that guarantee the finite-time stabilization of the exact model are developed. The trade-off between the discretization technique, the sampling time, and the desired performance is illustrated and discussed. Results are presented through a case study.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_31-On_the_Sampling_and_the_Performance_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ant Colony Optimization for a Plan Guide Course Registration Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090329</link>
        <id>10.14569/IJACSA.2018.090329</id>
        <doi>10.14569/IJACSA.2018.090329</doi>
        <lastModDate>2018-03-30T13:24:16.3370000+00:00</lastModDate>
        
        <creator>Wael Waheed Al-Qassas</creator>
        
        <creator>Mohammad Said El-Bashir</creator>
        
        <creator>Rabah Al-Shboul</creator>
        
        <creator>Anwar Ali Yahya</creator>
        
        <subject>Ant colony; optimization; guide plan; university course registration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Students in universities do not follow the prescribed course plan guide, which affects the registration process. In this research, we present an approach to tackle the problem of the guide plan of course sequence (GPCS), since the prescribed sequence may not be suitable for all students due to various conditions. The Ant Colony Optimization (ACO) algorithm is anticipated to be a suitable approach to solving such problems. Data on the sequences of courses registered by students of the Computer Science Department at Al Al-Bayt University over four years were collected for this study. The fundamental task was to find the pheromone evaporation rate in ACO that generates the optimal GPCS, by applying an Adaptive Ant Colony Optimization (AACO) to a model built on the collected data. We found that 17 of the 31 courses were placed in semesters differing from those preset in the course plan.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_29-Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Adoption of Software Process Improvement in Saudi Arabian Small and Medium Size Software Organizations: An Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090328</link>
        <id>10.14569/IJACSA.2018.090328</id>
        <doi>10.14569/IJACSA.2018.090328</doi>
        <lastModDate>2018-03-30T13:24:16.3370000+00:00</lastModDate>
        
        <creator>Mohammed Ateeq Alanezi</creator>
        
        <subject>Software process improvement; software organization; software adoption; small and medium enterprises (SME)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Quite a lot of attention has been paid in the literature to “how to adopt” software process improvement (SPI) in small and medium size (SME) software organizations in several countries. This has resulted in limited improvements to the software industry and has impacted the Saudi economy. SPI adoption remains one of the major issues in the domain of small and medium size software organizations, especially in developing countries. The objective of this study is to investigate the current state of SPI adoption in Saudi Arabia in comparison with the standard models used internationally, which could help improve software quality and have an impact on the Saudi Arabian economy. After examining a number of studies in the literature, we designed a questionnaire to survey SME software organizations in Saudi Arabia. First, we conducted a pilot study with 24 senior managers to assess the intended survey and further improve the process. Then, we sent out 480 questionnaires to the participants and received 291 responses. The most interesting part of the result is that the respondents highlighted the benefits of using an SPI standard; when asked about the reasons for not using SPI, 64% of the respondents agreed that using an SPI standard is time-consuming and 55% agreed that there is difficulty in understanding the SPI standard.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_28-The_Adoption_of_Software_Process_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image based Arabic Sign Language Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090327</link>
        <id>10.14569/IJACSA.2018.090327</id>
        <doi>10.14569/IJACSA.2018.090327</doi>
        <lastModDate>2018-03-30T13:24:16.3230000+00:00</lastModDate>
        
        <creator>Reema Alzohairi</creator>
        
        <creator>Raghad Alghonaim</creator>
        
        <creator>Waad Alshehri</creator>
        
        <creator>Shahad Aloqeely</creator>
        
        <subject>Arabic sign language; image; visual descriptor; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Throughout history, humans have used many ways of communicating, such as gesturing, sounds, drawing, writing, and speaking. However, deaf and speech-impaired people cannot use speech to communicate with others, which may give them a sense of isolation within their societies. For those individuals, sign language is the principal way to communicate. However, most people who can hear do not know sign language. In this paper, we aim to automatically recognize Arabic Sign Language (ArSL) alphabets using an image-based methodology. More specifically, various visual descriptors are investigated to build an accurate ArSL alphabet recognizer. The extracted visual descriptors are conveyed to a One-Versus-All Support Vector Machine (SVM). The analysis of the results shows that the Histograms of Oriented Gradients (HOG) descriptor outperforms the other considered descriptors. Thus, the ArSL gesture models learned by the One-Versus-All SVM using HOG descriptors are deployed in the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_27-Image_based_Arabic_Sign_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of a Deployed 4G LTE Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090325</link>
        <id>10.14569/IJACSA.2018.090325</id>
        <doi>10.14569/IJACSA.2018.090325</doi>
        <lastModDate>2018-03-30T13:24:16.3070000+00:00</lastModDate>
        
        <creator>E.T. Tchao</creator>
        
        <creator>J.D. Gadze</creator>
        
        <creator>Jonathan Obeng Agyapong</creator>
        
        <subject>Long term evolution; MIMO; performance evaluation; sub-Saharan African; propagation environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>In Ghana and many countries within Sub-Saharan Africa, Long Term Evolution (LTE) is being considered for use within the sectors of governance, energy distribution and transmission, transport, education, and health. Subscribers and governments within the region have high expectations for these new networks and want to leverage the promised enhanced coverage and high data rates for development. Recent performance evaluations of deployed WiMAX networks in Ghana showed the promise of a wireless broadband technology in supporting the capacity demands of the peculiar Sub-Saharan African terrain. The deployed WiMAX networks, however, could not achieve the optimal quality of service required to provide the seamless wireless connectivity demanded by emerging mobile applications. This paper evaluates the performance of selected key network parameters of a newly deployed LTE network in the 2600 MHz band operating in the peculiar Sub-Saharan African terrain under varied MIMO antenna configurations. We adopted simulation and field measurement to aid our evaluation. Genex Unet was used to simulate the network coverage and throughput performance of the 2x2, 4x4, and 8x8 MIMO configurations of the deployed networks. The average simulated throughput per sector of the 4x4 MIMO configuration was better than that of the 2x2 configuration. However, the percentage coverage for users under the 2x2 MIMO simulation scenario was better than that of the adaptive 4x4 MIMO configuration, with 2x2 MIMO achieving 60.41% of the coverage area with throughput values between 1 and 40 Mbps, as against 55.87% achieved by the 4x4 MIMO configuration in the peculiar deployment terrain.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_25-Performance_Evaluation_of_a_Deployed_4G_LTE_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection Technique for Speech Recognition based on Neural Networks Inter-Disciplinary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090326</link>
        <id>10.14569/IJACSA.2018.090326</id>
        <doi>10.14569/IJACSA.2018.090326</doi>
        <lastModDate>2018-03-30T13:24:16.3070000+00:00</lastModDate>
        
        <creator>Mohamad A. A. Al- Rababah</creator>
        
        <creator>Abdusamad Al-Marghilani</creator>
        
        <creator>Akram Aref Hamarshi</creator>
        
        <subject>Speech recognition; automatic detection; recurrent neural network (RNN); LSTM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Automatic speech recognition allows a machine to understand and process information provided orally by a human user. It uses matching techniques to compare a sound wave to a set of samples, usually composed of words but also of phonemes. This field draws on the knowledge of several sciences: anatomy, phonetics, signal processing, linguistics, computer science, artificial intelligence, and statistics. The latest acoustic modeling methods use deep neural networks for speech recognition. In particular, recurrent neural networks (RNNs) have several characteristics that make them a model of choice for automatic speech processing: they can retain past and future contextual information and take it into account in their decisions. This paper specifically studies the behavior of Long Short-Term Memory (LSTM)-based neural networks on a specific automatic speech processing task: speech detection. The LSTM model was compared to two neural models: the Multi-Layer Perceptron (MLP) and Elman’s Recurrent Neural Network (RNN). Tests on five speech detection tasks show the efficiency of the LSTM model.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_26-Automatic_Detection_Technique_For_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Algorithm for Load Balancing in Multiprocessor Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090324</link>
        <id>10.14569/IJACSA.2018.090324</id>
        <doi>10.14569/IJACSA.2018.090324</doi>
        <lastModDate>2018-03-30T13:24:16.2900000+00:00</lastModDate>
        
        <creator>Saleh A. Khawatreh</creator>
        
        <subject>Multiprocessor system; homogeneous system; heterogeneous system; load balance; static load balancing; dynamic load balancing; response time; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>A multiprocessor system is a computer with two or more central processing units (CPUs), each sharing the common main memory as well as the peripherals. A multiprocessor system is either homogeneous or heterogeneous. A homogeneous system is a cluster of processors joined to a high-speed network to accomplish the required task; it is also defined as a parallel computing system. A heterogeneous system can be defined as the interconnection of a number of processors having dissimilar computational speeds. Load balancing is a method of distributing work between the processors fairly in order to obtain optimal response time, resource utilization, and throughput. Load balancing is either static or dynamic. In static load balancing, work is distributed among all processors before the execution of the algorithm; in dynamic load balancing, work is distributed among the processors during execution. Problems therefore arise when the tasks cannot be divided statically among the processors. To use multiprocessor systems efficiently, several load balancing algorithms have been widely adopted. This paper proposes an efficient load balancing algorithm which addresses common overheads that may decrease the efficiency of a multiprocessor system, such as synchronization, data communication, response time, and throughput.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_24-An_Efficient_Algorithm_for_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computerized Steganographic Technique using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090323</link>
        <id>10.14569/IJACSA.2018.090323</id>
        <doi>10.14569/IJACSA.2018.090323</doi>
        <lastModDate>2018-03-30T13:24:16.2430000+00:00</lastModDate>
        
        <creator>Abdulrahman Abdullah Alghamdi</creator>
        
        <subject>Computer security; fuzzy logic; carrier image; secret image; steganography; fuzzification; peak signal to noise ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Steganography is a method of providing computer security in which the required information is hidden by embedding messages, strings of characters containing the useful information, within a carrier image. Using this technique, the required information from the secret image is embedded into individual rows and columns of the carrier image's pixels. In this paper, a novel fuzzy-logic-based technique is proposed to hide the secret message in individual rows and columns of pixels of the carrier image and to extract the hidden message from the same carrier image. The fuzzification process transforms the image into various bitplanes. The pixel number and correlation value are computed in the original image for hiding the secret information, and they also serve as the key for retrieving the embedded image on the receiver side. Pixel merging is performed on the sender side by assigning a steganographic value of white and black pixels in the original image, based on fuzzy rules that compare the pixels of the original and secret images. The hidden information can be retrieved using the same fuzzy rules. Experimental results show that the proposed method can hide and retrieve secret and important messages in an image effectively and accurately.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_23-Computerized_Steganographic_Technique_using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Hypermedia Programs and its Impact on the Achievement of University Students Academically Defaulting in Computer Sciences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090321</link>
        <id>10.14569/IJACSA.2018.090321</id>
        <doi>10.14569/IJACSA.2018.090321</doi>
        <lastModDate>2018-03-30T13:24:16.2130000+00:00</lastModDate>
        
        <creator>Mohamed Desoky Rabeh</creator>
        
        <subject>Interactive hypermedia program; traditional teaching; university students; computer sciences; achievement; impact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Traditional teaching practices through lecture series in a classroom have shown less than universal efficacy in imparting knowledge to every student. Some students encounter problems in this traditional setting, especially in subjects that require applied instruction rather than verbal teaching. This study hypothesizes that university students who have difficulty understanding computer science achieve better results when interactive hypermedia programs are applied in their curricula. The study therefore conducted a teaching survey with a pretest-posttest control group design, in which computer science students of the Community College of Northern Border University were selected through non-probability sampling and underwent traditional teaching followed by interactive hypermedia sessions on the same subject. The evaluation of the change in performance showed a statistically significant difference in students' mean scores after attending the interactive hypermedia program, providing evidence that hypermedia-based educational sessions improved student performance more than sessions without hypermedia exposure. The study is limited to a quantitative experiment on computer science students of the Northern Border University, but the researcher believes that more widespread experimentation of the same kind can help establish, without bias, the advantage of hypermedia instruction in improving the academic performance of university students in different subjects.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_21-Interactive_Hypermedia_Programs_and_Its_Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Phone Operations using Human Eyes Only and its Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090322</link>
        <id>10.14569/IJACSA.2018.090322</id>
        <doi>10.14569/IJACSA.2018.090322</doi>
        <lastModDate>2018-03-30T13:24:16.2130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Mobile phone operations; line of sight estimation; gaze estimation; wearable computing; pupil detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>A method for operating a mobile phone using human eyes only is proposed, together with applications such as cooking while referring to recipes and manufacturing while referring to manuals, production procedures, and so on. It is found that most mobile phone operations can be performed without touching the screen of the mobile phone. The operation success rate of the proposed method is evaluated against environmental illumination conditions, visible versus near-infrared (NIR) cameras, and the distance between the user and the mobile phone, as is the pupil size detection accuracy under environmental illumination changes. The functionality of two typical applications of the proposed method is also confirmed successfully.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_22-Mobile_Phone_Operations_using_Human_Eyes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Gains-Scheduling of an Integral Sliding Mode Controller for a Quadrotor Unmanned Aerial Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090320</link>
        <id>10.14569/IJACSA.2018.090320</id>
        <doi>10.14569/IJACSA.2018.090320</doi>
        <lastModDate>2018-03-30T13:24:16.1970000+00:00</lastModDate>
        
        <creator>Nour Ben Ammar</creator>
        
        <creator>Soufiene Bouall&#232;gue</creator>
        
        <creator>Joseph Hagg&#232;ge</creator>
        
        <subject>Quadrotor UAV; modeling; flight dynamics stabilization; integral sliding mode control; fuzzy gains-scheduling, adaptive control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This paper investigates an Adaptive Fuzzy Gains-Scheduling Integral Sliding Mode Controller (AFGS-ISMC) design approach for the attitude and altitude stabilization of an Unmanned Aerial Vehicle (UAV), specifically a quadrotor. Integral Sliding Mode Control (ISMC) is an adequate control tool for this problem, but its controller parameters are most often selected through repetitive trial-and-error methods, which are not completely reliable and become time-consuming and difficult. Here we propose tuning and selecting all ISMC gains adaptively according to a fuzzy supervisor. The sliding surface and its derivative are taken as Fuzzy Logic Supervisor (FLS) inputs, and the integral sliding mode control gains as the FLS outputs. The proposed fuzzy-based supervision mechanism makes all ISMC gains time-varying and further enhances the performance and robustness of the resulting adaptive nonlinear controllers against uncertainties and external disturbances. The proposed adaptive fuzzy technique increases the effectiveness of the ISMC structure compared to the classical SMC strategy and eliminates the tedious trial-and-error process for its design and tuning. Various simulations have been carried out, followed by comparison and discussion of the results, to demonstrate the superiority of the suggested fuzzy gains-scheduled ISMC approach for quadrotor attitude and altitude flight stabilization.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_20-Fuzzy_Gains_Scheduling_of_an_Integral_Sliding_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smart Under-Frequency Load Shedding Scheme based on Takagi-Sugeno Fuzzy Inference System and Flexible Load Priority</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090319</link>
        <id>10.14569/IJACSA.2018.090319</id>
        <doi>10.14569/IJACSA.2018.090319</doi>
        <lastModDate>2018-03-30T13:24:16.1830000+00:00</lastModDate>
        
        <creator>J. A. Laghari</creator>
        
        <creator>Suhail Ahmed Almani</creator>
        
        <creator>Jagdesh Kumar</creator>
        
        <creator>Hazlie Mokhlis</creator>
        
        <subject>Distributed generation (DG); flexible load priority; fuzzy load shed amount estimation module (FLSAEM), islanded distribution network; under-frequency load shedding (UFLS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This paper proposes a new smart under-frequency load shedding (UFLS) scheme based on a Takagi-Sugeno (TS) fuzzy inference system and flexible load priority. The proposed scheme consists of two parts. The first part, a fuzzy load shed amount estimation module (FLSAEM), uses TS fuzzy inference to estimate the amount of load to shed and sends this value to an accurate load shedding module (ALSM), which performs accurate load shedding using flexible load priority. The performance of the proposed scheme is tested for an intentional islanding case and a sudden load increase in the system. Moreover, the response of the proposed scheme is compared with an adaptive UFLS scheme to highlight its advantages. The simulation results show that the proposed UFLS scheme provides accurate load shedding thanks to its flexible priority, whereas the adaptive UFLS scheme, constrained by a fixed load priority, does not achieve accurate load shedding.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_19-A_Smart_Under_Frequency_Load_Shedding_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Card ID: An Evolving and Viable Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090318</link>
        <id>10.14569/IJACSA.2018.090318</id>
        <doi>10.14569/IJACSA.2018.090318</doi>
        <lastModDate>2018-03-30T13:24:16.1670000+00:00</lastModDate>
        
        <creator>Praveen Kumar Singh</creator>
        
        <creator>Neeraj Kumar </creator>
        
        <creator>Bineet Kumar Gupta</creator>
        
        <subject>ISO; IoT; multipurpose; authentication; security; smart card reader; cryptography; identification technology; smart card application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>In today’s world, carrying a number of plastic smart cards to establish our identity has become an integral part of our routine lives. Identity establishment requires pre-stored, readily available data about oneself that an administrator can authenticate against the claimer’s personal information. There is a distinct requirement for a technological solution providing a nationwide multipurpose identity for every citizen. A number of options have been exercised by various countries, and every option has its own pros and cons. However, it has been observed that in most cases the smart card solution is preferred by both users and administrators. The use of smart cards is so prevalent that, in almost any profession, the identity of an individual is hardly considered complete without one. The principal aim of this paper is to discuss the viability of smart card technology as an identity solution and its ability to perform various functions with strong access control, which increases the reliability of smart cards over other technologies. The paper outlines an overview of smart card technology along with its key applications. Security concerns of smart cards are discussed through an algorithm with the help of a division integer proposition. The possibility of upgrading the card with evolving technology positions it for universal acceptability as identification. Its capability of storing a desired amount of information and computing multiple operations to authenticate a citizen dictates its widening acceptability, and this paper explains this through a proposed system flow chart.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_18-Smart_Card_ID_An_Evolving_and_Viable_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Framework and Maturity Index for Measuring Knowledge Management Practices in Government Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090317</link>
        <id>10.14569/IJACSA.2018.090317</id>
        <doi>10.14569/IJACSA.2018.090317</doi>
        <lastModDate>2018-03-30T13:24:16.1670000+00:00</lastModDate>
        
        <creator>Shilpa Vijaivargia</creator>
        
        <creator>Hemant Kumar Garg</creator>
        
        <subject>Knowledge; Explicit knowledge; Tacit knowledge; knowledge management; knowledge architecture; knowledge process framework; knowledge audit; maturity index; knowledge audit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Knowledge is considered an intellectual asset of any organization, through which the organization’s performance can be enhanced exponentially. Harnessing and managing an organization’s tacit and explicit knowledge is a crucial task, as the knowledge management practices adopted by government organizations are not yet standardized; they depend on the structure and processes adopted by each organization at its own level. This paper presents a strategic framework for knowledge management and defines a maturity index with three levels for measuring the knowledge management practices adopted by an organization. The paper defines the value of knowledge at all three maturity levels, based on the number of times the knowledge content is viewed and the benefits gained from viewing such content in terms of tangible assets and socio-economic impact. The knowledge management practices adopted by Bharat Electronics Limited (BEL) are studied and measured against the maturity levels defined in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_17-Strategic_Framework_and_Maturity_Index.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure and Privacy Preserving Mail Servers using Modified Homomorphic Encryption (MHE) Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090316</link>
        <id>10.14569/IJACSA.2018.090316</id>
        <doi>10.14569/IJACSA.2018.090316</doi>
        <lastModDate>2018-03-30T13:24:16.1500000+00:00</lastModDate>
        
        <creator>Lija Mohan</creator>
        
        <creator>Sudheep Elayidon M</creator>
        
        <subject>Big data; encrypted data searching; privacy preserving; homomorphic encryption; hadoop; map reduce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Electronic mail (email), or paperless mail, is becoming the most accepted, fastest, and cheapest way of formal and informal information sharing between users. Around 500 billion mails are sent each day, and the count is expected to increase. Today even sensitive and private information is shared through email, making it a primary target for attackers and hackers. Moreover, companies with their own mail servers rely on cloud systems to store mails at lower cost and maintenance, which compromises user privacy because the search patterns are visible to the cloud. To rectify this, we need a secure architecture for storing emails and retrieving them according to user queries: the data, as well as the queries and computations used to retrieve relevant mails, should be hidden from the third party. This article proposes a modified homomorphic encryption (MHE) technique to secure the mails. Homomorphic encryption is made practical using MHE, and by incorporating the MapReduce parallel programming model, the execution time is greatly reduced. Well-known information retrieval techniques, such as the Vector Space model and Term Frequency-Inverse Document Frequency (TF-IDF), are utilized to find mails relevant to a query. The analysis done on the dataset proves that our method is efficient in terms of execution time and in ensuring the security of the data and the privacy of the users.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_16-Secure_and_Privacy_Preserving_Mail_Servers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Energy Efficient Node Relocation Algorithm (DEENR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090315</link>
        <id>10.14569/IJACSA.2018.090315</id>
        <doi>10.14569/IJACSA.2018.090315</doi>
        <lastModDate>2018-03-30T13:24:16.1370000+00:00</lastModDate>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Muhammad Amir Khan</creator>
        
        <creator>Shahzad Ali</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <subject>Wireless sensor network; node failure; network connectivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Wireless Sensor Networks (WSNs), due to their inherent features, are vulnerable to single or multiple sensor node failures. A node failure can partition the network, causing loss of inter-node connectivity and eventually compromising the operation of the sensor network, so recovery from network partitioning is crucial for inter-node connectivity. A number of approaches have been proposed in the literature for restoring inter-node connectivity, but there remains a need for a distributed approach with energy-efficient operation, as energy is a scarce resource. With this in mind, we propose a novel technique to restore connectivity that is both distributed and energy efficient. The effectiveness of the proposed technique is demonstrated by extensive simulations, whose results show that it is efficient and capable of restoring network connectivity while improving coverage.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_15-Distributed_Energy_Efficient_Node_Relocation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization based Approach for Content Distribution in Hybrid Mobile Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090314</link>
        <id>10.14569/IJACSA.2018.090314</id>
        <doi>10.14569/IJACSA.2018.090314</doi>
        <lastModDate>2018-03-30T13:24:16.1200000+00:00</lastModDate>
        
        <creator>Rizwan Akhtar</creator>
        
        <creator>Imran Memon</creator>
        
        <creator>Zuhaib Ashfaq Khan</creator>
        
        <creator>Changda Wang</creator>
        
        <subject>Mobile social network; social super node; delays; network performance; end-to-end delays; content delivery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This paper presents a new strategy for smooth content distribution in mobile social networks. We propose a hybrid mobile social network architecture in which one node of a social community, called the social super node (SSN), has higher capacity and provides content distribution services, and we introduce methods and techniques to set the criteria for selecting a social super node. Simulation results are presented to verify the network performance of the proposed method. These results indicate that accessing content from a nearby social super node instead of the content provider improves overall network performance in terms of content end-to-end delays, delivery ratio, throughput, and cost.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_14-Optimisation_based_Approach_for_Content_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permanent Relocation and Self-Route Recovery in Wireless Sensor and Actor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090313</link>
        <id>10.14569/IJACSA.2018.090313</id>
        <doi>10.14569/IJACSA.2018.090313</doi>
        <lastModDate>2018-03-30T13:24:16.1030000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Muhammad Amir Khan</creator>
        
        <creator>Mahmood ul Hassan</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <creator>Muhammad Kashif Saeed</creator>
        
        <subject>Wireless sensor and actor networks; connectivity restoration; node mobility; route recovery node relocation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Connectivity and coverage in wireless sensor and actor networks play a significant role in mission-critical applications, where sensors and actors respond immediately to detected events in an organized and coordinated way for optimum restoration. When one or more actors fail, the network becomes disjoint, losing connectivity and coverage; therefore, a self-healing algorithm is required to sustain them. In this paper, two algorithms are proposed for connectivity and coverage: the Permanent Relocation Algorithm for Centralized Actor Recovery (PRACAR) and the Self-Route Recovery Algorithm (SRRA) for sensors. The effectiveness of the proposed technique is demonstrated by realistic simulation results, which confirm that it better maintains connectivity and coverage.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_13-Permanent_Relocation_and_Self_Route_Recovery_in_Wireless_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Incremental Rough Set Learning in Intelligent Traffic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090312</link>
        <id>10.14569/IJACSA.2018.090312</id>
        <doi>10.14569/IJACSA.2018.090312</doi>
        <lastModDate>2018-03-30T13:24:16.0570000+00:00</lastModDate>
        
        <creator>Amal Bentaher </creator>
        
        <creator>Yasser Fouad</creator>
        
        <creator>Khaled Mahar</creator>
        
        <subject>Vehicle to vehicle communication; online learning; rough sets theory; intelligent traffic system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>In the last few years, vehicle-to-vehicle (V2V) communication technology has been developed to improve the efficiency of traffic communication and road accident avoidance. In this paper, we propose a model for an online rough set learning vehicle-to-vehicle communication algorithm. The model is an incremental learning method that can learn data object-by-object or class-by-class. The paper proposes a new rule-generation approach for classifying vehicle data in collaborative environments, and the ROSETTA tool is applied to verify the reliability of the generated results. The experiments show that the online rough-set-based algorithm for vehicle data classification is suitable for execution in traffic communication environments. The model is applied to the objects’ (cars’) rules that define parameters for determining the value of communication, and it reduces the decision rules, leading to the estimation of their optimal value. A confusion matrix is used to assess the performance of the chosen model and its classes (Yes or No). The experimental results show the overall (predicted and actual) accuracy of the proposed model, demonstrate the strength of the online learning model against offline models, and highlight the importance of the accuracy and adaptability of incremental learning in improving prediction ability.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_12-Online_Incremental_Rough_Set_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Portable Natural Language Interface to Arabic Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090311</link>
        <id>10.14569/IJACSA.2018.090311</id>
        <doi>10.14569/IJACSA.2018.090311</doi>
        <lastModDate>2018-03-30T13:24:16.0100000+00:00</lastModDate>
        
        <creator>Aimad Hakkoum</creator>
        
        <creator>Hamza Kharrazi</creator>
        
        <creator>Said Raghay</creator>
        
        <subject>Natural language interface; ontology; Semantic web; Arabic natural language processing (NLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>With the growing expansion of the semantic web and its applications, providing natural language interfaces (NLIs) to end-users becomes essential for querying RDF stores and ontologies using simple questions expressed in natural language. Existing NLIs work mostly with the English language; there have been very few attempts to develop systems supporting Arabic. In this paper, we propose a portable NLI to Arabic ontologies that transforms a user’s query expressed in Arabic into a formal language query. The proposed system starts with a preparation phase that creates a gazetteer from the given ontology. The issued query is then processed using natural language processing (NLP) techniques to extract keywords, which are mapped to ontology entities; a valid SPARQL query is then generated based on the ontology definition and the reasoning capabilities of the Web Ontology Language (OWL). To evaluate our tool, we used two different Arabic ontologies: a Qur’anic ontology and an Arabic sample of the Mooney Geography dataset. The proposed system achieved 64% recall and 76% precision.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_11-A_Portable_Natural_Language_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Classification in Histopathological Images using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090310</link>
        <id>10.14569/IJACSA.2018.090310</id>
        <doi>10.14569/IJACSA.2018.090310</doi>
        <lastModDate>2018-03-30T13:24:15.9630000+00:00</lastModDate>
        
        <creator>Mohamad Mahmoud Al Rahhal</creator>
        
        <subject>Convolutional neural network (CNN); histopathological images; imagenet; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Computer-based analysis is one of the suggested means that can assist oncologists in the detection and diagnosis of breast cancer. Deep learning, meanwhile, has very recently been promoted as one of the hottest research directions in the general imaging literature, thanks to its high capability in detection and recognition tasks; yet it has not been adequately applied to the problem of breast cancer so far. In this context, I propose in this paper an approach for breast cancer detection and classification in histopathological images. The approach relies on a deep convolutional neural network (CNN), pretrained on an auxiliary domain with a very large number of labelled images and coupled with an additional network composed of fully connected layers. The network is trained separately for each image magnification (40x, 100x, 200x, and 400x). The results at the patient level achieved promising scores compared to state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_10-Breast_Cancer_Classification_in_Histopathological_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Collective Neurodynamic Approach to Survivable Virtual Network Embedding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090309</link>
        <id>10.14569/IJACSA.2018.090309</id>
        <doi>10.14569/IJACSA.2018.090309</doi>
        <lastModDate>2018-03-30T13:24:15.9000000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>Collective neurodynamics; integer linear programming; global optimization; network virtualization; survivable virtual network embedding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Network virtualization has attracted a significant amount of attention in the last few years as one of the key features of cloud computing. It allows multiple virtual networks to share the physical resources of a single substrate network. However, sharing substrate network resources increases the impact of a single substrate resource failure. One of the commonly applied mechanisms to protect against such failures is provisioning redundant substrate resources for each virtual network, to be used to recover affected virtual resources. However, redundant resources decrease cloud revenue by increasing the cost of virtual network embedding. In this paper, a collective neurodynamic approach is proposed to reduce the amount of provisioned redundant resources and the cost of embedding virtual networks. The proposed approach has been evaluated in simulation and compared against existing survivable virtual network embedding techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_9-A_Collective_Neurodynamic_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing of Cell Coverage in Light Fidelity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090308</link>
        <id>10.14569/IJACSA.2018.090308</id>
        <doi>10.14569/IJACSA.2018.090308</doi>
        <lastModDate>2018-03-30T13:24:15.8870000+00:00</lastModDate>
        
        <creator>Rabia Riaz</creator>
        
        <creator>Sanam Shahla Rizvi</creator>
        
        <creator>Farina Riaz</creator>
        
        <creator>Sana Shokat</creator>
        
        <subject>Light-Fidelity (Li-Fi); Wireless-Fidelity (Wi-Fi); communication technology; light emitting diode (LED)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The trend of communication has changed, and internet users demand higher data rates and secure communication links. Wireless-Fidelity (Wi-Fi), which uses radio waves for communication, has served as an internet access technology for many years. A new concept of wireless communication, known as Light-Fidelity (Li-Fi), uses visible light for communication instead. Li-Fi has attracted researchers for its many advantages over Wi-Fi, which is now an integral part of everyday life. In the near future, due to the scarcity of spectrum, it will be difficult to accommodate new users within the limited spectrum of Wi-Fi. Li-Fi is a good alternative because of its infinite spectrum range, as it uses the visible portion of the spectrum. Many researchers argue that Li-Fi is secure compared to Wi-Fi. But is it really secure enough? Can anybody access a Li-Fi hotspot? Or is there a need for a technique to block unauthorized access? In this work, a cellular concept is introduced for Li-Fi in order to increase security. The research presents a flexible, adjustable cell structure that enhances the security of Li-Fi. The coverage area is modelled using the geometry of a cone, and the area of the cone can be controlled. A mechanical system installed on the roof controls the coverage area by moving the LED bulb slightly up and down. A mathematical expression is provided for the proposed coverage area of the cell, which is formed at ground level by a beam of light originating from the light source. The adjustable, controlled structure provides security benefits to the owner. Finally, the research is backed by simulation in MATLAB.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_8-Designing_of_Cell_Coverage_in_Light_Fidelity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techniques for Improving the Labelling Process of Sentiment Analysis in the Saudi Stock Market</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090307</link>
        <id>10.14569/IJACSA.2018.090307</id>
        <doi>10.14569/IJACSA.2018.090307</doi>
        <lastModDate>2018-03-30T13:24:15.8700000+00:00</lastModDate>
        
        <creator>Hamed AL-Rubaiee</creator>
        
        <creator>Renxi Qiu</creator>
        
        <creator>Khalid Alomar</creator>
        
        <creator>Dayou Li</creator>
        
        <subject>Opinion mining; association rule; Arabic language; sentiment analysis; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Sentiment analysis is utilised to assess users’ feedback and comments. Recently, researchers have shown increased interest in this topic due to the spread and expansion of social networks. Users’ feedback and comments are written in unstructured formats, usually in informal language, which presents challenges for sentiment analysis. For the Arabic language, further challenges exist due to the complexity of the language and the lack of an available sentiment lexicon. Labelling carried out by hand can therefore lead to mislabelling and misclassification. Consequently, inaccurate classification creates the need for a relabelling process for Arabic documents to remove labelling noise. The aim of this study is to improve the labelling process for sentiment analysis. Two approaches were utilised. First, a neutral class was added to create a framework of reliable Twitter tweets with positive, negative, or neutral sentiments. Second, the labelling process was improved by relabelling. In this study, the relabelling process was applied to only eight random features (positive or negative): “earnings” (ارباح), “losses” (خسائر), “green colour” (باللون_الاخضر), “growing” (زياده), “distribution” (توزيع), “decrease” (انخفاض), “financial penalty” (غرامة), and “delay” (تاجيل). Of the 48 tweets documented and examined, 20 tweets were relabelled and the classification error was reduced by 1.34%.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_7-Techniques_for_Improving_the_Labelling_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Estimation of Wind Turbine Tip Speed Ratio by Adaptive Neuro-Fuzzy Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090306</link>
        <id>10.14569/IJACSA.2018.090306</id>
        <doi>10.14569/IJACSA.2018.090306</doi>
        <lastModDate>2018-03-30T13:24:15.8530000+00:00</lastModDate>
        
        <creator>Aamer Bilal Asghar</creator>
        
        <creator>Xiaodong Liu</creator>
        
        <subject>Wind speed; rotor speed; power coefficient; tip speed ratio; ANFIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The efficiency of a wind turbine depends strongly on the value of its tip speed ratio during operation, and the power coefficient of a wind turbine varies with the tip speed ratio. For maximum power extraction, it is very important to hold the tip speed ratio at its optimum value and operate the variable-speed wind turbine at its maximum power coefficient. In this paper, an intelligent learning-based adaptive neuro-fuzzy inference system (ANFIS) is proposed for online estimation of the tip speed ratio (TSR) as a function of wind speed and rotor speed. The system is developed by assigning fuzzy membership functions (MFs) to the input-output variables, and an artificial neural network (ANN) is applied to train the system using the backpropagation gradient descent algorithm and the least squares method. During training, the ANN adjusts the shape of the MFs by analyzing the training data set and automatically generates the decision-making fuzzy rules. The simulations are done in MATLAB for the standard offshore 5 MW baseline wind turbine developed by the National Renewable Energy Laboratory (NREL). The performance of the proposed neuro-fuzzy algorithm is compared with a conventional multilayer perceptron feed-forward neural network (MLPFFNN). The results show the effectiveness of the proposed model, which proves more reliable for accurate estimation of the tip speed ratio.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_6-Online_Estimation_of_Wind_Turbine_Tip_Speed_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Day-Ahead Load Forecasting using Support Vector Regression Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090305</link>
        <id>10.14569/IJACSA.2018.090305</id>
        <doi>10.14569/IJACSA.2018.090305</doi>
        <lastModDate>2018-03-30T13:24:15.8530000+00:00</lastModDate>
        
        <creator>Lemuel Clark P. Velasco</creator>
        
        <creator>Daisy Lou L. Polestico</creator>
        
        <creator>Dominique Michelle M. Abella</creator>
        
        <creator>Genesis T. Alegata</creator>
        
        <creator>Gabrielle C. Luna</creator>
        
        <subject>Support vector regression machines; day-ahead load forecasting; energy analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Accurate day-ahead load prediction plays a significant role for electric companies because decisions on power system generation depend on the future behavior of loads. This paper presents a strategy for short-term load forecasting that utilizes support vector regression machines (SVRM). Proper data preparation, model implementation and model validation methods were introduced in this study. The implemented SVRM model is composed of specific features, parameters, data architecture and a kernel chosen to achieve accurate pattern discovery. The developed model was implemented in an electric load forecasting system using the Java open-source library LibSVM. To confirm its effectiveness, the performance of the developed model was evaluated on the study’s validation set and compared to other published models. The created SVRM model produced the lowest Mean Absolute Percentage Error (MAPE) of 1.48% and was found to be a viable forecasting technique for a day-ahead electric load forecasting system.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_5-Day_Ahead_Load_Forecasting_using_Support_Vector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the Impact of Different Parameter Settings on Wireless Sensor Network Lifetime</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090304</link>
        <id>10.14569/IJACSA.2018.090304</id>
        <doi>10.14569/IJACSA.2018.090304</doi>
        <lastModDate>2018-03-30T13:24:15.8400000+00:00</lastModDate>
        
        <creator>Muhammad Usman Younus</creator>
        
        <subject>Wireless sensor network; energy consumption; lifetime; WSN parameters; transmission frequency; sampling frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>The importance of wireless sensors is increasing day by day due to their large demand. Sensor networks face several issues, among which the battery lifetime of sensor nodes is critical. It depends on the nature and application of the wireless sensor network and its different parameters (sampling frequency, transmission frequency, processing power and transmission power). In this paper, we propose a new and realistic model to show the effect of energy consumption on the lifetime of wireless sensor nodes. After analyzing the model’s behavior, we are able to identify the sensitive parameters that most strongly affect the lifetime of sensor nodes.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_4-Analysis_of_the_Impact_of_Different_Parameters_Setting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges in Designing Ethical Rules for Infrastructures in Internet of Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090303</link>
        <id>10.14569/IJACSA.2018.090303</id>
        <doi>10.14569/IJACSA.2018.090303</doi>
        <lastModDate>2018-03-30T13:24:15.8230000+00:00</lastModDate>
        
        <creator>Razi Iqbal</creator>
        
        <subject>Ethics; road side units; vehicular ad-hoc networks; internet of vehicles; intelligent transportation systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>Vehicular Ad-hoc Networks (VANETs) have seen significant advancements in technology. Innovation in connectivity and communication has brought substantial capabilities to various components of VANETs such as vehicles, infrastructures, passengers, drivers and affiliated environmental sensors. The Internet of Things (IoT) has brought the notion of the Internet of Vehicles (IoV) to VANETs, where each component of a VANET is connected directly or indirectly to the Internet. Vehicles and infrastructures are the key components of a VANET system that can greatly augment the overall experience of the network by integrating the competencies of Vehicle to Vehicle (V2V), Vehicle to Pedestrian (V2P), Vehicle to Sensor (V2S), Vehicle to Infrastructure (V2I) and Infrastructure to Infrastructure (I2I) communication. Internet connectivity in vehicles and infrastructures has immensely expanded the potential of developing applications for VANETs under the broad spectrum of IoV. Advances in the use of technology in VANETs require considerable efforts in designing ethical rules for autonomous systems. Currently, there is a gap in the literature concerning the challenges involved in designing ethical rules or policies for infrastructures, sometimes referred to as Road Side Units (RSUs), in IoV systems. This paper highlights the key challenges entailed in the design of ethical rules for RSUs in IoV systems. Furthermore, the article also proposes major ethical principles for RSUs in IoV systems that would set the foundation for modeling future IoV architectures.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_3-Challenges_of_Ethical_Rules_for_IoV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating X-Ray based Medical Imaging Devices with Fuzzy Preference Ranking Organization Method for Enrichment Evaluations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090302</link>
        <id>10.14569/IJACSA.2018.090302</id>
        <doi>10.14569/IJACSA.2018.090302</doi>
        <lastModDate>2018-03-30T13:24:15.8070000+00:00</lastModDate>
        
        <creator>Dilber Uzun Ozsahin</creator>
        
        <creator>Berna Uzun</creator>
        
        <creator>Musa Sani Musa</creator>
        
        <creator>Ilker Ozsahin</creator>
        
        <subject>X-ray-based imaging devices; medical imaging; fuzzy PROMETHEE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>X-rays are ionizing radiation of very high energy, used in the medical imaging field to produce images of diagnostic importance. X-ray-based imaging devices are machines that send ionizing radiation into the patient’s body and obtain an image which can be used to diagnose the patient effectively. These devices serve the same purpose; some are simply more advanced forms of the others and are used for specialized radiological exams. These devices have image quality parameters which need to be assessed in order to characterize the efficiency, potential and drawbacks of each. The parameters include sensitivity and specificity, the radiation dose delivered to the patient, and the cost of treatment and of the machine. These parameters are important in that they affect the patient, the hospital management and the radiation worker. Therefore, this paper incorporates them into the fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) multi-criteria decision theory in order to help decision makers improve the efficiency of their decision processes, so that they arrive at the best solution in due course.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_2-Evaluating_X_Ray_based_Medical_Imaging_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bitter Melon Crop Yield Prediction using Machine Learning Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090301</link>
        <id>10.14569/IJACSA.2018.090301</id>
        <doi>10.14569/IJACSA.2018.090301</doi>
        <lastModDate>2018-03-30T13:24:15.7770000+00:00</lastModDate>
        
        <creator>Marizel B. Villanueva</creator>
        
        <creator>Ma. Louella M. Salenga</creator>
        
        <subject>Agriculture; Artificial Intelligence; Keras; machine learning algorithm; machine learning; neural network; convolutional neural network; prediction; Python; tensor flow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(3), 2018</description>
        <description>This research paper aimed to determine the crop-bearing capability of bitter melon or bitter gourd, more commonly called “Ampalaya” in the Filipino language. Images of bitter melon leaves were gathered from Ampalaya farms and used as the main data of the research. The leaves were classified as good or bad through their description. The research used a machine learning algorithm, specifically a convolutional neural network. Training was carried out using Keras and TensorFlow in Python. In conclusion, increasing the number of images could enable a machine to learn the difference between a good and a bad Ampalaya plant when presented with an image for prediction.</description>
        <description>http://thesai.org/Downloads/Volume9No3/Paper_1-Bitter_Melon_Crop_Yield_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Design of Pilot Aided Channel Estimation for MIMO-CDMA System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090255</link>
        <id>10.14569/IJACSA.2018.090255</id>
        <doi>10.14569/IJACSA.2018.090255</doi>
        <lastModDate>2018-02-28T12:22:49.8670000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <subject>Channel estimation; MIMO-CDMA; channel estimator; MMSE; Rayleigh fading channel; SNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In order to estimate the characteristics of a fading channel, a pilot signal is propagated along with the traffic channel. Fading channel parameter estimation is of paramount importance, as it may be utilized to design different equalization techniques. It may also be utilized to allocate the weights of a rake receiver to the strongest multipaths, as well as for coherent reception and weighted combination of the multipath constituents of wireless communication systems. In this paper, a pilot-aided channel estimation technique for MIMO-CDMA systems is presented. This technique applies minimum mean squared error estimation to information corrupted by a flat fading channel and noise. Simulation results strongly validate the theoretical predictions for different values of SNR and numbers of users.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_55-A_Novel_Design_of_Pilot_Aided_Channel_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review and Classification of Widely used Offline Brain Datasets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090254</link>
        <id>10.14569/IJACSA.2018.090254</id>
        <doi>10.14569/IJACSA.2018.090254</doi>
        <lastModDate>2018-02-28T12:22:49.8500000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>Muhammad Sajjad</creator>
        
        <creator>Farheen Ramzan</creator>
        
        <creator>Usman Ghani Khan</creator>
        
        <creator>Waqar Mahmood</creator>
        
        <subject>BCI; dataset; brain-computer interface; amyotrophic lateral sclerosis; classification </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Brain Computer Interfaces (BCI) are a natural extension of Human Computer Interaction (HCI) technologies. BCI is especially useful for people suffering from diseases such as Amyotrophic Lateral Sclerosis (ALS), which cause motor disabilities in patients. To evaluate the effectiveness of BCI in different paradigms, the need for benchmark BCI datasets is increasing rapidly. Although such datasets do exist, a comparative study of them is, to the best of our knowledge, not available. In this paper, we provide a comprehensive overview of various BCI datasets. We briefly describe the characteristics of these datasets and devise a classification scheme for them. The comparative study lists the feature extractors and classifiers used for each dataset. Moreover, potential use-cases for each dataset are also provided.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_54-A_Review_and_Classification_of_Widely_Used_Offline_Brain_Datasets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Securely Eradicating Cellular Dependency for E-Banking Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090253</link>
        <id>10.14569/IJACSA.2018.090253</id>
        <doi>10.14569/IJACSA.2018.090253</doi>
        <lastModDate>2018-02-28T12:22:49.8330000+00:00</lastModDate>
        
        <creator>Bisma Rasool Pampori</creator>
        
        <creator>Tehseen Mehraj</creator>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Asifa Mehraj Baba</creator>
        
        <creator>Zahoor Ahmad Najar</creator>
        
        <subject>E-banking; one time password (OTP); global system for mobile communication (GSM); authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Numerous applications are available on the Internet for the exchange of personal information and money. All these applications need to authenticate users to confirm their legitimacy. Currently, the most commonly employed credentials are static passwords. But people tend to behave carelessly in choosing their passwords to avoid the burden of memorizing complex ones. Such frail password habits are a severe threat to the various services available online, especially electronic banking or e-banking. To eradicate the necessity of creating and managing passwords, a variety of solutions are prevalent, the traditional one being the use of a One-Time-Password (OTP), a single session/transaction password. However, the majority of OTP-based security solutions fail to satisfy usability or scalability requirements and are quite vulnerable owing to their reliance on multiple communication channels. In this paper, a reliable and adoptable solution that provides better security in online banking transactions is proposed. This is an initiative to eradicate the dependency on the Global System for Mobile communication (GSM), which is the most popular means of sending One-Time-Passwords to users availing themselves of e-banking facilities.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_53-Securely_Eradicating_GSM_Dependency_for_E_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teen’s Social Media Adoption: An Empirical Investigation in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090252</link>
        <id>10.14569/IJACSA.2018.090252</id>
        <doi>10.14569/IJACSA.2018.090252</doi>
        <lastModDate>2018-02-28T12:22:49.8030000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Harin Puspa Ayu Catherina</creator>
        
        <creator>Dita Rahma Puspitasari</creator>
        
        <creator>Yustiyana April Lia Sari</creator>
        
        <subject>Social media; Technology Acceptance Model (TAM); user behavior; perceived usefulness; trust; intention; actual use; Structural Equation Modeling (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Social media have reached great popularity in the past decade. Indonesia has more than 63 million social media users who access their accounts through mobile phones, making it the third-largest user base in the world after the United States and India. This study attempts to determine the factors affecting users’ behavioural intention in social media usage. The TAM (Technology Acceptance Model) for social media by Rauniar et al. is adopted to provide empirical evidence on teens in Indonesia. Data were collected through a questionnaire survey and the hypotheses were analyzed with SEM (Structural Equation Modeling). The results show that the factor affecting Indonesian teens in using social media is perceived usefulness (PP), while Trustworthiness (TW) has no significant influence on their intention to use social media.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_52-Teens_Social_Media_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Evolutionary Algorithms for Multi-Objective Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090251</link>
        <id>10.14569/IJACSA.2018.090251</id>
        <doi>10.14569/IJACSA.2018.090251</doi>
        <lastModDate>2018-02-28T12:22:49.7400000+00:00</lastModDate>
        
        <creator>Nosheen Qamar</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Irfan Younas</creator>
        
        <subject>Evolutionary computation; algorithms; NSGA-II; NSGA-III; MOEA-D; comparative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Evolutionary computation has grown considerably in the last few years. Inspired by biological evolution, this field is used to solve NP-hard optimization problems and come up with the best solution. The travelling salesman problem (TSP) is a popular and complex problem used to evaluate different algorithms. In this paper, we conduct a comparative analysis of NSGA-II, NSGA-III, SPEA-2, MOEA/D and VEGA to find out which algorithm is best suited to the multi-objective TSP (MOTSP). The results reveal that MOEA/D performed better than the other algorithms in terms of higher hypervolume and lower values of generational distance (GD), inverse generational distance (IGD) and adaptive epsilon. On the other hand, MOEA/D took more time than the rest of the algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_51-Comparative_Analysis_of_Evolutionary_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Behavior of the Minimum Euclidean Distance Optimization Precoders with Soft Maximum Likelihood Detector for High Data Rate MIMO Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090250</link>
        <id>10.14569/IJACSA.2018.090250</id>
        <doi>10.14569/IJACSA.2018.090250</doi>
        <lastModDate>2018-02-28T12:22:49.7270000+00:00</lastModDate>
        
        <creator>MAHI Sarra</creator>
        
        <creator>BOUACHA Abdelhafid</creator>
        
        <subject>MIMO; max-dmin; POSM; singular values decomposition (SVD); soft-ML detector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Linear closed-loop Multiple-Input Multiple-Output (CL-MIMO) precoding techniques, characterized by knowledge of the channel state information (CSI) at both sides of the link, aim to improve information throughput and reduce the bit error rate of the communication system. The processing involves multiplying a signal by a precoding matrix computed from the CSI according to some optimized criterion. In this paper, we propose a new concatenation of precoders optimizing the minimum Euclidean distance with soft Maximum Likelihood (soft-ML) detection. We analyze the performance, in terms of bit error rate (BER), of the proposed association with three well-known quantized precoders based on the same criterion: the Maximum of minimum Euclidean distance (Max-dmin) precoder, the Orthogonalized Spatial Multiplexing precoder (POSM), and Orthogonalized Spatial Multiplexing (OSM), in a coded MIMO system over a Rayleigh fading channel using Quadrature Amplitude Modulation (QAM). Simulations show the interest of the proposed association of the dmin-based precoder with a soft-ML detector, and the best result is achieved for the Max-dmin precoder.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_50-Behavior_of_the_Minimum_Euclidean_Distance_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Choice of Knowledge Representation Model for Development of Knowledge Base: Possible Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090249</link>
        <id>10.14569/IJACSA.2018.090249</id>
        <doi>10.14569/IJACSA.2018.090249</doi>
        <lastModDate>2018-02-28T12:22:49.6930000+00:00</lastModDate>
        
        <creator>Sabina Katalnikova</creator>
        
        <creator>Leonids Novickis</creator>
        
        <subject>Extended semantic networks; knowledge base; knowledge representation model; semantic networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In today’s society, knowledge, information, and intelligent computer systems built on knowledge bases play a great role. The ability of an intelligent system to implement its functions efficiently depends on how efficiently its knowledge base is organized and on whether the applied knowledge representation models comply with the set requirements. The article is devoted to the problem of choosing a knowledge representation model. Based on a requirement analysis for knowledge representation models, one solution to the researched problem is shown to be the application of extended semantic networks. An analysis of the properties of extended semantic networks is carried out, and relevant examples of applying extended semantic networks to knowledge representation in various spheres are offered.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_49-Choice_of_Knowledge_Representation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel DDoS Floods Detection and Testing Approaches for Network Traffic based on Linux Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090248</link>
        <id>10.14569/IJACSA.2018.090248</id>
        <doi>10.14569/IJACSA.2018.090248</doi>
        <lastModDate>2018-02-28T12:22:49.6630000+00:00</lastModDate>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Mingchu Li</creator>
        
        <creator>Naeem Ayoub</creator>
        
        <creator>Usman Shehzaib</creator>
        
        <creator>Atif Wagan</creator>
        
        <subject>DDoS attacks; floods detection; Linux-APS architecture; mitigation techniques; network traffic; netfilter; testing approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In today’s digital world, the continuous interruption of users has affected Web Servers (WSVRs) through Distributed Denial-of-Service (DDoS) attacks. These attacks remain a massive threat to the World Wide Web (WWW): they can completely interrupt the accessibility of WSVRs by disturbing data processing and intercommunication across Data-Driven Networks (DDN), management, and cooperative communities on the Internet. The purpose of this research is to find, describe, and test existing tools and features available in a Linux-based Availability Protection System (Linux-APS) lab design for filtering the malicious traffic flow of DDoS attacks. As sources of malicious traffic, the most widely used DDoS attacks targeting WSVRs are taken: Synchronize (SYN), User Datagram Protocol (UDP), and Internet Control Message Protocol (ICMP) flooding attacks are described, and different variants of mitigation techniques are explained. Available cooperative tools for manipulating network traffic, such as Ebtables and Iptables, are compared for each type of attack. A specially created experimental network with configured filter servers and a bridge was used for testing. Packet flow through the Linux-kernel network stack was inspected, along with tuning options serving to increase filter-server traffic throughput. As an outcome, the Ebtables tool appears to be the most productive, as it needs fewer resources to process each packet (frame); however, a separate detection system is needed to provide its filtering methods with data. The main conclusion is that Linux-APS solutions provide full functionality for filtering the malicious traffic flow of DDoS attacks, either in a stand-alone state or combined with detection systems.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_48-A_Novel_DDoS_Floods_Detection_and_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time-Dependence in Multi-Agent MDP Applied to Gate Assignment Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090247</link>
        <id>10.14569/IJACSA.2018.090247</id>
        <doi>10.14569/IJACSA.2018.090247</doi>
        <lastModDate>2018-02-28T12:22:49.6470000+00:00</lastModDate>
        
        <creator>Oussama AOUN</creator>
        
        <creator>Abdellatif EL AFIA</creator>
        
        <subject>Time-dependent Multi-Agent Markov Decision Processes; stochastic programming; flight delays; Gate Assignment Problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Many disturbances can impact gate assignments in the daily operations of an airport. The Gate Assignment Problem (GAP) is the main task of an airport: ensuring smooth flight-to-gate assignment while managing all disturbances. However, the flight schedule often undergoes unplanned disruptions, such as weather conditions, gate availability, or simply a delay that usually arises. A good GAP plan should manage stochastic events as far as possible and include them all in the assignment planning. To build a robust model that takes eventual planning disorder into account, a dynamic stochastic vision based on Markov Decision Process theory is designed. In this approach, gates are perceived as collaborative agents seeking to accomplish a specific set of flight assignment tasks as provided by a centralized controller. Multi-agent reasoning is then coupled with time dependence, with both time-dependent action durations and stochastic state transitions. This reflection enables setting up a new model for the GAP powered by Time-dependent Multi-Agent Markov Decision Processes (TMMDP). This model can provide airport controllers with a robust prior solution in every time sequence, rather than bringing the risk of online schedule adjustments to handle uncertainty. The solution of this model is a set of optimal, time-valued decisions to be made in each case of traffic disruption and at every moment.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_47-Time_Dependence_in_Multi_Agent_MDP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Compact Modified Square Printed Planar Antenna for UWB Microwave Imaging Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090246</link>
        <id>10.14569/IJACSA.2018.090246</id>
        <doi>10.14569/IJACSA.2018.090246</doi>
        <lastModDate>2018-02-28T12:22:49.6170000+00:00</lastModDate>
        
        <creator>Djamila Ziani</creator>
        
        <creator>Sidi Mohammed Meriah</creator>
        
        <creator>Lotfi Merad</creator>
        
        <subject>Modified square planar antenna; high bandwidth; return loss; UWB antenna; low profile; frequency and time domain analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In this paper, both the frequency and time domain performances of a new compact planar antenna for ultra-wideband (UWB) applications are fully investigated. The proposed antenna has a size of 12x18 mm&#178;, providing a fractional bandwidth of more than 128% (3.057 GHz to 13.98 GHz, S11&lt;-10 dB). The results show that the proposed antenna&#8217;s performance in terms of wide bandwidth, small size, gain and radiation pattern, transmission coefficient, and system fidelity factor is very satisfactory. Moreover, by fabricating and testing the proposed antenna, the simulation results are fairly verified.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_46-A_Compact_Modified_Square_Printed_Planar_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy based Soft Computing Technique to Predict the Movement of the Price of a Stock</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090245</link>
        <id>10.14569/IJACSA.2018.090245</id>
        <doi>10.14569/IJACSA.2018.090245</doi>
        <lastModDate>2018-02-28T12:22:49.6000000+00:00</lastModDate>
        
        <creator>Ashit Kumar Dutta</creator>
        
        <subject>Soft computing; fuzzy logic; stock recommendation; fuzzy based soft computing; soft computing systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Soft computing is a branch of artificial intelligence, and fuzzy logic is the study of fuzziness in data. The combination of these two techniques can provide an intelligent system with more ability and flexibility. The nature of data in the stock/capital market is complex, making it challenging to predict the movement of a stock&#8217;s price. This study combines the fuzzy c-means and neural network techniques to predict the price of a stock. The research finds an optimum solution for predicting the future price of a stock. A comparison of time and space complexity shows that the proposed method is better than existing methods.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_45-A_Fuzzy_based_Soft_Computing_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficiency and Performance Analysis of a Sparse and Powerful Second Order SVM Based on LP and QP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090244</link>
        <id>10.14569/IJACSA.2018.090244</id>
        <doi>10.14569/IJACSA.2018.090244</doi>
        <lastModDate>2018-02-28T12:22:49.5830000+00:00</lastModDate>
        
        <creator>Rezaul Karim</creator>
        
        <creator>Amit Kumar Kundu</creator>
        
        <subject>Generalization failure rate; Kernel machine; LP; QP; machine accuracy cost; Second Order Support Vector Machine; sparse</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>A productivity analysis is performed on the new algorithm &#8220;Second Order Support Vector Machine (SOSVM)&#8221;, which can be thought of as an offshoot of the popular SVM, based on its conventional QP version as well as the LP one. Our main goal is to produce a machine which is: 1) sparse &amp; efficient; 2) powerful (kernel based) but not overfitted; 3) easily realizable. Experiments on benchmark data show that, to classify a new pattern, the proposed machine SOSVM requires as few as 2.7% of the samples of the original data set, 4.8% of those of the conventional QP SVM, or 48.3% of those of Vapnik&#8217;s LP SVM, which is already sparse. Despite this heavy test-cost reduction, its classification accuracy is very similar to that of the most powerful QP SVM, while it is very simple to produce. Moreover, two new terms, &#8220;Generalization Failure Rate (GFR)&#8221; and &#8220;Machine-Accuracy-Cost (MAC)&#8221;, are defined to measure the generalization deficiency and accuracy cost of a detector, respectively, and are used to compare different machines. Results show that our machine&#8217;s GFR is as little as 1.4% of the QP SVM&#8217;s or 1.5% of Vapnik&#8217;s LP SVM&#8217;s, and its MAC as little as 2.6% of the QP SVM&#8217;s or 35.9% of Vapnik&#8217;s sparse LP SVM&#8217;s. Finally, having only two types of parameters to tune, this machine is straightforward and cheaper to produce compared to the most popular and state-of-the-art machines in this direction. These results collectively fulfill the three key goals the machine was built for.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_44-Efficiency_and_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Thyristor Controlled Series Capacitor on Voltage Profile of Transmission Lines using PSAT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090243</link>
        <id>10.14569/IJACSA.2018.090243</id>
        <doi>10.14569/IJACSA.2018.090243</doi>
        <lastModDate>2018-02-28T12:22:49.5700000+00:00</lastModDate>
        
        <creator>Babar Noor</creator>
        
        <creator>Muhammad Aamir Aman</creator>
        
        <creator>Murad Ali</creator>
        
        <creator>Sanaullah Ahmad</creator>
        
        <creator>Fazal Wahab Karam</creator>
        
        <subject>Flexible AC Transmission System (FACTS); Thyristor Controlled Series Capacitors (TCSC); Power System Analysis Tool (PSAT); voltage profile; voltage collapse</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In a power system, voltage stability is very important in order to maintain the voltage within defined limits. The demand for electrical power has increased in the last decade, while the generation and transmission network has not expanded accordingly, so the available transmission network is heavily loaded. This loading of the transmission network causes voltage instability, and on a heavily loaded system this instability creates voltage collapse, which causes power loss. Due to this phenomenon, it is necessary to keep monitoring voltage instability and to reduce voltage collapse. Voltage collapse mostly occurs due to a shortage of available reactive power. A power electronic device, the Flexible AC Transmission System (FACTS), is used to add reactive power to the system, which improves the voltage profile and minimizes the conditions under which voltage collapse occurs. In this paper, a FACTS series compensator, the Thyristor Controlled Series Capacitor (TCSC), is injected between two nodes of the IEEE 6-bus test system to check the voltage profile. The Power System Analysis Tool (PSAT), a new MATLAB tool for power system analysis studies, is used, with the IEEE 6-bus system serving as the test system for verifying the effectiveness of the proposed method. The voltage profiles with and without the TCSC device are then compared to conclude the result.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_43-Impact_of_Thyristor_Controlled_Series_Capacitor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Reconfiguration of LPWANs Pervasive System using Multi-agent Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090242</link>
        <id>10.14569/IJACSA.2018.090242</id>
        <doi>10.14569/IJACSA.2018.090242</doi>
        <lastModDate>2018-02-28T12:22:49.5370000+00:00</lastModDate>
        
        <creator>Ghouti ABDELLAOUI</creator>
        
        <creator>Fethi Tarik BENDIMERAD</creator>
        
        <subject>Dynamic reconfiguration; pervasive system; multi-agent system; Low Power Wide Area Network (LPWAN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The development of the Low Power Wide Area Network (LPWAN) has given new hope for the Internet of Things and M2M networks to become the most prevalent network type in the industrial world in the near future. This type of network is designed to connect several entities in a radius that can reach up to 10 km. This gain in range is made possible by a reduction in the amount of information exchanged, which makes LPWANs the most suitable networks for telemetry applications. The large network coverage offered by LPWAN makes it possible to connect a large number of objects. On the other hand, it creates difficulties for the pervasive system associated with this kind of network in integrating all these objects dynamically, so an automatic reconfiguration process becomes crucial for this kind of network. In this study, we propose multi-agent systems as a solution to virtualize the heterogeneity of the peripherals and to facilitate their integration and dynamic exploitation in the LPWAN system. This virtualization is possible thanks to the portability of multi-agent systems and the standardization of the exploitation of the services they offer.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_42-Dynamic_Reconfiguration_of_LPWANs_Pervasive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Increasing Number of Nodes on Performance of SMAC, CSMA/CA and TDMA in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090241</link>
        <id>10.14569/IJACSA.2018.090241</id>
        <doi>10.14569/IJACSA.2018.090241</doi>
        <lastModDate>2018-02-28T12:22:49.5070000+00:00</lastModDate>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Farooq Faisal</creator>
        
        <creator>Mahmood Nawaz</creator>
        
        <creator>Farkhanda Javed</creator>
        
        <creator>Fawad Ali Khan</creator>
        
        <creator>Rafidah MD Noor</creator>
        
        <creator>Matiullah</creator>
        
        <creator> Zia ullah</creator>
        
        <creator>Muhammad Shoaib</creator>
        
        <creator>Faqir Usman Masood</creator>
        
        <subject>Medium access control (MAC); sensor medium access control (SMAC); time division multiple access (TDMA); carrier sense multiple access with collision avoidance (CSMA/CA); ad-hoc on demand distance vector (AODV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The importance of Wireless Sensor Networks (WSNs) is increasing due to their deployment for geographical, environmental, and surveillance purposes in war fields. WSNs face several challenges due to their complex nature, including key problems such as routing and medium access control protocols. Several approaches have been proposed for the performance evaluation of WSNs on the basis of these issues, since MAC-layer access protocols have a great impact on WSN performance. In this paper, we investigate the performance of three well-known MAC access protocols, the sensor medium access control protocol (SMAC), carrier sense multiple access with collision avoidance (CSMA/CA), and time division multiple access (TDMA), over the ad-hoc on demand distance vector (AODV) routing protocol. A number of simulation scenarios were carried out using NS-2; the simulation metrics used are throughput, end-to-end delay, and energy consumed. Simulation results show that SMAC outperforms CSMA/CA and TDMA, consuming less energy and achieving lower end-to-end delay and higher throughput due to its contention-based approach to accessing the medium for transmission.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_41-Effect_of_Increasing_Number_of_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service Impact on Deficit Round Robin and Stochastic Fair Queuing Mechanism in Wired-cum-Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090240</link>
        <id>10.14569/IJACSA.2018.090240</id>
        <doi>10.14569/IJACSA.2018.090240</doi>
        <lastModDate>2018-02-28T12:22:49.4900000+00:00</lastModDate>
        
        <creator>Fahim Khan Khalil</creator>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Farooq Faisal</creator>
        
        <creator>Mahmood Nawaz</creator>
        
        <creator>Farkhanda Javed</creator>
        
        <creator>Fawad Ali Khan</creator>
        
        <creator>Rafidah MD Noor</creator>
        
        <creator>Matiullah</creator>
        
        <creator>Zia ullah</creator>
        
        <creator>Muhammad Shoaib</creator>
        
        <creator>Faqir Usman Masood</creator>
        
        <subject>Active queue management; deficit round robin; stochastic fair queuing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Deficit round robin (DRR) and stochastic fair queuing (SFQ) are active queue management (AQM) techniques. These AQM techniques play an important role in buffer management, controlling congestion in a wired-cum-wireless network by dropping packets when the buffer overflows or is near overflow. This research study focuses on the performance evaluation of DRR and SFQ in different scenarios: an increasing number of nodes, pause time, and mobility. We evaluate the performance of DRR and SFQ on two parameters, average packet delay and average packets dropped. With an increasing number of nodes, SFQ outperformed DRR by having comparatively low per-packet delay, while DRR had a higher packet-drop ratio than SFQ. In the mobility and pause time scenarios, SFQ had less per-packet delay, while DRR had a lower packet-drop ratio. These results reveal that DRR performance was affected by an increase in the number of nodes in the network: DRR sends packets in a round-robin fashion without considering the bandwidth of a path, which made its packet-drop ratio high. On the other hand, SFQ comparatively outperformed DRR in all scenarios by having less per-packet delay, though SFQ becomes aggressive by dropping more data packets during buffer overflow. In short, SFQ is preferred for a network where congestion occurs more frequently.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_40-Quality_of_Service_Impact_on_Deficit_Round_Robin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowd Counting Mapping to make a Decision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090239</link>
        <id>10.14569/IJACSA.2018.090239</id>
        <doi>10.14569/IJACSA.2018.090239</doi>
        <lastModDate>2018-02-28T12:22:49.4770000+00:00</lastModDate>
        
        <creator>Enas Faisal</creator>
        
        <creator>Azzam Sleit</creator>
        
        <creator>Rizik Alsayyed</creator>
        
        <subject>Crowd; map; image processing; human detection; threshold; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Congestion typically occurs when the size of a crowd exceeds the capacity of facilities. In some cases, when buildings have to be evacuated, people might be trapped in congestion and unable to escape from the building early enough, which might even lead to stampedes. Crowd Congestion Mapping (CCM) is a system that enables organizations to find information about crowd congestion in target places. This project provides the ability to make the right decision, determine the reasons that led to the congestion, and carry out the appropriate procedures to avoid it happening again by optimizing the locations and dimensions of emergency exits and less congested paths in the target places. The system collects crowd congestion data from the locations and makes it available to corporations via a target map. Congestion is plotted on the target place map: for example, a red line for a highly congested location, a pink line for a mildly congested location, and a green line for free flow of people in the location.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_39-Crowd_Counting_Mapping_to_make_a_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine-Learning Techniques for Customer Retention: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090238</link>
        <id>10.14569/IJACSA.2018.090238</id>
        <doi>10.14569/IJACSA.2018.090238</doi>
        <lastModDate>2018-02-28T12:22:49.4430000+00:00</lastModDate>
        
        <creator>Sahar F. Sabbeh</creator>
        
        <subject>Customer relationship management (CRM); customer retention; analytical CRM; business intelligence; machine-learning; predictive analytics; data mining; customer churn</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Nowadays, customers have become more interested in the quality of service (QoS) that organizations can provide them. Services provided by different vendors are not highly distinguished, which increases competition between organizations to maintain and increase their QoS. Customer Relationship Management (CRM) systems are used to enable organizations to acquire new customers, establish a continuous relationship with them, and increase customer retention for more profitability. CRM systems use machine-learning models to analyze customers&#8217; personal and behavioral data to give organizations a competitive advantage by increasing the customer retention rate. Those models can predict customers who are expected to churn and the reasons for churn. Predictions are used to design targeted marketing plans and service offers. This paper compares and analyzes the performance of different machine-learning techniques used for the churn prediction problem. Ten analytical techniques that belong to different categories of learning are chosen for this study: Discriminant Analysis, Decision Trees (CART), instance-based learning (k-nearest neighbors), Support Vector Machines, Logistic Regression, ensemble-based learning techniques (Random Forest, AdaBoost trees, and Stochastic Gradient Boosting), Na&#239;ve Bayes, and Multi-layer Perceptron. The models were applied to a telecommunication dataset that contains 3333 records. Results show that both Random Forest and AdaBoost outperform all other techniques with almost the same accuracy of 96%. Both the Multi-layer Perceptron and Support Vector Machine can be recommended as well, with 94% accuracy. The Decision Tree achieved 90%, Na&#239;ve Bayes 88%, and finally Logistic Regression and Linear Discriminant Analysis (LDA) an accuracy of 86.7%.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_38-Machine_Learning_Techniques_for_Customer_Retention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insulator Detection and Defect Classification using Rotation Invariant Local Directional Pattern</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090237</link>
        <id>10.14569/IJACSA.2018.090237</id>
        <doi>10.14569/IJACSA.2018.090237</doi>
        <lastModDate>2018-02-28T12:22:49.4130000+00:00</lastModDate>
        
        <creator>Taskeed Jabid</creator>
        
        <creator>Tanveer Ahsan</creator>
        
        <subject>Insulator detection; insulator defect analysis; local direction pattern (LDP); rotation invariant local directional pattern (RI-LDP); support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Automatically detecting power line insulators and analyzing their defects are vital processes in maintaining power distribution systems. In this work, a rotation invariant texture pattern named the rotation invariant local directional pattern (RI-LDP) is proposed for representing insulator images. First, the local directional pattern (LDP) is applied to the image; it encodes the local texture pattern into an eight-bit binary code by analyzing the magnitude of edge responses in eight different directions. This LDP code is then made robust to rotation by meticulously rearranging it into another binary code, named the rotation invariant local directional pattern (RI-LDP). Insulator detection is carried out with the RI-LDP-based histogram acting as the feature vector and a support vector machine (SVM) playing the role of the classifier. The detected insulator image region is further analyzed for possible defect identification; for this, an automatic extraction method for the individual insulator caps is proposed. The defects in segmented insulators are analyzed using the LDP texture feature on individual cap regions. We evaluated the proposed method using two sets of 493 real-world insulator images captured from a ground vehicle. The proposed insulator detector shows performance comparable to the state of the art, and our defect analysis method outperforms existing methods.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_37-Insulator_Detection_and_Defect_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance of Deep Learning and Machine Learning Algorithms on Imbalanced Handwritten Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090236</link>
        <id>10.14569/IJACSA.2018.090236</id>
        <doi>10.14569/IJACSA.2018.090236</doi>
        <lastModDate>2018-02-28T12:22:49.3970000+00:00</lastModDate>
        
        <creator>A’inur A’fifah Amri</creator>
        
        <creator>Amelia Ritahani Ismail</creator>
        
        <creator>Abdullah Ahmad Zarir</creator>
        
        <subject>Deep belief networks; support vector machine; back propagation neural networks; imbalanced handwritten data; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Imbalanced data is one of the challenges in classification tasks in machine learning. Data disparity produces biased model output regardless of how recent the technology is. However, deep learning algorithms, such as deep belief networks, have shown promising results in many domains, especially in image processing. Therefore, in this paper, we review the effect of imbalanced data disparity across classes using deep belief networks as the benchmark model and compare it with conventional machine learning algorithms, such as backpropagation neural networks, decision trees, na&#239;ve Bayes, and support vector machines, on the MNIST handwritten dataset. The experiment shows that although the algorithm is stable and suitable for multiple domains, the imbalanced data distribution still manages to affect the outcome of the conventional machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_36-Comparative_Performance_of_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning Method to Screen Inhibitors of Virulent Transcription Regulator of Salmonella Typhi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090235</link>
        <id>10.14569/IJACSA.2018.090235</id>
        <doi>10.14569/IJACSA.2018.090235</doi>
        <lastModDate>2018-02-28T12:22:49.3830000+00:00</lastModDate>
        
        <creator>Syed Asif Hassan</creator>
        
        <creator>Atif Hassan</creator>
        
        <creator>Tabrej Khan</creator>
        
        <subject>Typhoid; PhoP regulon; classification model; machine learning (ML) algorithm; eXtreme Gradient Boosting; random forest; sensitivity; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The PhoP regulon, a two-component regulatory system, is a well-studied system of Salmonella enterica serotype Typhi and has been shown to play a crucial role in the pathophysiology of typhoid as well as the intracellular survival of the bacterium within host macrophages. The absence of the PhoP regulon in the human system makes its regulatory proteins specific targets for future drug discovery programs against multi-drug resistant strains of Salmonella enterica serotype Typhi. In recent years, high-throughput screening (HTS) has proven to be a reliable source of hit finding against various diseases, including typhoid. However, the cost and time involved in HTS are of significant concern. Therefore, there is still a need for an expedient method that is reliable in screening active hit molecules while being less time consuming and inexpensive. In this regard, the application of a machine learning (ML) based chemoinformatics model to perform HTS of drug-like hit molecules against MDR strains of Salmonella enterica serotype Typhi is most applicable. In this study, bagging and gradient boosting based ML algorithms were used to build a predictive classification model to perform virtual HTS of active inhibitors of the PhoP regulon of Salmonella enterica serotype Typhi. The eXtreme Gradient Boosting (XGBoost) based classification model was comparatively accurate and sensitive in classifying active drug-like inhibitors of the PhoP regulon of Salmonella enterica serotype Typhi.</description>
        <description>http://thesai.org/Downloads/Volume9No2/paper_35-Machine_Learning_Method_to_Screen_Inhibitors_of_Virulent_Transcription.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Social Media Analysis on 3 Layers: A Real Time Enhanced Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090234</link>
        <id>10.14569/IJACSA.2018.090234</id>
        <doi>10.14569/IJACSA.2018.090234</doi>
        <lastModDate>2018-02-28T12:22:49.3670000+00:00</lastModDate>
        
        <creator>Mohamed Amine TALHAOUI</creator>
        
        <creator>Hicham AIT EL BOUR</creator>
        
        <creator>Reda MOULOUKI</creator>
        
        <creator>Saida NKIRI</creator>
        
        <creator>Mohamed AZOUAZI</creator>
        
        <subject>Twitter; machine learning; sentiment; Lambda; recommendation; Big data; opinion mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The Internet can be considered an open field for expression regarding products, politics, ideas, and people. These expressive interactions generate a large amount of data pinned to users and groups. In that scope, Big Data, along with various technologies such as social media, cloud computing, and machine learning, can be used as a toolbox to make sense of the data and generate efficient analyses and studies of individuals and crowds regarding market orientation, politics, and industry. Recommendation systems act as the pillars of this technology, drawing on sentiment analysis and predictive analysis to make sense of users&#8217; data. However, this complex operation comes at a price: each analysis requires its own personalized architecture and tools. In this paper, a novel design of a recommender system is provided, powered by sentiment analysis and predictive models applied to an example data flow from the social media platform Twitter.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_34-An_Improved_Social_Media_Analysis_on_3_Layers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improvement of FA Terms Dictionary using Power Link and Co-Word Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090233</link>
        <id>10.14569/IJACSA.2018.090233</id>
        <doi>10.14569/IJACSA.2018.090233</doi>
        <lastModDate>2018-02-28T12:22:49.3370000+00:00</lastModDate>
        
        <creator>El-Sayed Atlam</creator>
        
        <creator>Dawlat A. El A.Mohamed</creator>
        
        <creator>Fayed Ghaleb</creator>
        
        <creator>Doaa Abo-Shady</creator>
        
        <subject>Information retrieval; FA terms; co-word analysis; power link; precision; recall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Information retrieval involves obtaining wanted information from a database. In this paper, we use the Power Link to improve the field association (FA) terms extracted from a corpus by the proposed algorithm, supporting the machine in taking the right decision and placing candidate words in their appropriate position in the dictionary of field association terms. Power Link is used as a quantitative tool to compute the co-citation relation between two words, depending on the co-frequency of, and distances between, instances of the words. The concept of the Power Link, together with modifications of its rules, is used to classify scientific papers into their proper fields. Instead of treating the whole document at once, a given document is divided into three parts: title, abstract, and body. A given term is assigned a weight that depends on its location within the document, with the greatest weight given to the title, then the abstract, then the body, respectively. Results show an improvement in precision, recall, and F-measure.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_33-An_Improvement_of_FA_Terms_Dictionary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying the Impact of Water Supply on Wheat Yield by using Principle Lasso Radial Machine Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090232</link>
        <id>10.14569/IJACSA.2018.090232</id>
        <doi>10.14569/IJACSA.2018.090232</doi>
        <lastModDate>2018-02-28T12:22:49.3030000+00:00</lastModDate>
        
        <creator>Muhammad Adnan</creator>
        
        <creator>M. Abid</creator>
        
        <creator>M. Ahsan Latif</creator>
        
        <creator>Abaid-ur-Rehman</creator>
        
        <creator>Naheed Akhter</creator>
        
        <creator>Muhammad Kashif</creator>
        
        <subject>Radial basis function (RBF); Radial Basis Neural Network (RBNN); ANN; lasso; principle component analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Wheat plays a vital role in food production, as it fulfills 60% of the calorie and protein requirements of 35% of the world&#8217;s population. Owing to wheat&#8217;s importance in food, demand for it is increasing continuously, and wheat yield depends on the availability of water supply. Due to the climatic and environmental variations of different countries, water is not available in the constant, desired quantity necessary for better wheat yield. Thus, a strong relationship and dependency exist between water supply and wheat yield, and water supply is becoming an issue because it directly affects the yield. In this research, a Principle Lasso Radial (PLR) model is proposed using machine learning techniques to measure the effect of water supply on wheat yield. In this model, various experiments are conducted with respect to the performance metrics, i.e., relative water contents, waxiness, grains per spike, and plant height. The PLR model produces dimensionally reduced data with respect to these performance metrics. That data is provided to a Radial Basis Neural Network (RBNN), which yields regression values R under different water supply conditions. The PLR model achieved an accuracy of 89% among various machine learning techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_32-Studying_the_Impact_of_Water_Supply_on_Wheat_Yield.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Web 2.0 on Digital Divide in AJ&amp;K Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090231</link>
        <id>10.14569/IJACSA.2018.090231</id>
        <doi>10.14569/IJACSA.2018.090231</doi>
        <lastModDate>2018-02-28T12:22:49.2870000+00:00</lastModDate>
        
        <creator>Sana Shokat</creator>
        
        <creator>Rabia Riaz</creator>
        
        <creator>Sanam Shahla Rizvi</creator>
        
        <creator>Farina Riaz</creator>
        
        <creator>Samaira Aziz</creator>
        
        <creator>Raja Shoaib Hussain</creator>
        
        <creator>Mohaib Zulfiqar Abbasi</creator>
        
        <creator>Saba Shabir</creator>
        
        <subject>AJ&amp;K; digital divide; ICT; web2.0; social networking; social inclusion and cohesion enabling approaches; net-living life styling personalization and optimization; subjective human and social factors; well-being through net living</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The digital divide is normally measured in terms of the gap between those who can efficiently use new technological tools, such as the internet, and those who cannot. It was also hypothesized that Web 2.0 tools motivate people to use technology, i.e., social networking sites can play an important role in bridging the digital gap. The study was conducted to determine the presence of a digital divide in urban and rural areas of district Muzaffarabad, Azad Jammu &amp; Kashmir. A cross-sectional, community-based survey was conducted involving 384 respondents from the city of Muzaffarabad and the village of Garhi Doppta. The existence of a digital divide was assessed on the basis of the questionnaires given, and a chi-square test was applied to find the association of different demographic and ICT-related factors with internet usage. Despite growing awareness, gender-, age-, and area-based digital divides remain possible. Outcomes of the survey affirmed that Web 2.0 based websites are becoming popular and attracting people to use the internet.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_31-Impact_of_Web_2_on_Digital_Divide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of the Frequency Characteristics for RFID Patch Antenna based on C-Shaped Split Ring Resonator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090230</link>
        <id>10.14569/IJACSA.2018.090230</id>
        <doi>10.14569/IJACSA.2018.090230</doi>
        <lastModDate>2018-02-28T12:22:49.2570000+00:00</lastModDate>
        
        <creator>Mahdi Abdelkarim</creator>
        
        <creator>Seif Naoui</creator>
        
        <creator>Lassad Larach</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Rectangular patch antenna; C-shaped split ring resonator; RFID reader antenna; metamaterials antenna</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In this paper, we present a new technique for improving the frequency characteristics and miniaturizing the geometric dimensions of an RFID patch antenna operating in the SHF band. The technique consists of implementing a periodic network, based on a new model of a C-shaped split ring resonator, on the square slots of the radiating rectangular patch antenna. Simulation results show that the proposed technique achieves excellent radiation efficiency and size reduction compared to the reference patch antenna.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_30-Improvement_of_the_Frequency_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Proposed Model to Increase Security of Sensitive Data in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090229</link>
        <id>10.14569/IJACSA.2018.090229</id>
        <doi>10.14569/IJACSA.2018.090229</doi>
        <lastModDate>2018-02-28T12:22:49.2270000+00:00</lastModDate>
        
        <creator>Dhurat&#235; Hyseni</creator>
        
        <creator>Besnik Selimi</creator>
        
        <creator>Artan Luma</creator>
        
        <creator>Betim Cico</creator>
        
        <subject>ITSS-IT security specialist; partitioning; confidentiality; cloud service provider; cloud service client; platform as a service; service as a service; third party auditor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Securing data in the cloud is a complex problem, and it becomes more critical when the data in question is highly sensitive. One of the main approaches to overcoming this problem is encrypting data at rest, which comes with its own difficulties, such as efficient key management and access permissions. In this paper, we propose a new approach to security that is controlled by the IT Security Specialist (ITSS) of the company or organization. The approach is based on multiple strategies of file encryption, partitioning, and distribution among multiple storage providers, resulting in increased confidentiality, since a supposed attacker must first obtain the parts of a file from the different storage providers and know how to combine them before any decryption attempt. All details of the strategy used for a particular file are stored in a separate file, which can be considered a master key for the file contents. We also present each strategy with results and comments on the measurements performed.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_29-The_Proposed_Model_to_Increase_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Task Scheduling Algorithm using Firefly and Simulated Annealing Algorithms in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090228</link>
        <id>10.14569/IJACSA.2018.090228</id>
        <doi>10.14569/IJACSA.2018.090228</doi>
        <lastModDate>2018-02-28T12:22:49.1930000+00:00</lastModDate>
        
        <creator>Fakhrosadat Fanian</creator>
        
        <creator>Vahid Khatibi Bardsiri</creator>
        
        <creator>Mohammad Shokouhifar</creator>
        
        <subject>Firefly; makespan; simulated annealing; task scheduling; cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Task scheduling is a challenging and important issue which, considering increases in data sizes and large volumes of data, has turned into an NP-hard problem. It has attracted the attention of many researchers throughout the world, since cloud environments are in fact homogeneous systems for maintaining and processing the practical applications needed by users. Thus, task scheduling has become extremely important in order to provide better services to users. In this regard, the present study aims at providing a new task-scheduling algorithm using both the firefly and simulated annealing algorithms, taking advantage of the merits of each. Moreover, efforts have been made to change the primary population, or primary solutions, for the firefly algorithm; the presented algorithm uses a better primary solution. Local search was another aspect considered in the new algorithm. The presented algorithm was compared with and evaluated against common algorithms. As indicated by the results, the presented method performs noticeably better than other algorithms in reducing makespan for different numbers of tasks and virtual machines.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_28-A_New_Task_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Trending Hash Tags for Arabic Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090227</link>
        <id>10.14569/IJACSA.2018.090227</id>
        <doi>10.14569/IJACSA.2018.090227</doi>
        <lastModDate>2018-02-28T12:22:49.1800000+00:00</lastModDate>
        
        <creator>Yahya AlMurtadha</creator>
        
        <subject>Arabic sentiment analysis; twitter; opinion mining; trending hashtags; text analysis; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>People post millions of messages every day on microblogging social networks, especially Twitter, which makes microblogs a rich source of public opinions, customers&#8217; comments, and reviews. Companies and public sectors are looking for a way to measure the public response and feedback on a particular service or product. Sentiment analysis is an encouraging technique capable of sensing public opinion in a faster and less costly way than traditional survey methods like questionnaires and interviews. Various sentiment analysis methods have been developed in many languages, such as English and Arabic, with far more studies in the former. Sometimes, hashtags are misleading or may have a title that does not really reflect the subject, while tweets in trending hashtags may contain keywords or topic titles that better represent the subject of the hashtag. This research proposes an approach that explores Twitter hashtag trends to retrieve tweets, groups the retrieved tweets to learn topic profiles, performs sentiment analysis to test the subjectivity of tweets, and then develops a prediction model using deep learning to classify a new tweet into the appropriate topic profile. Arabic hashtag trends have been used to evaluate the proposed approach. The proposed approach (clustering topics within a hashtag trend to learn topic profiles, then performing sentiment analysis) shows better accuracy than sentiment analysis without clustering the topics.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_27-Mining_Trending_Hash_Tags_for_Arabic_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis using SVM: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090226</link>
        <id>10.14569/IJACSA.2018.090226</id>
        <doi>10.14569/IJACSA.2018.090226</doi>
        <lastModDate>2018-02-28T12:22:49.1470000+00:00</lastModDate>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Shabib Aftab</creator>
        
        <creator>Muhammad Salman Bashir</creator>
        
        <creator>Noureen Hameed</creator>
        
        <subject>Sentiment analysis; polarity detection; machine learning; SVM; support vector machine; SLR; systematic literature review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is essential to exploit and inherit the advantages and opportunities it provides. With the advent of Web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization not to adopt the new techniques in the competitive stakes that this emerging virtual world has set, along with its advantages. Transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users&#8217; reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable of assessing what consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on a critical analysis of the literature from 2012 to 2017 on sentiment analysis using SVM (support vector machine), one of the most widely used supervised machine learning techniques for text classification. This systematic review will serve scholars and researchers in analyzing the latest work on sentiment analysis with SVM, as well as providing them a baseline for future trends and comparisons.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_26-Sentiment_Analysis_using_SVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A 1NF Data Model for Representing Time-Varying Data in Relational Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090225</link>
        <id>10.14569/IJACSA.2018.090225</id>
        <doi>10.14569/IJACSA.2018.090225</doi>
        <lastModDate>2018-02-28T12:22:49.1170000+00:00</lastModDate>
        
        <creator>Nashwan Alromema</creator>
        
        <creator>Fahad Alotaibi</creator>
        
        <subject>Time-oriented data model; time-varying data; valid-time data; transaction time data; bitemporal data; data model; N1NF; 1NF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Attaching date and time to varying data plays a definite role in representing a dynamic domain and resources in database systems. Conventional databases store current data and can only represent knowledge in a static sense, whereas a time-varying database represents knowledge in a dynamic sense. This paper focuses on incorporating interval-based timestamping in a First Normal Form (1NF) data model. The 1NF approach was chosen for its easy implementation in the relational framework, as well as to provide temporal data representation with the modeling and querying power of the relational data model. Simulation results revealed that the proposed approach substantially improved the performance of temporal data representation in terms of required memory storage and query processing time.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_25-A_1NF_Data_Model_for_Representing_Time_Varying_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multilingual Datasets Repository of the Hadith Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090224</link>
        <id>10.14569/IJACSA.2018.090224</id>
        <doi>10.14569/IJACSA.2018.090224</doi>
        <lastModDate>2018-02-28T12:22:49.1170000+00:00</lastModDate>
        
        <creator>Ahsan Mahmood</creator>
        
        <creator>Hikmat Ullah Khan</creator>
        
        <creator>Fawaz K. Alarfaj</creator>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Mahwish Ilyas</creator>
        
        <subject>Data extraction; preprocessing; regex; Hadith; text analysis; parsing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Knowledge extraction from unstructured data is a challenging problem in the research domain of Natural Language Processing (NLP). It requires complex NLP tasks like entity extraction and Information Extraction (IE), but one of the most challenging tasks is to extract all the required entities of the data in a structured format so that data analysis can be applied. Our focus is to explain how the data is extracted into datasets or a conventional database so that further text and data analysis can be carried out. This paper presents a framework for Hadith data extraction from authentic Hadith sources. Hadith is the collection of the sayings of the Holy Prophet Muhammad, who is the last holy prophet according to Islamic teachings. This paper discusses the preparation of the dataset repository and highlights issues in the relevant research domain. The research problems of data extraction, pre-processing, and data analysis, together with their solutions, are elaborated. The results have been evaluated using standard performance evaluation measures. The dataset is available in multiple languages and multiple formats, free of cost for research purposes.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_24-A_Multilingual_Datasets_Repository.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Validation of a Cooling Load Prediction Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090223</link>
        <id>10.14569/IJACSA.2018.090223</id>
        <doi>10.14569/IJACSA.2018.090223</doi>
        <lastModDate>2018-02-28T12:22:49.0870000+00:00</lastModDate>
        
        <creator>Abir Khabthani</creator>
        
        <creator>Leila Ch&#226;abane</creator>
        
        <subject>Smart building; energy efficiency; prediction; short sampling-rate; less stored data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In smart buildings, cooling load prediction is important and essential for energy efficiency, especially in hot countries. Indeed, prediction is required in order to provide occupants with their consumption and encourage them to take the right decisions that would potentially decrease their energy demand. In some existing models, prediction is based on a selected reference day, a selection that depends on the similarity of several conditions; such a model needs deep analysis of large amounts of past data. Instead of a deep study to properly select the reference day, this paper focuses on a short sampling rate for predicting the next state, so the method requires fewer inputs and less stored data, and prediction results are closer to the real state. In the first phase, an hourly cooling load model is implemented. This model takes the current cooling load, the current outside temperature, and the weather forecast as inputs to predict the next hour&#8217;s cooling consumption. To enhance the model&#8217;s performance and reliability, the sampling period is decreased to 30 minutes, with respect to the system dynamics. Lastly, prediction accuracy is improved by using previous errors between the actual cooling load and the prediction results. Simulations are carried out at nodes located on a campus, showing good agreement with measurements.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_23-Development_and_Validation_of_A_Cooling_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Age Estimation Approach based on Deep Learning and Principle Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090222</link>
        <id>10.14569/IJACSA.2018.090222</id>
        <doi>10.14569/IJACSA.2018.090222</doi>
        <lastModDate>2018-02-28T12:22:49.0530000+00:00</lastModDate>
        
        <creator>Noor Mualla</creator>
        
        <creator>Essam H. Houssein</creator>
        
        <creator>Hala H. Zayed</creator>
        
        <subject>Deep learning; principal component analysis; support vector machine; K-NN; age estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>This paper presents an approach for age estimation from faces by classifying facial images into predefined age groups. Such a task faces several difficulties because of the different aspects of every single person: factors like exposure, weather, gender and lifestyle all come into play. While some trends are similar for faces from the same age group, it is problematic to distinguish the aging aspects of every age group. This paper concentrates on four chosen age groups in which the estimation takes place. We employed a fast and effective machine learning method, deep learning, to solve the age categorization problem. Principal component analysis (PCA) was used for extracting features and reducing the face images. Age estimation was applied to three different aging datasets from Morph, and experimental results are reported to validate its efficiency and robustness. The results show that the proposed approach achieves high classification accuracy compared with support vector machines (SVM) and k-nearest neighbors (K-NN).</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_22-Face_Age_Estimation_Approach_based_on_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LeafPopDown: Leaf Popular Down Caching Strategy for Information-Centric Networking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090221</link>
        <id>10.14569/IJACSA.2018.090221</id>
        <doi>10.14569/IJACSA.2018.090221</doi>
        <lastModDate>2018-02-28T12:22:49.0400000+00:00</lastModDate>
        
        <creator>Hizbullah Khattak</creator>
        
        <creator>Noor Ul Amin</creator>
        
        <creator>Ikram ud Din</creator>
        
        <creator>Insafullah</creator>
        
        <creator>Jawaid Iqbal</creator>
        
        <subject>Component; information-centric networking; caching; popularity; least recently used</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Information-Centric Networking (ICN) is a name-based Internet architecture and is considered an alternative to the IP-based Internet architecture. The in-network caching feature used in ICN has attracted research interest, as it reduces network traffic and server overload and minimizes the latency experienced by end users. Researchers have proposed different caching policies for ICN aiming to optimize performance metrics such as cache hits, diversity and eviction operations. In this paper, we propose a novel caching strategy, LeafPopDown, for ICN that significantly reduces eviction operations and enhances cache hits and diversity.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_21-LeafPopDown_Leaf_Popular_Down_Caching_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Portable Virtual LAB for Informatics Education using Open Source Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090220</link>
        <id>10.14569/IJACSA.2018.090220</id>
        <doi>10.14569/IJACSA.2018.090220</doi>
        <lastModDate>2018-02-28T12:22:49.0070000+00:00</lastModDate>
        
        <creator>Ali H. Alharbi</creator>
        
        <subject>Virtual labs; open source software; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The need for students to have hands-on experience is very important in many disciplines to match the requirements of today’s dynamic job market. Informatics, which is the science of information engineering, has recently been integrated into many academic programs. Teaching students the main skills in modern software and web development is essential for them to become successful informatics professionals. In any informatics program, students work on projects as an essential part of some courses. This paper presents the development and evaluation of MiLAB (My Mobile Informatics Lab), a portable virtual lab environment for the teaching and learning of modern web development skills. MiLAB has been integrated into an undergraduate health informatics academic program to improve the teaching and learning of essential web development skills, such as database management and the customization of modern content management systems. The evaluation of MiLAB indicated that it served as an interactive personal environment for students to implement, collaborate on, and present their web development projects. Strengths, weaknesses and possible improvements are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_20-A_Portable_Virtual_LAB_for_Informatics_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arijo: Location-Specific Data Crowdsourcing Web Application as a Curriculum Supplement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090219</link>
        <id>10.14569/IJACSA.2018.090219</id>
        <doi>10.14569/IJACSA.2018.090219</doi>
        <lastModDate>2018-02-28T12:22:48.9770000+00:00</lastModDate>
        
        <creator>Justin Banusing</creator>
        
        <creator>Cedrick Jason Cruz</creator>
        
        <creator>Peter John Flores</creator>
        
        <creator>Eisen Ed Briones</creator>
        
        <creator>Gerald Salazar</creator>
        
        <creator>Rhydd Balinas</creator>
        
        <creator>Serafin Farinas</creator>
        
        <subject>Web application; smart devices; data crowdsourcing; curriculum supplement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Smart devices are quickly becoming more accessible to the general public. With the proper tools, they can be used to supplement the work of educators. According to studies by Beeland Jr. and Roussou, learning through interaction is considered effective by both students and teachers. This study aimed to develop an interactive curriculum supplement for smart devices in the form of a Location-specific Data Crowdsourcing Web Application (Arijo), which teaches students how to conduct experiments and upload their results to the Internet for archival purposes. Arijo was developed with a combination of the Appsheet framework, Adobe Photoshop, and Google Maps. Three core functionalities were programmed: data input/output, data interpretation, and information dissemination. Arijo was able to perform its intended features, such as recording and displaying data within specific locations, along with displaying guides on how to conduct an experiment. Through these features, Arijo fulfilled its main objective of being a curriculum supplement. In the future, Arijo may be expanded to support more year levels and multiple curricula because of its modular nature.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_19-Arijo_Data_Crowdsourcing_Web_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Contrast Enhancement by Scaling Reconstructed Approximation Coefficients using SVD Combined Masking Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090218</link>
        <id>10.14569/IJACSA.2018.090218</id>
        <doi>10.14569/IJACSA.2018.090218</doi>
        <lastModDate>2018-02-28T12:22:48.9470000+00:00</lastModDate>
        
        <creator>Sandeepa K S</creator>
        
        <creator>Basavaraj N Jagadale</creator>
        
        <creator>J S Bhat</creator>
        
        <creator>Mukund N Naragund</creator>
        
        <creator>Panchaxri</creator>
        
        <subject>Standard intensity deviation clipped sub image histogram equalization; discrete wavelet transform; singular value decomposition; masking technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The proposed method addresses general issues of image contrast enhancement. The input image is enhanced by incorporating the discrete wavelet transform, singular value decomposition, standard intensity deviation based clipped sub-image histogram equalization, and a masking technique. In this method, the low-pass filtered wavelet coefficients and their scaled version undergo the masking approach. The scale value is obtained using singular value decomposition between the reconstructed approximation coefficients and the standard intensity deviation based clipped sub-image histogram equalization image. The masking image is added to the original image to produce a maximally contrast-enhanced image. The superiority of the proposed method is tested against other methods, and qualitative and quantitative analyses are used to justify its performance.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_18-Image_Contrast_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Exascale Computing Systems: An Energy Efficient Massive Parallel Computational Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090217</link>
        <id>10.14569/IJACSA.2018.090217</id>
        <doi>10.14569/IJACSA.2018.090217</doi>
        <lastModDate>2018-02-28T12:22:48.9300000+00:00</lastModDate>
        
        <creator>Muhammad Usman Ashraf</creator>
        
        <creator>Fathy Alburaei Eassa</creator>
        
        <creator>Aiiad Ahmad Albeshri</creator>
        
        <creator>Abdullah Algarni</creator>
        
        <subject>Exascale computing; high-performance computing (HPC); massive parallelism; super computing; energy efficiency; hybrid programming; CUDA; OpenMP; MPI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The emerging Exascale supercomputing system, expected by 2020, will unravel many scientific mysteries. This extreme computing system will achieve a thousand-fold increase in computing power compared to current petascale computing systems. The forthcoming system will assist system designers and development communities in navigating from traditional homogeneous systems to heterogeneous systems that incorporate powerful accelerated GPU devices besides traditional CPUs. In achieving ExaFlops (10^18 calculations per second) performance through such an ultrascale and energy-efficient system, current technologies face several challenges. Massive parallelism is one of these challenges, and it requires a novel energy-efficient parallel programming (PP) model for providing massively parallel performance. In the current study, a new parallel programming model is proposed that is capable of achieving massively parallel performance through coarse-grained and fine-grained parallelism over inter-node and intra-node architecture-based processing. The suggested model is a tri-level hybrid of MPI, OpenMP and CUDA that is computable over a heterogeneous system with the collaboration of traditional CPUs and energy-efficient GPU devices. Furthermore, the developed model is demonstrated by implementing dense matrix multiplication (DMM). The proposed model is considered an initial and leading model for obtaining massively parallel performance in an Exascale computing system.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_17-Toward_Exascale_Computing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customer Satisfaction Measurement using Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090216</link>
        <id>10.14569/IJACSA.2018.090216</id>
        <doi>10.14569/IJACSA.2018.090216</doi>
        <lastModDate>2018-02-28T12:22:48.9130000+00:00</lastModDate>
        
        <creator>Shaha Al-Otaibi</creator>
        
        <creator>Allulo Alnassar</creator>
        
        <creator>Asma Alshahrani</creator>
        
        <creator>Amany Al-Mubarak</creator>
        
        <creator>Sara Albugami</creator>
        
        <creator>Nada Almutiri</creator>
        
        <creator>Aisha Albugami</creator>
        
        <subject>Social media analytics; sentiment; classification; support vector machine; unigram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Besides the traditional methods of targeting customers, social media presents its own set of opportunities. While companies look for a simple way to obtain a large number of responses, social media platforms like Twitter allow them to do just that. For example, by creating a hashtag and prompting followers to tweet their answers to some question, they can quickly gather a large number of answers while simultaneously engaging their customers. Additionally, consumers share their opinions about services and products in public and within their social circles. This valuable data can be used to support business decisions. However, it is a huge amount of unstructured data from which it is difficult to extract meaningful information. Social media analytics is the field that derives insights from social media data and analyzes its sentiment rather than just reading and counting text. In this article, we use Twitter data to gain insight from the public opinion hidden in the data. The support vector machine algorithm is used to classify the sentiment of tweets as positive or negative, and unigrams are applied as the feature extraction method. The experiments were conducted using a large training dataset, and the algorithm achieved a high accuracy of around 87%.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_16-Customer_Satisfaction_Measurement_using_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Transportation System (ITS) for Smart-Cities using Mamdani Fuzzy Inference System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090215</link>
        <id>10.14569/IJACSA.2018.090215</id>
        <doi>10.14569/IJACSA.2018.090215</doi>
        <lastModDate>2018-02-28T12:22:48.8830000+00:00</lastModDate>
        
        <creator>Kashif Iqbal</creator>
        
        <creator>Muhammad Adnan Khan</creator>
        
        <creator>Sagheer Abbas</creator>
        
        <creator>Zahid Hasan</creator>
        
        <creator>Areej Fatima</creator>
        
        <subject>Information Communication Technology (ICT); Internet of Things (IoT); Intelligent Transportation System (ITS); Traffic Congestion Conditions (TCC); SNA; MF; Mamdani Fuzzy Inference System (MFIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>According to UN forecasts (2014), it is estimated that more than half of the world’s population lives in cities, so cities are vital. Cities, as we all know, face complex challenges: for smart cities, outdated traditional approaches to transportation planning, environmental contamination, finance management and security observation are not adequate. The developing framework for a smart city requires sound infrastructure and the adoption of the latest technology. Modern cities face pressures associated with urbanization and globalization to improve the quality of life of their citizens. A framework model that enables the integration of cloud data, social network (SN) services and smart sensors in the context of smart cities is proposed. This service-oriented framework enables the retrieval and analysis of big data sets stemming from social networking (SN) sites and integrated smart sensors collecting data streams for smart cities. Although the smart city is a broad concept, this article focuses on the transportation sector. Fuzzification is shown to be a capable mathematical approach for modelling traffic and transportation processes, and a detailed analysis of fuzzy logic systems is developed to solve various traffic and transportation problems. This paper presents an analysis of the results achieved using the Mamdani Fuzzy Inference System to model complex traffic processes. These results are verified using MATLAB simulation.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_15-Intelligent_Transportation_System_ITS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection Capability and CFAR Loss Under Fluctuating Targets of Different Swerling Model for Various Gamma Parameters in RADAR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090214</link>
        <id>10.14569/IJACSA.2018.090214</id>
        <doi>10.14569/IJACSA.2018.090214</doi>
        <lastModDate>2018-02-28T12:22:48.7730000+00:00</lastModDate>
        
        <creator>Md. Maynul Islam</creator>
        
        <creator>Mohammed Hossam-E-Haider</creator>
        
        <subject>Swerling model; Constant False Alarm Rate (CFAR) loss; false alarm; gamma function; probability of detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>RADAR target detection has dealt with manifold problems over the past few decades. Detection capability is one of the most significant factors in a RADAR system. The main aim of detection is to increase the probability of detection while decreasing the rate of false alarms. The detection threshold is modified as a function of the receiver noise level to keep a fixed false alarm rate. Constant False Alarm Rate (CFAR) processors are used to keep the amount of false alarms under supervision in a diverse background of interference. A loss in the Signal to Noise Ratio (SNR) level can occur due to the CFAR processor. The gamma function is used to determine the probability of false alarm. Adaptive CFAR assumes that the interference distribution is known; this type of CFAR also approximates the unknown parameters connected with various interference distributions. The CFAR loss depends on the gamma function, and the incomplete gamma function plays an important role in maintaining the threshold voltage as well as the probability of detection. Changing the value of the gamma function can improve the probability of detection for the various Swerling models, as proposed here. This paper also proposes a technique to compare various CFAR losses in terms of different gamma functions in the presence of different numbers of pulses for four Swerling models.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_14-Detection_Capability_and_CFAR_Loss.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Long-Term Weather Elements Prediction in Jordan using Adaptive Neuro-Fuzzy Inference System (ANFIS) with GIS Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090213</link>
        <id>10.14569/IJACSA.2018.090213</id>
        <doi>10.14569/IJACSA.2018.090213</doi>
        <lastModDate>2018-02-28T12:22:48.7430000+00:00</lastModDate>
        
        <creator>Omar Suleiman Arabeyyat</creator>
        
        <subject>Rainfall prediction; hybrid intelligent system; Adaptive Neuro-Fuzzy Inference System (ANFIS); GIS; time series prediction; long-term weather forecasting; climate change</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Weather elements are the most important parameters in meteorological and hydrological studies, especially in semi-arid regions like Jordan. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is used here to predict the minimum and maximum temperature and the rainfall for the next 10 years using 30 years of time series data for the period from 1985 to 2015. Several models were used based on different membership functions, different methods of optimization, and different dataset ratios for training and testing. By combining a neural network with a fuzzy system, the result is a hybrid neuro-fuzzy system, an approach that is good enough to simulate and predict rainfall events from long-term meteorological data. In this study, the correlation coefficient and the mean square error were used to test the performance of the model. ANFIS has successfully been used here to predict the minimum and maximum temperature and the rainfall for the coming 10 years, and the results show a consistent pattern compared to previous studies. The results showed a decrease in the annual average rainfall amounts in the next 10 years. The minimum average annual temperature showed the disappearance of a certain zone predicted by ANFIS when compared to actual data for the period 1985-2015, and the same behavior was noticed for the average annual maximum temperature.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_13-Long_Term_Weather_Elements_Prediction_in_Jordan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Bug Prediction using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090212</link>
        <id>10.14569/IJACSA.2018.090212</id>
        <doi>10.14569/IJACSA.2018.090212</doi>
        <lastModDate>2018-02-28T12:22:48.7100000+00:00</lastModDate>
        
        <creator>Awni Hammouri</creator>
        
        <creator>Mustafa Hammad</creator>
        
        <creator>Mohammad Alnabhan</creator>
        
        <creator>Fatima Alsarayrah</creator>
        
        <subject>Software bug prediction; faults prediction; prediction model; machine learning; Na&#239;ve Bayes (NB); Decision Tree (DT); Artificial Neural Networks (ANNs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Software Bug Prediction (SBP) is an important issue in software development and maintenance processes, as it concerns the overall success of the software. Predicting software faults in an earlier phase improves software quality, reliability and efficiency and reduces software cost. However, developing a robust bug prediction model is a challenging task, and many techniques have been proposed in the literature. This paper presents a software bug prediction model based on machine learning (ML) algorithms. Three supervised ML algorithms have been used to predict future software faults based on historical data: Na&#239;ve Bayes (NB), Decision Tree (DT) and Artificial Neural Networks (ANNs). The evaluation process showed that ML algorithms can be used effectively with a high accuracy rate. Furthermore, a comparison measure is applied to compare the proposed prediction model with other approaches. The collected results showed that the ML approach has a better performance.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_12-Software_Bug_Prediction_using_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed an Adaptive Bitrate Algorithm based on Measuring Bandwidth and Video Buffer Occupancy for Providing Smoothly Video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090211</link>
        <id>10.14569/IJACSA.2018.090211</id>
        <doi>10.14569/IJACSA.2018.090211</doi>
        <lastModDate>2018-02-28T12:22:48.6800000+00:00</lastModDate>
        
        <creator>Saba Qasim Jabbar</creator>
        
        <creator>Dheyaa Jasim Kadhim</creator>
        
        <creator>Yu Li</creator>
        
        <subject>DASH; video streaming; video buffer; video adaptive bitrate algorithm; QoE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Dynamic adaptive streaming over HTTP (DASH) has become popular for disseminating video over the Internet, especially under the circumstances of time-varying networks, where providing smooth, high-quality video streaming is currently most challenging. In a DASH system, after completing the download of one segment, the player estimates the available network bandwidth by calculating the downloading throughput and then adapts the video bitrate level based on its estimations. However, the bandwidth estimated in the application layer is not accurate due to the appearance of off-intervals during the downloading process. To avoid unfairness in bandwidth estimation by the clients, this work proposes a logarithmic approach for the received network bandwidth, which increases or decreases this bandwidth logarithmically to converge to the fair-share (estimated) bandwidth. After obtaining the measured bandwidth, an adaptive bitrate algorithm is proposed that considers this measured bandwidth in addition to the video buffer occupancy. The video buffer model is associated with three thresholds (one for initial startup and two for operation). When the video buffer’s level stays between the two operating thresholds, the video bitrate is kept unchanged. Otherwise, when the buffer occupancy is too high or too low, an appropriate video bitrate is chosen to avoid buffer overflow/underflow. Simulation results show that the proposed scheme is able to converge the measured bandwidth to the fair-share bandwidth very quickly. Compared with a conventional scheme, the proposed scheme achieves the best performance in terms of efficiency, stability and fairness.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_11-Proposed_an_Adaptive_Bitrate_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Day-ahead Base, Intermediate, and Peak Load Forecasting using K-Means and Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090210</link>
        <id>10.14569/IJACSA.2018.090210</id>
        <doi>10.14569/IJACSA.2018.090210</doi>
        <lastModDate>2018-02-28T12:22:48.6330000+00:00</lastModDate>
        
        <creator>Lemuel Clark P. Velasco</creator>
        
        <creator>Noel R. Estoperez</creator>
        
        <creator>Renbert Jay R. Jayson</creator>
        
        <creator>Caezar Johnlery T. Sabijon</creator>
        
        <creator>Verlyn C. Sayles</creator>
        
        <subject>K-means clustering; artificial neural networks; base intermediate and peak load; day-ahead load forecasting </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>Industries depend heavily on the capacity and availability of electric power. A typical load curve has three parts, namely, base, intermediate, and peak load. Accurately predicting the three system loads in a power system will help power utilities ensure the availability of supply and avoid the risk of over- or under-utilization of generation, transmission, and distribution facilities. The goal of this research is to create a suitable model for day-ahead base, intermediate and peak load forecasting of the electric load data provided by a power utility company. This paper presents an approach to predicting the three system loads using K-means clustering and artificial neural networks (ANN). The power utility’s load data was clustered using K-means to determine the daily base, intermediate and peak loads, which were then fed into an ANN model that utilized the Quick Propagation training algorithm and a Gaussian activation function. It was found that the implemented ANN model generated 2.2%, 1.84%, and 1.4% as the lowest MAPE for base, intermediate, and peak loads, respectively, with the highest MAPE below the accepted standard error rate of 5%. The results of this study clearly suggest that with the proper method of data preparation, clustering, and model implementation, ANN can be a viable solution for forecasting the day-ahead base, intermediate, and peak load demand of a power utility.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_10-Day_Ahead_Base_Intermediate_and_Peak_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Norm’s Trust Model to Evaluate Norms Benefit Awareness for Norm Adoption in an Open Agent Community</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090209</link>
        <id>10.14569/IJACSA.2018.090209</id>
        <doi>10.14569/IJACSA.2018.090209</doi>
        <lastModDate>2018-02-28T12:22:48.6030000+00:00</lastModDate>
        
        <creator>Al-Mutazbellah Khamees Itaiwi</creator>
        
        <creator>Mohd Sharifuddin Ahmad</creator>
        
        <creator>Alicia Y. C. Tang</creator>
        
        <subject>Norm’s benefits; norm’s trust; norm detection; normative multi-agent systems; intelligent software agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In recent developments, norms have become important entities that are considered in agent-based systems’ designs. Norms are not only able to organize and coordinate the actions and behaviour of agents but also have a direct impact on the achievement of agents’ goals. Consequently, an agent in a multi-agent system requires a mechanism that detects specific norms for adoption while rejecting others. The impact of such norm selection imposes risks on the agent’s goal and its plan, ensuing from the probability of positive or negative outcomes when the agent adopts or rejects some norms. In an earlier work, this predicament was resolved by enabling an agent to evaluate a norm’s benefits if it decides to adopt a particular norm. The evaluation mechanism entails a framework that analyzes a norm’s adoption ratio, yield, morality and trust, the unified values of which indicate the norm’s benefits. In this paper, the trust parameter of the mechanism is analyzed, and a norm’s trust model is proposed and utilized in the evaluation of a norm’s benefits for subsequent adoption or rejection. Ultimately, the norm’s benefits are determined as a consequence of a favorable or unfavorable trust value, which is a significant parameter in a norm’s adoption or rejection.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_9-Norms_Trust_Model_to_Evaluate_Norms_Benefit_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Climate Crashes using Fuzzy Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090208</link>
        <id>10.14569/IJACSA.2018.090208</id>
        <doi>10.14569/IJACSA.2018.090208</doi>
        <lastModDate>2018-02-28T12:22:48.5700000+00:00</lastModDate>
        
        <creator>Rahib H. Abiyev</creator>
        
        <creator>Mohammed Azad Omar</creator>
        
        <creator>Boran Sekeroglu</creator>
        
        <subject>Climate crashes; fuzzy neural networks; parallel ocean program; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In this paper, the detection of climate crashes or failures associated with the use of climate models, based on parameters induced from the climate simulation, is considered. Detection and analysis of the crashes allows one to understand and improve the climate models. A fuzzy neural network (FNN) based on Takagi-Sugeno-Kang (TSK) type fuzzy rules is presented to determine the chances of failure of the climate models. For this purpose, the parameters characterising the climate crashes in the simulation are used. For comparative analysis, a Support Vector Machine (SVM) is applied to the simulation of the same problem. As a result of the comparison, accuracy rates of 94.4% and 97.96% were obtained for the SVM and FNN models, respectively. The FNN model was found to perform better in modelling climate crashes.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_8-Detection_of_Climate_Crashes_using_Fuzzy_Neural.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Usability Aspects between an Existing Hospital Website of Pakistan with a Template based on Usability Standards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090207</link>
        <id>10.14569/IJACSA.2018.090207</id>
        <doi>10.14569/IJACSA.2018.090207</doi>
        <lastModDate>2018-02-28T12:22:48.5400000+00:00</lastModDate>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Mahmood Ashraf</creator>
        
        <creator>Muhammad Tahir</creator>
        
        <subject>Usability evaluation; healthcare website; hospital website evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>More and more people search the internet for medical and health information. Due to the increase in demand for online health services, hospitals need to equip their websites to meet usability standards. Hospital websites should be user-centered in order to increase usability. In the present research study, an existing public sector hospital website is compared with a designed template for a healthcare website. The template was designed keeping in view user demands for hospital websites. Usability evaluation of both websites has been performed. Twenty-one users were involved in the research study. Three representative tasks were performed by each user on each website, and a questionnaire was presented afterwards to collect user opinion about the websites under evaluation. An average score was calculated for both websites for each usability component. 75% of users responded positively to the designed website template, compared with only 33% positive responses for the existing hospital website. Hence, it was evident that the designed template had a better response for usability. The findings of this study support the literature in showing that user-centered design can significantly improve the usability of websites. This study is a step towards research that intends to understand usability problems and propose design rules for designing hospital websites of Pakistan in line with usability standards.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_7-A_Comparison_of_Usability_Aspects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritizing Road Maintenance Activities using GIS Platform and Vb.net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090206</link>
        <id>10.14569/IJACSA.2018.090206</id>
        <doi>10.14569/IJACSA.2018.090206</doi>
        <lastModDate>2018-02-28T12:22:48.5230000+00:00</lastModDate>
        
        <creator>Fardeen Nodrat</creator>
        
        <creator>Dongshik Kang</creator>
        
        <subject>Road maintenance; prioritization; GIS; Vb.net; Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>One of the most important factors for the sustainable development of any country is the quality and efficiency of its transportation system. The principled and accurate maintenance of roads, in addition to having a major impact on budget savings, improves the quality and service levels of the transportation system. For this reason, road management and maintenance are the main pillars of the transportation system in any country. Nowadays, due to the increased cost of maintaining roads and the lack of funding in this area, traditional ways of managing and maintaining roads, which are based more on the experience of the experts themselves, are no longer affordable. Hence, more recent and more systematic methods have become more popular among the relevant authorities. Afghanistan is a country facing problems such as budget deficits and a lack of professional experts and advanced technology in the road maintenance sector. This paper presents an example of using the GIS platform and vb.net to prioritize road maintenance and rehabilitation activities based on identified criteria. A case study was conducted in an academic environment, and road maintenance and rehabilitation activities were prioritized. The results show that the positive criterion has the greatest impact on the ranking of road maintenance activities. The characteristic of this process is to help decision makers plan road maintenance requirements to effectively and efficiently allocate funds for future planning.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_6-Prioritizing_Road_Maintenance_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Mobile Application for Travelers to Transport Baggage and Handle Check-in Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090205</link>
        <id>10.14569/IJACSA.2018.090205</id>
        <doi>10.14569/IJACSA.2018.090205</doi>
        <lastModDate>2018-02-28T12:22:48.4930000+00:00</lastModDate>
        
        <creator>Sara Y. Ahmed</creator>
        
        <subject>Baggage handling system; tracking technology; baggage barcode; android platform; Android software development kit (SDK); phpMyAdmin, Secure Hash Algorithm (SHA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>In this paper, an Android-based application called ‘Baggage Check-in Handling System’ is developed for helping travelers/passengers transport their baggage to the airport and handle the check-in process. It merges the ideas of online baggage check-in and tracking technology. The application is inspired by the rapid growth of on-demand ride services, such as UberX and Lyft, and the widespread adoption of smartphones. The proposed system enables travelers to make an appointment before the flight’s take-off by requesting a driver to pick up the traveler’s baggage and transport it to the airport. Then, travelers can track the driver’s location using the Global Positioning System (GPS). Eventually, after the check-in process, the driver will send travelers a unique barcode provided for the baggage through the application. As a result, the traveler will have the choice of proceeding directly to the flight gate. The application is created for the Android platform operating system and developed in the Java programming language using the Android software development kit (SDK). Additionally, data between the database and server have been exchanged using phpMyAdmin. The application uses an authentication technique called the Secure Hash Algorithm (SHA). This technique is designed to improve the scalability of authentication and reduce the overhead of access control.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_5-Design_of_Mobile_Application_for_Travelers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Game Theoretic Approach to Demand Side Management in Smart Grid with Multiple Energy Sources and Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090204</link>
        <id>10.14569/IJACSA.2018.090204</id>
        <doi>10.14569/IJACSA.2018.090204</doi>
        <lastModDate>2018-02-28T12:22:48.4470000+00:00</lastModDate>
        
        <creator>Aritra Kumar Lahiri</creator>
        
        <creator>Ashwin Vasani</creator>
        
        <creator>Sumanth Kulkarni</creator>
        
        <creator>Nishant Rawat</creator>
        
        <subject>Demand side management; utility; smart grid; solar; battery; energy provider; fairness; proportional division; utilitarian division; Nash equilibrium</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>A smart grid is an advancement in electrical grid which includes a variety of operational and energy measures. To utilize energy distribution in an efficient manner, demand side management has become the fore-runner. In our research paper, we use game theory as a tool to model our system as a Stackelberg game. We make use of different energy sources like solar energy, energy from battery and energy from the provider to run appliances of a subscriber. We consider the scenario where the subscriber can give excess energy that is generated, back to the grid, thereby reducing the load on the grid during peak hours. We design a pricing scheme to calculate the utilities of the subscriber and the provider and show how our model maximizes the utility of the entire system, thereby showing the existence of a Nash equilibrium.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_4-A_Game_Theoretic_Approach_to_Demand_Side_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Computation for Robotic Arm Motion upon a Linear or Circular Trajectory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090203</link>
        <id>10.14569/IJACSA.2018.090203</id>
        <doi>10.14569/IJACSA.2018.090203</doi>
        <lastModDate>2018-02-28T12:22:48.4130000+00:00</lastModDate>
        
        <creator>Liliana Marilena Matica</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Helga Maria Silaghi</creator>
        
        <creator>Simona Veronica Abrudan Cacioara</creator>
        
        <subject>Waypoints; location matrix; position vector; orientation of a robotic arm; orientation versors; Analysis Difference Numeric Interpolate Algorithm (ADNIA); linear or circular ADNIA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>The computation method proposed in this paper, named ADNIA (Analysis Differential Numeric Interpolate Algorithms), computes the Cartesian coordinates of waypoints for the TCP (tool centre point) of a robotic arm, for motion on an imposed linear or circular trajectory. At every sampling period, considering the real-time software implementation of ADNIA, the location matrix of the robotic arm is computed. This computation method works with a well-defined value of the motion speed; it results in maximum computation precision (for those motions).</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_3-Real_Time_Computation_for_Robotic_Arm_Motion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Serious Game for Improving Inferencing in the Presence of Foreign Language Unknown Words</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090202</link>
        <id>10.14569/IJACSA.2018.090202</id>
        <doi>10.14569/IJACSA.2018.090202</doi>
        <lastModDate>2018-02-28T12:22:48.3830000+00:00</lastModDate>
        
        <creator>Pedro Gabriel Fonteles Furtado</creator>
        
        <creator>Tsukasa Hirashima</creator>
        
        <creator>Hayashi Yusuke</creator>
        
        <subject>Serious game; foreign language; contextual inference; unknown words</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>This study presents the design of a serious game for improving inferencing for foreign language students. The design of the game is grounded in research on reading theory, motivation and game design. The game contains trial-and-error activities in which students create conversations and then watch these conversations play out. Making mistakes results in students receiving feedback and being requested to try again. An evaluation of the system was also conducted, in which participants used both simple text and the game. Post-test scores for using the game were significantly higher than scores when reading the text. User reception to the system was also positive. These results suggest that serious games can be effective for enhancing inferencing when foreign language students face unknown words. Implications for reading comprehension and for incidental vocabulary learning are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_2-A_Serious_Game_for_Improving_Inferencing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Time Warping and FFT: A Data Preprocessing Method for Electrical Load Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090201</link>
        <id>10.14569/IJACSA.2018.090201</id>
        <doi>10.14569/IJACSA.2018.090201</doi>
        <lastModDate>2018-02-28T12:22:48.3200000+00:00</lastModDate>
        
        <creator>Juan Huo</creator>
        
        <subject>Load forecast; Dynamic Time Warping (DTW); Fast Fourier Transform (FFT); random forest; Support Vector Machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(2), 2018</description>
        <description>For power suppliers, an important task is to accurately predict the short-term load. Thus, many papers have introduced different kinds of artificial intelligence models to improve the prediction accuracy. In recent years, Random Forest Regression (RFR) and Support Vector Machine (SVM) have been widely used for this purpose. However, they cannot perform well when the sample data set is too noisy or has too few pattern features. It is usually difficult to tell whether a regression algorithm can accurately predict the future load from the historical data set before trials. Here we demonstrate a method which estimates the similarity between time series by Dynamic Time Warping (DTW) combined with the Fast Fourier Transform (FFT). Results show this is a simple and fast method to filter the raw large electrical load data set and improve the learning result before looping through all learning processes.</description>
        <description>http://thesai.org/Downloads/Volume9No2/Paper_1-Dynamic_Time_Warping_and_FFT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OpenMP Implementation in the Characterization of a Urban Growth Model Cellular Automaton</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090179</link>
        <id>10.14569/IJACSA.2018.090179</id>
        <doi>10.14569/IJACSA.2018.090179</doi>
        <lastModDate>2018-01-31T14:55:35.7470000+00:00</lastModDate>
        
        <creator>Alvaro Peraza Garz&#243;n</creator>
        
        <creator>Ren&#233; Rodr&#237;guez Zamora</creator>
        
        <creator>Wenseslao Plata Rocha</creator>
        
        <subject>Cellular automata; parallel programming; simulation models; OpenMP; urban growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This paper presents the implementation of a parallelization strategy using the OpenMP library in developing a simulation tool based on a cellular automaton (CA) to run urban growth simulations. The characterization of an urban growth model CA is shown; it consists of a digitization process of the land use in order to obtain all the necessary elements for the CA to work. During the first simulation tests, we noticed high processing times due to the large quantity of calculations needed to perform a single simulation; to minimize this, we implemented a parallelization strategy using the fork-join model to optimize the use of the available hardware. The results obtained show a significant improvement in execution times as a function of the number of available cores and map sizes. As future work, it is planned to implement artificial neural networks in order to generate more complex urban growth scenarios.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_79-OpenMP_Implementation_in_the_Characterization_of_a-Urban_Growth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reverse Engineering State and Strategy Design Patterns using Static Code Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090178</link>
        <id>10.14569/IJACSA.2018.090178</id>
        <doi>10.14569/IJACSA.2018.090178</doi>
        <lastModDate>2018-01-31T14:55:35.7300000+00:00</lastModDate>
        
        <creator>Khaled Abdelsalam Mohamed</creator>
        
        <creator>Amr Kamel</creator>
        
        <subject>Reverse engineering; source code analysis; design patterns; static analysis; graph matching; Gremlin; Joern; Neo4j</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This paper presents an approach to detect behavioral design patterns from source code using static analysis techniques. It depends on the concept of a Code Property Graph, enriching the graph with relationships and properties specific to design patterns to simplify the process of design pattern detection. The approach uses a NoSQL graph database (Neo4j) and a graph traversal language (Gremlin) for graph matching. Our approach converts the task of design pattern detection into a graph matching task by representing design patterns in the form of graph queries and running them on the graph database.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_78-Reverse_Engineering_State_and_Strategy_Design_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent based Architecture for Modeling and Analysis of Self Adaptive Systems using Formal Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090177</link>
        <id>10.14569/IJACSA.2018.090177</id>
        <doi>10.14569/IJACSA.2018.090177</doi>
        <lastModDate>2018-01-31T14:55:35.7170000+00:00</lastModDate>
        
        <creator>Natash Ali Mian</creator>
        
        <creator>Farooq Ahmad</creator>
        
        <subject>Formal methods; self-adaptive systems; agent based modeling; feedback loop; Petri nets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Self-adaptive systems (SAS) can modify their behavior during execution; this modification is done because of a change in the internal or external environment. The need for self-adaptive software systems has increased tremendously in the last decade due to ever-changing user requirements, improvements in technology and the need for building software that reacts to user preferences. To build this type of software, we need well-established models that have the flexibility to adjust to new requirements and ensure that the adaptation is efficient and reliable. Feedback loops have proven to be very effective in modeling and developing SAS; these loops help the system to sense, analyze, plan, test and execute the adaptive behavior at runtime. Formal methods are well-defined, rigorous and reliable mathematical techniques that can be effectively used to reason about and specify the behavior of SAS at design and run-time. Agents can play an important role in modeling SAS because they can work independently, with other agents and with the environment as well. Using agents to perform individual steps in the feedback loop and formalizing these agents using Petri nets will not only increase reliability but also allow the adaptation to be performed efficiently for taking decisions at run time with increased confidence. In this paper, we propose a multi-agent framework to model self-adaptive systems using agent-based modeling. This framework will help researchers in the implementation of SAS that are more dependable, reliable, autonomic and flexible because of the use of a multi-agent-based formal approach.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_77-Agent_based_Architecture_for_Modeling_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Average Link Stability with Energy-Aware Routing Protocol for MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090176</link>
        <id>10.14569/IJACSA.2018.090176</id>
        <doi>10.14569/IJACSA.2018.090176</doi>
        <lastModDate>2018-01-31T14:55:35.7000000+00:00</lastModDate>
        
        <creator>Sofian Hamad</creator>
        
        <creator>Salem Belhaj</creator>
        
        <creator>Muhana M. Muslam</creator>
        
        <subject>Mobile Ad-hoc Network (MANET); routing protocol; energy aware; link life time; AODV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This paper suggests the A-LSEA (Average Link Stability and Energy Aware) routing protocol for Mobile Ad-hoc Networks (MANETs). The main idea behind this algorithm is that, on the one hand, a node must have enough Residual Energy (RE) before retransmitting the Route Request (RREQ) and declaring itself a participating node in the end-to-end path. On the other hand, the Link Life Time (LLT) between the sending node and the receiving node must be acceptable before transmitting the received RREQ. The combination of these two conditions provides more stability to the path and less frequent route breaks. The average results of the simulations collected from the suggested A-LSEA protocol showed a fairly significant improvement in the delivery ratio, exceeding 10%, and an increase in the network lifetime of approximately 20%, compared to other reactive routing protocols.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_76-Average_Link_Stability_with_Energy_Aware_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic based Approach for VoIP Quality Maintaining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090174</link>
        <id>10.14569/IJACSA.2018.090174</id>
        <doi>10.14569/IJACSA.2018.090174</doi>
        <lastModDate>2018-01-31T14:55:35.6830000+00:00</lastModDate>
        
        <creator>Mohamed E. A. Ebrahim</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>Voice over Internet Protocol (VoIP); Fuzzy model System (FMS); Fuzzy Token Bucket Algorithm (FTBA); Resource Reservation Protocol (RSVP); Quality of Service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Voice communication is an emerging technology and has great importance in our routine life. Perceptual Voice over Internet Protocol quality is an important issue for VoIP app services because VoIP apps require real-time support. Many network factors (packet loss, packet delay, and jitter) affect VoIP quality; to maintain this quality, we used an approach based on Fuzzy Logic. We configure a Resource Reservation Protocol application to control the Token Bucket Algorithm, and the simulation experiments are carried out with Opnet. In addition, we compare the Token Bucket with and without Quality of Service to measure the network factors. In this paper, the Fuzzy Token Bucket System is built from three variables (Bandwidth Rate, Buffer Size, and New Token) in order to improve the Token Bucket shaper output variable (New Token) through a fuzzy stability model for maintaining Voice over IP quality.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_74-Fuzzy_Logic_based_Approach_for_VOIP_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Modeling of a Procurement Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090175</link>
        <id>10.14569/IJACSA.2018.090175</id>
        <doi>10.14569/IJACSA.2018.090175</doi>
        <lastModDate>2018-01-31T14:55:35.6830000+00:00</lastModDate>
        
        <creator>Sabah Al Fedaghi</creator>
        
        <creator>Mona Al-Otaibi</creator>
        
        <subject>Procurement; RFP; public key infrastructure; conceptual modeling; diagrammatic representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Procurement refers to a process resulting in delivery of goods or services within a set time period. The process includes aspects of purchasing, specifications to be met, and solicitation notifications as in the case of Requests For Proposals (RFPs). Typically, such an RFP is described in a verbal ad hoc fashion, in English, with tables and graphs, resulting in imprecise specifications of requirements. It has been proposed that BPMN diagrams be used to specify requirements to be included in an RFP. This paper is a merger of three topics: 1) Procurement development with a focus on the operational specification of RFPs; 2) Public key infrastructure (PKI) as an RFP subject; and 3) Conceptual modeling that produces a diagram as a supplement to an RFP to clarify requirements more precisely than traditional tools, such as natural language, tables, and ad hoc graphs.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_75-Conceptual_Modeling_of_a_Procurement_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining Techniques to Construct a Model: Cardiac Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090173</link>
        <id>10.14569/IJACSA.2018.090173</id>
        <doi>10.14569/IJACSA.2018.090173</doi>
        <lastModDate>2018-01-31T14:55:35.6700000+00:00</lastModDate>
        
        <creator>Noreen Akhtar</creator>
        
        <creator>Muhammad Ramzan Talib</creator>
        
        <creator>Nosheen Kanwal</creator>
        
        <subject>Knowledge Discovery in Database (KDD); data mining; decision trees; neural networks; Bayesian classifier; heart disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This study applies data mining techniques to a Transthoracic Echocardiography report data set to build a prediction model for heart disease, improving the reliability of echocardiographic analysis of cardiac diseases. The eight iterative and interactive steps of the Knowledge Discovery in Databases (KDD) methodology were used to extract the important data from the Transthoracic Echocardiography inspection reports of 209 patients, collected at the Faisalabad Institute of Cardiology from 2012 to 2015. The J48 decision tree, naïve Bayes classifier, and neural network models all showed extraordinary classification precision, and their predictions of heart disease cases are generally comparable; however, the J48 model performed slightly better, with a predictive classification accuracy of 80% based on the true positive rate. This study shows how to predict heart disease cases, and people can use its results to make more consistent diagnoses of cardiac disease and as a support tool for cardiac disease specialists.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_73-Data_Mining_Techniques_to_Construct_A_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Synchronization Model for Heterogeneous Mobile Databases and Server-side Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090172</link>
        <id>10.14569/IJACSA.2018.090172</id>
        <doi>10.14569/IJACSA.2018.090172</doi>
        <lastModDate>2018-01-31T14:55:35.6530000+00:00</lastModDate>
        
        <creator>Abdullahi Abubakar Imam</creator>
        
        <creator>Shuib Basri</creator>
        
        <creator>Rohiza Ahmad</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <subject>Heterogeneous databases; data synchronization; mobile databases; mobile devices; NoSQL database; relational databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Mobile devices, because they can be used to access corporate information anytime and anywhere, have recently received considerable attention, and several research efforts have been tailored towards addressing data synchronization problems. However, the existing solutions are either vendor-specific or homogeneous in nature. This paper proposes the Heterogeneous Mobile Database Synchronization Model (HMDSM) to enable all mobile databases (regardless of their individual differences) to participate in any data synchronization process. To accomplish this, an experimental approach (exploratory and confirmatory) was employed, and existing models and algorithms were classified, extended, and applied. All database-specific information, such as triggers, timestamps and meta-data, is eliminated, and a listener is added to listen for any operation performed on either side. To prove its performance, the proposed model underwent rigorous experimentation and testing, and a chi-square (X2) test was used to analyze the data generated from the experiments. Results show the feasibility of an approach that can handle database synchronization between heterogeneous mobile databases and the server. The proposed model not only proves its generic nature across all mobile databases but also reduces the use of mobile resources; it is thus suitable for mobile devices with low computing power that must proficiently process large amounts of data.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_72-Data_Synchronization_Model_for_Heterogeneous_Mobile_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TSAN: Backbone Network Architecture for Smart Grid of P.R China</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090171</link>
        <id>10.14569/IJACSA.2018.090171</id>
        <doi>10.14569/IJACSA.2018.090171</doi>
        <lastModDate>2018-01-31T14:55:35.6370000+00:00</lastModDate>
        
        <creator>Raheel Ahmed Memon</creator>
        
        <creator>Jianping Li</creator>
        
        <creator>Anwar Ahmed Memon</creator>
        
        <creator>Junaid Ahmed</creator>
        
        <creator>Muhammad Irshad Nazeer</creator>
        
        <creator>Muhammad Ismail</creator>
        
        <subject>Smart Grid; TSAN; 1000BASE-ZX ethernet; backbone architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The network architecture of any real-time system must be robust enough to absorb several network failures and still work smoothly. The Smart Grid network is one of those big networks that should be considered and designed carefully because of its dependencies. Several hybrid approaches have been proposed using wireless and wired technologies involving SDH/SONET as a backbone network, but all of these technologies have their own limitations and cannot be fully utilized due to various factors. In this paper, we propose a fiber-optic-based Gigabit Ethernet (1000BASE-ZX) network named Territory Substation Area Network (T-SAN) for the smart grid backbone architecture. It is a scalable architecture with several desired features, such as higher coverage, fault tolerance, robustness, reliability, and maximum availability. A use case mapping the T-SAN onto the map of the People's Republic of China proves its strength as a backhaul network for any territory or country, and the results of the implemented architecture and its protocol for fault detection and recovery reveal the system's ability to survive several random, multiple and simultaneous faults efficiently.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_71-TSAN_Backbone_Network_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Quality of E-Learning and Desaire2Learn in the College of Science and Humanities at Alghat, Majmaah University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090170</link>
        <id>10.14569/IJACSA.2018.090170</id>
        <doi>10.14569/IJACSA.2018.090170</doi>
        <lastModDate>2018-01-31T14:55:35.6230000+00:00</lastModDate>
        
        <creator>Abdelmoneim Ali Mohamed</creator>
        
        <creator>Faisal Mohammed Nafie</creator>
        
        <subject>E-learning; Desire2Learn; D2L quality; E-learning quality; learning satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>E-learning and the Desire2Learn (D2L) system are used in several higher education institutions; learning satisfaction depends on the quality of the system applied to serve this purpose and its perceived importance among users. Therefore, this study intended to explore the degree of satisfaction of students and faculty members with the importance and quality of the e-learning used and the D2L system as a tool for learning some courses. A sample of 57 faculty members and 135 students participated in this study. We used two questionnaires as tools to collect data from participants, one for faculty members and the other for students; both questionnaires had the same idea with different questions. We used the Statistical Package for the Social Sciences (SPSS) to analyze the data. The results show that faculty members' satisfaction with the quality of e-learning and the D2L system as a method of teaching is high, with moderate satisfaction with using D2L tools, and that there is a positive relationship between e-learning quality and using D2L tools in teaching. The results also record high satisfaction from students towards the quality of e-learning and the D2L system as a method of learning, and show no statistically significant effect of gender on D2L system quality. Finally, the study discusses the implications and recommendations of the work.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_70-Measuring_Quality_of_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber-Security Incidents: A Review Cases in Cyber-Physical Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090169</link>
        <id>10.14569/IJACSA.2018.090169</id>
        <doi>10.14569/IJACSA.2018.090169</doi>
        <lastModDate>2018-01-31T14:55:35.6070000+00:00</lastModDate>
        
        <creator>Mohammed Nasser Al-Mhiqani</creator>
        
        <creator>Rabiah Ahmad</creator>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Aslinda Hassan</creator>
        
        <creator>Zaheera Zainal Abidin</creator>
        
        <creator>Nabeel Salih Ali</creator>
        
        <creator>Karrar Hameed Abdulkareem</creator>
        
        <subject>Cyber-Physical Systems; threats; incidents; security; cybersecurity; taxonomies; matrix; threats analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Cyber-Physical Systems refer to systems in which computers, communication channels and physical devices interact to solve a real-world problem. Amid the Industry 4.0 revolution, Cyber-Physical Systems have become one of the main targets of hackers, and any damage to them leads to high losses for a nation. According to valid resources, several reported cases involved security breaches of Cyber-Physical Systems. The fundamental and theoretical concepts of security in the digital world have been discussed worldwide; yet security cases regarding cyber-physical systems remain less explored. In addition, limited tools have been introduced to overcome security problems in Cyber-Physical Systems. To improve understanding and introduce more security solutions for cyber-physical systems, study of this matter is in high demand. In this paper, we investigate the current threats to Cyber-Physical Systems, propose a classification and matrix for these threats, and conduct a simple statistical analysis of the collected data using a quantitative approach. We confirmed four components (type of attack, impact, intention and incident category) as the main contributors to the threat taxonomy of Cyber-Physical Systems.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_69-Cyber_Security_Incidents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Technology for Predicting Solar Flares from (Geostationary Operational Environmental Satellite) Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090168</link>
        <id>10.14569/IJACSA.2018.090168</id>
        <doi>10.14569/IJACSA.2018.090168</doi>
        <lastModDate>2018-01-31T14:55:35.5900000+00:00</lastModDate>
        
        <creator>Tarek A M Hamad Nagem</creator>
        
        <creator>Rami Qahwaji</creator>
        
        <creator>Stan Ipson</creator>
        
        <creator>Zhiguang Wang</creator>
        
        <creator>Alaa S. Al-Waisy</creator>
        
        <subject>Convolutional neural network; deep learning; solar flare prediction; space weather</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Solar activity, particularly solar flares, can have significant detrimental effects on both space-borne and ground-based systems and industries, leading to subsequent impacts on our lives. As a consequence, there is much current interest in creating systems which can make accurate solar flare predictions. This paper aims to develop a novel framework to predict solar flares by making use of the Geostationary Operational Environmental Satellite (GOES) X-ray flux 1-minute time series data. This data is fed to three integrated neural networks to deliver these predictions. The first neural network (NN) is used to convert GOES X-ray flux 1-minute data to Markov Transition Field (MTF) images. The second neural network uses an unsupervised feature learning algorithm to learn the MTF image features. The third neural network uses both the learned features and the MTF images, which are then processed using a Deep Convolutional Neural Network to generate the flare predictions. To the best of our knowledge, this work is the first flare prediction system that is based entirely on the analysis of pre-flare GOES X-ray flux data. The results are evaluated using several performance measurement criteria that are presented in this paper.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_68-Deep_Learning_Technology_to_Develop_a_Computer_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Experimentation and Analysis of Wifi Spectrum Utilization in Microwave Oven Noisy Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090167</link>
        <id>10.14569/IJACSA.2018.090167</id>
        <doi>10.14569/IJACSA.2018.090167</doi>
        <lastModDate>2018-01-31T14:55:35.5900000+00:00</lastModDate>
        
        <creator>Yakubu S. Baguda</creator>
        
        <subject>Electromagnetic radiation; microwave oven; spectrum utilization; bandwidth; ISM band; signal strength; throughput; wireless channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The demand for broadband wireless communication in the home and office has been increasing exponentially; thus, the need for reliable and effective communication is very crucial. Both theoretical and experimental investigations have clearly shown that electromagnetic radiation from external sources such as a microwave oven (MWO) has a detrimental impact on the wireless medium and the media content: it drastically degrades the signal strength of the wireless link and consequently affects the overall throughput due to noise and interference. This experimental study is primarily aimed at critically analyzing and evaluating the impact of electromagnetic radiation on spectrum utilization under different experimental scenarios. The experimental results clearly show that electromagnetic noise radiation from a microwave oven can seriously affect the performance of other devices operating in the 2.4 GHz frequency band, especially delay-sensitive applications and services.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_67-Real_Time_Experimentation_and_Analysis_of_WiFi_Spectrum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey Paper for Software Project Team, Staffing, Scheduling and Budgeting Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090166</link>
        <id>10.14569/IJACSA.2018.090166</id>
        <doi>10.14569/IJACSA.2018.090166</doi>
        <lastModDate>2018-01-31T14:55:35.5770000+00:00</lastModDate>
        
        <creator>Rizwan Akram</creator>
        
        <creator>Salman Ihsan</creator>
        
        <creator>Shaista Zafar</creator>
        
        <creator>Babar Hayat</creator>
        
        <subject>Software engineering; project management; software project resources; project scheduling; budgeting; team</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Software project scheduling is one of the most important scheduling areas faced by software project management teams. Software development companies are under substantial pressure to finish projects on time, within budget, and with the appropriate level of quality, yet they frequently struggle to deliver projects that meet all three constraints. An inexperienced development team and/or poor management can cause delays and costs that, given schedule and budget limitations, are often unacceptable, leading to business-critical failures. For a successful project, both software engineering and software management are very important; one possible cause of these failures is poor software project management and, in particular, inadequate project scheduling and insufficient team staffing. The software project scheduling problem is one of the essential and challenging problems encountered by project managers in highly competitive software companies. Since the problem becomes harder with increasing numbers of employees and tasks, only a few algorithms exist, and their performance in building a flexible and effective model for software project planning is still not satisfactory. In this paper we survey several techniques and strategies and explain the results they yield.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_66-Survey_Paper_for_Software_Project_Team.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SME Cloud Adoption in Botswana: Its Challenges and Successes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090165</link>
        <id>10.14569/IJACSA.2018.090165</id>
        <doi>10.14569/IJACSA.2018.090165</doi>
        <lastModDate>2018-01-31T14:55:35.5600000+00:00</lastModDate>
        
        <creator>Malebogo Khanda</creator>
        
        <creator>Srinath Doss</creator>
        
        <subject>Cloud computing; SME’s; cloud computing services; cloud business processes; cloud computing framework; cloud deployment models; cloud computing services model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The standard office or business in Botswana hosts its resources in-house, meaning that a company has its own hardware, software and support staff as part of its daily operations. Technology has brought a shift to the office environment with cloud computing. Botswana has seen the growth of cloud technologies within its own boundaries, where companies have embraced the new technology to mobilize and push their operational agenda with the same tenacity as the rest of the world using the technology. Cloud computing has taken root in Botswana, and many SMEs are using it, whilst some remain non-adopters of the technology; Edgar Tsimane reported on the uptake of cloud computing in Botswana. Botswana uses its national ICT policy, the Maitlamo policy, to guide technological advances and development. This paper considers the aspects influencing a company's decision on utilizing the cloud as a service, covering both opportunities and challenges. Some of the questions the study addresses are: how effective is cloud computing for businesses in Botswana; what challenges and successes have these companies had; and is there any particular framework they followed to guide them in adopting the services? Finally, the paper recommends a framework that can be adopted within Botswana.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_65-SME_Cloud_Adoption_in_Botswana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Stroke using Data Mining Classification Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090163</link>
        <id>10.14569/IJACSA.2018.090163</id>
        <doi>10.14569/IJACSA.2018.090163</doi>
        <lastModDate>2018-01-31T14:55:35.5430000+00:00</lastModDate>
        
        <creator>Ohoud Almadani</creator>
        
        <creator>Riyad Alshammari</creator>
        
        <subject>Stroke; data mining; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Stroke is a neurological disease that occurs when brain cells die as a result of oxygen and nutrient deficiency. Stroke detection within the first few hours improves the chances of preventing complications and improving the health care and management of patients. In addition, the significant effect of medications used as treatment for stroke appears only if they are given within the first three hours from the onset of stroke. A framework has been designed based on data mining techniques applied to a stroke data set obtained from the Ministry of National Guard Health Affairs hospitals, Kingdom of Saudi Arabia. A data mining model was built with 95% accuracy. Furthermore, this study showed that patients with medical conditions such as heart diseases (mainly hypertension), immunity diseases, diabetes mellitus, kidney diseases, hyperlipidemia, epilepsy, or blood (platelet) disorders have a higher probability of developing stroke.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_63-Prediction_of_Stroke_using_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardware Implementation for the Echo Canceller System based Subband Technique using TMS320C6713 DSP Kit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090164</link>
        <id>10.14569/IJACSA.2018.090164</id>
        <doi>10.14569/IJACSA.2018.090164</doi>
        <lastModDate>2018-01-31T14:55:35.5430000+00:00</lastModDate>
        
        <creator>Mahmod. A. Al Zubaidy</creator>
        
        <creator>Sura Z. Thanoon</creator>
        
        <subject>Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The acoustic echo cancellation system is very important in the communication applications used these days; in view of this importance, we have implemented this system practically using the DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on an 8-subband technique using the Least Mean Square (LMS) algorithm and the Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring its performance according to the Echo Return Loss Enhancement (ERLE) factor and the Mean Square Error (MSE) factor.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_64-Hardware_Implementation_for_the_Echo_Canceller_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Group Decision-Making Method for Selecting Cloud Computing Service Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090162</link>
        <id>10.14569/IJACSA.2018.090162</id>
        <doi>10.14569/IJACSA.2018.090162</doi>
        <lastModDate>2018-01-31T14:55:35.5300000+00:00</lastModDate>
        
        <creator>Ibrahim M. Al-Jabri</creator>
        
        <creator>Mustafa I. Eid</creator>
        
        <creator>M. Sadiq Sohail</creator>
        
        <subject>Cloud computing; cloud service models; multi-criteria decision-making; group decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Cloud computing is a new technology that has great potential for the business world. Many business firms have implemented, are implementing, or are planning to implement cloud computing technology. Cloud computing resources are delivered in various forms of service models, which makes it challenging for business customers to select the model that suits their business needs. This paper proposes a novel group-based decision-making method in which a group of decision makers is involved in the decision process. Each decision maker provides weights for the cloud selection criteria; based on weight aggregations and deviations, the alternative with the highest ratio of deviation to mean is selected. The method is illustrated with an example on the selection of cloud service models. This method is useful for IT managers in selecting the appropriate cloud service model for their organizations.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_62-A_Group_Decision_Making_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Envisioning Internet of Things using Fog Computing </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090161</link>
        <id>10.14569/IJACSA.2018.090161</id>
        <doi>10.14569/IJACSA.2018.090161</doi>
        <lastModDate>2018-01-31T14:55:35.5130000+00:00</lastModDate>
        
        <creator>Urooj Yousuf Khan</creator>
        
        <creator>Tariq Rahim Soomro</creator>
        
        <subject>IoT; fog computing; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The Internet of Things is the future of the Internet, and it encircles a wide scope. There are currently billions of devices connected to the Internet, and this trend is expected to grow exponentially; Cisco predicts there are at present 20 billion connected devices. These devices, with their varied data types, transmission rates and communication protocols, connect to the Internet seamlessly. The futuristic implementation of the Internet of Things across various scenarios, ranging from RFID-connected devices to huge data centers, demands real-time performance delivery. To date, there is no single communication protocol available for envisioning IoT, and there is still no common, agreed-upon architecture; hence, huge challenges lie ahead. One way to envision the Internet of Things is to make use of fog networks. Fog is essentially a cloudlet located nearer to the ground; it offers lower latency and better bandwidth conservation. Fog, or fog computing, is a recent concept, and the OpenFog Consortium, a joint effort of many vendors, has as its latest work the background study for realizing Fog as a possible platform for activating the Internet of Things. This paper revolves around envisioning the Internet of Things using fog computing. It begins with a detailed background study of the Internet of Things and the Fog architecture, and covers applications and scenarios where such knowledge is highly applicable. The paper concludes by proposing fog computing as a possible platform for the Internet of Things.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_61-Envisioning_Internet_of_Things_using_Fog_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS-based Cloud Manufacturing Service Composition using Ant Colony Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090160</link>
        <id>10.14569/IJACSA.2018.090160</id>
        <doi>10.14569/IJACSA.2018.090160</doi>
        <lastModDate>2018-01-31T14:55:35.4970000+00:00</lastModDate>
        
        <creator>Elsoon Neshati</creator>
        
        <creator>Ali Asghar Pourhaji Kazem</creator>
        
        <subject>Cloud computing; cloud manufacturing; service composition; ant colony optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Cloud manufacturing (CMfg) is a service-oriented platform that enables engineers to use manufacturing capacity in the form of cloud-based services that are aggregated in service pools on demand. In CMfg, the integration of manufacturing resources across different areas and industries is accomplished using cloud services. In recent years, interest in cloud manufacturing service composition has grown, due to its importance in different manufacturing applications. When no single service is capable of satisfying the need of a manufacturing service requester, service combination may be useful in order to fulfill the requester's purpose. Therefore, the problem of how to interconnect cloud manufacturing services efficiently and effectively has attracted many research efforts. In this paper, a new algorithm using ant colony optimization is presented for the problem of cloud manufacturing service composition considering the quality of service.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_60-QoS_based_Cloud_Manufacturing_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combinatorial Double Auction Winner Determination in Cloud Computing using Hybrid Genetic and Simulated Annealing Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090159</link>
        <id>10.14569/IJACSA.2018.090159</id>
        <doi>10.14569/IJACSA.2018.090159</doi>
        <lastModDate>2018-01-31T14:55:35.4970000+00:00</lastModDate>
        
        <creator>Ali Sadigh Yengi Kand</creator>
        
        <creator>Ali Asghar Pourhaji Kazem</creator>
        
        <subject>Cloud computing; double auction; winner determination; genetic algorithm; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>With the advancement of information technology, there is a need to perform computing tasks everywhere and at all times. In cloud computing environments, heterogeneous users have access to different resources with different characteristics, and these resources are geographically distributed across different areas. Consequently, the allocation of resources in cloud computing becomes a main issue and is considered a major challenge in achieving high performance. Since cloud computing is by nature a distributed system, economic methods such as auctions are used to allocate resources in a decentralized manner. The combinatorial double auction is an important economic model well suited to resource allocation in cloud computing, as it also allows providers of cloud resources to offer their resources in combination. A central problem in combinatorial double auctions is allocating resources efficiently so that the benefit of the transacting parties is maximized; this is known as the winner determination problem. Because winner determination is NP-hard, several methods have been proposed to solve it. In this paper, taking into account the strength of the simulated annealing algorithm, a modified version of it is proposed for solving the winner determination problem in combinatorial double auctions in cloud computing. The proposed approach is simulated along with genetic and simulated annealing algorithms, and the results show that it finds better solutions than the two mentioned algorithms.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_59-Combinatorial_Double_Auction_Winner.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Evaluation of Error Correction Methods and Tools for Next Generation Sequencing Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090158</link>
        <id>10.14569/IJACSA.2018.090158</id>
        <doi>10.14569/IJACSA.2018.090158</doi>
        <lastModDate>2018-01-31T14:55:35.4830000+00:00</lastModDate>
        
        <creator>Atif Mehmood</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Muhammad Usman Ali</creator>
        
        <creator>Abbas Rehman</creator>
        
        <creator>Shahzad Ahmed</creator>
        
        <creator>Imran Ahmad</creator>
        
        <subject>Next generation sequencing; bioinformatics; errors; error correction; execution time; k-spectrum; suffix tree based; hybrid based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Next Generation Sequencing (NGS) technologies produce massive amounts of low-cost data that are very useful in genomic study and research. However, data produced by NGS are affected by different errors such as substitutions, deletions or insertions. It is essential to differentiate between true biological variants and alterations that occurred due to errors for accurate downstream analysis. Many methods and tools have been developed for NGS error correction. Some of these methods only correct substitution errors whereas others correct multiple types of data errors. In this article, a comprehensive evaluation of three types of methods (k-spectrum based, multi-sequence alignment based and hybrid based) is presented, as implemented and adopted by different tools. Experiments have been conducted to compare performance based on runtime and error correction rate. Two different computing platforms have been used for the experiments to evaluate the effect of the platform on runtime and error correction rate. The aim of this comparative evaluation is to provide recommendations for selecting suitable tools to cope with the specific needs of users and practitioners. It has been noticed that the k-mer spectrum based methodology generated superior results compared to the other methods. Among all the tools evaluated, Racer has shown eminent performance in terms of error correction rate and execution time for both small and large data sets. Among multi-sequence alignment based tools, Karect shows an excellent error correction rate whereas Coral shows better execution time for all data sets. Among hybrid based tools, Jabba shows a better error correction rate and execution time compared to Brownie. Computing platforms mostly affect execution time but have no general effect on error correction rate.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_58-An_Empirical_Evaluation_of_Error_Correction_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Raw Images and Meta Feature based Urdu OCR using CNN and LSTM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090157</link>
        <id>10.14569/IJACSA.2018.090157</id>
        <doi>10.14569/IJACSA.2018.090157</doi>
        <lastModDate>2018-01-31T14:55:35.4670000+00:00</lastModDate>
        
        <creator>Asma Naseer</creator>
        
        <creator>Kashif Zafar</creator>
        
        <subject>Long Short Term Memory (LSTM); Convolution Neural Network (CNN); OCR; scale invariance; deep learning; ligature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The Urdu language uses a cursive script, which results in connected characters constituting ligatures. For identifying characters within ligatures of different scales (font sizes), a Convolution Neural Network (CNN) and a Long Short Term Memory (LSTM) network are used. Both network models are trained on previously extracted ligature thickness graphs, from which the models extract meta features. These thickness graphs provide consistent information across different font sizes. The LSTM and CNN are also trained on raw images to compare performance on both forms of input. For this research, two corpora are used: Urdu Printed Text Images (UPTI) and Centre for Language Engineering (CLE) Text Images. Overall performance of the networks ranges between 90% and 99.8%. Average accuracy on meta features is 98.08%, while 97.07% average accuracy is achieved using raw images.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_57-Comparative_Analysis_of_Raw_Images_and_Meta_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lifetime Maximization on Scalable Stable Election Protocol for Large Scale Traffic Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090156</link>
        <id>10.14569/IJACSA.2018.090156</id>
        <doi>10.14569/IJACSA.2018.090156</doi>
        <lastModDate>2018-01-31T14:55:35.4500000+00:00</lastModDate>
        
        <creator>Muhammad Asad</creator>
        
        <creator>Arsalan Ali Shaikh</creator>
        
        <creator>Soomro Pir Dino</creator>
        
        <creator>Muhammad Aslam</creator>
        
        <creator>Yao Nianmin</creator>
        
        <subject>Wireless sensor networks (WSN); heterogeneous network; clustered routing protocol; traffic engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Recently, Wireless Sensor Networks (WSNs) have gained popularity because they are low cost and easy to manage and maintain. A WSN consists of sensor nodes and a Base Station (BS). Sensor nodes are responsible for sensing, transmitting and receiving data packets in the sensing field, and the BS is responsible for collecting this data and converting it into readable form. The main issue in such networks is the lack of power resources. As sensor nodes are restricted to limited energy, researchers always aim to produce energy-efficient clustered routing protocols, and heterogeneity of sensor nodes is one of the best possible solutions. The Stable Election Protocol (SEP) was the first protocol for heterogeneous networks and proposed two levels of heterogeneity. SEP not only improved the network lifetime but also improved the stability of sensor nodes. In order to maximize the network lifetime, we propose a scalable version of the SEP routing protocol (S-SEP) and examine its reliability in large-scale networks for traffic engineering. We compare the results of the standard SEP routing protocol against S-SEP with a fourth level of heterogeneity. Simulation results prove that the S-SEP protocol works better in larger networks.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_56-Lifetime_Maximization_on_Scalable_Stable_Election_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Energy-Efficient User-Centric Approach for High-Capacity 5G  Heterogeneous Cellular Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090155</link>
        <id>10.14569/IJACSA.2018.090155</id>
        <doi>10.14569/IJACSA.2018.090155</doi>
        <lastModDate>2018-01-31T14:55:35.4500000+00:00</lastModDate>
        
        <creator>Abdulziz M. Ghaleb</creator>
        
        <creator>Ali Mohammed Mansoor</creator>
        
        <creator>Rodina Ahmad</creator>
        
        <subject>Energy efficiency; HetNets; green networks; usercentric; network-centric; 5G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Today’s cellular networks (3G/4G) do not scale well in heterogeneous networks (HetNets) of multiple technologies that employ the network-centric (NC) model. This destabilization is due to the need for coordination and management of the multiple layers of the HetNet, which the NC model cannot provide. The user-centric (UC) approach is one of the key enablers of 5G wireless cellular networks for rapidly recovering from network failures and ensuring certain communication capabilities for users. In this paper, we present a resource-aware energy-saving technique based on the UC model for LTE-A HetNets. We formulate an optimization problem for the UC model as a mixed-integer linear program (MILP) that minimizes the total power consumption (energy efficiency) while respecting the data rate per user, and we propose a low-complexity iterative algorithm for user terminal (UE)-eNodeB association. In the UC model, a UE possessing terminal intelligence can establish transmission and reception with different cells within the LTE-A HetNet, assuming the existence of coordination between the different cells in the network. The performance is evaluated in terms of energy saving in the uplink and downlink and the capacity (data rate) added to the network. The evaluation is carried out by comparing a UC model against an NC model with the same simulation setup. The results show a significant percentage of energy saving at eNodeBs and UEs in the UC model. System capacity is also enhanced in the UC model in both the uplink and downlink due to utilizing the best channel gain for transmission and reception.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_55-An_Energy_Efficient_User_Centric_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Participant’s Selection Algorithm for Crowdsensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090154</link>
        <id>10.14569/IJACSA.2018.090154</id>
        <doi>10.14569/IJACSA.2018.090154</doi>
        <lastModDate>2018-01-31T14:55:35.4370000+00:00</lastModDate>
        
        <creator>Tariq Ali</creator>
        
        <creator>Umar Draz</creator>
        
        <creator>Sana Yasin</creator>
        
        <creator>Javeria Noureen</creator>
        
        <creator>Ahmad shaf</creator>
        
        <creator>Munwar Ali</creator>
        
        <subject>Mobile crowdsensing (MCS); Mobile Sensing Platform (MSP); crowd sensing; participant; user pool; crowdsourcing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>With the advancement of mobile technology, the use of smartphones has greatly increased; mobile phones have become a necessity of life. Today, smart devices flood the internet with data at all times and in many forms, which gives rise to mobile crowdsensing (MCS). One of the key challenges in a mobile crowdsensing system is how to effectively identify and select well-suited participants for recruitment from a large user pool. This research work presents the concept of crowdsensing along with the process of selecting participants from a large user pool. The proposed selection algorithm recruits participants from the large user pool according to their availability status. Finally, graphical results are presented showing the suitable locations of the participants and their time slots.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_54-An_Efficient_Participants_Selection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Model Predictive Control for pH Neutralization Process based on SOMA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090153</link>
        <id>10.14569/IJACSA.2018.090153</id>
        <doi>10.14569/IJACSA.2018.090153</doi>
        <lastModDate>2018-01-31T14:55:35.4200000+00:00</lastModDate>
        
        <creator>Hajer Degachi</creator>
        
        <creator>Wassila Chagra</creator>
        
        <creator>Moufida Ksouri</creator>
        
        <subject>Nonlinear model predictive control; optimization; SOMA algorithm; adaptive PID; pH neutralization process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>In this work, the pH neutralization process is described by a neural network Wiener (NNW) model, and a Nonlinear Model Predictive Control (NMPC) scheme is established for the considered process. The main difficulty encountered in NMPC is solving the optimization problem at each sampling time to determine an optimal solution in finite time. The aim of this paper is the use of a global optimization method to solve the NMPC minimization problem. We therefore propose to use the Self Organizing Migrating Algorithm (SOMA) to solve the presented optimization problem. This algorithm proves its efficiency in determining the optimal control sequence with a lower computation time. The NMPC is then compared to an adaptive PID controller, where we propose to use the SOMA algorithm to determine the optimal parameters of the PID. The performances of the two controllers based on the SOMA algorithm are tested on the pH neutralization process.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_53-Nonlinear_Model_Predictive_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Security of the Telemedicine System for the Rural People of Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090152</link>
        <id>10.14569/IJACSA.2018.090152</id>
        <doi>10.14569/IJACSA.2018.090152</doi>
        <lastModDate>2018-01-31T14:55:35.4030000+00:00</lastModDate>
        
        <creator>Toufik Ahmed Emon</creator>
        
        <creator>Uzzal Kumar Prodhan</creator>
        
        <creator>Mohammad Zahidur Rahman</creator>
        
        <creator>Israt Jahan</creator>
        
        <subject>Telemedicine; security; encryption; hashing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Telemedicine is a healthcare system in which healthcare professionals can observe, diagnose, evaluate and treat patients from a remote location, and patients can easily access medical expertise quickly and efficiently. The increasing popularity of telemedicine increases its security threats. In this paper, a security framework is implemented for the developed cost-effective telemedicine system. The proposed security framework secures all sections of the model following the recommendations of Health Level 7, Fast Healthcare Interoperability Resources and the Health Insurance Portability and Accountability Act. The implementation of this security framework includes authenticating the different types of users, securing the connection between mobile devices and sensors through authentication, protecting the mobile application from hackers, ensuring data security through encryption, and securing the server using the Secure Sockets Layer (SSL). Finally, we can say that the developed telemedicine model is more secure and can be implemented in remote areas of developing countries such as Bangladesh.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_52-Improving_Security_of_the_Telemedicine_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bearing Fault Classification based on the Adaptive Orthogonal Transform Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090151</link>
        <id>10.14569/IJACSA.2018.090151</id>
        <doi>10.14569/IJACSA.2018.090151</doi>
        <lastModDate>2018-01-31T14:55:35.3900000+00:00</lastModDate>
        
        <creator>Mohamed Azergui</creator>
        
        <creator>Abdenbi Abenaou</creator>
        
        <creator>Hassane Bouzahir</creator>
        
        <subject>Condition monitoring; vibration analysis; adaptive orthogonal transformation; bearing fault</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>In this work, we propose an approach based on building an adaptive basis which permits accurate diagnostic decisions. The adaptive orthogonal transformation consists of calculating the adaptive operator and the standard spectrum for every state, using two sets of vibration signal records for each type of fault. To classify a new signal, we calculate the spectral vector of this signal in each basis. Then, the similarity between this vector and the other standard spectra is computed. The experimental results show that the proposed method is very useful for improving fault detection.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_51-Bearing_Fault_Classification_based_on_the_Adaptive_Orthogonal_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Motion Planning Framework based on the Quantized LQR Method for Autonomous Robots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090150</link>
        <id>10.14569/IJACSA.2018.090150</id>
        <doi>10.14569/IJACSA.2018.090150</doi>
        <lastModDate>2018-01-31T14:55:35.3730000+00:00</lastModDate>
        
        <creator>Onur Sencan</creator>
        
        <creator>Hakan Temeltas</creator>
        
        <subject>Robot motion; mobile robotics; hybrid systems; optimal control; quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This study addresses the disconnection between the computational side of the robot navigation problem and the control problem, including concerns about stability. We aim to constitute a framework that includes a novel approach of using quantizers for occupancy grids and vehicle control systems concurrently. This representation allows stability concerns to be addressed within the navigation structure through input and output quantizers in the framework. We give theoretical proofs of qLQR stability in the sense of Lyapunov, alongside the implementation details. The experimental results demonstrate the effectiveness of the qLQR controller and the quantizers in the framework with real-time data and offline simulations.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_50-A_New_Motion_Planning_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Energy Conservation in Wireless Sensor Network Using Energy Harvesting System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090149</link>
        <id>10.14569/IJACSA.2018.090149</id>
        <doi>10.14569/IJACSA.2018.090149</doi>
        <lastModDate>2018-01-31T14:55:35.3570000+00:00</lastModDate>
        
        <creator>Abdul Rashid</creator>
        
        <creator>Faheem Khan</creator>
        
        <creator>Toor Gul</creator>
        
        <creator>Fakhr-e-Alam</creator>
        
        <creator>Shujaat Ali</creator>
        
        <creator>Samiullah Khan</creator>
        
        <creator>Fahim Khan Khalil</creator>
        
        <subject>Wireless sensor network; Low Energy Adaptive Cluster Hierarchy (LEACH); clustering; cluster head; energy harvesting; energy conservation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Wireless Sensor Networks (WSNs) play an imperative part in monitoring and gathering information from complex geographical areas, and have gained popularity over the last decade. Energy conservation plays a fundamental role in WSNs since such sensor networks are designed to be located in dangerous and non-accessible areas. The main issue in a WSN is energy consumption; therefore, managing the energy consumption of the sensor nodes is the main area of our research. Sensor nodes use non-replaceable batteries for power supply, and the lifetime of a sensor node greatly depends on these batteries. The replacement of these batteries is very difficult in many applications. An alternative solution to this problem is to use an energy harvesting system in the WSN to provide a permanent power supply to the sensor nodes. The process of extracting energy from nature and converting it into electrical energy is called energy harvesting. There are many sources of energy in nature, such as solar, wind and thermal, which can be harvested and used for WSNs. In this research, we propose using an energy harvesting system for cluster heads in a clustering-based WSN. We compare our proposed technique to the well-known clustering algorithm Low Energy Adaptive Cluster Hierarchy (LEACH).</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_49-Improving_Energy_Conservation_in_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Predictive Model for Solar Photovoltaic Power using the Levenberg-Marquardt and Bayesian Regularization Algorithms and Real-Time Weather Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090148</link>
        <id>10.14569/IJACSA.2018.090148</id>
        <doi>10.14569/IJACSA.2018.090148</doi>
        <lastModDate>2018-01-31T14:55:35.3570000+00:00</lastModDate>
        
        <creator>Mohammad H. Alomari</creator>
        
        <creator>Ola Younis</creator>
        
        <creator>Sofyan M. A. Hayajneh</creator>
        
        <subject>Solar photovoltaic; solar irradiance; PV power forecasting; machine learning; artificial neural networks; Levenberg-Marquardt; Bayesian regularization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The stability of power production in photovoltaic (PV) power plants is an important issue for large-scale grid-connected systems because it affects the control and operation of the electrical grid. An efficient forecasting model is proposed in this paper to predict the next-day solar photovoltaic power using the Levenberg-Marquardt (LM) and Bayesian Regularization (BR) algorithms and real-time weather data. The correlations between the global solar irradiance, temperature, solar photovoltaic power, and the time of the year were studied to extract knowledge from the available historical data for the purpose of developing a real-time prediction system. The solar PV power data were extracted from the power plant installed on top of the Faculty of Engineering building at Applied Science Private University (ASU), Amman, Jordan, and real-time weather records were measured by the ASU weather station on the same university campus. Extensive training, validation, and testing experiments were carried out on the available records to optimize the Neural Network (NN) configurations and compare the performance of the LM and BR algorithms with different sets and combinations of weather data. Promising results were obtained, with excellent real-time overall performance for next-day forecasting: a Root Mean Square Error (RMSE) value of 0.0706 using the Bayesian regularization algorithm with 28 hidden layers and all weather inputs. The Levenberg-Marquardt algorithm provided an RMSE of 0.0753 using 23 hidden layers for the same set of learning inputs. This research shows that the Bayesian regularization algorithm outperforms the reported real-time prediction systems for PV power production.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_48-A_Predictive_Model_for_Solar_Photovoltaic_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between Two Adaptive Controllers Applied to Greenhouse Climate Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090147</link>
        <id>10.14569/IJACSA.2018.090147</id>
        <doi>10.14569/IJACSA.2018.090147</doi>
        <lastModDate>2018-01-31T14:55:35.3400000+00:00</lastModDate>
        
        <creator>Mohamed Essahafi</creator>
        
        <creator>Mustapha Ait Lafkih</creator>
        
        <subject>Generalized predictive control; greenhouse; multivariable control; identification; recursive least square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This paper presents a study of a multivariable Adaptive Generalized Predictive Controller and its application to controlling the thermal behaviour of an agricultural greenhouse, which is composed of a number of different elements (cover, internal air, plants, soil, actuators and sensors). The thermal model was obtained after a study of the energy balances reflecting the physical behavior of the greenhouse. For this reason, we opted to estimate the dynamic model of the greenhouse with an algorithm based on the recursive least squares (RLS) method. Simulation results are presented to show the controller’s performance in terms of response time, stability and the rejection of disturbances.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_47-Comparison_between_Two_Adaptive_Controllers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel Community Detection Algorithm for Big Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090146</link>
        <id>10.14569/IJACSA.2018.090146</id>
        <doi>10.14569/IJACSA.2018.090146</doi>
        <lastModDate>2018-01-31T14:55:35.3270000+00:00</lastModDate>
        
        <creator>Yathrib AlQahtani</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <subject>Data mining; social networks; community detection; distributed computing; Pregel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Mining social networks has become an important task in the data mining field; it describes users and their roles and relationships in social networks. Processing social networks with graph algorithms is the source for discovering many features. The most important algorithms applied to social networks are community detection algorithms. Communities in social networks are groups of people sharing common interests or activities. DenGraph is a density-based algorithm used to find clusters of arbitrary shapes based on users’ interactions in social networks. However, because of the rapidly growing size of social networks, it is impossible to process a huge graph on a single machine at an acceptable level of performance. In this article, the DenGraph algorithm has been redesigned to work in a distributed computing environment. We propose the ParaDengraph algorithm based on the Pregel parallel model for large graph processing.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_46-A_Parallel_Community_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Requirement Elicitation Techniques for Open Source Systems: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090145</link>
        <id>10.14569/IJACSA.2018.090145</id>
        <doi>10.14569/IJACSA.2018.090145</doi>
        <lastModDate>2018-01-31T14:55:35.3100000+00:00</lastModDate>
        
        <creator>Hafiza Maria Kiran</creator>
        
        <creator>Zulfiqar Ali</creator>
        
        <subject>Requirements engineering; requirement elicitation; open source system; requirement elicitation techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The trend of Open Source Software development has increased over the past few years and has gained much attention from developers in the industry. The development of open source software systems differs slightly from traditional software development. In open source software development, requirement elicitation is a complex and critical process: because developers from different regions of the world build the system, it is difficult to gather requirements for it. A variety of available tools, techniques, and approaches are used to perform requirement elicitation. The purpose of this study is to examine how requirement elicitation is carried out for open source software and the different ways in which this process is simplified. This paper comprehensively describes the techniques that are available and used for requirement elicitation in open source software development. To do so, a literature survey of existing requirement elicitation techniques is conducted, and different techniques that can be used for requirement elicitation in open source software systems are identified.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_45-Requirement_Elicitation_Techniques_for_Open_Source_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Information Theoretic Analysis of Random Number Generator based on Cellular Automaton</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090144</link>
        <id>10.14569/IJACSA.2018.090144</id>
        <doi>10.14569/IJACSA.2018.090144</doi>
        <lastModDate>2018-01-31T14:55:35.2930000+00:00</lastModDate>
        
        <creator>Amirahmad Nayyeri</creator>
        
        <creator>Gholamhossein Dastghaibyfard</creator>
        
        <subject>Random number generators; entropy; correlation information; elementary cellular automata; reversibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The realization of randomness has always been a controversial concept of great importance from both theoretical and practical perspectives. This realization has been revolutionized in the light of recent studies, especially in the realms of chaos theory, algorithmic information theory and emergent behavior in complex systems. We briefly discuss different definitions of randomness and different methods for generating it. The connection between all these approaches and the notion of normality as the necessary condition of being unpredictable is discussed. Then a complex-system-based random number generator is introduced. We analyze its paradoxical features (a conservative nature and reversibility in spite of considerable variation) using information theoretic measures in connection with other measures. The evolution of this random generator is equivalent to the evolution of its probabilistic description in terms of the probability distribution over blocks of different lengths. With the aid of simulations, we show the ability of this system to preserve normality during the process of coarse graining.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_44-An_Information_Theoretic_Analysis_of_Random_Number.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Smart Emergency Response System for Fire Hazards using IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090143</link>
        <id>10.14569/IJACSA.2018.090143</id>
        <doi>10.14569/IJACSA.2018.090143</doi>
        <lastModDate>2018-01-31T14:55:35.2930000+00:00</lastModDate>
        
        <creator>Lakshmana Phaneendra Maguluri</creator>
        
        <creator>Tumma Srinivasarao</creator>
        
        <creator>Maganti Syamala</creator>
        
        <creator>R. Ragupathy</creator>
        
        <creator>N.J. Nalini</creator>
        
        <subject>Internet of Things (IoT); Arduino IDE; GPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The Internet of Things pertains to connecting currently unconnected things and people. It marks a new era of transforming existing systems to improve the cost-effective quality of services for society. To support the smart city vision, urban IoT designs exploit value-added services for citizens as well as city administration using the most advanced communication technologies. To make emergency response real time, IoT enhances the way first responders operate and provides emergency managers with the necessary up-to-date information and communication to make use of those assets. IoT mitigates many of the challenges of emergency response, including present problems such as a weak communication network and information lag. In this paper, an emergency response system for fire hazards is designed using an IoT standardized structure. To implement the proposed scheme, a low-cost Espressif Wi-Fi module (ESP-32), a flame detection sensor, a smoke detection sensor (MQ-5), a flammable gas detection sensor and a GPS module are used. The sensors detect a hazard and alert local emergency rescue organizations, such as fire departments and police, by sending the hazard location to the cloud service through which all are connected. The overall network utilizes MQTT, a lightweight data-oriented publish-subscribe messaging protocol, for fast and reliable communication. Thus, an intelligent integrated system is designed with the help of IoT.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_43-Efficient_Smart_Emergency_Response_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Feature Subset Selection using Ant Colony Optimization and Multi-Classifier Ensemble</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090142</link>
        <id>10.14569/IJACSA.2018.090142</id>
        <doi>10.14569/IJACSA.2018.090142</doi>
        <lastModDate>2018-01-31T14:55:35.2800000+00:00</lastModDate>
        
        <creator>Anam Naseer</creator>
        
        <creator>Waseem Shahzad</creator>
        
        <creator>Arslan Ellahi</creator>
        
        <subject>Ant colony optimization; predictive accuracy; classifier; feature subset selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>An active area of research in data mining and machine learning is dimensionality reduction. Feature subset selection is an effective technique for dimensionality reduction and an essential step in successful data mining applications. It reduces the number of features; removes irrelevant, redundant, or noisy features; and enhances the predictive capability of the classifier. It provides fast and cost-effective predictors and leads to better model comprehensibility. In this paper, we propose a hybrid approach for feature subset selection. It is a filter-based method in which a classifier ensemble is coupled with an ant colony optimization algorithm to enhance the predictive accuracy of filters. Extensive experimentation has been carried out on eleven publicly available data sets over four different classifiers. We have compared our proposed method with numerous filter- and wrapper-based methods. Experimental results indicate that our method has a remarkable ability to generate subsets with a reduced number of features while attaining higher classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_42-A_Hybrid_Approach_for_Feature_Subset_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Truncated Patch Antenna on Jute Textile for Wireless Power Transmission at 2.45 GHz</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090141</link>
        <id>10.14569/IJACSA.2018.090141</id>
        <doi>10.14569/IJACSA.2018.090141</doi>
        <lastModDate>2018-01-31T14:55:35.2630000+00:00</lastModDate>
        
        <creator>Kais Zeouga</creator>
        
        <creator>Lotfi Osman</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <creator>Bhaskar Gupta</creator>
        
        <subject>Jute textile; permittivity measurement; loss tangent measurement; patch antenna; truncated patch antenna; frequency shift; wireless power transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Jute textile is made from natural fibres and is known for its strength and durability. To determine whether jute could be used as a substrate for microstrip antennas, its electromagnetic characteristics (permittivity and loss tangent) are measured in the band of 1 GHz to 5 GHz. The obtained data are used to compare the performance of a simple rectangular patch antenna resonating at 2.45 GHz on jute with antennas using different textiles as a substrate. Comparing the simulation results gives an idea of the suitability of jute as a substrate for microstrip antennas. In the second part of this paper, a truncated patch antenna on jute is studied for use in wireless power transmission at 2.45 GHz. The antenna was simulated and then fabricated. The measured reflection shows a shift in the resonance frequency compared to the simulated one. The frequency shift is explained, and a solution is proposed to correct it; a second antenna was fabricated and measured.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_41-Truncated_Patch_Antenna_on_Jute_Textile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DoS/DDoS Detection for E-Healthcare in Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090140</link>
        <id>10.14569/IJACSA.2018.090140</id>
        <doi>10.14569/IJACSA.2018.090140</doi>
        <lastModDate>2018-01-31T14:55:35.2470000+00:00</lastModDate>
        
        <creator>Iftikhar ul Sami</creator>
        
        <creator>Maaz Bin Ahmad</creator>
        
        <creator>Muhammad Asif</creator>
        
        <creator>Rafi Ullah</creator>
        
        <subject>E-Healthcare; DDoS attack; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The Internet of Things (IoT) has emerged as a new horizon in the communication age. IoT has provided a platform for various emerging technologies and applications to grow. E-Health services have also been integrated with, and have greatly benefitted from, IoT. With the increased use of computer technology, computer networks have faced serious security challenges, and IoT faces the same threats. As IoT provides a platform for other fields, such as E-Health, these services are also prone to such threats. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks on E-Health servers in IoT would endanger real-time monitoring of patients and the overall reliability of E-Health services. In this paper, existing solutions to DoS/DDoS attacks in IoT are reviewed and a reliable solution is presented for securing servers against these attacks.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_40-DoSDDoS_Detection_for_E_Healthcare_in_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Matrix Clustering based Migration of System Application to Microservices Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090139</link>
        <id>10.14569/IJACSA.2018.090139</id>
        <doi>10.14569/IJACSA.2018.090139</doi>
        <lastModDate>2018-01-31T14:55:35.2470000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Ghayyur</creator>
        
        <creator>Abdul Razzaq</creator>
        
        <creator>Saeed Ullah</creator>
        
        <creator>Salman Ahmed</creator>
        
        <subject>Monolithic architecture; microservices architecture; systematic mapping; system migration; application transformation; traditional application development; emerging challenges; API</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The microservices architecture (MSA) style is an emerging approach that is gaining strength with the passage of time. Microservices are recommended by a number of researchers to overcome the limitations and issues encountered with the aging monolithic architecture style. Monolithic applications are built as a single unit and cannot be decomposed into smaller, separate services. Microservices instead focus on lightweight, independent, self-contained services of manageable size, with primary attention to maintainability, performance, scalability, and online services, eliminating tight dependencies. While these quality factors have been thoroughly discussed in the literature, migrating system applications is becoming an emerging issue with its own challenges, and this study addresses tight coupling in order to reduce it. Moreover, the literature review indicates several complex problems concerning the migration or conversion of system applications into microservices, among which dependency is a major architectural challenge in recent technology. A systematic mapping is essential in order to recap progress and identify the gaps and requirements for future studies. This study first presents open issues, then new findings on quality attributes of microservices, and then helps the reader understand the difference between previous traditional systems and microservices-based systems. This research creates awareness of system migration to microservices.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_39-Matrix_Clustering_Based_Migration_of_System_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Network Link Prediction using Semantics Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090138</link>
        <id>10.14569/IJACSA.2018.090138</id>
        <doi>10.14569/IJACSA.2018.090138</doi>
        <lastModDate>2018-01-31T14:55:35.2330000+00:00</lastModDate>
        
        <creator>Maria Ijaz</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Muhammad Asif Suryani</creator>
        
        <creator>Anam Sardar</creator>
        
        <subject>Link prediction system; post analysis; semantic similarity; data analysis; social network analysis; dictionary; co-similar links</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Social networks have attracted an enormous number of users over the past few years, and link mining is a key research track in this area. It has drawn the attention of several analysts as a powerful technique in social network studies for understanding the relations between nodes in social circles. Many data sets of current interest are most appropriately described as collections of interrelated, linked objects. The main challenge faced by analysts is tackling the problem of structured data sets among the objects. For this purpose, we design a new comprehensive model that combines link mining techniques with semantics to perform link mining on structured data sets. To our knowledge, no past work has investigated these structured data sets using this technique. We extracted real-time post data using different tools from one of the famous social network platforms and checked society’s behavior against it. We have verified our model using diverse classifiers, and the derived outcomes are encouraging.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_38-Social_Network_Link_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attendance and Information System using RFID and Web-Based Application for Academic Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090137</link>
        <id>10.14569/IJACSA.2018.090137</id>
        <doi>10.14569/IJACSA.2018.090137</doi>
        <lastModDate>2018-01-31T14:55:35.2170000+00:00</lastModDate>
        
        <creator>Hasanein D. Rjeib</creator>
        
        <creator>Nabeel Salih Ali</creator>
        
        <creator>Ali Al Farawn</creator>
        
        <creator>Basheer Al-Sadawi</creator>
        
        <creator>Haider Alsharqi</creator>
        
        <subject>Student attendance; Attendance Management System (AMS); information service; RFID; IoT; radio-frequency identification; Arduino</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Recently, student attendance has been considered one of the crucial elements that reflect academic achievement and the performance contributed to any university, in contrast to traditional methods, which are time-consuming and inefficient. Diverse automatic identification technologies have come into vogue, such as Radio Frequency Identification (RFID). Extensive research and several applications have been produced to take maximum advantage of this technology, bringing about some concerns as well. RFID is a wireless technology used to identify and track objects via radio waves, transferring data from an electronic tag, called an RFID tag or label, to an RFID reader. The current study proposes an RFID-based Attendance Management System (AMS) and an information service system for an academic domain, using RFID technology together with a programmable logic circuit (such as an Arduino) and a web-based application. The proposed system aims to manage students’ attendance recording and provides the capability of tracking student absentees as well, supporting information services that include students’ grading marks, daily timetable, lecture times and classroom numbers, and other student-related instructions provided by faculty department staff. Based on the results, the proposed attendance and information system is time-effective, reduces documentation efforts, and has very low power consumption. In addition, previously proposed RFID-based student attendance systems are analyzed and criticized with respect to system functionalities and main findings, and directions for future research are identified.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_37-Attendance_and_Information_System_Using_RFID.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Removing Salt and Pepper Noise based on Neighbourhood Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090136</link>
        <id>10.14569/IJACSA.2018.090136</id>
        <doi>10.14569/IJACSA.2018.090136</doi>
        <lastModDate>2018-01-31T14:55:35.2000000+00:00</lastModDate>
        
        <creator>Liu Chun</creator>
        
        <creator>Sun Bishen</creator>
        
        <creator>Liu Shaohui</creator>
        
        <creator>Tan Kun</creator>
        
        <creator>Ma Yingrui</creator>
        
        <subject>Salt and pepper noise; noise detection; neighbourhood similarity; detail preserving denoising</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Denoising images is a classical problem in low-level computer vision. In this paper, we propose an algorithm that iteratively removes salt-and-pepper noise based on neighbourhood information while preserving details. First, we compute from the noise ratio the probability that windows of different sizes contain no noise-free pixel, and then determine the window size. After that, each corrupted pixel is replaced by a weighted combination of its eight neighbourhood pixels. If the neighbourhood information does not satisfy the denoising condition, the corrupted pixels are recovered in subsequent iterations.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_36-Iterative_Remove_Salt_and_Pepper_Noise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FARM: Fuzzy Action Rule Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090134</link>
        <id>10.14569/IJACSA.2018.090134</id>
        <doi>10.14569/IJACSA.2018.090134</doi>
        <lastModDate>2018-01-31T14:55:35.1870000+00:00</lastModDate>
        
        <creator>Zahra Entekhabi</creator>
        
        <creator>Pirooz Shamsinejadbabki</creator>
        
        <subject>Action mining; fuzzy action rule mining; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Action Mining (AM) is a sub-field of Data Mining concerned with finding ready-to-apply action rules. The majority of the patterns discovered by traditional data mining methods require analysis and further work by domain experts to be applicable in the target domain, while Action Mining methods try to find final cost-effective actions that can be applied immediately in the target domain. Current state-of-the-art methods in the AM domain only consider discrete attributes for action rule mining; one must therefore discretize continuous attributes using traditional discretization methods before using them for action rule mining. In this paper, the concept of the Fuzzy Action Rule is introduced. In this type of action rule, continuous attributes can be presented in fuzzy form, so that fuzzy changes can be suggested for continuous attributes instead of discretizing them. Because the space of all fuzzy action rules can be huge, a Genetic Algorithm-based Fuzzy Action Rule Mining (GA-FARM) method has been devised for finding the most cost-effective fuzzy action rules with tractable complexity. The proposed method has been implemented and tested on different real datasets. Results confirm that the proposed method successfully finds cost-effective fuzzy action rules in acceptable time.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_34-FARM_Fuzzy_Action_Rule_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secured Interoperable Data Exchange Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090135</link>
        <id>10.14569/IJACSA.2018.090135</id>
        <doi>10.14569/IJACSA.2018.090135</doi>
        <lastModDate>2018-01-31T14:55:35.1870000+00:00</lastModDate>
        
        <creator>A. Bahaa</creator>
        
        <creator>A. Sayed</creator>
        
        <creator>L. Elfangary</creator>
        
        <subject>Data sharing; security; integrity; and protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Interoperability enables peer systems to communicate with each other and use each other’s functionality effectively. It improves the ability of different cooperative systems to exchange information, and it plays a vital role in educational information system institutions. In practice, two main technical reasons restrain system interoperability. First, these systems may be developed under various operating systems, programming languages and different database management systems. Second, security concerns greatly impact the execution of interoperability among various educational institutions. This paper proposes a new RESTful secured interoperable model for data exchange among different information systems. This will help educational information systems exchange data with one another using a pre-defined standard message format. Additionally, this paper designs a Cross Platform Web Application Interoperability Protocol (CPWAIP) to facilitate the interaction among components of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_35-A_Secured_Interoperable_Data_Exchange_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brainwaves for User Verification using Two Separate Sets of Features based on DCT and Wavelet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090133</link>
        <id>10.14569/IJACSA.2018.090133</id>
        <doi>10.14569/IJACSA.2018.090133</doi>
        <lastModDate>2018-01-31T14:55:35.1700000+00:00</lastModDate>
        
        <creator>Loay E. George</creator>
        
        <creator>Hend A. Hadi</creator>
        
        <subject>Electroencephalogram (EEG); wavelet transforms; DCT; DFT; energy features; statistical moments; Euclidean measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This paper discusses the effectiveness of brain waves for user verification using electroencephalogram (EEG) recordings of one channel belonging to a single task. Feature sets previously introduced for EEG-based identification systems are tested in this paper as suitable features for a verification system. The first feature set is based on the energy distribution of the DCT’s or DFT’s power spectra, while the second is based on the statistical moments of the wavelet transform; three types of wavelet transforms are considered. Each set of features is tested using the normalized Euclidean distance measure for matching. The performance of the verification system is evaluated using the FAR, FRR, and HTER measures. Two publicly available EEG datasets are used: the first is the Colorado State University (CSU) dataset, collected from seven healthy subjects, and the second is the Motor Movement/Imagery (MMI) dataset, a relatively large dataset collected from 109 healthy subjects. The attained verification results are encouraging when compared with the results of other recently published works: the best achieved HTER is 0.26 when the system is tested on the CSU dataset, and 0.16 when tested on the MMI dataset for the features based on the energy of the DFT spectra.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_33-Brainwaves_for_User_Verification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Healthcare Context Information: The Social Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090132</link>
        <id>10.14569/IJACSA.2018.090132</id>
        <doi>10.14569/IJACSA.2018.090132</doi>
        <lastModDate>2018-01-31T14:55:35.1530000+00:00</lastModDate>
        
        <creator>Isra’a Ahmed Zriqat</creator>
        
        <creator>Ahmad Mousa Altamimi</creator>
        
        <subject>Context information; social context; healthcare; medical information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>During the treatment process, medical institutes collect context information about their patients and store it in their healthcare systems. The collected information describes measurable, risk, or medication information and is used to improve the performance of the institutes’ healthcare systems by allowing diverse knowledge about patients. That said, other information is also needed, as factors such as education and income influence patients’ lifestyle: a high level of education or income reflects positively on the patient’s life, probably reducing the likelihood or incidence of infectious diseases. In this paper, a new type of healthcare context information, the Social Context, is proposed to address this need. It can be divided into four main categories: the patient’s related people, behavior, income and education. We believe that the newly proposed context information should be considered in the design of context-aware medical informatics systems alongside the well-known context information.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_32-A_New_Healthcare_Context_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust System for Noisy Image Classification Combining Denoising Autoencoder and Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090131</link>
        <id>10.14569/IJACSA.2018.090131</id>
        <doi>10.14569/IJACSA.2018.090131</doi>
        <lastModDate>2018-01-31T14:55:35.1400000+00:00</lastModDate>
        
        <creator>Sudipta Singha Roy</creator>
        
        <creator>Sk. Imran Hossain</creator>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Kazuyuki Murase</creator>
        
        <subject>Image denoising; denoising autoencoder; cascaded denoising autoencoder; convolutional neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Image classification, a complex perceptual task with many important real-life applications, faces a major challenge in the presence of noise. Noise degrades the performance of classifiers and makes them less suitable in real-life scenarios. To address this issue, several studies have used a denoising autoencoder (DAE) to restore original images from noisy images, followed by a Convolutional Neural Network (CNN) for classification. Existing models perform well only when the noise levels in the training set and test set are the same or differ only slightly. To fit real-life applications, a model should be independent of the noise level. The aim of this study is to develop a robust image classification system that performs well from regular to massive noise levels. The proposed method first trains a DAE with low-level noise-injected images and a CNN with noiseless native images, independently. It then arranges these two trained models in three different combinational structures: CNN, DAE-CNN, and DAE-DAE-CNN, to classify images corrupted with zero, regular, and massive noise, respectively. The final system outcome is chosen by applying a winner-takes-all combination to the individual outcomes of the three structures. Although the proposed system consists of three DAEs and three CNNs across the structure layers, they are copies of the same DAE and CNN trained initially, which also makes the system computationally efficient. In DAE-DAE-CNN, two identical DAEs are arranged in a cascaded structure to make it well suited for classifying massively noisy data, even though the DAE is trained with low-noise image data. The proposed method is tested on the MNIST handwritten numeral dataset with different noise levels. Experimental results reveal the effectiveness of the proposed method, showing better results than the individual structures as well as other related methods.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_31-A_Robust_System_for_Noisy_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pre-Trained Convolutional Neural Network for Classification of Tanning Leather Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090129</link>
        <id>10.14569/IJACSA.2018.090129</id>
        <doi>10.14569/IJACSA.2018.090129</doi>
        <lastModDate>2018-01-31T14:55:35.1230000+00:00</lastModDate>
        
        <creator>Sri Winiarti</creator>
        
        <creator>Adhi Prahara</creator>
        
        <creator>Murinto</creator>
        
        <creator>Dewi Pramudi Ismi</creator>
        
        <subject>Leather classification; tanning leather; convolutional neural network (CNN); deep learning; support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Leather craft products, such as belts, gloves, shoes, bags, and wallets, mainly originate from cow, crocodile, lizard, goat, sheep, buffalo, and stingray skin. Before the skins are used as leather craft materials, they go through a tanning process. With the rapid development of the leather craft industry, an automation system for leather tanning factories is important to achieve large-scale production that meets the demand for leather craft materials. The challenge in automatic leather grading based on the type and quality of leather is that skin color and texture after the tanning process vary widely within the same skin category and are highly similar to those of other skin categories. Furthermore, skin from different parts of an animal's body may have different color and texture. Therefore, a leather classification method for tanning leather images is proposed. The method uses a pre-trained deep convolutional neural network (CNN) to extract rich features from tanning leather images and a Support Vector Machine (SVM) to classify the features into several types of leather. Performance evaluation shows that the proposed method can classify various types of leather with good accuracy and is superior to other state-of-the-art leather classification methods in terms of accuracy and computational time.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_29-Pre_Trained_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iteration Method for Simultaneous Estimation of Vertical Profiles of Air Temperature and Water Vapor with AQUA/AIRS Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090130</link>
        <id>10.14569/IJACSA.2018.090130</id>
        <doi>10.14569/IJACSA.2018.090130</doi>
        <lastModDate>2018-01-31T14:55:35.1230000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Inversion; tropopause; AQUA; AIRS; Air temperature; sounder; MODTRAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>An iteration method for simultaneous estimation of vertical profiles of air temperature and water vapor from the high-spectral-resolution sounder data of AQUA/AIRS is proposed. Through a sensitivity analysis of the proposed method on several atmospheric models simulated by MODTRAN, it is found that the proposed method is superior to the conventional method by 41.4% for the air temperature profile and by 88.9% for the relative humidity profile.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_30-Inversion_Method_for_Reducing_Redundant_Channels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Valuable Clustering Techniques for Deep Web Access and Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090128</link>
        <id>10.14569/IJACSA.2018.090128</id>
        <doi>10.14569/IJACSA.2018.090128</doi>
        <lastModDate>2018-01-31T14:55:35.1070000+00:00</lastModDate>
        
        <creator>Qurat ul ain</creator>
        
        <creator>Asma Sajid</creator>
        
        <creator>Uzma Jamil</creator>
        
        <subject>Deep web; clustering; Latent Dirichlet Allocation; Latent Semantic Analysis; hierarchical methods; K-means methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>A massive amount of content is available on the web, but a huge portion of it is still invisible. Users can only access this hidden web, also called the Deep web, by entering a directed query in a web search form, thereby retrieving data from a database that is not indexed with hyperlinks. The inability to index particular types of content and restricted storage capacity are significant factors behind the invisibility of web content. Different clustering techniques offer a simple way to analyze large volumes of non-indexed content. The major focus of this research is to analyze different clustering techniques to find a more accurate and efficient method for accessing and navigating deep web content. Analysis and comparison of Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA), and hierarchical and K-means methods have been carried out, and valuable factors for clustering in the deep web have been identified.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_28-Analysis_of_Valuable_Clustering_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Engineering: Challenges and their Solution in Mobile App Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090127</link>
        <id>10.14569/IJACSA.2018.090127</id>
        <doi>10.14569/IJACSA.2018.090127</doi>
        <lastModDate>2018-01-31T14:55:35.0930000+00:00</lastModDate>
        
        <creator>Naila Kousar</creator>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Aramghan Sarwar</creator>
        
        <creator>Burhan Mohy-ud-din</creator>
        
        <creator>Ayesha Shahid</creator>
        
        <subject>Android; IOS; mobile apps; software quality; survey research; user requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Mobile app development is increasing rapidly due to the popularity of smartphones. With billions of app downloads, the Apple App Store and Google Play Store have come to dominate mobile devices. Over the last 10 years, the number of smartphones and mobile applications has grown continually. Android and iOS are the two mobile platforms that covered most smartphones in the world in 2017. However, this success challenges app developers to publish high-quality apps to keep attracting and satisfying end-users. Developing a mobile app involves first selecting the platforms the app will run on, then developing platform-specific solutions (i.e., native apps). During application development, developers come across multiple challenges. In this paper, we identify the challenges developers face during the development life cycle, together with their possible solutions.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_27-Software_Engineering_Challenges_and_their_solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Seamless Network Database Migration Tool for Institutions in Zambia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090126</link>
        <id>10.14569/IJACSA.2018.090126</id>
        <doi>10.14569/IJACSA.2018.090126</doi>
        <lastModDate>2018-01-31T14:55:35.0770000+00:00</lastModDate>
        
        <creator>Mutale Kasonde</creator>
        
        <creator>Simon Tembo</creator>
        
        <subject>Database management system; database migration; database structure; database migration toolkits and database cloning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The objective of this research was to efficiently manage the migration process between different Database Management Systems (DBMS) by automating database migration. The automation involved database cloning between different platforms, exchange of data between a data center and different clients running non-identical DBMSs, and backing up the database in a flexible format such as eXtensible Markup Language (XML). This approach involved the development of a “Database Migration Tool”. The tool was developed on a Windows platform using Java Eclipse™ with four non-identical dummy relational databases (Microsoft Access, MySQL, SQL Server, and Oracle). The tool was run in a controlled environment over the network, and databases were successfully migrated from source to the specified target destination. The developed tool is efficient, timely, and highly cost-effective.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_26-A_Seamless_Network_Database_Migration_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Exfiltration from Air-Gapped Computers based on ARM CPU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090125</link>
        <id>10.14569/IJACSA.2018.090125</id>
        <doi>10.14569/IJACSA.2018.090125</doi>
        <lastModDate>2018-01-31T14:55:35.0770000+00:00</lastModDate>
        
        <creator>Kenta Yamamoto</creator>
        
        <creator>Miyuki Hirose</creator>
        
        <creator>Taiichi Saito</creator>
        
        <subject>Air-Gapped Network; ARM CPU; data exfiltration; SIMD; NEON; GSMem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>An air-gapped network is a network isolated from public networks. Several techniques for data exfiltration from air-gapped networks have recently been proposed. Air-gap malware is malware that breaks the isolation of an air-gapped computer using air-gap covert channels, which extract information from computers running on air-gapped networks. Guri et al. presented an air-gap malware, “GSMem”, which can exfiltrate data from air-gapped computers over GSM frequencies, 850 MHz to 900 MHz. GSMem makes it possible to send data using the radio waves leaked from the system bus between the CPU and RAM. It generates binary amplitude shift keying (B-ASK) modulated waves with x86 SIMD instructions. In order to efficiently emit electromagnetic waves from the system bus, it is necessary to access the RAM without being affected by the CPU caches. GSMem adopts an instruction that writes data without accessing the CPU cache on Intel CPUs. This paper proposes an air-gap covert channel for computers based on ARM CPUs, which includes a software algorithm that can effectively cause cache misses. It is also a technique that uses NEON instructions and transmits B-ASK modulated data via radio waves radiated from an ARM-based computer (e.g., Raspberry Pi 3). The experiment shows that the proposed program sends binary data using radio waves (about 1000 kHz to 1700 kHz) leaked from the system bus between the ARM CPU and RAM. The program can also run on Android machines based on ARM CPUs (e.g., ASUS Zenpad 3S 10 and OnePlus 3).</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_25-Data_Exfiltration_from_Air_Gapped_Computers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Violations in Credit Cards of Banks and Financial Institutions based on Artificial Neural Network and Metaheuristic Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090124</link>
        <id>10.14569/IJACSA.2018.090124</id>
        <doi>10.14569/IJACSA.2018.090124</doi>
        <lastModDate>2018-01-31T14:55:35.0600000+00:00</lastModDate>
        
        <creator>Zarrin Monirzadeh</creator>
        
        <creator>Mehdi Habibzadeh</creator>
        
        <creator>Nima Farajian</creator>
        
        <subject>Financial fraud detection; neural networks; data mining; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Due to the popularity of the World Wide Web and e-commerce, electronic communications between people and different organizations through the virtual world of the Internet have provided a good basis for commercial and economic relations. Although these developments have occurred over less than a century, electronic communications have always been subject to interference, cheating, fraud, and other acts of sabotage. Along with this increase in trading volume, there is a huge increase in the number of online frauds, which results in billions of dollars of losses annually worldwide; this has a direct effect on the customer service of banking systems, particularly electronic banking systems, and on survival as a reliable financial service provider. Therefore, attention to fraud detection techniques is essential to prevent fraudulent acts, and it is the motive for much scientific research. For this reason, business intelligence is used to identify financial violations in various economic, banking, and other fields. Here, the focus is on algorithms and methods presented in data mining to deal with fraud using neural networks. The main objective is to improve these methods or present new algorithms by studying the behavioral patterns of customers and using a genetic algorithm to improve the performance of the neural network, finding appropriate models for better decision making by implementing and testing the performance of the suggested algorithms. The results show that the neural network was strengthened by the genetic algorithm; in fact, the genetic algorithm can raise our ability to control the training process. Moreover, it was concluded that criteria such as age, gender, and marital status were not effective for detection; the most important effective criteria are information related to the transaction.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_24-Detection_of_Violations_in_Credit_Cards_of_Banks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study on Steganography Digital Images: A Case Study of Scalable Vector Graphics (SVG) and Portable Network Graphics (PNG) Images Formats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090123</link>
        <id>10.14569/IJACSA.2018.090123</id>
        <doi>10.14569/IJACSA.2018.090123</doi>
        <lastModDate>2018-01-31T14:55:35.0470000+00:00</lastModDate>
        
        <creator>Abdulgader Almutairi</creator>
        
        <subject>Image steganography; data hiding; raster and vector images; Scalable Vector Graphics (SVG) and Portable Network Graphics (PNG) images format</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Today, image steganography plays a key role in exchanging secret data over the Internet. However, the optimal choice of image format for steganography is still an open issue, which this research addresses. This research conducts a comparative study between the Scalable Vector Graphics (SVG) image format and the Portable Network Graphics (PNG) image format. As the results show, the SVG image format is more efficient than the PNG image format in terms of capacity and scalability, both before and after steganography is applied. Moreover, the SVG image format helps to increase simplicity and performance for steganography, since it is an XML text file. Our comparative study provides significant results on SVG versus PNG images that have not been seen in previous related studies.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_23-A_Comparative_Study_on_Steganography_Digital_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Toddlers’ Nutritional Status using Data Mining Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090122</link>
        <id>10.14569/IJACSA.2018.090122</id>
        <doi>10.14569/IJACSA.2018.090122</doi>
        <lastModDate>2018-01-31T14:55:35.0300000+00:00</lastModDate>
        
        <creator>Sri Winiarti</creator>
        
        <creator>Herman Yuliansyah</creator>
        
        <creator>Aprial Andi Purnama</creator>
        
        <subject>Data mining; k-means clustering; malnutrition status of toddler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>One of the problems in community health centers and health clinics is documenting toddlers’ data. The number of malnutrition cases in developing countries is quite high. If the problem of malnutrition is not resolved, it can disrupt a country’s economic development. This study identifies the malnutrition status of toddlers based on context data from a community health center (PUSKESMAS) in Jogjakarta, Indonesia. Currently, the patients’ data cannot be directly mapped into appropriate groups of toddlers’ malnutrition status. Therefore, the data mining concept with k-means clustering is used to map the data into several malnutrition status categories. The aim of this study is to build software that can assist the Indonesian government in making decisions to take preventive action against malnutrition.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_22-Identification_of_Toddlers_Nutritional_Status.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>General Characteristics and Common Practices for ICT Projects: Evaluation Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090121</link>
        <id>10.14569/IJACSA.2018.090121</id>
        <doi>10.14569/IJACSA.2018.090121</doi>
        <lastModDate>2018-01-31T14:55:35.0300000+00:00</lastModDate>
        
        <creator>Abdullah Saad AL-Malaise AL-Ghamdi</creator>
        
        <creator>Farrukh Saleem</creator>
        
        <subject>ICT project; ICT evaluation; measurement process; case studies; common practices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>In today’s business world, organizations are increasingly dependent on Information and Communication Technology (ICT) resources. Cloud services, communication services, and software services are the most common resources on which enterprises spend large amounts. To install new services and upgrade existing ones, ICT projects are an essential part of an organization’s business strategy. Researchers have highlighted that the real problem for organizations is how to initiate new ICT projects and evaluate them after implementation. This research investigated the common approaches organizations use to start ICT projects and to evaluate their impact after implementation. To this end, we extracted a number of steps with the help of a literature review. To validate those steps, six case studies were selected for collecting samples. The findings of this study show that every ICT project has a list of objectives, i.e., strategic, informational, IT infrastructure, and others. Furthermore, the results highlight that organizations rely on both financial and non-financial evaluation methods depending on the type of organization, i.e., public or private. Moreover, the measurement process is applied on a per-project, monthly, and yearly basis. Importantly, we found that outsourcing currently plays a significant role in the success of ICT projects. The results of this study can help organizations understand the types of ICT investments, the approaches, and their possible impact on organizational goals.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_21-General_Characteristics_and_Common_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encrypted Fingerprint into VoIP Systems using Cryptographic Key Generated by Minutiae Points</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090120</link>
        <id>10.14569/IJACSA.2018.090120</id>
        <doi>10.14569/IJACSA.2018.090120</doi>
        <lastModDate>2018-01-31T14:55:35.0130000+00:00</lastModDate>
        
        <creator>Mohammad Fawaz Anagreh</creator>
        
        <creator>Anwer Mustafa Hilal</creator>
        
        <creator>Tarig Mohamed Ahmed</creator>
        
        <subject>IP; cryptography; fingerprint; minutiae; Advanced Encryption Standard (AES); RSA; information security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The transmission of encrypted voice over IP is challenging: the voice can be recorded, eavesdropped on, altered, or stolen. Here, voice over IP is encrypted using the Advanced Encryption Standard (AES) algorithm, with the AES key generated from minutiae points in a fingerprint. In other words, this is a biometric cryptosystem, a hybrid of a cryptosystem and a biometric system, in which the fingerprint is used for authentication as well as to generate the cryptographic key that encrypts voice over IP with AES. In this paper, we define a new term, the Fingerprint Distribution Problem (FDP), based on the Key Distribution Problem. We also suggest a solution to this problem: encrypting the fingerprint before it is sent between users, using one of the public-key cryptosystems, the RSA algorithm.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_20-Encrypted_Fingerprint_into_VOIP_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Web Service Composition Framework based on Functional Weight to Reach Maximum QoS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090119</link>
        <id>10.14569/IJACSA.2018.090119</id>
        <doi>10.14569/IJACSA.2018.090119</doi>
        <lastModDate>2018-01-31T14:55:35.0000000+00:00</lastModDate>
        
        <creator>M.Y. Mohamed Yacoab</creator>
        
        <creator>Abdalla AlAmeen</creator>
        
        <creator>M. Mohemmed Sha</creator>
        
        <subject>Web service; composition of services; non-functional parameters; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The recent trend on the web is to accomplish almost all user services in every field through the web portals of the respective organizations. However, a specific task with a series of actions cannot be completed by a single web service with limited functionality. Therefore, multiple web services with different functionalities are composed together to attain the result. Web service composition is an approach that combines various services to fulfill web-related tasks with the preferred quality. Composing such services becomes more challenging when the web services have similar functionalities, varying quality, and come from several providers. Hence, the overall QoS (Quality of Service) can be considered the major factor for composition. Moreover, in most compositions the expected QoS is not attained when the task is finished; sometimes the complete task is affected by a single poorly performing web service. So, during composition, utmost care should be taken in selecting each web service. Composing web services dynamically is the main method used to overcome these difficulties. However, to achieve the actual functionality of a specific task, the quality of each individual service is essential. The QoS of a web service is normally evaluated using non-functional attributes such as response time, availability, reliability, and throughput. Also, during composition, the same level of quality is not expected from each individual web service included in the chain. Therefore, this paper proposes a framework for web service composition that sets appropriate weights for the non-functional parameters. Experimental results show that this method paves the way to reach the maximum performance of the composition with improved QoS.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_19-A_Web_Service_Composition_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Music on Shoppers’ Shopping Behaviour in Virtual Reality Retail Stores: Mediation Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090118</link>
        <id>10.14569/IJACSA.2018.090118</id>
        <doi>10.14569/IJACSA.2018.090118</doi>
        <lastModDate>2018-01-31T14:55:34.9830000+00:00</lastModDate>
        
        <creator>Aasim Munir Dad</creator>
        
        <creator>Andrew Kear</creator>
        
        <creator>Asma Abdul Rehman</creator>
        
        <creator>Barry J. Davies</creator>
        
        <subject>Music; retail atmospherics; 3D virtual reality retailing; second life (SL); mediation analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The aim of this study is to investigate the effect of music, as an atmospheric cue of 3D virtual reality retail (VRR) stores, on shoppers’ emotions and behaviour. To complete this research, a major empirical study was conducted in Second Life (SL), one of the most mature virtual worlds (VWs). The effect of music on shoppers’ emotions was experimentally tested in computer labs. A pre-test and post-test were conducted to evaluate emotion levels before and after experiencing 3D VRR stores. Detailed mediation analysis was performed with the PROCESS tool at the later stage of the analysis. This research confirmed music as an atmospheric cue of the 3D servicescape. The results determined the effect of music on shoppers’ arousal, pleasure, and consequent shopping behaviour. Further, this research could not identify a direct effect of arousal on shoppers’ behaviour; however, arousal was a major source of inducing pleasure and increasing shoppers’ positive approach behaviour. This paper contributes to a better understanding of 3D VRR store atmospherics, the role of music in them, and shoppers’ emotions and behaviour.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_18-The_Effect_of_Music_on_Shoppers_Shopping_Behaviour.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality Ranking Algorithms for Knowledge Objects in Knowledge Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090117</link>
        <id>10.14569/IJACSA.2018.090117</id>
        <doi>10.14569/IJACSA.2018.090117</doi>
        <lastModDate>2018-01-31T14:55:34.9830000+00:00</lastModDate>
        
        <creator>Amal Al-Rasheed</creator>
        
        <creator>Jawad Berri</creator>
        
        <subject>Knowledge Management System (KMS); Knowledge Object (KO); knowledge evaluation; quality indicator; recommender system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The emergence of web-based Knowledge Management Systems (KMS) has raised several concerns about the quality of Knowledge Objects (KO), which are the building blocks of knowledge expertise. Web-based KMSs offer large knowledge repositories with millions of resources added by experts or uploaded by users, and their content must be assessed for accuracy and relevance. To improve the efficiency of ranking KOs, two models are proposed for KO evaluation. Both models are based on user interactions and exploit user reputation as an important factor in quality estimation. For the purpose of evaluating the performance of the two proposed models, the algorithms were implemented and incorporated in a KMS. The results of the experiment indicate that the two models are comparable in accuracy, and that the algorithms can be integrated in the search engine of a KMS to estimate the quality of KOs and accordingly rank the results of user searches.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_17-Quality_Ranking_Algorithms_for_Knowledge_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Standard Intensity Deviation Approach based Clipped Sub Image Histogram Equalization Algorithm for Image Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090116</link>
        <id>10.14569/IJACSA.2018.090116</id>
        <doi>10.14569/IJACSA.2018.090116</doi>
        <lastModDate>2018-01-31T14:55:34.9670000+00:00</lastModDate>
        
        <creator>Sandeepa K S</creator>
        
        <creator>Basavaraj N Jagadale</creator>
        
        <creator>J S Bhat</creator>
        
        <subject>Standard intensity deviation; histogram clipping; histogram equalization; contrast enhancement; entropy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The limitations of the hardware and dynamic range of digital cameras have created demand for post-processing software tools to improve image quality. Image enhancement is a technique that helps to bring out the finer details of an image. This paper presents a new algorithm for contrast enhancement, where the enhancement rate is controlled by a clipped-histogram approach that uses standard intensity deviation. Here, standard intensity deviation is used to divide and equalize the image histogram. The equalization process is applied to the sub-images independently, which are then combined into one complete enhanced image. Conventional histogram equalization stretches the dynamic range, which leads to large gaps between adjacent pixels and produces an over-enhancement problem. This drawback is overcome by defining a standard intensity deviation value to split and equalize the histogram. The selection of a suitable threshold value for clipping and splitting the image provides better enhancement than other methods. The simulation results show that the proposed method outperforms other conventional histogram equalization (HE) methods and effectively preserves entropy.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_16-Standard_Intensity_Deviation_approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web-Based COOP Training System to Enhance the Quality, Accuracy and Usability Access</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090115</link>
        <id>10.14569/IJACSA.2018.090115</id>
        <doi>10.14569/IJACSA.2018.090115</doi>
        <lastModDate>2018-01-31T14:55:34.9500000+00:00</lastModDate>
        
        <creator>Amr Jadi</creator>
        
        <creator>Eesa A. Alsolami</creator>
        
        <subject>COOP training; web applications; integration; quality; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>In this paper, a web-based COOP training system is demonstrated to ensure a usable process of task interactions between the various participants. In the existing method, issues related to paperwork, communication gaps, etc. caused serious problems between colleges and industries while implementing COOP training programs. The primary data was collected by conducting interviews with the supervisors and by taking the opinions of students to improve the proposed COOP system. The proposed system is capable of reducing the complexity of operations to a great extent by avoiding overlapping information, reducing the communication gap and increasing the accuracy of the information. The outcomes of the proposed system proved to be very fruitful in terms of the results obtained from the point of view of all the participants in the COOP system. The performance, accuracy, quality and assessment of the student reports were found to be improved, delivering excellent results.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_15-Web_based_COOP_Training_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implicit and Explicit Knowledge Mining of Crowdsourced Communities: Architectural and Technology Verdicts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090114</link>
        <id>10.14569/IJACSA.2018.090114</id>
        <doi>10.14569/IJACSA.2018.090114</doi>
        <lastModDate>2018-01-31T14:55:34.9370000+00:00</lastModDate>
        
        <creator>Husnain Mushtaq</creator>
        
        <creator>Babur Hayat Malik</creator>
        
        <creator>Syed Azkar Shah</creator>
        
        <creator>Umair Bin Siddique</creator>
        
        <creator>Muhammad Shahzad</creator>
        
        <creator>Imran Siddique</creator>
        
        <subject>StackOverflow; architecture and technology verdicts; crowdsourcing; data mining; explicit and implicit knowledge; software repositories; knowledge structuring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The use of social media, especially community Q&amp;A sites, by the software development community has increased significantly in the past few years. The ever-mounting data on these Q&amp;A sites has opened up new horizons for research in multiple dimensions. StackOverflow is a repository of a large amount of data related to software engineering. Software architecture and technology selection verdicts in SE have an enormous and ultimate influence on the overall properties and performance of a software system, and pose risks to change once implemented. Most of the risks in software engineering projects are directly or indirectly coupled with architectural and technology decisions (ATD). The advance availability of architectural knowledge and its utilization are crucial for decision making. Existing architecture and technology knowledge management approaches using software repositories give a rich insight to support architects by offering a wide spectrum of architecture and technology verdicts. However, they are mostly insourced and still depend on manual generation and maintenance of the architectural knowledge. This paper compares various software development approaches, suggests crowdsourcing as a knowledge-rich approach, and brings into use the most popular crowdsourced online software development community (StackOverflow) as a rich source of knowledge for technology decisions to support architecture knowledge management with a more reliable method of data mining for knowledge capturing. This is an exploratory study that follows a quantitative and qualitative e-content analysis approach. Our proposed framework finds relationships among technology- and architecture-related posts in this community to identify architecture-relevant and technology-related knowledge through explicit and implicit knowledge mining, and performs classification and clustering for the purpose of knowledge structuring for future work.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_14-Implicit_and_Explicit_Knowledge_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On P300 Detection using Scalar Products</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090113</link>
        <id>10.14569/IJACSA.2018.090113</id>
        <doi>10.14569/IJACSA.2018.090113</doi>
        <lastModDate>2018-01-31T14:55:34.9370000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <creator>Liviu Goras</creator>
        
        <creator>Anca Lazar</creator>
        
        <subject>Electroencephalographic (EEG); brain computer interface; P300; spelling paradigm; classification; signal processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Results concerning detection of the P300 wave in EEG segments using scalar products with signals of various shapes are presented, and their advantages and limitations are discussed. From the point of view of computational complexity, the proposed algorithm is simple, based on a scalar product and a search for the maximum of six calculated values. Because the human subject is not a robot that precisely generates P300 waves, and there is also a human component of error in the involuntary generation of such waves, we have also calculated the rate of classification of characters in the human visual field. To validate the proposed method, electroencephalography recordings from the Spelling BCI Competition III Challenge 2005 - Dataset II have been used.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_13-On_P300_Detection_using_Scalar_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Detection of Foreign Matters Contained in Dryed Nori (Seaweed) based on Optical Property</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090112</link>
        <id>10.14569/IJACSA.2018.090112</id>
        <doi>10.14569/IJACSA.2018.090112</doi>
        <lastModDate>2018-01-31T14:55:34.9200000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Seaweeds; optical characteristics; BRDF; polarization characteristics; foreign matter detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Optical properties of dried seaweed, such as spectral reflectance, bidirectional reflectance distribution functions and polarization characteristics, are clarified, together with an attempt at transparent foreign matter detection that considers the optical properties of dried seaweed: spectral transparency, bidirectional reflectance and transparency, as well as polarimetric properties. Through experiments, it is found that transparent foreign matter can be detected by using the bidirectional reflectance distribution function as well as polarization characteristics.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_12-Method_for_Detection_of_Foreign_Matters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Kit-Build Concept Map with Confidence Tagging in Practical Uses for Assessing the Understanding of Learners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090111</link>
        <id>10.14569/IJACSA.2018.090111</id>
        <doi>10.14569/IJACSA.2018.090111</doi>
        <lastModDate>2018-01-31T14:55:34.9030000+00:00</lastModDate>
        
        <creator>Jaruwat Pailai</creator>
        
        <creator>Warunya Wunnasri</creator>
        
        <creator>Yusuke Hayashi</creator>
        
        <creator>Tsukasa Hirashima</creator>
        
        <subject>Kit-Build concept map; confidence tagging; effect of confidence information; behavior changing of instructor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>A learner’s answer can be interpreted as learning evidence demonstrating the learner’s understanding, while confidence in the answer represents the learner’s belief as a degree of understanding. In this paper, we propose the Kit-Build concept map with confidence tagging. The Kit-Build concept map (KB map for short) is a digital tool for supporting a concept-mapping strategy in which learners can create learning evidence and the instructor can access the correctness and confidence information of learners. Practical uses were conducted to demonstrate the value of correctness and confidence information in lecture classes. The correctness information was visualized in the control classes, while both the correctness and confidence information were visualized in the experiment classes. The observed evidence illustrates that different information was used for selecting and ordering the supplementary content when the system visualized different information. The normalized learning gains and effect size demonstrate the different learning achievements between the control and experiment classes. The results suggest that the confidence information of learners affects instructor behavior, a positive behavior change aimed at improving the understanding of their learners. The questionnaire results suggest that the KB map with confidence tagging is an accepted mechanism for representing learners’ understanding and their confidence. The instructors also accepted that the confidence information of learners is valuable for recognizing the learning situation.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_11-Kit_Build_Concept_Map_with_Confidence_Tagging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improve Mobile Agent Performance by using Knowledge-Based Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090110</link>
        <id>10.14569/IJACSA.2018.090110</id>
        <doi>10.14569/IJACSA.2018.090110</doi>
        <lastModDate>2018-01-31T14:55:34.9030000+00:00</lastModDate>
        
        <creator>Tarig Mohamed Ahmed</creator>
        
        <creator>AL-Kharj</creator>
        
        <subject>Mobile agent; mobility; performance; intelligent system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Mobile agent technology is one of the mobile computing areas. This technology can be used in several types of applications, such as cloud computing, e-commerce, databases, distributed systems management, network management, etc. The purpose of this paper is to propose a new model for increasing the performance of mobile agent systems. Performance is considered one of the important factors that make a system reliable. This paper suggests a knowledge-based content to be used to improve the performance of mobile agent systems. This work began with an intensive survey of related models and mechanisms to investigate gaps in performance. A comparative discussion has been conducted between published research and the proposed model. The proposed model has been described in full detail based on its components. A scenario-based approach has been used to implement the proposed model using the .NET framework and the C# language. The model has been tested and evaluated based on different scenarios. As findings, the overall performance improved by 83% when the knowledge-based content was used. In addition, system performance will improve automatically over time because the content of the knowledge base increases. The proposed model is suitable for use in any type of mobile agent application. The originality of the model is based on the conducted survey and our own knowledge.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_10-Improve_Mobile_Agent_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multiclass Deep Convolutional Neural Network Classifier for Detection of Common Rice Plant Anomalies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090109</link>
        <id>10.14569/IJACSA.2018.090109</id>
        <doi>10.14569/IJACSA.2018.090109</doi>
        <lastModDate>2018-01-31T14:55:34.8730000+00:00</lastModDate>
        
        <creator>Ronnel R. Atole</creator>
        
        <creator>Daechul Park</creator>
        
        <subject>Deep neural network; convolutional neural network; rice; transfer learning; AlexNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>This study examines the use of a deep convolutional neural network in the classification of rice plants according to health status based on images of their leaves. A three-class classifier was implemented, representing normal, unhealthy, and snail-infested plants, via transfer learning from an AlexNet deep network. The network achieved an accuracy of 91.23%, using stochastic gradient descent with a mini-batch size of thirty (30) and an initial learning rate of 0.0001. Six hundred (600) images of rice plants representing the classes were used in training. The training and testing dataset images were captured from rice fields around the district and validated by technicians in the field of agriculture.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_9-A_Multiclass_Deep_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent-Based System for Efficient kNN Query Processing with Comprehensive Privacy Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090108</link>
        <id>10.14569/IJACSA.2018.090108</id>
        <doi>10.14569/IJACSA.2018.090108</doi>
        <lastModDate>2018-01-31T14:55:34.8570000+00:00</lastModDate>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Maher Khemakhem</creator>
        
        <creator>Kamal Jambi</creator>
        
        <subject>Agents; attacks; dummies; fragmentation; indexing; privacy protection; resistance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Recently, location based services (LBSs) have become increasingly popular due to advances in mobile devices and their positioning capabilities. In an LBS, the user sends a range of queries regarding his k-nearest neighbors (kNNs) that have common points of interest (POIs) based on his real geographic location. During the query sending, processing, and responding phases, private information may be collected by an attacker, either by tracking the real locations or by analyzing the sent queries. This compromises the privacy of the user and risks his/her safety in certain cases. Thus, the objective of this paper is to ensure comprehensive privacy protection, while also guaranteeing the efficiency of kNN query processing. Therefore, we propose an agent-based system for dealing with these issues. The system is managed by three software agents (selectorDL, fragmentorQ, and predictor). The selectorDL agent executes a Wise Dummy Selection Location (WDSL) algorithm to ensure location privacy. The mission of the selectorDL agent is integrated with that of the fragmentorQ agent, which is to ensure query privacy based on a Left-Right Fragmentation (LRF) algorithm. To guarantee the efficiency of kNN processing, the predictor agent executes a prediction phase depending on a Cell Based Indexing (CBI) technique. Compared to similar privacy protection approaches, the proposed WDSL and LRF approaches showed higher resistance against location homogeneity attacks and query sampling attacks. In addition, the proposed CBI indexing technique obtains more accurate answers to kNN queries than previous indexing techniques.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_8-Agent_based_System_for_Efficient_kNN_Query.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LOD Explorer: Presenting the Web of Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090107</link>
        <id>10.14569/IJACSA.2018.090107</id>
        <doi>10.14569/IJACSA.2018.090107</doi>
        <lastModDate>2018-01-31T14:55:34.8570000+00:00</lastModDate>
        
        <creator>Karwan Jacksi</creator>
        
        <creator>Subhi R. M. Zeebaree</creator>
        
        <creator>Nazife Dimililer</creator>
        
        <subject>Semantic web; linked open data; linked data browsers; exploratory search systems; RDF; SPARQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The quantity of data published on the Web according to the principles of Linked Data is increasing intensely. However, this data is still largely limited to use by domain professionals and users who understand Linked Data technologies. Therefore, it is essential to develop tools that enhance intuitive perception of Linked Data for lay users. The features of Linked Data pose various challenges for an easy-to-use data presentation. In this paper, Semantic Web and Linked Data technologies are overviewed, challenges to the presentation of Linked Data are stated, and LOD Explorer is presented with the aim of delivering a simple application to discover triplestore resources. Furthermore, it hides the technical challenges behind Linked Data and provides both specialist and non-specialist users with an interactive and effective way to explore RDF resources.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_7-LOD_Explorer_Presenting_the_Web_of_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cadastral and Tea Production Management System with Wireless Sensor Network, GIS based System and IoT Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090106</link>
        <id>10.14569/IJACSA.2018.090106</id>
        <doi>10.14569/IJACSA.2018.090106</doi>
        <lastModDate>2018-01-31T14:55:34.8430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Internet of Things; geographical information system; tea production; quality control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>A cadastral and tea production management system utilizing a wireless sensor network and Internet of Things (IoT) technology is proposed. To improve the efficiency of tea production, cadastral management and tea production processes must be managed by a Geographical Information System (GIS) based system. Through experiments with sensor-acquired data, it is found that the required information can be estimated and represented efficiently. Thus, the system works for the improvement of tea production management and quality control.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_6-Cadastral_and_Tea_Production_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Fork Visibility Performance on Programming Language Interoperability in Open Source Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090105</link>
        <id>10.14569/IJACSA.2018.090105</id>
        <doi>10.14569/IJACSA.2018.090105</doi>
        <lastModDate>2018-01-31T14:55:34.8270000+00:00</lastModDate>
        
        <creator>Bee Bee Chua</creator>
        
        <creator>D. Bernardo</creator>
        
        <subject>Open Source Programming Languages; K-nearest neighbors (KNN) Algorithm; interoperability; survivability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Despite the variety of programming languages adopted in open source (OS) projects, fork variation for some languages has been minimal and slow to develop, and there is little research as to why this is so. We therefore employed a K-nearest neighbours (KNN) technique to predict the fork visibility performance of a productive language from a pool of programming languages adopted in projects. In total, 38 showcase OS projects from 2012 to 2016 were downloaded from the GitHub website and categorized into different levels of programming language adoption clusters. Among the 33 languages, JavaScript is one of the popular languages adopted by the community. It is predicted that a language chosen when fork visibility is high can increase project longevity, as a highly visible language is likely to occur more often in projects with a significant number of interoperable programming languages and a high language fork count. Conversely, a low-fork language reduces longevity in projects with an insignificant number of interoperable programming languages and a low fork count. Our results reveal that the survival of a productive language is in response to high language visibility (a large fork number) and high interoperability of multiple programming languages.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_5-Predicting_Fork_Visibility_Performance_on_Programming_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice Detection in Traditionnal Tunisian Music using Audio Features and Supervised Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090104</link>
        <id>10.14569/IJACSA.2018.090104</id>
        <doi>10.14569/IJACSA.2018.090104</doi>
        <lastModDate>2018-01-31T14:55:34.8270000+00:00</lastModDate>
        
        <creator>Wissem Ziadi</creator>
        
        <creator>Hamid Amiri</creator>
        
        <subject>Tunisian voice timbre; audio features extraction; singing voice detection; sung/instrumental discrimination; supervised learning algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>The research presented in this paper aims to automatically detect the singing voice in traditional Tunisian music, taking into account the main characteristics of the sound of the voice in this particular music style. This means creating the possibility to automatically identify instrumental and singing sounds. Therefore, different methods for the automatic classification of sounds using supervised learning algorithms were compared and evaluated. The research is divided into four successive stages: first, the extraction of feature vectors from the audio tracks (through calculation of the parameters of sound perception), followed by the selection and transformation of relevant features for singing/instrumental discrimination; then, using learning algorithms, the instrumental and vocal classes were modeled from a manually annotated database; finally, the evaluation of the decision-making process (indexing) was applied to the test part of the database. The musical databases used for this study consist of extracts from the national sound archives of the Centre of Mediterranean and Arabic Music (CMAM) and recordings made especially for this research. The possibility to index audio data (classify/segment) into vocal and instrumental recognition allows for the retrieval of content-based information from musical databases.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_4-Voice_Detection_in_Traditional_Tunisian_Music.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Credit Card Fraud Detection using Deep Learning based on Auto-Encoder and Restricted Boltzmann Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090103</link>
        <id>10.14569/IJACSA.2018.090103</id>
        <doi>10.14569/IJACSA.2018.090103</doi>
        <lastModDate>2018-01-31T14:55:34.8100000+00:00</lastModDate>
        
        <creator>Apapan Pumsirirat</creator>
        
        <creator>Liu Yan</creator>
        
        <subject>Credit card; fraud detection; deep learning; unsupervised learning; auto-encoder; restricted Boltzmann machine; Tensorflow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Frauds have no constant patterns; fraudsters constantly change their behavior, so we need to use unsupervised learning. Fraudsters learn about new technology that allows them to execute frauds through online transactions. Fraudsters mimic the regular behavior of consumers, and fraud patterns change fast. Fraud detection systems therefore need to monitor online transactions using unsupervised learning, because some fraudsters commit frauds once through online mediums and then switch to other techniques. This paper aims to 1) focus on fraud cases that cannot be detected based on previous history or supervised learning, and 2) create a model based on a deep Auto-encoder and a restricted Boltzmann machine (RBM) that can reconstruct normal transactions in order to find anomalies that deviate from normal patterns. The proposed deep learning approach based on an auto-encoder (AE) is an unsupervised learning algorithm that applies backpropagation by setting the inputs equal to the outputs. The RBM has two layers: the input (visible) layer and the hidden layer. In this research, we use the Tensorflow library from Google to implement the AE and RBM, together with H2O for deep learning. The results are reported in terms of mean squared error, root mean squared error, and area under the curve.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_3-Credit_Card_Fraud_Detection_Using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inferring of Cognitive Skill Zones in Concept Space of Knowledge Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090102</link>
        <id>10.14569/IJACSA.2018.090102</id>
        <doi>10.14569/IJACSA.2018.090102</doi>
        <lastModDate>2018-01-31T14:55:34.7970000+00:00</lastModDate>
        
        <creator>Rania Aboalela</creator>
        
        <creator>Javed Khan</creator>
        
        <subject>Cognitive skill; bloom’s taxonomy; assessment of knowledge; concept space; concepts zones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>In this research, zones of the assessed knowledge domain are identified. Explicitly, these zones are known as Verified Skills, Derived Skills, and Potential Skills. In detail, the Verified Skills Zone is the set of tested concepts in the knowledge domain; the Derived Skills Zone is the set of concepts that are prerequisites to the tested concepts based on the cognitive skills relation; and the Potential Skills Zone is the set of concepts in the domain that have never been tested and are not prerequisites to the tested concepts, but are related to them through the cognitive skills relation. Identifying cognitive relations between the concepts in one domain simplifies the structure of the assessment, which helps to find the knowledge state of the assessed individual in a short time and with a minimum number of questions. The existence of the concepts in the assessment domain helps us to estimate the sets of concepts that are known, not known, ready to be known, or not ready to be known. In addition, it provides the output of the assessment in concept-centric values in addition to quantity values. The assessment result gives binary values for the assessed domain: “1” implies knowing the concept, whereas “0” implies not knowing it. The output is six sets of concepts: 1) Verified Known Skills; 2) Verified Not Known Skills; 3) Derived Known Skills; 4) Derived Not Known Skills; 5) Potential Known Skills; and 6) Potential Not Known Skills. An experiment was conducted to show the binary output of the assessed domain based on the participants’ answers to the questions asked. The results also highlight the efficiency of the assessment.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_2-Inferring_of_Cognitive_Skill_Zones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Methods for Resolving False Positives during the Detection of Fraudulent Activities on Stock Market Financial Discussion Boards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2018</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2018.090101</link>
        <id>10.14569/IJACSA.2018.090101</id>
        <doi>10.14569/IJACSA.2018.090101</doi>
        <lastModDate>2018-01-31T14:55:34.7500000+00:00</lastModDate>
        
        <creator>Pei Shyuan Lee</creator>
        
        <creator>Majdi Owda</creator>
        
        <creator>Keeley Crockett</creator>
        
        <subject>Financial discussion boards; financial crimes; pump and dump; text mining; information extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 9(1), 2018</description>
        <description>Financial discussion boards (FDBs) have been widely used for a variety of financial knowledge exchange activities through the posting of comments. Popular public FDBs are prone to being used as a medium to spread false financial information due to their larger audience groups. Although online forums are usually integrated with anti-spam tools, such as Akismet, moderation of posted content relies heavily on manual tasks. Unfortunately, the volume of daily comments received on popular FDBs realistically prevents human moderators from closely watching and moderating possibly fraudulent content, not to mention that moderators are not usually assigned such a task. Due to the absence of useful tools, it is extremely time-consuming and expensive to manually read each comment and determine whether it is potentially fraudulent. This paper presents novel forward and backward analysis methodologies implemented in an Information Extraction (IE) prototype system named FDBs Miner (FDBM). The methodologies aim to detect potentially illegal Pump and Dump comments on FDBs by integrating per-minute share prices into the detection process. This can reduce false positives during detection, as it categorises the potentially illegal comments into different risk levels for investigation purposes. The proposed system extracts companies’ ticker symbols (i.e. the unique symbols that represent and identify each listed company on the stock market), comments, and share prices from FDBs based in the UK. The forward analysis methodology flags potentially Pump and Dump comments using a predefined keywords template and labels the flagged comments with price hike thresholds. Subsequently, the backward analysis methodology employs a moving average technique to determine price abnormalities and backward-analyse the flagged comments. The first detection stage in forward analysis found 9.82% of comments to be potentially illegal. It is unrealistic and unaffordable for human moderators or financial surveillance authorities to read these comments on a daily basis. Hence, integrating share prices to perform backward analysis makes it possible to categorise the flagged comments into different risk levels. This helps the relevant authorities to prioritise and investigate the higher-risk flagged comments, which could potentially indicate a real Pump and Dump crime happening on FDBs when the system is used in real time.</description>
        <description>http://thesai.org/Downloads/Volume9No1/Paper_1-Novel_Methods_for_Resolving_False_Positives.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MulWiFi: Flexible Policy Enforcement in Multi-Radio High-Speed WiFi Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081269</link>
        <id>10.14569/IJACSA.2017.081269</id>
        <doi>10.14569/IJACSA.2017.081269</doi>
        <lastModDate>2017-12-30T17:07:09.7070000+00:00</lastModDate>
        
        <creator>Kamran Nishat</creator>
        
        <creator>Zartash Afzal Uzmi</creator>
        
        <creator>Ihsan Ayyub Qazi</creator>
        
        <subject>WLAN; IEEE 802.11; QoS; multi-radio; experimental analysis; testbed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>As data rates in 802.11 Wireless LANs (WLANs) scale to Gbps, it becomes increasingly challenging for a single radio resource to meet the goals of high MAC efficiency, service differentiation, and adaptability to diverse network scenarios. We present MulWiFi, a system that uses a centralized controller to manage multiple off-the-shelf radios in a single device for addressing challenges in high-speed WLANs. MulWiFi allows flexible policy enforcement based on application requirements and channel state information at each radio. MulWiFi offers the ability to (a) customize the MAC based on application requirements and network scenarios, (b) assign different roles to radios, (c) improve MAC throughput efficiency at high data rates, and (d) combine fragmented channels. We demonstrate the early promise of MulWiFi through three case studies and discuss future opportunities and challenges.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_69-MulWiFi_Flexible_Policy_Enforcement_in_Multi_Radio.Pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of SIFT and Convolutional Neural Network for Image Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081268</link>
        <id>10.14569/IJACSA.2017.081268</id>
        <doi>10.14569/IJACSA.2017.081268</doi>
        <lastModDate>2017-12-30T17:07:09.6770000+00:00</lastModDate>
        
        <creator>Varsha Devi Sachdeva</creator>
        
        <creator>Junaid Baber</creator>
        
        <creator>Maheen Bakhtyar</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <creator>Waheed Noor</creator>
        
        <creator>Abdul Basit</creator>
        
        <subject>Computer vision; SIFT; CNN; image retrieval; precision; recall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The Convolutional Neural Network (CNN) has gained a lot of attention from researchers due to its high accuracy in classification and feature learning. In this paper, we evaluate the performance of the CNN used as a feature for image retrieval against the gold-standard feature, SIFT. Experiments are conducted on the well-known Oxford 5k dataset. The mAP of SIFT and CNN is 0.6279 and 0.5284, respectively. The performance of the CNN is also compared with the bag of visual words (BoVW) model; the CNN achieves better accuracy than BoVW.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_68-Performance_Evaluation_of_SIFT_and_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Usability and User Trust on E-commerce Websites in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081267</link>
        <id>10.14569/IJACSA.2017.081267</id>
        <doi>10.14569/IJACSA.2017.081267</doi>
        <lastModDate>2017-12-30T17:07:09.6600000+00:00</lastModDate>
        
        <creator>Rohail Shehzad</creator>
        
        <creator>Zulqurnan Aslam</creator>
        
        <creator>Nadeem Ahmad</creator>
        
        <creator>Muhammad Waseem Iqbal</creator>
        
        <subject>Web usability; e-commerce; user trust; Pakistan; random; regular</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Web usability is an integral part of e-commerce. Users are less drawn to websites that are difficult to navigate and slow in response time. E-commerce business is growing aggressively on a daily basis, but a lack of user trust can impede this growth. The success of online business largely depends on gaining users’ trust. There are different techniques and models to measure web usability and user trust, but they do not cover all aspects of web usability. We therefore propose a new enhanced SUPR-Q model with six (6) parameters: Usability, Effectiveness, Efficiency, Learnability, Satisfaction, and Security. We performed an experiment with one hundred twenty (120) participants to measure web usability and user trust on two well-known e-commerce websites of Pakistan (daraz.pk &amp; homeshopping.pk). We divided our participants into two equal groups, Random and Regular, on the basis of their previous shopping exposure. Our results show that the usability scores of the Regular group, who shopped most frequently, were better than those of the Random group, which had less shopping experience. The Regular group was more satisfied with both websites, with scores of 46.8% on daraz.pk and 44.8% on homeshopping.pk, compared to the Random group. Both groups showed higher usability scores on daraz.pk, 45.2% for the Regular group and 40% for the Random group, due to the higher effectiveness and efficiency of its web interface. The overall results showed that trust in an e-commerce website plays a vital role in users’ satisfaction and purchasing.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_67-Web_Usability_and_User_Trust_on_E_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for External Preference Mapping Improvement by Denoising Consumer Rating Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081266</link>
        <id>10.14569/IJACSA.2017.081266</id>
        <doi>10.14569/IJACSA.2017.081266</doi>
        <lastModDate>2017-12-30T17:07:09.6300000+00:00</lastModDate>
        
        <creator>Ibtihel Rebhi</creator>
        
        <creator>Dhafer Malouche</creator>
        
        <subject>Data denoising; Regularized Principal Component Analysis; Stein’s Unbiased Risk Estimate; sensory analysis; external preference mapping stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this study, data denoising is advocated in the sensory analysis field to remove existing noise in consumer rating data before proceeding to External Preference Mapping (EPM). This technique is a data visualization method used to understand consumers’ sensory profiles by relating their preferences for products to external information about the sensory characteristics of the perceived products. The output is a perceptual map which visualizes the optimal products and the aspects that maximize consumers’ likings. Hence, EPM is considered a decision tool to support the development or improvement of products and respond to market requirements. In fact, the stability of the map is affected by the high variability of judgments, which makes consumer rating data very noisy. This may lead to a mismatch between product features and consumers’ preferences, and thus to distorted results and wrong decisions. To remove the existing noise, the use of filtering methods is proposed. Regularized Principal Component Analysis (RPCA) and Stein’s Unbiased Risk Estimate (SURE), based respectively on hard and soft thresholding rules, were applied to consumer rating data to separate the signal from the noise and retain only the useful information in the given liking scores. To compare the EPM obtained from each strategy, a sampling process was conducted to randomly select samples from the noisy and cleaned data and then perform their corresponding EPM. The stability of the obtained maps was evaluated using an indicator that computes and compares the distances between them before and after denoising. The effectiveness of this methodology was evaluated through a simulation study, and a potential application was shown on a real dataset. Results show that the distances recorded after denoising are lower than those before in almost all cases for both RPCA and SURE; however, RPCA outperforms SURE. The corresponding map is made more stable: level lines are smoothed and products are better located in liking zones. Hence, noise removal reduces variability in the data and brings preferences closer together, which improves the quality of the visualized map.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_66-An_Approach_for_External_Preference_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SAFFRON: A Semi-Automated Framework for Software Requirements Prioritization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081265</link>
        <id>10.14569/IJACSA.2017.081265</id>
        <doi>10.14569/IJACSA.2017.081265</doi>
        <lastModDate>2017-12-30T17:07:09.5970000+00:00</lastModDate>
        
        <creator>Syed Ali Asif</creator>
        
        <creator>Zarif Masud</creator>
        
        <creator>Rubaida Easmin</creator>
        
        <creator>Alim Ul Gias</creator>
        
        <subject>Requirement prioritization; rank reversal problem; model-based collaborative filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Due to the dynamic nature of current software development methods, changes in requirements are embraced and given proper consideration. However, this triggers the rank reversal problem, which involves re-prioritizing requirements based on stakeholders’ feedback. This incurs significant cost because of the time elapsed in a large number of human interactions. To solve this issue, a Semi-Automated Framework for soFtware Requirements priOritizatioN (SAFFRON) is presented in this paper. For a particular requirement, SAFFRON predicts appropriate stakeholders’ ratings to reduce human interactions. Initially, item-item collaborative filtering is utilized to estimate the similarity between new and previously elicited requirements. Using this similarity, the stakeholders who are most likely to rate requirements are determined. Afterwards, collaborative filtering based on a latent factor model is used to predict the ratings of those stakeholders. The proposed approach is implemented and tested on the RALIC dataset. The results illustrate a consistent correlation with the ground truth, similar to state-of-the-art approaches. In addition, SAFFRON requires 13.5-27% less human interaction for re-prioritizing requirements.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_65-SAFFRON_A_Semi_Automated_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey and Classification of Methods for Building a Semantic Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081264</link>
        <id>10.14569/IJACSA.2017.081264</id>
        <doi>10.14569/IJACSA.2017.081264</doi>
        <lastModDate>2017-12-30T17:07:09.5830000+00:00</lastModDate>
        
        <creator>Georges Lebbos</creator>
        
        <creator>Abd El Salam Al Hajjar</creator>
        
        <creator>Gilles Bernard</creator>
        
        <creator>Mohammad Hajjar</creator>
        
        <subject>Lexical semantics; WordNet; Arabic WordNet; Arabic corpus; synset; Arabic semantic resources; translation-based methods; ontologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Though Arabic is one of the five most spoken languages, little work has been done on building Arabic semantic resources. Currently, there is no agreed-upon method for building such a reliable Arabic semantic resource. The purpose of this paper is to present a comprehensive survey of different methods for building or enriching Arabic semantic resources; to study and analyze each method; and to categorize the methods according to their properties. This work should contribute to the definition of new methods and help researchers in Arabic semantics to situate their work among existing ones.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_64-Survey_and_Classification_of_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced K-mean Using Evolutionary Algorithms for Melanoma Detection and Segmentation in Skin Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081263</link>
        <id>10.14569/IJACSA.2017.081263</id>
        <doi>10.14569/IJACSA.2017.081263</doi>
        <lastModDate>2017-12-30T17:07:09.5500000+00:00</lastModDate>
        
        <creator>Asmaa Aljawawdeh</creator>
        
        <creator>Esraa Imraiziq</creator>
        
        <creator>Ayat Aljawawdeh</creator>
        
        <subject>Melanoma; genetic algorithm; K-mean; particle swarm optimization; classification; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Nowadays, Melanoma has become one of the most significant public health concerns. Malignant Melanoma (MM) is considered the most rapidly spreading type of skin cancer. In this paper, we have built models for the detection, segmentation, and classification of Melanoma in skin images using evolutionary algorithms. The first step was to enhance the K-mean algorithm using two kinds of Evolutionary Algorithms: a Genetic Algorithm and Particle Swarm Optimization. The enhanced algorithms and the default K-mean were then used separately to perform detection and segmentation on skin cancer images. A feature extraction step was then applied to the segmented images. Finally, the classification step was performed using two predictive models: the first was built using a backpropagation Neural Network, and the other using threshold values for selected features. The results showed the highest accuracy, 87.5%, for the backpropagation Neural Network combined with the K-mean enhanced by the Genetic Algorithm.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_63-Enhanced_K_Mean_using_Evolutionary_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rating Prediction with Topic Gradient Descent Method for Matrix Factorization in Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081262</link>
        <id>10.14569/IJACSA.2017.081262</id>
        <doi>10.14569/IJACSA.2017.081262</doi>
        <lastModDate>2017-12-30T17:07:09.5370000+00:00</lastModDate>
        
        <creator>Guan-Shen Fang</creator>
        
        <creator>Sayaka Kamei</creator>
        
        <creator>Satoshi Fujita</creator>
        
        <subject>Gradient descent; matrix factorization; Latent Dirichlet Allocation; information recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In many online review sites and social media, users are encouraged to assign a numeric rating and write a textual review as feedback on each product that they have bought. Based on users’ feedback history, recommender systems predict how they would assess unpurchased products, in order to discover the ones that they may like and buy in the future. A traditional approach to predicting the unknown ratings is matrix factorization, but it uses only the history of ratings included in the feedback. Recent research has pointed out that its ignorance of textual reviews is the drawback behind its mediocre performance. To solve this issue, we propose a rating prediction method which uses both the ratings and the reviews, including a new first-order gradient method for matrix factorization named Topic Gradient Descent (TGD). The proposed method first derives latent topics from the reviews via Latent Dirichlet Allocation. Each topic is characterized by a probability distribution over words and is assigned to a corresponding latent factor. Secondly, to predict the users’ ratings, it uses matrix factorization trained by the proposed TGD method. In the training process, the update step of each latent factor is dynamically assigned depending on the stochastic proportion of its corresponding topic in the review. For evaluation, we use both the YELP challenge dataset and per-category Amazon review datasets. The experimental results show that the proposed method reliably converges the squared error of the prediction and improves the performance of traditional matrix factorization by up to 12.23%.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_62-Rating_Prediction_with_Topic_Gradient_Descent_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Salahaddin University New Campus IP-Network Design using OPNET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081261</link>
        <id>10.14569/IJACSA.2017.081261</id>
        <doi>10.14569/IJACSA.2017.081261</doi>
        <lastModDate>2017-12-30T17:07:09.5200000+00:00</lastModDate>
        
        <creator>Ammar. O. Hasan</creator>
        
        <creator>Raghad Z.Yousif</creator>
        
        <creator>Rakan S.Rashid</creator>
        
        <subject>Fiber Distributed Data Interface (FDDI); Optimized Network Engineering Tool (OPNET); File Transfer Protocol (FTP); Virtual Local Area Networks (VLAN); campus network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Salahaddin University is the oldest and biggest university in the Kurdistan region. It comprises 14 colleges and 3 academic centers. The new university campus, to be established on an area of 10 km2, poses the challenge of designing an efficient and robust networking infrastructure due to the increased demand for data and data-processing applications such as FTP (File Transfer Protocol), a vital protocol in an academic environment. At the access layer (college level), a Wireless Local Area Network (WLAN) is employed to provide Wi-Fi to end users for ultimate mobility. At the backbone level, four scenarios have been proposed and tested for the proposed university campus. These scenarios connect each college Wi-Fi router to a Cisco core switch (6509). The first scenario uses optical fiber cable 1000Base-LX (Gigabit Ethernet), while the second scenario uses a VLAN (Virtual Local Area Network) based core switch connected with Gigabit Ethernet cables. The third scenario uses FDDI (Fiber Distributed Data Interface) technology. The fourth scenario combines the VLAN-based core switch and FDDI. In all four scenarios, the core switch is connected to the main Cisco router (7507), which connects the campus network to the cloud. The network performance and behavior have been studied by calculating network load, throughput, and delay. The system has been implemented using the OPNET (Optimized Network Engineering Tool) simulator, version 14.5. The simulation results show that the fourth scenario gives the minimum delay as well as the maximum data transfer (throughput).</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_61-An_Optimized_Salahaddin_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image noise Detection and Removal based on Enhanced GridLOF Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081260</link>
        <id>10.14569/IJACSA.2017.081260</id>
        <doi>10.14569/IJACSA.2017.081260</doi>
        <lastModDate>2017-12-30T17:07:09.4900000+00:00</lastModDate>
        
        <creator>Ahmed M. Elmogy</creator>
        
        <creator>Eslam Mahmoud</creator>
        
        <creator>Fahd A. Turki</creator>
        
        <subject>Outlier detection; image noise removal; LOF; GridLOF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Image noise removal is a major task in image processing, as noise can harm any information inferred from the image, especially when the noise level is high. Although many outlier detection approaches exist for this task, further enhancements are needed to achieve better performance, specifically in terms of time. This paper proposes a new algorithm to detect and remove noise from images based on an enhanced version of the GridLOF algorithm. The enhancement aims to reduce the time and complexity of the algorithm while attaining comparable accuracy. Simulation results on a set of different images show that the proposed algorithm achieves the standard accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_60-Image_Noise_Detection_and_Removal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multivariate Statistical Analysis on Anomaly P2P Botnets Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081259</link>
        <id>10.14569/IJACSA.2017.081259</id>
        <doi>10.14569/IJACSA.2017.081259</doi>
        <lastModDate>2017-12-30T17:07:09.4570000+00:00</lastModDate>
        
        <creator>Raihana Syahirah Binti Abdullah</creator>
        
        <creator>Faizal M. A.</creator>
        
        <creator>Zul Azri Muhamad Noh</creator>
        
        <subject>P2P botnets; anomaly-based; chi-square; multivariate; statistical-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The botnet population is growing rapidly, and botnets have become a huge threat on the Internet. Botnets have been classed among Advanced Malware (AM) and Advanced Persistent Threat (APT) attacks, able to exploit advanced technology, and the intricacy of these threats calls for continuous detection and protection. Such attacks are almost exclusively motivated by financial gain. P2P botnets are bots that use P2P technology to accomplish certain tasks. The evolution of P2P technology has made P2P botnets more resilient and robust than centralized botnets, which poses a big challenge for detection and defence. In order to detect these botnets, a complete flow analysis is necessary. In this paper, we propose anomaly detection through chi-square multivariate statistical analysis, currently focused on time duration and time slot; these particular time features are considered to identify the existence of a botserver. We examine both the host level and the network level to capture the coordination within a P2P botnet and the malicious behaviour each bot exhibits when making detection decisions. The results of the statistical approach show high detection accuracy and a low false positive rate, making it a promising approach for revealing botservers.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_59-Multivariate_Statistical_Analysis_on_Anomaly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: An Improved Homomorphic Encryption for Secure Cloud Data Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081258</link>
        <id>10.14569/IJACSA.2017.081258</id>
        <doi>10.14569/IJACSA.2017.081258</doi>
        <lastModDate>2017-12-30T17:07:09.4430000+00:00</lastModDate>
        
        <creator>Mohd Rahul</creator>
        
        <creator>Hesham A. Alhumyani</creator>
        
        <creator>Mohd Muntjir</creator>
        
        <creator>Minakshi Kambojl</creator>
        
        <subject>Clouds; cloud computing; issues; security; homomorphic encryption; RSA; EIGamal; Paillier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2017.081258.retraction</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_58-An_Improved_Homomorphic_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Lingual Sentiment Classification from English to Arabic using Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081257</link>
        <id>10.14569/IJACSA.2017.081257</id>
        <doi>10.14569/IJACSA.2017.081257</doi>
        <lastModDate>2017-12-30T17:07:09.4270000+00:00</lastModDate>
        
        <creator>Adel Al-Shabi</creator>
        
        <creator>Aisah Adel</creator>
        
        <creator>Nazlia Omar</creator>
        
        <creator>Tareq Al-Moslmi</creator>
        
        <subject>Cross-lingual sentiment classification; English to Arabic; machine translation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Cross-lingual sentiment learning is becoming increasingly important due to the multilingual nature of user-generated content on social media and the scarcity of resources for languages other than English. However, cross-lingual sentiment learning is a challenging task due to the distribution difference between translated and original data and due to the language gap, i.e., each language has its own ways of expressing sentiment. This work explores the adaptation of English resources for sentiment analysis to a new language, Arabic. The aim is to design a light model for cross-lingual sentiment classification from English to Arabic, without any manual annotation effort, which at the same time is easy to build and does not require deep linguistic analysis. The ultimate goal is to find an optimal baseline model and to determine the relation between the noise in the translated data and the accuracy of sentiment classification. Different configurations of several factors are investigated, including feature representation, feature reduction methods, and the learning algorithms, to find the optimal baseline model. Experiments show that a good classification model can be obtained from translated data regardless of the artificial noise added by machine translation. The results also show a significant cost to automation, and thus the best path to future enhancement is through the inclusion of language-specific knowledge and resources.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_57-Cross_Lingual_Sentiment_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Firefly Algorithm for the Mono-Processors Hybrid Flow Shop Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081256</link>
        <id>10.14569/IJACSA.2017.081256</id>
        <doi>10.14569/IJACSA.2017.081256</doi>
        <lastModDate>2017-12-30T17:07:09.3970000+00:00</lastModDate>
        
        <creator>Latifa DEKHICI</creator>
        
        <creator>Khaled BELKADI</creator>
        
        <subject>Firefly algorithm; hybrid flow shop; metaheuristics; discrete optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Nature-inspired swarm metaheuristics have become some of the most powerful methods for optimization. In discrete optimization, the efficiency of an algorithm depends on how well it is adapted to the problem. This paper provides a discretization of the Firefly Algorithm (FF) for the scheduling of a specific manufacturing system, the mono-processor two-stage hybrid flow shop (HFS). This kind of manufacturing system appears in several fields, such as the operating theatre scheduling problem. Results of the proposed discrete firefly algorithm are compared with results of other methods found in the literature. Computational results with different numbers of fireflies on a standard HFS benchmark of about 55 cases, generating about 1900 instances, demonstrate that the proposed discretized metaheuristic reaches the best makespan.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_56-A_Firefly_Algorithm_for_the_Mono_Processors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis for Generalized Additive and Generalized Linear Modeling in Epidemiology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081255</link>
        <id>10.14569/IJACSA.2017.081255</id>
        <doi>10.14569/IJACSA.2017.081255</doi>
        <lastModDate>2017-12-30T17:07:09.3800000+00:00</lastModDate>
        
        <creator>Talmoudi Khouloud</creator>
        
        <creator>Bellali Hedia</creator>
        
        <creator>Ben-Alaya Nissaf</creator>
        
        <creator>Saez Marc</creator>
        
        <creator>Malouche Dhafer</creator>
        
        <creator>Chahed Mohamed Kouni</creator>
        
        <subject>Generalized linear model; generalized additive model; zoonotic cutaneous leishmaniasis; Central Tunisia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Most environmental-epidemiological research emphasizes modeling the causal link of different events (e.g., hospital admission, death, disease emergence). There has been particular interest in the use of Generalized Linear Models (GLMs) in the field of epidemiology. However, recent studies in this field have highlighted the use of non-parametric techniques, especially Generalized Additive Models (GAMs). The aim of this work is to compare the performance of both methods in the field of epidemiology. The comparison addresses both how sharply each method captures the relation between the predictors and the response variable and how well it predicts outbreaks. The more suitable method is then adopted to elucidate the impact of bioclimatic factors on the emergence of zoonotic cutaneous leishmaniasis (ZCL) in Central Tunisia. Monthly epidemiologic and bioclimatic data from July 2009 to June 2016 are used in this study. The Akaike information criterion, R-squared, and the F-statistic are used to compare model performance, while the root mean square error is used to check the predictive accuracy of both models. Our results show the potential of the GAM to provide a better assessment of nonlinear relations and higher predictive accuracy compared to GLMs. The results also stress the inaccurate estimation of risk factors when linear trends are used to model nonlinearly structured data.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_55-Comparative_Performance_Analysis_for_Generalized_Additive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Unique Method (WUM): An Open Source Blackbox Scanner for Detecting Web Vulnerabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081254</link>
        <id>10.14569/IJACSA.2017.081254</id>
        <doi>10.14569/IJACSA.2017.081254</doi>
        <lastModDate>2017-12-30T17:07:09.3500000+00:00</lastModDate>
        
        <creator>Muhammad Noman khalid</creator>
        
        <creator>Muhammad Iqbal</creator>
        
        <creator>Muhammad Talha Alam</creator>
        
        <creator>Vishal Jain</creator>
        
        <creator>Hira Mirza</creator>
        
        <creator>Kamran Rasheed</creator>
        
        <subject>Automated vulnerability detection; black-box scanners; web vulnerabilities crawling; security scanner</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The internet has provided a vast range of benefits to society and empowers people in a variety of ways. Due to the incredible growth of Internet usage in the past two decades, a number of new web applications become part of the World Wide Web every day. The distributed and open nature of the internet attracts hackers seeking to interrupt the smooth services of web applications. Some of the best-known web application vulnerabilities are SQL Injection, Cross-Site Scripting (XSS), and Cross-Site Request Forgery (CSRF). We believe that in order to counter these vulnerabilities, a web application vulnerability scanner should have strong detection and prevention rules. At present, a number of web application vulnerability scanners have been proposed by the research community, such as ZED Attack Proxy (ZAP) by OWASP, Wapiti by sourceforge.net, and w3af by w3af.org. However, these scanners cannot address all web vulnerabilities. This research proposes and develops a vulnerability scanning tool, WUM (Web Unique Method), for the detection and prevention of all the major vulnerability instances, and demonstrates how to detect unauthorized access by finding vulnerabilities. With the efficient use of this tool, developers are able to find potentially vulnerable web applications. WUM achieved a high level of accuracy and compatibility, which is elaborated below. The experimental results show that the proposed vulnerability scanner WUM gives fewer false positives and detects more vulnerabilities in comparison with well-known black-box scanners.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_54-Web_Unique_Method_WUM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recent Approaches to Enhance the Efficiency of Ultra-Wide Band MAC Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081253</link>
        <id>10.14569/IJACSA.2017.081253</id>
        <doi>10.14569/IJACSA.2017.081253</doi>
        <lastModDate>2017-12-30T17:07:09.3330000+00:00</lastModDate>
        
        <creator>Mohamed Ali Ahmed</creator>
        
        <creator>H. M. Bahig</creator>
        
        <creator>Hassan Al-Mahdi</creator>
        
        <subject>Ultra-wide band (UWB); Wireless Personal Area Network (WPAN); Wireless Sensor Network (WSN); Wireless Body Area Network (WBAN); Medium Access Control (MAC); performance metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Ultra-wide band (UWB) is a promising radio technology for transmitting large amounts of data over short distances between different digital devices or between individual components of a personal computer. Due to its remarkable features, UWB technology attracts vast research and application interest, for instance in Wireless Personal Area Networks (WPANs), Wireless Sensor Networks (WSNs), Wireless Body Area Networks (WBANs) as a special case of WSNs, and Wireless Local Area Networks (WLANs). In this article, we study the assumptions and performance metrics of recent Medium Access Control (MAC) protocol schemes employed in UWB applications that aim to improve its performance. We also compare the different approaches used in recent works based on ten parameters, including application domain, cast type, protocol centralization, number of hops, mobility, number of used channels, uniformity, priority, and analytical approach. Finally, we introduce different approaches to improve UWB applications.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_53-Recent_Approaches_to_Enhance_the_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fish Image Segmentation Algorithm (FISA) for Improving the Performance of Image Retrieval System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081252</link>
        <id>10.14569/IJACSA.2017.081252</id>
        <doi>10.14569/IJACSA.2017.081252</doi>
        <lastModDate>2017-12-30T17:07:09.3030000+00:00</lastModDate>
        
        <creator>Amanullah Baloch</creator>
        
        <creator>Mushstaq Ali</creator>
        
        <creator>Faqir Gul</creator>
        
        <creator>Sadia Basir</creator>
        
        <creator>Ibrar Afzal</creator>
        
        <subject>Fish body parts segmentation; local and global features; object extraction; image retrieval system; image features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Image features (local and global) play a vital role in image retrieval systems. The effectiveness of these features depends on the application domain, i.e., in some domains the global features generate better results, while in others the local features give good results. Different species of fish have different color, texture, and shape features in their body parts (head, abdomen, and tail). Previously, most of the work in the fish image domain has been done using global features. This work claims that a fish image retrieval system using local features can generate better results than one using global features, because a fish image has different features in its different body parts. In this research, a fish image segmentation algorithm is proposed to extract the fish object from its background and then separate it into three distinct body parts, i.e. head, abdomen, and tail. The proposed algorithm was tested on a subset of the “QUT_fish_data” dataset containing 369 fish of various sizes from 30 species. The experimental results showed an accuracy of 87.5% on fish image segmentation and demonstrated the effectiveness of local features over global features.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_52-Fish_Image_Segmentation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Wearable Smart Sensor for Improving e-Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081251</link>
        <id>10.14569/IJACSA.2017.081251</id>
        <doi>10.14569/IJACSA.2017.081251</doi>
        <lastModDate>2017-12-30T17:07:09.2870000+00:00</lastModDate>
        
        <creator>Vijey Thayananthan</creator>
        
        <creator>Abdullah Basuhail</creator>
        
        <subject>Smart sensors; miniaturized devices; e-healthcare applications; MIMO; manifolds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Analyzing health conditions using sensors is one of the daily activities in a healthcare organization. The purpose of this research is to improve e-healthcare through the integration of wearable smart sensors and miniaturized devices. In this research, monitoring the glucose level of a diabetic patient is considered as an example of a non-linear problem, for which we show that the accuracy and efficiency of e-healthcare can be achieved through a multiple-input, multiple-output (MIMO) system. In this novel technique, Pn-manifolds, a non-linear mathematical approach, provide a flexible rate and enhance the accuracy and efficiency of medical systems in e-healthcare services.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_51-Integration_of_Wearable_Smart_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining the Impact of Feature Selection Methods on Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081250</link>
        <id>10.14569/IJACSA.2017.081250</id>
        <doi>10.14569/IJACSA.2017.081250</doi>
        <lastModDate>2017-12-30T17:07:09.2570000+00:00</lastModDate>
        
        <creator>Mehmet Fatih KARACA</creator>
        
        <creator>Safak BAYIR</creator>
        
        <subject>Feature selection; text classification; text mining; k-Nearest Neighbors; support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Feature selection, which aims to determine and select the distinctive terms that best represent a document, is one of the most important steps of classification. With feature selection, the dimension of document vectors is reduced and, consequently, the duration of the process is shortened. In this study, feature selection methods were examined in terms of dimension reduction rates, classification success rates, and the relation between dimension reduction and classification success. As classifiers, kNN (k-Nearest Neighbors) and SVM (Support Vector Machines) were used. Five standard (Odds Ratio-OR, Mutual Information-MI, Information Gain-IG, Chi-Square-CHI, and Document Frequency-DF), two combined (Union of Feature Selections-UFS and Correlation of Union of Feature Selections-CUFS), and one new (Sum of Term Frequency-STF) feature selection methods were tested. The experiments were performed by selecting 100 to 1000 terms (in increments of 100 terms) from each class. It was seen that kNN produces much better results than SVM. STF was found to be the most successful feature selection method considering the average values in both datasets. It was also found that CUFS, a combined model, reduces the dimension the most; accordingly, CUFS classifies the documents more successfully, with fewer terms and in a shorter period, than many of the standard methods.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_50-Examining_the_Impact_of_Feature_Selection_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Support System for Early-Stage Diabetic Retinopathy Lesions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081249</link>
        <id>10.14569/IJACSA.2017.081249</id>
        <doi>10.14569/IJACSA.2017.081249</doi>
        <lastModDate>2017-12-30T17:07:09.2230000+00:00</lastModDate>
        
        <creator>Kemal AKYOL</creator>
        
        <creator>Safak BAYIR</creator>
        
        <creator>Baha SEN</creator>
        
        <subject>Early stage diabetic retinopathy lesions; feature extraction; important features; image recognition; classification; decision support system; computer aided analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The retina is a network layer containing light-sensitive cells. Diseases that occur in this layer, which performs the eye-sight function, threaten our eye-sight directly. Diabetic Retinopathy is one of the main complications of diabetes mellitus, and it is the most significant factor contributing to blindness in the later stages of the disease. Therefore, early diagnosis is of great importance to prevent the progress of this disease. For this purpose, an application based on image processing techniques and machine learning, which provides decision support to specialists, was developed in this study for the detection of hard exudates, cotton spots, hemorrhages, and microaneurysm lesions which appear in the early stages of the disease. Meaningful information was extracted from a set of samples obtained from the DIARETDB1 dataset during the system modeling process. In this process, Gabor and Discrete Fourier Transform attributes were utilized, and dimension reduction was performed using the Spectral Regression Discriminant Analysis algorithm. Then, the performances of the Random Forest and Logistic Regression classifier algorithms were evaluated on each attribute dataset. Experimental results were obtained using retinal fundus images provided both by the DIARETDB1 dataset and by the Department of Ophthalmology, Ataturk Training and Research Hospital in Ankara.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_49-A_Decision_Support_System_for_Early_Stage_Diabetic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Summarization of Multi-Aspect Comments in Social Networks in Persian Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081248</link>
        <id>10.14569/IJACSA.2017.081248</id>
        <doi>10.14569/IJACSA.2017.081248</doi>
        <lastModDate>2017-12-30T17:07:09.2070000+00:00</lastModDate>
        
        <creator>Hossein Shahverdian</creator>
        
        <creator>Hassan Saneifar</creator>
        
        <subject>Text mining; comments analysis; summarization; graph summarization; Persian language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Nowadays, there is an increasingly huge amount of user-generated comments on the web. User-generated comments usually contain useful and essential information reflecting the public's or customers' opinions. Since the information in the comments can be used for decision making, product or service improvement, and achieving user satisfaction, the systematic analysis of these comments is an essential need in many domains, including e-commerce, production, and social network analysis. However, the analysis of a large volume of comments is a difficult and time-consuming task. Therefore, the need for a system which can convert this massive volume of comments into a useful and efficient summary is felt more and more. Text summarization makes it possible to use more resources at higher speeds and to obtain richer information. Among the numerous studies conducted in the field of multi-document summarization, few have focused on user-generated comments in the Persian language. In this paper, we propose a novel approach to summarizing a huge amount of comments in Persian that comes close to a human summarization. Our approach is based on semantic and lexical similarities and uses graph-based summarization. We also propose a clustering step to deal with multiple aspects (subjects) in a corpus of comments. According to the experiments, the summaries extracted by the proposed approach reached an average score of 8.75 out of 10, which improves on the state-of-the-art summarizer's score by about 14 percent.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_48-Text_Summarization_Multi_Aspect_Comments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relevance of Energy Efficiency Gain in Massive MIMO Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081246</link>
        <id>10.14569/IJACSA.2017.081246</id>
        <doi>10.14569/IJACSA.2017.081246</doi>
        <lastModDate>2017-12-30T17:07:09.1770000+00:00</lastModDate>
        
        <creator>Ahmed Alzahrani</creator>
        
        <creator>Vijey Thayananthan</creator>
        
        <creator>Muhammad Shuaib Qureshi</creator>
        
        <subject>Massive MIMO; manifolds; EE gain; feedback; convergence; quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Massive MIMO and energy efficiency (EE) analysis for next generation technology are among the hottest topics in wireless network research. This paper discusses massive MIMO wireless networks and EE for manifold channels, an evolution of massive MIMO. This research will help to design and implement a practical next generation network system based on massive MIMO, where efficient processing provides an EE gain. To approach this research, different types of manifolds are considered, with efficient techniques that depend on the rank of the channel matrix. Employing the specific manifold that helps to analyze the feedback rate increases not only the overall performance of the MIMO system but also the EE. We studied the convergence techniques used for optimizing quantization errors, which influence manifold feedback. Here, we have focused on relevant areas which are very important for analyzing EE gain in future massive networks. According to the selected results obtained in this research, useful proposals will be possible for many of these challenges.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_46-Relevance_of_Energy_Efficiency_Gain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smartwatch Centric Social Networking Application for Alzheimer People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081247</link>
        <id>10.14569/IJACSA.2017.081247</id>
        <doi>10.14569/IJACSA.2017.081247</doi>
        <lastModDate>2017-12-30T17:07:09.1770000+00:00</lastModDate>
        
        <creator>Henda Chorfi Ouertani</creator>
        
        <creator>Ahlam Al-Mutairi</creator>
        
        <creator>Fatimah Al-Shehri</creator>
        
        <creator>Ghadah Al-Hammad</creator>
        
        <creator>Maram Al-Suhaibani</creator>
        
        <creator>Munira Al-Holaibah</creator>
        
        <creator>Nouf Al-Saleh</creator>
        
        <subject>Smartwatch; Alzheimer; broadcast; tracking; social application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In recent years, people are increasingly interacting with an overwhelming number of devices, especially wearable devices. The latter have opened new ways for humans to connect with the world. The basic concept of these devices is that they can be worn instead of being carried; they have a pervasive and unobtrusive presence. Furthermore, they can be used to help people with serious medical conditions, such as people with Alzheimer's disease. In this context, an application is proposed that allows the relatives of an Alzheimer's patient to locate and track the patient by connecting the application to a GPS smartwatch worn by the patient. A safety area is predefined by the relative so that they are notified once the patient goes beyond this area. When there is no connection with the GPS watch for any reason and the patient cannot be tracked, the relatives can send a broadcast message to all application users who wish to help and participate in such social and humanitarian work.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_47-A_Smartwatch_Centric_Social_Networking_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cascaded H-Bridge Multilevel Inverter with SOC Battery Balancing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081245</link>
        <id>10.14569/IJACSA.2017.081245</id>
        <doi>10.14569/IJACSA.2017.081245</doi>
        <lastModDate>2017-12-30T17:07:09.1470000+00:00</lastModDate>
        
        <creator>Khalili Tajeddine</creator>
        
        <creator>Raihani Abdelhadi</creator>
        
        <creator>Bouattane Omar</creator>
        
        <creator>Ouajji Hassan</creator>
        
        <subject>Cascaded H-Bridge; multilevel inverter; battery discharge; SOC balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this paper, we present a single-phase, five-level cascaded H-Bridge multilevel inverter (CHMLI) with a battery balancing technique. Each full bridge is directly connected to a battery inside the power bank. The different combinations and battery wiring sets offer the possibility of controlling the battery discharge. The cascaded H-Bridge multilevel inverter is first described, and the discharge is studied in normal conditions under different stress scenarios. State of charge (SOC) balancing is then achieved using an equalization algorithm that controls the different switching combinations inside the power bank. Results of the simulation model with and without SOC balancing are presented using Matlab.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_45-A_Cascaded_H_Bridge_Multilevel_Inverter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of a Simplex Method with an Artificial basis in Modeling of Flour Mixtures for Bakery Products</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081244</link>
        <id>10.14569/IJACSA.2017.081244</id>
        <doi>10.14569/IJACSA.2017.081244</doi>
        <lastModDate>2017-12-30T17:07:09.1130000+00:00</lastModDate>
        
        <creator>Natalia A. Berezina</creator>
        
        <creator>Andrey V. Artemov</creator>
        
        <creator>Igor A. Nikitin</creator>
        
        <creator>Igor V. Zavalishin</creator>
        
        <creator>Andrey N. Ryazanov</creator>
        
        <subject>Modeling; simplex method; polycomponent flour mixture; bakery products; biological value; quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Flour mixtures for bakery products of increased biological value are modeled. The problem is solved by a simplex method with an artificial basis, a numerical optimization method for solving linear programming problems. A mathematical model of the composition of a polycomponent flour mixture has been constructed; the model takes into account the minimal amount of essential amino acids. An automated scientific research system for modeling the composition of flour mixtures with specified functional characteristics was developed and implemented. The composition of flour mixes for bakery products has been optimized according to the target values of the amino acid score and biological value. Application of the developed software package allows creating prescription compounds for rye-wheat bread with a 6.12-17.66% higher biological value than traditional bakery products.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_44-The_Use_of_a_Simplex_Method_with_an_Artificial_Basis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creation and Usability Evaluating of E-Learning Contents for Automobile Repair Block Painting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081243</link>
        <id>10.14569/IJACSA.2017.081243</id>
        <doi>10.14569/IJACSA.2017.081243</doi>
        <lastModDate>2017-12-30T17:07:09.1000000+00:00</lastModDate>
        
        <creator>Shigeru Ikemoto</creator>
        
        <creator>Yuka Takai</creator>
        
        <creator>Hiroyuki Hamada</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Automobile repair; block painting; expert; e-learning; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Because paintwork in the automobile repair industry requires individual treatment of each case, work by human hands is indispensable. Although the skills of expert engineers greatly influence the finish, learning these skills requires a lot of experience and time. In Japan, the number of young people in the automobile mechanic and repair industry is drastically decreasing due to declining birthrates, the trend of young people turning away from driving cars, and the diversification of occupational options. Moreover, the mechanics and repair technicians are aging, and the average age of those remaining in the industry increases every year. In the near future, there is a high possibility that the shortage of human resources supporting this industry will become apparent. In this study, we aimed to construct a self-study support system that helps young engineers engaged in automobile repair painting acquire skills, using e-learning teaching materials based on motion analysis data of block painting with solid paint performed by experienced engineers. Furthermore, the usability of the teaching materials was clarified from the viewer characteristics obtained by publishing the materials. The e-learning materials, which secured a certain number of repeat viewers, showed potential as effective teaching material. At the same time, several tasks were also highlighted, such as shortening the playback time of the teaching materials.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_43-Creation_and_Usability_Evaluating_of_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Biometrics Recognition based on Image Local Features Ordinal Encoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081242</link>
        <id>10.14569/IJACSA.2017.081242</id>
        <doi>10.14569/IJACSA.2017.081242</doi>
        <lastModDate>2017-12-30T17:07:09.0670000+00:00</lastModDate>
        
        <creator>Simina Emerich</creator>
        
        <creator>Bogdan Belean</creator>
        
        <subject>Biometrics; image local features; ordinal measurements; iris; dorsal hand veins</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In the present information era, with the continued expansion of embedded computing systems, the demand for faster and more robust image descriptors is an important issue. However, image representation and recognition remains an open problem. The aim of this paper is to embrace ordinal measurements for image analysis and to apply the concept to a real problem, such as biometric identification. Biometrics provides a robust solution for the identity management process and is increasingly present in our lives. To explore the textural discriminative information of images, the paper proposes a new feature extraction technique, namely, Image Local Features Ordinal Encoding.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_42-Biometrics_Recognition_based_on_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Learning Control for Trajectory Tracking of Single-link Flexible Arm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081241</link>
        <id>10.14569/IJACSA.2017.081241</id>
        <doi>10.14569/IJACSA.2017.081241</doi>
        <lastModDate>2017-12-30T17:07:09.0530000+00:00</lastModDate>
        
        <creator>Marouen Mejerbi</creator>
        
        <creator>Sameh Zribi</creator>
        
        <creator>Jilani Knani</creator>
        
        <subject>Single-link flexible arm; finite element; trajectory tracking; iterative learning control; linear matrix inequality; bounded real lemma</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>This paper focuses on the issue of tracking the trajectory of a flexible arm. The purpose is to ensure that the flexible arm follows the desired path in the joint space. To achieve this objective, three problems must be solved: modeling, control, and trajectory planning. As in the case of rigid robots, the Euler-Lagrange formulation remains valid, with the exception that the flexible arm is divided into a finite number of elements to model the deformation. The iterative learning control scheme can be used to achieve perfect tracking throughout the movement period; a sufficient condition based on the bounded real lemma that guarantees convergence of the error between iterations is given. All the results are presented in terms of linear matrix inequality (LMI) synthesis.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_41-Iterative_Learning_Control_for_Trajectory_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>D-MFCLMin: A New Algorithm for Extracting Frequent Conceptual Links from Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081240</link>
        <id>10.14569/IJACSA.2017.081240</id>
        <doi>10.14569/IJACSA.2017.081240</doi>
        <lastModDate>2017-12-30T17:07:09.0200000+00:00</lastModDate>
        
        <creator>Hamid Tabatabaee</creator>
        
        <subject>Social network analysis; frequent conceptual link; data mining; graph mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Massive amounts of data in social networks have made researchers look for ways to display a summary of the information they provide and to extract knowledge from them. One of the new approaches to describing knowledge of a social network is a concise structure called the conceptual view. To build this view, conceptual links must first be extracted from the intended network. However, extracting these links from large-scale networks is very time consuming. In this paper, a new algorithm for extracting frequent conceptual links from social networks is provided; by introducing the concept of dependency, it tries to accelerate the process of extracting conceptual links. Although the proposed algorithm can accelerate this process when there are dependencies between data, tests carried out on the Pokec social network, which lacks dependency between its data, revealed that the absence of dependency increases the execution time of extracting conceptual links by only up to 15 percent.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_40-D_MFCLMin_A_New_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motivators and Demotivators of Agile Software Development: Elicitation and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081239</link>
        <id>10.14569/IJACSA.2017.081239</id>
        <doi>10.14569/IJACSA.2017.081239</doi>
        <lastModDate>2017-12-30T17:07:09.0070000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Ghayyur</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Adnan Naseem</creator>
        
        <creator>Abdul Razzaq</creator>
        
        <subject>Agile methodology; systematic mapping; motivators; demotivators; Agile teams; Agile software development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Motivators and demotivators are key factors in software productivity. Both are also critical to the success of Agile software development. The literature reports very diverse and multidimensional critical factors affecting the quality of Agile software development; thus, there is a need to extract and map the required factors systematically for wider implications. Classification of the anticipated factors and sub-factors is also desired to simplify their identification and definition. The reported research focuses on the systematic mapping of motivators and demotivators in Agile software development. A systematic mapping literature study has been performed to shed light on the scattered critical factors affecting software engineers' productivity and understanding of Agile viewpoints. Additionally, this study categorizes the extracted motivators as organizational, people, and technical. Particular attention is given to the categorization of sub-factors, which contribute to the motivators at the grass-roots level. This research alleviates the problems of identifying, defining, and classifying the critical factors in Agile software development for both practitioners and researchers.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_39-Motivators_and_Demotivators_of_Agile_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of ANN Techniques for Predicting Channel Frequencies in Cognitive Radio</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081238</link>
        <id>10.14569/IJACSA.2017.081238</id>
        <doi>10.14569/IJACSA.2017.081238</doi>
        <lastModDate>2017-12-30T17:07:08.9900000+00:00</lastModDate>
        
        <creator>Imran Khan</creator>
        
        <creator>Shaukat Wasi</creator>
        
        <creator>Adnan Waqar</creator>
        
        <creator>Saima Khadim</creator>
        
        <subject>Cognitive radio; machine learning; artificial neural network; frequencies band; feed forward neural network; Elman; Radial basis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The demand for larger bandwidth aggravates the spectrum scarcity problem. By using the concepts of cognitive radio, efficient spectrum utilization can be achieved. Cognitive radio allows an unlicensed user to share the licensed user's band. Sensing the availability of a vacant channel and allocating the licensed user's band are handled by machine learning techniques, because this decision needs to be very fast and accurate, and it is based on certain factors (such as power, bandwidth, and antenna parameters). In this paper, we use neural networks to make this resource-allocation decision more accurately, providing bandwidth, power, antenna gain, azimuth, angle of elevation, and location as supplementary factors to increase the accuracy of predicting available channel frequencies for secondary users in particular bands. A comparative analysis is performed between artificial neural network techniques to determine the maximum decision accuracy, in order to design a suitable neural network structure and a system that makes fast predictions for available channels. The dataset is divided into the cellular 850 MHz and Advanced Wireless Service 1900/2100 MHz bands. In both bands, feed-forward networks perform better than Elman and radial basis networks for predicting the best available channel to accommodate the secondary user. This will considerably increase overall QoS and decrease interference, making the cognitive radio system reliable.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_38-Comparative_Analysis_of_ANN_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Capacitated Vehicle Routing Problem Solving using Adaptive Sweep and Velocity Tentative PSO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081237</link>
        <id>10.14569/IJACSA.2017.081237</id>
        <doi>10.14569/IJACSA.2017.081237</doi>
        <lastModDate>2017-12-30T17:07:08.9600000+00:00</lastModDate>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Zahrul Jannat Peya</creator>
        
        <creator>Kazuyuki Murase</creator>
        
        <subject>Capacitated vehicle routing problem; Sweep clustering and velocity tentative particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The Vehicle Routing Problem (VRP) has become an integral part of logistic operations; it determines optimal routes for several vehicles to serve customers. The basic version of VRP is the Capacitated VRP (CVRP), which considers equal capacities for all vehicles. The objective of CVRP is to minimize the total traveling distance of all vehicles to serve all the customers. Various methods are used to solve CVRP; the most popular splits the task into two phases: assigning customers to different vehicles and then finding the optimal route of each vehicle. The Sweep clustering algorithm is well studied for clustering nodes. Route optimization, on the other hand, is simply a traveling salesman problem (TSP), and a number of TSP optimization methods are applied for this purpose. In Sweep, the cluster formation starting angle is identified as an element of CVRP performance. In this study, a heuristic approach is developed to identify an appropriate starting angle in Sweep clustering. The proposed heuristic considers the angle difference of consecutive nodes, the distance between the nodes, and the distances from the depot. For route optimization, velocity tentative particle swarm optimization (VTPSO), the most recent TSP method, is considered. Finally, the proposed adaptive Sweep (i.e., Sweep with the proposed heuristic) plus VTPSO is tested on a large number of benchmark CVRP problems and is revealed to be an effective CVRP solving method when its outcomes are compared with those of other prominent methods.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_37-Capacitated_Vehicle_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Analyzing and Designing Microservice Holistically</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081236</link>
        <id>10.14569/IJACSA.2017.081236</id>
        <doi>10.14569/IJACSA.2017.081236</doi>
        <lastModDate>2017-12-30T17:07:08.9430000+00:00</lastModDate>
        
        <creator>Ahmad Tarmizi Abdul Ghani</creator>
        
        <creator>Mohd. Shanudin Zakaria</creator>
        
        <subject>Microservice; service design; promise theory; viable system model; Viplan method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Microservice is a new architecture that is getting attention in the development of service systems. Although microservice is still at an early stage, the acceptance of this architecture is overwhelming. Microservice architecture is a promising architecture for delivering loosely coupled, decentralized, and scalable systems that utilize the latest technology, such as containers and cloud computing. However, traditional methods for analyzing and designing systems are not able to fully utilize the capability of the microservice architecture. Therefore, a new method for analyzing and designing microservices holistically is proposed in this paper. The Design Science Research methodology has been adopted in designing the proposed method. The artifact, which is the result of the research, is the proposed method itself. The proposed method has shown its potential for analyzing and designing microservices holistically and for benefiting from the capabilities of the microservice architecture.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_36-A_Method_for_Analyzing_and_Designing_Microservice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Browser-Based DDoS Attacks without Javascript</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081235</link>
        <id>10.14569/IJACSA.2017.081235</id>
        <doi>10.14569/IJACSA.2017.081235</doi>
        <lastModDate>2017-12-30T17:07:08.9130000+00:00</lastModDate>
        
        <creator>Ryo Kamikubo</creator>
        
        <creator>Taiichi Saito</creator>
        
        <subject>Browser; denial of service (DoS); distributed denial of service (DDoS); attacks; HTML; JavaScript; botnets; networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Recently, browser-based distributed denial of service (DDoS) attacks, in which a malicious JavaScript program is distributed through an advertisement network and runs in the background of the web browser, have been observed. In this paper, we address the question of whether browser-based DDoS attacks can be realized without JavaScript. We construct new browser-based DDoS attacks based only on HTML functions and compare their efficiency with that of existing JavaScript-based DDoS attacks.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_35-Browser_based_DDoS_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Video Surveillance for Anomaly Detection of Street Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081234</link>
        <id>10.14569/IJACSA.2017.081234</id>
        <doi>10.14569/IJACSA.2017.081234</doi>
        <lastModDate>2017-12-30T17:07:08.8970000+00:00</lastModDate>
        
        <creator>Muhammad Umer Farooq</creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Mir Shabbar Ali</creator>
        
        <subject>Kalman filter; Gaussian mixture model; DBSCAN clustering; similarity matrix; occlusion; computer vision; traffic surveillance; Intelligent Transport Systems (ITS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Intelligent transportation systems enable the analysis of large multidimensional street traffic data to detect patterns and anomalies, which is otherwise a difficult task. Advances in computer vision have contributed greatly to the progress of video-based traffic surveillance systems. However, some challenges still need to be solved, such as object occlusion and the behavior of objects. This paper develops a novel framework that explores multidimensional road traffic data to analyze different traffic patterns and detect anomalies. The framework is implemented on a road traffic dataset collected from different areas of the city.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_34-Unsupervised_Video_Surveillance_for_Anomaly_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multicast Routing with Load Balancing in Multi-Channel Multi-Radio Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081233</link>
        <id>10.14569/IJACSA.2017.081233</id>
        <doi>10.14569/IJACSA.2017.081233</doi>
        <lastModDate>2017-12-30T17:07:08.8630000+00:00</lastModDate>
        
        <creator>Atena Asami</creator>
        
        <creator>Majid Asadi Shahmirzadi</creator>
        
        <creator>Sam Jabbehdari</creator>
        
        <subject>Wireless mesh network; multi-radio; multi-channel; multicast routing; channels load balancing; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>With the increasing expansion of multimedia services and group communication applications, the need for multicast routing to respond to multicast requests in wireless mesh networks is felt more than ever. One of the main challenges in multi-channel multi-radio wireless mesh networks is the efficient use of channel capacity as well as load balance in the network. In this paper, we propose an algorithm for building a multicast tree, namely, Load-balanced Multicast routing with a Genetic Algorithm (LM-GA). The purpose of this algorithm is to construct a multicast tree for requested sessions in Multi-Radio Multi-Channel Wireless Mesh Networks (MCMR WMNs) with respect to load balance in channels, by minimizing the maximum channel utilization. The results show the efficiency of LM-GA in distributing load across the channels of the network by finding near-optimal solutions, and also an increase in network performance while avoiding the creation of bottlenecks.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_33-Multicast_Routing_with_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TLM-2 Base Protocol Analysis for Model-Driven Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081232</link>
        <id>10.14569/IJACSA.2017.081232</id>
        <doi>10.14569/IJACSA.2017.081232</doi>
        <lastModDate>2017-12-30T17:07:08.8330000+00:00</lastModDate>
        
        <creator>Salaheddine Hamza Sfar</creator>
        
        <creator>Rached Tourki</creator>
        
        <subject>Electronic system level design; systemC; transaction level modelling; model-driven engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The system-on-chip design cost depends not only on implementation and manufacturing techniques, but also on the methodologies and design tools used. In recent years, transaction level modelling (TLM), and more specifically the SystemC TLM-2 library, has become the standard for writing a system-level specification. Even though TLM-2 based models are more abstract than register-level ones, they are very challenging to develop. They are often written manually and from scratch. In this paper, we expose a more elaborate and modular structure of transaction level models based on more predictable semantics. This work is the first building block of a model-driven design, a methodology that has proven itself in software engineering.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_32-TLM_2_Base_Protocol_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Energy Saving Approaches in Cloud Computing using Ant Colony and First Fit Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081231</link>
        <id>10.14569/IJACSA.2017.081231</id>
        <doi>10.14569/IJACSA.2017.081231</doi>
        <lastModDate>2017-12-30T17:07:08.8170000+00:00</lastModDate>
        
        <creator>Alaa Ahmed</creator>
        
        <creator>Mostafa Ibrahim</creator>
        
        <subject>Cloud computing; VM migration; first fit; ant colony; energy saving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Cloud computing is a style of technology that is increasingly used every day. It requires an important amount of resources that are dynamically provided as a service. The growth of energy consumption associated with the resource allocation process implemented in cloud computing is an important issue that needs to be taken into consideration. Better performance is acquired by allowing the same required workload to be performed using a lower number of servers, which can lead to important energy savings. It is therefore necessary to adopt efficient techniques, such as virtual machine migration, in order to minimize the energy consumed by clouds. This paper analyzes two algorithms, First Fit and Ant Colony, which use virtual machine migration approaches to improve cloud performance in terms of reducing the consumed energy.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_31-Analysis_of_Energy_Saving_Approaches_In_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Harmonics Measurement in Computer Laboratory and Design of Passive Harmonic Filter using MATLAB</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081230</link>
        <id>10.14569/IJACSA.2017.081230</id>
        <doi>10.14569/IJACSA.2017.081230</doi>
        <lastModDate>2017-12-30T17:07:08.8030000+00:00</lastModDate>
        
        <creator>Muhammad Usman Keerio</creator>
        
        <creator>Muhammad Shahzad Bajwa</creator>
        
        <creator>Abdul Sattar Saand</creator>
        
        <creator>Munwar Ayaz memon</creator>
        
        <subject>Power Quality (PQ); personal computers; Total Harmonic Distortion (THD); passive filter; MATLAB/Simulink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this paper, the harmonics measurement for computer loads is analyzed and passive filters are designed to mitigate those harmonics. The filters are designed to meet IEEE Standard 519-1992, which recommends harmonic current limits. Personal computers are non-linear loads that generate harmonics due to the non-sinusoidal current present at the entrance of the power supply. In this work, the personal computers in a laboratory are taken as a domestic load, and the harmonics they generate, which cannot be ignored, are simulated using MATLAB/Simulink. The purpose is to develop an analytical method for the design of a passive harmonic filter that absorbs the current harmonics caused by computer loads. The findings of this study help make the source current free from harmonics, thereby reducing the THD. Simulation results of the proposed passive filter design method show attractive harmonic reduction with the added benefit of power factor improvement. Designing passive harmonic filters using non-active power can be a simple, cost-effective solution for such systems.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_30-Harmonics_Measurement_in_Computer_Laboratory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifying Integrity Impacts in Security Risk Scoring Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081229</link>
        <id>10.14569/IJACSA.2017.081229</id>
        <doi>10.14569/IJACSA.2017.081229</doi>
        <lastModDate>2017-12-30T17:07:08.7700000+00:00</lastModDate>
        
        <creator>Eli Weintraub</creator>
        
        <subject>Security; cyber-attack; risk score; vulnerability; database integrity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Organizations are attacked daily by criminal hackers. Managers need to know what kinds of cyber-attacks they are exposed to in order to take defensive action. Attackers may cause several kinds of damage depending on the knowledge they have of an organization&#8217;s configuration and of its systems&#8217; vulnerabilities. One possible result of an attack is damage to the database. Estimations of attacks&#8217; impacts on database integrity are not found in the literature, apart from managers&#8217; intuitive assessments. The aim of this work is to define a quantitative measure that takes into consideration the known vulnerabilities threatening database integrity, and to prove its feasibility. In this work, a quantitative integrity impact measure is defined, formulated, illustrated and evaluated. The proposed measure is based on the real database configuration. The superiority of the measure over current practices is illustrated.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_29-Quantifying_Integrity_Impacts_in_Security_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Computation of Assimilation of Arabic Language Phonemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081228</link>
        <id>10.14569/IJACSA.2017.081228</id>
        <doi>10.14569/IJACSA.2017.081228</doi>
        <lastModDate>2017-12-30T17:07:08.7570000+00:00</lastModDate>
        
        <creator>Ayad Tareq Imam</creator>
        
        <creator>Jehad Ahmad Alaraifi</creator>
        
        <subject>Computational phonology; phonological rules; assimilation; phonological coarticulation; artificial neural networks; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Computational phonology is a fairly new field that studies phonological rules from a computational point of view. It is based on phonological rules, which are the processes applied to phonemes to produce other phonemes in specific phonetic environments. One type of phonological process is assimilation, whose rules reform the involved phonemes with regard to the place of articulation, the manner of articulation, and/or voicing. Thus, assimilation is considered a consequence of phonological coarticulation. Arabic, like other natural languages, has systematic phoneme-changing rules. This paper aims to automate the assimilation rules of the Arabic language. Among the several computational approaches used for automating phonological rules, this paper adopts the Artificial Neural Network (ANN) approach, and thus contributes the use of ANN as a computational approach for automating the assimilation rules of the Arabic language. The ANN-based system designed in this paper has been defined and implemented using MATLAB software; the results show the success of this approach and provide experience for later similar work.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_28-The_Computation_of_Assimilation_of_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommender System for Journal Articles using Opinion Mining and Semantics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081227</link>
        <id>10.14569/IJACSA.2017.081227</id>
        <doi>10.14569/IJACSA.2017.081227</doi>
        <lastModDate>2017-12-30T17:07:08.7400000+00:00</lastModDate>
        
        <creator>Anam Sardar</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Muhammad Asif Suryani</creator>
        
        <creator>Muhammad Shoaib</creator>
        
        <subject>Recommendation system; journal recommendation system; user opinion; semantic similarity; text analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>To date, the dominant part of Recommender Systems (RS) research has focused on a single domain, e.g., films, books, or shopping. However, human preferences may span numerous areas, so usage behavior on related items from various domains can be valuable for RS to make recommendations. Academic articles, such as research papers, are the way the research community expresses ideas and thoughts, and there are many journals available that accept these technical writings. The journal selection procedure should consider users&#8217; experience with journals in order to recommend the most relevant journal. In this work on a journal recommendation system, data about user experience targeting various aspects of journals has been gathered. In addition, a dataset of archived articles has been developed considering this user experience. The user experience and the gathered archive data are then analyzed using two different semantics-based frameworks in order to produce better consolidated recommendations. Before submission, we offer services on behalf of the research community that exploit user reviews and relevant data to suggest a suitable journal according to the needs of the author.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_27-Recommender_System_for_Journal_Articles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic Energy Management Strategy of a Hybrid Renewable Energy System Feeding a Typical Tunisian House</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081226</link>
        <id>10.14569/IJACSA.2017.081226</id>
        <doi>10.14569/IJACSA.2017.081226</doi>
        <lastModDate>2017-12-30T17:07:08.7100000+00:00</lastModDate>
        
        <creator>Sameh ZENNED</creator>
        
        <creator>Houssem CHAOUALI</creator>
        
        <creator>Abdelkader MAMI</creator>
        
        <subject>Energy management strategy; hybrid power system; photovoltaic generator; wind turbine; fuel cell generator; nas battery; fuzzy logic control technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>This paper proposes an energy management strategy for a hybrid power system (HPS) composed of a photovoltaic generator, a wind turbine, a fuel cell generator and a NaS battery storage device, feeding a typical Tunisian house. This strategy is based on the Fuzzy Logic Control technique. The hybrid power system is sized to provide the energy demanded by the inhabitants of the house; if there is extra power generation, it is sold to the grid. Using Simulink, we develop different scenarios for using the fuel cell or battery during critical periods. The developed methodology was applied under the climatic conditions (wind speed, solar irradiation and temperature) measured at a site located in the northeast of Tunisia.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_26-Fuzzy_Logic_Energy_Management_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Parallel Matrix Multiplication Algorithm on Tree-Hypercube Network using Iman1 Supercomputer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081225</link>
        <id>10.14569/IJACSA.2017.081225</id>
        <doi>10.14569/IJACSA.2017.081225</doi>
        <lastModDate>2017-12-30T17:07:08.6930000+00:00</lastModDate>
        
        <creator>Orieb AbuAlghanam</creator>
        
        <creator>Mohammad Qatawneh</creator>
        
        <creator>Hussein A. al Ofeishat</creator>
        
        <creator>Omar Adwan</creator>
        
        <creator>Ammar Huneiti</creator>
        
        <subject>MPI; supercomputer; tree-hypercube; matrix multiplication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The tree-hypercube (TH) interconnection network is a relatively new interconnection network constructed from tree and hypercube topologies. TH was developed to support parallel algorithms for solving computation- and communication-intensive problems. In this paper, we propose a new parallel matrix multiplication algorithm on the TH network that uses a broadcast communication operation based on the store-and-forward technique, namely the one-to-all broadcast operation, which allows a message to be transmitted through the shortest path from the source node to all other nodes. The proposed algorithm is implemented and evaluated in terms of running time, efficiency and speedup with different data sizes using the IMAN1 supercomputer. The experimental results show that the runtime, efficiency and speedup of the proposed algorithm decrease as the number of processors increases for all matrix sizes of 1000&#215;1000, 2000&#215;2000, and 4000&#215;4000.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_25-A_New_Parallel_Matrix_Multiplication_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Hybrid Effective Technique for Enhancing Classification Accuracy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081224</link>
        <id>10.14569/IJACSA.2017.081224</id>
        <doi>10.14569/IJACSA.2017.081224</doi>
        <lastModDate>2017-12-30T17:07:08.6630000+00:00</lastModDate>
        
        <creator>Ibrahim M. El-Hasnony</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <creator>Omar H. Al-Tarawneh</creator>
        
        <creator>Mona Gamal</creator>
        
        <subject>Data mining; bioinformatics; fuzzy rough feature selection; correlation feature selection and data classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The automatic prediction and detection of breast cancer is an imperative, challenging problem in medical applications. In this paper, a proposed model to improve the accuracy of classification algorithms is presented. A new approach for designing an effective pre-processing stage is introduced. This approach integrates the K-means clustering algorithm with fuzzy rough feature selection or correlation feature selection for data reduction. The attributes of the reduced clustered data are merged to form a new dataset to be classified. Simulation results prove the enhancement of classification when using the proposed approach. Moreover, a new hybrid classification model composed of the K-means clustering algorithm, fuzzy rough feature selection and discernibility nearest neighbour is presented. Compared to previous studies on the same data, it is shown that the presented model outperforms other classification models. The proposed model is tested on the breast cancer dataset from the UCI machine learning repository.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_24-A_Proposed_Hybrid_Effective_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teaching Programming to Students in other Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081223</link>
        <id>10.14569/IJACSA.2017.081223</id>
        <doi>10.14569/IJACSA.2017.081223</doi>
        <lastModDate>2017-12-30T17:07:08.6470000+00:00</lastModDate>
        
        <creator>Ivaylo Donchev</creator>
        
        <creator>Emilia Todorova</creator>
        
        <subject>Programming curricula; objects-first; teaching programming; object-oriented; education; programming; problem solving skills; politology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>It is a fact that programming is difficult to learn. On the other hand, programming skills are essential for every program in the field of computing and must be covered in the curriculum, regardless of the profile. Our experience over the last 3-4 years shows a noticeable downward trend in students&#8217; results in computer science and similar programs. In this article, we comment on the reasons that have led to this decline and look for solutions by experimenting with motivated students from other areas of knowledge, comparing their progress in mastering basic concepts and mechanisms of programming with that of computing specialists.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_23-Teaching_Programming_to_Students_in_other_Fields.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed GPU-Based K-Means Algorithm for Data-Intensive Applications: Large-Sized Image Segmentation Case</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081221</link>
        <id>10.14569/IJACSA.2017.081221</id>
        <doi>10.14569/IJACSA.2017.081221</doi>
        <lastModDate>2017-12-30T17:07:08.6300000+00:00</lastModDate>
        
        <creator>Hicham Fakhi</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Hassan Ouajji</creator>
        
        <subject>Distributed computing; GPU computing; K-means; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>K-means is a compute-intensive iterative algorithm. Its use in complex scenarios is cumbersome, specifically in data-intensive applications. In order to accelerate the K-means running time for data-intensive applications, such as large-sized image segmentation, we use a distributed multi-agent system accelerated by GPUs. In this K-means version, the input image data are divided into subsets that can be processed independently on GPUs. On each GPU, we offload the data assignment and K-centroid recalculation steps of the K-means algorithm for massively parallel processing. We have implemented this K-means version on Nvidia GPUs with the Compute Unified Device Architecture. The distributed multi-agent system was written with the Java Agent Development framework.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_21-Distributed_GPU_based_K_Means_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chi-Square Automatic Interaction Detection Modeling for Predicting Depression in Multicultural Female Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081222</link>
        <id>10.14569/IJACSA.2017.081222</id>
        <doi>10.14569/IJACSA.2017.081222</doi>
        <lastModDate>2017-12-30T17:07:08.6300000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>CHAID; data mining; multicultural family; risk factors; depression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>This study developed a depression prediction model for female students from multicultural families by using a decision tree model based on the Chi-squared automatic interaction detection (CHAID) algorithm. Subjects of the study were 9,024 female students between 12 and 15 years old among the children of surveyed marriage immigrants. The outcome variable was the presence of depression. Explanatory variables included sex, residing area, experience of career counseling, experience of social discrimination, experience of Korean language education, experience of using a multicultural family support center, Korean reading, Korean speaking, Korean writing, Korean listening, Korean society adjustment education experience, needs of Korean society adjustment education, needs of Korean language education, and rejoined entry. In the CHAID algorithm analysis, female students from multicultural families who had experienced social discrimination within the past year and had ordinary Korean speaking skill showed the highest risk of depression. Based on the results of this study, it is necessary to pay attention at the societal level to the mental health of adolescents from multicultural families in order to achieve successful social integration.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_22-Chi_Square_Automatic_Interaction_Detection_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bio-NER: Biomedical Named Entity Recognition using Rule-Based and Statistical Learners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081220</link>
        <id>10.14569/IJACSA.2017.081220</id>
        <doi>10.14569/IJACSA.2017.081220</doi>
        <lastModDate>2017-12-30T17:07:08.6000000+00:00</lastModDate>
        
        <creator>Pir Dino Soomro</creator>
        
        <creator>Santosh Kumar Banbhrani</creator>
        
        <creator>Arsalan Ali Shaikh</creator>
        
        <creator>Hans Raj</creator>
        
        <subject>Bio-medical text mining; machine learning; named entity recognition; naive bayesian; rule-based classifier; information extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The purpose of extracting bio-medical entities is to recognize particular entities, whether words or phrases, in unstructured text data. This work proposes different approaches and methods, i.e., Machine Learning Hybrid Classification, Rule-Based Non-nested Generalized Exemplars and Partial Decision Tree (PART) learners, for Bio-Medical Named Entity Recognition. The prime objective is to consider, preferably, simple features such as affixes and context; orthographic features, Part-of-Speech (POS) tags and N-grams are given secondary importance compared with affixes and context. Further, for the purpose of bio-medical disease named entity recognition, rule-based classifiers are proposed along with statistical machine learning. This paper also proposes a blend of both preceding methods that jointly constructs a hybrid classification algorithm. Precision, recall and F-measure, the standard metrics, have been used for evaluation. The results show that the proposed technique performs far better than the previous state-of-the-art disease NER (Named Entity Recognition) method.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_20-Bio_NER_Biomedical_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Power Low Jitter 0.18 CMOS Ring VCO Design with Strategy based on EKV3.0 Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081219</link>
        <id>10.14569/IJACSA.2017.081219</id>
        <doi>10.14569/IJACSA.2017.081219</doi>
        <lastModDate>2017-12-30T17:07:08.5700000+00:00</lastModDate>
        
        <creator>Amine AYED</creator>
        
        <creator>Hamadi GHARIANI</creator>
        
        <subject>Ring VCO; jitter; power consumption; EKV model; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this paper, the design of a micro-power CMOS ring VCO with minimum jitter, intended for a frequency synthesizer concept in biotelemetry systems, is studied. A design procedure implemented in MATLAB is described for a circuit realization in TSMC 0.18&#181;m CMOS technology. This design methodology, based on the EKV3.0 model, is clearly suited to the challenges of analog circuit design with reduced channel width. Measurements performed with ADS confirmed the methodology&#8217;s capability for circuit sizing that respects the specifications of the application. The designed ring VCO operates at a central frequency of 433 MHz in the ISM band with an oscillation amplitude of 500 mV. The integration area was intrinsic (without buffers and without external capacitances). The simulated phase noise is about -108 dBc/Hz at 1 MHz offset, the RMS jitter is 44.8 ps, and the power consumption of the designed VCO is 6.37 mW at 433 MHz.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_19-Low_Power_Low_Jitter_018_CMOS_Ring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing the Adoption of ICT by Teachers in Primary Schools in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081218</link>
        <id>10.14569/IJACSA.2017.081218</id>
        <doi>10.14569/IJACSA.2017.081218</doi>
        <lastModDate>2017-12-30T17:07:08.5530000+00:00</lastModDate>
        
        <creator>Sami Alshmrany</creator>
        
        <creator>Brett Wilkinson</creator>
        
        <subject>Information and communication technology (ICT); primary education; Saudi Arabia; computer literacy; behavioural influence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Information and communication technology (ICT) has become part of everyday life for many people in business, entertainment, education and many other areas of human activity. Students in primary school are just beginning to learn and accept new ideas, show maturing creativity, and develop critical thinking and decision-making skills; ICT enriches all these processes. In education, the successful integration of ICT into learning and teaching depends on teachers&#8217; attitudes and their ability to use communication technologies not just competently, but with skill and imagination. Experience with the medium is required; however, ICT use in education has been largely ignored in Saudi Arabia. The study described here investigated the factors influencing the adoption of ICT as a teaching tool by teachers at Saudi Arabian primary schools. Analysis of the data showed that computer literacy and confidence with technology had a significant positive effect on participants&#8217; effort expectancy, which in turn positively influenced their behavioural intention to adopt ICT. On the other hand, Saudi culture, social conditions, system quality, and other obstacles discourage the uptake of ICT by primary school teachers. The findings of this study will assist the Saudi government in enhancing the positive factors and eliminating or reducing the negative factors to ensure the successful adoption of ICT in primary education by teachers.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_18-Factors_Influencing_the_Adoption_of_ICT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance vs. Power and Energy Consumption: Impact of Coding Style and Compiler</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081217</link>
        <id>10.14569/IJACSA.2017.081217</id>
        <doi>10.14569/IJACSA.2017.081217</doi>
        <lastModDate>2017-12-30T17:07:08.5370000+00:00</lastModDate>
        
        <creator>Hesham H M Hassan</creator>
        
        <creator>Ahmed Shawky Moussa</creator>
        
        <creator>Ibrahim Farag</creator>
        
        <subject>Energy consumption; energy efficiency; power-aware; performance; coding styles; coding practice; compilers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Reaching a balance between performance and energy consumption has always been a difficult objective to achieve for energy- and power-aware applications. The work presented in this paper investigates the impact of using different coding styles to achieve a balance between performance and energy efficiency. The research also studies how different compilers may affect not only the performance of the code but also its energy consumption. The research demonstrates and concludes that choosing the right combination of coding style and compiler, the combination that works best with the nature of the application and the target hardware, is necessary if the balance between performance and energy is a software design goal. The study addresses some experimental aspects of the impact of coding style and compiler choice on energy and performance efficiency. It also shows how different coding practices for the same problem can produce different performance and energy consumption rates.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_17-Performance_vs_Power_and_Energy_Consumption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Model to Predict the Onset of Alzheimer Disease using Potential Cerebrospinal Fluid (CSF) Biomarkers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081216</link>
        <id>10.14569/IJACSA.2017.081216</id>
        <doi>10.14569/IJACSA.2017.081216</doi>
        <lastModDate>2017-12-30T17:07:08.5070000+00:00</lastModDate>
        
        <creator>Syed Asif Hassan</creator>
        
        <creator>Tabrej Khan</creator>
        
        <subject>Alzheimer disease; early-stage biomarker; machine learning algorithm; classification model; accuracy; sensitivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Clinical studies have shown that the pathology of Alzheimer&#8217;s disease (AD) initiates 10 to 15 years before visible clinical symptoms of cognitive impairment start to appear in AD-diagnosed patients. Therefore, early diagnosis of AD using potential early-stage cerebrospinal fluid (CSF) biomarkers will be valuable in designing clinical trials and in the proper care of AD patients. The goal of our study was therefore to generate a classification model to predict earlier stages of AD using specific early-stage CSF biomarkers obtained from a clinical Alzheimer dataset. The dataset was segmented into variable sizes, and classification models based on three machine learning (ML) algorithms, Sequential Minimal Optimization (SMO), Na&#239;ve Bayes (NB), and J48, were generated. The efficacy of the models in accurately predicting cognitive impairment status was evaluated and compared using various model performance parameters available in the Weka software tool. The current findings show that the J48-based classification model can effectively distinguish cognitively impaired Alzheimer patients from normal healthy individuals with an accuracy of 98.82%, an area under the curve (AUC) value of 0.992, and sensitivity &amp; specificity of 99.19% and 97.87%, respectively. The sample split (60% training and 40% independent test data) showed significant improvement in the T-test with the J48 algorithm when compared with the other classifiers tested on the Alzheimer dataset.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_16-A_Machine_Learning_Model_to_Predict_the_Onset_of_Alzheimer_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leisure Technology for the Elderly: A Survey, User Acceptance Testing and Conceptual Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081214</link>
        <id>10.14569/IJACSA.2017.081214</id>
        <doi>10.14569/IJACSA.2017.081214</doi>
        <lastModDate>2017-12-30T17:07:08.4900000+00:00</lastModDate>
        
        <creator>Chow Sook Theng</creator>
        
        <creator>Saravanan Sagadevan</creator>
        
        <creator>Nurul Hashimah Ahamed Hassain Malim</creator>
        
        <subject>Leisure technology; user acceptance; cognitive ability; elderly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Alzheimer’s disease damages the neuronal and synaptic system due to high levels of amyloid beta in the brain. It is a common cause of dementia, which mainly afflicts the elderly, who gradually lose their memory and communication skills and experience a deterioration of thinking and reasoning ability. Hence, it is crucial for elderly people to monitor their cognitive performance consistently and continuously to detect Alzheimer’s symptoms, such as dementia or Mild Cognitive Impairment. Many technologies have been established in healthcare for its detection; however, such technologies, mostly medical treatments, cannot be self-administered by the elderly on a daily basis, and their use incurs a cost each time. Therefore, this study looks at an alternative called leisure technology, which the elderly can access every day at home in an enjoyable and relaxing manner. The aim of this study is to examine leisure activities that could stimulate brain cognitive function and turn them into a leisure technology application. Prior to proposing the conceptual design of this application, a user acceptance study of leisure technology among elderly people was conducted through interviews and a questionnaire survey. The survey results show that 90% of the participants stated that there was an improvement in cognitive abilities after using leisure technology, and 98.4% stated that they could adapt to leisure technology. The interview outcomes show that participants agreed that different types of leisure technology provide heterogeneous benefits, which can improve their cognitive abilities. Finally, this study proposes a conceptual design for a leisure technology application that elderly people can adapt to.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_14-Leisure_Technology_for_the_Elderly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning based Predictive Model for Screening Mycobacterium Tuberculosis Transcriptional Regulatory Protein Inhibitors from High-Throughput Screening Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081215</link>
        <id>10.14569/IJACSA.2017.081215</id>
        <doi>10.14569/IJACSA.2017.081215</doi>
        <lastModDate>2017-12-30T17:07:08.4900000+00:00</lastModDate>
        
        <creator>Syed Asif Hassan</creator>
        
        <creator>Tabrej Khan</creator>
        
        <subject>Mycobacterium; dosRS-transcriptional regulatory proteins; High Throughput Screening (HTS); virtual screening; machine learning algorithms; classification; predictive chemoinformatics model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In view of the essential role played by dosRS in the survival of Mycobacterium in infected granuloma cells, the dosRS transcriptional regulatory proteins were considered a validated target for high-throughput screening (HTS). However, the cost and time involved in screening large compound libraries are an important hurdle in identifying lead compounds. Therefore, the use of computational machine learning techniques to build a predictive model for screening putative drug-like molecules has gained significance. In this regard, a target-based predictive model using machine learning approaches was built to develop fast and efficient virtual screening procedures to screen anti-dosRS molecules. In the present study, we used various structural and physiochemical attributes of compounds from an HTS dataset to train and build a chemoinformatics predictive model based on four state-of-the-art supervised classifiers (Random Forest, SMO, J48, and Na&#239;ve Bayes). The trained model was applied to a test dataset to validate the robustness, accuracy, and sensitivity of the predictive model in screening active anti-dosRS molecules. The Cost-Sensitive Classifier (CSC) with Random Forest (RF) algorithm-based predictive model showed high sensitivity (100%) and specificity (83.13%) in identifying active and inactive molecules, respectively, from the assay dataset (ID: 1159583). CSC-RF proved to be more robust and efficient in classifying active molecules from an imbalanced dataset, with the highest Balanced Classification Rate (BCR) (91.57%) and maximum Area Under the Curve (AUC) value (0.999).</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_15-Machine_Learning_based_Predictive_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Future Gold Rates using Machine Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081213</link>
        <id>10.14569/IJACSA.2017.081213</id>
        <doi>10.14569/IJACSA.2017.081213</doi>
        <lastModDate>2017-12-30T17:07:08.4600000+00:00</lastModDate>
        
        <creator>Iftikhar ul Sami</creator>
        
        <creator>Khurum Nazir Junejo</creator>
        
        <subject>Gold rates; prediction; forecasting; linear regression; neural networks; ARMA Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Historically, gold was used for supporting trade transactions around the world besides other modes of payment. Various states maintained and enhanced their gold reserves and were recognized as wealthy and progressive states. In present times, precious metals like gold are held with the central banks of all countries to guarantee repayment of foreign debts and to control inflation; gold reserves also reflect the financial strength of a country. Besides government agencies, various multi-national companies and individuals have also invested in gold reserves. In traditional events of Asian countries, gold is presented as a gift or souvenir, and in marriages in India, Pakistan, and other countries, gold ornaments are presented as dowry. In addition to the demand and supply of the commodity in the market, the performance of the world’s leading economies also strongly influences gold rates. We predict future gold rates based on 22 market variables using machine learning techniques. Results show that daily gold rates can be predicted very accurately. Our prediction models will be beneficial for investors and central banks in deciding when to invest in this commodity.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_13-Predicting_Future_Gold_Rates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Grouping Similar Operations Extracted from WSDLs Files using K-Means Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081212</link>
        <id>10.14569/IJACSA.2017.081212</id>
        <doi>10.14569/IJACSA.2017.081212</doi>
        <lastModDate>2017-12-30T17:07:08.4430000+00:00</lastModDate>
        
        <creator>Rekkal Sara</creator>
        
        <creator>Amrane Fatima</creator>
        
        <creator>Loukil Lakhdar</creator>
        
        <subject>Web services; WSDL; inputs; outputs; similarity; syntax analysis; semantic analysis; Hungarian maximum matching;  K-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Grouping similar operations is an effective solution to several problems, particularly those related to service search, since services can be classified by their shared operations: searching for a particular operation then returns all services offering that operation. It also addresses problems related to substitution (for example, after a call failure or a malfunction), where a list of similar operations is returned to the client, who chooses an operation based on non-functional criteria. In this work, our goal is to study the functional similarity between operations and thus form groups of similar operations using the K-means algorithm.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_12-A_New_Approach_for_Grouping_Similar_Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Processing for Full-Text Search and Visualization with Elasticsearch</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081211</link>
        <id>10.14569/IJACSA.2017.081211</id>
        <doi>10.14569/IJACSA.2017.081211</doi>
        <lastModDate>2017-12-30T17:07:08.4130000+00:00</lastModDate>
        
        <creator>Aleksei Voit</creator>
        
        <creator>Aleksei Stankus</creator>
        
        <creator>Shamil Magomedov</creator>
        
        <creator>Irina Ivanova</creator>
        
        <subject>Big Data processing; verification; elasticsearch; MapReduce; data clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this paper, the task of using Big Data to identify specific individuals on the indirect grounds of their interaction with information resources is considered. Possible sources of Big Data and problems related to its processing are analyzed. Existing means of data clustering are considered, and available software for full-text search and data visualization is analyzed. A system based on the Elasticsearch engine and the MapReduce model is proposed to solve the user verification problem.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_11-Big_Data_Processing_for_Full_Text_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Forensic Tools for WhatsApp Analysis using NIST Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081210</link>
        <id>10.14569/IJACSA.2017.081210</id>
        <doi>10.14569/IJACSA.2017.081210</doi>
        <lastModDate>2017-12-30T17:07:08.3970000+00:00</lastModDate>
        
        <creator>Rusydi Umar</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Guntur Maulana Zamroni</creator>
        
        <subject>Whatsapp; acquisition; NIST parameters; artifact</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>One of the most popular features on Android smartphones is WhatsApp. WhatsApp can be misused, for example for criminal purposes. To conduct investigations involving smartphone devices, investigators need forensic tools. Nonetheless, existing forensic tool technology does not develop as fast as mobile technology and WhatsApp, and new versions of smartphones and of WhatsApp are constantly released. Therefore, research on the performance of current forensic tools in handling cases involving Android smartphones, and WhatsApp in particular, needs to be done. This research evaluated existing forensic tools for performing forensic analysis on WhatsApp, using parameters from NIST and WhatsApp artifacts. The outcome shows that Belkasoft Evidence has the highest index number, WhatsApp Key/DB Extractor has superiority in terms of cost, and Oxygen Forensic has superiority in obtaining WhatsApp artifacts.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_10-A_Comparative_Study_of_Forensic_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning-Based Recommendation: Current Issues and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081209</link>
        <id>10.14569/IJACSA.2017.081209</id>
        <doi>10.14569/IJACSA.2017.081209</doi>
        <lastModDate>2017-12-30T17:07:08.3670000+00:00</lastModDate>
        
        <creator>Rim Fakhfakh</creator>
        
        <creator>Anis Ben Ammar</creator>
        
        <creator>Chokri Ben Amar</creator>
        
        <subject>Recommender system; deep learning; neural network; YouTube recommendation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Due to the revolutionary advances deep learning has achieved in image processing, speech recognition, and natural language processing, it has gained much attention. The recommendation task has been influenced by the deep learning trend, which shows significant effectiveness and yields high-quality recommendations. Deep learning based recommender models better capture user preferences, item features, and the history of user–item interactions. In this paper, we provide a recent literature review of research dealing with deep learning based recommendation approaches, preceded by a presentation of the main lines of recommendation approaches and deep learning techniques. We also propose classification criteria for the different deep learning integration models. Finally, we present the recommendation approach adopted by the most popular video recommendation platform, YouTube, which is based essentially on deep learning advances.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_9-Deep_Learning_based_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet Orchestra of Things: A Different Perspective on the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081208</link>
        <id>10.14569/IJACSA.2017.081208</id>
        <doi>10.14569/IJACSA.2017.081208</doi>
        <lastModDate>2017-12-30T17:07:08.3330000+00:00</lastModDate>
        
        <creator>Cristina Turcu</creator>
        
        <creator>Cornel Turcu</creator>
        
        <subject>Internet of Things; music; game; education; RFID; robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>The Internet of Things (IoT) is defined as a global network that links together living and/or non-living entities, such as people, animals, software, physical objects, or devices. These entities can interact with each other and gather, provide, or transmit information within the IoT. Although the Internet of Things is a relatively new concept, various platforms are already available. Some of them are open platforms, enabling both the integration of people, systems, and objects from the physical and virtual worlds, and the visualization of data. For example, several IoT platforms are already in use, such as Google Cloud Platform, Microsoft Azure IoT Hub, Amazon Web Services IoT Platform, IBM Watson IoT Platform, Nimbits, Open.Sen.se, ThingWorx, and ThingSpeak. But what if things could not only “work” and “speak”, but also “sing”? We propose a game in which the things connected to the IoT play different sounds in real time, according to the values of some monitored parameters. These things can be grouped in the IoT platform to create a virtual orchestra and make music. Besides allowing the creation of great songs, this game can be widely used to explain the new ideas behind the fast-emerging Internet of Things. In addition to many technical challenges, it is also worth considering the effect the IoT concept will have on people, society, and the economy as a whole.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_8-Internet_Orchestra_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Dysarthric Speech Recognition Approach using Deep Neural Networks </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081207</link>
        <id>10.14569/IJACSA.2017.081207</id>
        <doi>10.14569/IJACSA.2017.081207</doi>
        <lastModDate>2017-12-30T17:07:08.3200000+00:00</lastModDate>
        
        <creator>Jun Ren</creator>
        
        <creator>Mingzhe Liu</creator>
        
        <subject>Dysarthric speech recognition; deep neural networks; hidden markov models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Transcribing dysarthric speech into text is still a challenging problem for state-of-the-art techniques and commercially available speech recognition systems. To improve the accuracy of dysarthric speech recognition, this paper adopts Deep Belief Networks (DBNs) to model the distribution of the dysarthric speech signal. A continuous dysarthric speech recognition system is produced, in which the DBNs are used to predict the posterior probabilities of the states in Hidden Markov Models (HMMs) and the Weighted Finite State Transducer framework is utilized to build the speech decoder. Experimental results show that the proposed method provides a better prediction of the probability distribution of the spectral representation of dysarthric speech, outperforming existing methods, e.g., GMM-HMM based dysarthric speech recognition approaches. To the best of our knowledge, this is the first work to build a continuous speech recognition system for dysarthric speech with deep neural network techniques, a promising approach for improving communication between individuals with speech impediments and normal speakers.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_7-An_Automatic_Dysarthric_Speech_Recognition_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of an Improved Algorithm for Image Processing: A Proposed Algorithm for Optimal Reduction of Shadow from the Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081206</link>
        <id>10.14569/IJACSA.2017.081206</id>
        <doi>10.14569/IJACSA.2017.081206</doi>
        <lastModDate>2017-12-30T17:07:08.2570000+00:00</lastModDate>
        
        <creator>Yahia S. AL-Halabi</creator>
        
        <subject>Image; processing; shadow; algorithm; detection filter; luminance; morphological processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Shadow detection is an important aspect of image processing, and it has become essential to develop algorithms capable of processing images with maximum efficiency. This research therefore proposes an algorithm that effectively processes an image by reducing shadow. The proposed algorithm is based on the RGB (red, green, and blue) and HSI (hue, saturation, and intensity) models. Steps for shadow detection have been defined, and a median filter and colour saturation have been widely used to process the outcomes. The algorithm has proved efficient for detecting shadow in images: when compared with two previously developed algorithms by other researchers, 87% efficiency was observed with the proposed algorithm. The study makes a supportive effort toward the development of an optimized algorithm, and it is suggested that the market requires such practices to improve the working conditions of the image processing paradigm.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_6-Development_of_an_Improved_Algorithm_for_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Training an Agent for FPS Doom Game using Visual Reinforcement Learning and VizDoom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081205</link>
        <id>10.14569/IJACSA.2017.081205</id>
        <doi>10.14569/IJACSA.2017.081205</doi>
        <lastModDate>2017-12-30T17:07:08.2400000+00:00</lastModDate>
        
        <creator>Khan Adil</creator>
        
        <creator>Feng Jiang</creator>
        
        <creator>Shaohui Liu</creator>
        
        <creator>Aleksei Grigorev</creator>
        
        <creator>B.B. Gupta</creator>
        
        <creator>Seungmin Rho</creator>
        
        <subject>Visual reinforcement learning; Deep Q-learning; FPS; CNN; computational intelligence; Game-AI; VizDoom; agent; bot; DOOM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Because of the recent success and advancement of DeepMind technologies, deep learning is now used to train agents for first-person shooter (FPS) games; such agents often outperform human players while using only raw screen pixels to make their decisions. A Visual Doom AI Competition is organized each year on two tracks, a limited death-match on a known map and a full death-match on an unknown map, for evaluating AI agents, because computer games are among the best test-beds for testing and evaluating different AI techniques and approaches. The competition is ranked based on the number of frags each agent achieves. In this paper, we propose training a competitive agent for playing basic scenario(s) of Doom (an FPS game) in the semi-realistic 3D world ‘VizDoom’, using a combination of convolutional deep learning and Q-learning that considers only raw screen pixels, in order to exhibit the agent’s usefulness in Doom. Experimental results show that the trained agent outperforms the average human player and in-built game agents in basic scenario(s) where only move-left, move-right, and shoot actions are allowed.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_5-Training_an_Agent_for_FPS_Doom_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Classification of Liver Disorder using Fuzzy Neural System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081204</link>
        <id>10.14569/IJACSA.2017.081204</id>
        <doi>10.14569/IJACSA.2017.081204</doi>
        <lastModDate>2017-12-30T17:07:08.2270000+00:00</lastModDate>
        
        <creator>Mohammad Khaleel Sallam Ma’aitah</creator>
        
        <creator>Rahib Abiyev</creator>
        
        <creator>Idoko John Bush</creator>
        
        <subject>Artificial neural networks; fuzzy systems; fuzzy neural systems; liver disorders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>In this study, an intelligent model for liver disorders based on Fuzzy Neural System (FNS) models is designed. For this purpose, fuzzy systems and neural networks are explored for the detection of liver disorders, and the structure and learning algorithm of the FNS are described. We utilized a dataset extracted from the renowned UCI machine learning repository, and a 10-fold cross-validation approach was used in the design of the system. The designed algorithm is accurate, reliable, and faster compared to other traditional diagnostic systems. We highly recommend this framework as a specialized training tool for medical practitioners.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_4-Intelligent_Classification_of_Liver_Disorder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State-of-the-Art and Open Challenges in RTS Game-AI and Starcraft</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081203</link>
        <id>10.14569/IJACSA.2017.081203</id>
        <doi>10.14569/IJACSA.2017.081203</doi>
        <lastModDate>2017-12-30T17:07:08.1930000+00:00</lastModDate>
        
        <creator>Khan Adil</creator>
        
        <creator>Feng Jiang</creator>
        
        <creator>Shaohui Liu</creator>
        
        <creator>Worku Jifara</creator>
        
        <creator>Zhihong Tian</creator>
        
        <creator>Yunsheng Fu</creator>
        
        <subject>Real Time Strategy (RTS); Game-AI; Starcraft; MMOG; AIIDE; CIG; MOBA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>This paper presents a review of artificial intelligence approaches used in real-time strategy games. Real-time strategy (RTS) games are fast combat games in which the objective is to dominate and destroy the opposing enemy, such as Rome: Total War, Starcraft, Age of Empires, and Command &amp; Conquer. In such games, each player needs to utilize resources efficiently, which includes managing different types of soldiers, units, and equipment, as well as economic status, positions, and the uncertainty of combat in real time. With the recent success and advancement of DeepMind technologies, even the best human players now face difficulty defeating the best RTS game agents. In this paper, we explain the state of the art and challenges in artificial intelligence (AI) for RTS games and Starcraft, describing the problems and issues raised by RTS games together with some of the solutions that address them. Finally, we conclude by discussing the CIG and AIIDE game competitions along with open research problems and questions in the context of RTS Game-AI, where some of the problems and challenges are largely considered improved or solved while others remain open for further research.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_3-State_of_the_Art_and_Open_Challenges_in_RTS_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster Formation and Cluster Head Selection Approach for Vehicle Ad-Hoc Network (VANETs) using K-Means and Floyd-Warshall Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081202</link>
        <id>10.14569/IJACSA.2017.081202</id>
        <doi>10.14569/IJACSA.2017.081202</doi>
        <lastModDate>2017-12-30T17:07:08.1630000+00:00</lastModDate>
        
        <creator>Iftikhar Hussain</creator>
        
        <creator>Chen Bingcai</creator>
        
        <subject>Vehicular Ad-hoc Network (VANETs); Mobile ad-hoc networking (MANETs); K-Mean; clustering; cluster head selection; Floyd-Warshall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>Vehicular Ad-hoc Networks (VANETs) are a specific form of Mobile Ad-hoc Networks (MANETs) in which highly dynamic nodes carry out the operations. They are mainly used in urban areas for safe traveling. Since a VANET carries a great amount of traffic, clustering algorithms are used to cluster the vehicles within network range, and a cluster head node, selected through a defined procedure, collects all information from the surroundings. This study introduces a new method for cluster head selection using the K-Means and Floyd-Warshall algorithms. The proposed technique first divides the vehicles into groups, while the Floyd-Warshall algorithm calculates the shortest distance between every pair of vehicles within the defined cluster. The vehicle with the smallest average distance within a cluster is chosen as the cluster head. Since the Floyd-Warshall algorithm generally selects a centrally located vehicle as the cluster head, cluster stability time improves significantly.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_2-Cluster_Formation_and_Cluster_Head_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Different Data Types on Classifier Performance of Random Forest, Na&#239;ve Bayes, and K-Nearest Neighbors Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081201</link>
        <id>10.14569/IJACSA.2017.081201</id>
        <doi>10.14569/IJACSA.2017.081201</doi>
        <lastModDate>2017-12-30T17:07:08.1000000+00:00</lastModDate>
        
        <creator>Asmita Singh</creator>
        
        <creator>Malka N. Halgamuge</creator>
        
        <creator>Rajasekaran Lakshmiganthan</creator>
        
        <subject>Big data; random forest; Na&#239;ve Bayes; k-nearest neighbors algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(12), 2017</description>
        <description>This study evaluates the impact of three data types (text only, numeric only, and text + numeric) on the performance of the Random Forest (RF), k-Nearest Neighbors (kNN), and Na&#239;ve Bayes (NB) classifiers. The classification problems are explored in terms of mean accuracy and the effects of varying algorithm parameters over different types of datasets. Eight different datasets taken from the UCI repository were used to train models for all three algorithms. The results clearly show that RF and kNN outperform NB. Furthermore, kNN and RF perform about the same in terms of mean accuracy, although kNN takes less time to train a model. Changing the number of attributes in a dataset has no effect on Random Forest, whereas the mean accuracy of Na&#239;ve Bayes fluctuates and ends lower, and the mean accuracy of kNN increases and ends higher. Additionally, changing the number of trees has no significant effect on the mean accuracy of Random Forest, although it greatly increases the time to train the model. Random Forest and k-Nearest Neighbors prove to be the best classifiers for any type of dataset, although Na&#239;ve Bayes can outperform the other two algorithms when the feature variables are independent. Random Forest requires the highest computational time and Na&#239;ve Bayes the lowest. k-Nearest Neighbors requires finding an optimal value of k for improved performance, at the cost of computation time. Likewise, changing the number of attributes affects the performance of Na&#239;ve Bayes and k-Nearest Neighbors but not Random Forest. This study can be extended by researchers who use parametric methods to analyze the results.</description>
        <description>http://thesai.org/Downloads/Volume8No12/Paper_1-Impact_of_Different_Data_Types_on_Classifier_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NoSQL Racket: A Testing Tool for Detecting NoSQL Injection Attacks in Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081178</link>
        <id>10.14569/IJACSA.2017.081178</id>
        <doi>10.14569/IJACSA.2017.081178</doi>
        <lastModDate>2017-12-04T07:55:51.2000000+00:00</lastModDate>
        
        <creator>Ahmed M. Eassa</creator>
        
        <creator>Omar H. Al-Tarawneh</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <creator>Ahmed S. Salama</creator>
        
        <subject>NoSQL; injection attack; web application; web security; testing tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A NoSQL injection attack targets interactive web applications that employ NoSQL database services. These applications accept user inputs and use them to form query statements at runtime. During a NoSQL injection attack, an attacker may provide malicious query segments as user input, resulting in a different database request. In this paper, a testing tool called “NoSQL Racket” is presented to detect NoSQL injection attacks in web applications. The basic idea of the tool is to check the intended structure of the NoSQL query by comparing the query statement structure in the code (static code analysis) with the runtime query statement (dynamic analysis). A major challenge is that, unlike relational databases with SQL as a standardized query language, NoSQL databases have no common query language. The proposed tool was tested on four different vulnerable web applications, and its effectiveness was compared against three well-known testers, none of which was able to detect any NoSQL injection attack. The implemented testing tool, however, is able to detect them.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_78-NoSQL_Racket_A_Testing_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensionality Reduction using Hybrid Support Vector Machine and Discriminant Independent Component Analysis for Hyperspectral Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081176</link>
        <id>10.14569/IJACSA.2017.081176</id>
        <doi>10.14569/IJACSA.2017.081176</doi>
        <lastModDate>2017-11-30T19:03:16.1400000+00:00</lastModDate>
        
        <creator>Murinto</creator>
        
        <creator>Nur Rochmah Dyah PA</creator>
        
        <subject>Classification; discriminant independent component analysis; support vector machine; hyperspectral image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A hyperspectral image is an image obtained from a satellite sensor. Such an image has more than 100 bands with a wide spectral range and increased spatial resolution, providing detailed information on the objects or materials present on the ground. These features make hyperspectral images well suited for classifying the earth&#8217;s surface cover, and research on hyperspectral data has accordingly tended to increase. Transforming the original data into a new, reduced-dimensional space is often done to overcome the &#8216;curse of dimensionality&#8217;, in which the dimensionality tends to grow exponentially. The data are mapped from the original space to a lower-dimensional space through a dimensionality reduction procedure that must represent the input observations effectively. In this research, we therefore propose a hybrid hyperspectral dimensionality reduction method that adopts the Support Vector Machine (SVM) and Discriminant Independent Component Analysis (DICA) techniques to reduce the original data and obtain better accuracy. SVM+DICA is used to reduce the dimensionality of the hyperspectral images, with kNN as the classifier. Experiments on the AVIRIS dataset yield an average accuracy of 0.7527, an overall accuracy of 0.7901, and a Kappa coefficient of 0.7608.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_76-Dimensionality_Reduction_using_Hybrid_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Examining Software Intellectual Property Rights</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081175</link>
        <id>10.14569/IJACSA.2017.081175</id>
        <doi>10.14569/IJACSA.2017.081175</doi>
        <lastModDate>2017-11-30T19:03:16.1100000+00:00</lastModDate>
        
        <creator>Ehsan Sargolzaei</creator>
        
        <creator>Fateme Keikha</creator>
        
        <subject>Software intellectual property rights (IPR); software piracy; copyright; patent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The intellectual property rights (IPR) of computer software are the rights that assign the software to its creator; they are not limited by time or space and are non-transferable. Proving the IPR of the creators of computer software requires a rigorous review of the ways in which these rights may be violated. The present study compared two populations in Iran with the aim of identifying their level of familiarity with and observance of software IPR: 1) 96 software engineers who are IEEE members and 2) 386 randomly selected students. The results were analyzed with SPSS software and their validity verified using a t-test. The comparison showed that the first population observed these rights significantly more. A model is then presented for protecting software IPR so that the challenges are reduced. This research completes our previous work, where it was discussed as future work.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_75-Examining_Software_Intellectual_Property_Rights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Security and Learning Content Management System (LCMS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081174</link>
        <id>10.14569/IJACSA.2017.081174</id>
        <doi>10.14569/IJACSA.2017.081174</doi>
        <lastModDate>2017-11-30T19:03:16.0930000+00:00</lastModDate>
        
        <creator>Walid Qassim Qwaider</creator>
        
        <subject>E-Learning; LCMS; LMS; CMS; information security (IS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The learning environment has recently undergone a quantum leap due to rapid growth in information technology. This development has allowed the e-learning environment to take advantage of electronic tools to improve teaching methods using an LCMS. The emergence of many e-learning institutions has accelerated the adoption of information and communication technology without due care for and understanding of security concerns. An LCMS is a new learning method that ultimately relies on the web for its implementation. This article discusses essential elements of information security (IS) that must be applied throughout the information management system. On the other hand, the paper also identifies measures against IS threats that can boost IS within the information management system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_74-Information_Security_and_Learning_Content_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tsunami Warning System with Sea Surface Features Derived from Altimeter Onboard Satellites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081173</link>
        <id>10.14569/IJACSA.2017.081173</id>
        <doi>10.14569/IJACSA.2017.081173</doi>
        <lastModDate>2017-11-30T19:03:16.0770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Active database system; ocean related data stream; assimilation data; altimeter onboard satellites; Geographic Information System (GIS); tsunami</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A tsunami warning system is proposed as one of the global risk management systems; it is based on an active database system with satellite-derived real-time data on tides, significant wave height, and ocean wind speed, as well as assimilation data on sea level changes. A Geographic Information System (GIS) built on the free open-source software PostGIS is also proposed for the active database system. The proposed tsunami warning and evacuation information system is found to be recommendable.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_73-Tsunami_Warning_System_with_Sea_Surface_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forced-Driven Wet Cloth Simulation based on External Physical Dynamism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081172</link>
        <id>10.14569/IJACSA.2017.081172</id>
        <doi>10.14569/IJACSA.2017.081172</doi>
        <lastModDate>2017-11-30T19:03:16.0630000+00:00</lastModDate>
        
        <creator>Ahmad Hoirul Basori</creator>
        
        <creator>Hani Moaiteq Abdullah AlJahdali</creator>
        
        <creator>Omar Salim Abdullah</creator>
        
        <subject>Wet cloth simulation; fabric; mass spring; fluid; wind; gravity forces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Cloth simulation has remained challenging for the past two decades. Several factors contribute to this challenge, such as internal and external forces and water, oil, and other fluid elements. This paper focuses on simulating wet cloth by considering external forces and a water element. Initially, the mass-spring technique is used to produce a cloth sheet composed of a matrix of points connected by springs; then external and internal forces are applied to the cloth surfaces. The internal strength is represented by the stiffness of the springs between cloth particles, while the external forces depend on wind pressure and the mass of the object under gravity. The wet cloth simulation starts by adding the fluid component to the textile elements, which affects the mass of the cloth itself. The cloth absorbs a significant quantity of fluid, which alters the tension between the spring particles inside the cloth. An experiment was conducted in which the cloth absorbs fluid under the control of a particular equation. It shows that the saturation level of the cloth changes and that its texture becomes darker compared to dry cloth. The darkest cloth color reflects the highest saturation level, meaning the cloth cannot absorb more fluid because it is already at full capacity. The evaluation compares dry and wet cloth in terms of motion and physical appearance. It is concluded that the proposed method can produce a convincing wet cloth simulation with a high frames-per-second (FPS) rate and realistic motion and appearance. Future work can focus on simulating the interaction between fluid and cloth elements, such as spill scenes or washing cloth, which remain challenging.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_72-Forced_Driven_Wet_Cloth_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimal Load Balanced Resource Allocation Scheme for Heterogeneous Wireless Networks based on Big Data Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081171</link>
        <id>10.14569/IJACSA.2017.081171</id>
        <doi>10.14569/IJACSA.2017.081171</doi>
        <lastModDate>2017-11-30T19:03:16.0300000+00:00</lastModDate>
        
        <creator>Abbas Mirzaei</creator>
        
        <creator>Morteza Barari</creator>
        
        <creator>Houman Zarrabi</creator>
        
        <subject>Heterogeneous wireless networks; radio resource management; quality of service; big data technology; decision making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>An important issue in heterogeneous wireless networks is how to utilize the various radio resources optimally. While many methods have been proposed for managing radio resources within a single network, these methods are not suitable for heterogeneous wireless networks. In this study, a new management method is proposed that provides acceptable service quality and roaming rate, reduces the cost of the service, and utilizes big data technology for its operation. In the proposed scheme, the most suitable radio access technology is selected by considering various parameters such as the type of service, the user's location, the user's direction of movement, the cost of the service, and a number of other statistical measures. Besides improving the accuracy of radio access technology selection and balancing the distribution of network resources, the proposed method is expected to provide a lower roaming rate and a lower probability of blocking roaming requests entering the network. By considering the various service classes and their quality-of-service requirements regarding delay, jitter, and so on, the method can be useful for the optimal implementation of heterogeneous wireless networks.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_71-An_Optimal_Load_Balanced_Resource_Allocation_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Short Review of Gender Classification based on Fingerprint using Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081170</link>
        <id>10.14569/IJACSA.2017.081170</id>
        <doi>10.14569/IJACSA.2017.081170</doi>
        <lastModDate>2017-11-30T19:03:16.0170000+00:00</lastModDate>
        
        <creator>Sri Suwarno</creator>
        
        <creator>P. Insap Santosa</creator>
        
        <subject>Fingerprint; gender; ridge density; wavelet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In some cases, knowing the gender of the owner of a fingerprint found at a crime or disaster scene is advantageous. Theoretically, if the numbers of male and female fingerprints in a database are equal, then identifying a fingerprint in that database would be twice as fast. Several methods have been used to classify gender based on fingerprints, most of them based on ridge density. These methods have shown good results; however, they are sensitive to the location of the fingerprint area in which the ridge density is determined. This paper reviews the literature that uses the wavelet transform to generate fingerprint features. As far as we have found, the number of papers on this topic is very limited. Nevertheless, based on the literature reviewed, the wavelet transform offers some advantages over ridge density counting.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_70-A_Short_Review_of_Gender_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Simulation of Adaptive Controller for Single Phase Grid Connected Photovoltaic Inverter under Distorted Grid Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081169</link>
        <id>10.14569/IJACSA.2017.081169</id>
        <doi>10.14569/IJACSA.2017.081169</doi>
        <lastModDate>2017-11-30T19:03:15.9830000+00:00</lastModDate>
        
        <creator>Mohamed Alswidi</creator>
        
        <creator>Abdulaziz Aldobhani</creator>
        
        <creator>Abdurraqib Assad</creator>
        
        <subject>Single phase grid-connected photovoltaic inverter; adaptive controller; grid parameter variations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This paper presents an adaptive controller for a single-phase grid-connected photovoltaic inverter under abnormal grid conditions. The main problem with controllers for grid-connected inverters is that they are tuned for assumed values of the electrical grid parameters. When parameters such as voltage and frequency change, or the grid is subject to uncertain distortion, these controllers are unable to track the variations in the grid parameters and keep the output power within the allowable limit. To overcome this problem, a suitable control strategy is proposed, based on frequency-adaptive current control and accurate grid detection. For validation, a controlled 3 kW system with specific features was designed and simulated. The simulation results confirm that the strategy is an effective means of control.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_69-Design_and_Simulation_of_Adaptive_Controller_for_Single_Phase_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Clinical Decision Support Systems Success Factors with Usability Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081168</link>
        <id>10.14569/IJACSA.2017.081168</id>
        <doi>10.14569/IJACSA.2017.081168</doi>
        <lastModDate>2017-11-30T19:03:15.9700000+00:00</lastModDate>
        
        <creator>Vitri Tundjungsari</creator>
        
        <creator>Abdul Salam Mudzakir Sofro</creator>
        
        <creator>Ahmad Sabiq</creator>
        
        <creator>Aan Kardiana</creator>
        
        <subject>Clinical decision support systems; success factors; user; usability testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Clinical Decision Support Systems (CDSS) have been used widely since the 2000s to improve healthcare quality. CDSS can support healthcare services as tools for diagnosis and prediction, as well as for providing clinical interpretations, alerts, and reminders. Many studies of CDSS implementation appear in the literature, but few present evidence of successful implementation; despite the potential of CDSS, some studies reveal implementation failures. This paper contributes to CDSS development by investigating and exploring CDSS success factors through usability testing. The testing involved participants from different backgrounds (physicians, IT developers, and students), who were asked to try three different CDSS for predicting cardiovascular risk factors. The results show that involving different types of users gives more insight into the design process. It can be concluded that user-centered design is critical to producing a successful CDSS.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_68-Investigating_Clinical_Decision_Support_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Refactoring Approaches: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081167</link>
        <id>10.14569/IJACSA.2017.081167</id>
        <doi>10.14569/IJACSA.2017.081167</doi>
        <lastModDate>2017-11-30T19:03:15.9370000+00:00</lastModDate>
        
        <creator>Ismail M. Keshta</creator>
        
        <subject>Software refactoring; refactoring tool; machine learning; hierarchical clustering; graph transformations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The objective of software refactoring is to improve the quality of a software product by improving its performance and understandability; there are also other quality attributes that software refactoring can improve. This study gives a broad overview of five primary approaches to software refactoring: two clustering approaches at the class level, two at the package level, and one graph-transformation approach at the class level. The study also compares these approaches using several evaluation criteria.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_67-Software_Refactoring_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Driven Development Transformations using Inductive Logic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081166</link>
        <id>10.14569/IJACSA.2017.081166</id>
        <doi>10.14569/IJACSA.2017.081166</doi>
        <lastModDate>2017-11-30T19:03:15.9370000+00:00</lastModDate>
        
        <creator>Hamdi A. Al-Jamimi</creator>
        
        <creator>Moataz A. Ahmed</creator>
        
        <subject>Transformation model; software design models; transformation rules; inductive logic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Model transformation by example is a novel approach in model-driven software engineering. The rationale behind the approach is to derive transformation rules from an initial set of interrelated source and target models, e.g., requirements analysis and software design models. The derived rules describe the different transformation steps in a purely declarative way. Inductive Logic Programming combines the power of machine learning with the capability of logic programming to induce valid hypotheses from given examples. In this paper, we use Inductive Logic Programming to derive transformation rules from given examples of analysis-design pairs. As a proof of concept, we applied the approach to two major software design tasks: class packaging and introducing the Fa&#231;ade design pattern. Various analysis-design model pairs collected from different sources were used as case studies. The resulting performance measures show that the approach is promising.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_66-Model_Driven_Development_Transformations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081165</link>
        <id>10.14569/IJACSA.2017.081165</id>
        <doi>10.14569/IJACSA.2017.081165</doi>
        <lastModDate>2017-11-30T19:03:15.8900000+00:00</lastModDate>
        
        <creator>A. Salhi</creator>
        
        <creator>B. Minaoui</creator>
        
        <creator>M. Fakir</creator>
        
        <creator>H. Chakib</creator>
        
        <creator>H. Grimech</creator>
        
        <subject>Traffic signs detection and recognition; Histogram of oriented gradient (HOG); Support Vector Machine (SVM); Histogram projection (HP); Multi-layer perceptron (MLP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Detection and recognition of traffic signs in a video stream consists of two steps: detecting the signs in the road scene and recognizing their type. This process is usually evaluated globally, which unfortunately does not allow a fine-grained analysis of the performance of each step; it is difficult to know which step must be improved to obtain a more efficient system. Our previous work focused on real-time detection of road signs, improving the performance of the detection step. In this paper, we complete that work by focusing on the recognition step, where we compare the histogram projection (HP) descriptor and the histogram of oriented gradients (HOG) descriptor, each combined with the Multi-Layer Perceptron (MLP) classifier and the Support Vector Machine (SVM) classifier, to compute the characteristics and descriptors of the objects extracted in the detection step and identify the type of traffic sign. Experimental results present the performance of the four &#8220;descriptor-classifier&#8221; combinations, identifying which of them can achieve high performance for traffic sign recognition.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_65-Traffic_Signs_Recognition_using_HP_and_HOG_Descriptors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization and Simulation Approach for Empty Containers Handling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081164</link>
        <id>10.14569/IJACSA.2017.081164</id>
        <doi>10.14569/IJACSA.2017.081164</doi>
        <lastModDate>2017-11-30T19:03:15.8730000+00:00</lastModDate>
        
        <creator>Chafik Razouk</creator>
        
        <creator>Youssef Benadada</creator>
        
        <subject>Container terminal; design; doubled trailer; simulation; arena; strength of materials; quay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Container handling problems at container terminals are NP-hard. In this paper, we propose a new handling operation design and simulation for empty containers, taking into account the interrelated activities at the container terminal. The simulation is built around a doubled trailer, which moves containers from the quayside to the yard side or the reverse depending on the flow in the container terminal; it is used to optimize cycle time and to improve the efficiency of the other equipment. Our interest is to test this new model first for empty containers. The proposed model is applied to real case study data from the container terminal at the Tanger Med port. The new design was developed using Arena software while verifying the strength-of-materials constraint for the loaded containers. The computational results show the effectiveness of the proposed model: the cycle time of the port equipment is reduced by 58%, and efficiency is increased, with 47% more moves achieved in the container terminal.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_64-A_Simulation_Approach_using_a_New_Handling_Concept.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collective Movement Method for Swarm Robot based on a Thermodynamic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081163</link>
        <id>10.14569/IJACSA.2017.081163</id>
        <doi>10.14569/IJACSA.2017.081163</doi>
        <lastModDate>2017-11-30T19:03:15.8430000+00:00</lastModDate>
        
        <creator>Kouhei YAMAGISHI</creator>
        
        <creator>Tsuyoshi SUZUKI</creator>
        
        <subject>Swarm robotics system; solidity and flexibility collective movement; thermodynamics; distributed control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In this paper, a distributed collective movement control method is proposed for a swarm robotics system based on an internal-energy thermodynamic model. The system can move between obstacles with a changing aggregation suitable for confronting the obstacle arrangements in an environment. The swarm robot shape is a fixed aggregation formed by virtual attraction and repulsion forces based on the proposed method, and it follows a leader agent while retaining its shape. When the aggregation shape cannot be maintained during movement through narrow spaces with obstacles, the swarm robot flexibly changes shape according to that of the local environment. To this end, it employs virtual thermal motion, which is made possible with directives and enables continuous movement. A simulation confirmed the capability of the proposed method in enabling the solidity and flexibility collective movement of swarm robots. The results furthermore showed that the parameter setting range is important for applying the proposed method to collective movement.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_63-Collective_Movement_Method_for_Swarm_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Pattern Matching Algorithm for Portable Document Format</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081162</link>
        <id>10.14569/IJACSA.2017.081162</id>
        <doi>10.14569/IJACSA.2017.081162</doi>
        <lastModDate>2017-11-30T19:03:15.8270000+00:00</lastModDate>
        
        <creator>Anton Yudhana</creator>
        
        <creator>Sunardi</creator>
        
        <creator>Abdul Djalil Djayali</creator>
        
        <subject>Pattern matching; Rabin-Karp algorithm; data mining; web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Internet availability means e-documents are freely used in the community. This condition creates the potential for plagiarism of scientific e-documents. Plagiarism detection is in some cases done manually by human effort, which introduces the potential for mistakes in observing and remembering the checkpoints already covered. The method used in this research represents the two sets of compared objects in the form of a probability. To make the method work properly, the Rabin-Karp algorithm is applied; Rabin-Karp is a string-matching algorithm that uses hash functions to compare the searched string (m) with substrings in the text (n). If both hash values are the same, the comparison is performed once more at the character level. The resulting system is a web-based application that shows the similarity value of two sets of objects.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_62-Implementation_of_Pattern_Matching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Multilayered Particle Swarm Optimized Neural Network (AMPSONN) for Pipeline Corrosion Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081161</link>
        <id>10.14569/IJACSA.2017.081161</id>
        <doi>10.14569/IJACSA.2017.081161</doi>
        <lastModDate>2017-11-30T19:03:15.7970000+00:00</lastModDate>
        
        <creator>Kien Ee Lee</creator>
        
        <creator>Izzatdin bin Abdul Aziz</creator>
        
        <creator>Jafreezal bin Jaafar</creator>
        
        <subject>Corrosion; damage mechanism; prediction method; artificial neural network; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Artificial Neural Network (ANN) design has long been a complex problem because its performance depends heavily on the network topology and on the algorithm used to train the set of synaptic weights. Particle Swarm Optimization (PSO) has been the favored optimization algorithm to complement ANN, but a thorough literature study has shown that there are gaps in current approaches that integrate PSO with ANN, including the optimization of network topology and the unreliable weight-training process. These gaps have adversely affected critical Artificial Intelligence (AI) applications and systems, particularly when predicting plant machinery and piping failure due to corrosion. The problem of corrosion prediction in the oil and gas domain remains unsolved due to the lack of a flexible prediction method that targets the specific damage mechanisms that cause corrosion. This paper proposes a hybrid prediction method known as the Adaptive Multilayered Particle Swarm Optimized Neural Network (AMPSONN), which integrates several layers of PSO to optimize different parameters of the ANN. The multilayered PSO enables the method to optimize the network topology and train the set of synaptic weights at the same time using a hierarchical optimization approach. Based on a detailed discussion and literature study, the damage mechanism focused on in this research is CO2 corrosion, and the dataset is obtained from the NORSOK empirical model. The proposed AMPSONN method is tested against the BP, MPSO and PSOBP methods on an industrial corrosion dataset under different test conditions. The results showed that AMPSONN performs best on all three problems, exhibiting high classification accuracy and time efficiency.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_61-Adaptive_Multilayered_Particle_Swarm_Optimized_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Formal Model of RFID-Based Patient Registration System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081160</link>
        <id>10.14569/IJACSA.2017.081160</id>
        <doi>10.14569/IJACSA.2017.081160</doi>
        <lastModDate>2017-11-30T19:03:15.7670000+00:00</lastModDate>
        
        <creator>Marrium Khalid</creator>
        
        <creator>Hamra Afzaal</creator>
        
        <creator>Shoaib Hassan</creator>
        
        <creator>Nazir Ahmad Zafar</creator>
        
        <subject>Patient registration system; Radio Frequency Identification (RFID); bracelet; formal method; semiformal modeling; verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The Patient Registration System (PRS) is an important part of the hospital environment. This paper presents a semiformal model of a Patient Registration System that registers patients by assigning a Radio Frequency Identification (RFID) card or bracelet. Existing Patient Registration Systems do not work properly because of the ambiguities inherent in semiformal modeling techniques; we therefore propose a formal model of the PRS using the Vienna Development Method (VDM-SL). Firstly, we develop a Unified Modeling Language (UML) based semiformal model of the PRS, because UML supports a better understanding of the system architecture. Formal methods are used to ensure the accuracy and robustness of the system; therefore, we transform the UML-based model into a formal model by writing a formal specification of the system to improve the accuracy and efficiency of the PRS. In this way, the development time and the testing and maintenance cost of building RFID-based PRS software are reduced to a great extent.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_60-Analysis_and_Formal_Model_of_RFID.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Rainfall Pattern for Pakistan using Computational Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081159</link>
        <id>10.14569/IJACSA.2017.081159</id>
        <doi>10.14569/IJACSA.2017.081159</doi>
        <lastModDate>2017-11-30T19:03:15.7500000+00:00</lastModDate>
        
        <creator>M. Ali Aun</creator>
        
        <creator>Abdul Ghani</creator>
        
        <creator>M.Azeem</creator>
        
        <creator>M. Adnan</creator>
        
        <creator>M. Ahsan Latif</creator>
        
        <subject>Rainfall patterns; trend detection; time-series analysis; principal component analysis; box-plot; moving average over shifting horizon; inter-annual variability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Across the world, rainfall patterns and seasons are shifting in new directions due to global warming. In the case of Pakistan, unusual rainfall events may result in droughts, floods and other natural disasters, along with disturbance of the economy, so a scientific understanding of rainfall patterns will be very helpful for water management and for the economy. In this paper, we attempt to recognize rainfall patterns over selected regions of Pakistan. All the time-series data of meteorological stations are taken from the PMD (Pakistan Meteorological Department). Using PCA (Principal Component Analysis), monthly meteorological observations of all the stations in Punjab have been analyzed, covering an area of 205,344 km&#178; that includes monsoon-dominated regions. To tackle the problems of inter-annual variation, trend detection, and seasonality, rainfall data for Lahore, Pakistan, covering the period 1976-2006, is used. To obtain results, MASH (Moving Average over Shifting Horizon) and PCA, along with supporting techniques such as bi-plots and pair-wise correlation, have been applied. The results of this study successfully reveal seasonal patterns, variations and hidden information in the complex precipitation data structure.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_59-Recognizing_Rainfall_Pattern_for_Pakistan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NHCA: Developing New Hybrid Cryptography Algorithm for Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081158</link>
        <id>10.14569/IJACSA.2017.081158</id>
        <doi>10.14569/IJACSA.2017.081158</doi>
        <lastModDate>2017-11-30T19:03:15.7200000+00:00</lastModDate>
        
        <creator>Ali Abdulridha Taha</creator>
        
        <creator>Diaa Salama AbdElminaam</creator>
        
        <creator>Khalid M Hosny</creator>
        
        <subject>Hybrid cryptography algorithms; symmetric encryption algorithms; asymmetric encryption algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The amount of data transmitted through the internet becomes larger every day. The need for an encryption algorithm that guarantees transmitting data speedily and in a secure manner has become a must. The aim of this research is to encrypt and decrypt data efficiently and to effectively protect the transmitted data. This paper presents a model for encrypting transmitted cloud data. The model combines the encryption algorithms RSA, Triple DES, RC4, and Krishna to generate a new algorithm that encrypts and decrypts transmitted data. The algorithm will help cloud agencies and users to secure their transmitted data and prevent it from being stolen.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_58-NHCA_Developing_New_Hybrid_Cryptography_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Study and Fault Detection for the Railway System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081157</link>
        <id>10.14569/IJACSA.2017.081157</id>
        <doi>10.14569/IJACSA.2017.081157</doi>
        <lastModDate>2017-11-30T19:03:15.6870000+00:00</lastModDate>
        
        <creator>ARFA Rawia</creator>
        
        <creator>TLIJANI Hatem</creator>
        
        <creator>KNANI Jilani</creator>
        
        <subject>Dynamic model; the wheel-rail–sleepers system; interaction system; Euler–Bernoulli; Luenberger observer (LO); fault detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The wheel-rail-sleepers system is simulated as a series of moving point loads on an Euler–Bernoulli beam resting on a visco-elastic half space. This paper concentrates on the rail-sleepers interaction system (railway system) and on fault detection. The main objective is to mathematically develop and implement a dynamic model of a railway system and then to diagnose system defects using a Luenberger observer (LO). The simulation results are based on a physical description, mathematical equations and simulations in MATLAB.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_57-Model_Study_and_Fault_Detection_for_the_Railway_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performances Comparison of IEEE 802.15.6 and IEEE 802.15.4 Optimization and Exploitation in Healthcare and Medical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081156</link>
        <id>10.14569/IJACSA.2017.081156</id>
        <doi>10.14569/IJACSA.2017.081156</doi>
        <lastModDate>2017-11-30T19:03:15.6730000+00:00</lastModDate>
        
        <creator>C.E. AIT ZAOUIAT</creator>
        
        <creator>A.LATIF</creator>
        
        <subject>Guaranteed Time Slot (GTS); polling; WBAN; IEEE 802.15.6; IEEE 802.15.4; energy consumption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In this paper, we simulate the energy consumption, throughput and reliability of both the ZigBee IEEE 802.15.4 MAC protocol and the BAN IEEE 802.15.6 protocol, as exploited in medical applications using Guaranteed Time Slot (GTS) and polling mechanisms, with the Castalia simulator. We then compare and analyze the simulation results. The originality of this work lies in providing decisive factors for choosing the appropriate MAC protocol in a medical context, depending on the energy consumption, the number of nodes used, and the sensor data rates.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_56-Performances_Comparison_between_IEEE_BAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performances Analysis of a SCADA Architecture for Industrial Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081155</link>
        <id>10.14569/IJACSA.2017.081155</id>
        <doi>10.14569/IJACSA.2017.081155</doi>
        <lastModDate>2017-11-30T19:03:15.6400000+00:00</lastModDate>
        
        <creator>Simona-Anda TCACIUC</creator>
        
        <subject>SCADA systems; middleware; data acquisition; data stream; distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>SCADA (Supervisory Control And Data Acquisition) systems are used to monitor and control various industrial processes and have been continuously developed to incorporate new technologies from the software development and fieldbus areas. Middleware communication plays the most relevant role in the development of complex distributed systems such as SCADA systems. These systems are very complex and must be reliable and predictable; furthermore, their performance capabilities are very important. This paper presents a performance analysis of a SCADA system developed for the Windows platform, including Windows Embedded Compact. The analysis focuses on the performance difference between computing systems based on Windows desktop and Windows CE operating systems. Windows CE is useful for applications with real-time requirements that cannot be met by Windows desktop. Testing the application and analyzing the results led to the validation of the proposed SCADA system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_55-Performances_Analysis_of_a_SCADA_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of State-of-the-Art on Wireless Body Area Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081154</link>
        <id>10.14569/IJACSA.2017.081154</id>
        <doi>10.14569/IJACSA.2017.081154</doi>
        <lastModDate>2017-11-30T19:03:15.6100000+00:00</lastModDate>
        
        <creator>Fatemeh Rismanian Yazdi</creator>
        
        <creator>Mehdi Hosseinzadeh</creator>
        
        <creator>Sam Jabbehdari </creator>
        
        <subject>Wireless body area networks; review; challenges; applications; architecture; radio technologies; telemedicine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>During the last few years, Wireless Body Area Networks (WBANs) have emerged in many application domains, such as medicine, sport, entertainment, the military, and monitoring. This emerging networking technology can be used for e-health monitoring. In this paper, we review the literature and investigate the challenges in the development architecture of WBANs. We then classify the challenges of WBANs that need to be addressed for their development. Moreover, we investigate various diseases and healthcare systems and the current state of the art of applications, focusing mainly on remote monitoring for elderly and chronically ill patients. Finally, relevant research issues and future developments are discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_54-A_Review_of_State_of_the_Art_on_Wireless_Body.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relaxed Random Search for Solving K-Satisfiability and its Information Theoretic Interpretation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081153</link>
        <id>10.14569/IJACSA.2017.081153</id>
        <doi>10.14569/IJACSA.2017.081153</doi>
        <lastModDate>2017-11-30T19:03:15.5930000+00:00</lastModDate>
        
        <creator>Amirahmad Nayyeri</creator>
        
        <creator>Gholamhossein Dastghaibyfard</creator>
        
        <subject>Constraint satisfaction problem; K-SAT; threshold phenomena; randomized algorithm; entropy; NP-completeness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The problem of finding satisfying assignments for a conjunctive normal form formula with K literals in each clause, known as K-SAT, has attracted much attention over the previous three decades. Since it is known to be NP-complete, an effective solution (finding a solution within polynomial time) would be of great interest due to its relation to the most well-known open problem in computer science (the P=NP conjecture). Different strategies have been developed to solve this problem, but in all of them the complexity remains in the NP class. In this paper, following the recent approach of applying statistical-physics methods to analyze the phase transition in the complexity of algorithms used for solving K-SAT, we compute the complexity of using a randomized algorithm to find a solution of K-SAT in more relaxed regions. It is shown how the probability used in the literal-flipping process can change the complexity of the algorithm substantially. An information-theoretic interpretation of this reduction in time complexity is also argued.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_53-Relaxed_Random_Search_for_Solving_K_Satisfiability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Node Monitoring Mechanism in WSN using Contikimac Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081152</link>
        <id>10.14569/IJACSA.2017.081152</id>
        <doi>10.14569/IJACSA.2017.081152</doi>
        <lastModDate>2017-11-30T19:03:15.5630000+00:00</lastModDate>
        
        <creator>Shahzad Ashraf</creator>
        
        <creator>Mingsheng Gao</creator>
        
        <creator>Zhengming Chen</creator>
        
        <creator>Syed Kamran Haider</creator>
        
        <creator>Zeeshan Raza</creator>
        
        <subject>Wireless sensor networks; low power listening; ContikiMAC; Cooja</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A wireless sensor network is monitored with ContikiMAC in the Cooja simulator to diagnose the energy utilization ratio of nodes and the fault detection process in a distributed approach; the Low Power Listening (LPL) mechanism is adopted with ContikiMAC to prolong the network’s lifetime. LPL locates the root cause of communication issues, eliminates interruption problems, and restores the normal communication state. The LPL mechanism reduces energy utilization in both centralized and distributed approaches; moreover, the distributed approach is best suited for network monitoring when energy utilization is the main objective in the presence of LPL. How soon a faulty node can be detected is also important: latency plays a vital role in the monitoring mechanism, and low latency is achieved by developing an efficient faulty-node detection methodology.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_52-Efficient_Node_Monitoring_Mechanism_in_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of an Image Processing Algorithm on a Platform DSP Tms320c6416</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081151</link>
        <id>10.14569/IJACSA.2017.081151</id>
        <doi>10.14569/IJACSA.2017.081151</doi>
        <lastModDate>2017-11-30T19:03:15.5300000+00:00</lastModDate>
        
        <creator>Farah Dhib Tatar</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>Fingerprint; biometrics; images processing; embedded systems; DSP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In the context of emerging technologies, Cloud Computing (CC) was introduced as a new paradigm to host and deliver Information Technology services. Cloud computing is a new model for delivering resources. However, many critical problems have appeared with cloud computing, such as data privacy, security, and reliability, with security the most important among them. Biometric identification is a reliable and one of the easiest ways to recognize a person using extractable characteristics. In addition, a biometric application requires fast and powerful processing systems, hence the increased use of embedded systems in biometric applications, especially in image processing. Embedded systems come in a wide variety, and the choice of a well-suited processor is one of the most important factors that directly affect the overall performance of the system. This study highlights the performance of a Texas Instruments DSP for processing a biometric fingerprint recognition system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_51-Implementation_of_an_Image_Processing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Remote Sensor Network using Android Things and Cloud Computing for the Food Reserve Agency in Zambia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081150</link>
        <id>10.14569/IJACSA.2017.081150</id>
        <doi>10.14569/IJACSA.2017.081150</doi>
        <lastModDate>2017-11-30T19:03:15.5170000+00:00</lastModDate>
        
        <creator>Mulima Chibuye</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Internet of things; android things; wireless sensor network (WSN); remote sensor network; Food Reserve Agency (FRA); grain marketing; modern warehousing; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In order to introduce modern warehousing and improve the storage of grain and the grain marketing business processes of the Food Reserve Agency in Zambia, a prototype of a remote sensor network was developed and built as a proof of concept for a much wider deployment using cloud computing and the Internet of Things concept. It was determined that a wireless sensor network would aid the Food Reserve Agency in analytics, timely action and real-time reporting from all its food depots spread throughout Zambia. Google’s Android Things platform was used to achieve these objectives. The advantages of Android Things over the traditional platforms that have been used to develop wireless sensor networks are examined and presented in this paper.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_50-A_Remote_Sensor_Network_using_Android_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Self-Learning Program for the Bending Process of Quartz Glass</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081149</link>
        <id>10.14569/IJACSA.2017.081149</id>
        <doi>10.14569/IJACSA.2017.081149</doi>
        <lastModDate>2017-11-30T19:03:15.4830000+00:00</lastModDate>
        
        <creator>Masamichi Suda</creator>
        
        <creator>Akio Hattori</creator>
        
        <creator>Akihiko Goto</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <creator>Hiroyuki Hamada</creator>
        
        <subject>Quartz glass; self-learning; bending; experts; beginner; process analysis; text analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Quartz glass is a high-performance glass material with high heat and chemical resistance, wide optical transparency ranging from ultraviolet to infrared light, and high formativeness as a glass material. Because it has high morphological stability due to its heat resistance and low thermal expansion, it is widely used as a material for specialized research and development and for high-precision components. There are several techniques for processing quartz glass, an important one being fire processing. Fire processing requires the technology to heat and mold glass material at high temperature, and high-quality processing is done by the manual work of experts. In this study, we focused on bending work, the process that demands particularly high skill within fire processing. We developed a self-learning program enabling beginners to improve their skill in a short time by using the bending know-how of experts, clarified through process analysis, product evaluation and interviews with an expert, and we examined its effectiveness. As a result, a consistent educational effect was observed in the improvement of a beginner's bending skill within a short period of time.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_49-Development_of_Self_Learning_Program_for_the_Bending_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Figural a Flexibility Test for Improving Creative Thinking in an Arabic Learning Environment: A Saudi Arabia-Based Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081148</link>
        <id>10.14569/IJACSA.2017.081148</id>
        <doi>10.14569/IJACSA.2017.081148</doi>
        <lastModDate>2017-11-30T19:03:15.4530000+00:00</lastModDate>
        
        <creator>Nahla Aljojo</creator>
        
        <subject>Creative thinking; kit of factor referenced cognitive tests; students; toothpicks; planning patterns; storage test; cognitive abilities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The capability of graduates to be flexible in the face of rapidly altering situations is an increasingly crucial requirement that teachers should be conscious of, given that persistent development and technological progress are characteristic of contemporary life. Proficiency for learning and cognitive abilities are two areas in which learners need to acquire knowledge. Cognitive spatial ability has various dynamics, the assessment of which can be undertaken through numerous techniques. The major objectives of this paper are to develop a web-based system for measuring adult cognitive ability within an Arabic learning environment, in addition to enhancing their creative thinking and learning capabilities, through utilising the kit of factor referenced cognitive tests devised by Ekstrom et al. (1976). The web-based system focuses on the figural flexibility test (Toothpicks test, planning patterns, storage test). Each test has its own objective with regard to assisting with the measurement of people’s creative ability in different ways, as a means of enhancing creative thinking and learning. Prior to constructing the figural flexibility test system, we distributed a questionnaire in order to assess and examine certain crucial aspects, to inform the construction of our system. The questionnaires were distributed to university students in the Faculty of Computing and Information Technology (FCIT), in addition to random distribution via email and social media, namely, Facebook, Twitter and WhatsApp. Over 500 questionnaires were distributed, with 400 responses received. The objective was to assess the new system’s feasibility, as well as to design a system that meets the users’ requirements. As a result of the questionnaire, 77% of respondents were found to believe that creating a web-based system can assist students with developing their creative thinking and learning abilities.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_48-Figural_A_Flexibility_Test_for_Improving_Creative_Thinking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Logic Tsukamoto for SARIMA on Automation of Bandwidth Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081147</link>
        <id>10.14569/IJACSA.2017.081147</id>
        <doi>10.14569/IJACSA.2017.081147</doi>
        <lastModDate>2017-11-30T19:03:15.4370000+00:00</lastModDate>
        
        <creator>Isna Alfi Bustoni</creator>
        
        <creator>Adhistya Erna Permanasari</creator>
        
        <creator>Indriana Hidayah</creator>
        
        <creator>Indra Hidayatulloh</creator>
        
        <subject>Bandwidth allocation management; dynamic allocation; fuzzy logic; Tsukamoto inference method; SARIMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The wireless network is used in different fields to enhance information transfer between remote areas. In the education area, it can support knowledge transfer among academic members, including lecturers, students, and staff. To achieve this purpose, the wireless network should be well managed to accommodate all users. The Department of Electrical Engineering and Information Technology UGM sets up its wireless network for daily campus activity manually, monitors data traffic at a given time, and then shares it with users. This makes bandwidth sharing less effective. This study builds a dynamic bandwidth allocation management system which automatically determines bandwidth allocation based on the prediction of future bandwidth, by implementing the Seasonal Autoregressive Integrated Moving Average (SARIMA) model with the addition of outlier detection for more accurate results. Moreover, the determination of fixed bandwidth allocation was done using Fuzzy Logic with the Tsukamoto Inference Method. The results demonstrate that bandwidth allocations can be classified into 3 fuzzy classes from the quantitative forecasting results. Furthermore, manual and automatic bandwidth allocation were compared. Manual allocation had a MAPE of 70.76% with an average false positive value of 56 MB, compared to dynamic allocation using Fuzzy Logic and SARIMA, which had a MAPE of 38.9% and an average false positive value of around 13.84 MB. In conclusion, dynamic allocation was more effective at allocating bandwidth than manual allocation.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_47-Fuzzy_Logic_Tsukamoto_for_SARIMA_on_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resilient Framework for Distributed Computation Offloading: Overview, Challenges and Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081146</link>
        <id>10.14569/IJACSA.2017.081146</id>
        <doi>10.14569/IJACSA.2017.081146</doi>
        <lastModDate>2017-11-30T19:03:15.4230000+00:00</lastModDate>
        
        <creator>Collinson Colin M. Agbesi</creator>
        
        <creator>Jamal-Deen Abdulai</creator>
        
        <creator>Katsriku Apietu Ferdinand</creator>
        
        <creator>Kofi Adu-Manu Sarpong</creator>
        
        <subject>Cloud computing; mobile cloud computing; computation offloading; distributed computation offloading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Gradually, mobile and smart computing devices are becoming pervasive and prevalent in society and are increasingly being used to undertake the daily tasks and business activities of individuals and organizations worldwide, as compared to their desktop counterparts. However, these mobile and smart computing devices are resource constrained and sometimes lack the needed computational capacities, namely memory, energy, storage, and processing power, to run the plethora of resource-intensive applications available for mobile users. There are many benefits to offloading resource-demanding applications and intensive computations from mobile devices to other systems with higher resource capacities in the cloud. Mobile cloud computing is a form of cloud computing that seeks to enhance the capacity and capabilities of mobile and smart computing devices by enabling them to offload some computational tasks to the cloud for processing, which would otherwise be a challenge. The study set up an experiment to investigate computation offloading for mobile devices and also presented an energy model for computation offloading. It was observed during the experiment that by offloading intensive applications from mobile and smart computing devices to other systems with higher resource capacities, a great amount of resource efficiency is achieved.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_46-Resilient_Framework_for_Distributed_Computation_Offloading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: A Systematic Report on Issue and Challenges during Requirement Elicitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081145</link>
        <id>10.14569/IJACSA.2017.081145</id>
        <doi>10.14569/IJACSA.2017.081145</doi>
        <lastModDate>2017-11-30T19:03:15.3900000+00:00</lastModDate>
        
        <creator>Burhan Mohy-ud-din</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Ayesha Shahid</creator>
        
        <subject>Problems and difficulties in requirement gathering; process of eliciting requirements; requirement gathering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2017.081145.retraction</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_45-A_Systematic_Report_on_Issue_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Stereovision Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081144</link>
        <id>10.14569/IJACSA.2017.081144</id>
        <doi>10.14569/IJACSA.2017.081144</doi>
        <lastModDate>2017-11-30T19:03:15.3600000+00:00</lastModDate>
        
        <creator>Elena Bebeselea-Sterp</creator>
        
        <creator>Raluca Brad</creator>
        
        <creator>Remus Brad</creator>
        
        <subject>Stereo vision; disparity; correspondence; comparative study; middlebury benchmark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Stereo vision has been and continues to be one of the most researched domains of computer vision, with many applications, among them the depth extraction of a scene. This paper provides a comparative study of stereo vision and matching algorithms used to solve the correspondence problem. The study of matching algorithms was followed by experiments on the Middlebury benchmarks. The tests focused on a comparison of 6 stereovision methods. To assess performance, the RMS error and related statistics were computed. To emphasize the advantages of each stereo algorithm considered, two-frame methods have been employed, both local and global. The experiments conducted have shown that the best results are obtained by Graph Cuts; unfortunately, it has a higher computational cost. If high quality is not required by the application, local methods provide reasonable results within a much lower time-frame and offer the possibility of parallel implementations.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_44-A_Comparative_Study_of_Stereovision_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved QoS for Multimedia Transmission using Buffer Management in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081143</link>
        <id>10.14569/IJACSA.2017.081143</id>
        <doi>10.14569/IJACSA.2017.081143</doi>
        <lastModDate>2017-11-30T19:03:15.3430000+00:00</lastModDate>
        
        <creator>Majid Alotaibi</creator>
        
        <subject>packet delivery ratio (PDR); multimedia; buffer; priority; delay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Wireless Sensor Networks (WSN) attract the attention of the research community as they are easy to deploy, self-maintained, and do not require predefined infrastructure. These networks are commonly used to broadcast multimedia data from source to destination. However, this kind of data transmission has some challenges, i.e., power and bandwidth limitations together with small-delay requirements. State-of-the-art work mainly focuses on optimization, either via the shortest route or by minimizing delay through increased bandwidth. However, buffer management is the main constraint causing delay and loss of packets. In this paper, an approach is presented to manage the buffer, increase the packet delivery ratio (PDR), and reduce delay by dynamically assigning priorities to intra-coded (I), predictive-coded (P), and bidirectional-coded (B) frames. This approach is very effective at controlling the loss of packets in WSN. The presented approach is validated using Network Simulator 2.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_43-Improved_QoS_for_Multimedia_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>University ERP Preparation Analysis: A PPU Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081142</link>
        <id>10.14569/IJACSA.2017.081142</id>
        <doi>10.14569/IJACSA.2017.081142</doi>
        <lastModDate>2017-11-30T19:03:15.3130000+00:00</lastModDate>
        
        <creator>Islam K. Sowan</creator>
        
        <creator>Radwan Tahboub</creator>
        
        <creator>Faisal Khamayseh</creator>
        
        <subject>Enterprise Resources Planning (ERP); University ERP; software engineering practices software engineering phases activities; critical success factor; technical success factors; ERP implementation; successful ERP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Enterprise Resources Planning (ERP) systems are among the most frequently used systems in business organizations. Recently, the university sector began using ERP systems in order to increase the quality of academic and administrative services. However, the implementation of ERP is complicated and risky, and no single factor can guarantee a successful system. Previous studies were primarily concerned with Critical Success Factors (CSFs) in business organizations and organizational success factors, which produced plenty of information about these topics. However, the university environment and structure are different, which encourages us to study their specific technical critical success factors. In this paper, Palestine Polytechnic University (PPU) is our case study, and our attention is concentrated on technical success factors at PPU. Firstly, the paper focused on the technical problems which current systems at PPU suffered from, in order to extract the particular CSFs needed to implement ERP systems. Secondly, the paper focused on the most technical critical factors that ensure successful implementation of the ERP project. Thirdly, a study of the degree to which PPU’s technical staff uses software engineering practices during the development process has been conducted, focusing on phase activities. Our main aim is to obtain a pool of parameters related to the successful preparation of universities’ ERP systems.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_42-University_ERP_Preparation_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on the Cryptographic Encryption Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081141</link>
        <id>10.14569/IJACSA.2017.081141</id>
        <doi>10.14569/IJACSA.2017.081141</doi>
        <lastModDate>2017-11-30T19:03:15.2970000+00:00</lastModDate>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Sapiee Jamel</creator>
        
        <creator>Abdulkadir Hassan Disina</creator>
        
        <creator>Zahraddeen A. Pindar</creator>
        
        <creator>Nur Shafinaz Ahmad Shakir</creator>
        
        <creator>Mustafa Mat Deris</creator>
        
        <subject>Cryptography; encryption algorithms; Data Encryption Standard (DES); Triple Data Encryption Standard (3DES); Blowfish; Advanced Encryption Standard (AES); Hybrid Cubes Encryption Algorithm (HiSea)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Security is the major concern when sensitive information is stored and transferred across the internet, where the information is no longer protected by physical boundaries. Cryptography is an essential, effective and efficient component for ensuring secure communication between different entities, by transferring unintelligible information such that only the authorized recipient is able to access it. The right selection of cryptographic algorithm is important for secure communication that provides more security, accuracy and efficiency. In this paper, we examine the security aspects and processes involved in the design and implementation of the most widely used symmetric encryption algorithms, such as the Data Encryption Standard (DES), Triple Data Encryption Standard (3DES), Blowfish, Advanced Encryption Standard (AES) and the Hybrid Cubes Encryption Algorithm (HiSea). Furthermore, this paper evaluates and compares the performance of these encryption algorithms based on encryption and decryption time, throughput, key size, avalanche effect, memory, correlation assessment and entropy. Thus, amongst the existing cryptographic algorithms, a suitable encryption algorithm can be chosen based on the parameters that best fit the user requirements.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_41-A_Survey_on_the_Cryptographic_Encryption_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigate the use of Anchor-Text and of Query-Document Similarity Scores to Predict the Performance of Search Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081140</link>
        <id>10.14569/IJACSA.2017.081140</id>
        <doi>10.14569/IJACSA.2017.081140</doi>
        <lastModDate>2017-11-30T19:03:15.2830000+00:00</lastModDate>
        
        <creator>Abdulmohsen Almalawi</creator>
        
        <creator>Rayed AlGhamdi</creator>
        
        <creator>Adel Fahad</creator>
        
        <subject>Data mining; information retrieval; web search; query prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Query difficulty prediction aims to estimate, in advance, whether the answers returned by search engines in response to a query are likely to be useful. This paper proposes new predictors based upon the similarity between the query and answer documents, as calculated by the three different models. It examined the use of anchor text-based document surrogates, and how their similarity to queries can be used to estimate query difficulty. It evaluated the performance of the predictors based on 1) the correlation between the average precision (AP), 2) the precision at 10 (P@10) of the full text retrieved results, 3) a similarity score of anchor text, and 4) a similarity score of full-text, using the WT10g data collection of web data. Experimental evaluation of our research shows that five of our proposed predictors demonstrate reliable and consistent performance across a variety of different retrieval models.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_40-Investigate_the_Use_of_Anchor_Text_and_of_Query_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Prototyping and Design Evaluation of a NoC-Based MPSoC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081139</link>
        <id>10.14569/IJACSA.2017.081139</id>
        <doi>10.14569/IJACSA.2017.081139</doi>
        <lastModDate>2017-11-30T19:03:15.2500000+00:00</lastModDate>
        
        <creator>Ridha SALEM</creator>
        
        <creator>Yahia SALAH</creator>
        
        <creator>Imed BENNOUR</creator>
        
        <creator>Mohamed ATRI</creator>
        
        <subject>MultiProcessor System-on-Chip; Network-on-Chip; Field-Programmable Gate Arrays (FPGA) prototyping; design evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>On-chip communication architectures have become an important element that is critical to control when designing a complex MultiProcessor System-on-Chip (MPSoC). This has led to the emergence of new interconnection architectures, such as the Network-on-Chip (NoC). NoCs have been proven to be a promising solution to the concerns of MPSoCs in terms of data parallelism. Prototyping on Field-Programmable Gate Arrays (FPGA) has some perceived challenges; overcoming them with the right prototyping solutions is easy and cost-effective, leading to much faster time-to-market. In this paper, we present FPGA-based rapid prototyping in hardware/software co-design and a design evaluation of a mixed HW/SW MPSoC using a NoC. A case study of a two-dimensional mesh NoC-based MPSoC architecture is presented with a validation environment. The synthesis and implementation results of the NoC-based MPSoC on a Virtex 5 ML507 show a reasonable frequency (151.5 MHz) and a resource usage rate equal to 58% (6,586 out of 11,200 slices used).</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_39-FPGA_Prototyping_and_Design_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UML based Formal Model of Smart Transformer Power System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081138</link>
        <id>10.14569/IJACSA.2017.081138</id>
        <doi>10.14569/IJACSA.2017.081138</doi>
        <lastModDate>2017-11-30T19:03:15.2370000+00:00</lastModDate>
        
        <creator>Muniba Sultan</creator>
        
        <creator>Amna Pir</creator>
        
        <creator>Nazir Ahmad Zafar</creator>
        
        <subject>Smart power system; unified model language (UML); formal method; VDM-SL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Recently, many significant improvements have been made in the traditional power system, but a lot of work is still needed to address its many challenges. We propose a formal method based on a subnet model for the smart power system. A formal method is a mathematics-based technique used to develop, specify and verify a model in a systematic manner. The system involves several components, i.e., the power plant, smart grid, transformers and smart meters. The power plant produces electricity and distributes it to the smart grid. The smart grid delivers electricity to transformers, and the transformers then transfer electricity to smart meters. Smart transformers and smart meters are deployed in the form of subnets that increase the energy efficiency of the smart power system. In this paper, our main focus is on two components of the smart power system: transformers and smart meters. Graph theory is used for the semi-formal representation of the model. We present the system requirements through UML use case diagrams that describe the actions of the system; the real topology is then transferred into a model topology in graph theory that represents the structure of the system. Formal method approaches, based on mathematical techniques and notation, are used for describing and analyzing the system. The VDM-SL formal method language is used for the formal specification, and the VDM toolbox is used for the verification and analysis of the system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_38-UML_Based_Smart_Transformer_Power_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contextual Requirements for Mobile Native Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081137</link>
        <id>10.14569/IJACSA.2017.081137</id>
        <doi>10.14569/IJACSA.2017.081137</doi>
        <lastModDate>2017-11-30T19:03:15.2030000+00:00</lastModDate>
        
        <creator>Sasmita Pani</creator>
        
        <creator>Jibitesh Mishra</creator>
        
        <subject>Mobile contexts; pervasiveness; device usability; mobility interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Mobile apps have found wide acceptance in today’s world, which heavily depends on smart technology to access data across locations. The apps are mostly of the native type, which can be used to access data even without internet availability. The development of mobile native applications requires the assimilation of various analytical contexts depending on the requirements of users. We have done an empirical study of various papers on ubiquitous systems and mobile apps to find the contexts relevant to building mobile native apps; these mobile contexts are the device context, user context, mobility context and social context. The overall weight of each mobile context was determined empirically in this study. We took various activities performed between a user and mobile native apps, formed them into questionnaires, and sent them to mobile native app developers at different software companies. These activities were then mapped to the attributes and their associated mobile contexts. We identified four contexts as the main requirements for developing mobile native apps in any domain. The analysis of requirements is done by modeling the contexts and their attributes in the OWL DL language. We determined from the empirical study that the overall weight of the device context is greater than that of the other contexts. Hence, it is clear that the device context, with its numerous features, has a great impact on developing mobile native apps in any domain.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_37-Contextual_Requirements_for_Mobile_Native_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Realtime Application of Constrained Predictive Control for Mobile Robot Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081136</link>
        <id>10.14569/IJACSA.2017.081136</id>
        <doi>10.14569/IJACSA.2017.081136</doi>
        <lastModDate>2017-11-30T19:03:15.1730000+00:00</lastModDate>
        
        <creator>Ibtissem Malouche</creator>
        
        <creator>Faouzi Bouani</creator>
        
        <subject>Embedded C; STM32; microcontrollers; constrained model predictive control; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This work addresses the implementation issue of constrained Model Predictive Control (MPC) for the autonomous trajectory-tracking problem. The chosen process to control is a Wheeled Mobile Robot (WMR) described by a discrete, Multiple Input Multiple Output (MIMO), state-space and linear parameter varying kinematic model. The main motivation for using constrained MPC in this case relies on its ability to consider, in a straightforward way, the control and state constraints that naturally arise in practical trajectory tracking problems. The efficiency of the presented control scheme is validated through experimental results on a two-wheeled mobile robot using both STM32F429II and STM32F407ZG microcontrollers. The controller implementation is facilitated by automatic C code generation and optimization before real-time execution. Based on the experimental results obtained, the good performance and robustness of the proposed control scheme are established.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_36-Realtime_Application_of_Constrained_Predictive_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Editing over Opportunistic Networks: State of the Art and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081135</link>
        <id>10.14569/IJACSA.2017.081135</id>
        <doi>10.14569/IJACSA.2017.081135</doi>
        <lastModDate>2017-11-30T19:03:15.1400000+00:00</lastModDate>
        
        <creator>Noha Alsulami</creator>
        
        <creator>Asma Cherif</creator>
        
        <subject>Collaborative editors; opportunistic networks; operational transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Emerging Opportunistic Networks (ON) are under intensive research and development by many academics. However, research efforts on ON have only addressed routing protocols and data dissemination. Too little attention has been given to the applications that can be deployed over ON, which are assumed to use immutable data (e.g., photos/video files). Nevertheless, Collaborative Editors (CE), which are based on mutable messages, are widely used in many fields. Indeed, they allow many users to concurrently edit the same shared document (e.g., Google Docs). Consequently, it becomes necessary to adapt CE to ON, which represents a challenging task. As a matter of fact, CE synchronization algorithms should ensure the convergence of the shared content being modified concurrently by users. In this work, we give an overview of ON and CE in an attempt to combine both states of the art, and we highlight the challenges that could be faced when trying to deploy CE over ON.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_35-Collaborative_Editing_over_Opportunistic_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Method for Breast Mass Segmentation and Classification in Mammographic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081134</link>
        <id>10.14569/IJACSA.2017.081134</id>
        <doi>10.14569/IJACSA.2017.081134</doi>
        <lastModDate>2017-11-30T19:03:15.1100000+00:00</lastModDate>
        
        <creator>Marwa Hmida</creator>
        
        <creator>Kamel Hamrouni</creator>
        
        <creator>Basel Solaiman</creator>
        
        <creator>Sana Boussetta</creator>
        
        <subject>Mammography; breast mass; mass segmentation; fuzzy active contour; mass classification; possibility theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>According to the World Health Organization, breast cancer is the main cause of cancer death among women in the world. Until now, there have been no effective ways of preventing this disease; thus, early screening and detection is the most effective method for raising treatment success rates and reducing death rates due to breast cancer. Mammography is still the most widely used diagnostic and screening tool for early breast cancer detection. In this work, we propose a method to segment and classify masses using the regions of interest of mammographic images. Mass segmentation is performed using a fuzzy active contour model obtained by combining Fuzzy C-Means and the Chan-Vese model. Shape and margin features are then extracted from the segmented masses and used to classify them as benign or malignant. The generated features are usually imprecise and reflect an uncertain representation; thus, we propose to analyze them using possibility theory to deal with their imprecise and uncertain aspects. The experimental results on Regions Of Interest (ROIs) extracted from the MIAS database indicate that the proposed method yields good mass segmentation and classification results.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_34-An_Efficient_Method_for_Breast_Mass_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FabricVision: System of Error Detection in the Manufacture of Garments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081133</link>
        <id>10.14569/IJACSA.2017.081133</id>
        <doi>10.14569/IJACSA.2017.081133</doi>
        <lastModDate>2017-11-30T19:03:15.0930000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <creator>Arturo Aguila</creator>
        
        <creator>Eduardo Partida</creator>
        
        <creator>Oswaldo Morales</creator>
        
        <creator>Ricardo Tejeida</creator>
        
        <subject>Computer vision; histogram of oriented gradient; segmentation; object detection; image capture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution to errors within the process that cannot be easily detected by an employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control is required for manufactured products, and over the years it has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process, increasing the productivity of textile processes by reducing costs.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_33-FabricVision_System_of_Error_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deployment Protocol for Underwater Wireless Sensors Network based on Virtual Force</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081132</link>
        <id>10.14569/IJACSA.2017.081132</id>
        <doi>10.14569/IJACSA.2017.081132</doi>
        <lastModDate>2017-11-30T19:03:15.0800000+00:00</lastModDate>
        
        <creator>Abeer Almutairi</creator>
        
        <creator>Saoucene Mahfoudh</creator>
        
        <subject>Deployment algorithm; underwater wireless sensor network; virtual force; coverage; connectivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Recently, Underwater Sensor Networks (UWSNs) have attracted researchers’ attention due to the challenges and the peculiar characteristics of the underwater environment. The initial random deployment of a UWSN, where sensors are scattered over the area via planes or ships, is inefficient: it neither achieves full coverage nor maintains network connectivity. Moreover, energy efficiency in underwater networks is a crucial issue, since nodes use battery power as their source of energy and it is difficult, and sometimes impossible, to change or replenish these batteries. Our contribution in this research is to improve the performance of UWSNs by designing UW-DVFA, an underwater 3-D self-distributed deployment algorithm based on virtual forces. The main target of this work is to stretch the randomly deployed network in the 3-D area in a way that guarantees full area coverage and network connectivity.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_32-Deployment_Protocol_for_Underwater_Wireless_Sensors_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>K-means Based Automatic Pests Detection and Classification for Pesticides Spraying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081131</link>
        <id>10.14569/IJACSA.2017.081131</id>
        <doi>10.14569/IJACSA.2017.081131</doi>
        <lastModDate>2017-11-30T19:03:15.0470000+00:00</lastModDate>
        
        <creator>Muhammad Hafeez Javed</creator>
        
        <creator>M Humair Noor</creator>
        
        <creator>Babar Yaqoob Khan</creator>
        
        <creator>Nazish Noor</creator>
        
        <creator>Tayyaba Arshad</creator>
        
        <subject>Automatic plant pest detection; pest classification; Delta-E; discrete wavelet transform; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Agriculture is the backbone of livelihoods and plays a vital role in a country’s economy. Agricultural production is adversely affected by pest infestation and plant diseases, as pests directly impair plant vitality. Automatic pest detection and classification is an essential research area, as early detection and classification of pests when they appear on plants may minimize production losses. This study puts forth a comprehensive model that facilitates the detection and classification of pests by using an Artificial Neural Network (ANN). In this approach, the image from the field is segmented using an enhanced K-Means segmentation technique that identifies the pests or any other object in the image. Subsequently, features are extracted using the Discrete Cosine Transform (DCT) and classified with an ANN. The proposed approach is verified on five pests and exhibited 94% effectiveness in classifying them.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_31-K_Means_based_Automatic_Pests_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Bioelectromagnetics: Prediction Model using Data of Weak Radiofrequency Radiation Effect on Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081130</link>
        <id>10.14569/IJACSA.2017.081130</id>
        <doi>10.14569/IJACSA.2017.081130</doi>
        <lastModDate>2017-11-30T19:03:15.0330000+00:00</lastModDate>
        
        <creator>Malka N. Halgamuge</creator>
        
        <subject>Machine learning; plants; prediction; mobile phones; base station; radiofrequency electromagnetic fields; RFEMF; plant sensitivity; classification; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Identifying, via big data analytics and machine learning concepts, the key parameters that affect plant sensitivity and bio-effects under non-thermal weak radio-frequency electromagnetic fields (RF-EMF) is quite significant. Despite its benefits, no single study has yet adequately covered machine learning concepts in the Bioelectromagnetics domain. This study aims to demonstrate the usefulness of machine learning algorithms for predicting the possible damage of electromagnetic radiation from mobile phones and base stations to plants and, consequently, develops a prediction model of plant sensitivity to RF-EMF. We used raw data on plant exposure from our previous review study (data extracted from 45 peer-reviewed scientific publications published between 1996-2016, covering 169 experimental case studies) to predict the potential effects of RF-EMF on plants. We used values of six different attributes or parameters for this study: frequency, specific absorption rate (SAR), power flux density, electric field strength, exposure time and plant type (species). The results demonstrated the adaptation of machine learning algorithms (classification and clustering) to predict 1) under what conditions RF-EMF exposure to a plant of a given species may not produce an effect; 2) what frequency and electric field strength values are safer; and 3) which plant species are affected by RF-EMF. Moreover, this paper also illustrates the development of an optimal attribute selection protocol to identify key parameters that are highly significant when designing in-vitro practical standardized experimental protocols. Our analysis also shows that the Random Forest classification algorithm outperforms the others, with the highest classification accuracy of 95.26% (0.084 error) and only 4% fluctuation among the algorithms measured. The results of the K-Means clustering algorithm clearly demonstrate that the Pea, Mungbean and Duckweed plants are more sensitive to RF-EMF (p &lt;= 0.0001). The sample size of the 169 reported experimental case studies is perhaps of low significance in a statistical sense; nonetheless, this analysis still provides useful insight into exploiting machine learning in the Bioelectromagnetics domain. As a direct outcome of this research, more efficient RF-EMF exposure prediction tools can be developed to improve the quality of epidemiological studies and long-term experiments using whole organisms.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_30-Machine_Learning_for_Bioelectromagnetics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Target Tracking Using Hierarchical Convolutional Features and Motion Cues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081129</link>
        <id>10.14569/IJACSA.2017.081129</id>
        <doi>10.14569/IJACSA.2017.081129</doi>
        <lastModDate>2017-11-30T19:03:15.0000000+00:00</lastModDate>
        
        <creator>Heba Mahgoub</creator>
        
        <creator>Khaled Mostafa</creator>
        
        <creator>Khaled T. Wassif</creator>
        
        <creator>Ibrahim Farag</creator>
        
        <subject>Multi-target tracking; correlation filters; convolution neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>In this paper, the problem of multi-target tracking with a single camera in complex scenes is addressed. A new approach is proposed for the multi-target tracking problem that learns from a hierarchy of convolutional features. First, a Fast Region-based Convolutional Neural Network is trained to detect pedestrians in each frame. It then cooperates with a correlation filter tracker that learns each target’s appearance from pretrained convolutional neural networks. The correlation filter learns from the middle and last convolutional layers to enhance target localization. However, correlation filters fail in case of full target occlusion, which leads to the problem of separated tracklets (mini-trajectories). Therefore, a post-processing step is added to link separated tracklets with minimum-cost network flow, using a cost function that depends on motion cues when associating short tracklets. Experimental results on the MOT2015 benchmark show that the proposed approach produces comparable results against state-of-the-art approaches. It shows an increase of 4.5% in multiple object tracking accuracy, and mostly tracked targets is 12.9% vs. 7.5% against the state-of-the-art minimum-cost network flow tracker.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_29-Multi_Target_Tracking_using_Hierarchical_Convolutional_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GDPI: Signature based Deep Packet Inspection using GPUs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081128</link>
        <id>10.14569/IJACSA.2017.081128</id>
        <doi>10.14569/IJACSA.2017.081128</doi>
        <lastModDate>2017-11-30T19:03:14.9700000+00:00</lastModDate>
        
        <creator>Nausheen Shoaib</creator>
        
        <creator>Jawwad Shamsi</creator>
        
        <creator>Tahir Mustafa</creator>
        
        <creator>Akhter Zaman</creator>
        
        <creator>Jazib ul Hasan</creator>
        
        <creator>Mishal Gohar</creator>
        
        <subject>Packet processing; Graphic Processing Units (GPUs); deep packet inspection; network security; parallel computing; heterogeneity; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Deep Packet Inspection (DPI) is necessary for many networked application systems in order to prevent cyber threats. A signature-based Network Intrusion Detection System (NIDS) works on packet inspection and pattern matching mechanisms for the detection of malicious content in network traffic. The rapid growth of high-speed networks in data centers demands an efficient high-speed packet processing mechanism which is also capable of malicious packet detection. In this paper, we propose GDPI, a framework for efficient packet processing which inspects every incoming packet’s payload against known signature patterns, such as those commonly available in Snort. The framework is developed using enhanced GPU programming techniques, such as asynchronous packet processing using streams, minimizing CPU-to-GPU latency using pinned memory and zero copy, and memory coalescing with shared memory, which reduces read operations from the global memory of the GPU. The overall performance of GDPI is tested on heterogeneous NVIDIA GPUs, such as the Tegra TK1, GTX 780, and Tesla K40; the highest throughput is achieved with the Tesla K40. The design code of GDPI is made available to the research community.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_28-GDPI_Signature_based_Deep_Packet_Inspection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Machine Learning Algorithms  to Classify Web Pages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081127</link>
        <id>10.14569/IJACSA.2017.081127</id>
        <doi>10.14569/IJACSA.2017.081127</doi>
        <lastModDate>2017-11-30T19:03:14.9400000+00:00</lastModDate>
        
        <creator>Ansam A. AbdulHussien</creator>
        
        <subject>Web page classification; artificial neural networks; random forest; adaboost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The ‘World Wide Web’, or simply the web, represents one of the largest sources of information in the world: almost any topic we can think of can probably be found on the web. Web information comes in different forms and types, such as text documents, images and videos. However, extracting useful information without the help of web tools is not an easy process. Here comes the role of web mining, which provides the tools that help us extract useful knowledge from data on the internet. Many researchers focus on web page classification technology that provides high accuracy. In this paper, several supervised learning algorithms are evaluated for assigning predefined categories to web documents. We use the machine learning algorithms ‘Artificial Neural Networks (ANN)’, ‘Random Forest (RF)’ and ‘AdaBoost’ to perform a behavior comparison on the web page classification problem.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_27-Comparison_of_Machine_Learning_Algorithm_to_Classify_Web_Page.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Migration Frameworks for Software System Solutions: A Systematic Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081126</link>
        <id>10.14569/IJACSA.2017.081126</id>
        <doi>10.14569/IJACSA.2017.081126</doi>
        <lastModDate>2017-11-30T19:03:14.9230000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib</creator>
        
        <creator>Adeed Ishaq</creator>
        
        <creator>Muhammad Awais Ahmad</creator>
        
        <creator>Sidra Talib</creator>
        
        <creator>Ghulam Mustafa</creator>
        
        <creator>Aqeel Ahmed</creator>
        
        <subject>Software migration; frameworks; system migration; cloud migration; migration risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This study examines and reviews current software migration frameworks. With rapid technological enhancement, companies need to move their software from one platform to another, as in cloud-based migration. Different types of risks are involved during migration; by performing migration activities correctly, these risks might be reduced. Due to the absence of resources such as workforce, time and budget in small organizations, software migration is often not performed in an optimized way, and therefore many functionalities are not implemented exactly after migration. In this paper, we describe different methods and frameworks which provide guidelines for developers to enhance the software migration process.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_26-Software_Migration_Frameworks_for_Software_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Methodology for Clustering to Maximises Inter-Cluster Inertia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081125</link>
        <id>10.14569/IJACSA.2017.081125</id>
        <doi>10.14569/IJACSA.2017.081125</doi>
        <lastModDate>2017-11-30T19:03:14.9070000+00:00</lastModDate>
        
        <creator>A. Alaoui</creator>
        
        <creator>B. Olengoba Ibara</creator>
        
        <creator>B. Ettaki</creator>
        
        <creator>J. Zerouaoui</creator>
        
        <subject>MC-DBSCAN; iterative process; inter-cluster inertia; unsupervised precision-recall metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This paper proposes a novel clustering methodology which manages to offer results with a higher inter-cluster inertia for better clustering. The advantage obtained with this methodology is due to an algorithm that has previously shown its efficiency in clustering exercises, MC-DBSCAN. It is associated with an iterative process capable of auto-adjusting the weights of the pertinent criteria, which allows the reclassification of objects of the two closest clusters at each iteration, as well as the ability to auto-evaluate the precision of the clustering during the clustering process. This work conducts experiments using the well-known ‘Seismic’, ‘Landform-Identification’ and ‘Image Segmentation’ benchmarks to compare the performance of the proposed methodology with other algorithms (K-means, EM, CURE and MC-DBSCAN). The experimental results demonstrate that the proposed solution produces clustering results of good quality.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_25-A_Generic_Methodology_for_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognizing Human Actions by Local Space Time and LS-TSVM over CUDA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081124</link>
        <id>10.14569/IJACSA.2017.081124</id>
        <doi>10.14569/IJACSA.2017.081124</doi>
        <lastModDate>2017-11-30T19:03:14.8770000+00:00</lastModDate>
        
        <creator>Mohsin Raza Siyal</creator>
        
        <creator>Muhammad Saeed</creator>
        
        <creator>Jibran R. Khan</creator>
        
        <creator>Farhan A. Siddiqui</creator>
        
        <creator>Kamran Ahsan</creator>
        
        <subject>Motion detection; human action recognition; LS-TSVM; GPU Programming; Compute Unified Device Architecture (CUDA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Local space-time features can be used to adapt events to the velocity of moving patterns, the size of the object and the frequency in captured video. This paper proposes a new implementation approach to Human Action Recognition (HAR) using the Compute Unified Device Architecture (CUDA). Initially, local space-time features are extracted from a customized dataset of videos. The video features are extracted using the Histogram of Optical Flow (HOF) descriptor and the Harris detector algorithm. A new extended version of the SVM classifier, known as the Least Squares Twin SVM (LS-TSVM), which is four times faster and has better precision than the classical SVM and is a binary classifier using two non-parallel hyperplanes, is applied to the extracted video features. The paper evaluates the performance of LS-TSVM on the customized data, and the experimental results showed significant improvements.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_24-Recognizing_Human_Actions_by_Local_Space_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study between Applications Developed for Android and iOS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081123</link>
        <id>10.14569/IJACSA.2017.081123</id>
        <doi>10.14569/IJACSA.2017.081123</doi>
        <lastModDate>2017-11-30T19:03:14.8470000+00:00</lastModDate>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Doina Zmaranda</creator>
        
        <creator>Vlad Georgian Adrian</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <subject>Android; iOS; mobile application development; mobile device core features; common scenario performance comparison; development optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Nowadays, mobile applications implement complex functionalities that use a device’s core features extensively. This paper presents a performance analysis of the most important core features used frequently in mobile application development: asynchronous multi-threaded code execution, drawing views/elements on the screen, and basic network communications. While multiple mobile platforms have emerged in recent years, in this paper two well-established and popular operating systems were considered for comparison and testing: Android and iOS. Thus, two basic applications featuring the same functionality and complexity were developed to run natively on both platforms. The applications were developed using the development languages and tools recommended for each operating system. This paper aims to highlight the differences between the two operating systems by analyzing core feature performance metrics for the two functionally identical mobile applications developed for each platform. The results obtained could be further used to guide the optimization of the application development process for each considered operating system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_23-A_Comparative_Study_between_Applications_Developed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Organizational Information Systems: A Case for Educational Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081122</link>
        <id>10.14569/IJACSA.2017.081122</id>
        <doi>10.14569/IJACSA.2017.081122</doi>
        <lastModDate>2017-11-30T19:03:14.8300000+00:00</lastModDate>
        
        <creator>Gufran Ahmad Ansari</creator>
        
        <creator>Mohammad Tanvir Parvez</creator>
        
        <creator>Ali Al Khalifah</creator>
        
        <subject>Information system; machine learning; cross-organization; decision making; education; fuzzy matching; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Establishing a new organization is becoming more difficult day by day due to the extremely competitive business environment. A new organization may not have enough experience to survive in the competitive market, which in turn may push down the reputation of the organization and the trust of the investors. The goal of this research work is to design a framework for a cross-organizational information system for assessment and decision making using machine learning, with an emphasis on the educational sector. In the proposed framework, organizations share information (even raw data) with each other, and a machine learning tool is utilized for shared data analysis to support decision making for a particular organization. A framework like this can help new organizations benefit from the experience of other, ‘older’ organizations and institutions. Such a knowledge-based machine learning system helps to improve the organizational capability of newly established institutions. As an implementation of the framework, we build a fuzzy system that can effectively work as a cross-platform system for educational entities.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_22-Cross_Organizational_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expert System of Chili Plant Disease Diagnosis using Forward Chaining Method on Android</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081121</link>
        <id>10.14569/IJACSA.2017.081121</id>
        <doi>10.14569/IJACSA.2017.081121</doi>
        <lastModDate>2017-11-30T19:03:14.7970000+00:00</lastModDate>
        
        <creator>Aristoteles </creator>
        
        <creator>Mita Fuljana</creator>
        
        <creator>Joko Prasetyo</creator>
        
        <creator>Kurnia Muludi</creator>
        
        <subject>Android; classic probability; expert system; forward chaining; likert scale</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This research was conducted to build an expert system that is able to diagnose diseases in chili plants based on knowledge provided directly by experts. It uses the classical probability calculation method to compute the percentage of diagnoses and is implemented on Android mobile devices. The system comprises data on 37 symptoms, 10 chili diseases caused by fungi, and 10 rules, and it uses the forward chaining inference method. Test results show: (1) Functional testing using the Black Box Equivalence Partitioning (EP) method gives the expected results for the test scenario in each test class. (2) Expert testing, comparing the results of manual and system calculations, shows that they match and the system runs well. (3) A user acceptance test was carried out with 53 respondents divided into four groups. The first group, consisting of experts on chili diseases, gave an average score of 85.14% (excellent). The second group, consisting of Agriculture Department students, gave a score of 84.13% (excellent). The third group, consisting of Computer Science Department students, gave a score of 84.28% (excellent), whereas the last group (chili farmers) gave a score of 86% (excellent).</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_21-Expert_System_of_Chili_Plant_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ghanaian Consumers’ Online Privacy Concerns: Causes and its Effects on E-Commerce Adoption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081120</link>
        <id>10.14569/IJACSA.2017.081120</id>
        <doi>10.14569/IJACSA.2017.081120</doi>
        <lastModDate>2017-11-30T19:03:14.7670000+00:00</lastModDate>
        
        <creator>E. T. Tchao</creator>
        
        <creator>Kwasi Diawuo</creator>
        
        <creator>Christiana Aggor</creator>
        
        <creator>Seth Djane Kotey</creator>
        
        <subject>E-Commerce; technology adoption; online privacy; perceived vulnerability; perceived control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Online privacy has gradually become a concern for internet users over the years as a result of the interconnection of customers’ devices with other internet-enabled devices. This research investigates and discusses the factors that influence the privacy concerns faced by online consumers of internet services and the possible effects of these concerns on the African online market, with Ghana as the primary focus. Results from this study indicate that only 10.1% of respondents felt the internet was safe for purchase and payment transactions in Ghana. However, respondents were willing to shop online if e-Commerce was the only means of obtaining their products. Respondents also had a high sense of perceived vulnerability, and their perceived vulnerability to unauthorized data collection and misuse of personal information could affect the adoption of Ghanaian e-Commerce platforms. The perceived ability of users of e-Commerce platforms in Ghana to control data collection and its subsequent use by third parties was also found to negatively impact customers’ willingness to fully transact and share their personal information online. Perceived vulnerability was found to be affected by high levels of internet illiteracy, while the perceived ability to control the collection and use of information was influenced by both the internet literacy level and the level of social awareness of the Ghanaian internet consumer.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_20-Ghanaian_Consumers_Online_Privacy_Concerns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An AHP Model towards an Agile Enterprise</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081119</link>
        <id>10.14569/IJACSA.2017.081119</id>
        <doi>10.14569/IJACSA.2017.081119</doi>
        <lastModDate>2017-11-30T19:03:14.7500000+00:00</lastModDate>
        
        <creator>Mohamed Amine Marhraoui</creator>
        
        <creator>Abdellah El Manouar</creator>
        
        <subject>Organizational agility; analytical hierarchy process; information technology; agility enablers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Companies face various challenges in adapting to their environmental context. They should be aware of changes at the social, political, ecological, and economic levels. Moreover, they should act efficiently and rapidly by leveraging new and reconfigurable resources. Organizational agility is the firm’s key dynamic capability, enabling it to deal with changes and exploit them as opportunities. Firms’ objective is thus to attain a higher degree of agility that can help them perform durably. In this article, a new model based on the analytical hierarchy process (AHP) method is proposed. It can help companies raise their agility level by deploying the most suitable agility enablers, which can be either general or specific to information technologies. Companies can thus develop the most appropriate strategy towards agility with respect to their internal and external contexts.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_19-An_AHP_Model_towards_an_Agile_Enterprise.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Forecasting the Number of Cases and Distribution Pattern of Dengue Hemorrhagic Fever in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081118</link>
        <id>10.14569/IJACSA.2017.081118</id>
        <doi>10.14569/IJACSA.2017.081118</doi>
        <lastModDate>2017-11-30T19:03:14.7200000+00:00</lastModDate>
        
        <creator>Deni Mahdiana</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Edi Winarko</creator>
        
        <creator>Hari Kusnanto</creator>
        
        <subject>Dengue Hemorrhagic Fever (DHF); Vector Autoregressive Spatial Autocorrelation (VARSA); forecasting; multivariate time series; Local Indicators of Spatial Association (LISA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Dengue Hemorrhagic Fever (DHF) outbreaks are one of the lethal health problems in Indonesia. The proliferation of Aedes aegypti, the main vector of DHF, is affected by climate factors such as temperature, humidity, rainfall, and irradiation time. Therefore, projecting the number of DHF cases is a very important task for the Ministry of Health in initiating contingency planning as a preventive step against the increasing number of DHF cases in the near future. This study aims to develop a forecasting model for the number of cases and the distribution pattern of DHF using multivariate time series with Vector Autoregressive Spatial Autocorrelation (VARSA). The VARSA model uses multivariate time series such as the number of DHF cases, minimum temperature, maximum temperature, rainfall, average humidity, irradiation time, and population density. The modeling is done in two steps: Vector Autoregressive modeling to predict the number of DHF cases, and the Local Indicators of Spatial Association (LISA) method to visualize the distribution pattern of DHF based on the spatial connectivity of the number of DHF cases among neighboring districts. The study covers 17 districts in Sleman, Yogyakarta, resulting in low errors with a Root Mean Square Error (RMSE) of 2.10 and a Mean Absolute Error (MAE) of 1.51. This model produces smaller errors than univariate time series methods such as linear regression and Autoregressive Integrated Moving Average (ARIMA).</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_18-A_Model_for_Forcasting_the_Number_of_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Web based Inventory Control System using Cloud Architecture and Barcode Technology for Zambia Air Force</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081117</link>
        <id>10.14569/IJACSA.2017.081117</id>
        <doi>10.14569/IJACSA.2017.081117</doi>
        <lastModDate>2017-11-30T19:03:14.6900000+00:00</lastModDate>
        
        <creator>Thomas Muyumba</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>Zambia Air Force (ZAF); inventory system; barcode technology; Radio Frequency Identification (RFID); Near Field Communication (NFC); cloud computing; web based application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Inventory management of spares is one of the activities the Zambia Air Force (ZAF) undertakes to ensure an optimal serviceability state of equipment and to effectively achieve its roles. This obligation could only be made possible by automating the current manual, paper-based inventory system. A web-based inventory management system using cloud architecture and barcode technology was proposed. A literature review was conducted on three technologies used in inventory management, namely Radio Frequency Identification (RFID), barcode technology, and Near Field Communication (NFC). A review was also undertaken of related works to identify concepts that could be adopted in the proposed system. A baseline study was performed to understand the challenges faced by ZAF in the inventory management of spares. Analysis of the baseline study results showed that the challenges were attributable to the current manual inventory management system, mainly due to human error, incorrect inventory reporting, and pilferage of items. The proposed prototype system was developed and tested, and proved to be faster, more efficient, and more reliable than the manual, paper-based system.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_17-A_Web_Based_Inventory_Control_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization and Evaluation of Hybrid PV/WT/BM System in Different Initial Costs and LPSP Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081116</link>
        <id>10.14569/IJACSA.2017.081116</id>
        <doi>10.14569/IJACSA.2017.081116</doi>
        <lastModDate>2017-11-30T19:03:14.6570000+00:00</lastModDate>
        
        <creator>Abd&#252;lsamed Tabak</creator>
        
        <creator>Mehmet &#214;zkaymak</creator>
        
        <creator>Muhammet Tahir G&#252;neser</creator>
        
        <creator>H&#252;seyin Oktay Erkol</creator>
        
        <subject>Photovoltaic (PV)/wind turbines (WT)/biomass (BM); hybrid system; optimization; sizing; cost-effective; reliability; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>A modelling and optimization study was performed to manage the energy demand of a faculty in the Karabuk University campus area using a hybrid energy production system optimized with a genetic algorithm (GA). The hybrid system consists of photovoltaic (PV) panels, wind turbines (WT), and biomass (BM) energy production units, where BM is treated as a back-up generator. The objective function was formulated to minimize the total net present cost (TNPC) in the optimization. To obtain more accurate results, measurements were performed with a weather station and data were read from an electricity meter. The system was also checked for reliability using the loss of power supply probability (LPSP). Changes in TNPC and the levelized cost of energy (LCOE) were interpreted by varying the LPSP and economic parameters such as PV investment cost, WT investment cost, BM investment cost, and interest rates. As a result, it was seen that a hybrid system consisting of PV and BM, combined with an effective flow algorithm based on a GA, meets the energy demand of the faculty.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_16-Optimization_and_Evaluation_of_Hybrid_PVWTBM_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient K-Nearest Neighbor Searches for Multiple-Face Recognition in the Classroom based on Three Levels DWT-PCA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081115</link>
        <id>10.14569/IJACSA.2017.081115</id>
        <doi>10.14569/IJACSA.2017.081115</doi>
        <lastModDate>2017-11-30T19:03:14.6270000+00:00</lastModDate>
        
        <creator>Hadi Santoso</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>Multiple-face recognition; DWT; PCA; priority k-d tree; k-Nearest Neighbor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The main weakness of the k-Nearest Neighbor algorithm in face recognition is that it calculates the distance to, and sorts, all training data for each prediction, which can be slow when there are a large number of training instances. This problem can be solved by utilizing priority k-d tree search to speed up the k-NN classification process. This paper proposes a method for student attendance systems in the classroom using facial recognition techniques, combining three levels of Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA) to extract facial features, followed by priority k-d tree search to speed up facial classification with k-Nearest Neighbor. The proposed algorithm is tested on two datasets: the Honda/UCSD video dataset and our own dataset (the AtmafaceDB dataset). This research finds the best value of k for accurate face recognition using k-fold cross-validation. 10-fold cross-validation at level-3 DWT-PCA shows that face recognition accuracy using k-Nearest Neighbor is 95.56% with k = 5 on our dataset, whereas it is only 82% with k = 3 on the Honda/UCSD dataset. The proposed method achieves a recognition time of 40 milliseconds on our dataset.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_15-Efficient_K_Nearest_Neighbor_Searches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Internet Traffic Classification based on Packet Level Hidden Markov Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081114</link>
        <id>10.14569/IJACSA.2017.081114</id>
        <doi>10.14569/IJACSA.2017.081114</doi>
        <lastModDate>2017-11-30T19:03:14.6100000+00:00</lastModDate>
        
        <creator>Naveed Akhtar</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <subject>Hidden Markov model; traffic classification; network security; deep packet inspection; internet traffic modeling; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>During the last decade, Internet traffic classification has gained importance not only for safeguarding the integrity and security of network resources, but also for ensuring quality of service for business-critical applications by optimizing existing network resources. But optimization first requires the correct identification of different traffic flows. In this paper, we suggest a framework based on the Hidden Markov Model that uses the intrinsic statistical characteristics of Internet packets for traffic classification. Packet inspection based on statistical analysis of these characteristics helps reduce the overall computational complexity. Generally, the major challenges associated with any internet traffic classifier are: 1) the limited ability to accurately identify encrypted traffic when classification is performed using traditional port-based techniques; 2) overall computational complexity; and 3) achieving high accuracy in traffic identification. Our methodology takes advantage of the statistical characteristics of internet packets, in terms of packet size and inter-arrival time, to model different traffic flows. For the experimental results, a data set of the most commonly used internet applications was used. The proposed HMM models fit the observed traffic with high accuracy. The achieved traffic identification accuracy was 91% for the packet size classifier and 82% for the inter-packet time classifier.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_14-Lightweight_Internet_Traffic_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Repository of Static and Dynamic Signs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081113</link>
        <id>10.14569/IJACSA.2017.081113</id>
        <doi>10.14569/IJACSA.2017.081113</doi>
        <lastModDate>2017-11-30T19:03:14.5800000+00:00</lastModDate>
        
        <creator>Shazia Saqib</creator>
        
        <creator>Syed Asad Raza Kazmi</creator>
        
        <subject>Data gloves; feature extraction; human computer interaction; image segmentation; object recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Gesture-based communication is on the rise in Human Computer Interaction. Advances in smart phones have made it possible to introduce a new kind of communication. Gesture-based interfaces are increasingly popular for communication in public places. These interfaces are also an effective communication medium for the deaf and mute: gestures help convey the inner thoughts of these physically disabled people to others, eliminating the need for a gesture translator. Since these gestures are stored in data sets, an efficient dataset needs to be developed. Many datasets have been developed for languages like American Sign Language; however, for other sign languages, such as Pakistani Sign Language (PSL), little work has been done. This paper presents a technique for storing datasets of static and dynamic signs for any sign language other than British Sign Language or American Sign Language, for which many datasets are already publicly available; other regional languages lack publicly available data sets. Pakistan Sign Language is taken as a case study: more than 5000 gestures have been collected, and they will be made part of a public database as part of this research. The research is a first initiative towards building a Universal Sign Language, as every region has a different sign language. The second focus of the research is to explore methodologies in which a signer communicates without any constraint, such as data gloves or a particular background. Thirdly, the paper proposes the use of spelling-based gestures for easier communication, so the suggested dataset design is not affected by constraints of any kind.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_13-Repository_of_Static_and_Dynamic_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Linear Prediction Model for Effort in Programming based on User Acceptance and Revised use Case Point Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081112</link>
        <id>10.14569/IJACSA.2017.081112</id>
        <doi>10.14569/IJACSA.2017.081112</doi>
        <lastModDate>2017-11-30T19:03:14.5330000+00:00</lastModDate>
        
        <creator>Fahad H. Alshammari</creator>
        
        <subject>Function point; software engineering; mathematical model; large-scale programming; acceptance; validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Since most of the processes of software verification and validation for granting acceptance by the customer/user are subjective, this work aims to design a standard empirical mathematical model to identify the areas or stages where development teams most often fail in large-scale software projects. The model is based on a survey that the user fills in while testing and validating the software, and whose response curve should be linear with respect to the software development process. This paper discusses the aspects surrounding the estimation of a mathematical model for validation and acceptance by a user through the revised Use Case Point method. First, an assessment of the most recent techniques for applying the method is made, and then a simulation of the acceptance and validation process by a standard user (beta test) is presented as a practical example. For the purposes of this paper, the revised use case point method (Re-UCP) must have a specific weight, based on the prerequisites for the development of large-scale software. Once this weighting is obtained, the user assesses the finished product, and an approximation function is then used to determine the coefficients of the final model, indicating the efficiency trend of the development team.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_12-Linear_Prediction_Model_for_Effort_in_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Restructuring of System Analysis and Design Course with Agile Approach for Computer Engineering/Programming Departments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081111</link>
        <id>10.14569/IJACSA.2017.081111</id>
        <doi>10.14569/IJACSA.2017.081111</doi>
        <lastModDate>2017-11-30T19:03:14.5170000+00:00</lastModDate>
        
        <creator>Ahmet ALBAYRAK</creator>
        
        <subject>Agile software development; scrum; course re-design; computer science education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Today software plays an increasingly important and central role in every aspect of everyday life. The number, size, complexity and application areas of the programs developed continue to grow. Many software products have serious problems in cost, timing and quality. It has become almost normal for software projects to exceed their planned cost and schedule. A significant number of development projects have never been completed, and many of them have not met user requirements. Employers are not satisfied with new graduates for a variety of reasons: they do not know how to communicate, they do not have enough experience and preparation to work as team members, and they do not have the ability to manage their individual work efficiently and productively. In this work, the System Analysis and Design course, which is taught especially in Vocational Schools and Engineering faculties, was restructured and conducted with Scrum, one of the Agile methodologies. The suggested approach was applied in the System Analysis and Design course and is also suitable for software development departments.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_11-Restructuring_of_System_Analysis_and_Design_Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Scheduling in Cloud Computing using Lion Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081110</link>
        <id>10.14569/IJACSA.2017.081110</id>
        <doi>10.14569/IJACSA.2017.081110</doi>
        <lastModDate>2017-11-30T19:03:14.4230000+00:00</lastModDate>
        
        <creator>Nora Almezeini</creator>
        
        <creator>Alaaeldin Hafez</creator>
        
        <subject>Cloud computing; task scheduling algorithm; cloud scheduling; lion optimization algorithm; optimization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Cloud computing has spread quickly because of its high-performance distributed computing. It offers services and access to shared resources to internet users through service providers. Efficient task scheduling in clouds is one of the most important research issues to focus on. Various task scheduling algorithms for the cloud based on metaheuristic techniques have been examined and have shown high performance in reasonable time, such as scheduling algorithms based on Ant Colony Optimization (ACO), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). In this paper, we propose a new task scheduling algorithm for cloud computing based on the Lion Optimization Algorithm (LOA). LOA is a nature-inspired, population-based metaheuristic algorithm for global optimization over a search space, proposed by Maziar Yazdani and Fariborz Jolai in 2015 and inspired by the special lifestyle of lions and their cooperative characteristics. The proposed task scheduling algorithm is compared with scheduling algorithms based on Genetic Algorithm and Particle Swarm Optimization. The results demonstrate the high performance of the proposed algorithm compared with the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_10-Task_Scheduling_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Cost Countermeasure at Authentication Protocol Level against Electromagnetic Side Channel Attacks on RFID Tags</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081109</link>
        <id>10.14569/IJACSA.2017.081109</id>
        <doi>10.14569/IJACSA.2017.081109</doi>
        <lastModDate>2017-11-30T19:03:14.3770000+00:00</lastModDate>
        
        <creator>Yassine NAIJA</creator>
        
        <creator>Vincent BEROULLE</creator>
        
        <creator>Mohsen MACHHOUT</creator>
        
        <subject>Radio Frequency Identification (RFID); electromagnetic side channel attack; PRESENT; mutual authentication protocol; countermeasures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Radio Frequency Identification (RFID) technology is widely used in many security applications. Producing secure low-cost and low-power RFID tags is a challenge. The use of lightweight encryption algorithms can be an economical solution for these RFID security applications. This article proposes a low-cost countermeasure to secure RFID tags against Electromagnetic Side Channel Attacks (EMA). First, we propose a parallel architecture of the PRESENT block cipher that represents one way of implementing hiding countermeasures against EMA. 200,000 electromagnetic traces were used to attack the proposed architecture, whereas 10,000 EM traces were used to attack an existing serial architecture of PRESENT. We then propose a countermeasure at the mutual authentication protocol level that progressively limits the number of EM traces, preventing the attacker from performing the EMA. The proposed countermeasure is based on a time-delay function. It requires 960 GEs and represents a low-cost solution compared to existing countermeasures at the primitive block cipher level (2471 GEs).</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_9-Low_Cost_Countermeasure_at_Authentication_Protocol_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved-Node-Probability Method for Decision Making in Priority Determination of Village Development Proposed Program</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081108</link>
        <id>10.14569/IJACSA.2017.081108</id>
        <doi>10.14569/IJACSA.2017.081108</doi>
        <lastModDate>2017-11-30T19:03:14.3470000+00:00</lastModDate>
        
        <creator>Dedi Trisnawarman</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Edi Winarko</creator>
        
        <creator>Purwo Santoso</creator>
        
        <subject>Bayesian networks; PLS-PM; participation weight; decision making; village</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This research proposes a new method based on node probability (NP) and the cumulative frequency of indicators within the framework of Bayesian networks to calculate participation weights. The method uses the PLS-PM approach to examine the relationship structure of participatory factors and estimate latent variables. Data were collected using questionnaires involving the participants offering proposals, the village residents themselves. The participation factors identified in this research were divided into two categories: internal factors (abilities) and external factors (motivation). The internal factors included gender, age, education, occupation, and income, while the external factors included motivation relating to economic, political, socio-cultural, norm-related, and knowledge-related issues. Moreover, three factors directly affect the level of participation: the level of attendance at meetings, participation in giving suggestions, and involvement in decision making. The test results showed that applying the participation weight to the decision-making priority of proposed village development programs changes the final ranking of decisions, with test results of 50% recall, 80% precision, and 50% accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_8-Improved_Node_Probability_Method_for_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Brief Survey on 5G Wireless Mobile Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081107</link>
        <id>10.14569/IJACSA.2017.081107</id>
        <doi>10.14569/IJACSA.2017.081107</doi>
        <lastModDate>2017-11-30T19:03:14.3130000+00:00</lastModDate>
        
        <creator>Marwan A. Al-Namari</creator>
        
        <creator>Ali Mohammed Mansoor</creator>
        
        <creator>Mohd. Yamani Idna Idris</creator>
        
        <subject>5G; millimetre wave (mmW); Internet of Things (IoT); SDN; massive multiple input and multiple output (Massive-MIMO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The upcoming fifth generation wireless mobile network technology is advertised as lightning-speed internet, everywhere, for everything, for everyone in the near future. Considerable effort and research are being carried out on many aspects, e.g., millimetre wave (mmW) radio transmission, massive multiple input and multiple output (Massive-MIMO) antenna technology, the promising SDN architecture, the Internet of Things (IoT), and many more. In this brief survey, we highlight some of the most recent developments towards the 5G mobile network.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_7-A_Brief_Survey_on_5G_Wireless_Mobile_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Architecture for Real Time Data Stream Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081106</link>
        <id>10.14569/IJACSA.2017.081106</id>
        <doi>10.14569/IJACSA.2017.081106</doi>
        <lastModDate>2017-11-30T19:03:14.2830000+00:00</lastModDate>
        
        <creator>Soumaya Ounacer</creator>
        
        <creator>Mohamed Amine TALHAOUI</creator>
        
        <creator>Soufiane Ardchir</creator>
        
        <creator>Abderrahmane Daif</creator>
        
        <creator>Mohamed Azouazi</creator>
        
        <subject>Data stream processing; real-time processing; Apache Hadoop; Apache spark; Apache storm; Lambda architecture; Kappa architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Processing a data stream in real time is a crucial issue for several applications; however, processing a large amount of data from different sources, such as sensor networks, web traffic, social media, video streams and others, represents a huge challenge. The main problem is that big data systems are based on Hadoop technology, especially MapReduce, for processing. MapReduce is a highly scalable and fault-tolerant framework that processes large amounts of data in batches and provides deep insight into older data, but it can only process a limited set of data. MapReduce is not appropriate for real-time stream processing, and it is very important to process data the moment they arrive, enabling fast responses and good decision making. Hence the need for a new architecture that allows real-time data processing with high speed and low latency. The major aim of this paper is to give a clear survey of the different open source technologies that exist for real-time data stream processing, including their system architectures. We also provide a brand new architecture, based on previous comparisons of real-time processing, powered by machine learning and Storm technology.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_6-A_New_Architecture_for_Real_Time_Data_Stream_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User based Recommender Systems using Implicative Rating Measure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081105</link>
        <id>10.14569/IJACSA.2017.081105</id>
        <doi>10.14569/IJACSA.2017.081105</doi>
        <lastModDate>2017-11-30T19:03:14.2530000+00:00</lastModDate>
        
        <creator>Lan Phuong Phan</creator>
        
        <creator>Hung Huu Huynh</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Implicative rating measure; recommender system; user-based collaborative filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>This paper proposes the implicative rating measure developed on the typicality measure. The paper also proposes a new recommendation model presenting the top N items to active users. The proposed model is based on the user-based collaborative filtering approach, using the implicative intensity measure to find the nearest neighbors of the active users and the proposed measure to predict users’ ratings for items. The model is evaluated on two datasets, MovieLens and CourseRegistration, and compared to existing models such as the item-based collaborative filtering model using the Jaccard measure, the user-based collaborative filtering model using the Jaccard measure, the popular-items-based model, the latent-factor-based model, and the association-rule-based model using the confidence measure. The experimental results show that the performance of the proposed model is better than that of the other five models.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_5-User_based_Recommender_Systems_using_Implicative_Rating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Intrusion Detection Method for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081104</link>
        <id>10.14569/IJACSA.2017.081104</id>
        <doi>10.14569/IJACSA.2017.081104</doi>
        <lastModDate>2017-11-30T19:03:14.2200000+00:00</lastModDate>
        
        <creator>Hongchun Qu</creator>
        
        <creator>Zeliang Qiu</creator>
        
        <creator>Xiaoming Tang</creator>
        
        <creator>Min Xiang</creator>
        
        <creator>Ping Wang</creator>
        
        <subject>Wireless sensor network; intrusion detection system; knowledge based detection; clustering algorithm; weighted support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Current intrusion detection systems for Wireless Sensor Networks (WSNs), which are usually designed to detect a specific form of intrusion or are only applied to one specific type of network structure, have apparent restrictions in facing various attacks and different network structures. To bridge this gap, based on the observation that attacks are likely to deviate from normal features and to form different shapes of aggregations in feature space, we propose a knowledge-based intrusion detection strategy (KBIDS) to detect multiple forms of attacks over different network structures. In the training stage, we first used a modified unsupervised mean-shift clustering algorithm to discover clusters in network features. The discovered clusters were then classified as anomalies if they deviated by a certain amount from the normal cluster captured at the initial stage, during which no attacks could occur. The training data, combined with a weighted support vector machine, were then used to build the decision function used to flag network behaviors. The decision function was updated periodically after training by merging newly added network features, in order to adapt to network variability as well as to achieve time efficiency. During network operation, each node uniformly captured its status as a feature vector at certain intervals and forwarded it to the base station, on which the model was deployed and run. In this way, our model can work independently of network structure in both detection and deployment. The efficiency and adaptability of the proposed method have been tested and evaluated through simulation experiments in QualNet. The simulations were conducted as a full-factorial experiment in which all combinations of three forms of attacks and two types of WSN structures were tested. Results demonstrated that the detection accuracy and network structure adaptability of the proposed method outperform state-of-the-art intrusion detection methods for WSNs.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_4-An_Adaptive_Intrusion_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Valued Autoencoders and Classification of Large-Scale Multi-Class Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081103</link>
        <id>10.14569/IJACSA.2017.081103</id>
        <doi>10.14569/IJACSA.2017.081103</doi>
        <lastModDate>2017-11-30T19:03:14.2070000+00:00</lastModDate>
        
        <creator>Ryusuke Hata</creator>
        
        <creator>M. A. H. Akhand</creator>
        
        <creator>Kazuyuki Murase</creator>
        
        <subject>Autoencoder; classification; complex-valued autoencoder; quaternion-valued autoencoder; recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Two-layered neural networks, well known as autoencoders (AEs), are used to reduce the dimensionality of data. AEs are successfully employed as pre-trained layers of neural networks for classification tasks. Most existing studies have conceived real-valued AEs in real-valued neural networks. This study investigated complex- and quaternion-valued AEs for complex- and quaternion-valued neural networks. Inputs, weights, biases, and outputs in the complex-valued AE (CAE) are complex variables, whereas those in the quaternion-valued AE (QAE) are quaternions. In both methods, a split-type activation function is used in the hidden and output units. To deal with images, pairs of pixels are allotted to the complex-valued inputs in the CAE and quartets of pixels are allotted to the quaternion-valued inputs in the QAE. The proposed autoencoders were tested and their performance compared with a conventional AE on several tasks: encoding/decoding, handwritten numeral recognition, and large-scale multi-class classification. The proposed CAE and QAE proved to be good recognition methods for these tasks and significantly outperformed the conventional AE in the case of large-scale multi-class image recognition.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_3-Multi_Valued_Autoencoders_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Alzheimer Disease based on Normalized Hu Moment Invariants and Multiclassifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081102</link>
        <id>10.14569/IJACSA.2017.081102</id>
        <doi>10.14569/IJACSA.2017.081102</doi>
        <lastModDate>2017-11-30T19:03:14.1600000+00:00</lastModDate>
        
        <creator>Arwa Mohammed Taqi</creator>
        
        <creator>Fadwa Al-Azzo</creator>
        
        <creator>Mariofanna Milanova</creator>
        
        <subject>Alzheimer disease; machine learning; Hu moment invariants; SVM; K-Nearest Neighbors (KNN) classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>Alzheimer disease (AD) classification offers great benefits for healthcare applications. AD is the most common form of dementia. This paper presents a new invariant interest point descriptor methodology for Alzheimer disease classification. The descriptor depends on the normalized Hu Moment Invariants (NHMI). The proposed approach deals with raw Magnetic Resonance Imaging (MRI) scans of Alzheimer disease. Seven Hu moments are computed to extract image features. These moments are then normalized, giving new, more powerful features that greatly improve the classification system's performance. The invariance of the moments is the source of the Hu moments algorithm's robustness for feature extraction. The classification process is implemented using two different classifiers, the K-Nearest Neighbors algorithm (KNN) and Linear Support Vector Machines (SVM), and their performances are compared. The results are evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The best classification accuracy is 91.4% for the KNN classifier and 100% for the SVM classifier.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_2-Classification_of_Alzheimer_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multiple-Criteria Decision Making Model for Ranking Refactoring Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081101</link>
        <id>10.14569/IJACSA.2017.081101</id>
        <doi>10.14569/IJACSA.2017.081101</doi>
        <lastModDate>2017-11-30T19:03:14.0500000+00:00</lastModDate>
        
        <creator>Abdulmajeed Aljuhani</creator>
        
        <creator>Luigi Benedicenti</creator>
        
        <creator>Sultan Alshehri</creator>
        
        <subject>Analytic network process; extreme programming; refactoring practice; refactoring patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(11), 2017</description>
        <description>The analytic network process (ANP) is capable of structuring decision problems and finding mathematically determined judgments built on knowledge and experience. Research suggests that the ANP can be useful in software development, where complicated decisions happen routinely. In extreme programming (XP), refactoring is applied where the code smells bad, which might cost additional effort and time. As a result, in order to increase the advantages of refactoring with less effort and time, the analytic network process has been used to accomplish this purpose. This paper presents an example of applying the ANP to rank refactoring patterns with regard to internal code quality attributes. A case study conducted in an academic environment is presented. The results of the case study show the benefits of using the ANP in the XP development cycle.</description>
        <description>http://thesai.org/Downloads/Volume8No11/Paper_1-A_Multiple_Criteria_Decision_Making_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Framework to Study Efficient Spectral Estimation Techniques for Assessing Spectral Efficiency Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081057</link>
        <id>10.14569/IJACSA.2017.081057</id>
        <doi>10.14569/IJACSA.2017.081057</doi>
        <lastModDate>2017-10-31T11:11:22.0470000+00:00</lastModDate>
        
        <creator>Kantipudi MVV Prasad</creator>
        
        <creator>H.N. Suresh</creator>
        
        <subject>Amplitude and phase estimation; ASC; capon spectral estimator; spectral estimation; PSC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Advanced network applications enable software-driven spectral analysis of non-stationary signals or processes, which involves domain analysis with the purpose of decomposing complex signal coefficients into simpler forms. Proper estimation of the power coefficients over the frequency components of a random signal provides very useful information required in various fields of study. The complex design constraints associated with conventional parametric models, such as the Dynamic Average Model and the Autoregressive MA model, for multidimensional spectral estimation using adaptive filters lead to a situation where higher computational complexity generates significant overhead on the system. Therefore, the proposed study aims to formulate an efficient framework intended to derive a fast algorithm for Amplitude and Phase Estimation (APES). The proposed method is applied to a random, non-stationary signal. Further, adaptive estimation of the power spectra along with more accurate spectral efficiency has been identified in the case of APES. An extensive performance evaluation followed by a comparative analysis has been performed by obtaining values from different spectral estimation techniques, namely APES, PSC, ASC, and CAPON. Moreover, the framework ensures that, unlike the others, APES attains superior signal quality with respect to Power Spectral Density (PSD) and Signal to Noise Ratio (SNR) while achieving a much smaller Mean Square Error (MSE). It also exhibits comparatively low convergence time and computational complexity compared to its legacy versions.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_57-Integrated_Framework_to_Study_Efficient_Spectral_Estimation_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of Semantic Discretization based Indian Weighted Diabetes Risk Score (IWDRS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081056</link>
        <id>10.14569/IJACSA.2017.081056</id>
        <doi>10.14569/IJACSA.2017.081056</doi>
        <lastModDate>2017-10-31T11:11:21.9670000+00:00</lastModDate>
        
        <creator>Omprakash Chandrakar</creator>
        
        <creator>Jatinderkumar R. Saini</creator>
        
        <creator>Lal Bihari Barik</creator>
        
        <subject>Data mining; indian weighted diabetes risk score; semantic discretization; type-2 diabetes risk score</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The objective of this research study is to validate the Indian Weighted Diabetes Risk Score (IWDRS). The IWDRS is derived by applying the novel concept of semantic discretization based on data mining techniques. 311 adult participants (age &gt; 18 years), who had been tested for diabetes using biochemical tests in a pathology laboratory according to World Health Organization (WHO) guidelines, were selected for this study. These subjects were not used in deriving the IWDRS tool. The IWDRS was calculated for all 311 subjects. Prediction parameters such as sensitivity and specificity were evaluated, along with other performance parameters, at an optimal cut-off score for the IWDRS. The IWDRS tool was validated and found to be highly sensitive in diagnosing diabetes-positive cases, while at the same time being almost equally specific in identifying diabetes-negative cases. The results of the IWDRS were compared with the results of two other similar studies conducted on the Indian population and found to be better. At the optimal cut-off score IWDRS &gt;= 294, the prediction accuracy is 82.32%, while sensitivity and specificity are 82.22% and 82.44%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_56-Validation_of_Semantic_Discretization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Unsupervised Abnormal Event Identification Mechanism for Analysis of Crowded Scene</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081055</link>
        <id>10.14569/IJACSA.2017.081055</id>
        <doi>10.14569/IJACSA.2017.081055</doi>
        <lastModDate>2017-10-31T11:11:21.9070000+00:00</lastModDate>
        
        <creator>Pushpa D</creator>
        
        <subject>Abnormal event; detection; event detection; object detection; machine learning; video surveillance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Advances in visual sensing have enabled better capture of discrete information from complex, crowded scenes to assist analysis. However, after reviewing existing systems, we find that the majority of the work carried out to date suffers from significant problems in modeling event detection as well as in assessing the abnormality of a given scene. Therefore, the proposed system introduces a model capable of identifying the degree of abnormality of an event captured in a crowded scene using an unsupervised training methodology. The proposed system contributes a novel region-wise repository to extract contextual information about discrete events in a given scene. The study outcome shows a highly improved balance between computational time and overall accuracy compared to the majority of standard research work emphasizing event detection.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_55-A_Novel_Unsupervised_Abnormal_Event_Identification_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Weight Optimization Mechanism for Email Spam Detection based on Two-step Clustering Algorithm and Logistic Regression Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081054</link>
        <id>10.14569/IJACSA.2017.081054</id>
        <doi>10.14569/IJACSA.2017.081054</doi>
        <lastModDate>2017-10-31T11:11:21.8430000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali </creator>
        
        <subject>Two-step clustering; spam filtering; classification; detection; feature weight; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This research proposes an improved spam filtering technique for suspected email messages based on feature weighting and the combination of a two-step clustering algorithm and logistic regression. Unique, important features are used as the optimal input for the proposed hybrid approach. This study adopted a spam detector model based on a distance measure and a threshold value. The aim of this model was to study and select distinct features for email filtering using the feature weight method for dimension reduction. The two-step clustering algorithm was used to generate a new feature called “Label” to cluster the diverse emails and group them based on inter-sample similarity. The spam filtering process was thereby simplified using the logistic regression classifier to distinguish the hidden patterns of spam and non-spam emails. The experimental design was based on the UCI spam dataset. The findings show that the email filtering results are promising compared to other modern spam filtering methods.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_54-Feature_Weight_Optimization_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aggregation Operator for Assignment of Resources in Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081053</link>
        <id>10.14569/IJACSA.2017.081053</id>
        <doi>10.14569/IJACSA.2017.081053</doi>
        <lastModDate>2017-10-31T11:11:21.7970000+00:00</lastModDate>
        
        <creator>David L la Red Mart&#237;nez</creator>
        
        <subject>Aggregation operators; concurrency control; communication between groups of processes; mutual exclusion; operating systems; processor scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In distributed processing systems, it is often necessary to coordinate the allocation of shared resources that should be assigned to processes under mutual exclusion. In such cases, the order in which the shared resources will be assigned to the processes that require them must be decided. In this paper, we propose an aggregation operator (which could be used by a shared resources manager module) that decides the order of allocation of resources to processes, considering the requirements of the processes (shared resources) and the state of the distributed nodes where the processes operate (their computational load).</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_53-Aggregation_Operator_for_Assignment_of_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Summarization Techniques: A Brief Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081052</link>
        <id>10.14569/IJACSA.2017.081052</id>
        <doi>10.14569/IJACSA.2017.081052</doi>
        <lastModDate>2017-10-31T11:11:21.7670000+00:00</lastModDate>
        
        <creator>Mehdi Allahyari</creator>
        
        <creator>Seyedamin Pouriyeh</creator>
        
        <creator>Mehdi Assefi</creator>
        
        <creator>Saeid Safaei</creator>
        
        <creator>Elizabeth D. Trippe</creator>
        
        <creator>Juan B. Gutierrez</creator>
        
        <creator>Krys Kochut</creator>
        
        <subject>Text summarization; extractive summary; abstractive summary; knowledge bases; topic models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In recent years, there has been an explosion in the amount of text data from a variety of sources. This volume of text is an invaluable source of information and knowledge which needs to be effectively summarized to be useful. Text summarization is the task of shortening a text document into a condensed version that preserves the important information and content of the original document. In this review, the main approaches to automatic text summarization are described. We review the different processes for summarization and describe the effectiveness and shortcomings of the different methods.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_52-Text_Summarization_Techniques_a_Brief_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Diverse Impacts of Conventional Distributed Energy Resources on Distribution System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081051</link>
        <id>10.14569/IJACSA.2017.081051</id>
        <doi>10.14569/IJACSA.2017.081051</doi>
        <lastModDate>2017-10-31T11:11:21.7330000+00:00</lastModDate>
        
        <creator>Muhammad Aamir Aman</creator>
        
        <creator>Sanaullah Ahmad</creator>
        
        <creator>Azzam ul Asar</creator>
        
        <creator>Babar Noor</creator>
        
        <subject>Electric power system; distributed generation; voltage profile; power losses; synchronous generator; induction generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In recent years, the rapid boost in energy demand around the globe has put power systems under stress. To fulfill energy demands and confine technical losses, researchers are eager to investigate the diverse impacts of Distributed Generation (DG) on the parameters of the distribution network. DG is becoming ever more attractive to power producing companies, utilities, and consumers due to the production of energy near load centers. Reduction in power losses, a better voltage profile, and lower environmental impact are the benefits of DG. Besides renewable energy resources, conventional energy resources are also a viable option for DG. This research aims to analyze the impact of localized synchronous and induction generators on the distribution network. The main objectives are to find the optimal type, size, and location of DG in the distribution network to improve the voltage profile and reduce power losses. The worldwide recognized software tool ETAP was used, with the Kohat Road electricity distribution network as a test case. Results showed that when a synchronous generator was injected as a DG unit, positive impacts on the voltage profile were recorded at certain buses, and power losses decreased by almost 20%. Injecting an induction generator as a DG unit increased power losses due to the absorption of reactive power, while improving the voltage profile by injecting active power.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_51-Analyzing_the_Diverse_Impacts_of_Conventional_Distributed_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RGBD Human Action Recognition using Multi-Features Combination and K-Nearest Neighbors Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081050</link>
        <id>10.14569/IJACSA.2017.081050</id>
        <doi>10.14569/IJACSA.2017.081050</doi>
        <lastModDate>2017-10-31T11:11:21.6570000+00:00</lastModDate>
        
        <creator>Rawya Al-Akam</creator>
        
        <creator>Dietrich Paulus</creator>
        
        <subject>RGBD videos; feature extraction; K-means clustering; KNN (K-Nearest Neighbor)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In this paper, we present a novel system to analyze human body motions for the action recognition task from two sets of features using RGBD videos. The Bag-of-Features approach is used to recognize human actions by extracting local spatial-temporal features and shape-invariant features from all video frames. These feature vectors are computed in four steps. Firstly, all interest keypoints are detected from the RGB video frames using Speeded-Up Robust Features, the motion points are filtered using the Motion History Image and Optical Flow, and these motion points are then aligned to the depth frame sequences. Secondly, a Histogram of Oriented Gradients descriptor is used to compute the feature vector around these points from both the RGB and depth channels, and these feature values are combined into one RGBD feature vector. Thirdly, Hu-Moment shape features are computed from the RGBD frames. Fourthly, the HOG features are combined with the Hu-Moment features into one feature vector for each video action. Finally, k-means clustering and the multi-class K-Nearest Neighbor classifier are used for the classification task. This system is invariant to scale, rotation, translation, and illumination. All tests are performed on a publicly available dataset that is often used in the community. This new feature combination method improves performance on actions with low movement and reaches recognition rates superior to other publications on the dataset.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_50-RGBD_Human_Action_Recognition_using_Multi_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Microstrip Patch Antenna with High Bandwidth and High Gain for UWB and Different Wireless Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081049</link>
        <id>10.14569/IJACSA.2017.081049</id>
        <doi>10.14569/IJACSA.2017.081049</doi>
        <lastModDate>2017-10-31T11:11:21.5930000+00:00</lastModDate>
        
        <creator>Zain Ul Abedin</creator>
        
        <creator>Zahid Ullah</creator>
        
        <subject>High bandwidth; patch antenna; low profile; linear polarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>We propose a square-shaped patch antenna in this research work. The focus of the work is to obtain a large bandwidth with a compact ground plane for wireless applications. The proposed antenna is designed using FR4 dielectric material with a height of 1.6 mm and a relative permittivity of 4.4. We simulated the proposed antenna in CST Microwave Studio. Simulation results show that the proposed antenna achieves a bandwidth from 2.33 GHz to 12.4 GHz with a radiation efficiency of more than 90% in the ultra-wideband range. The proposed antenna covers the ultra-wideband range from 3.1 GHz to 10.6 GHz, the ranges of local area and wide area networks, and also the range of satellite communications (for both uplink and downlink).</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_49-Design_of_a_Microstrip_Patch_Antenna_with_High_Bandwidth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accuracy Based Feature Ranking Metric for Multi-Label Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081048</link>
        <id>10.14569/IJACSA.2017.081048</id>
        <doi>10.14569/IJACSA.2017.081048</doi>
        <lastModDate>2017-10-31T11:11:21.5170000+00:00</lastModDate>
        
        <creator>Muhammad Nabeel Asim</creator>
        
        <creator>Abdur Rehman</creator>
        
        <creator>Umar Shoaib</creator>
        
        <subject>Binary relevance (BR); label powerset (LP); ACC2; information gain (IG); Relief-F (RF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In many application domains, such as machine learning, scene and video classification, data mining, medical diagnosis, and machine vision, instances belong to more than one category. Feature selection in single-label text classification is used to reduce the dimensionality of datasets by filtering out irrelevant and redundant features. Dimensionality reduction in multi-label classification is a different scenario because features may belong to more than one class. The label and instance spaces are growing rapidly with the expansion of the Internet, which is challenging for Multi-Label Classification (MLC). Feature selection is crucial for data reduction in MLC. Method adaptation and dataset transformation are two techniques used to select features in multi-label text classification. In this paper, we present a dataset transformation technique to reduce the dimensionality of multi-label text data. We use two model transformation approaches, Binary Relevance and Label Powerset, to transform the data from multi-label to single-label. Feature selection is performed using the filter approach, which uses the data to decide the importance of features without applying a learning algorithm. In this paper, we use a simple measure (ACC2) for feature selection in multi-label text data. We use the problem transformation approach to apply single-label feature selection measures to multi-label text data, and we compare ACC2 with two other feature selection methods, information gain (IG) and the Relief measure. Experiments are conducted on three benchmark datasets and their empirical evaluation results are reported. ACC2 is found to perform better than IG and Relief in 80% of the cases in our experiments.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_48-Accuracy_based_Feature_Ranking_Metric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Managing Uncertain Distributed Categorical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081047</link>
        <id>10.14569/IJACSA.2017.081047</id>
        <doi>10.14569/IJACSA.2017.081047</doi>
        <lastModDate>2017-10-31T11:11:21.4370000+00:00</lastModDate>
        
        <creator>Adel Benaissa</creator>
        
        <creator>Mustapha Yahmi</creator>
        
        <creator>Yassine Jamil</creator>
        
        <subject>Distributed uncertain data; Top-k query; threshold query; indexing; categorical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>In recent years, data has become uncertain due to flourishing advanced technologies that continuously and increasingly produce large amounts of incomplete data. Many modern applications where uncertainty occurs are distributed in nature, e.g., distributed sensor networks, information extraction, data integration, and social networks. Consequently, even though data uncertainty has been studied in the past in centralized settings, managing uncertainty over data in situ remains a challenging issue. In this paper, we propose a framework for managing uncertain categorical data over distributed environments. It is built upon a hierarchical indexing technique based on an inverted index, and a distributed algorithm to efficiently process queries on uncertain data in a distributed environment. Leveraging this indexing technique, we address two kinds of queries on distributed uncertain databases: 1) a distributed probabilistic threshold query, whose answers satisfy the probabilistic threshold requirement; and 2) a distributed top-k query, which optimizes the transfer of tuples from the distributed sources to the coordinator site as well as the processing time. Extensive experiments are conducted to verify the effectiveness and efficiency of the proposed method in terms of communication cost and response time.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_47-Framework_for_Managing_Uncertain_Distributed_Categorical_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Recognition based on EEG using LSTM Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081046</link>
        <id>10.14569/IJACSA.2017.081046</id>
        <doi>10.14569/IJACSA.2017.081046</doi>
        <lastModDate>2017-10-31T11:11:21.4070000+00:00</lastModDate>
        
        <creator>Salma Alhagry</creator>
        
        <creator>Aly Aly Fahmy</creator>
        
        <creator>Reda A. El-Khoribi</creator>
        
        <subject>Electroencephalogram; emotion; emotion recognition; deep learning; long-short term memory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Emotion is the most important component of daily interaction between people. Nowadays, it is important to make computers understand the emotions of the users who interact with them in human-computer interaction (HCI) systems. Electroencephalogram (EEG) signals are the main source of emotion information in our bodies. Recently, emotion recognition based on EEG signals has attracted many researchers, and many methods have been reported. Different types of features were extracted from EEG signals, and different types of classifiers were then applied to these features. In this paper, a deep learning method is proposed to recognize emotion from raw EEG signals. Long Short-Term Memory (LSTM) is used to learn features from the EEG signals, and a dense layer then classifies these features into low/high arousal, valence, and liking. The DEAP dataset is used to evaluate this method, which achieves average accuracies of 85.65%, 85.45%, and 87.99% for the arousal, valence, and liking classes, respectively. The proposed method achieves high average accuracy in comparison with traditional techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_46-Emotion_Recognition_based_on_EEG_using_LSTM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Dependency based Package-level Metrics for Multi-objective Maintenance Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081045</link>
        <id>10.14569/IJACSA.2017.081045</id>
        <doi>10.14569/IJACSA.2017.081045</doi>
        <lastModDate>2017-10-31T11:11:21.3600000+00:00</lastModDate>
        
        <creator>Mohsin Shaikh</creator>
        
        <creator>Akhtar Hussain Jalbani</creator>
        
        <creator>Adil Ansari</creator>
        
        <creator>Ahmed Ali</creator>
        
        <creator>Kashif Memon</creator>
        
        <subject>Software quality; package-level metrics; software modularization; fault-prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The role of packages in the organization and maintenance of software systems has acquired vital importance in recent software quality research. With advances in modularization approaches for object-oriented software, packages are widely considered reusable and maintainable entities of object-oriented software architectures, especially for avoiding complicated dependencies and ensuring a software design of well-identified services. In this context, the recent research of H. Abdeen on the automatic optimization of package dependencies provides a composite set of metrics for package quality and overall source code modularization. There is an opportunity to conduct a comprehensive empirical analysis of the proposed metrics to assess their usefulness and their application to fault prediction, design flaw detection, and the identification of source code anomalies and architectural erosion. In this paper, we examine the impact of these dependency-optimization-based metrics across a wide spectrum of software quality, for single packages and for entire software modularizations. Our experimental work is conducted on open source software systems using a statistical methodology based on cross-validated fault prediction and correlation. We conclude with empirical evidence that dependency-based package modularization metrics provide a more accurate view for predicting fault-prone packages and improving the overall software structure. Thus, applying these metrics can help developers and software practitioners ensure proactive management of source code dependencies and avoid design flaws during software development.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_45-Evaluating_Dependency_based_Package_level_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Hybrid Quicksort Algorithm Vectorized using AVX-512 on Intel Skylake</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081044</link>
        <id>10.14569/IJACSA.2017.081044</id>
        <doi>10.14569/IJACSA.2017.081044</doi>
        <lastModDate>2017-10-31T11:11:21.3130000+00:00</lastModDate>
        
        <creator>Berenger Bramas</creator>
        
        <subject>Quicksort; Bitonic; sort; vectorization; SIMD; AVX-512; Skylake</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The design of the modern CPU, which is composed of hierarchical memory and SIMD/vectorization capabilities, governs the potential for algorithms to be transformed into efficient implementations. The release of AVX-512 changed things radically and motivated us to search for an efficient sorting algorithm that can take advantage of it. In this paper, we describe the best strategy we have found, which is a novel two-part hybrid sort based on the well-known Quicksort algorithm. The central partitioning operation is performed by a new algorithm, and small partitions/arrays are sorted using a branch-free Bitonic-based sort. This study also illustrates how classical algorithms can be adapted to and enhanced by the AVX-512 extension. We evaluate the performance of our approach on a modern Intel Xeon Skylake and assess the different layers of our implementation by sorting/partitioning integers, double floating-point numbers, and key/value pairs of integers. Our results demonstrate that our approach is faster than two reference libraries: the GNU C++ sort algorithm by a speedup factor of 4, and the Intel IPP library by a speedup factor of 1.4.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_44-A_Novel_Hybrid_Quicksort_Algorithm_Vectorized.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Area k-Coverage Optimization Protocol for Heterogeneous Dense Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081043</link>
        <id>10.14569/IJACSA.2017.081043</id>
        <doi>10.14569/IJACSA.2017.081043</doi>
        <lastModDate>2017-10-31T11:11:21.2830000+00:00</lastModDate>
        
        <creator>Herv&#233; Gokou Di&#233;di&#233;</creator>
        
        <creator>Boko Aka</creator>
        
        <creator>Michel Babri</creator>
        
        <subject>Coverage; optimization; wireless sensor network; scheduling; GRASP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Detecting redundant nodes and scheduling their activity is mandatory to prolong the lifetime of a densely deployed wireless sensor network, provided that the redundancy check and scheduling phases both help preserve the coverage ratio and guarantee energy efficiency. However, most solutions proposed in the literature tend to allocate a large number of unnecessary neighbor (re)discovery time slots in the duty cycle of the active nodes. Such a shortcoming is detrimental to battery power conservation. In this paper, we propose a crossing-points-based heuristic to quickly detect redundant nodes even in heterogeneous networks, then an integer linear program and a local-exclusion-based strategy to respectively formulate and solve the sensing unit scheduling problem. Simulations show that the resulting localized asynchronous protocol outperforms some state-of-the-art solutions with respect to coverage preservation and network lifetime enhancement.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_43-Area_k_Coverage_Optimization_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling House Price Prediction using Regression Analysis and Particle Swarm Optimization Case Study : Malang, East Java, Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081042</link>
        <id>10.14569/IJACSA.2017.081042</id>
        <doi>10.14569/IJACSA.2017.081042</doi>
        <lastModDate>2017-10-31T11:11:21.2370000+00:00</lastModDate>
        
        <creator>Adyan Nur Alfiyatin</creator>
        
        <creator>Ruth Ema Febrita</creator>
        
        <creator>Hilman Taufiq</creator>
        
        <creator>Wayan Firdaus Mahmudy</creator>
        
        <subject>House prediction; regression analysis; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>House prices increase every year, so there is a need for a system to predict house prices in the future. House price prediction can help a developer determine the selling price of a house and can help a customer arrange the right time to purchase a house. Three factors influence the price of a house: physical condition, concept, and location. This research aims to predict house prices based on the NJOP of houses in Malang city using regression analysis and particle swarm optimization (PSO). PSO is used to select the influential variables, and regression analysis is used to determine the optimal coefficients for prediction. The results show that the combination of regression analysis and PSO is suitable, achieving a minimum prediction error of IDR 14.186.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_42-Modeling_House_Price_Prediction_using_Linear_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Algorithm to Improve Resolution for Very Few Samples</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081041</link>
        <id>10.14569/IJACSA.2017.081041</id>
        <doi>10.14569/IJACSA.2017.081041</doi>
        <lastModDate>2017-10-31T11:11:21.1870000+00:00</lastModDate>
        
        <creator>Sidi Mohamed Hadj Irid</creator>
        
        <creator>Samir Kameche</creator>
        
        <subject>Direction of arrival (DOA) estimation; Bootstrap; Multiple Signal Classification (MUSIC); resolution; spatial smoothing; array processing; Uniform Linear Array (ULA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This paper presents a new technique to improve the resolution and direction of arrival (DOA) estimation of two closely spaced sources in array processing when only a few samples of the received signal are available. Under these conditions, the detection of sources (targets) is more arduous and may even break down. To overcome these problems, a new algorithm is proposed. It combines the spatial smoothing method to improve the spatial resolution, the bootstrap technique to increase the effective sample size, and a high-resolution technique, Multiple Signal Classification (MUSIC), to estimate the DOA. Through different simulations, the performance and effectiveness of the proposed approach, referred to as the Spatial Smoothing and Bootstrap technique (SSBoot), are demonstrated.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_41-A_Novel_Algorithm_to_Improve_Resolution_for_Very_Few_Samples.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of External Disturbance and Discontinuous Input on the Redundant Manipulator Robot Behaviour using the Linear Parameter Varying Modelling Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081040</link>
        <id>10.14569/IJACSA.2017.081040</id>
        <doi>10.14569/IJACSA.2017.081040</doi>
        <lastModDate>2017-10-31T11:11:21.0930000+00:00</lastModDate>
        
        <creator>Sameh Zribi</creator>
        
        <creator>Hatem Tlijani</creator>
        
        <creator>Jilani Knani</creator>
        
        <creator>Vicen&#231; Puig</creator>
        
        <subject>Redundant manipulator robot; flexible structure; linear parameter varying approach; discontinuous torque; external disturbance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This paper is concerned with the synthesis of a dynamic model of a redundant manipulator robot based on the Linear Parameter Varying (LPV) approach. To evaluate its behavior in the presence of external disturbances, several motion profiles are developed using a new algorithm that produces smooth trajectories in optimal time. The main advantages of the proposed approach are its robustness and its simplicity with respect to the flexible structure, the motion profile, and mass load variations. Numerical simulations with several tasks show that, in the presence of mass load variation, the desired trajectory is followed more efficiently by the LPV model than by the dynamic model of the studied mechanism. Its performance is best with the smoother trajectory designed by the eighth-degree polynomial profile, compared with the fifth-degree polynomial and trapezoidal profiles.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_40-Impact_of_External_Disturbance_and_Discontinuous_Input.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QRishing: A User Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081039</link>
        <id>10.14569/IJACSA.2017.081039</id>
        <doi>10.14569/IJACSA.2017.081039</doi>
        <lastModDate>2017-10-31T11:11:21.0000000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Ali Arifin</creator>
        
        <subject>QR code; perceived risk; perceived privacy; trust; Structural Equation Modeling (SEM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The QR code offers more benefits and features than its predecessor, the barcode, which makes it more popular. However, behind the features and conveniences the QR code offers, it can also be exploited to perform QRishing. This study proposes a model based on the Technology Acceptance Model (TAM) combined with Perceived Security, Trust, Perceived Behavioral Control, Self-Efficacy, and Perceived Risk, based on previous research. Data obtained from 300 respondents are analyzed with Structural Equation Modeling (SEM). The results show that Attitude, Perceived Security, and Perceived Risk affect an individual's decision to scan a QR code.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_39-QRishing_a_User_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Summerization and Analysis of Sindhi Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081038</link>
        <id>10.14569/IJACSA.2017.081038</id>
        <doi>10.14569/IJACSA.2017.081038</doi>
        <lastModDate>2017-10-31T11:11:20.9230000+00:00</lastModDate>
        
        <creator>Mazhar Ali</creator>
        
        <creator>Asim Imdad Wagan</creator>
        
        <subject>Sindhi NLP; sentiment structurization; sentiment analysis; supervised analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>A text corpus is important for the assessment of language features and variation analysis. Machine learning techniques identify language terms, features, text structures, and sentiment from a linguistic corpus. Sindhi is one of the oldest languages of the world, with a proper script and complete grammar, yet it has remained a computationally under-resourced language even in this digital era. In view of this problem, a Sindhi NLP toolkit is developed to address Sindhi NLP and computational linguistics problems; this research work may therefore be an addition to NLP. This research study has developed its own sentimentally structured and analyzed Sindhi corpus based on the accumulated results of a Sindhi sentiment analysis tool. The corpus is normalized and analyzed for language features and variation using DTM and TF-IDF techniques; the DTM and TF-IDF analysis is performed using an n-gram model. A supervised machine learning model is formulated using SVM and K-NN techniques to perform analysis on the Sindhi sentiment analysis corpus dataset. Precision, recall, and F-score show better performance for the machine learning techniques than for the other techniques. A 10-fold cross-validation technique is used to validate and evaluate the dataset randomly for the supervised machine learning analysis. This research study opens doors for linguists, data analysts, and decision makers to work further on sentiment summarization and visual tracking.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_38-Sentiment_Summerization_and_Analysis_of_Sindhi_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Word-Based Grammars for PPM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081037</link>
        <id>10.14569/IJACSA.2017.081037</id>
        <doi>10.14569/IJACSA.2017.081037</doi>
        <lastModDate>2017-10-31T11:11:20.7830000+00:00</lastModDate>
        
        <creator>Nojood O. Aljehane</creator>
        
        <creator>William J. Teahan</creator>
        
        <subject>Context-free grammar (CFG); grammar-based; word-based; preprocessing; Prediction by Partial Matching (PPM); encoding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The Prediction by Partial Matching (PPM) compression algorithm is considered one of the most efficient methods for compressing natural language text. Despite the advances of the PPM method for the English language in predicting upcoming symbols or words, more research is required to devise better compression methods for other languages such as Arabic, due, for example, to the rich morphological nature of Arabic text, where a word can take many different forms. In this paper, we propose a new method that achieves the best compression rates not only for Arabic text but also for other languages that use Arabic script in their writing system, such as Persian. Our word-based method constructs a context-free grammar (CFG) for the text, and this grammar is then encoded using PPM to achieve excellent compression rates.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_37-Word_based_Grammars_for_PPM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Design for XOR Gate used for Quantum-Dot Cellular Automata (QCA) to Create a Revolution in Nanotechnology Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081036</link>
        <id>10.14569/IJACSA.2017.081036</id>
        <doi>10.14569/IJACSA.2017.081036</doi>
        <lastModDate>2017-10-31T11:11:20.6430000+00:00</lastModDate>
        
        <creator>Radhouane Laajimi</creator>
        
        <creator>Ali Ajimi</creator>
        
        <creator>Lamjed Touil</creator>
        
        <creator>Ali Newaz Bahar</creator>
        
        <subject>QCA exclusive-OR; XOR gate; quantum-dot cellular automata (QCA); nanotechnology; majority gate; unique structure; QCA designer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Novel digital technologies always lead to high density and very low power consumption. One of these concepts is Quantum-dot Cellular Automata (QCA), an emerging nanotechnology based on Coulomb repulsion. This article presents three architectures of the logical “XOR” gate: a novel structure of a two-input “XOR” gate, which is used as a module to implement four-input and eight-input “XOR” gates using the QCA technique. The two-input, four-input, and eight-input QCA “XOR” gate architectures are built using 10, 35, and 90 cells on areas of 0.008 &#181;m2, 0.036 &#181;m2, and 0.114 &#181;m2, respectively. The proposed “XOR” gate structure provides an improvement in terms of circuit complexity, area, latency, and type of cross wiring compared to previous architectures. These proposed “XOR” gate architectures are evaluated and simulated using the QCADesigner tool, version 2.0.3.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_36-A_Novel_Design_for_XOR_Gate_used_for_Quantum_Dot_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Hierarchical Hybrid Intrusion Detection Mechanism in Wireless Sensors Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081035</link>
        <id>10.14569/IJACSA.2017.081035</id>
        <doi>10.14569/IJACSA.2017.081035</doi>
        <lastModDate>2017-10-31T11:11:20.5500000+00:00</lastModDate>
        
        <creator>Lamyaa Moulad</creator>
        
        <creator>Hicham Belhadaoui</creator>
        
        <creator>Mounir Rifi</creator>
        
        <subject>Wireless Sensor Networks (WSNs); Intrusion Detection System (IDS); anomalies; specification-based detection; Denial Of Service (DOS) attacks; hybrid intrusion detection system; support vector machine(SVM); false alarm; detection rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>During the last few years, Wireless Sensor Networks (WSNs) have attracted considerable attention within the scientific community. Applications based on WSNs, in areas including agriculture, the military, and hospitality management, are growing swiftly. Yet they are vulnerable to various security threats, such as Denial of Service (DOS) attacks. Such issues can severely degrade the performance of the network and its components and cause them to malfunction. However, key management, authentication, and secure routing protocols are not able to offer the required security for WSNs; all they can offer is a first line of defense, especially against outside attacks. Therefore, implementing a second line of defense, an Intrusion Detection System (IDS), is deemed necessary as part of an integrated approach to secure the network against malicious and abnormal behaviors of intruders, hence the goal of this paper. This improves security and protects all resources related to a WSN. Recently, different detection methods have been proposed to develop an effective intrusion detection system for WSNs. In this regard, we propose an integral mechanism: a hybrid intrusion detection approach based on anomaly detection using a support vector machine (SVM), a specification-based technique, signatures, and a clustering algorithm to decrease resource consumption by reducing the amount of information forwarded. Our aim is to protect WSNs without disturbing network performance, through good management of their resources, especially energy.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_35-Implementation_of_an_Hybrid_Intrusion_Detection_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FFD Variants for Virtual Machine Placement in Cloud Computing Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081034</link>
        <id>10.14569/IJACSA.2017.081034</id>
        <doi>10.14569/IJACSA.2017.081034</doi>
        <lastModDate>2017-10-31T11:11:20.4570000+00:00</lastModDate>
        
        <creator>Aneeba Khalil Soomro</creator>
        
        <creator>Mohammad Arshad Shaikh</creator>
        
        <creator>Hameedullah Kazi</creator>
        
        <subject>Cloud computing; virtual machine placement; virtualization; first fit decreasing (FFD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Virtualization technology is used to efficiently utilize the resources of a Cloud datacenter by running multiple virtual machines (VMs) on a single physical machine (PM) as if each VM were a standalone PM. Efficient placement/consolidation of VMs into PMs can reduce the number of active PMs, which consequently reduces resource wastage and power consumption. Therefore, VM placement algorithms need to be optimized to reduce the number of PMs required for VM placement. In this paper, two heuristic-based Vector Bin Packing algorithms, called FFDmean and FFDmedian, are proposed for VM placement. These algorithms use the First Fit Decreasing (FFD) technique. FFD preprocesses VMs by sorting all VMs in descending order of their sizes. Since a VM is multidimensional, it is difficult to decide on its size. For this, FFDmean and FFDmedian use measures of central tendency, i.e. the mean and the median, respectively, as heuristics to estimate the size of a VM. The goal of these algorithms is to utilize PM resources efficiently so that the number of PMs required to accommodate all VMs can be reduced. The CloudSim toolkit is used to carry out the cloud simulation and experiments. The algorithms are compared over three metrics, i.e. hosts used, power consumption and resource utilization efficiency. The results reveal that FFDmean and FFDmedian remarkably outperformed two existing algorithms, called Dot-Product and L2, in all three metrics when PM resources were limited.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_34-FFD_Variants_for_Virtual_Machine_Placement_in_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of a Communication System and Device Aimed at the Inclusion of People with Oral Communication Disabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081033</link>
        <id>10.14569/IJACSA.2017.081033</id>
        <doi>10.14569/IJACSA.2017.081033</doi>
        <lastModDate>2017-10-31T11:11:20.3600000+00:00</lastModDate>
        
        <creator>M&#225;ximo L&#243;pez S&#225;nchez</creator>
        
        <creator>Juan Gabriel Gonz&#225;lez Serna</creator>
        
        <creator>Jos&#233; Luis Molina Salgado</creator>
        
        <creator>Melisa Hern&#225;ndez Salinas</creator>
        
        <subject>Communication; system; disabilities; device; oral</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Disability is part of the human condition, yet people affected by it often face discrimination. The present work was carried out in response to this and to an experience in our research center. A prototype was designed and built that allows eye signals to be sent to a mobile device, where, through a computer system, an appropriate dialogue mechanism could be generated to respond to this challenge. The results open up an area of opportunity for a contribution to the inclusion of people with disabilities.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_33-Design_and_Implementation_of_a_Communication_System_and_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Tri-Level Industry-Focused Learning Approach for Software Engineering Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081032</link>
        <id>10.14569/IJACSA.2017.081032</id>
        <doi>10.14569/IJACSA.2017.081032</doi>
        <lastModDate>2017-10-31T11:11:20.3130000+00:00</lastModDate>
        
        <creator>Anis Zarrad</creator>
        
        <creator>Yassin Daadaa</creator>
        
        <subject>Software project management; education; student rubric assessment; student learning outcomes; industry activities; guest lecturing approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Most engineering classes in higher education rely heavily on the traditional lecture format, despite the fact that a number of investigations have shown that lectures, even when given by good lecturers, have limited success in helping students make sense of the engineering practices they are learning. Recently, the Software Engineering Body of Knowledge (SWEBOK) highlighted the importance of professional practice in producing high quality engineering programs. The integration of industry links in teaching pedagogy is essential. In this paper, the authors introduce a new industry-oriented tri-level teaching approach in order to offer students the opportunity to be involved in industry projects and gain important work experience during the academic period. To prioritize industry hands-on activities for students and shape the traditional classroom toward an industry environment, three entities are involved in this approach: industry guest speakers, teachers, and students. Traditionally, guest lecturing is centered on the speaker, who delivers a presentation and follows with a short question and answer session. Students are often passive learners in this process. A blended learning approach was therefore integrated between all entities to allow more flexible learning opportunities, wherein students participated in each step of guest lecturing, including preparation, questions, and reflection. A software project management case study was introduced to measure students’ performance and satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_32-A_Tri_Level_Industry_Focused_Learning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Aware Virtual Network Embedding Approach for Distributed Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081031</link>
        <id>10.14569/IJACSA.2017.081031</id>
        <doi>10.14569/IJACSA.2017.081031</doi>
        <lastModDate>2017-10-31T11:11:20.2530000+00:00</lastModDate>
        
        <creator>Amal S. Alzahrani</creator>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>Distributed virtual network embedding; energy consumption; particle swarm optimization; network virtualization; virtual network embedding; virtual network request; virtual network partitioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Network virtualization has caught the attention of many researchers in recent years. It facilitates the process of creating several virtual networks over a single physical network. Despite this advantage, however, network virtualization suffers from the problem of mapping virtual links and nodes to the physical network in the most efficient way. This problem is called virtual network embedding (“VNE”). Many approaches have been proposed in an attempt to solve this problem, covering several optimization aspects, such as improving embedding strategies in a way that preserves energy, reducing embedding cost and increasing embedding revenue. Moreover, some researchers have extended their algorithms to be more compatible with distributed clouds instead of a single infrastructure provider (“ISP”). This paper proposes an energy-aware particle swarm optimization algorithm for distributed clouds. The algorithm partitions each virtual network request (“VNR”) into sub-graphs, using the Heavy Clique Matching technique (“HCM”) to generate a coarsened graph. Each coarsened node in the coarsened graph is assigned to a suitable data center (“DC”). Inside each DC, a modified particle swarm optimization algorithm is initiated to find a near-optimal solution to the VNE problem. The proposed algorithm was tested and evaluated against existing algorithms using extensive simulations, which show that it outperforms the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_31-Energy_Aware_Virtual_Network_Embedding_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tagging Urdu Sentences from English POS Taggers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081030</link>
        <id>10.14569/IJACSA.2017.081030</id>
        <doi>10.14569/IJACSA.2017.081030</doi>
        <lastModDate>2017-10-31T11:11:20.1600000+00:00</lastModDate>
        
        <creator>Adnan Naseem</creator>
        
        <creator>Muazzama Anwar</creator>
        
        <creator>Salman Ahmed</creator>
        
        <creator>Qadeem Akhtar Satti</creator>
        
        <creator>Faizan Rasul Hashmi</creator>
        
        <creator>Tahira Malik</creator>
        
        <subject>Stanford part-of-speech (POS) tagger; Google translator; Urdu POS tagging; kappa statistic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Being a global language, English has attracted the majority of researchers and academia to work on several Natural Language Processing (NLP) applications, while other languages have not received as much attention. Part-of-speech (POS) tagging is a necessary component of several NLP applications. An accurate POS tagger for a particular language is not easy to construct due to the diversity of that language. For English, as a global language, POS taggers are more developed and are widely used by researchers and academia for NLP processing. In this paper, the idea of reusing English POS taggers for tagging non-English sentences is proposed. As an exemplary case, Urdu sentences are tagged using English POS taggers. State-of-the-art English POS taggers were explored in the literature, and 11 well-known taggers were applied to the Urdu sentences. Google translator is used to translate the sentences across the languages. Data extracted from twitter.com is used for evaluation. A confusion matrix with the kappa statistic is used to measure the accuracy of actual vs. predicted tagging. The two English POS taggers that tagged Urdu sentences best were the Stanford POS Tagger and the MBSP POS Tagger, with accuracies of 96.4% and 95.7%, respectively. The system can be generalized for multilingual sentence tagging.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_30-Tagging_Urdu_Sentences_from_English_POS_Taggers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bi-Objective Task Scheduling in Cloud Computing using Chaotic Bat Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081029</link>
        <id>10.14569/IJACSA.2017.081029</id>
        <doi>10.14569/IJACSA.2017.081029</doi>
        <lastModDate>2017-10-31T11:11:20.1130000+00:00</lastModDate>
        
        <creator>Fereshteh Ershad Farkar</creator>
        
        <creator>Ali Asghar Pourhaji Kazem</creator>
        
        <subject>Cloud computing; scheduling; chaos theory; bat algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Cloud computing is a technology for providing services over the Internet. It provides an approach to renting IT infrastructure on a short-term, pay-per-usage basis. One of the service provider’s goals is to use resources efficiently and gain maximum profit. The cloud processes a huge amount of data, so task scheduling plays a vital role in cloud computing. The purpose of this paper is to propose a method based on chaos theory and the bat algorithm for task scheduling in cloud computing. Task scheduling is a core and challenging issue in cloud computing; it is NP-hard by nature, and because of the success of heuristic algorithms on optimization and NP-hard problems, the authors use the newly inspired bat algorithm together with chaos theory to schedule tasks in the cloud. In this method, bats, or candidate solutions, are represented by a one-dimensional array. The fitness function is calculated based on makespan and energy consumption. The results show that the proposed method can schedule the received tasks in less time than the other heuristic algorithms compared, and that it also has better performance in terms of makespan and energy consumption than the compared methods.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_29-Bi_Objective_Task_Scheduling_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Redundancy Level Impact of the Mean Time to Failure on Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081028</link>
        <id>10.14569/IJACSA.2017.081028</id>
        <doi>10.14569/IJACSA.2017.081028</doi>
        <lastModDate>2017-10-31T11:11:20.0800000+00:00</lastModDate>
        
        <creator>Alaa E. S. Ahmed</creator>
        
        <creator>Mostafa E. A. Ibrahim</creator>
        
        <subject>Wireless sensor network; reliability; clustering; fault tolerant; Mean Time to Failure (MTTF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Recently, wireless sensor networks (WSNs) have gained great attention due to their ability to monitor various environmental quantities, such as temperature, pressure, sound, etc. They are constructed from a large number of sensor nodes with computation and communication abilities. Most probably, sensors are deployed in an uncontrolled environment, and hence failures are inevitable throughout operation. Faulty sensor nodes may cause incorrect sensing data, wrong data computation or even incorrect communication. Achieving reliable wireless sensor networks is a much-needed goal to ensure quality of service, whether at deployment time or during normal operation. While node redundancy is considered an effective solution to overcome node failures, it may negatively affect the WSN lifetime, as redundancy may lead to greater energy drain across the whole system. In this paper, the impact of the redundancy level on the Mean Time to Failure (MTTF) of cluster-based wireless sensor networks (WSNs) is investigated. An expression that can be used to determine the most suitable redundancy level that maximizes the network MTTF is derived and evaluated.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_28-Redundancy_Level_Impact_of_the_Mean_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balanced Active and Reactive Control Applied to a Grid Connected Five Level Inverter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081027</link>
        <id>10.14569/IJACSA.2017.081027</id>
        <doi>10.14569/IJACSA.2017.081027</doi>
        <lastModDate>2017-10-31T11:11:20.0030000+00:00</lastModDate>
        
        <creator>Chabakata MAHAMAT</creator>
        
        <creator>Micka&#235;l PETIT</creator>
        
        <creator>Fran&#231;ois COSTA</creator>
        
        <creator>Rym MAROUANI</creator>
        
        <subject>Balanced control; grid connected system; multilevel inverter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This paper presents a balanced active and reactive power control, using a Phase-Locked Loop for synchronization, applied to a grid-connected Five Level Inverter. The energy source of the system can be a photovoltaic generator or a wind turbine. We size the passive elements of the system and explain the value of the architecture based on a Five Level Inverter compared to a classical grid-connected system. We also compare the balanced active and reactive power control to an unbalanced active and reactive power control. The simulation results, obtained using Matlab Simulink and SimPowerSystems, are presented and discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_27-Balanced_Active_and_Reactive_Control_Applied_to_a_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verifying Weak Probabilistic Noninterference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081026</link>
        <id>10.14569/IJACSA.2017.081026</id>
        <doi>10.14569/IJACSA.2017.081026</doi>
        <lastModDate>2017-10-31T11:11:19.9570000+00:00</lastModDate>
        
        <creator>Ali A. Noroozi</creator>
        
        <creator>Jaber Karimpour</creator>
        
        <creator>Ayaz Isazadeh</creator>
        
        <creator>Shahriar Lotfi</creator>
        
        <subject>Confidentiality; secure information flow; noninterference; algorithmic verification; bisimulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Weak probabilistic noninterference is a security property for enforcing confidentiality in multi-threaded programs. It aims to guarantee secure flow of information in the program and ensure that sensitive information does not leak to attackers. In this paper, the problem of verifying weak probabilistic noninterference by leveraging formal methods, in particular algorithmic verification, is discussed. The behavior of multi-threaded programs is modeled using probabilistic Kripke structures, and weak probabilistic noninterference is formalized in terms of these structures. Then, a verification algorithm is proposed to check weak probabilistic noninterference. The algorithm uses an abstraction technique to compute the quotient space of the program with respect to an equivalence relation called weak probabilistic bisimulation, and performs a simple check to decide whether the security property is satisfied. The progress made is demonstrated by a real-world case study. It is expected that the proposed approach constitutes a significant step towards more widely applicable secure information flow analysis.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_26-Verifying_Weak_Probabilistic_Noninterference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing Environment and Security Challenges: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081025</link>
        <id>10.14569/IJACSA.2017.081025</id>
        <doi>10.14569/IJACSA.2017.081025</doi>
        <lastModDate>2017-10-31T11:11:19.8930000+00:00</lastModDate>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Urooj Akram</creator>
        
        <creator>Irfan Khan</creator>
        
        <creator>Sundas Naqeeb Khan</creator>
        
        <creator>Asim Shahzad</creator>
        
        <creator>Arif Ullah</creator>
        
        <subject>Cloud computing; deployment models; service models; cloud security; trusted third party; cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Cloud computing exhibits a remarkable potential to offer cost-effective and more flexible services on demand to customers over the network. It dynamically increases the capabilities of an organization without training new people, investing in new infrastructure or licensing new software. Cloud computing has grown dramatically in the last few years due to the scalability of resources and appears as a fast-growing segment of the IT industry. The dynamic and scalable nature of cloud computing creates security challenges for its management, such as policy failure or malicious activity. In this paper, we examine the detailed design of cloud computing architecture, in which deployment models, service models, cloud components, and cloud security are explored. Furthermore, this study identifies the security challenges that arise during the transfer of data into the cloud and provides a viable solution to address the potential threats. The role of a Trusted Third Party (TTP) is introduced to ensure sufficient security characteristics in cloud computing. The cryptographic security solution is specifically a Public Key Infrastructure (PKI) operating with Single Sign-On (SSO) and the Lightweight Directory Access Protocol (LDAP), which ensures the integrity, confidentiality, availability, and authenticity of communications and data.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_25-Cloud_Computing_Environment_and_Security_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Cancer Treatment Alternatives using Fuzzy PROMETHEE Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081024</link>
        <id>10.14569/IJACSA.2017.081024</id>
        <doi>10.14569/IJACSA.2017.081024</doi>
        <lastModDate>2017-10-31T11:11:19.8630000+00:00</lastModDate>
        
        <creator>Dilber Uzun Ozsahin</creator>
        
        <creator>Berna Uzun</creator>
        
        <creator>Musa Sani Musa</creator>
        
        <creator>Abdulkader Helwan</creator>
        
        <creator>Chidi Nwekwo Wilson</creator>
        
        <creator>Fatih Veysel Nur&#231;in</creator>
        
        <creator>Niyazi Sent&#252;rk</creator>
        
        <creator>Ilker Ozsahin</creator>
        
        <subject>Cancer treatment alternatives; multi criteria decision making; Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE); fuzzy PROMETHEE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The aim of this study is to apply multi-criteria decision-making theories to various types of cancer treatment techniques. Cancer is an abnormal cell that divides in an uncontrolled manner; it is a growth (tumor) that starts when alterations in genes cause one cell to grow and multiply rapidly. Eventually, these cells may metastasize to other tissues. The primary factors that influence a comprehensive cancer treatment plan include, but are not limited to, genetic factors, the patient’s general health condition, the specific characteristics of the cancer, and even the purpose of the treatment. Other essential factors include treatment duration, cost of treatment, comfort, side effects and percentage survival rate. The latter factors play an important role in the course of treatment and are therefore needed in order to evaluate the several treatment procedures. The outcome of applying decision-making theories to these treatment procedures will help the concerned parties, such as patients, oncologists, and hospital management. The most common cancer treatment techniques were evaluated and compared based on certain criteria using the fuzzy PROMETHEE decision-making theory.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_24-Evaluating_Cancer_Treatment_Alternatives.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a New Hybrid Cipher Algorithm using DNA and RC4</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081023</link>
        <id>10.14569/IJACSA.2017.081023</id>
        <doi>10.14569/IJACSA.2017.081023</doi>
        <lastModDate>2017-10-31T11:11:19.8000000+00:00</lastModDate>
        
        <creator>Rami k. Ahmed</creator>
        
        <creator>Imad J. Mohammed</creator>
        
        <subject>Rivest Cipher 4 (RC4); Secure Sockets Layer (SSL); Deoxyribonucleic acid (DNA); Rivest Cipher 4- Deoxyribonucleic acid-Algorithm (RC4-DNA-Alg)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This paper proposes a new hybrid security algorithm called RC4-DNA-Alg. It combines the symmetric stream cipher RC4 with a DNA-indexing algorithm to provide secure data hiding with high complexity inside a steganography framework. While RC4 represents one of the most widely used algorithms in network security protocols such as Secure Sockets Layer (SSL), DNA cryptography is considered a modern branch of cryptography that combines traditional cryptographic techniques with the power of genetic material. The performance of the proposed algorithm is evaluated based on three parameters: conditional entropy, randomness tests and encryption time. The results show that the hybrid cipher outperforms the native RC4 in terms of security and distortion.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_23-Developing_a_New_Hybrid_Cipher_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Implementation of the Balanced Scorecard  for a Higher Educational Institution using Business Intelligence Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081022</link>
        <id>10.14569/IJACSA.2017.081022</id>
        <doi>10.14569/IJACSA.2017.081022</doi>
        <lastModDate>2017-10-31T11:11:19.6600000+00:00</lastModDate>
        
        <creator>Alicia Valdez</creator>
        
        <creator>Griselda Cortes</creator>
        
        <creator>Sergio Castaneda</creator>
        
        <creator>Laura Vazquez</creator>
        
        <creator>Jose Medina</creator>
        
        <creator>Gerardo Haces</creator>
        
        <subject>Business intelligence; balanced scorecard; key performance indicators; BI Tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The objective of designing a strategy for an institution is to create more value and achieve its vision through clear and coherent strategies, identifying the institution’s current conditions, the sector in which it operates, the different types of competences it generates, and the market in which it performs. Creating such conditions requires the availability of strategic information to verify the current situation, to define the strategic line to follow according to internal and external factors, and in this way to decide which methods to use to implement the development of a strategy in the organization. This research project was developed in an institution of higher education, where the strategic processes were analyzed from different perspectives, i.e. financial, customers, internal processes, and training and learning, using business intelligence tools such as Excel Power BI, Power Pivot, Power Query and a relational database as the data repository. These tools helped provide agile and effective information for the creation of the balanced scorecard, involving all levels of the organization and its academic units, and operating key performance indicators (KPIs) for operational and strategic decisions. The results took the form of boards of indicators designed to be visualized in the final view of the software constructed with the previously described tools.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_22-Development_and_Implementation_of_the_Balanced_Scorecard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Algorithm for Optimizing TCM Encoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081021</link>
        <id>10.14569/IJACSA.2017.081021</id>
        <doi>10.14569/IJACSA.2017.081021</doi>
        <lastModDate>2017-10-31T11:11:19.5500000+00:00</lastModDate>
        
        <creator>Rekkal Kahina</creator>
        
        <creator>Abdesselam Bassou</creator>
        
        <subject>Trellis Coded Modulation; free distance; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This article describes a genetic algorithm for the optimization of Trellis Coded Modulation (TCM) schemes with a view to achieving higher performance over the multipath fading channel. The use of genetic algorithms is motivated by the fact that they are capable of performing global searches to retrieve an approximate solution to an optimization problem and, when the solution is unknown, of providing one within a reasonable time. TCM schemes are indeed optimized by the Rouane and Costello algorithm, but the latter has as a major disadvantage high requirements in both computation time and memory storage, which are further exacerbated by increases in the encoder rate, the number of memory elements and the depth of the trellis. We describe a genetic algorithm that is especially well suited to combinatorial optimization, in particular to the optimization of NP-complete problems, for which the computation time grows with the complexity of the problem in a non-polynomial way. Furthermore, this opens up the possibility of using the method to generate codes for channel characteristics for which no optimized codes are yet known. Simulation results are presented that show the evolutionary programming algorithm over several generations of populations which exhibit only a medium probability of exchanging genetic information.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_21-A_Genetic_Algorithm_for_Optimizing_TCM_Encoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Mobile Healthcare System based on WBSN and 5G</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081020</link>
        <id>10.14569/IJACSA.2017.081020</id>
        <doi>10.14569/IJACSA.2017.081020</doi>
        <lastModDate>2017-10-31T11:11:19.4570000+00:00</lastModDate>
        
        <creator>Farah Nasri</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <subject>IoT; multi-protocol; smart mobile healthcare system; WBSN; android; 5G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The intelligent use of resources enabled by the Internet of Things has raised the expectations of the technical as well as the consumer community. However, there are many challenges in designing an IoT healthcare system, such as security, authentication and data exchange. The IoT healthcare system is transforming the everyday physical objects and medical devices that surround us into an embedded smart healthcare system. Public healthcare has received increasing attention given the exponential growth of the human population and of medical expenses. It is well known that an effective healthcare monitoring system can detect abnormalities in health conditions in time and make diagnoses according to sensing (WBSN) data. This paper proposes a general architecture of a smart mobile IoT healthcare system for monitoring patients at risk using a smart phone and 5G, including the design of a multi-protocol unit for universal connectivity. Web and mobile applications are developed to meet the needs of patients, doctors, analysis laboratories and hospital services. The system advises and alerts the doctors/medical assistants in real time about changes in the vital parameters of the patients, such as body temperature, pulse, blood oxygen, etc., as well as about important changes in environmental parameters, in order to take preventive measures and save lives in critical care and emergency situations.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_20-Smart_Mobile_Healthcare_System_based_on_WBSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Ethical and Social Issues of Information Technology: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081019</link>
        <id>10.14569/IJACSA.2017.081019</id>
        <doi>10.14569/IJACSA.2017.081019</doi>
        <lastModDate>2017-10-31T11:11:19.3630000+00:00</lastModDate>
        
        <creator>Ehsan Sargolzaei</creator>
        
        <creator>Mohammad Nikbakht</creator>
        
        <subject>Information technology; ethical and social issues; unethical practices; students</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The present study is conducted among 283 students from the University of Zabol to identify the harms and the ethical and social issues in the field of information technology, and to classify the unethical practices that students engage in within this field. First, various important issues in the social and ethical areas of IT are discussed. Then the cases considered to be the most common unethical activities are selected for evaluation, and the participants ranked these activities according to the method presented in the questionnaire. These activities are examined and analyzed descriptively using SPSS; the reliability of the questionnaire is measured by Cronbach’s alpha coefficient, the Bartlett Test of Sphericity and the KMO index, and the validity of the results is verified using a T-test. The results are ranked from the practice that happens most frequently to the one that happens rarely or never. Finally, a set of strategies is presented for preventing ethical abuse in the field of information technology so that these challenges are reduced.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_19-The_Ethical_and_Social_Issues_of_Information_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defense Mechanisms against Machine Learning Modeling Attacks on Strong Physical Unclonable Functions for IOT Authentication: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081018</link>
        <id>10.14569/IJACSA.2017.081018</id>
        <doi>10.14569/IJACSA.2017.081018</doi>
        <lastModDate>2017-10-31T11:11:19.2700000+00:00</lastModDate>
        
        <creator>Nur Qamarina Mohd Noor</creator>
        
        <creator>Salwani Mohd Daud</creator>
        
        <creator>Noor Azurati Ahmad</creator>
        
        <creator>Nurazean Maarop</creator>
        
        <subject>IoT authentication; machine learning; modeling attack; Physical Unclonable Function; low area defense mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Security components in an IoT system are crucial because the devices within the system are exposed to numerous malicious attacks. Typical security components in an IoT system perform authentication, authorization, and message and content integrity checks. Authentication is normally performed using a classical authentication scheme based on a crypto module. However, the utilization of a crypto module for IoT authentication is not feasible because of the distributed nature of the IoT system, which complicates the message encryption and decryption process. Thus, the Physical Unclonable Function (PUF) is suggested as a replacement for the crypto module in IoT authentication because it utilizes only responses to a set of challenges, instead of cryptographic keys, to authenticate devices. A PUF can generate a large number of challenge-response pairs (CRPs), which is good for authentication because the unpredictability is high. However, with the emergence of machine learning modeling, the CRPs can now be predicted through machine learning algorithms. Various defense mechanisms have been proposed to counter machine learning modeling attacks (ML-MA). Although they were experimentally proven to increase resiliency against ML-MA, they caused the generated responses to be unstable and incurred high area overhead. Thus, there is a need to design a defense mechanism which is not only resistant to ML-MA but also produces reliable responses and reduces area overhead. This paper presents an analysis of defense mechanisms against ML-MA on strong PUFs for IoT authentication.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_18-Defense_Mechanisms_against_Machine_Learning_Modeling_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing: Empirical Studies in Higher Education A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081017</link>
        <id>10.14569/IJACSA.2017.081017</id>
        <doi>10.14569/IJACSA.2017.081017</doi>
        <lastModDate>2017-10-31T11:11:19.1300000+00:00</lastModDate>
        
        <creator>Abusfian Elgelany</creator>
        
        <creator>Weam Gaoud Alghabban</creator>
        
        <subject>Cloud computing; education system; e-learning; information and communication technology (ICT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The advent of cloud computing (CC) in recent years has attracted substantial interest from various institutions, especially higher education institutions, which wish to consider the advantages of its features. Many universities have migrated from traditional forms of teaching to electronic learning services, and they rely upon information and communication technology services. The usage of CC in educational environments provides many benefits, such as low-cost services for academics and students. The expanded use of CC, however, comes with significant adoption challenges. Understanding the position of higher education institutions with respect to CC adoption is an essential research area. This paper investigates the current state of CC adoption in the higher education sector in order to enrich research in this area of interest. Existing limitations and knowledge gaps in current empirical studies are identified. Moreover, suggested areas for further research are highlighted for the benefit of other researchers who are interested in this topic. This research encourages educational institutions, especially in higher education, to adopt cloud computing technology.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_17-Cloud_Computing_Empirical_Studies_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Detection with Mammogram Segmentation: A Qualitative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081016</link>
        <id>10.14569/IJACSA.2017.081016</id>
        <doi>10.14569/IJACSA.2017.081016</doi>
        <lastModDate>2017-10-31T11:11:19.0370000+00:00</lastModDate>
        
        <creator>Samir M. Badawy</creator>
        
        <creator>Alaa A. Hefnawy</creator>
        
        <creator>Hassan E. Zidan</creator>
        
        <creator>Mohammed T. GadAllah</creator>
        
        <subject>Image processing; double thresholding segmentation; breast cancer detection into mammograms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Mammography is specialized medical imaging for scanning the breasts. A mammography exam (a mammogram) helps in the early detection and diagnosis of breast cancer. Mammogram image segmentation is useful in detecting breast cancer regions and hence enables better diagnosis. In this paper, we applied an enhanced double-thresholding-based approach for mammogram image segmentation. Moreover, we added the borders of the final segmented image as a contour on the original image, helping physicians to easily detect breast cancer in different mammograms. The result is an enhanced qualitative detection of breast cancer in mammograms, helping physicians reach a better diagnosis. Our study can be generalized not only to x-ray-based mammograms but to all biomedical images, as an enhanced segmentation method for better visualization, detection and feature extraction, and thus better diagnosis. Moreover, this manual thresholding method has the advantage of reducing not only processing time but also processing storage requirements.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_16-Breast_Cancer_Detection_with_Mammogram_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung-Deep: A Computerized Tool for Detection of Lung Nodule Patterns using Deep Learning Algorithms Detection of Lung Nodules Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081015</link>
        <id>10.14569/IJACSA.2017.081015</id>
        <doi>10.14569/IJACSA.2017.081015</doi>
        <lastModDate>2017-10-31T11:11:18.9730000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <subject>Computer-aided diagnosis; lung nodules; patterns detection; deep learning; convolutional neural network; recurrent neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The detection of lung-related disease is a tedious and time-consuming task for radiologists. For this reason, automatic computer-aided diagnosis (CAD) systems were developed using digital CT scan images of the lungs. The detection of lung nodule patterns is an important step in the development of an automatic CAD system. Currently, the patterns of lung nodules are detected through domain-expert knowledge of image processing, and accuracy is also not up to the mark. Therefore, a computerized CAD tool is presented in this paper to identify six different patterns of lung nodules based on multi-layer deep learning algorithms (known as Lung-Deep), compared to state-of-the-art systems, without using technical image processing methods. A multilayer combination of a convolutional neural network (CNN), recurrent neural networks (RNNs) and softmax linear classifiers is integrated to develop Lung-Deep without any pre- or post-processing steps. The Lung-Deep system is tested against manually drawn radiologist contours on 1200 images containing 3250 nodules, using statistical measures. On this dataset, a higher sensitivity (SE) of 88%, a specificity (SP) of 80% and an area under the receiver operating curve (AUC) of 0.98 are obtained compared to other systems. Hence, the proposed Lung-Deep system outperforms others by integrating different layers of deep learning algorithms to detect six patterns of nodules.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_15-Lung_Deep_a_Computerized_Tool_for_Detection_of_Lung_Nodule_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Schema Matching Research using Database Schemas and Instances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081014</link>
        <id>10.14569/IJACSA.2017.081014</id>
        <doi>10.14569/IJACSA.2017.081014</doi>
        <lastModDate>2017-10-31T11:11:18.9100000+00:00</lastModDate>
        
        <creator>Ali A. Alwan</creator>
        
        <creator>Azlin Nordin</creator>
        
        <creator>Mogahed Alzeber</creator>
        
        <creator>Abedallah Zaid Abualkishik</creator>
        
        <subject>Data integration; instance-based schema matching; schema matching; semantic matching; syntactic matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Schema matching is considered one of the essential phases of data integration in database systems. The main aim of the schema matching process is to identify the correlation between schemas, which helps later in the data integration process. The main concern of schema matching is how to support the merging decision by providing the correspondence between attributes across syntactically and semantically heterogeneous data sources. There have been many attempts in the literature toward utilizing database instances to detect the correspondence between attributes during the schema matching process. Many instance-based approaches have been proposed aiming at improving the accuracy of the matching process. This paper sets out a classification of schema matching research in database systems exploiting database schemas and instances. We survey and analyze the schema matching techniques applied in the literature, highlighting the strengths and weaknesses of each technique. A deliberate discussion is reported highlighting the challenges and current research trends of schema matching in databases. We conclude the paper with some future work directions that help researchers explore and investigate current issues and challenges related to schema matching in contemporary databases.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_14-A_Survey_of_Schema_Matching_Research_using_Database_Schemas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature Fusion Approach for Hand Tools Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081013</link>
        <id>10.14569/IJACSA.2017.081013</id>
        <doi>10.14569/IJACSA.2017.081013</doi>
        <lastModDate>2017-10-31T11:11:18.8800000+00:00</lastModDate>
        
        <creator>Mostafa Ibrahim</creator>
        
        <creator>Alaa Ahmed</creator>
        
        <subject>Feature fusion; neural network classifier; invariant features; objects classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The most important functions in an object classification and recognition system are to segment the objects from the input image, extract common features from the objects, and classify these objects as members of one of the considered object classes. In this paper, we present a new approach for feature-based object classification. The main idea of the new approach is the fusion of two different feature vectors that are calculated using Fourier descriptors and moment invariants. The fused moment-Fourier feature vector is invariant to image scaling, rotation, and translation. The fused feature vector for a reference object is used for training a feed-forward neural network classifier. Classification of some hand tools is used to evaluate the performance of the proposed classification approach. The results show an appreciable increase in the classification accuracy rate with a considerable decrease in the classifier learning time.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_13-A_Feature_Fusion_Approach_for_Hand_Tools_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Urdu to Arabic Machine Translation Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081012</link>
        <id>10.14569/IJACSA.2017.081012</id>
        <doi>10.14569/IJACSA.2017.081012</doi>
        <lastModDate>2017-10-31T11:11:18.8630000+00:00</lastModDate>
        
        <creator>Maheen Akhter Ayesha</creator>
        
        <creator>Sahar Noor</creator>
        
        <creator>Muhammad Ramzan</creator>
        
        <creator>Hikmat Ullah Khan</creator>
        
        <subject>Natural language processing; machine translation; Urdu-Arabic Corpus; Google; Bing; Babylon; translator; BiLingual Evaluation Understudy (BLEU); National Institute of Standards and Technology (NIST); Metric for Evaluation of Translation with Explicit ORder (METEOR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Machine translation is an active research domain in the field of artificial intelligence. The relevant literature presents a number of machine translation approaches for the translation of different languages. Urdu is the national language of Pakistan, while Arabic is a major language in almost 20 different countries of the world, comprising almost 450 million people. To the best of our knowledge, there is no published research work presenting any method for machine translation from Urdu to Arabic; however, some online machine translation systems like Google, Bing and Babylon provide an Urdu to Arabic machine translation facility. In this paper, we compare the performance of these online machine translation systems. The input in Urdu is translated by the systems, and the output in Arabic is compared with ground truth data of Arabic reference sentences. The comparative analysis evaluates the systems by three performance evaluation measures: BLEU (BiLingual Evaluation Understudy), METEOR (Metric for Evaluation of Translation with Explicit ORdering) and NIST (National Institute of Standards and Technology), with the help of a standard corpus. The results show that the Google translator is far better than the Bing and Babylon translators. It outperforms, on average, Babylon by 28.55% and Bing by 15.74%.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_12-Evaluating_Urdu_to_Arabic_Machine_Translation_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Expert Systems in Identification and Overcoming of Dengue Fever</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081011</link>
        <id>10.14569/IJACSA.2017.081011</id>
        <doi>10.14569/IJACSA.2017.081011</doi>
        <lastModDate>2017-10-31T11:11:18.8170000+00:00</lastModDate>
        
        <creator>Nadeem Ahmed</creator>
        
        <creator>Muhammad Shoaib</creator>
        
        <creator>Adeed Ishaq</creator>
        
        <creator>Abdul Wahab</creator>
        
        <subject>Expert system; rule based; Dengue; health care; disease; fever</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This paper presents a systematic literature review of expert systems which are used for the identification and overcoming of Dengue fever. Dengue is a viral disease caused by a Flavivirus. The expansion of Dengue fever is due to uncontrolled population growth and urbanization without suitable water management. With rapid technological enhancement, we can identify and overcome Dengue fever by using expert systems. These expert systems require knowledge of the relevant problem and techniques to infer the result in order to make decisions. The literature review provides a comparison of the techniques, methodologies, limitations and advantages of different Dengue expert systems. These expert systems facilitate both doctors and patients in Dengue detection. Multiple risk factors can be eliminated by the detection of Dengue fever through expert systems at its early stages. Furthermore, we find that enhancement of the knowledge base improves the accuracy of expert systems.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_11-Role_of_Expert_System_in_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Tourism Architectural Model (Kingdom of Saudi Arabia: A Case Study)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081010</link>
        <id>10.14569/IJACSA.2017.081010</id>
        <doi>10.14569/IJACSA.2017.081010</doi>
        <lastModDate>2017-10-31T11:11:18.7230000+00:00</lastModDate>
        
        <creator>Ahmad H. Al-Omari</creator>
        
        <creator>AbdulSamad Al-Marghirani</creator>
        
        <subject>Smart tourism; smart systems; QR-Code; Saudi tourism; Saudi Vision 2030; S-Cicerone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The researchers have proposed and implemented a general application architecture model that complies with the demands of the Saudi tourism sector, to be used by tourists on their mobile devices. The architecture aims to improve tourism sector opportunities, facilitate tourists’ guidance in the holy and historical places, address the shortage of multilingual tourist guides, cut expenses and build up capacities. It can support KSA in becoming a tourist attraction in the region. The research project employs Quick Response (QR) codes and Information and Communication Technology (ICT), which are capable of converting smart phones into a tourist guide device. This new system can be considered a Smart Cicerone (S-Cicerone). The research project has a flexible design that allows tourists, guests and administrators to interact easily with the system in order to use its services and perform regular system updates and management. The system design is based on a component-based architecture including Tourist Layer services, Smart Tourism System Layer services and Administration Layer services. The components are divided into further services and smartly integrated to formulate the main application functions. This project is meant to be implemented in the Kingdom of Saudi Arabia as a pilot project and is also valid for implementation in any other country.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_10-Smart_Tourism_Architectural_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Informative Vector Selection in Active Learning using Divisive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081009</link>
        <id>10.14569/IJACSA.2017.081009</id>
        <doi>10.14569/IJACSA.2017.081009</doi>
        <lastModDate>2017-10-31T11:11:18.6300000+00:00</lastModDate>
        
        <creator>Zareen Sharf</creator>
        
        <creator>Maryam Razzak</creator>
        
        <subject>Active learning; machine learning; pre-clustering; semi-supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Traditional supervised machine learning techniques require training on large volumes of data to achieve efficiency and accuracy. As opposed to traditional systems, active learning systems minimize the size of the training data significantly because the selection of the data is based on a strong mathematical model. This helps in achieving the same accuracy levels as baseline techniques but with a considerably smaller training dataset. In this paper, the active learning approach has been implemented with a modification to the traditional active learning system based on the version space algorithm. The version space concept is replaced with the divisive analysis (DIANA) algorithm, and the core idea is to pre-cluster the instances before distributing them into training and testing data. The results obtained by our system have justified our reasoning that pre-clustering, instead of the traditional version space algorithm, can have a positive impact on the accuracy of the overall system’s classification. Two types of data have been tested: binary class and multi-class. The proposed system worked well on the multi-class data, but in the binary case the version space algorithm results were more accurate.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_9-The_Informative_Vector_Selection_in_Active_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Mobile GIS Property Mapping Application using Mobile Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081008</link>
        <id>10.14569/IJACSA.2017.081008</id>
        <doi>10.14569/IJACSA.2017.081008</doi>
        <lastModDate>2017-10-31T11:11:18.5200000+00:00</lastModDate>
        
        <creator>Victor Neene</creator>
        
        <creator>Monde Kabemba</creator>
        
        <subject>Leaflet; MapBox; mobile cloud computing; OpenstreetMaps; property mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>This study presents the development of a mobile GIS property mapping application for use by local authorities in developing countries. Attempts to develop property mapping applications, especially in developing countries, have mostly used GIS desktop productivity software tools that required the digitization of property maps by highly skilled GIS experts. In addition, these applications lacked real-time capture of attribute, spatial and image data of properties. A survey was conducted in the Kafue local authority to gather system requirements for the mobile application. After design and modeling, the developed application was trialed in the field, and 10 properties were mapped successfully. The software tools used in this study included Android Studio, the Leaflet mapping library, the Apache2 web server, PostgreSQL with PostGIS extensions, and the OpenstreetMaps and MapBox mobile cloud computing mapping services. The hardware tools used included a laptop computer and a mobile phone running the Android operating system. The study showed that mobile property mapping applications can be developed by tapping into the computing resources provided by mobile cloud computing. The benefits of this model include real-time, complete property data capture and the use of non-GIS experts in mapping projects.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_8-Development_of_a_Mobile_GIS_Property_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Action Recognition using Key-Frame Features of Depth Sequence and ELM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081007</link>
        <id>10.14569/IJACSA.2017.081007</id>
        <doi>10.14569/IJACSA.2017.081007</doi>
        <lastModDate>2017-10-31T11:11:18.4270000+00:00</lastModDate>
        
        <creator>Suolan Liu</creator>
        
        <creator>Hongyuan Wang</creator>
        
        <subject>Action recognition; features; key frame; temporal; extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Recently, the rapid development of inexpensive RGB-D sensors, like the Microsoft Kinect, has provided adequate information for human action recognition. In this paper, a recognition algorithm is presented in which the feature representation is generated by concatenating spatial features from the human contour of key frames and temporal features from the time difference information of a sequence. Then, an improved multi-hidden-layer extreme learning machine is introduced as the classifier. Finally, we test our scheme on the public UTD-MHAD dataset in terms of recognition accuracy and time consumption.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_7-Action_Recognition_using_Key_Frame_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Associated to Online Shopping at the BoP Community in Rural Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081006</link>
        <id>10.14569/IJACSA.2017.081006</id>
        <doi>10.14569/IJACSA.2017.081006</doi>
        <lastModDate>2017-10-31T11:11:18.3330000+00:00</lastModDate>
        
        <creator>Kazi Mozaher Hossein</creator>
        
        <creator>Fumihiko Yokota</creator>
        
        <creator>Mariko Nishikitani</creator>
        
        <creator>Rafiqul Islam</creator>
        
        <subject>Online shopping; BoP; demographic and behavioral factors; Bangladesh</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Online shopping is becoming popular even in the rural areas of developing countries. However, little research has been conducted to identify the factors associated with online shopping by poor villagers, even though people living at the bottom of the economic pyramid (BoP) have an aggregate purchasing power that constitutes a huge market, and online shopping has the potential to reduce the BoP penalty by removing unnecessary middlemen from the supply chain. In this research, we conducted a field survey of 600 households in the western part of rural Bangladesh to find out the current status of online shopping use by BoP people and the demographic and behavioral factors associated with online shopping. The chi-square test of association and multivariate logistic regression were performed to analyze the data. The results show that cell phone use, computer use, social media use, and mobile money transfer use have a significant relationship with online shopping use in the BoP community.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_6-Factors_Associated_to_Online_Shopping_at_the_BoP_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Smartphones Security: Software Vulnerabilities, Malware, and Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081005</link>
        <id>10.14569/IJACSA.2017.081005</id>
        <doi>10.14569/IJACSA.2017.081005</doi>
        <lastModDate>2017-10-31T11:11:18.2400000+00:00</lastModDate>
        
        <creator>Milad Taleby Ahvanooey</creator>
        
        <creator>Qianmu Li</creator>
        
        <creator>Mahdi Rabbani</creator>
        
        <creator>Ahmed Raza Rajput</creator>
        
        <subject>Mobile security; malware; adware; malicious attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Nowadays, the use of smartphones and their applications has become rapidly popular in people’s daily life. Over the last decade, the availability of mobile money services such as mobile-payment systems and app markets has significantly increased due to the different forms of apps and connectivity provided by mobile devices, such as 3G, 4G, GPRS, and Wi-Fi. Following the same trend, the number of vulnerabilities targeting these services and communication networks has risen as well, and smartphones have become ideal target devices for malicious programmers. With the increasing number of vulnerabilities and attacks, there has been a corresponding rise in the security countermeasures presented by researchers. For these reasons, security is one of the most important issues in mobile payment systems. In this survey, we aim to provide a comprehensive and structured overview of the research on security solutions for smartphone devices. This survey reviews the state of the art on security solutions, threats, and vulnerabilities during the period 2011-2017, focusing on software attacks, such as those targeting smartphone applications. We outline countermeasures aimed at protecting smartphones against these groups of attacks, based on detection rules, data collections and operating systems, with a particular focus on open source applications. With this categorization, we want to provide an easy way for users and researchers to improve their knowledge of the security and privacy of smartphones.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_5-A_Survey_on_Smartphones_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors Plays a Vital Role in ERP Implementation in Developing Countries: An Exploratory Study in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081004</link>
        <id>10.14569/IJACSA.2017.081004</id>
        <doi>10.14569/IJACSA.2017.081004</doi>
        <lastModDate>2017-10-31T11:11:18.1470000+00:00</lastModDate>
        
        <creator>Naeem Ahmed</creator>
        
        <creator>A. A. Shaikh</creator>
        
        <creator>Muhammad Sarim</creator>
        
        <subject>Information System (IS); Enterprise Resource Planning (ERP) System; ERP implementation; CSFs; Pakistani Small and Medium Sized Enterprises (SMEs); Statistical Package for Social Sciences (SPSS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The capability of an Enterprise Resource Planning (ERP) system to integrate all the business functions an organization needs into a single system with a shared database, efficiently and effectively, has persuaded organizations to adopt such systems. In the enterprise environment, successful ERP implementation plays a vital role in organizational efficiency. In this respect, critical success factors (CSFs) have been identified as essential for successful ERP implementation. The purpose of this paper is to identify and analyze the CSFs impacting ERP implementation success in Pakistani Small and Medium Sized Enterprises (SMEs). This paper will help Pakistani SMEs obtain better results from ERP implementation by focusing on the CSFs relevant to them.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_4-Critical_Success_Factors_Plays_a_Vital_Role_in_ERP_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Gray Scale Images for Face Detection under Unstable Lighting Condition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081003</link>
        <id>10.14569/IJACSA.2017.081003</id>
        <doi>10.14569/IJACSA.2017.081003</doi>
        <lastModDate>2017-10-31T11:11:18.0530000+00:00</lastModDate>
        
        <creator>Mathias A. ONABID</creator>
        
        <creator>DJIMELI TSAMENE Charly</creator>
        
        <subject>Enhancement; AdaBoost; Haar like features; luma; peak signal to noise ratio (PSNR); Adaptive Histogram Equalisation (AHE); Contrast Limited Adaptive Histogram Equalisation (CLAHE); Global Histogram Equalisation (GHE); Gamma transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>Facial expression plays a vital role in non-verbal communication between human beings. In a quarter of a second, the brain can determine the state of mind and behaviour of a person using different traits in a stable lighting environment. This is not the case in real applications such as online learning or driver monitoring systems, where lighting is not stable. It is therefore important to study and improve the performance of image enhancement techniques on face detection under varying lighting conditions in the spatial domain. The study is based on gray scale images. Nine gray scale standards, based on colour spaces that separate luminance from the other colour components, are used. The enhancement techniques compared are: Global Histogram Equalisation (GHE), Adaptive Histogram Equalisation (AHE) and Contrast Limited Adaptive Histogram Equalisation (CLAHE). Trials on the Labeled Faces in the Wild (LFW) dataset using the Viola-Jones Haar-like features showed CLAHE to outperform GHE and AHE in face detection, though the results appeared poor under low lighting conditions. This motivated the need to stabilise lighting before applying histogram equalisation techniques. The novelty in this research is that we have been able to apply the Gamma transform as a lighting stabiliser on the gray scale standard before enhancement. Comparing performance after lighting stabilisation showed AHE to be the most appropriate for face detection, as it produced a detection rate of 99.31% and a relatively high false-positive rate (23.89%).</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_3-Enhancing_Gray_Scale_Images_for_Face_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Mobile Learning Framework based on Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081002</link>
        <id>10.14569/IJACSA.2017.081002</id>
        <doi>10.14569/IJACSA.2017.081002</doi>
        <lastModDate>2017-10-31T11:11:17.9730000+00:00</lastModDate>
        
        <creator>Mohammad Al Shehri</creator>
        
        <subject>Trusted Platform Module (TPM); Communication Module (CM); anonymity; non-repudiation; personalized; BAN logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The rising need for highly advanced digital learning, coupled with the growing penetration of smartphones, has contributed to the growth of mobile learning. According to Ericsson’s forecast, 80% of the world’s population (6.4 billion people) will be smartphone users by 2021. However, existing mobile learning frameworks have limitations that need to be addressed for mass adoption, including device compatibility and security. In this paper we propose a Secure Mobile Learning Framework (SMLF) based on a TPM in the cloud. SMLF is supported by a three-layer Communication Module (CM) which helps ensure end-to-end security. In addition, we propose a procedure for personalizing the mobile learning applications of students and instructors, as well as a secure mobile learning protocol within the SMLF framework. The proposed SMLF ensures mutual authentication of all the stakeholders, privacy and integrity of messages, anonymity of the student from the instructor, and non-repudiation, and is free from known attacks. Our proposed SMLF framework has been successfully verified using BAN logic.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_2-A_Secure_Mobile_Learning_Framework_based_on_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Gamification in Higher Education: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.081001</link>
        <id>10.14569/IJACSA.2017.081001</id>
        <doi>10.14569/IJACSA.2017.081001</doi>
        <lastModDate>2017-10-31T11:11:17.8500000+00:00</lastModDate>
        
        <creator>Istv&#225;n Varannai</creator>
        
        <creator>Peter Sasvari</creator>
        
        <creator>Anna Urbanovics</creator>
        
        <subject>Gamification; education; Hungary; technology acceptance model; university student</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(10), 2017</description>
        <description>The use of gamification in higher education has increased considerably over the past decades. An empirical study was conducted in Hungary with two groups of students to investigate their behaviour while interacting with Kahoot! The results were analyzed based on the technology acceptance model. They indicate that positive attitude, good experience and ease of availability contributed to improved student performance, which strengthened the intention to use the application. In addition, perceived utility was positively influenced by ease of use.</description>
        <description>http://thesai.org/Downloads/Volume8No10/Paper_1-The_Use_of_Gamification_in_Higher_Education_an_Empirical_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Approach for Image Compression based on Wavelet Transform and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080959</link>
        <id>10.14569/IJACSA.2017.080959</id>
        <doi>10.14569/IJACSA.2017.080959</doi>
        <lastModDate>2017-10-03T11:07:30.6130000+00:00</lastModDate>
        
        <creator>Houda Chakib</creator>
        
        <creator>Brahim Minaoui</creator>
        
        <creator>Mohamed Fakir</creator>
        
        <creator>Abderrahim Salhi</creator>
        
        <creator>Imad Badi</creator>
        
        <subject>Haar wavelet transform; biorthogonal wavelet; backpropagation neural network; scaled conjugate gradient algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Over the last few years, wavelet theory has been used with great success in a wide range of applications such as signal de-noising and image compression. An ideal image compression system must yield a high-quality compressed image with a high compression ratio. This paper attempts to find the most useful wavelet function for compressing an image among the existing members of the wavelet families. Our idea is that a backpropagation neural network is trained to select the suitable wavelet function between two families, orthogonal (Haar) and biorthogonal (bior4.4), to compress an image efficiently and accurately with an ideal and optimal compression ratio. The simulation results indicate that the proposed technique can achieve good compressed images in terms of peak signal-to-noise ratio (PSNR) and compression ratio (t) in comparison with random selection of the mother wavelet.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_59-A_Proposed_Approach_for_Image_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Device Pairing Methods: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080958</link>
        <id>10.14569/IJACSA.2017.080958</id>
        <doi>10.14569/IJACSA.2017.080958</doi>
        <lastModDate>2017-10-03T11:07:30.5370000+00:00</lastModDate>
        
        <creator>Aatifah Noureen</creator>
        
        <creator>Umar Shoaib</creator>
        
        <creator>Muhammad Shahzad Sarfraz</creator>
        
        <subject>Device pairing methods; binding method; OOB channel; cryptographic protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The procedure of setting up a secure communication channel among unfamiliar human-operated devices is called “secure device pairing”. Secure binding of electronic devices is a challenging task because there are no pre-established security measures and no commonly trusted infrastructure, which opens the door to many security threats and attacks, e.g., man-in-the-middle and evil twin attacks. In order to mitigate these attacks, different techniques have been proposed, most of which require some level of user participation to reduce attacks during the device pairing process. A comparative and comprehensive evaluation of prominent secure device pairing methods is described here. The main motive of this research is to summarize the cryptographic protocols used in the pairing process and compare the existing methods for securing the pairing of devices. This will help in selecting the best method according to the situation, rather than simply choosing the most popular or easiest method in every circumstance.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_58-Secure_Device_Pairing_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defense against SYN Flood Attack using LPTR-PSO: A Three Phased Scheduling Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080957</link>
        <id>10.14569/IJACSA.2017.080957</id>
        <doi>10.14569/IJACSA.2017.080957</doi>
        <lastModDate>2017-09-29T18:17:36.2900000+00:00</lastModDate>
        
        <creator>Zonayed Ahmed</creator>
        
        <creator>Maliha Mahbub</creator>
        
        <creator>Sultana Jahan Soheli</creator>
        
        <subject>SYN flood; LPTR-PSO; three-phased algorithm; legal request; buffer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Security has become a critical factor in today’s computation systems. The security threats that put our confidential information at risk can come in the form of seemingly legitimate client requests to a server. When illegitimate requests consume all the connections a server can handle, no valid new connections can be made. This scenario, known as a SYN flooding attack, can be controlled through a fair scheduling algorithm that provides more opportunity to legal requests. This paper proposes a detailed scheduling approach named Largest Processing Time Rejection-Particle Swarm Optimization (LPTR-PSO) that defends the server against SYN flood attack scenarios of varying intensity through a three-phased algorithm. This novel approach considers the number of half-open connections in the server buffer and chooses a phase accordingly. The simulation results show that the proposed defense strategy improves the performance of a system under attack in terms of the memory occupancy of legal requests and the residence time of attack requests.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_57-Defense_against_SYN_Flood_Attack_using_LPTR_PSO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Framework for Generating Random Objective Exams using Paragraphs of Electronic Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080956</link>
        <id>10.14569/IJACSA.2017.080956</id>
        <doi>10.14569/IJACSA.2017.080956</doi>
        <lastModDate>2017-09-29T18:17:36.2730000+00:00</lastModDate>
        
        <creator>Elsaeed E. AbdElrazek</creator>
        
        <subject>Objective exams (OE); Applications Artificial Intelligence (AAI); Random Objective Exams Generation (ROEG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Objective exams (OE) play a major role in educational assessment as well as in electronic learning. The main problem with the traditional system of exams is the low quality of questions caused by human factors; for example, the traditional method of developing an exam covers only a narrow scope of curriculum topics and does nothing to separate the teaching process from the examination process. In this study we present a framework that generates three types of objective exam questions (multiple-choice questions (MCQ), true/false questions (T/FQ), and completion questions (CQ)) from paragraphs of an electronic course. The proposed framework consists of two main stages: it uses natural language processing (NLP) techniques to generate the three types of questions, and an exam maker (EM) that uses the generated questions to produce the objective exams. The proposed system was evaluated by the extent of its ability to generate multiple objective questions. The questions generated by the proposed system were presented to three arbitrators specializing in the field of computer networks, who expressed an opinion on their relevance to the e-course and the accuracy of their linguistic and scientific formulation. The results of the study showed an increase in the accuracy and number of the objective exams generated by the proposed system compared to the accuracy and number of the exams created by the traditional system, which demonstrates the efficiency of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_56-A_Proposed_Framework_for_Generating_Random_Objective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Human Emotions from Electroencephalogram (EEG) Signal using Deep Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080955</link>
        <id>10.14569/IJACSA.2017.080955</id>
        <doi>10.14569/IJACSA.2017.080955</doi>
        <lastModDate>2017-09-29T18:17:36.2570000+00:00</lastModDate>
        
        <creator>Abeer Al-Nafjan</creator>
        
        <creator>Manar Hosny</creator>
        
        <creator>Areej Al-Wabil</creator>
        
        <creator>Yousef Al-Ohali</creator>
        
        <subject>Electroencephalogram (EEG); Brain-Computer Interface (BCI); emotion recognition; affective state; Deep Neural Network (DNN); DEAP dataset</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Estimation of human emotions from Electroencephalogram (EEG) signals plays a vital role in developing robust Brain-Computer Interface (BCI) systems. In our research, we used a Deep Neural Network (DNN) to address EEG-based emotion recognition, motivated by the recent advances in accuracy and efficiency from applying deep learning techniques in pattern recognition and classification applications. We adapted a DNN to identify the human emotions of a given EEG signal (DEAP dataset) from power spectral density (PSD) and frontal asymmetry features. The proposed approach is compared to state-of-the-art emotion detection systems on the same dataset. Results show how EEG-based emotion recognition can greatly benefit from using DNNs, especially when a large amount of training data is available.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_55-Classification_of_Human_Emotions_from_Electroencephalogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DBpedia based Ontological Concepts Driven Information Extraction from Unstructured Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080954</link>
        <id>10.14569/IJACSA.2017.080954</id>
        <doi>10.14569/IJACSA.2017.080954</doi>
        <lastModDate>2017-09-29T18:17:36.2270000+00:00</lastModDate>
        
        <creator>Adeel Ahmed</creator>
        
        <creator>Syed Saif ur Rahman</creator>
        
        <subject>Ontology-based information extraction; semantic web; named entity recognition; entity linking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>In this paper, a knowledge-base concept-driven named entity recognition (NER) approach is presented. The technique is used for extracting information from news articles and linking it to background concepts in a knowledge base. The work specifically focuses on extracting entity mentions from unstructured articles. The extraction of entity mentions is based on the existing concepts in the DBpedia ontology, which represents the knowledge associated with the concepts present in the Wikipedia knowledge base. A collection of Wikipedia concepts has been extracted and developed through the structured DBpedia ontology. For processing of unstructured text, Dawn news articles were scraped and preprocessed, and a corpus was built from them. The proposed knowledge-base-driven system shows that, given an article, it identifies the entity mentions in the text and automatically links them to the concepts representing their respective pages on Wikipedia. The system is evaluated on three test collections of news articles in the politics, sports and entertainment domains, and the experimental results for entity mentions are reported as precision, recall and f-measure, where the precision of extracting the relevant identified entity mentions yields the best results, with little variation in recall and f-measure. Additionally, facts associated with the extracted entity mentions, both in the form of sentences and of Resource Description Framework (RDF) triples, are presented so as to enhance the user’s understanding of the related facts in the article.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_54-DBpedia_based_Ontological_Concepts_Driven_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Person re-ID while Crossing Different Cameras: Combination of Salient-Gaussian Weighted BossaNova and Fisher Vector Encodings</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080953</link>
        <id>10.14569/IJACSA.2017.080953</id>
        <doi>10.14569/IJACSA.2017.080953</doi>
        <lastModDate>2017-09-29T18:17:36.1970000+00:00</lastModDate>
        
        <creator>Mahmoud Mejdoub</creator>
        
        <creator>Salma Ksibi</creator>
        
        <creator>Chokri Ben Amar</creator>
        
        <creator>Mohamed Koubaa</creator>
        
        <subject>Person re-identification; histogram encoding; fisher vector; BossaNova; Convolutional Neural Network (CNN); salient weight; Gaussian weight </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Person re-identification (re-ID) is a challenging task in the camera surveillance field, since it addresses the problem of re-identifying people across multiple non-overlapping cameras. Most existing approaches have concentrated on: 1) achieving a robust and effective feature representation; and 2) enforcing discriminative metric learning to predict whether two images represent the same identity. In this context, we present a new approach for person re-ID built upon multi-level descriptors. This is achieved by combining three complementary representations: the salient-Gaussian Fisher Vector (SGFV) encoding method, the salient-Gaussian BossaNova (SGBN) histogram encoding method and deep Convolutional Neural Network (CNN) features. The first two methods adapt the histogram encoding framework to the person re-ID task by integrating the pedestrian saliency map and the spatial location information into the histogram encoding process. On the one hand, human saliency is reliable and distinctive in the person re-ID task, since it can model the uniqueness of the identity. On the other hand, localizing a person in the image can effectively discard noisy background information. Finally, one of the most advanced metric learning methods in person re-ID, Cross-view Quadratic Discriminant Analysis (XQDA), is applied on top of the resulting description. The proposed method yields promising person re-ID results on two challenging image-based person re-ID benchmarks: CUHK03 and Market-1501.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_53-Person_re-ID_while_Crossing_Different_Cameras.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Patterns and General Video Game Level Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080952</link>
        <id>10.14569/IJACSA.2017.080952</id>
        <doi>10.14569/IJACSA.2017.080952</doi>
        <lastModDate>2017-09-29T18:17:36.1630000+00:00</lastModDate>
        
        <creator>Mudassar Sharif</creator>
        
        <creator>Adeel Zafar</creator>
        
        <creator>Uzair Muhammad</creator>
        
        <subject>General video game level generation; rhythmic analysis; procedural content generation; design pattern; search based level generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Design patterns have become a vital solution for a number of problems in software engineering. In this paper, we have performed a rhythmic analysis of the General Video Game Level Generation (GVG-LG) framework and have discerned 23 common design patterns. In addition, we have segregated the identified patterns into four unique classes; the categorization is based on the usage of the identified patterns in game levels. Our future aim is to employ these patterns as input for a search-based level generator.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_52-Design_Patterns_and_General_Video_Game_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Simulation of the Effects of Social Relation and Emotion on Decision Making in Emergency Evacuation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080951</link>
        <id>10.14569/IJACSA.2017.080951</id>
        <doi>10.14569/IJACSA.2017.080951</doi>
        <lastModDate>2017-09-29T18:17:36.1330000+00:00</lastModDate>
        
        <creator>Xuan Hien Ta</creator>
        
        <creator>Dominique Longin</creator>
        
        <creator>Benoit Gaudou</creator>
        
        <creator>Tuong Vinh Ho</creator>
        
        <creator>Manh Hung Nguyen</creator>
        
        <subject>Agent-based simulation; emotion; social relation; emergency evacuation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Applying agent-based modeling to simulate evacuation in emergency situations is recognized by many research works as an efficient tool for understanding the behavior and decision making of occupants in these situations. In this paper, we present our work aimed at modeling the influence of the emotion and social relationships of occupants on their behavior and decision making in emergencies such as fire disasters. Firstly, we propose a formalization of occupants’ behavior at the group level in emergency situations based on social theory. This formalization details the possible behaviors and actions of people in emergency evacuations, taking into account occupants’ social relationships, and will facilitate the construction of simulations for emergency evacuation. Secondly, we model the influence of emotion and group behavior on the decision making of occupants in crisis situations. Thirdly, we develop an agent-based simulation that takes into account the effect of group and emotion on the decision making of occupants in emergency situations. We conducted a set of experiments allowing us to observe and analyze the behavior of people in emergency evacuation.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_51-Modeling_and_Simulation_of_the_Effects_of_Social_Relation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Camera Calibration for 3D Leaf-Image Reconstruction using Singular Value Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080950</link>
        <id>10.14569/IJACSA.2017.080950</id>
        <doi>10.14569/IJACSA.2017.080950</doi>
        <lastModDate>2017-09-29T18:17:36.1030000+00:00</lastModDate>
        
        <creator>Hermawan Syahputra</creator>
        
        <creator>Reza Pulungan</creator>
        
        <subject>Camera calibration; image reconstruction; 3D leaf images; singular value decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Features of leaves can be captured more precisely using 3D imaging. A 3D leaf image is reconstructed from two 2D images taken with stereo cameras. Reconstructing 3D from 2D images is not straightforward; one of the important steps to improve accuracy is to perform camera calibration correctly. By calibrating the camera precisely, it is possible to project distance measurements in the real world onto the image plane. To maintain the accuracy of the reconstruction, the camera must also use correct parameter settings. This paper aims at designing a method to calibrate a camera to obtain its parameters and then using the method in the reconstruction of 3D images. Camera calibration is performed using region-based correlation methods. Several steps are necessary: first, the world coordinates and the 2D image coordinates are measured; extraction of the intrinsic and extrinsic camera parameters is then performed using singular value decomposition. Using the available disparity image and the parameters obtained through camera calibration, 3D leaf-image reconstruction can finally be performed. Furthermore, the results of the experimental depth-map reconstruction using the intrinsic parameters of the camera show a rough surface, so a smoothing process is necessary to improve the depth map.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_50-Camera_Calibration_for_3D_Leaf_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A P System for Solving All-Solutions of TSP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080949</link>
        <id>10.14569/IJACSA.2017.080949</id>
        <doi>10.14569/IJACSA.2017.080949</doi>
        <lastModDate>2017-09-29T18:17:36.0700000+00:00</lastModDate>
        
        <creator>Ping Guo</creator>
        
        <creator>Junqi Xiang</creator>
        
        <creator>Jingya Xie</creator>
        
        <creator>Jinhang Zheng</creator>
        
        <subject>P system, TSP, membrane computing, natural computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>A P system is a parallel computing system based on a membrane computing model. Since the calculation process of a P system has the characteristics of maximal parallelism and non-determinism, P systems have been used to solve NP-hard problems in polynomial time. This paper designs a P system for solving the TSP. This P system can not only determine whether a TSP instance has a solution, but also give all solutions when the TSP is solved. Finally, an example is given to illustrate the feasibility and effectiveness of the P system designed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_49-A_P_System_for_Solving_All_Solutions_of_TSP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Mamdani and Sugeno Fuzzy Models for Quality of Web Services Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080948</link>
        <id>10.14569/IJACSA.2017.080948</id>
        <doi>10.14569/IJACSA.2017.080948</doi>
        <lastModDate>2017-09-29T18:17:36.0400000+00:00</lastModDate>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <creator>Izzatdin Abdul Aziz</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <creator>Lukman AB Rahim</creator>
        
        <creator>Joseph Mabor Agany Manyiel</creator>
        
        <subject>Quality of web service (QoWS) monitoring; fuzzy inference system; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This paper presents a comparative study of fuzzy inference systems (FIS), with respect to Mamdani and Sugeno FISs, to show the accuracy and precision of quality of web service (QoWS) compliance monitoring. We used these two types of FIS to design the QoWS compliance monitoring model. A clustering validity index is used to optimize the number of clusters in both models. Both models are then constructed based on the Fuzzy C-Means (FCM) clustering algorithm. Simulation results with a Mamdani model, a Sugeno model, and a crisp-based benchmark model are presented. We consider different levels of noise (to represent uncertainties) in the simulations for comparison and to analyze the performance of the models when applied to QoWS compliance monitoring. The results show that the Sugeno FIS outperforms the Mamdani FIS in terms of accuracy and precision, producing better total error, error percentage, precision, mean squared error, and root mean squared error measurements. The advantage of using a fuzzy-based model is also verified against the benchmark model.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_48-A_Comparative_Study_of_Mamdani_and_Sugeno.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Knowledge-based Topic Modeling Approach for Automatic Topic Labeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080947</link>
        <id>10.14569/IJACSA.2017.080947</id>
        <doi>10.14569/IJACSA.2017.080947</doi>
        <lastModDate>2017-09-29T18:17:36.0230000+00:00</lastModDate>
        
        <creator>Mehdi Allahyari</creator>
        
        <creator>Seyedamin Pouriyeh</creator>
        
        <creator>Krys Kochut</creator>
        
        <creator>Hamid Reza Arabnia</creator>
        
        <subject>Topic modeling; topic labeling; statistical learning; ontologies; linked open data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Probabilistic topic models, which aim to discover latent topics in text corpora, define each document as a multinomial distribution over topics and each topic as a multinomial distribution over words. Although humans can infer a proper label for each topic by looking at its top representative words, this is not feasible for machines. Automatic topic labeling techniques try to address this problem; their ultimate goal is to assign interpretable labels to the learned topics. In this paper, we take ontology concepts into consideration, instead of words alone, to improve the quality of the generated labels for each topic. Our work differs from previous efforts in this area, where topics are usually represented by a batch of words selected from the topics. The highlights of our approach include: 1) we incorporate ontology concepts with statistical topic modeling in a unified framework, where each topic is a multinomial probability distribution over concepts and each concept is represented as a distribution over words; and 2) we propose a topic labeling model based on the meaning of the ontology concepts included in the learned topics. The best topic labels are selected with respect to the semantic similarity of the concepts and their ontological categorizations. We demonstrate the effectiveness of considering ontological concepts as richer aspects between topics and words through comprehensive experiments on two different data sets. In other words, representing topics via ontological concepts is an effective way to generate descriptive and representative labels for the discovered topics.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_47-A_Knowledge_based_Topic_Modeling_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Distribution Aware Classification Algorithm based on K-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080946</link>
        <id>10.14569/IJACSA.2017.080946</id>
        <doi>10.14569/IJACSA.2017.080946</doi>
        <lastModDate>2017-09-29T18:17:35.9930000+00:00</lastModDate>
        
        <creator>Tamer Tulgar</creator>
        
        <creator>Ali Haydar</creator>
        
        <creator>Ibrahim Ersan</creator>
        
        <subject>Classification; k-means; variance effect; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Making data-driven decisions based on precise data analysis is widely required by different businesses, and many different data mining strategies exist for this purpose. Nevertheless, existing strategies need attention from researchers so that they can be adapted to modern data analysis needs. One of the popular algorithms is K-Means. This paper proposes a novel improvement to the classical K-Means classification algorithm. It is known that data characteristics such as data distribution, high dimensionality, size, and sparseness have a great impact on the success of K-Means clustering, which directly affects the accuracy of classification. In this study, the K-Means algorithm was modified to remedy the algorithm’s classification accuracy degradation, which is observed when the data distribution is not suitable to be clustered by data centroids, where each centroid is represented by a single mean. Specifically, this paper proposes to intelligently include the effect of variance based on the detected distribution nature of the data. To assess the performance improvement of the proposed method, several experiments were carried out using different real datasets. The presented results, achieved after extensive experiments, show that the proposed algorithm improves the classification accuracy of K-Means. The achieved performance was also compared against several recent classification studies based on different classification schemes.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_46-Data_Distribution_Aware_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Forecasting Scheme for Financial Time-Series Data using Neural Network and Statistical Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080945</link>
        <id>10.14569/IJACSA.2017.080945</id>
        <doi>10.14569/IJACSA.2017.080945</doi>
        <lastModDate>2017-09-29T18:17:35.9770000+00:00</lastModDate>
        
        <creator>Mergani Khairalla</creator>
        
        <creator>Xu-Ning</creator>
        
        <creator>Nashat T. AL-Jallad</creator>
        
        <subject>Financial Time Series; hybrid Model; Additive Combination; regression Combination; Exchange Rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Predicting time series is currently an interesting research area in temporal data mining. Financial Time Series (FTS) prediction is one of the most challenging tasks, because the data lack linearity and stationarity and exhibit noise, a high degree of uncertainty, and hidden relations. Several single models proposed using both statistical and data mining approaches are unable to deal with these issues. The main objective of this study is to propose a hybrid model, using additive and linear regression methods to combine linear and non-linear models. Three models are investigated, namely ARIMA, EXP, and ANN. First, these models are fed with an exchange rate data set (SDG-EURO). Then, the numerical outcome of each model is examined against benchmark models and a set of hybrid models from the related literature. The results showed the superiority of the hybrid model over all other investigated models, with a 0.82% MAPE error measure for accuracy. Based on the results of this study, we conclude that further experiments are desirable to estimate the weights for an accurate combination method, and that more models should be surveyed in the area of time series prediction.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_45-Hybrid_Forecasting_Scheme_for_Time_Series_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Security Approach using Game Theory to Detect DoS Attacks In IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080944</link>
        <id>10.14569/IJACSA.2017.080944</id>
        <doi>10.14569/IJACSA.2017.080944</doi>
        <lastModDate>2017-09-29T18:17:35.9470000+00:00</lastModDate>
        
        <creator>Farzaneh Yazdankhah</creator>
        
        <creator>Ali Reza Honarvar</creator>
        
        <subject>Internet of Things (IoT); Network security; Attack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The Internet of Things (IoT) is a new concept in the world of Information and Communication Technology (ICT). The structure of this global network is highly interconnected and presents a new category of challenges from the security, trust, and privacy perspectives. Data transfer problems caused by Denial-of-Service (DoS) attacks easily occur in this network and lead to service slowdown or system crashes. At present, traditional techniques are widely used to confront denial-of-service attacks in the Internet of Things, while smart techniques have unfortunately been less studied and exploited. In this research, a security solution based on game theory is proposed to detect denial-of-service attacks and prevent problems in the services of the Internet of Things network. To scrutinize the performance of the suggested method, it was simulated using the NS2 simulator. The simulation results confirmed that the game-theory strategies in the proposed method outperformed the existing methods. Furthermore, to verify the acquired findings, a comparative evaluation was carried out according to three factors: operational throughput, latency, and energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_44-An_Intelligent_Security_Approach_using_Game_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Swarm Optimization Modeling for Waste Collection Vehicle Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080943</link>
        <id>10.14569/IJACSA.2017.080943</id>
        <doi>10.14569/IJACSA.2017.080943</doi>
        <lastModDate>2017-09-29T18:17:35.9300000+00:00</lastModDate>
        
        <creator>ELGAREJ Mouhcine</creator>
        
        <creator>MANSOURI Khalifa</creator>
        
        <creator>YOUSSFI Mohamed</creator>
        
        <creator>BENMOUSSA Nezha</creator>
        
        <creator>EL FAZAZI Hanae</creator>
        
        <subject>Vehicle routing system; ant colony optimization; multi-agent system; garbage collection system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>In this paper, we consider a complex garbage collection problem, where the residents of a particular area dispose of recyclable garbage, which is collected and managed using a fleet of trucks with different weight capacities and volumes. Each tour is characterized by a set of constraints, such as the maximum tour duration (in terms of distance and timing) consumed to collect waste from several locations. This problem is modeled as a garbage collection vehicle routing problem, which aims to minimize the cost of traveled routes (minimizing the distance traveled) by finding optimal routes for vehicles such that all waste bins are emptied and the waste is driven to the disposal locations. We propose a distributed technique based on the Ant Colony System algorithm to find optimal routes that help vehicles visit all the waste bins, using interactive agents modeled on the behavior of real ants. The designed solution creates a set of layers to control and manage waste collection; each layer is handled by an intelligent agent characterized by a specific behavior. In this architecture, a set of behaviors has been designed to optimize routes, control the real-time capacity of vehicles, and, finally, manage the traffic of messages between the different agents to select the best solutions to assign to each vehicle. The developed solution performs well compared to the traditional solution on small cases.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_43-Distributed_Swarm_Optimization_Modeling_for_Waste_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing the Administration of National Examinations using Mobile Cloud Technologies: A Case of Malawi National Examinations Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080942</link>
        <id>10.14569/IJACSA.2017.080942</id>
        <doi>10.14569/IJACSA.2017.080942</doi>
        <lastModDate>2017-09-29T18:17:35.9000000+00:00</lastModDate>
        
        <creator>Lovemore Solomon</creator>
        
        <creator>Jackson Phiri</creator>
        
        <subject>National examinations; Short Message Service (SMS); Unstructured Supplementary Service Data (USSD); candidate; cloud computing; Malawi National Examinations Board (MANEB)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Technological advances and the search for efficiency have recently catalyzed a migration from paper-and-pencil approaches to computer-based ones in education and training at all levels, with drivers including faster administration, processing, and delivery of examination results, error-free marking of test items, and enhanced interactivity. This research paper aims at establishing the challenges currently faced by the Malawi National Examinations Board (MANEB) when registering candidates for national examinations as well as disseminating examination results. A Short Message Service/Unstructured Supplementary Service Data (SMS/USSD) based mobile application using cloud infrastructure is proposed to address the challenges. Data was collected from 80 respondents consisting of teachers, parents, and students, whose analytical results show that current MANEB business processes have a number of irregularities that subsequently result in candidates’ registration records being missing or incorrect, as well as delayed access to examination results by candidates. The proposed SMS/USSD application was tested and proved to be faster and more reliable than the traditional computer-based approach currently being utilized.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_42-Enhancing_the_Administration_of_National_Examinations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Chronicles of Multicast Routing Protocol in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080941</link>
        <id>10.14569/IJACSA.2017.080941</id>
        <doi>10.14569/IJACSA.2017.080941</doi>
        <lastModDate>2017-09-29T18:17:35.8670000+00:00</lastModDate>
        
        <creator>Nandini G</creator>
        
        <creator>J. Anitha</creator>
        
        <subject>Complexity; multicast routing techniques; overhead; optimization; routing protocol; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Routing protocols in wireless sensor networks (WSN) have always been a frequently adopted research topic owing to the many unsolved issues they present. This paper discusses multicast routing protocols in WSN and briefly reviews different forms of standard research contributions, as well as significant recent research techniques toward improving the performance of multicast routing. The paper then discusses the beneficial and limiting factors of existing multicast techniques and highlights the research gap. To overcome this gap, a novel architecture that addresses optimization as a cost minimization problem associated with multicast routing in WSN is proposed. This paper presents the current scenario of multicast routing performance in WSN and thereby gives readers possible directions for future work, with a clear visualization of the system architecture.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_41-Performance_Chronicles_of_Multicast_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering based Max-Min Scheduling in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080940</link>
        <id>10.14569/IJACSA.2017.080940</id>
        <doi>10.14569/IJACSA.2017.080940</doi>
        <lastModDate>2017-09-29T18:17:35.8530000+00:00</lastModDate>
        
        <creator>Zonayed Ahmed</creator>
        
        <creator>Adnan Ferdous Ashrafi</creator>
        
        <creator>Maliha Mahbub</creator>
        
        <subject>Cloud computation; cluster; heuristics; batch-mode heuristics; cluster based max-min scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Cloud computing ensures the Service Level Agreement (SLA) by provisioning resources to cloudlets. This provisioning can be achieved through scheduling algorithms that properly map given tasks considering different heuristics such as execution time and completion time. This paper builds on the concept of the max-min algorithm with a unique proposed modification. A novel clustering-based max-min scheduling algorithm is introduced to decrease overall makespan and achieve better VM utilization for tasks of variable length. Experimental analysis shows that, due to clustering, it provides better results than the different variations of max-min as well as other heuristic algorithms in terms of effective utilization of faster VMs and proper scheduling of tasks, considering all possible scheduling scenarios and picking the best solution.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_40-Clustering_based_Max_Min_Scheduling_in_Cloud_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy based Model for Effort Estimation in Scrum Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080939</link>
        <id>10.14569/IJACSA.2017.080939</id>
        <doi>10.14569/IJACSA.2017.080939</doi>
        <lastModDate>2017-09-29T18:17:35.8200000+00:00</lastModDate>
        
        <creator>Jasem M. Alostad</creator>
        
        <creator>Laila R. A. Abdullah</creator>
        
        <creator>Lamya Sulaiman Aali</creator>
        
        <subject>Scrum; sprint; effort estimation; fuzzy logic; fuzzy inference system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This paper aims to utilize fuzzy logic concepts to improve effort estimation in the Scrum framework and in turn add a significant enhancement to Scrum. The Scrum framework is one of the most popular agile methods, in which the team accomplishes its work by breaking it down into a series of sprints. In Scrum, many factors have a significant influence on the effort estimation of each task in a sprint: development team experience, task complexity, task size, and estimation accuracy. These factors are usually expressed using linguistic quantifiers. Therefore, this paper utilizes fuzzy logic concepts to build a fuzzy-based model that can improve effort estimation in the Scrum framework. The proposed model includes three components: a fuzzifier, an inference engine, and a defuzzifier. In addition, the proposed model takes into consideration the feedback resulting from comparing the estimated effort with the actual effort. The proposed model was designed using MATLAB and applied to three sprints of a real software development project to present how it works and to show how it becomes more accurate over time, giving better effort estimates. In addition, the Scrum Master and the development team can use the proposed model to monitor the improvement in effort estimation accuracy over the project life.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_39-A_Fuzzy_based_Model_for_Effort_Estimation_in_Scrum_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Colored Image Retrieval based on Most used Colors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080938</link>
        <id>10.14569/IJACSA.2017.080938</id>
        <doi>10.14569/IJACSA.2017.080938</doi>
        <lastModDate>2017-09-29T18:17:35.8070000+00:00</lastModDate>
        
        <creator>Sarmad O. Abter</creator>
        
        <creator>Dr. Nada A.Z Abdullah</creator>
        
        <subject>Most used colors feature; color histogram; content-based image retrieval (CBIR); contour analysis; HSV color space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The fast development of digital image capturing has led to the availability of large databases of images. The manipulation and management of images within these databases depend mainly on the user interface and the search algorithm used to search these huge databases. There are two methods for searching within image databases: text-based and content-based. In this paper, we present a method for content-based image retrieval based on the most used colors to extract image features. Preprocessing is applied to enhance the extracted features: smoothing, quantization, and edge detection. Color quantization is applied in the RGB (Red, Green, Blue) color space to reduce the range of colors in the image and then extract the most used colors from the image. Color distance is computed in the HSV (Hue, Saturation, Value) color space to compare a query image with database images, because it is the color space closest to the human perception of colors. This approach provides an accurate, efficient, and less complex retrieval system.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_38-Colored_Image_Retrieval_based_on_Most_used_Colors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast Method to Estimate Partial Weights Enumerators by Hash Techniques and Automorphism Group</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080937</link>
        <id>10.14569/IJACSA.2017.080937</id>
        <doi>10.14569/IJACSA.2017.080937</doi>
        <lastModDate>2017-09-29T18:17:35.7730000+00:00</lastModDate>
        
        <creator>Moulay Seddiq EL KASMI ALAOUI</creator>
        
        <creator>Sa&#239;d NOUH</creator>
        
        <creator>Abdelaziz MARZAK</creator>
        
        <subject>Partial weights enumerator; PWEHA; automorphism group; hash function; hash table;  BCH codes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>BCH codes have a high error-correcting capability, which allows them to be classed as good cyclic error-correcting codes. This important characteristic is very useful in communication and data storage systems. Even now, almost 60 years after their discovery, their weight enumerators, and therefore their analytical performances, are known only for lengths less than or equal to 127 and only for some codes of length 255. The Partial Weights Enumerator (PWE) algorithm makes it possible to obtain partial weight enumerators for linear codes; it is based on the Multiple Impulse Method combined with a Monte Carlo method, and its main inconvenience is its relatively long run time. In this paper we present an improvement of PWE that accelerates it by integrating hash techniques and a part of the automorphism group (PWEHA). The chosen approach applies at two levels. The first is to expand the sample of codewords of the same weight obtained from a given codeword; this is done by adding a part of the automorphism group. The second is to simplify the search within the sample by using hash techniques. PWEHA has allowed us to considerably reduce the run time of the PWE algorithm; for example, the run time is reduced by more than 3900% for the BCH(127,71,19) code. This method is validated and is used to approximate partial weight enumerators of some BCH codes whose weight enumerators are unknown.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_37-A_Fast_Method_to_Estimate_Partial_Weights.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Hybrid Evolutionary Algorithm based Adaptive Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080936</link>
        <id>10.14569/IJACSA.2017.080936</id>
        <doi>10.14569/IJACSA.2017.080936</doi>
        <lastModDate>2017-09-29T18:17:35.7430000+00:00</lastModDate>
        
        <creator>Adnan Alrabea</creator>
        
        <subject>Cognitive radio adhoc networks; distributed spectrum map; swarm optimization; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Noise degrades the overall efficiency of data transmission in networking models, and Cognitive Radio Adhoc Networks (CRAHNs) are no exception. For efficient opportunistic routing in CRAHNs, the Modified SMOR (M-SMOR) and Sparsity based Distributed Spectrum Map M-SMOR (SDS-M-SMOR) schemes have been developed, which provide significant improvement in the overall routing behavior. However, an increase in noise is inevitable, especially in large-scale networks. This work therefore combines Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), together termed HPSOGA. The proposed HPSOGA-based adaptive filter readjusts the filter constraints according to the channel and the signals, thus mitigating noise in reconfigurable systems such as CRAHNs. The key benefit of the HPSOGA-based adaptive filter is global optimization compared to other approaches; the proposed model with noise cancellation achieves better performance values than other routing models.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_36-Using_Hybrid_Evolutionary_Algorithm_based_Adaptive_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rising Issues in VANET Communication and Security: A State of Art Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080935</link>
        <id>10.14569/IJACSA.2017.080935</id>
        <doi>10.14569/IJACSA.2017.080935</doi>
        <lastModDate>2017-09-29T18:17:35.7270000+00:00</lastModDate>
        
        <creator>Sachin P. Godse</creator>
        
        <creator>Parikshit N. Mahalle</creator>
        
        <creator>Sanjeev J. Wagh</creator>
        
        <subject>Vehicular Adhoc Network (VANET); Adaptive Elliptic Curve Cryptography (AECC); Enhanced Elliptic Curve Cryptography (EECC); authentication; message forwarding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>VANET (Vehicular Adhoc Network) has driven an evolution of the high-tech transportation systems of most developed countries, and it plays an important role in intelligent transportation systems (ITS). This paper gives an overall survey of research on VANET security and communication, and lists the parameters considered by previous researchers. From the survey, authentication and message forwarding emerge as the issues requiring more research. Authentication is the first line of security in VANET; it prevents attacks by malicious nodes. Previous research has produced cryptographic, trust-based, ID-based, and group-signature-based authentication schemes. Speed of authentication and privacy preservation are the important parameters in VANET authentication. This paper presents the AECC (Adaptive Elliptic Curve Cryptography) and EECC (Enhanced Elliptic Curve Cryptography) schemes to improve the speed and security of authentication. In AECC, the key size is adaptive, i.e., keys of different sizes are generated during the key generation phase; three ranges are specified for key sizes: small, medium, and large. In EECC, an extra parameter is added during the transmission of information from the vehicle to the RSU for key generation. This additional parameter gives the vehicle ID and the location of the vehicle to the RSU and the other vehicles. On the communication side of VANET, the paper proposes priority-based message forwarding to improve the message forwarding scheme, which handles emergency situations more effectively.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_35-Rising_Issues_in_VANET_Communication_and_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medicloud: Hybrid Cloud Computing Framework to Optimize E-Health Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080934</link>
        <id>10.14569/IJACSA.2017.080934</id>
        <doi>10.14569/IJACSA.2017.080934</doi>
        <lastModDate>2017-09-29T18:17:35.6970000+00:00</lastModDate>
        
        <creator>Hina Kunwal</creator>
        
        <creator>Dr. Babur Hayat Malik</creator>
        
        <creator>Amber Saeed</creator>
        
        <creator>Husnain Mushtaq</creator>
        
        <creator>Hassan Bilal Cheema</creator>
        
        <creator>Farhat Mehmood</creator>
        
        <subject>E-health; cloud computing; hybrid cloud; cloud based services; patient; security; cloud adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Cloud computing is an emerging technology, and its usage in the health sector is remarkable. It enhances the patient treatment process and allows physicians remote access to patient medical records anywhere and anytime. Numerous cloud-based solutions are currently in operation, offering facilities to people in rural areas of developing countries. It is estimated that cloud adoption in the health sector will increase drastically within a few years, although cloud-based health services present challenges as well as opportunities. Privacy, security, interoperability, and standards are the factors that influence cloud computing in e-health. For cloud adoption, an organization must understand its existing requirements and make a strategy for further development. The cloud offers several service and deployment models, and each organization selects the appropriate model according to its requirements. An interesting aspect of the cloud is that, from the usage perspective, responsibility is shared between provider and customer. To initiate the whole procedure, a service level agreement is signed between customer and provider. An organization can access cloud services from multiple providers. Hybrid cloud computing is the most suitable architecture for health organizations. The whole scenario provides ease to physician and patient and maximizes work production.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_34-MediCloud_Hybrid_Cloud_Computing_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Design of in-Memory File System based on File Virtual Address Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080933</link>
        <id>10.14569/IJACSA.2017.080933</id>
        <doi>10.14569/IJACSA.2017.080933</doi>
        <lastModDate>2017-09-29T18:17:35.6630000+00:00</lastModDate>
        
        <creator>Fahad Samad</creator>
        
        <creator>Zulfiqar Ali Memon</creator>
        
        <subject>Phase change memory; non-volatile memory; Spin Transfer Torque – RAM; sustainable in-memory file system; journaling file system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The rapid growth in technology demands computer systems that work better: they should be reliable and deliver faster performance at fair cost with the best functionality. In the modern era of technology, in-memory file systems are used to shorten the performance gap between memory and storage. The Sustainable In-Memory File System (SIMFS) was the first to introduce the concept of the open-file address space into the address space of the process and to exploit the memory-mapping hardware while accessing files. The purpose of designing and implementing the SIMFS architecture is to improve the performance of in-memory file systems. SCMFS is designed for storage-class memory systems; it uses the memory management component already present in the operating system to assist in managing blocks, and it keeps the space for each file contiguous in the virtual address space. A recent study has proposed that current non-volatile memories are powerful enough to minimize the performance gap, as compared to previous-generation non-volatile memories. Because the performance gap between non-volatile and volatile memories has been reduced, there is a possibility of using non-volatile memory as a computer&#8217;s main memory in the near future. Lately, high-speed non-volatile storage media such as Phase Change Memory (PCM) have come into view, and it is expected that PCM will replace the hard disk as the storage device in the coming years. Moreover, PCM is byte-addressable, meaning that it can access an individual byte of data rather than a word, and its data access time is expected to be almost indistinguishable from that of DRAM, a volatile memory. These features and innovations in computer architecture are making computer systems more reliable and faster.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_33-A_New_Design_of_In_Memory_File_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Applicability of Agile Scrum Methodology: A Perspective of Software Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080932</link>
        <id>10.14569/IJACSA.2017.080932</id>
        <doi>10.14569/IJACSA.2017.080932</doi>
        <lastModDate>2017-09-29T18:17:35.6330000+00:00</lastModDate>
        
        <creator>Anum Ali</creator>
        
        <creator>Mariam Rehman</creator>
        
        <creator>Maria Anjum</creator>
        
        <subject>Scrum agile methodology; framework; software industry; critical factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Agile scrum methodology has evolved over time, largely through the software industry, where it has grown and developed through empirical progress. The research work presented in this paper proposes a framework that identifies the critical elements for the applicability of agile scrum methodology in the software industry. The proposed framework is based on four elements: technical, people, environmental, and organizational. The framework is validated through statistical analysis, i.e., Structural Equation Modeling (SEM), after collecting data from software industry personnel working on agile methodologies. The research concludes that 15 out of 18 hypotheses were found significant, covering Training &amp; Learning, Societal Culture, Communication &amp; Negotiation, Personal Characteristics, Customer Collaboration, Customer Commitment, Decision Time, Team Size, Corporate Culture, Planning, Control, Development, Information Administration, and Working Environment.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_32-Framework_for_Applicability_of_Agile_Scrum_Methodology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Question Answering Systems: A Review on Present Developments, Challenges and Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080931</link>
        <id>10.14569/IJACSA.2017.080931</id>
        <doi>10.14569/IJACSA.2017.080931</doi>
        <lastModDate>2017-09-29T18:17:35.6170000+00:00</lastModDate>
        
        <creator>Lorena Kodra</creator>
        
        <creator>Elinda Kajo Me&#231;e</creator>
        
        <subject>Question answering systems; community question answering systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Question Answering Systems (QAS) are becoming a model for the future of web search. In this paper we present a study of the latest research in this area. We collected publications from top conferences and journals on information retrieval, knowledge management, artificial intelligence, web intelligence, natural language processing, and the semantic web. We identified and classified the Question Answering (QA) topics under research and the solutions being proposed. In this study we also identified the most heavily researched issues, the most popular proposed solutions, and the newest trends, to help researchers gain insight into the latest developments in the area of question answering.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_31-Question_Answering_Systems_A_Review_on_Present_Developments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aquabot: A Diagnostic Chatbot for Achluophobia and Autism </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080930</link>
        <id>10.14569/IJACSA.2017.080930</id>
        <doi>10.14569/IJACSA.2017.080930</doi>
        <lastModDate>2017-09-29T18:17:35.5870000+00:00</lastModDate>
        
        <creator>Sana Mujeeb</creator>
        
        <creator>Muhammad Hafeez Javed</creator>
        
        <creator>Tayyaba Arshad</creator>
        
        <subject>Chatbot; Achluophobia; autism; expert system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Chatbots, or chatter bots, have long been a good way to entertain people. This paper emphasizes the use of a chatbot in the diagnosis of achluophobia (the fear of darkness) and autism disorder, two of the most common neurodevelopmental disorders usually found in children. State-of-the-art manual diagnosis methods require a lot of time and are also unable to maintain the case history of a psychological disorder. A chatbot has been developed in this work that can diagnose the severity of the disorder from the user&#8217;s text-based conversation. It performs Natural Language Processing (NLP) for meaning extraction and uses decision trees to characterize a patient in terms of possible disease. The NLP unit extracts, from the user&#8217;s chat, the meaning of keywords defining the intensity of the disease&#8217;s symptoms. After that, similarity matching of the sentences containing those keywords is performed. Depth First Search (DFS) is used to traverse the decision tree and decide the severity of the disease. The proposed system, named Aquabot, proves to be an efficient technique for diagnosing achluophobia and autism, and it is useful for assisting practicing psychologists. Aquabot not only saves time and resources but also achieved an accuracy of 88 percent when compared against a human psychologist&#8217;s diagnoses.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_30-Aquabot_A_Diagnostic_Chatbot_for_Achluophobia_and_Autism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperspectral Image Segmentation using Homogeneous Area Limiting and Shortest Path Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080929</link>
        <id>10.14569/IJACSA.2017.080929</id>
        <doi>10.14569/IJACSA.2017.080929</doi>
        <lastModDate>2017-09-29T18:17:35.5570000+00:00</lastModDate>
        
        <creator>Fatemeh Hajiani</creator>
        
        <creator>Azar Mahmoodzadeh</creator>
        
        <subject>Segmentation; hyperspectral; shortest path; area limiting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Segmentation, as a preprocessing step, plays an important role in hyperspectral imaging. In this paper, considering the similarity of neighboring pixels and using a size measure, the image spectrum is divided into several segments, such that each segment may contain several sub-areas. Then, using the area-limiting method and the shortest path to the seed pixel, and considering the pixel spectra in all bands, the areas within each section are separated. The area-limiting method controls the amplitude changes of area pixels relative to the seed pixel, and the shortest-path method, considering the shortest path to the seed, controls the size of the area. The proposed method is implemented on AVIRIS images and, in terms of the number of areas, the borders between areas, and the possibility of area interference, shows better results than other methods.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_29-Hyperspectral_Image_Segmentation_using_Homogeneous_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Active and Reactive Power Control of a Variable Speed Wind Energy Conversion System based on Cage Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080928</link>
        <id>10.14569/IJACSA.2017.080928</id>
        <doi>10.14569/IJACSA.2017.080928</doi>
        <lastModDate>2017-09-29T18:17:35.5400000+00:00</lastModDate>
        
        <creator>Mazhar Hussain Baloch</creator>
        
        <creator>Waqas Ahmed Wattoo</creator>
        
        <creator>Dileep Kumar</creator>
        
        <creator>Ghulam Sarwar Kaloi</creator>
        
        <creator>Ali Asghar Memon</creator>
        
        <creator>Sohaib Tahir</creator>
        
        <subject>VAR compensator; wind turbine; cage generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This manuscript presents the modeling and control design for a variable-speed wind energy conversion system (VS-WECS). The control scheme is based on a three-phase squirrel cage induction generator driven by a horizontal-axis wind turbine through the overhead transmission network. A static VAR compensator is proposed and connected to the squirrel cage induction generator terminals in order to regulate system parameters such as voltage and power. The mechanical power was controlled through the pitch angle, using Simulink (MATLAB) software. The simulation results show that the proposed system offers good robustness and fast recovery under various dynamic system disturbances.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_28-Active_and_Reactive_Power_Control_of_a_Variable_Speed_Wind_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Productive Cattle Finding with Estrus Cycle Estimated with BCS and Parity Number and Hormone Treatments based on a Regressive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080927</link>
        <id>10.14569/IJACSA.2017.080927</id>
        <doi>10.14569/IJACSA.2017.080927</doi>
        <lastModDate>2017-09-29T18:17:35.5100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Narumi Suzaki</creator>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kenji Endo</creator>
        
        <creator>Kenichi Yamashita</creator>
        
        <subject>Body Condition Score (BCS); postpartum interval; parity number; estrous cycle; cattle productivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>An estrus cycle estimation method based on regression analysis of the correlations among influencing factors is applied to Japanese dairy cattle productivity analysis. Through experiments with 280 anestrus Japanese Holstein dairy cows, it is found that the estrus cycle can be estimated, via a regression equation, from the visually assessed Body Condition Score (BCS), hormone treatments, and parity number. It is also found that the time from delivery to the next estrus can be expressed in terms of BCS, hormonal treatments, and parity. Thus, the productivity of cattle can be identified.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_27-Method_for_Productive_Cattle_Identification_with_Estrus Cycle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing an Assessment Tool of ITIL Implementation in Small Scale Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080926</link>
        <id>10.14569/IJACSA.2017.080926</id>
        <doi>10.14569/IJACSA.2017.080926</doi>
        <lastModDate>2017-09-29T18:17:35.4930000+00:00</lastModDate>
        
        <creator>Abir EL YAMAMI</creator>
        
        <creator>Souad AHRIZ</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <creator>Mohammed QBADOU</creator>
        
        <creator>Elhossein ILLOUSSAMEN</creator>
        
        <subject>Component; IT Service Management (ITSM); Information Technology Infrastructure Library (ITIL); CSFs (Critical Success Factors); Design Science Research (DSR); Analytical Hierarchy Process (AHP); Small and Medium-sized enterprises (SMEs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This paper considers the problem of implementing IT Service Management (ITSM) frameworks in SMEs. Among the various frameworks available for companies to manage their IT services, ITIL is recognized as the most structured and effective, yet it has been criticized for not being appropriate for small-scale enterprises. This paper provides a practical tool formally developed according to the Design Science Research (DSR) approach. It aims to find the key factors that affect the success of ITIL implementation in SMEs, with the objective of eliminating misunderstanding of the purpose of implementing an IT service management model. It determines various Critical Success Factors (CSFs) of ITIL implementation; the weight of each CSF is calculated with the Analytical Hierarchy Process (AHP), and the evaluation was carried out in a Moroccan SME. It thereby provides an evaluation method to help researchers and managers determine the issues related to the local culture of SMEs when adopting the ITIL framework. Results show that top management support is the most important factor for Moroccan SMEs. It is also found that an approach for determining the sequencing order of ITIL process implementation needs to be developed in order to achieve quick wins.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_26-Developing_an_Assessment_Tool_of_ITIL_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determinants Impacting the Adoption of E-Government Information Systems and Suggesting Cloud Computing Migration Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080925</link>
        <id>10.14569/IJACSA.2017.080925</id>
        <doi>10.14569/IJACSA.2017.080925</doi>
        <lastModDate>2017-09-29T18:17:35.4630000+00:00</lastModDate>
        
        <creator>Muhammad Aatif Shafique</creator>
        
        <creator>Babar Hayat Malik</creator>
        
        <creator>Yasar Mahmood</creator>
        
        <creator>Sadaf Nawaz Cheema</creator>
        
        <creator>Khizar Hameed</creator>
        
        <creator>Shabana Tabassum</creator>
        
        <subject>E-government information systems; adoption; TOE; cloud computing migration; Board of Intermediate and Secondary Education (BISE), Pakistan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This research investigates the underlying elements that affect the adoption of e-government information systems in the Boards of Intermediate and Secondary Education (BISE), Pakistan. The study is grounded in the technology-organization-environment (TOE) model. Cloud computing is becoming a viable alternative for system analysts and IT managers to consider, given today&#8217;s information technology environment and the dynamic changes in the technology landscape. The second purpose of this study is to help government decision makers appropriately judge the suitability of applications for migration to cloud computing. Considering that the services provided in e-government (BISE) are available by means of the Internet, cloud computing can be used in the implementation of the e-government architecture to provide better service by utilizing its benefits.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_25-Determinants_Impacting_the_Adoption_of_E_Government_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QR Code Patterns Localization based on Hu Invariant Moments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080924</link>
        <id>10.14569/IJACSA.2017.080924</id>
        <doi>10.14569/IJACSA.2017.080924</doi>
        <lastModDate>2017-09-29T18:17:35.4300000+00:00</lastModDate>
        
        <creator>Hicham Tribak</creator>
        
        <creator>Youssef Zaz</creator>
        
        <subject>QR code; Hu invariant moments; pattern recognition; image blur estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The widespread use of QR codes, coinciding with the swift growth of e-commerce transactions, has led computer vision researchers to continuously devise a variety of QR code recognition algorithms. Their performance is generally limited by two main factors: firstly, most of them are computationally expensive because of the complexity of the feature descriptors they implement; secondly, these algorithms are often sensitive to geometric deformations of the patterns. In this paper a robust approach is proposed, whose architecture is based on three distinct treatments: 1) An image quality assessment stage, which evaluates the quality of the captured image, since the presence of blur decreases recognition accuracy significantly. 2) This stage is followed by an image segmentation based on an achromatic filter, through which only the regions of interest are highlighted, and consequently the execution time is reduced. 3) Finally, the Hu invariant moments technique is used as a feature descriptor to remove false positives; it filters the set of candidate QR code patterns roughly extracted by a scanning process. The Hu moments descriptor is able to recognize patterns independently of the geometric transformations they undergo. The experiments show that incorporating the aforementioned three stages significantly enhances the recognition accuracy along with a notable reduction in processing time. This makes the proposed approach suitable for embedded systems and devices with limited capabilities.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_24-QR_Code_Patterns_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relevance of the Indicators Observed in the Measurement of Social Resilience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080923</link>
        <id>10.14569/IJACSA.2017.080923</id>
        <doi>10.14569/IJACSA.2017.080923</doi>
        <lastModDate>2017-09-29T18:17:35.4300000+00:00</lastModDate>
        
        <creator>Ida Brou ASSIE</creator>
        
        <creator>Amadou SAWADOGO</creator>
        
        <creator>J&#233;r&#244;me K. ADOU</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Social resilience; observatory of social resilience; mathematical modeling of the resilience; analysis of multi-correspondences (ACM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This article examines the validation of the properties observed by experts in the study of social resilience. To that end, it applies the method of factorial analysis of multi-correspondences (ACM) to the reflections and practices of resilience observatories. Furthermore, a mathematical model of the concept of social resilience and a description of the databases of the resilience observatory are presented to aid in understanding the process of analyzing the resilience of an individual.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_23-Relevance_of_the_Indicators_Observed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Strategy in Trust-Based Recommender System using K-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080922</link>
        <id>10.14569/IJACSA.2017.080922</id>
        <doi>10.14569/IJACSA.2017.080922</doi>
        <lastModDate>2017-09-29T18:17:35.4000000+00:00</lastModDate>
        
        <creator>Naeem Shahabi Sani</creator>
        
        <creator>Ferial Najian Tabriz</creator>
        
        <subject>Recommendation systems; collaborative filtering; trust-based recommendation system; k-means; ant colony</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Recommender systems are among the most important parts of online systems, including online stores such as Amazon and Netflix, which have become very popular in recent years. These systems guide users to the information and goods they seek in electronic environments and are one of the main tools for overcoming the problem of information overload. Collaborative filtering (CF) is one of the best approaches for recommender systems and has become a dominant approach; however, it suffers from the problems of cold-start and data sparsity. Trust-based approaches try to create a neighborhood and network of trusted users that reflects users&#8217; trust in each other&#8217;s opinions, and such systems recommend items based on users&#8217; relationships. In the proposed method (TBRSK), we try to resolve the problems of low coverage and high RMSE in trust-based recommender systems using k-means clustering and the ant colony algorithm. For clustering the data, the k-means method is applied to the MovieLens and Epinions datasets, and the rating matrix is computed so as to minimize overlap.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_22-A_New_Strategy_in_Trust_based_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Zigbee Data Transmission on Wireless Sensor Network Topology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080921</link>
        <id>10.14569/IJACSA.2017.080921</id>
        <doi>10.14569/IJACSA.2017.080921</doi>
        <lastModDate>2017-09-29T18:17:35.3830000+00:00</lastModDate>
        
        <creator>Sigit Soijoyo</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <subject>Zigbee; delay; throughput; packet loss; topology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The purpose of this study is to measure distance in a line-of-sight environment and to examine Zigbee data transmission over star, mesh, and tree topologies using delay, throughput, and packet loss parameters. The results showed that the star topology had average values that tended to be stable in the throughput and packet loss measurements, because it contains no router nodes and the accuracy of data delivery is therefore better; it also had the smallest delay, since it has fewer nodes than the mesh and tree topologies. The mesh and tree topologies had poor average values in the throughput and packet loss measurements, since they must go through many processes, passing through router nodes to transmit data to the coordinator node. However, the mesh and tree topologies have the advantage that data delivery can span greater distances than in the star topology, and more nodes can be added.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_21-Analysis_of_Zigbee_Data_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Uniform Segregation of Densely Deployed Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080920</link>
        <id>10.14569/IJACSA.2017.080920</id>
        <doi>10.14569/IJACSA.2017.080920</doi>
        <lastModDate>2017-09-29T18:17:35.3700000+00:00</lastModDate>
        
        <creator>Manjeet Singh</creator>
        
        <creator>Surender Soni</creator>
        
        <subject>Clustering; fuzzy logic; wireless sensor network; cluster head; uncertainty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>In wireless sensor networks, the selection of cluster heads relies upon various selection parameters, such as energy, distance, node concentration, and rate of retransmission. Owing to these parameters, there is always uncertainty in the suitability of a sensor node for the cluster head role. Fuzzy logic is capable of handling uncertainty even with incomplete available information, a quality that can reduce uncertainty in cluster head selection to a large extent. Therefore, in this paper, a fuzzy logic based clustering approach is proposed to enhance the network&#8217;s operational lifetime. Clusters are formed on the basis of the spatial correlation value between sensors, so that they are organized uniformly across the network. The results are compared with the well-known CHEF and LEACH approaches.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_20-Uniform_Segregation_of_Densely_Deployed_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Game Application Development on Classification of Diseases and Related Health Problems Treatment in Android Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080919</link>
        <id>10.14569/IJACSA.2017.080919</id>
        <doi>10.14569/IJACSA.2017.080919</doi>
        <lastModDate>2017-09-29T18:17:35.3370000+00:00</lastModDate>
        
        <creator>Bernadus Rudy Sunindya</creator>
        
        <creator>Nur Hasti Purwani</creator>
        
        <subject>Game; KKPMT (Klasifikasi dan Kodifikasi Penyakit dan Masalah Terkait); android</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The classification and codification of diseases and related problems is one of the competences of a medical recorder, as stated in Kepmenkes RI No. 377 of 2007. The current problem is the lack of practice references for learning KKPMT (Klasifikasi dan Kodifikasi Penyakit dan Masalah Terkait) in the Diploma-III Program in Medical Records and Health Information at Malang State Health Polytechnic. The purpose of this research is to design an Android-based KKPMT educational application to improve students&#8217; understanding of the KKPMT course. The investigation used a pre-experimental, one-group pretest-posttest design with the waterfall development method. The population in this study was all active second-year students of the Diploma-III Program in Medical Records and Health Information at Malang State Health Polytechnic. The implementation results showed that, for diagnosis codes in group G, the percentage of students above the minimum passing value increased from 6% before using the game to 94% after implementation of the game application. A paired t-test yielded a p-value of 0.000 &lt; 0.05. The conclusion is that the Android game software helps students understand the KKPMT subject matter.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_19-Educational_Game_Application_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded System Design and Implementation of an Intelligent Electronic Differential System for Electric Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080918</link>
        <id>10.14569/IJACSA.2017.080918</id>
        <doi>10.14569/IJACSA.2017.080918</doi>
        <lastModDate>2017-09-29T18:17:35.3070000+00:00</lastModDate>
        
        <creator>Ali UYSAL</creator>
        
        <creator>Emel SOYLU</creator>
        
        <subject>Electronic differential; electric vehicle; embedded system; fuzzy logic controller; in-wheel motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This paper presents an experimental study of an electronic differential system for a four-wheel electric vehicle with dual, independently driven rear in-wheel motors. It is worth bearing in mind that the electronic differential is a new technology in electric vehicles that provides better balance on curved paths. In addition, it is lighter than a mechanical differential and can be controlled by a single controller. In this study, an intelligently supervised electronic differential is designed and controlled for electric vehicles. An embedded system provides motor control with a fuzzy logic controller, and high accuracy is obtained in the experimental study.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_18-Embedded_System_Design_and_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectiveness of Existing CAD-Based Research Work towards Screening Breast Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080917</link>
        <id>10.14569/IJACSA.2017.080917</id>
        <doi>10.14569/IJACSA.2017.080917</doi>
        <lastModDate>2017-09-29T18:17:35.2730000+00:00</lastModDate>
        
        <creator>Vidya Kattepura</creator>
        
        <creator>Dr. Kurian M Z</creator>
        
        <subject>Breast cancer detection; computer aided diagnosis; cancer; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Accurate detection and classification of breast cancer remains an open question in medical image processing. We reviewed existing Computer Aided Diagnosis (CAD)-based techniques and found that ample work has been carried out on both the detection and the classification of breast cancer; however, all existing techniques were implemented in highly controlled research environments. The prime contribution of this paper is that it reviews some of the significant journal publications from 2005&#8211;2016 and discusses their effectiveness thoroughly. The paper then discusses the open research issues that require serious attention from the research community. Finally, it makes suggestions for future work directions in order to bridge the research gap uncovered in the existing systems.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_17-Effectiveness_of_Existing_CAD_based_Research_Work.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Boosting Base Station Anonymity in a WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080916</link>
        <id>10.14569/IJACSA.2017.080916</id>
        <doi>10.14569/IJACSA.2017.080916</doi>
        <lastModDate>2017-09-29T18:17:35.2600000+00:00</lastModDate>
        
        <creator>Vicky Kumar</creator>
        
        <creator>Ashok Kumar</creator>
        
        <subject>Anonymity; network lifetime; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Nodes in a wireless sensor network scrutinize the nearby region and transmit their findings to the base station (BS) using multi-hop transmission. Because the BS plays an important role in a wireless sensor network, an adversary who wants to disrupt the operation of the network would avidly seek the BS location and inflict maximum damage by physically destroying the BS. Multi-hop data transmission towards the BS creates a prominent traffic pattern (heavy traffic near the BS region) that indicates the presence of the BS in the nearby region, and thus the location of the BS may be exposed to adversaries. This work provides a novel approach to increase BS anonymity: a randomly roaming BS combined with special nodes. The special nodes produce a large number of high-traffic regions that resemble the BS region. Since many regions then look like the BS region, the probability of locating the BS by traffic analysis is very low, and the adversary&#8217;s effort to find the exact BS position increases. We use a standard entropy model to measure the anonymity of the base station, and the GSAT test to calculate the number of steps required to find the base station. The results show that the proposed technique provides better anonymity than existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_16-A_Novel_Approach_for_Boosting_Base_Station_Anonymity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Evapotranspiration using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080915</link>
        <id>10.14569/IJACSA.2017.080915</id>
        <doi>10.14569/IJACSA.2017.080915</doi>
        <lastModDate>2017-09-29T18:17:35.2270000+00:00</lastModDate>
        
        <creator>Muhammad Adnan</creator>
        
        <creator>M. Ahsan Latif</creator>
        
        <creator>Abaid-ur-Rehman</creator>
        
        <creator>Maria Nazir</creator>
        
        <subject>Evapotranspiration; principle component analysis; neural network; irrigation scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The measurement of evapotranspiration is the most important factor in irrigation scheduling. Evapotranspiration is the loss of water from plant and soil surfaces. Evaporation parameters are used in studying water balances, water resource management, and irrigation system design, and for estimating plant growth and height. Evapotranspiration is measured by different methods using various parameters; it varies with the climate, and since the climate varies greatly geographically, previously developed systems that did not use all available meteorological data are not robust models. In this research work, a model is developed to estimate evapotranspiration more authentically and accurately from a reduced set of meteorological parameters using different machine learning techniques, learning and generalizing the relationships among the different parameters. The dataset with reduced dimension is modeled through a time series neural network, giving a regression value of R=83%.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_15-Estimating_Evapotranspiration_using_Machine_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Extraction from Image based on K-Means Clustering Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080914</link>
        <id>10.14569/IJACSA.2017.080914</id>
        <doi>10.14569/IJACSA.2017.080914</doi>
        <lastModDate>2017-09-29T18:17:35.1970000+00:00</lastModDate>
        
        <creator>Yousef Farhang</creator>
        
        <subject>K-means; RER-K-means; clustering algorithm; face extraction; edge detection; image clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This paper proposes a new application of the K-means clustering algorithm. Owing to its ease of implementation and application, the K-means algorithm is widely used. However, one disadvantage of clustering algorithms is the imbalance between algorithm development and application: many researchers have paid little attention to the applications of clustering algorithms. The purpose of this paper is to apply clustering to face extraction. An improved K-means clustering algorithm is proposed, along with a new method for using clustering algorithms in image processing. To evaluate the proposed method, two case studies were used, comprising four standard images and five images selected from the standard LFW database. These images were processed first by the K-means clustering algorithm and then by the RER-K-means and FE-RER clustering algorithms. This study showed that the K-means clustering algorithm can extract faces from images, and that the proposed algorithm increases the accuracy rate while reducing the number of iterations, the intra-cluster distance, and the associated processing time.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_14-Face_Extraction_from_Image_based_on_K_Means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customization of Graphical Visualization for Health Parameters in Health Care Applications </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080913</link>
        <id>10.14569/IJACSA.2017.080913</id>
        <doi>10.14569/IJACSA.2017.080913</doi>
        <lastModDate>2017-09-29T18:17:35.1800000+00:00</lastModDate>
        
        <creator>Saima Tunio</creator>
        
        <creator>Hameedullah Kazi</creator>
        
        <creator>Sirajuddin Qureshi</creator>
        
        <subject>Custom graphical visualization; health monitoring application; learnability; usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>In the 21st century, health care systems worldwide face many challenges as a result of the growing burden of human diseases, such as intestinal, respiratory, paralytic, nutritional, and urogenital disorders. The use of mobile technology in healthcare not only reduces cost but also facilitates quality long-term health care, intelligent automation, and streamlined patient health monitoring wherever needed. While regular monitoring of vital sign readings is critical, it is often overlooked because of busy life schedules. A number of apps exist for health monitoring, but users are generally not satisfied with these applications because they lack custom graphical visualization of parameters, such as daily, weekly, or yearly graphs and the relationships between vital signs. In this research study, we identify custom graphical visualization of parameters as a principal issue in health monitoring applications, and we focus on the design and implementation of such visualization for health monitoring applications. The model emphasizes monitoring, saving, and retrieving logs. The System Usability Scale was modified and used to evaluate usability, learnability, and graph customization. We observed N=20 participants, collecting readings of heart rate, skin temperature, respiration, and glucose rate; a total of R=60 responses were collected from the 24 to 40 age group. Comparisons were made between three Android-based health monitoring applications: S-Health, Health Monitoring, and the developed application. The usability and learnability responses, as well as the overall System Usability Score, were significantly higher for the developed application than for the other two.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_13-Customization_of_Graphical_Visualization_for_Health_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy-Semantic Similarity for Automatic Multilingual Plagiarism Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080912</link>
        <id>10.14569/IJACSA.2017.080912</id>
        <doi>10.14569/IJACSA.2017.080912</doi>
        <lastModDate>2017-09-29T18:17:35.1500000+00:00</lastModDate>
        
        <creator>Hanane EZZIKOURI</creator>
        
        <creator>Mohamed ERRITALI</creator>
        
        <creator>Mohamed OUKESSOU</creator>
        
        <subject>CLPD; fuzzy similarity; natural language processing; plagiarism detection; semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>A word may have multiple meanings or senses; this can be modeled by considering that each word in a sentence has a fuzzy set containing words of similar meaning. This makes detecting plagiarism a hard task, especially when dealing with semantic meaning, and harder still for cross-language plagiarism detection. Arabic is known for its richness and for the diversity of its word constructions and meanings, so translating texts from/to Arabic is a complex task; adopting a fuzzy semantic-based approach therefore seems to be the best solution. In this paper, we propose a detailed fuzzy semantic-based similarity model, built on the WordNet lexical database, for analyzing and comparing texts in cross-language plagiarism cases in order to detect plagiarism in documents translated from/to Arabic; a preprocessing phase is essential to produce operable data for the fuzzy process. The proposed method was applied to two texts (Arabic/English), taking into consideration the specificities of the Arabic language. The results show that the proposed method can detect 85% of the plagiarism cases.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_12-Fuzzy_Semantic_Similarity_for_Automatic_Multilingual_Plagiarism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Basic Health Screening by Exploiting Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080911</link>
        <id>10.14569/IJACSA.2017.080911</id>
        <doi>10.14569/IJACSA.2017.080911</doi>
        <lastModDate>2017-09-29T18:17:35.1200000+00:00</lastModDate>
        
        <creator>Dolluck Phongphanich</creator>
        
        <creator>Nattayanee Prommuang</creator>
        
        <creator>Benjawan Chooprom</creator>
        
        <subject>Bayesian methods; classification technique; data-mining; decision tree methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This study proposes a basic health screening system based on data mining techniques to help related personnel with basic health screening and to enable citizens to self-examine their health conditions. The research comprised two steps. The first step was to create a model using classification techniques, namely Bayesian methods (Na&#239;ve Bayes, Bayesian networks, and Na&#239;ve Bayes Updateable) and decision tree methods (C4.5, ID3, and partial rules), to find the important attributes causing the disease. In this step, the accuracy of each method was compared with that of the others to select the most efficient model as input for the next step. The second step was to develop a basic health screening system that uses the rules from the model developed in the first step to classify, from a citizen&#8217;s health profile, whether the citizen is in the normal, at-risk, or sick group. The findings revealed two important attributes directly contributing to diabetes: blood pressure (BP) and blood glucose (DTX). Furthermore, the C4.5 algorithm was the most accurate, with accuracy of 99.7969%, precision of 99.8%, recall of 99.8%, and F-measure of 99.8%.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_11-Basic_Health_Screening_by_Exploiting_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Website Detection based on Supervised Machine Learning with Wrapper Features Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080910</link>
        <id>10.14569/IJACSA.2017.080910</id>
        <doi>10.14569/IJACSA.2017.080910</doi>
        <lastModDate>2017-09-29T18:17:35.1030000+00:00</lastModDate>
        
        <creator>Waleed Ali</creator>
        
        <subject>Phishing website; machine learning; wrapper features selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>The problem of Web phishing attacks has grown considerably in recent years, and phishing is considered one of the most dangerous Web crimes, with potentially tremendous negative effects on online business. In a Web phishing attack, the phisher creates a forged or phishing website to deceive Web users into revealing their sensitive financial and personal information. Several conventional techniques for detecting phishing websites have been suggested to cope with this problem. However, detecting phishing websites is a challenging task, as most of these techniques cannot make an accurate dynamic decision as to whether a new website is phishing or legitimate. This paper presents a methodology for phishing website detection based on machine learning classifiers with a wrapper features selection method: common supervised machine learning techniques are applied with effective and significant features selected using the wrapper features selection approach to accurately detect phishing websites. The experimental results demonstrated that the performance of the machine learning classifiers was improved by the wrapper-based features selection. Moreover, the machine learning classifiers with the wrapper-based features selection outperformed those with other features selection methods.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_10-Phishing_Website_Detection_based_on_Supervised_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gait Identification using Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080909</link>
        <id>10.14569/IJACSA.2017.080909</id>
        <doi>10.14569/IJACSA.2017.080909</doi>
        <lastModDate>2017-09-29T18:17:35.0730000+00:00</lastModDate>
        
        <creator>Muhammad Ramzan Talib</creator>
        
        <creator>Ayesha Shafique</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <subject>Gait recognition; biometric identification; neural network; back preparation; human detection and tracking; morphological operator; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Biometric systems have become increasingly important for the security and verification of people under surveillance, and this technology also enables identification at a distance. Researchers are interested in identifying individuals by their gait covertly, without informing the human subject. We offer a self-similarity gait recognition system for identification using an artificial neural network. Background modeling is performed from video camera footage: movement in front of the camera is captured as frames that are segmented using a background subtraction algorithm, and the head (skeleton) is then located logically to track the walking figure. In short, when a video frame sequence is entered, the offered system identifies gait and body-based properties. The system was evaluated on a collected gait dataset over multiple trials, and the video frame sequences showed that the algorithm attains good recognition performance. Identification by gait verifies an individual by the way he or she moves or walks and by the intensity of movement of the feet. Biometric recognition assesses the behavioural properties of a person by establishing patterns according to need, and gait recognition is a biometric of this type that works without giving any hint to the moving subject. This makes it well suited to monitoring people in environments such as airports, banks, and airbases to detect dangers and threats.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_9-Gait_Identification_using_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Surveillance System based on Multi-Processor System-on-Chip and Hardware Accelerator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080908</link>
        <id>10.14569/IJACSA.2017.080908</id>
        <doi>10.14569/IJACSA.2017.080908</doi>
        <lastModDate>2017-09-29T18:17:35.0400000+00:00</lastModDate>
        
        <creator>Mossaad Ben Ayed</creator>
        
        <creator>Sabeur Elkosantini</creator>
        
        <creator>Mohamed Abid</creator>
        
        <subject>Surveillance system; suspicious behaviors; multi-processor; accelerator; architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Video surveillance, as an example of a security system, is one of the most powerful techniques used in advanced systems. The manual viewing used to analyze video in the traditional approach should be avoided, yet an automated surveillance system based on detecting suspicious behavior presents a great challenge to developers: detection is a complex and time-consuming process. Abnormal behavior can be identified in different ways, e.g., by actions, face, or trajectory, and characterizing abnormal behavior remains a hard problem. This paper proposes a specific System-on-Chip architecture for a surveillance system based on a Multi-Processor System-on-Chip (MPSoC) and a hardware accelerator. The aim is to accelerate the processing and obtain reliable and fast suspicious-behavior recognition. Finally, the experimental section demonstrates the merits of the proposed system in terms of performance and cost.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_8-An_Automated_Surveillance_System_based_on_Multi_Processor_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Mechanism to Detect and Mitigate Economic Denial of Sustainability (EDoS) Attack in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080907</link>
        <id>10.14569/IJACSA.2017.080907</id>
        <doi>10.14569/IJACSA.2017.080907</doi>
        <lastModDate>2017-09-29T18:17:35.0270000+00:00</lastModDate>
        
        <creator>Parminder Singh Bawa</creator>
        
        <creator>Shafiq Ul Rehman</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <subject>Cloud computing; Economic Denial of Sustainability (EDoS) attack; security; Distributed Denial of Service (DDoS) attack; mitigation mechanism; anomaly detection technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Cloud computing (CC) is the next revolution in the Information and Communication Technology arena. CC is often provided as a service comparable to utility services such as electricity, water, and telecommunications. Cloud service providers (CSPs) offer tailored CC services which are delivered as subscription-based services, in which customers pay based on usage. Many organizations and service providers have started shifting from traditional server-cluster infrastructure to cloud-based infrastructure. Nevertheless, security is one of the main factors that inhibit the proliferation of cloud computing. The threat of Distributed Denial of Service (DDoS) attacks continues to wreak havoc in these cloud infrastructures. In addition to DDoS attacks, a new form of attack known as the Economic Denial of Sustainability (EDoS) attack has emerged in recent years. A DDoS attack in a conventional computing setup usually disrupts the service, which affects the client's reputation and results in financial loss. In a CC environment, service disruption is very rare due to auto-scalability (elasticity), capacity, and the availability of service level agreements (SLAs). However, auto-scalability consumes more computing resources in the event of a DDoS attack, exceeding the economic bounds of service delivery and thereby triggering EDoS for the targeted organization. Although EDoS attacks are small in number at the moment, they are expected to grow in the near future in tandem with the growth in cloud usage. There are a few EDoS detection and mitigation techniques available, but they have weaknesses and are not efficient in mitigating EDoS. Hence, an enhanced EDoS mitigation mechanism (EDoS-EMM) has been proposed. The aim of this mechanism is to provide real-time detection and effective mitigation of EDoS attacks.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_7-Enhanced_Mechanism_to_Detect_and_Mitigate_Economic_Denial_of_Sustainability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling Planar Electromagnetic Levitation System based on Phase Lead Compensation Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080906</link>
        <id>10.14569/IJACSA.2017.080906</id>
        <doi>10.14569/IJACSA.2017.080906</doi>
        <lastModDate>2017-09-29T18:17:34.9930000+00:00</lastModDate>
        
        <creator>Mundher H. A. YASEEN</creator>
        
        <subject>Electromagnetic levitation system; lead controller; (magnetic levitation) maglev system; SIMLAB board</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Electromagnetic levitation systems are commonly used in the field of Maglev (magnetic levitation) trains. Modelling a Maglev system involves all of the magnetic force characteristics as functions of current and position. This paper presents a 2-DOF model that represents a uniform rigid planar body based on functions of the current and the air gap. The present work identifies the dynamic correlation of the levitation system of the Maglev using three sub-models. A lead controller is developed to achieve system stability by considering the correlation of system moments and inductance variations. The control properties of the present model are obtained through a SIMLAB microcontroller board to achieve a stable Maglev system.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_6-Modelling_Planar_Electromagnetic_Levitation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New 30 GHz AMC/PRS RFID Reader Antenna with Circular Polarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080905</link>
        <id>10.14569/IJACSA.2017.080905</id>
        <doi>10.14569/IJACSA.2017.080905</doi>
        <lastModDate>2017-09-29T18:17:34.9630000+00:00</lastModDate>
        
        <creator>Omrane NECIBI</creator>
        
        <creator>Chaouki GUESMI</creator>
        
        <creator>Ali GHARSALLAH</creator>
        
        <subject>Radio Frequency Identification (RFID); Fabry-Perot Cavity Antenna (FPCA); Electromagnetics Band Gap (EBG); circular polarization; high impedance surface (HIS); artificial magnetic conductor (AMC); millimeter wave identification; Partially Reflective Surface (PRS);  axial ratio (AR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>This work focuses on the development and design of a circularly polarized metallic EBG antenna fed by two microstrip lines. To achieve this purpose, a list of indicative specifications was established: an antenna operating from 29.5 to 30 GHz with a high gain, an ellipticity ratio of less than 3 dB, and secondary lobes below -12 dB, designed for Radio Frequency Identification (RFID) readers operating in the millimeter band. The size of the patch is 17*17 mm2. Artificial materials, namely an artificial magnetic conductor (AMC) and a Partially Reflective Surface (PRS), were added as an upper layer to this antenna in order to expand its bandwidth for RFID reader applications. The new antenna has -35 dB of insertion loss, an impedance bandwidth of 0.6 GHz, and a gain of 12.4 dB at 30 GHz. Analysis of the proposed antenna was carried out based on the finite element method using two electromagnetic simulation packages: CST-MW Studio&#174; and ANSYS HFSS. The simulation results obtained are presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_5-A_New_30_GHz_AMCPRS_RFID_Reader_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Control of Self-Stabilizing Angular Robotics Anywalker</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080904</link>
        <id>10.14569/IJACSA.2017.080904</id>
        <doi>10.14569/IJACSA.2017.080904</doi>
        <lastModDate>2017-09-29T18:17:34.9470000+00:00</lastModDate>
        
        <creator>Igor Ryadchikov</creator>
        
        <creator>Semyon Sechenev</creator>
        
        <creator>Sergey Sinitsa</creator>
        
        <creator>Alexander Svidlov</creator>
        
        <creator>Pavel Volkodav</creator>
        
        <creator>Anton Feshin</creator>
        
        <creator>Anas Alotaki</creator>
        
        <creator>Aleksey Bolshakov</creator>
        
        <creator>Michail Drobotenko</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>Walking robots; self-stabilization platform; stability of dynamic systems; chassis of robotic complexes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Walking robots are designed to overcome obstacles when moving. The walking robot AnyWalker has been developed, in whose design the task of self-stabilization of the center of mass is solved; a special type of chassis is developed, providing movement with high cross-country capability. The paper presents the results of designing and controlling the robot; the architecture of the software complex provides control and management of the hardware platform. AnyWalker is essentially a chassis on which robots for many different purposes can be built, such as surveying complex environments, industrial operations, and work in hazardous environments.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_4-Design_and_Control_of_Self_Stabilizing_Angular_Robotics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Root-Cause and Defect Analysis based on a Fuzzy Data Mining Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080903</link>
        <id>10.14569/IJACSA.2017.080903</id>
        <doi>10.14569/IJACSA.2017.080903</doi>
        <lastModDate>2017-09-29T18:17:34.9170000+00:00</lastModDate>
        
        <creator>Seyed Ali Asghar Mostafavi Sabet</creator>
        
        <creator>Alireza Moniri</creator>
        
        <creator>Farshad Mohebbi</creator>
        
        <subject>Data mining; association rules; defect analysis; fuzzy sets; root cause analysis; quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Manufacturing organizations have to improve the quality of their products regularly to survive in today’s competitive production environment. This paper presents a method for identification of unknown patterns between the manufacturing process parameters and the defects of the output products and also of the relationships between the defects. Discovery of these patterns helps practitioners to achieve two main goals: first, identification of the process parameters that can be used for controlling and reducing the defects of the output products and second, identification of the defects that very probably have common roots. In this paper, a fuzzy data mining algorithm is used for discovery of the fuzzy association rules for weighted quantitative data. The application of the association rule algorithm developed in this paper is illustrated based on a net making process at a netting plant. After implementation of the proposed method, a significant reduction was observed in the number of defects in the produced nets.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_3-Root_Cause_and_Defect_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Minimum Redundancy Maximum Relevance-Based Approach for Multivariate Causality Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080902</link>
        <id>10.14569/IJACSA.2017.080902</id>
        <doi>10.14569/IJACSA.2017.080902</doi>
        <lastModDate>2017-09-29T18:17:34.8830000+00:00</lastModDate>
        
        <creator>Yawai Tint</creator>
        
        <creator>Yoshiki Mikami</creator>
        
        <subject>Causal analysis; dummy variable; Minimum Redundancy Maximum Relevance (MRMR); multivariate analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Causal analysis, a form of root cause analysis, has been applied to explore causes rather than indications so that the methodology is applicable to identify direct influences of variables. This study focuses on observational data-based causal analysis for factors selection in place of a correlation approach that does not imply causation. The study analyzes the causality relationship between a set of categorical response variables (binary and more than two categories) and a set of explanatory dummy variables by using multivariate joint factor analysis. The paper uses the Minimum Redundancy Maximum Relevance (MRMR) algorithm to identify the causation utilizing data obtained from the National Automotive Sampling System’s Crashworthiness Data System (NASS-CDS) database.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_2-A_Minimum_Redundancy_Maximum_Relevance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Analysis of Anticancer Drug Sensitivity of Lung Cancer Cell Lines by using Machine Learning Clustering Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080901</link>
        <id>10.14569/IJACSA.2017.080901</id>
        <doi>10.14569/IJACSA.2017.080901</doi>
        <lastModDate>2017-09-29T18:17:34.8070000+00:00</lastModDate>
        
        <creator>Chandi S. Wanigasooriya</creator>
        
        <creator>Malka N. Halgamuge</creator>
        
        <creator>Azeem Mohammad</creator>
        
        <subject>Data analysis; clustering; filtered clustering;  simple k-means clustering; cancer; lung cancer; cancer cell lines; drug sensitivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(9), 2017</description>
        <description>Lung cancer is the most common type of cancer and has the highest fatality rate worldwide. There is continued research experimenting on drug development for lung cancer patients by assessing their responses to chemotherapeutic treatments in order to select novel targets for improved therapies. This study aims to analyze anticancer drug sensitivity in human lung cancer cell lines using machine learning techniques. The data for this analysis were extracted from the National Cancer Institute (NCI). The experiment draws its conclusions from 408,291 human small-molecule lung cancer cell line records, describing the raw viability values for 91 human lung cancer cell lines treated with 354 different chemical compounds, with 432 concentration points tested in each replicate experiment. Our analysis clustered the data from this considerable number of cell lines using Simple K-means and Filtered clustering and calculated the sensitive drugs for each lung cancer cell line. Additionally, our analysis demonstrated that the Neopeltolide, Parbendazole, Phloretin and Piperlongumine anticancer chemical compounds were more sensitive for all 91 cell lines under different concentrations (p-value &lt; 0.001). Our findings indicated that the Simple K-means and Filtered clustering methods give completely similar results. The available literature on lung cancer cell line data shows a significant relationship between lung cancer and anticancer drugs. Our analysis of the reported experimental results demonstrated that some compounds are more sensitive than others; Phloretin was the most sensitive compound for the lung cancer cell lines, covering nearly 59% of the 91 cell lines. Hence, our observations provide a methodology for analyzing the anticancer drug sensitivity of lung cancer cell lines using machine learning techniques such as clustering algorithms. This inquiry is a useful reference for researchers experimenting on drug development for lung cancer in the future.</description>
        <description>http://thesai.org/Downloads/Volume8No9/Paper_1-The_Analysis_of_Anticancer_Drug_Sensitivity_of_Lung_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Player Selection for a Sports Team using Competitive Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080859</link>
        <id>10.14569/IJACSA.2017.080859</id>
        <doi>10.14569/IJACSA.2017.080859</doi>
        <lastModDate>2017-09-02T12:23:19.9500000+00:00</lastModDate>
        
        <creator>Rabah Al-Shboul</creator>
        
        <creator>Tahir Syed</creator>
        
        <creator>Jamshed Memon</creator>
        
        <creator>Furqan Khan</creator>
        
        <subject>Team selection; match outcome prediction; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The use of data analytics to constitute a winning team for the least cost has become the standard modus operandi in club leagues, beginning with Sabermetrics for the game of baseball. Our motivation is to bring this phenomenon to other sports as well, and for the purpose of this work we present a model for football, for which, to the best of our knowledge, previous work does not exist. The main objective is to pick the best possible squad from an available pool of players. This helps decide which team of 11 football players is best to play against a particular opponent, enables prediction of future matches, and helps team management in preparing the team for the future. We argue in favour of a semi-supervised learning approach in order to quantify and predict player performance from team data with mutual influence among players, and report win-prediction accuracies of around 60%.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_59-Automated_Player_Selection_for_a_Sports_Team.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-grained Accelerometer-based Smartphone Carrying States Recognition during Walking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080858</link>
        <id>10.14569/IJACSA.2017.080858</id>
        <doi>10.14569/IJACSA.2017.080858</doi>
        <lastModDate>2017-09-01T05:18:20.0700000+00:00</lastModDate>
        
        <creator>Kaori Fujinami</creator>
        
        <creator>Tsubasa Saeki</creator>
        
        <creator>Yinghuan Li</creator>
        
        <creator>Tsuyoshi Ishikawa</creator>
        
        <creator>Takuya Jimbo</creator>
        
        <creator>Daigo Nagase</creator>
        
        <creator>Koji Sato</creator>
        
        <subject>Smartphone; on-body localization; accelerometer; machine learning; feature selection; wearable computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Due to the dependency of our daily lives on smartphones, the states of the device have an impact on the quality of services offered through a smartphone. In this article, we focus on the carrying states of the device while the user is walking, in which 17 states, e.g., in the front-left trouser pocket, calling with the phone in the right hand, or in a backpack, are subject to recognition based on supervised learning with accelerometer-derived features. A large-scale data collection from 70 persons at three walking speeds allows reliable evaluation regarding suitable features and classifier models, the feature selection method, robustness of localization against unknown persons, and the effect of walking speed in training a classifier. Person-independent evaluation shows that the average F-measures of 17-class classification and merged 9-class classification were 0.823 and 0.913, respectively.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_58-Fine_grained_Accelerometer_based_Smartphone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Predictive Parameter Estimation using Kalman Filter and Analysis of Variance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080857</link>
        <id>10.14569/IJACSA.2017.080857</id>
        <doi>10.14569/IJACSA.2017.080857</doi>
        <lastModDate>2017-09-01T05:18:19.9930000+00:00</lastModDate>
        
        <creator>Asim ur Rehman Khan</creator>
        
        <creator>Haider Mehdi</creator>
        
        <creator>Syed Muhammad Atif Saleem</creator>
        
        <creator>Muhammad Junaid Rabbani</creator>
        
        <subject>Analysis of variance (ANOVA); Kalman controllers; predictive controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The design of a controller improves significantly if the internal states of a dynamic control system are predicted. This paper compares the prediction of system states using a Kalman filter and a novel approach, analysis of variance (ANOVA). The Kalman filter has been successfully applied in several applications. A significant advantage of the Kalman filter is its ability to use the system output to predict future states. It has been observed that Kalman filter based predictive controller designs outperform many other approaches. An important drawback of such controllers, however, is that their performance deteriorates in situations where the system states have no correlation with the output. This paper takes a hypothetical model of a helicopter and builds the system model using a state-space diagram. The design is implemented using SIMULINK. It has been observed that in situations where the states depend on the system output, ANOVA-based state prediction gives results comparable with those of Kalman filter based parameter estimation. ANOVA-based parameter prediction, however, outperforms Kalman filter based parameter prediction in situations where the output does not directly contribute to a particular state. The research was based on empirical results. Rigorous testing was performed on four internal states to show that the ANOVA-based predictive parameter estimation technique outperforms Kalman-based parameter estimation in situations where the system's internal states are not directly linked with the output.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_57-A_Comparison_of_Predictive_Parameter_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Temporal Information in Documents and Query to Improve the Information Retrieval Process: Application to Medical Articles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080856</link>
        <id>10.14569/IJACSA.2017.080856</id>
        <doi>10.14569/IJACSA.2017.080856</doi>
        <lastModDate>2017-08-30T17:19:27.5000000+00:00</lastModDate>
        
        <creator>Jihen MAJDOUBI</creator>
        
        <creator>Ahlam Nabli</creator>
        
        <subject>Biomedical information retrieval; semantic indexing; temporal criteria; Medical Subject Headings (MeSH) thesaurus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In the medical field, scientific articles represent a very important source of knowledge for researchers of this domain. But due to the large volume of scientific articles published on the web, efficient detection and use of this knowledge is quite a difficult task. In this paper, we propose a novel method for semantic indexing of medical articles that uses the semantic resource MeSH (Medical Subject Headings) and the temporal information provided in the documents. The proposed indexing approach was evaluated by intensive experiments, conducted on test collections of real-world clinical documents extracted from scientific collections, namely CISMEF and CLEF. The results generated by these experiments demonstrate the effectiveness of our indexing approach.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_56-Exploiting_Temporal_Information_in_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Scheme for Real-time Information Storage and Retrieval Systems: A Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080855</link>
        <id>10.14569/IJACSA.2017.080855</id>
        <doi>10.14569/IJACSA.2017.080855</doi>
        <lastModDate>2017-08-30T17:19:27.4830000+00:00</lastModDate>
        
        <creator>Syed Ali Hassan</creator>
        
        <creator>Imran Ul Haq</creator>
        
        <creator>Muhammad Asif</creator>
        
        <creator>Maaz Bin Ahmad</creator>
        
        <creator>Moeen Tayyab</creator>
        
        <subject>Insertion; deletion; array; linked list; binary search; linear search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Information storage and retrieval is a fundamental requirement for many real-time applications. These systems demand that the data be sorted at all times, that real-time insertion, deletion and searching be supported, and that dynamic entries be allowed. They require search operations to be performed over massive databases implemented with various data structures. The common data structures used by these systems are the array, queue and linked list, all of which have their own limitations. The biggest advantage of using an array is that binary search can be performed on it easily, while on the other hand insertion and deletion of elements involve more processing overhead. In a linked list, insertion and deletion of nodes are easier, but the search operation involves more processing overhead, as binary search cannot be performed efficiently on it. In this paper, a hybrid solution is presented for such systems, which provides efficient insertion, deletion and searching operations. Results show the effectiveness of the proposed approach, as it outperforms the existing techniques used by these systems.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_55-An_Efficient_Scheme_for_Real_Time_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple Vehicles Semi-Self-driving System Using GNSS Coordinate Tracking under Relative Position with Correction Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080854</link>
        <id>10.14569/IJACSA.2017.080854</id>
        <doi>10.14569/IJACSA.2017.080854</doi>
        <lastModDate>2017-08-30T17:19:27.4670000+00:00</lastModDate>
        
        <creator>Heejin Lee</creator>
        
        <creator>Hiroshi Suzuki</creator>
        
        <creator>Takahiro Kitajima</creator>
        
        <creator>Akinobu Kuwahara</creator>
        
        <creator>Takashi Yasuno</creator>
        
        <subject>Self-driving; positioning; global navigation satellite system (GNSS); Global Positioning System (GPS); GLONASS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper describes a simple and low-cost semi-self-driving system constructed without cameras or image processing. In addition, a position correction method based on vehicle dynamics is presented. Conventionally, self-driving vehicles are operated with various expensive environmental recognition sensors. This raises the price of the vehicle, and a complicated system with many sensors also tends to have a high possibility of malfunction. We therefore propose a semi-self-driving system using a single type of global navigation satellite system (GNSS) receiver and a digital compass, based on the concept of a preceding vehicle driven manually by a human and following vehicles that track the preceding vehicle automatically. Each vehicle corrects its coordinates using the current velocity and heading angle obtained from its sensors. Several experimental and simulation results using our developed small-scale vehicles demonstrate the validity of the proposed system and correction method.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_54-Multiple_Vehicles_Semi_Self_Driving_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Cancer Detection and Classification with 3D Convolutional Neural Network (3D-CNN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080853</link>
        <id>10.14569/IJACSA.2017.080853</id>
        <doi>10.14569/IJACSA.2017.080853</doi>
        <lastModDate>2017-08-30T17:19:27.4370000+00:00</lastModDate>
        
        <creator>Wafaa Alakwaa</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>Lung cancer; computed tomography; deep learning; convolutional neural networks; segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, using a dataset from the 2017 Kaggle Data Science Bowl. Thresholding was used as an initial approach to segment lung tissue from the rest of the CT scan, and it produced the next-best lung segmentation. The initial plan was to feed the segmented CT scans directly into 3D CNNs for classification, but this proved inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was first used to detect nodule candidates in the Kaggle CT scans. Because the U-Net nodule detection produced many false positives, the regions of the segmented lungs containing the most likely nodule candidates, as determined by the U-Net output, were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify each CT scan as positive or negative for lung cancer. The 3D CNNs achieved a test-set accuracy of 86.6%. Our CAD system outperforms the current CAD systems in the literature, which involve several training and testing phases that each require large amounts of labeled data, whereas our system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and better generalizability to other cancers.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_53-Lung_Cancer_Detection_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Free Clustered TSP Using a Memetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080852</link>
        <id>10.14569/IJACSA.2017.080852</id>
        <doi>10.14569/IJACSA.2017.080852</doi>
        <lastModDate>2017-08-30T17:19:27.4370000+00:00</lastModDate>
        
        <creator>Abdullah Alsheddy</creator>
        
        <subject>Combinatorial optimization; clustered travelling salesman problem; memetic algorithm; guided local search; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The free clustered travelling salesman problem (FCTSP) is an extension of the classical travelling salesman problem in which the set of vertices is partitioned into clusters, and the task is to find a minimum-cost Hamiltonian tour such that the vertices in any cluster are visited contiguously. This paper proposes a memetic algorithm (MA) that combines the global search ability of a genetic algorithm with local search to refine solutions to the FCTSP. The effectiveness of the proposed algorithm is examined on a set of TSPLIB instances with up to 318 vertices and between 2 and 50 clusters. Moreover, the performance of the MA is compared with a genetic algorithm and a GRASP with path relinking. The computational results confirm the effectiveness of the MA in terms of both solution quality and computational time.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_52-Solving_the_Free_Clustered_TSP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive e-learning using Genetic Algorithm and Sentiments Analysis in a Big Data System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080851</link>
        <id>10.14569/IJACSA.2017.080851</id>
        <doi>10.14569/IJACSA.2017.080851</doi>
        <lastModDate>2017-08-30T17:19:27.4070000+00:00</lastModDate>
        
        <creator>Youness MADANI</creator>
        
        <creator>Jamaa BENGOURRAM</creator>
        
        <creator>Mohammed ERRITALI</creator>
        
        <creator>Badr HSSINA</creator>
        
        <creator>Marouane Birjali</creator>
        
        <subject>Adaptive e-learning; genetic algorithms; information retrieval; social network; period of activity; sentiment analysis; parallel search; big data; Hadoop; MapReduce; Hadoop distributed file system (HDFS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In this article we describe our adaptive e-learning system, which allows learners to take courses adapted to their profile and to the pedagogical objectives set by the teacher. For this adaptation we use genetic algorithms to present the learner with the concepts to be learned in an optimal way, by seeking the objectives best suited to the learner's profile. A second level of adaptation uses one of the learner's social networks (Twitter, Facebook, Google+, etc.). Based on the learner's posts on one of these networks, we propose two levels of analysis: the first looks for the period of activity, which indicates when the learner is active, and the second performs sentiment analysis on the publications posted during that period of activity that are related to education. Our work is therefore to match the learner's profile with the pedagogical objectives by using a genetic algorithm and notions from information retrieval, carried out in a big data system; that is, we parallelize the search problem using Hadoop with the Hadoop distributed file system (HDFS) and the MapReduce programming model. Then, using information from one of the learner's social networks, we determine the learner's period of activity and the sentiment of the publications from that period.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_51-Adaptive_e_Learning_using_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Unsupervised Local Outlier Detection Method for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080850</link>
        <id>10.14569/IJACSA.2017.080850</id>
        <doi>10.14569/IJACSA.2017.080850</doi>
        <lastModDate>2017-08-30T17:19:27.3730000+00:00</lastModDate>
        
        <creator>Tianyu Zhang</creator>
        
        <creator>Qian Zhao</creator>
        
        <creator>Yoshihiro Shin</creator>
        
        <creator>Yukikazu Nakamoto</creator>
        
        <subject>Wireless sensor networks; outlier detection; unsupervised learning; mean-shift algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Recently, wireless sensor networks (WSNs) have enabled many applications, in many areas, that need precise analysis of sensing data. However, sensing datasets sometimes contain outliers. Although outliers occur rarely, they seriously reduce the precision of sensing data analysis. In the past few years, much research has focused on outlier detection, but much of it has ignored the fact that WSNs are usually deployed in a dynamic environment that changes over time. Thus, we propose a new outlier detection method, an unsupervised learning method based on the mean-shift algorithm, that can be used in a dynamic WSN environment. To make our method adapt to a dynamic environment, we define two new distances for outlier detection. Moreover, simulation shows that our method performs well on a real sensing dataset; it finds outliers with a low false positive rate and a high recall. For generality, we also test our method on different synthetic datasets.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_50-An_Unsupervised_Local_Outlier_Detection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Technique for Java Code Complexity Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080849</link>
        <id>10.14569/IJACSA.2017.080849</id>
        <doi>10.14569/IJACSA.2017.080849</doi>
        <lastModDate>2017-08-30T17:19:27.3430000+00:00</lastModDate>
        
        <creator>Nouh Alhindawi</creator>
        
        <creator>Mohammad Subhi Al-Batah</creator>
        
        <creator>Rami Malkawi</creator>
        
        <creator>Ahmad Al-Zuraiqi</creator>
        
        <subject>Complexity; java code; McCabe; Halstead; hybrid technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Software complexity can be defined as the degree of difficulty in the analysis, testing, design and implementation of software. Typically, reducing complexity has a significant impact on maintenance activities. Many metrics have been used to measure the complexity of source code, such as Halstead, McCabe Cyclomatic, Lines of Code, and the Maintainability Index. This paper proposes a hybrid module that combines two theories, Halstead and McCabe, both of which are used to analyze code written in Java. The module provides a mechanism to better evaluate the proficiency level of programmers, and also provides a tool that enables managers to evaluate programming levels and their improvement over time. This is achieved by discovering the differences between levels of complexity in the code: if the complexity level of a program is low, the programmer's proficiency level is high; conversely, if the complexity level is high, the programmer's proficiency level is likely low. The results of the conducted experiments show that the proposed approach gives a very accurate evaluation of the systems under study.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_49-Hybrid_Technique_for_Java_Code_Complexity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualizing Computer Programming in a Computer-based Simulated Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080848</link>
        <id>10.14569/IJACSA.2017.080848</id>
        <doi>10.14569/IJACSA.2017.080848</doi>
        <lastModDate>2017-08-30T17:19:27.3270000+00:00</lastModDate>
        
        <creator>Dr. Belsam Attallah</creator>
        
        <subject>Computer programming; object-oriented programming; programming language; parallelism; multi-threading; concurrency; visualization; visual environment; virtual worlds; Second Life; virtualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper investigated the challenges presented by computer programming (sequential/traditional, concurrent and parallel) for novice programmers and developers. The researcher involved Higher Education Computer Science students learning programming at multiple levels, as they well represent beginning programmers, who often struggle to achieve a running program due to the complexity of this theoretical process, which has no similar real-life representation. The paper explored the difficulties faced by students in understanding this challenging yet fundamental subject of all Computer Science/Computing degree programmes, and focused on the advantages of visualization techniques in facilitating the learning of computer programming, with recommendations on effective computer-based simulated platforms to achieve this visualization. The paper recommended the application of virtual world technologies, such as ‘Second Life’, to achieve the visualization required to facilitate the understanding and learning of computer programming. The paper presented extensive evidence of the advantages of these technologies for program visualization, and of how they facilitated enhanced learning of the programming process. The paper also addressed the benefits of collaboration and experimentation, which are ideal for learning computer programming, and how these aspects are strongly supported in virtual worlds.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_48-Visualizing_Computer_Programming_in_a_Computer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AES-Route Server Model for Location based Services in Road Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080847</link>
        <id>10.14569/IJACSA.2017.080847</id>
        <doi>10.14569/IJACSA.2017.080847</doi>
        <lastModDate>2017-08-30T17:19:27.2970000+00:00</lastModDate>
        
        <creator>Mohamad Shady Alrahhal</creator>
        
        <creator>Muhammad Usman Ashraf</creator>
        
        <creator>Adnan Abesen</creator>
        
        <creator>Sabah Arif</creator>
        
        <subject>Mobile computing; location based services; LBS privacy; LBS accuracy; LBS efficiency; ubiquitous computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The now-ubiquitous use of location-based services (LBS) within the mobile computing domain has enabled users to receive accurate points of interest (POI) for their geo-tagged queries. While location-based services provide rich content, they are not without risks; specifically, the use of LBS poses many serious challenges with respect to privacy protection. Additionally, the efficiency of spatial query processing, and the accuracy of its results, can be problematic when applied to road networks. Existing approaches provide different online route APIs to deliver precise POI, but mobile users demand results that are not only Accurate, Efficient and Secure (AES) but that also do not threaten their privacy. In this paper, we address these challenges by proposing an AES-Route Server (RS) approach for LBS which supports common spatial queries, including range queries and k-nearest-neighbor queries. The proposed AES-RS model secures the user's location while providing query results accurately and efficiently, and it satisfies the primary goals of accuracy, efficiency and privacy for a location-based system.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_47-AES_Route_Server_Model_for_Location_based_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Facebook Iframe as an Effective Tool for Collaborative Learning in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080846</link>
        <id>10.14569/IJACSA.2017.080846</id>
        <doi>10.14569/IJACSA.2017.080846</doi>
        <lastModDate>2017-08-30T17:19:27.2800000+00:00</lastModDate>
        
        <creator>Mohamed A. Amasha</creator>
        
        <creator>Salem Alkhalaf</creator>
        
        <subject>Facebook iframe; collaborative learning; Facebook markup language (FBML); hosting environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Facebook is increasingly becoming a popular environment for online learning. Despite the popularity of using Facebook as an e-learning tool, it has a limitation when it comes to presenting content: another platform is required to run the files. Presented in this paper is a case study of how the Facebook iframe code can be used as a hosting-environment tool to support collaborative activities in higher education at Qassim University. The study was conducted on a sample of (N=45) university students who were enrolled in Selected Topics in Information Systems (INFO491) at the Faculty of Art &amp; Science at Qassim University. We used Facebook markup language (FBML) to design and implement the course. An online questionnaire was used to investigate the students’ perceptions of using the Facebook iframe for the course. Descriptive statistical analysis and the chi-square test were used to analyze the data. According to our results, the participants reported that using the Facebook iframe page increased their understanding and improved their learning performance; in addition, the majority of students said it enabled them to learn more quickly. Our findings also revealed that a Facebook iframe page is a distinctive hosting environment for presenting content.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_46-Using_the_Facebook_Iframe_as_an_Effective_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Identification of Randles Impedance Model Parameters of a PEM Fuel Cell by the Least Square Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080845</link>
        <id>10.14569/IJACSA.2017.080845</id>
        <doi>10.14569/IJACSA.2017.080845</doi>
        <lastModDate>2017-08-30T17:19:27.2670000+00:00</lastModDate>
        
        <creator>Selm&#233;ne Ben Yahia</creator>
        
        <creator>Hatem Allagui</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Randles model; impedance; Proton Exchange Membrane (PEM) fuel cell; modeling; parameters identification; least square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>One of the problems in the industrial development of fuel cells is the reliability of their performance over time. The solution to this problem lies in the development of improved diagnostic methods, such as the identification of parameters. This work focuses on the modeling and identification of the impedance model parameters of a Proton Exchange Membrane (PEM) fuel cell. It is based on the Randles model, represented by a specific complex impedance for each cell. We used the least-square method to determine the model parameters from measured reference values. The proposed identification method is validated for the Randles model, but it can be generalized and applied to others.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_45-The_Identification_of_Randles_Impedance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Music Genres Classification using Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080844</link>
        <id>10.14569/IJACSA.2017.080844</id>
        <doi>10.14569/IJACSA.2017.080844</doi>
        <lastModDate>2017-08-30T17:19:27.2330000+00:00</lastModDate>
        
        <creator>Muhammad Asim Ali</creator>
        
        <creator>Zain Ahmed Siddiqui</creator>
        
        <subject>K-nearest neighbor (k-NN); Support Vector Machine (SVM); music; genre; classification; features; Mel Frequency Cepstral Coefficients (MFCC); principal component analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Classification of music genre has been an inspiring task in the area of music information retrieval (MIR). Genre classification can be valuable for solving interesting practical problems such as creating song references, finding related songs, and finding communities that will like a specific song. The purpose of our research is to find the machine learning algorithm that best predicts the genre of songs, comparing k-nearest neighbor (k-NN) and Support Vector Machine (SVM) classifiers. This paper also presents a comparative analysis between k-NN and SVM with and without dimensionality reduction via principal component analysis (PCA). Mel Frequency Cepstral Coefficients (MFCC) are used to extract features for the dataset, with MFCC features computed for individual tracks. From the results we found that without dimensionality reduction both k-NN and SVM gave more accurate results than with dimensionality reduction. Overall, SVM is the more effective classifier for music genre classification, giving an overall accuracy of 77%.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_44-Automatic_Music_Genres_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Curvelet Transform and Genetic Algorithm for Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080843</link>
        <id>10.14569/IJACSA.2017.080843</id>
        <doi>10.14569/IJACSA.2017.080843</doi>
        <lastModDate>2017-08-30T17:19:27.2200000+00:00</lastModDate>
        
        <creator>Heba Mostafa Mohamed</creator>
        
        <creator>Ahmed Fouad Ali</creator>
        
        <creator>Ghada Sami Altaweel</creator>
        
        <subject>Image steganography; curvelet transform; least significant bits; genetic algorithm; Peak Signal to Noise Ratio (PSNR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In this paper, we present a new hybrid image steganography algorithm combining two well-known techniques: the curvelet transform and the genetic algorithm (GA). The proposed algorithm is called Hybrid Curvelet Transform and Genetic Algorithm for image steganography (HCTGA). The curvelet transform is a multiscale geometric analysis tool whose main advantage is that it can represent curved features efficiently where other transforms are less suitable. The genetic algorithm is a well-known optimization algorithm that aims to find the best solutions to a given computational problem by maximizing or minimizing a particular function. In the proposed algorithm, the cover and secret images first pass through a preprocessing step in which four different filters are applied to remove noise and improve the quality of both images before hiding. The curvelet transform is then applied to the cover image to find its curvelet frequencies, and the secret image is hidden in the Least Significant Bits (LSB) of the cover image’s curvelet frequencies to construct the stego image. Finally, genetic algorithm operations are employed to explore different scenarios for the hiding process by rearranging the hidden bits, choosing the scenario that achieves better image quality and a higher Peak Signal to Noise Ratio (PSNR).</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_43-A_Hybrid_Curvelet_Transform_and_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Bio-Medical Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080842</link>
        <id>10.14569/IJACSA.2017.080842</id>
        <doi>10.14569/IJACSA.2017.080842</doi>
        <lastModDate>2017-08-30T17:19:27.1870000+00:00</lastModDate>
        
        <creator>Muhammad Salman</creator>
        
        <creator>Abdul Wahab Ahmed</creator>
        
        <creator>Omair Ahmad Khan</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Khalid Latif</creator>
        
        <subject>Artificial intelligence; expert systems; bio-medical; healthcare; innovations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In this era and in the future, artificially intelligent machines are replacing humans and playing a key role in enhancing human capabilities in many areas. AI is also improving lifestyles by providing convenience to everyone, ordinary people and professionals alike. That is why AI is gaining huge attention and popularity in the field of computer science, where it has revolutionized the rapidly growing technology known as expert systems. Applications of AI are working in many areas with great impact and are widely used. AI provides quality and efficiency in almost every area in which it is deployed. The main purpose of this paper is to explore the area of medicine and healthcare with respect to AI, along with machine learning and neural networks. This work explores the current use of AI in innovations in the field of bio-medicine and evaluates how it has improved hospital inpatient care and related sectors, i.e., the smart medical home, virtual presence of doctors and patients, automation in diagnostics, etc., which have changed the infrastructure of the medical domain. Finally, an investigation of some expert systems and applications is made. These systems and applications are widely used throughout the world, and a ranking mechanism for their performance is proposed accordingly in an organized manner. We hope this work will be helpful for researchers entering this particular area by providing systematic information on how computer science (i.e., AI, ANN, ML) is revolutionizing the field of bio-medicine and healthcare.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_42-Artificial_Intelligence_in_Bio_Medical_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Distributed Denial of Service Attacks Using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080841</link>
        <id>10.14569/IJACSA.2017.080841</id>
        <doi>10.14569/IJACSA.2017.080841</doi>
        <lastModDate>2017-08-30T17:19:27.1700000+00:00</lastModDate>
        
        <creator>Abdullah Aljumah</creator>
        
        <subject>Distributed Denial of Services (DDoS); ANN; IDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Distributed Denial of Service (DDoS) is a ruthless attack that targets a node or a medium with false packets to degrade network performance and exhaust its resources. Neural networks are a powerful tool to defend a network from this attack: in our proposed solution, a mitigation process is invoked when an attack is detected by the detection system, using known patterns that separate legitimate traffic from malicious traffic and that were given to the artificial neural network during its training process. In this research article, we propose a DDoS detection system using artificial neural networks that flags (marks) malicious and genuine data traffic and saves the network from losing performance. We compare and evaluate our proposed system against existing models from related work on the basis of precision, sensitivity and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_41-Detection_of_Distributed_Denial_of_Service_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Religious Beliefs, Participation and Values on Corruption: Survey Evidence from Iraq</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080840</link>
        <id>10.14569/IJACSA.2017.080840</id>
        <doi>10.14569/IJACSA.2017.080840</doi>
        <lastModDate>2017-08-30T17:19:27.1400000+00:00</lastModDate>
        
        <creator>Marwah Abdulkareem Mahmood Zuhaira</creator>
        
        <creator>Tian Ye-zhuang</creator>
        
        <subject>Beliefs; participation; values; corruption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This research tests the role that religious beliefs, rituals, and values play in corruption in Iraq. Furthermore, the research assesses ethical and moral ideals pertinent to religion in the Iraqi educational sector. Correlation analysis and linear regression are used to assess the relations among the study’s constructs and variables, and the hypotheses are tested with multiple regression using SPSS software. Grounded in data collected from 600 employees, the results affirm that religious beliefs are negatively associated with levels of corruption. Prayers in religious institutions are influenced by the clergy, serving as a set of life instructions to avoid corrupt practices. The generalizability of our results may be limited because we surveyed workers from a single sector; future studies should verify the stability of our findings across other sectors and firms.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_40-The_Effect_of_Religious_Beliefs_Participation_and_Values.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying the Influence of Static Converters’ Current Harmonics on a PEM Fuel Cell using Bond Graph Modeling Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080839</link>
        <id>10.14569/IJACSA.2017.080839</id>
        <doi>10.14569/IJACSA.2017.080839</doi>
        <lastModDate>2017-08-30T17:19:27.1230000+00:00</lastModDate>
        
        <creator>Wafa BEN SALEM</creator>
        
        <creator>Houssem CHAOUALI</creator>
        
        <creator>Dhia MZOUGHI</creator>
        
        <creator>Abdelkader MAMI</creator>
        
        <subject>Static converters; PEM Fuel Cell; boost, buck and buck-boost converters; bond graph; 20-Sim software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper presents the results of adding static converters (Boost, Buck, and Buck-Boost converters) as an adaptation stage between a PEM Fuel Cell generator and a resistive load, in order to study the effects of each converter on the generator’s performance in terms of voltage and current behavior. The presented results are obtained by simulating the developed Bond Graph model under the 20-Sim software and show the current and voltage behaviors with each converter under different working-condition scenarios.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_39-Studying_the_Influence_of_Static_Converters_Current_Harmonics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Steganography using Extensions Kashida based on the Moon and Sun Letters Concept</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080838</link>
        <id>10.14569/IJACSA.2017.080838</id>
        <doi>10.14569/IJACSA.2017.080838</doi>
        <lastModDate>2017-08-30T17:19:27.0930000+00:00</lastModDate>
        
        <creator>Anes. A. Shaker</creator>
        
        <creator>Farida Ridzuan</creator>
        
        <creator>Sakinah Ali Pitchay</creator>
        
        <subject>Text steganography; Arabic text; extension Kashida; capacity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Existing steganography methods are still lacking in terms of capacity; hence, a new steganography method for Arabic text is proposed. The method hides secret information bits within Arabic letters using two features: the moon and sun letters, and the redundant Arabic extension character “-” known as Kashida. The Arabic alphabet contains 28 letters, classified into 14 sun letters and 14 moon letters. This classification is based on the way these letters affect the pronunciation of the definite article (ال) at the beginning of words. The method uses sun letters with one extension to hold the secret bits ‘01’, sun letters with two extensions to hold ‘10’, moon letters with one extension to hold ‘00’, and moon letters with two extensions to hold ‘11’. The capacity of the proposed method is then compared to three popular text steganographic methods. Capacity is measured by two factors: the Embedding Ratio (ER) and The Efficiency Ratio (TER). The results show that the Letter Points and Extensions Method produces 24.91% and 21.56% as the average embedding ratio and average efficiency ratio, respectively. For the Two Extensions ‘Kashida’ Character Method, the average embedding ratio and efficiency ratio are 56.76% and 41.81%, and for the Text Using Kashida Variation Algorithm method they are 31.61% and 27.82%, respectively. Meanwhile, the average embedding ratio and efficiency ratio for the proposed method are 61.16% and 55.70%. Hence, the proposed method outperforms the other three methods in both embedding ratio and efficiency ratio, which leads to the conclusion that it can provide higher capacity than the other methods.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_38-Text_Steganography_using_Extensions_Kashida.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context Aware Fuel Monitoring System for Cellular Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080837</link>
        <id>10.14569/IJACSA.2017.080837</id>
        <doi>10.14569/IJACSA.2017.080837</doi>
        <lastModDate>2017-08-30T17:19:27.0770000+00:00</lastModDate>
        
        <creator>Mohammad Asif Khan</creator>
        
        <creator>Ahmad Waqas</creator>
        
        <creator>Qamar Uddin Khand</creator>
        
        <creator>Sajid Khan</creator>
        
        <subject>Fuel theft; fuel sensor; fuel management; remote monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The past decade has been very productive for the cellular operators of Pakistan, as their subscriber bases and revenues have grown exponentially. After this wave of growth, the operators have now reached saturation, with the highest teledensity of all time. These cellular networks consist of cell sites, which need electrical power to run. Because of the electrical power shortage in Pakistan, the power needs of cell sites are met by electrical power generators installed at each site. These generators run on fossil fuel, a large amount of which is stolen from the sites. This has a very negative impact on network availability and on the operators’ operational expenditure. To cope with this major issue of fuel theft, an embedded system has been designed and tested. This paper highlights this issue in the telecom sector and discusses the design and results of the proposed system, which would reduce cell site operational cost and increase availability in the service area.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_37-Context_Aware_Fuel_Monitoring_System_for_Cellular_Sites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Social Awareness in Autistic Children Trained through Multimedia Intervention Tool using Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080836</link>
        <id>10.14569/IJACSA.2017.080836</id>
        <doi>10.14569/IJACSA.2017.080836</doi>
        <lastModDate>2017-08-30T17:19:27.0470000+00:00</lastModDate>
        
        <creator>Richa Mishra</creator>
        
        <creator>Divya Bhatnagar</creator>
        
        <subject>Asperger Syndrome (ASD); multimedia intervention tool; social skills; autism; computer aided training; autistic children</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This study focuses on creating a guideline for children with ASD (Asperger Syndrome) by simulating social situations and analyzing the children’s understanding of social skills using a multimedia intervention tool designed for this purpose. 84 individuals with ASD from NGOs and clinics were selected for a study of their social and cultural awareness. The children were taught social skills using the specially designed multimedia intervention tool in a controlled environment under the supervision of special educators or parents. A data mining technique was used to extract knowledge from the data collected after the intervention, and the results were analyzed to understand the impact of the tool and shared with special educators and parents of autistic children. The proposed multimedia intervention tool is inexpensive and user friendly, and integrating it has been observed to improve the quality of training for individuals with autism traits. The overall growth in social communication of the observed children was 26.19%. There were substantial variances between age groups, training sets, and behavior parameters on the measures at follow-up. The findings suggest that an intervention started at an early age proves beneficial to children with ASD, and the study establishes the notable benefits of the designed multimedia intervention tool for training them.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_36-Analyzing_the_Social_Awareness_in_Autistic_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating and Protecting Password: A User Intention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080835</link>
        <id>10.14569/IJACSA.2017.080835</id>
        <doi>10.14569/IJACSA.2017.080835</doi>
        <lastModDate>2017-08-30T17:19:27.0170000+00:00</lastModDate>
        
        <creator>Ari Kusyanti</creator>
        
        <creator>Yustiyana April Lia Sari</creator>
        
        <subject>Students Academic Information Systems (SAIS); SEM; intention; PMT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Students Academic Information System (SAIS) is an application that provides academic information for students. The security policy applied by our university requires students to renew their SAIS password in accordance with the university’s policy. This study aims to analyze SAIS users’ behavior using six variables adapted from Protection Motivation Theory (PMT): Perceived Severity, Perceived Vulnerability, Fear, Response Efficacy, Response Cost, and Intentions. Data were collected from 288 SAIS users and analyzed using Structural Equation Modeling (SEM). The results show that the factors affecting the intention to change passwords are perceived severity, fear, response efficacy, and response cost.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_35-Creating_and_Protecting_Password_a_User_Intention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Extraction and Classification Methods for a Motor Task Brain Computer Interface: A Comparative Evaluation for Two Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080834</link>
        <id>10.14569/IJACSA.2017.080834</id>
        <doi>10.14569/IJACSA.2017.080834</doi>
        <lastModDate>2017-08-30T17:19:27.0000000+00:00</lastModDate>
        
        <creator>Oana Diana Eva</creator>
        
        <creator>Anca Mihaela Lazar</creator>
        
        <subject>Brain computer interface; independent component analysis; Itakura distance; phase synchronization; classifiers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>A comparative evaluation is performed on two databases using three feature extraction techniques and five classification methods for a motor imagery paradigm based on the Mu rhythm. To extract features from the electroencephalographic signals, three methods are proposed: independent component analysis, the Itakura distance, and phase synchronization; the last consists of the phase locking value, the phase lag index, and the weighted phase lag index. Classification of the extracted features is performed using linear discriminant analysis, quadratic discriminant analysis, a Mahalanobis distance based classifier, k-nearest neighbors, and support vector machines. The aim of this comparison is to evaluate which feature extraction method and which classifier are more appropriate in a motor brain computer interface paradigm. The results suggest that the effectiveness of the feature extraction method depends on the classification method used.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_34-Feature_Extraction_and_Classification_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object’s Shape Recognition using Local Binary Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080833</link>
        <id>10.14569/IJACSA.2017.080833</id>
        <doi>10.14569/IJACSA.2017.080833</doi>
        <lastModDate>2017-08-30T17:19:26.9830000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>Adnan Ahmed Siddiqui</creator>
        
        <creator>Abdul Aziz</creator>
        
        <creator>Lubaid Ahmed</creator>
        
        <creator>Syed Faisal Ali</creator>
        
        <creator>Fauzan Saeed</creator>
        
        <subject>Local binary patterns; object shape recognition; security technologies; content based recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper discusses object shape identification using the local binary pattern (LBP) technique. Since LBP is computationally simple, it has been used successfully to recognize various objects. LBP, which has the potential to be used in various identification-related fields, was applied to a number of differently shaped objects. The process converts the given image into 3x3 binary matrices, and several rounds of computation yield the final decision parameter, known as the merit function. This parameter is then exploited to uniquely identify the shapes of different objects.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_33-Objects_Shape_Recognition_using_Local_Binary_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Management Strategy of a PV/Fuel Cell/Supercapacitor Hybrid Source Feeding an off-Grid Pumping Station</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080832</link>
        <id>10.14569/IJACSA.2017.080832</id>
        <doi>10.14569/IJACSA.2017.080832</doi>
        <lastModDate>2017-08-30T17:19:26.9700000+00:00</lastModDate>
        
        <creator>Houssem CHAOUALI</creator>
        
        <creator>Hichem OTHMANI</creator>
        
        <creator>Mohamed Selm&#233;ne BEN YAHIA</creator>
        
        <creator>Dhafer MEZGHANI</creator>
        
        <creator>Abdelkader MAMI</creator>
        
        <subject>Energy management strategy; simulink; pumping station; photovoltaic generator; fuel cell generator; supercapacitor; fuzzy logic control technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This work aims to develop an accurate energy management strategy for a hybrid renewable energy system feeding a pumping station. A model developed under the Simulink environment is used to compare the performance of the pumping system when it is fed only by a photovoltaic generator, by a hybrid photovoltaic and fuel cell system, and finally by a hybrid photovoltaic, fuel cell, and supercapacitor system. The developed control strategy is based on the fuzzy logic control technique. Several simulations under demanding working-condition scenarios show that the developed control strategy brings major enhancements in system performance and that the use of the supercapacitor yields economic profits by reducing fuel cell production during critical solar irradiation periods.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_32-Energy_Management_Strategy_of_a_PV_Fuel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward a New Massively Distributed Virtual Machine based Cloud Micro-Services Team Model for HPC: SPMD Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080831</link>
        <id>10.14569/IJACSA.2017.080831</id>
        <doi>10.14569/IJACSA.2017.080831</doi>
        <lastModDate>2017-08-30T17:19:26.9530000+00:00</lastModDate>
        
        <creator>Fat&#233;ma Zahra Benchara</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>Ouafae Serrar</creator>
        
        <creator>Hassan Ouajji</creator>
        
        <subject>Parallel and distributed computing; micro-services; cloud computing; distributed virtual machine; high performance computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper proposes a new massively distributed virtual machine with scalable and efficient parallel computing models for High Performance Computing (HPC). The message passing paradigm among processing units has a significant impact on HPC, with high communication costs that penalize the performance of these models. Accordingly, the proposed micro-services model allows HPC applications to increase processing power at low communication cost. The model’s Micro-services Virtual Processing Units (MsVPUs) cooperate through an asynchronous communication mechanism based on the Advanced Message Queuing Protocol (AMQP) in order to maintain the scalability of Single Program Multiple Data (SPMD) applications. This mechanism also enhances the efficiency of the model’s load balancing service with a time-optimized load balancing strategy. The proposed virtual machine is tested and validated through an application of fine-grained parallel programs for big data classification. Experimental results show reduced execution time compared to a virtual machine based on the mobile agents model.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_31-Toward_a_New_Massively_Distributed_Virtual_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Learning Application Development for Improvement of English Listening Comprehension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080830</link>
        <id>10.14569/IJACSA.2017.080830</id>
        <doi>10.14569/IJACSA.2017.080830</doi>
        <lastModDate>2017-08-30T17:19:26.9230000+00:00</lastModDate>
        
        <creator>Zahida Parveen Laghari</creator>
        
        <creator>Hameedullah Kazi</creator>
        
        <creator>Muhammad Ali Nizamani</creator>
        
        <subject>Mobile Learning (M-Learning); Early Grade Reading Assessment (EGRA); English as Secondary Language (ESL); Automatic Speech Recognition (ASR); Personal Digital Assistance (PDA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The trend towards English language learning has increased because English is considered a lingua franca, i.e., a common language of communication. However, students in Pakistan, especially rural elementary students, lag behind in this respect. In rural areas there is a crucial need for after-school assistance with the curriculum, because students mostly have no one to help them at home. M-Learning (Mobile Learning) supports learning anywhere and anytime, and this ubiquitous quality makes it helpful for after-school programs and education in rural areas. The aim of this study is to develop an M-Learning application for improving English listening comprehension in rural primary school students. The developed application is based on listening comprehension and embeds the English curriculum of the Sindh Textbook Board for grades 1, 2, and 3. The study took the form of an after-school program in a village in Pakistan, with 45 grade-3 students from a rural primary school as participants. Since the application is based on recognition and memorization of information, the knowledge and comprehension levels of the cognitive domain from Bloom’s taxonomy were selected for choosing the types of evaluation questions, and on that basis the EGRA (Early Grade Reading Assessment) test was used for evaluation. The test was conducted on two experimental groups and one control group, and the results of the groups were compared. The results confirm that English M-Learning applications can be a helpful tool for students in rural areas who face problems in learning their English curriculum, since their relatives are not able to teach them adequately.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_30-Mobile_Learning_Application_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Simulation of a Novel Dual Band Microstrip Antenna for LTE-3 and LTE-7 Bands </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080829</link>
        <id>10.14569/IJACSA.2017.080829</id>
        <doi>10.14569/IJACSA.2017.080829</doi>
        <lastModDate>2017-08-30T17:19:26.8900000+00:00</lastModDate>
        
        <creator>Abdullah Al Hasan</creator>
        
        <creator>Mohammad Shahriar Siraj</creator>
        
        <creator>Muhammad Mostafa Amir Faisal</creator>
        
        <subject>Long Term Evolution (LTE); microstrip; dual band; u-slot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Long Term Evolution (LTE) is currently deployed in many developed countries and will hopefully be implemented in more. An antenna operating in the LTE-3 band can support global roaming in ITU Regions 1 and 3, Costa Rica, Brazil, and parts of some Caribbean countries, while an antenna operating in the LTE-7 band is appropriate for global roaming in ITU Regions 1, 2, and 3. An antenna operating in both bands therefore halves the space the antenna occupies in a device and allows roaming in all the regions mentioned above. Currently available antennas operating in the LTE-3 and LTE-7 bands are considerably large, so a dual band microstrip antenna operating in both bands is proposed in this work with a notable size reduction. Simulation of the proposed antenna shows resonant frequencies at 1.88GHz and 2.55GHz with return loss below -10dB, covering both the LTE-3 and LTE-7 bands. Design and simulation of the proposed antenna were performed with the Zeland IE3D software. The proposed antenna is suitable for global roaming in ITU Regions 1, 2, and 3, which cover most of the world’s telecom networks.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_29-Design_and_Simulation_of_a_Novel_Dual_Band.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Normalisation of Technology Use in a Developing Country Higher Education Institution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080828</link>
        <id>10.14569/IJACSA.2017.080828</id>
        <doi>10.14569/IJACSA.2017.080828</doi>
        <lastModDate>2017-08-30T17:19:26.8770000+00:00</lastModDate>
        
        <creator>Ibrahim Osman Adam</creator>
        
        <creator>Osman Issah</creator>
        
        <subject>Course and lecturer evaluation; Higher Education Institution (HEI); Normalisation; Normalisation Process Theory (NPT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The purpose of this study is to understand how an online course and lecturer evaluation became the normalised way of evaluating courses and lecturers in a developing country higher education institution. Extant literature on course and lecturer evaluations has concentrated on the approaches to evaluating courses and lecturers and on their effectiveness and benefits; less attention has been paid to how online evaluations become the medium for lecturer and course evaluation. To address this gap, this study used an interpretive case study approach, collecting data through semi-structured interviews, documents, and participant observation. Data analysis was conducted using hermeneutics, with Normalisation Process Theory as the theoretical lens. The results show that the online evaluation of courses and lecturers is now a normal practice because participants invested in the meaning of the online evaluation process, enrolled in it, committed their actions to it, and provided feedback during implementation and use, which ensured the normalisation.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_28-Normalisation_of_Technology_use_in_a_Developing_Country.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Prevention of SQL Injection Attack by Dynamic Analyzer and Testing Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080827</link>
        <id>10.14569/IJACSA.2017.080827</id>
        <doi>10.14569/IJACSA.2017.080827</doi>
        <lastModDate>2017-08-30T17:19:26.8430000+00:00</lastModDate>
        
        <creator>Rana Muhammad Nadeem</creator>
        
        <creator>Rana Muhammad Saleem</creator>
        
        <creator>Rabnawaz Bashir</creator>
        
        <creator>Sidra Habib</creator>
        
        <subject>Structured Query Language (SQL); injection attack; request receiver; analyzer and tester</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>With the emergence and popularity of web applications, threats related to them have increased to a large extent. Among the many web application threats, the Structured Query Language Injection Attack (SQLIA) is dominant because of its ability to access data. Many solutions have been proposed in this regard that succeed under specific conditions. The proposed model is based on a dynamic analyzer and has certain advantages, such as wide applicability, fast response time, coverage of a large number of SQL Injection (SQLI) techniques, and efficient resource usage.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_27-Detection_and_Prevention_of_SQL_Injection_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ReCSDN: Resilient Controller for Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080826</link>
        <id>10.14569/IJACSA.2017.080826</id>
        <doi>10.14569/IJACSA.2017.080826</doi>
        <lastModDate>2017-08-30T17:19:26.8130000+00:00</lastModDate>
        
        <creator>Soomaiya Hamid</creator>
        
        <creator>Narmeen Zakaria Bawany</creator>
        
        <creator>Jawwad Ahmed Shamsi</creator>
        
        <subject>Software Defined Networking (SDN); SDN Controller security; Distributed Denial of Service (DDoS) attack; load balancing; SDN controller cluster; Open Network Operating System (ONOS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Software Defined Networking (SDN) is an emerging network paradigm that provides central control over the network. Although this simplifies network management and makes efficient use of network resources, it introduces new threats to network reliability and scalability. In fact, a single centralized controller is a single point of failure. Moreover, a single controller may become a performance bottleneck as processing overhead increases. Distributed SDN controller platforms improve reliability and scalability to some extent; however, they remain vulnerable to Distributed Denial of Service (DDoS) attacks, specifically on the control plane. We believe that there is a need for a distributed controller framework that is capable of providing service continuity without performance degradation in case of excessive network traffic or DDoS attacks on the controller. In this paper, we aim to address the vulnerabilities of the SDN control plane. We propose and implement an efficient and Resilient Controller for Software Defined Networks (ReCSDN). This framework is capable of detecting and mitigating DDoS attacks in a timely manner and ensures the continuity of services without performance degradation. We created an experimental test bed using Mininet to conduct extensive experiments. We deployed ReCSDN on top of an Open Network Operating System (ONOS) cluster to confirm the viability of our approach. The experiment results show that with ReCSDN, the control plane is not only able to withstand excessive network load but also continues to provide services in case of any controller failure.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_26-ReCSDN_Resilient_Controller_for_Software_Defined_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Verification of Payment System in E-Banking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080825</link>
        <id>10.14569/IJACSA.2017.080825</id>
        <doi>10.14569/IJACSA.2017.080825</doi>
        <lastModDate>2017-08-30T17:19:26.7800000+00:00</lastModDate>
        
        <creator>Iqra Obaid</creator>
        
        <creator>Syed Asad Raza Kazmi</creator>
        
        <creator>Awais Qasim</creator>
        
        <subject>E-banking; model checking; Simple Promela Interpreter (SPIN); formal methods; Linear Temporal Logic (LTL) formula; Promela introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Formal modeling and verification techniques have been used to ensure the reliability and accuracy of many systems. In contrast to ordinary testing techniques, which exhibit the presence of flaws and errors in a system, formal methods prove their absence. Electronic banking (e-banking) services have become very popular with the escalating development of information and communication technology. Due to its complexity, an e-banking system requires an efficient security model. One important approach to ensuring the reliability and security of an e-banking system is the use of formal methodologies. This study explores the opportunity of modeling an interbank payment system through a case study of 1-link Automated Teller Machines (ATMs). A generic verification system, SPIN (Simple Promela Interpreter), is therefore employed to model and then verify the integrity and security of the payment system in e-banking. Linear temporal logic formulas are further summarized to assure the security of the e-banking system. The principal conclusion of the work is a complete procedure for modeling and verifying the payment system in 1-link ATMs.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_25-Modeling_and_Verification_of_Payment_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validating a Novel Conflict Resolution Strategy Selection Method (ConfRSSM) Via Multi-Agent Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080824</link>
        <id>10.14569/IJACSA.2017.080824</id>
        <doi>10.14569/IJACSA.2017.080824</doi>
        <lastModDate>2017-08-30T17:19:26.7670000+00:00</lastModDate>
        
        <creator>Alicia Y.C. Tang</creator>
        
        <creator>Ghusoon Salim Basheer</creator>
        
        <subject>Multi-agent; conflict resolution strategy; conflict states; confidence level; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Selecting a suitable conflict resolution strategy when conflicts appear in multi-agent environments is a hard problem. There is a need for a method that can select a suitable strategy that guarantees low cost in terms of the number of messages and time ticks. This paper focuses on conflicts over agents’ individual opinions and decision making, taking into account agent features such as collaboration, autonomy, and local communication. The significance of this research is two-fold. Firstly, it attempts to prove the significance of giving agents the ability to select an appropriate strategy in different conflict states depending on conflict specifications such as conflict strengths and the confidence levels of the conflicting agents. Secondly, the study developed a new method named ConfRSSM for reducing the communication cost and time taken to select a conflict resolution strategy. The approach ignores some conflict states and replaces complex strategies with simpler ones in some conflicting cases. Results show that ConfRSSM reduces the number of messages and time ticks, thus improving the entire conflict resolution process.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_24-Validating_a_Novel_Conflict_Resolution_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A 7-Layered E-Government Framework Consolidating Technical, Social and Managerial Aspects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080823</link>
        <id>10.14569/IJACSA.2017.080823</id>
        <doi>10.14569/IJACSA.2017.080823</doi>
        <lastModDate>2017-08-30T17:19:26.7500000+00:00</lastModDate>
        
        <creator>Mohammed Hitham M.H</creator>
        
        <creator>Dr. Hatem Elkadi H.K</creator>
        
        <creator>Dr. Sherine Ghoneim S.G</creator>
        
        <subject>E-government; framework; challenges; decision support system (DSS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>E-Government has been hyped for the last two decades, yet several implementations still do not reach the intended success. Different definitions, and consequently different models of operation and assessment, were developed. This required the formulation of various frameworks describing the different perceptions and understandings of e-Government. The proposed frameworks tend to agree on a set of elements, but each framework seems to have one or a few different elements, depending on the perception of the framework founder. Also, entire categories (or dimensions) of elements seem to be left out. Through a literature review and field survey, the authors identified the challenges of an e-Government initiative, categorized in five dimensions: technical, adoption, organizational, strategic, and cultural. Not all categories were covered in any of the existing e-Government frameworks. This proves to be awkward in the formulation of new e-Government initiatives or in the assessment of existing ones and their evolution plans. In an effort to represent the majority of the factors and elements involved in most e-Government initiatives, the authors present a proposed seven-layer framework for e-Government. The layers included are: 1) end user access layer, 2) e-government layer, 3) organization layer, 4) national infrastructure layer, 5) strategic layer, 6) social cultural layer, and 7) national execution layer. The proposed model is compared with existing models and demonstrates that it covers all the aforementioned dimensions.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_23-A_7_Layered_E_Government_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>InstDroid: A Light Weight Instant Malware Detector for Android Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080822</link>
        <id>10.14569/IJACSA.2017.080822</id>
        <doi>10.14569/IJACSA.2017.080822</doi>
        <lastModDate>2017-08-30T17:19:26.7200000+00:00</lastModDate>
        
        <creator>Saba Arshad</creator>
        
        <creator>Rabia Chaudhary</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Neshmia Hafeez</creator>
        
        <creator>Muhammad Kamran Abbasi</creator>
        
        <subject>Android; static; resource efficient; power consumption; memory; detection rate; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>With the increasing popularity of the Android operating system, its security concerns have also risen to a new horizon in the past few years. Different researchers have introduced different approaches to mitigate malware attacks on Android devices, and they have succeeded in providing security to some extent, but these antimalware techniques are still resource inefficient and take a long time to detect the malicious behavior of applications. In this paper, the basic security mechanisms provided by Google Android and their limitations are discussed. Also, the existing antimalware techniques which rely on the basic detection approaches are discussed and their limitations are highlighted. This research proposes a lightweight instant malware detector, named InstDroid, for Android devices that can identify malicious applications immediately. Through experiments, it is shown that InstDroid is an instant malware detector that provides instant security at low resource consumption (power and memory) in comparison to other well-known commercial antimalware applications.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_22-InstDroid_A_Light_Weight_Instant_Malware_Detector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Usability of Government Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080821</link>
        <id>10.14569/IJACSA.2017.080821</id>
        <doi>10.14569/IJACSA.2017.080821</doi>
        <lastModDate>2017-08-30T17:19:26.7200000+00:00</lastModDate>
        
        <creator>Mahmood Ashraf</creator>
        
        <creator>Faiza Shabbir Cheema</creator>
        
        <creator>Tanzila Saba</creator>
        
        <creator>Abdul Mateen</creator>
        
        <subject>Usability; statutory bodies websites; government websites</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Usability of Government websites plays a pivotal role in providing benefits and services to citizens. This study presents a usability evaluation investigating Nielsen’s usability attributes in Government websites. Based on previous studies, a proposed website template is used in this study. This template is compared with a selected Government website. Thirty (30) participants performed three (3) representative tasks for each website. The results show that the user responses for the parameters of efficiency, memorability, and pleasantness are improved for the proposed template. This effort is part of a study that may lead to principles for improving the usability of the Government websites of Pakistan.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_21-Usability_of_Government_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Suitable Personality Traits for Learning Programming Subjects: A Rough-Fuzzy Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080820</link>
        <id>10.14569/IJACSA.2017.080820</id>
        <doi>10.14569/IJACSA.2017.080820</doi>
        <lastModDate>2017-08-30T17:19:26.6870000+00:00</lastModDate>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Jafreezal Jaafar</creator>
        
        <creator>Mazni Omar</creator>
        
        <creator>Shuib Basri</creator>
        
        <creator>Izzatdin Abdul Aziz</creator>
        
        <creator>Qamar Uddin Khand</creator>
        
        <creator>Mohd Hilmi Hasan</creator>
        
        <subject>Software development; personality; programming; rough sets; fuzzy sets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Programming is a cognitive activity which requires logical reasoning to code for abstract representation. This study aims to find the personality traits of students who maintain effective grades in programming courses such as structured programming (SP) and object oriented programming (OOP), classified by gender. Data were collected from three universities to develop, validate, and generalize the Rough-Fuzzy model. Genetic and Johnson algorithms were applied under Rough set theory&#8217;s (RST) principles to extract the decision rules. In addition, Standard Voting, Na&#239;ve Bayesian, and Object Tracking procedures were applied on the generated decision rules to find the prediction accuracy of each algorithm. Mamdani&#8217;s Fuzzy Inference System (FIS) was used for mapping the decision rules&#8217; conditions (input) to decisions (output) based on fuzzy set theory (FST) to develop the model. The results highlight that certain personality compositions can be suitable for scoring good grades in programming subjects. For instance, a female student is capable of improving her programming skills if she has introvert and sensing personality traits. Therefore, it is important to investigate an appropriate personality composition for programming learners.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_20-Suitable_Personality_Traits_for_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Implementing Ontology for Managing Learners’ Profiles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080819</link>
        <id>10.14569/IJACSA.2017.080819</id>
        <doi>10.14569/IJACSA.2017.080819</doi>
        <lastModDate>2017-08-30T17:19:26.6570000+00:00</lastModDate>
        
        <creator>Korchi Adil</creator>
        
        <creator>El Amrani El Idrissi Najiba</creator>
        
        <creator>Oughdir Lahcen</creator>
        
        <subject>Ontology; computing environments for human learning (CEHL); learner; learner’s profile; XML/RDF; JENA API; OWL; PERFECT-LEARN; inference; learner modeling; SPARQL; semantic links; concepts; sub-concepts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper presents an issue that is important to consider when developing a learning environment whose field is constantly evolving, mainly in terms of the use of training platforms. Research in this field has enabled the successful use of information technologies for the benefit of human learning, while placing the learner at the heart of pedagogic situations. It is also an environment that integrates human agents (tutors, learners) and artificial agents (computers) and allows them to interact locally or through computer networks, as well as conditions for accessing local or distributed training resources. Moreover, several computing environments for human learning (CEHL) platforms are available on the web for free access. These platforms are environments that offer a learner a multitude of courses in various formats in order to satisfy the learner’s desire to learn. But learning itself is not enough, which is why a new generation of advanced learning systems has emerged, integrating new pedagogical approaches that give the learner an active role in learning and acquiring knowledge by offering more interactivity and incorporating a more learner-centered vision. These new generations of advanced learning systems adapt to learners and their profiles by taking into account their cognitive, intellectual, and motivational characteristics. This adaptation cannot be achieved without the complicity of ontological engineering, which plays a very important role in the sharing of knowledge between humans and computers, and between computers themselves, and in the reuse of concepts through computational semantics. In the same way, this paper aims at creating a process for modeling and managing learners’ profiles based on ontology, whatever the learning situation may be. This management process is implemented in a computer environment based on the learner’s ontology, which supports the learner by detecting gaps in several factors in order to improve them and adapt the pedagogical content to the learner’s profile.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_19-Modeling_and_Implementing_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent based Functional Testing in the Distributed Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080818</link>
        <id>10.14569/IJACSA.2017.080818</id>
        <doi>10.14569/IJACSA.2017.080818</doi>
        <lastModDate>2017-08-30T17:19:26.6400000+00:00</lastModDate>
        
        <creator>Muhammad Fraz Malik</creator>
        
        <creator>M. N. A. Khan</creator>
        
        <creator>Uzma Bibi</creator>
        
        <creator>Muhammad Ayaz Malik</creator>
        
        <subject>Software quality assurance; software testing; distributed environment; input variation testing; test vectors; multi-agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Verification and testing are two formal defect-reduction techniques applied in the design and development phases of the SDLC to rationalize quality assurance activities. The process of testing applications in a distributed environment is highly complex. This study discusses a distributed testing framework that consists of many parallel tester components. The idea is based on utilizing a client-server environment to conduct software testing efficiently and in a short span of time. It is pertinent to mention that this study is restricted to testing the functional aspects of the software, while testing of performance and other quality-of-service aspects is outside the scope of the study. An important factor influencing the use of agent technology in software testing is the dynamic nature of events. Since agents are characterized by intelligence and autonomy, their ability to interact with the environment offers added functionality to make decisions based on the needs of scenarios that are dynamic in nature. This study shows that the use of agents to build a dynamic model for software testing in a distributed environment results in a more robust and efficient design. The proposed framework is based on the distribution of test cases among multiple agents deployed across a distributed system, which collaborate with each other to perform testing in an efficient manner. The proposed framework also provides in-depth visibility into software quality by providing defect statistics on-the-fly. The experiments have been conducted using the Selenium test automation tool. The test cases, along with their test scripts and test run results, are described herein.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_18-Multi_Agent_based_Functional_Testing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Synthesis on SWOT Analysis of Public Sector Healthcare Knowledge Management Information Systems in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080817</link>
        <id>10.14569/IJACSA.2017.080817</id>
        <doi>10.14569/IJACSA.2017.080817</doi>
        <lastModDate>2017-08-30T17:19:26.6270000+00:00</lastModDate>
        
        <creator>Arfan Arshad</creator>
        
        <creator>Mohamad Fauzan Noordin</creator>
        
        <creator>Roslina Bint Othman</creator>
        
        <subject>Healthcare; knowledge management; healthcare knowledge management information system; information and communications technology; SWOT analysis; internal and external factors; healthcare organizations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Healthcare is a community service sector that has been delivering its services for the betterment of civic health since its establishment at the communal level. To work efficiently and effectively, this sector profoundly relies on the correct and complete health information of people and on a proficient, integrated healthcare knowledge management information system (HKMIS) to manage this information. The performance of healthcare organizations has been significantly augmented by the inception of Information and Communications Technology (ICT) in HKMIS in developed countries, but ICT is yet to exhibit its full potential in developing countries, specifically those with huge populations like Pakistan. An exploratory qualitative research methodology was adopted to conduct this study. The purpose and objective of this study was to determine and investigate the internal and external factors that influence the performance of HKMIS by performing a SWOT analysis on two of the largest public-sector healthcare organizations of Pakistan. The findings of this study will help authorities devise methods of improvement for Pakistani HKMIS, eventually paving the way towards better and improved healthcare in the future.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_17-A_Synthesis_on_SWOT_Analysis_of_Public_Sector_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Implementation of SVM for Nonlinear Systems Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080816</link>
        <id>10.14569/IJACSA.2017.080816</id>
        <doi>10.14569/IJACSA.2017.080816</doi>
        <lastModDate>2017-08-30T17:19:26.5930000+00:00</lastModDate>
        
        <creator>Intissar SAYEHI</creator>
        
        <creator>Mohsen MACHHOUT</creator>
        
        <creator>Rached TOURKI</creator>
        
        <subject>Machine learning; nonlinear system; SVM regression; Reproducing Kernel Hilbert Space (RKHS); MATLAB; Field-Programmable Gate Arrays (FPGA); Xilinx System Generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This work reviews previous implementations of Support Vector Machines for classification and regression and explicates the different methods and approaches adopted. Given the rarity of works in the field of nonlinear systems regression, an implementation of the testing phase of SVM was proposed, exploiting the parallelism and reconfigurability of the Field-Programmable Gate Array (FPGA) platform. The nonlinear system chosen for the application was a real, challenging model: a fluid level control system existing in our laboratory. The implemented design with fixed-point precision demonstrates good results compared with the software performance, based on the Normalized Mean Squared Error. In terms of computation time, a speed-up factor of 60 compared to MATLAB results was achieved. Due to the flexibility of Xilinx System Generator, the design can be reused for any other system with different data set sizes and various kernel functions.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_16-FPGA_Implementation_of_SVM_for_Nonlinear_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Hybrid Model in Vehicular Clouds based on Data Types (IHVCDT)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080815</link>
        <id>10.14569/IJACSA.2017.080815</id>
        <doi>10.14569/IJACSA.2017.080815</doi>
        <lastModDate>2017-08-30T17:19:26.5800000+00:00</lastModDate>
        
        <creator>Saleh A. Khawatreh</creator>
        
        <creator>Enas N. Al-Zubi</creator>
        
        <subject>Vehicular Cloud (VC); Vehicular Cloud Computing (VCC); Vehicular Ad hoc Networks (VANETs); cloud algorithms;  hybrid transmissions; IEEE 802.11p; Long-Term Evolution (LTE); transmission cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In a Vehicular Cloud (VC), vehicles collect data from the surrounding environment and exchange this data among the vehicles and the cloud centers. To do this efficiently, the vehicles are first organized into clusters, each working as a VC, and every cluster is managed by a cluster head (broker). The vehicles are grouped into clusters of adaptive size based on their mobility and capabilities. This model forms the clusters based on the vehicles’ capabilities and handles different types of data according to their importance in order to select the best route. A hybrid model is proposed to deal with these differences; Long-Term Evolution (LTE) is used with IEEE 802.11p, which forms the traditional wireless access for Vehicular Ad hoc Networks (VANETs). This combination provides high data delivery, wide-range transmission, and low latency. However, using only LTE-based VANET is not practical due to its high cost and the heavy load placed on the base stations. In this paper, a new Vehicular Cloud (VC) model is proposed which provides data as a service based on Vehicular Cloud Computing (VCC). A new method is proposed for high data dissemination based on data types. The model is classified into three modes: the urgent mode, the bulk mode, and the normal mode. In the urgent mode, LTE is used to achieve high delivery with minimum delay. In the bulk mode, the vehicle uses IEEE 802.11p and chooses two clusters to divide the huge data. In the normal mode, the model works as a D-hops cluster-based algorithm.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_15-Improved_Hybrid_Model_in_Vehicular_Clouds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Cryptosystem using Vigenere and Metaheuristics for RGB Pixel Shuffling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080814</link>
        <id>10.14569/IJACSA.2017.080814</id>
        <doi>10.14569/IJACSA.2017.080814</doi>
        <lastModDate>2017-08-30T17:19:26.5470000+00:00</lastModDate>
        
        <creator>Zakaria KADDOURI</creator>
        
        <creator>Mohamed Amine Hyaya</creator>
        
        <creator>Mohamed KADDOURI</creator>
        
        <subject>Cryptography; cryptosystem; Vigenere; metaheuristics; image; pixel shuffling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>In this article, we present a new approach using Vigenere and metaheuristics to solve a pixel-shuffling problem for image encryption. First, the image is adapted to match the resolution of the system by transforming it into a list of intensities and coordinates. The idea is to use Vigenere encryption to maximize confusion by widening the domain of intensities. Then, metaheuristics play the major role in encryption, generating an appropriate meta-key in order to shuffle the lists. Thus, both the Vigenere key and the meta-key are used for encryption and later for decryption by the recipient. Finally, a comparison of different metaheuristics is proposed to find the most suitable one for this cryptosystem.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_14-A_New_Cryptosystem_using_Vigenere.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synchronous Authentication Key Management Scheme for Inter-eNB Handover over LTE Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080813</link>
        <id>10.14569/IJACSA.2017.080813</id>
        <doi>10.14569/IJACSA.2017.080813</doi>
        <lastModDate>2017-08-30T17:19:26.5170000+00:00</lastModDate>
        
        <creator>Shadi Nashwan</creator>
        
        <subject>LTE network; X2 handover; horizontal and vertical key derivations; desynchronizing attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Handover execution without active session termination is considered one of the most important attributes of Long Term Evolution (LTE) networks. Unfortunately, this service suffers from a growing number of security threats. In the Inter-eNB handover, an attacker may exploit these threats to violate user privacy and desynchronize the handover entities. Authentication is therefore the main challenge in this setting. This paper proposes a synchronous authentication scheme to enhance the security level of key management during the Inter-eNB handover process in LTE networks. The security analysis proves that the proposed scheme is secure against the current security drawbacks with perfect backward/forward secrecy. Furthermore, the performance analysis, in terms of authentication operation cost and bandwidth overhead, demonstrates that the proposed scheme achieves a high level of security with desirable efficiency.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_13-Synchronous_Authentication_Key_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Transmission Line Protection Characteristics with DSTATCOM Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080812</link>
        <id>10.14569/IJACSA.2017.080812</id>
        <doi>10.14569/IJACSA.2017.080812</doi>
        <lastModDate>2017-08-30T17:19:26.5000000+00:00</lastModDate>
        
        <creator>Yasar Khan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Sanaullah Ahmad</creator>
        
        <subject>Power system analysis; DSTATCOM; transmission line loss minimization; distribution dynamic compensation; transmission losses and efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>To meet ever-growing load demands, new transmission lines should be added to the existing power system, but economic and environmental concerns are major constraints on this addition. Hence, utilities have to rely on the existing power system infrastructure with some modifications. To enhance controllability and boost the power transfer potential of the existing power system, the use of a Flexible Alternating Current Transmission System (FACTS) device is the most viable modification. FACTS devices include the Static VAR Compensator (SVC), Thyristor Controlled Series Capacitor (TCSC), Thyristor Controlled Reactor (TCR), Thyristor Switched Capacitor (TSC), and self-commutated VAR compensators, i.e., the Static Synchronous Compensator (DSTATCOM). Among the FACTS devices, the DSTATCOM is the most feasible choice because of its capability to furnish both leading and lagging reactive power, its faster response time in comparison with others, its smaller harmonic content, its minimal inrush current generation, and its good dynamic performance under voltage variations. The DSTATCOM has the ability to exercise effective control over various issues concerning AC power transmission. However, the parameters of the protection devices in the present power system are set without taking into account the reaction of these FACTS devices. Therefore, to ascertain the stability and reliability of the power system, the interaction of FACTS devices with existing protection schemes must be thoroughly investigated. This paper aims to explore the deviations in the performance characteristics of transmission line protection due to the installation of a DSTATCOM on a 220 kV EHV transmission line, using theoretical as well as MATLAB/SIMULINK simulation models. The dynamic performance of a DSTATCOM connected to an existing transmission line system is evaluated when a large industrial induction motor is started and voltage sags are introduced.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_12-Performance_Evaluation_of_Transmission_Line.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Radial basis Function Interpolation Performance on Cranial Implant Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080811</link>
        <id>10.14569/IJACSA.2017.080811</id>
        <doi>10.14569/IJACSA.2017.080811</doi>
        <lastModDate>2017-08-30T17:19:26.4700000+00:00</lastModDate>
        
        <creator>Ferhat Atasoy</creator>
        
        <creator>Baha Sen</creator>
        
        <creator>Fatih Nar</creator>
        
        <creator>Ismail Bozkurt</creator>
        
        <subject>Cranioplasty; interpolation on medical images; radial basis function interpolation; symmetrical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Cranioplasty is a neurosurgical operation for repairing cranial defects caused by a previous operation or trauma. Various methods have been presented for cranioplasty from past to present. In computer-aided design based methods, the quality of an implant depends on the operator&#39;s skill. In mathematical model based methods, such as curve fitting and various interpolations, the healthy parts of a skull are used to generate the implant model. Researchers have sought to improve the performance of mathematical models that are independent of operator skill. In this study, an improvement of radial basis function (RBF) interpolation performance using symmetrical data is presented. Since we focused on improving RBF interpolation performance for cranial implant design, the results were compared with previous studies involving the same technique. Compared with previously presented results, the difference between the computed implant model and the original skull was reduced from 7 mm to 2 mm using the newly proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_11-Improvement_of_Radial_basis_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PCA based Optimization using Conjugate Gradient Descent Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080810</link>
        <id>10.14569/IJACSA.2017.080810</id>
        <doi>10.14569/IJACSA.2017.080810</doi>
        <lastModDate>2017-08-30T17:19:26.4400000+00:00</lastModDate>
        
        <creator>Subhas A. Meti</creator>
        
        <creator>V.G. Sangam</creator>
        
        <subject>Associative neural network (AANN); conjugate gradient descent; Non-Linear Principal Component Analysis (NLPCA); Principal Component Analysis (PCA); Wireless Body Area Network (WBAN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Energy dissipation in Wireless Body Area Network (WBAN) systems is a major concern, as it directly affects system longevity. This energy dissipation mainly arises from signal interference from other networks, which causes a reduction in dimensionality. Data prediction in WBAN is also a considerable concern, owing to misinterpretations and faults in the signals. In this paper, a novel combination of Principal Component Analysis (PCA) pre-processing with optimization using the conjugate gradient descent algorithm is proposed. Experimental observations show an improvement in the mean square error and the regression-based correlation coefficient when compared with other standard techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_10-PCA_based_Optimization_using_Conjugate_Gradient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shadow Identification in Food Images using Extreme Learning Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080809</link>
        <id>10.14569/IJACSA.2017.080809</id>
        <doi>10.14569/IJACSA.2017.080809</doi>
        <lastModDate>2017-08-30T17:19:26.4230000+00:00</lastModDate>
        
        <creator>SALWA KHALID ABDULATEEF</creator>
        
        <creator>MASSUDI MAHMUDDIN</creator>
        
        <creator>NOR HAZLYNA HARUN</creator>
        
        <subject>Extreme learning machine; shadow identification; food images; support vector machine; edge detection; color spaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Shadow identification is important for food images, and different applications require accurate shadow identification or removal. A shadow varies from one image to another based on factors such as lighting, colors, the shape of objects, and their arrangement. This makes shadow identification a complex problem that lacks a systematic approach. Machine learning has high potential for shadow recognition if it is used to train algorithms on a wide range of scenarios. In this article, an Extreme Learning Machine (ELM) has been used to identify shadow within a shadow mask area. This shadow mask area was determined in the image based on edge detection and morphological operations. ELM was compared with a Support Vector Machine (SVM) for shadow identification and showed better performance.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_9-Shadow_Identification_in_Food_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Innovative Cognitive Architecture for Humanoid Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080808</link>
        <id>10.14569/IJACSA.2017.080808</id>
        <doi>10.14569/IJACSA.2017.080808</doi>
        <lastModDate>2017-08-30T17:19:26.3900000+00:00</lastModDate>
        
        <creator>Muhammad Faheem Mushtaq</creator>
        
        <creator>Urooj Akram</creator>
        
        <creator>Adeel Tariq</creator>
        
        <creator>Irfan Khan</creator>
        
        <creator>Muhammad Zulqarnain</creator>
        
        <creator>Umer Iqbal</creator>
        
        <subject>Humanoid robots; cognition; cognitive architecture; self-learning behavior; dynamic environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The humanoid robot is emerging as a popular research tool and research field. The greatest challenges in robot development are cognition, advancement, and the understanding of human-like cognition. A humanoid robot requires self-learning behavior like that of humans, i.e., the ability to gain experience from the environment. Based on experience, it can modify its actions, or use conscious intellectual capability to reduce reliance on empirical factual knowledge. In this regard, we propose a novel framework called an Innovative Cognitive Architecture for Humanoid Robot (ICAHR) that is capable of developing cognition through social interaction and autonomous exploration. It combines modules of active memory, a decision processor, and a sensor listener, which together have the capability to perform human-like self-learning behavior, to make decisions in dynamic environments, and to perform more valid and intelligent actions with better precision. The proposed architecture may result in safe, robust, flexible, and reliable machines that can substitute for human beings in different tasks. The feasibility of the proposed ICAHR design has been examined through real-world case studies.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_8-An_Innovative_Cognitive_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Features-based Comparative Study of the State-of-the-art Cloud Computing Simulators and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080807</link>
        <id>10.14569/IJACSA.2017.080807</id>
        <doi>10.14569/IJACSA.2017.080807</doi>
        <lastModDate>2017-08-30T17:19:26.3770000+00:00</lastModDate>
        
        <creator>Ahmad Waqas</creator>
        
        <creator>M. Abdul Rehman</creator>
        
        <creator>Abdul Rehman Gilal</creator>
        
        <creator>Mohammad Asif Khan</creator>
        
        <creator>Javed Ahmed</creator>
        
        <creator>Zulkefli Muhammed Yusof </creator>
        
        <subject>Cloud computing; simulation; cloud simulator; cloud performance analysis; simulator features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Cloud computing has emerged during the last decade and has turned out to be an essential component of today&#39;s business. Therefore, many solutions are being proposed to optimize and secure the cloud computing environment. To test and validate proposed solutions before deploying them in real cloud infrastructure, a cloud computing simulator is a key requirement. Several cloud computing simulators have been used by the research community for this purpose. In this paper, we discuss modern cloud simulators and present a comprehensive comparison based on their features.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_7-A_Features_based_Comparative_Study_of_the_State_of_the_art_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DDoS Attacks Classification using Numeric Attribute-based Gaussian Naive Bayes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080806</link>
        <id>10.14569/IJACSA.2017.080806</id>
        <doi>10.14569/IJACSA.2017.080806</doi>
        <lastModDate>2017-08-30T17:19:26.3430000+00:00</lastModDate>
        
        <creator>Abdul Fadlil</creator>
        
        <creator>Imam Riadi</creator>
        
        <creator>Sukma Aji</creator>
        
        <subject>Distributed Denial of Service (DDoS); Gaussian Naive Bayes; Numeric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>Cyber attacks that send large volumes of data packets from multiple computers to deplete the resources of a network service are called Distributed Denial of Service (DDoS) attacks. The total data packets and important information, in the form of log files sent by the attacker, can be observed and captured through port mirroring of the network service. A classification system is required to distinguish network traffic between two conditions: normal and under attack. Gaussian Naive Bayes classification is one method that can process numeric attributes as input and determine whether access to the network service is &quot;normal&quot; or under &quot;attack&quot; by DDoS as output. This research was conducted in the Ahmad Dahlan University Networking Laboratory (ADUNL) for 60 minutes, resulting in the classification of 8 IP addresses with normal access and 6 IP addresses with DDoS attack access.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_6-DDoS_Attacks_Classification_using_Numeric_Attribute.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Non-Linear Regression Modeling is used for Asymmetry Co-Integration and Managerial Economics in Iraqi Firms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080805</link>
        <id>10.14569/IJACSA.2017.080805</id>
        <doi>10.14569/IJACSA.2017.080805</doi>
        <lastModDate>2017-08-30T17:19:26.2970000+00:00</lastModDate>
        
        <creator>Karrar Abdulellah Azeez</creator>
        
        <creator>Han DongPing</creator>
        
        <creator>Marwah Abdulkareem Mahmood</creator>
        
        <subject>Cost asymmetry; managerial expectations; co-integration; nonlinear regression function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper analyzes cost asymmetry through managerial expectations in a nonlinear regression function. Two development determinants, asymmetry co-integration and managerial expectations, are also considered. The results reveal that managerial expectations had an impact on the wholesale cost asymmetry response. Where managerial optimism is pronounced, the cost asymmetry response for sales and inventory assets increased more than it decreased as the basic expectation coefficient and the values of the contract parameters changed. Finally, the impacts of managerial expectations, the basic cost coefficient, and the values of the contract parameters are analyzed to illustrate the results of the proposed nonlinear models with the help of numerical experiments. The research examined the short-run and long-run effects of asymmetry co-integration and changes in managerial expectations on cost behavior in Iraq using the nonlinear regression function.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_5-A_Non_Linear_Regression_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Towered Big-Data Service Model for Biomedical Text-Mining Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080804</link>
        <id>10.14569/IJACSA.2017.080804</id>
        <doi>10.14569/IJACSA.2017.080804</doi>
        <lastModDate>2017-08-30T17:19:26.2500000+00:00</lastModDate>
        
        <creator>Alshreef Abed</creator>
        
        <creator>Jingling Yuan</creator>
        
        <creator>Lin Li</creator>
        
        <subject>Big data; biomedical data; text mining; information retrieval; feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The rapid growth of biomedical informatics has drawn increasing popularity and attention. The reasons behind this are advances in genomics, new molecular and biomedical approaches, and various applications such as protein identification, patient medical records, genome sequencing, and medical imaging, through which a huge amount of biomedical research data is generated every day. This growing body of biomedical data consists of both structured and unstructured data. Consequently, managing and extracting useful information from unstructured biomedical data in a traditional (structured) database system is a tedious job. Hence, mechanisms, tools, processes, and methods are necessary to apply to unstructured biomedical data (text) to obtain useful information. The fast growth of these collections makes it increasingly difficult for people to access the required information in a convenient and effective way. Text mining can help us mine information and knowledge from a mountain of text, and it is now widely applied in biomedical research. Text mining is not a new technology, but it has recently received spotlight attention due to the emergence of Big Data. The applications of text mining are diverse and span multiple disciplines, ranging from biomedicine to law, business intelligence, and security. In this survey paper, the researcher identifies and discusses biomedical text-mining issues and recommends a possible technique to cope with future growth.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_4-A_Review_of_Towered_Big_Data_Service_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meteonowcasting using Deep Learning Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080803</link>
        <id>10.14569/IJACSA.2017.080803</id>
        <doi>10.14569/IJACSA.2017.080803</doi>
        <lastModDate>2017-08-30T17:19:26.2370000+00:00</lastModDate>
        
        <creator>Sanam Narejo</creator>
        
        <creator>Eros Pasero</creator>
        
        <subject>Deep learning architectures; deep belief network; time series prediction; weather nowcasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The area of deep learning has enjoyed a resurgence in almost every field of interest. Weather forecasting is a complicated and highly challenging task that involves observing and processing huge amounts of data. The present paper applies a deep learning approach to the prediction of weather parameters such as temperature, pressure, and humidity at a particular site. The implemented predictive models are based on the Deep Belief Network (DBN) and the Restricted Boltzmann Machine (RBM). Initially, each model is trained layer by layer in an unsupervised manner to learn non-linear hierarchical features from the input distribution of the dataset. Subsequently, each model is re-trained globally in a supervised manner with an output layer to predict the appropriate output. The obtained results are encouraging: the feature-based forecasting model can make predictions with a high degree of accuracy. This implies that the model can be suitably adapted for making longer forecasts over larger geographical areas.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_3-MeteoNowcasting_using_Deep_Learning_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison between Chemical Reaction Optimization and Genetic Algorithms for Max Flow Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080802</link>
        <id>10.14569/IJACSA.2017.080802</id>
        <doi>10.14569/IJACSA.2017.080802</doi>
        <lastModDate>2017-08-30T17:19:26.1900000+00:00</lastModDate>
        
        <creator>Mohammad Y. Khanafseh</creator>
        
        <creator>Ola M. Surakhi</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Azzam Sleit</creator>
        
        <subject>Chemical reaction optimization; Ford-Fulkerson algorithm; genetic algorithm; maximum flow problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>This paper compares the performance of the Chemical Reaction Optimization algorithm and the Genetic Algorithm in solving the maximum flow problem against that of the Ford-Fulkerson algorithm. The algorithms have been implemented sequentially in the Java programming language and executed on the maximum flow problem using different network sizes. The Ford-Fulkerson algorithm, which is based on the idea of finding augmenting paths, is the most popular algorithm for computing the maximum flow value, but its time complexity is high. The main aim of this study is to determine which algorithm gives results closest to the Ford-Fulkerson results in less time and with the same degree of accuracy. The results showed that both algorithms can solve the max flow problem with accuracy close to the Ford-Fulkerson results, with better performance achieved by the genetic algorithm in terms of time and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_2-A_Comparison_between_Chemical_Reaction_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HappyMeter: An Automated System for Real-Time Twitter Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080801</link>
        <id>10.14569/IJACSA.2017.080801</id>
        <doi>10.14569/IJACSA.2017.080801</doi>
        <lastModDate>2017-08-30T17:19:26.0330000+00:00</lastModDate>
        
        <creator>Joaquim Perotti Canela</creator>
        
        <creator>Tina Tian</creator>
        
        <subject>Twitter; social networks; data mining; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(8), 2017</description>
        <description>The paper presents HappyMeter, an automated system for real-time Twitter sentiment analysis. More than 380 million tweets consisting of nearly 30,000 words, almost 6,000 hashtags, and over 5,000 user mentions have been studied. A sentiment model is used to measure the sentiment level of each term in the contiguous United States. The system automatically mines real-time Twitter data and reveals the changing patterns of public sentiment over an extended period of time. It is possible to compare public opinion regarding a subject, hashtag, or Twitter user between different states in the U.S. Users may choose to see the overall sentiment level of a term, as well as its sentiment value on a specific day. Real-time results are delivered continuously and visualized through a web-based graphical user interface.</description>
        <description>http://thesai.org/Downloads/Volume8No8/Paper_1-Happymeter_an_Automated_System_for_Real_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Dynamics of IT Workaround Practices - A Theoretical Concept and an Empirical Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080773</link>
        <id>10.14569/IJACSA.2017.080773</id>
        <doi>10.14569/IJACSA.2017.080773</doi>
        <lastModDate>2017-08-03T12:33:53.5600000+00:00</lastModDate>
        
        <creator>Ahmed Alojairi</creator>
        
        <subject>IT effectiveness; workarounds; cybernetics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>An interesting phenomenon that has received limited attention in the extant literature is that of IT workaround practices. Based on Ashby&#39;s Law of Requisite Variety, workarounds were found to be used to accomplish the basic task of matching unmatched variety in the system. The Interaction Effectiveness (IE) ratio of 1.4 was used as a baseline to uncover potential sources of workarounds. The Echo method was used to collect data from 42 users in a high-technology company (HTC). Enablers of and barriers to workaround practices were divided into four main categories - flexibility, reliability, ease of use, and coordination - whereas workarounds were divided into three categories - using other tools, seeking help, and accepting. The results of the case study indicate that &quot;reliability&quot; is the dominant category for both helpful and non-helpful incidents, whereas &quot;coordination&quot; was the least significant. Of the workaround mechanisms, &quot;using other tools&quot; was the most significant category for all users. The findings suggest cycles of continuous improvement to the IE ratio to alleviate the need for workarounds, but a more fundamental issue concerning the source of workaround behaviors is a function of misfits between input variety by users and variety handling capabilities of the system.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_73-The_Dynamics_of_IT_Workaround_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Pulse Voltage as Desulfator to Improve Automotive Lead Acid Battery Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080772</link>
        <id>10.14569/IJACSA.2017.080772</id>
        <doi>10.14569/IJACSA.2017.080772</doi>
        <lastModDate>2017-08-03T12:33:53.4970000+00:00</lastModDate>
        
        <creator>EL MEHDI LAADISSI</creator>
        
        <creator>ANAS EL FILALI</creator>
        
        <creator>MALIKA ZAZI</creator>
        
        <subject>Lead acid battery; desulfator; pulse charging; cold cranking; sulfation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper studies the impact of pulse voltage as a desulfator to recover the capacity of a weak automotive lead acid battery degraded by sulfation. This technique is used to overcome the premature loss of battery capacity, speed up the charging process, and extend the lead acid battery life cycle 3 to 4 times compared with traditional constant-current charging methods. Sulfation is the accumulation of lead sulfate on the electrodes (lead plates). This phenomenon appears naturally at each discharge of the battery and disappears during a recharge. It is common with starter batteries in cars driven in the city with load-hungry accessories, since a motor idling or running at low speed cannot charge the battery sufficiently. Voltage pulses decompose the sulfate (PbSO4) attached to the electrode, which is the main cause of the loss of capacity. In this paper, we study the effects on the recovery capacity of a lead acid battery. Voltage pulses are applied to a commercial automotive battery to collect data, using a charger/desulfator prototype based on a PCDUINO. The experimental results show an improvement in the Cold Cranking Amps level and the charge time duration of the lead acid battery after using our prototype.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_72-Impact_of_Pulse_Voltage_as_Desulfator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Text based Authentication Scheme for Improving Security of Textual Passwords</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080771</link>
        <id>10.14569/IJACSA.2017.080771</id>
        <doi>10.14569/IJACSA.2017.080771</doi>
        <lastModDate>2017-08-01T11:38:53.0500000+00:00</lastModDate>
        
        <creator>Shah Zaman Nizamani</creator>
        
        <creator>Syed Raheel Hassan</creator>
        
        <creator>Tariq Jamil Khanzada</creator>
        
        <creator>Mohd Zalisham Jali</creator>
        
        <subject>Password security; security; usability; alphanumeric passwords; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>User authentication through textual passwords is very common in computer systems due to its ease of use. However, textual passwords are vulnerable to different kinds of security attacks, such as spyware and dictionary attacks. To overcome the deficiencies of the textual password scheme, many graphical password schemes have been proposed. The proposed schemes could not fully replace textual passwords due to usability and security issues. In this paper, a text-based user authentication scheme is proposed which improves the security of the textual password scheme by modifying the password input method and adding a password transformation layer. In the proposed scheme, alphanumeric password characters are represented by random decimal numbers, which resist online security attacks such as shoulder surfing and key logger attacks. In the registration process, the password string is converted into a completely new string of symbols or characters before encryption. This strategy improves password security against offline attacks such as brute-force and dictionary attacks. In the proposed scheme, passwords consist of alphanumeric characters; therefore, users are not required to remember any new kind of passwords such as those used in graphical authentication. Hence, the password memorability burden is minimized. However, the mean authentication time of the proposed scheme is higher than that of the textual password scheme due to the security measures taken against online attacks.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_71-A_Text_based_Authentication_Scheme_for_Improving_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-preserving Twitter-based Solution for Visually Impaired People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080770</link>
        <id>10.14569/IJACSA.2017.080770</id>
        <doi>10.14569/IJACSA.2017.080770</doi>
        <lastModDate>2017-07-31T12:57:56.3900000+00:00</lastModDate>
        
        <creator>Dina Ahmed Abdraboo</creator>
        
        <creator>Tarek Gaber</creator>
        
        <creator>Mohamed El Sayed Wahed</creator>
        
        <subject>Human powered technology; blind people; visually impaired people; user’s privacy; IT-based solution; social networks; friend-sourcing; crowd-sourcing; accessibility; low vision; bilingual; screen reader</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Visually impaired people are a large community all over the world. They usually seek help to perform daily activities such as reading the expiry date of food cans or medicine, reading out the PIN of an ATM card, identifying the color of clothes, or differentiating between money notes and other objects with the same shape. A number of IT-based solutions have been proposed to help and assist blind and/or visually impaired people. Generally speaking, however, these solutions do not support the Arabic language nor protect blind users’ privacy. In this paper, the Trusted Blind Society (TBS) mobile application is proposed. It is an Android application which allows blind users to recognize their unknown surroundings by utilizing two concepts: social network sites and friendsourcing. These two concepts are employed by allowing family members and trusted friends, who are registered on Twitter, to answer blind users’ questions in real time. The solution is also bilingual, supporting Arabic and English, and enables screen reading through the Android TalkBack service. The performance of the TBS system was evaluated using loader.io to check its stability under heavy load, and it was tested by a number of blind volunteers; the results showed good performance compared to most related work.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_70-Privacy_Preserving_Twitter_based_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ladder Networks: Learning under Massive Label Deficit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080769</link>
        <id>10.14569/IJACSA.2017.080769</id>
        <doi>10.14569/IJACSA.2017.080769</doi>
        <lastModDate>2017-07-31T12:57:56.3600000+00:00</lastModDate>
        
        <creator>Behroz Mirza</creator>
        
        <creator>Tahir Syed</creator>
        
        <creator>Jamshed Memon</creator>
        
        <creator>Yameen Malik</creator>
        
        <subject>Ladder networks; semi-supervised learning; deep learning; structure observer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Advancements in deep unsupervised learning are finally bringing machine learning close to natural learning, which happens with as few as one labeled instance. Ladder Networks are the newest deep learning architecture that proposes semi-supervised learning at scale. This work discusses how the ladder network model successfully combines supervised and unsupervised learning, taking it beyond the pre-training realm. The model learns from the structure, rather than the labels alone, transforming it from a label learner into a structural observer. We extend the previously reported results by lowering the number of labels, and report an error of 1.27 using only 40 labels on the MNIST dataset, which in a fully supervised setting uses 60,000 labeled training instances.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_69-Ladder_Networks_Learning_under_Massive_Label.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis on the Security Threats and their Countermeasures of IoT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080768</link>
        <id>10.14569/IJACSA.2017.080768</id>
        <doi>10.14569/IJACSA.2017.080768</doi>
        <lastModDate>2017-07-31T12:57:56.3270000+00:00</lastModDate>
        
        <creator>Abdul Wahab Ahmed</creator>
        
        <creator>Omair Ahmad Khan</creator>
        
        <creator>Mian Muhammad Ahmed</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <subject>Internet of things; security threats; countermeasures; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The Internet of Things refers to a pervasive network architecture which provides services to the physical world by processing and analyzing data. In this modern era, the Internet of Things has gained much significance and is developing rapidly by connecting heterogeneous devices with various technologies. In this way, the interconnectivity of the large number of electronic devices connected to the IoT network raises risks to the security and confidentiality of data. This paper analyzes different security issues and their countermeasures, and discusses future directions for security in IoT. Furthermore, this paper also discusses essential security technologies, such as encryption, in the IoT scenario for the prevention of harmful threats in the light of the latest research.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_68-A_Comprehensive_Analysis_on_the_Security_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Datasets for Biomedical Question Answering Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080767</link>
        <id>10.14569/IJACSA.2017.080767</id>
        <doi>10.14569/IJACSA.2017.080767</doi>
        <lastModDate>2017-07-31T12:57:56.2970000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>Dr. Waqar Mahmood</creator>
        
        <creator>Dr. Usman Ghani Khan</creator>
        
        <subject>Biomedical; QA system; review; survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The ever-increasing amount of textual and linked biomedical data available online poses many challenges for information seekers. Consequently, the focus of the information retrieval community has shifted to precise information retrieval, i.e., providing an exact answer to a user question. In recent years, many datasets related to Biomedical Question Answering (BioQA) have emerged which researchers can use to evaluate the performance of their systems. We reviewed these biomedical datasets and analyzed their characteristics. The survey in this paper covers these datasets for BioQA and has a twofold purpose: to provide an overview of the available datasets in this domain and to help researchers select the most suitable dataset for benchmarking their system.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_67-A_Survey_of_Datasets_for_Biomedical_Question.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study for Performance and Power Consumption of FPGA Digital Interpolation Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080766</link>
        <id>10.14569/IJACSA.2017.080766</id>
        <doi>10.14569/IJACSA.2017.080766</doi>
        <lastModDate>2017-07-31T12:57:56.2800000+00:00</lastModDate>
        
        <creator>Tim Donnelly</creator>
        
        <creator>Jungu Choi</creator>
        
        <creator>Alexander V. Kildishev</creator>
        
        <creator>Matthew Swabey</creator>
        
        <creator>Mark C. Johnson</creator>
        
        <subject>Digital signal processing; digital interpolation filters; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The development of FPGA-based digital signal processing devices has been gaining attention. Researchers seek to reduce power consumption and enhance signal processing quality in these devices within given resources and spatial limits. Hence, there is a need to investigate both the capability and the power consumption associated with the various digital filtering schemes commonly used in FPGA-based devices. We carry out a set of performance and power consumption measurements of interpolation filters using an FPGA and other basic signal processing building blocks. We compare the signal processing performance with theoretical predictions, and measure the power consumed by the filters. Our experimental measurements also confirm the accuracy of the numerical tools used for predicting FPGA power consumption. This paper aims to provide a framework to accurately test basic signal processing across various interpolation schemes and compare the respective schemes’ software-side contributions to power consumption and filtering quality.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_66-A_Comparative_Study_for_Performance_and_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anonymized Social Networks Community Preservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080765</link>
        <id>10.14569/IJACSA.2017.080765</id>
        <doi>10.14569/IJACSA.2017.080765</doi>
        <lastModDate>2017-07-31T12:57:56.2670000+00:00</lastModDate>
        
        <creator>Jyothi Vadisala</creator>
        
        <creator>Valli Kumari Vatsavayi</creator>
        
        <subject>Community; anonymity; degree; social network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Social networks are widely used in society. Most people are connected to one another, communicate with each other, and share information in different forms. The information gathered from different social networking sites is growing tremendously in volume for various research, marketing, and other purposes, which creates security and privacy concerns. The gathered information contains sensitive and private information about individuals, such as an individual’s relationships or group memberships. Therefore, to protect the data from unauthorized users, the data should be anonymized before publishing. In this paper, we study how the k-degree and k-NMF anonymization methods preserve the existing communities of the original social networks. We use an existing heuristic algorithm, the Louvain method, to identify the communities in social networks. We conduct experiments on real datasets and compare how well the two anonymized social networks preserve the communities of the original social networks.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_65-Anonymized_Social_Networks_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OSPF vs EIGRP: A Comparative Analysis of CPU Utilization using OPNET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080764</link>
        <id>10.14569/IJACSA.2017.080764</id>
        <doi>10.14569/IJACSA.2017.080764</doi>
        <lastModDate>2017-07-31T12:57:56.2330000+00:00</lastModDate>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Nafees Ayub</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Sami Ullah</creator>
        
        <subject>Network protocols; topology; OPNET; interior gateway protocols (IGPs); OSPF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Routing is difficult in enterprise networks because a packet might have to traverse many intermediary nodes to reach its final destination. The selection of an appropriate routing protocol for a large network is a difficult task. The focus of this work is to select and identify the best routing technique for a computer network. In this study, the performance of the OSPF and EIGRP routing protocols with respect to CPU utilization is analyzed using the OPNET simulator. The results show that EIGRP acquires redundant information, which affects CPU utilization.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_64-OSPF_vs_EIGRP_A_Comparative_Analysis_of_CPU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on User Interfaces for Interaction with Human and Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080763</link>
        <id>10.14569/IJACSA.2017.080763</id>
        <doi>10.14569/IJACSA.2017.080763</doi>
        <lastModDate>2017-07-31T12:57:56.2200000+00:00</lastModDate>
        
        <creator>Mirza Abdur Razzaq</creator>
        
        <creator>Muhammad Ali Qureshi</creator>
        
        <creator>Kashif Hussain Memon</creator>
        
        <creator>Saleem Ullah</creator>
        
        <subject>Command line interface (CLI); graphical user interface (GUI); user interface (UI); sixth sense device; natural language interface; brain-computer interface; emerging user interfaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Interaction with machines and computers is achieved through user interfaces. Nowadays, with the tremendous growth of technology, interaction has become simpler and more flexible. The study of user interfaces for human-computer and human-machine interaction is the main focus of this paper. In particular, an extensive overview of the different user interfaces available so far is provided. The review covers text-based, graphical-based, and a new class of emerging user interfaces for interacting with machines and computers. This work will be helpful for the development of new user interfaces.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_63-A_Survey_on_User_Interfaces_for_Interaction_with_Human.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobility based Net Ordering for Simultaneous Escape Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080762</link>
        <id>10.14569/IJACSA.2017.080762</id>
        <doi>10.14569/IJACSA.2017.080762</doi>
        <lastModDate>2017-07-31T12:57:56.2030000+00:00</lastModDate>
        
        <creator>Kashif Sattar</creator>
        
        <creator>Aleksandar Ignjatovic</creator>
        
        <creator>Anjum Naveed</creator>
        
        <creator>Muhammad Zeeshan</creator>
        
        <subject>Net ordering; optimization model; ordered escape routing; PCB routing; simultaneous escape routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>With the advancement of electronics technology, the number of pins under the ball grid array (BGA) is increasing on components of reduced size. In small components, a challenging task is to solve the escape routing problem, where BGA pins escape towards the component boundary. It is often desirable to perform ordered simultaneous escape routing (SER) to facilitate area routing and produce elegant Printed Circuit Board (PCB) designs. Some heuristic techniques help in finding PCB routing solutions for SER, but for larger problems these are time consuming and produce sub-optimal results. This work proposes a solution that divides the problem into two parts: first, a novel net ordering algorithm for SER using a network-theoretic approach, and then a linear optimization model for single-component ordered escape routing. The model routes the maximum possible number of nets between two components of the PCB by considering the design rules based on the given net ordering. Comparative analysis shows that the proposed net ordering algorithm and optimization model perform better than the existing routing algorithms for SER in terms of the number of nets routed. Also, the running time using the proposed algorithm reduces to O(2^(NE/2)) + O(2^(NE/2)) for ordered escape routing of both components. This time is much less than O(2^NE) due to the exponential reduction.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_62-Mobility_based_Net_Ordering_for_Simultaneous_Escape_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Short Survey on Static Hand Gesture Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080761</link>
        <id>10.14569/IJACSA.2017.080761</id>
        <doi>10.14569/IJACSA.2017.080761</doi>
        <lastModDate>2017-07-31T12:57:56.1730000+00:00</lastModDate>
        
        <creator>Huu-Hung Huynh</creator>
        
        <creator>Duc-Hoang Vo</creator>
        
        <subject>Hand gesture; rank-order correlation matrix; Gabor filter; block; centroid distance; Fourier transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper presents a survey of methods which have recently been proposed for recognizing static hand gestures. These approaches are first summarized and then assessed on a common dataset. Because the mentioned methods employ different types of input, the survey focuses on the stages of feature extraction and classification. Other earlier steps, such as pre-processing and hand segmentation, are only slightly modified. In the experiments, this work not only considers recognition accuracy but also suggests suitable scenarios for each method according to its advantages and limitations.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_61-Short_Survey_on_Static_Hand_Gesture_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Approach for Detection and Classification of Computed Tomography Lung Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080760</link>
        <id>10.14569/IJACSA.2017.080760</id>
        <doi>10.14569/IJACSA.2017.080760</doi>
        <lastModDate>2017-07-31T12:57:56.1400000+00:00</lastModDate>
        
        <creator>Wafaa Alakwaa</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>Lung cancer; computed tomography; affine invariant moments; pulmonary nodules; R2; feature selection; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The paper presents approaches for nodule detection and extraction in axial lung computed tomography. The goal is to correctly detect pulmonary nodules in order to recognize and screen lung cancer patients. Pulmonary nodule detection is a very challenging problem. We develop an efficient hybrid model based on an affine-invariant representation and the shape of the segmented nodule. Due to the large number of features extracted from all slices of a patient, feature selection is an important step for selecting the most important features for classification. We apply forward stepwise least squares regression that maximizes the R-squared value; this criterion provides a fast preprocessing feature selection assessment for systems with huge volumes of features based on a linear models framework. Moreover, gradient boosting has been suggested to select the relevant features based on a boosting approach. Classification of patients is done by a support vector machine. The Kaggle DSB dataset is used to test the accuracy of our model. The results show a major improvement in accuracy with a reduced feature set.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_60-An_Enhanced_Approach_for_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast–ICA for Mechanical Fault Detection and Identification in Electromechanical Systems for Wind Turbine Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080759</link>
        <id>10.14569/IJACSA.2017.080759</id>
        <doi>10.14569/IJACSA.2017.080759</doi>
        <lastModDate>2017-07-31T12:57:56.1100000+00:00</lastModDate>
        
        <creator>Mohamed Farhat</creator>
        
        <creator>Yasser Gritli</creator>
        
        <creator>Mohamed Benrejeb</creator>
        
        <subject>Source separation; fault diagnosis; independent component analysis; fast–ICA; spectral analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Recently, approaches based on source separation are increasingly adopted for fault diagnosis in several industrial applications. In particular, the Independent Component Analysis (ICA) method is attractive thanks to its simplicity of implementation. In the context of electrical rotating machinery with variable speed, namely the wind turbine type, the interaction between the electrical and mechanical parts along with the fault is complex. Therefore, the essential system variables are affected and thereby need to be analyzed in order to detect the presence of certain faults. In this paper, the target system is the classical association of a doubly-fed induction motor with a two-stage gearbox for a wind energy application system. The investigated mechanical fault is a uniform wear of two gear wheels of the same stage. The idea behind the proposed technique is to consider fault detection and identification as a source separation problem. Based on the analysis into independent components, the Fast–ICA algorithm is adopted to separate and identify the sources of the gear faults. Afterwards, a spectral analysis is applied to the signals resulting from the separation in order to identify the fault components related to the damaged wheels. The efficiency of the proposed technique for the separation and identification of the fault components is evaluated by numerical simulations.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_59-Fast_ICA_for_Mechanical_Fault_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye Controlled Mobile Robot with Shared Control for Physically Impaired People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080758</link>
        <id>10.14569/IJACSA.2017.080758</id>
        <doi>10.14569/IJACSA.2017.080758</doi>
        <lastModDate>2017-07-31T12:57:56.0800000+00:00</lastModDate>
        
        <creator>Muhammad Wasim</creator>
        
        <creator>Javeria Khan</creator>
        
        <creator>Dawer Saeed</creator>
        
        <creator>Dr. Usman Ghani Khan</creator>
        
        <subject>Locked-in syndrome; EEG; shared control; eye controlled robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Physically impaired and disabled people are an integral part of human society. Devices providing assistance to such individuals can help them contribute to society in a more productive way. The situation is even worse for patients with locked-in syndrome, who cannot move their body at all. These problems were the motivation to develop an eye controlled robot to facilitate such patients. A readily available commercial headset is used to record electroencephalogram (EEG) signals for classification and processing. Classification-based control signals are then transmitted to the robot for navigation. The robot mimics a brain-controlled wheelchair driven by eye movements. The robot is based on shared control, which is safe and robust. The analysis of robot navigation for patients showed promising results.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_58-Eye_Controlled_Mobile_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Access Control Policy based on Blockchain and Machine Learning for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080757</link>
        <id>10.14569/IJACSA.2017.080757</id>
        <doi>10.14569/IJACSA.2017.080757</doi>
        <lastModDate>2017-07-31T12:57:56.0470000+00:00</lastModDate>
        
        <creator>Aissam OUTCHAKOUCHT</creator>
        
        <creator>Hamza ES-SAMAALI</creator>
        
        <creator>Jean Philippe LEROY</creator>
        
        <subject>Internet of Things; security; access control; dynamic policy; security policy; blockchain; machine learning; reinforcement learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The Internet of Things (IoT) is now breaking down the barriers between the real and digital worlds. However, one of the major problems that can slow down the development of this global wave, or even stop it, concerns security and privacy requirements. The criticality of the latter comes especially from the fact that smart objects may contain very intimate information or even may be responsible for protecting people’s lives. In this paper, the focus is on access control in the IoT context by proposing a dynamic and fully distributed security policy. Our proposal is based, on the one hand, on the concept of the blockchain to ensure the distributed aspect strongly recommended in the IoT; and on the other hand, on machine learning algorithms, particularly the reinforcement learning category, in order to provide a dynamic, optimized, and self-adjusting security policy.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_57-Dynamic_Access_Control_Policy_based_on_Blockchain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low-Power Hardware Design of Binary Arithmetic Encoder in H.264</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080756</link>
        <id>10.14569/IJACSA.2017.080756</id>
        <doi>10.14569/IJACSA.2017.080756</doi>
        <lastModDate>2017-07-31T12:57:56.0170000+00:00</lastModDate>
        
        <creator>Ben Hamida Asma</creator>
        
        <creator>Nedra Jarray</creator>
        
        <creator>Zitouni Abdelkrim</creator>
        
        <subject>H.264; Binary Arithmetic Encoder (BAE); Context-based Adaptive Binary Arithmetic Coding (CABAC); clock gating</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a well-known bottleneck in H.264/AVC, owing to the highly serialized calculation and high data dependency of the binary arithmetic encoder. This work presents a hardware architecture for the binary arithmetic encoder sub-module of CABAC. Moreover, a clock gating technique is inserted into our design for power saving. An FPGA design of the proposed architecture can work at frequencies up to 268 MHz on a Virtex 5. The suggested design achieves a 17% saving in power consumption, which allows it to be applied to low-power video coding applications.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_56-Low_Power_Hardware_Design_of_Binary_Arithmetic_Encoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Deep Kernel Learning based Models for Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080755</link>
        <id>10.14569/IJACSA.2017.080755</id>
        <doi>10.14569/IJACSA.2017.080755</doi>
        <lastModDate>2017-07-31T12:57:56.0000000+00:00</lastModDate>
        
        <creator>Rabha O. Abd-elsalam</creator>
        
        <creator>Yasser F.Hassan</creator>
        
        <creator>Mohamed W.Saleh</creator>
        
        <subject>Deep learning; multiple kernel; support vector machine; image classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Deep learning systems are used to solve many problems in different domains, but they carry an over-fitting risk as richer representations are introduced. In this paper, three different models with different deep multiple kernel learning architectures are proposed and evaluated for the breast cancer classification problem. Discrete Wavelet transform and the edge histogram descriptor are used to extract the image features. For image classification, a support vector machine with the proposed deep multiple kernel models is used. Also, the span bound is employed for optimizing these models over the dual objective function. Furthermore, a comparison between the performance of the traditional support vector machine, which uses only a single kernel, and the introduced models is carried out; the experimental results show the efficiency of the proposed models.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_55-New_Deep_Kernel_Learning_based_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing Time based Competitive Advantage in IT Sector with Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080754</link>
        <id>10.14569/IJACSA.2017.080754</id>
        <doi>10.14569/IJACSA.2017.080754</doi>
        <lastModDate>2017-07-31T12:57:55.9700000+00:00</lastModDate>
        
        <creator>Rida Maryam</creator>
        
        <creator>Adnan Naseem</creator>
        
        <creator>Junaid Haseeb</creator>
        
        <creator>Khizar Hameed</creator>
        
        <creator>Muhammad Tayyab</creator>
        
        <creator>Babar Shahzaad</creator>
        
        <subject>Business strategy; competitive advantage; time-based; competitor; simulation; software industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Failure to complete projects on time leads to project failure, which is a major dilemma of the software industry. Different strategies are used to gain a competitive advantage over competitors in business. From a software perspective, time is an incredibly critical factor, and software products should be delivered on time to gain competitive advantage. However, to date there is no such strategy that covers the time perspective. In this paper, a time-based strategy for software products is introduced. More specifically, the importance of the time-based strategy is highlighted by analyzing its associated factors using simulations.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_54-Introducing_Time_based_Competitive_Advantage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ultra-Wideband Antenna Design for GPR Applications: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080753</link>
        <id>10.14569/IJACSA.2017.080753</id>
        <doi>10.14569/IJACSA.2017.080753</doi>
        <lastModDate>2017-07-31T12:57:55.9400000+00:00</lastModDate>
        
        <creator>Jawad Ali</creator>
        
        <creator>Noorsaliza Abdullah</creator>
        
        <creator>Muhammad Yusof Ismail</creator>
        
        <creator>Ezri Mohd</creator>
        
        <creator>Shaharil Mohd Shah</creator>
        
        <subject>Ultra-wideband antennas; ground penetrating radar; antennas; antenna review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper presents a comparative review study of ultra-wideband (UWB) antenna technology for Ground Penetrating Radar (GPR) applications. The antenna designs proposed for UWB ground penetrating radar include bow-tie antennas, Vivaldi antennas, horn antennas, planar antennas, tapered slot antennas, dipole antennas, and spiral antennas. Furthermore, a comprehensive study in terms of operating frequency range, gain and impedance bandwidth of each antenna is performed in order to select a suitable antenna structure for GPR systems. Based on the design comparison, antennas with significant gain and enhanced bandwidth have been selected for future work examining penetration depth and resolution imaging, while remaining suitable for GPR detection applications. From the final comparison, three types of antennas are found to be most suitable: Vivaldi, horn and tapered slot antennas. On further analysis, the tapered slot antenna is a promising candidate, as its directional property, high gain and wide bandwidth operation in both the lower and higher frequency ranges enable it to address problems such as penetration depth and resolution imaging in GPR systems.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_53-Ultra_Wideband_Antenna_Design_for_GPR_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Market Prediction using Google Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080752</link>
        <id>10.14569/IJACSA.2017.080752</id>
        <doi>10.14569/IJACSA.2017.080752</doi>
        <lastModDate>2017-07-31T12:57:55.9070000+00:00</lastModDate>
        
        <creator>Farrukh Ahmed</creator>
        
        <creator>Dr. Raheela Asif</creator>
        
        <creator>Dr. Saman Hina</creator>
        
        <creator>Muhammad Muzammil</creator>
        
        <subject>Google trends; financial market; stock market; Karachi stock market; multiclass neural network; multiclass decision trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Financial decisions are among the most significant life-changing decisions that individuals make. There is a strong correlation between financial decision making and human behavior. In this research, the relationship between what people think and how the stock market moves is investigated. Data from 2010 to 2015 on business, political and financial events which directly impact the local stock market in Pakistan is analyzed. The data was collected from the search engine Google via Google Trends. The association between internet searches regarding political or business events and subsequent stock market movements is established. It was found that an increase in searches on these topics may precede a stock market fall or rise. The overall objective of this research is to predict the Karachi Stock Exchange (now known as the Pakistan Stock Exchange) 100 index by quantifying the semantics of the international market. In addition, the relation between what an individual thinks while searching on Google and its effect on the local market is also investigated. The collected data has been mined using a Multiclass Neural Network and Multiclass Decision Trees. The results show that Multiclass Decision Trees performed best, with an accuracy of 94%.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_52-Financial_Market_Prediction_using_Google_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for Detection of Micro-Calcification in Breast Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080751</link>
        <id>10.14569/IJACSA.2017.080751</id>
        <doi>10.14569/IJACSA.2017.080751</doi>
        <lastModDate>2017-07-31T12:57:55.8900000+00:00</lastModDate>
        
        <creator>M. Abdul Rehman</creator>
        
        <creator>Jamil Ahmed</creator>
        
        <creator>Ahmed Waqas</creator>
        
        <creator>Ajmal Sawand</creator>
        
        <subject>Medical image mining; machine learning; feature extraction; classification; Digital Communication in Medicine (DICOM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Recently, medical image mining has become one of the well-recognized research areas of machine learning, and artificial intelligence techniques have been widely used in various computer-aided diagnostic systems. Specifically, the breast cancer classification problem is considered one of the most significant problems. For instance, the complex, diverse and heterogeneous malignant features of micro-calcification in DICOM (Digital Communication in Medicine) mammography images are very difficult to classify, because the persistence of noise in mammogram images creates considerable confusion for doctors. In order to reduce the chances of misdiagnosis and to discern the difference between malignant and benign lesions of micro-calcification, this paper proposes a system called “Intelligent System for Detection of Micro-Calcification in Breast Cancer” that considers all the problems stated above. Overall, our system comprises three main stages. In the first stage, an adaptive threshold algorithm is used to reduce the noise, and the canny edge detection algorithm is used to detect the edges of every macro- or micro-calcification. In the second stage, designated as feature selection, an auto-crop algorithm crops all types of calcifications and lesions for the proposed algorithm called CFEDNN (Calcification Feature Extraction Deep Neural Networks), which is designed to avoid manual ROIs (Regions of Interest). The decision model is constructed using DNN (Deep Neural Networks), and the best classification accuracy is measured as 95.6%.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_51-Intelligent_System_for_Detection_of_MicroCalcification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Feature Selection for Product Labeling over Unstructured Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080750</link>
        <id>10.14569/IJACSA.2017.080750</id>
        <doi>10.14569/IJACSA.2017.080750</doi>
        <lastModDate>2017-07-31T12:57:55.8770000+00:00</lastModDate>
        
        <creator>Zeki YETGIN</creator>
        
        <creator>Abdullah ELEWI</creator>
        
        <creator>Furkan G&#214;Z&#220;KARA</creator>
        
        <subject>Product labeling; product clustering; feature selection; similarity metrics; hierarchical clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The paper introduces a novel feature selection algorithm for labeling identical products collected from online web resources. Product labeling is important for clustering similar or identical products. Products blindly crawled from web sources, such as online sellers, yield unstructured data, with features expressed in different representations and formats. Such data result in feature vectors whose representation is unknown and non-uniform in length. Thus, product labeling, as a challenging problem, needs efficient selection of the features that best describe the products. In this paper, an efficient feature selection algorithm is proposed for the product labeling problem. Hierarchical clustering is used with state-of-the-art similarity metrics to assess the performance of the proposed algorithm. The results show that the proposed algorithm increases the performance of product labeling significantly. Furthermore, the method can be applied to any clustering algorithm that works on unstructured data.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_50-Efficient_Feature_Selection_for_Product_Labeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2.5 D Facial Analysis via Bio-Inspired Active Appearance Model and Support Vector Machine for Forensic Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080749</link>
        <id>10.14569/IJACSA.2017.080749</id>
        <doi>10.14569/IJACSA.2017.080749</doi>
        <lastModDate>2017-07-31T12:57:55.8430000+00:00</lastModDate>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Mohammed Hasan Abdulameer</creator>
        
        <creator>Nazri Ahmad Zamani</creator>
        
        <creator>Fasly Rahim</creator>
        
        <creator>Khairul Akram Zainol Ariffin</creator>
        
        <creator>Zulaiha Othman</creator>
        
        <creator>Mohd Zakree Ahmad Nazri</creator>
        
        <subject>Face recognition; active appearance model; artificial bee colony; particle swarm optimization; support vector machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>In this paper, a fully automatic 2.5D facial technique for forensic applications is presented. Feature extraction and classification are fundamental processes in any face identification technique, and methods for both are proposed in this paper. The Active Appearance Model (AAM) is one of the familiar feature extraction methods, but it has weaknesses in its fitting process. The artificial bee colony (ABC) algorithm is a fitting solution due to its fast search ability; however, it has a drawback in its neighborhood search. On the other hand, PSO-SVM is one of the most recent classification approaches, but its performance is weakened by the use of random values in calculating velocity. To solve these problems, this research is conducted in three phases as follows: the first phase proposes Maximum Resource Neighborhood Search (MRNS), an enhanced ABC algorithm, to improve the fitting process in the current AAM. Then, the Adaptively Accelerated PSO-SVM (AAPSO-SVM) classification technique is proposed, in which the acceleration coefficient values are selected using particle fitness values when finding the optimal parameters of the SVM. The proposed methods AAM-MRNS and AAPSO-SVM, and the whole 2.5D facial technique, are evaluated by comparing them with other methods using a new 2.5D face image dataset. Further, a sample from a real Malaysian criminal case involving CCTV facial investigation of a suspect has been tested with the proposed technique. Results from the experiments show that the proposed techniques outperformed the conventional techniques. Furthermore, the 2.5D facial technique is able to recognize a sample from a Malaysian criminal case called “Tepuk Bahu” using CCTV facial investigation.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_49-2.5_D_Facial_Analysis_via_Bio_Inspired_Active_Appearance_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating True Demand in Airline’s Revenue Management Systems using Observed Sales</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080748</link>
        <id>10.14569/IJACSA.2017.080748</id>
        <doi>10.14569/IJACSA.2017.080748</doi>
        <lastModDate>2017-07-31T12:57:55.8300000+00:00</lastModDate>
        
        <creator>Alireza Nikseresht</creator>
        
        <creator>Koorush Ziarati</creator>
        
        <subject>Demand estimation; demand modelling; forecasting; revenue management; inventory control; unconstraining; uncensoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Forecasting accuracy is very important in revenue management. Improved forecast accuracy improves the decisions made about inventory, and this leads to greater revenue. In an airline’s revenue management system, the inventory is controlled by changing product availability. As a consequence of changing availability, the recorded sales become a censored observation of underlying demand and therefore cannot depict the true demand, and forecasting accuracy is affected by this censored data. This paper proposes a method to estimate true demand from censored data. In the literature, this process is referred to as unconstraining or uncensoring. A Multinomial Logit model is used to model customer choice behaviour. A simple algorithm is proposed to estimate the parameters (customers’ preferences) of the model using historical sales data, product availability information and market share. The proposed method is evaluated using different simulated datasets, and the results are compared with three benchmark models commonly used in airline revenue management practice. The experiments show that the proposed method outperforms the others in terms of execution time and accuracy. A 47.64% improvement in root mean square error between simulated and estimated demand is reported relative to the benchmark models.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_48-Estimating_True_Demand_in_Airline’s_Revenue_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobility for an Optimal Data Collection in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080747</link>
        <id>10.14569/IJACSA.2017.080747</id>
        <doi>10.14569/IJACSA.2017.080747</doi>
        <lastModDate>2017-07-31T12:57:55.7970000+00:00</lastModDate>
        
        <creator>EZ-ZAIDI Asmaa</creator>
        
        <creator>RAKRAK Said</creator>
        
        <subject>Contact time; mobile sink; wireless sensor networks; meeting point; data gathering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Sensor nodes located in the vicinity of a static sink rapidly drain their batteries, since they have to carry a greater traffic burden. This situation results in network partitions, holes and data losses. To mitigate this issue, much research has proposed the use of a mobile sink for data collection as a potential solution. However, due to its speed, the mobile sink has a very short communication time to pick up all data from the sensor nodes within the network, and is therefore forced to return to gather the remaining data. In this paper, we propose a new data collection scheme that aims to decrease the latency and extend the contact time between the mobile sink and the meeting points that buffer data originating from the other sensor nodes. We have also handled the case of urgent data so that it can be delivered without any delay. Our proposed scheme is validated via extensive simulations using the NS2 simulator. Our approach significantly decreases the latency and prolongs the contact time between the mobile sink and sensor nodes.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_47-Mobility_for_an_Optimal_Data_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Diagnostic System for Nuclei Structure Classification of Thyroid Cancerous and Non-Cancerous Tissues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080746</link>
        <id>10.14569/IJACSA.2017.080746</id>
        <doi>10.14569/IJACSA.2017.080746</doi>
        <lastModDate>2017-07-31T12:57:55.7670000+00:00</lastModDate>
        
        <creator>Jamil Ahmed Chandio</creator>
        
        <creator>M. Abdul Rehman Soomrani</creator>
        
        <subject>Machine learning; decision support system; clustering; classification; cancer cells</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Recently, image mining has opened new frontiers in the field of biomedical discovery, and machine learning techniques have brought a significant revolution in medical diagnosis. In particular, the classification of human cancerous tissues is assumed to be one of the really challenging problems, since it requires highly optimized algorithms to select the appropriate features from histopathological images of well-differentiated thyroid cancers. For instance, predicting initial changes in a neoplasm, such as hidden patterns of nuclei overlapping sequences, variations in nuclei structures, distortion in chromatin distributions, and other micro-architectural behaviors, would provide more meticulous assistance to doctors in the early diagnosis of cancer. In order to mitigate all the problems stated above, this paper proposes a novel methodology called “Intelligent Diagnostic System for Nuclei Structural Classification of Thyroid Cancerous and Non-Cancerous Tissues”, which classifies nuclei structures and cancerous behaviors from medical images using the proposed algorithm Auto_Tissue_Analysis. The overall methodology comprises four layers. In the first layer, noise reduction techniques are used. In the second layer, feature selection techniques are used. In the third layer, a decision model is constructed using the random forest (tree-based) algorithm. Finally, result visualization and performance evaluation are done using a confusion matrix and precision and recall measures. The overall classification accuracy is measured at about 74% with 10-fold cross validation.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_46-Intelligent_Diagnostic_System_for_Nuclei_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Modeling based Agent Cellular Automata for Advanced Residential Mobility Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080745</link>
        <id>10.14569/IJACSA.2017.080745</id>
        <doi>10.14569/IJACSA.2017.080745</doi>
        <lastModDate>2017-07-31T12:57:55.7500000+00:00</lastModDate>
        
        <creator>Elarbi Elalaouy</creator>
        
        <creator>Khadija Rhoulami</creator>
        
        <creator>Moulay Driss Rahmani</creator>
        
        <subject>Residential mobility; multi agent systems; cellular automata; urban modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Nowadays, residential mobility (RM) is usually interconnected with other urban phenomena to make simulation models more realistic and effective, in order to support urban planners and decision makers. Recent RM research tends to describe models from a functional view; however, researchers focus less on providing software modeling of their RM applications. Based on this observation, the article presents an agent cellular automata based modeling approach for advanced RM applications. The proposed modeling contains six models based on UML 2.0 diagrams, which model parts of the system from different views. The work could be of interest to specialists (researchers, designers and developers) when modeling advanced RM applications.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_45-A_Novel_Modeling_based_Agent_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for System Requirements Approval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080744</link>
        <id>10.14569/IJACSA.2017.080744</id>
        <doi>10.14569/IJACSA.2017.080744</doi>
        <lastModDate>2017-07-31T12:57:55.7200000+00:00</lastModDate>
        
        <creator>Lindita Nebiu Hyseni</creator>
        
        <creator>Zamir Dika</creator>
        
        <subject>Approval method; approve requirements; system requirements; functional and non-functional requirements; joint approval requirements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>A requirements approval method is necessary to ensure that the system requirements have been identified in the right way and that an understanding between the contractor and the client exists. The research conducted identified that most scholars have been working on requirements definition during meetings with the client; they have even started to initiate validation by checking whether the requirements capture the needs of the client, but not the approval of the requirements. Therefore, the Joint Approval Requirements (JAR) method is proposed, based on gaps identified through a literature review and work experience. In this paper, this theoretical JAR method is developed further through the presentation of its details regarding approval of the final version of the functional and non-functional requirements document and the integrated conceptual model of the IS. The presented method is ready for the research community to implement in different industries in order to measure the effect of the JAR method on system requirements.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_44-Method_for_System_Requirements_Approval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Online Rating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080743</link>
        <id>10.14569/IJACSA.2017.080743</id>
        <doi>10.14569/IJACSA.2017.080743</doi>
        <lastModDate>2017-07-31T12:57:55.7030000+00:00</lastModDate>
        
        <creator>Mohammad Azzeh</creator>
        
        <subject>Online rating systems; reputation models; comparative analysis; decision making; e-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Online rating systems serve as decision support tools for choosing the right transactions on the internet. Consumers usually rely on others’ experiences when doing transactions on the internet; therefore, their feedback is helpful in making such transactions succeed. One important form of such feedback is product ratings. Many online rating systems have been proposed, either by researchers or by industry, but there is much debate about their accuracy and stability. This paper looks at the accuracy and stability of a set of common online rating systems over dense and sparse datasets. To accomplish this, we used three evaluation measures, namely Mean Absolute Error (MAE), Mean Balanced Relative Error (MBRE) and Mean Inverse Balanced Relative Error (MIBRE), in addition to the Borda count to assess the stability of rankings among the various rating systems. The results showed that the median and Dirichlet models are the most accurate for both sparse and dense datasets, whereas the BetaDR model is the most stable across the different evaluation measures. Therefore, we recommend using Dirichlet or BetaDR for products with a small number of ratings, and the median model for products with a large number of ratings.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_43-Comparative_Analysis_of_Online_Rating_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic based Data Integration in Scientific Workflows</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080742</link>
        <id>10.14569/IJACSA.2017.080742</id>
        <doi>10.14569/IJACSA.2017.080742</doi>
        <lastModDate>2017-07-31T12:57:55.6900000+00:00</lastModDate>
        
        <creator>M. Abdul Rehman</creator>
        
        <creator>Jamil Ahmed</creator>
        
        <creator>Ahmed Waqas</creator>
        
        <creator>Ajmal Sawand</creator>
        
        <subject>Data integration; scientific workflows; ontology; data semantics; data management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Data integration has become the most prominent aspect of data management applications, especially in scientific domains like ecology, biology, and the geosciences. Today’s complex scientific applications and the rise of diverse data-generating devices in scientific domains (e.g. sensors) have made data integration a challenging task. In response to these challenges, data management applications are providing ground-breaking functionalities that come at the price of high complexity. This paper presents a semantic data integration framework based on the exploitation of ontologies. Exploiting a Description Logics formalism and associated reasoning procedures, the framework is able to handle heterogeneous formats and differing semantics. Besides an in-depth discussion of the ontology-based integration capability, the paper also provides a brief overview of the system architecture and its application in a real-world scenario taken from ecological research.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_42-Semantic_based_Data_Integration_in_Scientific_Workflows.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mathematical Model for Comparing Memory Storage of Three Interval-Based Parametric Temporal Database Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080741</link>
        <id>10.14569/IJACSA.2017.080741</id>
        <doi>10.14569/IJACSA.2017.080741</doi>
        <lastModDate>2017-07-31T12:57:55.6730000+00:00</lastModDate>
        
        <creator>Nashwan Alromema</creator>
        
        <creator>Mohd Shafry Mohd Rahim</creator>
        
        <creator>Ibrahim Albidewi</creator>
        
        <subject>Valid-time data model; N1NF; interval-based timestamping; temporal data model; 1NF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The Interval-Based Parametric Temporal Database Model (IBPTDM) captures the historical changes of a database object in a single tuple. Such a data model violates 1NF and is difficult to implement on top of conventional Database Management Systems (DBMS). The reason is that IBPTDM cannot directly use relational storage structures or query evaluation techniques that depend on atomic attribute values, and its attribute size is not fixed. The 1NF model, with its features, can be used to address this challenge. However, modeling time-varying data in a 1NF model raises questions about memory storage efficiency and ease of use. The main goal of this research is a novel approach for representing temporal data in a 1NF model, compared with the other main approaches in the literature. To this end, a mathematical model comparing three different storage models is demonstrated, illustrating that the proposed model is more efficient than the other approaches under certain conditions. The simulation results showed that the proposed model overcomes needless data redundancy, achieves savings in memory storage, and is easy to implement in the relational data model or to adapt to production systems that need to track the temporal aspects of functioning database systems.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_41-A_Mathematical_Model_for_Comparing_Memory_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Customized Descriptor for Various Obstacles Detection in Road Scene</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080740</link>
        <id>10.14569/IJACSA.2017.080740</id>
        <doi>10.14569/IJACSA.2017.080740</doi>
        <lastModDate>2017-07-31T12:57:55.6430000+00:00</lastModDate>
        
        <creator>Haythem Ameur</creator>
        
        <creator>Abdelhamid Helali</creator>
        
        <creator>J. Ram&#237;rez</creator>
        
        <creator>J. M. Gorriz</creator>
        
        <creator>Ridha Mghaieth</creator>
        
        <creator>Hassen Maaref</creator>
        
        <subject>ADAS; customized HOG; linear SVM; obstacle detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Recently, real-time object detection systems have become a major challenge in smart vehicles. In this work, we aim to increase both pedestrian and driver safety by improving their recognition rate in the vehicle’s embedded vision systems. Based on the Histogram of Oriented Gradients (HOG) descriptor, an optimized object detection system is presented in order to achieve an efficient recognition system for several types of obstacles. The main idea is to customize the weight of each bin in the HOG feature vector according to its contribution to the description of the extracted relevant features. Performance studies using a linear SVM classifier prove the efficiency of our approach. Indeed, on the INRIA datasets, we improved the sensitivity rate of pedestrian detection by 11% and of vehicle detection by 5%.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_40-Customized_Descriptor_for_Various_Obstacles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ODSA: A Novel Ordering Divisional Scheduling Algorithm for Modern Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080739</link>
        <id>10.14569/IJACSA.2017.080739</id>
        <doi>10.14569/IJACSA.2017.080739</doi>
        <lastModDate>2017-07-31T12:57:55.6100000+00:00</lastModDate>
        
        <creator>Junaid Haseeb</creator>
        
        <creator>Muhammad Tayyab</creator>
        
        <creator>Khizar Hameed</creator>
        
        <creator>Samia Rehman</creator>
        
        <creator>Muhammad Junaid</creator>
        
        <creator>Agha Muhammad Musa Khan</creator>
        
        <subject>CPU scheduling; round robin scheduling algorithm; turnaround time; waiting time; context switching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>CPU scheduling is defined as scheduling multiple processes that are required to be executed in a specific time period. A large number of scheduling algorithms have been proposed to maximize CPU utilization/throughput and minimize turnaround, waiting, and response times. Existing studies claim that Round Robin (RR) provides the best results in terms of the above-mentioned factors. In RR, a process is assigned to the CPU for a fixed time quantum and begins execution; if the process requires more time than the assigned quantum, its remaining portion waits for the next turn. Although RR schedules processes in an efficient manner, it has certain limitations: a time quantum that is too small causes frequent context switching, while one that is too large can increase response time. To address these identified problems, various improved versions of RR also exist. The purpose of this paper is twofold: 1) a comparison between different improved versions of RR; and 2) a new algorithm named the Ordering Divisional Scheduling Algorithm (ODSA), which combines various features of different algorithms and is an improvement over RR. Our results show that ODSA can schedule processes with lower turnaround and average waiting times compared to existing solutions.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_39-ODSA_A_Novel_Ordering_Divisional_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning based Computer Aided Diagnosis System for Breast Mammograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080738</link>
        <id>10.14569/IJACSA.2017.080738</id>
        <doi>10.14569/IJACSA.2017.080738</doi>
        <lastModDate>2017-07-31T12:57:55.5800000+00:00</lastModDate>
        
        <creator>M. Arfan Jaffar</creator>
        
        <subject>Classification; breast mammograms; computer aided diagnosis; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>In this paper, a framework is presented that combines a deep Convolutional Neural Network (CNN) with a Support Vector Machine (SVM). The proposed method first performs preprocessing to resize the images so that they are suitable for the CNN and to enhance image quality. The deep CNN is used for feature extraction, and classification is performed with the SVM. The standard MIAS and DDMS datasets have been employed for testing the proposed framework, with new images generated from these datasets through augmentation. Different performance measures, such as accuracy, sensitivity, specificity, and area under the curve (AUC), have been employed as quantitative measures and compared with state-of-the-art existing methods. Results show that the proposed framework attained 93.35% accuracy and 93% sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_38-Deep_Learning_based_Computer_Aided_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SEUs Mitigation on Program Counter of the LEON3 Soft Processor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080737</link>
        <id>10.14569/IJACSA.2017.080737</id>
        <doi>10.14569/IJACSA.2017.080737</doi>
        <lastModDate>2017-07-31T12:57:55.5630000+00:00</lastModDate>
        
        <creator>Afef KCHAOU</creator>
        
        <creator>Wajih EL HADJ YOUSSEF</creator>
        
        <creator>Rached TOURKI</creator>
        
        <subject>NETFI+ ; fault injection; SEUs; LEON3; simulation; emulation; reliability; TMR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Analyzing and evaluating the sensitivity of embedded systems to soft errors has always been a challenge for aerospace and safety equipment designers. Different automated fault-injection methods have been developed for evaluating the sensitivity of integrated circuits, and many techniques have been developed to obtain fault-tolerant architectures that mask and mitigate injected faults. In this work, fault injection and repair techniques are applied together on the LEON3 processor with the goal of studying the reliability of a soft-core. The NETlist Fault Injection (NETFI+) tool is the fault-injection technique used in this paper. The prediction of Single Event Upset (SEU) error rates between radiation ground testing and an FPGA implementation had previously been carried out with good and accurate results, but no functional simulations had been performed. Triple Modular Redundancy (TMR) is used in this paper as a repair technique against fault injection. This paper analyzes the effectiveness of a fault-tolerant method on the LEON3 soft-core running a benchmark. It starts by evaluating the behavior of LEON3’s program counter against the SEU error rate, comparing accuracy between functional simulation and FPGA emulation, and then analyzes LEON3 reliability in the presence of the fault-tolerant technique. The objective is to offer designers, through the new version of NETFI+ incorporating a fault-tolerant technique, the possibility of evaluating the benefits of SEU mitigation for the LEON3 processor’s program counter.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_37-SEUs_Mitigation_on_Program_Counter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Lean Software Development by using Devops Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080736</link>
        <id>10.14569/IJACSA.2017.080736</id>
        <doi>10.14569/IJACSA.2017.080736</doi>
        <lastModDate>2017-07-31T12:57:55.5330000+00:00</lastModDate>
        
        <creator>Ahmed Bahaa Farid</creator>
        
        <creator>Yehia Mostafa Helmy</creator>
        
        <creator>Mahmoud Mohamed Bahloul</creator>
        
        <subject>Lean software development; DevOps; development &amp; IT operations; continuous delivery; monitoring; continuous integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Competition between companies has created great pressure to produce new features continuously and as fast as possible; consequently, successful software companies need to learn more about their customers and get new features out to them more rapidly. Lean software development by itself does not integrate the development and operations teams. DevOps enables this merger, making operations part of the development process and keeping it up to date during the development phase, thereby reducing errors during deployment. The purpose of this paper is to investigate how DevOps practices can be used to improve the performance of the lean software development production process, and it introduces a new framework that merges the lean and DevOps processes. The research has been evaluated on a sample of two departments in the Faculty of Commerce at Helwan University. The results of this work have reduced the delivery response time for customers, and rapid feedback provides accurate expectations of customer needs, leading to lower levels of deployment pain and lower change failure rates.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_36-Enhancing_Lean_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Encryption Technique based on the Entropy Value of a Random Block</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080735</link>
        <id>10.14569/IJACSA.2017.080735</id>
        <doi>10.14569/IJACSA.2017.080735</doi>
        <lastModDate>2017-07-31T12:57:55.5170000+00:00</lastModDate>
        
        <creator>Mohammed A. F. Al-Husainy</creator>
        
        <creator>Diaa Mohammed Uliyan</creator>
        
        <subject>Image security; image encryption; secret key; image authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The use of digital images in most fields of information technology means that these images often contain confidential information. When these images are transmitted via the Internet, especially in the cloud, it becomes necessary to protect them in a way that keeps the confidential information they contain away from attackers. A proposed image encryption technique is presented in this work. This technique uses a secret key that is extracted from the image content itself. Therefore, there is no need for a secret channel to exchange any key: sender and receiver authenticate each other with regard to a shared secret key extracted from the image. The technique constructs the secret key used to encrypt the image from the entropy values of a set of randomly selected blocks of the image itself. Various experiments have been conducted to evaluate the strength and performance of the technique. The experimental results show that the proposed technique can be used effectively in the field of image security to protect and authenticate images.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_35-Image_Encryption_Technique_based_on_the_Entropy_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Cardiac Disease using Data Mining Classification Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080734</link>
        <id>10.14569/IJACSA.2017.080734</id>
        <doi>10.14569/IJACSA.2017.080734</doi>
        <lastModDate>2017-07-31T12:57:55.5030000+00:00</lastModDate>
        
        <creator>Abdul Aziz</creator>
        
        <creator>Aziz Ur Rehman</creator>
        
        <subject>Cardiac disease; classification technique; decision tree; knowledge discovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Cardiac Disease (CD) is one of the major causes of death. An important task is to identify cardiac disease minutely and precisely. Medical diagnostic errors are generally dangerous and costly, and worldwide they lead to deaths. Data mining techniques are very important for minimizing diagnostic errors as well as improving patient safety. They are very effective in designing medical support systems and enrich the ability to discover unseen patterns and associations in clinical data. In this paper, the application of a classification technique, the decision tree, to the detection of heart disease is introduced. The classification tree uses many factors, including age, blood sugar, and blood pressure; it can estimate the probability of a patient having CD using fewer diagnostic tests, which saves time and money.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_34-Detection_of_Cardiac_Disease_using_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Analysis of Quality Assurance of Mobile Applications using Automated Testing Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080733</link>
        <id>10.14569/IJACSA.2017.080733</id>
        <doi>10.14569/IJACSA.2017.080733</doi>
        <lastModDate>2017-07-31T12:57:55.4870000+00:00</lastModDate>
        
        <creator>Haneen Anjum</creator>
        
        <creator>Muhammad Imran Babar</creator>
        
        <creator>Muhammad Jehanzeb</creator>
        
        <creator>Maham Khan</creator>
        
        <creator>Saima Chaudhry</creator>
        
        <creator>Summiyah Sultana</creator>
        
        <creator>Zainab Shahid</creator>
        
        <creator>Furkh Zeshan</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <subject>Mobile application; quality assurance; automated testing; testing tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The use of mobile applications is trending these days due to the adoption of handheld mobile devices with operating systems such as Android, iOS, and Windows. Delivering quality in mobile apps is as important as in any web or desktop application. Simpler, easier quality assurance and evaluation on mobile devices is achieved by using automated testing tools. These tools have been evaluated for their features, platforms, code coverage, and efficiency. However, they have not been evaluated and compared with each other with respect to the different quality attributes they can enhance in the apps under test. This research study aims to evaluate different testing tools, focusing on identifying the quality factors they help achieve in the apps under test. Furthermore, it aims to measure overall trends in the essential quality factors achieved using automated testing tools. The findings of this study are beneficial to practitioners and researchers. Practitioners can look up the specific tools that help them assure the desired quality factors in the apps under test. Researchers may base their studies on the findings of this study to propose solutions or revise existing tools in order to achieve the maximum number of critical quality attributes in the app under test. This study revealed that the trend of automated testing is high for usability, correctness, and robustness, and average for testability and performance. However, for the assurance of extensibility, maintainability, scalability, and platform compatibility, only a few tools are available.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_33-A_Comparative_Analysis_of_Quality_Assurance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation and Analysis of Quality of Service (QoS) Parameters of Voice over IP (VoIP) Traffic through Heterogeneous Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080732</link>
        <id>10.14569/IJACSA.2017.080732</id>
        <doi>10.14569/IJACSA.2017.080732</doi>
        <lastModDate>2017-07-31T12:57:55.4400000+00:00</lastModDate>
        
        <creator>Mahdi H. Miraz</creator>
        
        <creator>Suhail A. Molvi</creator>
        
        <creator>Muzafar A. Ganie</creator>
        
        <creator>Maaruf Ali</creator>
        
        <creator>AbdelRahman H. Hussein</creator>
        
        <subject>Voice over Internet Protocol (VoIP); Quality of Service (QoS); Mean Opinion Score (MOS); simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Identifying the causes and parameters that affect the Quality of Service (QoS) of Voice-over-Internet Protocol (VoIP) through heterogeneous networks such as WiFi and WiMAX, and between them, is carried out using the OPNET simulation tool. Optimization of the network for both intra- and inter-system traffic to mitigate the deterioration of QoS is discussed. The average jitter of VoIP traffic traversing the WiFi-WiMAX network was observed to be higher at some points in time than that over WiFi alone; it is routinely surmised to be less than that of transiting the WiFi network only and higher than that of passing through the higher-bandwidth WiMAX network. Moreover, both the packet end-to-end delay and the Mean Opinion Score (MOS) were considerably higher than expected. The consequences of this optimization, leading to a solution that can ameliorate the QoS over these networks, are analyzed and offered as the conclusion of this ongoing research.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_32-Simulation_and_Analysis_of_Quality_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Strategy of Validities’ Computation for Multimodel Approach: Experimental Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080731</link>
        <id>10.14569/IJACSA.2017.080731</id>
        <doi>10.14569/IJACSA.2017.080731</doi>
        <lastModDate>2017-07-31T12:57:55.4230000+00:00</lastModDate>
        
        <creator>Abdennacer BEN MESSAOUD</creator>
        
        <creator>Samia TALMOUDI BEN AOUN</creator>
        
        <creator>Moufida LAHMARI KSOURI</creator>
        
        <subject>Validities; residues’ approach; multimodel; quasi-hierarchical structuring; experimental validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The evaluation of validities is a fundamental step in the design of the multimodel approach. Indeed, it is thanks to the validities that we estimate the contribution of each base model to reproducing the behavior of the global process in a given operating area. These coefficients are most commonly calculated by the residues’ approach, formulated from the distance between the real output and the sub-models’ outputs. In this paper, a strategy to improve the performance of the residues’ approach in terms of precision and robustness is proposed. This strategy is based on a quasi-hierarchical structuring. A simulation example and a validation on a semi-batch reactor show the interest and effectiveness of the proposed strategy.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_31-A_New_Strategy_of_Validities_Computation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Leukemia Identification based on Cepstral Analysis and Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080730</link>
        <id>10.14569/IJACSA.2017.080730</id>
        <doi>10.14569/IJACSA.2017.080730</doi>
        <lastModDate>2017-07-31T12:57:55.3470000+00:00</lastModDate>
        
        <creator>Amira Samy Talaat</creator>
        
        <creator>Amir F. Atiya</creator>
        
        <subject>MFCC; feature extraction; classification; identification system; leukemia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper implements a new leukemia identification method that depends on Mel frequency cepstral coefficient (MFCC) feature extraction and the wavelet transform. Leukemia identification measures blood cell features to detect blood cancer in a patient. Blood cell feature extraction is based on transforming the two-dimensional (2D) blood cell image into a one-dimensional (1D) signal and thereafter extracting MFCCs from this signal. Furthermore, the discrete wavelet transform (DWT) of the 1D blood cell signals is used to extract additional MFCC features to assist the identification procedure. In addition, the wavelet transform with denoising is used to reduce noise and increase classification accuracy. Feature matching/classification of a blood cell as a normal cell or a leukemia cell is performed in the proposed method using five different classifiers. Experimental results show that the proposed identification method performs very well with the wavelet transform and is robust in the presence of noise.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_30-A_New_Approach_for_Leukemia_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Traffic Classification using Machine Learning Techniques over Software Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080729</link>
        <id>10.14569/IJACSA.2017.080729</id>
        <doi>10.14569/IJACSA.2017.080729</doi>
        <lastModDate>2017-07-31T12:57:55.2370000+00:00</lastModDate>
        
        <creator>Mohammad Reza Parsaei</creator>
        
        <creator>Mohammad Javad Sobouti</creator>
        
        <creator>Seyed Raouf khayami</creator>
        
        <creator>Reza Javidan</creator>
        
        <subject>Software defined networks; openflow; traffic classification; neural network; multilayer perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Nowadays, the Internet does not provide an exchange of information between applications and networks, which may result in poor application performance. Concepts such as application-aware networking and network-aware application programming try to overcome these limitations. The introduction of Software-Defined Networking (SDN) opens a path towards the realization of enhanced interaction between networks and applications. SDN is an innovative and programmable networking architecture, representing the direction of future network evolution. Accurate traffic classification over SDN is of fundamental importance to numerous other network activities, from security monitoring to accounting, and from Quality of Service (QoS) to providing operators with useful forecasts for long-term provisioning. In this paper, four variants of a neural network estimator are used to categorize traffic by application. The proposed method is evaluated in four scenarios: feedforward, Multilayer Perceptron (MLP), NARX (Levenberg-Marquardt), and NARX (Na&#239;ve Bayes). These scenarios provide accuracies of 95.6%, 97%, 97%, and 97.6%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_29-Network_Traffic_Classification_using_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image and AES Inspired Hex Symbols Steganography (IAIS) for Anti-Forensic Artifacts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080728</link>
        <id>10.14569/IJACSA.2017.080728</id>
        <doi>10.14569/IJACSA.2017.080728</doi>
        <lastModDate>2017-07-31T12:57:55.1600000+00:00</lastModDate>
        
        <creator>Somyia M. Abu Asbeh</creator>
        
        <creator>Sarah M. Hammoudeh</creator>
        
        <creator>Arab M. Hammoudeh</creator>
        
        <subject>Mobile Forensics, Anti-Forensics, Data Hiding, Steganography, AES, AIS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Technology (including mobiles and computers) has become a basic, indispensable need in our daily life. With an initial purpose of achieving basic functions such as communication, technology has evolved into a virtual gate to the whole world, connecting individuals through social media and various websites and applications. Most importantly, technology became the reservoir of our personal information and important, sensitive data. This has led to increased risks of security breaches and data thefts, demanding countermeasure approaches. One of these approaches is steganography, a data hiding approach that allows for invisible, relatively safe communication. Several forms of steganography have been developed, among which are Image steganography and our previously developed AES Inspired Steganography. In this paper, we propose a new variation in which we combine both of these approaches: Image and AES Inspired Hex Symbols Steganography (IAIS). This approach proposes hiding the hex symbol format of the encrypted secret data in a carrier image file. The image file is converted to a hexadecimal representation in which the hex symbols can be embedded without applying any noticeable changes to the original image. Deciphering the hidden information requires secret keys agreed upon by the communicating parties confidentially. These carrier files can be exchanged among mobile devices and/or computers. Comparisons between the original cover images and the cover images with the hidden text have shown that no changes occurred in the colour histogram of the images. However, the noise test has shown that exposure to noise can affect the hexadecimal content of the image, and hence the embedded hex symbol representation of the secret text.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_28-Image_and_AES_Inspired_Hex_Symbols_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Users’ Intentions to Use Mobile Government Applications in Saudi Arabia: TAM Applicability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080727</link>
        <id>10.14569/IJACSA.2017.080727</id>
        <doi>10.14569/IJACSA.2017.080727</doi>
        <lastModDate>2017-07-31T12:57:55.0970000+00:00</lastModDate>
        
        <creator>Raed Alotaibi</creator>
        
        <creator>Luke Houghton</creator>
        
        <creator>Kuldeep Sandhu</creator>
        
        <subject>TAM; Saudi Arabia; e-government; m-government applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>M-government applications in Saudi Arabia are still at an early stage. In this study, a modified technology acceptance model (TAM) was used to identify and measure the factors that influence users’ intentions to use m-government applications in Saudi Arabia. This study focuses on the relationships between behavioural intention to use (BIU) and six independent factors: three TAM constructs (perceived usefulness [PU], attitude towards use [ATU], and perceived ease of use [PEU]) and three external factors (perceived trustworthiness [TRU], perceived security [SEC], and awareness [AWAR]). Only PU, ATU and TRU had a significant positive influence on BIU for m-government applications. The results also showed that most participants had a positive attitude towards using m-government applications. Overall, the results demonstrate that the model is suitable in the Saudi m-government context.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_27-Factors_Influencing_Users_Intentions_to_Use_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>UHF RFID Reader Antenna using Novel Planar Metamaterial Structure for RFID System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080726</link>
        <id>10.14569/IJACSA.2017.080726</id>
        <doi>10.14569/IJACSA.2017.080726</doi>
        <lastModDate>2017-07-31T12:57:55.0170000+00:00</lastModDate>
        
        <creator>Marwa Zamali</creator>
        
        <creator>Lotfi Osman</creator>
        
        <creator>Hedi Ragad</creator>
        
        <creator>Mohamed Latrach</creator>
        
        <subject>Loop antenna; metamaterial; miniaturization; RFID system; UHF band</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>An Ultra High Frequency (UHF) half-loop antenna used in Radio Frequency Identification (RFID) systems is proposed with a planar patterned metamaterial structure of compact size. The size of the planar patterned metamaterial structure is (0.20λ*0.20λ*0.0023λ&#39;) mm3. This antenna consists of two metamaterial unit cells having negative permittivity and permeability. Simulation results of input return loss, radiation pattern, and directivity of this antenna are presented using CST software. A comparison between the conventional antenna and the new metamaterial half-loop antenna is also provided. The simulated results show that the metamaterial antenna has a resonance frequency of 0.866 GHz, a realized gain of 1.96 dB, and an efficiency increase of about 20%. Simulation and measurement results are in perfect agreement, which proves that the proposed antenna can operate in the UHF band for RFID systems.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_26-UHF_RFID_Reader_Antenna_using_Novel_Planar.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Efficient Pipelined Router Architecture for 3D Network on Chip</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080725</link>
        <id>10.14569/IJACSA.2017.080725</id>
        <doi>10.14569/IJACSA.2017.080725</doi>
        <lastModDate>2017-07-31T12:57:54.9870000+00:00</lastModDate>
        
        <creator>Bouraoui Chemli</creator>
        
        <creator>Abdelkrim Zitouni</creator>
        
        <creator>Alexandre Coelho</creator>
        
        <creator>Raoul Velazco</creator>
        
        <subject>3D network on chip; router optimization; turn model; parallel communication; router pipeline stages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>As a relevant communication structure for integrated circuits, the Network-on-Chip (NoC) architecture has attracted a wide range of research. Compared to conventional bus technology, NoC provides higher scalability and enhances system performance for future System-on-Chip (SoC) designs. Previously, we presented a packet-switching router design for 2D NoC which supports a 2D mesh topology. Despite the benefits offered over conventional bus technology, NoC architecture faces some limitations such as high communication cost, high power consumption and inefficient router pipeline usage. One of the proposed solutions is 3D design. In this context, we propose a router architecture for 3D mesh NoC, a natural extension of our prior 2D router design. The proposal uses wormhole switching and employs the turn model negative-first routing algorithm; thus, deadlocks are avoided, and dynamic arbiters are implemented to deal with the Quality of Service (QoS) expected by the network. We also present an optimization technique for the router pipeline stages. We prototyped the proposal on FPGA and synthesized it with Synopsys tools using 28 nm technology. Results are delivered and compared with other well-known works in terms of maximal clock frequency, area, power consumption and estimated peak performance.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_25-Design_of_Efficient_Pipelined_Router_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Key Agreement and Nodes Authentication Scheme for Body Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080724</link>
        <id>10.14569/IJACSA.2017.080724</id>
        <doi>10.14569/IJACSA.2017.080724</doi>
        <lastModDate>2017-07-31T12:57:54.9570000+00:00</lastModDate>
        
        <creator>Jawaid Iqbal</creator>
        
        <creator>Noor ul Amin</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Nizamud Din</creator>
        
        <subject>Body sensor network; hash function; node authentication; key agreement; session key</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The technological evolution of Wireless Sensor Networks (WSNs) gave birth to an attractive research area for health monitoring called the Body Sensor Network (BSN). In a BSN, tiny sensor nodes sense the physiological data of patients under medical health care and transmit these data to a Base Station (BS), which then forwards them to a Medical Server (MS). BSN is exposed to security threats due to the vulnerable wireless channel. Protecting human physiological data against adversaries is a major issue to address while keeping the constrained resources of BSN under consideration. Our proposed scheme consists of three stages: in the first stage, the initial secret key is deployed by the ward Medical Officer (MO); in the second stage, secure key exchange and node authentication are performed; and in the third stage, secure data communication takes place. We have compared our proposed scheme with three existing schemes. Our scheme is efficient in computation cost, communication overhead and storage compared to existing schemes, while providing sufficient security against adversaries.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_24-Efficient_Key_Agreement_and_Nodes_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Spectral Amplitude Coding (SAC) Technique for Optical CDMA System using Wavelength Division Multiplexing (WDM) Concepts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080723</link>
        <id>10.14569/IJACSA.2017.080723</id>
        <doi>10.14569/IJACSA.2017.080723</doi>
        <lastModDate>2017-07-31T12:57:54.9230000+00:00</lastModDate>
        
        <creator>Hassan Yousif Ahmed</creator>
        
        <creator>Medien Zeghid</creator>
        
        <subject>Optical Code Division Multiple Access System (OCDMA); Multiple Access Interference (MAI); Spectral Amplitude Coding (SAC); Wavelength Division Multiplexing (WDM); SAC-OCDMA/WDM MP (SW-MP) code; Cross Correlation (CC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This article introduces an improved method for Optical Code Division Multiple Access (OCDMA) systems. In this scheme, a hybrid technique is used in which Wavelength Division Multiplexing (WDM) is merged with Spectral Amplitude Coding (SAC) to efficiently diminish Multiple Access Interference (MAI) and alleviate the impact of Phase Induced Intensity Noise (PIIN) appearing in the photo-detection process. The proposed technique, SAC-OCDMA/WDM MP (SW-MP), is implemented using the Matrix Partitioning (MP) code family, which is constructed by merging mathematical sequence and algebraic approaches. The key notion is to create the code patterns in the SAC domain, then diagonally replicate them in the wavelength domain as blocks, which preserves the same code patterns for a given code weight. The SW-MP code family preserves a convenient code length property which gives flexibility in transmitter-receiver design. It is reported that the proposed scheme has the potential to remove MAI proficiently and improve system performance significantly.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_23-An_Efficient_Spectral_Amplitude_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Received Power Characteristics of Commercial Photodiodes in Indoor Los Channel Visible Light Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080722</link>
        <id>10.14569/IJACSA.2017.080722</id>
        <doi>10.14569/IJACSA.2017.080722</doi>
        <lastModDate>2017-07-31T12:57:54.9100000+00:00</lastModDate>
        
        <creator>Syifaul Fuada</creator>
        
        <creator>Angga Pratama Putra</creator>
        
        <creator>Trio Adiono</creator>
        
        <subject>Commercial photodiodes; LoS channel; power received; visible light communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>To date, the photodiode is still the first-choice component in optical communication, especially for visible light communication (VLC) systems. It has advantages in speed, energy consumption, and sensitivity compared to other devices (e.g., image sensors). There are many practical implementations of high-speed VLC which use photodiodes. Commercially available photodiodes typically have specific characteristics, so some consideration is needed to use them as optimal receiver devices in a VLC system. In this paper, an analysis of the received power characteristics of the photodiode in an indoor line-of-sight (LoS) channel of a VLC system is discussed. A MATLAB&#174; simulation (student version) is used as the modeling approach. The experiments are done by changing several parameters such as the semi-angle at half power of the transmitter, the distance from transmitter to receiver, room size, field-of-view (FOV), lens index and optical filter gain. From the results, it can be seen that distance, room size, FOV and LED power have a linear characteristic with respect to the received power of a commercial photodiode. Also, in the LoS channel model, the optical filter gain and lens index play an important role in defining the characteristics of the received power.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_22-Analysis_of_Received_Power_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security in OpenFlow Enabled Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080721</link>
        <id>10.14569/IJACSA.2017.080721</id>
        <doi>10.14569/IJACSA.2017.080721</doi>
        <lastModDate>2017-07-31T12:57:54.8930000+00:00</lastModDate>
        
        <creator>Abdalla Alameen</creator>
        
        <creator>Sadia Rubab</creator>
        
        <creator>Bhawna Dhupia</creator>
        
        <creator>Manjur Kolhar</creator>
        
        <subject>Software defined networks; OpenFlow; 5G network component; ONF; virtualization; SDN security framework; future security networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The inception of flow tables as a data plane abstraction, with forwarding rules managed by centralized controllers, in emerging Software Defined Networks (SDN) has spurred significant progress in OpenFlow based architectures. SDN is particularly fueled by data center networking and cloud computing. OpenFlow coupled with cloud solutions provides dynamic networking capabilities. With the benefits obtained from network services, security enforcement becomes more important and needs powerful techniques for its implementation. Extensive research in cloud security brings forward numerous methods of leveraging the SDN architecture with efficient security enforcement. The future of SDN and mobile networks also looks promising if security models satisfactorily cover the dynamic and flexible requirements of evolving networks. This paper presents a survey of the state of the art research on security techniques in OpenFlow based cloud environments. Security is one of the main aspects of any network. A fair study and evaluation of these methods is carried out in the paper, along with the security considerations in SDN and its enforcement. The security issues and recommendations for 5G networks are covered briefly. This work provides an understanding of the problem, its current solution space, and anticipated future research directions.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_21-Security_in_OpenFlow_Enabled_Cloud_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Copula Statistic for Measuring Nonlinear Dependence with Application to Feature Selection in Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080720</link>
        <id>10.14569/IJACSA.2017.080720</id>
        <doi>10.14569/IJACSA.2017.080720</doi>
        <lastModDate>2017-07-31T12:57:54.8770000+00:00</lastModDate>
        
        <creator>Mohsen Ben Hassine</creator>
        
        <creator>Lamine Mili</creator>
        
        <creator>Kiran Karra</creator>
        
        <subject>Copula; multivariate dependence; nonlinear systems; feature selection; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Feature selection in machine learning aims to find the best subset of input variables that reduces the computation requirement and improves the predictor performance. In this paper, a new index based on empirical copulas, termed the Copula Statistic (CoS), is introduced to assess the strength of statistical dependence and to test statistical independence. It is shown that this test exhibits higher statistical power than other indices. Finally, the CoS is applied to feature selection in machine learning problems, demonstrating its good performance.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_20-A_Copula_Statistic_for_Measuring_Nonlinear_Dependence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Optimization of Query Processing in Seabase Cloud Databases based on CCEVP Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080719</link>
        <id>10.14569/IJACSA.2017.080719</id>
        <doi>10.14569/IJACSA.2017.080719</doi>
        <lastModDate>2017-07-31T12:57:54.8470000+00:00</lastModDate>
        
        <creator>Abdulkadir &#214;ZDEMIR</creator>
        
        <creator>Hasan Asil</creator>
        
        <subject>SeaBase; optimization; query processing; database; adaption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>A cloud database is a database usually installed on cloud computing software platforms. There are several methods for query processing in cloud databases. This study tried to optimize query processing in the SeaBase cloud database and reduce query processing time. The method uses adaptability for optimization and was designed for cloud-based databases. The algorithm is composed of three components: 1) a multi-cloud query separator; 2) a query similarity detector based on the execution plan; and 3) a replacement policy. The method is implemented as a system in a fully object-oriented simulation, and the system is added to SeaBase as an agent. The evaluation results show that this method reduced response time by 1.9 percent.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_19-The_Optimization_of_Query_Processing_in_SeaBase_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Strategic Information Systems (SIS) in Supporting and Achieving the Competitive Advantages (CA): An Empirical Study on Saudi Banking Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080718</link>
        <id>10.14569/IJACSA.2017.080718</id>
        <doi>10.14569/IJACSA.2017.080718</doi>
        <lastModDate>2017-07-31T12:57:54.8170000+00:00</lastModDate>
        
        <creator>Nisreen F. Alshubaily</creator>
        
        <creator>Abdullah A. Altameem</creator>
        
        <subject>Strategic information systems (SIS); competitive advantage (CA); operational efficiency; information quality; innovation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The purpose of this research paper is to identify the significant role of Strategic Information Systems (SIS) in supporting Competitive Advantage (CA). It also explains their role in the dimensions that increase competitive advantage: operational efficiency, information quality and innovation. In order to achieve the goal of this study and to collect the primary data, the researchers designed a survey in the form of an electronic questionnaire. This survey instrument consisted of 20 questions. It was distributed to members of the study sample, which comprises managers at all levels and employees in the Saudi banking sector. The number of participants included in the survey was 147. The results of this study revealed that strategic information systems play a significant role in increasing operational efficiency, improving the quality of information and promoting innovation. This in turn enabled the organizations to achieve higher levels of competitive advantage. Strategic information systems have deep consequences for the organizations that adopt them; managers could achieve great and sustainable competitive advantages from such systems if they are carefully considered and developed. On the other hand, this study was conducted in the banking sector in the KSA context, so more research is needed in other sectors and in the context of other countries to confirm and generalize the results. Finally, the paper’s primary value lies in its ability to provide evidence that strategic information systems play a significant role in supporting and achieving competitive advantages in Saudi Arabia, particularly in the banking sector. Since there was a lack of such research in the Saudi context, this paper can provide a theoretical basis for future researchers as well as practical implications for managers.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_18-The_Role_of_Strategic_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Distributed Traffic Events Generator for Smart Highways</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080717</link>
        <id>10.14569/IJACSA.2017.080717</id>
        <doi>10.14569/IJACSA.2017.080717</doi>
        <lastModDate>2017-07-31T12:57:54.7830000+00:00</lastModDate>
        
        <creator>Abdelaziz Daaif</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Oum El Kheir Abra</creator>
        
        <subject>Event generator; smart highway; simulation; multi-agent systems; distributed computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper deals with a spatiotemporal traffic events generator for real highway networks. The goal is to use the event generator to test real-time and batch traffic analysis applications. In this context, we represent a highway network as an oriented graph based on the geographic data of the different sensor locations. The traffic is generated based on a socio-cultural calendar, using a virtual clock to speed up the simulation process. In order to enable our generator to support highway networks worldwide, we propose a dynamically sized distributed architecture based on multi-agent systems. In this platform, we distinguish the physical model based on sensors from the logical model based on an oriented graph. The architecture of the simulator and the results of some of its implementations applied to the Moroccan highway network are presented.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_17-An_Efficient_Distributed_Traffic_Events_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Dimensionality in Text Mining using Conjugate Gradients and Hybrid Cholesky Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080716</link>
        <id>10.14569/IJACSA.2017.080716</id>
        <doi>10.14569/IJACSA.2017.080716</doi>
        <lastModDate>2017-07-31T12:57:54.7530000+00:00</lastModDate>
        
        <creator>Jasem M. Alostad</creator>
        
        <subject>Data mining; non-negative matrix factorization; Cholesky decomposition; conjugate gradient algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Generally, data mining in larger datasets suffers from certain limitations in identifying the datasets relevant to given queries. The limitations include: lack of interaction in the required objective space; inability to handle the datasets or discrete variables in datasets, especially in the presence of missing variables; inability to classify the records as per the given query; and finally, poor generation of explicit knowledge for a query, which increases the dimensionality of the data. Hence, this paper aims at resolving the problems with increasing data dimensionality in datasets using modified non-negative matrix factorization (NMF). Further, the increased dimensionality arising due to the non-orthogonality of NMF is resolved with Cholesky decomposition (cdNMF). Initially, the structuring of datasets is carried out to form a well-defined geometric structure. Further, the complex conjugate values are extracted and the conjugate gradient algorithm is applied to reduce the sparse matrix from the data vector. The cdNMF is used to extract the feature vector from the dataset, and the data vector is linearly mapped from the upper triangular matrix obtained from the Cholesky decomposition. The experiment is validated against accuracy and normalized mutual information (NMI) metrics over three text databases of varied patterns. Further, the results prove that the proposed technique fits better with larger instances in finding the documents as per the query than NMF, neighborhood preserving non-negative matrix factorization (NPNMF), multiple manifolds non-negative matrix factorization (MMNMF), robust non-negative matrix factorization (RNMF), graph regularized non-negative matrix factorization (GNMF), and hierarchical non-negative matrix factorization (HNMF).</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_16-Reducing_Dimensionality_in_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Analysis of Students’ Activities on an E-Learning Platform based on Apache Spark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080715</link>
        <id>10.14569/IJACSA.2017.080715</id>
        <doi>10.14569/IJACSA.2017.080715</doi>
        <lastModDate>2017-07-31T12:57:54.7370000+00:00</lastModDate>
        
        <creator>Abdelmajid Chaffai</creator>
        
        <creator>Larbi Hassouni</creator>
        
        <creator>Houda Anoun</creator>
        
        <subject>Real time analytics; e-learning; big data; Hadoop; spark; Moodle; change data capture; streaming; data visualization clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Real-time analytics is the capacity to extract valuable insights from data that arrives continuously from activities on the web or from network sensors. It is widely used in web-based business to drive decisions based on users’ experiences, such as dynamic pricing and personalized advertising. Many universities have adopted web-based learning in their learning process. They use data-mining techniques to better understand students’ behavior, but most of the tools developed are based on historical, stored data and do not allow real-time reactivity. The online activities of learners generate, at high speed, a huge amount of data in the form of user interactions, which has all the characteristics to be considered Big Data. Dealing with the volume and velocity of these data, in order to inform and enable decision-makers to act at the right time, leads us to use new methods to capture E-Learning data and process it in real time. This paper focuses on the design and implementation of a modern, hybrid, real-time data pipeline architecture using Apache Flume to collect data, Apache Spark as a unified computation engine for performing analytics on students’ activity data, and Apache Hive as a data warehouse for storing the processed data for use by various reporting tools. To conceive this platform, we conducted an experiment on a Moodle database source.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_15-Real_Time_Analysis_of_Students_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Divide and Conquer Method on Endmember Extraction Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080714</link>
        <id>10.14569/IJACSA.2017.080714</id>
        <doi>10.14569/IJACSA.2017.080714</doi>
        <lastModDate>2017-07-31T12:57:54.7230000+00:00</lastModDate>
        
        <creator>Ihab Samir</creator>
        
        <creator>Bassam Abdellatif</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>Endmember extraction algorithm (EEA); endmember extraction (EE); automatic target generation process (ATGP); hyperspectral imagery; simplex growing algorithm (SGA); hyperspectral unmixing; vertex component analysis (VCA); divide and conquer method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>In hyperspectral imagery, endmember extraction (EE) is a main stage in the hyperspectral unmixing process; its role lies in extracting distinct spectral signatures, called endmembers, from a hyperspectral image, which serve as the main input for unsupervised hyperspectral unmixing to generate the abundance fractions for every pixel in the hyperspectral data. The EE process faces some difficulties: some endmembers are less distinct than their mixed background, and some endmembers occur only rarely in the data. In this paper, we propose a new technique that applies the divide and conquer method to the EE process to find these difficult (rare or less distinct) endmembers. The divide and conquer method divides the hyperspectral data scene into multiple divisions and treats each division as a standalone scene, enabling endmember extraction algorithms (EEAs) to extract difficult endmembers easily; finally, the extracted endmembers from all divisions are merged. We implemented this method on a real dataset using three EEAs (ATGP, VCA, and SGA) and recorded results that outperform those of the usual endmember extraction techniques for all algorithms used.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_14-New_Divide_and_Conquer_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hearing Aid Method by Equalizing Frequency Response of Phoneme Extracted from Human Voice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080713</link>
        <id>10.14569/IJACSA.2017.080713</id>
        <doi>10.14569/IJACSA.2017.080713</doi>
        <lastModDate>2017-07-31T12:57:54.6900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takuto Konishi</creator>
        
        <subject>Hearing aid; phoneme; frequency response; equalization filter; hidden markov model (HMM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>A hearing aid method based on equalizing the frequency response of phonemes extracted from human voice is proposed. One of the problems of existing hearing aids is poor customization of the frequency response compensation: the required frequency response characteristics differ from person to person among those who need a hearing aid. The proposed hearing aid is based on phoneme-by-phoneme frequency response equalization; the frequency characteristics of each phoneme are equalized. This is the specific feature of the proposed hearing aid method. Through experiments, it is found that the proposed phoneme-based hearing aid is superior to the conventional hearing aid.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_13-Hearing_Aid_Method_by_Equalizing_Frequency_Response.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of the Tabu Search Algorithm to Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080712</link>
        <id>10.14569/IJACSA.2017.080712</id>
        <doi>10.14569/IJACSA.2017.080712</doi>
        <lastModDate>2017-07-31T12:57:54.6770000+00:00</lastModDate>
        
        <creator>Zakaria KADDOURI</creator>
        
        <creator>Fouzia OMARY</creator>
        
        <subject>Symmetric encryption; heuristic; Tabu search; algorithm; scheduling problem; combinatorial problems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Tabu search is a powerful algorithm that has been applied with great success to many difficult combinatorial problems. In this paper, we have designed and implemented a symmetric encryption algorithm whose internal structure is mainly based on the Tabu search algorithm. This heuristic performs multiple searches among different solutions and stores the best solutions in an adaptive memory. First of all, we coded the encryption problem by simulating a scheduling problem. Next, we used an appropriate coding for our problem, and then a suitable evaluation function. Using the symmetric key generated by our algorithm, we illustrate the principles of encryption and decryption. Experimental results are given at the end of this paper, from which we examine the strengths of our new system and the elements that could be improved.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_12-Application_of_the_Tabu_Search_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance of the Bond Graph Approach for Diagnosing Electrical Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080711</link>
        <id>10.14569/IJACSA.2017.080711</id>
        <doi>10.14569/IJACSA.2017.080711</doi>
        <lastModDate>2017-07-31T12:57:54.6430000+00:00</lastModDate>
        
        <creator>Dhia Mzoughi</creator>
        
        <creator>Abderrahmene Sallami</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Bond graph; faults detection and isolation; electrical system; analytical redundancy relations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The increasing complexity of automated industrial systems and the constraints of competitiveness, in terms of production cost and facility security, have in recent years mobilized a large community of researchers to improve the monitoring and diagnosis of this type of process. This work proposes a reliable and efficient method for the diagnosis of an electrical system. Improving the reliability of such systems depends essentially on the fault detection and isolation algorithms. The developed method is based on the use of analytical redundancy relations, allowing the detection and isolation of faults that occur in the various elements of the system using a structural and causal analysis. In this context, the bond graph appears as an interesting approach since it models physical systems element by element, which facilitates the detection and location of faults. The simulation of the system is performed with the 20-sim software dedicated to bond graph applications.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_11-The_Performance_of_the_Bond_Graph_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Adaptive Scheme for Arabic Part-of-Speech Tagging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080710</link>
        <id>10.14569/IJACSA.2017.080710</id>
        <doi>10.14569/IJACSA.2017.080710</doi>
        <lastModDate>2017-07-31T12:57:54.6130000+00:00</lastModDate>
        
        <creator>Mohammad Fasha</creator>
        
        <subject>Arabic natural language processing (ANLP); part-of-speech (POS) tagging; part-of-speech tokenization scheme; morpho-syntactic tagging; Arabic declension system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper presents an Arabic-compliant part-of-speech (POS) tagging scheme based on atomic tag markers that are grouped together using brackets. This scheme promotes the speedy production of annotations while preserving the richness of the resultant annotations. The proposed scheme comprises two main elements: a new tokenization approach and a custom tool that enables the semi-automatic implementation of the scheme. The proposed model can serve in many scenarios where the user is in need of better Arabic support and more control over the part-of-speech tagging process. The scheme was used to annotate sample narratives and demonstrated capability and adaptability while addressing the various distinguishing features of the Arabic language, including its unique declension system. It also sets new baselines that are prospects for further exploration by future efforts.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_10-A_Proposed_Adaptive_Scheme_for_Arabic_Part_of_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Core Levels Algorithm for Optimization: Case of Microwave Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080709</link>
        <id>10.14569/IJACSA.2017.080709</id>
        <doi>10.14569/IJACSA.2017.080709</doi>
        <lastModDate>2017-07-31T12:57:54.5970000+00:00</lastModDate>
        
        <creator>Ali Haydar</creator>
        
        <creator>Ezgi Deniz &#220;lker</creator>
        
        <creator>Kamil Dimililer</creator>
        
        <creator>Sadik &#220;lker</creator>
        
        <subject>Metaheuristic algorithms; evolutionary algorithms; microwave circuits; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Metaheuristic algorithms are investigated and used by many researchers in different areas. It is crucial to find optimal solutions for all problems under study, especially for those which require sensitive optimization. For real-world problems in particular, solution quality and convergence speed are highly desired characteristics of an algorithm. In this paper, a new metaheuristic optimization algorithm called the Core Levels Algorithm (COLA) is proposed and analyzed. In the algorithm, two core levels are applied recursively to create new offspring from the parent vectors, which provides the desired balance between exploration and exploitation. The algorithm’s performance is first studied on some well-known benchmark functions and compared with previously proposed efficient evolutionary algorithms. The experimental results show that, even at the early stages of optimization, the obtained values are very close to or exactly the same as the optimum values of the analyzed functions. The performance of COLA is then investigated on real-world problems, namely selected microwave circuit designs. The results indicate that COLA produces stable results and provides high optimization accuracy without strong parameter dependency, even for real-world problems.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_9-Core_Levels_Algorithm_for_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an SOA Architectural Model for AAL-PaaS Design and Implementation Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080708</link>
        <id>10.14569/IJACSA.2017.080708</id>
        <doi>10.14569/IJACSA.2017.080708</doi>
        <lastModDate>2017-07-31T12:57:54.5670000+00:00</lastModDate>
        
        <creator>El murabet Amina</creator>
        
        <creator>Abtoy Anouar</creator>
        
        <creator>Abdellah Touhafi</creator>
        
        <creator>Abderahim Tahiri</creator>
        
        <subject>Ambient Assisted Living (AAL); Ambient Assisted Living Platform as a Service (AAL-PaaS); Service Oriented Architecture (SOA); Wireless Sensors Network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The main purpose of Ambient Assisted Living (AAL) systems is to improve the quality of life of special groups of people, including the elderly and people with physical disabilities. Driven by the critical ongoing changes in all modern, industrialized countries, there is a huge interest these days in IT-based equipment and services that facilitate daily tasks and extend the period of independence for these groups. Hence, AAL systems can benefit from the great advances in both intelligent systems and communication technologies as promising, growing research fields. The implementation of such a complicated yet vital system should be established on solid foundations, relying on a standard architecture, to satisfy and respond to the needs of heterogeneous stakeholders. This article proposes a Service Oriented Architecture model for an Ambient Assisted Living Platform as a Service based on Wireless Sensor Networks. It starts by presenting a classification of ambient assisted living services. Secondly, it describes some user and environmental challenges that have an impact on service quality. A discussion of architectural trends for AAL systems follows, together with a description of the challenges in designing and implementing an effective one. Finally, the paper introduces a new vision of a prototypical AAL system architecture.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_8-Towards_an_SOA_Architectural_Model_for_AAL_PaaS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reduced-Latency and Area-Efficient Architecture for FPGA-Based Stochastic LDPC Decoders</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080707</link>
        <id>10.14569/IJACSA.2017.080707</id>
        <doi>10.14569/IJACSA.2017.080707</doi>
        <lastModDate>2017-07-31T12:57:54.5330000+00:00</lastModDate>
        
        <creator>Ghania Zerari</creator>
        
        <creator>Abderrezak Guessoum</creator>
        
        <subject>Stochastic decoding; low-density parity-check (LDPC) decoder; field programmable gate array (FPGA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>This paper introduces a new field programmable gate array (FPGA) based stochastic low-density parity-check (LDPC) decoding process to implement fully parallel LDPC decoders. The proposed technique is designed to optimize the FPGA logic utilisation and to decrease the decoding latency. In order to reduce the complexity, the variable node (VN) output saturated counter is removed and each VN internal memory is mapped to only one slice of distributed RAM. Furthermore, an efficient VN initialization, using the channel input probability, is performed to improve the decoder convergence without requiring additional resources. The Xilinx FPGA implementation shows that the proposed decoding approach reaches high performance along with reduced logic utilisation, even for short codes. As a result, for a (200, 100) regular code, a 57% reduction of the average decoding cycles is attained with an important bit error rate improvement at Eb/N0 = 5.5 dB. Additionally, a significant hardware reduction is achieved.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_7-Reduced-Latency_and_Area-Efficient_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Pathway towards MAC Protocol in Enhancing Network Performance in Wireless Sensor Network (WSN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080706</link>
        <id>10.14569/IJACSA.2017.080706</id>
        <doi>10.14569/IJACSA.2017.080706</doi>
        <lastModDate>2017-07-31T12:57:54.4870000+00:00</lastModDate>
        
        <creator>Anitha K</creator>
        
        <creator>Usha S</creator>
        
        <subject>Delay; energy issues; latency; MAC protocol; scalability; Wireless Sensor Network (WSN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The applications and utility of Wireless Sensor Networks (WSN) have gathered pace, making an entry into the commercial market over the last five years. WSN has successfully established its association with the Internet of Things (IoT) and other reconfigurable networks. However, despite this exponential progress in technology, WSN still suffers from the elementary problems of energy efficiency, scalability, delay, and latency, for which Medium Access Control (MAC) protocols hold the primary responsibility. This paper reviews the frequently used MAC protocols and studies their advantages and limitations, followed by the most recent implementation work towards WSN performance enhancement. The paper finally outlines the unsolved problems in the existing research, briefly discusses the research gap, and lays out a plan of tentative future work to address that gap.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_6-Research_Pathway_towards_MAC_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection and Extraction Framework for DNA Methylation in Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080705</link>
        <id>10.14569/IJACSA.2017.080705</id>
        <doi>10.14569/IJACSA.2017.080705</doi>
        <lastModDate>2017-07-31T12:57:54.4570000+00:00</lastModDate>
        
        <creator>Abeer A. Raweh</creator>
        
        <creator>Mohammad Nassef</creator>
        
        <creator>Amr Badr</creator>
        
        <subject>DNA methylation; feature selection; feature extraction; cancer classification; epigenetics; biomarkers; hypomethylation; hypermethylation; methylation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Feature selection methods for cancer classification aim to overcome the high dimensionality of biomedical data, which is a challenging task. Most feature selection methods based on DNA methylation are time consuming during the testing phase when identifying the best subset of pertinent features relevant to accurate prediction. Hybridizing feature selection with feature extraction, however, yields a method that is far faster than feature selection alone. This paper proposes a framework based on novel feature selection methods that employ statistical variation, standard deviation and entropy, along with extraction methods, to predict cancer using three new features, namely Hypomethylation, Midmethylation and Hypermethylation. These new features represent the average methylation density of the corresponding three regions; they are extracted from the selected features based on an analysis of the methylation behavior. The effectiveness of the proposed framework is evaluated by breast cancer classification accuracy. The results give 98.85% accuracy using only three features out of 485,577. This result proves the capability of the proposed approach for breast cancer diagnosis and confirms that feature selection and extraction methods are critical for practical implementation.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_5-Feature_Selection_and_Extraction_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of A Clinically-Oriented Expert System for Differentiating Melanocytic from Non-melanocytic Skin Lesions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080704</link>
        <id>10.14569/IJACSA.2017.080704</id>
        <doi>10.14569/IJACSA.2017.080704</doi>
        <lastModDate>2017-07-31T12:57:54.4270000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <subject>Skin cancer; melanocytic; non-melanocytic; dermoscopy; deep learning; convolutional neural network; stack-based autoencoders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Differentiating melanocytic from non-melanocytic (MnM) skin lesions is the first and most important step required by clinical experts to automatically diagnose pigmented skin lesions (PSLs). In this paper, a new clinically-oriented expert system (COE-Deep) is presented for the automatic classification of MnM skin lesions through deep-learning algorithms, without relying on pre- or post-processing steps. For the development of the COE-Deep system, a convolutional neural network (CNN) model is employed to extract the prominent features from region-of-interest (ROI) skin images. Afterward, these features are further refined through stack-based autoencoders (SAE) and classified by a softmax linear classifier into the categories of melanocytic and non-melanocytic skin lesions. The performance of the COE-Deep system is evaluated on a dataset of 5200 clinical images obtained from different public and private resources. The significance of the COE-Deep system is statistically measured in terms of sensitivity (SE), specificity (SP), accuracy (ACC) and area under the receiver operating curve (AUC), based on a 10-fold cross-validation test. On average, values of 90% SE, 93% SP, 91.5% ACC and 0.92 AUC are obtained, and the results of the COE-Deep system are statistically significant. These experimental results indicate that the proposed COE-Deep system is better than state-of-the-art systems. Hence, the COE-Deep system is able to assist dermatologists during the screening process for skin cancer.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_4-Development_of_a_Clinically_Oriented_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impedance Matching of a Microstrip Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080703</link>
        <id>10.14569/IJACSA.2017.080703</id>
        <doi>10.14569/IJACSA.2017.080703</doi>
        <lastModDate>2017-07-31T12:57:54.3930000+00:00</lastModDate>
        
        <creator>Sameh Khmailia</creator>
        
        <creator>Hichem Taghouti</creator>
        
        <creator>Sabri Jmal</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Impedance matching; microstrip antenna; “L” matching network; bond graph model; principle of causality; wave matrix; scattering matrix; transmission and reflection characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>Microstrip patch antennas play a very significant role in communication systems. In recent years, the study of improving their performance has made great progress, and different methods have been proposed to optimize their characteristics, such as the gain, the bandwidth, the impedance matching and the resonance frequency. This paper presents a new method that improves the impedance matching, and thus increases the gain, of a rectangular microstrip antenna. The method is based on an adaptation technique using a simple “L” matching network. The originality of this work is the exploitation of the principle of causality, which permits detecting the problems of reflected waves and obtaining the suitable placement of the components that constitute the matching circuit.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_3-Impedance_Matching_of_a_Microstrip_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MobisenseCar: A Mobile Crowd-Based Architecture for Data Acquisition and Processing in Vehicle-Based Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080702</link>
        <id>10.14569/IJACSA.2017.080702</id>
        <doi>10.14569/IJACSA.2017.080702</doi>
        <lastModDate>2017-07-31T12:57:54.3630000+00:00</lastModDate>
        
        <creator>Lionel Nkenyereye</creator>
        
        <creator>Jong Wook Jang</creator>
        
        <subject>Mobile crowdsensing; data processing; web services; hadoop; hiveQL; OBD-II</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The use of wireless technology via smartphones allows designing smartphone applications based on OBD-II to increase environment sensing. However, uploading a vehicle’s diagnostic data via the driver’s tethered smartphone exhibits long Internet latency when a large number of concurrent users access the remote mobile crowdsensing server application simultaneously, which increases the communication cost. The large volume of data also challenges traditional data processing frameworks. This paper studies the design functionalities of a mobile crowdsensing architecture applied to vehicle-based sensing, for handling the huge amount of sensor data collected by vehicle-based sensors equipped with a smart device connected to the OBD-II interface. The proposed MobiSenseCar uses Node.js, a web server architecture based on a single-threaded event loop, and the Apache Hive platform, which is responsible for analyzing the vehicle’s engine data. Node.js is 40% faster than traditional thread-based web server approaches. Experimental results show that the MapReduce algorithm is highly scalable and optimized for distributed computing. With this mobile crowdsensing architecture it is possible to monitor a car’s diagnostic system condition in real time, improving driving ability and protecting the environment by reducing vehicle emissions.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_2-MobiSenseCar_A_Mobile_Crowd_Based_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Better Comparison Summary of Credit Scoring Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080701</link>
        <id>10.14569/IJACSA.2017.080701</id>
        <doi>10.14569/IJACSA.2017.080701</doi>
        <lastModDate>2017-07-31T12:57:54.2700000+00:00</lastModDate>
        
        <creator>Sharjeel Imtiaz</creator>
        
        <creator>Allan J. Brimicombe</creator>
        
        <subject>Credit score data mining; classification; artificial neural network; imputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(7), 2017</description>
        <description>The aim of credit scoring is to classify a customer’s credit as defaulter or non-defaulter. Credit risk analysis is more effective with further boosting and smoothing of the models’ parameters. The objective of this paper is to explore credit score classification models with and without an imputation technique. Data availability is low without imputation, because records with missing values are removed from the large dataset. On the other hand, on the imputation-based dataset, the classification accuracy of a linear ANN is better than that of the other models. The comparison of models with boosting and smoothing shows that error rate is a better metric than the area under the curve (AUC) ratio. It is concluded that the artificial neural network (ANN) is a better alternative than decision trees and logistic regression when data availability in the dataset is high.</description>
        <description>http://thesai.org/Downloads/Volume8No7/Paper_1-A_Better_Comparison_Summary_of_Credit_Scoring_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Optimization of Replicas in Tree Network of Data Grid with QoS and Bandwidth Constraints</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080662</link>
        <id>10.14569/IJACSA.2017.080662</id>
        <doi>10.14569/IJACSA.2017.080662</doi>
        <lastModDate>2017-07-05T13:20:33.8000000+00:00</lastModDate>
        
        <creator>Alireza Chamkoori</creator>
        
        <creator>Farnoosh Heidari</creator>
        
        <creator>Naser Parhizgar</creator>
        
        <subject>Hierarchical data grid; replication cost; replica optimal placement; communication cost; storage cost; cost minimization; QoS and bandwidth constraints</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Data Grid provides resources for data-intensive scientific applications that need to access huge amounts of data around the world. Since a data grid is built on a wide-area network, its latency prohibits efficient access to data. This latency can be decreased by replicating data in the vicinity of the users who request it. Data replication can also improve data availability and decrease network bandwidth usage. It is influenced by two important constraints: Quality of Service (QoS), which is locally owned by a user, and a bandwidth constraint that globally affects links that may be shared by multiple users. Guaranteeing both constraints while also minimizing the replication cost, consisting of communication and storage costs, is a challenging task. To address this problem, the authors propose a dynamic algorithm called Optimal Placement of Replicas that minimizes the replication cost while meeting both constraints. Heuristic algorithms are also designed that are competitive with the optimal algorithm in performance metrics such as replication cost, network bandwidth usage and data availability. Extensive simulations show that the Optimal algorithm saves 10% of the cost compared to the heuristic algorithms and provides local responsiveness for half of the user requests.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_62-Cost_Optimization_of_Replicas_in_Tree_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Service for Incremental and Automatic Data Warehouses Fragmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080661</link>
        <id>10.14569/IJACSA.2017.080661</id>
        <doi>10.14569/IJACSA.2017.080661</doi>
        <lastModDate>2017-07-04T11:21:38.2200000+00:00</lastModDate>
        
        <creator>Ettaoufik Abdelaziz</creator>
        
        <creator>Mohammed Ouzzif</creator>
        
        <subject>Data warehouse; horizontal fragmentation; incremental fragmentation; frequent queries; web service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Data warehouses (DW) are proposed to collect and store heterogeneous and bulky data. They represent a collection of thematic, integrated, non-volatile, and historical data. They are fed from different data sources through transactional queries and offer analytical data through decisional queries. Generally, the execution cost of decisional queries on large tables is very high, and reducing this cost becomes essential to enable decision-makers to interact in a reasonable time. In this context, DW administrators use different optimization techniques such as fragmentation, indexing, materialized views, and parallelism. On the other hand, the volume of data residing in the DW is constantly evolving. This can increase the complexity of frequent queries, which can degrade the performance of the DW. The administrator always has to manually design a new fragmentation scheme from the new load of frequent queries, so an automatic fragmentation tool for the DW becomes important. The approach proposed in this paper aims at an incremental horizontal fragmentation technique for the DW through a web service. This technique is based on updating the query load by adding new frequent queries and eliminating queries that do not remain frequent. The goal is to automate the implementation of incremental fragmentation in order to optimize the new query load. An experimental study on a real DW is carried out, and comparative tests show the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_61-Web_Service_for_Incremental_and_Automatic_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Expression Recognition using Hybrid Texture Features based Ensemble Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080660</link>
        <id>10.14569/IJACSA.2017.080660</id>
        <doi>10.14569/IJACSA.2017.080660</doi>
        <lastModDate>2017-07-04T11:21:38.1700000+00:00</lastModDate>
        
        <creator>M. Arfan Jaffar</creator>
        
        <subject>Expression classification; ensemble; adaboost; facial; features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Communication is fundamental to humans. Many scientific research studies in the literature have shown that 54 to 94 percent of human communication is non-verbal. Facial expressions are the most important part of non-verbal communication and the most promising way for people to communicate their feelings, emotions, and intentions. Pervasive computing and ambient intelligence are required to develop human-centered systems that actively react to complex human communication occurring naturally. Therefore, a Facial Expression Recognition (FER) system is required for this type of problem. In this paper, an FER system is proposed that uses hybrid texture features to predict human expressions. Existing FER systems have the problem of showing discrepancies across different cultures and ethnicities. The proposed system also addresses this problem by using hybrid texture features that are invariant to scale as well as rotation. For texture features, Gabor LBP (GLBP) features are used to classify expressions with a Random Forest classifier. Experimentation performed on different facial databases demonstrates promising results.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_60-Facial_Expression_Recognition_using_Hybrid_Texture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptography: A Comparative Analysis for Modern Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080659</link>
        <id>10.14569/IJACSA.2017.080659</id>
        <doi>10.14569/IJACSA.2017.080659</doi>
        <lastModDate>2017-07-04T11:21:38.0770000+00:00</lastModDate>
        
        <creator>Faiqa Maqsood</creator>
        
        <creator>Muhammad Ahmed</creator>
        
        <creator>Muhammad Mumtaz Ali</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <subject>Cryptography; symmetric; asymmetric; encryption; decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Cryptography plays a vital role in ensuring secure communication between multiple entities. In many contemporary studies, researchers have contributed towards identifying the best cryptography mechanisms in terms of their performance results. Selecting a cryptographic technique for a particular context is a big question; to answer it, many existing studies have claimed that technique selection depends purely on desired quality attributes such as efficiency and security. It has been identified that existing reviews focus only on either symmetric or asymmetric encryption types. Another limitation is that the criteria for performance comparison cover only common parameters. In this paper, we evaluate the performance of different symmetric and asymmetric algorithms, covering multiple parameters such as encryption/decryption time, key generation time, and file size. For evaluation purposes, we performed simulations in a sample context in which multiple cryptography algorithms were compared. The simulation results are visualized in a way that clearly depicts which algorithm is most suitable for achieving a particular quality attribute.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_59-Cryptography_A_Comparative_Analysis_for_Modern_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing: Pricing Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080658</link>
        <id>10.14569/IJACSA.2017.080658</id>
        <doi>10.14569/IJACSA.2017.080658</doi>
        <lastModDate>2017-07-02T08:39:29.8300000+00:00</lastModDate>
        
        <creator>Aferdita Ibrahimi</creator>
        
        <subject>Component; Cloud Computing pricing model; comparison of pricing model; Google Cloud platform and Amazon Web Service pricing model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Cloud computing is a fundamental aspect of the secure online provision of computing resources. It enables on-demand sharing of resources and costs among a large number of end users, and it allows end users to process, manage, and store data quickly at reasonable prices. It is significant to understand the causes of clients' hesitation regarding cloud computing services, particularly when it comes to a new pricing method. Price is an important element, an indicator that often reflects the quality of services; on the other hand, the provider's service offering directly influences clients' decisions to use those services. For both providers and users of cloud services, identifying the common factors in cloud services pricing is critical. This paper presents various pricing models for cloud computing and how they affect different resources, compares them, and examines the pricing models of two platforms: 1) Google Cloud Platform; and 2) Amazon Web Services.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_58-Cloud_Computing_Pricing_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis Using Deep Learning Techniques: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080657</link>
        <id>10.14569/IJACSA.2017.080657</id>
        <doi>10.14569/IJACSA.2017.080657</doi>
        <lastModDate>2017-06-30T14:44:24.4630000+00:00</lastModDate>
        
        <creator>Qurat Tul Ain</creator>
        
        <creator>Mubashir Ali</creator>
        
        <creator>Amna Riaz</creator>
        
        <creator>Amna Noureen</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Babar Hayat</creator>
        
        <creator>A. Rehman</creator>
        
        <subject>Sentiment analysis; recurrent neural network; deep neural network; convolutional neural network; recursive neural network; deep belief network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The World Wide Web, including social networks, forums, review sites, and blogs, generates enormous heaps of data in the form of users' views, emotions, opinions, and arguments about different social events, products, brands, and politics. The sentiments users express on the web have a great influence on readers, product vendors, and politicians. The unstructured data from social media need to be analyzed and well structured, and for this purpose sentiment analysis has received significant attention. Sentiment analysis refers to text organization that is used to classify expressed mind-sets or feelings in different manners, such as negative, positive, favorable, unfavorable, thumbs up, thumbs down, etc. The challenge for sentiment analysis is the lack of sufficient labeled data in the field of Natural Language Processing (NLP). To solve this issue, sentiment analysis and deep learning techniques have been merged, because deep learning models are effective due to their automatic learning capability. This review paper highlights the latest studies regarding the implementation of deep learning models, such as deep neural networks, convolutional neural networks, and many more, for solving different problems of sentiment analysis, such as sentiment classification, cross-lingual problems, textual and visual analysis, and product review analysis.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_57-Sentiment_Analysis_using_Deep_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature Selection Algorithm based on Mutual Information using Local Non-uniformity Correction Estimator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080656</link>
        <id>10.14569/IJACSA.2017.080656</id>
        <doi>10.14569/IJACSA.2017.080656</doi>
        <lastModDate>2017-06-30T14:44:24.4500000+00:00</lastModDate>
        
        <creator>Ahmed I. Sharaf</creator>
        
        <creator>Mohamed Abu El-Soud</creator>
        
        <creator>Ibrahim El-Henawy</creator>
        
        <subject>Feature subset selection; irrelevant features; mutual information; local non-uniformity correction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Feature subset selection is an effective approach for selecting a compact subset of features from the original set and removing irrelevant and redundant features from datasets. In this paper, a novel algorithm is proposed to select the best subset of features based on mutual information and a local non-uniformity correction estimator. The proposed algorithm consists of three phases: in the first phase, a ranking function is used to measure the dependency and relevance among features. In the second phase, candidates with higher dependency and minimum redundancy are selected to participate in the optimal subset. In the last phase, the produced subset is refined using a forward and backward wrapper filter to ensure its effectiveness. Datasets from the UCI machine learning repository are used for validation and testing. The performance of the proposed algorithm has been found to be very significant in terms of classification accuracy and time complexity.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_56-A_Feature_Selection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On FPGA Implementation of a Continuous-Discrete Time Observer for Sensorless Induction Machine using Simulink HDL Coder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080655</link>
        <id>10.14569/IJACSA.2017.080655</id>
        <doi>10.14569/IJACSA.2017.080655</doi>
        <lastModDate>2017-06-30T14:44:24.4170000+00:00</lastModDate>
        
        <creator>Moez Besbes</creator>
        
        <creator>Salim Hadj Sad</creator>
        
        <creator>Faouzi M’Sahli</creator>
        
        <creator>Monther Farza</creator>
        
        <subject>High gain observer; FPGA; HDL coder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>This paper deals with the design of a continuous-discrete time high gain observer (CDHGO) for sensorless control of an induction machine (IM). Only two weakly sampled stator current measurements are used to achieve real-time estimation of the rotor flux, the mechanical speed, and the load torque. The feasibility of implementing our algorithm on an FPGA target is discussed in terms of the best word format choice for internal variables and of overcoming the problems associated with complex block diagram to VHDL conversion. Before an eventual implementation on the Virtex FPGA board, a validation of the proposed observer is performed through the ModelSim software, where we show that the waveforms of the estimates closely track the true ones.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_55-On_FPGA_Implementation_of_a_Continuous.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Provenance for Cloud Computing using Watermark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080654</link>
        <id>10.14569/IJACSA.2017.080654</id>
        <doi>10.14569/IJACSA.2017.080654</doi>
        <lastModDate>2017-06-30T14:44:24.3570000+00:00</lastModDate>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Bilal Sarwar</creator>
        
        <creator>Waqar Hussain</creator>
        
        <subject>Cloud computing; data provenance; watermark; security; visible watermark; invisible watermark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The saying "data is the new oil" has become proverbial due to the large amount of data generated from various sources. Processing and storing such a tremendous amount of data is beyond the capabilities of traditional computing systems. Cloud computing is preferably considered a next-generation architecture due to its dynamic resource pools, low cost, reliability, virtualization, and high availability. In cloud computing, one important issue is to track and record the origin of data objects, which is known as data provenance. The major challenges for provenance management in a distributed environment are privacy and security. This paper presents data provenance management for cloud computing using a watermarking technique. The experiment is performed using both visible and hidden watermarks on shared data objects stored in a cloud computing environment. The experimental results demonstrate the efficiency and reliability of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_54-Data_Provenance_for_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Bluetooth based Scatternet for Mobile Ad hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080653</link>
        <id>10.14569/IJACSA.2017.080653</id>
        <doi>10.14569/IJACSA.2017.080653</doi>
        <lastModDate>2017-06-30T14:44:24.3400000+00:00</lastModDate>
        
        <creator>Khizra Asaf</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Irfan Khan</creator>
        
        <subject>Bluetooth; ad hoc network; piconet; scatternet; MANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Bluetooth based networking is an emerging and promising technology that takes small-area networking to an enhanced and better level of communication. The Bluetooth specification supports piconet formation; however, scatternet formation remains an open problem. The primary challenge in scatternet formation is the interconnection of piconets. This paper presents a review of the proposed approaches and the problems confronted in establishing scatternets for ad hoc networks, specifically MANETs. In this work, the Blue layer algorithm is compared with an MMPI interface based algorithm for Bluetooth scatternet formation. The enhancement in the developed MMPI framework makes it a good option for scatternet applications.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_53-A_Review_of_Bluetooth_based_Scatternet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Understanding of Intelligent User Interfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080652</link>
        <id>10.14569/IJACSA.2017.080652</id>
        <doi>10.14569/IJACSA.2017.080652</doi>
        <lastModDate>2017-06-30T14:44:24.3230000+00:00</lastModDate>
        
        <creator>Sarang Shaikh</creator>
        
        <creator>M. Ajmal Sawand</creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Farhan Badar Solangi</creator>
        
        <subject>Intelligent user interfaces; HCI; artificial intelligence; IIUI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>This paper presents a basic discussion of one of the latest advances in technology, known as the Intelligent User Interface (IIUI), which combines two major fields of computer science, namely HCI &amp; Artificial Intelligence. The paper first discusses basic definitions, the motivation for this research, and UIMS (User Interface Management System), along with examples of user interface models, in order to understand user interfaces in detail. The four major classes of these interfaces (with their examples) are taken as the method for this study. The overall discussion summarizes some basic principles used to create these interfaces, the components that are important in the generation of IUIs, and the decision-making process in IUIs, so that the reader can understand how IIUIs work.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_52-Comprehensive_Understanding_of_Intelligent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two-Stage Classifier Approach using RepTree Algorithm for Network Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080651</link>
        <id>10.14569/IJACSA.2017.080651</id>
        <doi>10.14569/IJACSA.2017.080651</doi>
        <lastModDate>2017-06-30T14:44:24.2930000+00:00</lastModDate>
        
        <creator>Mustapha Belouch</creator>
        
        <creator>Salah El Hadaj</creator>
        
        <creator>Mohamed Idhammad</creator>
        
        <subject>Intrusion detection; REPTree; UNSW-NB15; NSLKDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>In this paper, we present a two-stage classifier based on the RepTree algorithm and protocol subsets for a network intrusion detection system. To evaluate the performance of our approach, we used the UNSW-NB15 and NSL-KDD data sets. In the first phase, our approach divides the incoming network traffic into three protocol types, TCP, UDP, or Other, and then classifies it as normal or anomalous. In the second stage, a multiclass algorithm classifies the anomalies detected in the first phase to identify the attack class in order to choose the appropriate intervention. The number of features is reduced from over 40 to fewer than 20, according to the protocol, using feature selection techniques. Detection accuracies of 88.95% and 89.85% were achieved on the complete UNSW-NB15 and NSL-KDD data sets, respectively, using an individual classifier; these results are better than those of recent work on these data sets.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_51-A_Two_Stage_Classifier_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues in the Internet of Things (IoT): A Comprehensive Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080650</link>
        <id>10.14569/IJACSA.2017.080650</id>
        <doi>10.14569/IJACSA.2017.080650</doi>
        <lastModDate>2017-06-30T14:44:24.2600000+00:00</lastModDate>
        
        <creator>Mirza Abdur Razzaq</creator>
        
        <creator>Sajid Habib Gill</creator>
        
        <creator>Muhammad Ali Qureshi</creator>
        
        <creator>Saleem Ullah</creator>
        
        <subject>Internet of Things (IoT); security issues in IoT; security; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Wireless communication networks are highly prone to security threats. The major applications of wireless communication networks are in military, business, healthcare, retail, and transportation. These systems use wired, cellular, or ad hoc networks. Wireless sensor networks, actuator networks, and vehicular networks have received great attention in society and industry. In recent years, the Internet of Things (IoT) has received considerable research attention and is considered the future of the Internet. In the future, the IoT will play a vital role and will change our living styles and standards, as well as business models. The usage of IoT in different applications is expected to rise rapidly in the coming years. The IoT allows billions of devices, people, and services to connect with each other and exchange information. Due to the increased usage of IoT devices, IoT networks are prone to various security attacks. The deployment of efficient security and privacy protocols in IoT networks is extremely necessary to ensure confidentiality, authentication, access control, and integrity, among others. In this paper, an extensive and comprehensive study of security and privacy issues in IoT networks is provided.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_50-Security_Issues_in_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Distributed Generation on the Reliability of Local Distribution System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080649</link>
        <id>10.14569/IJACSA.2017.080649</id>
        <doi>10.14569/IJACSA.2017.080649</doi>
        <lastModDate>2017-06-30T14:44:24.2470000+00:00</lastModDate>
        
        <creator>Sanaullah Ahmad</creator>
        
        <creator>Sana Sardar</creator>
        
        <creator>Azzam Ul Asar</creator>
        
        <creator>Babar Noor</creator>
        
        <subject>Electric power system reliability; distributed generation; reliability assessment </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>With the growth of distributed generation (DG) and renewable energy resources, the power sector is becoming more sophisticated, and distributed generation technologies, with their diverse impacts on the power system, are becoming an attractive area for researchers. Reliability is one of the vital areas in electric power systems, defining the continuous supply of power and customer satisfaction. Around the world, many power generation and distribution companies conduct reliability tests to ensure a continuous supply of power to their customers. Most reliability problems in power systems are due to the distribution network. In this research, a reliability analysis of the distribution system is performed. The interruption frequency and interruption duration increase as the distance of load points from the feeder increases. Injecting a single DG unit into the distribution system increases its reliability; injecting multiple DG units at different locations near load points in the distribution network further increases reliability, while introducing multiple DG units at a single location also improves reliability. The reliability of the distribution system remains unchanged when the size of the DG unit is varied. Different reliability tests were conducted to find the optimum location to place DG in the distribution system. For these analyses, distribution feeder bus 2 of the RBTS is selected as a case study. The distribution feeder is modeled in ETAP, a software tool used for electrical power system modeling, analysis, design, optimization, operation, control, and automation. These results can be helpful for power utilities and power producers in conducting reliability tests and properly utilizing distributed generation sources for the future expansion of power systems.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_49-Impact_of_Distributed_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault-Tolerant Model Predictive Control for a Z(TN)-Observable Linear Switching Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080648</link>
        <id>10.14569/IJACSA.2017.080648</id>
        <doi>10.14569/IJACSA.2017.080648</doi>
        <lastModDate>2017-06-30T14:44:24.2300000+00:00</lastModDate>
        
        <creator>Abir SMATI</creator>
        
        <creator>Wassila CHAGRA</creator>
        
        <creator>Moufida KSSOURI</creator>
        
        <subject>Switching systems; Z(TN)-observability; finite control set predictive control; fault tolerant control; multicellular converter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>This work considers the control and state observation of linear switched systems with actuator faults. A particular problem is studied: the occurrence of a non-observable subsystem in the switching sequence. In this case, the accuracy of the state estimates decreases, affecting observer-based fault detection algorithms. In this paper, we propose a solution based on constrained switching control in a predictive scheme. An extension to fault-tolerant control is derived, using several hybrid observers for estimation and fault detection and a reconfigurable finite control set model-predictive controller. The paper includes experimental results applied to a multicellular converter to demonstrate the efficiency of the method.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_48-Fault_Tolerant_Model_Predictive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Image Encryption Technique based on Chaotic S-Box and Arnold Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080647</link>
        <id>10.14569/IJACSA.2017.080647</id>
        <doi>10.14569/IJACSA.2017.080647</doi>
        <lastModDate>2017-06-30T14:44:24.2130000+00:00</lastModDate>
        
        <creator>Shabieh Farwa</creator>
        
        <creator>Tariq Shah</creator>
        
        <creator>Nazeer Muhammad</creator>
        
        <creator>Nargis Bibi</creator>
        
        <creator>Adnan Jahangir</creator>
        
        <creator>Sidra Arshad</creator>
        
        <subject>Chaos; image encryption; tent map; S-box; Arnold transform; statistical analyses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>In recent years, chaos has been extensively used in cryptographic systems. In this regard, one-dimensional chaotic maps have gained increased attention because of their intrinsic simplicity and ease of application. Many image encryption algorithms based on chaotic substitution boxes (S-boxes) have been studied in the last few years, but some of them appear to lack robustness. In this paper, we propose an efficient scheme for image encryption that combines chaotic substitution based on the tent map with the scrambling effect of the Arnold transform. The proposed construction algorithm for the substitution box is, on one hand, straightforward and saves computational labour, while on the other, it provides highly efficient performance outcomes. The scheme under study uses an S-box based on a 1-D chaotic tent map. We partially encrypt the image using this S-box and then apply a certain number of iterations of the Arnold transform to attain the fully encrypted image. For decryption we apply the reverse process. The strength of the proposed method is determined through the most significant techniques used for statistical analysis, and it is proved that the proposed algorithm shows coherent results.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_47-An_Image_Encryption_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Big Data Analytics in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080646</link>
        <id>10.14569/IJACSA.2017.080646</id>
        <doi>10.14569/IJACSA.2017.080646</doi>
        <lastModDate>2017-06-30T14:44:24.1830000+00:00</lastModDate>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Awais Mobeen</creator>
        
        <creator>Muhammad Aslam</creator>
        
        <subject>Big data; Analytics; Healthcare; Analytical tools; Machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Debate on big data analytics has earned remarkable interest in industry as well as academia due to the knowledge, information and wisdom extracted from big data. Big data and cloud computing are the two most important trends that are defining the new emerging analytical tools. Big data has various applications in different fields like traffic control, weather forecasting, fraud detection, security, education enhancement and healthcare. Extraction of knowledge from large amounts of data has become a challenging task. Similarly, big data analysis can be used for effective decision making in healthcare by some modification of existing machine learning algorithms. In this paper, drawbacks of existing machine learning algorithms are summarized for big data analysis in healthcare.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_46-A_Survey_of_Big_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Japanese Tourism Recommender System with Automatic Generation of Seasonal Feature Vectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080645</link>
        <id>10.14569/IJACSA.2017.080645</id>
        <doi>10.14569/IJACSA.2017.080645</doi>
        <lastModDate>2017-06-30T14:44:24.1670000+00:00</lastModDate>
        
        <creator>Guan-Shen Fang</creator>
        
        <creator>Sayaka Kamei</creator>
        
        <creator>Satoshi Fujita</creator>
        
        <subject>Tourism recommender system; seasonal feature vector; Wikipedia; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Tourism recommender systems have been widely used in our daily life to recommend tourist spots matching users&#8217; preferences. In this paper, we propose a content-based tourism recommender system that considers the user&#8217;s travel season. In order to characterize seasonally variable features of spots, the proposed system generates seasonal feature vectors in three steps: 1) identify the vocabulary concerned through Wikipedia; 2) identify the trend over all spots through Twitter for each season; and 3) highlight the weight of words contained in each identified trend. In deciding a recommendation, it not only matches the user profile with features of spots but also takes the user&#8217;s travel season into account. The effectiveness of the proposed system is evaluated by a series of experiments, i.e., computer simulation and questionnaire evaluation. The results indicate that: 1) those vectors certainly reflect the similarity of spots for the designated time period, and 2) using such vectors of spots, the system successfully realizes seasonal tourism recommendation.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_45-A_Japanese_Tourism_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Efficient Graph Traversal using a Multi-GPU Cluster</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080644</link>
        <id>10.14569/IJACSA.2017.080644</id>
        <doi>10.14569/IJACSA.2017.080644</doi>
        <lastModDate>2017-06-30T14:44:24.0900000+00:00</lastModDate>
        
        <creator>Hina Hameed</creator>
        
        <creator>Nouman M Durrani</creator>
        
        <creator>Sehrish Hina</creator>
        
        <creator>Jawwad A. Shamsi</creator>
        
        <subject>Graph processing; GPU cluster; distributed graph traversal API; CUDA; BFS; MPI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Graph processing has always been a challenge due to its inherent complexities. These include scalability to larger data sets and clusters, dependencies between vertices in the graph, irregular memory accesses during processing and traversals, minimal locality of reference, etc. In the literature, there are several implementations for parallel graph processing on single-GPU systems but only a few for single- and multi-node multi-GPU systems. In this paper, the prospects of improving large graph traversals by utilizing a multi-GPU cluster for the Breadth First Search algorithm have been studied. In this regard, DiGPU, a CUDA-based implementation for graph traversal in shared-memory multi-GPU and distributed-memory multi-GPU systems, has been proposed. In this work, an open source software module has also been developed and verified through a set of experiments. Further, evaluations have been demonstrated on a local cluster as well as on the CDER cluster. Finally, experimental analysis has been performed on several graph data sets using different system configurations to study the impact of load distribution with respect to GPU specification on the performance of our implementation.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_44-Towards_Efficient_Graph_Traversal_using_a_Multi_GPU_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Encryption for Wireless Multimedia Sensors Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080643</link>
        <id>10.14569/IJACSA.2017.080643</id>
        <doi>10.14569/IJACSA.2017.080643</doi>
        <lastModDate>2017-06-30T14:44:24.0600000+00:00</lastModDate>
        
        <creator>Amina Msolli</creator>
        
        <creator>Haythem Ameur</creator>
        
        <creator>Abdelhamid Helali</creator>
        
        <creator>Hassen Maaref</creator>
        
        <subject>Wireless Multimedia Sensor Network (WMSN); image encryption; Shift-AES; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Security in wireless multimedia sensor networks is a crucial challenge engendered by environmental and material constraint requirements and by energy consumption. Standard encryption algorithms are not suitable for real-time applications on this network. One of the solutions to the challenges mentioned above is to maintain safety while reducing energy consumption. In this article, a new approach with high energy efficiency, a high level of security and strong robustness against statistical and differential attacks is presented. The new approach, called Shift-AES, admits simple operations such as substitution, transposition by exclusive-or and shift. It keeps Shannon&#8217;s principles of diffusion and confusion. Several criteria to measure the performance of the approach, such as visual inspection, histogram analysis, image entropy, the correlation of two adjacent pixels, analysis against differential attacks, and analysis of run-time and throughput performance, are successfully applied. The experimental evaluation of the proposed Shift-AES algorithm proves that it is ideal for wireless multimedia sensor networks. With a satisfactory level of security and better timeliness and transmission throughput compared with the standard AES encryption algorithm, this approach allows us to increase the lifetime of the network.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_43-Secure_Encryption_for_Wireless_Multimedia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Systematic Literature Review to Determine the Web Accessibility Issues in Saudi Arabian University and Government Websites for Disable People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080642</link>
        <id>10.14569/IJACSA.2017.080642</id>
        <doi>10.14569/IJACSA.2017.080642</doi>
        <lastModDate>2017-06-30T14:44:24.0270000+00:00</lastModDate>
        
        <creator>Muhammad Akram</creator>
        
        <creator>Rosnafisah Bt Sulaiman</creator>
        
        <subject>Web accessibility; disability; e-government; web contents accessibility guidelines; WCAG 1.0; WCAG 2.0; accessibility evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The Kingdom of Saudi Arabia has shown great commitment and support over the past 10 years towards higher education and the transformation of manual governmental services into online web services. As a result, the number of university and e-government websites has increased, but without following proper accessibility guidelines. Due to this, many disabled people may not fully benefit from the contents available on university and government websites. According to the World Health Organization (WHO) report, there are more than one billion people all over the world facing different kinds of disabilities. Almost 720,000 Saudi nationals are disabled, which is about 4% of the total Saudi population. The objective of this study is to review the existing literature to identify the web accessibility issues in Saudi Arabian university and government websites through a systematic literature review. Several scholarly databases were searched for research studies published on web accessibility evaluation globally and in Saudi Arabia from 2009 to 2017. Only 15 (6 based on Saudi Arabia and 9 global) research articles out of 123 fulfilled the selection criteria. The literature review reveals that web accessibility is a global issue and many countries around the world, including Saudi Arabia, face web accessibility challenges. Moreover, the web accessibility guidelines WCAG 1.0 and WCAG 2.0 do not address many problems faced by users, and some guidelines were not effective in avoiding user problems. However, the findings in this study open a new dimension in web accessibility, calling for extensive research to determine web accessibility criteria/standards in the context of Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_42-A_Systematic_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quizrevision: A Mobile Application using the Google MIT App Inventor Language Compared with LMS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080641</link>
        <id>10.14569/IJACSA.2017.080641</id>
        <doi>10.14569/IJACSA.2017.080641</doi>
        <lastModDate>2017-06-30T14:44:24.0130000+00:00</lastModDate>
        
        <creator>Mohamed A. Amasha</creator>
        
        <creator>Shaimaa Al-Omary</creator>
        
        <subject>Quizrevision; mobile application; LMS; e-learning; e-course; MIT APP Inventor; Android devices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>At Qassim University, the Blackboard (https://lms.qu.edu.sa) Learning Management System (LMS) is used. An exploratory study was conducted on 105 randomly selected students attending Qassim University. Of these, 91 students (87%) affirmed that they did not use the LMS as a study aide. This paper describes the means by which the MIT App Inventor language could be used to develop a mobile application (app) for the Android operating system. The app, Quizrevision, enables students to review course knowledge and concepts. An online survey was used to investigate students’ perceptions and gather their feedback regarding the use of Quizrevision as a study aide, as compared to the LMS. An achievement test was used to examine the improvement of students’ scores. Data was collected from 114 students taking the Phonetics course (Arab 342) in the Arabic Language Department (ALD) of Qassim University; 63 of them (55.27%) were male, and 51 (44.73%) were female. Descriptive statistics, chi-square, and t-test were used to analyze the data. The results indicated that the Quizrevision app supported the students’ achievement. There was a positive attitude towards using the Quizrevision app, as well as higher engagement in using the app as compared with using the LMS. In addition, findings confirm that students prefer using m-learning apps rather than using LMSs for reviewing course concepts and knowledge. Furthermore, student scores improved after using the app.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_41-Quizrevision_A_Mobile_Application_using_the_Google_MIT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and FPGA Implementation of a Thermal Peak Detection Unit for Complex System Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080640</link>
        <id>10.14569/IJACSA.2017.080640</id>
        <doi>10.14569/IJACSA.2017.080640</doi>
        <lastModDate>2017-06-30T14:44:23.9970000+00:00</lastModDate>
        
        <creator>Aziz Oukaira</creator>
        
        <creator>Ouafaa Ettahri</creator>
        
        <creator>Ahmed Lakhssassi</creator>
        
        <subject>Thermal peak; complex system design; MATLAB; GDS; RO; FPGA; DE1</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>This paper presents the modeling and implementation of a thermal peak detection unit for complex system design. The modeling step, which starts with modeling the formula of the heat source using the Simulink/Matlab tool, is the main objective of this work. Then the input temperature, the angles, the distance, as well as certain frequencies, are obtained from this formula using the GDS (Gradient Direction Sensor) method based on RO (Ring Oscillator). Before transitioning to the implementation on an FPGA board, VHDL code is used to describe the thermal peak detection unit in order to verify and validate the whole module. This work offers a solution to thermally induced stress and local overheating in complex system designs, which have been a major concern for designers of integrated circuits. In this paper, a DE1 FPGA board (Cyclone V family, 5CSEMA5F31C6) is used for the implementation.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_40-Modeling_and_FPGA_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grid Connected PV Plant based on Smart Grid Control and Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080639</link>
        <id>10.14569/IJACSA.2017.080639</id>
        <doi>10.14569/IJACSA.2017.080639</doi>
        <lastModDate>2017-06-30T14:44:23.9670000+00:00</lastModDate>
        
        <creator>Ibrahim Benabdallah</creator>
        
        <creator>Abeer Oun</creator>
        
        <creator>Adn&#232;ne Cherif</creator>
        
        <subject>Distributed generation systems (DGS); smart grid (SG); smart meters (SM); photovoltaic systems (PVS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Today, the smart grid is considered an attractive technology for monitoring and managing grid-connected renewable energy plants due to its flexibility, network architecture and communication between providers and consumers. Smart grids have been deployed with renewable energy resources to be securely connected to the grid. Indeed, this technology aims to complement power generation demand with distributed storage. For this reason, a system powered by photovoltaics (PV) has been chosen as an interesting solution due to its competitive cost and technical structure. To achieve this goal, a realistic smart grid configuration design is presented and evaluated using a radial infrastructure. Three voltage models are used to demonstrate the grid design. Smart Meters are included via a SCADA system to acquire and monitor the electrical signal characteristics during the day and to evaluate them through a statistical report. An operational data center (ODC) is used to collect the SMs&#8217; statistical reports and to review the demand-offer (DO) power balance. The results obtained with Matlab/Simulink are validated using the well-known ETAP software.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_39-Grid_Connected_PV_Plant_based_on_Smart_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Top-k Most Influential Nodes by using the Topological Diffusion Models in the Complex Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080638</link>
        <id>10.14569/IJACSA.2017.080638</id>
        <doi>10.14569/IJACSA.2017.080638</doi>
        <lastModDate>2017-06-30T14:44:23.9330000+00:00</lastModDate>
        
        <creator>Maryam Paidar</creator>
        
        <creator>Sarkhosh Seddighi Chaharborj</creator>
        
        <creator>Ali Harounabadi</creator>
        
        <subject>Topological Diffusion; TOPSIS; Social Network; Complex Network; Interactive and Non-interactive Activities; Heat Diffusion Kernel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Social networks are a subset of complex networks, where users are defined as nodes and the connections between users as edges. One of the important issues in social network analysis is identifying influential and penetrable nodes. Centrality is an important method, among many others, used for the identification of influential nodes. Centrality criteria include degree centrality, betweenness centrality, closeness centrality and eigenvector centrality, all of which are used to identify influential nodes in weighted and unweighted networks. TOPSIS is another basic, multi-criteria method that employs the four centrality criteria simultaneously to identify influential nodes, which makes it more accurate than the above criteria taken individually. Another method used for identifying influential or top-k influential nodes in complex social networks is the heat diffusion kernel: as one of the topological diffusion models, it identifies nodes based on heat diffusion. In the present paper, to use the topological diffusion model, the social network graph is drawn up from the interactive and non-interactive activities; then, based on the diffusion, the dynamic equations of the graph are modeled. This is followed by using improved heat diffusion kernels to improve the accuracy of influential node identification. After several runs of the topological diffusion models, those users who diffused more heat were chosen as the most influential nodes in the concerned social network. Finally, to evaluate the model, the current method was compared with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS).</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_38-Identifying_Top_k_Most_Influential_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review and Proof of Concept for Phishing Scam Detection and Response using Apoptosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080637</link>
        <id>10.14569/IJACSA.2017.080637</id>
        <doi>10.14569/IJACSA.2017.080637</doi>
        <lastModDate>2017-06-30T14:44:23.9170000+00:00</lastModDate>
        
        <creator>A Yahaya Lawal Aliyu</creator>
        
        <creator>Madihah Mohd Saudi</creator>
        
        <creator>Ismail Abdullah</creator>
        
        <subject>Phishing; apoptosis; phishing detection; phishing response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Phishing scam is a well-known fraudulent activity in which victims are tricked into revealing their confidential information, especially financial information. There are various phishing schemes such as deceptive phishing, malware-based phishing, DNS-based phishing and many more. Therefore, in this paper, a systematic review analysis of existing works related to phishing detection and response techniques together with apoptosis has been carried out and evaluated. Furthermore, one case study showing a proof of concept of how phishing works is also discussed. This paper also discusses the challenges and the potential future research related to the integration of a phishing detection and response model with apoptosis. This paper can also be used as a reference and guide for further study on phishing detection and response.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_37-A_Review_and_Proof_of_Concept_for_Phishing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Malware Classification via  System Calls and Permission for GPS Exploitation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080636</link>
        <id>10.14569/IJACSA.2017.080636</id>
        <doi>10.14569/IJACSA.2017.080636</doi>
        <lastModDate>2017-06-30T14:44:23.8870000+00:00</lastModDate>
        
        <creator>Madihah Mohd Saudi</creator>
        
        <creator>Muhammad ‘Afif b. Husainiamer</creator>
        
        <subject>Mobile malware; Global Positioning System (GPS) exploitation; system call; permission; covering algorithm; static and dynamic analyses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Nowadays, smartphones are used worldwide for effective communication, which makes our life easier. Unfortunately, most current cyber threats, such as identity theft and mobile malware, target smartphone users and are driven by profit gain. They spread fast among users, especially via Android smartphones. They exploit smartphones in many ways, such as through the Global Positioning System (GPS), SMS, call logs, audio or images. Therefore, to detect mobile malware, this paper presents 32 patterns of permissions and system calls for GPS exploitation obtained using a covering algorithm. The experiment was conducted in a controlled lab environment, using static and dynamic analyses, with 5560 Drebin malware samples as the training dataset and 500 mobile apps from the Google Play Store for testing. As a result, 21 out of 500 apps matched these 32 patterns. These new patterns can be used as guidance for researchers in the same field in identifying mobile malware and as input for the formation of a new mobile malware detection model.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_36-Mobile_Malware_Classification_via_System_Calls.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ASCII based Sequential Multiple Pattern Matching Algorithm for High Level Cloning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080635</link>
        <id>10.14569/IJACSA.2017.080635</id>
        <doi>10.14569/IJACSA.2017.080635</doi>
        <lastModDate>2017-06-30T14:44:23.8700000+00:00</lastModDate>
        
        <creator>Manu Singh</creator>
        
        <creator>Vidushi Sharma</creator>
        
        <subject>Pattern matching; ASCII based; high level clone; file clone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>For high-level clones, present research on clone detection focuses on developing better algorithms. For this purpose, many algorithms have been proposed, but methods that are more efficient and robust are still required. Pattern matching is one of those promising approaches with the required potential in computer science research. The structural clones of high-level clones comprise lower-level smaller clones with similar code fragments; the repetitive occurrence of simple clones in a file may prompt higher file-level clones. The proposed algorithm detects repetitive patterns in the same file and clones at a higher level of abstraction, such as the file. In genetics, a number of algorithms are used to identify DNA sequences. When compared with some of the existing algorithms, the proposed ASCII-based sequential multiple pattern matching algorithm gives better performance. The present method increases overall performance and gradually reduces the number of comparisons and the character-per-comparison ratio by avoiding unnecessary DNA comparisons.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_35-ASCII_based_Sequential_Multiple_Pattern_Matching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improvement of Power Saving Class Type II Algorithm in WiMAX Sleep-mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080634</link>
        <id>10.14569/IJACSA.2017.080634</id>
        <doi>10.14569/IJACSA.2017.080634</doi>
        <lastModDate>2017-06-30T14:44:23.8400000+00:00</lastModDate>
        
        <creator>Mehrdad Davoudi</creator>
        
        <creator>Mohammad-Ali Pourmina</creator>
        
        <creator>Ahmad Salahi</creator>
        
        <subject>WiMAX; IEEE 802.16; sleep mode; power saving class type II (PSC II); proactive buffer; quality of service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Because users can connect to a WiMAX (IEEE 802.16) network wirelessly with large-scale movement capability, it is inevitable that they cannot access electrical power sources at their desired time. As a result, a mechanism is needed to reduce power consumption; therefore, three power saving classes have been defined in WiMAX, each designed for a specific application. Although using a suitable power saving class (PSC) can reduce power consumption significantly, a lack of cross-layer coordination can reduce the efficiency of the power saving mechanism. Since real-time services, which are related to power saving class type II (PSC II), have great importance and vast applications, an improved PSC II algorithm for WiMAX is proposed in this paper which not only guarantees WiMAX quality of service (QoS) but also provides cross-layer coordination using a proactive buffer, resulting in less power consumption. A comparison between the performance of the proposed algorithm and the predefined PSC II algorithm in WiMAX using computer simulations shows that the proposed algorithm reduces power consumption by 60 percent while WiMAX QoS is still guaranteed.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_34-An_Improvement_of_Power_Saving_Class.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Fuzzy-based Hybrid Approach for Segmentation and Centerline Extraction of Main Coronary Arteries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080633</link>
        <id>10.14569/IJACSA.2017.080633</id>
        <doi>10.14569/IJACSA.2017.080633</doi>
        <lastModDate>2017-06-30T14:44:23.8230000+00:00</lastModDate>
        
        <creator>Khadega Khaled</creator>
        
        <creator>Mohamed A. Wahby Shalaby</creator>
        
        <creator>Khaled Mostafa El Sayed</creator>
        
        <subject>Automatic segmentation; coronary arteries; computed tomography angiography; centerlines extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Coronary artery segmentation and centerline extraction is an important step in Coronary Artery Disease diagnosis. The main purpose of the presented fully automated approach is to help the non-invasive clinical diagnosis process be performed quickly and with accurate results. In this paper, a hybrid scheme is proposed to segment the coronary arteries and extract the centerlines from Computed Tomography Angiography volumes. The proposed automatic hybrid segmentation approach combines the Hough transform with a fuzzy-based region growing algorithm. First, a circular Hough transform is used to initially detect the aorta circle. Then, the well-known Fuzzy c-means algorithm is employed to detect the seed points for the region growing algorithm, resulting in a 3D binary volume. Finally, the centerlines of the segmented arteries are extracted from the segmented 3D binary volume using a skeletonization-based method. The proposed algorithm is tested and evaluated using a benchmark database provided by the Rotterdam Coronary Artery Algorithm Evaluation Framework. A comparative study shows that the proposed hybrid scheme achieves higher accuracy than the most related recently published work, at reasonable computational cost.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_33-Automatic_Fuzzy-based_Hybrid_Approach_for_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Natural Language Text as Controlled and Uncontrolled for UML Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080632</link>
        <id>10.14569/IJACSA.2017.080632</id>
        <doi>10.14569/IJACSA.2017.080632</doi>
        <lastModDate>2017-06-30T14:44:23.7630000+00:00</lastModDate>
        
        <creator>Nakul Sharma</creator>
        
        <creator>Prasanth Yalla</creator>
        
        <subject>Natural Language Processing; UML Diagrams; Software Engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Natural language text falls within the categories of controlled and uncontrolled natural language. In this paper, an algorithm is presented to determine whether a given text is controlled or uncontrolled. Parameters and a framework are provided for a repository of UML diagrams, along with the parameters for controlled and uncontrolled languages.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_32-Classifying_Natural_Language_Text_as_Controlled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Packet Classification using Neural Network based on Training Function and Hidden Layer Neuron Number Variation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080631</link>
        <id>10.14569/IJACSA.2017.080631</id>
        <doi>10.14569/IJACSA.2017.080631</doi>
        <lastModDate>2017-06-30T14:44:23.7300000+00:00</lastModDate>
        
        <creator>Imam Riadi</creator>
        
        <creator>Arif Wirawan Muhammad</creator>
        
        <creator>Sunardi</creator>
        
        <subject>Classification; DDoS; neural; network; training; function; hidden; layer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Distributed denial of service (DDoS) is a structured network attack coming from various sources and fused to form a large packet stream. A DDoS packet stream behaves like a normal packet stream, making it very difficult to distinguish between DDoS and normal traffic. Network packet classification is one of the network defense mechanisms used to avoid DDoS attacks. An Artificial Neural Network (ANN) can be an effective tool for network packet classification given the appropriate combination of hidden layer neuron numbers and training functions. This study found that the best classification accuracy, 99.6%, was given by an ANN whose number of hidden layer neurons was half or twice the number of input neurons, with twice the number of input neurons giving stable accuracy across all training functions. An ANN with the Quasi-Newton training function is not much affected by variation in the number of hidden layer neurons, unlike ANNs with the Scaled-Conjugate and Resilient-Propagation training functions.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_31-Network_Packet_Classification_Using_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analytical Model for Availability Evaluation of Cloud Service Provisioning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080630</link>
        <id>10.14569/IJACSA.2017.080630</id>
        <doi>10.14569/IJACSA.2017.080630</doi>
        <lastModDate>2017-06-30T14:44:23.7000000+00:00</lastModDate>
        
        <creator>Fatimah M. Alturkistani</creator>
        
        <creator>Saad S. Alaboodi</creator>
        
        <subject>Cloud computing; availability evaluation; series and parallel configuration; infrastructure as service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Cloud computing is a major technological trend that continues to evolve and flourish. With the advent of the cloud, high availability assurance of cloud services has become a critical issue for cloud service providers and customers. Several studies have considered the problem of cloud service availability modeling and analysis. However, the complexity of the cloud service provisioning system and the deep dependency stack of its layered architecture make it challenging to evaluate the availability of cloud services. In this paper, we propose a novel analytical model of cloud service provisioning system availability. Further, we provide a detailed methodology for evaluating cloud service availability using series/parallel configurations and operational measures. The results of a case study using a simulated cloud computing infrastructure illustrate the usability of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_30-An_Analytical_Model_for_Availability_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Solution for Congestion Control in CoAP-based Group Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080629</link>
        <id>10.14569/IJACSA.2017.080629</id>
        <doi>10.14569/IJACSA.2017.080629</doi>
        <lastModDate>2017-06-30T14:44:23.6830000+00:00</lastModDate>
        
        <creator>Fathia OUAKASSE</creator>
        
        <creator>Said RAKRAK</creator>
        
        <subject>Internet of Things (IoT); Constrained Application Protocol (CoAP); congestion control; group communication; multicast; unicast</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The use of lightweight devices and constrained resources such as Wireless Sensor Networks (WSNs) makes traffic patterns in the Internet of Things (IoT) different from those in conventional networks. One of the most prominent emerging messaging protocols used to address the needs of these lightweight IoT nodes is the Constrained Application Protocol (CoAP). CoAP presents many advantages compared to other IoT application layer protocols; it ensures group communication via multicast communications between a server and multiple clients. Nevertheless, it does not support group communication from a client to multiple servers; it relies on multiple unicasts to do so. Because these constrained devices communicate via a large number of messages and notifications, network congestion occurs. This paper proposes an adaptive congestion control algorithm designed for group communications using unicast between a client and multiple servers. Simulation results show that the proposed mechanism achieves higher performance in terms of response time and packet loss.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_29-An_Adaptive_Solution_for_Congestion_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation into the Suitability of k-Nearest Neighbour (k-NN) for Software Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080628</link>
        <id>10.14569/IJACSA.2017.080628</id>
        <doi>10.14569/IJACSA.2017.080628</doi>
        <lastModDate>2017-06-30T14:44:23.6530000+00:00</lastModDate>
        
        <creator>Razak Olu-Ajayi</creator>
        
        <subject>Software effort estimation; machine learning; k-Nearest Neighbor; Constructive COst MOdel II</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Software effort estimation is an increasingly significant field, due to the overwhelming role of software in today’s global market. Effort estimation involves forecasting the effort, in person-months or hours, required to develop software. It is vital for ideal planning and paramount for controlling the software development process. However, there is presently no optimal method to accurately estimate the effort required to develop a software system. Inaccurate estimation leads to poor use of resources and perhaps failure of the software project. Effort estimation also plays a key role in deducing the cost of a software project. Software cost estimation includes generating effort estimates and project duration to predict the cost required to develop a software project. Thus, effort estimation is essential, and there is always a need to enhance its accuracy as much as possible. This study evaluates and compares the potential of Constructive COst MOdel II (COCOMO II) and k-Nearest Neighbour (k-NN) on a software project dataset. From the analysis of the results obtained from each method, it may be concluded that the proposed k-NN method yields better performance than the other technique utilized in this study.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_28-An_Investigation_into_the_Suitability_of_k_Nearest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EVOTLBO: A TLBO based Method for Automatic Test Data Generation in EvoSuite</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080627</link>
        <id>10.14569/IJACSA.2017.080627</id>
        <doi>10.14569/IJACSA.2017.080627</doi>
        <lastModDate>2017-06-30T14:44:23.6230000+00:00</lastModDate>
        
        <creator>Mohammad Mehdi Dejam Shahabi</creator>
        
        <creator>S. Parsa Badiei</creator>
        
        <creator>S. Ehsan Beheshtian</creator>
        
        <creator>Reza Akbari</creator>
        
        <creator>S. Mohammad Reza Moosavi</creator>
        
        <subject>EvoSuite; TLBO; test data generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Nowadays, software has a great impact on different aspects of human life, and software systems are responsible for the safety of major critical tasks. To prevent catastrophic malfunctions, promising quality testing techniques should be used during software development. Software testing is an effective technique for catching defects, but it significantly increases development cost. Therefore, automated testing is a major issue in software engineering. Search-Based Software Testing (SBST), specifically the genetic algorithm, is the most popular technique in automated testing for achieving an appropriate degree of software quality. In this paper, TLBO, a swarm intelligence technique, is proposed for automatic test data generation as well as for the evaluation of test results. The algorithm is implemented in EvoSuite, a reference tool for search-based software testing. Empirical studies have been carried out on the SF110 dataset, which contains 110 Java projects from the online code repository SourceForge, and the results show that TLBO provides competitive results in comparison with major genetic-based methods.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_27-EVOTLBO_A_TLBO_based_Method_for_Automatic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Internet-based Student Admission Screening System utilizing Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080626</link>
        <id>10.14569/IJACSA.2017.080626</id>
        <doi>10.14569/IJACSA.2017.080626</doi>
        <lastModDate>2017-06-30T14:44:23.6070000+00:00</lastModDate>
        
        <creator>Dolluck Phongphanich</creator>
        
        <creator>Wirat Choonui</creator>
        
        <subject>Classification method; data mining; decision tree; student admission screening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>This study proposes an internet-based student admission screening system utilizing data mining, both to reduce the time officers spend evaluating applicants and to allow the faculty to use fewer human resources when screening applicants against the proficiency requirements and criteria of each department. Another benefit is that the system can help applicants efficiently choose a specialization suited to their proficiency and capability. The system used a decision tree based classification method. Prior to system development, six models were created and tested to find the most efficient model, which would later be applied to the development of the internet-based student admission screening system. The first three of the six models employed a k-fold cross validation technique, while the remaining three models used a percentage split test technique. Experimental results revealed that the most efficient model was the data classification model using Percentage Split (80), which provided a precision of 87.90%, recall of 87.80%, F-measure of 87.60% and accuracy of 87.82%. To build an efficient student admission screening system, this experiment therefore selected the data classification model that implements Percentage Split (80).</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_26-An_Internet_based_Student_Admission_Screening.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MAC Protocol with Regression based Dynamic Duty Cycle Feature for Mission Critical Applications in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080625</link>
        <id>10.14569/IJACSA.2017.080625</id>
        <doi>10.14569/IJACSA.2017.080625</doi>
        <lastModDate>2017-06-30T14:44:23.5770000+00:00</lastModDate>
        
        <creator>Gayatri Sakya</creator>
        
        <creator>Vidushi Sharma</creator>
        
        <subject>Regression based adaptive duty cycle approach; mission critical MAC; analytical model; performance analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Wireless sensor networks demand energy-efficient and application-specific medium access control protocols when deployed in critical areas that are not frequently accessible. In such areas, the residual energy of nodes becomes important along with efficient data delivery. Researchers have suggested many techniques using an adaptive duty cycle approach to improve the data delivery performance of protocols. Since a low duty cycle introduces delay and a high duty cycle causes energy losses in the network, the duty cycle may be adapted according to the distribution of nodes near the event area, the traffic behaviour and the remaining energy of the nodes, achieving both energy saving and efficient data delivery. After analysing the performance of the S-MAC protocol in critical scenarios with respect to residual energy, throughput and packet delivery ratio, this paper proposes an improved mission critical MAC protocol, called MC-MAC, which uses a novel regression-based adaptive duty cycle approach. The duty cycle is given by the regression pattern of traffic while considering the performance of S-MAC for residual energy, throughput and packet delivery ratio. The analytical model of the MC-MAC protocol is given accordingly, and the performance analysis shows that the proposed MC-MAC protocol saves 40% of the energy of the whole network and 20% of the energy of the critical nodes on the mission critical path to the base station, compared to S-MAC. Very few improved MAC protocols provide a mechanism to save the residual energy of critical nodes and hence improve the lifetime of the critical path. Because MC-MAC considers throughput and packet delivery ratio, along with residual energy, when calculating the regression formula for the traffic-based duty cycle, it outperforms other critical MAC protocols that trade off energy against throughput and packet delivery ratio.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_25-MAC_Protocol_with_Regression_based_Dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Investigation into Blended Learning Effects on Tertiary Students and Students Perceptions on the Approach in Botswana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080624</link>
        <id>10.14569/IJACSA.2017.080624</id>
        <doi>10.14569/IJACSA.2017.080624</doi>
        <lastModDate>2017-06-30T14:44:23.5600000+00:00</lastModDate>
        
        <creator>Gofaone Kgosietsile Kebualemang</creator>
        
        <creator>Alpheus Wanano Mogwe</creator>
        
        <subject>Blended learning; blended learning effects; Students’ perceptions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The aim of this research was to conduct an empirical investigation into the effects of blended learning (BL) on tertiary students and students’ perceptions of the approach. This purpose was objective driven, following three objectives: 1) to assess the impact of BL on students enrolled in tertiary institutions; 2) to assess tertiary students’ perceptions of the BL mode; and 3) to establish the extent to which BL is accepted in a typical institution or university learning environment. An extensive literature review was carried out, which led to the identification of two research questions used to meet the objectives and the purpose of the study: 1) Does blended learning transform learners’ attitudes towards learning and improve results? 2) Does blended learning revolutionize learners’ critical thinking levels and dispositions? Through the research, the authors sought to elucidate and understand the BL mode, its effects on students, and students’ perceptions of it. After reviewing the literature, the researchers followed a quantitative approach, using a questionnaire to further understand the effects of the BL mode on students and their perceptions of it. The findings indicated that the BL mode has a positive impact on students and that students’ perceptions of the BL mode were also positive. These findings led to positive conclusions on the BL mode, substantiating the literature review findings. In light of the findings and the objectives of the study, the authors concluded by proposing a framework that could be used for monitoring the effects of BL on tertiary students and their perceptions of the approach, as the results of the study indicated a positive outlook on the BL mode.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_24-An_Empirical_Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Collaborative Approach for Effective Requirement Elicitation in Oblivious Client Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080623</link>
        <id>10.14569/IJACSA.2017.080623</id>
        <doi>10.14569/IJACSA.2017.080623</doi>
        <lastModDate>2017-06-30T14:44:23.5130000+00:00</lastModDate>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Muhammad Ramzan Talib</creator>
        
        <creator>Nauman Ul Haq</creator>
        
        <creator>Arfan Mansoor</creator>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Nafees Ayub</creator>
        
        <subject>Requirement elicitation; oblivious client; software development; quality improvement; elicitation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Acquiring the desired requirements from the customer through the requirement elicitation process is critical, as the entire project depends on this initial activity. Poor requirement elicitation affects software quality. Various factors in an oblivious client environment, such as culture, language, gender, nationality, race and politics, can affect the final deliverables. The interaction of complex values, attitudes, behavioral norms, beliefs and communication approaches among stakeholders with different values may lead to misunderstanding and misinterpretation. This can lead to failure of, or dissatisfaction with, the final outcome, which might cause loss to both parties; the project then requires redesign or modification, taking extra time and cost to obtain the desired results. The oblivious nature of the client’s working environment is a major cause of poor requirement elicitation. This study focuses on the issues in an oblivious client environment where the client is reluctant to provide the desired information, and proposes a novel requirement elicitation model for effective software development in such an environment. The quality improvement of software developed using this model was verified through a qualitative survey.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_23-A_Collaborative_Approach_for_Effective_Requirement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>One-Year Survival Prediction of Myocardial Infarction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080622</link>
        <id>10.14569/IJACSA.2017.080622</id>
        <doi>10.14569/IJACSA.2017.080622</doi>
        <lastModDate>2017-06-30T14:44:23.4800000+00:00</lastModDate>
        
        <creator>Abdulkader Helwan</creator>
        
        <creator>Dilber Uzun Ozsahin</creator>
        
        <creator>Rahib Abiyev</creator>
        
        <creator>John Bush</creator>
        
        <subject>Machine learning; myocardial infarction; backpropagation; radial basis function network; generalization; one-year survival prediction </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Myocardial infarction is still one of the leading causes of death and morbidity. Early prediction of such a disease can prevent or reduce its development, and machine learning can be an efficient tool for such prediction. Many people have suffered myocardial infarction in the past; some survived and others died after a period of time. A machine learning system can learn from the past data of those patients to predict the one-year survival or death of patients with myocardial infarction. The survival at one year, death at one year and survival period, in addition to some clinical data of patients who have suffered myocardial infarction, can be used to train an intelligent system to predict the one-year survival or death of current myocardial infarction patients. This paper introduces the use of two neural networks: a feedforward neural network trained with the backpropagation learning algorithm (BPNN) and a radial basis function network (RBFN), both trained on past data of patients who suffered myocardial infarction so that they can generalize the one-year survival or death of new patients. Experimentally, both networks were tested on 64 instances and showed good generalization capability in predicting the correct diagnosis. However, the radial basis function network outperformed the backpropagation network on this prediction task.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_22-One_Year_Survival_Prediction_of_Myocardial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a High Speed Architecture of MQ-Coder for JPEG2000 on FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080621</link>
        <id>10.14569/IJACSA.2017.080621</id>
        <doi>10.14569/IJACSA.2017.080621</doi>
        <lastModDate>2017-06-30T14:44:23.4670000+00:00</lastModDate>
        
        <creator>Taoufik Salem Saidani</creator>
        
        <creator>Hafedh Mahmoud Zayani</creator>
        
        <subject>MQ-Coder; High speed architecture; FPGA; JPEG2000; VHDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Digital imaging is omnipresent today. In many areas, digitized images have replaced their analog ancestors such as photographs or X-rays. The world of multimedia makes extensive use of image transfer and storage. The volume of these files is very high, and the need to develop compression algorithms to reduce their size has been felt. The JPEG committee has developed a new standard in image compression that now also has the status of an International Standard: JPEG 2000. The main advantage of this new standard is its adaptability: whatever the target application, resources or available bandwidth, JPEG 2000 will adapt optimally. However, this flexibility has a price: the complexity of JPEG 2000 is far higher than that of JPEG. This increased complexity can cause problems in applications with real-time constraints; in such cases, a hardware implementation is necessary. In this context, the objective of this paper is the realization of a JPEG 2000 encoder architecture satisfying real-time constraints. The proposed architecture is implemented on programmable chips (FPGAs) to ensure its real-time effectiveness. Optimization of the renormalization module and the byte-out module is described in this paper. Moreover, the reduction in computational steps effectively minimizes the time delay and hence yields a high operating frequency. The design was implemented targeting a Xilinx Virtex 6 and an Altera Stratix FPGA. Experimental results show that the proposed hardware architecture achieves real-time compression of video sequences at 35 fps at HDTV resolution.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_21-Design_of_a_High_Speed_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel Genetic Algorithm for Maximum Flow Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080620</link>
        <id>10.14569/IJACSA.2017.080620</id>
        <doi>10.14569/IJACSA.2017.080620</doi>
        <lastModDate>2017-06-30T14:44:23.4330000+00:00</lastModDate>
        
        <creator>Ola M. Surakhi</creator>
        
        <creator>Mohammad Qatawneh</creator>
        
        <creator>Hussein A. al Ofeishat</creator>
        
        <subject>Flow network; Ford Fulkerson algorithm; Genetic algorithm; Max Flow problem; MPI; multithread; supercomputer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The maximum flow problem is a type of network optimization problem in flow graph theory. Many important applications use the maximum flow problem, and it has thus been studied by many researchers using different methods. The Ford-Fulkerson algorithm is the most popular algorithm used to solve the maximum flow problem, but its complexity is high. In this paper, a parallel Genetic algorithm is applied to find a maximum flow in a weighted directed graph by evaluating the objective function value for each augmenting path from the source to the sink simultaneously, in parallel steps within every iteration. The algorithm is implemented using the Message Passing Interface (MPI) library; results are obtained from a real distributed system, the IMAN1 supercomputer, and compared with a sequential version of Genetic-Maxflow. The simulation results show that this parallel algorithm speeds up the running time, achieving up to 50% parallel efficiency.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_20-A_Parallel_Genetic_Algorithm_for_Maximum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of the RN Method on FPGA using Xilinx System Generator for Nonlinear System Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080619</link>
        <id>10.14569/IJACSA.2017.080619</id>
        <doi>10.14569/IJACSA.2017.080619</doi>
        <lastModDate>2017-06-30T14:44:23.4030000+00:00</lastModDate>
        
        <creator>Intissar SAYEHI</creator>
        
        <creator>Okba TOUALI</creator>
        
        <creator>T. Saidani </creator>
        
        <creator>B. Bouallegue</creator>
        
        <creator>Mohsen MACHHOUT</creator>
        
        <subject>Machine learning; Reproducing Kernel Hilbert Spaces (RKHS); regularization networks; FPGA; HW/SW Co-simulation; systolic array architecture; PT326; Wiener-Hammerstein benchmark</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>In this paper, we propose a new approach aiming to improve the performance of the regularization networks (RN) method and speed up its computation time. A considerable reduction in total computation time and high performance were achieved by offloading computationally demanding tasks to the FPGA. Using Xilinx System Generator, a successful HW/SW Co-Design was constructed to accelerate the Gramian matrix computation. Experimental results involving two real data sets of the Wiener-Hammerstein benchmark with process noise prove the efficiency of the approach. The implementation results demonstrate the efficiency of the heterogeneous architecture, presenting a speed-up factor of 40-50 compared to the CPU simulation.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_19-Implementation_of_the_RN_Method_on_FPGA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Learner Model for Adaptable e-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080618</link>
        <id>10.14569/IJACSA.2017.080618</id>
        <doi>10.14569/IJACSA.2017.080618</doi>
        <lastModDate>2017-06-30T14:44:23.3870000+00:00</lastModDate>
        
        <creator>Moiz Uddin Ahmed</creator>
        
        <creator>Nazir Ahmed Sangi</creator>
        
        <creator>Amjad Mahmood</creator>
        
        <subject>E-learning; adaptable; pedagogy; learning styles; e-assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The advancement of Information and Communication Technology (ICT) has provided new opportunities for teaching and learning in the form of e-learning. However, developing specialized contents, accommodating the profiles of learners, e-learning pedagogy and the available ICT infrastructure are real challenges that need to be properly addressed for any successful e-learning system. Adaptability in an e-learning system can be used to address many of these challenges and issues. This paper proposes a learner model for an adaptable e-learning system. The proposed model is based on the findings of a survey conducted to investigate the profiles and preferences of local learners. The conceptual framework highlights the layered model of adaptable e-learning with the knowledge level of learners as the foundation layer. The foundation layer is derived from four components of adaptable e-learning, i.e., domain, program pedagogy, student model and technology interface. The learner algorithm retrieves adaptable contents from the domain model by analyzing the learner information stored in the student model. The e-assessment is part of the program pedagogy, and the assessment results are used to control the presentation and navigation of adaptable contents during the learning process. The model has been tested on a Computer Science course offered by Allama Iqbal Open University, Islamabad, Pakistan at the Post Graduate Diploma level. The results show that the proposed adaptable e-learning model significantly improved the knowledge level of the learners.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_18-A_Learner_Model_for_Adaptable_E_learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive CAD System to Detect Microcalcification in Compressed Mammogram Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080617</link>
        <id>10.14569/IJACSA.2017.080617</id>
        <doi>10.14569/IJACSA.2017.080617</doi>
        <lastModDate>2017-06-30T14:44:23.3400000+00:00</lastModDate>
        
        <creator>Ayman AbuBaker</creator>
        
        <subject>Mammogram image; texture features; Discrete Cosine Transform (DCT); Singular Value Decomposition (SVD)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Microcalcifications (MC) in mammogram images are an early sign of breast cancer, and their early detection is vital to improving its prognosis. Since MCs appear as small dots less than 1 mm in size and may easily be overlooked by the radiologist, a Computer Aided Diagnosis (CAD) approach can assist the radiologist in improving diagnostic accuracy. On the other hand, mammograms are high-resolution images of large size, which makes image transfer through the media difficult. Therefore, in this paper, two image compression techniques, Discrete Cosine Transform (DCT) with entropy coding and Singular Value Decomposition (SVD), were investigated to reduce the mammogram image size. Then a novel adaptive CAD system is used to test the quality of the processed image based on the true positive (TP) ratio and the number of detected false positive (FP) regions in the mammogram image. The proposed adaptive CAD system uses the visual appearance of MCs in the mammogram to detect potential MC regions. Then five texture features are implemented to reduce the number of detected FP regions in the mammogram images. After applying the adaptive CAD system to 100 mammogram images from the USF and MIAS databases, it was found that DCT can reduce the image size with high quality, since the TP ratio is 87.6% with 11 FP regions, while for SVD the TP ratio is 79.1% with 26 FP regions.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_17-An_Adaptive_CAD_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Criteria Wind Turbine Selection using Weighted Sum Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080616</link>
        <id>10.14569/IJACSA.2017.080616</id>
        <doi>10.14569/IJACSA.2017.080616</doi>
        <lastModDate>2017-06-30T14:44:23.3270000+00:00</lastModDate>
        
        <creator>Shafiqur Rehman</creator>
        
        <creator>Salman A. Khan</creator>
        
        <subject>Wind turbine; renewable energy; weighted sum method; multi-criteria decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Wind energy is becoming a potential source of renewable and clean energy. An important factor that contributes to efficient generation of wind power is the use of an appropriate wind turbine. However, the task of selecting an appropriate, site-specific turbine is a complex problem. The complexity is due to the presence of several conflicting decision criteria in the decision process. Therefore, a decision is sought such that the best tradeoff is achieved among the selection criteria. Given the inherent complexities encompassing the decision-making process, this study develops a multi-criteria decision model for turbine selection based on the concepts of the weighted sum approach. Results indicate that the proposed methodology is effective at finding the most suitable turbine from a pool of 18 turbines.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_16-Multi_Criteria_Wind_Turbine_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Hybrid String Matching Algorithm based on the Quick-Skip and Tuned Boyer-Moore Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080615</link>
        <id>10.14569/IJACSA.2017.080615</id>
        <doi>10.14569/IJACSA.2017.080615</doi>
        <lastModDate>2017-06-30T14:44:23.3100000+00:00</lastModDate>
        
        <creator>Sinan Sameer Mahmood Al-Dabbagh</creator>
        
        <creator>Nuraini bint Abdul Rashid</creator>
        
        <creator>Mustafa Abdul Sahib Naser</creator>
        
        <creator>Nawaf Hazim Barnouti</creator>
        
        <subject>Hybrid algorithm; string matching algorithm; Tuned Boyer-Moore algorithm; quick-skip search algorithm; Sinan Sameer Tuned Boyer Moore-Quick Skip Search (SSTBMQS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The string matching problem is considered one of the most interesting research areas in the computer science field because it can be applied in many essential applications such as intrusion detection, search analysis, editors, internet search engines, information retrieval and computational biology. During the matching process, two main factors are used to evaluate the performance of a string matching algorithm: the total number of character comparisons and the total number of attempts. This study aims to produce an efficient hybrid exact string matching algorithm called the Sinan Sameer Tuned Boyer Moore-Quick Skip Search (SSTBMQS) algorithm by blending the best features extracted from the two selected original algorithms, Tuned Boyer-Moore and Quick-Skip Search. The SSTBMQS hybrid algorithm was tested on different benchmark datasets with different sizes and different pattern lengths. The sequential version of the proposed hybrid algorithm produces better results when compared with its original algorithms (TBM and Quick-Skip Search) and with the Maximum-Shift hybrid algorithm, which is considered one of the most recent hybrid algorithms. The proposed hybrid algorithm requires fewer attempts and fewer character comparisons.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_15-Fast_Hybrid_String_Matching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study on the Effect of Multiple Inheritance Mechanism in Java, C++, and Python on Complexity and Reusability of Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080614</link>
        <id>10.14569/IJACSA.2017.080614</id>
        <doi>10.14569/IJACSA.2017.080614</doi>
        <lastModDate>2017-06-30T14:44:23.2800000+00:00</lastModDate>
        
        <creator>Fawzi Albalooshi</creator>
        
        <creator>Amjad Mahmood</creator>
        
        <subject>Reusability; complexity; python; java; C++; CK metrics; multiple inheritance; software metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Two of the fundamental uses of generalization in object-oriented software development are the reusability of code and better structuring of the description of objects. Multiple inheritance is one of the important features of object-oriented methodologies, enabling developers to combine concepts and increase the reusability of the resulting software. However, multiple inheritance is implemented differently in commonly used programming languages. In this paper, we use the Chidamber and Kemerer (CK) metrics to study the complexity and reusability of multiple inheritance as implemented in Python, Java, and C++. The analysis of results suggests that, of the three languages investigated, Python and C++ offer better reusability of software when using multiple inheritance, whereas Java has major deficiencies when implementing multiple inheritance, resulting in a poor structure of objects.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_14-A_Comparative_Study_on_the_Effect_of_Multiple.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross-Layer-Based Adaptive Traffic Control Protocol for Bluetooth Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080613</link>
        <id>10.14569/IJACSA.2017.080613</id>
        <doi>10.14569/IJACSA.2017.080613</doi>
        <lastModDate>2017-06-30T14:44:23.2470000+00:00</lastModDate>
        
        <creator>Sabeen Tahir</creator>
        
        <creator>Sheikh Tahir Bakhsh</creator>
        
        <subject>Bluetooth; scatternet; multi-layer; resolving bottleneck; reducing control overhead component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Bluetooth technology is designed for wireless personal area networks that are low cost and consume little energy. Efficient transmission between different Bluetooth nodes depends on network formation. An inefficient Bluetooth topology may create a bottleneck and a delay in the network when data is routed. To overcome the congestion problem of Bluetooth networks, a Cross-layer-based Adaptive Traffic Control (CATC) protocol is proposed in this paper. The proposed protocol operates through backup device utilization and network restructuring. The proposed CATC is divided into two parts: the first part is based on intra-piconet traffic control, while the second part is based on inter-piconet traffic control. The CATC protocol controls the traffic load on the master node by network restructuring and the traffic load of the bridge node by activating a Fall-Back Bridge (FBB). During piconet restructuring, the CATC performs Piconet Formation within Piconet (PFP) and Scatternet Formation within Piconet (SFP). The PFP reconstructs a new piconet within the same piconet for devices that are directly within radio range of each other. The SFP reconstructs the scatternet within the same piconet if the nodes are not within radio range. Simulation results show that the proposed CATC improves the overall performance and reduces control overhead in a Bluetooth network.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_13-Cross_layer_based_Adaptive_Traffic_Control_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Arabic Character Recognition Employing Hybrid Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080612</link>
        <id>10.14569/IJACSA.2017.080612</id>
        <doi>10.14569/IJACSA.2017.080612</doi>
        <lastModDate>2017-06-30T14:44:23.2170000+00:00</lastModDate>
        
        <creator>Al-Amin Bhuiyan</creator>
        
        <creator>Fawaz Waselallah Alsaade</creator>
        
        <subject>Arabic characters; Arabic OCR; image histogram; BAMMLP; hybrid neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Arabic characters carry intricate, multidimensional and cursive visual information. Developing a machine learning system for Arabic character recognition is an exciting research area. This paper addresses a neural computing concept for Arabic Optical Character Recognition (OCR). The method is based on local image sampling of each character into a selected feature matrix and feeding these matrices into a Bidirectional Associative Memory followed by a Multilayer Perceptron (BAMMLP) with the back propagation learning algorithm. The efficacy of the system has been validated over different test patterns of Arabic characters. Experimental results show that the system recognizes Arabic characters with an overall accuracy of more than 82%.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_12-On_Arabic_Character_Recognition_Employing_Hybrid_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Phishing Websites Classification using Hybrid SVM and KNN Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080611</link>
        <id>10.14569/IJACSA.2017.080611</id>
        <doi>10.14569/IJACSA.2017.080611</doi>
        <lastModDate>2017-06-30T14:44:23.2000000+00:00</lastModDate>
        
        <creator>Altyeb Altaher</creator>
        
        <subject>information security; phishing websites; Support vector machine; K-nearest neighbors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Phishing is a potential web threat that involves mimicking official websites to trick users into revealing important information such as usernames and passwords related to financial systems. Attackers use social engineering techniques like email, SMS and malware to defraud users. Due to the potential financial losses caused by phishing, it is essential to find effective approaches for phishing website detection. This paper proposes a hybrid approach for classifying websites as Phishing, Legitimate or Suspicious. The proposed approach intelligently combines the K-nearest neighbors (KNN) algorithm with the Support Vector Machine (SVM) algorithm in two stages. Firstly, KNN is utilized as a simple classifier that is robust to noisy data. Secondly, SVM is employed as a powerful classifier. The proposed approach thus integrates the simplicity of KNN with the effectiveness of SVM. The experimental results show that the proposed hybrid approach achieved the highest accuracy of 90.04% when compared with other approaches.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_11-Phishing_Websites_Classification_using_Hybrid_SVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Environments and System Types of Virtual Reality Technology in STEM: a Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080610</link>
        <id>10.14569/IJACSA.2017.080610</id>
        <doi>10.14569/IJACSA.2017.080610</doi>
        <lastModDate>2017-06-30T14:44:23.1870000+00:00</lastModDate>
        
        <creator>Asmaa Saeed Alqahtani</creator>
        
        <creator>Lamya Foaud Daghestani</creator>
        
        <creator>Lamiaa Fattouh Ibrahim</creator>
        
        <subject>Virtual reality; 3D graphics; immersion; 3D images; navigation; multimedia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Virtual Reality (VR) technology is widely used today in the Science, Technology, Engineering and Mathematics (STEM) fields. VR is an emerging computer interface distinguished by high degrees of immersion, believability, and interaction. The goal of VR is to make users believe, as much as possible, that they are within the computer-generated environment. VR has become one of the important technologies to discuss regarding its applications, usage, and the different system types that can achieve huge benefits in the real world. This survey paper introduces detailed information about VR systems and the requirements for building a correct VR environment. Moreover, this work presents a comparison between system types of VR. Then, it presents the tools and software used for building VR environments. After that, we outline a roadmap for selecting an appropriate VR system according to the field of application. Finally, we present conclusions and predictions for the future development of VR systems.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_10-Environments_and_System_Types_of_Virtual_Reality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insight to Research Progress on Secure Routing in Wireless Ad hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080609</link>
        <id>10.14569/IJACSA.2017.080609</id>
        <doi>10.14569/IJACSA.2017.080609</doi>
        <lastModDate>2017-06-30T14:44:23.1700000+00:00</lastModDate>
        
        <creator>Jyoti Neeli</creator>
        
        <creator>N K Cauvery</creator>
        
        <subject>Attacks; confidentiality; secured routing; integrity; mobile ad hoc network; wireless ad hoc network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Wireless ad hoc networks offer cost-effective communication to users, free from any infrastructural dependencies. They are characterized by a decentralized architecture, mobile nodes, dynamic topology, etc., which makes network formation typically challenging. In the past decade, there has been a series of research efforts towards enhancing routing performance by addressing various significant problems. This manuscript mainly centers on the progress made on secure routing protocols, which remain a major issue. The paper discusses the different approaches undertaken by the existing literature towards discrete security problems and explores the effective level of security. The study finds that progress on securing wireless ad hoc networks is still limited and that there is a need for a robust security framework. The paper also discusses the research gaps identified in existing techniques and finally discusses future work directions to address certain unsolved problems.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_9-Insight_to_Research_Progress_on_Secure_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Hybrid Approach for Android Malware Detection based on Permissions and API Calls</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080608</link>
        <id>10.14569/IJACSA.2017.080608</id>
        <doi>10.14569/IJACSA.2017.080608</doi>
        <lastModDate>2017-06-30T14:44:23.1230000+00:00</lastModDate>
        
        <creator>Altyeb Altaher</creator>
        
        <creator>Omar Mohammed Barukab</creator>
        
        <subject>Android malware detection; features selection; fuzzy inference system; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Android malware is rapidly becoming a potential threat to users. The number of Android malware applications is growing exponentially; they are becoming significantly more sophisticated and cause potential financial and information losses for users. Hence, there is a need for effective and efficient techniques to detect Android malware applications. This paper proposes an intelligent hybrid approach for Android malware detection using the permissions and API calls in an Android application. The proposed approach consists of two steps. The first step involves finding the most significant permissions and Application Programming Interface (API) calls that lead to efficient discrimination between malware and goodware applications. For this purpose, two feature selection algorithms, Information Gain (IG) and Pearson CorrCoef (PC), are employed to rank the individual permissions and API calls based on their importance for classification. In the second step, the proposed hybrid approach, based on the combination of the Adaptive Neuro-Fuzzy Inference System (ANFIS) with Particle Swarm Optimization (PSO), is employed to differentiate between malware and goodware Android applications (apps). PSO is intelligently utilized to optimize the ANFIS parameters by tuning its membership functions to generate reliable and more precise fuzzy rules for Android app classification. Using a dataset consisting of 250 goodware and 250 malware apps collected from different resources, the conducted experiments show that the suggested method for Android malware detection is effective, achieving an accuracy of 89%.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_8-Intelligent_Hybrid_Approach_for_Android_Malware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GPC Temperature Control of A Simulation Model Infant-Incubator and Practice with Arduino Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080607</link>
        <id>10.14569/IJACSA.2017.080607</id>
        <doi>10.14569/IJACSA.2017.080607</doi>
        <lastModDate>2017-06-30T14:44:23.1070000+00:00</lastModDate>
        
        <creator>E. Feki</creator>
        
        <creator>M. A. Zermani</creator>
        
        <creator>A. Mami</creator>
        
        <subject>Incubator; neonatal; model; temperature; Arduino; GPC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>The thermal environment surrounding preterm neonates in closed incubators is regulated via an air temperature control mode. At present, these control modes do not take into account all the thermal parameters involved in an incubator model, such as the thermal parameters of preterm neonates (birth weight &lt; 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account both the closed incubator model and the premature newborn model. We then implemented this control law on a DRAGER neonatal incubator, with and without a newborn, using a microcontroller card. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radiative, conductive, convective and evaporative) and the various interactions between the incubator environment and the premature newborn. Results: The predictive control law and the simulation model developed in the Matlab/Simulink environment make it possible to evaluate the quality of the air temperature control mode to which the newborn is exposed. The results of the simulation and implementation of the air temperature inside the incubator (with and without a newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional-integral-derivative (PID) controller.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_7-GPC_Temperature_Control_of_a_Simulation_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Glaucoma-Deep: Detection of Glaucoma Eye Disease on Retinal Fundus Images using Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080606</link>
        <id>10.14569/IJACSA.2017.080606</id>
        <doi>10.14569/IJACSA.2017.080606</doi>
        <lastModDate>2017-06-30T14:44:23.0900000+00:00</lastModDate>
        
        <creator>Qaisar Abbas</creator>
        
        <subject>Fundus imaging; glaucoma; diabetic retinopathy; deep learning; convolutional neural networks; deep belief network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Detection of glaucoma eye disease is still a challenging task for computer-aided diagnostic (CADx) systems. During the eye screening process, ophthalmologists assess glaucoma through structural changes in the optic disc (OD), loss of nerve fibres (LNF) and atrophy of the peripapillary region (APR). For retinal images, automated CADx systems have been developed to assess this eye disease through segmentation-based hand-crafted features. Therefore, in this paper, an unsupervised convolutional neural network (CNN) architecture was used to extract features through multiple layers from raw pixel intensities. Afterwards, a deep-belief network (DBN) model was used to select the most discriminative deep features based on the annotated training dataset. Finally, the decision is made by a softmax linear classifier to differentiate between glaucoma and non-glaucoma retinal fundus images. This proposed system, known as Glaucoma-Deep, was tested on 1200 retinal images obtained from publicly and privately available datasets. To evaluate the performance of the Glaucoma-Deep system, the sensitivity (SE), specificity (SP), accuracy (ACC), and precision (PRC) statistical measures were utilized. On average, an SE of 84.50%, SP of 98.01%, ACC of 99% and PRC of 84% were achieved. Compared to state-of-the-art systems, the Glaucoma-Deep system accomplished significantly higher results. Consequently, the Glaucoma-Deep system can readily recognize glaucoma eye disease, assisting clinical experts during the eye-screening process in large-scale environments.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_6-Glaucoma_Deep_Detection_of_Glaucoma_Eye_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Process Improvements for Crowdsourced Software Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080605</link>
        <id>10.14569/IJACSA.2017.080605</id>
        <doi>10.14569/IJACSA.2017.080605</doi>
        <lastModDate>2017-06-30T14:44:23.0600000+00:00</lastModDate>
        
        <creator>Sulta Alyahya</creator>
        
        <creator>Dalal Alrugebh</creator>
        
        <subject>Software testing; crowdsourcing; crowd testing; process improvement; tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Crowdsourced software testing has become a common practice lately. It refers to the use of crowdsourcing in software testing activities. Although crowd testing is a collaborative process by nature, no available research provides a critical assessment of the key collaboration activities offered by current crowdsourced testing platforms. In this paper, we review the process used in crowd testing platforms, identifying the workflow used in managing the crowd testing process, starting from submitting testing requirements and ending with reviewing the testing report. This understanding of the current process is then utilized to identify a set of its limitations, which has led us to propose three process improvements (improving the assignment of the crowd manager, improving the building of the test team, and monitoring testing progress). We have designed and implemented these process improvements and then evaluated them using two techniques: 1) a questionnaire and 2) a workshop. The questionnaire shows that the process improvements are sound and strong enough to be added to crowd testing platforms. In addition, the evaluation through the workshop was useful for assessing the design and implementation of the process improvements. The participants were satisfied with them but asked for further modifications. Moreover, because crowd testing requires participation from a large number of people, the automation the improvements bring to managing the current process was highly appreciated.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_5-Process_Improvements_for_Crowdsourced_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Digit Recognition based on Output-Independent Multi-Layer Perceptrons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080604</link>
        <id>10.14569/IJACSA.2017.080604</id>
        <doi>10.14569/IJACSA.2017.080604</doi>
        <lastModDate>2017-06-30T14:44:23.0300000+00:00</lastModDate>
        
        <creator>Ismail M. Keshta</creator>
        
        <subject>Handwritten digit recognition; Pattern classification; Neural network mode; Two-class classification; Accuracy; Binary data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>With handwritten digit recognition being an established and significant problem facing computer vision and pattern recognition, a great deal of research work has been undertaken in this area. It is not a trivial task because of the big variation that exists in the writing styles found in the available data. Therefore, both the features and the classifier need to be efficient. The core contribution of this research is the development of a new classification technique based on the MLP, which can identify the binary digits ‘0’ and ‘1’ in handwritten documents. This technique maps the different sets of various input data onto the MLP output neurons. An experimental evaluation of the technique’s performance is provided. This evaluation is based on the well-known ‘Pen-Based Recognition of Handwritten Digits’ dataset, which is comprised of a total of 250 handwriting samples taken from 44 writers. The results obtained are very promising for such an approach to accurate handwriting recognition.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_4-Handwritten_Digit_Recognition_based_on_Output.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis on Twitter Data using KNN and SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080603</link>
        <id>10.14569/IJACSA.2017.080603</id>
        <doi>10.14569/IJACSA.2017.080603</doi>
        <lastModDate>2017-06-30T14:44:22.9970000+00:00</lastModDate>
        
        <creator>Mohammad Rezwanul Huq</creator>
        
        <creator>Ahmad Ali</creator>
        
        <creator>Anika Rahman</creator>
        
        <subject>Support Vector Machine (SVM); k-nearest neighbor (KNN); Grid Search; Confusion matrix; ROC graph; Hyperplane; Social data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Millions of users share opinions on various topics using micro-blogging every day. Twitter is a very popular micro-blogging site where users are allowed a limit of 140 characters; this kind of restriction makes users concise as well as expressive at the same time. For that reason, it has become a rich source for sentiment analysis and belief mining. The aim of this paper is to develop a functional classifier that can correctly and automatically classify the sentiment of an unknown tweet. In our work, we propose techniques to classify the sentiment label accurately. We introduce two methods: one is known as the sentiment classification algorithm (SCA) and is based on k-nearest neighbor (KNN), and the other is based on the support vector machine (SVM). We also evaluate their performance based on real tweets.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_3-Sentiment_Analysis_on_Twitter_Data_using_KNN_and_SVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multispectral Image Analysis using Decision Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080602</link>
        <id>10.14569/IJACSA.2017.080602</id>
        <doi>10.14569/IJACSA.2017.080602</doi>
        <lastModDate>2017-06-30T14:44:22.9500000+00:00</lastModDate>
        
        <creator>Arun Kulkarni</creator>
        
        <creator>Anmol Shrestha</creator>
        
        <subject>Decision trees; knowledge extraction; fuzzy inference system; Landsat imagery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Many machine learning algorithms have been used to classify pixels in Landsat imagery. The maximum likelihood classifier is the widely accepted classifier. Non-parametric methods of classification include neural networks and decision trees. In this research work, we implemented decision trees using the C4.5 algorithm to classify pixels of a scene from the Juneau, Alaska area obtained with Landsat 8, Operational Land Imager (OLI). One of the concerns with decision trees is that they are often overfitted to training set data, which yields lower accuracy in classifying unknown data. To study the effect of overfitting, we have considered noisy training set data and built decision trees using randomly selected training samples with variable sample sizes. One of the ways to overcome the overfitting problem is pruning a decision tree. We have generated pruned trees with data sets of various sizes and compared the accuracy obtained with pruned trees to the accuracy obtained with full decision trees. Furthermore, we extracted knowledge regarding classification rules from the pruned tree. To validate the rules, we built a fuzzy inference system (FIS) and reclassified the dataset. In designing the FIS, we used threshold values obtained from the extracted rules to define input membership functions and used the extracted rules as the rule-base. The classification results obtained from the decision trees and the FIS are evaluated using the overall accuracy obtained from the confusion matrix.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_2-Multispectral_Image_Analysis_using_Decision_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Security for Phishing Online using Adaptive Neuro Fuzzy Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080601</link>
        <id>10.14569/IJACSA.2017.080601</id>
        <doi>10.14569/IJACSA.2017.080601</doi>
        <lastModDate>2017-06-30T14:44:22.8430000+00:00</lastModDate>
        
        <creator>G. Fehringer</creator>
        
        <creator>P. A. Barraclough</creator>
        
        <subject>Phishing websites; fuzzy models; feature model; intelligent detection; neuro fuzzy; fuzzy inference system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(6), 2017</description>
        <description>Anti-phishing detection solutions employed in industry use blacklist-based approaches to achieve low false-positive rates, but blacklist approaches utilize website URLs only. This study analyses and combines phishing emails and phishing web-forms in a single framework, which allows feature extraction and feature model construction. The outcome should classify between phishing, suspicious and legitimate sites, and detect emerging phishing attacks accurately. The intelligent phishing security for online approach is based on machine learning techniques, using an Adaptive Neuro-Fuzzy Inference System and a combination of sources from which features are extracted. An experiment was performed using the two-fold cross-validation method to measure the system’s accuracy. The intelligent phishing security approach achieved a higher accuracy. The finding indicates that a feature model built from combined sources can detect phishing websites with higher accuracy. This paper contributes to the phishing field a feature model that combines sources in a single framework. The implication is that phishing attacks evolve rapidly; therefore, regular updates and staying ahead of phishing strategy are the way forward.</description>
        <description>http://thesai.org/Downloads/Volume8No6/Paper_1-Intelligent_Security_for_Phishing_Online.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compliance-Driven Architecture for Healthcare Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080571</link>
        <id>10.14569/IJACSA.2017.080571</id>
        <doi>10.14569/IJACSA.2017.080571</doi>
        <lastModDate>2017-06-02T06:11:43.5370000+00:00</lastModDate>
        
        <creator>Syeda Uzma Gardazi</creator>
        
        <creator>Arshad Ali Shahid</creator>
        
        <subject>Compliance-driven; architectural mechanisms; ISO 9001:2015; ISO 27001:2013; HIPAA; HITCH; software architecture; Logic-based Compliance Advisor (LCA); architectural evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The United States (US) healthcare organizations are continuously struggling to cope with evolving regulatory requirements, e.g. the Health Information Technology for Economic and Clinical Health Act (HITECH) and International Organization for Standardization (ISO) 9001:2015. These requirements affect not only the US healthcare industry but other industries as well, e.g. the software industry that provides software products and services to healthcare organizations. It is vital for software companies to ensure compliance with applicable regulatory requirements. These evolving regulatory requirements may affect all phases of the software development lifecycle, including software architecture. It is difficult for software architects to transform and trace regulatory requirements at the software architecture level due to the absence of software design and architectural mechanisms. We have composed architectural mechanisms from a given set of information security regulations, i.e. Health Insurance Portability and Accountability Act (HIPAA) non-functional requirements, and these composed mechanisms were used to initiate an initial architecture for the Electronic Health Record (EHR) and/or Health Level Seven (HL7). Next, a style was selected for compliant and non-compliant software architecture. A layer of compliance was introduced into the existing layered style, intended to help software companies track compliance at the software architecture level. Further, we have evaluated the compliance-driven EHR architecture vs. the non-compliant EHR architecture using a large healthcare billing and IT company with offices on three continents as a case study.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_71-Compliance_Driven_Architecture_for_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Establishing Standard Rules for Choosing Best KPIs for an E-Commerce Business based on Google Analytics and Machine Learning Technique </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080570</link>
        <id>10.14569/IJACSA.2017.080570</id>
        <doi>10.14569/IJACSA.2017.080570</doi>
        <lastModDate>2017-06-02T06:11:43.5070000+00:00</lastModDate>
        
        <creator>Haris Ahmed</creator>
        
        <creator>Dr. Tahseen Ahmed Jilani</creator>
        
        <creator>Waleej Haider</creator>
        
        <creator>Mohammad Asad Abbasi</creator>
        
        <creator>Shardha Nand</creator>
        
        <creator>Saher Kamran</creator>
        
        <subject>E-commerce KPI; Google Analytics; Machine Learning; C4.5 Decision Tree; Weka J48</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The measurable values that indicate the performance of a company, and determine how well it is performing in order to achieve its objectives, are referred to by the term “key performance indicators” (KPIs). KPI techniques and similar methods are usually implemented in businesses that run online, but for an e-commerce business it is always difficult to select the right KPIs. As far as KPIs are concerned, the biggest blunder an online business can make is to calculate everything along with the KPIs. Whatever they are calculating cannot then be referred to as “key”, because they are measuring each and every thing, and this can quickly become devastating. The need is to measure only certain specific keys in order to calculate the performance of a business. The main aim of this research is to establish the set of standard rules that must be adopted in order to identify the best KPIs for an e-commerce business website based on Google Analytics and a machine learning technique.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_70-Establishing_Standard_Rules_for_Choosing_Best_KPIs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modulation Components and Genetic Algorithm for Speaker Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080569</link>
        <id>10.14569/IJACSA.2017.080569</id>
        <doi>10.14569/IJACSA.2017.080569</doi>
        <lastModDate>2017-06-02T06:11:43.4730000+00:00</lastModDate>
        
        <creator>Tariq A. Hassan</creator>
        
        <creator>Rihab I. Ajel</creator>
        
        <creator>Eman K. Ibrahim</creator>
        
        <subject>Computer Forensics; Digital Signal Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this paper, the aim is to investigate whether or not changing the filter-bank components of the speaker recognition system could improve the system performance in identifying the speaker. The filter is composed of 30 Gammatone filter channels. First, the channels are mel-distributed along the frequency line. Then the components’ values (center frequencies and bandwidths) change with each run. A genetic algorithm (GA) is adopted to improve the filter component values and, as a result, the system performance. At each GA run, a new set of filter components is generated that aims to improve the performance compared with the previous run. This continues until the system reaches maximum accuracy or the GA reaches its limits. Results show that the system improves at each run; however, different words might respond differently to the changing of the system filter. Also, in terms of additive noise, the results show that although the digits are affected differently by the noise, the system still improves with each GA run.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_69-Modulation_Components_and_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Line of Sight Estimation Accuracy Improvement using Depth Image and Ellipsoidal Model of Cornea Curvature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080568</link>
        <id>10.14569/IJACSA.2017.080568</id>
        <doi>10.14569/IJACSA.2017.080568</doi>
        <lastModDate>2017-06-02T06:11:43.3630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kohya Iwamura</creator>
        
        <subject>Computer input just by sight; Computer input by human eyes only; Purkinje image; Cornea curvature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Line of sight estimation accuracy improvement is attempted using a depth image (distance between user and display) and an ellipsoidal model (shape of the user’s eye) of cornea curvature. It is strongly required to improve line of sight estimation accuracy for perfect computer input by human eyes only. The conventional method for line of sight estimation is based on the approximation of cornea shape with an ellipse function in the acquired eye image. The proposed estimation method is based on the approximation of the crystalline lens and cornea with an ellipsoidal function. Therefore, a much more accurate approximation can be performed by the proposed method. Through experiments, it is found that depth images are useful for improvement of the line of sight estimation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_68-Line_of_Sight_Estimation_Accuracy_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workplace Design and Employee’s Performance and Health in Software Industry of Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080567</link>
        <id>10.14569/IJACSA.2017.080567</id>
        <doi>10.14569/IJACSA.2017.080567</doi>
        <lastModDate>2017-05-31T13:48:23.6870000+00:00</lastModDate>
        
        <creator>Amna Riaz</creator>
        
        <creator>Umar Shoaib</creator>
        
        <creator>Muhammad Shahzad Sarfraz</creator>
        
        <subject>Ergonomics; Office work design; Employee’s health; Employee’s performance; User friendly design; Accessibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Factors like colour, light, air quality, environmental conditions, and noise have a great effect on the health and performance of office employees. All these factors have an impact on employees’ performance and are the reasons the level of employees’ working quality and health improves or declines. The office design, computer usage, and sitting postures affect the muscles, eyes and other body parts. Availability of a better office environment and design improves the performance and health of employees, yielding a much better and more productive outcome from them. The following empirical study investigated the relationship between office workplace design and employees’ health and performance. We conducted a survey of the employees working in the software industry of Pakistan, collecting the data from employees through a questionnaire. We used linear regression for the analysis of the study. The results concluded that workplace design has a significant impact on employees’ health and has a negative relationship with the employee discomfort level. Results also showed that workplace design has a statistically significant impact on employees’ performance.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_67-Workplace_Design_and_Employee’s_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SmileToPhone: A Mobile Phone System for Quadriplegic Users Controlled by EEG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080566</link>
        <id>10.14569/IJACSA.2017.080566</id>
        <doi>10.14569/IJACSA.2017.080566</doi>
        <lastModDate>2017-05-31T13:48:23.6570000+00:00</lastModDate>
        
        <creator>Heyfa Ammar</creator>
        
        <creator>Mounira Taileb</creator>
        
        <subject>Quadriplegia; EEG; facial expression; BCI system; HCI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Quadriplegic people are unable to use mobile devices without the aid of other persons, which can be devastating for them both socially and economically. This has motivated many researchers to propose hardware and software solutions that operate as intermediaries between the impaired users and their devices: accessibility switches, joysticks and head movements. However, the efficiency of these tools is limited in some conditions. To alleviate this problem, we propose to exploit electroencephalographic signals captured via an adequate headset. More precisely, the user is asked to perform a facial expression that is recognized by the system through the analysis of the EEG signals. Several facial expressions are offered, and each one corresponds to a command wirelessly sent to the mobile device and executed. This Brain Computer Interface based system is called SmileToPhone. It enables quadriplegic patients to use their smartphones easily, with a minimum of effort and in accordance with studied Human-Computer Interaction requirements. The system includes the main functionalities of a smartphone, such as making calls and sending messages. The evaluation of the system usability showed that, most of the time, users were able to use the different functionalities of the system easily. The current results are encouraging and motivate adding more features to the system.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_66-SmileToPhone_A_Mobile_Phone_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Probability of Detection Ability in Observing Dynamic Environmental Phenomena using Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080565</link>
        <id>10.14569/IJACSA.2017.080565</id>
        <doi>10.14569/IJACSA.2017.080565</doi>
        <lastModDate>2017-05-31T13:48:23.6230000+00:00</lastModDate>
        
        <creator>Omar Fouad Mohammed</creator>
        
        <creator>Burairah Hussin</creator>
        
        <creator>Abd Samad Hasan Basari</creator>
        
        <subject>Wireless Sensor Network (WSN); Environmental Monitoring; Event Detection; Event Localization; Sensing Modelling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Wireless Sensor Network (WSN) is being utilised for several purposes in military and civil domains, including surveillance, monitoring, and management, where networked sensors monitor and detect an event of interest and report to the concerned party through the WSN or the Internet infrastructure. Due to the characteristics of WSN, there are many fundamental technical challenges, including node deployment, event localization, and event tracking, among which the probability of event observability has a crucial role. Observability is defined as the capability of observing an evolving event in the monitoring area. The probability of detection ability in observing an event depends on the parameters of the detection function, which in turn rely on the sensor technology and the nature of the surrounding environment. This paper addresses the observation of an event using WSN and how accurately the event is observed in the monitoring area. It presents a practical solution for event observability after formalizing and establishing the complexity of the observability issue and tackling its relation to and impact on node deployment and event localization. Hence, a feasible event observation model has been proposed and validated in this paper. The numerical results of the experimental evaluation have confirmed that an accurate detection of an occurring event can be achieved by the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_65-On_the_Probability_of_Detection_Ability_in_Observing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Performance Comparison Analysis of Relational &amp; NoSQL Graph Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080564</link>
        <id>10.14569/IJACSA.2017.080564</id>
        <doi>10.14569/IJACSA.2017.080564</doi>
        <lastModDate>2017-05-31T13:48:23.5770000+00:00</lastModDate>
        
        <creator>Wisal Khan</creator>
        
        <creator>Ejaz ahmed</creator>
        
        <creator>Waseem Shahzad</creator>
        
        <subject>Big Data; Hadoop; MapReduce; Relational Databases; NoSQL Databases; Decision tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>For the last three decades, relational databases have been used in many organizations of various natures, such as education, health and business, and in many other applications. Traditional databases show tremendous performance and are designed to handle structured data with the ACID (Atomicity, Consistency, Isolation, Durability) properties to manage data integrity. In the current era, organizations are storing more data, i.e. videos, images, blogs, etc., besides structured data for decision making. Similarly, social media and scientific applications are generating large amounts of semi-structured data of varied nature. Relational databases cannot properly process and manage such large amounts of data efficiently. To overcome this problem, another paradigm, NoSQL databases, was introduced to manage and process massive amounts of unstructured data efficiently. NoSQL databases are divided into four categories, and each category is used according to the nature and need of the specific problem. In this paper we compare an Oracle relational database and a NoSQL graph database using optimized queries and physical database tuning techniques. The comparison is two-fold: in the first iteration we compare various kinds of queries, such as simpler queries, and database tuning of the Oracle relational database, such as sub-databases, and perform these queries in our desired environments. Secondly, for this comparison we perform predictive analysis on the results obtained from our experiments.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_64-Predictive_Performance_Comparison_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Identification and Control of Coupled Mass-Spring-Damper System using Polynomial Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080563</link>
        <id>10.14569/IJACSA.2017.080563</id>
        <doi>10.14569/IJACSA.2017.080563</doi>
        <lastModDate>2017-05-31T13:48:23.5470000+00:00</lastModDate>
        
        <creator>Sana RANNEN</creator>
        
        <creator>Chekib GHORBEL</creator>
        
        <creator>Naceur BENHADJ BRAIEK</creator>
        
        <subject>Identification; RLS algorithm; Polynomial structure; Stabilizing control; LQR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The paper aims to identify and control the coupled mass-spring-damper system. A nonlinear discrete polynomial structure is elaborated. Its parameters are estimated using the Recursive Least Squares (RLS) algorithm. Moreover, a feedback stabilizing control law based on the Kronecker power is designed. Finally, simulations are presented to illustrate the effectiveness of the proposed structure.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_63-Nonlinear_Identification_and_Control_of_Coupled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Smart Agriculture using SensorML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080562</link>
        <id>10.14569/IJACSA.2017.080562</id>
        <doi>10.14569/IJACSA.2017.080562</doi>
        <lastModDate>2017-05-31T13:48:23.5300000+00:00</lastModDate>
        
        <creator>Maha Arooj</creator>
        
        <creator>Muhammad Asif</creator>
        
        <creator>Syed Zeeshan Shah</creator>
        
        <subject>Internet of Things; Smart Agriculture; Sensors;
SensorML; OGC SWE; Sensor Web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>IoT is transforming the physical world into a digital
world by connecting people and things. This paper describes
state-of-the-art domains where IoT is playing a key role. Smart
agriculture is selected as a case study of an IoT application domain.
The OGC Sensor Web Enablement framework is studied and its
application to smart agriculture is discussed. This paper mainly focuses
on modeling the smart agriculture system using SensorML of
OGC. It also identifies, develops and models a few major
components/sub-systems/systems required for smart agriculture.
This study also demonstrates how SensorML can be utilized in
modeling IoT-enabled systems.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_62-Modeling_Smart_Agriculture_using_SensorML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Precision DCT CORDIC Architectures for Maximum PSNR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080561</link>
        <id>10.14569/IJACSA.2017.080561</id>
        <doi>10.14569/IJACSA.2017.080561</doi>
        <lastModDate>2017-05-31T13:48:23.4830000+00:00</lastModDate>
        
        <creator>Imen Ben Saad</creator>
        
        <creator>Sonia Mami</creator>
        
        <creator>Yassine Hachaichi</creator>
        
        <creator>Younes Lahbib</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>Cordic Loeffler DCT; high quality architecture; low
power architecture; Image Processing; DCT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This paper proposes two optimal Cordic Loeffler based DCT (Discrete Cosine Transform) architectures: a fast, low-power DCT architecture and a high-PSNR DCT architecture. The rotation parameters of the CORDIC angles required for these architectures have been calculated using a MATLAB script. This script allows the angle precision to vary from 10^-1 to 10^-4. The experimental results show that the fast, low-power DCT architecture corresponds to the precision 10^-1. Its complexity is even lower than the BinDCT, which is a reference in terms of low complexity, and its power consumption has been reduced by 12 mW in comparison with the conventional Cordic Loeffler DCT. The experimental results also show that the high-PSNR DCT architecture corresponds to the precision 10^-3, for which the PSNR has been improved by 6.55 dB in comparison with the conventional Cordic Loeffler DCT. Then, the hardware implementation and the generated RTL of some required Cordics are presented.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_61-High_Precision_DCT_CORDIC_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ensuring Data Provenance with Package Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080560</link>
        <id>10.14569/IJACSA.2017.080560</id>
        <doi>10.14569/IJACSA.2017.080560</doi>
        <lastModDate>2017-05-31T13:48:23.4700000+00:00</lastModDate>
        
        <creator>Muhammad Umer Sarwar</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Asad Abbas</creator>
        
        <subject>Data security; Provenance; Watermarking; Tampering;
Cryptography; Encryption; Decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The last decade has shown tremendous growth
in data production from different sectors, e.g., biology, financial
markets, scientific computing, business processes and the Internet of
Things. "Data is the new oil" has become a proverb in academic
and corporate circles. Accordingly, tracing and recording the origin
and derivation of data, called data provenance, has gained tremendous
traction across the board. Privacy and security of data are major
challenges to provenance management. These can be tackled using
watermarking. The downside of the majority of existing watermarking
techniques is data distortion. In this work, we propose a novel
approach called package watermarking that addresses the data
capacity, usability, robustness, security, distortion, verifiability
and detectability issues in data provenance.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_60-Ensuring_Data_Provenance_with_Package.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Graphical Data Storage Model for Gene-Protein and Gene-Gene Interaction Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080559</link>
        <id>10.14569/IJACSA.2017.080559</id>
        <doi>10.14569/IJACSA.2017.080559</doi>
        <lastModDate>2017-05-31T13:48:23.4370000+00:00</lastModDate>
        
        <creator>Hina Farooq</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Azka Mahmood</creator>
        
        <creator>Muhammad Atif Sarwar</creator>
        
        <subject>Big Data; Graph Theory; Graph Database; Gene-Gene Interaction; Protein-Protein Interaction; Large Scale Biological Graphs; Storage Model; Neo4j</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>A graph is an expressive way to represent dynamic
and complex relationships in highly connected data. In today’s
highly connected world, general-purpose graph databases provide
opportunities to experience the benefits of semantically
significant networks without investing in graph infrastructure.
Examples of prominent graph databases are Neo4j, Titan
and OrientDB. In the biological OMICS landscape, Interactomics
is one of the new disciplines that focuses mainly on the data
modeling, data storage and retrieval of biological interaction
data. Biological experiments generate prodigious amounts of data
in various formats (semi-structured or unstructured). The large
volume of such data poses challenges for data acquisition, data
integration, multiple data modalities (either the data model or the
storage model), storage, processing and visualization. This paper aims at
designing a well-suited graphical data storage model for biological
information collected from major heterogeneous
biological data repositories, using a graph database.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_59-Designing_Graphical_Data_Storage_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Routing Algorithm for Fault Tolerance in Network on Chip CRAFT NoC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080558</link>
        <id>10.14569/IJACSA.2017.080558</id>
        <doi>10.14569/IJACSA.2017.080558</doi>
        <lastModDate>2017-05-31T13:48:23.4070000+00:00</lastModDate>
        
        <creator>Chakib NEHNOUH</creator>
        
        <creator>Mohamed SENOUCI</creator>
        
        <creator>Abdelkader Chaib</creator>
        
        <subject>Network on Chip; Fault Tolerance; Congestion; Reliability; Sub-network; Routing Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Many fault tolerance techniques have been proposed
for Network on Chip to cope with defects during fabrication
or faults during the product lifetime. Fault-tolerant routing
algorithms provide reliable mechanisms to continue delivering
services in spite of defective nodes caused by permanent
and/or transient faults throughout the chip’s lifetime.
This paper presents a new approach in the
domain of fault-tolerant NoC with two main contributions.
Firstly, we consider a unified fault model that includes transient
faults, permanent faults and congestion, which is treated as a fault.
Secondly, we present a new architecture based on sub-nets and
give an overview of the associated test and (re)routing algorithm.
The main result of this paper is a new routing algorithm called
Collaborative Routing Algorithm for Fault Tolerance in Network
on Chip (CRAFT-NoC). We compare our approach with ACOFAR,
which likewise considers congestion and permanent faults. Our
simulation results show significant improvements in terms of both
latency and reliability.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_58-Collaborative_Routing_Algorithm_for_Fault_Tolerance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bottom-up Approach for Visual Object Recognition on FPGA based Embedded Multiprocessor Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080557</link>
        <id>10.14569/IJACSA.2017.080557</id>
        <doi>10.14569/IJACSA.2017.080557</doi>
        <lastModDate>2017-05-31T13:48:23.3730000+00:00</lastModDate>
        
        <creator>Hanen Chenini</creator>
        
        <subject>Object recognition; Saliency-based feature detector/descriptor; Object classifier; Pipeline architecture; Coarse-grained model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This paper presents an object recognition approach
for outdoor autonomous systems that identifies the nature of an
object of interest in an observed image. Seeking an effective
and robust recognition method, the proposed
approach uses a novel saliency-based feature
detector/descriptor combined with an object classifier
to identify the nature of objects in an indoor or outdoor
environment. As is known, bottom-up visual attention computational
models require considerable computational power and
communication cost. A major challenge in this work is to deal
with such image processing applications, which manage a large
amount of information processing, and to meet real-time
requirements by improving the processing speed.
Building on the approach of designing specific architectures
for parallelism, this paper presents a solution for rapid
prototyping of saliency-based object recognition applications. In
order to meet computation and communication requirements,
the developed pipelined architectures are composed of identical
processing modules which can work concurrently with distributed
memories and compute in parallel several sequential tasks with a
high computational cost. We present hardware implementations
with performance results on a Xilinx System-on-Programmable
Chip (SoPC) target. The experimental results, including execution
times and application speedups as well as requirements in terms
of computing resources, show that the proposed homogeneous
network of processors is efficient for embedding the proposed
image processing application.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_57-A_Bottom-up_Approach_for_Visual_Object.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Reconfigurable MMIC Antenna with RF-MEMS Resonator for Radar Application at K and Ka Bands</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080556</link>
        <id>10.14569/IJACSA.2017.080556</id>
        <doi>10.14569/IJACSA.2017.080556</doi>
        <lastModDate>2017-05-31T13:48:23.3430000+00:00</lastModDate>
        
        <creator>Bassem Jmai</creator>
        
        <creator>Salem Gahgouh</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>RF-MEMS; CPW; Bandwidth; Meander; Resonator; Frequency reconfigurable antennas and MMIC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This paper presents a new reconfigurable antenna based on coplanar waveguide (CPW). The design of the reconfigurable antenna is based on a monolithic microwave integrated circuit (MMIC). This scheme combines a CPW antenna and a switchable radio-frequency micro-electro-mechanical system (RF-MEMS) resonator. The RF-MEMS resonator comprises a meander inductor structure and a tuning capacitor controlled by the applied DC voltage. This component can be used in a System on Chip (SoC). Moreover, the device is compact and can operate at high frequencies. The switch element allows the frequency band and the resonant frequency to be changed easily. Simulation results are shown between 10 and 40 GHz. The presented reconfigurable antenna can cover five bands: (26, 26.6) GHz, (26.4, 27.3) GHz, (27.3, 28) GHz, (29, 30.1) GHz and (30.13, 30.7) GHz. All simulations were performed with the High Frequency Structure Simulator (HFSS) software and validated with Computer Simulation Technology Microwave Studio (CST MWS).</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_56-A_Novel_Reconfigurable_MMIC_Antenna_with_RF-MEMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lightweight Approach for Specification and Detection of  SOAP Anti-Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080555</link>
        <id>10.14569/IJACSA.2017.080555</id>
        <doi>10.14569/IJACSA.2017.080555</doi>
        <lastModDate>2017-05-31T13:48:23.3270000+00:00</lastModDate>
        
        <creator>Fatima Sabir</creator>
        
        <creator>Ghulam Rasool</creator>
        
        <creator>Maria Yousaf</creator>
        
        <subject>SOAP web services; Anti-patterns; Bad smells; SQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Web services have become a governing technology for Service-Oriented Architectures (SOA) due to the reusability of services and their dependence on other services. The evolution of service-based systems demands frequent changes to provide quality of service to customers. Different researchers have observed that evolution in service-based systems may degrade design and quality of service and may generate poor solutions known as anti-patterns. The detection of anti-patterns in web services is an active research area that continues to attract attention. A number of techniques and tools have been presented for detecting anti-patterns in object-oriented software applications, but only a few approaches target SOA. The state-of-the-art approaches for detecting anti-patterns in SOA are not flexible enough and are limited to only a few anti-patterns. We present a flexible approach, supplemented with tool support, to detect 10 anti-patterns in different SOA-based applications. We compare the results of our approach with two representative state-of-the-art approaches.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_55-A_Lightweight_Approach_for_Specification_and_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictive Approach towards Software Effort Estimation using Evolutionary Support Vector Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080554</link>
        <id>10.14569/IJACSA.2017.080554</id>
        <doi>10.14569/IJACSA.2017.080554</doi>
        <lastModDate>2017-05-31T13:48:23.2970000+00:00</lastModDate>
        
        <creator>Tahira Mahboob</creator>
        
        <creator>Sabheen Gull</creator>
        
        <creator>Sidrish Ehsan</creator>
        
        <creator>Bushra Sikandar</creator>
        
        <subject>Correlation coefficient; Decision tree; Effort Estimation; Evolutionary Support Vector Machine; Software project management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Project effort estimation is one of the most important estimates made in the project management domain. This measure is taken in advance using traditional methods such as Function Point analysis, Use Case analysis, PERT analysis, Analogous estimation and Poker, among others. Classical models have the limitation that they are burdensome to implement, especially when lines of code (LOC) or object counts are required in the measurement. Sometimes historical information about a project is also considered to estimate the project’s effort, but these estimates then need to be adjusted. The idea proposed in this research is to determine which factors of a project are directly related to effort estimation. In addition, a model is proposed to predict effort using a minimum number of parameters in software project development.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_54-Predictive_Approach_towards_Software_Effort_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Miniaturisation of a 2-Bits Reflection Phase Shifter for Phased Array Antenna based on Experimental Realisation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080553</link>
        <id>10.14569/IJACSA.2017.080553</id>
        <doi>10.14569/IJACSA.2017.080553</doi>
        <lastModDate>2017-05-31T13:48:23.2800000+00:00</lastModDate>
        
        <creator>Mariem Mabrouki</creator>
        
        <creator>Bassem Jmai</creator>
        
        <creator>Ridha ghayoula</creator>
        
        <creator>Ali. Gharsallah</creator>
        
        <subject>Reflection type PS; FET switch; Branch line coupler; Semiconductors technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this paper, a controllable reflection-type Phase Shifter (PS) is designed, simulated and implemented. The structure of the 2-bit PS consists of a branch-line coupler, delay lines and six GaAs FET switches controlled in pairs. The phase shift is achieved by turning ON one pair of switches. The circuit is fabricated on an FR4 substrate with a dielectric constant of 4.7. The size of the realised circuit is 7cm&#215;2.8cm. To reduce this size, two methods are used. First, a shortened quarter-wavelength transmission line in a T-model is employed to develop a compact branch-line coupler. Second, a capacitor-loaded line is used to reduce the dimensions of the delay lines. The two methods are combined to realise a PS with a compact size of 4.5cm&#215;1.96cm.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_53-Miniaturisation_of_a_2_Bits_Reflection_Phase_Shifter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SaaS Level based Middleware Database Integrator Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080552</link>
        <id>10.14569/IJACSA.2017.080552</id>
        <doi>10.14569/IJACSA.2017.080552</doi>
        <lastModDate>2017-05-31T13:48:23.2200000+00:00</lastModDate>
        
        <creator>Sanjkta Pal</creator>
        
        <subject>Database integration; Integrator platform; Multi- Level graph; Subset of vertices; First class edge; Concrete edge; Connectivity edge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>For the purpose of accelerating data searching, the fastest possible data response is a major concern in today’s cloud environments. In this regard, a sensible decision is to enrich SaaS-level applications. Among SaaS-based applications, service-level database integration is the recent trend to provide an integrated view of heterogeneous cloud databases through shared services using DBaaS. However, the generic limitations encountered during database integration are dynamic adaptability to the structure of multiple databases, dynamic identification of data locations in the concerned databases, and data response using data commonality. The data migration technique and the single-query approach are two individual solutions to these limitations. However, the side effects of the data migration technique are extra space utilisation and excess time consumption, while the single-query approach suffers from worst-case time complexity for data connectivity, data aggregation and query evaluation. So, to find a suitable data response solution that eliminates these combined major issues, a graph-based Middleware Database Integrator Platform (MDIP) model has been proposed. This integrator platform is a flexible metadata representation technique for the concerned heterogeneous cloud databases. The associativity and commonality among the components of multiple databases are further helpful for efficient data searching in an integrated way. Because it is incorporated within the service level but not in the services themselves, MDIP is considered a distinct platform. It is applicable to any service-based database integration for the purpose of data response efficiency. Finally, a quality assessment using evaluated query times, compared with the previously proposed SLDI, shows better data access quality.
Thus, its dedication to data response can overcome the summarised challenges, such as flexible data adaptation, dynamic identification of data locations, wastage of data storage, data access within a minimal time span and optimised cost in the presence of data consistency, data partitioning and user-side scalability.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_52-SaaS_Level_based_Middleware_Database_Integrator_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using PCA and Factor Analysis for Dimensionality Reduction of Bio-informatics Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080551</link>
        <id>10.14569/IJACSA.2017.080551</id>
        <doi>10.14569/IJACSA.2017.080551</doi>
        <lastModDate>2017-05-31T13:48:23.1870000+00:00</lastModDate>
        
        <creator>M. Usman Ali</creator>
        
        <creator>Shahzad Ahmed</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Atif Mehmood</creator>
        
        <creator>Abbas Rehman</creator>
        
        <subject>Bioinformatics; Statistics; Microarray; Leukaemia; Feature Selection; Statistical tests; PCA; Factor Analysis; R tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>A large volume of genomics data is produced on a daily basis due to advancements in sequencing technology. This data is of no value if it is not properly analysed. Different kinds of analytics are required to extract useful information from this raw data. Classification, prediction, clustering and pattern extraction are useful data mining techniques. These techniques require an appropriate selection of data attributes to obtain accurate results. However, bioinformatics data is high dimensional, usually having hundreds of attributes. Such a large number of attributes affects the performance of the machine learning algorithms used for classification/prediction. So, dimensionality reduction techniques are required to reduce the number of attributes for further analysis. In this paper, Principal Component Analysis and Factor Analysis are used for dimensionality reduction of bioinformatics data. These techniques were applied to a Leukaemia data set, and the number of attributes was reduced.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_51-Using_PCA_and_Factor_Analysis_for_Dimensionality_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Bayesian and Energy Detection Including MRC Under Fading Environment in Collaborative Cognitive Radio Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080550</link>
        <id>10.14569/IJACSA.2017.080550</id>
        <doi>10.14569/IJACSA.2017.080550</doi>
        <lastModDate>2017-05-31T13:48:23.1400000+00:00</lastModDate>
        
        <creator>Shakila Zaman</creator>
        
        <creator>Risala Tasin Khan</creator>
        
        <creator>Md. Imdadul Islam</creator>
        
        <subject>Maximal Ratio Combining; Collaborative spectrum sensing; Fading and Shadowing; Data fusion centre; Receiver operating characteristics; False alarm rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The most important task of a Cognitive Radio Network (CRN) is to sense underutilised spectrum efficiently in fading environments in order to accommodate the increasing demand of wireless applications. The result of spectrum sensing can be affected by incorrect detection of the existence of a Primary User (PU). In this paper, we consider collaborative spectrum sensing to maximise the spectrum utilisation of Cognitive Radio (CR) users. We propose a new architecture and algorithm that show the step-by-step spectrum sensing procedure using energy detection and Bayesian detection in a collaborative environment for an optimal number of users. This algorithm also includes the Maximal Ratio Combining (MRC) diversity technique in the fusion centre to make a final decision under fading conditions. The simulation results show significant optimisation of detection performance, with fewer misdetections for a large number of users. It is also observed that MRC produces better results in a collaborative manner under Nakagami-m, Rayleigh and Normal fading. Finally, we analyse the relative performance of different wireless channels at various SNR levels and conclude that the energy detection technique works better at high SNR and the Bayesian detection technique at low SNR.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_50-Comparative_Study_of_Bayesian_and_Energy_Detection_Including_MRC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conflict Resolution Strategy Selection Method (ConfRSSM) in Multi-Agent Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080549</link>
        <id>10.14569/IJACSA.2017.080549</id>
        <doi>10.14569/IJACSA.2017.080549</doi>
        <lastModDate>2017-05-31T13:48:23.1100000+00:00</lastModDate>
        
        <creator>Alicia Y.C. Tang</creator>
        
        <creator>Ghusoon Salim Basheer</creator>
        
        <subject>Conflict resolution; Confidence level; Multi-agent system; Strategy selection method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Selecting a suitable conflict resolution strategy when conflicts appear in multi-agent environments is a hard problem. There is a need to formulate a model for strategic decision making in selecting a strategy to resolve conflicts. In this paper, we formalise a model for selecting a conflict resolution strategy in multi-agent systems. The model is expected to select a suitable strategy which guarantees low cost in terms of the number of messages and time ticks. This paper focuses on a novel method to guide strategic decision making for conflict resolution. The proposed model is named the Conflict Resolution Strategy Selection Method (ConfRSSM). We identified three distinct types of intervention: (1) domain requirement, (2) conflict strength, and (3) confidence level of the conflicting agents. We also ascertain that the most appropriate conflict resolution strategy for a given conflict depends on the type of conflict (weak, strong), the agents’ confidence level, and the domain preferences. Our method explores the best strategic choices that will reduce the cost and time of selecting a strategy.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_49-A_Conflict_Resolution_Strategy_Selection_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NFC Technology for Contactless Payment Echosystems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080548</link>
        <id>10.14569/IJACSA.2017.080548</id>
        <doi>10.14569/IJACSA.2017.080548</doi>
        <lastModDate>2017-05-31T13:48:23.0800000+00:00</lastModDate>
        
        <creator>EL Hillali Wadii</creator>
        
        <creator>Jaouad Boutahar</creator>
        
        <creator>Souhail EL Ghazi</creator>
        
        <subject>NFC; Secure Element; SIM-Centric; HCE; Tockenisation; MPayment; Mticketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Since the earliest ages, human beings have not ceased to develop systems for exchanging goods. The first such system was barter, which evolved over time into currency in various forms (shells, teeth, feathers, etc.). The evolution of micro-electronics favoured the appearance of a new form of payment: the credit card. It is currently the most widely used means of payment throughout the world. Today, financial institutions want to replace the credit card with the mobile phone for the implementation of contactless payment systems via NFC. This mode of operation is called Host Card Emulation (HCE). In this article, we present the basic element at the heart of this technology, the Secure Element. We present the different forms this element can take and possible use cases of this technology for establishing an ecosystem of mobile payment or transport ticket purchase.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_48-NFC_Technology_for_Contactless_Payment_Echosystems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context-Aware Mobile Application Task Offloading to the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080547</link>
        <id>10.14569/IJACSA.2017.080547</id>
        <doi>10.14569/IJACSA.2017.080547</doi>
        <lastModDate>2017-05-31T13:48:23.0630000+00:00</lastModDate>
        
        <creator>Hanan Elazhary</creator>
        
        <creator>Saja Aloraini</creator>
        
        <creator>Roa’a Aljuraid</creator>
        
        <subject>Application offloading; Context awareness; Distributed systems; Mobile application; Mobile cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>One of the benefits of mobile cloud computing is the ability to offload mobile applications to the cloud for many reasons, including performance enhancement and reduced resource consumption. This paper is concerned with offloading of context-aware mobile applications, in which actions or tasks are executed in certain contexts, and offloading those tasks needs to be itself context-aware to be advantageous. The paper investigates candidate techniques and development models in the literature to identify suitable ones. Accordingly, the paper proposes the practical Context-Aware Mobile applications Offloading (CAMO) development model, which we developed in Java for the Android platform. Programmers can exploit the independence of the tasks of a typical context-aware mobile application and use CAMO to profile each task in isolation on the mobile and the cloud. The paper introduces the concept of a task-offloading plan, in which programmers specify a criterion and/or an objective for offloading a task in a specific context. Offloading criteria allow rapid offloading in case the mobile environment does not change frequently. Based on the profiling results, programmers can use the classes and methods of CAMO to develop one or more custom offloading plans for each task, or use pre-specified plans, criteria and objectives. We provide three example tasks with details of their profiling and analysis for developing corresponding offloading plans. CAMO is general and flexible enough for offloading any application partitioned into independent modules. Empirical evaluation shows that mobile application developers are highly satisfied with its capabilities.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_47-Context-Aware_Mobile_Application_Task_Offloading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing Novel Queries for Analysing NoSQL Data of Gene-Disease Associations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080546</link>
        <id>10.14569/IJACSA.2017.080546</id>
        <doi>10.14569/IJACSA.2017.080546</doi>
        <lastModDate>2017-05-31T13:48:23.0470000+00:00</lastModDate>
        
        <creator>Hira Yaseen</creator>
        
        <creator>Muhammad Atif Sarwar</creator>
        
        <creator>Javed Ferzund</creator>
        
        <subject>Cypher queries; NoSQL; data model; Gene-disease associations; Causative factors; Drugs/treatment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Precisely identifying gene-associated diseases has been an open area of research for biological scientists seeking to establish clinical and psychological symptoms and treatments for human diseases. Now that the whole human genome has been sequenced, the next step is to find, in such a complex data set, all the possible factors that cause gene mutations and hence lead to inherited and/or non-inherited diseases. Our research implementation therefore combines all important factors from different biomolecular data sources into one integrated data set and defines new relationships among these factors for gene-associated diseases that were not present in existing platforms. This paper presents a novel query model for NoSQL data storage that can help researchers visualise relationships among gene factors and two new factors, termed “causative factors” and “drugs/treatment”, for associated diseases. Since no existing data source applies graphical querying for gene-associated diseases, our proposed novel Cypher query model can help researchers analyse the data set deeply and obtain results efficiently. The proposed query model provides novel Cypher queries for this research domain on a graphical data model implemented in Neo4j, a NoSQL (Not Only SQL) database. Use of a NoSQL database and query language overcomes certain limitations of relational databases that existing data platforms had to cope with. This paper gives a new suitable data storage format and effective data search queries for the large, complex, semi-structured and multi-dimensional gene-associated disease data set, to efficiently define new relationships among factors and open new horizons of research.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_46-Designing_Novel_Queries_for_Analysing_NoSQL_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Awareness Survey of Anonymisation of Protected Health Information in Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080545</link>
        <id>10.14569/IJACSA.2017.080545</id>
        <doi>10.14569/IJACSA.2017.080545</doi>
        <lastModDate>2017-05-31T13:48:23.0170000+00:00</lastModDate>
        
        <creator>Muhammad Usman Shahid</creator>
        
        <creator>Saman Hina</creator>
        
        <creator>Waqas Mahmood</creator>
        
        <creator>Hamda Usman</creator>
        
        <subject>Anonymisation; De-Identification; Protected health information; Patient data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>With the growing advancement of science and technology, research has become a vital step in every educational field. This survey sheds light on the methods of de-identification and anonymisation for protecting the privacy of patients, practitioners and nurses. Researchers require huge amounts of patient data for carrying out different analyses. Patient information must therefore be preserved while ensuring that the applied privacy policies do not render the data less valuable. De-identification and anonymisation techniques mask the patient identity through various methods such as suppression, randomisation, shuffling, creating pseudonyms, generalisation, adding noise, scrambling, masking, encoding and encryption. A dataset containing critical information through which an individual can be identified is called protected health information (PHI). Thus, PHI must be preserved through appropriate means to keep the data valuable and, at the same time, protect it from hackers. This paper presents the importance of securing PHI in Pakistan by analysing the results of an awareness survey.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_45-Awareness_Survey_of_Anonymisation_of_Protected.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Security Scheme based on Twofish and Discrete Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080544</link>
        <id>10.14569/IJACSA.2017.080544</id>
        <doi>10.14569/IJACSA.2017.080544</doi>
        <lastModDate>2017-05-31T13:48:22.9700000+00:00</lastModDate>
        
        <creator>Mohammad S. Saraireh</creator>
        
        <subject>Cryptography; Twofish; DWT; Histogram; PSTN; Steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Nowadays, there is a huge amount of data exchanged between different users, and the security of the exchanged data has become a significant problem due to the existence of several security attacks. To increase the confidence of users, several security techniques can be used together to enhance the level of security. In this research paper a new secure system is proposed. The proposed system employs cryptography and steganography together; their combination contributes to increasing the security level and provides a robust system that can resist security attacks. In this paper, the Twofish block cipher is employed to encrypt the data. Twofish permits trade-offs between speed, key setup time, software size, memory, and security level. The steganographic algorithm employed to hide the encrypted data in an image is the discrete wavelet transform (DWT) algorithm. Different security tests are used to evaluate the security and functionality of the suggested algorithm, such as peak signal-to-noise ratio (PSNR) analysis and histogram analysis. The results reveal that the algorithm proposed in this paper is secure.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_44-A_Novel_Security_Scheme_based_on_Twofish.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Big Data Storage Model for Protein-Protein Interaction and Gene-Protein Associations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080543</link>
        <id>10.14569/IJACSA.2017.080543</id>
        <doi>10.14569/IJACSA.2017.080543</doi>
        <lastModDate>2017-05-31T13:48:22.9370000+00:00</lastModDate>
        
        <creator>M. Atif Sarwar</creator>
        
        <creator>Hira Yaseen</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Hina Farooq</creator>
        
        <creator>Azka Mahmood</creator>
        
        <subject>Hadoop; HBase; Big Data; Apache Drill; Protein-Protein Interaction; Gene-Protein Association; Gene-Disease Associations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>NGS (Next Generation Sequencing) technology has resulted in a huge amount of proteomics data that exists in the form of interactions (protein-protein, gene-protein, and gene-disease). ETL (Extraction, Transformation, and Loading) techniques are very useful for databases. Existing relational databases are not unified and rely on SQL (Structured Query Language), so proteomics data requires better integration of different data sources. Using NoSQL (Not Only SQL) improves efficiency and performance. For this, a novel unified model has been designed for protein interaction data (P-P, G-P, and G-D) using Apache HBase; different case studies have been used to evaluate the given model.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_43-A_Novel_Big_Data_Storage_Model_for_Protein.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Hybrid Autonomous Power System Modelling Via Multi-Agents Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080542</link>
        <id>10.14569/IJACSA.2017.080542</id>
        <doi>10.14569/IJACSA.2017.080542</doi>
        <lastModDate>2017-05-31T13:48:22.9070000+00:00</lastModDate>
        
        <creator>NASRI Sihem</creator>
        
        <creator>BEN SLAMA Sami</creator>
        
        <creator>ZAFAR Bassam</creator>
        
        <creator>CHERIF Adnan</creator>
        
        <subject>Solar Source; Energy Recovery; Hydrogen; Energy Storage; Ultra-capacitor; Multi-agent; Energy Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this paper, the design of a hybrid autonomous power system is proposed and detailed. The studied system integrates several components: a solar energy source, an energy recovery system based on a proton exchange membrane fuel cell, and two energy storage components, namely (1) energy storage based on H2 gas production and (2) an ultra-capacitor storage device. The system is controlled through an energy management unit, which aims to ensure smooth system operation against any unexpected fluctuation. The modelling of the system relies on the application of a multi-agent strategy, whose positive effects on system performance are evaluated and demonstrated by the obtained simulation results. The improvement in system performance is proved through a comparison with conventional strategies. The system relying on the multi-agent control approach appears more reliable and promising in terms of effectiveness and fast response.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_42-Study_of_Hybrid_Autonomous_Power_System_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Ontology based Approach for Flexible Association Rules Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080541</link>
        <id>10.14569/IJACSA.2017.080541</id>
        <doi>10.14569/IJACSA.2017.080541</doi>
        <lastModDate>2017-05-31T13:48:22.8900000+00:00</lastModDate>
        
        <creator>Alsayed M. H. Moawad</creator>
        
        <creator>Ahmed M. Gadallah</creator>
        
        <creator>Mohamed H. Kholief</creator>
        
        <subject>Fuzzy Ontology; Crisp Ontology; Data Mining; Fuzzy Association Rule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Data mining is used for extracting related data. The association rules approach is one of the methods used for analysing, discovering and extracting knowledge and mining the relationships among raw data. Commonly, it is important to understand and discover such knowledge directly from huge records of items stored in a relational database. This paper proposes an approach for generating human-like fuzzy association rules based on fuzzy ontology. It focuses on enhancing the process of extracting association rules from a huge database with respect to a predefined domain fuzzy ontology. Association rule mining based on crisp ontology is found to be more flexible than classical approaches as it considers the relationships between concepts or items. Yet crisp ontology suffers from the problem of information loss resulting from the rigid boundaries of crisp relationships between concepts, which are approximated to 0 or 1. In contrast, the smooth boundaries of fuzzy sets make it possible to represent partial relationships between concepts in an ontology, ranging from 0 to 1, in a more flexible, human-like manner. Consequently, fuzzy association rules generated based on fuzzy ontology are more human-like and reliable compared with previous ones. An illustrative case study, on two different data sets, shows the added value of the proposed approach compared with some other recent approaches.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_41-Fuzzy_Ontology_based_Approach_for_Flexible.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Texture based Classification of Breast Mammograms using Adaboost Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080540</link>
        <id>10.14569/IJACSA.2017.080540</id>
        <doi>10.14569/IJACSA.2017.080540</doi>
        <lastModDate>2017-05-31T13:48:22.8600000+00:00</lastModDate>
        
        <creator>M. Arfan Jaffar</creator>
        
        <subject>Features; Segmentation; Breast mammograms; Classification; Texture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Breast cancer is one of the most dangerous, leading and widespread cancers in the world, especially among women. For breast analysis, digital mammography is the most suitable tool for taking mammograms to detect cancer. It has been shown in the literature that if cancer is detected at an early stage, there are many chances to cure it timely and efficiently. Therefore, initial screening of mammograms is most important for detecting cancer at early stages. Radiologists are expensive worldwide, and for a common person it is very difficult to obtain an opinion from more than one radiologist for such a sensitive disease. Thus, another solution is required that can serve as a second opinion and offer a low-cost option to patients. In this paper, a solution is proposed that takes mammograms and detects cancer in those images automatically, without the help of a radiologist or medical specialist, so it can be adopted especially at the initial level. The proposed method first segments the portion of the image that contains the cancerous parts. After that, enhancement is performed so that the cancer is clearly visible and identifiable. Texture features are extracted to classify mammograms, and the ensemble classifier AdaBoost is used to classify those features using the concept of intelligent experts. A standard dataset is used to validate the proposed method with well-known quantitative measures, and the proposed method is compared with an existing method. Results show that the proposed method achieves 96.74% accuracy and 98.34% sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_40-Hybrid_Texture_based_Classification_of_Breast.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Quality and Productivity Model for Small and Medium Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080539</link>
        <id>10.14569/IJACSA.2017.080539</id>
        <doi>10.14569/IJACSA.2017.080539</doi>
        <lastModDate>2017-05-31T13:48:22.8430000+00:00</lastModDate>
        
        <creator>Jamaiah H. Yahaya</creator>
        
        <creator>Asadullah Tareen</creator>
        
        <creator>Aziz Deraman</creator>
        
        <creator>Abdul Razak Hamdan</creator>
        
        <subject>Software Quality; Small and Medium Enterprise; SME Productivity; Software Quality and Productivity Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Enterprises today, including small and medium enterprises (SMEs), depend on software to accomplish their objectives and maintain survivability and sustainability in their businesses. Although many studies in software quality have been carried out previously, they still lack analysis of the correlation between software quality and its impact on SME productivity. The objectives of this study are to determine the quality factors from management’s perspective and to determine the impact of software quality on the productivity of SMEs. It is implemented through a survey conducted in Malaysia involving 43 respondents from the management of SME companies. The survey indicates that efficiency, expandability, functionality, reusability, safety and usability are the most influential factors from a management perspective. The research hypotheses defined are accepted, with strong relationships between the defined variables. It shows that the level of software quality assessment in SMEs is correlated with the level of their productivity. Based on these findings, a software quality and productivity (SQAP) model for SMEs is developed. This paper presents the development of the SQAP model, which can be used as a standard and guideline in the process of obtaining and upgrading software in SMEs and can further be applied in quality assessment in organisations.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_39-Software_Quality_and_Productivity_Model_for_Small.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance of Individual and Ensemble Classifiers for an Arabic Sign Language Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080538</link>
        <id>10.14569/IJACSA.2017.080538</id>
        <doi>10.14569/IJACSA.2017.080538</doi>
        <lastModDate>2017-05-31T13:48:22.8300000+00:00</lastModDate>
        
        <creator>Miada A. Almasre</creator>
        
        <creator>Hana Al-Nuaim</creator>
        
        <subject>Ensemble; Stacking; Support vector machine; SVM; Random forest; RF; Nearest neighbour; kNN; ArSL recognition system; Depth sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The objective of this paper is to compare different classifiers’ recognition accuracy for the 28 Arabic alphabet letters gestured by participants as Sign Language and captured by two depth sensors. The accuracy results of three individual classifiers: (1) the support vector machine (SVM), (2) random forest (RF), and (3) nearest neighbour (kNN), using the original gestured dataset were compared with the accuracy results using an ensemble of the results of each classifier, as recommended by the literature. SVM produced higher overall accuracy when running as an individual classifier regardless of the number of observations for each letter. However, for letters with fewer than 65 observations each, which created a far smaller dataset, RF had higher accuracy than SVM did when using the ensemble approach. Although RF produced higher accuracy results for classes with limited class observation data, the difference between the accuracy results of RF in phase 2 and SVM in phase 1 was negligible. The researchers conclude that such a difference does not warrant using the ensemble approach for this experiment, which adds more processing complexity without a significant increase in accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_38-The_Performance_of_Individual_and_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An RTOS-based Fault Injection Simulator for Embedded Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080537</link>
        <id>10.14569/IJACSA.2017.080537</id>
        <doi>10.14569/IJACSA.2017.080537</doi>
        <lastModDate>2017-05-31T13:48:22.7970000+00:00</lastModDate>
        
        <creator>Nejmeddine ALIMI</creator>
        
        <creator>Younes LAHBIB</creator>
        
        <creator>Mohsen MACHHOUT</creator>
        
        <creator>Rached TOURKI</creator>
        
        <subject>Cryptography; DFA; Fault Injection; Simulator; RTOS; ARM; Microcontroller; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Evaluating the vulnerability of embedded systems to fault injection attacks has gained importance in recent years due to the rising threats they pose to chip security. The task is particularly important for micro-controllers since they have lower resistance to fault attacks compared to hardware-based cryptosystems. This paper reviews recent embedded fault injection simulators from the literature and presents an embedded high-level fault injection mechanism based on a Real-Time Operating System (RTOS). The approach aims to be architecture-independent and portable to 32-bit micro-controllers and embedded processors. The proposed mechanism, which primarily targets realistic fault attack scenarios on memory locations, supports timed and event-based fault injection. A Differential Fault Attack (DFA) was mounted on a popular ARM-based micro-controller running FreeRTOS to illustrate the proposed mechanism. The aim is also to bridge the embedded fault injection simulation mechanism efficiently to computer-based cryptanalysis and to highlight the importance of physically protecting the memory and integrating data-specific countermeasures.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_37-An_RTOS-based_Fault_Injection_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Production Values using Fuzzy Logic Interval based Partitioning in Different Intervals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080536</link>
        <id>10.14569/IJACSA.2017.080536</id>
        <doi>10.14569/IJACSA.2017.080536</doi>
        <lastModDate>2017-05-31T13:48:22.7830000+00:00</lastModDate>
        
        <creator>Shubham Aggarwal</creator>
        
        <creator>Jatin Sokhal</creator>
        
        <creator>Bindu Garg</creator>
        
        <subject>Mean Square Error; Fuzzy time series; Average Forecast Error Rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Fuzzy time series models for rice production have been put forward by many researchers around the globe, but their predictions have not been very accurate. Frequency-density or ratio-based partitioning methods have been used to represent the partition of discourse. We observed that various prediction models used 7-interval-based partitioning; we sought the reason for this and, alongside that explanation, propose a novel algorithm that makes prediction easier. This paper is motivated by previously published research on prediction logics. In the current paper, we use a fuzzy time series model and provide more accurate results than existing methods. To make such predictions, we use interval-based partitioning as the partition of discourse and actual production as the universe of discourse. Fuzzy models are used for prediction in many areas, such as enrolment prediction, stock price analysis, weather forecasting, and rice production.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_36-Forecasting_Production_Values_using_Fuzzy_Logic_Interval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Threshold Values Used for Road Segments Detection in SAR Images on Road Network Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080535</link>
        <id>10.14569/IJACSA.2017.080535</id>
        <doi>10.14569/IJACSA.2017.080535</doi>
        <lastModDate>2017-05-31T13:48:22.7500000+00:00</lastModDate>
        
        <creator>Safak Altay A&#231;ar</creator>
        
        <creator>Safak Bayir</creator>
        
        <subject>road detection; synthetic aperture radar</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this study, the effect of threshold values used for road segment detection in synthetic aperture radar (SAR) images on road network generation is examined. A three-phase method is applied: image smoothing, road segment detection, and irrelevant segment removal. The threshold values used in the road segment detection phase are evaluated for four different situations and the results are compared. Software was developed to apply and test all situations. Two different synthetic aperture radar images are used in the experimental studies.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_35-Effect_of_Threshold_Values_used_for_Road_Segments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Edge Cover based Graph Coloring Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080534</link>
        <id>10.14569/IJACSA.2017.080534</id>
        <doi>10.14569/IJACSA.2017.080534</doi>
        <lastModDate>2017-05-31T13:48:22.7200000+00:00</lastModDate>
        
        <creator>Harish Patidar</creator>
        
        <creator>Dr. Prasun Chakrabarti</creator>
        
        <subject>Graph Colouring Problem; Edge Cover; Independent Set; NP-Hard Problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The Graph Colouring Problem is a well-known NP-Hard problem. In the Graph Colouring Problem (GCP), all vertices of a graph must be coloured in such a way that no two adjacent vertices receive the same colour. In this paper, a new algorithm is proposed to solve the GCP. The proposed algorithm is based on finding vertex sets using the edge cover method. The implementation perspective of the algorithm is also discussed. The implemented algorithm is tested on various graph instances from the DIMACS standard dataset. The algorithm's execution time and the number of colours required to colour a graph are compared with those of other well-known graph colouring algorithms. Variation in time complexity with respect to increases in the number of vertices, the number of edges, and the average degree of a graph is also discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_34-A_Novel_Edge_Cover_based_Graph_Coloring_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gamified Incentives: A Badge Recommendation Model to Improve User Engagement in Social Networking Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080533</link>
        <id>10.14569/IJACSA.2017.080533</id>
        <doi>10.14569/IJACSA.2017.080533</doi>
        <lastModDate>2017-05-31T13:48:22.6900000+00:00</lastModDate>
        
        <creator>Reza Gharibi</creator>
        
        <creator>Mohammad Malekzadeh</creator>
        
        <subject>Social Media; Data Mining; Gamification Algorithms; User Engagement; Recommendation Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Online social communities employ several techniques to attract more users to their services. One of the essential demands of these communities is to find efficient ways to attract more users and improve their engagement. For this reason, social media sites typically take advantage of gamification systems to improve users’ participation. Among all gamification services, badges are the most popular feature in online communities and are widely used as a reward system for users. Therefore, recommending relevant unachieved badges to users can have a significant impact on their engagement level, instead of leaving them adrift in an ocean of different actions and badges. In this paper, we develop a badge recommendation model based on item-based collaborative filtering that recommends the next achievable badges to users. The model calculates the correlation between unachieved badges and users’ previously awarded badges. We evaluate our model on data from the Stack Overflow question-answering website to examine whether the recommendation model can recommend proper badges in an existing real community. Experimental results show that the model achieves about 70 per cent correct recommendations when recommending a single badge, and about 80 per cent when recommending two badges for each user.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_33-Gamified_Incentives_A_Badge_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Compendious Study of Online Payment Systems: Past Developments, Present Impact, and Future Considerations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080532</link>
        <id>10.14569/IJACSA.2017.080532</id>
        <doi>10.14569/IJACSA.2017.080532</doi>
        <lastModDate>2017-05-31T13:48:22.6570000+00:00</lastModDate>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Rashidah F. Olanrewaju</creator>
        
        <creator>Asifa Mehraj Baba</creator>
        
        <creator>Adil Ahmad Langoo</creator>
        
        <creator>Shahul Assad</creator>
        
        <subject>E-Commerce; Online payment system; Online payment developments; Payment gateway; Online payment challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The advent of e-commerce together with the growth of the Internet promoted the digitisation of the payment process, with the provision of various online payment methods such as electronic cash, debit cards, credit cards, contactless payments and mobile wallets. Mobile payment services are gaining popularity day by day and, in conjunction with technological innovations, are advancing towards a promising future. This paper aims to evaluate the present status and growth of online payment systems in worldwide markets and also takes a look at their future. A comprehensive survey of all aspects of electronic payment was conducted after analysing several research studies on online payment systems. Several online payment services, the associated security issues, and the future of such payment modes have been analysed. This study also examines the various factors that affect the adoption of online payment systems by consumers. Furthermore, huge growth can be seen in mobile payment methods globally, beating both debit and credit card payments, owing to the convenience and security they offer. Nevertheless, various obstacles have been identified in the adoption of online payment methods; thus, measures must be taken to grant this industry a hopeful future. There should be a suitable trade-off between usability and security when designing online payment systems in order to attract customers, and designers must take into consideration the technical and organisational issues that arise in the attempt to achieve interoperability. As a matter of fact, developing interoperable and flexible solutions and universal standards is one of the most difficult tasks ahead.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_32-A_Compendious_Study_of_Online_Payment_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Simulation of Robust Controllers for Power Electronic Converters used in New Energy Architecture for a (PVG)/ (WTG) Hybrid System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080531</link>
        <id>10.14569/IJACSA.2017.080531</id>
        <doi>10.14569/IJACSA.2017.080531</doi>
        <lastModDate>2017-05-31T13:48:22.6430000+00:00</lastModDate>
        
        <creator>Mohamed Akram JABALLAH</creator>
        
        <creator>Dhafer MEZGHANI</creator>
        
        <creator>Abdelkader MAMI</creator>
        
        <subject>(PVG)/(WTG) Hybrid system; (MPPTSMC); (MPPTCC); Wind turbine generator (WTG); Photovoltaic generator (PVG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The combination of a photovoltaic energy source and a wind energy source in a hybrid configuration has become an alternative solution for producing power to feed industrial and domestic applications. In order to fully exploit the energy provided by both sources and ensure very high efficiency, the hybrid power system must be made to produce the maximum possible power. Indeed, in applications based on renewable energy, power converters are an essential element that can help the global energy system extract maximum power. This paper focuses on developing and optimising a new architecture for a hybrid photovoltaic generator (PVG) / wind turbine generator (WTG) power system. To obtain the maximum power, two kinds of MPPT procedures are used: the first is based on MPPT (P&amp;O) sliding mode control (MPPTSMC) for the photovoltaic generator (PVG), and the second is based on an MPPT current control (MPPTCC) approach for the wind turbine generator (WTG). In addition, the proposed hybrid power system works well under changing climatic conditions, such as irradiation and wind speed. On the other hand, in order to maintain the dc-link at a desired and stable value during these variations, we have integrated a boost converter controlled by a sliding mode controller (SMC). A simulation model of the hybrid power system has been carried out using PSIM tools.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_31-Design_and_Simulation_of_Robust_Controllers_for_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Attacks Resistant Architecture for KECCAK Hash Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080530</link>
        <id>10.14569/IJACSA.2017.080530</id>
        <doi>10.14569/IJACSA.2017.080530</doi>
        <lastModDate>2017-05-31T13:48:22.6270000+00:00</lastModDate>
        
        <creator>Fatma Kahri</creator>
        
        <creator>Hassen Mestiri</creator>
        
        <creator>Belgacem Bouallegue</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>Cryptographic; KECCAK SHA-3; Fault detection; Embedded systems; FPGA implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The KECCAK cryptographic algorithm is widely used in embedded circuits to ensure a high level of security in any system that requires hashing, such as for integrity checking and random number generation. One of the most efficient cryptanalysis techniques against KECCAK implementations is the fault injection attack. Until now, only a few fault detection schemes for KECCAK have been presented. In this paper, in order to provide a high level of security against fault attacks, an efficient error detection scheme based on a scrambling technique is proposed. To evaluate the robustness of the proposed detection scheme against fault attacks, we performed fault injection simulations and show that the fault coverage is about 99.996%. We describe the proposed detection scheme and, through Field-Programmable Gate Array analysis, show that it can be easily implemented with low complexity and can efficiently protect KECCAK against fault attacks. Moreover, the Field-Programmable Gate Array implementation results show that the proposed KECCAK fault detection scheme realises a compromise between implementation cost and KECCAK robustness against fault attacks.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_30-Fault_Attacks_Resistant_Architecture_for_KECCAK.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Corpus for Test, Compare and Enhance Arabic Root Extraction Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080529</link>
        <id>10.14569/IJACSA.2017.080529</id>
        <doi>10.14569/IJACSA.2017.080529</doi>
        <lastModDate>2017-05-31T13:48:22.5930000+00:00</lastModDate>
        
        <creator>Nisrean Thalji</creator>
        
        <creator>Nik Adilah Hanin</creator>
        
        <creator>Yasmin Yacob</creator>
        
        <creator>Sohair Al-Hakeem</creator>
        
        <subject>Arabic root extraction algorithm; corpus; pattern; prefix; suffix; root</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Many studies have recently focused on building, evaluating and comparing Arabic root extraction algorithms. The main challenges facing root extraction algorithms are the absence of a standard data set for testing, comparing and enhancing different Arabic root extraction algorithms, and the absence of complete lists of roots, prefixes, suffixes and patterns. In this paper, we describe the development of a new corpus derived from traditional Arabic dictionaries “mu’jams”. The goal is to use the corpus as a new gold-standard data set for testing, comparing and enhancing different Arabic root extraction algorithms. This data set covers all types of words and all roots. It contains each word and its root as a pair, avoiding the need to consult a human expert to verify the correct roots of words used in the testing or comparison process. We describe the individual phases of the corpus construction, i.e. normalisation, reading derivation words and roots as pairs, and reading each root and its definition part. We have automatically extracted 12,000 roots, 430 prefixes, 320 suffixes, 4,320 patterns, and 720,000 word-root pairs. The Khoja and Garside Arabic root extraction algorithm was tested on this corpus; its accuracy was 63%, and after supplying it with our lists of roots, prefixes, suffixes and patterns, its accuracy became 84%.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_29-Corpus_for_Test_Compare_and_Enhance_Arabic_Root.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Image Security: Fusion of Encryption, Steganography and Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080528</link>
        <id>10.14569/IJACSA.2017.080528</id>
        <doi>10.14569/IJACSA.2017.080528</doi>
        <lastModDate>2017-05-31T13:48:22.5630000+00:00</lastModDate>
        
        <creator>Mirza Abdur Razzaq</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <creator>Mirza Adnan Baig</creator>
        
        <creator>Ashfaque Ahmed Memon</creator>
        
        <subject>Image security; Encryption; Steganography; Watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Digital images are widely communicated over the internet. Securing digital images is an essential and challenging task on a shared communication channel. Various techniques are used to secure digital images, such as encryption, steganography and watermarking; these methods aim to achieve the security goals of confidentiality, integrity and availability (CIA). Individually, these procedures are not quite sufficient for the security of digital images. This paper presents a blended security technique using encryption, steganography and watermarking. It comprises three key components: (1) the original image is encrypted using a large secret key by rotating pixel bits to the right through an XOR operation; (2) for steganography, the encrypted image is embedded into the least significant bits (LSBs) of the cover image to obtain the stego image; and (3) the stego image is watermarked in the time domain and frequency domain to ensure ownership. The proposed approach is efficient, simple and secure; it provides significant protection against threats and attacks.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_28-Digital_Image_Security_Fusion_of_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Watermarking Scheme for Image Authentication and Recovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080527</link>
        <id>10.14569/IJACSA.2017.080527</id>
        <doi>10.14569/IJACSA.2017.080527</doi>
        <lastModDate>2017-05-31T13:48:22.5330000+00:00</lastModDate>
        
        <creator>Rafi Ullah</creator>
        
        <creator>Hani Ali Alquhayz</creator>
        
        <subject>Watermarking; Genetic Programming (GP); Authentication; Quantisation; Recovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Recently, researchers have proposed semi-fragile watermarking techniques with the additional capability of image recovery. However, these approaches have certain limitations with respect to capacity, imperceptibility, and robustness. In this paper, we propose two independent watermarks, one for image recovery and the other for authentication. The first watermark (an image digest), a highly compressed version of the original image itself, is used to recover the distorted image. Unlike the traditional quantisation matrix, genetic programming based matrices are used for compression purposes; these matrices are based on the local characteristics of the original image. Furthermore, a second watermark, a pseudo-random binary matrix, is generated to authenticate the host image precisely. Experimental results show that the semi-fragility of the watermarks makes the proposed scheme tolerant of JPEG lossy compression and that it locates tampered regions accurately.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_27-Intelligent_Watermarking_Scheme_for_Image_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sustainable Green SLA (GSLA) Validation using Bayesian Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080526</link>
        <id>10.14569/IJACSA.2017.080526</id>
        <doi>10.14569/IJACSA.2017.080526</doi>
        <lastModDate>2017-05-31T13:48:22.5000000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <subject>GSLA; Sustainability; GSLA informational model; Bayesian Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Currently, most IT (Information Technology) and ICT (Information and Communication Technology) industries/companies provide their various services/products to different levels of customers/users through the newly developed sustainable GSLA (Green Service Level Agreement). In addition, these industries also design new green services within their scope by using the global sustainable GSLA informational model. The recent development of the sustainable GSLA under the 3Es (Ecology, Economy and Ethics) is assisting these IT and ICT based industries to practise sustainability by providing green services to their customers/users, thus respecting the green computing paradigm. However, the newly developed sustainable GSLA model has not yet been validated. This research attempts to evaluate and validate the sustainable GSLA model using a Bayesian Network Model (BNM). The validation using BNM is done with feedback from 44 different IT and ICT based companies from Japan, India and Bangladesh. The average accuracy of using BNM to validate the sustainable GSLA model is 68% when considering all sample data sets. Moreover, when the proposed BNM has higher confidence according to the entropy calculation, the accuracy is almost 100% for most of the companies’ feedback. The proposed idea of using BNM for evaluating and validating the sustainable GSLA model would help ICT engineers design and develop future green services in their industries. Additionally, the evaluation also validates the information sustainable GSLA model proposed in previous research.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_26-Sustainable_Green_SLA_(GSLA)_Validation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Addressing the Future Data Management Challenges in IoT: A Proposed Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080525</link>
        <id>10.14569/IJACSA.2017.080525</id>
        <doi>10.14569/IJACSA.2017.080525</doi>
        <lastModDate>2017-05-31T13:48:22.4870000+00:00</lastModDate>
        
        <creator>Mohammad Asad Abbasi</creator>
        
        <creator>Zulfiqar A. Memon</creator>
        
        <creator>Tahir Q. Syed</creator>
        
        <creator>Jamshed Memon</creator>
        
        <creator>Rabah Alshboul</creator>
        
        <subject>IoT; Data Management; Cloud Computing; Big Data; Smart Devices; Interoperability; Privacy; Trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The Internet of Things (IoT) has been attracting the interest of researchers in recent years. Traditionally, only a handful of device types had the capability to connect to the internet/intranet, but due to the latest developments in RFID, NFC, smart sensors and communication protocols, billions of heterogeneous devices are being connected each year. From smart phones uploading data regarding location and fitness to smart grids uploading data regarding energy consumption and distribution, these devices are generating a huge amount of data at every passing moment. This research paper proposes a data management framework to securely manage the huge amount of data generated by IoT enabled devices. The proposed framework is divided into nine layers: a data collection layer, fog computing layer, integrity management layer, security layer, data aggregation layer, data analysis layer, data storage layer, application layer and archiving layer. The security layer is proposed as a background layer because all layers shall ensure the privacy and security of the data. These layers help manage the data from the point where it is generated by an IoT enabled device until the point where it is archived at the data center.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_25-Addressing_the_Future_Data_Management_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Programming based Algorithm for Predicting Exchanges in Electronic Trade using Social Networks’ Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080524</link>
        <id>10.14569/IJACSA.2017.080524</id>
        <doi>10.14569/IJACSA.2017.080524</doi>
        <lastModDate>2017-05-31T13:48:22.4530000+00:00</lastModDate>
        
        <creator>Shokooh Sheikh Abooli Poor</creator>
        
        <creator>Mohammad Ebrahim Shiri</creator>
        
        <subject>Electronic business; Social networks; prediction; machine learning; genetic programming; Facebook network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The purpose of this paper is to use a Facebook dataset for predicting exchanges in electronic business. To this end, a dataset is first collected from Facebook users and divided into training and test sets. An advertisement post is sent to the training-set users and feedback from each user is recorded. Then, a learning machine is designed and trained based on these feedbacks and users&#39; profiles; genetic programming is used to design this learning machine. Next, the test dataset is used to test the learning machine. The efficiency of the proposed method is evaluated in terms of Precision, Accuracy, Recall and F-Measure. Experimental results show that the proposed method outperforms a baseline algorithm (based on J48) and a random selection method in selecting target users for sending advertisements. The proposed method obtained 74% accuracy and a 73% earning ratio in classifying users.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_24-A_Genetic_Programming_based_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing Coverage of Churn Prediction in Telecommunication Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080523</link>
        <id>10.14569/IJACSA.2017.080523</id>
        <doi>10.14569/IJACSA.2017.080523</doi>
        <lastModDate>2017-05-31T13:48:22.4230000+00:00</lastModDate>
        
        <creator>Adnan Anjum</creator>
        
        <creator>Saeeda Usman</creator>
        
        <creator>Adnan Zeb</creator>
        
        <creator>Imran Uddin Afridi</creator>
        
        <creator>Pir Masoom Shah</creator>
        
        <creator>Zahid Anwar</creator>
        
        <creator>Adeel Anjum</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <creator>Saif Ur Rehman Malik</creator>
        
        <subject>Telco; Churn Prediction; Business Intelligence; Business Support Systems; Operations Support Systems; E-Churn Model (Ensembling Churn Model)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Companies are investing more in analytics to obtain a competitive edge in the market, and decision makers require better insight into their data to be able to interpret complex patterns more easily. Attracting thousands of new customers is worthless if an equal number are leaving. Business Intelligence (BI) systems are unable to find hidden churn patterns in a huge customer base. In this paper, a decision support system is proposed that can predict the churning behaviour of a customer efficiently. We propose a procedure to develop an analytical system using data mining and machine learning techniques (C5, CHAID, QUEST, and ANN) for churn analysis and prediction in the telecommunication industry. Prediction performance can be improved by using a larger volume of data and several features from both Business Support Systems (BSS) and Operations Support Systems (OSS). Extensive experiments were performed; marginal increases in predictive performance can be seen by using a larger volume and multiple attributes from both Telco BSS and OSS data. From the results, it is observed that using a combination of techniques can help to build a better and more precise churn prediction model.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_23-Optimizing_Coverage_of_Churn_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mixed Method Study for Investigating Critical Success Factors (CSFs) of E-Learning in Saudi Arabian Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080522</link>
        <id>10.14569/IJACSA.2017.080522</id>
        <doi>10.14569/IJACSA.2017.080522</doi>
        <lastModDate>2017-05-31T13:48:22.3930000+00:00</lastModDate>
        
        <creator>Quadri Noorulhasan Naveed</creator>
        
        <creator>AbdulHafeez Muhammad</creator>
        
        <creator>Sumaya Sanober</creator>
        
        <creator>Mohamed Rafik N. Qureshi</creator>
        
        <creator>Asadullah Shah</creator>
        
        <subject>Critical Success Factors (CSFs); Content Reliability and Collected Mean; E-Learning; Kingdom of Saudi Arabia; Quantitative Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Electronic Learning (E-Learning) in the education system has become the obvious choice of communities across the globe because of its numerous advantages. The main aim of the present study is to identify Critical Success Factors (CSFs) and validate them for successful implementation of E-Learning at Saudi Arabian universities. This study developed a multi-dimensional instrument for measuring the E-Learning CSFs in the higher educational institutions of Saudi Arabia. The study reviewed various CSFs from the literature and identified the most important E-Learning CSFs, which are described and grouped into five dimensions: Student, Instructor, Design and Contents, System and Technological, and Institutional Management Services. The importance of the 36 CSFs falling under these dimensions was then validated quantitatively through university students, instructors, and E-Learning staff of some well-known universities in Saudi Arabia. A survey instrument was developed and tested on a sample of 257 respondents from Saudi Arabian universities. It was found that the System and Technological dimension is the most significant as perceived by respondents. The results of the study show that all obtained factors are highly reliable and would thus be useful for developing and implementing E-Learning systems.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_22-A_Mixed_Method_Study_for_Investigating_Critical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Content-based Image Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080521</link>
        <id>10.14569/IJACSA.2017.080521</id>
        <doi>10.14569/IJACSA.2017.080521</doi>
        <lastModDate>2017-05-31T13:48:22.3770000+00:00</lastModDate>
        
        <creator>Mohamed Maher Ben Ismail</creator>
        
        <subject>Image retrieval; Content-based image retrieval; Supervised learning; Unsupervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The widespread use of smart devices along with the exponential growth of virtual societies has yielded big digital image databases. These databases can be counter-productive if they are not coupled with efficient Content-Based Image Retrieval (CBIR) tools. The last decade has witnessed the introduction of promising CBIR systems and promoted applications in various fields. In this article, a survey of state-of-the-art content-based image retrieval, including empirical and theoretical work, is presented. This work also includes publications that cover research aspects relevant to the CBIR area. Namely, unsupervised and supervised learning and fusion techniques, along with low-level image visual descriptors, have been reported. Moreover, challenges and applications that emerged to support CBIR research are discussed in this work.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_21-A_Survey_on_Content_based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Association between JPL Coding Standard Violations and Software Faults: An Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080520</link>
        <id>10.14569/IJACSA.2017.080520</id>
        <doi>10.14569/IJACSA.2017.080520</doi>
        <lastModDate>2017-05-31T13:48:22.3470000+00:00</lastModDate>
        
        <creator>Bashar Q. Ahmed</creator>
        
        <creator>Mahmoud O. Elish</creator>
        
        <subject>Coding standard; Software faults; Software quality; Exploratory study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Since the software community has realised the importance of adopting coding standards during the development process for improved software quality, many coding standards have been proposed and used during software development. The main objective of this paper is to explore the association between the Java Programming Language (JPL) coding standard and the fault density of classes in object-oriented software. For this purpose, a set of metrics that quantify violations of coding standards has been proposed. An exploratory study was then conducted in which data were collected from six open source software systems. The study involved principal component analysis, bivariate correlation analysis, and univariate regression analysis. The principal component analysis has shown that many of the proposed metrics fall into the first two components, which in turn reflects the importance and diversity of these metrics. Furthermore, associations between some metrics and fault density have been observed across all systems, indicating that these metrics can be useful predictors for improved early estimation of the fault density of object-oriented classes.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_20-Association_between_JPL_Coding_Standard_Violations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Protocols Combination based on EIGRP and OSPF for Real-Time Applications in Enterprise Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080519</link>
        <id>10.14569/IJACSA.2017.080519</id>
        <doi>10.14569/IJACSA.2017.080519</doi>
        <lastModDate>2017-05-31T13:48:22.3130000+00:00</lastModDate>
        
        <creator>Dounia EL IDRISSI</creator>
        
        <creator>Najib ELKAMOUN</creator>
        
        <creator>Fatima LAKRAMI</creator>
        
        <creator>Rachid HILAL</creator>
        
        <subject>Routing protocols; EIGRP; OSPF; Redistribution; QoS; Opnet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This work studies the impact of redistribution on network performance compared with the use of a single routing protocol. A real network with real traffic parameters is simulated in order to investigate a real deployment case, and thus to extract precise results and practical conclusions. This work demonstrates that using a single routing protocol is generally more efficient for real topologies, especially when deploying sensitive applications requiring a certain QoS level.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_19-Performance_Comparison_of_Protocols_Combination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Pi Adaptive Learning Controller for Controlling the Angle of Attack of an Aircraft</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080518</link>
        <id>10.14569/IJACSA.2017.080518</id>
        <doi>10.14569/IJACSA.2017.080518</doi>
        <lastModDate>2017-05-31T13:48:22.3000000+00:00</lastModDate>
        
        <creator>Srinibash Swain</creator>
        
        <creator>Partha Sarathi Khuntia</creator>
        
        <subject>Angle of Attack; Interpolation Rule; Performance Indices; Fuzzy PI Adaptive Learning Controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this paper, a Fuzzy PI Adaptive Learning controller is proposed for a flight control system to control the angle of attack of an aircraft. The proposed controller tracks the reference angle as desired by the pilot of the aircraft. The performance indices are evaluated and the corresponding values are compared with those of the conventional controllers obtained from Ziegler-Nichols (ZN), Tyreus-Luyben (TL) and the Extended Skogestad Internal Model Controller (ESIMC). The performance indices, such as Mean Square Error (MSE), Integral Absolute Error (IAE) and Integral Absolute Time Error (IATE), are evaluated to verify the superiority of one controller over another.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_18-Fuzzy_PI_Adaptive_Learning_Controller_for_Controlling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Critical Factors that Perturb Business-IT Alignment in Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080517</link>
        <id>10.14569/IJACSA.2017.080517</id>
        <doi>10.14569/IJACSA.2017.080517</doi>
        <lastModDate>2017-05-31T13:48:22.2670000+00:00</lastModDate>
        
        <creator>Muhammad Asif Khan</creator>
        
        <subject>Alignment; Business-IT gap; Organizational factors; Strategic alignment; Critical factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Business executives around the globe have recognised the significance of information technology (IT) and started adopting IT in their business processes. Firms always invest in adopting the latest technologies in order to comply with customer requirements. Despite heavy investment, companies are unable to avail themselves of optimum benefits from the underpinning technologies. Consequently, IT does not support business the way it should, and hence a misalignment between business and IT is created. In the current study, various factors in a Saudi financial institution that perturb the alignment between the two entities have been discussed and assessed. In the research study, a questionnaire approach has been used, which is an effective tool to collect qualitative data. Finally, some recommendations have been suggested to bring business and IT into alignment.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_17-Investigation_of_Critical_Factors_that_Perturb.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Scaled Region Duplication Image Forgery using Color based Segmentation with LSB Signature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080516</link>
        <id>10.14569/IJACSA.2017.080516</id>
        <doi>10.14569/IJACSA.2017.080516</doi>
        <lastModDate>2017-05-31T13:48:22.2530000+00:00</lastModDate>
        
        <creator>Dr. Diaa Mohammed Uliyan</creator>
        
        <creator>Dr. Mohammed A. F. Al-Husainy</creator>
        
        <subject>Digital image forensics; Region duplication; Forgery detection; Image authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Due to the availability of powerful image editing software, forgers can tamper with image content easily. There are various types of image forgery, such as image splicing and region duplication forgery. Region duplication is one of the most common manipulations used for tampering with digital images. It is vital in image forensics to authenticate the digital image. In this paper, a novel region duplication forgery detection approach is proposed. By segmenting the input image based on colour features, a sufficient number of centroids are produced, which exist even in small or smooth regions. Then, the Least Significant Bit (LSB) of all the colours of the pixels in each segment is extracted to build the signature vector. Finally, the Hamming distance is calculated by exploiting the signature vector of the image to find the dissimilarity. Various experimental results are provided to demonstrate the superior performance of the proposed scheme under post-processing operations such as scaling attack.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_16-Detection_of_Scaled_Region_Duplication_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resources Management of High Speed Downlink Packet Access Network in the Presence of Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080515</link>
        <id>10.14569/IJACSA.2017.080515</id>
        <doi>10.14569/IJACSA.2017.080515</doi>
        <lastModDate>2017-05-31T13:48:22.2200000+00:00</lastModDate>
        
        <creator>Abdulaleem Ali Almazroi</creator>
        
        <subject>HSDPA Network; Admission Control; Performance Evaluation; Mobility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>High-Speed Downlink Packet Access (HSDPA) is a mobile telephony protocol designed to increase data capacity and transfer rate. This paper presents a resource allocation strategy for the HSDPA broadband network. An admission control scheme is proposed. It divides the coverage area of a base station (Node-B) into several regions based on the principle of Adaptive Modulation and Coding (AMC) efficiency. The call admission control (CAC) mechanism distinguishes between RT and NRT traffic according to the type of service requested by the user. It dynamically allocates an effective bandwidth to each accepted call in the system based on its modulation efficiency and maintains its initial rate during its communication.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_15-Resources_Management_of_High_Speed_Downlink.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Early Phase Software Project Risk Assessment Support Method for Emergent Software Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080514</link>
        <id>10.14569/IJACSA.2017.080514</id>
        <doi>10.14569/IJACSA.2017.080514</doi>
        <lastModDate>2017-05-31T13:48:22.1900000+00:00</lastModDate>
        
        <creator>Sahand Vahidnia</creator>
        
        <creator>&#214;mer &#214;zg&#252;r Tanri&#246;ver</creator>
        
        <creator>I.N. Askerzade</creator>
        
        <subject>Software Risk Identification; Software Risk Assessment; Failure Mode Prediction; Fuzzy Decision Support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Risk identification and assessment are amongst the critical activities in software project management. However, identifying and assessing risks and uncertainties is a challenging process, especially for emergent software organizations that lack resources. This research aims to introduce a method and a prototype tool to assist software development practitioners and teams with risk assessment processes. We have identified and put forward software project related risks from the literature. Then, by conducting a survey of software practitioners in small organizations, we collected the opinions of 86 practitioners on the probability and impact of each risk factor, based on past projects. We developed a risk assessment method and a prototype tool, initially based on this data, that accumulates further data as the tool is used. Along with risk prioritisation and a risk matrix, the method utilises fuzzy logic to provide practitioners with predicted scores for potential failure types and an aggregated risk score for the project. In order to validate the usability of the method and the tool, we have conducted a case study of project risk assessment in a small software organization. The introduced method is partially successful at predicting risks and estimating the probability of predefined failure modes.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_14-An_Early_Phase_Software_Project_Risk_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Classification of White Blood Cell using Microscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080513</link>
        <id>10.14569/IJACSA.2017.080513</id>
        <doi>10.14569/IJACSA.2017.080513</doi>
        <lastModDate>2017-05-31T13:48:22.1730000+00:00</lastModDate>
        
        <creator>Mazin Z. Othman</creator>
        
        <creator>Thabit S. Mohammed</creator>
        
        <creator>Alaa B. Ali</creator>
        
        <subject>White Blood Cell; Neural networks; Image analysis; Leukocytes; Lymphocyte; Feature extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>With the technological advances in the medical field, the need for faster and more accurate analysis tools becomes essential for better patient diagnosis. In this work, the image recognition problem of white blood cells (WBC) is investigated. Five types of white blood cells are classified using a feed-forward back-propagation neural network. After segmentation of blood cells obtained from microscopic images, the 16 most significant features of these cells are fed as inputs to the neural network. Half of the 100 WBC sub-images found after segmentation are used to train the neural network, while the other half are used for testing. The results are promising, with a classification accuracy of 96%.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_13-Neural_Network_Classification_of_White_Blood_Cell.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RKE-CP: Response-based Knowledge Extraction from Collaborative Platform of Text-based Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080512</link>
        <id>10.14569/IJACSA.2017.080512</id>
        <doi>10.14569/IJACSA.2017.080512</doi>
        <lastModDate>2017-05-31T13:48:22.1430000+00:00</lastModDate>
        
        <creator>Jalaja G</creator>
        
        <creator>Kavitha C</creator>
        
        <subject>Text Mining; Collaborative Platform; Probability Theory; Heterogeneous Domain; Precision/Recall</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>With the generation of a massive amount of product-centric responses from existing applications on collaborative platforms, it is necessary to perform discrete analytical operations on them. As the majority of such responses are textual in nature, this increases the applicability of text mining approaches. We review the existing research contributions in text mining and find that there are significant research gaps. Therefore, the proposed study presents a technique called RKE-CP, i.e., Response-based Knowledge Extraction from Collaborative Platform, where the term Collaborative points towards a cloud environment. The proposed technique is designed using mathematical modelling, where the main focus of design and implementation lies on accomplishing a good balance between faster response time in mining operations and a higher precision/recall rate. The study outcome exhibits a better precision score, recall, and lower processing time as compared with the most relevant text mining work.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_12-RKE-CP_Response-based_Knowledge_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research Advancements Towards in Existing Smart Metering over Smart Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080511</link>
        <id>10.14569/IJACSA.2017.080511</id>
        <doi>10.14569/IJACSA.2017.080511</doi>
        <lastModDate>2017-05-31T13:48:22.1100000+00:00</lastModDate>
        
        <creator>Abdul Khadar A</creator>
        
        <creator>Javed Ahamed Khan</creator>
        
        <creator>M S Nagaraj</creator>
        
        <subject>Digital Meter; Energy; Power Distribution; Performance; Privacy; Smart Meter; Smart Grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The advent of smart meters has automated the entire billing generation process for commercial energy usage, which was previously done using digital meters. Although western countries practice its usage more, it is still unknown to many developing countries, along with its role in power distribution. Hence, this paper reviews the working principle of smart meters along with a brief description of their basic operation. It thoroughly investigates the implementation work on algorithm design and the techniques developed in the last five years for smart meters. The paper examines the various significant technologies that have evolved to address the problems in smart meters, e.g., performance improvement, energy efficiency, security factors, etc. Finally, a set of research gaps is explored after scrutinizing the advantages and limitations of existing techniques, followed by brief highlights of feasible lines of research to compensate for the unaddressed problems associated with research directions towards smart meters.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_11-Research_Advancements_towards_in_Existing_Smart.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on the Effect of Learning Strategy using a Highlighter Pen on Gaze Movement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080510</link>
        <id>10.14569/IJACSA.2017.080510</id>
        <doi>10.14569/IJACSA.2017.080510</doi>
        <lastModDate>2017-05-31T13:48:22.0630000+00:00</lastModDate>
        
        <creator>Hiroki Nishimura</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <subject>Highlighter pen; Learning strategy; Eye movement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>In this study, we propose a learning strategy using a highlighter pen to improve the learning efficiency of learners. This method makes important information stand out by colouring the text. It is known that highlighting the important points of sentence problems with a highlighter pen improves answer speed and correct answer rates, especially in school subjects such as Japanese and mathematics. In this study, we focused on gaze movement and analysed the gaze dwell time and the number of gaze movements to clarify what kind of influence and learning effect this strategy has on the cognitive process.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_10-A_Study_on_the_Effect_of_Learning_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Security: Detection of Cross Site Scripting in PHP Web Application using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080509</link>
        <id>10.14569/IJACSA.2017.080509</id>
        <doi>10.14569/IJACSA.2017.080509</doi>
        <lastModDate>2017-05-31T13:48:22.0330000+00:00</lastModDate>
        
        <creator>Abdalla Wasef Marashdih</creator>
        
        <creator>Zarul Fitri Zaaba</creator>
        
        <creator>Herman Khalid Omer</creator>
        
        <subject>Web Application Security; Security Vulnerability; Web Testing; Cross Site Scripting; Genetic Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Cross site scripting (XSS) is one of the major threats to web application security, and research is still underway for an effective and useful way to analyse the source code of web applications and remove this threat. XSS occurs by injecting malicious scripts into a web application and can lead to significant violations at the site or for the user. Several solutions have been recommended for its detection. However, their results do not appear to be effective enough to resolve the issue. This paper recommends a methodology for the detection of XSS in PHP web applications using a genetic algorithm (GA) and static analysis. The methodology enhances earlier approaches to determining XSS vulnerability in web applications by eliminating infeasible paths from the control flow graph (CFG). This aids in reducing the false positive rate in the outcomes. The results of the experiments indicated that our methodology is more effective in detecting XSS vulnerability in PHP web applications compared to earlier studies, in terms of the false positive rates and the concrete susceptible paths determined by the GA generator.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_9-Web_Security_Detection_of_Cross_Site_Scripting_in_PHP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Failure Enterprise Systems in Organizational Perspective Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080508</link>
        <id>10.14569/IJACSA.2017.080508</id>
        <doi>10.14569/IJACSA.2017.080508</doi>
        <lastModDate>2017-05-31T13:48:22.0030000+00:00</lastModDate>
        
        <creator>Soobia Saeed</creator>
        
        <creator>Asadullah Shaikh</creator>
        
        <creator>Muhammad Ali Memon</creator>
        
        <creator>Majid Hussain Memon</creator>
        
        <creator>Faheem Ahmed Abassi</creator>
        
        <creator>Syed Mehmood R Naqvi</creator>
        
        <subject>VDCL; ERP; Implementation; projects; failure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The failure percentage of Enterprise Resource Planning (ERP) implementation projects stays high, even following years of efforts to diminish it. In this paper, the authors propose empirical research that aims to decrease the failure percentage of ERP projects. Nonetheless, most endeavours to enhance project achievement have concentrated on variations within the conventional project management pattern. The authors contend that a main driver of the high ERP implementation project failure percentage is the conventional pattern itself. An alternative pattern, Value-Driven Change Leadership (VDCL), is proposed for reducing the ERP implementation failure percentage. This paper proposes an empirical examination to explain the part played by the new pattern (VDCL) in diminishing the ERP implementation failure percentage, and portrays the exploratory procedure for an empirical study of the use of VDCL in decreasing the ERP implementation failure percentage.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_8-Implementation_of_Failure_Enterprise_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SHPIS: A Database of Medicinal Plants from Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080507</link>
        <id>10.14569/IJACSA.2017.080507</id>
        <doi>10.14569/IJACSA.2017.080507</doi>
        <lastModDate>2017-05-31T13:48:21.9700000+00:00</lastModDate>
        
        <creator>Asif Hassan Syed</creator>
        
        <creator>Tabrej khan</creator>
        
        <subject>Saudi Medicinal Plants; Saudi Herbal Plant Information System; MySQL; Relational Database Management System; Hypertext Pre-processor; Web Portal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Many studies in the past have revealed the use of indigenous medicinal plants for the treatment of various diseases in Saudi Arabia. However, the details of these essential indigenous medicinal herbs and their therapeutic implications against various human and animal diseases are not well documented and organised on a local platform. In this regard, a thorough mining of scholarly articles for information on local herbal remedies available and used by communities of Saudi Arabia was performed. The research revealed a unique insight into the natural herbal resources of Saudi Arabia, with as many as 120 varieties of medicinal plants from Saudi Arabia. Therefore, in order to provide a structured platform to store and retrieve relevant information pertaining to the indigenous medicinal plants of Saudi Arabia, a Saudi Herbal Plants Information System was built using the waterfall model. MySQL, an open source Relational Database Management System, and the server-side scripting language Hypertext Pre-processor were used to build an interactive, dynamic web portal for the Saudi Herbal Plants Information System. The designed web portal allows visitors to access information on herbs available in the herbal database for research and development.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_7-SHPIS_A_Database_of_Medicinal_Plants_from_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sperm Motility Algorithm for Solving Fractional Programming Problems under Uncertainty</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080506</link>
        <id>10.14569/IJACSA.2017.080506</id>
        <doi>10.14569/IJACSA.2017.080506</doi>
        <lastModDate>2017-05-31T13:48:21.9400000+00:00</lastModDate>
        
        <creator>Osama Abdel Raouf</creator>
        
        <creator>Bayoumi M. Ali Hassan</creator>
        
        <creator>Ibrahim M. Hezam</creator>
        
        <subject>Sperm Motility Algorithm; Fractional Programming; Uncertainty; Fuzzy Programming; Monte Carlo Method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This paper investigates solving Fractional Programming Problems under Uncertainty (FPPU) using the Sperm Motility Algorithm. The Sperm Motility Algorithm (SMA) is a novel metaheuristic algorithm inspired by the human fertilization process, proposed for solving optimization problems by Osama and Hezam [1]. The uncertainty in the Fractional Programming Problem (FPP) can be found in the objective function coefficients and/or the coefficients of the constraints. The uncertainty in the coefficients can be characterised by two methods. The first method is fuzzy logic-based alpha-cut analysis, in which uncertain parameters are treated as fuzzy numbers, leading to Fuzzy Fractional Programming Problems (FFPP). The second is Monte Carlo simulation (MCS), in which parameters are treated as random variables bound to a given probability distribution, leading to Probabilistic Fractional Programming Problems (PFPP). The two different methods are used to assess the trustworthiness of the transformation to the deterministic domain. A comparative study of the results obtained using SMA, a genetic algorithm, and two SI algorithms on selected benchmark examples is carried out. A detailed comparison is conducted, giving a ranked recommendation of the algorithms and methods suitable for solving FPPU.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_6-Sperm_Motility_Algorithm_for_Solving_Fractional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Various Methods Treatment Expert Assessments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080505</link>
        <id>10.14569/IJACSA.2017.080505</id>
        <doi>10.14569/IJACSA.2017.080505</doi>
        <lastModDate>2017-05-31T13:48:21.9100000+00:00</lastModDate>
        
        <creator>Georgi Popov</creator>
        
        <creator>Shamil Magomedov</creator>
        
        <subject>Treatment; Linguistic variables; Information processing; Evaluation procedures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The paper deals with the problem of choosing the most effective methods of processing expert information when several results of expert evaluation of a problem are available. The problem of levelling expert assessments that differ greatly from the rest of the set of estimates is considered. Ratios for the weighting factors of individual expert assessments are offered, taking into account the extent to which each expert&#39;s evaluation deviates from the resulting valuation obtained from them. For the problem of estimating the degree of importance of the different computer components in ensuring the security of data processed on a personal computer, a list of five possible expert data processing methods is formed, and an expert evaluation of the level of the components’ importance is carried out on the basis of linguistic variables. The expert estimations are processed by all the presented methods. The results of the evaluation allowed the most effective processing methods to be identified; namely, the median variant of the maximum likelihood method, which is based on a stochastic model of peer review, and the method proposed in the paper, which takes into account the deviations of specific evaluations from the resulting values.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_5-Comparative_Analysis_of_Various_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mode-Scheduling Steering Law of VSCMGs for Multi-Target Pointing and Agile Maneuver of a Spacecraft</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080504</link>
        <id>10.14569/IJACSA.2017.080504</id>
        <doi>10.14569/IJACSA.2017.080504</doi>
        <lastModDate>2017-05-31T13:48:21.8770000+00:00</lastModDate>
        
        <creator>Yasuyuki Nanamori</creator>
        
        <creator>Masaki Takahashi</creator>
        
        <subject>Variable-Speed Control Moment Gyros; Attitude Control; Singularity; Steering Law; Spacecraft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>This study proposes a method of selecting a set of gimbal angles in the final state and applies the method to the mode-scheduling steering law of variable-speed control moment gyros intended for multi-target pointing manoeuvres in the three-axis attitude control of a spacecraft. The proposed method selects reference final gimbal angles, considering the condition numbers of the Jacobian matrix of the reaction wheel mode in the final state of a single manoeuvre and that of the constant-speed control moment gyro mode at the start of the upcoming manoeuvre to keep away from the singularities. To improve the reachability of reference final gimbal angles, the nearest set of gimbal angles among nominated sets according to the Euclidean norm is selected as the reference final set at the middle of the single manoeuvre, and then realised by adopting gimbal angle feedback steering logic using null motion. In addition, the manoeuvre profile is designed such that the second half of the single manoeuvre is more gradual and takes longer than the first. Numerical simulation confirms the validity of the proposed method in consecutive manoeuvres.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_4-Mode-Scheduling_Steering_Law_of_VSCMGs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Novelty of A-Web based Adaptive Data-Driven Networks (DDN) Management &amp; Cooperative Communities on the Internet Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080503</link>
        <id>10.14569/IJACSA.2017.080503</id>
        <doi>10.14569/IJACSA.2017.080503</doi>
        <lastModDate>2017-05-31T13:48:21.8470000+00:00</lastModDate>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>MingChu Li</creator>
        
        <creator>Arsalan Ali Shaikh</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <subject>Adaptive Data-driven Management; A-Web Editor; Community Graphs; Internet Technology; Logically Connection Links; Vertical &amp; Horizontal Networks Communities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Nowadays, adaptive data science for data-driven properties on the Internet is generally envisioned through integrated web entity maintenance, in which several clients can collaborate with a web server and pool all data resources. However, the ideal client/server model suffers from the drawback of concentrating all data in a single central area. The proposed method of Internet cooperative communities is instead a graph data structure of vertical and horizontal entities sharing a mutual concern or field of reference. Within the computer network, the cooperative neighbourhood segment builds logical graph-structured connection links that extend the network structure for searching cooperative community nodes on the Internet. The time for generating a global cooperative community structure can be improved and adjusted, which qualifies the tool for dynamic, state-of-the-art algorithms and their use in practice. In this way, our techniques can efficiently select the classified structure of A-Web communities, and users’ preferred web data services can be recovered by choosing A-Web communities according to the classified structure and distributed systems ranked by influence. Finally, this is implemented in the novel A-Web-based adaptive data-driven network management structure. As part of the contribution, this system provides a step toward decentralised networking libraries; in other words, the project connects to the free net and helps in searching the millions of scientific research volumes that are published globally on the Internet. The system will also connect other files, documents, or info-resources on A-Web, and the middleware underlying the fundamental concepts of A-Web is described in passing.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_3-The_Novelty_of_A-Web_based_Adaptive_Data-Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Service for Predictive Learning based on the Professional Social Networking Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080502</link>
        <id>10.14569/IJACSA.2017.080502</id>
        <doi>10.14569/IJACSA.2017.080502</doi>
        <lastModDate>2017-05-31T13:48:21.8300000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Dmitry Ilin</creator>
        
        <creator>Gregory Bubnov</creator>
        
        <creator>Egor Mateshuk</creator>
        
        <subject>Online social networks; Social networking sites; Technology life cycle; Predictive learning; Patent activity analysis, Professional skills; Topic detection; LinkedIn; ResearchGate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>Professional social networking sites are widely used as a tool for obtaining specific information such as technology trends and professional skills demand. The article considers the evolution of services for professional communities through the integration of analyses of patent activity, academic research activity, and labour market trends. The authors have developed a prototype of a predictive learning software service intended to fill the gap between professional social networking sites and e-learning systems, including massive open online course systems. It includes functionality for monitoring the demand for professional skills on the labour market and analysing patents for each corresponding technology. The software service will help to determine demand for professional skills, to update an applicant’s skillset, to organise professional communities, and to build individual learning programs for studying the skills and technologies that are predicted to grow in demand on the labour market.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_2-Scalable_Service_for_Predictive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking XP Prioritization Methods based on the ANP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080501</link>
        <id>10.14569/IJACSA.2017.080501</id>
        <doi>10.14569/IJACSA.2017.080501</doi>
        <lastModDate>2017-05-31T13:48:21.7530000+00:00</lastModDate>
        
        <creator>Abdulmajeed Aljuhani</creator>
        
        <creator>Luigi Benedicenti</creator>
        
        <creator>Sultan Alshehri</creator>
        
        <subject>analytic network process; extreme programming; planning game; prioritization techniques; user stories</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(5), 2017</description>
        <description>The analytic network process (ANP) is considered one of the most powerful tools to facilitate decision-making in complex environments. The ANP allows decision makers to structure their problems mathematically using a series of simple binary comparisons. Research suggests that ANP can be useful in software development, where complicated decisions are routinely made. Industrial adoption of ANP, however, is virtually nonexistent because of its perceived complexity. We believe that ANP can be very beneficial in industry as it resolves conflicts in a mutually acceptable manner. We propose a protocol for its adoption by means of a case study that aims to explain a ranking method to assist an XP team in selecting the best prioritization method for ranking the user stories. The protocol was tested in a professional course environment.</description>
        <description>http://thesai.org/Downloads/Volume8No5/Paper_1-Ranking_XP_Prioritization_Methods_based_on_the_ANP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Error Floor Concatenated LDPC for MIMO Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080475</link>
        <id>10.14569/IJACSA.2017.080475</id>
        <doi>10.14569/IJACSA.2017.080475</doi>
        <lastModDate>2017-05-01T14:17:39.3100000+00:00</lastModDate>
        
        <creator>Lamia Berriche</creator>
        
        <creator>Areej Al Qahtani</creator>
        
        <subject>LDPC; error floor; MIMO; error control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Multiple-Input and Multiple-Output (MIMO) is the use of multiple antennas at both the transmitter and receiver to improve communication performance. MIMO technology has attracted attention in wireless communications because it offers significant increases in data throughput and spectral efficiency without additional bandwidth or increased transmit power. To achieve the above-mentioned performance, Bit Error Rates (BER) should be low, and for this reason efficient encoding and decoding algorithms should be used. MIMO systems rely on error-control coding to ensure reliable communication in the presence of noise. Forward Error Correction (FEC) codes such as convolutional and block codes have been investigated for MIMO systems. Low Density Parity Check (LDPC) codes show good performance, except that an error floor may appear at high Signal to Noise Ratio (SNR). In this work we propose a concatenated error control code that reduces the error floor of LDPC codes suffering from this problem. The proposed scheme is a good candidate for high-rate real-time communication since it reduces the decoding latency as well.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_75-Low_Error_Floor_Concatenated_LDPC_for_MIMO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Address Spoofing Attacks in Hybrid SDN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080474</link>
        <id>10.14569/IJACSA.2017.080474</id>
        <doi>10.14569/IJACSA.2017.080474</doi>
        <lastModDate>2017-05-01T14:17:39.2970000+00:00</lastModDate>
        
        <creator>Fahad Ubaid</creator>
        
        <creator>Rashid Amin</creator>
        
        <creator>Faisal Bin Ubaid</creator>
        
        <creator>Muhammad Muwar Iqbal</creator>
        
        <subject>Communication system security; Network Security; ARP Spoofing Introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Address spoofing attacks such as ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade its performance. These attacks sometimes break down network services before the administrator becomes aware of the attack. Software Defined Networking (SDN) has emerged as a novel network architecture in which the data plane is isolated from the control plane, and the control plane is implemented at a central device called the controller. However, the SDN paradigm is not commonly used due to constraints such as budget, limited skills for operating SDN, and the flexibility of traditional protocols. To get SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices; this technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect an attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph, and a graph-based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances network efficiency and improves network security.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_74-Mitigating_Address_Spoofing_Attacks_in_Hybrid_SDN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Routing Protocol in Mobile Ad-hoc Networks by using Artificial Immune System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080473</link>
        <id>10.14569/IJACSA.2017.080473</id>
        <doi>10.14569/IJACSA.2017.080473</doi>
        <lastModDate>2017-05-01T14:17:39.2630000+00:00</lastModDate>
        
        <creator>Fatemeh Sarkohaki</creator>
        
        <creator>Reza Fotohi</creator>
        
        <creator>Vahab Ashrafian</creator>
        
        <subject>AIS-OLSR; Routing protocol; Mobile ad hoc network; AIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Characteristics of mobile ad-hoc networks such as the high mobility and limited energy of nodes are regarded as the routing challenges in these networks. The OLSR protocol is one of the routing protocols in mobile ad hoc networks; it selects the shortest route between source and destination using Dijkstra&#39;s algorithm. However, OLSR suffers from a major problem: it does not consider parameters such as the nodes’ energy levels and link lengths in its route processing. This paper employs the artificial immune system (AIS) to enhance the efficiency of the OLSR routing protocol. The proposed algorithm, called AIS-OLSR, considers hop count, remaining energy in the intermediate nodes, and distance among nodes, which is realized by the negative selection and ClonalG algorithms of AIS. Extensive packet-level simulation in the ns-2 environment shows that AIS-OLSR outperforms OLSR and EA-OLSR in terms of packet delivery ratio, throughput, end-to-end delay, and lifetime.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_73-An_Efficient_Routing_Protocol_in_Mobile_Ad-hoc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Routing Performances to Provide Internet Connectivity in VANETs over IEEE 802.11p</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080472</link>
        <id>10.14569/IJACSA.2017.080472</id>
        <doi>10.14569/IJACSA.2017.080472</doi>
        <lastModDate>2017-05-01T14:17:39.2030000+00:00</lastModDate>
        
        <creator>Driss ABADA</creator>
        
        <creator>Abdellah MASSAQ</creator>
        
        <creator>Abdellah BOULOUZ</creator>
        
        <subject>VANET; routing; link stability; fading; RSS; mobility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In intelligent transportation systems, many applications and services could be offered on the road via the Internet. Providing these applications over vehicular ad hoc network (VANET) technology may require good routing performance. Channel fading and the quality of the received signal, as well as vehicle mobility, are the main factors that affect mobile ad hoc network performance in terms of throughput and packet delay, which are relevant to the performance evaluation of routing protocols. In this paper, we propose an efficient relay selection scheme based on Contention Based Forwarding (CBF) and a Fuzzy Logic System (FLS) that considers two important Quality of Service parameters, link stability and quality of the received signal, to select a potential relay vehicle in order to improve routing performance in the network. The simulation results show that the proposed relay selection scheme enhances throughput and decreases packet delay and overhead compared with an existing link-stability-based routing protocol (MBRP) and M-AODV+.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_72-Improving_Routing_Performances_to_Provide_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVM based Emotional Speaker Recognition using MFCC-SDC Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080471</link>
        <id>10.14569/IJACSA.2017.080471</id>
        <doi>10.14569/IJACSA.2017.080471</doi>
        <lastModDate>2017-04-29T12:53:35.7070000+00:00</lastModDate>
        
        <creator>Asma Mansour</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Emotion; Speaker recognition; Mel Frequency Cepstral Coefficients (MFCC); Shifted-Delta-Cepstral (SDC); SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Enhancing the performance of the emotional speaker recognition process has witnessed increasing interest in recent years. This paper highlights a methodology for speaker recognition under different emotional states based on the multiclass Support Vector Machine (SVM) classifier. We compare two feature extraction methods used to represent emotional speech utterances in order to obtain the best accuracies: the traditional Mel-Frequency Cepstral Coefficients (MFCC), and MFCC combined with Shifted-Delta-Cepstra (MFCC-SDC). Experiments are conducted on the IEMOCAP database using two multiclass SVM approaches: One-Against-One (OAO) and One-Against-All (OAA). The obtained results show that MFCC-SDC features outperform the conventional MFCC.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_71-SVM_based_Emotional_Speaker_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy and Security Mechanisms for eHealth Monitoring Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080470</link>
        <id>10.14569/IJACSA.2017.080470</id>
        <doi>10.14569/IJACSA.2017.080470</doi>
        <lastModDate>2017-04-29T12:53:35.6770000+00:00</lastModDate>
        
        <creator>M. Ajmal Sawand</creator>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <subject>Wireless body area network; e-healthcare; mobile crowd sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The rapid scientific and technological merging between the Internet of Things (IoT), cloud computing, and wireless body area networks (WBANs) has significantly contributed to the advent of e-healthcare, and the quality of medical care has improved as a result. Specifically, patient-centric health monitoring plays an important role in e-healthcare facilities by providing important assistance in different areas, including medical data collection and aggregation, data transmission, data processing, and data query. This paper proposes an architectural framework that describes the complete monitoring life cycle and indicates the important service modules. More meticulous discussion is then devoted to data gathering at the patient side, which serves as an essential basis for achieving efficient, robust, and protected patient health monitoring. Different design challenges in developing high-quality and protected patient-centric monitoring systems are also analyzed, along with their potential solutions.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_70-Privacy_and_Security_Mechanisms_for_eHealth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Anti-Collision Algorithms for RFID System with Different Delay Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080469</link>
        <id>10.14569/IJACSA.2017.080469</id>
        <doi>10.14569/IJACSA.2017.080469</doi>
        <lastModDate>2017-04-29T12:53:35.6470000+00:00</lastModDate>
        
        <creator>Warakorn Srichavengsup</creator>
        
        <subject>RFID; Anti-collision; Q algorithm; Priority</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The main purpose of Radio-frequency identification (RFID) implementation is to keep track of tagged items. The basic components of an RFID system are tags and readers. Tags communicate with the reader through a shared wireless channel, and a tag collision occurs when more than one tag attempts to communicate with the reader simultaneously. The second-generation UHF Electronic Product Code (EPC Gen 2) standard therefore uses the Q algorithm to deal with the collision problem. In this paper, we introduce three new anti-collision algorithms to handle multiple priority classes of tags, namely the DC, DQ, and DCQ algorithms. The goal is to achieve high system performance and enable each priority class to meet its delay requirement. The simulation results reveal that the DCQ algorithm is more effective than the DC and DQ algorithms, as it is designed to flexibly control and adjust system parameters to obtain the desired delay differentiation level. Finally, it can be concluded that the proposed DCQ algorithm can control the delay differentiation level and yet maintain high system performance.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_69-Performance_Evaluation_of_Anti-Collision_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PaMSA: A Parallel Algorithm for the Global Alignment of Multiple Protein Sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080468</link>
        <id>10.14569/IJACSA.2017.080468</id>
        <doi>10.14569/IJACSA.2017.080468</doi>
        <lastModDate>2017-04-29T12:53:35.6130000+00:00</lastModDate>
        
        <creator>Irma R. Andalon-Garcia</creator>
        
        <creator>Arturo Chavoya</creator>
        
        <subject>Multiple Sequence Alignment; parallel programming; Message Passing Interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Multiple sequence alignment (MSA) is a well-known problem in bioinformatics whose main goal is the identification of evolutionary, structural or functional similarities in a set of three or more related genes or proteins. We present a parallel approach for the global alignment of multiple protein sequences that combines dynamic programming, heuristics, and parallel programming techniques in an iterative process. In the proposed algorithm, the longest common subsequence technique is used to generate a first MSA by aligning identical residues. An iterative process then improves the MSA by applying a number of operators defined in the present work, in order to produce more accurate alignments. The accuracy of the alignment is evaluated through the application of optimization functions. In the proposed algorithm, a number of processes work independently at the same time searching for the best MSA of a set of sequences; one process acts as a coordinator, whereas the rest are slave processes. The resulting algorithm is called PaMSA, which stands for Parallel MSA. The MSA accuracy and response time of PaMSA were compared against those of Clustal W, T-Coffee, MUSCLE, and Parallel T-Coffee on 40 datasets of protein sequences. When run as a sequential application, PaMSA turned out to be the second fastest when compared against the nonparallel MSA methods tested (Clustal W, T-Coffee, and MUSCLE). However, PaMSA was designed to be executed in parallel, and when run as a parallel application it presented better response times than Parallel T-Coffee under the conditions tested. Furthermore, the sum-of-pairs scores achieved by PaMSA when aligning groups of sequences with an identity percentage from approximately 70% to 100% were the highest in all cases. PaMSA was implemented on a cluster platform in C++ using the standard Message Passing Interface (MPI) library.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_68-PaMSA_A_Parallel_Algorithm_for_the_Global.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Reputation Model Using Moving Window</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080467</link>
        <id>10.14569/IJACSA.2017.080467</id>
        <doi>10.14569/IJACSA.2017.080467</doi>
        <lastModDate>2017-04-29T12:53:35.6000000+00:00</lastModDate>
        
        <creator>Mohammad Azzeh</creator>
        
        <subject>Reputation Model; Moving Window; Ratings Aggregation Method; E-Commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Users are increasingly dependent on decision tools to facilitate their transactions on the Internet. Reputation models offer a solution by supporting users in their purchase decisions. A reputation model takes product ratings as input and produces a product quality score. Most existing reputation models use the na&#239;ve average method or a weighted average method to aggregate ratings. The na&#239;ve average method is unstable when there exists a clear trend in the ratings sequence, and weighted methods are influenced by unfair and malicious ratings. This paper introduces a new, simple reputation model that aggregates ratings based on the concept of a moving window. This approach enables us to study the variability of ratings over time, which allows us to investigate the trend of ratings and account for sudden changes in that trend. The window size can be defined by either a number of ratings or a duration. The proposed model has been validated against state-of-the-art reputation models using Mean Absolute Error and Kendall tau correlation.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_67-Online_Reputation_Model_Using_Moving_Window.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image Retrieval based on the Parallelization of the Cluster Sampling Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080466</link>
        <id>10.14569/IJACSA.2017.080466</id>
        <doi>10.14569/IJACSA.2017.080466</doi>
        <lastModDate>2017-04-29T12:53:35.5670000+00:00</lastModDate>
        
        <creator>Hesham Arafat Ali</creator>
        
        <creator>Salah Attiya</creator>
        
        <creator>Ibrahim El-henawy</creator>
        
        <subject>Bayes’ theorem; Hamiltonian Monte-Carlo; Inverse problems; Markov chain Monte-Carlo; Medical image reconstruction; Parallel programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The cluster sampling algorithm is a scheme for sequential data assimilation developed to handle general non-Gaussian and nonlinear settings. It can be used to solve a wide spectrum of problems that require data inversion, such as image retrieval, tomography and weather prediction, amongst others. This paper develops parallel cluster sampling algorithms and shows that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval, amongst other applications. Moreover, it presents a detailed complexity analysis of the proposed parallel cluster sampling schemes and discusses their limitations. Numerical experiments are carried out using a synthetic one-dimensional example and a medical image retrieval problem. The experimental results show the accuracy of the cluster sampling algorithm in retrieving the original image from noisy measurements and uncertain priors. Specifically, the proposed parallel algorithm increases the acceptance rate of the sampler from 45% to 81% with a Gaussian proposal kernel, and achieves an improvement of 29% over the optimally-tuned Tikhonov-based solution for image retrieval. The parallel nature of the proposed algorithm makes it a strong candidate for practical and large-scale applications.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_66-Medical_Image_Retrieval_based_on_the_Parallelization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Large Scale Graph Matching (LSGM): Techniques, Tools, Applications and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080465</link>
        <id>10.14569/IJACSA.2017.080465</id>
        <doi>10.14569/IJACSA.2017.080465</doi>
        <lastModDate>2017-04-29T12:53:35.5530000+00:00</lastModDate>
        
        <creator>Azka Mahmood</creator>
        
        <creator>Hina Farooq</creator>
        
        <creator>Javed Ferzund</creator>
        
        <subject>Big Data; Graph Matching; Graph Isomorphism; Graph Analytics; Data Models; Large Scale Graphs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Large Scale Graph Matching (LSGM) is one of the fundamental problems in graph theory, with applications in many areas such as Computer Vision, Machine Learning, Pattern Recognition and Big Data Analytics (Data Science). Matching belongs to the combinatorial class of problems and refers to finding correspondences between the nodes of a graph or among a set of graphs (subgraphs), either precisely or approximately. Precise matching is also known as Exact Matching, such as (sub)Graph Isomorphism, while approximate matching is called Inexact Matching, in which the matching activity concerns conceptual/semantic matching rather than focusing on the structural details of graphs. In this article, a review of the matching problem is presented, i.e., Semantic Matching (conceptual), Syntactic Matching (structural) and Schematic Matching (schema based). The aim is to present the current state of the art in Large Scale Graph Matching (LSGM): a systematic review of algorithms, tools and techniques, along with the existing challenges of LSGM. Moreover, the potential application domains and related research activities are provided.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_65-Large_Scale_Graph_Matching_LSGM_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification and Nonlinear PID Control of Hammerstein Model using Polynomial Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080464</link>
        <id>10.14569/IJACSA.2017.080464</id>
        <doi>10.14569/IJACSA.2017.080464</doi>
        <lastModDate>2017-04-29T12:53:35.5200000+00:00</lastModDate>
        
        <creator>Zeineb RAYOUF</creator>
        
        <creator>Chekib GHORBEL</creator>
        
        <creator>Naceur BENHADJ BRAIEK</creator>
        
        <subject>Parametric identification; Hammerstein model; RLS algorithm; Polynomial structure; Nonlinear PID controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In this paper, a new nonlinear discrete-time PID controller is proposed to control the Hammerstein model. This model is composed of a static nonlinearity gain associated with a linear dynamic sub-system. Nonlinear polynomial structures are used to identify and to control this class of systems. The determination of parameters is based on the RLS algorithm. A coupled two-tank process is given to illustrate the effectiveness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_64-Identification_and_Nonlinear_PID_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Programming Inspired Genetic Programming to Solve Regression Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080463</link>
        <id>10.14569/IJACSA.2017.080463</id>
        <doi>10.14569/IJACSA.2017.080463</doi>
        <lastModDate>2017-04-29T12:53:35.5030000+00:00</lastModDate>
        
        <creator>Asim Darwaish</creator>
        
        <creator>Hammad Majeed</creator>
        
        <creator>M. Quamber Ali</creator>
        
        <creator>Abdul Rafay</creator>
        
        <subject>Genetic Programming; Evolutionary Computing; Machine Learning; Fitness Landscape; Semantic GP; Symbolic Regression and Dynamic Decomposition of GP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The candidate solution in traditional Genetic Programming (GP) is evolved through a prescribed number of generations using a fitness measure. It has been observed that the improvement of GP on different problems is insignificant at later generations. Furthermore, GP struggles to evolve on some symbolic regression problems due to high selective pressure, where the input range is very small and few generations are allowed. In such scenarios GP stagnates and cannot evolve a desired solution. Recent works address these issues by using a single run to reduce the residual error, based on the semantic concept. A new approach is proposed, called Dynamic Decomposition of Genetic Programming (DDGP), inspired by dynamic programming. DDGP decomposes a problem into sub-problems and initiates sub-runs in order to find sub-solutions. The algebraic sum of all the sub-solutions merges into an overall solution, which provides the desired solution. Experiments conducted on well-known benchmarks with varying complexities validate the proposed approach, as the empirical results of DDGP are far superior to those of standard GP. Moreover, statistical analysis has been conducted using the t-test, which showed a significant difference on eight datasets. DDGP is highly recommended for symbolic regression problems where other variants of GP stagnate and cannot evolve the required solution.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_63-Dynamic_Programming_Inspired_Genetic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DSP Real-Time Implementation of an Audio Compression Algorithm by using the Fast Hartley Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080462</link>
        <id>10.14569/IJACSA.2017.080462</id>
        <doi>10.14569/IJACSA.2017.080462</doi>
        <lastModDate>2017-04-29T12:53:35.4730000+00:00</lastModDate>
        
        <creator>Souha BOUSSELMI</creator>
        
        <creator>Noureddine ALOUI</creator>
        
        <creator>Adnen CHERIF</creator>
        
        <subject>Speech compression; Fast Hartley transform (FHT); Discrete Wavelet Transform (DWT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper presents a simulation and hardware implementation of a new audio compression scheme based on the fast Hartley transform in combination with a new modified run-length encoding. The proposed algorithm consists of analyzing signals with the fast Hartley transform, thresholding the obtained coefficients below a given threshold, and then encoding them using a new approach to run-length encoding. The thresholded coefficients are, finally, quantized and coded into a binary stream. The experimental results show the ability of the fast Hartley transform to compress audio signals: it concentrates the signal energy in a few coefficients, and the new run-length encoding approach increases the compression factor. The results of the current work are compared with wavelet-based compression using objective assessments, namely CR, SNR, PSNR and NRMSE. This study shows that the fast Hartley transform is more appropriate than the wavelet transform since it offers a higher compression ratio and better speech quality. In addition, we have tested the audio compression system on the DSP processor TMS320C6416. This test shows that our system meets real-time requirements and ensures low complexity. The perceptual quality is evaluated with the Mean Opinion Score (MOS).</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_62-DSP_Real-Time_Implementation_of_an_Audio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DoS Detection Method based on Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080461</link>
        <id>10.14569/IJACSA.2017.080461</id>
        <doi>10.14569/IJACSA.2017.080461</doi>
        <lastModDate>2017-04-29T12:53:35.4430000+00:00</lastModDate>
        
        <creator>Mohamed Idhammad</creator>
        
        <creator>Karim Afdel</creator>
        
        <creator>Mustapha Belouch</creator>
        
        <subject>DoS detection; Artificial Neural Networks; Feed-forward Neural Networks; Network traffic classification; Feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>DoS attack tools have become increasingly sophisticated, challenging existing detection systems to continually improve their performance. In this paper we present a victim-end DoS detection method based on Artificial Neural Networks (ANN). In the proposed method a Feed-forward Neural Network (FNN) is optimized to accurately detect DoS attacks with minimum resource usage. The proposed method consists of the following three major steps: (1) collection of the incoming network traffic, (2) selection of relevant features for DoS detection using an unsupervised Correlation-based Feature Selection (CFS) method, and (3) classification of the incoming network traffic into DoS traffic or normal traffic. Various experiments were conducted to evaluate the performance of the proposed method using two public datasets, namely UNSW-NB15 and NSL-KDD. The obtained results are satisfactory when compared to state-of-the-art DoS detection methods.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_61-DoS_Detection_Method_based_on_Artificial_Neural.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Two Phase Hybrid Classifier based on Structure Similarities and Textural Features for Accurate Meningioma Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080460</link>
        <id>10.14569/IJACSA.2017.080460</id>
        <doi>10.14569/IJACSA.2017.080460</doi>
        <lastModDate>2017-04-29T12:53:35.4270000+00:00</lastModDate>
        
        <creator>Kiran Fatima</creator>
        
        <creator>Hammad Majeed</creator>
        
        <subject>Meningioma; Computer-Aided Diagnosis; Brain Tumour Classification; Cell Segmentation; Shape Analysis; Texture Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Meningioma subtype classification is a complex pattern classification problem of digital pathology due to heterogeneity issues of tumor texture, low inter-class and high intra-class texture variations of tumor samples, and architectural variations of cellular components. The basic aim is the achievement of significantly high classification results for all the subtypes of meningioma while dealing with inherent complexity and texture variations. The ultimate goal is to mimic the prognosis decisions of expert pathologists and assist newer pathologists in making right and quick decisions. In this paper, a novel hybrid classification framework based on nuclei shape matching and texture analysis is proposed for the classification of four subtypes of grade-I benign meningioma. Meningothelial and fibroblastic subtypes are classified on the basis of nuclei shape matching through skeletons and shock graphs, while an optimized texture-based evolutionary framework is designed for the classification of transitional and psammomatous subtypes. Classifier-based evolutionary feature selection is performed using a Genetic Algorithm (GA) in combination with a Support Vector Machine (SVM) to select the optimal combination of higher-order statistical features extracted from morphologically processed RGB color channel images. The proposed hybrid classifier employed leave-one-patient-out 5-fold cross validation and achieved an overall 95.63% mean classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_60-A_Two_Phase_Hybrid_Classifier_based_on_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Lexicon-based Approach to Build Service Provider Reputation from Arabic Tweets in Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080459</link>
        <id>10.14569/IJACSA.2017.080459</id>
        <doi>10.14569/IJACSA.2017.080459</doi>
        <lastModDate>2017-04-29T12:53:35.3970000+00:00</lastModDate>
        
        <creator>Haifa Al-Hussaini</creator>
        
        <creator>Hmood Al-Dossari</creator>
        
        <subject>Reputation; Sentiment Analysis; Arabic Language; Saudi Dialect; Social Media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Nowadays, social media has become a popular communication tool among Internet users. Many users share opinions and experiences on different service providers every day through social media platforms. Thus, these platforms become valuable sources of data which can be exploited and used efficiently to support decision-making. However, finding and monitoring customers’ opinions on social media is a difficult task due to the fast growth of the content. This work focuses on using Twitter for the task of building service providers’ reputation. In particular, a service provider’s reputation is calculated from Saudi tweets collected from Twitter. To do so, a Saudi dialect lexicon has been developed as a basic component for sentiment polarity, classifying words extracted from Twitter as either positive or negative. Then, beta probability density functions have been used to combine feedback from the lexicon and derive reputation scores. Experimental evaluations show that the results of the proposed approach were consistent with those of Qaym, a website that calculates restaurants’ rankings based on consumer ratings and comments.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_59-A_Lexicon-based_Approach_to_Build_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework to Reason about the Knowledge of Agents in Continuous Dynamic Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080458</link>
        <id>10.14569/IJACSA.2017.080458</id>
        <doi>10.14569/IJACSA.2017.080458</doi>
        <lastModDate>2017-04-29T12:53:35.3630000+00:00</lastModDate>
        
        <creator>Ammar Mohammed</creator>
        
        <creator>Ahmed M. Elmogy</creator>
        
        <subject>Epistemic logic; Reasoning; Hybrid Automata; Agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Applying formal methods to a group of agents provides a precise and unambiguous definition of their behaviors, as well as a means to verify properties of agents against implementations. The hybrid automaton is one of the formal approaches used by several works to model a group of agents. Several logics have been proposed, as extensions of temporal logic, to specify and hence verify the quantitative and qualitative properties of systems modeled by hybrid automata. However, when it comes to agents, one needs to reason about the knowledge of other agents participating in the model. For this purpose, epistemic logic can be used to specify and reason about the knowledge of agents, but this logic assumes that the model of time is discrete. This paper proposes a novel framework that formally specifies and verifies the epistemic behaviors of agents within continuous dynamics. To do so, the paper first extends the hybrid automaton with knowledge. Second, the paper proposes a new logic that extends epistemic logic with quantitative real-time requirements. Finally, the paper shows how to specify several properties that can be verified within our framework.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_58-A_Framework_to_Reason_about_the_Knowledge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Recent Study on Routing Protocols in UWSNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080457</link>
        <id>10.14569/IJACSA.2017.080457</id>
        <doi>10.14569/IJACSA.2017.080457</doi>
        <lastModDate>2017-04-29T12:53:35.3630000+00:00</lastModDate>
        
        <creator>Muhammad Ahsan</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Adil khan</creator>
        
        <creator>Mukhtaj khan</creator>
        
        <creator>Fazle Hadi</creator>
        
        <creator>Fazal Wahab</creator>
        
        <creator>Imran Ahmed</creator>
        
        <subject>UWSN; routing protocol; relay node; sink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Recent research has seen remarkable advancement in the field of Under Water Sensor Networks (UWSNs). Many different protocols have been developed in this domain in recent years. As these protocols can be categorized in a variety of ways according to the mechanisms and functionalities they follow, it is important to understand their principal workings. In this research we introduce three analysis categories, namely clustering-based, localization-based and cooperation-based routing, select some recent routing protocols in the field of UWSN, and present a comparative analysis according to the categories in which they lie. This research is theoretical and qualitative in nature. A detailed analysis of the key advantages and flaws of these protocols is also provided.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_57-A_Recent_Study_on_Routing_Protocols_in_UWSNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying and Segmenting Classical and Modern Standard Arabic using Minimum Cross-Entropy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080456</link>
        <id>10.14569/IJACSA.2017.080456</id>
        <doi>10.14569/IJACSA.2017.080456</doi>
        <lastModDate>2017-04-29T12:53:35.3330000+00:00</lastModDate>
        
        <creator>Ibrahim S Alkhazi</creator>
        
        <creator>William J. Teahan</creator>
        
        <subject>text classification; Arabic language; Classical Arabic; Modern Standard Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Text classification is the process of assigning a text or a document to various predefined classes or categories to reflect their contents. With the rapid growth of Arabic text on the Web, studies that address the problems of classification and segmentation of the Arabic language are limited compared to other languages, most of which implement word-based and feature extraction algorithms. This paper adopts a PPM character-based compression scheme to classify and segment Classical Arabic (CA) and Modern Standard Arabic (MSA) texts. An initial experiment using the PPM classification method on samples of text resulted in an accuracy of 95.5%, an average precision of 0.958, an average recall of 0.955 and an average F-measure of 0.954, using the concept of minimum cross-entropy. PPM-based classification experiments on standard Arabic corpora showed that they contained different types of text (CA or MSA), or a mixture of both (CA and MSA). Further experiments with the same corpora showed that a more accurate picture of the contents of the corpora was possible using the PPM-based segmentation method. Tag-based compression experiments (using tags produced by parts-of-speech Arabic taggers) also showed that the quality of the tagging (as measured by compression quality) is significantly affected when tagging either CA or MSA text. The conclusion is that NLP applications (such as taggers) should treat these texts separately and use different training data for each, or process them differently.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_56-Classifying_and_Segmenting_Classical_and_Modern_Standard_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact and Challenges of Requirement Engineering in Agile Methodologies: A Systematic Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080455</link>
        <id>10.14569/IJACSA.2017.080455</id>
        <doi>10.14569/IJACSA.2017.080455</doi>
        <lastModDate>2017-04-29T12:53:35.3170000+00:00</lastModDate>
        
        <creator>Sehrish Alam</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>S. Asim Ali Shah</creator>
        
        <creator>Dr. Amr Mohsen Jadi</creator>
        
        <subject>Requirement Engineering; Traditional approaches; Agile methodologies; Challenges in RE; Requirement prioritization; Nonfunctional Requirements; Dynamic system development method; Scrum; Extreme programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Requirement Engineering is one of the important stages in the development life cycle: all requirements needed for the development of a product are collected in this phase. A high-standard product can be developed with agile methodology in less budget and time. The importance of agile practices has grown since they foster cooperation in software engineering. Being a basic phase of software engineering, requirement engineering comprises different processes. Direct communication is one element of agile that distinguishes it from conventional, traditional approaches. Although a lot of research has been done on agile practices and the role of requirements in agile methodologies, there is still a need for studies on change management, requirement prioritization, prototyping and nonfunctional requirements in agile methodologies. The aim of this review paper is to present the limitations of requirement engineering phases in agile practices, and the issues and challenges that practitioners face when implementing agile practices. Many research studies from different sources have been reviewed on the basis of inclusion and exclusion criteria. Most RE activities have been discussed in the review, and evidence helps to show how the RE process is performed in Scrum. Most research has been conducted on general agile methodologies; few authors address RE practices in other agile methodologies. The findings of this research will be beneficial for those interested in identifying research directions in this field, because many agile techniques (extreme programming, crystal methodology, lean) require further study and practical results, as clarified by the reviewed studies.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_55-Impact_and_Challenges_of_Requirement_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Design and Development of Spam Risk Assessment Prototype: In Silico of Danger Theory Variants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080454</link>
        <id>10.14569/IJACSA.2017.080454</id>
        <doi>10.14569/IJACSA.2017.080454</doi>
        <lastModDate>2017-04-29T12:53:35.2870000+00:00</lastModDate>
        
        <creator>Kamahazira Zainal</creator>
        
        <creator>Mohd Zalisham Jali</creator>
        
        <subject>Danger Theory Variants; Text Spam Messages; Severity Assessment; Text Mining; Information Retrieval; Knowledge Discovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Nowadays, data carrying various types of information flows in absolutely enormous volumes and, moreover, in unstructured form. These raw data are meaningless unless they are processed and analyzed to retrieve all the valuable and meaningful information. In this paper, the design and principal functionalities of a system prototype are introduced. A process of information retrieval that applies text mining with an Artificial Immune System (AIS) is proposed to discover the possible level of severity of a Short Messaging Service (SMS) spam message. This is expected to be a useful tool for retrieving the implicit danger that a spam message might pose to its recipients. Furthermore, the developed tool can be considered another data mining tool, and it could readily be embedded within other existing tools.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_54-The_Design_and_Development_of_Spam_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Quality Model for Agile Development: Extreme Programming (XP) as a Case Scenario</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080453</link>
        <id>10.14569/IJACSA.2017.080453</id>
        <doi>10.14569/IJACSA.2017.080453</doi>
        <lastModDate>2017-04-29T12:53:35.2570000+00:00</lastModDate>
        
        <creator>Atika Tabassum</creator>
        
        <creator>Iqra Manzoor</creator>
        
        <creator>Dr. Shahid Nazir Bhatti</creator>
        
        <creator>Aneesa Rida Asghar</creator>
        
        <creator>Dr. Imtiaz Alam</creator>
        
        <subject>Agile Software Engineering (ASE); Agile Software Development (ASD); Extreme Programming (XP); ISO; ISO 9126; ISO 25000</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Quality is a complex taxonomy: it cannot be weighed or measured directly, but it can be felt, discussed and judged. Early assessment and verification of functional attributes (requirements) are well supported by renowned standards, while nonfunctional attributes (requirements) are not. Agile software development methodologies are highly regarded as among the most popular and effective approaches to the development of software systems.
Early requirements verification methodologies in Agile Software Engineering are well studied, and research achievements have mainly addressed functional requirements. To bring quality into the design and hence the development process early, it is very important to consider nonfunctional requirements quality metrics (attributes). Comprehensive work has also been done to propose and validate (using iThink) different quality models that could ensure the quality of agile software products being developed; this is covered in detail in the literature review (Section II). Yet a generic, standard quality metrics model for agile software practices is still missing, and such a model is needed to ensure that the agile product being developed will achieve the quality characteristics decided by the stakeholders as well as the quality standard being addressed. In this work we propose a quality metrics model that fulfills the desired quality attributes of the ISO/IEC quality standards (ISO 9126, ISO 25000) in early requirements; we validated it by performing simulations in iThink, ensuring that the quality of the item being produced meets the described criteria.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_53-Optimized_Quality_Model_for_Agile_Development_Extreme_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Story Point Estimation on Product using Metrics in Scrum Development Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080452</link>
        <id>10.14569/IJACSA.2017.080452</id>
        <doi>10.14569/IJACSA.2017.080452</doi>
        <lastModDate>2017-04-29T12:53:35.2400000+00:00</lastModDate>
        
        <creator>Ali Raza Ahmed</creator>
        
        <creator>Muhammad Tayyab</creator>
        
        <creator>Dr. Shahid Nazir Bhatti</creator>
        
        <creator>Dr. Abdullah J. Alzahrani</creator>
        
        <creator>Dr. Muhammad Imran Babar</creator>
        
        <subject>product backlog; sprint backlog; backlog Item; front end designer; product Owner; agile software development; scrum master; product owner; sprint planning; velocity chart; Agile methodology; Effort Estimation; Story Points Estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Agile software development techniques are accepted worldwide; whatever definition of agile one adopts, it is clear that agile is maturing day by day, and suppliers of software systems are moving away from traditional waterfall and other development practices in favor of agile methods. Agile comprises numerous methodologies and domains/methods, to be selected according to the situation and the demands of the current project. As a case scenario, the following research discusses Scrum as a development technique, focusing on effort estimation and its effects through distinct metrics. Estimation relates directly to the cost, time and complexity of the project life cycle. Metrics help teams better understand development progress and make building and releasing software easier, more fluent and more robust. The paper thus identifies aspects that development teams mainly ignore during estimation.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_52-Impact_of_Story_Point_Estimation_on_Product_using_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proactive Intention-based Safety through Human Location Anticipation in HRI Workspace</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080451</link>
        <id>10.14569/IJACSA.2017.080451</id>
        <doi>10.14569/IJACSA.2017.080451</doi>
        <lastModDate>2017-04-29T12:53:35.2100000+00:00</lastModDate>
        
        <creator>Muhammad Usman Ashraf</creator>
        
        <creator>Muhammad Awais</creator>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Ijaz Shoukat</creator>
        
        <creator>Muhammad Sher</creator>
        
        <subject>Intention Recognition; Human-Robot Interaction; Human-Robot Interaction Safety; Unsafe Zone; Workspace</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The safety involved in Human-Robot Interaction (HRI) is an important issue, and a key determinant of how much HRI activity takes place. A novel solution concerning HRI safety is proposed that considers near-future human intentions. A set of possible human intentions is known to the robot, which also knows the places the interacting human may visit given his current intention. The proposed solution enables the robot to avoid a potential collision by anticipating the future human location and dividing the workspace into safe and unsafe zones. The solution improves HRI safety measures, but further efforts are required to achieve an enhanced safety level.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_51-Proactive_Intention_based_Safety_through_Human_Location_Anticipation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modern Data Formats for Big Bioinformatics Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080450</link>
        <id>10.14569/IJACSA.2017.080450</id>
        <doi>10.14569/IJACSA.2017.080450</doi>
        <lastModDate>2017-04-29T12:53:35.1930000+00:00</lastModDate>
        
        <creator>Shahzad Ahmed</creator>
        
        <creator>M. Usman Ali</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Muhammad Atif Sarwar</creator>
        
        <creator>Abbas Rehman</creator>
        
        <creator>Atif Mehmood</creator>
        
        <subject>Big Data; Machine Learning; Hadoop; MapReduce; Spark; Bioinformatics; Microarray; Data Models; Data Formats; Classification; Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Next Generation Sequencing (NGS) technology has resulted in massive amounts of proteomics and genomics data. This data is of no use if it is not properly analyzed. ETL (Extraction, Transformation, Loading) is an important step in designing data analytics applications. ETL requires proper understanding of features of data. Data format plays a key role in understanding of data, representation of data, space required to store data, data I/O during processing of data, intermediate results of processing, in-memory analysis of data and overall time required to process data. Different data mining and machine learning algorithms require input data in specific types and formats. This paper explores the data formats used by different tools and algorithms and also presents modern data formats that are used on Big Data Platform. It will help researchers and developers in choosing appropriate data format to be used for a particular tool or algorithm.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_50-Modern_Data_Formats_for_Big_Bioinformatics_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simplex Parallelization in a Fully Hybrid Hardware Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080449</link>
        <id>10.14569/IJACSA.2017.080449</id>
        <doi>10.14569/IJACSA.2017.080449</doi>
        <lastModDate>2017-04-29T12:53:35.1630000+00:00</lastModDate>
        
        <creator>Basilis Mamalis</creator>
        
        <creator>Marios Perlitis</creator>
        
        <subject>Parallel Processing; Linear Programming; Simplex Algorithm; MPI; OpenMP; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The simplex method has been successfully used in solving linear programming (LP) problems for many years. Parallel approaches have also extensively been studied due to the intensive computations required, especially for the solution of large LP problems. Furthermore, the rapid proliferation of multicore CPU architectures as well as the computational power provided by the massive parallelism of modern GPUs have turned CPU / GPU collaboration models increasingly into focus over the last years for better performance. In this paper, a highly scalable implementation framework of the standard full tableau simplex method is first presented, over a hybrid parallel platform which consists of multiple multicore nodes interconnected via a high-speed communication network. The proposed approach is based on the combined use of MPI and OpenMP, adopting a suitable column-based distribution scheme for the simplex tableau. The parallelization framework is then extended in such a way that it can exploit concurrently the full power of the provided resources on a multicore single-node environment with a CUDA-enabled GPU (i.e. using the CPU cores and the GPU concurrently), based on a suitable hybrid multithreading/GPU offloading scheme with OpenMP and CUDA. The corresponding experimental results show that the hybrid MPI+OpenMP based parallelization scheme leads to particularly high speed-up and efficiency values, considerably better than in other competitive approaches, and scaling well even for very large / huge linear problems. Furthermore, the performance of the hybrid multithreading/GPU offloading scheme is clearly superior to both the OpenMP-only and the GPU-only based implementations in almost all cases, which validates the worth of using both resources concurrently. 
Most importantly, when used in combination with MPI in a multi-node (fully hybrid) environment, the scheme leads to substantial improvements in the speedup achieved for large and very large LP problems.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_49-Simplex_Parallelization_in_a_Fully_Hybrid_Hardware_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wi-Fi Redux: Never Trust Untrusted Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080448</link>
        <id>10.14569/IJACSA.2017.080448</id>
        <doi>10.14569/IJACSA.2017.080448</doi>
        <lastModDate>2017-04-29T12:53:35.1470000+00:00</lastModDate>
        
        <creator>Young B. Choi</creator>
        
        <creator>Kenneth P. LaCroix</creator>
        
        <subject>Wi-Fi; Untrusted Network; Pineapple; MITM; DNS Spoofing; Least Privilege</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This study analyzes the dangers posed to computer users&#39; information and equipment as they connect to untrusted networks, such as those found in coffee shops. Included in this study is a virtualized lab consisting of target and attacker nodes and a router to facilitate communication. Also included are a reverse-connection binary and a modified binary created to connect back to the attacker node while bypassing most anti-virus software.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_48-Wi_Fi_Redux_Never_Trust_Untrusted_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Specification of a Truck Geo-Location Big-Data Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080447</link>
        <id>10.14569/IJACSA.2017.080447</id>
        <doi>10.14569/IJACSA.2017.080447</doi>
        <lastModDate>2017-04-29T12:53:35.1300000+00:00</lastModDate>
        
        <creator>Ayman Naseem</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Malik Saad Missen</creator>
        
        <subject>Big-data; Formal methods; Correctness properties; Safety; Liveness; Internet-of-Things (IoT); MapReduce; Hadoop Distributed File System (HDFS); Finite State Processes (FSP); Labelled Transition System (LTS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In the last few years, social networks, e-commerce, mobile commerce and sensor networks have produced an exponential increase in data size. This data comes in all formats, i.e. structured, unstructured and semi-structured. Efficiently extracting useful information from these huge data sources is important, as this information can play a central role in future decisions and strategies. A truck geo-location big-data application integrated with a formal model is proposed. The truck geo-location data is unstructured, and it is accessed and manipulated by a Hadoop query engine. A labelled transition system based formal model of the application is proposed to ensure the safety and liveness properties of correctness.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_47-Formal_Specification_of_a_Truck_Geo_Location_Big_Data_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OTSA: Optimized Time Synchronization Approach for Delay-based Energy Efficient Routing in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080446</link>
        <id>10.14569/IJACSA.2017.080446</id>
        <doi>10.14569/IJACSA.2017.080446</doi>
        <lastModDate>2017-04-29T12:53:35.1130000+00:00</lastModDate>
        
        <creator>K. Nagarathna</creator>
        
        <creator>Jayashree D Mallapur</creator>
        
        <subject>Wireless Sensor Network; Routing; Time Synchronization; Optimization; Hardware Clock</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Time synchronization is a persistent yet still neglected problem in the area of wireless sensor networks (WSNs). A review of the existing literature shows that few studies jointly address energy conservation, clustering and routing together with minimizing time-synchronization errors in sensor networks. This manuscript therefore presents a delay-based routing scheme that considers propagation delay to formulate a delay-compensation mechanism for large-scale wireless sensor networks. The prime goal of this technique is to jointly address energy, time synchronization and routing in WSNs. The proposed approach was found to offer minimized communication overhead, minimized synchronization errors, lower energy consumption and reduced processing time when compared with existing standard time-synchronization techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_46-OTSA_Optimized_Time_Synchronization_Approach_for_Delay.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Rich Feature-based Kernel Approach for Drug- Drug Interaction Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080445</link>
        <id>10.14569/IJACSA.2017.080445</id>
        <doi>10.14569/IJACSA.2017.080445</doi>
        <lastModDate>2017-04-29T12:53:35.1000000+00:00</lastModDate>
        
        <creator>ANASS RAIHANI</creator>
        
        <creator>NABIL LAACHFOUBI</creator>
        
        <subject>Drug–drug interaction; Feature-based approach; Nonlinear kernel; Biomedical informatics; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Discovering drug-drug interactions (DDIs) is a crucial issue for both patient safety and health care cost control. Developing text mining techniques for identifying DDIs has attracted a great deal of attention in the last few years. Unfortunately, state-of-the-art results have not exceeded the threshold of a 0.7 F1 score, which calls for more effort. In this work, we propose a new feature-based kernel method to extract and classify DDIs. Our approach consists of two steps: identifying DDIs and assigning one of four DDI types to the predicted drug pairs. We demonstrate that, using new groups of features, non-linear kernels can achieve the best performance. When evaluated on the DDIExtraction 2013 challenge corpus, our system achieved an F1-score of 71.79%, compared to the 69.75% and 68.4% reported by the top two state-of-the-art systems.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_45-A_Rich_Feature-based_Kernel_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Utilization of Finite Elements Programs and Matlab Simulink in the Study of a Special Electrical Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080444</link>
        <id>10.14569/IJACSA.2017.080444</id>
        <doi>10.14569/IJACSA.2017.080444</doi>
        <lastModDate>2017-04-29T12:53:35.0670000+00:00</lastModDate>
        
        <creator>Olivian Chiver</creator>
        
        <creator>Liviu Neamt</creator>
        
        <creator>Oliviu Matei</creator>
        
        <creator>Zoltan Erdei</creator>
        
        <creator>Cristian Barz</creator>
        
        <subject>parameters; synchronous motor; single-phase; permanent magnet; finite elements programs; Matlab/Simulink</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper presents the study of a single-phase synchronous motor with permanent magnets (PM) using several computer programs. This low-power motor type is used especially in household applications. PM synchronous motors are known to have the great advantage of lacking rotor losses. For this motor, the starting problem has been solved by introducing a variable air gap under the pole shoes, which produces a starting torque because the axis of the rotor field created by the PM in the rest position differs from the axis of the stator. First, the parameters of the motor were determined by tests and finite element (FE) simulations, without knowing the properties of the PM. Magnetostatic FE simulations were performed first, followed by simulations in the magnetodynamic regime; the results obtained in the two regimes are close. Second, using the determined parameters, a Matlab Simulink model was built (this being the final goal) and the dynamic regime of the motor was studied. The results regarding the motor speed during the starting process and the current variation are also presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_44-Utilization_of_Finite_Elements_Programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Sensor Network Energy Efficiency with Fuzzy Improved Heuristic A-Star Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080443</link>
        <id>10.14569/IJACSA.2017.080443</id>
        <doi>10.14569/IJACSA.2017.080443</doi>
        <lastModDate>2017-04-29T12:53:35.0370000+00:00</lastModDate>
        
        <creator>Sigit Soijoyo</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>Improved Heuristic A-Star; Fuzzy Logic; Wireless Sensor Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Energy is a major factor in designing wireless sensor networks (WSNs). To extend the network lifetime, researchers should consider energy consumption in WSN routing protocols. Routing helps the many sensors in a WSN identify the optimal path and manage energy-saving consumption when transmitting data. Current WSN energy-efficiency systems use node selection as the main parameter without applying path-finding routing, which does not fully optimize energy. This research addresses the energy optimization problem using a fuzzy improved heuristic A-Star. A new algorithm named improved heuristic A-Star was developed from the earlier A-Star algorithm. The results show that fuzzy improved heuristic A-Star routing from a sensor node to the sink destination saved 0.3698 joules of energy dissipation, resulting in a longer lifetime.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_43-Wireless_Sensor_Network_Energy_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Mobile Health Monitoring System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080442</link>
        <id>10.14569/IJACSA.2017.080442</id>
        <doi>10.14569/IJACSA.2017.080442</doi>
        <lastModDate>2017-04-29T12:53:35.0200000+00:00</lastModDate>
        
        <creator>Varsha Wahane</creator>
        
        <creator>Dr.P.V. Ingole</creator>
        
        <subject>Biomedical sensors; Wireless body area network; mobile device and microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Health monitoring is an active application area in pervasive and ubiquitous computing. It applies mobile computing technology to enhance communication among health care workers, physicians and patients with a view to providing a better health care system. Recent advances in sensors, wireless communication and low-power integrated circuits have enabled the design of pocket-size, lightweight, low-cost, interactive biosensor nodes. These nodes are seamlessly integrated for mobile health monitoring using a wireless body area network that can sense, process and communicate one or more vital parameters.
The proposed system can provide, through a mobile device, patient health parameters (such as temperature, heart rate and ECG) to a medical server, caretaker and medical practitioner, based on the biomedical and environmental data collected by the deployed sensors. The system monitors multiple physiological parameters, as against one or two parameters in legacy systems. This paper discusses the hardware, software and implementation of the system, with a focus on authentication, power consumption and accuracy in transmitting health parameters to the medical server.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_42-Interactive_Mobile_Health_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Output Feedback Controller Synthesis for Discrete-Time Nonlinear Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080441</link>
        <id>10.14569/IJACSA.2017.080441</id>
        <doi>10.14569/IJACSA.2017.080441</doi>
        <lastModDate>2017-04-29T12:53:34.9900000+00:00</lastModDate>
        
        <creator>Hajer Bouzaouache</creator>
        
        <subject>nonlinear systems; discrete-time systems; optimal control; output feedback control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper presents a computational approach to solving the optimal control problem for a class of nonlinear discrete-time systems. We focus on problems in which a pre-specified set of N local subsystems describes the studied system. For such problems, we derive an output feedback controller and a cost function such that the resulting closed-loop system is asymptotically stable and the closed-loop cost function is minimized. The main results are demonstrated numerically by applying the proposed algorithm to the optimal control problem of a mechanical system.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_41-Output_Feedback_Controller_Synthesis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposing a Keyword Extraction Scheme based on Standard Deviation, Frequency and Conceptual Relation of the Words</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080440</link>
        <id>10.14569/IJACSA.2017.080440</id>
        <doi>10.14569/IJACSA.2017.080440</doi>
        <lastModDate>2017-04-29T12:53:34.9730000+00:00</lastModDate>
        
        <creator>Shadi Masaeli</creator>
        
        <creator>Seyed Mostafa Fakhrahmad</creator>
        
        <creator>Reza Boostani</creator>
        
        <creator>Betsabeh Tanoori</creator>
        
        <subject>Keyword extraction; key-phrase extraction; TFISF; standard deviation; frequency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In every text there are a few keywords that provide important information about its content. Since this limited set of words is supposed to describe the overall concept of a text (e.g. an article or book), correctly choosing the keywords plays an important role in representing that text properly. Despite several efforts in this field, none of the methods published so far is accurate enough to elicit representative words for retrieving a wide variety of texts. In this study, an unsupervised scheme is proposed that is independent of the domain, language, structure and length of a text. The proposed method uses word frequency together with the standard deviation of each word&#39;s positions in the text, while also considering the conceptual relations of words. In a subsequent stage, a secondary score is given to the selected keywords by the statistical TFISF criterion in order to improve on the baseline TFIDF method. Moreover, the proposed hybrid method does not remove stopwords, since they might be part of bigram keywords, whereas similar approaches remove all stopwords in their first stage. Experimental results on the well-known SEMEVAL dataset show the superiority of the proposed method over state-of-the-art schemes in terms of F-score and accuracy. The introduced hybrid method can therefore be considered an alternative scheme for accurate keyword extraction.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_40-Proposing_a_Keyword_Extraction_Scheme_based_on_Standard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Visual System-based Unequal Error Protection for Robust Video Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080439</link>
        <id>10.14569/IJACSA.2017.080439</id>
        <doi>10.14569/IJACSA.2017.080439</doi>
        <lastModDate>2017-04-29T12:53:34.9430000+00:00</lastModDate>
        
        <creator>Ouafae Serrar</creator>
        
        <creator>Oum el kheir Abra</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <subject>video coding; unequal error protection; human visual system (HVS); Regions of Interest ROI; Significant Motion Vectors SVM; Classification; index of importance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>To increase the overall visual quality of video services without increasing the data rate, a human visual system-based video coding scheme is developed, founded on a hierarchy of the video stream into different levels of importance. Determining these importance levels takes into account three classification criteria: the position of the current image in the group of images (image level), the importance of the motion vectors of macroblocks in the current image (macroblock level), and whether a pixel belongs to a spatial region of interest (pixel level). At the end of this classification process, an interpolation of the results of the three-level selection establishes an index of importance for each macroblock of the image to be encoded. This index determines the type of channel coding to be applied to the corresponding macroblock. Tests have shown that the technique presented in this paper achieves better PSNR and SSIM (structural similarity) results than an equal error protection technique.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_39-Human_Visual_System-based_Unequal_Error_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Students’ Arabic Tweets using Different Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080438</link>
        <id>10.14569/IJACSA.2017.080438</id>
        <doi>10.14569/IJACSA.2017.080438</doi>
        <lastModDate>2017-04-29T12:53:34.9130000+00:00</lastModDate>
        
        <creator>Hamed Al-Rubaiee</creator>
        
        <creator>Khalid Alomar</creator>
        
        <subject>Twitter; Arabic tweets; Saudi Arabia; King Abdulaziz University; data mining; data preparation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In this paper, Twitter is chosen as a platform for clustering the topics mentioned by King Abdulaziz University students, in order to understand students&#39; behaviour and answer their inquiries. The aim of the study is to propose a model for clustering analysis of Saudi Arabian (standard and Arabian Gulf dialect) tweets to segment the topics in the students&#39; posts. A combination of natural language processing (NLP) and machine learning (ML) methods is used to build models that cluster tweets according to their textual similarity. The K-means algorithm is used with different vector representation schemes such as TF-IDF (term frequency-inverse document frequency) and BTO (binary term occurrence). Distinct preprocessing is explored to obtain N-gram tokens. A cluster-distance performance task is applied to determine the average distance between cluster centroids. Moreover, human evaluation of the clustering is performed by inspecting the data source to make sure the clusters make sense in an educational domain. Each cluster was thereby identified, and students&#39; Twitter accounts were recognized by their faculties or their educational system, such as e-learning. The results show that the best vector representation was BTO, and it will be useful to apply it for clustering students&#39; text instead of the TF-IDF scheme.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_38-Clustering_Students’_Arabic_Tweets_using_Different_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Social Semantic Web based Conceptual Architecture of Disaster Trail Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080437</link>
        <id>10.14569/IJACSA.2017.080437</id>
        <doi>10.14569/IJACSA.2017.080437</doi>
        <lastModDate>2017-04-29T12:53:34.8970000+00:00</lastModDate>
        
        <creator>Ashfaq Ahmad</creator>
        
        <creator>Roslina Othman</creator>
        
        <creator>Mohamad Fauzan</creator>
        
        <subject>Ontology; Disaster Trail Management; Information Extraction; Knowledge Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Disasters affect human lives severely. Due to these disasters, hundreds of thousands of human beings have lost their lives and precious property. Government agencies, non-government organizations and individual volunteers act to rescue the affected people and to mitigate the disaster effects. These teams require real-time information about the nature, severity and area of the disaster and the number of affectees. Their efforts can be supported by providing timely, effective and specific information so that the rescuers can get a better idea of the available routes to reach the affectees, the urgency and the extent of loss. People share a huge amount of data through blogs and social media that can be utilized to help rescue operations. This information can be electronically filtered, arranged and formatted in a proper manner. Thus, semantic web technologies can play a vital role in providing timely information. The purpose of this research is to capture explicit knowledge of the domain in the form of ontologies, perform automatic information extraction, generate implicit knowledge and then disseminate this information to various stakeholders. The collection of implicit and explicit knowledge will help improve decision making for disaster trail management.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_37-A_Social_Semantic_Web_based_Conceptual.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resources Management of Mobile Network IEEE 802.16e WiMAX</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080436</link>
        <id>10.14569/IJACSA.2017.080436</id>
        <doi>10.14569/IJACSA.2017.080436</doi>
        <lastModDate>2017-04-29T12:53:34.8800000+00:00</lastModDate>
        
        <creator>Mubarak Elamin Elmubarak Daleel</creator>
        
        <creator>Marwa Eltigani Abubakar Ali</creator>
        
        <subject>Wireless Networks; IEEE 802.16; WiMAX; Radio Resource Allocation; Mobility; Admission Control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The evolution of the world of telecommunications towards mobile multimedia, following recent technological advances, has demonstrated that providing access to the network is no longer sufficient. Users need to access value-added multimedia services in their own home environment regardless of how they access the systems. Multimedia services require high transfer rates and have quality-of-service requirements. They must coexist with services with real-time constraints, such as the voice service, which does not tolerate variation of the delay between sending and receiving packets. Guaranteeing these services becomes much more difficult for the operator in technologies that take the mobility of users into account.
This paper studies the IEEE 802.16e system in the continuous modeling case. A model of the IEEE 802.16e cell is proposed that allows the decomposition of the cell according to the principle of the adaptive modulation and coding (AMC) technique. The model is based on an admission control mechanism in the presence of two types of traffic, real-time and non-real-time, and on a new CAC strategy with intra-cell mobility that provides the same QoS to the calls of both traffic types by favoring calls in progress over new arrivals.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_36-Resources_Management_of_Mobile_Network_IEEE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VHDL Design and FPGA Implementation of LDPC Decoder for High Data Rate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080435</link>
        <id>10.14569/IJACSA.2017.080435</id>
        <doi>10.14569/IJACSA.2017.080435</doi>
        <lastModDate>2017-04-29T12:53:34.8500000+00:00</lastModDate>
        
        <creator>A. Boudaoud</creator>
        
        <creator>M. El Haroussi</creator>
        
        <creator>E. Abdelmounim</creator>
        
        <subject>error correcting codes; LDPC codes; BP “Min-Sum”; VHDL language; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In this work, we present an FPGA design and implementation of a parallel architecture of a low-complexity LDPC decoder for high data rate applications. The selected code is a regular (3, 4) LDPC code. The VHDL design and synthesis of this architecture use decoding by the simplified &quot;Min-Sum&quot; BP (Belief Propagation) algorithm. The complexity of the proposed architecture was studied; it is 6335 LEs at a data rate of 2.12 Gbps for 8-bit quantization at the second iteration. We also realized a co-simulation platform based on Simulink to validate the BER (Bit Error Rate) performance of our architecture.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_35-VHDL_Design_and_FPGA_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Brain Tumor in Multimodal MRI using Histogram Differencing &amp; KNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080434</link>
        <id>10.14569/IJACSA.2017.080434</id>
        <doi>10.14569/IJACSA.2017.080434</doi>
        <lastModDate>2017-04-29T12:53:34.8200000+00:00</lastModDate>
        
        <creator>Qazi Nida-Ur-Rehman</creator>
        
        <creator>Imran Ahmed</creator>
        
        <creator>Ghulam Masood</creator>
        
        <creator>Najam-U-Saquib</creator>
        
        <creator>Muhammad Khan</creator>
        
        <creator>Awais Adnan</creator>
        
        <subject>MRI imaging; tumor types; image segmentation; Histogram Differencing; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Tumor segmentation inside brain MRI is one of the trickiest and most demanding subjects for the research community due to the complex nature and structure of the human brain and the different types of abnormalities that grow inside it. A few common types of tumors are CNS Lymphoma, Meningioma, Glioblastoma, and Metastases. In this research work, our aim is to segment and classify the four most commonly diagnosed types of brain tumors. To this end, we propose a new and demanding dataset comprising multimodal MRI along with healthy brain MRI images. The dataset contains 2000 images collected from online sources covering about 80 patient cases. The segmentation method proposed in this research is based on histogram differencing with a rank filter. Morphological operations are applied in post-processing to detect the brain tumor more clearly. KNN classification is applied to classify tumors into their respective categories (i.e., benign and malignant) based on the size of the tumor. The average True Classification Rate (TCR) achieved is 97.3% and the False Classification Rate (FCR) is 2.7%.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_34-Segmentation_of_Brain_Tumor_in_Multimodal_MRI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QR Code Recognition based on Principal Components Analysis Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080433</link>
        <id>10.14569/IJACSA.2017.080433</id>
        <doi>10.14569/IJACSA.2017.080433</doi>
        <lastModDate>2017-04-29T12:53:34.8030000+00:00</lastModDate>
        
        <creator>Hicham Tribak</creator>
        
        <creator>Youssef Zaz</creator>
        
        <subject>QR code; Image segmentation; Principal Components Analysis; Perspective rectification; Pattern similarity measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>QR (Quick Response) code recognition systems (based on computer vision) have always been challenging to devise accurately due to two main constraints: (1) the QR code recognition system must be able to localize QR codes in an acquired image even under unfavorable conditions (illumination variations, perspective distortions), and (2) the system must be adapted to embedded system platforms in terms of processing complexity and resource requirements. Most earlier QR code recognition systems implemented complex feature descriptors (such as Harris features and the Hough transform) which aim at extracting QR code pattern features and subsequently estimating their positions. This process is reinforced by pattern classifiers (e.g., random forests, SVM) which are used to remove falsely detected patterns. Those approaches are very computationally expensive; thus, they cannot run in real-time systems.
In this paper, a streamlined QR code recognition approach is proposed that operates efficiently on systems with limited performance. The approach proceeds as follows: the captured image is segmented in order to reduce the search space and extract the regions of interest. Afterwards, horizontal and vertical scans are performed to preliminarily localize QR code patterns, followed by the Principal Component Analysis (PCA) method, which removes false positives. Thereafter, the remaining patterns are assembled according to a constraint so as to localize the corresponding QR codes. Experimental results show that the incorporation of PCA notably decreases the processing time and increases QR code recognition accuracy (96%).</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_33-QR_ Code_Recognition_based_on_Principal_Components.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Success Factors In Implementing ITIL in the Ministry of Education in Saudi Arabia: An Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080432</link>
        <id>10.14569/IJACSA.2017.080432</id>
        <doi>10.14569/IJACSA.2017.080432</doi>
        <lastModDate>2017-04-29T12:53:34.7730000+00:00</lastModDate>
        
        <creator>Abdullah S Alqahtani</creator>
        
        <subject>Information Technology Infrastructure Technology (ITIL); Information Technology Service Management (ITSM); Ministry of Education (MoE); the Kingdom of Saudi Arabia (KSA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper engages with the ITIL framework for IT service delivery within the specific context of the Ministry of Education in the Kingdom of Saudi Arabia (KSA). A literature review process is used to develop a set of critical success factors (CSFs) for the implementation of the ITIL framework in an organisation, based on a series of models like TAM and UTAUT, which is then put into an overall conceptual model of use behaviour towards ITIL described by [1]. The conceptual model is then deployed in the field through a series of interviews with IT professionals within the Ministry of Education in the KSA. The interviews are semi-structured and were intended to draw out the corresponding factors for success and the factors that have hindered the implementation of ITIL within this organisation. The data confirm the view of the literature that strong leadership and management involvement is essential, both as a success factor in its own right and as the means by which other success factors are enabled. The findings of the paper yield two observations. First, the literature sets out a series of quite precise success factors that relate to project management, communication, and quality control. The data presented in this research project demonstrate that, in practice, it is hard to realise all of these factors to such a level of detail within the Ministry of Education in Saudi Arabia. Second, the implementation of ITIL is more reflexive than the literature and the conceptual model would initially suggest.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_32-Critical_Success_Factors_In_Implementing_ITIL_in_the_Ministry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Routing Information Exchange in Hybrid IPv4-IPv6 Network using OSPFV3 &amp; EIGRPv6</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080431</link>
        <id>10.14569/IJACSA.2017.080431</id>
        <doi>10.14569/IJACSA.2017.080431</doi>
        <lastModDate>2017-04-29T12:53:34.7570000+00:00</lastModDate>
        
        <creator>Zeeshan Ashraf</creator>
        
        <creator>Muhammad Yousaf</creator>
        
        <subject>EIGRPv6; OSPFv3; Hybrid IPv4-IPv6; Route Redistribution; Route Summarization; Tunneling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>IPv6 is the next-generation internet protocol, which is gradually replacing IPv4. IPv6 offers a larger address space, a simpler header format, efficient routing, better QoS and built-in security mechanisms. The migration from IPv4 to IPv6 cannot be attained in a short span of time. The main issue is compatibility and interoperability between the two protocols. Therefore, both protocols are likely to coexist for a long time. Usually, tunneling protocols are deployed over hybrid IPv4-IPv6 networks to offer end-to-end IPv6 connectivity. Many routing protocols are used for IPv4 and IPv6. In this paper, the researchers analyzed the optimized routing information exchange of two routing protocols (OSPFv3 &amp; EIGRPv6) in a hybrid IPv4-IPv6 network. Experimental results show that OSPFv3 performs better than EIGRPv6 in terms of most of the parameters, i.e., convergence time, RTT, response time, tunnel overhead, protocol traffic statistics, and CPU and memory utilization.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_31-Optimized_Routing_Information_Exchange.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A RDWT and Block-SVD based Dual Watermarking Scheme for Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080430</link>
        <id>10.14569/IJACSA.2017.080430</id>
        <doi>10.14569/IJACSA.2017.080430</doi>
        <lastModDate>2017-04-29T12:53:34.7230000+00:00</lastModDate>
        
        <creator>Sachin Gaur</creator>
        
        <creator>Vinay Kumar Srivastava</creator>
        
        <subject>Digital image watermarking; Redundant Discrete wavelet transform; Singular value decomposition; Arnold transform; NCC and PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In the modern era, digital image watermarking is a successful method to protect multimedia digital data in applications such as copyright protection, content verification, rightful ownership identification and tamper detection. In this paper, to improve robustness and security, a dual watermarking approach using the Redundant Discrete Wavelet Transform (RDWT), block-based singular value decomposition (SVD) and the Arnold transform is presented. There are two grayscale watermarks: a prime watermark and an Arnold-scrambled second watermark. The second watermark is embedded into all sub-bands of the RDWT-transformed prime watermark to obtain the processed watermark image. After that, the transformed grayscale cover image is partitioned into non-overlapping blocks, and the processed watermark image is embedded by modifying the SVD coefficients of each block to obtain the resultant watermarked image. A reverse algorithm is then developed to extract the prime and second watermarks from the noisy image. Analysis and experimental outcomes show that the presented method is more robust against numerous image processing attacks and performs better than previously introduced related schemes.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_30-A_RDWT_and_block-SVD_based_Dual_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Observation of Scintillation Events from GPS and NavIC (IRNSS) Measurements at Bangalore Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080429</link>
        <id>10.14569/IJACSA.2017.080429</id>
        <doi>10.14569/IJACSA.2017.080429</doi>
        <lastModDate>2017-04-29T12:53:34.7100000+00:00</lastModDate>
        
        <creator>Manjula T R</creator>
        
        <creator>Raju Garudachar</creator>
        
        <subject>Ionosphere scintillation; Navigation; carrier to noise ratio; solar activity; equinox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Ionosphere scintillation is a random phenomenon of the ionosphere that causes abrupt fluctuations in the amplitude and phase of signals traversing the medium, significantly impacting the performance of navigation systems and signifying the need to take up scintillation studies. Scintillation events are monitored on the L5, S and L1 band signals of the IRNSS and GPS navigation systems, respectively, over the low-latitude Bangalore region during the moderate and low solar activity periods of 2015 and 2016, respectively. Investigations into scintillation variability with respect to local time, solar activity and seasonal variations are conducted to draw a trend of the scintillation pattern. Comparison of L5 and L1 band scintillation events demonstrates similar scintillation patterns with varying scintillation magnitude. The S band signals exhibit minimum scintillation, suggesting a scintillation-free link for effective navigation.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_29-Observation_of_Scintillation_Events_from_GPS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gatekeepers Practices in Knowledge Diffusion within Saudi Organizations: KFMC Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080428</link>
        <id>10.14569/IJACSA.2017.080428</id>
        <doi>10.14569/IJACSA.2017.080428</doi>
        <lastModDate>2017-04-29T12:53:34.6770000+00:00</lastModDate>
        
        <creator>Mona Alawadh</creator>
        
        <creator>Abdullah Altameem</creator>
        
        <subject>Knowledge sharing; gatekeepers; brokerage; knowledge transfer; SNA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Gatekeepers in organizations play a critical role in disseminating and transferring outside knowledge into their groups. This research contributes to identifying the gatekeepers&#39; practices in terms of gathering, selecting, and diffusing knowledge. In the context of Saudi organizations, the exploratory case selected in this research is King Fahad Medical City (KFMC). The research is conducted on Health Informatics and Information Technology employees. A mixed-method design is applied in this research to provide a deep understanding of the knowledge interaction structure and the process of knowledge interactions across the organization network. Both questionnaires and interviews are conducted in order to investigate the context. The Social Network Analysis method is also used in this research to capture the &quot;brokerage&quot; network structure position using the Flow Betweenness Centrality algorithm. The findings reveal that gatekeepers use different knowledge sharing mechanisms, namely: information retrieval, information pooling, pushing, diffusion, collaborative problem solving, and thinking along. In addition, the results present the distinct methods and technologies used by the gatekeepers to collect and share their knowledge with others. The findings of this research provide managerial decision makers and strategic managers in both start-up and well-structured organizations with valuable insights for decisions on policies, strategies, and the appropriate collaborative tools that foster collaborative working.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_28-Gatekeepers_Practices_in_Knowledge_Diffusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Comprehension Exercise System with 3D CG of Toy Model for Disabled Children</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080427</link>
        <id>10.14569/IJACSA.2017.080427</id>
        <doi>10.14569/IJACSA.2017.080427</doi>
        <lastModDate>2017-04-29T12:53:34.6470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Taiki Ishigaki</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Spatial Comprehension; Toy model; Augmented reality; Computer graphics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>A spatial comprehension exercise system with three-dimensional computer graphics (3D CG) of a toy model for disabled children is proposed. In order to improve spatial comprehension in an attractive manner, a toy model is created together with a building block model. Through experiments, it is confirmed that the spatial comprehension of the disabled children is remarkably improved.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_27-Spatial_Comprehension_Exercise_System_with_3D_CG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Control of a Multi-Machine Traction System Connected in Series using Two Static Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080426</link>
        <id>10.14569/IJACSA.2017.080426</id>
        <doi>10.14569/IJACSA.2017.080426</doi>
        <lastModDate>2017-04-29T12:53:34.6300000+00:00</lastModDate>
        
        <creator>Selimane MEGUENNI</creator>
        
        <creator>Abedelkhader. DJAHBAR</creator>
        
        <subject>synchronous machine; Multi-machine Multi-inverter; five-phase; vector control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Power may be segmented either at the converter, using a multilevel inverter, or at the machine, by employing a polyphase winding. Moreover, increasing the number of phases improves power quality and reduces torque ripple, with the added advantage of fault tolerance against the loss of one or more phases. This class of system offers a reduction of design time and costs and the optimization of the volume of embedded systems. The objective of this work is to control, model and characterize the behavior of a multi-machine traction system composed of two five-phase permanent magnet synchronous motors connected in series using two static converters.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_26-Modeling_and_Control_of_a_Multi-Machine_Traction_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-exam Cheating Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080425</link>
        <id>10.14569/IJACSA.2017.080425</id>
        <doi>10.14569/IJACSA.2017.080425</doi>
        <lastModDate>2017-04-29T12:53:34.6000000+00:00</lastModDate>
        
        <creator>Razan Bawarith</creator>
        
        <creator>Dr. Abdullah Basuhail</creator>
        
        <creator>Dr. Anas Fattouh</creator>
        
        <creator>Prof. Dr. Shehab Gamalel-Din</creator>
        
        <subject>online exam; cheating; continuous authentication; online proctor; fingerprint; eye tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>With the expansion of the Internet and technology over the past decade, E-learning has grown exponentially. Cheating in exams is a widespread phenomenon all over the world regardless of the level of development. Traditional cheating-detection methods may no longer be wholly successful in fully preventing cheating during examinations. Online examination is an integral and vital component of E-learning. Students’ exams in E-learning are remotely submitted without any monitoring from physical proctors. Because students can easily cheat during e-exams, E-learning universities depend on an examination process in which students take a face-to-face examination in a physical place allocated at the institution premises under supervised conditions; however, this conflicts with the concept of a distant E-learning environment. This paper investigates methods for detecting student cheating in online exams through continuous authentication and online proctoring. In addition, we have implemented an E-exam management system that is used to detect and prevent cheating in online exams. The system uses a fingerprint reader authenticator and an eye tribe tracker in the exam session. We researched two parameters that can define the examinee status as cheating or non-cheating during the exam; through these two parameters, the total time spent looking off-screen and the number of times the examinee looked off-screen were computed.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_25-E-exam_Cheating_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Recognition of Medicinal Plants using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080424</link>
        <id>10.14569/IJACSA.2017.080424</id>
        <doi>10.14569/IJACSA.2017.080424</doi>
        <lastModDate>2017-04-29T12:53:34.5700000+00:00</lastModDate>
        
        <creator>Adams Begue</creator>
        
        <creator>Venitha Kowlessur</creator>
        
        <creator>Upasana Singh</creator>
        
        <creator>Fawzi Mahomoodally</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>leaf recognition; medicinal plants; random forest; Mauritius</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The proper identification of plant species has major benefits for a wide range of stakeholders, ranging from forestry services, botanists, taxonomists, physicians, pharmaceutical laboratories and organisations fighting for endangered species to governments and the public at large. Consequently, this has fueled an interest in developing automated systems for the recognition of different plant species. A fully automated method for the recognition of medicinal plants using computer vision and machine learning techniques is presented. Leaves from 24 different medicinal plant species were collected and photographed using a smartphone in a laboratory setting. A large number of features were extracted from each leaf, such as its length, width, perimeter, area, number of vertices, colour, and the perimeter and area of its hull. Several derived features were then computed from these attributes. The best results were obtained from a random forest classifier using a 10-fold cross-validation technique. With an accuracy of 90.1%, the random forest classifier performed better than other machine learning approaches such as the k-nearest neighbour, na&#239;ve Bayes, support vector machines and neural networks. These results are very encouraging, and future work will be geared towards using a larger dataset and high-performance computing facilities to investigate the performance of deep learning neural networks for identifying medicinal plants used in primary health care. To the best of our knowledge, this work is the first of its kind to have created a unique image dataset for the medicinal plants available on the island of Mauritius.
It is anticipated that a web-based or mobile computer system for the automatic recognition of medicinal plants will help the local population to improve their knowledge of medicinal plants, help taxonomists to develop more efficient species identification techniques, and contribute significantly to the protection of endangered species.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_24-Automatic_Recognition_of_Medicinal_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Breast Cancer Diagnosis Scheme based on Two-Step-SVM Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080423</link>
        <id>10.14569/IJACSA.2017.080423</id>
        <doi>10.14569/IJACSA.2017.080423</doi>
        <lastModDate>2017-04-29T12:53:34.5530000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <subject>Two-Step Clustering; Breast Cancer; SVM classification; Diagnosis; Tumors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper proposes an automatic diagnostic method for breast tumour disease using a hybrid of the Support Vector Machine (SVM) and the Two-Step Clustering technique. The hybrid technique aims to improve diagnostic accuracy and reduce diagnostic misclassification, thereby solving the classification problems related to breast tumours. To distinguish the hidden patterns of malignant and benign tumours, the Two-Step algorithm and SVM are combined and employed to differentiate incoming tumours. The developed hybrid method achieves an accuracy of 99.1% when examined on the UCI-WBC data set. Moreover, experimental results show that, in terms of evaluation measures, the hybrid method outperforms modern classification techniques for breast cancer diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_23-An_Enhanced_Breast_Cancer_Diagnosis_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instant Diacritics Restoration System for Sindhi Accent Prediction using N-Gram and Memory-Based Learning Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080422</link>
        <id>10.14569/IJACSA.2017.080422</id>
        <doi>10.14569/IJACSA.2017.080422</doi>
        <lastModDate>2017-04-29T12:53:34.5370000+00:00</lastModDate>
        
        <creator>Hidayatullah Shaikh</creator>
        
        <creator>Javed Ahmed Mahar</creator>
        
        <creator>Mumtaz Hussain Mahar</creator>
        
        <subject>Sindhi Language; Instant Diacritics Restoration; Text Prediction; N-Grams; Memory-Based Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The script of the Sindhi language is highly complex due to many factors, including an abundance of homographic words. Interpreting the text becomes difficult because of the multiple meanings associated with a homographic word unless a specific pronunciation is indicated with the help of diacritics. Diacritics help readers comprehend the text easily. In today's fast-paced era, however, people do not bother writing diacritics in routine applications of life. Besides creating difficulties for human reading, the absence of diacritics also makes the text abstruse for machine reading. Much like humans, machines may also run into semantic and syntactic complexities during computational processing of the language. Instant diacritics restoration is an approach that emerged from text prediction systems. This type of diacritics restoration is unprecedented work in the realm of natural language processing, particularly for Indo-Aryan languages. This work proposes a framework using N-Grams and a Memory-Based Learning approach. The key strength of this mechanism is its 99.03% accuracy on a corpus of the Sindhi language during the experiments. The comparative edge of instant diacritics restoration is that it can expedite the performance of other natural language and speech processing applications. The future development of this approach seems promising, as Sindhi orthography is highly similar to that of Arabic, Urdu, Persian and other languages based on this type of script.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_22-Instant_Diacritics_Restoration_System_for_Sindhi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Machine Learning Approach to Enhance the Predictive Accuracy for Screening Potential Active USP1/UAF1 Inhibitors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080421</link>
        <id>10.14569/IJACSA.2017.080421</id>
        <doi>10.14569/IJACSA.2017.080421</doi>
        <lastModDate>2017-04-29T12:53:34.5070000+00:00</lastModDate>
        
        <creator>Syed Asif Hassan</creator>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <subject>Ubiquitinases; DNA repair mechanism; anti USP1/UAF1 molecule; High-throughput Dataset; Feature Selection and Discriminant Technique; Chemoinformatic Model; Classification accuracy; T-test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The DNA repair mechanism is an important mechanism employed by cancerous cells to survive the DNA damage induced during uncontrolled cell proliferation and anti-cancer drug treatments. In this context, Ubiquitin-Specific Protease 1 (USP1) in complex with Ubiquitin-Associated Factor 1 (UAF1) plays a key role in the survival of cancerous cells through the DNA repair mechanism. This puts forth the USP1/UAF1 complex as a striking anti-cancer target for the screening of anti-cancer molecules. The current research aims to improve the classification accuracy of the existing bioactivity-predictive chemoinformatics model for screening potential active USP1/UAF1 inhibitors from high-throughput screening data. The study employed a feature selection method to extract key molecular descriptors from the publicly available high-throughput screening dataset of small molecules used to screen active USP1/UAF1 complex inhibitors. This study proposes an improved predictive machine learning approach using the feature selection technique and a two-class Linear Discriminant Analysis (LDA) algorithm to accurately predict active novel USP1/UAF1 inhibitor compounds.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_21-An_Improved_Machine_Learning_Approach_to_Enhance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Fuzzy Stability Model to Improve Multi-Hop Routing Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080420</link>
        <id>10.14569/IJACSA.2017.080420</id>
        <doi>10.14569/IJACSA.2017.080420</doi>
        <lastModDate>2017-04-29T12:53:34.4900000+00:00</lastModDate>
        
        <creator>Hamdy A.M. Sayedahmed</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <creator>Imane M.A. Fahmy</creator>
        
        <subject>MANET; Fuzzy Model; Routes Stability; OPNET; DSR; FSDSR;  MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Today&#8217;s widespread use of mobile devices such as mobile phones, tablets and laptops has driven the growth of wireless mobile networks, especially Mobile Ad hoc Networks, commonly referred to as MANETs. Since the routing process is regarded as the core of communication and is tied to the network performance metrics, its improvement is reflected in the performance of the whole network. Due to user mobility, limited battery power and limited transmission ranges, routing protocols should consider the stability of routes; the lack of resources in MANETs may otherwise result in imprecise routing decisions. In this paper, a Fuzzy Stability model for Dynamic Source Routing (FSDSR) is proposed to handle the imprecision of routing decisions. Regarding the number of hops per route, cache size, end-to-end delay and route discovery time, the results show that FSDSR outperforms the state-of-the-art Dynamic Source Routing (DSR) protocol.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_20-A_Proposed_Fuzzy_Stability_Model_to_Improve.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Gesture Recognition using Keyframes on Local Joint Motion Trajectories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080419</link>
        <id>10.14569/IJACSA.2017.080419</id>
        <doi>10.14569/IJACSA.2017.080419</doi>
        <lastModDate>2017-04-29T12:53:34.4600000+00:00</lastModDate>
        
        <creator>Rafet Durgut</creator>
        
        <creator>Oguz FINDIK</creator>
        
        <subject>Human gesture recognition; dynamic time warping; local joint motion trajectory;  Human action recognition; microsoft kinect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Human Action Recognition (HAR) systems recognize and classify the actions that users perform in front of a sensor or camera. In most HAR systems, input test data is compared with reference data in a database using various methods, and classification is performed according to the result. The size of the test or reference data directly affects the operating speed of the system; reducing the data size allows a significant increase in speed. In this study, an action recognition method is proposed that uses skeletal joint information obtained from a Microsoft Kinect sensor. Keyframes are obtained by splitting the skeletal joint information; since they serve as distinguishing features, these keyframes are used for the classification process. Keeping keyframes in the reference database, instead of the position or angle information of each action, saves memory and running time. A weight value is calculated for each keyframe in the method. The problem of temporal differences that occur when comparing test and reference actions is solved by Dynamic Time Warping (DTW). The k-nearest neighbours algorithm is used for classification based on the results obtained from DTW. The method was evaluated on a dataset to test its success and, as a result, 100% correct classification was achieved. It is also suitable for real-time systems. Breakpoints can also be used to provide feedback to the user as a result of the classification process. The magnitude and direction of the keyframes, the change in the trajectory of a joint, and its position and time of occurrence also give information about timing errors.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_19-Human_Gesture_Recognition_using_Keyframes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Data Accumulation among Reliable Hops with Rest/Alert Scheduling in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080418</link>
        <id>10.14569/IJACSA.2017.080418</id>
        <doi>10.14569/IJACSA.2017.080418</doi>
        <lastModDate>2017-04-29T12:53:34.4300000+00:00</lastModDate>
        
        <creator>Mohamed Mustaq AhmedA</creator>
        
        <creator>Abdalla AlAmeen</creator>
        
        <creator>Mohemmed Sha M</creator>
        
        <creator>Mohamed Yacoab M.Y</creator>
        
        <creator>Manesh.T</creator>
        
        <subject>Sensor Networks; Data Collection; Data Accumulation; Reliability; Security Key; Rest/Alert hops</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Wireless Sensor Networks (WSNs) are prone to attacks from outside sources, so all information must be secured to guarantee its integrity and privacy. In sensor networks, data collection and data accumulation are mainly based on the energy levels of the sensor hops. Due to energy drain, at a particular point in time the sensor hops become obsolete and data transmission can no longer take place. This research proposes a reliable and secure strategy with dependable hops utilizing an own-key logic test with a convention for sensor organization. It proposes to designate a few hops as dependable hops (Reliable-hops) to capture the insight of the nodes. With every hop, a secret authorization is shared among the sink and its neighbouring hops. A network is then developed for sending information to the sink hops in a progressive design. The hops encode the information by utilizing the secret authorization and forward it to the next level in the network. By improving the transmission structure of the Reliable-hops, the accumulated value is confirmed to guarantee trustworthiness. The proposed system is demonstrated with various examples throughout the paper.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_18-Secure_Data_Accumulation_among_Reliable_Hops.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Naturally Fractured Reservoir Performance using Novel Integrated Workflow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080417</link>
        <id>10.14569/IJACSA.2017.080417</id>
        <doi>10.14569/IJACSA.2017.080417</doi>
        <lastModDate>2017-04-29T12:53:34.4130000+00:00</lastModDate>
        
        <creator>Reda Abdel Azim</creator>
        
        <subject>fractured reservoirs; production potential; fracture network map and finite element</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Generating subsurface fracture maps of naturally fractured reservoirs and predicting their production potential are considered a complex process due to the insufficiency of available data, such as borehole images, core data and a proper reservoir simulation model. To overcome such shortcomings, the industry has relied on geo-statistical analyses of hard and soft data, often referred to as static data. This paper presents an integrated workflow that models and predicts fractured reservoir performance through the use of gradient-based inversion techniques and discrete fracture network (DFN) modelling, which, through the inversion of well test data (i.e., dynamic data), aims to optimise fracture properties and then predict the reservoir&#8217;s production potential. The first step in the workflow is to identify flow-contributing fracture sets by analysing available core descriptions, borehole images, conventional log data and production data. Once the fracture sets are identified, the fracture intensity is statistically populated in the inter-well space. In the second step, 3D block-based permeability tensors are calculated based on flow through discrete fractures, and the fracture intensity is then propagated away from the wellbore by relating permeability tensors to fracture intensity. In the final step (fracture optimisation), the fracture properties, including distribution, orientation and geometry in different realisations, are computed by DFN modelling. Fluid flow is simulated in these discrete fractures to estimate pressure change and pressure derivatives. The production rate associated with a drill stem test performed within this reservoir area has been successfully simulated using the optimised subsurface fracture map generated in the first step.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_17-Prediction_of_Naturally_Fractured_Reservoir.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Ranking Key Factors of Virtual Teams Effectiveness in Saudi Arabian Petrochemical Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080416</link>
        <id>10.14569/IJACSA.2017.080416</id>
        <doi>10.14569/IJACSA.2017.080416</doi>
        <lastModDate>2017-04-29T12:53:34.3830000+00:00</lastModDate>
        
        <creator>Abdullah Basiouni</creator>
        
        <creator>Kang Mun Arturo Tan</creator>
        
        <creator>Hafizi Muhamad Ali</creator>
        
        <creator>Walid Bahamdan</creator>
        
        <creator>Ahmad Khalifi</creator>
        
        <subject>Virtual teams; Social network; Echo method; Quantitative analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This research ranks effectiveness-related factors of virtual teams. The literature suggests various factors that could motivate or discourage management from using virtual teams versus co-located teams. Forty-eight interviews were conducted in petrochemical companies in Saudi Arabia. The Echo Method was employed, and eleven factors were identified. Results showed that participants ranked efficiency and communication first and second among motivating factors for adopting the virtual team approach, while the other three motivating factors, ranked lower, are flexibility, diversity and cooperation. On the other hand, the six discouraging factors (barriers) are miscommunication, scheduling preferences, unreliability of technology, incompetency of staff, varying standards and isolationist tendency. Suggestions were made to counteract the effects of the barrier-inducing factors and enhance the effects of the motivating factors.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_16-A_Study_on_Ranking_Key_Factors_of_Virtual_Teams.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Selfish Node Detection Algorithm for Mobile Ad Hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080415</link>
        <id>10.14569/IJACSA.2017.080415</id>
        <doi>10.14569/IJACSA.2017.080415</doi>
        <lastModDate>2017-04-29T12:53:34.3670000+00:00</lastModDate>
        
        <creator>Ahmed. A. Hadi</creator>
        
        <creator>Zulkarnain Md. Ali</creator>
        
        <creator>Yazan Aljeroudi</creator>
        
        <subject>Selfish nodes detection; AODV routing; routing protocols; MANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Mobile Ad hoc Networks (MANETs) suffer from various security issues. In practice, not all nodes in a MANET cooperate in forwarding packets, even without malicious intent. Such a node is called a selfish node, and it behaves this way due to its internal state, such as limited-energy concerns. Selfish nodes drop packets, which harms the process of establishing routes and relaying packets. It is therefore very important to detect and avoid these nodes, which guarantees an improvement in the performance of the overall network. Here, an improved scheme has been developed for detecting selfish nodes in networks based on the Ad hoc On-demand Distance Vector (AODV) routing protocol. Two algorithms are integrated to assure the fewest false-positive decisions in selfish node detection: the first avoids false positives when detecting selfish nodes in forwarding Route Requests (RREQ), and the second avoids false positives when detecting selfish nodes in forwarding data packets. This scheme guarantees improved packet-forwarding performance in terms of Packet Delivery Ratio (PDR) and End-to-End delay (E2E delay).</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_15-Improved_Selfish_Node_Detection_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Representation and Searching Algorithm for Opening Hours</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080414</link>
        <id>10.14569/IJACSA.2017.080414</id>
        <doi>10.14569/IJACSA.2017.080414</doi>
        <lastModDate>2017-04-29T12:53:34.3330000+00:00</lastModDate>
        
        <creator>Teodora Husar</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Sorin Sarca</creator>
        
        <subject>Opening Hours; Java; optimizations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Opening hours can be considered a data type with a human representation: it is easily understood by human beings but hardly understood by computers because of the lack of a standard structured representation. In essence, opening hours give us one simple piece of information, the opening state at a certain date and time, and that is our focus in this paper. So far, this kind of functionality does not exist in today&#39;s database management systems because no algorithms have been developed for it. The purpose of this paper is to present a novel, easy-to-implement algorithm for encoding opening hours in order to quickly search for and retrieve the opening state of records.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_14-A_Novel_Representation_and_Searching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Urdu Language Parsing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080413</link>
        <id>10.14569/IJACSA.2017.080413</id>
        <doi>10.14569/IJACSA.2017.080413</doi>
        <lastModDate>2017-04-29T12:53:34.3030000+00:00</lastModDate>
        
        <creator>Arslan Ali Raza</creator>
        
        <creator>Asad Habib</creator>
        
        <creator>Jawad Ashraf</creator>
        
        <creator>Muhammad Javed</creator>
        
        <subject>Natural Language Processing; Machine Learning; Urdu Language Processing and Dependency Parsing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Natural Language Processing is a multidisciplinary area of Artificial Intelligence, Machine Learning and Computational Linguistics for processing human language automatically. It involves the understanding and processing of human language. The way in which we share our content or feelings has always been of great importance in understanding and processing language. Parsing is the most suitable approach for identifying what the available sentences express. Parsing is the process in which the syntactic structure of a sentence is identified using grammatical tags: a syntactically correct sentence structure is obtained by assigning grammatical labels to its constituents using a lexicon and syntactic rules. Phrase structure and dependency are the two main structural formalisms for parsing natural language sentences. The growing use of Web 2.0 has produced novel research challenges, as people from different geographical areas use this channel and share content in their native languages. Urdu is one such free-word-order native language that is widely shared over social media sites, but the identification and summarization of Urdu sentences is a challenging task. In this review paper we present an overview of recent work on parsing fixed-order languages (i.e., English) and free-word-order languages (i.e., Urdu) in order to reveal the most suitable method for Urdu language parsing. This survey found that dependency parsing is more appropriate for Urdu and other free-word-order languages, and that parsers for English are not useful for parsing Urdu sentences due to its morphological, syntactical and grammatical differences.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_13-A_Review_on_Urdu_Language_Parsing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel Simulated Annealing Algorithm for Weapon-Target Assignment Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080412</link>
        <id>10.14569/IJACSA.2017.080412</id>
        <doi>10.14569/IJACSA.2017.080412</doi>
        <lastModDate>2017-04-29T12:53:34.2870000+00:00</lastModDate>
        
        <creator>Emrullah SONUC</creator>
        
        <creator>Baha SEN</creator>
        
        <creator>Safak BAYIR</creator>
        
        <subject>Weapon-Target Assignment; Multi-start Simulated Annealing; Combinatorial optimization; Parallel algorithms; GPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Weapon-target assignment (WTA) is a combinatorial optimization problem and is known to be NP-complete. The WTA problem seeks the best assignment of weapons to targets so as to minimize the total expected value of the surviving targets. Exact methods can solve only small-size problems in a reasonable time, and although many heuristic methods have been studied for the WTA in the literature, few parallel methods have been proposed. This paper presents a parallel simulated annealing algorithm (PSA) to solve the WTA. The PSA runs on a GPU using the CUDA platform, and a multi-start technique is used to improve the quality of solutions. Twelve problem instances (up to 200 weapons and 200 targets), generated randomly, are used to test the effectiveness of the PSA. Computational experiments show that the PSA outperforms SA on average and runs up to 250x faster than a single-core CPU.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_12-A_Parallel_Simulated_Annealing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Weighted Bipartite Graph for Android Malware Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080411</link>
        <id>10.14569/IJACSA.2017.080411</id>
        <doi>10.14569/IJACSA.2017.080411</doi>
        <lastModDate>2017-04-29T12:53:34.2570000+00:00</lastModDate>
        
        <creator>Altyeb Altaher</creator>
        
        <subject>Android malware; Bipartite graph; Classification algorithms; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The complexity and number of mobile malware samples are increasing continually as smartphone usage continues to rise. The popularity of Android has increased the number of malware that target Android-based smartphones. Developing efficient and effective approaches for Android malware classification is emerging as a new challenge. This paper introduces an effective Android malware classifier based on a weighted bipartite graph. The classifier includes two phases: in the first phase, the permissions and API calls used in an Android app are utilized to construct the weighted bipartite graph, and feature importance scores are integrated as weights in the graph to improve the discrimination between malware and goodware apps by incorporating extra meaningful information into the graph structure. The second phase applies multiple classifiers to categorise an Android application as malware or goodware. Results on an Android malware dataset consisting of different malware families show the effectiveness of our approach to Android malware classification.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_11-Using_Weighted_Bipartite_Graph_ for_Android.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Approach for the Security Threats on Data Centers in IOT Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080410</link>
        <id>10.14569/IJACSA.2017.080410</id>
        <doi>10.14569/IJACSA.2017.080410</doi>
        <lastModDate>2017-04-29T12:53:34.2270000+00:00</lastModDate>
        
        <creator>Fahad H. Alshammari</creator>
        
        <subject>Internet of Things; Data centers; sessions; Security Threats; Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The Internet of Things has progressed from the conjunction of wireless technologies, micro-electromechanical systems (MEMS), micro-services and the Internet. This conjunction has helped break down the silo walls between operational technology (OT) and information technology (IT), allowing unstructured machine-created data to be examined for insights that will drive enhancements. The Internet of Things (IoT) is an arrangement of interconnected computing devices, mechanical and digital machines, and objects that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. However, security is one of the main concerns in the Internet of Things and must be addressed. Unnecessary requests from an attacker can overload a data center, resulting in hanging servers, decreased throughput, and requests being redirected to the data centers. This paper presents an efficient approach to decrease the unwanted requests at the data centers, so that the number of sessions is reduced and the unnecessary load on the data centers is lowered, in order to mitigate the effect of an attack as much as possible.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_10-An_Efficient_Approach_for_the_Security_Threats.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Intra-Prediction Framework for H.264 Video Compression using Decision and Prediction Mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080409</link>
        <id>10.14569/IJACSA.2017.080409</id>
        <doi>10.14569/IJACSA.2017.080409</doi>
        <lastModDate>2017-04-29T12:53:34.1930000+00:00</lastModDate>
        
        <creator>Pradeep Kumar N.S.</creator>
        
        <creator>H.N. Suresh</creator>
        
        <subject>Encoding Mechanism; H.264 / AVC; Intra-Prediction Mode; Video Compression; Visual Quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>With the increasing usage of multimedia content and the advancement of communication devices and services, there is a heavy demand for an effective multimedia compression protocol. In this regard, H.264 has proven to be an effective video compression standard; however, its computational complexity, along with various other issues, has been an impediment to mainstream compression research. Therefore, we present a novel framework that enhances the capability of the H.264 compression method by emphasizing the cost effectiveness of computational operations during intra-prediction mode. A simple and novel encoding mechanism has been formulated for H.264/AVC using the macroblock decision mode as well as prediction mode selection exclusively for intra-prediction in H.264/AVC. The study outcome is found to offer superior signal quality compared to the conventional H.264 encoding mechanism.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_9-Novel_Intra-Prediction_Framework_for_H.264_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Case Management Framework to Develop Case-based Emergency Response System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080408</link>
        <id>10.14569/IJACSA.2017.080408</id>
        <doi>10.14569/IJACSA.2017.080408</doi>
        <lastModDate>2017-04-29T12:53:34.1630000+00:00</lastModDate>
        
        <creator>Abobakr Y. Shahrah</creator>
        
        <creator>Majed A. Al-Mashari</creator>
        
        <subject>Adaptive Case Management; Case Handling; Case Management; Emergency Response System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Emergency response to crisis, disaster, or catastrophe incidents is a clear example of a knowledge-intensive and collaboration-heavy process facing all public safety-related organizations. Software systems to support emergency response have existed for decades. However, the limitations of these systems and their development approaches are still significant in terms of flexibility and dynamicity. With the emergence of Adaptive Case Management (ACM) as a new software development approach to support knowledge work and empower knowledge workers, the authors found that ACM is a promising approach that can be extended to support emergency response, especially in large-scale situations. This research aims to study how ACM can be leveraged to design and implement case-based emergency response systems (ERSs). In particular, the authors propose a domain-specific and vendor-neutral Case Management Framework (CMF) that incorporates the essential capabilities to support ERSs. As a proof of concept, the authors support the proposed CMF with a case-based ERS prototype. Finally, the authors conclude that ACM has great potential to enhance the effectiveness and efficiency of ERSs. This work can be considered an attempt to advocate the adoption of ACM in such a context.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_8-Adaptive_Case_Management_Framework_to_Develop.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualizing Composition in Design Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080407</link>
        <id>10.14569/IJACSA.2017.080407</id>
        <doi>10.14569/IJACSA.2017.080407</doi>
        <lastModDate>2017-04-29T12:53:34.1330000+00:00</lastModDate>
        
        <creator>Zaigham Mushtaq</creator>
        
        <creator>Kiran Iqbal</creator>
        
        <creator>Ghulam Rasool</creator>
        
        <subject>Design patterns; Visualization; Program Comprehension; Reverse engineering; Composition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Visualization of design pattern information plays a vital role in the analysis, design and comprehension of software applications. Different representations of design patterns have been proposed in the literature, but each representation has its strengths and limitations. State-of-the-art design pattern visualization approaches are unable to capture all the aspects of design pattern visualization that are important for the comprehension of a software application, e.g., the role that a class, attribute or operation plays in a design pattern. Additionally, there exist multiple instances of a design pattern and different types of overlapping in the design of different systems. Visualization of overlapping and composition in design patterns is important for the forward and reverse engineering domains. The focus of this paper is to analyze the characteristics, strengths and limitations of key design pattern representations used for visualization, and to propose a hybrid approach which incorporates the best features of existing approaches while suppressing their limitations. The approach extends features that are important for visualizing different types of overlapping in design patterns. Stereotypes, tagged values, semantics and constraints are defined to represent design pattern information related to the attributes and/or operations of a class. A prototype tool named VisCDP is developed to demonstrate and evaluate our proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_7-Visualizing_Composition_in_Design_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Insight towards Research Direction in Information Propagation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080406</link>
        <id>10.14569/IJACSA.2017.080406</id>
        <doi>10.14569/IJACSA.2017.080406</doi>
        <lastModDate>2017-04-29T12:53:34.1170000+00:00</lastModDate>
        
        <creator>Selva Kumar S</creator>
        
        <creator>Dr. Kayarvizhy N</creator>
        
        <subject>Information Propagation Prediction; Social Network Analysis; Predictive Modelling; Information Propagation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The concept of information propagation has been studied to illustrate the particular, discrete, and explicit behavior of nodes in complex, highly distributed and connected networks. The complex network structure poses various challenges to information propagation due to the use of diversified communication protocols and dynamic behavior in the context of uncertainty. This paper is the first of its kind to review the frequently addressed problems and the most significant research techniques for addressing the various research problems associated with information propagation, relating to social network analysis, data routing behavior in multi-path wireless networks, multimedia transmission, and security. This paper is useful for researchers, academicians, and industry practitioners with research interests in social network analysis, predictive modeling and information propagation analysis.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_6-A_Comprehensive_Insight_Towards_Research_Direction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Service Adaptation Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080405</link>
        <id>10.14569/IJACSA.2017.080405</id>
        <doi>10.14569/IJACSA.2017.080405</doi>
        <lastModDate>2017-04-29T12:53:34.1000000+00:00</lastModDate>
        
        <creator>Mohammed Yassine BAROUDI</creator>
        
        <creator>Abdelkrim BENAMAR</creator>
        
        <creator>Fethi Tarik BENDIMERAD</creator>
        
        <subject>Adaptative service; software component; service; dynamic adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper proposes a software architecture for dynamic service adaptation. The services are constituted by reusable software components. The adaptation’s goal is to optimize the services as a function of their execution context. As a first step, the context will take into account just the user’s needs, but other elements will be added. A particular feature of our proposal is that profiles are used not only to describe the context’s elements but also the components themselves. An Adapter analyzes the compatibility between all these profiles and detects the points where the profiles are not compatible. The same Adapter then searches for and applies the possible adaptation solutions: component customization, insertion, extraction or replacement.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_5-Dynamic_Service_Adaptation_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning Analytics in a Shared-Network Educational Environment: Ethical Issues and Countermeasures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080404</link>
        <id>10.14569/IJACSA.2017.080404</id>
        <doi>10.14569/IJACSA.2017.080404</doi>
        <lastModDate>2017-04-29T12:53:34.0700000+00:00</lastModDate>
        
        <creator>Olugbenga Adejo</creator>
        
        <creator>Thomas Connolly</creator>
        
        <subject>Learning Analytics; Student’s data; Emerging technologies; Ethical Issues; Higher Education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>The recent trend in the development of education across the globe is the use of new Learning Analytics (LA) tools and technologies in teaching and learning. The potential benefits of LA notwithstanding, potential ethical issues have to be considered and addressed in order to avoid any legal issues that might arise from its use. As a result, Higher Education Institutions (HEIs) involved in the development of LA tools need to pay particular attention to every ethical challenge or constraint that might arise.
This paper aims to identify and discuss several ethical issues connected with the practice and use of LA tools and technologies in analysing and predicting the performance of students in a shared network environment of HEIs. The study discusses the four ethical issues of Information and Communication Technology, namely Privacy, Accuracy, Property and Accessibility (the PAPA model), as well as other approaches to explain these concerns. The paper also presents empirical evidence of the views of students on the analytical use and storage of their data.
The results indicate that even though students have high trust in the privacy and security of their data as used by their institutions, more than half of the students have ethical concerns about the accessibility and storage of their data beyond a certain period. In light of this, generalised strategies on the ethical issues of the use of learners’ data in an HEI shared networked environment are proposed.
</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_4-Learning_Analytics_in_a_Shared-Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Human Action Recognition using Hu Moment Invariants and Euclidean Distance Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080403</link>
        <id>10.14569/IJACSA.2017.080403</id>
        <doi>10.14569/IJACSA.2017.080403</doi>
        <lastModDate>2017-04-29T12:53:34.0400000+00:00</lastModDate>
        
        <creator>Fadwa Al-Azzo</creator>
        
        <creator>Arwa Mohammed Taqi</creator>
        
        <creator>Mariofanna Milanova</creator>
        
        <subject>human action recognition; Hu moment invariants; surveillance camera; Euclidean distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>This paper presents a new scale-, rotation-, and translation-invariant interest point descriptor model for human action recognition. The descriptor, HMIV (Hu Moment Invariants on Videos), is used for solving surveillance camera recording problems under different conditions of side, position, direction and illumination. The proposed approach deals with raw input human action video sequences. Seven Hu moments are computed to extract human action features and store them in a 1D vector, which is condensed into one mean value over all the frames’ moments. The moments are invariant to scale, translation, and rotation, which is the robust property of the Hu moments algorithm. The experiments are evaluated using two different datasets: KTH and UCF101. The classification is performed by calculating the Euclidean distance between the training and testing datasets. The human action with the minimum distance is selected as the winning matching action. The maximum classification accuracy in this work is 93.4% for the KTH dataset and 92.11% for UCF101.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_3-3D_Human_Action_Recognition_using_Hu_Moment_Invariants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deep Learning Approach for Secondary Structure Protein Prediction based on First Level Features Extraction using a Latent CNN Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080402</link>
        <id>10.14569/IJACSA.2017.080402</id>
        <doi>10.14569/IJACSA.2017.080402</doi>
        <lastModDate>2017-04-29T12:53:34.0230000+00:00</lastModDate>
        
        <creator>Adil Al-Azzawi</creator>
        
        <subject>Secondary structure protein prediction; secondary structure; fine-tuning; Stacked Sparse; Deep Learning; CNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>In Bioinformatics, Protein Secondary Structure Prediction (PSSP) has been considered one of the main challenging tasks in the field. Today, secondary structure protein prediction approaches are categorized into three groups (neighbor-based, model-based, and meta-predictor-based models). The main purpose of the model-based approaches is to detect the protein sequence-structure by utilizing machine learning techniques to train and learn a predictive model. For this, different supervised learning approaches have been proposed, such as neural networks, hidden Markov chains, and support vector machines. In this paper, our proposed approach, a latent deep learning approach, relies on detecting first-level features using a Stacked Sparse Autoencoder. This approach allows us to detect new features from the training data using the sparse autoencoder, which are later used as convolved filters in the Convolutional Neural Network (CNN) structure. The experimental results show that the highest prediction accuracy of our approach on the testing set is 86.719%, obtained when the backpropagation framework is combined with pre-training in an unsupervised fashion, after which the whole network is fine-tuned in a supervised fashion.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_2-Deep_Learning_Approach_for_Secondary_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teaching Software Testing using Data Structures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080401</link>
        <id>10.14569/IJACSA.2017.080401</id>
        <doi>10.14569/IJACSA.2017.080401</doi>
        <lastModDate>2017-04-29T12:53:33.8970000+00:00</lastModDate>
        
        <creator>Ingrid A. Buckley</creator>
        
        <creator>Winston S. Buckley</creator>
        
        <subject>Software Testing; Data Structures; Abstract Data Type (ADT); Unit Testing; Performance Testing; Stacks; Binary Search Tree; Towers of Hanoi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(4), 2017</description>
        <description>Software testing is typically a rushed and neglected activity that is done at the final stages of software development. In particular, most students tend to test their programs manually and very seldom perform adequate testing. In this paper, two basic data structures are utilized to highlight the importance of writing effective test cases by testing their fundamental properties. The paper also includes performance testing at the unit level, of a classic recursive problem called the Towers of Hanoi. This teaching approach accomplishes two important pedagogical objectives: (1) it allows students to think about how to find hidden bugs and defects in their programs and (2) it encourages them to test more effectively by leveraging data structures that are already familiar to them.</description>
        <description>http://thesai.org/Downloads/Volume8No4/Paper_1-Teaching_Software_Testing_using_Data_Structures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of 1-bit Comparator using 2 Dot 1 Electron  Quantum-Dot Cellular Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080366</link>
        <id>10.14569/IJACSA.2017.080366</id>
        <doi>10.14569/IJACSA.2017.080366</doi>
        <lastModDate>2017-04-04T08:33:23.6970000+00:00</lastModDate>
        
        <creator>Angona Sarker</creator>
        
        <creator>Md. Badrul Alam Miah</creator>
        
        <creator>Ali Newaz Bahar</creator>
        
        <subject>QCA; 2 Dot 1 Electron QCA; Comparator; Coulomb’s principle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In nanotechnology, quantum-dot cellular automata (QCA) offer promising and attractive features for nano-scale computing. QCA effectively overcomes the scaling shortfalls of CMOS technology. One variant of QCA is 4 Dot 2 Electron QCA, which is well explored and researched. The main concentration of this study is on 2 Dot 1 Electron QCA, an emerging variant of QCA. A novel and efficient XOR gate based on 2 Dot 1 Electron QCA is designed. Moreover, a comparator using the proposed novel XOR gate is presented in this work. The proposed architecture is justified using a well-accepted standard mathematical function based on Coulomb’s law. The energy and power dissipation of the architecture are analyzed using different energy parameters. As the compactness of the proposed design is 76.4%, the design achieves a high degree of compactness and better efficiency.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_66-Design_of_1_bit_Comparator_using_2_Dot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Conditional Switching (ACS), an Incremental Enhancement to TCP-Reno/RTP to Improve the VoIPv6 Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080365</link>
        <id>10.14569/IJACSA.2017.080365</id>
        <doi>10.14569/IJACSA.2017.080365</doi>
        <lastModDate>2017-04-04T08:33:23.6030000+00:00</lastModDate>
        
        <creator>Asaad Abdallah Yousif Malik Abusin</creator>
        
        <creator>Junaidi Abdullah</creator>
        
        <creator>Tan Saw Chin</creator>
        
        <subject>TCP Reno; VoIPv6 Performance; VoIPv6 improvements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In this research work an Automatic Conditional Switching Protocol (ACSP) is proposed, which is a conditional switching method between delay-based TCP-Reno and RTP (Real-Time Transport Protocol). It is a delay-constrained method based on the VoIPv6 delay limit stated by the Internet Engineering Task Force (IETF), which is 150 milliseconds in accordance with the ITU G.114 standard recommendation (G.114, International Telecommunication Union, 1994), which recommends that the permissible amount of delay in VoIP be less than or equal to 150 milliseconds in order to maintain voice quality. The proposed Automatic Conditional Switching (ACS) should be implemented in the routers and switches in the protocol layer of the VoIPv6 network to improve VoIPv6 performance. The system can be defined simply as a combination of TCP Reno/UDP (RTP) with Automatic Conditional Switching, which is based on Delay-based Congestion Avoidance (DCA).</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_65-Automatic_Conditional_Switching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generation of Sokoban Stages using Recurrent Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080364</link>
        <id>10.14569/IJACSA.2017.080364</id>
        <doi>10.14569/IJACSA.2017.080364</doi>
        <lastModDate>2017-03-31T13:22:30.7030000+00:00</lastModDate>
        
        <creator>Muhammad Suleman</creator>
        
        <creator>Farrukh Hasan Syed</creator>
        
        <creator>Tahir Q. Syed</creator>
        
        <creator>Saqib Arfeen</creator>
        
        <creator>Sadaf I. Behlim</creator>
        
        <creator>Behroz Mirza</creator>
        
        <subject>Stepwise Cooperative Training; Generative Networks; Recurrent Neural Networks; Sokoban; Puzzles; Deep Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Puzzles and board games represent several important classes of AI problems, but also represent difficult complexity classes. In this paper, we propose a deep learning based alternative to train a neural network model to find solution states of the popular puzzle game Sokoban. The network trains against a classical solver that uses theorem proving as the oracle of valid and invalid game states, in a setup that is similar to the popular adversarial training framework. Using our approach, we have been able to verify the validity of a Sokoban puzzle up to an accuracy of 99% on the test set. We have also been able to train our network to generate the next possible state of the puzzle board up to an accuracy of 99% on the validation set. We hope that through this approach, a trained neural network will be able to replace human experts and classical rule-based AI in generating new instances and solutions for such games.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_64-Generation_of_Sokoban_Stages_using_Recurrent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Software Installation using a Sequence of Predictions from Bayesian Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080363</link>
        <id>10.14569/IJACSA.2017.080363</id>
        <doi>10.14569/IJACSA.2017.080363</doi>
        <lastModDate>2017-03-31T13:22:30.6870000+00:00</lastModDate>
        
        <creator>Behraj Khan</creator>
        
        <creator>Umar Manzoor</creator>
        
        <creator>Tahir Syed</creator>
        
        <subject>Multiagent System; Machine Learning; Software installation/un-installation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The idea of automated installation/un-installation is a direct consequence of the tedious and time-consuming manual effort put into installing or uninstalling multiple software packages over hundreds of machines. In this work we propose what is, to the best of our knowledge, the first learnable method of autonomous software installation/un-installation. The method leverages text classification, using as data the textual guidelines given to users on the installation window. This is used to arrive at the Next/Pause/Abort decisions for each installation window using multiple classifier schemes. We report the best results using a full Bayesian network with an accuracy level of 94%, while the Naïve Bayes and rule-based inference accuracies were 42% and 88%, respectively. We attribute this to the sequential nature of the Bayesian network, which corresponds to the sequential nature of natural language data.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_63-Autonomous_Software_Installation_using_a_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule Adaptation in Collaborative Working Environments using RBAC Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080362</link>
        <id>10.14569/IJACSA.2017.080362</id>
        <doi>10.14569/IJACSA.2017.080362</doi>
        <lastModDate>2017-03-31T13:22:30.6570000+00:00</lastModDate>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <creator>Abdul Mateen</creator>
        
        <creator>Yousra Asim</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Muhammad Anwar</creator>
        
        <creator>Wajeeha Naeem</creator>
        
        <creator>Malik Ahsan Ali</creator>
        
        <subject>Dynamic Adaptation; RBAC; Privacy; Collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Collaborative Working Environments (CWEs) are gaining prominence these days. With the increase in the use of collaboration tools and technologies, many sharing and privacy issues have also emerged. Due to its dynamic nature, a CWE needs to adapt to changes accordingly. In this paper, we have implemented the Adaptive Dynamic Sharing and Privacy-aware Role Based Access Control (Adaptive DySP-RBAC) model, which preserves users’ information privacy while dynamically adapting to the changes occurring in the system at any time. The proposed model has been implemented as a prototype and tested. Results have shown that our system efficiently and effectively adapts access rules according to the changes happening in a CWE, while preserving users’ information privacy in the system.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_62-Rule_Adaptation_in_Collaborative_Working.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multitaper MFCC Features for Acoustic Stress Recognition from Speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080361</link>
        <id>10.14569/IJACSA.2017.080361</id>
        <doi>10.14569/IJACSA.2017.080361</doi>
        <lastModDate>2017-03-31T13:22:30.6400000+00:00</lastModDate>
        
        <creator>Salsabil Besbes</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Mel Frequency Cepstral Coefficients (MFCC); Multitapering; Multiclass SVM; Stress recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Improving the performance of speech recognition systems is a challenging problem of interest to recent researchers. In this paper, we compare two methods for extracting the Mel Frequency Cepstral Coefficients used to represent stressed speech utterances, in order to obtain the best performance. The first method, known as the traditional one, is based on a single window (taper), generally the Hamming window, while the second is a novel technique developed with multiple tapers instead of a single taper. The extracted features are then classified using multiclass Support Vector Machines. Experimental results on the SUSAS database have shown that the multitaper MFCC features outperform the conventional MFCCs.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_61-Multitaper_MFCC_Features_for_Acoustic_Stress.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Missing Data Imputation using Genetic Algorithm for Supervised Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080360</link>
        <id>10.14569/IJACSA.2017.080360</id>
        <doi>10.14569/IJACSA.2017.080360</doi>
        <lastModDate>2017-03-31T13:22:30.6100000+00:00</lastModDate>
        
        <creator>Waseem Shahzad</creator>
        
        <creator>Qamar Rehman</creator>
        
        <creator>Ejaz Ahmed</creator>
        
        <subject>genetic algorithm; information gain; missing data; supervised learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Data is an important asset for any organization to successfully run its business. Collected data often suffers from low quality, such as noise, incompleteness, and missing values. If the quality of the data is low, the mining results of any data mining algorithm will also be low. In this paper, we propose a technique to deal with missing values. A genetic algorithm (GA) is used to estimate missing values in datasets: the GA generates candidate sets of missing values, and information gain (IG) is used as the fitness function to measure the performance of an individual solution. Our goal is to impute missing values in a dataset for better classification results. The technique works even better when there is a higher rate of missing values or incomplete information, along with a greater number of distinct values in the attributes/features that contain missing values. We compare the proposed technique with single-imputation techniques and statistically based multiple imputation (MI) approaches on various benchmark classification techniques and performance measures, and show that the proposed method outperforms other state-of-the-art missing data imputation techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_60-Missing_Data_Imputation_using_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JSEA: A Program Comprehension Tool Adopting LDA-based Topic Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080359</link>
        <id>10.14569/IJACSA.2017.080359</id>
        <doi>10.14569/IJACSA.2017.080359</doi>
        <lastModDate>2017-03-31T13:22:30.5800000+00:00</lastModDate>
        
        <creator>Tianxia Wang</creator>
        
        <creator>Yan Liu</creator>
        
        <subject>Java program comprehension; Topic models; Interactive tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Understanding a large body of source code is a major challenge for software development teams during software maintenance. Topic models are a promising way to automatically discover features and structure from textual software assets, and thus help developers comprehend programs during maintenance. To explore the application of topic modeling to software engineering practice, we propose JSEA (Java Software Engineers Assistant), an interactive program comprehension tool adopting LDA-based topic modeling, to support developers performing software maintenance tasks. JSEA uses essential information automatically extracted from Java source code to establish a project overview and to provide search capability for software engineers. The results of our preliminary experiments suggest the practicality of JSEA.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_59-JSEA_A_Program_Comprehension_Tool_Adopting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Performance of Hash-based Signature Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080358</link>
        <id>10.14569/IJACSA.2017.080358</id>
        <doi>10.14569/IJACSA.2017.080358</doi>
        <lastModDate>2017-03-31T13:22:30.5470000+00:00</lastModDate>
        
        <creator>Ana Karina D. S. de Oliveira</creator>
        
        <creator>Julio López</creator>
        
        <creator>Roberto Cabral</creator>
        
        <subject>post-quantum cryptography; digital signature; Merkle signature; LMS; XMSS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Hash-based signature schemes, whose security is based on properties of the underlying hash functions, are promising candidates for quantum-safe digital signatures. In this work, we present a software implementation of two recent standard proposals for hash-based signature schemes, the Leighton-Micali Signature (LMS) scheme and the eXtended Merkle Signature Scheme (XMSS), using the AVX2 instruction set on Intel processors. The implementation applies several optimization techniques to speed up the underlying hash functions, SHA2 and SHA3, and other building-block functions, leading to high performance for signature operations in both schemes. On an Intel Skylake processor, using a tree of height 60 with 12 layers, the signing operation for XMSS takes 3,841,199 cycles (1,043 signatures per second) at the 128-bit security level (against quantum attacks). For equivalent security, the LMS system computes a signature in 1,307,376 cycles (3,065 signatures per second). We also provide the first comparative performance results for signing and verification of both schemes using different parameters. The results of our implementation indicate that both LMS and XMSS can achieve high performance using vector instructions on modern processors.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_58-High_Performance_of_Hash-based_Signature_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Gesture Classification for Vietnamese Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080357</link>
        <id>10.14569/IJACSA.2017.080357</id>
        <doi>10.14569/IJACSA.2017.080357</doi>
        <lastModDate>2017-03-31T13:22:30.5170000+00:00</lastModDate>
        
        <creator>Duc-Hoang Vo</creator>
        
        <creator>Huu-Hung Huynh</creator>
        
        <creator>Phuoc-Mien Doan</creator>
        
        <creator>Jean Meunier</creator>
        
        <subject>Dynamic gesture; feature extraction; depth information; Vietnamese Sign Language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper presents an approach to feature extraction and classification for recognizing continuous dynamic gestures in Vietnamese Sign Language (VSL). Input data are captured by the depth sensor of a Microsoft Kinect, which is largely unaffected by environmental lighting. In detail, each gesture is represented by a volume corresponding to a sequence of depth images. Feature extraction is performed by dividing this volume into a 3D grid of same-size blocks, each of which is then converted into a scalar value. This step is followed by classification: the well-known Support Vector Machine (SVM) method is employed in this work, and the Hidden Markov Model (HMM) technique is also applied to provide a comparison of recognition accuracy. In addition, a dataset of 3,000 samples corresponding to 30 dynamic gestures in VSL was created by 5 volunteers. Experiments on this dataset validate the approach and show promising results, with average accuracy up to 95%.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_57-Dynamic_Gesture_Classification_for_Vietnamese_Sign.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Localization Free Routing Protocols in Underwater Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080356</link>
        <id>10.14569/IJACSA.2017.080356</id>
        <doi>10.14569/IJACSA.2017.080356</doi>
        <lastModDate>2017-03-31T13:22:30.5000000+00:00</lastModDate>
        
        <creator>Muhammad Khalid</creator>
        
        <creator>Awais Adnan</creator>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Waqar Khalid</creator>
        
        <creator>Naveed Ahmad</creator>
        
        <creator>Ahsan Ashfaq</creator>
        
        <subject>Underwater Networks; Sensor; Wireless Communication; Survey; Localization Based; Routing; Protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Underwater Wireless Sensor Networks (UWSNs) are a newly developed branch of Wireless Sensor Networks (WSNs). UWSNs are used for the exploration of underwater resources, oceanographic data collection, flood or disaster prevention, tactical surveillance systems, and unmanned underwater vehicles. A UWSN uses small sensors with limited energy and memory and a limited communication range. Because of multiple differences from terrestrial sensor networks, radio waves cannot be used underwater. Acoustic channels are used for communication in deep water, but they have many limitations, such as low bandwidth, high end-to-end delay, and path loss. Given these limitations of acoustic waves, it is very important to develop energy-efficient and reliable protocols; energy-efficient communication has become the utmost need of UWSN technology. The main aim nowadays is to operate a sensor on a small battery for a longer time. This paper analyses three routing protocols for UWSNs through simulation: Depth Based Routing (DBR), Energy Efficient Depth Based Routing (EEDBR), and Hop-by-Hop Dynamic Addressing Based (H2-DAB) routing. The comparison is carried out on the basis of total consumed energy, end-to-end delay, path loss, and data delivery ratio.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_56-Comparison_of_Localization_Free_Routing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Area and Energy Efficient Viterbi Accelerator for Embedded Processor Datapaths</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080355</link>
        <id>10.14569/IJACSA.2017.080355</id>
        <doi>10.14569/IJACSA.2017.080355</doi>
        <lastModDate>2017-03-31T13:22:30.4700000+00:00</lastModDate>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Liguo Sun</creator>
        
        <creator>Muhammad Waqar Azhar</creator>
        
        <creator>Muhammad Imran Khan</creator>
        
        <creator>Rao Kashif</creator>
        
        <subject>Viterbi decoder; Codesign; FPGA; MicroBlaze; Embedded Processor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The Viterbi algorithm is widely used in communication systems to efficiently decode convolutional codes. It appears in many applications, including cellular and satellite communication systems; moreover, serializer-deserializers (SERDESs), which have critical latency constraints, also use the Viterbi algorithm in hardware implementations. We present the integration of a mixed hardware/software Viterbi accelerator unit with an embedded processor datapath to enhance processor performance in terms of execution time and energy efficiency, and we then investigate the performance of the Viterbi-accelerated embedded processor datapath on these metrics. Our evaluation shows that the Viterbi-accelerated MicroBlaze soft-core embedded processor datapath is three times more cycle- and energy-efficient than a datapath lacking a Viterbi accelerator unit. This acceleration is achieved at the cost of some area overhead.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_55-Area_and_Energy_Efficient_Viterbi_Accelerator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Level Process Mining Framework for Correlating and Clustering of Biomedical Activities using Event Logs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080354</link>
        <id>10.14569/IJACSA.2017.080354</id>
        <doi>10.14569/IJACSA.2017.080354</doi>
        <lastModDate>2017-03-31T13:22:30.4530000+00:00</lastModDate>
        
        <creator>Muhammad Rashid Naeem</creator>
        
        <creator>Hamad Naeem</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <creator>Waqar Ali</creator>
        
        <creator>Waheed Ahmed Abro</creator>
        
        <subject>biomedical event data; business process modeling; Levenshtein similarity clustering; multilevel process mining; spaghetti process models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Cost, time, and resources are major factors affecting the quality of hospital business processes. Biomedical processes are twisted, unstructured, and time-series based, which makes proper process modeling difficult. Process mining, on the other hand, can provide an accurate view of biomedical processes and their execution. Extracting process models from biomedical code-sequenced data logs is a big challenge for process mining, as such logs do not provide business entities for workflow modeling. This paper explores the application of process mining in the biomedical domain through a real-time case study of hepatitis patients. To generate event logs from big datasets, preprocessing techniques and a LOG Generator tool are designed. To reduce the complexity of the generated process model, a multilevel process mining framework, including a text-similarity clustering algorithm based on Levenshtein distance, is proposed to eliminate spaghetti processes from the event logs. Social network models and four distinct types of sub-workflow models are evaluated using specific process mining algorithms.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_54-A_Multi-Level_Process_Mining_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Framework to Investigate the User Acceptance of Personal Health Records in Malaysia using UTAUT2 and PMT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080353</link>
        <id>10.14569/IJACSA.2017.080353</id>
        <doi>10.14569/IJACSA.2017.080353</doi>
        <lastModDate>2017-03-31T13:22:30.4230000+00:00</lastModDate>
        
        <creator>Ali Mamra</creator>
        
        <creator>Abdul Samad Sibghatullah</creator>
        
        <creator>Gede Pramudya Ananta</creator>
        
        <creator>Malik Bader Alazzam</creator>
        
        <creator>Yasir Hamad Ahmed</creator>
        
        <creator>Mohamed Doheir</creator>
        
        <subject>PHRs; User Acceptance; UTAUT2; PMT; Malaysia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Personal Health Records (PHRs) can be considered one of the most important health technologies. PHRs involve patients directly in their health decision making by giving them the authority to control and share their health information. Testing user acceptance of a new technology is a vital process. Over the previous decades, many models have been used; the latest is UTAUT2. UTAUT2 has been widely used in e-business and gaming user acceptance research, whereas it has rarely been used in the health field. This study proposes a combination of UTAUT2 and PMT in order to investigate the user acceptance of PHRs. Relevant factors may be added at the literature review stage. The final model will be used as a framework to investigate the user acceptance of PHRs in Malaysia.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_53-A_Proposed_Framework_to_Investigate_the_User_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalability and Performance of Selected Websites of Universities: An Analytical Study of Punjab (India)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080352</link>
        <id>10.14569/IJACSA.2017.080352</id>
        <doi>10.14569/IJACSA.2017.080352</doi>
        <lastModDate>2017-03-31T13:22:30.4070000+00:00</lastModDate>
        
        <creator>Bhim Sain Singla</creator>
        
        <creator>Dr. Himanshu Aggarwal</creator>
        
        <subject>Hits per second; scalability; performance; throughput; transaction response time; university websites</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Today, education has emerged as a major area of commercial activity. Access to various university websites through the Internet has opened up new opportunities for their beneficiaries. These websites fully serve the purpose of educational institutions in advancing and achieving their goals. The varied information made available on these websites, and a minimal transaction response time in addressing the queries of end-users, can go a long way toward influencing their decision to select a particular course and institution. The issue assumes greater significance in a developing country like India, where website development and deployment primarily face a shortage of formalized website design techniques and testing procedures. The performance of most university websites is reasonably good, but only when accessed by a few concurrent users. Thus, the aim of this study is to analyze and compare the scalability and performance of selected university websites of Punjab (India) by means of load testing. Realistic user behavior is simulated using LoadRunner, a software tool for performance testing. Of all the university websites under study, it was found that, on the basis of scalability and performance, the websites of Deemed and Central universities are the most and least efficient, respectively. The findings of this study can be of great significance for higher educational institutions in improving the performance quality of their websites, resulting in better rankings and greater stakeholder satisfaction. The paper also outlines the scope for further research in this area.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_52-Scalability_and_Performance_of_Selected_Websites_of_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowdsensing: Socio-Technical Challenges and Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080351</link>
        <id>10.14569/IJACSA.2017.080351</id>
        <doi>10.14569/IJACSA.2017.080351</doi>
        <lastModDate>2017-03-31T13:22:30.3770000+00:00</lastModDate>
        
        <creator>Javeria Noureen</creator>
        
        <creator>Muhammad Asif</creator>
        
        <subject>Crowdsensing; sensing devices; privacy; smart phones</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>With the advancement of mobile technology, the sensing and computational capabilities of mobile devices are increasing, and the sensors in mobile devices are being used in a variety of ways to sense and actuate. Mobile crowdsensing is a paradigm that involves ordinary people in a sensing task. This sensing model has the capability to provide a new vision of people-centric sensing as a service. This work reviews different domains that utilize mobile crowdsensing to solve domain-specific problems. The mobile crowdsensing model also poses various socio-technical challenges that need to be addressed. The work reviews and analyzes these socio-technical challenges and the possible solutions presented by different studies. Among them, the challenge of privacy in crowdsensing requires extra measures to realize the vision of mobile crowdsensing.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_51-Crowdsensing_Socio-Technical_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Water Quality Monitoring based on Small Satellite Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080350</link>
        <id>10.14569/IJACSA.2017.080350</id>
        <doi>10.14569/IJACSA.2017.080350</doi>
        <lastModDate>2017-03-31T13:22:30.3430000+00:00</lastModDate>
        
        <creator>N. Gallah</creator>
        
        <creator>O. b. Bahri</creator>
        
        <creator>N. Lazreg</creator>
        
        <creator>A. Chaouch</creator>
        
        <creator>Kamel Besbes</creator>
        
        <subject>space mission; nanosatellite; SDR; autonomous; on-line water monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In order to improve routine water quality monitoring and reduce the risk of accidental or deliberate contamination, this paper presents the development of an in-situ water quality monitoring and analysis system based on small satellite technology. A space mission design and analysis was performed in this work. The system consists of three segments: a space segment comprising a constellation of nano-satellites, a ground segment comprising the ground station, and a user segment containing the in-situ water quality sensors. The authors studied the orbit characteristics and the number of nano-satellites required to cover the Middle East and North Africa (MENA), the most water-scarce region of the world. Nine nano-satellites distributed over 5 low Earth orbits (600 km) are needed to cover the MENA region almost continuously throughout the day. Data collected by various sensors, such as pH and temperature, are sent through a Software Defined Radio (SDR) module, which is responsible for the satellite communication.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_50-Water_Quality_Monitoring_based_on_Small_Satellite_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Appraising Research Direction &amp; Effectiveness of Existing Clustering Algorithm for Medical Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080348</link>
        <id>10.14569/IJACSA.2017.080348</id>
        <doi>10.14569/IJACSA.2017.080348</doi>
        <lastModDate>2017-03-31T13:22:30.2970000+00:00</lastModDate>
        
        <creator>Sudha V</creator>
        
        <creator>Girijamma H A</creator>
        
        <subject>Medical Data; Clustering Algorithm; k-Means Clustering; Fuzzy; Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The applicability and effectiveness of clustering algorithms have unquestionably benefited various sectors of real-time problem solving. However, with changing times, there has been a significant change in the forms of data. This paper outlines the different taxonomies of clustering algorithms and highlights the most frequently used techniques in order to gauge their research popularity. We also discuss the existing directions of research and find that there are still significant open issues when it comes to clustering medical data. We find that existing techniques address only local, symptomatic problems in clustering, while the problems associated with complex medical data are yet to be explored by researchers. We believe this manuscript contributes a good summary of the effectiveness of existing clustering techniques for medical data.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_48-Appraising_Research_Direction_Effectiveness_of_Existing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Comment on Reinforcement of Testing Criteria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080347</link>
        <id>10.14569/IJACSA.2017.080347</id>
        <doi>10.14569/IJACSA.2017.080347</doi>
        <lastModDate>2017-03-31T13:22:30.2670000+00:00</lastModDate>
        
        <creator>Monika Singh</creator>
        
        <creator>Vinod Kumar Jain</creator>
        
        <subject>Formal Methods; Safety Critical System; Z Notation; Schema</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper presents the formal aspects of testing criteria for safety-critical systems. A brief review of testing strategies, i.e., white box and black box, is given along with their various criteria. Z notation, a formal specification language, is used to serve the purpose of formalization. Initially, schemas are formed for Statement Coverage (SC), Decision Coverage (DC), Path Coverage (PC), Equivalence Partition Class (EPC), Boundary Value Analysis (BV), and Cause &amp; Effect (C&amp;F). The completeness and correctness of the test schemas are ensured by verifying them with Z/EVES, a theorem prover for Z specifications.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_47-A_New_Comment_on_Reinforcement_of_Testing_Criteria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Image Annotation based on Dense Weighted Regional Graph</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080346</link>
        <id>10.14569/IJACSA.2017.080346</id>
        <doi>10.14569/IJACSA.2017.080346</doi>
        <lastModDate>2017-03-31T13:22:30.2370000+00:00</lastModDate>
        
        <creator>Masoumeh Boorjandi</creator>
        
        <creator>Zahra Rahmani Ghobadi</creator>
        
        <creator>Hassan Rashidi</creator>
        
        <subject>automatic annotation; dense weighted regional graph; segmentation; feature vector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Automatic image annotation refers to automatically creating text labels in accordance with an image's content. Although numerous studies have been conducted in this area over the past decade, the existence of multiple labels and the semantic gap between these labels and low-level visual features reduce annotation accuracy. In this paper, we propose an annotation method based on a dense weighted regional graph. In this method, regions are clustered with high precision by forming a dense regional graph of region classifications based on a strong fuzzy feature vector; by weighting the edges of the graph, less important regions are removed over time, which greatly reduces the semantic gap between low-level image features and the human interpretation of high-level concepts. To evaluate the proposed method, the COREL database, with 5,000 samples, was used. The results on the images in this database show acceptable performance of the proposed method in comparison to other methods.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_46-Automatic_Image_Annotation_based_on_Dense.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Route Redistribution among Diverse Dynamic Routing Protocols based on OPNET Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080345</link>
        <id>10.14569/IJACSA.2017.080345</id>
        <doi>10.14569/IJACSA.2017.080345</doi>
        <lastModDate>2017-03-31T13:22:30.2200000+00:00</lastModDate>
        
        <creator>Zeyad Mohammad</creator>
        
        <creator>Ahmad Abusukhon</creator>
        
        <creator>Adnan A. Hnaif</creator>
        
        <creator>Issa S. Al-Otoum</creator>
        
        <subject>route redistribution; dynamic routing protocols; EIGRP; IS-IS; OSPF; RIPv2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Routing protocols are the fundamental building block for selecting the optimal path from a source node to a destination node in an internetwork. As large business networks have emerged, they often operate diverse routing protocols in their infrastructure, and in order to keep such a network connected, route redistribution must be implemented in the network routers. This paper creates four scenarios on the same network topology using the Optimized Network Engineering Tools Modeler (OPNET 14.5) simulator in order to analyze the performance of route redistribution among three routing protocols, configuring in each scenario three protocols from the set of Routing Information Protocol (RIPv2), Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), and Intermediate System to Intermediate System (IS-IS) dynamic routing protocols. The first scenario is EIGRP_OSPF_ISIS, the second is EIGRP_OSPF_RIPv2, the third is RIPv2_EIGRP_ISIS, and the fourth is RIPv2_OSPF_ISIS. The simulation results show that the RIPv2_EIGRP_ISIS scenario outperforms the other scenarios in terms of network convergence time, hop count, jitter, packet delay variation, and packet end-to-end delay; it therefore fits real-time applications such as voice and video conferencing. In contrast, the EIGRP_OSPF_ISIS scenario achieves better results in terms of response time for web browsing, database query, and Email services.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_45-Performance_Analysis_of_Route_Redistribution_among_Diverse_Dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Classification in Arousal Valence Model using MAHNOB-HCI Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080344</link>
        <id>10.14569/IJACSA.2017.080344</id>
        <doi>10.14569/IJACSA.2017.080344</doi>
        <lastModDate>2017-03-31T13:22:30.1870000+00:00</lastModDate>
        
        <creator>Mimoun Ben Henia Wiem</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Emotion Classification; MAHNOB-HCI; Peripheral Physiological Signals; Arousal-Valence Space; Support Vector Machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Emotion recognition from physiological signals has attracted the attention of researchers from different disciplines, such as affective computing, cognitive science and psychology. This paper aims to classify emotional states using peripheral physiological signals based on arousal-valence evaluation. These signals are the Electrocardiogram, Respiration Volume, Skin Temperature and Galvanic Skin Response. We explored the signals collected in the MAHNOB-HCI multimodal tagging database. We defined emotion in three different ways: two and three classes using 1-9 discrete self-rating scales, and another model using 9 emotional keywords to establish the three defined areas in the arousal-valence dimensions. To obtain the accuracies, we began by removing artefacts and noise from the signals, then extracted 169 features, and finished by classifying the emotional states using the support vector machine. The obtained results showed that the electrocardiogram and respiration volume were the most relevant signals for the human emotion recognition task. Moreover, the obtained accuracies were promising compared with recent related works for each of the three formulations of emotion modeling.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_44-Emotion_Classification_in_Arousal_Valence_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computation of QoS While Composing Web Services </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080343</link>
        <id>10.14569/IJACSA.2017.080343</id>
        <doi>10.14569/IJACSA.2017.080343</doi>
        <lastModDate>2017-03-31T13:22:30.1570000+00:00</lastModDate>
        
        <creator>Khozema Ali Shabbar</creator>
        
        <creator>Dr. Tarun Shrimali</creator>
        
        <creator>Dr. Mohemmed Sha</creator>
        
        <subject>Web Services; Web Services Selection; Web Services Composition; Composite QoS (CQoS); Quality of Service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Composition of web services has emerged as a fast-growing field of research, since an atomic service in its entirety is not capable of performing a specific task. Composition of web services is a process where a set of web services, heterogeneous in nature, are clubbed together in order to perform a specific task. Individually, component web services may perform well as far as Quality of Service (QoS) is concerned, but the core issue is whether, once composed, they satisfy users' requirements in terms of QoS. Computation of QoS while composing web services appears to be a big challenge, and a lot of research has already been undertaken to produce new, innovative and credible solutions.
This paper presents a thorough review of the different frameworks, architectures, methodologies and algorithms suggested by researchers in their efforts to compute the overall QoS while composing web services. Moreover, the effectiveness of the different methods in terms of QoS while composing is also presented.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_43-Computation_of_QoS_While_Composing_Web_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Architecture of a Location and Time-based Mobile-Learning System: A Case-Study for Interactive Islamic Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080342</link>
        <id>10.14569/IJACSA.2017.080342</id>
        <doi>10.14569/IJACSA.2017.080342</doi>
        <lastModDate>2017-03-31T13:22:30.1270000+00:00</lastModDate>
        
        <creator>Omar Tayan</creator>
        
        <creator>Moulay Ibrahim El-Khalil Ghembaza</creator>
        
        <creator>Khalid Al-Oufi</creator>
        
        <subject>Mobile-Learning; Context-Aware Notifications; Time and Location Based Reporting; Interactive-Knowledge based User-Events and Activities; Design and Architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper describes the software design, architecture and process of a novel mobile-learning (m-Learning) approach based on smart-phone devices that retrieves relevant content in real time based on the user’s location and the current time, and presents it in a manner that supports portable learning on the move. Mobile learning using Islamic content is used as a case study of the proposed system, which can easily be adapted for other learning content. The proposed system is highly interactive and frequently queries the host device for the current user location and the current time (e.g. the time, day and month in the solar and lunar calendars) before such details are used to retrieve the most relevant content. In this study, Quranic verses, corresponding interpretations (Tafseer) and Hadith (Prophetic words or actions) relate to the online content being fetched in this application. For example, a user may be traveling or performing pilgrimage during Ramadan, in which case the relevant content/teachings (based on the user’s location and the current time) are presented in a timely manner so that the user can learn the rituals of that day or location. The information fetched is then displayed in an interactive, user-friendly manner. A summary and comparative analysis of related applications is presented, showing the limitations of other m-Learning applications and demonstrating the new contribution of our architectural design. Finally, the described system allows authorized scholars to upload and report Islamic decrees made in real time based on findings or experience at a particular time and/or location of interest (e.g. new rulings/decrees are published online in real time). It is anticipated that millions of end-users will benefit from the proposed system through fast, highly accessible, user-friendly and relevant information retrieved online in real time. The potential application and large impact of the proposed m-Learning approach with other learning content and courses is also notable.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_42-Design_and_Architecture_of_a_Location_and_Time-based_Mobile-Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Impact of the Blackboard System on Blended Learning Students</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080341</link>
        <id>10.14569/IJACSA.2017.080341</id>
        <doi>10.14569/IJACSA.2017.080341</doi>
        <lastModDate>2017-03-31T13:22:30.1270000+00:00</lastModDate>
        
        <creator>Thamer Alhussain</creator>
        
        <subject>learning management system; IS impact/measurement; Blackboard; blended learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>With the advantages of using learning management systems (LMS) such as Blackboard in the educational process, assessing the impact of such systems has become increasingly important. This study measures the impact of the Blackboard system on students at Saudi Electronic University (SEU) in order to help improve the quality of the existing learning environment. For this assessment and measurement, the IS-Impact Measurement Model is used, since it is the most comprehensive model that is valid in the context of this study. The results of this paper indicate how Blackboard influences individual performance, and the study concludes that use of the Blackboard system has a positive impact on individuals.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_41-Measuring_the_Impact_of_the_Blackboard_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of Collaborative Access Control Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080340</link>
        <id>10.14569/IJACSA.2017.080340</id>
        <doi>10.14569/IJACSA.2017.080340</doi>
        <lastModDate>2017-03-31T13:22:30.0930000+00:00</lastModDate>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <creator>Abdul Mateen</creator>
        
        <creator>Muhammad Anwar Abbasi</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Malik Ahsan Ali</creator>
        
        <creator>Wajeeha Naeem</creator>
        
        <creator>Yousra Asim</creator>
        
        <creator>Majid Iqbal Khan</creator>
        
        <subject>RBAC; Collaboration; Privacy; Access control; Security; Information sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Collaborative environments need access control over data and resources to make cooperation both efficient and effective. Several approaches have been proposed and multiple access control models recommended in this domain. In this paper, four Role-Based Access Control (RBAC) based collaborative models are selected for analysis and comparison: the standard RBAC model, the Team-based Access Control (TMAC) model, the Privacy-aware Role-Based Access Control (P-RBAC) model and the Dynamic Sharing and Privacy-aware RBAC (DySP-RBAC) model. A prototype is developed for each of these models, and their pros and cons are discussed. Performance and sharing parameters are used to compare the collaborative models. The standard RBAC model achieves the quickest query response time of the RBAC models considered, while the DySP-RBAC model outperforms the other models by providing enhanced sharing capabilities.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_40-A_Comparison_of_Collaborative_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time H.264/AVC Entropy Encoder Hardware Architecture in Baseline Profile</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080339</link>
        <id>10.14569/IJACSA.2017.080339</id>
        <doi>10.14569/IJACSA.2017.080339</doi>
        <lastModDate>2017-03-31T13:22:30.0630000+00:00</lastModDate>
        
        <creator>Ben Hamida Asma</creator>
        
        <creator>Dhahri Salah</creator>
        
        <creator>Zitouni Abdelkrim</creator>
        
        <subject>H.264/AVC; CAVLC; Exp-Golomb</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In this paper, we present a new hardware architecture of an entropy encoder for an H.264/AVC video encoder. The proposed design employs a parallel module at the pre-encoding stage to reduce the critical path. Additionally, the arithmetic table elimination method is used to eliminate the memory cost, and the reduction in the size of the VLC tables offers area savings. The architecture is synthesized on a Virtex-IV FPGA. Simulation results show that the design can operate at up to 234 MHz, which allows processing a 4CIF video format in real time.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_39-Real-Time_H.264AVC_Entropy_Encoder_Hardware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extensive Survey over Traffic Management/Load Balance in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080338</link>
        <id>10.14569/IJACSA.2017.080338</id>
        <doi>10.14569/IJACSA.2017.080338</doi>
        <lastModDate>2017-03-31T13:22:30.0330000+00:00</lastModDate>
        
        <creator>Amith Shekhar C</creator>
        
        <creator>Dr. Sharvani. G S</creator>
        
        <subject>Cloud Computing; Load Balance; Traffic Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Cloud Computing (CC) is all about carrying out processing on others&#39; systems, and various vendors provide CC services. The basic requirement for accessing CC services is a steady internet connection. As everything is done online, traffic across the internet must be managed efficiently so that transmission delay can be minimized and better quality of service can be delivered to customers; the network should not be too congested at any moment in time. Hence, traffic management becomes a crucial factor in the performance of the CC network. This paper addresses the most important terms and topics concerning load balancing/traffic management in cloud computing and discusses an analysis of recent research on load balancing in CC. From this analysis, the current research gap is identified, together with future research directions to overcome it.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_38-An_Extensive_Survey_over_Traffic_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploreing K-Means with Internal Validity Indexes for Data Clustering in Traffic Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080337</link>
        <id>10.14569/IJACSA.2017.080337</id>
        <doi>10.14569/IJACSA.2017.080337</doi>
        <lastModDate>2017-03-31T13:22:30.0000000+00:00</lastModDate>
        
        <creator>Sadia Nawrin</creator>
        
        <creator>Md Rahatur Rahman</creator>
        
        <creator>Shamim Akhter</creator>
        
        <subject>Traffic Management System (TMS); Data Clustering; K-means; Hierarchical Clustering; Cluster Validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Traffic Management System (TMS) is used to improve traffic flow by integrating information from different data repositories and online sensors, detecting incidents, and taking actions on traffic routing. In general, two decision-making systems, weight updating and forecasting, are integrated inside the TMS, and these models need numerous data sets to make appropriate decisions. To determine the dynamic road weights in the TMS, four (4) environmental attributes that are directly or indirectly related to increased traffic jams are considered: rainfall, temperature, wind, and humidity; peak hour is taken as an additional attribute. Usually, the data sets are classified by intuition, but optimal classification of the data sets is vital to improving the decision accuracy of the TMS. The collected data sets have no class labels, so cluster-based unsupervised classification (partitioning, hierarchical, grid-based, density-based) can be used to find the optimal number of classes in each attribute, which is expected to improve the performance of the TMS. The two most popular and frequently used classifiers are hierarchical clustering and partition clustering. K-means is simple, easy to implement, and easy to interpret; it is also fast, because its time complexity is linear in the number of data points. Thus, in this paper we demonstrate the performance of partition k-means and hierarchical k-means, implemented with the Davies-Bouldin Index (DBI), Dunn Index (DI), and Silhouette Coefficient (SC) methods, to determine the optimal number of classes (features) inside each attribute of the TMS data sets. Subsequently, the optimal classes are validated using WSS (within sum of squares) errors and correlation methods. The validation results conclude that k-means with DI performs better on all attributes of the TMS data sets and provides more accurate optimal class numbers. Thereafter, the dynamic road weights for the TMS are generated and classified using the combined k-means and DI method.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_37-Exploreing_K-Means_with_Internal_Validity_Indexes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AnyCasting In Dual Sink Approach (ACIDS) for WBASNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080336</link>
        <id>10.14569/IJACSA.2017.080336</id>
        <doi>10.14569/IJACSA.2017.080336</doi>
        <lastModDate>2017-03-31T13:22:29.9870000+00:00</lastModDate>
        
        <creator>Muhammad Rahim Baig</creator>
        
        <creator>Najeeb Ullah</creator>
        
        <creator>Fazle Hadi</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Abdul Hanan</creator>
        
        <creator>Imran Ahmed</creator>
        
        <subject>Stability Period; WBASNs; Throughput; End-to-End Delay;  Energy Consumption; AnyCasting; RSSI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>After its successful development in health-care services, WBASN is also being used in other fields where continuous, remote health monitoring is required. Various protocols presented in the literature aim to enhance the performance of WBASNs by focusing on delay, energy efficiency and routing. In this research we focus on increasing the stability period and throughput while decreasing end-to-end delay. Two sink nodes are utilized and the concept of AnyCasting is introduced. We present a scheme, AnyCasting In Dual Sink (ACIDS), for WBASNs and compare it with the existing protocols LAEEBA and DARE. In terms of throughput, ACIDS is found to be 51% and 13% more efficient than LAEEBA and DARE, respectively. Results show that the stability period of ACIDS is much greater than that of LAEEBA and DARE, with minimum delay. The energy parameter in ACIDS is in a tradeoff with the improved parameters, because the computation of RSSI requires more processing and consumes more energy.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_36-AnyCasting_In_Dual_Sink_Approach_ACIDS_for_WBASNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low Complexity based Edge Color Matching Algorithm for Regular Bipartite Multigraph </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080335</link>
        <id>10.14569/IJACSA.2017.080335</id>
        <doi>10.14569/IJACSA.2017.080335</doi>
        <lastModDate>2017-03-31T13:22:29.9530000+00:00</lastModDate>
        
        <creator>Rezaul Karim</creator>
        
        <creator>Muhammad Mahbub Hasan Rony</creator>
        
        <creator>Md. Rashedul Islam</creator>
        
        <creator>Md. Khaliluzzaman</creator>
        
        <subject>matching; edge-coloring; complexity; bipartite multigraph; DFS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>An edge coloring of a graph G is the process of assigning colors to edges so that adjacent edges receive different colors. In this paper, an algorithm is proposed to find a perfect color matching of a regular bipartite multigraph with low time complexity. The proposed algorithm is divided into two procedures. In the first procedure, the possible circuits and bad edges are extracted from the regular bipartite graph. In the second procedure, the bad edges are rearranged to obtain the perfect color matching. The depth-first search (DFS) algorithm is used to traverse the bipartite vertices and find closed paths, open paths, incomplete components, and bad edges. With the proposed algorithm, a proper edge coloring of a D-regular bipartite multigraph can be obtained in O(D.V) time.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_35-A Low_Complexity_based_Edge.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Protection against Insider Threats in DBMS through Policies Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080334</link>
        <id>10.14569/IJACSA.2017.080334</id>
        <doi>10.14569/IJACSA.2017.080334</doi>
        <lastModDate>2017-03-31T13:22:29.9230000+00:00</lastModDate>
        
        <creator>Farukh Zaman</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <creator>Adeel Anjum</creator>
        
        <subject>autonomic; self-protection; insider threats; policies; DBMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In today’s world, the information security of an organization has become a major challenge as well as a critical business issue. To manage and mitigate these internal or external security issues, organizations hire highly knowledgeable security experts. Insider threats in a database management system (DBMS) are inherently a very hard problem to address, because employees within the organization can access or harm organizational data under the cover of their professional duties. To protect and monitor organizational information against insider users in a DBMS, organizations use different techniques, but these techniques are insufficient to secure their data. We offer an autonomous self-protection architecture based on policy implementation in the DBMS. This research proposes an autonomic protection model that enforces access control policies, database auditing policies, encryption policies, user authentication policies, and database configuration setting policies in the DBMS. The purpose of these policies is to restrict insider users or the Database Administrator (DBA) from malicious activities in order to protect data.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_34-Self-Protection_against_Insider_Threats_in_DBMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Predictive Algorithms using Receiver-Operative Characteristics for Coronary Illness among Diabetic Patients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080333</link>
        <id>10.14569/IJACSA.2017.080333</id>
        <doi>10.14569/IJACSA.2017.080333</doi>
        <lastModDate>2017-03-31T13:22:29.9070000+00:00</lastModDate>
        
        <creator>Tahira Mahboob</creator>
        
        <creator>Saman Sahaheen</creator>
        
        <creator>Nuzhat Tahir</creator>
        
        <creator>Mukhtiar Bano</creator>
        
        <subject>Artificial neural networks (ANN); Decision tree; Na&#239;ve Bayes; Logistic Regression and Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The classification of information is a typical task in machine learning, and data mining plays a crucial role in extracting knowledge from large operational databases. In healthcare, data mining is a developing field of high significance, providing predictions and a deeper understanding of medical data sets. Most data mining techniques depend on a set of factors that characterize the behavior of the learning algorithm and directly or indirectly influence the complexity of the models. Coronary illness has been a leading cause of death over the past years, and numerous researchers have applied data mining methods to its diagnosis. Diabetes is one of the chronic diseases that arise when the pancreas does not produce enough insulin. Most existing systems have successfully applied machine learning techniques such as the Na&#239;ve Bayes algorithm, decision trees, logistic regression and support vector machines, relying on classification of the data to detect heart abnormalities; the support vector machine in particular is an advanced method that has been used effectively in machine learning. For coronary illness diagnosis, the presented framework predicts the odds of a diabetic patient developing coronary illness from attributes such as age, sex, cholesterol, blood pressure and glucose, using machine learning algorithms.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_33-Evaluating_Predictive_Algorithms_using_Receiver_Operative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Downlink and Uplink Message Size Impact on Round Trip Time Metric in Multi-Hop Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080332</link>
        <id>10.14569/IJACSA.2017.080332</id>
        <doi>10.14569/IJACSA.2017.080332</doi>
        <lastModDate>2017-03-31T13:22:29.8770000+00:00</lastModDate>
        
        <creator>Youssra Chatei</creator>
        
        <creator>Maria Hammouti</creator>
        
        <creator>El Miloud Ar-reyouchi</creator>
        
        <creator>Kamal Ghoumid</creator>
        
        <subject>RTT; Remote wireless communications; Wireless radio router; Wireless multi-hop networks; Average message size; FEC; SCADA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In this paper, the authors propose a novel real-time study of the Round Trip Time (RTT) metric for multi-hop wireless mesh networks. They focus on real operational wireless networks with fixed nodes, such as industrial wireless networks. The main aim of the metric is to show the effect of the Downlink and Uplink message size (DMS and UMS), i.e., the user data size without any headers, on the RTT, with and without Forward Error Correction (FEC), along the path between a source (the Supervisory Control and Data Acquisition (SCADA) center) and a destination (a Remote Terminal Unit (RTU)). The metric assigns weights to links based on the RTT of a packet of a given size over the link path. The authors studied the performance of the metric by implementing it in three wireless scenarios consisting of 3, 4 and 5 nodes, where each node represents a wireless radio IP router. They find that in a multi-hop environment, the real-time metric clearly shows the impact of DMS, UMS and FEC on RTT by making judicious use of the number of hops.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_32-Downlink_and_Uplink_Message_Size_Impact_on_Round.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contribution to the Development of A Dynamic Circulation Map using the Multi-Agent Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080331</link>
        <id>10.14569/IJACSA.2017.080331</id>
        <doi>10.14569/IJACSA.2017.080331</doi>
        <lastModDate>2017-03-31T13:22:29.8470000+00:00</lastModDate>
        
        <creator>Asmaa ROUDANE</creator>
        
        <creator>Mohamed YOUSSFI</creator>
        
        <creator>Khalifa MANSOURI</creator>
        
        <subject>Road traffic; road traffic management systems; intelligent systems; complex systems; multi-agent systems; graphs; real-time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Road traffic is considered one of the most difficult domains to manage and one of the fastest-growing networks. This environment is geographically distributed, and its actors are in continuous interaction so as to achieve a real-time exchange of the variable road traffic data. Modeling this complex system requires powerful tools to produce precise representations of reality, hence the combination of mathematical modeling tools and intelligent systems. This paper presents a model of road traffic that uses graphs to represent the road infrastructure and multi-agent technology to manage the data representing the information circulating in the road network. This new approach aims to optimize traffic paths, minimize journey times and develop a dynamic traffic map that enables users to reach their destination by following the best possible path.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_31-Contribution_to_the_Development_of_a_Dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Dynamic Maintenance of Data Replicas based on Access Patterns in A Multi-Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080330</link>
        <id>10.14569/IJACSA.2017.080330</id>
        <doi>10.14569/IJACSA.2017.080330</doi>
        <lastModDate>2017-03-31T13:22:29.8130000+00:00</lastModDate>
        
        <creator>Mohammad Shorfuzzaman</creator>
        
        <subject>Multi-cloud environment; data replication; distributed algorithm; response time; dynamic maintenance; QoS constraint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Cloud computing provides services and infrastructures that enable end-users to access, modify, and share massive, geographically distributed data. There is increasing interest in developing data-intensive (big data) applications in this computing environment that need to access huge datasets. Efficient access to such data is hindered by factors such as dynamic changes in resource availability and the diverse service quality provided by different cloud providers. Data replication has already proven to be an effective technique for overcoming these challenges: it offers reduced response time in data access, higher data availability, and improved system load balancing. Once replicas are created in a multi-cloud environment, it is of utmost importance to maintain them dynamically, ensuring that replicas are located in optimal data center locations to minimize replication cost and to meet specific user and system requirements. First, this paper proposes a novel approach to the distributed placement of static replicas in appropriate data center locations. Second, motivated by the fact that a multi-cloud environment is highly dynamic, it presents a dynamic replica maintenance technique that re-allocates replicas to new data center locations upon significant performance degradation. The evaluation results demonstrate the effectiveness of the presented dynamic maintenance technique combined with static placement decisions in a multi-cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_30-On_the_Dynamic_Maintenance_of_Data_Replicas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electronic Health as a Component of G2C Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080329</link>
        <id>10.14569/IJACSA.2017.080329</id>
        <doi>10.14569/IJACSA.2017.080329</doi>
        <lastModDate>2017-03-31T13:22:29.7970000+00:00</lastModDate>
        
        <creator>Rasim Alguliyev</creator>
        
        <creator>Farhad Yusifov</creator>
        
        <subject>electronic health; electronic government; Government to Citizen (G2C); electronic services; medical information; Electronic Health Records (EHR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper explores electronic health as a segment of electronic government. International practice in the electronic health field and the electronic health strategies adopted in Europe are analysed. Current practices in the delivery of electronic health services within G2C are investigated and perspectives are specified.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_29-Electronic_Health_as_a_Component_of_G2C_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Congestion Control using Cross layer and Stochastic Approach in Distributed Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080328</link>
        <id>10.14569/IJACSA.2017.080328</id>
        <doi>10.14569/IJACSA.2017.080328</doi>
        <lastModDate>2017-03-31T13:22:29.7670000+00:00</lastModDate>
        
        <creator>Selvarani R</creator>
        
        <creator>Vinodha K</creator>
        
        <subject>Distributed Network System; cross layer; congestion; Traffic Flow; Rate Control Metric</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In the recent past, the Internet architecture has faced many challenges in supporting massive network traffic. Among the various factors that affect the quality of communication in this massive architecture, maintaining a congestion-free flow of traffic is one of the major concerns. In this paper, we propose a novel technique to address this issue using a cross-layer paradigm based on a stochastic approach with an extended Markovian model. The cross-layer approach bridges the physical, link, network, and transport layers to control congestion. The resource provisioning operation is carried out over the link layer, and the mechanism for detecting congestion using the stochastic approach is implemented over the network layer. Markov modeling is adopted at the transport layer to identify the best routes amidst highly congested paths. An analytical research methodology is adopted to show that it is feasible to develop a technique that can identify the origination point of congestion and share it with the entire network. This approach to congestion control is found to be effective with respect to end-to-end delay, packet delivery ratio, and processing time.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_28-Congestion_Control_using_Cross_layer_and_Stochastic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Investigation of the Correlation between Package-Level Cohesion and Maintenance Effort</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080327</link>
        <id>10.14569/IJACSA.2017.080327</id>
        <doi>10.14569/IJACSA.2017.080327</doi>
        <lastModDate>2017-03-31T13:22:29.7200000+00:00</lastModDate>
        
        <creator>Waleed Albattah</creator>
        
        <subject>package; cohesion; metric; maintenance effort; maintainability; software; measurements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The quality of the software design has a considerable impact on software maintainability, and improving software quality can reduce the costs and effort of software maintenance. Cohesion, as one software quality characteristic, can be used as an early indicator for predicting software maintenance effort. This paper improves Martin’s cohesion metric, one of the well-known and well-accepted cohesion metrics. The strong correlation found between package cohesion, measured using our proposed metric, and maintenance effort demonstrates the improvement made in measuring cohesion and its usefulness for predicting maintenance effort. The experimental study included data from four open source Java software systems. The results show that the better the package cohesion, the less maintenance effort is needed.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_27-An_Empirical_Investigation_of_the_Correlation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Security for Data Sharing in Multi Cloud Storage (SDSMC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080326</link>
        <id>10.14569/IJACSA.2017.080326</id>
        <doi>10.14569/IJACSA.2017.080326</doi>
        <lastModDate>2017-03-31T13:22:29.6730000+00:00</lastModDate>
        
        <creator>Dr. K. Subramanian</creator>
        
        <creator>F.Leo John</creator>
        
        <subject>Malicious Insiders; privacy; Index based Data slicing; Malicious Files; Multi-Cloud Storage; Data Sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Multi-cloud storage has become one of the essential services of cloud computing. Multi-cloud storage models allow users to store sliced, encrypted data across various cloud drives, thereby supporting multiple cloud storage services through a single interface rather than relying on a single cloud storage service. Cloud security primarily focuses on issues relating to information privacy and the security aspects of cloud computing. This data storage and moderation prototype focuses on malicious insiders&#39; access to stored data, protection from malicious files, removal of centralized distribution of data storage, and removal of outdated or frequently downloaded files. The data owner does not need to worry that data stored in the multi-cloud server may be extracted or corrupted in the future. The other concern is access control of data: the proposed method ensures that a file or data cannot be accessed without the knowledge or permission of the owner. Thus, this research offers an architecture that mitigates malicious insider and file threats, together with an algorithm that improves the security of data sharing in multi-cloud storage services. This technique offers a secure environment in which the data owner can store and retrieve data from a multi-cloud environment without file-merging conflicts, and prevents insider attacks from obtaining meaningful information. The experimental results indicate that the suggested model supports data owners&#39; decision making for the better adoption of multi-cloud storage services for sharing their information securely.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_26-Enhanced_Security_for_Data_Sharing_in_Multi_Cloud_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Control Strategy of a Standalone PV Pumping System by Fuzzy Logic Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080325</link>
        <id>10.14569/IJACSA.2017.080325</id>
        <doi>10.14569/IJACSA.2017.080325</doi>
        <lastModDate>2017-03-31T13:22:29.6570000+00:00</lastModDate>
        
        <creator>Houssem CHAOUALI</creator>
        
        <creator>Hichem OTHMANI</creator>
        
        <creator>Dhafer MEZGHANI</creator>
        
        <creator>Abdelkader MAMI</creator>
        
        <subject>Photovoltaic Pumping System; Asynchronous Moto-Pump; PI Speed Controller; Fuzzy Logic Control Technique; MPPT Tracking System; Simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This work develops an accurate model of an existing Photovoltaic Pumping System (PvPS) composed of an Ebara Pra-0.50T Asynchronous Moto-Pump (AMP) fed by Kaneka GSA-60 photovoltaic panels via a Moeller DV-51 speed drive. The developed model is then used to compare the performance of the system under its original control strategy, based on classical indirect vector control with a PI speed controller, against the proposed control strategy, based on the Fuzzy Logic control technique for both speed control and the MPPT system. The comparative simulation results, obtained under dramatic variations in working conditions, show that the developed control strategy brings major enhancements in system performance.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_25-Improving_the_Control_Strategy_of_a_Standalone_PV_Pumping_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Frequency Reconfigurable Multiband Meander Antenna Using Varactor Diode for Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080324</link>
        <id>10.14569/IJACSA.2017.080324</id>
        <doi>10.14569/IJACSA.2017.080324</doi>
        <lastModDate>2017-03-31T13:22:29.6270000+00:00</lastModDate>
        
        <creator>I. ROUISSI</creator>
        
        <creator>J.M.FLOC’H</creator>
        
        <creator>H.TRABELSI</creator>
        
        <subject>Frequency reconfigurable meander antenna; chip capacitor; Varactor diode; Size reduction; tuning range; wireless communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>A compact multiband frequency reconfigurable meander antenna for wireless communication systems is designed and described in this paper. A folded structure was chosen due to its good tradeoff between size, bandwidth, and efficiency. The reference antenna is based on a meander patch structure radiating at F1=1 GHz, F2=1.94 GHz, and F3=2.6 GHz, and is optimized for integration into a Printed Circuit Board (PCB). To sweep the resonance frequencies, in the first case a chip capacitor was inserted between the meander patch printed on the top layer and a floating ground plane on the back layer. In the second case, a varactor diode was inserted to tune the resonance frequencies electronically over wide bands. The measured results agree with the simulations, and good radiation properties were obtained. The realized prototype and related results are presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_24-Design_of_Frequency_Reconfigurable_Multiband.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Secure Authentication based e-Payment Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080323</link>
        <id>10.14569/IJACSA.2017.080323</id>
        <doi>10.14569/IJACSA.2017.080323</doi>
        <lastModDate>2017-03-31T13:22:29.5970000+00:00</lastModDate>
        
        <creator>Mr.B. Ratnakanth</creator>
        
        <creator>Prof.P.S.Avadhani</creator>
        
        <subject>Security protocol; smart card; encryption technique; payment protocol; E-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The e-commerce platform is growing rapidly and carries a higher level of risk than standard applications, so it requires a greater level of security. Additionally, because transactions and client data are enormously sensitive, security protection and privacy are exceptionally crucial. Authentication is therefore vital among the security requirements, preventing data from being stolen by unauthorized persons during e-payment transactions. At the same time, privacy strategies are essential to address client data security. For these reasons, data protection and security should be viewed as a central part of e-business framework design, with particularly large consideration given to protecting cash transactions. In the past decades, various methods were developed to permit secure cash transactions using e-payment frameworks. This study reviews and discusses e-payment schemes that use various encryption algorithms and methods to accomplish data integrity, privacy, non-repudiation, and authentication.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_23-A_Review_of_Secure_Authentication_based_e-Payment_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>oDyRM: Optimized Dynamic Reusability Model for Enhanced Software Consistency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080322</link>
        <id>10.14569/IJACSA.2017.080322</id>
        <doi>10.14569/IJACSA.2017.080322</doi>
        <lastModDate>2017-03-31T13:22:29.5800000+00:00</lastModDate>
        
        <creator>R. Selvarani</creator>
        
        <creator>P. Mangayarkarasi</creator>
        
        <subject>Cost; Back propagation Algorithm; Design Reusability; Object Oriented Design; Optimization; Project management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Applying optimization techniques to Object Oriented design components is a very challenging task. The prior model, DyRM, introduced a technique for modeling design reusability under three real-time constraints. The proposed study extends the DyRM model by incorporating neural-network optimization using multi-layer perceptron techniques. The system takes the same input as the prior DyRM, which is subjected to the Levenberg-Marquardt optimization algorithm using a multi-layer perceptron of configuration 4-24-2 to generate the optimal value of the consistency factor. The paper discusses the underlying technique in detail and presents an outcome showing a good curve fit between experimental and predicted data. The model is therefore termed optimized DyRM (oDyRM), and it is used to evaluate the consistency factor associated with the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_22-oDyRM_Optimized_Dynamic_Reusability_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Edges Using Two-Way Nested Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080321</link>
        <id>10.14569/IJACSA.2017.080321</id>
        <doi>10.14569/IJACSA.2017.080321</doi>
        <lastModDate>2017-03-31T13:22:29.5500000+00:00</lastModDate>
        
        <creator>Asim ur Rehman Khan</creator>
        
        <creator>Syed Muhammad Atif Saleem</creator>
        
        <creator>Haider Mehdi</creator>
        
        <subject>Analysis of variance (ANOVA); Edge detection; F-test; nested design; T-test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper implements a novel approach to identifying edges in images using a two-way nested design. The test comprises two steps. The first step is based on an F-test: the sums of squares (SS) of the various effects are used to extract the mean square (MS) of each effect, with the unknown effect treated as noise. The mean square value has a chi-square distribution, and the ratio of two chi-square distributions has an F-distribution; the final decision is based on testing a hypothesis for the presence or absence of an effect. The second step is based on a contrast function (CF), which identifies the presence or absence of an edge in four directions: horizontal, vertical, and the two diagonals. This test is based on Tukey’s T-test. The performance of the nested design is compared with edge detection using the Sobel filter. Rigorous testing reveals that the nested design yields comparable results for images that are either noise-free or corrupted with light noise; however, the nested design outperforms the Sobel filter when images are corrupted with heavy noise.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_21-Detection_of_Edges_using_Two-Way_Nested_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Block Wise Data Hiding with Auxilliary Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080320</link>
        <id>10.14569/IJACSA.2017.080320</id>
        <doi>10.14569/IJACSA.2017.080320</doi>
        <lastModDate>2017-03-31T13:22:29.5170000+00:00</lastModDate>
        
        <creator>Jyoti Bharti</creator>
        
        <creator>R.K. Pateriya</creator>
        
        <creator>Sanyam Shukla</creator>
        
        <subject>Steganography; RGB planes; Scanning; Stego-image; ASCII value</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This paper introduces a novel method based on an auxiliary matrix to hide text data in the RGB planes of an image via scanning, encryption, and decryption. To enhance security, the scanning technique combines two different traversals: spiral and snake traversal. The encryption algorithm uses the auxiliary matrix as the payload and considers the least significant bits of the three planes. The text message is embedded in the form of ASCII values matching values in the red plane; the least significant bits of pixels in the blue plane mark the positions of those pixels, and the least significant bits of the boundary values of the green plane signify the message. The three planes are recombined to form the stego-image, and the message is decrypted by scanning the red, blue, and green planes simultaneously. Performance evaluation is done using PSNR, MSE, and entropy calculations, and the generated results are compared with earlier proposed work to demonstrate the method’s efficiency relative to others.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_20-Block_Wise_Data_Hiding_with_Auxilliary_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic Interpretation of Unusual Behaviors Extracted from Outliers of Moving Objects Trajectories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080319</link>
        <id>10.14569/IJACSA.2017.080319</id>
        <doi>10.14569/IJACSA.2017.080319</doi>
        <lastModDate>2017-03-31T13:22:29.4870000+00:00</lastModDate>
        
        <creator>Sana CHAKRI</creator>
        
        <creator>Said RAGHAY</creator>
        
        <creator>Salah EL HADAJ</creator>
        
        <subject>Moving objects analysis; spatial databases; data mining; Semantic clustering; semantic trajectories</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The increasing use of location-aware devices has generated a huge volume of data from satellite images and mobile sensors. These data can be classified into geographical data and traces generated by objects moving over a geographical territory; such traces are usually modeled as streams of spatiotemporal points called trajectories. Integrating trajectory sample points with geographical and contextual data before applying mining techniques can be more beneficial for application users: it contributes to producing significant knowledge about movements and provides applications with richer, more meaningful patterns. Trajectory outliers are one sort of pattern that can be extracted from trajectories. However, the majority of algorithms proposed for discovering outliers are based on the geometric side of trajectories; our approach extends these works to produce outliers based on semantic trajectories, in order to give meaning to the extracted outliers and to understand the unusual behaviors that can be detected. To demonstrate the efficiency of the proposed approach, we present experimental results.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_19-A_Semantic_Interpretation_of_Unusual_Behaviors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bus Arbitration Scheme with an Efficient Utilization and Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080318</link>
        <id>10.14569/IJACSA.2017.080318</id>
        <doi>10.14569/IJACSA.2017.080318</doi>
        <lastModDate>2017-03-31T13:22:29.4570000+00:00</lastModDate>
        
        <creator>Amin M. A. El-Kustaban</creator>
        
        <creator>Abdullah A. K. Qahtan</creator>
        
        <subject>Chip Multiprocessor; Round Robin; Lottery Algorithms; Latency; VHDL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Computer designers utilize the recent huge advances in Very Large Scale Integration (VLSI) to place several processors on the same chip die, obtaining a Chip Multiprocessor (CMP). The shared bus is the most common medium used to connect these processors with each other and with the shared resources. Distributing the shared bus among the contending processors is a critical issue that affects the overall performance of the CMP, and optimal utilization with fair distribution of the shared bus represents another challenge. This paper introduces a bus arbitration scheme, Age-Based Lottery (ABL) Arbitration, which combines the lottery and age-based algorithms to overcome these shared-bus challenges. The results show that the developed bus arbitration scheme maximizes bus utilization and improves distribution by at least 13.5%, with acceptable latency compared to traditional bus arbitration schemes.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_18-A_Bus_Arbitration_Scheme_with_an_Efficient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RIN-Sum: A System for Query-Specific Multi-Document Extractive Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080317</link>
        <id>10.14569/IJACSA.2017.080317</id>
        <doi>10.14569/IJACSA.2017.080317</doi>
        <lastModDate>2017-03-31T13:22:29.4230000+00:00</lastModDate>
        
        <creator>Rajesh Wadhvani</creator>
        
        <creator>Rajesh Kumar Pateriya</creator>
        
        <creator>Manasi Gyanchandani</creator>
        
        <creator>Sanyam Shukla</creator>
        
        <subject>Text summarization; maximum marginal relevance; sentence selection; DUC2007 data collection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In this paper, we propose a novel summarization framework, called RIN-Sum, that generates a quality summary by extracting Relevant-Informative-Novel (RIN) sentences from a topically related document collection. In the proposed framework, sentences are ranked with the aim of retrieving relevant, informative sentences that convey novel information to the user. For sentence ranking, a Relevant-Informative-Novelty (RIN) ranking function is formulated that considers three factors: the relevance of the sentence to the input query, the informativeness of the sentence, and the novelty of the sentence. For the relevance measure, instead of incorporating existing metrics such as Cosine and Overlap, which have certain limitations, a new relevance metric called C-Overlap is formulated. RIN ranking is applied to the document collection to retrieve relevant sentences conveying significant and novel information about the query, and these retrieved sentences are used to generate a query-specific summary of multiple documents. The performance of the proposed framework has been investigated using a standard dataset, the DUC2007 document collection, and the ROUGE summary evaluation tool.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_17-RIN-Sum_A_System_for_Query-Specific_Multi-Document_Extractive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Mathematical Problems in Accordance with Student’s Learning Style</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080316</link>
        <id>10.14569/IJACSA.2017.080316</id>
        <doi>10.14569/IJACSA.2017.080316</doi>
        <lastModDate>2017-03-31T13:22:29.3930000+00:00</lastModDate>
        
        <creator>Elena Fabiola Ruiz Ledesma</creator>
        
        <creator>Juan J. Guti&#233;rrez Garc&#237;a</creator>
        
        <subject>technology; research projects; education; learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This article describes the implementation and development of an expert system as a support tool for tackling mathematical topics, using Bayesian networks as the inference engine and using learning styles, together with the difficulty level of problems, as the basis of the probabilistic classifier. The expert system decides which element to display at a specific moment, gives the student the best resource, and supervises the user’s progress. The article is divided into three sections. The first deals with the construction of the expert system. The second presents the operation of the system through the classification of students according to their profile, which is based on their prevailing learning style and on the difficulty level of problems they can solve successfully; it also shows how the system assigns digital resources in accordance with the identified profile and gradually provides assignments of increasing difficulty as the student progresses. An experimental study was performed in which the system was assessed with 30 engineering students who took the Applied Calculus course in the second semester of their degree. This group was named the study group (SG), and it used the system for one semester. The results at the initial and final evaluations ranged from 3.58 to 7.37 for the CG and SG respectively. Applying the F test, a statistically significant increase was found (p &lt;0.002). These results showed that the SG identified the concept of the derivative and applied it correctly to real problems, solving 74% of the final questionnaire correctly, so it is concluded that the expert system opens a new avenue in educational research.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_16-Selection_of_Mathematical_Problems_in_Accordance_with_Students_Learning_Style.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techniques used to Improve Spatial Visualization Skills of Students in Engineering Graphics Course: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080315</link>
        <id>10.14569/IJACSA.2017.080315</id>
        <doi>10.14569/IJACSA.2017.080315</doi>
        <lastModDate>2017-03-31T13:22:29.3600000+00:00</lastModDate>
        
        <creator>Asmaa Saeed Alqahtani</creator>
        
        <creator>Lamya Foaud Daghestani</creator>
        
        <creator>Lamiaa Fattouh Ibrahim</creator>
        
        <subject>3D graphics; virtual reality; spatial visualization skills; mental rotation skill; engineering graphics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Spatial visualization skills are crucial in engineering fields and are required to support the spatial abilities of engineering students. Instructors in engineering colleges have indicated that freshman students face difficulties when visualizing models in engineering graphics: students cannot correctly understand and process visual objects and mental images of engineering models. Traditional tools using textbooks, physical models, and modeling techniques are not sufficient for improving the spatial visualization skills of engineering students. This paper is a survey of the techniques used to teach freshman students in engineering graphics and improve their spatial visualization skills. It also presents methods of evaluating spatial visualization skills. After describing the techniques and presenting the literature review, this work presents a comparison between the methodologies and techniques used in previous studies. Finally, we summarize a roadmap of techniques and strategies for improving the spatial visualization skills of freshman engineering students.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_15-Techniques_Used_to_Improve_Spatial_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GIS Utilization for Delivering a Time Condition Products</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080314</link>
        <id>10.14569/IJACSA.2017.080314</id>
        <doi>10.14569/IJACSA.2017.080314</doi>
        <lastModDate>2017-03-31T13:22:29.3300000+00:00</lastModDate>
        
        <creator>Noha I. Sharaf</creator>
        
        <creator>Bahaa T.Shabana</creator>
        
        <creator>Hazem M. El-Bakry</creator>
        
        <subject>conditional products; distribution centers (depots); capacity of vehicle; vehicle routing problems (VRP); geographic information system (GIS); OD Cost Matrix; Network analyst</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>As the population increases rapidly all over the world, delivering products is becoming more difficult, especially for conditional products (products with a limited lifetime). Many customers require conditional products to be delivered to their locations. A distribution center may have multiple depots (store branches) instead of one. Every depot has a limited number of vehicles to minimize cost, and the capacities of these vehicles are based on two dimensions (weight and volume). A geographic information system (GIS) is used to localize customers’ destinations. An OD cost matrix is then used to assign every customer destination to the least-cost depot from which it will be served. Finally, the Network Analyst is used to solve the vehicle routing problem, generating final route directions for every vehicle and automatically calculating the best lunch-break time for drivers. This case study is applied to Mansoura city in Egypt.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_14-GIS_Utilization_for_Delivering_a_Time_Condition_Products.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Model of Information Systems Efficiency based on Key Performance Indicator (KPI)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080313</link>
        <id>10.14569/IJACSA.2017.080313</id>
        <doi>10.14569/IJACSA.2017.080313</doi>
        <lastModDate>2017-03-31T13:22:29.2830000+00:00</lastModDate>
        
        <creator>Ahmad AbdulQadir AlRababah</creator>
        
        <subject>Information Systems Management; System performance; Key performance Indicators; Data warehouse; Information Systems Integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Any company concerned with information technology considers automated performance management processes a key component of its operations, as they enable the company to obtain a clear long-term assessment of the performance of its employees as well as its operating units. One technique that a company can use to evaluate the present performance of both employees and operating units is a KPI-based management information system. The current study seeks to provide a new model of information system efficiency based on key performance indicators and to examine the extent to which such an approach helps a company evaluate performance. It also identifies the requirements and criteria needed to establish an effective performance measurement system, the axioms that may influence the design of the model’s KPIs, and the approaches needed to settle on a fixed set of key performance indicators in order to facilitate hiring.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_13-A_New_Model_of_Information_Systems_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issues and Trends in Satellite Telecommunications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080312</link>
        <id>10.14569/IJACSA.2017.080312</id>
        <doi>10.14569/IJACSA.2017.080312</doi>
        <lastModDate>2017-03-31T13:22:29.2670000+00:00</lastModDate>
        
        <creator>David Hiatt</creator>
        
        <creator>Young B. Choi</creator>
        
        <subject>satellite communications; telecommunications; satellite orbit types; bandwidth allocation; constellation design; power generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In this paper, we discuss satellite telecommunications. A brief introduction and history of satellite telecommunications is presented, followed by a discussion of certain prevalent satellite orbit types, because this is relevant to understanding how certain satellite applications are employed. Various areas of ongoing research in the field of satellite telecommunications, including bandwidth allocation, satellite constellation design for remote parts of the world, and power generation, among others, are then discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_12-Issues_and_Trends_in_Satellite_Telecommunications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Hierarchical Method for Task Scheduling in Grid Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080311</link>
        <id>10.14569/IJACSA.2017.080311</id>
        <doi>10.14569/IJACSA.2017.080311</doi>
        <lastModDate>2017-03-31T13:22:29.2530000+00:00</lastModDate>
        
        <creator>Ahmad Ali AlZubi</creator>
        
        <subject>directed acyclic graph (DAG); task granularity; hierarchical method; Maui scheduler; scheduler; scheduling algorithm; task manager; grid; parallelism degree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>This study aims to increase the productivity of grid systems by an improved scheduling method. A brief overview and analysis of the main scheduling methods in grid systems are presented. A method for increasing efficiency by optimizing the task graph structure considering the grid system node structure is proposed. Task granularity (the ratio between the amount of computation and transferred data) is considered to increase the efficiency of planning. An analysis of the impact on task scheduling efficiency in a grid system is presented. A correspondence of the task graph structure considering the node structure (in which the task is immersed) to the effectiveness of scheduling in a grid system is shown. A modified method for scheduling tasks while considering their granularity is proposed. The relevant algorithm for task scheduling in a grid system is developed. Simulation of the proposed algorithm using the modeling system GridSim is conducted. A comparative analysis between the modified algorithm and the algorithm of the hierarchical scheduler Maui is shown. The general advantages and disadvantages of the proposed algorithm are discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_11-Modified_Hierarchical_Method_for_Task_Scheduling_in_Grid_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Electrical Model to U-Slot Patch Antenna with Circular Polarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080310</link>
        <id>10.14569/IJACSA.2017.080310</id>
        <doi>10.14569/IJACSA.2017.080310</doi>
        <lastModDate>2017-03-31T13:22:29.2200000+00:00</lastModDate>
        
        <creator>Guesmi Chaouki</creator>
        
        <creator>Necibi Omrane</creator>
        
        <creator>Ghnimi Said</creator>
        
        <creator>Gharsallah Ali</creator>
        
        <subject>RFID; circularly polarization; U-slot antenna; RFID reader antenna; Electrical model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The microstrip antenna is one of the best antenna structures due to its low cost and compact design. In this paper, a coaxially fed, circularly polarized square patch antenna is designed using a U-slot. The proposed antenna is suited for RFID readers in the SHF band. This antenna structure, built on an FR-4 substrate (dielectric constant = 3.5), is capable of covering the frequency range of 2.4 to 2.5 GHz. The patch size is 25 × 25 mm2. An equivalent electrical model of this antenna is proposed and simulated with the ADS software. The simulated gain is 4.189 dBi and the S11 bandwidth is about 100 MHz. Analysis and modeling of the proposed antenna were carried out using the CST and HFSS simulators based on the finite element method. The simulation results obtained are presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_10-An_Electrical_Model_to_U-Slot_Patch_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Trust and Reputation Model for Quality Assessment of Online Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080309</link>
        <id>10.14569/IJACSA.2017.080309</id>
        <doi>10.14569/IJACSA.2017.080309</doi>
        <lastModDate>2017-03-31T13:22:29.1900000+00:00</lastModDate>
        
        <creator>Yousef Elsheikh</creator>
        
        <subject>Online content; Quality assessment; Trust and reputation model; User behaviour; User reliability; user tendency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In recent years, online transactions have become more prevalent than ever before. This means that the number of online users performing such transactions keeps growing, raising the level of their expectations. One of those expectations is to gain a better understanding of such transactions before going ahead with them. Consequently, trust and reputation models represent an important milestone in supporting those users to make their own decisions and facilitate online transactions. Many common trust and reputation models use primitive methods to calculate the reputation of online content. These methods are usually inaccurate when there is a divergence in ratings, and they lack the ability to predict emerging trends from the latest ratings. Others use a probabilistic model or the so-called weighted average, which usually focuses on a single dimension of online user ratings. Even those models that combine multiple dimensions of user ratings are usually not representative on the one hand, and use heterogeneous weights on the other. This paper fills this gap by proposing a model to assess the trust and reputation of online content, relying on three factors, namely user behaviour, user reliability, and user tendency, with homogeneous weights of interest to the user on the Internet. These homogeneous weights are used to measure the reputation of any online content. The proposed model has been validated and compared with some other well-known models, showing a significant improvement in terms of the Mean Absolute Error (MAE). The proposed model also performs well with both sparse and dense datasets.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_9-A_Trust_and_Reputation_Model_for_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Qualitative Study of Existing Research Techniques on Wireless Mesh Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080308</link>
        <id>10.14569/IJACSA.2017.080308</id>
        <doi>10.14569/IJACSA.2017.080308</doi>
        <lastModDate>2017-03-31T13:22:29.0800000+00:00</lastModDate>
        
        <creator>Naveen T.H</creator>
        
        <creator>Vasanth G</creator>
        
        <subject>Access Points; Channel Allocation; Internet Access; Routing Problems; Wireless Mesh Network; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Wireless Mesh Network (WMN) is one of the significant forms of wireless network that assists in creating highly interconnected communication nodes. Over the past decade, there have been various studies towards enhancing the performance of WMN, which have been successful to a large extent. However, with the upcoming technology of pervasive and dynamic networks, WMN suffers from various routing issues, Quality-of-Service (QoS) issues, channel allocation, and route sustainability, which make the theory contradictory when considering real-world challenges in wireless networks. This paper therefore presents fundamental information about WMN, followed by a discussion of existing research trends and techniques. Finally, the paper discusses the open research issues that remain after reviewing the existing research techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_8-Qualitative_Study_of_Existing_Research_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of High Speed Free Space Optics System to Maintain Signal Integrity in Different Weather Conditions; System Level</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080307</link>
        <id>10.14569/IJACSA.2017.080307</id>
        <doi>10.14569/IJACSA.2017.080307</doi>
        <lastModDate>2017-03-31T13:22:29.0500000+00:00</lastModDate>
        
        <creator>Rao Kashif</creator>
        
        <creator>Oluwole John</creator>
        
        <creator>Fujiang Lin</creator>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <subject>Free Space Optical; NRZ; RZ; PIN; APD; Photo Detector; BER; Q-factor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Free space optics (FSO), also known as free space photonics, is a technology widely deployed in Local Area Networks (LAN), Metro Area Networks (MAN), and in inter- &amp; intra-chip communications. However, satellite-to-satellite and other space uses of FSO require further consideration. Although FSO is highly beneficial due to its easy deployment, the high security of its narrow beam, and market demand for 10 Gb/s and beyond, some factors, especially rain, snow and fog attenuation, cause signal integrity problems in FSO. To achieve better signal integrity in FSO, all components must be considered while designing the system. In this paper, a comparative analysis is performed on 10 Gb/s and 40 Gb/s FSO systems over 1 km. Firstly, to select a suitable modulation technique, we compared NRZ and RZ modulation and obtained a spectrum analysis; NRZ modulation was found to be more data efficient. Signal integrity in the 10 Gb/s FSO system was analyzed using eye diagrams, and the Q-factors of both APD and PIN photodetectors are presented graphically. The same experiment was repeated at 40 Gb/s, and the bit error rates of both photodetectors are presented.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_7-Modeling_of_High_Speed_Free_Space_Optics_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction Method for Large Diatom Appearance with Meteorological Data and MODIS Derived Turbidity and Chlorophyll-A in Ariake Bay Area in Japan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080306</link>
        <id>10.14569/IJACSA.2017.080306</id>
        <doi>10.14569/IJACSA.2017.080306</doi>
        <lastModDate>2017-03-31T13:22:29.0030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>chlorophyl-a concentration; red tide; diatom; MODIS; satellite remote sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>A prediction method for large diatom appearance in winter using meteorological data and MODIS-derived turbidity and chlorophyll-a in the Ariake Bay area in Japan is proposed. The mechanism of large diatom appearance in winter is discussed together with the influencing factors: meteorological conditions and in-situ turbidity and chlorophyll-a data from the measuring instruments installed at Saga University’s own tower in the Ariake Bay area. In particular, the method for estimating turbidity is still under discussion; therefore, an algorithm for estimating turbidity from MODIS data is proposed here. Through experiments, it is found that the proposed prediction method for large diatom appearance is validated with the meteorological data and the MODIS-derived turbidity and chlorophyll-a data estimated for the winters (January to March) of 2012 and 2015.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_6-Prediction_Method_for_Large_Diatom_Appearance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Spam Detection Methods on Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080305</link>
        <id>10.14569/IJACSA.2017.080305</id>
        <doi>10.14569/IJACSA.2017.080305</doi>
        <lastModDate>2017-03-31T13:22:28.9870000+00:00</lastModDate>
        
        <creator>Abdullah Talha Kabakus</creator>
        
        <creator>Resul Kara</creator>
        
        <subject>Twitter spam; spam detection; spam filtering; mobile security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>Twitter is one of the most popular social media platforms, with 313 million monthly active users who post 500 million tweets per day. This popularity attracts the attention of spammers, who use Twitter for malicious aims such as phishing legitimate users, spreading malicious software and advertisements through URLs shared within tweets, aggressively following/unfollowing legitimate users, hijacking trending topics to attract attention, and propagating pornography. In August 2014, Twitter revealed that 8.5% of its monthly active users, approximately 23 million users, had automatically contacted its servers for regular updates. Thus, detecting and filtering spammers from legitimate users is mandatory in order to provide a spam-free environment on Twitter. In this paper, the features used for Twitter spam detection are presented and their effectiveness is discussed. Twitter spam detection methods are also categorized and discussed with their pros and cons. The outdated features of Twitter that are commonly used by Twitter spam detection approaches are highlighted. Some new features of Twitter which, to the best of our knowledge, have not been mentioned in any other works are also presented.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_5-A_Survey_of_Spam_Detection_Methods_on_Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Preliminary Numerical Simulation Study of Developing Ankle Foot Orthosis to Support Sit-To-Stand Movement in Children with Cerebral Palsy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080304</link>
        <id>10.14569/IJACSA.2017.080304</id>
        <doi>10.14569/IJACSA.2017.080304</doi>
        <lastModDate>2017-03-31T13:22:28.9400000+00:00</lastModDate>
        
        <creator>Chihiro NAKAGAWA</creator>
        
        <creator>Ryo YONETSU</creator>
        
        <creator>Tomohiro ITO</creator>
        
        <creator>Shunsuke KUSADA</creator>
        
        <creator>Atsuhiko SHINTANI</creator>
        
        <subject>Cerebral; palsy; Standing-up motion; Motion analysis; Numerical simulation; Rigid link model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The purpose of this study is to identify an effective method of support for the standing-up motion of children with cerebral palsy (CP). Experiments revealed remarkable differences in the shank and upper-body motions of children with CP compared with normally developed (ND) children. Shank tilt angles of CP children were smaller and their upper-body tilt angles were larger than those of ND children. The large upper-body tilt compensates for the smaller shank tilt but will cause back pain and/or deformation of the hip joint as they grow. It is therefore imperative to find a method of support to help CP children realize more natural motions (similar to those of ND children) to prevent these problems. The standing-up motion of ND children was adopted as the goal. Experiments identified a similarity in the angular variation between ND children’s upper bodies and shanks; the standing-up motion of children with CP under that condition was then simulated using a two-dimensional four-link model of the human body. As a result of the numerical simulation, shank angles of CP children increased and their upper-body angles decreased from those measured during the experiments, which indicates that the proposed method of support is qualitatively effective at allowing CP children to realize a more natural standing-up motion.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_4-A_Preliminary_Numerical_Simulation_Study_of_Developing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Design of Patch Antenna using U-Slot and Defected Ground Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080303</link>
        <id>10.14569/IJACSA.2017.080303</id>
        <doi>10.14569/IJACSA.2017.080303</doi>
        <lastModDate>2017-03-31T13:22:28.9100000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mehre Munir</creator>
        
        <creator>Alex James Cole</creator>
        
        <subject>multiband frequencies; directivity; gain; slots; Bandwidth; reflection coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>A novel patch antenna design is presented, featuring a double U-slot structure on the patch with ground irregularities. As a result, a tri-band response is achieved, with gain ranging from 0.785 to 3.75 dB and directivity from 5.5 to 5.6 dBi. A coaxial cable is mounted to the patch as the feed. The antenna has shown a minimal mismatch loss of 0 to 5%, with a high bandwidth response of 37 to 1200 MHz. The proposed antenna can be used for GSM, W-LAN, GPRS and other radio communication service systems.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_3-A_Novel_Design_of_Patch_Antenna_using_U-Slot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ComplexCloudSim: Towards Understanding Complexity in QoS-Aware Cloud Scheduling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080302</link>
        <id>10.14569/IJACSA.2017.080302</id>
        <doi>10.14569/IJACSA.2017.080302</doi>
        <lastModDate>2017-03-31T13:22:28.8930000+00:00</lastModDate>
        
        <creator>Huankai Chen</creator>
        
        <creator>Frank Z Wang</creator>
        
        <subject>Cloud Scheduling; Damage Spreading; QoS; Complexity; Chaotic Behaviour; Cloud Simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>The cloud is generally assumed to be homogeneous in most of the research efforts related to cloud resource management, and the performance of cloud resources is assumed to be predictable. However, a plethora of complexities are associated with cloud resources in the real world: dynamicity, heterogeneity and uncertainty. For heterogeneous cloud resources experiencing vast dynamic changes in performance, the statistical characteristics of execution times on different cloud resources play a critical role in facilitating management decisions. The cloud’s performance can be considerably influenced by the differences between estimated and actual execution times, which may affect the robustness of resource management systems. The study of complexity in cloud resource management systems remains limited, even though extensive research has been done on complexity issues in fields ranging from decision making in economics to computational biology. This paper concentrates on the research question regarding the role of complexity in QoS-aware cloud resource management systems. We present ComplexCloudSim, which extends CloudSim, a popular simulation tool-kit, by modelling complexity factors in the cloud, including dynamic changes in run-time performance, resource heterogeneity, and uncertainty in task execution times. The effects of complexity on performance within cloud environments are examined by comparing four widely used heuristic cloud scheduling algorithms, given that the execution time information is inaccurate. Furthermore, damage spreading analysis, one of the available complex system analysis methods, is applied to the system, and simulations are run to reveal the system’s sensitivity to initial conditions within specific parameter regions. Finally, how a small damage can spread throughout the system within such a region is discussed, as well as potential ways to avoid such chaotic behaviours and develop more robust systems.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_2-ComplexCloudSim_Towards_Understanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stylometric Techniques for Multiple Author Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080301</link>
        <id>10.14569/IJACSA.2017.080301</id>
        <doi>10.14569/IJACSA.2017.080301</doi>
        <lastModDate>2017-03-31T13:22:28.8300000+00:00</lastModDate>
        
        <creator>David Kernot</creator>
        
        <creator>Terry Bossomaier</creator>
        
        <creator>Roger Bradbury</creator>
        
        <subject>Authorship Identification; Principal Component Analysis; Linear Discriminant Analysis; Vector Space Method; Seriation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(3), 2017</description>
        <description>In 1598–99, the printer William Jaggard named Shakespeare as the sole author of The Passionate Pilgrim even though Jaggard included a number of non-Shakespearian poems in the volume. Using a neurolinguistics approach to authorship identification, a four-feature technique, RPAS, is used to convert the 21 poems in The Passionate Pilgrim into multi-dimensional vectors. Three complementary analytical techniques are applied to cluster the data and reduce single-technique bias before an alternative method, seriation, is used to measure the distances between clusters and test the strength of the connections. The multivariate techniques are found to be robust and able to allocate nine of the 12 unknown poems to Shakespeare. The authorship of one of the Barnfield poems is questioned, and the analysis highlights that others are collaborations or works of yet-to-be-acknowledged poets. It is possible that as many as 15 poems were Shakespeare’s and that at least five poets were not acknowledged.</description>
        <description>http://thesai.org/Downloads/Volume8No3/Paper_1-Stylometric_Techniques_for_Multiple_Author_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Technology based Polio-Vaccination System (PVS) – First Step Towards Polio-Free Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080253</link>
        <id>10.14569/IJACSA.2017.080253</id>
        <doi>10.14569/IJACSA.2017.080253</doi>
        <lastModDate>2017-03-08T12:39:03.3300000+00:00</lastModDate>
        
        <creator>Nukhba Afzal</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Amnah Firdous</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Hina Asmat</creator>
        
        <creator>Saleem Ullah</creator>
        
        <subject>Polio Vaccination; Information system; GPS technology; Health-care</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Health information technology has revolutionized the world through its rapid expansion and widespread adoption in health-care systems. Most developed countries have adopted advanced technology in their vaccination systems. The vaccination systems of many developing countries still lack such technology, which leads to mismanagement and corruption in vaccination campaigns. Issues like mismanagement and corruption not only affect vaccination campaigns but also cause further diffusion of a disease. Pakistan is one such country: its vaccination system is prone to these and many other issues and hence does not help in disease eradication. For example, polio remains alive in Pakistan because the country’s polio vaccination system faces many problems, the biggest being the security of vaccination teams. Corruption, mismanagement, lack of public awareness, and life-threats to vaccination teams are the main problems of the current polio vaccination system of Pakistan. To overcome these flaws and build an ideal system with new advanced technology, we propose a technology-oriented, secure polio vaccination system. The proposed system is more secure and removes the flaws of the current system. We model the proposed system using Colored Petri Nets (CPNs), a state-of-the-art tool for formal modeling.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_53-Mobile_Technology_based_Polio-Vaccination_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Evaluation of Social and Traditional Search Tools for Adhoc Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080252</link>
        <id>10.14569/IJACSA.2017.080252</id>
        <doi>10.14569/IJACSA.2017.080252</doi>
        <lastModDate>2017-03-08T12:39:03.2530000+00:00</lastModDate>
        
        <creator>Safdar Hussain</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Mujtaba Husnain</creator>
        
        <creator>Intesab Hussain</creator>
        
        <creator>M. Ali Nizamani</creator>
        
        <subject>AOL Query Log; Facebook; Twitter; Social Search</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The nature of the World Wide Web (WWW) has evolved over time. Easier and faster Internet access has given rise to huge volumes of data available online. Another cause of these huge data volumes is the emergence of online social networks (like Facebook, Twitter, etc.), which have turned data consumers into data generators. The increasing popularity of these online social networks has also changed how different web services are used. For example, Facebook messaging has had some impact on email usage; Twitter usage affects (positively or negatively) online newspaper reading. Both of these platforms are heavily used for information searching. In this paper, we evaluate the role of Facebook and Twitter for academic queries and compare the findings with the Google search engine to determine whether these online social networks might replace Google in the near future. A query set selected from the standard AOL dataset is used for experimentation. Academic-related queries are selected and classified by expert users. The results of Google, Facebook, and Twitter are compared against these queries using Mean Average Precision (MAP) as the evaluation metric. The results show that Google dominates, with a better MAP than Facebook and Twitter.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_52-Empirical_Evaluation_of_Social_and_Traditional_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition using SIFT Key with Optimal Features Selection Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080251</link>
        <id>10.14569/IJACSA.2017.080251</id>
        <doi>10.14569/IJACSA.2017.080251</doi>
        <lastModDate>2017-03-01T10:32:26.4730000+00:00</lastModDate>
        
        <creator>Taqdir </creator>
        
        <creator>Renu Dhir</creator>
        
        <subject>Feature Extraction; Classifier; PCA; LPA; LBP; DWT; SIFT key; Genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Facial expressions are complex in nature due to the legion of variations present. These variations are identified and recorded using feature extraction mechanisms. Researchers have worked on this problem and created classifiers for identifying facial expressions. These classifiers include Principal Component Analysis (PCA), Local Polynomial Approximation (LPA), Local Binary Patterns (LBP), Discrete Wavelet Transformation (DWT), etc. The proposed work presents a new classifier that uses SIFT keys with a genetic algorithm to identify distinct facial expressions. Optimal features of existing algorithms are used within the proposed work. A comparison of existing techniques such as LBP, PCA, and DWT with the SIFT-key-with-genetic-algorithm approach is also presented. The results show that the proposed classifier gives better results in terms of recognition rate.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_51-Face_Recognition_using_Sift_Key_with_Optimal_Features_Selection_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Game Development Driven by User-eXperience: a Study of Rework, Productivity and Complexity of Use</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080250</link>
        <id>10.14569/IJACSA.2017.080250</id>
        <doi>10.14569/IJACSA.2017.080250</doi>
        <lastModDate>2017-03-01T10:32:26.4270000+00:00</lastModDate>
        
        <creator>Mario Gonzalez-Salazar</creator>
        
        <creator>Hugo Mitre-Hernandez</creator>
        
        <creator>Carlos Lara-Alvarez</creator>
        
        <subject>Rework; Productivity; Complexity of Use; Video Game Development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The growing capabilities and revenues of video game development are important factors for software companies. However, game development processes could be considered immature, specifically in the design phase. Ambiguous requirements in game design cause rework. User-eXperience (UX) is usually assessed at the end of the development process, making it difficult to ensure the interactive experience between the game and its users. To reduce these problems, this paper proposes a method for Game Development driven by User-eXperience (GameD-UX) that integrates a repository based on requirements engineering, a model for user experience management, and an adjusted agile process. Two experiments were conducted to study the rework and productivity of video game development. Results of the first experiment revealed that GameD-UX causes less rework than conventional approaches, but it induces lower productivity. A tool supporting the GameD-UX method was developed by considering the lessons learned. The second experiment showed that this software tool increases productivity and reduces the complexity of use of GameD-UX.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_50-Method_for_Game_Development_Driven.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Low Cost FPGA based Cryptosystem Design for High Throughput Area Ratio</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080249</link>
        <id>10.14569/IJACSA.2017.080249</id>
        <doi>10.14569/IJACSA.2017.080249</doi>
        <lastModDate>2017-02-28T15:16:57.9730000+00:00</lastModDate>
        
        <creator>Muhammad Sohail Ibrahim</creator>
        
        <creator>Irfan Ahmed</creator>
        
        <creator>M. Imran Aslam</creator>
        
        <creator>Muhammad Ghazaal</creator>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Kamran Raza</creator>
        
        <creator>Shujaat Khan</creator>
        
        <subject>Encryption; Cryptosystem; Secure Cipher; AES; FPGA; Full loop unroll</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>For many years, Field Programmable Gate Arrays (FPGAs) have been used as target devices for prototyping and cryptographic algorithm applications. Due to the parallel architecture of FPGAs, the flexibility of cryptographic algorithms can be exploited to achieve high throughput with very low chip area. In this research, we propose a low-cost FPGA-based cryptosystem, named Secure Cipher, with a high throughput-to-area ratio. The proposed Secure Cipher is implemented using a full loop-unroll technique in order to exploit the parallelism of the proposed algorithm. The implementation achieves a throughput of 4600 Mbps for encryption. Its logic resource utilization is 802 logic elements (LE), which yields a throughput-to-area ratio of 5.735 Mbps/LE.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_49-A_Low_Cost_FPGA_based_Cryptosystem_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of the Performance of Multi-hop Routing Protocols in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080248</link>
        <id>10.14569/IJACSA.2017.080248</id>
        <doi>10.14569/IJACSA.2017.080248</doi>
        <lastModDate>2017-02-28T11:25:49.6600000+00:00</lastModDate>
        
        <creator>Nouredine Seddiki</creator>
        
        <creator>Bassou Abedsalem</creator>
        
        <subject>network sensors; routing protocol; simulation; NS2; network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Currently, the literature contains quite a number of multi-hop routing algorithms, some of which are subject to normalization. Routing protocols based on clustering provide an efficient method for extending the lifetime of a wireless sensor network. However, much of the research focuses less on communication between the Cluster-Head (CH), the nodes, and the base station, and gives even less importance to the influence of the type of communication on the lifetime of the network. The aim of this article is to make a comparative study of several routing algorithms. Since they are not based on analytical models, the exact evaluation of some aspects of these protocols is very difficult; this is why we use simulations to study their performance. Our simulation is done under NS2 (Network Simulator 2). It allowed us to obtain a classification of the different routing algorithms studied according to metrics such as message loss and network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_48-Study_of_the_Performance_of_Multi_hop_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Varying Back Propagating Algorithm for MIMO Adaptive Inverse Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080247</link>
        <id>10.14569/IJACSA.2017.080247</id>
        <doi>10.14569/IJACSA.2017.080247</doi>
        <lastModDate>2017-02-28T11:25:49.6300000+00:00</lastModDate>
        
        <creator>Ibrahim Mustafa Mehedi</creator>
        
        <subject>Adaptive inverse control; neural network; MIMO; multilayer perceptron</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>In the field of automatic control system design, adaptive inverse control is a powerful technique. It identifies the system model and controls it automatically without prior knowledge of the plant dynamics. In this paper, a neural-network-based adaptive inverse controller is proposed to control a MIMO system. A multilayer perceptron and back propagation are used in combination to design the NN learning algorithm. The developed structure is able to identify and control the MIMO system. Mathematical derivations and simulation results for both plant identification and control are presented in this paper. Further, to demonstrate the superiority of the proposed technique, its performance is compared with the recursive least squares (RLS) method for the same MIMO system. An RLS-based adaptive inverse scheme for plant identification and control is also discussed, and the simulated results are compared for both plant parameter estimation and trajectory tracking performance.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_47-Time_Varying_Back_Propagating_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Proposed Congestion Avoiding Protocol for IEEE 802.11s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080246</link>
        <id>10.14569/IJACSA.2017.080246</id>
        <doi>10.14569/IJACSA.2017.080246</doi>
        <lastModDate>2017-02-28T11:25:49.6300000+00:00</lastModDate>
        
        <creator>Kishwer Abdul Khaliq</creator>
        
        <creator>Amir Qayyum</creator>
        
        <creator>Jurgen Pannek</creator>
        
        <subject>Wireless Mesh Network; IEEE802.11s; Congestion Control; Congestion Avoidance; Routing Protocol; HWMP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Wireless technology is one of the core components of mobile applications, offering mobility support at low deployment cost. Among these technologies, the Wireless Mesh Network (WMN) supports mobile users with undisrupted, reliable data connectivity and provides high bandwidth even in areas where access to such services is difficult. Additionally, it features self-configuring, self-healing, and self-organizing capabilities. IEEE proposed a MAC standard for WMN enhancements, named IEEE 802.11s, for multi-hop networks. Within this standard, the mandatory routing protocol, the Hybrid Wireless Mesh Protocol (HWMP), is proposed for efficient utilization of resources to achieve high bandwidth at the MAC layer. To improve this protocol, a congestion-avoiding protocol was proposed, which utilizes alternate paths just before the congestion state is reached. The proposed technique does not add any overhead; it utilizes the congestion notification frame, which is already part of the standard. This paper discusses simulation results of the proposed routing protocol against the existing HWMP protocol for packet delivery fraction, throughput, and delay. The results indicate that the proposed technique significantly improves the performance of IEEE 802.11s.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_46-Performance_Analysis_of_Proposed_Congestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OpenCL-Accelerated Object Classification in Video Streams using Spatial Pooler of Hierarchical Temporal Memory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080245</link>
        <id>10.14569/IJACSA.2017.080245</id>
        <doi>10.14569/IJACSA.2017.080245</doi>
        <lastModDate>2017-02-28T11:25:49.6130000+00:00</lastModDate>
        
        <creator>Maciej Wielgosz</creator>
        
        <creator>Marcin Pietron</creator>
        
        <subject>Hierarchical Temporal Memory; OpenCL; GPU; Video processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The paper presents a method to classify objects in video streams using a brain-inspired Hierarchical Temporal Memory (HTM) algorithm. Object classification is a challenging task in which humans still significantly outperform machine learning algorithms due to their unique capabilities. A system which achieves very promising performance in terms of recognition accuracy has been implemented. Unfortunately, conducting more advanced experiments is very computationally demanding; some of the trials run on a standard CPU may take as long as several days for 960x540 video stream frames. Therefore, the authors decided to accelerate selected parts of the system using OpenCL. In particular, the authors seek to determine to what extent porting selected, computationally demanding parts of the core may speed up calculations. The classification accuracy of the system was examined through a series of experiments, and the performance is given in terms of F1 score as a function of the number of columns, synapses, min overlap, and winner set size. The system achieves its highest F1 scores of 0.95 and 0.91 for min overlap=4 and 256 synapses, respectively. The authors also conducted a series of experiments with different hardware setups and measured CPU/GPU acceleration. The best kernel speed-ups of 632x and 207x were reached for 256 synapses and 1024 columns. However, overall acceleration including transfer time was significantly lower, amounting to 6.5x and 3.2x for the same setup.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_45-OpenCL_Accelerated_Object_Classification_in_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reverse Area Skyline in a Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080244</link>
        <id>10.14569/IJACSA.2017.080244</id>
        <doi>10.14569/IJACSA.2017.080244</doi>
        <lastModDate>2017-02-28T11:25:49.5800000+00:00</lastModDate>
        
        <creator>Annisa </creator>
        
        <creator>Asif Zaman</creator>
        
        <creator>Yasuhiko Morimoto</creator>
        
        <subject>skyline query; reverse skyline query; area skyline query</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>A skyline query retrieves a set of data objects, each of which is not dominated by another object. On the other hand, given a query object q, a “reverse” skyline query retrieves the set of points that have q in their “dynamic” skyline. If q is a given preference of a user, the “dynamic” skyline query retrieves a set of points that are not dominated by another point with respect to q. Intuitively, the “reverse” skyline query of q retrieves a set of points that are as preferable as q. An area skyline query is a method for selecting good areas, each of which is near desirable facilities such as stations, warehouses, promising customers’ houses, etc., and far from undesirable facilities such as competitors’ shops, noise sources, etc. In this paper, we apply the reverse skyline concept to the area skyline query and propose a Reverse Area Skyline algorithm. Analogously, given an area g, a reverse area skyline query selects areas, each of which is as preferable as g. Assume that a real estate company wants to sell an area. A reverse area skyline query would be useful for such a company to consider effective real estate developments so that the area attracts many buyers. The reverse area skyline query can also be used for selecting promising buyers of the area.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_44-Reverse_Area_Skyline_in_a_Map.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction by a Hybrid of Wavelet Transform and Long-Short-Term-Memory Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080243</link>
        <id>10.14569/IJACSA.2017.080243</id>
        <doi>10.14569/IJACSA.2017.080243</doi>
        <lastModDate>2017-02-28T11:25:49.5670000+00:00</lastModDate>
        
        <creator>Putu Sugiartawan</creator>
        
        <creator>Reza Pulungan</creator>
        
        <creator>Anny Kartika Sari</creator>
        
        <subject>Wavelet Transform; Long-Short-Term Memory; Recurrent Neural Network; Time Series Prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Data originating from some specific fields, for instance tourist arrivals, may exhibit a high degree of fluctuation as well as non-linear characteristics due to time-varying behaviors. This paper proposes a new hybrid method to perform prediction for such data. The proposed hybrid model of wavelet transform and long-short-term memory (LSTM) recurrent neural network (RNN) is able to capture non-linear attributes in tourist arrival time series. First, the data is decomposed into constitutive series through the wavelet transform. The decomposition is expressed as a function of a combination of wavelet coefficients, which have different levels of resolution. Then, an LSTM neural network is used to train and simulate the value at each level to find the bias vectors and weighting coefficients for the prediction value. A sliding window model is employed to capture the time series nature of the data. An evaluation is conducted to compare the proposed model with other RNN algorithms, i.e., the Elman RNN and the Jordan RNN, as well as the combination of the wavelet transform with each of them. The results show that the proposed model has better performance in terms of training time than the original LSTM RNN, while its accuracy is better than that of the wavelet-Elman and wavelet-Jordan hybrids.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_43-Prediction_by_a_Hybrid_of_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyclic Redundancy Checking (CRC) Accelerator for Embedded Processor Datapaths</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080242</link>
        <id>10.14569/IJACSA.2017.080242</id>
        <doi>10.14569/IJACSA.2017.080242</doi>
        <lastModDate>2017-02-28T11:25:49.5330000+00:00</lastModDate>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Liguo Sun</creator>
        
        <creator>Rao Kashif</creator>
        
        <creator>Muhammad Waqar Azhar</creator>
        
        <creator>Muhammad Imran Khan</creator>
        
        <subject>CRC; Accelerator; Codesign; FPGA; MicroBlaze; Embedded Processor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>We present the integration of a multimode Cyclic Redundancy Checking (CRC) accelerator unit with an embedded processor datapath, and investigate the resulting processor performance in terms of execution time and energy efficiency. Our evaluation shows that the CRC-accelerated MicroBlaze soft-core embedded processor datapath is 153 times more cycle- and energy-efficient than a datapath lacking a CRC accelerator unit. This acceleration is achieved at the cost of some area overhead.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_42-Cyclic_Redundancy_Checking_CRC_Accelerator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Structure of Advance Encryption Standard with 3-Dimensional Dynamic S-box and Key Generation Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080241</link>
        <id>10.14569/IJACSA.2017.080241</id>
        <doi>10.14569/IJACSA.2017.080241</doi>
        <lastModDate>2017-02-28T11:25:49.5200000+00:00</lastModDate>
        
        <creator>Ziaur Rahaman</creator>
        
        <creator>Anjela Diana corraya</creator>
        
        <creator>Mousumi Akter Sumi</creator>
        
        <creator>Ali Newaz Bahar</creator>
        
        <subject>Advanced Encryption Standard; AES Modification; 3-dimensional Key Generation Matrix; dynamic S-box</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The study of sending and receiving secret messages is called cryptography. Generally, senders and receivers are unaware of the internal process of encryption and decryption; hence, encryption plays an important role in data communication and data security. The purpose of encryption is not only to keep data confidential from unwanted access but also to ensure data integrity through the available means. As the capacity for breaking security is increasing rapidly, the process that hides information is one of the topics of greatest concern. The Advanced Encryption Standard (AES) is a popular, widely used, and efficient encryption algorithm. This paper focuses on the AES key generation process and the substitution box (S-box). It modifies the conventional key generation technique and builds a dynamic 3-dimensional S-box for the Advanced Encryption Standard. The proposed approach introduces a 3-Dimensional Key Generation Matrix and S-box. As shown, this novel technique increases the time needed for encryption and decryption, and the experimental results show that it also enhances the strength of the AES algorithm. The paper presents the theoretical analysis and the corresponding experimental results.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_41-A_Novel_Structure_of_Advance_Encryption_Standard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Analytical Assessment of Requirements Prioritization Models: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080240</link>
        <id>10.14569/IJACSA.2017.080240</id>
        <doi>10.14569/IJACSA.2017.080240</doi>
        <lastModDate>2017-02-28T11:25:49.5030000+00:00</lastModDate>
        
        <creator>Aneesa Rida Asghar</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>Atika Tabassum</creator>
        
        <creator>S Asim Ali Shah</creator>
        
        <subject>Agile Software Engineering (ASE); Agile Software Development (ASD); Scrum Software Development Process; SCRUM; Product Owner (PO); Extreme Programming (XP); Requirements Prioritization techniques; Analytical Hierarchy Process (AHP); Cummulative Voting (CV); Numerical Assignment (NAT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Requirements prioritization is an important part of managing requirements in the software development process and plays a role in the success or failure of a software product. A software product can go wrong or fail if the right requirements are not prioritized at the right time. Thus, there is a need for a comprehensive requirements prioritization technique or model that spans all the factors that must be considered while prioritizing requirements, whether for traditional or agile software development. Several requirements prioritization methodologies aid decision making and requirements prioritization, but many fail to account for important factors that significantly influence the prioritization. A prioritization methodology is required that takes into account factors such as time and human behavior, which influence requirements prioritization. The new model/technique is expected to overcome the shortcomings of existing prioritization techniques, which do not consider the time-gap factor and human behavioral factors. An extensive study of the literature on agile methodology, requirements elicitation, and prioritization has been conducted to find the factors that influence the decision-making process of requirements prioritization. It is found that, as agile methodologies such as XP and SCRUM deliver products in increments, there is a time gap of approximately four weeks or more between increments; this time lapse can cause human behavior to change, either because of market demand or other personal reasons, and thus influences the prioritization decision. These factors can be termed the time factor and human behavioral factors. A requirements prioritization technique or model that accounts for all such factors, whether for traditional or agile software development, is therefore needed.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_40-The_Impact_of_Analytical_Assessment_of_Requirements_Prioritization_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Review on Test Cases Prioritization Techniques: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080239</link>
        <id>10.14569/IJACSA.2017.080239</id>
        <doi>10.14569/IJACSA.2017.080239</doi>
        <lastModDate>2017-02-28T11:25:49.4870000+00:00</lastModDate>
        
        <creator>Zainab Sultan</creator>
        
        <creator>Rabiya Abbas</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>S. Asim Ali Shah</creator>
        
        <subject>Agile Software Engineering (ASE); Testing; Regression Testing; Test Suite Reduction; Test Case Generation; Test minimization; Test Case Prioritization Technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Software testing plays a vital role in conclusively predicting the quality of any software system. Testing is done to find faults early and to observe failures (anomalies) before the implementation stage; if bugs (defects) are detected, the software is passed through a maintenance phase. The success or failure of a software project is often attributed to the development methodology used. It is also observed that in many scenarios software engineering methods are not implemented in their true spirit. Moreover, many development methodologies do not handle change well, because they follow a predefined development path that allows very little deviation. Regression testing is an important type of software testing: whenever a change is made to the software, regression testing is performed to check that the change does not affect other parts of the software. In regression testing, test cases are prioritized in order to reuse both new and existing test cases effectively. Test case prioritization can be done using different techniques. This paper presents a review of different test case prioritization techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_39-Analytical_Review_on_Test_Cases_Prioritization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Need and Role of Scala Implementations in Bioinformatics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080238</link>
        <id>10.14569/IJACSA.2017.080238</id>
        <doi>10.14569/IJACSA.2017.080238</doi>
        <lastModDate>2017-02-28T11:25:49.4730000+00:00</lastModDate>
        
        <creator>Abbas Rehman</creator>
        
        <creator>Ali Abbas</creator>
        
        <creator>Muhammad Atif Sarwar</creator>
        
        <creator>Javed Ferzund</creator>
        
        <subject>Scala; Big Data; Hadoop; Spark; Next Generation Sequencing; Genomics; RNA; DNA; Bioinformatics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Next Generation Sequencing has resulted in the generation of large volumes of omics data at a speed that was not possible before. This data is only useful if it can be stored and analyzed at a matching speed. Big Data platforms and tools like Apache Hadoop and Spark have addressed this problem. However, most of the algorithms used in bioinformatics for pairwise alignment, multiple alignment, and motif finding have not been implemented for Hadoop or Spark. Scala is a powerful language supported by Spark. It provides constructs like traits, closures, functions, pattern matching, and extractors that make it suitable for bioinformatics applications. This article explores the bioinformatics areas where Scala can be used efficiently for data analysis. It also highlights the need for Scala implementations of algorithms used in bioinformatics.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_38-Need_and_Role_of_Scala_Implementations_in_Bioinformatics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis Challenges of Informal Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080237</link>
        <id>10.14569/IJACSA.2017.080237</id>
        <doi>10.14569/IJACSA.2017.080237</doi>
        <lastModDate>2017-02-28T11:25:49.4400000+00:00</lastModDate>
        
        <creator>Salihah AlOtaibi</creator>
        
        <creator>Muhammad Badruddin Khan</creator>
        
        <subject>Informal Arabic; Sentiment analysis; Opinion Mining (OM); Twitter; YouTube</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Recently, large numbers of users have been using social networks such as Twitter, Facebook, and MySpace to share various kinds of resources and to express their opinions, thoughts, and messages in real time, which increases the amount of user-generated electronic content. Sentiment analysis has thus become a very interesting topic in the research community, and Arabic sentiment analysis deserves more attention. This paper discusses the challenges and obstacles in analyzing the sentiment of informal Arabic on social media. Most recent sentiment analysis research is conducted on English text, and the work that does address Arabic sentiment analysis focuses on formal Arabic. However, most social media networks, such as Twitter and YouTube, use informal (colloquial) Arabic. This paper investigates the problems and challenges of identifying sentiment in informal Arabic, which is the language users mostly employ to express their opinions and feelings in Arabic Twitter and YouTube content.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_37-Sentiment_Analysis_Challenges_of_Informal_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority Task Scheduling Strategy for Heterogeneous Multi-Datacenters in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080236</link>
        <id>10.14569/IJACSA.2017.080236</id>
        <doi>10.14569/IJACSA.2017.080236</doi>
        <lastModDate>2017-02-28T11:25:49.4270000+00:00</lastModDate>
        
        <creator>Naoufal Er-raji</creator>
        
        <creator>Faouzia Benabbou</creator>
        
        <subject>age; cloud computing; cluster; data-center; deadline; length; node; SLA; priority task scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>With the rapid development of science and technology, cloud computing has become widely adopted in several IT (Information Technology) areas. It allows companies as well as researchers to use computing resources as a service over a network such as the Internet without owning the infrastructure. However, due to the increasing demand for cloud computing, the growing number of tasks affects system load and performance, and scheduling multiple tasks while respecting the SLA (Service Level Agreement) can face serious challenges. In order to overcome this problem and provide better quality of service, tasks have to be scheduled in an optimal way. In this paper, we address the problem of priority task scheduling by proposing a global strategy over distributed data-centers in cloud computing based on three parameters: task deadline, task age, and task length.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_36-Priority_Task_Scheduling_Strategy_for_Heterogeneous_Multi_Datacenters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Logarithmic Spiral-based Construction of RBF Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080235</link>
        <id>10.14569/IJACSA.2017.080235</id>
        <doi>10.14569/IJACSA.2017.080235</doi>
        <lastModDate>2017-02-28T11:25:49.4100000+00:00</lastModDate>
        
        <creator>Mohamed Wajih Guerfala</creator>
        
        <creator>Amel Sifaoui</creator>
        
        <creator>Afef Abdelkrim</creator>
        
        <subject>Radial Basis Function neural network; classification; k-means; Davies-Bouldin validity index; Mean Squared Error; Mahalanobis distance; Logarithmic spiral; golden angle; golden ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Clustering is defined as grouping similar objects together into homogeneous groups or clusters: objects that belong to one cluster should be very similar to each other, while objects in different clusters should be dissimilar. It aims to simplify the representation of the initial data, and automatic classification covers all methods that allow the automatic construction of such groups. This paper describes the design of radial basis function (RBF) neural classifiers using a new algorithm for characterizing the hidden layer structure. This algorithm, called k-means Mahalanobis distance, groups the training data class by class in order to calculate the optimal number of clusters in the hidden layer, using two validity indexes. To initialize the clusters of the k-means algorithm, the logarithmic spiral golden angle method is used. Two real data sets (Iris and Wine) are considered to evaluate the efficiency of the proposed approach, and the obtained results are compared with basic classifiers from the literature.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_35-Logarithmic_Spiral_based_Construction_of_RBF_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Sentiment Analysis of Arabic Texts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080234</link>
        <id>10.14569/IJACSA.2017.080234</id>
        <doi>10.14569/IJACSA.2017.080234</doi>
        <lastModDate>2017-02-28T11:25:49.3800000+00:00</lastModDate>
        
        <creator>Sana Alowaidi</creator>
        
        <creator>Mustafa Saleh</creator>
        
        <creator>Osama Abulnaja</creator>
        
        <subject>Arabic Sentiment Analysis; Twitter; Semantic Relations; Arabic WordNet; Machine Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Twitter is considered a rich resource for collecting people&#39;s opinions in different domains and has attracted researchers to develop automatic Sentiment Analysis (SA) models for tweets. In this work, a semantic Arabic Twitter Sentiment Analysis (ATSA) model is developed based on supervised machine learning (ML) approaches and semantic analysis. Most existing Arabic SA approaches represent tweets with the bag-of-words (BoW) model. The main limitation of this model is that it is semantically weak: words are treated as independent features, and the semantic associations between them are ignored. As a result, synonymous words that appear in two tweets are represented as different independent features. To overcome this limitation, this work proposes enriching the tweet representation with concepts, utilizing Arabic WordNet (AWN) as an external knowledge base. In addition, different concept representation approaches are developed and evaluated with na&#239;ve Bayes (NB) and support vector machine (SVM) ML classifiers on an Arabic Twitter dataset. The experimental results indicate that using concept features improves the performance of the ATSA model compared with the basic BoW representation. The improvement reached 4.48% with the SVM classifier and 5.78% with the NB classifier.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_34-Semantic_Sentiment_Analysis_of_Arabic_Texts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Application Development by Applying the MVC and Table Data Gateway in the Annual Program Budget Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080233</link>
        <id>10.14569/IJACSA.2017.080233</id>
        <doi>10.14569/IJACSA.2017.080233</doi>
        <lastModDate>2017-02-28T11:25:49.3470000+00:00</lastModDate>
        
        <creator>A. Medina-Santiago</creator>
        
        <creator>A. Cisneros-G&#243;mez</creator>
        
        <creator>E. M. Melgar-Paniagua</creator>
        
        <creator>G. B. Nango-S&#243;lis</creator>
        
        <creator>E. A. Moreno-L&#243;pez</creator>
        
        <creator>M. E. Castellanos-Morales</creator>
        
        <creator>D. B. Cantoral-D&#237;az</creator>
        
        <creator>L. M. Blanco-Gonzalez</creator>
        
        <subject>WEB Applications; MVC; Data Gateway Table; Software engineering; incremental; iterative</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>This paper is the result of developing a Web application to register the Annual Work Program, in which goals and actions are assigned the financial resources needed to manage the identified annual work program. Five types of users are identified: the first is the Administrator, in charge of monitoring the goals programmed in the period as well as the resources assigned to reach those goals; the second corresponds to the purchasing department, which is in charge of contacting the supplier and informing the finance and warehouse departments of the acquisition through the system; the third corresponds to the warehouse, in charge of validating the material, generating official entry/exit vouchers, and sending the purchase order to finance; the fourth corresponds to finance, which verifies through the system that the whole procedure has been completed before making the payment; and the fifth is made up of all remaining departments. Finally, the system is flexible enough to allow new departments to be added when necessary.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_33-Web_Application_Development_by_Applying_the_MVC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Discrete Cosine Transforms (DCT), Discrete Fourier Transforms (DFT), and Discrete Wavelet Transforms (DWT) in Digital Image Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080232</link>
        <id>10.14569/IJACSA.2017.080232</id>
        <doi>10.14569/IJACSA.2017.080232</doi>
        <lastModDate>2017-02-28T11:25:49.3330000+00:00</lastModDate>
        
        <creator>Rosa A Asmara</creator>
        
        <creator>Reza Agustina</creator>
        
        <creator>Hidayatulloh</creator>
        
        <subject>Digital Image Watermarking; 2D Discrete Cosine Transform (2D DCT); 2D Discrete Fourier Transform (2D DFT); 2D Discrete Wavelet Transform (2D DWT); Least Significant Bit method (LSB); Digital Signal Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Digital image watermarking has recently been used to secure an image by embedding another digital image in it; it is typically used to identify ownership of the copyright of the signal. Frequency-domain transformation methods are widely used in digital image compression and digital image watermarking: being more noise-tolerant, they reduce the weaknesses of classic digital image watermarking techniques such as Least Significant Bit (LSB) methods. Popular transformation methods are the Two-Dimensional Discrete Cosine Transform (2D DCT), Two-Dimensional Discrete Fourier Transform (2D DFT), and Two-Dimensional Discrete Wavelet Transform (2D DWT). This paper presents a comparison of these three transformation methods. The experiments compare watermarked image quality using the Peak Signal-to-Noise Ratio (PSNR), color conversion, image resizing, optical scanning of the image, and the noise tolerance of the watermarked image under added Gaussian noise.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_32-Comparison_of_Discrete_Cosine_Transforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Helpful Statistics in Recognizing Basic Arabic Phonemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080231</link>
        <id>10.14569/IJACSA.2017.080231</id>
        <doi>10.14569/IJACSA.2017.080231</doi>
        <lastModDate>2017-02-28T11:25:49.3170000+00:00</lastModDate>
        
        <creator>Mohamed O.M. Khelifa</creator>
        
        <creator>Yousfi Abdellah</creator>
        
        <creator>Yahya O.M. ElHadj</creator>
        
        <creator>Mostafa Belkasmi</creator>
        
        <subject>automatic speech recognition (ASR); speech recognizer; phonemes recognition; speech database; hidden Markov models (HMMs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The recognition of continuous speech is one of the main challenges in building automatic speech recognition (ASR) systems, especially for phonetically complex languages such as Arabic. ASR research currently seems to be at an impasse: nearly all solutions follow the same general model, and previous research has focused on enhancing its performance by incorporating supplementary features. This paper is part of ongoing research efforts aimed at developing a high-performance Arabic speech recognition system for learning and teaching purposes. It investigates a statistical analysis of certain distinctive features of the basic Arabic phonemes that seems helpful for enhancing the performance of a baseline HMM-based ASR system. The statistics are collected using a particular Arabic speech database involving ten different male speakers and more than eight hours of speech covering all Arabic phonemes. In the HMM modeling framework, the statistics provided are helpful in establishing the appropriate number of HMM states for each phoneme, and they can also be utilized as an initial condition for the EM estimation procedure, which generally accelerates the estimation process and thus improves the performance of the system. The obtained findings are presented, and possible applications in automatic speech recognition and speaker identification systems are also suggested.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_31-Helpful_Statistics_in_Recognizing_Basic_Arabic_Phonemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact Propagation of Human Errors on Software Requirements Volatility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080230</link>
        <id>10.14569/IJACSA.2017.080230</id>
        <doi>10.14569/IJACSA.2017.080230</doi>
        <lastModDate>2017-02-28T11:25:49.2870000+00:00</lastModDate>
        
        <creator>Zahra Askarinejadamiri</creator>
        
        <creator>Abdul Azim Abd Ghani</creator>
        
        <creator>Hazura Zulzallil</creator>
        
        <creator>Koh Tieng Wei</creator>
        
        <subject>Human factor; human errors; requirements volatility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Requirements volatility (RV) is one of the key risk sources in software development and maintenance projects because of the frequent changes made to software. Human faults and errors are major factors contributing to requirement changes in software development projects. As such, predicting requirements volatility is a challenge for risk management in the software area. Previous studies focused only on certain aspects of human error in this area. This study specifically identifies and analyses the impact of human errors on requirements gathering and requirements volatility. It proposes a model based on responses to a survey questionnaire administered to 215 participants with experience in software requirements gathering. Exploratory factor analysis (EFA) and structural equation modelling (SEM) were used to analyse the correlation between human errors and requirements volatility. The results of the analysis confirm the correlation between human errors and RV, and show that human actions have a higher impact on RV than human perception. The study provides insights that help software management understand the socio-technical aspects of requirements volatility in order to control risk. Human actions and perceptions are root causes contributing to the human errors that lead to RV.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_30-Impact_Propagation_of_Human_Errors_on_Software_Requirements_Volatility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of the Support Tool for After-Class Work based on the Online Threaded Bulletin Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080229</link>
        <id>10.14569/IJACSA.2017.080229</id>
        <doi>10.14569/IJACSA.2017.080229</doi>
        <lastModDate>2017-02-28T11:25:49.2700000+00:00</lastModDate>
        
        <creator>Kohei Otake</creator>
        
        <creator>Yoshihisa Shinozawa</creator>
        
        <creator>Tomofumi Uetake</creator>
        
        <subject>Learning Support Tool; Online Threaded Bulletin Board; Network Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>In this paper, based on the assumption that after-class work in an exercise-based course accompanied by group work is done on an online threaded bulletin board system, the authors propose a support tool for instructors. Specifically, focusing on the factors that compose a discussion on the online bulletin board (the users who comment, the topics, and the items or keywords discussed), the authors visualize the relationships among these factors as network diagrams. The authors also propose two indexes, the comment degree and the activation degree, to evaluate the communities formed there. Experiments in which group work was actually carried out with the proposed tool demonstrated that the network diagrams and the evaluation indexes served to distinguish groups with properly proceeding discussions from those without. The authors confirmed that this can enable instructors to easily discover students who do not participate in the discussion and groups with sluggish discussions.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_29-Proposal_of_the_Support_Tool_for_After_Class_Work.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ROI-based Compression on Radiological Image by Urdhva-Tiryagbhyam and DWT Over FPGA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080228</link>
        <id>10.14569/IJACSA.2017.080228</id>
        <doi>10.14569/IJACSA.2017.080228</doi>
        <lastModDate>2017-02-28T11:25:49.2530000+00:00</lastModDate>
        
        <creator>Suma</creator>
        
        <creator>V. Sridhar</creator>
        
        <subject>Radiological Image Compression; Discrete Wavelet Transform; FPGA; Lifting Scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The area of radiological image compression has not yet found an ideal solution. A review of existing compression mechanisms shows that the majority of existing techniques suffer from significant pitfalls, e.g. heavy use of transformation schemes, high resource utilization, delay, little focus on FPGA performance enhancement, and extremely little emphasis on Vedic multipliers. Hence, this paper presents an analytical model of ROI (Region of Interest)-based radiological image compression that applies the Vedic multiplier Urdhva-Tiryagbhyam to enhance the performance of coding with the Discrete Wavelet Transform (DWT). The proposed system was implemented in Matlab and on multiple FPGA test beds (Virtex 4 FX100-12 FF1152 and Spartan 3 XC400-5TQ144) and assessed using both visual and numerical outcomes, showing that it performs better than recently reported techniques.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_28-ROI_based_Compression_on_Radiological_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GSM based Android Application: Appliances Automation and Security Control System using Arduino</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080227</link>
        <id>10.14569/IJACSA.2017.080227</id>
        <doi>10.14569/IJACSA.2017.080227</doi>
        <lastModDate>2017-02-28T11:25:49.2230000+00:00</lastModDate>
        
        <creator>Kainat Fareed Memon</creator>
        
        <creator>Javed Ahmed Mahar</creator>
        
        <creator>Hidayatullah Shaikh</creator>
        
        <creator>Hafiz Ahmed Ali</creator>
        
        <creator>Farhan Ali Surahio</creator>
        
        <subject>android application; gsm module; security system; Arduino</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Nowadays, automation using Android phones plays a significant role in human life, particularly for handicapped and senior citizens. Appliance automation allows users to control different appliances such as lights, fans, fridges, and air conditioners. It also provides security features such as door control, temperature &amp; fire detection, and a water shower. Furthermore, security cameras can be controlled and monitored by users to observe activity around a house. It has been observed that internet services in interior Sindh are not as good as required. Hence, a GSM SIM900A based Android application named Appliances Automation &amp; Security Control System using Arduino has been developed. The developed system is decomposed into two separate entities: (1) the hardware, designed and developed using an Arduino (MEGA 2560) with other required electronic components and programmed in embedded C; and (2) an Android app which gives the user the freedom to control and access the electronic appliances and the security system without internet access. The developed application was tested in Karachi, Sukkur, and Khairpur with ZONG, Mobilink, Telenor, and Ufone. Acceptable results were achieved in Karachi and Sukkur, but suitable results were not obtained in Khairpur in terms of delay, due to the frequency of the selected GSM module.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_27-GSM_based_Android_Application_Appliances_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for an Effective Information Security Awareness Program in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080226</link>
        <id>10.14569/IJACSA.2017.080226</id>
        <doi>10.14569/IJACSA.2017.080226</doi>
        <lastModDate>2017-02-28T11:25:49.2070000+00:00</lastModDate>
        
        <creator>Arash Ghazvini</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>awareness Training Program; Information Security; Content Development; Electronic Health Record; Human Error; Serious Game</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>An Electronic Health Record (EHR) is a valuable asset of every healthcare organization, and it needs to be protected. Human errors are recognized as the major information security threat to EHR systems. Employees who interact with EHR systems should be trained about the risks and hazards related to information security. However, there are limited studies on the effectiveness of such training programs. The aim of this paper is to propose a framework that provides guidelines for healthcare organizations to select an effective information security training delivery method. In addition, this paper proposes a guideline for developing information security content for awareness training programs. Lastly, this study implements the proposed framework in a selected healthcare organization for evaluation: a serious game is developed as a training method to deliver information security content for the selected organization. An effective training program raises employees’ awareness of information security with a long-term impact. It helps to gradually change employees’ behavior over time by reducing their negligence in the secure use of healthcare EHR systems.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_26-A_Framework_for_an_Effective_Information_Security_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finite Element Method Combined with Neural Networks for Power System Grounding Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080225</link>
        <id>10.14569/IJACSA.2017.080225</id>
        <doi>10.14569/IJACSA.2017.080225</doi>
        <lastModDate>2017-02-28T11:25:49.1900000+00:00</lastModDate>
        
        <creator>Liviu Neamt</creator>
        
        <creator>Oliviu Matei</creator>
        
        <creator>Olivian Chiver</creator>
        
        <subject>neural network; finite element analysis; power systems; grounding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Even in homogeneous soil and for simple geometrical structures, the analytical design of a grounding system is a complex and not very accurate procedure. Finite Element Analysis (FEA) allows a precise design for complex grounding systems, but at the cost of substantial hardware resources and computation time. This paper proposes a methodology for power system grounding design that preserves the advantages of FEA without its disadvantages, achieved by emulating the underlying function with neural networks. The vertical rod buried in inhomogeneous soil is the subject of this presentation. Consequently, the first step was to perform FEA for a large number of configurations: different types of vertical rods connected to the surface, buried at various depths in different double-layer soil structures. The results were then interpreted by a multi-layer perceptron (MLP) with one hidden layer. A compromise between the number of inputs and the precision was tested in order to define the minimum number of FEA runs required to obtain an acceptable grounding system design, i.e. a desired grounding resistance, for any combination of the geometrical and material parameters. The methodology was validated against data reported in various research works.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_25-Finite_Element_Method_Combined_with_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RTS/CTS Framework Paradigm and WLAN Qos Provisioning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080224</link>
        <id>10.14569/IJACSA.2017.080224</id>
        <doi>10.14569/IJACSA.2017.080224</doi>
        <lastModDate>2017-02-28T11:25:49.1770000+00:00</lastModDate>
        
        <creator>Mohamed Nj.</creator>
        
        <creator>S. Sahib</creator>
        
        <creator>N. Suryana</creator>
        
        <creator>B. Hussin</creator>
        
        <subject>RTS/CTS; MAC; Internet; Telephony; video; real-time; loss; multimedia; WLAN; mechanism; performance; protocols, collision; framework; transmission; reception; flow control; handshake; MANET; BSS; IBSS; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Wireless local area network (WLAN) communications performance design and management have evolved considerably to reach their current state, passing through numerous technological amendments and innovations. Yet some performance tools have remained almost unchanged and still play a fundamental role in contemporary networking solutions, despite the stronger influence of the latest innovations on their indisputable and important function. That is the case with the request to send (RTS) and clear to send (CTS) protocols. They are among the earlier technologies that enabled transmission control with better performance in the WLAN environment, and they became especially important with the advent of time-sensitive data networking (e.g. Internet telephony and audio/video distribution) over the Internet protocol (IP). Until recent years, and following today&#39;s multimedia WLAN-based network deployment trends, RTS/CTS provided networks with acceptable performance levels prior to the discovery of more sophisticated performance-enhancement methods. One may therefore ask whether newer technologies have rendered RTS/CTS frameworks obsolete, or whether they are now used only for specific network application traffic management. This review comprehensively studies research works that have examined the RTS/CTS mechanism as a tool for supporting WLAN application performance. Various studies have investigated these tools from their early introduction as a built-in network node component, through the different frameworks associated with the legacy WLAN (IEEE 802.11) MAC protocols. This paper analyzes the initial implementation of RTS/CTS as a pure network performance solution from the perspective of packet collision avoidance, and then with respect to transmission delay caused by hidden nodes and their false deployment. The article closes with a critical analysis of the possible long-term contribution of these protocols to integrated-scheme-based WLAN QoS performance design.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_24-RTSCTS_Framework_Paradigm_and_WLAN_Qos_Provisioning_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080223</link>
        <id>10.14569/IJACSA.2017.080223</id>
        <doi>10.14569/IJACSA.2017.080223</doi>
        <lastModDate>2017-02-28T11:25:49.1430000+00:00</lastModDate>
        
        <creator>Dali Wang</creator>
        
        <creator>Fengming Yuan</creator>
        
        <creator>Benjamin Hernandez</creator>
        
        <creator>Yu Pei</creator>
        
        <creator>Cindy Yao</creator>
        
        <creator>Chad Steed</creator>
        
        <subject>Earth System Modeling; Accelerated Climate Modeling for Energy; In-Situ Data Analytics; Virtual Observation System; Functional Unit Testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Investigating and evaluating the physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_23-Virtual_Observation_System_for_Earth_System_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things (IoT) : Charity Automation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080222</link>
        <id>10.14569/IJACSA.2017.080222</id>
        <doi>10.14569/IJACSA.2017.080222</doi>
        <lastModDate>2017-02-28T11:25:49.1300000+00:00</lastModDate>
        
        <creator>Maher Omar Alshammari</creator>
        
        <creator>Abdulmohsen A. Almulhem</creator>
        
        <creator>Noor Zaman</creator>
        
        <subject>Smart city; Smart Charity; Internet of Things (IoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>People live in cities and villages, and their quality of life depends on their profession and earnings. Those with good earnings can live comfortably, while those without face difficulties meeting even basic necessities such as food and clothing. The government and a limited number of charity organizations try to help them. In the Kingdom of Saudi Arabia, a few charity organizations have placed donation boxes around cities to collect donations at the donor&#39;s convenience, but monitoring these boxes regularly has proved hard, which affects the donation process. Involving the Internet of Things (IoT) gives donors a comfortable and fast way to communicate with the charity, making the donation process more efficient, easier, and better organized. This paper presents a smart solution based on advanced technologies, namely Smart Charity (SC), that helps charity organizations, donors, and needy people through IoT. The SC mechanism has two parts: 1) a web-based application and 2) an Android-based smart application that enables donors to donate through their mobiles anywhere and anytime; donors can also suggest the time that suits them best, so a representative of the charity organization can visit and collect the donations. SC also enables the charity organization to locate donors and needy people through GPS. In addition, SC introduces the Smart Donation Box (SDB) concept, an IoT-enabled box capable of reporting its current status, such as quarter, half, or fully filled, to the charity organization.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_22-Internet_of_Things_IoT_Charity_Automation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation of Analytic Decision During Driving Test</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080221</link>
        <id>10.14569/IJACSA.2017.080221</id>
        <doi>10.14569/IJACSA.2017.080221</doi>
        <lastModDate>2017-02-28T11:25:49.0970000+00:00</lastModDate>
        
        <creator>Samir Ghouali</creator>
        
        <creator>Yassine Zakarya Ghouali</creator>
        
        <creator>Mohammed Feham</creator>
        
        <subject>Panel Co-integration; Panel Granger Causality; FMOLS and DOLS Estimators; Cardiorespiratory electromyography galvanic signals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Objective: To examine the long-term causality between cardiorespiratory, electromyography, and galvanic signals for 17 drivers taken from the Stress Recognition in Automobile Drivers database.
Methods: Two statistical methods were used: panel co-integration, to reveal the possible existence of a long-term relationship between ECG (electrocardiograph), EMG (electromyography), GSR (galvanic skin resistance), heart rate (HR), and respiration signals, as well as the application of the Granger causality model.
Results: ECG shows a certain dependence on EMG, GSR, heart rate, and respiration. With ECG as the dependent variable, an increase of 1% in EMG, FOOT GSR, HAND GSR, HR, and RESPIRATION implies a variation of ECG of 0.016248%, 0.007241%, 0.028366%, 0.000511%, and 0.000110%, respectively, in the within dimension based on FMOLS (Fully Modified Ordinary Least Squares). In the same way, an increase of 1% in EMG, FOOT GSR, HAND GSR, HR, and RESPIRATION implies a variation of ECG of 0.065684%, 0.014534%, 0.032800%, 0.000304%, and 0.005986%, respectively, in the between dimension based on the same method. The panel Granger causality results show a bi-directional relationship between ECG and the FOOT GSR, HAND GSR, and respiration signals, along with a unidirectional causality from EMG to ECG.
Conclusion: This study shows the long-term interaction between these bio-signals and reveals how understanding these interactions can help doctors assess the associated risks. The main advantage of a multidimensional, multivariate model is that it addresses a multitude of problems that prevent doctors from treating patients better, which is not the case for two-dimensional studies.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_21-An_Investigation_of_Analytic_Decision_During_Driving_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ant Colony Optimization (ACO) based Routing Protocols for Wireless Sensor Networks (WSN): A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080220</link>
        <id>10.14569/IJACSA.2017.080220</id>
        <doi>10.14569/IJACSA.2017.080220</doi>
        <lastModDate>2017-02-28T11:25:49.0670000+00:00</lastModDate>
        
        <creator>Anand Nayyar</creator>
        
        <creator>Rajeshwar Singh</creator>
        
        <subject>Wireless Sensor Networks; Routing; Routing Protocols; Swarm Intelligence (SI); Ant Based Routing; Ant Colony Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Wireless Sensor Networks (WSNs) face several issues and challenges with regard to energy efficiency, limited computational capability, routing overhead, packet delivery, and more. Designing an energy-efficient routing protocol has always been a limiting factor for WSNs, and various routing protocols based on Swarm Intelligence have been proposed to date to overcome these issues. Swarm Intelligence (SI) is concerned with the collective behavior of systems composed of many components that coordinate among themselves via decentralized control and self-organization. Algorithms based on this nature-inspired intelligence are highly robust, adaptive, and scalable. This paper presents a comprehensive survey of Ant Colony Optimization based routing protocols for WSNs, giving researchers a better platform to address the shortcomings of protocols developed to date and to design efficient routing protocols for WSNs in the near future.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_20-Ant_Colony_Optimization_ACO_based_Routing_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Objective Optimization of Cloud Computing Services for Consumers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080219</link>
        <id>10.14569/IJACSA.2017.080219</id>
        <doi>10.14569/IJACSA.2017.080219</doi>
        <lastModDate>2017-02-28T11:25:49.0500000+00:00</lastModDate>
        
        <creator>Eli WEINTRAUB</creator>
        
        <creator>Yuval COHEN</creator>
        
        <subject>Cloud Computing; Security Risk; Software as a service; Platform as a service; Infrastructure as a service; Optimization; Cost; Utility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>This paper presents a novel multi-objective model for optimizing the purchase decision of a cloud computing services customer. Providers typically offer consumers a variety of cloud-based information systems services, which differ in functionality, cost, and reliability. The customer&#39;s main objectives (based on the literature) are therefore to maximize utility while minimizing cost and risk. Since utility, cost, and risk are different dimensions, the problem is essentially a multi-objective optimization problem, a nature that previous research so far has not addressed. This article optimizes the consumer&#39;s decision while maintaining the considerations of each objective. An optimization model is presented and illustrated. The article also demonstrates the advantages gained by the optimization model when implemented over a dynamic cloud architecture rather than the traditional cloud architecture.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_19-Multi_Objective_Optimization_of_Cloud_Computing_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Model for the Generalised Dispersion of Synovial Fluid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080218</link>
        <id>10.14569/IJACSA.2017.080218</id>
        <doi>10.14569/IJACSA.2017.080218</doi>
        <lastModDate>2017-02-28T11:25:49.0370000+00:00</lastModDate>
        
        <creator>M. Alshehri</creator>
        
        <creator>S. K. Sharma</creator>
        
        <subject>Synovial Fluid; Articular Cartilage; Unsteady diffusion coefficient; Computational model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The metabolic function of synovial fluid is important for understanding normal and abnormal synovial joint motion, especially when seeking the leading causes of degenerative joint disease. The concentration of hyaluronic acid molecules and other high-molecular-weight substances in the synovial fluid may be responsible for dispersing nutrients into the cartilage. A theoretical study of the convective diffusion mechanism occurring in the knee joint is presented. A flow model has been analyzed for a better understanding of the convective diffusion of the viscous flow between the articular surfaces. The governing system of partial differential equations has been solved for a Newtonian fluid with suitable matching conditions. The analytical solution of the unsteady dispersion problem has been obtained to better understand the phenomenon of nutritional transport in the synovial joint. The contributions of convection and diffusion to the dispersion of nutrients are investigated in detail, and the dispersion coefficient has been computed for different values of the viscosity parameter. The results show that the average concentration is negatively correlated with both the axial distance and the time.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_18-Computational_Model_for_the_Generalised_Dispersion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Commercials Identification in Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080217</link>
        <id>10.14569/IJACSA.2017.080217</id>
        <doi>10.14569/IJACSA.2017.080217</doi>
        <lastModDate>2017-02-28T11:25:49.0030000+00:00</lastModDate>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>Umair Amin</creator>
        
        <creator>Waseemullah</creator>
        
        <creator>Muhammad Umer</creator>
        
        <subject>TV commercial; semantic analysis; segmentation; video classification; commercial detection; commercial classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Identifying commercials (ads) in a video stream and measuring their statistics is an essential requirement: the duration of a commercial and the time at which it airs on TV determine its cost to the ad owner, so an automatic system that measures these statistics benefits the ad owner. This research presents a system that segments videos semantically and identifies commercials automatically in broadcast TV transmissions. The proposed technique uses color histograms and SURF features to identify individual ads in a TV transmission video stream, and experimental results on unseen videos demonstrate good ad identification performance. The proposed approach targets television transmissions that, unlike those of European countries, do not insert a blank frame between the ads and the non-ad parts of the transmission, as in Pakistan. The proposed segmentation approach is unsupervised.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_17-Unsupervised_Commercials_Identification_in_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prolonging the Network Lifetime of WSN by using the Consumed Power Fairly Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080216</link>
        <id>10.14569/IJACSA.2017.080216</id>
        <doi>10.14569/IJACSA.2017.080216</doi>
        <lastModDate>2017-02-28T11:25:48.9730000+00:00</lastModDate>
        
        <creator>Ahmed Jamal Ahmed</creator>
        
        <creator>Jiwa Abdullah</creator>
        
        <subject>WSN; network topology; energy consumption; graph theory; Consumed Power Fairly</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>In wireless sensor networks (WSNs), energy saving is always a key concern. Since nodes have limited power, repeatedly using specific routes can exhaust the intermediate nodes along them. These nodes die, creating routing holes in the network, and consequently the overall throughput of the network may be reduced. Therefore, in this study, the proposed mathematical model derives an optimal route by enforcing equal power consumption across all nodes in the network. Moreover, a new routing protocol, termed Consumed Power Fairly (CPF), was designed. This protocol achieves high power efficiency by distributing power consumption equally over all nodes in the network. The proposed model finds the route to the destination with the highest power availability by summing the total power of all nodes from the source to the destination node and subtracting the power consumed by the particular data to be sent. In short, the proposed CPF protocol reduces the number of dead nodes and keeps connectivity high, thereby prolonging the network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_16-Prolonging_the_Network_Lifetime_of_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Framework for Mobile Development Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080215</link>
        <id>10.14569/IJACSA.2017.080215</id>
        <doi>10.14569/IJACSA.2017.080215</doi>
        <lastModDate>2017-02-28T11:25:48.9430000+00:00</lastModDate>
        
        <creator>LACHGAR Mohamed</creator>
        
        <creator>ABDALI Abdelmouna&#239;m</creator>
        
        <subject>Mobile development approaches; Mobile development tools; Cross-platform mobile; Mobile OS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Recently, mobile applications have emerged along with the rising smartphone trend. Nowadays, the large number of mobile operating systems demands additional development effort; to address this, open-source cross-platform mobile frameworks have appeared that allow the same code to run on various operating systems. This paper focuses on commonly used mobile development methods and proposes a process that selects the most suitable solution for a particular need. Finally, a new framework is suggested that helps choose the appropriate approach and tool, respectively, based on a survey of binary questions together with certain criteria.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_15-Decision_Framework_for_Mobile_Development_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Model for Distributed Computing based on Smart Embedded Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080214</link>
        <id>10.14569/IJACSA.2017.080214</id>
        <doi>10.14569/IJACSA.2017.080214</doi>
        <lastModDate>2017-02-28T11:25:48.9100000+00:00</lastModDate>
        
        <creator>Hassna Bensag</creator>
        
        <creator>Mohamed Youssfi</creator>
        
        <creator>Omar Bouattane</creator>
        
        <subject>Distributed computing; parallel computing; Multi Agent System; Embedded computing; Raspberry PI 2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Technological advances in embedded computing have exposed humans to an increasing intrusion of computing into their day-to-day lives (e.g. smart devices). Cooperation, autonomy, and mobility make the agent a promising mechanism for embedded devices. This work presents a new model of an embedded agent designed to be implemented in smart devices in order to perform parallel tasks in a distributed environment. To validate the proposed model, a case study was developed for medical image segmentation using cardiac Magnetic Resonance Images (MRI). The first part of this paper focuses on implementing the parallel classification algorithm based on the C-means method on embedded systems. We then propose a new concept of distributed classification using multi-agent systems based on JADE and Raspberry Pi 2 devices.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_14-Efficient_Model_for_Distributed_Computing_based_on_Smart_Embedded_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model-based Pedestrian Trajectory Prediction using Environmental Sensor for Mobile Robots Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080213</link>
        <id>10.14569/IJACSA.2017.080213</id>
        <doi>10.14569/IJACSA.2017.080213</doi>
        <lastModDate>2017-02-28T11:25:48.8970000+00:00</lastModDate>
        
        <creator>Haruka Tonoki</creator>
        
        <creator>Ayanori Yorozu</creator>
        
        <creator>Masaki Takahashi</creator>
        
        <subject>Prediction of Human Movement; Service Robots; Vector Auto Regressive Models; Kalman Filter; Collision Avoidance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Safety is paramount for mobile robots that coexist with humans. Many studies investigate obstacle detection and collision avoidance by predicting obstacles&#39; trajectories several seconds into the future using mounted sensors such as cameras and laser range finders (LRFs) for the safe behavior control of robots. In environments such as road crossings, where blind areas arise from visual barriers like walls, obstacle detection might be delayed and collisions might be difficult to avoid; in such environments, using environmental sensors to detect obstacles is effective. At a crossing there are several passages a pedestrian might take, and it is difficult to describe movement toward each passage with a single movement model. We therefore hypothesize that a more effective way to predict pedestrian movement is to predict which passage the pedestrian will take and to estimate the trajectory toward it. We acquire pedestrian trajectory data using an environmental LRF with an extended Kalman filter (EKF) and construct pedestrian movement models using vector auto-regressive (VAR) models, where the pedestrian state consists of position, speed, and direction. We then test the validity of the constructed pedestrian movement models using experimental data. We narrow down the selection of a pedestrian movement model by comparing, for each path, the prediction error between the pedestrian state estimated with the EKF and the state predicted by each movement model, and we predict the trajectory using the selected movement model. Finally, we confirm that an appropriate path model that the pedestrian can actually follow is selected before the crossing area, and that only the appropriate model is selected near the crossing area.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_13-Model_based_Pedestrian_Trajectory_Prediction_using_Environmental_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Image Compression and Encryption Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080212</link>
        <id>10.14569/IJACSA.2017.080212</id>
        <doi>10.14569/IJACSA.2017.080212</doi>
        <lastModDate>2017-02-28T11:25:48.8630000+00:00</lastModDate>
        
        <creator>Emy Setyaningsih</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>cryptography; compression; lossless; lossy; compressive sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>In line with the growing need to transmit data and information safely and quickly, research on image protection and security through combinations of cryptographic and compression techniques has begun to take form. The combinations of these two methods fall into three categories based on their process sequence. The first category, a cryptographic technique followed by a compression method, focuses more on image security than on reducing the data size. The second, a compression technique followed by a cryptographic method, has the advantage that the compression can be lossy, lossless, or a combination of both. The third category, compression and encryption performed in a single process, either partially or in the form of compressive sensing (CS), provides good data safety assurance with such low computational complexity that it is well suited to enhancing the efficiency and security of data/information transmission.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_12-Review_of_Image_Compression_and_Encryption_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Threaded Symmetric Block Encryption Scheme Implementing PRNG for DES and AES Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080211</link>
        <id>10.14569/IJACSA.2017.080211</id>
        <doi>10.14569/IJACSA.2017.080211</doi>
        <lastModDate>2017-02-28T11:25:48.8330000+00:00</lastModDate>
        
        <creator>Adi A. Maaita</creator>
        
        <creator>Hamza A. Alsewadi</creator>
        
        <subject>Computer Security; Symmetric cryptography; DES; AES; pseudo random number generators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Due to the ever-increasing efficiency of computer systems, symmetric cryptosystems are becoming more vulnerable to linear cryptanalysis and brute force attacks. For example, DES with its short key (56 bits) is becoming easier to break, while AES has a much longer key size (up to 256 bits), which makes it very difficult to crack using even the most advanced dedicated cryptanalysis computers. However, more complex algorithms, which exhibit better confusion and diffusion characteristics, are always required. Such algorithms must have stronger resistance against differential and linear cryptanalysis attacks. This paper describes the development of an algorithm that implements a pseudo random number generator (PRNG) in order to increase the key generation complexity. Experimental results on both DES and AES cryptosystems complemented with the PRNG have shown an average improvement of up to 36.3% in the avalanche error computation over the original standard systems, which is a considerable improvement for both systems.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_11-A_Multi_Threaded_Symmetric_Block_Encryption_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Server Performance Evaluation in a Virtualisation Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080210</link>
        <id>10.14569/IJACSA.2017.080210</id>
        <doi>10.14569/IJACSA.2017.080210</doi>
        <lastModDate>2017-02-28T11:25:48.8170000+00:00</lastModDate>
        
        <creator>Manjur Kolhar</creator>
        
        <subject>Cloud computing; virtual machine; resource sharing; latency sensitive; web server; multi-tier application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Operational and investment costs are reduced by resource sharing in virtual machine (VM) environments, which also introduces an overhead for hosted services. VM performance is important because of resource contention: when many VMs run over a single hardware platform, they compete for shared resources, e.g., the CPU, network bandwidth, and memory, and an application that takes too long to execute because of CPU or network contention is considered a failure. Therefore, this study focuses on measuring the performance of a web server under a virtual environment and comparing those results with those from a dedicated machine. We found that the difference between the two sets of results is largely negligible. However, in some areas, one approach performed better than the other.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_10-Web_Server_Performance_Evaluation_in_a_Virtualisation_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Mechanism to Prevent Denial of Service Attack in IPv6 Duplicate Address Detection Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080209</link>
        <id>10.14569/IJACSA.2017.080209</id>
        <doi>10.14569/IJACSA.2017.080209</doi>
        <lastModDate>2017-02-28T11:25:48.7870000+00:00</lastModDate>
        
        <creator>Shafiq Ul Rehman</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <subject>Secure-DAD; Duplicate Address Detection; Denial of Service Attack; IPv6 Security; Address auto-configuration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>From the days of ARPANET, with slightly over two hundred connected hosts involving five organizations, to a massive, global, always-on network connecting hosts in the billions, the Internet has become as important as the need for electricity and water. Internet Protocol version 4 (IPv4) could not sustain the growth of the Internet. To ensure the growth was not stunted, a new protocol, Internet Protocol version 6 (IPv6), was introduced that resolves the addressing issue IPv4 had. In addition, IPv6 is laden with new features and capabilities, one of them being address auto-configuration. This feature allows hosts to self-configure without the need for additional services. Nevertheless, the design of IPv6 has led to several security shortcomings. The Duplicate Address Detection (DAD) process required for auto-configuration is prone to a Denial of Service (DoS) attack in which hosts are unable to configure themselves to join the network. Various mechanisms, SeND, SSAS, and most recently Trust-ND, have been introduced to address this issue. Although these mechanisms were able to circumvent DoS attacks on the DAD process, they introduced various side effects, i.e. complexities and degradation of performance. This paper reviews the shortcomings of these mechanisms and proposes a new mechanism, Secure-DAD, that addresses them. The performance comparison between Trust-ND and Secure-DAD also showed that Secure-DAD is more promising, reducing processing time by 45.1% compared to Trust-ND while preventing DoS attacks in the IPv6 DAD process.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_9-Improved_Mechanism_to_Prevent_Denial_of_Service_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Privacy Risks Awareness for Bring Your Own Device (BYOD) Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080208</link>
        <id>10.14569/IJACSA.2017.080208</id>
        <doi>10.14569/IJACSA.2017.080208</doi>
        <lastModDate>2017-02-28T11:25:48.7870000+00:00</lastModDate>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Chen Wai Chan</creator>
        
        <creator>Zakiah Zulkefli</creator>
        
        <subject>Mobile Computing; BYOD Higher Education; Security; Privacy; Malicious Software; Risk</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>The growing trend of BYOD in higher education institutions creates a new form of student learning pedagogy in which students are able to use their mobile devices for academic purposes anywhere and anytime. Security threats in the BYOD paradigm create a great opportunity for hackers or attackers to find new attacks or vulnerabilities that could exploit students’ mobile devices and gain valuable data from them. A survey was conducted to learn the current awareness of the importance of security and privacy in BYOD for higher education in Malaysia. The analysis of this survey demonstrates that the trend of BYOD in Malaysia has begun. Moreover, the survey results show that basic fundamental security and privacy awareness and knowledge of mobile devices and applications is important for students to protect their devices and data.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_8-Security_and_Privacy_Risks_Awareness_for_Bring_Your_Own_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Concepts and Tools for Protecting Sensitive Data in the IT Industry: A Review of Trends, Challenges and Mechanisms for Data-Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080207</link>
        <id>10.14569/IJACSA.2017.080207</id>
        <doi>10.14569/IJACSA.2017.080207</doi>
        <lastModDate>2017-02-28T11:25:48.7700000+00:00</lastModDate>
        
        <creator>Omar Tayan</creator>
        
        <subject>sensitive-data; data-breaches; data-protection; trend analysis; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Advancements in the storage, dissemination and access of multimedia data content on the Internet continue to grow at exponential rates, while individuals, organizations and governments spend huge efforts to exert their fingerprint in this information age through the use of online multimedia resources to propagate thoughts, services, policies, e-commerce and other types of information. Furthermore, information at different levels may be classified into confidential, sensitive and critical data types. Such data has been subject to numerous tools and techniques providing automated information processing, information management and storage mechanisms. Consequently, numerous security tools and techniques have also emerged for the protection of data at the various organizational levels and according to different requirements. This paper discusses three important aspects of information security: data storage, in-transit data and access prevention for unauthorized users. In particular, the paper reviews and presents the latest trends and most common challenges in information security with regard to data breaches and vulnerabilities found in industry today, using brief summaries for the benefit of IT practitioners and academics. Thereafter, state-of-the-art techniques used to secure information content commonly required in application software, in-house operations software or websites are given. Mechanisms for enhancing data protection under the given set of challenges and vulnerabilities are also discussed. Finally, the importance of using information security policies and standards for protecting organizational data content is discussed, along with foreseeable open issues for future work.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_7-Concepts_and_Tools_for_Protecting_Sensitive_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach of nMPRA Architecture using Hardware Implemented Support for Event Prioritization and Treating</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080206</link>
        <id>10.14569/IJACSA.2017.080206</id>
        <doi>10.14569/IJACSA.2017.080206</doi>
        <lastModDate>2017-02-28T11:25:48.7530000+00:00</lastModDate>
        
        <creator>Ionel ZAGAN</creator>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <creator>Vasile Gheorghita GAITAN</creator>
        
        <subject>nMPRA; event treating; mutex; inter-task communication; hardware scheduler</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>One of the fundamental requirements of real-time operating systems is determinism in executing critical tasks and treating multiple periodic or aperiodic events. This paper presents the hardware support of the nMPRA processor (Multi Pipeline Register Architecture) dedicated to treating time events, interrupt events and events associated with synchronization and inter-task communication mechanisms. Because the treatment of events is a very important aspect of real-time systems, this paper describes both the mechanism implemented in hardware for prioritizing and treating multiple events, and the experimental results obtained using a Virtex-7 FPGA circuit. The article&#39;s element of originality is the very short response time achieved in treating and prioritizing events.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_6-An_Approach_of_nMPRA_Architecture_using_Hardware_Implemented_Support.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Graph Theoretic Approach for Minimizing Storage Space using Bin Packing Heuristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080205</link>
        <id>10.14569/IJACSA.2017.080205</id>
        <doi>10.14569/IJACSA.2017.080205</doi>
        <lastModDate>2017-02-28T11:25:48.7230000+00:00</lastModDate>
        
        <creator>Debajit Sensarma</creator>
        
        <creator>Samar Sen Sarma</creator>
        
        <subject>Bin Packing; Combinatorial Optimization; Graph Theory; Heuristics; Operational Research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>In the age of Big Data, storing huge volumes of data in minimum storage space while properly utilizing available resources is an open problem and an important research aspect in recent days. This problem is closely related to the famous classical NP-Hard combinatorial optimization problem known as the “Bin Packing Problem”, where bins represent available storage space and the problem is to store the items or data in a minimum number of bins. This research work mainly focuses on finding a near-optimal solution to the offline one-dimensional Bin Packing Problem based on two heuristics that take advantage of graphs. Additionally, extensive computational results on some benchmark instances are reported and compared with the best known solutions and the solutions produced by four other well-known bin-oriented heuristics. Some future directions of the proposed work are also depicted.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_5-A_Graph_Theoretic_Approach_for_Minimizing_Storage_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tutoring Functions in a Blended Learning System: Case of Specialized French Teaching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080204</link>
        <id>10.14569/IJACSA.2017.080204</id>
        <doi>10.14569/IJACSA.2017.080204</doi>
        <lastModDate>2017-02-28T11:25:48.7070000+00:00</lastModDate>
        
        <creator>Nadia Chafiq</creator>
        
        <creator>Mohammed Talbi</creator>
        
        <subject>Learning scenario; Tutoring functions; platform; linguistic proficiency; Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>There is an emergence of blended learning today, which combines diversified teaching methods, alternating distance learning and classroom learning. Most Moroccan universities are presently aware of the importance of this approach, which appears to be well suited to the Moroccan university context. This article identifies the different roles of the tutor within the blended learning system. More precisely, it presents an experience of implementing a hybrid learning system using the &quot;FOUL&quot; platform. The introduction of this platform is accompanied by a need to develop new skills, be it for a teacher or a student. This experience is motivated, on the one hand, by the supply of additional online resources as a complement to the face-to-face classroom method and, on the other hand, by the personalization of learning and riding out classroom-based learning constraints (e.g. in terms of time, place, staff ...) as is the case in universities. This article addresses the above problems by analyzing student responses to questionnaires and processing the content of synchronous communication between learners/learners and learners/online tutors in order to identify and analyze tutoring functions. It can be concluded that the success of a hybrid learning system is conditioned by the presence of some basic functions, such as pedagogical, organizational and socio-motivational functions. These functions remain dominant in a hybrid learning system.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_4-Tutoring_Functions_in_a_Blended_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation on Information Communication Technology Awareness and Use in Improving Livestock Farming in Southern District, Botswana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080203</link>
        <id>10.14569/IJACSA.2017.080203</id>
        <doi>10.14569/IJACSA.2017.080203</doi>
        <lastModDate>2017-02-28T11:25:48.6770000+00:00</lastModDate>
        
        <creator>Clifford Matsoga Lekopanye</creator>
        
        <creator>Meenakshi Sundaram K</creator>
        
        <subject>ICT; Livestock production; ICT utilization; information access; developing countries; ICT awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>This paper investigated the extent of Information Communication Technology (ICT) usage by livestock keepers and the limitations encountered. The study was conducted with the objective of producing findings that will contribute towards strengthening ICT usage for the development of livestock keeping. To meet this objective, the researcher used a mixed method approach whereby qualitative and quantitative methods were both used. The results of this study showed mobile phone technology to be the most popular ICT, used by 89% of the respondents. Also, 73% of the respondents indicated that they have access to the local radio channel, while television accounted for 59%. Other types of ICTs pointed out by a few respondents are Facebook, Email, the Internet and YouTube. Livestock keepers have identified a number of limitations in using ICTs that need to be addressed, including the high cost of communication, poor mobile communication signal, unawareness of television and radio program schedules, and lack of electricity in rural areas.
In conclusion, this study has identified ICTs such as radio, mobile phones, and television as the types of ICT used most frequently by livestock keepers, though they are not used at a satisfactory level for livestock production. Therefore, the researcher proposed that information systems aimed at delivering information to livestock keepers should be mobile driven rather than computer based. A major approach that could be adopted to address the challenges of radio and television usage is to create livestock programs that combine mobile technology with radio and television programs. Participants and listeners of radio or television programs could use mobile technologies to send in their questions either by calls or short messages.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_3-An_Investigation_on_Information_Communication_Technology_Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent-based Managing for Grid Cloud System — Design and Prototypal Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080202</link>
        <id>10.14569/IJACSA.2017.080202</id>
        <doi>10.14569/IJACSA.2017.080202</doi>
        <lastModDate>2017-02-28T11:25:48.6600000+00:00</lastModDate>
        
        <creator>Osama H. Younis</creator>
        
        <creator>Fathy E. Eassa</creator>
        
        <creator>Fadi F. Fouz</creator>
        
        <creator>Amin Y. Noaman</creator>
        
        <creator>Ayman I. Madbouly</creator>
        
        <creator>Leon J. Osterweil</creator>
        
        <subject>Cloud Computing; Grid; Management; Distributed Systems; Architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Here, we present the design and architecture of an Agent-based Manager for Grid Cloud Systems (AMGCS) using software agents to ensure independence and scalability as the number of resources and jobs increases. AMGCS handles IaaS resources (Infrastructure-as-a-Service — compute, storage and physical resources) and schedules compute-intensive jobs for execution over available resources based on QoS criteria, with optimized task execution and high resource utilization, through the capabilities of grid clouds. This prototypal design and implementation has been tested and has shown a proven ability to increase the reliability and performance of cloud applications by distributing their tasks to more than one cloud system, hence increasing the reliability of user jobs and complex tasks submitted from regular machines.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_2-Agent_based_Managing_for_Grid_Cloud_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualising Arabic Sentiments and Association Rules in Financial Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080201</link>
        <id>10.14569/IJACSA.2017.080201</id>
        <doi>10.14569/IJACSA.2017.080201</doi>
        <lastModDate>2017-02-28T11:25:48.6000000+00:00</lastModDate>
        
        <creator>Hamed AL-Rubaiee</creator>
        
        <creator>Renxi Qiu</creator>
        
        <creator>Dayou Li</creator>
        
        <subject>Opinion mining; Stock market; Twitter; Saudi Arabia; Association text rules; Data mining, Text Visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(2), 2017</description>
        <description>Text mining methods involve various techniques, such as text categorization, summarisation, information retrieval, document clustering, topic detection, and concept extraction. In addition, because of the difficulties involved in text mining, visualisation techniques can play a paramount role in the analysis and pre-processing of textual data. This paper presents two novel frameworks for the classification and extraction of association rules and the visualisation of financial Arabic text, in order to capture both the general structure and the sentiment within an accumulated corpus. Mining unstructured data with natural language processing (NLP) and machine learning techniques can be arduous, especially where the Arabic language is concerned, because of limited research in this area. The results show that our frameworks can readily classify Arabic tweets. Furthermore, they can handle many antecedent text association rules for both the positive class and the negative class.</description>
        <description>http://thesai.org/Downloads/Volume8No2/Paper_1-Visualising_Arabic_Sentiments_and_Association_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Organizing Multipath Routing in Cloud Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080158</link>
        <id>10.14569/IJACSA.2017.080158</id>
        <doi>10.14569/IJACSA.2017.080158</doi>
        <lastModDate>2017-02-08T09:07:28.4670000+00:00</lastModDate>
        
        <creator>Amr Tolba</creator>
        
        <subject>Cloud Computing; Remote resource; Optimizing traffic engineering; Multipath routing; Disjoint paths; Parallel transmission</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>One of the objectives of organizing cloud systems is to ensure effective access to remote resources by optimizing traffic engineering (TE) procedures. This paper considers the traffic engineering problem in a cloud environment using a multipath routing technique. The multipath routing algorithm is used to identify the maximum number of disjoint paths in the graph, which overcomes the problem in the junction area estimation process. Thus, an algorithm that forms a plurality of non-overlapping and partially intersecting paths between any two nodes is proposed. Finally, the conditions for the formation of multipath virtual channels that ensure minimum message build time for the parallel transmission of message parts are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_58-Organizing_Multipath_Routing_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection in Wireless Body Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080157</link>
        <id>10.14569/IJACSA.2017.080157</id>
        <doi>10.14569/IJACSA.2017.080157</doi>
        <lastModDate>2017-02-08T09:07:28.4370000+00:00</lastModDate>
        
        <creator>Nadya El MOUSSAID</creator>
        
        <creator>Ahmed TOUMANARI</creator>
        
        <creator>Maryam EL AZHARI</creator>
        
        <subject>Intrusion detection; cloning attack; 802.15.6; healthcare; WBSN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Recent advances in the electronics and robotics industry have enabled the manufacturing of sensors capable of measuring a set of application-oriented parameters and transmitting them back to the base station for analysis purposes. These sensors are widely used in many applications, including healthcare systems, thus forming Wireless Body Sensor Networks. Medical data must be highly secured, and possible intrusions have to be fully detected in order to proceed with the prevention phase. In this paper, we propose a new intrusion superframe schema for the 802.15.6 standard to detect the cloning attack. The results proved the efficiency of our technique in detecting this type of attack, based on 802.15.6 performance parameters coupled with frequency switching at the radio model.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_57-Intrusion_Detection_in_Wireless_Body_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effectiveness of D2L System: An Evaluation of Teaching-Learning Process in the Kingdom of Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080156</link>
        <id>10.14569/IJACSA.2017.080156</id>
        <doi>10.14569/IJACSA.2017.080156</doi>
        <lastModDate>2017-02-08T09:07:28.3600000+00:00</lastModDate>
        
        <creator>Mohammed Al-Shehri</creator>
        
        <subject>e-learning; Desire2Learn (D2L); Unified Theory of Acceptance and Use of Technology (UTAUT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>High quality education can be achieved through an e-learning system, as it increases educational information accessibility, service availability and accuracy when compared to a conventional face-to-face teaching-learning approach. However, user acceptance is one of the key essentials for the adoption and success of an e-learning system. Many studies revealed that the D2L application is one of the best tools in the adoption of teaching and learning methodologies, and the effectiveness of this application has gained demand within the last few years. This paper investigates the feasibility of applying the UTAUT model to the Desire2Learn (D2L) e-learning system in KSA. The main objective of this study is to evaluate the efficiency of the D2L (e-learning) system based on student acceptance. A questionnaire method was employed to accomplish this study. Based on the UTAUT model, the impact of trust on the adoption of e-learning systems’ services from the students’ perception was studied. Feedback was collected from 213 students to carry out the study. The results of this study indicate that the services offered by the D2L system carry significant weight in the teaching-learning process in Saudi Arabia from the student perspective.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_56-The_Effectiveness_of_D2L_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multithreaded Sliding Window Approach to Improve Exact Pattern Matching Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080155</link>
        <id>10.14569/IJACSA.2017.080155</id>
        <doi>10.14569/IJACSA.2017.080155</doi>
        <lastModDate>2017-01-31T16:33:15.8600000+00:00</lastModDate>
        
        <creator>Ala’a Al-shdaifat</creator>
        
        <creator>Basam Hammo</creator>
        
        <creator>Mohammad Abushariah</creator>
        
        <creator>Esra’a Alshdaifat</creator>
        
        <subject>pattern matching; multithreading; sliding window; Brute Force; Knuth-Morris-Pratt; Boyer-Moore</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In this paper, an efficient pattern matching approach, based on a multithreaded sliding window technique, is proposed to improve the efficiency of common sequential exact pattern matching algorithms, including: (i) Brute Force, (ii) Knuth-Morris-Pratt and (iii) Boyer-Moore. The idea is to divide the text under search into blocks; each block is allocated one or two threads running concurrently. Reported experimental results indicate that the proposed approach improves the performance of the well-known pattern matching algorithms, in terms of search time, especially when the searched patterns are located in the middle or at the end of the text.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_55-Multithreaded_Sliding_Window_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Architecture for Face Recognition using MPI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080154</link>
        <id>10.14569/IJACSA.2017.080154</id>
        <doi>10.14569/IJACSA.2017.080154</doi>
        <lastModDate>2017-01-31T16:33:15.8430000+00:00</lastModDate>
        
        <creator>Dalia Shouman Ibrahim</creator>
        
        <creator>Salma Hamdy</creator>
        
        <subject>Face Recognition; PCA; MPI; Parallel Programming; Distributed memory architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Face recognition applications are widely used in different fields such as security and computer vision. The recognition process should be done in real time to enable fast decisions. Principal Component Analysis (PCA) is considered a feature extraction technique and is widely used in facial recognition applications by projecting images into a new face space. PCA can reduce the dimensionality of the image. However, PCA consumes a lot of processing time due to its highly intensive computational nature. Hence, this paper proposes two different parallel architectures to accelerate the training and testing phases of the PCA algorithm by exploiting the benefits of distributed memory architecture. The experimental results show that the proposed architectures achieve linear speed-up and system scalability on different data sizes from the Facial Recognition Technology (FERET) database.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_54-Parallel_Architecture_for_Face_Recognition_using_MPI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Sensing for Data-Driven Mobility Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080153</link>
        <id>10.14569/IJACSA.2017.080153</id>
        <doi>10.14569/IJACSA.2017.080153</doi>
        <lastModDate>2017-01-31T16:33:15.8130000+00:00</lastModDate>
        
        <creator>Kashif Zia</creator>
        
        <creator>Arshad Muhammad</creator>
        
        <creator>Katayoun Farrahi</creator>
        
        <creator>Dinesh Kumar Saini</creator>
        
        <subject>mobile sensing; data-driven mobility model; agent based models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The use of mobile sensed location data for realistic human track generation is privacy sensitive. People are unlikely to share their private mobile phone data if their tracks were to be simulated. However, the ability to realistically generate human mobility in computer simulations is critical for advances in many domains, including urban planning, emergency handling, and epidemiology studies. In this paper, we present a data-driven mobility model to generate human spatial and temporal movement patterns on a real map applied to an agent based setting. We address the privacy aspect by considering collective participant transitions between semantic locations, defined in a privacy preserving way. Our modeling approach considers three cases which decreasingly use real data to assess the value in generating realistic mobility, considering data of 89 participants over 6079 days. First, we consider a dynamic case which uses data on a half-hourly basis. Second, we consider a data-driven case without time of day dynamics. Finally, we consider a homogeneous case where the transitions between locations are uniform, random, and not data-driven. Overall, we find the dynamic data-driven case best generates the semantic transitions of previously unseen participant data.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_53-Mobile_Sensing_for_Data_Driven_Mobility_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Online Synchronous Brain Wave Signal Pattern Classifier with Parallel Processing Optimization for Embedded System Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080152</link>
        <id>10.14569/IJACSA.2017.080152</id>
        <doi>10.14569/IJACSA.2017.080152</doi>
        <lastModDate>2017-01-31T16:33:15.7800000+00:00</lastModDate>
        
        <creator>Bruno Senzio-Savino</creator>
        
        <creator>Mohammad Reza Alsharif</creator>
        
        <creator>Carlos E. Gutierrez</creator>
        
        <creator>Kamaledin Setarehdan</creator>
        
        <subject>Brain Computer Interface; batch SOM; SVM; Parallel-processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Commercial Brain Computer Interface applications are currently expanding due to the successful widespread dissemination of low-cost devices. Reducing the cost of a traditional system requires appropriate resources, such as proper software tools for signal processing and characterization. In this paper, a methodology for classifying a set of attention and meditation brain wave signal patterns is presented by means of unsupervised signal feature clustering with batch Self-Organizing Maps (b-SOM) and supervised classification by Support Vector Machine (SVM). Previous research on this matter did not combine both methods and also required a considerable amount of computation time. With the use of a small square neuron grid for b-SOM and an RBF kernel SVM, a well-delimited classifier was obtained. The recognition rate was 70% after parameter tuning. In terms of optimization, the parallel b-SOM algorithm drastically reduced the computation time, allowing online clustering and classification for full-length input data.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_52-An_Online_Synchronous_Brain_Wave_Signal_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIT: A Lightweight Encryption Algorithm for Secure Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080151</link>
        <id>10.14569/IJACSA.2017.080151</id>
        <doi>10.14569/IJACSA.2017.080151</doi>
        <lastModDate>2017-01-31T16:33:15.7500000+00:00</lastModDate>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Irfan Ahmed</creator>
        
        <creator>M. Imran Aslam</creator>
        
        <creator>Shujaat Khan</creator>
        
        <creator>Usman Ali Shah</creator>
        
        <subject>IoT; Security; Encryption; Wireless Sensor Network WSN; Khazad</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The Internet of Things (IoT), being a promising technology of the future, is expected to connect billions of devices. The increased number of communications is expected to generate mountains of data, and the security of that data can be a threat. The devices in the architecture are essentially small in size and low powered. Conventional encryption algorithms are generally computationally expensive due to their complexity and require many rounds to encrypt, essentially wasting the constrained energy of the gadgets. A less complex algorithm, however, may compromise the desired integrity. In this paper we propose a lightweight encryption algorithm named Secure IoT (SIT). It is a 64-bit block cipher and requires a 64-bit key to encrypt the data. The architecture of the algorithm is a mixture of a Feistel network and a uniform substitution-permutation network. Simulation results show the algorithm provides substantial security in just five encryption rounds. The hardware implementation of the algorithm is done on a low-cost 8-bit micro-controller, and the results of code size, memory utilization and encryption/decryption execution cycles are compared with benchmark encryption algorithms. The MATLAB code for relevant simulations is available online at https://goo.gl/Uw7E0W.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_51-SIT_A_Lightweight_Encryption_Algorithm_for_Secure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Classification of Twitter Data Belonging to Saudi Arabian Telecommunication Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080150</link>
        <id>10.14569/IJACSA.2017.080150</id>
        <doi>10.14569/IJACSA.2017.080150</doi>
        <lastModDate>2017-01-31T16:33:15.7330000+00:00</lastModDate>
        
        <creator>Ali Mustafa Qamar</creator>
        
        <creator>Suliman A. Alsuhibany</creator>
        
        <creator>Syed Sohail Ahmed</creator>
        
        <subject>sentiment analysis; social networks; supervised machine learning; text mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Twitter has attracted the attention of many researchers owing to the fact that every tweet is, by default, public in nature, which is not the case with Facebook. In this paper, we present sentiment analysis of tweets written in English, belonging to different telecommunication companies in Saudi Arabia. We apply different machine learning algorithms such as the k nearest neighbor (kNN) algorithm, Artificial Neural Networks (ANN), Naïve Bayes, etc. We classified the tweets into positive, negative and neutral classes based on Euclidean distance as well as cosine similarity. Moreover, we also learned similarity matrices for kNN classification. CfsSubsetEvaluation as well as Information Gain was used for feature selection. The results of CfsSubsetEvaluation were better than the ones obtained with Information Gain. Moreover, kNN performed better than the other algorithms and gave 75.4%, 76.6% and 75.6% for Precision, Recall and F-measure, respectively. We were able to get an accuracy of 80.1% with a symmetric variant of kNN while using cosine similarity. Furthermore, interesting trends with respect to days, months, etc. were also discovered.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_50-Sentiment_Classification_of_Twitter_Data_Belonging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Value based PSO Test Case Prioritization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080149</link>
        <id>10.14569/IJACSA.2017.080149</id>
        <doi>10.14569/IJACSA.2017.080149</doi>
        <lastModDate>2017-01-31T16:33:15.7200000+00:00</lastModDate>
        
        <creator>Erum Ashraf</creator>
        
        <creator>Khurrum Mahmood</creator>
        
        <creator>Tamim Ahmed Khan</creator>
        
        <creator>Shaftab Ahmed</creator>
        
        <subject>Test case prioritization (TCP); Particle swarm optimization (PSO); Average percentage of fault detection (APFD); Value based software engineering (VBSE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Regression testing is performed to ensure that changes introduced in software do not affect the rest of the functional software parts. It is inefficient to re-execute all test cases every time changes are made. In this regard, test cases are prioritized according to some criteria to perform efficient testing while meeting limited testing resources. In our research, we propose a value-based particle swarm optimization algorithm for test case prioritization. The aim of our research is to detect maximum faults earlier in the testing life cycle. We introduce a combination of six prioritization factors: customer priority, requirement volatility, implementation complexity, requirement traceability, execution time and fault impact of requirement. This combination of factors has not been used before for prioritization. A controlled experiment was performed on three medium-size projects, and the results were compared with a random prioritization technique. Results are analyzed with the help of the average percentage of fault detection (APFD) metric. The obtained results show that our proposed algorithm is more efficient and robust for an earlier rate of fault detection. Results are also revalidated with our newly proposed validation equation, showing consistent improvement for our proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_49-Value_based_PSO_Test_Case_Prioritization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Community Detection in Networks using Node Attributes and Modularity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080148</link>
        <id>10.14569/IJACSA.2017.080148</id>
        <doi>10.14569/IJACSA.2017.080148</doi>
        <lastModDate>2017-01-31T16:33:15.7030000+00:00</lastModDate>
        
        <creator>Yousra Asim</creator>
        
        <creator>Rubina Ghazal</creator>
        
        <creator>Wajeeha Naeem</creator>
        
        <creator>Abdul Majeed</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Ahmad Kamran Malik</creator>
        
        <subject>Community Detection; Louvain algorithm; Node attributes; Modularity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Community detection in networks is of vital importance for finding cohesive subgroups. Node attributes can improve the accuracy of community detection when combined with link information in a graph. Community detection using node attributes has not been investigated in detail. To explore this idea, we adopt an approach based on modifying the Louvain algorithm. We propose the Louvain-AND-Attribute (LAA) and Louvain-OR-Attribute (LOA) methods to analyze the effect of using node attributes with modularity. We compared this approach with existing community detection approaches using different datasets. We found the performance of both algorithms better than Newman’s Eigenvector method in achieving modularity, with relatively better gains in modularity for LAA than for LOA. We used density, as well as internal and external edge density, to evaluate the quality of the detected communities. LOA provided highly dense partitions in the network compared to the Louvain and Eigenvector algorithms, and values close to Clauset. Moreover, LOA achieved fewer edges between communities.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_48-Community_Detection_in_Networks_using_Node_Attributes_and_Modularity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Model for WWBAN (Wearable Wireless Body Area Network)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080147</link>
        <id>10.14569/IJACSA.2017.080147</id>
        <doi>10.14569/IJACSA.2017.080147</doi>
        <lastModDate>2017-01-31T16:33:15.6730000+00:00</lastModDate>
        
        <creator>Jawad Hussain Awan</creator>
        
        <creator>Shahzad Ahmed Memon</creator>
        
        <creator>Nisar Ahmed Memon</creator>
        
        <creator>Raza Shah</creator>
        
        <creator>Zulifqar Bhutto</creator>
        
        <creator>Rahat Ali Khan</creator>
        
        <subject>Healthcare Environment; Healthcare service; wireless body area networks and wireless sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Advances in sensor miniaturization and wireless networking enable the exploitation of wireless sensor networks to monitor and control the environment. Human health monitoring is a promising application of sensor networks in a healthcare environment. A sensor system worn by the human creates a wireless body area network to monitor patients and provide a synchronized response based on the medical contextual information received by the sensors. However, researchers face the challenging task of addressing habitually conflicting requirements for size, operating time, data accuracy, reliability, and the time to store that data and provide responses accordingly.
This paper encompasses the structural design of both hardware and software in a wireless sensor network system for monitoring health issues. The paper outlines a few healthcare services and the latest innovations and trends for monitoring patients in healthcare systems, and proposes some future trends that might be helpful for future research on handheld devices.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_47-Conceptual_Model_for_WWBAN_Wearable_Wireless_Body_Area_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of J2EE Patterns based on Customizable Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080146</link>
        <id>10.14569/IJACSA.2017.080146</id>
        <doi>10.14569/IJACSA.2017.080146</doi>
        <lastModDate>2017-01-31T16:33:15.6570000+00:00</lastModDate>
        
        <creator>Zaigham Mushtaq</creator>
        
        <creator>Ghulam Rasool</creator>
        
        <creator>Balawal Shahzad</creator>
        
        <subject>Source code analysis; Cross-language; Analysis methods; Reverse Engineering; Source code parsing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Design patterns support the extraction of design information for better program understanding, reusability and reengineering. With the advent of contemporary applications, the extraction of design information has become quite complex and challenging. These applications are multilingual in nature, i.e. their design information is spread across various language components that are interlinked with each other. At present, no approach is available that is capable of extracting the design information of multilingual applications by using design patterns. This paper lays the foundation for the analysis of multilingual source code for the detection of J2EE Patterns. J2EE Patterns provide design solutions for effective enterprise applications. A novel approach is presented for the detection of J2EE Patterns from the multilingual source code of J2EE applications. For this purpose, customizable and reusable feature types are presented as definitions of the J2EE Patterns catalogue. A prototype implementation is evaluated on a corpus that contains a repository of multilingual source code of J2EE Patterns. Additionally, the tool is tested on open source applications. The accuracy of the tool is validated by successfully recognizing J2EE Patterns from the multilingual source code. The results demonstrate the significance of the customizable definitions of the J2EE Patterns catalogue and the capability of the prototype.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_46-Detection_of_J2EE_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design, Release, Update, Repeat: The Basic Process of a Security Protocol’s Evolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080145</link>
        <id>10.14569/IJACSA.2017.080145</id>
        <doi>10.14569/IJACSA.2017.080145</doi>
        <lastModDate>2017-01-31T16:33:15.6270000+00:00</lastModDate>
        
        <creator>Young B. Choi</creator>
        
        <creator>Nathan D. Hunter</creator>
        
        <subject>Security; Protocol; 3PAKE; PGP; quantum; service; SSL; TLS; DSQC; QSDC; cyber; hashing; DOMIH; SHA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Companies, businesses, colleges, etc. throughout the world use computer networks and telecommunications to run their operations. The convenience, information-gathering, and organizational abilities provided by computer networks and the Internet are undeniably useful. However, as computer and network technology continues to become increasingly advanced, the threat and number of cyber-attacks rise. Without advanced and well-developed security protocols, companies, businesses, colleges, and even ordinary individuals would be at the mercy of malicious hackers. This paper focuses on the processes, design, history, advantages, and disadvantages of security protocols such as PGP, 3PAKE, and TLS. Research for this article was conducted through reading numerous scholarly articles, conference articles, and IT and information security textbooks. The purpose of this article is to expose and elaborate on vulnerabilities in popular security protocols, introduce lesser-known protocols with great potential, and lastly, provide suggestions for future directions and modifications to improve security protocols through qualitative and theoretical research.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_45-Design_Release_Update_Repeat_The_Basic_Process_of_a_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Order of Software Testing Techniques in Agile Process – A Systematic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080144</link>
        <id>10.14569/IJACSA.2017.080144</id>
        <doi>10.14569/IJACSA.2017.080144</doi>
        <lastModDate>2017-01-31T16:33:15.6100000+00:00</lastModDate>
        
        <creator>Farrukh Latif Butt</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>Sohail Sarwar</creator>
        
        <creator>Amr Mohsen Jadi</creator>
        
        <creator>Abdul Saboor</creator>
        
        <subject>Agile methodology; software testing techniques; software build; software quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The design and development of a software product requires a great deal of effort, while software testing is an equally challenging and mandatory activity to ensure the quality of the product before shipping to the customer. Under the Agile model, in which software builds are developed very frequently and development proceeds at a very high pace, software testing becomes even more important and critical. Organizations following the agile methodology encounter a number of problems in formulating a software testing process unless they come up with a systematic testing approach based on the right testing technique at the proper stage of the agile process. This paper addresses the relevant software testing techniques feasible at different stages of the agile process and proposes a dedicated software testing framework for producing quality software products developed under the agile methodology.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_44-Optimized_Order_of_Software_Testing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modern Authentication Techniques in Smart Phones: Security and Usability Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080142</link>
        <id>10.14569/IJACSA.2017.080142</id>
        <doi>10.14569/IJACSA.2017.080142</doi>
        <lastModDate>2017-01-31T16:33:15.5770000+00:00</lastModDate>
        
        <creator>Usman Shafique</creator>
        
        <creator>Hikmat Khan</creator>
        
        <creator>Sabah-ud-din Waqar</creator>
        
        <creator>Asma Sher</creator>
        
        <creator>Adnan Zeb</creator>
        
        <creator>Uferah Shafi</creator>
        
        <creator>Rahim Ullah</creator>
        
        <creator>Rehmat Ullah</creator>
        
        <subject>smartphone; authentication; security; attacks; knowledge-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>A smartphone has more advanced computing ability and connectivity than basic featured phones. Presently, we are moving from the Internet society to a mobile society where more and more access to information is required. As a result, mobile security is no longer optional, but imperative. Smartphone authentication has received substantial attention from the research community for the past several years because modern developments beyond the classical PINs and passwords have made user authentication more challenging. In this paper, we critically analyze the attacks on and the vulnerabilities in smartphones’ authentication mechanisms. A comparative analysis of different authentication techniques, along with the usage of the different authentication methods, is discussed to lead the end-user towards choosing the most suitable and customizable authentication technique.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_42-Modern_Authentication_Techniques_in_Smart_Phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation Method of the Total Number of Wild Animals based on Modified Jolly’s Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080143</link>
        <id>10.14569/IJACSA.2017.080143</id>
        <doi>10.14569/IJACSA.2017.080143</doi>
        <lastModDate>2017-01-31T16:33:15.5770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takashi Higuchi</creator>
        
        <creator>Tetsuya Murakami</creator>
        
        <subject>Wild animals; Jolly’s method; Specific wild animal detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>An estimation method for the total number of wild animals and their probabilities of birth and survival, based on Jolly’s method, is proposed. Jolly’s method requires attaching tags to wild animals captured by bank trap, whereas the proposed method requires only identification of the wild animals using camera images. An identification method is also proposed here, along with a method for the detection of specific wild animals. The proposed method is validated through simulations. The proposed method for specific wild animal detection with acquired camera images is also validated. The simulation results show that the proposed Modified Jolly’s Method (MJM) is superior to the conventional Petersen method by 2.65% in terms of the confidence interval of the estimated total number of wild pigs in the simulation cells in concern (128 by 128).</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_43-Estimation_Method_of_the_Total_Number.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MOMEE: Manifold Optimized Modeling of Energy Efficiency in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080141</link>
        <id>10.14569/IJACSA.2017.080141</id>
        <doi>10.14569/IJACSA.2017.080141</doi>
        <lastModDate>2017-01-31T16:33:15.5300000+00:00</lastModDate>
        
        <creator>Rajalakshmi M.C.</creator>
        
        <creator>A.P. Gnana Prakash</creator>
        
        <subject>Wireless Sensor Network; Energy Efficiency; network Lifetime; Optimization; Battery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Although the adoption pace of wireless sensor networks has increased in recent times across many advanced ubiquitous technologies, various open challenges associated with energy efficiency among sensor nodes still remain. We reviewed existing research approaches towards energy optimization techniques to explore significant problems. This paper introduces MOMEE, i.e. Manifold Optimized Modeling of Energy Efficiency, which offers a novel clustering strategy as well as a novel energy-optimized routing strategy. The proposed system uses an analytical modeling methodology and is found to offer better resiliency against traffic bottleneck conditions. The study outcome of MOMEE exhibits a higher number of alive nodes, a lower number of dead nodes, good residual energy, and better throughput compared to existing energy-efficient routing approaches in wireless sensor networks.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_41-MOMEE_Manifold_Optimized_Modeling_of_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From PID to Nonlinear State Error Feedback Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080140</link>
        <id>10.14569/IJACSA.2017.080140</id>
        <doi>10.14569/IJACSA.2017.080140</doi>
        <lastModDate>2017-01-31T16:33:15.5170000+00:00</lastModDate>
        
        <creator>Wameedh Riyadh Abdul-Adheem</creator>
        
        <creator>Ibraheem Kasim Ibraheem</creator>
        
        <subject>tracking differentiator; state error feedback; Lyapunov function; asymptotic stability; nonlinear PID</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In this paper, an improved nonlinear state error feedback controller (INLSEF) is proposed for perfect reference tracking with minimum control energy. It consists of a nonlinear tracking differentiator together with nonlinear combinations of the error signal. The tracking differentiator generates a set of reference profiles for the input signal, namely the signal itself and its derivatives. In addition, the 12-parameter nonlinear combination of the error signal enables the INLSEF controller to handle time-varying behavior and system nonlinearity. Digital simulation studies have been conducted for the proposed controller and compared with several works from the literature on two case studies: a mass-spring-damper system, which is highly nonlinear, and a nonlinear ball-and-beam system. The parameters of the nonlinear combination of the error signal are tuned to satisfy the optimality condition by minimizing the OPI performance index defined in this work. From the simulation results, one can conclude that the proposed INLSEF controller outperforms its counterparts in terms of speed, control energy, and minimum error. The results also show that the proposed controller effectively enhances the stability and performance of the closed-loop system.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_40-From_PID_to_Nonlinear_State_Error.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Real-Time Facial Expression Recognition from Video Sequences based on Hybrid Feature Tracking Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080139</link>
        <id>10.14569/IJACSA.2017.080139</id>
        <doi>10.14569/IJACSA.2017.080139</doi>
        <lastModDate>2017-01-31T16:33:15.5000000+00:00</lastModDate>
        
        <creator>Nehal O. Sadek</creator>
        
        <creator>Noha A. Hikal</creator>
        
        <creator>Fayez W. Zaki</creator>
        
        <subject>facial expression recognition (FER); Hierarchical Support Vector Machine (HSVM); Human computer Interaction (HCI); Real-time facial expression recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In this paper, a method for automatic facial expression recognition (FER) from video sequences is introduced. The features are extracted by tracking facial landmarks. Each landmark component is tracked by an appropriate method, resulting in a hybrid technique that achieves high recognition accuracy with limited feature dimensionality. Moreover, our approach aims to increase system accuracy by improving FER accuracy on the most overlapping expressions while keeping processing time low. Thus, the paper also introduces an intelligent Hierarchical Support Vector Machine (HSVM) to reduce the cross-correlation between confusing expressions. The proposed system was trained and tested using a standard video-sequence dataset covering six facial expressions and compared with previous work. Experimental results show an average recognition accuracy of 96% and an average processing time of 93 msec.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_39-Intelligent_Real-Time_Facial_Expression_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context based Emotion Analyzer for Interactive Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080138</link>
        <id>10.14569/IJACSA.2017.080138</id>
        <doi>10.14569/IJACSA.2017.080138</doi>
        <lastModDate>2017-01-31T16:33:15.4700000+00:00</lastModDate>
        
        <creator>Mubasher H. Malik</creator>
        
        <creator>Syed Ali Raza</creator>
        
        <creator>H.M. Shehzad Asif</creator>
        
        <subject>artificial general intelligence; context based sentiments; emotions; natural language processing; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Emotions can affect human performance considerably. These emotions can be articulated in many ways, such as text, speech, facial expressions, gestures, and postures. Under the influence of their emotions, humans have the ability to perform surprising tasks. In recent years, interactive cognitive agents have been utilized in industrial and non-industrial organizations to interact with persons inside and outside the organization. Existing agents are intelligent enough to communicate like humans by expressing their own emotions and recognizing the emotions of others. One of the main limitations of existing interactive agents is that they can only recognize emotions based on predefined keywords or semantics, instead of analyzing the context in which those keywords are used. The focus of this paper is to study context-based emotions and to present a model that can analyze the context and generate emotions accordingly.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_38-Context_based_Emotion_Analyzer_for_Interactive_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Interaction Flow Modeling Language in Web Development Lifecycle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080137</link>
        <id>10.14569/IJACSA.2017.080137</id>
        <doi>10.14569/IJACSA.2017.080137</doi>
        <lastModDate>2017-01-31T16:33:15.4530000+00:00</lastModDate>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Dayang N.A. Jawawi</creator>
        
        <subject>Interaction Flow Modeling Language; IFML; Web Engineering Methods; Web Development Lifecycle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Two years ago, the Object Management Group (OMG) adopted a new standard method for the web engineering domain named Interaction Flow Modeling Language (IFML). IFML is designed to express the content, user interaction, and control behavior of the front end of applications. Web engineering methods suffer from a number of shortcomings because each is defined for particular specifications; one open issue is support for the whole lifecycle of process development. In this paper, we analyze IFML models across the development lifecycle to show the method’s capability in process development. We then compare IFML with other methods across the lifecycle phases. Finally, we add IFML to the web engineering lifecycle map. It is anticipated that this paper will serve as a guide for developers using IFML in the development of new applications.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_37-Analyzing_Interaction_Flow_Modeling_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Elasticity of SaaS Applications using Queuing Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080136</link>
        <id>10.14569/IJACSA.2017.080136</id>
        <doi>10.14569/IJACSA.2017.080136</doi>
        <lastModDate>2017-01-31T16:33:15.4370000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>auto-scaling; cloud computing; cloud resource scaling; queuing theory; resource provisioning; virtualized resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Elasticity is one of the key features of cloud computing. Elasticity allows Software as a Service (SaaS) application providers to reduce the cost of running applications. In large SaaS applications developed using the service-oriented architecture model, each service is deployed in a separate virtual machine and may use one or more other services to complete its task. Although scaling a service independently of the services it requires propagates the scaling problem to those services, most current elasticity approaches do not consider functional dependencies between services, which increases the probability of violating the service level agreement. In this paper, the architecture of a SaaS application is modeled as a multi-class M/M/m processor-sharing queuing model with deadlines, so that functional dependencies between services are taken into account when estimating the resources required for scaling. Experimental results show the effectiveness of the proposed model in estimating the resources required when scaling virtual resources.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_36-Enhancing_Elasticity_of_SaaS_Applications_using_Queuing_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design, Modeling and Energy Management of a PEM Fuel Cell / Supercapacitor Hybrid Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080135</link>
        <id>10.14569/IJACSA.2017.080135</id>
        <doi>10.14569/IJACSA.2017.080135</doi>
        <lastModDate>2017-01-31T16:33:15.4230000+00:00</lastModDate>
        
        <creator>Wahib Andari</creator>
        
        <creator>Samir Ghozzi</creator>
        
        <creator>Hatem Allagui</creator>
        
        <creator>Abdelkader Mami</creator>
        
        <subject>PEM Fuel Cell; Powertrain; Electric vehicle; Supercapacitor; Energy management; Power system; hydrogen gain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>This work concerns the study and modeling of a hybrid Proton Exchange Membrane (PEM) fuel cell electric vehicle. Specifically, the paper describes the model of the powertrain, which includes two energy sources: a PEM fuel cell as the primary source and a supercapacitor as the secondary source. The architecture has two degrees of freedom, permitting stabilization of the DC bus voltage. The hybridization of the primary source with an energy storage system can improve the vehicle’s dynamic response during transients and reduce hydrogen consumption. The proposed energy management algorithm allows us to achieve minimum hydrogen consumption. This algorithm is based on supercapacitor state-of-charge (SOC) control and on acceleration/deceleration phases, making braking-energy recovery possible. The proposed model is simulated and tested using Matlab/Simulink software, allowing rapid transitions between sources. The results obtained with the New European Driving Cycle (NEDC) demonstrate a 22% gain in hydrogen consumption.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_35-Design_Modeling_and_Energy_Management_of_a_PEM_Fuel_Cell.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Description Logic Application for UML Class Diagrams Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080134</link>
        <id>10.14569/IJACSA.2017.080134</id>
        <doi>10.14569/IJACSA.2017.080134</doi>
        <lastModDate>2017-01-31T16:33:15.3900000+00:00</lastModDate>
        
        <creator>Maxim Sergievskiy</creator>
        
        <subject>UML; domain models; description logic; concept; role; class diagram; design patterns; anti-patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Most known object-oriented development technologies are UML-based; particularly widely used are class diagrams, which serve to describe the model of a software system, reflecting the regularities of the domain. CASE tools used for object-oriented development often lack verification and optimization functions for diagrams. This article discusses one way to present a class diagram in the form of description logic statements and then perform its verification and optimization. The optimization process is based on design patterns and anti-patterns. We show that some transformations can be done automatically, while in other cases suboptimal models need to be adjusted by a designer.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_34-Description_Logic_Application_for_UML_Class_Diagrams_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TinyCO – A Middleware Model for Heterogeneous Nodes in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080133</link>
        <id>10.14569/IJACSA.2017.080133</id>
        <doi>10.14569/IJACSA.2017.080133</doi>
        <lastModDate>2017-01-31T16:33:15.3600000+00:00</lastModDate>
        
        <creator>Atif Naseer</creator>
        
        <creator>Basem Y Alkazemi</creator>
        
        <creator>Hossam I Aldoobi</creator>
        
        <subject>wireless sensor networks; middleware; heterogeneous network; interoperability; service-oriented architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Wireless sensor networks (WSNs) typically contain multiple nodes of the same configuration and type. The biggest challenge nowadays is communicating with heterogeneous nodes belonging to different WSNs. To communicate with distinct networks, an application requires a generic middleware able to translate requests for the different WSNs. Most wireless nodes run the TinyOS or Contiki operating systems, which differ in their architecture, configuration, and programming model; because of this divergence, an application cannot communicate with heterogeneous networks directly. In this paper, we design and implement TinyCO, a generic middleware model for WSNs, which overcomes these challenges. TinyCO is a general-purpose service-oriented middleware model that can identify heterogeneous networks based on TinyOS and Contiki and allows applications to communicate with these networks using a generic request, which the middleware interprets into the signatures of the underlying networks. The proposed middleware is implemented in Java and tested on TelosB motes.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_33-TinyCO_A_Middleware_Model_for_Heterogeneous_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEBFTC: Extended Energy Balanced with Fault Tolerance Capability Protocol for WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080132</link>
        <id>10.14569/IJACSA.2017.080132</id>
        <doi>10.14569/IJACSA.2017.080132</doi>
        <lastModDate>2017-01-31T16:33:15.3300000+00:00</lastModDate>
        
        <creator>Mona M. Jamjoom</creator>
        
        <subject>wireless sensor network; clustering; EBC; energy; fault tolerant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>This paper proposes a new framework for wireless sensor networks (WSNs) that combines two routing protocol algorithms. The proposed framework takes into consideration the energy balanced clustering (EBC) protocol in WSNs together with fault tolerance capabilities. The organizer is automatically selected by the base station (BS), and the organizer then selects the cluster head (CH). The mechanism for selecting the organizer node and the cluster head is based on power, efficacy, and energy load balance. In addition, the organizer is responsible for selecting a new CH in case of failure, and vice versa. The energy balanced clustering and fault tolerance operations thus prolong node lifetime, making the network more efficient in data transmission and more reliable. The implemented framework is named the Extended Energy Balanced with Fault Tolerance Capability (EEBFTC) protocol.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_32-EEBFTC_Extended_Energy_Balanced.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Method for Constructing Fixed Right Shift (FRS) Code for SAC-OCDMA Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080131</link>
        <id>10.14569/IJACSA.2017.080131</id>
        <doi>10.14569/IJACSA.2017.080131</doi>
        <lastModDate>2017-01-31T16:33:15.3130000+00:00</lastModDate>
        
        <creator>Hassan Yousif Ahmed</creator>
        
        <creator>K. S. Nisar</creator>
        
        <creator>Medin Zeghid</creator>
        
        <creator>S. A. Aljunid</creator>
        
        <subject>FRS; OCDMA; MAI; BER; MFH; MQC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In optical code division multiple access (OCDMA) systems, the multiple access interference (MAI) problem, which grows with the number of users actively involved in the network, strongly bounds network performance. In this paper, an algorithm named fixed right shift (FRS) is proposed for generating binary code sequences based on the spectral amplitude coding (SAC) technique for CDMA in optical communication systems. The algorithm is built with minimum cross-correlation (MCC) using a type of Jordan matrix and straightforward algebraic methods. Using the code weight W and the number of users N, various sets of binary code sequences are constructed for both even and odd combinations. Furthermore, this algorithm allows users with different code sequences to transmit data with a minimal likelihood of interference. Simulation results show that, for an acceptable bit error rate (BER) of 10^-12, our technique can support a higher number of users in deterministic and stochastic methods compared with reported techniques such as Modified Quadratic Congruence (MQC) and Modified Frequency Hopping (MFH).</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_31-Numerical_Method_for_Constructing_Fixed_Right_Shift.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diabetes Disease Diagnosis Method based on Feature Extraction using K-SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080130</link>
        <id>10.14569/IJACSA.2017.080130</id>
        <doi>10.14569/IJACSA.2017.080130</doi>
        <lastModDate>2017-01-31T16:33:15.2970000+00:00</lastModDate>
        
        <creator>Ahmed Hamza Osman</creator>
        
        <creator>Hani Moetque Aljahdali</creator>
        
        <subject>K-means Clustering; Diabetes Patients; SVM; Diagnosis; Accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Nowadays, diabetes is considered one of the key causes of death among people in the world. The availability of extensive medical information motivates the search for proper tools to support physicians in diagnosing diabetes accurately. This research aims at improving diagnostic accuracy and reducing diagnostic misclassification based on the extracted significant diabetes features. Feature selection is critical to the quality of classifiers built through knowledge discovery approaches, and thereby to solving the classification problems relating to diabetes patients. This study proposes an approach integrating the SVM technique with the K-means clustering algorithm to diagnose diabetes. Experimental results achieved high accuracy in differentiating the hidden patterns of diabetic and non-diabetic patients compared with modern diagnosis methods in terms of the performance measures. The T-test statistical method showed significant improvement for the K-SVM technique when tested on the UCI Pima Indians standard dataset.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_30-Diabetes_Disease_Diagnosis_Method_based_on_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Printed Arabic Text Recognition using Linear and Nonlinear Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080129</link>
        <id>10.14569/IJACSA.2017.080129</id>
        <doi>10.14569/IJACSA.2017.080129</doi>
        <lastModDate>2017-01-31T16:33:15.2670000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>Arabic text recognition; printed text; linear regression; ellipse regression; font recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Arabic is one of the most popular languages in the world. Hundreds of millions of people in many countries around the world speak Arabic as their native language. However, due to the complexity of the Arabic language, recognition of printed and handwritten Arabic text remained largely untouched for a very long time compared with English and Chinese. Although a significant number of studies on recognizing printed and handwritten Arabic text have been conducted in the last few years, it is still an open research field due to the cursive nature of Arabic script. This paper proposes an automatic printed Arabic text recognition technique based on linear and ellipse regression. After collecting all possible forms of each character, a unique code is generated to represent each character form. Each code contains a sequence of lines and ellipses. To recognize fonts, a unique list of codes is identified and used as a fingerprint of each font. The proposed technique has been evaluated using over 14,000 different Arabic words in different fonts, and experimental results show that the average recognition rate of the proposed technique is 86%.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_29-Printed_Arabic_Text_Recognition_using_Linear.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying Applicability Feasibility of OFDM in Upcoming 5G Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080128</link>
        <id>10.14569/IJACSA.2017.080128</id>
        <doi>10.14569/IJACSA.2017.080128</doi>
        <lastModDate>2017-01-31T16:33:15.2500000+00:00</lastModDate>
        
        <creator>Nagapushpa K.P</creator>
        
        <creator>Chitra Kiran N</creator>
        
        <subject>OFDM; 5G; Next Generation; Peak to Average Power Ratio (PAPR); multi-carrier; Waveforms; Multiple Access</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Orthogonal frequency-division multiplexing (OFDM) has been one of the most successful multiplexing techniques to date. However, with the emergence of next-generation mobile standards such as 5G, the applicability of OFDM is questionable. The prime reason is that, in order to offer the higher data rates and extensive networking services of 5G, OFDM will be required to overcome some inherent problems: spectral leakage, power consumption, and limited support for increased channel capacity. The research community still believes there is positive scope for OFDM to be applicable in 5G networks, provided it undergoes certain changes. This paper reviews some of the complementary waveforms that have been theoretically shown to give OFDM systems an added edge. The paper reviews different waveforms as well as multiple access techniques to be used in designing 5G technology and assesses their effectiveness from the viewpoint of research applicability and of leveraging 5G.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_28-Studying_Applicability_Feasibility_of_OFDM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending Unified Modeling Language to Support Aspect-Oriented Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080127</link>
        <id>10.14569/IJACSA.2017.080127</id>
        <doi>10.14569/IJACSA.2017.080127</doi>
        <lastModDate>2017-01-31T16:33:15.2200000+00:00</lastModDate>
        
        <creator>Rehab Allah Mohamed Ahmed</creator>
        
        <creator>Amal Elsayed Aboutabl</creator>
        
        <creator>Mostafa-Sami M. Mostafa</creator>
        
        <subject>Aspect-Oriented Software Development; Model Driven Architecture; Eclipse Modeling Framework; Object Management group; UML; AspectJ</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Aspect-Oriented Software Development (AOSD) is continuously gaining importance as the complexity of software systems increases and requirement changes become frequent. A smart way to reuse functionality without additional effort is to separate the functional and non-functional requirements. Aspect-oriented software development supports the capability of separating requirements based on concerns. AspectJ is one of the aspect-oriented implementations of Java. Using Model Driven Architecture (MDA) specifications, an AspectJ model representing AspectJ elements can be created in an abstract way, with the ability to be applied in UML, Java, or XML. Eclipse is one of the open-source tools that support MDA and follow the Object Management Group (OMG) standards for both UML and MDA, providing an implementation of MDA through the Eclipse Modeling Framework (EMF). This paper focuses on creating a UML profile, i.e., a UML extension that supports the language specifications of AspectJ, using EMF. Our work is based on the latest UML specification (UML 2.5) and uses MDA to enable the inclusion of aspect-oriented concepts in the design process.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_27-Extending_Unified_Modeling_Language_to_Support_Aspect_Oriented.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering Semantic and Sentiment Correlations using Short Informal Arabic Language Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080126</link>
        <id>10.14569/IJACSA.2017.080126</id>
        <doi>10.14569/IJACSA.2017.080126</doi>
        <lastModDate>2017-01-31T16:33:15.2030000+00:00</lastModDate>
        
        <creator>Salihah AlOtaibi</creator>
        
        <creator>Muhammad Badruddin Khan</creator>
        
        <subject>Opinion Mining; Sentiment analysis; semantic analysis; Twitter; Informal Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Semantic and sentiment analysis have received a great deal of attention over the last few years due to the important role they play in many different fields, including marketing, education, and politics. Social media has given researchers tremendous opportunities to collect huge amounts of data as input for their semantic and sentiment analyses. Using the Twitter API, we collected around 4.5 million Arabic tweets and used them to propose a novel automatic unsupervised approach to capture patterns of words and sentences with similar contextual semantics and sentiment in informal Arabic, at both the word and sentence levels. We used a Language Modeling (LM) model, a statistical model that can effectively estimate the distribution of natural language. In experiments, the proposed model showed better performance than the classic bigram and latent semantic analysis (LSA) models in most cases at the word level. To handle the big data, we used different text processing techniques followed by removal of unique words based on their relevance to the problem.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_26-Discovering_Semantic_and_Sentiment_Correlations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Explicit Knowledge Management and Reuse in Higher Educational Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080125</link>
        <id>10.14569/IJACSA.2017.080125</id>
        <doi>10.14569/IJACSA.2017.080125</doi>
        <lastModDate>2017-01-31T16:33:15.1730000+00:00</lastModDate>
        
        <creator>Sanjiv Sharma</creator>
        
        <creator>O.K. Harsh</creator>
        
        <subject>Knowledge Management; Higher Education; Software Engineering; Knowledge Reuse; Harsh Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The role of knowledge management and knowledge reuse in a higher educational environment has been investigated analytically using the Nonaka &amp; Takeuchi and Harsh models. It has been observed that, in a three-dimensional environment, knowledge management and reuse together play a key role for both students and faculty if they can be appropriately exploited. A comprehensive system can be built which can benefit both students and faculty in wider areas of their respective knowledge management. Special benefits of knowledge reuse may be seen if knowledge reusability is treated as an independent quantity along with explicit and tacit knowledge. The current model reveals analytically that knowledge reusability may boost the operative knowledge in an educational organization and may have its own sovereign reality. The present work may also be helpful in managing knowledge during software reuse and associated activities.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_25-Role_of_Explicit_Knowledge_Management_and_Reuse.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Depth Partitioning and Coding Mode Selection Statistical Analysis for SHVC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080124</link>
        <id>10.14569/IJACSA.2017.080124</id>
        <doi>10.14569/IJACSA.2017.080124</doi>
        <lastModDate>2017-01-31T16:33:15.1400000+00:00</lastModDate>
        
        <creator>Ibtissem Wali</creator>
        
        <creator>Amina Kessentini</creator>
        
        <creator>Mohamed Ali Ben Ayed</creator>
        
        <creator>Nouri Masmoudi</creator>
        
        <subject>Video Coding; HEVC; SHVC; Coding efficiency; Statistical analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The Scalable High Efficiency Video Coding (SHVC) extension has been proposed to improve coding efficiency. However, this additional extension generally results in significant coding complexity. Several studies have been performed to overcome this complexity through algorithmic optimizations that led to a reduction in encoding time. In fact, mode decision analysis is imperative in order to characterize the partitioning modes based on two parameters, namely prediction unit size and frame type. This paper presents statistical observations at two levels: the coding units (CUs) and prediction units (PUs) selected by the encoder. The analysis was performed on several test sequences with different motion and texture characteristics. The experimental results show that the percentage of a coding or prediction unit size and type being chosen depends on the sequence parameters, frame type, and temporal level.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_24-Depth_Partitioning_and_Coding_Mode_Selection_Statistical_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SSL based Webmail Forensic Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080123</link>
        <id>10.14569/IJACSA.2017.080123</id>
        <doi>10.14569/IJACSA.2017.080123</doi>
        <lastModDate>2017-01-31T16:33:15.1100000+00:00</lastModDate>
        
        <creator>Manesh T</creator>
        
        <creator>Abdalla A Alameen</creator>
        
        <creator>Mohemmed Sha M</creator>
        
        <creator>Mohamed Mustaq Ahmed A</creator>
        
        <creator>Mohamed Yacoab M.Y.</creator>
        
        <creator>Bhadran V K</creator>
        
        <creator>Abraham Varghese</creator>
        
        <subject>Forensics; Network Sessions; Packet Drop; Secure Data Aggregation; Sensor Nodes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In this era of information technology, email applications are the foremost and most extensively used electronic communication technology. Emails are profusely used to exchange data and information through several front-end applications from various service providers. Currently, most email clients and service providers have moved to secured data communication using SSL or TLS for the data they exchange. Cyber criminals and terrorists have started using this mode for exchanging malicious information in their transactions. Forensic experts face greater difficulty and multiple challenges in tracing crucial forensic information from network packets because the communication is secured. These challenges might hinder digital forensic experts in procuring substantial evidence against such criminals from their working environments. This research work reveals the working of an SSL-based webmail forensic engine, which decrypts the respective communication or network session and reconstructs the actual message contents of webmail applications. The digital forensic engine is compatible with proxy servers and other computing environments and enables forensic reconstruction followed by analysis of webmail clients. The proposed forensic engine employs a high-speed packet capturing hardware module and a sophisticated packet reformation algorithm, and restores email headers and messages from encrypted streams of SMTP and POP3 network sessions. It also supports cyber investigation teams with a generated forensic report and the prosecution of culprits by the judicial system of the specific country.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_23-SSL_based_Webmail_Forensic_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Semantically-Time-Referrer based Approach of Web Usage Mining for Improved Sessionization in Pre-Processing of Web Log</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080122</link>
        <id>10.14569/IJACSA.2017.080122</id>
        <doi>10.14569/IJACSA.2017.080122</doi>
        <lastModDate>2017-01-31T16:33:15.0930000+00:00</lastModDate>
        
        <creator>Navjot Kaur</creator>
        
        <creator>Himanshu Aggarwal</creator>
        
        <subject>Web Usage Mining; User Identification; Session Identification; Semantics; Data Cleaning; Time Heuristics; Referrer Heuristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Web usage mining (WUM), also known as web log mining, is the application of data mining techniques to large volumes of data in order to extract useful and interesting user behaviour patterns from web logs, with the goal of improving web-based applications. This paper aims to improve data discovery by mining usage data from log files. The work is carried out in three phases. The first and second phases, data cleaning and user identification respectively, are completed using traditional methods. The third phase, session identification, is done using three different methods. The main focus of this paper is the sessionization of the log file, which is a critical step in extracting usage patterns. The proposed referrer-time and semantically-time-referrer methods overcome the limitations of traditional methods. The main advantage of the pre-processing model presented in this paper over other methods is that it can process a text or Excel log file of any format. The experiments, performed on three different log files, indicate that the proposed semantically-time-referrer based heuristic approach achieves better results than the traditional time and referrer-time based methods. The proposed methods are not complex to use. The web log files are collected from different servers and contain the public information of visitors. In addition, this paper also discusses different types of web log formats.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_22-A_Novel_Semantically_Time_Referrer_based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emergence of Unstructured Data and Scope of Big Data in Indian Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080121</link>
        <id>10.14569/IJACSA.2017.080121</id>
        <doi>10.14569/IJACSA.2017.080121</doi>
        <lastModDate>2017-01-31T16:33:15.0800000+00:00</lastModDate>
        
        <creator>S S Kolhatkar</creator>
        
        <creator>M Y Patil</creator>
        
        <creator>S P Kolhatkar</creator>
        
        <creator>M S Paranjape</creator>
        
        <subject>Big Data; Indian Education; Unstructured data; Big Data Analytics; Comparative of Big data and Relational Database; Scope of Big Data based Applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The Indian education sector has grown exponentially in the last few decades, as per various official reports [22]. A large amount of information pertaining to the education sector is generated every year. This has led to the requirement for managing and analyzing the structured and unstructured information related to various stakeholders. At the same time, there is a need to adapt to the dynamic global world by channelizing young talent into appropriate domains, by cognizing and deriving knowledge about individual student preferences hidden within the vast amount of education data. The derived knowledge concerns finer information related to courses, facilities, the quality of institutes, etc., as well as the analysis of unstructured educational learning resources present in the form of multimedia data. The desire to cater to stakeholders for decision making related to courses, admissions, career planning, etc., has also accentuated big data analytics.
Various MIS or ERP systems handling structured information for educational applications exist to aid administration and managerial processes. These systems are useful for customizing software applications per institute or course, generating various customizable reports, and aiding the decision-making process related to institutes. The need to store, maintain, and analyze unstructured multimedia content online has generated a need for big data and data analytics. This paper discusses the emergence of unstructured data, compares the features provided by relational databases and big data, and identifies the scope of big data in the Indian education sector.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_21-Emergence_of_Unstructured_Data_and_Scope_of_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Privacy Ontology for Ubiquitous Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080120</link>
        <id>10.14569/IJACSA.2017.080120</id>
        <doi>10.14569/IJACSA.2017.080120</doi>
        <lastModDate>2017-01-31T16:33:15.0470000+00:00</lastModDate>
        
        <creator>Narmeen Zakaria Bawany</creator>
        
        <creator>Zubair A. Shaikh</creator>
        
        <subject>Privacy Ontology; Data Privacy; Location based privacy; Time based privacy; Ubiquitous Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Privacy is the ability to understand, choose, and regulate what personal data one shares, with whom, for how long, and under what context. Data owners must not lose their rights of ownership once the data is shared. Privacy decisions have many components, including identity, access granularity, time, and context. We propose an ontology-based model for data privacy configuration in terms of a producer and a consumer. The producer is an IP entity who owns the data, that is, the data owner. The consumer is an IP entity with whom data is shared. We also differentiate between the consumer and the data holder, likewise an IP entity, which may not have the same access rights as the consumer. As we rely on Semantic Web technologies to enable these privacy preferences, our proposed vocabulary is platform independent and can thus be used by any system relying on these technologies. Ideally, producers can specify a set of attributes which consumers must satisfy in order to be granted access to the requested information. Privacy can be configured not only in terms of typical read and edit permissions; novel attributes like location and time are also included in the proposed ontology.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_20-Data_Privacy_Ontology_for_Ubiquitous_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Method of Population Classification According to Level of Social Resilience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080119</link>
        <id>10.14569/IJACSA.2017.080119</id>
        <doi>10.14569/IJACSA.2017.080119</doi>
        <lastModDate>2017-01-31T16:33:15.0330000+00:00</lastModDate>
        
        <creator>Coulibaly Kpinna Tiekoura</creator>
        
        <creator>Brou Konan Marcellin</creator>
        
        <creator>Babri Michel</creator>
        
        <creator>Souleymane Oumtanaga</creator>
        
        <subject>genetic algorithm; Unsupervised classification; social resilience; Partitioning method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Following the many natural disasters and global socio-economic upheavals of the 21st century, the concept of resilience is increasingly the subject of research aimed at finding appropriate responses to these traumas. However, most existing work on resilience is limited to a broad cross-disciplinary panel of non-operational theoretical approaches. Thus, the study of the processes of social resilience is confronted with modeling difficulties and a lack of appropriate analysis tools. Moreover, the existing stratification methods are too general to take the specificities of resilience into account and are difficult for non-specialists in modeling to use. In addition, most traditional partition-search methods have limitations, including their inability to exploit the search space effectively. In this paper, we propose a classification algorithm based on the technique of genetic algorithms and adapted to the context of social resilience. Our objective function, after penalization by two criteria, allows the space of candidate solutions to be explored widely while favouring classes that are fairly homogeneous and well separated from one another.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_19-Evolutionary_Method_of_Population_Classification_According.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computationally Efficient P-LRU based Optimal Cache Heap Object Replacement Policy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080118</link>
        <id>10.14569/IJACSA.2017.080118</id>
        <doi>10.14569/IJACSA.2017.080118</doi>
        <lastModDate>2017-01-31T16:33:15.0000000+00:00</lastModDate>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Rashidah F. Olanrewaju</creator>
        
        <creator>Roohie Naaz Mir</creator>
        
        <creator>Abdul Raouf Khan</creator>
        
        <creator>S. H. Yusoff</creator>
        
        <subject>cache heap object replacement; garbage collectors; Java Virtual Machine; pseudo LRU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The recent advancement in the field of distributed computing points to a need for developing highly associative and less expensive cache memories for state-of-the-art processors, i.e., Intel Core i6, i7, etc. Hence, various conventional studies have introduced cache replacement policies, which are one of the prominent key factors determining the effectiveness of a cache memory. Most conventional cache replacement algorithms are found to be inefficient with respect to memory management and complexity. Therefore, a significant and thorough analysis is required to suggest a new optimal solution to the state-of-the-art cache replacement issues. The proposed study aims to conceptualize a theoretical model for optimal cache heap object replacement. The proposed model incorporates a tree-based and MRU (Most Recently Used) pseudo-LRU (Least Recently Used) mechanism and configures it with the JVM’s garbage collector to replace old referenced objects in the heap cache lines. The performance analysis of the proposed system illustrates that it outperforms the conventional state-of-the-art replacement policies with much lower cost and complexity. It also shows that the percentage of hits on the cache heap is relatively higher than with conventional technologies.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_18-A_Computationally_Efficient_P_LRU_based_Optimal_Cache_Heap.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scheduling in Desktop Grid Systems: Theoretical Evaluation of Policies &amp; Frameworks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080117</link>
        <id>10.14569/IJACSA.2017.080117</id>
        <doi>10.14569/IJACSA.2017.080117</doi>
        <lastModDate>2017-01-31T16:33:14.9870000+00:00</lastModDate>
        
        <creator>Muhammad Khalid Khan</creator>
        
        <creator>Tariq Mahmood</creator>
        
        <creator>Syed Irfan Hyder</creator>
        
        <subject>desktop grid systems; task scheduling policies; work fetch policies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Desktop grid systems have already established their identity in the area of distributed systems. They are well suited for High Throughput Computing, especially for Bag-of-Tasks applications. In desktop grid systems, the idle processing cycles and memory of millions of users (connected through the internet or any other communication mechanism) can be utilized, but the worker/host machines are not under any centralized administrative control, which results in high volatility. This issue is countered by applying various types of scheduling policies that not only ensure task assignment to better workers but also take care of fault tolerance through replication and other mechanisms. In this paper, we discuss leading desktop grid system frameworks and perform a comparative analysis of these frameworks. We also present a theoretical evaluation of server- and client-based scheduling policies and identify key performance indicators to evaluate these policies.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_17-Scheduling_in_Desktop_Grid_Systems_Theoretical_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Empowering Hearing Impaired Students&#39; Skills in Computing and Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080116</link>
        <id>10.14569/IJACSA.2017.080116</id>
        <doi>10.14569/IJACSA.2017.080116</doi>
        <lastModDate>2017-01-31T16:33:14.9700000+00:00</lastModDate>
        
        <creator>Nihal Esam Abuzinadah</creator>
        
        <creator>Areej Abbas Malibari</creator>
        
        <creator>Paul Krause</creator>
        
        <subject>computer programming education; deaf; hearing-impaired students; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Studies have shown that deaf and hearing-impaired students have many difficulties in learning applied disciplines such as Medicine, Engineering, and Computer Programming. This study aims to investigate the readiness of deaf students to pursue higher education in applied sciences, more specifically in computer science. This involves investigating their capabilities in computer skills and applications. Computer programming is an integral component of the technological field that can facilitate the development of further scientific advances. Devising a manner of teaching the deaf and hearing-impaired population will give them an opportunity to contribute to the technology sector. This would allow these students to join the scientific world when they would otherwise generally be unable to participate because of the limitations they encounter. The study showed that deaf students in Jeddah are eager to continue their higher education and that a large percentage of these students are keen on studying computer science, particularly if they are provided with the right tools.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_16-Towards_Empowering_Hearing_Impaired_Students_Skills.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outcome based Assessment using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080115</link>
        <id>10.14569/IJACSA.2017.080115</id>
        <doi>10.14569/IJACSA.2017.080115</doi>
        <lastModDate>2017-01-31T16:33:14.9400000+00:00</lastModDate>
        
        <creator>Abraham Varghese</creator>
        
        <creator>Shajidmon Kolamban</creator>
        
        <creator>Jagath Prasad Sreedhar</creator>
        
        <creator>Sankara Nayaki</creator>
        
        <subject>Outcome based Education; Course Learning Outcome; Fuzzy Logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Outcome Based Education (OBE), or student-centered learning, is one of the key components of quality assurance and enhancement in higher education. The OBE approach encourages students to become active learners rather than being passive as in the traditional teacher-centered learning approach. In OBE, the teacher is a facilitator of the teaching-learning process; therefore, the quality of the teaching-learning process depends not on how a lecturer teaches the course but on the skill or knowledge achieved by the students. The level of attainment of the Course Learning Outcomes (CLOs) is the indicator of the skill, knowledge, and behavior that students have acquired at the end of the course. Therefore, each and every activity conducted in the classroom has to be reflected in the assessment of the course outcome, which is measurable. In this paper, an efficient way of assessing the course learning outcome using fuzzy logic is presented. The uniqueness of the method is that it gives an accurate measure of the attainment level of the course by considering every parameter enabling the learning process.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_15-Outcome_based_Assessment_using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Antenna Performance Improvement Techniques for Energy Harvesting: A Review Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080114</link>
        <id>10.14569/IJACSA.2017.080114</id>
        <doi>10.14569/IJACSA.2017.080114</doi>
        <lastModDate>2017-01-31T16:33:14.9070000+00:00</lastModDate>
        
        <creator>Raed Abdulkareem Abdulhasan</creator>
        
        <creator>Abdulrashid O. Mumin</creator>
        
        <creator>Yasir A. Jawhar</creator>
        
        <creator>Mustafa S. Ahmed</creator>
        
        <creator>Rozlan Alias</creator>
        
        <creator>Khairun Nidzam Ramli</creator>
        
        <creator>Mariyam Jamilah Homam</creator>
        
        <creator>Lukman Hanif Muhammad Audah</creator>
        
        <subject>energy harvesting; slotted patch; circularly polarization; solar substrate; rectenna</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Energy harvesting is defined as using energy available within the environment to increase the efficiency of an application. This method is recognized as a useful way to overcome the battery power limitation of wireless devices. In this paper, several antenna designs for energy harvesting are introduced. The improved results are summarized as follows: a 2&#215;2 patch array antenna achieves an efficiency 3.9 times higher than a single patch antenna. An antenna's bandwidth was enhanced by 22.5 MHz after loading two slots on the patch. A solar cell antenna allows energy harvesting during daylight. A pair of E-patch antennas increased the bandwidth by 33% and the directivity up to 20 dBi. The received power can be improved by 1.2-1.4 times when using the dual port on a pixel antenna. A complementary split ring resonator and substrate integrated waveguide were utilized with cavity-backed feeding on a fractal patch antenna to enhance the bandwidth by around 5.1%. Moreover, adding a rectifier circuit to an antenna converts the received RF signal to DC power and multiplies the input voltage up to the total number of rectifier circuit stages. Therefore, the advantages and disadvantages of each antenna depend on the technique used in the design.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_14-Antenna_Performance_Improvement_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non-Linear Distance Transformation Algorithm and its Application in Medical Image Processing in Healthcare</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080113</link>
        <id>10.14569/IJACSA.2017.080113</id>
        <doi>10.14569/IJACSA.2017.080113</doi>
        <lastModDate>2017-01-31T16:33:14.8930000+00:00</lastModDate>
        
        <creator>Yahia S. AL-Halabi</creator>
        
        <subject>Distance map; complexity; nonlinear; medical; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Medical image processing is one of the most demanding domains of the computing sciences. The importance of the domain lies in the CPU and memory requirements of the systems used to compute the results. Moreover, the volume of the data is often very large in terms of space and typically requires many processing tools. At the same time, the tools have to be available as real-time applications in order to be used by physicians. On the other hand, computational complexity is another significant issue in the processing of the data. Therefore, the development of advanced and optimized algorithms is now sought as the way to improve the effectiveness and processing capability of image processing systems. Thus, the distance map (DT) has emerged as one of the most influential types of algorithms for enhancing the computational capacity of the unit and producing better results for the actual outcome of the system. The results of the output have been highly in favour of the distance map algorithm. Moreover, the output has shown an increasing trend in the performance of the system and has been quite prominent in terms of acquiring resources. It is concluded that the application of distance map algorithms provides a reliable alternative that can be applied in the field to improve the running norms and further speed up the existing procedures.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_13-Non_Linear_Distance_Transformation_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Optimum Frequency Controller of Hybrid Pumping System: Bond Graph Modeling-Simulation and Practice with ARDUINO Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080112</link>
        <id>10.14569/IJACSA.2017.080112</id>
        <doi>10.14569/IJACSA.2017.080112</doi>
        <lastModDate>2017-01-31T16:33:14.8600000+00:00</lastModDate>
        
        <creator>MEZGHANI Dhafer</creator>
        
        <creator>OTHMANI Hichem</creator>
        
        <creator>SASSI Fares</creator>
        
        <creator>MAMI Abdelkader</creator>
        
        <creator>DAUPHIN-TANGUY Genevi&#232;ve</creator>
        
        <subject>Hybrid power systems; Control systems; Optimization; Photovoltaic; wind turbine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>The strategy of rural development in Tunisia needs to include the control of water as one of its priorities. In seeking solutions for the energy control dedicated to pumping, it seems interesting to explore the benefits of a new technique based on the complementarity of two renewable energy sources, solar and wind power. The system&#8217;s dependence on climate requires complex modelling and further optimization methods for controlling the hybrid system. Moreover, in recent years, technological progress in hardware and software has enabled researchers to address these optimization problems using embedded platforms. In this paper, we apply the bond graph approach to model a complex system. Our hybrid pumping installation contains a photovoltaic generator, a wind source, converters and an induction motor-pump group. The numerical closed-loop simulation of the complete model in an appropriate environment allows us to generate an optimization control whose appropriate frequency depends on meteorological conditions (wind speed, insolation and temperature). The implementation of this control and the experimental measurements validate the optimum efficiency and verify the operational reliability of our hybrid structure.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_12-A_New_Optimum_Frequency_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Conception of a Tunable RF MEMS Resonator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080111</link>
        <id>10.14569/IJACSA.2017.080111</id>
        <doi>10.14569/IJACSA.2017.080111</doi>
        <lastModDate>2017-01-31T16:33:14.8300000+00:00</lastModDate>
        
        <creator>Bassem Jmai</creator>
        
        <creator>Adnen Rajhi</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>RF MEMS; CPW; Meander inductor; Tunable; Resonator; MMIC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>This paper presents a new monolithic microwave integrated circuit (MMIC) based on a coplanar waveguide (CPW) design for a tunable resonator based on RF MEMS. This RF structure, which can be used for system on chip (SOC), consists of a MEMS bridge placed between two meander inductors, and the tunability is controlled by a variable applied DC voltage. Moreover, this device is compact and can operate at high frequencies. The resonant frequency and the bandwidth can be changed easily by changing the bridge gap of the RF MEMS. The numerical simulations of this novel tunable RF MEMS resonator structure were performed with the electromagnetic solver CST MWS (Computer Simulation Technology Microwave Studio) and validated with the more accurate electromagnetic solver HFSS (High Frequency Structural Simulator). The simulation results, for three different spacings of the bridge gap, show that the tunable frequency band lies between 10 and 40 GHz with both electromagnetic solvers, exhibiting three resonant frequencies (21, 23.1 and 24.6 GHz). The CST simulation of the return loss achieves 29 dB with an insertion loss of less than 1 dB; the HFSS simulation shows similar performance in the resonant frequencies and the bandwidth, giving better results in terms of return loss (about 35 dB instead of 29 dB) and showing a good adaptation.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_11-Novel_Conception_of_a_Tunable_RF_MEMS_Resonator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interoperable Data Framework to Manipulate the Smart City Data using Semantic Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080110</link>
        <id>10.14569/IJACSA.2017.080110</id>
        <doi>10.14569/IJACSA.2017.080110</doi>
        <lastModDate>2017-01-31T16:33:14.8130000+00:00</lastModDate>
        
        <creator>Majdi Beseiso</creator>
        
        <creator>Abdulkareem Al-Alwani</creator>
        
        <creator>Abdullah Altameem</creator>
        
        <subject>Smart cities; Smart Data Integration; Big data; IOT; Software architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>During the last decade, enormous volumes of urban data have been produced by government agencies, NGOs and citizens. We are thus presented with diverse sets of data that hold valuable information. This information can be extracted and analyzed for a number of uses that serve the well-being of citizens. The major impediment to achieving this goal is the data itself: the available data are redundant, scattered and come in various legacy formats. Data interoperability, scalability and integration are paramount issues that cannot be resolved unless the scattered data silos are accessible in a standard representation. In this paper, we propose a framework that resolves data interoperability and the associated challenges in the smart city environment. The framework takes raw smart city data from several sources, stores it in a NoSQL database, and transforms the scattered data into machine-processable data. In addition, the database is linked with an API and a simple dashboard for further analysis, which can be used to build big data applications based on urban data so that government agencies can get a summarized overview of resource distribution.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_10-An_Interoperable_Data_Framework_to_Manipulate_the_Smart_City_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis and Survey of Ant Colony Optimization based Rule Miners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080108</link>
        <id>10.14569/IJACSA.2017.080108</id>
        <doi>10.14569/IJACSA.2017.080108</doi>
        <lastModDate>2017-01-31T16:33:14.7830000+00:00</lastModDate>
        
        <creator>Zulfiqar Ali</creator>
        
        <creator>Waseem Shahzad</creator>
        
        <subject>Classification Rule; Ant Colony Optimization; Data Mining; Rule Discovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>In this research study, we analyze the performance of bio-inspired classification approaches by selecting the Ant-Miners (Ant-Miner, cAnt_Miner, cAnt_Miner2 and cAnt_MinerPB) for the discovery of classification rules, in terms of accuracy, terms per rule, number of rules, running time and model size discovered by the corresponding rule mining algorithm. Classification rule discovery is still a challenging and emerging research problem in the field of data mining and knowledge discovery. Rule-based classification has become a cutting-edge research area due to its importance and its popular application areas, including banking, market basket analysis, credit card fraud detection, customer behaviour, stock market prediction and protein sequence analysis. Various approaches have been proposed for the discovery of classification rules, such as Artificial Neural Networks, Genetic Algorithms, Evolutionary Programming, SVM and Swarm Intelligence. This research study focuses on classification rule discovery by Ant Colony Optimization. For the performance analysis, the Myra tool is used for experiments on 18 public datasets (available in the UCI repository). The datasets are selected with varying numbers of instances, attributes and classes. This paper also provides a focused survey of Ant-Miners for the discovery of classification rules.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_8-Comparative_Analysis_and_Survey_of_Ant_Colony.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Classification Model for Imbalanced Medical Data based on PCA and Farther Distance based Synthetic Minority Oversampling Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080109</link>
        <id>10.14569/IJACSA.2017.080109</id>
        <doi>10.14569/IJACSA.2017.080109</doi>
        <lastModDate>2017-01-31T16:33:14.7830000+00:00</lastModDate>
        
        <creator>NADIR MUSTAFA</creator>
        
        <creator>JIAN-PING LI</creator>
        
        <creator>Raheel A. Memon</creator>
        
        <creator>Mohammed Z. Omer</creator>
        
        <subject>Principal Component Analysis; Information Gain; Farther Distance based Synthetic Minority Oversampling; Correlation based Feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Medical data are extensively used in the diagnosis of human health, so they play a vital role for physicians as well as in medical engineering. Accordingly, much research is ongoing in this area to achieve better prediction of diseases or to improve diagnosis quality. However, most researchers work on either the dimensionality space or imbalanced data alone. As a result, one may not obtain accurate predictions or classifications of malignant diseases, as both factors are equally important. Further work is still required to address these biomedical challenges by combining both factors. This paper therefore proposes a new and efficient combined algorithm based on FD_SMOTE (Farther Distance based Synthetic Minority Oversampling Technique) and Principal Component Analysis (PCA), which successfully reduces the high dimensionality and balances the minority class. The proposed algorithm has been investigated on biomedical data and gives the desired results in terms of dimensionality reduction and data balancing. The quality of the dimensionality reduction and balanced data has been evaluated using assessment metrics such as covariance, Accuracy (ACC) and Area Under the Curve (AUC). The numerical results show that the algorithm achieved the best accuracy in terms of the ACC and AUC metrics.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_9-A_Classification_Model_for_Imbalanced_Medical_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing the Adoption of Cloud Computing by Saudi University Hospitals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080107</link>
        <id>10.14569/IJACSA.2017.080107</id>
        <doi>10.14569/IJACSA.2017.080107</doi>
        <lastModDate>2017-01-31T16:33:14.7500000+00:00</lastModDate>
        
        <creator>Seham S. Almubarak</creator>
        
        <subject>cloud computing; (TOE) framework; (DOI) theory; technological innovation; IT adoption; healthcare; Saudi hospitals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>This study aims to evaluate the adoption of cloud computing in Saudi university hospitals and to investigate the factors that impact the adoption. This study integrates the Technological, Organizational, Environmental (TOE) framework and the Diffusion of Innovation (DOI) theory, and adds the decision-maker context to the original model. The study sample included Saudi university hospitals in Riyadh city. The data were collected using semi-structured interviews and a questionnaire. The results identify the five most significant factors influencing the adoption of cloud computing in Saudi university hospitals, which are, in order: Relative advantage, Decision-maker&#39;s innovativeness, Decision-maker&#39;s knowledge in IT, Compatibility, and Top management support. Moreover, among the four contexts, the most important is the Decision-maker context, followed by the Technological context, then the Organizational context, and finally the Environmental context. The findings can guide hospitals in making better decisions regarding cloud computing adoption. Scholars can use this study to gain a more holistic understanding of cloud computing adoption and apply new theories in this field.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_7-Factors_Influencing_the_Adoption_of_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Insights on Error-Resilient Image Transmission Schemes on Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080106</link>
        <id>10.14569/IJACSA.2017.080106</id>
        <doi>10.14569/IJACSA.2017.080106</doi>
        <lastModDate>2017-01-31T16:33:14.7200000+00:00</lastModDate>
        
        <creator>Bharathi Gururaj</creator>
        
        <creator>G Sadashivappa</creator>
        
        <subject>Wireless Image Transmission; Wireless Networks; Fading; Error-Correction; Channel Coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Usage of images as data (or signals) is quite frequent in the majority of user-centric applications. However, transmission of images over a non-concrete communication medium like air is still vulnerable due to the inherent weaknesses of wireless communication, e.g. interference, noise, scattering, fading and security. Wireless image transmission still has unsolved problems when it comes to error resiliency. In this paper, we review the significant research contributions of the last 5 years on wireless image transmission and channel coding mechanisms, and investigate the effectiveness of the techniques based on their advantages and limitations. We also extract a significant research gap that requires immediate attention. Hence, we propose our future direction of work, with an indicative architectural design, to address the problems identified in the research gap from the existing literature. This paper briefs the reader on existing systems and practical approaches to solving such problems.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_6-Insights_on_Error_Resilient_Image_Transmission_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Video Editing for Mobile Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080105</link>
        <id>10.14569/IJACSA.2017.080105</id>
        <doi>10.14569/IJACSA.2017.080105</doi>
        <lastModDate>2017-01-31T16:33:14.6900000+00:00</lastModDate>
        
        <creator>Ignasi Vegas Pajaro</creator>
        
        <creator>Ankur Agrawal</creator>
        
        <creator>Tina Tian</creator>
        
        <subject>iOS programming; Image processing; GPU; CPU; Objective-C; GPUImage; OpenGL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Recording, storing and sharing video content has become one of the most popular uses of smartphones. This has resulted in demand for video editing apps that users can use to edit their videos before sharing them on various social networks. This study describes a technique to create a video editing application that uses the processing power of both the GPU and the CPU to perform various editing tasks. The results and subsequent discussion show that using the processing power of both the GPU and CPU in the video editing process makes the application much more time-efficient and responsive than CPU-only processing.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_5-Efficient_Video_Editing_for_Mobile_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Particle Swarm Optimization and Genetic Algorithm based on Task Scheduling in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080104</link>
        <id>10.14569/IJACSA.2017.080104</id>
        <doi>10.14569/IJACSA.2017.080104</doi>
        <lastModDate>2017-01-31T16:33:14.6570000+00:00</lastModDate>
        
        <creator>Frederic Nzanywayingoma</creator>
        
        <creator>Yang Yang</creator>
        
        <subject>Execution Time; Task Scheduling Algorithms; Particle Swarms (PSO); Genetic Algorithm (GA); Virtual Machines (VMs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Since the beginning of cloud computing technology, the task scheduling problem has never been easy. Because of its NP-complete nature, a large number of task scheduling techniques have been suggested by different researchers to solve this complicated optimization problem. Heuristic methods are worth employing to obtain optimal or near-optimal solutions. In this work, a combination of two heuristic algorithms is proposed: particle swarm optimization (PSO) and the genetic algorithm (GA). Firstly, we list the pros and cons of each algorithm and express its best interest in maximizing resource utilization. Secondly, we conduct a performance comparison based on the two most critical objective functions of task scheduling problems, namely the execution time and computation cost of tasks in cloud computing. Thirdly, we compare our results with other existing heuristic algorithms from the literature. The experimental results were examined with benchmark functions and showed that particle swarm optimization (PSO) performs better than the genetic algorithm (GA), although the two behave similarly because both are population-based search methods. The results also showed that the proposed hybrid model outperforms standard PSO, dramatically reducing the execution time and lowering the processing cost on the computing resources.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_4-Analysis_of_Particle_Swarm_Optimization_and_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building a Penetration Testing Device for Black Box using Modified Linux for Under $50</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080103</link>
        <id>10.14569/IJACSA.2017.080103</id>
        <doi>10.14569/IJACSA.2017.080103</doi>
        <lastModDate>2017-01-31T16:33:14.6270000+00:00</lastModDate>
        
        <creator>Young B. Choi</creator>
        
        <creator>Kenneth P. LaCroix</creator>
        
        <subject>Penetration Testing; Black Box; White Box; Modified Linux; Raspberry Pi (RPi); Kali Linux</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>This study analyzes the use of a Raspberry Pi (RPi) as part of a penetration tester&#8217;s toolkit. The RPi&#8217;s form factor and performance-to-cost ratio, used in conjunction with modified Linux, make the RPi a very versatile product. What is more, the RPi retails for $35 and is available from many hobby shops and on Amazon.com. Included in this research is the use of a virtual lab to which the RPi is attached using an Ethernet connection. Simple attacks are carried out, with a few suggestions for preventing this scenario from playing out in the real world.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_3-Building_a_Penetration_Testing_Device_for_Black_Box.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Permutation of Web Search Query Types for User Intent Privacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080102</link>
        <id>10.14569/IJACSA.2017.080102</id>
        <doi>10.14569/IJACSA.2017.080102</doi>
        <lastModDate>2017-01-31T16:33:14.5800000+00:00</lastModDate>
        
        <creator>Kato Mivule</creator>
        
        <subject>Web search query privacy; user intent privacy; search engines; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Privacy remains a major concern when using search engines to find information on the web, due to the fact that search engines devote massive resources to preserving the search logs of each user and organization. Moreover, many of the present query search privacy practices require the very same search engine and third parties to collaborate, making privacy even more difficult. Therefore, as a contribution, we present permutation of web search query types, a non-cryptographic heuristic that works by forming obfuscated search queries via permutation of query keyword categories. Preliminary results from this study show that web search query and specific user intent privacy might be achievable from the user side, without the involvement of the search engine or other third parties, by the permutation of web search query types.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_2-Permutation_of_Web_Search_Query_Types_for_User_Intent_Privacy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approach for Acquiring Computer Systems to Satisfy Mission Capabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2017</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2017.080101</link>
        <id>10.14569/IJACSA.2017.080101</id>
        <doi>10.14569/IJACSA.2017.080101</doi>
        <lastModDate>2017-01-31T16:33:14.5170000+00:00</lastModDate>
        
        <creator>Glenn Tolentino</creator>
        
        <creator>Jeff Tian</creator>
        
        <creator>Jerrell Stracener</creator>
        
        <subject>Systems Integration; Systems Development; Systems Reliability; Software Systems; Systems Engineering; Lifecycle Cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 8(1), 2017</description>
        <description>Defense computer systems developed and maintained over the years have resulted in thousands of disparate, compartmented, focused, and mission-driven systems that are utilized daily for deliberate and crisis mission planning activities. The defense acquisition community is responsible for the development and sustainment of these systems over the course of the systems engineering lifecycle, from conception through utilization to the eventual decommissioning of these systems. While missions are being planned and satisfied by existing computer systems, new missions are being proposed which cannot be satisfied by the capability of any single existing computer system. This raises the question of whether a Networked Computer System (NCS) using combinations of existing and developmental computer systems is preferable for satisfying new capability requirements. This paper explores an approach to identifying a preferred NCS solution and determining its effectiveness in satisfying a mission.</description>
        <description>http://thesai.org/Downloads/Volume8No1/Paper_1-Approach_for_Acquiring_Computer_Systems_to_Satisfy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach of Graph Realization for Data Hiding using Huffman Encoding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071256</link>
        <id>10.14569/IJACSA.2016.071256</id>
        <doi>10.14569/IJACSA.2016.071256</doi>
        <lastModDate>2017-01-07T09:26:48.9930000+00:00</lastModDate>
        
        <creator>Fatema Akhter</creator>
        
        <creator>Md. Selim Al Mamun</creator>
        
        <subject>Data hiding; Graph steganography; Huffman encoding; Steganalytic attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>The rapid advancement of technology has changed the way we live, and sharing information has become inevitable in everyday life. However, it encounters many security issues when dealing with secret or private information, and the transmission of such sensitive information has become highly important and received much attention. Therefore, various techniques have been exercised for information security. Graph steganography is a way of hiding information by translating it into plotted data in a graph. Because of the numerous uses of graphs in everyday life, the transmission can take place without drawing any attention. In this paper, we propose a new graph realization technique for steganography that looks innocent and imperceptible to present-day steganalytic attacks, so that the hidden message can only be read by its intended recipient. The secret message is first translated to prefix codes using Huffman encoding. Then the prefix code for each separate word in the message is plotted in a graph. The proposed technique offers high embedding capacity and imperceptibility due to the prefix representation and word-by-word encoding of the message. The experimental outcomes show strong resistance to steganalytic attacks in contrast to other approaches.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_56-A_New_Approach_of_Graph_Realization_for_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Energy Consumption in Wireless Sensor Networks using Ant Colony Algorithm and Autonomy Mechanisms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071214</link>
        <id>10.14569/IJACSA.2016.071214</id>
        <doi>10.14569/IJACSA.2016.071214</doi>
        <lastModDate>2017-01-07T09:26:48.9470000+00:00</lastModDate>
        
        <creator>Javad Mozaffari</creator>
        
        <creator>Mehdi EffatParvar</creator>
        
        <subject>wireless sensor networks; ant algorithm; network stability; energy consumption reduction; network coverage; network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>A wireless sensor network includes hundreds or thousands of nodes with limited energy. Since the lifetime of each sensor equals the battery life of the sensor, energy is a fundamental challenge. In this article, a parallel ant algorithm and an exclusive territoriality algorithm, together with a node self-determination capability, are used to improve energy consumption, network lifetime and network coverage. For routing, the nodes use both the direct-send method and hierarchical clustering with a carrier cluster head. This article evaluates network stability based on two main factors: reducing energy consumption to extend network life, and increasing network coverage. The simulation output in this paper shows an improved energy consumption balance, an extended network lifetime (the first death time) and network imperative life (the last death time), representing higher network performance than LEACH, direct transmission and other methods. The purpose of this article is therefore to provide a better approach than previous methods, based on an extended ant algorithm, to reduce energy consumption under hardware limitations.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_14-Reducing_Energy_Consumption_in_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Art of Crypto Currencies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071255</link>
        <id>10.14569/IJACSA.2016.071255</id>
        <doi>10.14569/IJACSA.2016.071255</doi>
        <lastModDate>2016-12-31T12:28:35.4700000+00:00</lastModDate>
        
        <creator>Sufian Hameed</creator>
        
        <creator>Sameet Farooq</creator>
        
        <subject>Crypto Currency; Bitcoin; Ripple; Litecoin; Dash coin; Stellar</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Crypto currencies have recently gained enormous popularity amongst the general public. With each passing day, more and more companies are accepting crypto currencies in their payment systems, paving the way for an economic revolution. Currently, more than 700 crypto currencies are available at Coindesk alone for trading purposes. As of November 2016, crypto currencies hold a total market share of over 14 billion USD [5]. With no centralized institution to monitor the movement of funds, crypto currencies and their users are susceptible to multiple threats. In this paper we present an effort to explain the functionality of some of the most popular crypto currencies available in the online market. We present an analysis of the mining methodologies employed by these currencies to induce new currency into the market and of how they compete with each other to provide fast, decentralized transactions to the users. We also discuss some of the most dangerous attacks that can be mounted on these crypto currencies and how the overall model of the crypto currencies mitigates these attacks. Towards the end, we present a taxonomy of five highly popular crypto currencies and compare their features.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_55-The_Art_of_Crypto_Currencies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Defined Security Service Provisioning Framework for Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071254</link>
        <id>10.14569/IJACSA.2016.071254</id>
        <doi>10.14569/IJACSA.2016.071254</doi>
        <lastModDate>2016-12-31T12:28:35.4370000+00:00</lastModDate>
        
        <creator>Faraz Idris Khan</creator>
        
        <creator>Sufian Hameed</creator>
        
        <subject>IoT; Software Defined Security; Security in IoT; Software Defined Networking; Software Defined based IoT (SDIoT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Programmable management frameworks have paved the way for managing devices in the network. Lately, the emerging paradigm of Software Defined Networking (SDN) has revolutionized programmable networks. Designers of networking applications, i.e., the Internet of Things (IoT), have started investigating the potential of the SDN paradigm in improving network management. IoT envisions interconnecting various embedded devices surrounding our environment with IP to enable internet connectivity. Unlike traditional network architectures, IoT is characterized by constrained resources and heterogeneous interconnectivity of wireless and wired media. This raises unique challenges for managing IoT, which are discussed in this paper. The ubiquity of IoT also raises unique security challenges, which are one aspect of a management framework for IoT. In this paper, security threats and requirements for IoT are summarized, extracted from state-of-the-art efforts investigating the security challenges of IoT. Also, an SDN-based security service provisioning framework for IoT is proposed.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_54-Software_Defined_Security_Service_Provisioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between MAI and Noise Constrained LMS Algorithm for MIMO CDMA DFE and Linear Equalizers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071253</link>
        <id>10.14569/IJACSA.2016.071253</id>
        <doi>10.14569/IJACSA.2016.071253</doi>
        <lastModDate>2016-12-31T12:28:35.4070000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <subject>Least mean squared algorithm (LMS); linear equalizer (LE); multiple input; multiple output (MIMO); decision feedback equalizer (DFE); multiple access interference (MAI); Variance; adaptive algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper presents a performance comparison between constrained least mean squared algorithms for the MIMO CDMA decision feedback equalizer and the linear equalizer. Both algorithms are constrained on the length of the spreading sequence, the number of users, the variance of multiple access interference, as well as additive white Gaussian noise (a new constraint). An important feature of both algorithms is that multiple access interference together with noise variance is used as a constraint in MIMO CDMA linear and decision feedback equalization systems. Convergence analysis is performed for the algorithm in both cases. Simulation results show that the algorithm developed for the decision feedback equalizer outperforms the algorithm developed for the linear equalizer in the MIMO CDMA case.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_53-Performance_Comparison_between_MAI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Solving the Open-End Bin Packing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071252</link>
        <id>10.14569/IJACSA.2016.071252</id>
        <doi>10.14569/IJACSA.2016.071252</doi>
        <lastModDate>2016-12-31T12:28:35.3900000+00:00</lastModDate>
        
        <creator>Maiza Mohamed</creator>
        
        <creator>Tebbal Mohamed</creator>
        
        <creator>Rabia Billal</creator>
        
        <subject>Open-End Bin-packing; heuristics; discrete optimization; combinatorial problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>In the Open-End Bin Packing Problem, a set of items with varying weights must be packed into bins of uniform weight limit such that the capacity of a bin can be exceeded only by the last packed item, known as the overflow item. The objective is to minimize the number of used bins. In this paper, we present our Integer Linear Program model based on a modification of the Ceselli and Righini model [1]. We also propose two greedy heuristics to solve the problem. The first is an adaptation of the Minimum Bin Slack heuristic, where the weight of the largest item in the current bin is reduced to one unit of capacity. The second heuristic is based on the well-known First Fit Decreasing heuristic. Computational results on benchmark instances taken from the literature as well as generated instances show the effectiveness of the proposed heuristics in both solution quality and time requirement.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_52-Modeling_and_Solving_the_Open_End_Bin_Packing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Real-World Car Traffic Dataset in Vehicular Ad Hoc Network Performance Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071251</link>
        <id>10.14569/IJACSA.2016.071251</id>
        <doi>10.14569/IJACSA.2016.071251</doi>
        <lastModDate>2016-12-31T12:28:35.3600000+00:00</lastModDate>
        
        <creator>Lucas Rivoirard</creator>
        
        <creator>Martine Wahl</creator>
        
        <creator>Patrick Sondi</creator>
        
        <creator>Marion Berbineau</creator>
        
        <creator>Dominique Gruyer</creator>
        
        <subject>MOCoPo dataset; mobility models; vehicular ad hoc networks; simulation; performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Vehicular ad hoc networking is an emerging paradigm that is gaining much interest with the development of new topics such as the connected vehicle and the autonomous vehicle, as well as new high-speed mobile communication technologies such as 802.11p and LTE-D. This paper presents a brief review of different mobility models used for evaluating the performance of routing protocols and applications designed for vehicular ad hoc networks. In particular, it describes how accurate mobility traces can be built from a real-world car traffic dataset that embeds the main characteristics affecting vehicle-to-vehicle communications. An effective use of the proposed mobility models is illustrated in various road traffic conditions involving communicating vehicles equipped with 802.11p. This study shows that such a dataset actually contains additional information that cannot be completely obtained with other analytical or simulated mobility models, while impacting the results of performance evaluation in vehicular ad hoc networks.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_51-Using_Real_World_Car_Traffic_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault-Tolerant Resource Provisioning with Deadline-Driven Optimization in Hybrid Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071250</link>
        <id>10.14569/IJACSA.2016.071250</id>
        <doi>10.14569/IJACSA.2016.071250</doi>
        <lastModDate>2016-12-31T12:28:35.3300000+00:00</lastModDate>
        
        <creator>Emmanuel Ahene</creator>
        
        <creator>Kingsley Nketia Acheampong</creator>
        
        <creator>Heyang Xu</creator>
        
        <subject>Deadline; fault recovery; hybrid Clouds; resource provisioning; software-as-a-service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Resource provisioning remains one of the challenging research problems in cloud computing, particularly when considered together with service reliability. Fault-tolerance techniques such as fault recovery can be employed to improve service reliability. Technically, fault recovery has an obvious impact on service performance, and such impact requires detailed study. Only a few works on hybrid cloud resource provisioning address fault recovery and its impact. In this paper, we investigate the problem of resource provisioning in hybrid clouds, considering the probability of hybrid cloud resource failure during job execution. We formulate this problem as an optimization model with operational cost as the objective function, subject to the deadline constraints of jobs. Based on our proposed optimization model, we design a heuristic-based algorithm called the dynamic resource provisioning algorithm (DRPA). We then perform extensive experiments to evaluate the performance of the proposed algorithm on a real-world dataset. The results confirm the obvious impact of fault recovery on the performance metrics (operational cost and deadline violation rate) and also confirm that DRPA can be useful in minimizing operational cost.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_50-Fault_Tolerant_Resource_Provisioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowd Mobility Analysis using WiFi Sniffers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071249</link>
        <id>10.14569/IJACSA.2016.071249</id>
        <doi>10.14569/IJACSA.2016.071249</doi>
        <lastModDate>2016-12-31T12:28:35.3130000+00:00</lastModDate>
        
        <creator>Anas Basalamah</creator>
        
        <subject>WiFi Probes; Crowd Monitoring; Crowd Mobility Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>WiFi-enabled devices such as today’s smartphones are regularly in search of connectivity. They continuously send management frames called Probe Requests searching for previously accessed networks. These frames contain the sender’s MAC address in clear text, which can be used as an identifier for that sender. Being able to sniff that MAC address at several locations allows us to understand the mobility behavior of that device. In this paper, we present a solar-powered, BeagleBone-based standalone system that continuously sniffs the air for probes and extracts their MAC addresses. We deployed the system in the world’s largest gathering (the Hajj) and tested it at scale. Our objective was to build an infrastructure for non-invasive mass crowd analysis. Our deployment had a total of 8 sniffers covering a population of 185,000 people. We detected 37.5% of the population, analysed their arrival and departure behaviours, identified their smartphone manufacturers and extracted their transition patterns from one sub-location to another. By presenting valuable insights on the mobility of our target crowd, we validated the potential of our platform for crowd mobility analysis.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_49-Crowd_Mobility_Analysis_using_WiFi_Sniffers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Centralized Reputation Management Scheme for Isolating Malicious Controller(s) in Distributed Software-Defined Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071248</link>
        <id>10.14569/IJACSA.2016.071248</id>
        <doi>10.14569/IJACSA.2016.071248</doi>
        <lastModDate>2016-12-31T12:28:35.2800000+00:00</lastModDate>
        
        <creator>Bilal Karim Mughal</creator>
        
        <creator>Sufian Hameed</creator>
        
        <creator>Ghulam Muhammad Shaikh</creator>
        
        <subject>SDN; controller security; malicious controllers; trust; reputation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Software-Defined Networks have seen increasing deployment because they offer better network manageability compared to traditional networks. Despite their immense success and popularity, various security issues in SDN remain open research problems. In particular, the problem of securing the controllers in a distributed environment is still short of any solution. This paper proposes a scheme to identify any rogue/malicious controller(s) in a distributed environment. Our scheme is based on a centrally managed trust and reputation system. As such, our scheme identifies any controllers acting maliciously by comparing the state of installed flows/policies with the policies that should be installed. Controllers rate each other on this basis and report the results to a central entity, which reports them to the network administrator.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_48-A_Centralized_Reputation_Management_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Cooperative Spectrum Sensing Algorithm using Raspberry Pi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071247</link>
        <id>10.14569/IJACSA.2016.071247</id>
        <doi>10.14569/IJACSA.2016.071247</doi>
        <lastModDate>2016-12-31T12:28:35.2500000+00:00</lastModDate>
        
        <creator>Ammar Ahmed Khan</creator>
        
        <creator>Aamir Zeb Shaikh</creator>
        
        <creator>Shabbar Naqvi</creator>
        
        <creator>Talat Altaf</creator>
        
        <subject>Cooperative Spectrum Sensing; Cognitive Radio; Fusion Center; Raspberry Pi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>A novel cooperative spectrum sensing algorithm is implemented and analyzed using a Raspberry Pi. In the proposed setup, a Nokia cell phone is used as a spectrum sensing device while the Raspberry Pi functions as a fusion center (FC) to collect sensing results from local sensing devices. The investigation results of the proposed setup show significant improvement in detection performance as compared to local spectrum sensing techniques. Furthermore, the results show successful communication between the sensing nodes and the FC.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_47-Implementation_of_Cooperative_Spectrum_Sensing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Credibility Evaluation of Online Distance Education Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071246</link>
        <id>10.14569/IJACSA.2016.071246</id>
        <doi>10.14569/IJACSA.2016.071246</doi>
        <lastModDate>2016-12-31T12:28:35.2330000+00:00</lastModDate>
        
        <creator>Khalid Al-Omar</creator>
        
        <subject>university websites; credibility; trustworthiness; online trust; website design; Saudi Arabia; distance education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Web credibility is becoming a significant factor in increasing user satisfaction, trust, and loyalty. It is particularly important for people who cannot visit an institution for one reason or another and mostly depend on its website, such as online distance education students. Accordingly, universities and educational websites need to determine the types of credibility problems present on their websites. However, far too little attention has been paid to providing detailed information on the specific types of credibility problems that can be found on university websites in general, and in the Kingdom of Saudi Arabia (KSA) specifically. The aim of this paper is to study and analyze the credibility of university websites that offer distance education courses in the KSA. A total of 12 universities in Saudi Arabia were considered, including 11 affiliated universities and one private university. The analysis of the data represents the level of credibility of distance education websites. Results reveal that in Saudi Arabia, distance education websites are reliable but violate basic credibility guidelines.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_46-Credibility_Evaluation_of_Online_Distance_Education_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Incident Management System for Debt Collection in Virtual Banking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071245</link>
        <id>10.14569/IJACSA.2016.071245</id>
        <doi>10.14569/IJACSA.2016.071245</doi>
        <lastModDate>2016-12-31T12:28:35.2030000+00:00</lastModDate>
        
        <creator>Sareh Saberi</creator>
        
        <creator>Seyyed Mohsen Hashemi</creator>
        
        <subject>virtual banking; debt collection; incident management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>An astonishing peak volume of bad loans in most countries, including Iran, is one of the latest manifestations of deep disorders that have inhibited the banking system from performing its main duty of promoting development plans over a long period. The main mission of the banking system is to link savers with those economic actors who need financial facilities. Banks, as intermediaries, receive interest from the second group and pay interest to the first group. During the last 10 years, millions of people in developed markets have been managing their financial lives online. Access to electronic money and electronic wallets has increased considerably. Bad loans increase as more facilities are provided for customers. Therefore, a mechanism is required for debt collection that needs no physical bank, together with improvement of this process using an incident management system.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_45-An_Incident_Management_System_for_Debt_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Representing Job Scheduling for Volunteer Grid Environment using Online Container Stowage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071244</link>
        <id>10.14569/IJACSA.2016.071244</id>
        <doi>10.14569/IJACSA.2016.071244</doi>
        <lastModDate>2016-12-31T12:28:35.1870000+00:00</lastModDate>
        
        <creator>Saddaf Rubab</creator>
        
        <creator>Mohd Fadzil Hassan</creator>
        
        <creator>Ahmad Kamil Mahmood</creator>
        
        <creator>Syed Nasir Mehmood Shah</creator>
        
        <subject>Volunteer grid computing; volunteer resources; container stowage; job scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Volunteer grid computing comprises volunteer resources which are unpredictable in nature, and as such the scheduling of jobs among these resources can be very uncertain. It is also difficult to ensure the successful completion of submitted jobs on volunteer resources, as these resources may opt to withdraw from the grid system at any time or there might be a resource failure, which requires job reassignments. However, a careful consideration of future jobs can make scheduling of jobs more reliable on volunteer resources. There are two possibilities: either to forecast the future jobs or to forecast resource availability by studying historical events. In this paper an attempt has been made to utilize future job forecasting to improve job scheduling on volunteer grid resources. A scheduling approach is proposed that uses container stowage to allocate volunteer grid resources based on the jobs submitted. The proposed scheduling approach optimizes the number of resources actively used, and presents online container stowage adaptability for scheduling jobs using volunteer grid resources. The performance has been evaluated by comparison with other scheduling algorithms adopted in volunteer grids. The simulation results show that the proposed approach performs better in terms of average turnaround and waiting time in comparison with existing scheduling algorithms. The job load forecast also reduced the number of job reassignments.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_44-Representing_Job_Scheduling_for_Volunteer_Grid_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing the Electrical Consumption in the Humidity Control Process for Electric Cells using an Intelligent Fuzzy Logic Controller</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071243</link>
        <id>10.14569/IJACSA.2016.071243</id>
        <doi>10.14569/IJACSA.2016.071243</doi>
        <lastModDate>2016-12-31T12:28:35.1400000+00:00</lastModDate>
        
        <creator>Rafik Lasri</creator>
        
        <creator>Larbi Choukri</creator>
        
        <creator>Mohammed Bouhorma</creator>
        
        <creator>Ignacio Rojas</creator>
        
        <creator>H&#233;ctor Pomares</creator>
        
        <subject>humidity in electric cells; humidity control; optimization of electrical consumption; intelligent fuzzy logic controller; saving power</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Electrical energy distribution uses a huge network to cover urbanized areas. The distribution network incorporates a large number of electrical cells that ensure energy transformation. These cells play a fundamental role in ensuring a permanent supply, so their performance must be optimized. The main problem that affects these cells is the inside humidity, which should be controlled permanently to prevent serious damage and power failure. The presented work proposes the use of a powerful intelligent Fuzzy Logic Controller that can adapt its internal parameters online according to the actual state of the controlled plant and learn from the behavior of the plant how the current humidity level can be decreased. The controller can stabilize the humidity inside the cells within the recommended range by controlling a set of heating resistances installed inside these cells, while at the same time ensuring valuable advantages for the electrical energy distribution company. Unlike other controllers used to stabilize moisture, the intelligent controller used in this paper ensures very precise control with very low power consumption, which yields very significant energy savings in each electrical cell. Given that the distribution network incorporates a very large number of electrical cells, the final savings balance would be a very large amount of energy, translating economically into significant savings on electricity bills.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_43-Reducing_the_Electrical_Consumption_in_the_Humidity_Control_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Classification of Mu Rhythm using Phase Synchronization for a Brain Computer Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071242</link>
        <id>10.14569/IJACSA.2016.071242</id>
        <doi>10.14569/IJACSA.2016.071242</doi>
        <lastModDate>2016-12-31T12:28:35.1100000+00:00</lastModDate>
        
        <creator>Oana Diana Eva</creator>
        
        <subject>brain computer interface; electroencephalogram; phase synchronization; phase lag index; weighted phase lag index; classifiers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Phase synchronization in a brain computer interface based on the Mu rhythm is evaluated by means of the phase lag index and the weighted phase lag index. In order to detect and classify the important features reflected in brain signals during the execution of mental tasks (imagination of left and right hand movement), the proposed methods are implemented on two datasets. The classification is performed using a linear discriminant classifier, quadratic discriminant classifier, Mahalanobis distance classifier, k nearest neighbor and support vector machine. Classification accuracies up to 74% and 61% were achieved for the phase lag index and the weighted phase lag index, respectively. The results indicate that phase synchronization measures are relevant for classifying mental tasks recorded in the active state and the relaxation state from the supplementary motor area and from the sensorimotor area. The phase lag index and weighted phase lag index methods are easy to implement, efficient, provide relevant features for the classification and can be used as offline methods for motor imagery paradigms.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_42-Detection_and_Classification_of_Mu_Rhythm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic Learning Object (SLO) Web-Editor based on Web Ontology Language (OWL) using a New OWL2XSLO Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071241</link>
        <id>10.14569/IJACSA.2016.071241</id>
        <doi>10.14569/IJACSA.2016.071241</doi>
        <lastModDate>2016-12-31T12:28:35.1100000+00:00</lastModDate>
        
        <creator>Zouhair Rimale</creator>
        
        <creator>EL Habib Benlahmar</creator>
        
        <creator>Abderrahim Tragha</creator>
        
        <subject>m-learning; learning objects; web semantic; XML-schema; xsd; owl; Ontology; rdf</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Today, we see a strong demand for real-time information, with a rapid growth of m-learning. We also see that there are many educational resources on the Internet. Learning objects (LOs) are designed as a means of reusing these resources. Most of these LOs are built for e-learning systems based on desktop computers, which prevents their use on mobile devices. LOs are an area that is open to research and has much potential in the creation, adaptation and production of learning content. There are standards that describe LOs in general, such as IEEE LOM and SCORM. The semantic web and its associated technologies are increasingly used in electronic document editing while separating content from presentation. Creating a LO with the semantic web is complex and raises difficulties because the editing tools require general knowledge of XML syntax and related technologies. In this paper, the authors propose a new OWL2XSLO approach based on ontologies (OWL) allowing the generation of XML-Schema LOs. They then derive a semantic LO web editor based on the OWL2XSLO approach for the generation of a content type enabling the editing of interactive LOs with XML technology, which can then be integrated into an LMS and adapted to mobile display.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_41-A_Semantic_Learning_Object_Web_Editor_based_on_Web_Ontology_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Locations of Intermediate Rechlorination Stations in a Drinking Water Distribution Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071240</link>
        <id>10.14569/IJACSA.2016.071240</id>
        <doi>10.14569/IJACSA.2016.071240</doi>
        <lastModDate>2016-12-31T12:28:35.0800000+00:00</lastModDate>
        
        <creator>Amali Said</creator>
        
        <creator>Mourchid Mohammed</creator>
        
        <creator>EL Faddouli Nour-eddine</creator>
        
        <creator>Zouhri Mohammed</creator>
        
        <subject>Simulation of chlorine distribution; drinking water distribution network; deficit nodes; optimizing the locations of rechlorination stations; Dynamic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Preserving water quality in the distribution network requires permanently maintaining a minimum level of residual chlorine at any point of the network. This is possible only if we plan chlorine injections at various points of the network for intermediate rechlorination, or by increasing the initial level of chlorine at the tank outlet. In the latter case, there is a risk of disrupting the taste and smell of the water for consumers near the tanks.
Therefore, to avoid an excessive increase in the chlorine concentration in the tanks and to avoid affecting the taste of the distributed water, intermediate rechlorination stations should be implemented. These stations will carry out the chlorine regulation.
Given the high cost of implementing such stations, optimizing their number and choosing their locations is needed. This paper focuses on the implementation of an algorithm for such optimization, using dynamic programming. Performance tests of our decision support system were done on real sites of the Wilaya of Rabat-Sale (the network of Morocco&#39;s capital).</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_40-Optimizing_the_Locations_of_Intermediate_Rechlorination_Stations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Requirements Elicitation &amp; Prioritization to Optimize Quality in Scrum Agile Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071239</link>
        <id>10.14569/IJACSA.2016.071239</id>
        <doi>10.14569/IJACSA.2016.071239</doi>
        <lastModDate>2016-12-31T12:28:35.0470000+00:00</lastModDate>
        
        <creator>Aneesa Rida Asghar</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>Atika Tabassum</creator>
        
        <creator>Zainab Sultan</creator>
        
        <creator>Rabiya Abbas</creator>
        
        <subject>Agile Software Engineering (ASE); Agile Software Development (ASD); Scrum Software Development Process; SCRUM; Product Owner (PO)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Managing requirements is one of the most common challenges in traditional software development: requirements emerge throughout the development process and must be addressed through proper communication and integration between stakeholders, developers and documentation. Agile methodology is an innovative, iterative process that supports changing requirements and helps address changes throughout development.
Requirements are elicited at the beginning of every software development project and later prioritized according to their importance to the market and to the product itself. Requirements prioritization is one of the most important and influential steps in building a software product: it helps the software team understand the existence and importance of a particular requirement and its urgency with respect to time to market. There are many requirements prioritization techniques, each with relative strengths and weaknesses, but many of them fail to take into account all the factors that must be considered when prioritizing requirements, such as cost, value, risk, time to market, number of requirements and the effect of non-functional requirements on functional requirements.
This paper proposes a requirements prioritization methodology based on several such factors (time to market, cost, risk, etc.), which is expected to overcome this shortcoming. In sprints, requirements are prioritized both on the basis of influencing factors such as cost, value, risk and time to market, and through the effect of non-functional requirements on functional requirements. Including this in the Scrum development process improves the overall quality of the software product. Requirements are prioritized not only by sprint and human decision but by critically analyzing the factors (sub-characteristics) that can repeatedly cause the product to succeed or fail, thus ensuring that the right prioritized requirements are selected for a particular sprint.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_39-Role_of_Requirements_Elicitation_Prioritization_to_Optimize_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Causality in Consumer’s Online Behavior: Ecommerce Success Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071238</link>
        <id>10.14569/IJACSA.2016.071238</id>
        <doi>10.14569/IJACSA.2016.071238</doi>
        <lastModDate>2016-12-31T12:28:35.0330000+00:00</lastModDate>
        
        <creator>Amna Khatoon</creator>
        
        <creator>Shahid Nazir Bhatti</creator>
        
        <creator>Atika Tabassum</creator>
        
        <creator>Aneesa Rida</creator>
        
        <creator>Sehrish Alam</creator>
        
        <subject>Online shopping; Consumer behavior; E-marketer; Usability; DeLone &amp; McLean; eCommerce success model; Causal loop diagram; iThink; Simulation; Evaluation; Retailer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Online shopping (e-shopping) has grown at a rapid pace with the advancement of modern web technologies, and it involves both social and technical factors. This paper highlights some mandatory socio-technical factors affecting consumer behavior in an online shopping environment. A comprehensive conceptual model is put forward based on a proposed reform of the DeLone and McLean Information Systems Success Model, which is used to assess the success of eCommerce web portals. Thirteen different hypotheses are proposed on the basis of this methodology, representing the cause-and-effect relationships among the various variables affecting consumers&#39; online buying behavior. This work is then simulated in iThink to show that consumer satisfaction and trust directly affect the productivity of the organization. The proposed methodology is valuable for development organizations because it facilitates building eCommerce websites and web portals, while retailers can use it to improve the productivity of their organizations.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_38-Novel_Causality_in_Consumers_Online_Behavior_Ecommerce_Success_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>U Patch Antenna using Variable Substrates for Wireless Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071237</link>
        <id>10.14569/IJACSA.2016.071237</id>
        <doi>10.14569/IJACSA.2016.071237</doi>
        <lastModDate>2016-12-31T12:28:35.0000000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Umar Farooq Khattak</creator>
        
        <creator>Burhan-Ud-Din</creator>
        
        <creator>Mehre Munir</creator>
        
        <subject>miniaturization; directivity; gain; substrates; efficiency; VSWR; Wireless communication; Multiband response</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Due to their small size and lightweight structure, patch antennas are now frequently used in GPS transmitters and receivers and throughout modern communication technology. In this paper, a miniaturized patch antenna using a stacked configuration is presented. Parameters such as gain, directivity, return loss and antenna efficiency are demonstrated. Using Air, Teflon, Foam and FR4 (lossy) as substrates, FR4 (lossy) is kept fixed and the other substrates are combined with it one by one to observe the response of the proposed antenna. The antenna shows dual- and tri-band responses with different combinations of the mentioned substrates. The proposed antenna has been found useful for W-LAN, GSM, Radio Satellite, Fixed Satellite Services (RSS) &amp; (FSS) and satellite communication systems.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_37-U_Patch_Antenna_using_Variable_Substrates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Cloud Resource Scaling Algorithm based on Long Short-Term Memory Recurrent Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071236</link>
        <id>10.14569/IJACSA.2016.071236</id>
        <doi>10.14569/IJACSA.2016.071236</doi>
        <lastModDate>2016-12-31T12:28:34.9870000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>auto-scaling; cloud computing; cloud resource scaling; recurrent neural networks; resource provisioning; virtualized resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Scalability is an important characteristic of cloud computing: cost is minimized by provisioning and releasing resources according to demand. Most current Infrastructure as a Service (IaaS) providers deliver threshold-based auto-scaling techniques. However, setting thresholds to the right values that minimize cost and satisfy the Service Level Agreement is not an easy task, especially with variable and sudden workload changes. This paper proposes dynamic threshold-based auto-scaling algorithms that predict required resources using a Long Short-Term Memory Recurrent Neural Network and auto-scale virtual resources based on the predicted values. The proposed algorithms have been evaluated and compared with some existing algorithms. Experimental results show that the proposed algorithms outperform the others.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_36-Automatic_Cloud_Resource_Scaling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview of Technical Elements of Liver Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071235</link>
        <id>10.14569/IJACSA.2016.071235</id>
        <doi>10.14569/IJACSA.2016.071235</doi>
        <lastModDate>2016-12-31T12:28:34.9700000+00:00</lastModDate>
        
        <creator>Nazish Khan</creator>
        
        <creator>Imran Ahmed</creator>
        
        <creator>Mehreen Kiran</creator>
        
        <creator>Awais Adnan</creator>
        
        <subject>component; CT Scan; Liver; Dataset; Segmentation technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Liver diseases are life-threatening, so it is important to detect liver tumors at an early stage. Segmentation of the liver is the first and most significant step in tumor detection. It remains a difficult task because of intra-patient variability in the intensity, shape and size of the liver. The aim of this paper is to assemble a wide assortment of techniques and CT scan dataset information for liver segmentation, providing a good starting point for new researchers. Strategies from basic to advanced, such as thresholding, active contours, region growing and graph cuts, are briefly summarized to give an overview of existing segmentation methods. We review the concepts and original ideas of particular strategies, and our aim is to indicate under which conditions a chosen strategy will work.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_35-Overview_of_Technical_Elements_of_Liver_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering-based Spam Image Filtering Considering Fuzziness of the Spam Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071234</link>
        <id>10.14569/IJACSA.2016.071234</id>
        <doi>10.14569/IJACSA.2016.071234</doi>
        <lastModDate>2016-12-31T12:28:34.9400000+00:00</lastModDate>
        
        <creator>Master Prince</creator>
        
        <subject>versatility of spam image; feature fusion weight; cluster; rule table</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Where there are pros, there are always cons. As email has become part of an individual&#39;s needs in our busy life, with all its benefits, it also has a negative aspect: email spamming. Nowadays, spammers use images with embedded text, called image spamming, since effective text spam filtering methods have already been introduced. Tracking and stopping spam has become a challenge on the internet because of the versatility of spam images. In this paper, a novel model, AFSIF (Autonomous Fuzzy Spam Image Filter), is introduced. The basic idea behind AFSIF is that a spam image can combine several basic features of different spam images, so a feature fusion weight is generated for each image, capturing the combined features of spam images as well as user preference. User preference is not applied separately; it is used to calculate the fusion weight in terms of predefined topics (a rule table).</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_34-Clustering_based_Spam_Image_Filtering_Considering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of IPv4 vs IPv6 Traffic in US</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071233</link>
        <id>10.14569/IJACSA.2016.071233</id>
        <doi>10.14569/IJACSA.2016.071233</doi>
        <lastModDate>2016-12-31T12:28:34.9230000+00:00</lastModDate>
        
        <creator>Mahmood ul-Hassan</creator>
        
        <creator>Muhammad Amir Khan</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Ansar Munir Shah</creator>
        
        <subject>IPv4; IPv6; Mobile node; IP Traffic; IID Testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>It is still a common assumption that internet traffic is dominated by IPv4. However, with the introduction of modern technologies and concepts such as the Internet of Things (IoT), IPv6 has become an essential element. Keeping these advancements in mind, we examine the adoption of IPv6 on the internet. We want to find out what percentage of internet traffic was IPv6 over the last six years (2008-14) and to obtain the adoption curve of native IPv6 traffic by year, to analyze whether adoption is slow or fast, as well as the factors, constraints and limitations involved in IPv6 adoption. We therefore took two datasets from the CAIDA website, belonging to OC-48 and OC-192 links at two Equinix data centers located in Chicago and San Jose in the US. Finally, we compare the resulting curve with the World IPv6 Launch infographic to see how realistic it is, and apply linear prediction techniques to forecast the future trend of the dataset obtained from the US population.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_33-Analysis_of_IPv4_vs_IPv6_Traffic_in_US.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Analyzing ISO / IEC 25010 Product Quality Requirements based on Fuzzy Logic and Likert Scale for Decision Support Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071232</link>
        <id>10.14569/IJACSA.2016.071232</id>
        <doi>10.14569/IJACSA.2016.071232</doi>
        <lastModDate>2016-12-31T12:28:34.8900000+00:00</lastModDate>
        
        <creator>Hasnain Iqbal</creator>
        
        <creator>Muhammad Babar</creator>
        
        <subject>ISO / IEC 25010; Product Quality Requirements; Fuzzy Logic; Likert Scale; Functional Requirements; Non-Functional Requirements; Internet Banking; Decision Support Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Decision Support Systems (DSS) are collaborative software systems built to support the management of an organization in the decision-making process when faced with non-routine problems in a specific application domain. It is important to properly measure the portability, maintainability, security, reliability, functional suitability, performance efficiency, compatibility and usability quality requirements of a DSS. ISO / IEC 25010, which replaced ISO 9126, defines three different quality models for software products: a) the quality in use model, b) the product quality model, and c) the data quality model. There is a lack of methodologies to measure and quantify these quality requirements. Measuring and quantifying the quality requirements of a DSS is a challenging task, because these requirements are qualitative and cannot be represented directly in a quantitative way. Fuzzy logic is used to specify the quality requirements of a DSS because it is an approach to computing based on degrees of truth rather than strict true-or-false logic, and a Likert scale is a method that converts qualitative values into quantitative values for sound statistical analysis. Although several quality requirements methods for DSS have been proposed, research on analyzing the quality requirements of DSS is still limited. In this paper, a quantitative approach is proposed for analyzing ISO / IEC 25010 product quality requirements for DSS based on fuzzy logic and a Likert scale, aiming to quantify quality requirements. The proposed framework is implemented in a case study on internet banking, with data collected from 25 respondents, namely system analysts and domain experts from the banking sector.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_32-An_Approach_for_Analyzing_ISO_IEC_25010.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Inclusive Comparison in LAN Environment between Conventional and Hybrid Methods for Spectral Amplitude Coding Optical CDMA Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071231</link>
        <id>10.14569/IJACSA.2016.071231</id>
        <doi>10.14569/IJACSA.2016.071231</doi>
        <lastModDate>2016-12-31T12:28:34.8770000+00:00</lastModDate>
        
        <creator>Hassan Yousif Ahmed</creator>
        
        <creator>Nisar K.S</creator>
        
        <creator>Z. M Gharsseldien</creator>
        
        <creator>S. A. Aljunid</creator>
        
        <subject>Conventional SAC (CSAC); Hybrid SAC (HSAC); WDM; OCDMA; MAI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>In this paper, the performance of conventional spectral amplitude coding (CSAC) and hybrid SAC (HSAC) for OCDMA systems is analyzed in a local area network (LAN) environment. The CSAC code is built from an arithmetic sequence using simple algebraic methods. In the HSAC technique, spectral amplitude coding (SAC) is combined with wavelength division multiplexing (WDM) to effectively reduce multiple access interference (MAI) and mitigate the influence of phase-induced intensity noise (PIIN) arising in the photodetection process. The main idea is to construct the code sequences in the SAC domain and then repeat them diagonally in the wavelength domain as groups, which maintains the same cardinality for a given code weight. Results show that HSAC outperforms CSAC when the number of active users is high, due to its better correlation properties. It is shown that HSAC can suppress intensity noise effectively and improve bandwidth utilization significantly, up to 4.2 nm.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_31-An_Inclusive_Comparison_in_LAN_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An M-Learning Framework in the Podcast Form (MPF) using Context-Aware Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071230</link>
        <id>10.14569/IJACSA.2016.071230</id>
        <doi>10.14569/IJACSA.2016.071230</doi>
        <lastModDate>2016-12-31T12:28:34.8600000+00:00</lastModDate>
        
        <creator>Mohamed A.Amasha</creator>
        
        <creator>Elsaeed E. AbdElrazek</creator>
        
        <subject>M-Learning; Context Aware; E-Learning; RSS; Podcast</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Mobile computing is rapidly transforming the world in which we live, with the advent of iPhones, iPads, tablet computers, and Android smartphones. M-learning in the podcast form (MPF) is a recent development for conveying course content to students in higher education. Context-aware technologies use temporal and environmental information to determine context. This study presents a theoretical framework for using context awareness with M-learning in the podcast form, and investigates the effectiveness of MPF engagement with context-aware technology in teaching and learning a multimedia course. The framework is based on two principal dimensions (MPF and context-aware technology), and it contributes to supporting researchers in e-learning and ubiquitous learning. The study was conducted on students (n = 42) enrolled in a multimedia course (IS 450) at Qassim University. After finishing the course, they completed an online survey giving their feedback on the effectiveness of using MPF with context-aware technology. The results indicate that learners had a positive attitude towards using MPF with context-aware technology, and that they considered it a great way to develop their knowledge and receive course information. This study demonstrates the ability of context-aware technology to enhance the behaviour of learners using m-learning in the podcast form.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_30-An_M-Learning_Framework_in_the_Podcast_Form.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low Complexity for Scalable Video Coding Extension of  H.264 based on  the Complexity of Video</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071229</link>
        <id>10.14569/IJACSA.2016.071229</id>
        <doi>10.14569/IJACSA.2016.071229</doi>
        <lastModDate>2016-12-31T12:28:34.8300000+00:00</lastModDate>
        
        <creator>Mayada Khairy</creator>
        
        <creator>Amr Elsayed</creator>
        
        <creator>Alaa Hamdy</creator>
        
        <creator>Hesham Farouk Ali</creator>
        
        <subject>Scalable video coding; motion estimation; SVC layers; quality scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Scalable Video Coding (SVC) / H.264 is a video compression technique that extends H.264/AVC to provide efficient, scalable video coding, ensuring higher performance through a high compression ratio. However, SVC/H.264 is computationally complex: it takes considerable time to determine the best macroblock mode and motion estimation when using exhaustive search techniques. This work reduces the processing time by matching the complexity of the video to the method used for macroblock mode selection and motion estimation. The goal of this approach is to reduce encoding time while improving the quality of the video stream; its efficiency makes it suitable for many applications, such as video conferencing and security applications.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_29-Low_Complexity_for_Scalable_Video_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Approach to Conceptual Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071228</link>
        <id>10.14569/IJACSA.2016.071228</id>
        <doi>10.14569/IJACSA.2016.071228</doi>
        <lastModDate>2016-12-31T12:28:34.7970000+00:00</lastModDate>
        
        <creator>Lindita Nebiu Hyseni</creator>
        
        <creator>Zamir Dika</creator>
        
        <subject>Integrated Framework; Conceptual Model; Functional Requirements; Non-Functional Requirements; Research Gaps; Joint Approval Requirements (JAR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Conceptual modeling supports understanding and communicating requirements when developing an information system (IS). Requirements are usually divided into functional (FRs) and non-functional requirements (NFRs), and scholars representing conceptual models typically keep FRs and NFRs separate. This paper presents a new approach that attempts to create an integrated framework for conceptual modeling of FRs and NFRs. To justify the approach, research work and relevant literature in the field are analyzed from an integrated perspective. As an outcome of this review, the need for an integrated approach to requirements determination is identified, and an integrated framework for conceptual modeling is proposed that includes functional and non-functional requirements in a single conceptual model. The review also shows that only a small number of researchers have worked in this field. The persistently high failure rate of IS implementations motivated this work: the proposed integrated framework contributes to increasing the efficacy of requirements from the analysis phase in order to secure the sustainability of the IS.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_28-Integrated_Approach_to_Conceptual_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Malicious Behaviour Detection Via k-Means and Decision Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071227</link>
        <id>10.14569/IJACSA.2016.071227</id>
        <doi>10.14569/IJACSA.2016.071227</doi>
        <lastModDate>2016-12-31T12:28:34.7670000+00:00</lastModDate>
        
        <creator>Warusia Yassin</creator>
        
        <creator>Siti Rahayu</creator>
        
        <creator>Faizal Abdollah</creator>
        
        <creator>Hazlin Zin</creator>
        
        <subject>Intrusion Detection; Malicious Behaviours; Clustering; Decision Tree Classifier; Packet Headers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Data mining algorithms applied as anomaly detection systems are considered essential techniques in malicious behaviour detection. Unfortunately, such detection systems struggle to detect cyber-malicious activity accurately (i.e., to maximize the detection of both malicious and non-malicious behaviours), which has become a persistent limitation in the deployment of intrusion detection systems. Consequently, these constraints affect a number of important performance factors such as accuracy, detection rate and false alarms. In this research, KMDT is proposed as an anomaly detection model that utilizes k-means clustering and a decision tree classifier to maximize the detection of malicious behaviours by scrutinizing packet headers. The k-means clustering is employed to label and group all behaviours into identical clusters, characterizing them as suspicious or non-suspicious. Subsequently, these dissimilar clustered behaviours are reordered into two classes, malicious and non-malicious, via the decision tree classifier. KMDT improves anomaly detection performance in identifying suspicious and non-suspicious behaviours and characterizes them more accurately as malicious or non-malicious. These criteria have been validated by experimental results on a 2016 banking system environment dataset. KMDT detected malicious behaviours more accurately than discrete and diversely combined methods.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_27-An_Improved_Malicious_Behaviour_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Topic based Approach for Sentiment Analysis on Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071226</link>
        <id>10.14569/IJACSA.2016.071226</id>
        <doi>10.14569/IJACSA.2016.071226</doi>
        <lastModDate>2016-12-31T12:28:34.7370000+00:00</lastModDate>
        
        <creator>Pierre FICAMOS</creator>
        
        <creator>Yan LIU</creator>
        
        <subject>sentiment analysis; opinion mining; natural language processing; feature extraction; topic modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Twitter has grown in popularity over the past decade. It is now used by millions of users who share information about their daily lives and feelings. To automatically process and analyze these data, applications can rely on analysis methods such as sentiment analysis and topic modeling. This paper contributes to the sentiment analysis research field. First, the preprocessing steps required to extract features from Twitter data are described. Then, a topic-based method is proposed to estimate the sentiment of a tweet. This method extracts topics from the training dataset and trains a model for each of these topics. The method increases the accuracy of sentiment estimation compared to using a single model for every topic.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_26-A_Topic_based_Approach for_Sentiment_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method for Measuring the Performance of Software Project Managers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071225</link>
        <id>10.14569/IJACSA.2016.071225</id>
        <doi>10.14569/IJACSA.2016.071225</doi>
        <lastModDate>2016-12-31T12:28:34.7030000+00:00</lastModDate>
        
        <creator>Jasem M. Alostad</creator>
        
        <subject>software project manager; performance; measurement; metrics; indicators; Goal Question Metrics method; schedule management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper provides a novel method for measuring the performance of software project managers. It clarifies the fundamental concepts of software project management, its knowledge areas, the life cycle phases of a software project, and performance metrics, and presents examples of processes and common performance metrics related to the knowledge areas of software project management. The researcher extracts an enhanced list of performance metrics using a questionnaire distributed to 60 experts and specialists in the field of software projects. Their responses are collected and filtered to identify effective performance metrics and the importance degree of each one. The researcher adapts the Goal Question Metric method by adding a step dedicated to calculating a performance indicator for each knowledge area of software project management. Finally, the new method is applied to three real software projects to measure the performance of their managers. Measuring the performance of software project managers can be helpful in controlling and improving that performance.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_25-A_Novel_Method_for_Measuring_the_Performance .pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RAX System to Rank Arabic XML Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071224</link>
        <id>10.14569/IJACSA.2016.071224</id>
        <doi>10.14569/IJACSA.2016.071224</doi>
        <lastModDate>2016-12-31T12:28:34.6730000+00:00</lastModDate>
        
        <creator>Hesham Elzentani</creator>
        
        <creator>Mladen Veinovic</creator>
        
        <creator>Goran Šimic</creator>
        
        <subject>Text similarity measures; Text classification; Processing Arabic documents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper describes RAX, a system designed for ranking Arabic documents in information retrieval processes. The proposed solution depends primarily on the similarity of textual content. The model we have designed can be used for documents stored in different formats and written in the Arabic language. Due to the complex lingual semantics of this language, the proposed solution uses a purely statistical approach. The design and implementation are based on existing text processing frameworks and referent Arabic grammar. The main focus of our research has been the evaluation of different similarity measures used for classifying Arabic documents from different domains and document categories based on query criteria provided by the user.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_24-RAX_System_to_Rank_Arabic_XML_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Response Prediction for Chronic HCV Genotype 4 Patients to DAAs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071223</link>
        <id>10.14569/IJACSA.2016.071223</id>
        <doi>10.14569/IJACSA.2016.071223</doi>
        <lastModDate>2016-12-31T12:28:34.6430000+00:00</lastModDate>
        
        <creator>Mohammed A. Farahat</creator>
        
        <creator>Khaled A.Bahnasy</creator>
        
        <creator>A. Abdo</creator>
        
        <creator>Sanaa M.Kamal</creator>
        
        <creator>Samar K. Kassim</creator>
        
        <creator>Ahmed Sharaf Eldin</creator>
        
        <subject>HCV; DMT; Decision Tree; DAAs; Prediction Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Hepatitis C virus (HCV) is a major cause of chronic liver disease, end-stage liver disease and liver cancer in Egypt. Genotype 4 is the prevalent genotype in Egypt and has recently spread to Southern Europe, particularly France, Italy, Greece and Spain. Recently, new direct-acting antivirals (DAAs) have caused a revolution in HCV therapy, with response rates approaching 100%. Despite the diversity of DAAs, treatment of chronic hepatitis C genotype 4 has not yet been optimized. The aim of this study is to build a framework to predict the response of chronic HCV genotype 4 patients to various DAAs by applying Data Mining Techniques (DMT) to clinical information. The framework consists of three phases: a data preprocessing phase to prepare the data before applying the DMT; a data mining phase to apply the DMT; and an evaluation phase to assess the performance and accuracy of the built prediction model using a data mining evaluation technique. The experimental results showed that the model obtained acceptable results.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_23-Response_Prediction_for_Chronic_HCV_Genotype_4_Patients_to_DAAs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Modeling and Verification of Smart Traffic Environment with Design Aided by UML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071222</link>
        <id>10.14569/IJACSA.2016.071222</id>
        <doi>10.14569/IJACSA.2016.071222</doi>
        <lastModDate>2016-12-31T12:28:34.6270000+00:00</lastModDate>
        
        <creator>Umber Noureen Abbas</creator>
        
        <creator>Nazir Ahmad Zafar</creator>
        
        <creator>Farhan Ullah</creator>
        
        <subject>Formal specification; Formal analysis; VDM; Sequence Diagram; Issue Challan; LED</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper presents three components of the proposed Smart Traffic Monitoring and Guidance System: a challan issuing component that monitors rule violations, an LED (Light Emitting Diode) component that updates users about traffic congestion, and a Bridge component that provides a central hub for sensors to update the server about the traffic situation. The system involves wireless sensors and actors to communicate with it, and the proposed components require fewer resources in terms of sensors and actors. First, the sensors identify violations of rules and issue challans. Secondly, the LED component provides information to users about traffic situations. Thirdly, the Bridge component provides a central hub to communicate with the different components in the proposed model and to update the server. The proposed components of this model are implemented by developing a formal specification using VDM-SL, a formal specification language used for the analysis of complex systems. The developed specification is validated, verified and analyzed using the VDM-SL Toolbox.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_22-Formal_Modeling_and_Verification_of_ Smart_Traffic_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Confidentiality Impact in Security Risk Scoring Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071221</link>
        <id>10.14569/IJACSA.2016.071221</id>
        <doi>10.14569/IJACSA.2016.071221</doi>
        <lastModDate>2016-12-31T12:28:34.5970000+00:00</lastModDate>
        
        <creator>Eli Weintraub</creator>
        
        <subject>information security; risk management; continuous monitoring; vulnerability; confidentiality; risk assessment; access control; authorization system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Risk scoring models assume that confidentiality evaluation is based on user estimations. Confidentiality evaluation incorporates the impacts of various factors, including systems&#39; technical configuration, on the processes relating to users&#39; confidentiality. The assumption underlying this research is that system users are not capable of estimating systems&#39; confidentiality since they lack knowledge of the technical structure. According to the proposed model, systems&#39; confidentiality is calculated using technical information about systems&#39; components. The proposed model evaluates confidentiality based on quantitative metrics rather than the qualitative estimates currently in use. The framework&#39;s presentation includes the system design, an algorithm calculating confidentiality measures and an illustration of risk scoring computations.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_21-Evaluating_Confidentiality_Impact_in_Security_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Study of Different Lossy Compression Techniques Applied on Digital Mammogram Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071220</link>
        <id>10.14569/IJACSA.2016.071220</id>
        <doi>10.14569/IJACSA.2016.071220</doi>
        <lastModDate>2016-12-31T12:28:34.5630000+00:00</lastModDate>
        
        <creator>Ayman AbuBaker</creator>
        
        <creator>Mohammed Eshtay</creator>
        
        <creator>Maryam AkhoZahia</creator>
        
        <subject>Mammogram Images; DCT Compression; SVD compression; Microcalcifications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>The rapid growth of internet usage has increased the need to transfer and store multimedia files. Mammogram images are among these files, having large sizes and high resolution. Compression is used to reduce the size of these images without degrading their quality, especially in the suspicious regions of the mammogram. Reducing the size of these images makes it possible to store more images and minimizes transmission costs when information is exchanged between radiologists. Many techniques exist in the literature to limit the loss of information in images. In this paper, two compression transformations are used: Singular Value Decomposition (SVD), which transforms the image into a series of eigenvectors depending on the dimensions of the image, and the Discrete Cosine Transform (DCT), which converts the image from the spatial domain into the frequency domain. A Computer Aided Diagnosis (CAD) system is implemented to evaluate the appearance of microcalcifications in mammogram images after applying the two compression transformations. The performance of both SVD and DCT is subjectively compared by a radiologist. As a result, the DCT algorithm can effectively reduce the size of mammogram images by 65% while preserving high-quality microcalcification regions.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_20-Comparison_Study_of_Different_Lossy_Compression_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internal Model Control of A Class of Continuous Linear Underactuated Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071219</link>
        <id>10.14569/IJACSA.2016.071219</id>
        <doi>10.14569/IJACSA.2016.071219</doi>
        <lastModDate>2016-12-31T12:28:34.5170000+00:00</lastModDate>
        
        <creator>Asma Mezzi</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <subject>Internal Model Control; continuous linear underactuated systems; specific controller; minimum phase systems; non-minimum phase behavior; non-zero initial conditions; model parameters effects; set-point tracking; stability; disturbance rejection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper presents an Internal Model Control (IMC) structure designed for a class of continuous linear underactuated systems. The study treats the case of Minimum Phase (MP) systems as well as those whose zero dynamics are not necessarily stable. The proposed IMC structure is based on a specific controller obtained by realizing an approximate inverse of the plant model. It is shown that, using such an IMC structure, it is possible to remedy the problems of system underactuation and Non-Minimum Phase (NMP) behavior. The cases of non-zero initial conditions and imperfect modeling are also presented, and the effects of model parameters on the system evolution are discussed. Simulated examples are presented to prove the effectiveness of the proposed control method in ensuring set-point tracking, stability and disturbance rejection.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_19-Internal_Model_Control_of_a_Class_of_Continuous_Linear.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stemmer Impact on Quranic Mobile Information Retrieval Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071218</link>
        <id>10.14569/IJACSA.2016.071218</id>
        <doi>10.14569/IJACSA.2016.071218</doi>
        <lastModDate>2016-12-31T12:28:34.4700000+00:00</lastModDate>
        
        <creator>Huda Omar Aljaloud</creator>
        
        <creator>Mohammed Dahab</creator>
        
        <creator>Mahmoud Kamal</creator>
        
        <subject>stemming; information retrieval; light10; Quran lexicon; mobile performance; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Stemming algorithms are employed in information retrieval (IR) to reduce variants of the same word with different endings to a standard stem. Stemmers can also help IR systems by unifying vocabulary, reducing term variants, reducing storage space, and increasing the likelihood of matching documents, all of which make stemming very attractive for use in IR. This paper studies the impact of stemming techniques on mobile IR effectiveness. Two word-extraction stemming techniques are used: a light stemmer and a dictionary-lookup stemmer. In addition, three sets of experiments were conducted in this research in order to raise the efficiency of mobile applications. Implementing the two stemming approaches and assessing their accuracy by calculating precision, recall, MAP, and F-measure produced results which show that the light10 stemmer outperforms the dictionary-lookup stemmer in precision and MAP. Furthermore, the mobile performance of the light10 stemmer exceeds that of the dictionary-based stemmer.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_18-Stemmer_Impact_on_Quranic_Mobile_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>All in Focus Image Generation based on New Focusing Measure Operators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071217</link>
        <id>10.14569/IJACSA.2016.071217</id>
        <doi>10.14569/IJACSA.2016.071217</doi>
        <lastModDate>2016-12-31T12:28:34.4230000+00:00</lastModDate>
        
        <creator>Hossam Eldeen M. Shamardan</creator>
        
        <subject>Focus Measure; All In Focus; Stockwell Transform; DOST</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>To generate an all-in-focus image, Shape-From-Focus (SFF) is used. The key to SFF is finding the optimal focus depth at each pixel or area in an image within a sequence of images. In this paper, two new focusing measure operators are suggested for use with SFF. The suggested operators are based on modifications of a state-of-the-art tool for time-frequency analysis, the Stockwell Transform (ST). The first operator depends on the Discrete Orthogonal Stockwell Transform (DOST), which represents a pared-down version of the ST, while the other depends on the Pixelwise DOST (P-DOST), which provides a local spatial frequency description. Both operators improve computational complexity and memory demand compared with operators depending on the ST. A comparison of the suggested operators with ST-based operators shows that their performance is analogous to that of the ST.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_17-All_in_Focus_Image_Generation_based_on_New_ Focusing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Fall Detection using Smartphone Acceleration Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071216</link>
        <id>10.14569/IJACSA.2016.071216</id>
        <doi>10.14569/IJACSA.2016.071216</doi>
        <lastModDate>2016-12-31T12:28:34.3930000+00:00</lastModDate>
        
        <creator>Tran Tri Dang</creator>
        
        <creator>Hai Truong</creator>
        
        <creator>Tran Khanh Dang</creator>
        
        <subject>fall detection; long lie detection; acceleration sensor; smartphone; personal healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>In this paper, we describe our work on developing an automatic fall detection technique using a smartphone. Falls are detected by analyzing acceleration patterns generated during various activities. An additional long-lie detection algorithm is used to improve the fall detection rate while keeping the false positive rate at an acceptable value. An application prototype is implemented on the Android operating system and used to evaluate the performance of the proposed technique. Experiment results show the potential of using this app for fall detection. However, a more realistic experimental setting is needed to make this technique suitable for use in real-life situations.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_16-Automatic_Fall_Detection_using_Smartphone_Acceleration_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring the Data Openness for the Open Data in Saudi Arabia e-Government – A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071215</link>
        <id>10.14569/IJACSA.2016.071215</id>
        <doi>10.14569/IJACSA.2016.071215</doi>
        <lastModDate>2016-12-31T12:28:34.3470000+00:00</lastModDate>
        
        <creator>Marwah W. AlRushaid</creator>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>abstraction; accessibility; awareness; benchmarking; e-Government; knowledge; information; openness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Conceptually, data can be found at the lowest level of abstraction, from which information and knowledge are extracted. Data itself has no meaning unless it is interpreted and transformed into information and knowledge. Thus, governments have come to appreciate the connection between releasing data and obtaining information and knowledge in return. However, the abstract nature of data, with its undefined benefits to everyday life, has slowed public awareness of open data and its relevance, so the increasing efforts by governments to embrace open data agendas may not be clear to, or shared among, the public. Most open government data initiatives focus on the technology needed to support the usability and accessibility of data, but this focus has not been proven to increase citizens’ awareness. Citizens’ awareness of open data practices must be carefully measured, as without citizen engagement, open government data is useless. The purpose of this research is to measure the data openness level of the Saudi Arabia e-Government Data Portal. A model proposed by the researcher, based on the scoring model of the Global Open Data Index, is used for this measurement.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_15-Measuring_ the_Data_Openness_for_ the_Open_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bayesian Approach to Predicting Water Supply and  Rehabilitation of Water Distribution Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071213</link>
        <id>10.14569/IJACSA.2016.071213</id>
        <doi>10.14569/IJACSA.2016.071213</doi>
        <lastModDate>2016-12-31T12:28:34.3000000+00:00</lastModDate>
        
        <creator>Abdelaziz Lakehal</creator>
        
        <creator>Fares Laouacheria</creator>
        
        <subject>Water distribution network (WDN) management; Rehabilitation; Pipes and valves reliability; Bayesian Networks (BN); Water supply</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>A water distribution network (WDN) consists of several elements, the main ones being pipes and valves. The work developed in this article focuses on short- and long-term water supply prediction. To this end, reliability data were combined with decision-making tools for water distribution network rehabilitation in a forecasting context. The pipes are static elements that transport water to customers, while the valves are dynamic components that manage the water flow. This paper presents a Bayesian approach that allows management of a water distribution network based on the evaluation of the reliability of network components. Modeling based on a Static Bayesian Network (SBN) is implemented to analyze qualitatively and quantitatively the availability of water in the different segments of the network. Dynamic Bayesian Networks (DBN) are then used to assess valve reliability as a function of time, which allows management of water distribution based on the assessment of water availability in different segments. Finally, an application to data from a fraction of a distribution network supplying a town is presented to show the effectiveness and strong contribution of Bayesian networks (BN) in this research field.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_13-A_Bayesian_Approach_to_Predicting_Water_Supply.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards A Broader Adoption of Agile Software Development Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071212</link>
        <id>10.14569/IJACSA.2016.071212</id>
        <doi>10.14569/IJACSA.2016.071212</doi>
        <lastModDate>2016-12-31T12:28:34.2670000+00:00</lastModDate>
        
        <creator>Abdallah Alashqur</creator>
        
        <subject>Agile Methods; Agile software development; SCRUM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Traditionally, software design and development has followed the engineering approach exemplified by the waterfall model, where specifications have to be fully detailed and agreed upon prior to starting the software construction process. Agile software development is a relatively new approach in which, among other characteristics, specifications are allowed to evolve even after development has begun. Agile methods thus provide more flexibility than the waterfall model, which is very useful in many projects. The adoption of agile methods in software development projects can be further encouraged if certain agile practices and techniques are improved. This paper analyzes several practices and techniques that are part of agile methods and that may hinder their broader acceptance, and proposes solutions to improve these practices and consequently facilitate a wider adoption of agile methods in software development.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_12-Towards_A_Broader_Adoption_of_Agile_Software_Development_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Sliding Mode Nonlinear Extended State Observer  based Active Disturbance Rejection Control for Uncertain Systems with Unknown Total Disturbance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071211</link>
        <id>10.14569/IJACSA.2016.071211</id>
        <doi>10.14569/IJACSA.2016.071211</doi>
        <lastModDate>2016-12-31T12:28:34.2370000+00:00</lastModDate>
        
        <creator>Wameedh Riyadh Abdul-Adheem</creator>
        
        <creator>Ibraheem Kasim Ibraheem</creator>
        
        <subject>extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance, based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using the sliding mode technique for a general uncertain system, assuming asymptotic stability. The convergence characteristics of the estimation error are then analyzed using the Lyapunov approach, which reveals that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to the total disturbance. An ADRC is then implemented using the nonlinear state error feedback (NLSEF) controller suggested by J. Han, together with the proposed SMESO, to control a permanent magnet DC (PMDC) motor and actively reject its total disturbance. These disturbances are caused by unknown exogenous disturbances and the matched uncertainties of the controlled model. The proposed SMESO is compared with the linear extended state observer (LESO). Digital simulations using MATLAB/SIMULINK show that the chattering phenomenon on the control input channel is reduced dramatically compared to the LESO. Finally, the closed-loop system exhibits high immunity to torque disturbance and strong robustness to matched uncertainties in the system.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_11-Improved_Sliding_Mode_Nonlinear_Extended_State_Observer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model and Criteria for the Automated Refactoring of the UML Class Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071210</link>
        <id>10.14569/IJACSA.2016.071210</id>
        <doi>10.14569/IJACSA.2016.071210</doi>
        <lastModDate>2016-12-31T12:28:34.2070000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Olga Deryugina</creator>
        
        <subject>UML; refactoring; class diagrams; software architecture; software design; UML transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Many papers have been written on the challenges of software refactoring. The question is which refactorings can be applied at the modelling level, for example based on the UML model. To evaluate this possibility, an algorithm and a software tool for automated UML class diagram refactoring were introduced. The proposed software tool reduces the UML class diagram complexity metric.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_10-Model_and_Criteria_for_the_Automated_Refactoring_of_the_UML_Class.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Partial Transmit Sequence Segmentation Schemes to Reduce the PAPR in OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071209</link>
        <id>10.14569/IJACSA.2016.071209</id>
        <doi>10.14569/IJACSA.2016.071209</doi>
        <lastModDate>2016-12-31T12:28:34.1900000+00:00</lastModDate>
        
        <creator>Yasir Amer Al-Jawhar</creator>
        
        <creator>Nor Shahida M. Shah</creator>
        
        <creator>Montadar Abas Taher</creator>
        
        <creator>Mustafa Sami Ahmed</creator>
        
        <creator>Khairun N. Ramli</creator>
        
        <subject>OFDM; PAPR; PTS; adjacent PTS; interleaving PTS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Although the orthogonal frequency division multiplexing (OFDM) system is widely used in high-speed data rate wired and wireless environments, the peak-to-average power ratio (PAPR) is one of its major obstacles in real applications. A high PAPR value drives some devices of the OFDM system, such as power amplifiers and analog-to-digital converters, to operate outside their working range, degrading the system efficiency. Many techniques have been proposed to overcome the high PAPR in OFDM systems, such as partial transmit sequences (PTS), selected mapping, and interleaving. PTS is considered one of the effective PAPR reduction methods; this scheme segments the input data into several subblocks which are then recombined. The three well-known segmentation schemes are pseudo-random, adjacent, and interleaving; their PAPR reduction performance and computational complexity differ from one another. In this paper, five types of segmentation schemes are proposed to improve PAPR reduction performance, including sine- and cosine-shaped schemes as well as new hybrid interleaving and adjacent schemes. The simulation results show that the proposed methods achieve greater PAPR reduction than the adjacent and interleaving partition schemes without increasing the computational complexity of the system. Moreover, the enhanced schemes realize better PAPR performance with any number of subcarriers.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_9-An_Enhanced_Partial_Transmit_Sequence_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation using Codebook Index Statistics for Vector Quantized Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071208</link>
        <id>10.14569/IJACSA.2016.071208</id>
        <doi>10.14569/IJACSA.2016.071208</doi>
        <lastModDate>2016-12-31T12:28:34.1570000+00:00</lastModDate>
        
        <creator>Hsuan T. Chang</creator>
        
        <creator>Jian-Tein Su</creator>
        
        <subject>image segmentation; vector quantization; index image; adaptive thresholding; codebook statistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>In this paper, the segmentation using codebook index statistics (SUCIS) method is proposed for vector-quantized images. Three different codebooks are constructed according to the statistical characteristics (mean, variance, and gradient) of the codewords. They are then employed to generate three different index images, which can be used to analyze the image contents, including the homogeneous, edge, and texture blocks. An adaptive thresholding method is proposed to assign all image blocks in the compressed image to several disjoint regions with different characteristics. To make the segmentation result more accurate, two post-processing methods, region merging and boundary smoothing, are proposed. Finally, the pixel-wise segmentation result can be obtained by partitioning the image blocks at the single-pixel level. Experimental results demonstrate the effectiveness of the proposed SUCIS method on image segmentation, especially for applications in object extraction.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_8-Segmentation_using_Codebook_Index_Statistics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Heterogeneous Framework to Detect Intruder Attacks in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071207</link>
        <id>10.14569/IJACSA.2016.071207</id>
        <doi>10.14569/IJACSA.2016.071207</doi>
        <lastModDate>2016-12-31T12:28:34.1100000+00:00</lastModDate>
        
        <creator>Mustafa Al-Fayoumi</creator>
        
        <creator>Yasir Ahmad</creator>
        
        <creator>Usman Tariq</creator>
        
        <subject>Intrusion; node compromise; anomaly; signature; MLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Wireless sensor networks (WSNs) have been broadly implemented in real-world applications, such as forest fire monitoring, military target detection, medical and scientific areas, and, above all, daily home life. Nevertheless, WSNs are easily compromised by adversaries because their broadcast transmission medium lacks tamper resistance. Consequently, an intruder can overhear all traffic, replay previous messages, inject malicious data packets, or compromise a node. Sensor nodes are particularly vulnerable to two main security issues: node authentication and node compromise. In this paper, a heterogeneous framework for node capture and intrusion detection in WSNs is proposed. This framework efficiently detects captured nodes using a novel technique embedded with an intrusion detection mechanism that combines signature- and anomaly-based approaches with neural network Multi-Layer Perceptron (MLP) classification in a clustering environment. Moreover, the proposed framework achieves efficiency at reasonable computation and communication costs, and it can serve as a security shield for real WSN applications.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_7-A_Heterogeneous_Framework_to_Detect_Intruder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSECM: Robust Search Engine using Context-based Mining for Educational Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071206</link>
        <id>10.14569/IJACSA.2016.071206</id>
        <doi>10.14569/IJACSA.2016.071206</doi>
        <lastModDate>2016-12-31T12:28:34.0800000+00:00</lastModDate>
        
        <creator>D. Pratiba</creator>
        
        <creator>G. Shobha</creator>
        
        <subject>Big Data; Context; Cloud; Educational Data; Hadoop; Search Engine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>With accelerating growth in the educational sector, aided by ICT and cloud-based services, there is a consistent rise in educational big data, whose storage and processing pose the prime challenge. Although many recent attempts have used open-source frameworks such as Hadoop for storage, issues remain with security management and data analysis. Moreover, mining techniques are difficult to apply in upcoming search engines because educational data are unstructured. The proposed system introduces a technique called RSECM (Robust Search Engine using Context-based Mining) that presents a novel archival and search engine. RSECM generates its own massive stream of educational big data and performs efficient search over it. The outcomes show that RSECM outperforms SQL-based approaches in retrieval speed for dynamic user-defined queries.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_6-RSECM_Robust_Search_Engine_using_Context-based_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scheduling of Distributed Algorithms for Low Power Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071205</link>
        <id>10.14569/IJACSA.2016.071205</id>
        <doi>10.14569/IJACSA.2016.071205</doi>
        <lastModDate>2016-12-31T12:28:34.0500000+00:00</lastModDate>
        
        <creator>Stanislaw Deniziak</creator>
        
        <creator>Albert Dzitkowski</creator>
        
        <subject>Embedded system; distributed algorithm; task scheduling; big.LITTLE; low power system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Recently, the advent of embedded multicore processors has created interesting technologies for power management. Systems consisting of low-power and high-efficiency cores create new possibilities for the optimization of power consumption. However, new design methods dedicated to these technologies should be developed. In this paper we present a method of static task scheduling for low-power real-time embedded systems. We assume that the system is specified as a distributed algorithm and then implemented using a multi-core embedded processor with low-power processing capabilities. We propose a new scheduling method that creates an optimal or suboptimal schedule. The goal of optimization is to minimize power consumption while satisfying all time constraints, or while keeping the quality of service as high as possible. We present experimental results, obtained for sample systems, showing the advantages of our method.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_5-Scheduling_of_Distributed_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Development of Real-Time Handwritten Urdu Character to Speech Conversion System for Visually Impaired</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071204</link>
        <id>10.14569/IJACSA.2016.071204</id>
        <doi>10.14569/IJACSA.2016.071204</doi>
        <lastModDate>2016-12-31T12:28:34.0170000+00:00</lastModDate>
        
        <creator>Tajwar Sultana</creator>
        
        <creator>Abdul Rehman Abbasi</creator>
        
        <creator>Bilal Ahmed Usmani</creator>
        
        <creator>Sadeem Khan</creator>
        
        <creator>Wajeeha Ahmed</creator>
        
        <creator>Naima Qaseem</creator>
        
        <creator>Sidra</creator>
        
        <subject>Artificial Neural Network; Classification; OCR; Text To Speech; Urdu Handwritten Character</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Text to Speech (TTS) conversion systems have been an area of research for decades and have been developed for both handwritten and typed text in various languages. Existing research shows that dealing with the Urdu language is a challenging task due to the complexity of Urdu ‘Nastaliq’ (a rich variety of writing styles); therefore, to the best of our knowledge, not much work has been carried out in this area. Keeping in view the importance of the Urdu language and the lack of development in this domain, our research focuses on a ‘handwritten’ Urdu TTS system. The idea is to first recognize a handwritten Urdu character and then convert it into audible human speech. Since handwriting styles vary greatly from person to person, a machine learning technique, Artificial Neural Networks (ANN), is used for the recognition part. Correctly recognized characters then undergo processing which converts them into human speech. Using this methodology, a working prototype has been successfully implemented in MATLAB that gives an overall accuracy of 91.4%. Our design serves as a platform for further research and future enhancements for word and sentence processing, especially for visually impaired people.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_4-Towards_Development_of_Real_Time_Handwritten_Urdu_Character.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Universally Designed and Usable Data Visualization for A Mobile Application in the Context of Rheumatoid Arthritis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071203</link>
        <id>10.14569/IJACSA.2016.071203</id>
        <doi>10.14569/IJACSA.2016.071203</doi>
        <lastModDate>2016-12-31T12:28:33.9870000+00:00</lastModDate>
        
        <creator>Suraj Shrestha</creator>
        
        <creator>Pietro Murano</creator>
        
        <subject>universal design; usability; evaluation; data visualization; mobile application; rheumatoid arthritis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>This paper discusses the design, development and evaluation of a data visualization prototype for a mobile application for people with rheumatoid arthritis. The visualizations concern ways of graphically displaying data for monitoring and evaluating the daily activities of rheumatoid arthritis sufferers. An initial visualization was developed, and then a second was developed, aiming to be more usable and more universally designed than the first version. An empirical experiment was used for evaluation and the collection of quantitative data. Furthermore, semi-structured interviews were used to elicit more qualitative data in terms of participant opinions. The overall results suggest that the second visualization was more usable and more universally designed than the first version. The paper concludes with some recommendations for future improvements.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_3-A_Universally_Designed_and_Usable_Data_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Mixed Signal Platform to Study the Accuracy/Complexity Trade-Off of DPD Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071202</link>
        <id>10.14569/IJACSA.2016.071202</id>
        <doi>10.14569/IJACSA.2016.071202</doi>
        <lastModDate>2016-12-31T12:28:33.9400000+00:00</lastModDate>
        
        <creator>Hanan Thabet</creator>
        
        <creator>Morgan Roger</creator>
        
        <creator>Caroline Lelandais-Perrault</creator>
        
        <subject>LTE; DPD algorithms; Simulation Platform; Accuracy/Complexity; Power Consumption; Co-Simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>The increase in the bandwidth of Power Amplifier (PA) input signals has led to the development of more complex behavioral PA models. The most recent models, such as the Generalized Memory Polynomial (1) or polyharmonic distortion modeling (2), can be used to design high-performing but complex, and thus power-hungry, Digital Predistortion algorithms (DPDs). On the other hand, with earlier, simpler models, the precision of the DPD may not be sufficient. The model order is also the major factor influencing the requirements in terms of bandwidth and dynamic range of the digitized signal in the feedback loop of a typical power amplification system architecture: the higher the order, the more information is needed for identification.
This paper describes a new mixed signal simulation platform developed to study the complexity vs. accuracy trade-off from the DPD point of view. The platform estimates the accuracy of the DPD and the power consumption (including the consumption of the DPD itself) of the whole feedback loop by comparing various PA models with various DPD algorithms. Contrary to older works, which measure the accuracy on the open loop without DPD and estimate the complexity as a theoretical number of operations, our goal is to estimate with precision the performance and power consumption of the whole amplification system (PA + DPD + DAC + feedback loop) for the optimization of DPD algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_2-A_New_Mixed_Signal_Platform_to_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation Method of Ionospheric TEC Distribution using Single Frequency Measurements of GPS Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071201</link>
        <id>10.14569/IJACSA.2016.071201</id>
        <doi>10.14569/IJACSA.2016.071201</doi>
        <lastModDate>2016-12-31T12:28:33.8630000+00:00</lastModDate>
        
        <creator>Win Zaw Hein</creator>
        
        <creator>Yoshitaka Goto</creator>
        
        <creator>Yoshiya Kasahara</creator>
        
        <subject>Global Positioning System; GPS; Signal processing and propagation; Ionospheric Delay; Total Electron Content (TEC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(12), 2016</description>
        <description>Satellite-to-ground communications are influenced by ionospheric plasma, which varies depending on solar and geomagnetic activities as well as regions and local times. With the expanding use of space, continuous monitoring of the ionospheric plasma has become an important issue. In the Global Positioning System (GPS), the ionospheric delay, which is proportional to the ionospheric total electron content (TEC) along the propagation path, is the largest error in signal propagation. The TEC has been observed from dual-frequency GPS signals because only the ionospheric delay has frequency dependence. Costs of multi-frequency receivers are, however, much higher than those of single-frequency ones. In the present study, a method for estimating the TEC distribution map from single-frequency GPS measurements was developed. The developed method was evaluated by comparing its results with those from dual-frequency measurements. The method makes it possible to expand ionospheric TEC observation networks easily.</description>
        <description>http://thesai.org/Downloads/Volume7No12/Paper_1-Estimation_Method_of_Ionospheric_TEC_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Backpropagation System for the Study of Obesity in Childhood</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051204</link>
        <id>10.14569/IJARAI.2016.051204</id>
        <doi>10.14569/IJARAI.2016.051204</doi>
        <lastModDate>2016-12-13T12:44:07.5400000+00:00</lastModDate>
        
        <creator>A. Medina-Santiago</creator>
        
        <creator>E. M. Melgar-Paniagua</creator>
        
        <creator>L. C. Campos-Reyes</creator>
        
        <creator>A. Cisneros-G&#243;mez</creator>
        
        <creator>N. R. Garc&#237;a-Chong</creator>
        
        <subject>prediction system; Nutrition; Backpropagation Neural Network; Obesity</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(12), 2016</description>
        <description>This paper presents the development of a nutritional system using a backpropagation neural network that can provide clear and simple prediction of obesity problems in children up to twelve years of age, based on their eating habits during the day. The development of this project took into account various factors that are vital for the proper development of infants. A prediction system can offer a solution for several factors that are not easily determined by conventional means.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No12/Paper_4-Neural_Backpropagation_System_for_the_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071158</link>
        <id>10.14569/IJACSA.2016.071158</id>
        <doi>10.14569/IJACSA.2016.071158</doi>
        <lastModDate>2016-12-10T20:49:45.0300000+00:00</lastModDate>
        
        <creator>Hiqmat Nisa</creator>
        
        <creator>Salma Imtiaz</creator>
        
        <creator>Muhammad Uzair Khan</creator>
        
        <creator>Saima Imtiaz</creator>
        
        <subject>Domain Model; UML; Experiment; Noun Phrasing Technique; Category List Technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories that are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the domain models constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness, and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e., extra concepts, associations, and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort compared to the category list technique. There is no statistically significant difference between the two techniques in terms of correctness.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_58-Impact_of_Domain_Modeling_Techniques_on_the_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mood Extraction Using Facial Features to Improve Learning Curves of Students in E-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071157</link>
        <id>10.14569/IJACSA.2016.071157</id>
        <doi>10.14569/IJACSA.2016.071157</doi>
        <lastModDate>2016-12-10T20:49:45.0130000+00:00</lastModDate>
        
        <creator>Abdulkareem Al-Alwani</creator>
        
        <subject>Mood extraction; Facial features; Facial recognition; Online education; E-Learning; Attention state; Learning styles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Students’ interest and involvement during class lectures is imperative for grasping concepts and significantly improves the academic performance of students. Direct supervision of lectures by instructors is the main reason behind student attentiveness in class. Still, a significant percentage of students tend to lose concentration even under direct supervision. In an e-learning environment, this problem is aggravated by the absence of any human supervision. This calls for an approach to assess and identify lapses of attention by a student in an e-learning session. This study is carried out to improve students’ involvement in e-learning platforms by using their facial features to extract mood patterns. Analyzing the moods based on the emotional states of a student during an online lecture can provide interesting results which can be readily used to improve the efficacy of content delivery in an e-learning platform. A survey is carried out among instructors involved in e-learning to identify the most probable facial features that represent the facial expressions or mood patterns of a student. A neural network approach is used to train the system on facial feature sets to predict specific facial expressions. Moreover, a data association based algorithm specifically for extracting information on emotional states by correlating multiple sets of facial features is also proposed.
This framework showed promising results in inciting students’ interest by varying the content being delivered. Different combinations of inter-related facial expressions over specific time frames were used to estimate mood patterns and subsequently the level of involvement of a student in an e-learning environment. The results achieved during the course of the research showed that the mood patterns of a student correlate well with his or her interest or involvement during online lectures and can be used to vary the content to improve students’ involvement in the e-learning system. More facial expressions and mood categories can be included to diversify the application of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_57-Mood_Extraction_Using_Facial_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Decision Support System to Selection of the Blended Learning Platforms for Mathematics and ICT Learning at SMK TI Udayana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051203</link>
        <id>10.14569/IJARAI.2016.051203</id>
        <doi>10.14569/IJARAI.2016.051203</doi>
        <lastModDate>2016-12-10T19:13:34.5900000+00:00</lastModDate>
        
        <creator>I Made Ardana</creator>
        
        <creator>I Putu Wisna Ariawan</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Blended Learning; Decision Support System; Weighted Product; Mathematics and ICT Learning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(12), 2016</description>
        <description>The development of information technology can solve problems in various areas of life, including education. Examples of the application of information technology in education include e-learning, e-libraries, e-modules, and blended learning. One school in Bali that applies information technology in the learning process is SMK TI Udayana, which does so through blended learning. Several platforms can be used for blended learning at SMK TI Udayana, including Edmodo, Quipper School, Moodle, and Kelase. In practice, the number of available platforms sometimes confuses teachers and students when choosing the most suitable one, especially for mathematics and ICT learning at SMK TI Udayana. To address this problem, a decision support system was created to select a blended learning platform suitable for mathematics and ICT learning at SMK TI Udayana. The method used in this decision support system is the weighted product, because it can determine the blended learning platform for mathematics and ICT learning based on the highest value obtained from the calculation of several criteria.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No12/Paper_3-Development_of_Decision_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Hiding Method Replacing LSB of Hidden Portion for Secret Image with Run-Length Coded Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051202</link>
        <id>10.14569/IJARAI.2016.051202</id>
        <doi>10.14569/IJARAI.2016.051202</doi>
        <lastModDate>2016-12-10T19:13:34.5600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Wavelet; DWT; Steganography; Random number based Permutation; Data hiding; Data compression</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(12), 2016</description>
        <description>A data hiding method based on steganography that improves secret image invisibility by replacing the Least Significant Bit (LSB) with a run-length coded image is proposed. The proposed method is based on the Discrete Wavelet Transformation (DWT): the run-length coded secret image is embedded in the LSB of the high-frequency component before reconstruction of the image to be opened to the public, improving the invisibility of the secret image. Before embedding, the bits of the coded secret image are reordered using random numbers; therefore, the invisibility of the secret image is much improved. Through experiments, it is confirmed that secret images are almost invisible in the distributed images. Data hiding performance, in terms of the invisibility of the secret images embedded in the distributed images, is evaluated with the Peak Signal-to-Noise Ratio (PSNR) and the Root Mean Square (RMS) difference between the original secret image and the one extracted from the distributed images. Meanwhile, conventional Multi-Resolution Analysis (MRA) based data hiding is attempted with a variety of parameters, namely the level of MRA and the location of the frequency component that the secret image replaces, and is compared to the proposed method. It is found that the proposed data hiding method is superior to the conventional method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No12/Paper_2-Data_Hiding_Method_Replacing_LSB_of_Hidden.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Size Distribution Estimation Method using Reflected Laser Light Angle Dependency by Rain Droplets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051201</link>
        <id>10.14569/IJARAI.2016.051201</id>
        <doi>10.14569/IJARAI.2016.051201</doi>
        <lastModDate>2016-12-10T19:13:34.4970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Rainfall; Laser ranging; Marshall Palmer distribution; Rayleigh scattering; Mie scattering; Phase function; Rain droplet size distribution</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(12), 2016</description>
        <description>Methods for size distribution estimation and rainfall type discrimination with an estimated phase function, using reflected laser light measured from rain droplets, are proposed. Preliminary experiments are conducted with a laser ranging instrument and a spectral radiometer for the estimation of size distribution and rainfall type discrimination, as well as the phase function of scattering by rain droplets. Through the experiments, it is found that the rainfall type can be discriminated together with rain droplet size distribution estimation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No12/Paper_1-Size_Distribution_Estimation_Method_Using_Reflected_Laser.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Science Approach to Philosophy: Schematizing Whitehead’s Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071156</link>
        <id>10.14569/IJACSA.2016.071156</id>
        <doi>10.14569/IJACSA.2016.071156</doi>
        <lastModDate>2016-12-01T10:57:10.1970000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>A. N. Whitehead; schematization; metaphysical ontology; diagrammatic representation; flow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Diagrams are used in many areas of study to depict knowledge and to assist in understanding of problems. This paper aims to utilize schematic representation to facilitate understanding of certain philosophical works; specifically, it is an attempt, albeit tentative, to schematize A. N. Whitehead’s ontological approach. It targets professionals and students in fields outside of philosophy such as computer science and engineering, who often look to sources in philosophy for design ideas or for a critical framework for practice. Yet students in such fields struggle to navigate thinkers’ writings. The paper employs schematization as an apparatus of specification for clarifying philosophical language by describing philosophical ideas in a form familiar to computer science. The resultant high-level representation seems to be a viable tool for enhancing the relationship between philosophy and computer science, especially in computer science education.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_56-Computer_Science_Approach_to_Philosophy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Analytical Modeling for Persuasive Design Choices in Mobile Apps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071155</link>
        <id>10.14569/IJACSA.2016.071155</id>
        <doi>10.14569/IJACSA.2016.071155</doi>
        <lastModDate>2016-12-01T10:57:10.1830000+00:00</lastModDate>
        
        <creator>Hamid Mukhtar</creator>
        
        <subject>goal; intent; analytics; modeling; feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Persuasive technology has emerged as a new field of research in the past decade, with applications in various domains including web design, human-computer interaction, healthcare systems, and social networks. Although persuasive technology has its roots in psychology and cognitive sciences, researchers from the computing disciplines are also increasingly interested in it. Unfortunately, the existing theories, models, and frameworks for persuasive system design fall short due to the absence of the systematic design processes commonly used in the computing domains, as well as a lack of support for appropriate post-analysis.
This work provides some insight into these limitations and identifies the importance of analytical modeling for persuasion in mobile application design. The authors illustrate, using a case study, that appropriate mathematical models can be applied together with user modeling to develop a persuasive system that allows the designer to consider several design choices simultaneously.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_55-Towards_Analytical_Modeling_for_Persuasive_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Model for Assessing Multilevel Security-Critical Object-Oriented Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071154</link>
        <id>10.14569/IJACSA.2016.071154</id>
        <doi>10.14569/IJACSA.2016.071154</doi>
        <lastModDate>2016-12-01T10:57:10.1670000+00:00</lastModDate>
        
        <creator>Bandar M. Alshammari</creator>
        
        <subject>Multilevel Security Models; Object-Orientation; Security Metrics; Security Matrix; Unified Modeling Language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The most promising approach for developing secure systems is one that allows software developers to assess and compare the relative security of their programs based on their designs. Software metrics provide an easy approach for evaluating the security of certain object-oriented designs. They can also measure the impact on security caused by modifications to existing programs. However, most studies in this area focus on a binary classification of data, either classified or unclassified. In fact, there are other models with other classifications of data, for instance, the common model used by defense departments that classifies data into four security levels. However, these various classifications have received little attention in terms of measuring their effect. This paper introduces a model for measuring the information flow of security-critical data within an object-oriented program with a multilevel classification of its security-critical data. It defines a set of object-oriented security metrics capable of assessing the security of a given program’s design from the point of view of potential information flow. These metrics can be used to compare the security of programs or assess the effect of program modifications on security. Specifically, this paper proposes a generic model consisting of several security metrics to measure the relative security of object-oriented designs with respect to the design quality properties of accessibility, cohesion, coupling, and design size.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_54-A_Generic_Model_for_Assessing_Multilevel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Mining: Techniques, Applications and Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071153</link>
        <id>10.14569/IJACSA.2016.071153</id>
        <doi>10.14569/IJACSA.2016.071153</doi>
        <lastModDate>2016-12-01T10:57:10.1500000+00:00</lastModDate>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Shaeela Ayesha</creator>
        
        <creator>Fakeeha Fatima</creator>
        
        <subject>Classification; Knowledge Discovery; Applications; Information Extraction; Patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Rapid progress in digital data acquisition techniques has led to a huge volume of data. More than 80 percent of today’s data is composed of unstructured or semi-structured data. The discovery of appropriate patterns and trends to analyze text documents from this massive volume of data is a big issue. Text mining is a process of extracting interesting and non-trivial patterns from huge amounts of text documents. Different techniques and tools exist to mine text and discover valuable information for future prediction and the decision-making process. The selection of the right and appropriate text mining technique helps to enhance speed and decreases the time and effort required to extract valuable information. This paper briefly discusses and analyzes text mining techniques and their applications in diverse fields of life. Moreover, the issues in the field of text mining that affect the accuracy and relevance of results are identified.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_53-Text_Mining_Techniques_Applications_and_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Random Forest Approach for Resource Allocation in 5G Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071152</link>
        <id>10.14569/IJACSA.2016.071152</id>
        <doi>10.14569/IJACSA.2016.071152</doi>
        <lastModDate>2016-12-01T10:57:10.1370000+00:00</lastModDate>
        
        <creator>Parnika De</creator>
        
        <creator>Shailendra Singh</creator>
        
        <subject>5G; Cognitive Radio; Clustering; Fusion Centre; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>According to the annual Visual Networking Index (VNI) report, 4G will reach its maturity by the year 2020, and an incremental approach will not meet demand. The only way is to switch to the newer generation of mobile technology, called 5G. Resource allocation is a critical problem that strongly impacts 5G network operation. Timely and accurate assessment of bandwidth underutilized by the primary user is necessary in order to use it efficiently and increase network efficiency. This paper presents a decision-making system at the Fusion Center using a modified Random Forest. The modified Random Forest is first trained on a database accumulated by measuring different network parameters, and it can then take decisions on the allocation of resources. The Random Forest is retrained after a fixed time interval, considering the dynamic nature of the network. We also test its performance in comparison with the existing AND/OR decision logic at the Fusion Center.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_52-Modified_Random_Forest_Approach_for_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software-Defined Networks (SDNs) and Internet of Things (IoTs): A Qualitative Prediction for 2020</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071151</link>
        <id>10.14569/IJACSA.2016.071151</id>
        <doi>10.14569/IJACSA.2016.071151</doi>
        <lastModDate>2016-12-01T10:57:10.1030000+00:00</lastModDate>
        
        <creator>Sahrish Khan Tayyaba</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Naila Sher Afzal Khan</creator>
        
        <creator>Yousra Asim</creator>
        
        <creator>Wajeeha Naeem</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <subject>SDN; IoT; Integration of SDN-IoT; WSN; LTE; M2M communication; NFV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The Internet of Things (IoT) is an imminent technology that is attracting industry and research attention at a fast stride. Currently, more than 15 billion devices are connected to the Internet, and this number is expected to reach up to 50 billion by 2020. The data generated by these IoT devices is immensely high, creating resource allocation, flow management, and security jeopardies in the IoT network. Programmability and centralised control are considered an alternative solution to address IoT issues. A Software-Defined Network (SDN) provides centralised and programmable control and management for the underlying network without changing the existing network architecture. This paper surveys the state of the art on IoT integration with the SDN. A comprehensive review of the generalised solutions over the period 2010-2016 is presented for the different communication domains. Furthermore, a critical review of the IoT and SDN technologies, current trends in research, and the futuristic contributing factors form part of the paper. The comparative analysis of the existing solutions of SDN-based IoT implementation provides an easy and concise view of the emerging trends. Lastly, the paper predicts the future and presents a qualitative view of the world in 2020.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_51-Software_Defined_Networks_SDNs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determination of Child Vulnerability Level from a Decision-Making System based on a Probabilistic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071150</link>
        <id>10.14569/IJACSA.2016.071150</id>
        <doi>10.14569/IJACSA.2016.071150</doi>
        <lastModDate>2016-12-01T10:57:10.0730000+00:00</lastModDate>
        
        <creator>SAHA Kouassi Bernard</creator>
        
        <creator>BROU Konan Marcelin</creator>
        
        <creator>Goor&#233; Bi Tra</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Crisis; Children; XML data warehouse; data mining; scheduling; Resilience; snowflake pattern; vulnerability level; probabilistic model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The purpose of this paper is to provide a decision support tool based on a mathematical model and an algorithm that can help in assessing the level of vulnerability of children in C&#244;te d&#39;Ivoire. The study was conducted in three phases: the first includes the establishment of a data warehouse; the second involves the application of a probabilistic model; and the final phase deals with the classification of children considered vulnerable in descending order, from the most to the least vulnerable. The purpose of this classification is to better manage the resources of donors to support vulnerable children. This work is part of the activities of the UMRI on the resilience of C&#244;te d’Ivoire. It proposes mathematical and computational tools to facilitate the work of the Centre for social resilience. The context of children made vulnerable by crises or diseases is an example of a practical application of our social resilience model.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_50-Determination_of_Child_Vulnerability_Level.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WQbZS: Wavelet Quantization by Z-Scores for JPEG2000</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071149</link>
        <id>10.14569/IJACSA.2016.071149</id>
        <doi>10.14569/IJACSA.2016.071149</doi>
        <lastModDate>2016-12-01T10:57:10.0430000+00:00</lastModDate>
        
        <creator>Jesus Jaime Moreno-Escobar</creator>
        
        <creator>Oswaldo Morales-Matamoros</creator>
        
        <creator>Ricardo Tejeida-Padilla</creator>
        
        <creator>Ana Lilia Coria-Paes</creator>
        
        <creator>Teresa Ivonne Contreras-Troya</creator>
        
        <subject>Z-Scores; Statistical Normalization; Wavelet Transformation; Scalar Quantization; Deadzone Quantization; JPEG2000</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In this document we present a methodology to quantize wavelet coefficients for any wavelet-based entropy coder, and we apply it in the particular case of JPEG2000. Any compression system has three main steps: transformation in terms of frequency, quantization, and entropy coding. The only element responsible for reducing or maintaining precision is the second one, quantization, since it is the lossy-compression element that reduces the precision of dequantized pixels in order to make quantized pixels more compressible. We modify the well-known dead-zone scalar quantization by introducing Z-scores in the process. Z-scores are expressed in terms of standard deviations from their means; as a result, these Z-scores have a distribution with a mean of 0 and a standard deviation of 1. In this way we increase the redundancies in the image, which produces a lower compression ratio.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_49-WQbZS_Wavelet_Quantization_by_Z_Scores.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synergies of Advanced Technologies and Role of VANET in Logistics and Transportation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071148</link>
        <id>10.14569/IJACSA.2016.071148</id>
        <doi>10.14569/IJACSA.2016.071148</doi>
        <lastModDate>2016-12-01T10:57:10.0270000+00:00</lastModDate>
        
        <creator>Kishwer Abdul Khaliq</creator>
        
        <creator>Amir Qayyum</creator>
        
        <creator>Jurgen Pannek</creator>
        
        <subject>VANET; IEEE802.11p; Logistics; Vehicular Ad-hoc Network; Transportation; Technology role</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In Intelligent Transport Systems (ITS), the Vehicular Ad-hoc Network (VANET) is one of the key wireless technologies, helping to manage road safety, traffic efficiency, fleets, logistics, and transportation. The objective of this paper is to give an overview of the implications of different technologies and the placement of VANET in transportation, and specifically in logistics. We provide researchers with an overview of the technologies considered in logistics scenarios and the current projects regarding VANET for safety and non-safety applications. We additionally discuss current and potential domains in logistics in which new applications can improve efficiency through the use of new and existing technologies.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_48-Synergies_of_Advanced_Technologies_and_Role.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polynomial based Channel Estimation Technique with Sliding Window for M-QAM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071147</link>
        <id>10.14569/IJACSA.2016.071147</id>
        <doi>10.14569/IJACSA.2016.071147</doi>
        <lastModDate>2016-12-01T10:57:09.9970000+00:00</lastModDate>
        
        <creator>O. O. Ogundile</creator>
        
        <creator>M. O. Oloyede</creator>
        
        <creator>F. A. Aina</creator>
        
        <creator>S. S. Oyewobi</creator>
        
        <subject>Channel estimation; Doppler frequency; frame length; interpolation; polynomial order</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Pilot Symbol Assisted Modulation (PSAM) channel estimation techniques over Rayleigh fading channels have been analysed in recent years. Fluctuations in the Rayleigh fading channel gain degrade the performance of any modulation scheme. This paper develops and analyses a PSAM polynomial interpolation technique based on Least Squares (LS) approximations to estimate the Channel State Information (CSI) for M-ary Quadrature Amplitude Modulation (M-QAM) over flat Rayleigh fading channels. A sliding window approach with pilot symbol adjustment is employed in order to minimize the computational time complexity of the estimation technique. The channel estimation performance, and its computational delay and time complexity, are verified for different Doppler frequencies (fd), frame lengths (L), and polynomial orders (P-orders). Simulation results show that cubic polynomial interpolation gives better Symbol Error Rate (SER) performance than quadratic polynomial interpolation and higher P-orders, and that the performance of the polynomial estimation techniques degrades as the P-order increases.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_47-Polynomial_based_Channel_Estimation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Agent Framework for Data Extraction,Transformation and Loading in Data Warehouse</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071146</link>
        <id>10.14569/IJACSA.2016.071146</id>
        <doi>10.14569/IJACSA.2016.071146</doi>
        <lastModDate>2016-12-01T10:57:09.9630000+00:00</lastModDate>
        
        <creator>Ramzan Talib</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <creator>Fakeeha Fatima</creator>
        
        <creator>Shaeela Ayesha</creator>
        
        <subject>Data Warehouse; Extraction; Loading; Multi-Agent; Operational Data; Transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The rapid growth in the size of data sets poses a challenge to extract and analyze information in a timely manner for better prediction and decision making. A data warehouse is the solution for strategic decision making, serving as a repository to store historical and current data. The Extraction, Transformation and Loading (ETL) process gathers data from different sources and integrates it into the data warehouse. This paper proposes a multi-agent framework that enhances the efficiency of the ETL process. Agents perform the specific tasks assigned to them, so the identification of errors at different stages of the ETL process becomes easy; this was difficult and time consuming in the traditional ETL process. The multi-agent framework identifies data sources and extracts, integrates, transforms, and loads data into the data warehouse. A monitoring agent remains active during this process and generates alerts if there is an issue at any stage.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_46-Multi_Agent_Framework_for_ETL_Fakeeha.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Approximation for Toeplitz, Tridiagonal, Symmetric and Positive Definite Linear Systems that Grow Over Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071145</link>
        <id>10.14569/IJACSA.2016.071145</id>
        <doi>10.14569/IJACSA.2016.071145</doi>
        <lastModDate>2016-12-01T10:57:09.9330000+00:00</lastModDate>
        
        <creator>Pedro Mayorga</creator>
        
        <creator>Alfonso Estudillo</creator>
        
        <creator>A. Medina-Santiago</creator>
        
        <creator>Jos&#233; V&#225;zquez</creator>
        
        <creator>Fernando Ramos</creator>
        
        <subject>real time interpolation; linear convergence; Cholesky decomposition; biomedical data acquisition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Linear systems with tridiagonal structures are very common in problems related not only to engineering but also to chemistry, biomedicine, and finance, for example, real-time cubic B-spline interpolation of ND-images, real-time processing of electrocardiography (ECG), and hand-drawing recognition. In those problems in which the matrix is positive definite, it is possible to optimize the solution in O(n) time. This paper describes such systems whose size grows over time and proposes an O(1)-time approximation of such systems based on a series of previous approximations. In addition, the development of the method is described and it is proved that the proposed solution converges linearly to the optimal one. A real-time cubic B-spline interpolation of an ECG is computed with this proposal; for this application the proposed method shows a global relative error near 10^-6 and its computation is faster than traditional methods, as shown in the experiments.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_45-Fast_Approximation_for_Toeplitz_Tridiagonal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of OLSR Protocol Implementations using Analytical Hierarchical Process (AHP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071144</link>
        <id>10.14569/IJACSA.2016.071144</id>
        <doi>10.14569/IJACSA.2016.071144</doi>
        <lastModDate>2016-12-01T10:57:09.9000000+00:00</lastModDate>
        
        <creator>Ashfaq Ahmad Malik</creator>
        
        <creator>Athar Mahboob</creator>
        
        <creator>Tariq Mairaj Rasool Khan</creator>
        
        <subject>OLSR; MANET; AHP; Routing Protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Ad-hoc networks are part of the IEEE 802.11 Wireless LAN standard, also called the Independent Basic Service Set (IBSS), and work as a peer-to-peer network by default. They operate without the requirement of an infrastructure (such as an access point) and demand specific routing support to work as a multi-hop network. There are various ad-hoc network routing protocols, categorized as proactive, reactive, and hybrid. OLSR, a proactive routing protocol, is one of the most widely used routing protocols in ad-hoc networks. In this paper, an empirical study and analysis of the various OLSR implementations (by different research groups and individuals) has been conducted in light of Relative Opinion Scores (ROS) and the Analytical Hierarchical Process (AHP) Online System software. Based on a quantitative comparison of results, it is concluded that the OLSRd project is the most up to date and the best amongst the six variants of OLSR protocol implementations.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_44-Evaluation_of_OLSR_Protocol_Implementations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Error Detection Method for P300-based Spelling Using Riemannian Geometry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071143</link>
        <id>10.14569/IJACSA.2016.071143</id>
        <doi>10.14569/IJACSA.2016.071143</doi>
        <lastModDate>2016-12-01T10:57:09.8870000+00:00</lastModDate>
        
        <creator>Attaullah Sahito</creator>
        
        <creator>M. Abdul Rahman</creator>
        
        <creator>Jamil Ahmed</creator>
        
        <subject>Brain Computer Interface; EEG; P300; Riemannian geometry; xDAWN; Covariances; Tangent Space; Elastic net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Brain-Computer Interface (BCI) systems have become one of the valuable research areas of ML (Machine Learning), and AI-based techniques have brought significant change to traditional medical diagnostic systems. In particular, the electroencephalogram (EEG) measures the electrical activity of the brain, which results from ionic currents in neurons. A brain-computer interface (BCI) system uses these EEG signals to assist humans in different ways. The P300 signal is one of the most important and most widely studied EEG phenomena in the Brain Computer Interface domain. For instance, the P300 signal can be used in a BCI to translate a subject’s intention from mere thoughts, via brain waves, into actual commands, which can eventually be used to control different electromechanical devices and artificial human body parts. The low Signal-to-Noise Ratio (SNR) of the P300 is a major challenge, because concurrently ongoing heterogeneous brain activities and artifacts make it difficult to infer human intentions. To address this challenge, this research proposes a system called the Adaptive Error Detection method for P300-Based Spelling using Riemannian Geometry. The system comprises three main steps: in the first step, the raw signal is cleaned by preprocessing; in the second step, the most relevant features are extracted using xDAWN spatial filtering along with covariance matrices for handling high-dimensional data; and in the final step, an elastic net classification algorithm is applied after mapping from the Riemannian manifold to Euclidean space using tangent space mapping. Results obtained by the proposed method are comparable to state-of-the-art methods while decreasing computation time drastically; the results suggest a six-fold decrease in time and better performance under inter-session and inter-subject variability.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_43-Adaptive_Error_Detection_Method_for_P300.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Connected Dominating Set based Optimized Routing Protocol for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071142</link>
        <id>10.14569/IJACSA.2016.071142</id>
        <doi>10.14569/IJACSA.2016.071142</doi>
        <lastModDate>2016-12-01T10:57:09.8530000+00:00</lastModDate>
        
        <creator>Hamza Faheem</creator>
        
        <creator>Naveed Ilyas</creator>
        
        <creator>Siraj ul Muneer</creator>
        
        <creator>Sadaf Tanvir</creator>
        
        <subject>Connected Dominating Set; Wireless Sensor Networks; Energy Efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Wireless Sensor Networks (WSNs) face the problem of energy starvation in their operations. This constraint demands that the topology of communicating nodes be limited. One way of restraining the communicating nodes is to form a Connected Dominating Set (CDS) out of them. In this paper, an Optimized Region Based Efficient Data (AORED) routing protocol for WSNs is proposed. CDS is employed in AORED to create a virtual backbone of communicating nodes in the network. An empirical study involving extensive simulations shows that the proposed routing protocol outperforms the legacy DEEC and SEP protocols: compared to DEEC and SEP, AORED achieves an increased number of transmission rounds, an increased number of cluster heads, and a reduced number of packets sent to the base station.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_42-Connected_Dominating_Set_based_Optimized_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Denoising in Wavelet Domain Using Probabilistic Graphical Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071141</link>
        <id>10.14569/IJACSA.2016.071141</id>
        <doi>10.14569/IJACSA.2016.071141</doi>
        <lastModDate>2016-12-01T10:57:09.8400000+00:00</lastModDate>
        
        <creator>Maham Haider</creator>
        
        <creator>Muhammad Usman Riaz</creator>
        
        <creator>Imran Touqir</creator>
        
        <creator>Adil Masood Siddiqui</creator>
        
        <subject>Gaussian Mixture Models (GMM); Hidden Markov Model (HMM); Discrete Wavelet Transform (DWT); Hidden Markov Tree (HMT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Denoising of real-world images degraded by Gaussian noise is a long-established problem in statistical signal processing. Existing models in the time-frequency domain typically treat the wavelet coefficients as either independent or jointly Gaussian. However, in the compression arena, techniques like denoising and detection require models that are non-Gaussian in nature. Probabilistic graphical models designed in the time-frequency domain serve this purpose, achieving denoising and compression with improved performance. In this work, a Hidden Markov Model (HMM) designed with the 2D Discrete Wavelet Transform (DWT) is proposed. A comparative analysis of the proposed method with different existing techniques, namely wavelet-based and curvelet-based methods in the Bayesian network domain and an empirical Bayesian approach using the Hidden Markov Tree model, is presented. Results are compared in terms of PSNR and visual quality.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_41-Denoising_in_Wavelet_Domain_Using_Probabilistic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image De-Noising and Compression Using Statistical based Thresholding in 2-D Discrete Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071140</link>
        <id>10.14569/IJACSA.2016.071140</id>
        <doi>10.14569/IJACSA.2016.071140</doi>
        <lastModDate>2016-12-01T10:57:09.8070000+00:00</lastModDate>
        
        <creator>Qazi Mazhar</creator>
        
        <creator>Adil Masood Siddique</creator>
        
        <creator>Imran Touqir</creator>
        
        <creator>Adnan Ahmad Khan</creator>
        
        <subject>Wavelet Thresholding; Statistical Thresholding; Image De-noising; Image Compression; Wavelet Sub-band Thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Images are very good information carriers, but they depart from their original condition during transmission and are corrupted by different kinds of noise. The purpose of de-noising is to remove the noisy coefficients such that a minimum amount of information is lost and a maximum amount of noise is suppressed or reduced. We consider the Generalized Gaussian distribution for modeling the noise. In the proposed technique, statistical thresholding methods are used to estimate the threshold value, while the bi-orthogonal wavelet is employed for image decomposition and reconstruction. A qualitative and quantitative analysis of thresholding methods on different images shows that statistical thresholding achieves significantly better objective and subjective quality than other de-noising methods.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_40-Image_De_Noising_and_Compression_Using_Statistical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet-based Image Modelling for Compression Using Hidden Markov Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071139</link>
        <id>10.14569/IJACSA.2016.071139</id>
        <doi>10.14569/IJACSA.2016.071139</doi>
        <lastModDate>2016-12-01T10:57:09.7930000+00:00</lastModDate>
        
        <creator>Muhammad Usman Riaz</creator>
        
        <creator>Imran Touqir</creator>
        
        <creator>Maham Haider</creator>
        
        <subject>Hidden Markov model; Wavelet transformation; Compression; Expectation Maximization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Statistical signal modeling using hidden Markov models is one of the techniques used for image compression. Wavelet-based statistical signal models are impractical for most real-time processing because they usually represent the wavelet coefficients as jointly Gaussian or independent of each other. In this paper, we develop an algorithm that succinctly characterizes the interdependencies of wavelet coefficients and their non-Gaussian behavior, especially for image compression. This is done by combining the features of the hidden Markov model and the wavelet transform, which gives comparatively better results. To estimate the parameters of the wavelet-based hidden Markov model, an efficient expectation-maximization algorithm is developed.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_39-Wavelet_based_Image_Modelling_for_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Relay Selection Scheme based on Fuzzy Logic for Cooperative Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071138</link>
        <id>10.14569/IJACSA.2016.071138</id>
        <doi>10.14569/IJACSA.2016.071138</doi>
        <lastModDate>2016-12-01T10:57:09.7600000+00:00</lastModDate>
        
        <creator>Shakeel Ahmad Waqas</creator>
        
        <creator>Imran Touqir</creator>
        
        <creator>Nasir Khan</creator>
        
        <creator>Imran Rashid</creator>
        
        <subject>Cooperative Networks; Relay selection schemes; Amplify and forward; Fuzzy logic; Nakagami Fading channel; Rician Fading Channel; Rayleigh Fading Channel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The performance of a cooperative network can be increased by using a relay selection technique; interest in relay selection is therefore rising. We propose two new relay selection schemes based on fuzzy logic for dual-hop cooperative communication. These schemes take SNR (signal-to-noise ratio), cooperative gain, and channel gain as input fuzzy parameters for selecting the best relay. The performance of the first proposed scheme is evaluated in terms of BER (bit error rate) in Nakagami, Rician, and Rayleigh fading channels. In the second proposed scheme, a threshold is used with the objective of minimizing power consumption and channel-estimation load. Its performance is analyzed in terms of BER, the number of active relays, and the number of channel estimations required.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_38-Efficient_Relay_Selection_Scheme_based_on_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Scientific Workflows Management System SWFMS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071137</link>
        <id>10.14569/IJACSA.2016.071137</id>
        <doi>10.14569/IJACSA.2016.071137</doi>
        <lastModDate>2016-12-01T10:57:09.7300000+00:00</lastModDate>
        
        <creator>M. Abdul Rahman</creator>
        
        <subject>Scientific Workflows; Workflow Management System; Reference Architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In today’s electronic world, conducting scientific experiments, especially in the natural sciences, has become more and more challenging for domain scientists, since “science” today is more complex due to a two-dimensional intricacy: one, assorted and complex computational (analytical) applications; and two, the increasingly large volume and heterogeneity of the scientific data products processed by these applications. Furthermore, the involvement of an increasingly large number of scientific instruments, such as sensors and machines, makes scientific data management even more challenging, since the data generated by such instruments are highly complex. To reduce the complexity of conducting scientific experiments as much as possible, an integrated framework that transparently implements the conceptual separation between the two dimensions is direly needed. To facilitate scientific experiments, “workflow” technology has in recent years emerged in scientific disciplines like biology, bioinformatics, geology, environmental science, and eco-informatics, and much research work has been done to develop scientific workflow systems. However, our analysis of these existing systems shows that they lack a well-structured conceptual modeling methodology to deal with the two complex dimensions in a transparent manner. This paper presents a scientific workflow framework that properly addresses these two-dimensional complexities.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_37-Scalable_Scientific_Workflows_Management_System_SWFMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Re-Engineering Mechanism to Improve the Efficiency of Software Re-Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071136</link>
        <id>10.14569/IJACSA.2016.071136</id>
        <doi>10.14569/IJACSA.2016.071136</doi>
        <lastModDate>2016-12-01T10:57:09.7130000+00:00</lastModDate>
        
        <creator>A. Cathreen Graciamary</creator>
        
        <creator>Chidambaram</creator>
        
        <subject>Software Engineering; Software Re engineering; Software Quality; Restructuring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Generally, software re-engineering is an economical and effective way to provide a much-needed boost to an existing software system. Software re-engineering aims to obtain a fully complete system from existing software, with additional features if needed. The overall process of software re-engineering is to analyze the needed requirements and their contents, and to change or transform the existing software system in order to reconstruct a new one. The difficult part of re-engineering is understanding the legacy system. Most software re-engineering mechanisms aim to achieve the common re-engineering objectives: improved software quality, reduced complexity, reduced maintenance cost, and increased reliability. However, several traditional re-engineering mechanisms fail to verify the performance of individual functionality in the existing software, and this performance evaluation increases the complexity of the re-engineering process. To minimize the complexities of software re-engineering, the proposed system implements a novel approach named the Enhanced Re-engineering mechanism. This enhanced mechanism introduces a new idea: before executing the rebuild process, the developer verifies the performance of the particular function in the existing system. The function’s performance is then compared using the proposed algorithm, and the rebuild process is carried out only on the basis of this comparison. Finally, the proposed mechanism reduces the complexities of software re-engineering.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_36-Enhanced_Re_Engineering_Mechnanism_to_Improve.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent System for Detection of Abnormalities in Human Cancerous Cells and Tissues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071135</link>
        <id>10.14569/IJACSA.2016.071135</id>
        <doi>10.14569/IJACSA.2016.071135</doi>
        <lastModDate>2016-12-01T10:57:09.6830000+00:00</lastModDate>
        
        <creator>Jamil Ahmed Chandio</creator>
        
        <creator>M. Abdul Rahman Soomrani</creator>
        
        <subject>Medical Image mining; Decision support system; Pre-process; DICOM; FNAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Due to the latest advances in the field of MML (Medical Machine Learning), a significant change has been witnessed, and traditional diagnostic procedures have been converted into DSS (Decision Support Systems). In particular, the classification problem of cancer discovery using DICOM (Digital Imaging and Communications in Medicine) is assumed to be one of the most important problems. For example, differentiating between the cancerous behaviors of chromatin deviations and nucleus-related changes in a finite set of nuclei may support the cytologist during the cancer diagnostic process. In order to assist doctors during cancer diagnosis, this paper proposes a novel algorithm, BCC (Bag_of_cancerous_cells), to select the most significant histopathological features from well-differentiated thyroid cancers. The methodology of the proposed system comprises three layers. In the first layer, data preparation is performed using BMF (Bag of Malignant Features), where each nucleus is separated along with its related micro-architectural components and behaviors. In the second layer, a decision model is constructed using a CNN (Convolutional Neural Network) classifier trained on histopathological behaviors such as BCP (Bags of Chromatin Patches) and BNP (Bags of Nuclei Patches). In the final layer, performance evaluation is carried out. A total of 4520 nuclei observations were used to construct the decision models, of which BCP comprises 2650 instances and BNP comprises 1870 instances. The best measured accuracy was 97.93% for BCP and 97.86% for BNP.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_35-Intelligent_System_for_Detection_of_Abnormalities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ETEEM- Extended Traffic Aware Energy Efficient MAC Scheme for WSNs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071134</link>
        <id>10.14569/IJACSA.2016.071134</id>
        <doi>10.14569/IJACSA.2016.071134</doi>
        <lastModDate>2016-12-01T10:57:09.6530000+00:00</lastModDate>
        
        <creator>Younas Khan</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Fakhri Alam Khan</creator>
        
        <creator>Imran Ahmad</creator>
        
        <creator>Saqib Shahid Rahim</creator>
        
        <creator>M. Irfan Khattak</creator>
        
        <subject>Energy Consumption; Multi-hop; Network Allocator Vector; Throughput; Wireless Sensor Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The idle listening issue arises when a sensor node listens to the medium despite the absence of data, which results in wasted energy. ETEEM is a variant of the Traffic Aware Energy Efficient MAC protocol (TEEM) which focuses on energy optimization through reduced idle listening time and much lower overhead on energy sources. It uses a novel scheme for using the idle listening time of sensor nodes: nodes are active only for a small amount of time and remain in sleep mode most of the time when no data is available. ETEEM reduces energy at the byte level, using a smaller packet called FLAG to replace the longer SYNC packets of S-MAC and the SYNCrts packets of TEEM, respectively. It also uses a single acknowledgement packet per data set, reducing energy by lowering the frequency of acknowledgment frames sent. The performance of ETEEM is 70% better compared to the other MAC protocols under consideration.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_34-ETEEM_Extended_Traffic_Aware_Energy_Efficient_MAC_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constraints in the IoT: The World in 2020 and Beyond</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071133</link>
        <id>10.14569/IJACSA.2016.071133</id>
        <doi>10.14569/IJACSA.2016.071133</doi>
        <lastModDate>2016-12-01T10:57:09.6370000+00:00</lastModDate>
        
        <creator>Asma Haroon</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Yousra Asim</creator>
        
        <creator>Wajeeha Naeem</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Qaisar Javaid</creator>
        
        <subject>Internet of Things; Future Internet; Next generation network issues; World-wide network; 2020</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The Internet of Things (IoT), often referred to as the future Internet, is a collection of interconnected devices integrated into the world-wide network that covers almost everything and could be available anywhere. The IoT is an emerging technology and aims to play an important role in saving money, conserving energy, eliminating gaps, and enabling better monitoring for intensive management on a routine basis. On the other hand, it also faces certain design constraints, such as technical challenges, social challenges, compromised privacy, and performance tradeoffs. This paper surveys the major technical limitations hindering the successful deployment of the IoT, such as standardization, interoperability, networking issues, addressing and sensing issues, power and storage restrictions, and privacy and security. It categorizes the existing research on these technical constraints published in recent years; with this categorization, we aim to provide an easy and concise view of the technical aspects of the IoT. Furthermore, we forecast the changes influenced by the IoT and provide an estimation of the world in the year 2020 and beyond.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_33-Constraints_in_the_IoT_The_World_in_2020.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issue Tracking System based on Ontology and Semantic Similarity Computation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071132</link>
        <id>10.14569/IJACSA.2016.071132</id>
        <doi>10.14569/IJACSA.2016.071132</doi>
        <lastModDate>2016-12-01T10:57:09.6070000+00:00</lastModDate>
        
        <creator>Habes Alkhraisat</creator>
        
        <subject>issue tracking; ontology; similarity computation; vector space model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>A computer program is never truly finished; change is a constant feature of program development, and there is always something that needs to be added, redone, or fixed. Therefore, issue-tracking systems are widely used in system development to keep track of reported issues. This paper proposes a new architecture for an automated issue tracking system based on ontology and a semantic similarity measure. The proposed architecture integrates several natural language techniques, including the vector space model, domain ontology, term weighting, the cosine similarity measure, and synonyms for semantic expansion. The proposed system searches for similar issue templates, which are characteristic of certain fields, and identifies similar issues in an automated way; finally, possible experts and responses are extracted. The experimental results demonstrate the accuracy of the new architecture, indicating that the accuracy reaches 94%.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_32-Issue_Tracking_System_based_on_Ontology_and_Semantic_Similarity_Computation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Implementation of an Open-Circuit Dc-Bus Capacitor Fault Diagnosis Method for a Three-Level NPC Rectifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071131</link>
        <id>10.14569/IJACSA.2016.071131</id>
        <doi>10.14569/IJACSA.2016.071131</doi>
        <lastModDate>2016-12-01T10:57:09.5900000+00:00</lastModDate>
        
        <creator>Fatma Ezzahra LAHOUAR</creator>
        
        <creator>Mahmoud HAMOUDA</creator>
        
        <creator>Jaleleddine BEN HADJ SLAMA</creator>
        
        <subject>fault detection; capacitor failure; open-circuit fault; real-time implementation; multilevel converters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The main goal of this paper is to detect the open-circuit fault of the electrolytic capacitors usually used in the dc-bus of a three-phase/three-level NPC active rectifier. This fault causes unavoidable overvoltage across the dc-bus, leading to the destruction of the converter’s power semiconductors. Real-time detection of this fault is therefore vital to avoid severe damage as well as wasted repair time. The proposed diagnosis method is based on measuring the voltages across the two dc-bus capacitors. Their mean values are compared with half the value of the dc-bus reference voltage; if the comparison result is below a predefined threshold, a fault alarm signal is generated in real time by the monitoring system. The converter’s control algorithm and the fault detection method are both implemented in real time on a DSP controller. The experimental results confirm the effectiveness of the proposed diagnosis technique: a fault signal is generated at the peripheral of the DSP within 60 ms of the fault occurrence.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_31-Real_Time_Implementation_of_an_Open_Circuit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Automatic Road-Accident Detection using Machine Vision Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071130</link>
        <id>10.14569/IJACSA.2016.071130</id>
        <doi>10.14569/IJACSA.2016.071130</doi>
        <lastModDate>2016-12-01T10:57:09.5600000+00:00</lastModDate>
        
        <creator>Vaishnavi Ravindran</creator>
        
        <creator>Lavanya Viswanathan</creator>
        
        <creator>Shanta Rangaswamy</creator>
        
        <subject>Feature extraction; Image denoising; Machine vision; object detection; Supervised learning; Support vector machines</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In this paper, a novel approach for automatic road-accident detection is proposed. The approach is based on detecting damaged vehicles in footage received from surveillance cameras installed on roads and highways, which would indicate the occurrence of a road accident. Detection of damaged cars falls under the category of object detection in the field of machine vision and has not been achieved so far. This paper proposes a new supervised learning method comprising three stages, combined into a single serial framework, that successfully detects damaged cars in static images. The three stages use five support vector machines trained with Histogram of Oriented Gradients (HOG) and Gray Level Co-occurrence Matrix (GLCM) features. Since damaged-car detection has not been attempted before, two datasets of damaged cars, Damaged Cars Dataset-1 (DCD-1) and Damaged Cars Dataset-2 (DCD-2), were compiled for public release. Experiments were conducted on DCD-1 and DCD-2, which differ in the distance at which the images were captured and in image quality. The accuracy of the system is 81.83% on DCD-1 (captured at approximately 2 meters, with good quality) and 64.37% on DCD-2 (captured at approximately 20 meters, with poor quality).</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_30-A_Novel_Approach_to_Automatic_Road_Accident_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Word Tile Puzzle using Bee Colony Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071129</link>
        <id>10.14569/IJACSA.2016.071129</id>
        <doi>10.14569/IJACSA.2016.071129</doi>
        <lastModDate>2016-12-01T10:57:09.5270000+00:00</lastModDate>
        
        <creator>Erum Naz</creator>
        
        <creator>Khaled Al-Dabbas</creator>
        
        <creator>Mahdi Abrishami</creator>
        
        <creator>Lars Mehnen</creator>
        
        <creator>Milan Cvetkovic</creator>
        
        <subject>slide tile puzzle; artificial bee colony algorithm; swarm intelligence; artificial intelligence; fitness function; loyalty function; word tile puzzle; Bee colony optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In this paper, an attempt has been made to solve the word tile puzzle with the help of the Bee Colony Algorithm, in order to find the maximum number of words by moving a tile up, down, right or left. The Bee Colony Algorithm is a heuristic algorithm and is more efficient than blind algorithms in terms of running time and cost of search time. To examine the performance of the implemented algorithm, several experiments were performed with various combinations. The algorithm was evaluated with the help of statistical functions, such as average, maximum and minimum, for one hundred and two hundred iterations. Results show that an increasing number of agents can improve the average number of words found for both numbers of tested iterations. However, a continuous increase in the number of steps will not improve the results. Moreover, the results of both settings showed that the overall performance of the algorithm was not much improved by increasing the number of iterations.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_29-Solving_Word_Tile_Puzzle_using_Bee_Colony_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Metrics for Decision Support in Big Data vs. Traditional RDBMS Tools &amp; Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071128</link>
        <id>10.14569/IJACSA.2016.071128</id>
        <doi>10.14569/IJACSA.2016.071128</doi>
        <lastModDate>2016-12-01T10:57:09.4970000+00:00</lastModDate>
        
        <creator>Alazar Baharu</creator>
        
        <creator>Durga Prasad Sharma</creator>
        
        <subject>Big Data; RDBMSs; big data tools; Variety; velocity; volume; Metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In the IT industry, research communities and data scientists have observed that Big Data has challenged legacy solutions. The term ‘Big Data’ is used for any collection of data or data sets so large and complex that it is difficult to process and manage using traditional data processing applications and existing Relational Database Management Systems (RDBMSs). The most important challenges in Big Data include analysis, capture, curation, search, sharing, storage, transfer, visualization and privacy. As data increases in various dimensions, with structured, semi-structured and unstructured features at high velocity, high volume and high variety, RDBMSs face a further fold of challenges to be studied and analyzed. Due to the aforesaid limitations of RDBMSs, data scientists and information managers have been forced to rethink alternative solutions for handling such data with the 3Vs. Initially, the research study focused on developing an intelligent base for decision makers, so that suitable long-term alternative solutions for handling data and information with the 3Vs can be designed. In this research, attempts have been made to analyze the feature-based capabilities of RDBMSs, after which performance experimentation, observation and analysis have been carried out with Big Data handling tools and technologies. The features considered for scientific observation and analysis were resource consumption, execution time, on-demand scalability, maximum data size, structure of the data, data visualization, ease of deployment, cost and security. Finally, the research provides decision support metrics for decision makers in selecting the appropriate tool or technology based on the nature of the data to be handled in the target organizations.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_28-Performance_Metrics_for_Decision_Support_in_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Variability Management in Business-IT Alignment: MDA based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071127</link>
        <id>10.14569/IJACSA.2016.071127</id>
        <doi>10.14569/IJACSA.2016.071127</doi>
        <lastModDate>2016-12-01T10:57:09.4800000+00:00</lastModDate>
        
        <creator>Hanae Sbai</creator>
        
        <creator>Mounia Fredj</creator>
        
        <subject>alignment; variability; MDA; PAIS; configurable service; configurable process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The expansion of PAIS (Process Aware Information Systems) has created the need for reuse in business processes. In fact, companies are left with directories containing several variants of the same business process, which differ according to their application context. Consequently, the development of PAIS has become increasingly expensive. Therefore, research in the business process management domain introduced the concept of the configurable process, with the aim of managing the variability of business processes. However, with the emergence of the service-based development paradigm, the alignment of services with business processes is highly required in PAIS. Thus, in this paper an MDA-based method that allows for generating configurable services from configurable processes is proposed.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_27-Variability_Management_in_Business_IT_Alignment_MDA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet based Scalable Edge Detector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071126</link>
        <id>10.14569/IJACSA.2016.071126</id>
        <doi>10.14569/IJACSA.2016.071126</doi>
        <lastModDate>2016-12-01T10:57:09.4330000+00:00</lastModDate>
        
        <creator>Imran Touqir</creator>
        
        <creator>Adil Masood Siddique</creator>
        
        <creator>Yasir Saleem</creator>
        
        <subject>Wavelet scales correlation; Edge detection; image denoising; Multiresolution analysis; entropy reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Fixed-size kernels are used to extract the differential structure of images. Increasing the kernel size reduces localization accuracy and noise, along with an increase in computational complexity. The computational cost of edge extraction is related to the image resolution or scale. In this paper, wavelet scale correlation for edge detection, along with scalability in the edge detector, is envisaged. The image is decomposed according to its resolution, structural parameters and noise level by multilevel wavelet decomposition using Quadrature Mirror Filters (QMF). The property that image structural information is preserved at each decomposition level, whereas noise is partially reduced within subbands, is exploited. An innovative wavelet synthesis approach is conceived, based on the scale correlation of the concordant detail bands, such that the reconstructed image fabricates an edge map of the image. Although this technique falls short of spotting a few edge pixels at contours, the results are better than those of the classical operators in noisy scenarios, and noise elimination in the edge maps is significant while keeping the default threshold constraint.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_26-Wavelet_based_Scalable_Edge_Detector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Commerce Adoption at Customer Level in Jordan: an Empirical Study of Philadelphia General Supplies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071125</link>
        <id>10.14569/IJACSA.2016.071125</id>
        <doi>10.14569/IJACSA.2016.071125</doi>
        <lastModDate>2016-12-01T10:57:09.3700000+00:00</lastModDate>
        
        <creator>Mohammed Al Masarweh</creator>
        
        <creator>Sultan Al-Masaeed</creator>
        
        <creator>Laila Al-Qaisi</creator>
        
        <creator>Ziad Hunaiti</creator>
        
        <subject>Information systems; E-commerce; E-commerce Adoption; E-commerce in Jordan; Jordan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>E-commerce in developing countries has been studied by numerous researchers during the last decade, and a number of common and culturally specific challenges have been identified. This study considers Jordan as a case study of a developing country where E-commerce is still in its infancy. Therefore, this research work comes as a complement to previous research and an opportunity to refine E-commerce adoption research. This research was conducted by a survey distributed randomly across branches of Philadelphia General Supplies (PGS), a small and medium enterprise (SME). The key findings of this research indicate that Jordanian society is moving towards online shopping at very low rates of adoption, due to barriers including weak infrastructure throughout the country outside the capital, societal trends and culture, and levels of education and computer literacy. This means that E-commerce in Jordan still remains an under-developed industry.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_25-E_Commerce_Adoption_at_Customer_Level_in_Jordan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterizations of Flexible Wearable Antenna based on Rubber Substrate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071124</link>
        <id>10.14569/IJACSA.2016.071124</id>
        <doi>10.14569/IJACSA.2016.071124</doi>
        <lastModDate>2016-12-01T10:57:09.3100000+00:00</lastModDate>
        
        <creator>Saadat Hanif Dar</creator>
        
        <creator>Jameel Ahmed</creator>
        
        <creator>Muhammad Raees</creator>
        
        <subject>wearable antenna; antenna characterization; antennas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Recent years have seen considerable attention from both scientific and academic communities in the field of flexible electronics-based systems. Most progressive flexible electronic systems require incorporating a flexible rubber-substrate antenna operating in specific bands to offer the wireless connectivity demanded by today’s network-centered society. This paper characterizes flexible antenna performance in the environments created by natural rubber as the substrate. A flexible antenna based on a rubber substrate was simulated using CST Microwave Studio with diverse permittivity and loss tangent values. In our work, prototype antennas were built using natural rubber with different carbon filler substances. This paper reveals the effects of the flexible substrate on the antenna quality factor (Q) and its consequences for bandwidth and gain. Such antennas were also found to perform better than existing designs under bending and washing conditions, showing less change in their gain, frequency shift and impedance mismatch.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_24-Characterizations_of_Flexible_Wearable_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Knowledge Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071123</link>
        <id>10.14569/IJACSA.2016.071123</id>
        <doi>10.14569/IJACSA.2016.071123</doi>
        <lastModDate>2016-12-01T10:57:09.2930000+00:00</lastModDate>
        
        <creator>Huda Umar Banuqitah</creator>
        
        <creator>Fathy Eassa</creator>
        
        <creator>Kamal Jambi</creator>
        
        <creator>Maysoon Abulkhair</creator>
        
        <subject>Knowledge Mining; Relation Extraction; Self-supervised; Big Data; Agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The Big Data (BD) era has arrived. Big data applications have ascended to a point where information accumulation has grown beyond the ability of present software tools to capture, manage and process it within a tolerably short time. Volume is not the only characteristic that defines big data; velocity, variety and value do as well. Many resources contain BD that should be processed. The biomedical research literature is one among many domains that hides rich knowledge. MEDLINE is a huge biomedical research database which remains a significantly underutilized source of biological information. Discovering useful knowledge from such a huge corpus leads to many problems related to the type of information, such as the related concepts of the domain of texts and the semantic relationships associated with them. In this paper, a two-level agent-based system for self-supervised relation extraction from MEDLINE using the Unified Medical Language System (UMLS) knowledge base is proposed. The model uses a self-supervised approach for Relation Extraction (RE) by constructing enhanced training examples using information from UMLS with hybrid text features. The model incorporates the Apache Spark and HBase BD technologies together with multiple data mining and machine learning techniques within a Multi Agent System (MAS). The system shows better results in comparison with the current state of the art and the na&#239;ve approach in terms of Accuracy, Precision, Recall and F-score.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_23-Big_Data_Knowledge_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiobjective Optimization for the Forecasting Models on the Base of the Strictly Binary Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071122</link>
        <id>10.14569/IJACSA.2016.071122</id>
        <doi>10.14569/IJACSA.2016.071122</doi>
        <lastModDate>2016-12-01T10:57:09.2630000+00:00</lastModDate>
        
        <creator>Nadezhda Astakhova</creator>
        
        <creator>Liliya Demidova</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>forecasting model; strictly binary tree; modified clonal selection algorithm; multiobjective optimization; affinity indicator; tendencies discrepancy indicator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The optimization problem dealing with the development of forecasting models on the base of strictly binary trees has been considered. The aim of this paper is a comparative analysis of two optimization variants applied for the development of the forecasting models. The first optimization variant assumes the application of one quality indicator of the forecasting model, named the affinity indicator, while the second variant applies two quality indicators of the forecasting model, named the affinity indicator and the tendencies discrepancy indicator. In both optimization variants, the search for the best forecasting models is carried out by means of the modified clonal selection algorithm. To obtain high variety in the population of forecasting models, it is proposed to consider crowding-distance values in the realization of the second optimization variant. The results of experimental studies confirming the efficiency of the modified clonal selection algorithm on the base of the second optimization variant are given.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_22-Multiobjective_Optimization_for_the_Forecasting_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Qos-based Computing Resources Partitioning between Virtual Machines in the Cloud Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071121</link>
        <id>10.14569/IJACSA.2016.071121</id>
        <doi>10.14569/IJACSA.2016.071121</doi>
        <lastModDate>2016-12-01T10:57:09.2300000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Evgeniy Pluzhnik</creator>
        
        <creator>Oleg Lukyanchikov</creator>
        
        <creator>Dmitry Biryukov</creator>
        
        <creator>Elena Andrianova</creator>
        
        <subject>cloud computing architecture; simulation; software for monitoring computer resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Cloud services are used very widely, but configuration of their parameters, including the efficient allocation of resources, is an important objective for the system architect. This article is devoted to solving the problem of choosing a computing architecture based on simulation and a developed program for monitoring computing resources. Techniques were developed aimed at providing the required quality of service and efficient use of resources. The article describes the program for monitoring computing resources and the time efficiency of the target application functions. On the basis of this application, a technique is shown and described in an experiment designed to ensure the requirements for quality of service by isolating one process from the others on different virtual machines inside the hypervisor.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_21-Qos_based_Computing_Resources_Partitioning_between_Virtual.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>State of the Art Exploration Systems for Linked Data: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071120</link>
        <id>10.14569/IJACSA.2016.071120</id>
        <doi>10.14569/IJACSA.2016.071120</doi>
        <lastModDate>2016-12-01T10:57:09.2300000+00:00</lastModDate>
        
        <creator>Karwan Jacksi</creator>
        
        <creator>Nazife Dimililer</creator>
        
        <creator>Subhi R. M. Zeebaree</creator>
        
        <subject>Exploratory Search System; Linked Data; Linked Data Browser; Semantic Web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The ever-increasing amount of data available on the web is the result of the simplicity of sharing data over the current Web. To retrieve relevant information efficiently from this huge dataspace, a sophisticated search technology, further complicated by the various data formats used, is crucial. Semantic Web (SW) technology has a prominent role in search engines in alleviating this issue, by providing a way to understand the contextual meaning of data so as to retrieve relevant, high-quality results. An Exploratory Search System (ESS) is a data seeking and search approach which helps searchers learn and explore their unclear topics and seeking goals through a set of actions. To obtain high-quality results for ESSs, Linked Open Data (LOD) is the optimal choice. In this paper, SW technology is reviewed and an overview of search strategies is provided, followed by a survey of the state of the art in Linked Data Browsers (LDBs) and ESSs based on LOD. Finally, each of the LDBs and ESSs is compared with respect to several features such as algorithms, data presentation, and explanations.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_20-State_of_the_Art_Exploration_Systems_for_Linked_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Chatbots to the Internet of Things: Opportunities and Architectural Elements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071119</link>
        <id>10.14569/IJACSA.2016.071119</id>
        <doi>10.14569/IJACSA.2016.071119</doi>
        <lastModDate>2016-12-01T10:57:09.2000000+00:00</lastModDate>
        
        <creator>Rohan Kar</creator>
        
        <creator>Rishin Haldar</creator>
        
        <subject>Internet of Things; Chatbots; Human-Computer Interaction; Conversational User Interfaces; Software Agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Internet of Things (IoT) is emerging as a significant technology in shaping the future by connecting physical devices or things with the web. It also presents various opportunities for intersection with other technological trends which can allow it to become even more intelligent and efficient. In this paper, we focus our attention on the integration of Intelligent Conversational Software Agents, or Chatbots, with IoT. Prior literature has covered various applications, features, underlying technologies and known challenges of IoT. On the other hand, Chatbots are a relatively new concept, being widely adopted due to significant progress in the development of platforms and frameworks. The novelty of this paper lies in the specific integration of Chatbots in the IoT scenario. We analyze the shortcomings of existing IoT systems and put forward ways to tackle them by incorporating chatbots. A general architecture is proposed for implementing such a system, as well as platforms and frameworks – both commercial and open source – which allow for the implementation of such systems. Identification of the newer challenges and possible future research directions for this new integration has also been addressed.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_19-Applying_Chatbots_to_the_Internet_of_Things.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Implicative Similarity Measures for User-based Collaborative Filtering Recommender System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071118</link>
        <id>10.14569/IJACSA.2016.071118</id>
        <doi>10.14569/IJACSA.2016.071118</doi>
        <lastModDate>2016-12-01T10:57:09.1700000+00:00</lastModDate>
        
        <creator>Nghia Quoc Phan</creator>
        
        <creator>Phuong Hoai Dang</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Similarity measures; Implication intensity; User-based collaborative filtering recommender system; statistical implicative similarity measures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>This paper proposes a new similarity measure for user-based collaborative filtering recommender systems. The similarity measure for two users is based on the implication intensity measure and is called the statistical implicative similarity measure (SIS). This similarity measure is applied to build an experimental framework for the user-based collaborative filtering recommender model. Experiments on the MovieLens dataset show that the model using our similarity measure achieves fairly accurate results compared with the user-based collaborative filtering model using traditional similarity measures such as Pearson correlation, Cosine similarity, and Jaccard.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_18-Statistical_Implicative_Similarity_Measures_for_User_based_Collaborative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MIMC: Middleware for Identifying &amp; Mitigating Congestion Level in Hybrid Mobile Adhoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071117</link>
        <id>10.14569/IJACSA.2016.071117</id>
        <doi>10.14569/IJACSA.2016.071117</doi>
        <lastModDate>2016-12-01T10:57:09.1530000+00:00</lastModDate>
        
        <creator>P. G. Sunitha Hiremath</creator>
        
        <creator>C.V. Guru Rao</creator>
        
        <subject>Middleware; Congestion Control; Traffic Management; Hybrid Mobile Adhoc network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Adoption of a middleware system to solve the congestion problem in mobile ad-hoc networks is rarely found in existing systems. A research gap is identified, as existing congestion control mechanisms in MANETs do not use middleware designs, and existing middleware systems were never investigated for their applicability to congestion control over mobile ad-hoc networks. Therefore, we introduce a novel middleware system called MIMC, or Middleware for Identifying and Mitigating Congestion in Hybrid Mobile Adhoc Networks. MIMC is also equipped with novel traffic modeling using a rule-based control matrix that not only provides a better picture of congestion but also assists in decision making for routing, where existing techniques fail. This paper discusses the algorithms and the results on multiple scenarios to show that MIMC performs better congestion control compared to existing techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_17-MIMC_Middleware_for_Identifying_Mitigating_Congestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Dynamic Real-Time Navigation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071116</link>
        <id>10.14569/IJACSA.2016.071116</id>
        <doi>10.14569/IJACSA.2016.071116</doi>
        <lastModDate>2016-12-01T10:57:09.1200000+00:00</lastModDate>
        
        <creator>Shun FUJITA</creator>
        
        <creator>Kayoko YAMAMOTO</creator>
        
        <subject>Navigation System; Dynamic Real-Time; Web-Based Geographical Information Systems (GIS); Social Media; Recommendation System; Augmented Reality (AR); Smart Glasses</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>This study aimed to develop a system that considers dynamic real-time situations to provide effective support for tourist activities. The conclusions of this study are summarized in the following three points: (1) The system was developed by integrating Web-GIS, social media, recommendation systems and AR terminals (smart glasses) into a single system, and operated in the central part of Yokohama City in Kanagawa Prefecture, Japan. It enabled the accumulation, sharing and recommendation of information, and navigation to guide users to their goals, both in normal conditions and in the event of disasters. (2) The web-based system was aimed at members of the general public over 18 years old and operated for seven weeks. The total number of users was 86, and 170 items of information were contributed. The system using smart glasses operated for two days, and its total number of users was 34. (3) Evaluation results clarified that it was possible to support user behavior both in normal conditions and in the event of disasters, and to efficiently and safely conduct navigation using smart glasses. Operation premised on disaster conditions showed that the number of users who accessed the system via mobile information terminals increased, and that they actively used functions requiring location information.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_16-Development_of_Dynamic_Real_Time_Navigation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of In-Network Caching in Content-Centric Advanced Metering Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071115</link>
        <id>10.14569/IJACSA.2016.071115</id>
        <doi>10.14569/IJACSA.2016.071115</doi>
        <lastModDate>2016-12-01T10:57:09.1070000+00:00</lastModDate>
        
        <creator>Nour El Houda Ben Youssef</creator>
        
        <creator>Yosra Barouni</creator>
        
        <creator>Sofiane Khalfallah</creator>
        
        <creator>Jaleleddine Ben Hadj Slama</creator>
        
        <creator>Khaled Ben Driss</creator>
        
        <subject>caching; placement; replacement; content-centric networking; Named Data Networking; Advanced Metering Infrastructure; Smart Grid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In-network caching is a key feature of content-centric networking. It is, however, a relatively costly mechanism with hardware requirements, besides the elaboration of placement/replication strategies. As content-centric networking is proposed in the literature to manage smart grid (SG) communications, we aim in this research work to investigate the cost effectiveness of in-network caching in this context. We consider in particular the Advanced Metering Infrastructure (AMI) service, which comes into prominence since its outputs are imperative inputs of most smart grid applications. In this research work, the AMI communication topology and data traffic are characterized. A corresponding simulation environment is then built. Thereafter, various placement and replacement strategies are compared in a simulation study, to be further able to propose a suitable cache placement and replacement combination for AMI in the Smart Grid.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_15-Performance_Analysis_of_In_Network_Caching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Path Planning using RRT* based Approaches: A Survey and Future Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071114</link>
        <id>10.14569/IJACSA.2016.071114</id>
        <doi>10.14569/IJACSA.2016.071114</doi>
        <lastModDate>2016-12-01T10:57:09.0730000+00:00</lastModDate>
        
        <creator>Iram Noreen</creator>
        
        <creator>Amna Khan</creator>
        
        <creator>Zulfiqar Habib</creator>
        
        <subject>optimal path; mobile robots; RRT*; sampling based planning; survey; future directions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Optimal path planning refers to finding the collision-free, shortest, and smooth route between start and goal positions. This task is essential in many robotic applications, such as autonomous cars, surveillance operations, agricultural robots, and planetary and space exploration missions. Rapidly-exploring Random Tree Star (RRT*) is a renowned sampling-based planning approach. It has gained immense popularity due to its support for high-dimensional complex problems. A significant body of research has addressed the problem of optimal path planning for mobile robots using RRT*-based approaches. However, no updated survey of RRT*-based approaches is available. Considering the rapid pace of development in this field, this paper presents a comprehensive review of RRT*-based path planning approaches. Current issues relevant to noticeable advancements in the field are investigated, and the discussion concludes with challenges and future research directions.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_14-Optimal_Path_Planning_using_RRT_based_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Multiple Seasonal Holt-Winters Exponential Smoothing to Predict Cloud Resource Provisioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071113</link>
        <id>10.14569/IJACSA.2016.071113</id>
        <doi>10.14569/IJACSA.2016.071113</doi>
        <lastModDate>2016-12-01T10:57:09.0430000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>auto-scaling; cloud computing; cloud resource scaling; holt-winters exponential smoothing; resource provisioning; virtualized resources</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Elasticity is one of the key features of cloud computing that attract many SaaS providers seeking to minimize the cost of their services. Cost is minimized by automatically provisioning and releasing computational resources depending on actual computational needs. However, the delay in starting up new virtual resources can cause Service Level Agreement violations. Consequently, predicting cloud resource provisioning has gained a lot of attention as a means of scaling computational resources in advance. However, most current approaches do not consider multi-seasonality in cloud workloads. This paper proposes a cloud resource provisioning prediction algorithm based on the Holt-Winters exponential smoothing method, extending it to model cloud workloads with multi-seasonal cycles. The prediction accuracy of the proposed algorithm has been improved by employing the Artificial Bee Colony algorithm to optimize its parameters. The performance of the proposed algorithm has been evaluated and compared with double and triple exponential smoothing methods. Our results show that the proposed algorithm outperforms the other methods.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_13-Using_Multiple_Seasonal_Holt_Winters_Exponential_Smoothing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Risk Assessment of Cloud Computing Services in a Networked Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071112</link>
        <id>10.14569/IJACSA.2016.071112</id>
        <doi>10.14569/IJACSA.2016.071112</doi>
        <lastModDate>2016-12-01T10:57:09.0270000+00:00</lastModDate>
        
        <creator>Eli WEINTRAUB</creator>
        
        <creator>Yuval COHEN</creator>
        
        <subject>Cloud Computing; Risk Management; Information Security; Cloud Risks; Software as a service; Platform as a service; Infrastructure as a service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Different cloud computing service providers offer their customers services with different risk levels. Customers wish to minimize their risks for a given expenditure or investment. This paper concentrates on the consumers&#39; point of view. Cloud computing services are organized according to a hierarchy of software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles that include the software, platform, and infrastructure services; they also offer platform services bundled with infrastructure services. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. The underlying assumption in this paper is the existence of a free competitive market, in which consumers are free to switch their services among providers. The proposed model is aimed at the potential customer who wishes to compare the risks of cloud service bundles offered by providers. The article identifies the major components of risk at each level of cloud computing services, and a computational scheme is offered to assess the overall risk on a common scale.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_12-Security_Risk_Assessment_of_Cloud_Computing_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study Between the Capabilities of MySQl  Vs. MongoDB as a Back-End for an Online Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071111</link>
        <id>10.14569/IJACSA.2016.071111</id>
        <doi>10.14569/IJACSA.2016.071111</doi>
        <lastModDate>2016-12-01T10:57:09.0130000+00:00</lastModDate>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Ioana Andrada Olah</creator>
        
        <creator>Livia Bandici</creator>
        
        <subject>MySQL; relational database; MongoDB; non-relational database; comparative study</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>In this article we present a comparative study between the capabilities of MongoDB, a non-relational database, and MySQL, a relational database, as the back-end for an online platform. We also present the advantages of using a non-relational database, namely MongoDB, compared to a relational database, namely MySQL, integrated in an online platform that allows users to publish different articles, books, magazines, and so on, and also gives them the possibility to share their items online with other people. Nowadays, most applications have thousands of users performing operations simultaneously; thus, it takes more than one operation executed at a time to really see the differences between the two databases. This paper aims to highlight the differences between MySQL and MongoDB, integrated in an online platform, when various operations are executed in parallel by many users.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_11-A_Comparative_Study_Between_the_Capabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071110</link>
        <id>10.14569/IJACSA.2016.071110</id>
        <doi>10.14569/IJACSA.2016.071110</doi>
        <lastModDate>2016-12-01T10:57:08.9970000+00:00</lastModDate>
        
        <creator>Nasr addin Ahmed Salem Al-maweri</creator>
        
        <creator>Aznul Qalid Md Sabri</creator>
        
        <creator>Ali Mohammed Mansoor</creator>
        
        <subject>Rotation recovery; image watermarking; video watermarking; watermark extraction; robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. The methods introduced in the field vary from weak to robust according to how tolerant the method is in preserving the existence of the watermark in the presence of attacks. Rotation attacks applied to the watermarked media are among the serious attacks that many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. Its main job is to detect the geometrical distortion that happens to the watermarked image/image sequence, recover the distorted scene to its original state in a blind and automatic way, and then pass it on to the extraction procedure. The work is currently limited to recovering zero-padded rotations; handling images cropped after rotation is left as future work. The proposed algorithm is tested on top of an extraction component, and both the recovery accuracy and the accuracy of the extracted watermarks showed a high performance level.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_10-Automatic_Rotation_Recovery_Algorithm_for_Accurate_Digital_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework of Resource Management using Server Consolidation to Minimize Live Migration and Load Balancing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071109</link>
        <id>10.14569/IJACSA.2016.071109</id>
        <doi>10.14569/IJACSA.2016.071109</doi>
        <lastModDate>2016-12-01T10:57:08.9670000+00:00</lastModDate>
        
        <creator>Alexander Ngenzi</creator>
        
        <creator>Selvarani R</creator>
        
        <creator>Suchithra R</creator>
        
        <subject>Resource Management; Live Migration; Virtual Machine; Load Balancing; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Live migration is one of the essential operations that require more attention in order to address the high-variability problems of virtual machines. We review the existing techniques of resource management and find that there is little modeling aimed at solving this problem. The present paper introduces a novel framework that mainly targets a computationally effective resource management technique. The technique uses a stochastic modeling approach to design a new traffic management scheme that considers multiple traffic possibilities over VMs along with their switching states. Supported by an analytical modeling approach, the proposed technique offers efficient placement of virtual machines on physical servers, performs the computation of blocks, and achieves reduced resource usage. The study outcome was found to offer a potential reduction in live migration, a greater extent of VM mapping onto physical servers, and an increased level of capacity.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_9-Framework_of_Resource_Management_using_Server_Consolidation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mobile Device Software to Improve Construction Sites Communications &quot;MoSIC&quot;</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071108</link>
        <id>10.14569/IJACSA.2016.071108</id>
        <doi>10.14569/IJACSA.2016.071108</doi>
        <lastModDate>2016-12-01T10:57:08.9500000+00:00</lastModDate>
        
        <creator>Adel Khelifi</creator>
        
        <creator>Khaled Hesham Hyari</creator>
        
        <subject>Mobile application; Construction communication; Construction site; Construction information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Effective communication among project participants on construction sites is a real dilemma for construction project productivity. To improve the efficiency of participants in construction projects and achieve speedy delivery of these projects, this paper presents the development of a mobile application system to support construction site communication. The developed system is designed to enhance communication between home office employees, field office staff, and mobile users at construction sites. It has two components: a mobile application and a website. The mobile application component provides users with valuable features such as receiving site instructions, sending requests for interpretation, and retrieving information about projects, whereas the website component allows users such as home office employees to track project progress and find project locations. The developed system was tested first on emulators and then on Android devices, and afterwards on a highway improvement project. Through their mobile phones, site users are able to interact with field office and home office personnel, who use the web application to communicate with mobile users. It is expected that this work will contribute to facilitating communication on construction sites, which is much needed in this information-intensive sector.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_8-A_Mobile_Device_Software_to_Improve_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teachme, A Gesture Recognition System with Customization Feature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071107</link>
        <id>10.14569/IJACSA.2016.071107</id>
        <doi>10.14569/IJACSA.2016.071107</doi>
        <lastModDate>2016-12-01T10:57:08.9200000+00:00</lastModDate>
        
        <creator>Hazem Qattous</creator>
        
        <creator>Bilal Sowan</creator>
        
        <creator>Omar AlSheikSalem</creator>
        
        <subject>Microsoft Kinect&#174;; Gesture recognition system; Gesture customization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Many presentations these days are done with the help of a presentation tool. Lecturers at universities and researchers at conferences use such tools to order the flow of the presentation and to help audiences follow the presentation points. Presenters control the presentation tools using a mouse and keyboard, which keeps them beside the computer, close enough to the keyboard and mouse. This reduces the ability of the lecturer to move close to the audience and reduces eye contact with them. Moreover, such traditional techniques for controlling presentation tools lack communication naturalness. Several gesture recognition tools have been introduced as solutions to these problems. However, these tools require the user to learn specific gestures to control the presentation and/or the mouse. These specific gestures can be considered a gesture vocabulary for the gesture recognition system. This paper introduces a gesture recognition system, TeachMe, which controls the Microsoft PowerPoint presentation tool and the mouse pointer. TeachMe also has a gesture customization feature that allows users to customize some gestures according to their preference. TeachMe uses the Kinect device as an interface for capturing gestures. This paper specifically discusses in detail the techniques and factors taken into consideration in implementing the system and its customization feature.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_7-Teachme_A_Gesture_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Security Requirements Engineering: Towards a Comprehensive Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071106</link>
        <id>10.14569/IJACSA.2016.071106</id>
        <doi>10.14569/IJACSA.2016.071106</doi>
        <lastModDate>2016-12-01T10:57:08.8870000+00:00</lastModDate>
        
        <creator>Ilham Maskani</creator>
        
        <creator>Jaouad Boutahar</creator>
        
        <creator>Souha&#239;l El Ghazi El Houssa&#239;ni</creator>
        
        <subject>Security requirements; Requirements engineering; Security standards; Comparison; Risk assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Software’s security depends greatly on how a system was designed, so it’s very important to capture security requirements at the requirements engineering phase. Previous research proposes different approaches, but each looks at the same problem from a different perspective, such as the user, the threat, or the goal perspective. This creates huge gaps between them in terms of the terminology used and the steps followed to obtain security requirements. This research aims to define an approach as comprehensive as possible, incorporating the strengths and best practices found in existing approaches and filling the gaps between them. To achieve that, relevant literature reviews were studied and primary approaches were compared to find their common and divergent traits. To guarantee comprehensiveness, a documented comparison process was followed, and the outline of our approach was derived from this comparison. As a result, it reconciles different perspectives on security requirements engineering by including the identification of stakeholders, assets, and goals, tracing them later to the elicited requirements, performing risk assessment in conformity with standards, and performing requirements validation. It also includes the use of modeling artifacts to describe threats, risks, or requirements, and defines a common terminology.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_6-Analysis_of_Security_Requirements_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Japanese Dairy Cattle Productivity Analysis using Bayesian Network Model (BNM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071105</link>
        <id>10.14569/IJACSA.2016.071105</id>
        <doi>10.14569/IJACSA.2016.071105</doi>
        <lastModDate>2016-12-01T10:57:08.8570000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Kenji Endo</creator>
        
        <creator>Osamu Fukuda</creator>
        
        <creator>Kohei Arai</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kenichi Yamashita</creator>
        
        <subject>Bayesian Network Model; BCS; Postpartum Interval; Parity Number; Estrous Cycle; Cattle Productivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Japanese dairy cattle productivity analysis is carried out based on a Bayesian Network Model (BNM). Through an experiment with 280 Japanese anestrus Holstein dairy cows, it is found that estimation of the presence of the estrous cycle using BNM achieves almost 55% accuracy when all samples are considered. In contrast, almost 73% estimation accuracy could be achieved when using suspended likelihood on the sample datasets. Moreover, when the proposed BNM has higher confidence, the estimation accuracy lies between 93% and 100%. In addition, this research also reveals the optimal factors for finding the presence of the estrous cycle among the 270 individual dairy cows. The objective estimation method using BNM offers a distinctive way to overcome the error of subjective estimation of the estrous cycle among these Japanese dairy cattle.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_5-Japanese_Dairy_Cattle_Productivity_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sales Forecasting Model in Automotive Industry using Adaptive Neuro-Fuzzy Inference System(Anfis) and Genetic Algorithm(GA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071104</link>
        <id>10.14569/IJACSA.2016.071104</id>
        <doi>10.14569/IJACSA.2016.071104</doi>
        <lastModDate>2016-12-01T10:57:08.8270000+00:00</lastModDate>
        
        <creator>Amirmahmood Vahabi</creator>
        
        <creator>Shahrooz Seyyedi Hosseininia</creator>
        
        <creator>Mahmood Alborzi</creator>
        
        <subject>Sales Forecasting; Adaptive Neuro-fuzzy inference system (Anfis); Genetic Algorithm (GA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Nowadays, sales forecasting is vital for any business in a competitive atmosphere. For accurate forecasting, the correct variables should be considered. In this paper, we address these problems and propose a technique that combines two artificial intelligence algorithms in order to forecast future automobile sales in the Saipa group, a leading automobile manufacturer in Iran. Anfis is used as the base technique and is combined with GA, which is used to tune the Anfis results. Furthermore, sales forecasting is performed with annual data for the years between 1990 and 2016. With this in mind, per capita income, inflation rate, housing, importation, currency rate (USD), loan interest rate, and automobile import tariffs are selected as the effective variables in the proposed model. Finally, we compare our model with the ANN model, a well-known forecasting model.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_4-A_Sales_Forecasting_Model_in_Automotive_Industry_using_Adaptive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vismarkmap – A Web Search Visualization Technique through Visual Bookmarking Approach with Mind Map Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071103</link>
        <id>10.14569/IJACSA.2016.071103</id>
        <doi>10.14569/IJACSA.2016.071103</doi>
        <lastModDate>2016-12-01T10:57:08.8100000+00:00</lastModDate>
        
        <creator>Abdullah Al-Mamun</creator>
        
        <creator>Sheak Rashed Haider Noori</creator>
        
        <subject>hci; visual bookmark; information retrieval; mind map; visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>Due to the massive growth of information on the Internet, bookmarking has become the most popular technique for keeping track of websites, with the expectation of easily finding previously searched websites whenever they are needed. However, present browser bookmark systems and online social bookmarking websites do not let users manage their desired searches with an appropriate method that would allow them to easily recognize or recall a previously searched website and its content from the bookmark whenever they are in need. In this paper, a new bookmarking approach is proposed that lets users organize their bookmarks using a mind map, a scientifically approved mental model, with features that help users easily recall previously searched website information from their bookmarks and minimize the tendency to revisit or re-search the website using search engines. Basically, the proposed system is more than a mind map, as it provides more flexibility for organizing the bookmarks.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_3-Vismarkmap_A_Web_Search_Visualization_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OWLMap: Fully Automatic Mapping of Ontology into Relational Database Schema</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071102</link>
        <id>10.14569/IJACSA.2016.071102</id>
        <doi>10.14569/IJACSA.2016.071102</doi>
        <lastModDate>2016-12-01T10:57:08.7800000+00:00</lastModDate>
        
        <creator>Humaira Afzal</creator>
        
        <creator>Mahwish Waqas</creator>
        
        <creator>Tabbassum Naz</creator>
        
        <subject>Semantic Web; Ontology; Database; Mapping; OWL; Jena API</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>The semantic web has become a prominent issue in the current research era. There must be an automated approach to transform ontology constructs into a relational database so that they can be queried efficiently. Previous research work on transforming RDF/OWL concepts into relational databases contains flaws in the complete transformation of ontology constructs. Some researchers claim that their transformation technique is entirely automated; however, their mapping approach is incomplete and misses essential OWL constructs. This paper presents a tool called OWLMap that is fully automatic and provides a lossless approach for transforming an ontology into relational database format. A number of experiments have been performed on ontology-to-relational-database transformation. The experiments show that the proposed approach is fully automatic, effective, and quick. OWLMap is based on an approach that is lossless in that it does not lose data, data types, or structure.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_2-OWLMap_Fully_Automatic_Mapping_of_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BITRU: Binary Version of the NTRU Public Key Cryptosystem via Binary Algebra</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071101</link>
        <id>10.14569/IJACSA.2016.071101</id>
        <doi>10.14569/IJACSA.2016.071101</doi>
        <lastModDate>2016-12-01T10:57:08.7300000+00:00</lastModDate>
        
        <creator>Nadia M.G. Alsaidi</creator>
        
        <creator>Hassan R. Yassein</creator>
        
        <subject>NTRU; BITRU; polynomial ring; binary algebra</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(11), 2016</description>
        <description>New terms such as the closest vector problem (CVP) and the shortest vector problem (SVP), which have been shown to be NP-hard, emerged, leading to new hope for designing public key cryptosystems based on lattice hardness. A cryptosystem called NTRU has proven computationally efficient and can be implemented at low cost. With these characteristics, NTRU possesses an advantage over other systems that rely on number-theoretical problems in a finite field (e.g., the integer factorization problem or the discrete logarithm problem). These advantages make NTRU a good choice for many applications. After the adoption of NTRU, many attempts to generalize its algebraic structure have appeared. In this study, a new variant of the NTRU public key cryptosystem called BITRU is proposed. BITRU is based on a new algebraic structure called binary algebra, used as an alternative to the NTRU mathematical structure; this structure is commutative and associative. Establishing two public keys distinguishes the proposed system from NTRU and NTRU-like cryptosystems, and this new structure helps to increase the security and complexity of BITRU. The components of BITRU, which include key generation, encryption, decryption, and decryption failure, are explained in detail. The suitability of the proposed system is proven and its security is demonstrated by comparing it with NTRU.</description>
        <description>http://thesai.org/Downloads/Volume7No11/Paper_1-BITRU_Binary_Version_of_the_NTRU_Public_Key_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071007</link>
        <id>10.14569/IJACSA.2016.071007</id>
        <doi>10.14569/IJACSA.2016.071007</doi>
        <lastModDate>2016-11-18T11:39:48.3770000+00:00</lastModDate>
        
        <creator>Hiqmat Nisa</creator>
        
        <creator>Salma Imtiaz</creator>
        
        <creator>Muhammad Uzair Khan</creator>
        
        <creator>Saima Imtiaz</creator>
        
        <subject>Domain Model; UML; Experiment; Noun Phrasing Technique; Category List Technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The Unified Modeling Language (UML) is widely used to analyze and design different software development artifacts in object oriented development. The domain model is a significant artifact that models the problem domain and visually represents real world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories that are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness, and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e., extra concepts, associations, and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort compared to the category list technique. There is no statistically significant difference between the two techniques in the case of correctness.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_7-Impact_of_Domain_Modeling_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Solar Photovoltaic Cells for Telecommunication Cellular Network in Remote Areas of Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051105</link>
        <id>10.14569/IJARAI.2016.051105</id>
        <doi>10.14569/IJARAI.2016.051105</doi>
        <lastModDate>2016-11-10T12:12:29.4870000+00:00</lastModDate>
        
        <creator>Abdul Ghayur</creator>
        
        <creator>Sanaullah Ahmad</creator>
        
        <creator>Manzoor Ahmad</creator>
        
        <subject>Network performance parameters, Power on distribution system, On-grid solar systems, Latest technology evolution</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(11), 2016</description>
        <description>In this research, the design and implementation of solar photovoltaic cells is carried out for the base transceiver station (BTS) of telecom cellular networks in remote areas of Pakistan. To accomplish this task, an investigation is conducted into the present alternate power source of the BTS, namely the generator sets used as stand-by, prime, and t-prime sources. This research shows that generator set fuel consumption and maintenance costs are considerably high, and the cellular company has to pay a great deal to keep a site on air and to overcome connectivity issues. To resolve these issues, this research aims to implement solar technology on the BTS. For this purpose, the BTS rectifier system is explored, and it is suggested to use power on distribution systems 16 (PODS 16), based on latest technology evolution (LTE), instead of the simple BTS rectifier. This new rectifier is intelligent and has redundant ways to overcome power issues, as it can work directly on solar panel equipment and requires a DC supply. Another important factor is that the solar panels recharge batteries for power backup, keeping the site on air during night time. Cost comparisons of solar and generator sets have been carried out using real data from sites in different remote areas, and it is concluded that solar is a low-cost, environmentally friendly alternate source of energy for the BTS and can be implemented for both off-grid and on-grid systems.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No11/Paper_5-Performance_Analysis_of_Solar_Photovoltaic_Cells.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contribution to Securing Connections in a Communications Network: Modeling and Conception of a Fraud Detector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051104</link>
        <id>10.14569/IJARAI.2016.051104</id>
        <doi>10.14569/IJARAI.2016.051104</doi>
        <lastModDate>2016-11-10T12:12:29.4400000+00:00</lastModDate>
        
        <creator>Souad EZZBADY</creator>
        
        <creator>Abdelwahed NAMIR</creator>
        
        <subject>Directed graph database; strongly connected components; Security Management; Theorem graph; communication network.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(11), 2016</description>
        <description>With the explosion in the volume of data, it has become essential for businesses and their managers to implement new tools that detect, in real time, unusual changes in their communications networks in order to address all security holes. In this sense, one of the most recently used solutions is the migration from relational databases to directed graph databases, thanks to their capacity to manage huge and complex databases and their ease of security management. This work is situated in that context. Its objective is, firstly, to model the data as a graph, with nodes representing users and arcs representing the connections between users; and secondly, to monitor the links between the different nodes to facilitate the task of whoever handles this data, with the ultimate goal of detecting cases of fraud. Indeed, the aim is to propose the modeling and conception of a technique to improve communication network management in order to monitor and report real-time alerts in the event of fraud.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No11/Paper_4-Contribution to Securing Connections in a Communications Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for NIR Reflectance Estimation with Visible Camera Data based on Regression for NDVI Estimation and its Application for Insect Damage Detection of Rice Paddy Fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051103</link>
        <id>10.14569/IJARAI.2016.051103</id>
        <doi>10.14569/IJARAI.2016.051103</doi>
        <lastModDate>2016-11-10T12:12:29.3800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kenji Gondoh</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <subject>Rice crop; Rice leaf; Nitrogen content; Protein content; NIR reflectance; Water content; Size of rice leaves; Weight of rice crops</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(11), 2016</description>
        <description>A method for near infrared (NIR) reflectance estimation with visible camera data, based on regression for Normalized Difference Vegetation Index (NDVI) estimation, is proposed together with its application to insect damage detection in rice paddy fields. Through experiments at rice paddy fields situated at the Saga Prefectural Agriculture Research Institute (SPARI) in Saga city, Kyushu, Japan, it is found that there is a high correlation between NIR reflectance and green color reflectance. Therefore, it is possible to estimate NIR reflectance from visible camera data, which in turn makes it possible to estimate NDVI with drone-mounted visible camera data. As the protein content in rice crops is well known to be highly correlated with the NIR intensity, or reflectance, of rice leaves, it is possible to estimate rice crop quality with drone-based visible camera data.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No11/Paper_3-Method_for_NIR_Reflectance_Estimation_with_Visible_Camera_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Aerosol Parameter Estimation Method Utilizing Solar Direct and Diffuse Irradiance Measuring Instrument without Sun Tracking Mechanics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051102</link>
        <id>10.14569/IJARAI.2016.051102</id>
        <doi>10.14569/IJARAI.2016.051102</doi>
        <lastModDate>2016-11-10T12:12:29.3630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Aerosol; Atmospheric optical depth; Solar irradiance; Solar direct; Solar diffuse; Aereole; Junge parameter; Size distribution; Real and imaginary parts of refractive index</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(11), 2016</description>
        <description>An estimation method for aerosol parameters by means of solar direct and diffuse irradiance measurements with the proposed instrument, a fiber-ball radiometer without sun tracking mechanics, is proposed. The sky-radiometer and the aureole-meter are well-known instruments that allow measurements of solar direct and diffuse irradiance for the estimation of aerosol parameters. The proposed fiber-ball radiometer also allows solar direct and diffuse irradiance measurements, is comparatively light, and contains no mechanics, so it is portable and can be brought anywhere, including test sites for field campaigns. However, the fiber-ball instrument does not always point to the sun, due to errors in the sun ephemeris calculations and mismatches between the calculated and actual sun track. The influence of pointing error on aerosol parameter estimation error is clarified: the maximum possible pointing error may cause some error in aerosol parameter estimation. Experimental results show that root mean square errors of 0.43%, 42.23%, and 2.12% are expected for the real and imaginary parts of the refractive index and the Junge parameter, whose values are 1.747, 0.0056, and 3.0 in a typical atmosphere.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No11/Paper_2-Aerosol_Parameter_Estimation_Method_Utilizing_Solar_Direct_and_Diffuse_Irradiance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Aerosol Parameter Estimation Error Analysis - Consideration of Noises Included in the Measured Solar Direct and Diffuse Irradiance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051101</link>
        <id>10.14569/IJARAI.2016.051101</id>
        <doi>10.14569/IJARAI.2016.051101</doi>
        <lastModDate>2016-11-10T12:12:29.3000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Aerosol; Atmospheric optical depth; Solar irradiance; Solar direct; Solar diffuse; Aereole; Junge parameter; Size distribution; Real and imaginary parts of refractive index</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(11), 2016</description>
        <description>A method for the error analysis of aerosol parameter estimation with assumed noise included in the measured solar direct and diffuse irradiance is proposed. The noise included in the measured solar direct and diffuse irradiance is assumed to follow a Chi-square probability density function, because the measured irradiance is represented as output power, which is the square of the output voltage; the voltage corresponding to the measured irradiance is assumed to follow a normal probability density function. The aerosol parameters (the refractive index, which consists of real and imaginary parts, and the size distribution, which is represented by the Junge parameter) are estimated from the measured solar direct and diffuse irradiance, which corresponds to the acquired output power of the measuring instrument. Through experiments with the measured solar direct and diffuse irradiance, it is found that the estimation accuracy of the imaginary part of the aerosol refractive index is the most sensitive to the added noise, followed by the size distribution (Junge parameter) and the real part of the aerosol refractive index.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No11/Paper_1-Method_for_Aerosol_Parameter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feasibility Study of Optical Spectroscopy as a Medical Tool for Diagnosis of Skin Lesions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071052</link>
        <id>10.14569/IJACSA.2016.071052</id>
        <doi>10.14569/IJACSA.2016.071052</doi>
        <lastModDate>2016-11-04T08:02:51.0730000+00:00</lastModDate>
        
        <creator>Asad Saf</creator>
        
        <creator>Sheikh Ziauddin</creator>
        
        <creator>Alexander Horsch</creator>
        
        <creator>Mahzad Ziai</creator>
        
        <creator>Victor Castaneda</creator>
        
        <creator>Tobias Lasser</creator>
        
        <creator>Nassir Navab</creator>
        
        <subject>Melanoma; Classification; Supervised Learning; Computer-Aided Diagnosis; Machine Learning; Optical Spectroscopy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Skin cancer is one of the most frequently encountered types of cancer in the Western world. According to the Skin Cancer Foundation statistics, one in every five Americans develops skin cancer during his/her lifetime. Today, the incurability of advanced cutaneous melanoma raises the importance of its early detection. Since the differentiation of early melanoma from other pigmented skin lesions is not a trivial task, even for experienced dermatologists, computer aided diagnosis could become an important tool for reducing the mortality rate of this highly malignant cancer type.
In this paper, a computer aided diagnosis system based on machine learning is proposed in order to support the clinical use of optical spectroscopy for skin lesion quantification and classification. The focus is on a feasibility study of optical spectroscopy as a medical tool for diagnosis. To this end, data acquisition protocols for optical spectroscopy are defined and a detailed analysis of feature vectors is performed. Different techniques for supervised and unsupervised learning are explored on clinical data collected from patients with malignant and benign skin lesions.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_52-Feasibility_Study_of_Optical_Spectroscopy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Mobile Phones and Web Sites for Academic Information Needs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071051</link>
        <id>10.14569/IJACSA.2016.071051</id>
        <doi>10.14569/IJACSA.2016.071051</doi>
        <lastModDate>2016-11-04T08:02:50.9930000+00:00</lastModDate>
        
        <creator>Muhammad Farhan</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Muhammad Ali Nizamani</creator>
        
        <creator>Amnah Firdous</creator>
        
        <creator>Hina Asmat</creator>
        
        <subject>Usability Engineering; Smart Phones; Academic information need</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In the last decade, there has been exponential growth in the use of mobile phones. The invention of the smart phone has digitized the life of the common man, especially since the introduction of 3G/4G technology. Because of this advancement in technology, people are accustomed to using the Internet on the move. This advancement has also motivated usability design researchers to propose more usable designs for both smart phones and web sites. This work focuses on the evaluation of the web usability of mobile phones as well as the usability of university web sites. The evaluation is performed on the most popular mobile phones used by the most common mobile users. The selection of the most popular mobile devices, the most common mobile users, and their web usage is done by conducting a detailed survey of the local market. The survey concludes that students and laborers are the most common buyers of mobile phones, and we choose three mobile phones from the category of most popular phones: iPhone (specifically the iPhone 4), Q-Mobile (Q-Mobile A35), and Windows phone (Lumia 535). Six participants (three males and three females) are selected for detailed and rigorous task-based usability testing with the "think aloud" technique. Task scenarios are defined to evaluate the usability of both the smart phones and the chosen university web sites. From the results of the usability testing, we find that the iPhone has the best usability design as far as response time is concerned, while Q-Mobile ranks second and the Microsoft Windows phone takes last position in this ranking.
Usability evaluation of the university web sites on these mobile phones concludes that the web site of the Islamia University of Bahawalpur (IUB, Bahawalpur) has the best mobile usability design, with Bahauddin Zakariya University (BZU, Multan) and the NFC Institute of Engineering and Technology (NFCIET, Multan) second and third respectively, while the web site of the Institute of Southern Punjab (ISP, Multan) comes last when measured in terms of task completion time. All tests are performed on a wireless network with an Internet download speed between 3 Mbps and 3.2 Mbps.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_51-Evaluating_Mobile_Phones_and_Web_Sites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Frame Size Adjustment with Sub-Frame Observation for Dynamic Framed Slotted Aloha</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071050</link>
        <id>10.14569/IJACSA.2016.071050</id>
        <doi>10.14569/IJACSA.2016.071050</doi>
        <lastModDate>2016-11-02T08:19:22.5670000+00:00</lastModDate>
        
        <creator>Robithoh Annur</creator>
        
        <creator>Suvit Nakpeerayuth</creator>
        
        <subject>anti-collision tag identification; RFID; Framed slotted Aloha; frame adjustment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In this paper, a simple frame size adjustment of dynamic framed slotted Aloha for tag identification in RFID networks is proposed. In dynamic framed slotted Aloha, the reader is required to announce the frame size for every frame. To achieve maximum system efficiency, it is essential to set the frame size appropriately according to the number of unidentified tags. The proposed approach utilizes the information from a portion of the frame to adjust the size of the next frame. Simulation results show that a smaller number of observed slots results in faster frame adjustment and higher throughput. Compared to existing anti-collision algorithms, the proposed approach achieves higher throughput and a higher identification rate.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_50-A_Frame_Size_Adjustment_with_Sub_Frame.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Inter-and Intra-Subject Variability of P300 Spelling Dictionary in EEG Compressed Sensing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071049</link>
        <id>10.14569/IJACSA.2016.071049</id>
        <doi>10.14569/IJACSA.2016.071049</doi>
        <lastModDate>2016-11-02T08:19:22.4730000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <creator>Liviu Goras</creator>
        
        <subject>Biomedical signal processing; Brain-computer interfaces; Compressed sensing; Classification algorithms; Electroencephalography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In this paper, we propose a new compression method for electroencephalographic signals based on the concept of compressed sensing (CS) for the P300 detection spelling paradigm. The method uses a universal mega-dictionary which has been found not to be patient-specific. To validate the proposed method, electroencephalography recordings from the spelling competition, BCI Competition III Challenge 2005 - Dataset II, have been used. To evaluate the reconstructed signal, both quantitative and qualitative measures were used. For qualitative evaluation, we used the classification rate for the observed character based on P300 detection in the case of the spelling paradigm applied to the reconstructed electroencephalography signals, using the winning scripts (by Alain Rakotomamonjy and Vincent Guigue). For quantitative evaluation, distortion measures between the reconstructed and original signals were used.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_49-Comparison_of_Inter_and_Intra_Subject_Variability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FNN based Adaptive Route Selection Support System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071048</link>
        <id>10.14569/IJACSA.2016.071048</id>
        <doi>10.14569/IJACSA.2016.071048</doi>
        <lastModDate>2016-11-01T13:23:03.9600000+00:00</lastModDate>
        
        <creator>Saoreen Rahman</creator>
        
        <creator>M. Shamim Kaiser</creator>
        
        <creator>Mahtab Uddin Ahmmed</creator>
        
        <subject>GPS; Fuzzy Neural Network; Path delay; Signal Point Delay; Webster Delay Formula</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>This paper presents a Fuzzy Neural Network (FNN) based Adaptive Route Selection Support System (ARSSS) for assisting vehicle drivers. The aim of the proposed ARSSS is to select the path with the shortest possible travel time. The proposed system takes in traffic information, such as the volume-to-capacity ratio, traffic flow, vehicle queue length, green cycle length, and passenger car unit, using different types of sensor nodes, remote servers, and CCTVs, while road information, such as path length, signalized junctions, and intersection points between a source-destination pair, is captured using the GPS service. An FNN is employed to select an optimal path having the shortest time. The input parameters of the FNN are distance, signal point delay, road type, and traffic flow, whereas the output parameter is the path selection probability, which paves the way to identifying the most suitable path. The simulation results reveal that the FNN-based ARSSS is more accurate in estimating path delay than other route selection support systems based on the Webster delay model and an artificial neural network (ANN).</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_48-FNN_based_Adaptive_Route_Selection_Support_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Morphological Relatedness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071047</link>
        <id>10.14569/IJACSA.2016.071047</id>
        <doi>10.14569/IJACSA.2016.071047</doi>
        <lastModDate>2016-11-01T13:23:03.9470000+00:00</lastModDate>
        
        <creator>Ahmed Khorsi</creator>
        
        <creator>Abeer Alsheddi</creator>
        
        <subject>Arabic Language; Computational Linguistics; Morphological Relatedness; Semitic Morphology; Unsupervised Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Assessment of the similarities between texts has been studied for decades from different perspectives and for several purposes. One interesting perspective is morphology. This article reports the results of a study on the assessment of the morphological relatedness between natural language words. The main idea is to adapt a formal string alignment algorithm, namely Needleman-Wunsch's, to accommodate the statistical characteristics of the words in order to approximate how similar the linguistic morphologies of two words are. The approach is unsupervised from end to end, and the experiments show an nDCG reaching 87% and an r-precision reaching 81%.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_47-Unsupervised_of_Morphological_Relatedness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Multi-Stage Intrusion Detection using IP Flow Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071046</link>
        <id>10.14569/IJACSA.2016.071046</id>
        <doi>10.14569/IJACSA.2016.071046</doi>
        <lastModDate>2016-11-01T13:23:03.9130000+00:00</lastModDate>
        
        <creator>Muhammad Fahad Umer</creator>
        
        <creator>Muhammad Sher</creator>
        
        <creator>Imran Khan</creator>
        
        <subject>IP flows; Multi-stage intrusion detection; One-class classification; Multi-class classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Traditional network-based intrusion detection systems using deep packet inspection are not feasible for modern high-speed networks due to slow processing and the inability to read encrypted packet content. As an alternative to packet-based intrusion detection, researchers have focused on flow-based intrusion detection techniques. Flow-based intrusion detection systems analyze IP flow records, which contain summarized traffic information, for attack detection. However, flow data is very large in high-speed networks and cannot be processed in real time by the intrusion detection system. In this paper, an efficient multi-stage model for intrusion detection using IP flow records is proposed. The first stage of the model classifies traffic as normal or malicious. The malicious flows are further analyzed by a second stage, which associates an attack type with the malicious IP flows. The proposed multi-stage model is efficient because the majority of IP flows are discarded in the first stage and only malicious flows are examined in detail. We also describe the implementation of our model using machine learning techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_46-Towards_Multi_Stage_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Emotional Analysis of Arabic Tweets at Multiple Levels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071045</link>
        <id>10.14569/IJACSA.2016.071045</id>
        <doi>10.14569/IJACSA.2016.071045</doi>
        <lastModDate>2016-11-01T13:23:03.8830000+00:00</lastModDate>
        
        <creator>Amr M. Sayed</creator>
        
        <creator>Samir AbdelRahman</creator>
        
        <creator>Reem Bahgat</creator>
        
        <creator>Aly Fahmy</creator>
        
        <subject>Emotional Analysis; Sentiment Analysis; Clustering; Two-Step Classification; Time Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Sentiment and emotional analyses have recently become effective tools to discover people's attitudes towards real-life events. While many corners of emotional analysis research have been explored, time emotional analysis at the expression and aspect levels is yet to be intensively studied. This paper aims to analyse people's emotions from tweets extracted during the Arab Spring and the recent Egyptian Revolution. Analysis is done at the tweet, expression, and aspect levels. In this research, we only consider the surprise, happiness, sadness, and anger emotions, in addition to sarcasm expression. We propose a time emotional analysis framework that consists of four components, namely annotating tweets, classifying at the tweet/expression levels, clustering on some aspects, and analysing the distributions of people's emotions, expressions, and aspects over a specific time. Our contribution is two-fold. First, our framework effectively analyses people's emotional trends over time at different levels of granularity (tweets, expressions, and aspects) while being easily adaptable to other languages. Second, we developed a lightweight clustering algorithm that utilizes the short length of tweets. On this problem, the developed clustering algorithm achieved higher results compared to state-of-the-art clustering algorithms. Our approach achieved a 70.1% F-measure in classification, compared to 85.4%, which is the state-of-the-art result on English. Our approach also achieved 61.45% purity in clustering.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_45-Time_Emotional_Analysis_of_Arabic_Tweets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Requirements Conflict Identification: Review and Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071044</link>
        <id>10.14569/IJACSA.2016.071044</id>
        <doi>10.14569/IJACSA.2016.071044</doi>
        <lastModDate>2016-11-01T13:23:03.8670000+00:00</lastModDate>
        
        <creator>Maysoon Aldekhail</creator>
        
        <creator>Azzedine Chikh</creator>
        
        <creator>Djamal Ziani</creator>
        
        <subject>software requirements; requirements engineering; requirements conflicts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Successful development of software systems requires a set of complete, consistent and clear requirements. A wide range of different stakeholders with various needs and backgrounds participate in the requirements engineering process. Accordingly, it is difficult to completely satisfy the requirements of each and every stakeholder. It is the requirements engineer’s job to trade off stakeholders’ needs against the project resources and constraints. Many studies assert that failure in understanding and managing requirements in general, and requirement conflicts in particular, is one of the main causes of exceeding cost and allocated time, which in turn results in project failure.
This paper aims at investigating the different causes of requirements conflicts and the different types of requirements conflicts. It provides an overview of existing research works on identifying conflicts and discusses their limitations in order to yield suggestions for improvement.
Objective: To provide an overview of existing research studies on identifying software requirements conflicts and to identify limitations and areas for improvement.
Method: A comparative literature review was conducted by assessing 20 studies dated from 2001 to 2014.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_44-Software_Requirements_Conflict_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recovering and Tracing Links between Software Codes and Test Codes of the Open Source Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071043</link>
        <id>10.14569/IJACSA.2016.071043</id>
        <doi>10.14569/IJACSA.2016.071043</doi>
        <lastModDate>2016-11-01T13:23:03.8370000+00:00</lastModDate>
        
        <creator>Amir Hossein Rasekh</creator>
        
        <creator>Amir Hossein Arshia</creator>
        
        <creator>Seyed Mostafa Fakhrahmad</creator>
        
        <creator>Mohammad Hadi Sadreddini</creator>
        
        <subject>Unit Testing; Source Code; Similarity; Software Engineering; Open Source; Data Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>One of the most important issues in the design and implementation of software is the functionality of the designed system. Despite the impressive efforts of different software teams, the primary concern of the developers is the proper and error-free functioning of the whole system. Therefore, various tests are defined and designed to help software teams produce error-free software, or software with a minimal error rate. It is difficult but important to find a proper link between a written test class and the class under test. Discovering these links helps programmers perform regression testing more efficiently. In this paper, we propose a model for the recovery of traceability links between test classes and the classes under test. The presented model comprises four sections. Firstly, we retrieve the names of similar classes between the test class and the source class. Afterwards, we extract the complexity, cyclomatic, and design metrics from the source code and the test classes. Finally, after creating a training set, we apply data mining algorithms to find the potential relationships between unit tests and the classes under test. One of the advantages of this method is its language independence; furthermore, the preliminary results show that the proposed method has good performance.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_43-Recovering_and_Tracing_Links_between_Software_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inter Prediction Complexity Reduction for HEVC based on Residuals Characteristics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071042</link>
        <id>10.14569/IJACSA.2016.071042</id>
        <doi>10.14569/IJACSA.2016.071042</doi>
        <lastModDate>2016-11-01T13:23:03.8200000+00:00</lastModDate>
        
        <creator>Kanayah Saurty</creator>
        
        <creator>Pierre C. Catherine</creator>
        
        <creator>Krishnaraj M. S. Soyjaudah</creator>
        
        <subject>HEVC; inter prediction; early termination scheme; complexity reduction; prediction residuals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>High Efficiency Video Coding (HEVC), or H.265, is currently the latest standard in video coding. While this new standard promises improved performance over the previous H.264/AVC standard, its complexity has drastically increased due to the various new and improved tools added. The splitting of the 64x64 Largest Coding Unit (LCU) into smaller CU sizes, forming a quad-tree structure, involves a significant number of operations and comparisons, which imposes a high computational burden on the encoder. In addition, the improved Motion Estimation (ME) techniques used in HEVC inter prediction to ensure greater compression also contribute to the high encoding time. In this paper, a set of standard thresholds is identified based on the Mean Square (MS) of the residuals. These thresholds are used to terminate the CU splitting process and to skip some of the inter modes processing. In addition, CUs with large MS values are split at a very early stage. Experimental results show that the proposed method can effectively reduce the encoding time by 62.2% (70.8% for ME) on average, compared to HM 10, yielding a BD-Rate of only 1.14%.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_42-Inter_Prediction_Complexity_Reduction_for_HEVC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Internal Model Control Method for MIMO Over-Actuated Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071041</link>
        <id>10.14569/IJACSA.2016.071041</id>
        <doi>10.14569/IJACSA.2016.071041</doi>
        <lastModDate>2016-11-01T13:23:03.7900000+00:00</lastModDate>
        
        <creator>Ahmed Dhahri</creator>
        
        <creator>Imen Saidi</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <subject>internal model control (IMC); over-actuated multivariable system; inverse model; method of virtual outputs; disturbances rejections, stability; state error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>A new internal model control design method is proposed for multivariable over-actuated processes, which are often encountered in complicated industrial processes.
Because the matrix adopted to describe an over-actuated system is not square, many classical multivariable control methods can hardly be applied to such systems. In this paper, a new internal model control method based on the method of virtual outputs is proposed.
The proposed method is applied to the Shell standard control problem (3 inputs and 2 outputs). The simulation results show that the robust controller tracks the set inputs without overshoot or steady-state error, with good input tracking and disturbance rejection performance; these satisfactory results demonstrate the effectiveness and reliability of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_41-A_New_Internal_Model_Control_Method_for_MIMO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Algorithm based on Invasive Weed Optimization and Particle Swarm Optimization for Global Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071040</link>
        <id>10.14569/IJACSA.2016.071040</id>
        <doi>10.14569/IJACSA.2016.071040</doi>
        <lastModDate>2016-11-01T13:23:03.7430000+00:00</lastModDate>
        
        <creator>Zeynab Hosseini</creator>
        
        <creator>Ahmad Jafarian</creator>
        
        <subject>Invasive weed optimization; Particle Swarm Optimization; Global optimization; Hybrid algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In this paper, an effective combination of two metaheuristic algorithms, namely Invasive Weed Optimization and Particle Swarm Optimization, is proposed. This hybrid, called HIWOPSO, consists of two main phases: Invasive Weed Optimization (IWO) and Particle Swarm Optimization (PSO). Invasive Weed Optimization is a nature-inspired algorithm modelled on the colonizing behavior of weeds. Particle Swarm Optimization is a swarm-based algorithm that uses swarm intelligence to guide solutions towards the goal. IWO does not benefit from swarm intelligence, whereas PSO converges quickly to local optima. In order to benefit from swarm intelligence while avoiding entrapment in local optima, the new hybrid of IWO and PSO is proposed. To obtain the required results, experiments on a set of benchmark functions were performed and compared with other algorithms. The findings, based on non-parametric tests and statistical analysis, show that HIWOPSO is a preferable and effective method for solving high-dimensional functions.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_40-A_Hybrid_Algorithm_based_on_Invasive_Weed_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Improvement of Threshold based Audio Steganography using Parallel Computation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071039</link>
        <id>10.14569/IJACSA.2016.071039</id>
        <doi>10.14569/IJACSA.2016.071039</doi>
        <lastModDate>2016-11-01T13:23:03.7270000+00:00</lastModDate>
        
        <creator>Muhammad Shoaib</creator>
        
        <creator>Zakir Khan</creator>
        
        <creator>Danish Shehzad</creator>
        
        <creator>Tamer Dag</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Noor Ul Amin</creator>
        
        <subject>Steganography; LSB; Steganalysis; Parallel; Pipelining; Processing Efficient; Real time; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Audio steganography is used to hide secret information inside an audio signal for the secure and reliable transfer of information. Various steganography techniques have been proposed and implemented to ensure an adequate security level. The existing techniques focus on either the payload or security, but none of them ensures both security and payload at the same time. Data dependency in the existing solutions forced the steganography mechanism to execute serially. The audio data and secret data pre-processing were done, and the existing techniques were experimentally tested in Matlab, which confirmed the problem of inefficient execution. The proposed efficient least-significant-bit steganography scheme removes the pipelining hazard and computes the steganography in parallel on distributed-memory systems. This scheme ensures security and maintains the payload while providing an efficient solution. The results show that it not only ensures an adequate security level but also provides a better and more efficient solution.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_39-Performance_Improvement_of_Threshold_based_Audio_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Modeling in Simulation: A Representation that Assimilates Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071038</link>
        <id>10.14569/IJACSA.2016.071038</id>
        <doi>10.14569/IJACSA.2016.071038</doi>
        <lastModDate>2016-11-01T13:23:03.6970000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>events; flow; conceptual modeling; simulation; diagrammatic language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Simulation is often based on some type of model of the portion of the world being studied. The underlying model is a static description; the simulation itself is executed by generating events or dynamic aspects in the system. In this context, this paper focuses on conceptual modeling in simulation, which is considered the most important aspect of simulation modelling and, at the same time, the least understood. The paper proposes a new diagrammatic language as a modeling representation in simulation and as a basis for a theoretical framework for associated notions such as events and flows. Specifically, operational semantics using events to define fine-grained activities are assimilated into the representation, resulting in an integration of the static domain description and the dynamic chronology of events (the so-called process level). The resulting unified specification facilitates understanding of the simulation procedure and enhances understanding of basic notions such as things (entities), events, and flows (activities).</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_38-Conceptual_Modeling_in_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Inertia Weight Particle Swarm Optimization for Solving Nonogram Puzzles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071037</link>
        <id>10.14569/IJACSA.2016.071037</id>
        <doi>10.14569/IJACSA.2016.071037</doi>
        <lastModDate>2016-11-01T13:23:03.6800000+00:00</lastModDate>
        
        <creator>Habes Alkhraisat</creator>
        
        <creator>Hasan Rashaideh</creator>
        
        <subject>Non-Polynomial Complete problem; Nonograms puzzle; Swarm theory; Particle swarms; Optimization; Dynamic Inertia Weight</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Particle swarm optimization (PSO) has been shown to be a robust and efficient optimization algorithm; therefore, PSO has received increased attention in many research fields. This paper demonstrates the feasibility of applying Dynamic Inertia Weight Particle Swarm Optimization to solve a Non-Polynomial (NP) Complete puzzle. It presents a new approach to solving the Nonogram puzzle using Dynamic Inertia Weight Particle Swarm Optimization (DIW-PSO), which we propose in order to optimize the problem of finding a solution for the Nonogram puzzle. The experimental results demonstrate the suitability of the DIW-PSO approach for solving Nonogram puzzles and show that it is a promising method for NP-Complete puzzles.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_37-Dynamic_Inertia_Weight_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image Fusion Algorithm based on Local Average Energy-Motivated PCNN in NSCT Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071036</link>
        <id>10.14569/IJACSA.2016.071036</id>
        <doi>10.14569/IJACSA.2016.071036</doi>
        <lastModDate>2016-11-01T13:23:03.6500000+00:00</lastModDate>
        
        <creator>Huda Ahmed</creator>
        
        <creator>Emad N. Hassan</creator>
        
        <creator>Amr A. Badr</creator>
        
        <subject>Medical image fusion; pulse-coupled neural network; local average energy; non-subsampled contourlet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Medical Image Fusion (MIF) can significantly improve the performance of medical diagnosis, treatment planning and image-guided surgery by providing high-quality, information-rich medical images. Traditional MIF techniques suffer from common drawbacks such as contrast reduction, edge blurring and image degradation. Pulse-Coupled Neural Network (PCNN) based MIF techniques outperform the traditional methods in providing high-quality fused images due to their global coupling and pulse synchronization properties; however, the selection of significant features to motivate the PCNN is still an open problem and plays a major role in measuring the contribution of each source image to the fused image. In this paper, a medical image fusion algorithm is proposed based on the Non-subsampled Contourlet Transform (NSCT) and the Pulse-Coupled Neural Network (PCNN) to fuse images from different modalities. Local Average Energy is used to motivate the PCNN due to its ability to capture salient features of the image such as edges, contours and textures. The proposed approach produces a high-quality fused image with high contrast and improved content in comparison with other image fusion techniques, without loss of significant detail at both the visual and the quantitative levels.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_36-Medical_Image_Fusion_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Position-based Sentiment Classification Algorithm for Facebook Comments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071035</link>
        <id>10.14569/IJACSA.2016.071035</id>
        <doi>10.14569/IJACSA.2016.071035</doi>
        <lastModDate>2016-11-01T13:23:03.6330000+00:00</lastModDate>
        
        <creator>Khunishkah Surroop</creator>
        
        <creator>Khushboo Canoo</creator>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <subject>sentiment analysis; Facebook; cybercrime; emoticons</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>With the popularisation of social networks, people are now more at ease to share their thoughts, ideas, opinions and views about all kinds of topics on public platforms. Millions of users are connected each day on social networks and they often contribute to online crimes by their comments or posts through cyberbullying, identity theft, online blackmailing, etc. Mauritius has also registered a surge in the number of cybercrime cases during the past decade. In this study, a trilingual dataset of 1031 comments was extracted from public pages on Facebook. This dataset was manually categorised into four different sentiment classes: positive, negative, very negative and neutral, using a novel sentiment classification algorithm. Out of these 1031 comments, it was found that 97.8% of the very negative sentiments, 70.7% of the negative sentiments and 77.0% of the positive sentiments were correctly extracted. Despite the added complexity of our dataset, the accuracy of our system is slightly better than similar works in the field. The accuracy of the lexicon-based approach was also much higher than when we used machine learning techniques. The outcome of this research work can be used by the Mauritius Police Force to track down potential cases of cybercrime on social networks. Decisive actions can then be implemented in time.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_35-A_Novel_Position_based_Sentiment_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Association Rules Mining based on Analytic Network Process in Clinical Decision Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071034</link>
        <id>10.14569/IJACSA.2016.071034</id>
        <doi>10.14569/IJACSA.2016.071034</doi>
        <lastModDate>2016-11-01T13:23:03.6030000+00:00</lastModDate>
        
        <creator>Shakiba Khademolqorani</creator>
        
        <subject>Clinical Data Mining; Clinical Decision Making; Association Rules Mining; Analytic Network Process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Association rules mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a problem of concern, as conventional mining algorithms often produce too many rules for decision makers to digest. In order to overcome this problem in clinical decision making, this paper concentrates on using the Analytic Network Process method to improve the process of extracting rules. The rules provided by association rules mining, through the group decision making of physicians and health experts, are used to organize and evaluate the related features with the analytic network process. The proposed method has been applied to the complete blood count, based on a real database, and generated interesting association rules that are usable and useful for medical diagnosis.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_34-Improved_Association_Rules_Mining_based_on_Analytic_Network_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Imaging System for Pigmented Skin Lesion Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071033</link>
        <id>10.14569/IJACSA.2016.071033</id>
        <doi>10.14569/IJACSA.2016.071033</doi>
        <lastModDate>2016-11-01T13:23:03.5870000+00:00</lastModDate>
        
        <creator>Mariam Ahmed Sheha</creator>
        
        <creator>Amr Sharwy</creator>
        
        <creator>Mai S. Mabrouk</creator>
        
        <subject>Pigmented Skin lesions; Color Space; Bounding box; local range; SVM; KNN; ANN; Experimental models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Through the study of pigmented skin lesion risk factors, the appearance of malignant melanoma turns the anomalous occurrence of these lesions into a worrying sign. The difficulty of differentiating between malignant melanoma and melanocytic nevi is the error-prone problem that usually faces physicians in diagnosis. To tackle the hard task of pigmented skin lesion diagnosis, different clinical diagnosis algorithms were proposed, such as pattern analysis, the ABCD rule of dermoscopy, the Menzies method, and the 7-point checklist. Computerized monitoring of these algorithms improves the diagnosis of melanoma compared to simple naked-eye examination by a physician. Toward the serious step of melanoma early detection, aiming to reduce the melanoma mortality rate, several computerized studies and procedures have been proposed. In this research, different approaches with a large number of features are discussed to point out the best approach or methodology that could be followed to accurately diagnose pigmented skin lesions. This paper proposes an automated system for the diagnosis of melanoma to provide a quantitative and objective evaluation of skin lesions, as opposed to visual assessment, which is subjective in nature. Two different data sets were utilized to reduce the effect of the qualitative interpretation problem upon accurate diagnosis: a set of clinical images acquired from a standard camera, and another set acquired from a special dermoscopic camera, hence named dermoscopic images. The system's contribution appears in the new, complete, and different approaches presented for the aim of pigmented skin lesion diagnosis. These approaches result from using a large, conclusive set of features fed to different classifiers. The three main types of features extracted from the region of interest are geometric, chromatic, and texture features.
Three statistical methods were proposed to select the most significant features that have a valuable effect on diagnosis: the Fisher score method, the t-test, and the F-test. The selected high-ranking features, based on the statistical methods, are used for the diagnosis of the two lesion groups using Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) as three different proposed classifiers. The overall system performance was then measured in terms of specificity, sensitivity, and accuracy. Among the different approaches, the best result was shown by the ANN designed with the features selected according to the Fisher score method, enabling a diagnostic accuracy of 96.25% and 97% for dermoscopic and clinical images, respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_33-Automated_Imaging_System_for_Pigmented_Skin_Lesion_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Improved Nonlinear Tracking Differentiator based Nonlinear PID Controller Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071032</link>
        <id>10.14569/IJACSA.2016.071032</id>
        <doi>10.14569/IJACSA.2016.071032</doi>
        <lastModDate>2016-11-01T13:23:03.5570000+00:00</lastModDate>
        
        <creator>Ibraheem Kasim Ibraheem</creator>
        
        <creator>Wameedh Riyadh Abdul-Adheem</creator>
        
        <subject>Nonlinear tracking differentiator; PID; Nonlinear mass-spring-damper; Lyapunov theory; Measurement noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>This paper presents a new improved nonlinear tracking differentiator (INTD) with a hyperbolic tangent function in the state-space system. The stability and convergence of the INTD are thoroughly investigated and proved. Through the error analysis, the proposed INTD can extract the derivative of any piecewise-smooth nonlinear signal with high accuracy. The INTD has the required filtering features and can cope with the nonlinearities caused by the noise. Through simulations, the INTD is implemented as a signal-derivative generator for a closed-loop feedback control system with a nonlinear PID controller for the nonlinear Mass-Spring-Damper system, and the results showed that it achieves signal tracking and differentiation faster and with a minimum mean-square error.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_32-On_the_Improved_Nonlinear_Tracking_Differentiator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time-Saving Approach for Optimal Mining of Association Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071031</link>
        <id>10.14569/IJACSA.2016.071031</id>
        <doi>10.14569/IJACSA.2016.071031</doi>
        <lastModDate>2016-11-01T13:23:03.5400000+00:00</lastModDate>
        
        <creator>Mouhir Mohammed</creator>
        
        <creator>Balouki Youssef</creator>
        
        <creator>Gadi Taoufiq</creator>
        
        <subject>MDPREF Algorithm; Association Rules mining; Data partitioning; Optimization (profitability, efficiency and risks); Bagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Data mining is the process of analyzing data so as to obtain useful information to be exploited by users. Association rules mining is one of the data mining techniques used to detect correlations and to reveal relationships among individual data items in huge databases. These rules usually take the form: if X then Y, where X and Y are independent attributes. Association rules have become a popular technique used in several vital fields of activity such as insurance, medicine, banking, supermarkets… Association rules are generated in huge numbers by algorithms known as association rules mining algorithms. The generation of huge quantities of association rules may be time- and effort-consuming, which is the reason behind the urgent need for an efficient and scalable approach to mine only the relevant and significant association rules. This paper proposes an innovative approach which mines the optimal rules from a large set of association rules in a distributed processing way to improve efficiency and decrease the running time.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_31-Time_Saving_Approach_for_Optimal_Mining_of_Association_Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Robust High Efficiency Video Coding Framework to Enhance Perceptual Quality of Real-Time Video Frames</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071030</link>
        <id>10.14569/IJACSA.2016.071030</id>
        <doi>10.14569/IJACSA.2016.071030</doi>
        <lastModDate>2016-11-01T13:23:03.5100000+00:00</lastModDate>
        
        <creator>Murthy SVN</creator>
        
        <creator>Sujatha B K</creator>
        
        <subject>H.265; HEVC; Video Coding; Compression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Different levels of compression on real-time video streaming have successfully reduced storage space complexities and bandwidth constraints in recent times. This paper aims to design and develop a novel concept for enhancing the perceptual quality of real-time video frames. The proposed model has been evaluated using a multi-level compression operation based on H.265, where .avi moving-frame standards play a crucial role. The study also applies the novel concept of High Efficiency Video Coding (HEVC) to adaptive live video streaming over a mobile network. The proposed study formulates a multi-level optimization for HEVC to enhance the performance of both the encoding and decoding mechanisms at the client and server sides and to ensure a higher compression rate. The experimental outcomes show that the proposed protocol achieves a better performance ratio and overall throughput in comparison with the conventional H.263 and H.264 standards, while enhancing the perceptual quality of .avi-format real-time video frames.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_30-An_Efficient_and_Robust_High_Efficiency_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Distributed Generation Impact on the Reliability of Electric Distribution Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071029</link>
        <id>10.14569/IJACSA.2016.071029</id>
        <doi>10.14569/IJACSA.2016.071029</doi>
        <lastModDate>2016-11-01T13:23:03.4770000+00:00</lastModDate>
        
        <creator>Sanaullah Ahmad</creator>
        
        <creator>Sana Sardar</creator>
        
        <creator>Babar Noor</creator>
        
        <creator>Azzam ul Asar</creator>
        
        <subject>Distribution Generation; Electric Power System Reliability; Wind Turbine Generator; Interruptions; Reliability Assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>With the proliferation of Distributed Generation (DG) and renewable energy technologies, the power system is becoming more complex, and over time the development of distributed generation technologies is becoming more diverse and broad. Power system reliability is one of the most vital areas of electric power systems, dealing with the continuous supply of power and customer satisfaction. The distribution network contributes up to 80% of reliability problems in a power system. This paper analyzes the impact of a Wind Turbine Generator (WTG), as a distributed generation source, on the reliability of the distribution system. Injecting a single WTG close to the load point has a positive impact on reliability, while injecting multiple WTGs at a single place has an adverse impact on distribution system reliability. These analyses are performed on bus 2 of the Roy Billinton Test System (RBTS).</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_29-Analyzing Distributed Generation Impact.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real Time Monitoring of Human Body Vital Signs using Bluetooth and WLAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071028</link>
        <id>10.14569/IJACSA.2016.071028</id>
        <doi>10.14569/IJACSA.2016.071028</doi>
        <lastModDate>2016-11-01T13:23:03.4600000+00:00</lastModDate>
        
        <creator>Najeed Ahmed Khan</creator>
        
        <creator>M. Ajmal Sawand</creator>
        
        <creator>Marium Hai</creator>
        
        <creator>Arwa Khuzema</creator>
        
        <creator>Mehak Tariq</creator>
        
        <subject>Telemedicine; smartphone; Vital signs; remote locations; photoplethysmography; e-health; medical telemetry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The technology of telemedicine is emerging and advancing day by day, and it is capable of taking the field of healthcare to a whole new level of personalization. Using the Wireless Body Area Network (WBAN) concept, a person can keep a close check on his/her health&#39;s vital signs and receive suitable feedback when required, helping to maintain the best possible health status. The sensor nodes can communicate wirelessly with any smartphone through an Android application to continuously monitor, and provide complete access to, the medical data of the patient. The system also aims to maintain an efficient electronic medical record of the person. Moreover, the consultant and the caretaker of the patient can access this important information remotely through an internet connection and provide significant advice, which encapsulates the term "smart first aid technology".
In the proposed framework, miniaturized sensors are worn on the body and non-intrusively monitor a person’s physiological state. The body vital signs (e.g., heart rate, temperature) are recorded through the sensor nodes and transmitted to the smartphone via Bluetooth, where the vital-sign data is stored and further transmitted to remote locations if needed.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_28-Real_Time_Monitoring_of_Human_Body_Vital.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SDME Quality Measure based Stopping Criteria for Iterative Deblurring Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071027</link>
        <id>10.14569/IJACSA.2016.071027</id>
        <doi>10.14569/IJACSA.2016.071027</doi>
        <lastModDate>2016-11-01T13:23:03.4300000+00:00</lastModDate>
        
        <creator>Mayana Shah</creator>
        
        <creator>U. D. Dalal</creator>
        
        <subject>Image deblurring; stopping point; Point Spread Function; Second derivative like measure of enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Motion deblurring, with or without noise, is an ill-posed inverse problem, and almost all inverse problems require some sort of parameter selection. The quality of the restored image in iterative motion deblurring depends on the selection of an optimal stopping point or regularization parameter. At the optimal point the reconstructed image best matches the original image, while at other points either data mismatch or over-smoothing results. Existing methods for optimal parameter selection are formulated based on a correct estimation of the noise variance or on restrictive assumptions about the noise. Some methods involve heavy computation and delay the final output. In this paper we propose a method that calculates the visual quality of the reconstructed image using the Second Derivative-like Measure of Enhancement (SDME) and efficiently decides the optimal stopping condition; the method has been validated on a leading image deblurring algorithm. It requires neither an estimation of the noise variance nor heavy computation. Simulations have been performed on various images, including standard images, under different degradation and noise conditions. For testing, the leading algorithm for blind and semi-blind deblurring of natural images using the Alternating Direction Method of Multipliers (ADMM) is considered. The results obtained for synthetically blurred images are good even under noisy conditions, with average ISNR values of 0.2914 dB. The proposed measure offers a powerful solution for deciding an automatic stopping criterion in iterative deblurring algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_27-SDME_Quality_Measure_based_Stopping_Criteria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Speech Enhancement based on Discrete Orthonormal Stockwell Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071026</link>
        <id>10.14569/IJACSA.2016.071026</id>
        <doi>10.14569/IJACSA.2016.071026</doi>
        <lastModDate>2016-11-01T13:23:03.4000000+00:00</lastModDate>
        
        <creator>Safa SAOUD</creator>
        
        <creator>Souha BOUSSELMI</creator>
        
        <creator>Mohamed BEN NASER</creator>
        
        <creator>Adnane CHERIF</creator>
        
        <subject>MRA; Stockwell Transform; DOST; DWT; speech enhancement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The S-transform (ST) is an effective time-frequency representation that gives simultaneous frequency and time distribution information, like the wavelet transform (WT). However, the ST redundantly doubles the dimension of the original data set, and the Discrete Orthonormal S-Transform (DOST) can further decrease this redundancy. This paper therefore proposes a new method to remove additive background noise from noisy speech signals using the DOST, which supplies a multi-resolution analysis (MRA) spatial-frequency representation for image processing and signal analysis. The performance of the applied speech enhancement technique has been evaluated objectively and subjectively, in comparison with many other methods, under four background noises at different SNR levels.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_26-New_Speech_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strength of Crypto-Semantic System of Tabular Data Protection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071025</link>
        <id>10.14569/IJACSA.2016.071025</id>
        <doi>10.14569/IJACSA.2016.071025</doi>
        <lastModDate>2016-11-01T13:23:03.3670000+00:00</lastModDate>
        
        <creator>Hazem (Moh&#39;d Said) Abdel Majid Hatamleh</creator>
        
        <creator>Hassan Mohammad</creator>
        
        <creator>Roba mohmoud ali aloglah</creator>
        
        <creator>Saleh Ebrahim Alomar</creator>
        
        <subject>cipher key; cryptographic; data protection; crypto-semantic; lexicographical systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The strength of the crypto-semantic method (CSM) of text data protection, based on the use of lexicographical systems in the form of applied linguistic corpora within the formally defined restrictions of selected spheres of applied use, has been analyzed. The levels of cryptographic strength provided by the crypto-semantic method of data protection, with due regard to a cryptanalyst’s resource capabilities, are determined. The conditions under which the CSM provides an absolute guarantee of text data protection from confidentiality compromise are also determined.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_25-Strength_of_Crypto_Semantic_System_of_Tabular_Data_Protection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tri-Band Fractal Patch Antenna for GSM and Satellite Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071024</link>
        <id>10.14569/IJACSA.2016.071024</id>
        <doi>10.14569/IJACSA.2016.071024</doi>
        <lastModDate>2016-11-01T13:23:03.3530000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Shahryar Shafique Qureshi</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Mehr-e- Munir</creator>
        
        <creator>Sajid Nawaz Khan</creator>
        
        <subject>miniaturization; directivity; gain; slots; Bandwidth; VSWR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Owing to their small size and lightweight structure, patch antennas are commonly used in modern communication technology. With further reduction in size, microstrip antennas are commonly used in handsets, GPS receivers, etc. This paper presents a novel design of a fractal-shaped patch antenna using a U-slot on the patch and a defected ground structure. Due to the slots on the patch and ground, a tri-band resonating response is attained with a maximum gain and directivity of 4.22 dB and 6.51 dBi, showing high impedance bandwidth and radiation efficiency. The antenna showed a good VSWR of 1.63 to 1.02, indicating high efficiency. As evident from the simulation results, the proposed antenna is useful for W-LAN, GSM, Radio Satellite Services (RSS), Fixed Satellite Services (FSS), and satellite communication systems.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_24-Tri_Band_Fractal_Patch_Antenna_for_GSM_and_Satellite_Communication_Systems_Latest.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Accessibility Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071023</link>
        <id>10.14569/IJACSA.2016.071023</id>
        <doi>10.14569/IJACSA.2016.071023</doi>
        <lastModDate>2016-11-01T13:23:03.3370000+00:00</lastModDate>
        
        <creator>Hayfa.Y. Abuaddous</creator>
        
        <creator>Mohd Zalisham Jali</creator>
        
        <creator>Nurlida Basir</creator>
        
        <subject>Website Accessibility; Disabilities; Accessibility challenges; WCAG 2.0; Accessibility automated tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Despite the importance of web accessibility in recent years, websites remain partially or completely inaccessible to certain sectors of the population. This is due to several reasons, including web developers’ little or no experience with accessibility and the lack of accurate information about the best ways to quickly and easily identify accessibility problems using different Accessibility Evaluation Methods (AEMs). This paper surveys the accessibility literature and presents a general overview of the primary challenges posed by accessibility barriers on websites. In this sense, we critically investigate the main forms of accessibility challenges, including those related to standards and guidelines (WCAG 2.0), to a website’s design and development, and to evaluation. Finally, a set of recommendations, such as enforcing accessibility legislation, is presented to overcome some of these challenges.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_23-Web_Accessibility_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel Fuzzy-Genetic Algorithm for Classification and Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071022</link>
        <id>10.14569/IJACSA.2016.071022</id>
        <doi>10.14569/IJACSA.2016.071022</doi>
        <lastModDate>2016-11-01T13:23:03.3070000+00:00</lastModDate>
        
        <creator>Hassan Abounaser</creator>
        
        <creator>Ihab Talkhan</creator>
        
        <creator>Ahmed Fahmy</creator>
        
        <subject>Fuzzy Classification; Rule-Base; Fuzzy Logic System (FLS); Genetic Algorithm; Distributed Data Mining (DDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>One of the top challenging problems in the data mining domain is distributed data mining (DDM) and mining multi-agent data. In a distributed environment, classical techniques require that the distributed data first be collected in a data warehouse, which is usually either ineffective or infeasible. Hence, mining over decentralized data sources can overcome such issues. Rule-based classifiers involve sharp cutoffs for continuous attributes. A Fuzzy Logic System (FLS) has features that make it an adequate tool for addressing this shortcoming effectively and efficiently. In this paper, a framework for a Parallel Fuzzy-Genetic Algorithm (PFGA) is developed for classification and prediction over decentralized data sources. The model parameters are evolved using two nested genetic algorithms (GAs): the outer GA evolves the fuzzy sets, whereas the inner GA evolves the fuzzy rules. During optimization, only the best rules are distributed among agents to construct the overall optimized model. Several experiments have been conducted over many benchmark datasets. The experimental results show that the developed model has good accuracy and is more efficient in performance and in the comprehensibility of its linguistic rules compared to some models implemented in the KEEL software tool.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_22-A_Parallel_Fuzzy_Genetic_Algorithm_for_Classification_and_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross Site Scripting: Detection Approaches in Web Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071021</link>
        <id>10.14569/IJACSA.2016.071021</id>
        <doi>10.14569/IJACSA.2016.071021</doi>
        <lastModDate>2016-11-01T13:23:03.2900000+00:00</lastModDate>
        
        <creator>Abdalla Wasef Marashdih</creator>
        
        <creator>Zarul Fitri Zaaba</creator>
        
        <subject>Web Application Security; Security; Software Security; Security Vulnerability; Cross Site Scripting; XSS; Genetic Algorithm; GA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Web applications have become one of the standard platforms for releasing services and representing information and data over the World Wide Web. Consequently, security vulnerabilities have led to various types of attacks on web applications. Amongst these is Cross Site Scripting, also known as XSS. XSS can be considered one of the most prevalent types of threat in web application security. XSS occurs by injecting malicious scripts into a web application, and it can lead to significant violations at the site or for the user. This paper highlights the issues (i.e., security and vulnerability) in web applications, specifically in regards to XSS. In addition, the future direction of research within this domain is highlighted.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_21-Cross_Site_Scripting_Detection_Approaches_in_Web_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Symbolism in Computer Security Warnings: Signal Icons &amp; Signal Words</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071020</link>
        <id>10.14569/IJACSA.2016.071020</id>
        <doi>10.14569/IJACSA.2016.071020</doi>
        <lastModDate>2016-11-01T13:23:03.2600000+00:00</lastModDate>
        
        <creator>Nur Farhana Samsudin</creator>
        
        <creator>Zarul Fitri Zaaba</creator>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Azman Samsudin</creator>
        
        <subject>security; signal icons; signal words; usable security; usability; warning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Security warnings are often encountered by end users when they use their systems. A warning is a form of communication that notifies users of possible future consequences. The underlying threats have kept evolving with the advancement of technology, threatening end users with many harmful effects such as malware attacks. However, security warnings keep being ignored for various reasons. One of these reasons is a lack of attention towards warnings: end users feel burdened and treat security as a secondary rather than a primary task. To get users to read and comprehend security warnings, it is important to capture their attention. Signal words and signal icons are important in security warnings, as they are the elements that can help users heed the warnings. A survey study was conducted with 60 participants regarding the perceived attractiveness and understandability of signal words and icons. The results reveal that end users significantly feel that the icon with the exclamation mark is attractive and easy to understand. However, only one of the three hypotheses is proven to be significant.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_20-Symbolism_in_Computer_Security_Warnings.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Named Entity Recognition System for Postpositional Languages: Urdu as a Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071019</link>
        <id>10.14569/IJACSA.2016.071019</id>
        <doi>10.14569/IJACSA.2016.071019</doi>
        <lastModDate>2016-11-01T13:23:03.2130000+00:00</lastModDate>
        
        <creator>Muhammad Kamran Malik</creator>
        
        <creator>Syed Mansoor Sarwar</creator>
        
        <subject>IOB tagging; BIO tagging; BILOU tagging; IOE tagging; BIL2 tagging; NER for Resource-poor languages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Named Entity Recognition and Classification is the process of identifying named entities and classifying them into one of the classes such as person name, organization name, location name, etc. In this paper, we propose a tagging scheme, Begin Inside Last-2 (BIL2), for Subject Object Verb (SOV) languages that contain postpositions. We use the Urdu language as a case study. We compare the F-measure values obtained for the tagging schemes IO, BIO2, BILOU, and BIL2 using a Hidden Markov Model (HMM) and a Conditional Random Field (CRF). The BIL2 tagging scheme results are better than those of the other three tagging schemes using the same parameters, including bigrams and the context window. With HMM, the F-measure values for IO, BIO2, BILOU, and BIL2 are 44.87%, 44.88%, 45.14%, and 45.88%, respectively. With CRF, the F-measure values for IO, BIO2, BILOU, and BIL2 are 35.13%, 35.90%, 37.85%, and 38.39%, respectively. The F-measure values for BIL2 are better than those of previously reported techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_19-Named_Entity_Recognition_System_for_Postpositional_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AAODV (Aggrandized Ad Hoc on Demand Vector): A Detection and Prevention Technique for Manets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071018</link>
        <id>10.14569/IJACSA.2016.071018</id>
        <doi>10.14569/IJACSA.2016.071018</doi>
        <lastModDate>2016-11-01T13:23:03.1800000+00:00</lastModDate>
        
        <creator>Abdulaziz Aldaej</creator>
        
        <creator>Tariq Ahamad</creator>
        
        <subject>MANET; AODV; Grayhole; Blackhole</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Security is a major concern that needs to be addressed in Mobile Ad hoc Networks (MANETs) because of their vulnerable features, which include an infrastructure-less environment, dynamic topology, and randomized node movement, making MANETs prone to various network attacks. Synergistic attacks have more severe effects on MANETs than any particular single attack. Various algorithms and protocols have been designed and developed to meet the increasing demand for MANET security, but there is still room for improvement in order to make communication more reliable and hassle-free. An Aggrandized AODV (AAODV) is presented in this paper to detect and prevent various synergistic and non-synergistic attacks.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_18-AAODV_Aggrandized_Ad_Hoc_on_Demand_Vector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trending Challenges in Multi Label Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071017</link>
        <id>10.14569/IJACSA.2016.071017</id>
        <doi>10.14569/IJACSA.2016.071017</doi>
        <lastModDate>2016-11-01T13:23:03.1330000+00:00</lastModDate>
        
        <creator>Raed Alazaidah</creator>
        
        <creator>Farzana Kabir Ahmad</creator>
        
        <subject>Challenges; Correlations among labels; Multi Label Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Multi-label classification has become a very important paradigm in the last few years because of the increasing number of domains to which it can be applied. Many researchers have developed algorithms to solve the problem of multi-label classification. Nevertheless, there are still some open problems that need to be investigated in depth. The aim of this paper is to provide researchers with a brief introduction to the problem of multi-label classification and to introduce some of the most trending challenges.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_17-Trending_Challenges_in_Multi_Label_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Task Distributed Vision System Embedded on a Hex-Rotorcraft UAV</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071016</link>
        <id>10.14569/IJACSA.2016.071016</id>
        <doi>10.14569/IJACSA.2016.071016</doi>
        <lastModDate>2016-11-01T13:23:03.1200000+00:00</lastModDate>
        
        <creator>Nadir Younes</creator>
        
        <creator>Boukhdir Khalid</creator>
        
        <creator>Moutaouakkil Fouad</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>multi-agent architecture; image processing; real-time systems; target detection; panoramic images; target following; unmanned aerial vehicles (UAVs); vision systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>This paper presents the general architecture and implementation of a multi-task distributed vision system designed for and embedded onboard a hex-rotorcraft UAV. The system uses multiple cheap heterogeneous cameras in order to perform various tasks such as ground-target pedestrian detection, tracking, creating panoramic images, video stabilization, and streaming multiple data/video feeds over a secure wireless channel. In what follows, we discuss the multi-agent architecture designed to provide our UAV with an embedded intelligent vision system using autonomous agents entrusted with managing the previously listed functionalities. In addition to the cheap set of USB and module cameras, the presented vision system is composed of a Local Data Processing Module connected to each camera and a Central Module used to control the overall system, process the regrouped data, and stream it to the ground station. The overall vision system has been tested in real flights and is still being improved.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_16-A_Multi_Task_Distributed_Vision_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Intent Discovery using Analysis of Browsing History</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071015</link>
        <id>10.14569/IJACSA.2016.071015</id>
        <doi>10.14569/IJACSA.2016.071015</doi>
        <lastModDate>2016-11-01T13:23:03.0870000+00:00</lastModDate>
        
        <creator>Wael K. Abdallah</creator>
        
        <creator>Aziza S. Asem</creator>
        
        <creator>Mohammed Badr Senousy</creator>
        
        <subject>Information Retrieval; Search Engines; Users’ Search Intents; Search Log and Browsing History</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>A search engine retrieves information from the web using keyword queries. The responsibility of search engines is to return relevant results that meet users’ search intents. Nowadays, all search engines keep a search log for the user (query logs and click information, besides browsing history). The main objective of this work is to provide features that can help users during their web search by categorizing related browsing URLs together. This is done by identifying intent groups for each URL category, then identifying intent segments for each intent group. By clustering the query categories, groups, and intent segments, search engines can improve the representation of the users’ search context behind the current query, which would help search engines discover users’ intents during the web search. Using the normalized discounted cumulative gain (NDCG), the experimental results show that the proposed method can improve the performance of the search engine.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_15-User_Intent_Discovery_using_Analysis_of_Browsing_History.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Coreference Resolution Approach using Morphological Features in Arabic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071014</link>
        <id>10.14569/IJACSA.2016.071014</id>
        <doi>10.14569/IJACSA.2016.071014</doi>
        <lastModDate>2016-11-01T13:23:03.0700000+00:00</lastModDate>
        
        <creator>Majdi Beseiso</creator>
        
        <creator>Abdulkareem Al-Alwani</creator>
        
        <subject>Coreference resolution; Anaphora; Alternative Approach; Arabic NLP; morphological features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Coreference resolution is considered one of the challenges in natural language processing. It is an important task that involves determining which pronouns refer to which entities. Most of the earlier approaches to coreference resolution are rule-based or machine learning approaches. However, these types of approaches have many limitations, especially with the Arabic language. In this paper, a different approach to coreference resolution is presented, which instead uses morphological features and dependency trees. It has five stages, which overcomes the limitations of using annotated datasets for learning or a set of rules. The approach was evaluated using our own customized annotated dataset and the “AnATAr” dataset. The evaluation shows encouraging results, with an average F1 score of 89%.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_14-A_Coreference_Resolution_Approach_using_Morphological_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A System Framework for Smart Class System to Boost Education and Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071013</link>
        <id>10.14569/IJACSA.2016.071013</id>
        <doi>10.14569/IJACSA.2016.071013</doi>
        <lastModDate>2016-11-01T13:23:03.0400000+00:00</lastModDate>
        
        <creator>Ahmad Tasnim Siddiqui</creator>
        
        <creator>Mehedi Masud</creator>
        
        <subject>E-Learning; smart class system; quality education; higher education; enhanced education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The large number of reasonably priced computers, broadband Internet connectivity, and rich educational content has created a global phenomenon in which information and communication technology (ICT) is used to remodel education. E-learning can be described as the use of available information, computational, and communication technologies to assist the learning process. In the modern world, education has become more universal, and people are looking for learning with simplicity and interest. Students are looking for a more interactive and attractive learning style rather than the old traditional style. Using technological learning, we can enhance the education system: we can deliver quality education to students, and we can simplify and standardize the process of education by using modern technologies and methods. In this paper, we propose a smart class model to manage the entire range of educational activities and hence to enhance the quality of education.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_13-A_System_Framework_for_Smart_Class_System_to_Boost_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Routing Protocol for Maximizing Lifetime in Wireless Sensor Networks using Fuzzy Logic and Immune System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071012</link>
        <id>10.14569/IJACSA.2016.071012</id>
        <doi>10.14569/IJACSA.2016.071012</doi>
        <lastModDate>2016-11-01T13:23:03.0100000+00:00</lastModDate>
        
        <creator>Safaa Khudair Leabi</creator>
        
        <creator>Turki Younis Abdalla</creator>
        
        <subject>routing; fuzzy logic; artificial immune system; network lifetime; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Energy limitation has become a fundamental challenge in designing wireless network systems, and one of the most important features is network lifetime. Many works have been developed to maximize wireless sensor network lifetime, among which one of the most important is routing. This paper proposes a new adaptive routing technique for prolonging the lifetime of wireless sensor networks using a fuzzy-immune system. The artificial immune system is used to solve the packet loop problem and to control route direction, while the fuzzy logic system is used to determine the optimal path for sending data packets. The proposed routing technique seeks to determine the optimal route from source to destination so that energy consumption is balanced. The proposed technique is compared with a classical method. Simulation results demonstrate that the proposed technique yields a significant increase in network lifetime of about 0.93 and that energy consumption is well managed.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_12-Energy_Efficient_Routing_Protocol_for_Maximizing_Lifetime.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining the Types of Diseases and Emergency Issues in Pilgrims During Hajj: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071011</link>
        <id>10.14569/IJACSA.2016.071011</id>
        <doi>10.14569/IJACSA.2016.071011</doi>
        <lastModDate>2016-11-01T13:23:02.9770000+00:00</lastModDate>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Asmidar Abu Bakar</creator>
        
        <creator>Salman Yussof</creator>
        
        <subject>Hajj; pilgrims; health; communicable diseases; non-communicable diseases; emergency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Introduction: Every year, 2-3 million pilgrims of different backgrounds, most of them elderly, from 184 countries around the world congregate in the holy place ‘Haram’ at Makkah in Saudi Arabia to perform Hajj. During the pilgrimage, they encounter a rough and tough environment, physical hassle, and mental stress. Due to the hardship of travel, fluctuations in the weather, and continuous walking during religious rites at specific times and sites, many pilgrims are injured or feel tired, sick, and exhausted. These conditions may also create complications and overburden the physiological functions, including the heart, chest, abdomen, and kidneys, of those who suffer from chronic diseases. Besides the problem of diseases, crowds can cause other significant problems, including missing and lost pilgrims, injuries, and even death. Objective: The main objective of this study was to determine the common health problems, e.g. diseases and emergency incidents, encountered by pilgrims during Hajj. Methods: An extensive, systematic literature review was conducted to determine the common health problems and emergency incidents during Hajj. Numerous scholarly databases were searched for articles related to health problems and emergency incidents during Hajj published from 2008 to 2016. Eligible articles included case reports and experimental and non-experimental studies. Only thirty of two hundred and sixty articles met the specific inclusion criteria. Results: The analysis revealed that respiratory diseases, including pneumonia, influenza, and asthma (73.33%), were the main health problems encountered by pilgrims during Hajj, followed by heat stroke and sunlight effects (16.67%) and cardiovascular and heart disease (10%). The analysis also revealed that emergency incidents, including traffic accidents and trauma, accounted for 3.33%. Notwithstanding the information given above, according to the analysis, the common health problems during Hajj are mainly divided into two categories: non-communicable diseases (62.5%) and communicable diseases (37.5%). IBM’s Statistical Package for the Social Sciences (SPSS) version 22 was used to analyze the results. Conclusion: Both communicable and non-communicable health issues are the most common health problems encountered by pilgrims during Hajj. However, due to the lack of existing studies in this research area, a definite conclusion could not be made. Nevertheless, our findings demonstrate the necessity of new research to find solutions to pilgrims’ health problems during Hajj.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_11-Determining_the_Types_of_Diseases_and_Emergency_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Methodology in Study of Effective Parameters in Network-on-Chip Interconnection’s (Wire/Wireless) Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071010</link>
        <id>10.14569/IJACSA.2016.071010</id>
        <doi>10.14569/IJACSA.2016.071010</doi>
        <lastModDate>2016-11-01T13:23:02.9630000+00:00</lastModDate>
        
        <creator>Mostafa Haghi</creator>
        
        <creator>Kerstin Thurow</creator>
        
        <creator>Norbert Stoll</creator>
        
        <creator>Saed Moradi</creator>
        
        <subject>network on chip; on-chip interconnection; buffer size; virtual channel; subnet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The Network-on-Chip (NoC) paradigm has been proposed as an alternative to bus-based schemes to achieve high performance and scalability in System-on-Chip (SoC) design. Performance analysis and evaluation of on-chip interconnect architectures are widely considered. Latency and throughput are two critical parameters that play a vital role in improving system performance. In this work, these two elements are evaluated in both wired and wireless approaches under different conditions for networks containing 64, 512, and 1024 cores. A number of parameters have direct and indirect effects on delay and throughput; among them, four are chosen: routing algorithm, buffer size, virtual channels, and subnets. This work is thus divided into two general parts: in the first, the effects of routing algorithms and buffer size are calculated; in the second, when switching from the wired approach to wireless, it is shown that virtual channels and subnets can positively influence the performance of a network-on-chip under some circumstances. We do not concentrate on approaches and techniques here; our target in this paper is to determine the critical points and trade-offs and to study the effect of the mentioned parameters on the entire system. Evaluation is done by means of the Booksim and Noxim simulators, which are based on SystemC.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_10-A_New_Methodology_in_Study_of_Effective_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection of Omega Signals Captured by the Poynting Flux Analyzer (PFX) on Board the Akebono Satellite</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071009</link>
        <id>10.14569/IJACSA.2016.071009</id>
        <doi>10.14569/IJACSA.2016.071009</doi>
        <lastModDate>2016-11-01T13:23:02.9300000+00:00</lastModDate>
        
        <creator>I Made Agus Dwi Suarjaya</creator>
        
        <creator>Yoshiya Kasahara</creator>
        
        <creator>Yoshitaka Goto</creator>
        
        <subject>Auto-detection; Satellite; Signal processing; Wave Propagation; Plasmasphere</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>The Akebono satellite was launched in 1989 to observe the Earth’s magnetosphere and plasmasphere. Omega was a navigation system with eight ground-station transmitters and a transmission pattern that repeated every 10 s. From 1989 to 1997, the PFX on board the Akebono satellite received signals at 10.2 kHz from these stations. The huge amount of PFX data became valuable for studying the propagation characteristics of VLF waves in the ionosphere and plasmasphere. In this study, we introduce a method for the automatic detection of Omega signals from the PFX data in a systematic way; it involves identifying a transmission station, calculating the delay time, and estimating the signal intensity. We show the reliability of the automatic detection system, with which we were able to detect the Omega signals and confirm their propagation to the opposite hemisphere along the Earth’s magnetic field lines. Over more than three years (39 months), we detected 43,734 and 111,049 signals in the magnetic and electric fields, respectively, demonstrating that the proposed method is powerful enough for statistical analyses.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_9-Automatic_Detection_of_Omega_Signals_Captured.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Selection Operator - CSM in Genetic Algorithms for Solving the TSP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071008</link>
        <id>10.14569/IJACSA.2016.071008</id>
        <doi>10.14569/IJACSA.2016.071008</doi>
        <lastModDate>2016-11-01T13:23:02.9000000+00:00</lastModDate>
        
        <creator>Wael Raef Alkhayri</creator>
        
        <creator>Suhail Sami Owais</creator>
        
        <creator>Mohammad Shkoukani</creator>
        
        <subject>Genetic Algorithm; Traveling Salesman Problem; Genetic Algorithm Operators; Clustering; Selection Operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Genetic Algorithms (GAs) are a type of local search that mimics biological evolution by taking a population of strings, which encode possible solutions, and combining them based on fitness values to produce individuals that are fitter than others. One of the most important operators in Genetic Algorithms is the selection operator. A new selection operator, called the Clustering Selection Method (CSM), is proposed in this paper. The proposed method was implemented and tested on the traveling salesman problem and compared with other selection methods, such as random selection, roulette wheel selection, and tournament selection. The results showed that the CSM performed best, reaching the optimal path in only 8840 iterations with a minimum distance of 79.7234 when the system was applied to solving a Traveling Salesman Problem (TSP) of 100 cities.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_8-A_New_Selection_Operator_CSM_in_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Standards for Application Level Interfaces in SDN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071006</link>
        <id>10.14569/IJACSA.2016.071006</id>
        <doi>10.14569/IJACSA.2016.071006</doi>
        <lastModDate>2016-11-01T13:23:02.8370000+00:00</lastModDate>
        
        <creator>Yousef Ibrahim Daradkeh</creator>
        
        <creator>Mujahed Aldhaifallah</creator>
        
        <creator>Dmitry Namiot</creator>
        
        <creator>Manfred Sneps-Sneppe</creator>
        
        <subject>SDN; REST API; Northbound interface; application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In this paper, the authors discuss application-level interfaces for Software Defined Networks. While the Application Programming Interfaces for interaction with the hardware are widely described in Software Defined Networks, the software interfaces for applications have received far less attention. However, it is obvious that interfaces to software applications are very important; indeed, application-level interfaces should be one of the main elements in Software Defined Networks, as they are a core feature. In this article, we discuss the issues of standardization of software interfaces for applications in the Software Defined Networks area. Nowadays, there are several examples of unified Application Program Interfaces in the telecommunications area. Is it possible to reuse this experience for Software Defined Networks, or are Software Defined Networks standards radically different? This is the main question discussed in this paper.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_6-On_Standards_for_Application_Level_Interfaces_in_SDN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Backstepping Control of Induction Motor Fed by Five-Level NPC Inverter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071005</link>
        <id>10.14569/IJACSA.2016.071005</id>
        <doi>10.14569/IJACSA.2016.071005</doi>
        <lastModDate>2016-11-01T13:23:02.8070000+00:00</lastModDate>
        
        <creator>Ali BOUCHAIB</creator>
        
        <creator>Abdellah MANSOURI</creator>
        
        <creator>Rachid TALEB</creator>
        
        <subject>Backstepping control; Five-level NPC inverter; Field orientated control; Induction motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>In this paper we present a contribution to backstepping control for an induction motor (IM) based on the principle of Field Orientated Control (FOC). The control law is established step by step while ensuring the closed-loop stability of the machine through a suitable choice of the Lyapunov function; in addition, it guarantees the convergence of the speed tracking error for all possible initial conditions. Both the speed and the rotor flux are assumed to be available from sensors. The control of the IM by a five-level NPC inverter generally uses pulse-width modulation (PWM) techniques. Finally, we present simulation results obtained in the Matlab/Simulink environment.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_5-Backstepping_Control_of_Induction_Motor_Fed_by_Five_Level_NPC_Inverter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosing Coronary Heart Disease using Ensemble Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071004</link>
        <id>10.14569/IJACSA.2016.071004</id>
        <doi>10.14569/IJACSA.2016.071004</doi>
        <lastModDate>2016-11-01T13:23:02.7770000+00:00</lastModDate>
        
        <creator>Kathleen H. Miao</creator>
        
        <creator>Julia H. Miao</creator>
        
        <creator>George J. Miao</creator>
        
        <subject>accuracy; adaptive Boosting algorithm; AUC; classifier; classification error; coronary heart disease; diagnosis; ensemble learning; F-score; K-S measure; machine learning; precision; prediction; recall; ROC; sensitivity; specificity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Globally, heart disease is the leading cause of death for both men and women. One in every four people is afflicted with and dies of heart disease. Early and accurate diagnoses of heart disease thus are crucial in improving the chances of long-term survival for patients and saving millions of lives. In this research, an advanced ensemble machine learning technology, utilizing an adaptive Boosting algorithm, is developed for accurate coronary heart disease diagnosis and outcome predictions. The developed ensemble learning classification and prediction models were applied to 4 different data sets for coronary heart disease diagnosis, including patients diagnosed with heart disease from Cleveland Clinic Foundation (CCF), Hungarian Institute of Cardiology (HIC), Long Beach Medical Center (LBMC), and Switzerland University Hospital (SUH). The testing results showed that the developed ensemble learning classification and prediction models achieved model accuracies of 80.14% for CCF, 89.12% for HIC, 77.78% for LBMC, and 96.72% for SUH, exceeding the accuracies of previously published research. Therefore, coronary heart disease diagnoses derived from the developed ensemble learning classification and prediction models are reliable and clinically useful, and can aid patients globally, especially those from developing countries and areas where there are few heart disease diagnostic specialists.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_4-Diagnosing_Coronary_Heart_Disease_using_Ensemble_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pentaho and Jaspersoft: A Comparative Study of Business Intelligence Open Source Tools Processing Big Data to Evaluate Performances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071003</link>
        <id>10.14569/IJACSA.2016.071003</id>
        <doi>10.14569/IJACSA.2016.071003</doi>
        <lastModDate>2016-11-01T13:23:02.7300000+00:00</lastModDate>
        
        <creator>Victor M. Parra</creator>
        
        <creator>Ali Syed</creator>
        
        <creator>Azeem Mohammad</creator>
        
        <creator>Malka N. Halgamuge</creator>
        
        <subject>Big Data; BI; Business Intelligence; CAS; Computer Algebra System; ETL; Data Mining; OLAP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Despite the recent growth in the use of “Big Data” and “Business Intelligence” (BI) tools, little research has been undertaken on the implications involved. Analytical tools affect the development and sustainability of a company, since evaluating client needs in order to advance in a competitive market is critical. With the growth of the population, processing large amounts of data has become too cumbersome for companies. At some stage in its lifecycle, every company needs to create new and better data processing systems that improve its decision-making processes. Companies use BI results to collect data drawn from interpretations grouped from cues in the data set; a BI information system helps organisations with activities that give them an advantage in a competitive market. However, many organizations establish such systems without conducting a preliminary analysis of the needs and wants of the company, or without determining the benefits and targets that they aim to achieve with the implementation. They rarely measure the large costs associated with the implementation of such applications, which results in impulsive solutions that are unfinished or too complex and unfeasible; in other words, unsustainable even if implemented. BI open source tools are specific tools that solve this issue for organizations in need of data storage and management. This paper compares two of the best-positioned open source BI tools in the market, Pentaho and Jaspersoft, processing big data through six different-sized databases, focusing especially on their Extract, Transform and Load (ETL) and reporting processes and measuring their performance using Computer Algebra Systems (CAS). The ETL experimental analysis results clearly show that Jaspersoft BI requires more CPU time for data processing than Pentaho BI, by an average of 42.28% in the performance metrics over the six databases. Meanwhile, Pentaho BI showed a marked increase in CPU time over Jaspersoft in the reporting analysis, with an average of 43.12% over the six databases. This study is a guiding reference for researchers and IT professionals who support Big Data processing and the implementation of an open source BI tool based on their needs.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_3-Pentaho_and_Jaspersoft_A_Comparative_Study_of_Business_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterizing the 2016 U.S. Presidential Campaign using Twitter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071002</link>
        <id>10.14569/IJACSA.2016.071002</id>
        <doi>10.14569/IJACSA.2016.071002</doi>
        <lastModDate>2016-11-01T13:23:02.6970000+00:00</lastModDate>
        
        <creator>Ignasi Vegas</creator>
        
        <creator>Tina Tian</creator>
        
        <creator>Wei Xiong</creator>
        
        <subject>Twitter; social networks; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>This paper models the 2016 U.S. presidential campaign in the context of Twitter. The study analyzes the presidential candidates’ Twitter activity by crawling their real-time tweets; more than 16,000 tweets were observed in this work. We study the interactions between the politicians and their Twitter followers in the retweet and favorite networks. The most frequently mentioned unigrams are presented, as they best characterize the political focus of a candidate. The mention network among the politicians was constructed by parsing the content of their tweets. In this paper, we also study the Twitter profiles of the users who follow the presidential candidates. The gender ratio among the Twitter subscribers is examined using the government’s census data. We also investigate the geography of Twitter supporters for each candidate.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_2-Characterizing_the_2016_U_S_Presidential_Campaign_using_Twitter_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encryption Algorithms for Color Images: A Brief Review of Recent Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.071001</link>
        <id>10.14569/IJACSA.2016.071001</id>
        <doi>10.14569/IJACSA.2016.071001</doi>
        <lastModDate>2016-11-01T13:23:02.5570000+00:00</lastModDate>
        
        <creator>Anuja P Parameshwaran</creator>
        
        <creator>Wen-Zhan Song</creator>
        
        <subject>RGB; HIS; CMY; Chaotic cryptosystem; wavelets; affine transforms and visual cryptography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(10), 2016</description>
        <description>Recent years have witnessed rapid developments in the field of image encryption algorithms for secure color image processing. Image encryption algorithms have been classified in different ways in the past. This paper reviews the different image encryption algorithms developed during the period 2007-2015, highlighting their contrasting features. At the same time, a broad classification of these algorithms into (1) full encryption algorithms and (2) partial encryption algorithms, each further sub-classified with respect to domain orientation (spatial, frequency, and hybrid domains), has been attempted. Efforts have also been made to cover algorithms useful for color images of various color spaces, such as Red-Green-Blue (RGB), Hue-Saturation-Intensity (HSI), and Cyan-Magenta-Yellow (CMY). Chaotic cryptosystems, various transforms such as wavelets and affine transforms, and visual cryptography systems are discussed in detail.</description>
        <description>http://thesai.org/Downloads/Volume7No10/Paper_1-Encryption_Algorithms_for_Color_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pursuit Reinforcement Competitive Learning: PRCL based Online Clustering with Learning Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051006</link>
        <id>10.14569/IJARAI.2016.051006</id>
        <doi>10.14569/IJARAI.2016.051006</doi>
        <lastModDate>2016-10-12T13:30:14.9430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Pursuit Reinforcement Guided Competitive Learning; Reinforcement Guided Competitive Learning; Sustained Reinforcement Guided Competitive Learning Vector Quantization; Learning Automata</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>A new online clustering method based not only on reinforcement and competitive learning but also on a pursuit algorithm (Pursuit Reinforcement Competitive Learning: PRCL), as well as learning automata, is proposed for reaching a relatively stable clustering solution in a comparatively short time. UCI repository data, which are widely used for evaluation of clustering performance, are used for a comparative study among the existing conventional online clustering methods of Reinforcement Guided Competitive Learning (RGCL), Sustained RGCL (SRGCL), and Vector Quantization, and the proposed PRCL. The results show that the clustering accuracy of the proposed method is superior to that of the conventional methods. More importantly, it is found that the proposed PRCL is much faster than the conventional methods. The proposed method is then applied to an evacuation simulation study. It is found that the proposed method is much faster than the conventional method of vector quantization in finding the most appropriate evacuation route. Because the proposed PRCL method allows finding the most appropriate evacuation route, collisions among people who have to evacuate are much fewer for the proposed method than for vector quantization.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_6-Pursuit_Reinforcement_Competitive_Learning_PRCL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flowcharting the Meaning of Logic Formulas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051005</link>
        <id>10.14569/IJARAI.2016.051005</id>
        <doi>10.14569/IJARAI.2016.051005</doi>
        <lastModDate>2016-10-11T07:06:48.4870000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>knowledge representation; logic formulas; diagrammatic representation; sense</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>In logic, representation of a domain (e.g., physical reality) comprises the things its expressions (formulas) refer to and their relationships. Recent research has examined the realm of nonsymbolic representations, especially diagrams. It is claimed that, in general, diagrams have advantages over linguistic descriptions. Current diagrammatic representations of logic formulas do not completely depict their underlying semantics, and they lack a basic static structure that incorporates elementary dynamic features, creating a conceptual gap that can lead to misinterpretation. This paper demonstrates a methodology for mapping the sense of a logic formula and producing diagrams that integrate linguistic conception, truth-values, and meaning and can be used in teaching, communication, and understanding, especially with students specializing in logic representation and reasoning.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_5-Flowcharting_the_Meaning_of_Logic_Formulas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Vigor Diagnosis of Tea Trees based on Nitrogen Content in Tealeaves Relating to NDVI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051004</link>
        <id>10.14569/IJARAI.2016.051004</id>
        <doi>10.14569/IJARAI.2016.051004</doi>
        <lastModDate>2016-10-11T07:06:48.4870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Tealeaves; Nitrogen content; Amino acid; Leaf area; NDVI</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>A method for vigor diagnosis of tea trees based on nitrogen content in tealeaves relating to NDVI is proposed. In the proposed method, NIR camera images of tealeaves are used for estimation of the nitrogen content in tealeaves. The nitrogen content is highly correlated to the Theanine (amino acid) content in tealeaves, and Theanine-rich tealeaves taste good. Therefore, tealeaf quality can be estimated with NIR camera images. Also, the leaf area of tealeaves is highly correlated to the NIR reflectance of the tealeaf surface. Therefore, not only tealeaf quality but also harvest amount can be estimated with NIR camera images. Experimental results show that the proposed method works for estimation of the appropriate tealeaf harvest timing with NIR camera images.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_4-Method_for_Vigor_Diagnosis_of_Tea_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Error Analysis of Line of Sight Estimation using Purkinje Images for Eye-based Human-Computer Interaction: HCI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051003</link>
        <id>10.14569/IJARAI.2016.051003</id>
        <doi>10.14569/IJARAI.2016.051003</doi>
        <lastModDate>2016-10-11T07:06:48.4530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Computer input just by sight; Computer input by human eyes only; Purkinje image</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>Error analysis of line of sight estimation using Purkinje images for eye-based Human-Computer Interaction (HCI) is conducted. Double Purkinje images, obtained from two point light sources, are used in the proposed method for eye rotation angle estimation. The method aims to improve eyeball rotation angle accuracy by simply estimating the radius of curvature of the cornea, which eliminates the calibration time that has been a problem for previous gaze estimation techniques. As a result, an eyeball rotation angle estimation accuracy of about 0.57 deg in the horizontal direction and 0.98 deg in the vertical direction was obtained without calibration.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_3-Error_Analysis_of_Line_of_Sight_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern of Success Vs. Pattern of Failure: Adaptive Authentication Through Kolmogorov–Smirnov (K-S) Statistics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051002</link>
        <id>10.14569/IJARAI.2016.051002</id>
        <doi>10.14569/IJARAI.2016.051002</doi>
        <lastModDate>2016-10-11T07:06:48.4400000+00:00</lastModDate>
        
        <creator>Gahangir Hossain</creator>
        
        <creator>Pradeep Palaniswamy</creator>
        
        <creator>Rajab Challoo</creator>
        
        <subject>Mobile user experience; Biometrics; Smart mobile devices; Mobile identity management; Mobile authentication; Lock patterns; Mean time value</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>Smartphones have become a basic necessity in the lives of all human beings. Apart from the core functionality of communication, they have become a medium for storage of sensitive personal information, financial data, and official documents. Hence, there is an inevitable need to emphasize securing access to such devices, considering the nature of the data being stored. In addition, accessibility and authentication methods need to be secure, robust, and user-friendly. This paper discusses an adaptive authentication mechanism with a nonparametric classification approach, the Kolmogorov–Smirnov (K-S) statistic, coupled with the use of lock pattern dynamics as a secure and user-friendly two-factor authentication method. The data used for experimental exploration were collected from a systematically programmed Android device to capture the temporal parameters when individuals drew lock patterns on the touch screen. Each user has an individualistic way of drawing the pattern, which is used as the key for distinguishing impostors from valid users.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_2-Pattern_of_Success_Vs_Pattern_of_Failure_Adaptive_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval Method Utilizing Texture Information Derived from Discrete Wavelet Transformation Together with Color Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.051001</link>
        <id>10.14569/IJARAI.2016.051001</id>
        <doi>10.14569/IJARAI.2016.051001</doi>
        <lastModDate>2016-10-11T07:06:48.3130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Cahya Ragmad</creator>
        
        <subject>Wavelet; DWT; local feature; Color; Texture; CBIR</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(10), 2016</description>
        <description>An image retrieval method utilizing texture information, derived from the Discrete Wavelet Transformation (DWT), together with color information is proposed. One specific feature of the texture information extracted from portions of an image is based on the Dyadic wavelet transformation, forming the texture feature vector from the energy derived from the Gabor transform on a 7 by 7 pixel neighborhood of significant points. Using Wang's dataset, the proposed method is evaluated with retrieval success rate (precision and recall) as well as the Euclidean distance between the image in concern (query image) and the other images in the database of interest, and is compared to other methods. Through the experiments, it is found that the DWT-derived texture information is significantly more effective than the color information.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No10/Paper_1-Image_Retrieval_Method_Utilizing_Texture_Information_Derived.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Generation of Model for Building Energy Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070960</link>
        <id>10.14569/IJACSA.2016.070960</id>
        <doi>10.14569/IJACSA.2016.070960</doi>
        <lastModDate>2016-10-01T05:58:29.9670000+00:00</lastModDate>
        
        <creator>Quoc-Dung Ngo</creator>
        
        <creator>Yanis Hadj-Said</creator>
        
        <creator>Stéphane Ploix</creator>
        
        <creator>Ujjwal Maulik</creator>
        
        <subject>building energy management system, model transformation, model driven engineering, optimization, mixed integer linear programming, simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This paper proposes a model transformation approach for model-based energy management in buildings. Indeed, energy management is a large area that covers a wide range of applications such as simulation, mixed integer linear programming optimization, simulated annealing optimization, model parameter estimation, diagnostic analysis, etc. Each application requires a model, but in a specific formalism with specific additional information. Up to now, application models have been rewritten for each application. In building energy management, because the optimization problems may be dynamically generated, model transformation should be done dynamically, depending on the problem to solve. For this purpose, a model driven engineering approach combined with the use of a computer algebra system is proposed. This paper presents the core specifications of the transformation of a so-called high level pivot model into application specific models. As an example, transformations of a pivot model into both an acausal linear model for mixed integer linear programming optimization and a causal non-linear model for simulated annealing optimization are presented. These models are used for energy management of a smart building platform named Monitoring and Habitat Intelligent located at PREDIS/ENSE3 in Grenoble, France.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_60-Automatic_Generation_of_Model_for_Building_Energy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Nonlinear Eigenvalue Problems using an Improved Newton Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070959</link>
        <id>10.14569/IJACSA.2016.070959</id>
        <doi>10.14569/IJACSA.2016.070959</doi>
        <lastModDate>2016-10-01T05:58:29.9530000+00:00</lastModDate>
        
        <creator>S.A Shahzadeh Fazeli</creator>
        
        <creator>F. Rabiei</creator>
        
        <subject>nonlinear eigenvalue problems; Newton method; LU-decomposition; refined eigenvalues</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Finding approximations to the eigenvalues of nonlinear eigenvalue problems is a common problem which arises from many complex applications. In this paper, iterative algorithms for finding approximations to the eigenvalues of nonlinear eigenvalue problems are verified. These algorithms use an efficient numerical approach for calculating the first and second derivatives of the determinant of the problem. Here we present and examine a technique for solving nonlinear eigenvalue problems using the Newton method. Computational aspects of this approach for a nonlinear eigenvalue problem are analyzed. The efficiency of the algorithm is demonstrated using an example.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_59-Solving_Nonlinear_Eigenvalue_Problems_using_an_Improved_Newton_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Peak-to-Average Power Ratio Reduction based Varied Phase for MIMO-OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070958</link>
        <id>10.14569/IJACSA.2016.070958</id>
        <doi>10.14569/IJACSA.2016.070958</doi>
        <lastModDate>2016-10-01T05:58:29.9200000+00:00</lastModDate>
        
        <creator>Lahcen Amhaimar</creator>
        
        <creator>Saida Ahyoud</creator>
        
        <creator>Adel Asselman</creator>
        
        <creator>Elkhaldi Said</creator>
        
        <subject>OFDM; MIMO; PAPR; PTS; HPA; GA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>One of the severe drawbacks of orthogonal frequency division multiplexing (OFDM) is the high Peak-to-Average Power Ratio (PAPR) of transmitted OFDM signals. During modulation, the sub-carriers are added together with the same phase, which increases the value of PAPR, leading to more interference and limiting the power efficiency of the High Power Amplifier (HPA); this requires power amplifiers (PAs) with large linear operating ranges, but such PAs are difficult to design and costly to manufacture. Therefore, various methods have been proposed to reduce PAPR. As a promising scheme, partial transmit sequences (PTS) provide an effective solution for PAPR reduction of OFDM signals. In this paper, we propose a PAPR reduction method for an OFDM system with variation of phases based on PTS schemes and Solid State Power Amplifiers (SSPA) of the Saleh model in conjunction with digital predistortion (DPD), in order to improve performance in terms of PAPR and HPA linearity, and to mitigate the in-band distortion and spectrum regrowth. The simulation results show that the proposed algorithm not only reduces the PAPR significantly, but also improves the out-of-band radiation and decreases the computational complexity.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_58-Peak_to_Average_Power_Ratio_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MOSIC: Mobility-Aware Single-Hop Clustering Scheme for Vehicular Ad hoc Networks on Highways</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070957</link>
        <id>10.14569/IJACSA.2016.070957</id>
        <doi>10.14569/IJACSA.2016.070957</doi>
        <lastModDate>2016-10-01T05:58:29.9030000+00:00</lastModDate>
        
        <creator>Amin Ziagham Ahwazi</creator>
        
        <creator>MohammadReza NooriMehr</creator>
        
        <subject>Vehicular Ad hoc Networks; Mobile ad hoc Networks; Network Topology Control; Clustering Scheme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>As a new branch of Mobile ad hoc networks, Vehicular ad hoc networks (VANETs) have received significant attention in academic and industry research. Because of the highly dynamic nature of VANETs, the topology changes frequently and quickly, and this condition causes some difficulties in maintaining the topology of these kinds of networks. Clustering is one of the control mechanisms able to group vehicles into categories based upon predefined metrics such as density, geographical location, direction, and velocity of vehicles. Clustering can make the network's global topology less dynamic and improve its scalability. Many VANET clustering algorithms are taken from MANETs, and it has been shown that these algorithms are not suitable for VANETs. Hence, in this paper we propose a new clustering scheme that uses the Gauss Markov mobility (GMM) model for mobility prediction, enabling a vehicle to predict its mobility relative to its neighbors. The goal of the proposed clustering scheme is to form stable clusters by increasing the cluster head lifetime and reducing the number of cluster head changes. Simulation results show that the proposed scheme has better performance than existing clustering approaches in terms of cluster head duration, cluster member duration, cluster head change rate, and control overhead.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_57-MOSIC_Mobility_Aware_Single_Hop_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Analyzing Anycast and Geocast Routing in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070956</link>
        <id>10.14569/IJACSA.2016.070956</id>
        <doi>10.14569/IJACSA.2016.070956</doi>
        <lastModDate>2016-10-01T05:58:29.8730000+00:00</lastModDate>
        
        <creator>Fazle Hadi</creator>
        
        <creator>Sheeraz Ahmed</creator>
        
        <creator>Abid Ali Minhas</creator>
        
        <creator>Atif Naseer</creator>
        
        <subject>Mesh Network; Anycast; Geocast; Routing; Unicast</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Wireless technology has become an essential part of this era's human life and has the capability of connecting virtually to any place within the universe. A mesh network is a self-healing wireless network, built through a number of distributed and redundant nodes to support a variety of applications and provide reliability. Similarly, anycasting is an important service that might be used for a variety of applications. In this paper we have studied anycast routing in wireless mesh networks and the anycast traffic from the gateway to the mesh network having multiple anycast groups. We have also studied the geocast traffic in which the packets reach the group head via unicast traffic and are then broadcast inside the group. Moreover, we have studied the intergroup communication between different anycast groups. The review of the related literature shows that no one has considered anycasting and geocasting from the gateway to the mesh network while considering multiple anycast groups and intergroup communication. The network is modeled, simulated, and analyzed for its various parameters using the OMNET++ simulator.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_56-Modeling_and_Analyzing_Anycast_and_Geocast.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Pedestrian Detection using Optical Flow and HOG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070955</link>
        <id>10.14569/IJACSA.2016.070955</id>
        <doi>10.14569/IJACSA.2016.070955</doi>
        <lastModDate>2016-10-01T05:58:29.8430000+00:00</lastModDate>
        
        <creator>Huma Ramzan</creator>
        
        <creator>Bahjat Fatima</creator>
        
        <creator>Ahmad R. Shahid</creator>
        
        <creator>Sheikh Ziauddin</creator>
        
        <creator>Asad Ali Safi</creator>
        
        <subject>Pedestrian detection, pedestrian protection system, HOG descriptor, optical flow, motion vectors, FPPI, miss-rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Pedestrian detection is an important aspect of autonomous vehicle driving, as recognizing pedestrians helps in reducing accidents between vehicles and pedestrians. In the literature, feature-based approaches have mostly been used for pedestrian detection. Features from different body portions are extracted and analyzed for interpreting the presence or absence of a person in a particular region in front of the car. But these approaches alone are not enough to differentiate humans from non-humans in dynamic environments, where the background is continuously changing. We present an automated pedestrian detection system that finds pedestrians' motion patterns and combines them with HOG features. The proposed scheme achieved 17.7% and 14.22% average miss rates on the ETHZ and Caltech datasets, respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_55-Intelligent_Pedestrian_Detection_using_Optical_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis on Natural Image Small Patches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070954</link>
        <id>10.14569/IJACSA.2016.070954</id>
        <doi>10.14569/IJACSA.2016.070954</doi>
        <lastModDate>2016-10-01T05:58:29.8270000+00:00</lastModDate>
        
        <creator>Shengxiang Xia</creator>
        
        <creator>Wen Wang</creator>
        
        <creator>Di Liang</creator>
        
        <subject>natural image analysis; persistent homology; high-contrast patches; Klein bottle; barcode</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The method of computational homology is used to analyze natural image 8&#215;8 and 9&#215;9 patches locally. Our experimental results show that there exist subspaces of the spaces of 8&#215;8 and 9&#215;9 patches that are topologically equivalent to a circle and a Klein bottle, respectively. These results extend those of the paper "On the local behavior of spaces of natural images" to larger patches. The Klein bottle feature of natural image patches can be used in image compression.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_54-An_Analysis_on_Natural_Image_Small_Patches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anti-noise Capability Improvement of Minimum Energy Combination Method for SSVEP Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070953</link>
        <id>10.14569/IJACSA.2016.070953</id>
        <doi>10.14569/IJACSA.2016.070953</doi>
        <lastModDate>2016-10-01T05:58:29.8100000+00:00</lastModDate>
        
        <creator>Omar Trigui</creator>
        
        <creator>Wassim Zouch</creator>
        
        <creator>Mohamed Ben Messaoud </creator>
        
        <subject>Brain-Computer Interface; Steady State Visual Evoked Potential; Minimum Energy Combination; Empirical Mode Decomposition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Minimum energy combination (MEC) is a widely used method for frequency recognition in steady state visual evoked potential based BCI systems. Although it can reach acceptable performance, this method remains sensitive to noise. This paper introduces a new technique that improves the anti-noise capability of the MEC method. Empirical mode decomposition (EMD) and a moving average filter are used to separate noise from relevant signals. The results show that the proposed BCI system has higher accuracy than systems based on Canonical Correlation Analysis (CCA) or the Multivariate Synchronization Index (MSI). In fact, the system achieves an average accuracy of about 99% using real data measured from five subjects by means of the EPOC EMOTIVE headset with three visual stimuli. With four commands, the system accuracy reaches 91.78% with an information-transfer rate of about 27.18 bits/min.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_53-Anti-noise_Capability_Improvement_of_Minimum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel High Dimensional and High Speed Data Streams Algorithm: HSDStream</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070952</link>
        <id>10.14569/IJACSA.2016.070952</id>
        <doi>10.14569/IJACSA.2016.070952</doi>
        <lastModDate>2016-10-01T05:58:29.7800000+00:00</lastModDate>
        
        <creator>Irshad Ahmed</creator>
        
        <creator>Irfan Ahmed</creator>
        
        <creator>Waseem Shahzad</creator>
        
        <subject>Evolving data stream; high dimensionality; projected clustering; density-based clustering; micro-clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This paper presents a novel high speed clustering scheme for high-dimensional data streams. Data stream clustering has gained importance in different applications, for example, network monitoring, intrusion detection, and real-time sensing. High dimensional stream data is inherently more complex when used for clustering because the evolving nature of the stream data and high dimensionality make it non-trivial. In order to tackle this problem, projected subspaces within the high dimensions and limited window sized data per unit of time are used for clustering purposes. We propose a High Speed and Dimensions data stream clustering scheme (HSDStream) which employs exponential moving averages to reduce the size of the memory and speed up the processing of the projected subspace data stream. It works in three steps: i) initialization, ii) real-time maintenance of core and outlier micro-clusters, and iii) on-demand offline generation of the final clusters. The proposed algorithm is tested against high dimensional density-based projected clustering (HDDStream) for cluster purity, memory usage, and cluster sensitivity. Experimental results are obtained for the corrected KDD intrusion detection dataset. These results show that HSDStream outperforms HDDStream in all performance metrics, especially memory usage and processing speed.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_52-A_novel_high_dimensional_and_high_speed_data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Steganography System based on LSB Matching and Replacement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070951</link>
        <id>10.14569/IJACSA.2016.070951</id>
        <doi>10.14569/IJACSA.2016.070951</doi>
        <lastModDate>2016-10-01T05:58:29.7630000+00:00</lastModDate>
        
        <creator>Hazem Hiary</creator>
        
        <creator>Khair Eddin Sabri</creator>
        
        <creator>Mohammed S. Mohammed</creator>
        
        <creator>Ahlam Al-Dhamari</creator>
        
        <subject>Steganography; LSB matching; LSB replacement; Embedding capacity; Imperceptibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This paper proposes a hybrid steganographic approach using the least significant bit (LSB) technique for grayscale images. The proposed approach uses both LSB matching (LSB-M) and LSB replacement to hide the secret data in images. Using hybrid LSB techniques increases the level of security, so that attackers cannot easily, if at all, extract the secret data. The proposed approach stores two bits per pixel, and the embedding rate can reach up to 1.6 bits per pixel. The proposed approach is evaluated and subjected to various kinds of image processing attacks. Its performance is compared with two other relevant techniques: pixel-value differencing (PVD) and Complexity Based LSB-M (CBL). Experimental results indicate that the proposed algorithm outperforms PVD in terms of imperceptibility. It also significantly outperforms CBL in two main features: a higher embedding rate (ER) and greater robustness to the most common image processing attacks, such as median filtering, histogram equalization, and rotation.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_51-A_Hybrid_Steganography_System_Based_on_LSB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Information Retrieval Approach using Query Expansion and Spectral-based</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070950</link>
        <id>10.14569/IJACSA.2016.070950</id>
        <doi>10.14569/IJACSA.2016.070950</doi>
        <lastModDate>2016-10-01T05:58:29.7330000+00:00</lastModDate>
        
        <creator>Sara Alnofaie</creator>
        
        <creator>Mohammed Dahab</creator>
        
        <creator>Mahmoud Kamal </creator>
        
        <subject>Information Retrieval; Discrete Wavelet Transform; Query Expansion; Term Signal; Spectral Based Retrieval Method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Most information retrieval (IR) models rank documents by computing a score using only the lexicographical query terms or the frequency of the query terms in the document. These models are limited in that they consider neither term proximity in the document nor term mismatch. Term proximity information is an important factor in determining the relatedness of a document to a query. The ranking functions of the Spectral-Based Information Retrieval Model (SBIRM) consider query term frequency and proximity in the document by comparing the signals of the query terms in the spectral domain instead of the spatial domain, using the Discrete Wavelet Transform (DWT). Query expansion (QE) approaches are used to overcome the word-mismatch problem by adding terms to the query that are related in meaning to it. The QE approaches are divided into a statistical approach, Kullback-Leibler divergence (KLD), and a semantic approach, P-WNET, which uses WordNet. These approaches enhance performance. Based on the foregoing considerations, the objective of this research is to build an efficient QESBIRM that combines QE and proximity SBIRM by implementing SBIRM using the DWT with KLD or P-WNET. Experiments were conducted to test and evaluate QESBIRM using a Text Retrieval Conference (TREC) dataset. The results show that SBIRM with the KLD or P-WNET model outperforms the SBIRM model in precision (P@), R-precision, Geometric Mean Average Precision (GMAP) and Mean Average Precision (MAP).</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_50-A_Novel_Information_Retrieval_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Digital Signature Algorithm and Authentication Schemes for H.264 Compressed Video</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070949</link>
        <id>10.14569/IJACSA.2016.070949</id>
        <doi>10.14569/IJACSA.2016.070949</doi>
        <lastModDate>2016-10-01T05:58:29.7030000+00:00</lastModDate>
        
        <creator>Ramzi Haddaji</creator>
        
        <creator>Samia Bouaziz</creator>
        
        <creator>Raouf Ouni</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <subject>Elliptic curve cryptography; H.264; DSA (Digital signature algorithm); ECDSA (Elliptic Curve Digital Signature Algorithm); Implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In this paper, we present the advantages of elliptic curve cryptography for implementations of the electronic signature algorithm “elliptic curve digital signature algorithm, ECDSA”, compared with “the digital signature algorithm, DSA”, for the signing and authentication of H.264 compressed videos. We also compare the strength and add-time of these algorithms on a database containing several video sequences.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_49-Comparison_of_Digital_Signature_Algorithm_and_Authentication_Schemes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dependency Test: Portraying Pearson&#39;s Correlation Coefficient Targeting Activities in Project Scheduling </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070948</link>
        <id>10.14569/IJACSA.2016.070948</id>
        <doi>10.14569/IJACSA.2016.070948</doi>
        <lastModDate>2016-10-01T05:58:29.6700000+00:00</lastModDate>
        
        <creator>Jana Shafi</creator>
        
        <creator>Amtul Waheed</creator>
        
        <creator>Sumaya Sanober</creator>
        
        <subject>TVS; Transparent; Dependency; PCC; Activity; Resource; Schedule; Project Introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In this paper, we discuss project scheduling with conflicting activity resources. Several project activities require the same resources but may be scheduled with a certain lapse of time, resulting in the same kind of resources being used repeatedly to execute dissimilar activities. Due to the frequent use of the same resources multiple times, expenditure increases and the project duration extends. The problem is to find the activities that develop implicit relations among them. We propose a solution by introducing the TVS (Transparent View of Scheduling) model. First, we analyze and enlist activities according to required resources and categorize them; then we segregate dependent and independent activities by indicating a value. A dependency test is performed on the activities using Pearson&#39;s Correlation Coefficient (PCC) to calculate the rate of relations among the ordered activities for similar resources. Using this model, we can reschedule activities to avoid confusion and disordering of resources without wasting time and capital.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_48-Dependency_Test_Portraying_Pearsons_Correlation_Coefficient_Targeting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Things based Expert System for Smart Agriculture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070947</link>
        <id>10.14569/IJACSA.2016.070947</id>
        <doi>10.14569/IJACSA.2016.070947</doi>
        <lastModDate>2016-10-01T05:58:29.6570000+00:00</lastModDate>
        
        <creator>Raheela Shahzadi</creator>
        
        <creator>Javed Ferzund</creator>
        
        <creator>Muhammad Tausif</creator>
        
        <creator>Muhammad Asif Suryani</creator>
        
        <subject>Internet of Things; Smart Agriculture; Cotton; Plant Diseases; Wireless Sensor Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The agriculture sector is evolving with the advent of information and communication technology. Efforts are being made to enhance productivity and reduce losses by using state-of-the-art technology and equipment. As most farmers are unaware of the technology and latest practices, many expert systems have been developed around the world to facilitate them. However, these expert systems rely on a stored knowledge base. We propose an expert system based on the Internet of Things (IoT) that uses input data collected in real time. It will help to take proactive and preventive actions to minimize the losses caused by diseases and insects/pests.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_47-Internet_of_Things_based_Expert_System_for_Smart_Agriculture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fitness Proportionate Random Vector Selection based DE Algorithm (FPRVDE)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070946</link>
        <id>10.14569/IJACSA.2016.070946</id>
        <doi>10.14569/IJACSA.2016.070946</doi>
        <lastModDate>2016-10-01T05:58:29.6230000+00:00</lastModDate>
        
        <creator>Qamar Abbas</creator>
        
        <creator>Jamil Ahmad</creator>
        
        <creator>Hajira Jabeen</creator>
        
        <subject>Differential Evolution; Fitness Proportion; Trial vector generation; Mutation; Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Differential Evolution (DE) is a simple, powerful and easy-to-use global optimization algorithm that has been studied in detail by many researchers in past years. In the DE algorithm, trial vector generation strategies have a significant influence on performance. This research studies whether the performance of the DE algorithm can be improved by incorporating selection advancement into effective trial vector generation strategies. A novel advancement in DE trial vector generation strategies is proposed to speed up the convergence of the DE algorithm. The proposed fitness proportionate random vector selection DE (FPRVDE) is based on the proportion of individual fitness. FPRVDE reduces the role of poorly performing individuals to enhance the performance of the DE algorithm. To form a trial vector using FPRVDE, individuals are selected based on the proportion of their fitness. The FPRVDE mechanism is applied to the most commonly used set of DE variants. A comprehensive set of multidimensional function optimization problems is used to assess the performance of FPRVDE. Experimental results show that the proposed approach accelerates the DE algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_46-Fitness_Proportionate_Random_Vector_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Example-based Super-Resolution Algorithm for Multi-Spectral Remote Sensing Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070945</link>
        <id>10.14569/IJACSA.2016.070945</id>
        <doi>10.14569/IJACSA.2016.070945</doi>
        <lastModDate>2016-10-01T05:58:29.5930000+00:00</lastModDate>
        
        <creator>W. Jino Hans</creator>
        
        <creator>Lysiya Merlin.S</creator>
        
        <creator>Venkateswaran N</creator>
        
        <creator>Divya Priya T</creator>
        
        <subject>Remote sensing Super-resolution; Image-pair analysis; Regression operators</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This paper proposes an example-based super-resolution algorithm for multi-spectral remote sensing images. The underlying idea of this algorithm is to learn a matrix-based implicit prior from a set of high-resolution training examples to model the relation between low-resolution (LR) and high-resolution (HR) images. The matrix-based implicit prior is learned as a regression operator using the conjugate descent method. The direct relation between LR and HR images is obtained from the regression operator and is used to super-resolve low-resolution multi-spectral remote sensing images. A detailed performance evaluation is carried out to validate the strength of the proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_45-An_Example_based_Super_Resolution_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Face Classification using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070944</link>
        <id>10.14569/IJACSA.2016.070944</id>
        <doi>10.14569/IJACSA.2016.070944</doi>
        <lastModDate>2016-10-01T05:58:29.5770000+00:00</lastModDate>
        
        <creator>Tania Akter Setu</creator>
        
        <creator>Md. Mijanur Rahman</creator>
        
        <subject>Face Detection; Facial Feature Extraction; Genetic Algorithm; Neural Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The paper presents a precise scheme for the development of a human face classification system based on human emotion using the genetic algorithm (GA). The main focus is to detect the human face and its facial features and to classify the face based on emotion, rather than face recognition. This research proposes a combined genetic algorithm and neural network (GANN) classification approach. There are two ways of combining genetic algorithms and neural networks: the supportive approach and the collaborative approach. This research adopts the supportive approach to develop an emotion-based classification system. The proposed system receives a frontal face image of a human as the input pattern and detects the face and its facial feature regions, such as the mouth (or lips), nose, and eyes. Analysis of the human face shows that most emotional changes occur in the eyes and lips. Therefore, these two facial feature regions have been used for emotion-based classification. The GA is used to optimize the facial features, and finally the neural network classifies them. To justify the effectiveness of the system, several images were tested. The achievement of this research is a high accuracy rate (about 96.42%) for emotion-based classification of human frontal faces.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_44-Human_Face_Classification_using_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Evolution based SHEPWM for Seven-Level Inverter with Non-Equal DC Sources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070943</link>
        <id>10.14569/IJACSA.2016.070943</id>
        <doi>10.14569/IJACSA.2016.070943</doi>
        <lastModDate>2016-10-01T05:58:29.5470000+00:00</lastModDate>
        
        <creator>Fay&#231;al CHABNI</creator>
        
        <creator>Rachid TALEB</creator>
        
        <creator>M’hamed HELAIMI</creator>
        
        <subject>selective harmonic elimination; multi-level inverters; differential evolution; cascade H-bridge inverters; optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This paper presents the application of the differential evolution algorithm to obtain optimal switching angles for a single-phase seven-level inverter in order to improve AC voltage quality. The proposed inverter is composed of two H-bridge cells with non-equal DC voltage sources in order to generate multiple voltage levels. The selective harmonic elimination pulse width modulation (SHEPWM) strategy is used to improve the AC output voltage waveform generated by the proposed inverter. The differential evolution (DE) optimization algorithm is used to solve the non-linear transcendental equations required for SHEPWM. Computational results obtained from computer simulations showed good agreement with the theoretical predictions. A laboratory prototype based on an STM32F407 microcontroller was built to validate the simulation results. The experimental results show the effectiveness of the proposed modulation method.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_43-Differential_Evolution_based_SHEPWM_for_Seven_Level_Inverter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Compensation Network in a Correlated-based Channel using Angle of Arrivals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070942</link>
        <id>10.14569/IJACSA.2016.070942</id>
        <doi>10.14569/IJACSA.2016.070942</doi>
        <lastModDate>2016-10-01T05:58:29.5300000+00:00</lastModDate>
        
        <creator>Affum Emmanuel Ampoma</creator>
        
        <creator>Paul Oswald Kwasi Anane</creator>
        
        <creator>Obour Agyekum Kwame O.-B</creator>
        
        <creator>Maxwell Oppong Afriyie</creator>
        
        <subject>Angle of arrival (AoA); channel correlation; decoupling network; mutual coupling; MIMO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>We explore the combined effect of spatial correlation and the mutual coupling matrix, and its subsequent effects on the performance of multiple-input multiple-output (MIMO) systems after the decoupling process. We also consider a correlation-based stochastic channel model with linear antenna arrays as the signal source. For the purpose of understanding, it is assumed that fading is correlated at both the transmitter and receiver sides, even though the decoupling network enhances isolation within the receiving antenna array. In this paper, we model the transmit antenna array in CST Microwave Studio as a uniform linear array with monopoles as antenna elements. On the receiving side, the scattering parameters of the coupled and decoupled monopole arrays are measured in an anechoic chamber. The theoretical analysis and simulation results show the joint dependency of the system capacity on the angle of arrival (AoA) and antenna element spacing, with enhanced system performance at reduced AoAs and increased antenna element separation. Consequently, essential benefits in MIMO system performance can be achieved with an efficient decoupling network while boosting the signal sources by adding further antenna elements.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_42-Analysis_of_Compensation_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of Image Enhancement in Citrus Canker Disease Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070941</link>
        <id>10.14569/IJACSA.2016.070941</id>
        <doi>10.14569/IJACSA.2016.070941</doi>
        <lastModDate>2016-10-01T05:58:29.5000000+00:00</lastModDate>
        
        <creator>K. Padmavathi</creator>
        
        <creator>K. Thangadurai</creator>
        
        <subject>Lemon tree; Citrus Canker; Recursively Separated Weighted Histogram Equalization; Median Filter; Image Enhancement; Disease detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Digital image processing is employed in numerous areas of biology to identify and analyse problems. This approach aims to use image processing techniques for citrus canker disease detection through leaf inspection. Citrus canker is a severe bacterial disease of citrus plants. The symptoms of citrus canker typically occur on the leaves, branches, fruits and thorns. Leaf images show the health status of the plant and facilitate the observation and detection of the disease level at an early stage. Leaf image analysis is an essential step in the detection of numerous plant diseases.
The proposed approach consists of two stages to improve the clarity and quality of leaf images. The primary stage uses Recursively Separated Weighted Histogram Equalization (RSWHE), which improves the contrast level. The second stage removes unwanted noise using a median filter. The proposed approach uses these methods to improve the clarity of the images and applies them to lemon citrus canker disease detection.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_41-The_Role_of_Image_Enhancement_in_Citrus_Canker_Disease_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Prediction System for Hydrate Formation in Gas Pipelines using Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070940</link>
        <id>10.14569/IJACSA.2016.070940</id>
        <doi>10.14569/IJACSA.2016.070940</doi>
        <lastModDate>2016-10-01T05:58:29.4830000+00:00</lastModDate>
        
        <creator>Ahmed Raed Moukhtar</creator>
        
        <creator>Alaa M. Hamdy</creator>
        
        <creator>Sameh A. Salem</creator>
        
        <subject>WSN; Sensing Node; K-Factor; ANN; Link Quality Indicator; Hydrate Formation Temperature; Received Signal Strength Indicator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Before the evolution of Wireless Sensor Network (WSN) technology, many production wells in the oil and gas industry suffered from the gas hydrate formation process, as most of them are located remotely from the host location. By taking advantage of WSN technology, it is now possible to monitor and predict the critical conditions at which hydrates will form by using a computerized model. In fact, most of the developed models are based on two well-known hand calculation methods: the specific gravity and K-Factor methods. The proposed work is divided into two phases. In the first, three prediction models are developed using the artificial neural network (ANN) algorithm, based on the specific gravity charts, the K-Factor method, and the production rates of the flowing gas mixture in the process pipelines. In the second phase, two WSN prototype models are designed and implemented using National Instruments WSN hardware devices. Power analysis is carried out on the designed prototypes, and regression models are developed to relate the consumed current of the sensing nodes (SN), the node-to-gateway distance, and the operating link quality. The controller of the prototypes is interfaced with a GSM module and connected to a web server to be monitored via mobile and Internet networks.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_40-Design of a Prediction System for Hydrate Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Learning for Secondary and Higher Education Sectors: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070939</link>
        <id>10.14569/IJACSA.2016.070939</id>
        <doi>10.14569/IJACSA.2016.070939</doi>
        <lastModDate>2016-10-01T05:58:29.4530000+00:00</lastModDate>
        
        <creator>Sadia Ashraf</creator>
        
        <creator>Tamim Ahmed Khan</creator>
        
        <creator>Inayat ur Rehman</creator>
        
        <subject>Distributed learning environments; elementary education; improving classroom teaching; intelligent tutoring systems; interactive learning environments; media in education; post-secondary education; secondary education; simulations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Electronic learning (e-learning) has gained reasonable acceptance from educational institutions at all levels. Various studies have been conducted by researchers considering different aspects of e-learning to investigate how we can benefit from it in imparting quality education. However, there is a need to find out how researchers treat the secondary and higher education (HE) sectors. In this paper, we carefully select published research articles from the past six years and study how the research was conducted and which research methods were applied to attain results. We also investigate how case studies are presented for evaluating results. We finally present our findings from this study of e-learning research at the secondary and higher education levels.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_39-E-Learning_for_Secondary_and _Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting CO2 Emissions from Farm Inputs in Wheat Production using Artificial Neural Networks and Linear Regression Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070938</link>
        <id>10.14569/IJACSA.2016.070938</id>
        <doi>10.14569/IJACSA.2016.070938</doi>
        <lastModDate>2016-10-01T05:58:29.4200000+00:00</lastModDate>
        
        <creator>Majeed Safa</creator>
        
        <creator>Mohammadali Nejat</creator>
        
        <creator>Peter Nuthall</creator>
        
        <creator>Bruce Greig</creator>
        
        <subject>Artificial neural networks; modelling; CO2 emissions; wheat cultivation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Two models have been developed for simulating CO2 emissions from wheat farms: (1) an artificial neural network (ANN) model; and (2) a multiple linear regression (MLR) model. Data were collected from 40 wheat farms in the Canterbury region of New Zealand. Investigation of more than 140 factors enabled the selection of eight factors to be employed as the independent variables for the final ANN model. The results showed the final ANN can forecast CO2 emissions from wheat production areas under different conditions (proportion of wheat-cultivated land on the farm, number of irrigation applications and number of cows), the condition of machinery (tractor power index (hp/ha) and age of fertilizer spreader) and N, P and insecticide inputs on the farms, with an accuracy of &#177;11% (&#177;113 kg CO2/ha). The total CO2 emissions from farm inputs were estimated as 1032 kg CO2/ha for wheat production. On average, fertilizer use (52%) and fuel use (around 20%) account for the largest shares of CO2 emissions in wheat cultivation. The results confirmed that the ANN model forecasts CO2 emissions much better than the MLR model.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_38-Predicting_CO2_Emissions_from_Farm_Inputs_in_Wheat_Production.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Vision System for Quality Inspection of Pine Nuts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070937</link>
        <id>10.14569/IJACSA.2016.070937</id>
        <doi>10.14569/IJACSA.2016.070937</doi>
        <lastModDate>2016-10-01T05:58:29.4070000+00:00</lastModDate>
        
        <creator>Ikramullah Khosa</creator>
        
        <creator>Eros Pasero</creator>
        
        <subject>pine nuts; Image processing; neural networks; feature extraction; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Computers and artificial intelligence have penetrated the food industry over the last decade, for intelligent automatic processing and packaging in general, and for assisting in quality inspection of the food itself in particular. The food quality assessment task becomes more challenging when it involves harmless internal examination of the ingredient, and even more so when its size is minute. In this article, a method for the automatic detection, extraction and classification of a raw food item is presented using x-ray image data of pine nuts. Image processing techniques are employed to develop an efficient method for the automatic detection and extraction of each individual ingredient from the source x-ray image, which comprises a bunch of nuts in a single frame. For data representation, statistical texture analysis is carried out and attributes are calculated from each sample image at the global level as features. In addition, co-occurrence matrices are computed from the images with four different offsets, and further features are extracted from them. To find fewer meaningful characteristics, all the calculated features are organized into several combinations and then tested. Seventy percent of the image data is used for training and 15% each for cross-validation and testing. Binary classification is performed using two state-of-the-art non-linear classifiers: Artificial Neural Network (ANN) and Support Vector Machine (SVM). Performance is evaluated in terms of classification accuracy, specificity and sensitivity. The ANN classifier showed 87.6% accuracy, with correct recognition rates of 94% for healthy nuts and 62% for unhealthy nuts. The SVM classifier produced similar accuracy, achieving 86.3% specificity and an 89.2% sensitivity rate. The results obtained are unique for this ingredient and relatively promising. It is also found that the feature set size can be reduced by up to 57% at the cost of 3.5% accuracy, in combination with either of the tested classifiers.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_37-A_Machine_Vision_System_for_Quality_Inspection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Between Transition from IPv4 and IPv6 Adaption: The Case of Jordanian Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070936</link>
        <id>10.14569/IJACSA.2016.070936</id>
        <doi>10.14569/IJACSA.2016.070936</doi>
        <lastModDate>2016-10-01T05:58:29.3730000+00:00</lastModDate>
        
        <creator>Iman Akour</creator>
        
        <subject>IP networks; IPv6 protocol; IPv6 road map; ipv6 transition; IPv6 adoption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>IPv6 is the new replacement for its predecessor IPv4; it has been used by most Internet services and adopted by most internet architectures these days. The existing IPv4 protocol reveals critical issues such as the approaching exhaustion of its address space, while the continuous growth of the internet and rising new technologies lead to increasing configuration complexity, etc. To address the limitations of IPv4, the Internet Engineering Task Force (IETF) developed the next-generation IP, called IPv6. Jordan, like many other countries, is endeavoring to adapt and transition from IPv4 to IPv6 in an efficient way that will provide an excellent level of service as desired by its citizens. In this study, the author surveys the IPv6 concept from the literature and reviews the thoughts, steps, and challenges that the Jordanian government pursued in transitioning from IPv4 to IPv6.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_36-Between_Transition_from_IPv4_and_IPv6_Adaption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Bilingual Model for Right to Left Language Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070935</link>
        <id>10.14569/IJACSA.2016.070935</id>
        <doi>10.14569/IJACSA.2016.070935</doi>
        <lastModDate>2016-10-01T05:58:29.3430000+00:00</lastModDate>
        
        <creator>Farhan M Al Obisat</creator>
        
        <creator>Zaid T Alhalhouli</creator>
        
        <creator>Hazim S. AlRawashdeh</creator>
        
        <subject>software; testing; languages; right to left; development; application; bilingual; social media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Using right to left languages (RLL) in software programming requires switching the direction of many components in the interface. Preserving the original interface layout and only changing the language may result in different semantics or interpretations of the content. However, this aspect is often dismissed in the field. This research, therefore, proposes a Bilingual Model (BL) to check and correct the directions in social media applications. Moreover, test-driven development (TDD) for RLL, such as Arabic, is considered in the testing methodologies. Similarly, the bilingual analysis has to follow both the TDD and BL models.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_35-Proposed_Bilingual_Model_for_Right_to_Left_Language_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Trajectory and Location for Mobile Sound Source</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070934</link>
        <id>10.14569/IJACSA.2016.070934</id>
        <doi>10.14569/IJACSA.2016.070934</doi>
        <lastModDate>2016-10-01T05:58:29.3270000+00:00</lastModDate>
        
        <creator>Mehmet Cem Catalbas</creator>
        
        <creator>Merve Yildirim</creator>
        
        <creator>Arif Gulten</creator>
        
        <creator>Hasan Kurum</creator>
        
        <creator>Simon Dobrišek</creator>
        
        <subject>Sound processing; sound source localization; azimuth angle estimation; generalized cross-correlation; interaural time difference; interaural level difference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In this paper, we present an approach to estimate a mobile sound source trajectory. An artificially created sound source signal is used in this work. The main aim of this paper is to estimate the mobile object trajectory via sound processing methods. The performance of generalized cross-correlation techniques is compared with that of noise reduction filters for the success of trajectory estimation. The azimuth angle between the sound source and receiver is calculated throughout the whole movement. The Interaural Time Difference (ITD) parameter is utilized for determining the azimuth angle. The success of the estimated delay is compared across different types of Generalized Cross Correlation (GCC) algorithms. In this study, an approach for sound localization and trajectory estimation in 2D space is proposed. In addition, different types of pre-filter methods are tried for removing noise and smoothing the recorded sound signals. Some basic parameters of the sound localization process are also explained. Moreover, the calculation error of the average azimuth angle is compared across the different GCC and pre-filter methods. To conclude, it is observed that estimation of the location and trajectory information of a mobile object from a stereo sound recording is realized successfully.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_34-Estimation_of_Trajectory_and_Location_for_Mobile_Sound_Source.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security and Privacy Issues in Ehealthcare Systems: Towards Trusted Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070933</link>
        <id>10.14569/IJACSA.2016.070933</id>
        <doi>10.14569/IJACSA.2016.070933</doi>
        <lastModDate>2016-10-01T05:58:29.3130000+00:00</lastModDate>
        
        <creator>Isra’a Ahmed Zriqat</creator>
        
        <creator>Ahmad Mousa Altamimi</creator>
        
        <subject>Electronic health records; Systematic review; Privacy; Security regulations; Interoperability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Recent years have witnessed a widespread availability of electronic health record (EHR) systems. Vast amounts of health data are generated in the process of treatment in medical centers such as hospitals, clinics, or other institutions. To improve the quality of healthcare service, EHRs could potentially be shared by a variety of users. This results in significant privacy issues that should be addressed to make the use of EHRs practical. In fact, despite recent research in designing standards and regulatory directives concerning security and privacy in EHR systems, the privacy challenges are still not completely settled. In this paper, a systematic literature review was conducted concerning the privacy issues in electronic healthcare systems. More than 50 original articles were selected to study the existing security approaches and identify the security models used. Also, a novel Context-aware Access Control Security Model (CARE) is proposed to capture the scenario of data interoperability and support the security fundamentals of healthcare systems along with the capability of providing fine-grained access control.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_33-Security_and_Privacy_Issues_In_eHealthcare_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Wireless Sensor Network Security using Artificial Neural Network based Trust Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070932</link>
        <id>10.14569/IJACSA.2016.070932</id>
        <doi>10.14569/IJACSA.2016.070932</doi>
        <lastModDate>2016-10-01T05:58:29.2800000+00:00</lastModDate>
        
        <creator>Adwan Yasin</creator>
        
        <creator>Kefaya Sabaneh</creator>
        
        <subject>Wireless sensor network; security; Artificial neural network; trust rate; malicious node; trust model; threat</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Wireless sensor network (WSN) is widely used in environmental conditions where the systems depend on a sensing and monitoring approach. A water pollution monitoring system depends on a network of wireless sensing nodes which communicate together in a specific topological order. The nodes are distributed in a harsh environment to detect the polluted zones within the WSN range based on the sensed data. A WSN is exposed to several malicious attacks as a consequence of its presence in such an open environment, so additional techniques are needed alongside the existing cryptography approaches. In this paper, an enhanced trust model based on the use of a radial basis artificial neural network (RBANN) is presented to predict the future behavior of each node based on its weighted direct and indirect behaviors, in order to provide a comprehensive trust model that helps to detect and eliminate malicious nodes within the WSN. The proposed model considers the limited power, storage, and processing capabilities of the system.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_32-Enhancing_Wireless_Sensor_Network_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fingerprint Gender Classification using Univariate Decision Tree (J48)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070931</link>
        <id>10.14569/IJACSA.2016.070931</id>
        <doi>10.14569/IJACSA.2016.070931</doi>
        <lastModDate>2016-10-01T05:58:29.2670000+00:00</lastModDate>
        
        <creator>S. F. Abdullah</creator>
        
        <creator>A.F.N.A. Rahman</creator>
        
        <creator>Z.A. Abas</creator>
        
        <creator>W.H.M. Saad</creator>
        
        <subject>fingerprint; gender classification; global features; Univariate Decision Tree; J48</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Data mining is the process of analyzing data from different categories. This data provides information, and data mining extracts new knowledge from it, creating new useful information. Decision tree learning is a method commonly used in data mining. A decision tree is a decision model that looks like a tree-like graph with nodes, branches, and leaves. Each internal node denotes a test on an attribute and each branch represents the outcome of the test. The leaf node, which is the last node, holds a class label. A decision tree classifies the instances and helps in making predictions from the data used. This study focused on the J48 algorithm for classifying gender using fingerprint features. Four types of fingerprint features are used in this study: Ridge Count (RC), Ridge Density (RD), Ridge Thickness to Valley Thickness Ratio (RTVTR), and White Lines Count (WLC). Different cases were defined and executed with the J48 algorithm, and a comparison of the knowledge gained from each test is shown. All experiments were run using Weka, and the result achieved a 96.28% classification rate.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_31-Fingerprint_Gender_Classification_Using_Univariate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Light Weight Service Oriented Architecture for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070930</link>
        <id>10.14569/IJACSA.2016.070930</id>
        <doi>10.14569/IJACSA.2016.070930</doi>
        <lastModDate>2016-10-01T05:58:29.2670000+00:00</lastModDate>
        
        <creator>Omar Aldabbas</creator>
        
        <subject>Internet of Things; wireless sensor networks; sensing services; information extraction; data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Internet of Things (IoT) is a ubiquitous embedded ecosystem known for its capability to perform common application functions through coordinating resources distributed on-object or on-network domains. As new applications evolve, the challenge is in the analysis and implementation of multimodal data streamed by diverse kinds of sensors. This paper presents a new service-centric approach for data collection and retrieval, considering objects as highly decentralized, composite and cost-sufficient services. Such services are constructed from objects located within close geographical proximity to retrieve spatiotemporal events from the gathered sensor data.  To achieve this, we advocate coordination languages and models to fuse multimodal, heterogeneous services through interfacing with every service to accomplish the network objective according to the data they gather and analyze.  In this paper we give an application scenario that illustrates the implementation of the coordination models to provision successful collaboration among IoT objects to retrieve information. The proposed solution reduced the communication delay before service composition by up to 43% and improved the target detection accuracy by up to 70% while maintaining energy consumption 20% lower than its best rivals in the literature.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_30-A_Light_Weight_Service_Oriented_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Dynamic Virtual Machine Consolidation in Cloud Computing Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070929</link>
        <id>10.14569/IJACSA.2016.070929</id>
        <doi>10.14569/IJACSA.2016.070929</doi>
        <lastModDate>2016-10-01T05:58:29.2330000+00:00</lastModDate>
        
        <creator>Alireza Najari</creator>
        
        <creator>Seyed EnayatOllah Alavi</creator>
        
        <creator>Mohammad Reza Noorimehr</creator>
        
        <subject>Cloud Computing; Dynamic Consolidation; Energy Consumption; Virtualization; Service Level Agreement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The present study addresses the problem of dynamic virtual machine (VM) consolidation using virtualization, live migration of VMs from underloaded and overloaded hosts, and switching idle nodes to sleep mode, as a very effective approach for utilizing resources and achieving energy-efficient cloud computing data centres. The challenge in the present study is to reduce energy consumption while guaranteeing the Service Level Agreement (SLA) at its highest level. The proposed algorithm predicts CPU utilization in the near future using a Time-Series method together with the Simple Exponential Smoothing (SES) technique, and takes appropriate action based on the current and predicted CPU utilization and a comparison of their values with dynamic upper and lower thresholds. The four phases of this algorithm are: identification of overloaded hosts, identification of underloaded hosts, selection of VMs for migration, and identification of appropriate hosts as the migration destination. The study proposes solutions along with dynamic upper and lower thresholds for the first two phases. By comparing current and predicted CPU utilizations with these thresholds, overloaded and underloaded hosts are accurately identified, so that migration happens only from hosts which are overloaded or underloaded both currently and in the near future. The authors used the Maximum Correlation (MC) VM selection policy in the third phase, and in phase four ensured that hosts with moderate loads, i.e. hosts that are neither overloaded, liable to overloading, nor underloaded, are selected as the migration destination. The simulation results from the Clouds framework demonstrate average reductions of 83.25, 25.23, and 61.1 percent in the number of VM migrations, energy consumption, and SLA violations (SLAV), respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_29-Optimization_of_Dynamic_Virtual_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Variability of Acoustic Features of Hypernasality and it’s Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070928</link>
        <id>10.14569/IJACSA.2016.070928</id>
        <doi>10.14569/IJACSA.2016.070928</doi>
        <lastModDate>2016-10-01T05:58:29.2030000+00:00</lastModDate>
        
        <creator>Shahina Haque</creator>
        
        <creator>Md. Hanif Ali</creator>
        
        <creator>A.K.M. Fazlul Haque</creator>
        
        <subject>Speech analysis; Acoustic feature; Hypernasality; Cleft palate; Velopharyngeal opening; Vowel space area; Read speech</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Hypernasality (HP) is observed across voiced phonemes uttered by Cleft-Palate (CP) speakers with a defective velopharyngeal (VP) opening. HP assessment using signal processing techniques is challenging due to the variability of acoustic features across conditions such as speakers, speaking style, speaking rate, severity of HP, etc. Most studies of hypernasality (HP) assessment are based on isolated sustained vowels under laboratory conditions. We measure the variability of acoustic features and detect HP using the vowels /i/, /a/, and /u/ in continuous read speech with gradually increasing severity of HP in CP speakers. The linear predictive coding (LPC) method is used for acoustic feature extraction. In the first part of our study, we observe the variation in acoustic parameters within and across vowel categories with gradually increasing HP. We observe that the inter-speaker variability in spectral features among CP subjects is 0.96 for vowel /i/, 1.13 for /a/, and 2.05 for /u/. The inter-speaker variability measurement suggests that the high back vowel /u/ is most affected and has the highest variability, while the high front vowel /i/ is least affected and has the lowest variability with HP. In the second part, the ratio of the vowel space area (VSA) of hypernasal and normal speech is calculated and used as a measure for HP detection. We observe that the VSA spanned by CP subjects is 0.65 times smaller than the VSA of isolated uttered Bangla nasals and 0.43 times smaller than the VSA of read-speech English orals.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_28-Variability_of_Acoustic_Features_of_Hypernasality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic Approach for Mathematical Expression Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070927</link>
        <id>10.14569/IJACSA.2016.070927</id>
        <doi>10.14569/IJACSA.2016.070927</doi>
        <lastModDate>2016-10-01T05:58:29.1870000+00:00</lastModDate>
        
        <creator>Zahra Asebriy</creator>
        
        <creator>Soulaimane Kaloun</creator>
        
        <creator>Said Raghay</creator>
        
        <creator>Omar Bencharef</creator>
        
        <subject>Mathematical expression; Retrieval information; MathML; Semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Math search, or mathematical expression retrieval, has become a challenging task. Mathematical expressions are very complex: they are highly symbolic, and they have a semantic meaning that must be respected. In this paper, we propose a similarity search method for mathematical expressions based on a multilevel representation of expressions and a multilevel search. We used K-Nearest Neighbors with three types of distances to evaluate relevance between expressions. At the experimental level, the proposed system significantly outperforms statistical algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_27-A_Semantic_Approach_for_Mathematical_Expression_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Image Watermarking using Fractional Sinc Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070926</link>
        <id>10.14569/IJACSA.2016.070926</id>
        <doi>10.14569/IJACSA.2016.070926</doi>
        <lastModDate>2016-10-01T05:58:29.1570000+00:00</lastModDate>
        
        <creator>Almas Abbasi</creator>
        
        <creator>Chaw Seng Woo</creator>
        
        <subject>Fractional Calculus; fractional Sinc; image Watermarking; robust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The increased utilization of the internet for sharing and dissemination of digital data makes it very difficult to maintain copyright and ownership of data. Digital watermarking offers a method for authentication and copyright protection, and digital image watermarking is an important technique for multimedia content authentication and copyright protection. This paper presents a watermarking algorithm that balances imperceptibility and robustness based on fractional calculus; a domain is constructed using the fractional Sinc function (FSc). The FSc models the signal as a polynomial for watermark embedding, and the watermark is embedded in all the coefficients of the image. A cross-correlation method based on Neyman-Pearson is used for watermark detection. Moreover, a fractional rotation expression is constructed to achieve rotation. Experimental results confirm that the proposed technique has good robustness and outperforms another technique in imperceptibility. Furthermore, the proposed method enables blind watermark detection, where the original image is not required during watermark detection, making it more practical than non-blind watermarking techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_26-Robust_Image_Watermarking_using_Fractional_Sinc_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi- Spectrum Bands Allocation for Time-Varying Traffic in the Flexible Optical Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070925</link>
        <id>10.14569/IJACSA.2016.070925</id>
        <doi>10.14569/IJACSA.2016.070925</doi>
        <lastModDate>2016-10-01T05:58:29.1400000+00:00</lastModDate>
        
        <creator>KAMAGATE Beman Hamidja</creator>
        
        <creator>Michel BABRI</creator>
        
        <creator>GOORE Bi Tra</creator>
        
        <creator>Souleymane OUMTANAGA</creator>
        
        <subject>Spectrum band; Multi-spectrum bands; time-varying traffic; elastic optical network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Flexible optical networks are a promising solution to the exponential increase in traffic generated by telecommunications networks. They combine flexibility with the finest granularity of optical resources. Therefore, flexible optical networks position themselves as a better solution than conventional WDM networks. In the operational phase, connection traffic fluctuates: the users' needs are not the same throughout the day. Such traffic may rise during working hours and at the end of months or years, and decrease during the night or on holidays. This variation requires the expansion or contraction of the number of frequency slots allocated to a connection to match the exact needs of the moment. The expansion of traffic around the reference frequency of a connection may lead to blockage, because it must share frequency slots with neighboring connections in compliance with the constraints of continuity, contiguity, and non-overlapping. In this study, we offer a technique for allocating frequency slots to connections with time-varying traffic. We share out the additional traffic load on different spectrum paths while respecting the time synchronization constraint related to the differential delay, in order to reduce the blocking rate due to traffic fluctuation.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_25-Multi_Spectrum_Bands_Allocation_for_Time_Varying_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Transition Parser for the Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070924</link>
        <id>10.14569/IJACSA.2016.070924</id>
        <doi>10.14569/IJACSA.2016.070924</doi>
        <lastModDate>2016-10-01T05:58:29.1100000+00:00</lastModDate>
        
        <creator>Aref abu Awad</creator>
        
        <creator>Essam Hanandeh</creator>
        
        <subject>Natural language processing; Arabic parser; lexicon; Transition Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>One of the most important characteristics of the Arabic language is that parsing it is an exhaustive undertaking: analyzing Arabic sentences is difficult because of the length of sentences and their numerous structural complexities. This research aims at developing an Arabic parser and lexicon. A lexicon has been developed with the goal of analyzing and extracting the attributes of Arabic words. The parser was written using a top-down parsing algorithm with a recursive transition network. The parser was then evaluated against real sentences, and the outcomes were satisfactory.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_24-Developing_a_Transition_Parser_for_the_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using a Cluster for Securing Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070923</link>
        <id>10.14569/IJACSA.2016.070923</id>
        <doi>10.14569/IJACSA.2016.070923</doi>
        <lastModDate>2016-10-01T05:58:29.0770000+00:00</lastModDate>
        
        <creator>Mohamed Salim LMIMOUNI</creator>
        
        <creator>Khalid BOUKHDIR</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <creator>Siham BENHADOU</creator>
        
        <subject>cluster; intrusion detection system; embedded system; security; parallel system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In today&#39;s increasingly interconnected world, the deployment of an Intrusion Detection System (IDS) is becoming very important for securing embedded systems from viruses, worms, attacks, etc. But IDSs face many challenges, such as limited computational resources and ubiquitous threats. Many of these challenges can be resolved by running the IDS in a cluster, allowing tasks to be executed in parallel. In this paper, we propose to secure embedded systems by using a cluster of embedded cards that can run multiple instances of an IDS in parallel. This proposition is now possible with the availability of new low-power single-board computers (Raspberry Pi, BeagleBoard, Cubieboard, Galileo, etc.). To test the feasibility of the proposed architecture, we run two instances of the Bro IDS on two Raspberry Pis. The results show that multiple instances of an IDS can effectively run in parallel on a cluster of new low-power single-board computers to secure embedded systems.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_23-Using_a_Cluster_for_Securing_Embedded_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Green Services using GSLA Model for Achieving Sustainability in Industries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070922</link>
        <id>10.14569/IJACSA.2016.070922</id>
        <doi>10.14569/IJACSA.2016.070922</doi>
        <lastModDate>2016-10-01T05:58:29.0630000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>GSLA; Green Services; GaaS; Sustainability; Informational model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>A Green SLA (GSLA) is a formal agreement between service providers/vendors and users/customers that incorporates all the traditional/basic commitments (basic SLAs) as well as the Ecological, Economical, and Ethical (3Es) aspects of sustainability. Recently, most IT (Information Technology) and ICT (Information and Communication Technology) industries have been practicing sustainability in the green computing domain by designing green services within their scope. However, most of these services focus only on power consumption, energy efficiency, and carbon emission. Moreover, sustainability cannot be achieved without considering the 3Es simultaneously. Recent developments in sustainable GSLA are helping to identify the missing green services under the 3Es. This research attempts to design all the missing green services for sustainability by using a global informational model of the Green SLA. All of these newly identified green IT services could reside alongside other existing services in the industry. Additionally, the design and evaluation technique for these new green services could serve as a guideline for ICT engineers as well as for other industries. Moreover, the evaluation and monitoring of the new green services are justified using a general questionnaire design and analytical tools among 20 startup ICT industries in Bangladesh and Japan. The proposed idea of designing new green services and their justification methods would help ICT engineers practice sustainability in their competitive businesses.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_22-Identifying_Green_Services_using_GSLA_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Intelligent Data Mining Approach in Securing the Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070921</link>
        <id>10.14569/IJACSA.2016.070921</id>
        <doi>10.14569/IJACSA.2016.070921</doi>
        <lastModDate>2016-10-01T05:58:29.0470000+00:00</lastModDate>
        
        <creator>Hanna M. Said</creator>
        
        <creator>Ibrahim El Emary</creator>
        
        <creator>Bader A. Alyoubi</creator>
        
        <creator>Adel A. Alyoubi</creator>
        
        <subject>Cloud computing; Cloud security issue; Data mining; Naive Bayes; multilayer perceptron; Support vector machine; decision tree (C4.5); Partial Tree (PART)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Cloud computing is a modern term referring to a model of emerging computing in which machines in large data centers can be used to deliver services in a scalable manner, as corporations have come to need large-scale, inexpensive computing. Recently, several governments have begun to utilize cloud computing architectures, applications, and platforms to meet the needs of their constituents and deliver services. Security ranks first among the obstacles facing cloud computing for governmental agencies and businesses. Cloud computing is surrounded by many risks that may have major effects on the services and information supported by this technology. Cloud computing is also one of the promising technologies that the scientific community has recently encountered. It is related to other research areas such as distributed and grid computing, service-oriented architecture, and virtualization, and it inherits their limitations and advancements; it is also possible to exploit new opportunities for security. The aim of this paper is to discuss and analyze how to mitigate cloud computing security risks as a basic step towards obtaining a secure and safe environment for cloud computing. The results showed that using a simple decision tree model (the CHAID algorithm) as a security-rating classification approach is a robust technique that enables the decision-maker to measure the extent to which the cloud and the provided services are secured. It was shown throughout this paper that policies, standards, and controls are critical in the management process to safeguard and protect systems as well as data. The management process should analyze and understand cloud computing risks in order to protect systems and data from security exploits.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_21-Application_of_Intelligent_Data_Mining_Approach_in_Securing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context-Sensitive Opinion Mining using Polarity Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070920</link>
        <id>10.14569/IJACSA.2016.070920</id>
        <doi>10.14569/IJACSA.2016.070920</doi>
        <lastModDate>2016-10-01T05:58:29.0170000+00:00</lastModDate>
        
        <creator>Saeedeh Sadat Sadidpour</creator>
        
        <creator>Hossein Shirazi</creator>
        
        <creator>Nurfadhlina Mohd Sharef</creator>
        
        <creator>Behrouz Minaei-Bidgoli</creator>
        
        <creator>Mohammad Ebrahim Sanjaghi</creator>
        
        <subject>Opinion mining; Polarity patterns; Pattern matching; Context-sensitive; Politics domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The growth of Web 2.0 has made a huge amount of information available. The analysis of this information can be very useful in various fields. In this regard, opinion mining and sentiment analysis are among the most interesting tasks that researchers have paid attention to over the last two decades. However, this task involves several challenges, a very important one being the different polarity of words across domains and contexts. Word polarity is an important feature in determining review polarity through sentiment analysis. Existing studies have proposed the n-gram technique as a solution, which allows the matching of the selected words to the lexicon. However, identifying word polarity using the standard n-gram method poses a limitation, as it ignores word placement and its effect within the contextual domain. Therefore, this study proposes a linguistic-based model that extracts word adjacency patterns to determine review polarity. The results reflect the superiority of the proposed model compared to other benchmark approaches.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_20-Context_Sensitive_Opinion_Mining_using_Polarity_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation Medicine for Diseases System to Support Medical Diagnosis by Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070919</link>
        <id>10.14569/IJACSA.2016.070919</id>
        <doi>10.14569/IJACSA.2016.070919</doi>
        <lastModDate>2016-10-01T05:58:29.0000000+00:00</lastModDate>
        
        <creator>Noor T. Mahmood</creator>
        
        <subject>Diagnosis; Disease; Medicine; Rete Algorithm; Expert System; Intelligent System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Research has confirmed that 70 thousand deaths occur yearly around the world because of misprescribing, either of the drug itself or of its dose (overdose or underdose). The risk of choosing the wrong alternative drug has alerted professionals in the healthcare field to the importance of deploying the best technologies to reduce errors in prescribing the suitable drug. A system based on the Rete algorithm is proposed, in which the best-chosen medicine is offered through the suggested system. The Estimation Medicine for Diseases (EMD) system is introduced, where the diagnosis is made primarily according to the symptoms and the medical history of the patient. This research aims to obtain a good model using this algorithm so as to make more accurate choices of medicine. The EMD system was tested by doctors in Iraqi hospitals, and it was found that there is no other system to which the EMD system can be compared. The accuracy of estimating the appropriate medicine for heart diseases is approximately 87.26%.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_19-Estimation_Medicine_for_Diseases_System_to_Support_Medical_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trends of Recent Secure Communication System and its Effectiveness in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070918</link>
        <id>10.14569/IJACSA.2016.070918</id>
        <doi>10.14569/IJACSA.2016.070918</doi>
        <lastModDate>2016-10-01T05:58:28.9700000+00:00</lastModDate>
        
        <creator>Manjunath B E</creator>
        
        <creator>P.V. Rao</creator>
        
        <subject>Wireless Sensor Network; Security; Cryptography; Encryption; Secured Routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Wireless sensor networks have received increasing attention from the research community over the last decade due to the multiple problems associated with them. Among many significant problems, e.g., routing, energy, load balancing, and resource allocation, there are comparatively few effective security protocols for solving the security pitfalls of wireless sensor networks. This paper studies the trend of research manuscripts published in the last six years on security problems and finds that cryptographic techniques have received more attention than non-cryptographic techniques. It also reviews existing implementations that address security problems and assesses their effectiveness by highlighting beneficial factors as well as limitations. Finally, we extract a research gap to identify unexplored areas of research, which we plan to address in a future study to overcome recent security issues.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_18-Trends_of_Recent_Secure_Communication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Hybrid Semantic Text Similarity using Wordnet and a Corpus</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070917</link>
        <id>10.14569/IJACSA.2016.070917</id>
        <doi>10.14569/IJACSA.2016.070917</doi>
        <lastModDate>2016-10-01T05:58:28.9370000+00:00</lastModDate>
        
        <creator>Issa Atoum</creator>
        
        <creator>Ahmed Otoom</creator>
        
        <subject>text similarity; distributional similarity; information content; knowledge-based similarity; corpus-based similarity; WordNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Text similarity plays an important role in natural language processing tasks such as question answering and text summarization. At present, state-of-the-art text similarity algorithms rely on inefficient word pairings and/or knowledge derived from large corpora such as Wikipedia. This article evaluates previous word similarity measures on benchmark datasets and then uses a hybrid word similarity in a novel text similarity measure (TSM). The proposed TSM is based on information content and WordNet semantic relations. TSM includes exact word match, the length of both sentences in a pair, and the maximum similarity between one word and the compared text. Compared with other well-known measures, the results of TSM surpass or are comparable with the best algorithms in the literature.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_17-Efficient_Hybrid_Semantic_Text_Similarity_using_Wordnet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Purchasing Tendency using ID-POS Data of Social Login User</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070916</link>
        <id>10.14569/IJACSA.2016.070916</id>
        <doi>10.14569/IJACSA.2016.070916</doi>
        <lastModDate>2016-10-01T05:58:28.9230000+00:00</lastModDate>
        
        <creator>Kohei Otake</creator>
        
        <creator>Takashi Namatame</creator>
        
        <subject>Social Networking Service; Consumer Behavior; Network Analysis; ID-POS Data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This study targets social login registrants on an EC site and aims to clarify the difference in purchasing tendency between social login registrants and general members by analyzing product purchase histories. The authors focused on the golf portal site that is the subject of this research and analyzed the purchasing data, comparing social login registrants with general members. It became clear that social login registrants and general members have different distributions in the number of purchases and purchase types. Moreover, social login registrants have a larger range of purchase types per purchase and buy from a wider variety of genres. In addition, the authors analyzed the data with a focus on the relationships between purchased products. Network analysis made clear the existence of specific product combinations (concentrated sets in the network) that are more readily purchased together by Facebook users than by general members. Moreover, the authors compared the tendencies of each network using network indices (degree, closeness, and betweenness centrality). As a result, it became clear that social login registrants have less resistance to purchasing expensive products on an EC site than general members, and that golf gear acts as a bridge for purchasing.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_16-Analysis_of_Purchasing_Tendency_using_ID_POS_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Emergency Services for Accident Care in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070915</link>
        <id>10.14569/IJACSA.2016.070915</id>
        <doi>10.14569/IJACSA.2016.070915</doi>
        <lastModDate>2016-10-01T05:58:28.9070000+00:00</lastModDate>
        
        <creator>Amr Jadi</creator>
        
        <subject>Accidents; Communication; Emergency Services; Hajj Pilgrims; Healthcare; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Road safety is one of the serious challenges faced by most governments due to the various issues involved. Driving perfectly is not enough on the roads; dealing with the mistakes of other drivers is also an important aspect of present-day driving. Dealing with accidents and injured persons, contacting the emergency services, and handling other legal formalities are serious challenges under present conditions. Providing emergency services is a real challenge due to increased population, heavy traffic, and communication problems.
In this paper, a novel technique is introduced to avoid delays and major setbacks in emergency services at the time of accidents. The proposed technique works along with the traffic control system of the Kingdom of Saudi Arabia (KSA). By introducing such a system into healthcare, serious communication drawbacks can be avoided to a great extent. The proposed system can prove very effective in a place like Saudi Arabia, where millions of Hajj pilgrims visit for socio-religious gatherings.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_15-Improving_the_Emergency_Services_for_Accident_Care.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ROHDIP: Resource Oriented Heterogeneous Data Integration Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070914</link>
        <id>10.14569/IJACSA.2016.070914</id>
        <doi>10.14569/IJACSA.2016.070914</doi>
        <lastModDate>2016-10-01T05:58:28.8770000+00:00</lastModDate>
        
        <creator>Wael Shehab</creator>
        
        <creator>Sherin M. ElGokhy</creator>
        
        <creator>ElSayed Sallam</creator>
        
        <subject>Data Integration; Data heterogeneity; SOA; ROA; Restful; ROHDIP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>During the last few years, the revolution of social networks such as Facebook, Twitter, and Instagram has led to a daily increase of data that are heterogeneous in their sources, data models, and platforms. Heterogeneous data sources take many forms, such as the WWW, the deep web, relational database systems, NoSQL database systems, hierarchical data systems, and semi-structured files, in which data are usually allocated on different machines (distributed) and have different data models (heterogeneous).
Large-scale data integration efforts demonstrate that their most valuable contribution is a data integration platform that provides uniform access to the heterogeneous data sources, as well as to the different versions of data reported by the same data source over time. Furthermore, the platform must be able to integrate data from a broad range of data authoring devices and database management systems. It should also be accessible by almost all types of data querying devices, so that the integration platform can be queried globally from any place on earth at any time and the query result can be received in any data format.
In this paper, we create a resource oriented heterogeneous data integration platform (ROHDIP) that facilitates the data integration process and implements the objectives discussed above. We use the resource oriented architecture (ROA) to support uniform access by most types of data querying devices from anywhere and to improve the query response time.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_14-ROHDIP_Resource_Oriented_Heterogeneous_Data_Integration_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Performance Computing Over Parallel Mobile Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070913</link>
        <id>10.14569/IJACSA.2016.070913</id>
        <doi>10.14569/IJACSA.2016.070913</doi>
        <lastModDate>2016-10-01T05:58:28.8430000+00:00</lastModDate>
        
        <creator>Doha Ehab Attia</creator>
        
        <creator>Abeer Mohamed ElKorany</creator>
        
        <creator>Ahmed Shawky Moussa</creator>
        
        <subject>Parallel computing; High-performance computing; Mobile computing; Cluster computing; Android OS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>There are currently more mobile devices than people on the planet. This number is likely to multiply many fold with the Internet of Things revolution in the next few years. This may harbor unprecedented computational power, especially with the widespread adoption of multicore processors in mobile phones. This paper investigates and proposes a new methodology for mobile cluster computing, in which multiple mobile devices, including their multicore processors, can be combined to run possibly massively parallel applications. The paper presents in detail the steps for building and testing the mobile cluster using the proposed methodology and demonstrates a successful implementation.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_13-High_Performance_Computing_Over_Parallel_Mobile_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Internal Multi-Model Control of Uncertain Discrete-Time Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070912</link>
        <id>10.14569/IJACSA.2016.070912</id>
        <doi>10.14569/IJACSA.2016.070912</doi>
        <lastModDate>2016-10-01T05:58:28.8130000+00:00</lastModDate>
        
        <creator>Chakra Othman</creator>
        
        <creator>Ikbel Ben Cheikh</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <subject>Internal model control IMC; Internal multi-model control IMMC; Kharitonov theorem; Switching method; Residues techniques; discrete-time systems; uncertain systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In this paper, new internal multi-model control approaches are proposed for discrete-time systems with parametric uncertainty. Two implementation structures of the internal multi-model control are adopted: the first is based on the switching principle and the second on residues techniques. The stability study of these control structures is based on the Kharitonov theorem; two extensions of this theorem have been applied to define the internal models. To illustrate these approaches, simulation results are presented at the end of this article.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_12-On_the_Internal_Multi_Model_Control_of_Uncertain_Discrete_Time_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Camera Self-Calibration with Varying Intrinsic Parameters by an Unknown Three-Dimensional Scene </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070911</link>
        <id>10.14569/IJACSA.2016.070911</id>
        <doi>10.14569/IJACSA.2016.070911</doi>
        <lastModDate>2016-10-01T05:58:28.7800000+00:00</lastModDate>
        
        <creator>B. SATOURI</creator>
        
        <creator>A. EL ABDERRAHMANI</creator>
        
        <creator>H. TAIRI</creator>
        
        <creator>K. SATORI</creator>
        
        <subject>Self-calibration; varying intrinsic parameters; non linear optimization; Interests points; Matching; Fundamental matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In the present paper, we propose a new and robust method for self-calibrating cameras with varying intrinsic parameters from a sequence of images of an unknown 3D object. The projection of two points of the 3D scene onto the image planes is used to determine the projection matrices. The method is based on the formulation of a nonlinear cost function derived from the relationship between two points of the scene, their opposites with respect to the abscissa axis, and their projections onto the image planes. Solving this function with a genetic algorithm enables us to estimate the intrinsic parameters of the different cameras. The importance of our approach resides in: the use of a single pair of images, which provides fewer equations, simplifies the mathematical complexity, and minimizes the execution time of the application; the use of the data of the first image only, without the data of the second image; the use of any camera, which makes the intrinsic parameters variable rather than constant; and the use of a 3D scene, which reduces the planarity constraints. Experimental results on synthetic and real data prove the performance and robustness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_11-Camera_Self_Calibration_with_Varying_Intrinsic_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balanced Distribution of Load on Grid Resources using Cellular Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070910</link>
        <id>10.14569/IJACSA.2016.070910</id>
        <doi>10.14569/IJACSA.2016.070910</doi>
        <lastModDate>2016-10-01T05:58:28.7670000+00:00</lastModDate>
        
        <creator>Amir Akbarian Sadeghi</creator>
        
        <creator>Ahmad Khademzadeh</creator>
        
        <creator>Mohammad Reza Salehnamadi</creator>
        
        <subject>Computing Grid; Load balancing; Cellular Automata; Fuzzy Logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Load balancing is a technique for the equal and fair distribution of workloads across resources, maximizing their performance and reducing the overall execution time. However, meeting all of these goals in a single algorithm is not possible due to their inherent conflicts, so some features must be prioritized according to the requirements and objectives of the system, and the desired algorithm must be designed around them. In this article, a decentralized load balancing algorithm based on Cellular Automata and Fuzzy Logic is presented, which has the capabilities needed for the fair distribution of resources at the Grid level.
Each computing node in this algorithm is modeled as a cell of a Cellular Automaton and, with the help of Fuzzy Logic, each node can act as an expert system and decide what the best choice is in a dynamic environment with uncertain data.
Based on information exchanged with its neighboring nodes at specific time periods, and according to Fuzzy Logic rules, each node is mapped to one of the VL, L, N, H, and VH states. To reduce the communication overhead, Fuzzy Logic is used to estimate the states of the other nodes in subsequent periods, and based on the state of each node, a decision is made to send or receive workloads. Thus, an appropriate structure for the system can greatly improve the efficiency of the algorithm. In fact, Fuzzy Logic does not search and optimize; it just makes decisions based on input parameters, which are effective internal parameters of the system and are often incomplete and imprecise.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_10-Balanced_Distribution_of_Load_on_Grid_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Good Quasi-Cyclic Codes from Circulant Matrices Concatenation using a Heuristic Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070909</link>
        <id>10.14569/IJACSA.2016.070909</id>
        <doi>10.14569/IJACSA.2016.070909</doi>
        <lastModDate>2016-10-01T05:58:28.7500000+00:00</lastModDate>
        
        <creator>Bouchaib AYLAJ</creator>
        
        <creator>Said NOUH</creator>
        
        <creator>Mostafa BELKASMI</creator>
        
        <creator>Hamid ZOUAKI</creator>
        
        <subject>Circulant matrix; quasi-cyclic codes; Minimum Distance; Simulated Annealing; Linear Error Correcting codes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>In this paper, we present a method to search for q circulant matrices; the concatenation of these circulant matrices with the circulant identity matrix generates quasi-cyclic codes with various high code rates q/(q+1) (q an integer).
This method searches for circulant matrices in order to find good quasi-cyclic codes (QCC) having the largest minimum distance. A modified simulated annealing algorithm is used as an evaluation tool for the minimum distance of the obtained QCC codes. Based on this method, we found 16 good quasi-cyclic codes with rates 1/2, 2/3, and 3/4, whose estimated minimum distances reach the lower bounds of the codes considered to be the best linear block codes in Brouwer’s database.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_9-Good_Quasi_Cyclic_Codes_from_Circulant_Matrices_Concatenation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Risk-based Decision Method for Vehicular Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070908</link>
        <id>10.14569/IJACSA.2016.070908</id>
        <doi>10.14569/IJACSA.2016.070908</doi>
        <lastModDate>2016-10-01T05:58:28.7200000+00:00</lastModDate>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <subject>Ad hoc networks; Decision methods; Risk management; Trust management; Vehicular Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>A vehicular ad hoc network (VANET) is an emerging technology that has the potential to improve road safety and traveler comfort. In VANETs, mobile vehicles communicate with each other for the purpose of sharing various kinds of information. This information is very useful for preventing road accidents and traffic jams. On the contrary, bogus and inaccurate information may cause undesirable outcomes, such as automobile fatalities and traffic congestion. Therefore, it is highly beneficial to consider risk before a vehicle takes any decision based on the information received from surrounding vehicles. To overcome these issues, we propose a new risk-based decision method for vehicular ad hoc networks. It determines risk based on three key elements: 1) application type and sensitivity level, 2) vehicle context, and 3) driver’s attitude. This paper also provides a theoretical analysis and evaluation of the proposed method and discusses the applications of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_8-Fuzzy_Risk-based_Decision_Method_for_Vehicular.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Image Watermarking based on Handwritten Signature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070907</link>
        <id>10.14569/IJACSA.2016.070907</id>
        <doi>10.14569/IJACSA.2016.070907</doi>
        <lastModDate>2016-10-01T05:58:28.7030000+00:00</lastModDate>
        
        <creator>Saeid Shahmoradi</creator>
        
        <creator>Nasrollah Sahragard</creator>
        
        <creator>Ahmad Hatam</creator>
        
        <subject>intelligent watermarking; genetic algorithm; neural networks; handwritten signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>With the growth of digital technology over the past decades, the issue of copyright protection has become especially important. Digital watermarking is a suitable way of addressing this issue. The main problem in the area of watermarking is the balance between image transparency and resistance to attacks after watermarking, where an increase in either one will always cause a decrease in the other. Providing statistical and intelligent methods is the most common way of optimizing resistance and transparency. In this paper, the intelligent method of the genetic algorithm (GA) in watermarking is examined, and the results of using this method are compared with the results of a statistical SVD-based method. Also, by combining watermarking and authentication, relatively higher security in both can be achieved. In this scheme, the security of watermarking is increased through a new method based on combining image watermarking with a person&#39;s handwritten signature. The signature recognition section is implemented using neural networks. The results from implementing these two methods show that, in this area, intelligent methods perform better than statistical methods. This method can also be used for tasks like passport or national identity card authentication.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_7-Intelligent_Image_Watermarking_based_on_Handwritten_Signature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of IPv6 Deployment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070906</link>
        <id>10.14569/IJACSA.2016.070906</id>
        <doi>10.14569/IJACSA.2016.070906</doi>
        <lastModDate>2016-10-01T05:58:28.6730000+00:00</lastModDate>
        
        <creator>Manal M. Alhassoun</creator>
        
        <creator>Sara R. Alghunaim</creator>
        
        <subject>IPv4; IPv6; deployment; Internet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The next-generation Internet protocol (IPv6) was designed to overcome the limitations of IPv4 by using a 128-bit address instead of a 32-bit address. In addition to solving the address limitations, IPv6 has many improved features. This research surveys IPv6 deployment around the world. The objectives of this survey paper are to highlight the issues related to IPv6 deployment and to look into the IPv4-to-IPv6 transition mechanisms. Furthermore, it provides insight into the global effort to contribute to IPv6 deployment and identifies potential solutions and suggestions that could improve the IPv6 deployment rate. In order to achieve these objectives, we survey a number of papers on IPv6 deployment from different countries and continents.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_6-A_Survey_of_IPv6_Deployment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An IoT Middleware Framework for Industrial Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070905</link>
        <id>10.14569/IJACSA.2016.070905</id>
        <doi>10.14569/IJACSA.2016.070905</doi>
        <lastModDate>2016-10-01T05:58:28.6400000+00:00</lastModDate>
        
        <creator>Nicoleta-Cristina Gaitan</creator>
        
        <creator>Vasile Gheorghita Gaitan</creator>
        
        <creator>Ioan Ungurean</creator>
        
        <subject>Internet of Things; Middleware; CORBA; ACE ORB (TAO); Data Distribution Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Starting from RFID and wireless sensor networks, the Internet of connected things has attracted the attention of major IT companies and, later, of the industrial environment, which recognized the concept as one of its key axes for future growth and development. The implementation of IoT in the industrial environment raises some significant issues related to the diversity of fieldbuses, the large number of devices and their configuration. The requirements related to reliability, security and real-time operation are very important. This paper proposes a framework for industrial IoT and communications at the edge with some outstanding features: the easy integration of fieldbuses and devices used in industrial environments with automatic configuration; the integration of multiple middleware technologies (CORBA, OPC and DDS); the decoupling of the industrial activity from the publishing of data on the Internet; and security at different levels of the framework. Another important feature of the proposed framework is that it is based on mature standards and on open source or public implementations of these standards. The framework is modular, allowing the easy integration of new fieldbus protocols, middleware technologies and new objects in the client application. This paper focuses mainly on the CORBA and DDS approaches.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_5-An_IoT_Middleware_Framework_for_Industrial_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing and Implementing of Intelligent Emotional Speech Recognition with Wavelet and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070904</link>
        <id>10.14569/IJACSA.2016.070904</id>
        <doi>10.14569/IJACSA.2016.070904</doi>
        <lastModDate>2016-10-01T05:58:28.6270000+00:00</lastModDate>
        
        <creator>Bibi Zahra Mansouri</creator>
        
        <creator>Hamid Mirvaziri</creator>
        
        <creator>Faramarz Sadeghi</creator>
        
        <subject>Recognition of emotion from speech; feature extraction; MFCC; Artificial neural network; Wavelet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Recognition of emotion from speech is a significant subject in man-machine fields. In this study, the speech signal is analyzed in order to create a recognition system that is able to recognize human emotion, and a new set of characteristics is proposed in the time, frequency and time–frequency domains in order to increase the accuracy. After extracting the Pitch, MFCC, Wavelet, ZCR and Energy features, neural networks classify four emotions from the EMO-DB and SAVEE databases. With the combined features, accuracy on the EMO-DB database is 100% for two emotions, 98.48% for three emotions and 90% for four emotions; this is better than the result on the SAVEE database due to the variety of speech, the larger number of spoken words and the distinction between male and female speakers. On the SAVEE database, accuracy is 97.83% for the two emotions happy and sad, 84.75% for the three emotions angry, normal and sad, and 77.78% for the four emotions happy, angry, sad and normal.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_4-Designing_and_Implementing_of_Intelligent_Emotional_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gaussian Mixture Model and Deep Neural Network based Vehicle Detection and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070903</link>
        <id>10.14569/IJACSA.2016.070903</id>
        <doi>10.14569/IJACSA.2016.070903</doi>
        <lastModDate>2016-10-01T05:58:28.5930000+00:00</lastModDate>
        
        <creator>S Sri Harsha</creator>
        
        <creator>K. R. Anne</creator>
        
        <subject>Vehicle detection and classification; deep neural network; AlexNet; SIFT; Gaussian Mixture Model; LDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>The exponential rise in the demand for vision-based traffic surveillance systems has motivated academia and industry to develop an optimal vehicle detection and classification scheme. In this paper, an adaptive learning rate based Gaussian mixture model (GMM) algorithm has been developed for background subtraction of multilane traffic data. Here, vehicle rear information and road dash-markings have been used for vehicle detection. After performing background subtraction, connected component analysis has been applied to retrieve the vehicle region. A multilayered AlexNet deep neural network (DNN) has been applied to extract higher-layer features. Furthermore, scale invariant feature transform (SIFT) based vehicle feature extraction has been performed. The extracted 4096-dimensional features have been processed for dimensionality reduction using principal component analysis (PCA) and linear discriminant analysis (LDA). The features have been mapped for SVM-based classification. The classification results have exhibited that AlexNet-FC6 features with LDA give an accuracy of 97.80%, followed by AlexNet-FC6 with PCA (96.75%). AlexNet-FC7 features with the LDA and PCA algorithms have exhibited classification accuracies of 91.40% and 96.30%, respectively. On the contrary, SIFT features with the LDA algorithm have exhibited 96.46% classification accuracy. The results revealed that enhanced GMM with AlexNet DNN at FC6 and FC7 can be significant for optimal vehicle detection and classification.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_3-Gaussian_Mixture_Model_and_Deep_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maneuverability of an Inverted Pendulum Vehicle According to the Handle Operation Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070902</link>
        <id>10.14569/IJACSA.2016.070902</id>
        <doi>10.14569/IJACSA.2016.070902</doi>
        <lastModDate>2016-10-01T05:58:28.5630000+00:00</lastModDate>
        
        <creator>Chihiro NAKAGAWA</creator>
        
        <creator>Takuya CHIKAYAMA</creator>
        
        <creator>Akikazu OKAMOTO</creator>
        
        <creator>Atsuhiko SHINTANI</creator>
        
        <creator>Tomohiro ITO</creator>
        
        <subject>personal mobility vehicle; inverted pendulum vehicle; maneuverability; handle operation; number of operations; questionnaire evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>This study investigated what handle operation and turning gain are comfortable for people using an inverted pendulum vehicle whose handle operation can be changed. There were three experimental conditions. The first was a slalom course with two cones placed at an interval of 1.8 m. The second was a slalom course with five cones placed at an interval of 1.4 m. The third was a slalom course with six cones placed at intervals of 1.8 m, 1.4 m, 1.8 m, 1.4 m, 1.8 m, and 1.8 m. The first condition considered the difference in handle operation between subjects who were used to riding and those who were not. The second condition considered the difference in maneuverability due to gains. The third condition considered the difference in maneuverability between the two handle operations in a real running space under a condition of 10 gains. In the result of the first condition, a subject who was used to riding ran effectively, and the running time was short compared with a subject who was not used to riding. However, in handle yaw rotation, the difference in maneuverability was small. In the result of the second condition, the running mileage was about the same for the two handle operations, but the running time of handle yaw rotation was shorter than that of handle roll rotation. In the result of the third condition, as in the second condition, the running time of handle yaw rotation was shorter than that of handle roll rotation. In the questionnaire evaluation, the best gain was the lower gain, 0.02. Finally, an experiment was carried out with 14 subjects at the best gain, 0.02, which was best for both handle operations. In the result of this experiment, 12 subjects answered that handle yaw rotation is better than handle roll rotation.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_2-Maneuverability_of_an_Inverted_Pendulum_Vehicle_According.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Energy Efficient Dynamic Virtual Machine Consolidation in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070901</link>
        <id>10.14569/IJACSA.2016.070901</id>
        <doi>10.14569/IJACSA.2016.070901</doi>
        <lastModDate>2016-10-01T05:58:28.5000000+00:00</lastModDate>
        
        <creator>Sara Nikzad</creator>
        
        <creator>Seyed EnayatOllah Alavi</creator>
        
        <creator>Mohammad Reza Soltanaghaei</creator>
        
        <subject>Cloud Computing; Service Level Agreement; Energy Consumption; Virtualization; Dynamic Consolidation; Data Center</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(9), 2016</description>
        <description>Nowadays, as the use of cloud computing services becomes more extensive and customers welcome these services, an increasing trend in the energy consumption and operational costs of these centers may be seen. To reduce operational costs, providers should decrease energy consumption to an extent that the Service Level Agreement (SLA) is maintained at a desirable level. This paper adopts the virtual machine consolidation problem in cloud computing data centers as a solution to achieve this goal, putting forward solutions for deciding on the necessity of migration from hosts and finding appropriate hosts as destinations of migration. Using a time-series forecasting method and the Double Exponential Smoothing (DES) technique, the proposed algorithm predicts CPU utilization in the near future. It also proposes an optimal equation for the dynamic lower threshold. Comparing current and predicted CPU utilization with dynamic upper and lower thresholds, this algorithm identifies and categorizes underloaded and overloaded hosts. According to this categorization, migration then occurs from the hosts that meet the necessary conditions for migration. This paper identifies a certain type of host as a “troublemaker host”; for these hosts, the process of prediction and decision making regarding the necessity of migration will most probably be disrupted. Upon encountering this type of host, the algorithm adopts policies to modify them or switch them to sleep mode, thereby preventing the adverse effects caused by their existence. The researchers excluded all overloaded, prone-to-be-overloaded, underloaded, and prone-to-be-underloaded hosts from the list of suitable hosts when finding destinations of migration. An average improvement of 86.2%, 28.4%, and 87.2%, respectively, in the number of virtual machine migrations, energy consumption, and SLA violation is among the simulation achievements of this algorithm using the Clouds tool.</description>
        <description>http://thesai.org/Downloads/Volume7No9/Paper_1-An_Approach_for_Energy_Efficient_Dynamic_Virtual_Machine_Consolidation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WSDF: Weighting of Signed Distance Function for Camera Motion Estimation in RGB-D Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050905</link>
        <id>10.14569/IJARAI.2016.050905</id>
        <doi>10.14569/IJARAI.2016.050905</doi>
        <lastModDate>2016-09-10T12:38:47.7930000+00:00</lastModDate>
        
        <creator>Pham Minh Hoang</creator>
        
        <creator>Vo Hoai Viet</creator>
        
        <creator>Ly Quoc Ngoc</creator>
        
        <subject>RGB-D data; 3D Reconstruction; SDF; Camera Motion Estimation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(9), 2016</description>
        <description>The recent advent of the cost-effective Kinect, which can capture real-time high-resolution RGB and depth information, has opened an opportunity to significantly increase the capabilities of many automated vision-based recognition tasks, including object/action classification, 3D reconstruction, etc. In this work, we address camera motion estimation, which is an important phase in 3D object reconstruction systems based on RGB-D data. We segment objects with a thresholding algorithm based on depth data and propose a weighting function for the SDF, called WSDF. The problem of minimizing this function is solved by Gauss-Newton methods. We systematically evaluate our method on the TUM dataset. The experimental results are measured by ATE and RPE, which evaluate both the global and local consistency of a camera motion estimation algorithm. We demonstrate large improvements over state-of-the-art methods on both the plant and teddy3 objects, achieving the best ATE of 0.00564 and 0.0182 and the best RPE of 0.00719 and 0.00104, respectively. These experiments show that the proposed method significantly outperforms state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No9/Paper_5-WSDF_Weighting_of_Signed_Distance_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Employee Turnover in Organizations using Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050904</link>
        <id>10.14569/IJARAI.2016.050904</id>
        <doi>10.14569/IJARAI.2016.050904</doi>
        <lastModDate>2016-09-10T12:38:45.4970000+00:00</lastModDate>
        
        <creator>Rohit Punnoose</creator>
        
        <creator>Pankaj Ajit</creator>
        
        <subject>turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(9), 2016</description>
        <description>Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No9/Paper_4-Prediction_of_Employee_Turnover_in_Organizations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Direction for Artificial Intelligence to Achieve Sapiency Inspired by Homo Sapiens</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050903</link>
        <id>10.14569/IJARAI.2016.050903</id>
        <doi>10.14569/IJARAI.2016.050903</doi>
        <lastModDate>2016-09-10T12:38:43.0600000+00:00</lastModDate>
        
        <creator>Mahmud Arif Pavel</creator>
        
        <subject>artificial sapience; sapient agent; artificial intelligence; bio-inspired AI</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(9), 2016</description>
        <description>Artificial intelligence technology has developed significantly in the past decades. Although many computational programs are able to approximate many cognitive abilities of Homo sapiens, the intelligence and sapience level of these programs is not even close to that of Homo sapiens. Rather than developing a computational system with the intelligent or sapient attribute, I propose to develop a system capable of performing functions that could be deemed intelligent or sapient by Homo sapiens or others. I advocate converting current computational systems to educable systems that have built-in capabilities to learn and be taught with a universal programming language. The idea is that this attempt would help to attain computational actions by artificial means, which could be viewed as similar to human intelligent and sapient acts. Although this paper is seemingly speculative, some feasible elements are proposed to advance the field of Artificial Intelligence.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No9/Paper_3-Direction_for_Artificial_Intelligence_to_Achieve_Sapiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pursuit Reinforcement Competitive Learning: PRCL based Online Clustering with Tracking Algorithm and its Application to Image Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050902</link>
        <id>10.14569/IJARAI.2016.050902</id>
        <doi>10.14569/IJARAI.2016.050902</doi>
        <lastModDate>2016-09-10T12:38:40.7730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Pursuit Reinforcement Guided Competitive Learning; Reinforcement Guided Competitive Learning; Sustained Reinforcement Guided Competitive Learning Vector Quantization; Learning Automata</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(9), 2016</description>
        <description>Pursuit Reinforcement guided Competitive Learning (PRCL), a relatively fast online clustering method based on reinforcement guided competitive learning that allows grouping the data in concern into several clusters even when the number and distribution of the data are varied, is proposed. One of the applications of the proposed method is image portion retrieval from relatively large-scale images such as Earth observation satellite images. It is found that the proposed method is relatively fast in retrieval in comparison to other existing conventional online clustering methods such as Vector Quantization (VQ). Moreover, the proposed method is much faster than the others for multi-stage retrieval of image portions as well as scale estimation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No9/Paper_2-Pursuit_Reinforcement_Competitive_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Rank Aggregation Algorithm for Ensemble of Multiple Feature Selection Techniques in Credit Risk Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050901</link>
        <id>10.14569/IJARAI.2016.050901</id>
        <doi>10.14569/IJARAI.2016.050901</doi>
        <lastModDate>2016-09-10T12:38:39.2930000+00:00</lastModDate>
        
        <creator>Shashi Dahiya</creator>
        
        <creator>S.S Handa</creator>
        
        <creator>N.P Singh</creator>
        
        <subject>Classification; Credit Risk; Feature Selection; Ensemble; Rank Aggregation; Bagging</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(9), 2016</description>
        <description>In credit risk evaluation, the accuracy of a classifier is very significant for classifying high-risk loan applicants correctly. Feature selection is one way of improving the accuracy of a classifier. It provides the classifier with important and relevant features for model development. This study uses an ensemble of multiple feature ranking techniques for feature selection on credit data. It uses five individual rank-based feature selection methods. It proposes a novel rank aggregation algorithm for combining the ranks of the individual feature selection methods of the ensemble. This algorithm uses the rank order along with the rank score of the features in the ranked list of each feature selection method for rank aggregation. The ensemble of multiple feature selection techniques uses the novel rank aggregation algorithm and selects the relevant features using the 80%, 60%, 40% and 20% thresholds from the top of the aggregated ranked list for building the C4.5, MLP, C4.5-based Bagging and MLP-based Bagging models. It was observed that the performance of the models using the ensemble of multiple feature selection techniques is better than the performance of the 5 individual rank-based feature selection methods. The average performance of all the models was observed to be best for the ensemble of feature selection techniques at the 60% threshold. Also, the bagging-based models outperformed the individual models most significantly at the 60% threshold. This increase in performance is all the more significant given that the number of features was reduced by 40% for building the highest-performing models. This reduces the data dimensions, and hence the overall data size, phenomenally for model building. The use of the ensemble of feature selection techniques with the novel aggregation algorithm provided more accurate models which are simpler, faster and easier to interpret.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No9/Paper_1-A_Rank_Aggregation_Algorithm_for_Ensemble.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Securing Medical Documents from Insider Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070848</link>
        <id>10.14569/IJACSA.2016.070848</id>
        <doi>10.14569/IJACSA.2016.070848</doi>
        <lastModDate>2016-09-01T12:18:38.4130000+00:00</lastModDate>
        
        <creator>Maaz Bin Ahmad</creator>
        
        <creator>Muhammad Fahad</creator>
        
        <creator>Abdul Wahab Khan</creator>
        
        <creator>Muhammad Asif</creator>
        
        <subject>covert channels; misuse; insider; medical; documents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Medical organizations have sensitive health-related documents. Unauthorized access attempts to these should not only be prevented but also detected, in order to ensure the correct treatment of patients and to capture users with malicious intent. Such organizations normally rely on the principle of least privilege together with the deployment of commercially available software to cope with this issue. However, such methods cannot help against some misuse methods, e.g., covert channels. As an insider may be part of the team that developed such software, he may have deliberately inserted such channels into its source code. The results may be catastrophic not only for the organization but for the patients too. This paper presents an application for the secure exchange of documents in the medical organizations of our country. The introduction of watermarking and hash-protected documents enhances its security and makes it fit for deployment in medical organizations. The deployment is done in such a way that only higher management has access to the source code for review. Results demonstrate its effectiveness in preventing and detecting the majority of information misuse channels.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_48-Towards_Securing_Medical_Documents_from_Insider_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Persuasive Recommendations in Wellness Applications based upon User Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070847</link>
        <id>10.14569/IJACSA.2016.070847</id>
        <doi>10.14569/IJACSA.2016.070847</doi>
        <lastModDate>2016-09-01T12:18:38.3830000+00:00</lastModDate>
        
        <creator>Hamid Mukhtar</creator>
        
        <subject>user goal; modeling; user feedback; context; preferences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Recently, a large number of mobile wellness applications have emerged for assisting users in self-monitoring of daily food intake and physical activities. While such applications are in abundance, many research surveys have found that users soon give them up after an initial try. This article presents our application for healthcare self-management that monitors users’ activities but, unlike the existing applications, focuses on keeping users engaged in self-management. The distinguishing feature of our application is that it uses persuasive mechanisms to help users adopt healthy behavior. For this purpose, users’ various activities are monitored, and the users are then persuaded using different persuasion strategies that are adaptive to their behavior. For each user, a behavior model is created that is based on Fogg’s behavior model but, in addition, also holds user preferences and the user’s health profile. The behavior model is then used to create a persuasion profile of the user that allows us to propose personalized suggestions targeted at overcoming his deficient behavior. We also describe a case study of the actual application.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_47-Using_Persuasive_Recommendations_in_Wellness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Approach for Text-Independent Speaker Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070846</link>
        <id>10.14569/IJACSA.2016.070846</id>
        <doi>10.14569/IJACSA.2016.070846</doi>
        <lastModDate>2016-09-01T12:18:38.3030000+00:00</lastModDate>
        
        <creator>Rania Chakroun</creator>
        
        <creator>Leila Belta&#239;fa Zouari</creator>
        
        <creator>Mondher Frikha</creator>
        
        <subject>GMM; speaker verification; speaker recognition; speaker identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper presents new Speaker Identification and Speaker Verification systems based on new feature vectors extracted from the speech signal. The proposed structure combines the well-known Mel Frequency Cepstral Coefficients with new features, namely the Short Time Zero Crossing Rate of the signal. A comparison is given between speaker recognition systems based on Gaussian mixture models using the well-known Mel Frequency Cepstral Coefficients and the novel systems based on combining reduced Mel Frequency Cepstral Coefficient feature vectors with Short Time Zero Crossing Rate features. This comparison proves that the new reduced feature vectors help to improve the system’s performance and also to reduce the time and memory complexity of the system, which is required for realistic applications that suffer from computational resource limitations. The experiments were performed on speakers from the TIMIT database for different training durations. The suggested systems’ performances are evaluated against the baseline systems. The improvement of the proposed systems is clearly observed in the identification experiments, and the decrease in Equal Error Rates is also remarkable in the verification experiments. Experimental results demonstrate the effectiveness of the new approach, which avoids the use of more complex algorithms or the combination of different approaches requiring lengthy calculation.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_46-An_Improved_Approach_for_Text_Independent_Speaker_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecture Considerations for Big Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070845</link>
        <id>10.14569/IJACSA.2016.070845</id>
        <doi>10.14569/IJACSA.2016.070845</doi>
        <lastModDate>2016-08-31T16:51:50.8930000+00:00</lastModDate>
        
        <creator>Khalim Amjad Meerja</creator>
        
        <creator>Khaled Almustafa</creator>
        
        <subject>Internet of Things (IoT), Big Data, Cloud network, RFID, Sensor Networks, 5G</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>A network architecture is concerned with a holistic view of the interconnection of different nodes with each other. This refers to both physical and logical ways of interconnecting all nodes in the network. The way in which they are connected influences the strategies adopted for Big Data management. In this present day of the Internet of Things (IoT), each kind of device is required, and made able, to communicate with other completely different kinds of devices. The heterogeneous nature of devices in the network needs a completely new architecture to efficiently handle Big Data, which is generated continually, either for providing services to end users or for study and analysis in a research process. It is thus essential to visit the various kinds of devices that are available on the Internet, their characteristics and requirements, how they communicate and process data, and eventually how human society embraces Big Data generation for its daily consumption. This paper is dedicated to bringing all these aspects together in one place, bringing different technologies into one single network architecture.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_45-Architecture_Considerations_for_Big_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition in Uncontrolled Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070844</link>
        <id>10.14569/IJACSA.2016.070844</id>
        <doi>10.14569/IJACSA.2016.070844</doi>
        <lastModDate>2016-08-31T16:51:50.8630000+00:00</lastModDate>
        
        <creator>Radhey Shyam</creator>
        
        <creator>Yogendra Narain Singh</creator>
        
        <subject>Face recognition, A-LBP, descriptor, distance metrics, area under curve, decidability index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper presents a novel method of facial image representation for face recognition in an uncontrolled environment. It is named augmented local binary patterns (A-LBP) and works on both uniform and non-uniform patterns. It replaces the central non-uniform pattern with the majority value of the neighbouring uniform patterns obtained after processing all neighbouring non-uniform patterns. These patterns are finally combined with the neighbouring uniform patterns in order to extract discriminatory information from the local descriptors. The experimental results indicate the vitality of the proposed method on particular face datasets, where the images are prone to extreme variations of illumination.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_44-Face_Recognition_in_Uncontrolled_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cohesion Based Personalized Community Recommendation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070843</link>
        <id>10.14569/IJACSA.2016.070843</id>
        <doi>10.14569/IJACSA.2016.070843</doi>
        <lastModDate>2016-08-31T16:51:50.8470000+00:00</lastModDate>
        
        <creator>Md Mamunur Rashid</creator>
        
        <creator>Kazi Wasif Ahmed</creator>
        
        <creator>Hasan Mahmud</creator>
        
        <creator>Md. Kamrul Hasan</creator>
        
        <creator>Husne Ara Rubaiyeat</creator>
        
        <subject>Social network, Community or Group recommendation, Cohesion, Amity factor, User Preferences or proclivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Our lives are thoroughly shaped by the progressive growth of online social networking, as millions of users interconnect with each other through different social media sites like Facebook, Twitter, LinkedIn, Google+, Pinterest, Instagram, etc. Most social sites, like Facebook and Google+, allow users to join different groups or communities where people can share their common interests and express opinions around a common cause, problem or activity. However, an information overload issue disturbs users, as thousands of communities or groups are created each day. To resolve this problem, we present a community or group recommendation system centered on cohesion, where cohesion represents a high degree of connectedness among users in a social network. In this paper, we focus on suggesting useful communities (or groups, in Facebook terms) that users are personally attracted to join, reducing the effort to find useful information based on cohesion. Our proposed framework consists of the following steps: extracting a sub-network from a social networking site (SNS), computing the impact of amity (both real-life or social and SNS-connected), measuring the user proclivity factor, calculating a threshold from a user's existing communities or groups, and lastly recommending a community or group based on the derived threshold. In the result analysis, we compute precision-recall values by discarding communities or groups one at a time from the list of communities or groups of a certain user and checking whether the removed community or group is recommended by our proposed system. We evaluated our system with 20 users and obtained an F1 measure of 76%.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_43-Cohesion_Based_Personalized_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BRIQA: Framework for the Blind and Referenced Visual Image Quality Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070842</link>
        <id>10.14569/IJACSA.2016.070842</id>
        <doi>10.14569/IJACSA.2016.070842</doi>
        <lastModDate>2016-08-31T16:51:50.8170000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <creator>Oswaldo Morales</creator>
        
        <creator>Ricardo Tejeida</creator>
        
        <creator>Eduardo Garc&#237;a</creator>
        
        <subject>Image Quality Assessment; Contrast Band-Pass Filtering; Peak Signal-to-Noise Ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper presents a Blind and Referenced Image Quality Assessment, or BRIQA. Its main contribution is an interface that contains not only a Full-Referenced Image Quality Assessment (IQA) but also a No-Referenced or Blind IQA, applying perceptual concepts by means of Contrast Band-Pass Filtering (CBPF). The proposal consists in contrasting a degraded input image with versions filtered at several distances by a CBPF, which computes some of the Human Visual System (HVS) variables. If BRIQA detects only one input, it performs a Blind Image Quality Assessment; on the contrary, if BRIQA detects two inputs, it considers that a Referenced Image Quality Assessment is to be computed. Thus, we first define a Full-Reference IQA and then a No-Reference IQA, whose correlation is significant when contrasted with the psychophysical results obtained from several observers. BRIQA weights the Peak Signal-to-Noise Ratio by using an algorithm that estimates some properties of the Human Visual System. We then compare the BRIQA algorithm not only with the mainstream estimator in IQA, PSNR, but also with state-of-the-art IQA algorithms such as Structural SIMilarity (SSIM), Mean Structural SIMilarity (MSSIM), and Visual Information Fidelity (VIF). Our experiments show that BRIQA correlates well with PSNR, yet this proposal does not strictly need the reference image in order to estimate the quality of the recovered image.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_42-BRIQA_Framework_for_the_Blind_and_Referenced.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical and Numerical Study of the Onset of Electroconvection in a Dielectric Nanofluid Saturated a Rotating Darcy Porous Medium</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070841</link>
        <id>10.14569/IJACSA.2016.070841</id>
        <doi>10.14569/IJACSA.2016.070841</doi>
        <lastModDate>2016-08-31T16:51:50.8000000+00:00</lastModDate>
        
        <creator>Abderrahim Wakif</creator>
        
        <creator>Zoubair Boulahia</creator>
        
        <creator>Rachid Sehaqui</creator>
        
        <subject>Linear Stability; Electroconvection; Dielectric Nanofluid; Rotation; Porous Medium; Power Series Method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The simultaneous effect of rotation and a vertical AC electric field on the onset of electroconvection in a horizontal dielectric nanofluid layer saturating a Darcy porous medium is investigated. The boundaries of the dielectric nanofluid layer are considered isothermal, where the vertical nanoparticle flux is zero. The resulting eigenvalue problem is solved analytically by the Galerkin weighted residuals technique (GWRT) and numerically using the power series method (PSM). The results show that an increase either in the AC electric Rayleigh-Darcy number, in the Lewis number, in the nanoparticle Rayleigh-Darcy number or in the modified diffusivity ratio hastens the onset of electroconvection in dielectric nanofluids, while the Taylor-Darcy number and the porosity of the medium delay it.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_41-Analytical_and_Numerical_Study_of_the_Onset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Vehicle-to-Vehicle (V2V) Decision Making in Roundabout using Game Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070840</link>
        <id>10.14569/IJACSA.2016.070840</id>
        <doi>10.14569/IJACSA.2016.070840</doi>
        <lastModDate>2016-08-31T16:51:50.7700000+00:00</lastModDate>
        
        <creator>Lejla Banjanovic-Mehmedovic</creator>
        
        <creator>Edin Halilovic</creator>
        
        <creator>Ivan Bosankic</creator>
        
        <creator>Mehmed Kantardzic</creator>
        
        <creator>Suad Kasapovic</creator>
        
        <subject>autonomous vehicles; decision making; non-zero-sum game theory; mobile robots; roundabout; vehicle-to-vehicle cooperation (V2V); wireless communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Roundabout intersections promote a continuous flow of traffic. Roundabout entry moves traffic through an intersection more quickly, and with less congestion on approaching roads. With the introduction of smart vehicles and cooperative decision-making, roundabout management shortens waiting times and leads to more efficient traffic without breaking the traffic laws and earning penalties. This paper proposes a novel approach to cooperative behavior strategies in conflict situations between autonomous vehicles in a roundabout using game theory. Game theory presents a strategic decision-making technique between independent agents, the players. Each individual player tends to achieve the best payoff by analyzing the possible actions of other players and their influence on the game outcome. The Prisoner&#39;s Dilemma game strategy is selected as the approach to autonomous vehicle-to-vehicle (V2V) decision making on a roundabout test-bed, because the commonly known traffic laws dictate certain rules of vehicle behavior at a roundabout. It is shown that, by integrating non-zero-sum game theory into autonomous vehicle-to-vehicle (V2V) decision-making capabilities, the roundabout entry problem can be solved efficiently with shortened waiting times for individual autonomous vehicles.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_40-Autonomous_Vehicle_to_Vehicle_V2V_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Pulmonary Nodule Detection Scheme based on Multi-Layered Filtering and 3d Distance Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070839</link>
        <id>10.14569/IJACSA.2016.070839</id>
        <doi>10.14569/IJACSA.2016.070839</doi>
        <lastModDate>2016-08-31T16:51:50.7370000+00:00</lastModDate>
        
        <creator>Baber Jahangir</creator>
        
        <creator>Muhammad Imran</creator>
        
        <creator>Qamar Abbas</creator>
        
        <creator>Shahina Rabeel</creator>
        
        <creator>Ayyaz Hussain</creator>
        
        <subject>computer-aided detection system; nodule detection; lung nodule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper proposes a computer-aided detection (CAD) system to automatically detect pulmonary nodules from thoracic computed tomography (CT) images. Automatically detecting pulmonary nodules is difficult because of the large variation in the size, shape, location and density of nodules. The proposed CAD scheme applies multiple 3D disk-shaped Laplacian filters to enhance the shape of spherical regions. Optimal multiple thresholding and 3D distance mapping are used to extract regions of interest and separate nodules. Finally, rule-based pruning removes easily dismissible false positive structures. The proposed system provides an overall nodule detection rate of 80% with an average of 12.2 false positives per scan. The experimental results reveal that the proposed CAD can attain comparatively high performance.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_39-An_Improved_Pulmonary_Nodule_Detection_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern Recognition Approach in Multidimensional Databases: Application to the Global Terrorism Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070838</link>
        <id>10.14569/IJACSA.2016.070838</id>
        <doi>10.14569/IJACSA.2016.070838</doi>
        <lastModDate>2016-08-31T16:51:50.7230000+00:00</lastModDate>
        
        <creator>Semeh BEN SALEM</creator>
        
        <creator>Sami NAOUALI</creator>
        
        <subject>clustering; pattern recognition; multidimensional databases; distance measurement; Khi2 formula; Euclidean distance; Multiple Correspondence Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper presents a pattern recognition approach in multidimensional databases. The approach is based on a clustering method using the distance measurement between a reference profile and the database observations. Two distance measurements will be proposed: an adaptation of the Khi2 formula to the multidimensional context, extracted from the Multiple Correspondence Analysis (MCA), and the Euclidean distance. A comparison between the two distances will be provided to retain the most efficient one for the multidimensional clustering context. The proposed approach will be applied to a real case study representing armed attacks worldwide stored in the Global Terrorism Database (GTD).</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_38-Pattern_Recognition_Approach_in_Multidimensional_Databases_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Item-based Multi-Criteria Collaborative Filtering Algorithm for Personalized Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070837</link>
        <id>10.14569/IJACSA.2016.070837</id>
        <doi>10.14569/IJACSA.2016.070837</doi>
        <lastModDate>2016-08-31T16:51:50.6900000+00:00</lastModDate>
        
        <creator>Qusai Shambour</creator>
        
        <creator>Mou’ath Hourani</creator>
        
        <creator>Salam Fraihat</creator>
        
        <subject>Collaborative Filtering; Recommender Systems; Multi-Criteria; Sparsity; New Item</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Recommender Systems are used to mitigate the information overload problem in different domains by providing personalized recommendations for particular users based on their implicit and explicit preferences. However, Item-based Collaborative Filtering (CF) techniques, the most popular techniques in recommender systems, suffer from sparsity and new-item limitations, which result in inaccurate recommendations. The use of items’ semantic information together with multi-criteria ratings can successfully alleviate such problems and generate more accurate recommendations. This paper proposes an Item-based Multi-Criteria Collaborative Filtering algorithm that integrates items’ semantic information and multi-criteria ratings of items to lessen the known limitations of item-based CF techniques. According to the experimental results, the proposed algorithm proves to be very effective in dealing with both the sparsity and new-item problems and therefore produces more accurate recommendations when compared to standard item-based CF techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_37-An_Item_based_Multi_Criteria_Collaborative_Filtering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urdu Text Classification using Majority Voting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070836</link>
        <id>10.14569/IJACSA.2016.070836</id>
        <doi>10.14569/IJACSA.2016.070836</doi>
        <lastModDate>2016-08-31T16:51:50.6770000+00:00</lastModDate>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Zunaira Shafique</creator>
        
        <creator>Saba Ayub</creator>
        
        <creator>Kamran Malik</creator>
        
        <subject>Text Classification; Tokenization; Stemming; Na&#239;ve Bayes; SVM; Random Forest; Bernoulli NB; Multinomial NB; SGD; Classifier; Majority Voting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Text classification is a tool to assign predefined categories to text documents using supervised machine learning algorithms. It has various practical applications like spam detection, sentiment detection, and detection of a natural language. Based on this idea, we applied five well-known classification techniques to an Urdu language corpus and assigned a class to each document using majority voting. The corpus contains 21769 news documents of seven categories (Business, Entertainment, Culture, Health, Sports, and Weird). The algorithms were not able to work directly on the data, so we applied preprocessing techniques like tokenization, stop word removal and a rule-based stemmer. After preprocessing, 93400 features were extracted from the data to apply machine learning algorithms. Furthermore, we achieved up to 94% precision and recall using majority voting.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_36-Urdu_Text_Classification_using_Majority_Voting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech Impairments in Intellectual Disability: An Acoustic Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070835</link>
        <id>10.14569/IJACSA.2016.070835</id>
        <doi>10.14569/IJACSA.2016.070835</doi>
        <lastModDate>2016-08-31T16:51:50.6430000+00:00</lastModDate>
        
        <creator>Sumanlata Gautam</creator>
        
        <creator>Latika Singh</creator>
        
        <subject>speech development; spectro-temporal; intellectual disabilities; timescales; learning disability; classification model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Speech is the primary means of human communication. Speech production starts at an early age and matures as children grow. People with intellectual or learning disabilities have deficits in speech production and face difficulties in communication. These people need tailor-made therapies or training for rehabilitation to lead their lives independently. To provide this special training, it is important to know the exact nature of the impairment in speech through acoustic analysis. This study calculated the spectro-temporal features relevant to brain structures, encoded at short and long timescales, in the speech of 82 subjects, including 32 typically developing children, 20 adults and 30 participants with intellectual disabilities (severity ranging from mild to moderate). The results revealed that short timescales, which encoded information like formant transitions, were significantly different in the typically developing group from the intellectually disabled group, whereas long timescales were similar among groups. The short timescales were significantly different even within the typically developing group, but not within the intellectually disabled group. The findings suggest that the features encoded at short timescales and the ratio (short/long) play a significant role in classifying the group. It is shown that classifier models with good accuracy can be constructed using the acoustic features under investigation. This indicates that these features are relevant in differentiating normal and disordered speech. These classification models can help in the early diagnosis of intellectual or learning disabilities.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_35-Speech_Impairments_in_Intellectual_Disability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Resilient Architecture for Critical Software-Intensive System-of-Systems (Sisos)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070834</link>
        <id>10.14569/IJACSA.2016.070834</id>
        <doi>10.14569/IJACSA.2016.070834</doi>
        <lastModDate>2016-08-31T16:51:50.6300000+00:00</lastModDate>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Nadeem Salamat</creator>
        
        <creator>Amnah Firdous</creator>
        
        <creator>Mujtaba Husnain</creator>
        
        <subject>Resilient architecture; Critical systems; System-of-System (SoS); Software-intensive SoS (SiSoS); Emergent behavior; Correctness; Safety</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Critical system-of-systems have become considerably software-intensive. A critical system-of-system has to satisfy the correctness properties of liveness and safety. As critical system-of-systems have to operate in open environments in which they interact and collaborate with other systems, satisfaction of the requirements through traditional offline top-down engineering no longer suffices. Most critical software-intensive system-of-systems have no fixed boundaries, and services provided by other systems will come and go in unpredictable ways; in these systems, assuring correctness is a challenging issue. These systems need to tolerate faults in the face of change; they need a resilient architecture. An approach is proposed for the analysis, design, formal specification and verification of critical Software-intensive System-of-Systems.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_34-A_Study_of_Resilient_Architecture_for_Critical_Software_Intensive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Wellness Detection Techniques using Complex Activities Association for Smart Home Ambient</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070833</link>
        <id>10.14569/IJACSA.2016.070833</id>
        <doi>10.14569/IJACSA.2016.070833</doi>
        <lastModDate>2016-08-31T16:51:50.5970000+00:00</lastModDate>
        
        <creator>Farhan Sabir Ujager</creator>
        
        <creator>Azhar Mahmood</creator>
        
        <creator>Shaheen Khatoon</creator>
        
        <creator>Muhammad Imran</creator>
        
        <creator>Umair Abdullah</creator>
        
        <subject>Wellness detection; Elderly people; WSN Smart homes; Activity recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Wireless Sensor Network (WSN) based smart homes have the potential to meet the growing challenges of independent living for elderly people. However, wellness detection of elderly people in smart homes is still a challenging research domain. Many researchers have proposed techniques for this purpose; however, the majority of them do not provide a comprehensive solution, because complex activities cannot be determined easily and comprehensive wellness is difficult to diagnose. The critical review conducted in this study shows that a strong association exists among the vital wellness-determination parameters. In this paper, after analyzing existing techniques, an association-rules-based model is proposed for simple and complex (overlapped) activity recognition and a comprehensive wellness-detection mechanism. It considers vital wellness-detection parameters: the temporal association between a sub-activity and its location, the time gaps between two adjacent activities, and the temporal association of inter- and intra-activities. Activity recognition and wellness detection are performed on the basis of extracted temporal association rules and an expert knowledgebase. A learning component is an important module of the proposed model; it accommodates changing trends in the frequent-pattern behavior of an elderly person and recommends that a caregiver/expert adjust the expert knowledgebase according to the abnormalities found.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_33-Evaluation_of_Wellness_Detection_Techniques_using_Complex.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Submission of Tasks to a Data Center in a Virtualized Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070832</link>
        <id>10.14569/IJACSA.2016.070832</id>
        <doi>10.14569/IJACSA.2016.070832</doi>
        <lastModDate>2016-08-31T16:51:50.5670000+00:00</lastModDate>
        
        <creator>B. Santhosh Kumar</creator>
        
        <creator>Dr. Latha Parthiban</creator>
        
        <subject>Energy consumption; Virtualization; Data Center Selection Framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The submission of tasks to a data center plays a crucial role in achieving services such as scheduling and processing in a cloud computing environment. The energy consumption of a data center must be considered for task processing, as it results in high operational expenditure and an adverse environmental impact. Unfortunately, none of the current research works focus on the energy factor while submitting tasks to a cloud. In this paper, a framework is proposed to select a data center with minimum energy consumption. The service provider has to register all the data centers in a registry. The energy consumed by task processing using virtualization, together with the energy of IT equipment such as routers and switches, is calculated. The data center selection framework finally selects the data center with minimum energy consumption for task processing. The experimental results indicate that the proposed approach results in less energy consumption when compared to the existing algorithms for the selection of data centers.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_32-A_Novel_Approach_for_Submission_of_Tasks_to_a_Data_Center.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Measuring Semantic Similarity between Documents and its Application in Mining the Knowledge Repositories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070831</link>
        <id>10.14569/IJACSA.2016.070831</id>
        <doi>10.14569/IJACSA.2016.070831</doi>
        <lastModDate>2016-08-31T16:51:50.5370000+00:00</lastModDate>
        
        <creator>Ms. K. L. Sumathy</creator>
        
        <creator>Dr. Chidambaram</creator>
        
        <subject>dataset documents; research similarity documents; ontology and corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This paper explains similarity measures and the relationships between knowledge repositories. It also describes the significance of document similarity measures and algorithms, and the types of text to which they can be applied. Document similarity measures include full-text similarity, paragraph similarity, sentence similarity, semantic similarity, structural similarity, and statistical measures. Two different frameworks are proposed in this paper: one for measuring document-to-document similarity, and another that measures similarity between a document and multiple documents. These two proposed models can use any one of the similarity measures in their implementation, which is put forth for further research.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_31-A_Hybrid_Approach_for_Measuring_Semantic_Similarity_between_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-based Query Expansion for Arabic Text Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070830</link>
        <id>10.14569/IJACSA.2016.070830</id>
        <doi>10.14569/IJACSA.2016.070830</doi>
        <lastModDate>2016-08-31T16:51:50.5200000+00:00</lastModDate>
        
        <creator>Waseem Alromima</creator>
        
        <creator>Ibrahim F. Moawad </creator>
        
        <creator>Rania Elgohary</creator>
        
        <creator>Mostafa Aref </creator>
        
        <subject>Information Retrieval; Arabic Ontology; Semantic Search; Arabic Quran Corpus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Semantic resources are important parts of Information Retrieval (IR) systems such as search engines and Question Answering (QA); these resources should be available, readable, and understandable. In the semantic web, ontologies play a central role in information retrieval, as they are used to retrieve more relevant information from unstructured data. This paper presents a semantic-based retrieval system for Arabic text, which expands the input query semantically using an Arabic domain ontology. In the proposed approach, the search engine index is represented using the Vector Space Model (VSM), and an Arabic place-nouns domain ontology, constructed and implemented from an Arabic corpus using the Web Ontology Language (OWL), is employed. The proposed approach has been evaluated on the Arabic Quran corpus, and the experiments show that it outperforms traditional keyword-based methods in terms of both precision and recall.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_30-Ontology_based_Query_Expansion_for_Arabic_Text_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Performance Evaluation of IPv6 and IPv4 Over 10 Gigabit Ethernet and InfiniBand using IPoIB</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070829</link>
        <id>10.14569/IJACSA.2016.070829</id>
        <doi>10.14569/IJACSA.2016.070829</doi>
        <lastModDate>2016-08-31T16:51:50.4900000+00:00</lastModDate>
        
        <creator>Eric Gamess</creator>
        
        <creator>Humberto Ortiz-Zuazaga</creator>
        
        <subject>IPv4; IPv6; Performance Evaluation; InfiniBand; IP over InfiniBand; 10 Gigabit Ethernet; Benchmarking Tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>IPv6 is the response to the shortage of IPv4 addresses. It was defined almost twenty years ago by the IETF as a replacement for IPv4, and little by little, it is becoming the preponderant Internet protocol. The growth of the Internet has led to the development of high performance networks. On one hand, Ethernet has evolved significantly, and today it is common to find 10 Gigabit Ethernet networks in LANs. On the other hand, another approach to high performance networking is based on RDMA (Remote Direct Memory Access), which offers innovative features such as kernel bypass, zero copy, and offload of the splitting and assembly of messages into packets to the CAs (Channel Adapters). InfiniBand is currently the most popular technology that implements RDMA. It uses verbs instead of sockets, and a big effort from the community is required to port TCP/IP software to InfiniBand in order to take advantage of its benefits. Meanwhile, IPoIB (IP over InfiniBand) is a protocol that permits the execution of socket-based applications on top of InfiniBand, without any change, at the expense of performance. In this paper, we present a performance evaluation of IPv6 and IPv4 over 10 Gigabit Ethernet and IPoIB. Our results show that 10 Gigabit Ethernet has a better throughput than IPoIB, especially for small and medium payload sizes. However, as the payload size increases, the advantage of 10 Gigabit Ethernet is reduced in comparison to IPoIB/FDR. With respect to latency, IPoIB did much better than 10 Gigabit Ethernet. Finally, our research also indicates that in a controlled environment, IPv4 has a better performance than IPv6.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_29-Analytical_Performance_Evaluation_of_Ipv6_and_Ipv4_Over.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Load Balancing Algorithm for the Arrangement-Star Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070828</link>
        <id>10.14569/IJACSA.2016.070828</id>
        <doi>10.14569/IJACSA.2016.070828</doi>
        <lastModDate>2016-08-31T16:51:50.4570000+00:00</lastModDate>
        
        <creator>Ahmad M. Awwad</creator>
        
        <creator>Jehad A. Al-Sadi</creator>
        
        <subject>Interconnection Networks; Topological Properties; Arrangement-Star; Load balancing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The Arrangement-Star is a well-known network in the literature and one of the promising interconnection networks in the area of supercomputing; it is expected to be one of the attractive alternatives for future high-speed parallel computers. The Arrangement-Star network has many attractive topological properties, such as small diameter, low degree, good connectivity, low broadcasting cost, and flexibility in choosing the desired network size. Although some research work has been done on the Arrangement-Star network, more investigation and research effort is still needed to explore this attractive topology and utilize it to solve real-life applications. In this paper, we attempt to fill this gap by proposing an efficient algorithm for load balancing among the different processors of the Arrangement-Star network. The proposed algorithm, named the Arrangement-Star Clustered Dimension Exchange Method (ASCDEM), is presented and implemented on the Arrangement-Star network. The algorithm is based on the Clustered Dimension Exchange Method (CDEM) and is shown to be efficient in redistributing the load among all the processors of the network as evenly as possible. Complete details of the algorithm, in addition to examples and discussions exploring the benefits of applying this distributed algorithm, are presented in this paper. Furthermore, an analytical study of the algorithm is presented and discussed to explore its attractive performance.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_28-Efficient_Load_Balancing_Algorithm_for_the_Arrangement_Star_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factors Influencing Patients’ Attitudes to Exchange Electronic Health Information in Saudi Arabia: An Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070827</link>
        <id>10.14569/IJACSA.2016.070827</id>
        <doi>10.14569/IJACSA.2016.070827</doi>
        <lastModDate>2016-08-31T16:51:50.4270000+00:00</lastModDate>
        
        <creator>Mariam Al-Khalifa</creator>
        
        <creator>Shaheen Khatoon</creator>
        
        <creator>Azhar Mahmood</creator>
        
        <creator>Iram Fatima</creator>
        
        <subject>Health Information Exchange; Electronic Medical Record; TAM; Theoretical Model Introduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Health Information Exchange (HIE) systems electronically transfer patients’ clinical, demographic, and health-related information between different care providers. These exchanges offer improved health care quality, reduced medical errors and health care costs, and increased patient safety and organizational efficiency. However, technologies cannot bring such improvements if patients are reluctant to share personal health information, which could impede the success of an HIE system. The purpose of this study is to identify the factors that determine patients’ acceptance of sharing their medical information among different care providers. Based primarily on the Theory of Planned Behavior (TPB) and the Technology Acceptance Model (TAM), combined with patients’ perspectives, an integrated model is proposed. A questionnaire survey was conducted among residents of the Eastern Province of the Kingdom of Saudi Arabia to measure the proportion of respondents willing to share their information. A sample of 300 respondents over 18 years of age was collected. Basic descriptive statistical analysis and reliability and validity assessments were conducted to analyze the data and measure the goodness of the model. Furthermore, Structural Equation Modelling was used to test the research hypotheses. The findings show that perceived benefit, perceived risk, subjective norms, and attitude are the main predictors of patients’ willingness or unwillingness to share their health information. The study reveals that more attention should be directed to these factors during the design and implementation of future HIE systems to avoid expected barriers.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_27-Factors_Influencing_Patients_Attitudes_to_Exchange_Electronic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Chemical Reaction Optimization for Max Flow Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070826</link>
        <id>10.14569/IJACSA.2016.070826</id>
        <doi>10.14569/IJACSA.2016.070826</doi>
        <lastModDate>2016-08-31T16:51:50.3930000+00:00</lastModDate>
        
        <creator>Reham Barham</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Azzam Sliet</creator>
        
        <subject>Chemical reaction optimization(CRO); Decomposition; Heuristic; Max Flow problem; Molecule; Optimization; Reactions; Synthesis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This study presents an algorithm for the MaxFlow problem using the Chemical Reaction Optimization (CRO) algorithm. CRO is a recently established meta-heuristic optimization algorithm inspired by the nature of chemical reactions. The main concern is to find the best maximum flow value at which the flow can be shipped from the source node to the sink node in a flow network without violating any capacity constraints, i.e., the flow of each edge remains within the upper bound of its capacity. The proposed MaxFlow-CRO algorithm is presented, analyzed asymptotically, and tested experimentally. The asymptotic runtime is derived theoretically. The algorithm is implemented in the JAVA programming language. Results show a good performance, with a complexity of O(I·E^2) for I iterations and E edges. The number of iterations I is an important factor that affects the results obtained: as the number of iterations increases, the best possible MaxFlow value is obtained.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_26-Chemical_Reaction_Optimization_for_Max_Flow_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Influence of Adopting a Text-Free User Interface on the Usability of a Web-based Government System with Illiterate and Semi-Literate People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070825</link>
        <id>10.14569/IJACSA.2016.070825</id>
        <doi>10.14569/IJACSA.2016.070825</doi>
        <lastModDate>2016-08-31T16:51:50.3330000+00:00</lastModDate>
        
        <creator>Ms. Ghadam Alduhailan</creator>
        
        <creator>Dr. Majed Alshamari</creator>
        
        <subject>text-free interface; web-based system; usability; e-services; government; consolidated framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Illiterate and semi-literate people usually face different types of difficulties when they use the Internet, such as reading and recognising text. This research aims to develop and examine the influence of adopting a text-free user interface on the usability of a web-based government system with illiterate and semi-literate people. A number of steps have been followed in order to achieve this research goal. An extensive literature review was carried out to explore the adoption of different concepts or representations of content to help illiterate/semi-literate people in Information and Communication Technology (ICT) projects. A consolidated framework is then proposed and adopted in order to develop a text-free user interface, which can help in building such an interface for a given service in Saudi Arabia. Cultural factors, education level, text-free icons, and usability guidelines have been considered in this framework. A prototype of a web-based government system has been designed and developed taking the framework into account. Usability testing and heuristic evaluation were used as usability assessment methods to evaluate the system’s usability and its impact on illiterate people in Saudi Arabia. The results are encouraging, as the usability measures achieved imply that adopting the consolidated framework has positively influenced usability in this research.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_25-Influence_of_Adopting_a_Text_Free_User_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Optimization of the Multi-Pumped Raman Optical Amplifier using MOICA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070824</link>
        <id>10.14569/IJACSA.2016.070824</id>
        <doi>10.14569/IJACSA.2016.070824</doi>
        <lastModDate>2016-08-31T16:51:50.3170000+00:00</lastModDate>
        
        <creator>Mohsen Katebi Jahromi</creator>
        
        <creator>Seyed Mojtaba Saif</creator>
        
        <creator>Masoud Jabbari</creator>
        
        <subject>Raman amplifier; ICA; WDM System; Optical fiber; Multi-objective Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In order to achieve the best gain profile for multi-pump distributed Raman amplifiers in Wavelength Division Multiplexing (WDM) transmission systems, the power and wavelength of the pumps, the type of pumping configuration, and the number of pump signals are the most important factors. In this paper, using a Multi-Objective Imperialist Competition Optimization Algorithm (MOICA) with the lowest power consumption and the lowest number of pumps, we propose the most uniform gain profile for two types of pumping configurations in the S-band and compare the results. Considering the design conditions, including the type of pumping configuration, fiber length, fiber type, and number of pump signals, and using the multi-objective algorithm, we propose a method that can be used to achieve a gain level at which the amplifier has the lowest power consumption and the lowest gain ripple. Accordingly, we can design a powerful WDM transmission system based on a Distributed Raman Amplifier (DRA) with good performance and efficiency.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_24-Performance_Optimization_of_the_Multi_Pumped_Raman_Optical_Amplifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Usability of Optimizing Text-based CAPTCHA Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070823</link>
        <id>10.14569/IJACSA.2016.070823</id>
        <doi>10.14569/IJACSA.2016.070823</doi>
        <lastModDate>2016-08-31T16:51:50.2870000+00:00</lastModDate>
        
        <creator>Suliman A. Alsuhibany</creator>
        
        <subject>text-based CAPTCHA; usability; security; optimization; experimentation; evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>A CAPTCHA is a test that can automatically tell humans and computer programs apart. It is a mechanism widely used nowadays for protecting web applications, interfaces, and services from malicious users and automated spammers. Usability and robustness are two fundamental aspects of a CAPTCHA: the usability aspect is the ease with which humans pass its challenges, while the robustness is the strength of its segmentation-resistance mechanism. The collapsing mechanism, which removes the space between characters to prevent segmentation, has been shown to be reasonably resistant to known attacks. On the other hand, this mechanism considerably reduces the human solvability of text-based CAPTCHAs. Accordingly, an optimizer has previously been proposed that automatically enhances the usability of CAPTCHA generation without sacrificing its robustness level. However, this optimizer had not yet been evaluated in terms of improving usability. This paper, therefore, evaluates the usability of this optimizer by conducting an experimental study. The results of this evaluation show that a statistically significant enhancement is found in the usability of text-based CAPTCHA generation.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_23-Evaluating_the_Usability_of_Optimizing_Text_based_CAPTCHA_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Design Principles to Enhance SDN Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070822</link>
        <id>10.14569/IJACSA.2016.070822</id>
        <doi>10.14569/IJACSA.2016.070822</doi>
        <lastModDate>2016-08-31T16:51:50.2530000+00:00</lastModDate>
        
        <creator>Iyad Alazzam</creator>
        
        <creator>Izzat Alsmadi</creator>
        
        <creator>Khalid M.O Nahar</creator>
        
        <subject>SDN; OpenFlow; Software design; SDN architecture; Design principles; Design patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>SDN as a network architecture emerged on top of existing technologies and knowledge. By defining the controller as a software program, SDN made a strong connection between networking and software engineering. Traditionally, network programs were vendor-specific and embedded in hardware switches and routers. SDN focuses on the isolation between the control and forwarding (data) planes. However, in a complete SDN network, there are many other areas (e.g., CPU, memory, hardware, bandwidth, and software). In this paper, we propose extending the SDN architecture with isolation layers, with the goal of improving the overall network design. Such a flexible architecture can support future evolution and changes without the need to significantly change the original components or modules.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_22-Software_Design_Principles_to_Enhance_SDN_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MRPPSim: A Multi-Robot Path Planning Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070821</link>
        <id>10.14569/IJACSA.2016.070821</id>
        <doi>10.14569/IJACSA.2016.070821</doi>
        <lastModDate>2016-08-31T16:51:50.2230000+00:00</lastModDate>
        
        <creator>Ebtehal Turki Saho Alotaibi</creator>
        
        <creator>Hisham Al-Rawi</creator>
        
        <subject>simulation; modeling; evaluation; multi-robot path planning problem; performance measurements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The multi-robot path planning problem (MRPP) is an interesting research problem with great potential for several optimization problems in the world. In the MRPP domain, robots must move from their start locations to their goal locations while avoiding collisions with each other. MRPP is a relevant problem in several domains, including automated package handling inside warehouses, automated guided vehicles, planetary exploration, robotic mining, and video games. This work introduces MRPPSim, a new modeling, evaluation, and simulation tool for multi-robot path planning algorithms and their applications. In doing so, it handles all the aspects related to multi-robot path planning algorithms. MRPPSim unifies the representation of the input and provides researchers with a set of evaluation models, each serving a set of objectives. It offers a comprehensive method to evaluate an algorithm’s performance and compare it to algorithms that solve public benchmark problems in the literature. The work presented in this paper also provides a complete tool to reformat and control user input, covering critical small benchmark, biconnected, random, and grid problems. Once all of this is performed, it calculates the common performance measurements of multi-robot path planning algorithms in a unified way. MRPPSim also animates the results so that researchers can follow their algorithms’ executions. In addition, MRPPSim is designed as a set of modules, each dedicated to a specific function, which allows a new algorithm, evaluation model, or performance measurement to be easily plugged into the simulator.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_21-MRPPSim_A_Multi_Robot_Path_Planning_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Case-based Reasoning in Medicine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070820</link>
        <id>10.14569/IJACSA.2016.070820</id>
        <doi>10.14569/IJACSA.2016.070820</doi>
        <lastModDate>2016-08-31T16:51:50.2070000+00:00</lastModDate>
        
        <creator>Nabanita Choudhury</creator>
        
        <creator>Shahin Ara Begum</creator>
        
        <subject>case-based reasoning; medicine; artificial intelligence; soft computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Case-based reasoning (CBR), based on the memory-centered cognitive model, is a strategy that focuses on how people learn a new skill or generate hypotheses about new situations based on their past experiences. Among the various Artificial Intelligence tracks, CBR, due to its intrinsic similarity to the human reasoning process, has been very promising for the utilization of intelligent systems in various domains, in particular the domain of medicine. In this paper, we extensively survey the literature on CBR systems used in the medical domain over the past few decades. We also discuss the difficulties of implementing CBR in medicine and outline opportunities for future work.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_20-A_Survey_on_Case_based_Reasoning_in_Medicine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Simulation P2P Botnets Signature Detection by Rule-based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070819</link>
        <id>10.14569/IJACSA.2016.070819</id>
        <doi>10.14569/IJACSA.2016.070819</doi>
        <lastModDate>2016-08-31T16:51:50.1770000+00:00</lastModDate>
        
        <creator>Raihana Syahirah Abdullah</creator>
        
        <creator>Faizal M.A.</creator>
        
        <creator>Zul Azri Muhamad Noh</creator>
        
        <creator>Nurulhuda Ahmad</creator>
        
        <subject>Botnets; P2P Botnets; Signature; Rule-based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The Internet is one of the most salient communication services. Companies take this opportunity by putting critical resources online for effective business organization. This has given rise to the activities of cyber criminals actuated by botnets. P2P networks have gained popularity through distributed applications such as file sharing, web caching and network storage, yet it is not easy to guarantee that exchanged files are not malicious given the non-centralized authority of P2P networks. For this reason, these networks become a suitable venue for malicious software to spread. It is straightforward for attackers to target the vulnerable hosts in existing P2P networks as bot candidates and build their zombie army: such hosts can be compromised and turned into P2P bots. In order to detect these botnets, a complete flow analysis is necessary. In this paper, we propose an automated rule-based detection approach for P2P botnets, which currently focuses on P2P signature illumination. We consider both the synchronisation within a botnet and the malicious behaviour each bot exhibits at the host or network level to recognize the signatures and activities in P2P botnet traffic. The rule-based approach has high detection accuracy and a low false-positive rate.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_19-Automated_Simulation_P2P_Botnets_Signature_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Voting Scheme for Efficient Vanishing Point Detection in General Road Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070818</link>
        <id>10.14569/IJACSA.2016.070818</id>
        <doi>10.14569/IJACSA.2016.070818</doi>
        <lastModDate>2016-08-31T16:51:50.1600000+00:00</lastModDate>
        
        <creator>Vipul H. Mistry</creator>
        
        <creator>Ramji Makwana</creator>
        
        <subject>Road Detection; Vanishing Point; Gabor Filter; voting scheme; general road segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Next-generation automobile industries are aiming at the development of vision-based driver assistance systems and driverless vehicle systems. In the context of this application, a major challenge lies in efficient road region segmentation from captured image frames. Recent research suggests that the use of a global feature like the vanishing point makes road detection algorithms more robust and general for all types of roads. The goal of this research work is to reduce the computational complexity involved in the voting process for vanishing point identification. This paper presents an efficient and optimized voter selection strategy to identify the vanishing point in general road images. The major outcome of this algorithm is a reduction in computational complexity as well as an improvement in the efficiency of vanishing point detection for all types of road images. The key attributes of the methodology are dominant orientation selection, voter selection based on voter location, and a modified voting scheme combining dominant orientation and a distance-based soft voting process. The results of a number of qualitative and quantitative experiments clearly demonstrate the efficiency of the proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_18-Optimized_Voting_Scheme_for_Efficient_Vanishing_Point.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Solutions for SDN-Exclusive Security Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070817</link>
        <id>10.14569/IJACSA.2016.070817</id>
        <doi>10.14569/IJACSA.2016.070817</doi>
        <lastModDate>2016-08-31T16:51:50.1300000+00:00</lastModDate>
        
        <creator>Jakob Spooner</creator>
        
        <creator>Dr Shao Ying Zhu</creator>
        
        <subject>SDN; software; security; OpenFlow; networking; network security; NFV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Software Defined Networking is a paradigm still in its emergent stages in the realm of production-scale networks. Centralisation of network control introduces a new level of flexibility for network administrators and programmers. Security is a huge factor contributing to consumer resistance to the implementation of SDN architecture. Without addressing the issues inherent in SDN’s centralised nature, the benefits in performance and network configuration flexibility cannot be harnessed. This paper explores key threats posed to SDN environments and comparatively analyses some of the mechanisms proposed as mitigations against these threats; it also provides some insight into future work that would enable a more secure SDN architecture.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_17-A_Review_of_Solutions_for_SDN_Exclusive_Security_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finite Elements Modeling of Linear Motor for Automatic Sliding Door Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070816</link>
        <id>10.14569/IJACSA.2016.070816</id>
        <doi>10.14569/IJACSA.2016.070816</doi>
        <lastModDate>2016-08-31T16:51:50.1000000+00:00</lastModDate>
        
        <creator>Aymen Lachheb</creator>
        
        <creator>Lilia El Amraoui</creator>
        
        <creator>Jalel Khedhiri</creator>
        
        <subject>linear motor; sliding door; 2D-finite-element analysis; switched reluctance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper, a linear switched reluctance motor is designed and investigated for use as a sliding door drive system. A nonlinear two-dimensional finite-element model is built to predict the performance of the designed motor. The static electromagnetic characteristics are investigated and analyzed: the inductance and the electromagnetic force are determined for different translator positions and current intensities, taking magnetic saturation effects into account. The results of the analysis prove that the magnetic behavior of this motor is nonlinear. Furthermore, an important asymmetry of the static and dynamic characteristics between the extreme phases and the central phase is observed at high excitation levels.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_16-Finite_Elements_Modeling_of_Linear_Motor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content based Video Retrieval Systems Performance based on Multiple Features and Multiple Frames using SVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070815</link>
        <id>10.14569/IJACSA.2016.070815</id>
        <doi>10.14569/IJACSA.2016.070815</doi>
        <lastModDate>2016-08-31T16:51:50.0830000+00:00</lastModDate>
        
        <creator>Mohd Aasif Ansari</creator>
        
        <creator>Hemlata Vasishtha</creator>
        
        <subject>CBVR; KFCG; Multiple Frames; SVM; BTC; Gabor filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper, the performance of Content Based Video Retrieval systems is analysed and compared for three different types of feature vectors. These features are generated using three different algorithms: Block Truncation Coding (BTC) extended for colors, Kekre’s Fast Codebook Generation (KFCG) algorithm, and Gabor filters. The feature vectors are extracted from multiple frames instead of using only key frames or all frames of the videos. The performance of each type of feature is analysed by comparing the results obtained with two different techniques: Euclidean Distance and Support Vector Machine (SVM). Although a significant number of researchers have expressed dissatisfaction with using an image as a query for video retrieval systems, the techniques and features used here provide enhanced and higher retrieval results while using images from the videos. Apart from higher efficiency, complexity has also been reduced, as it is not required to find key frames for all the shots. The system is evaluated using a database of 1000 videos covering 20 different categories. The performance achieved using BTC features calculated from color components is compared with that achieved using Gabor features and KFCG features. These performances are compared again between systems using SVM and systems without SVM.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_15-Content_based_Video_Retrieval_Systems_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image Inpainting with RBF Interpolation Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070814</link>
        <id>10.14569/IJACSA.2016.070814</id>
        <doi>10.14569/IJACSA.2016.070814</doi>
        <lastModDate>2016-08-31T16:51:50.0500000+00:00</lastModDate>
        
        <creator>Mashail Alsalamah</creator>
        
        <creator>Saad Amin</creator>
        
        <subject>Inpainting; interpolate; texture synthesis; exemplar texture inpainting; Radial Basis Function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Inpainting is a method for repairing damaged images or removing unwanted parts of an image. While this process was performed by professional artists in the past, today the use of this technology is emerging in the medical area, especially in the medical imaging realm. In this study, the proposed inpainting method uses a radial basis function (RBF) interpolation technique. We first explain radial basis functions and then the RBF interpolation system. This technique generally depends on matrix operations, which are executed after the interpolation and form a main part of the process. The interpolation matrix has known values used for interpolating n values, and we need to find the inverse matrix M^-1 for the n + 1 values of the original and inserted data. The inpainting operation is implemented in an object-oriented programming (OOP) language, where the process is completed. The algorithm used in this study has a graphical user interface, and several skin images are used for testing the system. The obtained output shows a high level of accuracy that supports the validity of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_14-Medical_Image_Inpainting_with_RBF_Interpolation_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Recognition from Speech using Prosodic and Linguistic Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070813</link>
        <id>10.14569/IJACSA.2016.070813</id>
        <doi>10.14569/IJACSA.2016.070813</doi>
        <lastModDate>2016-08-31T16:51:50.0200000+00:00</lastModDate>
        
        <creator>Mahwish Pervaiz</creator>
        
        <creator>Tamim Ahmed Khan</creator>
        
        <subject>Emotion Extraction; Prosodic Features; Temporal Features; Dynamic Time Wrapping; Segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The speech signal can be used to extract emotions. However, it is pertinent to note that variability in the speech signal can make emotion extraction a challenging task. There are a number of factors that indicate the presence of emotions. Prosodic and temporal features have been used previously for the purpose of identifying emotions. Separately, prosodic/temporal and linguistic features of speech do not provide results with adequate accuracy. We can also infer emotions from linguistic features if we can identify the contents. Therefore, we consider prosodic as well as temporal and linguistic features, which helps increase the accuracy of emotion recognition; this is the first contribution reported in this paper. We propose a two-step model for emotion recognition: we extract emotions based on prosodic features in the first step, and we extract emotions from word segmentation combined with linguistic features in the second step. Through our experiments, we show that classification mechanisms trained without considering the age factor do not improve accuracy. We argue that the classifier should be based on the age group for which the actual emotion extraction is required, and this is our second contribution submitted in this paper.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_13-Emotion_Recognition_from_Speech_using_Prosodic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing and Comparison between Two Algorithms to Make a Decision in a Wireless Sensors Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070812</link>
        <id>10.14569/IJACSA.2016.070812</id>
        <doi>10.14569/IJACSA.2016.070812</doi>
        <lastModDate>2016-08-31T16:51:50.0030000+00:00</lastModDate>
        
        <creator>Fouad Essahlaoui</creator>
        
        <creator>Ahmed El Abbassi</creator>
        
        <creator>Rachid Skouri</creator>
        
        <subject>Wireless Network; Sensors; Stationary; Filter; CUSUM; Average; Arduino; Butane</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The clinical presentation of acute poisoning by CO and hydrocarbon gas (butane, CAS 106-97-8) varies depending on terrain, humidity, temperature, duration of exposure and the concentration of the toxic gas: it ranges from consciousness disorders (100 ppm or 15%), which quickly force miners back to ambient air and oxygen, to sudden coma (300 ppm or 45%) requiring hospitalization in a monitoring unit; otherwise, death occurs within a few minutes at the poisoning site [1].
A butane gas leak at the filling plant, located very close to the Faculty, motivated a gas detection project that deploys a set of sensors to warn of possible leaks which could affect the students, teachers and staff of the institution.
Therefore, this paper describes the implementation of two methods, the first being an Average filter and the second the CUSUM algorithm, to make a warning decision based on the signals given by the wireless sensors [9], [14-15] installed on the inner side of the Faculty of Science and Technology in Errachidia.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_12-Implementing_and_Comparison_between_Two_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSIM and MATLAB Co-Simulation of Photovoltaic System using “P and O” and “Incremental Conductance” MPPT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070811</link>
        <id>10.14569/IJACSA.2016.070811</id>
        <doi>10.14569/IJACSA.2016.070811</doi>
        <lastModDate>2016-08-31T16:51:49.9730000+00:00</lastModDate>
        
        <creator>ANAS EL FILALI</creator>
        
        <creator>EL MEHDI LAADISSI</creator>
        
        <creator>MALIKA ZAZI</creator>
        
        <subject>Photovoltaic; Boost; PWM; MPPT; P and O; Incremental Conductance; co-simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>The photovoltaic (PV) generator shows a nonlinear current-voltage (I-V) characteristic whose maximum power point (MPP) varies with irradiance and temperature. By employing simple maximum power point tracking algorithms, we can track this MPP and increase the efficiency of the photovoltaic system. Two methods for maximum power point tracking (MPPT) of a photovoltaic system under variable temperature and insolation conditions are discussed in this work: Incremental Conductance compared to the conventional tracking algorithm (P&amp;O).
In this paper, a new modeling solution is presented, using co-simulation between a specialist modeling tool called PSIM and the popular MATLAB software through the SimCoupler module. Co-simulation is carried out by implementing the MPPT command circuits in PSIM, and the PV panel, boost DC-DC converter and battery in MATLAB/Simulink.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_11-PSIM_and_MATLAB_Co_Simulation_of_Photovoltaic_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Shunt Active Power Filter Controlled by SVPWM Connected to a Photovoltaic Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070810</link>
        <id>10.14569/IJACSA.2016.070810</id>
        <doi>10.14569/IJACSA.2016.070810</doi>
        <lastModDate>2016-08-31T16:51:49.9430000+00:00</lastModDate>
        
        <creator>Ismail BOUYAKOUB</creator>
        
        <creator>Abedelkhader DJAHBAR</creator>
        
        <creator>Benyounes MAZARI</creator>
        
        <creator>Omar MAAROUF</creator>
        
        <subject>shunt active power filter; harmonic currents; MVF; SVPWM; three level inverter; GPV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper we study the shunt active power filter. This filter contains a three-level voltage inverter controlled by the SVPWM strategy and supplied by a DC bus powered by a solar array, in order to improve the quality of electric energy and eliminate the harmonic currents generated by non-linear loads; these currents are identified using the multivariable filter method. The objective of this study is to obtain an unpolluted source for the power grid. All the simulation results are obtained using the MATLAB environment.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_10-Simulation_of_Shunt_Active_Power_Filter_Controlled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Implementation of Parallel Particle Swarm Optimization Algorithm and Compared with Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070809</link>
        <id>10.14569/IJACSA.2016.070809</id>
        <doi>10.14569/IJACSA.2016.070809</doi>
        <lastModDate>2016-08-31T16:51:49.9270000+00:00</lastModDate>
        
        <creator>BEN AMEUR Mohamed sadek</creator>
        
        <creator>SAKLY Anis</creator>
        
        <subject>PSO algorithm; GA; FPGA; Finite state machine; hardware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper, a digital implementation of the Particle Swarm Optimization (PSO) algorithm is developed for deployment on a Field Programmable Gate Array (FPGA). PSO is a recent intelligent heuristic search method whose mechanism is inspired by the swarming of biological populations. PSO is similar to the Genetic Algorithm (GA); in fact, both use a combination of deterministic and probabilistic rules. The experimental results are effective for evaluating the performance of PSO compared to GA and other PSO algorithms. New digital solutions are available for generating hardware implementations of PSO algorithms. Thus, we developed a hardware architecture based on a finite state machine (FSM) and implemented it on an FPGA to solve some dispatch computing problems, outperforming other circuits based on swarm intelligence. Moreover, the inherent parallelism of these new hardware solutions, with their large computational capacity, makes the running time negligible regardless of the complexity of the processing.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_9-FPGA_Implementation_of_Parallel_Particle_Swarm_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New DTC Scheme using Second Order Sliding Mode and Fuzzy Logic of a DFIG for Wind Turbine System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070808</link>
        <id>10.14569/IJACSA.2016.070808</id>
        <doi>10.14569/IJACSA.2016.070808</doi>
        <lastModDate>2016-08-31T16:51:49.8970000+00:00</lastModDate>
        
        <creator>Zinelaabidine BOUDJEMA</creator>
        
        <creator>Rachid TALEB</creator>
        
        <creator>Adil YAHDOU</creator>
        
        <subject>DFIG; wind turbine; DTC; SOSM; super twisting; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>This article presents a novel direct torque control (DTC) scheme using high order sliding mode (HOSM) and fuzzy logic for a doubly fed induction generator (DFIG) incorporated in a wind turbine system. The conventional direct torque control strategy (C-DTC) using hysteresis controllers presents considerable flux and torque undulations in the steady state. In order to ensure a robust DTC method for the DFIG rotor-side converter and reduce flux and torque ripples, a second order sliding mode (SOSM) technique based on the super twisting algorithm and fuzzy logic is used in this paper. Simulation results show the efficiency of the proposed control method, especially in the quality of the provided power compared to C-DTC.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_8-A_New_DTC_Scheme_using_Second_Order_Sliding_Mode.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis &amp; Comparison of Optimal Economic Load Dispatch using Soft Computing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070807</link>
        <id>10.14569/IJACSA.2016.070807</id>
        <doi>10.14569/IJACSA.2016.070807</doi>
        <lastModDate>2016-08-31T16:51:49.8630000+00:00</lastModDate>
        
        <creator>Vijay Kumar</creator>
        
        <creator>Dr. Jagdev Singh</creator>
        
        <creator>Dr. Yaduvir Singh</creator>
        
        <creator>Dr. Sanjay Prakash Sood</creator>
        
        <subject>ELD; FL; GA; FCGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Power plants are not situated at equal distances from load centers, and their fuel prices differ. In this paper, economic load dispatch (ELD) of real power generation is considered. ELD is the scheduling of generators to minimize the total operating cost of generating units, subject to the equality constraint of power balance within the minimum and maximum operating limits of the generating units. In this paper, fuzzy logic (FL), genetic algorithms (GAs) &amp; a GA-FL hybrid are utilized to find optimal solutions to ELD problems. ELD solutions are found by solving conventional load flow equations while at the same time minimizing fuel costs. Performance is analyzed by comparing the values obtained with these soft computing techniques for ELD.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_7-Performance_Analysis_Comparison_of_Optimal_Economic_Load.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of OADM DWDM Ring Optical Network using Various Modulation Formats</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070806</link>
        <id>10.14569/IJACSA.2016.070806</id>
        <doi>10.14569/IJACSA.2016.070806</doi>
        <lastModDate>2016-08-31T16:51:49.8170000+00:00</lastModDate>
        
        <creator>Vikrant Sharma</creator>
        
        <creator>Dalveer Kaur</creator>
        
        <subject>NRZ; RZ; OADM; WDM; ADM; ROADM; PRBS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper, the performance of a ring optical network is analyzed at bit rates of 2.5 Gbps and 5 Gbps for various modulation formats: NRZ rectangular, NRZ raised cosine, RZ soliton, RZ super Gaussian, RZ raised cosine and RZ rectangular. The effect of insertion losses is analyzed. It is observed that RZ soliton performs better than all other formats, and with this scheme the system can tolerate up to 95 dB of insertion loss. It has also been observed that system performance rises for the NRZ rectangular and RZ soliton formats beyond 10 GHz bandwidth.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_6-Optimization_of_OADM_DWDM_Ring_Optical_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Threshold for Background Subtraction in Moving Object Detection using Stationary Wavelet Transforms 2D</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070805</link>
        <id>10.14569/IJACSA.2016.070805</id>
        <doi>10.14569/IJACSA.2016.070805</doi>
        <lastModDate>2016-08-31T16:51:49.7870000+00:00</lastModDate>
        
        <creator>Oussama Boufares</creator>
        
        <creator>Noureddine Aloui</creator>
        
        <creator>Adnene Cherif</creator>
        
        <subject>moving object detection; SWT; background subtraction; adaptive threshold; kalman filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Both object detection and tracking are challenging problems because of the variety of objects and their varying presence in the scene. Generally, object detection is a prerequisite for target tracking, while tracking has no effect on object detection. In this paper, we propose an algorithm to automatically detect and track moving objects in a video sequence captured with a fixed camera. In the detection step we perform a background subtraction algorithm; the obtained results are decomposed using the discrete stationary wavelet transform 2D, and the coefficients are thresholded using the Birge-Massart strategy. The tracking step is based on the classical Kalman filter algorithm, using as many Kalman filters as there are moving objects in the image frame. The test evaluation proved the efficiency of our algorithm for motion detection using an adaptive threshold. The comparison results show that the proposed algorithm gives better detection and tracking performance than the other methods.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_5-Adaptive_Threshold_for_Background_Subtraction_in_Moving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Cylindrical DRA for C-Band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070804</link>
        <id>10.14569/IJACSA.2016.070804</id>
        <doi>10.14569/IJACSA.2016.070804</doi>
        <lastModDate>2016-08-31T16:51:49.7700000+00:00</lastModDate>
        
        <creator>Hamed Gharsallah</creator>
        
        <creator>Lotfi Osman</creator>
        
        <creator>Lassaad Latrach</creator>
        
        <subject>Dielectric Resonator Antenna; gain; reflection coefficient; circular polarization; axial ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>In this paper, we study a Dielectric Resonator Antenna of cylindrical shape with circular polarization for applications in the C band. The proposed antenna is composed of two different layers. The first is Polyflon Polyguide, with relative permittivity &#949;r1 = 2.32 and loss tangent tan &#948; = 0.002, as the lower layer. The second is Rogers RO3010, with relative permittivity &#949;r2 = 10.2 and loss tangent tan &#948; = 0.0035, as the upper layer, which is excited by a dual probe feed. The 90&#176; phase shift between the two probe feeds creates circular polarization. In this study, we focused on the effect of variations in the height of the Polyflon Polyguide as well as the probe feed. Simulations under the HFSS software have led to bandwidths of about 2.2 GHz and 2.6 GHz for the proposed antenna with one probe and dual probes, respectively. The obtained gains are higher than 5.4 dB and can reach up to 8.1 dB.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_4-A_Novel_Cylindrical_DRA_for_C_Band_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Enhancing Supportive E-Learning Courses using Smart Tags</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070803</link>
        <id>10.14569/IJACSA.2016.070803</id>
        <doi>10.14569/IJACSA.2016.070803</doi>
        <lastModDate>2016-08-31T16:51:49.7230000+00:00</lastModDate>
        
        <creator>Hayel Khafajeh</creator>
        
        <creator>Heider Wahsheh</creator>
        
        <creator>Ahmad Albhaishi</creator>
        
        <creator>Mofareh Alqahtani</creator>
        
        <subject>Learning Management System (LMS); Supportive courses; Blended courses; Online Courses; Quality Matters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>E-learning management systems have emerged as a method of education development in many universities in the Arab world. E-learning management system tools provide a basic environment for interaction between faculty members and students, and these tools require information technology to deliver the most benefit. This paper proposes a method for enhancing the delivery of supportive e-learning courses using smart tags, such as the NFC technique, which enable teachers and students to interact with the educational material and track academic performance. The study sample comprises students of a supportive E-learning course at King Khaled University. The conducted experiments used receiver operating characteristic (ROC) prediction quality measurements to evaluate the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_3-Towards_Enhancing_Supportive_E_Learning_Courses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Building Evacuation: Performance Analysis and Simplified Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070802</link>
        <id>10.14569/IJACSA.2016.070802</id>
        <doi>10.14569/IJACSA.2016.070802</doi>
        <lastModDate>2016-08-31T16:51:49.6930000+00:00</lastModDate>
        
        <creator>Yasser M. Alginahi</creator>
        
        <creator>Muhammad N. Kabir</creator>
        
        <subject>Crowd Evacuation; BuildingEXODUS; Modelling &amp; Simulation; Least squares method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>Crowd evacuation from industrial buildings, factories, theatres, protest areas, festivals, exhibitions and religious/sports gatherings is crucial in many dangerous scenarios, such as fires, earthquakes, threats and attacks. Simulation of crowd evacuation is an integral part of planning for emergency situations and of training staff for crowd management. In this paper, the simulation of crowd evacuation for a large building-hall is studied using the popular crowd-simulation software BuildingEXODUS. Evacuation of the fully occupied hall is simulated with eleven test cases using different experimental setups in the software. The results of the different evacuation scenarios are analysed to check the effect of the various parameters involved in evacuation performance. Finally, using the evacuation test results, simplified models are developed. The model outputs are found to be in good agreement with the simulation results; therefore, the models can readily be used for fast computation of evacuation results without running the actual simulation.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_2-Simulation_of_Building_Evacuation_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encoding a T-RBAC Model for E-Learning Platform on ORBAC Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070801</link>
        <id>10.14569/IJACSA.2016.070801</id>
        <doi>10.14569/IJACSA.2016.070801</doi>
        <lastModDate>2016-08-31T16:51:49.6130000+00:00</lastModDate>
        
        <creator>Kassid Asmaa</creator>
        
        <creator>Elkamoun Najib</creator>
        
        <subject>Security policies; Access control; Rbac model; Orbac model; e-learning platforms; trust; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(8), 2016</description>
        <description>With the rapid development and increase in the amount of available resources in E-learning platforms, designing new architectures for such systems has become inevitable in order to improve search quality and simplify the way online courses are taken. The integration of multi-agent systems has played a very important role in developing open, interactive and distributed learning systems. While much research in E-learning and multi-agent systems has gone into developing infrastructure and providing content, security and trust issues have hardly been considered, even though they may endanger the success of these platforms.
The application of an access control policy, one of the most important aspects of security, in an E-learning platform based on multi-agent systems plays an important role in securing interaction with agents/users, reinforced by the integration of a trust level. The work of this paper is to encode a new access control model, developed in previous works from the role-based access control model and a trust level, on the organization-based access control model, to improve the security level in E-learning platforms based on multi-agent systems.
The encoded model is implemented and evaluated with the “MotOrbac” tool in order to define its validity context and limitations for a large and extended deployment.</description>
        <description>http://thesai.org/Downloads/Volume7No8/Paper_1-Encoding_a_T_RBAC_Model_for_E_Learning_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information-Theoretic Active SOM for Improving Generalization Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050804</link>
        <id>10.14569/IJARAI.2016.050804</id>
        <doi>10.14569/IJARAI.2016.050804</doi>
        <lastModDate>2016-08-10T10:12:18.7200000+00:00</lastModDate>
        
        <creator>Ryotaro Kamimura</creator>
        
        <subject>SOM; Labeled and Unlabeled; Supervised and Unsupervised; Generalization; Interpretation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(8), 2016</description>
        <description>In this paper, we introduce a new information-theoretic method called the “information-theoretic active SOM”, based on the self-organizing map (SOM), for training multi-layered neural networks. The SOM is one of the most important techniques in unsupervised learning. However, SOM knowledge is sometimes ambiguous and cannot be easily interpreted. Thus, we introduce the information-theoretic method to produce clearer and more interpretable representations. The present method extends this information-theoretic approach to supervised learning. The main contribution can be summarized in three points. First, it is shown that the clear representations produced by the information-theoretic method can be effective in supervised training. Second, the method is sufficiently simple in that it has two separate components, namely, an information maximization component and an error minimization component. Usually, the two components are mixed in one framework, and it is difficult to compromise between them. In addition, the knowledge obtained by this information-theoretic SOM can be used to address the shortage of unlabeled data, because the information maximization component is unsupervised and can process all input data, with and without labels. The method was applied to well-known image segmentation datasets. Experimental results showed that clear weights were produced and generalization performance was improved by using the information-theoretic SOM. In addition, the final results were stable and almost independent of the parameter values.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No8/Paper_4-Information_Theoretic_Active_SOM_for_Improving_Generalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis of Aerosol Parameter Estimations with Measured Solar Direct and Diffuse Irradiance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050803</link>
        <id>10.14569/IJARAI.2016.050803</id>
        <doi>10.14569/IJARAI.2016.050803</doi>
        <lastModDate>2016-08-10T10:12:18.7030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Aerosol; Atmospheric optical depth; Solar irradiance; Solar direct; Solar diffuse; Aereole; Junge parameter; Size distribution; Real and imaginary parts of refractive index</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(8), 2016</description>
        <description>A sensitivity analysis of aerosol parameter estimations (refractive index, consisting of real and imaginary parts, and size distribution, represented by the Junge parameter) with measured solar direct and diffuse irradiance is made. Through experiments with the measured solar direct and diffuse irradiance, it is found that the results of the sensitivity analysis are valid and adequate.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No8/Paper_3-Sensitivity_Analysis_of_Aerosol_Parameter_Estimations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050802</link>
        <id>10.14569/IJARAI.2016.050802</id>
        <doi>10.14569/IJARAI.2016.050802</doi>
        <lastModDate>2016-08-10T10:12:18.6730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>3D image representation; Volume rendering; NTSC image display</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(8), 2016</description>
        <description>A method for 3D image representation that reduces the number of frames based on characteristics of the human eye is proposed, together with a representation of 3D depth by changing pixel transparency. Through experiments, it is found that the proposed method allows a reduction of the number of frames by a factor of 1/6. It can also represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No8/Paper_2-Method_for_3D_Image_Representation_with_Reducing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Framework for Breast Cancer Detection using Hybrid Feature Extraction Technique and FFNN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050801</link>
        <id>10.14569/IJARAI.2016.050801</id>
        <doi>10.14569/IJARAI.2016.050801</doi>
        <lastModDate>2016-08-10T10:12:18.6100000+00:00</lastModDate>
        
        <creator>Ibrahim Mohamed Jaber Alamin</creator>
        
        <creator>W. Jeberson</creator>
        
        <creator>H K Bajaj</creator>
        
        <subject>Breast Cancer; Preprocessing; Segmentation; Region Growing; Noise Removal; Filtering; Orientation; Gradient Magnitude; Higher Order Statistics; FFNN</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(8), 2016</description>
        <description>Early breast cancer detection using image processing suffers from low accuracy in various automated medical tools. To improve accuracy, many research studies are still ongoing on the different phases, such as segmentation, feature extraction, detection, and classification. This paper presents a hybrid, automated image-processing-based framework for breast cancer detection consisting of four main steps: image preprocessing, image segmentation, feature extraction, and finally classification. For image preprocessing, both Laplacian and average filtering are used for smoothing and noise reduction, if any. These operations are performed on 256 x 256 grayscale images. The output of the preprocessing phase is used in the segmentation phase; an algorithm is separately designed for the preprocessing step with the goal of improving accuracy. The segmentation method is an improved version of the region growing technique; breast image segmentation is thus done using the proposed modified region growing technique, which overcomes the limitations of orientation as well as intensity. For feature extraction, we propose a combination of different types of features: texture features, gradient features, and 2D-DWT features with higher order statistics (HOS). Such a hybrid feature set helps to improve detection accuracy. For the last phase, we propose an efficient feed-forward neural network (FFNN). A comparative study between the existing 2D-DWT feature extraction method and the proposed HOS-2D-DWT-based feature extraction method is presented.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No8/Paper_1-Improved_Framework_for_Breast_Cancer_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cuckoo Search Optimization for Reduction of a Greenhouse Climate Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070785</link>
        <id>10.14569/IJACSA.2016.070785</id>
        <doi>10.14569/IJACSA.2016.070785</doi>
        <lastModDate>2016-08-05T05:04:25.4730000+00:00</lastModDate>
        
        <creator>Hasni Abdelhafid</creator>
        
        <creator>Haffane Ahmed</creator>
        
        <creator>Sehli Abdelkrim</creator>
        
        <creator>Draoui Belkacem</creator>
        
        <subject>optimization; cuckoo search; greenhouses; metaheuristics; climate models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Greenhouse climate and crop models, and especially reduced models, are necessary for better environmental management and control. In this paper, we present a new metaheuristic method, the Cuckoo Search (CS) algorithm, based on the life of a bird family, for selecting the parameters of a reduced model, optimizing their choice by minimizing a cost function. The reduced model was already developed for control purposes and published in the literature. The proposed models aim at simulating and predicting the greenhouse environment. [?]. This study focuses on the dynamical behaviour of the inside air temperature and pressure using ventilation. Some experimental results are used for model validation, the greenhouse being automated with actuators and sensors connected to a greenhouse control system; the cuckoo search method is used to determine the best set of parameters allowing for the convergence of a criterion based on the difference between calculated and observed state variables (inside air temperature and water vapour pressure content). The results show that the tested Cuckoo Search algorithm allows for a faster convergence towards the optimal solution than classical optimization methods.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_85-Cuckoo_Search_Optimization_for_Reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Goal Model Integration for Tailoring Product Line Development Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070784</link>
        <id>10.14569/IJACSA.2016.070784</id>
        <doi>10.14569/IJACSA.2016.070784</doi>
        <lastModDate>2016-08-05T05:04:25.3630000+00:00</lastModDate>
        
        <creator>Arfan Mansoor</creator>
        
        <creator>Detlef Streitferdt</creator>
        
        <creator>Muhammad Kashif Hanif</creator>
        
        <subject>Goal model; Product Line; Development Process; Process Line</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Many companies rely on the promised benefits of product lines, targeting systems between fully custom-made software and mass products. Such customized mass products account for a large number of applications automatically derived from a product line. Product lines are therefore especially important for companies whose product portfolio is largely based on them. The success of product line development efforts is highly dependent on tailoring the development process. This paper presents an integrative model of influence factors to tailor product line development processes according to different project needs, organizational goals, individual goals of the developers, or constraints of the environment. This model integrates goal models, SPEM models and requirements to tailor development processes.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_84-Goal_Model_Integration_for_Tailoring_Product_Line.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balanced Distribution of Load on Grid Resources using Cellular Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070783</link>
        <id>10.14569/IJACSA.2016.070783</id>
        <doi>10.14569/IJACSA.2016.070783</doi>
        <lastModDate>2016-08-03T10:43:37.6700000+00:00</lastModDate>
        
        <creator>Amir Akbarian Sadeghi</creator>
        
        <creator>Ahmad Khademzadeh</creator>
        
        <creator>Mohammad Reza Salehnamadi</creator>
        
        <subject>computing Grid; load balancing; cellular automata; fuzzy logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Load balancing is a technique for the equal and fair distribution of load on resources, maximizing their performance as well as reducing the overall execution time. However, meeting all of these goals in a single algorithm is not possible due to their inherent conflict, so some features must be given priority based on the requirements and objectives of the system, and the desired algorithm must be designed with that orientation. In this article, a decentralized load balancing algorithm based on cellular automata and fuzzy logic is presented which has the capabilities needed for a fair distribution of resources at the Grid level.
Each computing node in this algorithm is modeled as a cellular automaton cell and, with the help of fuzzy logic, acts as an expert system deciding on the best choice for task assignment in a dynamic environment with uncertain data.
Each node is mapped to one of the VL, L, VN, H, and VH conditions based on information exchanged over certain time periods with its neighboring nodes and, using fuzzy logic, tries to estimate the status of the other nodes in subsequent periods to reduce communication overhead; the decision to send or receive task loads is made based on the status of each node. An appropriate structure for the system can thus greatly improve the efficiency of the algorithm. Fuzzy control does not use search and optimization; it makes decisions based on inputs which are the effective parameters of the system and are mostly based on incomplete and nonspecific information.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_83-Balanced_Distribution_of_Load_on_Grid_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TGRP: A New Hybrid Grid-based Routing Approach for Manets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070782</link>
        <id>10.14569/IJACSA.2016.070782</id>
        <doi>10.14569/IJACSA.2016.070782</doi>
        <lastModDate>2016-08-03T10:43:37.6230000+00:00</lastModDate>
        
        <creator>Hussein Al-Maqbali</creator>
        
        <creator>Mohamed Ould-Khaoua</creator>
        
        <creator>Khaled Day</creator>
        
        <creator>Abderezak Touzene</creator>
        
        <creator>Nasser Alzeidi</creator>
        
        <subject>MANETs; routing protocols; NS2 simulation; performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Most existing grid-based routing protocols use reactive mechanisms to build routing paths. In this paper, we propose a new hybrid approach for grid-based routing in MANETs which uses a combination of reactive and proactive mechanisms. The proposed routing approach uses shortest-path trees to build the routing paths between source and destination nodes. We design a new protocol based on this approach called the Tree-based Grid Routing Protocol (TGRP). The main advantage of the new approach is the high routing path stability due to availability of readily constructed alternative paths. Our simulation results show that the stability of the TGRP paths results in a substantially higher performance compared to other protocols in terms of lower end-to-end delay, higher delivery ratio and reduced control overhead.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_82-TGRP_A_New_Hybrid_Grid_based_Routing_Approach_for_Manets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust MAI Constrained Adaptive Algorithm for Decision Feedback Equalizer for MIMO Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070781</link>
        <id>10.14569/IJACSA.2016.070781</id>
        <doi>10.14569/IJACSA.2016.070781</doi>
        <lastModDate>2016-07-31T10:46:32.3570000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Syed Muhammad Asad</creator>
        
        <creator>Muhammad Moinuddin</creator>
        
        <creator>Waqas Imtiaz</creator>
        
        <subject>Decision feedback equalizer, single input single output, inter symbol interference, multiple access interference, Rayleigh fading, AWGN, adaptive algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>A decision feedback equalizer uses prior decisions to mitigate the damaging effects of intersymbol interference on the received symbols. Due to its inherent nonlinear nature, the decision feedback equalizer outperforms the linear equalizer in cases where the intersymbol interference in a communication system is severe. Equalization of multiple input multiple output fast fading channels is a daunting job, as these equalizers must mitigate not only intersymbol interference but also interstream interference. Various equalization methods have been suggested in the adaptive filtering literature for multiple input multiple output systems. In this paper, we develop a novel algorithm for multiple input multiple output communication systems centered around a constrained optimization technique. It is attained by minimizing the mean squared error criterion with respect to the known variance statistics of the multiple access interference and white Gaussian noise. The novelty of our paper is that such a constrained method has not previously been used in a multiple input multiple output decision feedback equalizer scheme, resulting in a constrained algorithm. The performance of the proposed algorithm is compared to the least mean squares as well as normalized least mean squares algorithms. Simulation results demonstrate that the proposed algorithm outclasses competing multiple input multiple output decision feedback equalizer algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_81-A_Robust_MAI_Constrained_Adaptive_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quartic approximation of circular arcs using equioscillating error function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070780</link>
        <id>10.14569/IJACSA.2016.070780</id>
        <doi>10.14569/IJACSA.2016.070780</doi>
        <lastModDate>2016-07-31T10:46:32.3270000+00:00</lastModDate>
        
        <creator>Abedallah Rababah</creator>
        
        <subject>Bezier curves; quartic approximation; circular arc; high accuracy; approximation order; equioscillation; CAD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>A high-accuracy quartic approximation for circular arcs is given in this article. The approximation is constructed so that the error function is of degree 8 with the least deviation from the x-axis; the error function equioscillates 9 times; the approximation order is 8. The numerical examples demonstrate the efficiency and simplicity of the approximation method, as well as satisfying the properties of the approximation method and yielding the highest possible accuracy.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_80-Quartic_approximation_of_circular_arcs_using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wyner-Ziv Video Coding using Hadamard Transform and Deep Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070779</link>
        <id>10.14569/IJACSA.2016.070779</id>
        <doi>10.14569/IJACSA.2016.070779</doi>
        <lastModDate>2016-07-31T10:46:32.3100000+00:00</lastModDate>
        
        <creator>Jean-Paul Kouma</creator>
        
        <creator>Ulrik Soderstrom</creator>
        
        <subject>Wyner-Ziv; video coding; rate distortion; Hadamard transform; Deep learning; Expectation Maximization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Predictive schemes are the current standards of video coding. Unfortunately, they do not apply well to lightweight devices such as mobile phones. The high encoding complexity is the bottleneck of the Quality of Experience (QoE) of a video conversation between mobile phones. A considerable amount of research has been conducted towards tackling that bottleneck. Most of the schemes use the so-called Wyner-Ziv Video Coding paradigm, with results still not comparable to those of predictive coding. This paper shows a novel approach for Wyner-Ziv video compression, based on Reinforcement Learning and the Hadamard Transform. Our scheme shows very promising results.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_79-Wyner_Ziv_Video_Coding_using_Hadamard_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WHITE - DONKEY: Unmanned Aerial Vehicle for searching missing people</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070778</link>
        <id>10.14569/IJACSA.2016.070778</id>
        <doi>10.14569/IJACSA.2016.070778</doi>
        <lastModDate>2016-07-31T10:46:32.2470000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <creator>Jesus Cruz</creator>
        
        <creator>Edgar Dominguez</creator>
        
        <subject>Computer Vision; Unmanned Aerial Vehicle; Search and Rescue System; Human Visual System; and Quadricopter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Searching for a missing person is not an easy task to accomplish, so search methods have been developed over the years; the problem is that the currently available methods have certain limitations, and these limitations are reflected in the time to locate the person. The time to locate a person during a search is a very important factor that rescuers cannot afford to waste, because the missing person is exposed to great dangers. In people search, the human vision system plays a very important role. The human visual system has the ability to detect and identify objects such as trees, walls and people, among others, and to estimate the distance to them; this gives human beings the ability to move in their environment. With the development of artificial intelligence, primarily computer vision, it is possible to model human visual perception and generate the computer software needed to simulate these capabilities. Using computer vision, we design and implement algorithms so that an Unmanned Aerial Vehicle can perform the search for a missing person; thanks to its speed, it is also expected to reduce the time to locate the person. The use of an Unmanned Aerial Vehicle is not intended to replace human beings in the difficult task of searching for and rescuing people, but rather to serve as a support tool in performing this difficult task.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_78-WHITE_DONKEY_Unmanned_Aerial_Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Based Twitter Spam Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070777</link>
        <id>10.14569/IJACSA.2016.070777</id>
        <doi>10.14569/IJACSA.2016.070777</doi>
        <lastModDate>2016-07-31T10:46:32.2170000+00:00</lastModDate>
        
        <creator>Nasira Perveen</creator>
        
        <creator>Malik M. Saad Missen</creator>
        
        <creator>Qaisar Rasool</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <subject>sentiment analysis; spam detection; twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Spam is becoming a serious threat for users of online social networks, especially Twitter, whose structural features make it more vulnerable to spam attacks. In this paper, we propose a spam detection approach for Twitter based on sentiment features. We perform our experiments on a collection of 29K tweets, with 1K tweets for each of 29 trending topics of 2012 on Twitter. We evaluate the usefulness of our approach using five classifiers: BayesNet, Naive Bayes, Random Forest, Support Vector Machine (SVM), and J48. The spam detection performance of Naive Bayes, Random Forest, J48, and SVM improved with our full proposed feature combination. The results demonstrate that the proposed features provide better classification accuracy when combined with content- and user-oriented features.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_77-Sentiment_Based_Twitter_Spam_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WE-MQS-VoIP Priority: An enhanced LTE Downlink Scheduler for voice services with the integration of VoIP priority mode</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070776</link>
        <id>10.14569/IJACSA.2016.070776</id>
        <doi>10.14569/IJACSA.2016.070776</doi>
        <lastModDate>2016-07-31T10:46:32.1700000+00:00</lastModDate>
        
        <creator>Duy-Huy Nguyen</creator>
        
        <creator>Hang Nguyen</creator>
        
        <creator>Eric Renault</creator>
        
        <subject>AMR-WB; Wideband E-model; VoLTE; VoIP priority mode; User perception</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Long Term Evolution (LTE) is a high-data-rate, fully All-IP network developed to support multimedia services such as video, VoIP, and gaming, so real-time services such as VoIP and video need to be optimized. Nevertheless, the deployment of such live-stream services faces many challenges. Scheduling and allocating radio resources are very important in an LTE network, especially for multimedia services such as VoIP. When a voice service is transmitted over an LTE network, it is affected by many network impairments, of which the three main factors are packet loss, delay, and jitter. This study proposes a new scheduler for voice services in the LTE downlink direction based on the VoIP priority mode and the Wideband (WB) E-model, and which is QoS- and channel-aware (called the WE-MQS-VoIP Priority scheduler). The proposed scheduling scheme is built on the WB E-model and Maximum Queue Size (MQS). In addition, we integrate the VoIP priority mode into our scheduling scheme. Since the proposed scheduler considers the VoIP priority mode and user perception, it significantly improves system performance. The results demonstrate that the proposed scheduler not only meets the QoS demands of voice calls but also outperforms Modified Largest Weighted Delay First (M-LWDF) in terms of delay and Packet Loss Rate (PLR) for all numbers of users (NU), except for NU equal to 30 in the latter case. For Fairness Index (FI), cell throughput, and Spectral Efficiency (SE), the difference among the packet schedulers is not significant. The performance evaluation covers delay, PLR, throughput, FI, and SE.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_76-WE_MQS_VoIP_Priority_An_enhanced_LTE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Architecture Quality Measurement Stability and Understandability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070775</link>
        <id>10.14569/IJACSA.2016.070775</id>
        <doi>10.14569/IJACSA.2016.070775</doi>
        <lastModDate>2016-07-31T10:46:32.1400000+00:00</lastModDate>
        
        <creator>Mamdouh Alenezi</creator>
        
        <subject>Software Engineering; Software Architecture; Quality Attributes; Stability; Understandability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Over the past years, software architecture has become an important sub-field of software engineering. There has been substantial advancement in developing new technical approaches that begin to treat architectural design as an engineering discipline. Measurement is an essential part of any engineering discipline, and quantifying the quality attributes of a software architecture reveals good insights about it. It also helps architects and practitioners choose, among alternative architectures, the one that best fits their needs. This work paves the way for researchers to start investigating ways to measure software architecture quality attributes; measurement of these qualities is essential for this sub-field of software engineering. This work explores the stability and understandability of software architecture, several metrics that affect them, and the literature on these qualities.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_75-Software_Architecture_Quality_Measurement_Stability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service Provisioning in Biosensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070774</link>
        <id>10.14569/IJACSA.2016.070774</id>
        <doi>10.14569/IJACSA.2016.070774</doi>
        <lastModDate>2016-07-31T10:46:32.1070000+00:00</lastModDate>
        
        <creator>Yahya Osais</creator>
        
        <creator>Muhammad Butt</creator>
        
        <subject>Biosensor networks; Quality of service; Markov decision processes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Biosensor networks are wireless networks consisting of tiny biological sensors (biosensors, for short) that can be implanted inside the body of human and animal subjects. Biosensors can measure various biological processes that occur inside the body of the subject under test. Applications of biosensor networks include automated drug delivery, heart-rate monitoring, and temperature sensing. Since biosensor networks employ wireless transmission, heat is generated in the tissues surrounding the implanted biosensors. Human and animal tissues are very sensitive to temperature increases. The generated heat is mitigated by the natural thermoregulatory system; however, excessive transmissions can cause a significant increase in temperature and thus tissue damage. Hence, there is a need for a mechanism to control the rate of wireless transmissions. Of course, controlling the rate of wireless transmissions raises Quality-of-Service (QoS) issues such as the required minimum delay and throughput. In this paper, we investigate these issues using the framework of Markov Decision Processes (MDPs). We develop several MDP models that enable us to study the different trade-offs involved in QoS provisioning in biosensor networks. The optimal policies computed using the proposed MDP models are compared with greedy policies to show their vigilant behavior and viable performance.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_74-Quality_of_Service_Provisioning_in_Biosensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New mechanism for Cloud Computing Storage Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070773</link>
        <id>10.14569/IJACSA.2016.070773</id>
        <doi>10.14569/IJACSA.2016.070773</doi>
        <lastModDate>2016-07-31T10:46:32.0770000+00:00</lastModDate>
        
        <creator>Almokhtar Ait El Mrabti</creator>
        
        <creator>Najim Ammari</creator>
        
        <creator>Anas Abou El Kalam</creator>
        
        <creator>Abdellah Ait Ouahman</creator>
        
        <creator>Mina De Montfort</creator>
        
        <subject>Cloud Computing; Data Security; Data Encryption; Fragmentation-Redundancy-Scattering;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Cloud computing, often referred to as simply the cloud, is an emerging computing paradigm that promises to radically change the way computer applications and services are constructed, delivered, managed, and ultimately guaranteed as dynamic computing environments for end users. The cloud is the delivery of on-demand computing resources - everything from applications to data centers - over the Internet on a pay-for-use basis. The cloud computing revolution has opened opportunities for research in all aspects of cloud computing. Despite the great progress in cloud computing technologies, concerns about cloud security may limit broader adoption. This paper presents a technique to tolerate both accidental and intentional faults: fragmentation-redundancy-scattering (FRS). The possibility of using FRS as an intrusion-tolerance technique is investigated for providing secure and dependable storage in the cloud environment. A cloud computing security (CCS) scheme based on the FRS technique is also proposed, and several scenarios explore how this proposal can be used. To demonstrate the robustness of the proposal, we formalize our design, carry out security and performance evaluations of the approach, and compare it with the classical model. The paper concludes by suggesting future research directions for the CCS framework.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_73-New_mechanism_for_Cloud_Computing_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From Emotion Recognition to Website Customizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070772</link>
        <id>10.14569/IJACSA.2016.070772</id>
        <doi>10.14569/IJACSA.2016.070772</doi>
        <lastModDate>2016-07-31T10:46:32.0430000+00:00</lastModDate>
        
        <creator>O. B. Efremides</creator>
        
        <subject>Emotion recognition; classification; computer vision; web interfaces</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>A computer vision system that recognizes the emotions of a website’s user and customizes the context and presentation of the website accordingly is presented herein. A logistic regression classifier is trained on the Extended Cohn-Kanade dataset to recognize the emotions. The Scale-Invariant Feature Transform algorithm, applied over two different parts of an image - the face and the eyes - without any special pixel-intensity preprocessing, is used to describe each emotion. The testing phase shows a significant improvement in the classification results. A toy website is also developed as a proof of concept.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_72-From_Emotion_Recognition_to_Website.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA implementation of filtered image using 2D Gaussian filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070771</link>
        <id>10.14569/IJACSA.2016.070771</id>
        <doi>10.14569/IJACSA.2016.070771</doi>
        <lastModDate>2016-07-31T10:46:32.0130000+00:00</lastModDate>
        
        <creator>Leila kabbai</creator>
        
        <creator>Anissa Sghaier</creator>
        
        <creator>Ali Douik</creator>
        
        <creator>Mohsen Machhout</creator>
        
        <subject>Gaussian Filter; convolution;fixed point arithmetic; Floating point arithmetic;FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Image filtering is one of the most useful techniques in image processing and computer vision; it is used to eliminate useless details and noise from an image. In this paper, a hardware implementation of image filtering using a 2D Gaussian filter is presented. The Gaussian filter architecture is described using different ways of implementing the convolution module. Since multiplication is at the heart of the convolution module, three different ways of implementing the multiplication operations are presented. The first uses the standard method. The second uses the Digital Signal Processor (DSP) features of the Field Programmable Gate Array (FPGA) to make effective use of FPGA resources and to speed up calculation. The third uses a real multiplier for more precision, at the cost of maximum use of FPGA resources. In this paper, we compare the image quality of the hardware (VHDL) and software (MATLAB) implementations using the Peak Signal-to-Noise Ratio (PSNR). The FPGA resource usage for different sizes of Gaussian kernel is also presented in order to compare fixed-point and floating-point implementations.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_71-FPGA_implementation_of_filtered_image_using_2D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Real-Time Web Questionnaire System for Interactive Presentations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070770</link>
        <id>10.14569/IJACSA.2016.070770</id>
        <doi>10.14569/IJACSA.2016.070770</doi>
        <lastModDate>2016-07-31T10:46:31.9670000+00:00</lastModDate>
        
        <creator>Yusuke Niwa</creator>
        
        <creator>Shun Shiramatsu</creator>
        
        <creator>Tadachika Ozono</creator>
        
        <creator>Toramatsu Shintani</creator>
        
        <subject>Interactive Presentation; Real-time Web questionnaire; Collaborative tools; communication aids; information sharing; Web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Conducting presentations with bi-directional communication requires extended presentation systems, e.g., ones offering sophisticated expression and real-time feedback gathering. We aim to develop an interactive presentation system that enhances presentations with bi-directional communication during delivery. We developed a hybrid interactive presentation system that combines a traditional presentation tool, e.g., PowerPoint, with a web application. To gather feedback from audiences, the web application delivers the presentation slides to them. The client system lets audience members create annotations and answer questions on the delivered slides to provide feedback. Specifically, the system provides a real-time questionnaire function in which the result is displayed on a shared screen in real time while answers are being gathered. Since users can build their questionnaires in PowerPoint, the task becomes quite easy. This paper explains the development of the system and demonstrates that the real-time questionnaire system achieves high performance and scalability.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_70-Developing_a_Real_Time_Web_Questionnaire_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conditions Facilitating the Aversion of Unpopular Norms: An Agent-Based Simulation Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070769</link>
        <id>10.14569/IJACSA.2016.070769</id>
        <doi>10.14569/IJACSA.2016.070769</doi>
        <lastModDate>2016-07-31T10:46:31.9500000+00:00</lastModDate>
        
        <creator>Zoofishan Zareen</creator>
        
        <creator>Muzna Zafar</creator>
        
        <creator>Kashif Zia</creator>
        
        <subject>Agent-based Modeling and Simulation; Emperor’s Dilemma; Complex Adaptive Systems.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>People mostly facilitate and manage their social lives by adhering to prevalent norms. Some norms are unpopular, yet people adhere to them: ironically, people at the individual level do not agree with these norms, but they still follow and even facilitate them. Irrespective of the social and psychological reasons behind their persistence, it is sometimes necessary, for the good of society, to oppose and possibly avert unpopular norms. In this paper, we model theory-driven computational specifications of the Emperor’s Dilemma in an agent-based simulation to understand the conditions that result in the emergence of unpopular norms. The reciprocal nature of the persistence and aversion of norms is then used to define situations under which these norms can be changed and averted. The simulation is performed under many interesting “what-if” questions. The simulation results reveal that, under high-density conditions of the agent population with a high percentage of norm-aversion activists, the aversion of unpopular norms can be achieved.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_69-Conditions_Facilitating_the_Aversion_of_Unpopular.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Modeling of Proteins based on Cellular Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070768</link>
        <id>10.14569/IJACSA.2016.070768</id>
        <doi>10.14569/IJACSA.2016.070768</doi>
        <lastModDate>2016-07-31T10:46:31.9030000+00:00</lastModDate>
        
        <creator>Alia Madain</creator>
        
        <creator>Abdel Latif Abu Dalhoum</creator>
        
        <creator>Azzam Sleit</creator>
        
        <subject>Proteins 3D Folding; Bioinformatics; Computational Modeling; Cellular Automata; Theoretical Computer Science;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The literature on building computational and mathematical models of proteins is rich and diverse, since its practical applications are of vital importance to the development of many fields. Modeling proteins is not a straightforward process, and some modeling strategies require combining concepts from different fields, including physics, chemistry, thermodynamics, and computer science. The focus here is on models based on the concept of cellular automata and equivalent systems. Cellular automata are discrete computational models that are capable of universal computation; in other words, they can perform any computation that a normal computer can. What is special about cellular automata is their ability to produce complex and chaotic global behavior from local interactions. The paper discusses the effort made so far by the research community in this direction and proposes a computational model of protein folding based on 3D cellular automata. Unlike common models, the proposed model maintains the basic properties of cellular automata and keeps a realistic view of protein operations. As in any cellular automata model, the dimension, neighborhood, boundary, and rules are specified. In addition, a discussion is given to clarify why these parameters are in place and what possible alternatives can be used in the protein-folding context.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_68-Computational_Modeling_of_Proteins.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of ALU Implementation with RCA and Sklansky Adders In ASIC Design Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070767</link>
        <id>10.14569/IJACSA.2016.070767</id>
        <doi>10.14569/IJACSA.2016.070767</doi>
        <lastModDate>2016-07-31T10:46:31.8730000+00:00</lastModDate>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Liguo Sun</creator>
        
        <creator>Abdullah Buzdar</creator>
        
        <subject>Arithmetic Logic Unit; Ripple Carry Adder; Sklansky Adder; ASIC Design, EDA Tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>An Arithmetic Logic Unit (ALU) is the heart of every central processing unit (CPU); it performs basic operations such as addition, subtraction, multiplication, division, and bitwise logic operations on binary numbers. This paper deals with the implementation of a basic ALU using two different types of adder circuit: a ripple carry adder and a Sklansky-type adder. The ALU is designed on an application-specific integrated circuit (ASIC) platform using the VHDL hardware description language and standard cells. The target process technology is 130nm CMOS from the foundry STMicroelectronics. The Cadence EDA tools are used for the ASIC implementation. A comparative analysis of the two ALU circuits is provided in terms of area, power, and timing requirements.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_67-Comparative_Analysis_of_ALU_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Text Question Answering from an Answer Retrieval Point of View: a survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070766</link>
        <id>10.14569/IJACSA.2016.070766</id>
        <doi>10.14569/IJACSA.2016.070766</doi>
        <lastModDate>2016-07-31T10:46:31.8570000+00:00</lastModDate>
        
        <creator>Bodor A. B. Sati</creator>
        
        <creator>Mohammed A. S. Ali</creator>
        
        <creator>Sherif M. Abdou</creator>
        
        <subject>Question answering; Information retrieval; Answer retrieval; Arabic NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Arabic Question Answering (QA) is gaining more attention due to the importance of the language and the dramatic increase in online Arabic content. The goal of this article is to review state-of-the-art Arabic QA methods, classify them into different categories from an answer retrieval viewpoint, and present their applications, issues, and new trends. The main components of question answering systems are also presented. Finally, this survey provides a comparative study of the systems of each type of QA based on several criteria.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_66-Arabic_Text_Question_Answering_from_an_Answer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Multi-criteria Decision Making in Software Engineering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070765</link>
        <id>10.14569/IJACSA.2016.070765</id>
        <doi>10.14569/IJACSA.2016.070765</doi>
        <lastModDate>2016-07-31T10:46:31.8270000+00:00</lastModDate>
        
        <creator>Sumeet Kaur Sehra</creator>
        
        <creator>Yadwinder Singh Brar</creator>
        
        <creator>Navdeep Kaur</creator>
        
        <subject>Multi-criteria Decision Making; Analytic Hierarchy Process; Fuzzy AHP; Software Engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Nowadays, every complex problem requires multi-criteria decision making to reach the desired solution. Numerous multi-criteria decision making (MCDM) approaches have evolved in recent years to accommodate various application areas, and they have recently been explored as alternatives for solving complex software engineering problems. The most widely used approach is the Analytic Hierarchy Process (AHP), which combines mathematics and expert judgment; however, AHP suffers from imprecision and subjectivity. This paper proposes using Fuzzy AHP (FAHP) instead of the traditional AHP method. FAHP helps decision makers make better choices with respect to both tangible and intangible criteria. The paper provides a clear guide on how FAHP can be applied in specific situations, particularly in the software engineering area. The conclusions of this study should help and motivate practitioners and researchers to use multi-criteria decision making approaches in the area of software engineering.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_65-Applications_of_Multi_criteria_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method in Two-Step-Ahead Weight Adjustment of Recurrent Neural Networks: Application in Market Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070764</link>
        <id>10.14569/IJACSA.2016.070764</id>
        <doi>10.14569/IJACSA.2016.070764</doi>
        <lastModDate>2016-07-31T10:46:31.7800000+00:00</lastModDate>
        
        <creator>Narges Talebi Motlagh</creator>
        
        <creator>Amir RikhtehGar Ghiasi</creator>
        
        <subject>Recurrent Neural Network; Two Step Ahead Prediction; Reinforcement Learning; Directional Statistics; Gold Market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Gold price prediction is a very complex and severely difficult nonlinear problem. Real-time price prediction, a cornerstone of many economic models, is one of the most challenging tasks for economists, since the context of financial agents is often dynamic. Because direction prediction is important in financial time series, this work utilizes an innovative Recurrent Neural Network (RNN) to obtain accurate Two-Step-Ahead (2SA) prediction results and improve forecasting performance for the gold market. The training method of the proposed network is combined with an adaptive learning-rate algorithm, and a linear combination of Directional Symmetry (DS) is utilized in the training phase. The proposed method has been developed for both online and offline applications. Simulations and experiments on daily gold market data and the benchmark Lorenz and Rossler time series show the high efficiency of the proposed method, which can forecast future gold prices precisely.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_64-A_Novel_Method_in_Two_Step_Ahead_Weight.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Management Information Systems in Public Institutions in Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070763</link>
        <id>10.14569/IJACSA.2016.070763</id>
        <doi>10.14569/IJACSA.2016.070763</doi>
        <lastModDate>2016-07-31T10:46:31.7500000+00:00</lastModDate>
        
        <creator>Ahmad A. Al-Tit</creator>
        
        <subject>management information systems; adoption success factors; organizational performance; public institutions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Six constructs were utilized in this study to explore the factors affecting MIS implementation in Jordanian public institutions and to investigate the impact of MIS implementation on organizational (operational) performance. They were human factors, organizational factors, technological factors, environmental factors, MIS implementation components and organizational performance. The required data were collected using a valid and reliable questionnaire developed based on the literature review. Human factors were conceptualized as users’ computer skills and experience, IS usefulness and IS ease of use. Organizational factors were assessed using three sub-indicators, which were top-management support, user training and IS confidentiality. Technological factors were evaluated by system quality, information quality and service quality. The overall industry, industry environment and external pressure were three indicators used to measure the environmental factors. Two variables were selected to measure MIS implementation: IT/IS capability and technological aspects related to information service quality. Since the current study tackled public institutions, the indicators of organizational performance were limited to operational ones. The questionnaire was distributed to 125 informants from IT/IS departments. The findings of the study indicated the acceptance of the hypothesis that the factors in question are significantly and positively related to MIS implementation, which in turn, when measured by IT/IS capability and information service quality, significantly and positively affects organizational performance. The main contribution provided by this study is that MIS implementation is not limited to information technology and systems capabilities and usefulness. Other factors should be considered, particularly when examining the impact of MIS implementation on organizational performance.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_63-Management_Information_Systems_in_Public_Institutions_in_Jordan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Data Reusability of Raytrace Application in Splash2 Benchmark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070762</link>
        <id>10.14569/IJACSA.2016.070762</id>
        <doi>10.14569/IJACSA.2016.070762</doi>
        <lastModDate>2016-07-31T10:46:31.7170000+00:00</lastModDate>
        
        <creator>Hao Do-Duc</creator>
        
        <creator>Vinh Ngo-Quang</creator>
        
        <subject>Chip multiprocessors; benchmark; ray tracing; reflection; intensity; ray-Tree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>When designing a chip multiprocessor, we use Splash2 to estimate its performance. This benchmark contains eleven applications. The performance when running them is similar, except for Raytrace. We analyse it to clarify why its performance is poor. We discover that, in theory, Raytrace never reuses data. This leads to a low hit ratio in the data cache, which explains the poor performance.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_62-Analyzing_Data_Reusability_of_Raytrace_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Emergency Unit Support System to Diagnose Chronic Heart Failure Embedded with SWRL and Bayesian Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070761</link>
        <id>10.14569/IJACSA.2016.070761</id>
        <doi>10.14569/IJACSA.2016.070761</doi>
        <lastModDate>2016-07-31T10:46:31.6700000+00:00</lastModDate>
        
        <creator>Baydaa Al-Hamadani</creator>
        
        <subject>Ontology Engineering; Bayesian Network; Heart Failure; Expert System; Validation Test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In all regions of the world, heart failure is common and on the rise, caused by several aetiologies. Although treatment is developing quickly, many patients still lose their lives in emergency departments because of slow responses to their cases. In this paper we propose an expert system that can help practitioners in emergency rooms to quickly diagnose the disease and advise them on the appropriate actions that should be taken to save the patient’s life. Since the information given to the system is mostly binary, a Bayesian Network model was selected to support reasoning under uncertain or missing information. The domain concepts and the relations between them were built using an ontology, supported by the Semantic Web Rule Language to encode the rules. The system was tested on 105 patients; several classification functions were evaluated and showed remarkable results in the accuracy and sensitivity of the system.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_61-An_Emergency_Unit_Support_System_to_Diagnose_Chronic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement in System Schedulability by Controlling Task Releases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070760</link>
        <id>10.14569/IJACSA.2016.070760</id>
        <doi>10.14569/IJACSA.2016.070760</doi>
        <lastModDate>2016-07-31T10:46:31.6530000+00:00</lastModDate>
        
        <creator>Basharat Mahmood</creator>
        
        <creator>Naveed Ahmad</creator>
        
        <creator>Saif ur Rehman Malik</creator>
        
        <creator>Adeel Anjum</creator>
        
        <subject>Real-time Systems; Fixed Priority Scheduling; RM Scheduling; Priority Inversion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In real-time systems, fixed priority scheduling techniques are considered superior to their dynamic priority counterparts from an implementation perspective; however, dynamic priority assignments dominate fixed priority mechanisms when it comes to system utilization. Considering this gap, a number of results have recently been added to the real-time systems literature that achieve higher utilization at the cost of tuning task parameters. We further investigate this problem by proposing a novel fixed priority scheduling technique that keeps task parameters intact. The proposed technique favors lower priority tasks by blocking the release of higher priority tasks without violating their deadlines. This strategy creates extra space that a lower priority task can use to complete its execution. It is proved that the proposed technique dominates pure preemptive scheduling. Furthermore, the results are applied to an example task set that is not schedulable with preemption threshold scheduling or quantum based scheduling but is schedulable with the proposed technique. The analysis shows the supremacy of our work over existing fixed priority alternatives from a utilization perspective.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_60-Enhancement_in_System_Schedulability_by_Controlling_Task_Releases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyber Profiling Using Log Analysis And K-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070759</link>
        <id>10.14569/IJACSA.2016.070759</id>
        <doi>10.14569/IJACSA.2016.070759</doi>
        <lastModDate>2016-07-31T10:46:31.6230000+00:00</lastModDate>
        
        <creator>Muhammad Zulfadhilah</creator>
        
        <creator>Yudi Prayudi</creator>
        
        <creator>Imam Riadi</creator>
        
        <subject>Clustering; K-Means; Log; Network; Cyber Profiling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The activities of Internet users are increasing from year to year and have had an impact on the behavior of the users themselves. Assessment of user behavior is often based only on interaction across the Internet, without knowledge of any other activities. Log activity can be used as another way to study the behavior of the user. Internet activity logs are a type of big data, so data mining with the K-Means technique can be used as a solution for the analysis of user behavior. This study carried out clustering using the K-Means algorithm, dividing users into three clusters: high, medium, and low. The results from the higher education institution show that each of these clusters frequents websites in the following order: search engines, social media, news, and information. This study also showed that cyber profiling is strongly influenced by environmental factors and daily activities.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_59-Cyber_Profiling_Using_Log_Analysis_And_K_Means_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of Requirement Prioritization Techniques with ANP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070758</link>
        <id>10.14569/IJACSA.2016.070758</id>
        <doi>10.14569/IJACSA.2016.070758</doi>
        <lastModDate>2016-07-31T10:46:31.5770000+00:00</lastModDate>
        
        <creator>Javed ali Khan</creator>
        
        <creator>Izaz-ur-Rehman</creator>
        
        <creator>Iqbal Qasim</creator>
        
        <creator>Shah Poor Khan</creator>
        
        <creator>Yawar Hayat Khan</creator>
        
        <subject>Requirement Engineering; Requirement prioritization; ANP; AHP; Software Engineering; Comparison</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>This article presents an evaluation of seven software requirements prioritization methods (ANP, binary search tree, AHP, hierarchy AHP, spanning tree matrix, priority group and bubble sort). Based on a case study of a local project (automation of the Mobilink franchise system), the experiment was conducted by students in the Requirement Engineering course in the Department of Software Engineering at the University of Science and Technology Bannu, Khyber Pakhtunkhwa, Pakistan. The measures on which the requirements prioritization techniques are evaluated are consistency indication, scale of measurement, interdependence, required number of decisions, total time consumption, time consumption per decision, ease of use, reliability of results and fault tolerance. The results of the experiment show that ANP is the most successful prioritization methodology among all those evaluated.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_58-An_Evaluation_of_Requirement_Prioritization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of a Behind-the-Ear ECG Device for Smartphone Based Integrated Multiple Smart Sensor System in Health Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070757</link>
        <id>10.14569/IJACSA.2016.070757</id>
        <doi>10.14569/IJACSA.2016.070757</doi>
        <lastModDate>2016-07-31T10:46:31.5470000+00:00</lastModDate>
        
        <creator>Numan Celik</creator>
        
        <creator>Nadarajah Manivannan</creator>
        
        <creator>Wamadeva Balachandran</creator>
        
        <subject>wireless body area networks; body-worn sensors; ECG; core body temperature; oxygen saturation level (SpO2); biosensor integration; m-health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In this paper, we present a wireless Multiple Smart Sensor System (MSSS) that works in conjunction with a smartphone to enable unobtrusive monitoring of the electrocardiogram (ear-lead ECG), integrated with multiple sensors including core body temperature and blood oxygen saturation (SpO2), for ambulatory patients. The proposed behind-the-ear device makes the system desirable for measuring ECG data: it is technically less complex, attaches to non-hair regions (hence more suitable for long-term use), and is user friendly, as there is no need to remove the top garment. The proposed smart sensor device is similar to a hearing aid and is wirelessly connected to a smartphone for physiological data transmission and display. The device not only gives access to core temperature and ECG from the ear, but can also be removed and reapplied by the patient at any time, increasing the usability of personal healthcare applications. A number of ECG electrode combinations, varying in electrode area and in the dry or non-dry nature of the electrode surface, were tested at various locations behind the ear. The best ECG electrode was then chosen based on the Signal-to-Noise Ratio (SNR) of the measured ECG signals. These electrodes showed an acceptable SNR of ~20 dB, which is comparable with existing traditional ECG electrodes. The developed ECG electrode system was then integrated with a commercially available PPG sensor (Amperor pulse oximeter) and a core body temperature sensor (MLX90614) using a microcontroller (Arduino UNO), and the results were monitored using a newly developed smartphone (Android) application.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_57-Evaluation_of_a_Behind_the_Ear_ECG_Device_for_Smartphone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Current Trends and Research Challenges in Spectrum-Sensing for Cognitive Radios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070756</link>
        <id>10.14569/IJACSA.2016.070756</id>
        <doi>10.14569/IJACSA.2016.070756</doi>
        <lastModDate>2016-07-31T10:46:31.5130000+00:00</lastModDate>
        
        <creator>Roopali Garg</creator>
        
        <creator>Dr. Nitin Saluja</creator>
        
        <subject>CR; cognitive radio; FC-PSO; fast-convergence particle swarm optimization; FC; fusion centre; KLMS; kernel least mean square; PU; primary user; ROC; receiver operating characteristic curves; SU; secondary user; soft combination; spectrum hole</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The ever increasing demand for wireless communication systems has led to a search for suitable spectrum bands for data transmission. Past research has revealed that the radio spectrum is under-utilized in most scenarios. This prompted scientists to seek a solution for utilizing the spectrum efficiently. Cognitive radios provide an answer to the problem by sensing idle (licensed) bands and allowing (secondary) users to transmit in these idle spaces. Spectrum sensing forms the main block of the cognition cycle.
This paper reviews current trends in research in the domain of spectrum sensing. The authors describe the types of channels being modelled, the diversity combining schemes used, the optimal algorithms applied at the fusion centre, and the spectrum sensing techniques employed. Further, the research challenges are discussed. Various attributes such as sensing time, throughput, rate reliability, optimum number of cooperative users, and sensing frequency need to be addressed. A trade-off needs to be established to optimize opposing parameters such as sensing time and throughput.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_56-Current_Trends_and_Research_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crowding Optimization Method to Improve Fractal Image Compressions Based Iterated Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070755</link>
        <id>10.14569/IJACSA.2016.070755</id>
        <doi>10.14569/IJACSA.2016.070755</doi>
        <lastModDate>2016-07-31T10:46:31.5000000+00:00</lastModDate>
        
        <creator>Shaimaa S. Al-Bundi</creator>
        
        <creator>Nadia M. G. Al-Saidi</creator>
        
        <creator>Neseif J. Al-Jawari</creator>
        
        <subject>Fractal; Iterated Function System (IFS); Genetic algorithm (GA); Crowding method; Fractal Image Compression (FIC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Fractals are geometric patterns generated by Iterated Function System theory. A popular technique known as fractal image compression is based on this theory, which assumes that redundancy in an image can be exploited through block-wise self-similarity and that the original image can be approximated by a finite iteration of fractal codes. This technique offers a high compression ratio compared with other image compression techniques. However, it presents several drawbacks, such as the inverse proportionality between image quality and computational cost. Numerous approaches have been proposed to find a compromise between quality and cost. As an efficient optimization approach, a genetic algorithm is used for this purpose. In this paper, a crowding method, an improved genetic algorithm, is used to optimize the search space in the target image through a good approximation to the global optimum in a single run. The experimental results for the proposed method show good efficiency, decreasing the encoding time while retaining high image quality compared with the classical method of fractal image compression.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_55-Crowding_Optimization_Method_to_Improve_Fractal_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation and Analysis of Optimum Golomb Ruler Based 2D Codes for OCDMA System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070754</link>
        <id>10.14569/IJACSA.2016.070754</id>
        <doi>10.14569/IJACSA.2016.070754</doi>
        <lastModDate>2016-07-31T10:46:31.4530000+00:00</lastModDate>
        
        <creator>Dr. Gurjit Kaur</creator>
        
        <creator>Rajesh Yadav</creator>
        
        <creator>Disha Srivastava</creator>
        
        <creator>Aarti Bhardwaj</creator>
        
        <creator>Manu Gangwar</creator>
        
        <creator>Nidhi</creator>
        
        <subject>OCDMA System; 2D Codes; OOC; Golomb Ruler; BER; Eye Diagram; MAI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The need for high speed communication networks has led research communities and industry to develop reliable, scalable transatlantic and transpacific fiber-optic communication links. In this paper, optimum Golomb ruler based 2D OCDMA codes are demonstrated. An OCDMA system based on the discussed 2D codes is designed and simulated in OptiSystem. The encoder and decoder structures of the OCDMA system are designed using filters and time delays. Further, the performance is analysed for various parameters such as bit rate, number of users, BER (Bit Error Rate), quality factor, eye diagram and signal diagram. The system is analyzed for up to 18 users at 1 Gbps and 1.25 Gbps bit rates.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_54-Simulation_and_Analysis_of_Optimum_Golomb_Ruler.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Classification Algorithms using Data Mining: Crime and Accidents in Denver City the USA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070753</link>
        <id>10.14569/IJACSA.2016.070753</id>
        <doi>10.14569/IJACSA.2016.070753</doi>
        <lastModDate>2016-07-31T10:46:31.3900000+00:00</lastModDate>
        
        <creator>Amit Gupta</creator>
        
        <creator>Azeem Mohammad</creator>
        
        <creator>Ali Syed</creator>
        
        <creator>Malka N. Halgamuge</creator>
        
        <subject>Data Mining; Classification; Big Data; Crime and Accident</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In the last five years, crime and accident rates have increased in many cities of America. The advancement of new technologies can also lead to criminal misuse. In order to reduce incidents, there is a need to understand and examine emerging patterns of criminal activities. This paper analyzes crime and accident datasets from Denver City, USA, from 2011 to 2015, consisting of 372,392 instances of crime. The dataset is analyzed using a number of classification algorithms. The aim of this study is to highlight trends of incidents that will in turn help security agencies and police departments to devise precautionary measures from prediction rates. The classification algorithms used to assess trends and patterns are BayesNet, NaiveBayes, J48, JRip, OneR and Decision Table. The outputs used in this study are correct classification, incorrect classification, True Positive Rate (TP), False Positive Rate (FP), Precision (P), Recall (R) and F-measure (F). These outputs are captured using two different test methods: k-fold cross-validation and percentage split. Outputs are then compared to understand classifier performance. Our analysis illustrates that JRip classified the highest number of instances correctly, at 73.71%, followed by Decision Table with 73.66% correct predictions, whereas OneR produced the fewest correct predictions with 64.95%. NaiveBayes took the least time, 0.57 sec, to build the model and perform classification compared to all the classifiers, standing out among the classification methods. This study would be helpful for security agencies and police departments to discover data patterns and analyze trending criminal activity from prediction rates.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_53-A_Comparative_Study_of_Classification_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimum Access Analysis of Collaborative Spectrum Sensing in Cognitive Radio Network using MRC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070752</link>
        <id>10.14569/IJACSA.2016.070752</id>
        <doi>10.14569/IJACSA.2016.070752</doi>
        <lastModDate>2016-07-31T10:46:31.3430000+00:00</lastModDate>
        
        <creator>Risala Tasin Khan</creator>
        
        <creator>Shakila Zaman</creator>
        
        <creator>Md. Imdadul Islam</creator>
        
        <creator>M. R. Amin</creator>
        
        <subject>Fusion center; Local energy detection; Maximum Ratio Combining; Spectrum Sensing; Receiver Operating Characteristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The performance of a cognitive radio network mainly depends on the finest sensing of the presence or absence of the Primary User (PU). The throughput of a Secondary User (SU) can be reduced by false detection of the PU, which deprives the SU of a transmission opportunity. Factorizing the probability of a correct decision is a hard job when the spatial false alarm is incorporated into it. Previous works focus on collaborative sensing in a normal environment. In this paper, we propose a collaborative sensing method in a cognitive radio network for optimal access to the PU licensed band by the SU. We present a performance analysis of energy detection across different cognitive users and conduct a clear comparison between local and collaborative sensing. In this paper, the maximal ratio combining (MRC) diversity technique with energy detection is employed to reduce the false alarm probability in the collaborative environment. The simulation results show a significant reduction in the probability of misdetection with an increase in the number of collaborative users. We also show that the MRC scheme exhibits the best detection performance in the collaborative environment.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_52-Optimum_Access_Analysis_of_Collaborative_Spectrum_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indirect Substitution Method in Combinable Services by Eliminating Incompatible Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070751</link>
        <id>10.14569/IJACSA.2016.070751</id>
        <doi>10.14569/IJACSA.2016.070751</doi>
        <lastModDate>2016-07-31T10:46:31.3100000+00:00</lastModDate>
        
        <creator>Forough Hematian Chahardah Cheriki</creator>
        
        <creator>Sima Emadi</creator>
        
        <subject>component; indirect substitution; SLA; service composition; quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Service-oriented architecture is a style of information systems architecture that aims to achieve loose coupling in communication between software components and services. A service, here meaning a software implementation, is a well-defined business function that can be used and called in various processes or software. An organization can choose and compose the Web services that fulfill its intended quality of service. As the number of available Web services increases, choosing the best services to compose becomes challenging and is the most important problem of service composition. In addition, because systems are used in dynamic environments, service characteristics and users’ needs face constant changes, which lead to service deterioration, unavailability and loss of service quality. One way to address this challenge is the substitution of one Web service with another, performed dynamically at runtime. Substitution can be either direct or indirect. Although there are many related works in the field of direct substitution, no work has yet explained substitution based on the indirect method, and direct substitution suffers from problems such as the incompatibility of important services in a composition. To solve these problems and other challenges, this paper considers a subset of inputs and outputs, qualitative parameters, simultaneous and dynamic service composition, and the use of the fitness function of a genetic algorithm to compare compositions. In addition, for substitution, a table containing the best possible substitutes, updated dynamically through multi-threading techniques, is provided.
The results obtained from the analysis and evaluation of the proposed method indicate the establishment of compatibility between services and the finding of the best possible substitute to reduce substitution time.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_51-Indirect_Substitution_Method_in_Combinable_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reputation Management System for Fostering Trust in Collaborative and Cohesive Disaster Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070750</link>
        <id>10.14569/IJACSA.2016.070750</id>
        <doi>10.14569/IJACSA.2016.070750</doi>
        <lastModDate>2016-07-31T10:46:31.2800000+00:00</lastModDate>
        
        <creator>Sabeen Javed</creator>
        
        <creator>Hammad Afzal</creator>
        
        <creator>Fahim Arif</creator>
        
        <creator>Awais Majeed</creator>
        
        <subject>reputation; trust; reputation management; disaster management; collaborators; collaborative management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The best management of a disaster requires knowledge, skills and other resources, not only for relief and rehabilitation but also for recovery and mitigation of its effects. These multifaceted goals cannot be achieved by a single organization and require collaborative efforts in an agile manner. Blind trust cannot be applied when selecting collaborators, team members or partners; therefore a good reputation of a collaborator is mandatory. Currently, various Information and Communication Technology based artifacts for collaborative disaster management have been developed; however, they do not employ trust and reputation as a key factor. In this paper, a framework for a reputation based trust management system is proposed to support disaster management. The key features of the framework are a meta model, a Reputation Indicator Matrix and a computational algorithm, deployed using Service Oriented Architecture. To evaluate the efficacy of the artifact, a prototype is implemented. Furthermore, an industrial survey is carried out to obtain feedback on the proposed framework. The results support that the proposed reputation management system provides significant support in collaborative disaster management by assisting agile and smart decision making in all phases of the disaster management cycle.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_50-Reputation_Management_System_for_Fostering_Trust.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Fault Tolerance in Cloud Computing using Colored Petri Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070749</link>
        <id>10.14569/IJACSA.2016.070749</id>
        <doi>10.14569/IJACSA.2016.070749</doi>
        <lastModDate>2016-07-31T10:46:31.2330000+00:00</lastModDate>
        
        <creator>Mehdi Effatparvar</creator>
        
        <creator>Seyedeh Solmaz Madani</creator>
        
        <subject>Cloud Computing; Fault Tolerance; Colored Petri Nets; Reliability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Nowadays, rendering reliable services to customers in business markets is a crucial matter for service providers, and the importance of this subject in many fields is undeniable. The design of highly complex systems and the existence of different resources in the network cloud lead service providers to aim to provide the best services to their customers. One of the important challenges for service providers is fault tolerance and reliability, and different techniques and methods have been presented so far for addressing this challenge.
The method presented in this paper analyzes the fault tolerance process in an interconnected network cloud in order to avoid problems and irreparable damage before implementation. In the proposed method, fault tolerance was evaluated with the aid of colored Petri nets using the Byzantine technique. The results were analyzed with CPN Tools and demonstrated the system's reliability. It was concluded that as requests increase, fault tolerance is reduced and consequently reliability is also reduced, and vice versa. In other words, resource management is impacted by the requested services.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_49-Evaluation_of_Fault_Tolerance_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finding Non Dominant Electrodes Placed in Electroencephalography (EEG) for Eye State Classification using Rule Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070748</link>
        <id>10.14569/IJACSA.2016.070748</id>
        <doi>10.14569/IJACSA.2016.070748</doi>
        <lastModDate>2016-07-31T10:46:31.2030000+00:00</lastModDate>
        
        <creator>Mridu Sahu</creator>
        
        <creator>N. K. Nagwani</creator>
        
        <creator>Shrish Verma</creator>
        
        <subject>Electroencephalography (EEG); Most Non Dominant (MND); Ranker algorithm; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Electroencephalography (EEG) measures brain activity through wave analysis using a number of electrodes. Finding the most non-dominant electrode positions is an important task in eye state classification. The proposed work identifies which electrodes are less responsible for classification; this is a feature selection step required for optimal EEG channel selection. Feature selection is a mechanism for selecting a subset of input features, which in this work are the EEG electrodes. The Most Non Dominant (MND) set identifies the irrelevant input electrodes in eye state classification and thus reduces computation cost. The MND set is created in several stages: first, extreme values are removed from the EEG corpus for data cleaning; then attribute selection is performed as a preprocessing step before classification. The MND set gives the electrodes that are less responsible for classification; if the features in this set are removed from an EEG electrode corpus, the time and space required to build the classification model are about 20% less than when using all electrodes, while the classification accuracy is not greatly affected. The proposed article uses different attribute evaluation algorithms with the Ranker search method.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_48-Finding_Non_Dominant_Electrodes_Placed_in_Electroencephalography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosis of Diabetes by Applying Data Mining Classification Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070747</link>
        <id>10.14569/IJACSA.2016.070747</id>
        <doi>10.14569/IJACSA.2016.070747</doi>
        <lastModDate>2016-07-31T10:46:31.1700000+00:00</lastModDate>
        
        <creator>Tahani Daghistani</creator>
        
        <creator>Riyad Alshammari</creator>
        
        <subject>Diabetes; Data mining; Self-Organizing Map; Decision tree; Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Health care data are often huge, complex and heterogeneous because they contain different variable types as well as missing values. Nowadays, extracting knowledge from such data is a necessity. Data mining can be utilized to extract knowledge by constructing models from health care data such as diabetic patient data sets. In this research, three data mining algorithms, namely Self-Organizing Map (SOM), C4.5 and Random Forest, are applied to adult population data from the Ministry of National Guard Health Affairs (MNGHA), Saudi Arabia, to predict diabetic patients using 18 risk factors. Random Forest achieved the best performance compared to the other data mining classifiers.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_47-Diagnosis_of_Diabetes_by_Applying_Data_Mining_Classification_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Use of Machine Learning Algorithms in Detecting Gender of  the Arabic Tweet Author</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070746</link>
        <id>10.14569/IJACSA.2016.070746</id>
        <doi>10.14569/IJACSA.2016.070746</doi>
        <lastModDate>2016-07-31T10:46:31.1570000+00:00</lastModDate>
        
        <creator>Emad AlSukhni</creator>
        
        <creator>Qasem Alequr</creator>
        
        <subject>Social Networking; Data Mining; Sentiment Analysis; Sentiment Classification; Gender Detection; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Twitter is one of the most popular social network sites on the Internet for sharing opinions and knowledge extensively. Many advertisers use Tweets to collect features and attributes of Tweeters in order to target specific groups of highly engaged people. Gender detection is a sub-field of sentiment analysis concerned with extracting and predicting the gender of a Tweet author. In this paper, we investigate the gender of Tweet authors using different classification mining techniques on the Arabic language, such as Na&#239;ve Bayes (NB), Support Vector Machine (SVM), Na&#239;ve Bayes Multinomial (NBM), the J48 decision tree and KNN. The results show that the NBM, SVM and J48 classifiers can achieve accuracy above 98% by adding the name of the Tweet author as a feature. The results also show that the preprocessing approach has a negative effect on the accuracy of gender detection. In a nutshell, this study shows the ability of machine learning classifiers to detect the gender of an Arabic Tweet author.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_46-Investigating_the_Use_of_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Strategy to Optimize the Load Migration Process in Cloud Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070745</link>
        <id>10.14569/IJACSA.2016.070745</id>
        <doi>10.14569/IJACSA.2016.070745</doi>
        <lastModDate>2016-07-31T10:46:31.1400000+00:00</lastModDate>
        
        <creator>Hamid Mirvaziri</creator>
        
        <creator>Zhila Tajrobekar</creator>
        
        <subject>cloud computing; load balancing; migration; virtual machines; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Cloud computing is a model of internet-based service that provides users with easy, on-demand access to a set of changeable computational resources through the internet. Load balancing in the cloud has to manage service provider resources appropriately. Load balancing in cloud computing is the process of distributing load between distributed computational nodes for optimal use of resources, and it has to decrease latency in order to prevent situations in which some nodes are overloaded while others are under-loaded or idle. Load migration is a potential solution for most critical conditions such as load imbalance. However, many load migration methods are based on only one objective. In practice, considering just one objective for migration can conflict with other objectives and may lose the optimal solution for the existing situation. Therefore, a strategy that makes the load migration process purposeful is essential in a cloud environment. The main idea of this research is to reduce cost and increase efficiency in order to be compatible with different cloud conditions. The recommended method tries to improve the load migration process by using several different criteria simultaneously and by applying some changes to previous methods. The simulated annealing algorithm is employed to implement the recommended strategy. The obtained results show the desired performance and efficiency in general. This algorithm is highly flexible, allowing several important criteria to be calculated simultaneously.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_45-A_New_Strategy_to_Optimize_the_Load_Migration_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing the Calculations of Quality-Aware Web Services Composition Based on Parallel Skyline Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070744</link>
        <id>10.14569/IJACSA.2016.070744</id>
        <doi>10.14569/IJACSA.2016.070744</doi>
        <lastModDate>2016-07-31T10:46:31.0930000+00:00</lastModDate>
        
        <creator>Maryam Moradi</creator>
        
        <creator>Sima Emadi</creator>
        
        <subject>service composition; parallel Skyline service; the dominant relationship; service quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The optimal composition of atomic services to provide users with services by applying qualitative parameters is very important. As expected, web services with similar features lead to competition among service providers. The key challenge in finding an appropriate web service for composition occurs when multiple aspects of quality (such as response time, cost, etc.) are considered in the optimal composition of services. Skyline service provides the best service with consideration of qualitative parameters by using dominance analysis. In this study, the Skyline algorithm is used to find a set of the best possible service compositions while taking qualitative parameters into account. The parallelism technique in this study had a significant impact on reducing response time and increasing the speed of service composition, as well as reducing composition calculations. The results of the analysis and evaluation of the proposed method show optimal runtime and the best composition.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_44-Reducing_the_Calculations_of_Quality_Aware_Web_Services_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Modulator and Demodulator for a 863-870 MHz BFSK Transceiver</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070743</link>
        <id>10.14569/IJACSA.2016.070743</id>
        <doi>10.14569/IJACSA.2016.070743</doi>
        <lastModDate>2016-07-31T10:46:31.0470000+00:00</lastModDate>
        
        <creator>A. Neifar</creator>
        
        <creator>G. Bouzid</creator>
        
        <creator>M. Masmoudi</creator>
        
        <subject>ISM band; FHSS; FSK modulator; BFSK demodulator; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>This paper presents the design of low-power modulator and demodulator circuits dedicated to a BFSK transceiver operating in the 863-870 MHz ISM band. The two circuits were designed using ams 0.35 &#181;m technology with a 3 V DC voltage supply. Simulation results of the new Direct Digital Frequency Synthesizer in the modulator have shown good performance of the designed system, as the Spurious Free Dynamic Range (SFDR) reached -88 dBc while the circuit consumes only 47.7 &#181;W at 43.3 MHz. The demodulator also presented a good BER of 10^-3 at an Eb/N0 of 10.9 dB and a sensitivity of about -115 dBm.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_43-Design_of_Modulator_and_Demodulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Black-Hole Attack on ZRP Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070742</link>
        <id>10.14569/IJACSA.2016.070742</id>
        <doi>10.14569/IJACSA.2016.070742</doi>
        <lastModDate>2016-07-31T10:46:31.0170000+00:00</lastModDate>
        
        <creator>CHAHIDI Badr</creator>
        
        <creator>EZZATI Abdellah</creator>
        
        <subject>ZRP; Blackhole; security; Routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The lack of infrastructure in ad hoc networks makes their deployment easier. Each node in an ad hoc network can route data using a routing protocol, which decreases the level of security. Ad hoc networks are exposed to several attacks, such as the blackhole attack. In this article, a study is made of the impact of this attack on the hybrid routing protocol ZRP (Zone Routing Protocol). In this attack, a malicious node is placed between two or more nodes in order to drop data. The trick of the attack is simple: the malicious node declares that it has the most reliable path to the destination so that the source node chooses this path.
In this study, NS2 is used to assess the impact of the attack on ZRP. Two metrics are measured, namely the packet delivery ratio and the end-to-end delay.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_42-The_Impact_of_Black_Hole_Attack_on_ZRP_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigative Behavioral Intention to Knowledge Acceptance and Motivation in Cloud Computing Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070741</link>
        <id>10.14569/IJACSA.2016.070741</id>
        <doi>10.14569/IJACSA.2016.070741</doi>
        <lastModDate>2016-07-31T10:46:30.9700000+00:00</lastModDate>
        
        <creator>Sundus A. Hamoodi</creator>
        
        <subject>Cloud computing; ARCS model; UTAUT model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Recently, the number of Cloud Computing users in educational institutions has increased. Students have the chance to access various applications, which gives them the opportunity to take advantage of those applications. This study examined the behavioral intention toward Cloud Computing applications and evaluated the acceptance of these applications. The participant population consisted of 110 students from different Jordanian universities. The results showed that Performance Expectancy, Effort Expectancy, Attitude toward using Technology, Social Influence, Self-Efficacy, Attention and Relevance have different levels of correlation with Behavioral Intention in Cloud Computing applications, and there was no correlation between Anxiety and Behavioral Intention in Cloud Computing applications.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_41-Investigative_Behavioral_Intention_to_Knowledge_Acceptance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Recognition of Heart Murmur</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070740</link>
        <id>10.14569/IJACSA.2016.070740</id>
        <doi>10.14569/IJACSA.2016.070740</doi>
        <lastModDate>2016-07-31T10:46:30.9370000+00:00</lastModDate>
        
        <creator>Magd Ahmed Kotb</creator>
        
        <creator>Hesham Nabih Elmahdy</creator>
        
        <creator>Fatma El Zahraa Mostafa</creator>
        
        <creator>Mona El Falaki</creator>
        
        <creator>Christine William Shaker</creator>
        
        <creator>Mohamed Ahmed Refaey</creator>
        
        <creator>Khaled W Y Rjoob</creator>
        
        <subject>Hidden Markov Model (HMM); heart murmur; Mel Frequency Cepstral Coefficient (MFCC); systolic murmur; diastolic murmur; auscultation area; ventricular septal defect (VSD); mitral stenosis (MS); mitral regurgitation (MR); aortic stenosis (AS); aortic regurgitation (AR); patent ductus arteriosus (PDA); pulmonary regurgitation (PR); pulmonary stenosis (PS); electrocardiogram (ECG); echocardiography (ECHO); computed tomography (CT); correct classification rate (CCR); Artificial Neural Network (ANN); Back Propagation Neural Network (BPNN); Empirical Mode Decomposition (EMD); Support Vector Machines (SVM); Adaptive Neuro-Fuzzy Inference System (ANFIS); MATLAB; Radial Basis Function (RBF)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Diagnosis of congenital cardiac defects is challenging: some are diagnosed during pregnancy, while others are diagnosed after birth or later in childhood. Prompt diagnosis allows early intervention and the best prognosis. Contemporary diagnosis relies upon the history, clinical examination, pulse oximetry, chest X-ray, electrocardiogram (ECG), echocardiography (ECHO), computed tomography (CT) and cardiac catheterization. These diagnostic modalities rely upon recording electrical activity or sound waves, or upon radiation. Yet congenital heart diseases are still liable to misdiagnosis because of the level of operator expertise and multiple other factors. In an attempt to minimize the effect of operator expertise, this paper builds a classification model for heart murmur recognition using a Hidden Markov Model (HMM). The paper uses 13 Mel Frequency Cepstral Coefficients (MFCC) as features. The machine learning model was built by studying 1069 different heart sounds covering normal heart sounds, ventricular septal defect (VSD), mitral regurgitation (MR), aortic stenosis (AS), aortic regurgitation (AR), patent ductus arteriosus (PDA), pulmonary regurgitation (PR), and pulmonary stenosis (PS). The MFCC features were used to extract a feature matrix for each type of heart sound after separation according to an amplitude threshold. The frequency of normal heart sound (range 1 Hz to 139 Hz) was specific, without overlap with any of the studied defects (range 156-556 Hz). The frequency ranges for each of these defects were distinct, without overlap, according to the examined heart area (aortic, pulmonary, tricuspid and mitral areas). The overall correct classification rate (CCR) using this model was 96%, with a sensitivity of 98%. This model has great potential for prompt screening and specific defect detection.
The effect of cardiac contractility, cardiomegaly or cardiac electrical activity on this novel detection system needs to be verified in future work.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_40-Improving_the_Recognition_of_Heart_Murmur.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Elliptical Holes Filled with Ethanol on Confinement Loss and Dispersion in Photonic Crystal Fibers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070739</link>
        <id>10.14569/IJACSA.2016.070739</id>
        <doi>10.14569/IJACSA.2016.070739</doi>
        <lastModDate>2016-07-31T10:46:30.8600000+00:00</lastModDate>
        
        <creator>Khemiri Kheareddine</creator>
        
        <creator>Ezzedine Tahar</creator>
        
        <creator>Houria Rezig</creator>
        
        <subject>confinement loss; dispersion; doped Photonic Crystal Fiber; ethanol-filled holes; elliptical holes; FDTD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>To obtain the weakest possible confinement loss value, we are interested in optimizing an optical fiber whose cladding is formed by holes in silica. The geometry of the holes is special: they have an elliptical shape and are oriented at a certain angle. The introduction of ethanol into the holes and the omission of some rings allow us to obtain confinement loss values very close to zero. In this paper, we have designed an ultra-flat dispersion PCF. We note that the zero dispersion can lie in the range from 1000 nm to 1650 nm, with a value of 0 &#177; 0.14 ps/nm/km.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_39-Impact_of_Elliptical_Holes_Filled_with_Ethanol_on_Confinement_Loss.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Design of Miniaturized Patch Antenna Using Different Substrates for S-Band and C-Band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070738</link>
        <id>10.14569/IJACSA.2016.070738</id>
        <doi>10.14569/IJACSA.2016.070738</doi>
        <lastModDate>2016-07-31T10:46:30.8130000+00:00</lastModDate>
        
        <creator>Saad Hassan Kiani</creator>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Sharyar Shafeeq</creator>
        
        <creator>Mehre Munir</creator>
        
        <creator>Khalil Muhammad Khan</creator>
        
        <subject>substrates; microstrip; return loss; directivity; miniaturized; impedance bandwidth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In advanced communication technology, patch antennas are widely exploited due to their inexpensive and lightweight structure. This paper presents a novel design of a miniaturized multiband patch antenna using different substrates frequently used in patch antennas. Various substrates such as Teflon, Rogers 5880, Bakelite and air are used to achieve better gain and directivity. The proposed miniaturized multiband patch antenna contains two substrates, where one substrate is FR4 (fixed and lossy) and the other substrates are changed to observe gain, directivity and return loss. A coaxial probe feeding mode is presented in this paper. This feeding mode is a contacting arrangement for the patch, in which the outer conductor is linked to the ground plane and the inner conductor of the coaxial connector extends through the dielectric and is bonded to the radiating patch. The proposed antenna can be used for various S-band and C-band applications.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_38-A_Novel_Design_of_Miniaturaized_Patch_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Knowledge Generation from Data Mining Patterns for Decision-Making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070737</link>
        <id>10.14569/IJACSA.2016.070737</id>
        <doi>10.14569/IJACSA.2016.070737</doi>
        <lastModDate>2016-07-31T10:46:30.7800000+00:00</lastModDate>
        
        <creator>Jihed Elouni</creator>
        
        <creator>Hela Ltifi</creator>
        
        <creator>Mounir Ben Ayed</creator>
        
        <creator>Mohamed Masmoudi</creator>
        
        <subject>Knowledge; patterns; visualization; data mining; Decision Support Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Visual data mining based decision support systems have already been recognized in the literature. They allow users to analyse large information spaces in support of complex decision-making. Prior research provides frameworks focused on simply representing extracted patterns. In this paper, we present a new model for visually generating knowledge from these patterns and communicating it for intelligent decision-making. To prove the practicality of the proposed model, it was applied in the medical field to fight against nosocomial infections in intensive care units.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_37-Visual_Knowledge_Generation_from_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Forensic Images and Videos Signature Pattern Matching using M-Aho-Corasick</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070736</link>
        <id>10.14569/IJACSA.2016.070736</id>
        <doi>10.14569/IJACSA.2016.070736</doi>
        <lastModDate>2016-07-31T10:46:30.7330000+00:00</lastModDate>
        
        <creator>Yusoof Mohammed Hasheem</creator>
        
        <creator>Kamaruddin Malik Mohamad</creator>
        
        <creator>Ahmed Nur Elmi Abdi</creator>
        
        <creator>Rashid Naseem</creator>
        
        <subject>mobile forensics; images; videos; M-Aho-Corasick; file signature pattern matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Mobile forensics is an exciting new field of research. An increasing number of open source and commercial digital forensics tools are focusing on reducing the time spent during digital forensic examination. A major issue affecting some mobile forensic tools is that they spend too much time during the forensic examination, which is caused by the implementation of poor file searching algorithms by some forensic tool developers. This research focuses on reducing the time taken to search for a file by proposing a novel multi-pattern signature matching algorithm called M-Aho-Corasick, adapted from the original Aho-Corasick algorithm. Experiments are conducted on five different datasets, one of which is obtained from the Digital Forensic Research Workshop (DFRWS 2010). Comparisons are made between M-Aho-Corasick using M_Triage and Dec0de, Lifter, XRY, and Xaver. The results show that M-Aho-Corasick using M_Triage reduced the searching time by 75% compared to Dec0de, 36% compared to Lifter, 28% compared to XRY, and 71% compared to Xaver. Thus, the M-Aho-Corasick based M_Triage tool is more efficient than Dec0de, Lifter, XRY, and Xaver at avoiding the extraction of a high number of false positive results.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_36-Mobile_Forensic_Images_and_Videos_Signature_Pattern.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Investigation and Comparison of Invasive Weed, Flower Pollination and Krill Evolutionary Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070735</link>
        <id>10.14569/IJACSA.2016.070735</id>
        <doi>10.14569/IJACSA.2016.070735</doi>
        <lastModDate>2016-07-31T10:46:30.6870000+00:00</lastModDate>
        
        <creator>Marjan Abdeyazdan</creator>
        
        <creator>Samaneh Mehri Dehno</creator>
        
        <creator>Sayyed Hedayat Tarighinejad</creator>
        
        <subject>evolutionary algorithm; invasive weed algorithm; flower pollination algorithm; krill algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Taking inspiration from natural phenomena and the biological processes available in nature is one of the established methods of problem solving in computer science. Evolutionary methods are a set of algorithms that are inspired by nature and based on its evolutionary mechanisms. Unlike other optimization methods, evolutionary algorithms do not require any prerequisites and usually offer solutions very close to the optimal answers. Based on their behavior, evolutionary algorithms are divided into two categories of biological processes: those based on plant behavior and those based on animal behavior. Various evolutionary algorithms have been proposed so far to solve optimization problems, including the invasive weed algorithm and the flower pollination algorithm, which are inspired by plants, and the krill algorithm, which is inspired by the behavior of sea animals. In this paper, a comparison is made for the first time between the accuracy and the tendency to become trapped in local optima of these new evolutionary algorithms, in order to identify the best algorithm in terms of efficiency. The results of various tests show that the invasive weed algorithm is more efficient and accurate than the flower pollination and krill algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_35-An_Investigation_and_Comparison_of_Invasive_Weed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Modified RLE Algorithms to Compress Grayscale Images with Lossy and Lossless Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070734</link>
        <id>10.14569/IJACSA.2016.070734</id>
        <doi>10.14569/IJACSA.2016.070734</doi>
        <lastModDate>2016-07-31T10:46:30.6400000+00:00</lastModDate>
        
        <creator>Hassan K. Albahadily</creator>
        
        <creator>Alaa A. Jabbar Altaay</creator>
        
        <creator>Viktar U. Tsviatko</creator>
        
        <creator>Valery K. Kanapelka</creator>
        
        <subject>compression; Run Length Encoding; quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>This paper presents new modified RLE algorithms to compress grayscale images with lossy and lossless compression. Depending on the probability of repetition of pixels in the image and on the pixel values, the size of the encoded data is reduced by sending a single bit (1) instead of the original pixel value whenever the value is repeated. The proposed algorithms achieved a good reduction in encoded size compared with the other compression methods used for comparison, and decreased encoding time by a good ratio.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_34-New_modified_RLE_algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvisation of Security aspect of Steganographic System by applying RSA Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070733</link>
        <id>10.14569/IJACSA.2016.070733</id>
        <doi>10.14569/IJACSA.2016.070733</doi>
        <lastModDate>2016-07-31T10:46:30.5930000+00:00</lastModDate>
        
        <creator>Manoj Kumar Ramaiya</creator>
        
        <creator>Dr. Dinesh Goyal</creator>
        
        <creator>Dr. Naveen Hemrajani</creator>
        
        <subject>Image Steganography; Cryptography; LSB insertion; Public key Cryptosystem; RSA algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Applications accessing multimedia systems and content over the Internet have grown enormously in the past few years. Moreover, end users or intruders can easily use tools to synthesize and modify valuable information. The safety of information over unsafe communication channels has always been a primary concern of researchers. It has become one of the most important problems in information technology, and it is essential to safeguard this valuable information during transmission. It is also important to determine where and how such a multimedia file is kept confidential. Thus, a need exists for technology that helps to protect the integrity of information and the intellectual property rights of owners. Various approaches have emerged to safeguard data from unauthorized persons.
Steganography and cryptography are two different techniques for securing data over communication networks. The primary purpose of cryptography is to make the content of a message unintelligible, but the ciphertext may arouse suspicion in the mind of opponents. Steganography, on the other hand, embeds a secret message into a cover medium and hides its existence. As normal practice, data embedding is employed in communication, image, text, or multimedia content for the purposes of copyright, authentication, digital signatures, etc.
Both techniques provide a sufficient degree of security but are vulnerable to intruders' attacks when used over an unsecured communication channel. Attempts to combine the two techniques, i.e., cryptography and steganography, have resulted in improved security. Existing steganographic algorithms primarily focus on the embedding approach, with less attention to the pre-processing of data, which offers flexibility, robustness, and a high security level. Our proposed model is based on a public key cryptosystem, the RSA algorithm, in which RSA is used for message encryption in the encoding function and the resulting encrypted image is hidden in a cover image using the Least Significant Bit (LSB) embedding method.
</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_33-Improvisation_of_Security_aspect_of_Steganographic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Computing: The Impact on Cultural Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070732</link>
        <id>10.14569/IJACSA.2016.070732</id>
        <doi>10.14569/IJACSA.2016.070732</doi>
        <lastModDate>2016-07-31T10:46:30.5630000+00:00</lastModDate>
        
        <creator>Naif Ali Almudawi</creator>
        
        <subject>social computing; Web 2.0; cultural behavior; culture; Power distance; Individualism vs. collectivism; masculinity vs. femininity; uncertainty; avoidance and time horizon</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Social computing continues to grow in popularity and has had an impact on cultural behavior. While cultural behavior affects the way individuals engage in social computing, Hofstede’s theory is still prevalent. The results of this literature review suggest that, at least for several cultural dimensions, some adjustments may be required to reflect the current time and the role that technology plays today. Social computing has thus evolved into continuous communication and interaction among many culturally diverse users.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_32-Social_Computing_the_Impact_on_Cultural_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Sensor Based Bayesian Neural Network for Combined Parameters and States Estimation of a Brushed DC Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070731</link>
        <id>10.14569/IJACSA.2016.070731</id>
        <doi>10.14569/IJACSA.2016.070731</doi>
        <lastModDate>2016-07-31T10:46:30.5170000+00:00</lastModDate>
        
        <creator>Hacene MELLAH</creator>
        
        <creator>Kamel Eddine HEMSAS</creator>
        
        <creator>Rachid TALEB</creator>
        
        <subject>DC motor; thermal modeling; state and parameter estimations; Bayesian regulation; backpropagation; cascade-forward neural network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The objective of this paper is to develop an Artificial Neural Network (ANN) model to estimate the parameters and state of a brushed DC machine simultaneously. The proposed ANN estimator is novel in the sense that it simultaneously estimates temperature, speed, and rotor resistance based only on measurements of the input voltage and current. Many types of ANN estimators have been designed by researchers during the last two decades, each for a specific application. The thermal behavior of the motor is very slow, which leads to large data sets. Standard ANNs often use a Multi-Layer Perceptron (MLP) with Levenberg-Marquardt Backpropagation (LMBP); however, LMBP has limitations when the amount of data is large, so an MLP based on LMBP is no longer suitable in our case. As a solution, we propose a Cascade-Forward Neural Network (CFNN) based on Bayesian Regulation Backpropagation (BRBP). To test the robustness of our estimator, random white Gaussian noise was added to the data sets. The proposed estimator is, in our view, accurate and robust.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_31-Intelligent_Sensor_Based_Bayesian_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Collaborative Process of Decision Making in the Business Context based on Online Questionnaires</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070730</link>
        <id>10.14569/IJACSA.2016.070730</id>
        <doi>10.14569/IJACSA.2016.070730</doi>
        <lastModDate>2016-07-31T10:46:30.4830000+00:00</lastModDate>
        
        <creator>Rhizlane Seltani</creator>
        
        <creator>Noura Aknin</creator>
        
        <creator>Souad Amjad</creator>
        
        <creator>Mohamed Chrayah</creator>
        
        <creator>Kamal Eddine El Kadiri</creator>
        
        <subject>Decision Making; Web 2.0; Blogs; Business Intelligence; SCAMMPERR Method; Online Questionnaire</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>This article is part of a series of articles and scientific research conducted by our research team on Web 2.0 and its interactions with different technology areas. In recent years, the emergence of Web 2.0 has revolutionized the world of new technologies, in particular the business intelligence field, providing businesses with new and innovative ways to use information to improve their overall performance. This article consolidates the benefits that can be drawn from Web 2.0 technologies, especially blogs, which constitute a valuable means of gathering the information exchanged and the results of collaboration between users. It offers a new collaborative tool for decision making based on online questionnaires in order to exploit collective intelligence, which represents a very important source of significant data, and adopts the SCAMMPERR method, a creative technique for stimulating ideas and solving problems.
This paper presents a practical innovation at the computing level that has an impact on the economic and organizational sides of the enterprise, by proposing a new methodology based on the SCAMMPERR technique and supported by the strengths of Web 2.0 to ensure collaborative decision making. As a result, it provides relevant decisions that complement traditional decision support systems.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_30-A_Collaborative_Process_of_Decision_Making_in_the_Business.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Albanian Sign Language (AlbSL) Number Recognition from Both Hand’s Gestures Acquired by Kinect Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070729</link>
        <id>10.14569/IJACSA.2016.070729</id>
        <doi>10.14569/IJACSA.2016.070729</doi>
        <lastModDate>2016-07-31T10:46:30.4370000+00:00</lastModDate>
        
        <creator>Eriglen Gani</creator>
        
        <creator>Alda Kika</creator>
        
        <subject>Albanian Sign Language (AlbSL); Number Recognition; Microsoft Kinect; K-Means; Fourier Descriptors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Albanian Sign Language (AlbSL) is relatively new, and until now no system has existed that is able to recognize Albanian signs using natural user interfaces (NUI). The aim of this paper is to present a real-time gesture recognition system that automatically recognizes number signs of Albanian Sign Language captured from both of the signer’s hands. A Kinect device is used to obtain the data streams. Every pixel generated by the Kinect device contains depth information, which is used to construct a depth map. Hand segmentation is performed by applying a threshold constant to the depth map. In order to differentiate the signer’s hands, a K-means clustering algorithm is applied to partition the pixels into two groups, one for each hand. The centroid distance function is calculated for each hand after extracting the hand’s contour pixels. Fourier descriptors derived from the centroid distance are used as the hand shape representation. For each number gesture, 15 Fourier descriptor coefficients are generated, which uniquely represent that gesture. Every input is compared against the training data set by computing the Euclidean distance over the Fourier coefficients. The sign with the lowest Euclidean distance is considered a match. The system is able to recognize number signs captured from one hand or both hands. When both of the signer’s hands are used, some stages of the methodology are executed in parallel to improve overall performance. The proposed system achieves an accuracy of 91% and is able to process 55 frames per second.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_29-Albanian_Sign_Language _AlbSL_Number_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Lossless Compression Scheme for ECG Signal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070728</link>
        <id>10.14569/IJACSA.2016.070728</id>
        <doi>10.14569/IJACSA.2016.070728</doi>
        <lastModDate>2016-07-31T10:46:30.4070000+00:00</lastModDate>
        
        <creator>O. El B’charri</creator>
        
        <creator>R. Latif</creator>
        
        <creator>A. Abenaou</creator>
        
        <creator>A. Dliou</creator>
        
        <creator>W. Jenkal</creator>
        
        <subject>ECG; lossless compression; data encoding; compression ratio</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Cardiac diseases constitute the main cause of mortality around the globe. For the detection and identification of cardiac problems, it is very important to monitor the patient&#39;s heart activity over long periods of normal daily life. The recorded signal that contains information about the condition of the heart is called the electrocardiogram (ECG). As a result, long recordings of the ECG signal amount to huge data sizes. In this work, a robust lossless ECG data compression scheme for real-time applications is proposed. The developed algorithm has the advantages of lossy compression without introducing any distortion into the reconstructed signal. The ECG signals under test were taken from the PTB Diagnostic ECG Database. The compression procedure is simple and provides a high compression ratio compared to other lossless ECG compression methods. The compressed ECG data is generated as a text file. A decompression scheme has also been developed using the reverse logic, and it is observed that there is no difference between the original and reconstructed ECG signals.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_28-An_Efficent-Lossless_Compression_Scheme_for_ECG_Signal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of the System to Support Tourists’ Excursion Behavior using Augmented Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070727</link>
        <id>10.14569/IJACSA.2016.070727</id>
        <doi>10.14569/IJACSA.2016.070727</doi>
        <lastModDate>2016-07-31T10:46:30.3770000+00:00</lastModDate>
        
        <creator>Jiawen ZHOU</creator>
        
        <creator>Kayoko YAMAMOTO</creator>
        
        <subject>Augmented Reality; Web-GIS; Social Media; Recommendation System; AR recommended GIS; Tourists’ Excursion Behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The purpose of this study is to develop an information system (AR recommended GIS) to support tourists’ excursion behavior by making it possible to accumulate, share, and recommend information concerning urban tourist spots. The conclusions of this study can be summarized in three points. (1) The AR recommended GIS was designed and developed to support tourists’ excursion behavior by integrating SNS, Twitter, Web-GIS, a recommendation system, and Smart Eyeglass, making it possible to accumulate, share, and recommend information regarding urban tourist spots. (2) Among the 91 users, 91% were between 20 and 40 years old, and the total number of submitted items of information was 161. In the operation using Smart Eyeglass, which was conducted with tourists in the Minato Mirai area, the total number of users was 34, the ages of the users were spread out, and none of the users had prior experience with Smart Eyeglass. (3) The results of the Web questionnaire survey show that the system is suitable as a method for collecting tourist spot information and is mainly used to collect such information through the viewing and recommendation functions. The access analysis using log data from the operation period shows that usage of the system on PCs and on mobile information terminals was very similar. Additionally, as the system using the AR Smart Eyeglass was rated extremely highly, it is evident that tourists’ excursion behavior can be supported using PCs, mobile information terminals, and AR Smart Eyeglass.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_27-Development_of_the_System_to_Support_Tourists.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MAS based on a Fast and Robust FCM Algorithm for MR Brain Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070726</link>
        <id>10.14569/IJACSA.2016.070726</id>
        <doi>10.14569/IJACSA.2016.070726</doi>
        <lastModDate>2016-07-31T10:46:30.3430000+00:00</lastModDate>
        
        <creator>Hanane Barrah</creator>
        
        <creator>Abdeljabbar Cherkaoui</creator>
        
        <creator>Driss Sarsri</creator>
        
        <subject>agents; MAS; FCM; c-means algorithm; MRI images; image segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>With the aim of providing sophisticated applications and benefiting from the advantageous properties of agents, the design of agent-based and multi-agent systems has become an important issue that has received considerable attention in many application domains. Toward the same goal, this work brings together three different research fields, image segmentation, fuzzy clustering, and multi-agent systems (MAS), and provides a MAS for MR brain image segmentation based on a fast and robust FCM (FRFCM) algorithm. The proposed MAS was tested, along with the sequential version of the FRFCM algorithm and the standard FCM, on simulated and real normal brains. The experimental results were valuable from the point of view of both segmentation accuracy and running time.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_26-MAS_based_on_a_Fast_and_Robust_FCM_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Application Specific Memory Storage and ASIP Behavior Optimization in Embedded System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070725</link>
        <id>10.14569/IJACSA.2016.070725</id>
        <doi>10.14569/IJACSA.2016.070725</doi>
        <lastModDate>2016-07-31T10:46:30.3130000+00:00</lastModDate>
        
        <creator>Ravi Khatwal</creator>
        
        <creator>Manoj Kumar Jain</creator>
        
        <subject>Memory design; Compiler; Processor design; Scheduling Techniques; Memory storage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Low-power embedded systems require an effective memory design, which improves system performance with the help of memory implementation techniques. Application-specific data allocation design patterns shape the memory storage area, and internal cell design techniques determine data transition speeds. The embedded cache design is implemented with simulator and scheduling approaches that can reduce cache miss behavior and improve cache hit rates. Cache hit optimization, delay reduction, and latency prediction techniques are effective for ASIP design. The design functionality specifies the trade-off among various design metrics such as performance, power, size, cost, and flexibility. The ASIP behavior and memory storage area are optimized for low-power embedded systems, and cycle time is improved with effective scheduling techniques that enhance system performance at low power consumption.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_25-An_Efficient_Application_Specific_Memory_Storage_and_ASIP_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Support System for Diabetes Mellitus through Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070724</link>
        <id>10.14569/IJACSA.2016.070724</id>
        <doi>10.14569/IJACSA.2016.070724</doi>
        <lastModDate>2016-07-31T10:46:30.2830000+00:00</lastModDate>
        
        <creator>Tarik A. Rashid</creator>
        
        <creator>Saman . M. Abdulla</creator>
        
        <creator>Rezhna . M. Abdulla</creator>
        
        <subject>Diabetes disease; Blood sugar rate and symptoms; ANN; Prediction and Classification models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Recently, diabetes mellitus has grown into an extremely feared disease that can have damaging effects on the health of sufferers globally. In this regard, several machine learning models have been used to predict and classify diabetes types. Nevertheless, most of these models attempted to solve two problems: categorizing patients in terms of diabetic type and forecasting the blood sugar rate of patients. This paper presents an automatic decision support system for diabetes mellitus based on machine learning techniques that takes the above problems into account and reflects the skills of medical specialists, who believe that there is a strong relationship between a patient’s symptoms, some chronic diseases, and the blood sugar rate. Data sets were collected from the Layla Qasim Clinical Center in the Kurdistan Region; the data was then cleaned and processed using feature selection techniques such as Sequential Forward Selection and the correlation coefficient; finally, the refined data was fed into machine learning models for prediction, classification, and description purposes. This system enables physicians and doctors to provide diabetes mellitus (DM) patients with good health treatments and recommendations.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_24-Decision_Support_System_for_Diabetes_Mellitus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Air Pollution Analysis using Ontologies and Regression Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070723</link>
        <id>10.14569/IJACSA.2016.070723</id>
        <doi>10.14569/IJACSA.2016.070723</doi>
        <lastModDate>2016-07-31T10:46:30.2500000+00:00</lastModDate>
        
        <creator>Parul Choudhary</creator>
        
        <creator>Dr. Jyoti Gautam</creator>
        
        <subject>Ontologies; Air pollution Analysis; Regression Models; Linear Regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Throughout the rapidly evolving world economy, the explosive growth of the Web has produced a fast-growing market characterized by short product cycles, a demand for increased flexibility, and the extensive use of data in a newly data-driven society. A new socio-economic system relies more and more on the movement and allocation of data in daily life and on its refinement and exchange across industry. Cooperative engineering and multi-disciplinary collaboration between people is a good example; the Semantic Web, a new form of Web content that is meaningful to computers, is another. Communication, vision sharing, and data exchange are society&#39;s new commercial stakes. Urban air pollution modeling and data processing techniques require a high degree of integration. Artificial intelligence and breakthrough technologies can help solve environmental problems in countless ways. Describing data with a formal ontology gives it a precise, unambiguous meaning. In this work we survey regression models for ontologies and air pollution.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_23-Air_pollution_Analysis_using_Ontologies_and_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Zone Classification Approach for Arabic Documents using Hybrid Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070722</link>
        <id>10.14569/IJACSA.2016.070722</id>
        <doi>10.14569/IJACSA.2016.070722</doi>
        <lastModDate>2016-07-31T10:46:30.2200000+00:00</lastModDate>
        
        <creator>Amany M.Hesham</creator>
        
        <creator>Sherif Abdou</creator>
        
        <creator>Amr Badr</creator>
        
        <creator>Mohsen Rashwan</creator>
        
        <creator>Hassanin M.Al-Barhamtoshy</creator>
        
        <subject>segmentation; layout analysis; texture features; connected component analysis; Arabic script; genetic algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Zone segmentation and classification is an important step in document layout analysis. It decomposes a given scanned document into zones. Zones need to be classified into text and non-text so that only text zones are provided to a recognition engine. This eliminates the garbage output that results from sending non-text zones to the engine. This paper proposes a framework for zone segmentation and classification. Zones are segmented using morphological operations and connected component analysis. Features are then extracted from each zone for the purpose of classification into text and non-text. The features are a hybrid of texture-based and connected-component-based features. Effective features are selected using a genetic algorithm. The selected features are fed into a linear SVM classifier for zone classification. System evaluation shows that the proposed zone classification works well on multi-font and multi-size documents with a variety of layouts, even on historical documents.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_22-A_Zone_Classification_Approach_for_Arabic_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Agent based Architecture for Visual Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070721</link>
        <id>10.14569/IJACSA.2016.070721</id>
        <doi>10.14569/IJACSA.2016.070721</doi>
        <lastModDate>2016-07-31T10:46:30.1570000+00:00</lastModDate>
        
        <creator>Hamdi Ellouzi</creator>
        
        <creator>Hela Ltifi</creator>
        
        <creator>Mounir Ben Ayed</creator>
        
        <subject>Multi Agent System; Decision Support System; Visualization; Knowledge Discovery from Data; Nosocomial Infection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The aim of this paper is to present an intelligent architecture for Decision Support Systems (DSS) based on visual data mining. This architecture applies multi-agent technology to facilitate the design and development of DSS in complex and dynamic environments. Multi-Agent Systems add a high level of abstraction. To validate the proposed architecture, it was used to develop a distributed visual-data-mining-based DSS to predict the occurrence of nosocomial infections in intensive care units. The developed prototype was evaluated to verify the practicability of the architecture.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_21-An_Intelligent_Agent_based_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Strategy of Chromosomal RSOM Model on Chip for Phonemes Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070720</link>
        <id>10.14569/IJACSA.2016.070720</id>
        <doi>10.14569/IJACSA.2016.070720</doi>
        <lastModDate>2016-07-31T10:46:30.1270000+00:00</lastModDate>
        
        <creator>Mohamed Salah Salhi</creator>
        
        <creator>Nejib Khalfaoui</creator>
        
        <creator>Hamid Amiri</creator>
        
        <subject>Information recognition; Recurrent SOM; Chromosomal RSOM model; Evolutionary RSOM; Implementation over SoC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>This paper aims to contribute to the modeling and implementation, on a system on chip (SoC), of a powerful technique for phoneme recognition in continuous speech. A neural model known for its efficiency in static data recognition, the self-organizing map (SOM), is developed into a recurrent model to incorporate the temporal aspect of these applications. The resulting RSOM model is subsequently introduced to ensure the diversification of genetic algorithm (GA) populations, expanding the search space even further and optimizing the obtained results. We assign a chromosomal view to this model in an effort to improve the information recognition rate.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_20-Evolutionary_Strategy_of_Chromosomal_RSOM_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Function-Behavior-Structure Model of Design: An Alternative Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070719</link>
        <id>10.14569/IJACSA.2016.070719</id>
        <doi>10.14569/IJACSA.2016.070719</doi>
        <lastModDate>2016-07-31T10:46:30.1100000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>conceptual design; FBS framework; flow-based model; function; behaviour; structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The Function-Behavior-Structure (FBS) model of design conceptualizes objects in terms of function, behavior, and structure. It has been widely utilized as a foundation for modelling the design process that transforms posited functions into a description of behaviors. Nevertheless, the FBS model is still regarded as a subjective and experience-based process, and it provides no theory about how a function is transformed into behavior. Research has shown that the critical concepts of function and behavior have many different definitions. This paper suggests a viable alternative and contrasts it with the FBS framework of design using published case studies. The results point to several benefits gained by adopting the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_19-Function_Behavior_Structure_Model_of_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Dual Cylindrical Tunable Laser Based on MEMS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070718</link>
        <id>10.14569/IJACSA.2016.070718</id>
        <doi>10.14569/IJACSA.2016.070718</doi>
        <lastModDate>2016-07-31T10:46:30.0930000+00:00</lastModDate>
        
        <creator>Ahmed Fawzy</creator>
        
        <creator>Osama M. EL-Ghandour</creator>
        
        <creator>Hesham F.A. Hamed</creator>
        
        <subject>Dual ECT; wavelength tuning; MEMS; DRIE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Free space optics is a highly topical field with a large variety of applications in which free space separates the source from the destination, such as the external cavity tunable laser (ECTL). In an ECTL, the laser source emits a Gaussian beam that propagates in the plane of the substrate until it reaches an external reflector. The efficiency of these applications depends on the amount of light coupled back into the laser, called the coupling efficiency. Increasing the coupling efficiency typically relies on assembled lenses or other optical parts in the path between the laser front facet and the external reflector, which increases the cost and integration effort. We introduce here a new configuration of external cavity tunable laser based on cylindrical (curved) mirrors. Using a cylindrical mirror affects the amount of light coupled back into the laser and decreases the alignment requirements of the laser assembly compared to configurations based on flat mirrors. The fabrication of a cylindrical mirror is simple with respect to a spherical mirror, so it can be used in batch fabrication. Tuning is achieved using micro-electro-mechanical systems (MEMS) technology. The system consists of a laser cavity and two filter cavities for wavelength selection. The cylindrical microstructures are formed in the substrate volume, so we also report the micromachining method used for fabricating the cylindrical mirror. Anisotropic etching and deep reactive ion etching (DRIE) are especially useful for the batch fabrication of large optical mechanical devices. The characteristics of the laser&#39;s spectral response versus variations in laser facet reflectance are described via simulations. The diffraction of light in the ECTL formed by the laser front facet and the external reflector is taken into account. We report the complete model, including the fabrication steps and simulation analysis.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_18-A_Dual_Cylindrical_Tunable_laser_based_on_MEMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology for Academic Program Accreditation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070717</link>
        <id>10.14569/IJACSA.2016.070717</id>
        <doi>10.14569/IJACSA.2016.070717</doi>
        <lastModDate>2016-07-31T10:46:30.0470000+00:00</lastModDate>
        
        <creator>Jehad Sabri Alomari</creator>
        
        <subject>Accreditation; Ontology; Semantic Web; classification; Education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Many educational institutions are adopting national and international accreditation programs to improve teaching, student learning, and curriculum. There is a growing demand across higher education for automation and helpful educational resources to continuously improve student outcomes. Student outcomes are the knowledge and skill set that graduates of an accredited program must gain in order to enter the workforce or continue with their future education. To evaluate student outcomes, each assessment activity must map to a course learning outcome, which in turn maps to student outcomes. The problem is that all course learning outcome and student outcome mappings are kept in documents or databases, which require extra work and time to access and understand. This paper proposes an ontology-based solution to enable visual discovery of all course learning outcomes that map to a particular student outcome, together with the related assessments, to help faculty and curriculum committees avoid over-mapping or under-mapping student outcomes.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_17-Ontology_for_Academic_Program_Accreditation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Web Accessibility Metrics for Jordanian Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070716</link>
        <id>10.14569/IJACSA.2016.070716</id>
        <doi>10.14569/IJACSA.2016.070716</doi>
        <lastModDate>2016-07-31T10:46:30.0170000+00:00</lastModDate>
        
        <creator>Israa Wahbi Kamal</creator>
        
        <creator>Heider A. Wahsheh</creator>
        
        <creator>Izzat M. Alsmadi</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <subject>web accessibility; web ranking; web evaluation; web testing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>University web portals are considered one of the main access gateways for universities. Typically, they have a large candidate audience among current students, employees, and faculty members, aside from previous and future students, employees, and faculty members. Web accessibility is the concept of providing universal access to web content for different machines and for people of different ages, skills, education levels, and abilities. Several web accessibility metrics have been proposed in previous years to measure web accessibility. We integrated and extracted common web accessibility metrics from the different accessibility tools used in this study. This study evaluates web accessibility metrics for 36 Jordanian university and educational institute websites. We analyze the level of web accessibility using a number of available evaluation tools against the standard guidelines for web accessibility. Receiver operating characteristic quality measurements are used to evaluate the effectiveness of the integrated accessibility metrics.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_16-Evaluating_Web_Accessibility_Metrics_for_Jordanian_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evolutionary Stochastic Approach for Efficient Image Retrieval using Modified Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070715</link>
        <id>10.14569/IJACSA.2016.070715</id>
        <doi>10.14569/IJACSA.2016.070715</doi>
        <lastModDate>2016-07-31T10:46:29.9870000+00:00</lastModDate>
        
        <creator>Hadis Heidari</creator>
        
        <creator>Abdolah Chalechale</creator>
        
        <subject>color moments; content based image retrieval; particle swarm optimization (PSO); texture feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Image retrieval systems are reliable tools that help people make efficient use of growing digital image collections, so finding efficient methods for the retrieval of images is important. Color and texture descriptors are two basic features in image retrieval. In this paper, an approach is employed that combines color moments and texture features to extract the low-level features of an image. Assigning equal weights to the different types of features does not yield good results, but applying a different weight to each feature solves this problem. In this work, the weights are improved using a modified Particle Swarm Optimization (PSO) method to increase the average Precision of the system. In fact, a novel method based on an evolutionary approach is presented, and the motivation of this work is to enhance the Precision of the retrieval system with an improved PSO algorithm. The average Precision of the presented method using equally weighted features and optimally weighted features is 49.85% and 54.16%, respectively. The 4.31% increase in average Precision achieved by the proposed technique yields higher recognition accuracy, and the search results are better after using PSO.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_15-An_Evolutionary_Stochastic_Approach_for_Efficient_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Switched Control of a Time Delayed Compass Gait Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070714</link>
        <id>10.14569/IJACSA.2016.070714</id>
        <doi>10.14569/IJACSA.2016.070714</doi>
        <lastModDate>2016-07-31T10:46:29.9530000+00:00</lastModDate>
        
        <creator>Elyes Maherzi</creator>
        
        <creator>Walid Arouri</creator>
        
        <creator>Mongi Besbes</creator>
        
        <subject>Biped robot; delayed system; Switched system; Stability; Lagrange formulation; Lyapunov method; Relaxation; Linear matrix inequalities (LMI); bilinear matrix inequalities (BMI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The analysis and control of delayed systems is an increasingly active research topic, mainly because delay is frequently encountered in technological systems. Most control laws run on digital computers, and delays are intrinsic to the process or arise in the control loop from the transmission time of control sequences or from computing time. On the other hand, the control of humanoid walking robots presents a common problem in robotics because it involves physical interaction between an articulated system and its environment. This close relationship raises a common set of fundamental problems, such as the implementation of robust, stable dynamic control. This paper presents a complete approach, based on switched system theory, for the stabilization of a compass gait robot subject to transmission time delays. The multiple feedback gains designed are based on multiple linear systems governed by a switching control law. The establishment of the control law in real time is affected by an unknown bounded random delay. The results obtained with this method show that the control law stabilizes the compass robot walk despite a varying delay reaching six times the sampling period.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_14-Switched_Control_of_a_Time_Delayed_Compass_Gait_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSO Algorithm based Adaptive Median Filter for Noise Removal in Image Processing Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070713</link>
        <id>10.14569/IJACSA.2016.070713</id>
        <doi>10.14569/IJACSA.2016.070713</doi>
        <lastModDate>2016-07-31T10:46:29.9070000+00:00</lastModDate>
        
        <creator>Ruby Verma</creator>
        
        <creator>Rajesh Mehra</creator>
        
        <subject>Switching median filter; Particle Swarm algorithm; Noise removal; salt and pepper noise</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>An adaptive switching median filter for salt and pepper noise removal based on particle swarm optimization is presented. The proposed filter consists of two stages: a noise detection stage and a noise filtering stage. Particle swarm optimization is effective for single-objective problems, and the noise detection stage is based on it. In contrast to the standard median filter, the proposed algorithm generates a noise map of the corrupted image, which gives information about the corrupted and non-corrupted pixels. In the filtering stage, the filter calculates the median of the uncorrupted neighbouring pixels and replaces the corrupted pixels. Extensive simulations are performed to validate the proposed filter. Simulation results show improvement in both Peak Signal to Noise Ratio (PSNR) and Image Quality Index (IQI) values. Experimental results show that the proposed method is more effective than existing methods.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_13-PSO_Algorithm_based_Adaptive_Median_Filter_for_Noise_Removal_in_Image_Processing_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conversion of Empirical MOS Transistor Model Extracted From 180 nm Technology To EKV3.0 Model Using MATLAB</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070712</link>
        <id>10.14569/IJACSA.2016.070712</id>
        <doi>10.14569/IJACSA.2016.070712</doi>
        <lastModDate>2016-07-31T10:46:29.8770000+00:00</lastModDate>
        
        <creator>Amine AYED</creator>
        
        <creator>Mongi LAHIANI</creator>
        
        <creator>Hamadi GHARIANI</creator>
        
        <subject>EKV model; gm/ID methodology; analog design; MATLAB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>In this paper, the EKV3.0 model used for RF analog design was validated in all inversion regions under various bias conditions and geometrical effects. A conversion of empirical data from a 180nm CMOS process to the EKV model was proposed. An algorithm developed in MATLAB for parameter extraction was set up to evaluate the basic EKV model parameters. With respect to the substrate, and as long as the source and drain voltages remain constant, the DC currents and gm/ID ratios of real transistors can be reconstructed by means of the EKV model with acceptable accuracy, even for short channel devices. The results verify that the model takes into account second order effects such as DIBL and CLM. The sizing of an elementary amplifier was considered as the studied example. The sizing procedure based on the gm/ID methodology was described considering both a semi-empirical model and the EKV model; the two gave close results.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_12-A_Conversion_of_Empirical_MOS_Transistor_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Firefly Algorithm for Adaptive Emergency Evacuation Center Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070711</link>
        <id>10.14569/IJACSA.2016.070711</id>
        <doi>10.14569/IJACSA.2016.070711</doi>
        <lastModDate>2016-07-31T10:46:29.8300000+00:00</lastModDate>
        
        <creator>Yuhanis Yusof</creator>
        
        <creator>Nor Laily Hashim</creator>
        
        <creator>Noraziah ChePa</creator>
        
        <creator>Azham Hussain</creator>
        
        <subject>Firefly Algorithm; Swarm Intelligence; Flood Management; Evacuation Center Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Flood disaster is among the most devastating natural disasters in the world, claiming many lives and causing extensive property damage. The pattern of floods across all continents has been changing, becoming more frequent, intense, and unpredictable for local communities. Due to unforeseen scenarios, some evacuation centers that host flood victims may themselves be flooded. Hence, prompt decision making is required to relocate the victims and resources to a safer center. This study proposes a Firefly Algorithm (FA) to be employed in emergency evacuation center management. Experimental analysis of a minimization problem was performed to compare the solutions produced by FA with those generated using Tabu Search. Results show that the proposed FA produced solutions with smaller utility values, indicating that it is better than the benchmark method.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_11-Firefly_Algorithm_for_Adaptive_Emergency_Evacuation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Data Mining Technique for Classification Cancers via Mutations in Gene using Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070710</link>
        <id>10.14569/IJACSA.2016.070710</id>
        <doi>10.14569/IJACSA.2016.070710</doi>
        <lastModDate>2016-07-31T10:46:29.8000000+00:00</lastModDate>
        
        <creator>Ayad Ghany Ismaeel</creator>
        
        <creator>Dina Yousif Mikhail</creator>
        
        <subject>Detection; Classification; Data Mining; TP53 Gene; Tumor Protein P53; Back Propagation Network (BPN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Prediction plays an important role in finding efficient protection and therapy/treatment for cancer. The prediction of mutations in a gene requires diagnosis and classification based on the whole database (a sufficiently big dataset) to reach accurate results. Mutations of the TP53 gene in cells are implicated in approximately fifty percent of all human tumors, so this paper focuses on the tumor protein p53. The problem is that the several primitive databases (e.g. Excel genome and protein databases) containing datasets of the TP53 gene and its tumor protein p53 are rich datasets that cover all mutations and the diseases (cancers) they cause, but they cannot by themselves predict and diagnose cancers; that is, these big datasets lack an efficient data mining method that can predict and diagnose a mutation and classify a patient&#39;s cancer. The goal of this paper is to develop a data mining technique employing a neural network over these big datasets that offers friendly, flexible, and effective prediction and classification of cancers, overcoming the drawbacks of previous techniques. The proposed technique uses two approaches: first, bioinformatics techniques using BLAST, CLUSTALW, etc., to determine whether malignant mutations are present; second, data mining using a neural network, for which 12 out of the 53 TP53 gene database fields were selected. One of these 12 fields (the gene location field) did not exist in the TP53 gene database and was therefore added to it for training and testing the back propagation algorithm, in order to classify the specific types of cancer. A feed forward back propagation network supports this data mining method with a training rate of 1 and a Mean Square Error (MSE) of 0.00000000000001. This effective technique allows the type of cancer to be classified in a quick, accurate, and easy way.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_10-Effective_Data_Mining_Technique_for_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Dissipation Model for 4G and WLAN Networks in Smart Phones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070709</link>
        <id>10.14569/IJACSA.2016.070709</id>
        <doi>10.14569/IJACSA.2016.070709</doi>
        <lastModDate>2016-07-31T10:46:29.7500000+00:00</lastModDate>
        
        <creator>Shalini Prasad</creator>
        
        <creator>S. Balaji</creator>
        
        <subject>4G Wireless Networks; Energy Consumption; Smart phone; Wi-Fi</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>With the modernization of telecommunication standards, there has been considerable evolution of various technologies to support cost effective communication. In this regard, fourth generation communication services, commonly known as 4G mobile networks, have penetrated almost every part of the world to offer faster and seamless data connectivity. However, such services come at the cost of the energy drained from smart phones supporting 4G services. This paper presents an algorithm that is capable of evaluating the actual amount of energy dissipated while using next generation mobile networks. The study also performs a comparative analysis of the energy dissipation of 4G networks against other wireless local area networks to understand which networks cause more energy dissipation.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_9-Energy_Dissipation_Model_for_4G_and_WLAN_Networks_in_Smart_phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Direct Torque Control of Saturated Doubly-Fed Induction Generator using High Order Sliding Mode Controllers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070708</link>
        <id>10.14569/IJACSA.2016.070708</id>
        <doi>10.14569/IJACSA.2016.070708</doi>
        <lastModDate>2016-07-31T10:46:29.7030000+00:00</lastModDate>
        
        <creator>Elhadj BOUNADJA</creator>
        
        <creator>Abdelkader DJAHBAR</creator>
        
        <creator>Mohand Oulhadj MAHMOUDI</creator>
        
        <creator>Mohamed MATALLAH</creator>
        
        <subject>Doubly Fed Induction Generator (DFIG); Magnetic saturation; Direct Torque Control (DTC); High Order Sliding Mode Controller (HOSMC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The present work examines a direct torque control strategy using high order sliding mode controllers for a doubly-fed induction generator (DFIG) incorporated in a wind energy conversion system and operating in the saturated state. This research pursues two main objectives. First, to improve the accuracy of the calculation of DFIG performance, an accurate model considering the magnetic saturation effect is developed. The second objective is to achieve robust control of the DFIG based wind turbine. For this purpose, Direct Torque Control (DTC) combined with High Order Sliding Mode Control (HOSMC) is applied to the DFIG rotor side converter. Conventionally, direct torque control with hysteresis comparators exhibits major flux and torque ripples at steady state, and moreover the switching frequency varies over a large range. The new DTC method gives a perfect decoupling between the flux and the torque and also reduces the ripples in these quantities. Finally, simulation results show that accurate dynamic performance, faster transient responses, and more robust control are achieved.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_8-Direct_Torque_Control_of_Saturated_Doubly_Fed_Induction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying and Prioritizing Evaluation Criteria for User-Centric Digital Identity Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070707</link>
        <id>10.14569/IJACSA.2016.070707</id>
        <doi>10.14569/IJACSA.2016.070707</doi>
        <lastModDate>2016-07-31T10:46:29.6730000+00:00</lastModDate>
        
        <creator>Sepideh Banihashemi</creator>
        
        <creator>Elaheh Homayounvala</creator>
        
        <creator>Alireza Talebpour</creator>
        
        <creator>Abdolreza Abhari</creator>
        
        <subject>management of information technology; digital identity management systems; evaluation criteria; fuzzy analytical hierarchy process (FAHP); user-centricity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Identity management systems are used to secure the digital identities of users in a reliable, automated, and compatible way. Service providers employ identity management systems that are cost effective and scalable but suffer from poor usability. Identity management systems are user-centric applications that should be designed from the users’ perspective. User centricity is a remarkable concept in identity management systems, as it provides more powerful user control and privacy, and it has evolved through amendments to past paradigms. Thus, the evaluation of digital identity management systems from the users’ point of view is really important. The main objective of this paper is to identify the appropriateness of the criteria used in the evaluation of user-centric digital identity management systems. These criteria are gathered from the literature and then, for the first time in this work, categorized into four groups to examine the importance of each parameter. In this approach, several interviews were performed as a qualitative research method, and two questionnaires were filled out by forty-six users who were involved with identity management systems. Since the answers are perception-based data, the most important criteria in each category are assessed using a fuzzy method. This research found that the most important criteria are related to the security category. The results of this research can provide valuable information for managers and decision makers of hosting companies, as well as system designers, to adapt and develop appropriate user-centric digital identity management systems.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_7-Identifying_and_Prioritizing_Evaluation_Criteria_for_User_Centric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Quantitative Conceptual Model for the Assessment of Patient Clinical Outcome</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070706</link>
        <id>10.14569/IJACSA.2016.070706</id>
        <doi>10.14569/IJACSA.2016.070706</doi>
        <lastModDate>2016-07-31T10:46:29.6430000+00:00</lastModDate>
        
        <creator>Mou’ath Hourani</creator>
        
        <subject>quantitative; conceptual model; assessment; patient clinical outcome</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The assessment of patient clinical outcome focuses on measuring various aspects of the patient’s health status after medical treatments and interventions. Patient clinical outcome assessment is a major concern in the clinical field, as the current measures are not well developed and, as a result, may be used without sufficient understanding of their characteristics. This issue hinders development in the clinical field. This paper proposes a general, purely quantitative conceptual model for the assessment of patient clinical outcome. The proposed model contains five measurable components from the WHO’s International Classification of Functioning, Disability, and Health (ICF): body function impairment, clinical elegancy distortion, pain, death, and shortening of life expectancy. Total patient clinical outcome is measured by summing the five components. Five validity types are used to validate the proposed model: content, construct, criterion, descriptive, and predictive validity.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_6-A_Proposed_Quantitative_Conceptual_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Combination of Neural Networks and Fuzzy Clustering Algorithm to Evalution Training Simulation-Based Training</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070705</link>
        <id>10.14569/IJACSA.2016.070705</id>
        <doi>10.14569/IJACSA.2016.070705</doi>
        <lastModDate>2016-07-31T10:46:29.5970000+00:00</lastModDate>
        
        <creator>Lida Pourjafar</creator>
        
        <creator>Mehdi Sadeghzadeh</creator>
        
        <creator>Marjan Abdeyazdan</creator>
        
        <subject>Educational Data Mining; Simulation-Based Training; Dimensions Reduction; ANFIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>With the advancement of computer technology, computer simulations in the field of education have become more realistic and more effective. Simulation is the creation of a virtual environment that accurately reproduces real experiences in order to improve the individual, so Simulation-Based Training is the ability to improve, replace, create, or manage a real experience and training in a virtual mode. Simulation-Based Training also provides large amounts of information to learn from, so using data mining techniques to process this information in an educational setting can be very useful. We therefore used data mining to examine the impact of simulation-based training. The database, created in cooperation with the relevant institutions, includes 17 features. To study the effect of selected features, the LDA method and Pearson&#39;s correlation coefficient were used along with a genetic algorithm. We then used fuzzy clustering to produce a fuzzy system and improved it using neural networks. The results showed that the proposed method with reduced dimensions performs 3% better than other methods.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_5-Combination_of_Neural_Networks_and_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method to Build NLP Knowledge for Improving Term Disambiguation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070704</link>
        <id>10.14569/IJACSA.2016.070704</id>
        <doi>10.14569/IJACSA.2016.070704</doi>
        <lastModDate>2016-07-31T10:46:29.5500000+00:00</lastModDate>
        
        <creator>E. MD. Abdelrahim</creator>
        
        <creator>El-Sayed Atlam</creator>
        
        <creator>R. F. Mansour</creator>
        
        <subject>Information Retrieval; NLP Knowledge; Disambiguation; Word Semantics; trie structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Term sense disambiguation is essential for many NLP applications, including Internet search engines, information retrieval, data mining, classification, etc. However, older methods using case frames and semantic primitives are not adequate for resolving term ambiguities, which requires a great deal of information about sentences. This new approach introduces a system for building structured natural language knowledge. In this paper, all surface case patterns are classified in advance with consideration of the meaning of the noun. Moreover, this paper introduces an efficient data structure using a trie that defines the linkage among leaves and multi-attribute relations. Using this linkage of multi-attribute relations, we obtain high-frequency access between verbs and nouns with automatic generation of hierarchical relationships. In our experiment, a large tagged corpus (the Penn Treebank) is used to extract data. Around 11,000 verbs and nouns are used to verify the new method and to build hierarchical groupings of nouns. Moreover, the accuracy of term disambiguation using our trie structure and leaf-linking method is 6% higher than that of the old method.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_4-A_New_Method_to_Build_NLP_Knowledge_for_Improving_Term_Isambiguation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Routing Protocol (RPL) for Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070703</link>
        <id>10.14569/IJACSA.2016.070703</id>
        <doi>10.14569/IJACSA.2016.070703</doi>
        <lastModDate>2016-07-31T10:46:29.5030000+00:00</lastModDate>
        
        <creator>Qusai Q. Abuein</creator>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>Mohammed Q. Shatnawi</creator>
        
        <creator>Laith Bani-Yaseen</creator>
        
        <creator>Omar Al-Omari</creator>
        
        <creator>Moutaz Mehdawi</creator>
        
        <creator>Hussien Altawssi</creator>
        
        <subject>density network; objective function; zero grid; packet delivery; power consumption; Internet of Things</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Recently, the Internet Engineering Task Force (IETF) standardized RPL, a powerful and flexible routing protocol for Low-Power and Lossy Networks in the Internet of Things. It is an extensible distance-vector protocol proposed for low-power and lossy networks in the global realm of IPv6 networks; it selects routes from a source to a destination node based on certain metrics injected into the objective function (OF). Previous work has investigated the performance of RPL in lighter-density networks. This study investigates the performance of RPL at medium density using two objective functions in various topologies (e.g., grid, random). The performance of RPL is studied using various metrics, for example Packet Delivery Ratio (PDR), power consumption, and Packet Reception Ratio (RX), using fixed RX values.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_3-Performance_Evaluation_of_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Improvement Security against DoS Attacks in Vehicular Ad-hoc Network </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070702</link>
        <id>10.14569/IJACSA.2016.070702</id>
        <doi>10.14569/IJACSA.2016.070702</doi>
        <lastModDate>2016-07-31T10:46:29.4700000+00:00</lastModDate>
        
        <creator>Reza Fotohi</creator>
        
        <creator>Yaser Ebazadeh</creator>
        
        <creator>Mohammad Seyyar Geshlag</creator>
        
        <subject>component; VANET; P-Secure Protocol; DoS Attack; detection; OBUmodelVaNET; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>Vehicular Ad-Hoc Networks (VANETs) are a proper subset of mobile wireless networks in which the nodes are vehicles equipped with special electronic On-Board Units (OBUs) that enable them to transmit and receive messages from other vehicles in the VANET. In addition to communication between vehicles, the VANET interface is provided by contact points with the road infrastructure. VANETs are a subgroup of MANETs; unlike MANET nodes, VANET nodes move very fast, so maintaining a permanent route for the dissemination of emergency messages and alerts from a danger zone is a very challenging task. Routing therefore plays a significant role in VANETs. Decreasing network overhead, avoiding network and traffic congestion, and increasing the packet delivery ratio are the most important issues associated with routing in VANETs. In addition, VANETs are subject to various security attacks. In baseline VANET systems, an algorithm is used to discover attacks at confirmation time, which introduces overhead delay. This paper proposes the P-Secure approach, which detects DoS attacks before the confirmation time, thereby reducing processing overhead delays and increasing security in VANETs. Simulation results show that the P-Secure approach is more efficient than the OBUmodelVaNET approach in terms of PDR, e2e_delay, throughput, and packet drop rate.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_2-A_New_Approach_for_Improvement_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Vertical Handover Management for Mobile Telemedicine System using Heterogeneous Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070701</link>
        <id>10.14569/IJACSA.2016.070701</id>
        <doi>10.14569/IJACSA.2016.070701</doi>
        <lastModDate>2016-07-31T10:46:29.3300000+00:00</lastModDate>
        
        <creator>Hoe-Tung Yew</creator>
        
        <creator>Eko Supriyanto</creator>
        
        <creator>M Haikal Satria</creator>
        
        <creator>Yuan-Wen Hau</creator>
        
        <subject>Mobile telemedicine system; vertical handover; heterogeneous networks; unnecessary handover; throughput; cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(7), 2016</description>
        <description>The application of existing mobile telemedicine systems is restricted by imperfect network coverage, network capacity, and mobility. In this paper, a novel telemedicine-based handover decision-making (THODM) algorithm is proposed for a mobile telemedicine system using heterogeneous wireless networks. The proposed algorithm selects the best network based on the service requirements, ensuring that the connected or targeted network candidate has sufficient capacity to support the telemedicine services. The simulation results show that the proposed algorithm minimizes the number of unnecessary handovers to WLAN in high-speed environments. The throughput achieved by the proposed algorithm is up to 75% and 205% higher than that of the cellular and RSS-based schemes, respectively. Moreover, the average data transmission cost of the THODM algorithm is 24% and 69.2% lower than that of the cellular and RSS schemes. The proposed algorithm minimizes the average transmission cost while maintaining telemedicine service quality at the highest level in high-speed environments.</description>
        <description>http://thesai.org/Downloads/Volume7No7/Paper_1-A_Vertical_Handover_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Tunneling Technique for Flow-Based Fast Handover in Proxy Mobile Ipv6 Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050702</link>
        <id>10.14569/IJARAI.2016.050702</id>
        <doi>10.14569/IJARAI.2016.050702</doi>
        <lastModDate>2016-07-11T03:36:27.2300000+00:00</lastModDate>
        
        <creator>Yunes Abdussalam Amgahd</creator>
        
        <creator>Raghav Yadav</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(7), 2016</description>
        <description>In Mobile IPv6 networks, each node is highly mobile and handoff is a very common process. When not processed efficiently, the handoff process may result in a large amount of packet loss. If the handover is performed without appropriate connection verification and through specified tunnels, inappropriate traffic flow may result, since the required traffic redirection may not happen in this case. To overcome these issues, we propose an Enhanced Tunneling Technique for Flow-based Fast Handover in Proxy Mobile IPv6 Networks. In this technique, packets are buffered to minimize packet loss during handover, and the flow-based fast handover technique is then employed to ensure that traffic is redirected to the new subnet after the handover process, thus ensuring efficient network operation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No7/Paper_2-Enhanced_Tunneling_Technique_for_Flow_Based_Fast_Handover.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagrammatic Language for Artificial Intelligence: Representation of Things that Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050701</link>
        <id>10.14569/IJARAI.2016.050701</id>
        <doi>10.14569/IJARAI.2016.050701</doi>
        <lastModDate>2016-07-11T03:36:27.1370000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>diagrammatic representation; philosophical diagrammatic language; possible worlds</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(7), 2016</description>
        <description>This paper utilizes a diagrammatic language for expressing certain philosophical notions, such as possible worlds, beliefs, and propositions. The focus is on a diagrammatic representation that depicts “things” to show how their various important properties and relations can be explicated in terms of diagrams. The paper does not add a new contribution to philosophy (what is said in it); rather, it contributes a representation tool for philosophy. Akin to specifications in software engineering, the proposal is to provide a complementary technique for expressing different aspects of the philosophical concepts involved, which are typically presented in the form of textual explanations. The resultant diagrams seem to be a viable tool that can be utilized in teaching, in communication, and to facilitate an understanding of philosophical problems.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No7/Paper_1-Diagrammatic_Language_for_Artificial_Intelligence_Representation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining adaptive thresholds for image segmentation for a license plate recognition system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070667</link>
        <id>10.14569/IJACSA.2016.070667</id>
        <doi>10.14569/IJACSA.2016.070667</doi>
        <lastModDate>2016-07-01T14:32:55.5730000+00:00</lastModDate>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Khairuddin Omar</creator>
        
        <creator>Abbas Salimi Zaini</creator>
        
        <creator>Maria Petrou</creator>
        
        <creator>Marzuki Khalid</creator>
        
        <subject>adaptive threshold; image segmentation; license plate recognition; neural network; computer surveillance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>A vehicle license plate recognition (LPR) system is useful in many applications, such as entrance admission, security, parking control, airports and cargo, and traffic and speed control. This paper describes an adaptive threshold for image segmentation applied to a system for Malaysian intelligent license plate recognition (MyiLPR). Due to the different types of license plates used, the requirements of an automatic LPR system differ for each country. Upon receiving the input car image, this system (MyiLPR) detects and segments the license plate based on the proposed adaptive threshold via image and blob histograms and blob agglomeration; finally, it extracts geometric character features and classifies them using a neural network. The use of the proposed adaptive threshold increased the detection, segmentation, and recognition rates to 99%, 94.98%, and 90%, respectively, from the 95%, 78.27%, and 71.08% obtained with the fixed threshold used in the originally proposed system.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_67-Determining_adaptive_thresholds_for_image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scheduling on Heterogeneous Multi-core Processors Using Stable Matching Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070666</link>
        <id>10.14569/IJACSA.2016.070666</id>
        <doi>10.14569/IJACSA.2016.070666</doi>
        <lastModDate>2016-07-01T14:32:55.5430000+00:00</lastModDate>
        
        <creator>Muhammad Rehman Zafar</creator>
        
        <creator>Muhammad Asfand-e-Yar</creator>
        
        <subject>Heterogeneous, Performance, Scheduling, Multi-core processors, Stable matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Heterogeneous Multi-core Processors (HMPs) are better at scheduling jobs than homogeneous multi-core processors. Two main factors are associated with analyzing both architectures: performance and power consumption. An HMP incorporates cores of various types or complexities in a single chip. Hence, an HMP is capable of addressing both throughput and efficiency for different workloads by matching execution resources to the needs of every application. The primary objective of this study is to improve the dynamic selection of the processor core to fulfill power and performance requirements using a task scheduler. In the proposed solution, there are dynamic priority lists for tasks and available cores, and the task-to-core mapping is performed on the basis of the priorities of the tasks and cores.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_66-Scheduling_on_Heterogeneous_Multi-core_Processors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of a Constrained Version of MOEA/D on CTP-series Test Instances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070665</link>
        <id>10.14569/IJACSA.2016.070665</id>
        <doi>10.14569/IJACSA.2016.070665</doi>
        <lastModDate>2016-07-01T14:32:55.5130000+00:00</lastModDate>
        
        <creator>Muhammad Asif Jan</creator>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Nasser Mansoor Tairan</creator>
        
        <creator>Wali Khan Mashwani</creator>
        
        <subject>Decomposition; MOEA/D; threshold based penalty function; constrained multiobjective optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Constrained multiobjective optimization arises in many real-life applications and is therefore gaining constantly growing attention from researchers. Constraint handling techniques differ in the way infeasible solutions are evolved in the evolutionary process along with their feasible counterparts. Our recently proposed threshold based penalty function gives a chance of evolution to infeasible solutions whose constraint violation is less than a specified threshold value. This paper embeds the threshold based penalty function in the update and replacement scheme of the multi-objective evolutionary algorithm based on decomposition (MOEA/D) to find tradeoff solutions for constrained multiobjective optimization problems (CMOPs). The modified algorithm is tested on CTP-series test instances in terms of the hypervolume metric (HV-metric). The experimental results are compared with two well-known algorithms, NSGA-II and IDEA. The sensitivity of the algorithm to the adopted parameters is also checked. Empirical results demonstrate the effectiveness of the proposed penalty function in the MOEA/D framework for CMOPs.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_65_Performance_of_a_Constrained_Version_of_MOEAD.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multivariable Decoupling Controller: Application to Multicellular Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070664</link>
        <id>10.14569/IJACSA.2016.070664</id>
        <doi>10.14569/IJACSA.2016.070664</doi>
        <lastModDate>2016-07-01T14:32:55.4970000+00:00</lastModDate>
        
        <creator>Abir Smati</creator>
        
        <creator>Wassila Chagra</creator>
        
        <creator>Denis Berdjag</creator>
        
        <creator>Moufida Ksouri</creator>
        
        <subject>Hybrid systems; Multicellular series converters; PWM; Closed loop control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>A new control strategy is presented in this paper, based on previous works limited to the control of the capacitor voltages considered as the outputs of a three-cell converter. An additional control input is proposed in order to obtain the desired current output. The experiments performed on a multicellular converter are presented, and the results discussed show the efficiency of the contribution.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_64-Multivariable_Decoupling_Controller_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge-based Approach for Event Extraction from Arabic Tweets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070663</link>
        <id>10.14569/IJACSA.2016.070663</id>
        <doi>10.14569/IJACSA.2016.070663</doi>
        <lastModDate>2016-07-01T14:32:55.4630000+00:00</lastModDate>
        
        <creator>Mohammad AL-Smadi</creator>
        
        <creator>Omar Qawasmeh</creator>
        
        <subject>Event Extraction; Knowledge base; Entity linking; Named entity disambiguation; Arabic NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Tweets provide a continuous update on current events. However, tweets are short, personalized, and noisy, which raises challenges for event extraction and representation. Extracting events from Arabic tweets is a new research domain where few examples – if any – of previous work can be found. This paper describes a knowledge-based approach for fostering event extraction from Arabic tweets. The approach uses an unsupervised rule-based technique for event extraction and provides named entity disambiguation of event-related entities (i.e., person, organization, and location). Extracted events and their related entities are populated into the event knowledge base, where tagged tweet entities are linked to their corresponding entities represented in the knowledge base. The proposed approach was evaluated on a dataset of 1K Arabic tweets covering different types of events (i.e., instant events and interval events). Results show that the approach has an accuracy of 75.9% for event trigger extraction, 87.5% for event time extraction, and 97.7% for event type identification.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_63-Knowledge-based_Approach_for_Event_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hashtag the Tweets: Experimental Evaluation of Semantic Relatedness Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070662</link>
        <id>10.14569/IJACSA.2016.070662</id>
        <doi>10.14569/IJACSA.2016.070662</doi>
        <lastModDate>2016-07-01T14:32:55.4500000+00:00</lastModDate>
        
        <creator>Muhammad Asif</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Hina Asmat</creator>
        
        <creator>Mujtaba Husnain</creator>
        
        <creator>Muhammad Asghar</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>On Twitter, hashtags are used to summarize the topics of tweet content and to help search tweets. However, hashtags are created in a free style and are thus heterogeneous, increasing the difficulty of their usage. It is therefore important to evaluate whether they really represent the content they are attached to. In this work, we perform detailed experiments to answer this question. In addition, we compare different semantic relatedness measures to find the similarity between hashtags and tweets. Experiments are performed using ten different measures, and Adapted Lesk is found to be the best.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_62-Hashtag_the_Tweets_Experimental_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HAMSA: Highly Accelerated Multiple Sequence Aligner</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070661</link>
        <id>10.14569/IJACSA.2016.070661</id>
        <doi>10.14569/IJACSA.2016.070661</doi>
        <lastModDate>2016-07-01T14:32:55.4330000+00:00</lastModDate>
        
        <creator>Naglaa M. Reda</creator>
        
        <creator>Mohammed Al-Neama</creator>
        
        <creator>Fayed F. M. Ghaleb</creator>
        
        <subject>Bioinformatics; Multiple sequence alignment; parallel programming; Clusters; Multi-cores</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>For biologists, the existence of an efficient tool for multiple sequence alignment is essential. This work presents a new parallel aligner called HAMSA. HAMSA is a bioinformatics application designed for highly accelerated alignment of multiple sequences of proteins and DNA/RNA on a multi-core cluster system. The design of HAMSA is based on a combination of our recently proposed optimized algorithms for vectorization, partitioning, and scheduling. It mainly operates on a distance vector instead of a distance matrix. It accomplishes similarity computations and generates the guide tree in a highly accelerated and accurate manner. HAMSA outperforms MSAProbs with a 21.9-fold speedup and ClustalW-MPI with an 11-fold speedup. It can be considered an essential tool for structure prediction, protein classification, motif finding, and drug design studies.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_61-HAMSA_Highly_Accelerated_Multiple_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Document Level Semantics in Document Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070660</link>
        <id>10.14569/IJACSA.2016.070660</id>
        <doi>10.14569/IJACSA.2016.070660</doi>
        <lastModDate>2016-07-01T14:32:55.4030000+00:00</lastModDate>
        
        <creator>Muhammad Rafi</creator>
        
        <creator>Muhammad Naveed Sharif</creator>
        
        <creator>Waleed Arshad</creator>
        
        <creator>Habibullah Rafay</creator>
        
        <subject>Document Clustering; Text Mining; Similarity Measure; Semantics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Document clustering is an unsupervised machine learning method that separates a large, subject-heterogeneous collection (corpus) into smaller, more manageable, subject-homogeneous collections (clusters). Traditional methods of document clustering work by extracting textual features like terms, sequences, and phrases from documents. These features are independent of each other and do not capture the meaning behind these words in the clustering process. In order to perform semantically viable clustering, we believe that the problem of document clustering has two main components: (1) to represent the document in such a form that it inherently captures the semantics of the text, which may also help to reduce the dimensionality of the document; and (2) to define a similarity measure based on lexical, syntactic, and semantic features such that it assigns higher numerical values to document pairs which have a higher syntactic and semantic relationship. In this paper, we propose a representation of a document by extracting three different types of features from it: lexical, syntactic, and semantic features. A meta-descriptor for each document is proposed using these three features: first lexical, then syntactic, and last semantic. A document-to-document similarity matrix is produced where each entry contains a three-value vector for the lexical, syntactic, and semantic features. The main contributions of this research are: (i) a document-level descriptor using three different features for text, namely lexical, syntactic, and semantic; (ii) a similarity function using these three features; and (iii) a new candidate clustering algorithm using the three components of the similarity measure to guide the clustering process in a direction that produces more semantically rich clusters. We performed an extensive series of experiments on standard text mining data sets with external clustering evaluations like F-Measure and Purity, and have obtained encouraging results.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_60-Exploiting_Document_Level_Semantics_in_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining in Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070659</link>
        <id>10.14569/IJACSA.2016.070659</id>
        <doi>10.14569/IJACSA.2016.070659</doi>
        <lastModDate>2016-07-01T14:32:55.3700000+00:00</lastModDate>
        
        <creator>Abdulmohsen Algarni</creator>
        
        <subject>Data mining, Educational Data Mining (EDM), Knowledge extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Data mining techniques are used to extract useful knowledge from raw data. The extracted knowledge is valuable and significantly affects the decision maker. Educational data mining (EDM) is a method for extracting useful information that could potentially affect an organization. The increasing use of technology in educational systems has led to the storage of large amounts of student data, which makes it important to use EDM to improve teaching and learning processes. EDM is useful in many different areas, including identifying at-risk students, identifying priority learning needs for different groups of students, increasing graduation rates, effectively assessing institutional performance, maximizing campus resources, and optimizing subject curriculum renewal. This paper surveys the relevant studies in the EDM field and describes the data and methodologies used in those studies.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_59-Data_Mining_in_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: An Extension of the Bisection Theorem to Symmetrical Circuits with Cross-Coupling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070658</link>
        <id>10.14569/IJACSA.2016.070658</id>
        <doi>10.14569/IJACSA.2016.070658</doi>
        <lastModDate>2016-07-01T14:32:55.3400000+00:00</lastModDate>
        
        <creator>Fadi Nessir Zghoul</creator>
        
        <subject>Bisection theorem; common-mode analysis; cross-coupling; differential amplifiers; differential-mode analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2016.070658.retraction</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_58-An_Extension_of_the_Bisection_Theorem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Auction-Bidding Protocol for Distributed Bit Allocation in RSSI-based Localization Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070657</link>
        <id>10.14569/IJACSA.2016.070657</id>
        <doi>10.14569/IJACSA.2016.070657</doi>
        <lastModDate>2016-07-01T14:32:55.2930000+00:00</lastModDate>
        
        <creator>Ahmad A. Ababneh</creator>
        
        <subject>Target localization; Auction-bidding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Several factors (e.g., target energy, sensor density) affect estimation error at a point of interest in sensor networks. One of these factors is the number of bits allocated to the sensors that cover the point of interest when quantization is employed. In this paper, we investigate bit allocation in such networks so that estimation error requirements at multiple points of interest are satisfied as closely as possible. To solve this nonlinear integer programming problem, we propose an iterative distributed auction-bidding protocol. Starting with some initial bit distribution, the network is divided into a number of clusters, each with its own auction. Each cluster head (CH) acts as an auctioneer and divides sensors into buyers or sellers of bits (i.e., the commodity). With limited messaging, CHs redistribute bits among sensors, one bit at a time, such that the difference between the achieved and required estimation errors within each cluster is reduced in each round. We propose two bit-pricing schemes used by sensors to decide on exchanging bits. Finally, simulation results show that our proposed ‘distributed’ protocol’s error performance can be within 5%-10% of that of a ‘centralized’ genetic algorithm (GA) solution.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_57-An_Auction-Bidding_Protocol_for_Distributed.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Textual Graph Based Model for Arabic Multi-document Summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070656</link>
        <id>10.14569/IJACSA.2016.070656</id>
        <doi>10.14569/IJACSA.2016.070656</doi>
        <lastModDate>2016-07-01T14:32:55.2770000+00:00</lastModDate>
        
        <creator>Muneer A. Alwan</creator>
        
        <creator>Hoda M. Onsi</creator>
        
        <subject>Text Summarization; Arabic Abstractive Summary; Textual Graph; Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Text summarization is still an active area of research in natural language processing. Several methods proposed in the literature to solve this task have met with mixed success. However, the methods developed for multi-document Arabic text summarization are based on extractive summaries, and none of them is oriented toward abstractive summaries. This is due to the challenges of the Arabic language and the lack of resources. In this paper, we present an abstractive Arabic multi-document summarizer with minimal language-dependent processing. The proposed model is based on a textual graph to remove multi-document redundancy and generate a coherent summary. Firstly, the original text, a highly redundant set of related documents, is converted into a textual graph. Next, graph traversal with structural rules is applied to concatenate related sentences into single ones. Finally, unwanted and low-weighted phrases are removed from the summarized sentences to generate the final summary. Preliminary results show that the proposed method achieves promising results for multi-document summarization.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_56-A_Proposed_Textual_Graph_Based_Model_for_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Algorithm for Optimizing Multiple Services Resource Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070655</link>
        <id>10.14569/IJACSA.2016.070655</id>
        <doi>10.14569/IJACSA.2016.070655</doi>
        <lastModDate>2016-07-01T14:32:55.2470000+00:00</lastModDate>
        
        <creator>Amjad Gawanmeh</creator>
        
        <creator>Alain April</creator>
        
        <subject>Cloud computing; Cloud Services; Scheduling; Parallel and Distributed systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Resource provisioning is an increasingly challenging problem in cloud computing environments, since cloud-based services are becoming more numerous and dynamic. The problem of scheduling multiple tasks for multiple users on a given number of resources is considered NP-complete, and therefore several heuristic-based methods have been proposed; yet many improvements can still be made, since the problem has several optimization parameters. In addition, most proposed solutions are built on top of several assumptions and simplifications, applying computational methods such as game theory, fuzzy logic, or evolutionary computing. This paper presents an algorithm to address the problem of resource allocation across a cloud-based network, where several resources are available and the cost of a computational service depends on the amount of computation. The algorithm is applicable without restrictions on the cost vector or computation time matrix, as opposed to methods in the literature. In addition, the execution of the algorithm shows better utility compared to methods applied to similar problems.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_55-A_Novel_Algorithm_for_Optimizing_Multiple_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study from Several Business Cases and Methodologies for ICT Project Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070654</link>
        <id>10.14569/IJACSA.2016.070654</id>
        <doi>10.14569/IJACSA.2016.070654</doi>
        <lastModDate>2016-07-01T14:32:55.2300000+00:00</lastModDate>
        
        <creator>Farrukh Saleem</creator>
        
        <creator>Naomie Salim</creator>
        
        <creator>Abdulrahman H. Altalhi</creator>
        
        <creator>Abdullah AL-Malaise AL-Ghamdi</creator>
        
        <creator>Zahid Ullah</creator>
        
        <creator>Fatmah A. Baothman</creator>
        
        <creator>Muhammad Haleem Junejo</creator>
        
        <subject>ICT Investment, Evaluation of ICT Investment; Multi-Dimensional Approaches; Multi-Criteria Approaches; Financial Approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Achieving high competitive advantage through Information and Communication Technologies (ICT) has never been easy without proper management and appropriate utilization of ICT resources. Statistics suggest that ICT project failures are very common in organizations for several reasons: failure to deliver the required objectives of the investment, inaccurate budget planning, lack of a risk management plan, and time overruns are some basic causes of an ICT project’s failure. To overcome these issues, ICT decision makers have recently been placing more emphasis on ICT project evaluation rather than on investment alone. Practitioners broadly categorize evaluation techniques into post- and pre-evaluation methods, which are further divided by whether returns are measured from financial or non-financial perspectives. The main purpose of this paper is to provide a comparative analysis of ICT investment evaluation and its categories based on pre- and post-evaluation. Thus, the paper offers an extensive literature review that can help ICT decision makers and organizations better select among the available evaluation techniques, where the integration of multiple techniques can further improve this process</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_54-Comparative_Study_from_Several_Business_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increasing the Target Prediction Accuracy of MicroRNA Based on Combination of Prediction Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070653</link>
        <id>10.14569/IJACSA.2016.070653</id>
        <doi>10.14569/IJACSA.2016.070653</doi>
        <lastModDate>2016-07-01T14:32:55.2000000+00:00</lastModDate>
        
        <creator>Mohammed Q. Shatnawi</creator>
        
        <creator>Mohammad Alhammouri</creator>
        
        <creator>Kholoud Mukdadi </creator>
        
        <subject>miRNA; chromosome; prediction; genome; disease; biology; DNA sequence; enzyme</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>MicroRNA is an oligonucleotide that plays a role in the pathogenesis of several diseases, including cancer. It is a non-coding RNA that is involved in the control of gene expression through the binding and inhibition of mRNA. In this study, three algorithms were implemented in the WEKA software using two testing modes to analyze five datasets of miRNA families. Data mining techniques are used to compare miRNA-mRNA interactions that belong either to the same gene family or to different families, and to establish a biological scheme that explains how strongly the biological parameters are involved in miRNA-mRNA prediction. The factors involved in the prediction process include match, mismatch, bulge, loop, and score, which represent the binding characteristics, while the position, 3’UTR length, chromosomal location, and chromosomal categorization represent the characteristics of the target mRNA. These attributes can provide empirical guidance for the study of a specific miRNA family when scanning the whole human genome for novel targets. This research provides promising results that can be utilized for current and future research in this field.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_53-Increasing_the_Target_Prediction_Accuracy_of_MicroRNA_Based_on_Combination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Authenticating Sensitive Speech-Recitation in Distance-Learning Applications using Real-Time Audio Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070652</link>
        <id>10.14569/IJACSA.2016.070652</id>
        <doi>10.14569/IJACSA.2016.070652</doi>
        <lastModDate>2016-07-01T14:32:55.1700000+00:00</lastModDate>
        
        <creator>Omar Tayan</creator>
        
        <creator>Lamri Laouamer</creator>
        
        <creator>Tarek Moulahi</creator>
        
        <creator>Yasser M. Alginahi</creator>
        
        <subject>Audio; Watermarking; Quran-recitation; Integrity; Authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>This paper focuses on audio-watermarking authentication and integrity protection within the context of speech data transmitted over the Internet in a real-time learning environment. Arabic Quran recitation through distance learning is used as a case-study example that is characteristic of sensitive data requiring robust authentication and integrity measures. This work proposes an approach for authenticating and validating audio data transmitted by a publisher or during communications between an instructor and students reciting via the Internet. The watermarking approach proposed here is based on detection of the key patterns within the audio signal, which serve as an input to the algorithm before the embedding phase is performed. The developed application could easily be used at both sides of the communication to ensure the authenticity and integrity of the transmitted speech signal, and it has proved effective for many distance-learning applications that require low-complexity processing in real time</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_52-Authenticating sensitive_Speech_Recitation_in_Distance_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Security Using Cryptography and Steganography Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070651</link>
        <id>10.14569/IJACSA.2016.070651</id>
        <doi>10.14569/IJACSA.2016.070651</doi>
        <lastModDate>2016-07-01T14:32:55.1230000+00:00</lastModDate>
        
        <creator>Marwa E. Saleh</creator>
        
        <creator>Abdelmgeid A. Aly</creator>
        
        <creator>Fatma A. Omara</creator>
        
        <subject>Image Steganography; Pixel Value Difference (PVD); Encryption; Decryption; Advance encryption standard (AES)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Although cryptography and steganography can each be used to provide data security, each of them has a weakness. The problem with cryptography is that the cipher text looks meaningless, so an attacker may interrupt the transmission or inspect the data travelling from sender to receiver more carefully. The problem with steganography is that once the presence of hidden information is revealed, or even suspected, the message becomes known. In this paper, a merged technique for data security is proposed, using cryptography and steganography together to improve the security of the information. Firstly, the Advanced Encryption Standard (AES) algorithm is modified and used to encrypt the secret message. Secondly, the encrypted message is hidden using the method in [1]. Therefore, two levels of security are provided by the proposed hybrid technique. In addition, the proposed technique provides high embedding capacity and high-quality stego images</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_51-Data_Security_Using_Cryptography_and_Steganography_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Support Vector Regression Models for Survival Analysis: A Simulation Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070650</link>
        <id>10.14569/IJACSA.2016.070650</id>
        <doi>10.14569/IJACSA.2016.070650</doi>
        <lastModDate>2016-07-01T14:32:55.0730000+00:00</lastModDate>
        
        <creator>Hossein Mahjub</creator>
        
        <creator>Shahrbanoo Goli</creator>
        
        <creator>Javad Faradmal</creator>
        
        <creator>Ali-Reza Soltanian</creator>
        
        <subject>support vector machines; support vector regression; survival analysis; simulation study; Cox model; mean residual life</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Desirable features of support vector regression (SVR) models have led researchers to extend them to survival problems. In the current paper, we evaluate and compare the performance of different SVR models and the Cox model using simulated and real data sets with different characteristics. Several SVR models are applied: 1) SVR with only regression constraints (standard SVR); 2) SVR with regression and ranking constraints; 3) SVR with positivity constraints; and 4) L1-SVR. In addition, an SVR model based on mean residual life is proposed. Our findings from the evaluation of real data sets indicate that for data sets with a high censoring rate and a high number of features, the SVR model significantly outperforms the Cox model. Simulated data sets show similar results. For some real data sets, L1-SVR has significantly degraded performance in comparison to the standard SVR. The performance of the other SVR models is not substantially different from that of the standard SVR on the real data sets. Nevertheless, the results on simulated data sets show that standard SVR slightly outperforms SVR with regression and ranking constraints.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_50-Performance_Evaluation_of_Support_Vector_Regression_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Verification of a Secure Model for Building E-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070649</link>
        <id>10.14569/IJACSA.2016.070649</id>
        <doi>10.14569/IJACSA.2016.070649</doi>
        <lastModDate>2016-07-01T14:32:55.0430000+00:00</lastModDate>
        
        <creator>Farhan M Al Obisat</creator>
        
        <creator>Hazim S. AlRawashdeh</creator>
        
        <subject>Formal verification; SPIN Model Checking; E-content; E-protection; Encryption and Decryption; Security of e-content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The Internet is considered a common medium for E-learning, connecting the several parties involved (instructors and students), who may be far away from each other. Both wired and wireless networks are used in this learning environment to facilitate mobile access to educational systems. This learning environment requires secure connections and data exchange. An E-learning model was implemented and evaluated by conducting experiments with students. Before the approach is deployed in the real world, a formal verification of the model was completed, showing that no unreachability case exists. The model in this paper, which concentrates on the security of e-content, was successfully validated using the SPIN Model Checker, where no errors were found</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_49-Formal_Verification_of_a_Secure_Model_for_Building.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network-State-Aware Quality of Service Provisioning for the Internet of Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070648</link>
        <id>10.14569/IJACSA.2016.070648</id>
        <doi>10.14569/IJACSA.2016.070648</doi>
        <lastModDate>2016-07-01T14:32:54.9970000+00:00</lastModDate>
        
        <creator>Shafique Ahmad Chaudhry</creator>
        
        <creator>Jun Zhang</creator>
        
        <subject>Internet of things; QoS provisioning; 6Lowpan; Policy-based QoS; IP-based Wireless Sensor Network; 6LoWPAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The Internet of Things (IoT) describes a diverse range of technologies that enable a diverse range of applications using diverse platforms for communication. IP-enabled Wireless Sensor Networks (6LoWPAN) are an integral part of IoT realization because of their huge potential for sensing and communication. The provision of Quality of Service (QoS) requirements in IoT is a challenging task because of device heterogeneity in terms of the bandwidth, computing, and communication capabilities of the diverse set of IoT nodes and networks. The sensor nodes in IoT, e.g., in 6LoWPAN, exhibit low battery power, limited bandwidth, and extremely constrained computing power. Additionally, these IP-based sensor networks are inherently dynamic in nature due to node failures and mobility. The introduction of modern delay-sensitive applications for such networks has made the QoS provisioning task even harder. In this paper, we present a Network-State-Adaptive QoS provisioning algorithm for 6LoWPAN, which adapts to the changing network state to ensure that QoS requirements are met even as the network state varies. It is a policy-based mechanism that collaborates with the underlying routing protocol to satisfy the QoS requirements specified in high-level policies. It is simple to implement, yet it considerably limits the degradation of best-effort traffic. Our implementation results show that our protocol adjusts well to dynamic 6LoWPAN environments where multiple services compete for limited available resources</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_48-Network_State_Aware_Quality_of_Service_Provisioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stable Beneficial Group Activity Formation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070647</link>
        <id>10.14569/IJACSA.2016.070647</id>
        <doi>10.14569/IJACSA.2016.070647</doi>
        <lastModDate>2016-07-01T14:32:54.9670000+00:00</lastModDate>
        
        <creator>Noor Sami Al-Anbaki</creator>
        
        <creator>Azzam Sleit</creator>
        
        <creator>Ahmed Sharieh</creator>
        
        <subject>computational models; group formation; members&#39; preferences; Nash stability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Computational models are among the most powerful tools for expressing everyday situations that arise from human interactions. In this paper, an investigation of the problem of forming beneficial groups based on the members' preferences and the coordinator's own strategy is presented. It is assumed that the coordinator has a good intention behind trimming members' preferences to meet the ultimate goal of forming the group. His strategy is justified and evaluated by Nash stability. There are two variations of the problem: the Anonymous Stable Beneficial Group Activity Formation and the General Stable Beneficial Group Activity Formation. The computational complexity of solving both variations has been analyzed. Finding stable groups requires a non-polynomial time algorithm. A polynomial time solution is presented and illustrated with examples</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_47-Stable_Beneficial_Group_Activity_Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of the Iterative Numerical Methods Used in Mine Ventilation Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070646</link>
        <id>10.14569/IJACSA.2016.070646</id>
        <doi>10.14569/IJACSA.2016.070646</doi>
        <lastModDate>2016-07-01T14:32:54.9330000+00:00</lastModDate>
        
        <creator>B Maleki</creator>
        
        <creator>E. Mozaffari</creator>
        
        <subject>mine ventilation; network analysis; Newton-Raphson method; linear theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Ventilation is one of the key safety tasks in underground mines. Determining the airflow through mine openings and ducts is complex and often requires numerical analysis. The governing equations used in the computation of mine ventilation are discussed in matrix form. The aim of this paper is to compare the most frequently used numerical analysis methods, namely Newton-Raphson and the Linear Theory. A further goal of this study is to investigate the influence of the initial flow rates and of the fans in the network. A simulated mine ventilation network is presented in order to examine the two numerical methods. The numerical results acquired from the Newton-Raphson method exhibited a faster rate of convergence than those of the Linear Theory method. Because mine ventilation networks are comparatively small, the Newton-Raphson method converges faster. On the other hand, when computational tools and software are used, the advantage of faster convergence becomes less important, and the Linear Theory method is therefore preferable</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_46-A_Comparative_Study_of_the_Iterative_Numerical_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SecFHIR: A Security Specification Model for Fast Healthcare Interoperability Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070645</link>
        <id>10.14569/IJACSA.2016.070645</id>
        <doi>10.14569/IJACSA.2016.070645</doi>
        <lastModDate>2016-07-01T14:32:54.9030000+00:00</lastModDate>
        
        <creator>Ahmad Mousa Altamimi</creator>
        
        <subject>Healthcare; FHIR; Interoperability; Privacy preserving; Standards; XML schema</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Patients receiving medical treatment in distinct healthcare institutions have their information deeply fragmented across very different locations. All this information, probably in different formats, may be used or exchanged to deliver professional healthcare services. As the exchange of information (interoperability) is a key requirement for the success of the healthcare process, various predefined e-health standards have been developed. Such standards are designed to facilitate information interoperability in common formats. Fast Healthcare Interoperability Resources (FHIR) is a new open healthcare data standard that aims to provide electronic healthcare interoperability. FHIR was coined in 2014 to address limitations caused by the ad-hoc implementation and the distributed nature of modern medical care information systems. Patient data, or resources, are structured and standardized in FHIR through a highly readable format such as XML or JSON. However, despite the unique features of FHIR, it is not a security protocol, nor does it provide any security-related functionality. In this paper, we propose a security specification model (SecFHIR) to support the development of intuitive policy schemes that map directly to the healthcare environment. The formal semantics of SecFHIR are based on the well-established typing and platform-independent properties of XML. Specifically, patients’ data are modeled in FHIR using XML documents. In our model, we assume that these XML resources are defined by a set of schemas. Since an XML Schema is itself a well-formed XML document, the permission specification can easily be integrated into the schema, and the specified permissions are then applied to instance objects without any change. In other words, our security model (SecFHIR) defines permissions at the XML schema level, which implicitly specifies the permissions on XML resources. SecFHIR can combine these schemas to support complex constraints over XML resources. This results in reusable permissions, which efficiently simplify security administration and achieve fine-grained access control. We also discuss the core elements of the proposed model, as well as the integration with the FHIR framework</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_45-SecFHIR_A_Security_Specification_Model_for_Fast_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Persian Spam Filtering by Game Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070644</link>
        <id>10.14569/IJACSA.2016.070644</id>
        <doi>10.14569/IJACSA.2016.070644</doi>
        <lastModDate>2016-07-01T14:32:54.8570000+00:00</lastModDate>
        
        <creator>Seyedeh Tina Sefati</creator>
        
        <creator>Mohammad-Reza Feizi-Derakhshi</creator>
        
        <creator>Seyed Naser Razavi</creator>
        
        <subject>Spam Filtering; Game theory; Stackelberg game; Evolutionary Stable Strategy; Email Classification; Stackelberg equilibria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>There are different methods for dealing with spam; however, since spammers continuously use tricks to defeat the proposed methods, filters should be constantly updated. In this study, a Stackelberg game was used to produce a dynamic filter, and the relations between the filter and the adversary were modelled as a turn-based game with a leader and a follower. An attempt was then made to solve the game as an optimization program via the evolutionary stable strategy (ESS). The dataset used for evaluating and analyzing the proposed method was a real dataset consisting of four users’ personal emails. The results of the conducted evaluations indicated that the proposed method had an 8% improvement over the three-class classification method and a 0.8% improvement over the ESS-based equilibrium point method.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_44-Improvement_of_Persian_Spam_Filtering_by_Game_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Utilization in Cloud Computing as an Optimization Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070643</link>
        <id>10.14569/IJACSA.2016.070643</id>
        <doi>10.14569/IJACSA.2016.070643</doi>
        <lastModDate>2016-07-01T14:32:54.8270000+00:00</lastModDate>
        
        <creator>Ala&#39;a Al-Shaikh</creator>
        
        <creator>Hebatallah Khattab</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <creator>Azzam Sleit</creator>
        
        <subject>Activity Selection; NP-Complete; Optimization Problem; Resource Utilization; 0/1 Knapsack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In this paper, an algorithm for the resource utilization problem in cloud computing, based on the greedy method, is presented. A privately-owned cloud that provides services to a huge number of users is assumed. For a given resource, hundreds or thousands of requests to use that resource accumulate over time from different users worldwide via the Internet. Prior knowledge of the requests to use that resource is also assumed. The main concern is to find the best utilization schedule for a given resource in terms of the profit obtained by utilizing that resource and the number of time slices during which the resource will be utilized. The problem is proved to be NP-Complete. A greedy algorithm is proposed and analyzed in terms of its runtime complexity. The proposed solution is based on a combination of the 0/1 Knapsack problem and the activity-selection problem. The algorithm is implemented in Java. Results show good performance with a runtime complexity of O((F-S)n log n).</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_43-Resource_Utilization_in_Cloud_Computing_as_an_Optimization_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule Based Approach for Arabic Part of Speech Tagging and Name Entity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070642</link>
        <id>10.14569/IJACSA.2016.070642</id>
        <doi>10.14569/IJACSA.2016.070642</doi>
        <lastModDate>2016-07-01T14:32:54.8100000+00:00</lastModDate>
        
        <creator>Mohammad Hjouj Btoush</creator>
        
        <creator>Abdulsalam Alarabeyyat</creator>
        
        <creator>Isa Olab</creator>
        
        <subject>POS; Speech tagging; Speech recognition; Text phrase; Phrase; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The aim of this study is to build a tool for Part of Speech (POS) tagging and Named Entity Recognition for the Arabic language; the approach used to build this tool is a rule-based technique. The POS tagger contains two phases: the first phase passes each word through a lexicon, and the second is the morphological phase; the tagset is (Noun, Verb, and Determiner). The named-entity detector applies rules to the text and gives the correct label for each word; the labels are Person (PERS), Location (LOC), and Organization (ORG).</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_42-Rule_Based_Approach_for_Arabic_Part_of_Speech_Tagging_and_Name_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of KaPoW Plugin to Defend Against DDoS Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070641</link>
        <id>10.14569/IJACSA.2016.070641</id>
        <doi>10.14569/IJACSA.2016.070641</doi>
        <lastModDate>2016-07-01T14:32:54.7630000+00:00</lastModDate>
        
        <creator>Farah Samir Barakat</creator>
        
        <creator>A. Prof. Amira Kotb</creator>
        
        <subject>Application Security; Client Puzzling; DDoS; Metrics; PHP; Puzzle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The DDoS attack is one of the hardest attacks to detect and mitigate in the computing world. This paper introduces two quantitative models, which use client puzzling to detect and thwart application-layer DDoS attacks. We simulated the models to use probabilistic metrics to penalize malicious users and prevent them from launching a DDoS attack, while offering a stable environment to normal users and decreasing the number of false positives and false negatives.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_41-Enhancement_of_KaPoW_Plugin_to_Defend_Against_DDoS_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSO-based Optimized Canny Technique for Efficient Boundary Detection in Tamil Sign Language Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070640</link>
        <id>10.14569/IJACSA.2016.070640</id>
        <doi>10.14569/IJACSA.2016.070640</doi>
        <lastModDate>2016-07-01T14:32:54.7300000+00:00</lastModDate>
        
        <creator>Dr M Krishnaveni</creator>
        
        <creator>Dr P Subashini</creator>
        
        <creator>TT Dhivyaprabha</creator>
        
        <subject>Tamil Sign Language; Canny Edge Detection; PSO; Thresholding; Objective Function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>For the hearing impaired, sign language is the most prevalent means of communication in day-to-day life. It is always a challenge to develop an optimized automated system to recognize and interpret the meaning of signs expressed by the hearing impaired. A wide range of algorithms has been developed for sign language recognition (SLR), of which only a few considerable approaches address Tamil Sign Language recognition. This paper proposes a significant contribution to the segmentation process, which is the most predominant component of image analysis in constructing an SLR system. Segmentation is handled using an edge detection procedure that finds the borders of the hand sign within the captured images by detecting discontinuities in the image illumination. The objective of the edge function, finding the boundary intensity, is met by a particle swarm optimization technique that chooses the optimal threshold values, implemented in the Canny hysteresis thresholding method. The analysis primarily uses common edge detection algorithms, namely Sobel, Roberts, Canny, and Prewitt, and the scope of the work is extended by introducing an optimization technique into the Canny method. The performance of the proposed algorithm is tested on a real-time Tamil Sign Language dataset, and a comparison is carried out using standard segmentation metrics.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_40-PSO_based_Optimized_Canny_Technique_for_Efficient_Boundary_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Smartphones Systems for Emergency Management (SPSEM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070639</link>
        <id>10.14569/IJACSA.2016.070639</id>
        <doi>10.14569/IJACSA.2016.070639</doi>
        <lastModDate>2016-07-01T14:32:54.7000000+00:00</lastModDate>
        
        <creator>Hafsa Maryam</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Qaisar Javaid</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <subject>smartphone; emergency response; mobile; crises management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Emergencies never arrive with prior intimation or indication. In practical life, detecting and perceiving such emergencies and reporting them is a genuine test and a tough challenge. Smartphones Systems for Emergency Management (SPSEM) provide details of existing emergency applications and offer a new direction for overcoming the traditional problem of manual intercession in reporting emergencies. In this paper, we provide a comprehensive overview of SPSEM. We elaborate on how embedded sensors automate the procedure of detecting and responding to emergency crises. Furthermore, we critically evaluate the operations, benefits, limitations, emergency applications, and responsiveness of different approaches in any emergency crisis. We provide an easy and concise view of the underlying model adopted by each SPSEM approach. Finally, we estimate the future utility and provide insight into upcoming trends in SPSEM.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_39-A_Survey_on_Smartphones_Systems_for_Emergency_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Image Scaling Using DWT and Different Interpolation Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070638</link>
        <id>10.14569/IJACSA.2016.070638</id>
        <doi>10.14569/IJACSA.2016.070638</doi>
        <lastModDate>2016-07-01T14:32:54.6700000+00:00</lastModDate>
        
        <creator>Wardah Aslam</creator>
        
        <creator>Khurram Khurshid</creator>
        
        <creator>Asfaq Ahmed Khan</creator>
        
        <subject>Bilinear Interpolation; Bicubic Interpolation; Discrete Wavelet Transform Image Scaling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The Discrete Wavelet Transform (DWT) has gained much attention in recent years. The Wavelet Transform has precedence over the Discrete Fourier Transform and the Discrete Cosine Transform because it captures the frequency as well as the spatial information of a signal. In this paper, the DWT is used for image scaling. To achieve a higher visual quality image, the DWT is applied to the gray-scale image using a downscaling technique. The original image is recovered using the IDWT by employing different interpolation techniques for upscaling. The interpolation techniques used in this paper are Nearest Neighbor, Bilinear, and Bicubic. The Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) are calculated to quantify the effectiveness of the interpolated image. Results show that the reconstructed image is better when using a combination of DWT and Bicubic interpolation.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_38-Optimized_Image_Scaling_Using_DWT_and_Different_Interpolation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Adaptive Smart Concentric Circular Antenna Array Based Hybrid PSOGSA Optimizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070637</link>
        <id>10.14569/IJACSA.2016.070637</id>
        <doi>10.14569/IJACSA.2016.070637</doi>
        <lastModDate>2016-07-01T14:32:54.6530000+00:00</lastModDate>
        
        <creator>Ahmed Magdy</creator>
        
        <creator>Osama M. EL-Ghandour</creator>
        
        <creator>Hesham F. A. Hamed</creator>
        
        <subject>Smart antenna; CCAA; PSOGSA; HAP; Beam-forming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Unlike recent research that used a Concentric Circular Antenna Array (CCAA) with one beam-former for each single main beam, this research presents a technique to adapt a smart CCAA by using only a single beam-former for multiple main beams based on hybrid PSOGSA. Hybrid PSOGSA is a technique combining Particle Swarm Optimization and the Gravitational Search Algorithm, applied to the feeding of the smart CCAA to enhance its performance. Phase excitation of the array with a large number of elements is suggested for different scenarios based on hybrid PSOGSA and other algorithms, such as PSO and GSA, in a High Altitude Platform (HAP) application. Simulation results show that hybrid PSOGSA achieves better performance than the other optimizers for excitation of the smart CCAA in all scenarios, for parameters such as the normalized array factor, fitness value, convergence rate, and directivity.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_37-Improvement_of_Adaptive_Smart_Concentric_Circular_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Improving the Quality of Present MAC Protocols for LECIM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070636</link>
        <id>10.14569/IJACSA.2016.070636</id>
        <doi>10.14569/IJACSA.2016.070636</doi>
        <lastModDate>2016-07-01T14:32:54.6070000+00:00</lastModDate>
        
        <creator>Mohammad Arif Siddiqui</creator>
        
        <creator>Shah Murtaza Rashid Al Masud</creator>
        
        <creator>Mohammed Basit Kamal</creator>
        
        <subject>wireless networking; LECIM; IEEE P802.15; WPAN; MAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Wireless networking is growing quickly in the field of communication technology due to its usefulness and wide range of applications. To make such systems more effective for users, low energy consumption, security, reliability, and low cost must be considered under all circumstances. Low-energy wireless is especially required because sensors are frequently located where mains power and network infrastructure are not reliably available. The recently developed Low Energy Critical Infrastructure Monitoring (LECIM) has vast applications, including water leak detection, bridge/structural integrity monitoring, oil &amp; gas pipeline monitoring, electric plant monitoring, public transport tracking, cargo container monitoring, railroad condition monitoring, traffic congestion monitoring, border surveillance, medical alerts for at-risk populations, and many more. LECIM was proposed by Task Group 4k under IEEE P802.15 WPAN. Although many issues related to its quality are involved, several Media Access Control (MAC) protocols with different objectives have been proposed for LECIM. In this research paper, issues related to energy consumption and wastage in LECIM systems, energy-saving mechanisms, and relevant energy-conscious MAC protocols are briefly studied and analyzed. The Science Direct, Elsevier, Springer, IEEE Xplore, Google Scholar, and Wiley Digital Library databases were used to search for articles on existing MAC protocols well suited to LECIM systems. Finally, some ideas are proposed towards developing an energy-efficient MAC protocol for LECIM applications in order to satisfy the major quality requirements of LECIM.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_36-Towards_Improving_the_Quality_of_Present_MAC_Protocols.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Fingerprint Gender Classification Algorithm Using Fingerprint Global Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070635</link>
        <id>10.14569/IJACSA.2016.070635</id>
        <doi>10.14569/IJACSA.2016.070635</doi>
        <lastModDate>2016-07-01T14:32:54.5770000+00:00</lastModDate>
        
        <creator>S. F. Abdullah</creator>
        
        <creator>A.F.N.A. Rahman</creator>
        
        <creator>Z.A.Abas</creator>
        
        <creator>W.H.M Saad</creator>
        
        <subject>fingerprint; gender classification; global features; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In the forensic world, the process of identifying and calculating fingerprint features is complex and time-consuming when done manually using a fingerprint laboratory magnifying glass. This study is meant to enhance the manual forensic method by proposing a new algorithm for fingerprint global feature extraction for gender classification. The results show that the new algorithm gives acceptable readings, with a classification rate above 70%, when compared to the manual method. This algorithm is highly recommended for extracting fingerprint global features for the gender classification process.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_35-Development_of_a_Fingerprint_Gender_Classification_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Extraction from Metacognitive Reading Strategies Data Using Induction Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070634</link>
        <id>10.14569/IJACSA.2016.070634</id>
        <doi>10.14569/IJACSA.2016.070634</doi>
        <lastModDate>2016-07-01T14:32:54.5300000+00:00</lastModDate>
        
        <creator>Christopher Taylor</creator>
        
        <creator>Arun Kulkarni</creator>
        
        <creator>Kouider Mokhtar</creator>
        
        <subject>Metacognitive Reading Strategies; Classification; Induction Tree; Rule Extraction; Fuzzy Inference System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The assessment of students’ metacognitive knowledge and skills about reading is critical in determining their ability to read academic texts with comprehension. In this paper, we used induction trees to extract metacognitive knowledge about reading from a reading strategies dataset obtained from a group of 1636 undergraduate college students. Using the C4.5 algorithm, we constructed decision trees, which helped us classify participants into three groups based on their metacognitive strategy awareness levels, consisting of global, problem-solving, and support reading strategies. We extracted rules from these decision trees, and in order to evaluate the accuracy of the extracted rules, we built a fuzzy inference system (FIS) with the extracted rules as a rule base and classified the test dataset with the FIS. The extracted rules are evaluated using measures such as overall efficiency and the Kappa coefficient.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_34-Knowledge_Extraction_from_Metacognitive_Reading_Strategies_Data_Using_Induction_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Arabic Text Using KNN Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070633</link>
        <id>10.14569/IJACSA.2016.070633</id>
        <doi>10.14569/IJACSA.2016.070633</doi>
        <lastModDate>2016-07-01T14:32:54.4970000+00:00</lastModDate>
        
        <creator>Amer Al-Badarenah</creator>
        
        <creator>Emad Al-Shawakfa</creator>
        
        <creator>Khaleel Al-Rababah</creator>
        
        <creator>Safwan Shatnawi</creator>
        
        <creator>Basel Bani-Ismail</creator>
        
        <subject>categorization; Arabic; KNN; stemming; cross validation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>With the tremendous number of electronic documents available, there is a great need to classify documents automatically. Classification is the task of assigning objects (images, text documents, etc.) to one of several predefined categories. Since the selection of important terms is vital to classifier performance, feature-set reduction techniques such as stop-word removal, stemming, and term thresholding were used in this paper. Three term-selection techniques are applied to a corpus of 1000 documents that fall into five categories. A comparison study is performed to find the effect of using full-word, stem, and root term indexing methods. K-nearest-neighbors classifiers were used in this study. The averages over all folds for Recall, Precision, Fallout, and Error Rate were calculated. The results of the experiments carried out on the dataset show the importance of using k-fold testing, since it presents the variations of the averages of recall, precision, fallout, and error rate for each category over the 10 folds.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_33-Classifying_Arabic_Text_Using_KNN_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Retrieval Based On Local Binary Pattern and Its Variants: A Comprehensive Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070632</link>
        <id>10.14569/IJACSA.2016.070632</id>
        <doi>10.14569/IJACSA.2016.070632</doi>
        <lastModDate>2016-07-01T14:32:54.4830000+00:00</lastModDate>
        
        <creator>Phan Khoi</creator>
        
        <creator>Lam Huu Thien</creator>
        
        <creator>Vo Hoai Viet</creator>
        
        <subject>Face Retrieval; LBP; PLBP; Grid LBP; LSH</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Face retrieval (FR) is one of the specific fields in content-based image retrieval (CBIR). Its aim is to search for relevant faces in a large database based on the contents of the images rather than the metadata. It has many applications in important areas such as face searching, forensics, and identification. In this paper, we experimentally evaluate face retrieval based on the Local Binary Pattern (LBP) and its variants: the Rotation Invariant Local Binary Pattern (RILBP) and the Pyramid of Local Binary Patterns (PLBP). We also use a grid LBP-based operator, which divides an image into 6&#215;7 sub-regions and then concatenates the LBP feature vectors from each of them into a spatially enhanced feature histogram. These features were first tested on three frontal face datasets: The Database of Faces (TDF), Caltech Faces 1999 (CF1999), and the combination of the two (CF). Good results on these datasets encouraged us to conduct tests on Labeled Faces in the Wild (LFW), where the images were taken under real-world conditions. Mean average precision (MAP) was used to measure the performance of the system. We carry out the experiments in two main stages, indexing and searching, with the use of k-fold cross-validation. We further boost the system by using Locality Sensitive Hashing (LSH) and also evaluate the impact of LSH on the searching stage. The experimental results show that LSH is effective for face searching and that LBP is a robust feature for frontal face retrieval.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_32-Face_Retrieval_Based_on_Local_Binary_Pattern_and_Its_Variants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Self-organizing Location and Mobility-Aware Route Optimization Protocol for Bluetooth Wireless</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070631</link>
        <id>10.14569/IJACSA.2016.070631</id>
        <doi>10.14569/IJACSA.2016.070631</doi>
        <lastModDate>2016-07-01T14:32:54.4500000+00:00</lastModDate>
        
        <creator>Sheikh Tahir Bakhsh</creator>
        
        <subject>Bluetooth; Hop count; Mobility; Routing; Resource optimization; Scatternet; Self-healings</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Bluetooth allows multi-hop ad-hoc networks in which multiple interconnected piconets in a common area form a scatternet. Routing is one of the technical issues in a scatternet because nodes can arrive and leave at arbitrary times; hence, node mobility has a serious impact on network performance. A Bluetooth network is built in an ad-hoc fashion; therefore, a fully connected network is not guaranteed. Moreover, a partially connected network may not find the shortest route between source and destination. In this paper, a new Self-organizing Location and Mobility-aware Route Optimization (LMRO) protocol is proposed for Bluetooth scatternets, based on node mobility and location. The proposed protocol finds the shortest route between the source and destination nodes using node location information. In addition, the proposed protocol guarantees network connectivity by executing a self-organizing procedure for a damaged route based on signal strength. The LMRO protocol predicts node mobility through the signal strength and activates an alternate link before the main link breaks. Simulation results show that the LMRO protocol reduces the average hop count by 20%-50% and increases network throughput by 30%-40% compared to existing protocols.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_31-A_Self_organizing_Location_and_Mobility_Aware_Route_Optimization_Protocol_for_Bluetooth_Wireless.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Denial of Service Attack in IPv6 Duplicate Address Detection Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070630</link>
        <id>10.14569/IJACSA.2016.070630</id>
        <doi>10.14569/IJACSA.2016.070630</doi>
        <lastModDate>2016-07-01T14:32:54.4200000+00:00</lastModDate>
        
        <creator>Shafiq Ul Rehman</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <subject>Duplicate Address Detection; Denial of Service; IPv6; Address autoconfiguration; Security; Internet Protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>IPv6 was designed to replace the existing Internet Protocol, that is, IPv4. The main advantage of IPv6 over IPv4 is the vastness of its address space. In addition, various improvements were brought to IPv6 to address the drawbacks of IPv4. Nevertheless, as with any new technology, IPv6 suffers from various security vulnerabilities. One of the vulnerabilities discovered allows a Denial of Service attack using the Duplicate Address Detection mechanism. In order to study and analyse this attack, an IPv6 security testbed was designed and implemented. This paper presents our experience with the deployment and operation of the testbed, and a discussion of the outcome and data gathered from carrying out DoS attacks in this testbed.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_30-Denial_of_Service_Attack_in_IPv6_Duplicate_Address_Detection_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Modeling of RF Power Amplifiers with Radial Basis Function Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070629</link>
        <id>10.14569/IJACSA.2016.070629</id>
        <doi>10.14569/IJACSA.2016.070629</doi>
        <lastModDate>2016-07-01T14:32:54.3900000+00:00</lastModDate>
        
        <creator>Ali Reza Zirak</creator>
        
        <creator>Sobhan Roshani</creator>
        
        <subject>Amplifier model; artificial neural network (ANN); class-F amplifier; radial basis function; RF amplifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>A radial basis function (RBF) artificial neural network model for a designed high-efficiency radio frequency class-F power amplifier (PA) is presented in this paper. The presented amplifier is designed at a 1.8 GHz operating frequency with 12 dB of gain and a 36 dBm 1-dB output compression point. The obtained power added efficiency (PAE) for the presented PA is 76% at 26 dBm input power. The proposed RBF model uses the input power and DC power of the PA as input variables and considers the output power as the output variable. The presented RBF network models the designed class-F PA as a block, which could be applied in circuit design, and the approach could be used to model any RF power amplifier. The obtained results show a good agreement between the real data and the values predicted by the RBF model. The results clearly show that the presented RBF network is more precise than a multilayer perceptron (MLP) model. According to the results, improvements of more than 84% in MAE and 92% in RMSE are achieved, respectively</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_29-Design_and_Modeling_of_RF_Power_Amplifiers_with_Radial_Basis_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Memetic Algorithm for the Capacitated Location-Routing Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070628</link>
        <id>10.14569/IJACSA.2016.070628</id>
        <doi>10.14569/IJACSA.2016.070628</doi>
        <lastModDate>2016-07-01T14:32:54.3730000+00:00</lastModDate>
        
        <creator>Laila KECHMANE</creator>
        
        <creator>Benayad NSIRI</creator>
        
        <creator>Azeddine BAALAL</creator>
        
        <subject>hybrid genetic algorithm; capacitated location-routing problem; location; assigning; vehicle routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In this paper, a hybrid genetic algorithm is proposed to solve a Capacitated Location-Routing Problem. The objective is to minimize the total cost of distribution in a network composed of depots and customers; both depots and vehicles have limited capacities, each depot has a homogeneous vehicle fleet, and customers’ demands are known and must be satisfied. Solving this problem involves making strategic decisions, such as the location of depots, as well as tactical and operational decisions, which include assigning customers to the opened depots and organizing the vehicle routing. To evaluate the performance of the proposed algorithm, its results are compared to those obtained by a greedy randomized adaptive search procedure; computational results show that the algorithm gives good-quality solutions</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_28-A_Memetic_Algorithm_for_the_Capacitated_Location_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BF-PSO-TS: Hybrid Heuristic Algorithms for Optimizing Task Scheduling on Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070626</link>
        <id>10.14569/IJACSA.2016.070626</id>
        <doi>10.14569/IJACSA.2016.070626</doi>
        <lastModDate>2016-07-01T14:32:54.3400000+00:00</lastModDate>
        
        <creator>Hussin M. Alkhashai</creator>
        
        <creator>Fatma A. Omara</creator>
        
        <subject>Cloud computing; task scheduling; CloudSim; Particle Swarm Optimization; Tabu search; Best-Fit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Task scheduling is a major problem in Cloud computing because the cloud provider has to serve many users, and a good scheduling algorithm helps in the proper and efficient utilization of resources. Task scheduling is therefore considered one of the major issues in Cloud computing systems. The objective of this paper is to assign tasks to multiple computing resources such that the total cost of execution is minimized and the load is shared between these computing resources. To this end, two hybrid algorithms based on Particle Swarm Optimization (PSO) have been introduced to schedule the tasks: Best-Fit-PSO (BFPSO) and PSO-Tabu Search (PSOTS). In the BFPSO algorithm, the Best-Fit (BF) algorithm has been merged into the PSO algorithm to improve performance. The main principle of BFPSO is that the BF algorithm is used to generate the initial population of the standard PSO algorithm instead of initializing it randomly. In the proposed PSOTS algorithm, Tabu Search (TS) has been used to improve the local search by avoiding the trap of local optimality which could occur with the standard PSO algorithm. The two proposed algorithms (i.e., BFPSO and PSOTS) have been implemented using CloudSim and evaluated against the standard PSO algorithm on five problems with different numbers of independent tasks and resources. The performance parameters considered are the execution time (makespan), cost, and resource utilization. The implementation results prove that the proposed hybrid algorithms (i.e., BFPSO and PSOTS) outperform the standard PSO algorithm</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_26-BF_PSO_TS_Hybrid_Heuristic_Algorithms_for_Optimizing_Task_Schedulingon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Integrating Data Mining with Saudi Universities Database Systems: Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070627</link>
        <id>10.14569/IJACSA.2016.070627</id>
        <doi>10.14569/IJACSA.2016.070627</doi>
        <lastModDate>2016-07-01T14:32:54.3400000+00:00</lastModDate>
        
        <creator>Mohamed Osman Hegazi</creator>
        
        <creator>Mohammad Alhawarat</creator>
        
        <creator>Anwer Hilal</creator>
        
        <subject>Data Mining; Database; Predict; Integration; Association rule mining; Neural networks; Decision tree; Educational Data Mining (EDM); University system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>This paper presents an approach for integrating data mining algorithms within a Saudi university’s database system, namely Prince Sattam Bin Abdulaziz University (PSAU), as a case study. The approach is based on a bottom-up methodology; it starts by providing a data mining application that represents a solution to one of the problems facing Saudi universities’ systems. After that, it integrates and implements the solution inside the university’s database system. This process is then repeated to enhance the university system by providing data mining tools that help different parties, especially decision makers, to make certain decisions. The paper presents a case study that includes analyzing and predicting student withdrawal from courses at PSAU using association rule mining, neural networks, and decision trees. It then provides a conceptual and practical approach for integrating the resulting application within the university’s database system. The experiment shows that this approach can be used as a framework for integrating data mining techniques within Saudi universities’ database systems. The paper concludes that mining universities’ data can be deployed as a computer system (an intelligent university system), and that data mining algorithms can be adapted to any database system, regardless of whether that system is new, existing, or legacy. Moreover, data mining algorithms can be a solution for some educational problems, in addition to providing information for decision makers and users</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_27-An_Approach_for_Integrating_Data_Mining_with_Saudi_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Access Control Policy of a Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070625</link>
        <id>10.14569/IJACSA.2016.070625</id>
        <doi>10.14569/IJACSA.2016.070625</doi>
        <lastModDate>2016-07-01T14:32:54.3100000+00:00</lastModDate>
        
        <creator>Chaimaa Belbergui</creator>
        
        <creator>Najib Elkamoun</creator>
        
        <creator>Rachid Hilal</creator>
        
        <subject>social network; Facebook; access control; OrBAC; study of coherence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Social networks bring together users in a virtual platform and offer them the ability to share, within the community, personal and professional information, photos, etc., which are sometimes sensitive. Although the majority of these networks provide access control mechanisms to their users (to manage who accesses which information), privacy settings are limited and do not respond to all users&#39; needs. Hence, published information remains vulnerable to illegal access. In this paper, the access control policy of the social network &quot;Facebook&quot; is analyzed in depth, starting with its modeling with the &quot;Organization Role Based Access Control&quot; (OrBAC) model, moving to the simulation of the policy with an appropriate simulator to test its coherence, and ending with a discussion of the analysis results, which shows the gap between the access control management options offered by Facebook and the real requirements of users in the same context. The extracted conclusions prove the need to develop a new access control model that meets most of these requirements, which will be the subject of forthcoming work</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_25-Modeling_Access_Control_Policy_of_a_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Service-Oriented Architecture Processes in Process of Automatic Services Composition Using Memory and QF, QWV Factor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/</link>
        <id></id>
        <doi></doi>
        <lastModDate>2016-07-01T14:32:54.2470000+00:00</lastModDate>
        
        <creator>Behnaz Nahvi</creator>
        
        <creator>Jafar Habibi</creator>
        
        <subject>Service-oriented architecture; process management; multi-agent systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The application of service-oriented architecture in organizations to implement complicated workflows electronically using composite web services has become widespread, raising challenging research issues. One of these issues is constructing composite web services from workflows composed of existing web services. Selecting a web service for each workflow activity while fulfilling users’ conditions is still regarded as a major challenge. In fact, selecting a web service from among many with identical functionality is a critical task which generally depends on a composite evaluation of QoS. Previously proposed approaches do not consider exchange restrictions on the composition process, the internal processes of the architecture, or previous experience, and they ignore the fact that the value of many QoS attributes depends on the time of execution. Selecting web services based on QoS alone does not produce an optimal composite web service. Thus, no solution has yet been proposed that performs the composition process automatically or semi-automatically in an optimal manner.
Objective: to identify the existing concerns in service composition, to design a framework that provides a solution considering all of these concerns, and finally to perform tests in order to examine and evaluate the proposed framework.
Method: in the proposed framework, the elements affecting the management of service-oriented architecture processes are organized according to a logical procedure. The framework identifies the processes of this architectural style based on the requirements of service-oriented architecture process management and on the qualitative features of this area. In addition to using existing data in the problem area, the existing structures and patterns of software architecture are also utilized, and management processes in service-oriented architecture are improved based on how well the available requirements are met. QWV are weighted dynamic quality features which indicate the priorities of users, and QF is the quality factor of a service at execution time, which is weighted in the framework. These factors are used to construct the composite web service. Multi-agent computing is known as a natural computational system for automating the interaction between services. The agents in multi-agent systems can be used as the main reliable control mechanism, and they usually use data exchange to accelerate their evaluations. To identify all concerns in the solution space, many aspects should be examined; to this end, classes of agents are defined which investigate these aspects in the form of four components using repository data.
Results: the proposed framework was simulated with the Arena software, and the results showed that the framework can be useful for the automatic generation of needed services while meeting all concerns at the same time. The results support that using agents in the model increased responsiveness, user satisfaction, and system efficiency</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_24-Improving_Service_Oriented_Architecture_Processes_in_Process_of_Automatic_Services_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Robot Path-Planning Problem for a Heavy Traffic Control Application: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070623</link>
        <id>10.14569/IJACSA.2016.070623</id>
        <doi>10.14569/IJACSA.2016.070623</doi>
        <lastModDate>2016-07-01T14:32:54.1400000+00:00</lastModDate>
        
        <creator>Ebtehal Turki Saho Alotaibi</creator>
        
        <creator>Hisham Al-Rawi</creator>
        
        <subject>Heavy traffic control; Multi robots; Coupled Path Planning; Decoupled Path Planning; Collision Avoidance; Heuristics; RRT; Push and Swap; Push and Rotate; Bibox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>This survey looks at methods used to solve multi-autonomous-vehicle path-planning for a heavy traffic control application in cities. Formally, the problem consists of a graph and a set of robots. Each robot has to reach its destination in the minimum time and number of movements, considering the obstacles and the other robots’ paths; hence, the problem is NP-hard. The study found that decoupled centralised approaches are the most relevant for the autonomous vehicle path-planning problem, for three reasons: (1) a city is a large environment, and coupled centralised approaches scale weakly; (2) the overhead of a coupled decentralised approach to achieve the global optimum affects the time and memory of the other robots, which is not important in a city configuration; and (3) coupled approaches assume that the number of robots is defined before they start to find paths and resolve collisions, while in a city, any car can start at any time; hence, each car should work individually and resolve collisions as they arise. In addition, the study reviewed four decoupled centralised techniques for solving the problem: multi-robot path-planning rapidly exploring random tree (MRRRT), push and swap (PAS), push and rotate (PAR), and the Bibox algorithm. The experiments showed that MRRRT is the best for exploring any search space and optimizing the solution. On the other hand, PAS, PAR, and Bibox are better at providing a complete solution to the problem and resolving collisions in significantly less time; the analysis, however, shows that a wider class of solvable instances is excluded from the PAS and PAR domain. In addition, Bibox solves a smaller class than that solved by PAS and PAR, in less time in the worst case and with a shorter path than PAS and PAR</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_23-Multi_Robot_Path_Planning_Problem_for_a_Heavy_Traffic_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Audio Watermarking Technique Operates in MDCT Domain based on Perceptual Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070622</link>
        <id>10.14569/IJACSA.2016.070622</id>
        <doi>10.14569/IJACSA.2016.070622</doi>
        <lastModDate>2016-07-01T14:32:54.0470000+00:00</lastModDate>
        
        <creator>Maha Bellaaj</creator>
        
        <creator>Kais Ouni</creator>
        
        <subject>digital audio watermarking; Hamming; LSB; psychoacoustic models 1 and 2; MDCT; DCT; FFT; SNR; ODG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>This paper presents a digital audio watermarking technique operating in the frequency domain, with two variants. The technique uses the Modified Discrete Cosine Transform (MDCT) to move to the frequency domain. To ensure greater inaudibility, we exploit the properties of psychoacoustic model 1 (PMH1) of the MPEG-1 Layer I encoder in the first variant, and those of psychoacoustic model 2 (PMH2) of the MPEG-1 Layer III encoder in the second, to find the places for inserting the watermark. In both variants of the technique, the bits of the mark are duplicated to increase the insertion capacity and then inserted into the least significant bit (LSB). For more reliable detection, we apply an error-correcting code (Hamming) to the mark.
Next, to analyze the performance of the proposed technique, we perform two comparative studies. In the first, we compare the proposed digital audio watermarking technique, with its two variants, against the techniques of Luigi Rosa and Rolf Brigola, whose M-files we downloaded. The technique developed by Luigi Rosa operates in the frequency domain using the Discrete Cosine Transform (DCT), while that proposed by Rolf Brigola uses the Fast Fourier Transform (FFT). We studied the robustness of each technique against different types of attacks, such as MP3 compression/decompression and the StirMark audio attack, and we evaluated inaudibility using an objective approach, calculating the SNR and the ODG scores given by PEAQ. The robustness of the proposed technique against different types of attacks is shown. In the second study, we demonstrate the contribution of the proposed technique by comparing its data payload, imperceptibility, and robustness against the MP3 attack with other existing techniques in the literature</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_22-A_Robust_Audio_Watermarking_Technique_Operates_in_MDCT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flying Ad-Hoc Networks: Routing Protocols, Mobility Models, Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070621</link>
        <id>10.14569/IJACSA.2016.070621</id>
        <doi>10.14569/IJACSA.2016.070621</doi>
        <lastModDate>2016-07-01T14:32:53.9370000+00:00</lastModDate>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>“Nour Alhuda” Damer</creator>
        
        <subject>FANET; Ad-hoc Network; UAVs; MANET;  Mobility Model; Networking Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>A Flying Ad-Hoc Network (FANET) is a group of Unmanned Air Vehicles (UAVs) that complete their work without human intervention. There are some problems in this kind of network. The first is communication between UAVs; various routing protocols, classified into three categories (static, proactive, and reactive), have been introduced to solve this problem. The second problem is the network design, which depends on network mobility, that is, the process of cooperation and collaboration between UAVs; FANET mobility models are introduced to solve this problem. A mobility model defines the path, speed variations, and positions of the UAVs. Today, the Random Waypoint model is used as an artificial mobility model in the majority of simulation scenarios. However, the Random Waypoint model is not appropriate for UAVs, because UAVs do not change their direction and speed abruptly. For this reason, we consider a more realistic model, the Semi-Random Circular Movement (SRCM) mobility model. We also consider other mobility models: the Mission Plan-Based (MPB) mobility model, the Pheromone-Based model, and the Paparazzi Mobility Model (PPRZM). This paper presents and discusses the main routing protocols and the main mobility models used to address communication, cooperation, and collaboration in FANETs</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_21-Flying_Ad-Hoc_Networks_Routing_Protocols_Mobility_Models_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation and Comparison of Binary Trie base IP Lookup Algorithms with Real Edge Router IP Prefix Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070620</link>
        <id>10.14569/IJACSA.2016.070620</id>
        <doi>10.14569/IJACSA.2016.070620</doi>
        <lastModDate>2016-07-01T14:32:53.9200000+00:00</lastModDate>
        
        <creator>Alireza Shirmarz</creator>
        
        <creator>Masoud Sabaei</creator>
        
        <creator>Mojtaba hosseini</creator>
        
        <subject>Binary Trie; IP-Lookup; Running Time; Memory Usage; Complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The Internet is comprised of routers that forward packets towards their destinations. IP routing lookup requires computing the best-matching prefix, and the main functionality of a router is finding the appropriate path for each packet. There are many algorithms for IP lookup with different speed, complexity, and memory usage. In this paper, three binary trie algorithms are considered for performance analysis: the priority trie, the disjoint binary trie, and the binary trie. We consider three parameters for comparison: time, memory, and algorithmic complexity. For the performance analysis, we implement and run the algorithms with real lookup tables used in an edge router</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_20-Evaluation_and_Comparison_of_binary_trie_base_IP_Lookup_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost-effective and Green Manufacturing Substrate Integrated Waveguide (SIW) BPF for Wireless Sensor Network Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070619</link>
        <id>10.14569/IJACSA.2016.070619</id>
        <doi>10.14569/IJACSA.2016.070619</doi>
        <lastModDate>2016-07-01T14:32:53.8730000+00:00</lastModDate>
        
        <creator>Hiba Abdel Ali</creator>
        
        <creator>Rachida Bedira</creator>
        
        <creator>Hichem Trabelsi</creator>
        
        <creator>Ali Gharsallah</creator>
        
        <subject>Substrate Integrated Waveguide; Band pass filter; Wireless sensor network; green material technology; paper substrate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>This paper presents a comparison between an innovative technique for implementing a substrate integrated waveguide (SIW) band pass filter centered at 4 GHz and conventional PCB results. A two-pole filter is designed, simulated, and fabricated. The novel fabrication process is based on green manufacturing, in which a physically etched aluminum metal layer is glued onto a paper substrate. The band pass filter is composed of two adjacent, symmetric resonant cavities separated by a coupling iris. The same topology is implemented with a conventional substrate and PCB technology to validate the results of the new technique. The achieved bandwidth is almost four times wider than that obtained with PCB technology developed for UWB applications. The proposed technique retains the advantages of conventional SIW technology, including low profile, compact size, complete shielding, easy fabrication, low cost for mass production, and convenient integration with planar waveguide components, including transitions. Moreover, the flexibility of paper offers the possibility of fabricating conformally shaped SIW components, which is not possible with the rigid substrates of conventional PCB technology, in addition to the advantages of eco-friendly, renewable, lightweight, and ultra-low-cost materials</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_19-Cost_effective_and_Green_Manufacturing_Substrate_Integrated_Waveguide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multicast Routing Problem Using Tree-Based Cuckoo Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070618</link>
        <id>10.14569/IJACSA.2016.070618</id>
        <doi>10.14569/IJACSA.2016.070618</doi>
        <lastModDate>2016-07-01T14:32:53.8430000+00:00</lastModDate>
        
        <creator>Mahmood Sardarpour</creator>
        
        <creator>Hasan Hosseinzadeh</creator>
        
        <creator>Mehdi Effatparvar</creator>
        
        <subject>Multicast tree; routing; Cuckoo Algorithm; optimal function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The problem of QoS multicast routing is to find a multicast tree with the least cost that meets constraints such as bandwidth, delay, and loss rate. This is an NP-complete problem. To solve the multicast routing problem, the entire routes from the source node to every destination node are often identified, and the routes are then integrated and merged into a single multicast tree. However, such methods are slow and complicated. The present paper introduces a new tree-based optimization method to overcome these weaknesses. The recommended method directly optimizes the multicast tree: a tree-based topology including several spanning trees is created, and the trees are combined two by two. For this purpose, the Cuckoo Algorithm is used, which is shown to converge well and to compute quickly. Simulations conducted on different types of network topologies show that it is a practical and effective algorithm</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_18-Multicast_Routing_Problem_Using_Tree_Based_Cuckoo_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compressed Sensing of Multi-Channel EEG Signals: Quantitative and Qualitative Evaluation with Speller Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070617</link>
        <id>10.14569/IJACSA.2016.070617</id>
        <doi>10.14569/IJACSA.2016.070617</doi>
        <lastModDate>2016-07-01T14:32:53.8100000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <subject>Compressed sensed; EEG; Brain computer interface; P300; Speller Paradigm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In this paper, the possibility of compressed sensing of electroencephalogram (EEG) signals based on specific dictionaries is presented. Several types of projection matrices (matrices with random i.i.d. elements sampled from the Gaussian or Bernoulli distributions, and matrices optimized for the particular dictionary used in reconstruction by means of appropriate algorithms) have been compared. The results are discussed in terms of the reconstruction error and the classification rates of the speller paradigm</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_17-Compressed_Sensing_of_Multi_Channel_EEG_Signals_Quantitative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of SQL Injection Using a Genetic Fuzzy Classifier System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070616</link>
        <id>10.14569/IJACSA.2016.070616</id>
        <doi>10.14569/IJACSA.2016.070616</doi>
        <lastModDate>2016-07-01T14:32:53.7800000+00:00</lastModDate>
        
        <creator>Christine Basta</creator>
        
        <creator>Ahmed elfatatry</creator>
        
        <creator>Saad Darwish</creator>
        
        <subject>SQL injection; web security; genetic fuzzy system; fuzzy rule learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>SQL Injection (SQLI) is one of the most popular vulnerabilities of web applications. The consequences of an SQL injection attack include the possibility of stealing sensitive information or bypassing authentication procedures. SQL injection attacks have different forms and variations. One difficulty in detecting malicious attacks is that such attacks do not have a specific pattern. A new fuzzy rule-based classification system (FRBCS) can tackle the requirements of the current stage of security measures. This paper proposes a genetic fuzzy system for the detection of SQLI in which not only accuracy is a priority, but also the learning and flexibility of the obtained rules. To create rules with high generalization capabilities, our algorithm builds on initial rules, data-dependent parameters, and an enhancing function that modifies the rule evaluation measures. The enhancing function helps to assess the candidate rules more effectively based on the decision subspace. The proposed system has been evaluated using a number of well-known data sets. Results show a significant enhancement in the detection procedure</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_16-Detection_of_SQL_Injection_Using_a_Genetic_Fuzzy_Classifier_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficiency in Motion: The New Era of E-Tickets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070615</link>
        <id>10.14569/IJACSA.2016.070615</id>
        <doi>10.14569/IJACSA.2016.070615</doi>
        <lastModDate>2016-07-01T14:32:53.7330000+00:00</lastModDate>
        
        <creator>Fan Wu</creator>
        
        <creator>Dwayne Clarke</creator>
        
        <creator>Jian Jiang</creator>
        
        <creator>Adontavius Turner</creator>
        
        <subject>General; Mobile Application; Mobile Device; E-Ticket</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The development of mobile applications has played an important role in technology. Due to recent advances in technology, mobile applications are attracting more attention across the world, and mobile application development is a very active field of research, with ongoing research and development in both industry and academia. In this paper, we present the design and implementation of a mobile application that creates an electronic ticket, or e-ticket, for campus police. The goal of this mobile application is to make the ticketing process much faster and easier for campus police by using mobile devices. Furthermore, the results indicate an increase in performance and productivity for campus police by sending and retrieving data and printing tickets and permits for students, limiting the need for paper-based ticketing</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_15-Efficiency_in_Motion_the_New_Era_to_E_Tickets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automatic Evaluation for Online Machine Translation: Holy Quran Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070614</link>
        <id>10.14569/IJACSA.2016.070614</id>
        <doi>10.14569/IJACSA.2016.070614</doi>
        <lastModDate>2016-07-01T14:32:53.7170000+00:00</lastModDate>
        
        <creator>Emad AlSukhni</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Izzat M. Alsmadi</creator>
        
        <subject>machine translation; language automatic evaluation; Statistical Machine Translation; Quran machine translation; Arabic MT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The number of Free Online Machine Translation (FOMT) users has witnessed spectacular growth since 1994. FOMT systems change the aspects of machine translation (MT) and the mass of translated materials across a wide range of natural languages and machine translation systems. Hundreds of millions of people use these FOMT systems to translate the Holy Quran (Al-Qur'an) verses from the Arabic language to other natural languages, and vice versa. In this study, an automatic evaluation of the use of FOMT systems to translate Arabic Quranic text into English is conducted. Two well-known FOMT systems (Google and Bing Translators) are chosen to be evaluated in this study using a metric called Assessment of Text Essential Characteristics (ATEC). The ATEC metric is one of the automatic evaluation metrics for machine translation systems. ATEC scores the correlation between the output of a machine translation system and a professional human reference translation based on word choice, word order and the similarity between the MT output and the human reference translation. Extensive evaluation has been conducted on the two FOMT systems in translating Arabic Quranic text into English. This evaluation shows that Google Translator performs better than Bing Translator in translating Quranic text. It is noticed that the average ATEC score does not exceed 41%, which indicates that FOMT systems are ineffective in translating Quranic texts accurately</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_14-An_Automatic_Evaluation_for_Online_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of the Routing Pattern of the Backbone Data Transmission Network for the Automation of the Krasnoyarsk Railway</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070613</link>
        <id>10.14569/IJACSA.2016.070613</id>
        <doi>10.14569/IJACSA.2016.070613</doi>
        <lastModDate>2016-07-01T14:32:53.6700000+00:00</lastModDate>
        
        <creator>Sergey Victorovich Makarov</creator>
        
        <creator>Faridun Abdulnazarov</creator>
        
        <creator>Omurbek Anarbekov</creator>
        
        <subject>Router; Topology; IS-IS protocol; OSPF protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The paper deals with the data transmission network of the Krasnoyarsk Railway, its structure, the topology of data transmission and the routing protocol that supports its operation, as well as the specifics of data transmission networking. The combination of the railway automation applications and the data transmission network makes up the automation systems, making it possible to improve performance, increase the freight traffic volume, and improve the quality of passenger service. The objective of this paper is to study the existing data transmission network of the Krasnoyarsk Railway and to develop ways of modernizing it, in order to improve the reliability of the network and of the automated systems that use it. It was found that the IS-IS and OSPF routing protocols have many differences, primarily due to the fact that the IS-IS protocol does not use IP addresses as paths. On the one hand, this makes it possible to use the IS-IS protocol in IPv6 networks, whereas OSPF version 2 does not provide this opportunity; OSPF version 3 was therefore developed to solve this problem. On the other hand, when using IPv4, routing configuration by means of the IS-IS protocol implies the need to study a significant volume of information and use unusual methods of routing configuration</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_13-The_Development_of_the_Routing_Pattern_of_Backbone_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Algorithm Based on Firefly Algorithm and Differential Evolution for Global Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070612</link>
        <id>10.14569/IJACSA.2016.070612</id>
        <doi>10.14569/IJACSA.2016.070612</doi>
        <lastModDate>2016-07-01T14:32:53.6400000+00:00</lastModDate>
        
        <creator>S. Sarbazfard</creator>
        
        <creator>A. Jafarian</creator>
        
        <subject>Differential Evolution; Firefly Algorithm; Global optimization; Hybrid algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In this paper, a new and effective combination of two metaheuristic algorithms, namely the Firefly Algorithm and Differential Evolution, is proposed. This hybridization, called HFADE, consists of two phases: Differential Evolution (DE) and the Firefly Algorithm (FA). The Firefly Algorithm is a nature-inspired algorithm rooted in the light-intensity attraction behavior of fireflies in nature. Differential Evolution is an Evolutionary Algorithm that uses evolutionary operators such as selection, recombination and mutation. FA and DE are effective and powerful algorithms on their own, but FA depends on random search directions, which slows its convergence toward the best solution, and DE needs many iterations to find a proper solution. As a result, the proposed method has been designed to compensate for each algorithm's deficiencies so as to make them more suitable for optimization in real-world domains. To obtain the required results, experiments on a set of benchmark functions were performed, and the findings showed that HFADE is a preferable and effective method for solving high-dimensional functions</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_12-A_Hybrid_Algorithm_Based_on_Firefly_Algorithm_and_Differential_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Artificial Neural Networks Approach for Diagnosing Diabetes Disease Type II</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070611</link>
        <id>10.14569/IJACSA.2016.070611</id>
        <doi>10.14569/IJACSA.2016.070611</doi>
        <lastModDate>2016-07-01T14:32:53.6100000+00:00</lastModDate>
        
        <creator>Zahed Soltani</creator>
        
        <creator>Ahmad Jafarian</creator>
        
        <subject>diabetes type 2; probabilistic artificial neural networks; data mining; mean squares error; Naive Bayes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Diabetes is one of the major health problems as it causes physical disability and even death in people. Therefore, to diagnose this dangerous disease better, methods with a minimum error rate must be used. Different models of artificial neural networks have the capability to diagnose this disease with minimum error. Hence, in this paper we have used probabilistic artificial neural networks in an approach to diagnose diabetes disease type II. We took advantage of the Pima Indians Diabetes dataset with 768 samples in our experiments. Using this dataset, a PNN is implemented in MATLAB. Furthermore, maximizing the accuracy of diagnosing diabetes disease type II in training and testing on the Pima Indians Diabetes dataset is the performance measure in this paper. Finally, we concluded that the training accuracy and testing accuracy of the proposed method are 89.56% and 81.49%, respectively</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_11-A_New_Artificial_Neural_Networks_Approach_for_Diagnosing_Diabetes_Disease_Type.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of IT Resources on IT Capabilities in Sudanese Insurance and Banking Sectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070610</link>
        <id>10.14569/IJACSA.2016.070610</id>
        <doi>10.14569/IJACSA.2016.070610</doi>
        <lastModDate>2016-07-01T14:32:53.5770000+00:00</lastModDate>
        
        <creator>Anwar Yahia Shams Eldin</creator>
        
        <creator>Abdel Hafiez Ali</creator>
        
        <creator>Ahmad A. Al-Tit</creator>
        
        <subject>IT resource; IT capabilities; RBV; process level; banking sector; insurance sector; Sudan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Previous studies that applied the Resource Based View (RBV) to examine the impact of IT (Information Technology) resources on IT competencies often show different results. This study investigates the impact of IT resources (core communication technology, group collaboration enterprise competences, "inter-organization system usage") on IT capabilities (infrastructure, empowerment and functional capabilities), addressing some empirical limitations of testing the RBV, such as the ability to disentangle effects from a variety of sources and how IT resources complement other IT resources. Data were collected from 83 IT employees in the Sudanese banking and insurance sectors. A questionnaire was used to collect the data. Reliability and factor analyses were conducted to ensure the goodness of the data, and regression analysis was conducted to test the relationships between variables. The findings of this study do disentangle the effects on IT capabilities from a variety of sources. Moreover, they show how IT resources complement each other in generating capability outcomes</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_10-Impact_of_IT_Resources_on_IT_Capabilities_in_Sudanese_Insurance_and_Banking_Sectors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment for the Model Predicting of the Cognitive and Language Ability in the Mild Dementia by the Method of Data-Mining Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070609</link>
        <id>10.14569/IJACSA.2016.070609</id>
        <doi>10.14569/IJACSA.2016.070609</doi>
        <lastModDate>2016-07-01T14:32:53.5300000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Dongwoo Lee</creator>
        
        <creator>Sunghyoun Cho</creator>
        
        <subject>random forests; data mining; mild dementia; risk factors; neuropsychological test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Assessments of cognitive and verbal functions are widely used as screening tests to detect early dementia. This study developed an early dementia prediction model for Korean elderly based on the random forest algorithm and compared its results and precision with those of a logistic regression model and a decision tree model. The subjects of the study were 418 elderly people (135 males and 283 females) over the age of 60 in local communities. The outcome was defined as having dementia, and the explanatory variables included digit span forward, digit span backward, confrontational naming, Rey Complex Figure Test (RCFT) copy score, RCFT immediate recall, RCFT delayed recall, RCFT recognition true positive, RCFT recognition false positive, Seoul Verbal Learning Test (SVLT) immediate recall, SVLT delayed recall, SVLT recognition true positive, SVLT recognition false positive, Korean Color Word Stroop Test (K-CWST) color reading correct, and K-CWST color reading error. The random forest algorithm was used to develop the prediction model, and the result was compared with a logistic regression model and a decision tree based on the chi-square automatic interaction detector (CHAID). As a result of the study, the tests with a high level of predictive power in the detection of early dementia were verbal memory, visuospatial memory, naming, visuospatial functions, and executive functions. In addition, the random forest model was more accurate than logistic regression and CHAID. In order to effectively detect early dementia, the development of screening test programs composed of tests with high predictive power is required</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_9-Assessment_for_the_Model_Predicting_of_the_Cognitive_and_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LNG Import Contract in the perspective of Associated Technical and Managerial Challenges for the Distribution Companies of Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070608</link>
        <id>10.14569/IJACSA.2016.070608</id>
        <doi>10.14569/IJACSA.2016.070608</doi>
        <lastModDate>2016-07-01T14:32:53.5130000+00:00</lastModDate>
        
        <creator>Kawish Bakht</creator>
        
        <creator>Farhan Aslam</creator>
        
        <creator>Tahir Nawaz</creator>
        
        <creator>Bakhtawar Seerat</creator>
        
        <subject>LNG Import Contract; sustainable energy solution; UFG (Unaccounted for Gas) losses; Infrastructure and policy Amendments; Technical and Managerial Challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Energy managers and government office holders in Pakistan are nowadays pondering multiple options for the resolution of the ongoing energy crisis in the country. LNG (Liquefied Natural Gas) import has been finalized as the quickest remedy among all the available options, and the LNG import contract is on the verge of being implemented in Pakistan. However, there are several factors that need to be addressed while implementing the project. In this paper, the challenges affecting the optimized distribution of gas are identified. The sustainability of the LNG project depends upon running it successfully without facing the financial crises that arise from gas distribution losses. The motive of this paper is to identify the factors that pose risks to the sustainability and successful running of LNG import in Pakistan on a long-term basis. In addition, the technical and managerial challenges faced by the gas distribution companies in distributing LNG are identified. Moreover, recommendations are proposed for modifications to the existing infrastructure and the governing policies of the gas distribution companies, for the logical success and long-term sustainability of LNG import in Pakistan</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_8-LNG_Import_Contract_in_the_perspective_of_Associated_Technical_and_Managerial_Challenges_for_the_Distribution_Companies_of_Pakistan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Solutions of Heat and Mass Transfer with the First Kind Boundary and Initial Conditions in Capillary Porous Cylinder Using Programmable Graphics Hardware</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070607</link>
        <id>10.14569/IJACSA.2016.070607</id>
        <doi>10.14569/IJACSA.2016.070607</doi>
        <lastModDate>2016-07-01T14:32:53.4670000+00:00</lastModDate>
        
        <creator>Hira Narang</creator>
        
        <creator>Fan Wu</creator>
        
        <creator>Abisoye Ogunniyan</creator>
        
        <subject>General; Numerical Solution; Heat and Mass Transfer; General Purpose Graphics Processing Unit; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Recently, heat and mass transfer simulation has become more and more important in various engineering fields. In order to analyze how heat and mass transfer behave in a thermal environment, heat and mass transfer simulation is needed. However, it is very time-consuming to obtain numerical solutions to heat and mass transfer equations. Therefore, in this paper, one of the acceleration techniques developed in the graphics community, which exploits a graphics processing unit (GPU), is applied to the numerical solution of heat and mass transfer equations. The NVIDIA Compute Unified Device Architecture (CUDA) programming model provides a straightforward means of describing inherently parallel computations. This paper improves the performance of numerically solving the heat and mass transfer equations over a capillary porous cylinder with boundary and initial conditions of the first kind, running on a GPU. Heat and mass transfer simulation using the CUDA platform on an NVIDIA Quadro FX 4800 is implemented. Our experimental results clearly show that the GPU can accurately perform heat and mass transfer simulation and can significantly accelerate it, with a maximum observed speedup of 10 times. Therefore, the GPU is a good approach for accelerating heat and mass transfer simulation</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_7-Numerical_Solutions_of_Heat_and_Mass_Transfer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Steganography for Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070606</link>
        <id>10.14569/IJACSA.2016.070606</id>
        <doi>10.14569/IJACSA.2016.070606</doi>
        <lastModDate>2016-07-01T14:32:53.4070000+00:00</lastModDate>
        
        <creator>Khan Farhan Rafat</creator>
        
        <creator>Muhammad Junaid Hussain</creator>
        
        <subject>Steganography; Imperceptibility; Information Hiding; LSB Technique; Secure Communication; Information Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>The degree of imperceptibility of a hidden image in 'Digital Image Steganography' is mostly defined in relation to the limitations of the Human Visual System (HVS), its chances of detection using statistical methods, and its capacity to hide maximum information inside its body. However, a tradeoff does exist between the data-hiding capacity of the cover image and the robustness of the underlying information-hiding scheme. This paper presents a technique for embedding information inside the cover at Stego-key-dependent locations, which are hard to detect, to achieve optimal security. Hence, it is secure under the worst-case scenario where Wendy is in possession of the original image (cover) agreed upon by Alice and Bob for their secret communication. The reliability of our proposed solution can be appreciated by observing the differences between the cover, the preprocessed cover and the Stego object. The proposed scheme is equally good for color and grayscale images. Another interesting aspect of this research is that it implicitly presents a fusion of the cover and the information to be hidden in it, while guarding against passive attacks</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_6-Secure_Steganography_for_Digital_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology based Intrusion Detection System in Wireless Sensor Network for Active Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070605</link>
        <id>10.14569/IJACSA.2016.070605</id>
        <doi>10.14569/IJACSA.2016.070605</doi>
        <lastModDate>2016-07-01T14:32:53.3900000+00:00</lastModDate>
        
        <creator>Naheed Akhter</creator>
        
        <creator>Maruf Pasha</creator>
        
        <subject>Semantics; Wireless Sensor Network; Intrusion detection and prevention system; ontologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>WSNs are vulnerable to attacks and demand special attention to developing mechanisms for securing them against various threats that could affect the overall infrastructure. WSNs are open to miscellaneous classes of attacks, and security breaches are intolerable in WSNs. Threats such as untrusted data transmissions and deployment in open and unfavorable environments are still open research issues. Security is an essential and complex requirement in WSNs. These issues raise the need to develop a security-based mechanism for Wireless Sensor Networks that categorizes the different attacks based on their relevance. A detailed survey of active attacks is presented, based on the nature and attributes of those attacks. An ontology-based mechanism is developed and tested for active attacks in WSNs</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_5-Ontology_based_Intrusion_Detection_System_in_Wireless_Sensor_Network_for_Active_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rapid Control Prototyping and PIL Co-Simulation of a Quadrotor UAV Based on NI myRIO-1900 Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070604</link>
        <id>10.14569/IJACSA.2016.070604</id>
        <doi>10.14569/IJACSA.2016.070604</doi>
        <lastModDate>2016-07-01T14:32:53.3600000+00:00</lastModDate>
        
        <creator>Soufiene Bouall&#232;gue</creator>
        
        <creator>Rabii Fessi</creator>
        
        <subject>Quadrotor UAV; modeling; Computer Aided Design; Model Based Design; Rapid Control Prototyping; PIL co-simulation; LabVIEW/CDSim; NI myRIO-1900; PID and MPC approaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>In this paper, a new Computer Aided Design (CAD) methodology for the Processor-In-the-Loop (PIL) co-simulation and Rapid Control Prototyping (RCP) of a Quadrotor Vertical Take-Off and Landing (VTOL) type of Unmanned Aerial Vehicle (UAV) is proposed and successfully implemented around an embedded NI myRIO-1900 target and a host PC. The developed software (SW) and hardware (HW) prototyping platform is based on the Control Design and Simulation (CDSim) module of the LabVIEW environment and an established Network Streams data communication protocol. A dynamical model of the Quadrotor UAV, which incorporates the dynamics of vertical and landing flights and aerodynamic forces, is obtained using the Newton-Euler formalism. PID and Model Predictive Control (MPC) approaches are chosen as examples for experimental prototyping. These control laws, as well as the dynamical model of the Quadrotor, are implemented and deployed as separate LabVIEW Virtual Instruments (VI) on the myRIO-1900 target and the host PC, respectively. Several demonstrative co-simulation results, obtained for a 3D LabVIEW emulator of the Quadrotor, are presented and discussed in order to demonstrate the effectiveness of the proposed Model Based Design (MBD) prototyping methodology</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_4-Rapid_Control_Prototyping_and_PIL_Co_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Data Mining Approach for Intrusion Detection on Imbalanced NSL-KDD Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070603</link>
        <id>10.14569/IJACSA.2016.070603</id>
        <doi>10.14569/IJACSA.2016.070603</doi>
        <lastModDate>2016-07-01T14:32:53.3130000+00:00</lastModDate>
        
        <creator>Mohammad Reza Parsaei</creator>
        
        <creator>Samaneh Miri Rostami</creator>
        
        <creator>Reza Javidan</creator>
        
        <subject>intrusion detection system; feature selection; imbalanced dataset; SMOTE; NSL KDD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Intrusion detection systems aim to detect malicious attacks in computer and network traffic, which is not possible using a common firewall. Most intrusion detection systems are developed based on machine learning techniques. Since the datasets used in intrusion detection are imbalanced, in previous methods the accuracy of detecting two attack classes, R2L and U2R, is lower than that of the normal and other attack classes. In order to overcome this issue, this study employs a hybrid approach: a combination of the synthetic minority oversampling technique (SMOTE) and cluster center and nearest neighbor (CANN). Important features are selected using the leave-one-out method (LOO). Moreover, this study employs the NSL-KDD dataset. Results indicate that the proposed method improves the accuracy of detecting U2R and R2L attacks in comparison to the baseline paper by 94% and 50%, respectively</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_3-A_Hybrid_Data_Mining_Approach_for_Intrusion_Detection_on_Imbalanced_NSL_KDD_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Software Testing: Thoughts, Strategies, Challenges, and Experimental Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070602</link>
        <id>10.14569/IJACSA.2016.070602</id>
        <doi>10.14569/IJACSA.2016.070602</doi>
        <lastModDate>2016-07-01T14:32:53.2670000+00:00</lastModDate>
        
        <creator>Mohammed Akour</creator>
        
        <creator>Bouchaib Falah</creator>
        
        <creator>Ahmad A. Al-Zyoud</creator>
        
        <creator>Salwa Bouriat</creator>
        
        <creator>Khalid Alemerien</creator>
        
        <subject>Software Mobile Testing; Software Testing; Mobile Performance Testing; Hybrid Mobile Application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>Mobile devices have become more pervasive in our daily lives and are gradually replacing regular computers for traditional tasks such as browsing the Internet, editing photos, playing videos and sound tracks, and reading different kinds of files. The importance of mobile devices in our lives calls for greater attention to the reliability and compatibility of mobile applications; thus, testing these applications arises as an important phase in the mobile device adoption process. This paper addresses various research directions in mobile application testing by investigating essential concepts, scope, features, and requirements for testing mobile applications. We highlight the similarities and differences between mobile app testing and mobile web testing. Furthermore, we discuss and compare different mobile testing approaches and environments, and present the challenges as emergent needs in test environments. As a case study, we compared the testing experience of a hybrid application on an emulator and on a real-world device. The purpose of the experiment is to verify to what extent a virtual device can emulate a complete client experience. A set of experiments was conducted in which five Android mobile browsers were tested, each on a real device as well as on an emulated device with the same features (CPU, memory size, etc.). The application was tested on the following metrics: performance and function/behavior testing.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_2-Mobile_Software_Testing_Thoughts_Strategies_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Multi-faceted Computational Trust Model for Online Social Network (OSN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070601</link>
        <id>10.14569/IJACSA.2016.070601</id>
        <doi>10.14569/IJACSA.2016.070601</doi>
        <lastModDate>2016-07-01T14:32:53.1230000+00:00</lastModDate>
        
        <creator>Manmeet Mahinderjit Singh</creator>
        
        <creator>Teo Yi Chin</creator>
        
        <subject>Multi-facet Trust; Recommender Trust Model; Action Trust; Online Social Network; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(6), 2016</description>
        <description>An Online Social Network (OSN) is an online social platform that enables people to exchange information and get in touch with family members or friends, and it also serves as a marketing tool. However, OSNs suffer from various security and privacy issues. Trust, fundamentally, is made up of security with hard trust (cryptographic mechanisms) and soft trust (recommender systems); users&#39; trustworthiness on this platform has shown a decline. In this paper, the authors leverage the multi-faceted trust model concept from user-centric and personalized trust models and present weightage and ranking for its important features by employing statistical means. Next, the multi-faceted trust model is combined with an existing action-based model and a context recommender. The contributions of this research are an enhanced trust algorithm and an enhanced context-based recommender trust, which have been tested based on user acceptance. Overall, the results demonstrate that OSN trust is fairly better served by employing a multi-faceted model that embeds actions in comparison to the recommender type.</description>
        <description>http://thesai.org/Downloads/Volume7No6/Paper_1-Hybrid_Multi-faceted_Computational_Trust_Model_for_Online_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Facial Emotion Inference Based on Planar Dynamic Emotional Surfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050608</link>
        <id>10.14569/IJARAI.2016.050608</id>
        <doi>10.14569/IJARAI.2016.050608</doi>
        <lastModDate>2016-06-11T05:35:29.2870000+00:00</lastModDate>
        
        <creator>J. P. P. Ruivo</creator>
        
        <creator>T. Negreiros</creator>
        
        <creator>M. R. P. Barretto</creator>
        
        <creator>B. Tinen</creator>
        
        <subject>emotion recognition; facial emotion; Kalman filter; machine learning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>Emotions have a direct influence on human life and are of great importance in relationships and in the way interactions between individuals develop. Because of this, they are also important for the development of human-machine interfaces that aim to maintain a natural and friendly interaction with their users. In the development of social robots, which this work targets, a suitable interpretation of the emotional state of the person interacting with the robot is indispensable. The focus of this paper is the development of a mathematical model for recognizing emotional facial expressions in a sequence of frames. First, a face tracker algorithm is used to find and keep track of faces in images; the found faces are then fed into the model developed in this work, which consists of an instantaneous emotional expression classifier, a Kalman filter, and a dynamic classifier that gives the final output of the model.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_8-A_Model_for_Facial_Emotion_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Discovery Quantitative Fuzzy Multi-Level Association Rules Mining Using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050607</link>
        <id>10.14569/IJARAI.2016.050607</id>
        <doi>10.14569/IJARAI.2016.050607</doi>
        <lastModDate>2016-06-11T05:35:29.2400000+00:00</lastModDate>
        
        <creator>Saad M. Darwish</creator>
        
        <creator>Abeer A. Amer</creator>
        
        <creator>Sameh G. Taktak</creator>
        
        <subject>Quantitative Data Mining; Fuzzy Association Rule Mining; Multilevel Association rule; Optimization Algorithm</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>Quantitative multilevel association rule mining is a central field for discovering interesting associations among data components at multiple levels of abstraction. The problem of extending procedures to handle quantitative data has attracted the attention of many researchers. Existing algorithms typically discretize the attribute domains into sharp intervals and then apply simple algorithms established for Boolean attributes. Fuzzy association rule mining approaches are intended to overcome such shortcomings based on fuzzy set theory. Furthermore, most current algorithms on this topic rely on exhaustive search methods to determine the ideal support and confidence thresholds, and thus suffer from high computational cost when searching for association rules. To accelerate the search for quantitative multilevel association rules and avoid excessive computation, in this paper we propose a new genetic-based method to determine threshold values for frequent itemsets. In this approach, a sophisticated coding scheme is devised, and the qualified confidence is employed as the fitness function. With the genetic algorithm, a comprehensive search can be achieved and the system is automated, because our model does not require a user-specified minimum support threshold. Experimental results indicate that the proposed algorithm can effectively generate non-redundant fuzzy multilevel association rules.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_7-A_Novel_Approach_for_Discovery_Quantitative_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Highly Accurate Prediction of Jobs Runtime Classes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050606</link>
        <id>10.14569/IJARAI.2016.050606</id>
        <doi>10.14569/IJARAI.2016.050606</doi>
        <lastModDate>2016-06-11T05:35:29.2230000+00:00</lastModDate>
        
        <creator>Anat Reiner-Benaim</creator>
        
        <creator>Anna Grabarnick</creator>
        
        <creator>Edi Shmueli</creator>
        
        <subject>Runtime Prediction; Job Scheduler; Server Farms; Classifier; Mixture Distribution</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>Separating short jobs from long ones is a known technique for improving scheduling performance. This paper describes a method developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the evaluation of the classifier to maximize prediction accuracy. The results indicate an overall accuracy of 90% for the data set used in the study, with sensitivity and specificity both above 90%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_6-Highly_Accurate_Prediction_of_Jobs_Runtime_Classes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thresholding Based Method for Rainy Cloud Detection with NOAA/AVHRR Data  by Means of Jacobi Itteration Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050605</link>
        <id>10.14569/IJARAI.2016.050605</id>
        <doi>10.14569/IJARAI.2016.050605</doi>
        <lastModDate>2016-06-11T05:35:29.1930000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Jacobi iteration method; Multivariate Regression Analysis; AVHRR; Rainfall area detection; Rain Radar</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>A thresholding-based method for rainy cloud detection with NOAA/AVHRR data by means of the Jacobi iteration method is proposed. The proposed method is evaluated through comparisons to truth data provided by the Japanese Meteorological Agency (JMA), which are derived from radar data. Although the experimental results do not show strong regression performance, the new trials provide some knowledge and are informative. The proposed method therefore points toward the creation of a new method for rainfall area detection with visible and thermal infrared imagery data.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_5-Thresholding_Based_Method_for_Rainy_Cloud_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Weakness Detective in Traditional Class</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050604</link>
        <id>10.14569/IJARAI.2016.050604</id>
        <doi>10.14569/IJARAI.2016.050604</doi>
        <lastModDate>2016-06-11T05:35:29.1770000+00:00</lastModDate>
        
        <creator>Fatimah Altuhaifa</creator>
        
        <subject>emotional learner prediction; voice identifier and verifier; weakness detecting; artificial intelligent in education</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>In Artificial Intelligence in Education, across learning contexts and domains, it is difficult to find students’ weaknesses during a lecture in a traditional classroom because of the number of students and because the instructor is busy explaining the lesson. Accordingly, choosing a teaching style that can improve students’ talents or skills so that they perform better in their classes or professional lives is not an easy task. The proposed system detects the average of students’ weaknesses and finds either a solution or an instruction style that can increase students’ abilities and skills, by filtering the collected data and understanding the problem. After that, it provides a teaching style.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_4-Students_Weakness_Detective_in_Traditional_Class.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overview on the Using Rough Set Theory on GIS Spatial Relationships Constraint</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050603</link>
        <id>10.14569/IJARAI.2016.050603</id>
        <doi>10.14569/IJARAI.2016.050603</doi>
        <lastModDate>2016-06-11T05:35:29.1470000+00:00</lastModDate>
        
        <creator>Li Jing</creator>
        
        <creator>Zhou Wenwen</creator>
        
        <subject>space constraint; GIS; rough set; fuzzy geographic; spatial relationship</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>Exploring the constraint range of geographic video space is a key point and difficulty in video GIS research. Spatial constraints over a geographic range, and the complicated constraints and relationships between moving entities and their spatial environment in video, play a significant role in semantic understanding. However, how to position these constraints precisely enough to meet the demands of characteristic behavior extraction remains an open research problem. Taking rough set theory as a reference makes measuring spatial constraint accuracy possible. In the past, many rough-set applications in GIS were based on the equivalence-partition Pawlak rough set. This paper analyzes the basic mathematics of recent research on rough set theory and its related properties, discusses GIS uncertainty in terms of covering approximation spaces and covering rough sets, and analyzes their adjustment range for geographic space constraints.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_3-Overview_on_the_Using_Rough_Set_Theory_on_GIS_Spatial_Relationships_Constraint.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Fuzzy C-Mean Algorithm for Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050602</link>
        <id>10.14569/IJARAI.2016.050602</id>
        <doi>10.14569/IJARAI.2016.050602</doi>
        <lastModDate>2016-06-11T05:35:29.1300000+00:00</lastModDate>
        
        <creator>Hind Rustum Mohammed</creator>
        
        <creator>Husein Hadi Alnoamani</creator>
        
        <creator>Ali AbdulZahraa Jalil</creator>
        
        <subject>pattern recognition; image segmentation; fuzzy c-mean; improved fuzzy c-mean; algorithms</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>Image segmentation is considered a significant stage in an image processing system; in order to increase the speed of such a system, each stage must be reasonably fast. Fuzzy c-means clustering is an iterative algorithm for finding the final groups of a large data set, such as an image, so it takes considerable time to run. This paper presents an improved fuzzy c-means algorithm that takes less time to find clusters and is applied to image segmentation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_2-Improved_Fuzzy_C_Mean_Algorithm_for_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Technique to Manage Big Bioinformatics Data Using Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050601</link>
        <id>10.14569/IJARAI.2016.050601</id>
        <doi>10.14569/IJARAI.2016.050601</doi>
        <lastModDate>2016-06-11T05:35:29.0370000+00:00</lastModDate>
        
        <creator>Huda Jalil Dikhil</creator>
        
        <creator>Mohammad Shkoukani</creator>
        
        <creator>Suhail Sami Owais</creator>
        
        <subject>Bioinformatics; Big Data; Genetic Algorithms; Hadoop MapReduce</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(6), 2016</description>
        <description>The continuous growth of data, mainly medical data in laboratories, has become very complex to use and manage with traditional methods. Researchers have therefore turned to the field of genetic information, which has grown over the past thirty years within the bioinformatics domain (computer science, genetic biology, and DNA). This growth of data has become known as big bioinformatics data. Thus, efficient algorithms such as genetic algorithms are needed to deal with this vast amount of bioinformatics data in genetic laboratories. The researchers propose two models for managing big bioinformatics data in addition to the traditional model: the first applies genetic algorithms before MapReduce, the second applies genetic algorithms after MapReduce, and the original (traditional) model applies only MapReduce without genetic algorithms. The three models were implemented and evaluated using big bioinformatics data collected on the Duchenne Muscular Dystrophy (DMD) disorder. The researchers conclude that the second model is the best of the three in reducing the size of the data, in execution time, and in the ability to manage and summarize big bioinformatics data. Finally, comparing the percentage errors of the second model with those of the first and the traditional model yields 1.136%, 10.227%, and 11.363%, respectively. Thus, the second model is the most accurate, with the lowest percentage error.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No6/Paper_1-A_New_Technique_to_Manage_Big_Bioinformatics_Data_Using_Genetic_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Cloud Migration Methods: A Comparison and Proposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070579</link>
        <id>10.14569/IJACSA.2016.070579</id>
        <doi>10.14569/IJACSA.2016.070579</doi>
        <lastModDate>2016-06-01T11:06:12.6000000+00:00</lastModDate>
        
        <creator>Khadija SABIRI</creator>
        
        <creator>Faouzia BENABBOU</creator>
        
        <creator>Mustapha HAIN</creator>
        
        <creator>Hicham MOUTACHAOUIK</creator>
        
        <creator>Khalid AKODADI</creator>
        
        <subject>Application on premise; Migration methods; Cloud migration Method; PIM; PSM; ADM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Along with the significant advantages of the cloud computing paradigm, the number of enterprises that expect to move a legacy system to the cloud is steadily increasing. Unfortunately, this move is not straightforward; there are many challenges to take up. The applications are often written with outdated technologies. While some enterprises redevelop applications with a specific cloud provider in mind, others try to move their legacy systems, either because the organization wants to keep its past investments or because the legacy systems hold important data. Migrating legacy systems to the cloud introduces technical and business challenges. This paper aims to study in depth and compare existing cloud migration methods based on the Model Driven Engineering (MDE) approach, in order to highlight the strengths and weaknesses of each. Finally, we propose a cloud legacy-system migration method relying on Architecture Driven Modernization (ADM) and explain its working process.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_79-A_Survey_of_Cloud_Migration_Methods_A_Comparison_and_Proposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Heterogeneous Model Integration Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070578</link>
        <id>10.14569/IJACSA.2016.070578</id>
        <doi>10.14569/IJACSA.2016.070578</doi>
        <lastModDate>2016-06-01T11:06:12.5670000+00:00</lastModDate>
        
        <creator>Muhammad Ali Memon</creator>
        
        <creator>Asadullah Shaikh</creator>
        
        <creator>Khizer Hayat</creator>
        
        <creator>Mutiullah Shaikh</creator>
        
        <subject>Model Driven Engineering; Co-evolution; Co-adaptation; Delta models; Model Integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The classic way of building software today consists, simplistically, in connecting a piece of code calling a method with the piece of code implementing that method. We consider pieces of code (software systems) that do not call anything, behave in a non-deterministic way, and provide complex sets of services in different domains. In software engineering, reusability is the holy grail, and in particular the reusability of code from autonomous tools requires powerful composition/integration mechanisms. These systems are developed by different developers and modified incrementally. Integrating these autonomous tools generates various conflicts. To deal with these conflicts, current integration mechanisms define specific sets of rules to resolve them and accomplish integration. Even so, there is still a large chance that changes made by other developers, or updates they make to remain compliant with other developers, cancel the updates made by others. The approach presented here claims three contributions in the field of heterogeneous software integration. First, it eliminates the need for a conflict-resolution mechanism. Second, it provides a mechanism to work in the presence of conflicts without resolving them. Finally, the integration mechanism is not affected if either system evolves. We do this by introducing an intermediate virtual layer between the two systems that introduces delta models consisting of three parts: visibility, which shares required elements; hiding, which hides conflicting elements; and aliasing, which aliases the same concepts in both systems.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_78-Virtual_Heterogeneous_Model_Integration_Layer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Information-Seeking Problem in Human-Technology Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070577</link>
        <id>10.14569/IJACSA.2016.070577</id>
        <doi>10.14569/IJACSA.2016.070577</doi>
        <lastModDate>2016-06-01T11:06:12.5370000+00:00</lastModDate>
        
        <creator>Mohammad Alsulami</creator>
        
        <creator>Asadullah Shaikh</creator>
        
        <subject>Information seeking; seeking problem in HCI; HCI Information seeking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In the history of information-seeking, there has always been some distance between the intention behind a query and the query as posed. Because human query-responders are innately connected to times and trends and have the ability to understand natural language and human intention, they have often been the idealized sources of knowledge direction. As the quantity and depth of human knowledge grow, technological systems have sought to accept queries in natural language, both spoken and written. Modern works seek to improve such systems using distance metrics from literal queries to understood questions, with maps to knowledge bases. However, these methods often do not take into account the value of information in terms of query interpretation for mapping, and as such may have identifiable limitations compared with human responders. In this paper, a model for information value is proposed, and existing works in speech and query recognition are discussed relative to their consideration of information value.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_77-The_Information_Seeking_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Feature Based Arabic Opinion Mining Using Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070576</link>
        <id>10.14569/IJACSA.2016.070576</id>
        <doi>10.14569/IJACSA.2016.070576</doi>
        <lastModDate>2016-06-01T11:06:12.4900000+00:00</lastModDate>
        
        <creator>Abdullah M. Alkadri</creator>
        
        <creator>Abeer M. ElKorany</creator>
        
        <subject>Opinion Mining; Sentimental Analysis; Ontology; Feature extraction; Polarity identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>With the increase of opinionated reviews on the web, automatically analyzing and extracting knowledge from those reviews is very important, yet it is a challenging task to perform manually. Opinion mining is a text mining discipline that automatically performs such a task. Most research in this field has focused on English texts, with very limited research on the Arabic language. This scarcity is due to the many obstacles Arabic presents. The aim of this paper is to develop a novel semantic feature-based opinion mining framework for Arabic reviews. The framework utilizes the semantics of ontologies and lexicons in the identification of opinion features and their polarity. Experiments showed that the proposed framework achieved a good level of performance compared with manually collected test data.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_76-Semantic_Feature_Based_Arabic_Opinion_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optical Character Recognition System for Urdu Words in Nastaliq Font</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070575</link>
        <id>10.14569/IJACSA.2016.070575</id>
        <doi>10.14569/IJACSA.2016.070575</doi>
        <lastModDate>2016-06-01T11:06:12.4570000+00:00</lastModDate>
        
        <creator>Safia Shabbir</creator>
        
        <creator>Imran Siddiqi</creator>
        
        <subject>Optical Character Recognition; Urdu Text; Ligatures; Hidden Markov Models; Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Optical Character Recognition (OCR) has been an attractive research area for the last three decades, and mature OCR systems reporting near-100% recognition rates are available for many scripts/languages today. Despite these developments, research on text recognition in many languages is still in its early days, Urdu being one of them. The limited existing literature on Urdu OCR is either restricted to isolated characters or considers limited vocabularies in fixed font sizes. This research presents a segmentation-free and size-invariant technique for recognition of Urdu words in Nastaliq font using ligatures as units of recognition. Ligatures, separated into primary ligatures and diacritics, are recognized using right-to-left HMMs. Diacritics are then associated with the main body using position information, and the resulting ligatures are validated using a dictionary. The system, evaluated on Urdu words, achieved promising recognition rates at the ligature and word levels.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_75-Optical_Character_Recognition_System_for_Urdu.Pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Use of Arabic Tweets to Predict Stock Market Changes in the Arab World</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070574</link>
        <id>10.14569/IJACSA.2016.070574</id>
        <doi>10.14569/IJACSA.2016.070574</doi>
        <lastModDate>2016-06-01T11:06:12.4270000+00:00</lastModDate>
        
        <creator>Khalid AlKhatib</creator>
        
        <creator>Abdullateef Rabab’ah</creator>
        
        <creator>Mahmoud Al-Ayyoub</creator>
        
        <creator>Yaser Jararweh</creator>
        
        <subject>Twitter; Sentiment Analysis; Granger Causality; Pearson Correlation; Arab Stock Market</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Social media users nowadays express their opinions and feelings about many events occurring in their lives. For certain users, some of the most important events are those related to the financial markets. An interesting research field has emerged over the past decade to study the possible relationship between fluctuations in the financial markets and online social media. In this research, we present a comprehensive study to identify the relation between Arabic financial-related tweets and changes in stock markets using a set of the most active Arab stock indices. The results show that there is a Granger causality relation between the volume and sentiment of Arabic tweets and the change in some of the stock markets.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_74-On_the_Use_of_Arabic_Tweets_to_Predict_Stock.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Object Conveyance Algorithm for Multiple Mobile Robots based on Object Shape and Size</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070573</link>
        <id>10.14569/IJACSA.2016.070573</id>
        <doi>10.14569/IJACSA.2016.070573</doi>
        <lastModDate>2016-06-01T11:06:12.3970000+00:00</lastModDate>
        
        <creator>Purnomo Sejati</creator>
        
        <creator>Hiroshi Suzuki</creator>
        
        <creator>Takahiro Kitajima</creator>
        
        <creator>Akinobu Kuwahara</creator>
        
        <creator>Takashi Yasuno</creator>
        
        <subject>multiple mobile robots; object conveyance; team member determination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This paper describes a method for determining the number of team members for multiple-mobile-robot object conveyance. The number of robots in a multiple-mobile-robot system is a key factor in the complexity of robot formation and motion control. In our previous research, we verified the use of a complex-valued neural network for controlling multiple mobile robots in the object conveyance problem. Although developing an effective team member determination method for multiple-mobile-robot object conveyance is a significant issue, few studies have been done on it. Therefore, we propose an algorithm for determining the number of team members for multiple-mobile-robot object conveyance with grasping push. The team membership is determined based on object weight to obtain an appropriate formation. First, the object shape and size are measured by a surveyor robot that approaches and surrounds the object. While surrounding the object, the surveyor robot measures its distance to the object and records it for estimating the object shape and size. Once the object shape and size are estimated, the surveyor robot takes an initial push position at the estimated push point and calls additional robots for cooperative pushing. The algorithm is validated in several computer simulations with varying object shapes and sizes. As a result, the proposed algorithm is promising for minimizing the number of robots in multiple-mobile-robot object conveyance.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_73-Object_Conveyance_Algorithm_for_Multiple.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NEB in Analysis of Natural Image 8 &#215; 8 and 9 &#215; 9 High-contrast Patches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070572</link>
        <id>10.14569/IJACSA.2016.070572</id>
        <doi>10.14569/IJACSA.2016.070572</doi>
        <lastModDate>2016-06-01T11:06:12.3630000+00:00</lastModDate>
        
        <creator>Shengxiang Xia</creator>
        
        <creator>Wen Wang</creator>
        
        <subject>nudged elastic band; natural image high-contrast patch; cell complex; density function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper we use the nudged elastic band technique from computational chemistry to investigate sampled high-dimensional data from a natural image database. We randomly sample 8 &#215; 8 and 9 &#215; 9 high-contrast patches of natural images and create a density estimator believed to be a Morse function. From the Morse function we build one-dimensional cell complexes from the sampled data. Using these one-dimensional cell complexes, we identify topological properties of 8 &#215; 8 and 9 &#215; 9 high-contrast natural image patches and show that there exist two kinds of subsets of high-contrast 8 &#215; 8 and 9 &#215; 9 patches modeled as a circle; with the new method we confirm some results obtained through the method of computational topology.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_72-NEB_in_Analysis_of_Natural_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Verification-Driven Slicing of UML/OCL Class Diagrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070571</link>
        <id>10.14569/IJACSA.2016.070571</id>
        <doi>10.14569/IJACSA.2016.070571</doi>
        <lastModDate>2016-06-01T11:06:12.3500000+00:00</lastModDate>
        
        <creator>Asadullah Shaikh</creator>
        
        <creator>Uffe Kock Wiil</creator>
        
        <subject>MDD; UML; OCL; Model Slicing; Efficient Verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Model defects are a significant concern in the Model-Driven Development (MDD) paradigm, as model transformations and code generation may propagate errors present in the model to other notations where they are harder to detect and trace. Formal verification techniques can check the correctness of a model, but their high computational complexity can limit their scalability.
Current approaches to this problem have an exponential worst-case run time. In this paper, we propose a slicing technique which breaks a model into several independent submodels from which irrelevant information can be abstracted to improve the scalability of the verification process. We consider a specific static model (UML class diagrams annotated with unrestricted OCL constraints) and a specific property to verify (satisfiability, i.e., whether it is possible to create objects without violating any constraints). The definition of the slicing procedure ensures that the property under verification is preserved after partitioning. Furthermore, the paper provides an evaluation of experimental results from a real-world case study.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_71-Efficient_Verification_Driven_Slicing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diversity-Based Boosting Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070570</link>
        <id>10.14569/IJACSA.2016.070570</id>
        <doi>10.14569/IJACSA.2016.070570</doi>
        <lastModDate>2016-06-01T11:06:12.3330000+00:00</lastModDate>
        
        <creator>Jafar A. Alzubi</creator>
        
        <subject>Artificial Intelligence; Classification; Boosting; Diversity; Game Theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm, called the DivBoosting algorithm, is proposed and studied. Experiments on several data sets are conducted on both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over the traditional Boosting method because its mechanism is not solely based on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_70-Diversity_Based_Boosting_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Counting of On-Tree Citrus Fruit for Crop Yield Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070569</link>
        <id>10.14569/IJACSA.2016.070569</id>
        <doi>10.14569/IJACSA.2016.070569</doi>
        <lastModDate>2016-06-01T11:06:12.3030000+00:00</lastModDate>
        
        <creator>Zeeshan Malik</creator>
        
        <creator>Sheikh Ziauddin</creator>
        
        <creator>Ahmad R. Shahid</creator>
        
        <creator>Asad Safi</creator>
        
        <subject>Precision agriculture; yield estimation; k-means segmentation; leaf occlusion; illumination; morphology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper, we present a technique to estimate citrus fruit yield from tree images. Manually counting the fruit for yield estimation for marketing and other managerial tasks is time consuming and requires human resources, which do not always come cheap. Different approaches have been used for this purpose, yet separating fruit from its background poses challenges and renders the exercise inaccurate. In this paper, we use k-means segmentation for recognition of fruit, which segments the image accurately, thus enabling more accurate yield estimation. We created a dataset containing 83 tree images with 4001 citrus fruits from three different fields. We are able to detect the on-tree fruits with an accuracy of 91.3%. In addition, we find a strong correlation between the manual and the automated fruit count, obtaining coefficients of determination R2 of up to 0.99.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_69-Detection_and_Counting_of_On_Tree_Citrus_Fruit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Diagnosing of Suspicious Lesions in Digital Mammograms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070568</link>
        <id>10.14569/IJACSA.2016.070568</id>
        <doi>10.14569/IJACSA.2016.070568</doi>
        <lastModDate>2016-06-01T11:06:12.2570000+00:00</lastModDate>
        
        <creator>Abdelali ELMOUFIDI</creator>
        
        <creator>Khalid El Fahssi</creator>
        
        <creator>Said Jai-andaloussi</creator>
        
        <creator>Abderrahim Sekkaki</creator>
        
        <creator>Gwenole Quellec</creator>
        
        <creator>Mathieu Lamard</creator>
        
        <creator>Guy Cazuguel</creator>
        
        <subject>Breast cancer, Mammogram, Computer-aided diagnosis, Segmentation, Regions of interest, Support Vector Machine, FROC analysis, ROC analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Breast cancer is the most common cancer and the leading cause of morbidity and mortality among women aged between 50 and 74 years worldwide. In this paper, we propose a method to detect suspicious lesions in mammograms, extract their features, and classify them as Normal or Abnormal and as Benign or Malignant for the diagnosis of breast cancer. This method consists of two major parts: the first is the detection of regions of interest (ROIs); the second is the diagnosis of the detected ROIs. The method was tested on the Mini Mammography Image Analysis Society (Mini-MIAS) database. To evaluate the method’s performance, we used the FROC (Free-Receiver Operating Characteristics) curve for the detection part and the ROC (Receiver Operating Characteristics) curve for the diagnosis part. The obtained results show that the detection part has a sensitivity of 94.27% at 0.67 false positives per image. The diagnosis part achieves 94.29% accuracy, with 94.11% sensitivity and 94.44% specificity, in the classification as normal or abnormal mammogram, and 94.4% accuracy, with 96.15% sensitivity and 94.54% specificity, in the classification as Benign or Malignant.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_68-Automatic_Diagnosing_of_Suspicious_Lesions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithmic approach for abstracting transient states in timed systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070567</link>
        <id>10.14569/IJACSA.2016.070567</id>
        <doi>10.14569/IJACSA.2016.070567</doi>
        <lastModDate>2016-06-01T11:06:12.1770000+00:00</lastModDate>
        
        <creator>Mohammed Achkari Begdouri</creator>
        
        <creator>Houda Bel Mokadem</creator>
        
        <creator>Mohamed El Haddad</creator>
        
        <subject>Timed automata, symbolic model checking, backward analysis algorithm, correctness, data structures</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In previous works, the timed logic TCTL was extended with important modalities in order to abstract transient states that last for less than k time units. For all modalities of this extension, called TCTL?, the decidability of the model-checking problem has been proved with an appropriate extension of Alur and Dill’s region graph. But this theoretical result does not lend itself to a natural implementation due to its state-space explosion problem. This is not surprising since, even for TCTL timed logics, the model-checking algorithm implemented in tools like UPPAAL or KRONOS is based on a so-called zone algorithm and data structures like DBMs, rather than on explicit sets of regions.
In this paper, we propose a symbolic model-checking algorithm which computes the characteristic sets of some TCTL? formulae and checks their truth values. This algorithm generalizes the zone algorithm for TCTL timed logics. We also present a complete correctness proof of this algorithm, and we describe its implementation using the DBM data structure.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_67-An_Algorithmic_approach_for_abstracting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A QoS Solution for NDN in the Presence of Congestion Control Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070566</link>
        <id>10.14569/IJACSA.2016.070566</id>
        <doi>10.14569/IJACSA.2016.070566</doi>
        <lastModDate>2016-06-01T11:06:12.1630000+00:00</lastModDate>
        
        <creator>Abdullah Alshahrani</creator>
        
        <creator>Izzat Alsmadi</creator>
        
        <subject>Named Data Networking; Quality of Service; Congestion Control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Both congestion control and Quality of Service (QoS) are important quality attributes in computer networks. Specifically, for the future Internet architecture known as Named Data Networking (NDN), solutions using hop-by-hop interest shaping have been shown to cope with the traffic congestion issue. Ad-hoc techniques for implementing QoS in NDN have been proposed. In this paper, we propose a new QoS mechanism that can work on top of an existing congestion control scheme based on interest shaping. Our solution provides four priority levels, which are assigned to packets and lead to different QoS. Simulations show that high-priority applications are consistently served first, while at the same time low-priority applications never starve. Results in the ndnSIM simulator also demonstrate that we avoid congestion while operating at optimal throughputs.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_66-A_QoS_Solution_for_NDN_in_the_Presence_of_Congestion_Control_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhencment Medical Image Compression Algorithm Based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070565</link>
        <id>10.14569/IJACSA.2016.070565</id>
        <doi>10.14569/IJACSA.2016.070565</doi>
        <lastModDate>2016-06-01T11:06:12.1300000+00:00</lastModDate>
        
        <creator>Manel Dridi</creator>
        
        <creator>Mohamed Ali Hajjaji</creator>
        
        <creator>Belgacem Bouallegue</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <subject>Artificial Neural Network; medical image; compression; DICOM; PSNR; CR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The main objective of medical image compression is to attain the best possible fidelity for the available communication and storage capacity [6], in order to preserve the information contained in the image and avoid introducing errors when it is processed. In this work, we propose a medical image compression algorithm based on an Artificial Neural Network (ANN). It is a simple algorithm which preserves all the image data. Experimental results on 8 bits/pixel and 12 bits/pixel medical images show the performance and efficiency of the proposed method. To determine the ‘acceptability’ of image compression, we used different criteria such as maximum absolute error (MAE), universal image quality (UIQ), correlation, and peak signal-to-noise ratio (PSNR).</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_65-An_Enhencment_Medical_Image_Compression_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Enhancing the Quality of Medical Computerized Tomography Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070564</link>
        <id>10.14569/IJACSA.2016.070564</id>
        <doi>10.14569/IJACSA.2016.070564</doi>
        <lastModDate>2016-06-01T11:06:12.1130000+00:00</lastModDate>
        
        <creator>Mutaz Al-Frejat</creator>
        
        <creator>Mohammad HjoujBtoush</creator>
        
        <subject>Spatial domain; CT image; Laplacian filter; MedPix database; Lung nodules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Computerized tomography (CT) images contribute immensely to medical research and diagnosis. However, due to degradative factors such as noise, low contrast, and blurring, CT images tend to be a degraded representation of the actual body or part under investigation. To reduce the risk of imprecise diagnosis associated with poor-quality CT images, this paper presents a new technique designed to enhance the quality of medical CT images. The main objective is to improve the appearance of CT images in order to obtain better visual interpretation and analysis, which is expected to ease the diagnosis process. The proposed technique involves applying a median filter to remove noise from the CT images and then using a Laplacian filter to enhance the edges and the contrast in the images. In addition, as CT images suffer from low contrast, a Contrast Limited Adaptive Histogram Equalization transform is applied to solve this problem. The main strength of this transform is its modest computational requirements, ease of application, and excellent results for most images. According to a subjective assessment by a group of radiologists, the proposed technique resulted in excellent enhancement, including that of the contrast and the edges of medical CT images. From a medical perspective, the proposed technique was able to clarify the arteries, tissues, and lung nodules in the CT images. In addition, blurred nodules in chest CT images were enhanced effectively. Therefore, the proposed technique can help radiologists to better detect lung nodules and can also assist in diagnosing the presence of tumours and in the detection of abnormal growths.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_64-A_New_Approach_for_Enhancing_the_Quality_of_Medical_Computerized_Tomography_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Finding Similarity Between Two Community Graphs Using Graph Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070563</link>
        <id>10.14569/IJACSA.2016.070563</id>
        <doi>10.14569/IJACSA.2016.070563</doi>
        <lastModDate>2016-06-01T11:06:12.0830000+00:00</lastModDate>
        
        <creator>Bapuji Rao</creator>
        
        <creator>Saroja Nanda Mishra</creator>
        
        <subject>community graph; compressed community graph; dissimilar edges; self-loop; similar edges; weighted adjacency matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Graph similarity has been studied in the fields of shape retrieval, object recognition, face recognition, and many other areas. Sometimes it is important to compare two community graphs for similarity, which makes it easier to mine reliable knowledge from a large community graph. Once similarity is established, the necessary knowledge can be mined from only one community graph rather than both, which saves time. This paper proposes an algorithm for the similarity check of two community graphs using graph mining techniques. Since a large community graph is difficult to visualize, compression is essential. The proposed method seems to be easier and faster while checking for similarity between two community graphs, since the comparison is between the two compressed community graphs rather than the actual large community graphs.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_63-An_Approach_to_Finding_Similarity_Between_Two_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Static Filtered Sky Color Constancy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070562</link>
        <id>10.14569/IJACSA.2016.070562</id>
        <doi>10.14569/IJACSA.2016.070562</doi>
        <lastModDate>2016-06-01T11:06:12.0530000+00:00</lastModDate>
        
        <creator>Ali Alkhalifah</creator>
        
        <subject>Static Filter; Color Constancy; LAB color space; Sky Color Detection; Horizon detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In Computer Vision, the sky color is used for lighting correction, image color enhancement, horizon alignment, image indexing, outdoor image classification, and many other applications. In this article, for robust color-based sky segmentation and detection, the use of lighting correction for sky color detection is investigated. As such, the impact of color constancy on sky color detection algorithms is evaluated and investigated. The color correction (constancy) algorithms used include Gray-Edge (GE), Gray-World (GW), Max-RGB (MRGB), and Shades-of-Gray (SG). The algorithms GE, GW, MRGB, and SG are tested on the static filtered sky modeling. The static filter is developed in the LAB color space. This evaluation and analysis is essential for detection scenarios, especially color-based object detection in outdoor scenes. From the results, it is concluded that applying color constancy before sky color detection using LAB static filters has the potential to improve sky color detection performance. However, the application of color constancy can have adverse effects on the detection results. For images, the color constancy algorithms depict a compact and stable representation of the sky chroma loci; however, the sky color locus might shift and deviate in a particular color representation. Since the sky static filters use static chromatic values, different results can be obtained by applying color constancy algorithms on various datasets.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_62-Static_Filtered_Sky_Color_Constancy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TMCC: An Optimal Mechanism for Congestion Control in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070561</link>
        <id>10.14569/IJACSA.2016.070561</id>
        <doi>10.14569/IJACSA.2016.070561</doi>
        <lastModDate>2016-06-01T11:06:12.0370000+00:00</lastModDate>
        
        <creator>Razieh Golgiri</creator>
        
        <creator>Reza Javidan</creator>
        
        <subject>Wireless Sensor Network (WSN); traffic management; resource control; alternative path; QoS; TMCC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Most proposed methods for congestion control in Wireless Sensor Networks (WSNs) have disadvantages such as a central congestion control mechanism through the sink node, the use of only one traffic control or resource control mechanism, and the same throughput on all nodes. To address these problems, in this paper, a new congestion control protocol is presented in order to increase the network lifetime and reliability of WSNs. Since the priority of generated traffic at the network level is not uniform in WSNs, an architectural framework is proposed based on the priority of generated traffic for service identification at the network level, in order to achieve better service quality and efficiency. The proposed method, called TMCC, has been compared with the Traffic-Aware Dynamic Routing (TADR) method to show its effectiveness in terms of end-to-end delay, throughput, power consumption, and network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_61-TMCC_An_Optimal_Mechanism_for_Congestion_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing in Partner-Based Scheduling Algorithm for Grid Workflow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070560</link>
        <id>10.14569/IJACSA.2016.070560</id>
        <doi>10.14569/IJACSA.2016.070560</doi>
        <lastModDate>2016-06-01T11:06:12.0070000+00:00</lastModDate>
        
        <creator>Muhammad Roman</creator>
        
        <creator>Jawad Ashraf</creator>
        
        <creator>Asad Habib</creator>
        
        <creator>Gohar Ali</creator>
        
        <subject>Load Balancing; Advance Reservation; Resource Utilization; Workflow Scheduling; Job Distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Automated advance reservation has the potential to ensure a good scheduling solution in computational Grids. To improve the global throughput of a Grid system and enhance resource utilization, workload has to be distributed evenly among the resources of the Grid. This paper discusses the problem of load distribution and resource utilization in heterogeneous Grids in an advance reservation environment. We propose an extension of the Partner Based Dynamic Critical Path for Grids algorithm, named Balanced Partner Based Dynamic Critical Path for Grids (B-PDCPG), that incorporates a hybrid, threshold-based mechanism to keep the variation in workload among the resources within an allowed value. The proposed load balancing technique uses Utilization Profiles to store the reservation details and checks the loads on each of the resources and links from these profiles. The load is distributed among resources based on the processing element capacity and the number of processing units on the resources. The simulation results, using the GridSim simulation engine, show that the proposed technique balances the workload very effectively and provides better utilization of resources while decreasing the workflow makespan.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_60-Load_Balancing_in_Partner_Based_Scheduling_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Data Clustering Algorithm (NDCA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070559</link>
        <id>10.14569/IJACSA.2016.070559</id>
        <doi>10.14569/IJACSA.2016.070559</doi>
        <lastModDate>2016-06-01T11:06:11.9730000+00:00</lastModDate>
        
        <creator>Abdullah Abdulkarem</creator>
        
        <creator>Imane Aly Saroit Ismail</creator>
        
        <creator>Amira Mohammed Kotb</creator>
        
        <subject>Wireless sensor network; clustering; energy efficiency; cluster head selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Wireless sensor networks (WSNs) have sensing, data processing and communication capabilities. The major task of a sensor node is to gather data from the sensed field and send it to the end user via the base station (BS). To achieve scalability and prolong the network lifetime, the sensor nodes are grouped into clusters. This paper proposes a new clustering algorithm named New Data Clustering Algorithm (NDCA), which takes the optimal number of clusters and the number of data packets sent from the surrounding environment as the cluster head (CH) selection criteria.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_59-New_Data_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discrete-Time Approximation for Nonlinear Continuous Systems with Time Delays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070558</link>
        <id>10.14569/IJACSA.2016.070558</id>
        <doi>10.14569/IJACSA.2016.070558</doi>
        <lastModDate>2016-06-01T11:06:11.9600000+00:00</lastModDate>
        
        <creator>Bemri H’mida</creator>
        
        <creator>Soudani Dhaou</creator>
        
        <subject>Discrete-time systems; Time-delay systems; Taylor-Lie series; non-linear systems; Simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This paper is concerned with the discretization of nonlinear continuous-time delay systems. Our approach is based on Taylor-Lie series. The main idea is to minimize the effect of the delay and reduce the influence of the nonlinear terms by linearizing the system under study, in an attempt to make it as easy as possible to handle and program. We investigate a new method based on the development of new theoretical methods for the time discretization of nonlinear systems with time delay. The performance of the proposed discretization methods was validated by numerical simulation using a nonlinear system with state delay. Some illustrative examples are given to show the effectiveness of the obtained results.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_58-Discrete_Time_Approximation_for_Nonlinear_Continuous_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault-Tolerant Fusion Algorithm of Trajectory and Attitude of Spacecraft Based on Multi-Station Measurement Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070557</link>
        <id>10.14569/IJACSA.2016.070557</id>
        <doi>10.14569/IJACSA.2016.070557</doi>
        <lastModDate>2016-06-01T11:06:11.9270000+00:00</lastModDate>
        
        <creator>YANG Xiaoyan</creator>
        
        <creator>HU Shaolin</creator>
        
        <creator>YU Hui</creator>
        
        <creator>LI Shaomini Xi’an</creator>
        
        <subject>trajectory; fault-tolerance; data fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Aiming at the practical situation in which the navigation processes of spacecraft usually rely on several different kinds of tracking equipment that track the spacecraft by turns, a series of new outlier-tolerant fusion algorithms is built to determine the whole flight path as well as the attitude parameters. In these new algorithms, the well-known gradient descent methods are used to find the outlier-tolerant flight paths from a carefully designed integrated data-fusion function. In this paper, these new algorithms are used to reliably determine the flight paths and attitude parameters in the situation where a spacecraft is tracked by a series of equipment working by turns and some outliers arise in the data series. The advantages of these new algorithms are not only the full fusion of all the data series from the different kinds of equipment but also their discriminatory usage: on the one hand, if the data are dependable, the usable information contained in them is fully exploited; on the other hand, if the data are outliers, the bad information they carry is efficiently eliminated by the algorithms. In this way, all of the computed flight paths and attitude parameters are ensured to be consistent and reliable.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_57-Fault_Tolerant_Fusion_Algorithm_of_Trajectory_and_Attitude.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization of Channel Coding for Transmitted Image Using Quincunx Wavelets Transforms Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070556</link>
        <id>10.14569/IJACSA.2016.070556</id>
        <doi>10.14569/IJACSA.2016.070556</doi>
        <lastModDate>2016-06-01T11:06:11.8970000+00:00</lastModDate>
        
        <creator>Mustapha Khelifi</creator>
        
        <creator>Abdelmounaim Moulay Lakhdar</creator>
        
        <creator>Iman Elawady</creator>
        
        <subject>Code Rate; Optimization; Quincunx Wavelets Transforms compression; Genetic Algorithm; BSC channel; Reed-Solomon codes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Many images you see on the Internet today have undergone compression for various reasons. Image compression can benefit users by having pictures load faster and webpages use up less space on a Web host. Image compression does not reduce the physical size of an image but instead compresses the data that makes up the image into a smaller size. In the case of image transmission, noise will decrease the quality of the received image, which obliges us to use channel coding techniques to protect the data against channel noise. The Reed-Solomon code is one of the most popular channel coding techniques used to correct errors in many systems (wireless or mobile communications, satellite communications, digital television / DVB, high-speed modems such as ADSL, xDSL, etc.). Since there are many possibilities for selecting the input parameters of the RS code, we are concerned with finding the optimum input that can protect the data with a minimum number of redundant bits. In this paper we use a genetic algorithm to optimize the selection of the input parameters of the RS code according to the channel conditions, which reduces the number of bits needed to protect the data while maintaining high quality of the received image.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_56-Optimization_of_Channel_Coding_for_Transmitted_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070555</link>
        <id>10.14569/IJACSA.2016.070555</id>
        <doi>10.14569/IJACSA.2016.070555</doi>
        <lastModDate>2016-06-01T11:06:11.8800000+00:00</lastModDate>
        
        <creator>Khushboo Khurana</creator>
        
        <creator>M.B. Chandak</creator>
        
        <subject>Deep Web; Surfacing Deep Web; Source Selection; Deep Web Crawler; Schema Matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use link analysis, by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in the deep web that can be useful to gain new insight for various domains, creating a need to access the information in the deep web by developing efficient techniques. As the amount of Web content grows rapidly, the types of data sources are proliferating, and they often provide heterogeneous data. We therefore need to select the deep web data sources that can be used by integration systems. The paper discusses various techniques that can be used to surface deep web information, as well as techniques for deep web source selection.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_55-Survey_of_Techniques_for_Deep_Web_Source_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Virtual Machine Live Migration in Application Data Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070554</link>
        <id>10.14569/IJACSA.2016.070554</id>
        <doi>10.14569/IJACSA.2016.070554</doi>
        <lastModDate>2016-06-01T11:06:11.8500000+00:00</lastModDate>
        
        <creator>Mutiullah Shaikh</creator>
        
        <creator>Asadullah Shaikh</creator>
        
        <creator>Muhammad Ali Memon</creator>
        
        <creator>Farah Deeba</creator>
        
        <subject>component; Cloud Computing; Virtualization; Virtual Machine Monitor VMM; Xen; VMResume; Xen Save and Restore; DC Data Centers Copy on Write CoW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Virtualization plays a vital role in the big cloud federation. Live and real-time virtual machine migration is always a challenging task in a virtualized environment; different approaches, techniques and models have already been presented and implemented by many researchers. The aim of this work is to investigate various parameters of real-time and live migration of virtual machines in a stateful and data context at the application level. Migrating one virtual machine to another requires some time depending on the network bandwidth, guest availability, hardware limitations, resource allocation, server reallocation, hypervisor compatibility and more. To enhance and ensure performance and to optimize migration time, this work presents an analysis in the form of different time stacks over multiple pieces of data stored in the virtual machines. To optimize the migration time, virtual machine checkpoints are used in order to achieve better results, using the Xen hypervisor memory technique, which dynamically allows migration of the configured memory while the allocated memory can be discarded for a while. In this way, the bad memory remains un-migrated and only the good memory containing the used data is migrated in real time.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_54-Analyzing_Virtual_Machine_Live_Migration_in_Application_Data_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Delay-Decomposition Stability Approach of Nonlinear Neutral Systems with Mixed Time-Varying Delays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070553</link>
        <id>10.14569/IJACSA.2016.070553</id>
        <doi>10.14569/IJACSA.2016.070553</doi>
        <lastModDate>2016-06-01T11:06:11.8200000+00:00</lastModDate>
        
        <creator>Ilyes MAZHOUD</creator>
        
        <creator>Issam AMRI</creator>
        
        <creator>Dhaou SOUDANI</creator>
        
        <subject>Neutral systems; Lyapunov–Krasovskii approach; asymptotic stability; mixed time-varying delays; nonlinear perturbations; Linear Matrix Inequalities (LMIs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This paper deals with the asymptotic stability of neutral systems with mixed time-varying delays and nonlinear perturbations. Based on the Lyapunov–Krasovskii functional including the triple integral terms and free weighting matrices approach, a novel delay-decomposition stability criterion is obtained. The main idea of the proposed method is to divide each delay interval into two equal segments. Then, the Lyapunov–Krasovskii functional is used to split the bounds of integral terms of each subinterval. In order to reduce the stability criterion conservatism, delay-dependent sufficient conditions are performed in terms of Linear Matrix Inequalities (LMIs) technique. Finally, numerical simulations are given to show the effectiveness of the proposed stability approach.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_53-Delay_Decomposition_Stability_Approach_of_Nonlinear_Neutral_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reversible Data Hiding Scheme for BTC-Compressed Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070552</link>
        <id>10.14569/IJACSA.2016.070552</id>
        <doi>10.14569/IJACSA.2016.070552</doi>
        <lastModDate>2016-06-01T11:06:11.7870000+00:00</lastModDate>
        
        <creator>Ching-Chiuan Lin</creator>
        
        <creator>Shih-Chieh Chen</creator>
        
        <creator>Kuo Feng Hwang</creator>
        
        <creator>Chi-Ming Yao</creator>
        
        <subject>Block Truncation Coding; Reversible Data Hiding; Difference Expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This paper proposes a reversible data hiding scheme for BTC-compressed images. A block in the BTC-compressed image consists of a larger block-mean pixel and a smaller block-mean pixel. Two message bits are embedded into a pair of neighboring blocks. One is embedded by expanding the difference between the two larger block-mean pixels and the other is embedded by expanding the one between the two smaller block-mean pixels. Experimental results show that the embedding strategy may decrease the modification of images. The proposed scheme may obtain a stego-image with high visual quality and a payload capacity of one bit per block, approximately.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_52-A_Reversible_Data_Hiding_Scheme_for_BTC_Compressed_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining Framework for Generating Sales Decision Making Information Using Association Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070551</link>
        <id>10.14569/IJACSA.2016.070551</id>
        <doi>10.14569/IJACSA.2016.070551</doi>
        <lastModDate>2016-06-01T11:06:11.7730000+00:00</lastModDate>
        
        <creator>Md. Humayun Kabir</creator>
        
        <subject>databases; data mining framework; Apriori algorithm; association rule; sales decision making information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The rapid technological development in the field of information and communication technology (ICT) has enabled the databases of super shops to be organized under a countrywide sales decision making network to develop intelligent business systems by generating enriched business policies. This paper presents a data mining framework for generating sales decision making information from sales data using association rules generated from a valid user input item set with respect to the sales data under analysis. The proposed framework includes the super shop&#39;s raw database storing sales data collected through sales application systems at different Point of Sale (POS) terminals. The Apriori algorithm is well known for association rule discovery from transactional databases. The proposed technique, using customized association rule generation and analysis, checks the input items against the sales data to validate the input items. The support and confidence of each rule are computed. Sales decision making information about the input items is generated by analyzing each of the generated association rules, which can be used to improve sales decision making policy to attract customers and thereby increase sales. This approach to generating sales decision making information by analyzing sales data using association rules is more specific, decision- and application-oriented, as business decision makers are not usually interested in all of the items of the sales database when making a specific sales decision.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_51-Data_Mining_Framework_for_Generating_Sales_Decision_Making_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Detection and Recognition Using Viola-Jones with PCA-LDA and Square Euclidean Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070550</link>
        <id>10.14569/IJACSA.2016.070550</id>
        <doi>10.14569/IJACSA.2016.070550</doi>
        <lastModDate>2016-06-01T11:06:11.7400000+00:00</lastModDate>
        
        <creator>Nawaf Hazim Barnouti</creator>
        
        <creator>Sinan Sameer Mahmood Al-Dabbagh</creator>
        
        <creator>Wael Esam Matti</creator>
        
        <creator>Mustafa Abdul Sahib Naser</creator>
        
        <subject>Face Detection; Face Recognition; PCA; LDA; Viola-Jones; Feature Extraction; Distance Measurement; MATLAB; MUCT; Face94; Grimace</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper, an automatic face recognition system is proposed based on appearance-based features that focus on the entire face image rather than local facial features. The first step in a face recognition system is face detection. The Viola-Jones face detection method, which is capable of processing images extremely rapidly while achieving high detection rates, is used. This method had the most impact in the 2000s and is known as the first object detection framework to provide relevant object detection that can run in real time. Feature extraction and dimension reduction are applied after face detection. The Principal Component Analysis (PCA) method is widely used in pattern recognition. The Linear Discriminant Analysis (LDA) method, used to overcome a drawback of PCA, has been successfully applied to face recognition. This is achieved by projecting the image onto the Eigenface space by PCA and then applying pure LDA over it. The Square Euclidean Distance (SED) is used for matching. The distance between two images is a major concern in pattern recognition; the distance between the vectors of two images indicates image similarity. The proposed method is tested on three databases (MUCT, Face94, and Grimace). Different numbers of training and testing images are used to evaluate the system performance, and the results show that increasing the number of training images increases the recognition rate.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_50-Face_Detection_and_Recognition_Using_Viola_Jones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Awareness Training Transfer and Information Security Content Development for Healthcare Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070549</link>
        <id>10.14569/IJACSA.2016.070549</id>
        <doi>10.14569/IJACSA.2016.070549</doi>
        <lastModDate>2016-06-01T11:06:11.7100000+00:00</lastModDate>
        
        <creator>Arash Ghazvini</creator>
        
        <creator>Zarina Shukur</creator>
        
        <subject>information security; human error; awareness training program; training content; security policy; electronic health record</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Electronic Health Records (EHR) are becoming increasingly pervasive, and the need to safeguard EHR is becoming more vital for healthcare organizations. Human error is known as the biggest threat to information security in electronic health systems and can be minimized through awareness training programs. Various techniques are available for information security awareness. However, research is scant regarding effective information security awareness delivery methods. It is essential that an effective awareness training delivery method be selected, designed, and executed to ensure the appropriate protection of organizational assets. This study adapts Holton&#39;s transfer of training model to develop a framework for an effective information security awareness training program. The framework provides guidelines for organizations to select an effective delivery method based on the organization&#39;s needs and success factors, and to create information security content from a selected healthcare organization&#39;s internal information security policy and related international standards. Organizations should make continual efforts to ensure that the content of the policy is effectively communicated to the employees.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_49-Awareness_Training_Transfer_and_Information_Security_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conservative Noise Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070548</link>
        <id>10.14569/IJACSA.2016.070548</id>
        <doi>10.14569/IJACSA.2016.070548</doi>
        <lastModDate>2016-06-01T11:06:11.6770000+00:00</lastModDate>
        
        <creator>Mona M.Jamjoom</creator>
        
        <creator>Khalil El Hindi</creator>
        
        <subject>component; Instance Reduction Techniques; Instance-Based Learning; Class noise; Noise Filter; Naive Bayesian; Outlier; False Positive</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Noisy training data have a huge negative impact on machine learning algorithms. Noise-filtering algorithms have been proposed to eliminate such noisy instances. In this work, we empirically show that the most popular noise-filtering algorithms have a large False Positive (FP) error rate. In other words, these noise filters mistakenly identify genuine instances as outliers and eliminate them. Therefore, we propose more conservative outlier identification criteria that improve the FP error rate and, thus, the performance of the noise filters. With the new filter, an instance is eliminated if and only if it is misclassified by a mutual decision of the Na&#239;ve Bayesian (NB) classifier and the original filtering criteria being used. As a result, the number of genuine instances that are incorrectly eliminated is reduced, thereby improving the classification accuracy.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_48-Conservative_Noise_Filters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Damage Potential in Security Risk Scoring Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070547</link>
        <id>10.14569/IJACSA.2016.070547</id>
        <doi>10.14569/IJACSA.2016.070547</doi>
        <lastModDate>2016-06-01T11:06:11.6630000+00:00</lastModDate>
        
        <creator>Eli Weintraub</creator>
        
        <subject>CVSS; security; risk management; configuration; Continuous Monitoring; vulnerability; damage potential; risk scoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>A Continuous Monitoring System (CMS) model is presented, having new improved capabilities. The system is based on the actual real-time configuration of the system. Existing risk scoring models assume damage potential is estimated by the system&#39;s owner, thus ignoring the information residing in the technological configuration. The assumption underlying this research is that users are able to estimate business impacts relating to the system&#39;s external interfaces, which they use regularly in their business activities, but are unable to assess business impacts relating to internal technological components. According to the proposed model, a system&#39;s damage potential is calculated from technical information on the system&#39;s components using a directed graph. The graph is incorporated into the Common Vulnerability Scoring System&#39;s (CVSS) algorithm to produce risk scoring measures. The framework presentation includes the system design, the damage potential scoring algorithm design and an illustration of scoring computations.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_47-Evaluating_Damage_Potential_in_Security_Risk_Scoring_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Neural Networks and Support Vector Machine for Voice Disorders Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070546</link>
        <id>10.14569/IJACSA.2016.070546</id>
        <doi>10.14569/IJACSA.2016.070546</doi>
        <lastModDate>2016-06-01T11:06:11.6470000+00:00</lastModDate>
        
        <creator>Nawel SOUISSI</creator>
        
        <creator>Adnane CHERIF</creator>
        
        <subject>Automatic Speech Recognition (ASR); Pathological voices; Artificial Neural Networks (ANN); Support Vector Machine (SVM); Linear Discriminant Analysis (LDA); Mel Frequency Cepstral Coefficients (MFCC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The diagnosis of voice diseases through invasive medical techniques is efficient but often uncomfortable for patients; therefore, automatic speech recognition methods have attracted more and more interest in recent years and have achieved real success in the identification of voice impairments. In this context, this paper proposes a reliable algorithm for voice disorders identification based on two classification algorithms: the Artificial Neural Network (ANN) and the Support Vector Machine (SVM). The feature extraction task is performed using the Mel Frequency Cepstral Coefficients (MFCC) and their first and second derivatives. In addition, Linear Discriminant Analysis (LDA) is proposed as a feature selection procedure in order to enhance the discriminative ability of the algorithm and minimize its complexity. The proposed voice disorders identification system is evaluated using widespread performance measures such as accuracy, sensitivity, specificity, precision and Area Under the Curve (AUC).</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_46-Artificial_Neural_Networks_and_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EDAC: A Novel Energy-Aware Clustering Algorithm for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070545</link>
        <id>10.14569/IJACSA.2016.070545</id>
        <doi>10.14569/IJACSA.2016.070545</doi>
        <lastModDate>2016-06-01T11:06:11.6170000+00:00</lastModDate>
        
        <creator>Ahmad A. Ababneh</creator>
        
        <creator>Ebtessam Al-Zboun</creator>
        
        <subject>Clustering algorithms; Sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Clustering is a useful technique for reducing energy consumption in wireless sensor networks (WSN). To achieve better network lifetime performance, different clustering algorithms use various parameters for cluster head (CH) selection. For example, the sensor&#39;s own residual energy as well as the network&#39;s total residual energy are used. In this paper, we propose an energy-distance aware clustering (EDAC) algorithm that incorporates both the residual energy levels of sensors within a cluster radius and the distances between them. To achieve this, we define a metric that is calculated at each sensor based on local information within its neighborhood. This metric is incorporated into the CH selection probability. Using this metric, one can let the sensors with low residual energy levels have the greatest impact on CH selection, which biases the selected CH to be close to these sensors and thereby reduces their communication energy cost to the CH. Simulation results indicate that our proposed EDAC algorithm outperforms both the LEACH and the energy-efficient DEEC protocols in terms of network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_45-EDAC_A_Novel_Energy_Aware_Clustering_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Novel Medical Image Compression Using Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070544</link>
        <id>10.14569/IJACSA.2016.070544</id>
        <doi>10.14569/IJACSA.2016.070544</doi>
        <lastModDate>2016-06-01T11:06:11.6000000+00:00</lastModDate>
        
        <creator>Mohammad Al-Rababah</creator>
        
        <creator>Abdusamad Al-Marghirani</creator>
        
        <subject>Medical image; lossless Compression; lifting wavelets; CDF9/7; Lifting scheme; SPIHT coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Medical image processing is one of the most important areas of research in medical applications of digitized medical information. Medical images have large sizes. Since the advent of digital medical information, an important challenge has been handling the transmission and storage requirements of huge volumes of data, including medical images. Compression is considered one of the necessary techniques to address this problem, and a large amount of medical images must be compressed using lossless compression. This paper proposes a new medical image compression algorithm founded on the CDF 9/7 lifting wavelet transform combined with the SPIHT coding algorithm; the algorithm applies the lifting scheme to exploit the benefits of the wavelet transform. To evaluate the proposed algorithm, its outcomes are compared with other compression algorithms such as the JPEG codec. Experimental results show that the proposed algorithm is superior to the other algorithms in both lossy and lossless compression for all medical images tested. The wavelet-SPIHT algorithm provides very high PSNR values for MRI images.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_44-Implementation_of_Novel_Medical_Image_Compression_Using_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AES Inspired Hex Symbols Steganography for Anti-Forensic Artifacts on Android Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070543</link>
        <id>10.14569/IJACSA.2016.070543</id>
        <doi>10.14569/IJACSA.2016.070543</doi>
        <lastModDate>2016-06-01T11:06:11.5700000+00:00</lastModDate>
        
        <creator>Somyia M. Abu Asbeh</creator>
        
        <creator>Sarah M. Hammoudeh</creator>
        
        <creator>Arab M. Hammoudeh</creator>
        
        <subject>Mobile Forensics; Anti-Forensics; Artifact Wiping; Data Hiding; Steganography; AES</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Mobile phone technology has become one of the most common and important technologies; it started as a communication tool and then evolved into a key reservoir of personal information and smart applications. With this increased level of complexity, increased dangers and increased levels of countermeasures and opposing countermeasures have emerged, such as mobile forensics and anti-forensics. One of these anti-forensic tools is steganography, which introduces higher levels of complexity and security against hackers’ attacks but simultaneously creates obstacles to forensic investigations. In this paper we propose a new data hiding approach, AES Inspired Steganography (AIS), which utilizes some AES data encryption concepts while hiding the data using the concept of hex symbols steganography. As the approach is based on multiple encryption steps, the resulting carrier files would be unfathomable without the cipher key agreed upon by the communicating parties. These carrier files can be exchanged among Android devices and/or computers. Assessments of the proposed approach have proven it to be advantageous over currently existing steganography approaches in terms of character frequency, security, robustness, key length, and compatibility.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_43-AES_Inspired_Hex_Symbols_Steganography_for_Anti_Forensic_Artifacts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Condition Tolerancing Using Monte Carlo Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070542</link>
        <id>10.14569/IJACSA.2016.070542</id>
        <doi>10.14569/IJACSA.2016.070542</doi>
        <lastModDate>2016-06-01T11:06:11.5530000+00:00</lastModDate>
        
        <creator>JOUILEL Naima</creator>
        
        <creator>ELGADARI M’hammed</creator>
        
        <creator>RADOUANI Mohammed</creator>
        
        <creator>EL FAHIME Benaissa</creator>
        
        <subject>Worst case tolerancing; statistical tolerancing; Monte Carlo simulation; nonlinear condition; slider crank system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>To ensure accuracy and performance of their products, designers tend to tighten tolerances, while manufacturers prefer to widen them in order to reduce costs and remain competitive. The analysis and synthesis of tolerances aims at studying their influence on conformity with functional requirements. This study may be conducted for the most unfavorable configurations with the &quot;worst case&quot; method, or &quot;in all cases&quot; using the statistical approach. However, a nonlinear condition makes it difficult to analyze the influence of parameters on the functional condition. In this work, we are interested in the tolerance analysis of a mechanism presenting a nonlinear functional condition (a slider crank mechanism). To do this, we develop an approach to tolerance analysis combining the worst case and statistical methods.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_42-Nonlinear_Condition_Tolerancing_Using_Monte_Carlo_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Data Classification Using the SVM Classifiers with the Modified Particle Swarm Optimization and the SVM Ensembles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070541</link>
        <id>10.14569/IJACSA.2016.070541</id>
        <doi>10.14569/IJACSA.2016.070541</doi>
        <lastModDate>2016-06-01T11:06:11.5370000+00:00</lastModDate>
        
        <creator>Liliya Demidova</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Yulia Sokolova</creator>
        
        <subject>Big Data; classification; ensemble; SVM classifier; kernel function type; kernel function parameters; particle swarm optimization algorithm; regularization parameter; support vectors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The problem of developing support vector machine (SVM) classifiers using a modified particle swarm optimization (PSO) algorithm, and their ensembles, is considered. Solving this problem allows fulfilling high-precision data classification, especially Big Data classification, with acceptable time expenditures. The modified PSO algorithm conducts a simultaneous search for the type of kernel function, the parameters of the kernel function, and the value of the regularization parameter of the SVM classifier. The idea of particles&#39; &#171;regeneration&#187; served as the basis for the modified PSO algorithm: in its implementation, some particles change their kernel function type to the one corresponding to the particle with the best classification accuracy. The offered PSO algorithm reduces the time expenditure for developing SVM classifiers, which is very important for the Big Data classification problem. In most cases such an SVM classifier provides high-quality data classification. In exceptional cases, SVM ensembles based on the decorrelation maximization algorithm, with different strategies of decision-making on the data classification and the majority vote rule, can be used. A two-level SVM classifier has also been offered; it works as a group of SVM classifiers at the first level and as an SVM classifier based on the modified PSO algorithm at the second level. The results of experimental studies confirm the efficiency of the offered approaches for Big Data classification.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_41-Big_Data_Classification_Using_the_SVM_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Task Scheduling in Cloud Computing Using an Imperialist Competitive Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070540</link>
        <id>10.14569/IJACSA.2016.070540</id>
        <doi>10.14569/IJACSA.2016.070540</doi>
        <lastModDate>2016-06-01T11:06:11.5230000+00:00</lastModDate>
        
        <creator>Majid Habibi</creator>
        
        <creator>Nima Jafari Navimipour</creator>
        
        <subject>Cloud Computing; Tasks scheduling; Imperialist Competitive Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Cloud computing is being welcomed as a new basis for managing and providing services on the internet. One of the reasons for the increased efficiency of this environment is the appropriate structure of the task scheduler. Since task scheduling in the cloud computing environment and distributed systems is an NP-hard problem, meta-heuristic methods inspired by nature are in most cases used to optimize scheduling, rather than traditional or greedy methods. One of the most powerful meta-heuristic optimization methods for complex problems is the Imperialist Competitive Algorithm (ICA). Thus, in this paper, a meta-heuristic method based on ICA is provided to optimize the scheduling problem in the cloud environment. Simulation results in the MATLAB environment show a 0.7 percent improvement in execution time compared with a Genetic Algorithm (GA).</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_40-Multi_Objective_Task_Scheduling_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning on High Frequency Stock Market Data Using Misclassified Instances in Ensemble</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070539</link>
        <id>10.14569/IJACSA.2016.070539</id>
        <doi>10.14569/IJACSA.2016.070539</doi>
        <lastModDate>2016-06-01T11:06:11.4900000+00:00</lastModDate>
        
        <creator>Meenakshi A.Thalor</creator>
        
        <creator>S.T. Patil</creator>
        
        <subject>Classifiers; Concept drift; Data stream; Ensemble; Non-stationary Environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Learning on a non-stationary distribution has been shown to be a very challenging problem in machine learning and data mining, because the joint probability distribution between the data and the classes changes over time. Many real-time problems suffer concept drift as they change with time. For example, in the stock market, customer behavior may change depending on the season of the year and on inflation. Concept drift can occur in the stock market for a number of reasons; for example, traders’ preferences for stocks change over time, and increases in a stock’s value may be followed by decreases. The objective of this paper is to develop an ensemble-based classification algorithm for non-stationary data streams that considers misclassified instances during the learning process. In addition, we present an exhaustive comparison of the proposed algorithm with state-of-the-art classification approaches using different evaluation measures such as recall, f-measure and g-mean.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_39-Learning_on_High_Frequency_Stock_Market_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Provisioning Technique to Balance Energy Depletion and Maximize the Lifetime of Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070538</link>
        <id>10.14569/IJACSA.2016.070538</id>
        <doi>10.14569/IJACSA.2016.070538</doi>
        <lastModDate>2016-06-01T11:06:11.4770000+00:00</lastModDate>
        
        <creator>Hassan Hamid Ekal</creator>
        
        <creator>Jiwa Bin Abdullah</creator>
        
        <subject>Wireless Sensor Network (WSN); Lifetime; Node deployment; Energy provisioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>With the promising technology of Wireless Sensor Networks (WSNs), many applications have been developed for monitoring and tracking in military, commercial, and educational environments. The characteristics of WSNs and their resource limitations impose negative impacts on the performance and effectiveness of these applications. Imbalanced energy consumption among sensor nodes can significantly reduce the performance and lifetime of the network. In a multi-hop corona WSN, the traffic imbalance among sensor nodes makes nodes located near the sink consume more energy and deplete it faster than those distant from the sink. This causes what is called an “energy hole”, which prevents the network from performing its intended tasks properly. The objective of the work in this paper is to balance energy consumption to help improve the lifetime of corona-based WSNs. To maximize the lifetime of the network, an innovative energy provisioning technique is proposed for harmonizing energy consumption among coronas by computing the extra energy needed in every corona. Experimental evaluation results revealed that the proposed technique improves the network lifetime noticeably via fair balancing of the energy consumption ratio among coronas.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_38-Energy_Provisioning_Technique_to_Balance_Energy_Depletion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Test Case Reduction Techniques - Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070537</link>
        <id>10.14569/IJACSA.2016.070537</id>
        <doi>10.14569/IJACSA.2016.070537</doi>
        <lastModDate>2016-06-01T11:06:11.4430000+00:00</lastModDate>
        
        <creator>Marwah Alian</creator>
        
        <creator>Dima Suleiman</creator>
        
        <creator>Adnan Shaout</creator>
        
        <subject>Regression testing; Test case reduction; Test Suite</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Regression testing is considered the most expensive phase in software testing. Regression testing reduction therefore eliminates redundant test cases in the regression testing suite and saves the cost of this phase. To validate the correctness of a new software version resulting from the maintenance phase, regression testing reruns the regression testing suite to ensure that the new version still behaves as intended. Several techniques are used to deal with the regression testing reduction problem. This survey classifies these techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_37-Test_Case_Reduction_Techniques_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Deep Network and Polar Transformation Features for Static Hand Gesture Recognition in Depth Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070536</link>
        <id>10.14569/IJACSA.2016.070536</id>
        <doi>10.14569/IJACSA.2016.070536</doi>
        <lastModDate>2016-06-01T11:06:11.4130000+00:00</lastModDate>
        
        <creator>Vo Hoai Viet</creator>
        
        <creator>Tran Thai Son</creator>
        
        <creator>Ly Quoc Ngoc</creator>
        
        <subject>Hand Gesture Recognition; Deep Network; Polar Transformation; Depth Data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Static hand gesture recognition is an interesting and challenging problem in computer vision. It is considered a significant component of Human Computer Interaction and has attracted many research efforts from the computer vision community in recent decades for its high-potential applications, such as game interaction and sign language recognition. With the recent advent of the cost-effective Kinect, depth cameras have received a great deal of attention from researchers and promoted interest within the vision and robotics community for their broad applications. In this paper, we propose an effective hand segmentation from the full depth image, an important step before extracting the features that represent the hand gesture. We also present a novel hand descriptor that explicitly encodes the shape and appearance information from depth maps, which are significant characteristics of static hand gestures. The proposed descriptor, based on a polar coordinate transformation and called the Histogram of Polar Transformation (HPT), captures both shape and appearance. Besides a robust hand descriptor, a robust classification model also plays a very important role in hand recognition. In order to achieve a high recognition rate, we propose a hybrid classification model based on a Sparse Auto-encoder and a Deep Neural Network. We demonstrate large improvements over state-of-the-art methods on two challenging benchmark datasets, NTU Hand Digits and ASL Finger Spelling, achieving overall accuracies of 97.7% and 84.58%, respectively. Our experiments show that the proposed method significantly outperforms state-of-the-art techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_36-Hybrid_Deep_Network_and_Polar_Transformation_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Load Balancing Routing Technique for Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070535</link>
        <id>10.14569/IJACSA.2016.070535</id>
        <doi>10.14569/IJACSA.2016.070535</doi>
        <lastModDate>2016-06-01T11:06:11.3970000+00:00</lastModDate>
        
        <creator>Mahdi Abdulkader Salem</creator>
        
        <creator>Raghav Yadav</creator>
        
        <subject>AODV; MANET; Load balancing; throughput; packet delivery ratio; routing overhead</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>A mobile ad hoc network (MANET) is a wireless connection of mobile nodes that provides communication and mobility among wireless nodes without the need for any physical infrastructure or centralized devices such as an access point or base station. Communication in a MANET is done by routing protocols. Different categories of routing protocols have been introduced with different goals and objectives for MANETs, such as proactive routing protocols (e.g. DSDV), reactive routing protocols (e.g. AODV), geographic routing protocols (e.g. GRP), and hybrid routing protocols. Two important research problems for such routing protocols are efficient load balancing and energy efficiency. In this paper, we focus on the evaluation and analysis of efficient load balancing protocol design for MANETs. An inefficient load balancing technique results in increased routing overhead, poor packet delivery ratio, and degraded Quality of Service (QoS) parameters. In the literature, a number of different methods have been proposed for improving the performance of routing protocols through efficient load balancing among mobile node communications; however, most of these methods suffer from various limitations. In this paper, we propose a novel technique for improving the QoS performance of the load balancing approach as well as increasing the network lifetime. Evaluation of network lifetime is out of the scope of this paper.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_35-Efficient_Load_Balancing_Routing_Technique_for_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Fuzzy Abduction Technique in Aerospace Dynamics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070534</link>
        <id>10.14569/IJACSA.2016.070534</id>
        <doi>10.14569/IJACSA.2016.070534</doi>
        <lastModDate>2016-06-01T11:06:11.3670000+00:00</lastModDate>
        
        <creator>Sudipta Ghosh</creator>
        
        <creator>Souvik Chatterjee</creator>
        
        <creator>Binanda Kishore Mondal</creator>
        
        <creator>Debasish Kundu</creator>
        
        <subject>Fuzzy logic; Fuzzy abduction; Aerospace dynamics; Inverse Fuzzy relation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The purpose of this paper is to apply the Fuzzy Abduction Technique to an aerospace dynamics problem. A model of an aeroplane is considered at different air density levels of the atmosphere and at different speeds of the plane. The air density of the atmosphere, the angle of the wings and the speed of the plane are selected as the parameters to be studied. A method is developed to determine the angle of the wings of the plane with respect to its axis at different air density levels of the atmosphere and at different speeds of the plane. Data are given to justify the proposed method theoretically.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_34-Application_of_Fuzzy_Abduction_Technique_in_Aerospace_Dynamics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient and Reliable Core-Assisted Multicast Routing Protocol in Mobile Ad-Hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070533</link>
        <id>10.14569/IJACSA.2016.070533</id>
        <doi>10.14569/IJACSA.2016.070533</doi>
        <lastModDate>2016-06-01T11:06:11.3500000+00:00</lastModDate>
        
        <creator>Faheem Khan</creator>
        
        <creator>Sohail Abbas</creator>
        
        <creator>Samiullah Khan</creator>
        
        <subject>MANET; Core; Mirror core; Multicast routing; Receiver initiated; Mesh based routing; NS2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>A mobile ad-hoc network is a collection of mobile nodes that are connected wirelessly, forming a random topology through decentralized administration. In mobile ad-hoc networks, multicasting is one of the important mechanisms that can increase network efficiency and reliability by sending multiple copies in a single transmission without using several unicast transmissions. A receiver-initiated mesh-based multicasting approach provides reliability to a mobile ad-hoc network by reducing overhead.
Receiver-initiated mesh-based multicast routing relies strongly on the proper selection of a core node. The existing schemes suffer from two main problems. First, the core selection process is not efficient: it usually selects a core in a manner that may decrease the core lifetime and deteriorate network performance in the form of frequent core failures. Second, the existing schemes cause too much delay in the core re-selection process. The performance becomes worse in situations where frequent core failures occur due to high mobility, which causes excessive flooding for the reconfiguration of another core and hence delays ongoing communication and compromises network reliability.
The objectives of the paper are as follows. First, we propose an efficient method in which the core is selected within the receiver group on the basis of multiple parameters such as battery capacity and location; as a result, a more stable core is selected with minimum core failure. Second, to increase reliability and decrease delay, we introduce the idea of a mirror core. The mirror core takes over as the main core after the failure of the primary core and has certain advantages such as maximum reliability, minimum delay and a minimized data collection process. We implement and evaluate the proposed solution in Network Simulator 2. The results show that the proposed scheme performs better than the existing benchmark schemes in terms of packet delivery ratio, overhead and throughput.
</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_33-An_Efficient_and_Reliable_Core_Assisted_Multicast_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of 802.11p-Based Ad Hoc Vehicle-to-Vehicle Communications for Usual Applications Under Realistic Urban Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070532</link>
        <id>10.14569/IJACSA.2016.070532</id>
        <doi>10.14569/IJACSA.2016.070532</doi>
        <lastModDate>2016-06-01T11:06:11.3200000+00:00</lastModDate>
        
        <creator>Patrick Sondi</creator>
        
        <creator>Martine Wahl</creator>
        
        <creator>Lucas Rivoirard</creator>
        
        <creator>Ouafae Cohin</creator>
        
        <subject>V2V; 802.11p; QoS; Urban mobility; Simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In vehicular ad hoc networks, participating vehicles organize themselves in order to support many emerging applications. While network infrastructure can be dimensioned correctly to provide quality of service support for both vehicle-to-vehicle and vehicle-to-infrastructure communications, many issues remain in achieving the same performance using only ad hoc vehicle-to-vehicle communications. This paper investigates the performance of such communications for complete applications, including their specific packet sizes, packet acknowledgement mechanisms and quality of service requirements. The simulation experiments are performed using Riverbed (OPNET) Modeler on a network topology made of 50 nodes equipped with IEEE 802.11p technology and following realistic trajectories in the streets of Paris at authorized speeds. The results show that almost all application types are well supported, provided that the source and the destination have a direct link. In particular, it is pointed out that introducing supplementary hops in a communication has more effect on end-to-end delay and loss rate than the mobility of the nodes does. The study also shows that ad hoc reactive routing protocols degrade performance by increasing delays, while proactive ones introduce a similar performance penalty by increasing the network load with routing traffic. Whatever routing protocol is adopted, the best performance is obtained only when small groups of nodes communicate using at most two-hop routes.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_32-Performance_Evaluation_of_802.11p_Based_Ad_Hoc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining &amp; Students’ Performance Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070531</link>
        <id>10.14569/IJACSA.2016.070531</id>
        <doi>10.14569/IJACSA.2016.070531</doi>
        <lastModDate>2016-06-01T11:06:11.2870000+00:00</lastModDate>
        
        <creator>Amjad Abu Saa</creator>
        
        <subject>Data Mining; Education; Students; Performance; Patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_31-Educational_Data_Mining_Students_Performance_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Audio Classification Approach Based on Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070530</link>
        <id>10.14569/IJACSA.2016.070530</id>
        <doi>10.14569/IJACSA.2016.070530</doi>
        <lastModDate>2016-06-01T11:06:11.2570000+00:00</lastModDate>
        
        <creator>Lhoucine Bahatti</creator>
        
        <creator>Omar Bouattane</creator>
        
        <creator>My Elhoussine Echhibat</creator>
        
        <creator>Mohamed Hicham Zaggaf</creator>
        
        <subject>Classification; features; selection; timbre; SVM; IRMFSP; RFE-SVM; CQT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In order to achieve an audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches, which often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper presents a new audio classification method that improves feature extraction according to the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. A further contribution of this work is the proposal of an optimal feature selection procedure that combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_30-An_Efficient_Audio_Classification_Approach_Based_on_Support_Vector_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SIP Signaling Implementations and Performance Enhancement over MANET: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070529</link>
        <id>10.14569/IJACSA.2016.070529</id>
        <doi>10.14569/IJACSA.2016.070529</doi>
        <lastModDate>2016-06-01T11:06:11.2270000+00:00</lastModDate>
        
        <creator>Mazin Alshamrani</creator>
        
        <creator>Haitham Cruickshank</creator>
        
        <creator>Zhili Sun</creator>
        
        <creator>Godwin Ansa</creator>
        
        <creator>Feda Alshahwan</creator>
        
        <subject>SIP; VoIP; MANET; Peer-to-Peer; Back-to-Back User Agent (B2BUA); IMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The implementation of the Session Initiation Protocol (SIP)-based Voice over Internet Protocol (VoIP) and multimedia over MANET is still a challenging issue. Many routing factors affect the performance of SIP signaling and the voice Quality of Service (QoS). Node mobility in MANET causes dynamic changes to route calculations, topology, hop counts, and the connectivity status between the correspondent nodes. SIP-based VoIP depends on the caller’s registration, call initiation, and call termination processes. Therefore, SIP signaling performance plays an important role in the overall QoS of SIP-based VoIP applications for both IPv4 and IPv6 MANET. Different methods have been proposed to evaluate and benchmark the performance of the SIP signaling system. However, the efficiency of these methods varies and depends on the identified performance metrics and the implementation platforms. This survey examines the implementation of the SIP signaling system for VoIP applications over MANET and highlights the available performance enhancement methods.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_29-SIP_Signaling_Implementations_and_Performance_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NISHA: Novel Interface for Smart Home Applications for Arabic Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070528</link>
        <id>10.14569/IJACSA.2016.070528</id>
        <doi>10.14569/IJACSA.2016.070528</doi>
        <lastModDate>2016-06-01T11:06:11.1930000+00:00</lastModDate>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>Yaser Khamayseh</creator>
        
        <creator>Maryan Yatim</creator>
        
        <subject>Human Computer Interaction (HCI); HCI Design and evaluation methods; User Interface Design; User Centered Design; Smart Homes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Researchers have developed many devices and applications for smart homes to control home appliances. The main goal of this research is to propose a touch-based interface (namely, NISHA) for smart homes that meets user needs and requirements and is able to control any appliance in the house. This study is designed for people and circumstances in Middle East countries (Jordan and the West Bank) and therefore sets out to design a user interface for smart home applications taking into consideration the economic, social, and technological differences. In view of those differences, NISHA was designed in a classical representational style instead of a modern advanced one, based on virtual images instead of text, full control instead of automatic control, and very restrictive privacy settings, because people in these countries still view smart homes as a technology that threatens their privacy. Moreover, NISHA was tested and evaluated using heuristic and cognitive walk-through evaluation techniques. Evaluation results showed that 80% of users and experts were satisfied with NISHA as a user-friendly interface, 90% of users were satisfied that NISHA met their expectations, and finally, 93% of users strongly asked to have NISHA in their daily lives.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_28-NISHA_Novel_Interface_for_Smart_Home_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Business Intelligence Tools for Predictive Analytics in Healthcare System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070527</link>
        <id>10.14569/IJACSA.2016.070527</id>
        <doi>10.14569/IJACSA.2016.070527</doi>
        <lastModDate>2016-06-01T11:06:11.1630000+00:00</lastModDate>
        
        <creator>Mihaela-Laura IVAN</creator>
        
        <creator>Mircea Raducu TRIFU</creator>
        
        <creator>Manole VELICANU</creator>
        
        <creator>Cristian CIUREA</creator>
        
        <subject>Healthcare Analytics; Business Intelligence tools; SAP HANA; SAP Lumira; SAP Predictive Analytics; Birth Rate; Big Data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The scope of this article is to highlight how healthcare analytics can be improved using Business Intelligence tools. The healthcare system has learned from previous lessons the necessity of using healthcare analytics for improving patient care, hospital administration, population growth, and many other aspects. The Business Intelligence solutions applied in the current analysis demonstrate the benefits brought by new tools such as SAP HANA, SAP Lumira, and SAP Predictive Analytics. In particular, the birth rate is analyzed in detail, together with the contribution of different factors worldwide.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_27-Using_Business_Intelligence_Tools_for_Predictive_Analytics_in_Healthcare_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Approach for Action Recognition Based on Spatio-Temporal Features in RGB-D Sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070526</link>
        <id>10.14569/IJACSA.2016.070526</id>
        <doi>10.14569/IJACSA.2016.070526</doi>
        <lastModDate>2016-06-01T11:06:11.1170000+00:00</lastModDate>
        
        <creator>Ly Quoc Ngoc</creator>
        
        <creator>Vo Hoai Viet</creator>
        
        <creator>Tran Thai Son</creator>
        
        <creator>Pham Minh Hoang</creator>
        
        <subject>Action Recognition; Depth Sequences; GMM; SVM; Multiple Features; Spatio-Temporal Features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Recognizing human actions is an attractive research topic in computer vision, since it plays an important role in applications such as human-computer interaction, intelligent surveillance, human action retrieval systems, health care, smart homes, robotics, and so on. The availability of the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and visual depth information, has opened an opportunity to significantly increase the capabilities of many automated vision-based recognition tasks. In this paper, we propose a new framework for action recognition in RGB-D video. We extract spatio-temporal features from RGB-D data that capture visual, shape, and motion information. Moreover, a segmentation technique is applied to represent the temporal structure of the action. Firstly, we use STIP to detect interest points in both the RGB and depth channels. Secondly, we apply the HOG3D descriptor to the RGB channel and the 3DS-HONV descriptor to the depth channel. In addition, we also extract HOF2.5D from the fused RGB and depth data to capture human motion. Thirdly, we divide the video into segments and apply a GMM to create feature vectors for each segment, yielding three feature vectors (HOG3D, 3DS-HONV, and HOF2.5D) that represent each segment. Next, the max pooling technique is applied to create a final vector for each descriptor. Then, we concatenate the feature vectors from the previous step into the final vector for action representation. Lastly, we use the SVM method for the classification step. We evaluated our proposed method on three benchmark datasets to demonstrate generalizability, and the experimental results show more accurate action recognition compared to previous works. We obtain overall accuracies of 93.5%, 99.16%, and 89.38% with our proposed method on the UTKinect-Action, 3D Action Pairs, and MSR-Daily Activity 3D datasets, respectively. These results show that our method is feasible and achieves superior performance over state-of-the-art methods on these datasets.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_26-A_Robust_Approach_for_Action_Recognition_Based_on_Spatio_Temporal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gender Prediction for Expert Finding Task</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070525</link>
        <id>10.14569/IJACSA.2016.070525</id>
        <doi>10.14569/IJACSA.2016.070525</doi>
        <lastModDate>2016-06-01T11:06:11.0870000+00:00</lastModDate>
        
        <creator>Daler Ali</creator>
        
        <creator>Malik Muhammad Saad Missen</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Nadeem Salamat</creator>
        
        <creator>Hina Asmat</creator>
        
        <creator>Amnah Firdous</creator>
        
        <subject>Urdu; Semantic Web; Gender Prediction; Expert Profiling; Machine Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Predicting gender from names is one of the most interesting problems in the domain of Information Retrieval and the expert finding task. In this research paper, we propose a machine learning approach for the gender prediction task. We propose a new feature, namely the combination of letters in names, which gives 86.54% accuracy. Our data collection consists of 3000 Urdu-language names written using English alphabets. This technique can be used to extract names from email addresses and hence is also valid for emails. To the best of our knowledge, it is the first-ever attempt at predicting gender from Pakistani (Urdu) names written using English alphabets.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_25-Gender_Prediction_for_Expert_Finding_Task.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel and Distributed Genetic Algorithm with Multiple-Objectives to Improve and Develop of Evolutionary Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070524</link>
        <id>10.14569/IJACSA.2016.070524</id>
        <doi>10.14569/IJACSA.2016.070524</doi>
        <lastModDate>2016-06-01T11:06:11.0400000+00:00</lastModDate>
        
        <creator>Khalil Ibrahim Mohammad Abuzanouneh</creator>
        
        <subject>Heterogeneous clusters; NP-hard; evolutionary multi-objective algorithm; parallel algorithms; Real-time scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper, we address the timetabling problem, which reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so as to satisfy a set of hard constraints and reduce the cost of violating soft constraints. This is an NP-hard problem, a class of problems for which, informally, the number of operations necessary to solve the problem increases exponentially with the problem size. The construction of a timetable is one of the most complicated problems facing many universities, and its difficulty grows with the size of the university data and the overlapping of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distributed models to parallelize the processing of EAs, we designed a genetic algorithm suitable for university environments and the constraints faced when building a timetable for lectures.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_24-Parallel_and_Distributed_Genetic_Algorithm_with_Multiple_Objectives.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment Model for Language Learners’ Writing Practice (in Preparing for TOEFL iBT)  Based on Comparing Structure, Vocabulary, and Identifying Discrepant Essays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070523</link>
        <id>10.14569/IJACSA.2016.070523</id>
        <doi>10.14569/IJACSA.2016.070523</doi>
        <lastModDate>2016-06-01T11:06:11.0230000+00:00</lastModDate>
        
        <creator>Duc Huu Pham</creator>
        
        <creator>Tu Ngoc Nguyen</creator>
        
        <subject>Computer-assisted writing skills; computerized scoring; integrated and independent responses; model; posttest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This study aims to investigate whether learners of English can improve computer-assisted writing skills through the analysis of data from the posttest. In this study, the focus was on intermediate-level students of English taking final writing tests (integrated and independent responses) in preparation for TOEFL iBT. We manually scored and categorized the students’ writing responses into five-point levels to provide the data for building the software. The results of the study showed that the model could be suitable for computerized scoring, allowing language instructors to grade in a fair and exact way and students to improve their writing performance through practice on the computer.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_23-Assessment_Model_for_Language_Learners_Writing_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Privacy Concerns and Perceived Vulnerability to Risks on Users Privacy Protection Behaviors on SNS: A Structural Equation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070522</link>
        <id>10.14569/IJACSA.2016.070522</id>
        <doi>10.14569/IJACSA.2016.070522</doi>
        <lastModDate>2016-06-01T11:06:10.9770000+00:00</lastModDate>
        
        <creator>Noora Sami Al-Saqer</creator>
        
        <creator>Mohamed E. Seliaman</creator>
        
        <subject>Social networking sites (SNSs); information privacy concern; perceived vulnerability; SEM; protection behavior</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This research paper investigates Saudi users’ awareness levels about privacy policies in Social Networking Sites (SNSs), their privacy concerns, and their privacy protection measures. For this purpose, a research model was developed that consists of five main constructs, namely information privacy concern, awareness level of the privacy policies of social networking sites, perceived vulnerability to privacy risks, perceived response efficacy, and privacy protecting behavior. An online survey questionnaire was used to collect responses from a sample of 108 Saudi SNS users. The study found that Saudi users of social networking sites are concerned about their information privacy, but they do not have enough awareness of the importance of privacy protecting behaviors to safeguard their privacy online. The research results also showed that there is a lack of awareness of the privacy policies of social networking sites among Saudi users. Hypothesis testing using Structural Equation Modeling (SEM) showed that information privacy concern positively affects privacy protection behaviors in SNSs, and that perceived vulnerability to privacy risks positively affects information privacy concern.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_22-The_Impact_of_Privacy_Concerns_and_Perceived_Vulnerability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multimodal Firefly Optimization Algorithm Based on Coulomb’s Law</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070521</link>
        <id>10.14569/IJACSA.2016.070521</id>
        <doi>10.14569/IJACSA.2016.070521</doi>
        <lastModDate>2016-06-01T11:06:10.9600000+00:00</lastModDate>
        
        <creator>Taymaz Rahkar-Farshi</creator>
        
        <creator>Sara Behjat-Jamal</creator>
        
        <subject>Swarm Intelligence; multimodal firefly algorithm; multimodal optimization; firefly algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper, a multimodal firefly algorithm named the CFA (Coulomb Firefly Algorithm) is presented based on Coulomb’s law. The algorithm is able to find more than one optimum solution in the problem search space without requiring any additional parameters. In the proposed method, less bright fireflies are attracted to fireflies that are not only brighter but also, according to Coulomb’s law, exert the strongest attractive force. Approaching the end of the iterations, the fireflies&#39; motion steps are reduced, which finally yields more accurate results. Within a limited number of iterations, groups of fireflies gather around global and local optimal points. After the final iteration, the firefly with the highest fitness value survives and the rest are omitted. Experiments and comparisons on the CFA algorithm show that the proposed method is successful in solving multimodal optimization problems.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_21-A_Multimodal_Firefly_Optimization_Algorithm_Based_on_Coulombs_Law.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Accelerometer-Based Activity Recognition by Using Ensemble of Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070520</link>
        <id>10.14569/IJACSA.2016.070520</id>
        <doi>10.14569/IJACSA.2016.070520</doi>
        <lastModDate>2016-06-01T11:06:10.9300000+00:00</lastModDate>
        
        <creator>Tahani Daghistani</creator>
        
        <creator>Riyad Alshammari</creator>
        
        <subject>Activity Recognition; Sensors; Smart phones; accelerometer data; Data mining; Ensemble</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In line with the increasing use of sensors and health applications, considerable effort is devoted to processing the collected data, such as accelerometer data, to extract valuable information. This study proposes an activity recognition model that aims to detect activities by employing ensemble-of-classifiers techniques on the Wireless Sensor Data Mining (WISDM) dataset. The model recognizes six activities, namely walking, jogging, upstairs, downstairs, sitting, and standing. Many experiments were conducted to determine the best classifier combination for activity recognition. An improvement in performance is observed when the classifiers are combined compared to when they are used individually. An ensemble model is built using AdaBoost in combination with the decision tree algorithm C4.5. The model effectively enhances the performance, with an accuracy level of 94.04%.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_20-Improving_Accelerometer_Based_Activity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MMO: Multiply-Minus-One Rule for Detecting &amp; Ranking Positive and Negative Opinion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070519</link>
        <id>10.14569/IJACSA.2016.070519</id>
        <doi>10.14569/IJACSA.2016.070519</doi>
        <lastModDate>2016-06-01T11:06:10.8970000+00:00</lastModDate>
        
        <creator>Sheikh Muhammad Saqib</creator>
        
        <creator>Fazal Masud Kundi</creator>
        
        <subject>Sentiment Classification; Preprocessing; Text Mining; Sentiment Orientation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Sentiment classification of product reviews is a hot issue. Not only does the manufacturing company of the reviewed product make decisions about its quality, but customers’ purchases of the product are also based on the reviews. Instead of reading all the reviews one by one, different works have classified them as negative or positive after preprocessing. Suppose that out of 1000 reviews, 300 are negative and 700 are positive; as a whole, the product appears positive. Companies and customers may not be satisfied with this overall sentiment orientation. For companies, negative reviews should be separated with respect to different aspects and features, so that companies can enhance the features of the product; there is also a lot of work on aspect extraction followed by aspect-based sentiment analysis. Users, on the other hand, want the most positive and the most negative reviews so that they can decide whether to purchase a certain product. To consider the issue from the users’ perspective, the authors suggest a method, Multiply-Minus-One (MMO), which evaluates each review and computes a score based on positive, negative, intensifier, and negation words using the WordNet dictionary. Experiments on 4 types of product review datasets show that this method can achieve 86%, 83%, 83%, and 85% precision performance.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_19-MMO_Multiply_Minus_One_Rule_for_Detecting_Ranking_Positive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SSH Honeypot: Building, Deploying and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070518</link>
        <id>10.14569/IJACSA.2016.070518</id>
        <doi>10.14569/IJACSA.2016.070518</doi>
        <lastModDate>2016-06-01T11:06:10.8670000+00:00</lastModDate>
        
        <creator>Harry Doubleday</creator>
        
        <creator>Leandros Maglaras</creator>
        
        <creator>Helge Janicke</creator>
        
        <subject>SSH Honeypot; Cyber Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This article discusses the various techniques that can be used while developing a honeypot of any form, considering the advantages and disadvantages of these very different methods. The foremost aims are to cover the principles of the Secure Shell (SSH), how it can be useful, and, more importantly, how attackers can gain access to a system by using it. The work involved the development of multiple low-interaction honeypots, which make use of the highly documented libssh and even edit the source code of an already available SSH daemon. Finally, the aim is to combine the results with those of the widely distributed Kippo honeypot, in order to compare and contrast the results along with the usability and necessity of particular features, providing a clean and simple description that enables less knowledgeable users to create and deploy a honeypot of production quality, adding security advantages to their network instantaneously.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_18-SSH_Honeypot_Building_Deploying_and_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Factors of Subjective Voice Disorder Using Integrated Method of Decision Tree and Multi-Layer Perceptron Artificial Neural Network Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070517</link>
        <id>10.14569/IJACSA.2016.070517</id>
        <doi>10.14569/IJACSA.2016.070517</doi>
        <lastModDate>2016-06-01T11:06:10.8370000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Sunghyoun Cho</creator>
        
        <subject>Neural Networks; Subjective Voice Disorder; decision tree; risk factor; data-mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The aim of the present study was to develop a prediction model for subjective voice disorders based on an artificial neural network algorithm and a decision tree using national statistical data. Subjects of analysis were 8,713 adults over the age of 19 (3,801 males and 4,912 females) who completed the otolaryngological examination of the Korea National Health and Nutrition Examination Survey from 2010 to 2012. Explanatory variables included age, education level, income, occupation, problem drinking, coffee consumption, and pain and discomfort from disease over the last two weeks. A multi-layer perceptron artificial neural network and a decision tree model were used for the analysis. In this model, smoking, pain and discomfort from disease over the last two weeks, education level, occupation, and income were drawn out as major predictors of subjective voice disorders. In order to minimize the risk of dysphonia, it is necessary to establish a scientific management system for high-risk groups.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_17-The_Factors_of_Subjective_Voice_Disorder_Using_Integrated_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smoothness Measure for Image Fusion in Discrete Cosine Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070516</link>
        <id>10.14569/IJACSA.2016.070516</id>
        <doi>10.14569/IJACSA.2016.070516</doi>
        <lastModDate>2016-06-01T11:06:10.8200000+00:00</lastModDate>
        
        <creator>Radhika Vadhi</creator>
        
        <creator>Veera Swamy Kilari</creator>
        
        <creator>Srinivas Kumar Samayamantula</creator>
        
        <subject>smoothness; statistical measures; DCT; image fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>The aim of image fusion is to generate high-quality images using information from source images. The fused image contains more information than any of the source images. Image fusion using transforms is more effective than spatial methods. Statistical measures such as mean, contrast, and variance are used in the Discrete Cosine Transform (DCT) domain for image fusion. In this paper, we use statistical measures, such as the smoothness of a block in the transform domain, to select appropriate blocks from multiple images to obtain a fused image. Smoothness captures important blocks in images and duly eliminates noisy blocks. Furthermore, we compare and analyze all statistical measures in the DCT domain. Experimental results establish the superiority of our proposed method over state-of-the-art techniques for image fusion.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_16-Smoothness_Measure_for_Image_Fusion_in_Discrete_Cosine_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IAX-JINGLE Network Architectures Based-One/Two Translation Gateways</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070515</link>
        <id>10.14569/IJACSA.2016.070515</id>
        <doi>10.14569/IJACSA.2016.070515</doi>
        <lastModDate>2016-06-01T11:06:10.7900000+00:00</lastModDate>
        
        <creator>Hadeel Saleh Haj Aliwi</creator>
        
        <creator>Putra Sumari</creator>
        
        <subject>media conferencing; VoIP; interworking; translation gateway; IAX; Jingle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Nowadays, multimedia communication has improved rapidly, allowing people to communicate via the Internet. However, Internet users cannot communicate with each other unless they use the same chatting application, since each chatting application uses a certain signaling protocol to set up the media call. The interworking module is a very critical issue, since it solves the communication problems between any two protocols and enables people around the world to make a voice/video call even if they use different chatting applications. Providing interoperability between different signaling protocols and multimedia applications takes advantage of more than one protocol. Usually, each signaling protocol has its own messages, whose format differs from that of other signaling protocols. Thus, when two clients using different signaling protocols want to communicate by voice, the messages sent and received between them will not be understood, because the control and media packets in each protocol differ from the corresponding ones in the other protocol. The interworking module solves this kind of problem by matching the signaling and media messages through translation gateways placed between the two protocols. Many interworking modules have therefore been proposed to enable users of different protocols to chat with each other without difficulty. This paper compares two interworking modules between the Inter-Asterisk eXchange (IAX) protocol and the Jingle protocol. An experimental implementation in terms of session time is provided.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_15-IAX_JINGLE_Network_Architectures_Based_OneTwo_Translation_Gateways.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Framework with Advanced Study to Incorporate the Searching of E-Commerce Products Using Modernization of Database Queries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070514</link>
        <id>10.14569/IJACSA.2016.070514</id>
        <doi>10.14569/IJACSA.2016.070514</doi>
        <lastModDate>2016-06-01T11:06:10.7570000+00:00</lastModDate>
        
        <creator>Mohd Muntjir</creator>
        
        <creator>Ahmad Tasnim Siddiqui</creator>
        
        <subject>E-Commerce; Database; Queries; Integration; Database Queries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This study aims to inspect and evaluate the integration of database queries and their use in e-commerce product searches. E-commerce has been one of the most prominent trends to emerge in the business world over the past decade. It has gained tremendous popularity because it offers greater flexibility, cost efficiency, effectiveness, and convenience to both consumers and businesses. A large number of retail companies have adopted this technology in order to expand their operations across the globe; hence, they need highly responsive and integrated databases. In this regard, the approach of database queries is found to be the most appropriate and adequate technique, as it simplifies the search for e-commerce products.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_14-An_Enhanced_Framework_with_Advanced_Study_to_Incorporate_the_Searching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identify and Classify Critical Success Factor of Agile Software Development Methodology Using Mind Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070513</link>
        <id>10.14569/IJACSA.2016.070513</id>
        <doi>10.14569/IJACSA.2016.070513</doi>
        <lastModDate>2016-06-01T11:06:10.7270000+00:00</lastModDate>
        
        <creator>Tasneem Abd El Hameed</creator>
        
        <creator>Mahmoud Abd EL Latif</creator>
        
        <creator>Sherif Kholief</creator>
        
        <subject>Agile success factor; Agile principles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Selecting the right method, the right personnel, and the right practices, and applying them adequately, determines the success of software development. In this paper, a qualitative study is carried out on the critical success factors identified in previous studies. The success factors are matched with their related agile principles to illustrate the most valuable factors for the success of the agile approach. This paper also shows that the twelve agile principles are poorly identified for a few of the factors resulting from past qualitative and quantitative studies. Dimensions and factors are presented using a Critical Success Dimensions and Factors Mind Map model.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_13-Identify_and_Clasify_Critical_Success_Factor_of_Agile_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Enhanced Interior Gateway Routing Protocol (EIGRP) Over Open Shortest Path First (OSPF) Protocol with Opnet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070512</link>
        <id>10.14569/IJACSA.2016.070512</id>
        <doi>10.14569/IJACSA.2016.070512</doi>
        <lastModDate>2016-06-01T11:06:10.7100000+00:00</lastModDate>
        
        <creator>Anibrika Bright Selorm Kodzo</creator>
        
        <creator>Mustapha Adamu Mohammed</creator>
        
        <creator>Ashigbi Franklin Degadzor</creator>
        
        <creator>Michael Asante</creator>
        
        <subject>Routing; Protocol; Algorithm; Throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Due to the increasing accessibility of computers and mobile phones alike, routing has become indispensable in deciding how computers communicate, especially in modern computer communication networks. This paper presents a performance analysis of EIGRP and OSPF for real-time applications using the Optimized Network Engineering Tool (OPNET). In order to evaluate the performance of OSPF and EIGRP, three network models were designed, where the first, second, and third models are configured with OSPF, EIGRP, and a combination of EIGRP and OSPF, respectively. The proposed routing protocols were evaluated using quantitative metrics such as convergence time, jitter, end-to-end delay, throughput, and packet loss through the simulated network models. The evaluation results showed that the EIGRP protocol provides better performance than the OSPF routing protocol for real-time applications. By examining the results (convergence times in particular), the simulations of the various scenarios identified the routing protocol with the best performance for a large, realistic, and scalable network.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_12-Performance_Analysis_of_Enhanced_Interior_Gateway_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Carbon Break Even Analysis: Environmental Impact of Tablets in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070511</link>
        <id>10.14569/IJACSA.2016.070511</id>
        <doi>10.14569/IJACSA.2016.070511</doi>
        <lastModDate>2016-06-01T11:06:10.6800000+00:00</lastModDate>
        
        <creator>Fadi Safieddine</creator>
        
        <creator>Imad Nakhoul</creator>
        
        <subject>Environmental; Tablet; Higher Education; Carbon-footprint; Break-even Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>With the growing pace of tablet use and the attention it is attracting, especially in higher education, this paper looks at an important aspect of tablets: their carbon footprint. Studies have suggested that tablets have a positive impact on the environment, especially since tablets use less energy than laptops or desktops. Recent manufacturers’ reports on the carbon footprint of tablets have revealed that a significant portion, as much as 80%, comes from production and delivery rather than from the operational life-cycle of these devices, rendering some previous assumptions about the environmental impact of tablets questionable. This study sets out to answer a key question: what is the break-even point at which savings on printed paper offset the carbon footprint of producing and running a tablet in higher education? A review of the literature indicated several examples of tablet models and their carbon emission impact; this is compared to the environmental savings on paper that green courses could produce. The carbon break-even analysis shows that even for one of the most efficient and lowest-impact tablets available on the market, with a production carbon footprint of 153 kg CO2e, the break-even point is 81.5 months, that is, 6 years, 9 months, and 15 days of use. This exceeds the average tablet life-cycle of five years and the average degree duration of four years. While tablets still have the smallest carbon footprint compared to laptops and desktops, this study concludes that, to achieve the break-even point of carbon-neutral operation, manufacturers need to find more environmentally efficient ways of production that reduce the production carbon footprint to a level that does not exceed 112.8 kg CO2e.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_11-Carbon_Break_Even_Analysis_Environmental_Impact_of_Tablets_in_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identify and Manage the Software Requirements Volatility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070510</link>
        <id>10.14569/IJACSA.2016.070510</id>
        <doi>10.14569/IJACSA.2016.070510</doi>
        <lastModDate>2016-06-01T11:06:10.6500000+00:00</lastModDate>
        
        <creator>Khloud Abd Elwahab</creator>
        
        <creator>Mahmoud Abd EL Latif</creator>
        
        <creator>Sherif Kholeif</creator>
        
        <subject>software requirements; requirement errors; requirements volatility; reason for requirement changes and control changes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Managing software requirements volatility throughout the development life cycle is a very important activity. It helps the team to control significant impacts across the project (cost, time, and effort), and it keeps the project on track so as to finally satisfy the user, which is the main success criterion for a software project.
In this research paper, we analyse the root causes of requirements volatility through a proposed framework that presents the causes of requirements volatility and shows how to manage it during the software development life cycle.
Our proposed framework identifies requirement error types and causes of requirements volatility, and shows how to manage these volatilities in order to determine the necessary changes and take the right decision according to volatility measurements (priorities, status, and working hours). The framework contains four major phases (Elicitation and Analysis, Specification and Validation, Requirements Volatility Causes, and Change Management), each of which is explained in detail.
</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_10-Identify_and_Manage_the_Software_Requirements_Volatility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070509</link>
        <id>10.14569/IJACSA.2016.070509</id>
        <doi>10.14569/IJACSA.2016.070509</doi>
        <lastModDate>2016-06-01T11:06:10.6170000+00:00</lastModDate>
        
        <creator>Sahar A. El_Rahman</creator>
        
        <subject>Accuracy Assessment; ENVI; Hyperspectral Imaging; Parallelepiped Classifier; Spectral Angle Mapper; Supervised Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Hyperspectral Imaging (HSI) provides a wealth of information that can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agricultural areas that specialize in wheat growing and to obtain a classified image. In particular, the Parallelepiped and Spectral Angle Mapper (SAM) algorithms are used. They are implemented with ENVI (Environment for Visualizing Images), a software tool used to analyse and process geospatial images, and they are applied to Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms to the study area image was 66.67% for SAM classification and 33.33% for Parallelepiped classification. Therefore, the SAM algorithm provided a better classification of the study area image.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_9-Performance_of_Spectral_Angle_Mapper_and_Parallelepiped_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quizzes: Quiz Application Development Using Android-Based MIT APP Inventor Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070508</link>
        <id>10.14569/IJACSA.2016.070508</id>
        <doi>10.14569/IJACSA.2016.070508</doi>
        <lastModDate>2016-06-01T11:06:10.6000000+00:00</lastModDate>
        
        <creator>Muhammad Zubair Asghar</creator>
        
        <creator>Iqra Sana</creator>
        
        <creator>Khushboo Nasir</creator>
        
        <creator>Hina Iqbal</creator>
        
        <creator>Fazal Masud Kundi</creator>
        
        <creator>Sadia Ismail</creator>
        
        <subject>Quiz; Android; MIT App Inventor; Interviews and test preparation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This work deals with the development of an Android-based multiple-choice question examination system, namely Quizzes. The application is developed for educational purposes, allowing users to prepare multiple-choice questions for different examinations conducted at the provincial and national level. The main goal of the application is to enable users to practice for tests conducted for admissions and recruitment, with a focus on the Computer Science field. The quiz application includes three main modules: (i) computer science, (ii) verbal, and (iii) analytical. The computer science and verbal modules contain various sub-categories. The quiz includes three functions: (i) Hint, (ii) Skip, and (iii) Pause/life-lines, each of which can be used only once by a user. The app shows progress feedback during quiz play and displays the result at the end.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_8-Quizzes_Quiz_Application_Development_Using_Android.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reduced Switch Voltage Stress Class E Power Amplifier Using Harmonic Control Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070507</link>
        <id>10.14569/IJACSA.2016.070507</id>
        <doi>10.14569/IJACSA.2016.070507</doi>
        <lastModDate>2016-06-01T11:06:10.5700000+00:00</lastModDate>
        
        <creator>Ali Reza Zirak</creator>
        
        <creator>Sobhan Roshani</creator>
        
        <subject>class E power amplifier; harmonic control network (HCN); MOSFET drain Impedance; ZVS and ZVDS conditions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this paper, a harmonic control network (HCN) is presented to reduce the voltage stress (maximum MOSFET voltage) of the class E power amplifier (PA). Effects of the HCN on the amplifier specifications are investigated. The results show that the proposed HCN affects several specifications of the amplifier, such as drain voltage, switch current, output power capability (Cp factor), and drain impedance. The output power capability of the presented amplifier is also improved, compared with the conventional class E structure. High-voltage stress limits the design specifications of the desired amplifier. Therefore, several limitations can be removed with the reduced switch voltage. According to the results, the maximum drain voltage for the presented amplifier is reduced and subsequently, the output power capability is increased about 25% using the presented structure. Zero-voltage switching condition (ZVS) and zero-voltage derivative switching condition (ZVDS) are assumed in the design procedure. These two conditions are essential for high efficiency achievement in various classes of switching amplifiers. A class E PA with operating frequency of 1 MHz is designed and simulated using advanced design system (ADS) and PSpice software. The theory and simulated results are in good agreement.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_7-A_Reduced_Switch_Voltage_Stress_Class_E_Power_Amplifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Metrics Using UML Class Diagram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070506</link>
        <id>10.14569/IJACSA.2016.070506</id>
        <doi>10.14569/IJACSA.2016.070506</doi>
        <lastModDate>2016-06-01T11:06:10.5400000+00:00</lastModDate>
        
        <creator>Bhawana Mathur</creator>
        
        <creator>Manju Kaushik</creator>
        
        <subject>UML Class diagram; Maintainability; Object Oriented System; CK Metrics suite; Model; Software; UML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Many organizations assess the maintainability of software systems before they are set up. Object-oriented design is a critical strategy for producing quality program designs, and object-oriented measurements can be used to study the structure of a class diagram, in particular how models have been developed and portrayed in software evaluations. UML class diagram metrics support the maintainability of object-oriented software, which is assessed through the investigation of the association between object-oriented metrics and maintainability. This paper presents the results of a scientific evaluation of software maintainability prediction and metrics. The research focuses on the software quality attribute of maintainability, as opposed to the process of software maintenance. It also aims to find the vital correlation between structural complexity metrics and maintenance time. Several investigators have worked extensively in this area, obtained many theoretical outcomes, and subsequently established a chain of practical applications. Due to dynamic changes in object-oriented technology, the class diagram is today an essential UML model, and researchers must first understand the use of software in a scientific manner. It is an affordable strategy that has produced exceptional results in recent times. This paper relates UML class diagram metrics to a way of maintaining UML class diagram complexity weights. The qualities of a UML class diagram efficiently and technically indicate the complexity of object-oriented software. A more specific research study has shown that the technique is associated with the individual’s experience and can also be useful for improving software quality.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_6-Empirical_Analysis_of_Metrics_Using_UML_Class_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Face Recognition Using Eigenface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070505</link>
        <id>10.14569/IJACSA.2016.070505</id>
        <doi>10.14569/IJACSA.2016.070505</doi>
        <lastModDate>2016-06-01T11:06:10.5070000+00:00</lastModDate>
        
        <creator>Md. Al-Amin Bhuiyan</creator>
        
        <subject>Eigenvector; Eigenface; RMS Contrast Scaling; Face Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>This paper presents a face recognition system employing an eigenface-based approach. The principal objective of this research is to extract feature vectors from images and to reduce the dimensionality of the information. The method is implemented on frontal-view facial images of persons to explore a two-dimensional representation of facial images. The system employs an RMS (Root Mean Square) contrast scaling technique for pre-processing the images to compensate for poor lighting conditions. Experiments have been conducted using the Carnegie Mellon University database of human faces and the University of Essex Computer Vision Research Projects dataset. Experimental results indicate that the proposed eigenface-based approach can classify faces with an accuracy of more than 80% in all cases.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_5-Towards_Face_Recognition_Using_Eigenface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geographical Information System Based Approach to Monitor Epidemiological Disaster: 2011 Dengue Fever Outbreak in Punjab, Pakistan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070504</link>
        <id>10.14569/IJACSA.2016.070504</id>
        <doi>10.14569/IJACSA.2016.070504</doi>
        <lastModDate>2016-06-01T11:06:10.4770000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmad</creator>
        
        <creator>Muhammad Asif</creator>
        
        <creator>Muhammad Yasir</creator>
        
        <creator>Shahzad Nazir</creator>
        
        <creator>Muhammad Majid</creator>
        
        <creator>Muhammad Umar Chaudhry</creator>
        
        <subject>GIS; Dengue; Hemorrhagic fever; Aedes aegypti</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Epidemiological disaster management using geo-informatics (GIS) is an innovative field for rapid information gathering. Dengue fever, a vector-borne disease also known as break-bone fever, is a lethal re-emerging arboviral disease. Its endemic flow is causing serious effects on the economy and health at the global level. Even now, many under-developed and developing countries like Pakistan lack the GIS technologies necessary to monitor such health issues. The aim of this study is to enhance the disaster management capabilities of developing countries by using state-of-the-art technologies, which provide measures to relieve the disaster burden on public sector agencies. In this paper, temporal changes and the regional burden of the distribution of this disease are mapped using GIS tools. Such studies are widely used to provide effective help and relief in preventing the disaster burden. This study concludes that a public sector institute can use such tools for surveillance purposes and to identify risk areas for possible precautionary measures.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_4-Geographical_Information_System_Based_Approach_to_Monitor_Epidemiological.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classified Arabic Documents Using Semi-Supervised Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070503</link>
        <id>10.14569/IJACSA.2016.070503</id>
        <doi>10.14569/IJACSA.2016.070503</doi>
        <lastModDate>2016-06-01T11:06:10.4470000+00:00</lastModDate>
        
        <creator>Dr. Khalaf Khatatneh</creator>
        
        <subject>Arabic Language; Na&#239;ve Bayes; Classifier; Indexing; Stop word</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>In this work, we test the performance of the Na&#239;ve Bayes classifier in the categorization of Arabic text. Arabic is rich and unique in its own way and has its own distinct features. The issues and characteristics of the Arabic language are addressed in our study, and the classifier was modified and adjusted to fit the needs of the language. A vector of words and their frequencies is used to represent each document. We trained our classifier using both supervised and semi-supervised techniques in order to compare them and to see whether classification accuracy improves as a result of using the semi-supervised technique. Various experiments were performed, and the thoroughness of the classifier was measured using recall, precision, fallout, and error. The outcomes illustrate that semi-supervised learning can significantly enhance the classification accuracy of Arabic text.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_3-Classified_Arabic_Documents_Using_Semi_Supervised_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ADBT Frame Work as a Testing Technique: An Improvement in Comparison with Traditional Model Based Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070502</link>
        <id>10.14569/IJACSA.2016.070502</id>
        <doi>10.14569/IJACSA.2016.070502</doi>
        <lastModDate>2016-06-01T11:06:10.4130000+00:00</lastModDate>
        
        <creator>Mohammed Akour</creator>
        
        <creator>Bouchaib Falah</creator>
        
        <creator>Karima Kaddouri</creator>
        
        <subject>Activity Diagram; Black Box Testing; Finite State Machine; Model-Based Testing; Software Testing; Test Suite; Test Case; Use Case Diagram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Software testing is an embedded activity in all software development life cycle phases. Due to the difficulties and high costs of software testing, many testing techniques have been developed with the common goal of testing software in the most optimal and cost-effective manner. Model-based testing (MBT) is used to direct testing activities such as test verification and selection. MBT is employed to encapsulate and understand the behavior of the system under test, which supports and helps software engineers to validate the system with various likely actions. The widespread usage of models has influenced the usage of MBT in the testing process, especially with UML. In this research, we propose an improved model-based testing strategy that involves four different diagrams in the testing process. This paper also discusses and explains the activities in the proposed model with the finite state machine (FSM). Comparisons with traditional model-based testing have been made in terms of test case generation and results.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_2-ADBT_Frame_Work_as_a_Testing_Technique_An_Improvement_in_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Method to Predict Success of Dental Implants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070501</link>
        <id>10.14569/IJACSA.2016.070501</id>
        <doi>10.14569/IJACSA.2016.070501</doi>
        <lastModDate>2016-06-01T11:06:10.2900000+00:00</lastModDate>
        
        <creator>Reyhaneh Sadat Moayeri</creator>
        
        <creator>Mehdi Khalili</creator>
        
        <creator>Mahsa Nazari</creator>
        
        <subject>Data Mining; Dental Implant; W-J48; Neural Network; K-NN; Na&#239;ve Bayes; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(5), 2016</description>
        <description>Background/Objectives: The market demand for dental implants is growing at a significant pace. Results obtained from real cases show that some dental implants do not lead to success. Hence, the main problem is whether machine learning techniques can successfully predict the success of dental implants.
Methods/Statistical Analysis: This paper presents a combined predictive model to evaluate the success of dental implants. The classifiers used in this model are W-J48, SVM, Neural Network, K-NN and Na&#239;ve Bayes. All internal parameters of each classifier are optimized. These classifiers are combined in a way that yields the highest possible accuracies.
Results: The performance of the proposed method is compared with that of single classifiers. Results of our study show that the combinative approach can achieve higher performance than the best of the single classifiers. Using the combinative approach improves the sensitivity indicator by up to 13.3%.
Conclusion/Application: Since diagnosing patients whose implants will not succeed is very important in implant surgery, the presented model can help surgeons make a more reliable decision on the level of success of an implant operation prior to surgery.</description>
        <description>http://thesai.org/Downloads/Volume7No5/Paper_1-A_Hybrid_Method_to_Predict_Success_of_Dental_Implants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Factor Analysis Based Selections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050507</link>
        <id>10.14569/IJARAI.2016.050507</id>
        <doi>10.14569/IJARAI.2016.050507</doi>
        <lastModDate>2016-05-10T12:06:46.0730000+00:00</lastModDate>
        
        <creator>Sylvia Encheva</creator>
        
        <subject>Boolean factor analysis; Formal concept analysis; Belnap’s logic</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>Merger in higher education has been of scholarly interest to researchers in various fields. This work is devoted to challenges related to partner selection for a feasible merger. A systematic approach is proposed, based on describing educational organizations via several predefined key numbers on the one hand and their expectations on the other. Methods from Boolean factor analysis, formal concept analysis, and Belnap&#8217;s logic are then employed in an attempt to draw meaningful conclusions.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_7-Factor_Analysis_Based_Selections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brainstorming Versus Arguments Structuring in Online Forums</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050506</link>
        <id>10.14569/IJARAI.2016.050506</id>
        <doi>10.14569/IJARAI.2016.050506</doi>
        <lastModDate>2016-05-10T12:06:46.0600000+00:00</lastModDate>
        
        <creator>Abdulrahman Alqahtani</creator>
        
        <creator>Marius Silaghi</creator>
        
        <subject>Knowledge Representation, Threading Models for Ar-guments in Electronic Debates, Threading Model Classification, Debate Threading Model, Comparison Online news Platforms</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>We characterize electronic discussion forums as being of one of two types: Brainstorming Forums and Arguments Structuring Forums. In this work we analyze and classify the types of threading models occurring as a function of the type of forum. For our analysis we study forums attached to the 25 news sources most used by the aggregator Google News, as detected by a 2007 study. Most discussion forums associated with articles on these news sources seem to be designed not with the purpose of structuring arguments but mainly with the purpose of helping readers easily brainstorm their reactions to the corresponding news item. The forums were classified according to the user-supported metadata they gather and use in comment presentation.
We compare the features observed for brainstorming forums, as learned via the aforementioned procedure, with the features of dedicated argument structuring forums. The argument structuring forums used as the basis of the comparison are: YourView, DebateDecide, and Opinion Space. We notice significant differences in the obtained models for the two types of forums, as well as significant differences with respect to the structuring of user-submitted data in polls associated with major news channels.
We believe this is the first work that deals with this issue.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_6-Brainstorming_Versus_Arguments_Structuring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Mobile Version of the Predicted Energy Efficient Bee-Inspired Routing (PEEBR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050505</link>
        <id>10.14569/IJARAI.2016.050505</id>
        <doi>10.14569/IJARAI.2016.050505</doi>
        <lastModDate>2016-05-10T12:06:46.0270000+00:00</lastModDate>
        
        <creator>Imane M. A. Fahmy</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <creator>Laila Nassef</creator>
        
        <subject>PEEBR; PEEBR-1; PEEBR-2; Energy Efficient Routing; Bee-inspired; Artificial Bee Colony (ABC) optimization; Random Mobility Model</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>In this paper, the previously proposed Predictive Energy Efficient Bee-inspired Routing (PEEBR) family of routing optimization algorithms, based on the Artificial Bee Colony (ABC) optimization model, is extended from the static mobility model employed by its first version (PEEBR-1) to a random mobility model in its second version (PEEBR-2). This random mobility model used by the PEEBR-2 algorithm is proposed and described. PEEBR-2 was then simulated in order to compare its performance with the first version (PEEBR-1) in terms of predicted optimal path energy consumption, residual battery power of nodes, and fitness.
The simulation results show that PEEBR-2&#8217;s optimal path is predicted to consume less energy and to realize higher fitness. On the other hand, the nodes on PEEBR-1&#8217;s optimal paths possess higher residual battery power. Finally, the impact of mobile node speeds was studied for PEEBR-2 in terms of the optimal path&#8217;s predicted energy consumption and the residual battery power of path nodes, showing performance stability relative to node mobility speed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_5-The_Mobile_Version_of_the_Predicted_Energy_Efficient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study on Cloud Parameter Estimation Among GOSAT/CAI, MODIS, CALIPSO/CALIOP and Landsat-8/OLI with Laser Radar: Lidar as Truth Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050504</link>
        <id>10.14569/IJARAI.2016.050504</id>
        <doi>10.14569/IJARAI.2016.050504</doi>
        <lastModDate>2016-05-10T12:06:45.9970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masanori Sakashita</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Shuji Kawakami</creator>
        
        <creator>Kei Shiomi</creator>
        
        <creator>Hirofumi Ohyama</creator>
        
        <subject>Cirrus cloud; GOSAT/CAI; Landsat; LiDAR; Sky view camera; CALIPSO/CALIOP; topogramphic representation of 3D clouds</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>A comparative study on cloud parameter estimation among GOSAT/CAI, MODIS, CALIPSO/CALIOP and Landsat-8/OLI is carried out using laser radar (Lidar) as truth data. The cloud parameters are optical depth, size distribution, and cirrus cloud type. In particular, cirrus cloud detection is a tough issue: a 1.38 &#181;m channel is required for its detection. Although MODIS and Landsat-8/OLI have such a channel, the other mission instruments, CAI and CALIPSO/CALIOP, do not. Ground-based Lidar is used as the truth data for cloud parameters in this comparative study. From the Lidar, the backscattered echo signal and depolarization coefficient are obtained as a function of altitude; therefore, cloud type and vertical profile can be derived from the Lidar data. CALIPSO/CALIOP is a satellite-based Lidar which allows observation of clouds from space. Although the directions of laser light emission of CALIPSO/CALIOP and the ground-based Lidar differ, their principles are the same. Therefore, it is expected that cloud parameters derived from CALIPSO/CALIOP data are similar to those derived from the ground-based Lidar data. The experimental results confirm this and are useful for improving cloud parameter estimation accuracy with several sensor data combinations.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_4-Comparative_Study_on_Cloud_Parameter_Estimation_Among.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Reducing the Number of Wild Animal Monitors by Means of Kriging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050503</link>
        <id>10.14569/IJARAI.2016.050503</id>
        <doi>10.14569/IJARAI.2016.050503</doi>
        <lastModDate>2016-05-10T12:06:45.9800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takashi Higuchi</creator>
        
        <subject>Kriging; Variogram; Semi-Variogram; Wild animal; Wild pig</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>A method for reducing the number of wild animal monitors by means of Kriging is proposed. Through simulations of wild animal routes on a 128 by 128 cell grid, the required number of wild animal monitors is clarified. It is found that the number of wild animal monitors can be reduced based on Kriging by using variograms and semi-variograms among the neighboring monitors. It is also found that the proposed method reduces the number of wild animal monitors by a certain factor.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_3-Method_for_Reducing_the_Number_of_Wild_Animal_Monitors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outlier-Tolerance RML Identification of Parameters in CAR Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050502</link>
        <id>10.14569/IJARAI.2016.050502</id>
        <doi>10.14569/IJARAI.2016.050502</doi>
        <lastModDate>2016-05-10T12:06:45.9500000+00:00</lastModDate>
        
        <creator>Hong Teng-teng</creator>
        
        <creator>Hu Shaolin</creator>
        
        <subject>recursive maximum likelihood identification; parameter identification; outliers; outlier-tolerance identification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>Measured data inevitably contain abnormal data, even under normal operating conditions. Most existing algorithms, such as least squares identification and maximum likelihood estimation, are easily affected by abnormal data and exhibit large identification deviations. How to reduce the sensitivity of existing algorithms to abnormal data, or how to build a new parameter identification algorithm with outlier-tolerance, is a difficult task that needs to be addressed in applications of system identification technology. In this paper, the sensitivity of recursive maximum likelihood (RML) identification to sampled abnormal data is analyzed, and an improved algorithm for the CAR process is established to strengthen the outlier-tolerance of RML identification when there are outliers in the sampling series. The improved algorithm not only effectively inhibits the negative impact of the abnormal data but also effectively improves the quality of the parameter identification results. Simulations given in this paper show that the improved RML algorithm has strong outlier-tolerance. The research results are relevant to engineering control, signal processing, industrial automation, aerospace, and other fields.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_2-Outlier_Tolerance_RML_Identification_of_Parameters_in_CAR_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter Optimization for Nadaraya-Watson Kernel Regression Method with Small Samples</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050501</link>
        <id>10.14569/IJARAI.2016.050501</id>
        <doi>10.14569/IJARAI.2016.050501</doi>
        <lastModDate>2016-05-10T12:06:45.8730000+00:00</lastModDate>
        
        <creator>Li Fengping</creator>
        
        <creator>Zhou Yuqing</creator>
        
        <creator>Xue Wei</creator>
        
        <subject>small samples regression; Nadaraya-Watson kernel regression; parameter optimization; loss function; cross validation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(5), 2016</description>
        <description>Many current regression algorithms have unsatisfactory prediction accuracy with small samples. To solve this problem, a regression algorithm based on Nadaraya-Watson kernel regression (NWKR) is proposed. The proposed method advocates parameter selection directly from the standard deviation of the training data, optimized with leave-one-out cross-validation (LOO-CV). Good generalization performance of the proposed parameter selection is demonstrated empirically using small-sample regression problems with Gaussian noise. The results show that the proposed parameter optimization method is more robust and accurate than other methods for different noise levels and different sample sizes, and indicate the importance of Vapnik&#8217;s &#949;-insensitive loss for regression problems with small samples.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No5/Paper_1-Parameter_Optimization_for_Nadaraya_Watson_Kernel_Regression_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Broadcast Scheme DSR-based Mobile Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070473</link>
        <id>10.14569/IJACSA.2016.070473</id>
        <doi>10.14569/IJACSA.2016.070473</doi>
        <lastModDate>2016-05-02T11:24:42.6600000+00:00</lastModDate>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>Ahmed Y. Al-Dubai</creator>
        
        <subject>a dynamic counter based; Broadcasting; DSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Traffic classification seeks to assign packet flows to an appropriate quality of service (QoS). Despite many studies that have placed a lot of emphasis on broadcast communication, broadcasting in MANETs is still a problematic issue. Due to the absence of a fixed infrastructure in MANETs, broadcast is an essential operation for all network nodes. Although blind flooding is the simplest broadcasting technique, it is inefficient and lacks resource utilization efficiency. One of the schemes proposed to mitigate the deficiency of blind flooding is the counter-based broadcast scheme, which depends on the number of duplicate packets a node receives from neighbors that have previously re-broadcasted the packet. Because existing counter-based schemes mainly rely on a fixed counter threshold, they are not efficient under different operating conditions. Thus, unlike existing studies, this paper proposes a dynamic counter-based threshold value and examines its effectiveness under the Dynamic Source Routing (DSR) protocol, one of the well-known on-demand routing protocols. Specifically, we develop a new counter-based broadcast algorithm under the umbrella of DSR, namely Inspired Counter Based Broadcasting (DSR-ICB). Using various simulation experiments, DSR-ICB has shown good performance, especially in terms of delay and the number of redundant packets.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_73-A_Novel_Broadcast_Scheme_DSR_based_Mobile_Ad_hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Methodology for Ontology Development in Lesson Plan Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070472</link>
        <id>10.14569/IJACSA.2016.070472</id>
        <doi>10.14569/IJACSA.2016.070472</doi>
        <lastModDate>2016-05-02T11:24:42.6470000+00:00</lastModDate>
        
        <creator>Aslina Saad</creator>
        
        <creator>Shahnita Shaharin</creator>
        
        <subject>knowledge representation; methodology; ontology development; lesson plan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Ontology has been recognized as a knowledge representation mechanism that supports semantic web applications. A semantic web application that supports lesson plan construction is crucial for teachers dealing with the massive information sources from various domains on the web. Knowledge in the lesson plan domain therefore needs to be represented appropriately so that a web search retrieves only relevant materials; such retrieval requires an appropriate representation of the domain problem. The emergence of semantic web technology provides a promising solution to improve the representation, sharing, and re-use of information to support decision making. This paper presents a new methodology for developing an ontology representation of the lesson plan domain to support semantic web applications. The methodology focuses on the important models, tools, and techniques in each phase of development. It consists of four phases, namely requirements analysis, development, implementation, and evaluation and maintenance.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_72-The_Methodology_for_Ontology_Development_in_Lesson_Plan_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic-Based Task Scheduling Algorithm in Cloud Computing Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070471</link>
        <id>10.14569/IJACSA.2016.070471</id>
        <doi>10.14569/IJACSA.2016.070471</doi>
        <lastModDate>2016-05-02T11:24:42.5830000+00:00</lastModDate>
        
        <creator>Safwat A. Hamad</creator>
        
        <creator>Fatma A. Omara</creator>
        
        <subject>Cloud computing; Task Scheduling; Genetic    Algorithm; Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Nowadays, Cloud computing is widely used in companies and enterprises. However, there are some challenges in using Cloud computing. The main challenge is resource management, where Cloud computing provides IT resources (e.g., CPU, Memory, Network, Storage, etc.) based on virtualization concept and pay-as-you-go principle. The management of these resources has been a topic of much research. In this paper, a task scheduling algorithm based on Genetic Algorithm (GA) has been introduced for allocating and executing an application’s tasks. The aim of this proposed algorithm is to minimize the completion time and cost of tasks, and maximize resource utilization. The performance of this proposed algorithm has been evaluated using CloudSim toolkit.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_71-Genetic_Based_Task_Scheduling_Algorithm_in_Cloud_Computing_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploring the Potential of Mobile Crowdsourcing in the Sharing of Information on Items Prices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070470</link>
        <id>10.14569/IJACSA.2016.070470</id>
        <doi>10.14569/IJACSA.2016.070470</doi>
        <lastModDate>2016-05-01T19:05:15.7630000+00:00</lastModDate>
        
        <creator>Hazleen Aris</creator>
        
        <creator>Marina Md Din</creator>
        
        <subject>Mobile crowdsourcing; Price comparison; Crowdsourcing potential; Crowdsourcing survey</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This article presents the results of a survey performed to identify the potential of using mobile crowdsourcing as a means to exchange information on the prices of household items at local stores, from the consumers&#8217; point of view. The potential was identified from four perspectives: mobile device capability, internet usage pattern, supporting infrastructure, and readiness towards information sharing. Survey questionnaires comprising 18 quantitative questions were distributed to 138 respondents in hardcopy and online softcopy forms over a one-month period in May 2014. Collected data were analysed using descriptive statistics and correlation analysis methods. Findings from the analyses showed that the potential of using mobile crowdsourcing to share information on item prices is high, as seen from the perspectives of mobile device capability and supporting infrastructure. The internet usage pattern of the consumers as well as their attitude towards information sharing also support this potential. To the best of our knowledge, this is the first study to gather statistical data on the potential of using mobile crowdsourcing for sharing information on item prices. Such potential is usually assumed based on informal observation of the prevalence of mobile devices and their widespread use, and is not supported by empirical data. This study is of value to the broader research communities currently engaged in mobile crowdsourcing research for consumers&#8217; benefit.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_70-Exploring_the_Potential_of_Mobile_Crowdsourcing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multilingual Artificial Text Extraction and Script Identification from Video Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070469</link>
        <id>10.14569/IJACSA.2016.070469</id>
        <doi>10.14569/IJACSA.2016.070469</doi>
        <lastModDate>2016-05-01T19:05:15.7300000+00:00</lastModDate>
        
        <creator>Akhtar Jamil</creator>
        
        <creator>Azra Batool</creator>
        
        <creator>Zumra Malik</creator>
        
        <creator>Ali Mirza</creator>
        
        <creator>Imran Siddiqi</creator>
        
        <subject>Multilingual Text Detection; Video Images; Script Recognition; Artificial Neural Networks; Local Binary Patterns.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This work presents a system for extraction and script identification of multilingual artificial text appearing in video images. As opposed to most of the existing text extraction systems which target textual occurrences in a particular script or language, we have proposed a generic multilingual text extraction system that relies on a combination of unsupervised and supervised techniques. The unsupervised approach is based on application of image analysis techniques which exploit the contrast, alignment and geometrical properties of text and identify candidate text regions in an image. Potential text regions are then validated by an Artificial Neural Network (ANN) using a set of features computed from Gray Level Co-occurrence Matrices (GLCM). The script of the extracted text is finally identified using texture features based on Local Binary Patterns (LBP). The proposed system was evaluated on video images containing textual occurrences in five different languages including English, Urdu, Hindi, Chinese and Arabic. The promising results of the experimental evaluations validate the effectiveness of the proposed system for text extraction and script identification.
</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_69-Multilingual_Artificial_Text_Extraction_and_Script.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iterative Threshold Decoding Of High Rates Quasi-Cyclic OSMLD Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070468</link>
        <id>10.14569/IJACSA.2016.070468</id>
        <doi>10.14569/IJACSA.2016.070468</doi>
        <lastModDate>2016-05-01T19:05:15.7000000+00:00</lastModDate>
        
        <creator>Karim Rkizat</creator>
        
        <creator>Anouar Yatribi</creator>
        
        <creator>Mohammed Lahmer</creator>
        
        <creator>Mostafa Belkasmi</creator>
        
        <subject>Iterative threshold decoding; Quasi-Cyclic codes; OSMLD codes; Majority logic decoding; Steiner Triple System; BIBD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Majority logic decoding (MLD) codes are very powerful thanks to the simplicity of the decoder. Nevertheless, finding constructive families of these codes has been recognized to be hard. Moreover, most known MLD codes are cyclic, which limits the range of achievable rates. In this paper a new adaptation of the iterative threshold decoding algorithm is considered for decoding Quasi-Cyclic One-Step Majority Logic Decodable (QC-OSMLD) codes of high rates. We present the construction of QC-OSMLD codes of rate 1/2 based on Singer difference sets, and codes of high rates based on Steiner triple systems, which allows a large choice of codes with different lengths and rates. The performance of this algorithm for decoding these codes on both the Additive White Gaussian Noise (AWGN) channel and the Rayleigh fading channel is investigated, to check its applicability in wireless environments.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_68-Iterative_Threshold_Decoding_Of_High_Rates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Credit Scorecard Modeling Through Applying Text Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070467</link>
        <id>10.14569/IJACSA.2016.070467</id>
        <doi>10.14569/IJACSA.2016.070467</doi>
        <lastModDate>2016-05-01T19:05:15.6830000+00:00</lastModDate>
        
        <creator>Omar Ghailan</creator>
        
        <creator>Hoda M.O. Mokhtar</creator>
        
        <creator>Osman Hegazy</creator>
        
        <subject>Credit Scoring; Textual Data Analysis; Logistic Regression; Loan Default.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In credit card scoring and loan management, the prediction of an applicant&#8217;s future behavior is an important decision support tool and a key factor in reducing the risk of loan default. Many data mining and classification approaches have been developed for the credit scoring purpose. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard modeling technique that improves credit scorecard modeling through employing textual data analysis. This study uses a sample of loan application forms from a financial institution providing loan services in Yemen, which represents a real-world situation of credit scoring and loan management. The sample contains a set of Arabic textual data attributes describing the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through a comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding textual attribute analysis achieves higher classification effectiveness and outperforms the other traditional numerical data analysis techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_67-Improving_Credit_Scorecard_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of IP Addresses Localization on the Internet Dynamics Measurement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070466</link>
        <id>10.14569/IJACSA.2016.070466</id>
        <doi>10.14569/IJACSA.2016.070466</doi>
        <lastModDate>2016-05-01T19:05:15.6530000+00:00</lastModDate>
        
        <creator>Tounwendyam Frederic Ouedraogo</creator>
        
        <creator>Tonguim Ferdinand Guinko</creator>
        
        <subject>Networks; Internet; Dynamics; Measurement; Localization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Many projects have sought to measure the dynamics of the Internet by using end-to-end measurement tools. The RADAR tool has been designed in this context. It consists of periodically tracing the routes from a monitor toward a set of destinations: IP addresses chosen randomly on the Internet. However, the localization of these destinations on the topology has a significant influence on the observed dynamics. We study the dynamics observed when the destinations are localized at a country scale. We show that this localization may lead to observing different dynamics. The local dynamics observed in our case are mainly routing dynamics, whereas load balancing dominates the dynamics of the Internet as a whole.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_66-Impact_of_IP_Addresses_Localization_on_the_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070465</link>
        <id>10.14569/IJACSA.2016.070465</id>
        <doi>10.14569/IJACSA.2016.070465</doi>
        <lastModDate>2016-05-01T19:05:15.6200000+00:00</lastModDate>
        
        <creator>Alaa F. Sheta</creator>
        
        <creator>Amal Abdel-Raouf</creator>
        
        <subject>Software Reliability; Reliability Growth Models; Grey Wolf Optimizer; Exponential Model; Power Model; Delayed S-Shaped Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper, we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM’s parameters with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM), and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_65-Estimating_the_Parameters_of_Software_Reliability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Security, Privacy, Availability and Integrity in Cloud Computing: Issues and Current Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070464</link>
        <id>10.14569/IJACSA.2016.070464</id>
        <doi>10.14569/IJACSA.2016.070464</doi>
        <lastModDate>2016-05-01T19:05:15.6070000+00:00</lastModDate>
        
        <creator>Sultan Aldossary</creator>
        
        <creator>William Allen</creator>
        
        <subject>Data security; Data Confidentiality; Data Privacy; Cloud Computing; Cloud Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Cloud computing has changed the world around us. People are now moving their data to the cloud since data is getting bigger and needs to be accessible from many devices. Therefore, storing data in the cloud has become the norm. However, there are many issues affecting data stored in the cloud, ranging from the virtual machine, which is the means of sharing resources in the cloud, to issues with the cloud storage itself. In this paper, we present the issues that are preventing people from adopting the cloud and give a survey of the solutions that have been proposed to minimize the risks of these issues. For example, the data stored in the cloud needs to remain confidential, preserve its integrity, and be available. Moreover, sharing the data stored in the cloud among many users is still an issue, since the cloud service provider cannot be trusted to manage authentication and authorization. In this paper, we list issues related to data stored in cloud storage and solutions to those issues, which differs from other papers that focus on the cloud in general.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_64-Data_Security_Privacy_Availability_and_Integrity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Containing a Confused Deputy on x86: A Survey of Privilege Escalation Mitigation Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070463</link>
        <id>10.14569/IJACSA.2016.070463</id>
        <doi>10.14569/IJACSA.2016.070463</doi>
        <lastModDate>2016-05-01T19:05:15.5730000+00:00</lastModDate>
        
        <creator>Scott Brookes</creator>
        
        <creator>Stephen Taylor</creator>
        
        <subject>Protection &amp; Security; Virtualization; Kernel ROP; ret2usr; Kernel Code Implant; rootkits; Operating Systems; Privilege Escalation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The weak separation between user- and kernel-space in modern operating systems facilitates several forms of privilege escalation. This paper provides a survey of protection techniques, both cutting-edge and time-tested, used to prevent common privilege escalation attacks. The techniques are compared against each other in terms of their effectiveness, their performance impact, the complexity of their implementation, and their impact on diversification techniques such as ASLR. Overall, the literature provides a litany of disjoint techniques, each of which trades some performance cost for effectiveness against a particular isolated threat. No single technique was found to effectively mitigate all known and potential attack vectors with a reasonable performance overhead.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_63-Containing_a_Confused_Deputy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computational Intelligence Optimization Algorithm Based on Meta-heuristic Social-Spider: Case Study on CT Liver Tumor Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070462</link>
        <id>10.14569/IJACSA.2016.070462</id>
        <doi>10.14569/IJACSA.2016.070462</doi>
        <lastModDate>2016-05-01T19:05:15.5430000+00:00</lastModDate>
        
        <creator>Mohamed Abu ElSoud</creator>
        
        <creator>Ahmed M. Anter</creator>
        
        <subject>Liver; CT; Social-Spider Optimization; Metaheuristics; Support Vector Machine; Random Selection Features; Classification; Sequential Forward Floating Search; Optimization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Feature selection is an important step in the classification phase and directly affects classification performance. A feature selection algorithm explores the data to eliminate noisy, redundant, and irrelevant data and to optimize classification performance. This paper addresses a new subset feature selection performed by a new Social Spider Optimizer Algorithm (SSOA) to find optimal regions of the complex search space through the interaction of individuals in the population. SSOA is a new nature-inspired meta-heuristic computation algorithm which mimics the behavior of cooperative social spiders based on the biological laws of the cooperative colony. Different combinatorial sets of extracted features are obtained from different methods in order to keep and achieve optimal accuracy. A normalization function is applied to smooth features into the range [0,1] and decrease the gap between features. When SSOA-based feature selection and reduction is compared with other methods on a CT liver tumor dataset, the proposed approach proves better performance in both feature size reduction and classification accuracy. Improvements are observed consistently among 4 classification methods. A theoretical analysis that models the number of correctly classified data is proposed using the confusion matrix, precision, recall, and accuracy. The achieved accuracy is 99.27%, precision is 99.37%, and recall is 99.19%. The results show that the mechanism of SSOA provides very good exploration, exploitation, and local-minima avoidance.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_62-Computational_Intelligence_Optimization_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Answer Extraction System Based on Latent Dirichlet Allocation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070461</link>
        <id>10.14569/IJACSA.2016.070461</id>
        <doi>10.14569/IJACSA.2016.070461</doi>
        <lastModDate>2016-05-01T19:05:15.5270000+00:00</lastModDate>
        
        <creator>Mohammed A. S. Ali</creator>
        
        <creator>Sherif M. Abdou</creator>
        
        <subject>Question Answering; frequently asked questions; information retrieval; artificial intelligence;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The Question Answering (QA) task is still an active area of research in information retrieval. A variety of methods proposed in the literature during the last few decades to solve this task have achieved mixed success. However, such methods developed for the Arabic language are scarce and do not have a good performance record, due to the challenges of the Arabic language. QA based on Frequently Asked Questions is an important branch of QA in which a question is answered based on pre-answered ones. In this paper, the aim is to build a question answering system that responds to a user inquiry based on pre-answered questions. The proposed approach is based on Latent Dirichlet Allocation. First, the dataset, consisting of pairs of questions and associated answers, is grouped into several clusters of related documents. Then, when a new question is posed to the system, it assigns this question to its appropriate cluster and uses a similarity measure to get the top ten closest possible answers. Preliminary results show that the proposed method achieves a good level of performance.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_61-Answer_Extraction_System_Based_on_Latent_Dirichlet_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey On Interactivity in Topic Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070460</link>
        <id>10.14569/IJACSA.2016.070460</id>
        <doi>10.14569/IJACSA.2016.070460</doi>
        <lastModDate>2016-05-01T19:05:15.4970000+00:00</lastModDate>
        
        <creator>Patrik Ehrencrona Kjellin</creator>
        
        <creator>Yan Liu</creator>
        
        <subject>topic model; latent dirichlet allocation; LDA; interactive; visualisation; IVA; survey; review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Making sense of and gaining deeper insight from large sets of data is becoming a task central to computer science in general. Topic models, capable of uncovering the semantic themes pervading large collections of documents, have seen a surge in popularity in recent years. However, topic models are high-level statistical tools; their output is given in terms of probability distributions, suited neither for simple interpretation nor for deep analysis. Interpreting the fitted topic models in an intuitive manner requires visual and interactive tools. Additionally, some measure of human interaction is typically required for refining the output offered by such models. In the research literature, this area remains relatively unexplored; only recently has this aspect been receiving more attention. In this paper, the literature is surveyed as it pertains to interactivity and visualisation within the context of topic models, with the goal of identifying current research trends in this area.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_60-A_Survey_On_Interactivity_in_Topic_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Frequency Based Hierarchical Fast Search Block Matching Algorithm for Fast Video Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070459</link>
        <id>10.14569/IJACSA.2016.070459</id>
        <doi>10.14569/IJACSA.2016.070459</doi>
        <lastModDate>2016-05-01T19:05:15.4670000+00:00</lastModDate>
        
        <creator>Nijad Al-Najdawi</creator>
        
        <creator>Sara Tedmori</creator>
        
        <creator>Omar A. Alzubi</creator>
        
        <creator>Osama Dorgham</creator>
        
        <creator>Jafar A. Alzubi</creator>
        
        <subject>Video coding; Frequency domain; Motion estimation; Hierarchical search; Block matching; Communication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Numerous fast-search block motion estimation algorithms have been developed to circumvent the high computational cost required by the full-search algorithm. These techniques, however, often converge to a local minimum, which makes them subject to noise and matching errors. Hence, many spatial-domain block matching algorithms have been developed in the literature. These algorithms exploit the high correlation that exists between pixels inside each frame block. However, with the block’s transformed frequencies, block matching can be used to test the similarities between a subset of selected frequencies that correctly identify each block uniquely; therefore, fewer comparisons are performed, resulting in a considerable reduction in complexity. In this work, a two-level hierarchical fast-search motion estimation algorithm in the frequency domain is proposed. This algorithm incorporates a novel search pattern at the top level of the hierarchy. The proposed hierarchical method for motion estimation not only produces consistent motion vectors within each large object, but also accurately estimates the motion of small objects with a substantial reduction in complexity when compared to other benchmark algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_59-A_Frequency_Based_Hierarchical_Fast_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An approach of inertia compensation based on electromagnetic induction in brake test</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070458</link>
        <id>10.14569/IJACSA.2016.070458</id>
        <doi>10.14569/IJACSA.2016.070458</doi>
        <lastModDate>2016-05-01T19:05:15.4330000+00:00</lastModDate>
        
        <creator>Xiaowen Li</creator>
        
        <creator>Han Que</creator>
        
        <subject>Brake test; Electromagnetic induction; DC trans-former</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper briefly introduces the operational principle of the brake test bench and points out the shortcomings of current control in the brake test, namely that the reference measurement data are instantaneous. To address this deficiency, a current control model based on electromagnetic induction and DC voltage is proposed. On the principle of electromagnetic induction, continuous data collection and automatic processing are realized. This significantly minimizes the errors owing to instantaneous data and maximizes the accuracy of the brake test.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_58-A_approach_of_inertia_compensation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Localization and Monitoringo of Public Transport Services Based on Zigbee</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070457</link>
        <id>10.14569/IJACSA.2016.070457</id>
        <doi>10.14569/IJACSA.2016.070457</doi>
        <lastModDate>2016-05-01T19:05:15.4200000+00:00</lastModDate>
        
        <creator>Izet Jagodic</creator>
        
        <creator>Suad Kasapovic</creator>
        
        <creator>Amir Hadzimehmedovic</creator>
        
        <creator>Lejla Banjanovic-Mehmedovic</creator>
        
        <subject>Wireless mesh network; Zigbee; Xbee; microcontroller; web development; integration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Regular and systematic public transport is of great importance to all residents in any country, both in the city and on commuter routes. In our environment, users of public transport can track the movement of vehicles only with great difficulty, given that the current system does not meet the necessary criteria and does not comply with the functioning of the transport system. The aim of this paper is to show the development of such a system using the ZigBee and Arduino platforms. This paper shows an example of using the technologies mentioned above and their main advantages and disadvantages, with emphasis on communication between the devices and its smooth progress. In order to show the way in which the system could function, a simple mesh network was created, consisting of a coordinator, routers for data distribution, and end devices representing the vehicles. To view the results, a web application was developed using open-source tools to display the collected data on the movement of nodes in the network.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_57-Localization_and_Monitoringo_of_Public_Transport_Services_Based_on_Zigbee.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving and Extending Indoor Connectivity Using Relay Nodes for 60 GHz Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070456</link>
        <id>10.14569/IJACSA.2016.070456</id>
        <doi>10.14569/IJACSA.2016.070456</doi>
        <lastModDate>2016-05-01T19:05:15.3870000+00:00</lastModDate>
        
        <creator>Mohammad Alkhawatra</creator>
        
        <creator>Nidal Qasem</creator>
        
        <subject>60 GHz; Indoor Wireless; Multi-hop; Relay; Relay Selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>A 60 GHz wireless system can provide very high data rates. However, it suffers a tremendous amount of both Free Space Path Loss (FSPL) and penetration loss. To mitigate these losses and extend the system range, we propose techniques for using relay nodes. The relay node is positioned correctly in order to shorten the distance between a source and a destination, which reduces the FSPL value. In addition, correct positioning of the relay node provides an alternative Line of Sight (LoS) to overcome the penetration loss caused by human bodies. For the last challenge, the considerably short range of the wireless network in the 60 GHz band, the range is extended by applying multi-hop communication with the concept of relay node selection. The length of the room was doubled while still obtaining the same losses as if there had been no expansion. All three techniques were modeled inside ‘Wireless InSite’ in three scenarios. The first scenario was a conference room with no obstacles, to focus on FSPL. In the second scenario, the same conference room was modeled, but human bodies were taken into consideration to check the effect of penetration loss. The final scenario was an extended version of the first scenario, to deal with the small-range issue.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_56-Improving_and_Extending_Indoor_Connectivity_Using_Relay_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Data Warehouse in Real Life: State-of-the-art Survey from User Preferences’ Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070455</link>
        <id>10.14569/IJACSA.2016.070455</id>
        <doi>10.14569/IJACSA.2016.070455</doi>
        <lastModDate>2016-05-01T19:05:15.3570000+00:00</lastModDate>
        
        <creator>Muhammad Bilal Shahid</creator>
        
        <creator>Umber Sheikh</creator>
        
        <creator>Basit Raza</creator>
        
        <creator>Qaisar Javaid</creator>
        
        <subject>Data warehouse (DW); Data warehouse applications; Decision support systems; OLAP; Preference based</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In recent years, due to increases in data complexity and manageability issues, data warehousing has attracted a great deal of interest in real-life applications, especially in business, finance, healthcare, and industry. As the importance of retrieving information from a knowledge base cannot be denied, data warehousing is all about making information available for decision making. The data warehouse is accepted as the heart of the latest decision support systems. Due to the popularity of data warehouses in real life, the need for the design and implementation of data warehouses in different applications is becoming crucial. Information from operational data sources is integrated by data warehousing into a central repository to enable the analysis and mining of the integrated information, which is primarily used in strategic decision making by means of Online Analytical Processing (OLAP) techniques. Despite the application of data warehousing techniques in a number of areas, there is no comprehensive literature review of them. This survey paper is an effort to present the applications of data warehouses in real life. It aims to help scholars understand the analysis of data warehouse applications in a number of domains. This survey provides applications, case studies, and analyses of data warehouses used in various domains based on user preferences.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_55-Application_of_Data_Warehouse_in_Real_Life_State.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatiotemporal Context Modelling in Pervasive Context-Aware Computing Environment: A Logic Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070454</link>
        <id>10.14569/IJACSA.2016.070454</id>
        <doi>10.14569/IJACSA.2016.070454</doi>
        <lastModDate>2016-05-01T19:05:15.3400000+00:00</lastModDate>
        
        <creator>Darine Ameyed</creator>
        
        <creator>Moeiz Miraoui</creator>
        
        <creator>Chakib Tadj</creator>
        
        <subject>context modelling; logic; formal; pervasive system; context-aware system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Pervasive context-aware computing is one of the topics that has received particular attention from researchers. The context itself is an important notion explored in many works discussing its acquisition, definition, modelling, reasoning, and more. Given the permanent evolution of context-aware systems, context modelling is still a complex task, due to the lack of an adequate, dynamic, formal, and relevant context representation. This paper discusses various context modelling approaches and previous logic-based works. It also proposes a preliminary formal spatiotemporal context modelling based on first-order logic, derived from the structure of natural languages.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_54-Spatiotemporal_Context_Modelling_in_Pervasive_Context.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Off-Line Arabic (Indian) Numbers Recognition Using Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070453</link>
        <id>10.14569/IJACSA.2016.070453</id>
        <doi>10.14569/IJACSA.2016.070453</doi>
        <lastModDate>2016-05-01T19:05:15.3100000+00:00</lastModDate>
        
        <creator>Fahad Layth Malallah</creator>
        
        <creator>Mostafah Ghanem Saeed</creator>
        
        <creator>Maysoon M. Aziz</creator>
        
        <creator>Olasimbo Ayodeji Arigbabu</creator>
        
        <creator>Sharifah Mumtazah Syed Ahmad</creator>
        
        <subject>Arabic numeral character recognition; Image Processing; Pattern Recognition; Feature Extraction; Object Segmentation; Expert System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper proposes an effective approach to automatic recognition of printed Arabic numerals which are extracted from digital images. First, the input image is normalized and pre-processed to an acceptable form. From the preprocessed image, components of the words are segmented into individual objects representing different numbers. Second, the numerical recognition is performed using an expert system based on a set of if-else rules, where each set of rules represents the categorization of each number. Finally, rigorous experiments are carried out on 226 random Arabic numerals selected from 40 images of Iraqi car plate numbers. The proposed method attained an accuracy of 97%.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_53-Off_Line_Arabic_Indian_Numbers_Recognition_Using_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Format-Compliant Selective Encryption Scheme for Real-Time Video Streaming of the H.264/AVC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070452</link>
        <id>10.14569/IJACSA.2016.070452</id>
        <doi>10.14569/IJACSA.2016.070452</doi>
        <lastModDate>2016-05-01T19:05:15.2930000+00:00</lastModDate>
        
        <creator>Fatma SBIAA</creator>
        
        <creator>Sonia KOTEL</creator>
        
        <creator>Medien ZEGHID</creator>
        
        <creator>Rached TOURKI</creator>
        
        <creator>Mohsen MACHHOUT</creator>
        
        <creator>Adel BAGANNE</creator>
        
        <subject>component; Video coding; Data encryption; Data compression; H.264/AVC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The H.264 video coding standard is one of the most promising techniques for future video communications. In fact, it supports a broad range of applications. Accordingly, with the continuous promotion of multimedia services, H.264 has been widely used in real-world applications. A major concern in the design of H.264 encryption algorithms is how to achieve a sufficiently high security level while maintaining the efficiency of the underlying compression process. In this paper, a new selective encryption scheme for the H.264 standard is presented. The aim of this work is to study the security of the H.264 standard in order to propose an appropriate design for a hardware crypto-processor based on a stream cipher algorithm. Since the proposed cryptosystem is mainly dedicated to multimedia applications, it provides multiple security levels in order to satisfy the requirements of various applications for different purposes while ensuring higher coding efficiency. Different performance analyses were made in order to evaluate the new encryption system. The experimental results showed the reliability and robustness of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_52-A_Format_Compliant_Selective_Encryption_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Word Sense Disambiguation Approach for Arabic Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070451</link>
        <id>10.14569/IJACSA.2016.070451</id>
        <doi>10.14569/IJACSA.2016.070451</doi>
        <lastModDate>2016-05-01T19:05:15.2630000+00:00</lastModDate>
        
        <creator>Nadia Bouhriz</creator>
        
        <creator>Faouzia Benabbou</creator>
        
        <creator>El Habib Ben Lahmar</creator>
        
        <subject>Word Sense Disambiguation; Arabic Text; local context; global context; Arabic WordNet; Semantic Similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Word Sense Disambiguation (WSD) consists of identifying the correct sense of an ambiguous word occurring in a given context. Most Arabic WSD systems are generally based on information extracted from the local context of the word to be disambiguated. This information is not usually sufficient for the best disambiguation. To overcome this limitation, we propose an approach that takes into consideration, in addition to the local context, the global context extracted from the full text. More particularly, the sense attributed to an ambiguous word is the one whose semantic proximity is closest to both its local and global contexts. The experiments show that the proposed system achieved an accuracy of 74%.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_51-Word_Sense_Disambiguation_Approach_for_Arabic_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Miniaturized Meander Slot Antenna Tor RFID TAG with Dielectric Resonator at 60 Ghz</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070450</link>
        <id>10.14569/IJACSA.2016.070450</id>
        <doi>10.14569/IJACSA.2016.070450</doi>
        <lastModDate>2016-05-01T19:05:15.2470000+00:00</lastModDate>
        
        <creator>JMAL Sabri</creator>
        
        <creator>NECIBI Omrane</creator>
        
        <creator>TAGHOUTI Hichem</creator>
        
        <creator>MAMI Abdelkader</creator>
        
        <creator>GHARSALLAH Ali</creator>
        
        <subject>RFID TAG; Millimeter Wave Identification; Meander Slot Antenna; Dielectric Resonator Antenna (DRA); On-chip Antenna; Silicon; Scattering Matrix; Bond-Graph; Scattering Bond-Graph</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Recent advances in millimeter wave communications have called for the development of compact and efficient antennas. One of the greatest challenges in this area is to obtain good performance from a miniaturized antenna; the design of antennas in silicon technology is another key challenge. Accordingly, this work focuses on the design of a high-gain, high-efficiency on-chip meander slot antenna in the unlicensed 60 GHz band for Radio Frequency Identification (RFID) transponders. Furthermore, stacked dielectric resonators (DRs) are arranged above the radiating element of the on-chip antenna with coplanar waveguide (CPW) excitation. We use the Scattering Bond-Graph formalism as a new technique to design the proposed antennas, and we use the CST Microwave Studio simulation software to validate the results. After a number of iterations and by applying the Bond-Graph methodology, we have miniaturized the proposed antenna to a size of about 1.2 * 1.1 mm2.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_50-Miniaturized_Meander_Slot_Antenna_Tor_RFID_TAG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Multi Images Visible Watermarking Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070449</link>
        <id>10.14569/IJACSA.2016.070449</id>
        <doi>10.14569/IJACSA.2016.070449</doi>
        <lastModDate>2016-05-01T19:05:15.2000000+00:00</lastModDate>
        
        <creator>Ruba G. Al-Zamil</creator>
        
        <creator>Safa’a N. Al-Haj Saleh</creator>
        
        <subject>Visible watermarking; multi-image watermarking; marker image; background image; image channels; opacity; Matlab</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Visible watermarking techniques are proposed to secure digital data against unauthorized attacks. These techniques protect data from illegal access and use. In this work, a multi-image visible watermarking technique that allows embedding different types of markers into different types of background images has been proposed. It also allows adding multiple markers to the same background image with different sizes, positions and opacity levels without any interference. The proposed technique improves the flexibility of visible watermarking and helps in increasing security levels. A visible watermarking system is designed to implement the proposed technique. The system facilitates single and multiple watermarking as illustrated in the proposed technique. Experimental results indicate that the proposed technique applies visible watermarking successfully.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_49-A_Proposed_Multi_Images_Visible_Watermarking_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Detection with Neuro-Fuzzy Approach in Digital Synthesis Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070448</link>
        <id>10.14569/IJACSA.2016.070448</id>
        <doi>10.14569/IJACSA.2016.070448</doi>
        <lastModDate>2016-05-01T19:05:15.1530000+00:00</lastModDate>
        
        <creator>Fatma ZRIBI</creator>
        
        <creator>Noureddine ELLOUZE</creator>
        
        <subject>Neuro-Fuzzy; learning databases; Gaussian noise; synthesis images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper presents an enhanced Neuro-Fuzzy (NF) approach to edge detection with an analysis of the characteristics of the method. The specificity of our method is an enhancement of the learning database for diagonal edges compared to the original learning database. The original NF edge detection model that inspired ours, realized by Emin Yuksel, uses just one image learning database. The tests are performed on synthesis images, including one corrupted with 20% Gaussian noise.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_48-Edge_Detection_with_Neuro_Fuzzy_Approach_in_Digital_Synthesis_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Interface Menu Design Performance and User Preferences: A Review and Ways Forward</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070447</link>
        <id>10.14569/IJACSA.2016.070447</id>
        <doi>10.14569/IJACSA.2016.070447</doi>
        <lastModDate>2016-05-01T19:05:15.1230000+00:00</lastModDate>
        
        <creator>Dr Pietro Murano</creator>
        
        <creator>Margrete Sander</creator>
        
        <subject>Menus; navigation of interfaces; universal design; research methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This review paper is about menus on web pages and applications and their positioning on the user's screen. The paper aims to provide the reader with a succinct summary of the major research in this area, along with an easy-to-read tabulation of the most important studies. Furthermore, the paper concludes with some suggestions for future research regarding how menus and their positioning on the screen could be improved. The two principal suggestions are to use more qualitative methods for investigating the issues and to develop more universally designed menus in the future.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_47-User_Interface_Menu_Design_Performance_and_User_Preferences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Throughput Measurement Method Using Command Packets for Mobile Robot Teleoperation Via a Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070446</link>
        <id>10.14569/IJACSA.2016.070446</id>
        <doi>10.14569/IJACSA.2016.070446</doi>
        <lastModDate>2016-05-01T19:05:15.0900000+00:00</lastModDate>
        
        <creator>Kei SAWAI</creator>
        
        <creator>Ju Peng</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Wireless Sensor Networks; Rescue Robot Teleoperation; Communication Quality Measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>We are working to develop an information gathering system comprising a mobile robot and a wireless sensor network (WSN) for use in post-disaster underground environments. In the proposed system, a mobile robot carries wireless sensor nodes and deploys them to construct a WSN in the environment, thus providing a wireless communication infrastructure for mobile robot teleoperation. An operator then controls the mobile robot remotely while monitoring end-to-end communication quality with the mobile robot. Measurement of communication quality on wireless LANs has been widely studied. However, a throughput measurement method has not been developed for assessing the usability of wireless mobile robot teleoperation. In particular, a measurement method is needed that can handle mobile robots as they move around an unknown environment. Accordingly, in this paper, we propose a method for measuring throughput as a measure of communication quality in a WSN for wireless teleoperation of mobile robots. The feasibility of the proposed method was evaluated and verified in a practical field test where an operator remotely controlled mobile robots using a WSN.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_46-Throughput_Measurement_Method_Using_Command_Packets_for_Mobile_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Efficient Forecasting of Stock Market Using Particle Swarm Optimization with Center of Mass Based Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070445</link>
        <id>10.14569/IJACSA.2016.070445</id>
        <doi>10.14569/IJACSA.2016.070445</doi>
        <lastModDate>2016-05-01T19:05:15.0770000+00:00</lastModDate>
        
        <creator>Razan A. Jamous</creator>
        
        <creator>Essam El.Seidy</creator>
        
        <creator>Bayoumi Ibrahim Bayoum</creator>
        
        <subject>Stock market forecasting; particle swarm optimization; Bacterial foraging optimization; Adaptive bacterial foraging optimization; Genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper develops an efficient forecasting model for various stock price indices based on the previously introduced particle swarm optimization with center of mass (PSOCOM) technique. The structure used in the proposed prediction models is a simple linear combiner trained with PSOCOM by minimizing its mean square error (MSE). A comparison with other models, such as standard PSO, the genetic algorithm, bacterial foraging optimization, and adaptive bacterial foraging optimization, was carried out. The experimental results show that the PSOCOM algorithm is the best among these algorithms in terms of MSE and prediction accuracy for some stock price indices. Moreover, the proposed forecasting model gives accurate predictions for both short- and long-term horizons. As a result, the proposed stock market prediction model is more efficient than the other compared models.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_45-A_Novel_Efficient_Forecasting_of_Stock_Market_Using_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Group Decision Support System to Evaluate the ICT Project Performance Using the Hybrid Method of AHP, TOPSIS and Copeland Score</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070444</link>
        <id>10.14569/IJACSA.2016.070444</id>
        <doi>10.14569/IJACSA.2016.070444</doi>
        <lastModDate>2016-05-01T19:05:15.0600000+00:00</lastModDate>
        
        <creator>Herri Setiawan</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Purwo Santoso</creator>
        
        <subject>GDSS; ICT; MCDM; AHP; TOPSIS; Copeland Score; Decision Maker</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper proposed a concept of the Group Decision Support System (GDSS) to evaluate the performance of Information and Communications Technology (ICT) projects in Indonesian regional government agencies, to overcome any inconsistencies which may occur in a decision-making process. By considering the aspect of the applicable legislation, the decision makers involved in assessing and evaluating ICT project implementation in regional government agencies consisted of Executing Parties of Government Institutions, ICT Management Work Units, Business Process Owner Units, and Society, represented by the DPRD (Regional People’s Representative Assembly). The contributions of those decision makers in the model took the form of preferences for evaluating the ICT project-related alternatives based on predetermined criteria, using a Multiple Criteria Decision Making (MCDM) method. This research presented a GDSS framework integrating the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and the Copeland Score. The AHP method was used to generate values for the criteria, which served as input to the calculation process of the TOPSIS method. The TOPSIS calculation generated a project rank for each decision maker, and to combine the different preferences of these decision makers, the Copeland Score voting method was used to determine the best project rank among all the ranks indicated by the decision makers.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_44-The_Group_Decision_Support_System_to_Evaluate_the_Ict_Project_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Devising a Secure Architecture of Internet of Everything (IoE) to Avoid the Data Exploitation in Cross Culture Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070443</link>
        <id>10.14569/IJACSA.2016.070443</id>
        <doi>10.14569/IJACSA.2016.070443</doi>
        <lastModDate>2016-05-01T19:05:15.0430000+00:00</lastModDate>
        
        <creator>Asim Majeed</creator>
        
        <creator>Rehan Bhana</creator>
        
        <creator>Anwar Ul Haq</creator>
        
        <creator>Mike-Lloyd Williams</creator>
        
        <subject>privacy; privacy enhancing technology (PET); big data; information communication technology (ICT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The communication infrastructure among various interconnected devices has revolutionized the process of collecting and sharing information. This evolutionary paradigm of collecting, storing and analyzing data streams is called the Internet of Everything (IoE). The information exchange through IoE is fast and accurate but leaves security issues open. The emergence of IoE has seen a drift from a single novel technology to several technological developments. Managing various technologies under one infrastructure is complex, especially when a network openly allows nodes to access it. The transition of infrastructures from closed networked environments to the public internet has raised security issues. The consistent growth in IoE technology is recognized as a bridge between the physical, virtual and cross-cultural worlds. Modern enterprises are becoming reliant on interconnected wireless intelligent devices, and this has put billions of users’ data at risk. Interference and intrusion in any infrastructure raise public safety concerns, because such interception could compromise users’ personal data as well as their personal privacy. This research aims to adopt a holistic approach to devising a secure IoE architecture for cross-culture communication organizations, with attention paid to the various wearable technological devices, their security policies, communication protocols, data formats and data encryption features, in order to avoid data exploitation. A systems methodology will be adopted with a view to developing a secure IoE model that provides for a generic implementation after analyzing the critical security features needed to minimize the risk of data exploitation. This would combine the ability of IoE to connect, communicate with, and remotely manage an incalculable number of networked, automated devices with the security properties of authentication, availability, integrity and confidentiality on a configurable basis. This will help clarify the issues currently present and considerably narrow down security threat planning.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_43-Devising_a_Secure_Architecture_of_Internet_of_Everything.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Particle Swarm Optimization Based Stock Market Prediction Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070442</link>
        <id>10.14569/IJACSA.2016.070442</id>
        <doi>10.14569/IJACSA.2016.070442</doi>
        <lastModDate>2016-05-01T19:05:15.0300000+00:00</lastModDate>
        
        <creator>Essam El. Seidy</creator>
        
        <subject>Computational intelligence; Particle Swarm Optimization; Stock Market; Prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Over the last years, the average person&#39;s interest in the stock market has grown dramatically. This demand has doubled with the advancement of technology that has opened up the international stock market, so that nowadays anybody can own stocks and use many types of software in pursuit of the desired profit with minimum risk. Consequently, the analysis and prediction of future values and trends of the financial markets have received more attention, and due to its many applications in different business transactions, stock market prediction has become a critical topic of research. In this paper, our earlier presented particle swarm optimization with center of mass technique (PSOCoM) is applied to the task of training an adaptive linear combiner to form a new stock market prediction model. This prediction model is used with some common indicators to maximize the return and minimize the risk for the stock market. The experimental results show that the proposed technique is superior to the other PSO based models in terms of prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_42-A_New_Particle_Swarm_Optimization_Based_Stock_Market_Prediction_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Key Exchange Procedure for VANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070441</link>
        <id>10.14569/IJACSA.2016.070441</id>
        <doi>10.14569/IJACSA.2016.070441</doi>
        <lastModDate>2016-05-01T19:05:14.9970000+00:00</lastModDate>
        
        <creator>Hamza Toulni</creator>
        
        <creator>Mohcine Boudhane</creator>
        
        <creator>Benayad Nsiri</creator>
        
        <creator>Mounia Miyara</creator>
        
        <subject>ITS; Vehicular ad-hoc networks; public key exchange; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>VANET is a promising technology for intelligent transport systems (ITS). It offers new opportunities for improving both the flow of vehicles on the roads and road safety. However, vehicles are interconnected by wireless links without using any infrastructure, which exposes the vehicular network to many attacks. This paper presents a new solution for the exchange of security keys to protect information exchanged between vehicles. In addition to securing inter-vehicular communication, the proposed solution considerably decreases the time needed for the key exchange, thus improving the performance of the VANET.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_41-An_Adaptive_Key_Exchange_Procedure_for_VANET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Detect Duplicate Code Blocks to Reduce Maintenance Effort</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070440</link>
        <id>10.14569/IJACSA.2016.070440</id>
        <doi>10.14569/IJACSA.2016.070440</doi>
        <lastModDate>2016-05-01T19:05:14.9670000+00:00</lastModDate>
        
        <creator>Sonam Gupta</creator>
        
        <creator>Dr. P. C Gupta</creator>
        
        <subject>Clones; Program Dependence Graph (PDG); Control Flow Graph (CFG); Abstract Syntax Tree (AST)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>It was found in many cases that a piece of code might be a clone for one programmer but not for another. This problem occurs because of inaccurate documentation. According to research, maintainers are often not aware of the original design and thus face difficulty agreeing on the system’s components and their relations, or understanding how the application works. The problem also arises when the development and maintenance teams differ, resulting in more effort and time during maintenance. This paper proposes a novel approach to detect clones on the programmer’s side, so that if a particular piece of code is a clone, it can be well documented. This approach provides both the individual duplicate statements and the blocks in which they appear. The approach has been examined on seven open source systems.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_40-A_Novel_Approach_to_Detect_Duplicate_Code_Blocks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Informational Model as a Guideline to Design Sustainable Green SLA (GSLA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070439</link>
        <id>10.14569/IJACSA.2016.070439</id>
        <doi>10.14569/IJACSA.2016.070439</doi>
        <lastModDate>2016-05-01T19:05:14.9370000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>SLA; GSLA; Green Computing; Sustainability; IT ethics; Informational model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Recently, the Service Level Agreement (SLA) and the green SLA (GSLA) have become very important both for service providers/vendors and for users/customers. There are many ways to inform users/customers about various services, with their inherent execution functionalities and even non-functional/Quality of Service (QoS) aspects, through SLAs. However, these basic SLAs do not actually cover eco-efficient green issues or IT ethics issues for sustainable development. That is why the green SLA (GSLA) came into play for achieving sustainability in the industry. Nevertheless, the current practice of GSLA in the industry does not respect sustainability at all. A GSLA is defined as a formal agreement incorporating all the traditional commitments while respecting green computing parameters such as carbon footprint, energy consumption, etc. Therefore, there are still gaps in achieving sustainability through existing GSLAs. To reach the goal of achieving sustainability and attracting more customers, many IT (Information Technology) and ICT (Information and Communication Technology) businesses are looking for a real GSLA that would meet the ecological, economical and ethical aspects (3Es) of sustainability. This research discovers the missing parameters and introduces new parameters under the sustainability hood. In addition, it defines a GSLA for sustainability with new green performance indicators and their measurable units. It also examines the management complexity of the proposed new GSLA by designing a general informational model, and identifies various new entities and their effects on other entities under the three pillars of sustainability. ICT engineers could use the informational model as a guideline to design a sustainable GSLA for the industry. Therefore, the proposed model could help different service providers/vendors define their future business strategies for the upcoming sustainable society.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_39-An_Informational_Model_as_a_Guideline_to_Design_Sustainable_Green_SLA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070438</link>
        <id>10.14569/IJACSA.2016.070438</id>
        <doi>10.14569/IJACSA.2016.070438</doi>
        <lastModDate>2016-05-01T19:05:14.9200000+00:00</lastModDate>
        
        <creator>Ibrahim Missaoui</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Feature extraction; Two-dimensional Gabor filters; Noisy speech recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that the proposed feature extraction method outperforms classic methods such as Perceptual Linear Prediction, Linear Predictive Coding, Linear Prediction Cepstral Coefficients and Mel Frequency Cepstral Coefficients.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_38-Physiologically_Motivated_Feature_Extraction_for_Robust_Automatic_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Tracking Using a Hybrid Optcial-Haptic Three-Dimensional Tracking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070437</link>
        <id>10.14569/IJACSA.2016.070437</id>
        <doi>10.14569/IJACSA.2016.070437</doi>
        <lastModDate>2016-05-01T19:05:14.8870000+00:00</lastModDate>
        
        <creator>M’hamed Frad</creator>
        
        <creator>Hichem Maaref</creator>
        
        <creator>Samir Otman</creator>
        
        <creator>Abdellatif Mtibaa</creator>
        
        <subject>virtual reality; Scalable-SPIDAR; support vector regression; hybrid tracking system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The aim of this paper is to assess to what extent an optical tracking system (OTS) used for position tracking in virtual reality can be improved by combining it with a human-scale haptic device named Scalable-SPIDAR. The main advantage of the Scalable-SPIDAR haptic device is that it is unobtrusive and does not depend on a free line of sight. Unfortunately, the accuracy of the Scalable-SPIDAR is affected by its poorly tailored mechanical design. We explore to what extent the influence of these inaccuracies can be compensated by collecting precise information on the nonlinear error using the OTS and applying support vector regression (SVR) to calibrate the haptic device reports. After calibration of the Scalable-SPIDAR, we found that the average error in position readings was reduced from 263.7240&#177;75.6207 mm to 12.6045&#177;8.4169 mm. These results encourage the development of a hybrid haptic-optical system for virtual reality applications where the haptic device acts as an auxiliary source of position information for the optical tracker.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_37-Improved_Tracking_Using_a_Hybrid_Optcial_Haptic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study to Investigate State of Ethical Development in E-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070436</link>
        <id>10.14569/IJACSA.2016.070436</id>
        <doi>10.14569/IJACSA.2016.070436</doi>
        <lastModDate>2016-05-01T19:05:14.8730000+00:00</lastModDate>
        
        <creator>AbdulHafeez Muhammad</creator>
        
        <creator>Mohd. Feham MD. Ghalib</creator>
        
        <creator>Farooq Ahmad</creator>
        
        <creator>Quadri N Naveed</creator>
        
        <creator>Asadullah Shah</creator>
        
        <subject>e-Learning; ethical development; ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Several studies have shown that e-learning provides more opportunities to behave unethically than traditional learning does. A descriptive, quantitative, enquiry-based study was performed to explore this issue in e-Learning environments. The factors required for the ethical development of students were extracted from the literature. Efforts were then made to assess their significance and status in e-Learning using a 5-point Likert-scale survey. The sample consisted of 47 teachers, 298 students, and 31 administrative staff of e-learning management involved in e-Learning. The study also observed the state of students with respect to various ethical behaviors. The study emphasized that the physical presence of a teacher, an ethically conducive institutional environment, and the involvement of members of society are among the main factors that help in the ethical development of a student, and that these are missing in e-Learning. The results showed that the moral behavior of e-Learners is in decline because of the lack of these required factors in e-Learning. This work also suggested the need for a model indicating how these deficiencies can be addressed by educational institutions for the ethical development of higher-education learners.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_36-A_Study_to_Investigate_State_of_Ethical_Development_in_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-Based Image Retrieval for Medical Applications with Flip-Invariant Consideration Using Low-Level Image Descriptors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070435</link>
        <id>10.14569/IJACSA.2016.070435</id>
        <doi>10.14569/IJACSA.2016.070435</doi>
        <lastModDate>2016-05-01T19:05:14.8270000+00:00</lastModDate>
        
        <creator>Qusai Q. Abuein</creator>
        
        <creator>Mohammed Q. Shatnawi</creator>
        
        <creator>Radwan Batiha</creator>
        
        <creator>Ahmad Al-Aiad</creator>
        
        <creator>Suzan Amareen</creator>
        
        <subject>CBIR; image retrieval; feature extraction; medical images; flipped images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Content-Based Image Retrieval (CBIR) has recently become one of the most attractive areas of medical image research. Most existing research on CBIR focuses on low-level image descriptors for image retrieval, which sometimes weakens retrieval accuracy because many relevant images are not retrieved. Only a limited number of studies consider flipping the image on different sides. To fill this knowledge gap, this research focuses on including flipped images in retrieval, based on a previously implemented system that uses low-level image descriptors. Improvements are made to the system by extracting the features of both the main and the flipped images. The final results show that the proposed system outperforms the existing one. The system has proven to be a powerful aid that helps medical staff, physician decision makers, and students obtain better results by providing a wide range of needed images, and it supports reasoning and better decision making.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_35-Content_Based_Image_Retrieval_for_Medical_Applications_with_Flip_Invariant_Consideration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070434</link>
        <id>10.14569/IJACSA.2016.070434</id>
        <doi>10.14569/IJACSA.2016.070434</doi>
        <lastModDate>2016-05-01T19:05:14.8100000+00:00</lastModDate>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Romulus-Radu Moldovan-Duse</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>George Pecherle</creator>
        
        <subject>adjacency list model; nested set model; relational database; MSSQL 2014; hierarchical data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using the adjacency list model compared to the nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_34-Improve_Query_Performance_On_Hierarchical_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Databases with Different Methods of Internal Data Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070433</link>
        <id>10.14569/IJACSA.2016.070433</id>
        <doi>10.14569/IJACSA.2016.070433</doi>
        <lastModDate>2016-05-01T19:05:14.7800000+00:00</lastModDate>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Alexandra Stefan</creator>
        
        <creator>Livia Bandici</creator>
        
        <subject>MongoDB; Microsoft SQL Server; NoSQL; non-relational database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The purpose of this paper is to present a comparative study between a non-relational MongoDB database and a relational Microsoft SQL Server database for an unstructured representation of data in XML or JSON format. We mainly focus on exploring the possibilities that each type of database offers when the data to be stored cannot, or should not, be normalized. This scenario is most often found in production when the application being developed extracts unstructured data from social networks or from the various other channels a user might have. The comparative study is based on a benchmark application developed in C# using Visual Studio 2013, which accesses databases created beforehand with appropriate optimizations that are described in the paper.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_33-A_Comparative_Study_of_Databases_with_Different_Methods_of_Internal_Data_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Application of Fuzzy Control in Water Tank Level Using Arduino</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070432</link>
        <id>10.14569/IJACSA.2016.070432</id>
        <doi>10.14569/IJACSA.2016.070432</doi>
        <lastModDate>2016-05-01T19:05:14.7470000+00:00</lastModDate>
        
        <creator>Fay&#231;al CHABNI</creator>
        
        <creator>Rachid TALEB</creator>
        
        <creator>Abderrahmen BENBOUALI</creator>
        
        <creator>Mohammed Amin BOUTHIBA</creator>
        
        <subject>Fuzzy control; PI; PID; Arduino; System identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Fuzzy logic control has been successfully utilized in various industrial applications; it is generally used in complex control systems, such as chemical process control. Today, most fuzzy logic controllers are still implemented on expensive high-performance processors. This paper analyzes the effectiveness of a fuzzy logic controller implemented on a low-cost controller and applied to a water level control system. The paper also gives a low-cost hardware solution and a practical procedure for system identification and control. First, the mathematical model of the process was obtained with the help of Matlab. Then two methods were used to control the system: PI (Proportional-Integral) control and fuzzy control. Simulation and experimental results are presented.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_32-The_Application_of_Fuzzy_Control_in_Water_Tank_Level_Using_Arduino.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Problem of Universal Grammar with Multiple Languages: Arabic, English, Russian as Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070431</link>
        <id>10.14569/IJACSA.2016.070431</id>
        <doi>10.14569/IJACSA.2016.070431</doi>
        <lastModDate>2016-05-01T19:05:14.7330000+00:00</lastModDate>
        
        <creator>Nabeel Imhammed Zanoon</creator>
        
        <subject>Chomsky; linguistic; Universal Grammar; Arabic; English; Russian</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Every language has its own characteristics and rules, though all languages share the same components, such as words, sentences, subject, verb, and object. Chomsky nevertheless suggested that children acquire language instinctively through a universal grammar common to all human languages. Since its declaration, this theory has encountered criticism from linguists. In this paper, that criticism is presented, and the conclusion that the general rule is not compatible with all human languages is supported by studying three human languages as cases, namely Arabic, English, and Russian.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_31-The_Problem_of_Universal_Grammar_with_Multiple_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis on Host Vulnerability Evaluation of Modern Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070430</link>
        <id>10.14569/IJACSA.2016.070430</id>
        <doi>10.14569/IJACSA.2016.070430</doi>
        <lastModDate>2016-05-01T19:05:14.7000000+00:00</lastModDate>
        
        <creator>Afifa Sajid</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Qaisar Javaid</creator>
        
        <creator>Sijing Zhang</creator>
        
        <subject>Security; Operating system; Virtualization; kernel; Reliability; Vulnerability evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Security is a major concern in all computing environments. One way to achieve security is to deploy a secure operating system (OS). A trusted OS can secure all resources and resist vulnerabilities and attacks effectively. In this paper, our contribution is twofold. Firstly, we critically analyze the host vulnerabilities in modern desktop OSs. We group existing approaches and provide an easy and concise view of the different security models adopted by the most widely used OSs. A comparison of several OSs regarding structure, architecture, mode of working, and security models also forms part of the paper. Secondly, we use the current usage statistics for Windows, Linux, and Mac OSs and predict their future. Our forecast will help the designers, developers, and users of the different OSs to prepare for the upcoming years accordingly.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_30-An_Analysis_on_Host_Vulnerability_Evaluation_of_Modern_Operating_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Holistic Evaluation Framework for Automated Bug Triage Systems: Integration of Developer Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070429</link>
        <id>10.14569/IJACSA.2016.070429</id>
        <doi>10.14569/IJACSA.2016.070429</doi>
        <lastModDate>2016-05-01T19:05:14.6700000+00:00</lastModDate>
        
        <creator>Dr. V. Akila</creator>
        
        <creator>Dr. V. Govindasamy</creator>
        
        <subject>Bug Management; Bug Triage; Recommendation Metrics; Key Performance Indicators; Developer Performance; Bug Resolution Time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Bug Triage is an important aspect of Open Source Software Development. An Automated Bug Triage system is essential to reduce the cost and effort incurred by manual Bug Triage. At present, the metrics available in the literature to evaluate an Automated Bug Triage System are only recommendation-centric. These metrics address only the correctness and coverage of the Automated Bug Triage System. Thus, there is a need for a user-centric evaluation of the Bug Triage System. The two types of metrics to evaluate the Automated Bug Triage System are Recommendation Metrics and User Metrics, and there is a need to corroborate the results produced by the Recommendation Metrics with the User Metrics. To this end, this paper furnishes a Holistic Evaluation Framework for Bug Triage Systems by integrating developer performance into the evaluation framework. The Automated Bug Triage System also retrieves a set of developers for resolving a bug. Hence, this paper proposes Key Performance Indicators (KPIs) for appraising a developer’s effectiveness in contributing towards the resolution of a bug. By applying the KPIs to the retrieved set of developers, the Bug Triage System can be evaluated quantitatively.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_29-Holistic_Evaluation_Framework_for_Automated_Bug_Triage_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Urdu to Punjabi Machine Translation: An Incremental Training Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070428</link>
        <id>10.14569/IJACSA.2016.070428</id>
        <doi>10.14569/IJACSA.2016.070428</doi>
        <lastModDate>2016-05-01T19:05:14.5930000+00:00</lastModDate>
        
        <creator>Umrinderpal Singh</creator>
        
        <creator>Vishal Goyal</creator>
        
        <creator>Gurpreet Singh Lehal</creator>
        
        <subject>Machine Translation; Urdu to Punjabi Machine Translation; NLP; Urdu; Punjabi; Indo-Aryan Languages</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The statistical machine translation approach is highly popular in the automatic translation research area and is promising for yielding good accuracy. Efforts have been made to develop an Urdu to Punjabi statistical machine translation system. The system is based on an incremental training approach to train the statistical model. In place of a parallel sentence corpus, manually mapped phrases were used to train the model. In the preprocessing phase, various rules were used for the tokenization and segmentation processes. Along with these rules, a text classification system was implemented to classify input text into predefined classes, and the decoder translates the given text according to the domain selected by the text classifier. The system used a Hidden Markov Model (HMM) for the learning process, and the Viterbi algorithm was used for decoding. Experiments and evaluation have shown that a simple statistical model like HMM yields good accuracy for a closely related language pair like Urdu-Punjabi. The system achieved a 0.86 BLEU score and more than 85% accuracy in manual testing.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_28-Urdu_to_Punjabi_Machine_Translation_An_Incremental_Training_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hex Symbols Algorithm for Anti-Forensic Artifacts on Android Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070427</link>
        <id>10.14569/IJACSA.2016.070427</id>
        <doi>10.14569/IJACSA.2016.070427</doi>
        <lastModDate>2016-05-01T19:05:14.5600000+00:00</lastModDate>
        
        <creator>Somyia M. Abu Asbeh</creator>
        
        <creator>Sarah M. Hammoudeh</creator>
        
        <creator>Hamza A. Al-Sewadi</creator>
        
        <creator>Arab M. Hammoudeh</creator>
        
        <subject>Mobile Forensics; Anti-Forensics; Artifact Wiping; Data Hiding; Wicker; Steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Mobile phone technology has become one of the most common and important technologies; it started as a communication tool and then evolved into a key reservoir of personal information and smart applications. With this increased level of complexity, increased dangers and increased levels of countermeasures and opposing countermeasures have emerged, such as Mobile Forensics and anti-forensics. One of these anti-forensic tools is steganography, which introduces higher levels of complexity and security against hackers’ attacks but simultaneously creates obstacles to forensic investigations. A new anti-forensic approach for embedding data in the steganography field is proposed in this paper. It is based on hiding secret data in hex-symbol carrier files rather than the usual multimedia carrier files, including video, image, and sound files. Furthermore, this approach utilizes hexadecimal codes to embed the secret data, in contrast to conventional steganography approaches, which apply changes to binary codes. Accordingly, the resulting data in the carrier files will be unfathomable without the use of special keys, yielding a high level of difficulty for attacking and deciphering. Besides embedding the secret data in the form of hex symbols, the procedure agreed upon between the communicating parties follows a random embedding manner formulated using the WinHex software. Files can be exchanged among android devices and/or computers. Experiments were conducted by applying the proposed algorithm on rooted android devices connected to computers. The proposed method showed advantages over currently existing steganography approaches in terms of character frequency, capacity, security, and robustness.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_27-Hex_Symbols_Algorithm_for_Anti_Forensic_Artifacts_on_Android_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Security for Smartphone Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070426</link>
        <id>10.14569/IJACSA.2016.070426</id>
        <doi>10.14569/IJACSA.2016.070426</doi>
        <lastModDate>2016-05-01T19:05:14.4830000+00:00</lastModDate>
        
        <creator>Syed Farhan Alam Zaidi</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Muhammad Kamran</creator>
        
        <creator>Qaisar Javaid</creator>
        
        <creator>Sijing Zhang</creator>
        
        <subject>Smartphone Security; Vulnerabilities; Attacks; Malware</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The technological advancements in mobile connectivity services such as GPRS, GSM, 3G, 4G, Bluetooth, WiMAX, and Wi-Fi have made mobile phones a necessary component of our daily lives. Mobile phones have also become smart, letting users perform routine tasks on the go. However, this rapid increase in technology and the tremendous usage of smartphones make them vulnerable to malware and other security-breaching attacks. This diverse range of mobile connectivity services, device software platforms, and standards makes it critical to look at the holistic picture of current developments in smartphone security research. In this paper, our contribution is twofold. Firstly, we review the threats, vulnerabilities, attacks, and their solutions over the period 2010-2015 with a special focus on smartphones. Attacks are categorized into two types, i.e., old attacks and new attacks. With this categorization, we aim to provide an easy and concise view of the different attacks and the possible solutions for improving smartphone security. Secondly, we critically analyze our findings and estimate the market growth of the different smartphone operating systems in the coming years. Furthermore, we estimate malware growth and forecast the situation in 2020.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_26-A_Survey_on_Security_for_Smartphone_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hyperspectral Image Classification Using Unsupervised Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070425</link>
        <id>10.14569/IJACSA.2016.070425</id>
        <doi>10.14569/IJACSA.2016.070425</doi>
        <lastModDate>2016-05-01T19:05:14.4670000+00:00</lastModDate>
        
        <creator>Sahar A. El_Rahman</creator>
        
        <subject>hyperspectral imaging; unsupervised classification; K-Means algorithm; ISODATA algorithm; ENVI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Hyperspectral Imaging (HSI) is a process that results in collected and processed information of the electromagnetic spectrum from a specific sensor device. Its data provide a wealth of information that can be used to address a variety of problems in a number of applications. Hyperspectral image classification assorts all pixels in a digital image into groups. In this paper, unsupervised hyperspectral image classification algorithms are used to obtain a classified hyperspectral image; the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA) and the K-Means algorithm are used. The two algorithms are applied to a hyperspectral image of Washington DC, USA, using the ENVI tool. The performance was evaluated on the basis of the accuracy assessment of the process after applying Principal Component Analysis (PCA) and the K-Means or ISODATA algorithm. It is found that the ISODATA algorithm is more accurate than the K-Means algorithm: the overall accuracy of the classification process using the K-Means algorithm is 78.3398%, while the overall accuracy using the ISODATA algorithm is 81.7696%. The processing time also increased as the number of iterations needed to obtain the classified image increased.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_25-Hyperspectral_Image_Classification_Using_Unsupervised_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable Hybrid Speech Codec for Voice over Internet Protocol Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070424</link>
        <id>10.14569/IJACSA.2016.070424</id>
        <doi>10.14569/IJACSA.2016.070424</doi>
        <lastModDate>2016-05-01T19:05:14.4370000+00:00</lastModDate>
        
        <creator>Manas Ray</creator>
        
        <creator>Mahesh Chandra</creator>
        
        <creator>B.P. Patil</creator>
        
        <subject>VoIP; Speech Compression; Hybrid Speech Codec; ITU-T G.729 codec; db10 wavelet; Statistical Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>With the advent of various web-based applications and fourth generation (4G) access technology, there has been an exponential growth in the demand for multimedia service delivery along with speech signals in a voice over internet protocol (VoIP) setup. There is a need to fine-tune the conventional speech codecs deployed to cater to the modern environment. This fine-tuning can be achieved by further compressing the speech signal so as to utilize the available bandwidth for delivering other services. This paper presents a scalable hybrid model of a speech codec using ITU-T G.729 and the db10 wavelet. The codec addresses the problem of compressing the speech signal in a VoIP setup. The performance of the codec has been compared with the standard codec by statistical analysis of the subjective, objective, and quantifiable quality parameters desirable from a codec deployed on VoIP platforms.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_24-Scalable_Hybrid_Speech_Codec_for_Voice_Over_Internet_Protocol_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Risk Scoring Incorporating Computers&#39; Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070423</link>
        <id>10.14569/IJACSA.2016.070423</id>
        <doi>10.14569/IJACSA.2016.070423</doi>
        <lastModDate>2016-05-01T19:05:14.4030000+00:00</lastModDate>
        
        <creator>Eli Weintraub</creator>
        
        <subject>CVSS; Security; Risk Management; Configuration Management; CMDB; Continuous Monitoring System; Vulnerability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>A framework for a Continuous Monitoring System (CMS) with new, improved capabilities is presented. The system uses the actual real-time configuration of the system and its environment, characterized by a Configuration Management Data Base (CMDB) that includes detailed information on organizational database contents and security and privacy specifications. The Common Vulnerability Scoring System&#39;s (CVSS) algorithm produces risk scores incorporating information from the CMDB. By using the real, updated environmental characteristics, the system achieves more accurate scores than existing practices. The framework presentation includes the system&#39;s design and an illustration of the scoring computations.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_23-Security_Risk_Scoring_Incorporating_Computers_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Sample Selection: A Cascade-Based Approach for Lesion Automatic Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070422</link>
        <id>10.14569/IJACSA.2016.070422</id>
        <doi>10.14569/IJACSA.2016.070422</doi>
        <lastModDate>2016-05-01T19:05:14.3900000+00:00</lastModDate>
        
        <creator>Shofwatul ‘Uyun</creator>
        
        <creator>M. Didik R Wahyudi</creator>
        
        <creator>Lina Choridah</creator>
        
        <subject>CADe system; haar; cascade classifier; mammogram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>A Computer-Aided Detection (CADe) system plays a significant role as a preventative effort in the early detection of breast cancer. There are several phases in developing pattern recognition for a CADe system, including the availability of a large amount of data, feature extraction, the selection and use of features, and the selection of an appropriate classification method. The Haar cascade classifier has been successfully developed to detect faces in multimedia images automatically and quickly. The success of face detection systems cannot be separated from the availability of training data in large numbers. However, this is not easy to implement on medical images for several reasons, including their low quality, the very small gray-value differences, and the limited number of patches available as examples of positive data. Therefore, this research proposes an algorithm to overcome the limited number of patches in the region of interest when detecting whether a lesion exists on mammogram images, based on the Haar cascade classifier. This research uses mammogram and ultrasonography images from the breast imaging of 60 probands and patients in the Clinic of Oncology, Yogyakarta. The CADe system is tested by comparing its reading results with the mammography reading results, validated against the Radiologist’s reading of the ultrasonography image. The k-fold cross-validation results demonstrate that using the algorithm for the multiplication of the intersection rectangle may improve system performance, with accuracy, sensitivity, and specificity of 76%, 89%, and 63%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_22-Improvement_of_Sample_Selection_A_Cascade_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Approach to Estimate Missing Data Using Spatio-Temporal Estimation Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070421</link>
        <id>10.14569/IJACSA.2016.070421</id>
        <doi>10.14569/IJACSA.2016.070421</doi>
        <lastModDate>2016-05-01T19:05:14.3730000+00:00</lastModDate>
        
        <creator>Aniruddha D. Shelotkar</creator>
        
        <creator>Dr. P. V. Ingole</creator>
        
        <subject>Error concealment; Wavelet Transform; Missing Data estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>With the advancement of wireless technology and the processing power of mobile devices, every handheld device supports numerous video streaming applications. Generally, the user datagram protocol (UDP) is used in video transmission, which does not provide assured quality of service (QoS). Therefore, there is a need for video post-processing modules for error concealment. In this paper we propose one such algorithm to recover multiple lost blocks of data in video. The proposed algorithm is based on a combination of the wavelet transform and spatio-temporal data estimation. We decompose the frame with lost blocks into low- and high-frequency bands using the wavelet transform. The approximate information (low frequency) of a missing block is then estimated using spatial smoothing, and the details (high frequency) are added using bidirectional (temporal) prediction of the high-frequency wavelet coefficients. Finally, the inverse wavelet transform is applied to the modified wavelet coefficients to recover the frame. In the proposed algorithm, we carry out an automatic estimation of missing blocks in a spatio-temporal manner. Experiments are carried out with different YUV and compressed-domain streams. The experimental results show enhancement in PSNR as well as visual quality, cross-verified by video quality metrics (VQM).</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_21-Novel_Approach_to_Estimate_Missing_Data_Using_Spatio_Temporal_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Evaluation of Social Recommendation Systems: Challenges and Future</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070420</link>
        <id>10.14569/IJACSA.2016.070420</id>
        <doi>10.14569/IJACSA.2016.070420</doi>
        <lastModDate>2016-05-01T19:05:14.3570000+00:00</lastModDate>
        
        <creator>Priyanka Rastogi</creator>
        
        <creator>Dr. Vijendra Singh</creator>
        
        <subject>Social Recommender System; Social Tagging; Social Contextual Information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The issue of information overload can be effectively managed with the help of an intelligent system capable of proactively supervising users in accessing relevant or useful information in a tailored way, by pruning the large space of possible options. The key challenge lies in what information can be collected and assimilated to make effective recommendations. This paper discusses the reasons for the evolution of recommender systems, leading to the transition from traditional to social-information-based recommendations. A Social Recommender System (SRS) exploits social contextual information in the form of users' social links, social tags, and user-generated data, which contain substantial supplemental information about items or services expected to be of interest to the user, or about features of items. It therefore has tremendous potential for improving recommendation quality. A systematic literature review of SRS has been conducted, categorizing the various kinds of social-contextual information into explicit and implicit user-item information. This paper also analyses the key aspects of any generic recommender system, namely domain, personalization levels, privacy and trustworthiness, and recommender algorithms, to give researchers new to this field a better understanding.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_20-Systematic_Evaluation_of_Social_Recommendation_Systems_Challenges_and_Future.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Subset Feature Elimination Mechanism for Intrusion Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070419</link>
        <id>10.14569/IJACSA.2016.070419</id>
        <doi>10.14569/IJACSA.2016.070419</doi>
        <lastModDate>2016-05-01T19:05:14.3270000+00:00</lastModDate>
        
        <creator>Herve Nkiama</creator>
        
        <creator>Syed Zainudeen Mohd Said</creator>
        
        <creator>Muhammad Saidu</creator>
        
        <subject>classification; decision tree; features selection; intrusion detection system; NSL-KDD; scikit-learn</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Several studies have suggested that by selecting relevant features for an intrusion detection system, it is possible to considerably improve the detection accuracy and performance of the detection engine. Nowadays, with the emergence of new technologies such as Cloud Computing and Big Data, large amounts of network traffic are generated, and the intrusion detection system must dynamically collect and analyze the data produced by the incoming traffic. However, in a large dataset not all features contribute to representing the traffic; therefore, reducing and selecting a number of adequate features may improve the speed and accuracy of the intrusion detection system. In this study, a feature selection mechanism is proposed which aims to eliminate non-relevant features and to identify the features that contribute to improving the detection rate, based on the score each feature establishes during the selection process. To achieve that objective, a recursive feature elimination process was employed in association with a decision-tree-based classifier, and the suitable relevant features were then identified. This approach was applied to the NSL-KDD dataset, an improved version of the earlier KDD 1999 dataset; scikit-learn, a machine learning library written in Python, was used in this paper. Using this approach, relevant features were identified in the dataset and the accuracy rate was improved. These results support the idea that feature selection significantly improves classifier performance. Understanding the factors that help identify relevant features will allow the design of a better intrusion detection system.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_19-A_Subset_Feature_Elimination_Mechanism_for_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure High Dynamic Range Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070418</link>
        <id>10.14569/IJACSA.2016.070418</id>
        <doi>10.14569/IJACSA.2016.070418</doi>
        <lastModDate>2016-05-01T19:05:14.2970000+00:00</lastModDate>
        
        <creator>Med Amine Touil</creator>
        
        <creator>Noureddine Ellouze</creator>
        
        <subject>high dynamic range; tone mapping; range compression; integrity verification; encryption; scrambling; inverse tone mapping; range expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In this paper, a tone mapping algorithm is proposed to produce LDR (Low Dynamic Range) images from HDR (High Dynamic Range) images. In this approach, non-linear functions are applied to compress the dynamic range of HDR images. Security tools will then be applied to the resulting LDR images, and their effectiveness will be tested on the reconstructed HDR images. Three specific examples of security tools are described in more detail: integrity verification using a hash function to compute local digital signatures, encryption for confidentiality, and a scrambling technique.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_18-Secure_High_Dynamic_Range_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New CAD System for Breast Microcalcifications Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070417</link>
        <id>10.14569/IJACSA.2016.070417</id>
        <doi>10.14569/IJACSA.2016.070417</doi>
        <lastModDate>2016-05-01T19:05:14.2630000+00:00</lastModDate>
        
        <creator>H. Boulehmi</creator>
        
        <creator>H. Mahersia</creator>
        
        <creator>K. Hamrouni</creator>
        
        <subject>Artifacts and pectoral muscle removal; Bayesian back-propagation neural network; Breast microcalcifications; CAD system; Digital mammograms; Galactophorous tree interpolation;  GGD estimation; Morphologic features; Neuro-fuzzy system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Breast cancer is one of the most deadly cancers in the world, especially among women. With no identified causes and no effective treatment, early detection remains necessary to limit the damage and provide a possible cure. Submitting women with a family history to periodic mammography can provide an early diagnosis of breast tumors. Computer Aided Diagnosis (CAD) is a powerful tool that can help radiologists improve their diagnostic accuracy at earlier stages. Several works have been developed to analyze digital mammograms, detect possible lesions (especially masses and microcalcifications), and evaluate their malignancy. In this paper, a new approach to breast microcalcification diagnosis on digital mammograms is introduced. The proposed approach begins with a preprocessing procedure aimed at artifact and pectoral muscle removal based on morphological operators, and contrast enhancement based on galactophorous tree interpolation. The second step of the proposed CAD system consists of segmenting microcalcification clusters using Generalized Gaussian Density (GGD) estimation and a Bayesian back-propagation neural network. The last step is microcalcification characterization using morphological features, which feed a neuro-fuzzy system that classifies the detected breast microcalcifications into benign and malignant classes.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_17-A_New_CAD_System_for_Breast_Microcalcifications_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Method for Text Hiding in the Image by Using LSB</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070416</link>
        <id>10.14569/IJACSA.2016.070416</id>
        <doi>10.14569/IJACSA.2016.070416</doi>
        <lastModDate>2016-05-01T19:05:14.2330000+00:00</lastModDate>
        
        <creator>Reza tavoli</creator>
        
        <creator>Maryam bakhshi</creator>
        
        <creator>Fatemeh salehian</creator>
        
        <subject>Hiding text inside an image; image processing; Steganography; image compression; LSB</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>An important topic in the exchange of confidential messages over the Internet is the security of information conveyance. For instance, the producers and consumers of digital products are keen to know that their products are authentic and can be differentiated from those that are invalid. The science of data hiding is the art of embedding data in audio files, images, videos, or text in a way that meets the above security needs. Steganography is a branch of data-hiding science which aims to reach a desirable level of security in the exchange of private military and commercial data by keeping that exchange inconspicuous. These approaches can be used as methods complementary to encryption in the exchange of private data.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_16-A_New_Method_for_Text_Hiding_in_the_Image_by_Using_LSB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Artificial Immune System Approach Based on Monoclonal Principle for Job Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070415</link>
        <id>10.14569/IJACSA.2016.070415</id>
        <doi>10.14569/IJACSA.2016.070415</doi>
        <lastModDate>2016-05-01T19:05:14.2030000+00:00</lastModDate>
        
        <creator>Shaha Al-Otaibi</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <subject>content-based filtering; computational intelligence; artificial immune system; clonal selection; monoclonal antibodies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Finding the best solution to an optimization problem is a tedious task, especially in the presence of enormously many features. When we handle a problem such as job recommendation, with its diversity of features, we should rely on metaheuristics; for example, the Artificial Immune System is a computational intelligence paradigm that achieves diversification and exploration of the search space, as well as exploitation of the good solutions reached, in reasonable time. Unfortunately, in problems of a diverse nature such as job recommendation, it produces a huge number of antibodies, causing a large number of matching processes that affect the system's efficiency. To address this issue, we present a new intelligence algorithm inspired by immunology and based on the monoclonal antibody production principle which, to our knowledge, has never been applied to science and engineering problems. The proposed algorithm recommends a ranked list of the best applicants for a given job. We discuss the design issues, as well as the immune system processes that should be applied to the problem. Finally, experiments are conducted that show the excellence of our approach.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_15-New_Artificial_Immune_System_Approach_Based_on_Monoclonal_Principle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Network on Chip Design Dedicated to Multicast Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070414</link>
        <id>10.14569/IJACSA.2016.070414</id>
        <doi>10.14569/IJACSA.2016.070414</doi>
        <lastModDate>2016-05-01T19:05:14.1700000+00:00</lastModDate>
        
        <creator>Mohamed Fehmi Chatmen</creator>
        
        <creator>Adel Baganne</creator>
        
        <creator>Rached Tourki</creator>
        
        <subject>Network-on-Chip (NoC); adaptive routing; Quality of service; Multicast</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The qualities of service provided in a network on chip are considered network performance criteria. However, the implementation of a quality of service such as multicasting presents difficulties, especially at the algorithmic level. Numerous studies have tried to implement networks that support the multicast service by adopting various algorithms to keep the average network latency acceptable. To evaluate these algorithms, their performance is compared with that of algorithms based on multi-unicast. As expected, there is always a performance improvement. Regrettably, there is a possible degradation of latency introduced by such a service because of the large occupation of the network bandwidth for some period (which depends on packet size). In this paper, we propose an architectural solution aiming to avoid this possible degradation.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_14-A_New_Network_on_Chip_Design_Dedicated_to_Multicast_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cultural Dimensions of Behaviors Towards E-Commerce in a Developing Country Context</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070413</link>
        <id>10.14569/IJACSA.2016.070413</id>
        <doi>10.14569/IJACSA.2016.070413</doi>
        <lastModDate>2016-05-01T19:05:14.1070000+00:00</lastModDate>
        
        <creator>Fahim Akhter</creator>
        
        <subject>Electronic Commerce; Security; Culture; Online Shopping; Privacy; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Customers prefer to shop online for various reasons, such as saving time, better prices, convenience, selection, and availability of products and services. The accessibility and the ubiquitous nature of the Internet facilitate business beyond brick and mortar. Web-based businesses are required to understand consumers’ expectations, attitudes, and behavior across the globe and to take cultural effects into consideration. Saudi Arabia has become a highly lucrative potential market for web-based companies. However, the growing number of Saudi Internet users have not become leading online shoppers. It is important for web-based companies to identify the barriers that are causing Saudi users to stay out of the online shopping mainstream. This requires understanding Saudi culture, expectations, behavior, and decision-making processes in order to promote e-commerce. The purpose of this study is to investigate the effects of Saudi Arabian culture on the diffusion process of e-commerce. The study addresses cultural differences, risk perceptions, and attitudes by surveying Saudi people about shopping online. An empirical study was conducted to collect data from Saudi users.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_13-Cultural_Dimensions_of_Behaviors_Towards_E_Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Method to Design S-Boxes Based on Key-Dependent Permutation Schemes and its Quality Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070412</link>
        <id>10.14569/IJACSA.2016.070412</id>
        <doi>10.14569/IJACSA.2016.070412</doi>
        <lastModDate>2016-05-01T19:05:14.0930000+00:00</lastModDate>
        
        <creator>Kazys Kazlauskas</creator>
        
        <creator>Robertas Smaliukas</creator>
        
        <creator>Gytis Vaicekauskas</creator>
        
        <subject>data encryption; substitution boxes; generation algorithms; distance metrics; quality analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>S-boxes are used in block ciphers as important nonlinear components. This nonlinearity provides important protection against linear and differential cryptanalysis. The S-boxes used in the encryption process can be chosen to be key-dependent. In this paper, we present four simple algorithms for generating key-dependent S-boxes. For quality analysis of the key-dependent S-boxes, we propose eight distance metrics. We take the Matlab function “randperm” as the standard of permutation and compare it with the permutation possibilities of the proposed algorithms. In the second section we describe the four algorithms, which generate key-dependent S-boxes. In the third section we analyze the eight normalized distance metrics which we use to evaluate the quality of the key-dependent generation algorithms. Afterwards, we experimentally investigate the quality of the generated key-dependent S-boxes. The comparison results show that the key-dependent S-boxes have good quality and may be applied in cipher systems.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_12-A_Novel_Method_to_Design_S_Boxes_Based_on_Key_Dependent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070411</link>
        <id>10.14569/IJACSA.2016.070411</id>
        <doi>10.14569/IJACSA.2016.070411</doi>
        <lastModDate>2016-05-01T19:05:14.0770000+00:00</lastModDate>
        
        <creator>Ahmad A. Saifan</creator>
        
        <creator>Mohammed Akour</creator>
        
        <creator>Iyad Alazzam</creator>
        
        <creator>Feras Hanandeh</creator>
        
        <subject>Regression Test; Regression Test selection technique; Meta-Model; Models Traceability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all the test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases to be used in the regression testing process. The approach has been evaluated using three Java applications. To measure the effectiveness of the proposed approach, we compare its results with those of the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_11-Regression_Test_Selection_Technique_Using_Component_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Use of Kit-Build Concept Map System to Support Reading Comprehension of EFL in Comparing with Selective Underlining Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070410</link>
        <id>10.14569/IJACSA.2016.070410</id>
        <doi>10.14569/IJACSA.2016.070410</doi>
        <lastModDate>2016-05-01T19:05:14.0470000+00:00</lastModDate>
        
        <creator>Mohammad ALKHATEEB</creator>
        
        <creator>Yusuke HAYASHI</creator>
        
        <creator>Taha RAJAB</creator>
        
        <creator>Tsukasa HIRASHIMA</creator>
        
        <subject>Technology-Enhanced Learning; Kit-Build Concept Map; Reading Comprehension; EFL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In this paper, we describe the effects of using the Kit-Build concept mapping (KB-mapping) method as technology-enhanced support for Reading Comprehension (RC) in English as a Foreign Language (EFL) contexts. RC is a process that helps learners become more effective and efficient readers. It is an intentional, active, and interactive activity that language learners experience in their daily working activities. RC of EFL is a significant research area in technology-enhanced learning. To clarify the effect of the KB-mapping method, we compared its results with those of the selective underlining (SU) strategy through a Comprehension Test (CT) and a Delayed Comprehension Test (DCT) performed two weeks later. The results show a noticeable difference in the DCT scores, while there is no significant difference in the CT scores. This indicates that the use of the KB-mapping method helps learners retain information for a longer period of time. Further statistical analysis of the results of the Kit-Build Conditions (KB-conditions) group, compared with the map scores, showed that the learners could answer 76% of the questions whose answers were included in their learner maps. It was also found that learners could recall 86% of the questions whose answers were included in their learner maps. This indicates that the use of the KB-mapping method helps learners retain and recall more information compared with the SU strategy, even after two weeks have elapsed. In a follow-up questionnaire after the end of all experiments, participants reported that using KB-mapping was similar to SU for the CT immediately after use, but that KB-mapping was more useful for remembering information after a while, though it was more difficult to carry out. Participants liked using it in RC tasks, but asked for more time to do so.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_10-Experimental_Use_of_Kit_Build_Concept_Map_System_to_Support_Reading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Rule Base System in Mobile Platform to Build Alert System for Evacuation and Guidance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070409</link>
        <id>10.14569/IJACSA.2016.070409</id>
        <doi>10.14569/IJACSA.2016.070409</doi>
        <lastModDate>2016-05-01T19:05:14.0300000+00:00</lastModDate>
        
        <creator>Maysoon Fouad Abulkhair</creator>
        
        <creator>Lamiaa Fattouh Ibrahim</creator>
        
        <subject>safety; Natural disasters; smartphone; rule-based system; Mobile Network; Smart Phone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>The last few years have witnessed the widespread use of mobile technology. Billions of citizens around the world own smartphones, which they use for both personal and business applications; such technologies can thus minimize the risk of losing people&#39;s lives. The mobile platform is one of the most popular platform technologies, utilized on a wide scale and accessible to a large number of people. There has been a huge increase in natural and manmade disasters in the last few years. Such disasters can happen anytime and anywhere, causing major damage to people and property. Recent catastrophic events in Jeddah city resulted from environmental conditions and the failure of people to move to other, safer places. Floods cause the sinking and destruction of homes and private properties. Thus, this paper describes a system that can help determine the affected properties, evacuate them, and provide proper guidance to the users registered in the system. This system notifies mobile phone users by sending guidance messages and sound alerts in real time when disasters (fires, floods) hit. Warnings and tips are received on the user&#39;s mobile to teach him/her how to react before, during, and after the disaster. It provides a mobile application that uses GPS to determine the user&#39;s location and guides the user along the best route with the aid of a rule-based system built through interviews with domain experts. Moreover, the user will receive Google Maps updates for any added information. This system consists of two subsystems: the first helps students in our university evacuate during a catastrophe, and the second aids all people in the city. With all these features, the system can access the required information at the needed time.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_9-Using_Rule_Base_System_in_Mobile_Platform_to_Build_Alert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating Multiple Attributes in Social Networks to Enhance the Collaborative Filtering Recommendation Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070408</link>
        <id>10.14569/IJACSA.2016.070408</id>
        <doi>10.14569/IJACSA.2016.070408</doi>
        <lastModDate>2016-05-01T19:05:14.0000000+00:00</lastModDate>
        
        <creator>Jian Yi</creator>
        
        <creator>Xiao Yunpeng</creator>
        
        <creator>Liu Yanbing</creator>
        
        <subject>Recommender System; Social Networks; Collaborative Filtering; Comprehensive Similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Given that existing recommendation algorithms rely on a single principle for user similarity calculation and that recommender system accuracy is not satisfactory, we propose a novel social multi-attribute collaborative filtering algorithm (SoMu). We first define the user attraction similarity from users’ historical rating behaviors using graph theory, and secondly define the user interaction similarity from users’ social friendships, based on the social relationships of being followed and following. Then, we combine the user attraction similarity and the user interaction similarity to obtain a multi-attribute comprehensive user similarity model. Finally, personalized recommendation is realized according to the comprehensive similarity model. Experimental results on Douban and MovieLens show that the proposed algorithm successfully incorporates multiple attributes of social networks into the recommendation algorithm and improves the accuracy of the recommender system with the improved comprehensive similarity computing model.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_8-Incorporating_Multiple_Attributes_in_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach of Self-Organizing Systems Based on Factor-Order Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070407</link>
        <id>10.14569/IJACSA.2016.070407</id>
        <doi>10.14569/IJACSA.2016.070407</doi>
        <lastModDate>2016-05-01T19:05:13.9670000+00:00</lastModDate>
        
        <creator>Jin Li</creator>
        
        <creator>Ping He</creator>
        
        <subject>Self-organization; factor-order space; extended order; extended entropy; systems level</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>To explore a new theory of system self-organization, it is urgent to find a new method in systems science. This paper combines factor space theory with system non-optimum theory, applies the combination to the study of system self-organization, and proposes the new concepts of system factor space, object-factor, and space-order relation. It constructs a factor-space framework of system self-organization based on factor mapping and object inversion, studies system ordering from a new perspective with optimum and non-optimum attributes as the basis of system uncertainty, and expands factor space theory from f(0,o) to f(o,0). The research suggests that constructing a system factor space amounts to building an information system capable of self-learning for system self-organization, and that the functions of system self-organization can be further enhanced by adopting information fusion of data analysis and perception judgment.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_7-An_Approach_of_Self_Organizing_Systems_Based_on_Factor_Order_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improve Traffic Management in the Vehicular Ad Hoc Networks by Combining Ant Colony Algorithm and Fuzzy System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070406</link>
        <id>10.14569/IJACSA.2016.070406</id>
        <doi>10.14569/IJACSA.2016.070406</doi>
        <lastModDate>2016-05-01T19:05:13.9530000+00:00</lastModDate>
        
        <creator>Fazlollah Khodadadi</creator>
        
        <creator>Seyed Javad Mirabedini</creator>
        
        <creator>Ali Harounabadi</creator>
        
        <subject>Traffic Management; Vehicular Ad-hoc Networks; Ant Colony Algorithms; Fuzzy System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Over recent years, the total number of vehicles has increased. Heavy traffic leads to serious problems, and finding a sensible solution to the traffic problem is a significant challenge. Using the full capacity of existing streets can also help to solve this problem and reduce costs. Instead of using static algorithms, we present a new method that combines an ant colony optimization (ACO) algorithm with fuzzy logic as a fair solution to improve traffic management in vehicular ad hoc networks. We call this method Improved Traffic Management in Vehicular ad hoc networks (ITMV). The proposed method segments the map and assigns each segment to one server, calculates the instantaneous state of traffic on the roads using fuzzy logic, and distributes traffic to reduce congestion as much as possible by prioritizing the less time-consuming route rather than the shorter one. The method collects vehicle and street information to calculate the instantaneous state of vehicle density. Through simulations, the proposed method was compared with some existing methods and improved speed, travel time, and air pollution by an average of 36.5%, 38%, and 29%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_6-Improve_Traffic_Management_in_the_Vehicular_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud CRM: State-of-the-Art and Security Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070405</link>
        <id>10.14569/IJACSA.2016.070405</id>
        <doi>10.14569/IJACSA.2016.070405</doi>
        <lastModDate>2016-05-01T19:05:13.9200000+00:00</lastModDate>
        
        <creator>Amin Shaqrah</creator>
        
        <subject>Cloud computing; CRM; Security; Cloud Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Security undoubtedly plays the main role in cloud CRM deployment, since agile firms use cloud services on providers’ infrastructures to perform critical CRM operations. In this paper the researcher emphasizes cloud CRM themes, with security threats the foremost concern. Several security aspects of deploying cloud CRM are discussed, including: access to and control of customer databases; secure data transfer over the cloud; trust between the enterprise and the cloud service provider; the confidentiality, integrity, and availability triad; and security hazards. Future studies and practice are presented at the end.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_5-Cloud_CRM_State_of_the_Art_and_Security_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Attack Classification and Recognition Using HMM and Improved Evidence Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070404</link>
        <id>10.14569/IJACSA.2016.070404</id>
        <doi>10.14569/IJACSA.2016.070404</doi>
        <lastModDate>2016-05-01T19:05:13.8270000+00:00</lastModDate>
        
        <creator>Gang Luo</creator>
        
        <creator>Ya Wen</creator>
        
        <creator>Lingyun Xiang</creator>
        
        <subject>Hidden Markov Model; Evidence theory; Network attack; KDD CUP99; Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>In this paper, a decision model of fusion classification based on HMM-DS is proposed, and the training and recognition methods of the model are given. A pure HMM classifier cannot achieve an ideal balance between giving each model a strong ability to identify its target and maximizing the difference between models. Therefore, in this paper the results of the HMMs are integrated into the DS framework, with the HMMs providing state probabilities for DS: the output of each hidden Markov model is used as a body of evidence. An improved evidence theory method is proposed to fuse the results and overcome the drawbacks of the pure HMM, improving the classification accuracy of the system. We compare our approach with the traditional evidence theory method, other representative improved DS methods, the pure HMM method, and common classification methods. The experimental results show that our proposed method has a significant practical effect in improving the training process of network attack classification with high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_4-Network_Attack_Classification_and_Recognition_Using_HMM_and_Improved_Evidence_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Appliance Coordination Scheme with Waiting Time in Smart Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070403</link>
        <id>10.14569/IJACSA.2016.070403</id>
        <doi>10.14569/IJACSA.2016.070403</doi>
        <lastModDate>2016-05-01T19:05:13.7800000+00:00</lastModDate>
        
        <creator>Firas A. Al Balas</creator>
        
        <creator>Wail Mardini</creator>
        
        <creator>Yaser Khamayseh</creator>
        
        <creator>Dua’a Ah.K.Bani-Salameh</creator>
        
        <subject>smart grids; energy bill; off-peak</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Smart grids aim to merge advances in communications and information technologies with traditional power grids. In smart grids, users can generate energy and sell it to the local utility supplier, and they can reduce energy consumption by shifting appliances’ start times to off-peak hours. Many researchers have proposed techniques to address this issue for home appliances, such as the Appliances Coordination (ACORD) scheme and the Appliances Coordination with Feed-In (ACORD-FI) scheme.
The goal of this work is to introduce an efficient scheme that reduces the total cost of energy bills by building on the ACORD-FI scheme. Three scheduling schemes are proposed: Appliances Coordination by Giving Waiting Time (ACORD-WT), Appliances Coordination by Giving Priority (ACORD-P), and the use of photovoltaic (PV) generation with priority and waiting-time scheduling algorithms.
A simulator written in C++ is used to test the performance of the proposed schemes. The performance metric used is the total savings in the cost of the energy bill in dollars. The proposed schemes are first compared with ACORD-FI, and the results show that ACORD-WT is more efficient than ACORD-FI regardless of the number of appliances. Moreover, the proposed ACORD-P is also better than the standard ACORD-FI.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_3-Improved_Appliance_Coordination_Scheme_with_Waiting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Satellite Image Enhancement Using Quantum Genetic and Weighted IHS+Wavelet Fusion Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070402</link>
        <id>10.14569/IJACSA.2016.070402</id>
        <doi>10.14569/IJACSA.2016.070402</doi>
        <lastModDate>2016-05-01T19:05:13.7500000+00:00</lastModDate>
        
        <creator>Amal A. HAMED</creator>
        
        <creator>Osama A. OMER</creator>
        
        <creator>Usama S. MOHAMED</creator>
        
        <subject>Quantum genetic algorithm (QGA); HIS; fusion; wavelet; registration; super-resolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>This paper examines the applicability of quantum genetic algorithms to the optimization problems posed by satellite image enhancement techniques, particularly super-resolution and fusion. We introduce a framework that starts from reconstructing a higher-resolution panchromatic image using the subpixel shifts between a set of lower-resolution images (registration), followed by interpolation and restoration, through to using the higher-resolution image to pan-sharpen a multispectral image with a weighted IHS+Wavelet fusion technique. For successful super-resolution, accurate image registration should be achieved by optimal estimation of the subpixel shifts, and blind restoration and interpolation with optimal parameters should be performed to obtain the best-quality higher-resolution image. There is a trade-off between spatial and spectral enhancement in image fusion, and it is difficult for existing methods to do well in both aspects. The objective here is to meet all the combined requirements with optimal fusion weights, using parameter constraints to direct the optimization process. The QGA is used to estimate the optimal parameters needed for each mathematical model in this super-resolution and fusion framework. The simulation results show that the QGA-based method can automatically estimate the parameters requiring maximal accuracy, achieving higher quality and a more efficient convergence rate than the corresponding conventional GA-based and classic computational methods.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_2-A_Framework_for_Satellite_Image_Enhancement_Using_Quantum_Genetic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Altered Region for Biomarker Discovery in Hepatocellular Carcinoma (HCC) Using Whole Genome SNP Array</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070401</link>
        <id>10.14569/IJACSA.2016.070401</id>
        <doi>10.14569/IJACSA.2016.070401</doi>
        <lastModDate>2016-05-01T19:05:13.6870000+00:00</lastModDate>
        
        <creator>Esraa M. Hashem</creator>
        
        <creator>Mai S. Mabrouk</creator>
        
        <creator>Ayman M. Eldeib</creator>
        
        <subject>Hepatocellular carcinoma; copy number alteration; biomarkers; single-nucleotide polymorphism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(4), 2016</description>
        <description>Cancer represents one of the greatest medical causes of mortality. The majority of hepatocellular carcinomas arise from the accumulation of genetic abnormalities, possibly induced by exterior etiological factors, especially HCV and HBV infections. New tools are needed to analyze the large volume of data and identify relevant genetic changes that may be critical both for understanding how cancers develop and for determining how they could ultimately be treated. Gene expression profiling may lead to new biomarkers that help improve diagnostic accuracy for detecting hepatocellular carcinoma. In this work, a statistical technique (the discrete stationary wavelet transform) is proposed for detecting copy number alterations by analyzing high-density single-nucleotide polymorphism arrays of 30 cell lines on specific chromosomes that are frequently affected in hepatocellular carcinoma. The results demonstrate the feasibility of whole-genome fine mapping of copy number alterations via high-density single-nucleotide polymorphism genotyping. A novel altered chromosomal region is discovered: amplification of region 4q22.1 is detected in 22 out of 30 hepatocellular carcinoma cell lines (73%). This region strikes AFF1 and DSPP, tumor suppressor genes. This finding has not previously been reported to be involved in liver carcinogenesis; it can be used to discover a new HCC biomarker, which helps in a better understanding of hepatocellular carcinoma.</description>
        <description>http://thesai.org/Downloads/Volume7No4/Paper_1-Novel_Altered_Region_for_Biomarker_Discovery_in_Hepatocellular_Carcinoma.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Intelligent Approach for Predicting Product Compositions of a Distillation Column</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050405</link>
        <id>10.14569/IJARAI.2016.050405</id>
        <doi>10.14569/IJARAI.2016.050405</doi>
        <lastModDate>2016-04-10T13:23:03.5630000+00:00</lastModDate>
        
        <creator>Yousif Al-Dunainawi</creator>
        
        <creator>Maysam F. Abbod</creator>
        
        <subject>Hybrid Intelligence; Prediction; Distillation Column; Neural network; Particle swarm optimisation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(4), 2016</description>
        <description>Composition measurement is a critically important issue for the modelling and control of the distillation process. The product compositions of distillation columns are traditionally measured using indirect techniques, by inferring tray compositions from temperature or by using an online analyser; these techniques have been reported to be inefficient and relatively slow. In this paper, an alternative procedure is presented to predict the compositions of a binary distillation column. A particle swarm optimisation based artificial neural network (PSO-ANN) is trained by different algorithms and tested on new unseen data to check the generality of the proposed method. Particle swarm optimisation is utilised here to choose the optimal topology of the network. The simulation results indicate a reasonable prediction accuracy, with a minimal error between the predicted and simulated data of the column.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No4/Paper_5-Hybrid_Intelligent_Approach_for_Predicting_Product.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple-Language Translation System Focusing on Long-distance Medical and Outpatient Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050404</link>
        <id>10.14569/IJARAI.2016.050404</id>
        <doi>10.14569/IJARAI.2016.050404</doi>
        <lastModDate>2016-04-10T13:23:03.5330000+00:00</lastModDate>
        
        <creator>Rena Aierken</creator>
        
        <creator>Li Xiao</creator>
        
        <creator>Su Sha</creator>
        
        <creator>Dawa Yidemucao</creator>
        
        <subject>questionnaire for outpatient cases; Chinese Uyghur language; medical Chinese-Minority language parallel corpus; statistical machine translation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(4), 2016</description>
        <description>For people living in the countryside, an effective long-distance medical and health service is very important. People living in western China, especially, require convenient communication in their native language with doctors working in a modern city. To address this problem, a multiple-language translation system for long-distance medical and outpatient services is discussed. This system initially provides a table containing basic information including disease names and symptoms for different medical classifications, and then translates the sentences selected from the table automatically using a machine translation system. Finally, a PDF file is created for the doctor and the patient. In this paper, the system construction and evaluation of the machine translation are introduced.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No4/Paper_4-Multiple_Language_Translation_System_Focusing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Location Monitoring System with GPS, Zigbee and Wifi Beacon for Rescuing Disable Persons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050403</link>
        <id>10.14569/IJARAI.2016.050403</id>
        <doi>10.14569/IJARAI.2016.050403</doi>
        <lastModDate>2016-04-10T13:23:03.4700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Taka Eguchi</creator>
        
        <subject>Rescue system; Location estimation; Attitude estimation; Health monitoring; Mobile applications;  Triage; Rescue planning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(4), 2016</description>
        <description>A location monitoring system for rescuing disabled persons is proposed that switches among location estimation methods using GPS, ZigBee, and WiFi beacons. A rescue system with triage, using health condition monitoring together with location and attitude monitoring as well as other data acquired with mobile devices, is evaluated with the proposed location monitoring system. Through a simulation study, the influence of location estimation error on rescue time is evaluated together with the effect of the proposed location monitoring system. The effect of triage on rescue time is also clarified.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No4/Paper_3-Location_Monitoring_System_with_GPS_Zigbee_and_Wifi_Beacon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wildlife Damage Estimation and Prediction Using Blog and Tweet Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050402</link>
        <id>10.14569/IJARAI.2016.050402</id>
        <doi>10.14569/IJARAI.2016.050402</doi>
        <lastModDate>2016-04-10T13:23:03.4530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Shohei Fujise</creator>
        
        <subject>Wildlife damage; Blog; Tweet; Big data analysis; Natural language recognition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(4), 2016</description>
        <description>Wildlife damage estimation and prediction using blog and tweet information is conducted. Through a regression analysis of ground-truth data about wildlife damage, acquired by the federal and provincial governments, against blog and tweet information about wildlife damage acquired in the same year, it is found that there is some potential for estimating and predicting wildlife damage. Through experiments, it is found that the R2 value of the relation between the government-gathered truth data on wildlife damage and the wildlife damage derived from blog and tweet information is more than 0.75. It is also possible to predict wildlife damage by using past truth data and the estimated wildlife damage. Therefore, it is concluded that the proposed method is applicable to estimating and predicting wildlife damage.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No4/Paper_2-Wildlife_Damage_Estimation_and_Prediction_Using_Blog_and_Tweet_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>One of the Possible Causes for Diatom Appearance in Ariake Bay Area in Japan In the Winter from 2010 to 2015 (Clarified with AQUA/MODIS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050401</link>
        <id>10.14569/IJARAI.2016.050401</id>
        <doi>10.14569/IJARAI.2016.050401</doi>
        <lastModDate>2016-04-10T13:23:03.4070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>chlorophyl-a concentration; red tide; diatom; MODIS; satellite remote sensing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(4), 2016</description>
        <description>One of the possible causes for diatom appearance in the Ariake bay area in Japan in the winter seasons from 2010 to 2015 is clarified with AQUA/MODIS remote sensing satellite data. Two months (January and February) of AQUA/MODIS-derived chlorophyll-a concentration are used for the analysis of diatom appearance. Match-up data of AQUA/MODIS with evidence of diatom appearance are extracted from the MODIS database. Through experiments, it is found that diatoms appear after a long period of relatively small red tide appearances. The appearance also depends on the weather conditions and tidal effects, as well as the water current in the bay area in particular.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No4/Paper_1-One_of_the_Possible_Causes_for_Diatom_Appearance_in_Ariake.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Enhancement of Patch-based Descriptors for Image Copy Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070361</link>
        <id>10.14569/IJACSA.2016.070361</id>
        <doi>10.14569/IJACSA.2016.070361</doi>
        <lastModDate>2016-03-31T17:20:55.8130000+00:00</lastModDate>
        
        <creator>Junaid Baber</creator>
        
        <creator>Maheen Bakhtyar</creator>
        
        <creator>Waheed Noor</creator>
        
        <creator>Abdul Basit</creator>
        
        <creator>Ihsan Ullah</creator>
        
        <subject>Content-based image copy detection, SIFT, CSLBP, robust descriptors, patch based descriptors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Images have become a main source of information, learning, and entertainment, but due to advances in multimedia technologies, millions of images are shared on the Internet daily and can be easily duplicated and redistributed. Distribution of these duplicated and transformed images causes many problems and challenges, such as piracy, redundancy, and content-based image indexing and retrieval. To address these problems, copy detection systems based on local features are widely used. Initially, keypoints are detected and represented by robust descriptors. The descriptors are computed over affine patches around the keypoints; these patches should be repeatable under photometric and geometric transformations. However, there are two main challenges with patch-based descriptors: (1) the affine patch over a keypoint can produce similar descriptors under an entirely different scene or context, which causes “ambiguity”, and (2) the descriptors are not sufficiently “distinctive” under image noise. Due to these limitations, copy detection systems suffer in performance. We present a framework that makes descriptors more distinguishable and robust by influencing them with the texture and gradients in their vicinity. Experimental evaluation on keypoint matching and image copy detection under severe transformations shows the effectiveness of the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_61-Performance_Enhancement_of_Patch_based_Descriptors_for_Image_Copy_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Context-Aware Recommender System for Personalized Places in Mobile Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070360</link>
        <id>10.14569/IJACSA.2016.070360</id>
        <doi>10.14569/IJACSA.2016.070360</doi>
        <lastModDate>2016-03-31T17:20:55.7970000+00:00</lastModDate>
        
        <creator>Soha A.El-Moemen Mohamed</creator>
        
        <creator>Taysir Hassan A.Soliman</creator>
        
        <creator>Adel A.Sewisy</creator>
        
        <subject>recommender system, context, context-aware, genetic algorithm, gamma function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Selecting the most appropriate places under different contexts is an important contribution nowadays for people who visit places for the first time. The aim of this work is to build a context-aware recommender system that recommends places to users based on the current weather, the time of day, and the user’s mood. The context-aware recommender system determines the current weather and time of day at the user’s location, then retrieves places appropriate to the context state at that location. The recommender system also takes the current user’s mood and selects the place the user should go. Places are recommended based on what other users have visited under similar context conditions. The recommender system assigns a rating to each place in each context for each user; the place ratings are calculated by a genetic algorithm based on the gamma function. Finally, a mobile application implementing the context-aware recommender system was developed.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_60-A_Context_Aware_Recommender_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Testing and Analysis of Activities of Daily Living Data with Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070359</link>
        <id>10.14569/IJACSA.2016.070359</id>
        <doi>10.14569/IJACSA.2016.070359</doi>
        <lastModDate>2016-03-31T17:20:55.7670000+00:00</lastModDate>
        
        <creator>Ayse Cufoglu</creator>
        
        <creator>Adem Coskun</creator>
        
        <subject>Activities of Daily Living (ADL); Machine Learning (ML); Classification Algorithms; Active and Independent Aging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>It is estimated that 28% of the European Union’s population will be aged 65 or older by 2060. Europe is getting older, and this has a high impact on the estimated cost of caring for older people. Compared to the younger generation, older people are more at risk of facing cognitive impairment, frailty, and social exclusion, which could have negative effects on their lives as well as on the economy of the European Union. The ‘active and independent ageing’ concept aims to support older people in living an active and independent life in their preferred location, and this goal can be fully achieved only by understanding older people (i.e., their needs, abilities, preferences, and the difficulties they face during the day). One of the most reliable resources for such information is Activities of Daily Living (ADL) data, which gives essential information about people’s lives. Understanding this kind of information is an important step towards providing the right support, facilities, and care for the older population. In the literature, there is a lack of studies that evaluate the performance of machine learning algorithms on ADL data. This work aims to test and analyze the performance of well-known machine learning algorithms on ADL data.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_59-Testing_and_Analysis_of_Activities_of_Daily_Living.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Grapheme Encoding and Phonemic Rule to Improve PNNR-Based Indonesian G2P</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070358</link>
        <id>10.14569/IJACSA.2016.070358</id>
        <doi>10.14569/IJACSA.2016.070358</doi>
        <lastModDate>2016-03-31T17:20:55.7330000+00:00</lastModDate>
        
        <creator>Suyanto</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Agus Harjoko</creator>
        
        <subject>Modified grapheme encoding; phonemic rule; Indonesian grapheme-to-phoneme conversion; pseudo nearest neighbour rule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>A grapheme-to-phoneme conversion (G2P) is very important in both speech recognition and synthesis. The existing Indonesian G2P based on the pseudo nearest neighbour rule (PNNR) has two drawbacks: the grapheme encoding does not accommodate all Indonesian phonemic rules, and the PNNR must select the best phoneme from all possible conversions even though these could be filtered by some phonemic rules. In this paper, a modified partial orthogonal binary grapheme encoding and a phonemic-based rule are proposed to improve the performance of PNNR-based Indonesian G2P. Evaluation on 5-fold cross-validation, with 40K words to develop the model and 10K words for evaluation in each fold, shows that both proposed concepts reduce the relative phoneme error rate (PER) by 13.07%. A more detailed analysis shows that most errors come from the grapheme ‘e’, which can be dynamically converted into either /E/ or /ə/, since the four prefixes ’ber’, ’me’, ’per’, and ’ter’ produce many ambiguous conversions with basic words, and also from some similar compound words with different pronunciations for the grapheme ‘e’. A stemming procedure can be applied to reduce those errors.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_58-Modified_Grapheme_Encoding_and_Phonemic_Rule.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Affinity Propagation Approaches on Data Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070357</link>
        <id>10.14569/IJACSA.2016.070357</id>
        <doi>10.14569/IJACSA.2016.070357</doi>
        <lastModDate>2016-03-31T17:20:55.7200000+00:00</lastModDate>
        
        <creator>R. Refianti</creator>
        
        <creator>A.B. Mutiara</creator>
        
        <creator>A.A. Syamsudduha</creator>
        
        <subject>Affinity Propagation; Availability; Clustering; Exemplar; Responsibility; Similarity Matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Classical techniques for clustering, such as k-means clustering, are very sensitive to the initial set of cluster centers, so they need to be rerun many times in order to obtain an optimal result. A relatively new clustering approach named Affinity Propagation (AP) has been devised to resolve these problems. Although AP seems to be very powerful, it still has several issues that need to be improved. In this paper, four improved approaches are discussed: Adaptive Affinity Propagation, Partition Affinity Propagation, Soft Constraint Affinity Propagation, and Fuzzy Statistic Affinity Propagation. These approaches are implemented and compared in order to identify the issues that AP really has to deal with and that need to be improved. According to the testing results, Partition Affinity Propagation is the fastest among the four approaches. On the other hand, Adaptive Affinity Propagation is much more tolerant to errors: it can remove oscillation when it occurs, where the occurrence of oscillation causes the algorithm to fail to converge. Adaptive Affinity Propagation is more stable than the others, since it can deal with errors that the others cannot. Finally, Fuzzy Statistic Affinity Propagation can produce a smaller number of clusters than the others, since it produces its own preferences using fuzzy iterative methods.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_57-Performance_Evaluation_of_Affinity_Propagation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Vertical Handoffs Using Mobility Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070356</link>
        <id>10.14569/IJACSA.2016.070356</id>
        <doi>10.14569/IJACSA.2016.070356</doi>
        <lastModDate>2016-03-31T17:20:55.7030000+00:00</lastModDate>
        
        <creator>Mahmoud Al-Ayyoub</creator>
        
        <creator>Ghaith Husari</creator>
        
        <creator>Wail Mardini</creator>
        
        <subject>Heterogeneous wireless networks, Vertical handoff, Markov model, Artificial intelligence, Mobility management.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The recent advances in wireless communications require integration of multiple network technologies in order to satisfy the increasing demand of mobile users. Mobility in such a heterogeneous environment entails that users keep moving between the coverage regions of different networks, which means that a non-trivial vertical handoff scheme is required in order to maintain a seamless transition from one network technology to another. A good vertical handoff scheme must provide the users with the best possible connection while keeping connection dropping probability to the minimum. In this paper, we propose a handoff scheme which employs the Markov model to predict the users’ future locations in order to make better handoff decisions with reduced connection dropping probability and number of unnecessary handoffs. Through simulation, the proposed scheme is compared with the SINR-based scheme, which was shown to outperform other vertical handoff schemes. The experiments show that the proposed scheme achieves significant improvements over the SINR-based scheme that can reach 51% in terms of the number of failed handoffs and 44% in terms of the number of handoffs.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_56-Improving_Vertical_Handoffs_Using_Mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterizing End-to-End Delay Performance of Randomized TCP Using an Analytical Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070355</link>
        <id>10.14569/IJACSA.2016.070355</id>
        <doi>10.14569/IJACSA.2016.070355</doi>
        <lastModDate>2016-03-31T17:20:55.6870000+00:00</lastModDate>
        
        <creator>Mohammad Shorfuzzaman</creator>
        
        <creator>Mehedi Masud</creator>
        
        <creator>Md. Mahfuzur Rahman</creator>
        
        <subject>Randomized TCP, end to end delay, congestion window, TCP pacing, propagation delay, Markov chain.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>TCP (Transmission Control Protocol) is the main transport protocol used in high-speed networks. In the OSI model, TCP resides in the Transport Layer and serves as a connection-oriented protocol which performs handshaking to create a connection. In addition, TCP provides end-to-end reliability. There are different standard variants of TCP (e.g., TCP Reno, TCP NewReno, etc.) which implement mechanisms to dynamically control the size of the congestion window, but they do not have any control over the sending time of successive packets. TCP pacing introduces the concept of controlling the packet sending time at TCP sources to reduce packet loss in a bursty traffic network. Randomized TCP is a new TCP pacing scheme which has shown better performance (considering throughput and fairness) over other TCP variants in bursty networks. The end-to-end delay of Randomized TCP is a very important performance measure which has not yet been addressed. In current high-speed networks, it is increasingly important to have mechanisms that keep end-to-end delay within an acceptable range. In this paper, we present a performance evaluation of the end-to-end delay of Randomized TCP. To this end, we have used an analytical and a simulation model to characterize the end-to-end delay performance of Randomized TCP.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_55-Characterizing_End_to_End_Delay_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Automated Test Item Creation Based on Learners Preferred Concept Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070354</link>
        <id>10.14569/IJACSA.2016.070354</id>
        <doi>10.14569/IJACSA.2016.070354</doi>
        <lastModDate>2016-03-31T17:20:55.6570000+00:00</lastModDate>
        
        <creator>Mohammad AL-Smadi</creator>
        
        <creator>Margit H&#246;fler</creator>
        
        <creator>Christian G&#252;tl</creator>
        
        <subject>Automated Assessment; Automatic Test-Item Creation; Self-Regulated Learning; Evaluation of CAL systems; Pedagogical issues; Natural-Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Recently, research has become increasingly interested in developing tools that are able to automatically create test items out of text-based learning content. Such tools might not only support instructors in creating tests or exams but also learners in self-assessing their learning progress. This paper presents an enhanced automatic question-creation tool (EAQC) that has been recently developed. The EAQC extracts the most important key phrases (concepts) out of textual learning content and automatically creates test items based on these concepts. Moreover, this paper discusses two studies evaluating the application of the EAQC in real learning settings. The first study showed that concepts extracted by the EAQC often, but not always, reflect the concepts extracted by learners. Learners typically extracted fewer concepts than the EAQC, and there was great inter-individual variation between learners with regard to which concepts they experienced as relevant. Accordingly, the second study investigated whether the functionality of the EAQC could be improved such that valid test items are created when the tool is fed with concepts provided by learners. The results showed that the quality of the semi-automated creation of test items was satisfactory. Moreover, this demonstrates the EAQC's flexibility in adapting its workflow to the individual needs of learners.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_54-An_Enhanced_Automated_Test_Item_Creation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algerian dialect: Study and Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070353</link>
        <id>10.14569/IJACSA.2016.070353</id>
        <doi>10.14569/IJACSA.2016.070353</doi>
        <lastModDate>2016-03-31T17:20:55.6270000+00:00</lastModDate>
        
        <creator>Salima Harrat</creator>
        
        <creator>Karima Meftouh</creator>
        
        <creator>Mourad Abbas</creator>
        
        <creator>Khaled-Walid Hidouci</creator>
        
        <creator>Kamel Smaili</creator>
        
        <subject>Arabic dialect, Algerian dialect, Modern Standard Arabic, Grapheme to Phoneme Conversion, Morphological Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Arabic is the official language of all Arab countries; it is used for official speeches, newspapers, public administration and school. In parallel, for everyday communication, non-official talks, songs and movies, Arab people use their dialects, which are inspired by Standard Arabic and differ from one Arabic country to another. This linguistic phenomenon is called diglossia, a situation in which two distinct varieties of a language are spoken within the same speech community. It is observed throughout all Arab countries: Standard Arabic is widely written but not used in everyday conversation, while dialects are widely spoken in everyday life but almost never written. Thus, in the NLP area, a lot of work has been dedicated to written Arabic. In contrast, Arabic dialects were, until recently, not studied enough; interest in them is recent. The first work on these dialects began in the last decade, for Middle Eastern ones. Dialects of the Maghreb are just beginning to be studied. Compared to written Arabic, dialects are under-resourced languages which suffer from a lack of NLP resources despite their wide use. In this paper, we deal with the Algerian Arabic dialect, a non-resourced language for which no known resource is available to date. We present a first linguistic study introducing its most important features, and we describe the resources that we created from scratch for this dialect.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_53-An_Algerian_dialect_Study_and_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Implementation of Bias Field Correction Fuzzy C-Means Algorithm for Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070352</link>
        <id>10.14569/IJACSA.2016.070352</id>
        <doi>10.14569/IJACSA.2016.070352</doi>
        <lastModDate>2016-03-31T17:20:55.6100000+00:00</lastModDate>
        
        <creator>Nouredine AITALI</creator>
        
        <creator>Bouchaib CHERRADI</creator>
        
        <creator>Ahmed EL ABBASSI</creator>
        
        <creator>Omar BOUATTANE</creator>
        
        <creator>Mohamed YOUSSFI</creator>
        
        <subject>Image segmentation; Bias field correction; GPU; Non homogeneity intensity; CUDA; Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Image segmentation in the medical field is one of the most important phases in disease diagnosis. The bias field estimation algorithm is one of the most interesting techniques for correcting the intensity inhomogeneity artifact in an image. However, the use of such a technique requires powerful processing and is quite expensive for large data such as medical images. Hence, parallelism becomes increasingly necessary. Several researchers have followed this path, mainly in the bioinformatics field, where they have suggested different algorithm implementations. In this paper, a novel Single Instruction Multiple Data (SIMD) architecture for the bias field estimation and image segmentation algorithm is proposed. In order to accelerate compute-intensive portions of the sequential implementation, we have implemented this algorithm on three different graphics processing unit (GPU) cards, named GT740m, GTX760 and GTX580 respectively, using the Compute Unified Device Architecture (CUDA) software programming tool. The numerical results obtained for the computational speed-up allowed us to draw conclusions about the most suitable GPU architecture for this kind of application and similar ones.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_52-Parallel_Implementation_of_Bias_Field_Correction_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Feasible and Maintainable Ontology for Automatic Landscape Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070351</link>
        <id>10.14569/IJACSA.2016.070351</id>
        <doi>10.14569/IJACSA.2016.070351</doi>
        <lastModDate>2016-03-31T17:20:55.5800000+00:00</lastModDate>
        
        <creator>Pintescu Alina</creator>
        
        <creator>Matei Oliviu-Dorin</creator>
        
        <creator>Boanca Iuliana Paunita</creator>
        
        <creator>Honoriu Valean</creator>
        
        <subject>environment; landscapes; ontology; ontology-based simulation; sustainable landscapes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>In general, landscape architecture includes the analysis, planning, design, administration and management of natural and artificial landscapes. An important aspect is the formation of so-called sustainable landscapes, which allow maximum use of the environment and natural resources and promote the sustainable restoration of ecosystems. For such purposes, a designer needs a complete database of existing and suitable plants, but no designing tool has one. Therefore, this paper presents the structure and development of an ontology suitable for storing and managing all information and knowledge about plants. The advantage is that the format of the ontology allows the storage of any plant species (e.g. living or fossil) and automated reasoning. An ontology is a formal conceptualization of particular knowledge about the world, through the explicit representation of basic concepts, relations, and inference rules about them. Therefore, the ontology may be used by a design tool to help the designer choose the best options for a sustainable landscape.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_51-Developing_a_Feasible_and_Maintainable_Ontology_for_Automatic_Landscape_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Image Steganography Method Based on LSB Technique with Random Pixel Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070350</link>
        <id>10.14569/IJACSA.2016.070350</id>
        <doi>10.14569/IJACSA.2016.070350</doi>
        <lastModDate>2016-03-31T17:20:55.5470000+00:00</lastModDate>
        
        <creator>Marwa M. Emam</creator>
        
        <creator>Abdelmgeid A. Aly</creator>
        
        <creator>Fatma A. Omara</creator>
        
        <subject>Image Steganography; PRNG (Pseudorandom Number Generator); Peak Signal-to-Noise Rate (PSNR); Mean Square Error (MSE)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>With the rapid advances in digital networks, information technology, digital libraries, and particularly World Wide Web services, many kinds of information can be retrieved at any time. Thus, security has become one of the most significant problems for distributing new information. It is necessary to protect this information while it passes over insecure channels. Steganography provides a strong approach to hiding secret data in appropriate media carriers such as images, audio files, text files, and video files. In this paper, a new image steganography method based on the spatial domain is proposed. In the proposed method, the secret message is embedded at random pixel locations of the cover image, selected using a Pseudo Random Number Generator (PRNG), instead of being embedded sequentially in the pixels of the cover image. This randomization is expected to increase the security of the system. The proposed method works with two layers (Blue and Green), as a (2-1-2) layer, and each byte of the message is embedded in only three pixels in the form (3-2-3). From the experimental results, it is found that the proposed method achieves a very high Maximum Hiding Capacity (MHC) and higher visual quality, as indicated by the Peak Signal-to-Noise Ratio (PSNR).</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_50-An_Improved_Image_Steganography_Method_Based_on_LSB_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Planning And Allocation of Tasks in a Multiprocessor System as a Multi-Objective Problem and its Resolution Using Evolutionary Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070349</link>
        <id>10.14569/IJACSA.2016.070349</id>
        <doi>10.14569/IJACSA.2016.070349</doi>
        <lastModDate>2016-03-31T17:20:55.5170000+00:00</lastModDate>
        
        <creator>Apolinar Velarde Martinez</creator>
        
        <creator>Eunice Ponce de Le&#243;n Sent&#237;</creator>
        
        <creator>Juan Antonio Nungaray Ornelas</creator>
        
        <creator>Juan Alejandro Monta&#241;ez de la Torre</creator>
        
        <subject>Multicomputer system; Evolutionary Multi-objective Optimization;  First Input First Output; Random-Order-of-Service; Estimation of Distribution Algorithms; Univariate Distribution Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The use of Linux-based clusters is a strategy for the development of multiprocessor systems. These types of systems face the problem of efficiently planning and allocating tasks in order to make efficient use of their resources. This paper addresses this as a multi-objective problem, carrying out an analysis of the objectives that are in opposition during the planning of the tasks waiting in the queue, before tasks are assigned to processors. For this, we propose a method that avoids strategies such as those that use genetic operators, exhaustive searches for contiguous free processors on the target system, and the strict allocation policy First Come First Serve (FIFO). Instead, we use estimation and simulation of the joint probability distribution as an evolution mechanism for obtaining assignments of a set of tasks, which are selected from the waiting queue through the Random-Order-of-Service (ROS) planning policy. A set of experiments comparing the results of the FIFO allocation policy with those of the proposed method shows better results in the criteria of utilization, throughput, mean turnaround time, waiting time and total execution time, when system loads are significantly increased.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_49-Planning_And_Allocation_of_Tasks_in_a_Multiprocessor_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Accelerated Architecture Based on GPU and Multi-Processor Design for Fingerprint Recognition </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070348</link>
        <id>10.14569/IJACSA.2016.070348</id>
        <doi>10.14569/IJACSA.2016.070348</doi>
        <lastModDate>2016-03-31T17:20:55.5000000+00:00</lastModDate>
        
        <creator>Mossaad Ben Ayed</creator>
        
        <creator>Sabeur Elkosantini</creator>
        
        <subject>Minutia; Fingerprint; Architecture design; recognition; Gabor filter; MPSOC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Fingerprint recognition is widely used in security systems to recognize humans. In both industry and the scientific literature, many fingerprint identification systems have been developed using different techniques and approaches. Despite the number of research works conducted in this field, the developed systems suffer from some limitations, particularly those related to real-time computation and fingerprint recognition. Accordingly, this paper proposes a reliable algorithm for fingerprint recognition based on the extraction and matching of minutiae. In this paper, we also present an accelerated architecture based on GPU and multi-processor design in which the suggested fingerprint recognition algorithm is implemented.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_48-An_Accelerated_Architecture_Based_on_GPU_and_Multi_Processor_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Gender Classification by Face</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070347</link>
        <id>10.14569/IJACSA.2016.070347</id>
        <doi>10.14569/IJACSA.2016.070347</doi>
        <lastModDate>2016-03-31T17:20:55.4700000+00:00</lastModDate>
        
        <creator>Eman Fares Al Mashagba</creator>
        
        <subject>Biometrics; Face Detection; Geometry-based; Gender Classification; Quasi-Newton Algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The identification of human beings based on their biometric body parts, such as the face, fingerprint, gait, iris, and voice, plays an important role in electronic applications and has become a popular area of research in image processing. It is also one of the most successful applications of computer–human interaction and understanding. Of all the abovementioned body parts, the face is one of the most popular traits because of its unique features. In fact, individuals can process a face in a variety of ways to classify it by its identity, along with a number of other characteristics, such as gender, ethnicity, and age. Specifically, recognizing human gender is important because people respond differently according to gender. In this paper, we present a robust method that uses global geometry-based features to classify gender and to identify age and human beings from video sequences. The features are extracted based on face detection using skin color segmentation and the computed geometric features of the face ellipse region. These geometric features are then used to form face vector trajectories, which are input to a time delay neural network trained using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) function. Results show that the suggested method, applied to our own dataset under unconstrained conditions, achieves a 100% classification rate on the training set for all applications, as well as 91.2% for gender classification, 88% for age identification, and 83% for human identification on the testing set. In addition, the proposed method establishes a real-time system that can be used in three applications with simple computation for feature extraction.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_47-Real_Time_Gender_Classification_by_Face.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Allocation in Cloud Computing Using Imperialist Competitive Algorithm with Reliability Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070346</link>
        <id>10.14569/IJACSA.2016.070346</id>
        <doi>10.14569/IJACSA.2016.070346</doi>
        <lastModDate>2016-03-31T17:20:55.4370000+00:00</lastModDate>
        
        <creator>Maryam Fayazi</creator>
        
        <creator>Mohammad Reza Noorimehr</creator>
        
        <creator>Sayed Enayatollah Alavi</creator>
        
        <subject>Imperialist Competitive algorithm; Reliability; makespan; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Cloud computing has now become a universal trend, so reliability is an important factor for users of this technology. In addition, users prefer to have their work executed and completed quickly. This paper takes these two parameters into account for resource allocation due to their importance. In the proposed method, the Imperialist Competitive Algorithm (ICA) is used together with a cross layer added to the cloud architecture for reliability evaluation. In this cross layer, an initial reliability is assigned to all resources; as they execute their tasks, the reliability of each resource is increased or reduced according to the success or failure of execution. Reliability and makespan are used as the cost function in ICA for resource allocation. Results show that the proposed method can search the problem space in a better manner and gives better performance when compared to other methods.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_46-Resource_Allocation_in_Cloud_Computing_Using_Imperialist_Competitive_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Selection Based on Minimum Overlap Probability (MOP) in Identifying Beef and Pork</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070345</link>
        <id>10.14569/IJACSA.2016.070345</id>
        <doi>10.14569/IJACSA.2016.070345</doi>
        <lastModDate>2016-03-31T17:20:55.4070000+00:00</lastModDate>
        
        <creator>Khoerul Anwar</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Suharto Suharto</creator>
        
        <subject>overlap; feature selection; best feature; minimum overlap probability (MOP); identifying</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Feature selection is one of the most important techniques in image processing for classification. In classifying beef and pork based on texture features, overlapping features are a difficult issue. This paper proposes a feature selection method based on Minimum Overlap Probability (MOP) to obtain the best features. The method was tested on two feature datasets of digital images of beef and pork with similar textures and overlapping features. The selected features were used for training and testing a Backpropagation Neural Network (BPNN). The training process used single features as well as several selected feature combinations. The test results showed that the BPNN detected beef and pork images with 97.75% accuracy. From this performance, it was concluded that the MOP method can select the best features for classifying/identifying two digital image objects with similar textures.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_45-Feature_Selection_Based_on_Minimum_Overlap_Probability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ECG Signal Compression Using the High Frequency Components of Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070344</link>
        <id>10.14569/IJACSA.2016.070344</id>
        <doi>10.14569/IJACSA.2016.070344</doi>
        <lastModDate>2016-03-31T17:20:55.3770000+00:00</lastModDate>
        
        <creator>Surekha K.S</creator>
        
        <creator>B. P. Patil</creator>
        
        <subject>ECG; PRD; transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Electrocardiography (ECG) is the method of recording the electrical activity of the heart using electrodes. In ambulatory and continuous ECG monitoring, the amount of data that needs to be handled is huge, so an efficient compression technique is required. The data must also retain the clinically important features after compression. For most signals, the low-frequency component is considered the most important part. In wavelet analysis, the approximation coefficients are the low-frequency components of the signal, while the detail coefficients are the high-frequency components; most of the time, the detail coefficients are not considered. In this paper, we propose using the detail coefficients of the Wavelet transform for ECG signal compression. The Compression Ratios (CR) of the approximation and detail coefficients are compared. A threshold-based technique is adopted, in which coefficients below a set threshold are removed. Experiments are carried out using different types of Wavelet transforms on the MIT-BIH ECG database, with MATLAB used for simulation. The novelty of the method is that the CR achieved by the detail coefficients is better: a CR of about 88% is achieved using the Sym3 Wavelet. The quality of the reconstructed signal is measured by PRD.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_44-ECG_Signal_Compression_Using_the_High_Frequency_Components.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication-Load Impact on the Performance of Processor Allocation Strategies in 2-D Mesh Multicomputer Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070343</link>
        <id>10.14569/IJACSA.2016.070343</id>
        <doi>10.14569/IJACSA.2016.070343</doi>
        <lastModDate>2016-03-31T17:20:55.3600000+00:00</lastModDate>
        
        <creator>Zaid Mustafa</creator>
        
        <creator>J. J. Alshaer</creator>
        
        <creator>O. Dorgham</creator>
        
        <creator>S. Bani-Ahmad</creator>
        
        <subject>Processor allocation; Parallel computing; 2-D Mesh; Communication patterns; Multicomputer systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>A number of processor allocation strategies have been proposed in the literature. A key performance factor that can highlight the difference between these strategies is the amount of communication conducted between the parallel jobs to be allocated. This paper aims to identify how the density and pattern of communication affect the performance of these strategies. Compared to previous work in the literature, we examined a wider range of communication patterns; other works consider only two types, the one-to-all and all-to-all patterns. The different allocation strategies were implemented in the C language and combined with the ProcSimity simulation tool. The processor-allocation strategies are examined under the First-Come-First-Serve scheduling strategy. Results show that communication pattern and load have a significant impact on the performance of the processor allocation strategy used.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_43-Communication_Load_Impact_on_the_Performance_of_Processor_Allocation_Strategies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Navigational Aspects of Moodle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070342</link>
        <id>10.14569/IJACSA.2016.070342</id>
        <doi>10.14569/IJACSA.2016.070342</doi>
        <lastModDate>2016-03-31T17:20:55.3300000+00:00</lastModDate>
        
        <creator>Raheela Arshad</creator>
        
        <creator>Awais Majeed</creator>
        
        <creator>Hammad Afzal</creator>
        
        <creator>Muhammad Muzammal</creator>
        
        <creator>Arif ur Rahman</creator>
        
        <subject>e-Learning; Navigational Evaluation Framework; Learning management system (LMS); Moodle; Usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Learning Management Systems (LMS) are effective platforms for communication and collaboration among teachers and students to enhance learning. LMSs are now widely used in both conventional and virtual/distance learning paradigms. However, as identified in the existing literature, they have various limitations, including poor learning content, inappropriate use of technology, and usability issues. Poor usability distracts users. The literature covers many aspects of usability evaluation of LMSs, but there is less focus on navigational issues, even though poor navigation can lead to disorientation and cognitive overload for the users of any Web application. For this reason, we propose a navigational evaluation framework to evaluate the navigational structure of an LMS, and we apply it to Moodle. We conducted a survey among students and teachers of two leading universities in Pakistan where Moodle is in use. This work summarizes the survey results and proposes guidelines to improve the usability of Moodle based on the feedback received from its users.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_42-Evaluation_of_Navigational_Aspects_of_Moodle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moon Landing Trajectory Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070341</link>
        <id>10.14569/IJACSA.2016.070341</id>
        <doi>10.14569/IJACSA.2016.070341</doi>
        <lastModDate>2016-03-31T17:20:55.2970000+00:00</lastModDate>
        
        <creator>Ibrahim Mustafa MEHEDI</creator>
        
        <creator>Md. Shofiqul ISLAM</creator>
        
        <subject>lunar landing; trajectory optimization; optimization techniques; DIDO optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Trajectory optimization is a crucial process during the planning phase of a spacecraft landing mission. Once a trajectory is determined, guidance algorithms are created to guide the vehicle along the given trajectory. Because fuel mass is a major driver of the total vehicle mass, and thus mission cost, the objective of most guidance algorithms is to minimize the required fuel consumption. Most of the existing algorithms are termed “near-optimal” regarding fuel expenditure. The question arises as to how close to optimal these guidance algorithms are. To answer this question, numerical trajectory optimization techniques are often required. With the emergence of improved processing power and the application of new methods, more direct approaches may be employed to achieve high accuracy without the associated difficulties in computation or pre-existing knowledge of the solution. An example of such an approach is DIDO optimization. This technique is applied in the current research to find these minimum-fuel optimal trajectories.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_41-Moon_Landing_Trajectory_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Hand Gestures Using Gabor Filter with Bayesian and Na&#239;ve Bayes Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070340</link>
        <id>10.14569/IJACSA.2016.070340</id>
        <doi>10.14569/IJACSA.2016.070340</doi>
        <lastModDate>2016-03-31T17:20:55.2500000+00:00</lastModDate>
        
        <creator>Tahira Ashfaq</creator>
        
        <creator>Khurram Khurshid</creator>
        
        <subject>Human Computer Interaction; Hand Segmentation; Gesture recognition; Gabor Filter; Bayesian and Na&#239;ve Bayes classifiers; Feature Extraction; Image Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>A hand gesture is the movement, position, or posture of the hand, used extensively in our daily lives as part of non-verbal communication. Much research has been carried out to classify hand gestures in videos as well as images for various applications. The primary objective of this paper is to present an effective system that can classify various static hand gestures against complex backgrounds. The system localizes the hand region using a combination of morphological operations. A Gabor filter is applied to the extracted region of interest (ROI) to extract hand features, which are then fed to Bayesian and Na&#239;ve Bayes classifiers. The results of the system are very encouraging, with an average accuracy of over 90%.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_40-Classification_of_Hand_Gestures_Using_Gabor_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation of Optimized Gabor Filter Parameter Selection for Road Cracks Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070339</link>
        <id>10.14569/IJACSA.2016.070339</id>
        <doi>10.14569/IJACSA.2016.070339</doi>
        <lastModDate>2016-03-31T17:20:55.2200000+00:00</lastModDate>
        
        <creator>Haris Ahmad Khan</creator>
        
        <creator>M. Salman</creator>
        
        <creator>Sajid Hussain</creator>
        
        <creator>Khurram Khurshid</creator>
        
        <subject>Pavement Cracks; Automated detection; Gabor Filters; Genetic Algorithm; Parameter Selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Automated systems for road crack detection are extremely important in road maintenance for vehicle safety and travelers&#8217; comfort. Emerging cracks need to be detected and repaired as early as possible to avoid further damage and reduce rehabilitation costs. In this paper, a robust method for optimizing Gabor filter parameters for automatic road crack detection is discussed. The Gabor filter has been used in previous literature for similar applications; however, automatic selection of optimized filter parameters is needed because of the variation in the texture of roads and cracks. The problem of changing background, which is in fact the road texture, is addressed through a learning process that uses synthetically generated road cracks for Gabor filter parameter tuning. The tuned parameters are then tested on real cracks, and a thorough quantitative analysis is performed for performance evaluation.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_39-Automation_of_Optimized_Gabor_Filter_Parameter_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Solution Methodology: Heuristic-Metaheuristic-Implicit Enumeration 1-0 for the Capacitated Vehicle Routing Problem (Cvrp)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070338</link>
        <id>10.14569/IJACSA.2016.070338</id>
        <doi>10.14569/IJACSA.2016.070338</doi>
        <lastModDate>2016-03-31T17:20:55.1900000+00:00</lastModDate>
        
        <creator>David Escobar Vargas</creator>
        
        <creator>Ram&#243;n A. Gallego Rend&#243;n</creator>
        
        <creator>Antonio Escobar Zuluaga</creator>
        
        <subject>1-0 implicit enumeration; CVRP; Operations research; Genetic algorithm; Chu-Beasley; Heuristics; Metaheuristics and exact methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The capacitated vehicle routing problem (CVRP) is a difficult combinatorial optimization problem that has been intensively studied over the last few decades. We present a hybrid methodology for this problem that incorporates an improvement stage using a 1-0 implicit enumeration technique (Balas&#8217;s method). Other distinguishing features of the proposed methodology include a specially designed route-based crossover operator for solution recombination and an effective local procedure as the mutation step. Finally, the methodology is tested on instances from the specialized literature and compared with their best-known solutions for the CVRP with a homogeneous fleet, in order to assess the efficiency of Balas&#8217;s method in routing problems.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_38-Hybrid_Solution_Methodology_Heuristic_Metaheuristic_Implicit_Enumeration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extract Five Categories CPIVW from the 9V’s Characteristics of the Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070337</link>
        <id>10.14569/IJACSA.2016.070337</id>
        <doi>10.14569/IJACSA.2016.070337</doi>
        <lastModDate>2016-03-31T17:20:55.1570000+00:00</lastModDate>
        
        <creator>Suhail Sami Owais</creator>
        
        <creator>Nada Sael Hussein</creator>
        
        <subject>Big Data; Characteristics; Categories; Management; Analysis; Anywhere and Anytime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>There is exponential growth in the amount of data produced in different fields around the world, and this is known as Big Data. It requires more data management, analysis, and accessibility, which leads to an increase in the number of systems worldwide that manage and manipulate data in different places at any time. Big Data is systematically analysed data whose processing depends on complex processes, devices, and resources. Data are no longer stored only in traditional databases, which are limited to structured data, but extend to unstructured and semi-structured data. Thus, Big Data has several characteristics and specific properties proportionate to the size of the data, driven by the enormous and rapid development in all areas of business and life. In this work, we study the relationships between the characteristics of Big Data and extract categories from them. We conclude that there are five categories, and that these categories are related to each other.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_37-Extract_Five_Categories_CPIVW.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Artificial Neural Networks for Predicting Generated Wind Power</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070336</link>
        <id>10.14569/IJACSA.2016.070336</id>
        <doi>10.14569/IJACSA.2016.070336</doi>
        <lastModDate>2016-03-31T17:20:55.1430000+00:00</lastModDate>
        
        <creator>Vijendra Singh</creator>
        
        <subject>wind; neural network; wind power forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>This paper addresses the design and development of an artificial neural network based system for predicting the wind energy produced by wind turbines. In the last decade, renewable energy has emerged as an important alternative source for electrical power generation, and wind power generation capacity needs to be assessed because of the non-exhaustible nature of wind. The power generated by wind turbines depends on wind speed, flow direction, fluctuations, air density, generator hours, the seasons of an area, and turbine position. During particular seasons, wind power generation can increase, so predicting wind energy generation is crucial for transmitting the generated energy to the power grid. It is therefore advisable for the wind power generation industry to predict its generation capacity. The present paper applies an artificial neural network technique to estimate the wind energy generation capacity of wind farms in Harshnath, Sikar, Rajasthan, India.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_36-Application_of_Artificial_Neural_Networks_for_Predicting_Generated_Wind_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Content Based Image Retrieval on Feature Optimization and Selection Using Swarm Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070335</link>
        <id>10.14569/IJACSA.2016.070335</id>
        <doi>10.14569/IJACSA.2016.070335</doi>
        <lastModDate>2016-03-31T17:20:55.1270000+00:00</lastModDate>
        
        <creator>Kirti Jain</creator>
        
        <creator>Dr.Sarita Singh Bhadauria</creator>
        
        <subject>CBIR; Swarm intelligence; feature extraction; SIFT transform; GSO (glowworm swarm optimization)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The diversity and applicability of swarm intelligence are increasing every day in the fields of science and engineering. Swarm intelligence provides dynamic feature optimization. We have used swarm intelligence for feature optimization and feature selection in content-based image retrieval (CBIR). The performance of CBIR is challenged by precision and recall, whose values depend on the retrieval capacity of the system. The raw image content has visual features such as color, texture, shape, and size. The partial feature extraction technique is based on a geometric invariant function. Three swarm intelligence algorithms were used for feature optimization: ant colony optimization, particle swarm optimization (PSO), and the glowworm swarm optimization algorithm. The Corel image dataset and MATLAB were used for the performance evaluation.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_35-Performance_Evaluation_of_Content_Based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Time Series Forecasting: Bayesian Enhanced by Fractional Brownian Motion with Application to Rainfall Series</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070334</link>
        <id>10.14569/IJACSA.2016.070334</id>
        <doi>10.14569/IJACSA.2016.070334</doi>
        <lastModDate>2016-03-31T17:20:55.0970000+00:00</lastModDate>
        
        <creator>Cristian Rodriguez Rivero</creator>
        
        <creator>Daniel Pati&#241;o</creator>
        
        <creator>Julian Pucheta</creator>
        
        <creator>Victor Sauchelli</creator>
        
        <subject>long-term prediction; neural networks; Bayesian inference; Fractional Brownian Motion; Hurst parameter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>A new predictor algorithm based on a Bayesian enhanced approach (BEA) for long-term chaotic time series using artificial neural networks (ANN) is presented. The technique, based on stochastic models, uses Bayesian inference with Fractional Brownian Motion as the data model and a Beta model as prior information. However, the need for experimental data to specify and estimate causal models has not changed. Indeed, the Bayes method provides another way to incorporate prior knowledge into forecasting models; the simplest representations of prior knowledge are hard to beat in many forecasting situations, either because prior knowledge is insufficient to improve the models or because it leads to the conclusion that the situation is stable.
This work contributes to long-term time series prediction, giving forecast horizons of up to 18 steps ahead. The forecasted values and validation data are presented for benchmark chaotic series such as Mackey-Glass, Lorenz, Henon, Logistic, R&#246;ssler, Ikeda, and the quadratic one-dimensional map, as well as monthly cumulative rainfall collected in Despe&#241;aderos, Cordoba, Argentina. The computational results are evaluated against several previously proposed non-linear ANN predictors on high-roughness series, showing the better performance of the Bayesian Enhanced Approach in long-term forecasting.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_34-A_New_Approach_for_Time_Series_Forecasting_Bayesian_Enhanced.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a New Approach to Improve the Classification Accuracy of the Kohonen’s Self-Organizing Map During Learning Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070333</link>
        <id>10.14569/IJACSA.2016.070333</id>
        <doi>10.14569/IJACSA.2016.070333</doi>
        <lastModDate>2016-03-31T17:20:55.0800000+00:00</lastModDate>
        
        <creator>El Khatir HAIMOUDI</creator>
        
        <creator>Hanane FAKHOURI</creator>
        
        <creator>Loubna CHERRAT</creator>
        
        <creator>Mostafa Ezziyyani</creator>
        
        <subject>Artificial neural networks; self-organization map; Learning algorithm; Classification; Clustering; Principal components Analysis; power iteration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The Kohonen self-organization algorithm, known as the &#8220;topological maps algorithm&#8221;, has been widely used in many classification applications. However, few theoretical studies have been proposed to improve and optimize the learning process of classification and clustering for dynamic and scalable systems, taking into account the evolution of multi-parameter objects. Our objective in this paper is to provide a new approach that improves the accuracy and quality of the classification method, based on the basic advantages of the Kohonen self-organization algorithm and on new network functions that automatically detect and eliminate drawbacks and redundancy.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_33-Towards_a_New_Approach_to_Improve_the_Classification_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Agent Architecture for Search Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070332</link>
        <id>10.14569/IJACSA.2016.070332</id>
        <doi>10.14569/IJACSA.2016.070332</doi>
        <lastModDate>2016-03-31T17:20:55.0630000+00:00</lastModDate>
        
        <creator>Disha Verma</creator>
        
        <creator>Dr. Barjesh Kochar</creator>
        
        <subject>Search engine; Data mining; Multi agent systems (MAS); Semantic mapping; Hozo</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The process of retrieving information is becoming more ambiguous day by day due to the huge collection of documents present on the web. A single keyword produces millions of results related to a given query, but these results do not meet user expectations. The results produced by traditional text search engines may be relevant or irrelevant; the underlying reason is that Web documents are HTML documents that do not contain semantic descriptors and annotations.
This paper proposes a multi agent architecture to produce fewer but personalized results. The purpose of the research is to provide a platform for domain-specific personalized search. Personalized search delivers web pages in accordance with the user&#8217;s interests and domain. The proposed architecture uses both client-side and server-side personalization to provide the user with fewer but more accurate results. The multi agent search engine architecture uses semantic descriptors to acquire knowledge about a given domain, leading to personalized search results. Semantic descriptors are represented as a network graph that holds the relationships of a given problem in the form of a hierarchy; this hierarchical classification is termed a taxonomy.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_32-Multi_Agent_Architecture_for_Search_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wiki-Based Stochastic Programming and Statistical Modeling System for the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070331</link>
        <id>10.14569/IJACSA.2016.070331</id>
        <doi>10.14569/IJACSA.2016.070331</doi>
        <lastModDate>2016-03-31T17:20:55.0330000+00:00</lastModDate>
        
        <creator>Vaidas Giedrimas</creator>
        
        <creator>Leonidas Sakalauskas</creator>
        
        <creator>Marius Neimantas</creator>
        
        <creator>Kestutis Žilinskas</creator>
        
        <creator>Nerijus Barauskas</creator>
        
        <creator>Remigijus Valciukas</creator>
        
        <subject>Wikinomics; open source; mathematical programming; software modeling; online computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Scientific software is a special type of software because its quality has a huge impact on the quality of scientific conclusions and on scientific progress. However, it is hard to ensure the required software quality because of misunderstandings between scientists and software engineers. In this paper, we present a system for improving the quality of scientific software using elements of wikinomics and cloud computing, along with its implementation details. The system enables scientists to collaborate and directly evolve models, algorithms, and programs. WikiSPSM expands the limits of mathematical software.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_31-Wiki_Based_Stochastic_Programming_and_Statistical_Modeling_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Mathematical Modeling of Three-Level Supply Chain with Multiple Transportation Vehicles and Different Manufacturers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070330</link>
        <id>10.14569/IJACSA.2016.070330</id>
        <doi>10.14569/IJACSA.2016.070330</doi>
        <lastModDate>2016-03-31T17:20:55.0170000+00:00</lastModDate>
        
        <creator>Amir Sadeghi</creator>
        
        <creator>Amir Farmahini Farahani</creator>
        
        <creator>Hossein Beiki</creator>
        
        <subject>Transportation; Mathematical Model; Logistic Costs; Imperialist Competitive Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Nowadays, no industry can operate in global markets individually and independently of its competitors, because each is part of a supply chain and the success of each member of the chain influences the others. In this paper, a three-level supply chain with several products, one manufacturer, one distributor, and several customers is studied. In the first part of the chain one type of vehicle is used, and in the second part two types of vehicles are used. The proposed model is an integrated mixed-integer programming model designed to minimize costs, including transportation, inventory, and shortage penalty costs. The paper develops quantitative models for three-level supply chains and presents a case study of sending rolls produced by Mobarakeh Steel Structure Company to “Sazeh Gostar Saipa (S.G.S)” and then to suppliers. The proposed solution is an imperialist competitive algorithm, which is solved for 20 different problem sizes, and the small-size results are compared with GAMS.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_30-New_Mathematical_Modeling_of_Three_Level_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Based Correspondence: A Comparative Study on Image Matching Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070329</link>
        <id>10.14569/IJACSA.2016.070329</id>
        <doi>10.14569/IJACSA.2016.070329</doi>
        <lastModDate>2016-03-31T17:20:54.9870000+00:00</lastModDate>
        
        <creator>Usman Muhammad Babri</creator>
        
        <creator>Munim Tanvir</creator>
        
        <creator>Khurram Khurshid</creator>
        
        <subject>computer vision; image matching; image recognition; algorithm comparison; feature detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Image matching and recognition are the crux of computer vision and play a major part in everyday life. From industrial robots to surveillance cameras, from autonomous vehicles to medical imaging, and from missile guidance to space exploration vehicles, computer vision, and hence image matching, is embedded in our lives. This communication presents a comparative study of the prevalent matching algorithms, addressing their restrictions and providing a criterion for the level of efficiency likely to be expected from an algorithm. The study covers the feature detection and matching techniques used by these algorithms to allow a deeper insight. The chief aim of the study is to deliver a comprehensive reference for researchers involved in image matching, regardless of specific applications.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_29-Feature_Based_Correspondence_A_Comparative_Study_on_Image_Matching_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Identification System of Bacteria and Bacterial Endotoxin Based on Raman Spectroscopy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070328</link>
        <id>10.14569/IJACSA.2016.070328</id>
        <doi>10.14569/IJACSA.2016.070328</doi>
        <lastModDate>2016-03-31T17:20:54.9530000+00:00</lastModDate>
        
        <creator>Muhammad Elsayeh</creator>
        
        <creator>Ahmed H.Kandil</creator>
        
        <subject>Rapid Microbial Detection; Rapid Pyrogen Detection; Microwave Spectroscopy; Dielectric Spectroscopy; Ultra Wide Band; Cepstral Analysis; Raman Spectroscopy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Sepsis is a global health problem that carries a risk of death. In the developing world, about 60 to 80% of death cases are caused by Sepsis. Rapid methods for detecting its causes are one of the major factors that may reduce Sepsis risks. Such methods can provide microbial detection and identification, which is critical for determining the right treatment for the patient. Microbial and Pyrogen detection is also important for quality control systems, to ensure the absence of pathogens and Pyrogens in the manufacturing of both medical and food products. Raman spectroscopy offers a quick and accurate identification and detection method for bacteria and bacterial endotoxin, and thus plays an important role in delivering high-quality biomedical products. It is a rapid method for chemical structure detection that can be used to identify and classify bacteria and bacterial endotoxin, providing a time- and cost-effective quality control procedure.
This work presents an automatic system based on Raman spectroscopy to detect and identify bacteria and bacterial endotoxin. It uses the frequency properties of Raman scattering arising from the interaction between organic materials and electromagnetic waves. The scattered intensities are measured, the wave numbers are converted into frequencies, and the cepstral coefficients are extracted for both detection and identification. The methodology depends on normalization of the Fourier-transformed cepstral signal to extract classification features. Experimental results demonstrate effective identification and detection of bacteria and bacterial endotoxin, even at concentrations as low as 0.0003 Endotoxin Units (EU)/ml and 1 Colony Forming Unit (CFU)/ml, using a signal-processing-based enhancement technique.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_28-Detection_and_Identification_System_of_Bacteria_and_Bacterial_Endotoxin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role Based Multi-Agent System for E-Learning (MASeL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070327</link>
        <id>10.14569/IJACSA.2016.070327</id>
        <doi>10.14569/IJACSA.2016.070327</doi>
        <lastModDate>2016-03-31T17:20:54.9230000+00:00</lastModDate>
        
        <creator>Mustafa Hameed</creator>
        
        <creator>Nadeem Akhtar</creator>
        
        <creator>Malik Saad Missen</creator>
        
        <subject> Management System (IMS); Multi-Agent System (MAS); Role Based Multi-Agent Systems; Agent-Group-Role (AGR); Agent-based Virtual Classroom (AVC); Intelligent Virtual Classroom (IVC); E-Learning; Information and Communication Technologies (ICTs); Formal verification; Model Checking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Software agents are autonomous entities that can interact intelligently with other agents as well as with their environment in order to carry out a specific task. We propose a role-based multi-agent system for e-learning, based on the Agent-Group-Role (AGR) method. As a multi-agent system is distributed, ensuring correctness is an important issue. We have formally modeled our role-based multi-agent system, and the correctness properties of liveness and safety are both specified and verified. The timed-automata-based model checker UPPAAL is used for the specification and verification of the e-learning system, resulting in a formally specified and verified model of the role-based multi-agent system.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_27-Role_Based_Multi_Agent_System_for_E_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AMBA Based Advanced DMA Controller for SoC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070326</link>
        <id>10.14569/IJACSA.2016.070326</id>
        <doi>10.14569/IJACSA.2016.070326</doi>
        <lastModDate>2016-03-31T17:20:54.9070000+00:00</lastModDate>
        
        <creator>Abdullah Aljumah</creator>
        
        <creator>Mohammed Altaf Ahmed</creator>
        
        <subject>FPGA; AMBA; DMA; DMA Controller; SoC; data transfer rate; FIFO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>This paper describes the implementation of an AMBA-based advanced DMA controller for SoC. It follows the AMBA specification, in which two buses, AHB and APB, serve as the processor's system bus and the peripheral bus respectively. The DMA controller functions as a bridge between these two buses and allows them to work concurrently. A buffering mechanism is used to accommodate the varying speeds of peripherals; therefore, an asynchronous FIFO is used to synchronize the peripheral speeds. The proposed DMA controller can work in an SoC alongside the processor and achieve a fast data rate, transferring significant volumes of data with very low timing overhead. It is thus a better choice with respect to both timing and data volume, the two issues addressed in this study. The results are compared with AMD processors such as the Geode GX 466, GX 500, and GX 533, and the presence and absence of a DMA controller alongside the processor is discussed and compared. The DMAC stands as a better alternative in SoC design.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_26-Amba_Based_Advanced_DMA_Controller_for_SoC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The ECG Signal Compression Using an Efficient Algorithm Based on the DWT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070325</link>
        <id>10.14569/IJACSA.2016.070325</id>
        <doi>10.14569/IJACSA.2016.070325</doi>
        <lastModDate>2016-03-31T17:20:54.8930000+00:00</lastModDate>
        
        <creator>Oussama El B’charri</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Wissam Jenkal</creator>
        
        <creator>Abdenbi Abenaou</creator>
        
        <subject>ECG compression; wavelet transform; lossy compression; hard thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The storage capacity of ECG records presents an important issue in medical practice. These records may contain hours of data, which require large storage space. Compression of the ECG signal is widely used to deal with this issue, but the process risks losing important features of the ECG signal, which could negatively influence the analysis of the heart condition. In this paper, we propose an efficient method for ECG signal compression using the discrete wavelet transform and run-length encoding. The method is based on decomposition of the ECG signal, a thresholding stage, and encoding of the final data. It is tested on several MIT-BIH arrhythmia signals from the international PhysioNet database and shows high performance compared to other recently published methods.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_25-The_ECG_Signal_Compression_Using_an_Efficient_Algorithm_Based_on_the_DWT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Tolerant System for Sparse Traffic Grooming in Optical WDM Mesh Networks Using Combiner Queue</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070324</link>
        <id>10.14569/IJACSA.2016.070324</id>
        <doi>10.14569/IJACSA.2016.070324</doi>
        <lastModDate>2016-03-31T17:20:54.8600000+00:00</lastModDate>
        
        <creator>Sandip R. Shinde</creator>
        
        <creator>Dr. Suhas H. Patil</creator>
        
        <creator>Dr. S. Emalda Roslin</creator>
        
        <creator>Archana S. Shinde</creator>
        
        <subject>optical communication; sparse traffic grooming; survivability; fault tolerance; Combiner Queue; WDM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Queuing theory is an important concept in current internet technology. As bandwidth requirements keep increasing, it is necessary to use optical communication for data transfer. Optical communication in the backbone network requires various devices for traffic grooming. The cost of these devices is very high, which increases the cost of the network. One solution to this problem is sparse traffic grooming in an optical WDM mesh network. Sparse traffic grooming allows only a few nodes in the network to act as grooming nodes (G-nodes). These G-nodes have grooming capability, while the other nodes are simple nodes where traffic grooming is not possible. The grooming nodes are special, high-cost nodes, and the possibility of faults at such nodes, or of link failures, is high. Resolving such faults and providing an efficient network is very important, hence the importance of a survivable sparse traffic grooming network.
Queuing theory helps to improve network performance and to groom the traffic in the network. This paper focuses on improving the performance of the backbone network and reducing the blocking probability. To achieve these goals, we have simulated the model. The main contribution is the use of survivability in the sparse grooming network together with combiner queues at each node. It has been observed that combiner queuing alone minimizes the blocking probability and balances the load over the network. The model is not only cost-effective but also increases network performance and minimizes the call blocking probability.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_24-Fault_Tolerant_System_for_Sparse_Traffic_Grooming_in_Optical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Recommender System for Course Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070323</link>
        <id>10.14569/IJACSA.2016.070323</id>
        <doi>10.14569/IJACSA.2016.070323</doi>
        <lastModDate>2016-03-31T17:20:54.8470000+00:00</lastModDate>
        
        <creator>Amer Al-Badarenah</creator>
        
        <creator>Jamal Alsakran</creator>
        
        <subject>collaborative recommendation; association rule mining; data mining; recommender system; course selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Most electronic commerce and knowledge management systems use recommender systems as the underlying tools for identifying a set of items that will be of interest to a certain user. Collaborative recommender systems recommend items based on similarities and dissimilarities among users’ preferences. This paper presents a collaborative recommender system that recommends university elective courses to students by exploiting courses that other, similar students have taken. The proposed system employs an association rule mining algorithm as the underlying technique to discover patterns between courses. Experiments were conducted with real datasets to assess the overall performance of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_23-An_Automated_Recommender_System_for_Course_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Management of Best Practices in a Collaborative Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070322</link>
        <id>10.14569/IJACSA.2016.070322</id>
        <doi>10.14569/IJACSA.2016.070322</doi>
        <lastModDate>2016-03-31T17:20:54.8130000+00:00</lastModDate>
        
        <creator>Amal Al-Rasheed</creator>
        
        <creator>Jawad Berri</creator>
        
        <subject>Best practice; knowledge management system; knowledge sharing; higher education; life cycle; portal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Identifying and sharing best practices in a domain means duplicating successes, which helps people learn from each other and reuse proven practices. Successful sharing of best practices can be accomplished by establishing a collaborative environment where users, experts, and communities can interact and cooperate. A detailed review of previous research in best practice knowledge management shows that existing models have focused on developing methodologies to manage best practices, but most do not propose solutions towards the development of full-fledged systems that use technology to allow effective sharing and reuse of best practices. This paper presents a life cycle model to manage expertise for communities of practice. The proposed model is implemented in the education field as a knowledge management system that promotes and values users’ contributions. We focus on the case of best teaching practices (BTPs), as they develop instructors’ abilities and improve overall instruction quality in higher education. For this purpose, we developed a computer environment comprising a knowledge management system and a web portal to assist instructors in higher education in the creation, sharing, and application of BTPs.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_22-Knowledge_Management_of_Best_Practices_in_a_Collaborative_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Mapreduce Lift Association Rule Mining Algorithm (MRLAR) for Big Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070321</link>
        <id>10.14569/IJACSA.2016.070321</id>
        <doi>10.14569/IJACSA.2016.070321</doi>
        <lastModDate>2016-03-31T17:20:54.7830000+00:00</lastModDate>
        
        <creator>Nour E. Oweis</creator>
        
        <creator>Mohamed Mostafa Fouad</creator>
        
        <creator>Sami R. Oweis</creator>
        
        <creator>Suhail S. Owais</creator>
        
        <creator>Vaclav Snasel</creator>
        
        <subject>Big Data; Data Mining; Association Rule; MapReduce; Lift Interesting Measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Big Data mining is an analytic process used to discover hidden knowledge and patterns in massive, complex, and multi-dimensional datasets. A single processor&#39;s memory and CPU resources are very limited, which makes algorithm performance ineffective. Recently, there has been renewed interest in using association rule mining (ARM) on Big Data to uncover relationships between items that seem unrelated. However, traditional ARM discovery techniques are unable to handle this huge amount of data, so there is a vital need for scalable and parallel ARM strategies based on Big Data approaches. This paper develops a novel MapReduce framework for an association rule algorithm based on the Lift interestingness measurement (MRLAR), which can handle massive datasets on a large number of nodes. The experimental results show the efficiency of the proposed algorithm in measuring the correlations between itemsets by integrating MapReduce and the Lift interestingness measurement instead of depending on confidence.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_21-A_Novel_Mapreduce_Lift_Association_Rule_Mining_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent Based Model for Web Service Composition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070320</link>
        <id>10.14569/IJACSA.2016.070320</id>
        <doi>10.14569/IJACSA.2016.070320</doi>
        <lastModDate>2016-03-31T17:20:54.7670000+00:00</lastModDate>
        
        <creator>Karima Belmabrouk</creator>
        
        <creator>Fatima Bendella</creator>
        
        <creator>Maroua Bouzid</creator>
        
        <subject>Agents; Model; Quality of Service; Service composition; Web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The evolution of the Internet and competitiveness among companies were factors in the explosion of Web services. Web services are applications available on the Internet, each performing a particular task. Web users often need to call different services to achieve a more complex task that cannot be satisfied by a single service, and they often prefer to have the best services responding to their requests. In this context, we should measure the Quality of Service (QoS), a very important aspect of Web services, in order to offer the user the best services.
The first problem we contribute to resolving is how to ensure the composition of different services to respond to the user's request; we propose a multi-agent based model for the automatic planning of Web services. Guaranteeing the required quality of composite Web services is also a complex task, given their unpredictable and dynamic nature, so our contribution to this problem consists of using two classes of quality attributes: the first contains generic attributes and the second specific ones.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_20-Multi_Agent_Based_Model_for_Web_Service_Composition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Analysis of Wireless Sensor Network Based on Network Coding with Low-Density Parity Check</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070319</link>
        <id>10.14569/IJACSA.2016.070319</id>
        <doi>10.14569/IJACSA.2016.070319</doi>
        <lastModDate>2016-03-31T17:20:54.7370000+00:00</lastModDate>
        
        <creator>Maria Hammouti</creator>
        
        <creator>El Miloud Ar-reyouchi</creator>
        
        <creator>Kamal Ghoumid</creator>
        
        <creator>Ahmed Lichioui</creator>
        
        <subject>Clustering Techniques; Network Coding; LDPC codes; distributed algorithms; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The number of nodes in wireless sensor networks (WSNs) is one of the fundamental parameters when developing an algorithm based on Network Coding (NC) with LDPC (Low-Density Parity Check) codes, because it directly affects the size of the generator matrix of the LDPC code and its dispersion. Optimizing wireless communication systems by decreasing the BER (Bit Error Rate) is one approach to analyzing the network in clusters (at the level of their nodes). In this paper, the authors present a fully distributed clustering algorithm, consider different numbers of nodes per cluster, and select the curves that offer the best compromise. They examine the effects of SNR (Signal-to-Noise Ratio) quantization on the system performance obtained for different scenarios, varying the parameter corresponding to the number of symbols during the forwarding phase. Finally, the results prove that an increased number of nodes improves the LDPC code properties.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_19-Clustering_Analysis_of_Wireless_Sensor_Network_Based_on_Network_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Semantic Features for Enhancing Arabic Named Entity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070318</link>
        <id>10.14569/IJACSA.2016.070318</id>
        <doi>10.14569/IJACSA.2016.070318</doi>
        <lastModDate>2016-03-31T17:20:54.7030000+00:00</lastModDate>
        
        <creator>Hamzah A. Alsayadi</creator>
        
        <creator>Abeer M. ElKorany</creator>
        
        <subject>Arabic Named Entity Recognition (ANER); Conditional Random Fields (CRF); Domain Ontology; Semantic Relation Feature (SRF); Arabic WordNet ontology (ANW)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Named Entity Recognition (NER) is currently an essential research area that supports many tasks in NLP. Its goal is to improve the accuracy of named entity identification. This paper presents an integrated semantic-based machine learning (ML) model for the Arabic Named Entity Recognition (ANER) problem. The basic idea of the model is to combine several linguistic features and to utilize syntactic dependencies to infer semantic relations between named entities. The proposed model focuses on recognizing three types of named entities: person, organization, and location. Accordingly, it combines internal features that represent linguistic features with external features that represent the semantics of relations between the three named-entity types, drawing on an external knowledge source such as the Arabic WordNet ontology (ANW) to enhance recognition accuracy. We introduce both kinds of features to a CRF classifier, for which they prove effective for ANER. Experimental results show that this approach achieves overall F-measures of around 87.86% and 84.72% on the ANERCorp and ALTEC datasets respectively.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_18-Integrating_Semantic_Features_for_Enhancing_Arabic_Named_Entity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Paradigm for Symmetric Cryptosystem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070317</link>
        <id>10.14569/IJACSA.2016.070317</id>
        <doi>10.14569/IJACSA.2016.070317</doi>
        <lastModDate>2016-03-31T17:20:54.6570000+00:00</lastModDate>
        
        <creator>Shadi R. Masadeh</creator>
        
        <creator>Hamza A. Al_Sewadi</creator>
        
        <creator>Mohammad A. Wadi</creator>
        
        <subject>cryptography; security; symmetric systems; polyalphabetic cipher; key generation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The Playfair cipher is the first known digraph polyalphabetic method. It relies on a 5x5 uppercase-alphabet matrix with simple substitution processes for encryption and decryption. This paper proposes an enhanced variant of the Playfair cipher algorithm that incorporates an elaborate key generation algorithm starting with a seed accompanying the ciphertext; it will be referred to as a Novel Paradigm for Symmetric Cryptosystem (NPSC).
The key generation, encryption, and decryption processes implement modular calculations instead of the simple substitution used in the traditional Playfair cipher, and the scheme supports both alphabetic characters and numerals. This variant considerably enhances the security strength without increasing the matrix size, as demonstrated by the experimental results. Comparative studies of various critical factors against other reported versions of the Playfair cipher are also included.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_17-A_Novel_Paradigm_for_Symmetric_Cryptosystem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving DNA Computing Using Evolutionary Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070316</link>
        <id>10.14569/IJACSA.2016.070316</id>
        <doi>10.14569/IJACSA.2016.070316</doi>
        <lastModDate>2016-03-31T17:20:54.6270000+00:00</lastModDate>
        
        <creator>Godar J. Ibrahim</creator>
        
        <creator>Tarik A. Rashid</creator>
        
        <creator>Ahmed T. Sadiq</creator>
        
        <subject>Parallel Computation; DNA Computation Algorithm; Evolutionary DNA Computing Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The field of DNA computing has attracted many biologists and computer scientists, as it offers a biological interface, small size, and substantial parallelism. DNA computing depends on the biochemical reactions of DNA molecules, which can randomly anneal and might accidentally cause improper or undesirable computations. This inspires opportunities to perform evolutionary computation via DNA. Evolutionary computation emphasizes probabilistic search and optimization methods that mimic models of organic evolution. This research work aims at offering a simulated evolutionary DNA computing model that combines DNA computing with an evolutionary algorithm. The evolutionary approach makes it possible to increase dimensionality by replacing the typical filtering method with an evolutionary one. Thus, by iteratively growing and recombining a population of strands, eliminating incorrect solutions from the population, and choosing the best solutions via gel electrophoresis, an optimal or near-optimal solution can be evolved rather than extracted from the initial population.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_16-Improving_DNA_Computing_Using_Evolutionary_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Routing Protocol for Maximizing the Lifetime in WSNs Using Ant Colony Algorithm and Artificial Immune System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070315</link>
        <id>10.14569/IJACSA.2016.070315</id>
        <doi>10.14569/IJACSA.2016.070315</doi>
        <lastModDate>2016-03-31T17:20:54.5970000+00:00</lastModDate>
        
        <creator>Safaa Khudair Leabi</creator>
        
        <creator>Turki Younis Abdalla</creator>
        
        <subject>ant colony algorithm; artificial immune system; adaptive routing; network lifetime; wireless sensor networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Energy limitations have become a fundamental challenge in designing wireless sensor networks, and network lifetime is the metric of greatest interest. Several attempts have been made to use energy efficiently in routing techniques. This paper proposes an energy-efficient routing technique, called swarm intelligence routing, for maximizing the network lifetime. This is achieved by using the ant colony optimization (ACO) algorithm and an artificial immune system (AIS). The AIS is used to solve the packet loop problem and to control the route direction, while the ACO algorithm determines the optimum route for sending data packets. The proposed routing technique seeks the optimum route from the nodes towards the base station so that energy exhaustion is balanced and lifetime is maximized. The proposed technique is compared with the Dijkstra routing method, and the results show a significant increase in network lifetime of about 1.2567.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_15-Energy_Efficient_Routing_Protocol_for_Maximizing_the_Lifetime.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Competitive Representation Based Classification Using Facial Noise Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070314</link>
        <id>10.14569/IJACSA.2016.070314</id>
        <doi>10.14569/IJACSA.2016.070314</doi>
        <lastModDate>2016-03-31T17:20:54.5800000+00:00</lastModDate>
        
        <creator>Tao Liu</creator>
        
        <creator>Cong Li</creator>
        
        <creator>Ying Liu</creator>
        
        <creator>Chao Li</creator>
        
        <subject>face recognition; sparse representation; biometrics; noise detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Linear representation based face recognition has been widely studied in recent years. Competitive representation classification is a linear representation based method that uses the most competitive training samples to sparsely represent a probe. However, possible noise on a test face image can bias the representation results. In this paper, we propose a facial noise detection method that removes noise from the test image during the competitive representation. We compare the proposed method with others on the AR, Extended Yale B, ORL, FERET, and LFW databases, and the experimental results show the good performance of our method.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_14-Competitive_Representation_Based_Classification_Using_Facial_Noise_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Corrupted MP4 Carving Using MP4-Karver</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070313</link>
        <id>10.14569/IJACSA.2016.070313</id>
        <doi>10.14569/IJACSA.2016.070313</doi>
        <lastModDate>2016-03-31T17:20:54.5500000+00:00</lastModDate>
        
        <creator>Ahmed Nur Elmi Abdi</creator>
        
        <creator>Kamaruddin Malik Mohamad</creator>
        
        <creator>Yusoof Mohammed Hasheem</creator>
        
        <creator>Rashid Naseem</creator>
        
        <creator>Jamaluddin</creator>
        
        <creator>Muhammad Aamir</creator>
        
        <subject>Digital forensic; File carving; Repairing; Corrupted; Frame extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>In digital forensics, the recovery of deleted and damaged video files plays an important role in the search for evidence. In this paper, the MP4-Karver tool is proposed to recover and repair corrupted videos. Moreover, MP4-Karver extracts frames from a video to automatically screen it for illegal content, instead of requiring the complete video to be watched. Many existing approaches, such as Scalpel's method, Garfinkel, Bi-Fragment Gap Carving, Smart Carving, and Frame Based Recovery, attempt to recover videos in different ways, but most of the recovered videos are not fully playable. The proposed MP4-Karver focuses on recovering video files and repairing corrupted videos so that they are complete and playable. Experimental results show that MP4-Karver effectively restores corrupted or damaged videos, achieving an improved percentage of video restoration compared with existing tools.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_13-Corrupted_MP4_Carving_Using_MP4_Karver.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Explorative Study of SQL Injection Attacks and Mechanisms to Secure Web Application Database- A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070312</link>
        <id>10.14569/IJACSA.2016.070312</id>
        <doi>10.14569/IJACSA.2016.070312</doi>
        <lastModDate>2016-03-31T17:20:54.5170000+00:00</lastModDate>
        
        <creator>Chandershekhar Sharma</creator>
        
        <creator>Dr. S. C. Jain</creator>
        
        <creator>Dr. Arvind K Sharma</creator>
        
        <subject>Injection Attacks; SQL vulnerabilities; Web Application Attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Continuous innovations in web development technologies have driven the growth of user-friendly web applications. With activities such as online banking, shopping, booking, and trading, these applications have become an integral part of everyone's daily routine. The profit-driven online business industry has also acknowledged this growth, because a thriving application provides a global platform to an organization. The database of a web application is its most valuable asset, storing sensitive information about individuals and the organization. SQL injection attack (SQLIA) is the topmost threat, as it targets the database of a web application. It allows an attacker to gain control over the application, resulting in financial fraud, leakage of confidential data, and even deletion of the database. The exhaustive survey of SQL injection attacks presented in this paper is based on empirical analysis, comprising the deployment of the injection mechanism for each attack type on various websites, dummy databases, and web applications. The principal security mechanisms for web application databases are also discussed to mitigate SQL injection attacks.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_12-Explorative_Study_of_SQL_Injection_Attacks_and_Mechanisms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Critical Path Reduction of Distributed Arithmetic Based FIR Filter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070311</link>
        <id>10.14569/IJACSA.2016.070311</id>
        <doi>10.14569/IJACSA.2016.070311</doi>
        <lastModDate>2016-03-31T17:20:54.4870000+00:00</lastModDate>
        
        <creator>Sunita Badave</creator>
        
        <creator>Anjali Bhalchandra</creator>
        
        <subject>Critical Path; Multiplier less FIR filter; Distributed Arithmetic; LUT Design; Indexed LUT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Operating speed, the reciprocal of critical path computation time, is one of the prominent design metrics of finite impulse response (FIR) filters. It is largely affected both by the system architecture and by the technique used to design the arithmetic modules. The large computation time of conventionally designed multipliers limits the speed of the system architecture. Distributed arithmetic (DA) is one of the techniques used to provide multiplier-less multiplication in the implementation of FIR filters; however, it suffers from a severe limitation: the look-up table (LUT) grows exponentially with the filter order. An improved distributed arithmetic technique for the system architecture of FIR filters is presented here. In the proposed technique, the single large LUT of conventional DA is replaced by a number of smaller indexed LUT pages, which restricts the exponential growth and reduces system access time. It also eliminates the use of adders. A selection module selects the desired value from the desired page, which reduces the computation time of the critical path. A trade-off between the access times of the LUT pages and the selection module helps to achieve the minimum critical path and thus to maximize the operating speed. Implementations target Xilinx ISE on Virtex IV devices, and results for an FIR filter with 8-bit input sample data width are presented. It is observed that the proposed design performs significantly faster than the conventional DA and existing DA-based designs.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_11-Critical_Path_Reduction_of_Distributed_Arithmetic_Based_FIR_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Pedestrian Dynamic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070310</link>
        <id>10.14569/IJACSA.2016.070310</id>
        <doi>10.14569/IJACSA.2016.070310</doi>
        <lastModDate>2016-03-31T17:20:54.4700000+00:00</lastModDate>
        
        <creator>Purba Daru Kusuma</creator>
        
        <subject>pattern generation; cellular automata; pedestrian dynamic; intelligent agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Pattern generation is one of the ways to apply computer science in art, and many methods have been implemented for it. One of them is cellular automata (CA). In a previous work, CA was used to create images with stochastic and irregular patterns. That method has a performance problem: the average number of occupied cells is less than 50 percent, so it must be improved. In this research, the pedestrian dynamic concept is incorporated into the pattern generation process, so that stochastic and deterministic approaches are combined in generating the pattern. This combination is the key element of the method. The proposed model also successfully produces irregular pattern images. Based on quantitative tests, the occupied cell ratio is still less than 50 percent, but the proposed model achieves a better distance between the last position and the starting point nodes of the pattern. When the number of agents is 75, the target of an occupied cell ratio of more than 75 percent is achieved.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_10-Implementation_of_Pedestrian_Dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NMVSA Greedy Solution for Vertex Cover Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070309</link>
        <id>10.14569/IJACSA.2016.070309</id>
        <doi>10.14569/IJACSA.2016.070309</doi>
        <lastModDate>2016-03-31T17:20:54.4230000+00:00</lastModDate>
        
        <creator>Mohammed Eshtay</creator>
        
        <creator>Azzam Sleit</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <subject>Vertex Cover Problem (MVC); Combinatorial Problem; NP-Complete Problem; Approximation Algorithm; Greedy algorithms; Minimum Independent Set</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Minimum vertex cover (MVC) is a well-known NP-complete optimization problem. The importance of MVC in theory and practice stems from the wide range of its applications. This paper describes a polynomial-time greedy algorithm that finds near-optimal solutions for MVC. The new algorithm, NMVSA, is a modification of an existing algorithm called MVSA, which uses the same principle of selecting a candidate from the neighborhood of a vertex, with a modification in the selection procedure. A comparative study between NMVSA and MVSA shows that the proposed NMVSA provides better or equal results in most cases of the underlying data sets, leading to a better average approximation ratio. NMVSA inherits the simplicity of the original algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_9-NMVSA_Greedy_Solution_for_Vertex_Cover_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Feature Extraction of Collective Activity in Human-Computer Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070308</link>
        <id>10.14569/IJACSA.2016.070308</id>
        <doi>10.14569/IJACSA.2016.070308</doi>
        <lastModDate>2016-03-31T17:20:54.4100000+00:00</lastModDate>
        
        <creator>Ioannis Karydis</creator>
        
        <creator>Markos Avlonitis</creator>
        
        <creator>Phivos Mylonas</creator>
        
        <creator>Spyros Sioutas</creator>
        
        <subject>Users activity; aggregation modelling; collective intelligence; time-based media; pattern detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Time-based online media, such as video, has been growing in importance. Still, there is limited research on information retrieval for time-coded media content. This work elaborates on the idea of extracting feature characteristics from time-based online content by analyzing users’ interactions instead of the content itself. Accordingly, a time series of users’ activity in online media is constructed and shown to exhibit rich temporal dynamics. Additionally, it is demonstrated that it is possible to detect characteristic patterns in collective activity while accessing time-based media. Pattern detection of collective activity, as well as feature extraction of the corresponding pattern, is achieved by means of a time series clustering approach, demonstrated on information-rich videos. It is shown that the proposed probabilistic algorithm effectively detects distinct shapes of the users’ time series, correctly predicting popularity dynamics as well as their scale characteristics.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_8-Detection_and_Feature_Extraction_of_Collective_Activity_in_Human_Computer_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color Image Segmentation via Improved K-Means Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070307</link>
        <id>10.14569/IJACSA.2016.070307</id>
        <doi>10.14569/IJACSA.2016.070307</doi>
        <lastModDate>2016-03-31T17:20:54.3770000+00:00</lastModDate>
        
        <creator>Ajay Kumar</creator>
        
        <creator>Shishir Kumar</creator>
        
        <subject>k-means; k-means++; k-medoids; k-mode; kernel density component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>Data clustering techniques are often used to segment real-world images. Unsupervised image segmentation algorithms based on clustering suffer from random initialization. There is a need for an efficient and effective image segmentation algorithm that can be used in computer vision, object recognition, image recognition, and compression. To address these problems, the authors present a density-based initialization scheme for segmenting color images. In the kernel density based clustering technique, the data sample is mapped to a high-dimensional space for effective data classification. The Gaussian kernel is used for density estimation and for mapping the sample image into a high-dimensional color space. The proposed initialization scheme for the k-means clustering algorithm can homogeneously segment an image into regions of interest while avoiding the dead-centre and trapped-centre (local minima) phenomena. The experimental results indicate that the proposed approach is more effective than other existing clustering-based image segmentation algorithms. In the evaluation, the Berkeley image database is used for a comparative analysis with recent clustering-based image segmentation algorithms such as k-means++, k-medoids, and k-mode.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_7-Color_Image_Segmentation_Via_Improved_K_Means_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Learning Collaborative System for Practicing Foreign Languages with Native Speakers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070306</link>
        <id>10.14569/IJACSA.2016.070306</id>
        <doi>10.14569/IJACSA.2016.070306</doi>
        <lastModDate>2016-03-31T17:20:54.3630000+00:00</lastModDate>
        
        <creator>Ilya V. Osipov</creator>
        
        <creator>Alex A. Volinsky</creator>
        
        <creator>Anna Y. Prasikova</creator>
        
        <subject>E-learning, learning tools; peer-to-peer network; social network; open educational resources; distance learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The paper describes a novel social network-based open educational resource for practicing foreign languages with native speakers, based on predefined teaching materials. This virtual learning platform, called i2istudy, eliminates misunderstanding by providing prepared, predefined scenarios, enabling the participants to understand each other and, as a consequence, to communicate freely. The developed system allows communication through real-time video and audio feeds. In addition to establishing the communication link, it tracks the student's progress and allows rating the instructor based on the learner's experience. The system went live in April 2014 and had over six thousand active daily users, with over 40,000 total registered users. Monetization has been added to the system, and time will show how popular the system becomes in the future.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_6-E-Learning_Collaborative_System_for_Practicing_Foreign_Languages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effects of Walls and Floors in Indoor Localization Using Tracking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070305</link>
        <id>10.14569/IJACSA.2016.070305</id>
        <doi>10.14569/IJACSA.2016.070305</doi>
        <lastModDate>2016-03-31T17:20:54.3470000+00:00</lastModDate>
        
        <creator>Farhat M. A. Zargoun</creator>
        
        <creator>Ibrahim M. Henawy</creator>
        
        <creator>Nesreen I. Ziedan</creator>
        
        <subject>Indoor localization; Tracking algorithm; Effects of wall and floor on RSS; Effects of obstruction; Multi wall and floor propagation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The advancement of wireless and mobile networks has led to an increase in location based services (LBS). LBS can be applied in many applications, such as vehicle systems, security systems, and patient tracking systems.
The Global Navigation Satellite Systems (GNSS) have become very popular due to their ability to provide highly accurate positions, especially in outdoor environments. However, GNSS signals become very weak when they pass through natural or man-made structures, as in urban canyons or indoor environments. This hinders the applicability of GNSS-based localization techniques in such challenging environments.
Many indoor localization techniques are based on the received signal strength (RSS). The RSS is related to the distance to an access point (AP): it is stronger when the receiver is closer to the AP, provided the received signal is not obstructed by walls or floors. This paper studies the effect of walls and floors on the RSS and estimates the distribution of the RSS due to such obstructions. Moreover, a tracking algorithm based on a multi-wall and floor propagation model is applied to increase the positioning accuracy.
</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_5-Effects_of_Walls_and_Floors_in_Indoor_Localization_Using_Tracking_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A General Evaluation Framework for Text Based Conversational Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070304</link>
        <id>10.14569/IJACSA.2016.070304</id>
        <doi>10.14569/IJACSA.2016.070304</doi>
        <lastModDate>2016-03-31T17:20:54.3000000+00:00</lastModDate>
        
        <creator>Mohammad Hijjawi</creator>
        
        <creator>Zuhair Bandar</creator>
        
        <creator>Keeley Crockett</creator>
        
        <subject>Artificial intelligence; Conversational Agent and evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>This paper details the development of a new evaluation framework for a text based Conversational Agent (CA). A CA is an intelligent system that handles spoken and/or text based conversations between a machine and a human. Generally, the lack of evaluation frameworks for CAs hampers their development; the purpose of evaluating any system is to verify its functionality and to guide its continued development. A specific CA, namely ArabChat, has been chosen to test the proposed framework. ArabChat is a rule based CA that uses a pattern matching technique to handle users’ Arabic text based conversations. The evaluation framework proposed and developed in this paper is natural language independent. It is based on the exchange of specific information between ArabChat and the user, called “Information Requirements”. This information is tagged for each rule in the applied domain and should be present in a user’s utterance (conversation). A real experiment was conducted at the Applied Science University in Jordan, where ArabChat served as an information-point advisor for native Arabic-speaking students, in order to evaluate ArabChat and, in turn, the proposed evaluation framework.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_4-A_General_Evaluation_Framework_for_Text_Based_Conversational_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet of Everything (IoE): Analysing the Individual Concerns Over Privacy Enhancing Technologies (PETs)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070303</link>
        <id>10.14569/IJACSA.2016.070303</id>
        <doi>10.14569/IJACSA.2016.070303</doi>
        <lastModDate>2016-03-31T17:20:54.2830000+00:00</lastModDate>
        
        <creator>Asim Majeed</creator>
        
        <creator>Rehan Bhana</creator>
        
        <creator>Anwar Ul Haq</creator>
        
        <creator>Imani Kyaruzi</creator>
        
        <creator>Shaheed Pervaz</creator>
        
        <creator>Mike-Lloyd Williams</creator>
        
        <subject>privacy; privacy enhancing technology (PET); big data; information communication technology (ICT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>This paper investigates the effectiveness of protecting individuals’ privacy through privacy enhancing technologies (PETs). The successful fusion of cyberspace with the real world through the “Internet of Everything (IoE)” has led to rapid progress in the research and development of predictive analysis of big data. Individual privacy has gained considerable momentum in both industry and academia, since PETs constitute a technical means to protect information. Privacy regulations and the state of the law deem this an integral part of protecting the individual’s private sphere when the infrastructure of Information Communication Technologies (ICT) is laid out. Modern organisations use consent forms to gather an individual’s sensitive personal information for a specific purpose, and the law prohibits using that information for purposes other than those for which consent was initially established. The ICT infrastructure should be developed in alignment with privacy laws and made compliant as well as intelligent, learning by itself from the environment. This extra layer embedded in the system would educate the ICT structure and help the system to authenticate, as well as communicate with, prospective users. The existing literature on protecting individuals’ privacy through PETs is still embryonic and concludes that individuals’ concerns about privacy are not fully considered in the technological sense. Among other contributions, this paper devises a conceptual model to improve individual privacy.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_3-Internet_of_Everything_Ioe_Analysing_the_Individual_Concerns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Portable Facial Recognition Jukebox Using Fisherfaces (FRJ)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070302</link>
        <id>10.14569/IJACSA.2016.070302</id>
        <doi>10.14569/IJACSA.2016.070302</doi>
        <lastModDate>2016-03-31T17:20:54.2670000+00:00</lastModDate>
        
        <creator>Richard Mo</creator>
        
        <creator>Adnan Shaout</creator>
        
        <subject>Facial Recognition; Raspberry Pi; Computer Vision; GNU/Linux Operating System; OpenCV; C++</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>A portable real-time facial recognition system that is able to play personalized music based on the identified person’s preferences was developed. The system is called Portable Facial Recognition Jukebox Using Fisherfaces (FRJ). Raspberry Pi was used as the hardware platform for its relatively low cost and ease of use. This system uses the OpenCV open source library to implement the computer vision Fisherfaces facial recognition algorithms, and uses the Simple DirectMedia Layer (SDL) library for playing the sound files. FRJ is cross-platform and can run on both Windows and Linux operating systems. The source code was written in C++. The accuracy of the recognition program can reach up to 90% under controlled lighting and distance conditions. The user is able to train up to 6 different people (as many as will fit in the GUI). When implemented on a Raspberry Pi, the system is able to go from image capture to facial recognition in an average time of 200ms.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_2-Portable_Facial_Recognition_Jukebox_Using_Fisherfaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Algorithm of Forgery Detection in Copy-Move and Spliced Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070301</link>
        <id>10.14569/IJACSA.2016.070301</id>
        <doi>10.14569/IJACSA.2016.070301</doi>
        <lastModDate>2016-03-31T17:20:54.1900000+00:00</lastModDate>
        
        <creator>Tu Huynh-Kha</creator>
        
        <creator>Thuong Le-Tien</creator>
        
        <creator>Synh Ha-Viet-Uyen</creator>
        
        <creator>Khoa Huynh-Van</creator>
        
        <creator>Marie Luong</creator>
        
        <subject>Forgery detection (FD); Copy-Move; Discrete Wavelet Transform (DWT); Run Difference Method (RDM); Splicing; Sharpness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(3), 2016</description>
        <description>The paper presents a new method to detect forgery by copy-move, splicing, or both in the same image. Multiscale analysis, which limits the computational complexity, is used to check whether there is any counterfeit in the image. By applying a one-level Discrete Wavelet Transform, the sharp edges, which are traces of cut-paste manipulation, appear as high frequencies and are detected from the LH, HL and HH sub-bands. A threshold is proposed to filter the suspicious edges, and a morphological operation is applied to reconstruct the boundaries of forged regions. If dilation produces no shape and no sharp edges are highlighted, the image is not faked. In the case of a forged image, if a region at another position is similar to the defined region in the image, a copy-move is confirmed; if not, splicing is detected. Features of the suspicious region are extracted using the Run Difference Method (RDM) and a feature vector is created; searching for regions having the same feature vector is called the detection phase. The algorithm, which applies multiscale analysis and morphological operations to detect the sharp edges and RDM to extract the image features, is simulated in Matlab with high efficiency, not only on copy-move or spliced images but also on images containing both copy-move and splicing.</description>
        <description>http://thesai.org/Downloads/Volume7No3/Paper_1-A_Robust_Algorithm_of_Forgery_Detection_in_Copy_Move_and_Spliced_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Vague Analytical Hierarchy Process to Prioritize the Challenges Facing Public Transportation in Dar Es Salaam City-Tanzania</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050308</link>
        <id>10.14569/IJARAI.2016.050308</id>
        <doi>10.14569/IJARAI.2016.050308</doi>
        <lastModDate>2016-03-10T08:08:18.3530000+00:00</lastModDate>
        
        <creator>Erick P. Massami</creator>
        
        <creator>Benitha M. Myamba</creator>
        
        <subject>Analytical Hierarchy Process; Vague Set; Urban Transportation; Transportation Challenge; Decision making</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>Transportation is key to the economy and social welfare; it makes mobility more accessible and enhances social and economic interactions. On the other hand, the increase of urban population, pollution and other negative impacts have directly affected the existing transportation system in Dar es Salaam City, Tanzania. As the transportation challenges cannot be overcome simultaneously due to the scarcity of financial resources, a decision support tool is needed to prioritize these challenges. In this study, a composite model of Vague Set Theory (VST) and the Analytical Hierarchy Process (AHP) is applied to appraise the challenges. The Vague Analytical Hierarchy Process (VAHP) uses opinions of experts collected from a survey questionnaire. The computational results reveal the ranking, in descending order, of the urban transportation challenges as poor traffic management, inadequacy of proper public transit service, and inadequacy of road transport infrastructure. The results also show that the VAHP model is a useful decision support tool for transport planners, transport policy makers and other industry stakeholders.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_8-Application_of_Vague_Analytical_Hierarchy_Process_to_Prioritize.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The True Nature of Numbers is that they are a Group Associated with the Painlev&#233; Property</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/</link>
        <id></id>
        <doi></doi>
        <lastModDate>2016-03-10T08:08:18.3070000+00:00</lastModDate>
        
        <creator>Yoshimi SHIMOKAWA</creator>
        
        <subject>cognitive science; pattern recognition; scale invariance; Painlev&#233; Property</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>The true nature of numbers is that they are a group associated with the moving Painlev&#233; property. In the past, humans considered numbers to be individual entities. The two-point selective ability of living beings can be considered to have established today’s mathematical logic. We can consider mathematical logic to have been formed by the minimum condition of a consistent group (a group with no discrepancies) resulting from a light spectrum being transmitted to or reflected on a three-dimensional closed manifold. In other words, examining the characteristics of the light spectrum enables an understanding of the characteristics of human perception, and thereby of the numbers perceived by human perception. Based on this understanding, humans can think about how mathematical logic is perceived. What was unfortunate about the logical constructions that started with G&#246;del is that they sought the concept of numbers within mathematical logic. Cellular automata have advanced this approach by physicalizing it, and Conway revealed discrepancies by creating actual machines [12]. The decimal system, in itself, is merely a notation tool. Owing to the structure of base conversion, it is convenient to use decimals in two-dimensional notations. However, decimals cannot be used in three-dimensional notations; instead, circular numbers are generated. This can be considered the result of using &quot;0&quot;, although a concept for writing out a three-dimensional notation consistently should be created separately. With no concept of null (?) in &quot;Set Theory and the Continuum Hypothesis&quot; [5], the perception of the human body is simply maximizing the margin of classification boundaries in an SVM problem [3]. How to consider maximizing the margin of classification boundaries in perceiving the problem could be thought of as a theme for the theory of numbers.
Currently, units have been defined and unified by the International System of Units (SI). By using these units, the algorithms of molecular structure in organisms may also be elucidated. A third algorithm is needed: an algorithm of the molecular structure of organisms that unifies &quot;the number of recognition&quot; and &quot;the number of physics&quot;. In other words, this paper has tentatively described a structure that is a common algorithm for &quot;the number of physics&quot; and &quot;the number of recognition&quot;, and has discussed the nature of numbers with respect to that algorithm. In the future, we wish to apply this algorithm to key operations and to display notations in the development of new devices.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_7-The_True_Nature_of_Numbers_is_that_they_are_a_Group_Associated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Micro-Blog Emotion Classification Method Research Based on Cross-Media Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050306</link>
        <id>10.14569/IJARAI.2016.050306</id>
        <doi>10.14569/IJARAI.2016.050306</doi>
        <lastModDate>2016-03-10T08:08:18.2770000+00:00</lastModDate>
        
        <creator>Qiang Chen</creator>
        
        <creator>Jiangfan Feng</creator>
        
        <subject>tweet sentiment classification; CCA; Text emotional; Image emotional</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>Although the sentiment analysis of tweets has attracted more and more attention in recent years, most existing methods mainly analyze the text information. Because of the fuzziness of emotion expression, users are more likely to use mixed modes, such as words and images, to express their feelings. This paper proposes a classification method for tweet emotion based on fused features, which combines textual features and image features effectively. Due to the sparse data and the high degree of redundancy of the classification features, we adopt canonical correlation analysis to reduce the dimensions of the data expressed by the text emotional features and the image features. The dimension reduction maximally retains the relevance of the characteristics of the text and the emotional image at the high semantic level, and a support vector machine (SVM) is used to train and test the fused feature data set. The results of data experiments on Sina tweets show that the algorithm can obtain a better classification effect than single-feature selection methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_6-Micro_Blog_Emotion_Classification_Method_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Packrat Parsing: A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050305</link>
        <id>10.14569/IJARAI.2016.050305</id>
        <doi>10.14569/IJARAI.2016.050305</doi>
        <lastModDate>2016-03-10T08:08:18.2300000+00:00</lastModDate>
        
        <creator>Manish M. Goswami</creator>
        
        <creator>Dr. M.M. Raghuwanshi</creator>
        
        <creator>Dr. Latesh Malik</creator>
        
        <subject>Parsing Expression Grammar; Packrat Parsing; Memoization; Backtracking</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>Packrat parsing is a recently introduced technique based upon parsing expression grammars. This parsing approach uses memoization to avoid redundant function calls and thereby guarantees linear parse time. This paper studies the progress made in packrat parsing to date and discusses approaches to carry out this parsing process efficiently. In addition, other issues such as left recursion and error reporting are associated with this type of parsing approach, and the efforts attempted by researchers to address these issues are discussed here. This paper therefore presents a state-of-the-art review of packrat parsing so that researchers can use it for further development of the technology in an efficient manner.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_5-Packrat_Parsing_A_Literature_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Performance of Free Space Optics Link Using Array of Receivers in Terrible Weather Conditions of Plain and Hilly Areas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050304</link>
        <id>10.14569/IJARAI.2016.050304</id>
        <doi>10.14569/IJARAI.2016.050304</doi>
        <lastModDate>2016-03-10T08:08:18.1830000+00:00</lastModDate>
        
        <creator>Amit Gupta</creator>
        
        <creator>Surbhi Bakshi</creator>
        
        <creator>Shaina</creator>
        
        <creator>Mandeep Chaudhary</creator>
        
        <subject>Free space optics (FSO) communication; Array of photo detectors; Bit Error Rate (BER); Eye diagram; Quality factor (Q factor); Bad weather effects</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>Free-space optical (FSO) communication is a cost-effective and high-data-rate access technique which has proven itself a strong alternative to radio frequency technology. An FSO link provides a high-bandwidth solution to the last-mile access bottleneck. However, for terrestrial communication systems, the performance of these links is severely degraded by atmospheric losses, mainly due to fog, rain and snow, so continuous availability of the link is always a concern. This paper investigates the effects of dreadful weather, such as rain, fog and snow, and other losses on the transmission performance of FSO systems. The technique of using an array of receivers for improving the performance of FSO links is explored; it involves the deployment of multiple photodetectors at the receiver end to mitigate the effects of various weather conditions. The performance of the proposed system is evaluated in terms of bit error rate, received signal power, Q-factor and height of the eye diagram. The influence of various weather conditions of plain and hilly areas is taken into consideration and results are compared with conventional FSO links.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_4-Improving_Performance_of_Free_Space_Optics_Link_Using_Array_of_Receivers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Swarm Optimization Techniques to Calculate Execution Time for Software Modules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050303</link>
        <id>10.14569/IJARAI.2016.050303</id>
        <doi>10.14569/IJARAI.2016.050303</doi>
        <lastModDate>2016-03-10T08:08:18.1370000+00:00</lastModDate>
        
        <creator>Nagy Ramadan Darwish</creator>
        
        <creator>Ahmed A. Mohamed</creator>
        
        <creator>Bassem S. M. Zohdy</creator>
        
        <subject>Particle Swarm Optimization; Parallel Particle Swarm Optimization; MATLAB Code without Algorithm</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>This research aims to calculate the execution time of software modules using Particle Swarm Optimization (PSO) and Parallel Particle Swarm Optimization (PPSO). A comparison is made between MATLAB Code without Algorithm (MCWA), PSO and PPSO to determine the time taken when executing any software module. The proposed algorithms, which include PPSO, increase the speed of executing the algorithm itself in order to achieve quick results. This research introduces the proposed architecture for calculating execution time and uses MATLAB to implement MCWA, PSO and PPSO. The results show that the PPSO algorithm is more efficient in speed and time compared to the MCWA and PSO algorithms for calculating the execution time.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_3-Applying_Swarm_Optimization_Techniques_to_Calculate_Execution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implement Fuzzy Logic to Optimize Electronic Business Success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050302</link>
        <id>10.14569/IJARAI.2016.050302</id>
        <doi>10.14569/IJARAI.2016.050302</doi>
        <lastModDate>2016-03-10T08:08:18.0270000+00:00</lastModDate>
        
        <creator>Fahim Akhter</creator>
        
        <subject>Business Intelligence; Fuzzy Logic; Electronic Business; Trust; Security; Usability</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>Customers are realizing the importance and benefits of shopping online, such as convenience, comparison, product research, larger selection, and lower prices. The dynamic nature of e-commerce prompts online businesses to make alterations in their business processes and decision making to satisfy customers’ needs. Online businesses are adopting Business Intelligence (BI) tools and systems in collaboration with fuzzy logic systems to forecast the future of e-commerce. With the aid of BI, businesses have more possibilities to choose the types and structures of information required to serve customers. The fuzzy logic system and BI capabilities would allow both customers and vendors to make the right decisions about online shopping. Many experts believe that trust and security are critical risk factors for the adoption of e-commerce. Online trust may be influenced by factors such as usability, familiarity and conducting business with unknown parties. This paper discusses a fuzzy logic and BI approach to gauge the level of trust and security in online transactions. The paper further addresses the issues and concerns related to the equilibrium of trust, security, and usability in online shopping.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_2-Implement_Fuzzy_Logic_to_Optimize_Electronic_Business_Success.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creation of a Remote Sensing Portal for Practical Use Dedicated to Local Governments in Kyushu, Japan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050301</link>
        <id>10.14569/IJARAI.2016.050301</id>
        <doi>10.14569/IJARAI.2016.050301</doi>
        <lastModDate>2016-03-10T08:08:17.9800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masaya Nakashima</creator>
        
        <subject>Remote Sensing; Satellite data; Land use map creation; Disaster prevention; Normalized Difference Vegetation Index: NDVI; Forest map; Volcanic eruption; 3D representation with Digital Elevation Model: DEM; Open data API</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(3), 2016</description>
        <description>A remote sensing portal site for practical use, dedicated to local governments, is created. Key components of the site are (1) links to data providers, (2) links to data analysis software tools, and (3) examples of actual uses of satellite remote sensing data, in particular for local governments. Users’ demands for remote sensing satellite data are investigated for the local governments situated in Kyushu, Japan. According to these demands, the remote sensing portal site is created with the aforementioned key components. As examples of remote sensing data applications, the creation of land use maps, disaster mitigation, forest maps, vegetation index maps for evaluating the vitality of agricultural fields and forests, etc. are taken into account. In particular, the forest map is created with free and open source software (FOSS) classifiers together with open data API-derived training samples applied to Landsat-8 OLI data. On the other hand, a volcanic eruption is featured for disaster relief with 3D representation using open data derived DEM data. In accordance with the users’ evaluation reports, it is found that the proposed portal site is useful.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No3/Paper_1-Creation_of_a_Remote_Sensing_Portal_for_Practical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reflected Adaptive Differential Evolution with Two External Archives for Large-Scale Global Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070284</link>
        <id>10.14569/IJACSA.2016.070284</id>
        <doi>10.14569/IJACSA.2016.070284</doi>
        <lastModDate>2016-03-05T06:13:12.1200000+00:00</lastModDate>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Nasser Tairan</creator>
        
        <creator>Muhammad Asif Jan</creator>
        
        <creator>Wali Khan Mashwani</creator>
        
        <creator>Abdel Salhi</creator>
        
        <subject>Adaptive differential evolution; large scale global optimization; archives</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>JADE is an adaptive scheme of the nature-inspired algorithm Differential Evolution (DE). It performed considerably better on a set of well-studied benchmark test problems. In this paper, we evaluate the performance of a new JADE with two external archives, labeled Reflected Adaptive Differential Evolution with Two External Archives (RJADE/TA), on unconstrained continuous large-scale global optimization problems. The only archive of JADE stores failed solutions. In contrast, the proposed second archive stores superior solutions at regular intervals of the optimization process to avoid premature convergence towards local optima. The superior solutions which are sent to the archive are reflected by new potential solutions. At the end of the search process, the best solution is selected from the second archive and the current population. The performance of the RJADE/TA algorithm is then extensively evaluated on two test beds: first, on the 28 latest benchmark functions constructed for the 2013 Congress on Evolutionary Computation special session; and second, on ten benchmark problems from the CEC 2010 Special Session and Competition on Large-Scale Global Optimization. Experimental results demonstrated a very competitive performance of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_84-Reflected_Adaptive_Differential_Evolution_with_Two.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regression-Based Feature Selection on Large Scale Human Activity Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070283</link>
        <id>10.14569/IJACSA.2016.070283</id>
        <doi>10.14569/IJACSA.2016.070283</doi>
        <lastModDate>2016-03-01T13:30:02.5330000+00:00</lastModDate>
        
        <creator>Hussein Mazaar</creator>
        
        <creator>Eid Emary</creator>
        
        <creator>Hoda Onsi</creator>
        
        <subject>Action Bank; Template Matching; SpatioTemporal Orientation Energy; Correlation; R-Squared; Support Vector Machine; Logistic Regression; Linear Regression; Human Activity Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this paper, we present an approach for regression-based feature selection in human activity recognition. Due to the high-dimensional features in human activity recognition, the model may overfit and fail to learn its parameters well. Moreover, some features are redundant or irrelevant. The goal is to select important discriminating features to recognize human activities in videos. The R-Squared regression criterion can identify the best features based on the ability of a feature to explain the variations in the target class. The features are significantly reduced, by nearly 99.33%, resulting in better classification accuracy. A Support Vector Machine with a linear kernel is used to classify the activities. The experiments are tested on the UCF50 dataset. The results show that the proposed model significantly outperforms state-of-the-art methods.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_83-Regression_Based_Feature_Selection_on_Large_Scale.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Threshold Based Penalty Functions for Constrained Multiobjective Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070282</link>
        <id>10.14569/IJACSA.2016.070282</id>
        <doi>10.14569/IJACSA.2016.070282</doi>
        <lastModDate>2016-03-01T13:30:02.4870000+00:00</lastModDate>
        
        <creator>Muhammad Asif Jan</creator>
        
        <creator>Nasser Mansoor Tairan</creator>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Wali Khan Mashwani</creator>
        
        <subject>Constrained multiobjective optimization; decomposition; MOEA/D; dynamic and adaptive penalty functions; threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This paper compares the performance of our recently proposed threshold based penalty function against its dynamic and adaptive variants. These penalty functions are incorporated in the update and replacement scheme of the multiobjective evolutionary algorithm based on decomposition (MOEA/D) framework to solve constrained multiobjective optimization problems (CMOPs). As a result, the capability of MOEA/D is extended to handle constraints, and a new algorithm, denoted by CMOEA/D-DE-TDA, is proposed. The performance of CMOEA/D-DE-TDA is tested, in terms of the values of the IGD-metric and SC-metric, on the well-known CF-series test instances. The experimental results are also compared with the three best performers of the CEC 2009 MOEA competition. Empirical results show the pitfalls of the proposed penalty functions.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_82-Threshold_Based_Penalty_Functions_for_Constrained.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Threshold Based Penalty Function Embedded MOEA/D</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070281</link>
        <id>10.14569/IJACSA.2016.070281</id>
        <doi>10.14569/IJACSA.2016.070281</doi>
        <lastModDate>2016-03-01T13:30:02.4530000+00:00</lastModDate>
        
        <creator>Muhammad Asif Jan</creator>
        
        <creator>Nasser Mansoor Tairan</creator>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Wali Khan Mashwani</creator>
        
        <subject>Constrained multiobjective optimization; decomposition; MOEA/D; penalty function; threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Recently, we proposed a new threshold based penalty function. The threshold dynamically controls the penalty to infeasible solutions. This paper implants the two different forms of the proposed penalty function in the multiobjective evolutionary algorithm based on decomposition (MOEA/D) framework to solve constrained multiobjective optimization problems. This led to a new algorithm, denoted by CMOEA/D-DE-ATP. The performance of CMOEA/D-DE-ATP is tested on hard CF-series test instances in terms of the values of the IGD-metric and SC-metric. The experimental results are compared with the three best performers of the CEC 2009 MOEA competition. Experimental results show that the proposed penalty function is very promising, and it works well in the MOEA/D framework.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_81-A_New_Threshold_Based_Penalty_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Information Diffusion Model for Viral Marketing in Business</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070280</link>
        <id>10.14569/IJACSA.2016.070280</id>
        <doi>10.14569/IJACSA.2016.070280</doi>
        <lastModDate>2016-03-01T13:30:02.3770000+00:00</lastModDate>
        
        <creator>Lulwah AlSuwaidan</creator>
        
        <creator>Mourad Ykhlef</creator>
        
        <subject>information diffusion; viral marketing; social media marketing; social networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Current obstacles in the study of social media marketing, including dealing with massive data and real-time updates, have motivated contributions of solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has been limited. Most of the literature has focused on achieving objectives such as influence maximization or community detection. Therefore, this article conducts an in-depth review of works related to diffusion for viral marketing. Viral marketing has been applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to address the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_80-Toward_Information_Diffusion_Model_for_Viral.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Theoretical and numerical characterization of continuously graded thin layer by the reflection acoustic microscope</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070279</link>
        <id>10.14569/IJACSA.2016.070279</id>
        <doi>10.14569/IJACSA.2016.070279</doi>
        <lastModDate>2016-03-01T13:30:02.3130000+00:00</lastModDate>
        
        <creator>Ahmed Markou</creator>
        
        <creator>Hassan Nounah</creator>
        
        <creator>Lahcen Mountassir</creator>
        
        <subject>Reflection acoustic microscope; Surface acoustic wave; Thin graded layer; Transfer matrix method; Reflectance function; Acoustic material signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This article presents a theoretical and numerical study, by means of the reflection acoustic microscope, of surface acoustic wave propagation at the interface formed by a thin layer and the coupling liquid (water). The thin layer presents a gradient in its acoustical parameters along its depth. A stable transfer matrix method is used to compute the reflectance function of the surface acoustic modes radiated into the coupling liquid. This function is required to calculate the theoretical acoustic material signature, which allows the phase velocity of these modes to be determined. In order to characterize the influence of the gradient on the acoustic material signature, several gradient functions are studied. The numerical results obtained show that the acoustic material signature can be used to characterize these profiles.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_79-Theoretical_and_numerical_characterization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sorting Pairs of Points Based on Their Distances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070278</link>
        <id>10.14569/IJACSA.2016.070278</id>
        <doi>10.14569/IJACSA.2016.070278</doi>
        <lastModDate>2016-03-01T13:30:02.2970000+00:00</lastModDate>
        
        <creator>Mohammad Farshi</creator>
        
        <creator>Abolfazl Poureidi</creator>
        
        <creator>Zorieh Soltani</creator>
        
        <subject>Sorting problem; Sorting distances</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Sorting data is one of the main problems in computer science; it has been studied extensively and is used in many settings. In several geometric problems, such as problems on point sets or lines in the plane or in Euclidean spaces of higher dimension, sorting pairs of points based on the distance between them is used. Using general sorting algorithms, sorting the n(n-1)/2 distances between n points can be done in O(n^2 log n) time. Of course, sorting Theta(n^2) independent numbers does not admit a faster solution, but since the numbers are dependent in this case, finding a faster algorithm, or showing that the problem in this case still requires Theta(n^2 log n) time, is interesting. In this paper, we try to answer this question.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_78-Sorting_Pairs_of_Points_Based_on_Their_Distances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Cities: A Survey on Security Concerns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070277</link>
        <id>10.14569/IJACSA.2016.070277</id>
        <doi>10.14569/IJACSA.2016.070277</doi>
        <lastModDate>2016-03-01T13:30:02.2500000+00:00</lastModDate>
        
        <creator>Sidra Ijaz</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Abid Khan</creator>
        
        <creator>Mansoor Ahmed</creator>
        
        <subject>Smart city, ICT, IoT, Information security, RFID, M2M, WSN, Smart grids, Biometrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A smart city is developed, deployed and maintained with the help of the Internet of Things (IoT). Smart cities have become an emerging phenomenon with rapid urban growth and advances in the field of information technology. However, the function and operation of a smart city are subject to the pivotal development of security architectures. The contribution made in this paper is twofold. Firstly, it aims to provide a detailed, categorized and comprehensive overview of the research on security problems and their existing solutions for smart cities. The categorization is based on several factors such as governance, socioeconomic and technological factors. This classification provides an easy and concise view of the security threats, vulnerabilities and available solutions for the respective technology areas, proposed over the period 2010-2015. Secondly, an IoT testbed for smart city architectures, i.e., SmartSantander, is also analyzed with respect to security threats and vulnerabilities to smart cities. The existing best practices regarding smart city security are discussed and analyzed with respect to their performance, and could be used by different stakeholders of smart cities.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_77-Smart_Cities_A_Survey_on_Security_Concerns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indoor Navigation System based on Passive RFID Transponder with Digital Compass for Visually Impaired People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070276</link>
        <id>10.14569/IJACSA.2016.070276</id>
        <doi>10.14569/IJACSA.2016.070276</doi>
        <lastModDate>2016-03-01T13:30:02.1570000+00:00</lastModDate>
        
        <creator>A. M. Kassim</creator>
        
        <creator>T. Yasuno</creator>
        
        <creator>H. Suzuki</creator>
        
        <creator>H. I. Jaafar</creator>
        
        <creator>M. S. M. Aras</creator>
        
        <subject>navigation system, passive RFID, digital compass, visually impaired people</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Conventionally, visually impaired people use a white cane or a guide dog to travel to a desired destination. However, they cannot easily identify their surroundings. Hence, this paper describes the development of a navigation system to guide visually impaired people in an indoor environment. To provide an efficient and user-friendly navigation tool, a navigation device is developed using passive radio frequency identification (RFID) transponders mounted on the floor, such as on tactile paving, to build an RFID network. The developed navigation system is equipped with a digital compass to help visually impaired people walk in the right direction, especially when turning. The idea of positioning and localization with the digital compass, and of direction guidance through voice commands, is implemented in this system. Experiments were also conducted, focused on calibrating the digital compass and on relocating visually impaired people back to the right route if they stray from the intended direction. In addition, a comparison between two subjects, a human and a mobile robot, is made to check the validity of the developed navigation system. As a result, the traveling speeds of the human and the mobile robot are obtained from the experiment. This project is beneficial to visually impaired people because the navigation device, designed with voice commands, will help them have a better, safer and more comfortable travel experience.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_76-Indoor_Navigation_System_based_on_Passive_RFID.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation and Evaluation of a Secure and Efficient Web Authentication Scheme using Mozilla Firefox and WAMP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070275</link>
        <id>10.14569/IJACSA.2016.070275</id>
        <doi>10.14569/IJACSA.2016.070275</doi>
        <lastModDate>2016-03-01T13:30:02.0500000+00:00</lastModDate>
        
        <creator>Yassine SADQI</creator>
        
        <creator>Ahmed ASIMI</creator>
        
        <creator>Younes ASIMI</creator>
        
        <creator>Zakaria TBATOU</creator>
        
        <creator>Azidine GUEZZAZ</creator>
        
        <subject>authentication; session management; web security; cryptographic primitives; computer security and privacy; security implementation; authentication scheme; Mozilla Firefox; WAMP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>User authentication and session management are two of the most critical aspects of computer security and privacy on the web. However, despite their importance, in practice authentication and session management are implemented through the use of vulnerable techniques. To solve this complex problem, we proposed a new authentication architecture, called StrongAuth. Later, we presented an improved version of StrongAuth that includes a secure session management mechanism based on public key cryptography and other cryptographic primitives. In this paper, we present an experimental implementation and evaluation of the proposed scheme to demonstrate its feasibility in real-world scenarios. Specifically, we realize a prototype consisting of two modules: (1) a registration module that implements the registration phase, and (2) an authentication module integrating both the mutual authentication and the session management phases of the proposed scheme. The experimental results show that, in comparison to traditional authentication and session management mechanisms, the proposed prototype has the lowest total runtime.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_75-Implementation_and_Evaluation_of_a_Secure_and_Efficient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Algorithms Based on Decomposition and Indicator Functions: State-of-the-art Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070274</link>
        <id>10.14569/IJACSA.2016.070274</id>
        <doi>10.14569/IJACSA.2016.070274</doi>
        <lastModDate>2016-03-01T13:30:01.9230000+00:00</lastModDate>
        
        <creator>Wali Khan Mashwani</creator>
        
        <creator>Abdellah Salhi</creator>
        
        <creator>Muhammad Asif jan</creator>
        
        <creator>Muhammad Sulaiman</creator>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Abdulmohsen Algarni</creator>
        
        <subject>Multi-objective optimization, Multi-objective Evolutionary Algorithms (MOEAs), Pareto Optimality, Multi-objective Memetic Algorithms (MOMAs), Pareto dominance based MOEAs, Decomposition based MOEAs, Indicator based MOEAs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In the last two decades, multiobjective optimization has become mainstream because of its wide applicability in a variety of areas such as engineering, management, the military and other fields. Multi-Objective Evolutionary Algorithms (MOEAs) play a dominant role in solving problems with multiple conflicting objective functions. They aim at finding a set of representative Pareto optimal solutions in a single run. Classical MOEAs fall broadly into three main groups: the Pareto dominance based MOEAs, the indicator based MOEAs and the decomposition based MOEAs. Those based on decomposition and indicator functions have shown higher search ability compared to the Pareto dominance based ones, possibly due to their firm theoretical background. This paper presents state-of-the-art MOEAs that employ decomposition and indicator functions as fitness evaluation techniques, along with other efficient techniques including those which use preference based information, local search optimizers, multiple ensemble search operators together with self-adaptive strategies, metaheuristics, mating restriction approaches, statistical sampling techniques, integration of fuzzy dominance concepts, and many other advanced techniques for dealing with diverse optimization and search problems.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_74-Evolutionary_Algorithms_Based_on_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Survey on Dynamic Graph Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070273</link>
        <id>10.14569/IJACSA.2016.070273</id>
        <doi>10.14569/IJACSA.2016.070273</doi>
        <lastModDate>2016-03-01T13:30:01.8770000+00:00</lastModDate>
        
        <creator>Aya Zaki</creator>
        
        <creator>Mahmoud Attia</creator>
        
        <creator>Doaa Hegazy</creator>
        
        <creator>Safaa Amin</creator>
        
        <subject>dynamic graphs; evolving networks; evolving graphs; temporal graphs; data management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Most critical real-world networks are continuously changing and evolving with time. Motivated by the growing importance and widespread impact of this type of network, the dynamic nature of these networks has gained a lot of attention. Because of their intrinsic and special characteristics, these networks are best represented by dynamic graph models. To cope with their evolving nature, the representation model must keep the historical information of the network along with its temporal dimension. Storing such an amount of data poses many problems from the perspective of dynamic graph data management. This survey provides an in-depth overview of problems related to dynamic graphs. A novel categorization and classification of state-of-the-art dynamic graph models is also presented in a systematic and comprehensive way. Finally, we discuss dynamic graph processing, including the output representation of its algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_73-Comprehensive_Survey_on_Dynamic_Graph_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Anti-Pattern-based Runtime Business Process Compliance Monitoring Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070272</link>
        <id>10.14569/IJACSA.2016.070272</id>
        <doi>10.14569/IJACSA.2016.070272</doi>
        <lastModDate>2016-03-01T13:30:01.8000000+00:00</lastModDate>
        
        <creator>Ahmed Barnawi</creator>
        
        <creator>Ahmed Awad</creator>
        
        <creator>Amal Elgammal</creator>
        
        <creator>Radwa Elshawi</creator>
        
        <creator>Abduallah Almalaise</creator>
        
        <creator>Sherif Sakr</creator>
        
        <subject>Business Process Management; Business Process Monitoring; Business Process Compliance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Today&#39;s dynamically changing business and compliance environment demands that enterprises continuously ensure their compliance with various laws, regulations and standards. Several business studies have concluded that compliance management is one of the main challenges companies face nowadays. Runtime compliance monitoring is of utmost importance for compliance assurance, as during the prior design-time compliance checking phase only a subset of the imposed compliance requirements can be statically checked, due to the absence of the required variable instantiation and contextual information. Furthermore, the fact that a BP model has been statically checked for compliance during design-time does not guarantee that the corresponding running BP instances are compliant, due to human and machine errors. In this paper, we present a generic proactive runtime BP compliance monitoring framework, BP-MaaS. The framework incorporates a wide range of expressive high-level graphical compliance patterns for the abstract specification of runtime constraints. Compliance monitoring is achieved using anti-patterns, a novel mechanism that is agnostic towards any underlying monitoring execution technology. As a proof-of-concept, complex event processing (CEP) technology is adopted as one of the possible realizations of the monitoring engine of the framework. An integrated tool-suite has been developed as an instantiation artifact of BP-MaaS, and the approach is validated in several directions, including internal validity and two real-life case studies from the banking domain.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_72-An_Anti_Pattern_based_Runtime_Business_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Abnormalities Detection in Digital Mammography Using Template Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070271</link>
        <id>10.14569/IJACSA.2016.070271</id>
        <doi>10.14569/IJACSA.2016.070271</doi>
        <lastModDate>2016-03-01T13:30:01.7200000+00:00</lastModDate>
        
        <creator>Ahmed M. Farrag</creator>
        
        <subject>Classification, Detection, Image Processing, Digital Mammography, Breast Cancer, Computer Aided Detection (CAD), Template Matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Breast cancer affects 1 in 8 women in the United States. Early detection and diagnosis are key to recovery. Computer-Aided Detection (CAD) of breast cancer helps decrease morbidity and mortality rates. In this study we apply template matching as a method for breast cancer detection to a novel data set comprised of mammograms annotated according to ground truth. Performance is evaluated in terms of the Area Under the Receiver Operating Characteristic Curve (Area Under ROC) and the Free-response ROC.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_71-Abnormalities_Detection_in_Digital_Mammography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted G1-Multi-Degree Reduction of B&#233;zier Curves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070270</link>
        <id>10.14569/IJACSA.2016.070270</id>
        <doi>10.14569/IJACSA.2016.070270</doi>
        <lastModDate>2016-03-01T13:30:01.6600000+00:00</lastModDate>
        
        <creator>Abedallah Rababah</creator>
        
        <creator>Salisu Ibrahim</creator>
        
        <subject>B&#233;zier curves; multiple degree reduction; G-continuity; geometric continuity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this paper, weighted G1-multi-degree reduction of B&#233;zier curves is considered. Degree reduction is used to write a given B&#233;zier curve of degree n as a B&#233;zier curve of degree m, m &lt; n. Exact degree reduction is not possible, and, therefore, approximation methods are used. The weight function w(t) = 2t(1 - t), t in [0, 1], is used with the L2-norm in multi-degree reduction with G1-continuity at the end points of the curve. Since we consider boundary conditions, this weight function improves the approximation in the middle of the curve. Numerical results and comparisons show that the proposed method produces smaller errors and outperforms existing methods.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_70-Weighted_G1_Multi_Degree_Reduction_of_Bezier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Classifiers for Multi-Way Sentiment Analysis of Arabic Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070269</link>
        <id>10.14569/IJACSA.2016.070269</id>
        <doi>10.14569/IJACSA.2016.070269</doi>
        <lastModDate>2016-03-01T13:30:01.6270000+00:00</lastModDate>
        
        <creator>Mahmoud Al-Ayyoub</creator>
        
        <creator>Aya Nuseir</creator>
        
        <creator>Ghassan Kanaan</creator>
        
        <creator>Riyad Al-Shalabi</creator>
        
        <subject>multi-way sentiment analysis, hierarchical classifiers, support vector machine, decision tree, naive bayes, k-nearest neighbor, mean squared error</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Sentiment Analysis (SA) is one of the hottest fields in data mining (DM) and natural language processing (NLP). The goal of SA is to extract the sentiment conveyed in a certain text based on its content. While most current works focus on the simple problem of determining whether the sentiment is positive or negative, Multi-Way Sentiment Analysis (MWSA) focuses on sentiments conveyed through a rating or scoring system (e.g., a 5-star scoring system). In such scoring systems, the sentiments conveyed in two reviews of close scores (such as 4 stars and 5 stars) can be very similar, creating an added challenge compared to traditional SA. One intuitive way of handling this challenge is via a divide-and-conquer approach, where the MWSA problem is divided into a set of sub-problems, allowing the use of customized classifiers to differentiate between reviews of close scores. A hierarchical classification structure can be used with this approach, where each node represents a different classification sub-problem and the decision from it may lead to the invocation of another classifier. In this work, we show how this divide-and-conquer hierarchical structure of classifiers can generate better results than existing flat classifiers for the MWSA problem. We focus on the Arabic language for several reasons, such as the importance of this language and the scarcity of prior works and available tools for it. To the best of our knowledge, very few papers have been published on MWSA of Arabic reviews. One notable work is that of Ali and Atiya, in which the authors collected a large-scale Arabic Book Reviews (LABR) dataset and made it publicly available. Unfortunately, the baseline experiments on this dataset had very low accuracy. We present two different hierarchical structures and compare their accuracies with the flat structure using different core classifiers. The comparison is based on standard accuracy measures such as precision and recall, in addition to the mean squared error (MSE) as a more accurate measure, given the fact that not all misclassifications are the same. The results show that, in general, hierarchical classifiers give significant improvements (of more than 50% in certain cases) over flat classifiers.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_69-Hierarchical_Classifiers_for_Multi_Way_Sentiment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cloud-Based Platform for Democratizing and Socializing the Benchmarking Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070268</link>
        <id>10.14569/IJACSA.2016.070268</id>
        <doi>10.14569/IJACSA.2016.070268</doi>
        <lastModDate>2016-03-01T13:30:01.5800000+00:00</lastModDate>
        
        <creator>Fuad Bajaber</creator>
        
        <creator>Amin Shafaat</creator>
        
        <creator>Omar Batarfi</creator>
        
        <creator>Radwa Elshawi</creator>
        
        <creator>Abdulrahman Altalhi</creator>
        
        <creator>Ahmed Barnawi</creator>
        
        <creator>Sherif Sakr</creator>
        
        <subject>Cloud Computing; Benchmarking; Software-as-a-Service, Social Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Performance evaluation, benchmarking and reproducibility represent significant aspects of evaluating the practical impact of scientific research outcomes in the Computer Science field. In spite of all the benefits (e.g., increased visibility, boosted impact, improved research quality) that can be obtained from conducting comprehensive and extensive experimental evaluations or providing reproducible software artifacts and detailed descriptions of experimental setups, the effort required to achieve these goals remains prohibitive. In this article, we present the design and implementation details of the Liquid Benchmarking platform, a social and cloud-based platform for democratizing and socializing the software benchmarking process. In particular, the platform facilitates sharing experimental artifacts (computing resources, datasets, software implementations, benchmarking tasks) as services, so that end users can easily design, mash up and execute experiments and visualize the experimental results with zero installation or configuration effort. Moreover, the social features of the platform enable users to share and provide feedback on the results of executed experiments in a form that guarantees a transparent scientific crediting process. Finally, we present four benchmarking case studies that have been realized via the Liquid Benchmarking platform in the following domains: XML compression techniques, graph indexing and querying techniques, string similarity join algorithms, and reverse k nearest neighbors algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_68-A_Cloud_Based_Platform_for_Democratizing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Big Data Analytics: Challenges, Open Research Issues and Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070267</link>
        <id>10.14569/IJACSA.2016.070267</id>
        <doi>10.14569/IJACSA.2016.070267</doi>
        <lastModDate>2016-03-01T13:30:01.5330000+00:00</lastModDate>
        
        <creator>D. P. Acharjya</creator>
        
        <creator>Kauser Ahmed P</creator>
        
        <subject>Big data analytics; Hadoop; Massive data; Structured data; Unstructured data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A huge repository of terabytes of data is generated each day by modern information systems and digital technologies such as the Internet of Things and cloud computing. Analysis of these massive data requires considerable effort at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The basic objective of this paper is to explore the potential impact of big data challenges, open research issues, and the various tools associated with it. As a result, this article provides a platform to explore big data at numerous stages. Additionally, it opens a new horizon for researchers to develop solutions based on the challenges and open research issues.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_67-A_Survey_on_Big_Data_Analytics_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Clustering Using Fixed Sink Mobility for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070266</link>
        <id>10.14569/IJACSA.2016.070266</id>
        <doi>10.14569/IJACSA.2016.070266</doi>
        <lastModDate>2016-03-01T13:30:01.5030000+00:00</lastModDate>
        
        <creator>Muhammad Ali Khan</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Babar Nazir</creator>
        
        <creator>Noor ul Amin</creator>
        
        <creator>Shaukat Mehmood</creator>
        
        <creator>Kaleem Habib</creator>
        
        <subject>Wireless Sensors Network; Mobile Sink; Clustering; Sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this research, an efficient data gathering scheme is presented that uses a mobile sink as data collector and clustering as sensor organizer for randomly deployed sensors in the sensing field of a wireless sensor network. The scheme not only extends the network lifetime through the clustering process but also improves the data gathering mechanism through an efficient and simplified mobile sink movement scheme. Cluster head selection is based on both energy and data weight; each cluster head gathers all the data from the nodes within its cluster and then delivers it to the sink. A single mobile sink, which visits the cluster heads along a defined path, gathers the data periodically. The scheme is organized through the “Mobile Sink based Data Gathering Protocol (MSDGP)”, which combines single-message energy-efficient clustering with the data gathering process. For performance evaluation, the protocol is extensively simulated for different performance metrics, namely residual energy consumption, number of dead nodes over a variable number of rounds, and network lifetime. The results prove that MSDGP is successful in achieving the defined objectives of energy efficiency and is capable of extending the network lifetime by increasing the number of rounds. Therefore, the proposed protocol is well suited to scenarios where sensor nodes generate variable amounts of data.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_66-Energy_Efficient_Clustering_Using_Fixed_Sink_Mobility_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Issues Elicitation and Analysis of CMMI Based Process Improvement in Developing Countries Theory and Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070265</link>
        <id>10.14569/IJACSA.2016.070265</id>
        <doi>10.14569/IJACSA.2016.070265</doi>
        <lastModDate>2016-03-01T13:30:01.4870000+00:00</lastModDate>
        
        <creator>Shahbaz Ahmed Khan Ghayyur</creator>
        
        <creator>Ahmed Noman Latif</creator>
        
        <creator>Muhammad Daud Awan</creator>
        
        <creator>Malik Sikandar Hayat Khiyal</creator>
        
        <subject>CMMI-based SPI; process improvement; inefficiency; Pakistani software industry</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Researchers have tried to find the pattern of rise and fall of the Pakistani software industry and the reasons for what is going wrong with this industry. Different studies have witnessed that the software industry in Pakistan is not following international standards. Another surprising fact, observed in past analyses, is that companies which initiated a CMMI-based SPI program have not achieved the higher levels of CMMI in the past three years, which is an alarming sign of the declining attitude of the industry. Therefore, it has become mandatory to look for the weak points, critical barriers, or issues which are actually the reason for this slow progress of CMMI-based SPI in the Pakistani software industry. This study has identified that the issues for CMMI-based SPI in Pakistan are much different from those reported in the literature. Giving proper attention to the root of the problem can help solve many of the problems in this regard.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_65-Issues_Elicitation_and_Analysis_of_CMMI_Based_Process_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Investigation of Predicting Fault Count, Fix Cost and Effort Using Software Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070264</link>
        <id>10.14569/IJACSA.2016.070264</id>
        <doi>10.14569/IJACSA.2016.070264</doi>
        <lastModDate>2016-03-01T13:30:01.4700000+00:00</lastModDate>
        
        <creator>Raed Shatnawi</creator>
        
        <creator>Wei Li</creator>
        
        <subject>Software metrics; fault prediction; fix cost; fix effort; regression analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Software fault prediction is important in the software engineering field. Fault prediction helps engineers manage their efforts by identifying the most complex parts of the software where errors concentrate. Researchers usually study fault-proneness at the module level because most modules have zero faults while a minority contain most of the faults in a system. In this study, we present methods and models for the prediction of fault count, fault-fix cost, and fault-fix effort, and compare the effectiveness of different prediction models. This research proposes using a set of procedural metrics to predict three fault measures: fault count, fix cost, and fix effort. Five regression models are used to predict the three fault measures. The study reports on three data sets published by NASA. The models for each fault measure are evaluated using the Root Mean Square Error. A comparison among fault measures is conducted using the Relative Absolute Error. The models show promising results and provide a practical guide to help software engineers allocate resources during software testing and maintenance. The fix-cost models show equal or better performance than the fault count and effort models.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_64-An_Empirical_Investigation_of_Predicting_Fault_Count.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Transformer for LLC Resonant Inverter for Induction Heating Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070263</link>
        <id>10.14569/IJACSA.2016.070263</id>
        <doi>10.14569/IJACSA.2016.070263</doi>
        <lastModDate>2016-03-01T13:30:01.4400000+00:00</lastModDate>
        
        <creator>Amira ZOUAOUI-KHEMAKHEM</creator>
        
        <creator>Hamed belloumi</creator>
        
        <creator>Ferid KOURDA</creator>
        
        <subject>planar transformer; induction heating; Finite Element Method; skin effect; diffusive representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A new trend in power converters is to design a planar transformer that aims for a low profile. However, at high frequency, the planar transformer’s AC losses become significant due to the proximity and skin effects. In this paper, the most important factors in planar transformer (PT) design, including winding loss, core loss, leakage inductance, and stray capacitance, have individually been investigated. The tradeoffs among these factors have to be analysed to achieve optimal parameters. We show a strategy to reduce losses in the primary coil of the transformer. The loss analysis of the PT was verified by Finite Element Method (FEM) simulations (ANSOFT MAXWELL FIELD SIMULATOR 2D) and could be utilized to optimize the transformer design procedure. Finally, the proposed PT has been integrated into an LLC resonant inverter for an induction heating application to test it at high signal levels.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_63-An_Improved_Transformer_for_LLC_Resonant_Inverter_for_Induction_Heating_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android Malware Detection &amp; Protection: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070262</link>
        <id>10.14569/IJACSA.2016.070262</id>
        <doi>10.14569/IJACSA.2016.070262</doi>
        <lastModDate>2016-03-01T13:30:01.4100000+00:00</lastModDate>
        
        <creator>Saba Arshad</creator>
        
        <creator>Munam Ali Shah</creator>
        
        <creator>Abid Khan</creator>
        
        <creator>Mansoor Ahmed</creator>
        
        <subject>Android; Permissions; Signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in a significant increase in the number of malware programs compared with previous years. There exist many antimalware programs designed to protect users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we analyze Android malware and the penetration techniques used for attacking systems, as well as the antivirus programs that act against malware to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods, aiming to provide an easy and concise view of malware detection and protection mechanisms and to deduce their benefits and limitations. Secondly, we forecast Android market trends up to the year 2018 and provide a unique hybrid security solution that takes into account both the static and dynamic analysis of an Android application.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_62-Android_Malware_Detection_Protection_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Robust Control Strategies for a Dfig-Based Wind Turbine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070261</link>
        <id>10.14569/IJACSA.2016.070261</id>
        <doi>10.14569/IJACSA.2016.070261</doi>
        <lastModDate>2016-03-01T13:30:01.3930000+00:00</lastModDate>
        
        <creator>Mohamed BENKAHLA</creator>
        
        <creator>Rachid TALEB</creator>
        
        <creator>Zinelaabidine BOUDJEMA</creator>
        
        <subject>Wind turbine (WT); doubly fed induction generator (DFIG); sliding mode controller (SMC); PI controller; adaptive fuzzy logic controller (AFLC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Conventional vector control configurations, which use a proportional-integral (PI) regulator for the powers of the driven DFIG, have some drawbacks such as parameter tuning difficulties, mediocre dynamic performance, and reduced robustness. Therefore, based on analysis of the DFIG model fed by a direct AC-AC converter, two nonlinear algorithms, sliding mode and adaptive fuzzy logic, are used to independently control the active and reactive powers provided by the stator side of the DFIG to the grid. Their respective performances are compared with the conventional PI controller regarding reference tracking, response to sudden speed variation, and robustness against machine parameter variations.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_61-Comparative_Study_of_Robust_Control_Strategies_for_a_Dfig_Based_Wind_Turbine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud-Based Processing on Data Science for Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070260</link>
        <id>10.14569/IJACSA.2016.070260</id>
        <doi>10.14569/IJACSA.2016.070260</doi>
        <lastModDate>2016-03-01T13:30:01.3630000+00:00</lastModDate>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>A Min Tjoa</creator>
        
        <creator>Mardhani Riasetiawan</creator>
        
        <subject>component; cloud-based processing; visualization; data science; big data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Big data processing and visualization pose challenges in both method and process. The volume, variety, velocity, and veracity of big data need to be handled in order to visualize the data. This research investigates, designs, and develops cloud-based processing for data science visualization. The cloud-based processing uses data management that interacts with Google Drive, communicates with the processing tools that deliver the system, and visualizes the results with Google Fusion. The research uses financial banking data from Indonesia, analyzed on the basis of geolocation, transaction flows, and networks. The research process consists of data mapping, data tagging, data manipulation, and data visualization, resulting in the data science visualization. The cloud-based processing demonstrates that the data science process can be carried out in the cloud. The research contributes a cloud-based approach to handling data science visualization for big data.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_60-Cloud_Based_Processing_on_Data_Science_for_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ensemble of Fine-Tuned Heterogeneous Bayesian Classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070259</link>
        <id>10.14569/IJACSA.2016.070259</id>
        <doi>10.14569/IJACSA.2016.070259</doi>
        <lastModDate>2016-03-01T13:30:01.3470000+00:00</lastModDate>
        
        <creator>Amel Alhussan</creator>
        
        <creator>Khalil El Hindi</creator>
        
        <subject>Ensemble classifier; Bayesian Network (BN) classifiers; Fine-tuned BN classifiers; Stacking; Diversity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Bayesian network (BN) classifiers use different structures and different training parameters, which leads to diversity in classification decisions. This work empirically shows that building an ensemble of several fine-tuned BN classifiers increases the overall classification accuracy. The accuracy of the constituent classifiers is achieved by fine-tuning each classifier, and the diversity is achieved by using different BN classifiers. The proposed ensemble combines a Naive Bayes (NB) classifier, five different models of Tree Augmented Naive Bayes (TAN), and four different models of Bayesian Augmented Naive Bayes (BAN). This work also proposes a new Distance-based Diversity Measure (DDM) and uses it to analyze the diversity of the ensembles. The ensemble of fine-tuned classifiers achieves better average classification accuracy than any of its constituent classifiers or the ensemble of un-tuned classifiers. Moreover, the empirical experiments show significantly better results for many data sets.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_59-An_Ensemble_of_Fine_Tuned_Heterogeneous_Bayesian_Classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constructing Relationship Between Software Metrics and Code Reusability in Object Oriented Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070258</link>
        <id>10.14569/IJACSA.2016.070258</id>
        <doi>10.14569/IJACSA.2016.070258</doi>
        <lastModDate>2016-03-01T13:30:01.3170000+00:00</lastModDate>
        
        <creator>Manoj H.M</creator>
        
        <creator>Dr. Nandakumar A.N</creator>
        
        <subject>Analytical Modeling; Code Reusability; Design Pattern; Software Methodologies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The role of design patterns, in the form of software metrics and internal code architecture for object-oriented design, is critical in software engineering with respect to production cost efficiency. This paper discusses code reusability, a frequently exercised cost-saving methodology in IT production. After reviewing the existing literature on software metrics, we found that very few studies incline towards code reusability. Hence, we developed a simple analytical model that establishes a relationship between the design components of standard software metrics and code reusability, using case studies of three software projects (a Customer Relationship Management project, a Supply Chain Management project, and an Enterprise Relationship Management project). We also validate our proposal using a stochastic Markov model, finding that the proposed system can extract significant information on maximized values of code reusability under increasing levels of uncertainty in software project methodologies.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_58-Constructing_Relationship_Between_Software_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Throughput and Delay by Signaling Modification in Integrated 802.11 and 3G Heterogeneous Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070257</link>
        <id>10.14569/IJACSA.2016.070257</id>
        <doi>10.14569/IJACSA.2016.070257</doi>
        <lastModDate>2016-03-01T13:30:01.2830000+00:00</lastModDate>
        
        <creator>Majid Fouladian</creator>
        
        <creator>Mohammad Ali Pourmina</creator>
        
        <creator>Faramarz Hendessi</creator>
        
        <subject>Registration; WLAN; 3G network; Vertical handover</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Current trends show that UMTS networks and WLANs will co-exist and work together to support more users with higher data rate services over a wider area. However, this integration invokes many challenges, such as mobility management and handoff decision making. Vertical handoff is the switching process between heterogeneous wireless networks in a 3G/WLAN network. Vertical handoffs often fail due to the abrupt degradation of the WLAN signal strength. In this paper, a new vertical handoff method is proposed to decrease the number of signaling and registration processes, to examine special conditions such as WLAN black holes, and to eliminate disconnection effects. By estimating user locations relative to the WLAN position, interface on-times are reduced, which decreases the power consumption of the system. To demonstrate the efficiency of the proposed approach, the performance and delay results are simulated.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_57-Improving_Throughput_and_Delay_by_Signaling_Modification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Variant of Genetic Algorithm Based Categorical Data Clustering for Compact Clusters and an Experimental Study on Soybean Data for Local and Global Optimal Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070256</link>
        <id>10.14569/IJACSA.2016.070256</id>
        <doi>10.14569/IJACSA.2016.070256</doi>
        <lastModDate>2016-03-01T13:30:01.2530000+00:00</lastModDate>
        
        <creator>Abha Sharma</creator>
        
        <creator>R. S. Thakur</creator>
        
        <subject>Categorical data; Genetic Algorithm; Population; Population size</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Almost all partitioning clustering algorithms get stuck in locally optimal solutions. Using Genetic Algorithms (GA), globally optimal results can be found. This piece of work offers and investigates a new variant of the GA-based k-Modes clustering algorithm for categorical data. A statistical analysis has been performed on a popular categorical dataset, which shows that the user-specified cluster centres stick at a locally optimal solution in the k-Modes algorithm even in all the higher iterations, and that the proposed algorithm overcomes this problem of local optima. To the best of our knowledge, such a comparison is reported here for the first time for the case of categorical data. The obtained results show that the proposed algorithm is better than the conventional k-Modes algorithm in terms of optimal solutions and the within-cluster variation measure.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_56-A_Variant_of_Genetic_Algorithm_Based_Categorical_Data_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Shear Wave Speed Estimation for Agar-Gelatine Phantom</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070255</link>
        <id>10.14569/IJACSA.2016.070255</id>
        <doi>10.14569/IJACSA.2016.070255</doi>
        <lastModDate>2016-03-01T13:30:01.2370000+00:00</lastModDate>
        
        <creator>Hassan M. Ahmed</creator>
        
        <creator>Nancy M. Salem</creator>
        
        <creator>Ahmed F. Seddik</creator>
        
        <creator>Mohamed I. El Adawy</creator>
        
        <subject>Elasticity Imaging; Acoustic radiation force impulse (ARFI); Shear wave elasticity imaging; Soft tissue stiffness imaging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Conventional diagnostic ultrasound imaging is widely used. Although it makes the differences in the soft tissues’ echogenicities apparent and clear, it fails to describe and estimate the soft tissue’s mechanical properties: it cannot portray properties such as elasticity and stiffness. Estimating the mechanical properties increases the chances of identifying lesions or any pathological changes, and physicians now characterize the tissue’s mechanical properties as diagnostic metrics. Estimating the tissue’s mechanical properties is achieved by applying a force on the tissue and calculating the resulting shear wave speed. Due to the difficulty of calculating the shear wave speed precisely inside the tissue, it is estimated by analyzing ultrasound images of the tissue at a very high frame rate. In this paper, the shear wave speed is estimated using finite element analysis. A model is constructed to simulate the tissue’s mechanical properties. For a generalized soft tissue model, an agar-gelatine model is used because it has properties similar to those of soft tissue. A point force is applied at the center of the proposed model, causing a deformation. Peak displacements are tracked along the lateral dimension of the model to estimate the speed of the propagating shear wave using the Time-To-Peak (TTP) displacement method. Experimental results have shown that the estimated shear wave speed is 5.2 m/sec, while the value calculated from the shear wave speed equation is about 5.7 m/sec; this means that our speed estimation system’s accuracy is about 91%, which is a reasonable shear wave speed estimation accuracy at a lower computational cost than other tracking methods.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_55-On_Shear_Wave_Speed_Estimation_for_Agar_Gelatine_Phantom.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Platform to Support the Product Servitization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070254</link>
        <id>10.14569/IJACSA.2016.070254</id>
        <doi>10.14569/IJACSA.2016.070254</doi>
        <lastModDate>2016-03-01T13:30:01.2070000+00:00</lastModDate>
        
        <creator>Giovanni Di Orio</creator>
        
        <creator>Oliviu Matei</creator>
        
        <creator>Sebastian Scholze</creator>
        
        <creator>Dragan Stokic</creator>
        
        <creator>Jos&#233; Barata</creator>
        
        <creator>Claudio Cenedese</creator>
        
        <subject>Product-Services System; Servitization; Service-Oriented Architecture; Ambient Intelligence; Context Awareness; Data Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Nowadays, manufacturers are forced to shift from their traditional product-manufacturing paradigm to the goods-services continuum by providing integrated combinations of products and services. The adoption of service-based strategies is the natural consequence of the higher pressure these companies are facing in global markets, especially due to the presence of competitors operating in low-wage regions. By betting on services, or more specifically on servitization, manufacturing companies are moving up the value chain in order to shift the competition from costs to sophistication and innovation. The proliferation of new emerging technologies and paradigms, together with a wider dissemination of information technology (IT), can significantly improve the capability of manufacturing companies to infuse services into their own products. The authors present a knowledge-based and data-driven platform that can support the design and development of Product Extended by Services (PESs) solutions.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_54-A_Platform_to_Support_the_Product_Servitization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CAT5:A Tool for Measuring the Maturity Level of Information Technology Governance Using COBIT 5 Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070253</link>
        <id>10.14569/IJACSA.2016.070253</id>
        <doi>10.14569/IJACSA.2016.070253</doi>
        <lastModDate>2016-03-01T13:30:01.1770000+00:00</lastModDate>
        
        <creator>Souha&#239;l El ghazi El Houssa&#239;ni</creator>
        
        <creator>Karim Youssfi</creator>
        
        <creator>Jaouad Boutahar</creator>
        
        <subject>COBIT5; IT Governance; Process Capability Model; Maturity Model; CAT5; Process Assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Companies increasingly tend to automate their operational and organizational activities; therefore, investment in information technology (IT) continues to increase every year. However, good governance that can ensure the alignment of IT with business strategy and realize benefits from IT investments has not always followed this increase. Measurement of IT governance is then required as a basis for the continuous improvement of IT services. This study is aimed at producing a tool, CAT5, to measure the maturity level of IT governance, thus facilitating the process of improving IT services. CAT5 is based on the COBIT 5 framework, and the design uses the Unified Modeling Language. Through the stages of information system development, this research results in an application for measuring the maturity level of IT governance that can be used in assessing existing IT governance.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_53-CAT5A_Tool_for_Measuring_the_Maturity_Level_of_Information_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Criteria Decision Method in the DBSCAN Algorithm for Better Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070252</link>
        <id>10.14569/IJACSA.2016.070252</id>
        <doi>10.14569/IJACSA.2016.070252</doi>
        <lastModDate>2016-03-01T13:30:01.1430000+00:00</lastModDate>
        
        <creator>Abdellah IDRISSI</creator>
        
        <creator>Altaf ALAOUI</creator>
        
        <subject>Data mining; Clustering; Density-based clustering; Multiple-criteria decision-making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This paper presents a solution based on unsupervised classification for multiple-criteria data analysis problems, where the characteristics and the number of clusters are not predefined and the objects of the data sets are described by several criteria, which can be contradictory, of different natures, and of varied weights. This work combines two different tracks of research: unsupervised classification, which is one of the data mining techniques, and multi-criteria clustering, which is part of the field of multiple-criteria decision-making. Experimental results on different data sets are presented to show that the clusters formed using the improved DBSCAN algorithm, which incorporates a similarity model, are dense and accurate.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_52-A_Multi_Criteria_Decision_Method_in_the_DBSCAN_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Identity Approaches Evaluation for Anonymous Communication in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070251</link>
        <id>10.14569/IJACSA.2016.070251</id>
        <doi>10.14569/IJACSA.2016.070251</doi>
        <lastModDate>2016-03-01T13:30:01.1130000+00:00</lastModDate>
        
        <creator>Ibrahim A.Gomaa</creator>
        
        <creator>Emad Abd-Elrahman</creator>
        
        <creator>Mohamed Abid</creator>
        
        <subject>Cloud Environments; Virtual Identity; Performance Evaluation and Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Since the beginning of the Cloud computing era, Identity Management has been considered a permanent challenge, especially for hybrid IT environments that allow many users’ applications to share the same data center through server virtualization. This paper introduces a complete study of identity forms in different domains and applications, along with a performance evaluation of new approaches to Virtual Identity. Virtual Identity, a new term used in virtual environments, was introduced to enhance anonymous communication in such complex networks. Based on the work analysis and motivations gathered through an online survey, two techniques were used to implement the Virtual Identity: Identity Based Encryption (IBE) and Pseudonym Based Encryption (PBE). Both techniques were validated using the MIRACL library for security algorithms. In addition, the performance of both approaches was evaluated under different configurations and network conditions through OPNET Modeler. The results showed the impact of the number of cloud users and their locations (either local or remote) on the application response time in cloud environments using the proposed virtual identities. Moreover, the Application Characterization Environment whiteboard was used to simulate the overall flow of data across different tiers, from the start to the end of the application task for Virtual Identity creation. The results and outcomes for both methodologies showed that they are a suitable paradigm for achieving a high degree of security and efficiency in such sophisticated network access to many online services and applications.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_51-Virtual_Identity_Approaches_Evaluation_for_Anonymous_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Quality of Vietnamese Text Summarization Based on Sentence Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070250</link>
        <id>10.14569/IJACSA.2016.070250</id>
        <doi>10.14569/IJACSA.2016.070250</doi>
        <lastModDate>2016-03-01T13:30:01.0800000+00:00</lastModDate>
        
        <creator>Ha Nguyen Thi Thu</creator>
        
        <creator>Tu Nguyen Ngoc</creator>
        
        <creator>Cuong Nguyen Ngoc</creator>
        
        <creator>Hiep Xuan Huynh</creator>
        
        <subject>Sentence compression; topic modeling; text summarization; Grid model; n-grams; dynamic programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Sentence compression is a valuable task in the framework of text summarization. In previous work, a sentence is reduced by removing redundant words or phrases from the original sentence while trying to retain its information. In this paper, we propose a new method that uses a Grid Model and dynamic programming to compute n-grams for generating the best sentence compression. The reduced sentences are then combined into a text summary. The experimental results show that our method is effective and that the resulting text is grammatical, coherent, and concise.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_50-Improving_Quality_of_Vietnamese_Text_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Loss Functions for Margin Based Robust Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070249</link>
        <id>10.14569/IJACSA.2016.070249</id>
        <doi>10.14569/IJACSA.2016.070249</doi>
        <lastModDate>2016-03-01T13:30:01.0500000+00:00</lastModDate>
        
        <creator>Syed Abbas Ali</creator>
        
        <creator>Maria Andleeb</creator>
        
        <creator>Raheela Asif</creator>
        
        <creator>Danish-ur-Rehman</creator>
        
        <subject>Loss Functions; Statistical Learning; Automatic Speech Recognition (ASR); SVM Classifiers; Soft Margin Estimation (SME)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Margin-based model estimation methods are applied in speech recognition to enhance the generalization capability of the acoustic model by increasing the margin. An important aspect of margin-based acoustic model parameter estimation is that the models are derived from the soft margin concept, with the hinge loss function used in SVM serving as the loss function to attain enhanced recognition performance. In this study, the performance of three loss functions (logistic, Savage, and sigmoid) is evaluated in the presence of white noise, pink noise, and brown noise, with and without SVM classifiers, to analyze the impact of noise on each loss function in comparison with the hinge loss function used in SVM for parameter estimation in margin-based acoustic models. Experimental results show that, for isolated digits (0-9), the hinge loss function is significantly affected by pink and white noise in both pre-conditioned and recorded data samples, compared with brown noise. The hinge loss function also shows serious anomalies relative to the Savage and sigmoid losses in terms of performance, while the sigmoid loss function provides exceptionally good results in terms of percentage error under all prescribed conditions.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_49-Performance_Evaluation_of_Loss_Functions_for_Margin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding a Co-Evolution Model of Business and IT for Dynamic Business Process Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070248</link>
        <id>10.14569/IJACSA.2016.070248</id>
        <doi>10.14569/IJACSA.2016.070248</doi>
        <lastModDate>2016-03-01T13:30:01.0330000+00:00</lastModDate>
        
        <creator>Muhammad Asif Khan</creator>
        
        <subject>alignment; business-IT gap; business process; co-evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Organizations adapt existing business processes in order to remain competitive, but a change in one process affects other processes as well. To support the required change, suitable technologies must be provided so that the business can run smoothly and efficiently. Since requirements change frequently in a dynamic business environment, it is difficult to update the underlying technologies to support changes in business processes. This creates a gap between business and information technology (IT) that directly affects the whole business. In this study, requirements for a dynamic business and a co-evolution model are presented that may bring both entities closer together, bridging the gap in a dynamic business organization. The co-evolution model has been applied in a financial institution, where its feasibility and viability have been observed.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_48-Understanding_a_Co_Evolution_Model_of_Business_and_IT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Enhanced Arabchat: An Arabic Conversational Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070247</link>
        <id>10.14569/IJACSA.2016.070247</id>
        <doi>10.14569/IJACSA.2016.070247</doi>
        <lastModDate>2016-03-01T13:30:01.0030000+00:00</lastModDate>
        
        <creator>Mohammad Hijjawi</creator>
        
        <creator>Zuhair Bandar</creator>
        
        <creator>Keeley Crockett</creator>
        
        <subject>Artificial Intelligence; Conversational Agents and Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The Enhanced ArabChat is a complement to the previous version of ArabChat. This paper details the development of enhancements to a novel and practical conversational agent for the Arabic language, called the “Enhanced ArabChat”. A conversational agent is a computer program that attempts to simulate conversation between machine and human. Lessons learned from evaluating the previous version of ArabChat revealed two major issues that negatively affected its performance. Firstly, a technique is needed to distinguish between question and non-question utterances in order to reply with a more suitable response depending on the utterance’s type. Secondly, a technique is needed to handle an utterance that targets many topics and therefore requires firing many rules at the same time. This paper covers these enhancements to improve ArabChat’s performance. A real experiment has been conducted at Applied Science University in Jordan, where the Enhanced ArabChat served as an information point advisor for native Arabic-speaking students.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_47-The_Enhanced_Arabchat_An_Arabic_Conversational_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Defense Against Packet Drop Attack in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070246</link>
        <id>10.14569/IJACSA.2016.070246</id>
        <doi>10.14569/IJACSA.2016.070246</doi>
        <lastModDate>2016-03-01T13:30:00.9870000+00:00</lastModDate>
        
        <creator>Tariq Ahamad</creator>
        
        <subject>MANET; gray hole; DoS; packet drop; security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A MANET is a temporary network set up for a specific task, and with the enormous growth of MANETs it is becoming both important and challenging to protect such networks from attacks and other threats. The packet drop attack, or gray hole attack, is the easiest way to mount a denial of service in these dynamic networks. In this attack, the malicious node advertises itself as the shortest path, receives all the packets, and drops selected packets in order to deliver an incorrect service to the user. It is a specific kind of attack that also hides this malicious activity from detection by the network and the user. In this article, I propose an efficient four-step technique that confirms this attack can be detected and defended against with minimal effort and resource consumption.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_46-Detection_and_Defense_Against_Packet_Drop_Attack_in_MANET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Diffusion Modeling and Vulnerability Quantification on Japanese Human Mobility Network from Complex Network Analysis Point of View</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070245</link>
        <id>10.14569/IJACSA.2016.070245</id>
        <doi>10.14569/IJACSA.2016.070245</doi>
        <lastModDate>2016-03-01T13:30:00.9730000+00:00</lastModDate>
        
        <creator>Kiyotaka Ide</creator>
        
        <creator>Hiroshi Sato</creator>
        
        <creator>Tran Quang Hoang Anh</creator>
        
        <creator>Akira Namatame</creator>
        
        <subject>Human mobility network; Risk analysis; Complex network analysis; SIS model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Human mobility networks are vital infrastructure in modern social systems. Many efforts have been made to keep human mobility flows healthy in order to maintain the sustainable development of today’s well-connected society. However, this inter-connectivity sometimes causes unintended diffusion and amplification of intrinsic risks, which is difficult to forecast because of the complexity of the underlying networks. Modeling and simulation of risk diffusion in human mobility networks are therefore suggestive and meaningful. Moreover, recent improvements in the availability of individual-level human mobility data and in high-performance computing capabilities enable data-driven approaches. In this paper, the risk diffusion dynamics are modeled on the basis of the SIS epidemic model, and a vulnerability index is defined to quantify how easily each node suffers risk. We also conduct a link removal test to find better risk mitigation methods.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_45-Risk_Diffusion_Modeling_and_Vulnerability_Quantification_on_Japanese.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Wireless Temperature Measuring System Based on the nRF24l01</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070244</link>
        <id>10.14569/IJACSA.2016.070244</id>
        <doi>10.14569/IJACSA.2016.070244</doi>
        <lastModDate>2016-03-01T13:30:00.9570000+00:00</lastModDate>
        
        <creator>Song Liu</creator>
        
        <creator>Zhiqiang Yuan</creator>
        
        <creator>Yuchen Chen</creator>
        
        <subject>nRF24L01; wireless data transmission; DS18B20; STC89C52</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A wireless data transmission system is composed of the nRF24L01 wireless data transmission device, the DS18B20 temperature sensor, and the STC89C52 microcontroller. The system collects and transmits temperature information and displays it on an LED; when the temperature exceeds the set value, the system raises an alarm through a buzzer. The hardware and software of the design are explained in detail. Finally, the application of this system for wireless temperature collection is discussed.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_44-Design_of_Wireless_Temperature_Measuring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Intelligent Control System of Transformer Oil Temperature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070243</link>
        <id>10.14569/IJACSA.2016.070243</id>
        <doi>10.14569/IJACSA.2016.070243</doi>
        <lastModDate>2016-03-01T13:30:00.9270000+00:00</lastModDate>
        
        <creator>Caijun Xu</creator>
        
        <creator>Liping Zhang</creator>
        
        <creator>Yuchen Chen</creator>
        
        <creator>Zhifeng Liu</creator>
        
        <subject>transformer oil temperature; temperature control; STC89C51; DS18B20</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>During the operation of a power transformer, the oil temperature directly affects the safe operation of the transformer as well as the stability of the network, so the detection and control of transformer oil temperature are vital. Based on a single-chip microcontroller and a digital temperature measurement chip, an intelligent control system for transformer oil temperature is designed. The system uses a DS18B20 digital temperature sensor to collect the transformer oil temperature, improving the accuracy of the system. The low-power, strongly anti-jamming STC89C51 microcontroller serves as the main controller to achieve control and real-time monitoring of the oil temperature, and an input control module is designed to preset different oil temperature control values during normal operation, improving system usability and human-computer interaction.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_43-Design_of_Intelligent_Control_System_of_Transformer_Oil_Temperature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonquadratic Lyapunov Functions for Nonlinear Takagi-Sugeno Discrete Time Uncertain Systems Analysis and Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070242</link>
        <id>10.14569/IJACSA.2016.070242</id>
        <doi>10.14569/IJACSA.2016.070242</doi>
        <lastModDate>2016-03-01T13:30:00.8930000+00:00</lastModDate>
        
        <creator>Ali Bouyahya</creator>
        
        <creator>Yassine Manai</creator>
        
        <creator>Joseph Hagg&#232;ge</creator>
        
        <subject>Nonquadratic Lyapunov functions; Non-PDC; Linear Matrix Inequality; Parametric Uncertain Systems; Takagi-Sugeno</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This paper deals with the analysis and design of a state feedback fuzzy controller for a class of discrete-time Takagi-Sugeno (T-S) fuzzy uncertain systems. The adopted framework is based on Lyapunov theory and uses the linear matrix inequality (LMI) formalism. The main goal is to reduce the conservatism of the stabilization conditions by using particular Lyapunov functions. Four nonquadratic Lyapunov functions are used; they represent extensions of two Lyapunov functions existing in the literature. Their influence on the stabilization region (the feasible area of stabilization) is shown through examples. The stabilization conditions of the controller for discrete-time T-S parametric uncertain systems are demonstrated with the variation of the Lyapunov functions between the (k, k+1) and (k, k+t) sample times. The controller gains can be obtained by solving several linear matrix inequalities (LMIs). Through examples and simulations, we demonstrate their use and robustness, and a comparative study verifies the effectiveness of the proposed methods.
</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_42-Nonquadratic_Lyapunov_Functions_for_Nonlinear.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Cloud Computing Architecture Using Homomorphic Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070241</link>
        <id>10.14569/IJACSA.2016.070241</id>
        <doi>10.14569/IJACSA.2016.070241</doi>
        <lastModDate>2016-03-01T13:30:00.8630000+00:00</lastModDate>
        
        <creator>Kamal Benzekki</creator>
        
        <creator>Abdeslam El Fergougui</creator>
        
        <creator>Abdelbaki El Belrhiti El Alaoui</creator>
        
        <subject>multi-cloud; privacy; fully homomorphic encryption; distributed System; confidentiality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The purpose of homomorphic encryption is to ensure the privacy of data in communication, storage, or use by processes, with mechanisms similar to conventional cryptography but with the added capability of computing over encrypted data, searching encrypted data, etc. Homomorphism is a property by which a problem in one algebraic system can be converted to a problem in another algebraic system, solved there, and the solution effectively translated back. Thus, homomorphism makes secure delegation of computation to a third party possible. Many conventional encryption schemes possess either the multiplicative or the additive homomorphic property and are currently in use for their respective applications. A Fully Homomorphic Encryption (FHE) scheme that can perform arbitrary computation over encrypted data first appeared in 2009 with Gentry’s work. In this paper, we propose a multi-cloud architecture of N distributed servers that repartitions the data and nearly achieves FHE.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_41-A_Secure_Cloud_Computing_Architecture_Using_Homomorphic_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Neural Network Based Method Developed for Digit Recognition Applied to Automatic Speed Sign Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070240</link>
        <id>10.14569/IJACSA.2016.070240</id>
        <doi>10.14569/IJACSA.2016.070240</doi>
        <lastModDate>2016-03-01T13:30:00.8470000+00:00</lastModDate>
        
        <creator>Hanene Rouabeh</creator>
        
        <creator>Chokri Abdelmoula</creator>
        
        <creator>Mohamed Masmoudi</creator>
        
        <subject>Image processing; Road Sign Recognition; Neural Networks; Digit Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This paper presents a new hybrid technique for digit recognition applied to the speed limit sign recognition task. The complete recognition system consists of the detection and recognition of speed signs in RGB images. A pre-treatment extracts the pictogram from a detected circular road sign, and then the task discussed in this work recognizes the digit candidates. To achieve a compromise between performance, reduced execution time, and optimized memory resources, the developed method is based on the joint use of a neural network and a decision tree. A simple network first classifies the extracted candidates into three classes, and then a small decision tree determines the exact information. This combination reduces both the size of the network and the memory utilization. The evaluation of the technique and the comparison with existing methods show its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_40-A_Novel_Neural_Network_Based_Method_Developed_for_Digit_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Discovery of the Implemented Software Engineering Process Using Process Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070239</link>
        <id>10.14569/IJACSA.2016.070239</id>
        <doi>10.14569/IJACSA.2016.070239</doi>
        <lastModDate>2016-03-01T13:30:00.8170000+00:00</lastModDate>
        
        <creator>Mostafa Adel Zayed</creator>
        
        <creator>Ahmed Bahaa Farid</creator>
        
        <subject>Process Mining; Process Models Discovery; Software Engineering; Agile; and Scrum</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Process model guidance is an important feature by which the software process is orchestrated. Without complying with this guidance, the production lifecycle deviates from producing reliable software with high quality standards. Teams often break the process, whether deliberately or impulsively. Application Lifecycle Management (ALM) tools log what teams do even when they break the process, so the log file can be a key to discovering the behavior of the undertaken process against the targeted process model. Since their introduction, Process Mining techniques have been used in business process domains with no focus on software engineering processes. This research brings Process Mining techniques to the software engineering domain. It presents a conclusive effort that used a Scrum-adapted process model as an example of Agile adoption, applying Process Mining discovery techniques to capture the process actually implemented by the Scrum team. This application clarifies the gap between the standard process guidance and the process as actually implemented. The results show that Process Mining techniques can discover and verify deviation on both levels: the process itself as well as the work items’ state-machine workflows.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_39-The_Discovery_of_the_Implemented_Software_Engineering_Process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Domain Ontology Creation Based on a Taxonomy Structure in Computer Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070238</link>
        <id>10.14569/IJACSA.2016.070238</id>
        <doi>10.14569/IJACSA.2016.070238</doi>
        <lastModDate>2016-03-01T13:30:00.7700000+00:00</lastModDate>
        
        <creator>Mansouri fatimaezzahra</creator>
        
        <creator>Sadgal mohamed</creator>
        
        <creator>Elfazziki abdelaziz</creator>
        
        <creator>Benchikhi loubna</creator>
        
        <subject>Domain Ontology; Categorization; Taxonomy; Road scenes; Computer vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>To create a knowledge base usable by information systems in computer vision, we need a data structure that facilitates information access. The artificial intelligence community uses ontologies to structure and represent domain knowledge. This information structure can serve as a database for many geographic information systems (GIS) or for information systems dealing with real objects, for example road scenes, and it can also be used by other systems. To this end, we provide a process for creating a taxonomy structure based on a new hierarchical image clustering method. The hierarchical relation is based on visual object features and contributes to building the domain ontology.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_38-Towards_Domain_Ontology_Creation_Based_on_a_Taxonomy_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>N-ary Relations of Association in Class Diagrams: Design Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070237</link>
        <id>10.14569/IJACSA.2016.070237</id>
        <doi>10.14569/IJACSA.2016.070237</doi>
        <lastModDate>2016-03-01T13:30:00.7530000+00:00</lastModDate>
        
        <creator>Sergievskiy Maxim</creator>
        
        <subject>UML; class diagram; multiplicity; ternary association; n-ary association; class-association; design pattern; object</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Most object-oriented development technology relies on the use of UML diagrams, in particular class diagrams. CASE tools used for the automation of object-oriented development often do not support n-ary associations in class diagrams, and implementing n-ary associations in program code is, in contrast to binary ones, rather time-consuming. This article shows how, in some cases, it is possible to move from an n-ary association between classes to binary associations and thereby reduce the number of objects. The rules for transforming models that contain n-ary associations are presented in the form of design patterns. The three proposed design patterns can be used in the process of developing software systems. They describe the transformation of n-ary (often ternary) associations between classes into binary ones, with the introduction of additional classes and binary associations, with the aim of optimizing the model.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_37-N_ary_Relations_of_Association_in_Class_Diagrams_Design_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study for Software Project Management Approaches and Change Management in the Project Monitoring &amp; Controlling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070236</link>
        <id>10.14569/IJACSA.2016.070236</id>
        <doi>10.14569/IJACSA.2016.070236</doi>
        <lastModDate>2016-03-01T13:30:00.7230000+00:00</lastModDate>
        
        <creator>Amira M. Gaber</creator>
        
        <creator>Sherief Mazen</creator>
        
        <creator>Ehab E. Hassanein</creator>
        
        <subject>Software Engineering; Software Project Management; and Software Change Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A software project encounters many changes during the software development life cycle. The key challenge is to control these changes and manage their impact on the project plan, budget, and implementation schedules. A well-developed change control process should assist the project manager and the responsible team in monitoring these changes. In this paper, we examine a number of approaches for project monitoring &amp; control with different project schedule scenarios. The comparison shows the effect of applying each approach on the cost and time of the project. The evaluation illustrates that integrated software Project and Change Management (IPCM) is more efficient, providing more control in tracking change requests and improving the performance monitoring process.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_36-Comparative_Study_for_Software_Project_Management_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of System Architecture for E-Government Cloud Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070235</link>
        <id>10.14569/IJACSA.2016.070235</id>
        <doi>10.14569/IJACSA.2016.070235</doi>
        <lastModDate>2016-03-01T13:30:00.6900000+00:00</lastModDate>
        
        <creator>Margulan Aubakirov</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>cloud technologies; outsourcing; Kazakhstan; cloud platform; e-Government</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Requirements and criteria are stated for the selection of a cloud platform and platform virtualization, by which optimal cloud products will be chosen for the e-Government of the Republic of Kazakhstan with respect to the quality-price ratio; the framework of the information and communication architecture is also introduced.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_35-Development_of_System_Architecture_for_E_Government_Cloud_Platforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementing Project Management Category Process Areas of CMMI Version 1.3 Using Scrum Practices, and Assets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070234</link>
        <id>10.14569/IJACSA.2016.070234</id>
        <doi>10.14569/IJACSA.2016.070234</doi>
        <lastModDate>2016-03-01T13:30:00.6770000+00:00</lastModDate>
        
        <creator>Ahmed Bahaa Farid</creator>
        
        <creator>A. S. Abd Elghany</creator>
        
        <creator>Yehia Mostafa Helmy</creator>
        
        <subject>Software Engineering; Scrum Software development; Process Improvement; CMMI; Scrum; Scrum CMMI Mapping; Project Management CMMI Process Areas; CMMI-Dev version 1.3; CMMI Project Management Category</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Software development organizations that rely on Capability Maturity Model Integration (CMMI) to assess and improve their processes have realized that agile approaches can provide improvements as well. CMMI and agile methods can work well together and exploit synergies that have the potential to dramatically improve business performance. The major question is: how can the integration of these two seemingly different approaches be realized? In earlier work, we conducted a field study within six companies. These companies had worked with agile methods for years and were assessed by the Egyptian Software Engineering Competence Center (SECC), the regional CMMI appraisal center. This study was mainly conducted to enhance the empirical understanding in this research field. It also showed that companies usually do not use agile methods in a way that helps cover the CMMI specific practices. In this paper, we present a new approach for mapping between CMMI and the Scrum method. This mapping has been analyzed, enhanced, and then applied to the same companies. Whereas previous efforts have worked in the same context but for an older version of CMMI, our research uses the latest CMMI version, 1.3. The research shows that our mapping approach has resulted in 37% satisfaction and 17% partial satisfaction of CMMI specific practices. This represents a 19.4% enhancement in satisfaction and a 6.2% improvement in partial satisfaction over the previous related research effort, which did not target the latest CMMI version.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_34-Implementing_Project_Management_Category_Process_Areas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preliminary Study of Software Performance Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070233</link>
        <id>10.14569/IJACSA.2016.070233</id>
        <doi>10.14569/IJACSA.2016.070233</doi>
        <lastModDate>2016-03-01T13:30:00.6600000+00:00</lastModDate>
        
        <creator>Issam Jebreen</creator>
        
        <creator>Mohammed Awad</creator>
        
        <subject>Performance Models; Measurement Model; Performance Prediction; Performance evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Context: Software performance models can be obtained by applying specific roles, skills, and techniques in the software life cycle, and this depends on formulating the software problem as well as gathering the performance requirements. This paper presents a preliminary review of software performance models. It constitutes a reference for IT companies and personnel that helps them select the suitable model for their projects. The study also helps researchers find further research areas in this field. A preliminary review following a predefined strategy was conducted of previously published approaches to software performance models integrated into the early stages of the software development cycle, exploring and comparing them. This study results in a comprehensive review of the existing software performance models, composing a clear reference that highlights the strengths and weaknesses of these models.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_33-Preliminary_Study_of_Software_Performance_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Studies’ Progress in Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070232</link>
        <id>10.14569/IJACSA.2016.070232</id>
        <doi>10.14569/IJACSA.2016.070232</doi>
        <lastModDate>2016-03-01T13:30:00.6300000+00:00</lastModDate>
        
        <creator>Essam Hanandeh</creator>
        
        <creator>Hayel Khafajah</creator>
        
        <subject>Information retrieval; Arabic information retrieval; Indexing; Query reformulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The field of information retrieval has witnessed tangible progress over the past decades in response to the expanded usage of the internet and the dire need of users to search for massive amounts of digital information. Given the steady increase of Arabic e-content, excellent information retrieval systems must be devised to suit the nature and requirements of the Arabic language. This paper sheds light on the current progress in the field of Arabic information retrieval, identifies the challenges that hinder the progress of this science, and proposes suggestions for further research.  This paper uses the descriptive analytical method to examine the reality of Arabic studies in the field of information retrieval and to study the problems that are being faced in this area. Specifically, the previous literature on information retrieval is reviewed by searching the related databases and websites.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_32-Arabic_Studies_Progress_in_Information_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Frequency Domain Analysis for Assessing Fluid Responsiveness by Using Instantaneous Pulse Rate Variability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070231</link>
        <id>10.14569/IJACSA.2016.070231</id>
        <doi>10.14569/IJACSA.2016.070231</doi>
        <lastModDate>2016-03-01T13:30:00.5970000+00:00</lastModDate>
        
        <creator>Pei-Chen Lin</creator>
        
        <creator>Chia-Chi Chang</creator>
        
        <creator>Hung-Yi Hsu</creator>
        
        <creator>Tzu-Chien Hsiao</creator>
        
        <subject>fluid responsiveness (FR); instantaneous pulse rate variability (iPRV); head-up tilt (HUT); passive leg raising (PLR)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In the ICU, fluid therapy is a conventional strategy for patients in shock. However, only half of ICU patients respond well to fluid therapy, and fluid loading in non-responsive patients delays definitive therapy. Prediction of fluid responsiveness (FR) has become an intense topic in clinical practice. Most conventional FR prediction methods are based on time-domain analysis, which has limited ability to indicate FR. This study proposes a method that predicts FR based on frequency-domain analysis, named instantaneous pulse rate variability (iPRV). iPRV provides a new indication in the very high frequency (VHF) range (0.4-0.8 Hz) of the spectrum for peripheral responses. Twenty-six healthy subjects participated in this study, and the photoplethysmography signal was recorded at supine baseline, during head-up tilt (HUT), and during passive leg raising (PLR), which induces variation of venous return and helps in the quantitative assessment of FR individually. The results showed that the spectral power of VHF decreased during HUT (573.96&#177;756.36 ms&#178; at baseline; 348.00&#177;434.92 ms&#178; in HUT) and increased during PLR (573.96&#177;756.36 ms&#178; at baseline; 718.92&#177;973.70 ms&#178; in PLR), which reflects the compensatory regulation of venous return and FR. This study provides an effective indicator for assessing FR in the frequency domain and has the potential to be a reliable system in the ICU.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_31-Frequency_Domain_Analysis_for_Assessing_Fluid_Responsiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking Items Through Rfid and Solving Heterogeneity Problems During a Collaboration Between Port Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070230</link>
        <id>10.14569/IJACSA.2016.070230</id>
        <doi>10.14569/IJACSA.2016.070230</doi>
        <lastModDate>2016-03-01T13:30:00.5670000+00:00</lastModDate>
        
        <creator>Mehdi ABID</creator>
        
        <creator>Benayad NSIRI</creator>
        
        <creator>Yassine SERHANE</creator>
        
        <creator>Haitam AGHARI</creator>
        
        <subject>Ontologies; Multi-agent systems; RFID; Port Information System; Collaboration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this article, we propose an architecture that improves various steps of the collaboration process between different port companies, based on the use of ontologies, multi-agent systems, and RFID. This approach allows us to collect and present all the data stored in each information system, exchanging and incorporating any data to facilitate its processing while respecting territorial regulation and compliance, and data confidentiality between all these port companies in a cooperative environment.
Thanks to the use of RFID (radio frequency identification), this architecture can also handle the process of tracking commodities belonging to any company included in this collaboration, with each item monitored and tracked in real time.
</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_30-Tracking_Items_Through_Rfid_and_Solving_Heterogeneity_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Choosing a Career Based Personality Matching: A Case Study of King Abdulaziz University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070229</link>
        <id>10.14569/IJACSA.2016.070229</id>
        <doi>10.14569/IJACSA.2016.070229</doi>
        <lastModDate>2016-03-01T13:30:00.5370000+00:00</lastModDate>
        
        <creator>Nahla Aljojo</creator>
        
        <subject>Holland’s Theory of Vocational Personalities; RIASEC personality and environment types; occupational interests; Career change; hexagonal model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Traditionally, selecting a career involves matching the specific aptitudes and characteristics of an individual with a career which requires or involves such factors. This particular approach has as its foundation the fact that certain careers have need of individuals with certain skills and attitudes, that is to say the individual is a ‘fit’ for that particular career based on their knowledge, skill and disposition. Many students have problems determining their college majors and a suitable career. This paper shows how to find a career that fits one’s personality, and aims to help students analyse their personalities based on the Holland personality test. This paper will identify the most suitable career for students by using a validated Arabic version of the Holland survey, which is one of the most popular models used for career personality tests. This study implements a new set of tasks, testing 117 students from King Abdulaziz University with the Arabic version of the Holland test. The test was applied to female students from three majors, computer science, information systems, and preparatory year distance learning students. The implications of the test results will help students to understand their personality types and determine a suitable career, as the results of the test suggest suitable careers for students which match their personalities. Ultimately, the results show a difference between computer science, information systems and preparatory year distance learning students with regard to personality types and suitable careers.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_29-Choosing_a_Career_Based_Personality_Matching_A_Case_Study_of_King_Abdulaziz_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hybrid Network Sniffer Model Based on Pcap Language and Sockets (Pcapsocks)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070228</link>
        <id>10.14569/IJACSA.2016.070228</id>
        <doi>10.14569/IJACSA.2016.070228</doi>
        <lastModDate>2016-03-01T13:30:00.5200000+00:00</lastModDate>
        
        <creator>Azidine GUEZZAZ</creator>
        
        <creator>Ahmed ASIMI</creator>
        
        <creator>Yassine SADQI</creator>
        
        <creator>Younes ASIMI</creator>
        
        <creator>Zakariae TBATOU</creator>
        
        <subject>Network Security; Intrusion Detection; Intrusion Prevention; Sniffing; Filtering; Network sniffer; Libpcap; Libnet; Sockets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Nowadays, the protection and security of data transiting computer networks represent a real challenge for developers of computer applications and for network administrators. Intrusion Detection Systems and Intrusion Prevention Systems are reliable techniques for good security. Any detected intrusion is based on data collection, so collecting significant traffic on the monitored systems is an important feature. Thus, the first task of an Intrusion Detection System or Intrusion Prevention System is to collect a basis of information, process and analyze it, and make accurate decisions. Network analysis can be used to improve network performance and security, but it can also be used for malicious tasks. Our main goal in this article is to design a reliable and powerful network sniffer, called PcapSockS, based on the pcap language and sockets, able to intercept traffic in three modes: connected, connectionless, and raw mode. We start with a performance assessment of a list of the most widespread and most recently used network sniffers. The study is completed by a classification of these sniffers with respect to computer security objectives, based on the parameters of library (libpcap/winpcap or libnet), filtering, availability, software or hardware implementation, alerting, and real-time operation. PcapSockS provides good performance, integrating reliable sniffing mechanisms that allow supervision covering some low- and high-level protocols for TCP and UDP network communications.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_28-A_New_Hybrid_Network_Sniffer_Model_Based_on_Pcap_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Topic Modeling Based Solution for Confirming Software Documentation Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070227</link>
        <id>10.14569/IJACSA.2016.070227</id>
        <doi>10.14569/IJACSA.2016.070227</doi>
        <lastModDate>2016-03-01T13:30:00.4730000+00:00</lastModDate>
        
        <creator>Nouh Alhindawi</creator>
        
        <creator>Obaida M. Al-Hazaimeh</creator>
        
        <creator>Rami Malkawi</creator>
        
        <creator>Jamal Alsakran</creator>
        
        <subject>Software Documentation; LDA; Clusters; Hellinger Distance; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>This paper presents an approach for evaluating and confirming the quality of external software documentation using topic modeling. Typically, the quality of the external documentation has to mirror precisely the organization of the source code. Therefore, the elements of such documentation should be clearly written, associated, and presented. In this paper, we use Latent Dirichlet Allocation (LDA) and the Hellinger distance to compute the similarities between fragments of source code and the external documentation topics. These similarities are used to improve and advance the existing external documentation. Furthermore, they can also be used for evaluating the documenting process during the evolution phase of the software. The results show that the new approach yields state-of-the-art performance in evaluating and confirming the quality of the existing external documentation.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_27-A_Topic_Modeling_Based_Solution_for_Confirming_Software_Documentation_Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Keyphrase Extractor from Arabic Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070226</link>
        <id>10.14569/IJACSA.2016.070226</id>
        <doi>10.14569/IJACSA.2016.070226</doi>
        <lastModDate>2016-03-01T13:30:00.4430000+00:00</lastModDate>
        
        <creator>Hassan M. Najadat</creator>
        
        <creator>Ismail I. Hmeidi</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Maysa Mahmoud Bany Issa</creator>
        
        <subject>Arabic Keyphrase Extraction; Unsupervised Arabic Keyphrase Extraction; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>A keyphrase is a sentence or part of a sentence containing a sequence of words that expresses the meaning and purpose of a given paragraph. Keyphrase extraction is the task of identifying the possible keyphrases in a given document. Many applications, including text summarization, indexing, and characterization, use keyphrase extraction, and it is an essential task for improving the performance of any information retrieval system. The internet contains a massive number of documents, which may or may not have manually assigned keyphrases. Arabic is an important world language, and the number of online Arabic documents is growing rapidly; most of them have no manually assigned keyphrases, so users must scan the whole of each retrieved web document. To avoid scanning the entire retrieved document, keyphrases need to be assigned to each web document, manually or automatically. This paper addresses the problem of automatically identifying keyphrases in Arabic documents. We provide a novel algorithm, Automatic Keyphrase Extraction from Arabic (AKEA), which extracts keyphrases from Arabic documents automatically. To test the algorithm, we collected a dataset containing 100 documents from the Arabic wiki and downloaded another 56 agricultural documents from the Food and Agriculture Organization of the United Nations (F.A.O.). The evaluation results show that the system achieves an 83% precision value in identifying 2-word and 3-word keyphrases in the agricultural domain.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_26-Paper_Automatic_Keyphrase_Extractor_from_Arabic_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dual Security Testing Model for Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070225</link>
        <id>10.14569/IJACSA.2016.070225</id>
        <doi>10.14569/IJACSA.2016.070225</doi>
        <lastModDate>2016-03-01T13:30:00.4100000+00:00</lastModDate>
        
        <creator>Singh Garima</creator>
        
        <creator>Kaushik Manju</creator>
        
        <subject>Web application testing; Security testing; UML modeling; Web socket programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In recent years, web applications have evolved from small websites into large multi-tiered applications. The quality of a web application depends on the richness of its contents, well-structured navigation, and, most importantly, its security. Web application testing is a new field of research aimed at ensuring the consistency and quality of web applications. In the last ten years, different approaches and models have been developed for testing web applications, but only a few focused on content testing, a few on navigation testing, and very few on security testing. There is a need to test the content, navigation, and security of an application in one go. The objective of this paper is to propose a Dual Security Testing Model to test the security of web applications using the UML modeling technique, which includes a web socket interface. In this paper we describe how our security testing model is implemented using an activity diagram and activity graph, and how test cases are generated from these.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_25-Dual_Security_Testing_Model_for_Web_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Transmission Model with Quality of Service and Energy Economy in Wireless Multimedia Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070224</link>
        <id>10.14569/IJACSA.2016.070224</id>
        <doi>10.14569/IJACSA.2016.070224</doi>
        <lastModDate>2016-03-01T13:30:00.3800000+00:00</lastModDate>
        
        <creator>Benlabbes Haouari</creator>
        
        <creator>Benahmed Khelifa</creator>
        
        <creator>Beladgham Mohammed</creator>
        
        <subject>Wireless Multimedia Sensor Network; Multimedia; Compression; Routing; Energy Consumption; QoS; Energy Economy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The objective of this article is to present the efficiency of image compression in a Wireless Multimedia Sensor Network (WMSN). The method used in this work is based on the lifting scheme coupled with SPIHT coding applied to the biorthogonal CDF 9/7 wavelet. The effectiveness of this technique comes from combining two advantages: first, it saves energy to prolong the life of the network; second, it improves the Quality of Service (QoS) in terms of average throughput, number of dropped packets, and end-to-end average delay. The authors examined two scenarios of the same network model in the NS2 simulator with respect to the above critical points of energy economy and quality of service: the first scenario transmits an original image, and the second sends the image compressed with the proposed method. The simulation results show that the proposed system extends the life of the network and minimizes energy consumption, and that it can transmit the image under comfortable QoS conditions: reduced end-to-end average delay, no dropped packets, better average throughput, and satisfied bandwidth requirements.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_24-Image_Transmission_Model_with_Quality_of_Service_and_Energy_Economy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Parallel Programming Languages on the Performance and Energy Consumption of HPC Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070223</link>
        <id>10.14569/IJACSA.2016.070223</id>
        <doi>10.14569/IJACSA.2016.070223</doi>
        <lastModDate>2016-03-01T13:30:00.3470000+00:00</lastModDate>
        
        <creator>Muhammad Aqib</creator>
        
        <creator>Fadi Fouad Fouz</creator>
        
        <subject>power consumption; quicksort; high- performance computing; performance; Open MP; Open MPI; CUDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Big and complex applications need many resources and a long computation time to execute sequentially. In this scenario, all of an application&#39;s processes are handled in a sequential fashion even if they are independent of each other. In a high-performance computing environment, multiple processors are available to run applications in parallel, so mutually independent blocks of code can run in parallel. This approach not only increases the efficiency of the system without affecting the results but also saves a significant amount of energy. Many parallel programming models or APIs, such as Open MPI, Open MP, and CUDA, are available for running multiple instructions in parallel. In this paper, the efficiency and energy consumption of two well-known tasks, matrix multiplication and quicksort, are analyzed using different parallel programming models on a multiprocessor machine. The obtained results, which can be generalized, outline the effect of the choice of programming model on efficiency and energy consumption when running different codes on different machines.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_23-The_Effect_of_Parallel_Programming_Languages_on_the_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ranking Documents Based on the Semantic Relations Using Analytical Hierarchy Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070222</link>
        <id>10.14569/IJACSA.2016.070222</id>
        <doi>10.14569/IJACSA.2016.070222</doi>
        <lastModDate>2016-03-01T13:30:00.3330000+00:00</lastModDate>
        
        <creator>Ali I. El-Dsouky</creator>
        
        <creator>Hesham A. Ali</creator>
        
        <creator>Rabab S. Rashed</creator>
        
        <subject>Semantic rank; ranking web; ontology; search engine; information retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>With the rapid growth of the World Wide Web comes the need for a fast and accurate way to reach the required information. Search engines play an important role in retrieving the required information for users, and ranking algorithms are an important step in search engines so that users can retrieve the pages most relevant to their queries.
In this work, we present a method for utilizing genealogical information from an ontology to find suitable hierarchical concepts for query extension, and for ranking web pages based on the semantic relations of the hierarchical concepts related to the query terms, taking into consideration the hierarchical relations of the searched domain (siblings, synonyms, and hyponyms) through different weightings based on the AHP method. This provides a more accurate solution for ranking documents when compared to the three common methods.
</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_22-Ranking_Documents_Based_on_the_Semantic_Relations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Organized Hash Based Secure Multicast Routing Over Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070221</link>
        <id>10.14569/IJACSA.2016.070221</id>
        <doi>10.14569/IJACSA.2016.070221</doi>
        <lastModDate>2016-03-01T13:30:00.3000000+00:00</lastModDate>
        
        <creator>Amit Chopra</creator>
        
        <creator>Dr. Rajneesh Kumar</creator>
        
        <subject>Security; Multicast; Group Communication; MAODV; Key management; HASH</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Multicast group communication over mobile ad hoc networks faces various challenges related to secure data transmission. To achieve this goal, there is a need to authenticate group members as well as to protect the application data, routing information, and other network resources. Multicast-AODV (MAODV) is the extension of the AODV protocol, and there are several issues related to each multicast network operation. In the case of dynamic group behavior, it becomes more challenging to protect the resources of a particular group. Researchers have developed different solutions to secure multicast group communication, and these solutions can be used for resource protection at different layers, i.e., the application layer, physical layer, network layer, etc. Each security solution can guard against a particular security threat. This research paper introduces a self-organized hash-based secure routing scheme for multicast ad hoc networks. It uses the group Diffie-Hellman method for key distribution. Route authentication and integrity are both ensured by generating local flag codes and global hash values. In the case of any violation, the route log is monitored to identify malicious activities.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_21-Self_Organized_Hash_Based_Secure_Multicast_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skip List Data Structure Based New Searching Algorithm and Its Applications: Priority Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070220</link>
        <id>10.14569/IJACSA.2016.070220</id>
        <doi>10.14569/IJACSA.2016.070220</doi>
        <lastModDate>2016-03-01T13:30:00.2700000+00:00</lastModDate>
        
        <creator>Mustafa Aksu</creator>
        
        <creator>Ali Karci</creator>
        
        <subject>Algorithms; Priority search; Algorithm analysis; Data structures; Performance analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Our new algorithm, priority search, was created with the help of the skip list data structure and its algorithms. A skip list consists of linked lists formed in layers, which are linked in a pyramidal way. The time complexity of searching is O(lgN) in an N-element skip list. The newly developed searching algorithm is based on the hit search number of each searched datum. If a datum has a greater hit search number, it is promoted to an upper level of the skip list; that is, the most frequently searched data are located in the upper levels of the skip list and rarely searched data are located in the lower levels. The pyramidal structure of the data is constructed using the hit search numbers, in other words, the frequency of each datum. Thus, the time complexity of searching is almost Θ(1) for a data set of N records. In this paper, searching algorithms such as linear search, binary search, and priority search were implemented, and the obtained results were compared. The results demonstrate that the priority search algorithm performs better than the binary search algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_20-Skip_List_Data_Structure_Based_New_Searching_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimum Route Selection for Vehicle Navigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070219</link>
        <id>10.14569/IJACSA.2016.070219</id>
        <doi>10.14569/IJACSA.2016.070219</doi>
        <lastModDate>2016-03-01T13:30:00.2230000+00:00</lastModDate>
        
        <creator>Dalip</creator>
        
        <creator>Vijay Kumar</creator>
        
        <subject>Vehicle Navigation; Route Selection; Fuzzy; Optimum; Navigation System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The objective of the Optimum Route Selection for Vehicle Navigation System (ORSVNS) article is to develop a system that provides drivers with information about real-time alternate routes and helps in selecting the optimal route among all alternate routes from an origin to a destination. Two types of query systems, special and general, are designed for drivers. Here, the criteria for route selection are introduced using primary and secondary road attributes. The presented methodology helps drivers make better decisions in choosing the optimal route using fuzzy logic. For experimental results, ORSVNS was tested over a 220 km portion of Haryana state in India.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_19-Optimum_Route_Selection_for_Vehicle_Navigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Classifying Unstructured Data of Cardiac Patients: A Supervised Learning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070218</link>
        <id>10.14569/IJACSA.2016.070218</id>
        <doi>10.14569/IJACSA.2016.070218</doi>
        <lastModDate>2016-03-01T13:30:00.1770000+00:00</lastModDate>
        
        <creator>Iqra Basharat</creator>
        
        <creator>Ali Raza Anjum</creator>
        
        <creator>Mamuna Fatima</creator>
        
        <creator>Usman Qamar</creator>
        
        <creator>Shoab Ahmed Khan</creator>
        
        <subject>bioinformatics; classification techniques; heart disease in Pakistan; heart disease prediction; multinomial classification; logistic regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Data mining has recently emerged as an important field that helps in extracting useful knowledge from huge amounts of unstructured and apparently un-useful data. Data mining in health organizations has the highest potential in this area for mining unknown patterns in datasets and for disease prediction. The amount of work done for cardiovascular patients in Pakistan is very limited. In this research study, using the classification approach of machine learning, we propose a framework to classify unstructured data of cardiac patients of the Armed Forces Institute of Cardiology (AFIC), Pakistan into four important classes. The focus of this study is to structure the unstructured medical data/reports manually, as there was no structured database available for the specific data under study. Multinomial Logistic Regression (LR) is used to perform multi-class classification, and 10-fold cross validation is used to validate the classification models. To analyze the results and the performance of the Logistic Regression models, the performance measures used include precision, F-measure, sensitivity, specificity, classification error, area under the curve, and accuracy. This study will provide a road map for future research in the field of bioinformatics in Pakistan.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_18-A_Framework_for_Classifying_Unstructured_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Corporate Responsibility in Combating Online Misinformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070217</link>
        <id>10.14569/IJACSA.2016.070217</id>
        <doi>10.14569/IJACSA.2016.070217</doi>
        <lastModDate>2016-03-01T13:30:00.1470000+00:00</lastModDate>
        
        <creator>Fadi Safieddine</creator>
        
        <creator>Wassim Masri</creator>
        
        <creator>Pardis Pourghomi</creator>
        
        <subject>Misinformation; Social Media; Browsers; Search Engines; Corporate; Ethical; Responsibility</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In the age of mass information and misinformation, the corporate duty of developers of browsers, social media, and search engines is falling short of the minimum standards of responsibility. The tools and technologies to combat misinformation online are already available, but integrating them has not been given enough priority to warrant action. This paper presents an effective and practical method, based on technologies already available, that could be used by browsers and social media websites to help combat misinformation presented in the form of photo, video, or textual evidence: a feature the authors have termed “Right-click Authenticate”, which every browser and social media website should have.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_17-Corporate_Responsibility_in_Combating_Online_Misinformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing of Hydraulically Balanced Water Distribution Network Based on GIS and EPANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070216</link>
        <id>10.14569/IJACSA.2016.070216</id>
        <doi>10.14569/IJACSA.2016.070216</doi>
        <lastModDate>2016-03-01T13:30:00.1000000+00:00</lastModDate>
        
        <creator>RASOOLI Ahmadullah</creator>
        
        <creator>KANG Dongshik</creator>
        
        <subject>Geographical Information System (GIS); Water Distribution Network (WDN); Hydraulics; EPANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The main objectives of this paper are the design and balancing of a Water Distribution Network (WDN) based on the hydraulically balanced loops method, using a Geographical Information System (GIS) methodology with the contribution of EPANET. The GIS methodology is used to ensure the WDN’s integrity and to skeletonize a proper and functional WDN using Network Analyst, utilizing the geometric network and topology network through hierarchical geo-databases. The problem is to make the WDN hydraulically balanced by applying the WDN balancing method. For that reason, we analyzed the water flow in each pipe and performed an iterative process on the loops in order to make the algebraic summation of head loss “h_f” around any closed loop zero. In addition, the summation of pipe flows must equal the flow entering or leaving the system through each node. At each iteration, reasonable changes occurred in the pipe flows until the head loss became very small or fixed at zero (the optimized correction), computed using an Excel sheet solver. Since this method was confirmed to be effective, simulations were performed using the GIS and EPANET water distribution platforms. As a result, we accomplished a hydraulically balanced WDN. Finally, we analyzed and simulated the hydraulic parameters for the targeted area in Kabul city, successfully determining the hydraulic state of the parameters across the network. It is worth mentioning that the Hardy-Cross method is used to approach a more precise optimized correction concerning a hydraulically balanced and optimal WDN. This method can also be applied to WDNs with complex loops; its advantages are simple mathematics and self-correction. For managers and engineers who work in the field of water supply, this methodology is recommended as an advantageous workflow in planning water distribution patterns.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_16-Designing_of_Hydraulically_Balanced_Water_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Augmented Reality Approach to Integrate Practical Activities in E-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070215</link>
        <id>10.14569/IJACSA.2016.070215</id>
        <doi>10.14569/IJACSA.2016.070215</doi>
        <lastModDate>2016-03-01T13:30:00.0830000+00:00</lastModDate>
        
        <creator>EL KABTANE Hamada</creator>
        
        <creator>EL ADNANI Mohamed</creator>
        
        <creator>SADGAL Mohamed</creator>
        
        <creator>MOURDI Youssef</creator>
        
        <subject>E-learning; virtual reality; augmented reality; practical activities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In the past, the term e-learning referred to any learning method that used electronic machines for delivery. With the evolution and advent of the internet, the term evolved to refer to online courses. There are many platforms that serve to distribute and manage learning content. In some domains, learners need to use equipment and products to complete the picture built in the theoretical part with a practical activity part. However, most of those platforms suffer from a lack of tools that offer practical activities for learners. Videos, virtual laboratories, and distance control of real equipment have been proposed as solutions to address this lack, but they remain limited. Mixed reality, as a new technology, promises to create a virtual environment where the learner is an actor and can interact with virtual objects. This article presents an approach for developing integrated e-learning systems that help carry out practical work by establishing a virtual laboratory, based on an augmented reality system, in which all tools and products can be manipulated by learners and teachers as in a real practical activity.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_15-An_Augmented_Reality_Approach_to_Integrate_Practical_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Teaching Methods and Proposed Web Libraries for Designing Animated Course Content: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070214</link>
        <id>10.14569/IJACSA.2016.070214</id>
        <doi>10.14569/IJACSA.2016.070214</doi>
        <lastModDate>2016-03-01T13:30:00.0670000+00:00</lastModDate>
        
        <creator>Rajesh Kumar Kaushal</creator>
        
        <creator>Dr. Surya Narayan Panda</creator>
        
        <subject>cognitive; web education; dynamic teaching tool; animation libraries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The primary aim of an education system is to improve cognitive and computational skills in students. This cannot be achieved by just using the latest technology; it can only be achieved through effective teaching methods in combination with effective technology. Many researchers have offered effective teaching methods and published their findings in the past, most of them offering teaching through animations, puzzles, games, and storylines. This research paper focuses on identifying effective teaching methods offered by researchers, and their findings, by reviewing articles published in renowned journals and conferences over the last few years. Another aim of this paper is to propose ideas to make teaching tools more effective, helping students to understand difficult concepts deeply, improve cognitive and computational skills, and retain knowledge for longer. These ideas will serve as future research directions in this area. A further aim of this paper is to introduce the latest web libraries that can help educators design animated courses.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_14-Effective_Teaching_Methods_and_Proposed_Web_Libraries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contributions to the Analysis and the Supervision of a Thermal Power Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070213</link>
        <id>10.14569/IJACSA.2016.070213</id>
        <doi>10.14569/IJACSA.2016.070213</doi>
        <lastModDate>2016-03-01T13:30:00.0200000+00:00</lastModDate>
        
        <creator>Lakhoua M.N</creator>
        
        <creator>Glaa R.</creator>
        
        <creator>Ben Hamouda M.</creator>
        
        <creator>El Amraoui L.</creator>
        
        <subject>SCADA systems; SADT method; SA-RT method; thermal power plant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Supervision systems play an important role in industry, mainly due to the increasing demand for product quality and high efficiency, and to the growing integration of automatic control systems in technical processes. In fact, a supervision system has a great number of components and interconnections, and it is difficult to describe and understand its behavior. Furthermore, the supervision system in industrial plants, implemented in supervisory control and data acquisition (SCADA) software, must undertake at least the following three main tasks: monitoring, control, and fault tolerance. It can therefore be classified as a complex system. The objective of this paper is to show the interest of using functional analysis techniques such as SADT (Structured Analysis and Design Technique) and SA-RT (Structured Analysis Real Time) for the design of supervisory systems. To this end, we present a general model for the analysis and supervision of production systems. This model is based on functional analysis (FA) on the one hand and on the SCADA system on the other.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_13-Contributions_to_the_Analysis_and_the_Supervision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of a Braking System on the Basis of Structured Analysis Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070212</link>
        <id>10.14569/IJACSA.2016.070212</id>
        <doi>10.14569/IJACSA.2016.070212</doi>
        <lastModDate>2016-03-01T13:30:00.0030000+00:00</lastModDate>
        
        <creator>Ben Salem J.</creator>
        
        <creator>Lakhoua M.N.</creator>
        
        <creator>El Amraoui L.</creator>
        
        <subject>mechatronic system; ABS braking system; analysis and modeling; SADT method; SA-RT method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this paper, we present the general context of research in the domain of analysis and modeling of mechatronic systems. In fact, we present a bibliographic review of some research works on the systemic analysis of mechatronic systems. To better understand their characteristics, we start with an introduction to mechatronic systems and the various fields related to them; we then present a few analysis and design methods applied to mechatronic systems. Finally, we apply the two methods SADT (Structured Analysis Design Technique) and SA-RT (Structured Analysis Real Time) to the Anti-lock Braking System (ABS).</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_12-Analysis_of_a_Braking_System_on_the_Basis_of_Structured_Analysis_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Random-Walk Based Privacy-Preserving Access Control for Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070210</link>
        <id>10.14569/IJACSA.2016.070210</id>
        <doi>10.14569/IJACSA.2016.070210</doi>
        <lastModDate>2016-03-01T13:29:59.9900000+00:00</lastModDate>
        
        <creator>You-sheng Zhou</creator>
        
        <creator>En-wei Peng</creator>
        
        <creator>Cheng-qing Guo</creator>
        
        <subject>online social networks; access control; random walk; privacy-preserving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Online social networks are popular with people for connecting with friends, sharing resources, etc. Meanwhile, online social networks constantly suffer from the problem of privacy exposure. The existing methods to prevent exposure enforce access control provided by the social network providers or social network users. However, those enforcements are impractical, since one of the essential goals of social network applications is to share updates freely and instantly. To improve the security and availability of social network applications, a novel random-walk-based access control scheme for social networks is proposed in this paper. Unlike the explicit attribute-based matching used in existing schemes, the results of random walks are employed to securely compute the L1 distance between two social network users in the presented scheme, which not only avoids the leakage of private attributes, but also enables each social network user to define access control policies independently. The experimental results show that the proposed scheme can facilitate access control for online social networks.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_10-A_Random_Walk_Based_Privacy_Preserving_Access_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pricing Schemes in Cloud Computing: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070211</link>
        <id>10.14569/IJACSA.2016.070211</id>
        <doi>10.14569/IJACSA.2016.070211</doi>
        <lastModDate>2016-03-01T13:29:59.9900000+00:00</lastModDate>
        
        <creator>Artan Mazrekaj</creator>
        
        <creator>Isak Shabani</creator>
        
        <creator>Besmir Sejdiu</creator>
        
        <subject>Cloud Computing; Pricing Models; Pricing Schemes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Cloud computing is one of the most rapidly developing technologies of recent years, attracting increasing interest in both industry and academia. This technology enables many services and resources for end users. With the rise of cloud services, the number of companies offering various services on cloud infrastructure has increased, creating price competition in the global market. Cloud computing providers offer many services to their clients, ranging from infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), storage as a service (STaaS), and security as a service (SECaaS) to test environment as a service (TEaaS). The aim of providers is to maximize revenue through their pricing schemes, while the main goal of customers is to obtain quality of service (QoS) for a reasonable price. The purpose of this paper is to compare and discuss several models and pricing schemes from different cloud computing providers.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_11-Pricing_Schemes_in_Cloud_Computing_An_Overview.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Classification Secondary School Student Enrollment Approval Based on E-Learning Management System and E-Games</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070209</link>
        <id>10.14569/IJACSA.2016.070209</id>
        <doi>10.14569/IJACSA.2016.070209</doi>
        <lastModDate>2016-03-01T13:29:59.9730000+00:00</lastModDate>
        
        <creator>Hany Mohamed El-katary</creator>
        
        <creator>Essam M. Ramzy Hamed</creator>
        
        <creator>Safaa Sayed Mahmoud</creator>
        
        <subject>evaluation; learning management system; e-games; classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The student is the key to the educational process, in which students’ creativity and interactions are strongly encouraged. There are many tools embedded in Learning Management Systems (LMS) that serve as evaluation instruments for learners. A problem that has recently appeared is that the assessment process is not always fair or accurate in classifying students according to accumulated knowledge. Therefore, there is a need for a new model for better decision making in students’ enrollment and assessment. The proposed model may run alongside an assessment tool within an LMS. It performs analysis and obtains knowledge regarding the classification capability of the assessment process, offering course managers knowledge about course materials, quizzes, activities, and e-games. The proposed model is an accurate assessment tool and thus enables better classification among learners. It was developed for learning management systems, which are commonly used in e-learning in Egyptian language schools, and demonstrated good accuracy compared to real sample data (250 students).</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_9-A_Model_for_Classification_Secondary_School_Student_Enrollment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Compressed Sensing for Cluster Based Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070208</link>
        <id>10.14569/IJACSA.2016.070208</id>
        <doi>10.14569/IJACSA.2016.070208</doi>
        <lastModDate>2016-03-01T13:29:59.9570000+00:00</lastModDate>
        
        <creator>Vishal Krishna Singh</creator>
        
        <creator>Manish Kumar</creator>
        
        <subject>Compressed sensing; in-network communication; network lifetime; traffic load balancing; wireless sensor network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Data transmission consumes a significant amount of energy in large-scale wireless sensor networks (WSNs). In such an environment, reducing the in-network communication and distributing the load evenly over the network can reduce the overall energy consumption and maximize the network lifetime significantly. In this work, the aforementioned problems of network lifetime and uneven energy consumption in large-scale wireless sensor networks are addressed. This work proposes a hierarchical compressed sensing (HCS) scheme to reduce the in-network communication during the data gathering process. Correlated sensor readings are collected via a hierarchical clustering scheme. A compressed sensing (CS) based data processing scheme is devised to transmit the data from the source to the sink. The proposed HCS is able to identify the optimal position for the application of CS so as to achieve a reduced and similar number of transmissions on all the nodes in the network. An activity map is generated to validate the reduced and uniformly distributed communication load of the WSN. Based on the number of transmissions per data gathering round, the bit-hop metric model is used to analyse the overall energy consumption. Simulation results validate the efficiency of the proposed method over existing CS based approaches.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_8-Hierarchical_Compressed_Sensing_for_Cluster_Based_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern Visualization Through Detection Plane Generation for Macroscopic Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070207</link>
        <id>10.14569/IJACSA.2016.070207</id>
        <doi>10.14569/IJACSA.2016.070207</doi>
        <lastModDate>2016-03-01T13:29:59.9430000+00:00</lastModDate>
        
        <creator>Hanan Hassan Ali Adlan</creator>
        
        <subject>pattern detection; hybrid architecture; backpropagation networks; rough set; image patterns</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Macroscopic images are a kind of environment in which complex patterns are present. Satellite images are one such class, containing many patterns, which reflects the challenge of detecting the patterns present in this kind of environment. SPOT1b satellite images provide valuable information; they are affordable and applicable to a wide range of applications. This paper demonstrates an approach to generating a detection plane that visualizes the patterns present in a satellite image. The detection plane uses a rough neural network to provide an optimal representation in a backpropagation architecture. Rough set theory combined with a multilayer perceptron constitutes the rough neural network. Reduction of the feature dimensionality via the rough module improves the recognition ability of the neural network; the rough module is found to provide the neural network with optimal features. The ability of the neural network to efficiently detect and visualize the patterns stems from a developed extraction algorithm. The resulting hybrid architecture provides the plane with the best features for visualizing the phenomena under investigation. Together with the novel extraction algorithm, the developed system provides a tool to visualize patterns present in SPOT1b satellite images.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_7-Pattern_Visualization_ThroughDetection_Plane_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Secure Web Application Design: Comparative Analysis of Major Languages and Framework Choices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070206</link>
        <id>10.14569/IJACSA.2016.070206</id>
        <doi>10.14569/IJACSA.2016.070206</doi>
        <lastModDate>2016-03-01T13:29:59.9270000+00:00</lastModDate>
        
        <creator>Stephen J. Tipton</creator>
        
        <creator>Young B. Choi</creator>
        
        <subject>Web; security; framework; application; authentication; ruby; ruby on rails; play framework; Scala; PHP; Zend Framework 2; SQL injection; threats</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>We examine the benefits and drawbacks in the selection of various software development languages and web application frameworks. In particular, we consider five of the ten threats outlined in the Open Web Application Security Project (OWASP) Top 10 list of the most critical Web application security flaws [12], and examine the role of three popular Web application frameworks (Ruby on Rails (Ruby), Play Framework (Scala), and Zend Framework 2 (PHP)) in addressing a selection of these major threats. In addition, we compare the strengths and weaknesses of each Web application framework as it pertains to the implementation of strong security measures. Furthermore, for each framework examined, we assess how an organization should address these security threats in its software design when utilizing that framework. We suggest the direction in which an organization facing such a decision ought to head; moreover, we facilitate such a decision by assessing the benefits and drawbacks of each framework based on our findings, and encourage each organization to decide what works best for its technical direction.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_6-Toward_Secure_Web_Application_Design_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hidden Markov Models (HMMs) and Security Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070205</link>
        <id>10.14569/IJACSA.2016.070205</id>
        <doi>10.14569/IJACSA.2016.070205</doi>
        <lastModDate>2016-03-01T13:29:59.9100000+00:00</lastModDate>
        
        <creator>Rubayyi Alghamdi</creator>
        
        <subject>Markov model; Hidden Markov model; HMM; Forward algorithm; Viterbi algorithm; Baum-Welch algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Hidden Markov models (HMMs) are statistical models used in various communities and applications, including speech recognition, mental task classification, biological analysis, and anomaly detection. A hidden Markov model involves two kinds of states: hidden states and observation states. The purpose of this survey paper is to further the understanding of hidden Markov models, as well as the solutions to their three central problems: the evaluation problem, the decoding problem, and the learning problem. In addition, applying HMMs in real-world applications such as security and engineering will improve classification and accuracy across the whole field.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_5-Hidden_Markov_Models_HMMs_and_Security_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Servicescape Model: Atmospheric Qualities of Virtual Reality Retailing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070204</link>
        <id>10.14569/IJACSA.2016.070204</id>
        <doi>10.14569/IJACSA.2016.070204</doi>
        <lastModDate>2016-03-01T13:29:59.8970000+00:00</lastModDate>
        
        <creator>Aasim Munir Dad</creator>
        
        <creator>Professor Barry Davies</creator>
        
        <creator>Dr. Asma Abdul Rehman</creator>
        
        <subject>Virtual Reality Retailing (VRR); Servicescape; 3D Servicescape; Retail Atmospherics; Shoppers’ behaviour</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The purpose of this paper is to provide a 3D servicescape conceptual model which explores the potential effect of 3D virtual reality retail stores’ environment on shoppers&#39; behaviour. An extensive review of the literature within two different domains, namely servicescape models and retail atmospherics, was carried out in order to propose a conceptual model. Further, eight detailed interviews were conducted to confirm the stimulus dimension of the conceptual model. A 3D servicescape conceptual model is offered on the basis of the stimulus-organism dimension, which proposes that a 3D virtual reality retail (VRR) store environment consists of physical, social, socially symbolic and natural dimensions. These dimensions are proposed to affect shoppers’ behaviour through the mediating variables of emotions (pleasure and arousal). An interrelationship between pleasure and arousal, as mediating variables, is also proposed. This research opens a number of new avenues for further research through the proposed model of shoppers’ behaviour in a VRR store environment. Further, a systematic taxonomy development of the VRR store environment is attempted through this proposed model, which may prove to be an important step in theory building. A comprehensive 3D servicescape model, along with a large number of propositions, is made to define a 3D VRR store environment.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_4-3D_Servicescape_Model_Atmospheric_Qualities_of_Virtual_Reality_Retailing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The SVM Classifier Based on the Modified Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070203</link>
        <id>10.14569/IJACSA.2016.070203</id>
        <doi>10.14569/IJACSA.2016.070203</doi>
        <lastModDate>2016-03-01T13:29:59.8800000+00:00</lastModDate>
        
        <creator>Liliya Demidova</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Yulia Sokolova</creator>
        
        <subject>particle swarm optimization; SVM-classifier; kernel function type; kernel function parameters; regularization parameter; support vectors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>The problem of developing an SVM classifier based on modified particle swarm optimization is considered. The algorithm carries out a simultaneous search for the kernel function type, the values of the kernel function parameters, and the value of the regularization parameter for the SVM classifier. Such an SVM classifier provides high-quality data classification. The idea of particle &#171;regeneration&#187; forms the basis of the modified particle swarm optimization algorithm: in realizing this idea, some particles change their kernel function type to the one corresponding to the particle with the best classification accuracy. The proposed particle swarm optimization algorithm reduces the time required to develop the SVM classifier. The results of experimental studies confirm the efficiency of this algorithm.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_3-The_SVM_Classifier_Based_on_the_Modified_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Security in Social Networking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070202</link>
        <id>10.14569/IJACSA.2016.070202</id>
        <doi>10.14569/IJACSA.2016.070202</doi>
        <lastModDate>2016-03-01T13:29:59.8630000+00:00</lastModDate>
        
        <creator>David Hiatt</creator>
        
        <creator>Young B. Choi</creator>
        
        <subject>Security; Information Security; Social Networking; CIA; Confidentiality; Integrity; Availability; PII; Social Networking Service; SNS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>In this paper, the concept of security and privacy in social media, or social networking, is discussed. First, a brief history and the concept of social networking are introduced. Many of the security risks associated with using social media are then presented, and the issue of privacy and how it relates to security is described. Based on these discussions, some solutions to improve a user’s privacy and security on social networks are suggested. Our research will help readers understand the security and privacy issues facing social network users, and some steps which can be taken by both users and social network organizations to help improve security and privacy.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_2-Role_of_Security_in_Social_Networking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fruit Fly Optimization Algorithm for Network-Aware Web Service Composition in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070201</link>
        <id>10.14569/IJACSA.2016.070201</id>
        <doi>10.14569/IJACSA.2016.070201</doi>
        <lastModDate>2016-03-01T13:29:59.8500000+00:00</lastModDate>
        
        <creator>Umar SHEHU</creator>
        
        <creator>Ghazanfar SAFDAR</creator>
        
        <creator>Gregory EPIPHANIOU</creator>
        
        <subject>Web Services; Service Composition; QoS; Network Latency; Cloud; Fruit Fly Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(2), 2016</description>
        <description>Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service oriented applications. Web services are central to the concept of SOC. Currently, research into how web services can be composed to yield QoS-optimal composite services has gathered significant attention. However, the number and spread of web services across cloud data centers have increased, thereby increasing the impact of the network on the composite service performance experienced by the user. Recent QoS-based web service composition techniques focus on optimizing web service QoS attributes such as cost, response time, and execution time. In doing so, existing approaches do not separate the QoS of the network from web service QoS during service composition. In this paper, we propose a network-aware service composition approach which separates the QoS of the network from the QoS of web services in the Cloud. Consequently, our approach searches for composite services that are not only QoS-optimal but also have optimal network QoS. Our approach consists of a network model which estimates the QoS of the network in the form of network latency between services on the cloud, and a service composition technique based on the fruit fly optimization algorithm which leverages the network model to search for low-latency compositions without compromising service QoS levels. The approach is discussed and the results of its evaluation are presented. The results indicate that the proposed approach is competitive in finding QoS-optimal and low-latency solutions when compared to recent techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No2/Paper_1-Fruit_Fly_Optimization_Algorithm_for_Network_Aware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Network Reconfiguration with Distributed Generation Using NSGA II Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050202</link>
        <id>10.14569/IJARAI.2016.050202</id>
        <doi>10.14569/IJARAI.2016.050202</doi>
        <lastModDate>2016-02-10T11:43:55.8100000+00:00</lastModDate>
        
        <creator>Jasna Hivziefendic</creator>
        
        <creator>Amir Hadžimehmedovic</creator>
        
        <creator>Majda Tešanovic</creator>
        
        <subject>radial distribution network; distributed generation; genetic algorithms; NSGA II; loss reduction</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(2), 2016</description>
        <description>This paper presents a method to solve electrical network reconfiguration problem in the presence of distributed generation (DG) with an objective of minimizing real power loss and energy not supplied function in distribution system. A method based on NSGA II multi-objective algorithm is used to simultaneously minimize two objective functions and to identify the optimal distribution network topology. The constraints of voltage and branch current carrying capacity are included in the evaluation of the objective function. The method has been tested on radial electrical distribution network with 213 nodes, 248 lines and 72 switches. Numerical results are presented to demonstrate the performance and effectiveness of the proposed methodology.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No2/Paper_2-Optimal_Network_Reconfiguration_with_Distributed_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of the Implementation of Practice Teaching Program for Prospective Teachers at Ganesha University of Education Based on CIPP-Forward Chaining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050201</link>
        <id>10.14569/IJARAI.2016.050201</id>
        <doi>10.14569/IJARAI.2016.050201</doi>
        <lastModDate>2016-02-10T11:43:55.6670000+00:00</lastModDate>
        
        <creator>I Putu Wisna Ariawan</creator>
        
        <creator>Dewa Bagus Sanjaya</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Evaluation; Practice Teaching Program; CIPP; Forward Chaining; Expert System</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(2), 2016</description>
        <description>The status of teachers is highly recognized, and this recognition is accompanied by the requirement of a high level of competence, so the existence of teachers has to receive serious attention, beginning with the preparation of prospective teachers. Ganesha University of Education (Undiksha), one of the public universities in Indonesia, is authorized by the government to train prospective teachers. To produce quality prospective teachers who meet this requirement, Undiksha requires all education students who will become prospective teachers to take the Practice Teaching Program (PPL-Real). However, its implementation has not yet been effective, so there is a need to evaluate the implementation of the Practice Teaching Program for prospective teachers at Undiksha. The technique used is the CIPP model combined with the Forward Chaining method, one of the inference strategies of expert systems. Based on the context, input, process, and product components, the implementation of the Practice Teaching Program (PPL-Real) for the education students of Undiksha in 2015 falls into the effective classification.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No2/Paper_1-An_Evaluation_of_the_Implementation_of_Practice_Teaching_Program.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MAI and Noise Constrained LMS Algorithm for MIMO CDMA Linear Equalizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070195</link>
        <id>10.14569/IJACSA.2016.070195</id>
        <doi>10.14569/IJACSA.2016.070195</doi>
        <lastModDate>2016-02-03T09:04:26.6200000+00:00</lastModDate>
        
        <creator>Khalid Mahmood</creator>
        
        <creator>Syed Muhammad Asad</creator>
        
        <creator>Muhammad Moinuddin</creator>
        
        <creator>Waqas Imtiaz</creator>
        
        <subject>Least mean squared (LMS); multiple input multiple output (MIMO); linear equalizer; multiple access interference (MAI); variance; AWGN; adaptive algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents a constrained least mean squared (LMS) algorithm for a MIMO CDMA linear equalizer, which is constrained on the spreading sequence length, the number of subscribers, the variance of the Gaussian noise, and the multiple access interference (MAI) plus additive noise (introduced as a new constraint). The novelty of the proposed algorithm is that the MAI and MAI-plus-noise variances have never before been used as constraints in MIMO CDMA systems. Convergence analysis is performed for the proposed algorithm in the case when the statistics of the MAI and of the MAI plus noise are available. Simulation results compare the performance of the proposed constrained algorithm with other constrained algorithms and demonstrate that the new algorithm outperforms the existing ones.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_95-MAI_and_Noise_Constrained_LMS_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Propagation Analysis and Visualization using Percolation Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070194</link>
        <id>10.14569/IJACSA.2016.070194</id>
        <doi>10.14569/IJACSA.2016.070194</doi>
        <lastModDate>2016-02-01T14:06:14.0870000+00:00</lastModDate>
        
        <creator>Sandra Konig</creator>
        
        <creator>Stefan Rass</creator>
        
        <creator>Stefan Schauer</creator>
        
        <creator>Alexander Beck</creator>
        
        <subject>security operation center; malware infection; percolation; BYOD; risk propagation; visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This article presents a percolation-based approach for the analysis of risk propagation, using malware spreading as a showcase example. Conventional risk management is often driven by human (subjective) assessment of how one risk influences another, or of how security incidents can affect subsequent problems in interconnected (sub)systems of an infrastructure. Using percolation theory, a well-established methodology in the fields of epidemiology and disease spreading, a simple simulation-based method is described to assess risk propagation systematically. This simulation is formally analyzed using percolation theory to obtain closed-form criteria that help predict a pandemic incident propagation (or a propagation with average-case bounded implications). The method is designed as a security decision support tool, e.g., to be used in security operation centers. For that matter, a flexible visualization technique is devised, which is naturally induced by the percolation model and the simulation algorithm that derives from it. The main output of the model is a graphical visualization of the infrastructure (physical or logical topology). This representation uses color codes to indicate the likelihood of problems arising from a security incident that initially occurs at a given point in the system. Large likelihoods of problems thus indicate “hotspots”, where additional action should be taken.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_94-Risk_Propagation_Analysis_and_Visualization_using_Percolation_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Sign Detection and Recognition using Features Combination and Random Forests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070193</link>
        <id>10.14569/IJACSA.2016.070193</id>
        <doi>10.14569/IJACSA.2016.070193</doi>
        <lastModDate>2016-02-01T14:06:14.0530000+00:00</lastModDate>
        
        <creator>Ayoub ELLAHYANI</creator>
        
        <creator>Mohamed EL ANSARI</creator>
        
        <creator>Ilyas EL JAAFARI</creator>
        
        <creator>Said CHARFI</creator>
        
        <subject>Traffic Sign Recognition (TSR); thresholding; Hue Saturation and Value (HSV); Histogram of Oriented Gradients (HOG); Gabor; Local Binary Pattern (LBP); Local Self-Similarity (LSS); Random forests</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In this paper, we present a computer vision based system for fast and robust Traffic Sign Detection and Recognition (TSDR), consisting of three steps. The first step consists of image enhancement and thresholding using the three components of the Hue Saturation and Value (HSV) space. Then we use the distance-to-border feature and a Random Forests classifier to detect circular, triangular and rectangular shapes in the segmented images. The last step consists of identifying the information contained in the detected traffic signs. We compare four feature descriptors: Histogram of Oriented Gradients (HOG), Gabor, Local Binary Pattern (LBP), and Local Self-Similarity (LSS). We also compare their different combinations. For the classifiers, we have carried out a comparison between Random Forests and Support Vector Machines (SVMs). The best results are given by the combination of HOG with LSS together with the Random Forests classifier. The proposed method has been tested on the Swedish Traffic Signs Dataset and gives satisfactory results.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_93-Traffic_Sign_Detection_and_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Single-Handed Cursor Control Technique Optimized for Rear Touch Operation and Its Usability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070192</link>
        <id>10.14569/IJACSA.2016.070192</id>
        <doi>10.14569/IJACSA.2016.070192</doi>
        <lastModDate>2016-02-01T14:06:14.0230000+00:00</lastModDate>
        
        <creator>Yoshikazu Onuki</creator>
        
        <creator>Itsuo Kumazawa</creator>
        
        <subject>Rear touch; cursor control; mobile device; single-handed; Fitts’s law</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>To improve single-handed operation of mobile devices, a rear touch panel has potential for user interactions. In this paper, a basic study of operational control achieved simply through drag and tap of the index finger on a rear touch panel is conducted. Since a user has to hold the handheld device firmly with the thumb and fingers, the movable range of the tip of the index finger is limited. This restriction requires a user to perform several dragging actions to move the cursor to a distant target. Considering this kinematic restriction, a technique optimized for rear operation is proposed, wherein not only the position but also the velocity of the fingertip movement is regarded. The movement time, the number of dragging operations, and the throughput of the proposed technique have been evaluated in comparison with the generic technique using Fitts’s law. Experiments have been conducted in which ten participants performed target selection in the form of reciprocal 1D pointing tasks. The combinations of two ways of holding the device (landscape and portrait) and two directions of dragging (horizontal and vertical) are considered. As a result, the proposed technique achieved 5 to 13% shorter movement times, 20 to 40% higher throughput, and no deterioration in the number of dragging operations even for longer-distance targets. In addition, further analysis revealed that there exist advantageous combinations of the way of holding and the direction of dragging, which would be beneficial for better design of single-handed user interactions using rear touch.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_92-Single_Handed_Cursor_Control_Technique_Optimized.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resolution Method in Linguistic Propositional Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070191</link>
        <id>10.14569/IJACSA.2016.070191</id>
        <doi>10.14569/IJACSA.2016.070191</doi>
        <lastModDate>2016-02-01T14:06:13.9930000+00:00</lastModDate>
        
        <creator>Thi-Minh-Tam Nguyen</creator>
        
        <creator>Duc-Khanh Tran</creator>
        
        <subject>Resolution; Linguistic Truth Value; Linguistic Propositional Logic; Hedge Algebra</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The present paper focuses on the resolution method for a linguistic propositional logic whose truth values lie in a logical algebra, the refined hedge algebra. The preliminaries of refined hedge algebras are given first. Then the syntax and semantics of linguistic propositional logic are defined. Finally, a resolution method based on the resolution principle in two-valued logic is established. Accordingly, the research in this paper provides helpful support for the application of intelligent reasoning systems based on linguistic-valued logic that include incomparable information.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_91-Resolution_Method_in_Linguistic_Propositional_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile computation offloading architecture for mobile augmented reality, case study: Visualization of cetacean skeleton</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070190</link>
        <id>10.14569/IJACSA.2016.070190</id>
        <doi>10.14569/IJACSA.2016.070190</doi>
        <lastModDate>2016-02-01T14:06:13.9770000+00:00</lastModDate>
        
        <creator>Belen G. Rodriguez-Santana</creator>
        
        <creator>Amilcar Meneses Viveros</creator>
        
        <creator>Blanca Esther Carvajal-Gamez</creator>
        
        <creator>Diana Carolina Trejo-Osorio</creator>
        
        <subject>Mobile augmented reality; mobile devices; render; mobile computation offloading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Augmented reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help to provide tourist information on cities or to give information during visits to museums. For example, during visits to museums of natural history, augmented reality applications on mobile devices can be used by visitors to interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on devices with limited resources such as smartphones or tablets. One solution to this problem is to use mobile computation offloading techniques. This work proposes a mobile computation offloading architecture for mobile augmented reality. This solution allows users to interact with a whale skeleton through an augmented reality application on mobile devices. Finally, tests were performed to assess the optimization of the mobile device&#39;s resources when performing heavy rendering.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_90-Mobile_computation_offloading_architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Framework for e-Government adoption in Saudi Arabia: A Study from the business sector perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070189</link>
        <id>10.14569/IJACSA.2016.070189</id>
        <doi>10.14569/IJACSA.2016.070189</doi>
        <lastModDate>2016-02-01T14:06:13.9470000+00:00</lastModDate>
        
        <creator>Saleh Alghamdi</creator>
        
        <creator>Natalia Beloff</creator>
        
        <subject>E-Government; E-Services; Saudi Arabia; Technology Adoption; Influential Factors; Users’ Intention; Business Sector Perspective</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>E-Government increases transparency and improves communication between the government and its users. Providing e-Government services to the business sector is a fundamental mission of governmental agencies in Saudi Arabia. However, the adoption of e-Government systems is less than satisfactory in many countries, particularly in developing countries. This is a significant factor that can lead to e-Government failure and, therefore, to wasted budget and effort. One pertinent, unanswered question is: what are the key factors that influence the adoption and utilisation level of users from the business sector? Unlike much research in the literature that has utilised common technology acceptance models and theories to analyse the adoption of e-Government, which may not be sufficient for such analysis, this study proposes a conceptual framework following a holistic approach to analyse the key factors that influence the adoption and utilisation of e-Government in Saudi Arabia. The proposed framework, the E-Government Adoption and Utilisation Model (EGAUM), was developed based on a critical evaluation of several common models and theories related to technology acceptance and use, including the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), in conjunction with an analysis of the e-Government adoption literature. The study involved 48 participating business entities from two major cities in Saudi Arabia, Riyadh and Jeddah. The descriptive and statistical analyses are presented in this paper, and the results indicate that all the proposed factors have a degree of influence on the adoption and utilisation level. Perceived Benefits, Awareness, Previous Experience, and Regulations &amp; Policies were found to be the significant factors most likely to influence the adoption and usage level of users from the business sector.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_89-Innovative_Framework_for_eGovernment_adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Prototype Implementation of Digital Hearing Aid from Software to Complete Hardware Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070188</link>
        <id>10.14569/IJACSA.2016.070188</id>
        <doi>10.14569/IJACSA.2016.070188</doi>
        <lastModDate>2016-02-01T14:06:13.9130000+00:00</lastModDate>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Azhar Latif</creator>
        
        <creator>Liguo Sun</creator>
        
        <creator>Abdullah Buzdar</creator>
        
        <subject>Hearing Aid; FPGA; CODEC; MicroBlaze; Wavelets; Filter Banks; FFT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The design and implementation of digital hearing aids requires detailed knowledge of the various digital signal processing techniques used in hearing aids, such as Wavelet Transforms, uniform and non-uniform Filter Banks and the Fast Fourier Transform (FFT). In this paper, the design and development of the digital part of a hearing aid is divided into three phases. In the first phase, a review and Matlab simulation of the various signal processing techniques used in digital hearing aids is presented. In the second phase, a software implementation was carried out and the firmware was designed for the Xilinx MicroBlaze softcore processor system. In the third phase, everything was moved into hardware using the VHDL hardware description language. The implementation was done on a Xilinx Field Programmable Gate Array (FPGA) Development Board.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_88-FPGA_Prototype_Implementation_of_Digital_Hearing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Faster Scalar Multiplication Algorithm to Implement a Secured Elliptic Curve Cryptography System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070187</link>
        <id>10.14569/IJACSA.2016.070187</id>
        <doi>10.14569/IJACSA.2016.070187</doi>
        <lastModDate>2016-02-01T14:06:13.9000000+00:00</lastModDate>
        
        <creator>Fatema Akhter</creator>
        
        <subject>Cryptography; Elliptic curve cryptography; Scalar multiplication; Random walk; Elliptic curve discrete logarithm problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Elliptic Curve Cryptography provides a similar strength of protection to other public key cryptosystems but requires a significantly smaller key size. This paper proposes a new, faster scalar multiplication algorithm aiming at a more secure Elliptic Curve Cryptography scheme. It also proposes a novel Elliptic Curve Cryptography scheme in which a maximum-length random sequence generation method is utilized as the data mapping technique onto an elliptic curve over a finite field. The proposed scheme is tested on various prime field bit lengths and key sizes. The numerical experiments demonstrate that the proposed scheme reduces computation time compared to the conventional scheme and shows very high strength against cryptanalytic attacks, particularly the random walk attack.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_87-Faster_Scalar_Multiplication_Algorithm_to_Implement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Translation of the Mutation Operator from Genetic Algorithms to Evolutionary Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070186</link>
        <id>10.14569/IJACSA.2016.070186</id>
        <doi>10.14569/IJACSA.2016.070186</doi>
        <lastModDate>2016-02-01T14:06:13.8830000+00:00</lastModDate>
        
        <creator>Diana Contras</creator>
        
        <creator>Oliviu Matei</creator>
        
        <subject>Evolutionary ontologies; Genetic algorithms; Mutation; Ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Recently introduced, evolutionary ontologies represent a new concept combining genetic algorithms and ontologies. We have defined a new framework comprising the set of parameters required for any evolutionary algorithm, i.e. the ontological space, the representation of individuals, and the main genetic operators such as selection, crossover, and mutation. Although a secondary operator, mutation proves its importance in creating and maintaining the diversity of evolutionary ontologies. Therefore, in this article, we discuss the mutation topic in evolutionary ontologies in depth, demonstrating its usefulness in practice through experimental results. We also introduce a new mutation operator, called relational mutation, which mutates a relationship through its inverse.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_86-Translation_of_the_Mutation_Operator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complex-Valued Neural Networks Training: A Particle Swarm Optimization Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070185</link>
        <id>10.14569/IJACSA.2016.070185</id>
        <doi>10.14569/IJACSA.2016.070185</doi>
        <lastModDate>2016-02-01T14:06:13.8670000+00:00</lastModDate>
        
        <creator>Mohammed E. El-Telbany</creator>
        
        <creator>Samah Refat</creator>
        
        <subject>Particle Swarm Optimization; Complex-Valued Neural Networks; QSAR; Drug Design; prediction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>QSAR (Quantitative Structure-Activity Relationship) modelling is one of the well-developed areas in drug development through computational chemistry. The relationship between molecular structure and change in biological activity is the central focus of QSAR modelling. Machine learning algorithms are important tools for QSAR analysis and, as a result, have been integrated into the drug production process. In this paper, we address the problem of learning Complex-Valued Neural Networks (CVNNs) using Particle Swarm Optimization (PSO), which is one of the open topics in the machine learning community; CVNNs are more complicated for complex-valued data processing due to constraints such as the activation function having to be bounded and differentiable over the complete complex space. A CVNN model for real-valued regression problems is presented. We tested the trained CVNN on two drug sets as a real-world benchmark problem. The results show that the prediction and generalization abilities of CVNNs are superior in comparison to conventional real-valued neural networks (RVNNs). Moreover, the convergence of CVNNs is much faster than that of RVNNs in most cases.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_85-Complex_Valued_Neural_Networks_Training.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying data mining in the context of Industrial Internet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070184</link>
        <id>10.14569/IJACSA.2016.070184</id>
        <doi>10.14569/IJACSA.2016.070184</doi>
        <lastModDate>2016-02-01T14:06:13.8530000+00:00</lastModDate>
        
        <creator>Oliviu Matei</creator>
        
        <creator>Kevin Nagorny</creator>
        
        <creator>Karsten Stoebener</creator>
        
        <subject>machine learning; data mining; k-nearest neighbour; neural network; support vector machine; rule induction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Nowadays, (industrial) companies invest more and more in connecting with their clients and with the machines deployed to those clients. Mining all the collected data raises several technical challenges, but doing so yields a great deal of insight useful for improving equipment. We define two approaches to mining the data in the context of the Industrial Internet, applied to one of the leading companies in shoe production lines but easily extendible to any producer. For each approach, various machine learning algorithms are applied along with a voting system. This leads to a robust model that is easy to adapt for any machine.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_84-Applying_data_mining_in_the_context_of_Industrial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Method for Distributing Animated Slides of Web Presentations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070183</link>
        <id>10.14569/IJACSA.2016.070183</id>
        <doi>10.14569/IJACSA.2016.070183</doi>
        <lastModDate>2016-02-01T14:06:13.8370000+00:00</lastModDate>
        
        <creator>Yusuke Niwa</creator>
        
        <creator>Shun Shiramatsu</creator>
        
        <creator>Tadachika Ozono</creator>
        
        <creator>Toramatsu Shintani</creator>
        
        <subject>Collaborative tools; communication aids; information sharing; Web services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Attention control of the audience is required for successful presentations; therefore, giving a presentation with immediate reaction, called a reactive presentation, to unexpected changes in the context given by the audience is important. Examples of functions for the reactive presentation are shape animation effects on slides and slide transition effects. Understanding the functions that realize the reactive presentation on the Web can be useful. In this work, we present an effective method for synchronizing shape animation effects on the Web, such as moving objects and changing the size and color of shape objects. The main idea is to make a video of animated slides, called Web Slide Media, including the page information of the slides as movie chapter information for synchronization. Moreover, we explain a method to reduce the file size of the Web Slide Media by removing all shape animation effects and slide transition effects from a Web Slide Media item, called Sparse Web Slide Media. We demonstrate that the performance of the system is sufficient for practical use and that the file size of the Sparse Web Slide Media is smaller than that of the Web Slide Media.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_83-An_Efficient_Method_for_Distributing_Animated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Robust Hash Function Using Cross-Coupled Chaotic Maps with Absolute-Valued Sinusoidal Nonlinearity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070182</link>
        <id>10.14569/IJACSA.2016.070182</id>
        <doi>10.14569/IJACSA.2016.070182</doi>
        <lastModDate>2016-02-01T14:06:13.8070000+00:00</lastModDate>
        
        <creator>Wimol San-Um</creator>
        
        <creator>Warakorn Srichavengsup</creator>
        
        <subject>Hash Function; Cross-Coupled Chaotic Map; Sinusoidal Nonlinearity; Information security; Authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents a compact and effective chaos-based keyed hash function implemented by a cross-coupled topology of chaotic maps, which employs an absolute value of sinusoidal nonlinearity and offers robust chaotic regions over broad parameter spaces with a high degree of randomness, verified through chaoticity measurements using the Lyapunov exponent. Hash function operations involve an initial stage, when the chaotic map accepts initial conditions, and a hashing stage, which accepts input messages and generates the alterable-length hash values. Hashing performance is evaluated in terms of original message condition changes, statistical analyses, and collision analyses. The results show that the mean changed probabilities are very close to 50%, and the mean number of bit changes is also close to half of the hash value length. The collision tests reveal that the mean absolute difference of character values for hash values of 128, 160 and 256 bits is close to the ideal value of 85.43. The proposed keyed hash function enhances collision resistance compared to MD5 and SHA1, as well as to other, more complicated chaos-based approaches. An Android application implementing the hash function is demonstrated.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_82-A_Robust_Hash_Function_Using_Cross_Coupled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for On-road Vehicle Detection and Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070181</link>
        <id>10.14569/IJACSA.2016.070181</id>
        <doi>10.14569/IJACSA.2016.070181</doi>
        <lastModDate>2016-02-01T14:06:13.7730000+00:00</lastModDate>
        
        <creator>Ilyas EL JAAFARI</creator>
        
        <creator>Mohamed EL ANSARI</creator>
        
        <creator>Lahcen KOUTTI</creator>
        
        <creator>Ayoub ELLAHYANI</creator>
        
        <creator>Said CHARFI</creator>
        
        <subject>Vehicle detection; Vehicle tracking; GIST; SVM; Edge features; Kalman filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Driven by the necessary development of road safety, vision-based vehicle detection techniques have gained a significant amount of attention. This work presents a novel vehicle detection and tracking approach, structured as a process running from images or video data acquired from sensors installed on board the vehicle to vehicle detection and tracking. The features of the vehicle are extracted by the proposed GIST image processing algorithm and recognized by the state-of-the-art Support Vector Machine classifier. The tracking process is performed based on an edge-feature matching approach. The Kalman filter is used to correct the measurements. Extensive experiments carried out on real image data validate that the proposed approach is promising for on-road vehicle detection and tracking.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_81-A_Novel_Approach_for_On_road_Vehicle_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted Unsupervised Learning for 3D Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070180</link>
        <id>10.14569/IJACSA.2016.070180</id>
        <doi>10.14569/IJACSA.2016.070180</doi>
        <lastModDate>2016-02-01T14:06:13.7430000+00:00</lastModDate>
        
        <creator>Kamran Kowsari</creator>
        
        <creator>Manal H. Alassaf</creator>
        
        <subject>Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper introduces a novel weighted unsupervised learning method for object detection using an RGB-D camera. This technique is feasible for detecting moving objects in noisy environments captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm that detects each object as a separate cluster using weighted clustering. In a preprocessing step, the algorithm calculates the 3D position X, Y, Z and the RGB color of each data point, and then calculates each data point’s normal vector using the point’s neighbors. After preprocessing, our algorithm calculates k weights for each data point, where each weight indicates cluster membership, resulting in clustered objects of the scene.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_80-Weighted_Unsupervised_Learning_for_3D_Object.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Verification of Statecharts Using Data Abstraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070179</link>
        <id>10.14569/IJACSA.2016.070179</id>
        <doi>10.14569/IJACSA.2016.070179</doi>
        <lastModDate>2016-02-01T14:06:13.7270000+00:00</lastModDate>
        
        <creator>Steffen Helke</creator>
        
        <creator>Florian Kammuller</creator>
        
        <subject>Statecharts; CTL; Data Abstraction; Model Checking; Theorem Proving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>We present an approach for verifying Statecharts including infinite data spaces. We devise a technique for checking that a formula of the universal fragment of CTL is satisfied by a specification written as a Statechart. The approach is based on a property-preserving abstraction technique that additionally preserves structure. It is prototypically implemented in a logic-based framework using a theorem prover and a model checker. This paper reports on the following results. (1) We present a proof infrastructure for Statecharts in the theorem prover Isabelle/HOL, which constitutes a basis for defining a mechanised data abstraction process. The formalisation is based on Hierarchical Automata (HA), which allow a structural decomposition of Statecharts into Sequential Automata. (2) Based on this theory, we introduce a data abstraction technique that can be used to abstract the data space of an HA for a given abstraction function. The technique is based on constructing over-approximations. It is structure-preserving and is designed in a compositional way. (3) For reasons of practicability, we finally present two tactics supporting the abstraction, which we have implemented in Isabelle/HOL. To make proofs more efficient, these tactics use the model checker SMV to check abstract models automatically.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_79-Verification_of_Statecharts_Using_Data_Abstraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal and Implementation of MPLS Fuzzy Traffic Monitor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070178</link>
        <id>10.14569/IJACSA.2016.070178</id>
        <doi>10.14569/IJACSA.2016.070178</doi>
        <lastModDate>2016-02-01T14:06:13.6970000+00:00</lastModDate>
        
        <creator>Anju Bhandari</creator>
        
        <creator>V.P. Singh</creator>
        
        <subject>Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Multiprotocol Label Switched Networks need highly intelligent controls to manage high-volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building an optimal fuzzy-based algorithm for traffic splitting and congestion avoidance. The design and implementation of fuzzy-based software-defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Compared to the default MPLS implementation, the results show improvements in mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic, in mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic, and in mean delay (44.9%) and mean loss rate (4.1%) for Voice Traffic.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_78-Proposal_and_Implementation_of_MPLS_Fuzzy_Traffic_Monitor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VLSI Design of a High Performance Decimation Filter Used for Digital Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070177</link>
        <id>10.14569/IJACSA.2016.070177</id>
        <doi>10.14569/IJACSA.2016.070177</doi>
        <lastModDate>2016-02-01T14:06:13.6630000+00:00</lastModDate>
        
        <creator>Radhouane LAAJIMI</creator>
        
        <creator>Ali AJMI</creator>
        
        <creator>Randa KHEMIRI</creator>
        
        <creator>Mohsen Machout</creator>
        
        <subject>Digital circuit design; CIC decimation; Cascaded integrator comb filter (CIC); IIR-FIR structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>With the rapid development of computers and communications, more and more chips are required to have small size, low power consumption and high performance. The digital filter is one of the basic building blocks used in Very Large Scale Integration (VLSI) implementations of mixed-signal circuits. This paper presents the design of a decimation filter used for digital filtering. It consists of Cascaded Integrator Comb (CIC) filters, using a Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filter structure. This architecture provides small area and low power consumption by avoiding the use of multiplication structures. The design demonstrates a way of speeding up the route from the theoretical design in Simulink/Matlab, via behavioral simulation in fixed-point arithmetic, to implementation on an ASIC. This has been achieved by porting the netlist of the Simulink system description into the Very high speed integrated circuit Hardware Description Language (VHDL). In the first instance, the Simulink-to-VHDL converter was designed to use structural VHDL code to describe system interconnections, allowing simple behavioral descriptions for basic blocks. A comparison of several architectures of this circuit, based on different architectures of the most popular filters, is presented. The comparison includes supply voltage, power consumption, area and technology. This approach consumes only 2.94 mW of power at a supply voltage of 3V. The core chip size of the filter block without bonding pads is 0.058 mm2 using the AMS 0.35 &#181;m CMOS technology.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_77-VLSI_Design_of_a_High_Performance_Decimation_Filter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Mental Health Problems Among Children Using Machine Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070176</link>
        <id>10.14569/IJACSA.2016.070176</id>
        <doi>10.14569/IJACSA.2016.070176</doi>
        <lastModDate>2016-02-01T14:06:13.6500000+00:00</lastModDate>
        
        <creator>Ms. Sumathi M.R.</creator>
        
        <creator>Dr. B. Poorna</creator>
        
        <subject>Mental Health Diagnosis; Machine Learning; Prediction; Feature Selection; Basic Mental Health Problems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Early diagnosis of mental health problems helps professionals to treat them at an earlier stage and improves the patients’ quality of life. There is thus an urgent need to treat the basic mental health problems that prevail among children, which may lead to complicated problems if not treated at an early stage. Machine learning techniques are currently well suited for analyzing medical data and diagnosing such problems. This research has identified eight machine learning techniques and compared their performance on different measures of accuracy in diagnosing five basic mental health problems. A data set consisting of sixty cases was collected for training and testing the performance of the techniques. Twenty-five attributes were identified from the documents as important for diagnosing the problem. The attributes were reduced by applying feature selection algorithms over the full attribute data set. The accuracy of the various machine learning techniques over the full attribute set and the selected attribute set has been compared. It is evident from the results that three classifiers, viz. the Multilayer Perceptron, the Multiclass Classifier and the LAD Tree, produced more accurate results, and there is only a slight difference between their performance over the full attribute set and the selected attribute set.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_76-Prediction_of_Mental_Health_Problems_Among_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition Based on Improved SIFT Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070175</link>
        <id>10.14569/IJACSA.2016.070175</id>
        <doi>10.14569/IJACSA.2016.070175</doi>
        <lastModDate>2016-02-01T14:06:13.6330000+00:00</lastModDate>
        
        <creator>EHSAN SADEGHIPOUR</creator>
        
        <creator>NASROLLAH SAHRAGARD</creator>
        
        <subject>face detection; improved SIFT descriptor; KGWRCM; GPCA; GLDA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>People are usually identified by their faces. Developments in the past few decades have enabled humans to automate the identification process. Now, the face recognition process employs advanced statistical science and matching methods. Improvements and innovations in face recognition technology during the past 10 to 15 years have propelled it to its current status. Due to the wide application of face recognition algorithms in many practical systems, including security control systems and human–computer interaction systems, algorithms with a high success rate have attracted strong research interest in recent years. Most of the suggested algorithms are about correctly identifying face photos and assigning them to a person in the database. This study focuses on face recognition based on an improved SIFT algorithm. Results indicate the superiority of the proposed algorithm over SIFT. To evaluate the proposed algorithm, it is applied to the ORL database and then compared to other face detection algorithms including Gabor, GPCA, GLDA, LBP, GLDP, KGWRCM, and SIFT. The results obtained from various tests show that the proposed algorithm achieves an accuracy of 98.75% and a shorter run time of 4.3 seconds. The new improved algorithm is more efficient and more accurate than the other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_75-Face_Recognition_Based_on_Improved_SIFT_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power-Controlled Data Transmission in Wireless Ad-Hoc Networks: Challenges and Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070174</link>
        <id>10.14569/IJACSA.2016.070174</id>
        <doi>10.14569/IJACSA.2016.070174</doi>
        <lastModDate>2016-02-01T14:06:13.6030000+00:00</lastModDate>
        
        <creator>Bilgehan Berberoglu</creator>
        
        <creator>Taner Cevik</creator>
        
        <subject>ad-hoc networks; energy conservation; power control; throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Energy scarcity and interference are two important factors determining the performance of wireless ad-hoc networks that should be considered in depth. A promising method of achieving energy conservation is transmission power control. Transmission power control also contributes to the mitigation of interference and thereby promotes throughput by allowing multiple hosts to communicate in the same neighborhood simultaneously without impairing each other’s transmissions. However, as identified previously in the literature, the traditional hidden terminal problem is exacerbated when a transmission power control mechanism is applied. In this article, we discuss the primary details of the power usage and throughput deficiency of the traditional 802.11 RTS/CTS mechanism. Improvements by means of power control are introduced, as well as solutions to the challenges likely to emerge from the use of diverse power levels throughout the network.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_74-Power_Controlled_Data_Transmission_in_Wireless_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semi-Automatic Segmentation System for Syllables Extraction from Continuous Arabic Audio Signal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070173</link>
        <id>10.14569/IJACSA.2016.070173</id>
        <doi>10.14569/IJACSA.2016.070173</doi>
        <lastModDate>2016-02-01T14:06:13.5700000+00:00</lastModDate>
        
        <creator>Mohamed S. Abdo</creator>
        
        <creator>Ahmed H. Kandil</creator>
        
        <subject>Arabic speech syllables; automatic segmentation; boundaries detection; delta-MFCC features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The paper describes a speaker-independent segmentation system for breaking uttered Arabic sentences into their constituent syllables. The goal is to construct a database of acoustic Arabic syllables as a step towards a syllable-based Arabic speech verification/recognition system. The proposed technique segments the utterances based on maxima extracted from the delta function of the 1st MFC coefficient. The method locates syllable boundaries by applying template matching against reference utterances. The system was applied to a data set of 276 utterances to segment them into their 2544 constituent syllables, reaching a segmentation success rate of about 91.5%.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_73-Semi_Automatic_Segmentation_System_for_Syllables_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Non Correlation DWT Based Watermarking Behavior in Different Color Spaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070172</link>
        <id>10.14569/IJACSA.2016.070172</id>
        <doi>10.14569/IJACSA.2016.070172</doi>
        <lastModDate>2016-02-01T14:06:13.5400000+00:00</lastModDate>
        
        <creator>Mehdi Khalili</creator>
        
        <creator>Mahsa Nazari</creator>
        
        <subject>ATM; CCM; DWT2; color spaces; non correlation watermarking technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Digital watermarking techniques fall into two categories: those based on correlation and those not based on correlation. In previous work, we proposed a DWT2-based CDMA image watermarking scheme to study the effects of eight color spaces (RGB, YCbCr, JPEG-YCbCr, YIQ, YUV, HSI, HSV and CIELab) on correlation-based watermarking algorithms. This paper proposes a non-correlation-based image watermarking scheme in the wavelet transform domain and tests it in the same color spaces, in order to extend that study, reach a comprehensive analysis, and focus on satisfying the requirements of non-correlation-based watermarking algorithms. To achieve greater security, imperceptibility and robustness in the proposed scheme, the binary watermark image is first encoded by applying ATM, CCM and exclusive OR. The scrambled watermark is then embedded into the intended quantized approximation coefficients of the wavelet transform by the LSB insertion technique.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_72-Non_Correlation_DWT_Based_Watermarking_Behavior_in_Different_Color_Spaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Framework for Content Search Using Small World Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070171</link>
        <id>10.14569/IJACSA.2016.070171</id>
        <doi>10.14569/IJACSA.2016.070171</doi>
        <lastModDate>2016-02-01T14:06:13.5100000+00:00</lastModDate>
        
        <creator>Seyyed-Mohammad Javadi-Moghaddam</creator>
        
        <creator>Stefanos Kollias</creator>
        
        <subject>Small-world networks; distributed multimedia model; ontology; community; fuzzy similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The continuous growth of multimedia content available all over the web is raising the importance of a distributed framework for searching it. One of the important parameters in a distributed environment is system response time, which plays an especially important role in search and retrieval. A novel two-tier structure is introduced in this paper, which focuses on the community concept to facilitate the creation of ontological small worlds that can effectively assist the search task. As a result, user queries are forwarded to nodes that are likely to contain the relevant resources. Evaluation of the framework shows that the small-world character of the proposed structure provides queries with better route selection and searching efficiency.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_71-A_Distributed_Framework_for_Content_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Lightweight Encryption Standard (HLES) as an Improvement of 512-Bit AES for Secure Multimedia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070170</link>
        <id>10.14569/IJACSA.2016.070170</id>
        <doi>10.14569/IJACSA.2016.070170</doi>
        <lastModDate>2016-02-01T14:06:13.4770000+00:00</lastModDate>
        
        <creator>GUESMIA Seyf Eddine</creator>
        
        <creator>ASSAS Ouarda</creator>
        
        <creator>BOUDERAH Brahim</creator>
        
        <subject>Advanced Encryption Standard (AES); Encryption; multimedia data; security; resource-limited systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In today’s scenario, people frequently share information with one another over networks. Much of this information is private to varying degrees, and attackers and hackers have taken advantage of this, attempting to steal such information since 2001. The symmetric encryption algorithm known as 512-bit AES provides a high level of security, but it is almost impossible to use in multimedia transmissions and mobile systems because of its need for a larger design area, which results in the use of a large memory space in each round, and because of the long encryption time it takes. This paper presents an improvement of the 512-bit AES algorithm with efficient utilization of resources such as processor and memory space. The proposed approach resists linear and differential cryptanalysis and provides a high security level using a 512-bit key block and data block, while improving performance by minimizing memory usage and encryption time so that it can work within the specific characteristics of resource-limited systems. The experimental results on several types of data (text, image, sound, video) show that the memory space used is reduced to a quarter, and the encryption time is reduced by almost half. Therefore, the adopted method is very effective for the encryption of multimedia data.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_70-High_Lightweight_Encryption_Standard_HLES_as_an_Improvement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multipath Lifetime-Prolonging Routing Algorithm for Wireless Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070169</link>
        <id>10.14569/IJACSA.2016.070169</id>
        <doi>10.14569/IJACSA.2016.070169</doi>
        <lastModDate>2016-02-01T14:06:13.4470000+00:00</lastModDate>
        
        <creator>Mohamed Amine RIAHLA</creator>
        
        <creator>Karim TAMINE</creator>
        
        <subject>Mobile Multi Agent System; Ad hoc Network Lifetime; Ant Routing Protocol; Distributed Algorithm; Network Congestion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Dynamic networks can be tremendously challenging when deploying distributed applications on autonomous machines. Further, implementing services like routing and security for such networks is generally difficult and problematic. Consequently, multi-agent systems are well suited for designing distributed systems in which several autonomous agents interact or work together to perform a set of tasks or satisfy a set of goals, moving the problem of analysis from a global level to a local level and therefore reducing design complexity. In our previous paper, we presented a multi-agent system model that was adapted to develop a routing protocol for ad hoc networks. Wireless ad hoc networks are infrastructureless networks comprising wireless mobile nodes that are able to communicate with each other even when outside each other’s direct wireless transmission range. Due to frequent network topology changes, limited energy and limited bandwidth, routing becomes a challenging task. In this paper, we present a new version of a routing algorithm devoted to mobile ad hoc networks. Our new algorithm helps control network congestion and increase network lifetime by effectively managing node energy and link cost. The performance of the new version is validated through simulation. The simulation results show the effectiveness and efficiency of our new algorithm compared to state-of-the-art solutions in terms of various performance metrics.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_69-A_Multipath_Lifetime_Prolonging_Routing_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Lockable Units to Improve Data Availability in a Distributed Database System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070168</link>
        <id>10.14569/IJACSA.2016.070168</id>
        <doi>10.14569/IJACSA.2016.070168</doi>
        <lastModDate>2016-02-01T14:06:13.4170000+00:00</lastModDate>
        
        <creator>Khaled Maabreh</creator>
        
        <subject>Granularity hierarchy tree; Lockable unit; Locks; Attribute level; Concurrency control; Data availability; Replication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Distributed database systems have become widespread and are considered a crucial source of information for numerous users. Users with different jobs use such systems locally or via the Internet to meet their professional requirements. Distributed database systems consist of a number of sites connected over a computer network. Each site deals with its own database and interacts with other sites as needed. Data replication in these systems is considered a key factor in improving data availability. However, it may affect system performance when most of the transactions that access the data contain write or a mix of read and write operations, because of exclusive locks and update propagation. This research proposes a new adaptive approach for increasing the availability of data contained in a distributed database system. The proposed approach introduces a new lockable unit by extending the database granularity hierarchy tree by one level to include attributes as lockable units instead of the entire row. This technique may allow several transactions to access a database row simultaneously, each utilizing some attributes while keeping others available for other transactions. Data in a distributed database system can be accessed locally or remotely by a distributed transaction, with each distributed transaction decomposed into several sub-transactions called participants or agents. These agents access the data at multiple sites and must guarantee that any changes to the data are committed in order to complete the main transaction. The experimental results show that using attribute-level locking increases data availability, reliability, and throughput, and enhances overall system performance. However, it also increases the overhead of managing the larger number of locks, which are managed according to the qualification of the query.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_68-Adaptive_Lockable_Units_to_Improve_Data_Availability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Energy Detection Spectrum Sensing of Cognitive Radio Under Wireless Environment Using SEAMCAT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070167</link>
        <id>10.14569/IJACSA.2016.070167</id>
        <doi>10.14569/IJACSA.2016.070167</doi>
        <lastModDate>2016-02-01T14:06:13.3830000+00:00</lastModDate>
        
        <creator>A.S. Kang</creator>
        
        <creator>Renu Vig</creator>
        
        <creator>Jasvir Singh</creator>
        
        <creator>Jaisukh Paul Singh</creator>
        
        <subject>Cognitive Radio; Primary User; Secondary User; Detection Threshold; Interference Probability; Energy Detection; Desired/interfering/sensing received signal strength</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In recent years, Cognitive Radio technology has established itself as a good solution for enhancing the utilization of unused spectrum and opening the radio environment to different band users that utilize or require different transmission techniques. In this paper, the energy detection spectrum sensing technique, which is used to detect the presence of an unknown deterministic signal, is studied under a non-time-dispersive fading environment using the Hata propagation model for picocell communication systems. The effects of non-time-dispersive fading regions on energy detection spectrum sensing, as well as the impact of changing the detection threshold of the secondary user Cognitive Radio on interference at the primary user under non-cooperative spectrum access, have been studied in terms of probability of interference. The entire comparative analysis of spectrum sensing in Cognitive Radio has been carried out with the aid of the SEAMCAT software platform.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_67-Comparatative_Analysis_of_Energy_Detection_Spectrum_Sensing_of_Cognitive_Radio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Optimization Algorithm to the Analyses of Diabetes Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070166</link>
        <id>10.14569/IJACSA.2016.070166</id>
        <doi>10.14569/IJACSA.2016.070166</doi>
        <lastModDate>2016-02-01T14:06:13.3530000+00:00</lastModDate>
        
        <creator>M. Anusha</creator>
        
        <creator>Dr. J.G.R. Sathiaseelan</creator>
        
        <subject>Clustering; Genetic Algorithm; Multi-objective Optimization; ECMO; Diabetes Disease</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>There is a huge amount of data available in the health industry that is difficult to handle, so data mining is necessary to uncover hidden patterns and their relevant features. Recently, many researchers have devoted themselves to the study of using data mining for disease diagnosis. Mining bio-medical data is one of the predominant research areas in which evolutionary algorithms and clustering techniques are emphasized for diabetes disease diagnosis. Therefore, this research focuses on the application of an evolutionary clustering multi-objective optimization algorithm (ECMO) to analyze the data of patients suffering from diabetes. The main objectives of this work are to maximize cluster prediction accuracy and computational efficiency along with minimizing the cost of data clustering. The experimental results show that this application attained maximum accuracy on the Pima Indians Diabetes dataset from the UCI repository. In this way, by analyzing the three objectives, ECMO could achieve the best Pareto fronts.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_66-Multi_Objective_Optimization_Algorithm_to_the_Analyses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Accreditation System: A Survey of the Issues, Challenges, and Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070165</link>
        <id>10.14569/IJACSA.2016.070165</id>
        <doi>10.14569/IJACSA.2016.070165</doi>
        <lastModDate>2016-02-01T14:06:13.3370000+00:00</lastModDate>
        
        <creator>Fahim Akhter</creator>
        
        <creator>Yasser Ibrahim</creator>
        
        <subject>Challenges of Accreditation Process; Intelligent Accreditation System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>International educational institutes aim to be accredited by local and international accreditation agencies, such as the Association to Advance Collegiate Schools of Business (AACSB) and the Accreditation Board for Engineering and Technology (ABET), in order to be recognized by stakeholders. The institutes strive to meet stakeholders’ expectations by integrating quality into all standards of educational practice and guaranteeing continuous improvement. This study has identified the principal barriers that need to be addressed and resolved, such as the collection &amp; population of data, time constraints, compensation, and lack of guidance and expertise. A web-based survey was conducted to identify the obstacles and the respondents’ expectations in optimizing the accreditation process. This research proposes an Intelligent Web-Based Accreditation System (IWBAS) that addresses the above issues and streamlines the accreditation process.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_65-Intelligent_Accreditation_System_A_Survey_of_the_Issues.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation and Recognition of Handwritten Kannada Text Using Relevance Feedback and Histogram of Oriented Gradients – A Novel Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070164</link>
        <id>10.14569/IJACSA.2016.070164</id>
        <doi>10.14569/IJACSA.2016.070164</doi>
        <lastModDate>2016-02-01T14:06:13.3070000+00:00</lastModDate>
        
        <creator>Karthik S</creator>
        
        <creator>Srikanta Murthy K</creator>
        
        <subject>Optical character recognition; Histogram of oriented gradients; relevance feedback; segmentation; Support Vector Machine; handwritten Kannada documents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>India is a multilingual country with 22 official languages and more than 1600 languages in existence. Kannada, one of the official languages, is widely used in the state of Karnataka, whose population is over 65 million. It is a south Indian language and stands in 33rd position in the list of the most widely spoken languages across the world. However, the survey reveals that much more effort is required to develop a complete Optical Character Recognition (OCR) system for it. In this direction, the present research work sheds light on the development of a suitable methodology for building such an OCR. It is noted that the overall accuracy of an OCR system largely depends on the accuracy of the segmentation phase, so a robust and efficient segmentation method is desirable. In this paper, a method has been proposed for proper segmentation of the text to improve the performance of OCR at the later stages. In the proposed method, segmentation is done using the horizontal projection profile and windowing. The result obtained is passed to the recognition module, where the Histogram of Oriented Gradients (HoG) is used in combination with a Support Vector Machine (SVM). The recognition result is taken as feedback and fed to the segmentation module to improve accuracy. The experimentation delivered promising results.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_64-Segmentation_and_Recognition_of_Handwritten_Kannada_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Privacy-Preserving Roaming Authentication Scheme for Ubiquitous Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070163</link>
        <id>10.14569/IJACSA.2016.070163</id>
        <doi>10.14569/IJACSA.2016.070163</doi>
        <lastModDate>2016-02-01T14:06:13.2730000+00:00</lastModDate>
        
        <creator>You-sheng Zhou</creator>
        
        <creator>Jun-feng Zhou</creator>
        
        <creator>Feng Wang</creator>
        
        <subject>roaming authentication; anonymous; chaotic maps; key agreement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>A privacy-preserving roaming authentication scheme (PPRAS) for ubiquitous networks is proposed, in which a remote mobile user can obtain the service offered by a foreign agent after being authenticated. In order to protect the mobile user’s privacy, the user presents an anonymous identity to the foreign agent and completes the authentication with the assistance of his or her home agent. After that, the user and the foreign agent can establish a session key using the semi-group property of Chebyshev polynomials, so the huge burden of key management is avoided. Furthermore, the user can update the login password and the session key shared with the foreign agent if necessary. The correctness of the scheme is proved using BAN logic, and a performance comparison against existing schemes is given as well.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_63-A_Privacy_Preserving_Roaming_Authentication_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Building an Intelligent Call Routing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070162</link>
        <id>10.14569/IJACSA.2016.070162</id>
        <doi>10.14569/IJACSA.2016.070162</doi>
        <lastModDate>2016-02-01T14:06:13.2600000+00:00</lastModDate>
        
        <creator>Thien Khai Tran</creator>
        
        <creator>Dung Minh Pham</creator>
        
        <creator>Binh Van Huynh</creator>
        
        <subject>EduICR; spoken dialog systems; intelligent call center; voice application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents EduICR - an Intelligent Call Routing system. This system can route calls to the most appropriate agent using routing rules built by a text classifier. EduICR includes the following main components: a telephone communication network; Vietnamese speech recognition; a text classifier/natural language processor; and Vietnamese speech synthesis. To the best of our knowledge, this is one of the first systems in Vietnam to implement the integration of text processing and speech processing. This allows voice applications to be more intelligent, able to communicate with humans in natural language with high accuracy and reasonable speed. Having been built and tested in a real environment, our system achieves an accuracy of more than 95%.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_62-Towards_Building_an_Intelligent_Call_Routing_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cosine Based Latent Factor Model for Precision Oriented Recommendation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070161</link>
        <id>10.14569/IJACSA.2016.070161</id>
        <doi>10.14569/IJACSA.2016.070161</doi>
        <lastModDate>2016-02-01T14:06:13.2270000+00:00</lastModDate>
        
        <creator>Bipul Kumar</creator>
        
        <creator>Pradip Kumar Bala</creator>
        
        <creator>Abhishek Srivastava</creator>
        
        <subject>collaborative filtering; recommender systems; precision; e-commerce; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Recommender systems suggest a list of interesting items to users based on their prior purchase or browsing behaviour on e-commerce platforms. Continuing research in recommender systems has primarily focused on developing algorithms for the rating prediction task. However, most e-commerce platforms provide a ‘top-k’ list of interesting items for every user. In line with this idea, the paper proposes a novel machine learning algorithm that predicts a ‘top-k’ list of items by optimizing the latent factors of users and items with scores mapped from ratings. The basic idea is to learn latent factors based on the cosine similarity between the users’ and items’ latent features, which is then used to predict the scores of unseen items for every user. Comprehensive empirical evaluations on publicly available benchmark datasets reveal that the proposed model outperforms the state-of-the-art algorithms in recommending good items to a user.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_61-Cosine_Based_Latent_Factor_Model_for_Precision_Oriented_Recommendation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Effect of Different Kernel Functions on the Performance of SVM for Recognizing Arabic Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070160</link>
        <id>10.14569/IJACSA.2016.070160</id>
        <doi>10.14569/IJACSA.2016.070160</doi>
        <lastModDate>2016-02-01T14:06:13.1970000+00:00</lastModDate>
        
        <creator>Sayed Fadel</creator>
        
        <creator>Said Ghoniemy</creator>
        
        <creator>Mohamed Abdallah</creator>
        
        <creator>Hussein Abu Sorra</creator>
        
        <creator>Amira Ashour</creator>
        
        <creator>Asif Ansary</creator>
        
        <subject>SVM; Kernel Functions; Arabic Character Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Considerable progress has been achieved in recognition techniques for Latin and Chinese characters. By contrast, Arabic optical character recognition is still lagging, even though interest and research in this area have become more intensive than before. This is because Arabic is a cursive language, written from right to left; each character has two to four different forms according to its position in the word, and several characters are associated with complementary parts above, below, or inside the character. Support Vector Machines (SVMs) have been used successfully for recognizing Latin and Chinese characters. This paper studies the effect of different kernel functions on the performance of SVMs for recognizing Arabic characters. Eleven different kernel functions are used throughout this study. The objective is to specify which type of kernel function gives the best recognition rate. The resulting kernel functions can be considered a base for future studies aiming at enhancing their performance. The obtained results show that the Exponential and Laplacian kernels give excellent performance, while others, like the multi-quadric kernel, fail to recognize the characters, especially with increased levels of noise.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_60-Investigating_the_Effect_of_Different_Kernel_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Distributed Denial of Service Attacks Using Data Mining Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070159</link>
        <id>10.14569/IJACSA.2016.070159</id>
        <doi>10.14569/IJACSA.2016.070159</doi>
        <lastModDate>2016-02-01T14:06:13.1800000+00:00</lastModDate>
        
        <creator>Mouhammd Alkasassbeh</creator>
        
        <creator>Ghazi Al-Naymat</creator>
        
        <creator>Ahmad B.A Hassanat</creator>
        
        <creator>Mohammad Almseidin</creator>
        
        <subject>DDoS; IDS; MLP; Na&#239;ve Bayes; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Users and organizations find it continuously challenging to deal with distributed denial of service (DDoS) attacks. The security engineer works to keep a service available at all times by dealing with intruder attacks. The intrusion-detection system (IDS) is one of the solutions for detecting and classifying any anomalous behavior. The IDS should always be updated with the latest intruder attack deterrents to preserve the confidentiality, integrity and availability of the service. In this paper, a new dataset is collected because there were no common datasets containing modern DDoS attacks in different network layers, such as SIDDoS and HTTP Flood. This work incorporates three well-known classification techniques: Multilayer Perceptron (MLP), Na&#239;ve Bayes and Random Forest. The experimental results show that MLP achieved the highest accuracy rate (98.63%).</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_59-Detecting_Distributed_Denial_of_Service_Attacks_Using_Data_Mining_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expectation-Maximization Algorithms for Obtaining Estimations of Generalized Failure Intensity Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070158</link>
        <id>10.14569/IJACSA.2016.070158</id>
        <doi>10.14569/IJACSA.2016.070158</doi>
        <lastModDate>2016-02-01T14:06:13.1500000+00:00</lastModDate>
        
        <creator>Makram KRIT</creator>
        
        <creator>Khaled MILI</creator>
        
        <subject>Repairable systems reliability; bathtub failure intensity; EM algorithm; estimation; likelihood; Monte Carlo simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology is related to Maximum Likelihood Estimates (MLE) in the case of missing data. A bathtub-form failure intensity formulation of repairable system reliability is presented, where the estimation of its parameters is carried out through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, interval estimation based on large-sample theory in the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_58-Expectation_Maximization_Algorithms_for_Obtaining_Estimations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SDAA: Towards Service Discovery Anywhere Anytime Mobile Based Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070157</link>
        <id>10.14569/IJACSA.2016.070157</id>
        <doi>10.14569/IJACSA.2016.070157</doi>
        <lastModDate>2016-02-01T14:06:13.1200000+00:00</lastModDate>
        
        <creator>Mehedi Masud</creator>
        
        <subject>mobile application; service discovery; mobile services; software engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Providing on-demand services based on customers&#39; current location is an urgent need for many societies and individuals, especially for women, elderly people, single mothers and the sick. Considering the need to provide localized services, this paper proposes a mobile application framework that allows an individual to receive services from neighborhood peers anywhere, anytime. The application allows an individual to find and select reliable service providers near his or her location. It also provides an opportunity for interested individuals to use their free time to provide services to the community and earn some extra money. This application will benefit many stakeholders, such as elderly people, women at home, or a person traveling in an unfamiliar place. A prototype application was developed, and an empirical evaluation was conducted to obtain qualitative measures of users&#39; acceptability of and satisfaction with the application. It is observed that users&#39; satisfaction is high.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_57-Sdaa_Towards_Service_Discovery_Anywhere_Anytime_Mobile_Based_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-Based Image Retrieval Using Texture Color Shape and Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070156</link>
        <id>10.14569/IJACSA.2016.070156</id>
        <doi>10.14569/IJACSA.2016.070156</doi>
        <lastModDate>2016-02-01T14:06:13.0870000+00:00</lastModDate>
        
        <creator>Syed Hamad Shirazi</creator>
        
        <creator>Arif Iqbal Umar</creator>
        
        <creator>Saeeda Naz</creator>
        
        <creator>Noor ul Amin Khan</creator>
        
        <creator>Muhammad Imran Razzak</creator>
        
        <creator>Bandar AlHaqbani</creator>
        
        <subject>CBIR; Color Space; Relevance Feedback; Texture Features; Shape; Color</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Interest in accurately retrieving required images from databases of digital images is growing day by day. Images are represented by certain features to facilitate accurate retrieval of the required images. These features include texture, color, shape and region. This is a hot research area, and researchers have developed many techniques that use these features for accurate retrieval of required images from databases. In this paper, we present a literature survey of Content-Based Image Retrieval (CBIR) techniques based on texture, color, shape and region. We also review some of the state-of-the-art tools developed for CBIR.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_56-Content_Based_Image_Retrieval_Using_Texture_Color_Shape_and_Region.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of a Neural Network Using Simulator and Petri Nets*</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070155</link>
        <id>10.14569/IJACSA.2016.070155</id>
        <doi>10.14569/IJACSA.2016.070155</doi>
        <lastModDate>2016-02-01T14:06:13.0730000+00:00</lastModDate>
        
        <creator>Nayden Valkov Nenkov</creator>
        
        <creator>Elitsa Zdravkova Spasova</creator>
        
        <subject>neural networks; simulators; logical or; petri net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper describes the construction of a multilayer perceptron using the open-source neural network simulator Neuroph and Petri nets. The described multilayer perceptron solves the logical function &quot;xor&quot; (exclusive or). The aim is to explore the possibilities of describing neural networks with Petri nets. The selected neural network (a multilayer perceptron) allows the advantages and disadvantages of realization through a simulator to be seen clearly. The selected logical function is not linearly separable. After the neural network was constructed on the simulator, its implementation by Petri nets was investigated. The results are used to determine and consider opportunities for different discrete representations of the same model in the same subject area.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_55-Implementation_of_a_Neural_Network_Using_Simulator_and_Petri_Nets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Stemmer for Search Engines Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070154</link>
        <id>10.14569/IJACSA.2016.070154</id>
        <doi>10.14569/IJACSA.2016.070154</doi>
        <lastModDate>2016-02-01T14:06:13.0570000+00:00</lastModDate>
        
        <creator>Ahmed Khalid</creator>
        
        <creator>Zakir Hussain</creator>
        
        <creator>Mirza Anwarullah Baig</creator>
        
        <subject>Information Retrieval; Arabic Stemming; Search Engine; Arabic Morphology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The Arabic language has a very different and more difficult structure than other languages because it is a very rich language with complex morphology. Many stemmers have been developed for Arabic, but there are still many weaknesses and problems, and there is still a lack of use of Arabic stemming in search engines. This paper introduces a rooted-word Arabic stemming technique. The results of the introduced technique for six Arabic sentences are used in Google Chrome, Internet Explorer and Mozilla Firefox to check the effect of using Arabic stemming in terms of the total number of searched pages and the search-time ratio for the actual sentences and their stemmed results. The results show that stemming Arabic words increases and accelerates search engine output.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_54-Arabic_Stemmer_for_Search_Engines_Information_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contemporary Layout’s Integration for Geospatial Image Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070153</link>
        <id>10.14569/IJACSA.2016.070153</id>
        <doi>10.14569/IJACSA.2016.070153</doi>
        <lastModDate>2016-02-01T14:06:13.0400000+00:00</lastModDate>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <creator>Jian-Ping Li</creator>
        
        <creator>Asif Khan</creator>
        
        <subject>Geo-Location; Spatial Layout; Feature Extraction; Image Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Image classification and retrieval play a major role in dealing with large multimedia data on the Internet. Social networks, image-sharing websites and mobile applications require categorizing multimedia items for more efficient search and storage. Therefore, image classification and retrieval methods have gained great importance for researchers and companies. Image classification can be performed in a supervised or semi-supervised manner: in order to categorize an unknown image, a statistical model created using labeled samples is fed with the numerical representation of the visual features of images. Analysis of the keywords surrounding images, or of the content of the images alone, has not yet achieved results that would allow deriving precise location information to select representative images. Photos that are reliably tagged with labels of place names or areas cover only a small fraction of available images and also remain at a keyword level. The state of the art of content-based retrieval in earth observation image archives is analyzed, concentrating on complete frameworks that show promise for operational implementation. The methods are considered with specific focus on the stages after extraction of primitive features. The solutions conceived for issues such as synthesis and simplification of features, semantic labeling and indexing are reviewed. Approaches to query specification and execution are assessed, and conclusions are drawn for research on earth observation mining.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_53-Contemporary_Layouts_Integration_for_Geospatial_Image_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vision Based Geo Navigation Information Retreival</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070152</link>
        <id>10.14569/IJACSA.2016.070152</id>
        <doi>10.14569/IJACSA.2016.070152</doi>
        <lastModDate>2016-02-01T14:06:12.9800000+00:00</lastModDate>
        
        <creator>Asif Khan</creator>
        
        <creator>Jian-Ping Li</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <subject>Vision; Geo-Navigation; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In order to derive the three-dimensional camera position from monocular camera vision, a geo-reference database is needed. A floor plan is a ubiquitous geo-reference database that every building refers to during construction and facility maintenance. Compared with other popular geo-reference databases such as geo-tagged photos, the generation, update and maintenance of a floor plan database does not require costly and time-consuming survey tasks. In vision-based methods, the camera needs special attention. In contrast to other sensors, vision sensors typically yield vast information that needs complex strategies to permit use in real time and on computationally constrained platforms. This research work shows that a map-based visual odometry strategy derived from a state-of-the-art structure-from-motion framework is particularly suitable for locally stable, pose-controlled flight. Issues concerning drift and robustness are analyzed and discussed with respect to the original framework. Additionally, various uses of a vision-based localization algorithm are proposed here. However, a noteworthy drawback of vision-based algorithms is the absence of robustness. Most of the methodologies are sensitive to scene variations (like season or environment changes) because they use the Sum of Squared Differences (SSD). To counter this, we use Mutual Information, which is exceptionally robust to global and local scene variations. On the other hand, dense methodologies are frequently subject to drift drawbacks. Here, we attempt to address this issue by using geo-referenced pictures. The localization algorithm has been implemented and experimental results are available. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve position and orientation while continuing to map out new features. In addition, the experimental results of the proposed model also suggest a plausibility proof for feed-forward models of recognition in geo-location.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_52-Vision_Based_Geo_Navigation_Information_Retreival.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of the Implementation of the ERP on End-User Satisfaction Case of Moroccan Companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070151</link>
        <id>10.14569/IJACSA.2016.070151</id>
        <doi>10.14569/IJACSA.2016.070151</doi>
        <lastModDate>2016-02-01T14:06:12.9470000+00:00</lastModDate>
        
        <creator>Fatima JALIL</creator>
        
        <creator>Abdellah ZAOUIA</creator>
        
        <creator>Rachid EL BOUANANI</creator>
        
        <subject>Enterprise Resource Planning (ERP); User Satisfaction; Quality Change;Information Technology (IT); Information Systems(IS); success; evaluation approaches;Evaluation Success Factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In recent years, the implementation of ERP has served as a lever for development and inter-organizational collaboration. ERP is a powerful tool for integration, sharing of information, and streamlining of processes within organizations (El Amrani et al. 2006; Kocoglu and Moatti, 2010). A company must not only equip and computerize itself but also opt for the establishment of an &quot;optimal&quot; IT infrastructure that will respond to its present and future needs. Hence the interest in application integration, and especially in ERP, which remedies the situations mentioned. This article proposes and tests a model to evaluate the success of an &quot;Enterprise Resource Planning&quot; (ERP) system based on a measure of user satisfaction. Referring to the DeLone &amp; McLean (1992) model and the work of Seddon &amp; Kiew (1994), the criteria that can influence user satisfaction, and thereby ensure successful implementation of the ERP system, are identified. The results of the exploratory study, carried out on 60 users in 40 Moroccan companies, show that user satisfaction with ERP is explained by the quality of the ERP system, perceived usefulness and the quality of information provided by this type of system. The study also found that the quality of change is a predictor of satisfaction, measured by user involvement in the implementation of ERP, the quality of communication within such a project and the quality of training given to users.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_51-The_Impact_of_the_Implementation_of_the_ERP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Cloud Network Management Using Resource Allocation and Task Scheduling Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070150</link>
        <id>10.14569/IJACSA.2016.070150</id>
        <doi>10.14569/IJACSA.2016.070150</doi>
        <lastModDate>2016-02-01T14:06:12.9170000+00:00</lastModDate>
        
        <creator>K.C. Okafor</creator>
        
        <creator>F.N.Ugwoke</creator>
        
        <creator>Obayi, Adaora Angela</creator>
        
        <creator>V.C Chijindu</creator>
        
        <creator>O.U Oparaku</creator>
        
        <subject>Resource Provisioning; Virtualization; Cloud Computing; Service Availability; Smart Green Energy; QoS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Network failure in a cloud datacenter can result from inefficient resource allocation, scheduling and logical segmentation of physical machines (network constraints). This is highly undesirable in Distributed Cloud Computing Networks (DCCNs) running mission-critical services. Such failure has been identified in the University of Nigeria datacenter network situated in the south-eastern part of Nigeria. In this paper, the architectural decomposition of a proposed DCCN was carried out while exploring its functionalities for grid performance. Virtualization services such as resource allocation and task scheduling were employed in heterogeneous server clusters. The validation of DCCN performance was carried out using trace files from Riverbed Modeller 17.5 in order to ascertain the influence of virtualization on the server resource pool. The QoS metrics considered in the analysis are service delay time, resource availability, throughput and utilization. From the validation analysis of the DCCN, the following results were obtained: average throughput (bytes/sec) for DCCN = 40.00%, DCell = 33.33% and BCube = 26.67%; average resource availability response for DCCN = 38.46%, DCell = 33.33%, and BCube = 28.21%; DCCN density on resource utilization = 40% (when logically isolated) and 60% (when not logically isolated). From these results, it was concluded that using virtualization in cloud datacenter servers will result in enhanced server performance, offering lower average wait time even with a higher request rate and longer duration of resource use (service availability). By evaluating these recursive architectural designs for network operations, enterprises ready for the spine-and-leaf model could further develop their network resource management schemes for optimal performance.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_50-Analysis_of_Cloud_Network_Management_Using_Resource_Allocation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applications of Some Topological Near Open Sets to Knowledge Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070149</link>
        <id>10.14569/IJACSA.2016.070149</id>
        <doi>10.14569/IJACSA.2016.070149</doi>
        <lastModDate>2016-02-01T14:06:12.8830000+00:00</lastModDate>
        
        <creator>A. S. Salama</creator>
        
        <creator>O. G. El-Barbary</creator>
        
        <subject>Topological spaces; Rough sets; Knowledge discovery; open sets; Accuracy measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In this paper, we use some topological near open sets to introduce rough set concepts such as near open lower and near open upper approximations. We also study the concept of near open rough sets and some of their basic properties, and we compare near open concepts with rough set concepts. In addition, we study the effect of these concepts in motivating knowledge discovery processing.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_49-Applications_of_Some_Topological_Near_Open_Sets_to_Knowledge_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eliminating Broadcast Storming in Vehicular Ad-Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070147</link>
        <id>10.14569/IJACSA.2016.070147</id>
        <doi>10.14569/IJACSA.2016.070147</doi>
        <lastModDate>2016-02-01T14:06:12.8530000+00:00</lastModDate>
        
        <creator>Umar Hayat</creator>
        
        <creator>Razi Iqbal</creator>
        
        <creator>Jamal Diab</creator>
        
        <subject>VANETs; Intelligent Transportation Systems; Broadcast Storming; Distance based flooding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>VANETs (Vehicular Ad-hoc Networks) offer a diversity of appealing applications. Many applications offered by VANETs depend upon the propagation of messages from one vehicle to another in the network. Several algorithms for the effective broadcasting of safety/warning messages in the network have been presented by different researchers. Vehicles on roads are increasing day by day. Due to this increased number of vehicles, and especially during peak hours when the networks become very dense, blindly disseminating messages in the network causes problems like packet collisions, data thrashing and broadcast storming. In this research, a relative-speed-based waiting-time algorithm is presented for avoiding the broadcast storming problem in VANETs, especially in dense environments. The proposed algorithm calculates the waiting time for each vehicle after receiving a safety/warning message according to the relative speed of the vehicles, the distance between the vehicles and the range of the vehicles. The results show that the proposed relative-speed-based algorithm performs better than existing algorithms like blind flooding and the dynamic broadcasting waiting-time algorithm, which uses the number of neighbors and the distance between vehicles to calculate the waiting time.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_47-Eliminating_Broadcast_Storming_in_Vehicular_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Audio LSB Steganography for Secure Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070146</link>
        <id>10.14569/IJACSA.2016.070146</id>
        <doi>10.14569/IJACSA.2016.070146</doi>
        <lastModDate>2016-02-01T14:06:12.8230000+00:00</lastModDate>
        
        <creator>Muhammad Junaid Hussain</creator>
        
        <creator>Khan Farhan Rafat</creator>
        
        <subject>Conceal; Human Auditory System (HAS); Imperceptible Communication; Internet as a Secure Communication Medium; LSB Based Audio Steganography; Modeling Security of Steganographic System; WAV File Steganography</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The ease with which data can be transmitted across the globe via the Internet has made it an obvious choice of medium for online data transmission and communication. This salient trait, however, is constrained by the related issues of privacy, the veracity of the information being exchanged over it, and the legitimacy of its sender, together with its availability when needed. Although cryptography is used to confront the confidentiality concern, for many it is slightly limited in scope because of the discernibility of encrypted information. Further, due to restrictions imposed by various governments on their citizens&#39; use of cryptography for personal purposes, research has also been steered toward another discipline of information hiding called steganography, whose sole purpose is to make the information being exchanged imperceptible. This research is focused on the evolution of a model-based secure LSB steganographic scheme for the digital audio wave file format to withstand passive attack by Warden Wendy.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_46-Enhanced_Audio_LSB_Steganography_for_Secure_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Simulation of a Low-Voltage Low-Offset Operational Amplifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070145</link>
        <id>10.14569/IJACSA.2016.070145</id>
        <doi>10.14569/IJACSA.2016.070145</doi>
        <lastModDate>2016-02-01T14:06:12.7900000+00:00</lastModDate>
        
        <creator>Babak Gholami</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In many applications, the offset of op-amps should be canceled so that high accuracy can be achieved. In this work, an asymmetrical differential input circuit with an active DC offset rejection circuit was implemented to minimize the systematic offset of the amplifier. Simulations of the proposed op-amp show that the systematic offset voltage is less than 80 &#181;V.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_45-Design_and_Simulation_of_a_Low_Voltage_Low_Offset_Operational_Amplifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Quality of Service to Increase Qos/Qoe of IP-Based Gateway for Integrating Heterogeneous Wireless Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070144</link>
        <id>10.14569/IJACSA.2016.070144</id>
        <doi>10.14569/IJACSA.2016.070144</doi>
        <lastModDate>2016-02-01T14:06:12.7600000+00:00</lastModDate>
        
        <creator>Pon. Arivanantham</creator>
        
        <creator>Dr. M. Ramakrishnan</creator>
        
        <subject>Heterogeneous Wireless Mobile Networks; Ad Hoc Networks; Cellular Networks; Quality of Service; Security; Wireless Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In a broadcast service area, with communications supported by cellular wireless networks, data is communicated to several receivers from an access point/base station. Multicast significantly improves network efficiency in distributing data to multiple receivers, compared with multiple unicast deliveries of the same data to each receiver independently, by taking advantage of the shared nature of the wireless medium. These algorithms need to be designed to provide the essential Quality of Service (QoS) for an extensive assortment of applications while permitting seamless roaming among a multitude of access network technologies. This paper proposes a cellular-aided mobile ad hoc network (CAMA) architecture, in which a CAMA agent in the cellular network manages the control data, while the data is transported over the mobile terminals (MTs). The routing and security information is exchanged among MTs and the agent over cellular radio channels. A location-based routing protocol, the multi-selection greedy positioning routing (MSGPR) protocol, is proposed. This novel feature makes it more applicable in the real world. In addition, dynamic new-call blocking probability is first introduced to make handoff decisions for wireless networks. This paper proposes a novel technique to provide QoS support by means of an assistant network to recover the loss of multicast data in the major network. A wireless device might lose some of the multicast records sent over the major network. The experimental results show that the proposed algorithm outperforms traditional algorithms in bandwidth deployment, handoff dropping rate and handoff rate.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_44-Statistical_Quality_of_Service_to_Increase_QosQoe_of_Ip_Based_Gateway.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>fMRI Data Analysis Using Dempster-Shafer Method with Estimating Voxel Selectivity by Belief Measure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070143</link>
        <id>10.14569/IJACSA.2016.070143</id>
        <doi>10.14569/IJACSA.2016.070143</doi>
        <lastModDate>2016-02-01T14:06:12.7430000+00:00</lastModDate>
        
        <creator>ATTIA Abdelouahab</creator>
        
        <creator>MOUSSAOUI Abdelouahab</creator>
        
        <creator>TALEB-AHMED Abdelmalik</creator>
        
        <subject>Dempster-Shafer theory; fMRI; GLM; t-test; HRF; OTSU method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In functional Magnetic Resonance Imaging (fMRI) data analysis, detecting the activated voxels is a challenging research problem where the existing methods have shown some limits. We propose a new method wherein brain mapping is done based on the Dempster-Shafer theory of evidence (DS), a useful method for analyzing uncertain representations. Dempster-Shafer allows finding the activated regions by checking the activated voxels in fMRI data. The activated brain areas related to a given stimulus are detected by using a belief measure as a metric for evaluating activated voxels. To test the performance of the proposed method, artificial and real auditory data have been employed. The comparison of the introduced method with the t-test and GLM methods has clearly shown that the proposed method can provide higher correct detection of activated voxels.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_43-fMRI_Data_Analysis_Using_Dempster_Shafer_Method_with_Estimating_Voxel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Testing, and Evaluation for the Voipv6 Network Related Functions, (Sendto and Receivefrom)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070142</link>
        <id>10.14569/IJACSA.2016.070142</id>
        <doi>10.14569/IJACSA.2016.070142</doi>
        <lastModDate>2016-02-01T14:06:12.7130000+00:00</lastModDate>
        
        <creator>Asaad Abdallah Yousif Malik Abusin</creator>
        
        <creator>Dr. Junaidi Abdullah</creator>
        
        <creator>Dr. Tan Saw Chin</creator>
        
        <subject>VoIPv6 (Voice over Internet Protocol V6) Performance; Voice over Internet Protocol V6 Performance testing; Voice over Internet Protocol V6 Performance analysis; VoIPv6 quality testing in the protocol and application layers; Internet Measurement Research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The network related functions (Sendto and Receivefrom) in VoIPv6 are needed to obtain the communication socket in both UDP and TCP before communication can take place between the sending and receiving ends. The intent of testing and evaluating the network related functions of Voice over Internet Protocol version 6 (VoIPv6) in this research work is not to provide a comprehensive benchmark, but rather to test how well TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) perform in sending and receiving VoIPv6 traffic and bulk data transfer. Part of this, due to the cumulative nature of VoIPv6 performance, can be achieved by testing the network related functions, namely the Sendto and Receivefrom socket calls. This is because the sending concept in UDP over IPv6 is based on best-effort delivery of packets, not guaranteed delivery as in TCP over IPv6. In this context, performance enhancement techniques need to be applied in VoIPv6 because there is no dedicated line between the sending and receiving ends. This is at once the beauty and the drawback of VoIP. It is also the reason why IPv6 will take a longer time to reach its full maturity (Recommendation G.711 of the ITU expects this by the year 2050) when fully deploying real-time applications, due to their time sensitivity.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_42-Performance_Testing_and_Evaluation_for_the_Voipv6_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fast Adaptive Artificial Neural Network Controller for Flexible Link Manipulators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070141</link>
        <id>10.14569/IJACSA.2016.070141</id>
        <doi>10.14569/IJACSA.2016.070141</doi>
        <lastModDate>2016-02-01T14:06:12.6970000+00:00</lastModDate>
        
        <creator>Amin Riad Maouche</creator>
        
        <creator>Hosna Meddahi</creator>
        
        <subject>Adaptive control; CMAC neural network; artificial neural network; nonlinear control; flexible-link manipulator; dynamic motion equation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper describes a hybrid approach to the problem of controlling flexible link manipulators in the dynamic phase of the trajectory. A flexible beam/arm is an appealing option for civil and military applications, such as space-based robot manipulators. However, flexibility brings with it unwanted oscillations and severe chattering, which may even lead to an unstable system. To tackle these challenges, a novel control architecture scheme is presented. First, a neural network controller based on the robot’s dynamic equation of motion is elaborated. Its aim is to produce fast and stable control of the joint position and velocity and to damp the vibration of each arm. Then, an adaptive Cerebellar Model Articulation Controller (CMAC) is implemented to compensate for unmodeled dynamics, enhancing the precision of the control. The efficiency of the resulting controller is tested on a two-link flexible manipulator. Simulation results on a dynamic trajectory with a sinusoidal form show the effectiveness of the proposed control strategy.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_41-A_Fast_Adaptive_Artificial_Neural_Network_Controller_for_Flexible_Link_Manipulators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Clustering for Information Retrieval from Big Data Depending on Compressed Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070140</link>
        <id>10.14569/IJACSA.2016.070140</id>
        <doi>10.14569/IJACSA.2016.070140</doi>
        <lastModDate>2016-02-01T14:06:12.6670000+00:00</lastModDate>
        
        <creator>Dr. Alaa Kadhim F.</creator>
        
        <creator>Prof. Dr. Ghassan H. Abdul</creator>
        
        <creator>Rasha Subhi Ali</creator>
        
        <subject>dynamic clustering; data retrieval methods; compression algorithm; ICM system; improved k-means algorithm and modified improved k-means algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The rapid growth of database content has led to the origination of large amounts of data, so it remains a big problem to access this data to answer user queries. In this paper, a novel approach for aggregating the required data, called dynamic clustering, is proposed. In addition, several retrieval methods were used for retrieval purposes. The dynamic clustering method builds clusters according to the user entries (queries). It has been applied to compressed database files of different sizes and using different queries. The compressed database files result from applying ICM (Ideal Compression Method) and the best compression algorithms (improved k-means, k-means with medium probability, and k-means with maximum gain ratio). The retrieval methods were applied to the original database file, the compressed file, and the cluster resulting from the dynamic clustering algorithm, and the results were compared.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_40-Dynamic_Clustering_for_Information_Retrieval_from_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>No-Reference Perceived Image Quality Algorithm for Demosaiced Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070139</link>
        <id>10.14569/IJACSA.2016.070139</id>
        <doi>10.14569/IJACSA.2016.070139</doi>
        <lastModDate>2016-02-01T14:06:12.6370000+00:00</lastModDate>
        
        <creator>Lamb Anupama Balbhimrao</creator>
        
        <creator>Madhuri Khambete</creator>
        
        <subject>Demosaicing; Correlation; False color; Image quality; Regression; Zipper</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Visual image quality assessment (IQA) plays a key role in every multimedia application, as the end user is a human being. Real-time applications demand no-reference (NR) IQA due to the unavailability of the reference image. Today, most of the perceived/visual NR-IQA algorithms developed are for distortions like blur, ringing, and blocking artifacts; very few are available for color distortions. Visible color distortions, such as false color and zipper, are produced in the demosaiced image due to incorrect interpolation of missing color values. In this paper, state-of-the-art zipper and false color artifact quantification algorithms and general purpose NR-IQA algorithms are evaluated for visual quality assessment of demosaiced images. Separate NR-IQA algorithms are proposed for zipper and false color artifact quantification; these scores are then combined to obtain a final quality score for the demosaiced image. The zipper algorithm quantifies the zipper artifact by searching for zipper pixels in an image, while the false color algorithm finds the correlation between the color planes of local high-frequency regions to quantify false color.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_39-No_Reference_Perceived_Image_Quality_Algorithm_for_Demosaiced_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Leveled Dag Critical Task Firstschedule Algorithm in Distributed Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070138</link>
        <id>10.14569/IJACSA.2016.070138</id>
        <doi>10.14569/IJACSA.2016.070138</doi>
        <lastModDate>2016-02-01T14:06:12.6030000+00:00</lastModDate>
        
        <creator>Amal EL-NATTAT</creator>
        
        <creator>Nirmeen A. El-Bahnasawy</creator>
        
        <creator>Ayman EL-SAYED</creator>
        
        <subject>Task scheduling; Homogeneous distributed computing systems; Precedence constrained parallel applications; Directed Acyclic Graph; Critical path</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In a distributed computing environment, efficient task scheduling is essential to obtain high performance. A vital goal in designing and developing task scheduling algorithms is to achieve a better makespan. Several task scheduling algorithms have been developed for homogeneous and heterogeneous distributed computing systems. In this paper, a new static task scheduling algorithm is proposed, namely Leveled DAG Critical Task First (LDCTF), which optimizes the performance of the Leveled DAG Prioritized Task (LDPT) algorithm to efficiently schedule tasks on homogeneous distributed computing systems. LDPT was compared to the B-level algorithm, the best-known algorithm for homogeneous distributed systems, and it provided better results. LDCTF is a list-based scheduling algorithm which depends on sorting tasks into a list according to their priority and then scheduling them one by one on the suitable processor. LDCTF aims to improve the performance of the system by producing a shorter schedule length than the LDPT and B-level algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_38-A_Leveled_Dag_Critical_Task_Firstschedule_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MR Brain Real Images Segmentation Based Modalities Fusion and Estimation Et Maximization Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070137</link>
        <id>10.14569/IJACSA.2016.070137</id>
        <doi>10.14569/IJACSA.2016.070137</doi>
        <lastModDate>2016-02-01T14:06:12.5900000+00:00</lastModDate>
        
        <creator>ASSAS Ouarda</creator>
        
        <subject>Data fusion; Segmentation; Estimation and Maximization; MRI images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>With the development of image acquisition techniques, more data coming from different image sources have become available. Multi-modality image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single modality. The main aim of this work is to improve the segmentation of real cerebral MRI images by fusion of modalities (T1, T2 and PD) using the Estimation and Maximization (EM) approach. The adopted approaches were evaluated and compared using four criteria: the standard deviation (STD), information entropy (IE), the correlation coefficient (CC) and the spatial frequency (SF). The experimental results on real MRI brain images prove that the adopted fusion scenarios are more accurate and robust than the standard EM approach.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_37-MR_Brain_Real_Images_Segmentation_Based_Modalities_Fusion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Implementation of Adaptive Neuro-Fuzzy Inference Systems Controller for Greenhouse Climate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070136</link>
        <id>10.14569/IJACSA.2016.070136</id>
        <doi>10.14569/IJACSA.2016.070136</doi>
        <lastModDate>2016-02-01T14:06:12.5570000+00:00</lastModDate>
        
        <creator>Charaf eddine LACHOURI</creator>
        
        <creator>Khaled MANSOURI</creator>
        
        <creator>Aissa BELMEGUENAI</creator>
        
        <creator>Mohamed mourad LAFIFI</creator>
        
        <subject>Neuro-Fuzzy; ANFIS; VHDL; FPGA; Quartus; ASIC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper describes a Field-Programmable Gate Array (FPGA) implementation of Adaptive Neuro-Fuzzy Inference Systems (ANFIS) using the Very High-Speed Integrated Circuit Hardware Description Language (VHDL) for controlling temperature and humidity inside a tomato greenhouse. The main advantages of the HDL approach are rapid prototyping and the ability to use powerful synthesis tools on the VHDL code. The use of a hardware description language (HDL) makes the application suitable for implementation in an Application Specific Integrated Circuit (ASIC) using FPGA tools such as Quartus II 8.1. A set of six input meteorological and control actuator parameters that have a major impact on the greenhouse climate was chosen to represent the growing process of tomato plants. In this contribution, we discuss the construction of an ANFIS system that seeks to provide a linguistic model for the estimation of the greenhouse climate from the meteorological data and control actuators during 48 days of seedling growth, embedded in the trained neural network and optimized using backpropagation and the least squares algorithm with 500 iterations. The simulation results show the efficiency of the implemented controller.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_36-FPGA_Implementation_of_Adaptive_Neuro_Fuzzy_Inference_Systems_Controller.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QRS Detection Based on an Advanced Multilevel Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070135</link>
        <id>10.14569/IJACSA.2016.070135</id>
        <doi>10.14569/IJACSA.2016.070135</doi>
        <lastModDate>2016-02-01T14:06:12.5430000+00:00</lastModDate>
        
        <creator>Wissam Jenkal</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <creator>Azzedine Dliou</creator>
        
        <creator>Oussama El B’charri</creator>
        
        <creator>Fadel Mrabih Rabou Maoulainine</creator>
        
        <subject>ECG Signal; QRS Complex; multilevel algorithm; thresholding technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents an advanced multilevel algorithm used for the QRS complex detection. This method is based on three levels. The first permits the extraction of higher peaks using an adaptive thresholding technique. The second allows the QRS region detection. The last level permits the detection of Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system for a real time ECG monitoring system.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_35-QRS_Detection_Based_on_an_Advanced_Multilevel_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement Bag-of-Words Model for Solving the Challenges of Sentiment Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070134</link>
        <id>10.14569/IJACSA.2016.070134</id>
        <doi>10.14569/IJACSA.2016.070134</doi>
        <lastModDate>2016-02-01T14:06:12.5100000+00:00</lastModDate>
        
        <creator>Doaa Mohey El-Din</creator>
        
        <subject>Sentiment analysis; Bag-Of-Words; sentiment analysis challenges; text analysis; Reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Sentiment analysis is a branch of natural language processing that uses machine learning methods. It has become one of the most important sources in decision making. It can extract, identify, evaluate or otherwise characterize online sentiment reviews. Although the Bag-of-Words model is the most widely used technique for sentiment analysis, it has two major weaknesses: it relies on a manual evaluation of a lexicon to determine the evaluation of words, and it analyzes sentiments with low accuracy because it neglects the grammatical effects of the words and ignores their semantics. In this paper, we propose a new technique to evaluate online sentiments in one topic domain and provide a solution for some significant sentiment analysis challenges, improving the accuracy of the sentiment analysis performed. The proposed technique relies on an enhanced bag-of-words model that evaluates sentiment polarity and score automatically by using word weights instead of term frequency. This technique can also classify the reviews based on features and keywords of the scientific topic domain. This paper introduces solutions for essential sentiment analysis challenges that are suitable for the review structure. It also examines how the proposed enhancement model reaches higher accuracy.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_34-Enhancement_Bag_of_Words_Model_for_Solving.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximally Distant Codes Allocation Using Chemical Reaction Optimization with Enhanced Exploration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070133</link>
        <id>10.14569/IJACSA.2016.070133</id>
        <doi>10.14569/IJACSA.2016.070133</doi>
        <lastModDate>2016-02-01T14:06:12.4930000+00:00</lastModDate>
        
        <creator>Taisir Eldos</creator>
        
        <creator>Abdallah Khreishah</creator>
        
        <subject>Evolutionary Algorithms; Chemical Reaction Optimization; Maximally Distant Codes; Binary Knapsack Problem; Fault Tolerance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Error correcting codes, also known as error controlling codes, are sets of codes with redundancy that provides for error detection and correction, for fault tolerant operations like data transmission over noisy channels or data retention using storage media with possible physical defects. The challenge is to find a set of m codes out of the 2n available n-bit combinations, such that the aggregate Hamming distance among those codewords and/or the minimum distance is maximized. Due to the prohibitively large solution spaces of practically sized problems, greedy algorithms are used to generate quick and dirty solutions. However, modern evolutionary search techniques like genetic algorithms, particle swarms, gravitational search, and others, offer more feasible solutions, yielding near optimal solutions in exchange for some computational time. Chemical Reaction Optimization (CRO), which is inspired by molecular reactions towards a minimal energy state, emerged recently as an efficient optimization technique. However, like the other techniques, its internal dynamics are hard to control towards convergence, yielding poor performance in many situations. In this research, we propose an enhanced exploration strategy to overcome this problem, and compare it with the standard threshold-based exploration strategy in solving the maximally distant codes allocation problem. Test results showed that the enhancement provided better performance on most metrics.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_33-Maximally_Distant_Codes_Allocation_Using_Chemical_Reaction_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bag of Features Model Using the New Approaches: A Comprehensive Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070132</link>
        <id>10.14569/IJACSA.2016.070132</id>
        <doi>10.14569/IJACSA.2016.070132</doi>
        <lastModDate>2016-02-01T14:06:12.4800000+00:00</lastModDate>
        
        <creator>CHOUGRAD Hiba</creator>
        
        <creator>ZOUAKI Hamid</creator>
        
        <creator>ALHEYANE Omar</creator>
        
        <subject>Bag of features; Image classification; Local and global descriptors; Locality-constrained Linear Coding; Spatial pyramid; Pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The major challenge in content-based image retrieval is the semantic gap. Images are described mainly on the basis of their numerical information, while users are more interested in their semantic content, and it is really difficult to find a correspondence between these two sides. The bag of features (BoF) model is an efficient image representation technique for image classification. However, it has some limitations, for instance the information loss during the encoding process, an important step of BoF. This is because the encoding is usually done by hard assignment, i.e., in vector quantization each feature is encoded by being assigned to a single visual word. Another notorious disadvantage of BoF is that it ignores the spatial relationships among the patches, which are very important in image representation. To address those limitations and enhance the results, novel approaches were proposed at each level of the BoF pipeline: for instance, the combination of local and global descriptors for a better description, a soft-assignment encoding manner with spatial pyramid partitioning for a more informative image representation, and maximum pooling to obtain the final descriptors. Our work aims to give a detailed account of BoF, including all the levels of the pipeline, as a support leading to a better comprehension of the approach. We also compare and evaluate the state-of-the-art approaches and find out how these changes at each level of the pipeline affect the performance and the overall classification results.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_32-Bag_of_Features_Model_Using_the_New_Approaches_A_Comprehensive_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment and Comparison of Fuzzy Based Test Suite Prioritization Method for GUI Based Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070131</link>
        <id>10.14569/IJACSA.2016.070131</id>
        <doi>10.14569/IJACSA.2016.070131</doi>
        <lastModDate>2016-02-01T14:06:12.4470000+00:00</lastModDate>
        
        <creator>Neha Chaudhary</creator>
        
        <creator>O.P. Sangwan</creator>
        
        <subject>Test suite prioritization; Fuzzy Model; Comparison of prioritization methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The testing of event-driven software plays a significant role in improving the overall quality of software. Due to the event-driven nature of GUI-based software, many test cases are generated, and it is difficult to identify the test cases whose fault-revealing capability is high. To identify those test cases, test suite prioritization is done. Various test suite prioritization methods exist for GUI-based software in the literature. Prioritization methods improve the rate of fault detection. In our previous work we proposed a fuzzy model for test suite prioritization of GUI-based software. In this method, priority is assigned on the basis of multiple factors using a fuzzy model. These factors are: the type of event, event interaction, and parameter-value interaction coverage-based criteria. Using this method, a test suite is organized in descending order of effectiveness. In this paper we evaluate the proposed fuzzy model and compare the results with other prioritization methods.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_31-Assessment_and_Comparison_of_Fuzzy_Based_Test_Suite_Prioritization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of CPU Scheduling Algorithms with Novel OMDRRS Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070130</link>
        <id>10.14569/IJACSA.2016.070130</id>
        <doi>10.14569/IJACSA.2016.070130</doi>
        <lastModDate>2016-02-01T14:06:12.4330000+00:00</lastModDate>
        
        <creator>Neetu Goel</creator>
        
        <creator>Dr. R. B. Garg</creator>
        
        <subject>Round Robin; Turn-around time; Waiting Time; t-test; Anova test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>CPU scheduling is one of the most primary and essential parts of any operating system. It prioritizes processes to efficiently execute user requests and helps in choosing the appropriate process for execution. Round Robin (RR) &amp; Priority Scheduling (PS) are among the most widely used and accepted CPU scheduling algorithms. But their performance degrades with respect to turnaround time, waiting time &amp; context switching with each recurrence. A new scheduling algorithm, OMDRRS, is developed to improve the performance of the RR and priority scheduling algorithms. The new algorithm performs better than the popular existing algorithms: drastic improvement is seen in waiting time, turnaround time, response time and context switching. A comparative analysis of Turnaround Time (TAT), Waiting Time (WT) and Response Time (RT) is shown with the help of ANOVA and the t-test.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_30-Performance_Analysis_of_CPU_Scheduling_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Image Encryption Using 3D Cat Map and Turing Machine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070129</link>
        <id>10.14569/IJACSA.2016.070129</id>
        <doi>10.14569/IJACSA.2016.070129</doi>
        <lastModDate>2016-02-01T14:06:12.4000000+00:00</lastModDate>
        
        <creator>Nehal A. Mohamed</creator>
        
        <creator>Mostafa A. El-Azeim</creator>
        
        <creator>Alaa Zaghloul</creator>
        
        <subject>chaotic 3D cat map; brute force attacks; Dynamic random growth technique; Turing machine; key space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Security of data is of prime importance. Security is a very complex and vast topic. One of the common ways to protect digital data from unauthorized eavesdropping is encryption. This paper introduces an improved image encryption technique based on a chaotic 3D cat map and a Turing machine in the form of a dynamic random growth technique. The algorithm consists of two main sections: the first performs a preprocessing operation to shuffle the image using a 3D chaotic map in the form of a dynamic random growth technique; the second uses a Turing machine simultaneously with the shuffling of pixel locations to diffuse pixel values using a random key generated by the chaotic 3D cat map. The hybrid compound of a 3D chaotic system and a Turing machine strengthens the encryption performance and enlarges the key space required to resist brute force attacks. The main advantages of such a secure technique are its simplicity and efficiency. These good cryptographic properties prove that it is secure enough to use in image transmission systems.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_29-Improving_Image_Encryption_Using_3D_Cat_Map_and_Turing_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Discrete Particle Swarm Optimization to Estimate Parameters in Vision Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070128</link>
        <id>10.14569/IJACSA.2016.070128</id>
        <doi>10.14569/IJACSA.2016.070128</doi>
        <lastModDate>2016-02-01T14:06:12.3700000+00:00</lastModDate>
        
        <creator>Benchikhi Loubna</creator>
        
        <creator>Sadgal Mohamed</creator>
        
        <creator>Elfazziki Abdelaziz</creator>
        
        <creator>Mansouri Fatimaezzahra</creator>
        
        <subject>industrial vision; image processing; optimization; DPSO; quality control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The majority of manufacturers demand increasingly powerful vision systems for quality control. To obtain good outcomes, the installation requires an effort in tuning the vision system, for both hardware and software. As time and accuracy are important, practitioners aim to automate parameter adjustment and optimization, at least in image processing. This paper suggests an approach based on discrete particle swarm optimization (DPSO) that automates software settings and provides optimal parameters for industrial vision applications. Novel update functions for our DPSO definition are suggested. The proposed method is applied to real examples of quality control to validate its feasibility and efficiency, showing that the new DPSO model furnishes promising results.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_28-A_Discrete_Particle_Swarm_Optimization_to_Estimate_Parameters_in_Vision_Tasks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Energy Reduction Techniques for Green Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070127</link>
        <id>10.14569/IJACSA.2016.070127</id>
        <doi>10.14569/IJACSA.2016.070127</doi>
        <lastModDate>2016-02-01T14:06:12.3400000+00:00</lastModDate>
        
        <creator>Shaden M. AlIsmail</creator>
        
        <creator>Heba A. Kurdi</creator>
        
        <subject>Cloud Computing; Green Computing; Energy Efficiency; Power Management; Virtualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The growth of cloud computing has led to uneconomical energy consumption in data processing, storage, and communications. This is unfriendly to the environment because of the resulting carbon emissions; therefore, green IT is required to save the environment. The green cloud computing (GCC) approach is part of green IT; it aims to reduce the carbon footprint of datacenters by reducing their energy consumption. GCC is a broad and exciting field for research, and a plethora of research has emerged aiming to support the GCC vision by improving the utilization of computing resources from different aspects, such as software optimization, hardware optimization, and network optimization techniques. This paper overviews the approaches to GCC and classifies them. Such a classification assists in comparisons between GCC approaches by identifying the key implementation approaches and the issues related to each.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_27-Review_of_Energy_Reduction_Techniques_for_Green_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Game Theoretic Framework for E-Mail Detection and Forgery Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070126</link>
        <id>10.14569/IJACSA.2016.070126</id>
        <doi>10.14569/IJACSA.2016.070126</doi>
        <lastModDate>2016-02-01T14:06:12.3070000+00:00</lastModDate>
        
        <creator>Long Chen</creator>
        
        <creator>Yuan Lou</creator>
        
        <creator>Min Xiao</creator>
        
        <creator>Zhen-Xing Dong</creator>
        
        <subject>email detection; email forgery; game theoretic model; Nash Equilibrium; the optimal strategy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>In email forensics, the conflict between email detection and forgery is an interdependent strategy-selection process, and there exist complex dynamics between the detector and the forger, who have conflicting objectives and influence each other’s performance and decisions. This paper aims to study their dynamics from the perspective of game theory. We first analyze the basic email structure and header information, then discuss email detection and forgery technologies. We propose a Detection-Forgery Game (DFG) model and classify players’ strategies by Operation Complexity (OC). In the DFG model, we regard the interactions between the detector and the forger as a two-player, non-cooperative, non-zero-sum, finite strategic game, and formulate the Nash Equilibrium. The optimal detection and forgery strategies, minimizing cost and maximizing reward, are found using the model. Finally, we perform empirical experiments to verify the effectiveness and feasibility of the model.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_26-A_Game_Theoretic_Framework_for_EMail_Detection_and_Forgery_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of the SNR Estimator for Speech Enhancement Using a Cascaded Linear Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070125</link>
        <id>10.14569/IJACSA.2016.070125</id>
        <doi>10.14569/IJACSA.2016.070125</doi>
        <lastModDate>2016-02-01T14:06:12.2930000+00:00</lastModDate>
        
        <creator>Harjeet Kaur</creator>
        
        <creator>Rajneesh Talwar</creator>
        
        <subject>Least Mean Square (LMS); Normalized Least Mean Square (NLMS); Recursive Least Square (RLS); Speech Enhancement; Non-stationary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Speech enhancement is the elimination of tainted noise and the improvement of the overall quality of a speech signal. To gain the advantages of individual algorithms, we propose a new linear model in the form of cascaded adaptive filters for the suppression of non-stationary noise. We have successfully deployed the NLMS (Normalized Least Mean Square), Sign LMS (Least Mean Square), and RLS (Recursive Least Square) algorithms as the main de-noising algorithms. Moreover, we successfully demonstrate that prior information about the noise is not required; such information would otherwise be difficult to estimate for fast-varying noise in a non-stationary environment. This approach estimates clean speech by recognizing long segments of the clean speech as one whole unit. During the experiments we used an in-house database (including various types of non-stationary noise) for speech enhancement, and the proposed model’s results have shown improvement over conventional algorithms in both objective and subjective evaluations. Simulations present good results for the new linear model when compared with the results of the individual algorithms.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_25-Analysis_of_the_SNR_Estimator_for_Speech_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Database-as-a-Service for Big Data: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070124</link>
        <id>10.14569/IJACSA.2016.070124</id>
        <doi>10.14569/IJACSA.2016.070124</doi>
        <lastModDate>2016-02-01T14:06:12.2600000+00:00</lastModDate>
        
        <creator>Manar Abourezq</creator>
        
        <creator>Abdellah Idrissi</creator>
        
        <subject>Cloud Computing; Big Data; Database as a Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The last two decades were marked by an exponential growth in the volume of data originating from various data sources, from mobile phones to social media content, through the multitude of devices of the Internet of Things. This flow of data cannot be managed using a classical approach and has led to the emergence of a new buzzword: Big Data. Among the research challenges related to Big Data is the issue of data storage, as traditional relational database systems have proved unable to efficiently manage Big Data datasets. In this context, Cloud Computing plays a relevant role, as it offers interesting models to deal with Big Data storage, especially the model known as Database as a Service (DBaaS). We propose, in this article, a review of database solutions that are offered as DBaaS and discuss their adaptability to Big Data applications.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_24-Database_as_a_Service_for_Big_Data_An_Overview.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Digital Watermarking and its Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070123</link>
        <id>10.14569/IJACSA.2016.070123</id>
        <doi>10.14569/IJACSA.2016.070123</doi>
        <lastModDate>2016-02-01T14:06:12.2470000+00:00</lastModDate>
        
        <creator>Ms. Mahua Pal</creator>
        
        <subject>Watermarking; Watermarking technique; DCT; DWT; LWM; DFRNT; PSNR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Digital communication plays a vital role in the world of the Internet as well as in communication technology, and the secrecy of communication is an essential part of passing data or information. One noticeable technique is digital watermarking. Copyright owners seek methods to control and detect unauthorized reproduction, and hence research on digital product copyright protection has significant practical importance for E-commerce &amp; E-Governance. In this paper, a survey of some previous work done in the watermarking field is presented. Experimentally evaluated algorithms are collected to focus on the wide scope of encrypted digital watermarking for data transmission security and authentication.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_23-A_Survey_on_Digital_Watermarking_and_its_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Compensation in Long-Running Transactions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070122</link>
        <id>10.14569/IJACSA.2016.070122</id>
        <doi>10.14569/IJACSA.2016.070122</doi>
        <lastModDate>2016-02-01T14:06:12.2130000+00:00</lastModDate>
        
        <creator>Rebwar Mala Nabi</creator>
        
        <creator>Sardasht M-Raouf Mahmood</creator>
        
        <creator>Rebaz Mala Nabi</creator>
        
        <creator>Rania Azad Mohammed</creator>
        
        <subject>transaction; compensation; long-running transaction and interruption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Nowadays, one of the most controversial issues is transactions in database systems and web services, specifically in the area of service-oriented computing, where business transactions often need long periods of time to finish. In the case of a failure, rollback, the traditional method, is neither sufficient nor suitable for handling errors during long-running transactions. As a substitute, the most appropriate approach is compensation, which is used as an error-recovery mechanism. Therefore, transactions that need a long time to complete are programmed as a composition of a set of compensable transactions. This study attempts to design several compensation policies for long-running web transactions, especially when the transaction has parallel threads and one thread in the sequential steps of the transaction may fail. This paper also describes and models many different ways to compensate the thread. Moreover, this study proposes a system that implements the creation of long-running transactions as well as the simulation of failures using compensation policies.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_22-Modeling_of_Compensation_in_Long_Running_Transactions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Writing Kurdish Alphabetics in Java Programming Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070121</link>
        <id>10.14569/IJACSA.2016.070121</id>
        <doi>10.14569/IJACSA.2016.070121</doi>
        <lastModDate>2016-02-01T14:06:12.2000000+00:00</lastModDate>
        
        <creator>Rebwar Mala Nabi</creator>
        
        <creator>Sardasht M-Raouf Mahmood</creator>
        
        <creator>Mohammed Qadir Kheder</creator>
        
        <creator>Shadman Mahmood</creator>
        
        <subject>Java; Arabic Scripts; Java language support; Java issues; Kurdish Language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Nowadays, Kurdish programmers usually suffer when they need to write Kurdish letters while they program in Java; moreover, no version of the Java Development Kit has supported Kurdish letters. Therefore, the aim of this study is to develop the Java Kurdish Language Package (JKLP) to solve writing the Kurdish alphabet in the Java programming language, so that Kurdish programmers and/or students can convert English-alphabetic text to Kurdish-alphabetic text. A further aim is adding the Kurdish language to the standard Java Development Kit (JDK). Additionally, in this paper we present the JKLP standard documentation for users. Our object-oriented solution is composed of a package consisting of two classes, which have been implemented in the Java programming language.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_21-Writing_Kurdish_Alphabetics_in_Java_Programming_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Study and Comparison of Information Retrieval Indexing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070120</link>
        <id>10.14569/IJACSA.2016.070120</id>
        <doi>10.14569/IJACSA.2016.070120</doi>
        <lastModDate>2016-02-01T14:06:12.1830000+00:00</lastModDate>
        
        <creator>Zohair Malki</creator>
        
        <subject>Information Retrieval; Indexing Techniques; Inverted Files; Suffix Trees; Signature Files</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This research is aimed at comparing the indexing techniques that exist in current information retrieval processes. The techniques, namely inverted files, suffix trees, and signature files, will be critically described and discussed, along with the differences that occur in their use. The performance and stability of each indexing technique will be critically studied and compared with the rest of the techniques. The paper also aims to show, by the end, the role that indexing plays in the process of retrieving information. It is a comparison of the three indexing techniques that will be introduced in this paper, and the details arising from the detailed comparison will also enhance understanding of the indexing techniques.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_20-Comprehensive_Study_and_Comparison_of_Information_Retrieval_Indexing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formalization of Learning Patterns Through SNKA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070119</link>
        <id>10.14569/IJACSA.2016.070119</id>
        <doi>10.14569/IJACSA.2016.070119</doi>
        <lastModDate>2016-02-01T14:06:12.1530000+00:00</lastModDate>
        
        <creator>Mr Rajesh D</creator>
        
        <creator>Dr. K. David</creator>
        
        <subject>Data Acquiring methods; Learning Patterns; Knowledge Management; Data Mining Tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The learning patterns found among the learner community are steadily progressing towards the digitalized world. Learning patterns arise from acquiring and sharing knowledge, and a greater impact is found in the usage of knowledge-sharing tools such as Facebook, LinkedIn, weblogs, etc., which are dominating the traditional means of learning. Since the knowledge patterns acquired through unstructured web data are insecure, they lead to poor decision making, or decision making without a root cause. These acquired patterns are also shared with others, which indirectly affects the trust patterns between users. In this paper, in order to streamline knowledge acquisition patterns and their means of sharing, a new framework called Social Networking based Knowledge Acquisition (SNKA) is defined to formalize the observed data, and the Dynamic Itemset Counting (DIC) algorithm is tried for predicting users’ usage of web content before and after the knowledge is acquired. Finally, a rough idea for building a tool is also suggested.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_19-Formalization_of_Learning_Patterns_Through_SNKA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Malware and Malicious Executables Using E-Birch Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070118</link>
        <id>10.14569/IJACSA.2016.070118</id>
        <doi>10.14569/IJACSA.2016.070118</doi>
        <lastModDate>2016-02-01T14:06:12.1200000+00:00</lastModDate>
        
        <creator>Dr. Ashit Kumar Dutta</creator>
        
        <subject>Birch; Malware; Executables; Android and Windows</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Malware detection is one of the challenges of the modern computing world. Web mining is the subset of data mining used to provide solutions for complex problems, and web intelligence is the new hope for the field of computer science to bring a solution to malware detection; web mining is the method of web intelligence that makes the web an intelligent tool to combat malware and phishing websites. Generally, malware is injected through websites into the user’s system, where it modifies executable files and paralyzes the whole activity of the system. Antivirus applications utilize data mining techniques to find malware on the web, but there is a need for a heuristic approach to solve the malware problem. Dynamic analysis methods yield better results than static methods, and data mining is the best option for the dynamic analysis of malware or malicious programs. The purpose of this research is to apply the enhanced Birch algorithm to find malware and modified executables of the Windows and Android operating systems.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_18-Detection_of_Malware_and_Malicious_Executables.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparing the Usability of M-Business and M-Government Software in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070117</link>
        <id>10.14569/IJACSA.2016.070117</id>
        <doi>10.14569/IJACSA.2016.070117</doi>
        <lastModDate>2016-02-01T14:06:12.1030000+00:00</lastModDate>
        
        <creator>Mutlaq B. Alotaibi</creator>
        
        <subject>Usability; interaction; heuristics; interface; mobile; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This study presents a usability assessment of mobile presence in the Kingdom of Saudi Arabia (KSA), with a particular focus on the variance between M-business and M-government presence. A general hypothesis was developed that M-business software is more usable than M-government software, with eleven sub-hypotheses derived from Nielsen’s heuristics method. To examine the hypotheses, a representative sample of thirty-six (n=36) mobile software applications in Saudi Arabia was identified from prior research, representing two main categories: M-business and M-government. Within each category, eighteen (n=18) mobile software applications were carefully chosen for further evaluation, representing a wide variety of sectors. A questionnaire was devised based on Nielsen’s heuristics method; this was tailored to fit the context at hand (mobile computing) to establish a usability checklist (consisting of eleven constructs). A group of thirty-six (n=36) participants was recruited to complete the usability assessment, examining each software application against the usability checklist by rating each item using a Likert scale. The results reveal that mobile interactions in KSA were, in general, of an acceptable design quality with respect to usability aspects. The average percentage score for all heuristics met by the evaluated mobile software applications was 68.6%, which reflects how well usability practices in mobile presence were implemented. The scores for all usability components exceeded 60%, with five components below the average score (of 68.6%) and six components above it. The variance between M-business and M-government software usability was significant, particularly in favor of M-business. The general hypothesis was accepted, as were seven of the sub-hypotheses, while only four sub-hypotheses were rejected.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_17-Comparing_the_Usability_of_M_Business_and_M_Government_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VoIP Forensic Analyzer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070116</link>
        <id>10.14569/IJACSA.2016.070116</id>
        <doi>10.14569/IJACSA.2016.070116</doi>
        <lastModDate>2016-02-01T14:06:12.0900000+00:00</lastModDate>
        
        <creator>M Mohemmed Sha</creator>
        
        <creator>Manesh T</creator>
        
        <creator>Saied M. Abd El-atty</creator>
        
        <subject>Forensics; Packet Reordering; Session Initiation; Real Time Transfer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>People have been utilizing Voice over Internet Protocol (VoIP) in most conventional communication facilities, which has enabled an enormous attenuation of operating costs as well as the promotion of next-generation IP-based communication services. As an intimidating upshot, cyber criminals have correspondingly started infiltrating the environment and creating new challenges for the law enforcement system in any country. This paper presents the idea of a framework for the forensic analysis of VoIP traffic over the network. This forensic activity includes spotting and scrutinizing the network patterns of the VoIP-SIP stream, which is used to initiate a session for the communication, and regenerating the content from the VoIP-RTP stream, which is employed to convey the data. The proposed network forensic investigation framework also focuses on developing an efficient packet-restructuring algorithm for tracing the depraved users involved in a conversation. Network forensics is the basis of the proposed work, which performs packet-level surveillance of VoIP, followed by reconstruction of the original malicious content or network session between users for their prosecution in court.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_16-VoIP_Forensic_Analyzer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Behavior Recognition Through Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070115</link>
        <id>10.14569/IJACSA.2016.070115</id>
        <doi>10.14569/IJACSA.2016.070115</doi>
        <lastModDate>2016-02-01T14:06:12.0570000+00:00</lastModDate>
        
        <creator>Haval A. Ahmed</creator>
        
        <creator>Tarik A. Rashid</creator>
        
        <creator>Ahmed T. Sadiq</creator>
        
        <subject>Facial Behavior Recognition; Support Vector Machine; Human Computer Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Communication between computers and humans has grown into a major field of research. Facial behavior recognition through computer algorithms is a motivating and difficult field of research for establishing emotional interactions between humans and computers. Although researchers have suggested numerous methods of emotion recognition in the literature of this field, these research works have mainly focused on one method for their system output, i.e., used one facial database for assessing their work. This may diminish generalization and additionally shrink the range of comparability. A technique is proposed for recognizing emotional expressions conveyed through facial aspects of still images; it uses Support Vector Machines (SVM) as the classifier of emotions. Substantive problems are considered, such as diversity in facial databases, the samples included in each database, the number of facial expressions examined, an accurate method of extracting facial features, and the variety of structural models. After many experiments and comparison of the results of different models, it is determined that this approach produces high recognition rates.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_15-Face_Behavior_Recognition_Through_Support_Vector_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Neuro-Fuzzy Inference Systems for Modeling Greenhouse Climate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070114</link>
        <id>10.14569/IJACSA.2016.070114</id>
        <doi>10.14569/IJACSA.2016.070114</doi>
        <lastModDate>2016-02-01T14:06:12.0270000+00:00</lastModDate>
        
        <creator>Charaf eddine LACHOURI</creator>
        
        <creator>Khaled MANSOURI</creator>
        
        <creator>Mohamed mourad LAFIFI</creator>
        
        <creator>Aissa BELMEGUENAI</creator>
        
        <subject>Greenhouse climate; Modeling; ANFIS; Neuro-Fuzzy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The objective of this work was to solve the nonlinear, time-variant, multi-input multi-output problem of the greenhouse internal climate for tomato seedlings. Artificial intelligence approaches, including neural networks and fuzzy inference, have been widely used to model expert behavior. In this paper we propose Adaptive Neuro-Fuzzy Inference Systems (ANFIS) as a methodology to synthesize a robust greenhouse climate model for the prediction of air temperature, air humidity, CO2 concentration, and internal radiation during seedling growth. A set of ten input meteorological and control-actuator parameters that have a major impact on the greenhouse climate was chosen to represent the growing process of tomato plants. In this contribution we discuss the construction of an ANFIS system that seeks to provide a linguistic model for estimating the greenhouse climate from the meteorological data and control actuators during 48 days of seedling growth; the model is embedded in the trained neural network and optimized using back-propagation and the least-squares algorithm with 500 iterations. The simulation results have shown the efficiency of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_14-Adaptive_Neuro_Fuzzy_Inference_Systems_for_Modeling_Greenhouse_Climate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Adaptive Grey Verhulst Model for Network Security Situation Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070113</link>
        <id>10.14569/IJACSA.2016.070113</id>
        <doi>10.14569/IJACSA.2016.070113</doi>
        <lastModDate>2016-02-01T14:06:11.9970000+00:00</lastModDate>
        
        <creator>Yu-Beng Leau</creator>
        
        <creator>Selvakumar Manickam</creator>
        
        <subject>Grey Theory; Network Security Situation Prediction; Adaptive Grey Verhulst Model; Adjustable Generation Sequence; Prediction Accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Recently, researchers have shown an increased interest in predicting the incoming security situation of an organization’s network. Many prediction models have been produced for this purpose, but many of these models have various limitations in practical applications. In addition, the literature shows that far too little attention has been paid to utilizing the grey Verhulst model to predict the network security situation, although it has demonstrated satisfactory results in other fields. Considering the nature of intrusion attacks and the shortcomings of the traditional grey Verhulst model, this paper puts forward an adaptive grey Verhulst model with an adjustable generation sequence to improve prediction accuracy. The proposed model employs a combination of the Trapezoidal rule and Simpson’s 1/3rd rule to obtain the background value in the grey differential equation, which directly influences the forecast result. In order to verify the performance of the proposed model, the benchmark datasets DARPA 1999 and 2000 have been used to highlight its efficacy. The results show that the proposed adaptive grey Verhulst model surpassed GM(1,1) and the traditional grey Verhulst model in forecasting the incoming security situation in a network.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_13-A_Novel_Adaptive_Grey_Verhulst_Model_for_Network_Security_Situation_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Metrics for Event Driven Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070112</link>
        <id>10.14569/IJACSA.2016.070112</id>
        <doi>10.14569/IJACSA.2016.070112</doi>
        <lastModDate>2016-02-01T14:06:11.9630000+00:00</lastModDate>
        
        <creator>Neha Chaudhary</creator>
        
        <creator>O.P. Sangwan</creator>
        
        <subject>Graphical User Interface; Structural Complexity; Testability; Fuzzy model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The evaluation of a Graphical User Interface plays a significant role in improving its quality, yet very few metrics exist for this purpose. The purpose of metrics is to obtain better measurements in terms of risk management, reliability forecasting, project scheduling, and cost repression. In this paper a structural complexity metric is proposed for the evaluation of Graphical User Interfaces, with structural complexity considered an indicator of complexity. The goal of identifying structural complexity is to measure GUI testability. In this testability evaluation, a process for measuring the complexity of the user interface from a testing perspective is proposed. For GUI evaluation and the calculation of structural complexity, an assessment process based on types of events is designed. A fuzzy model is developed to evaluate the structural complexity of a GUI; this model takes five types of events as input and returns the structural complexity of the GUI as output. Further, a relationship is established between structural complexity and the testability of event-driven software. The proposed model is evaluated with four different applications. It is evident from the results that the higher the complexity, the lower the testability of the application.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_12-Metrics_for_Event_Driven_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Object Tracking in Nonsubsampled Contourlet Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070111</link>
        <id>10.14569/IJACSA.2016.070111</id>
        <doi>10.14569/IJACSA.2016.070111</doi>
        <lastModDate>2016-02-01T14:06:11.9500000+00:00</lastModDate>
        
        <creator>Nguyen Thanh Binh</creator>
        
        <subject>object tracking; Zernike moment; nonsubsampled contourlet transform; context awareness; extracting features</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Intelligent systems are becoming increasingly important in daily life, and tracking moving objects is one of their tasks. This paper proposes an algorithm to track objects in the street. The proposed method uses the amplitude of Zernike moments in the nonsubsampled contourlet transform domain to track objects depending on context awareness. The algorithm successfully handles cases such as new object detection, re-detection of objects that reappear after being obscured, and detecting and tracking objects that intertwine and then separate again. The proposed method was tested on standard large datasets such as the PEST, CAVIAR and SUN datasets, and the author compared the results with other recent methods. Experimental results show that the proposed method performed well compared to the other methods.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_11-Human_Object_Tracking_in_Nonsubsampled_Contourlet_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Crypto Algorithm for Real-Time Applications DCA-RTA, Key Shifting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070110</link>
        <id>10.14569/IJACSA.2016.070110</id>
        <doi>10.14569/IJACSA.2016.070110</doi>
        <lastModDate>2016-02-01T14:06:11.9030000+00:00</lastModDate>
        
        <creator>Ahmad H. Al-Omari</creator>
        
        <subject>Dynamic crypto algorithm; real time applications; shared key generation; symmetric key encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The need for a fast and attack-resistant crypto algorithm is a challenging issue in the era of revolutionary information and communication technologies. The authors’ previous work, “Dynamic Crypto Algorithm for Real-Time Applications DCA_RTA”, still needed further enhancement to bring DCA_RTA to an acceptable security level. In this work, the author adds more enhancements to the Transformation Table (TT) that is generated from the Initial Table (IT), which affects the overall encryption/decryption process. The new TT generation is proven to be less correlated with the IT than the previous TT generation processes. The simulated results indicate more randomness in the TT, which means a more attack-resistant algorithm. Room for further algorithm enhancements still remains.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_10-Dynamic_Crypto_Algorithm_for_Real_Time_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining and Intrusion Detection Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070109</link>
        <id>10.14569/IJACSA.2016.070109</id>
        <doi>10.14569/IJACSA.2016.070109</doi>
        <lastModDate>2016-02-01T14:06:11.8870000+00:00</lastModDate>
        
        <creator>Zibusiso Dewa</creator>
        
        <creator>Leandros A. Maglaras</creator>
        
        <subject>Intrusion Detection; NSL–KDD; Machine Learning; Datasets; Classifiers; Feature Selection; Waikato Environment for Knowledge Analysis; Anomaly detection; Misuse detection; Data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The rapid evolution of technology and the increased connectivity among its components impose new cyber-security challenges. To tackle this growing trend in computer attacks and respond to threats, industry professionals and academics are joining forces to build Intrusion Detection Systems (IDS) that combine high accuracy with low complexity and time efficiency. The present article gives an overview of existing Intrusion Detection Systems (IDS) along with their main principles. This article also discusses whether data mining and its core feature, knowledge discovery, can help in creating data-mining-based IDSs that achieve higher accuracy on novel types of intrusion and demonstrate more robust behaviour than traditional IDSs.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_9-Data_Mining_and_Intrusion_Detection_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward a Hybrid Approach for Crowd Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070108</link>
        <id>10.14569/IJACSA.2016.070108</id>
        <doi>10.14569/IJACSA.2016.070108</doi>
        <lastModDate>2016-02-01T14:06:11.8570000+00:00</lastModDate>
        
        <creator>Chighoub Rabiaa</creator>
        
        <creator>Cherif Foudil</creator>
        
        <subject>crowd behavior; micro-scale representation; multi-layered framework; real time simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>We address the problem of simulating pedestrian crowd behaviors in real time. To date, two approaches can be used in the modeling and simulation of crowd behaviors: macroscopic and microscopic models. Microscopic simulation techniques can accurately capture individual pedestrian behavior, while macroscopic simulations maximize simulation efficiency; neither assures both goals at the same time. In order to combine the strengths of the two classes of crowd modeling, we propose a hybrid architecture that defines the complex behaviors of a crowd at two levels: individual behaviors and the aggregate motion of pedestrian flow. It combines a microscopic and a macroscopic model in a unified framework: we simulate individual pedestrian behaviors in regions of low density using a microscopic model, and we use a faster continuum model of pedestrian flow in the remaining regions of the simulation environment. We demonstrate the flexibility and scalability of our interactive hybrid simulation technique in a large environment. This technique demonstrates the applicability of hybrid techniques to the efficient simulation of large-scale flows with complex dynamics.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_8-Toward_a_Hybrid_Approach_for_Crowd_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Motion Graphs for Character Animation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070107</link>
        <id>10.14569/IJACSA.2016.070107</id>
        <doi>10.14569/IJACSA.2016.070107</doi>
        <lastModDate>2016-02-01T14:06:11.8230000+00:00</lastModDate>
        
        <creator>Kalouache Saida</creator>
        
        <creator>Cherif Foudil</creator>
        
        <subject>motion graphs; inverse kinematic; virtual human; animation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Many works in the literature have improved the performance of motion graphs for synthesizing humanlike results in limited domains that require few constraints, such as dance, navigation in small game-like environments, or gesture feedback in a snowboard tutorial. A humanlike character cannot exist in an environment without interacting with the world surrounding it; the naturalness of the entire motion depends heavily on the animation of the walking character, the chosen path and the interaction motions. Controlling the exact position of end-effectors is the main disadvantage of motion graphs, which is why little attention has been paid to searching for collision-free motions in complex environments or to manipulation motions. This fact motivates our approach: hybrid motion graphs that take advantage of motion graphs to synthesize natural locomotion and overcome their limitations in synthesizing manipulation motions by combining them with an inverse kinematics method for synthesizing upper-body motions.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_7-Hybrid_Motion_Graphs_for_Character_Animation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Approach for Word Sense Disambiguation Using Genetic Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070106</link>
        <id>10.14569/IJACSA.2016.070106</id>
        <doi>10.14569/IJACSA.2016.070106</doi>
        <lastModDate>2016-02-01T14:06:11.7930000+00:00</lastModDate>
        
        <creator>Dr. Bushra Kh. AlSaidi</creator>
        
        <subject>unsupervised method; genetic algorithms; word sense disambiguation; Natural Language Processing; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Word sense disambiguation (WSD) is a significant field in computational linguistics as it is indispensable for many language understanding applications. Automatic processing of documents is made difficult by the fact that many of the terms they contain are ambiguous. Word Sense Disambiguation (WSD) systems try to resolve these ambiguities and find the correct meaning. Genetic algorithms can be applied to this problem, since they have been effectively applied to many optimization problems. In this paper, a genetic algorithm is proposed to solve the word sense disambiguation problem by automatically selecting the intended meaning of a word in context without any additional resources. The proposed algorithm is evaluated on a collection of documents and produces many senses for the ambiguous words; the system creates dynamic and up-to-date word senses in a highly automatic manner.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_6-Automatic_Approach_for_Word_Sense_Disambiguation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Hyperchaotic System for Image Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070105</link>
        <id>10.14569/IJACSA.2016.070105</id>
        <doi>10.14569/IJACSA.2016.070105</doi>
        <lastModDate>2016-02-01T14:06:11.7630000+00:00</lastModDate>
        
        <creator>Asst. Prof. Dr. Alia Karim Abdul Hassan</creator>
        
        <subject>hyperchaos; logistic map; H&#233;non map; image; encryption; decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>This paper presents a new hyperchaotic system based on the H&#233;non and Logistic maps which provides high capacity, security and efficiency. The proposed hyperchaotic system is employed to generate the key for diffusion in an image encryption algorithm. Simulation experiments on the image encryption algorithm based on the proposed hyperchaotic system show that it has a large key space (10^84, which ensures strong resistance against exhaustive attacks), strong sensitivity to the encryption key and good statistical characteristics. Encryption and decryption times are suitable for different applications.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_5-Proposed_Hyperchaotic_System_for_Image_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Features Management and Middleware of Hybrid Cloud Infrastructures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070104</link>
        <id>10.14569/IJACSA.2016.070104</id>
        <doi>10.14569/IJACSA.2016.070104</doi>
        <lastModDate>2016-02-01T14:06:11.7300000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Oleg Lukyanchikov</creator>
        
        <creator>Evgeniy Pluzhnik</creator>
        
        <creator>Dmitry Biryukov</creator>
        
        <subject>Cloud Infrastructure; Distributed Databases; Hybrid Clouds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>The wide spread of cloud computing has created the need to develop specialized approaches to the design, management and programming of cloud infrastructures. This article reviews the peculiarities of hybrid clouds and of middleware development adapted to implementing the principles of governance and of change in the structure of data storage in clouds. Examples and results of experimental research are presented.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_4-Features_Management_and_Middleware_of_Hybrid_Cloud_Infrastructures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Unified Forensic Framework for Data Identification and Collection in Mobile Cloud Social Network Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070103</link>
        <id>10.14569/IJACSA.2016.070103</id>
        <doi>10.14569/IJACSA.2016.070103</doi>
        <lastModDate>2016-02-01T14:06:11.7130000+00:00</lastModDate>
        
        <creator>Muhammad Faheem</creator>
        
        <creator>Dr Tahar Kechadi</creator>
        
        <creator>Dr An Le Khac</creator>
        
        <subject>Mobile cloud computing; forensics; mobile cloud forensics; social networking applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Mobile Cloud Computing (MCC) is an emerging and well-accepted concept that significantly removes the constraints of mobile devices in terms of storage and computing capabilities, improves productivity, enhances performance, saves energy, and elevates the user experience. The consolidation of cloud computing, wireless communication infrastructure, portable computing devices, location-based services, and the mobile web has led to the inauguration of a novel computing model. Mobile social networks and cloud computing technology have gained rapid and intensive attention in recent years because of their numerous benefits. Despite being an advanced technology for communicating and socializing with friends, the diverse and anonymous nature of mobile cloud social networking applications makes them very vulnerable to crime and illegal activities. Given the benefits of mobile cloud computing, forensic assistance based on mobile cloud computing could offer a solution to the problem of social networking applications. Therefore, this work proposes a Mobile Cloud Forensic Framework (MCFF) to facilitate forensic investigation in social networking applications. The MCFF comprises two components: the forensic logging module and the forensic investigation process. The forensic logging module is a readiness component that is installed both on the device and in the cloud. The ClouDroid Inspector (CDI) tool uses the records traced by the forensic logging module and conducts the investigation on both the mobile device and the cloud. The MCFF identifies and collects the automatically synchronized copies of data in both the mobile and cloud environments to prove and establish the use of cloud services via smartphones.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_3-A_Unified_Forensic_Framework_for_Data_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Virtual Worlds: Business and Learning Opportunities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070102</link>
        <id>10.14569/IJACSA.2016.070102</id>
        <doi>10.14569/IJACSA.2016.070102</doi>
        <lastModDate>2016-02-01T14:06:11.6830000+00:00</lastModDate>
        
        <creator>Aasim Munir Dad</creator>
        
        <creator>Professor Barry Davies</creator>
        
        <creator>Dr Andrew Kear</creator>
        
        <subject>Virtual Worlds; Social Networking Sites; Virtual Reality; Virtual Education Environments; Virtual Commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>Virtual worlds (VWs) are widespread and easily accessible to common internet users nowadays. Millions of users are already living their virtual lives in these worlds, and the number of users is increasing continuously. The purpose of this paper is to review the business opportunities in these virtual worlds along with the learning opportunities for real-world companies and business students. This paper clearly and precisely defines virtual worlds in the context of social networking sites and also aims at discussing the past, present and future of VWs. All the possible business opportunities for real-world companies, including advertisement &amp; communication, retailing opportunities, applications for human resource management, marketing research and organizations&#39; internal process management through virtual worlds, are critically reviewed here. In addition, current learning and training opportunities for real-world companies and business students are also reviewed. The paper aims at showing that VWs are full of business and marketing applications and could be widely used by real-world companies for effective and efficient business operations.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_2-3D_Virtual_Worlds_Business_and_Learning_Opportunities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content Based Image Retrieval Using Gray Scale Weighted Average Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2016.070101</link>
        <id>10.14569/IJACSA.2016.070101</id>
        <doi>10.14569/IJACSA.2016.070101</doi>
        <lastModDate>2016-02-01T14:06:11.6200000+00:00</lastModDate>
        
        <creator>Kamlesh Kumar</creator>
        
        <creator>Jian-Ping Li</creator>
        
        <creator>Zain-ul-abidin</creator>
        
        <creator>Riaz Ahmed Shaikh</creator>
        
        <subject>Color Weighted Average Method; Gray Scale Weighted Average Method; Feature Extraction; Precision; Recall; CBIR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 7(1), 2016</description>
        <description>High feature-vector dimensionality has quietly remained a curse for Content Based Image Retrieval (CBIR) systems, eventually degrading their efficiency when indexing similar images from a database. This paper proposes a CBIR system using the Gray Scale Weighted Average technique to reduce the feature vector dimension. The proposed method is more suitable for color and texture image feature analysis than the color weighted average method described in the literature review. To prove the effectiveness of the retrieval system, two standard benchmark datasets for color and texture, namely Wang and the Amsterdam Library of Texture Images (ALOT), have been selected to evaluate the retrieval accuracy and efficiency of each method. For image similarity, the Euclidean distance has been employed, which matches the query image feature vector with the image database feature vectors. The experimental results generated by the two methods show that the overall performance of the proposed method is relatively better in terms of average precision, average recall and average retrieval time.</description>
        <description>http://thesai.org/Downloads/Volume7No1/Paper_1-Content_Based_Image_Retrieval_Using_Gray_Scale_Weighted_Average_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bidirectional Extraction of Phrases for Expanding Queries in Academic Paper Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050105</link>
        <id>10.14569/IJARAI.2016.050105</id>
        <doi>10.14569/IJARAI.2016.050105</doi>
        <lastModDate>2016-01-11T07:08:42.8600000+00:00</lastModDate>
        
        <creator>Yuzana Win</creator>
        
        <creator>Tomonari Masada</creator>
        
        <subject>word n-grams; Jaccard similarity; PageRank; TF-IDF; query expansion; information retrieval; feature extraction</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(1), 2016</description>
        <description>This paper proposes a new method for query expansion based on the bidirectional extraction of phrases as word n-grams from research paper titles. The proposed method aims to extract information relevant to users’ needs and interests and thus to provide a useful system for technical paper retrieval. The outcome of the proposed method is a set of trigrams, i.e., phrases that can be used for query expansion. First, word trigrams are extracted from research paper titles. Second, a co-occurrence graph of the extracted trigrams is constructed, considering the direction of edges in two ways: forward and reverse. In the forward and reverse co-occurrence graphs, the trigrams point to other trigrams appearing after and before them in a paper title, respectively. Third, the Jaccard similarity is computed between trigrams as the weight of each graph edge. Fourth, a weighted version of PageRank is applied. Consequently, two types of phrases can be obtained as the trigrams with the higher PageRank scores. Trigrams of the first type, obtained from the forward co-occurrence graph, can form a more specific query when users add a technical word or words before them. Those of the other type, obtained from the reverse co-occurrence graph, can form a more specific query when users add a technical word or words after them. The extracted phrases are evaluated as additional features in a paper title classification task using SVM. The experimental results show that the classification accuracy improves over the accuracy achieved when only the standard TF-IDF text features are used. Moreover, the trigrams extracted by the proposed method can be utilized to expand query words in research paper retrieval.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No1/Paper_5-Bidirectional_Extraction_of_Phrases_for_Expanding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Comparison of Tree-Based Learning Algorithms: An Egyptian Rice Diseases Classification Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050104</link>
        <id>10.14569/IJARAI.2016.050104</id>
        <doi>10.14569/IJARAI.2016.050104</doi>
        <lastModDate>2016-01-11T07:08:42.7830000+00:00</lastModDate>
        
        <creator>Mohammed E. El-Telbany</creator>
        
        <creator>Mahmoud Warda</creator>
        
        <subject>Data Mining; Classification; Decision Trees; Bayesian Network; Random Forest; Rice Diseases</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(1), 2016</description>
        <description>The application of learning algorithms in knowledge discovery is a promising and relevant area of research. Classification algorithms from data mining have been successfully applied in recent years to predict Egyptian rice diseases. Various classification algorithms can be applied to such data to devise methods that can predict the occurrence of diseases. However, the accuracy of such techniques differs according to the learning and classification rule used, and identifying the best classification algorithm among all those available is a challenging task. In this study, a comprehensive comparative analysis of different tree-based classification algorithms has been carried out, and their performance evaluated using an Egyptian rice diseases data set. The experimental results demonstrate the performance of each classifier and indicate that the decision tree gave the best results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No1/Paper_4-An_Empirical_Comparison_of_Tree_Based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Cirrus Cloud Detection Accuracy of GOSAT/CAI and Landsat-8 with Laser Radar: Lidar and Confirmation with Calipso Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050103</link>
        <id>10.14569/IJARAI.2016.050103</id>
        <doi>10.14569/IJARAI.2016.050103</doi>
        <lastModDate>2016-01-11T07:08:42.6100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masanori Sakashita</creator>
        
        <subject>Cirrus cloud; GOSAT/CAI; Landsat; LiDAR; Sky view camera; Calipso; topographic representation of 3D clouds</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(1), 2016</description>
        <description>Cirrus cloud detection accuracy of GOSAT/CAI and Landsat-8 is evaluated with ground-based Laser Radar (Lidar) data and sky view camera data. The evaluation results are also confirmed with Calipso data, together with a topographic representation of the vertical profile of the cloud structure. Furthermore, the origin of the cirrus clouds is estimated with forward trajectory analysis. The results show that GOSAT/CAI-derived cirrus cloud detection is not accurate enough, owing to the lack of a spectral channel for cirrus cloud detection, whereas Landsat-8 is able to derive cirrus clouds.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No1/Paper_3-Evaluation_of_Cirrus_Cloud_Detection_Accuracy_of_GOSATCAI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rescue System with Health Condition Monitoring Together with Location and Attitude Monitoring as Well as the Other Data Acquired with Mobile Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050102</link>
        <id>10.14569/IJARAI.2016.050102</id>
        <doi>10.14569/IJARAI.2016.050102</doi>
        <lastModDate>2016-01-11T07:08:42.5800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Taka Eguchi</creator>
        
        <subject>Rescue system; Location estimation; Attitude estimation; Health monitoring; Mobile applications;  Triage; Rescue planning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(1), 2016</description>
        <description>A rescue system with health condition monitoring together with location and attitude monitoring, as well as other data acquired with mobile devices, is proposed. A backup system for location estimation is also proposed: in place of GPS receivers and WiFi beacon receivers, ZigBee is used as a backup. Attitude can be monitored with the accelerometers built into commercially available smartphones, and the number of steps and calorie consumption can also be monitored with these devices. By using these body-attached sensors, the health condition of persons who need rescue in emergency situations can be monitored and used for rescue planning and triage. The overall system configuration is proposed together with detailed system descriptions and some experimental data.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No1/Paper_2-Rescue_System_with_Health_Condition_Monitoring_Together_with_Location.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Optimization Methods for Estimation of Sea Surface Temperature and Ocean Wind with Microwave Radiometer Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2016</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2016.050101</link>
        <id>10.14569/IJARAI.2016.050101</id>
        <doi>10.14569/IJARAI.2016.050101</doi>
        <lastModDate>2016-01-11T07:08:42.4870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Microwave radiometer; remote sensing; sea surface temperature; nonlinear optimization theory; simulated annealing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 5(1), 2016</description>
        <description>A comparative study of optimization methods for the estimation of sea surface temperature and ocean wind with microwave radiometer data is conducted. The well-known mesh method (Grid Search Method: GSM), a regressive method, and the simulated annealing method are compared. Surface emissivity is estimated with simulated annealing and compared to the well-known Thomas T. Wilheit model-based emissivity. In addition, the brightness temperature of the microwave radiometer as a function of observation angle is estimated by simulated annealing and compared to actual microwave radiometer data. Simultaneous estimation of sea surface temperature and ocean wind speed is also carried out by simulated annealing and compared to the estimates obtained by GSM. The experimental results show that simulated annealing, which allows estimation of the global optimum, is superior to the other methods to some extent.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume5No1/Paper_1-Comparative_Study_of_Optimization_Methods_for_Estimation_of_Sea_Surface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine-Grained Quran Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061241</link>
        <id>10.14569/IJACSA.2015.061241</id>
        <doi>10.14569/IJACSA.2015.061241</doi>
        <lastModDate>2016-01-05T07:11:30.9970000+00:00</lastModDate>
        
        <creator>Mohamed Osman Hegazi</creator>
        
        <creator>Anwer Hilal</creator>
        
        <creator>Mohammad Alhawarat</creator>
        
        <subject>Arabic Language; Holy Quran; Quranic Dataset; Text Mining; NLP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Extracting knowledge from text documents has become one of the main hot topics in the field of Natural Language Processing (NLP) in the era of information explosion. Arabic NLP is considered immature for several reasons, including the scarcity of available resources. On the other hand, automatically extracting reliable knowledge from specialized data sources such as holy books is an ultimately challenging task, but one of great benefit to all humans. In this context, this paper provides a comprehensive Quranic dataset as the first part (foundation) of ongoing research that attempts to lay the grounds for approaches and applications to explore the holy Quran. The paper presents the algorithms and approaches designed to extract aggregative data from massive Arabic text sources, including the holy Quran and tightly associated books. The holy Quran text is transformed into structured multi-dimensional data records, starting from the chapter level, then the word level, and finally the character level. All of these are linked with interpretations and meanings, parsing, translations, intonation, and the roots and stems of words, all from authentic and reliable sources. The final dataset is represented in Excel sheets and database record formats. The paper also presents models of the dataset at all levels. The Quranic dataset presented in this paper was designed to be appropriate for database, data mining, text mining and Artificial Intelligence applications; it is also designed to serve as a comprehensive encyclopedia of the holy Quran and the Quranic Science books.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_41-Fine_grained_Quran_Dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Talking Avatar on the Internet Using Kinect and Voice Conversion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061240</link>
        <id>10.14569/IJACSA.2015.061240</id>
        <doi>10.14569/IJACSA.2015.061240</doi>
        <lastModDate>2015-12-31T11:53:58.1030000+00:00</lastModDate>
        
        <creator>Takashi Nose</creator>
        
        <creator>Yuki Igarashi</creator>
        
        <subject>Talking avatar; Voice conversion; Kinect; Internet; Real-time communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>We have more and more chances to communicate via the internet. We often use text/video chat, but there are some problems, such as a lack of communication and anonymity. In this paper, we propose and implement a real-time talking avatar, through which we can communicate with each other by synchronizing the character's voice and motion with our own while keeping anonymity through a voice conversion technique. For the voice conversion, we improve its accuracy by specializing it to the target character's voice. Finally, we conduct subjective experiments and show the possibility of a new style of communication on the internet.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_40-Real_Time_Talking_Avatar_on_the_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Localisation of Information and Communication Technologies in Cameroonian Languages and Cultures: Experience and Issues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061239</link>
        <id>10.14569/IJACSA.2015.061239</id>
        <doi>10.14569/IJACSA.2015.061239</doi>
        <lastModDate>2015-12-31T11:53:58.0870000+00:00</lastModDate>
        
        <creator>Mathurin Soh</creator>
        
        <creator>Jean Romain Kouesso</creator>
        
        <creator>Laure Pauline Fotso</creator>
        
        <subject>Culture, Digital divide, ICTs, Language divide, Localisation, National language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In this paper, we tackle the problem of adapting Information and Communication Technologies (ICTs) to the local languages of Cameroon. The objectives are to reduce the digital and language divides, and to pave the way for the usage of such technologies by local populations who do not understand this technological language. We first discuss and highlight several concerns about the localisation of ICTs. Afterwards, we address some challenges and issues in computerizing cultural and linguistic features and indigenous knowledge (IK) for national languages and cultures in Cameroon. As a case study, we describe our experience in localising an open-source editor for the Yemba language within the Rural Electronic Schools in African Languages Project. Because Cameroonian languages are based on the same basic alphabet, this qualitative research is extensible to other languages.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_39-Localisation_of_Information_and_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extracting Topics from the Holy Quran Using Generative Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061238</link>
        <id>10.14569/IJACSA.2015.061238</id>
        <doi>10.14569/IJACSA.2015.061238</doi>
        <lastModDate>2015-12-31T11:53:58.0570000+00:00</lastModDate>
        
        <creator>Mohammad Alhawarat</creator>
        
        <subject>Statistical models; Latent Dirichlet Allocation (LDA); Holy Quran; Unsupervised Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The holy Quran is one of the Holy Books of God. It is considered one of the main references for an estimated 1.6 billion Muslims around the world. The language of the Holy Quran is Arabic. Both specialized and non-specialized people in religion need to search and look up certain information from the Holy Quran. Most research projects concentrate on translations of the holy Quran into different languages; few pay attention to the original Arabic text of the holy Quran. Keyword search is one of the Information Retrieval (IR) methods, but it retrieves only exact matches. Semantic search aims at finding deeper meanings of a text, and it is a hot field of study in Natural Language Processing (NLP). In this paper, topic modeling techniques are explored to set up a framework for semantic search in the holy Quran. As the Holy Quran is the word of God, its meanings are unlimited. As a case study, the words of the chapter of Joseph (Peace Be Upon Him (PBUH)) from the Holy Quran are analyzed based on topic modeling techniques. The Latent Dirichlet Allocation (LDA) topic modeling technique has been applied to two structures (Hizb Quarters and verses) of the Joseph chapter, at the level of words, roots and stems. The log-likelihood has been calculated for the two structures of the chapter. Results show that the best structure to use is verses, which gives the least energy for the data. Some of the attained topics are shown. These results suggest that topic modeling techniques failed to accurately capture the coherent topics of the chapter.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_38-Extracting_Topics_from_the_Holy_Quran.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Version of Multi-algorithm Genetically Adaptive for Multiobjective optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061237</link>
        <id>10.14569/IJACSA.2015.061237</id>
        <doi>10.14569/IJACSA.2015.061237</doi>
        <lastModDate>2015-12-31T11:53:58.0270000+00:00</lastModDate>
        
        <creator>Wali Khan Mashwani</creator>
        
        <creator>Abdellah Salhi</creator>
        
        <creator>Muhammad Asif jan</creator>
        
        <creator>Rashida Adeeb Khanum</creator>
        
        <creator>Muhammad Sulaiman</creator>
        
        <subject>Multi-objective optimization, Multi-objective Evolutionary Algorithms (MOEAs), Pareto Optimality, Multi-objective Memetic Algorithms (MOMAs)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Multi-objective Evolutionary Algorithms (MOEAs) are well-established population-based techniques for solving various search and optimization problems. MOEAs employ different evolutionary operators to evolve populations of solutions, approximating the set of optimal solutions of the problem at hand in a single simulation run. Different evolutionary operators suit different problems, and the use of multiple operators with a self-adaptive capability can further improve the performance of existing MOEAs. This paper suggests an enhanced version of a genetically adaptive multi-algorithm for multi-objective optimisation (AMALGAM), which includes differential evolution (DE), particle swarm optimization (PSO), simulated binary crossover (SBX), the Pareto archived evolution strategy (PAES) and simplex crossover (SPX) for population evolution during the course of optimization. We examine the performance of this enhanced version of AMALGAM experimentally on two different test suites: the ZDT test problems and the test instances designed recently for the special session and competition on MOEAs at the 2009 Congress on Evolutionary Computation (CEC'09). The suggested algorithm found better approximate solutions on most test problems in terms of the inverted generational distance (IGD) metric.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_37-Enhanced_Version_of_Multi_algorithm_Genetically.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Denial of Service Attack in Wireless Network using Dominance based Rough Set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061236</link>
        <id>10.14569/IJACSA.2015.061236</id>
        <doi>10.14569/IJACSA.2015.061236</doi>
        <lastModDate>2015-12-31T11:53:58.0100000+00:00</lastModDate>
        
        <creator>N. Syed Siraj Ahmed</creator>
        
        <creator>D. P. Acharjya</creator>
        
        <subject>Denial of service; Rough set; Lower and upper approximation; Dominance relation; Data analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>A denial-of-service (DoS) attack aims to block the services of a victim system either temporarily or permanently by sending a huge amount of garbage traffic, in protocols such as the transmission control protocol, user datagram protocol, internet control message protocol, and hypertext transfer protocol, from single or multiple attacker nodes. Maintaining an uninterrupted service system is technically difficult as well as economically costly. As new system vulnerabilities emerge, new techniques for detecting them have been implemented. In general, probabilistic packet marking (PPM) and deterministic packet marking (DPM) are used to identify DoS attacks. Later, an intelligent decision prototype was proposed; its main advantage is that it can be used with both PPM and DPM. However, it is observed that the data available in wireless network information systems contain uncertainties. Therefore, an effort has been made to detect DoS attacks using a dominance-based rough set. The accuracy of the proposed model over the KDD Cup dataset is 99.76, higher than the accuracy achieved by the resilient back propagation (RBP) model.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_36-Detection_of_Denial_of_Service_Attack_in_Wireless.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Database Preservation: The DBPreserve Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061235</link>
        <id>10.14569/IJACSA.2015.061235</id>
        <doi>10.14569/IJACSA.2015.061235</doi>
        <lastModDate>2015-12-31T11:53:57.9800000+00:00</lastModDate>
        
        <creator>Arif Ur Rahman</creator>
        
        <creator>Muhammad Muzammal</creator>
        
        <creator>Gabriel David</creator>
        
        <creator>Cristina Ribeiro</creator>
        
        <subject>Database Preservation, Transformation Rules</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In many institutions, relational databases are used as a tool for managing information related to day-to-day activities. Institutions may be required to keep the information stored in relational databases accessible for many reasons, including legal requirements and institutional policies. However, the evolution of technology and the change of users over time put the information stored in relational databases in danger. In the long term, the information may become inaccessible when the operating system, database management system or application software is no longer available, or contextual information not stored in the database may be lost, thus affecting the authenticity and understandability of the information.
This paper presents an approach for preserving relational databases for the long term. The proposal involves migrating a relational database to a dimensional model, which is simple to understand and easy to write queries against. Practical transformation rules are developed by carrying out multiple case studies, one of which is presented as a running example in the paper. Systematic implementation of the rules ensures no loss of information in the process, except for unwanted details. The database preserved using the approach is converted to an open format but may be reloaded into a database management system in the long term.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_35-Database_Preservation_The_DBPreserve_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Algorithm for Post-Processing Covering Arrays</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061234</link>
        <id>10.14569/IJACSA.2015.061234</id>
        <doi>10.14569/IJACSA.2015.061234</doi>
        <lastModDate>2015-12-31T11:53:57.9630000+00:00</lastModDate>
        
        <creator>Carlos Lara-Alvarez</creator>
        
        <creator>Himer Avila-George</creator>
        
        <subject>Software testing; Combinatorial testing; Covering arrays; Post-Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Software testing is a critical component of modern software development. For this reason, it has been one of the most active research topics for several years, resulting in many different algorithms, methodologies and tools. Combinatorial testing is one of the most important testing strategies. The test generation problem for combinatorial testing can be modeled as constructing a matrix which has certain properties, typically this matrix is a covering array. The construction of covering arrays with the fewest rows remains a challenging problem. This paper proposes a post-processing technique that repeatedly adjusts the covering array in an attempt to reduce its number of rows. In the experiment, 85 covering arrays, created by a state-of-the-art algorithm, were subject to the reduction process. The results report a reduction in the size of 28 covering arrays (~33%).</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_34-A_New_Algorithm_for_Post_Processing_Covering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Video Streams Summarization Using Synthetic Noisy Video Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061233</link>
        <id>10.14569/IJACSA.2015.061233</id>
        <doi>10.14569/IJACSA.2015.061233</doi>
        <lastModDate>2015-12-31T11:53:57.9330000+00:00</lastModDate>
        
        <creator>Nada Jasim Al-Musawi</creator>
        
        <creator>Saad Talib Hasson</creator>
        
        <subject>Video summarization; Histogram of Oriented Gradient (HOG); Correlation coefficients (R); key frames; illumination changes; noise; Random Numbers Generator function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Surveillance camera systems are used for monitoring public areas. Reviewing and processing subsequences from large amounts of raw video streams is time- and space-consuming. Many efficient video summarization approaches have been proposed to reduce the amount of irrelevant information, but most of them do not take into consideration the illumination or lighting changes that cause noise in video sequences. In this work, a video summarization algorithm for video streams is proposed using Histogram of Oriented Gradient and correlation coefficient techniques. The algorithm is applied on a proposed multi-model dataset, created by combining the original data with dynamic synthetic data generated using a random number generator function. Experiments on this dataset showed the effectiveness of the proposed algorithm compared with the traditional dataset.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_33-Improving_Video_Streams_Summarization_Using_Synthetic_Noisy_Video_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Association Rule Hiding Techniques for Privacy Preserving Data Mining: A Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061232</link>
        <id>10.14569/IJACSA.2015.061232</id>
        <doi>10.14569/IJACSA.2015.061232</doi>
        <lastModDate>2015-12-31T11:53:57.9000000+00:00</lastModDate>
        
        <creator>Gayathiri P</creator>
        
        <creator>Dr. B Poorna</creator>
        
        <subject>Association rule mining; transactional data; privacy preservation; Association Rule Hiding (ARH); Privacy Preserving Data Mining (PPDM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Association rule mining is an efficient data mining technique that recognizes frequent items and association rules based on market basket analysis over large transactional databases. The occurrence probabilities of the most frequent transactional data items are calculated to derive association rules that represent the buying habits of customers for products in demand. However, identifying the association rules of a transactional database may expose the confidentiality and privacy of an organization or individual. Privacy Preserving Data Mining (PPDM) is a solution for privacy threats in data mining, and this issue is addressed using Association Rule Hiding (ARH) techniques within PPDM. This research work studies Association Rule Hiding techniques that identify sensitive association rules over the transactional data items and hide them. Because the rules, not the data, are hidden, the sensitive rule hiding process has minimal side effects and higher data utility.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_32-Association_Rule_Hiding_Techniques_for_Privacy_Preserving_Data_Mining_A_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>JPI UML Software Modeling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061231</link>
        <id>10.14569/IJACSA.2015.061231</id>
        <doi>10.14569/IJACSA.2015.061231</doi>
        <lastModDate>2015-12-31T11:53:57.8700000+00:00</lastModDate>
        
        <creator>Cristian Vidal Silva</creator>
        
        <creator>Leopoldo L&#243;pez</creator>
        
        <creator>Rodolfo Schmal</creator>
        
        <creator>Rodolfo Villarroel</creator>
        
        <creator>Miguel Bustamante</creator>
        
        <creator>V&#237;ctor Rea Sanchez</creator>
        
        <subject>JPI; UML; AOP; JPI UML Class Diagram; JPI UML Sequence Diagram</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Aspect-Oriented Programming (AOP) extends Object-Oriented Programming (OOP) with aspects that modularize crosscutting behavior on classes, advising base code at the occurrence of join points according to pointcut rule definitions. However, join points introduce dependencies between aspects and base code, a major obstacle to achieving effective independent development of software modules. Join Point Interfaces (JPI) represent join points using interfaces between classes and aspects, so that these modules do not depend on each other. Nevertheless, like AOP, JPI is a programming methodology; thus, for a complete aspect-oriented software development process, it is necessary to define JPI requirements and JPI modeling phases.
Toward this goal, this article proposes JPI UML class and sequence diagrams for modeling JPI software solutions. A purpose of these diagrams is to facilitate understanding of the structure and behavior of JPI programs. As an application example, this article applies the proposed JPI UML diagrams to a case study and analyzes the associated JPI code to demonstrate their consistency.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_31-Jpi_Uml_Software_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Composable Modeling Method for Generic Test Platform for Cbtc System Based on the Port Object</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061230</link>
        <id>10.14569/IJACSA.2015.061230</id>
        <doi>10.14569/IJACSA.2015.061230</doi>
        <lastModDate>2015-12-31T11:53:57.8230000+00:00</lastModDate>
        
        <creator>WAN Yongbing</creator>
        
        <creator>WANG Daqing</creator>
        
        <creator>MEI Meng</creator>
        
        <subject>composable modeling; test platform; CBTC; port object; line simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The Communications-Based Train Control (CBTC) system has gradually become the first choice for signal systems of urban mass transit, and how to guarantee its safety has become a research hotspot in the safety field. A highly efficient generic test system has become the main means of verifying the function and performance of CBTC systems. This paper discusses a composable modeling method for a generic test platform for CBTC systems based on the port object. The method defines the port object (PO) model as the basic component for composable modeling, verifies its port behavior, and generates its compositional properties. Based on the port description and the test environment description, it builds port sets and an environment port cluster, respectively. It then analyzes and extracts possible crosscutting concerns, and finally generates a variable PO component library. The modeling of block port objects in the line simulation of a generic test platform for CBTC systems is taken as an example to verify the feasibility of the method.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_30-Composable_Modeling_Method_for_Generic_Test_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Optimization Model of Wavelet Neuron for Human Iris Verification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061229</link>
        <id>10.14569/IJACSA.2015.061229</id>
        <doi>10.14569/IJACSA.2015.061229</doi>
        <lastModDate>2015-12-31T11:53:57.7770000+00:00</lastModDate>
        
        <creator>Elsayed Radwan</creator>
        
        <creator>Mayada Tarek</creator>
        
        <subject>Discrete Wavelet Transform (DWT); Wavelet Features; Wavelet Neural Network (WNN); Distributed Genetic Algorithms (GA); Human Iris Verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Automatic human iris verification is an active research area with numerous applications for security purposes. Unfortunately, most feature extraction methods in human iris verification systems are sensitive to noise, scale, and rotation. This paper proposes an integrated hybrid model combining the Discrete Wavelet Transform, Wavelet Neural Networks, and Genetic Algorithms to optimize the feature extraction and verification methods. For any iris image, the wavelet features are extracted by the Discrete Wavelet Transform without any dependency on scale or pixel intensity. In addition, a Wavelet Neural Network classifier is integrated as a local optimization method to solve the orientation problem and increase the intrinsic features. To address the downsampling caused by the DWT, each human iris should be characterized by the parameter set of its optimal wavelet analysis function at a determined analysis level. Thus, distributed Genetic Algorithms, a meta-heuristic technique, are introduced as a global optimization search to discover the optimal parameter values. The details and limitations of the approach are discussed together with a comparative study, and conclusions and future work are described.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_29-Distributed_Optimization_Model_of_Wavelet_Neuron_for_Human_Iris_Verification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-Based Clinical Decision Support System for Predicting High-Risk Pregnant Woman</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061228</link>
        <id>10.14569/IJACSA.2015.061228</id>
        <doi>10.14569/IJACSA.2015.061228</doi>
        <lastModDate>2015-12-31T11:53:57.7600000+00:00</lastModDate>
        
        <creator>Umar Manzoor</creator>
        
        <creator>Muhammad Usman</creator>
        
        <creator>Mohammed A. Balubaid</creator>
        
        <creator>Ahmed Mueen</creator>
        
        <subject>High-risk patient; Pregnant woman; Ontology-based CDSS; Clinical Decision Support System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>According to the Pakistan Medical and Dental Council (PMDC), Pakistan is facing a shortage of approximately 182,000 medical doctors. Due to this shortage, a large number of lives are in danger, especially those of pregnant women. A large number of pregnant women die every year from pregnancy complications, usually because the complications are not handled in time. In this paper, we propose an ontology-based clinical decision support system that diagnoses high-risk pregnant women and refers them to qualified medical doctors for timely treatment. The ontology of the proposed system is built automatically and enhanced afterward using doctors&#39; feedback. The proposed framework has been tested on a large number of test cases; the experimental results are satisfactory and support the implementation of the solution.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_28-Ontology_Based_Clinical_Decision_Support_System_for_Predicting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Adaptive Mobile Learning (AML) on Information System Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061227</link>
        <id>10.14569/IJACSA.2015.061227</id>
        <doi>10.14569/IJACSA.2015.061227</doi>
        <lastModDate>2015-12-31T11:53:57.7470000+00:00</lastModDate>
        
        <creator>I Made Agus Wirawan</creator>
        
        <creator>Made Santo Gitakarna</creator>
        
        <subject>Mobile Learning; Information System Course; Learning Media; Adaptive Learning; Learners Response; Research and Development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In general, the learning process is conducted conventionally, face to face between teachers and learners in the classroom. Teachers play a very important role in determining the quantity and quality of instruction, and must therefore think and plan carefully to improve learning opportunities for learners and the quality of teaching. With the rapid development of mobile technology and communications, the learning process is no longer confined to the classroom and can take place anywhere and anytime. Based on classroom observations conducted by the researcher, who also teaches the Information Systems courses, several obstacles were found in the learning process.
This research develops an Adaptive Mobile Learning (AML) application for Information Systems courses. The method used is research and development, with the design development following the System Development Life Cycle model. Adaptive Mobile Learning was validated and tested through three phases: (1) a technical test of the product as software; (2) testing of the product as a learning medium, through expert review by a media expert; and (3) a field test to evaluate the response of the students who used Adaptive Mobile Learning.
The results show that the Adaptive Mobile Learning software can present the material of the Information Systems courses, and that Adaptive Mobile Learning can serve as an alternative (supplementary) medium for learning in these courses. The students&#39; response to the development and use of the software was very favorable: 67.7% very positive and 32.3% positive.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_27-Development_of_Adaptive_Mobile_Learning_AML_on_Information_System_Courses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Mobility Management Model for Heterogeneous Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061226</link>
        <id>10.14569/IJACSA.2015.061226</id>
        <doi>10.14569/IJACSA.2015.061226</doi>
        <lastModDate>2015-12-31T11:53:57.7130000+00:00</lastModDate>
        
        <creator>Sanjeev Prakash</creator>
        
        <creator>R B Patel</creator>
        
        <creator>V. K. Jain</creator>
        
        <subject>FNS; MNS; MN; WLAN; Mobile Agent</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Growing consumer demand for access to communication services in a ubiquitous environment is a driving force behind the development of new technologies. Rapid development in communication technology permits end users to access heterogeneous wireless networks and utilize a wide range of data-rate services &quot;anywhere, any time&quot;. This drives technology developers to integrate different wireless access technologies, an integration known as the fourth generation (4G). It has become possible to reduce the size of mobile nodes (MNs) with manifold network interfaces, alongside the development of IP-based applications. The 4G mobile/wireless computing and communication heterogeneous environment consists of various access technologies that differ in bandwidth, network conditions, service type, latency, and cost. A major challenge of the 4G wireless network is seamless vertical handoff across the heterogeneous wireless access networks as users roam in the heterogeneous wireless network environment. Today, communication devices are portable, equipped with manifold interfaces, and capable of roaming seamlessly among the various access technology networks to maintain network connectivity, since no single-interface technology provides ubiquitous coverage and quality of service (QoS).
This paper reports a mobile-agent-based heterogeneous wireless network management system in which the agent&#39;s decisions are driven by a multi-parameter system (MPS). The system works on the parameters of network delay, received signal strength, network latency, and collected information about adjoining network cells, viz., accessible channels. The system is simulated and a comparative study is made. The results show that the system improves the performance of the wireless network.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_26-Intelligent_Mobility_Management_Model_for_Heterogeneous_Wireless_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Cancer Biomarkers Via Node Classification within a Mapreduce Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061225</link>
        <id>10.14569/IJACSA.2015.061225</id>
        <doi>10.14569/IJACSA.2015.061225</doi>
        <lastModDate>2015-12-31T11:53:57.6830000+00:00</lastModDate>
        
        <creator>Taysir Hassan A. Soliman</creator>
        
        <subject>Big data; cancer biomarkers; MapReduce; node classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Big data pose new research challenges in the life sciences domain because of their variety, volume, veracity, velocity, and value. Predicting gene biomarkers is one of the vital research issues in bioinformatics, where microarray gene expression and network-based methods can be used. These datasets suffer from huge data volume, causing main-memory problems. In this paper, a Random Committee Node Classifier algorithm (RCNC) is proposed for identifying cancer biomarkers, based on microarray gene expression data and Protein-Protein Interaction (PPI) data. The data are enriched from other public databases, such as IntAct, UniProt, and Gene Ontology (GO). When applied to different datasets, cancer biomarkers are identified with 99.16% accuracy, 99.96% precision, 99.24% recall, 99.16% F1-measure, and 99.6 ROC. To speed up performance, the algorithm is run within a MapReduce framework, where the RCNC MapReduce algorithm is much faster than the sequential RCNC algorithm on large datasets.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_25-Identifying_Cancer_Biomarkers_Via_Node_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of K-Mean and Fuzzy C-Mean Image Segmentation Based Clustering Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061224</link>
        <id>10.14569/IJACSA.2015.061224</id>
        <doi>10.14569/IJACSA.2015.061224</doi>
        <lastModDate>2015-12-31T11:53:57.6500000+00:00</lastModDate>
        
        <creator>Hind R.M Shaaban</creator>
        
        <creator>Farah Abbas Obaid</creator>
        
        <creator>Ali Abdulkarem Habib</creator>
        
        <subject>Segmentation; image segmentation; Evaluation image Segmentation; K-means clustering; Fuzzy C-means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>This paper presents a performance evaluation of K-means and Fuzzy C-means clustering-based image segmentation classifiers. The clustering stage is followed by thresholding and level-set segmentation stages to provide accurate region segments, allowing the approach to retain the benefits of K-means clustering. The image segmentation approaches were evaluated by comparing the K-means and Fuzzy C-means algorithms in terms of accuracy, processing time, clustering classifier, and features. The database consists of 40 images processed by the K-means and Fuzzy C-means clustering-based segmentation classifiers. The experimental results confirm the effectiveness of the proposed Fuzzy C-means clustering-based segmentation classifier. The mean values of Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and discrepancy are used as statistical measures for the performance evaluation of the K-means and Fuzzy C-means image segmentation. Higher accuracy is obtained by increasing the number of classified clusters and by using Fuzzy C-means image segmentation.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_24-Performance_Evaluation_of_K_Mean_and_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tree-Combined Trie: A Compressed Data Structure for Fast IP Address Lookup</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061223</link>
        <id>10.14569/IJACSA.2015.061223</id>
        <doi>10.14569/IJACSA.2015.061223</doi>
        <lastModDate>2015-12-31T11:53:57.6030000+00:00</lastModDate>
        
        <creator>Muhammad Tahir</creator>
        
        <creator>Shakil Ahmed</creator>
        
        <subject>IP address lookup; compression; dynamic data structure; IPv6</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>To meet the requirements of the high-speed Internet and satisfy Internet users, building fast routers with a high-speed IP address lookup engine is inevitable. Given the unpredictable variations that occur in forwarding information over time and space, the IP lookup algorithm should be able to adapt itself to temporal and spatial conditions. This paper proposes a new dynamic data structure for fast IP address lookup. This novel data structure is a dynamic mixture of trees and tries called the Tree-Combined Trie, or simply TC-Trie. Binary sorted trees are more advantageous than tries for representing a sparse population, while multibit tries perform better than trees when a population is dense. TC-Trie combines the advantages of binary sorted trees and multibit tries to achieve maximum compression of the forwarding information. Dynamic reconfiguration makes TC-Trie capable of adapting itself over time and scaling to support more prefixes or longer IPv6 prefixes. TC-Trie provides a smooth transition from today&#39;s large IPv4 databases to the large IPv6 databases of the future Internet.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_23-Tree_Combined_Trie_A_Compressed_Data_Structure_for_Fast_Ip_Address_Lookup.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pneumatic Launcher Based Precise Placement Model for Large-Scale Deployment in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061222</link>
        <id>10.14569/IJACSA.2015.061222</id>
        <doi>10.14569/IJACSA.2015.061222</doi>
        <lastModDate>2015-12-31T11:53:57.5730000+00:00</lastModDate>
        
        <creator>Vikrant Sharma</creator>
        
        <creator>R B Patel</creator>
        
        <creator>H S Bhadauria</creator>
        
        <creator>D Prasad</creator>
        
        <subject>WSN; deployment; placement; aerial; coverage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Sensor nodes (SNs) are small, low-cost devices used to facilitate automation, remote control, and monitoring. A wireless sensor network (WSN) is an environment-monitoring network formed by a number of SNs connected by a wireless medium. Deployment of SNs is an essential phase in the life of a WSN, as all the other performance metrics, such as connectivity, lifetime, and coverage, directly depend on it. Moreover, the task of deployment becomes challenging when the WSN is to be established over a large-scale candidate region within a limited time interval in order to deal with emergency conditions. In this paper, a model for time-efficient and precise placement of SNs in a large-scale candidate region is proposed. It consists of two sets of pneumatic launchers (PLs), one on either side of a deployment helicopter. Each PL is governed by software that determines the launch time and velocity of a SN for its precise placement at a predetermined position. Simulation results show that the proposed scheme is more time-efficient, feasible, and cost-effective than the existing state-of-the-art deployment models and can be adopted as an effective alternative for dealing with emergency conditions.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_22-Pneumatic_Launcher_Based_Precise_Placement_Model_for_Large_Scale.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Synchronous Stream Cipher Generator Based on Quadratic Fields (SSCQF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061221</link>
        <id>10.14569/IJACSA.2015.061221</id>
        <doi>10.14569/IJACSA.2015.061221</doi>
        <lastModDate>2015-12-31T11:53:57.5570000+00:00</lastModDate>
        
        <creator>Younes ASIMI</creator>
        
        <creator>Ahmed ASIMI</creator>
        
        <subject>Synchronous stream cipher SSCQF; linear feedback shift registers LFSRs; arithmetic of quadratic fields; Boolean functions; pseudorandom number generator and keystream generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In this paper, we propose a new synchronous stream cipher called SSCQF whose secret key is Ks = (z1,..., zN), where each zi is a positive integer. Let d1, d2,..., dN be N positive integers in {0, 1,..., 2^m - 1} such that di = zi mod 2^m, with m &gt;= 8. Our purpose is to combine linear feedback shift registers (LFSRs), the arithmetic of quadratic fields (more precisely, the unit group of quadratic fields), and Boolean functions [14]. Encryption and decryption are done by XORing the output of the pseudorandom number generator with the plaintext and the ciphertext, respectively. The proposed stream generator SSCQF relies on the three following processes. In Process I, we construct the initial vectors IV = {X1,..., XN} from the secret key Ks = (z1,..., zN) by using the fundamental unit of Q(√di) if di is a square-free integer, and by splitting di otherwise. In Process II, we regenerate from the vectors Xi the vectors Yi, each of the same length L, divisible by 8 (equations (2) and (3)). In Process III, to each Yi we assign L/8 linear feedback shift registers, each of length eight. We thus obtain N x L/8 linear feedback shift registers that are initialized by the binary sequences regenerated in Process II and filtered by primitive polynomials, and we combine their binary output sequences with L/8 Boolean functions. The keystream generator, denoted K, is the concatenation of the output binary sequences of all the Boolean functions.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_21-A_Synchronous_Stream_Cipher_Generator_Based_on_Quadratic_Fields.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Carrier Signal Approach for Intermittent Fault Detection and Health Monitoring for Electronics Interconnections System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061220</link>
        <id>10.14569/IJACSA.2015.061220</id>
        <doi>10.14569/IJACSA.2015.061220</doi>
        <lastModDate>2015-12-31T11:53:57.5100000+00:00</lastModDate>
        
        <creator>Syed Wakil Ahmad</creator>
        
        <creator>Dr. Suresh Perinpanayagam</creator>
        
        <creator>Prof. Ian Jennions</creator>
        
        <creator>Dr. Mohammad Samie</creator>
        
        <subject>NFF; Intermittent; Intermittency; Fault detection; Health Monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Intermittent faults are completely missed by traditional monitoring and detection techniques due to the non-stationary nature of the signals. These incipient events are precursors of permanent faults to come. Intermittent faults in electrical interconnections are short-duration transients that can be detected by some specific techniques, but these techniques do not provide enough information to understand the root cause. Due to their random and unpredictable nature, intermittent faults are the most frustrating, elusive, and expensive faults to detect in an interconnection system. The authors&#39; novel approach injects a fixed-frequency sinusoidal signal into the electronics interconnection system, which is modulated by any intermittent fault present. Intermittent faults and other channel effects are computed from the received signal by demodulation and spectrum analysis. This paper describes the technology for intermittent fault detection, the classification of intermittent faults, and channel characterization. The paper also reports functional tests of the computational system implementing the proposed methods. The algorithm has been tested using an experimental setup that generates an intermittent signal by applying external vibration stress to a connector; the intermittency is detected by acquiring and processing the propagating signal. The results demonstrate the ability to detect and classify intermittent interconnections and noise variations due to intermittency. Monitoring the channel in situ with a low-amplitude, narrow-band signal over the electronics interconnection between a transmitter and a receiver provides an effective tool for continuously watching the wire system for random, unpredictable intermittent faults, the precursors of failure.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_20-A_Carrier_Signal_Approach_for_Intermittent_Fault_Detection_and_Health_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on the Internet of Things Software Arhitecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061219</link>
        <id>10.14569/IJACSA.2015.061219</id>
        <doi>10.14569/IJACSA.2015.061219</doi>
        <lastModDate>2015-12-31T11:53:57.4800000+00:00</lastModDate>
        
        <creator>Nicoleta-Cristina Gaitan</creator>
        
        <creator>Vasile Gheorghita Gaitan</creator>
        
        <creator>Ioan Ungurean</creator>
        
        <subject>middleware; Internet of Things; things; software architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The Internet of Things (IoT) is a concept and a paradigm that considers the pervasive presence in the environment of a variety of things/objects which, through wired or wireless connections, are uniquely addressed and are able to interact with each other and cooperate with other things/objects in order to create new applications/services and to achieve common objectives. IoT defines a new world where the real, the digital, and the virtual converge to create an environment that makes energy, transport, cities, and many other areas more intelligent. The purpose of IoT is to enable connections of the type anytime, anywhere, with everything and everyone. IoT may be considered a network of physical objects with embedded communication technologies that &#39;feel&#39; or interact with the internal or external environment. This paper presents a survey of Internet of Things software architectures that meet the requirements listed above.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_19-A_Survey_on_the_Internet_of_Things_Software_Arhitecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection System in Wireless Sensor Networks: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061218</link>
        <id>10.14569/IJACSA.2015.061218</id>
        <doi>10.14569/IJACSA.2015.061218</doi>
        <lastModDate>2015-12-31T11:53:57.4500000+00:00</lastModDate>
        
        <creator>Anush Ananthakumar</creator>
        
        <creator>Tanmay Ganediwal</creator>
        
        <creator>Dr. Ashwini Kunte</creator>
        
        <subject>Wireless sensor networks; Intrusion Detection System; Signature based IDS; Anomaly based IDS; Hybrid based IDS; Algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The security of wireless sensor networks is a topic that has been studied extensively in the literature. An intrusion detection system is used to detect various attacks occurring on sensor nodes of wireless sensor networks placed in various hostile environments. As many innovative and efficient models have emerged in this area in the last decade, we focus our work mainly on intrusion detection systems. This paper reviews various intrusion detection systems, which can be broadly classified by traditional technique: signature based, anomaly based, and hybrid. The models proposed by various researchers are critically examined against classification parameters such as detection rate, false alarm rate, and algorithms used. This work summarizes various intrusion detection systems used particularly in wireless sensor networks and highlights their distinct features.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_18-Intrusion_Detection_System_in_Wireless_Sensor_Networks_A_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Disaster Document Classification Technique Using Domain Specific Ontologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061217</link>
        <id>10.14569/IJACSA.2015.061217</id>
        <doi>10.14569/IJACSA.2015.061217</doi>
        <lastModDate>2015-12-31T11:53:57.4170000+00:00</lastModDate>
        
        <creator>Qazi Mudassar Ilyas</creator>
        
        <subject>Disaster Management; Document Classification; Ontology; Supervised Learning; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Manual data collection and entry is one of the bottlenecks in conventional disaster management information systems. Time is a critical factor in emergency situations, and timely data collection and processing may help save several lives. An effective disaster management system needs to collect data from the World Wide Web automatically. A prerequisite for the data collection process is a document classification mechanism to classify a particular document into different categories. Ontologies are formal bodies of knowledge used to capture machine-understandable semantics of a domain of interest, and they have been used successfully to support document classification in various domains. This paper presents an ontology-based document classification technique for automatic data collection in a disaster management system. A general ontology of disasters is used that contains descriptions of several natural and man-made disasters. The proposed technique augments conventional classification measures with ontological knowledge to improve the precision of classification. A preliminary implementation of the proposed technique shows promising results, with up to 10% overall improvement in precision when compared with conventional classification methods.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_17-A_Disaster_Document_Classification_Technique_Using_Domain_Specific_Ontologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: A Novel Approach for Ranking Images Using User and Content Tags</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061216</link>
        <id>10.14569/IJACSA.2015.061216</id>
        <doi>10.14569/IJACSA.2015.061216</doi>
        <lastModDate>2015-12-31T11:53:57.3870000+00:00</lastModDate>
        
        <creator>Arif Ur Rahman</creator>
        
        <creator>Muhammad Muzammal</creator>
        
        <creator>Humayun Zaheer Ahmad</creator>
        
        <creator>Awais Majeed</creator>
        
        <creator>Zahoor Jan</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2015.061216.retraction</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_16-A_Novel_Approach_for_Ranking_Images_Using_User_and_Content_Tags.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EMCC: Enhancement of Motion Chain Code for Arabic Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061215</link>
        <id>10.14569/IJACSA.2015.061215</id>
        <doi>10.14569/IJACSA.2015.061215</doi>
        <lastModDate>2015-12-31T11:53:57.3570000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Abdo</creator>
        
        <creator>Alaa Mahmoud Hamdy</creator>
        
        <creator>Sameh Abd El-Rahman Salem</creator>
        
        <creator>Elsayed Mostafa Saad</creator>
        
        <subject>image analysis; sign language recognition; hand gestures; HMM; hand geometry; MCC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In this paper, an algorithm for Arabic sign language recognition is proposed. The proposed algorithm facilitates communication between deaf and non-deaf people. A possible way to achieve this goal is to enable computer systems to visually recognize hand gestures from images. In this context, a proposed criterion called Enhancement Motion Chain Code (EMCC), which uses a Hidden Markov Model (HMM) at the word level for Arabic sign language recognition (ArSLR), is introduced. This paper focuses on recognizing Arabic sign language at the word level as used by the community of deaf people. Experiments on real-world datasets demonstrated the reliability and suitability of the proposed algorithm for Arabic sign language recognition. The experimental results show a gesture recognition error rate of 1.2% for different signs, compared to that of the competitive method.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_15-EMCC_Enhancement_of_Motion_Chain_Code_for_Arabic_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis on Existing Basic Slas and Green Slas to Define New Sustainable Green SLA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061214</link>
        <id>10.14569/IJACSA.2015.061214</id>
        <doi>10.14569/IJACSA.2015.061214</doi>
        <lastModDate>2015-12-31T11:53:57.3400000+00:00</lastModDate>
        
        <creator>Iqbal Ahmed</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>SLA; GSLA; Green ICT; Sustainability; IT ethics; ICT Product Life</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Nowadays, most IT (Information Technology) and ICT (Information and Communication Technology) industries practice sustainability under green computing hoods. Users/customers are also moving towards a new sustainable society. Therefore, when getting or providing different services from different ICT vendors, the Service Level Agreement (SLA) becomes very important for both the service providers/vendors and the users/customers. There are many ways to inform users/customers about various services, with their inherent execution functionalities and even non-functional/Quality of Service (QoS) aspects, through SLAs. However, these basic SLAs do not actually cover eco-efficient green issues or ethical issues for actual sustainable development. That is why the green SLA (GSLA) should come into play. A GSLA is a formal agreement incorporating all the traditional/basic commitments as well as respecting the ecological, economic, and ethical aspects of sustainability. This research surveys different basic SLA parameters for various services in ICT industries. At the same time, this survey focuses on finding the gaps and incorporating basic SLA parameters with existing green computing issues and ethical issues for different services in various computing domains. This research defines the future GSLA in relation to ICT product life and the three pillars of sustainability. The proposed definition and overall survey could help different service providers/vendors define their future GSLAs as well as business strategies for this new transitional sustainable society.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_14-Analysis_on_Existing_Basic_Slas_and_Green_Slas_to_Define_New_Sustainable_Green_SLA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feature Analysis of Risk Factors for Stroke in the Middle-Aged Adults</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061213</link>
        <id>10.14569/IJACSA.2015.061213</id>
        <doi>10.14569/IJACSA.2015.061213</doi>
        <lastModDate>2015-12-31T11:53:57.3070000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <creator>Hyeung Woo Koh</creator>
        
        <subject>C4.5; stroke; decision tree; risk factor; speech problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In order to maintain health during middle age and achieve successful aging, it is important to elucidate and prevent risk factors of middle-age stroke. This study investigated high risk groups of stroke in middle age population of Korea and provides basic material for establishment of stroke prevention policy by analyzing sudden perception of speech/language problems and clusters of multiple risk factors.  This study analyzed 2,751 persons (1,191 males and 1,560 females) aged 40–59 who participated in the 2009 Korea National Health and Nutrition Examination Survey. Outcome was defined as prevalence of stroke. Set as explanatory variables were age, gender, final education, income, marital status, at-risk drinking, smoking, occupation, subjective health status, moderate physical activity, hypertension, and sudden perception of speech and language problems. A prediction model was developed by the use of a C4.5 algorithm of data-mining approach. Sudden perception of speech and language problems, hypertension, and marital status were significantly associated with stroke in Korean middle aged people. The most preferentially involved predictor was sudden perception of speech and language problems. In order to prevent middle-age stroke, it is required to systematically manage and develop tailored programs for high-risk groups based on this prediction model.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_13-A_Feature_Analysis_of_Risk_Factors_for_Stroke_in_the_Middle_Aged_Adults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Ball on Beam Stabilizing Platform with Inertial Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061212</link>
        <id>10.14569/IJACSA.2015.061212</id>
        <doi>10.14569/IJACSA.2015.061212</doi>
        <lastModDate>2015-12-31T11:53:57.2930000+00:00</lastModDate>
        
        <creator>Ali Shahbaz Haider</creator>
        
        <creator>Muhammad Bilal</creator>
        
        <creator>Samter Ahmed</creator>
        
        <creator>Saqib Raza</creator>
        
        <creator>Imran Ahmed</creator>
        
        <subject>stabilizing platform; ball on beam; multi-loop controller; inertial sensors; rapid control prototyping; partial pole assignment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>This research paper presents a novel controller design for a one degree of freedom (1-DoF) stabilizing platform using inertial sensors. The plant is a ball on a pivoted beam. A multi-loop controller design technique has been used. The system dynamics is observable but uncontrollable. The uncontrollable polynomial of the system is not Hurwitz; hence the system is not stabilizable. A hybrid compensator design strategy is implemented by partitioning the system dynamics into two parts: a controllable subsystem and an uncontrollable subsystem. The controllable part is compensated by partial pole assignment in the inner loop. A prediction observer is designed for unmeasured states in the inner loop. The rapid control prototyping technique is used for compensator design for the outer loop, which contains the controlled inner loop and the uncontrollable part of the system. Real-time system responses are monitored using MATLAB/Simulink and show promising performance of the hybrid compensation technique for reference tracking and robustness against model inaccuracies.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_12-A_Novel_Ball_on_Beam_Stabilizing_Platform_with_Inertial_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sentiment Analysis: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061211</link>
        <id>10.14569/IJACSA.2015.061211</id>
        <doi>10.14569/IJACSA.2015.061211</doi>
        <lastModDate>2015-12-31T11:53:57.2770000+00:00</lastModDate>
        
        <creator>Adel Assiri</creator>
        
        <creator>Ahmed Emam</creator>
        
        <creator>Hmood Aldossari</creator>
        
        <subject>Arabic Sentiment Analysis; Qualitative Analysis; Quantitative Analysis; Smoothness Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Most social media commentary in the Arabic language space is made using unstructured, non-grammatical slang Arabic, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and microblogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out, and a summary of objective findings is presented. We used smoothness analysis to evaluate the percentage error of the performance scores reported in the studies from their linearly-projected values (smoothness), which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as reported, we modified an existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis are reported and interpreted for the various performance parameters: accuracy, precision, recall, and F-score.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_11-Arabic_Sentiment_Analysis_A_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Contour Extraction Based on Layered Structure and Fourier Descriptor on Image Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061210</link>
        <id>10.14569/IJACSA.2015.061210</id>
        <doi>10.14569/IJACSA.2015.061210</doi>
        <lastModDate>2015-12-31T11:53:57.2470000+00:00</lastModDate>
        
        <creator>Cahya Rahmad</creator>
        
        <creator>Kohei Arai</creator>
        
        <subject>CBIR; MLCCD; feature extraction; RGB; Fourier descriptor; shape; retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In this paper, a new content-based image retrieval technique using shape features is proposed. Shape features are extracted by a layered structure representation, which measures the distance between the centroid (center) and the boundaries of the object and can capture multiple boundaries at the same angle, i.e., an object shape that has several points at the same angle. Given an input image, the method searches for the images most related to the input; the correlation between input and output is defined by a specific rule. First, the input image is converted from an RGB image to a grayscale image, followed by an edge detection process. After edge detection, the object boundary is obtained; the distance between the center of the object and its boundary is then calculated and placed in the feature vector, and if there is another boundary at the same angle it is placed in a different feature vector at a different layer. The experimental results on the plankton dataset show that the proposed method performs better than the conventional Fourier descriptor method.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_10-Comparison_Contour_Extraction_Based_on_Layered_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Network Communication Protocol Based on Text to Barcode Encryption Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061209</link>
        <id>10.14569/IJACSA.2015.061209</id>
        <doi>10.14569/IJACSA.2015.061209</doi>
        <lastModDate>2015-12-31T11:53:57.2130000+00:00</lastModDate>
        
        <creator>Abusukhon Ahmad</creator>
        
        <creator>Bilal Hawashin</creator>
        
        <subject>Encryption; Decryption; Algorithm; Secured Communication; Private Key; Barcode Image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Nowadays, after the significant development of the Internet, communication and information exchange around the world have become easier and faster than before. One may send an e-mail or perform a money transaction (using a credit card) while at home. Internet users can also share resources (storage, memory, etc.) or invoke a method on a remote machine. All these activities require securing the data while they are sent through the global network.
There are various methods for securing data on the Internet and ensuring its privacy; one of these methods is data encryption. This technique is used to protect the data from hackers by scrambling the data into a non-readable form. In this paper, we propose a novel method for data encryption based on the transformation of a text message into a barcode image. The proposed Bar Code Encryption Algorithm (BCEA) is tested and analyzed.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_9-A_Secure_Network_Communication_Protocol_Based_on_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multimedia System for Breath Regulation and Relaxation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061208</link>
        <id>10.14569/IJACSA.2015.061208</id>
        <doi>10.14569/IJACSA.2015.061208</doi>
        <lastModDate>2015-12-31T11:53:57.1830000+00:00</lastModDate>
        
        <creator>Wen-Ching Liao</creator>
        
        <creator>Han-Hong Lin</creator>
        
        <creator>He-Lin Ruo</creator>
        
        <creator>Po-Hsiang Hsu</creator>
        
        <subject>breathing; relaxation; biofeedback; interaction; multimedia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>In today's hectic life, detrimental stress has caused numerous illnesses. To adjust mental states, breath regulation plays a core role in multiple relaxation techniques. In this paper, we introduce a multimedia system supporting breath regulation and relaxation. Features of this system include non-contact respiration detection, bio-signal monitoring, and breath interaction. In addition to illustrating this system, we also propose a novel form of breath interaction. Through this form of breath interaction, the system effectively influenced users' breathing such that their breathing features turned into patterns that appear when people are relaxed. An experiment was conducted to compare the effects of three forms of regulation: the free breathing mode, the pure guiding mode, and the local-mapping mode. The experimental results show that multimedia-assisted breath interaction successfully deepened and slowed down users' breathing, compared with the free breathing mode. Besides objective changes in breathing features, subjective feedback also showed that participants were satisfied and became relaxed after using this system.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_8-A_Multimedia_System_for_Breath_Regulation_and_Relaxation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Steganographic Model Based on DWT Combined with Encryption and Error Correction Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061207</link>
        <id>10.14569/IJACSA.2015.061207</id>
        <doi>10.14569/IJACSA.2015.061207</doi>
        <lastModDate>2015-12-31T11:53:57.1370000+00:00</lastModDate>
        
        <creator>Dr. Adwan Yasin</creator>
        
        <creator>Mr. Nizar Shehab</creator>
        
        <creator>Dr. Muath Sabha</creator>
        
        <creator>Mariam Yasin</creator>
        
        <subject>Steganography; DWT; LSB; hamming code; encryption and decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The problems of protecting information against modification and of ensuring privacy and origin validation are very important issues that have become the concern of many researchers. Handling these problems is definitely a big challenge, which is probably why so much attention has been directed to the development of information protection schemes. In this paper, we propose a robust model that combines and integrates steganographic techniques with encryption and with error detection and correction techniques in order to achieve secrecy, authentication, and integrity. The idea of applying these techniques is based on decomposing the image into three separate color planes, Red, Green, and Blue, and then, depending on the encryption key, dividing the image into N blocks. By applying DWT on each block independently, this model enables hiding the information in the image in an unpredictable manner. The part of the image where the information is embedded is key-dependent and unknown to the intruder, and by this we achieve a blinded DWT effect. To enhance reliability, the proposed model uses a Hamming code, which helps to recover lost or modified information. The proposed model was implemented and tested successfully.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_7-An_Enhanced_Steganographic_Model_Based_on_DWT_Combined.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing an IMS-LD Model for Collaborative Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061206</link>
        <id>10.14569/IJACSA.2015.061206</id>
        <doi>10.14569/IJACSA.2015.061206</doi>
        <lastModDate>2015-12-31T11:53:57.1200000+00:00</lastModDate>
        
        <creator>Fauzi El Moudden</creator>
        
        <creator>Prof. Mohamed Khaldi</creator>
        
        <creator>Prof. Aammou Souhaib</creator>
        
        <subject>Collaborative Learning; Pedagogy Project; Socio-constructivist; IMS-LD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The context of this work is the design of an IMS-LD model for collaborative learning. Our work lies specifically in the field of seeking to promote, by means of distance information technology, a collective construction of knowledge. Our approach is to first think about the conditions for creating real collective activities between learners, and then to design the IT environment that supports these activities. We chose to use project pedagogy as a basis for teaching these collective activities. This pedagogy has already proven itself, mostly in traditional learning situations in the classroom.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_6-Designing_an_IMS_LD_Model_for_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vitality Aware Cluster Head Election to Alleviate the Wireless Sensor Network for Long Time</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061205</link>
        <id>10.14569/IJACSA.2015.061205</id>
        <doi>10.14569/IJACSA.2015.061205</doi>
        <lastModDate>2015-12-31T11:53:57.0900000+00:00</lastModDate>
        
        <creator>P. Thiruvannamalai Sivasankar</creator>
        
        <creator>Dr. M. RamaKrishnan</creator>
        
        <subject>Wireless Sensor Networks(WSNs); Residual energy;  Clustering; Life span; Sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Wireless Sensor Networks (WSNs) are motivated by their unique characteristics, such as their capability of enduring harsh ecological circumstances and granting better scalability. A wireless sensor network is composed of small sensors and a base station. A battery supplies the energy for the sensors; hence, the lifetime of the network degrades when it is overworked by transmission. Since WSNs are utilized for hazardous purposes, the lifespan of the network must be extended. Clustering is one of the foremost mechanisms to maximize the network&#39;s lifespan. Cluster head selection plays an imperative role, given that the cluster head is answerable for the transfer of data between the cluster members and the base station. This article presents a novel scheme for cluster head selection, entitled vitality aware cluster head election. In this scheme, the sensor nodes are clustered into an optimal number of groups. Subsequently, a cluster head is selected by ballot for each group based on its remaining energy. To weigh up the performance of the proposed method, the Network Simulator (NS-2) has been employed.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_5-Vitality_Aware_Cluster_Head_Election_to_Alleviate_the_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Posteriori Pareto Front Diversification Using a Copula-Based Estimation of Distribution Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061204</link>
        <id>10.14569/IJACSA.2015.061204</id>
        <doi>10.14569/IJACSA.2015.061204</doi>
        <lastModDate>2015-12-31T11:53:57.0600000+00:00</lastModDate>
        
        <creator>Abdelhakim Cheriet</creator>
        
        <creator>Foudil Cherif</creator>
        
        <subject>Multiobjective Optimization Problems; Evolutionary Algorithms; Estimation of Distribution Algorithms; Copulas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>We propose CEDA, a Copula-based Estimation of Distribution Algorithm, to increase the size, diversity, and convergence of the set of optimal solutions for a multiobjective optimization problem. The algorithm exploits the statistical properties of Copulas to produce new solutions from existing ones through the estimation of their distribution. CEDA starts by taking initial solutions provided by any MOEA (Multi-Objective Evolutionary Algorithm), constructs Copulas to estimate their distribution, and uses the constructed Copulas to generate new solutions. This design saves CEDA the need to run an MOEA every time alternative solutions are requested by a Decision Maker because the found solutions are not satisfactory. CEDA was tested on a set of benchmark problems traditionally used by the community, namely UF1, UF2, ..., UF10 and CF1, CF2, ..., CF10. CEDA was used along with SPEA2 and NSGA2 as two examples of MOEA, resulting in two variants, CEDA-SPEA2 and CEDA-NSGA2, which were compared with SPEA2 and NSGA2. The results of the experiments show that, with both variants of CEDA, new solutions can be generated at significantly smaller cost, without compromising quality, compared to those found by SPEA2 and NSGA2.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_4-A_Posteriori_Pareto_Front_Diversification_Using_a_Copula_Based_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectrum Sensing Methodologies for Cognitive Radio Systems: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061203</link>
        <id>10.14569/IJACSA.2015.061203</id>
        <doi>10.14569/IJACSA.2015.061203</doi>
        <lastModDate>2015-12-31T11:53:56.9670000+00:00</lastModDate>
        
        <creator>Ireyuwa E. Igbinosa</creator>
        
        <creator>Olutayo O. Oyerinde</creator>
        
        <creator>Viranjay M. Srivastava</creator>
        
        <creator>Stanley Mneney</creator>
        
        <subject>Cognitive radio; Cooperative sensing; Data Fusion; OFDM; Spectrum Sensing; wideband sensing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Spectrum sensing is an important functional unit of cognitive radio networks and one of the main challenges encountered by cognitive radio. This paper presents a survey of spectrum sensing techniques, which are studied from a cognitive radio perspective. The challenges that accompany spectrum sensing are reviewed. Two sensing schemes, namely cooperative sensing and eigenvalue-based sensing, are studied, and their various advantages and disadvantages are highlighted. Based on this study, cooperative spectrum sensing is proposed for employment in wideband-based cognitive radio systems.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_3-Spectrum_Sensing_Methodologies_for_Cognitive_Radio_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prediction Model for Mild Cognitive Impairment Using Random Forests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061202</link>
        <id>10.14569/IJACSA.2015.061202</id>
        <doi>10.14569/IJACSA.2015.061202</doi>
        <lastModDate>2015-12-31T11:53:56.8700000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>random forests; data mining; dementia; mild cognitive impairment; risk factors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>Dementia is a geriatric disease which has emerged as a serious social and economic problem in an aging society, and early diagnosis is very important. In particular, early diagnosis and early intervention in Mild Cognitive Impairment (MCI), the preliminary stage of dementia, can reduce the onset rate of dementia. This study developed an MCI prediction model for the Korean elderly in local communities and provides basic material for the prevention of cognitive impairment. The subjects of this study were 3,240 elderly people (1,502 males, 1,738 females) in local communities over the age of 65 who participated in the Korean Longitudinal Survey of Aging conducted in 2012. The outcome was defined as having MCI, and the explanatory variables were gender, age, level of education, level of income, marital status, smoking, drinking habits, regular exercise more than once a week, monthly average hours of participation in social activities, subjective health, diabetes, and high blood pressure. The Random Forests algorithm was used to develop the prediction model, and the result was compared with a logistic regression model and a decision tree model. As a result of this study, significant predictors of MCI were age, gender, level of education, level of income, subjective health, marital status, smoking, drinking, regular exercise, and high blood pressure. In addition, the Random Forests model was more accurate than the logistic regression model and the decision tree model. Based on these results, it is necessary to build a monitoring system which can diagnose MCI at an early stage.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_2-A_Prediction_Model_for_Mild_Cognitive_Impairment_Using_Random_Forests.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing a Method for Modeling Knowledge Bases in Expert Systems Using the Example of Large Software Development Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061201</link>
        <id>10.14569/IJACSA.2015.061201</id>
        <doi>10.14569/IJACSA.2015.061201</doi>
        <lastModDate>2015-12-31T11:53:56.8100000+00:00</lastModDate>
        
        <creator>Franz Felix F&#252;ssl</creator>
        
        <creator>Detlef Streitferdt</creator>
        
        <creator>Weijia Shang</creator>
        
        <creator>Anne Triebel</creator>
        
        <subject>Knowledge Engineering; Ontology Engineering; Knowledge Modelling; Knowledge Base; Expert System; Artificial Intelligence; Deductive Reasoning Element Pruning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(12), 2015</description>
        <description>The goal of this paper is to develop a meta-model that provides the basis for developing highly scalable artificial intelligence systems able to make decisions autonomously based on different dynamic and specific influences. An artificial neural network builds the entry point for developing a multi-layered, human-readable model that serves as a knowledge base and can be used for further investigations in deductive and inductive reasoning. A graph-theoretical consideration gives a detailed view into the model structure. In addition, the model is introduced using the example of large software development projects. The integration of constraints and Deductive Reasoning Element Pruning, which are required for executing deductive reasoning efficiently, is illustrated.</description>
        <description>http://thesai.org/Downloads/Volume6No12/Paper_1-Introducing_a_Method_for_Modeling_Knowledge_Bases_in_Expert_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Differential Evolution Enhanced with Eager Random Search for Solving Real-Parameter Optimization Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041208</link>
        <id>10.14569/IJARAI.2015.041208</id>
        <doi>10.14569/IJARAI.2015.041208</doi>
        <lastModDate>2015-12-09T16:18:03.7070000+00:00</lastModDate>
        
        <creator>Miguel Leon</creator>
        
        <creator>Ning Xiong</creator>
        
        <subject>Evolutionary Algorithm, Differential Evolution, Eager Random Search, Memetic Algorithm, Optimization</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>Differential evolution (DE) presents a class of evolutionary computing techniques that have proven effective for real-parameter optimization tasks in many practical applications. However, the performance of DE is not always sufficient to ensure fast convergence to the global optimum; it can easily stagnate, resulting in low precision of the acquired results or even failure. This paper proposes a new memetic DE algorithm that incorporates Eager Random Search (ERS) to enhance the performance of a basic DE algorithm. ERS is a local search method that eagerly replaces the current solution with a better candidate in the neighborhood. Three concrete local search strategies for ERS are further introduced and discussed, leading to variants of the proposed memetic DE algorithm. In addition, only a small subset of randomly selected variables is used in each step of the local search to randomly decide the next trial solution. The results of tests on a set of benchmark problems demonstrate that hybridizing DE with Eager Random Search can substantially augment DE algorithms to find better or more precise solutions without requiring extra computing resources.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_8-Differential_Evolution_Enhanced_with_Eager_Random.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analytical Study of Some Selected Classification Algorithms in WEKA Using Real Crime Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041207</link>
        <id>10.14569/IJARAI.2015.041207</id>
        <doi>10.14569/IJARAI.2015.041207</doi>
        <lastModDate>2015-12-09T16:18:03.6430000+00:00</lastModDate>
        
        <creator>Obuandike Georgina N.</creator>
        
        <creator>Audu Isah</creator>
        
        <creator>John Alhasan</creator>
        
        <subject>Data Mining; Classification; Decision Tree; Na&#239;ve Bayesian; TP Rate</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>Data mining in the field of computer science is an answer to the demands of this digital age. It is used to unravel hidden information from the large volumes of data usually kept in data repositories, helping to improve management decision making. Classification is an essential data mining task used to predict unknown class labels, and it has been applied to the classification of different types of data. There are different techniques that can be applied to building a classification model. In this study, the performance of three such techniques is analyzed: J48, a decision tree classifier; Na&#239;ve Bayes, a classifier that applies probability functions; and ZeroR, a rule induction classifier. These classifiers are tested on real crime data collected from the Nigeria Prisons Service. The metrics used to measure the performance of each classifier include accuracy, time, True Positive (TP) Rate, False Positive (FP) Rate, Kappa Statistic, Precision and Recall. The study showed that the J48 classifier has the highest accuracy of the three classifiers under consideration. Choosing the right classifier for a data mining task helps increase mining accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_7-Analytical_Study_of_Some_Selected_Classification_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Language Identification by Using SIFT Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041206</link>
        <id>10.14569/IJARAI.2015.041206</id>
        <doi>10.14569/IJARAI.2015.041206</doi>
        <lastModDate>2015-12-09T16:18:03.6130000+00:00</lastModDate>
        
        <creator>Nikos Tatarakis</creator>
        
        <creator>Ergina Kavallieratou</creator>
        
        <subject>Document image processing; language identification; SIFT features; bag of Visual Words; Fisher Vector</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>Two novel techniques for language identification of both machine-printed and handwritten document images are presented. Language identification is the procedure by which the language of a given document image is recognized and the appropriate language label is returned. In the proposed approaches, the main body size of the characters of each document image is determined and, accordingly, a sliding window is used to extract SIFT local features. Once a large number of features have been extracted from the training set, a visual vocabulary is created by clustering the feature space. Data clustering is performed using K-means or Gaussian Mixture Models with the Expectation-Maximization algorithm. For each document image, a Bag of Visual Words or Fisher Vector representation is constructed using the visual vocabulary and the extracted features of the document image. Finally, a multi-class Support Vector Machine classification scheme is used to score the system. Experiments are performed on well-known databases, and comparative results with another established technique are also given.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_6-Language_Identification_by_Using_SIFT_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Naive Bayes Classifier Algorithm Approach for Mapping Poor Families Potential</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041205</link>
        <id>10.14569/IJARAI.2015.041205</id>
        <doi>10.14569/IJARAI.2015.041205</doi>
        <lastModDate>2015-12-09T16:18:03.5670000+00:00</lastModDate>
        
        <creator>Sri Redjeki</creator>
        
        <creator>M. Guntara</creator>
        
        <creator>Pius Anggoro</creator>
        
        <subject>Data Mining; Naive Bayes; Poverty Potential; Mapping</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>The high poverty rate recorded in Indonesia has made reducing it to below 10% a main priority of the government. Early identification of potential poverty is very important for anticipating the poverty rate. The Naive Bayes Classifier (NBC) algorithm is one of the data mining algorithms that can be used to classify poor families, using 11 indicators and three classes. This study used a sample of 219 records of poor families. A system built with Java programming was compared with the results of the Weka software, achieving a classification accuracy of 93%. The classified data of poor families were mapped by adding latitude-longitude data and a photograph of the condition of each family's house. Based on the results of the mapped classifications, using NBC can help the government of Kabupaten Bantul in examining the potential of poor people.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_5-Naive_Bayes_Classifier_Algorithm_Approach_for_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blurring and Deblurring Digital Images Using the Dihedral Group</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041204</link>
        <id>10.14569/IJARAI.2015.041204</id>
        <doi>10.14569/IJARAI.2015.041204</doi>
        <lastModDate>2015-12-09T16:18:03.5200000+00:00</lastModDate>
        
        <creator>Husein Hadi Abbas Jassim</creator>
        
        <creator>Zahir M. Hussain</creator>
        
        <creator>Hind R.M Shaaban</creator>
        
        <creator>Kawther B.R. Al-dbag</creator>
        
        <subject>Dihedral group; Kronecker Product; motion blur and deblur; digital image</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>A new method for blurring and deblurring digital images is presented. The approach is based on new filters generated from the average filter and H-filters using the action of the dihedral group. These filters, called HB-filters, are used to cause motion blur and then to deblur the affected images. Enhancing images using HB-filters is also presented, in comparison with other methods such as the Average, Gaussian, and Motion filters. Results and analysis show that the HB-filters are better in terms of peak signal-to-noise ratio (PSNR) and RMSE.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_4-Blurring_and_Deblurring_Digital_Images_Using_the_Dihedral_Group.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Expert System-Based Evaluation of Civics Education as a Means of Character Education Based on Local Culture in the Universities in Buleleng</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041203</link>
        <id>10.14569/IJARAI.2015.041203</id>
        <doi>10.14569/IJARAI.2015.041203</doi>
        <lastModDate>2015-12-09T16:18:03.3930000+00:00</lastModDate>
        
        <creator>Dewa Bagus Sanjaya</creator>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Evaluation of Civics Education; Character; Local culture; Expert System; Certainty Factor</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>Civics education as a means of character education based on local culture has the mission of developing values and attitudes. In the educational process, various strategies and methods of value education can be used. In Civics education, characters are developed both as a direct impact of education and as its nurturing effect. Meanwhile, other subjects whose formal mission is other than character development have to develop activities that have a nurturing effect on character development in students. However, this has not run well in the educational process, hence the need to evaluate educational programs at public as well as private universities in Buleleng regency. One evaluation technique that can be used is the CIPP model combined with the certainty factor method of expert systems. The CIPP model can evaluate the Civics education processes at all public and private universities in Buleleng regency objectively, especially in probing local culture in character education development, while the certainty factor method is used to determine the degree of certainty of a component being evaluated in the Civics education process.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_3-An_Expert_System_Based_Evaluation_of_Civics_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Implementation of Outpatient Online Registration Information System of Mutiara Bunda Hospital</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041202</link>
        <id>10.14569/IJARAI.2015.041202</id>
        <doi>10.14569/IJARAI.2015.041202</doi>
        <lastModDate>2015-12-09T16:18:03.3330000+00:00</lastModDate>
        
        <creator>Masniah</creator>
        
        <subject>Online Registration; Outpatient; Information System</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>Outpatient care is one of the medical services at Mutiara Bunda Hospital. The management of outpatient registration at the hospital previously used a conventional approach: within one hour of service, five patients were enrolled, with an average time of 13 minutes per patient, which caused a registration queue for outpatient services. This study was conducted with the aim of producing an outpatient online registration information system design for Mutiara Bunda Hospital in order to improve the outpatient registration service and to manage patient data for medical care. Patients register through the Outpatient Online Registration Information System without having to come to the hospital first, obtain a queue number, and can thus estimate their waiting time at the hospital to receive medical care at Mutiara Bunda Hospital, while patients who come to the hospital are served directly by the registrar. From the results of the research, it can be concluded that the Outpatient Online Registration Information System helps in managing and processing patient registration data so that patients can receive medical care promptly at Mutiara Bunda Hospital.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_2-An_Implementation_of_Outpatient_Online_Registration_Information_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Sensitivity Improvement of Visible to NIR Digital Cameras on NDVI Measurements in Particular for Agricultural Field Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041201</link>
        <id>10.14569/IJARAI.2015.041201</id>
        <doi>10.14569/IJARAI.2015.041201</doi>
        <lastModDate>2015-12-09T16:18:03.2230000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takuji Maekawa</creator>
        
        <creator>Toshihisa Maeda</creator>
        
        <creator>Hiroshi Sekiguchi</creator>
        
        <creator>Noriyuki Masago</creator>
        
        <subject>CuInGaSe; SiCMOS; NDVI; Rice crop; Tealeaves; S/N ratio; Sensitivity</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(12), 2015</description>
        <description>The effect of sensitivity improvement of near-infrared (NIR) digital cameras on Normalized Difference Vegetation Index (NDVI) measurements, in particular for agricultural field monitoring, is clarified. A comparative study is conducted between a sensitivity-improved visible-to-near-infrared camera based on CuInGaSe (CIGS) and a conventional camera. Signal-to-noise (S/N) ratio and sensitivity are evaluated with NIR camera data acquired in tea farm areas and rice paddy fields. From the experimental results, it is found that the S/N ratio of the conventional digital camera with NIR wavelength coverage is better than that of the CIGS image sensor, while the sensitivity of the CIGS image sensor is much superior to that of the conventional camera. It is also found that NDVI derived from the CIGS image sensor is much better than that from the conventional camera, because the sensitivity of the CIGS image sensor in the red wavelength region is much better than that of the conventional camera.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No12/Paper_1-Effect_of_Sensitivity_Improvement_of_Visible_to_NIR_Digital_Cameras.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multiple-Objects Recognition Method Based on Region Similarity Measures: Application to Roof Extraction from Orthophotoplans</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061139</link>
        <id>10.14569/IJACSA.2015.061139</id>
        <doi>10.14569/IJACSA.2015.061139</doi>
        <lastModDate>2015-12-01T12:15:15.1200000+00:00</lastModDate>
        
        <creator>Abdellatif El Idrissi</creator>
        
        <creator>Youssef El Merabet</creator>
        
        <creator>Yassine Ruichek</creator>
        
        <creator>Raja Touahni</creator>
        
        <creator>Abderrahmane Sbihi</creator>
        
        <creator>Cyril Meurie</creator>
        
        <creator>Ahmed Moussa</creator>
        
        <subject>Object recognition; Region Similarity Measure; Texture; Feature extraction; Orthophotoplans</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In this paper, an efficient method for automatic and accurate detection of multiple objects in images using a region similarity measure is presented. The method involves the construction of two knowledge databases: the first contains several distinctive textures of the objects to be extracted; the second is composed of textures representing the background. Both databases are built from a set of example images (training set) from which one wants to recognize objects. The proposed procedure starts with an initialization step during which the studied image is segmented into homogeneous regions. In order to separate the objects of interest from the image background, the similarity between the regions of the segmented image and those of the constructed knowledge databases is then evaluated. The proposed approach presents several advantages in terms of applicability, suitability and simplicity. Experimental results obtained by applying the method to extract building roofs from orthophotoplans prove its robustness and performance over popular methods such as K Nearest Neighbours (KNN) and Support Vector Machines (SVM).</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_39-A_Multiple_Objects_Recognition_Method_Based_on_Region_Similarity_Measures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Runtime Analysis of GPU-Based Stereo Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061138</link>
        <id>10.14569/IJACSA.2015.061138</id>
        <doi>10.14569/IJACSA.2015.061138</doi>
        <lastModDate>2015-12-01T12:15:15.0430000+00:00</lastModDate>
        
        <creator>Christian Zentner</creator>
        
        <creator>Yan Liu</creator>
        
        <subject>stereo matching; GPU computing; runtime analysis; computer vision; image processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>This paper elaborates on the possibility of leveraging the highly parallel nature of GPUs to implement more efficient stereo matching algorithms. Different algorithms were implemented and compared on the CPU and the GPU in order to show the speedup gained by moving the computation to the graphics card. The results were evaluated for accuracy using the test available on the Middlebury stereo vision website. The runtime performance was assessed by a script which examined the runtime behaviour of the individual steps of the stereo matching algorithm.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_38-Runtime_Analysis_of_GPU_Based_Stereo_Matching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Surface Tracking and Representation in Fluid Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061137</link>
        <id>10.14569/IJACSA.2015.061137</id>
        <doi>10.14569/IJACSA.2015.061137</doi>
        <lastModDate>2015-12-01T12:15:15.0130000+00:00</lastModDate>
        
        <creator>Listy Stephen</creator>
        
        <creator>Anoop Jose</creator>
        
        <subject>Fluid simulation; Physics based animation; Realism; Surface representation; Tracking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Realism in fluid animation can be achieved with physics-based techniques, which are the best among the available approaches, and this area is now the subject of active research. A number of mechanisms have evolved with the advent of both hardware and software technologies. Most fluid simulation methods are described with or without a clear surface representation. This paper focuses on a quantitative survey of various fluid surface tracking and representation techniques. Suitable tracking schemes combined with a hybrid fluid simulation approach may give impressive visual effects for various applications.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_37-An_Overview_of_Surface_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AL-S2m: Soft road traffic Signs map for vehicular systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061136</link>
        <id>10.14569/IJACSA.2015.061136</id>
        <doi>10.14569/IJACSA.2015.061136</doi>
        <lastModDate>2015-12-01T12:15:14.9500000+00:00</lastModDate>
        
        <creator>Ammar LAHLOUHI</creator>
        
        <subject>Road Traffic Signs, Roadmap, Map-Matching, Driver Assistance Systems, Autonomous vehicles</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In this paper, we describe AL-S2m, a roadmap with traffic signs to be used in vehicular systems. AL-S2m is part of a more general traffic sign (TS) management system, called AL-S2, which includes two sides: a central map server and a client vehicular system. The server allows establishing, maintaining and disseminating AL-S2m; the client localizes the vehicle in AL-S2m and detects TSs. In this paper, we focus on the establishment of AL-S2m. AL-S2m can handle variable TSs and is easy to update, which keeps it coherent with reality. It also improves the map-matching algorithm. We implemented AL-S2m easily using an Android device.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_36-AL-S2m_Soft_road_traffic_Signs_map_for_vehicular.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless Sensor Networks for Road Traffic Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061135</link>
        <id>10.14569/IJACSA.2015.061135</id>
        <doi>10.14569/IJACSA.2015.061135</doi>
        <lastModDate>2015-12-01T12:15:14.9330000+00:00</lastModDate>
        
        <creator>Kahtan Aziz</creator>
        
        <creator>Saed Tarapiah</creator>
        
        <creator>Mohanad Alsaedi</creator>
        
        <creator>Salah Haj Ismail</creator>
        
        <creator>Shadi Atalla</creator>
        
        <subject>Wireless Sensor Networks(WSN); Linear Topology; Road Monitoring; Jennic MAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Wireless Sensor Networks (WSNs) consist of a large number of sensor nodes. Each node is equipped with a communication interface characterized by low power, short transmission range and minimal data rate; for example, the maximum data rate in ZigBee technology is 256 kbps, with a physical transmission range of approximately 10 to 20 meters. Currently, WSN technology is being deployed over large roadway areas in order to monitor traffic and environmental data. This approach allows several Intelligent Transport Systems (ITS) applications to exploit the collected data in order to generate intelligent decisions based on valuable previously selected information. Therefore, in this work we present a MAC protocol suitable for WSNs whose nodes are arranged in a linear topology. The investigated protocol is realized by adapting the existing Jennic MAC protocol. We demonstrate the validity of the MAC by building a complete end-to-end road traffic monitoring system using 4 Jennic nodes deployed in an indoor environment, with the aim of proving the MAC's potential to meet the expectations of ITS applications. It should be mentioned that the proposed implementation considers only stationary WSN nodes.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_35-Wireless_Sensor_Networks_for_Road_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approximation Algorithms for Scheduling with Rejection on Two Unrelated Parallel Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061134</link>
        <id>10.14569/IJACSA.2015.061134</id>
        <doi>10.14569/IJACSA.2015.061134</doi>
        <lastModDate>2015-12-01T12:15:14.8730000+00:00</lastModDate>
        
        <creator>Feng Lin</creator>
        
        <creator>Xianzhao Zhang</creator>
        
        <creator>Zengxia Cai</creator>
        
        <subject>Scheduling; Rejection; Approximation algorithm; Linear programming; Rounding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In this paper, we study the scheduling problem with rejection on two unrelated parallel machines. We may choose to reject some jobs, thereby incurring the corresponding penalties. The goal is to minimize the makespan plus the sum of the penalties of the rejected jobs. We first formulate this scheduling problem as an integer program and then relax it into a linear program. From the optimal solution to the linear program, we obtain two algorithms using the technique of linear programming rounding. In conclusion, we present a deterministic 3-approximation algorithm and a randomized 3-approximation algorithm for this problem.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_34-Approximation_Algorithms_for_Scheduling_with_Rejection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Summarization: Survey on Event Detection and Summarization in Soccer Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061133</link>
        <id>10.14569/IJACSA.2015.061133</id>
        <doi>10.14569/IJACSA.2015.061133</doi>
        <lastModDate>2015-12-01T12:15:14.8400000+00:00</lastModDate>
        
        <creator>Yasmin S. Khan</creator>
        
        <creator>Soudamini Pawar</creator>
        
        <subject>Summarization; Sports Summarization; Soccer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In today&#39;s world, the rapid development of digital video and editing technology has led to fast-growing volumes of video data, creating the need for effective and advanced techniques for analysis and video retrieval, as large multimedia repositories have made browsing, content delivery, and video retrieval very slow. Video summarization therefore offers ways to browse large amounts of data faster and to index content. Many people spend their free time watching or playing sports such as soccer and cricket, but it is not always possible to watch every game because of its length. In such cases, users may prefer to view a summary of the video, an abstract of the original that conveys the occurrence of the main incidents, rather than watching the whole video; it is often preferable to watch just the highlights of a game or the trailer of a movie. Summarizing a video is therefore an important process. In this paper, video summarization approaches that can generate static or dynamic summaries are discussed, and techniques for each mode in the literature are presented, along with features used for generating video summaries. As soccer is the world’s most widely played and watched game, it is taken as a case study, and research done in this domain is discussed. We conclude that there is broad scope for further research in this field.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_33-Video_Summarization_Survey_on_Event_Detection_and_Summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart City Architecture: Vision and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061132</link>
        <id>10.14569/IJACSA.2015.061132</id>
        <doi>10.14569/IJACSA.2015.061132</doi>
        <lastModDate>2015-12-01T12:15:14.8100000+00:00</lastModDate>
        
        <creator>Narmeen Zakaria Bawany</creator>
        
        <creator>Jawwad A. Shamsi</creator>
        
        <subject>Smart city; Data management; urban technology; socio-technical systems; smart city architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The concept of the smart city was born to provide an improved quality of life to citizens. The key idea is to integrate the information system services of each domain of the city, such as health, education, transportation, and the power grid, to provide public services to citizens efficiently and ubiquitously. These expectations induce massive challenges and requirements. This research aims to highlight the key ICT (Information and Communication Technology) challenges related to the adoption of smart cities. Recognizing the significance of effective data collection, storage, and retrieval and of efficient network resource provisioning, the research proposes a high-level architecture for a smart city. The proposed framework is based on a hierarchical model of data storage and defines how different stakeholders will communicate and offer services to citizens. The architecture facilitates step-by-step implementation towards a smart city, integrating services as they are developed in a timely manner.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_32-Smart_City_Architecture_Vision_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating Representative Sets and Summaries for Large Collection of Images Using Image Cropping Techniques and Result Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061131</link>
        <id>10.14569/IJACSA.2015.061131</id>
        <doi>10.14569/IJACSA.2015.061131</doi>
        <lastModDate>2015-12-01T12:15:14.7800000+00:00</lastModDate>
        
        <creator>Abdullah Al-Mamun</creator>
        
        <creator>Dhaval Gandhi</creator>
        
        <creator>Sheak Rashed Haider Noori</creator>
        
        <subject>summarization; representative set; image collection; diversity;  coverage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The collection of photos hosted on photo archives and social networking sites has been increasing exponentially. It is very hard to get a summary of a large image set without browsing through the entire collection. In this paper, two different image-cropping techniques (the random windows technique and the sequential windows technique) are proposed to generate effective representative sets. A ranking mechanism has also been proposed for finding the best representative set.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_31-Generating_Representative_Sets_and_Summaries_for_Large_Collection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Attack-Relevant Ranking of Network Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061130</link>
        <id>10.14569/IJACSA.2015.061130</id>
        <doi>10.14569/IJACSA.2015.061130</doi>
        <lastModDate>2015-12-01T12:15:14.7470000+00:00</lastModDate>
        
        <creator>Adel Ammar</creator>
        
        <creator>Khaled Al-Shalfan</creator>
        
        <subject>Intrusion detection; network security; feature selection; KDD dataset; neural networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>An Intrusion Detection System (IDS) is an important component of the defense-in-depth security mechanism in any computer network system. To assure timely detection of intrusions among millions of connection records, it is important to reduce the number of connection features examined by the IDS, using feature selection or feature reduction techniques. In this scope, this paper presents the first application of a distinctive feature selection method based on neural networks to the problem of intrusion detection, in order to determine the most relevant network features, an important step towards constructing a lightweight anomaly-based intrusion detection system. The same procedure is used for feature selection and for attack detection, which gives more consistency to the method. We apply this method to a case study on the KDD dataset and show its advantages compared to some existing feature selection approaches. We then measure its dependence on the network architecture and the learning database.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_30-On_Attack_Relevant_Ranking_of_Network_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Medical Domain Using CMARM: Confabulation Mapreduce Association Rule Mining Algorithm for Frequent and Rare Itemsets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061129</link>
        <id>10.14569/IJACSA.2015.061129</id>
        <doi>10.14569/IJACSA.2015.061129</doi>
        <lastModDate>2015-12-01T12:15:14.7170000+00:00</lastModDate>
        
        <creator>Dr. Jyoti Gautam</creator>
        
        <creator>Neha Srivastava</creator>
        
        <subject>association rule mining; cogency; confabulation theory; medical data mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Disease is a major cause of illness and death in modern society. Various factors are responsible for disease, such as the work environment, living and working conditions, agriculture and food production, housing, unemployment, and individual lifestyle. The early diagnosis of any disease that occurs, frequently or rarely, with growing age can help cure the disease completely or to some extent. The long-term prognosis of patient records may be useful for finding the causes responsible for particular diseases, so that human beings can take early preventive measures to minimize the risk of diseases that may supervene with growing age and hence increase life expectancy. In this paper, CMARM, a new Confabulation-MapReduce based association rule mining algorithm, is proposed for the analysis of a medical data repository for both rare and frequent itemsets, using an iterative MapReduce-based framework inspired by cogency. Cogency is the probability of the assumed facts being true if the conclusion is true; because it is based on pairwise conditional item probabilities, the proposed algorithm mines association rules in only one pass through the file. The proposed algorithm is also valuable for dealing with infrequent items due to its cogency-inspired approach.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_29-Analysis_of_Medical_Domain_Using_CMARM_Confabulation_Mapreduce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study Between METEOR and BLEU Methods of MT: Arabic into English Translation as a Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061128</link>
        <id>10.14569/IJACSA.2015.061128</id>
        <doi>10.14569/IJACSA.2015.061128</doi>
        <lastModDate>2015-12-01T12:15:14.7000000+00:00</lastModDate>
        
        <creator>Laith S. Hadla</creator>
        
        <creator>Taghreed M. Hailat</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <subject>component; Machine Translation; Arabic-English Corpus; Google Translator; Babylon Translator; METEOR; BLEU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The Internet provides its users with a variety of services, including free online machine translators, which translate free of charge between many of the world&#39;s languages, such as Arabic, English, Chinese, German, Spanish, French, and Russian. Machine translators facilitate the transfer of information between different languages, helping to eliminate the language barrier, since the amount of information and knowledge available varies from one language to another. Arabic content on the Internet, for example, accounts for about 1% of total Internet content, while Arabs constitute about 5% of the world&#39;s population, which means that the Internet&#39;s Arabic content represents only 20% of its natural proportion; this has encouraged some Arab parties to improve Arabic content on the Internet. Many interested specialists therefore rely on machine translators to bridge the knowledge gap between the information available in Arabic and that in other living languages such as English.
This empirical study aims to identify the best Arabic-to-English machine translation system, in order to help the developers of these systems enhance their effectiveness, and to help users choose the best one. The study involves the construction of an automatic machine translation evaluation system for Arabic-to-English translation, and it assesses the accuracy of the translations produced by two well-known machine translators: Google Translate and Babylon. The BLEU and METEOR methods are used to assess MT quality and to identify which method is closer to human judgments. The authors conclude that BLEU is closer to human judgments than the METEOR method.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_28-Comparative_Study_Between_METEOR_and_BLEU_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Alphabet and Numbers Sign Language Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061127</link>
        <id>10.14569/IJACSA.2015.061127</id>
        <doi>10.14569/IJACSA.2015.061127</doi>
        <lastModDate>2015-12-01T12:15:14.6830000+00:00</lastModDate>
        
        <creator>Mahmoud Zaki Abdo</creator>
        
        <creator>Alaa Mahmoud Hamdy</creator>
        
        <creator>Sameh Abd El-Rahman Salem</creator>
        
        <creator>Elsayed Mostafa Saad</creator>
        
        <subject>hand gestures; hand geometry; Sign language recognition; image analysis; and HMM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>This paper introduces an Arabic Alphabet and Numbers Sign Language Recognition system (ArANSLR). It facilitates communication between deaf and hearing people by recognizing the alphabet and number signs of Arabic sign language and converting them to text or speech. To achieve this target, the system is able to visually recognize gestures from hand-image input. The proposed algorithm uses hand geometry and the different shape of the hand in each sign to classify letter shapes using a Hidden Markov Model (HMM). Experiments on real-world datasets showed that the proposed algorithm for Arabic alphabet and number sign language recognition is suitable and reliable compared with other competitive algorithms. The experimental results show that the gesture recognition rate increases as the rectangle surrounding the hand is divided into a larger number of zones.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_27-Arabic_Alphabet_and_Numbers_Sign_Language_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case Based Reasoning: Case Representation Methodologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061126</link>
        <id>10.14569/IJACSA.2015.061126</id>
        <doi>10.14569/IJACSA.2015.061126</doi>
        <lastModDate>2015-12-01T12:15:14.6070000+00:00</lastModDate>
        
        <creator>Shaker H. El-Sappagh</creator>
        
        <creator>Mohammed Elmogy</creator>
        
        <subject>Case based reasoning; Ontological case representation; Case retrieval; Clinical decision support system; Knowledge management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Case Based Reasoning (CBR) is an important technique in artificial intelligence, which has been applied to various kinds of problems in a wide range of domains. Selecting a case representation formalism is critical for the proper operation of the overall CBR system. In this paper, we survey and evaluate the existing case representation methodologies. Moreover, case retrieval and future challenges for effective CBR are explained. Case representation methods are grouped into knowledge-intensive approaches and traditional approaches, with the first group outweighing the second. The knowledge-intensive methods depend on ontology and enhance all CBR processes, including case representation, retrieval, storage, and adaptation. Using a proposed set of qualitative metrics, the existing ontology-based case representation methods are studied and evaluated in detail. All these systems have limitations; no approach satisfies more than 53% of the specified metrics. The results of the survey explain the current limitations of CBR systems and show that ontology usage in case representation needs improvement to achieve semantic representation and semantic retrieval in CBR systems.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_26-Case_Based_Reasoning_Case_Representation_Methodologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Heterogeneous Deployment on Source Initiated Reactive Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061125</link>
        <id>10.14569/IJACSA.2015.061125</id>
        <doi>10.14569/IJACSA.2015.061125</doi>
        <lastModDate>2015-12-01T12:15:14.4970000+00:00</lastModDate>
        
        <creator>Nonita Sharma</creator>
        
        <creator>Ajay K Sharma</creator>
        
        <creator>Kumar Shashvat</creator>
        
        <subject>Wireless sensor networks (WSNs); Cost Analysis Model; Energy Analysis Model; Sensing Range Model; Optimality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Selection of an optimal number of high-energy-level nodes and the most appropriate heterogeneity level is a prerequisite for the heterogeneous deployment of a wireless sensor network, and it serves several purposes, such as enhanced network lifetime, optimal energy consumption, and optimal sensing coverage. The paper presents the mathematical modeling of cost, energy, and sensing-range analysis of 2-level, 3-level, and n-level heterogeneous wireless sensor networks. An experimental investigation has been carried out to study the effect of heterogeneity on a proposed Energy Efficient Source Initiated Reactive Algorithm. These aspects have been studied to find the limitations of the algorithm for homogeneous networks and to determine how heterogeneity enriches sensing range and network lifetime. Based on the simulated experimental and numerical results, a mathematical model is presented to calculate the optimal number of high-level nodes that can simultaneously enhance network lifetime and achieve optimal sensing coverage. The results are compared with those of a homogeneous network to prove the effectiveness of the stated approach and the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_25-Impact_of_Heterogeneous_Deployment_on_Source_Initiated_Reactive_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sleep Monitoring System with Sleep-Promoting Functions in Noise Detection and Sound Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061124</link>
        <id>10.14569/IJACSA.2015.061124</id>
        <doi>10.14569/IJACSA.2015.061124</doi>
        <lastModDate>2015-12-01T12:15:14.3870000+00:00</lastModDate>
        
        <creator>Lyn Chao-ling Chen</creator>
        
        <creator>Kuan-Wen Chen</creator>
        
        <subject>ambient sound; image sequence analysis; noise detection; non-invasive sleep monitoring; sleep promotion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Recently, there has been a growing demand and interest in developing sleep-promoting systems for improving sleep condition. Because sleep environments vary and sensitivity to noise differs between individuals, it is difficult for current sleep-promoting systems to provide an adaptable solution. This paper develops a non-invasive sleep monitoring system with adaptive sleep-promoting sound that responds to the sleep environment and sleep habits. For people who fall asleep in a quiet environment, a constantly playing sound would probably disturb their sleep; the proposed system is therefore designed to detect noise disturbances and trigger a sleep-promoting sound automatically. A device with multiple sensors (an infrared depth sensor, an RGB camera, and a four-microphone array) is used to detect sleep disturbances. When a noise is detected, an ambient sound is played automatically to mask the noise. The system also serves people who are used to sleeping with sound by providing additional sound playing from the beginning of their sleep. Moreover, from the input depth signals and color images, scores are calculated from the sleep information and recorded for sleep-quality evaluation. An overnight experiment was carried out, and the results show the efficiency of the proposed system in diverse sleep environments. The adaptive method is feasible for individuals, and it is also convenient and cost-effective for use in a home context.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_24-A_Sleep_Monitoring_System_with_Sleep_Promoting_Functions_in_Noise_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Embedded Modbus Compliant Interactive Operator Interface for a Variable Frequency Drive Using Rs 485</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061123</link>
        <id>10.14569/IJACSA.2015.061123</id>
        <doi>10.14569/IJACSA.2015.061123</doi>
        <lastModDate>2015-12-01T12:15:14.2800000+00:00</lastModDate>
        
        <creator>Adnan Shaout</creator>
        
        <creator>Khurram Abbas</creator>
        
        <subject>Modbus RTU; Variable Frequency Drive Operator Panel; Modbus Master VFD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The paper proposes the architecture and software design of a Modbus Compliant Operator Interface Panel (MCOIP) for a high speed Variable Frequency Drive (VFD) – a state of the art embedded design that offers several key advantages over the existing proprietary industrial models in use today. The use of serial Modbus RTU communication over RS485 allows an economically feasible, open source, vendor neutral, feature laden, robust and safe operating model. Through the use of an ARM based RISC microcontroller, the low response time of the design makes the human machine interface more real-time and interactive.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_23-An_Embedded_Modbus_Compliant_Interactive_Operator_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Testing the Use of the Integrated Model in Designing the Management Information Systems by Using the Mathematical Probability Theories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061122</link>
        <id>10.14569/IJACSA.2015.061122</id>
        <doi>10.14569/IJACSA.2015.061122</doi>
        <lastModDate>2015-12-01T12:15:14.1700000+00:00</lastModDate>
        
        <creator>Mohammad M M Abu Omar</creator>
        
        <creator>Khairul Anuar Abdullah</creator>
        
        <subject>Integrated Model; Management Information System; MIS; Classical Approach; Information System Life Cycle; ISLC; Simple Random Sampling; Probability Theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The integrated model is a new model, recently developed to reduce the weaknesses and problems of the classical approach in building the management information systems (MISs) that are used to solve management problems in practical life. The use of this integrated model needs to be tested to prove how efficiently and successfully the model works. To achieve this objective, this paper uses mathematical probability theories to implement an internal test of how the integrated model works before it is used in practice. The paper uses the qualitative research method in its methodology.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_22-Testing_the_Use_of_the_Integrated_Model_in_Designing_the_Management_Information_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protein Sequence Matching Using Parametric Spectral Estimate Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061121</link>
        <id>10.14569/IJACSA.2015.061121</id>
        <doi>10.14569/IJACSA.2015.061121</doi>
        <lastModDate>2015-12-01T12:15:14.1230000+00:00</lastModDate>
        
        <creator>Hsuan-Ting Chang</creator>
        
        <creator>Hsiao-Wei Peng</creator>
        
        <creator>Ciing-He Li</creator>
        
        <creator>Neng-Wen Lo</creator>
        
        <subject>protein sequence; amino acids; digital signal processing; parametric spectral estimate; hydrophilicity; hydrophobicity; Markov chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Putative protein sequences decoded from messenger ribonucleic acid (mRNA) sequences are composed of twenty amino acids with different physical-chemical properties, such as hydrophobicity and hydrophilicity (uncharged, positively charged, or negatively charged amino acids). In this paper, the power spectral estimate (PSE) technique for random processes is applied to the protein sequence matching framework. First, the twenty kinds of amino acids are classified based on their hydrophobicity and hydrophilicity. Then each amino acid in the protein sequence is mapped to a corresponding complex value, and various hidden Markov chain orders are considered for the complex-valued sequences. The PSE method can explore the implicit statistical relations among protein sequences. The mean squared error between the power spectra of two sequences is computed and then used to measure their similarity. The experimental results verify that the proposed PSE method provides similarity measurements consistent with the well-known ClustalW and BLASTp schemes; moreover, it can show better similarity relevance than both.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_21-Protein_Sequence_Matching_Using_Parametric_Spectral_Estimate_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proactive Software Engineering Approach to Ensure Rapid Software Development and Scalable Production with Limited Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061120</link>
        <id>10.14569/IJACSA.2015.061120</id>
        <doi>10.14569/IJACSA.2015.061120</doi>
        <lastModDate>2015-12-01T12:15:14.0600000+00:00</lastModDate>
        
        <creator>A. B. Farid</creator>
        
        <subject>Software Engineering; Load Testing; Test Analysis; ISO 29119; Continuous Integration; Static Analysis; Stress Analysis; Application Scalability; Building High Scalability System; Build Verification Test; Software Configuration Management; Version Control;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Nowadays, there is a need to build scalable systems within a narrow time window. While the effort and accuracy usually required to build high-scale systems are considerable, the agile nature of system requirements creates a need to enhance some software engineering practices. These practices should be integrated in order to help software (SW) development teams build and test scalable systems rapidly, with a high level of confidence in their scalability.
This research explains the proposed Proactive Approach, which presents a set of software engineering practices that can help in producing scalable systems while minimizing the time wasted within the production cycle. This set of practices has been validated, verified, and tested through building 46 releases of one of the most important, mission-critical, and scalable systems. Applying these practices enhanced the average response time of web pages by 1921.5%, test code churn by more than 5000%, and time to release by 300%, and produced a system that could stand against 95,375 users with a 99.921% scalability ratio.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_20-Proactive_Software_Engineering_Approach_to_Ensure_Rapid_Software_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acoustic Emotion Recognition Using Linear and Nonlinear Cepstral Coefficients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061119</link>
        <id>10.14569/IJACSA.2015.061119</id>
        <doi>10.14569/IJACSA.2015.061119</doi>
        <lastModDate>2015-12-01T12:15:13.9500000+00:00</lastModDate>
        
        <creator>Farah Chenchah</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Mel Frequency Cepstral Coefficients (MFCC); Linear Frequency Cepstral Coefficients (LFCC);Hidden Markov Model (HMM); Support Vector Machine  (SVM); emotion recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Recognizing human emotions through the vocal channel has gained increased attention recently. In this paper, we study how the features used and the classifiers impact the recognition accuracy of emotions present in speech. Four emotional states are considered for the classification of emotions from speech in this work. To this end, features are extracted from the audio characteristics of emotional speech using Linear Frequency Cepstral Coefficients (LFCC) and Mel-Frequency Cepstral Coefficients (MFCC). These features are then classified using a Hidden Markov Model (HMM) and a Support Vector Machine (SVM).</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_19-Acoustic_Emotion_Recognition_Using_Linear_and_Nonlinear_Cepstral_Coefficients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Interface Design of E-Learning System for Functionally Illiterate People</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061118</link>
        <id>10.14569/IJACSA.2015.061118</id>
        <doi>10.14569/IJACSA.2015.061118</doi>
        <lastModDate>2015-12-01T12:15:13.8570000+00:00</lastModDate>
        
        <creator>Asifur Rahman</creator>
        
        <creator>Akira Fukuda</creator>
        
        <subject>User Interface; Illiterate people; e-Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Among different types of illiterate people, the print illiterate suffer most from missing crucial information passed around the society. Many print illiterate people are found in the developing countries, and in many cases they live in remote areas working as farmers. These people are deprived of the knowledge generated from the latest scientific research. This research makes some recommendations related to developing a user interface especially suitable for print illiterate people. In this regard, a user interface is developed based on the recommendations of previous researchers. The authors find those recommendations insufficient and develop another user interface based on improvements proposed by the authors. Later, both user interfaces are tested by two different groups of print illiterate people in a remote village in Bangladesh. The test data shows that the proposed improvement contributes significantly to making the user interface more usable to the target population. 13 out of 15 users could complete the assigned task successfully using the improved user interface, whereas only 8 out of 14 users could do the same with the other user interface. Among the successful users, completing the task with the improved user interface took 26% less time than with the other user interface. Finally, some recommendations for developing user interfaces for functionally illiterate people are made based on the results and observations of this research.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_18-User_Interface_Design_of_E_Learning_System_for_Functionally_Illiterate_People.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Structural Equation Model (SEM) of Governing Factors Influencing the Implementation of T-Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061117</link>
        <id>10.14569/IJACSA.2015.061117</id>
        <doi>10.14569/IJACSA.2015.061117</doi>
        <lastModDate>2015-12-01T12:15:13.7030000+00:00</lastModDate>
        
        <creator>Sameer Alshetewi</creator>
        
        <creator>Dr Robert Goodwin</creator>
        
        <creator>Faten Karim</creator>
        
        <creator>Dr Denise de Vries</creator>
        
        <subject>t-Government; e-Government; Strategy; Stakeholders; Citizens’ Centricity; Funding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Governments around the world have invested significant sums of money in Information and Communication Technology (ICT) to improve the efficiency and effectiveness of the services being provided to their citizens. However, they have not achieved the desired results because of the lack of interoperability between different government entities. Therefore, many governments have started shifting away from the original concept of e-Government towards a much more transformational approach that encompasses the entire relationship between different government departments and users of public services, which can be termed transformational government (t-Government). In this paper, a model is proposed for governing factors that impact the implementation of t-Government, such as strategy, leadership, stakeholders, citizen centricity and funding, in the context of Saudi Arabia. Five constructs are hypothesised to be related to the implementation of t-Government. To clarify the relationships among these constructs, a structural equation model (SEM) is utilised to examine the model fit with the five hypotheses. The results show that there are positive and significant relationships among the constructs, such as the relationships between strategy and t-Government, between stakeholders and t-Government, and between leadership and t-Government. This study also showed an insignificant relationship between citizens’ centricity and t-Government, and likewise an insignificant relationship between funding and t-Government.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_17-A_Structural_Equation_Model_SEM_of_Governing_Factors_Influencing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>L-Bit to M-Bit Code Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061116</link>
        <id>10.14569/IJACSA.2015.061116</id>
        <doi>10.14569/IJACSA.2015.061116</doi>
        <lastModDate>2015-12-01T12:15:13.5930000+00:00</lastModDate>
        
        <creator>Ruixing Li</creator>
        
        <creator>Shahram Latifi</creator>
        
        <creator>Yun Lun</creator>
        
        <creator>Ming Lun</creator>
        
        <subject>overhead; mapping; synchronization; consecutive “0”</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>We investigate codes that map L bits to m bits to achieve a set of codewords which contain no runs of n consecutive “0”s. Such codes are desirable in the design of line codes which, in the absence of clock information in data, provide reasonable clock recovery due to sufficient state changes. Two problems are tackled: (i) we derive n_min for a fixed L and m, and (ii) we determine m_min for a fixed L and n. The results benefit telecommunication applications where clock synchronization of received data needs to be done with minimum overhead.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_16-L_Bit_to_M_Bit_Code_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Convolutional Neural Networks for Image Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061115</link>
        <id>10.14569/IJACSA.2015.061115</id>
        <doi>10.14569/IJACSA.2015.061115</doi>
        <lastModDate>2015-12-01T12:15:13.4830000+00:00</lastModDate>
        
        <creator>Hayder M. Albeahdili</creator>
        
        <creator>Haider A. Alwzwazy</creator>
        
        <creator>Naz E. Islam</creator>
        
        <subject>Convolutional Neural Network; Image recognition; Multiscale input images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Image recognition has recently become a vital task addressed by several methods. One of the most interesting is the Convolutional Neural Network (CNN), which is widely used for this purpose. However, some tasks contain small features that are an essential part of the task, and classification using a CNN is then not efficient because most of those features diminish before reaching the final stage of classification. In this work, essential parameters that can influence model performance are analyzed and explored. Furthermore, different elegant contemporary prior models are recruited to introduce a new leveraging model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on different challenging benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show that the proposed approach outperforms and achieves superior results compared to the most contemporary approaches.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_15-Robust_Convolutional_Neural_Networks_for_Image_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gesture Recognition Based on Human Grasping Activities Using PCA-BMU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061114</link>
        <id>10.14569/IJACSA.2015.061114</id>
        <doi>10.14569/IJACSA.2015.061114</doi>
        <lastModDate>2015-12-01T12:15:13.4200000+00:00</lastModDate>
        
        <creator>Nazrul H. ADNAN</creator>
        
        <creator>Mahzan T.</creator>
        
        <subject>recognition; grasp; grasp taxonomy; human finger; dimensionality reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>This research study presents the recognition of finger grasps for various grasping styles of daily living. In general, the posture of the human hand determines the fingers that are used to create contact with an object while developing the touching contact. Human grasping can be detected by studying the movement of the fingers while bending during object holding. Ten right-handed subjects participated in the experiment; each subject was fitted with a right-handed GloveMAP, which recorded all movement of the thumb, index, and middle fingers while grasping selected objects. GloveMAP is constructed using flexible bend sensors placed on the back of a glove. Based on the human grasp taxonomy by Cutkosky, object grasping is distinguished by two dominant prehensile postures, namely the power grip and the precision grip. The dataset signal is extracted using GloveMAP, and all signals are filtered using the Gaussian filtering method. The method is capable of improving the amplitude transmission characteristic with a minimal combination of time and amplitude response. The result shows no overshoot, smoothing the grasping signal by removing the unneeded signal (noise) that occurs in the input / original grasping data. Principal Component Analysis – Best Matching Unit (PCA-BMU) is a process of justifying the human grasping data that involves several grasping groups and forms components identified as nodes or neurons.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_14-Gesture_Recognition_Based_on_Human_Grasping_Activities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Processing Based Customized Image Editor and Gesture Controlled Embedded Robot Coupled with Voice Control Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061113</link>
        <id>10.14569/IJACSA.2015.061113</id>
        <doi>10.14569/IJACSA.2015.061113</doi>
        <lastModDate>2015-12-01T12:15:13.3130000+00:00</lastModDate>
        
        <creator>Somnath Kar</creator>
        
        <creator>Ankit Jana</creator>
        
        <creator>Debarshi Chatterjee</creator>
        
        <creator>Dipayan Mitra</creator>
        
        <creator>Soumit Banerjee</creator>
        
        <creator>Debasish Kundu</creator>
        
        <creator>Sudipta Ghosh</creator>
        
        <creator>Sauvik Das Gupta</creator>
        
        <subject>Image Processing; Image Editor; Gesture Control; Embedded Robot; Voice Control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In modern sciences and technologies, images gain much broader scope due to the ever-growing importance of scientific visualization (of often large-scale, complex scientific/experimental data), like microarray data in genetic research, or real-time multi-asset portfolio trading in finance. In this paper, a proposal has been presented to implement a Graphical User Interface (GUI) consisting of various MATLAB functions related to image processing, and to use the same to create a basic image processing editor with different features, like viewing the red, green and blue components of a color image separately, color detection, and various other features used in a basic image editor such as noise addition and removal, edge detection, cropping, resizing, rotation, histogram adjustment and brightness control, along with object detection and tracking. This has been further extended to provide a reliable and more natural technique for the user to navigate a robot in the natural environment using gestures based on color tracking. Additionally, a voice control technique has been employed to navigate the robot in various directions in the Cartesian plane, employing normal speech recognition techniques available in Microsoft Visual Basic.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_13-Image_Processing_Based_Customized_Image_Editor_and_Gesture_Controlled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Secondary Attributes to Boost the Prediction Accuracy of Students’ Employability Via Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061112</link>
        <id>10.14569/IJACSA.2015.061112</id>
        <doi>10.14569/IJACSA.2015.061112</doi>
        <lastModDate>2015-12-01T12:15:13.2030000+00:00</lastModDate>
        
        <creator>Pooja Thakar</creator>
        
        <creator>Prof. Dr. Anil Mehta</creator>
        
        <creator>Dr. Manisha</creator>
        
        <subject>Data Mining; Education; Prediction; Psychometric;  Educational Data Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Data mining is best known for its analytical and prediction capabilities. It is used in several areas such as fraud detection, predicting client behavior, money market behavior, and bankruptcy prediction. It can also help in establishing an educational ecosystem, which discovers useful knowledge and assists educators in taking proactive decisions to boost student performance and employability.
This paper presents an empirical study that compares varied classification algorithms on two datasets of MCA (Masters in Computer Applications) students collected from various affiliated colleges of a reputed state university in India. One dataset includes only primary attributes, whereas the other dataset is fed with secondary psychometric attributes as well. The results showcase that primary academic attributes alone do not lead to good prediction accuracy of students’ employability when the students are in the initial year of their education. The study analyzes and stresses the role of secondary psychometric attributes for better prediction accuracy and analysis of students’ performance. Timely prediction and analysis of students’ performance can help management, teachers and students to work on their gray areas for better results and employment opportunities.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_12-Role_of_Secondary_Attributes_to_Boost_the_Prediction_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Relational and Non-Relational Database Models in a Web- Based Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061111</link>
        <id>10.14569/IJACSA.2015.061111</id>
        <doi>10.14569/IJACSA.2015.061111</doi>
        <lastModDate>2015-12-01T12:15:13.0930000+00:00</lastModDate>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <creator>Roxana Sotoc</creator>
        
        <subject>MongoDB; MSSQL; NoSQL; non-relational database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and non-relational databases, thus highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database, we used MongoDB, and for the relational database, we used MSSQL 2014. We also present the advantages of using a non-relational database compared to a relational database integrated in a web-based application which needs to manipulate a large amount of data.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_11-A_Comparative_Study_of_Relational_and_Non_Relational_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating a Knowledge Database for Lectures of Faculty Members, Proposed E-Module for Isra University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061110</link>
        <id>10.14569/IJACSA.2015.061110</id>
        <doi>10.14569/IJACSA.2015.061110</doi>
        <lastModDate>2015-12-01T12:15:12.9830000+00:00</lastModDate>
        
        <creator>Dr. Amaal Al-Amawi</creator>
        
        <creator>Dr. Salwa Alsmarai</creator>
        
        <creator>Dr. Manar Maraqa</creator>
        
        <subject>Knowledge; knowledge database; electronic knowledge database; Knowledge sharing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Higher education in Jordan is currently expanding as new universities open and compete to offer the best learning experience. Many universities face accreditation challenges; hence, they tend to recruit lecturers who may not have solid teaching experience. Experienced lecturers tend to have a high turnover rate, which causes knowledge loss. To prevent such loss, this research presents a knowledge repository framework. This framework will serve as a reference and a vessel of knowledge that builds and develops the educational and teaching capacities of professors/lecturers. It can also be seen as part of the electronic learning system, which brings benefits to students and enables them to retrieve any lectures they need. The main question we aim to answer is whether a knowledge memory can be designed and created to contribute to supporting the educational and teaching capacities of university lecturers. In order to answer this question, this research creates an electronic knowledge database to store explicit knowledge taken from lectures (written, audio and visual). These lectures are prepared and circulated or presented by university professors/lecturers throughout all university colleges and departments. This knowledge database resembles a cognitive memory that grows and develops with time.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_10-Creating_a_Knowledge_Database_for_Lectures_of_Faculty_Members.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwriting Word Recognition Based on SVM Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061109</link>
        <id>10.14569/IJACSA.2015.061109</id>
        <doi>10.14569/IJACSA.2015.061109</doi>
        <lastModDate>2015-12-01T12:15:12.7200000+00:00</lastModDate>
        
        <creator>Mustafa S. Kadhm</creator>
        
        <creator>Asst. Prof. Dr. Alia Karim Abdul Hassan</creator>
        
        <subject>Arabic Text; Preprocessing; Feature Extraction; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>This paper proposes a new architecture for a handwriting word recognition system based on a Support Vector Machine (SVM) classifier. The proposed work operates at the handwriting word level and does not need a character segmentation stage. An Arabic handwriting dataset, AHDB, is used to train and test the proposed system. The system achieved a best recognition accuracy of 96.317% based on several feature extraction methods and the SVM classifier. Experimental results show that the polynomial kernel of the SVM is convergent and more accurate for recognition than other SVM kernels.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_9-Handwriting_Word_Recognition_Based_on_SVM_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image De-Noising Schemes Using Different Wavelet Threshold Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061108</link>
        <id>10.14569/IJACSA.2015.061108</id>
        <doi>10.14569/IJACSA.2015.061108</doi>
        <lastModDate>2015-12-01T12:15:12.6100000+00:00</lastModDate>
        
        <creator>Nadir Mustafa</creator>
        
        <creator>Saeed Ahmed Khan</creator>
        
        <creator>Jiang Ping Li</creator>
        
        <creator>Mohamed Tag Elsir</creator>
        
        <subject>Baye’s Wavelet threshold; Discrete Wavelet; Medical Image De-noising; Magnetic Resonance Imaging (MRI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>In recent years, researchers have done tremendous work in the field of medical image applications such as Magnetic Resonance Imaging (MRI), ultrasound and CT scans, yet much research and experimentation remains in the medical imaging field and in the diagnosis of human health by health care institutes. There is growing interest in medical image de-noising as a hot area of research, and in imaging equipment as a device, for better image processing and the highlighting of important features. These images are affected by random noise during acquisition, analysis and transmission, which results in blurry images visible at low contrast. Wavelet transforms offer an effective method to separate the noise from the original medical image by using threshold techniques without affecting the important data of the image. The forward wavelet transform represents the sub-bands of the original image in a decomposition process; these sub-band coefficients are then reconstructed into the original image using the inverse wavelet transform. In this work, the quality of the medical image has been evaluated using filter assessment parameters such as variance, standard deviation, the squared difference error between the original medical image &amp; the de-noised image (MSE), and the ratio between the original image &amp; the noisy image. From the numerical results, we can see that the algorithm efficiently de-noises a noisy medical image. When investigated with Baye’s threshold technique, it achieved the best value of peak signal to noise ratio (PSNR). For best medical image de-noising, the wavelet-based de-noising algorithm has been investigated, and the results of Baye’s technique and the hard &amp; soft threshold methods have been compared.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_8-Medical_Image_De_Noising_Schemes_Using_Different_Wavelet_Threshold_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Quality Prediction of Product Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061107</link>
        <id>10.14569/IJACSA.2015.061107</id>
        <doi>10.14569/IJACSA.2015.061107</doi>
        <lastModDate>2015-12-01T12:15:12.5630000+00:00</lastModDate>
        
        <creator>H.Almagrabi </creator>
        
        <creator>A. Malibari</creator>
        
        <creator>J. McNaught</creator>
        
        <subject>sentiment analysis; product reviews; content analysis; helpfulness detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>With the help of Web 2.0, the Internet offers a vast amount of reviews on many topics and in different domains. This has led to an explosive growth of product reviews and customer feedback, which presents the problem of how to handle the abundant volume of data. It is an expensive and time-consuming task to analyze this huge content of opinions. Therefore, the need for automated sentiment analysis systems is vital. However, these systems encounter many challenges; assessing the content quality of the posted opinions is an important area of study related to sentiment analysis. Currently, review helpfulness is assessed manually; however, the task of automatically assessing it has gained more attention in recent years. This paper provides a survey of approaches to the challenge of identifying the content quality of product reviews.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_7-A_Survey_of_Quality_Prediction_of_Product_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Performance of GIS on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061106</link>
        <id>10.14569/IJACSA.2015.061106</id>
        <doi>10.14569/IJACSA.2015.061106</doi>
        <lastModDate>2015-12-01T12:15:12.4370000+00:00</lastModDate>
        
        <creator>Ahmed Teaima A. T</creator>
        
        <creator>Hesham Ahmed Hefny. H. F. H</creator>
        
        <creator>BahaaShabana B.S</creator>
        
        <subject>Cloud Computing; GIS; Kd-tree; Quadtree</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Cloud computing provides a way of delivering dynamically scalable and virtualized resources as a service over the Internet. GIS is a technology which could use cloud computing for distributed parallel processing of a large set of data, storing the results and sharing them with users around the world. GIS is beneficial and works well when it is available to everyone, everywhere, anytime, and at minimal cost in terms of technology and outlay. Cloud computing helps users to use GIS applications in an easy way. This paper studies example data structures, namely the K-d tree and the Quadtree, in a GIS application, and compares them when stored on cloud computing; the paper also portrays the results of studying these data structures on cloud computing platforms for data retrieval. The paper provides an application for finding neighborhoods from existing stored data.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_6-Enhancing_Performance_of_GIS_on_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embed Attitude from Student on Elearning Using Instructional Design with ADDIE Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061105</link>
        <id>10.14569/IJACSA.2015.061105</id>
        <doi>10.14569/IJACSA.2015.061105</doi>
        <lastModDate>2015-12-01T12:15:12.3430000+00:00</lastModDate>
        
        <creator>Ni Putu Linda Santiari</creator>
        
        <subject>Instructional Design; eLearning; Affective Learning; Reliability; Validity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Attitude is very important in education; without a good attitude, education will certainly not run smoothly, and education can even be said to fail if its graduates do not display a good attitude in the community and the workplace. Determining the value of attitude in elearning is not easy. This study tries to create a method that can be used to determine the value of a student’s attitude in an elearning system. The method used is instructional design with the ADDIE Model, which begins by determining the attitude parameters to be assessed; the parameters used are the individual parts of Affective Learning. After determining the parameters, a questionnaire is designed and produced; before this questionnaire is deployed, its validity and reliability are tested using SPSS. If the questionnaire is valid and reliable, it can then be deployed and the results evaluated. The questionnaire distributed to a group of students showed that the students’ attitude is already Very Good, with a total of 96 students obtaining a very good value, a percentage of 48%.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_5-Embed_Attitude_from_Student_on_Elearning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Central Dogma Based Cryptographic Algorithm in Data Warehouse Architecture for Performance Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061104</link>
        <id>10.14569/IJACSA.2015.061104</id>
        <doi>10.14569/IJACSA.2015.061104</doi>
        <lastModDate>2015-12-01T12:15:12.2200000+00:00</lastModDate>
        
        <creator>Rajdeep Chowdhury</creator>
        
        <creator>Soupayan Datta</creator>
        
        <creator>Saswata Dasgupta</creator>
        
        <creator>Mallika De</creator>
        
        <subject>Data Warehouse; Central Dogma; Replication; Translation; Transcription; Codon; Data Mart; Hashing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>A data warehouse is a set of integrated databases designed to support decision-making and problem solving, housing highly condensed data. The data warehouse has become an increasingly popular theme for contemporary researchers, in line with current trends in industry and executive practice. The crux of the proposed work is delivering an enhanced and exclusive innovative model intended to strengthen security measures, which at times have been found wanting, while also ensuring improved accessibility through a Hashing modus operandi. A new algorithm was devised using the concept of protein synthesis, studied in Genetics, that is, in the field of Biotechnology, wherein three steps are observed, namely DNA Replication, Translation and Transcription. In the proposed algorithm, the two latter steps, Translation and Transcription, have been taken into account, and the concept has been used for competent encryption and proficient decryption of data. The Central Dogma Model is the explicit model that accounts for and elucidates the course of Protein Synthesis using the Codons, which compose the RNA and the DNA and are implicated in numerous bio-chemical processes in living organisms. A dual stratum of encryption and decryption has thereby been employed for optimal security. The formulation of the Hashing modus operandi ensures a considerable diminution of access time, while enabling apt retrieval of all indispensable data from the data vaults.
The pertinent application of the proposed model with enhanced security lies in its service to a variety of organizations where the accrual of protected data is of extreme magnitude, including educational organizations, corporate houses, medical establishments, private establishments and so on.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_4-Implementation_of_Central_Dogma_Based_Cryptographic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SmartOrBAC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061103</link>
        <id>10.14569/IJACSA.2015.061103</id>
        <doi>10.14569/IJACSA.2015.061103</doi>
        <lastModDate>2015-12-01T12:15:12.0970000+00:00</lastModDate>
        
        <creator>Imane BOUIJ-PASQUIER</creator>
        
        <creator>Anas ABOU EL KALAM</creator>
        
        <creator>Abdellah AIT OUAHMAN</creator>
        
        <subject>internet of things; security; privacy; access control model; authorization process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The emergence of the Internet of Things (IoT) paradigm provides huge scope for more streamlined living through an increase in smart services, but this coincides with an increase in security and privacy concerns; access control has therefore been an important factor in the development of the IoT.
This work proposes an authorization access model called SmartOrBAC, built around a set of security and performance requirements. This model enhances the existing OrBAC (Organization-based Access Control) model and adapts it to IoT environments. SmartOrBAC separates the problem into different functional layers, distributes processing costs between constrained devices and less constrained ones, and at the same time addresses the collaborative aspect with a specific solution. This paper also presents the application of SmartOrBAC to a real IoT example and gives a complexity study demonstrating that, even though this model is extensive, it adds no additional complexity relative to traditional access control models.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_3-SmartOrBAC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Sub-Game Perfect Nash Equilibrium to Deduce the Effect of Government Subsidy on Consumption Rates and Prices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061102</link>
        <id>10.14569/IJACSA.2015.061102</id>
        <doi>10.14569/IJACSA.2015.061102</doi>
        <lastModDate>2015-12-01T12:15:11.9870000+00:00</lastModDate>
        
        <creator>Dr. Magdi Amer</creator>
        
        <creator>Dr. Ahmed Kattan</creator>
        
        <subject>game-theory; subsidy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>Governments are interested in inducing positive habits and behaviors in their citizens and discouraging those that are harmful to the individual or to society. Taxation and legislation are usually used to discourage negative behaviors, while subsidy seems the politically acceptable way to encourage positive ones. In this paper, the Subgame Perfect Nash Equilibrium is used to deduce the effect of government subsidy on user consumption, prices, and producer and distributor profits.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_2-Using_the_Sub_Game_Perfect_Nash_Equilibrium_to_Deduce_the_Effect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distance Prediction for Commercial Serial Crime Cases Using Time Delay Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061101</link>
        <id>10.14569/IJACSA.2015.061101</id>
        <doi>10.14569/IJACSA.2015.061101</doi>
        <lastModDate>2015-12-01T12:15:11.8300000+00:00</lastModDate>
        
        <creator>Anahita Ghazvini</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <creator>Mohammed Ariff Abdullah</creator>
        
        <creator>Md Nawawi Junoh</creator>
        
        <creator>Zainal Abidin bin Kasim</creator>
        
        <subject>Criminology and Computational Criminology; Neural Network; modeling; NARX; BPTT; Quantum GIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(11), 2015</description>
        <description>The prediction of the next serial crime time is important in the field of criminology for preventing the recurring actions of serial criminals. In the associated dynamic systems, one of the main sources of instability and poor performance is the time delay, which is commonly predicted using nonlinear methods. The aim of this study is to introduce a dynamic neural network model using nonlinear autoregressive time series with exogenous (external) input (NARX) and Back Propagation Through Time (BPTT), verified extensively in MATLAB, to predict and model the crime times for the next distance of serial cases. Recurrent neural networks have been used extensively for modeling nonlinear dynamic systems; there are different types, such as Time Delay Neural Networks (TDNN), layer recurrent networks, NARX, and BPTT. The NARX model has drawn particular attention for the two cases of input-output modeling of dynamic systems and time series prediction. In this study, a comparison of the NARX and BPTT models for predicting the next serial crime time illustrates that the NARX model exhibits better performance for the prediction of serial cases than the BPTT model. Our future work aims to improve the NARX model by combining objective functions.</description>
        <description>http://thesai.org/Downloads/Volume6No11/Paper_1-Distance_Prediction_for_Commercial_Serial_Crime_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Compressed Sensing Based Encryption Approach for Tax Forms Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041106</link>
        <id>10.14569/IJARAI.2015.041106</id>
        <doi>10.14569/IJARAI.2015.041106</doi>
        <lastModDate>2015-11-12T07:39:22.9600000+00:00</lastModDate>
        
        <creator>Adrian Brezulianu</creator>
        
        <creator>Monica Fira</creator>
        
        <creator>Marius Daniel Pestina</creator>
        
        <subject>compressed sensing; encryption; security; greedy algorithms</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>In this work we investigate the possibility of using the measurement matrices from compressed sensing as secret keys to encrypt/decrypt signals. Practical results and a comparison between the BP (basis pursuit) and OMP (orthogonal matching pursuit) decryption algorithms are presented. To test our method, we used 10 text messages (10 different tax forms) and generated 10 random matrices; to validate distortion we used the PRD (percentage root-mean-square difference), its normalized version (PRDN), and the NMSE (normalized mean square error). From the practical results we found that the running time of the BP algorithm is much higher than that of the OMP algorithm while its errors are smaller; it should be noted that OMP does not guarantee convergence. We also found that, for tax forms (or other templates of no interest for encryption), it is more advantageous to encrypt only the recorded data: the time required for decoding is then significantly lower than decrypting the entire form.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_6-Compressed_Sensing_Based_Encryption_Approach_for_Tax_Forms_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of New Student Numbers using Least Square Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041105</link>
        <id>10.14569/IJARAI.2015.041105</id>
        <doi>10.14569/IJARAI.2015.041105</doi>
        <lastModDate>2015-11-12T07:39:22.8970000+00:00</lastModDate>
        
        <creator>Dwi Mulyani</creator>
        
        <subject>Prediction of New Students; Least Square method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>STMIK BANJARBARU has acquired fewer new students over the last three years than in previous years, and the number of new students acquired is not the same every year. This unstable intake makes it difficult to plan classes, lecturers, and other resources. Knowing the predicted number of new students for the coming period is therefore very important as a basis for further decision making. The Least Square method is often used for such predictions because its calculation is more accurate than a moving average.
The study aimed to help private colleges and universities, especially STMIK BANJARBARU, predict the number of new students who will be accepted, making it easier to decide on the next steps and to estimate financial matters.
The prediction of new student numbers will help STMIK BANJARBARU determine the number of classes, scheduling, and so on.
From the results of the study, it can be concluded that prediction analysis using the Least Square Method can predict the number of new students for the coming period based on student data from previous years, because it produces valid results close to the truth. From the test results over the last 3 years, the validity is 97.8%, so the predictions can be considered valid.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_5-Prediction_of_New_Student_Numbers_using_Least_Square_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Computer Assisted CIPP Model for Evaluation Program of HIV/AIDS Countermeasures in Bali</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041104</link>
        <id>10.14569/IJARAI.2015.041104</id>
        <doi>10.14569/IJARAI.2015.041104</doi>
        <lastModDate>2015-11-12T07:39:22.8500000+00:00</lastModDate>
        
        <creator>I Made Sundayana</creator>
        
        <subject>Evaluation; Computer Assisted CIPP Model</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>One facet of the economic development of tourism in Bali is the tourism facilities established to support the Balinese tourism industry. Consequently, large numbers of newcomers have arrived in Bali in search of work, settling temporarily or permanently, so Bali has become heterogeneous and over-populated. This over-population has boosted the economic sector but has also been spreading HIV/AIDS rapidly. To anticipate and prevent the contagion, the Province of Bali issued a regional act, Number 3 of 2006, concerning the prevention of HIV/AIDS. In practice, this regional act has not yet been properly implemented, so evaluation is required of the rules and programs conducted by the government. One evaluation technique that can be applied is the CIPP model. However, the CIPP model is still applied in a conventional way and has not yet produced accurate evaluation counts when processing the data; this study therefore applies the CIPP model with computer assistance. The result of the total program percentage of HIV/AIDS prevention counted conventionally is 88.000%, while the count with computer assistance results in 88.400%, which indicates a high category.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_4-Implementation_of_Computer_Assisted_CIPP_Model_for_Evaluation_Program_of_HIV_AIDS_Countermeasures_in_Bali.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defending Grey Attacks by Exploiting Wavelet Analysis in Collaborative Filtering Recommender Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041103</link>
        <id>10.14569/IJARAI.2015.041103</id>
        <doi>10.14569/IJARAI.2015.041103</doi>
        <lastModDate>2015-11-12T07:39:22.7870000+00:00</lastModDate>
        
        <creator>Zhihai Yang</creator>
        
        <creator>Zhongmin Cai</creator>
        
        <creator>Agile Esmaeilikelishomi</creator>
        
        <subject>recommender system; grey attack; discrete wavelet transform; shilling attack</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>“Shilling” attacks, or “profile injection” attacks, have always been major challenges in collaborative filtering recommender systems (CFRSs). Many efforts have been devoted to improving collaborative filtering techniques so that they can eliminate “shilling” attacks. However, most of them focus on detecting push or nuke attacks, in which the target items are rated with the highest or lowest score. Few pay attention to the grey attack, in which a target item is rated with a score lower or higher than the average, a more hidden rating behavior than the push or nuke attack. In this paper, we present a novel detection method to make recommender systems resistant to such attacks. To characterize grey ratings, we exploit the rating deviation of items to discriminate between grey attack profiles and genuine profiles. In addition, we employ the novelty and popularity of items to construct rating series. Since it is difficult to discriminate between the rating series of attackers and genuine users, we incorporate the discrete wavelet transform (DWT) to amplify the differences in the rating series of rating deviation, novelty and popularity, respectively. Finally, we extract features from the rating-deviation-based, novelty-based and popularity-based rating series using an amplitude domain analysis method and combine all clustered results as our detection results. We conduct a set of experiments on the Book-Crossing dataset under diverse attack models. Experimental results validate the effectiveness of our approach in comparison with benchmark methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_3-Defending_Grey_Attacks_by_Exploiting_Wavelet_Analysis_in_Collaborative_Filtering_Recommender_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Surface Reflectance Estimation with MODIS by Means of Bi-Section between MODIS and Estimated Radiance as well as Atmospheric Correction with Skyradiometer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041102</link>
        <id>10.14569/IJARAI.2015.041102</id>
        <doi>10.14569/IJARAI.2015.041102</doi>
        <lastModDate>2015-11-12T07:39:22.6930000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kenta Azuma</creator>
        
        <subject>Sea surface reflectance; Atmospheric correction; Sky-radiometer; MODIS; satellite remote sensing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>A method for surface reflectance estimation with MODIS, by means of a bi-section algorithm between MODIS and estimated radiance, is proposed together with atmospheric correction using sky-radiometer data. Surface reflectance is one of the MODIS products, and its estimation accuracy needs to be improved. In particular, at locations near skyradiometer or AERONET sites, where solar direct, aureole and diffuse radiance are measured, it is possible to improve the estimation accuracy of surface reflectance. The experiment was conducted at the skyradiometer site situated at Saga University, near the Ariake Sea. It is rather difficult to estimate the surface reflectance of the sea surface because its reflectance is very low in comparison to that of the land surface. In order to improve surface reflectance estimation accuracy, atmospheric correction is mandatory, and an atmospheric correction method using skyradiometer data is also proposed. Through the experiment, these surface reflectance estimation and atmospheric correction methods are validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_2-Method_for_Surface_Reflectance_Estimation_with_MODIS_by_Means_of_Bi-Section_Between_MODIS_and_Estimated_Radiance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vicarious Calibration Data Screening Method Based on Variance of Surface Reflectance and Atmospheric Optical Depth Together with Cross Calibration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041101</link>
        <id>10.14569/IJARAI.2015.041101</id>
        <doi>10.14569/IJARAI.2015.041101</doi>
        <lastModDate>2015-11-12T07:39:22.6330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Vicarious calibration; Surface reflectance; Atmospheric Optical Depth; Sky-radiometer; Terra/ASTER; Satellite remote sensing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(11), 2015</description>
        <description>A vicarious calibration data screening method based on the measured atmospheric optical depth and the variance of the measured surface reflectance at the test sites is proposed. The reliability of vicarious calibration data has to be improved, and in order to improve it some screening has to be applied. Through experimental study, it is found that vicarious calibration data are best screened using the measured atmospheric optical depth and the variance of the measured surface reflectance, because a thick atmospheric optical depth means that the atmosphere sometimes contains serious pollution, and a large deviation of the surface reflectance from the average means that the solar irradiance is influenced by cirrus-type clouds. As a result of the screening, the uncertainty of the vicarious calibration data relative to the approximated radiometric calibration coefficient is remarkably improved. It is also found that the cross calibration uncertainty is poorer than that of the vicarious calibration.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No11/Paper_1-Vicarious_Calibration_Data_Screening_Method_Based_on_Variance_of_Surface_Reflectance_and_Atmospheric_Optical_Depth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simulation Model for Nakagmi-m Fading Channel with m&gt;1</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061040</link>
        <id>10.14569/IJACSA.2015.061040</id>
        <doi>10.14569/IJACSA.2015.061040</doi>
        <lastModDate>2015-11-03T08:43:35.3700000+00:00</lastModDate>
        
        <creator>Sandeep Sharma</creator>
        
        <creator>Rajesh Mishra</creator>
        
        <subject>Nakagami Distribution; Fading Channel; Wireless Channel Modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>In this paper, we propose a model to simulate a wireless fading channel based on the Nakagami-m distribution with m&gt;1. The Nakagami-m fading channel is the most generalized distribution, as it can generate the one-sided Gaussian distribution, the Rayleigh distribution and the Rician distribution for m equal to 0.5, 1 and &gt;1, respectively. We propose a method to generate a wireless fading channel based on the Nakagami-m distribution, as this distribution fits a wide class of fading channel conditions. Simulation results were obtained using Matlab R2013a and compared with the analytical results.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_40-A_Simulation_Model_for_Nakagmi_M_Fading_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Hidden Web Crawling Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061039</link>
        <id>10.14569/IJACSA.2015.061039</id>
        <doi>10.14569/IJACSA.2015.061039</doi>
        <lastModDate>2015-11-02T11:58:43.8130000+00:00</lastModDate>
        
        <creator>L.Saoudi </creator>
        
        <creator>A.Boukerram</creator>
        
        <creator>S.Mhamedi</creator>
        
        <subject>Deep crawler; Hidden Web crawler; SQLI query; form submission; searchable forms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is “hidden” behind query forms.
To access that information, the crawler has to fill the forms with valid data. For this reason, we propose a new approach that uses the SQLI technique to find the most promising keywords of a specific domain for automatic form submission.
The effectiveness of the proposed framework has been evaluated through experiments using real web sites, and encouraging preliminary results were obtained.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_39-A_New_Hidden_Web_Crawling_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Codes over a Semilocal Finite Ring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061038</link>
        <id>10.14569/IJACSA.2015.061038</id>
        <doi>10.14569/IJACSA.2015.061038</doi>
        <lastModDate>2015-11-01T11:33:31.7570000+00:00</lastModDate>
        
        <creator>Abdullah Dertli</creator>
        
        <creator>Yasemin Cengellenmis</creator>
        
        <creator>Senol Eren</creator>
        
        <subject>Cyclic codes; Skew cyclic codes; Quantum codes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>In this paper, we study the structure of cyclic, quasi-cyclic and constacyclic codes and their skew codes over the finite ring R. The Gray images of cyclic, quasi-cyclic, skew cyclic, skew quasi-cyclic and skew constacyclic codes over R are obtained. A necessary and sufficient condition for cyclic (negacyclic) codes over R to contain their duals is given. The parameters of quantum error correcting codes are obtained from both cyclic and negacyclic codes over R, and some examples are given. Quasi-constacyclic and skew quasi-constacyclic codes are introduced for the first time; by defining two inner products, their duality is investigated. A sufficient condition for 1-generator skew quasi-constacyclic codes to be free is determined.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_38-On_the_Codes_over_a_Semilocal_Finite_Ring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distance and Speed Measurements using FPGA and ASIC on a high data rate system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061037</link>
        <id>10.14569/IJACSA.2015.061037</id>
        <doi>10.14569/IJACSA.2015.061037</doi>
        <lastModDate>2015-11-01T11:33:31.7270000+00:00</lastModDate>
        
        <creator>Abdul Rehman Buzdar</creator>
        
        <creator>Liguo Sun</creator>
        
        <creator>Azhar Latif</creator>
        
        <creator>Abdullah Buzdar</creator>
        
        <subject>Distance; Speed; FPGA; MicroBlaze; Co-Design; ASIC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>This paper deals with the implementation of FPGA and ASIC designs to calculate the distance and speed of a moving remote object using a laser source and echo pulses reflected from that object. The project proceeded in three phases for the FPGA implementation: an all-in-C design using a Xilinx MicroBlaze soft-core processor system, an accelerated design pairing a custom co-processor with the MicroBlaze soft-core processor system, and a full custom hardware design implemented in VHDL on a Xilinx FPGA. The complete system was later implemented as an ASIC, with the modules optimized for area and timing in a 130nm process technology.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_37-Distance_and_Speed_Measurements_using_FPGA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for a Better Load Balancing and a Better Distribution of Resources in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061036</link>
        <id>10.14569/IJACSA.2015.061036</id>
        <doi>10.14569/IJACSA.2015.061036</doi>
        <lastModDate>2015-11-01T11:33:31.6930000+00:00</lastModDate>
        
        <creator>Abdellah IDRISSI</creator>
        
        <creator>Faouzia ZEGRARI</creator>
        
        <subject>Cloud Computing; Constraints QoS; Combinatorial Optimization; Task Scheduler; Exact Method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Cloud computing is a new paradigm in which data and Information Technology services are provided via the Internet using remote servers. It represents a new way of delivering computing resources, allowing on-demand access over the network. Cloud computing consists of several services, each of which can hold several tasks. As task scheduling is an NP-complete problem, task management is an important element of cloud computing technology. To optimize the performance of virtual machines hosted in the cloud, several task scheduling algorithms have been proposed. In this paper, we present an approach that solves the problem optimally while taking into account QoS constraints based on the different user requests. This technique, based on the Branch and Bound algorithm, assigns tasks to the different virtual machines while ensuring load balance and a better distribution of resources. The experimental results show that our approach gives very promising results for effective task planning.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_36-A_New_Approach_for_a_Better_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adoption of Biometric Fingerprint Identification as an Accessible, Secured form of ATM Transaction Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061035</link>
        <id>10.14569/IJACSA.2015.061035</id>
        <doi>10.14569/IJACSA.2015.061035</doi>
        <lastModDate>2015-11-01T11:33:31.6800000+00:00</lastModDate>
        
        <creator>Michael Mireku Kwakye</creator>
        
        <creator>Hanan Yaro Boforo</creator>
        
        <creator>Eugene Louis Badzongoly</creator>
        
        <subject>Information Technology; Automatic Teller Machine; Biometrics; Fingerprint; BioHASH; Token</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Security is continuously an important concern for most Information Technology-related industries, especially the banking industry. The banking industry is concerned with protecting and securing the privacy and data of its customers, as well as their transactions. The adoption of biometric technology as a means of identifying and authenticating individuals has been proposed as one of the varied solutions to many of the security challenges faced by the banking industry. In this paper, the authors address the ATM transaction authentication problem of banking transactions using fingerprint identification as one form of biometric authentication. The novel methodology adopted proposes the use of online off-card fingerprint verification, which involves matching live fingerprint templates against pre-stored templates read from the ATM smart card. The experimental evaluation of the proposed methodology presents a system that offers faster and relatively more secure authentication, as compared to previous and existing methodologies. Moreover, the use of BioHASH templates ensures an irreversible cryptographic hash function, facilitates faster authentication, and enables an efficient framework for detecting potential duplicates among banking account holders.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_35-Adoption_of_Biometric_Fingerprint_Identification_as_an_Accessible.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Automatic Extraction, Classification, and Ranking of Product Aspects Based on Sentiment Analysis of Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061034</link>
        <id>10.14569/IJACSA.2015.061034</id>
        <doi>10.14569/IJACSA.2015.061034</doi>
        <lastModDate>2015-11-01T11:33:31.6470000+00:00</lastModDate>
        
        <creator>Muhammad Rafi</creator>
        
        <creator>Muhammad Rafay Farooq</creator>
        
        <creator>Usama Noman</creator>
        
        <creator>Abdul Rehman Farooq</creator>
        
        <creator>Umair Ali Khatri</creator>
        
        <subject>Aspect ranking; Product Aspect Ranking; Sentiment analysis; Sentiment lexicon</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>It is very common for a customer to read reviews about a product before making a final decision to buy it. Customers are always eager to get the best and most objective information about the product they wish to purchase, and reviews are the major source of this information. Although reviews are easily accessible from the web, most of them carry ambiguous opinions and differ in structure, so it is often very difficult for a customer to filter the information he actually needs. This paper suggests a framework that provides a single user-interface solution to this problem based on sentiment analysis of reviews. First, it extracts all the reviews from different websites with varying structures, and gathers information about relevant aspects of the product. Next, it performs sentiment analysis around those aspects and assigns them sentiment scores. Finally, it ranks all extracted aspects and clusters them into positive and negative classes. The final output is a graphical visualization of all positive and negative aspects, which provides the customer with easy, comparable, and visual information about the important aspects of the product. The experimental results on five different products carrying 5000 reviews show 78% accuracy. Moreover, the paper also explains the effect of Negation, Valence Shifter, and Diminisher with a sentiment lexicon on sentiment analysis, and concludes that they are all independent of the case problem and have no effect on the accuracy of sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_34-Study_of_Automatic_Extraction_Classification_and_Ranking_of_Product_Aspects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Poor Inhabitant Number Using Least Square and Moving Average Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061033</link>
        <id>10.14569/IJACSA.2015.061033</id>
        <doi>10.14569/IJACSA.2015.061033</doi>
        <lastModDate>2015-11-01T11:33:31.6170000+00:00</lastModDate>
        
        <creator>Ningrum Ekawati</creator>
        
        <subject>Poor Inhabitant; Prediction; Least Square; Moving Average</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The number of poor inhabitants in South Kalimantan decreased within the last three years compared with the previous years. The number of poor inhabitants differs from time to time. This dynamically changing number has been a problem for the local government in adopting proper policies to solve this matter. It is therefore necessary to predict the potential number of poor inhabitants in the next year as the basis for subsequent policy making. This research applies both the Least Square and Moving Average methods to compute prediction values. From the results of the study, the prediction analysis using these two methods is valid for predicting the number of poor inhabitants for the next period according to the data from previous years. Based on the study, using the data from the last seven years, the validity of Least Square was 98.35% and that of Moving Average was 98.79%.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_33-Prediction_of_Poor_Inhabitant_Number_Using_Least_Square.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Science Approach to Information-Like Artifacts as Exemplified by Memes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061032</link>
        <id>10.14569/IJACSA.2015.061032</id>
        <doi>10.14569/IJACSA.2015.061032</doi>
        <lastModDate>2015-11-01T11:33:31.6000000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>information; cultural evolution; mental structures; memes; memetics; conceptual modeling; diagrammatic representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Providing information can be expanded to include systems that deliver information-like artifacts. They provide such “things” as advertisements, propaganda pieces, and meme artifacts. Memes are the subject of extensive intellectual debate in science and popular culture because it is claimed that parallels can be drawn between theories of cultural evolution manifested in memes, and theories of biological evolution. Memes are described as self-reproducing mental structures, intangible entities transmitted from mind to mind, verbally or by repeated actions and/or imitation. The problem is that researchers describe memes in terms of English-language text or ad hoc diagrams. This paper considers the problem that the field of memetics lacks a uniform language for examining diverse conceptualizations of memes. The paper presents a unifying diagrammatic representation used in computer science, in which all types of “claimed” memes can be expressed and their general characteristics observed. Several examples from the literature on memes are recast in terms of this representation. The results point to the capability of the proposed depiction to express various types of memes.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_32-Computer_Science_Approach_to_Information_Like_Artifacts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Feature Selection on Phish Website Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061031</link>
        <id>10.14569/IJACSA.2015.061031</id>
        <doi>10.14569/IJACSA.2015.061031</doi>
        <lastModDate>2015-11-01T11:33:31.5870000+00:00</lastModDate>
        
        <creator>Hiba Zuhair</creator>
        
        <creator>Ali Selmat</creator>
        
        <creator>Mazleena Salleh</creator>
        
        <subject>phish website; phishing detection; feature selection; classification model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Recently, limited anti-phishing campaigns have given phishers more possibilities to bypass detection through their advanced deceptions. Moreover, failure to devise appropriate classification techniques to effectively identify these deceptions has degraded the detection of phishing websites. Consequently, exploiting as new, few, predictive, and effective a set of features as possible has emerged as a key challenge in keeping the detection resilient. Thus, some prior works have investigated and applied certain selected methods to develop their own classification techniques. However, no study has generally agreed on which feature selection method could be employed as the best assistant to enhance classification performance. Hence, this study empirically examined these methods and their effects on classification performance. Furthermore, it recommends some criteria to assess their outcomes and offers a contribution to the problem at hand. Hybrid features, low- and high-dimensional datasets, different feature selection methods, and classification models were examined in this study. As a result, the findings displayed notably improved detection precision with low latency, as well as noteworthy gains in robustness and predictiveness. Although selecting an ideal feature subset was a challenging task, the findings of this study provide the most advantageous feature subset possible for robust selection and effective classification in the phishing detection domain.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_31-The_Effect_of_Feature_Selection_on_Phish_Website_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Real-Time Face Motion Based Approach towards Modeling Socially Assistive Wireless Robot Control with Voice Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061030</link>
        <id>10.14569/IJACSA.2015.061030</id>
        <doi>10.14569/IJACSA.2015.061030</doi>
        <lastModDate>2015-11-01T11:33:31.5400000+00:00</lastModDate>
        
        <creator>Abhinaba Bhattacharjee</creator>
        
        <creator>Partha Das</creator>
        
        <creator>Debasish Kundu</creator>
        
        <creator>Sudipta Ghosh</creator>
        
        <creator>Sauvik Das Gupta</creator>
        
        <subject>Face detection; Skin tone segmentation; Voice Commands; Speech Recognition; Wi-Fi Protected Access (WPA); Arduino Wi-Fi Shield; iRobot Create; Surveillance systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The robotics domain has a couple of specific general design requirements that call for the close integration of planning, sensing, control, and modeling; the robot must also take into account the interactions between itself, its task, and its surrounding environment. Considering these fundamental configurations, the main motive is to design a system with user-friendly interfaces that can control embedded robotic systems by natural means. While earlier works have focused primarily on issues such as manipulation and navigation only, this proposal presents a conceptual and intuitive approach towards man-machine interaction in order to provide secured live biometric authorization of user access, while making an intelligent interaction with the control station to navigate advanced gesture-controlled wireless robotic prototypes or mobile surveillance systems along desired directions through required displacements. The intuitions are based on tracking real-time 3-dimensional face motions using skin tone segmentation and maximum-area considerations of segmented face-like blobs, or on directing the system with voice commands using real-time speech recognition. The system implementation requires designing a user interface to communicate wirelessly between the control station and the prototypes, either by accessing the internet over encrypted Wi-Fi Protected Access (WPA) via an HTML web page for communicating with face motions, or with the help of natural voice commands like “Trace 5 squares”, “Trace 10 triangles”, “Move 10 meters”, etc., evaluated on an iRobot Create over Bluetooth connectivity using a Bluetooth Access Module (BAM). Such an implementation can prove highly effective for designing systems that aid the elderly and the physically challenged.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_30-A_Real_Time_Face_Motion_Based_Approach_towards_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bayesian Approach to Service Selection for Secondary Users in Cognitive Radio Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061029</link>
        <id>10.14569/IJACSA.2015.061029</id>
        <doi>10.14569/IJACSA.2015.061029</doi>
        <lastModDate>2015-11-01T11:33:31.5230000+00:00</lastModDate>
        
        <creator>Elaheh Homayounvala</creator>
        
        <subject>Cognitive Radio; Service Selection; Bayesian Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>In cognitive radio networks, where secondary users (SUs) opportunistically use the time-frequency gaps of primary users&#39; (PUs) licensed spectrum, the experienced throughput of SUs depends not only on the traffic load of the PUs but also on the PUs&#39; service type. Each service has its own pattern of channel usage, and if the SUs know the dominant pattern of primary channel usage, they can make a better decision on which service to use at a specific time to take best advantage of the primary channel, in terms of higher achievable throughput. However, for practical reasons, it is difficult to directly inform SUs of the PUs&#39; dominant services in each area. This paper proposes a learning mechanism embedded in SUs that senses the primary channel for a specific length of time. Upon sensing a free primary channel, this algorithm recommends that the SU choose the best service in order to get the best performance, in terms of maximum achieved throughput and minimum experienced delay. The proposed learning mechanism is based on a Bayesian approach that can predict the performance of a requested service for a given SU. Simulation results show that this service selection method significantly outperforms blind opportunistic SU service selection.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_29-A_Bayesia_Approach_to_Service_Selection_for_Secondary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FRoTeMa: Fast and Robust Template Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061028</link>
        <id>10.14569/IJACSA.2015.061028</id>
        <doi>10.14569/IJACSA.2015.061028</doi>
        <lastModDate>2015-11-01T11:33:31.4770000+00:00</lastModDate>
        
        <creator>Abdullah M. Moussa</creator>
        
        <creator>M. I. Habib</creator>
        
        <creator>Rawya Y. Rizk</creator>
        
        <subject>template matching; pattern matching; brightness-contrast invariance; rotation invariance; scale invariance; sufficient condition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Template matching is one of the most basic techniques in computer vision, where the algorithm should search for a template image T in an image to analyze I. This paper considers the rotation, scale, brightness and contrast invariant grayscale template matching problem. The proposed algorithm uses a sufficient condition for distinguishing between candidate matching positions and other positions that cannot provide a better degree of match with respect to the current best candidate. Such condition is used to significantly accelerate the search process by skipping unsuitable search locations without sacrificing exhaustive accuracy. Our proposed algorithm is compared with eight existing state-of-the-art techniques. Theoretical analysis and experiments on eight image datasets show that the proposed simple algorithm can maintain exhaustive accuracy while providing a significant speedup.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_28-FRoTeMa_Fast_and_Robust_Template_Matching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SLA for E-Learning System Based on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061027</link>
        <id>10.14569/IJACSA.2015.061027</id>
        <doi>10.14569/IJACSA.2015.061027</doi>
        <lastModDate>2015-11-01T11:33:31.4600000+00:00</lastModDate>
        
        <creator>Doaa Elmatary</creator>
        
        <creator>Samy Abd El Hafeez</creator>
        
        <creator>Wael Awad</creator>
        
        <creator>Fatma Omara</creator>
        
        <subject>Cloud Computing; Service Level Agreement (SLA); E-learning System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The Service Level Agreement (SLA) has become an important issue, especially in Cloud Computing and online services that are based on the ‘pay-as-you-use’ fashion. Establishing Service Level Agreements (SLAs), which can be defined as a negotiation between the service provider and the user, is needed for many types of current applications, such as E-Learning systems. The work in this paper presents an idea for optimizing the SLA parameters to serve any E-Learning system over the Cloud Computing platform, defining the negotiation process, the suitable framework, and the sequence diagram to accommodate E-Learning systems.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_27-SLA_for_E_Learning_System_Based_on_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Storage Mechanism for High Performance Computing (HPC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061026</link>
        <id>10.14569/IJACSA.2015.061026</id>
        <doi>10.14569/IJACSA.2015.061026</doi>
        <lastModDate>2015-11-01T11:33:31.4430000+00:00</lastModDate>
        
        <creator>Fatima El Jamiy</creator>
        
        <creator>Abderrahmane Daif</creator>
        
        <creator>Mohamed Azouazi</creator>
        
        <creator>Abdelaziz Marzak</creator>
        
        <subject>Big data; High Performance Computing; Storage; Distributed File System; BlobSeer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Throughout the process of treating data on HPC systems, parallel file systems play a significant role. With more and more applications, the need for high-performance Input-Output is rising. Different possibilities exist: the General Parallel File System, cluster file systems, and the Parallel Virtual File System (PVFS) are the most important ones. However, these parallel file systems use less effective access patterns and models, such as POSIX semantics (a family of technical standards that emerged from a project to standardize programming interfaces for software designed to operate on variants of the UNIX operating system), which forces MPI-IO implementations to use inefficient lock-based techniques. To avoid the synchronization these techniques require, we show that the use of a versioning-based file system is much more effective.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_26-An_Effective_Storage_Mechanism_for_High_Performance_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automation and Validation of Annotation for Hindi Anaphora Resolution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061025</link>
        <id>10.14569/IJACSA.2015.061025</id>
        <doi>10.14569/IJACSA.2015.061025</doi>
        <lastModDate>2015-11-01T11:33:31.4130000+00:00</lastModDate>
        
        <creator>Pardeep Singh</creator>
        
        <creator>Kamlesh Dutta</creator>
        
        <subject>Annotation; natural language processing; demonstrative pronoun; semantic category; indirect anaphora; semiautomatic annotation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The process of labelling any language genre so that one can extract useful information from it is called annotation. It provides syntactic information about a word or a word phrase. In this paper, an effort has been made to provide an algorithm for the semiautomatic annotation of Hindi text to cater to anaphora resolution only. The study was conducted on twelve files of Ranchi Express available in the EMILLE corpus. The corpus is originally tagged for demonstrative pronouns. The detection of the pronouns is supported by the incorporation of seven tags. However, the semantic interpretation of the demonstrative pronouns is not supported in the original corpus. In this paper an effort has been made to automate the tagging process as well as the handling of semantic information through additional tags. The study was conducted on 1485 demonstrative pronouns. The average precision, recall and F-measure are 74, 71 and 72 respectively.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_25-Automation_and_Validation_of_Annotation_for_Hindi_Anaphora_Resolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Medical Image De-Noising Schemes using Wavelet Transform with Fixed form Thresholding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061024</link>
        <id>10.14569/IJACSA.2015.061024</id>
        <doi>10.14569/IJACSA.2015.061024</doi>
        <lastModDate>2015-11-01T11:33:31.3830000+00:00</lastModDate>
        
        <creator>Nadir Mustafa</creator>
        
        <creator>Jiang Ping Li</creator>
        
        <creator>Saeed Ahmed Khan</creator>
        
        <creator>Mohaned Giess</creator>
        
        <subject>Image De-noising System; GUI De-noised image; Code De-noised image; Wavelet transform; Soft and Hard Threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Medical imaging is currently a hot area for bio-medical engineers, researchers and medical doctors, as it is extensively used by health care institutes in diagnosing human health. Imaging equipment is used for better image processing and for highlighting important features. These images are affected by random noise during the acquisition, analysis and transmission processes. This results in blurry images visible at low contrast. An Image De-noising System (IDS) is used as a tool for removing image noise while preserving important data. Image de-noising is one of the most interesting research areas among researchers of technology giants and academic institutions. For Criminal Identification Systems (CIS) &amp; Magnetic Resonance Imaging (MRI), an IDS is especially beneficial in the field of medical imaging. This paper proposes an algorithm for de-noising medical images using different types of wavelet transform, such as Haar, Daubechies, Symlets and Bi-orthogonal. De-noised image quality has been evaluated using filter assessment parameters such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and Variance. It has been observed from the numerical results that the proposed algorithm reduces the mean square error and achieves the best value of PSNR. In this paper, the wavelet-based de-noising algorithm has been investigated on medical images along with thresholding.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_24-Medical_Image_De_Noising_Schemes_using_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Burden on Youth in Communicating with Elderly using Images Versus Photographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061023</link>
        <id>10.14569/IJACSA.2015.061023</id>
        <doi>10.14569/IJACSA.2015.061023</doi>
        <lastModDate>2015-11-01T11:33:31.3500000+00:00</lastModDate>
        
        <creator>Miyuki Iwamoto</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <creator>Kazunari Morimoto</creator>
        
        <subject>elderly; reminiscence videos; senior care home photographic image</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Conversation is a good preventative against behavioral problems in the elderly. However, caregivers are usually very busy tending to patients and lack the time to communicate extensively with them. Toward overcoming such problems, actively listening volunteers have more opportunities to communicate with the elderly, but the number of skilled volunteers is limited. Therefore, we investigated conversational support systems for inexperienced volunteers; such systems usually include content such as photographs, videos, and music. We expected that the volunteers would feel less stress when using videos instead of photographs for conversational support, because the former provide both volunteers and patients with richer information than the latter. On the other hand, photographs give patients more chances to talk with volunteers. However, there has been no research to date on the effect of content type upon stress and conversational quality. In this paper, we compared using photographs with using videos from these viewpoints.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_23-Comparison_of_Burden_on_Youth_in_Communicating_with_Elderly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Realization of Mongolian Syntactic Retrieval System Based on Dependency Treebank</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061022</link>
        <id>10.14569/IJACSA.2015.061022</id>
        <doi>10.14569/IJACSA.2015.061022</doi>
        <lastModDate>2015-11-01T11:33:31.3370000+00:00</lastModDate>
        
        <creator>S. Loglo</creator>
        
        <creator>Sarula</creator>
        
        <subject>Mongolian Language; Dependency Grammar; Dependency Treebank; Syntactic Retrieval; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Over the past seven years, the Language Research Institute of Inner Mongolia University has constructed a 500,000-word-scale Mongolian dependency treebank. This syntactic treebank provides a favorable data platform for language research and information processing. In order to use the treebank effectively, we have designed and implemented a graphical syntactic information retrieval system based on the Mongolian dependency treebank. As an application system, this retrieval system offers search and statistical analysis at the word, phrase, syntactic fragment and syntactic structure levels.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_22-Design_and_Realization_of_Mongolian_Syntactic_Retrieval_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing User&#39;s Utility from Cloud Computing Services in a Networked Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061021</link>
        <id>10.14569/IJACSA.2015.061021</id>
        <doi>10.14569/IJACSA.2015.061021</doi>
        <lastModDate>2015-11-01T11:33:31.3030000+00:00</lastModDate>
        
        <creator>Eli WEINTRAUB</creator>
        
        <creator>Yuval COHEN</creator>
        
        <subject>Utility Optimization; Cloud Computing; Consumer preferences; Conjoint Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Cloud Computing customers are looking for the best utility for their money. Research shows that functional aspects are considered more important than service prices in customer buying decisions. Choosing the best service provider can be complicated, since each provider may sell three kinds of services organized in three layers: SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). This research targets the problem of optimizing consumers&#39; utility using conjoint analysis methodology. Providers currently offer software services as bundles belonging to the same layer or to underlying layers. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying layers. This research assumes that a free competitive market will exist in the future, in which consumers will be free to switch their services to different providers, eliminating the negative biases of bundling from their buying decisions. This research proposes a mathematical model and three possible strategies for implementation in organizations, and illustrates its advantages compared to existing utility maximization practices. The current conjoint analysis method chooses the best utility in a traditional cloud architecture in which one provider offers a bundle of all three layers. The proposed model assumes a networked cloud architecture in which a customer may choose services from any provider, assembling the basket of services that maximizes his/her total utility. This research outlines three business models which will assist organizations in shifting gradually from the current CC architecture to future networked architectures, thus maximizing their utility.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_21-Optimizing_Users_Utility_from_Cloud_Computing_Services.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Calibration and Evaluation of Multi-Camera Robotic Head</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061020</link>
        <id>10.14569/IJACSA.2015.061020</id>
        <doi>10.14569/IJACSA.2015.061020</doi>
        <lastModDate>2015-11-01T11:33:31.2900000+00:00</lastModDate>
        
        <creator>Petra Kocmanova</creator>
        
        <creator>Ludek Zalud</creator>
        
        <subject>calibration; camera; mobile robot; thermal imaging; data-fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The paper deals with the appropriate calibration of multispectral vision systems and the evaluation of calibration and data-fusion quality in real-world indoor and outdoor conditions. The checkerboard calibration pattern developed by our team for multispectral calibration of intrinsic and extrinsic parameters is described in detail, as is the circular object used for multispectral fusion evaluation. These objects were used by our team for the calibration and evaluation of the advanced visual system of the Orpheus-X3 robot, which serves as a demonstrator; however, their use is much wider, and the authors suggest using them as a testbed for the visual measurement systems of mobile robots. To make the calibration easy and straightforward, the authors developed the MultiSensCalib program in Matlab, containing all the described techniques. The software is publicly available, including source code and testing images.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_20-Effective_Calibration_and_Evaluation_of_Multi_Camera_Robotic_Head.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of ANFIS Estimator of Permanent Magnet Brushless DC Motor Position for PV Pumping System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061019</link>
        <id>10.14569/IJACSA.2015.061019</id>
        <doi>10.14569/IJACSA.2015.061019</doi>
        <lastModDate>2015-11-01T11:33:31.2570000+00:00</lastModDate>
        
        <creator>TERKI Amel</creator>
        
        <creator>MOUSSI Ammar</creator>
        
        <creator>TERKI Nadjiba</creator>
        
        <subject>Photovoltaic system; Brushless DC motor; ANFIS estimator; Speed controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>This paper presents a new scheme for PMBLDC (permanent magnet brushless direct current) rotor position estimation based on an ANFIS (adaptive network fuzzy inference system) estimator. The operation of such a motor requires accurate knowledge of the rotor position. However, most rotor position sensors produce undesirable effects such as mechanical losses and have other disadvantages. To overcome these disadvantages, a sensorless scheme offers great advantages. This work presents an ANFIS estimator design. Combining the adaptive capability of the neural network with the reasoning ability of fuzzy logic in ANFIS modeling results in a fast-responding and flexible model. This procedure is perfectly adapted to complex systems such as PV pumping systems.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_19-Design_of_ANFIS_Estimator_of_Permanent_Magnet_Brushless.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power and Contention Control Scheme: As a Good Candidate for Interference Modeling in Cognitive Radio Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061018</link>
        <id>10.14569/IJACSA.2015.061018</id>
        <doi>10.14569/IJACSA.2015.061018</doi>
        <lastModDate>2015-11-01T11:33:31.2270000+00:00</lastModDate>
        
        <creator>Ireyuwa E. Igbinosa</creator>
        
        <creator>Olutayo O. Oyerinde</creator>
        
        <creator>Viranjay M. Srivastava</creator>
        
        <creator>Stanley H. Mneney</creator>
        
        <subject>Aggregate interference; cognitive radio; interference management; interference modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Due to the ever-growing need for spectrum, the cognitive radio (CR) has been proposed to improve radio spectrum utilization. In this scenario, secondary users (SU) are permitted to share spectrum with the licensed primary users (PU) under the strict condition that they do not cause harmful interference. In this work, we propose an interference model for a cognitive radio network that utilizes power or contention control interference management schemes. We derived the probability density function (PDF) of the interference under the power control scheme, in which the transmission power of each CR transmitter is guided by a power control law, and under the contention control scheme, in which all CR transmitters use a fixed transmission power governed by a contention control protocol. This protocol decides which CR transmitter can transmit at any point in time. In this work, we have shown that power and contention control schemes are good candidates for interference modeling in a cognitive radio system. The impact of the unknown location of the primary receiver on the interference generated by the CR transmitters was investigated, and the results show that hidden primary receivers lead to higher CR-primary interference, reflected in a higher mean and variance. Finally, the presented results show that the power control and contention control schemes are good candidates for reducing the interference generated by the cognitive radio network.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_18-Power_and_Contention_Control_Scheme_As_a_Good_Candidate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Subject Learning Based on Topic Detection and Canonical Correlation Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061017</link>
        <id>10.14569/IJACSA.2015.061017</id>
        <doi>10.14569/IJACSA.2015.061017</doi>
        <lastModDate>2015-11-01T11:33:31.1500000+00:00</lastModDate>
        
        <creator>Zhangzu SHI</creator>
        
        <creator>Steve K. SHI</creator>
        
        <creator>Lucy L. SHI</creator>
        
        <subject>Topic Detection; Canonical Correlation Analysis; Personalized Education; Subject Learning; Multimodality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>To keep pace with the times, learning from printed media alone is no longer a comprehensive approach. Fresh digital content can definitely complement the printed education medium. Although timely access to fresh content is becoming increasingly important for education, and gaining such access is no longer a problem, the capacity of human teachers to assimilate such huge amounts of content is limited. Topic Detection (TD) is a promising research area that addresses speedy access to desired content based on topic or subject. On the other hand, personalized education is gaining more attention because it facilitates the improvement of students&#39; creativity and subject learning. This paper describes a patented Personalized Subject Learning (PSL) system that caters to the need for personalized education and efficiently provides subject-based content. An efficient topic detection algorithm for providing subject content is presented. Moreover, since education content is multimodal multimedia content, PSL introduces the Canonical Correlation Analysis (CCA) method to detect multimodal correlations across different types of media. Due to its novelty, PSL has been used as the key engine in a real-world personalized education application, namely the smart education module sponsored by a Smart City project.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_17-Personalized_Subject_Learning_Based_on_Topic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Arabchat: An Arabic Mobile-Based Conversational Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061016</link>
        <id>10.14569/IJACSA.2015.061016</id>
        <doi>10.14569/IJACSA.2015.061016</doi>
        <lastModDate>2015-11-01T11:33:31.1330000+00:00</lastModDate>
        
        <creator>Mohammad Hijjawi</creator>
        
        <creator>Hazem Qattous</creator>
        
        <creator>Omar Alsheiksalem</creator>
        
        <subject>Conversational Agent; Mobile; ArabChat; Chatterbot and Arabic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The automation/simulation of conversation between a user and a machine has evolved in recent years. A number of research-based systems known as conversational agents have been developed to address this challenge. A conversational agent is a program that attempts to simulate conversations between a human and a machine. Few of these programs have targeted mobile-based users to handle conversations between them and a mobile device through an embodied spoken character. Wireless communication has expanded rapidly with the growth of mobile services. Therefore, this paper proposes and develops a framework for a mobile-based conversational agent called Mobile ArabChat to handle Arabic conversations between Arab users and a mobile device. To the best of our knowledge, no existing applications address this challenge for Arab mobile-based users. An Android-based application was developed in this paper, and it has been tested and evaluated in a large real-world environment. Evaluation results show that Mobile ArabChat works properly and that there is a need for such a system for Arab users.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_16-Mobile_Arabchat_An_Arabic_Mobile_Based_Conversational_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Socket Based on Intelligent Control and Energy Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061015</link>
        <id>10.14569/IJACSA.2015.061015</id>
        <doi>10.14569/IJACSA.2015.061015</doi>
        <lastModDate>2015-11-01T11:33:31.0700000+00:00</lastModDate>
        
        <creator>Jiang Feng</creator>
        
        <creator>Wu Fei</creator>
        
        <creator>Dai Jian</creator>
        
        <creator>Zou Yan</creator>
        
        <subject>Internet of things; Smart home; Intelligent electrical outlet; Wireless communication; Power statistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The smart home is one of the main applications of the Internet of Things, and it will realize the intellectualization of the household. The smart socket is part of the smart home; it can be remotely controlled to switch its power supply, monitor utilization conditions, communicate over a network, and perform other functions. This article mainly introduces each hardware module of the intelligent electrical outlet; the software part mainly analyzes the socket’s communication mechanism and the statistics of collected power consumption, which are fed back over wireless communication and presented through diagrams. In an Internet-of-Things environment, communication between the user and the smart power outlet provides timely feedback to the user, so as to achieve energy-saving purposes.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_15-Design_of_Socket_Based_on_Intelligent_Control_and_Energy_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lempel - Ziv Implementation for a Compression System Model with Sliding Window Buffer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061014</link>
        <id>10.14569/IJACSA.2015.061014</id>
        <doi>10.14569/IJACSA.2015.061014</doi>
        <lastModDate>2015-11-01T11:33:31.0230000+00:00</lastModDate>
        
        <creator>Ahmad AbdulQadir AlRababah</creator>
        
        <subject>Digital signal processing; FPGA; RAM; dual-port RAM; token; literal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>A compression system architecture based on the Lempel-Ziv algorithm with a sliding-window history buffer is proposed; this architecture may be realized on an FPGA and can handle input data streams from multiple sources with context switching. Basic requirements for the compression system and the compression system architecture are formulated. The compression system architecture should allow quick reconstruction, so that another system with different technical characteristics and architectural features (such as reconfigurable-system architecture features) can be built on the given architectural base. Digital signal processing may comprise linear or non-linear procedures. Non-linear signal processing is strictly associated with non-linear system behavior and can be applied in the time, frequency, and spatio-temporal domains.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_14-Lempel_Ziv_Implementation_for_a_Compression_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Domain Decomposition for 1-D Active Thermal Control Problem with PVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061013</link>
        <id>10.14569/IJACSA.2015.061013</id>
        <doi>10.14569/IJACSA.2015.061013</doi>
        <lastModDate>2015-11-01T11:33:30.9770000+00:00</lastModDate>
        
        <creator>Simon Uzezi Ewedafe</creator>
        
        <creator>Rio Hirowati Shariffudin</creator>
        
        <subject>1-D ATCP; Stationary Techniques; SPMD; DD; PVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>This paper describes a 1-D Active Thermal Control Problem (1-D ATCP) solved with Stationary Iterative Techniques (Jacobi and Gauss-Seidel) applied to the matrices resulting from the discretization. Parallelization of the problem is carried out using a Domain Decomposition (DD) parallel communication approach with Parallel Virtual Machine (PVM), to enable better flexibility in parallel execution and greater ease of parallel implementation across different domain block sizes. We describe the parallelization of the method using the Single Program Multiple Data (SPMD) technique. The 1-D ATCP is implemented on a parallel cluster (Geo Cluster), with the ability to exploit the inherent parallelism of the computation. The parallelization and performance strategies are discussed, and results of the parallel experiments are presented.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_13-Parallel_Domain_Decomposition_for_1_D_Active_Thermal_Control_Problem_with_PVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Management and Governance: Adapting IT Outsourcing to External Provision  of Cloud-Based IT Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061012</link>
        <id>10.14569/IJACSA.2015.061012</id>
        <doi>10.14569/IJACSA.2015.061012</doi>
        <lastModDate>2015-11-01T11:33:30.9600000+00:00</lastModDate>
        
        <creator>Dr. Victoriano Valencia Garc&#237;a</creator>
        
        <creator>Dr. Eugenio J. Fern&#225;ndez Vicente</creator>
        
        <creator>Dr. Luis Usero Aragon&#233;s</creator>
        
        <subject>Cloud computing; IT governance; IT management; Outsourcing; IT service; Maturity model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Outsourcing is a strategic option which complements IT services provided internally in organizations. The maturity model for IT service outsourcing (henceforth MM-2GES) is a new holistic maturity model based on standards ISO/IEC 20000 and ISO/IEC 38500, and the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing. MM-2GES allows independent validation, practical application, and an effective transition to a model of good governance and management of outsourced IT services.
Cloud computing is a new model for provisioning and consuming IT services on an on-demand, pay-per-use basis. This model allows IT systems to be more agile and flexible. The external provision of cloud-based services appears as an evolution of traditional outsourcing, driven by the emerging technologies related to the provision of IT services. As a result of these technological developments, traditional outsourcing and the external provision of cloud-based services share common characteristics, but there are also some differences.
This paper adapts MM-2GES to the external provision of cloud-based services from the point of view of the customer. In this way, the model can be applied in organizations that use both traditional IT outsourcing and externally provided cloud-based services, in order to achieve excellence in the governance and management of all kinds of IT services provided externally to organizations.
</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_12-Cloud_Management_and_Governance_Adapting_IT_Outsourcing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Algorithm for the Optimization of Training Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061011</link>
        <id>10.14569/IJACSA.2015.061011</id>
        <doi>10.14569/IJACSA.2015.061011</doi>
        <lastModDate>2015-11-01T11:33:30.9470000+00:00</lastModDate>
        
        <creator>Hayder M. Albeahdili</creator>
        
        <creator>Tony Han</creator>
        
        <creator>Naz E. Islam</creator>
        
        <subject>Convolutional Neural Network; Particle Swarm optimization; Image Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Training optimization and efficient, fast classification are vital elements in the development of a convolutional neural network (CNN). Although stochastic gradient descent (SGD) is a prevalent algorithm used by many researchers for optimizing CNN training, it has significant limitations. In this paper, we endeavor to tackle the drawbacks inherent in SGD by proposing an alternative algorithm for CNN training optimization. A hybrid of the genetic algorithm (GA) and particle swarm optimization (PSO) is deployed in this work. In addition to SGD, PSO and the genetic algorithm (PSO-GA) are incorporated as a combined and efficient mechanism for achieving non-trivial solutions. The proposed unified method achieves state-of-the-art classification results on challenging benchmark datasets such as MNIST, CIFAR-10, and SVHN. Experimental results show that the method outperforms most contemporary approaches.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_11-Hybrid_Algorithm_for_the_Optimization_of_Training_Convolutional_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learners’ Attitudes Towards Extended-Blended Learning Experience Based on the S2P Learning Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061010</link>
        <id>10.14569/IJACSA.2015.061010</id>
        <doi>10.14569/IJACSA.2015.061010</doi>
        <lastModDate>2015-11-01T11:33:30.9000000+00:00</lastModDate>
        
        <creator>Salah Eddine BAHJI</creator>
        
        <creator>Jamila EL ALAMI</creator>
        
        <creator>Youssef LEFDAOUI</creator>
        
        <subject>learning model; S2P learning model; blended learning; extended-blended learning; learning experience; learners’ motivation; gamification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Within the Moroccan context, Higher Education Institutions have realized the importance of integrating information technologies into the formal learning curriculum. However, the risk of demotivation remains large in tertiary education, even with the support of these new technologies. It is therefore essential to consistently maintain learners’ motivation, which must start from the design phase by adopting real motivational strategies. Blended Learning addresses the issue of the quality of teaching and learning, thus offering some answers to the learners’ motivation issue. We therefore try to extend the dimensions of Blended Learning into an “Extended Blended Learning (Ex-BL)”, according to the S2P Learning Model designed as an integration model, arguing that knowledge and learning tools are nowadays available everywhere. This integration of educational resources takes into consideration various components: face-to-face/online learning, text-based/game-based/media-based learning, gamification, and Open Educational Resources. This paper investigates learners’ perceptions of this instructional design, including their perceptions of learning effectiveness and its impact on their motivation during the learning experience. This investigation focused on two main points: the “observation of learners’ behavior”, especially during online activities, as a way to gauge the degree of motivation and engagement; and the “evaluation of the learning experience” through a survey covering the appreciation of the instructional design, the degree of satisfaction, the students’ motivation, the online platform, and the extension of Ex-BL elements and their impact on learners’ motivation.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_10-Learners_Attitudes_Towards_Extended_Blended_Learning_Experience.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying three Communities of Assam Based on Anthropometric Characteristics using R Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061009</link>
        <id>10.14569/IJACSA.2015.061009</id>
        <doi>10.14569/IJACSA.2015.061009</doi>
        <lastModDate>2015-11-01T11:33:30.8530000+00:00</lastModDate>
        
        <creator>Sadiq Hussain</creator>
        
        <creator>Dali Dutta</creator>
        
        <creator>Runjun Patir</creator>
        
        <creator>Prof. Sarthak Sengupta</creator>
        
        <creator>Prof. Jiten Hazarika</creator>
        
        <creator>Prof. G.C. Hazarika</creator>
        
        <subject>Data Mining; Classification; R Programming; Logistic Regression; Cochran Mantel Haenszel Test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The study of the anthropometric characteristics of different communities plays an important role in design, ergonomics and architecture, as changes in lifestyle, nutrition and the ethnic composition of communities can lead to problems such as the obesity epidemic. The authors performed two experiments. In the first experiment, the authors classified three communities of Assam, India based on anthropometric characteristics using R programming, and mined out the statistically significant anthropometric characteristics among the Chutia, Mising and Deori communities of Assam. In the second experiment, the authors performed the Cochran-Mantel-Haenszel test to find the association between the communities and BMI-based nutritional status, stratified by the age of the people studied.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_9-Classifying_three_Communities_of_Assam_Based_on_Anthropometric_Characteristics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big-Learn: Towards a Tool Based on Big Data to Improve Research in an E-Learning Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061008</link>
        <id>10.14569/IJACSA.2015.061008</id>
        <doi>10.14569/IJACSA.2015.061008</doi>
        <lastModDate>2015-11-01T11:33:30.8370000+00:00</lastModDate>
        
        <creator>Karim Aoulad Abdelouarit</creator>
        
        <creator>Boubker Sbihi</creator>
        
        <creator>Noura Aknin</creator>
        
        <subject>big data; e-learning; data structuring; learning; digital pedagogy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>In the area of data management for information systems, and especially at the level of e-learning platforms, the Big Data phenomenon makes data difficult to handle with standard database or information management tools. Indeed, for educational purposes, and especially in distance training or online research, the learner using the e-learning platform faces a heterogeneous set of data such as files of all kinds, curves, course materials, quizzes, etc. This requires a specialized fusion system to combine the variety of data and improve performance, robustness, flexibility, consistency and scalability, so as to provide the best result to the learner, the user of the e-learning platform. In this context, we propose to develop a tool called &quot;Big-Learn&quot;, based on a technique that integrates structured and unstructured data in a single data layer, in order to provide more relevant search access with adequate and consistent results matching the learner&#39;s expectations. The methodology adopted consists initially of a quantitative and qualitative study of the variety of data and their typology, followed by a detailed analysis of the structure and harmonization of the data, to finally arrive at a functional model for their treatment. This conceptual work is crowned with a working prototype of the tool, implemented with UML and Java technology.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_8-Big_Learn_Towards_a_Tool_Based_on_Big_Data_to_Improve_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Common Radio Resource Management Algorithms in Heterogeneous Wireless Networks with KPI Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061007</link>
        <id>10.14569/IJACSA.2015.061007</id>
        <doi>10.14569/IJACSA.2015.061007</doi>
        <lastModDate>2015-11-01T11:33:30.7900000+00:00</lastModDate>
        
        <creator>Saed Tarapiah</creator>
        
        <creator>Kahtan Aziz</creator>
        
        <creator>Shadi Atalla</creator>
        
        <subject>heterogeneous wireless networks; Radio Resource Management (RRM); radio access technology (RAT)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The rapid increase in the number of devices equipped for personal wireless communication boosts user service demands on wireless networks. Thus, spectrum resource management in such networks will become an important topic in the near future. Moreover, users are typically equipped with multiple wireless interfaces, so the access operational scenario is no longer based on a single Radio Access Technology (RAT). In this work, we study heterogeneous wireless communication scenarios as a joint cooperative management of different RATs, through which network providers can satisfy as wide a variety of user service demands as possible in a more efficient manner by exploiting their varying characteristics and properties. To achieve this objective, Common Radio Resource Management (CRRM) algorithms and techniques are proposed and designed to efficiently manage and optimize the radio resources in heterogeneous wireless networks. In this context, this work studies and analyzes several common radio resource management techniques that efficiently distribute traffic among the available radio access technologies while providing adequate quality of service levels under heterogeneous traffic scenarios. The most interesting algorithms are critically analyzed, and in-depth investigations, with attention to implementation and techno-economic issues, are then performed on some of the identified CRRM algorithms.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_7-Common_Radio_Resource_Management_Algorithms_in_Heterogeneous.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Approach to Mitigate Ddos Attacks in Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061006</link>
        <id>10.14569/IJACSA.2015.061006</id>
        <doi>10.14569/IJACSA.2015.061006</doi>
        <lastModDate>2015-11-01T11:33:30.7730000+00:00</lastModDate>
        
        <creator>Baldev Singh</creator>
        
        <creator>S.N. Panda</creator>
        
        <subject>DDOS attack; Intrusion detection; Threshold; Cloud; virtual machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Distributed denial of service (DDOS) attacks constitute one of the most prominent cyber threats and are among the hardest security problems in the modern cyber world. This research work focuses on reviewing DDOS detection techniques and developing a numerically stable theoretical framework for detecting various DDOS attacks in the cloud. The main sections of the paper are devoted to the review and analysis of algorithms used for the detection of DDOS attacks. The framework theorized here combines a variability calculation method with sampling and searching methods to find the current state of a particular parameter under observation for detecting DDOS attacks. In this way, a solution is built that measures performance and drives the monitoring framework to capture adversity related to DDOS attacks. The described algorithm captures the current context values of the parameters that determine the reliability of the detection algorithm, while the online pass algorithm maintains the variability of the collected values, thus preserving numerical stability by performing robust statistical operations at the traffic endpoints of a cloud-based network.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_6-An_Adaptive_Approach_to_Mitigate_Ddos_Attacks_in_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comprehensive Centralized-Data Warehouse for Managing Malaria Cases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061005</link>
        <id>10.14569/IJACSA.2015.061005</id>
        <doi>10.14569/IJACSA.2015.061005</doi>
        <lastModDate>2015-11-01T11:33:30.7430000+00:00</lastModDate>
        
        <creator>Nova Eka Diana</creator>
        
        <creator>Aan Kardiana</creator>
        
        <subject>malaria case; centralized data warehouse; galaxy scheme; ETL; timely report</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Tanah Bumbu is one of the areas of Indonesia most endemic for malaria. Currently, available malaria case data are stored in disparate sources. Hence, it is difficult for the public health department to quickly and easily gather the information needed to determine strategic actions for tackling these cases. The purpose of this research is to build a data warehouse that integrates all malaria cases from the disparate sources. This malaria data warehouse is a centralized architecture with a galaxy (constellation) schema that consists of three fact tables and 13 dimension tables. SQL Server Integration Services (SSIS) is used to build ETL packages that load data from the various sources into the staging, dimension, and fact tables of the malaria data warehouse. Finally, timely reports can be generated by extracting the salient information held in the malaria data warehouse.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_5-Comprehensive_Centralized_Data_Warehouse_for_Managing_Malaria_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Bees Algorithm for Real Parameter Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061004</link>
        <id>10.14569/IJACSA.2015.061004</id>
        <doi>10.14569/IJACSA.2015.061004</doi>
        <lastModDate>2015-11-01T11:33:30.6970000+00:00</lastModDate>
        
        <creator>Wasim A. Hussein</creator>
        
        <creator>Shahnorbanun Sahran</creator>
        
        <creator>Siti Norul Huda Sheikh Abdullah</creator>
        
        <subject>Bees algorithm; Population initialization; Local search; Global search; Levy flight; Patch environment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>The Bees Algorithm (BA) is a swarm-based search algorithm inspired by the foraging behavior of a swarm of honeybees. BA can be divided into four parts: parameter tuning, initialization, local search, and global search. Recently, a BA based on the Patch-Levy-based Initialization Algorithm (PLIA-BA) has been proposed. However, initialization is only a first step, and improving it alone is not enough for more challenging problem classes with different properties. The local and global search capabilities also need to be enhanced to improve the quality of the final solution and the convergence speed of PLIA-BA on such problems. Consequently, in this paper, a new local search algorithm based on Levy looping flights is adopted. Moreover, the global search mechanism is enhanced to be closer to nature, building on the patch-Levy model adopted in the initialization algorithm (PLIA). The improvements to the local and global search parts are incorporated into PLIA-BA to devise a new version of BA called the Patch-Levy-based Bees Algorithm (PLBA). We investigate the performance of the proposed PLBA on a set of challenging benchmark functions. The experimental results indicate that PLBA significantly outperforms the other BA variants, including PLIA-BA, and produces results comparable with other state-of-the-art algorithms.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_4-An_Improved_Bees_Algorithm_for_Real_Parameter_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SOHO: Information Security Awareness in the Aspect of Contingency Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061003</link>
        <id>10.14569/IJACSA.2015.061003</id>
        <doi>10.14569/IJACSA.2015.061003</doi>
        <lastModDate>2015-11-01T11:33:30.6500000+00:00</lastModDate>
        
        <creator>Jason Maurer</creator>
        
        <creator>Brandon Clark</creator>
        
        <creator>Young B. Choi</creator>
        
        <subject>SOHO; Information Security; Contingency Planning; Small Office; Home Office</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>This paper takes general security awareness information for home and small business owners and makes it understandable and accessible by looking at practical, current methods for keeping valuable information available after an incident or disaster. The paper first reviews selected general security awareness information, then examines some aspects of contingency planning along with basic practical techniques for protecting systems and information from complete loss after an incident. Finally, the groundwork for implementing an individualized plan for a small business office or home office is laid, and some practical steps are recommended.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_3-SOHO_Information_Security_Awareness_in_the_Aspect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Network-Aware Composition of Big Data Services in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061002</link>
        <id>10.14569/IJACSA.2015.061002</id>
        <doi>10.14569/IJACSA.2015.061002</doi>
        <lastModDate>2015-11-01T11:33:30.6170000+00:00</lastModDate>
        
        <creator>Umar SHEHU</creator>
        
        <creator>Ghazanfar SAFDAR</creator>
        
        <creator>Gregory EPIPHANIOU</creator>
        
        <subject>Big data; Service composition; QoS; Genetic Algorithm; Network latency; Cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>Several Big data services have been developed on the cloud to meet the increasingly complex needs of users. Often a single Big data service is not capable of satisfying a user request. As a result, it has become necessary to aggregate services from different Big data providers in order to execute the user&#39;s request. This in turn poses a great challenge: how to optimally compose services from a given set of Big data providers while maintaining, if not optimizing, Quality of Service (QoS). With the advent of cloud-based Big data applications composed of services spread across different network environments, the QoS of the network has become important in determining the true performance of composite services. However, current studies fail to consider the impact of network QoS on composite service selection. Therefore, a novel network-aware genetic algorithm is proposed to perform composition of Big data services in the cloud. The algorithm adopts an extended QoS model that separates network QoS from service QoS. It also uses a novel network coordinate system to find composite services that have low network latency without compromising service QoS. Evaluation results indicate that the proposed approach finds low-latency, QoS-optimal compositions when compared with current approaches.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_2-Towards_Network_Aware_Composition_of_Big_Data_Services_in_the_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Assessing Privacy of Internet Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.061001</link>
        <id>10.14569/IJACSA.2015.061001</id>
        <doi>10.14569/IJACSA.2015.061001</doi>
        <lastModDate>2015-11-01T11:33:30.4770000+00:00</lastModDate>
        
        <creator>James PH Coleman</creator>
        
        <subject>privacy; privacy compliance; risk; data protection; privacy impact assessments; internet applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(10), 2015</description>
        <description>This paper presents a new framework for assessing and documenting the privacy risks associated with developing and managing internet applications. The Framework for Assessing Privacy of Internet Applications (FAPIA) provides a tool to aid the analysis of privacy risks and a structured means of analyzing those risks and documenting a control system to ensure compliance with data protection and privacy legislation in a range of different countries.</description>
        <description>http://thesai.org/Downloads/Volume6No10/Paper_1-Framework_for_Assessing_Privacy_of_Internet_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of K-Means Algorithm for Efficient Customer Segmentation: A Strategy for Targeted Customer Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041007</link>
        <id>10.14569/IJARAI.2015.041007</id>
        <doi>10.14569/IJARAI.2015.041007</doi>
        <lastModDate>2015-10-11T06:39:57.1670000+00:00</lastModDate>
        
        <creator>Chinedu Pascal Ezenkwu</creator>
        
        <creator>Simeon Ozuomba</creator>
        
        <creator>Constance kalu</creator>
        
        <subject>machine learning; data mining; big data; customer segmentation; MATLAB; k-Means algorithm; customer service; clustering; extrapolation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>The emergence of many business competitors has engendered severe rivalry among competing businesses in gaining new customers and retaining old ones. Consequently, the need for exceptional customer service becomes pertinent, regardless of the size of the business. Furthermore, the ability of any business to understand the needs of each of its customers will give it greater leverage in providing targeted customer services and developing customised marketing programs for those customers. This understanding is made possible through systematic customer segmentation, where each segment comprises customers who share similar market characteristics. The ideas of Big data and machine learning have fuelled a terrific adoption of automated approaches to customer segmentation in preference to traditional market analyses, which are often inefficient, especially when the number of customers is very large. In this paper, the k-Means clustering algorithm is applied for this purpose. A MATLAB program implementing the k-Means algorithm was developed (available in the appendix) and trained using a z-score normalised two-feature dataset of 100 training patterns acquired from a retail business. The features are the average amount of goods purchased by a customer per month and the average number of customer visits per month. From the dataset, four customer clusters or segments were identified with 95% accuracy and labeled: High-Buyers-Regular-Visitors (HBRV), High-Buyers-Irregular-Visitors (HBIV), Low-Buyers-Regular-Visitors (LBRV) and Low-Buyers-Irregular-Visitors (LBIV).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_7-Application_of_K_Means_Algorithm_for_Efficient_Customer_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Similar Wooden Surfaces with a Hierarchical Neural Network Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041006</link>
        <id>10.14569/IJARAI.2015.041006</id>
        <doi>10.14569/IJARAI.2015.041006</doi>
        <lastModDate>2015-10-11T06:39:57.1370000+00:00</lastModDate>
        
        <creator>Irina Topalova</creator>
        
        <subject>recognition; preprocessing; neural network; wooden surface</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>The surface quality assurance check is an important task in the industrial production of wooden parts. Many automated systems apply different methods for the preprocessing and recognition/classification of surface textures, but in most cases these methods cannot achieve very high recognition accuracy. This paper proposes a method for the effective recognition of similar wooden surfaces using simple preprocessing, recognition, and classification stages. The method is based on simultaneously training two different neural networks with surface image histograms and their second derivatives. The combined outputs of these networks form the input training set for a third neural network that makes the final decision. The proposed method is tested on image samples of seven similar wooden textures and shows high recognition accuracy. The results are analyzed and discussed, and further research tasks are proposed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_6-Recognition_of_Similar_Wooden_Surfaces_with_a_Hierarchical_Neural_Network_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Text Reading System for Digital Open Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041005</link>
        <id>10.14569/IJARAI.2015.041005</id>
        <doi>10.14569/IJARAI.2015.041005</doi>
        <lastModDate>2015-10-11T06:39:57.1070000+00:00</lastModDate>
        
        <creator>Mahamadou ISSOUFOU TIADO</creator>
        
        <creator>Abdou IDRISSA</creator>
        
        <creator>Karimou DJIBO</creator>
        
        <subject>m-learning; distance learning; digital open universities; cloud-computing; audio warehouse</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>The New Generation of Digital Open Universities (DOUNG) is a recently proposed model that uses m-learning with a cloud computing option and is based on an integrated architecture built over open networks such as GSM and the Internet. In pursuing the ubiquity promised by m-learning, the large number of languages is a serious issue: many teachers are needed to repeat the same course in the various languages. In this paper, an extended system is proposed that takes into account the low capacity of the cell-phone device in terms of computing and visualization. The model exploits the possibility of building a voice warehouse that can be used to generate an audio version of every course provided in text format in a particular language. The Advanced Text Reading System (ATRS) is proposed to use this voice warehouse and produce the audio format of a course, making it easy for teachers to overcome the constraints of the language barrier. The newly proposed model is described and its contributions are discussed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_5-Improved_Text_Reading_System_for_Digital_Open_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Design of a Multi-Agent Smart E-Examiner</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041004</link>
        <id>10.14569/IJARAI.2015.041004</id>
        <doi>10.14569/IJARAI.2015.041004</doi>
        <lastModDate>2015-10-11T06:39:57.0130000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>m-Learning; e-Assessments; Multi-agent; Semantic net; Examiner</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>This paper proposes the design of an application of multi-agent technology over a semantic net knowledge base to build a smart e-examiner system. This e-examiner can be used to build and grade a personalized on-line e-assessment. The produced e-assessment should cover the majority of the examined topics and material, various levels of difficulty, and the learner profiles. The e-examiner uses a semantic net question bank to emphasize the structuring categories of all course domains. This task is carried out by four different intelligent agents: a control agent, a personal agent, an examiner agent, and a grading agent. The system can select questions from a question bank covering several courses and can be used at different educational levels and in different settings. It also produces an answer key for the generated exam, to be used later in grading and assigning the final marks of the e-assessments.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_4-A_Design_of_a_Multi_Agent_Smart_E_Examiner.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Rice Crop Quality and Harvest Amount from Helicopter Mounted NIR Camera Data and Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041003</link>
        <id>10.14569/IJARAI.2015.041003</id>
        <doi>10.14569/IJARAI.2015.041003</doi>
        <lastModDate>2015-10-11T06:39:56.9970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masanoori Sakashita</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <subject>Rice Crop; Rice Leaf; Nitrogen content; Protein content; NDVI</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>Estimation of rice crop quality and harvest amount in paddy fields with different rice stump densities is made using data from a helicopter-mounted NIR camera and remote sensing satellite data. Using an intensively managed study site of rice paddy fields, protein content in the rice crop and nitrogen content in the rice leaves are estimated, together with the harvest amount of the rice crop, through regression analysis with the Normalized Difference Vegetation Index (NDVI) derived from a camera mounted on a radio-controlled helicopter. Through experiments in rice paddy fields situated at the Saga Prefectural Agriculture Research Institute SPRIA in Saga city, Japan, it is found that protein content in the rice crop is highly correlated with NDVI acquired with a visible and Near Infrared (NIR) camera mounted on the radio-controlled helicopter. It is also found that nitrogen content in the rice leaves is correlated with NDVI. Protein content in the rice crop is negatively proportional to rice taste. Therefore, rice crop quality can be evaluated through NDVI observation of rice paddy fields.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_3-Estimation_of_Rice_Crop_Quality_and_Harvest_Amount.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relation Between Chlorophyll-A Concentration and Red Tide in the Intensive Study Area of the Ariake Sea, Japan in Winter Seasons by using MODIS Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041002</link>
        <id>10.14569/IJARAI.2015.041002</id>
        <doi>10.14569/IJARAI.2015.041002</doi>
        <lastModDate>2015-10-11T06:39:56.9800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>chlorophyl-a concentration; red tide; diatom; MODIS; satellite remote sensing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>The relation between chlorophyll-a concentration and red tide in the intensive study area at the back of the Ariake Sea, Japan, in recent winter seasons is investigated using MODIS data. The mechanism of red tide appearance is not well clarified. On the other hand, chlorophyll-a concentration can be estimated from satellite remote sensing data. An attempt is made to estimate the location and size of red tide appearances. In particular, severe damage due to red tide is nowadays suspected in the winter seasons. Therefore, 6 years (winter 2010 to winter 2015) of MODIS-derived chlorophyll-a concentration data and truth data on red tide appearance (location and volume), provided by the Saga Prefectural Fishery Promotion Center (SPFPC) as shipment data once every 10 days, have been investigated. As a result of the investigation, a strong correlation between chlorophyll-a concentration and red tide appearance is found, together with the possible sources of the red tide.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_2-Relation_Between_Chlorophyll_A_Concentration_and_Red_Tide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagrammatic Representation as a Tool for Clarifying Logical Arguments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.041001</link>
        <id>10.14569/IJARAI.2015.041001</id>
        <doi>10.14569/IJARAI.2015.041001</doi>
        <lastModDate>2015-10-11T06:39:56.9170000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>artificial intelligence; diagrammatic representation; conditionals; argument forms; logical argumentation; modus ponens</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(10), 2015</description>
        <description>Knowledge representation of reasoning processes is a central notion in the field of artificial intelligence, especially for knowledge-based agents, because such representation facilitates knowledge of action outcomes necessary for optimum performance by problem-solving agents in complex situations. Logic is the primary vehicle by which knowledge is represented in knowledge-based agents. It involves logical inference that produces answers from what is known based on this inference mechanism. Modus Ponens is the best-known rule of inference that is sound. Recently, a dispute has arisen regarding attempts to show that modus ponens is not a valid form of inference. Part of the cause of the controversy is miscommunication of the involved problem. This paper proposes a diagrammatic representation of modus ponens with the hope that such a representation will serve to clarify the issue. The advantage of this diagrammatic representation is a better understanding of the reasoning process behind this inference rule.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No10/Paper_1-Diagrammatic_Representation_as_a_Tool_for_Clarifying_Logical_Arguments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Programming Languages on the Final Project Report by Using Analytical Hierarchy Process (AHP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060942</link>
        <id>10.14569/IJACSA.2015.060942</id>
        <doi>10.14569/IJACSA.2015.060942</doi>
        <lastModDate>2015-10-08T13:50:20.1870000+00:00</lastModDate>
        
        <creator>Juhartini </creator>
        
        <creator>Muhammad Suyanto</creator>
        
        <subject>programming language; parameters; AHP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Developments in information technology provide a great deal of convenience for everyone. Academy of Information Management and Computer (AIMC) students in their fourth semester, when carrying out their Job Training, must specify the type of programming language that will be used for the Final Project Report. This study assessed five programming languages using the Analytical Hierarchy Process (AHP) to determine which of the five has the best quality rating according to the chosen parameters. The Analytical Hierarchy Process (AHP) is a way of making decisions that involve multiple criteria or multiple objectives, such as choosing the programming language for Student Information Management at the Academy of Information Management and Computer (AIMC). The programming languages were assessed against criteria consisting of Clarity, Simplicity, and Unity; Orthogonality; Fairness for Applications; Support for Abstraction; Program Environment; and Program Portability.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_42-The_Use_of_Programming_Languages_on_the_Final_Project_Report.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Smart use of BBM and its Influence on Academic Achievement in SMK Health PGRI Denpasar</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060941</link>
        <id>10.14569/IJACSA.2015.060941</id>
        <doi>10.14569/IJACSA.2015.060941</doi>
        <lastModDate>2015-10-08T13:50:20.0770000+00:00</lastModDate>
        
        <creator>I Wayan Gede Narayana</creator>
        
        <subject>BlackBerry Messenger; Academic Achievement; Correlation; Simple Random Sampling; Kendall's Tau</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The BlackBerry Messenger (BBM) chat facility is very popular for communicating through text messages, pictures, and videos, allowing users to interact actively in the virtual world. This study examines whether BBM has a significant influence on academic achievement at SMK Health PGRI Denpasar. The effect of BBM on academic achievement is studied using quantitative methods, with questionnaires distributed to a random sample of the student population of SMK Health PGRI Denpasar. The correlation analysis uses Kendall's tau coefficient, and the results show that BBM use is strongly associated with improved academic achievement.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_41-A_Survey_on_Smart_Use_of_BBM_and_its_Influence_on_Academic_Achievement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Competence Making on Computer Engineering Program by Using Analytical Hierarchy Process (AHP)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060940</link>
        <id>10.14569/IJACSA.2015.060940</id>
        <doi>10.14569/IJACSA.2015.060940</doi>
        <lastModDate>2015-10-08T13:50:19.9830000+00:00</lastModDate>
        
        <creator>Ahmad Yani</creator>
        
        <creator>Lalu Darmawan Bakti</creator>
        
        <subject>Decision Support System; AHP; Competence; Criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>This paper addresses competence selection for students of the Academy of Information Management and Computer (AIMC) Mataram in computer engineering courses who have completed semesters 1, 2, and 3 and must choose a lesson competence. Competence selection in computer engineering courses is intended to make it easier for students to choose the competence that matches their professional expertise, steering them toward their abilities and academic achievement. This is not an easy task for students: limited information makes it difficult to determine the best competence based on academic achievement and interest. To solve this problem, a decision support system is required that helps by providing a solution or alternative based on accurate data, using a computerized Analytical Hierarchy Process (AHP).</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_40-Competence_Making_on_Computer_Engineering_Program.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Encryption and Decryption Application by using One Time Pad Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060939</link>
        <id>10.14569/IJACSA.2015.060939</id>
        <doi>10.14569/IJACSA.2015.060939</doi>
        <lastModDate>2015-10-08T13:50:19.8270000+00:00</lastModDate>
        
        <creator>Zaeniah </creator>
        
        <creator>Bambang Eka Purnama</creator>
        
        <subject>cryptography; algorithms One Time pad; encryption; Decryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Security of data on a computer is needed to protect critical data and information from other parties. One way to protect data is to apply cryptography to encrypt it. A wide variety of algorithms is used for data encryption; this study uses the one-time pad algorithm. The one-time pad algorithm uses the same key for both encryption and decryption of the data. Encrypted data is transformed into ciphertext so that only a person who holds the key can open it. This paper therefore analyzes an application that implements the one-time pad algorithm for encrypting data. An application implementing the one-time pad algorithm can help users store data securely.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_39-An_Analysis_Encryption_and_Description_Application_by_using_One_Time_Pad_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Clustering in Vehicular Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060938</link>
        <id>10.14569/IJACSA.2015.060938</id>
        <doi>10.14569/IJACSA.2015.060938</doi>
        <lastModDate>2015-10-06T09:10:14.6770000+00:00</lastModDate>
        
        <creator>Zainab Nayyar</creator>
        
        <creator>Dr. Muazzam Ali Khan Khattak</creator>
        
        <creator>Dr. Nazar Abass Saqib</creator>
        
        <creator>Nazish Rafique</creator>
        
        <subject>Vehicular ad hoc networks; secure clustering; wireless technologies; certification authority; cluster heads</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>A vehicular ad hoc network is composed of moving cars as nodes without any infrastructure; the nodes self-organize to form a network over radio links. Security issues, such as authentication and authorization, are commonly observed in vehicular ad hoc networks, and secure clustering plays a significant role in VANETs. In recent years, various secure clustering techniques with distinguishing features have been proposed. In order to provide a comprehensive understanding of how these techniques are designed for VANETs and to pave the way for further research, this paper presents a detailed survey of secure clustering techniques. By qualitatively highlighting the various secure clustering techniques, conclusions are drawn that can enhance the availability and security of vehicular ad hoc networks: nodes in clusters will work more efficiently, and message passing among the nodes will be better authenticated by the cluster heads.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_38-Secure_Clustering_in_Vehicular_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skill Evaluation for Newly Graduated Students Via Online Test</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060937</link>
        <id>10.14569/IJACSA.2015.060937</id>
        <doi>10.14569/IJACSA.2015.060937</doi>
        <lastModDate>2015-10-03T08:08:34.5370000+00:00</lastModDate>
        
        <creator>Mahdi Mohammed Younis</creator>
        
        <creator>Miran Hikmat Mohammed Baban</creator>
        
        <subject>LAN-Network; Database; Online Test; Skills evaluation; feedback</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Every year, many students graduate from each university holding a first university degree, for example a Bachelor's degree in Computer Science. Most of these students are motivated to continue with further studies toward a higher degree, while others are eager to work based on the skills they gained during their university studies. In both cases, applicants are required to pass a test covering the subjects they studied. For this reason, this research proposes a new technique, based on an online test, to evaluate the skills of newly graduated students.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_37-Skill_Evaluation_for_Newly_Graduated_Students_Via_Online_Test.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Based Evaluation of Software Quality Using Quality Models and Goal Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060936</link>
        <id>10.14569/IJACSA.2015.060936</id>
        <doi>10.14569/IJACSA.2015.060936</doi>
        <lastModDate>2015-10-02T10:23:21.6570000+00:00</lastModDate>
        
        <creator>Arfan Mansoor</creator>
        
        <creator>Detlef Streitferdt</creator>
        
        <creator>Franz-Felix Fu&#223;l</creator>
        
        <subject>Decision making; Goal Models; Quality Models; NFR; Fuzzy numbers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Software quality requirements are an essential part of successful software development. Defined and guaranteed quality in software development requires identifying, refining, and predicting quality properties by appropriate means. Goal models from goal-oriented requirements engineering (GORE) and quality models are useful for modelling functional goals as well as quality goals. Once goal models representing the functional requirements and integrated quality goals are obtained, each functional requirement arising from the functional goals and each quality requirement arising from the quality goals must be evaluated. The process consists of two main parts. In the first part, the goal models are used to evaluate functional goals: the leaf-level goals establish the evaluation criteria, and stakeholders contribute their opinions about the importance of each goal (functional and/or quality). Stakeholder opinions are then converted into quantifiable numbers using triangular fuzzy numbers (TFN), and after applying a defuzzification process to the TFN, a score (weight) is obtained for each goal. In the second part, specific quality goals are identified and refined/tailored based on existing quality models, and their evaluation is performed similarly using TFN and defuzzification. This two-step process helps to evaluate each goal based on stakeholder opinions, to evaluate the impact of quality requirements, and to evaluate the relationships among functional goals and quality goals. The process is described and applied to the 'cyclecomputer' case study.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_36-Fuzzy_Based_Evaluation_of_Software_Quality_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resistance to Statistical Attacks of Parastrophic Quasigroup Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060935</link>
        <id>10.14569/IJACSA.2015.060935</id>
        <doi>10.14569/IJACSA.2015.060935</doi>
        <lastModDate>2015-10-01T12:45:25.6070000+00:00</lastModDate>
        
        <creator>Verica Bakeva</creator>
        
        <creator>Aleksandra Popovska-Mitrovikj</creator>
        
        <creator>Vesna Dimitrova</creator>
        
        <subject>uniform distribution; cryptographic properties; statistical attack; encrypted message; quasigroup; parastrophic quasigroup transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Resistance of encrypted messages to statistical attacks is a very important property when designing cryptographic primitives. In this paper, the parastrophic quasigroup PE-transformation, proposed elsewhere, is considered, and a proof that it has this cryptographic property is given. Namely, it is proven that if the PE-transformation is used to design an encryption function, then after n applications of it to an arbitrary message, the distribution of m-tuples (m = 1, 2, ..., n) is uniform. These uniform distributions imply the resistance of the encrypted messages to statistical attacks. To illustrate the theoretical results, some experimental results are presented as well.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_35-Resistance_to_Statistical_Attacks_of_Parastrophic_Quasigroup_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AATCT: Anonymously Authenticated Transmission on the Cloud with Traceability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060934</link>
        <id>10.14569/IJACSA.2015.060934</id>
        <doi>10.14569/IJACSA.2015.060934</doi>
        <lastModDate>2015-10-01T12:45:25.5770000+00:00</lastModDate>
        
        <creator>Maged Hamada Ibrahim</creator>
        
        <subject>Cloud computing; anonymous transmission; pseudonym systems; smart cards; mobile devices; authentication; IT security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>In Cloud computing, anonymous authentication is an important service that must be available to users in the Cloud. Users have the right to remain anonymous as long as they behave honestly. However, in case malicious behavior is detected, the system must be able, under court order, to trace the user to his clear identity. Most of the proposed authentication schemes for the Cloud are either password-based schemes that are vulnerable to offline dictionary attacks, or biometric-based schemes that take a long time to execute, especially under high security requirements. In this paper, we propose an efficient and secure scheme to non-interactively authenticate users on the Cloud to remote servers while preserving their anonymity. In case of accusations, the registration authority is able to trace any user to his clear identity. We avoid using low-entropy passwords or biometric mechanisms; instead, we employ pseudonym systems in our design. The computation complexity and storage requirements are efficient and suitable for implementation on smart cards/devices. Our proposed scheme withstands challenging adversarial attacks such as stolen-database attacks, database-insertion attacks, impersonation attacks, replay attacks, and malicious user/server collaboration attacks.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_34-AATCT_Anonymously_Authenticated_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Osteoporosis Detection using Important Shape-Based Features of the Porous Trabecular Bone on the Dental X-Ray Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060933</link>
        <id>10.14569/IJACSA.2015.060933</id>
        <doi>10.14569/IJACSA.2015.060933</doi>
        <lastModDate>2015-10-01T12:45:25.5600000+00:00</lastModDate>
        
        <creator>Enny Itje Sela</creator>
        
        <creator>Rini Widyaningrum</creator>
        
        <subject>dental X-ray; feature selection; osteoporosis detection; porous trabecular bone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Osteoporosis screening using dental X-ray images has become an interesting research area. Existing methods for osteoporosis screening use either periapical or panoramic dental X-ray images; research using both has been limited due to the expensive cost of obtaining the data. This paper presents a combination of periapical and panoramic images for osteoporosis detection. Image processing was performed to obtain shape-based features of the porous trabecular bone on both types of dental radiograph. The most important of the extracted features were selected and used for osteoporosis detection with a decision tree. Quantitative evaluation using a confusion matrix found an accuracy of 73.33%, a sensitivity of 72.23%, and a specificity of 72.23% on the test data.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_33-Osteoporosis_Detection_using_Important_Shape_Based_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-Based Textual Emotion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060932</link>
        <id>10.14569/IJACSA.2015.060932</id>
        <doi>10.14569/IJACSA.2015.060932</doi>
        <lastModDate>2015-10-01T12:45:25.5430000+00:00</lastModDate>
        
        <creator>Mohamed Haggag</creator>
        
        <creator>Samar Fathy</creator>
        
        <creator>Nahla Elhaggar</creator>
        
        <subject>emotion detection; ontology; ontology matching; natural language processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Emotion detection from text is a very important area of natural language processing. This paper presents a new ontology-based method for emotion detection from text. The method extracts an ontology from the input sentence using a triplet extraction algorithm with the OpenNLP parser, then matches it against an ontology base that we created, using similarity and word sense disambiguation. This ontology base consists of ontologies and the emotion label related to each one; the emotion label of the sentence is taken from the match with the highest score. If the extracted ontology does not match any ontology in the ontology base, the keyword-based approach is used. Unlike previous approaches, this method does not depend only on keywords; it depends on the meaning of the sentence's words and on the syntactic and semantic analysis of the context.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_32-Ontology_Based_Textual_Emotion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Agile Implementation of Test Maturity Model Integration (TMMI) Level 2 using Scrum Practices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060931</link>
        <id>10.14569/IJACSA.2015.060931</id>
        <doi>10.14569/IJACSA.2015.060931</doi>
        <lastModDate>2015-10-01T12:45:25.5130000+00:00</lastModDate>
        
        <creator>Ahmed B. Farid</creator>
        
        <creator>Enas M. Fathy</creator>
        
        <creator>Mahmoud Abd Ellatif</creator>
        
        <subject>Agile software development; Scrum; TMMI; Software Testing; CMMI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The software industry has invested substantial effort in improving the quality of its products through standards such as ISO, CMMI, and TMMI. Although applying TMMI maturity criteria has a positive impact on product quality, test engineering productivity, and cycle-time effort, it requires heavyweight software development processes and large investments of cost and time that medium and small companies cannot afford. Agile methods, by contrast, handle changing requirements and deliver valuable software continuously in short iterations. The aim of this paper is therefore to improve the testing process by applying a detailed mapping between TMMI and Scrum practices and verifying this mapping with a study providing empirical evidence of the obtained results. The research was evaluated on a sample of two large TMMI-certified companies. In conclusion, the experimental results show the effectiveness of this integrated approach compared with other existing approaches.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_31-Towards_Agile_Implementation_of_Test_Maturity_Model_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Paper Review Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060930</link>
        <id>10.14569/IJACSA.2015.060930</id>
        <doi>10.14569/IJACSA.2015.060930</doi>
        <lastModDate>2015-10-01T12:45:25.4830000+00:00</lastModDate>
        
        <creator>Doaa Mohey El-Din</creator>
        
        <creator>Hoda M.O. Mokhtar</creator>
        
        <creator>Osama Ismael</creator>
        
        <subject>Sentiment analysis; Opinion Mining; Reviews; Text analysis; Bag of words; sentiment analysis challenges</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Sentiment analysis, or opinion mining, is used to automate the detection of subjective information such as opinions, attitudes, emotions, and feelings. Hundreds of thousands of people care about scientific research and take a long time to select suitable papers for their work; online reviews of papers are an essential aid, saving both reading time and the cost of papers. This paper proposes a new technique for analyzing online reviews, called sentiment analysis of online papers (SAOOP). SAOOP enhances the bag-of-words model, improving accuracy and performance, and increases the rate at which review sentences are understood by covering more language cases. SAOOP introduces solutions for several sentiment analysis challenges and uses them to achieve higher accuracy. This paper also presents a measure of topic-domain attributes, which provides a ranking of the overall judgment of each text review, for assessing and comparing results across different sentiment techniques. Finally, the efficiency of the proposed approach is shown by comparing it with two existing sentiment analysis techniques, in terms of accuracy, performance, and the understanding rate of sentences.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_30-Online_Paper_Review_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-Based Image Retrieval using Local Features Descriptors and Bag-of-Visual Words</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060929</link>
        <id>10.14569/IJACSA.2015.060929</id>
        <doi>10.14569/IJACSA.2015.060929</doi>
        <lastModDate>2015-10-01T12:45:25.4670000+00:00</lastModDate>
        
        <creator>Mohammed Alkhawlani</creator>
        
        <creator>Mohammed Elmogy</creator>
        
        <creator>Hazem Elbakry</creator>
        
        <subject>Content-based Image Retrieval (CBIR); Scale Invariant Feature Transform (SIFT); Speeded Up Robust Features (SURF); K-Means Algorithm; Support Vector Machine (SVM); Bag-of-Visual Word (BoVW)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Image retrieval is still an active research topic in the computer vision field, and several techniques exist for retrieving visual data from large databases. Bag-of-Visual-Words (BoVW) is a visual feature descriptor that can be used successfully in content-based image retrieval (CBIR) applications. In this paper, we present an image retrieval system that uses local feature descriptors and the BoVW model to retrieve similar images efficiently and accurately from standard databases. The proposed system uses the SIFT and SURF techniques as local descriptors to produce image signatures that are invariant to rotation and scale. It also uses K-Means as a clustering algorithm to build a visual vocabulary from the feature descriptors obtained by the local descriptor techniques. To efficiently retrieve more images relevant to the query, an SVM classifier is used. The performance of the proposed system is evaluated by calculating both precision and recall, and the experimental results show that the system performs well on two different standard datasets.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_29-Content_BASED_Image_Retrieval_Using_Local_Features_Descriptors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Analysis on Image Motif of Endek Bali using K-Nearest Neighbor Classification Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060928</link>
        <id>10.14569/IJACSA.2015.060928</id>
        <doi>10.14569/IJACSA.2015.060928</doi>
        <lastModDate>2015-10-01T12:45:25.4370000+00:00</lastModDate>
        
        <creator>I Gede Surya Rahayuda</creator>
        
        <subject>Analysis; Texture; Image; Endek Bali; Edge Detection; GLCM; K-NN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Endek fabric is a form of woven fabric craft of Balinese society. Endek fabric has a variety of motifs or designs, and many people do not know that Endek types are distinguished by their motif designs. In this research, texture analysis is carried out on images of Endek Bali motifs, which are then classified into several classes based on the motif pattern type. The first step is to collect images of Endek with different motifs; each image is then transformed into a gray-level image using edge detection, features are extracted using the GLCM, and the data are classified using K-NN. Among all values of K tested, the best accuracy was obtained at K = 15 on the Correlation component, with an accuracy of 43.33%. Overall, the Cemplong motif was recognized with better accuracy than most other motifs, at 57.50%. Quite a lot of Endek motifs were recognized imprecisely during classification, because different Endek motifs may have similar textures. The purpose of this study is to analyze the texture of Endek Bali motif images and classify them, so that an application can later be developed to help recognize the type of Endek Bali fabric. It would be even better if the program were implemented on a mobile phone, so that image acquisition, extraction, and classification could be carried out directly on the phone and produce accurate classification results.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_28-Texture_Analysis_on_Image_Motif_of_Endek_Bali.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scrum Method Implementation in a Software Development Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060927</link>
        <id>10.14569/IJACSA.2015.060927</id>
        <doi>10.14569/IJACSA.2015.060927</doi>
        <lastModDate>2015-10-01T12:45:25.4030000+00:00</lastModDate>
        
        <creator>Putu Adi Guna Permana</creator>
        
        <subject>Scrum Method; Agile; SDLC; Software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>To maximize performance, companies use a variety of means to increase business profit. Work management differs from one company to another, and these differences may cause the software to have different business processes. Software development can be defined as creating new software or fixing existing software. Technological developments have led to increasing demand for software, and information technology (IT) companies should be able to maintain their projects well. The methodology used in software development is chosen in accordance with the company's needs, based on the SDLC (Software Development Life Cycle). The Scrum method is part of the Agile family of methods and is expected to increase speed and flexibility in software development project management.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_27-Scrum_Method_Implementation_in_a_Software_Development_Project_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a New Integrated Model to improve the using of Classical Approach in Designing Management Information Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060926</link>
        <id>10.14569/IJACSA.2015.060926</id>
        <doi>10.14569/IJACSA.2015.060926</doi>
        <lastModDate>2015-10-01T12:45:25.3900000+00:00</lastModDate>
        
        <creator>Mohammad M M Abu Omar</creator>
        
        <creator>Khairul Anuar Abdullah</creator>
        
        <subject>Management Information System; MIS; Systems Development Methodologies; Classical Approach; Information System Life Cycle; ISLC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Management information systems (MIS) are used to solve management problems in practical life, and the design and building of management information systems is done using one of the systems development methodologies. The classical approach is one of these methodologies, and it still suffers from some critical problems when used to design and build management information systems: it consumes more time and cost during its life cycle. This paper develops a new integrated model to shorten the classical approach life cycle in designing and building management information systems, in order to avoid the additional consumption of time and cost.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_26-Developing_a_New_Integrated_Model_to_improve_the_using_of_Classical_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Knowledge Bases for Automated Decision Making Systems – A Literature Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060925</link>
        <id>10.14569/IJACSA.2015.060925</id>
        <doi>10.14569/IJACSA.2015.060925</doi>
        <lastModDate>2015-10-01T12:45:25.3730000+00:00</lastModDate>
        
        <creator>Franz Felix F&#252;ssl</creator>
        
        <creator>Detlef Streitferdt</creator>
        
        <creator>Anne Triebel</creator>
        
        <subject>Knowledge Base; Knowledge Modeling; Knowledge Engineering; Ontology Engineering; Artificial Neural Network; Expert System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Developing automated decision making systems means dealing with knowledge in every possible manner. One of the most important aspects of developing artificially intelligent systems is developing a precise knowledge base with integrated self-learning mechanisms. Moreover, when using knowledge in expert systems or decision support systems, it is necessary to document knowledge and make it visible in order to manage it. The main goal of this work is finding a suitable solution for modeling knowledge bases in automated decision making systems, covering both the illustration of specific knowledge and learning mechanisms. Many different terms describe this kind of research, such as knowledge modeling, knowledge engineering, and ontology engineering. For that reason, this paper provides a comparison of the technical terms in this domain, illustrating their similarities, specifics, and how they are used in the literature.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_25-Modeling_Knowledge_Bases_for_Automated_Decision_Making_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Heuristic-Block Protocol Model for Privacy and Concurrency in Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060924</link>
        <id>10.14569/IJACSA.2015.060924</id>
        <doi>10.14569/IJACSA.2015.060924</doi>
        <lastModDate>2015-10-01T12:45:25.3400000+00:00</lastModDate>
        
        <creator>Akhilesh Kumar Bhardwaj</creator>
        
        <creator>Dr. Surinder</creator>
        
        <creator>Dr. Rajiv Mahajan</creator>
        
        <subject>Cloud Computing; TPA; Firefly; MHT; NTRU; LZW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>With the growth in the number of cloud users and the amount of sensitive data on the cloud, protecting the cloud has become more important. Efficient methods are consistently needed to ensure the information privacy and load management of outsourced data on untrusted cloud servers. The basis of our proposed idea is the sequential application of the metaheuristic firefly algorithm and a block-based Merkle hash tree protocol. This combination significantly reduces communication delay and I/O costs. The proposed scheme also supports dynamic data operations at the block level while maintaining equivalent security assurance. Our method makes use of a third party auditor to periodically verify the data stored at the cloud provider side. Our solution removes the burden of verification from the user side and alleviates both the user’s and the storage service’s fears about data leakage and data corruption.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_24-A_Modified_Heuristic_Block_Protocol_Model_for_Privacy_and_Concurrency_in_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Attribute Decision Making for Electrician Selection using Triangular Fuzzy Numbers Arithmetic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060923</link>
        <id>10.14569/IJACSA.2015.060923</id>
        <doi>10.14569/IJACSA.2015.060923</doi>
        <lastModDate>2015-10-01T12:45:25.3100000+00:00</lastModDate>
        
        <creator>Wiwien Hadikurniawati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>multi-attribute decision making; triangular fuzzy number; fuzzy arithmetic; electrician</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>This study uses a fuzzy multi-attribute decision making approach to determine alternatives that solve the problem of selecting an electrician through a competency test. The competency test consists of several tests of knowledge, skills, and work attitude. The parameters of decision making used to choose the best alternative are a written test, a theoretical knowledge test, a practical knowledge test, and an oral test. Linguistic values expressed as triangular fuzzy numbers are used to represent the preferences of decision makers, so that uncertainty and imprecision in the selection process can be minimized. Aggregation results are represented using triangular fuzzy numbers. The output of this selection process is the best alternative, obtained using a triangular fuzzy number arithmetic approach.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_23-A_Multi_Attribute_Decision_Making_for_Electrician_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection Techniques in Wireless Sensor Network using Data Mining Algorithms: Comparative Evaluation Based on Attacks Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060922</link>
        <id>10.14569/IJACSA.2015.060922</id>
        <doi>10.14569/IJACSA.2015.060922</doi>
        <lastModDate>2015-10-01T12:45:25.2930000+00:00</lastModDate>
        
        <creator>YOUSEF EL MOURABIT</creator>
        
        <creator>AHMED TOUMANARI</creator>
        
        <creator>ANOUAR BOUIRDEN</creator>
        
        <creator>NADYA EL MOUSSAID</creator>
        
        <subject>Wireless sensor network; Anomaly Detection; Intrusion detection system; classification; KDD’99; Weka</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>A wireless sensor network (WSN) consists of sensor nodes. Deployed in open areas and characterized by constrained resources, WSNs suffer from several attacks, intrusions, and security vulnerabilities. An intrusion detection system (IDS) is one of the essential security mechanisms against attacks in WSNs. In this paper we present a comparative evaluation of the best-performing detection techniques for IDS in WSNs; the approaches are analyzed and compared technically, followed by a brief discussion. Attacks in WSNs are also presented and classified according to several criteria. To implement and measure the performance of the detection techniques, we prepared our dataset, based on KDD&#39;99, in five steps: after normalizing the dataset, we determined the normal class and four types of attacks, and used the most relevant attributes for the classification process. We propose applying CfsSubsetEval with the BestFirst approach as an attribute selection algorithm for removing redundant attributes. The experimental results show that random forest methods provide a high detection rate and reduce the false alarm rate. Finally, a set of principles that must be satisfied in future research implementing IDS in WSNs is presented. To help researchers in the selection of IDS for WSNs, several recommendations are provided together with future directions for this research.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_22-Intrusion_Detection_Techniques_in_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MCIP Client Application for SCADA in Iiot Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060921</link>
        <id>10.14569/IJACSA.2015.060921</id>
        <doi>10.14569/IJACSA.2015.060921</doi>
        <lastModDate>2015-10-01T12:45:25.2630000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <subject>SCADA; OPC DA; OPC.NET; OPC UA; DDS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Modern automation system architectures include several subsystems among which an adequate sharing of the workload is required. These subsystems must work together to fulfil the tasks imposed by their common function, given by the business purpose to be fulfilled. To perform these tasks, these subsystems or components must communicate with each other, which is the critical function of the architecture of such a system. This article presents MCIP (Monitoring and Control of the Industrial Processes), an object-oriented client application that allows the monitoring and control of industrial processes. As a novelty, the paper presents the architecture of the user object, which is in fact a wrapper that allows the connection to the Communication Standard Interface bus, the characteristics of the IIoT (Industrial Internet of Things) object, and the correspondence between a server’s address space and the address space of MCIP.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_21-MCIP_Client_Application_for_SCADA_in_Iiot_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Information Technology on Data Processing by using Cobit Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060920</link>
        <id>10.14569/IJACSA.2015.060920</id>
        <doi>10.14569/IJACSA.2015.060920</doi>
        <lastModDate>2015-10-01T12:45:25.2470000+00:00</lastModDate>
        
        <creator>Surni Erniwati</creator>
        
        <creator>Nina Kurnia Hikmawati</creator>
        
        <subject>Cobit; Data Processing; Information Technology; Level of Maturity; Management Awareness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Information technology and processes are interconnected, directing and controlling the company toward achieving corporate goals by adding value and balancing the risks and benefits of information technology. This study aims to analyze the maturity level of the data management process and to produce information technology recommendations regarding IT management, so that support for academic services can be improved. The maturity level was calculated by analyzing questionnaires on the state of information technology. The results show that the governance of information technology for data processing at ASM Mataram is currently quite good. The current maturity value for data processing is 2.69, meaning that the organization already has a pattern that is repeatedly followed in managing activities related to data management processes. Based on the analysis of the gap between current and expected conditions, solutions or corrective actions can be taken to gradually improve IT governance in the data management process at ASM Mataram.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_20-An_Analysis_of_Information_Technology_on_Data_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>How to Model a Likely Behavior of a Pedagogical Agent from a Real Situation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060919</link>
        <id>10.14569/IJACSA.2015.060919</id>
        <doi>10.14569/IJACSA.2015.060919</doi>
        <lastModDate>2015-10-01T12:45:25.2170000+00:00</lastModDate>
        
        <creator>Mohamedade Farouk NANNE</creator>
        
        <subject>pedagogical agent; nonverbal communication; behavior; corpus analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The aim of this work is to model the verbal and nonverbal behavior of a Pedagogical Agent (PA) that can be integrated into an Intelligent Tutoring System. The following research questions were posed: what is the nonverbal component of educational communication? How can this component be studied to build a computational model of plausible behavior for a virtual agent? What correlations exist between educational actions and the gaze direction of a human agent? To carry out this exploratory work, a methodological approach based on the study of a multimodal video corpus was adopted. Within a multidisciplinary team of computer scientists and specialists in the didactics of mathematics, an educational situation in which a virtual pedagogical agent is plausible was developed. Dyadic interactions between teachers and learners at the end of the second and beginning of the third year of secondary school (15-16 years), during a skills assessment interview in mathematics following the students’ resolution of exercises with mathematics software, were filmed. A multi-level annotation scheme for the observed behavior was proposed. The multidisciplinary nature of the research subject (ITS, human-machine interfaces, educational sciences, linguistics, etc.) made the development of the coding scheme a delicate but important piece of work, given the wealth of knowledge from the different disciplines. After annotating a portion of the collected corpus, statistical measures derived from the annotations suggest different teacher strategies in terms of gaze direction depending on the learner profile and the pedagogical actions. These measures made it possible to extract rules to control the nonverbal behavior of a PA.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_19-How_to_Model_a_Likely_Behavior_of_a_Pedagogical_Agent_from_a_Real_Situation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electrooculogram Signals Analysis for Process Control Operator Based on Fuzzy c-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060918</link>
        <id>10.14569/IJACSA.2015.060918</id>
        <doi>10.14569/IJACSA.2015.060918</doi>
        <lastModDate>2015-10-01T12:45:25.2000000+00:00</lastModDate>
        
        <creator>Jiangwen Song</creator>
        
        <creator>Raofen Wang</creator>
        
        <creator>Guanghua Zhang</creator>
        
        <creator>Chaoxing Xiong</creator>
        
        <creator>Leyan Zhang</creator>
        
        <creator>Cunbang Sun</creator>
        
        <subject>electrooculogram; fuzzy c-means; operator functional state; fatigue</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Biomedical signals can reflect the body&#39;s task load, fatigue, and other psychological information. Compared with other biomedical signals, the electrooculogram (EOG) has higher amplitude and less interference, and is easy to detect. In this paper, the EOG signals of operators were analyzed. A wavelet transform was used to remove high-frequency artifacts. Fuzzy c-means was then adopted to detect the eye blink peak points of the EOG, after which the operator’s eye blink interval (EBI) was calculated. Four EOG features (the average, variance, standard deviation, and variation coefficient of the EBI) were extracted. Finally, the relationships between the EOG features and the operator’s fatigue, effort, anxiety, and task load were analyzed. The experimental results illustrate that the EOG features are each related to the operator’s fatigue, effort, anxiety, and task load.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_18-Electrooculogram_Signals_Analysis_for_Process_Control_Operator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Improve Classification Accuracy of Leaf Images using Dorsal and Ventral Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060917</link>
        <id>10.14569/IJACSA.2015.060917</id>
        <doi>10.14569/IJACSA.2015.060917</doi>
        <lastModDate>2015-10-01T12:45:25.1700000+00:00</lastModDate>
        
        <creator>Arun Kumar</creator>
        
        <creator>Vinod Patidar</creator>
        
        <creator>Deepak Khazanachi</creator>
        
        <creator>Poonam Saini</creator>
        
        <subject>Leaf image; Leaf classification; Texture features; Statistical features; Dorsal and ventral sides of leaves; Gray level co-occurrence matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>This paper proposes to improve the classification accuracy of leaf images by extracting texture and statistical features, utilizing the presence of striking features on the dorsal and ventral sides of leaves, which may not be as prominent on other types of objects. The texture features have been extracted from the dorsal, ventral, and combined dorsal-ventral sides of leaf images using the gray level co-occurrence matrix. In addition, this work also uses certain general statistical features for discriminating the images into various classes. Feature selection has been performed separately for the dorsal, ventral, and combined data sets (for both texture and statistical features) using the most common feature selection algorithms. After selecting the relevant features, classification has been done using the following algorithms: K-Nearest Neighbor, J48, Na&#239;ve Bayes, Partial Least Square (PLS), Classification and Regression Tree (CART), and Classification Tree (CT). The classification accuracy has been calculated and compared to find which side of the leaf image (dorsal or ventral) gives better results with which type of features (texture or statistical). This study reveals that the ventral leaf features can be another alternative for discriminating leaf images into various classes.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_17-An_Approach_to_Improve_Classification_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Prediction Model for Endocrine Disorders in the Korean Elderly Using CART Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060916</link>
        <id>10.14569/IJACSA.2015.060916</id>
        <doi>10.14569/IJACSA.2015.060916</doi>
        <lastModDate>2015-10-01T12:45:25.1400000+00:00</lastModDate>
        
        <creator>Haewon Byeon</creator>
        
        <subject>data-mining; CART; elderly; health behavior; endocrine disorders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The aim of the present cross-sectional study was to analyze the factors that affect endocrine disorders in the Korean elderly. The data were taken from the Seoul Welfare Panel Study 2010. The subjects were 2,111 people (879 males, 1,232 females) aged 60 and older living in the community. The dependent variable was defined as the prevalence of endocrine disorders. The explanatory variables were gender, level of education, household income, employment status, marital status, drinking, smoking, BMI, subjective health status, physical activity, experience of stress, and depression. In the Classification and Regression Tree (CART) algorithm analysis, subjective health status, BMI, education level, and household income were significantly associated with endocrine disorders in the Korean elderly. The most preferentially involved predictor was subjective health status. The development of guidelines and health education to prevent endocrine disorders, taking multiple risk factors into account, is required.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_16-Development_of_Prediction_Model_for_Endocrine_Disorders.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptocurrency Mining – Transition to Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060915</link>
        <id>10.14569/IJACSA.2015.060915</id>
        <doi>10.14569/IJACSA.2015.060915</doi>
        <lastModDate>2015-10-01T12:45:25.1070000+00:00</lastModDate>
        
        <creator>Hari Krishnan R.</creator>
        
        <creator>Sai Saketh Y.</creator>
        
        <creator>Venkata Tej Vaibhav M.</creator>
        
        <subject>Cryptocurrency; Bitcoin mining; Cloud mining; Double Spending; Profitability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Cryptocurrency, a form of digital currency that has an open and decentralized system and uses cryptography to enhance security and control the creation of new units, is touted to be the next step from conventional monetary transactions. Many cryptocurrencies exist today, with Bitcoin being the most prominent of them. Cryptocurrencies are generated by mining, as a fee for validating any transaction. The rate of generating hashes, which validate transactions, has been increased by the use of specialized machines such as FPGAs and ASICs running complex hashing algorithms like SHA-256 and Scrypt, leading to faster generation of cryptocurrencies. This arms race for cheaper yet more efficient machines has been on since the day the first cryptocurrency, Bitcoin, was introduced in 2009. However, with more people venturing into the world of virtual currency, generating hashes for validation has become far more complex over the years, with miners having to invest huge sums of money in employing multiple high performance ASICs. Thus the value of the currency obtained for finding a hash did not justify the amount of money spent on setting up the machines, the cooling facilities to overcome the enormous amount of heat they produce, and the electricity required to run them. The next logical step is to utilize the power of cloud computing. Miners leasing supercomputers that generate hashes at astonishing rates, with a high probability of profit, and with the same machine being leased to more than one person on a time-bound basis, is a win-win situation for both the miners and the cloud service providers. This paper throws light on the nuances of the cryptocurrency mining process, the traditional machines used for mining and their limitations, how cloud-based mining is the logical next step, and the advantage that the cloud platform offers over traditional machines.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_15-Cryptocurrency_Mining_Transition_to_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Method and Similarity to Recognize Javanese Keris</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060914</link>
        <id>10.14569/IJACSA.2015.060914</id>
        <doi>10.14569/IJACSA.2015.060914</doi>
        <lastModDate>2015-10-01T12:45:25.0930000+00:00</lastModDate>
        
        <creator>Halim Budi Santoso</creator>
        
        <creator>Ryan Peterzon Hadjon</creator>
        
        <subject>Edge Detection; Similarity; Canny Algorithm; Basic Morphological; Image Recognition; Javanese Keris</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>This paper describes a hybrid method with similarity matching for recognizing the Javanese Keris. The Javanese Keris is a traditional Javanese weapon and part of Indonesia&#39;s cultural heritage. The Keris is famous for its distinctive wavy blade, although some Keris have straight blades. There are many kinds of Keris, and every Keris has its own unique pattern. The algorithm for recognizing several types of Javanese Keris combines edge detection with image segmentation and similarity matching; using these methods together yields more accurate results. The method combines the Canny algorithm and basic morphological operations for image segmentation. The results of edge detection and image segmentation are compared with sample pictures using similarity matching. Ten (10) images of traditional Javanese Keris are used as samples. The final result of this study is recognition of the kind of Keris. These techniques were tested in an experiment using MATLAB 6.0.1 software.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_14-Hybrid_Method_and_Similarity_to_Recognize_Javanese_Keris.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grid Color Moment Features in Glaucoma Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060913</link>
        <id>10.14569/IJACSA.2015.060913</id>
        <doi>10.14569/IJACSA.2015.060913</doi>
        <lastModDate>2015-10-01T12:45:25.0600000+00:00</lastModDate>
        
        <creator>Abir Ghosh</creator>
        
        <creator>Anurag Sarkar</creator>
        
        <creator>Amira S. Ashour</creator>
        
        <creator>Dana Balas-Timar</creator>
        
        <creator>Nilanjan Dey</creator>
        
        <creator>Valentina E. Balas</creator>
        
        <subject>Glaucoma; Clinical decision support system; RIM-ONE image database; Classifier; Back Propagation Neural Network; color feature extraction; Grid Color Moment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Automated diagnosis of glaucoma focuses on the analysis of retinal images to localize, perceive, and evaluate the optic disc. A clinical decision support system (CDSS) is used for glaucoma classification in human eyes. This process depends mainly on the feature type, which can be morphological or non-morphological, originating in retinal image analysis techniques that use color, texture, structural, or contextual features. This work proposes an empirical study of a novel automated glaucoma diagnosis classification system based on the Grid Color Moment method, used as a feature vector to extract the (non-morphological) color features, together with a neural network classifier. These features are fed to a back propagation neural network (BPNN) classifier for automated diagnosis. The proposed system was tested using the open RIM-ONE database, which has accurate gold standards of the optic nerve head. This work classifies both normal retinas and retinas affected by glaucoma. The experimental results achieved an accuracy of 87.47%. Thus, the proposed system can detect the early glaucoma stage with good accuracy.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_13-Grid_Color_Moment_Features_in_Glaucoma_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Calculate the Efficiency for an N-Receiver Wireless Power Transfer System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060912</link>
        <id>10.14569/IJACSA.2015.060912</id>
        <doi>10.14569/IJACSA.2015.060912</doi>
        <lastModDate>2015-10-01T12:45:25.0130000+00:00</lastModDate>
        
        <creator>Thabat Thabet</creator>
        
        <creator>Dr. John Woods</creator>
        
        <subject>wireless power transfer; multiple receivers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>A wireless power transfer system with more than one receiver is a realistic proposition for charging multiple devices such as phones and tablets. It is therefore necessary to consider systems with a single transmitter and multiple receivers in terms of efficiency; current offerings only consider single-device charging systems. A problem encountered is that the efficiency of one receiver can be affected by another because of the mutual inductance between them. In this paper, an efficiency calculation method is presented for a wireless power transfer system with one to N receivers. The mutual inductance between coils is implicitly calculated for different spatial positions and verified by practical experimentation. The effect on efficiency of changing parameters such as resonant frequency, coil size, and distance between coils has been studied, and the particular behavior of a wireless power transfer system at a specific operating point is clarified.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_12-An_Approach_to_Calculate_the_Efficiency_for_an_N_Receiver_Wireless_Power_Transfer_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Algorithm for Balancing Group Population in Collaborative Leaning Sessions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060911</link>
        <id>10.14569/IJACSA.2015.060911</id>
        <doi>10.14569/IJACSA.2015.060911</doi>
        <lastModDate>2015-10-01T12:45:24.9830000+00:00</lastModDate>
        
        <creator>Aiman Turani</creator>
        
        <creator>Jawad Alkhatib</creator>
        
        <subject>Group formation; Group Population; CSCL; Collaboration Techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Proper group formation is essential to conducting a productive collaborative learning session. It specifies the internal structure that the collaborating groups should have, based on roles. Group formation is a dynamic and challenging component: in some cases, more than one group formation is made during a single session, and a single participant may undertake more than one role. Group population is another essential process that follows group formation; it is concerned with assigning participants to groups and roles. Several group population methods are used in Computer-Supported Collaborative Learning environments.
The main challenge in group population is the partial filling situation. Partial filling happens when some participants fail to attend their assigned groups at the start of a session; it can be caused by human mistakes or by technical faults. In this paper, a correction algorithm is described to rebalance the groups&#39; participation levels. The algorithm is based on three main phases: Group Elimination, External Transfer, and Internal Transfer. The algorithms of these phases are fully described in this paper.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_11-A_New_Algorithm_for_Balancing_Group_Population_in_Collaborative_Leaning_Sessions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Computer Network Based on EIGRP Performance Comparison and OSPF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060910</link>
        <id>10.14569/IJACSA.2015.060910</id>
        <doi>10.14569/IJACSA.2015.060910</doi>
        <lastModDate>2015-10-01T12:45:24.9670000+00:00</lastModDate>
        
        <creator>Lalu Zazuli Azhar Mardedi</creator>
        
        <creator>Abidarin Rosidi</creator>
        
        <subject>Network; EIGRP; OSPF; based; simulator; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The internet is one of the most rapidly growing computer network technologies. In building networks, a routing mechanism is needed to integrate all computers with a high degree of flexibility, and routing is a major contributor to network performance. With many existing routing protocols, network administrators need a reference comparison of the performance of each type of routing protocol, such as the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). This paper focuses only on the performance of these two routing protocols on a hybrid network topology. The existing network service provides internet access speeds averaging 8.0 KB/sec over 2 MB of bandwidth. A backbone network is shared by two academies, the Academy of Information Management and Computer (AIMC) and the Academy of Secretary and Management (ASM), with 2041 clients, which has caused slow internet access. To address this problem, the performance of EIGRP and OSPF is analyzed and compared. The Cisco Packet Tracer 6.0.1 simulation software is used to obtain the values and verify the results.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_10-Developing_Computer_Network_Based_on_EIGRP_Performance_Comparison_and_OSPF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Apsidal Precession for Low Earth Sun Synchronized Orbits</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060909</link>
        <id>10.14569/IJACSA.2015.060909</id>
        <doi>10.14569/IJACSA.2015.060909</doi>
        <lastModDate>2015-10-01T12:45:24.9370000+00:00</lastModDate>
        
        <creator>Shkelzen Cakaj</creator>
        
        <creator>Bexhet Kamo</creator>
        
        <creator>Algenti Lala</creator>
        
        <creator>Ilir Shinko</creator>
        
        <creator>Elson Agastra</creator>
        
        <subject>LEO; satellite; orbit; apsidal precession</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Earth&#39;s flattening manifests itself at satellite low Earth orbits (LEO) through nodal regression and apsidal precession. Nodal regression refers to the shift of the orbit&#39;s line of nodes over time as the Earth revolves around the Sun; it is the orbital feature used to make circular orbits Sun-synchronized. A Sun-synchronized orbit lies in a plane that maintains a fixed angle with respect to the Earth-Sun direction. Low Earth Sun-synchronized circular orbits are well suited to satellites that carry out photo-imagery missions. Nodal regression depends on the orbital altitude and the orbital inclination angle; for the respective orbital altitudes, the inclination window within which Sun synchronization can be attained is determined. Apsidal precession represents the shift of the major axis, that is, the deviation of the argument of perigee. This paper provides a simulation of the apsidal precession over the inclination window of Sun-synchronized orbital altitudes.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_9-The_Apsidal_Precession_for_Low_Earth_Sun_Synchronized_Orbits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic Literature Review of Agile Scalability for Large Scale Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060908</link>
        <id>10.14569/IJACSA.2015.060908</id>
        <doi>10.14569/IJACSA.2015.060908</doi>
        <lastModDate>2015-10-01T12:45:24.9200000+00:00</lastModDate>
        
        <creator>Hina saeeda</creator>
        
        <creator>Hannan Khalid</creator>
        
        <creator>Mukhtar Ahmed</creator>
        
        <creator>Abu Sameer</creator>
        
        <creator>Fahim Arif</creator>
        
        <subject>Agility; large scale projects; agile scalability; SCRUM; XP; DSDM; Crystal; SLR; Statistical Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Among newer methods, agile has emerged as the leading approach in the software industry for developing software. In its different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication &amp; coordination, team size, and distributed environments. Agile has proved successful in small and medium-sized projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail and to identify and highlight their restrictions for large projects with the help of a systematic literature review. The systematic literature review answers the following research questions: 1) How can agile approaches be made scalable and adoptable for large projects? 2) What existing methods, approaches, frameworks, and practices support the agile process in large-scale projects? 3) What are the limitations of existing agile approaches, methods, frameworks, and practices with reference to large-scale projects? This study identifies the current research problems of agile scalability for large projects by giving a detailed literature review of the identified problems and the existing work that addresses them, and by finding the limitations of that work in covering the identified problems. All gathered results are summarized statistically; based on these findings, remedial work will be planned in the future to handle the identified limitations of agile approaches for large-scale projects.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_8-Systematic_Literature_Review_of_Agile_Scalability_for_Large_Scale_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Stitching System Based on ORB Feature-Based Technique and Compensation Blending</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060907</link>
        <id>10.14569/IJACSA.2015.060907</id>
        <doi>10.14569/IJACSA.2015.060907</doi>
        <lastModDate>2015-10-01T12:45:24.8900000+00:00</lastModDate>
        
        <creator>Ebtsam Adel</creator>
        
        <creator>Mohammed Elmogy</creator>
        
        <creator>Hazem Elbakry</creator>
        
        <subject>Image stitching; Image mosaicking; Feature-based approaches; Scale Invariant Feature Transform (SIFT); Speed-up Robust Feature detector (SURF); Oriented FAST and Rotated BRIEF (ORB); Exposure Compensation blending</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The construction of a high-resolution panoramic image from a sequence of overlapping input images of the same scene is called image stitching or mosaicing. It is considered an important and challenging topic in computer vision, multimedia, and computer graphics. The quality of the mosaic image and the time cost are the two primary parameters for measuring stitching performance. Therefore, the main objective of this paper is to introduce a high-quality image stitching system with the least computation time. First, we compare several different feature detectors: the Harris corner detector, SIFT, SURF, FAST, GoodFeaturesToTrack, MSER, and ORB are tested to measure the detection rate of correct keypoints and the processing time. Second, we evaluate implementations of the common categories of image blending methods to increase the quality of the stitching process. From the experimental results, we conclude that the ORB algorithm is the fastest, the most accurate, and the best performing, and that Exposure Compensation is the blending method with the highest stitching quality. Finally, we have built an image stitching system based on ORB using the Exposure Compensation blending method.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_7-Image_Stitching_System_Based_on_ORB_Feature_Based_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Community Perception of the Security and Acceptance of Mobile Banking Services in Bahrain: An Empirical Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060906</link>
        <id>10.14569/IJACSA.2015.060906</id>
        <doi>10.14569/IJACSA.2015.060906</doi>
        <lastModDate>2015-10-01T12:45:24.8570000+00:00</lastModDate>
        
        <creator>Ahmad S. Mashhour</creator>
        
        <creator>Zakarya Saleh</creator>
        
        <subject>Adoption factors; Bahrain community; Mobile Banking Services; Perceived Usefulness (PU); Perceived Ease of Use (PEOU); Technology acceptance model (TAM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Bahraini banks and financial organizations have deployed remote services using the internet and mobile devices to increase efficiency, reduce costs, and improve quality of service. These organizations need to identify the factors that persuade customers and improve their attitudes towards the adoption and usage of these services.
This study identifies the most important factors affecting customer attitudes towards mobile banking acceptance in Bahrain. The model formulated in this research presents and empirically examines the factors influencing customers&#39; mobile banking adoption behavior; it was tested with a survey sample of 300 banking customers. The findings indicate that wireless connection quality, mobile banking awareness, social influence, mobile self-efficacy, trust, and resistance to change have a significant impact on attitudes towards the likelihood of adopting mobile banking. The developed model is an extension of the Technology Acceptance Model (TAM), and the data analysis is based on the Statistical Package for the Social Sciences (SPSS).</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_6-Community_Perception_of_the_Security_and_Acceptance_of_Mobile_Banking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling for Forest Fire Evolution Based on the Energy Accumulation and Release</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060905</link>
        <id>10.14569/IJACSA.2015.060905</id>
        <doi>10.14569/IJACSA.2015.060905</doi>
        <lastModDate>2015-10-01T12:45:24.8270000+00:00</lastModDate>
        
        <creator>Fan Yang</creator>
        
        <creator>Qing Yang</creator>
        
        <creator>Xingxing Liu</creator>
        
        <creator>Pan Wang</creator>
        
        <subject>forest fire evolution; energy accumulation and release; cellular automaton (CA); simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Forest fire evolution plays an important role in decision-making for controlling forest fires. This paper aims to simulate the dynamics of forest fire spread using a cellular automaton approach. Having analyzed the characteristics and evolution of forest fires, a simulation model for forest fire evolution based on energy accumulation and release is proposed. Taking Australia&#39;s catastrophic forest fire of 2009 as an example, an evolution of the fire close to reality is simulated. The experimental results show that if forest energy is released on a small scale before or during the fire, the fire can be better controlled or may not even occur. Improving the efficiency of fire-extinguishing procedures and reducing the speed of fire spread are also effective means of controlling a forest fire.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_5-Modelling_for_Forest_Fire_Evolution_Based_on_the_Energy_Accumulation_and_Release.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expansion of e-Commerce Coverage to Unreached Community by using Micro-Finance Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060904</link>
        <id>10.14569/IJACSA.2015.060904</id>
        <doi>10.14569/IJACSA.2015.060904</doi>
        <lastModDate>2015-10-01T12:45:24.8100000+00:00</lastModDate>
        
        <creator>Ashir Ahmed</creator>
        
        <creator>Kazi Mozaher Hossein</creator>
        
        <creator>Md. Asifur Rahman</creator>
        
        <creator>Takuzo Osugi</creator>
        
        <creator>Akira Fukuda</creator>
        
        <creator>Hiroto Yasuura</creator>
        
        <subject>ICT; BoP; microfinance; E-commerce; social services; ePassbook</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Most people at the BOP (base of the economic pyramid, the largest but poorest community in the world, comprising 69% of the world&#39;s population) do not have access to e-commerce services. The way e-commerce is designed and practiced today does not enable their participation. The reasons are that their purchasing power is low, they have no means to make online payments, and there is no infrastructure to deliver purchased items to their doors. To enable the participation of people at the BOP, we propose an e-commerce framework that engages micro-finance institution (MFI) resources and our recently developed ePassbook system. This paper shows how the BOP community can enjoy the benefits of e-commerce services by using the proposed model. The advantages of making e-commerce available to the BOP are discussed, in addition to the challenges involved in implementing the model.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_4-Expansion_of_e-Commerce_Coverage_to_Unreached_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Learning Styles on Learner&#39;s Performance in E-Learning Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060903</link>
        <id>10.14569/IJACSA.2015.060903</id>
        <doi>10.14569/IJACSA.2015.060903</doi>
        <lastModDate>2015-10-01T12:45:24.7800000+00:00</lastModDate>
        
        <creator>Manal Abdullah</creator>
        
        <creator>Wafaa H. Daffa</creator>
        
        <creator>Reem M. Bashmail</creator>
        
        <creator>Mona Alzahrani</creator>
        
        <creator>Malak Sadik</creator>
        
        <subject>Learning style; Silverman; E-Learning; online learning; styling model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Due to the growing popularity of E-Learning, personalization has emerged as an important need. Differences in learners&#39; abilities and learning styles affect learning outcomes significantly. Meanwhile, with the development of E-Learning technologies, learners can be provided with a more effective learning environment to optimize their performance. The purpose of this study is to determine the impact of learning styles on learners&#39; performance in an e-learning environment, and to use learning style data to make recommendations for learners, instructors, and the contents of online courses. The data analyzed in this research is user performance gathered from an E-Learning platform (Blackboard), represented by the actions performed by the platform&#39;s users. A 10-fold cross validation was used to create and test the model, and the data was analyzed with the WEKA software, observing classification accuracy, MAE, and the ROC area. The results show that classification with the NBTree technique had the highest accuracy, at 69.697%, and that it could be applied to derive Felder-Silverman learning styles while taking students&#39; preferences into consideration. Moreover, students&#39; performance increased by more than 12%.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_3-The_Impact_of_Learning_Styles_on_Learner_Performance_in_E_Learning_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Safety Road Maps Based on Difference of Judgment of Road Users by their Smartphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060902</link>
        <id>10.14569/IJACSA.2015.060902</id>
        <doi>10.14569/IJACSA.2015.060902</doi>
        <lastModDate>2015-10-01T12:45:24.7630000+00:00</lastModDate>
        
        <creator>Viet Chau Dang</creator>
        
        <creator>Hiroshi Sato</creator>
        
        <creator>Masao Kubo</creator>
        
        <creator>Akira Namatame</creator>
        
        <subject>Traffic Safety Map; Risk Estimation; Occupancy Grid Map; Driving Model; Smartphone Sensing; Collective Intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>Recently, there has been growing demand and interest in developing methods for analyzing smartphone logs to extract traffic safety information. Because such logs have high time resolution and are closely related to user activities, yet are fragmentary and myopic, currently available quantitative risk assessment methods based on collision probability cannot easily create traffic safety maps automatically from driving logs, since they require complete, concrete information about a collision, such as vehicle size and pedestrian speed. This paper proposes a computable risk measurement method for building traffic safety maps from the driving logs of different users that does not rely on collision probability. The proposal is designed to compute mathematically the differences in recognition of the road environment among road users. Drivers differ in their recognition, judgment, and handling of a given situation; we suppose that a difference in recognition among users in the same situation is a signal of danger. This signal is easy to calculate with a Poisson process. Each user&#39;s recognition of the road environment, and the safety map integrated from the collection of these recognitions, are generated fully automatically. A real-world experiment was carried out, and the results show that the assumption and the proposed method succeeded in generating an accurate and effective traffic safety map.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_2-Building_Safety_Road_Maps_Based_on_Difference_of_Judgment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Social Media GIS to Support Information Utilization from Normal Times to Disaster Outbreak Times</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060901</link>
        <id>10.14569/IJACSA.2015.060901</id>
        <doi>10.14569/IJACSA.2015.060901</doi>
        <lastModDate>2015-10-01T12:45:24.6700000+00:00</lastModDate>
        
        <creator>Kayoko YAMAMOTO</creator>
        
        <creator>Shun FUJITA</creator>
        
        <subject>Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(9), 2015</description>
        <description>The present study aims to design, develop, operate, and evaluate a social media GIS (Geographical Information System) specially tailored to mash up the information that local residents and governments provide, in order to support information utilization from normal times through disaster outbreaks and thereby promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) A social media GIS was developed: an information system that integrates a Web-GIS, an SNS, and Twitter, together with an information classification function, a button function, and a ranking function, into a single system. This made it possible to propose an information utilization system covering both normal times and disaster outbreaks, when information overload occurs. (2) The social media GIS was operated for ten weeks in Mitaka City, Tokyo, with fifty local residents over 18 years old. About 32% of the users were in their forties and about 30% in their fifties, while more than 10% were in their twenties, thirties, or sixties and above. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka; danger-related information accounted for 20%, safety-related information for 68%, and other information for 12%.</description>
        <description>http://thesai.org/Downloads/Volume6No9/Paper_1-Development_of_Social_Media_GIS_to_Support_Information_Utilization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Recognition of Human Parasite Cysts on Microscopic Stools Images using Principal Component Analysis and Probabilistic Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040906</link>
        <id>10.14569/IJARAI.2015.040906</id>
        <doi>10.14569/IJARAI.2015.040906</doi>
        <lastModDate>2015-09-10T05:37:32.5530000+00:00</lastModDate>
        
        <creator>Beaudelaire Saha Tchinda</creator>
        
        <creator>Daniel Tchiotsop</creator>
        
        <creator>Ren&#233; Tchinda</creator>
        
        <creator>Didier WOLF</creator>
        
        <creator>Michel NOUBOM</creator>
        
        <subject>Human Parasite Cysts; Microscopic image; Segmentation; Parasite extraction; feature extraction; Principal component analysis; probabilistic neural Network; Parasite Recognition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>Parasites live in a host and get their food from, or at the expense of, that host. Cysts represent a form of resistance and spread of parasites. The manual diagnosis of microscopic stool images is time-consuming and depends on the human expert. In this paper, we propose an automatic recognition system that can be used to identify various intestinal parasite cysts from their microscopic digital images. We employ image pixel features to train a probabilistic neural network (PNN); probabilistic neural networks are well suited to classification problems. The main novelty is the use of feature vectors extracted directly from the image pixels. To this end, microscopic images are first segmented to separate the parasite from the background. The extracted parasite is then resized to a 12x12 image feature vector. For dimensionality reduction, projection onto a principal component analysis basis is used: the 12x12 extracted features are orthogonalized into two principal component variables that constitute the input vector of the PNN. The PNN is trained using 540 microscopic parasite images. The proposed approach was tested successfully on 540 samples of protozoan cysts obtained from 9 kinds of intestinal parasites.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_6-Automatic_Recognition_of_Human_Parasite_Cysts_on_Microscopic_Stools_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System for EKG Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040905</link>
        <id>10.14569/IJARAI.2015.040905</id>
        <doi>10.14569/IJARAI.2015.040905</doi>
        <lastModDate>2015-09-10T05:37:32.5200000+00:00</lastModDate>
        
        <creator>Jakub Ševc&#237;k</creator>
        
        <creator>Ondrej Kainz</creator>
        
        <creator>Peter Fecilak</creator>
        
        <creator>František Jakab</creator>
        
        <subject>Arduino; arrhythmia; C sharp; cardiovascular diseases; diagnosis; electrocardiogram; heart; Matlab</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>In this paper, a system for electrocardiogram (EKG) monitoring based on the Arduino microcontroller is presented. A detailed description of the electrocardiogram itself serves as the ground for building the proposed hardware and software solution. The software implementation takes the form of both a Matlab environment and a custom application. The final output enables retrieval of the actual data in real time and further provides a rudimentary diagnosis. The device is intended for home self-diagnosis of arrhythmia.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_5-System_for_EKG_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System for Human Detection in Image Based on Intel Galileo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040904</link>
        <id>10.14569/IJARAI.2015.040904</id>
        <doi>10.14569/IJARAI.2015.040904</doi>
        <lastModDate>2015-09-10T05:37:32.4900000+00:00</lastModDate>
        
        <creator>Rastislav Ešt&#243;k</creator>
        
        <creator>Ondrej Kainz</creator>
        
        <creator>Miroslav Michalko</creator>
        
        <creator>František Jakab</creator>
        
        <subject>image processing; Intel Galileo; motion detection; object recognition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>The aim of this paper is a comparative analysis of methods for motion detection and human recognition in images. Following this comparative analysis of current approaches, the authors propose their own solution: they design and implement a hardware and software system for motion detection in video with human recognition in the picture. The Intel Galileo development board serves as the basis for the hardware implementation. The authors implement their own software for motion detection and human recognition in the image, concluding with an evaluation of the proposed implementation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_4-System_for_Human_Detection_in_Image_Based_on_Intel_Galileo.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Directional Audible Sound System using Ultrasonic Transducers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040903</link>
        <id>10.14569/IJARAI.2015.040903</id>
        <doi>10.14569/IJARAI.2015.040903</doi>
        <lastModDate>2015-09-10T05:37:32.4600000+00:00</lastModDate>
        
        <creator>Wen-Kung Tseng</creator>
        
        <subject>ultrasound; amplitude-modulating; directional audible sound beam; weightings; H-infinity optimization method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>In general, audible sound spreads, whereas ultrasound is directional. This study used an amplitude-modulating technique with an array of 8 ultrasonic transducers to produce a directional audible sound beam. The sound field distribution of the directional audible sound beam was investigated, and the effect of transducer weightings that vary with frequency on the directivity of the sound beam was evaluated. An H-infinity optimization method was used to calculate the optimal weightings of the transducers for better directivity of the sound beam. Different optimal weightings were also added to the carrier and sideband frequencies to control the beamwidth and sidelobe amplitude of the difference frequency. The results showed that the beam width can be controlled and good directivity of the sound beam can be obtained by using the H-infinity optimization method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_3-A_Directional_Audible_Sound_System_using_Ultrasonic_Transducers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instruments and Criteria for Research and Analysis of the Internet Visibility of Bulgarian Judicial Institutions WEB-Space*</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040902</link>
        <id>10.14569/IJARAI.2015.040902</id>
        <doi>10.14569/IJARAI.2015.040902</doi>
        <lastModDate>2015-09-10T05:37:32.4270000+00:00</lastModDate>
        
        <creator>Nayden Valkov Nenkov</creator>
        
        <creator>Mariana Mateeva Petrova</creator>
        
        <subject>judicial institution; WEB-page; SEO (search engine optimization); evaluation criteria; court</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>e-Justice has been under discussion at the European level since 2007. The article describes tools and presents objective criteria for evaluating the WEB-pages of judicial institutions in Bulgaria. A methodology is offered in order to improve the organization and functioning of the judicial institutions. It is used to conduct experimental tests for analysis and assessment of the main characteristics of Bulgarian courts’ WEB-sites. The results provide grounds for findings and recommendations leading to improved communication and presence of these institutions on the WEB.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_2-Instruments_and_Criteria_for_Research_and_Analysis_of_the_Internet_Visibility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case-based Reasoning with Input Text Processing to Diagnose Mood [Affective] Disorders</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040901</link>
        <id>10.14569/IJARAI.2015.040901</id>
        <doi>10.14569/IJARAI.2015.040901</doi>
        <lastModDate>2015-09-10T05:37:32.3200000+00:00</lastModDate>
        
        <creator>Sri Mulyana</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Edi Winarko</creator>
        
        <subject>Case-Based Reasoning; mood disorder; case similarity; Jaccard Method; Tversky Method; Modified-Tversky Method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(9), 2015</description>
        <description>Case-Based Reasoning (CBR) is one of the methods used in expert systems. Calculation of the degree of similarity among cases has always been an important aspect of CBR, as the system attempts to identify the cases in a case-base with the highest degree of similarity in order to provide solutions for new problems. In this research, a CBR model with input text processing for diagnosing mood [affective] disorders is developed. This correlates with the increased tendency toward mood disorders in accordance with the dynamics of the economic and political situation. Calculating the degree of similarity among cases is one of the main focuses of this research, and a new method for doing so, Modified-Tversky, is proposed. The analysis performed to assess the methods used in measuring case similarity reveals that the Modified-Tversky method surpasses the others: in all tests conducted, the case similarity measures obtained with the Modified-Tversky method are greater than or equal to those calculated using the Jaccard and Tversky methods. The test results also show an average performance in processing text input of 89.3%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No9/Paper_1-Case_based_Reasoning_with_Input_Text_Processing_to_Diagnose_Mood.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of distributed lighting control architecture in dementia-friendly smart homes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040806</link>
        <id>10.14569/IJARAI.2015.040806</id>
        <doi>10.14569/IJARAI.2015.040806</doi>
        <lastModDate>2015-08-31T13:10:11.8000000+00:00</lastModDate>
        
        <creator>Atousa Zaeim</creator>
        
        <creator>Samia Nefti-Meziani</creator>
        
        <creator>Adham Atyabi</creator>
        
        <subject>Smart Home; Ambient intelligence; Machine Learning; Distributed Learning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>Dementia is a growing problem in societies with aging populations, not only for patients, but also for family members and for society in terms of the associated costs of providing health care. Helping patients to maintain a degree of independence in their home environment while ensuring their safety is considered a positive step forward for addressing the individual needs of dementia patients. A common symptom in dementia patients, including those with Alzheimer’s Disease and Related Dementia (ADRD), is sleep disturbance: patients are awake at night and asleep during the day. One problem with night-time sleep disturbance in dementia patients is possible accidental falls in a dark environment. An issue associated with such irregular sleeping behavior is the lighting condition of the patients’ surroundings. Clinical studies indicate that an appropriate level of lighting can help to restore the rest-activity cycles of ADRD patients. This study tackles this problem by developing machine learning solutions for controlling the lighting conditions of multiple rooms in the house at different hours, based on patterns of behavior generated for the patient. Several neural-network-oriented classification methods are investigated and their feasibility is assessed with a collection of synthetic data capturing two conditions, balanced and unbalanced inter-class samples. The classifiers are utilized within two lighting control architectures, centralized and distributed. The results indicate the feasibility of the distributed architecture in achieving a high level of classification performance, resulting in adequate control over the lighting conditions of the house in various time periods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_6-Application_of_distributed_lighting_control_architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Driver’s Awareness and Lane Changing Maneuver in Traffic Flow based on Cellular Automaton Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040805</link>
        <id>10.14569/IJARAI.2015.040805</id>
        <doi>10.14569/IJARAI.2015.040805</doi>
        <lastModDate>2015-08-31T13:10:11.7830000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Steven Ray Sentinuwo</creator>
        
        <subject>traffic cellular automata; scope awareness; lane changing maneuver; driver perception; speed estimation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>The effect of driver awareness (e.g., estimating the speed and arrival time of another vehicle) on the lane changing maneuver is discussed. “Scope awareness” is defined as the visibility required for the driver to form a visual perception of the road condition and of the speed of a vehicle appearing in the target lane for lane changing. A cellular automaton based simulation model is created and applied in simulation studies of driver awareness behavior. This study clarifies the relations between lane changing behavior and the scope awareness parameter that reflects driver behavior. Simulation results show that the proposed model is valid for investigating the important features of the lane changing maneuver.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_5-Driver_Awareness_and_Lane_Changing_Maneuver_in_Traffic_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberspace Challenges and Law Limitations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060837</link>
        <id>10.14569/IJACSA.2015.060837</id>
        <doi>10.14569/IJACSA.2015.060837</doi>
        <lastModDate>2015-08-31T13:10:11.7670000+00:00</lastModDate>
        
        <creator>Aadil Al-Mahrouqi</creator>
        
        <creator>Cormac O Cianain</creator>
        
        <creator>Tahar Kechadi</creator>
        
        <subject>Internet anonymity; pseudonymous internet users; electronic discovery; large-scale data breaches</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Privacy and data security are hot topics in the modern, technologically advanced economy. Technological innovations have created new forms of electronic data which are more vulnerable to theft or loss than traditional data storage. Moreover, recent advances in internet technologies have exacerbated the risk of security threats; the Internet brings a whole new set of challenges in terms of data protection. Considering the complexities of modern technological advancements and their impact on data security, this study examines the Irish laws and EU directives for privacy and data security, their effectiveness in managing large-scale data breaches, and their limitations. This paper also simulates attack scenarios that can be carried out by anonymous users in a complex cyberspace environment and explains how digital evidence related to an attack scenario can be tracked down.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_37-Cyberspace_Challenges_and_Law_Limitations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SmartFit: A Step Count Based Mobile Application for Engagement in Physical Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060836</link>
        <id>10.14569/IJACSA.2015.060836</id>
        <doi>10.14569/IJACSA.2015.060836</doi>
        <lastModDate>2015-08-31T13:10:11.7500000+00:00</lastModDate>
        
        <creator>Atifa Sarwar</creator>
        
        <creator>Hamid Mukhtar</creator>
        
        <creator>Maajid Maqbool</creator>
        
        <creator>Djamel Belaid</creator>
        
        <subject>Feedback, Gamification, Physical Activities, Step Count</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Research has found that relatively few people engage in regular exercise or other physical activities. Despite the availability of numerous mobile applications and specialized devices for self-tracking, people mostly lack the motivation to perform physical activities. In this article we present SmartFit, a mobile application that uses step count to promote physical activity in adults. The article points out that, for walking, activity duration alone is not sufficient for determining a user’s activeness state; step count is another factor that should be taken into account. We therefore propose an approach for converting steps into the duration for which the activity has been performed. This duration is then used in SmartFit to categorize users into different activeness levels. Gamification techniques have been incorporated in SmartFit, as they are found to serve the purpose of motivating and encouraging the user: points are awarded or deducted in order to keep users engaged for a longer period. Furthermore, feedback is provided to users depending on their goal and achieved progress. The objective is to facilitate and motivate users and keep them engaged in carrying out the recommended level of physical activity.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_36-SmartFit_A_Step_Count_Based_Mobile_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and simulation of the effects of landslide on circulation of transports on the mountain roads</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060835</link>
        <id>10.14569/IJACSA.2015.060835</id>
        <doi>10.14569/IJACSA.2015.060835</doi>
        <lastModDate>2015-08-31T13:10:11.7370000+00:00</lastModDate>
        
        <creator>Manh Hung Nguyen</creator>
        
        <creator>Tuong Vinh Ho</creator>
        
        <creator>Trong Khanh Nguyen</creator>
        
        <creator>Minh Duc Do</creator>
        
        <subject>Simulation model, Traffic network simulation, Landslide effect, Multiagent system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Landslides, one of the major natural hazards, account each year for enormous property damage in terms of both direct and indirect costs. On mountain roads, where the probability of landslides is highest, they not only hinder traffic flow but also generate various traffic problems in the form of congestion, high accident rates, and wasted time. This paper introduces an agent-based model for modeling and simulating the effects of landslides on the circulation of transports on mountain roads. The model is applied to the National Road N&#176;6 of Vietnam to visualize and analyze the effects of a landslide on the road when it occurs. It could help us to improve and optimally organize the landslide warning and rescue system on the road.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_35-Modeling_and_simulation_of_the_effects_of_landslide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Checking Self-Stabilising in Embedded Systems with Linear Temporal Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060834</link>
        <id>10.14569/IJACSA.2015.060834</id>
        <doi>10.14569/IJACSA.2015.060834</doi>
        <lastModDate>2015-08-31T13:10:11.7200000+00:00</lastModDate>
        
        <creator>Rim Marah</creator>
        
        <creator>Abdelaaziz EL Hibaoui</creator>
        
        <subject>Distributed Embedded Systems, Linear Temporal Logic, Self-Stabilization, Model Checking, Verification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Over the past two decades, distributed embedded systems have been widely used in many applications. One way to guarantee that these systems tolerate transient faults is to make them self-stabilizing, so that they automatically recover from any transient fault.
In this paper we present a formalism of the self-stabilization concept based on Linear Temporal Logic (LTL) and model check self-stabilization in embedded systems. Using a case study inspired by industrial practice, we present in detail a model checking approach to verify the self-stabilization property of our embedded system.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_34-Model_Checking_Self_Stabilising_in_Embedded.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Enhancement Using Homomorphic Filtering and Adaptive Median Filtering for Balinese Papyrus (Lontar)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060833</link>
        <id>10.14569/IJACSA.2015.060833</id>
        <doi>10.14569/IJACSA.2015.060833</doi>
        <lastModDate>2015-08-31T13:10:11.6900000+00:00</lastModDate>
        
        <creator>Ida Bagus Ketut Surya Arnawa</creator>
        
        <subject>Image Enhancement; Balinese Papyrus; Homomorphic Filtering; Adaptive Median Filtering; Otsu Binarization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Balinese papyrus (Lontar) has been one of the most popular writing media in Indonesia for more than a hundred years. Balinese papyri were used to document things considered important in the past. Most of them have suffered damage caused by weathering, fungus, and insects, making them difficult to read. One of the efforts made to preserve Balinese papyrus is digitization. The problem most often encountered in digitizing images of Balinese papyrus is poor results, as there is noise caused by the damaged condition of the papyrus and uneven illumination across parts of the image. In this study the authors propose combining homomorphic filtering with adaptive median filtering to perform image enhancement. Survey results show that, on average, 83.4% of respondents rated the image enhancement results as good, 4% rated them as very good, and 12.6% rated them as adequate.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_33-Image_Enhancement_Using_Homomorphic_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Approach to jointly resolve the Power and the Capacity Optimization problems in the multi-user OFDMA Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060832</link>
        <id>10.14569/IJACSA.2015.060832</id>
        <doi>10.14569/IJACSA.2015.060832</doi>
        <lastModDate>2015-08-31T13:10:11.6570000+00:00</lastModDate>
        
        <creator>Ndiaye Abdourahmane</creator>
        
        <creator>Ouya Samuel</creator>
        
        <creator>Mendy Gervais</creator>
        
        <subject>Cellular Network; OFDMA; Resources Allocation; Rate Adaptive; Margin Adaptive; Genetic Algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This paper deals with the problem of resource allocation in the downlink of radio mobile systems. The allocation of resources is established in the context of multi-path and Doppler effects. These phenomena cause random variations of the channel and make resource allocation difficult; the problem of resource allocation is therefore a highly nonlinear optimization problem. Thus, we propose an implementation of a genetic algorithm approach to increase the total throughput of users while minimizing the total power consumption of the base station. Like other evolutionary approaches, this method is characterized by its robustness, permitting a nonlinear optimization problem to be solved efficiently.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_32-Evolutionary_Approach_to_jointly_resolve_the_Power.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AdviseMe: An Intelligent Web-Based Application for Academic Advising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060831</link>
        <id>10.14569/IJACSA.2015.060831</id>
        <doi>10.14569/IJACSA.2015.060831</doi>
        <lastModDate>2015-08-31T13:10:11.6270000+00:00</lastModDate>
        
        <creator>Lawrence Keston Henderson</creator>
        
        <creator>Wayne Goodridge</creator>
        
        <subject>Web-Based Academic Advising; Academic Advising; Ontology; Jena; Expert Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>The traditional academic advising process in many tertiary-level institutions today possesses significant inefficiencies, which often account for high levels of student dissatisfaction. Common issues include high student-advisor loads, long waiting periods at advisory offices and the need for advisors to handle a significant number of redundant cases, among others.
Utilizing semantic web expert system technologies, a solution was proposed to complement the traditional advising process, alleviating its issues and inefficiencies where possible. The solution, coined ‘AdviseMe’, is an intelligent web-based application that provides a reliable, user-friendly interface for handling general advisory cases in special degree programmes offered by the Faculty of Science and Technology (FST) at the University of the West Indies (UWI), St. Augustine campus. In addition to providing information on handling basic student issues, the system’s core features include course advising, as well as information on graduation status and oral exam qualifications. This paper gives an overview of the solution, with special attention paid to its inference system, exposed via its RESTful Java Web Server (JWS).
The system was able to provide sufficiently accurate advice for the sample set presented and showed high levels of acceptability by both students and advisors. Furthermore, its successful implementation demonstrated its ability to enhance the advisory process of any tertiary-level institution with programmes similar to those of FST.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_31-AdviseMe_An_Intelligent_Web_Based_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Heuristic/Deterministic Dynamic Programing Technique for Fast Sequence Alignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060830</link>
        <id>10.14569/IJACSA.2015.060830</id>
        <doi>10.14569/IJACSA.2015.060830</doi>
        <lastModDate>2015-08-31T13:10:11.6100000+00:00</lastModDate>
        
        <creator>Talal Bonny</creator>
        
        <subject>Sequence alignment, dynamic programing, performance, optimization, FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Dynamic programming seeks to solve complex problems by breaking them down into multiple smaller problems, whose solutions are then combined to reach the overall solution. Deterministic algorithms have the advantage of accuracy but require large computational power. Heuristic algorithms have the advantage of speed but provide less accuracy. This paper presents a hybrid dynamic programming technique for sequence alignment. Our technique combines the advantages of deterministic and heuristic algorithms by delivering the optimal solution in suitable time. We implement our design on a Xilinx Zynq-7000 Artix-7 FPGA and show that our implementation improves the performance of sequence alignment by 63% in comparison to the traditional known methods.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_30-A_Hybrid_Heuristic_Deterministic_Dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues Model on Cloud Computing:  A Case of Malaysia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060829</link>
        <id>10.14569/IJACSA.2015.060829</id>
        <doi>10.14569/IJACSA.2015.060829</doi>
        <lastModDate>2015-08-31T13:10:11.5800000+00:00</lastModDate>
        
        <creator>Komeil Raisian</creator>
        
        <creator>Jamaiah Yahaya</creator>
        
        <subject>Cloud Computing; Security Issues; Security Viewpoint; Grid Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>With the development of cloud computing, many people’s view of infrastructure architectures, software distribution, and improvement models has changed significantly. Cloud computing is associated with a pioneering deployment architecture, which can be realized through grid computing, utility computing, and autonomic computing. The fast transition towards it has increased concerns about issues critical for an effective transition to cloud computing. From the security viewpoint, several issues and problems regarding the transfer to the cloud have been discussed. The goal of this study is to present a general security view of cloud computing by identifying the security problems that must be addressed and managed appropriately, in order to give a better perspective on the current world of cloud computing. This research also clarifies the particular interrelationships between cloud computing security and other associated variables such as data security, virtual machine security, application security, and privacy. In addition, a model of cloud computing security based on the investigation of previous studies has been developed. To examine the model, a descriptive survey is applied; the sample population is selected from employers and managers of IT companies in Malaysia. By testing the correlations, the results of the study indicate that the identified security challenges are present in the current world of cloud computing. Furthermore, the results show that cloud computing security is positively correlated with data security, virtual machine security, application security, and privacy.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_29-Security_Issues_Model_on_Cloud_Computing_A_Case_of_Malaysia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Map Reduce: A Survey Paper on Recent Expansion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060828</link>
        <id>10.14569/IJACSA.2015.060828</id>
        <doi>10.14569/IJACSA.2015.060828</doi>
        <lastModDate>2015-08-31T13:10:11.5630000+00:00</lastModDate>
        
        <creator>Shafali Agarwal</creator>
        
        <creator>Zeba Khanam</creator>
        
        <subject>Map Reduce; Hadoop; Iterative Computation; Phoenix; Databases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>A rapid growth of data in recent time, Industries and academia required an intelligent data analysis tool that would be helpful to satisfy the need to analysis a huge amount of data. MapReduce framework is basically designed to compute data intensive applications to support effective decision making. Since its introduction, remarkable research efforts have been put to make it more familiar to the users subsequently utilized to support the execution of massive data intensive applications.
Our survey paper emphasizes the state of the art in improving the performance of various applications using recent MapReduce models and how it is useful to process large scale dataset. A comparative study of given models corresponds to Apache Hadoop and Phoenix will be discussed primarily based on execution time and fault tolerance. At the end, a high-level discussion will be done about the enhancement of the MapReduce computation in specific problem area such as Iterative computation, continuous query processing, hybrid database etc.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_28-Map_Reduce_A_Survey_Paper_on_Recent_Expansion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Psychosocial Correlates of Software Designers&#39; Professional Aptitude</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060827</link>
        <id>10.14569/IJACSA.2015.060827</id>
        <doi>10.14569/IJACSA.2015.060827</doi>
        <lastModDate>2015-08-31T13:10:11.5500000+00:00</lastModDate>
        
        <creator>Walery Suslow</creator>
        
        <creator>Jacek Kowalczyk</creator>
        
        <creator>Michal Statkiewicz</creator>
        
        <creator>Marta Boinska</creator>
        
        <creator>Janina Nowak</creator>
        
        <subject>Software designer; Software quality; Professional skills; Early diagnosis; Psychological tests; NEO-FFI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This paper presents quantitative results from the first phase of empirical research carried out within the interdisciplinary project InfoPsycho, initiated in 2013 at the Koszalin University of Technology and the University of Gdansk. The aim of the study was to identify the personality traits that characterize successful applicants for university studies in the field of software development. Synthetic indicators of the quality and performance of their design tasks and exercises were selected as the criteria for candidates' professional skills. To measure personality traits, the NEO-FFI questionnaire, based on the five-factor model of Costa and McCrae, was used. Preliminary results show that promising young designers (N=140) score high on neuroticism and introversion compared with designers whose design documentation is of poor quality. They also show a high degree of conscientiousness, which is visible when their performance on exercises and programming tasks is evaluated.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_27-Psychosocial_Correlates_of_Software_Designers_Professional_Aptitude.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contemplation of Effective Security Measures in Access Management from Adoptability Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060826</link>
        <id>10.14569/IJACSA.2015.060826</id>
        <doi>10.14569/IJACSA.2015.060826</doi>
        <lastModDate>2015-08-31T13:10:11.5170000+00:00</lastModDate>
        
        <creator>Tehseen Mehraj</creator>
        
        <creator>Bisma Rasool</creator>
        
        <creator>Burhan Ul Islam Khan</creator>
        
        <creator>Asifa Baba</creator>
        
        <creator>Prof. A. G. Lone</creator>
        
        <subject>Access Management; Authentication; One-Time-Password (OTP); OTP generation; User adoptability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>With the expansion of computer networks, there has been a drastic change in the disposition of network security. Security has always been a major concern of any organization, as it involves mechanisms to ensure reliable access. In the present era of global electronic connectivity, where hackers, eavesdroppers, electronic fraud and viruses are growing in number, security has proved indispensable. Although numerous solutions have been put forth in the literature to guarantee security, they have fallen short on related traits such as efficiency and scalability. Despite the range of security solutions presented by experts, no single approach has been wholly agreed upon to provide absolute security or been standardized unanimously. Furthermore, these approaches lack adoptable user implementations. In this paper, various approaches and techniques introduced in the past to enhance the authentication and authorization of users performing sensitive and confidential transactions are discussed. Each work is discussed and analyzed individually. Finally, the open issues in the current domain of study are formulated from the review conducted.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_26-Contemplation_of_Effective_Security_Measures_in_Access_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bio-Inspired Clustering of Complex Products Structure based on DSM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060825</link>
        <id>10.14569/IJACSA.2015.060825</id>
        <doi>10.14569/IJACSA.2015.060825</doi>
        <lastModDate>2015-08-31T13:10:11.4870000+00:00</lastModDate>
        
        <creator>Fan Yang</creator>
        
        <creator>Pan Wang</creator>
        
        <creator>Sihai Guo</creator>
        
        <creator>Qibing Lu</creator>
        
        <creator>Xingxing Liu</creator>
        
        <subject>Bio-inspired computation; genetic algorithm (GA); reliable fitness function; DSM; complex products; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Clustering plays an important role in the decomposition of complex product structures. Different clustering algorithms may achieve different decomposition effects. This paper proposes a bio-inspired genetic algorithm, implemented with a reliable fitness function and the design structure matrix (DSM), for clustering analysis of complex products. This new bio-inspired genetic algorithm captures the features of the DSM and is based on biological evolution theory. A motorcycle engine is presented as an example of such a product for clustering. Five cluster alternatives are obtained from the regular clustering algorithm and the bio-inspired genetic algorithm, with the best alternative coming from the bio-inspired genetic algorithm. The results show that this algorithm is highly adaptable, especially when the product elements have complicated and asymmetric connections.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_25-Bio_Inspired_Clustering_of_Complex_Products_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Energy Saving Method for IDC CRAC System based on Prediction of Temperature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060824</link>
        <id>10.14569/IJACSA.2015.060824</id>
        <doi>10.14569/IJACSA.2015.060824</doi>
        <lastModDate>2015-08-31T13:10:11.4570000+00:00</lastModDate>
        
        <creator>Zou Yan</creator>
        
        <creator>WU Fei</creator>
        
        <creator>Xing Jian</creator>
        
        <creator>Dong Bo</creator>
        
        <subject>IDC; CRAC system; BP Neural Network Model; Forecast of temperature; Energy saving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>In the information era, the energy consumption of IDC Computer Room Air Conditioning (CRAC) systems is becoming increasingly serious, and there is growing concern over energy saving and consumption reduction. Based on an analysis of energy-saving practice in current computer-room air conditioning systems, a new energy-saving method for the IDC CRAC system is proposed, which makes energy-saving decisions based on temperature prediction. Its principle is to collect CPU utilization, which reflects changes in equipment workload, together with the temperatures in hot spots and the cold area. A BP neural network model is then built, taking the workload and the hot-spot temperature as actual input and the cold-area temperature as actual output. Given a set of real-time data, the model can predict the hot-spot temperature at the next time step, and a reasonable and effective air conditioning control scheme can then be chosen to realize energy-saving control. Preliminary simulation results show that both the approximation error on training samples and the prediction error on testing samples highlight the advantages of the model. Finally, the simulated distribution of temperature change in the CRAC system over a whole day shows that the proposed energy-saving method can reduce the energy consumption of the IDC, fully embodying its energy-saving effect.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_24-Research_on_Energy_Saving_Method_for_IDC_CRAC_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Fuzzy-Second Order Sliding Mode based Direct Power Control for Voltage Source Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060823</link>
        <id>10.14569/IJACSA.2015.060823</id>
        <doi>10.14569/IJACSA.2015.060823</doi>
        <lastModDate>2015-08-31T13:10:11.4230000+00:00</lastModDate>
        
        <creator>D. Kairous</creator>
        
        <creator>B. Belmadani</creator>
        
        <subject>AC-DC power converters; Bidirectional power flow; Fuzzy logic; Sliding mode control; Direct Power control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This paper focuses on a second order sliding mode based direct power controller (SOSM-DPC) of a three-phase grid-connected voltage source converter (VSC). The proposed control scheme combined with fuzzy logic aims at regulating the DC-link voltage of the converter and precisely tracking arbitrary power references, in order to easily control the system’s power factor. Therefore measures are proposed to reduce the chattering effects inherent to sliding-mode control (SMC). Simulations performed under Matlab/Simulink validate the feasibility of the designed Fuzzy-SOSM. Simulation results on a 1kVA grid-connected VSC under normal and faulted grid voltage conditions demonstrate good performance of the proposed control law in terms of robustness, stability and precision.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_23-Robust_Fuzzy_Second_Order_Sliding_Mode_based_Direct_Power_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Location Base Service on Tourism Places in West Nusa Tenggara by using Smartphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060822</link>
        <id>10.14569/IJACSA.2015.060822</id>
        <doi>10.14569/IJACSA.2015.060822</doi>
        <lastModDate>2015-08-31T13:10:11.4100000+00:00</lastModDate>
        
        <creator>Karya Gunawan</creator>
        
        <creator>Bambang Eka Purnama</creator>
        
        <subject>location base service; GPS; Smartphone; tourism information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>The study aims to create an application that can assist users in finding information about tourism places in West Nusa Tenggara, Indonesia. West Nusa Tenggara is one of the provinces of Indonesia and the second most popular tourist destination after Bali. It is part of the Lesser Sunda Islands and consists of two large islands, Lombok in the west and Sumbawa in the east, with the provincial capital Mataram on the island of Lombok. The area of West Nusa Tenggara province is 19,708.79 km2. The application provides information such as descriptions of sights, tourism spot addresses, photo galleries, available facilities, and the closest route to a tourism spot using Google Maps. Google Maps can display map locations and the closest routes from the user&#39;s position to the tourism place. The positions of the user and of tourism sites are determined using GPS (Global Positioning System). The application is built in the Java programming language for Android with the Eclipse 4.2.1 IDE and can be used on Android-based smartphones. The result of the research is a location-based service application for searching tourism places in West Nusa Tenggara province, helping tourists who visit the region determine routes to the tourism places.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_22-Implementation_of_Location_Base_Service_on_Tourism_Places.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Probabilistic Algorithm based on Fuzzy Clustering for Indoor Location in Fingerprinting Positioning Method </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060821</link>
        <id>10.14569/IJACSA.2015.060821</id>
        <doi>10.14569/IJACSA.2015.060821</doi>
        <lastModDate>2015-08-31T13:10:11.3770000+00:00</lastModDate>
        
        <creator>Bo Dong</creator>
        
        <creator>Fei Wu</creator>
        
        <creator>Jian Xing</creator>
        
        <creator>Yan Zou</creator>
        
        <subject>Fuzzy Clustering; Fingerprinting Positioning; Indoor Location; RSSI; Probabilistic Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Recently, fingerprint-based positioning technology has proved clearly superior to positioning based on signal-transmission loss models and has attracted wide attention from scholars. In the online phase, the probabilistic distribution matching computation is inefficient, and when the position fingerprint database is clustered, hard clustering degrades the positioning accuracy. A probabilistic algorithm based on fuzzy clustering is therefore proposed and applied to indoor fingerprinting positioning. Compared with the hard-clustering fusion algorithm, the proposed method realizes a fuzzy partition of the database, enables the online positioning phase to search the desired fingerprint data effectively, and improves positioning accuracy. Experiments show that the algorithm can effectively address the accuracy problems of hard clustering.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_21-Probabilistic_Algorithm_based_on_Fuzzy_Clustering_for_Indoor_Location.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Vision for Screening Resistance Level of Rice Varieties to Brown Planthopper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060820</link>
        <id>10.14569/IJACSA.2015.060820</id>
        <doi>10.14569/IJACSA.2015.060820</doi>
        <lastModDate>2015-08-31T13:10:11.3600000+00:00</lastModDate>
        
        <creator>Elvira Nurfadhilah</creator>
        
        <creator>Yeni Herdiyeni</creator>
        
        <creator>Aunu Rauf</creator>
        
        <creator>Rahmini</creator>
        
        <subject>brown planthopper; color extraction; resistance; standard seedboxes screening test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>The brown planthopper is one of the most important insect pests threatening the stability of national rice production in Indonesia. One effort to protect rice production is the use of brown-planthopper-resistant varieties. Currently, the determination approach is still conventional, based on the Standard Seedbox Screening Test from IRRI, with experienced experts assisting in scoring the resistance level. In this study, a prototype application system that predicts resistance levels using an image-color approach was developed. The method consists of collecting image data, a preparation process (background and object segmentation), and determining the proportions of infected (sick and dead) and healthy area based on the &#39;A&#39; value of the CIELAB color space. From the distribution of proportion values, an image-based rule for assessing rice resistance to the brown planthopper was developed; the rule closely resembles the IRRI standard rules. All images were assessed with this rule, and the resulting model achieved an error rate of 17.02%.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_20-Computer_Vision_for_Screening_Resistance_Level_of_Rice_Varieties.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-Healing Hybrid Protection Architecture for Passive Optical Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060819</link>
        <id>10.14569/IJACSA.2015.060819</id>
        <doi>10.14569/IJACSA.2015.060819</doi>
        <lastModDate>2015-08-31T13:10:11.3300000+00:00</lastModDate>
        
        <creator>Waqas A. Imtiaz</creator>
        
        <creator>P. Mehar</creator>
        
        <creator>M. Waqas</creator>
        
        <creator>Yousaf Khan</creator>
        
        <subject>passive optical network; protection; star-ring topology; reliability; CAPEX</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>The expanding size of passive optical networks (PONs), along with high availability expectations, makes reliability performance a crucial need. Most protection architectures rely on redundant network components to enhance network survivability, which is not economical. This paper proposes a new self-healing protection architecture for PONs, with a single ring topology at the feeder level and a star-ring topology at the distribution level. The proposed architecture provides the desired protection while avoiding fiber duplication at both the feeder and distribution levels. Moreover, medium access control (MAC) controlled switching is used to provide efficient detection and restoration of faults or cuts throughout the network. Analytical analysis reveals that the proposed self-healing hybrid protection architecture ensures survivability of the affected traffic along with a desirable connection availability of 99.9994% at minimum deployment cost, through a simple architecture and simultaneous protection against failures.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_19-Self_Healing_Hybrid_Protection_Architecture_for_Passive_Optical_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Proposed Framework for Semantic Search Engine using New Semantic Ranking Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060818</link>
        <id>10.14569/IJACSA.2015.060818</id>
        <doi>10.14569/IJACSA.2015.060818</doi>
        <lastModDate>2015-08-31T13:10:11.3000000+00:00</lastModDate>
        
        <creator>M. M. El-gayar</creator>
        
        <creator>N.Mekky</creator>
        
        <creator>A. Atwan</creator>
        
        <subject>Semantic Search Engine; Ontology; Semantic Ranker; Crawler; RDF;SPARQL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>The amount of information in databases grows by billions of records every year, and there is an urgent need to search that information with a specialized tool: the search engine. Many search engines are available today, but the main challenge is that most of them cannot retrieve meaningful information intelligently. Semantic web technology is a solution that keeps data in a machine-readable format, helping machines smartly match data with related information based on meaning. In this paper, we introduce a proposed semantic framework comprising four phases: crawling, indexing, ranking and retrieval. This semantic framework operates over sorted RDF using an efficient proposed ranking algorithm and an enhanced crawling algorithm. The enhanced crawling algorithm crawls relevant forum content from the web with minimal overhead. The proposed ranking algorithm orders and evaluates similar meaningful data so that the retrieval process becomes faster, easier and more accurate. We applied our work to a standard database and achieved 99 percent effectiveness in semantic performance in minimal time, with an error rate of less than 1 percent compared with other semantic systems.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_18-Efficient_Proposed_Framework_for_Semantic_Search_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>HTCSLQ : Hierarchical Tree Congestion Degree with Speed Sending and Sum Costs Link Quality Mechanism for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060817</link>
        <id>10.14569/IJACSA.2015.060817</id>
        <doi>10.14569/IJACSA.2015.060817</doi>
        <lastModDate>2015-08-31T13:10:11.2670000+00:00</lastModDate>
        
        <creator>Mbida Mohamed</creator>
        
        <creator>Ezzati Abdellah</creator>
        
        <subject>HTAP (Hierarchical Tree Alternative Path); Cd (congestion degree); Ss( speed of sending); SCVQL (Sum of costs variable Link quality); Wireless Sensor Network (wsn)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Wireless sensor network (WSN) performance has progressed over the last few years, with the aim of extending node lifetime. Among the important topics studied in WSNs is the congestion degree, which can be handled by several algorithms such as HTAP (Hierarchical Tree Alternative Path), the target of the current study. Besides the variations observed among the four HTAP deployments of this classical algorithm (detailed in the rest of the paper), other factors such as energy efficiency and network lifetime may be affected by node displacements in the HTAP phases (when routes change because dead nodes become isolated from the sink). This motivates a new algorithm, Hierarchical Tree Congestion degree with Speed sending and Sum costs link Quality (HTCSLQ), which reduces this undesirable problem and lowers energy consumption by choosing the lowest congestion degree, the optimal sending speed and the best sum-of-costs link quality values from source to destination.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_17-HTCSLQ_Hierarchical_Tree_Congestion_Degree_with_Speed_Sending.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Efficiency – Optimized Automaton Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060816</link>
        <id>10.14569/IJACSA.2015.060816</id>
        <doi>10.14569/IJACSA.2015.060816</doi>
        <lastModDate>2015-08-31T13:10:11.2370000+00:00</lastModDate>
        
        <creator>K. Thiagarajan</creator>
        
        <creator>N. Subashini</creator>
        
        <creator>M. S. Muthuraman</creator>
        
        <subject>Automaton; Network; Efficiency; Characterization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>A Sperner&#39;s grid is thought of as a finite-state system, in which the model gives rise to an optimal network through characterization of paths. The automaton graphs of the various states give rise to different groomable lightpaths in the network.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_16-Network_Efficiency_Optimized_Automaton_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal for Scrambled Method based on NTRU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060815</link>
        <id>10.14569/IJACSA.2015.060815</id>
        <doi>10.14569/IJACSA.2015.060815</doi>
        <lastModDate>2015-08-31T13:10:11.2200000+00:00</lastModDate>
        
        <creator>Ahmed Tariq Sadiq</creator>
        
        <creator>Najlaa Mohammad Hussein</creator>
        
        <creator>Suha Abdul Raheem Khoja</creator>
        
        <subject>NTRU; public key; cipher; sound; scramble; segment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Scrambling is widely used to protect the security of data files such as text, image, video or audio files; however, it is not the most efficient method to protect the security of the data files. This article uses NTRU public key cryptosystem to increase the robustness of scrambling of sound files. In this work, we convert the sound file into text, and then scramble it in the following way: first, we encrypt the header of the sound file then, scramble the data of the file after the header in three stages. In each stage we scramble the data of the sound file and keep the original order of data in an array then, the three arrays are encrypted by the sender and sent with the encrypted header to the receiver in one file, while the scrambled data of the sound file is sent to the receiver in another file. We have tested the proposed method on several sound files; the results show that the time of encryption and decryption is reduced to approximately one-third, or less, compared to encrypting the file using NTRU.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_15-Proposal_for_Scrambled_Method_based_on_NTRU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trust: A Requirement for Cloud Technology Adoption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060814</link>
        <id>10.14569/IJACSA.2015.060814</id>
        <doi>10.14569/IJACSA.2015.060814</doi>
        <lastModDate>2015-08-31T13:10:11.2070000+00:00</lastModDate>
        
        <creator>Akinwale O. Akinwunmi</creator>
        
        <creator>Emmanuel A. Olajubu</creator>
        
        <creator>G. Adesola Aderounmu</creator>
        
        <subject>Cloud; User; Adoption; Trust; Bayesian Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Cloud computing is a recent model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. Studies have shown that cloud computing has the potential to benefit establishments, industries, and national and international economies. Despite the enormous benefits cloud computing technology can offer, several issues make intended users hesitate to adopt it. Users need assurance of the safety and reliability of the technology while using it; this is needed to build confidence in the technology and reduce anxiety. This research investigates the effect of trust on the adoption of the technology by formulating a trust model based on the Expectancy Disconfirmation Theory model and a Bayesian network. A simulation experiment was carried out to determine the significance of trust in the adoption of cloud technology.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_14-Trust_A_Requirement_for_Cloud_Technology_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Understanding Social Network Usage: Impact of Co-Presence, Intimacy, and Immediacy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060813</link>
        <id>10.14569/IJACSA.2015.060813</id>
        <doi>10.14569/IJACSA.2015.060813</doi>
        <lastModDate>2015-08-31T13:10:11.1900000+00:00</lastModDate>
        
        <creator>Waleed Al-Ghaith</creator>
        
        <subject>Social presence theory; Social networking sites; SNS; Factors; Adoption; Usage; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This study examines individuals’ intentions and behaviour on Social Networking Sites (SNSs). The study’s proposed model asserts that “Co-presence”, “Intimacy”, “Immediacy”, “Perceived Enjoyment”, and “Perceived ease of use” form individuals&#39; “Attitude” towards the “behavioral intention” to use SNSs. The results support all formulated hypotheses. The proposed model explains 69% of individuals’ attitude or feelings towards adopting SNSs, 59% of the variance in “Behavioral Intention”, and 54% of the variance in “Usage Behaviour”. The present study has shown the importance of social presence factors, namely co-presence, intimacy, and immediacy, in explaining individuals’ intentions and behavior. The findings confirm that social presence positively influences SNS usage indirectly through user attitude, and that the three factors co-presence, intimacy, and immediacy frame the construct of social presence. Thus, this study is the first empirical effort to examine the impact of co-presence, intimacy, and immediacy in determining intention or behaviour.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_13-Understanding_Social_Network_Usage_Impact_of_Co_Presence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Peer Selection Algorithm for Transmission Scheduling in P2P-VOD Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060812</link>
        <id>10.14569/IJACSA.2015.060812</id>
        <doi>10.14569/IJACSA.2015.060812</doi>
        <lastModDate>2015-08-31T13:10:11.1600000+00:00</lastModDate>
        
        <creator>Hatem Fetoh</creator>
        
        <creator>Waleed M. Bahgat</creator>
        
        <creator>Ahmed Atwan</creator>
        
        <subject>video-on-demand; segment transmission; chunk miss ratio; initial playback delay; peer-to-peer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Video transmission in peer-to-peer video-on-demand faces some challenges, including long transmission delay and poor quality of service. Peer selection plays an important role in enhancing transmission efficiency; for this reason, a peer selection algorithm is proposed to overcome these challenges. The proposed algorithm consists of four steps. First, the peers exchange their buffer maps with other peers. Second, the requested segments are ordered according to their priorities. Third, neighbors of the receiver are evaluated by efficiency estimation. Finally, the efficient sender list is applied to relieve overloading and bottlenecks on the most efficient sender. A simulation is introduced to evaluate the performance of the proposed algorithm against a peer selection algorithm with the context-aware adaptive (CAA) data scheduling algorithm. The results show that the proposed algorithm reduces initial buffering delay and achieves higher throughput than the CAA algorithm.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_12-A_Proposed_Peer_Selection_Algorithm_for_Transmission_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Grid Testbed using SCADA Software and Xbee Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060811</link>
        <id>10.14569/IJACSA.2015.060811</id>
        <doi>10.14569/IJACSA.2015.060811</doi>
        <lastModDate>2015-08-31T13:10:11.1270000+00:00</lastModDate>
        
        <creator>Aryuanto Soetedjo</creator>
        
        <creator>Abraham Lomi</creator>
        
        <creator>Yusuf Ismail Nakhoda</creator>
        
        <subject>SCADA; Smart Grid; wireless communication; Xbee DigiMesh; testbed</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This paper presents the development of a Smart Grid testbed using SCADA software and Xbee wireless communication. The proposed testbed combines both software simulation and hardware simulation. The Winlog SCADA software is employed to implement the algorithm in the Smart Grid system, and Xbee wireless communication is employed for communication between the nodes and the Smart Grid Center. The testbed is useful for testing and verifying algorithms developed for the Smart Grid system. Using the hardware testbed, more realistic simulations can be performed, while using the software testbed, complex models and algorithms can be implemented easily. The experimental results show that the proposed testbed works properly in simulating the continuous supply algorithm implemented in the Smart Grid system.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_11-Smart_Grid_Testbed_using_SCADA_Software_and_Xbee_Wireless_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Increase Efficiency of SURF using RGB Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060810</link>
        <id>10.14569/IJACSA.2015.060810</id>
        <doi>10.14569/IJACSA.2015.060810</doi>
        <lastModDate>2015-08-31T13:10:11.1130000+00:00</lastModDate>
        
        <creator>M. Wafy</creator>
        
        <creator>A. M. M. Madbouly</creator>
        
        <subject>SURF; Features; Local Features; Color Space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>SURF is one of the most robust local invariant feature descriptors. SURF is implemented mainly for gray images. However, color carries important information for object description and matching tasks, as is evident in the human vision system. Many objects can go unmatched if their color content is ignored. To overcome this drawback, this paper proposes a method, CSURF (Color SURF), that combines features of the Red, Green, and Blue layers to detect color objects. It modifies the matching process of SURF to be more efficient with the color space. Experimental results show that CSURF is more precise than traditional SURF and that CSURF is invariant to the RGB color space.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_10-Increase_Efficiency_of_SURF_using_RGB_Color_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation and Improvement of Procurement Process with Data Analytics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060809</link>
        <id>10.14569/IJACSA.2015.060809</id>
        <doi>10.14569/IJACSA.2015.060809</doi>
        <lastModDate>2015-08-31T13:10:11.0800000+00:00</lastModDate>
        
        <creator>Melvin Tan H.C.</creator>
        
        <creator>Wee-Leong Lee</creator>
        
        <subject>procurement; text mining; clustering; data analytics; fraud detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Analytics can be applied in procurement to benefit organizations beyond just prevention and detection of fraud. This study aims to demonstrate how advanced data mining techniques such as text mining and cluster analysis can be used to improve visibility of procurement patterns and provide decision-makers with insight to develop more efficient sourcing strategies, in terms of cost and effort. A case study of an organization’s effort to improve its procurement process is presented in this paper. The findings from this study suggest that opportunities exist for organizations to aggregate common goods and services among the purchases made under and across different prescribed procurement approaches. It also suggests that these opportunities are more prevalent in purchases made by individual project teams rather than across multiple project teams.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_9-Evaluation_and_Improvement_of_Procurement_Process_with_Data_Analytics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptic Mining in Light of Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060808</link>
        <id>10.14569/IJACSA.2015.060808</id>
        <doi>10.14569/IJACSA.2015.060808</doi>
        <lastModDate>2015-08-31T13:10:11.0670000+00:00</lastModDate>
        
        <creator>Shaligram Prajapat</creator>
        
        <creator>Aditi Thakur</creator>
        
        <creator>Kajol Maheshwari</creator>
        
        <creator>Ramjeevan Singh Thakur</creator>
        
        <subject>Cipher text; Cryptic analysis; Encryption algorithm; Artificial Intelligence (AI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>“The analysis of cryptic text is a hard problem”, and there is no fixed algorithm for generating plain-text from cipher text. Human brains do this intelligently. The intelligent cryptic analysis process needs learning algorithms, the co-operative effort of cryptanalysts, and a knowledge-based inference engine. The information in this knowledge base will be useful for mining data (plain-text, key, or cipher text/plain-text relationships), classifying cipher text based on enciphering algorithms, key length, or other desirable parameters, clustering cipher text based on similarity, and extracting association rules for identifying weaknesses of cryptic algorithms. This categorization will be useful for placing a given cipher text into a specific category or assessing the difficulty level of the cipher text-to-plain-text conversion process. This paper first elucidates the cipher text-plain text process and then utilizes it to create a framework for an AI-enabled cryptanalysis system. The process demonstrated in this paper attempts to analyze captured cipher text from scratch. The system design elements presented in the paper give hints and guidelines for the development of an AI-enabled cryptic analysis tool.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_8-Cryptic_Mining_in_Light_of_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Ball on Beam Stabilizing Platform with Inertial Sensors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060807</link>
        <id>10.14569/IJACSA.2015.060807</id>
        <doi>10.14569/IJACSA.2015.060807</doi>
        <lastModDate>2015-08-31T13:10:11.0330000+00:00</lastModDate>
        
        <creator>Ali Shahbaz Haider</creator>
        
        <creator>Muhammad Nasir</creator>
        
        <creator>Basit Safir</creator>
        
        <creator>Farhan Farooq</creator>
        
        <subject>stabilizing platform; ball on beam; nonlinear dynamics; inertial sensors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This research paper presents the dynamic modeling of an inertial sensor based one degree of freedom (1-DoF) stabilizing platform. The plant is a ball on a pivoted beam, for which a nonlinear model is derived. The ball position on the beam is actuated by a DC motor through a two-arm and one-beam structure, where the arms and beam are linked by pivoted joints. Nonlinear geometrical relations for the mechanical structure are derived, followed by physically realizable approximations. These relations are used in the system dynamic equations, followed by linearization, resulting in a linear continuous-time differential equation model, which is then converted to state space. The final model is simulated, and the system dynamics are elaborated through analysis of the simulation responses.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_7-A_Novel_Ball_on_Beam_Stabilizing_Platform_with_Inertial_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosis of Wind Energy System Faults Part I: Modeling of the Squirrel Cage Induction Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060806</link>
        <id>10.14569/IJACSA.2015.060806</id>
        <doi>10.14569/IJACSA.2015.060806</doi>
        <lastModDate>2015-08-31T13:10:11.0030000+00:00</lastModDate>
        
        <creator>Lahc&#232;ne Noureddine</creator>
        
        <creator>Omar Touhami</creator>
        
        <subject>Induction Generator; Rotor Broken Bars; Faults Diagnosis; MCSA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Generating electrical power from wind energy is becoming increasingly important throughout the world. This fast development has attracted many researchers and electrical engineers to work in this field. The authors develop a dynamic model of the squirrel cage induction generator, which usually exists in wind energy systems, for the diagnosis of broken rotor bar defects, using an approach based on magnetically coupled multiple circuits. The generalized model is established on the basis of mathematical recurrences. The winding function theory is used to determine the rotor resistances and inductances in the case of n broken bars. Simulation results, in Part II of this paper, confirm the validity of the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_6-Diagnosis_of_Wind_Energy_System_Faults.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Heart Rate Variability by Applying Nonlinear Methods with Different Approaches for Graphical Representation of Results</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060805</link>
        <id>10.14569/IJACSA.2015.060805</id>
        <doi>10.14569/IJACSA.2015.060805</doi>
        <lastModDate>2015-08-31T13:10:10.9870000+00:00</lastModDate>
        
        <creator>Evgeniya Gospodinova</creator>
        
        <creator>Mitko Gospodinov</creator>
        
        <creator>Ivan Domuschiev</creator>
        
        <creator>Nilianjan Dey</creator>
        
        <creator>Amira S. Ashour</creator>
        
        <creator>Dimitra Sifaki-Pistolla</creator>
        
        <subject>Heart Rate Variability (HRV); ECG signal; Holter signal; nonlinear graphical methods</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>There is an open discussion over the nonlinear properties of Heart Rate Variability (HRV) in most scientific studies nowadays. HRV analysis is a non-invasive and effective tool that reflects the autonomic nervous system regulation of the heart. The current study presents the results of HRV analysis based on 24-hour Holter ECG signals of healthy and unhealthy subjects. Analysis of heart intervals is performed with original algorithms and software, developed by the authors, to quantify the irregularity of the heart rate. The main aim is the formation of a parametric estimate of patients’ health status, based on mathematical methods applied to cardiac physiology. The obtained results show that the analysis of Holter recordings by nonlinear methods may be appropriate for the diagnosis, forecasting, and prevention of pathological cardiac statuses. Different approaches to graphical representation and visualization of these results are used in order to verify this.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_5-Analysis_of_Heart_Rate_Variability_by_Applying_Nonlinear_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flow-Based Specification of Time Design Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060804</link>
        <id>10.14569/IJACSA.2015.060804</id>
        <doi>10.14569/IJACSA.2015.060804</doi>
        <lastModDate>2015-08-31T13:10:10.9570000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>design requirements; conceptual model; time constraints; model-based systems engineering; requirements specification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>This paper focuses on design requirements in real-time systems where information is processed to produce a response within a specified time. Nowadays, computer control applications embedded in chips have grown in significance in many aspects of human life. These systems need a high level of reliability to gain the trust of users, and ensuring correctness in the early stages of the design process is a major challenge. Faulty requirements lead to errors in the final product that have to be fixed later, often at a high cost. A crucial step in this process is modeling the intended system. This paper explores the potential of flow-based modeling in expressing design requirements in real-time systems that include time constraints and synchronization. The main problem emphasized is how to represent time. The objective is to assist real-time system requirements engineers, at an early stage of development, in expressing the timing behavior of the developed system. Several known examples are modeled, and the results point to the viability of the flow-based representation in comparison with such time specifications as state-based and line-based modeling.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_4-Flow_Based_Specification_of_Time_Design_Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Age Estimation based on Decision Level Fusion of AAM, LBP and Gabor Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060803</link>
        <id>10.14569/IJACSA.2015.060803</id>
        <doi>10.14569/IJACSA.2015.060803</doi>
        <lastModDate>2015-08-31T13:10:10.9400000+00:00</lastModDate>
        
        <creator>Asuman G&#252;nay</creator>
        
        <creator>Vasif V. Nabiyev</creator>
        
        <subject>AAM; LBP; Gabor filters; Regression; Fusion; Age estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>In this paper a new hierarchical age estimation method based on decision level fusion of global and local features is proposed. The shape and appearance information of human faces which are extracted with active appearance models (AAM) are used as global facial features. The local facial features are the wrinkle features extracted with Gabor filters and skin features extracted with local binary patterns (LBP). Then feature classification is performed using a hierarchical classifier which is the combination of an age group classification and detailed age estimation. In the age group classification phase, three distinct support vector machines (SVM) classifiers are trained using each feature vector. Then decision level fusion is performed to combine the results of these classifiers.  The detailed age of the classified image is then estimated in that age group, using the aging functions modeled with global and local features, separately. Aging functions are modeled with multiple linear regression. To make a final decision, the results of these aging functions are also fused in decision level. Experimental results on the FG-NET and PAL aging databases have shown that the age estimation accuracy of the proposed method is better than the previous methods.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_3-Facial_Age_Estimation_based_on_Decision_Level_Fusion_of_AAM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finite Element Analysis based Optimization of Magnetic Adhesion Module for Concrete Wall Climbing Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060802</link>
        <id>10.14569/IJACSA.2015.060802</id>
        <doi>10.14569/IJACSA.2015.060802</doi>
        <lastModDate>2015-08-31T13:10:10.9230000+00:00</lastModDate>
        
        <creator>MD Omar faruq Howlader</creator>
        
        <creator>Traiq Pervez Sattar</creator>
        
        <subject>Finite Element Analysis (FEA); Magnetic Adhesion System; Non Destructive Testing (NDT); Wall Climbing Robot</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>Wall climbing robots can provide easier accessibility to tall structures for Non Destructive Testing (NDT) and improve the working environments of human operators. However, existing adhesion mechanisms for climbing robots, such as vortex and electromagnet based systems, are still at the development stage and offer no feasible adhesion mechanism. As a result, few practical products have been developed for reinforced concrete surfaces, though wall-climbing robots have been researched for many years. This paper proposes a novel magnetic adhesion mechanism for a wall-climbing robot for reinforced concrete surfaces. Mechanical design parameters such as the distance between magnets, the yoke thickness, and magnet arrangements have been investigated by Finite Element Analysis (FEA). The adhesion module can be attached under the chassis of a prototype robot. The magnetic flux can penetrate a maximum concrete cover of 30 mm and attain an adhesion force of 121.26 N. The prototype provides a high force-to-weight ratio compared to other reported permanent magnet based robotic systems. Both experimental and simulation results prove that the magnetic adhesion mechanism can generate efficient adhesion force for the climbing robot to operate on vertical reinforced concrete structures.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_2-Finite_Element_Analysis_based_Optimization_of_Magnetic_Adhesion_Module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Competitive Sparse Representation Classification for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060801</link>
        <id>10.14569/IJACSA.2015.060801</id>
        <doi>10.14569/IJACSA.2015.060801</doi>
        <lastModDate>2015-08-31T13:10:10.8630000+00:00</lastModDate>
        
        <creator>Ying Liu</creator>
        
        <creator>Jian-Xun Mi</creator>
        
        <creator>Cong Li</creator>
        
        <creator>Chao Li</creator>
        
        <subject>face recognition; collaborative representation; sparse representation; competitive representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(8), 2015</description>
        <description>A method named competitive sparse representation classification (CSRC) is proposed for face recognition in this paper. CSRC introduces a lowest-competitive deletion mechanism, which removes the least competitive sample, based on the competitive ability of the training samples for representing a probe, over multiple rounds of collaborative linear representation. In other words, in each round of competition, whether a training sample is retained in the next round depends on its ability to represent the input probe. Because the number of training samples used for representing the probe decreases in CSRC, the coding vector is transformed into a lower-dimensional space compared with the initial coding vector. The sparse representation then makes CSRC discriminative for classifying the probe. In addition, due to the fast algorithm, the FR system has less computational cost. To verify the validity of CSRC, we conduct a series of experiments on the AR, Extended YB, and ORL databases, respectively.</description>
        <description>http://thesai.org/Downloads/Volume6No8/Paper_1-Competitive_Sparse_Representation_Classification_for_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Appropriate Tealeaf Harvest Timing Determination Referring Fiber Content in Tealeaf Derived from Ground based Nir Camera Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040804</link>
        <id>10.14569/IJARAI.2015.040804</id>
        <doi>10.14569/IJARAI.2015.040804</doi>
        <lastModDate>2015-08-10T13:15:29.5070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yoshihiko Sasaki</creator>
        
        <creator>Shihomi Kasuya</creator>
        
        <creator>Hideto Matusura</creator>
        
        <subject>Tealeaves; Nitrogen content; Amino acid; Leaf volume; NIR images; Fiber content; Theanine; Amide acid; Regressive analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>A method for determining the most appropriate tealeaf harvest timing with reference to the fiber content in tealeaves, which can be estimated with ground-based Near Infrared (NIR) camera images, is proposed. In the proposed method, NIR camera images of tealeaves are used to estimate the nitrogen content and fiber content in tealeaves. The nitrogen content is highly correlated with the theanine (amide acid) content in tealeaves, and theanine-rich tealeaves taste good. Meanwhile, the age of tealeaves depends on fiber content: as tealeaves age, their fiber content increases, and the tealeaf shape volume also increases with increasing fiber content. Fiber-rich tealeaves generally do not taste as good. There is a negative correlation between fiber content and the NIR reflectance of tealeaves. Therefore, tealeaf quality in terms of nitrogen and fiber contents can be estimated with NIR camera images. Also, the shape volume of tealeaves is highly correlated with the NIR reflectance of the tealeaf surface. Therefore, not only tealeaf quality but also harvest amount can be estimated with NIR camera images. Experimental results show that the proposed method works well for estimating the appropriate tealeaf harvest timing from the fiber content in the tealeaves concerned, estimated with NIR camera images.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_4-Appropriate_Tealeaf_Harvest_Timing_Determination_Referring_Fiber_Content.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Locality of Chlorophyll-A Distribution in the Intensive Study Area of the Ariake Sea, Japan in Winter Seasons based on Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040803</link>
        <id>10.14569/IJARAI.2015.040803</id>
        <doi>10.14569/IJARAI.2015.040803</doi>
        <lastModDate>2015-08-10T13:15:29.4900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>chlorophyl-a concentration; red tide; diatom; solar irradiance; ocean winds; tidal effect</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>The mechanism of chlorophyll-a appearance and its locality in the intensive study area of the Ariake Sea, Japan in winter seasons is clarified by using remote sensing satellite data. Through experiments with Terra and AQUA MODIS data-derived chlorophyll-a concentration and truth data of chlorophyll-a concentration, together with meteorological data and tidal data acquired over 6 years (winter 2010 to winter 2015), it is found that there is a strong correlation between chlorophyll-a concentration and tidal height changes. A relation between ocean wind speed and chlorophyll-a concentration is also found, and there is a relatively high correlation between daily sunshine duration and chlorophyll-a concentration. Furthermore, it is found that there are different sources of chlorophyll-a in three different areas of the Ariake Sea: the back of the sea, the Isahaya bay area, and the Kumamoto offshore area.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_3-Locality_of_Chlorophyll_Distribution_in_the_Intensive_Study_Area_of_the_Ariake_Sea.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Prediction of Crimes by Clustering and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040802</link>
        <id>10.14569/IJARAI.2015.040802</id>
        <doi>10.14569/IJARAI.2015.040802</doi>
        <lastModDate>2015-08-10T13:15:29.4600000+00:00</lastModDate>
        
        <creator>Rasoul Kiani</creator>
        
        <creator>Siamak Mahdavi</creator>
        
        <creator>Amin Keshavarzi</creator>
        
        <subject>crime; clustering; classification; genetic algorithm; weighting; rapidminer</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>Crimes, when they occur frequently in a society, inevitably influence organizations and institutions. It therefore seems necessary to study the reasons for, factors in, and relations between occurrences of different crimes, and to find the most appropriate ways to control and prevent them. The main objective of this paper is to classify clustered crimes based on their occurrence frequency during different years. Data mining is used extensively for the analysis, investigation and discovery of patterns in the occurrence of different crimes. We applied a theoretical model based on data mining techniques such as clustering and classification to a real crime dataset recorded by police in England and Wales from 1990 to 2011. We assigned weights to the features in order to improve the quality of the model and remove low-value features. A Genetic Algorithm (GA) is used to optimize the parameters of the Outlier Detection operator in the RapidMiner tool.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_2-Analysis_and_Prediction_of_Crimes_by_Clustering_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Compressed PCA Models for Real-Time Image Registration in Augmented Reality Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040801</link>
        <id>10.14569/IJARAI.2015.040801</id>
        <doi>10.14569/IJARAI.2015.040801</doi>
        <lastModDate>2015-08-10T13:15:29.3970000+00:00</lastModDate>
        
        <creator>Christopher Cooper</creator>
        
        <creator>Kent Wise</creator>
        
        <creator>John Cooper</creator>
        
        <creator>Makarand Deo</creator>
        
        <subject>Image Registration; Principal Component Analysis; Wavelet Compression; Augmented Reality; Image Classification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(8), 2015</description>
        <description>The use of augmented reality (AR) has shown great promise in enhancing medical training and diagnostics via interactive simulations. This paper presents a novel method to perform accurate and inexpensive image registration (IR) utilizing a pre-constructed database of reference objects in conjunction with a principal component analysis (PCA) model. In addition, a wavelet compression algorithm is utilized to enhance the speed of the registration process. The proposed method is used to perform registration of a virtual 3D heart model based on tracking of an asymmetric reference object. The results indicate that the accuracy of the method is dependent upon the extent of asymmetry of the reference object, which requires the inclusion of higher-order principal components in the model. A key advantage of the presented IR technique is the absence of the restart mechanism required by existing approaches, while allowing up to six orders of magnitude compression of the modeled image space. The results demonstrate that the method is computationally inexpensive and thus suitable for real-time augmented reality implementation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No8/Paper_1-Wavelet_Compressed_PCA_Models_for_Real_Time_Image_Registration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Mining: Review and New Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060732</link>
        <id>10.14569/IJACSA.2015.060732</id>
        <doi>10.14569/IJACSA.2015.060732</doi>
        <lastModDate>2015-07-30T17:22:15.0600000+00:00</lastModDate>
        
        <creator>Barbora Zahradnikova</creator>
        
        <creator>Sona Duchovicova</creator>
        
        <creator>Peter Schreiber</creator>
        
        <subject>image mining; image classification; indexing; image retrieval;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Alongside new technology, a huge volume of data in various forms has become available to people. Image data represents a keystone of many research areas including medicine, forensic criminology, robotics and industrial automation, meteorology and geography, as well as education. Therefore, obtaining specific information from image databases has become of great importance. Images, as a special category of data, differ from text data both in their nature and in how they are stored and retrieved. Image Mining as a research field is an interdisciplinary area combining methodologies and knowledge of many branches including data mining, computer vision, image processing, image retrieval, statistics, recognition, machine learning, artificial intelligence, etc. This review researches current image mining approaches and techniques with the aim of widening the possibilities of facial image analysis. The paper reviews the current state of IM, describes challenges, and identifies directions for future research in the field.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_32-Image_Mining_Review_and_New_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting SCADA vulnerabilities using a Human Interface Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060731</link>
        <id>10.14569/IJACSA.2015.060731</id>
        <doi>10.14569/IJACSA.2015.060731</doi>
        <lastModDate>2015-07-30T17:22:15.0430000+00:00</lastModDate>
        
        <creator>Grigoris Tzokatziou</creator>
        
        <creator>Helge Janicke</creator>
        
        <creator>Leandros A. Maglaras</creator>
        
        <creator>Ying He</creator>
        
        <subject>SCADA; Cyber Security; HID; PLC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>SCADA (Supervisory Control and Data Acquisition) systems are used to control and monitor critical national infrastructure functions like electricity, gas, water and railways. Field devices such as PLCs (Programmable Logic Controllers) are among the most critical components of a control system. Cyber-attacks usually target valuable infrastructure assets, taking advantage of architectural/technical vulnerabilities or even weaknesses in the defense systems. Even though novel intrusion detection systems are being implemented and used to defend against cyber-attacks, certain vulnerabilities of SCADA systems can still be exploited. In this article we present an attack scenario based on a Human Interface Device (HID), which is used as a communication/exploitation tool to compromise SCADA systems. The attack, which consists of a normal series of commands sent from the HID to the PLC, cannot be detected by current intrusion detection mechanisms. Finally, we provide possible countermeasures and defense mechanisms against this kind of cyber-attack.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_31-Exploiting_SCADA_vulnerabilities_using_a_Human_Interface_Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost-Effective, Cognitive Undersea Network for Timely and Reliable Near-Field Tsunami Warning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060730</link>
        <id>10.14569/IJACSA.2015.060730</id>
        <doi>10.14569/IJACSA.2015.060730</doi>
        <lastModDate>2015-07-30T17:22:15.0100000+00:00</lastModDate>
        
        <creator>X. Xerandy</creator>
        
        <creator>Taieb Znati</creator>
        
        <creator>Louise K Comfort</creator>
        
        <subject>near field tsunami; undersea; sensor; fiber optic; detection; optimization; cost; reliable; timeliness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>The focus of this paper is on developing an early detection and warning system for near-field tsunamis to mitigate their impact on communities at risk. This is a challenging task, given the stringent reliability and timeliness requirements that the development of such an infrastructure entails. To address this challenge, we propose a hybrid infrastructure, which combines cheap but unreliable undersea sensors with expensive but highly reliable fiber optic links, to meet the stringent constraints of this warning system. The derivation of a low-cost tsunami detection and warning infrastructure is cast as an optimization problem, and a heuristic approach is used to determine the minimum-cost network configuration that meets the targeted reliability and timeliness requirements. To capture the intrinsic properties of the environment and accurately model the main characteristics of undersea sound wave propagation, the proposed optimization framework incorporates the Bellhop propagation model and accounts for significant environmental factors, including noise, varying undersea sound speed and the sea floor profile. We apply our approach to a region prone to near-field tsunami threats to derive a cost-effective undersea infrastructure for detection and warning. For this case study, the results derived from the proposed framework show that a feasible infrastructure, operating at a carrier frequency of 12 kHz, can be deployed in calm, moderate and severe environments and meet the stringent reliability and timeliness constraints, namely a 20-minute warning time and 99% data communication reliability, required to mitigate the impact of a near-field tsunami. The proposed framework provides useful insights and guidelines toward the development of a realistic detection and warning system for near-field tsunamis.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_30-Cost_Effective_Cognitive_Undersea_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geographic Routing Using Logical Levels in Wireless Sensor Networks for Sensor Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060729</link>
        <id>10.14569/IJACSA.2015.060729</id>
        <doi>10.14569/IJACSA.2015.060729</doi>
        <lastModDate>2015-07-30T17:22:14.9970000+00:00</lastModDate>
        
        <creator>Yassine SABRI</creator>
        
        <creator>Najib EL KAMOUN</creator>
        
        <subject>WSN; Routing; Ad hoc; Localization; Scalability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>In this paper we propose an improvement to the GRPW algorithm for wireless sensor networks, called GRPW-M, which collects data in a wireless sensor network (WSN) using mobile nodes. The performance of the GRPW algorithm depends heavily on the sensor nodes being immobile, an assumption that can be hard to guarantee. For that reason, we propose a modified algorithm that is able to adapt to the current situation in a network in which the sensor nodes are considered mobile. The goal of the proposed algorithm is to decrease the reconstruction cost and increase the data delivery ratio. By comparing the GRPW-M protocol with the GRPW protocol in simulation, this paper demonstrates that the adjustment process executed by GRPW-M does in fact decrease the reconstruction cost and increase the data delivery ratio. Simulations were performed on GRPW as well as on the proposed routing algorithm. The efficiency factors evaluated were the total number of transmissions in the network and the total delivery rate. In general, the proposed routing algorithm performs reasonably well for a large number of network setups.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_29-Geographic_Routing_Using_Logical_Levels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New 2-D Adaptive K-Best Sphere Detection for Relay Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060728</link>
        <id>10.14569/IJACSA.2015.060728</id>
        <doi>10.14569/IJACSA.2015.060728</doi>
        <lastModDate>2015-07-30T17:22:14.9800000+00:00</lastModDate>
        
        <creator>Ahmad El-Banna</creator>
        
        <subject>Sphere Detection; K-Best; Relay; MIMO; MISO; Cooperative Communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Relay nodes are the main players in cooperative networks, used to improve system performance and to offer virtual multiple antennas to devices with limited antennas in a multi-user environment. However, employing relaying strategies requires considerable resources at the relay side and places a large burden on the relay helping node, especially when considering power consumption. Partial detection at the relay is one strategy that reduces the computational load and power consumption. In this paper, we propose a new 2-D Adaptive K-Best Sphere Decoder (2-D AKBSD) for partial detection to be used in MISO relays in cooperative networks. Simulation results show that 2-D AKBSD is capable of improving the system performance while also reducing its complexity.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_28-New_2_D_Adaptive_K_Best_Sphere_Detection_for_Relay_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Artificial Intelligence in Performance Analysis of Load Frequency Control in Thermal-Wind-Hydro Power Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060727</link>
        <id>10.14569/IJACSA.2015.060727</id>
        <doi>10.14569/IJACSA.2015.060727</doi>
        <lastModDate>2015-07-30T17:22:14.9500000+00:00</lastModDate>
        
        <creator>K. Jagatheesan</creator>
        
        <creator>B. Anand</creator>
        
        <creator>Nilanjan Dey</creator>
        
        <creator>Amira S. Ashour</creator>
        
        <subject>Cost curve; Interconnected Power system; Load Frequency Control (LFC); Objective Function;  Performance Index; Proportional-Integral controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>In this article, Load Frequency Control (LFC) of a three-area unequal interconnected thermal, wind and hydro power generating system is developed with a Proportional-Integral (PI) controller in the MATLAB/SIMULINK environment. The PI controller gain values are optimized using the trial-and-error method with two different objective functions, namely the Integral Time Square Error (ITSE) and the Integral Time Absolute Error (ITAE). The performance of the ITAE-based PI controller is compared with that of the ITSE-optimized PI controller. Analysis reveals that the ITSE-optimized controller gives superior performance compared to the ITAE-based controller during a one percent Step Load Perturbation (1% SLP) in area 1 (the thermal area). In addition, a Proportional-Integral-Derivative (PID) controller is employed to further improve the power system performance. The controller gain values are optimized using an Artificial Intelligence technique, the Ant Colony Optimization (ACO) algorithm. The simulation compares the ACO-PID controller to the conventional PI controller. The results prove that the proposed ACO-PID controller provides superior control performance, as the system equipped with the ACO-PID controller yields minimum overshoot, undershoot and settling time compared to the conventional PI-controlled system.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_27-Artificial_Intelligence_in_Performance_Analysis_of_Load_Frequency_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on the UHF RFID Channel Coding Technology based on Simulink</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060726</link>
        <id>10.14569/IJACSA.2015.060726</id>
        <doi>10.14569/IJACSA.2015.060726</doi>
        <lastModDate>2015-07-30T17:22:14.9170000+00:00</lastModDate>
        
        <creator>Changzhi Wang</creator>
        
        <creator>Zhicai Shi</creator>
        
        <creator>Dai Jian</creator>
        
        <creator>Li Meng</creator>
        
        <subject>UHF RFID; channel coding; convolution code; bit error rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>In this letter, we propose a new UHF RFID channel coding method, which improves the reliability of the system by using the excellent error-correcting performance of the convolutional code. We introduce the coding principle of the convolutional code and compare it with the cyclic codes used in the past. Finally, we analyze the error-correcting performance of convolutional codes. The analysis shows that the transmission rate of the system is guaranteed and that the bit error rate can be reduced by 2.561%.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_26-Research_on_the_UHF_RFID_Channel_Coding_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Brain Mr Image Segmentation using Truncated Skew Gaussian Mixture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060725</link>
        <id>10.14569/IJACSA.2015.060725</id>
        <doi>10.14569/IJACSA.2015.060725</doi>
        <lastModDate>2015-07-30T17:22:14.9030000+00:00</lastModDate>
        
        <creator>Nagesh Vadaparthi</creator>
        
        <creator>Srinivas Yerramalle</creator>
        
        <creator>Suresh Varma Penumatsa</creator>
        
        <subject>Truncated Skew Gaussian Mixture model; Segmentation; Image quality metrics; Segmentation metrics; Fuzzy C-Means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>A novel approach for segmenting MRI brain images based on a Finite Truncated Skew Gaussian Mixture Model using the Fuzzy C-Means algorithm is proposed. The methodology is evaluated on benchmark images. The obtained results are compared with various other techniques, and the performance evaluation is carried out using image quality metrics and segmentation metrics.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_25-An_Improved_Brain_Mr_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing for Improved Quality of Service in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060724</link>
        <id>10.14569/IJACSA.2015.060724</id>
        <doi>10.14569/IJACSA.2015.060724</doi>
        <lastModDate>2015-07-30T17:22:14.8700000+00:00</lastModDate>
        
        <creator>AMAL ZAOUCH</creator>
        
        <creator>FAOUZIA BENABBOU</creator>
        
        <subject>Cloud Computing; Load Balancing; Cluster; Virtual Machines; Quality of Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Due to advances in technology and the growth of human society, it is necessary to work in an environment that reduces costs, uses resources efficiently, reduces manpower and minimizes the use of space. This led to the emergence of cloud computing technology. Load balancing is one of the central issues in the cloud; it is the process of distributing the load optimally among different servers. A balanced load in the Cloud improves the performance of QoS parameters such as resource utilization, response time, processing time, scalability, throughput, system stability and power consumption. Research in this area has led to the development of algorithms called load balancing algorithms. In this paper, we present a performance analysis of different load balancing algorithms based on different metrics, such as response time and processing time. The main purpose of this article is to help us propose a new algorithm by studying the behavior of the various existing algorithms.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_24-Load_Balancing_for_Improved_Quality_of_Service_in_the_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating on Mobile Ad-Hoc Network to Transfer FTP Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060723</link>
        <id>10.14569/IJACSA.2015.060723</id>
        <doi>10.14569/IJACSA.2015.060723</doi>
        <lastModDate>2015-07-30T17:22:14.8570000+00:00</lastModDate>
        
        <creator>Ako Muhammad Abdullah</creator>
        
        <subject>MANET; AODV; DSR; GRP; FTP Application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>A Mobile Ad-hoc Network (MANET) is a collection of mobile nodes that does not require any infrastructure. Mobile nodes in a MANET operate as routers, and the MANET network topology can change quickly. Because the nodes in the network are mobile, they can move randomly and organize arbitrarily, which generates great complexity in routing traffic from source to destination. To communicate with other nodes, MANET nodes run multiple applications that need different levels of data traffic, and different routing protocols are required for data communication since every node must act as a router. Nowadays, different routing protocols are available for MANETs. MANET protocols designed and implemented at the network layer play vital roles that affect the applications running at the application layer. In this paper, the performance of Ad-hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Geographic Routing Protocol (GRP) is evaluated. The main purpose of this research is to analyze the performance of MANET routing protocols to identify which routing protocol provides the best performance for transferring an FTP application in a high-mobility case under low, medium and high density scenarios. The performance is analyzed with respect to Average End-to-End Delay, Media Access Delay, Network Load, Retransmission Attempts and Throughput. All simulations have been done using OPNET. The results show that GRP gives better performance in End-to-End Delay, Media Access Delay and Retransmission Attempts when varying the network size, and provides the best Throughput in small and medium network sizes. Simulation results verify that AODV gives better Throughput in a large network and lower Network Load in small and medium network sizes compared to GRP. DSR produces a low average Network Load compared to the other protocols. The overall study of the FTP application shows that the performance of these routing protocols differs with varying numbers of nodes and node speeds. The results of this paper provide enough information to identify the best routing protocol for transferring an FTP application over a MANET.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_23-Investigating_on_Mobile_Ad_Hoc_Network_to_Transfer_FTP_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Service Design and Eye Tracking Insight for Designing Smart TV User Interfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060722</link>
        <id>10.14569/IJACSA.2015.060722</id>
        <doi>10.14569/IJACSA.2015.060722</doi>
        <lastModDate>2015-07-30T17:22:14.8230000+00:00</lastModDate>
        
        <creator>Sheng-Ming Wang</creator>
        
        <subject>Smart TV; User Interface Design; Eye Tracking; Design Affordance; Service Design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>This research proposes a process that integrates the service design method and eye tracking insight for designing a Smart TV user interface. The service design method, which is used to guide the combination of quality function deployment (QFD) and the analytic hierarchy process (AHP), is applied to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including effectiveness and efficiency testing data obtained from eye tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results of this research demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces. It can also help relate the design of Smart TV user interfaces to users&#39; behaviors and needs, thereby improving the affordance of the design. Future studies may analyse the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_22-Integratring_Service_Design_and_Eye_Teacking_Insight.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Emergency Preparedness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060721</link>
        <id>10.14569/IJACSA.2015.060721</id>
        <doi>10.14569/IJACSA.2015.060721</doi>
        <lastModDate>2015-07-30T17:22:14.8100000+00:00</lastModDate>
        
        <creator>Aaron Malveaux</creator>
        
        <creator>A. Nicki Washington</creator>
        
        <subject>emergency preparedness; natural disasters; emergencies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Emergency preparedness is a discipline that harnesses technology, citizens, and government agencies to handle and potentially avoid natural disasters and emergencies. In this paper, a survey of the use of information technology, including social media, in emergency preparedness is presented. In addition, the current direction of research is identified, and future trends are forecasted that will lead to more effective and efficient methods of preparing for and responding to disasters.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_21-A_Survey_of_Emergency_Preparedness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of High and Low Rate Protocol-based Attacks on Ethernet Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060720</link>
        <id>10.14569/IJACSA.2015.060720</id>
        <doi>10.14569/IJACSA.2015.060720</doi>
        <lastModDate>2015-07-30T17:22:14.7770000+00:00</lastModDate>
        
        <creator>Mina Malekzadeh</creator>
        
        <creator>M.A. Beiruti</creator>
        
        <creator>M.H. Shahrokh Abadi</creator>
        
        <subject>protocol attacks; OSI layer attacks; UDP attacks; TCP attacks; ICMP attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>The Internet and Web have significantly transformed the world’s communication system. The capability of the Internet to instantly access information at anytime from anywhere has brought benefit for a wide variety of areas including business, government, education, institutions, medical, and entertainment services. However, the Internet has also opened up the possibilities for hackers to exploit flaws and limitations in the target networks to attack and break in without gaining physical access to the target systems. The OSI layer protocol-based attacks are among them. In this work we investigate feasibility as well as severity of the attacks against three common layering protocols including TCP, UDP, and ICMP on Ethernet-based networks in the real world through a testbed. Then a simulation environment is designed to implement the exact same attacks under similar conditions using NS2 network simulator. The testbed results and simulation results are compared with each other to explore the accuracy of the findings and measure the damages the attacks caused to the network.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_20-Assessment_of_High_and_Low_Rate_Protocol_based_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Management System based on Principles of Adaptability and Personalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060719</link>
        <id>10.14569/IJACSA.2015.060719</id>
        <doi>10.14569/IJACSA.2015.060719</doi>
        <lastModDate>2015-07-30T17:22:14.7630000+00:00</lastModDate>
        
        <creator>Dragan &#208;okic</creator>
        
        <creator>Dragana Šarac</creator>
        
        <creator>Dragana Becejski Vujaklija</creator>
        
        <subject>adaptability; personalization; management information system; business process optimization; collaboration; portals; web services; digital identities; electronic document management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Among the most significant assets that contribute to a business system’s competitiveness, the highest value, second only to the human factor, is the information at the company’s disposal, unified in the company’s know-how. Unfortunately, mere awareness of the importance and significance of information is usually not enough; this resource is unfairly neglected and insufficiently used, with the result that information is often unavailable and inadequately protected. This paper presents one possible model of a system for managing information based on the principles of adaptability and personalization, which aims at adequate, fast, efficient and secure access to protected information in the company.
The subject of this manuscript is to explore the possibilities of applying modern information and communication technology in developing an information management system based on the principles of adaptability and personalization. The main part of this system is a portal for intelligent document management. The proposed solution is based on the integration and implementation of services for adaptation in modern document management systems.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_19-Information_Management_System_based_on_Principles_of_Adaptability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using GIS for Retail Location Assessment at Jeddah City</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060718</link>
        <id>10.14569/IJACSA.2015.060718</id>
        <doi>10.14569/IJACSA.2015.060718</doi>
        <lastModDate>2015-07-30T17:22:14.7300000+00:00</lastModDate>
        
        <creator>Abdulkader A Murad</creator>
        
        <subject>Retail Planning; Retail Catchment Area; GIS; Market Penetration; Drive Time</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>GIS software provides a variety of useful tools for site, demographic, and competitive analysis. These tools enable retail and market researchers to find solutions to many retail planning issues. The aim of this paper is to use Geographical Information Systems (GIS) for the retail location assessment of two retail centers, Al-Dawly and Al-Mahmal, located in Jeddah city, Saudi Arabia. The first part of the paper presents a review of retail center classification and of GIS applications in the retail planning field. The second part discusses the outputs of the created application, which include a) defining the retail catchment area, b) building a retail demand profile, and c) analyzing the retail catchment area. The results of this application can be used by retail planners for evaluating retail center locations and for identifying the extent of the retail market.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_18-Using_GIS_for_Retail_Location_Assessment_at_Jeddah_City.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Integrated Architectural Clock Implemented Memory Design Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060717</link>
        <id>10.14569/IJACSA.2015.060717</id>
        <doi>10.14569/IJACSA.2015.060717</doi>
        <lastModDate>2015-07-30T17:22:14.7000000+00:00</lastModDate>
        
        <creator>Ravi Khatwal</creator>
        
        <creator>Manoj Kumar Jain</creator>
        
        <subject>SRAM Architecture; Simulation; Micro wind; Xilinx; Clock Implemented Memory Design; RTL Design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Low power consumption and custom memory design have recently become major issues for embedded designers. The Micro Wind and Xilinx simulators implement the SRAM design architecture and perform efficient simulation, demonstrating the high performance and low power consumption of the SRAM design. SRAM efficiency is analyzed with the 6-T architecture design and a row/column-based architectural design. We have analyzed a clock-implemented memory design and simulated it with a specific application. We have implemented a clock-based SRAM architecture that improves the internal clock efficiency of SRAM. The architectural clock-implemented memory design reduces propagation delay and access time. An internal semiconductor material design technique also improves the SRAM data transition scheme. Together, the semiconductor material and clock-implemented designs improve the simulation performance of SRAM, and these designs are implemented for recently developed application-specific memory design architectures and mobile devices.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_17-An_Integrated_Architectural_Clock_Implemented_Memory_Design_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Orthonormal Filter Banks based on Meyer Wavelet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060716</link>
        <id>10.14569/IJACSA.2015.060716</id>
        <doi>10.14569/IJACSA.2015.060716</doi>
        <lastModDate>2015-07-30T17:22:14.6700000+00:00</lastModDate>
        
        <creator>Teng Xudong</creator>
        
        <creator>Dai Yiqing</creator>
        
        <creator>Lu Xinyuan</creator>
        
        <creator>Liang Jianru</creator>
        
        <subject>Meyer wavelet; Time-shift factor; orthonormal FIR filter banks; Symmetrical Index</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>A new design method for orthonormal FIR filter banks, which can be constructed using the generalized Meyer wavelet by taking into account the effect of the time-shift factor, is proposed in this paper. These generalized Meyer wavelets are proved to have the same basic properties and time-frequency localization characteristics as the classical Meyer wavelet; furthermore, some performances of the Meyer wavelets are improved by changing the time-shift factor, which better satisfies the requirements of constructing orthonormal filter banks. The simulation shows that the design of orthonormal filter banks based on the generalized Meyer wavelets with maximal symmetrical index is rational and effective.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_16-Design_of_Orthonormal_Filter_Banks_based_on_Meyer_Wavelet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Changes in Online Community based on Topic Model and Self-Organizing Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060715</link>
        <id>10.14569/IJACSA.2015.060715</id>
        <doi>10.14569/IJACSA.2015.060715</doi>
        <lastModDate>2015-07-30T17:22:14.6530000+00:00</lastModDate>
        
        <creator>Thanh Ho</creator>
        
        <creator>Phuc Do</creator>
        
        <subject>SOM; topic model; interested topics; online users; online community; social networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>In this paper, we propose a new model for two purposes: (1) discovering communities of users on social networks via topics with a temporal factor, and (2) analyzing the changes in interested topics and users in communities in each period of time. In this model, we use a Kohonen network (Self-Organizing Map) combined with the topic model. After discovering communities, the results are shown on the output layer of the Kohonen network, based on which we focus on analyzing the changes in interested topics and users in online communities. We experiment with the proposed model on 194 online users and 20 topics, detected from a set of Vietnamese texts on social networks in the higher-education field.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_15-Analyzing_the_Changes_in_Online_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Induced Fuzzy Bi-Model to Analyze Employee Employer Relationship in an Industry</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060714</link>
        <id>10.14569/IJACSA.2015.060714</id>
        <doi>10.14569/IJACSA.2015.060714</doi>
        <lastModDate>2015-07-30T17:22:14.6200000+00:00</lastModDate>
        
        <creator>Dhrubajyoti Ghosh</creator>
        
        <creator>Anita Pal</creator>
        
        <subject>Fuzzy Cognitive Map; Fuzzy Relational Maps; Fuzzy Cognitive Relational Maps; Induced Fuzzy Cognitive Relational Maps; Fuzzy bi-model;  employee employer relationship</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>The employee-employer relationship is an intricate one. In an industry, employers expect performance in quality and production in order to earn profit, while employees want good pay, all possible allowances, and better benefits than in any other industry. The main objective of this paper is to analyze the relationship between employee and employer in the workplace and to discuss how to maintain a strong employee-employer relationship, which can produce the ultimate success of an organization, using an induced fuzzy bi-model called Induced Fuzzy Cognitive Relational Maps (IFCRMs). IFCRMs are a directed special fuzzy digraph modelling approach based on expert&#39;s opinion. This is a non-statistical approach to studying problems with imprecise information.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_14-Using_Induced_Fuzzy_Bi_Model_to_Analyze_Employee_Employer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Islanding Detection of Grid-Connected System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060713</link>
        <id>10.14569/IJACSA.2015.060713</id>
        <doi>10.14569/IJACSA.2015.060713</doi>
        <lastModDate>2015-07-30T17:22:14.5900000+00:00</lastModDate>
        
        <creator>Liu Zhifeng</creator>
        
        <creator>Zhang Liping</creator>
        
        <creator>Chen Yuchen</creator>
        
        <creator>Jia Chunying</creator>
        
        <subject>islanding detection; self-adaptive; active frequency shift; d-q transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>This paper proposes a modified islanding detection method based on the point of common coupling (PCC) voltage in a three-phase inverter, combined with over/under frequency protection, to detect islanding states rapidly. Islanding detection is a common issue in distributed generation systems. Compared with active islanding detection, this method can detect islanding quickly and effectively. The simulation and experimental results show that the new method can detect the islanding phenomenon quickly and accurately, meeting the requirements of the islanding detection standard and ensuring the stability of the system and the quality of the power recycled to the grid.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_13-Research_on_Islanding_Detection_of_Grid_Connected_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Chatbot Design Techniques in Speech Conversation Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060712</link>
        <id>10.14569/IJACSA.2015.060712</id>
        <doi>10.14569/IJACSA.2015.060712</doi>
        <lastModDate>2015-07-30T17:22:14.5730000+00:00</lastModDate>
        
        <creator>Sameera A. Abdul-Kader</creator>
        
        <creator>Dr. John Woods</creator>
        
        <subject>AIML; Chatbot; Loebner Prize; NLP; NLTK; SQL; Turing Test</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Human-computer speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech-based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human-like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey of the techniques used to design Chatbots, and a comparison is made between different design techniques from nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences in the techniques and examines in particular the Loebner prize-winning Chatbots.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_12-Survey_on_Chatbot_Design_Techniques_in_Speech_Conversation_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Frame Work for Preserving Privacy in Social Media using Generalized Gaussian Mixture Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060711</link>
        <id>10.14569/IJACSA.2015.060711</id>
        <doi>10.14569/IJACSA.2015.060711</doi>
        <lastModDate>2015-07-30T17:22:14.5430000+00:00</lastModDate>
        
        <creator>P Anuradha</creator>
        
        <creator>Y.Srinivas</creator>
        
        <creator>MHM Krishna Prasad</creator>
        
        <subject>Privacy; Social Network; Social Relevant Groups; Generalized GMM, Tagging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Social networking sites help in developing virtual communities for people to share their thoughts, interests, and activities, or to broaden their circle of camaraderie. Social networking sites are among the most frequently browsed categories of websites in the world. Nevertheless, they are also vulnerable to various problems, threats, and attacks, such as disclosure of information and identity theft. Privacy practices in social networking sites often come into question, as information sharing stands in conflict with disclosure-related misuse. Facebook is one of the most popular and widely used social networking sites and has its own robust set of privacy mechanisms, yet it too is prone to various privacy issues and attacks. The impetus of this paper lies in proposing a novel approach for improving privacy in social networking sites. The article addresses these issues with a novel approach based on tagging and a model-based technique using the generalized Gaussian Mixture Model.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_11-A_Frame_Work_for_Preserving_Privacy_in_Social_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Clustering Algorithm in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060710</link>
        <id>10.14569/IJACSA.2015.060710</id>
        <doi>10.14569/IJACSA.2015.060710</doi>
        <lastModDate>2015-07-30T17:22:14.5270000+00:00</lastModDate>
        
        <creator>Ezmerina Kotobelli</creator>
        
        <creator>Elma Zanaj</creator>
        
        <creator>Mirjeta Alinci</creator>
        
        <creator>Edra Bum&#231;i</creator>
        
        <creator>Mario Banushi</creator>
        
        <subject>energy efficient; algorithm; WSN; clustering; Cluster Head; LEACH</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Nowadays many applications use Wireless Sensor Networks (WSN) as they fulfill the purpose of collecting data from a particular phenomenon. Their data-centric behavior as well as harsh restrictions on energy make WSNs different from many other known networks. In this work the energy management problem of WSNs is studied using our proposed modified algorithm. It is a clustering algorithm in which nodes are organized in clusters and send their data to a cluster head selected in shifts. It provides an improvement in energy consumption on the part of member nodes by making cluster heads static. LEACH already has a good energy-saving strategy, but our modification provides an easier approach towards efficiency.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_10-A_Modified_Clustering_Algorithm_in_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement on Classification Models of Multiple Classes through Effectual Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060709</link>
        <id>10.14569/IJACSA.2015.060709</id>
        <doi>10.14569/IJACSA.2015.060709</doi>
        <lastModDate>2015-07-30T17:22:14.4970000+00:00</lastModDate>
        
        <creator>Tarik A. Rashid</creator>
        
        <subject>Non-Balanced Data; Feature Selection; Multiple Classification; Machine Learning Techniques; Student Performance Forecasting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Classifying cases into one of two classes is referred to as binary classification; however, some classification algorithms allow the use of more than two classes. This research work focuses on improving the results of multi-class classification models via some effective techniques. A case study of students’ achievement at Salahadin University is used in this research work. The collected data are pre-processed, cleaned, filtered, and normalised; the final data set is balanced and randomised; then a technique combining the Na&#239;ve Bayes Classifier and Best First Search algorithms is used to reduce the number of features in the data sets. Finally, a multi-classification task is conducted with effective classifiers such as K-Nearest Neighbour, Radial Basis Function, and Artificial Neural Network to forecast the students’ performance.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_9-Improvement_on_Classification_Models_of_Multiple_Classes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Edge Detection based on ACO-PSO Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060708</link>
        <id>10.14569/IJACSA.2015.060708</id>
        <doi>10.14569/IJACSA.2015.060708</doi>
        <lastModDate>2015-07-30T17:22:14.4800000+00:00</lastModDate>
        
        <creator>Chen Tao</creator>
        
        <creator>Sun Xiankun</creator>
        
        <creator>Han Hua</creator>
        
        <creator>You Xiaoming</creator>
        
        <subject>Image edge detection; ant colony optimization; particle swarm optimization; parameter optimization; edge quality evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>This paper focuses on the problem of parameter selection in image edge detection by the ant colony optimization (ACO) algorithm. By introducing the particle swarm optimization (PSO) algorithm to optimize the parameters of the ACO algorithm, a fitness function based on the connectivity of the image edge is proposed to evaluate the quality of the ACO parameters, and the resulting ACO-PSO algorithm is applied to image edge detection. The simulation results show that the parameters are optimized and the proposed ACO-PSO algorithm produces better edges than traditional methods.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_8-Image_Edge_Detection_based_on_ACO_PSO_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Signal Reconstruction with Adaptive Multi-Rate Signal Processing Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060707</link>
        <id>10.14569/IJACSA.2015.060707</id>
        <doi>10.14569/IJACSA.2015.060707</doi>
        <lastModDate>2015-07-30T17:22:14.4500000+00:00</lastModDate>
        
        <creator>Korhan Cengiz</creator>
        
        <subject>LMS; Multi-Rate Systems; NLMS; Statistical Signal Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Multi-rate digital signal processing techniques have been developed in recent years for a wide range of applications, such as speech and image compression, statistical and adaptive signal processing, and digital audio. Multi-rate statistical and adaptive signal processing methods provide a solution to original-signal reconstruction using observation signals sampled at different rates. In this study, a signal reconstruction process using observation signals sampled at different sampling rates is presented. The results are compared with the least mean squares (LMS) and normalized least mean squares (NLMS) methods. As the results indicate, the signal estimation obtained is much more efficient than in previous studies. The designed multi-rate scheme provides significant advantages in terms of error and estimation accuracy.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_7-Signal_Reconstruction_with_Adaptive_Multi_Rate_Signal_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Premature Ventricular Contraction in ECG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060706</link>
        <id>10.14569/IJACSA.2015.060706</id>
        <doi>10.14569/IJACSA.2015.060706</doi>
        <lastModDate>2015-07-30T17:22:14.4330000+00:00</lastModDate>
        
        <creator>Yasin Kaya</creator>
        
        <creator>H&#252;seyin Pehlivan</creator>
        
        <subject>ECG; arrhythmia; classification; k-NN; PVC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of time series approaches to the classification of PVC abnormality. Moreover, the performance effects of several dimension reduction approaches were also tested. Experiments were carried out using well-known machine learning methods, including neural networks, k-nearest neighbour, decision trees, and support vector machines. Findings were expressed in terms of accuracy, sensitivity, specificity, and running time for the MIT-BIH Arrhythmia Database. Among the different classification algorithms, the k-NN algorithm achieved the best classification rate. The results demonstrated that the proposed model exhibited higher accuracy rates than those of other works on this topic. According to the experimental results, the proposed approach achieved classification accuracy, sensitivity, and specificity rates of 99.63%, 99.29% and 99.89%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_6-Classification_of_Premature_Ventricular_Contraction_in_ECG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indexing of Ears using Radial basis Function Neural Network for Personal Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060705</link>
        <id>10.14569/IJACSA.2015.060705</id>
        <doi>10.14569/IJACSA.2015.060705</doi>
        <lastModDate>2015-07-30T17:22:14.3870000+00:00</lastModDate>
        
        <creator>M. A. Jayaram</creator>
        
        <creator>Prashanth G.K</creator>
        
        <creator>M.Anusha</creator>
        
        <subject>RBFNN; kernel function; Indexing equation; Moment of inertia; radii of gyration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>This paper elaborates a novel method to recognize persons using ear biometrics. We propose a method to index ears using Radial Basis Function Neural Networks (RBFNN). In order to obtain invariant features, the ear is considered as a planar surface of irregular shape. Shape-based features such as the planar area, the moments of inertia with respect to the minor and major axes, and the radii of gyration with respect to the minor and major axes are considered. The indexing equation is generated using the weights, centroids, and kernel function of the stabilized RBFNN. The indexing equation so developed was tested and validated, and analysis of the equation revealed 95.4% recognition accuracy. The retrieval of personal details became faster by an average of 13.8% when the database was organized according to the indices. Further, the three groups elicited by the RBFNN were evaluated for parameters such as entropy, precision, recall, specificity, and F-measure. All the parameters are found to be excellent in terms of their values, showcasing the adequacy of the indexing model.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_5-Indexing_of_Ears_using_Radial_basis_Function_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FSL-based Hardware Implementation for Parallel Computation of cDNA Microarray Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060704</link>
        <id>10.14569/IJACSA.2015.060704</id>
        <doi>10.14569/IJACSA.2015.060704</doi>
        <lastModDate>2015-07-30T17:22:14.3730000+00:00</lastModDate>
        
        <creator>Bogdan Bot</creator>
        
        <creator>Simina Emerich</creator>
        
        <creator>Sorin Martoiu</creator>
        
        <creator>Bogdan Belean</creator>
        
        <subject>microarray; FPGA; image processing; hardware algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>The present paper proposes FPGA-based hardware implementations of microarray image processing algorithms in order to eliminate the shortcomings of existing software platforms: user intervention, increased computation time, and cost. The proposed image processing algorithms exclude user intervention from processing. An application-specific architecture is designed, aiming at parallelizing microarray image processing algorithms in order to speed up computation. Hardware architectures for logarithm-based image enhancement, profile computation, and image segmentation are described. The methodology for integrating the hardware architecture within a microprocessor system is detailed. The Fast Simplex Link (FSL) bus is used to connect the hardware architecture as a speed-up co-processor of the microarray image processing system. Timing considerations are presented for the levels of parallelism that can be achieved by using the proposed hardware architectures. The FPGA technology was chosen for implementation due to its parallel computation capabilities and ease of reconfiguration.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_4-FSL_based_Hardware_Implementation_for_Parallel_Computation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enrichment of Object Oriented Petri Net and Object Z Aiming at Business Process Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060703</link>
        <id>10.14569/IJACSA.2015.060703</id>
        <doi>10.14569/IJACSA.2015.060703</doi>
        <lastModDate>2015-07-30T17:22:14.3570000+00:00</lastModDate>
        
        <creator>Aliasghar Ahmadikatouli</creator>
        
        <creator>Homayoon Motameni</creator>
        
        <subject>Object Z; Hierarchical object oriented Petri net; Formal methods integration; Business process; process improvement; process optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>The software development process rests on two important steps, each of which has to be taken seriously: system requirement analysis and system modeling. Many approaches in the literature, each with its own strengths and weaknesses, tackle these two steps; however, none of them is comprehensive. Among them, formal methods, with their supporting mathematical background, can achieve a precise, clear, and detailed requirement analysis; however, they cannot illustrate a system graphically for stakeholders. On the other hand, semi-formal methods, which represent a system’s behavior graphically, make it easy for stakeholders to understand the system thoroughly. In this paper we present an integration of the Object Z formal language and a graphical modeling tool, the hierarchical object-oriented Petri net. The integrated language is applied to model a business process.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_3-Enrichment_of_Object_Oriented_Petri_Net_and_Object_Z_Aiming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mind-Reading System - A Cutting-Edge Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060702</link>
        <id>10.14569/IJACSA.2015.060702</id>
        <doi>10.14569/IJACSA.2015.060702</doi>
        <lastModDate>2015-07-30T17:22:14.3270000+00:00</lastModDate>
        
        <creator>Farhad Shir</creator>
        
        <subject>Brain-machine interface; Bio-signal computer command; mind-reading device; human-computer interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>In this paper, we describe a human-computer interface (HCI) system that includes an enabler for controlling gadgets based on signal analysis of brain activities transmitted from the enabler to the gadgets.  The enabler is insertable in a user’s ear and includes a recorder that records brain signals.  A processing unit of the system, which is inserted in a gadget, commands the gadget based on decoding the recorded brain signals.  The proposed device and system could facilitate a brain-machine interface to control the gadget from electroencephalography signals in the user’s brain.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_2-Mind_Reading_System_A_Cutting_Edge_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing CRM Business Intelligence Applications by Web User Experience Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060701</link>
        <id>10.14569/IJACSA.2015.060701</id>
        <doi>10.14569/IJACSA.2015.060701</doi>
        <lastModDate>2015-07-30T17:22:14.2300000+00:00</lastModDate>
        
        <creator>Natheer K. Gharaibeh</creator>
        
        <subject>CRM; Data warehouse; User Experience; Business intelligence; Web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(7), 2015</description>
        <description>Several trends are emerging in the field of CRM technology that promise a brighter future of more profitable customers and decreasing costs. One of the most important trends is enhancing Business Intelligence applications using Web technologies. Web technologies can improve CRM BI implementation, but the result still needs evaluation. The Web has focused the attention of organizations on the User Experience (UX) and the need to learn about their customers, and the UX paradigm calls for enhancing CRM BI with Web technologies. This paper addresses this issue and provides a framework for building Web-based CRM BI based on process mapping between CRM BI and UX. It also gives a conceptual overview of CRM and its relationship to the main related disciplines: BI, UX, and the Web.</description>
        <description>http://thesai.org/Downloads/Volume6No7/Paper_1-Enhancing_CRM_Business_Intelligence_Applications_by_Web_User_Experience_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cognitive Consistency Analysis in Adaptive Bio-Metric Authentication System Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040711</link>
        <id>10.14569/IJARAI.2015.040711</id>
        <doi>10.14569/IJARAI.2015.040711</doi>
        <lastModDate>2015-07-10T12:53:19.0000000+00:00</lastModDate>
        
        <creator>Gahangir Hossain</creator>
        
        <creator>Habibah Khan</creator>
        
        <creator>Md.Iqbal Hossain</creator>
        
        <subject>Cognitive authentication; Cognitive consistency; Fingertip dynamics; Maximal Information Coefficient; Bivariate plot</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>Cognitive consistency analysis aims to continuously monitor one's perceptual equilibrium during the accomplishment of a cognitive task. In contrast to cognitive flexibility analysis, cognitive consistency analysis identifies the monotone of perception toward a successful interaction process (e.g., biometric authentication) and is useful for generating decision support to assist a user in need. This study considers fingertip dynamics (e.g., keystrokes, tapping, clicking) to gain insight into instantaneous cognitive states and their effect on monotonic advancement toward a successful authentication process. Keystroke dynamics and tapping dynamics are analyzed based on response time data. Finally, cognitive consistency and confusion (inconsistency) are computed with the Maximal Information Coefficient (MIC) and the Maximal Asymmetry Score (MAS), respectively. Our preliminary study indicates that a balance between cognitive consistency and flexibility is needed for a successful authentication process. Moreover, an adaptive and cognitive interaction system requires in-depth analysis of the user’s cognitive consistency to provide robust and useful assistance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_11-Cogntive_Consistency_Analysis_in_Adaptive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison between Regression, Artificial Neural Networks and Support Vector Machines for Predicting Stock Market Index</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040710</link>
        <id>10.14569/IJARAI.2015.040710</id>
        <doi>10.14569/IJARAI.2015.040710</doi>
        <lastModDate>2015-07-10T06:07:29.2130000+00:00</lastModDate>
        
        <creator>Alaa F. Sheta</creator>
        
        <creator>Sara Elsir M. Ahmed</creator>
        
        <creator>Hossam Faris</creator>
        
        <subject>Stock Market Prediction; S&amp;P 500; Regression; Artificial Neural Networks; Support Vector Machines</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>Obtaining accurate predictions of a stock index significantly helps decision makers take correct actions to develop a better economy. The inability to predict fluctuations of the stock market can cause serious profit loss. The challenge is that we always deal with a dynamic market influenced by many factors, including political, financial, and reserve occasions. Thus, stable, robust, and adaptive approaches that can provide models capable of accurately predicting the stock index are urgently needed. In this paper, we explore the use of Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) to build prediction models for the S&amp;P 500 stock index. We also show how traditional models such as multiple linear regression (MLR) behave in this case. The developed models are evaluated and compared based on a number of evaluation criteria.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_10-A_Comparison_between_Regression_Artificial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Reception Facilities for Ship-generated Waste</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040709</link>
        <id>10.14569/IJARAI.2015.040709</id>
        <doi>10.14569/IJARAI.2015.040709</doi>
        <lastModDate>2015-07-10T06:07:29.1830000+00:00</lastModDate>
        
        <creator>Sylvia Encheva</creator>
        
        <subject>Waste management; grey theory; assessments</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>Waste management plans usually address all types of ship-generated waste and cargo residues originating from ships calling at ports. A well-developed waste management plan is a serious step toward reducing the environmental impact of ship-generated waste. Such important and, at the same time, complex considerations can be supported by the application of modern mathematical theories. An evaluation of waste management plans based on the application of grey theory is presented in this work.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_9-Evaluation_of_Reception_Facilities_for_Ship_generated.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Changes in Known Statements After New Data is Added</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040708</link>
        <id>10.14569/IJARAI.2015.040708</id>
        <doi>10.14569/IJARAI.2015.040708</doi>
        <lastModDate>2015-07-10T06:07:29.1200000+00:00</lastModDate>
        
        <creator>Sylvia Encheva</creator>
        
        <subject>Ordering rules; Ordered sets; Implications</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>Learning spaces are broadly defined as spaces with a noteworthy bearing on learning. They can be physical or virtual, as well as formal and informal. The formal ones are customarily understood to be traditional classrooms or technologically enhanced active learning classrooms, while informal learning spaces can be libraries, lounges, cafés, etc. Students’ as well as lecturers’ preferences for learning spaces, along with the effects of these preferences on teaching and learning, have been broadly discussed by many researchers. Yet little has been done to employ mathematical methods for drawing conclusions from available data and for investigating changes in known statements after new data is added. To do this we suggest the use of ordering rules and ordered set theories.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_8-Changes_in_Known_Statements_After_New_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methods for Wild Pig Identifications from Moving Pictures and Discrimination of Female Wild Pigs based on Feature Matching Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040707</link>
        <id>10.14569/IJARAI.2015.040707</id>
        <doi>10.14569/IJARAI.2015.040707</doi>
        <lastModDate>2015-07-10T06:07:29.0570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Kensuke Kubo</creator>
        
        <creator>Katsumi Sugawa</creator>
        
        <subject>OpenCV; Canny filter; Template matching; Feature matching</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>Methods for wild pig identification and for discrimination of female wild pigs based on feature matching with acquired Near Infrared: NIR moving pictures are proposed. Trial and error is repeated for identifying wild pigs and for discriminating female wild pigs through experiments. In conclusion, feature matching methods targeting nipple features show better performance, and the FLANN feature matching method shows the best performance in terms of feature extraction and tracking capabilities.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_7-Methods_for_Wild_Pig_Identifications_from_Moving_Pictures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Seamless Location Measuring System with Wifi Beacon Utilized and GPS Receiver based Systems in Both of Indoor and Outdoor Location Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040706</link>
        <id>10.14569/IJARAI.2015.040706</id>
        <doi>10.14569/IJARAI.2015.040706</doi>
        <lastModDate>2015-07-10T06:07:29.0100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>GPS receiver; WiFi beacon; seamless location estimation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>A seamless location measuring system combining a WiFi-beacon-based system and a GPS-receiver-based system for both indoor and outdoor location measurement is proposed. Through experiments conducted both indoors and outdoors, it is found that location measurement accuracy is around 2-3 meters for the designated indoor and outdoor locations.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_6-Seamless_Location_Measuring_System_with_Wifi_Beacon_Utilized.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relation between Rice Crop Quality (Protein Content) and Fertilizer Amount as Well as Rice Stump Density Derived from Helicopter Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040705</link>
        <id>10.14569/IJARAI.2015.040705</id>
        <doi>10.14569/IJARAI.2015.040705</doi>
        <lastModDate>2015-07-10T06:07:28.9800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masanoori Sakashita</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <subject>Rice Crop; Rice Leaf; Total nitrogen content; Protein content; NDVI; Fertilizer amount; Rice stump density</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>The relation between protein content in rice crops and fertilizer amount, as well as rice stump density, is clarified using multi-spectral camera data acquired from a radio-controlled helicopter. Estimation of protein content in rice crops and total nitrogen content in rice leaves through regression analysis with the Normalized Difference Vegetation Index: NDVI derived from a camera mounted on a radio-controlled helicopter has already been proposed. Through experiments at rice paddy fields situated at the Saga Prefectural Research Institute of Agriculture: SPRIA in Saga city, Japan, it is found that total nitrogen content in rice leaves is linearly proportional to fertilizer amount and NDVI. It is also found that protein content in rice crops is positively proportional to fertilizer amount at lower fertilizer amounts, while it is negatively proportional to fertilizer amount at relatively high fertilizer amounts.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_5-Relation_between_Rice_Crop_Quality_Protein_Content_and_Fertilizer_Amount.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Realistic Rescue Simulation Method with Consideration of Road Network Restrictions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040704</link>
        <id>10.14569/IJARAI.2015.040704</id>
        <doi>10.14569/IJARAI.2015.040704</doi>
        <lastModDate>2015-07-10T06:07:28.9630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takashi Eguchi</creator>
        
        <subject>Rescue Simulation for people with disabilities; GIS MultiAgent-based Rescue Simulation; Auction based Decision Making</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>A realistic rescue simulation method with consideration of road network restrictions is proposed. Decision making and emergency communication systems play important roles in the rescue process when emergency situations happen. The rescue process will be more effective if we have an appropriate decision making method and an accessible emergency communication system. In this paper, we propose a centralized rescue model for people with disabilities. A decision making method to decide which volunteers should help which disabled persons is proposed, utilizing an auction mechanism. GIS data are used to represent the objects in a large-scale disaster simulation environment, such as roads, buildings, and humans. The Gama simulation platform is used to test the proposed rescue simulation model. There are road network restrictions: road disconnections, one-way traffic, roads which do not allow U-turns, etc. These road network restrictions are taken into account in the proposed rescue simulation model. The experimental results show that around 10% additional time is required for evacuation of victims.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_4-Realistic_Rescue_Simulation_Method_with_Consideration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Trend Analysis of Relatively Large Diatoms Which Appear in the Intensive Study Area of the Ariake Sea, Japan in Winter (2011-2015) based on Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040703</link>
        <id>10.14569/IJARAI.2015.040703</id>
        <doi>10.14569/IJARAI.2015.040703</doi>
        <lastModDate>2015-07-10T06:07:28.9470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Toshiya Katano</creator>
        
        <subject>chlorophyll-a concentration; red tide; diatom; sunshine duration; ocean winds; tidal effect</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>The behavior of relatively large diatoms which appear in the Ariake Sea areas, Japan in winter is clarified based on remote sensing satellite data. Through experiments with chlorophyll-a concentration derived from Terra and AQUA MODIS data and truth data of chlorophyll-a concentration, together with meteorological data and tidal data acquired over 5 years (winter 2011 to winter 2015), a strong correlation is found between chlorophyll-a concentration and tidal height changes. A relation between ocean wind speed and chlorophyll-a concentration is also found. Meanwhile, there is a relatively high correlation between daily sunshine duration and chlorophyll-a concentration.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_3-Trend_Analysis_of_Relatively_Large_Diatoms_Which_Appear.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Arabic Natural Language Interface System for a Database of the Holy Quran</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040702</link>
        <id>10.14569/IJARAI.2015.040702</id>
        <doi>10.14569/IJARAI.2015.040702</doi>
        <lastModDate>2015-07-10T06:07:28.9000000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>Natural Language Processing (NLP); Arabic Question Answering System; Morphology; Arabic Grammar; Database; SQL</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>At present, the need for searching the words, objects, subjects, and word statistics of parts of the Holy Quran has grown rapidly, concurrently with the growth in the number of Muslims and the widespread use of smartphones, tablets, and laptops. Because databases are used in almost all activities of our life, some databases have been built to store information about the words and surahs of the Quran. Access to Quran databases has become very important and widely needed; it can be done through database applications or by using SQL commands, directly at the database site or indirectly in a special format through a LAN or even the Web. Most people are not experienced in the SQL language, yet they need to build SQL commands for their retrievals. The proposed system translates their natural Arabic requests, such as questions or imperative sentences, into SQL commands to retrieve answers from a Quran database. It performs parsing and light morphological processing according to a subset of Arabic context-free grammar rules, working as an interface layer between users and the database.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_2-An_Arabic_Natural_Language_Interface_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Minimal Spiking Neural Network to Rapidly Train and Classify Handwritten Digits in Binary and 10-Digit Tasks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040701</link>
        <id>10.14569/IJARAI.2015.040701</id>
        <doi>10.14569/IJARAI.2015.040701</doi>
        <lastModDate>2015-07-10T06:07:28.8070000+00:00</lastModDate>
        
        <creator>Amirhossein Tavanaei</creator>
        
        <creator>Anthony S. Maida</creator>
        
        <subject>Spiking neural networks; STDP learning; digit recognition; adaptive synapse; classification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(7), 2015</description>
        <description>This paper reports the results of experiments to develop a minimal neural network for pattern classification. The network uses biologically plausible neural and learning mechanisms and is applied to a subset of the MNIST dataset of handwritten digits. The research goal is to assess the classification power of a very simple biologically motivated mechanism. The network architecture is primarily a feedforward spiking neural network (SNN) composed of Izhikevich regular spiking (RS) neurons and conductance-based synapses. The weights are trained with the spike timing-dependent plasticity (STDP) learning rule. The proposed SNN architecture contains three neuron layers which are connected by both static and adaptive synapses. Visual input signals are processed by the first layer to generate input spike trains. The second and third layers contribute to spike train segmentation and STDP learning, respectively. The network is evaluated by classification accuracy on the handwritten digit images from the MNIST dataset. The simulation results show that although the proposed SNN is trained quickly, without error feedback, in a small number of iterations, it achieves desirable performance (97.6%) in binary classification (0 and 1). In addition, the proposed SNN gives acceptable recognition accuracy in 10-digit (0-9) classification in comparison with statistical methods such as the support vector machine (SVM) and the multi-layer perceptron neural network.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No7/Paper_1-A_Minimal_Spiking_Neural_Network_to_Rapidly_Train.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximizing Throughput of SW ARQ with Network Coding through Forward Error Correction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060640</link>
        <id>10.14569/IJACSA.2015.060640</id>
        <doi>10.14569/IJACSA.2015.060640</doi>
        <lastModDate>2015-07-01T07:09:37.3270000+00:00</lastModDate>
        
        <creator>Farouq M. Aliyu</creator>
        
        <creator>Yahya Osais</creator>
        
        <creator>Ismail Keshta</creator>
        
        <creator>Adel Binajjaj</creator>
        
        <subject>Network Coding; Automatic repeat request (ARQ); Stop-and-Wait (SW); Vandermonde Matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Over the years, several techniques for improving the throughput of wireless communication have been developed in order to cater for the ever-increasing demand for high-speed network service. However, these techniques can give only little improvement in performance because packets have to be delivered as-is. As such, researchers have begun thinking outside the box by proposing ideas that allow relay nodes to alter packets’ contents in order to improve the throughput of a network. One of the state-of-the-art techniques in this field is Network Coding (NC), which allows relay nodes to linearly combine two or more packets in a way that they can be recovered upon reaching their destination. However, increasing the packet size increases the possibility of errors affecting it. In this paper, the authors investigate whether adding a data recovery technique can improve the performance of a network that uses network coding; if it can, by how much, and is it worth the trouble? To answer these questions, the authors carried out a quantitative analysis of throughput in a Stop-and-Wait Automatic Repeat reQuest (SW-ARQ) data transmission system with Network Coding (NC) and Forward Error Correction (FEC). The Vandermonde matrix is chosen as the coding technique for this research because it has both NC and data recovery characteristics. The Python programming language is used to develop three discrete event simulations: SW-ARQ without NC, SW-ARQ with NC, and SW-ARQ with NC and FEC. The obtained results show that SW-ARQ with NC and FEC is superior to traditional SW-ARQ in terms of throughput, especially in channels with high error rates.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_40-Maximizing_Throughput_of_SW_ARQ_with_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Vision-based Object Tracking Algorithms for Motor Skill Assessments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060639</link>
        <id>10.14569/IJACSA.2015.060639</id>
        <doi>10.14569/IJACSA.2015.060639</doi>
        <lastModDate>2015-07-01T07:09:37.1230000+00:00</lastModDate>
        
        <creator>Beatrice Floyd</creator>
        
        <creator>Kiju Lee</creator>
        
        <subject>Vision-based Object Tracking; Motor Skill Assessment; Multi-marker Tracking; Computer-based Assessment</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Assessment of upper extremity motor skills often involves object manipulation, drawing or writing using a pencil, or performing specific gestures. Traditional assessment of such skills usually requires a trained person to record the time and accuracy, resulting in a process that can be labor intensive and costly. Automating the entire assessment process will potentially lower the cost, produce electronically recorded data, broaden the implementations, and provide additional assessment information. This paper presents a low-cost, versatile, and easy-to-use algorithm to automatically detect and track single or multiple well-defined geometric shapes or markers. It can therefore be applied to a wide range of assessment protocols that involve object manipulation or hand and arm gestures. The algorithm localizes the objects using color thresholding and morphological operations and then estimates their 3-dimensional pose. The utility of the algorithm is demonstrated by implementing it for automating the following five protocols: the sport of Cup Stacking, the Soda Pop Coordination test, the Wechsler Block Design test, the visual-motor integration test, and gesture recognition.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_39-Implementation_of_Vision_based_Object_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video conference Android platform by your mobile phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060638</link>
        <id>10.14569/IJACSA.2015.060638</id>
        <doi>10.14569/IJACSA.2015.060638</doi>
        <lastModDate>2015-07-01T07:09:37.0900000+00:00</lastModDate>
        
        <creator>Mr. Mohamed Khalifa</creator>
        
        <creator>Dr. Chaafa Hamrouni</creator>
        
        <creator>Pr. Mahmoud Abdellaoui</creator>
        
        <subject>videoconference; multimedia mobile phone; streaming; Smartphone; Server; Customers.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Video conferencing is a visual and audio communication technology dedicated to Smartphones. It is traditionally based on the client-server communication model, which imposes several limitations because it relies on a server. In this paper, we propose a new communication model that does not go through a server (client-client); in addition, this model makes video conferencing usable on multimedia mobile phones.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_38-Video_Android_platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Influence of Nitrogen-di-Oxide, Temperature and Relative Humidity on Surface Ozone Modeling Process Using Multigene Symbolic Regression Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060637</link>
        <id>10.14569/IJACSA.2015.060637</id>
        <doi>10.14569/IJACSA.2015.060637</doi>
        <lastModDate>2015-07-01T07:09:37.0600000+00:00</lastModDate>
        
        <creator>Alaa F. Sheta</creator>
        
        <creator>Hossam Faris</creator>
        
        <subject>Air pollution; Surface Ozone; Multigene Symbolic Regression; Genetic Programming; Multilayer perceptron neural network; Prediction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Automatic monitoring, data collection, analysis and prediction of environmental changes are essential for all living things. Understanding future climate change not only helps in measuring its influence on people’s lives, habits, agriculture and health, but also helps in avoiding disasters. Given the high emission of chemicals into the air, scientists have discovered a growing depletion of the ozone layer, which poses a serious environmental problem. Modeling and observing changes in the ozone layer have been studied in the past. This article explores the dynamics of the pollutant features that influence ozone. A short-term prediction model for surface ozone is offered using Multigene Symbolic Regression Genetic Programming (GP). The proposed model uses Nitrogen-di-Oxide, Temperature and Relative Humidity as the main features to predict the ozone level. Moreover, a comparison between GP and an Artificial Neural Network (ANN) in modeling ozone is presented. The results show that GP outperforms the ANN.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_37-Influence_of_Nitrogen_di_Oxide_Temperature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph-based Semi-Supervised Regression and Its Extensions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060636</link>
        <id>10.14569/IJACSA.2015.060636</id>
        <doi>10.14569/IJACSA.2015.060636</doi>
        <lastModDate>2015-07-01T07:09:37.0430000+00:00</lastModDate>
        
        <creator>Xinlu Guo</creator>
        
        <creator>Kuniaki Uehara</creator>
        
        <subject>Semi-supervised learning; Graph-Laplacian; Regression; Gaussian Process; Feedback; Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In this paper we present a graph-based semi-supervised method for solving regression problems. In our method, we first build an adjacency graph on all labeled and unlabeled data, and then combine the graph prior with the standard Gaussian process prior to infer the training model and prediction distribution for semi-supervised Gaussian process regression. Additionally, to further boost the learning performance, we employ a feedback algorithm that selects helpful predictions of unlabeled data for feeding back and re-training the model iteratively. Furthermore, we extend our semi-supervised method to a clustering regression framework to solve the computational problem of the Gaussian process. Experimental results show that our work achieves encouraging results.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_36-Graph_based_Semi_Supervised_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy consumption model over parallel programs implemented on multicore architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060635</link>
        <id>10.14569/IJACSA.2015.060635</id>
        <doi>10.14569/IJACSA.2015.060635</doi>
        <lastModDate>2015-07-01T07:09:36.9970000+00:00</lastModDate>
        
        <creator>Ricardo Isidro-Ramirez</creator>
        
        <creator>Amilcar Meneses Viveros</creator>
        
        <creator>Erika Hernandez Rubio</creator>
        
        <subject>Energy Consumption; Multicore Processors; Parallel Programs; Amdahl’s law</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In High Performance Computing, energy consumption is becoming an important aspect to consider. Because of the high cost of energy production in all countries, energy plays an important role, and ways to save it are actively sought. This is reflected in efforts to reduce the energy requirements of hardware components and applications. Several options have appeared in order to scale down energy use and, consequently, scale up energy efficiency. One of these strategies is the multithread programming paradigm, whose purpose is to produce parallel programs able to use the full amount of computing resources available in a microprocessor. That energy-saving strategy focuses on the efficient use of the multicore processors found in various computing devices, such as mobile devices. Indeed, as a growing trend, multicore processors have been part of various special-purpose computers since 2003, from High Performance Computing servers to mobile devices. However, it is not clear how multiprogramming affects energy efficiency. This paper presents an analysis of different types of multicore-based architectures used in computing, and then a valid model is presented. Based on Amdahl’s Law, a model that considers different scenarios of energy use in multicore architectures is proposed. Some interesting results were found from experiments with the developed algorithm, which was executed in both parallel and sequential ways. A lower limit of energy consumption was found in one type of multicore architecture, and this behavior was observed experimentally.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_35-Energy_consumption_model_over_parallel_programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic wireless charging of electric vehicles on the move with Mobile Energy Disseminators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060634</link>
        <id>10.14569/IJACSA.2015.060634</id>
        <doi>10.14569/IJACSA.2015.060634</doi>
        <lastModDate>2015-07-01T07:09:36.9670000+00:00</lastModDate>
        
        <creator>Leandros A. Maglaras</creator>
        
        <creator>Jianmin Jiang</creator>
        
        <creator>Athanasios Maglaras</creator>
        
        <creator>Frangiskos V. Topalis</creator>
        
        <creator>Sotiris Moschoyiannis</creator>
        
        <subject>Electric vehicle; Dynamic Wireless Charging; IVC; Cooperative Mechanisms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Dynamic wireless charging of electric vehicles (EVs) is becoming a preferred method since it enables power exchange between the vehicle and the grid while the vehicle is moving. In this article, we present mobile energy disseminators (MEDs), a new concept that can help EVs extend their range in a typical urban scenario. Our proposed method exploits Inter-Vehicle Communication (IVC) in order to eco-route electric vehicles, taking advantage of the existence of MEDs. By combining modern communications between vehicles with state-of-the-art energy transfer technologies, vehicles can extend their travel time without the need for large batteries or extremely costly infrastructure. Furthermore, by applying intelligent decision mechanisms we can further improve the performance of the method.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_34-Dynamic_wireless_charging_of_electric_vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification model of arousal and valence mental states by EEG signals analysis and Brodmann correlations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060633</link>
        <id>10.14569/IJACSA.2015.060633</id>
        <doi>10.14569/IJACSA.2015.060633</doi>
        <lastModDate>2015-07-01T07:09:36.9370000+00:00</lastModDate>
        
        <creator>Adrian Rodriguez Aguinaga</creator>
        
        <creator>Miguel Angel Lopez Ramirez</creator>
        
        <creator>Maria del Rosario Baltazar Flores</creator>
        
        <subject>Emotions; Affective Computing; EEG; SVM; Wavelets;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>This paper proposes a methodology for classifying emotional states through the analysis of EEG signals, wavelet decomposition, and an electrode discrimination process that associates electrodes of the 10/20 model with Brodmann regions and reduces the computational burden. The classification was performed with Support Vector Machines, achieving an 81.46 percent classification rate on a multi-class problem; the emotions are modeled in an adjusted space derived from the Russell Arousal-Valence Space and the Geneva model.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_33-Classification_model_of_arousal_and_valence_mental.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Learning Mechanism for Selection of Increasingly More Complex Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060632</link>
        <id>10.14569/IJACSA.2015.060632</id>
        <doi>10.14569/IJACSA.2015.060632</doi>
        <lastModDate>2015-07-01T07:09:36.8270000+00:00</lastModDate>
        
        <creator>Fouad Khan</creator>
        
        <subject>Adaptive Learning; Complexity; Self-awareness; Good regulator theorem; Adaptive Selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Recently it has been demonstrated that causal entropic forces can lead to the emergence of complex phenomena associated with the human cognitive niche, such as tool use and social cooperation. Here I show that even more fundamental traits associated with human cognition, such as ‘self-awareness’, can easily be shown to arise merely out of a selection for ‘better regulators’; i.e., systems which respond comparatively better to threats to their existence that are internal to themselves. A simple model demonstrates how the average self-awareness for a universe of systems indeed continues to rise as less self-aware systems are eliminated. The model also demonstrates, however, that the maximum attainable self-awareness for any system is limited by the plasticity and energy availability for that typology of systems. I argue that this rise in self-awareness may be the reason why systems tend towards greater complexity.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_32-An_Adaptive_Learning_Mechanism_for_Selection_of_Increasingly.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>(AMDC) Algorithm for wireless sensor networks in the marine environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060631</link>
        <id>10.14569/IJACSA.2015.060631</id>
        <doi>10.14569/IJACSA.2015.060631</doi>
        <lastModDate>2015-07-01T07:09:36.7500000+00:00</lastModDate>
        
        <creator>Rabab J. Mohsin</creator>
        
        <creator>John Woods</creator>
        
        <creator>Mohammed Q. Shawkat</creator>
        
        <subject>Wireless sensor network; Mobile Ad hoc Network; Very High Frequency; Sensor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Data compression is known today as one of the most important enabling technologies that form the foundation of the majority of data applications and networks as we know them, including wireless sensor networks and the popular World Wide Web (Internet). Marine data networks are gaining increasing interest in the research community due to the increasing demand for data services over the sea. There is a very narrow range of available solutions because of the absence of infrastructure over such vast water surfaces. We have previously proposed applying MANET networks in the marine environment using the VHF technology available on the majority of ships and vessels in order to gather different sensor data such as sea depth, temperature, wind speed and direction, etc., and send it to a central server to produce a public information map. We also discuss the gains and drawbacks of our proposal, including the problem of the low data transmission rate offered by VHF radio, which is limited to 9.6 Kbps. In this paper we investigate the application of appropriate data quantization and compression techniques to the collected marine sensor data in order to reduce the burden on the channel links and achieve better transmission efficiency.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_31-Algorithm_for_wireless_sensor_networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On a Flow-Based Paradigm in Modeling and Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060630</link>
        <id>10.14569/IJACSA.2015.060630</id>
        <doi>10.14569/IJACSA.2015.060630</doi>
        <lastModDate>2015-07-01T07:09:36.6870000+00:00</lastModDate>
        
        <creator>Sabah Al-Fedaghi</creator>
        
        <subject>flow-based programming; conceptual description; data flow; flowthing model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In computer science, the concept of flow is reflected in many terms such as data flow, control flow, message flow, information flow, and so forth. Many fields of study utilize the notion, including programming, communication (e.g., the Shannon-Weaver communication model), software modeling, artificial intelligence, and knowledge representation. This paper focuses on two approaches that explicitly assert a flow-based paradigm: flow-based programming (FBP) and flowthing modeling (FM). The former is utilized in programming and the latter in modeling (e.g., software development). Each produces a diagrammatic representation, and these are compared. The purpose is to promote progress in a flow-based paradigm and its utilization in the area of computer science. The resultant analysis highlights the fact that FBP and FM can benefit from each other’s methodology.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_30-On_a_Flow_Based_Paradigm_in_Modeling_and_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Meteosat Images Encryption based on AES and RSA Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060629</link>
        <id>10.14569/IJACSA.2015.060629</id>
        <doi>10.14569/IJACSA.2015.060629</doi>
        <lastModDate>2015-07-01T07:09:36.6400000+00:00</lastModDate>
        
        <creator>Boukhatem Mohammed Belkaid</creator>
        
        <creator>Lahdir Mourad</creator>
        
        <creator>Cherifi Mehdi</creator>
        
        <subject>AES; RSA; MSG; satellite; encryption; keys</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Satellite image security plays a vital role in the field of communication systems and the Internet. This work addresses securing the transmission of Meteosat images over the Internet, in public or local networks. To enhance the security of Meteosat transmission in network communication, a hybrid encryption algorithm based on the Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA) algorithms is proposed. The AES algorithm is used for data transmission because of its higher efficiency in block encryption, and the RSA algorithm is used to encrypt the AES key because of its advantages in key management. Our encryption system generates a unique password for every new encryption session. Cryptanalysis and various experiments have been carried out, and the results reported in this paper demonstrate the feasibility and flexibility of the proposed scheme.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_29-Meteosat_Images_Encryption_based_on_AES_and_RSA_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Boosted Decision Trees for Lithiasis Type Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060628</link>
        <id>10.14569/IJACSA.2015.060628</id>
        <doi>10.14569/IJACSA.2015.060628</doi>
        <lastModDate>2015-07-01T07:09:36.6070000+00:00</lastModDate>
        
        <creator>Boutalbi Rafika</creator>
        
        <creator>Farah Nadir</creator>
        
        <creator>Chitibi Kheir Eddine</creator>
        
        <creator>Boutefnouchet</creator>
        
        <creator>Tanougast Camel</creator>
        
        <subject>urinary lithiasis; classification; Boosting; Decision Trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Several urologic studies have shown the importance of determining lithiasis types in order to limit the risk of recurrence and the deterioration of renal function. The difficulty urologists face in classifying urolithiasis is due to the large number of parameters (components, age, gender, background ...) taking part in the classification, and hence in determining the probable etiology. There are six types of urinary lithiasis, distinguished according to their compositions (chemical components in given proportions), their etiologies, and the patient profile. This work presents models based on boosted decision trees, which were compared according to their error rates and runtime. The principal objectives of this work are to facilitate urinary lithiasis classification, to reduce the classification runtime, and to serve an epidemiologic interest. The experimental results show that the method is effective and encouraging for lithiasis type identification.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_28-Boosted_Decision_Trees_for_Lithiasis_Type_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Erp Systems Critical Success Factors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060627</link>
        <id>10.14569/IJACSA.2015.060627</id>
        <doi>10.14569/IJACSA.2015.060627</doi>
        <lastModDate>2015-07-01T07:09:36.5770000+00:00</lastModDate>
        
        <creator>Islam K. Sowan</creator>
        
        <creator>Radwan Tahboub</creator>
        
        <subject>ERP; Information and Communication Technology; System Life Cycle; Critical success factors; Agile methodology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Enterprise Resource Planning (ERP) systems are among the most complex systems in the information systems field; implementations of this type of system need a long time, high cost, and a lot of resources. Many factors affect the successful implementation of an ERP system. The critical success factors (CSFs) can be categorized as general, ICT-related, and software engineering or system life cycle (SLC) related. This paper is a survey that identifies ERP system CSFs in general and software engineering CSFs in particular. An agile methodology for ERP system implementations is also presented. Many existing ERP systems were surveyed and presented from an ICT / software engineering point of view.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_27-Erp_Systems_Critical_Success_Factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Moore Dijkstra Algorithm with Multi-Agent System to Find Shortest Path over Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060626</link>
        <id>10.14569/IJACSA.2015.060626</id>
        <doi>10.14569/IJACSA.2015.060626</doi>
        <lastModDate>2015-07-01T07:09:36.2330000+00:00</lastModDate>
        
        <creator>Basem Alrifai</creator>
        
        <creator>Hind Mousa Al-Hamadeen</creator>
        
        <subject>multi-agent system; shortest paths problems; Dijkstra Algorithm; Automata with multiplicities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Finding the shortest path over a network is a difficult problem and the target of much research, and many algorithms of varying performance have been proposed for it. Shortest path problems are familiar problems in computer science and mathematics. In these problems, edge weights may represent distances, costs, or any other real-valued quantity that can be added along a path and that one may wish to minimize. Thus, edge weights are real numbers, and the specific operations used are addition, to compute the weight of a path, and minimum, to select the best path weight.
In this paper we use Dijkstra&#39;s algorithm with a new technique to find the shortest path over a network and reduce the time needed to find the best path. When nodes in the network have the same value, the choice among them depends on the number of transitions for each node: a node with a higher number of transitions is given higher priority, and choosing nodes by this method decreases the time needed to find the shortest path. To make the algorithm more distinctive, we apply a multi-agent system (automata with multiplicities) to find the shortest path.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_26-Using_Moore_Dijkstra_Algorithm_with_Multi_Agent_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vague Set Theory for Profit Pattern and Decision Making in Uncertain Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060625</link>
        <id>10.14569/IJACSA.2015.060625</id>
        <doi>10.14569/IJACSA.2015.060625</doi>
        <lastModDate>2015-07-01T07:09:36.2030000+00:00</lastModDate>
        
        <creator>Vivek Badhe</creator>
        
        <creator>Dr. R.S Thakur</creator>
        
        <creator>Dr. G.S Thakur</creator>
        
        <subject>Association Rule Mining; Vague Association Rule Mining; Profit Pattern Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>The problem of decision making, especially in financial issues, is a crucial task in every business. Profit pattern mining addresses this target, but the job becomes very difficult when it depends on an imprecise and vague environment, which has been frequent in recent years. The concept of the vague association rule is a novel way to address this difficulty. Only a few studies have been carried out on association rule mining using vague set theory. General approaches to association rule mining focus on inducing rules by using correlations among data and finding frequently occurring patterns. In past years, data mining technology followed a traditional approach that offered only statistical analysis and discovered rules, with the main technique using support and confidence measures for rule generation. But since data have become more complex today, it is requisite to find solutions that deal with such problems. There are certain constructive approaches that have already reformed ARM. In this paper, we apply the concept of vague set theory and its related properties to profit patterns and their application in commercial management, in order to deal with the business decision-making problem.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_25-Vague_Set_Theory_for_Profit_Pattern_and_Decision_Making_in_Uncertain_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Algorithm to Automated Discovery of Interesting Positive and Negative Association Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060623</link>
        <id>10.14569/IJACSA.2015.060623</id>
        <doi>10.14569/IJACSA.2015.060623</doi>
        <lastModDate>2015-07-01T07:09:36.1700000+00:00</lastModDate>
        
        <creator>Ahmed Abdul-Wahab Al-Opahi</creator>
        
        <creator>Basheer Mohamad Al-Maqaleh</creator>
        
        <subject>Association rule mining; negative rule and positive rules; frequent and infrequent pattern set; apriori algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Association rule mining is a very efficient technique for finding strong relations between correlated data; the correlation of data makes the extraction process meaningful. For discovering frequent items and mining positive rules, a variety of algorithms are used, such as the Apriori algorithm and tree-based algorithms. However, these algorithms do not consider the negated occurrence of attributes, and the rules they produce do not cover the infrequent case. The discovery of infrequent itemsets is far more difficult than that of their counterparts, frequent itemsets. The problems include infrequent itemset discovery, the generation of interesting negative association rules, and their huge number compared with positive association rules. The discovery of interesting association rules is an important and active area within data mining research. In this paper, an efficient algorithm is proposed for discovering interesting positive and negative association rules from frequent and infrequent items. The experimental results show the usefulness and effectiveness of the proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_23-An_Efficient_Algorithm_to_Automated_Discovery_of_Interesting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reasoning Method on Knowledge about Functions and Operators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060622</link>
        <id>10.14569/IJACSA.2015.060622</id>
        <doi>10.14569/IJACSA.2015.060622</doi>
        <lastModDate>2015-07-01T07:09:36.0770000+00:00</lastModDate>
        
        <creator>Nhon V. Do</creator>
        
        <creator>Hien D. Nguyen</creator>
        
        <creator>Thanh T. Mai</creator>
        
        <subject>knowledge representation; knowledge based system; intelligent problem solver; automated reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In artificial intelligence, there are many methods for knowledge representation. One effective model is the Computational Object Knowledge Base model (COKB model), which can be used to represent total knowledge and to design the knowledge base component of practical intelligent systems. Reasoning methods also play an important role in knowledge-based systems. In fact, a popular form of knowledge domain is knowledge about functions and operators. These knowledge domains have many practical applications, especially in educational applications such as Solid Geometry and Analytic Geometry. However, current methods cannot reason on knowledge about functions and operators. In this paper, we present a reasoning method to solve problems on the COKB model that are related to knowledge about functions and operators. This method has also been applied to design some intelligent systems in education. Using this reasoning method, systems can automatically solve problems in some educational knowledge domains and produce step-by-step solutions.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_22-Reasoning_Method_on_Knowledge_about_Functions_and_Operators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of High Precision Temperature Measurement System based on Labview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060621</link>
        <id>10.14569/IJACSA.2015.060621</id>
        <doi>10.14569/IJACSA.2015.060621</doi>
        <lastModDate>2015-07-01T07:09:36.0170000+00:00</lastModDate>
        
        <creator>Weimin Zhu</creator>
        
        <creator>Jin Liu</creator>
        
        <creator>Haima Yang</creator>
        
        <creator>Chaochao Yan</creator>
        
        <subject>LabVIEW; AD7076; thermocouple; cold end temperature compensation; Temperature measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Using the LabVIEW software platform, a high precision temperature measuring device is designed based on the principle of the thermocouple. The system uses an STM32 MCU as the main control chip and an AD7076 analog-to-digital converter, which provides 8 channels, synchronous sampling, and bipolar input. The precision of temperature measurement is improved by cold-end compensation, curve fitting, and other measures. The test results show that the temperature measurement precision of the device can reach &#177;0.1 &#176;C. The device has the advantages of small size, high precision, and reliable performance, so this high precision temperature measurement system can be widely used in industrial production.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_21-Design_of_High_Precision_Temperature_Measurement_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Framework of Analytical CRM in Big Data Age</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060620</link>
        <id>10.14569/IJACSA.2015.060620</id>
        <doi>10.14569/IJACSA.2015.060620</doi>
        <lastModDate>2015-07-01T07:09:35.9830000+00:00</lastModDate>
        
        <creator>Chien-hung Liu</creator>
        
        <subject>CRM; Analytical CRM; Big Data; CRM Framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Traditionally, analytical CRM (A-CRM) mainly relies on the use of structured data from a data warehouse, where data are extracted, transformed, and loaded from operational systems such as ERP, SCM, or operational CRM. In recent years, with the rising big data trend, recognized shifts in E-commerce have taken place from internet-enabled commerce (I-commerce), to mobile commerce (M-commerce), and now to ubiquitous commerce (U-commerce). These paradigm shifts imply that ubiquitous computing considerably improves companies’ access to information by allowing them to acquire information anytime, anywhere. Given these changes in data collection due to ubiquitous computing, however, current A-CRM frameworks in the literature do not seem to match this change. Only a handful of studies have been published on CRM in ubiquitous computing environments that fit what the big data age requires. Consequently, this study attempts to propose a conceptual framework of A-CRM. Built using the conceptual framework approach, this framework provides valuable directions, definitions, and guidelines to practitioners preparing for successful big data marketing in the big data age.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_20-A_Conceptual_Framework_of_Analytical_CRM_in_Big_Data_Age.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Image Processing Techniques for TV Broadcasting of Sporting Events</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060619</link>
        <id>10.14569/IJACSA.2015.060619</id>
        <doi>10.14569/IJACSA.2015.060619</doi>
        <lastModDate>2015-07-01T07:09:35.9370000+00:00</lastModDate>
        
        <creator>Cheikhrouhu E.</creator>
        
        <creator>Jabri I.</creator>
        
        <creator>Lakhoua M.N.</creator>
        
        <creator>Mlouhi Y.</creator>
        
        <creator>Battikh T.</creator>
        
        <creator>Maalej L.</creator>
        
        <subject>Augmented Reality; Filtering; Sports field Homography; Image Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In this paper, we describe a system solution by which virtual graphics, such as projected advertising images, logos, match scores, and distance measurements of players on the field, may be overlaid on the plan of different types of sports fields in real test images. This solution relies on a study of artificial vision and Augmented Reality applied to TV broadcasting of sporting events, where the inputs are the original image to be processed, the image to be projected, and the coordinates of the overlay position of the objects on the plan of the field. As output, we obtain the overlaid objects in the processed image at the selected position, rendered realistically and in the background.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_19-Application_of_Image_Processing_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Biometric Systems: A State of the Art Survey and Research Directions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060618</link>
        <id>10.14569/IJACSA.2015.060618</id>
        <doi>10.14569/IJACSA.2015.060618</doi>
        <lastModDate>2015-07-01T07:09:35.9200000+00:00</lastModDate>
        
        <creator>Ramadan Gad</creator>
        
        <creator>Nawal El-Fishawy</creator>
        
        <creator>Ayman El-Sayed</creator>
        
        <creator>M. Zorkany</creator>
        
        <subject>Biometrics; Multimodal biometric systems; fusion levels; recognition methods; authentication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Multi-biometrics is an exciting and interesting research topic. It is used to recognize individuals for security purposes and to increase security levels. Recent research trends toward the next generation of biometrics in real-time applications. Moreover, the integration of multiple biometrics overcomes some limitations of unimodal systems. However, the design and evaluation of such systems raise many issues and trade-offs. A state-of-the-art survey of multi-biometrics benefits, limitations, integration strategies, and fusion levels is presented in this paper. Finally, upon reviewing multi-biometric approaches and techniques, some open points are suggested as directions of future research interest.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_18-Multi_Biometric_Systems_A_State.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cyberspace Forensics Readiness and Security Awareness Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060617</link>
        <id>10.14569/IJACSA.2015.060617</id>
        <doi>10.14569/IJACSA.2015.060617</doi>
        <lastModDate>2015-07-01T07:09:35.9070000+00:00</lastModDate>
        
        <creator>Aadil Al-Mahrouqi</creator>
        
        <creator>Sameh Abdalla</creator>
        
        <creator>Tahar Kechadi</creator>
        
        <subject>Network Forensics; Forensics Readiness; Network Security; Active Forensics; Reactive Forensics; Forensics Awareness and Network Security model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>The goal of reaching a high level of security in wireless and wired communication networks is continuously proving difficult to achieve. The speed at which both keepers and violators of secure networks are evolving is relatively close. Nowadays, network infrastructures contain a large number of event logs captured by Firewalls and Domain Controllers (DCs). However, these logs are increasingly becoming an obstacle for network administrators in analyzing networks for malicious activities. Forensic investigators’ mission to detect malicious activities and reconstruct incident scenarios is extremely complex considering the number, as well as the quality, of these event logs. This paper presents the building blocks of a model for automated network readiness and awareness. The idea of this model is to utilize the current network security outputs to construct forensically comprehensive evidence. The proposed model covers the three vital phases of the cybercrime management chain, which are: 1) Forensics Readiness, 2) Active Forensics, and 3) Forensics Awareness.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_17-Cyberspace_Forensics_Readiness_and_Security_Awareness_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Qos Aspects in the Cloud Service Research and Selection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060616</link>
        <id>10.14569/IJACSA.2015.060616</id>
        <doi>10.14569/IJACSA.2015.060616</doi>
        <lastModDate>2015-07-01T07:09:35.8600000+00:00</lastModDate>
        
        <creator>Manar ABOUREZQ</creator>
        
        <creator>Abdellah IDRISSI</creator>
        
        <subject>Cloud Computing; Cloud Services; Quality of Service; Skyline; Outranking methods; Multi criteria decision; ELECTRE methods; Block-Nested Loops Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Cloud Computing is a business model revolution more than a technological one. It capitalized on various technologies that have proved themselves and reshaped the use of computers by replacing their local use with a centralized one, where shared resources are stored and managed by a third party in a way transparent to end users. With this new use came new needs, and one of them is the need to search through Cloud services and select the ones that meet certain requirements. To address this need, we developed, in a previous work, the Cloud Service Research and Selection System (CSRSS), which aims to allow Cloud users to search through Cloud services in the database and find the ones that match their requirements. It is based on the Skyline and ELECTRE IS. In this paper, we improve the system by introducing 7 new dimensions related to QoS constraints. Our work’s main contribution is conceiving an Agent that uses both the Skyline and an outranking method, called ELECTREIsSkyline, to determine which Cloud services better meet the users’ requirements while respecting QoS properties. We programmed and tested this method for a total of 10 dimensions and 50 000 cloud services. The first results are very promising and show the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_16-Integration_of_Qos_Aspects_in_the_Cloud_Service_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy C-Means based Inference Mechanism for Association Rule Mining: A Clinical Data Mining Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060615</link>
        <id>10.14569/IJACSA.2015.060615</id>
        <doi>10.14569/IJACSA.2015.060615</doi>
        <lastModDate>2015-07-01T07:09:35.8430000+00:00</lastModDate>
        
        <creator>Kapil Chaturvedi</creator>
        
        <creator>Dr. Ravindra Patel</creator>
        
        <creator>Dr. D.K. Swami</creator>
        
        <subject>Association Rule Mining; Fuzzy Inference System; Clinical Data Mining; Preprocessing; Fuzzy clusters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Association rule mining (ARM) covers a wide variety of research in the field of data mining, and many ARM approaches are well investigated in the literature. However, the major issue with ARM is that a huge number of frequent patterns cannot produce direct or factual knowledge. Hence, to find factual knowledge and to discover inference, we propose in this paper a novel approach, AFIRM, following a two-step procedure: the first step discovers frequent patterns by applying an ARM algorithm, and the second discovers inference by adopting the concept of Fuzzy c-means clustering. For performance analysis, we apply this approach to a clinical dataset (containing symptom information of patients) and obtain, as hidden knowledge or inference, the most affected diseases over a couple of months or a session.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_15-Fuzzy_C_Means_based_Inference_Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Artificial Neural Network Application for Estimation of Natural Frequencies of Beams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060614</link>
        <id>10.14569/IJACSA.2015.060614</id>
        <doi>10.14569/IJACSA.2015.060614</doi>
        <lastModDate>2015-07-01T07:09:35.8270000+00:00</lastModDate>
        
        <creator>Mehmet Avcar</creator>
        
        <creator>Kemal Saplioglu</creator>
        
        <subject>natural frequency; beam; ANN; multiple regression; adjusted R-square</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In this study, natural frequencies of prismatic steel beams with various geometrical characteristics under four different boundary conditions are determined using the Artificial Neural Network (ANN) technique. In this way, an efficient alternative method is developed for the solution of the present problem, which avoids the loss of time in computing some necessary parameters. In this context, the first ten frequency parameters of the beam are initially found, where the Bernoulli-Euler beam theory is adopted, and then the natural frequencies are computed theoretically. With the aid of the theoretically obtained results, the data sets are formed and ANN models are constructed. Here, 36 models are developed from 3 primary models. The results of these models are obtained by changing the number and properties of the neurons and the input data. The accuracy of the present models is examined by comparing their results with the theoretically obtained ones. The effects of the number of neurons, the input data, and the training function on the models are investigated. In addition, multiple regression models are developed from the data, and the adjusted R-square is examined to identify inefficient input parameters.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_14-An_Artificial_Neural_Network_Application_for_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated Graphical User Interface based System for the Extraction of Retinal Blood Vessels using Kirsch’s Template</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060613</link>
        <id>10.14569/IJACSA.2015.060613</id>
        <doi>10.14569/IJACSA.2015.060613</doi>
        <lastModDate>2015-07-01T07:09:35.7970000+00:00</lastModDate>
        
        <creator>Joshita Majumdar</creator>
        
        <creator>Souvik Tewary</creator>
        
        <creator>Shreyosi Chakraborty</creator>
        
        <creator>Debasish Kundu</creator>
        
        <creator>Sudipta Ghosh</creator>
        
        <creator>Sauvik Das Gupta</creator>
        
        <subject>Blood Vessel Extraction; Retinal Images; Kirsch’s Templates; Image Processing; Graphical User Interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>The assessment of blood vessel networks plays an important role in a variety of medical disorders. The diagnosis of Diabetic Retinopathy (DR) and its repercussions, including microaneurysms, haemorrhages, hard exudates, and cotton wool spots, is one such field. This study aims to develop an automated system for the extraction of blood vessels from retinal images by employing Kirsch’s Templates in a MATLAB based Graphical User Interface (GUI). Here, an RGB or grey image of the retina (fundus photography) is used to obtain the traces of blood vessels. We have incorporated a range of threshold values for blood vessel extraction, which provides the user with greater flexibility and ease. This paper also deals with a more general implementation of various functions in the image processing toolbox of MATLAB to create a basic image processing editor with features such as noise addition and removal, image cropping, resizing &amp; rotation, histogram adjustment, separate viewing of the red, green, and blue components of a colour image, and brightness control. We have combined both Kirsch’s Template and various MATLAB algorithms to obtain enhanced images that allow the ophthalmologist to edit and intensify the images as required for diagnosis. Even a non-technical person can identify severe discrepancies because of the user-friendly appearance. The GUI uses very common English terms, viz. Load, Colour Contrast Panel, Image Clarity, etc., that can be easily understood. It is an attempt to incorporate the maximum number of image processing techniques under one GUI to obtain higher performance. It also provides a cost-effective solution for obtaining high-definition, high-resolution images of the blood-vessel-extracted retina in economically backward regions where costly machines like OCT (Optical Coherence Tomography) and MRI (Magnetic Resonance Imaging) are not available. Hence, early detection of irregularities will be possible, especially in rural areas.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_13-An_Automated_Graphical_User_Interface_based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiple-Published Tables Privacy-Preserving Data Mining: A Survey for Multiple-Published Tables Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060612</link>
        <id>10.14569/IJACSA.2015.060612</id>
        <doi>10.14569/IJACSA.2015.060612</doi>
        <lastModDate>2015-07-01T07:09:35.7670000+00:00</lastModDate>
        
        <creator>Abou_el_ela Abdo Hussein</creator>
        
        <creator>Nagy Ramadan Darwish</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>Data mining; privacy; sensitive attribute; quasi-identifier; Anatomy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>With the large growth in technology, the reduced cost of storage media and networking has enabled organizations to collect very large volumes of information from many sources. Different data mining techniques are applied to such huge data to extract useful and relevant knowledge. The disclosure of sensitive data to unauthorized parties is a critical issue for organizations and could be the most critical problem of data mining. Privacy-preserving data mining (PPDM) has therefore become increasingly popular because it addresses this problem and allows the sharing of privacy-sensitive data for analytical purposes. Many privacy techniques were developed based on the k-anonymity property. Because of the many shortcomings of the k-anonymity model, other privacy models were introduced. Most of these techniques release one table to the research public after being applied to the original tables. In this paper, the researchers introduce techniques that publish more than one table while preserving individuals&#39; privacy. One of these is (a, k)-anonymity using lossy join, which releases two tables for publishing in such a way that the privacy protection of (a, k)-anonymity can be achieved with less distortion; the other is the Anatomy technique, which releases all the quasi-identifier and sensitive values directly in two separate tables, meeting l-diversity privacy requirements without any modification of the original table.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_12-Multiple_Published_Tables_Privacy_Preserving_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content-based Image Retrieval for Image Indexing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060611</link>
        <id>10.14569/IJACSA.2015.060611</id>
        <doi>10.14569/IJACSA.2015.060611</doi>
        <lastModDate>2015-07-01T07:09:35.7330000+00:00</lastModDate>
        
        <creator>Md. Al-Amin Bhuiyan</creator>
        
        <subject>color indexing; HSV color model; color histogram; Minkowski distance metric; fuzzy clustering; Color Quantization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Content-based image retrieval has attained a position of overwhelming dominance in computer vision with the advent of digital cameras and the explosion of images on the Internet and in Clouds. Finding the most relevant images in a short time is a challenging job, with many big cloud sites competing in image search in terms of accuracy and recall. This paper presents an image retrieval system employing color information indexing. The system is organized around the hue components of the HSV color model. To assess the precision of the image retrieval system, experiments have been carried out on a database consisting of 450 images drawn by Japanese traditional painters, namely Sharaku, Hokusai, and Hiroshige, together with multicolor natural scene images obtained from the World Wide Web (WWW). To query the database, the user specifies an object on which the same color attributes are evaluated, and all similar-looking images are returned as the outcomes of the query.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_11-Content_based_Image_Retrieval_for_Image_Indexing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building a Robust Client-Side Protection Against Cross Site Request Forgery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060610</link>
        <id>10.14569/IJACSA.2015.060610</id>
        <doi>10.14569/IJACSA.2015.060610</doi>
        <lastModDate>2015-07-01T07:09:35.6730000+00:00</lastModDate>
        
        <creator>Abdalla AlAmeen</creator>
        
        <subject>Security; Reflected CSRF; client-side protection; tab ID; token</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In recent years, the web has been an indispensable part of business all over the world, and web browsers have become the backbones of today&#39;s systems and applications. Unfortunately, the number of web application attacks has increased a great deal, so securing web applications is a matter of concern. One of the most serious cyber-attacks is cross site request forgery (CSRF). CSRF has been recognized among the major threats to web applications and among the top ten worst vulnerabilities for web applications. In a CSRF attack, an attacker causes a sensitive action to be taken on a target website on behalf of an authorized user without his knowledge. This paper provides an overview of the CSRF attack and describes the various possible attacks, the developed solutions, and the risks in the current preventive techniques. It then proposes a robust protection mechanism against reflected CSRF called RCSR. RCSR is a tool that gives computer users full control over the attack. The RCSR tool relies on identifying the source of each HTTP request, whether it comes from a different tab or from the same tab of a valid user; it observes and intercepts every request passed through the user’s browser, extracts session information, and posts the extracted information to the server, which then creates a token for the user&#39;s session. We evaluated the RCSR extension, and our results show that it works well and successfully protects web applications against reflected CSRF.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_10-Building_a_Robust_Client_Side_Protection_Against.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adoption of e-Government in Pakistan: Supply Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060609</link>
        <id>10.14569/IJACSA.2015.060609</id>
        <doi>10.14569/IJACSA.2015.060609</doi>
        <lastModDate>2015-07-01T07:09:35.6400000+00:00</lastModDate>
        
        <creator>Zulfiqar Haider</creator>
        
        <creator>Chen Shuwen</creator>
        
        <creator>Dr. Farah Lalani</creator>
        
        <creator>Dr. Aftab Ahmed Mangi</creator>
        
        <subject>e-Government; adoption; Supply; UTAUT model; Pakistan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Electronic Government, also known as e-Government, is a convenient way for citizens to access e-services and to conduct business with the government using the Internet. It saves both citizens and the government time and money. This study examined the supply side of e-Government adoption by using the UTAUT as a model of technology acceptance. Furthermore, specific variables proposed to moderate relationships within the UTAUT were analyzed, including locus of control, perceived organizational support, affective and normative commitment, and procedural justice. Data from one sample indicated that, in general, the UTAUT model was supported; however, the moderators proved non-significant. Implications are discussed for the technology acceptance process as technologies are implemented within countries, and suggestions for future research in this area are discussed. This research sought to demonstrate the robustness of trust-based UTAUT in addressing e-Government adoption concerns. As a consequence, it was the responsibility of the researcher to select research questions, operational variables, research approaches, and research techniques within the scope of the study. The research hypotheses formulated in this study were based on the technology acceptance literature covering the original UTAUT model with the inclusion of the trust construct. This quantitative study was conducted with the help of the Unified Theory of Acceptance and Use of Technology (UTAUT) model.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_9-Adoption_of_e_Government_in_Pakistan_Supply_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Transportation Application using Global Positioning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060608</link>
        <id>10.14569/IJACSA.2015.060608</id>
        <doi>10.14569/IJACSA.2015.060608</doi>
        <lastModDate>2015-07-01T07:09:35.6270000+00:00</lastModDate>
        
        <creator>Nouf Mohammad Al Shammary</creator>
        
        <creator>Abdul Khader Jilani Saudagar</creator>
        
        <subject>communication; global positioning system; smart application; tracking; transportation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>A significant increase has been noticed in the utilization of mobile applications for different purposes in the past decade. These applications can improve an individual’s way of life in many aspects such as communication, collaborative work, learning, location services, data collection, exploring, testing, and analysis. One of the most interesting uses of mobile applications is tracking through personal locators. These locators can track children, people at work, and the elderly for personal protection. The intention behind developing this mobile application is to provide a smart transportation system to its users and to track their movements.
Some of the essential features of this application are
• Finding the shortest path from source to destination in advance.
• Estimating the approximate time of arrival at the destination.
• Knowing the capacity of the vehicle used for transportation.
• Short Message Service.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_8-Smart_Transportation_Application_using_Global_Positioning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of GLBP Algorithm in the Prediction of Building Energy Consumption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060607</link>
        <id>10.14569/IJACSA.2015.060607</id>
        <doi>10.14569/IJACSA.2015.060607</doi>
        <lastModDate>2015-07-01T07:09:35.5630000+00:00</lastModDate>
        
        <creator>Dinghao Lv</creator>
        
        <creator>Bocheng Zhong</creator>
        
        <creator>Jing Luo</creator>
        
        <subject>BP Neural network; Building energy consumption; Genetic algorithm; Levenberg-Marquardt algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Using the BP neural network to predict building energy consumption has in the past shown some shortcomings. To address these, a new algorithm combining a genetic algorithm with the Levenberg-Marquardt algorithm (LM algorithm) was proposed. The proposed algorithm was used to improve the neural network and predict the energy consumption of buildings. First, the genetic algorithm was used to optimize the weights and thresholds of the Artificial Neural Network (ANN). The Levenberg-Marquardt algorithm was adopted to optimize the neural network training. Then the prediction model was set up in terms of the main factors affecting energy consumption. Furthermore, one month of power consumption data for a public building was collected by establishing a monitoring platform to train and test the model. Eventually, the simulation results proved that the proposed model was qualified to predict short-term energy consumption accurately and efficiently.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_7-Application_of_GLBP_Algorithm_in_the_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards GP Sentence Parsing of V+P+CP/NP Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060606</link>
        <id>10.14569/IJACSA.2015.060606</id>
        <doi>10.14569/IJACSA.2015.060606</doi>
        <lastModDate>2015-07-01T07:09:35.5470000+00:00</lastModDate>
        
        <creator>Du Jiali</creator>
        
        <creator>Yu Pingfang</creator>
        
        <subject>artificial intelligence; computational linguistics; machine learning; local ambiguity; garden path sentences</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Computational linguistics can provide an effective perspective for explaining local ambiguity during machine translation. The V+Pron+CP/NP structure has the potential for ambiguity that brings about the Garden Path effect. If the Tell+Pron+NP structure has considerably higher observed frequencies than the Tell+Pron+CP structure, the former is regarded as the preferred structure and has a much lower confusion quotient. It is possible for the grammatical but unpreferred Tell+Pron+CP structure to replace the ungrammatical preferred Tell+Pron+NP, which results in processing breakdown. The syntactic details of GP processing can be presented using computational technologies. Computational linguistics proves effective for exploring the Garden Path phenomenon.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_6-Towards_GP_Sentence_Parsing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Markovian Process and Novel Secure Algorithm for Big Data in Two-Hop Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060605</link>
        <id>10.14569/IJACSA.2015.060605</id>
        <doi>10.14569/IJACSA.2015.060605</doi>
        <lastModDate>2015-07-01T07:09:35.5170000+00:00</lastModDate>
        
        <creator>K. Thiagarajan</creator>
        
        <creator>K. Saranya</creator>
        
        <creator>A. Veeraiah</creator>
        
        <creator>B. Sudha</creator>
        
        <subject>big data; two-hop transmission; security in physical layer; cooperative jamming; energy balance; Markov process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>This paper checks the correctness of our novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks, using a cooperative jamming scheme with the attacker location unknown, through a Markovian process. Big data has to be transmitted in two hops, from source to relay and from relay to destination, by deploying security at the physical layer. Based on our novel algorithm, the nodes of the network can be identified; the probability values of the data-absorbing nodes, namely the capture node C, non-capture node NC and eavesdropper node E, at each level depend on the transition from the present level to the next, and the probability of transition between two nodes is the same at all times in a given time slot, ensuring more secure transmission of big data. In this paper, the maximum probability for capture nodes is considered to justify the efficient transmission of big data through a Markovian process.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_5-Markovian_Process_and_Novel_Secure_Algorithm_for_Big_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Wireless Indoor Monitoring System based on ARM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060604</link>
        <id>10.14569/IJACSA.2015.060604</id>
        <doi>10.14569/IJACSA.2015.060604</doi>
        <lastModDate>2015-07-01T07:09:35.4530000+00:00</lastModDate>
        
        <creator>Jia Chunying</creator>
        
        <creator>Zhang Liping</creator>
        
        <creator>Chen Yuchen</creator>
        
        <subject>STM32; CC1101; Intelligent Monitoring; ID; Corresponding Code Discrimination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>This paper proposes an intelligent wireless indoor monitoring system based on the STM32F103. The system comprises a master and terminals, which communicate through a CC1101 433 MHz wireless unit. Using an ENC28J60 and a SIM900A to access Ethernet, the system can intelligently push information to a cloud server, so that terminals can be observed and controlled remotely. This paper presents an algorithm based on the MCU ID for terminal code recognition. The algorithm can intelligently discriminate terminals through code matching between the master and terminals, even though the hardware and software of the terminals are similar. Experiments demonstrate that the proposed system is robust and stable, runs in real time, and obtains accurate warning information.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_4-Intelligent_Wireless_Indoor_Monitoring_System_based_on_ARM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Study of the Cloud Architecture Selection for Effective Big Data Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060603</link>
        <id>10.14569/IJACSA.2015.060603</id>
        <doi>10.14569/IJACSA.2015.060603</doi>
        <lastModDate>2015-07-01T07:09:35.4230000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Evgeniy Pluzhnik</creator>
        
        <creator>Dmitry Biryukov</creator>
        
        <creator>Oleg Lukyanchikov</creator>
        
        <creator>Simon Payain</creator>
        
        <subject>Cloud Infrastructure; Big Data; Distributed Databases; Hybrid Clouds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Big data dictate their requirements to hardware and software. Simple migration to cloud data processing, while solving the problem of increasing computational capability, creates some issues: the need to ensure safety, the need to control quality during data transmission, and the need to optimize requests. A computational cloud does not simply provide scalable resources; it also involves network infrastructure, unknown routes and a varying number of user requests. In addition, situations can occur during operation in which the architecture of the application needs to change: part of the data needs to be placed in a private cloud, part in a public cloud, and part stays on the client.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_3-Experimental_Study_of_the_Cloud_Architecture_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Design of PMSA for SBW Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060602</link>
        <id>10.14569/IJACSA.2015.060602</id>
        <doi>10.14569/IJACSA.2015.060602</doi>
        <lastModDate>2015-07-01T07:09:35.4070000+00:00</lastModDate>
        
        <creator>Rached BEN MEHREZ</creator>
        
        <creator>Lilia EL AMRAOUI</creator>
        
        <subject>Genetic Algorithms (GA); Permanent Magnet Synchronous Actuator (PMSA); finite elements analysis; Steer-By-Wire application (SBW); Thermal study; Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In this paper a new topology of Permanent Magnet Synchronous Actuator (PMSA) is used for a steer-by-wire application. The magnetic field patterns are determined from finite element modeling, for different rotor positions and supply currents, using the FEMM software. The designed actuator geometry is then optimized using a Genetic Algorithm in order to improve its electromagnetic characteristics and its resulting torque. Finally, a thermal analysis is carried out for the initial and the optimized actuators. The obtained results show a clear improvement in the actuator’s electromagnetic characteristics and heat distribution.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_2-Optimal_Design_of_PMSA_for_SBW_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intrusion Detection and Countermeasure of Virtual Cloud Systems - State of the Art and Current Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060601</link>
        <id>10.14569/IJACSA.2015.060601</id>
        <doi>10.14569/IJACSA.2015.060601</doi>
        <lastModDate>2015-07-01T07:09:35.3600000+00:00</lastModDate>
        
        <creator>Andrew Carlin</creator>
        
        <creator>Mohammad Hammoudeh</creator>
        
        <creator>Omar Aldabbas</creator>
        
        <subject>Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing, as its influence is likely to spread across the complete IT landscape. Security is one of the major concerns of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks have become more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and have resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by a classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand, identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation, with a focus on structure, clarity, and well-defined building blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who are not necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_1-Intrusion_Detection_and_Countermeasure_of_Virtual_Cloud_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technical Issues and Challenges in Building Human Body Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060624</link>
        <id>10.14569/IJACSA.2015.060624</id>
        <doi>10.14569/IJACSA.2015.060624</doi>
        <lastModDate>2015-07-01T07:09:35.2830000+00:00</lastModDate>
        
        <creator>Meghna Garg</creator>
        
        <creator>Manik Gupta</creator>
        
        <subject>Body Sensor Networks; MAC; Health Monitoring Systems; Internet of Things; Health Cloud; wireless biomedical sensor; wearable sensors; Energy optimization; motion-powered piezoelectric effect; Power consumption; time synchronization; Bandwidth utilization; memory; distributed storage; key management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(6), 2015</description>
        <description>In this research work, an exploration is conducted to identify critical technical issues, problems and challenges in the area of wireless body sensor networks, which are continually emerging as an integral part of health monitoring systems. All this is possible due to the concept of the ‘Internet of Things’ [5], in which day-to-day consumer devices and equipment are connected to the network, enabling information gathering and the management of many vital signals. The first section introduces recent wireless network developments; this is followed by a discussion of existing work on the physical layer, media access control, and other aspects such as energy consumption and the security of such systems. A tabular summary of the gaps and limitations of wireless body area networks is given, and based on this work, future directions are also suggested. Care has been taken to solicit high-impact general papers for conducting this systematic study.</description>
        <description>http://thesai.org/Downloads/Volume6No6/Paper_24-Technical_Issues_and_Challenges_in_Building_Human_Body_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Heuristic Approach for Minimum Set Cover Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040607</link>
        <id>10.14569/IJARAI.2015.040607</id>
        <doi>10.14569/IJARAI.2015.040607</doi>
        <lastModDate>2015-06-11T05:51:33.8870000+00:00</lastModDate>
        
        <creator>Fatema Akhter</creator>
        
        <subject>Set Cover; Greedy Algorithm; LP Rounding Algorithm; Hill Climbing Method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>The Minimum Set Cover Problem has many practical applications in various research areas. This problem belongs to the class of NP-hard theoretical problems. Several approximation algorithms have been proposed to find approximate solutions to this problem, and research is still going on to optimize the solution. This paper studies the existing algorithms for the minimum set cover problem and proposes a heuristic approach to solve the problem using a modified hill climbing algorithm. The effectiveness of the approach is tested on set cover problem instances from the OR-Library. The experimental results show the effectiveness of our proposed approach.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_7-A_Heuristic_Approach_for_Minimum_Set_Cover_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gram–Schmidt Process in Different Parallel Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040606</link>
        <id>10.14569/IJARAI.2015.040606</id>
        <doi>10.14569/IJARAI.2015.040606</doi>
        <lastModDate>2015-06-11T05:51:33.8400000+00:00</lastModDate>
        
        <creator>Genci Berati</creator>
        
        <subject>Gram-Schmidt Algorithm; Parallel programming model; OpenMP; MPI; Control Flow architecture</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>Vector orthogonalisation is an important operation in numerical computing. One of the well-known algorithms for vector orthogonalisation is the Gram–Schmidt algorithm. This is a method for constructing a set of orthogonal vectors in an inner product space, most commonly the Euclidean space Rn. The process takes a finite, linearly independent set S = {b1, b2, …, bk} of vectors for k = n and generates an orthogonal set S1 = {o1, o2, …, ok}. Like most dense operations and big data processing problems, the steps of the Gram–Schmidt process can be performed by parallel algorithms and implemented on parallel programming platforms. The parallelised algorithm depends on the platform used and needs to be adapted for optimum performance on each parallel platform. The paper shows the algorithms and the implementation process of Gram–Schmidt vector orthogonalisation on three different parallel platforms: a) control-flow shared-memory hardware systems with OpenMP, b) control-flow distributed-memory hardware systems with MPI, and c) dataflow architecture systems using Maxeler Data Flow Engine hardware. Using a parallel implementation of Gram–Schmidt vector orthogonalisation as a single running example, this paper describes how the fundamentals of parallel programming are handled on these platforms. The paper highlights the Maxeler implementation of the Gram–Schmidt algorithm in comparison with the traditional platforms, and treats the speedup and overall performance of the three platforms versus sequential execution for a 50-dimensional Euclidean space.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_6-Gram_Schmidt_Process_in_Different_Parallel_Platforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iris Compression and Recognition using Spherical Geometry Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040605</link>
        <id>10.14569/IJARAI.2015.040605</id>
        <doi>10.14569/IJARAI.2015.040605</doi>
        <lastModDate>2015-06-11T05:51:33.7600000+00:00</lastModDate>
        
        <creator>Rabab M. Ramadan</creator>
        
        <subject>3D Iris Recognition; Iris Compression; Geometry coding; Spherical Wavelets</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>This research aims to attract attention to 3D iris compression for storing iris databases. Since a 3D iris database is not available, 2D iris database images are converted to 3D images in order to apply the compression techniques used in the 3D domain, test them and give approximate results, and to focus on this new direction in research. In this research a fully automated 3D iris compression and recognition system is presented. We use spherical-wavelet-based coefficients for efficient representation of the 3D iris. The spherical wavelet transform is used to decompose the iris image into multi-resolution sub-images. A feature representation based on a spherical wavelet parameterization of the iris image is proposed for the 3D iris compression system. To evaluate the performance of the proposed approach, experiments were performed on the CASIA iris database. Experimental results show that the spherical wavelet coefficients yield excellent compression capabilities with a minimal set of features. Haar wavelet coefficients extracted from the iris image were found to generate good recognition results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_5-Iris_Compression_and_Recognition_using_Spherical_Geometry_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Device Based Personalized Equalizer for Improving Hearing Capability of Human Voices in Particular for Elderly Persons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040604</link>
        <id>10.14569/IJARAI.2015.040604</id>
        <doi>10.14569/IJARAI.2015.040604</doi>
        <lastModDate>2015-06-11T05:51:33.7300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Takuto Konishi</creator>
        
        <subject>Frequency response equalization; mobile devices; formant frequency; hearing capability; hearing aids</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>A mobile device based personalized equalizer for improving the hearing capability of human voices, in particular for elderly persons, is proposed. Through experiments, it is found that the proposed equalizer works well for improving hearing capability, raising the voice recognition success ratio by 2 to 55%. According to the investigation of the frequency component analysis and formant detection, most voice sounds have their first to third formant frequencies within the range of 3445 Hz. Therefore, a nonlinear equalizing multiplier is better suited to enhancing the frequency components of the first to third formants in particular. The experimental results with voice input show that a good Percent Correct Recognition (PCR) requires frequency components from 0 to more than 8000 Hz. Also, a cut-off frequency of 8162 Hz would be better for both noise suppression and keeping a good PCR.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_4-Mobile_Device_Based_Personalized_Equalizer_for_Improving_Hearing_Capability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relations between Psychological Status and Eye Movements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040603</link>
        <id>10.14569/IJARAI.2015.040603</id>
        <doi>10.14569/IJARAI.2015.040603</doi>
        <lastModDate>2015-06-11T05:51:33.7130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>EEG; eye movement; psychological status; alpha wave; beta wave</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>Relations between psychological status and eye movements are found through experiments with reading different types of documents as well as playing games. Psychological status can be monitored with an Electroencephalogram (EEG) sensor, while eye movements can be monitored with Near Infrared (NIR) cameras and NIR Light Emitting Diodes (LEDs). EEG signals suffer from noise, while eye movements can be acquired without any influence from noise. Therefore, psychological status can be monitored with eye movement detection instead of EEG signal acquisition if there is a relation between the two. Through the experiments, a strong relation between them is found. In particular, a relation is found between the number of rapid changes in line-of-sight direction and the relatively high frequency components of EEG signals. It is also found that rapid eye movements occur when users are reading documents; a rapid eye movement is defined as a look angle difference of 10 degrees within one second. Users’ line-of-sight directions move rapidly not only when they change lines in the document, but also when they find a word in the document difficult to read.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_3-Relations_between_Psychological_Status_and_Eye_Movements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Psychological Status Monitoring with Cerebral Blood Flow: CBF, Electroencephalogram: EEG and Electro-Oculogram: EOG Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040602</link>
        <id>10.14569/IJARAI.2015.040602</id>
        <doi>10.14569/IJARAI.2015.040602</doi>
        <lastModDate>2015-06-11T05:51:33.7000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Cerebral Blood Flow; CBF; EEG; EOG; psychological status</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>Psychological status monitoring with cerebral blood flow (CBF), EEG and EOG measurements is attempted. Through experiments, it is confirmed that the proposed method for psychological status monitoring is valid. Correlations are also found among the amplitudes of the peak alpha, beta and gamma frequencies of EEG signals, EOG and cerebral blood flow. Therefore, psychological status can be monitored with either EEG measurements or cerebral blood flow and EOG measurements.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_2-Psychological_Status_Monitoring_with_Cerebral_Blood_Flow_CBF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Yahoo!Search and Web API Utilized Mashup based e-Learning Content Search Engine for Mobile Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040601</link>
        <id>10.14569/IJARAI.2015.040601</id>
        <doi>10.14569/IJARAI.2015.040601</doi>
        <lastModDate>2015-06-11T05:51:33.6500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Mashup; API; web 2.0; mobile devices; e-learning content; content retrieval; Yahoo!search BOSS; Web API</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(6), 2015</description>
        <description>A mashup based content search engine for mobile devices is proposed. Mashup technology is defined here as a search engine built on plural different APIs. A mashup not only has plural APIs, but also the following specific features: (1) it enables classification of the contents in concern by using Web 2.0; (2) it may use APIs from different sites; (3) it allows information retrieval on both the client and server sides; (4) it may search contents as an arbitrarily structured hybrid content, mixed from the individual contents of different sites; (5) it enables the use of REST, RSS, Atom, etc., which are formed from XML conversions. The mashup should be a flexible search engine for any content retrieval purpose. The proposed search system allows a 3D-space display of search menus with these peculiarities on Android devices. The proposed search system, featuring Yahoo!search BOSS and Web API, is also applied to e-learning content retrieval. It is confirmed that the system can be used to search a variety of e-learning content of concern efficiently.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No6/Paper_1-Yahoo_Search_and_Web_API_Utilized_Mashup_based_e-Leaning_Content_Search_Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Denoising CT Images using wavelet transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060520</link>
        <id>10.14569/IJACSA.2015.060520</id>
        <doi>10.14569/IJACSA.2015.060520</doi>
        <lastModDate>2015-05-31T05:45:42.0030000+00:00</lastModDate>
        
        <creator>Lubna Gabralla</creator>
        
        <creator>Hela Mahersia</creator>
        
        <creator>Marwan Zaroug</creator>
        
        <subject>Computed Tomography; Discrete wavelet transform; Lung cancer; Thresholding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Image denoising is one of the most significant tasks, especially in medical image processing, where the original images are of poor quality due to the noise and artifacts introduced by the acquisition systems. In this paper, we propose a new image denoising scheme that modifies the wavelet coefficients using a soft-thresholding method, present a comparative study of different wavelet denoising techniques for CT images, and discuss the obtained results. The denoising process rejects noise by thresholding in the wavelet domain. The performance is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE). Finally, the Gaussian filter provides better PSNR and lower MSE values. Hence, we conclude that this filter is an efficient one for preprocessing medical images.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_20-Denoising_CT_Images_using_wavelet_transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A ‘Cognitive Driving Framework’ for Collision Avoidance in Autonomous Vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060519</link>
        <id>10.14569/IJACSA.2015.060519</id>
        <doi>10.14569/IJACSA.2015.060519</doi>
        <lastModDate>2015-05-31T05:45:41.9700000+00:00</lastModDate>
        
        <creator>Alan J. Hamlet</creator>
        
        <creator>Carl D. Crane</creator>
        
        <subject>Multi-agent systems; autonomous vehicles; intent prediction; non-linear filtering; Bayesian filtering;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The Cognitive Driving Framework is a novel method for forecasting the future states of a multi-agent system that takes into consideration both the intentions of the agents and their beliefs about the environment. This is particularly useful for autonomous vehicles operating in an urban environment. The algorithm maintains a posterior probability distribution over agent intents and beliefs in order to forecast their future behavior more accurately. This allows an agent navigating the environment to recognize dangerous situations earlier and more accurately than competing algorithms, and therefore to take actions to prevent collisions. This paper presents the Cognitive Driving Framework in detail and describes its application to intersection navigation for autonomous vehicles. The effects of different parameter choices on the performance of the algorithm are analyzed, and experiments are conducted demonstrating the ability of the algorithm to predict and prevent automobile collisions caused by human error in multiple intersection navigation scenarios. The results are compared to the performance of prevailing methods, namely reactionary planning and constant-velocity forecasting.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_19-A_Cognitive_Driving_Framework_for_Collision_Avoidance_in_Autonomous_Vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Fractal Color Image Compression using YIQ and YUV Color Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060518</link>
        <id>10.14569/IJACSA.2015.060518</id>
        <doi>10.14569/IJACSA.2015.060518</doi>
        <lastModDate>2015-05-31T05:45:41.9400000+00:00</lastModDate>
        
        <creator>Eman A. Al-Hilo</creator>
        
        <creator>Rusul Zehwar</creator>
        
        <subject>Image compression; fractal color image compression; iterated function system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The principle of fractal image coding is that an image converges to a stable image by iterating contractive transformations on an arbitrary initial image. The algorithm partitions the image into a number of range blocks and domain blocks. For every range block, the best-matching domain block is searched for among all domain blocks by performing a set of transformations on each block. For color image compression, fractal coding is applied to the different planes of the color image independently, treating each plane as a gray-level image. The coordinate systems used for color images are RGB, YIQ, and YUV. To encode a color image, the main idea is to divide the image into its three different layers or components and then compress each of these layers separately, handling each layer as an independent image. In this paper, the data of the color components (R, G, B) are transformed twice in two separate programs, one for the YIQ and the other for the YUV color space. The results show that using the YUV color space is more useful and efficient than using YIQ in fractal image compression: PSNR increases by 0.1%, CR increases by 0.31%, and ET decreases by 2.321%.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_18-Comparison_Fractal_Color_Image_Compression_using_YIQ.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Eye-Blink and Face Corpora for Research in Human Computer Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060517</link>
        <id>10.14569/IJACSA.2015.060517</id>
        <doi>10.14569/IJACSA.2015.060517</doi>
        <lastModDate>2015-05-31T05:45:41.9230000+00:00</lastModDate>
        
        <creator>Emmanuel Jadesola Adejoke</creator>
        
        <creator>Ibiyemi Tunji Samuel</creator>
        
        <subject>Face Recognition; Coded Voluntary Eye Blink; Sign Language Communication; Authenticated driven; corpora</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>A major requirement in face recognition research and coded voluntary eye-blink based sign language communication research is a robust face and eye-blink image corpus. The effectiveness, confidence level, and acceptability of algorithms developed for face recognition and eye-blink based sign language communication depend largely on the availability of relevant corpora of international standard in these fields. The wave of security challenges, with attendant wanton destruction of lives and property, particularly in our country, makes the deployment of appropriate information technology to curb it imperative. Hence the motivation of this work: the provision of face and eye-blink image corpora with local content to serve as an input dataset for our developed face-recognition-authenticated, coded eye-blink-triggered actionable alert system.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_17-Development_of_Eye_Blink_and_Face_Corpora_for_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Ferrite Content Measurement based on Image Analysis and Pattern Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060516</link>
        <id>10.14569/IJACSA.2015.060516</id>
        <doi>10.14569/IJACSA.2015.060516</doi>
        <lastModDate>2015-05-31T05:45:41.8930000+00:00</lastModDate>
        
        <creator>Hafiz Muhammad Tanveer</creator>
        
        <creator>Hafiz Muhammad Tahir Mustafa</creator>
        
        <creator>Waleed Asif</creator>
        
        <creator>Munir Ahmad</creator>
        
        <creator>Muhammad Anjum Javed</creator>
        
        <creator>Maqsood Ahmad</creator>
        
        <subject>Pattern classification; Decision threshold; Machine learning; Microstructure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The existing manual point-counting technique for ferrite content measurement is a difficult, time-consuming method with limited accuracy due to limited human perception and the error induced by points on the boundaries of the grid spacing. In this paper, we present a novel algorithm, based on image analysis and pattern classification, to evaluate the volume fraction of ferrite in a microstructure containing ferrite and austenite. The prime focus of the proposed algorithm is to solve the problem of ferrite content measurement using an automatic binary classification approach. Classification of image data into two distinct classes, using an optimum threshold-finding method, is the key idea behind the new algorithm. Automation of the ferrite content measurement process, which speeds up the specimen testing procedure, is the main feature of the newly developed algorithm. The obtained results reflect an improved performance index through reduced error sources, validated by comparison with the well-known method of Ohtsu.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_16-Automatic_Ferrite_Content_Measurement_based_on_Image_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Evaluation of the Effect of Gradient on Reflection Coefficient of Continuously Graded Layer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060515</link>
        <id>10.14569/IJACSA.2015.060515</id>
        <doi>10.14569/IJACSA.2015.060515</doi>
        <lastModDate>2015-05-31T05:45:41.8630000+00:00</lastModDate>
        
        <creator>Ahmed Markou</creator>
        
        <creator>Hassan Nounah</creator>
        
        <subject>surface acoustic waves; nondestructive testing; functionally graded materials; dispersion curve; Lamb modes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>This paper presents a numerical model, based on the transfer matrix method, for the propagation of surface acoustic waves at the interface formed by a coupling liquid and a continuously inhomogeneous thin layer on a semi-infinite substrate. The two-dimensional spectrum of the reflection coefficient computed by this model allows determining the modes that propagate at the studied interface. The model treats different gradient profiles, and the numerical results obtained show that the reflection coefficient is sensitive to the variation of these gradients.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_15-Numerical_Evaluation_of_the_Effect_of_Gradient_on_Reflection_Coefficient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Enhancement of Scheduling Algorithm in Heterogeneous Distributed Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060514</link>
        <id>10.14569/IJACSA.2015.060514</id>
        <doi>10.14569/IJACSA.2015.060514</doi>
        <lastModDate>2015-05-31T05:45:41.8470000+00:00</lastModDate>
        
        <creator>Aida A. NASR</creator>
        
        <creator>Nirmeen A. EL-BAHNASAWY</creator>
        
        <creator>Ayman EL-SAYED</creator>
        
        <subject>static task scheduling; heterogeneous distributed computing systems; Meta-heuristic algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Efficient task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems. Several algorithms have been proposed for both homogeneous and heterogeneous distributed computing systems. In this paper, a new static scheduling algorithm called Node Duplication in Critical Path (NDCP) is proposed to schedule tasks efficiently on heterogeneous distributed computing systems. The NDCP algorithm focuses on reducing the makespan and provides better performance than other algorithms in terms of speedup and efficiency. It consists of two phases: a priority phase and a processor selection phase. A theoretical analysis of the NDCP algorithm against other algorithms on a Directed Acyclic Graph (DAG) shows its better performance.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_14-Performance_Enhancement_of_Scheduling_Algorithm_in_Heterogeneous.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Topology-Shape-Metric and FUZZY Genetic Algorithm for Automatic Planar Hierarchical and Orthogonal Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060513</link>
        <id>10.14569/IJACSA.2015.060513</id>
        <doi>10.14569/IJACSA.2015.060513</doi>
        <lastModDate>2015-05-31T05:45:41.8170000+00:00</lastModDate>
        
        <creator>Nahla F. Omran</creator>
        
        <creator>Sara F. Abd-el ghany</creator>
        
        <subject>graph drawing; hierarchical graphs; topology-shape-metric; fuzzy genetic algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Graphs appear in many applications such as computer networks, data networks, and PERT networks. When a network includes a small number of devices, it can be drawn easily by hand; as the number of devices increases, drawing becomes a very difficult task. For this problem, we develop a new method for automatic graph drawing based on two steps: the first applies the topology-shape-metric approach to orthogonal drawings on the grid, and the second applies a directed fuzzy genetic algorithm. In the topology-shape-metric approach, the final drawing is achieved through three sequential steps: planarization, orthogonalization, and compaction. Each of these steps is responsible for the quality of the final drawing. The genetic algorithm is then applied at the planarization step of the topology-shape-metric to find the geometric position of each vertex so as to minimize bends in the graph. The developed technique generates a greater number of planar embeddings by varying the order of edge insertion. This is shown clearly in the results given in the paper.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_13-Applying_Topology_Shape_Metric_and_FUZZY_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adoption of e-Government in Pakistan: Demand Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060512</link>
        <id>10.14569/IJACSA.2015.060512</id>
        <doi>10.14569/IJACSA.2015.060512</doi>
        <lastModDate>2015-05-31T05:45:41.7830000+00:00</lastModDate>
        
        <creator>Zulfiqar Haider</creator>
        
        <creator>Chen Shuwen</creator>
        
        <creator>Zareen Abbassi</creator>
        
        <subject>E-government; adoption; demand perspective; Pakistan</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The purpose of this paper is to investigate the factors that enable citizen adoption of e-Government services in Pakistan, where these services are at an early stage. Understanding citizens' adoption of e-Government is an essential topic, as the use of e-Government has become an integral part of administration. The success of such initiatives depends largely on the effective use of e-Government services. Inclusive e-Government is the gateway to the efficiency promised by electronic government. This study utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) model to examine the influential factors in the adoption and use of e-Government services in Pakistan from a national point of view. An online survey was conducted, and a descriptive statistical analysis was performed on the responses obtained from 200 Pakistani nationals. The adopted model can be used as a guideline for the implementation of e-Government services in Pakistan. This study recommends that the government should run broad advertising campaigns to ensure that people are aware of the services and use them. This implies that the government should place emphasis on increasing awareness of the services, demonstrating their benefits to citizens, and building confidence in the system.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_12-Adoption_of_e-Government_in_Pakistan_Demand_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Removal of Gray, Black and Cooperative Black Hole Attacks in AODV Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060511</link>
        <id>10.14569/IJACSA.2015.060511</id>
        <doi>10.14569/IJACSA.2015.060511</doi>
        <lastModDate>2015-05-31T05:45:41.7700000+00:00</lastModDate>
        
        <creator>Hosny M. Ibrahim</creator>
        
        <creator>Nagwa M. Omar</creator>
        
        <creator>Ebram K. William</creator>
        
        <subject>MANET; AODV; Black Hole Attack; Gray Hole Attack; Cooperative Black Hole Attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>A mobile ad hoc network (MANET) is an autonomous, self-configuring, infrastructure-less wireless network. MANETs are vulnerable to many routing security threats due to the unreliability of their nodes, which are highly involved in the routing process. In this paper, a new technique is proposed to enhance the security of one of the most popular MANET routing protocols, Ad hoc On-Demand Distance Vector (AODV), with minimum routing overhead and a high packet delivery ratio. The proposed technique detects and removes black hole, gray hole, and cooperative black hole AODV attacks using a mobile backbone network constructed from randomly moving regular MANET nodes based on their trust value, location, and power. The backbone network monitors regular nodes, as well as each other, to periodically estimate monitoring trust values that represent the reliability of each node in the network. A drop in the monitoring trust value of any node is used as a clue to its malicious behavior. The backbone network also tries to bait malicious nodes into replying to a request for a route to a fake destination address. The proposed technique uses the control packets of AODV to exchange its control information, which greatly reduces the overhead. The simulation results show that the proposed technique is more secure than AODV and other recently introduced techniques.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_11-Detection_and_Removal_of_Gray_Black.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CPLD-Based Circuit Design of IGBT Dead-Time Compensation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060510</link>
        <id>10.14569/IJACSA.2015.060510</id>
        <doi>10.14569/IJACSA.2015.060510</doi>
        <lastModDate>2015-05-31T05:45:41.7530000+00:00</lastModDate>
        
        <creator>Qing-zhen WANG</creator>
        
        <creator>Guo-hui ZENG</creator>
        
        <creator>Jin LIU</creator>
        
        <creator>Xing ZHAN</creator>
        
        <subject>Dead-time compensation; inverter; compensation method; circuit design; CPLD; IGBT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>An IGBT (insulated-gate bipolar transistor) dead-time compensation circuit is very important for improving the output voltage waveform of an inverter and reducing harmonics in the output current. Thus, many compensation strategies have been reported in the literature and implemented in industrial drives recently. Overall, dead-time compensation methods can be divided into hardware compensation and software compensation. Hardware compensation requires additional hardware circuits, which means additional space and cost; moreover, the additional circuit is prone to interference with other components, which can reduce the compensation accuracy. Software compensation, on the other hand, takes up considerable memory space and additional input-output ports of the processor, which often results in added computation and heat dissipation in the controller. In this paper, a CPLD (complex programmable logic device)-based circuit design for dead-time compensation is presented to overcome these drawbacks. It is verified that the circuit not only simplifies existing inverter dead-time compensation designs but also has the advantages of small volume, strong anti-interference ability, and high compensation precision. The simulation results validate that this method is feasible and effective.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_10-CPLD_Based_Circuit_Design_of_IGBT_Dead-Time_Compensation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy PI Speed Controller based on Feedback Compensation Strategy for PMSM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060509</link>
        <id>10.14569/IJACSA.2015.060509</id>
        <doi>10.14569/IJACSA.2015.060509</doi>
        <lastModDate>2015-05-31T05:45:41.7230000+00:00</lastModDate>
        
        <creator>Ou Sheng</creator>
        
        <creator>Liu Haishan</creator>
        
        <creator>Liu Guoying</creator>
        
        <creator>Zeng Guohui</creator>
        
        <creator>Zhan Xing</creator>
        
        <creator>Wang Qingzhen</creator>
        
        <subject>permanent magnet synchronous motor(PMSM); Fuzzy PI; torque feedback compensation; anti-disturbance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>In order to solve the problem of the limited robustness and disturbance rejection of the traditional PI speed controller for the permanent magnet synchronous motor, a fuzzy PI speed controller based on load torque feedback compensation is proposed. The combination of the fuzzy PI control strategy and the load feedback compensation method can enhance the robustness and disturbance rejection of the speed loop. According to the validated results of simulation and experiments, with this PMSM speed controller the robustness of the system's speed control is enhanced markedly, and its anti-disturbance capacity is also improved significantly.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_9-A_Fuzzy_PI_Speed_Controller_based_on_Feedback_Compensation_Strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Research of Communication Interrupt Fault for Shanghai Metro Data Transmission System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060508</link>
        <id>10.14569/IJACSA.2015.060508</id>
        <doi>10.14569/IJACSA.2015.060508</doi>
        <lastModDate>2015-05-31T05:45:41.7070000+00:00</lastModDate>
        
        <creator>Jianru Liang</creator>
        
        <creator>Xinyuan Lu</creator>
        
        <creator>Cong Shi</creator>
        
        <creator>Minglai Yang</creator>
        
        <subject>Data Transmission System (DTS); L3 Ethernet switch; Terminal Server; interlock</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>A line of the Shanghai metro has been in use for nearly fifteen years and has been extended three times during this period. The existing line's data transmission system has been modified over the last decades and has adopted many kinds of data transmission technology. Through the analysis and research of communication interrupt faults of this line's data transmission system, which frequently occurred at certain sites, maintainers can find various hidden security dangers in time and take corresponding measures, which has improved the quality of metro operation.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_8-Analysis_and_Research_of_Communication_Interrupt_Fault.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The FIR Digital Filter Design based on IWPSO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060507</link>
        <id>10.14569/IJACSA.2015.060507</id>
        <doi>10.14569/IJACSA.2015.060507</doi>
        <lastModDate>2015-05-31T05:45:41.6730000+00:00</lastModDate>
        
        <creator>Xinnan Hu</creator>
        
        <creator>Yujia Wang</creator>
        
        <creator>Kun Su</creator>
        
        <subject>FIR filter design; parameter optimization; improved weight particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The essence of finite impulse response (FIR) digital filter design is a parameter optimization problem; that is, the optimal parameters of the FIR digital filter are the core of the design. Because the traditional FIR digital filter design method not only has limited accuracy but also makes the sideband frequency difficult to determine, Improved Weight Particle Swarm Optimization (IWPSO) is used to design the FIR digital filter, with less computation and a fast convergence speed. The simulation results demonstrate that IWPSO achieves better approximation properties and band-pass characteristics. Moreover, the convergence of the IWPSO algorithm yields good results in filter design efficiency.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_7-The_Fir_Digital_Filter_Design_based_on_Iwpso.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fall Monitoring Device for Old People based on Tri-Axial Accelerometer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060506</link>
        <id>10.14569/IJACSA.2015.060506</id>
        <doi>10.14569/IJACSA.2015.060506</doi>
        <lastModDate>2015-05-31T05:45:41.6600000+00:00</lastModDate>
        
        <creator>Jing Luo</creator>
        
        <creator>Bocheng Zhong</creator>
        
        <creator>Dinghao Lv</creator>
        
        <subject>Fall monitoring; tri-axis acceleration sensor; threshold; GPRS module</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>To enable timely and effective detection of falls in the elderly, a fall monitoring device based on a tri-axial accelerometer is designed. The device collects the acceleration of the elderly person and the angle between the person and the horizontal plane using an MPU6050 tri-axial accelerometer, and compares the acceleration and angle with threshold values to determine whether the person has fallen. After a delay, the angle is compared with the threshold again to judge whether the person is still down; finally, a text message is sent to a guardian's mobile phone via a GPRS module so that the elderly person can be helped.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_6-Fall_Monitoring_Device_for_Old_People_based_on_Tri_Axial_Accelerometer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Linked Data Technologies for Online Newspapers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060505</link>
        <id>10.14569/IJACSA.2015.060505</id>
        <doi>10.14569/IJACSA.2015.060505</doi>
        <lastModDate>2015-05-31T05:45:41.6270000+00:00</lastModDate>
        
        <creator>Tsvetanka Georgieva-Trifonova</creator>
        
        <creator>Tihomir Stefanov</creator>
        
        <subject>semantic web; ontology; linked data; SPARQL endpoint; RDF dataset; online newspaper</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>The constantly growing data volume in companies, along with the necessity of finding information in the shortest possible time, calls for methods of information search different from the ones conventionally used. The semantic technologies, developed in the late 1990s and the beginning of the new century, are viewed as a new generation of databases and as text-analyzing technologies. The present paper researches the opportunities and examines the advantages of applying semantic web technologies and linked data to online newspapers. Besides, an RDF-based ontology for the purposes of a system for the study and evaluation of online editions of regional daily newspapers is proposed. A SPARQL endpoint is implemented to access the RDF data.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_5-Applying_Linked_Data_Technologies_for_Online_Newspapers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifying the Relationship between Hit Count Estimates and Wikipedia Article Traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060504</link>
        <id>10.14569/IJACSA.2015.060504</id>
        <doi>10.14569/IJACSA.2015.060504</doi>
        <lastModDate>2015-05-31T05:45:41.5800000+00:00</lastModDate>
        
        <creator>Tina Tian</creator>
        
        <creator>Ankur Agrawal</creator>
        
        <subject>hit count estimations; search engines; Wikipedia article traffic; cross correlation; positive delay; negative delay; prediction of Web hosting trend</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>This paper analyzes the relationship between search engine hit counts and Wikipedia article views by evaluating the cross correlation between them. We observe the hit count estimates of three popular search engines over a month and compare them with the Wikipedia page views. The strongest cross correlations are recorded with their delays in days. We present the results in both graphs and quantitative data among different search engines. We also investigate the predicting trends between the hit counts and Wikipedia article traffic.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_4-Quantifying_the_Relationship_between_Hit_Count_Estimates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Digital Image Exposure Status Detection and Circuit Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060503</link>
        <id>10.14569/IJACSA.2015.060503</id>
        <doi>10.14569/IJACSA.2015.060503</doi>
        <lastModDate>2015-05-31T05:45:41.5670000+00:00</lastModDate>
        
        <creator>Li Hongqin</creator>
        
        <creator>Wu Jianzhen</creator>
        
        <creator>Zhang Liping</creator>
        
        <creator>Ning Jun</creator>
        
        <subject>real-time; auto exposure; fast and parallel method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Auto exposure is an important part of digital image signal processing. This paper studies the detection of exposure status and presents a fast, parallel detection method. The method comprises the following steps: first, obtain the current image, count the numbers of pixels in the bright and dark regions of the image, and obtain those pixels&#8217; brightness; then determine the exposure parameters based on the proportions of the counted pixels in the bright and dark regions relative to preset values; if the actual proportion is lower than the preset value, continue to adjust the exposure parameters until the pixel brightness reaches the preset brightness threshold. Experiments show that the computational complexity and operational demand are low, so the exposure status of the image can be determined quickly, improving the real-time capability of image exposure control. The proposed method makes the whole digital image signal processing system work smoothly and reliably. The circuit implementation of the method is simple, with high real-time controllability. The method has been successfully filed as a Chinese patent.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_3-Real_Time_Digital_Image_Exposure_Status_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Adaptive Multi-Gene-Set Genetic Algorithm (AMGA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060502</link>
        <id>10.14569/IJACSA.2015.060502</id>
        <doi>10.14569/IJACSA.2015.060502</doi>
        <lastModDate>2015-05-31T05:45:41.5330000+00:00</lastModDate>
        
        <creator>Adi A. Maaita</creator>
        
        <creator>Jamal Zraqou</creator>
        
        <creator>Fadi Hamad</creator>
        
        <creator>Hamza A. Al-Sewadi</creator>
        
        <subject>Genetic algorithm; Multi-gene-set; Single-gene-set; Artificial Intelligence; Generic algorithm; Generic architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Genetic algorithms have been used extensively in solving complex solution-space search problems. However, certain problems can include multiple sub-problems, in which multiple searches through distinct solution-spaces are required before the final solution combining all the sub-solutions is found. This paper presents a generic design of genetic algorithms for solving complex solution-space search problems that involve multiple sub-solutions. Such problems are very difficult to solve using basic genetic algorithm designs that utilize a single gene-set per chromosome. The suggested algorithm presents a generic solution which utilizes both multi-gene-set chromosomes and an adaptive gene mutation rate scheme. The results from experiments on an automatic graphical user interface generation case study show that the suggested algorithm is capable of producing successful solutions where the common single-gene-set design fails.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_2-A_Generic_Adaptive_Multi_Gene_Set_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Bio-Informatics Framework: Research on 3D Sensor Data of Human Activities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060501</link>
        <id>10.14569/IJACSA.2015.060501</id>
        <doi>10.14569/IJACSA.2015.060501</doi>
        <lastModDate>2015-05-31T05:45:41.4730000+00:00</lastModDate>
        
        <creator>Sajid Ali</creator>
        
        <creator>Wu Zhongke</creator>
        
        <creator>Muhammad Saad Khan</creator>
        
        <creator>Ahmad Tisman Pasha</creator>
        
        <creator>Muhammad Adana Khalid</creator>
        
        <creator>Zhou Mingquan</creator>
        
        <subject>Sensor data; Bioinformatics behavior; Optical actives; BSP domain; Marker Configuration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(5), 2015</description>
        <description>Due to the increasing attraction of motion capture system technology and the usage of captured data in a wide range of research-oriented applications, a framework has been developed as an improved version of the MOCAP TOOLBOX on the Matlab platform. Firstly, we introduce a script that makes public motion capture data convenient to work with. Various functions based on dynamic programming, using the Body Segment Parameters (BSP), edit and configure the positions of markers according to the data. The framework can be used to visualize and refine the data without the MLS viewer or the C3D editor software. It opens a valuable way of using sensor data in many research areas, such as gait movements, marker analysis, compression and motion patterns, bioinformatics, and animation. Experiments performed on the CMU and ACCAD public mocap data achieved a more accurately corrected configuration scheme for 3D markers compared with the prior art, especially for C3D files. Another distinction of this work is that it handles the distortion of extra markers and provides a meaningful way to use captured data.</description>
        <description>http://thesai.org/Downloads/Volume6No5/Paper_1-A_New_Bio_Informatics_Framework_Research_on_3D_Sensor_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Hybrid (SVMs -CSOA) Architecture for classifying Electrocardiograms Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040505</link>
        <id>10.14569/IJARAI.2015.040505</id>
        <doi>10.14569/IJARAI.2015.040505</doi>
        <lastModDate>2015-05-10T11:31:40.4170000+00:00</lastModDate>
        
        <creator>Assist. Prof. Majida Ali Abed</creator>
        
        <creator>Assist. Prof. Dr. Hamid Ali Abed Alasad</creator>
        
        <subject>Electrocardiograms (ECG); classification; support vector machine; Cat Swarm Optimization</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(5), 2015</description>
        <description>An ElectroCardioGram (ECG) is a medical test that provides diagnostically relevant information about heart activity. Many heart diseases can be detected by analyzing the ECG, as this method is very helpful for assessing the status of the human heart. Support Vector Machines (SVM) have been widely applied in classification. In this paper we present an SVM parameter optimization approach using a novel evolutionary metaheuristic, the Cat Swarm Optimization Algorithm (CSOA). The results obtained assess the feasibility of the new hybrid (SVMs-CSOA) architecture and demonstrate an improvement in terms of accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No5/Paper_5-New_Hybrid_SVMs-CSOA_Architecture_for_classifying_Electrocardiograms_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Control-Navigation System- Based Adaptive Optimal Controller &amp; EKF Localization of DDMR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040504</link>
        <id>10.14569/IJARAI.2015.040504</id>
        <doi>10.14569/IJARAI.2015.040504</doi>
        <lastModDate>2015-05-10T11:31:40.3830000+00:00</lastModDate>
        
        <creator>Dalia Kass Hanna</creator>
        
        <creator>Abdulkader Joukhadar</creator>
        
        <subject>DDMR modelling; Localization; LQR; Adaptive LQR; EKF; System Uncertainty</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(5), 2015</description>
        <description>This paper presents a newly developed approach for the Differential Drive Mobile Robot (DDMR). The main goal is to provide a high dynamic system response at the joint space level, the low-level control, as well as to enhance DDMR localization. The proposed approach relies on a Linear Quadratic Regulator (LQR) for the low-level control and an Adaptive LQR for the high-level control. The investigated DDMR is considered a highly nonlinear system due to the uncertainty exhibited by the mobile robot combined with actuator nonlinearity. The DDMR&#8217;s uncertainty leads to erroneous localization. An Extended Kalman Filter (EKF)-based approach with sensor fusion is used to enhance the robot&#8217;s degree of belief in its posture. Intensive simulation results obtained from the developed uncertain model and the proposed approach have shown very good dynamic performance at the low-level control and very good convergence to the desired posture along the mobile robot&#8217;s path in the presence of robot uncertainty.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No5/Paper_4-A_Novel_Control_Navigation_System_Based_Adaptive_Optimal_Controller_EKF_Localization_of_DDMR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cardiac Arrhythmia Classification by Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040503</link>
        <id>10.14569/IJARAI.2015.040503</id>
        <doi>10.14569/IJARAI.2015.040503</doi>
        <lastModDate>2015-05-10T11:31:40.3700000+00:00</lastModDate>
        
        <creator>Hadji Salah</creator>
        
        <creator>Ellouze Noureddine</creator>
        
        <subject>ECG; QT database; Wavelet Transform; Classification; Kohonen self-organization map</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(5), 2015</description>
        <description>Cardiovascular diseases are a major public health concern; they are the leading cause of mortality in the world. Many studies have been carried out to reduce the risk, including promoting education, prevention, and the monitoring of patients at risk. In this paper we propose a heartbeat classification system. The system is based mainly on the Wavelet Transform to extract features and a Kohonen self-organizing map for classification. The following arrhythmia classes are considered in this study: N (Normal), V (Premature Ventricular), A (Atrial Premature), S (Supraventricular Extrasystole), F (Fusion N+S), and R (Right Bundle Branch).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No5/Paper_3-Cardiac_Arrhythmia_Classification_by_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integral Lqr-Based 6dof Autonomous Quadrocopter Balancing System Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040502</link>
        <id>10.14569/IJARAI.2015.040502</id>
        <doi>10.14569/IJARAI.2015.040502</doi>
        <lastModDate>2015-05-10T11:31:40.3230000+00:00</lastModDate>
        
        <creator>A Joukhadar</creator>
        
        <creator>I Hasan</creator>
        
        <creator>A Alsabbagh</creator>
        
        <creator>M Alkouzbary</creator>
        
        <subject>Quadrocopter; Balancing Control; Stability of Quadrocopter; LQR; Integral LQR; Modelling of Quadrocopter</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(5), 2015</description>
        <description>This paper presents an LQR-based 6DOF control of an unmanned aerial vehicle (UAV), namely a small-scale quadrocopter. Due to its high nonlinearity and high degree of coupling, the control of a UAV is very challenging. Quadrocopter trajectory tracking in 3D space is greatly affected by the quadrocopter&#8217;s balancing around its roll-pitch-yaw frame. Lack of precise tracking control about the body frame may result in inaccurate localization with respect to a fixed frame. Thus, the present paper provides a high dynamic control response for the tracking and balancing system. An integral LQR-based controller is proposed to enhance the dynamic system response for balancing on roll, pitch and yaw. The control of the hovering angles consists of two cascaded loops: an inner loop for the angular speed control of each angular motion around the body frame axes, and an outer loop for the desired position control. In total, the proposed balancing control system on roll, pitch and yaw has six control loops. The proposed control approach is implemented on an embedded ATMega2560 microcontroller system. Practical results obtained from the proposed control approach exhibit a fast and robust control response and high disturbance rejection.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No5/Paper_2-Integral_Lqr-Based_6dof_Autonomous_Quadrocopter_Balancing_System_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Influence of Stubborn Agents in a Multi-Agent Network for Inter-Team Cooperation/Negotiation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040501</link>
        <id>10.14569/IJARAI.2015.040501</id>
        <doi>10.14569/IJARAI.2015.040501</doi>
        <lastModDate>2015-05-10T11:31:40.2130000+00:00</lastModDate>
        
        <creator>Eugene S. Kitamura</creator>
        
        <creator>Akira Namatame</creator>
        
        <subject>Multi-agent system; consensus problem; stubborn agents; complex network; dumbbell network; Laplacian matrix; boundary role person; coalition formation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(5), 2015</description>
        <description>When teams interact for cooperation or negotiation, unique dynamics occur depending on the conditions. In this paper, a multi-agent system is used under the constraint of a network structure to model two teams of agents interacting toward a common consensus, albeit in the presence of stubborn agents. The networks used were a minimum dumbbell network and two scale-free networks joined together. The network topology, which is a global characteristic, along with the presence of conflicting stubborn agents, can create various conditions that affect teamwork in cooperation or negotiation. Notable characteristics revealed are boundary role persons (BRPs), lack of unity, the need for a third-party moderator, coalition formation, and loyalty of the BRP dependent on the distance from the core ideology of the team. Both local and global characteristics of network structures contribute to such phenomena. The modeling method and corresponding simulation results provide valuable insight for predicting possible social dynamics and outcomes when planning cooperation/negotiation tactics.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No5/Paper_1-The_Influence_of_Stubborn_Agents_in_a_Multi-Agent_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Optimization of Cloud Computing Services in a Networked Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060420</link>
        <id>10.14569/IJACSA.2015.060420</id>
        <doi>10.14569/IJACSA.2015.060420</doi>
        <lastModDate>2015-05-01T05:48:11.2370000+00:00</lastModDate>
        
        <creator>Eli WEINTRAUB</creator>
        
        <creator>Yuval COHEN</creator>
        
        <subject>Cloud Computing; Pricing Model; Cost optimization; Software as a service; Platform as a service; Infrastructure as a service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Cloud computing service providers offer their customers services so as to maximize their revenues, whereas customers wish to minimize their costs. In this paper we concentrate on the consumers&#39; point of view. Cloud computing services are organized in a hierarchy: software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles which include the software, platform and infrastructure services; they also offer platform services bundled with infrastructure services. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. This bundling policy is likely to change in the long run, since it contradicts economic competition theory, causes an unfair pricing model and locks consumers in to specific service providers. In this paper we assume the existence of a free competitive market, in which consumers are free to switch their services among providers. We assume that free market competition will force vendors to adopt open standards, improve the quality of their services and offer a large variety of cloud services in all layers. Our model is aimed at the potential customer who wishes to find the optimal combination of service providers that minimizes his costs. We propose three possible strategies for implementation of the model in organizations. We formulate the mathematical model and illustrate its advantages compared to existing pricing practices used by cloud computing consumers.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_20-Cost_Optimization_of_Cloud_Computing_Services_in_a_Networked_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Menu Positioning on Web Pages. Does it Matter?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060419</link>
        <id>10.14569/IJACSA.2015.060419</id>
        <doi>10.14569/IJACSA.2015.060419</doi>
        <lastModDate>2015-05-01T05:48:11.2230000+00:00</lastModDate>
        
        <creator>Dr Pietro Murano</creator>
        
        <creator>Tracey J. Lomas</creator>
        
        <subject>User interfaces; menu design; Interface navigation; evaluation; usability; universal design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>This paper concerns an investigation by the authors into the efficiency and user opinions of menu positioning in web pages. While the idea and use of menus on web pages is not new, the authors feel there is not enough empirical evidence to help designers choose an appropriate menu position. We therefore present the design and results of an empirical experiment, investigating the usability of menu positioning on web pages. A four condition experiment was conducted by the authors. Each condition tested a different menu position. The menu positions tested were left vertical, right vertical, top horizontal and bottom horizontal. The context was a fictitious online store. The results, based on statistical analysis and statistically significant findings, suggest that the top horizontal and left vertical positioned menus incurred fewer errors and fewer mouse clicks. Furthermore, the user satisfaction ratings were in line with the efficiency aspects observed.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_19-Menu_Positioning_on_Web_Pages_Does_it_Matter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case-Based Reasoning for Selecting Study Program in Senior High School</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060418</link>
        <id>10.14569/IJACSA.2015.060418</id>
        <doi>10.14569/IJACSA.2015.060418</doi>
        <lastModDate>2015-05-01T05:48:11.1900000+00:00</lastModDate>
        
        <creator>Sri Mulyana</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Edi Winarko</creator>
        
        <subject>Case-Based Reasoning; case retrieval; similarity degree; new cases; recommended solutions; selecting study program</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>One of the reasoning methods in expert systems is Case-Based Reasoning (CBR). A problem is solved by searching for the past cases in the case base with the highest similarity degree. This implies that the calculation of the similarity degree among cases is an important aspect of CBR. In this study, a computer reasoning system based on CBR is developed for selecting a study program in Senior High School (SHS). This application can be used to assist students in selecting a study program. The cases used in the study include results of the intelligence test, the student&#8217;s interests, and grades in several subjects. The similarity degree between each case in the case base and each newly entered case is calculated, and the cases with the highest similarity degree are recommended as solutions.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_18-Case-Based_Reasoning_for_Selecting_Study_Program_in_Senior_High_School.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Input Parameters for Stream and Pathline Seeding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060417</link>
        <id>10.14569/IJACSA.2015.060417</id>
        <doi>10.14569/IJACSA.2015.060417</doi>
        <lastModDate>2015-05-01T05:48:11.1600000+00:00</lastModDate>
        
        <creator>Tony McLoughlin</creator>
        
        <creator>Matt Edmunds</creator>
        
        <creator>Chao Tong</creator>
        
        <creator>Robert S Laramee</creator>
        
        <creator>Ian Masters</creator>
        
        <creator>Guoning Chen</creator>
        
        <creator>Nelson Max</creator>
        
        <creator>Harry Yeh</creator>
        
        <creator>Eugene Zhang</creator>
        
        <subject>Seeding Parameter Sensitivity; Uncertainty; Exploratory Visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Uncertainty arises in all stages of the visualization pipeline. However, the majority of flow visualization applications convey no uncertainty information to the user. In tools where uncertainty is conveyed, the focus is generally on data, such as error that stems from the numerical methods used to generate a simulation, or on uncertainty associated with mapping visualization primitives to data. Our work is aimed at another source of uncertainty: that associated with user-controlled input parameters. The navigation and stability analysis of user parameters has received increasing attention recently. This work presents an investigation of this topic for flow visualization, specifically for three-dimensional streamline and pathline seeding. From a dynamical systems point of view, seeding can be formulated as a predictability problem based on an initial condition. Small perturbations in the initial value may result in large changes in the streamline in regions of high unpredictability. Analyzing this predictability quantifies the perturbation a trajectory is subjected to by the flow. In other words, some predictions are less certain than others as a function of initial conditions. We introduce novel techniques to visualize important user input parameters such as streamline and pathline seeding position in both space and time, seeding rake position and orientation, and inter-seed spacing. The implementation is based on a metric which quantifies similarity between stream and pathlines. This is important for Computational Fluid Dynamics (CFD) engineers as, even with the variety of seeding strategies available, manual seeding using a rake is ubiquitous. We present methods to quantify and visualize the effects that changes in user-controlled input parameters have on the resulting stream and pathlines. We also present various visualizations to help CFD scientists intuitively and effectively navigate this parameter space. The reaction from a domain expert in fluid dynamics is also reported.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_17-Visualization_of_Input_Parameters_for_Stream_and_Pathline_Seeding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lossless Quality Steganographic Color Image Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060416</link>
        <id>10.14569/IJACSA.2015.060416</id>
        <doi>10.14569/IJACSA.2015.060416</doi>
        <lastModDate>2015-05-01T05:48:11.1270000+00:00</lastModDate>
        
        <creator>Tamer Rabie</creator>
        
        <subject>Lossless Quality Compression; Steganography; Color Image Compression; Lab color space; RGB color space; Frequency Domain Data Hiding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>This paper develops a steganography-based paradigm for lossless-quality compression of high-resolution color images acquired by megapixel cameras. Our scheme combines space-domain and frequency-domain image processing operations: in the space domain, color-brightness separation is exploited, and in the frequency domain, the spectral properties of the Fourier magnitude and phase of the color image are exploited. Working in both domains concurrently allows for an approach to ultrahigh-resolution image compression that addresses both quality and storage size. Experimental results as well as empirical observations show that our technique exceeds the highest-quality JPEG image compression standard in terms of compression rates while being very competitive with JPEG in the overall fidelity of the decompressed image, with the added advantage of being able to recover the original fine details of the color image without any of the degradations common in lossy image compression techniques.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_16-Lossless_Quality_Steganographic_Color_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Level Assessment in e-Learning Systems Using Machine Learning and User Activity Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060415</link>
        <id>10.14569/IJACSA.2015.060415</id>
        <doi>10.14569/IJACSA.2015.060415</doi>
        <lastModDate>2015-05-01T05:48:11.0970000+00:00</lastModDate>
        
        <creator>Nazeeh Ghatasheh</creator>
        
        <subject>Concept Maps; Multi-Class Classification; Machine Learning; Electronic Learning; Activity Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Electronic Learning has been one of the foremost trends in education so far. Such importance draws attention to an important shift in the educational paradigm. Due to the complexity of the evolving paradigm, the prospective dynamics of learning require an evolution of knowledge delivery and evaluation. This research work puts forward a futuristic design of an autonomous and intelligent e-Learning system, in which machine learning and user activity analysis play the role of an automatic evaluator of the knowledge level. It is important to assess the knowledge level in order to adapt content presentation and to have a more realistic evaluation of online learners. Several classification algorithms are applied to predict the knowledge level of the learners, and the corresponding results are reported. Furthermore, this research proposes a modern design of a dynamic learning environment that follows the most recent trends in e-Learning. The experimental results illustrate the overall performance superiority of a support vector machine model in evaluating knowledge levels, with 98.6% of instances correctly classified and a mean absolute error of 0.0069.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_15-Knowledge_Level Assessment_in_e-Learning_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Distributed Intrusion Detection System for Vehicular Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060414</link>
        <id>10.14569/IJACSA.2015.060414</id>
        <doi>10.14569/IJACSA.2015.060414</doi>
        <lastModDate>2015-05-01T05:48:11.0800000+00:00</lastModDate>
        
        <creator>Leandros A. Maglaras</creator>
        
        <subject>VANET; Intrusion Detection; OCSVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). By combining static and dynamic detection agents that can be mounted on central vehicles with a control center to which alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real-time anomaly detection with good accuracy and response time.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_14-A_Novel_Distributed_Intrusion_Detection_System_for_Vehicular_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simple and Reliable Method for the Evaluation of the Exposed Field Near the GSM Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060413</link>
        <id>10.14569/IJACSA.2015.060413</id>
        <doi>10.14569/IJACSA.2015.060413</doi>
        <lastModDate>2015-05-01T05:48:11.0670000+00:00</lastModDate>
        
        <creator>Algenti Lala</creator>
        
        <creator>Bexhet Kamo</creator>
        
        <creator>Vladi Kolici</creator>
        
        <creator>Shkelzen Cakaj</creator>
        
        <subject>evaluation; electromagnetic field;near field; NARDA SRM 3000; base-station</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>The objective of this paper is to present a simple, accurate and very efficient method for evaluating the field in the vicinity of GSM antennas of radio base stations in urban areas. The method is based on replacing the antenna panel with a group of discrete source emitters. A geometrical approximation is also used to evaluate the environment’s influence. The calculated results are compared with measurements taken using NARDA SRM 3000 equipment. The presented method can be successfully used for exposure evaluation of the electromagnetic field emitted by GSM base-station antennas in urban areas.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_13-A_Simple_and_Reliable_Method_for_the_Evaluation_of_the_Exposed_Field_Near_the_GSM_Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vulnerability of the Process Communication Model in Bittorrent Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060412</link>
        <id>10.14569/IJACSA.2015.060412</id>
        <doi>10.14569/IJACSA.2015.060412</doi>
        <lastModDate>2015-05-01T05:48:11.0330000+00:00</lastModDate>
        
        <creator>Ahmed ElShafee</creator>
        
        <subject>Peer-to-Peer security; BitTorrent protocol; Code injection; Packet sniffing; Ethernet LAN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>BitTorrent is the most extensively used protocol in peer-to-peer systems. Its clients are widely spread worldwide and account for a large fraction of today’s Internet traffic. This paper discusses a potential attack that exploits a certain vulnerability of BitTorrent-based systems. Code injection refers to forcing code, which may be malicious, to run inside another, benign process by inserting it via a known process name or process ID. Operating systems supply API functions that can be used by a third party to inject a few lines of malicious code into an original running process, which can effectively damage or harm user resources. Ethernet is the most common internetwork layer for local area networks; the shared medium of a LAN enables all users in the same broadcast domain to listen to all packets exchanged through the network (promiscuous mode), so any adversary can easily perform simple packet sniffing at the medium access layer of the network. By capturing and analyzing the packets sent by the P2P application, an adversary can use the process ID revealed by the BitTorrent protocol to start the code injection, and will thus be able to seize more machines from the network. Controlled machines can then be used to perform many attacks. The study revealed that any adversary can exploit the vulnerability of the process communication model used in P2P systems by injecting a malicious process inside the BitTorrent application itself, exposed by sniffing the BitTorrent packets exchanged through the LAN.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_12-Vulnerability_of_the_Process_Communication_Model_in_Bittorrent_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling Mechanical and Electrical Uncertain Systems using Functions of Robust Control MATLAB Toolbox&#174;3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060411</link>
        <id>10.14569/IJACSA.2015.060411</id>
        <doi>10.14569/IJACSA.2015.060411</doi>
        <lastModDate>2015-05-01T05:48:11.0200000+00:00</lastModDate>
        
        <creator>Mohammed Tawfik Hussein</creator>
        
        <subject>uncertainty; interval; robust stability; system response; nyquist criteria; root bounds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Uncertainty is an inherent property of all real-life control systems, because practically nothing is constant: all parameters change under some environmental circumstances. Control engineers must therefore not ignore these variations, since they can affect the behavior and performance of the system.
In this paper, a method for modeling uncertain systems is demonstrated using the built-in functions of the Robust Control MATLAB Toolbox&#174;3. Good results were obtained for testing the stability of interval linear time-invariant systems.
Finally, uncertain mechanical and electrical systems were implemented as practical examples to validate the approach.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_11-Modeling_Mechanical_and_Electrical_Uncertain_Systems_using_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Electronic Human Resource Management (e-HRM) of Hotel Business in Phuket</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060410</link>
        <id>10.14569/IJACSA.2015.060410</id>
        <doi>10.14569/IJACSA.2015.060410</doi>
        <lastModDate>2015-05-01T05:48:11.0030000+00:00</lastModDate>
        
        <creator>Kitimaporn Choochote</creator>
        
        <creator>Kitsiri Chochiang</creator>
        
        <subject>electronic human resource management; e-HRM; hotel business; e-learning; Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>This research aims to study the pattern of electronic human resource management (e-HRM) in the hotel business in Phuket. The study is conducted using field data and in-depth interviews with hotels’ HR managers. The study reveals that the hotel business has applied e-HRM to varying degrees in job recruitment (15 percent), employee engagement (55 percent), organizational file structure (10 percent), idea and creativity exchange (38 percent) and assessment systems (6 percent). However, across all of the hotels surveyed (100 percent), the hotel business has not yet prepared to apply e-HRM to salary systems, learning and training programs, welfare allocation and career development.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_10-Electronic_Human_Resource_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Construction of Java Programs from Functional Program Specifications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060409</link>
        <id>10.14569/IJACSA.2015.060409</id>
        <doi>10.14569/IJACSA.2015.060409</doi>
        <lastModDate>2015-05-01T05:48:10.9730000+00:00</lastModDate>
        
        <creator>Md. Humayun Kabir</creator>
        
        <subject>Functional Program Specification; Existential Theorems; Higher Order Functional Program; Mapping Rules; Programming Language Translation System; Java Program; Refinement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>This paper presents a novel approach to automatically constructing Java programs from input functional program specifications on natural numbers, via the constructive proofs of those specifications, using an inductive theorem prover called Poiti&#39;n. The construction of a Java program from an input functional program specification involves two phases. First, the theorem prover is used to construct a higher-order functional (HOF) program from the input specification, expressed as an existential theorem. A set of mapping rules for a Programming Language Translation System (PLTS) is defined for translating functional expressions to semantically equivalent Java code. The generated functional program is then translated into intermediate Java code, in the form of a Java function, using the PLTS module. The generated Java function requires a small refinement to obtain a syntactically correct Java function. This function is encapsulated within a user-defined Java class as a member operation, which is invoked within a Java application class consisting of a main function, creating objects and resulting in an executable Java program. Both the constructed functional program and the generated Java program are correct with respect to the input specification, as they produce the same output.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_9-Automatic_Construction_of_Java_Programs_from_Functional_Program_Specifications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Path Planning Based on Random Coding Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060408</link>
        <id>10.14569/IJACSA.2015.060408</id>
        <doi>10.14569/IJACSA.2015.060408</doi>
        <lastModDate>2015-05-01T05:48:10.9570000+00:00</lastModDate>
        
        <creator>Kun Su</creator>
        
        <creator>YuJia Wang</creator>
        
        <creator>XinNan Hu</creator>
        
        <subject>robot path planning; Dijkstra algorithm; random coding; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Mobile robot navigation must find an optimal path to guide the movement of the robot, so path planning must be guaranteed to find a feasible optimal path. The path planning problem must solve two sub-problems: the path must be kept away from obstacles, avoiding collisions with them, and the length of the path should be minimized. In this paper, a path planning algorithm based on random coding particle swarm optimization (RCPSO) is proposed to obtain the optimal collision-free path. The Dijkstra algorithm is applied to search for a sub-optimal collision-free path; the RCPSO algorithm is then developed to tackle the optimal path planning problem and generate the globally optimal path. The crossover operator of the genetic algorithm and random coding are introduced into particle swarm optimization to optimize the locations along the sub-optimal path. The experimental results show that the proposed method is effective and feasible compared with other algorithms.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_8-Robot_Path_Planning_Based_on_Random_Coding_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Ultrasound Kidney Images using PCA and Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060407</link>
        <id>10.14569/IJACSA.2015.060407</id>
        <doi>10.14569/IJACSA.2015.060407</doi>
        <lastModDate>2015-05-01T05:48:10.9100000+00:00</lastModDate>
        
        <creator>Mariam Wagih Attia</creator>
        
        <creator>F.E.Z. Abou-Chadi</creator>
        
        <creator>Hossam El-Din Moustafa</creator>
        
        <creator>Nagham Mekky</creator>
        
        <subject>Ultrasound kidney images; Feature Extraction; Principal Component Analysis; Neural Network classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>In this paper, a computer-aided system is proposed for automatic classification of ultrasound kidney diseases. Images of five classes were considered: Normal, Cyst, Stone, Tumor and Failure. A set of statistical features and another set of multi-scale wavelet-based features were extracted from the region of interest (ROI) of each image, and principal component analysis was performed to reduce the number of features. The selected features were utilized in the design and training of a neural network classifier. A correct classification rate of 97% was obtained using the multi-scale wavelet-based features.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_7-Classification_of_Ultrasound_Kidney_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predictable CPU Architecture Designed for Small Real-Time Application - Concept and Theory of Operation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060406</link>
        <id>10.14569/IJACSA.2015.060406</id>
        <doi>10.14569/IJACSA.2015.060406</doi>
        <lastModDate>2015-05-01T05:48:10.8800000+00:00</lastModDate>
        
        <creator>Nicoleta Cristina GAITAN</creator>
        
        <creator>Ionel ZAGAN</creator>
        
        <creator>Vasile Gheorghita GAITAN</creator>
        
        <subject>fine-grained multithreading; hardware scheduler; pipeline; hard real-time; predictable</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>The purpose of this paper is to describe a predictable CPU architecture based on a five-stage pipeline and a hardware scheduler engine. We aim at developing a fine-grained multithreading implementation, named nMPRA-MT. The newly proposed architecture uses replication and remapping techniques for the program counter, the register file, and the pipeline registers, and is implemented on an FPGA device. An original implementation of a MIPS processor with a thread-interleaved pipeline is obtained, using dynamic scheduling of hard real-time tasks and interrupts. In terms of interrupt handling, the architecture uses a particular method of assigning interrupts to tasks, which ensures efficient control of both the context switch and the system's real-time behavior. The originality of the approach resides in the predictability and spatial isolation of the hard real-time tasks, executed every two clock cycles. The nMPRA-MT architecture is enabled by an innovative predictable scheduling algorithm that does not stall the pipeline.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_6-Predictable_CPU_Architecture_Designed_for_Small_Real-Time_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Memetic Multi-Objective Particle Swarm Optimization-Based Energy-Aware Virtual Network Embedding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060405</link>
        <id>10.14569/IJACSA.2015.060405</id>
        <doi>10.14569/IJACSA.2015.060405</doi>
        <lastModDate>2015-05-01T05:48:10.8630000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>energy-efficient resource management; green computing; virtual network embedding; cloud computing; resource allocation; substrate network fragmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>In cloud infrastructure, accommodating multiple virtual networks on a single physical network reduces the power consumed by physical resources and minimizes the cost of operating cloud data centers. However, mapping multiple virtual network resources to physical network components, called virtual network embedding (VNE), is known to be NP-hard. When energy efficiency is considered, the problem becomes more complicated. In this paper, we model energy-aware virtual network embedding, devise metrics for evaluating the performance of energy-aware virtual network embedding algorithms, and propose an energy-aware virtual network embedding algorithm based on multi-objective particle swarm optimization, augmented with local search to speed up convergence and improve solution quality. The performance of the proposed algorithm is evaluated and compared with existing algorithms using extensive simulations, which show that it improves virtual network embedding by increasing revenue and decreasing energy consumption.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_5-Memetic_Multi-Objective_Particle_Swarm_Optimization-Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Area Efficient Implementation of Elliptic Curve Point Multiplication Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060404</link>
        <id>10.14569/IJACSA.2015.060404</id>
        <doi>10.14569/IJACSA.2015.060404</doi>
        <lastModDate>2015-05-01T05:48:10.8470000+00:00</lastModDate>
        
        <creator>Sunil Devidas Bobade</creator>
        
        <creator>Vijay R. Mankar</creator>
        
        <subject>Cryptography; Elliptic Curve Cryptography; Double Point Multiplication; Binary Elliptic Curve; Differential Addition Chain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>Elliptic Curve Cryptography (ECC) has established itself as the most preferred and secure cryptographic algorithm for secure data transfer and storage in embedded system environments. Efficient implementation of the point multiplication algorithm is a crucial activity in designing area-efficient, low-footprint ECC cryptoprocessors. In this paper, an area-efficient implementation of a double point multiplication algorithm over a binary elliptic curve is presented. An area analysis of the double point multiplication algorithm based on the differential addition chain method is carried out and an area report is generated. Area optimization is achieved by using a pipelined structure and by reutilizing idle resources from previous stages in the processing unit. The proposed architecture for double point multiplication is implemented on a Xilinx Virtex-4 FPGA device. The architecture is modeled in Verilog HDL and synthesized using the Xilinx ISE 14.1 design software, and is found to be more area-efficient than existing architectures.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_4-Area_Efficient_Implementation_of_Elliptic_Curve_Point_Multiplication_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Search Algorithm and a Visualization Tool for SNOMED CT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060403</link>
        <id>10.14569/IJACSA.2015.060403</id>
        <doi>10.14569/IJACSA.2015.060403</doi>
        <lastModDate>2015-05-01T05:48:10.8170000+00:00</lastModDate>
        
        <creator>Anthony Masi</creator>
        
        <creator>Ankur Agrawal</creator>
        
        <subject>SNOMED CT; electronic; health records; visualization; search algorithm; GUI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>With electronic health records rising in popularity among hospitals and physicians, the SNOMED CT medical terminology has served as a valuable standard for those looking to exchange a variety of information linked to clinical knowledge bases, information retrieval, and data aggregation. However, SNOMED CT is distributed as a flat-file database by the International Health Terminology Standards Development Organization, and visualization of the data can be a problem. This study describes an algorithm that allows a user to easily search SNOMED CT for identical or partial matches, utilizing indexing and wildcard matching, through a graphical user interface developed in the cross-platform programming language Java. In addition, the algorithm displays corresponding relationships and other relevant information pertaining to the search term. The outcome of this study can serve as a useful visualization tool for those looking to delve into the increasingly standardized world of electronic health records, as well as a tool for healthcare providers seeking specific clinical information contained in the SNOMED CT database.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_3-Developing_a_Search_Algorithm_and_a_Visualization_Tool_for_SNOMED_CT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gamification, Virality and Retention in Educational Online Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060402</link>
        <id>10.14569/IJACSA.2015.060402</id>
        <doi>10.14569/IJACSA.2015.060402</doi>
        <lastModDate>2015-05-01T05:48:10.7830000+00:00</lastModDate>
        
        <creator>Ilya V. Osipov</creator>
        
        <creator>Alex A. Volinsky</creator>
        
        <creator>Vadim V. Grishin</creator>
        
        <subject>Gamification; virality; retention; freemium; K-factor; metrics; open educational resource; e-learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>The paper describes gamification, virality and retention in a freemium educational online platform, using a platform with 40,000 users as an example. Relationships between virality and retention parameters as measurable metrics are calculated and discussed using real examples. Virality and monetization can be both competing and complementary mechanisms for system growth. The K-growth factor, which combines both virality and retention, is proposed as a metric of the overall freemium system performance in terms of user base growth. This approach can be tested using a small number of users to assess the system's potential performance. If the K-growth factor is less than one, the product needs further development. If the K-growth factor is greater than one, the system retains existing users and attracts new ones, so a large-scale market launch can be successful.
User attraction and retention mechanics are discussed based on a peer-to-peer online language training platform, which utilizes a freemium business model. Key system metrics are derived to assess the future commercial potential and to make decisions either to fund an advertising campaign or to continue with technical improvements to the project. The paper can be of interest to venture capitalists as a method to assess freemium projects.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_2-Gamification_Virality_and_Retention_in_Educational_Online_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multidimensional Neural-Like Growing Networks - A New Type of Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060401</link>
        <id>10.14569/IJACSA.2015.060401</id>
        <doi>10.14569/IJACSA.2015.060401</doi>
        <lastModDate>2015-05-01T05:48:10.7230000+00:00</lastModDate>
        
        <creator>Vitaliy Yashchenko</creator>
        
        <subject>multidimensional receptor-effector neural-like growing networks; neural networks; intelligent systems; electronic brain of robots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(4), 2015</description>
        <description>The present paper describes a new type of neural network - multidimensional neural-like growing networks. Multidimensional neural-like growing networks are a dynamic structure, which varies depending on the external information received by receptors and the information coming from the effector area to the outside world. Multidimensional receptor-effector neural-like growing networks are designed to store and process images of objects or situations in the subject area and to manage actions through a variety of spatial representations of information, such as tactile, visual, acoustic, taste, etc. Multidimensional receptor-effector neural-like growing networks are used to design intelligent systems and electronic brains of robots. The article describes the neural-like growing networks, the basic rules for constructing them, their comparison with conventional neural networks, the modeling of information flows in a human body, and the basic blocks and functions of the electronic brains of intelligent systems and robots.</description>
        <description>http://thesai.org/Downloads/Volume6No4/Paper_1-Multidimensional_Neural-Like_Growing_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Allocation of Roadside Units for Certificate Update in Vehicular Ad Hoc Network Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060317</link>
        <id>10.14569/IJACSA.2015.060317</id>
        <doi>10.14569/IJACSA.2015.060317</doi>
        <lastModDate>2015-04-09T17:48:05.4500000+00:00</lastModDate>
        
        <creator>Sheng-Wei Wang</creator>
        
        <subject>Roadside unit allocation; VANET; certificate update; privacy preservation; NP-complete</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>The roadside unit (RSU) plays an important role in privacy preservation in VANET environments. In order to preserve the privacy of a vehicle, the issued certificate must be updated frequently via RSUs. If a certificate expires without being updated, the services for the vehicle are terminated. Deploying as many RSUs as possible therefore ensures that certificates can be updated before they expire; however, the cost of allocating an RSU is very high. In this paper, we consider the roadside unit allocation problem such that certificates can be updated before they expire. Previous research focused on the roadside unit placement problem in a small city, in which the certificate of any origin-destination pair is limited to at most one update. This paper discusses the RSU placement problem in which more than one certificate update is required. The RSU allocation problem is formulated, and its decision version is proved to be NP-complete. We propose three roadside unit placement algorithms that work well for a large city. In order to reduce the number of RSUs required for certificate updates, we also propose three backward removing methods to remove the intersections found by the RSU allocation methods. Simulation results show that the proposed algorithms yield a lower number of required RSUs than a simple method, the most driving routes first method. One backward removing method, the least driving routes first backward removing method, was shown to further reduce the number of required RSUs.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_17-Allocation_of_Roadside_Units_for_Certificate_Update.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A survey on top security threats in cloud computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060316</link>
        <id>10.14569/IJACSA.2015.060316</id>
        <doi>10.14569/IJACSA.2015.060316</doi>
        <lastModDate>2015-04-09T17:48:05.4370000+00:00</lastModDate>
        
        <creator>Muhammad Kazim</creator>
        
        <creator>Shao Ying Zhu</creator>
        
        <subject>Cloud computing; Data security; Network security; Cloud service provider security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Cloud computing enables the sharing of resources such as storage, networks, applications and software over the Internet. Cloud users can lease multiple resources according to their requirements and pay only for the services they use. However, despite all of the cloud's benefits, there are many security concerns related to hardware, virtualization, networks, data and service providers that act as a significant barrier to the adoption of cloud computing in the IT industry. In this paper, we survey the top security concerns related to cloud computing. For each of these security threats we describe i) how it can be used to exploit cloud components and its effect on cloud entities such as providers and users, and ii) the security solutions that must be adopted to prevent it. These solutions include security techniques from the existing literature as well as the best security practices that cloud administrators must follow.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_16-A_survey_on_top_security_threats_in_cloud_computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modifications of Particle Swarm Optimization Techniques and Its Application on Stock Market: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060315</link>
        <id>10.14569/IJACSA.2015.060315</id>
        <doi>10.14569/IJACSA.2015.060315</doi>
        <lastModDate>2015-04-09T17:48:05.4200000+00:00</lastModDate>
        
        <creator>Razan A. Jamous</creator>
        
        <creator>Essam El.Seidy</creator>
        
        <creator>Assem A. Tharwat</creator>
        
        <creator>Bayoumi Ibrahim Bayoum</creator>
        
        <subject>Computational intelligence; Particle Swarm Optimization; modification; Stock Market; Portfolio Selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Particle Swarm Optimization (PSO) has become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods. One of the most important applications of PSO is coping with portfolio selection problems: predicting the stocks that yield maximum profit with minimum risk, using common indicators that give buy and sell advice. This paper gives the reader the state of the art of the various modifications of PSO and examines whether each has been applied to the stock market.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_15-Modifications_of_Particle_Swarm_Optimization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Extend WSDL-Based Data Types Specification to Enhance Web Services Understandability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060314</link>
        <id>10.14569/IJACSA.2015.060314</id>
        <doi>10.14569/IJACSA.2015.060314</doi>
        <lastModDate>2015-04-09T17:48:05.4200000+00:00</lastModDate>
        
        <creator>Fuad Alshraiedeh</creator>
        
        <creator>Samer Hanna</creator>
        
        <creator>Raed Alazaidah</creator>
        
        <subject>Datatypes; Understandability; Web Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Web Services are important for integrating distributed heterogeneous applications. One of the problems facing Web Services is the difficulty for a service provider to represent the datatypes of the parameters of the operations provided by a Web service inside the Web Service Description Language (WSDL). This problem makes it difficult for service requesters to understand and reverse engineer a Web service, and to decide whether it is applicable to the required task of their application. This paper introduces an approach that extends Web service datatype specifications inside WSDL in order to solve the aforementioned challenges. The approach is based on adding more description to the datatypes of the provided operations' parameters and on simplifying the WSDL document into a new enriched XML Schema. The main contributions of this paper are:
1. A comprehensive study of 33 datatypes in the C# language and how they are represented inside the WSDL document.
2. A classification of these datatypes into three categories: Clear, Indistinguishable, and Unclear.
3. An enhanced representation of the 18% of C# datatypes that are not supported by XML, by producing a new simple enrichment XML-based schema.
4. Enhanced Web service understandability, achieved by simplifying the WSDL document through a summarized new simple enrichment schema.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_14-An_Approach_to_Extend_WSDL-Based_Data_Types_Specification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060313</link>
        <id>10.14569/IJACSA.2015.060313</id>
        <doi>10.14569/IJACSA.2015.060313</doi>
        <lastModDate>2015-04-09T17:48:05.4030000+00:00</lastModDate>
        
        <creator>Kamran Kowsari</creator>
        
        <creator>Maryam Yammahi</creator>
        
        <creator>Nima Bari</creator>
        
        <creator>Roman Vichr</creator>
        
        <creator>Faisal Alsaby</creator>
        
        <creator>Simon Y. Berkovich</creator>
        
        <subject>FuzzyFind Dictionary; Golay Code; Golay Code Transformation Hash Table; Unsupervised learning; Fuzzy search engine; Big Data; Approximate search; Information Retrieval; Pigeonhole Principle; Learning Algorithms; Data Structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Searching through a large volume of data is critical for companies, scientists, and search engine applications because of its time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. We map the English alphabet into 23-bit vectors in a FuzzyFind Dictionary (or into more than 23 bits by using additional FuzzyFind Dictionaries), reflecting the presence or absence of particular letters. This representation preserves the closeness of word distortions in terms of the closeness of the created binary vectors, within a Hamming distance of 2. This paper describes the Golay Coding Transformation Hash Table and how it can be used with a FuzzyFind Dictionary as a new technique for searching through big data. The method has linear time complexity for generating the dictionary, constant time complexity for accessing the data, and linear time complexity (in the number of new data points) for updating the dictionary with new data sets. The technique is based on searching over letters of the English alphabet, with each segment holding 23 bits; it can also handle more than 23 bits and can work with additional segments as a reference table.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_13-Construction_of_FuzzyFind_Dictionary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Jigsopu: Square Jigsaw Puzzle Solver with Pieces of Unknown Orientation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060312</link>
        <id>10.14569/IJACSA.2015.060312</id>
        <doi>10.14569/IJACSA.2015.060312</doi>
        <lastModDate>2015-04-09T17:48:05.3900000+00:00</lastModDate>
        
        <creator>Abdullah M. Moussa</creator>
        
        <subject>jigsaw puzzle; image merging; edge matching; jigsaw puzzle assembly; automatic solver</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>In this paper, we consider the square jigsaw puzzle problem, in which one is required to reassemble a complete image from a number of unordered square puzzle pieces. We focus on the special case where both the location and orientation of each piece are unknown. We propose a new automatic solver for this problem that assumes no prior knowledge about the original image or its dimensions. We use an accelerated, edge-matching-based greedy method with combined compatibility measures to provide fast performance while maintaining robust results. Complexity analysis and experimental results reveal that the new solver is fast and efficient.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_12-Jigsopu_Square_Jigsaw_Puzzle_Solver_with_Pieces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bootstrap Approximation of Gibbs Measure for Finite-Range Potential in Image Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060311</link>
        <id>10.14569/IJACSA.2015.060311</id>
        <doi>10.14569/IJACSA.2015.060311</doi>
        <lastModDate>2015-04-09T17:48:05.3730000+00:00</lastModDate>
        
        <creator>Abdeslam EL MOUDDEN</creator>
        
        <subject>bootstrap computation; Gibbs measure; Markov Chain Monte Carlo; image analysis; parameter estimation; likelihood inference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>This paper presents a Gibbs measure approximation method based on adjusting the associated estimated potential. We use an information criterion to prove the accuracy of this approach and a bootstrap computation method to determine its explicit form. The Gibbs sampler is the tool of our simulations, taking advantage of using only one MCMC chain instead of the multiple chains required in the classical approximation. We focus on the validity of our approach for the Gibbs measure of a Markov random field with an interaction potential function and the associated uniqueness condition. Some theoretical and numerical results are given.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_11-Bootstrap_Approximation_of_Gibbs_Measure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revised Use Case Point (Re-UCP) Model for Software Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060310</link>
        <id>10.14569/IJACSA.2015.060310</id>
        <doi>10.14569/IJACSA.2015.060310</doi>
        <lastModDate>2015-04-09T17:48:05.3570000+00:00</lastModDate>
        
        <creator>Mudasir Manzoor Kirmani</creator>
        
        <creator>Abdul Wahid</creator>
        
        <subject>Use Case Point; Extended Use case point; Revised Use case Point; Software Effort Estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>At present, the most challenging issue the software development industry encounters is inefficient management of software development budget projections. This problem has put modern software development companies in a situation where they are dealing with improper requirements engineering, ambiguous resource elicitation, and uncertain cost and effort estimation. It is indispensable for any software development company to form a counter-mechanism for these problems, which otherwise lead to chaos. An effective way to deal with this problem is to subject the whole development process to a proper and efficient estimation process, in which all resources are estimated well in advance in order to check whether the conceived project is feasible within the available resources. The basic building block of any object-oriented design is the use case diagram, which is prepared in the early stages of design once the requirements are clearly understood. Use case diagrams are considered useful for approximating estimates for software development projects. This research work gives a detailed overview of the Re-UCP (Revised Use Case Point) method of effort estimation for software projects. Re-UCP is a modified approach based on the UCP method of effort estimation. In this study, efforts for 14 projects were estimated using the Re-UCP method and the results were compared with the UCP and e-UCP models. The comparison of the 14 projects shows that Re-UCP significantly outperforms the existing UCP and e-UCP effort estimation techniques.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_10-Revised_Use_Case_Point_Re-UCP_Model_for_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Binary Search Trees Via Smart Pointers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060309</link>
        <id>10.14569/IJACSA.2015.060309</id>
        <doi>10.14569/IJACSA.2015.060309</doi>
        <lastModDate>2015-04-09T17:48:05.3430000+00:00</lastModDate>
        
        <creator>Ivaylo Donchev</creator>
        
        <creator>Emilia Todorova</creator>
        
        <subject>abstract data structures; binary search trees; C++; smart pointers; teaching and learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>The study of binary trees has a prominent place in the training course of DSA (Data Structures and Algorithms). Their implementation in C++, however, is traditionally difficult for students. To a large extent these difficulties are due not so much to the complexity of the algorithms as to the complexity of the language in terms of memory management with raw pointers: the programmer must consider too many details to ensure a reliable, efficient and secure implementation. The evolution of C++ with regard to automated resource management, as well as experience in implementing linear lists by means of C++11/14, led to an attempt to implement binary search trees (BST) via smart pointers as well. In the present paper, the authors share their experience in this direction. Some conclusions are drawn about pedagogical aspects and the effectiveness of the new classes, compared to traditional library containers and an implementation with built-in pointers.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_9-Implementation_of_Binary_Search_Trees_Via_Smart_Pointers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development and Role of Electronic Library in Information Technology Teaching in Bulgarian Schools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060308</link>
        <id>10.14569/IJACSA.2015.060308</id>
        <doi>10.14569/IJACSA.2015.060308</doi>
        <lastModDate>2015-04-09T17:48:05.3100000+00:00</lastModDate>
        
        <creator>Tsvetanka Georgieva-Trifonova</creator>
        
        <creator>Gabriela Chotova</creator>
        
        <subject>electronic library; information technology teaching; multimedia information system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>The electronic library can be considered an interactive information space. Its creation substantially supports communication between teachers and students, as well as between teachers and parents. The enlargement of the information space makes it possible to improve the efficiency and quality of teaching and to assign more projects for realization.
The main purpose of this paper is to examine the role of the electronic library in the teaching of information technologies in Bulgarian schools, providing more time for applying the learned material in order to increase the effectiveness of the educational process. We summarize and present the advantages and disadvantages of using digital libraries in teaching information technologies, together with the main features of an electronic library developed for teaching and educational subsidiary materials.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_8-Development_and_Role_of_Electronic_Library_in_Information_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Apply Metaheuristic ANGEL to Schedule Multiple Projects with Resource-Constrained and Total Tardy Cost</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060307</link>
        <id>10.14569/IJACSA.2015.060307</id>
        <doi>10.14569/IJACSA.2015.060307</doi>
        <lastModDate>2015-04-09T17:48:05.2970000+00:00</lastModDate>
        
        <creator>Shih-Chieh Chen</creator>
        
        <creator>Ching-Chiuan Lin</creator>
        
        <subject>multiple project scheduling; resource-constrained project scheduling; ANGEL; ant colony optimization; genetic algorithms; local search; metaheuristics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>In this paper, the multiple-project resource-constrained project scheduling problem is considered. Several projects are to be scheduled simultaneously while sharing several kinds of limited resources. Each project contains non-preemptive activities of deterministic duration that compete for limited resources under resource and precedence constraints. Moreover, each project has a due date and a tardy cost per day, which incurs a penalty when the project cannot be completed by its due date. The objective is to find schedules for the considered projects that minimize the total tardy cost subject to the resource and precedence constraints. Since the resource-constrained project scheduling problem has been proven to be NP-hard, no deterministic algorithm is known to solve it efficiently, and metaheuristics or evolutionary algorithms are developed for it instead. The problem considered in this paper is harder than the original problem because the due date and tardy cost of each project are considered in addition. The metaheuristic method called ANGEL was applied to this problem. ANGEL combines ant colony optimization (ACO), a genetic algorithm (GA) and a local search strategy; in ANGEL, ACO and the GA run alternately and cooperatively. The experimental results reveal that ANGEL performs very well on the multiple-project resource-constrained project scheduling problem.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_7-Apply_Metaheuristic_ANGEL_to_Schedule_Multiple_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Minimum Number of Features with Full-Accuracy Iris Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060306</link>
        <id>10.14569/IJACSA.2015.060306</id>
        <doi>10.14569/IJACSA.2015.060306</doi>
        <lastModDate>2015-04-09T17:48:05.2970000+00:00</lastModDate>
        
        <creator>Ibrahim E. Ziedan</creator>
        
        <creator>Mira Magdy Sobhi</creator>
        
        <subject>Iris recognition; Iris features; Speed of Iris recognition; Features reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>A minimum number of features for 100% iris recognition accuracy is developed in this paper. This number is based on dividing the unwrapped iris into vertical and horizontal segments for single-iris recognition, and into vertical segments only for dual-iris recognition. In both cases a simple technique that regards the mean of a segment as a feature is adopted. Algorithms and flowcharts for finding the minimum Euclidean distance (ED) between a test iris and a matching database (DB) iris are discussed. A threshold is selected to discriminate between a genuine acceptance (recognition) and a false acceptance of an impostor. The minimum number of features is found to be 47 for single-iris and 52 for dual-iris recognition. Comparison with recently published techniques shows the superiority of the proposed technique regarding accuracy and recognition speed. Results were obtained using the Phoenix (UPOL) database.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_6-A_Minimum_Number_of_Features_with_Full-Accuracy_Iris_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Steganography: Applying and Evaluating Two Algorithms for Embedding Audio Data in an Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060305</link>
        <id>10.14569/IJACSA.2015.060305</id>
        <doi>10.14569/IJACSA.2015.060305</doi>
        <lastModDate>2015-04-09T17:48:05.2800000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>Steganography; Encryption and Decryption; Data and Information Security; Data Hiding; Images; Data Communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Information transmission is increasing with the growth of Web usage, so information security has become very important. The security of data and information is a major task for scientists as well as political and military personnel. One of the most secure methods is embedding data (steganography) in different media such as text, audio and digital images. This paper presents two experiments in the steganography of digital audio data files. It empirically applies two image steganography algorithms that randomly insert digital audio data into the bytes and pixels of image files. Finally, it evaluates both experiments in order to enhance the security of transmitted data.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_5-Steganography_Applying_and_Evaluating_Two_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Standard Positioning Performance Evaluation of a Single-Frequency GPS Receiver Implementing Ionospheric and Tropospheric Error Corrections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060304</link>
        <id>10.14569/IJACSA.2015.060304</id>
        <doi>10.14569/IJACSA.2015.060304</doi>
        <lastModDate>2015-04-09T17:48:05.2630000+00:00</lastModDate>
        
        <creator>Alban Rakipi</creator>
        
        <creator>Bexhet Kamo</creator>
        
        <creator>Shkelzen Cakaj</creator>
        
        <creator>Algenti Lala</creator>
        
        <subject>algorithm; Global Positioning System; GDOP; Hopfield model; Klobuchar model; receiver; PVT; Raw measurements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>This paper evaluates the positioning performance of a single-frequency software GPS receiver using ionospheric and tropospheric corrections. While a dual-frequency user can eliminate the ionospheric error by taking a linear combination of observables, a single-frequency user must remove or calibrate this error by other means. To remove the ionospheric error we take advantage of the Klobuchar correction model, while for tropospheric error mitigation the Hopfield correction model is used. Real GPS measurements were gathered using a single-frequency receiver and post-processed by our proposed adaptive positioning algorithm. The integrated Klobuchar and Hopfield error correction models yield a considerable reduction of the vertical error. The positioning algorithm automatically combines all available GPS pseudorange measurements when more than four satellites are in use. Experimental results show that improved standard positioning is achieved after error mitigation.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_4-Standard_Positioning_Performance_Evaluation_of_a_Single-Frequency_GPS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Cloud Learning System Based on Multi-Agents Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060303</link>
        <id>10.14569/IJACSA.2015.060303</id>
        <doi>10.14569/IJACSA.2015.060303</doi>
        <lastModDate>2015-04-09T17:48:05.2500000+00:00</lastModDate>
        
        <creator>Mohammed BOUSMAH</creator>
        
        <creator>Ouidad LABOUIDYA</creator>
        
        <creator>Najib EL KAMOUN</creator>
        
        <subject>Cloud computing; Multi-Agents System; Project Based Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Cloud computing can provide many benefits for universities. It is a new IT paradigm that provides all resources, such as software (SaaS), platforms (PaaS) and infrastructure (IaaS), as services over the Internet. In cloud computing, users can access the services anywhere, at any time and from any device (smartphones, tablet computers, laptops, desktops, and so on). The multi-agent system approach provides an ideal solution for open and scalable systems whose structure can be changed dynamically. Educational institutions all over the world have already adapted the cloud to their own settings and made use of its great potential for innovation. Based on an analysis of the advantages of cloud computing and of the multi-agent system approach for supporting e-learning sessions, this paper presents the complete design and experimentation of a new cloud computing layer called the Smart Cloud Learning System.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_3-Design_of_a_Cloud_Learning_System_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android Application to Assess Smartphone Accelerometers and Bluetooth for Real-Time Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060302</link>
        <id>10.14569/IJACSA.2015.060302</id>
        <doi>10.14569/IJACSA.2015.060302</doi>
        <lastModDate>2015-04-09T17:48:05.2330000+00:00</lastModDate>
        
        <creator>M. A. Nugent</creator>
        
        <creator>Dr. Harold Esmonde</creator>
        
        <subject>Android; Bluetooth; control; real-time; sensors; smartphone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>Modern smartphones have evolved into sophisticated embedded systems, incorporating hardware and software features that make the devices potentially useful for real-time control operations. An object-oriented Android application was developed to quantify the performance of the smartphone's on-board linear accelerometers and Bluetooth wireless module, with a view to transmitting accelerometer data wirelessly between Bluetooth-enabled devices. A portable Bluetooth library was developed that runs the Bluetooth functionality of the application as an independent background service. Bluetooth performance was tested by pinging data between two smartphones, measuring round-trip time and round-trip-time variation (jitter) against variations in data size, transmission distance and sources of interference. The accelerometers were tested for sampling frequency and sampling-frequency jitter.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_2-Android_Application_to_Assess_Smartphone_Accelerometers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skew Detection/Correction and Local Minima/Maxima Techniques for Extracting a New Arabic Benchmark Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060301</link>
        <id>10.14569/IJACSA.2015.060301</id>
        <doi>10.14569/IJACSA.2015.060301</doi>
        <lastModDate>2015-04-09T17:48:05.2170000+00:00</lastModDate>
        
        <creator>Husam Ahmed Al Hamad</creator>
        
        <subject>ACDAR; Arabic benchmark database; Arabic scripts; document analysis; handwriting recognition; skew detection and correction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(3), 2015</description>
        <description>We propose a set of techniques for extracting a new standard benchmark database for Arabic handwritten scripts. Thresholding, filtering, and skew detection/correction techniques are developed as a pre-processing step of the database. Local minima and maxima using horizontal and vertical histograms are implemented for extracting the script elements of the database. Elements of the database include pages, paragraphs, lines, and characters. The database is divided into two major parts: the first part contains the original elements without modification; the second part contains the elements after applying the proposed techniques. The final database has been collected, extracted, validated, and saved. All techniques are tested for extracting and validating the elements. In this respect, ACDAR represents a first issue of the Arabic benchmark databases. In addition, the paper confirms the establishment of a specialized research-oriented center for learning, teaching, and collaboration activities. This center is called the &quot;Arabic Center for Document Analysis and Recognition (ACDAR)&quot;, similar to centers developed for other languages such as English.</description>
        <description>http://thesai.org/Downloads/Volume6No3/Paper_1-Skew_DetectionCorrection_and_Local_MinimaMaxima_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Density Based Support Vector Machines for Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040411</link>
        <id>10.14569/IJARAI.2015.040411</id>
        <doi>10.14569/IJARAI.2015.040411</doi>
        <lastModDate>2015-04-09T17:48:05.2000000+00:00</lastModDate>
        
        <creator>Zahra Nazari</creator>
        
        <creator>Dongshik Kang</creator>
        
        <subject>SVM; Density Based SVM; Classification; Pattern Recognition; Outlier removal</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>Support Vector Machines (SVM) is one of the most successful algorithms for classification problems. SVM learns the decision boundary from two classes (for binary classification) of training points. However, there are sometimes less meaningful samples amongst the training points, corrupted by noise or placed on the wrong side, called outliers. These outliers affect the margin and classification performance, and the machine would do better to discard them. SVM, as a popular and widely used classification algorithm, is very sensitive to these outliers and lacks the ability to discard them. Many research results confirm this sensitivity, which is a weak point of SVM. Different approaches have been proposed to reduce the effect of outliers, but no method is suitable for all types of data sets. In this paper, the new method of Density Based SVM (DBSVM) is introduced. Population density is the basic concept used in this method, for both linear and non-linear SVM, to detect outliers. Experiments on artificial data sets, real high-dimensional benchmark data sets of liver disorder and heart disease, and data sets of new and fatigued banknotes’ acoustic signals prove the efficiency of this method on noisy data classification and the better generalization it can provide compared to the standard SVM.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_11-Density_Based_Support_Vector_Machines_for_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Requirements Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040410</link>
        <id>10.14569/IJARAI.2015.040410</id>
        <doi>10.14569/IJARAI.2015.040410</doi>
        <lastModDate>2015-04-09T17:48:05.1700000+00:00</lastModDate>
        
        <creator>Ali Altalbe</creator>
        
        <subject>Software; Software Requirements; Software Development; Software Management</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>Requirements are defined as the desired set of characteristics of a product or a service. In the world of software development, it is estimated that more than half of the failures are attributed to poor requirements management. This means that although the software functions correctly, it is not what the client requested. Modern software requirements management methodologies are available to reduce the occurrence of such incidents. This paper performs a review of the available literature in the area while tabulating possible methods of managing requirements. It also highlights the benefits of following a proper guideline for the requirements management task. With the introduction of specific software tools for the requirements management task, better software products are now being developed with fewer resources.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_10- Software_Requirements_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Mining Predict Relationships on the Social Media Network: Facebook (FB)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040409</link>
        <id>10.14569/IJARAI.2015.040409</id>
        <doi>10.14569/IJARAI.2015.040409</doi>
        <lastModDate>2015-04-09T17:48:05.1400000+00:00</lastModDate>
        
        <creator>Dr. Mamta Madan</creator>
        
        <creator>Meenu Chopra</creator>
        
        <subject>Online Social Media Networks (OSMNs); Facebook (FB); Data Mining; Crawling Process; Protocol</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>The objective of this paper is to study the most famous social networking site, Facebook, and other online social media networks (OSMNs) based on the notion of relationship or friendship. This paper discusses the methodology which can be used to conduct the analysis of the social network Facebook (FB) and also defines the framework of the Web Mining platform. Lastly, various technological challenges underlying the task of extracting information from FB are explored, and the crawling agent functionality is discussed in detail.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_9-Using_Mining_Predict_Relationships_on_the_Social_Media_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantic-Aware Data Management System for Seismic Engineering Research Projects and Experiments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040408</link>
        <id>10.14569/IJARAI.2015.040408</id>
        <doi>10.14569/IJARAI.2015.040408</doi>
        <lastModDate>2015-04-09T17:48:05.1230000+00:00</lastModDate>
        
        <creator>Md. Rashedul Hasan</creator>
        
        <creator>Feroz Farazi</creator>
        
        <creator>Oreste Bursi</creator>
        
        <creator>Md. Shahin Reza</creator>
        
        <creator>Ernesto D’Avanzo</creator>
        
        <subject>Ontology; Knowledge Base; Earthquake Engineering; Semantic Web; Virtuoso</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>The invention of the Semantic Web and related technologies is fostering a computing paradigm that entails a shift from databases to Knowledge Bases (KBs). At the core is the ontology, which plays a main role in enabling the reasoning power that can make implicit facts explicit in order to produce better results for users. In addition, KB-based systems provide mechanisms to manage information and its semantics, which can make systems semantically interoperable and thus able to exchange and share data with each other. In order to overcome the interoperability issues and to exploit the benefits offered by state-of-the-art technologies, we moved to a KB-based system. This paper presents the development of an earthquake engineering ontology with a focus on research project management and experiments. The developed ontology was validated by domain experts, published in RDF and integrated into WordNet. Data originating from scientific experiments such as cyclic and pseudo-dynamic tests were also published in RDF. We exploited the power of Semantic Web technologies, namely the Jena, Virtuoso and VirtGraph tools, in order to publish, store and manage RDF data, respectively. Finally, a system was developed with the full integration of ontology, experimental data and tools to evaluate the effectiveness of the KB-based approach; it yielded favorable outcomes.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_8-A_Semantic-Aware_Data_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Security Protocols using Finite-State Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040407</link>
        <id>10.14569/IJARAI.2015.040407</id>
        <doi>10.14569/IJARAI.2015.040407</doi>
        <lastModDate>2015-04-09T17:48:05.0930000+00:00</lastModDate>
        
        <creator>Dania Aljeaid</creator>
        
        <creator>Xiaoqi Ma</creator>
        
        <creator>Caroline Langensiepen</creator>
        
        <subject>identity-based cryptosystem; cryptographic protocols; finite-state machine</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>This paper demonstrates a comprehensive analysis method using formal methods such as finite-state machines. First, we describe the modified version of our new protocol and briefly explain the encrypt-then-authenticate mechanism, which is regarded as a more secure mechanism than the one used in our protocol. Then, we use finite-state verification to study the behaviour of each machine created for each phase of the protocol and examine their behaviours together. Modelling with finite-state machines shows that the modified protocol can function correctly and behave properly even with invalid input or time delay.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_7-Analysis_of_Security_Protocols_using_Finite-State_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lung Cancer Detection on CT Scan Images: A Review on the Analysis Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040406</link>
        <id>10.14569/IJARAI.2015.040406</id>
        <doi>10.14569/IJARAI.2015.040406</doi>
        <lastModDate>2015-04-09T17:48:05.0770000+00:00</lastModDate>
        
        <creator>H. Mahersia</creator>
        
        <creator>M. Zaroug</creator>
        
        <creator>L. Gabralla</creator>
        
        <subject>Classification; Computed Tomography; Lung cancer; Nodules; Segmentation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>Lung nodules are potential manifestations of lung cancer, and their early detection facilitates early treatment and improves the patient’s chances of survival. For this reason, CAD systems for lung cancer have been proposed in several studies. All these works mainly involve three steps to detect pulmonary nodules: preprocessing, segmentation of the lung, and classification of the nodule candidates. This paper overviews the current state of the art regarding all the approaches and techniques that have been investigated in the literature. It also provides a comparison of the performance of the existing approaches.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_6-Lung_Cancer_Detection_on_CT_Scan_Images_A_Review_on_the_Analysis_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Accurate Topological Measures for Rough Sets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040405</link>
        <id>10.14569/IJARAI.2015.040405</id>
        <doi>10.14569/IJARAI.2015.040405</doi>
        <lastModDate>2015-04-09T17:48:05.0600000+00:00</lastModDate>
        
        <creator>A. S. Salama</creator>
        
        <subject>Knowledge Granulation; Topological Spaces; Rough Sets; Rough Approximations; Data Mining; Decision Making</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>Data granulation is considered a good tool for decision making in various types of real-life applications. The basic ideas of data granulation have appeared in many fields, such as interval analysis, quantization, rough set theory, Dempster-Shafer theory of belief functions, divide and conquer, cluster analysis, machine learning, databases, information retrieval, and many others. Some new topological tools for data granulation using rough set approximations are initiated. Moreover, some topological measures of data granulation in topological information systems are defined. Topological generalizations using δβ-open sets and their applications to information granulation are developed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_5-Accurate_Topological_Measures_for_Rough_Sets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method of Multi-License Plate Location in Road Bayonet Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040404</link>
        <id>10.14569/IJARAI.2015.040404</id>
        <doi>10.14569/IJARAI.2015.040404</doi>
        <lastModDate>2015-04-09T17:48:05.0300000+00:00</lastModDate>
        
        <creator>Ying Qian</creator>
        
        <creator>Zhi Li</creator>
        
        <subject>multi-license plate location; color features; geometry characteristics; gray feature</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>To solve the problem of multi-license plate location in road bayonet images, a novel approach was presented, which utilized the plate’s color features, geometry characteristics and gray features. Firstly, the RGB color image was converted to the HSV color model and the distance was calculated according to the plate’s color information in the color space. Secondly, the license plate candidate regions were segmented by binarization and morphological processing. Finally, based on the plate’s geometry characteristics and gray features, the license plate regions were segmented and validated. To a certain degree, the method was not limited by the plate’s type, size, number, the location of the car or the background in the picture. It was tested using road bayonet images.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_4-A_Method_of_Multi-License_Plate_Location_in_Road_Bayonet_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Cluster Validation with Input-Output Causality for Context-Based Gk Fuzzy Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040403</link>
        <id>10.14569/IJARAI.2015.040403</id>
        <doi>10.14569/IJARAI.2015.040403</doi>
        <lastModDate>2015-04-09T17:48:05.0000000+00:00</lastModDate>
        
        <creator>Keun-Chang Kwak</creator>
        
        <subject>Cluster Validation; Fuzzy clustering; Gustafson-Kessel clustering; Fuzzy covariance; Context based clustering; Input-output causality</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>In this paper, a cluster validity concept extended from an unsupervised to a supervised manner is presented. Most cluster validity criteria were established in an unsupervised manner, although many clustering methods operate in supervised and semi-supervised environments that use context information and performance results of the model. Context-based clustering methods can divide the input space using context-clustering information that generates an output space through an input-output causality. Furthermore, these methods generate and use the context membership function and partition matrix information. Additionally, supervised clustering learning can obtain superior performance results for clustering, such as in classification accuracy and prediction error. A cluster validity concept that deals with the characteristics of cluster validities and performance results in a supervised manner is considered. To show the extended possibilities of the proposed concept, three simulations and their results in a supervised manner are demonstrated and their characteristics analyzed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_3-New_Cluster_Validation_with_Input-Output_Causality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Military Robotics: Latest Trends and Spatial Grasp Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040402</link>
        <id>10.14569/IJARAI.2015.040402</id>
        <doi>10.14569/IJARAI.2015.040402</doi>
        <lastModDate>2015-04-09T17:48:04.9670000+00:00</lastModDate>
        
        <creator>Peter Simon Sapaty</creator>
        
        <subject>military robots; unmanned systems; Spatial Grasp Technology; holistic scenarios; self-navigation; collective behavior; self-recovery</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>A review of some of the latest achievements in the area of military robotics is given, with the main demands on the management of advanced unmanned systems formulated. The developed Spatial Grasp Technology (SGT), capable of satisfying these demands, is briefed. Directly operating with physical, virtual, and executive spaces, as well as their combinations, SGT uses high-level holistic mission scenarios that self-navigate and cover whole systems in a super-virus mode. This brings top operations, data, decision logic, and overall command and control to the distributed resources at run time, providing flexibility, ubiquity, and the capability of self-recovery in solving complex problems, especially those requiring quick reaction to unpredictable situations. Exemplary scenarios of tasking and managing robotic collectives at different conceptual levels in a special language are presented. SGT can effectively support a gradual transition from automated up to fully robotic systems under unified command and control.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_2-Military_Robotics_Latest_Trends_and_Spatial_Grasp_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Image Retrieval: An Ontology Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040401</link>
        <id>10.14569/IJARAI.2015.040401</id>
        <doi>10.14569/IJARAI.2015.040401</doi>
        <lastModDate>2015-04-09T17:48:04.8900000+00:00</lastModDate>
        
        <creator>Umar Manzoor</creator>
        
        <creator>Mohammed A. Balubaid</creator>
        
        <creator>Bassam Zafar</creator>
        
        <creator>Hafsa Umar</creator>
        
        <creator>M. Shoaib Khan</creator>
        
        <subject>Image Retrieval; Ontology; Semantic Image; Image Understanding; Semantic Retrieval</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(4), 2015</description>
        <description>Images and videos are a major source of content on the internet, and this content is increasing rapidly due to advancements in the area. Image analysis and retrieval is one of the active research fields, and researchers over the last decade have proposed many efficient approaches for it. Semantic technologies like ontologies offer a promising approach to image retrieval, as they try to map low-level image features to high-level ontology concepts. In this paper, we propose Semantic Image Retrieval: An Ontology Based Approach, which uses a domain-specific ontology to retrieve images relevant to the user query. The user can give a concept/keyword as text input or can input the image itself. Semantic Image Retrieval is based on a hybrid approach and uses shape-, color- and texture-based approaches for classification purposes. The mammals domain is used as a test case and its ontology is developed. The proposed system is trained on a mammals dataset and tested on a large number of test cases related to this domain. Experimental results show the efficiency and accuracy of the proposed system and support its implementation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No4/Paper_1-Semantic_Image_Retrieval_An_Ontology_Based_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Soft Sets Supporting Multi-Criteria Decision Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040305</link>
        <id>10.14569/IJARAI.2015.040305</id>
        <doi>10.14569/IJARAI.2015.040305</doi>
        <lastModDate>2015-03-10T20:07:19.1870000+00:00</lastModDate>
        
        <creator>Sylvia Encheva</creator>
        
        <subject>Soft sets; Uncertainties; Decision making</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(3), 2015</description>
        <description>Students experience various types of difficulties when it comes to examinations, where some of them are subject-related while others are more of a psychological character. A number of factors influencing the academic success or failure of undergraduate students have been identified in various research studies. One of the many important questions related to this is how to identify individuals at risk of being unable to complete a particular study program or subject. The intention of this work is to develop an approach for early discovery of students who could face serious difficulties through their studies.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No3/Paper_5-Fuzzy_Soft_Sets_Supporting_Multi-Criteria_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>For a Better Coordination Between Students Learning Styles and Instructors Teaching Styles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040304</link>
        <id>10.14569/IJARAI.2015.040304</id>
        <doi>10.14569/IJARAI.2015.040304</doi>
        <lastModDate>2015-03-10T20:07:19.1730000+00:00</lastModDate>
        
        <creator>Sylvia Encheva</creator>
        
        <subject>Refinement orders; Relational concept analysis; Learning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(3), 2015</description>
        <description>While learning has been the main focus of a number of educators and researchers, instructors’ teaching styles have received considerably less attention. When it comes to dependencies between learning styles and teaching styles, the available knowledge is even scarcer. There is a definite need for a systematic approach when looking for such dependencies. We propose the application of refinement orders and relational concept analysis for pursuing further investigations on the matter.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No3/Paper_4-For_a_Better_Coordination_Between_Students.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attribute Reduction for Generalized Decision Systems*</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040303</link>
        <id>10.14569/IJARAI.2015.040303</id>
        <doi>10.14569/IJARAI.2015.040303</doi>
        <lastModDate>2015-03-10T20:07:19.1400000+00:00</lastModDate>
        
        <creator>Bi-Jun REN</creator>
        
        <creator>Yan-Ling FU</creator>
        
        <creator>Ke-Yun QIN</creator>
        
        <subject>Rough set; generalized indiscernibility relation; positive region reduction; distribution reduction</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(3), 2015</description>
        <description>Attribute reduction of information systems is one of the most important applications of rough set theory. This paper focuses on generalized decision systems and aims at studying positive region reduction and distribution reduction based on a generalized indiscernibility relation. The judgment theorems for attribute reductions and attribute reduction approaches are presented. Our approaches improve the existing discernibility matrix and discernibility conditions. Furthermore, reduction algorithms based on discernible degree are proposed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No3/Paper_3-Attribute_Reduction_for_Generalized_Decision_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Machine Learning Approaches in Intrusion Detection System: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040302</link>
        <id>10.14569/IJARAI.2015.040302</id>
        <doi>10.14569/IJARAI.2015.040302</doi>
        <lastModDate>2015-03-10T20:07:19.1100000+00:00</lastModDate>
        
        <creator>Nutan Farah Haq</creator>
        
        <creator>Abdur Rahman Onik</creator>
        
        <creator>Md. Avishek Khan Hridoy</creator>
        
        <creator>Musharrat Rafni</creator>
        
        <creator>Faisal Muhammad Shah</creator>
        
        <creator>Dewan Md. Farid</creator>
        
        <subject>Intrusion detection; Survey; Classifiers; Hybrid; Ensemble; Dataset; Feature Selection</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(3), 2015</description>
        <description>Network security is one of the major concerns of the modern era. With the rapid development and massive usage of the internet over the past decade, the vulnerabilities of network security have become an important issue. Intrusion detection systems are used to identify unauthorized access and unusual attacks over secured networks. Over the past years, many studies have been conducted on intrusion detection systems. In order to understand the current status of implementation of machine learning techniques for solving intrusion detection problems, this survey enlists 49 related studies published between 2009 and 2014, focusing on the architecture of single, hybrid and ensemble classifier designs. This survey also includes a statistical comparison of classifier algorithms, the datasets used and other experimental setups, as well as consideration of the feature selection step.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No3/Paper_2-Application_of_Machine_Learning_Approaches_in_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Library of Expert System Based at Indonesia Technology University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040301</link>
        <id>10.14569/IJARAI.2015.040301</id>
        <doi>10.14569/IJARAI.2015.040301</doi>
        <lastModDate>2015-03-10T20:07:19.0330000+00:00</lastModDate>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <creator>I Putu Wisna Ariawan</creator>
        
        <creator>I Made Sugiarta</creator>
        
        <creator>I Wayan Artanayasa</creator>
        
        <subject>Digital Library; Expert System; Forward Chaining; Backward Chaining</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(3), 2015</description>
        <description>The digital library is a very interesting phenomenon in the world of libraries. In this era of globalization, digital libraries are needed by students, faculty, and the community for quick reference searches through internet access, so that they do not have to come to the library in person. Accessing collections of digital libraries can also be done anytime and anywhere. Digital library development has also occurred at Indonesia Technology University, which offers a digital library based on an expert system. The concept of this digital library is to utilize expert system techniques in the process of cataloging and searching digital collections. By using this expert-system-based digital library, users can search, read, and download the desired collections online. The expert-system-based digital library at Indonesia Technology University is built using the PHP programming language and MySQL as the database management system, and employs forward chaining and backward chaining as the inference engine.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No3/Paper_1-Digital_Library_of_Expert_System_Based_at_Indonesia_Technology_University.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Similarity Calculation Method of Chinese Short Text Based on Semantic Feature Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060242</link>
        <id>10.14569/IJACSA.2015.060242</id>
        <doi>10.14569/IJACSA.2015.060242</doi>
        <lastModDate>2015-03-06T17:37:10.0600000+00:00</lastModDate>
        
        <creator>Liqiang Pan</creator>
        
        <creator>Pu Zhang</creator>
        
        <creator>Anping Xiong</creator>
        
        <subject>short text; semantic feature space; similarity; semantic similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>In order to improve the accuracy of short text similarity calculation, this paper presents the idea of using a history of short text messages to construct a semantic feature space, representing each short text as a vector in that space and performing semantic extension, and finally calculating short text similarity between the corresponding vectors in the semantic feature space. This method represents the semantic information of a short text message thoroughly and thereby improves the accuracy of similarity calculation. We selected a large number of test problem sets for our experiments. The results show that the proposed method is reasonable and effective.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_42-Similarity_Calculation_Method_of_Chinese_Short_Text_Based_on_Semantic_Feature_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology-based Change Propagation in Shareable Health Information Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060241</link>
        <id>10.14569/IJACSA.2015.060241</id>
        <doi>10.14569/IJACSA.2015.060241</doi>
        <lastModDate>2015-02-28T17:28:26.1330000+00:00</lastModDate>
        
        <creator>Anny Kartika Sari</creator>
        
        <creator>Wenny Rahayu</creator>
        
        <subject>health information system; ontology-based application; ontology evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>One of the most important challenges to be addressed when establishing an integrated smart health environment is the availability of shareable health data and knowledge that standardize the interoperability of components within the environment. Health ontologies are commonly utilized to enable interoperability between applications in such an environment. However, the dynamic nature of health knowledge necessitates frequent changes in health ontologies, which must then be propagated to the relevant applications. A change propagation method is proposed that can efficiently streamline change management from an ontology to all the applications that reference it. A component called a mapper is used to manage the mapping between application terms and ontology concepts. The mapper aims to maintain the applications’ access to the most up-to-date ontology concepts and to improve the semantic mapping between application terms and ontology concepts. Rules are developed for the change propagation process. The evaluation of the method shows that the mapper can improve the mapping list in terms of: (i) correctness, by proposing a new mapping entry to substitute an existing one that is no longer valid because an ontology concept has been deleted or changed; (ii) currency maintenance, by recommending a better mapping between an application term and a new ontology concept based on the similarity value between the term and the new concept.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_41-Ontology-based_Change_Propagation_in_Shareable_Health_Information_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vehicle Embedded Data Stream Processing Platform for Android Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060240</link>
        <id>10.14569/IJACSA.2015.060240</id>
        <doi>10.14569/IJACSA.2015.060240</doi>
        <lastModDate>2015-02-28T17:28:26.1170000+00:00</lastModDate>
        
        <creator>Shingo Akiyama</creator>
        
        <creator>Yukikazu Nakamoto</creator>
        
        <creator>Akihiro Yamaguchi</creator>
        
        <creator>Kenya Sato</creator>
        
        <creator>Hiroaki Takada</creator>
        
        <subject>Android, automotive, data stream management system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data-centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle data-stream management system for automotive embedded systems (eDSMS) as a data-centric software architecture. Providing data stream functionalities to drivers and passengers is highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in that language. The platform employs an architecture independent of the data stream schema of the in-vehicle eDSMS to facilitate smoother Android application development. This paper presents the specifications and design of the query language and the platform's APIs, evaluates the platform, and discusses the results.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_40-Vehicle_Embedded_Data_Stream_Processing_Platform_for_Android_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Timed-Release Certificateless Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060239</link>
        <id>10.14569/IJACSA.2015.060239</id>
        <doi>10.14569/IJACSA.2015.060239</doi>
        <lastModDate>2015-02-28T17:28:26.0870000+00:00</lastModDate>
        
        <creator>Toru Oshikiri</creator>
        
        <creator>Taiichi Saito</creator>
        
        <subject>timed-release encryption, identity-based encryption, one-time signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Timed-Release Encryption (TRE) is an encryption mechanism that allows a receiver to decrypt a ciphertext only after a time designated by the sender. In this paper, we propose the notion of Timed-Release Certificateless Encryption (TRCLE) and define its security models. We also show a generic construction of TRCLE from Public-Key Encryption (PKE), Identity-Based Encryption (IBE), and a one-time signature, and prove that the constructed scheme achieves the security we define.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_39-Timed-Release_Certificateless_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SOCIA: Linked Open Data of Context behind Local Concerns for Supporting Public Participation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060238</link>
        <id>10.14569/IJACSA.2015.060238</id>
        <doi>10.14569/IJACSA.2015.060238</doi>
        <lastModDate>2015-02-28T17:28:26.0700000+00:00</lastModDate>
        
        <creator>Shun Shiramatsu</creator>
        
        <creator>Tadachika Ozono</creator>
        
        <creator>Toramatsu Shintani</creator>
        
        <subject>Semantic Web; social computing; natural language processing; linked open data; e-Participation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>To address public concerns that threaten the sustainability of local societies, it is essential to support public participation by sharing the background context behind these concerns. We designed the SOCIA ontology, a linked data model for sharing the context behind local concerns, with two approaches: (1) structuring Web news articles and microblogs about local concerns on the basis of the geographical regions and events referred to by the content, and (2) structuring public issues and their solutions as public goals. We moreover built the SOCIA dataset, a linked open dataset, on the basis of the SOCIA ontology. Web news articles and microblogs related to local concerns were semi-automatically gathered and structured. Public issues and goals were manually extracted from Web content related to revitalization after the Great East Japan Earthquake. Toward more accurate extraction of public concerns, we investigated feature expressions for extracting public concerns from microblogs written in Japanese. To address a technical issue of sample selection bias in our microblog corpus, we formulated a metric for mining feature expressions, i.e., bias-penalized information gain (BPIG). Furthermore, we developed a prototype of a public debate support system that utilizes the SOCIA dataset and formulated the similarity between public goals for a goal-matching service to facilitate collaboration.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_38-SOCIA_Linked_Open_Data_of_Context_behind_Local.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Processing the Text of the Holy Quran: a Text Mining Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060237</link>
        <id>10.14569/IJACSA.2015.060237</id>
        <doi>10.14569/IJACSA.2015.060237</doi>
        <lastModDate>2015-02-28T17:28:26.0400000+00:00</lastModDate>
        
        <creator>Mohammad Alhawarat</creator>
        
        <creator>Mohamed Hegazi</creator>
        
        <creator>Anwer Hilal</creator>
        
        <subject>Holy Quran; Text Mining; Arabic Natural Language Processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The Holy Quran is the reference book for more than 1.6 billion Muslims all around the world. Extracting information and knowledge from the Holy Quran is of high benefit both to people specialized in Islamic studies and to non-specialists. This paper initiates a series of research studies that aim to serve the Holy Quran and provide helpful and accurate information and knowledge to all human beings. The planned studies also aim to lay out a framework for researchers in the field of Arabic natural language processing by providing a ”Golden Dataset” along with useful techniques and information that will advance this field further. The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information which might be helpful to people in this research area. In this paper the Holy Quran text is preprocessed, and different text mining operations are then applied to it to reveal simple facts about its terms. The results show a variety of characteristics of the Holy Quran, such as its most important words, its word cloud, and the chapters with the highest term frequencies. All these results are based on term frequencies calculated using both Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) methods.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_37-Processing_the_Text_of_the_Holy_Quran.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Confinement for Active Objects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060236</link>
        <id>10.14569/IJACSA.2015.060236</id>
        <doi>10.14569/IJACSA.2015.060236</doi>
        <lastModDate>2015-02-28T17:28:26.0070000+00:00</lastModDate>
        
        <creator>Florian Kammuller</creator>
        
        <subject>Distributed active objects, formalization, security type systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>In this paper, we provide a formal framework for the security of distributed active objects. Active objects communicate asynchronously, implementing method calls via futures. We base the formal framework on a security model that uses a semi-lattice to enable the multi-lateral security crucial for distributed architectures. We further provide a security type system for the programming model ASPfun of functional active objects. Type safety and a confinement property are presented. ASPfun thus realizes secure down calls.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_36-Confinement_for_Active_Objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Significant Factors for Dengue Infection Prognosis Using the Random Forest Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060235</link>
        <id>10.14569/IJACSA.2015.060235</id>
        <doi>10.14569/IJACSA.2015.060235</doi>
        <lastModDate>2015-02-28T17:28:25.9930000+00:00</lastModDate>
        
        <creator>A. Shameem Fathima</creator>
        
        <creator>D. Manimeglai</creator>
        
        <subject>Data Mining; Dengue Virus; Machine Learning; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Random forests have emerged as a versatile and highly accurate classification and regression methodology, requiring little tuning and providing interpretable outputs. Here, we briefly explore the possibility of applying this ensemble supervised machine learning technique to predict vulnerability to a complex disease, dengue, which is often confused with chikungunya viral fever. This study presents a novel approach to determining the significant prognosis factors in dengue patients. Random forests are used to visualize and determine the significant factors that differentiate dengue patients from healthy subjects, and to construct a dengue disease survivability prediction model during the boosting process, improving accuracy and stability and reducing overfitting. The presented methodology may be incorporated into a variety of applications such as risk management, tailored health communication, and decision support systems in healthcare.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_35-Analysis_of_significant_factors_for_dengue_infection_prognosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A General Model for Similarity Measurement between Objects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060234</link>
        <id>10.14569/IJACSA.2015.060234</id>
        <doi>10.14569/IJACSA.2015.060234</doi>
        <lastModDate>2015-02-28T17:28:25.9600000+00:00</lastModDate>
        
        <creator>Manh Hung Nguyen</creator>
        
        <creator>Thi Hoi Nguyen</creator>
        
        <subject>object similarity; multiple attributes similarity; similarity measurement; decision support</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The problem of detecting the similarity or difference between objects is faced regularly in several application domains, such as e-commerce, social networks, expert systems, data mining, and decision support systems. This paper introduces a general model for measuring the similarity between objects based on their attributes. In this model, the similarity on each attribute is defined according to the nature and kind of the attribute. This makes our model general and enables it to be applied in several application domains. We also present the application of the model in two scenarios, one in a social network and one in e-commerce.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_34-A_General_Model_for_Similarity_Measurement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Service Design for Developing Multimodal Human-Computer Interaction for Smart Tvs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060233</link>
        <id>10.14569/IJACSA.2015.060233</id>
        <doi>10.14569/IJACSA.2015.060233</doi>
        <lastModDate>2015-02-28T17:28:25.9470000+00:00</lastModDate>
        
        <creator>Sheng-Ming Wang</creator>
        
        <creator>Cheih-Ju Huang</creator>
        
        <subject>Smart TV; Service Design; Human-Computer Interaction; Quality Function Deployment; Analytical Hierarchy Process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>A Smart TV integrates Internet and Web features into a TV, converging the computer and the TV so that the TV can be used as a computer. Smart TV devices facilitate the curation of content by combining Internet-based information with content from TV providers. Many techniques, such as those that focus on speech, gestures, and eye movement, have been used to develop various human-computer interfaces for Smart TVs. However, as suggested by several researchers, user scenarios and user experiences should be incorporated into development techniques to meet user demands on Smart TVs. Thus, this study applies the service design approach to scenario planning and user experience analysis for multimodal interaction development for Smart TVs. The research begins with the service design process and derives a Quality Function Deployment matrix (QFD Matrix) for initial decision-making. The Analytical Hierarchy Process (AHP) is then applied to evaluate the priority and relevance of the features proposed in the QFD Matrix. Research results show that the service design approach is an efficient way for an interdisciplinary team to communicate. The proposed two-stage decision-making process qualitatively analyzes and quantitatively measures the priority and relevance of features derived from the service design process. The technical team can then develop prototypes that facilitate multimodal human-computer interaction on Smart TVs.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_33-Service_Design_for_Developing_Multimodal_Human-Computer_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Web Movie Recommender System Based on Emotions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060232</link>
        <id>10.14569/IJACSA.2015.060232</id>
        <doi>10.14569/IJACSA.2015.060232</doi>
        <lastModDate>2015-02-28T17:28:25.9130000+00:00</lastModDate>
        
        <creator>Karzan Wakil</creator>
        
        <creator>Rebwar Bakhtyar</creator>
        
        <creator>Karwan Ali</creator>
        
        <creator>Kozhin Alaadin</creator>
        
        <subject>movie recommender system; collaborative filtering; content based filtering; emotion; CF; CBF; MRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Recommender Systems (RSs) are gaining significant importance with the advent of e-commerce and e-business on the web. This paper focuses on a Movie Recommender System (MRS) based on human emotions. The problem is that an MRS needs to capture exactly the customer’s profile and the features of movies; since movies are a complex domain and emotions a domain of human interaction, the two are difficult to combine in a new Recommender System (RS). In this paper, we present a new hybrid approach for improving MRSs; it consists of Content-Based Filtering (CBF), Collaborative Filtering (CF), an emotion detection algorithm, and our own algorithm, represented by a matrix. Our system provides much better recommendations to users because it enables them to understand the relation between their emotional states and the recommended movies.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_32-Improving_Web_Movie_Recommender_System_Based_on_Emotions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resource Provisioning in Single Tier and Multi-Tier Cloud Computing: “State-of-the-Art”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060231</link>
        <id>10.14569/IJACSA.2015.060231</id>
        <doi>10.14569/IJACSA.2015.060231</doi>
        <lastModDate>2015-02-28T17:28:25.8830000+00:00</lastModDate>
        
        <creator>Marwah Hashim Eawna</creator>
        
        <creator>Salma Hamdy Mohammed</creator>
        
        <creator>El-Sayed M. El-Horbaty</creator>
        
        <subject>Cloud Computing; Resource Provisioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Cloud computing is a new computing trend for delivering information whenever an electronic device can access a web server. One of the major pitfalls in cloud computing is optimizing resource provisioning and allocation. Because of the uniqueness of the model, resource provisioning is performed with the objective of minimizing time and the costs associated with it. This paper reviews the state of the art in managing resources of cloud environments in theoretical research. The study discusses the performance and analysis of well-known cloud resource provisioning techniques, both single-tier and multi-tier.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_31-Resource_Provisioning_in_Single_Tier_and_Multi-Tier_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Feature Extraction and Feature Selection for Handwritten Character Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060230</link>
        <id>10.14569/IJACSA.2015.060230</id>
        <doi>10.14569/IJACSA.2015.060230</doi>
        <lastModDate>2015-02-28T17:28:25.8670000+00:00</lastModDate>
        
        <creator>Muhammad ‘Arif Mohamad</creator>
        
        <creator>Haswadi Hassan</creator>
        
        <creator>Dewi Nasien</creator>
        
        <creator>Habibollah Haron</creator>
        
        <subject>HCR; Feature Extraction; Feature Selection; Harmony Search Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The development of handwritten character recognition (HCR) is an interesting area in pattern recognition. An HCR system consists of a number of stages: preprocessing, feature extraction, classification, and finally the actual recognition. It is generally agreed that one of the main factors influencing performance in HCR is the selection of an appropriate set of features for representing input samples. This paper provides a review of these advances. In HCR, the choice of the feature set is a central issue, as the procedure for choosing relevant features determines the minimum classification error. To address this issue and maximize classification performance, many techniques have been proposed for reducing the dimensionality of the feature space in which data have to be processed. These techniques, generally denoted feature reduction, may be divided into two main categories: feature extraction and feature selection. A large number of research papers and reports have already been published on this topic. In this paper we provide an overview of methods and approaches for feature extraction and selection, investigating and analyzing them in order to identify the current trend. A review of the metaheuristic harmony search algorithm (HSA) is also provided.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_30-A_Review_on_Feature_Extraction_and_Feature_Selection_for_Handwritten.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of ADS Linked List Via Smart Pointers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060229</link>
        <id>10.14569/IJACSA.2015.060229</id>
        <doi>10.14569/IJACSA.2015.060229</doi>
        <lastModDate>2015-02-28T17:28:25.8370000+00:00</lastModDate>
        
        <creator>Ivaylo Donchev</creator>
        
        <creator>Emilia Todorova</creator>
        
        <subject>abstract data structures; C++; smart pointers; teaching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Students traditionally have difficulties in implementing abstract data structures (ADS) in C++. To a large extent, these difficulties are due to language complexity in terms of memory management with raw pointers – the programmer must take care of too many details to provide a reliable, efficient and secure implementation. Since all these technical details distract students from the essence of the studied algorithms, we decided to use in our DSA (Data Structures and Algorithms) course the automated resource management provided by the C++ standard ISO/IEC 14882:2011. In this work we share our experience of using smart pointers to implement linked lists and discuss the pedagogical aspects and effectiveness of the new classes, compared to the traditional library containers and implementations via built-in pointers.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_29-Implementation_of_ADS_Linked_List_Via_Smart.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Different Classification Algorithms Based on Arabic Text Classification: Feature Selection Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060228</link>
        <id>10.14569/IJACSA.2015.060228</id>
        <doi>10.14569/IJACSA.2015.060228</doi>
        <lastModDate>2015-02-28T17:28:25.8070000+00:00</lastModDate>
        
        <creator>Ghazi Raho</creator>
        
        <creator>Riyad Al-Shalabi</creator>
        
        <creator>Ghassan Kanaan</creator>
        
        <creator>Asmaa Nassar</creator>
        
        <subject>Text Classification; Feature Selection; Arabic Text; Recall; F-Measure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Feature selection is necessary for effective text classification, and dataset preprocessing is essential for sound results and effective performance. This paper investigates the effectiveness of using feature selection. We compare the performance of different classifiers in different situations, using feature selection with and without stemming. The evaluation used a BBC Arabic dataset and different classification algorithms: decision tree (D.T), K-nearest neighbors (KNN), Na&#239;ve Bayesian (NB), and Na&#239;ve Bayes Multinomial (NBM) classifiers. The experimental results are presented in terms of precision, recall, F-measure, accuracy, and time to build the model.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_28-Different_Classification_Algorithms_Based_on_Arabic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Non-Topological Node Attribute Values for Probabilistic Determination of Link Formation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060227</link>
        <id>10.14569/IJACSA.2015.060227</id>
        <doi>10.14569/IJACSA.2015.060227</doi>
        <lastModDate>2015-02-28T17:28:25.7730000+00:00</lastModDate>
        
        <creator>Abhiram Gandhe</creator>
        
        <creator>Parag Deshpande</creator>
        
        <subject>Non-Topological Attribute; Link Prediction; Na&#239;ve Bayes Classifier; Weighted Average; Graph Database; Social Network; Data Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Here we propose a probabilistic model for determining link formation in a social network by applying a Na&#239;ve Bayes Classifier to the non-topological attribute values of nodes. The proposed model gives a score that helps determine the relationship strength of a non-formed link. In addition to the Na&#239;ve Bayes Classifier, a weighted average of attribute value matches helps determine the friendship score of a non-formed link.
With the increase in online social networks and their influence on people, more and more individuals are gaining wider and richer social connections. Everyone tries to connect more in order to explore more, and in this race an individual needs better and more definitive tools to help grow their network: the wider the network, the greater the possibility to explore.
Here we present a novel approach for predicting a link (friendship) between two individuals (nodes) in a social network. The proposed approach uses the non-topological attribute values of both nodes and predicts linkage possibility by applying the Na&#239;ve Bayes Classifier to the non-topological attribute values of nodes in existing linkages.
Linkage possibility is expressed using one quantitative measure, which we call the friendship score (FSCORE), between two unconnected individuals. FSCORE is used to predict linkage between two nodes; a higher FSCORE means a higher possibility of linkage between the two nodes.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_27-Use_of_Non-Topological_Node_Attribute_Values.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Control System Performance by Modification of Time Delay</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060226</link>
        <id>10.14569/IJACSA.2015.060226</id>
        <doi>10.14569/IJACSA.2015.060226</doi>
        <lastModDate>2015-02-28T17:28:25.7430000+00:00</lastModDate>
        
        <creator>Salem Alkhalaf</creator>
        
        <subject>Distributed control system; control delay; sampling scheme; control system performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>This paper presents a mathematical approach for improving the performance of a control system by modifying the time delay at certain operating conditions. The approach converts a continuous-time loop into a discrete-time loop. The derived formula is applied successfully to a practical control system, and the results show that the proposed approach efficiently improves control system performance. The relation between the sampling time and the time delay is obtained, and two different operating conditions are examined to assess the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_26-Improvement_of_Control_System_Performance_by_Modification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Consuming Web Services on Android Mobile Platform for Finding Parking Lots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060225</link>
        <id>10.14569/IJACSA.2015.060225</id>
        <doi>10.14569/IJACSA.2015.060225</doi>
        <lastModDate>2015-02-28T17:28:25.7270000+00:00</lastModDate>
        
        <creator>Isak Shabani</creator>
        
        <creator>Besmir Sejdiu</creator>
        
        <creator>Fatushe Jasharaj</creator>
        
        <subject>Web application; Web services; Android platform; Mobile devices; MyParking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Many web applications over the last decade have been built using Web services based on the Simple Object Access Protocol (SOAP), because such Web services are a good choice for web and mobile applications in general. Research results show that architectures and systems primarily designed for desktop use, such as Web service calls with SOAP messaging, can now be used on mobile platforms such as Android.
The purpose of this paper is to study the Android mobile platform, more precisely its ability to consume Web services, and to explore the existing alternatives for consuming Web services from this platform. People use their vehicles every day for transport, which leads to a constant demand for finding a parking lot. This paper proposes a system, named MyParking, which aims to help users find a parking lot for their vehicle depending on their current location. MyParking consists of three modules: an Android client, administration, and Web services.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_25-Consuming_Web_Services_on_Android_Mobile_Platform_for_Finding_Parking_Lots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Web Improved with the Weighted IDF Feature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060224</link>
        <id>10.14569/IJACSA.2015.060224</id>
        <doi>10.14569/IJACSA.2015.060224</doi>
        <lastModDate>2015-02-28T17:28:25.6970000+00:00</lastModDate>
        
        <creator>Mrs. Jyoti Gautam</creator>
        
        <creator>Dr. Ela Kumar</creator>
        
        <subject>Text classification;  Semantic Web with weighted idf feature; Expanded query; New Semantic Web Algorithm; Ranking Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Search engines are developing at a very fast rate, and a lot of algorithms have been tried and tested, yet people are still not getting precise results. Social networking sites are growing at a tremendous rate, and their growth has given birth to new and interesting problems. These sites use semantic data to enhance their results, which provides a new perspective on how to improve the quality of information retrieval. Many text classification techniques are based on the TFIDF algorithm, in which term weighting plays a significant role in classifying a text document. In this paper, we first extend queries to “keyword+tags” instead of keywords only. Second, we develop a new ranking algorithm (the JEKS algorithm) based on semantic tags from user feedback that uses CiteULike data. The algorithm enhances the existing semantic web by using the weighted IDF feature of the TFIDF algorithm. The suggested algorithm provides better ranking than Google and can be viewed as a semantic web service in the academic domain.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_24-Semantic_Web_Improved_with_the_Weighted_IDF_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid PSO-MOBA for Profit Maximization in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060223</link>
        <id>10.14569/IJACSA.2015.060223</id>
        <doi>10.14569/IJACSA.2015.060223</doi>
        <lastModDate>2015-02-28T17:28:25.6800000+00:00</lastModDate>
        
        <creator>Dr. Salu George</creator>
        
        <subject>Cloud Computing; Profit Maximization; Admission Control; SLA; Optimization; Hybrid Particle Swarm Optimization – Multi Objective Bat Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Cloud service providers, infrastructure vendors and clients/cloud users are the main actors in any cloud enterprise, such as Amazon Web Services’ cloud or Google’s cloud. These enterprises take care of infrastructure deployment and cloud services management (IaaS/PaaS/SaaS). Cloud users need to specify the correct amount of services needed and the characteristics of their workload in order to avoid over-provisioning of resources, and this is the important pricing factor. The cloud service provider needs to manage, as well as optimize, the resources in order to maximize profit. To manage profit we consider the M/M/m queuing model, which manages the queue of jobs and provides the average execution time. Resource scheduling is one of the main concerns in profit maximization, for which we adopt hybrid PSO-MOBA, as it resolves the global convergence problem and offers faster convergence, fewer parameters to tune, easier searching in very large problem spaces and location of the right resource. In hybrid PSO-MOBA we combine the features of PSO and MOBA to achieve the benefits of both and obtain greater compatibility.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_23-Hybrid_PSO-MOBA_for_Profit_Maximization_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Parents&#39; Perception of Nursing Support in their Neonatal Intensive Care Unit (NICU) Experience</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060222</link>
        <id>10.14569/IJACSA.2015.060222</id>
        <doi>10.14569/IJACSA.2015.060222</doi>
        <lastModDate>2015-02-28T17:28:25.6500000+00:00</lastModDate>
        
        <creator>Amani F. Magliyah</creator>
        
        <creator>Muhamamd I. Razzak</creator>
        
        <subject>parents; stress; anxiety; NICU; nurse support; neonate; infant</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The NICU is an environment with many challenges in receiving and understanding information, as the infants cared for there may have serious and complex medical problems. For parents, the NICU experience is filled with stress, fear, sadness, guilt and the shock of having a sick baby in the NICU. The aim of this research was to explore and describe parents&#39; experience when their infant is admitted to the NICU, and to assess their perception of nursing support in terms of information provision and their emotional feelings. The study was undertaken at the Neonatal Intensive Care Unit of King Abdulaziz Medical City (KAMC), Jeddah, Saudi Arabia, which is part of the National Guard Health Affairs (NGHA) organization in the kingdom. The study utilized a self-report questionnaire with Likert-scale measurement and telephone interviews with closed questions. One hundred and four parents agreed to take part in the study and provided their consent to include their children. The majority of respondents were mothers (76%); the remaining 24% were fathers. All their infants had been admitted to the NICU in 2014. Many parents were not able to receive enough information easily from the unit, and most of them found the information given by nurses difficult to understand. The majority of parents perceived high stress and anxiety levels related to this information. Also, most parents did not agree that the nurses supported their emotional feelings and care. An additional finding indicates that a decrease in support level is associated with an increase in stress and anxiety levels. In order to provide a high level of support and decrease the level of stress, there is a need to develop support strategies; one strategy is to use technology to develop an automatic daily summary for parents.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_22-The_Parents_Perception_of_Nursing_Support_in_their_Neonatal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Accuracy Arabic Handwritten Characters Recognition Using Error Back Propagation Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060221</link>
        <id>10.14569/IJACSA.2015.060221</id>
        <doi>10.14569/IJACSA.2015.060221</doi>
        <lastModDate>2015-02-28T17:28:25.6170000+00:00</lastModDate>
        
        <creator>Assist. Prof. Majida Ali Abed</creator>
        
        <creator>Assist. Prof. Dr. Hamid Ali Abed Alasad</creator>
        
        <subject>Character Recognition; Neural Network; Classification; Error Back Propagation Artificial Neural Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>This manuscript considers a new architecture for handwritten character recognition based on simulating the behavior of one type of artificial neural network, called the Error Back Propagation Artificial Neural Network (EBPANN). We present an overview of our neural network, which is optimized and tested on 12 offline isolated Arabic handwritten characters  (???????????????? ?,?,???), chosen because of the similarity of some Arabic characters and the location of the dots within the characters. An accuracy of 93.61% is achieved using the EBPANN, which is the highest accuracy achieved for offline handwritten Arabic character recognition. The EBPANN in general generates an optimized comparison between the input samples and database samples, which improves the final recognition rate. Experimental results show that the EBPANN is convergent and more accurate in solutions that minimize the error recognition rate.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_21-High_Accuracy_Arabic_Handwritten_Characters_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developement of Bayesian Networks from Unified Modeling Language for Learner Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060220</link>
        <id>10.14569/IJACSA.2015.060220</id>
        <doi>10.14569/IJACSA.2015.060220</doi>
        <lastModDate>2015-02-28T17:28:25.6030000+00:00</lastModDate>
        
        <creator>ANOUAR TADLAOUI Mouenis</creator>
        
        <creator>AAMMOU Souhaib</creator>
        
        <creator>KHALDI Mohamed</creator>
        
        <subject>Learner Modeling; Bayesian networks; Cognitive diagnosis; Uncertainty</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>First of all, and to clarify our purpose, the work presented here lies within the framework of learner modeling in an adaptive system, understood as computational modeling of the learner. We must also state that Bayesian networks are effective tools for learner modeling under uncertainty. They have been successfully used in many systems with different objectives, from assessing the learner&#39;s knowledge to recognizing the plan followed in problem solving. The main objective of this paper is to develop a Bayesian network for modeling the learner from the use case diagram of the Unified Modeling Language. To achieve this objective it is necessary first to ask: why and how can we represent a learner model using Bayesian networks? How can we go from a dynamic representation of the learner model using UML to a probabilistic representation with Bayesian networks? Is this approach experimentally justified? First, we return to the definitions of the main relationships in use case diagrams and Bayesian networks, and then we focus on the development rules on which our work is based. We then demonstrate how to develop a Bayesian network based on these rules. Finally, we present the formal structure for this approach. The prototypes and diagrams presented in this work are arguments in favor of our objective, and the resulting network also promotes reuse of the learner model across similar systems.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_20-Developement_of_Bayesian_Networks_from_Unified_Modeling_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>En-Route Vehicular Traffic Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060219</link>
        <id>10.14569/IJACSA.2015.060219</id>
        <doi>10.14569/IJACSA.2015.060219</doi>
        <lastModDate>2015-02-28T17:28:25.5700000+00:00</lastModDate>
        
        <creator>Saravanan M</creator>
        
        <creator>Ashwin Kumar M</creator>
        
        <subject>IoT; IP; POC; Central Node; Dynamic Board; Accident detection model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The pathways of information are changing; the physical world itself is becoming a type of information system. In what’s called the Internet of Things (IoT), sensors and actuators embedded in physical objects—from roadways to pacemakers—are linked through wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet. When objects can both sense the environment and communicate, they become tools for understanding complexity and responding to it swiftly. The revolutionary part is that these physical information systems are now beginning to be deployed, and some of them even work largely without human intervention. This paper addresses the traffic congestion problem with the help of the Internet of Things. The increase in the number of vehicles in cities, caused by population growth and economic development, has created traffic congestion problems, which are becoming more serious day by day in developing countries. The reasons can be categorized as mismanagement of vehicular movement, ineffective systems for controlling the mobility of vehicles, uneven roads and traffic snarl-ups. Unexpected vehicular queuing is a major concern, wasting passengers’ time and preventing ambulances from reaching their destinations in time. In addition, traffic congestion makes it difficult to forecast travel time accurately, causing drivers to allocate more time to travel than previously scheduled. To ease these mounting traffic problems, a Proof of Concept (POC) is demonstrated using the smart city data set for the city of Milan provided by Telecom Italia, to verify that these concepts have the potential for real-world application and could be used by government sectors or private transport organizations to improve passengers’ comfort on the road, as follows.
A central node is developed which sets the speed limit and predicts a normalized speed separately for each locality from the available data set. For efficient control of vehicle mobility, an advanced dynamic digital board is introduced, which displays the speed limit set by the central node from time to time. The normalized speed can be used to estimate the effective travel time between destinations precisely, and by comparing the normalized speed with real-time values, anomalies in the locality, such as congestion and uneven roads, are predicted. An accident detection model is integrated with the central node, which sends a message to the dynamic board indicating the location of the accident along with the time taken; it even improves traffic flow around the accident location. The central node, together with navigation tools, can provide re-routed paths to drivers during congestion or accidents.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_19-En-Route_Vehicular_Traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personal Health Book Application for Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060218</link>
        <id>10.14569/IJACSA.2015.060218</id>
        <doi>10.14569/IJACSA.2015.060218</doi>
        <lastModDate>2015-02-28T17:28:25.5400000+00:00</lastModDate>
        
        <creator>Seddiq Alabbasi</creator>
        
        <creator>Andrew Rebeiro-Hargrave</creator>
        
        <creator>Kunihiko Kaneko</creator>
        
        <creator>Ashir Ahmed</creator>
        
        <creator>Akira Fukuda</creator>
        
        <subject>Personal health records; Patient centered healthcare; Database design; Developing countries; Extensible markup language</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>We introduce a Personal Health Book application that is used as a portable repository for Personal Health Records (PHR) in order to alleviate healthcare organizational problems in developing countries. The Personal Health Book application allows low-literate people to access and carry their own medical history from a rural healthcare provider to an urban healthcare provider. This will improve the efficiency of medical care and lower costs for health clinics in underserved areas. This paper introduces a software application that can be ported onto a USB smart card and/or managed by a smartphone or personal computer connected to a cloud computing environment. The Personal Health Book application aims to ease the problem of interoperability between health clinics by accepting any file format and contents, applying a decomposed database to categorize, group and reorganize the data. By querying the application’s database, the consumer can create a unified report presentation that is understandable by the consumer, family, and healthcare provider. We tested the Personal Health Book framework by importing PHRs in extensible markup language (XML) format with a basic structure, without checking the PHR content, from the Grameen Portable Health Clinic database in Bangladesh and from different departments of a hospital in Japan. The Personal Health Book was able to generate human-readable output, as its database reorganizes and stores any type of PHR, including sensor device data.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_18-Personal_Health_Book_Application_for_Developing_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Age Estimation Based on AAM and 2D-DCT Features of Facial Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060217</link>
        <id>10.14569/IJACSA.2015.060217</id>
        <doi>10.14569/IJACSA.2015.060217</doi>
        <lastModDate>2015-02-28T17:28:25.5100000+00:00</lastModDate>
        
        <creator>Asuman G&#252;nay</creator>
        
        <creator>Vasif V. Nabiyev</creator>
        
        <subject>2D-DCT; AAM; Age estimation; PCA; Regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>This paper proposes a novel age estimation method - Global and Local feAture based Age estiMation (GLAAM) - relying on global and local features of facial images. Global features are obtained with Active Appearance Models (AAM). Local features are extracted with regional 2D-DCT (2- dimensional Discrete Cosine Transform) of normalized facial images. GLAAM consists of the following modules: face normalization, global feature extraction with AAM, local feature extraction with 2D-DCT, dimensionality reduction by means of Principal Component Analysis (PCA) and age estimation with multiple linear regression. Experiments have shown that GLAAM outperforms many methods previously applied to the FG-NET database.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_17-Age_Estimation_Based_on_AAM_and_2D-DCT_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GPS-Based Daily Context Recognition for Lifelog Generation Using Smartphone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060216</link>
        <id>10.14569/IJACSA.2015.060216</id>
        <doi>10.14569/IJACSA.2015.060216</doi>
        <lastModDate>2015-02-28T17:28:25.4630000+00:00</lastModDate>
        
        <creator>Go Tanaka</creator>
        
        <creator>Masaya Okada</creator>
        
        <creator>Hiroshi Mineno</creator>
        
        <subject>Lifelog; machine learning; GPS; healthcare</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Mobile devices are becoming increasingly sophisticated, with many diverse and powerful sensors such as GPS, acceleration, and gyroscope sensors. They provide numerous services supporting daily human life and are now being studied as a tool to reduce the worldwide increase in lifestyle-related diseases. This paper describes a method for recognizing the contexts of daily human life by recording a lifelog based on a person’s location. The proposed method can distinguish and recognize several contexts at the same location by extracting features from the GPS data transmitted from smartphones. The GPS data are then used to generate classification models by machine learning. Five classification models were generated: a mobile-or-stationary recognition model, a transportation recognition model, and three daily context recognition models. In addition, optimal learning algorithms for machine learning were determined. The experimental results show that this method is highly accurate; for example, the F-measure of daily context recognition was approximately 0.954 overall at a tavern and approximately 0.920 overall at a university.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_16-GPS-Based_Daily_Context_Recognition_for_Lifelog_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analysis Based on Expanded Aspect and Polarity-Ambiguous Word Lexicon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060215</link>
        <id>10.14569/IJACSA.2015.060215</id>
        <doi>10.14569/IJACSA.2015.060215</doi>
        <lastModDate>2015-02-28T17:28:25.4300000+00:00</lastModDate>
        
        <creator>Yanfang Cao</creator>
        
        <creator>Pu Zhang</creator>
        
        <creator>Anping Xiong</creator>
        
        <subject>polarity-ambiguous word; aspect; sentiment analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>This paper focuses on the task of disambiguating polarity-ambiguous words, which is reduced to sentiment classification of aspects; we refer to this as sentiment expectation, instead of the semantic orientation widely used in previous research. Polarity-ambiguous words are words like “large, small, high, low”, which pose a challenging problem for sentiment analysis. In order to disambiguate polarity-ambiguous words, this paper constructs an aspect and polarity-ambiguous-word lexicon using a mutual bootstrapping algorithm. The sentiment of polarity-ambiguous words in context can then be decided collaboratively by the sentiment expectation of the aspects and the prior polarity of the polarity-ambiguous words. At the sentence level, experiments show that our method is effective in sentiment analysis.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_15-Sentiment_Analysis_Based_on_Expanded_Aspect_and_Polarity-Ambiguous.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Examination of Using Business Intelligence Systems by Enterprises in Hungary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060214</link>
        <id>10.14569/IJACSA.2015.060214</id>
        <doi>10.14569/IJACSA.2015.060214</doi>
        <lastModDate>2015-02-28T17:28:25.4000000+00:00</lastModDate>
        
        <creator>Peter Sasvari</creator>
        
        <subject>Business Intelligence; Hungary; Enterprises</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Data are one of the key elements in corporate decision-making; without them, the decision-making process cannot be imagined. As a consequence, different analytical tools are needed that allow the efficient use of data, information and knowledge. These analytical tools are commonly called Business Intelligence (BI) systems, and they are introduced into the operation of enterprises to make access to business data easier, faster and broader, in line with the needs of a given enterprise. Based on the findings of an empirical survey, this paper aims to give a deeper insight into the causes and purposes of the use of BI systems by Hungarian enterprises. It is revealed that such systems are mostly used for risk analysis, financial analysis, market analysis and controlling, while their potential to make predictions is usually overlooked. One important conclusion of the paper is that a faster spread of BI systems would be facilitated by reduced costs, simpler parameter settings and a higher level of data protection.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_14-The_Examination_of_Using_Business_Intelligence_Systems_by_Enterprises_in_Hungary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessment of Potential Dam Sites in the Kabul River Basin Using GIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060213</link>
        <id>10.14569/IJACSA.2015.060213</id>
        <doi>10.14569/IJACSA.2015.060213</doi>
        <lastModDate>2015-02-28T17:28:25.3830000+00:00</lastModDate>
        
        <creator>RASOOLI Ahmadullah</creator>
        
        <creator>KANG Dongshik</creator>
        
        <subject>Geographical Information System (GIS); Kabul River Basin (KRB); Digital Elevation Model (DEM); Map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The research focuses on water resources infrastructure, management and development in the Kabul River Basin (KRB), as there are many dams already in the basin and many more are planned or under study with multi-purpose objectives such as power generation, irrigation, and supplying water to industry and households. All water-resources-related information for the KRB has been centralized in an integrated relational geo-database, which serves as a central repository for river basin management information, with the main objectives of optimizing information collection, retrieval and organization. In addition, this paper presents information and characteristics of the KRB, such as the drainage network (hydrology), irrigation, population, climate, surface pattern and other necessary features of the basin, obtained through GIS in order to support investment in and implementation of infrastructure projects. The first step in any kind of hydrologic modeling involves delineating streams and watersheds and deriving basic watershed properties such as area, slope, flow length and stream network density. Traditionally this was (and still is) done manually using topographic/contour maps. With the availability of Digital Elevation Models (DEM) and GIS tools, watershed properties can be extracted by automated procedures; the processing of a DEM to delineate watersheds is referred to as terrain pre-processing. The work also produced the necessary thematic maps, base maps and other detailed maps for illustrating basin characteristics and features on a GIS basis.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_13-Assessment_of_Potential_Dam_Sites_in_the_Kabul_River_Basin_Using_GIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Real-Time Research of Optimal Power Flow Calculation in Reduce Active Power Loss Aspects of Power Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060212</link>
        <id>10.14569/IJACSA.2015.060212</id>
        <doi>10.14569/IJACSA.2015.060212</doi>
        <lastModDate>2015-02-28T17:28:25.3370000+00:00</lastModDate>
        
        <creator>Yuting Pan</creator>
        
        <creator>Yuchen Chen</creator>
        
        <creator>Zhiqiang Yuan</creator>
        
        <creator>Bo Liu</creator>
        
        <subject>optimal power flow; voltage regulation; reactive power compensation; cross-sectional area; active power loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>In order to investigate how to effectively reduce active power loss in an operating power grid, this paper offers a quantitative theoretical study, taking the unbalanced losses of the grid under an overloaded bus as the object of investigation and establishing a mathematical model of active power loss. It carries out on-line real-time optimal power flow calculation under the condition that the control variables and state variables satisfy the equality and inequality constraints. For branches with larger network loss, it adopts three methods, namely voltage regulation, reactive power compensation, and changing the branch&#8217;s cross-sectional area, to reduce the overall active power loss. Moreover, it compares the compensation equivalents of the three methods during the recovery of the overall active power loss in the grid. Taking the IEEE 14-bus system as an example, it verifies the effectiveness of the proposed methods. The work not only offers a reasonable measure to reduce grid losses, but can also provide a reliable reference for power grid dispatching personnel.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_12-The_Real-Time_Research_of_Optimal_Power_Flow_Calculation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Gamification Effectiveness in Online e-Learning Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060211</link>
        <id>10.14569/IJACSA.2015.060211</id>
        <doi>10.14569/IJACSA.2015.060211</doi>
        <lastModDate>2015-02-28T17:28:25.3070000+00:00</lastModDate>
        
        <creator>Ilya V. Osipov</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Alex A. Volinsky</creator>
        
        <creator>Anna Y. Prasikova</creator>
        
        <subject>elearning; gamification; marketing; monetization; viral marketing; virality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing system effectiveness is based on analyzing log files to track studying time, the number of connections, and earned game bonus points. This study is based on an example of an online application for practicing foreign language speaking skills between random users, who select the role of teacher or student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed to both participants, along with user motivation by means of gamification. The actual percentage of successful connections between users who were neither specifically motivated nor familiar with each other was measured. The obtained result can be used for gauging the success of the developed system and of the proposed teaching methodology in general.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_11-Study_of_Gamification_Effectiveness_in_Online_e-Learning_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effects of Different Congestion Management Algorithms over Voip Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060210</link>
        <id>10.14569/IJACSA.2015.060210</id>
        <doi>10.14569/IJACSA.2015.060210</doi>
        <lastModDate>2015-02-28T17:28:25.2770000+00:00</lastModDate>
        
        <creator>Szabolcs Szil&#225;gyi</creator>
        
        <subject>CBWFQ; congestion; CQ; FIFO; LLQ; Pagent; PQ; queuing; WFQ</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>This paper presents one of the features of the DS (Differentiated Services) architecture, namely queuing, or congestion management. Packets can be placed into separate buffer queues on the basis of the DS value. Several forwarding policies can be used to favor high-priority packets in different ways. The major reason for queuing is that the router must hold a packet in its memory while the outgoing interface is busy sending another packet. The main goal is to compare the performance of the following queuing mechanisms in a laboratory environment: FIFO (First-In First-Out), CQ (Custom Queuing), PQ (Priority Queuing), WFQ (Weighted Fair Queuing), CBWFQ (Class Based Weighted Fair Queuing) and LLQ (Low Latency Queuing). The research is empirical and qualitative; the results are useful both in infocommunication and in education.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_10-The_Effects_of_Different_Congestion_Management_Algorithms_over_Voip_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Software Bug Prediction Models Using Various Software Metrics as the Bug Indicators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060209</link>
        <id>10.14569/IJACSA.2015.060209</id>
        <doi>10.14569/IJACSA.2015.060209</doi>
        <lastModDate>2015-02-28T17:28:25.2600000+00:00</lastModDate>
        
        <creator>Varuna Gupta</creator>
        
        <creator>Dr. N. Ganeshan</creator>
        
        <creator>Dr. Tarun K. Singhal</creator>
        
        <subject>Bug Prediction; DIT; WMC; CBO; LoC; SRGM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The effectiveness of bug prediction contributes considerably towards enhancing software quality. Bug indicators play a significant role in determining bug prediction approaches and help in achieving software reliability. Various comparative research studies have indicated that Depth of Inheritance Tree (DIT), Weighted Methods per Class (WMC), Coupling Between Objects (CBO) and Lines of Code (LoC) have established themselves as reliable bug indicators for comprehensive bug prediction.
The researchers carried out quantitative research, developed prediction models using the above bug indicators as model inputs, and applied these models to open source projects (Camel and Ant). The results demonstrate a significant correlation between size-oriented metrics (bug indicators) such as DIT, WMC, CBO and LoC, and bugs. Overall, DIT has a stronger impact on predicting bugs than WMC, CBO and LoC.
The outcomes of the present research study would be of significance to software quality practitioners worldwide and would help them prioritize the effort involved in bug prediction.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_9-Developing_Software_Bug_Prediction_Models_Using_Various_Software_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Label Classification Approach Based on Correlations Among Labels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060208</link>
        <id>10.14569/IJACSA.2015.060208</id>
        <doi>10.14569/IJACSA.2015.060208</doi>
        <lastModDate>2015-02-28T17:28:25.2270000+00:00</lastModDate>
        
        <creator>Raed Alazaidah</creator>
        
        <creator>Fadi Thabtah</creator>
        
        <creator>Qasem Al-Radaideh</creator>
        
        <subject>Classification; Data mining; Multi-label Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Multi-label classification is concerned with learning from a set of instances that are associated with sets of labels, that is, an instance may be associated with multiple labels at the same time. This task occurs frequently in application areas such as text categorization, multimedia classification, bioinformatics, protein function classification and semantic scene classification. Current multi-label classification methods can be divided into two categories. The first, called problem transformation methods, transforms the multi-label classification problem into single-label classification problems and then applies any single-label classifier. The second, called algorithm adaptation methods, adapts an existing single-label classification algorithm to handle multi-label data. In this paper, we propose a multi-label classification approach based on correlations among labels that uses both problem transformation and algorithm adaptation. The approach begins by transforming the multi-label dataset into a single-label dataset using the least frequent label criterion, and then applies the PART algorithm on the transformed dataset. The output of the approach is a set of multi-label rules. The approach also benefits from positive correlations among labels using the predictive Apriori algorithm. The proposed approach has been evaluated using two multi-label datasets (Emotions and Yeast) and three evaluation measures (Accuracy, Hamming Loss, and Harmonic Mean). The experiments showed that the proposed approach achieves fair accuracy in comparison to other related methods.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_8-A_Multi-Label_Classification_Approach_Based_on_Correlations_Among_Labels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Decision Support System for Handling Health Insurance Deduction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060207</link>
        <id>10.14569/IJACSA.2015.060207</id>
        <doi>10.14569/IJACSA.2015.060207</doi>
        <lastModDate>2015-02-28T17:28:25.1970000+00:00</lastModDate>
        
        <creator>Shakiba Khademolqorani</creator>
        
        <creator>Ali Zeinal Hamadani</creator>
        
        <subject>Hospital Management; Insurance Deduction; Decision Support Systems; Data Mining; Multiple Criteria Decision Making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Effective hospital management involves such activities as monitoring the flow of medication, controlling treatment, and billing for the patient&#8217;s treatment. A major challenge between insurance companies and hospitals lies in the way medical treatment expenses for insured patients are reimbursed; in some cases, the insurance deduction leads to a loss of revenue for hospitals. This paper proposes a framework for handling insurance deductions that integrates three major methodologies: Decision Support Systems, Data Mining, and Multiple Criteria Decision Making. To exemplify the practical utility of the framework, it is applied to hospital services and insurance deductions extracted from 200,000 documents from 150 hospitals in Iran. To classify the kinds of services, decision trees are developed to mine hidden rules in the data, which are then modified on the basis of some performance measures. The rules are then extracted and ranked using the TOPSIS method. The results show that the proposed framework is capable of effectively providing objective and comprehensive assessments of insurance deductions.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_7-Development_of_a_Decision_Support_System_for_Handling_Health_Insurance_Deduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Traffic Information System Based on Integration of Internet of Things and Agent Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060206</link>
        <id>10.14569/IJACSA.2015.060206</id>
        <doi>10.14569/IJACSA.2015.060206</doi>
        <lastModDate>2015-02-28T17:28:25.1670000+00:00</lastModDate>
        
        <creator>Hasan Omar Al-Sakran</creator>
        
        <subject>Intelligent Traffic; Internet-of-Things; RFID; Wireless Sensor Networks; Agent Technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>In recent years the popularity of private cars has made urban traffic increasingly crowded. As a result, traffic has become one of the major problems of big cities all over the world. Among the main concerns are congestion and accidents, which cause a huge waste of time, property damage and environmental pollution. This paper presents a novel intelligent traffic administration system based on the Internet of Things (IoT), featuring low cost, high scalability, high compatibility and easy upgrading, intended to replace traditional traffic management systems; the proposed system can improve road traffic tremendously. The IoT builds on the Internet and on wireless sensing and detection technologies to realize intelligent recognition of tagged traffic objects, along with automatic tracking, monitoring, management and processing. The paper proposes an architecture that integrates the IoT with agent technology into a single platform, where agent technology handles effective communication and interfacing among a large number of heterogeneous, highly distributed and decentralized devices within the IoT. The architecture introduces the use of active radio-frequency identification (RFID), wireless sensor technologies, object ad-hoc networking, and Internet-based information systems in which tagged traffic objects can be automatically represented, tracked, and queried over a network. This research presents an overview of a distributed traffic simulation model within NetLogo, an agent-based environment, for an IoT traffic monitoring system using mobile agent technology.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_6-Intelligent_Traffic_Information_System_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Center Governance Information Security Compliance Assessment Based on the Cobit Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060205</link>
        <id>10.14569/IJACSA.2015.060205</id>
        <doi>10.14569/IJACSA.2015.060205</doi>
        <lastModDate>2015-02-28T17:28:25.1500000+00:00</lastModDate>
        
        <creator>Andrey Ferriyan</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <subject>COBIT; CVE; maturity model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>One of the control domains of COBIT that describes information security lies in Deliver and Support (DS), specifically DS5 Ensure Systems Security. This domain describes what an organization should do to preserve and maintain the integrity of its IT information assets, all of which requires a security management process. One such process is security monitoring: conducting periodic vulnerability assessments to identify weaknesses. Because COBIT is not specified at a technical level, a method is needed that utilizes standardized data; one standardized vulnerability database is CVE (Common Vulnerabilities and Exposures). This study aims to assess the current condition of the Data Center of the Department of Transportation, Communication and Information Technology of Sleman Regency, to assess the maturity level of its security, and to provide solutions, in particular for IT security. A further goal is to perform a vulnerability assessment to find out which parts of the data center may be vulnerable; knowing the weaknesses helps to evaluate and provide solutions for a better future. The result of this research is a tool for vulnerability assessment and a tool to calculate the maturity model.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_5-Data_Center_Governance_Information_Security_Compliance_Assessment_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Supporting Self-Organization with Logical-Clustering Towards Autonomic Management of Internet-of-Things</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060204</link>
        <id>10.14569/IJACSA.2015.060204</id>
        <doi>10.14569/IJACSA.2015.060204</doi>
        <lastModDate>2015-02-28T17:28:25.1200000+00:00</lastModDate>
        
        <creator>Hasibur Rahman</creator>
        
        <creator>Theo Kanter</creator>
        
        <creator>Rahim Rahmani</creator>
        
        <subject>autonomic management; Future Internet; Internet-of-Things; self-organization; logical-clustering; MediaSense</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>One of the challenges for autonomic management in the Future Internet is to bring about self-organization in a rapidly changing environment and to enable participating nodes to be aware of and respond to changes. The massive number of participating nodes in the Internet-of-Things calls for a new approach to autonomic management, with dynamic self-organization and awareness of context information changes in the nodes themselves. To this end, we present new algorithms to enable self-organization with logical-clustering, the goal of which is to ensure that logical-clustering evolves correctly in a dynamic environment. The focus of these algorithms is to structure the logical-clustering topology in an organized way with minimal intervention from outside sources. The correctness of the proposed algorithms is demonstrated on a scalable IoT platform, MediaSense. Our algorithms allow 10 nodes to organize themselves per second and achieve high accuracy of node discovery. Finally, we outline future research challenges towards autonomic management of IoT.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_4-Supporting_Self-Organization_with_Logical-Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constraint on Repair Resources, Optimal Number of Repairers and Optimal Size of a Serviced System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060203</link>
        <id>10.14569/IJACSA.2015.060203</id>
        <doi>10.14569/IJACSA.2015.060203</doi>
        <lastModDate>2015-02-28T17:28:25.1030000+00:00</lastModDate>
        
        <creator>Marin Todinov</creator>
        
        <subject>constraint on the repair resources; discrete-event simulation; optimization; repairs; optimal size of a system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The focus of this paper is the analysis of the constraint on the repair resources caused by breakdowns of components in large systems. The study has been conducted by creating a very efficient discrete-event simulator, based on a min-heap data structure, for determining the probability of constraint on the repair resources.
In finding the right balance between the number of repairers and salary costs, an exact optimisation algorithm is proposed for the first time. The algorithm determines the optimal number of repairers which guarantees that the probability of a constraint on the repair resources will not exceed a specified tolerable level. In addition, an exact optimisation algorithm is proposed, also for the first time, for determining the maximum size of system that can be serviced by a given number of repairers so that the probability of a constraint on the repair resources remains below a specified tolerable level. Unlike heuristic optimisation algorithms, the proposed algorithms are exact and always guarantee optimal solutions.
The presented results are of significant importance to operators of computer networks, production systems, transportation networks, water distribution systems, electrical distribution networks etc. They are a solid basis for management decisions regarding the optimal number of maintenance personnel needed to service the breakdowns in large systems. Increasing the number of repairers beyond the optimal level leads to high salary costs while reducing the number of repairers below the optimal number leads to a poor quality of service.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_3-Constraint_on_Repair_Resources_Optimal_Number.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>kEFCM: kNN-Based Dynamic Evolving Fuzzy Clustering Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060202</link>
        <id>10.14569/IJACSA.2015.060202</id>
        <doi>10.14569/IJACSA.2015.060202</doi>
        <lastModDate>2015-02-28T17:28:25.0730000+00:00</lastModDate>
        
        <creator>Shubair Abdulla</creator>
        
        <creator>Amer Al-Nassiri</creator>
        
        <subject>Evolving; Fuzzy Logic; Clustering; k-NN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>Despite the recent emergence of research in this area, creating an evolving fuzzy clustering method that intelligently copes with the huge amount of data streams in present high-speed networks involves many difficulties. Several efforts have been devoted to enhancing traditional clustering techniques into on-line evolving fuzzy methods able to learn and develop continuously. In line with these efforts, we propose kEFCM, a kNN-based evolving fuzzy clustering method. kEFCM overcomes the problems of computational cost, dynamic fuzzy evolving, and clustering complexity of traditional kNN. It employs the least-squares method for determining the cluster center and influential area, as well as the Euclidean distance for identifying the membership degree. It enhances the traditional kNN algorithm by involving only cluster centers in classification decisions and by evolving the clusters on-line when new data arrive. For evaluation purposes, experimental results on a collection of benchmark datasets are compared against those of other well-known clustering methods. The evaluation results confirm that kEFCM is competitive.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_2-kEFCM_kNN-Based_Dynamic_Evolving_Fuzzy_Clustering_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Strategies for ROI and Image Matching</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060201</link>
        <id>10.14569/IJACSA.2015.060201</id>
        <doi>10.14569/IJACSA.2015.060201</doi>
        <lastModDate>2015-02-28T17:28:24.9930000+00:00</lastModDate>
        
        <creator>Dr. Khaled M. G. Noaman</creator>
        
        <creator>Dr. Jamil Abdulhamid M. Saif</creator>
        
        <subject>systematic; random; gradient; simulated annealing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(2), 2015</description>
        <description>The paper presents four matching strategies, systematic, random, gradient and simulated annealing, using different metrics. We consider two kinds of image matching algorithms. The first is oriented towards whole-image matching, where we compare corresponding pixels or chosen image characteristics. The second is oriented towards finding the region in the target image (region of interest, ROI) which best matches the ROI given in the template image. For our experiments we take the list of target images directly from the atlas, and a subset of these images as the template images.</description>
        <description>http://thesai.org/Downloads/Volume6No2/Paper_1-Effective_Strategies_for_ROI_and_Image_Matching.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Trust-based Mechanism for Avoiding Liars in Referring of Reputation in Multiagent System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040205</link>
        <id>10.14569/IJARAI.2015.040205</id>
        <doi>10.14569/IJARAI.2015.040205</doi>
        <lastModDate>2015-02-10T11:47:07.9330000+00:00</lastModDate>
        
        <creator>Manh Hung Nguyen</creator>
        
        <creator>Dinh Que Tran</creator>
        
        <subject>Multiagent system; Trust; Reputation; Liar</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(2), 2015</description>
        <description>Trust is considered a crucial factor in agents&#8217; decision making when choosing the most trustworthy partner to interact with in open distributed multiagent systems. Most current trust models combine experience trust and reference trust, where reference trust is estimated from the judgements of agents in the community about a given partner. These models are based on the assumption that all agents are reliable when sharing their judgements about a given partner with others. However, such models are no longer appropriate for multiagent system applications in which some agents may not be ready to share their private judgements about others, or may share wrong data by lying to their partners.
In this paper, we introduce a model combining experience trust and reference trust with a mechanism that enables agents to take into account the trustworthiness of referees when consulting their judgements about a given partner. We conduct experiments to evaluate the proposed model in the context of an e-commerce environment. Our results suggest that it is better to take into account the trustworthiness of referees when they share their judgements about partners. The experimental results also indicate that, even though there are liars in the multiagent system, the combined trust computation is better than trust computation based only on the agents&#8217; experience trust.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No2/Paper_5-A_Trust-based_Mechanism_for_Avoiding_Liars_in_Referring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speech emotion recognition in emotional feedback for Human-Robot Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040204</link>
        <id>10.14569/IJARAI.2015.040204</id>
        <doi>10.14569/IJARAI.2015.040204</doi>
        <lastModDate>2015-02-10T11:47:07.8870000+00:00</lastModDate>
        
        <creator>Javier G. Rázuri</creator>
        
        <creator>David Sundgren</creator>
        
        <creator>Rahim Rahmani</creator>
        
        <creator>Aron Larsson</creator>
        
        <creator>Antonio Moran Cardenas</creator>
        
        <creator>Isis Bonet</creator>
        
        <subject>Affective Computing; Detection of Emotional Information; Machine Learning; Speech Emotion Recognition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(2), 2015</description>
        <description>For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The features of the sound of a spoken voice probably contain crucial information on the emotional state of the speaker; within this framework, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers to predict six basic universal emotions from non-verbal features of human speech. The classification techniques used information from six audio files extracted from the eNTERFACE05 audio-visual emotion database. The information gain from a decision tree was also used to choose the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated with the proposed features and with the features selected by the decision tree. With this feature selection, each of the compared classifiers increased its global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No2/Paper_4-Speech_emotion_recognition_in_emotional_feedback.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Improved Cuckoo Search(ICS) Algorithm and Artificial Bee Colony (ABC) Algorithm on Continuous Optimization Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040203</link>
        <id>10.14569/IJARAI.2015.040203</id>
        <doi>10.14569/IJARAI.2015.040203</doi>
        <lastModDate>2015-02-10T11:47:07.8270000+00:00</lastModDate>
        
        <creator>Shariba Islam Tusiy</creator>
        
        <creator>Nasif Shawkat</creator>
        
        <creator>Md. Arman Ahmed</creator>
        
        <creator>Biswajit Panday</creator>
        
        <creator>Nazmus Sakib</creator>
        
        <subject>Artificial Bee Colony (ABC) algorithm; Bioinformatics; Improved Cuckoo Search (ICS) algorithm; L&#233;vy flight; Meta heuristic; Nature Inspired Algorithms</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(2), 2015</description>
        <description>This work compares two well-known nature-inspired algorithms: Improved Cuckoo Search (ICS) and the Artificial Bee Colony (ABC) algorithm. The Improved Cuckoo Search algorithm is based on Lévy flights and the behavior of some birds and fruit flies; it relies on several assumptions, each of which is carefully observed to maintain the algorithm's characteristics. The Artificial Bee Colony algorithm, in contrast, is based on swarm intelligence, modeling the way bees maintain their life in a colony; the bees' characteristics are the main part of this algorithm. This paper presents a theoretical, quantitative comparison of the two algorithms on continuous optimization problems.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No2/Paper_3-Comparative_Analysis_of_Improved_Cuckoo_Search(ICS)_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Innovative Processes in Computer Assisted Language Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040202</link>
        <id>10.14569/IJARAI.2015.040202</id>
        <doi>10.14569/IJARAI.2015.040202</doi>
        <lastModDate>2015-02-10T11:47:07.7000000+00:00</lastModDate>
        
        <creator>Khaled M. Alhawiti</creator>
        
        <subject>Natural Language Processing; Computer Assisted Language Learning; Syntactic Simplification Tools</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(2), 2015</description>
        <description>An individual's reading ability is believed to be one of the major components of language competency. From this perspective, selecting topical readings for second language learners is considered a tough task for the language instructor. This mixed (qualitative and quantitative) research study aims to address innovative processes in computer-assisted language learning by surveying the reading level of ESL students and streamlining content in classrooms designed for them. The study is based on empirical research measuring the reading level of ESL students. Its findings reveal that language preparation procedures such as shortened texts, together with tools for automatic text simplification, are beneficial for both the ESL students and their teachers.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No2/Paper_2-Innovative_Processes_in_Computer_Assisted_Language_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blocking Black Area Method for Speech Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040201</link>
        <id>10.14569/IJARAI.2015.040201</id>
        <doi>10.14569/IJARAI.2015.040201</doi>
        <lastModDate>2015-02-10T11:47:07.6230000+00:00</lastModDate>
        
        <creator>Dr. Md. Mijanur Rahman</creator>
        
        <creator>Fatema Khatun</creator>
        
        <creator>Dr. Md. Al-Amin Bhuiyan</creator>
        
        <subject>Blocking Black Area; Boundary Detection; Dynamic Thresholding; Otsu’s Algorithm; Speech Segmentation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(2), 2015</description>
        <description>Speech segmentation is an important sub-problem of automatic speech recognition. This research is concerned with the development of a continuous speech segmentation system for the Bangla language. The paper presents a dynamic thresholding algorithm to segment continuous Bangla speech sentences into words/sub-words. The research uses Otsu's method for dynamic thresholding and introduces a new approach, named the blocking black area method, to identify the voiced regions of continuous speech during segmentation. The developed system has been validated with several continuously spoken Bangla sentences. To test the performance of the system, 100 Bangla sentences containing 656 words were recorded from 5 (five) male speakers of different ages, so the speech database contains 500 recorded sentences with 3,280 words in total. All the algorithms and methods used in this research are implemented in MATLAB, and the proposed system achieved an average segmentation accuracy of 90.58%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No2/Paper_1-Blocking_Black_Area_Method_for_Speech_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Public Transportation Management System based on GPS/WiFi and Open Street Maps</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060127</link>
        <id>10.14569/IJACSA.2015.060127</id>
        <doi>10.14569/IJACSA.2015.060127</doi>
        <lastModDate>2015-01-31T17:12:18.4730000+00:00</lastModDate>
        
        <creator>Saed Tarapiah</creator>
        
        <creator>Shadi Atalla</creator>
        
        <subject>ITS; GPS; WiFi; Transportation; OSM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Information technology (IT) has transformed many industries, from education to health care to government, and is now in the early stages of transforming transportation systems. Transportation faces many issues, such as high accident rates, which are even higher in developing countries, where the lack of proper road infrastructure is one of the causes of these crashes. In this project we focus on public transportation vehicles, such as buses and mini-buses, and the goal is to design and deploy a smart/intelligent unit attached to public vehicles, using embedded microcontrollers and sensors and empowering the vehicles to communicate with each other through wireless technologies. The proposed Offline Intelligent Public Transportation Management System will play a major role in reducing risks and high accident rates, while increasing traveler satisfaction and convenience. We propose a method, software, and a framework as enabling technologies for the evaluation, planning, and future improvement of the public transportation system. Although our system can be applied, in whole or in part, all over the world, we mostly target developing countries; this focus is reflected in the choice of off-the-shelf technologies such as WiFi, GPS, and Open Street Maps (OSM).</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_27-Public_Transportation_Management_System_based_on_GPSWiFi.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Orientation Capture of a Walker’s Leg Using Inexpensive Inertial Sensors with Optimized Complementary Filter Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060125</link>
        <id>10.14569/IJACSA.2015.060125</id>
        <doi>10.14569/IJACSA.2015.060125</doi>
        <lastModDate>2015-01-31T17:12:18.4430000+00:00</lastModDate>
        
        <creator>Sebastian Andersson</creator>
        
        <creator>Liu Yan</creator>
        
        <subject>Motion capture; Complementary Filter; Inertial sensors; Bluetooth Low Energy; iOS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Accelerometers and gyroscopes are often referred to as inertial sensors. They detect movement and are used in motion tracking systems in many fields. In recent years they have become much smaller, lighter, and cheaper, which makes them attractive for use in consumer electronics. The goal of this research is to use these advantages to create a low-cost yet accurate motion tracking system. The developed system uses two pairs of accelerometer and gyroscope sensors that communicate with an iOS device over BLE. The sensors are attached to a person's leg to capture its orientation while walking or running; studying the movements of a person's leg can be useful for both performance and health aspects. To create the system, the use of inertial sensors and the combination of their data with a complementary filter were studied. Further, several experiments were conducted to optimize the filter design for this kind of movement. The results show how the accuracy of the orientation estimation varies with the values chosen in the filter design. With the right values, however, a fairly accurate orientation of the leg can be estimated, as demonstrated by the simple visualization in the iOS application.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_25-Orientation_Capture_of_a_Walker’s_Leg_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Technical Perspectives on Knowledge Management in Bioinformatics Workflow Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060126</link>
        <id>10.14569/IJACSA.2015.060126</id>
        <doi>10.14569/IJACSA.2015.060126</doi>
        <lastModDate>2015-01-31T17:12:18.4430000+00:00</lastModDate>
        
        <creator>Walaa N. Ismail</creator>
        
        <creator>M.Sabih Aksoy</creator>
        
        <subject>Bioinformatics Workflows; Knowledge Management; Ontologies; Scientific Workflows; Semantic Web</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Workflow systems, by their nature, can help bioinformaticians plan their experiments and store, capture, and analyze the data generated at runtime. At the same time, life science research produces new knowledge at an increasing speed; dealing with knowledge such as papers, databases, and other systems is a complex task that demands much effort and time from a researcher. The management of knowledge is therefore an important issue for life scientists. Approaches are presently being developed to organize biological knowledge sources and to record the provenance knowledge of an experiment in a readily usable resource. This article focuses on the knowledge management of in silico experimentation in bioinformatics workflow systems.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_26-Technical_Perspectives_on_Knowledge_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title></title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060124</link>
        <id>10.14569/IJACSA.2015.060124</id>
        <doi>10.14569/IJACSA.2015.060124</doi>
        <lastModDate>2015-01-31T17:12:18.4130000+00:00</lastModDate>
        
        <creator>Chika Oshima</creator>
        
        <creator>Kiyoshi Yasuda</creator>
        
        <creator>Toshiyuki Uno</creator>
        
        <creator>Kimie Machishima</creator>
        
        <creator>Koichi Nakayama</creator>
        
        <subject>Android; Care facility; Memory loss</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description></description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_24-Give_a_Dog_ICT_Devices_How_Smartphone-Carrying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of OSPFv3 and EIGRPv6 in a Small IPv6 Enterprise Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060123</link>
        <id>10.14569/IJACSA.2015.060123</id>
        <doi>10.14569/IJACSA.2015.060123</doi>
        <lastModDate>2015-01-31T17:12:18.3800000+00:00</lastModDate>
        
        <creator>Richard John Whitfield</creator>
        
        <creator>Shao Ying Zhu</creator>
        
        <subject>IPv4; IPv6; OSPFv3; IPSec; ESP; EIGRPv4; EIGRPv6; MD5</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>As the Internet slowly transitions towards IPv6, the routing protocols used to forward traffic across this global network must adapt to support this gradual transition. Two of the most frequently discussed interior dynamic routing protocols today are the IETF's OSPF and Cisco's EIGRP. A wealth of papers have compared OSPF and EIGRP in terms of convergence times and resource usage; however, few papers have assessed the performance of each when implementing their respective security mechanisms. Therefore, a comparison of OSPFv3 and EIGRPv6 was conducted using dedicated Cisco hardware. This paper first introduces each protocol and its security mechanisms, before comparing OSPFv3 and EIGRPv6 using Cisco equipment. After discussing the simulation results, a conclusion is drawn to reveal the findings of this paper and which protocol performs best when implementing its security mechanisms within a small IPv6 enterprise network.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_23-A_Comparison_of_OSPFv3_and_EIGRPv6_in_a_Small_IPv6_Enterprise_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effectiveness of iPhone’s Touch ID: KSA Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060122</link>
        <id>10.14569/IJACSA.2015.060122</id>
        <doi>10.14569/IJACSA.2015.060122</doi>
        <lastModDate>2015-01-31T17:12:18.3670000+00:00</lastModDate>
        
        <creator>Ahmad A. Al-Daraiseh</creator>
        
        <creator>Diana Al Omari</creator>
        
        <creator>Hadeel Al Hamid</creator>
        
        <creator>Nada Hamad</creator>
        
        <creator>Rawan Althemali</creator>
        
        <subject>iPhone 5s; iPhone 6; iPhone 6 Plus; Fingerprint; Touch ID</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>A new trend of incorporating Touch ID sensors in mobile devices is appearing. Last year, Apple released a new model of its famous iPhone (5s). One of the most anticipated and hailed features of the new device was its Touch ID. Apple advertised that the new technology would increase the security of the device and that it would also be used in different applications as a proof of identity. Making the issue more controversial, Apple announced a new financial service (Apple Pay) that allows iPhone 6 users to use their iPhone as a replacement for credit cards. The minute the new technology was introduced, many questions appeared that needed immediate answers: users were concerned about how it would work, whether it is easy to use, whether it is really safe, and whether it would be effective in protecting their private data. In this paper we provide a comprehensive study of this feature. We discuss the advantages and disadvantages of using it. We then analyze and share the results of a survey that we conducted to measure the effectiveness of this feature in the Kingdom of Saudi Arabia (KSA). We focus only on users from KSA because, if the device fails to protect the mobile's data, severe consequences might follow; due to cultural beliefs in KSA, releasing a mobile's contents to unauthorized people could lead to crimes. The survey analysis revealed somewhat controversial results: while 76% of all participants believe that this technology will improve device security, only 33% use it to lock/unlock their devices, and an even smaller percentage use it to make purchases.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_22-Effectiveness_of_Iphone’s_Touch_ID_KSA_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Topic Modeling in Text Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060121</link>
        <id>10.14569/IJACSA.2015.060121</id>
        <doi>10.14569/IJACSA.2015.060121</doi>
        <lastModDate>2015-01-31T17:12:18.3500000+00:00</lastModDate>
        
        <creator>Rubayyi Alghamdi</creator>
        
        <creator>Khalid Alfalqi</creator>
        
        <subject>Topic Modeling; Methods of Topic Modeling; Latent semantic analysis (LSA); Probabilistic latent semantic analysis (PLSA); Latent Dirichlet allocation (LDA); Correlated topic model (CTM); Topic Evolution Modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Topic models provide a convenient way to analyze large volumes of unclassified text. A topic contains a cluster of words that frequently occur together. Topic modeling can connect words with similar meanings and distinguish between uses of words with multiple meanings. This paper surveys two categories within the field of topic modeling. The first covers methods of topic modeling and includes four notable methods: latent semantic analysis (LSA), probabilistic latent semantic analysis (PLSA), latent Dirichlet allocation (LDA), and the correlated topic model (CTM). The second category, topic evolution models, covers approaches that model topics by taking an important factor, time, into account. In this category, different models are discussed, such as topics over time (TOT), dynamic topic models (DTM), multiscale topic tomography, dynamic topic correlation detection, and detecting topic evolution in scientific literature.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_21-A_Survey_of_Topic_Modeling_in_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Android Platform Malware Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060120</link>
        <id>10.14569/IJACSA.2015.060120</id>
        <doi>10.14569/IJACSA.2015.060120</doi>
        <lastModDate>2015-01-31T17:12:18.3030000+00:00</lastModDate>
        
        <creator>Rubayyi Alghamdi</creator>
        
        <creator>Khalid Alfalqi</creator>
        
        <creator>Mofareh Waqdan</creator>
        
        <subject>Smartphone Security; Malware Analysis; Android Malware; Static Analysis; Dynamic Analysis; SDK; VAD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Mobile devices have evolved from simple devices used for phone calls and SMS messages to smartphones that can run third-party applications. Nowadays, malicious software, also known as malware, poses a growing threat to these mobile devices. Recently, many news items have been posted about the increase in Android malware, and many Android applications have been pulled from the Android Market because they contained malware. The vulnerabilities of those applications, or of the Android operating system itself, are exploited by attackers who are able to penetrate mobile systems without user authorization, compromising the confidentiality, integrity, and availability of the applications and the user's data. This paper gives an update on the work done in the project.
Moreover, this paper focuses on the Android operating system and aims to detect existing Android malware. It uses a dataset containing 104 malware samples. The paper selects several malware samples from the dataset and analyzes them to understand their installation and activation methods. In addition, it evaluates the most popular existing anti-virus software to see whether these 104 malware samples can be detected.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_20-Android_Platform_Malware_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology Based SMS Controller for Smart Phones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060119</link>
        <id>10.14569/IJACSA.2015.060119</id>
        <doi>10.14569/IJACSA.2015.060119</doi>
        <lastModDate>2015-01-31T17:12:18.2870000+00:00</lastModDate>
        
        <creator>Mohammed A. Balubaid</creator>
        
        <creator>Umar Manzoor</creator>
        
        <creator>Bassam Zafar</creator>
        
        <creator>Abdullah Qureshi</creator>
        
        <creator>Numairul Ghani</creator>
        
        <subject>Short Text Classification; SMS Spam; Text Analysis; Ontology based SMS Spam; Text Analysis and Ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Text analysis includes lexical analysis of text and has been widely studied and used in diverse applications. In the last decade, researchers have proposed many efficient solutions to analyze and classify large text datasets; however, the analysis and classification of short text is still a challenge because 1) the data is very sparse, 2) it contains noise words, and 3) it is difficult to understand the syntactical structure of the text. Short Messaging Service (SMS) is a text messaging service for mobile/smart phones that is frequently used by all mobile users. Because of the popularity of the SMS service, marketing companies nowadays also use it for direct marketing, also known as SMS marketing. In this paper, we propose an ontology-based SMS controller that analyzes a text message and uses an ontology to classify it as legitimate or spam. The proposed system has been tested on different scenarios, and the experimental results show that the proposed solution is effective in terms of both efficiency and time.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_19-Ontology_Based_SMS_Controller_for_Smart_Phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Heavy Clique Base Coarsening to Enhance Virtual Network Embedding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060118</link>
        <id>10.14569/IJACSA.2015.060118</id>
        <doi>10.14569/IJACSA.2015.060118</doi>
        <lastModDate>2015-01-31T17:12:18.2570000+00:00</lastModDate>
        
        <creator>Ashraf A. Shahin</creator>
        
        <subject>cloud computing; network virtualization; resource allocation; substrate network fragmentation; virtual network embedding; virtual network coarsening</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Network virtualization allows cloud infrastructure providers to accommodate multiple virtual networks on a single physical network. However, mapping multiple virtual network resources to physical network components, called virtual network embedding (VNE), is known to be non-deterministic polynomial-time hard (NP-hard). Effective virtual network embedding increases revenue by increasing the number of accepted virtual networks. In this paper, we propose a virtual network embedding algorithm that improves embedding by coarsening the virtual networks. The heavy-clique matching technique is used to coarsen the virtual networks, and the coarsened networks are then enhanced using a refined Kernighan-Lin algorithm. The performance of the proposed algorithm is evaluated and compared with existing algorithms using extensive simulations, which show that it improves virtual network embedding by increasing both the acceptance ratio and the revenue.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_18-Using_Heavy_Clique_Base_Coarsening_to_Enhance_Virtual_Network_Embedding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Thresholding Algorithms on Breast Area and Fibroglandular Tissue</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060117</link>
        <id>10.14569/IJACSA.2015.060117</id>
        <doi>10.14569/IJACSA.2015.060117</doi>
        <lastModDate>2015-01-31T17:12:18.2270000+00:00</lastModDate>
        
        <creator>Shofwatul ‘Uyun</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Lina Choridah</creator>
        
        <subject>Thresholding Algorithm; Breast Area; fibroglandular Area</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>One of the independent risk factors of breast cancer is mammographic density, which reflects the composition of the fibroglandular tissue in the breast area. Tumors in mammograms are particularly difficult to detect because they are covered by dense tissue (the masking effect). Mammographic density may be determined by calculating the percentage of mammographic density (a quantitative and objective approach). A proper thresholding algorithm is therefore highly desirable for obtaining the fibroglandular tissue area and the breast area. The mammograms used in this research were obtained from an oncology clinic in Yogyakarta and had been verified by radiologists using semi-automatic thresholding. The research aimed to compare the performance of thresholding algorithms using three parameters: PME, RAE, and MHD. The Zack algorithm had the best performance in obtaining the breast area, with PME, RAE, and MHD of about 0.33%, 0.71%, and 0.01, respectively. Meanwhile, two algorithms performed well in obtaining the fibroglandular tissue area, namely multilevel thresholding and maximum entropy, with values for PME (13.34%; 11.27%), RAE (53.34%; 51.26%), and MHD (1.47; 33.92), respectively. The obtained results suggest that the Zack algorithm is better suited to extracting the breast area, while multilevel thresholding and maximum entropy are better suited to extracting the fibroglandular tissue. This is one of the components in determining the risk factors of breast cancer based on the percentage of breast density.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_17-A_Comparative_Study_of_Thresholding_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Natural Language Conversational System for Academic Advising</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060116</link>
        <id>10.14569/IJACSA.2015.060116</id>
        <doi>10.14569/IJACSA.2015.060116</doi>
        <lastModDate>2015-01-31T17:12:18.1930000+00:00</lastModDate>
        
        <creator>Edward M. Latorre-Navarro</creator>
        
        <creator>John G. Harris</creator>
        
        <subject>Natural Language Processing; Dialog System; Conversational Agent; Academic Advising; Advising System; Engineering Education; E-learning; Human Computer Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Academic advisors assist students in academic, professional, social and personal matters. Successful advising increases student retention, improves graduation rates and helps students meet educational goals. This work presents an advising system that assists advisors in multiple tasks using natural language. The system features a conversational agent as the user interface, an academic advising knowledge base with a method that allows users to contribute to it, an expert system for academic planning, and a web design structure for the implementation platform. The system is operational for several hundred students from a university department. The system performed well, obtaining close to 80% on the traditional language processing measures of precision, recall, accuracy and F1 score. Assessment from the constituencies showed positive and assuring reviews. This work provides an assessment and a technological solution to the academic advising field, i.e., the first-known multi-task advising conversational system with adaptive measures for improvement. The evaluation in a real-world scenario shows its viability and initiated the development of a corpus for academic advising, valuable to the academic and language processing research communities.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_16-An_Intelligent_Natural_Language_Conversational_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review on Parameters Identification Methods for Asynchronous Motor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060115</link>
        <id>10.14569/IJACSA.2015.060115</id>
        <doi>10.14569/IJACSA.2015.060115</doi>
        <lastModDate>2015-01-31T17:12:18.1630000+00:00</lastModDate>
        
        <creator>Xing Zhan</creator>
        
        <creator>Guohui Zeng</creator>
        
        <creator>Jin Liu</creator>
        
        <creator>Qingzhen Wang</creator>
        
        <creator>Sheng Ou</creator>
        
        <subject>Vector control; Parameters identification; MRAS; the least square method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>The decoupling of excitation current and torque current is realized by vector control, so that the speed-regulating performance of an asynchronous motor is comparable with that of a DC motor. The control precision is directly affected by the accuracy of parameter identification in the asynchronous motor. In this paper, based on the existing literature, the existing online and offline parameter identification methods are analyzed and compared, and the advantages and disadvantages of the various algorithms are listed in tables. On this basis, a comprehensive identification method based on an adjustable model, which uses the least squares method as the adaptation mechanism of the model reference, is presented. Finally, an outlook on the development directions for parameter identification of asynchronous motors is put forward.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_15-A_Review_on_Parameters_Identification_Methods_for_Asynchronous_Motor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation of Adherence Degree of Agile Requirements Engineering Practices in Non-Agile Software Development Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060114</link>
        <id>10.14569/IJACSA.2015.060114</id>
        <doi>10.14569/IJACSA.2015.060114</doi>
        <lastModDate>2015-01-31T17:12:18.1470000+00:00</lastModDate>
        
        <creator>Mennatallah H. Ibrahim</creator>
        
        <creator>Nagy Ramadan Darwish</creator>
        
        <subject>agile methods; agile requirements engineering practices; requirements engineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Requirements are critical for the success of software projects. Requirements are practically difficult to produce, as the hardest stage of building a software system is deciding what the system should do. Moreover, requirements errors are expensive to fix in the later phases of the software development life cycle. The rapidly changing business environment poses a great challenge to traditional Requirements Engineering (RE) practices. Most software development organizations work in such a dynamic environment; as a result, agile methodologies are adopted, with or without their awareness, in various phases of their software development cycles. The aim of this paper is to investigate the adherence degree of agile RE practices in various software development organizations that classify themselves as adopting traditional (i.e. non-agile) software development methodologies. An approach is proposed for achieving this aim, and it is applied to five different projects from four different organizations. The results show that even non-agile software development organizations apply agile RE practices, to different adherence degrees.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_14-Investigation_of_Adherence_Degree_of_Agile_Requirements_Engineering_Practices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Vertical Mining Using Boolean Algebra</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060113</link>
        <id>10.14569/IJACSA.2015.060113</id>
        <doi>10.14569/IJACSA.2015.060113</doi>
        <lastModDate>2015-01-31T17:12:18.1300000+00:00</lastModDate>
        
        <creator>Hosny M. Ibrahim</creator>
        
        <creator>M. H. Marghny</creator>
        
        <creator>Noha M. A. Abdelaziz</creator>
        
        <subject>association rule; bit vector; Boolean algebra; frequent itemset;  vertical data format</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>The vertical association rule mining algorithm is an efficient mining method, which makes use of the support sets of frequent itemsets to calculate the support of candidate itemsets. It overcomes the disadvantage of scanning the database many times, as the Apriori algorithm does. In vertical mining, frequent itemsets can be represented as a set of bit vectors in memory, which enables fast computation. The sizes of the bit vectors for itemsets are the main space expense of the algorithm and restrict its scalability. Therefore, this paper presents a proposed algorithm that compresses the bit vectors of frequent itemsets. The new bit vector schema presented here relies on Boolean algebra rules to compute the intersection of two compressed bit vectors without any costly decompression operation. The experimental results show that the proposed Vertical Boolean Mining (VBM) algorithm outperforms both the Apriori algorithm and the classical vertical association rule mining algorithm in mining time and memory usage.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_13-Fast_Vertical_Mining_Using_Boolean_Algebra.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposal of SNS to Improve Member’s Motivation in Voluntary Community Using Gamification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060112</link>
        <id>10.14569/IJACSA.2015.060112</id>
        <doi>10.14569/IJACSA.2015.060112</doi>
        <lastModDate>2015-01-31T17:12:18.1000000+00:00</lastModDate>
        
        <creator>Kohei Otake</creator>
        
        <creator>Yoshihisa Shinozawa</creator>
        
        <creator>Akito Sakurai</creator>
        
        <creator>Makoto Oka</creator>
        
        <creator>Tomofumi Uetake</creator>
        
        <creator>Ryosuke Sumita</creator>
        
        <subject>Gamification; Voluntary Community; Motivation Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Recently, the number of voluntary communities, such as local communities and university club activities, has been increasing. In these communities, since there are various types of members and no binding forces, it is usually difficult to maintain and improve members&#39; motivation. To do so, most of these communities use social networking services (SNSs). However, since existing SNSs offer few functions for voluntary communities, it is difficult to solve this problem. This research focused on the concept of gamification and proposed an SNS to improve members&#39; motivation in a voluntary community. First, the authors analyzed the current conditions and members of a voluntary community. Based on this analysis, the authors found that such an SNS requires functions that support members&#39; personal activities as well as functions that increase social activities. Next, the authors built an SNS with these functions by applying the concept of gamification. The authors deployed the SNS for a university club&#39;s activities for one month and showed its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_12-A_Proposal_of_SNS_to_Improve_Member’s_Motivation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Simulation Analysis of Power Frequency Electric Field of UHV AC Transmission Line</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060111</link>
        <id>10.14569/IJACSA.2015.060111</id>
        <doi>10.14569/IJACSA.2015.060111</doi>
        <lastModDate>2015-01-31T17:12:18.0700000+00:00</lastModDate>
        
        <creator>Chen Han</creator>
        
        <creator>Yuchen Chen</creator>
        
        <subject>boundary element method; electric field distribution; power frequency electric field; UHV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>In order to study the power frequency electric field of UHV AC transmission lines, this paper models and calculates the field using the boundary element method, and simulates various factors influencing the distribution of the power frequency electric field, such as the conductor arrangement, the height above ground, the split spacing and the sub-conductor radius. The influence of each factor on the electric field distribution is presented. In a single loop, the VVV triangular arrangement is the most secure way; in a dual loop, the electric field intensity using reverse phase sequence is weaker than that using positive phase sequence. Elevating the height above ground and reducing the conductor split spacing both weaken the electric field intensity, while a change of sub-conductor radius hardly causes any difference. These conclusions are important for electric power companies in inspecting circuits.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_11-Modeling_and_Simulation_Analysis_of_Power_Frequency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Engagement and Active Learning in a Marketing Simulation: A Review and Exploratory Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060110</link>
        <id>10.14569/IJACSA.2015.060110</id>
        <doi>10.14569/IJACSA.2015.060110</doi>
        <lastModDate>2015-01-31T17:12:18.0530000+00:00</lastModDate>
        
        <creator>Kear Andrew</creator>
        
        <creator>Bown Gerald Robin</creator>
        
        <subject>component; Learning; Simulations; Feedback; Emotional Learning Scenarios; Emotional Anticipation; Deep Learning; Vignette Research</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>This paper considers the role of emotional engagement during the use of a simulation, placed in the context of learning about marketing. The literature highlights questions of engagement and interactivity entailed in the use of these simulations. It is observed here that both the anticipation of and the process of engagement with the simulation generate emotional responses. The evidence of emotional anticipation was collected through the use of vignettes and a short survey. The production of negative emotions before and after the activity was observed and considered. The effect of these emotions on the development of understanding is then discussed. There is general evidence for the mundane reality of such simulations supporting learning and group engagement. The connection with activity theory was explored and proposed as a potential theoretical fit with the evidence.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_10-Emotional_Engagement_and_Active_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ipv6 Change Threats Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060109</link>
        <id>10.14569/IJACSA.2015.060109</id>
        <doi>10.14569/IJACSA.2015.060109</doi>
        <lastModDate>2015-01-31T17:12:18.0370000+00:00</lastModDate>
        
        <creator>Firas Najjar</creator>
        
        <creator>Homam El-Taj</creator>
        
        <subject>Computer Attacks; IPv4; IPv6; Security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>The IPv4 address pool is already exhausted; therefore, the change to IPv6 is eventually necessary to provide a massive address pool. Although IPv6 was built with security in mind, extensive research must be done before deploying IPv6 to ensure the protection of security and privacy. This paper first presents the differences between the old and new IP versions (IPv4 and IPv6) and how these differences will affect attacks; it then shows that attacks on IPv4 and IPv6 will remain mostly the same. Furthermore, the use of IPv6 will give rise to new types of attacks and change the behavior of other types.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_9-Ipv6_Change_Threats_Behavior.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Monitoring Model for Hierarchical Architecture of Distributed Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060108</link>
        <id>10.14569/IJACSA.2015.060108</id>
        <doi>10.14569/IJACSA.2015.060108</doi>
        <lastModDate>2015-01-31T17:12:18.0070000+00:00</lastModDate>
        
        <creator>Phuc Tran Nguyen Hong</creator>
        
        <creator>Son Le Van</creator>
        
        <subject>architecture; distributed systems; model; monitored objects; monitoring</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Distributed systems are complex systems with many potential risks, so system administrators need effective support tools for network management. Architecture information is an essential part of distributed system monitoring solutions, because it provides administrators with general information about monitored objects in the system and supports them in quickly detecting changes of topology, error statuses or potential risks that arise during the operation of distributed systems. Modeling approaches play an important role in developing monitoring solutions, as they are the background for developing algorithms for monitoring problems in distributed systems. This paper proposes an approach to modeling the hierarchical architecture of objects in distributed systems, which consists of the architecture of monitored objects, networks, domains and the global distributed system. Based on this model, a basic monitoring solution for the hierarchical architecture of distributed systems is developed; this solution is able to provide administrators with important architecture information such as the topology of hardware components, processes, the status of monitored objects, etc.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_8-A_Monitoring_Model_for_Hierarchical_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Golay Code Transformations for Ensemble Clustering in Application to Medical Diagnostics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060107</link>
        <id>10.14569/IJACSA.2015.060107</id>
        <doi>10.14569/IJACSA.2015.060107</doi>
        <lastModDate>2015-01-31T17:12:17.9900000+00:00</lastModDate>
        
        <creator>Faisal Alsaby</creator>
        
        <creator>Kholood Alnowaiser</creator>
        
        <creator>Simon Berkovich</creator>
        
        <subject>medical Big Data; clustering; machine learning; pattern recognition; prediction tool; Big Data classification; Golay Code</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Clinical Big Data streams have accumulated large-scale multidimensional data about patients’ medical conditions and drugs along with their known side effects. The volume and complexity of these Big Data streams hinder current computational procedures. Effective tools are required to cluster and systematically analyze this amorphous data and to perform data mining tasks including discovering knowledge, identifying underlying relationships and predicting patterns. This paper presents a novel computational model for clustering tremendous amounts of Big Data streams. The presented approach utilizes the error-correcting Golay code. This clustering methodology is unique: it outperforms conventional techniques because it has linear time complexity and does not impose predefined cluster labels that partition the data. Extracting meaningful knowledge from these clusters is an essential task; therefore, a novel mechanism that facilitates the process of predicting patterns and likely diseases based on a semi-supervised technique is presented.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_7-Golay_Code_Transformations_for_Ensemble_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault-Tolerant Attitude Control System for a Spacecraft with Control Moment Gyros Using Multi-Objective Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060106</link>
        <id>10.14569/IJACSA.2015.060106</id>
        <doi>10.14569/IJACSA.2015.060106</doi>
        <lastModDate>2015-01-31T17:12:17.9600000+00:00</lastModDate>
        
        <creator>Ai Noumi</creator>
        
        <creator>Misuzu Haruki</creator>
        
        <creator>Takuya Kanzawa</creator>
        
        <creator>Masaki Takahashi</creator>
        
        <subject>Control Moment Gyros; Spacecraft; Attitude Control; Multi-objective Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Recent years have seen a growing requirement for accurate and agile attitude control of spacecraft. To control the attitude of a spacecraft both quickly and accurately, Control Moment Gyros (CMGs), which can generate much higher torque than conventional spacecraft actuators, are used as actuators of the spacecraft. Driving the motors hard is needed for rapid maneuverability, which negatively affects their life. Thus, in designing spacecraft, the conflicting requirements are rapid maneuverability and reduced drive on the motors. Furthermore, the attitude control system needs to be fault-tolerant. The dominant requirement is different for each spacecraft mission, and therefore the relationship between the requirements should be shown. In this study, a design method is proposed for the attitude control system, using multi-objective optimization of the skew angle and the parameters of the control system. Pareto solutions that show the relationship between the requirements are obtained by optimizing the parameters. Through numerical analysis, the effects of fault tolerance and of parameter differences on the dominant requirement are confirmed, and a method to guide the determination of the attitude control system parameters is established.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_6-Fault-Tolerant_Attitude_Control_System_for_a_Spacecraft.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Cross-Platforms for Mobile Learning Application Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060105</link>
        <id>10.14569/IJACSA.2015.060105</id>
        <doi>10.14569/IJACSA.2015.060105</doi>
        <lastModDate>2015-01-31T17:12:17.9130000+00:00</lastModDate>
        
        <creator>Nabil Litayem</creator>
        
        <creator>Bhawna Dhupia</creator>
        
        <creator>Sadia Rubab</creator>
        
        <subject>cross-platform; mobile devices;  framework;  mobile learning; hybrid mobile app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Mobile learning management systems are very important for training purposes. However, in the present scenario, learners are equipped with a number of mobile devices that run different operating systems with diverse device features. Therefore, developing a mobile application for different platforms is a cumbersome task if appropriate tools and techniques are not practiced. There are three categories of mobile application, namely native, web-based and hybrid. For a mobile learning system, hybrid is an appropriate choice. In order to avoid re-implementing the same hybrid application for each platform separately, several tools have been proposed, for example PhoneGap, Adobe AIR, Sencha Touch and Qt, each with its own strengths. With proper use of the strengths of an individual framework, or a combination of frameworks, a more compatible and more stable cross-platform mobile learning application, specifically for quizzes and assignments, can be developed.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_5-Review_of_Cross-Platforms_for_Mobile_Learning_Application_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Design of Pipelined Architecture for on-the-Fly Processing of Big Data Streams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060104</link>
        <id>10.14569/IJACSA.2015.060104</id>
        <doi>10.14569/IJACSA.2015.060104</doi>
        <lastModDate>2015-01-31T17:12:17.8830000+00:00</lastModDate>
        
        <creator>Usamah Algemili</creator>
        
        <creator>Simon Berkovich</creator>
        
        <subject>Big Data; Clustering; Computer Architecture; Parallel Processing; Pipeline Design; Variable Lengths; Symmetric Multiprocessing; Crossbar Switch; Forced Interrupt</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Conventional processing infrastructures have been challenged by the huge demand of stream-based applications. The industry responded by introducing traditional stream processing engines along with emerging technologies. The ongoing paradigm embraces parallel computing as the most suitable proposition. Pipelining and parallelism have been intensively studied in recent years, yet parallel programming on multiprocessor architectures stands as one of the biggest challenges to the software industry. Parallel computing relies on parallel programs that may encounter internal memory constraints. In addition, parallel computing requires a special programming skill set as well as software conversions. This paper presents a reconfigurable pipelined architecture. The design is especially aimed at Big Data clustering, and it adopts symmetric multiprocessing (SMP) along with a crossbar switch and forced interrupts. The main goal of this promising architecture is to efficiently process big data streams on-the-fly, while it can also run sequential programs on a parallel-pipelined model. The system overcomes the internal memory constraints of multicore architectures by applying forced interrupts and crossbar switching. It reduces the complexity, data dependency, high latency, and cost overhead of parallel computing.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_4-A_Design_of_Pipelined_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Driven Testing of Web Applications Using Domain Specific Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060103</link>
        <id>10.14569/IJACSA.2015.060103</id>
        <doi>10.14569/IJACSA.2015.060103</doi>
        <lastModDate>2015-01-31T17:12:17.8500000+00:00</lastModDate>
        
        <creator>Viet-Cuong Nguyen</creator>
        
        <subject>Domain specific language (DSL); model-driven development; model-driven testing; WTML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>As more and more systems move to the cloud, the importance of web applications has increased recently. Web applications must meet stricter requirements in order to support higher availability. Quality assurance techniques for these applications hence become essential, and the role of testing for web applications becomes more significant. Model-driven testing is a promising paradigm for the automation of software testing. In the web domain, however, the challenge remains in the creation of models and the complexity of configuring, launching, and testing a large number of valid configurations and test cases. This paper proposes a solution to this challenge with an approach using a Domain Specific Language (DSL) for model-driven testing of web applications. Our techniques are based on building abstractions of web pages using a domain specific language. We introduce WTML, a domain specific language for modeling web pages, and provide automatic code generation with a web-testing framework. This methodology and these techniques aim at helping software developers as well as testers become more productive and reduce the time-to-market, while maintaining high standards of web application quality.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_3-Model_Driven_Testing_of_Web_Applications_Using_Domain_Specific_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Issues of a Recent RFID Multi Tagging Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060102</link>
        <id>10.14569/IJACSA.2015.060102</id>
        <doi>10.14569/IJACSA.2015.060102</doi>
        <lastModDate>2015-01-31T17:12:17.8200000+00:00</lastModDate>
        
        <creator>Mehmet Hilal &#214;zcanhan</creator>
        
        <creator>Sezer Baytar</creator>
        
        <creator>Semih Utku</creator>
        
        <creator>G&#246;khan Dalkili&#231;</creator>
        
        <subject>Authentication; EPC Gen 2; ISO 18000-6; NFC; RFID; UHF tag</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>RFID is now a widespread method used for identifying people and objects. However, not all communication protocols can provide the same rigorous confidentiality to RFID technology. In turn, unsafe protocols put individuals and organizations into jeopardy. In this paper, a scheme that uses multiple low-cost tags for identifying a single object is studied. Through algebraic analysis of chronologically ordered messages, the proposed multi-tag arrangement is shown to fail to provide the claimed security. The weaknesses are discussed, and previously proven precautions are recommended to increase the security of the protocol, and thus the safety of its users.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_2-Security_Issues_of_a_Recent_RFID_Multi_Tagging_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster-Based Context-Aware Routing Protocol for Mobile Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2015.060101</link>
        <id>10.14569/IJACSA.2015.060101</id>
        <doi>10.14569/IJACSA.2015.060101</doi>
        <lastModDate>2015-01-31T17:12:17.7400000+00:00</lastModDate>
        
        <creator>Ahmed. A. A. Gad-ElRab</creator>
        
        <creator>T. A. A. Alzohairy</creator>
        
        <creator>Almohammady S. Alsharkawy</creator>
        
        <subject>Clustering; Context; FMCDM; Mobile and Routing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 6(1), 2015</description>
        <description>Mobile environments face many issues due to mobility, energy limitations, and status changes over time. Routing is an important issue with a significant impact in mobile networks, since selecting the optimal routing path reduces the waste of network resources, reduces network overhead, and increases network reliability and lifetime. To decide which path will achieve the network's objectives, we need to construct a new routing algorithm that uses context attributes of a mobile device such as available bandwidth, residual energy, number of connections, and mobility value. In this paper, we propose a new mobile node ranking scheme based on the combination of two multi-criteria decision-making approaches, the analytic hierarchy process (AHP) and the technique for order performance by similarity to ideal solution (TOPSIS), in fuzzy environments. The fuzzy AHP is used to analyze the structure of the clusterhead selection problem and to determine the weights of the criteria, while the fuzzy TOPSIS method is used to obtain the final mobile node ranking value. Based on node ranking, we propose a new cluster-based routing algorithm that selects the optimal clusterheads and the best routing path. Our simulation results show that the proposed method increases network accuracy and lifetime and reduces network overhead.</description>
        <description>http://thesai.org/Downloads/Volume6No1/Paper_1-Cluster-Based_Context-Aware_Routing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Classifiers and Statistical Analysis for EEG Signals Used in Brain Computer Interface Motor Task Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040102</link>
        <id>10.14569/IJARAI.2015.040102</id>
        <doi>10.14569/IJARAI.2015.040102</doi>
        <lastModDate>2015-01-10T18:38:36.8730000+00:00</lastModDate>
        
        <creator>Oana Diana Eva</creator>
        
        <creator>Anca Mihaela Lazar</creator>
        
        <subject>brain computer interface; electroencephalogram; event related (de)synchronization</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(1), 2015</description>
        <description>Using the EEG Motor Movement/Imagery database, an off-line analysis for a brain computer interface (BCI) paradigm is proposed. The purpose of this quantitative research is to compare classifiers in order to determine which of them achieves the highest classification rates. The power spectral density method is used to evaluate the (de)synchronizations that appear on the Mu rhythm. The features extracted from the EEG signals are classified using a linear discriminant classifier (LDA), a quadratic classifier (QDA), and a classifier based on the Mahalanobis distance (MD). The differences between LDA, QDA, and MD are small, but the superiority of QDA is supported by analysis of variance (ANOVA).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No1/Paper_2-Comparison_of_Classifiers_and_Statistical_Analysis_for_EEG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Subscription, Penetration and Coverage Trends in Kenya’s Telecommunication Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2015</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2015.040101</link>
        <id>10.14569/IJARAI.2015.040101</id>
        <doi>10.14569/IJARAI.2015.040101</doi>
        <lastModDate>2015-01-10T18:38:36.7970000+00:00</lastModDate>
        
        <creator>Omae Malack Oteri</creator>
        
        <creator>Langat Philip Kibet</creator>
        
        <creator>Ndung’u Edward N.</creator>
        
        <subject>Communication; Telecommunication; Cellular phone; Civilization; Trends</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 4(1), 2015</description>
        <description>Communication is the activity of conveying information through the exchange of thoughts, messages, or information, as by speech, visuals, signals, writing, or behavior. In Kenya, mobile subscription, penetration, and coverage have been growing since the first mobile operators started operating in 1999. The current mobile operators in Kenya are Safaricom Ltd, Airtel Networks Kenya Ltd, Essar Telecom Kenya Ltd, and Telkom Kenya Ltd (TKL-Orange).
This paper discusses the present state of the telecommunication sector in Kenya and the trends in subscription, penetration, and coverage since 1999, when the first two mobile operators (Safaricom Ltd and Celtel, now Airtel Networks Kenya Ltd) started their operations, and the later introduction of two other operators, Essar Telecom Kenya Ltd and Telkom Kenya Ltd (Orange brand), in 2008. The paper also tries to project what is likely to happen in the years to come and provides a set of recommendations based on the analysis. The study was based on an extensive literature review and secondary data sources, mainly from the Communication Authority of Kenya (CAK). The data obtained were analyzed using Matlab and Microsoft Excel to obtain the relevant graphical representations given in the results and discussion.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume4No1/Paper_1-Mobile_Subscription_Penetration_and_Coverage_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Zigbee Routing Opnet Simulation for a Wireless Sensors Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051220</link>
        <id>10.14569/IJACSA.2014.051220</id>
        <doi>10.14569/IJACSA.2014.051220</doi>
        <lastModDate>2015-01-01T10:02:36.4330000+00:00</lastModDate>
        
        <creator>ELKISSANI Kaoutar</creator>
        
        <creator>Pr. Moughit Mohammed</creator>
        
        <creator>Pr. Nasserdine Bouchaib</creator>
        
        <subject>WSN; Zigbee; routing; Opnet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Wireless sensor networks (WSNs) are nowadays considered a viable solution for medical applications. A Zigbee network model is well suited to the battery capacity, bandwidth, and computing limitations of WSNs. This paper presents an Opnet simulation of Zigbee network performance in order to compare routing results in three different topologies (star, mesh, and tree).</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_20-Zigbee_Routing_Opnet_Simulation_for_a_Wireless_Sensors_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Architecture Reconstruction Method, a Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051219</link>
        <id>10.14569/IJACSA.2014.051219</id>
        <doi>10.14569/IJACSA.2014.051219</doi>
        <lastModDate>2015-01-01T10:02:36.4000000+00:00</lastModDate>
        
        <creator>Zainab Nayyar</creator>
        
        <creator>Nazish Rafique</creator>
        
        <subject>Software architecture; reverse engineering; architecture reconstruction; architecture erosion; architecture mismatch; architecture chasm; architecture drift; forward engineering; architectural aging</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Architecture reconstruction belongs to the reverse engineering process, in which we move from the code level to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, depicting an external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture: to enhance the functionality of a system, the software may deviate from its documented specifications, and new modules may be added to the system without modifying its architecture, which creates issues when reconstructing the system. The closer the software is to its architecture, the easier it is to maintain and to keep the documentation current, so the conformance of the architecture with the product is checked by applying reverse engineering methods. Another reason for reconstructing the architecture arises in the case of legacy systems, when they need modification or when an enhanced version of the system must be developed. This paper reviews the methods and tools involved in reconstructing architecture and, by comparing them, suggests the best method for architecture reconstruction.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_19-Software_Architecture_Reconstruction_Method_a_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Networks’ Benefits, Privacy, and Identity Theft: KSA Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051218</link>
        <id>10.14569/IJACSA.2014.051218</id>
        <doi>10.14569/IJACSA.2014.051218</doi>
        <lastModDate>2015-01-01T10:02:36.3870000+00:00</lastModDate>
        
        <creator>Ahmad A. Al-Daraiseh</creator>
        
        <creator>Afnan S. Al-Joudi</creator>
        
        <creator>Hanan B. Al-Gahtani</creator>
        
        <creator>Maha S. Al-Qahtani</creator>
        
        <subject>social network; identity theft; fraud; privacy; fake identities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Privacy breaches and identity theft cases are increasing at an alarming rate, and Social Networking sites (SNs) are making it worse. Facebook (FB), Twitter, and other SNs offer attackers a wide and easily accessible platform. Privacy in the Kingdom of Saudi Arabia (KSA) is extremely important due to cultural beliefs, in addition to the other typical reasons. In this research we comprehensively cover privacy and identity theft in SNs from many aspects, such as methods of stealing, contributing factors, ways stolen information is used, and examples. A study of the local community was also conducted. In the survey, the participants were asked about privacy on SNs, SNs' privacy policies, and whether they think that the benefits of SNs outweigh their risks. A social experiment was also conducted on FB and Twitter to show how fragile the systems are and how easy it is to gain access to private profiles. The survey results are alarming: 43% of all the accounts are public, 76% of participants do not read the policies, and almost 60% believe that the benefits of SNs outweigh their risks. Consistent with this, the results of the experiment show that it is extremely easy to obtain information from private accounts on FB and Twitter.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_18-Social_Networks_Benefits_Privacy_and_Identity_Theft_KSA_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weighted Marking, Clique Structure and Node-Weighted Centrality to Predict Distribution Centre’s Location in a Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051217</link>
        <id>10.14569/IJACSA.2014.051217</id>
        <doi>10.14569/IJACSA.2014.051217</doi>
        <lastModDate>2015-01-01T10:02:36.3530000+00:00</lastModDate>
        
        <creator>Amidu A. G. Akanmu</creator>
        
        <creator>Frank Z. Wang</creator>
        
        <creator>Fred A. Yamoah</creator>
        
        <subject>Centrality measures; Graph; Network; Clique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Despite the importance attached to the weights or strengths on the edges of a graph, a graph is only complete if it has the combination of both nodes and edges. As such, this paper highlights the fact that the node weight of a graph is also a critical factor to consider in any graph or network evaluation, rather than the link weight alone, as is commonly done. In fact, the combination of the weights on both the nodes and the edges, as well as the number of ties, contributes effectively to the measure of centrality for an entire graph or network, thereby conveying more information. Two methods that take into consideration both the link weights and node weights of graphs (the weighted marking method of location prediction and the clique/node-weighted centrality measures) are considered, and the results from the case studies show that the clique/node-weighted centrality measures are 18% more accurate than the weighted marking method in predicting the distribution centre location in supply chain management.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_17-Weighted_Marking_Clique_Structure_and_Node-Weighted_Centrality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Media in Azorean Organizations: Policies, Strategies and Perceptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051216</link>
        <id>10.14569/IJACSA.2014.051216</id>
        <doi>10.14569/IJACSA.2014.051216</doi>
        <lastModDate>2015-01-01T10:02:36.3230000+00:00</lastModDate>
        
        <creator>Nuno Filipe Cordeiro</creator>
        
        <creator>Teresa Tiago</creator>
        
        <creator>Fl&#225;vio Tiago</creator>
        
        <creator>Francisco Amaral</creator>
        
        <subject>Internet; Web 2.0; Social Media; User Generated Content</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Social media have brought new opportunities, and also new challenges, for organizations. With them came a new context of action, largely influenced by changing consumer habits and behavior. The purpose of this research is to analyze the views and strategies embraced by Azorean organizations, as well as the perceptions arising from their use of social media.
For this study, a quantitative, descriptive type of research was chosen, using an online survey. A total of 232 valid surveys led to a range of perceptions about the use of social media. The study hypotheses were verified using the Kruskal-Wallis analysis.
The results demonstrate that the majority of organizations involved in the study already use social media and that almost all of them use Facebook. The main reasons are to reach a wider audience, to increase brand awareness, and to communicate with customers. The most relevant difficulty felt after joining social media is the lack of resources and availability. Marketing initiatives and content creation are the most common activities. Remarkably, more than half have neither a defined strategy nor measuring instruments to assess their presence. However, they consider that social media enhance their performance.
Social media is a widely studied topic from the consumer's point of view, but there is still little investigation from an organizational perspective. This work sought to contribute to knowledge about the use and involvement of organizations in social media, especially in a peripheral context.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_16-Social_Media_in_Azorean_Organizations_Policies_Strategies_and_Perceptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facial Expression Recognition Using 3D Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051215</link>
        <id>10.14569/IJACSA.2014.051215</id>
        <doi>10.14569/IJACSA.2014.051215</doi>
        <lastModDate>2015-01-01T10:02:36.2930000+00:00</lastModDate>
        
        <creator>Young-Hyen Byeon</creator>
        
        <creator>Keun-Chang Kwak*</creator>
        
        <subject>convolutional neural network; facial expression  recognition; deep learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>This paper is concerned with video-based facial expression recognition, frequently used in HRI (Human-Robot Interaction) to enable natural interaction between humans and robots. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network), augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis), to simultaneously recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, with dimensionality reduction applied to the redundant CNN output. The experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_15-Facial_Expression_Recognition_Using_3D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Grammatical Inference Sequential Mining Algorithm for Protein Fold Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051214</link>
        <id>10.14569/IJACSA.2014.051214</id>
        <doi>10.14569/IJACSA.2014.051214</doi>
        <lastModDate>2015-01-01T10:02:36.2600000+00:00</lastModDate>
        
        <creator>Taysir Hassan A. Soliman</creator>
        
        <creator>Ahmed Sharaf Eldin</creator>
        
        <creator>Marwa M. Ghareeb</creator>
        
        <creator>Mohammed E. Marie</creator>
        
        <subject>Data mining; grammatical inference; sequential mining;  protein fold recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Protein fold recognition plays an important role in computational protein analysis, since it can determine the function of a protein whose structure is unknown. In this paper, a Classified Sequential Pattern mining technique for Protein Fold Recognition (CSPF) is proposed. The CSPF technique consists of two main phases: the sequential pattern mining phase and the fold recognition phase. In the sequential pattern mining phase, the Mix &amp; Test algorithm, which serves as the training phase, is developed based on Grammatical Inference. The Mix &amp; Test algorithm minimizes I/O costs with a single database scan, discovers subsequence combinations directly from sequences in memory without searching the whole sequence file, requires no database projection, handles gaps, and works with variable-length sequences without having to align them. In addition, a parallelized version of the Mix &amp; Test algorithm is applied to speed up its performance. In the fold recognition phase, unknown protein folds are predicted via a proposed testing function. To test the performance, 36 SCOP protein folds are used, with an accuracy rate of 75.84% on training data and 59.7% on testing data.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_14-A_Grammatical_Inference_Sequential_Mining_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Use of Geographic Information System Tools in Research on Neonatal Outcomes in a Maternity-School in Belo Horizonte - Brazil</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051213</link>
        <id>10.14569/IJACSA.2014.051213</id>
        <doi>10.14569/IJACSA.2014.051213</doi>
        <lastModDate>2015-01-01T10:02:36.2300000+00:00</lastModDate>
        
        <creator>Juliano de S. Gaspar</creator>
        
        <creator>Thabata S&#225;</creator>
        
        <creator>Zilma S. N. Reis</creator>
        
        <creator>Renato F. N. J&#250;nior</creator>
        
        <creator>Marcelo S. J&#250;nior</creator>
        
        <creator>Rafael R. Gusm&#227;o</creator>
        
        <subject>Geographic Information Systems (GIS); Fetal Malformation; Health Indicators; Obstetrics Result; Primary Care Unit (PCU); Public Health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Aim: This study evaluates the spatial distribution of public obstetric care in the city of Belo Horizonte and correlates the primary care units (PCUs) with the immediate neonatal outcomes of a maternity-school in Belo Horizonte, according to pregnancy risk and obstetric outcome. Method: Descriptive geographic-spatial research. This study analyzed a cohort of 2956 newborns who received care at birth in the maternity-school Hospital das Clinicas (HC) of the Federal University of Minas Gerais (UFMG) between January 2013 and July 2014. The gestational risk, the location of the primary care unit (PCU) of prenatal care, and the immediate neonatal outcome were studied. The QGIS 2.4 open source software was used to generate thematic maps and analyses. Results: Among the 2083 births analyzed, 1154 (55.4%) were classified as high maternal risk and 634 (30.4%) had a poor neonatal outcome; there is also a concentration of women living in the northwest of the city who are officially referred for childbirth to the maternity-school. In cases of high-risk pregnancy and perinatal complications, referrals also occur from practically all other regions of the city. Discussion: The integration of hospital clinical and administrative data with cartographic databases in this study made clear the patterns of referral for childbirth at the maternity-school in high-risk pregnancies. Despite the limitations of a descriptive study, the analysis makes clear that the choice of place of childbirth goes beyond the matters set out in the government planning of emergency obstetric referral by sanitary districts.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_13-Use_of_Geographic_Information_System_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining the Efficient Structure of Feed-Forward Neural Network to Classify Breast Cancer Dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051212</link>
        <id>10.14569/IJACSA.2014.051212</id>
        <doi>10.14569/IJACSA.2014.051212</doi>
        <lastModDate>2015-01-01T10:02:36.2130000+00:00</lastModDate>
        
        <creator>Ahmed Khalid</creator>
        
        <creator>Noureldien A. Noureldien</creator>
        
        <subject>Hidden Layers; Number of neurons; Feed Forward Neural Network; Breast Cancer; Classification Performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Classification is one of the most frequently encountered problems in data mining. A classification problem occurs when an object needs to be assigned to one of a set of predefined classes based on a number of observed attributes of that object.
Neural networks have emerged as one of the tools that can handle the classification problem, and Feed-forward Neural Networks (FNNs) have been widely applied in many different fields as a classification tool.
Designing an efficient FNN structure with an optimum number of hidden layers and a minimum number of neurons per layer, given a specific application or dataset, is an open research problem.
In this paper, experimental work is carried out to determine an efficient FNN structure, that is, a structure with the minimum number of hidden-layer neurons, for classifying the Wisconsin Breast Cancer Dataset. We achieve this by measuring the classification performance using the Mean Square Error (MSE) while controlling the number of hidden layers and the number of neurons in each layer.
The experimental results show that the number of hidden layers has a significant effect on the classification performance, and the best average classification performance is attained when the number of layers is 5 and the number of neurons per hidden layer is small, typically 1 or 2.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_12-Determining_the_Efficient_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Upper Ontology for Benefits Management of Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051211</link>
        <id>10.14569/IJACSA.2014.051211</id>
        <doi>10.14569/IJACSA.2014.051211</doi>
        <lastModDate>2015-01-01T10:02:36.1670000+00:00</lastModDate>
        
        <creator>Richard Greenwell</creator>
        
        <creator>Xiaodong Liu</creator>
        
        <creator>Kevin Chalmers</creator>
        
        <subject>Ontology Generation; Benefits Management; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Benefits Management provides an established approach to decision making and value extraction for IT/IS investments and can be used to examine cloud computing investments. The motivation for developing an upper ontology for Benefits Management is that current Benefits Management approaches do not provide a framework for capturing and representing semantic information. There is also a need to capture the benefits of cloud computing developments in order to provide existing and future users of cloud computing with better investment information for decision making. This paper describes the development of an upper ontology to capture greater levels of knowledge from stakeholders and IS professionals in cloud computing procurement and implementation. Complex relationships are established between cloud computing enablers, enabling changes, business changes, benefits, and investment objectives.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_11-An_Upper_Ontology_for_Benefits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Language Processing and its Use in Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051210</link>
        <id>10.14569/IJACSA.2014.051210</id>
        <doi>10.14569/IJACSA.2014.051210</doi>
        <lastModDate>2015-01-01T10:02:36.1530000+00:00</lastModDate>
        
        <creator>Khaled M. Alhawiti</creator>
        
        <subject>Natural Language Processing; education; application; e-learning; scientific studies; educational system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Natural Language Processing (NLP) is an effective approach for bringing improvement to educational settings. Implementing NLP involves initiating the process of learning through natural acquisition in educational systems. It is based on effective approaches to solving various problems and issues in education. NLP provides solutions in a variety of fields associated with the social and cultural context of language learning, and it is an effective approach for teachers, students, authors, and educators, providing assistance with writing, analysis, and assessment procedures. NLP is widely integrated into a large number of educational contexts such as research, science, linguistics, e-learning, and evaluation systems, and contributes positive outcomes in other educational settings such as schools, higher education systems, and universities. This paper aims to address the process of natural language learning and its implications in educational settings. The study also highlights how NLP can be utilized with scientific computer programs to enhance the process of education. The study follows a qualitative approach. Data were collected from secondary resources in order to identify problems faced by teachers and students in understanding context due to language obstacles. The results demonstrate the effectiveness of linguistic tools such as grammar, syntax, and textual patterns, which are fairly productive in educational contexts for learning and assessment.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_10-Natural_Language_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Definition of Tactile Interactions for a Multi-Criteria Selection in a Virtual World</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051209</link>
        <id>10.14569/IJACSA.2014.051209</id>
        <doi>10.14569/IJACSA.2014.051209</doi>
        <lastModDate>2015-01-01T10:02:36.1200000+00:00</lastModDate>
        
        <creator>Robin Vivian</creator>
        
        <creator>David Bertolo</creator>
        
        <creator>J&#233;r&#244;me Dinet</creator>
        
        <subject>tactile Surface; tablets; gestures; cognitive; human-centred design; iPad</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Tablets and smartphones are becoming increasingly common, and their interfaces are predominantly tactile and often multi-touch. More and more schools are testing them with their pupils in the hope of gaining pedagogic benefits. These new devices make new interactions possible. Many studies have examined the manipulation of 3D objects with 2D input devices, but we are only at the beginning of studies linking the needs of pedagogy with the possibilities of these new types of interaction. FINGERS&#169; is an application for learning spatial geometry, written for pupils from 9 to 12 years old. Its interactions were designed together with teachers. Some are specific to 3D geometry (3-DOF translations, rotations, nets, combinations of cubes, etc.) and others are general, such as designation or multi-selection. Many gesture grammars propose a set of interactions for selecting an object or a group of objects; multi-taps or a lasso around an area are commonly adopted. Performing geometry exercises, however, requires imagining other interactions: for example, how to select all cubes, or how to select all green objects. The real question is how to introduce a parameter into selection. After presenting the limits of current solutions, this communication presents the solutions developed in FINGERS&#169; and explains how they allow a “parameterized” selection.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_9-Definition_of_Tactile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Object-Relational Mapping to Create the Distributed Databases in a Hybrid Cloud Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051208</link>
        <id>10.14569/IJACSA.2014.051208</id>
        <doi>10.14569/IJACSA.2014.051208</doi>
        <lastModDate>2015-01-01T10:02:36.1070000+00:00</lastModDate>
        
        <creator>Oleg Lukyanchikov</creator>
        
        <creator>Evgeniy Pluzhnik</creator>
        
        <creator>Simon Payain</creator>
        
        <creator>Evgeny Nikulchev</creator>
        
        <subject>cloud database; object-relational mapping; data management; cloud services; hybrid cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>One of the current challenges in the use of cloud services is the design of data management systems. This is especially important for hybrid systems in which data are located in both public and private clouds. Monitoring, querying, scheduling and processing functions must be properly implemented and form an integral part of the system. To provide these functions, we propose the use of object-relational mapping (ORM). This article presents an approach to designing databases for information systems hosted in a hybrid cloud infrastructure, and provides an example of the development of an ORM library.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_8-Using_Object-Relational_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the Idiotop Paratop Interaction in the Artificial Immune Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051207</link>
        <id>10.14569/IJACSA.2014.051207</id>
        <doi>10.14569/IJACSA.2014.051207</doi>
        <lastModDate>2015-01-01T10:02:36.0730000+00:00</lastModDate>
        
        <creator>Hossam Meshref</creator>
        
        <subject>Artificial Immune Systems; Idiotopic Networks; Paratop-Epitop; Paratop-Idiotop</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>The artificial immune system is a computational intelligence technique that has been investigated for the past decade. A review of the literature yields two observations that could affect the network learning process. First, most researchers do not focus on Paratop-Epitop and Paratop-Idiotop interactions within the network. Second, most researchers depict interactions within the network with all network components present from the beginning to the end of the learning process. In this research, efforts were devoted to addressing these observations. The findings differentiate between the interactions at a single node of a network and the total interactions in the network. A small simulation problem was used to show the effect of choosing a steady number of antibodies during network interactions. Results showed that a considerable number of interactions can be saved during network learning, leading to faster convergence. In conclusion, the designed model is ready to be used in many applications; we therefore recommend its use in applications such as controlling robots in hazardous rescue environments to save human lives.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_7-Investigating_the_Idiotop_Paratop_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Analysis of Public ICT Development Project Objectives in Hungary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051206</link>
        <id>10.14569/IJACSA.2014.051206</id>
        <doi>10.14569/IJACSA.2014.051206</doi>
        <lastModDate>2015-01-01T10:02:36.0570000+00:00</lastModDate>
        
        <creator>M&#225;rta Aranyossy</creator>
        
        <creator>Andr&#225;s Nemeslaki</creator>
        
        <creator>Adrienn Fek&#243;</creator>
        
        <subject>eGovernment; eGovernment strategy; eGovernment policy; eGovernment goals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>E-government development in most European countries was funded from the Structural Funds in the 2007-2014 period. In this paper we show how Hungary has used these funds to achieve efficiency and effectiveness in its public services. The main objective of our research has been to explore the budgetary and timing characteristics of public ICT spending, and to analyze the implicit and explicit objectives of eGovernment projects in Hungary. We applied exploratory text analysis as a novel and objective way to examine the focus of eGovernment development policy. Our main findings are:
- After text analysis of 85 Electronic Public Administration Operational Programme (EPAOP) and 65 State Reform Operational Programme (SROP) projects, we found that keyword statistics are generally consistent with the main policy-level objectives of the Operational Programmes; however, some fields are not emphasized, such as the role of participation, social partners and local government, and the need to improve user skills through public information campaigns.
- Governmental changes are clearly reflected in the goal hierarchy: contracting in EPAOP and SROP happened in two separate waves - the significant part of financing was committed under stable governments in the beginning and end phases of the planning period, with a relatively passive period during the governmental change of 2010-2011.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_6-Empirical_Analyis_of_Public_Ict.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of MCA Learning Algorithm for Incident Signals Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051205</link>
        <id>10.14569/IJACSA.2014.051205</id>
        <doi>10.14569/IJACSA.2014.051205</doi>
        <lastModDate>2015-01-01T10:02:36.0270000+00:00</lastModDate>
        
        <creator>Rashid Ahmed</creator>
        
        <creator>John N. Avaritsiotis</creator>
        
        <subject>Direction of Arrival; Neural networks; Principle Component Analysis; Minor Component Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>Many signal subspace-based approaches have been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two neural-network-based procedures for DOA estimation are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA learning algorithm to enhance its convergence, which is essential for moving the MCA algorithm towards practical applications. The learning rate parameter is also examined: it has a direct effect on the convergence of the weight vector and on the error level, and a suitable choice ensures fast convergence of the algorithm. MCA is then performed to determine the estimated DOA. Simulation results are furnished to illustrate the theoretical results achieved.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_5-A_Study_of_MCA_Learning_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>B2C E-Commerce Fact-Based Negotiation Using Big Data Analytics and Agent-Based Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051204</link>
        <id>10.14569/IJACSA.2014.051204</id>
        <doi>10.14569/IJACSA.2014.051204</doi>
        <lastModDate>2015-01-01T10:02:35.9970000+00:00</lastModDate>
        
        <creator>Hasan Al-Sakran</creator>
        
        <subject>negotiations; e-commerce; agent technology; big data; analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>The focus of this study is the application of intelligent agents to negotiation between buyer and seller in B2C e-commerce using big data analytics. The developed model conducts negotiations on behalf of prospective buyers and sellers, using analytics to improve negotiations and meet practical requirements. The objective of this study is to explore the opportunities of using big data and business analytics in negotiation, where big data analytics can create new opportunities for bidding. Using big data analytics, sellers may learn to predict buyers’ negotiation strategies and therefore adopt optimal tactics to pursue results in their best interests. An experimental design is used to collect the intelligence data employed in conducting the negotiation process. This approach improves the quality of negotiation decisions for both parties.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_4-B2C_E-Commerce_Fact-Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feasibility Study on Porting the Community Land Model onto Accelerators Using Openacc</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051203</link>
        <id>10.14569/IJACSA.2014.051203</id>
        <doi>10.14569/IJACSA.2014.051203</doi>
        <lastModDate>2015-01-01T10:02:35.9630000+00:00</lastModDate>
        
        <creator>D. Wang</creator>
        
        <creator>W. Wu</creator>
        
        <creator>F. Winkler</creator>
        
        <creator>W. Ding</creator>
        
        <creator>O. Hernandez</creator>
        
        <subject>OpenACC; Climate Modeling; Community Land Model; Functional Testing; Performance Analysis; Compiler-assisted Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), and the Arctic Terrestrial Simulator (ATS)) become more and more complicated, we face enormous challenges in porting these applications onto hybrid computing architectures. OpenACC has emerged as a very promising technology; we have therefore conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel from CLM, applied this kernel within the actual CLM dataflow procedure, and investigated the data parallelization strategy and the benefit of the data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance of the OpenACC implementation (based on the actual computation time using one GPU) is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs across 128 computing nodes. This study also provides useful information for examining the potential benefits of the “deep copy” capability and the “routine” feature of the OpenACC standard. We believe that our experience with this environmental model, CLM, can benefit many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers with hybrid computing architectures using OpenACC.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_3-A_Feasibility_Study_on_Porting_the_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Social Recommendation GIS for Tourist Spots</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051202</link>
        <id>10.14569/IJACSA.2014.051202</id>
        <doi>10.14569/IJACSA.2014.051202</doi>
        <lastModDate>2015-01-01T10:02:35.9500000+00:00</lastModDate>
        
        <creator>Tsukasa IKEDA</creator>
        
        <creator>Kayoko YAMAMOTO</creator>
        
        <subject>Social Recommendation GIS; Web-GIS; Social Media; SNS; Recommendation Systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>This study aims to develop a social recommendation GIS (Geographic Information Systems) specially tailored to recommending tourist spots. The conclusions of this study are summarized in the following three points. (1) A social media GIS, an information system which integrates Web-GIS, SNS and a recommendation system into a single system, was developed and applied in the central part of Yokohama City in Kanagawa Prefecture, Japan. The social media GIS uses a design that demonstrates its usefulness in reducing the constraints of information inspection, time and space, and continuity, making it possible to redesign the system for other target cases. (2) The social media GIS was operated for two months for members of the general public aged 18 or older. The total number of users was 98, and the number of pieces of information submitted was 232. (3) A web questionnaire administered to users showed the usefulness of integrating Web-GIS, SNS and recommendation systems, because the reference and recommendation functions can be expected to support tourists’ excursion behavior. Since an access survey of the log data showed that about 35% of accesses came from mobile information terminals, the preparation of an interface optimized for such terminals proved effective.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_2-Development_of_Social_Recommendation_GIS_for_Tourist_Spots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Policy-Based Automation of Dynamic and Multipoint Virtual Private Network Simulation on OPNET Modeler</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051201</link>
        <id>10.14569/IJACSA.2014.051201</id>
        <doi>10.14569/IJACSA.2014.051201</doi>
        <lastModDate>2015-01-01T10:02:35.8870000+00:00</lastModDate>
        
        <creator>Ayoub BAHNASSE</creator>
        
        <creator>Najib EL KAMOUN</creator>
        
        <subject>VPN; multipoint; Opnet; automation; DMVPN; cloud; policy-based; WEB-BASED</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(12), 2014</description>
        <description>The simulation of large-scale networks is a challenging task, especially if the network to simulate is a Dynamic Multipoint Virtual Private Network (DMVPN), which requires expert knowledge to properly configure its component technologies. Studying these network architectures in a real environment is almost impossible because it requires a very large amount of equipment; the task is feasible, however, in a simulation environment like OPNET Modeler, provided one masters both the tool and the different architectures of the DMVPN.
Several research studies have been conducted to automate the generation and simulation of complex networks under various simulators; to the best of our knowledge, however, no work has dealt with the Dynamic Multipoint Virtual Private Network. In this paper we present a simulation model of the Dynamic Multipoint Virtual Private Network in OPNET Modeler, and a Web-based tool for managing projects on the same network.</description>
        <description>http://thesai.org/Downloads/Volume5No12/Paper_1-Policy-Based_Automation_of_Dynamique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A two-level on-line learning algorithm of Artificial Neural Network with forward connections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031206</link>
        <id>10.14569/IJARAI.2014.031206</id>
        <doi>10.14569/IJARAI.2014.031206</doi>
        <lastModDate>2014-12-10T17:05:25.7130000+00:00</lastModDate>
        
        <creator>Stanislaw Placzek</creator>
        
        <subject>neural network; learning algorithm; hierarchical structure; decomposition; coordination</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>An Artificial Neural Network with cross-connections is one of the most popular network structures. The structure contains an input layer, at least one hidden layer and an output layer. When analysing and describing an ANN structure, the first parameter one usually considers is the number of the ANN’s layers. A hierarchical structure is the default and accepted way of describing the network. Under this assumption, the network structure can be described from a different point of view: a set of concepts and models can be used to describe the complexity of the ANN’s structure, in addition to using a two-level learning algorithm. Applying the hierarchical structure to the learning algorithm, the ANN structure is divided into sub-networks. Every sub-network is responsible for finding the optimal values of its weight coefficients, using a local target function to minimise the learning error. The second, coordination level of the learning algorithm is responsible for coordinating the local solutions and finding the minimum of the global target function. In the article, special emphasis is placed on the coordinator’s role in the learning algorithm and on its target function. In each iteration the coordinator has to send coordination parameters to the first-level sub-networks. Using the input vector X and the teaching vector, the local procedures find their weight coefficients. In the same step, the feedback information is calculated and sent to the coordinator. The process is repeated until the minimum of the local target functions is achieved. As an example, the two-level learning algorithm is used to implement an ANN in the underwriting process for classifying the category of health in a life insurance company.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_6-A_two-level_on-line_learning_algorithm_of_Artificial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Checking the Size of Circumscribed Formulae</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031205</link>
        <id>10.14569/IJARAI.2014.031205</id>
        <doi>10.14569/IJARAI.2014.031205</doi>
        <lastModDate>2014-12-10T17:05:25.6830000+00:00</lastModDate>
        
        <creator>Paolo Liberatore</creator>
        
        <subject>Circumscription; computational complexity; belief revision</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>The circumscription of a propositional formula T may not be representable in polynomial space, unless the polynomial hierarchy collapses. This depends on the specific formula T, as some can be circumscribed in little space and others cannot. The problem considered in this article is whether this happens for a given formula or not. In particular, the complexity of deciding whether CIRC(T) is equivalent to a formula of size bounded by k is studied. This theoretical question is relevant as circumscription has applications in temporal logics, diagnosis, default logic and belief revision.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_5-Checking_the_Size_of_Circumscribed_Formulae.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>What is the Right Illumination Normalization for Face Recognition?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031204</link>
        <id>10.14569/IJARAI.2014.031204</id>
        <doi>10.14569/IJARAI.2014.031204</doi>
        <lastModDate>2014-12-10T17:05:25.6670000+00:00</lastModDate>
        
        <creator>Aishat Mahmoud Dan-ali</creator>
        
        <creator>Mohamed Moustafa</creator>
        
        <subject>face recognition; preprocessing; illumination</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>In this paper, we investigate the effect of several illumination normalization techniques on a simple linear subspace face recognition model using two distance metrics on three challenging, yet interesting, databases. The research takes the form of experimentation and analysis in which five illumination normalization techniques were compared and analyzed using two different distance metrics. The performance and execution time of each technique were recorded and measured for accuracy and efficiency. The illumination normalization techniques were Gamma Intensity Correction (GIC), Discrete Cosine Transform (DCT), Histogram Remapping using the Normal distribution (HRN), Histogram Remapping using the Log-normal distribution (HRL), and Anisotropic Smoothing (AS). Results showed that an improved recognition rate is obtained when the right preprocessing method is applied to the appropriate database using the right classifier.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_4-What_is_the_Right_Illumination_Normalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incremental Granular Modeling for Predicting the Hydrodynamic Performance of Sailing Yachts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031203</link>
        <id>10.14569/IJARAI.2014.031203</id>
        <doi>10.14569/IJARAI.2014.031203</doi>
        <lastModDate>2014-12-10T17:05:25.6200000+00:00</lastModDate>
        
        <creator>Keun-Chang Kwak</creator>
        
        <subject>granular networks; particle swarm optimization; linguistic model; two-sided Gaussian contexts</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>This paper is concerned with a design method for an Incremental Granular Model (IGM) based on a Linguistic Model (LM) and Polynomial Regression (PR), applied to a data set obtained from complex yacht hydrodynamics. For this purpose, we develop a systematic approach to generating fuzzy rules automatically based on Context-based Fuzzy C-Means (CFCM) clustering. This clustering algorithm builds information granules in the form of linguistic contexts and estimates the cluster centers by preserving the homogeneity of the clustered data points associated with the input and output spaces. Furthermore, the IGM deals with the localized nonlinearities of the complex system so that the modeling discrepancy can be compensated. After designing a second-order PR as the first, global model, we refine it through a series of local fuzzy if-then rules in order to capture the remaining localized characteristics. The experimental results reveal that the presented IGM shows better performance than previous works in predicting the hydrodynamic performance of sailing yachts.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_3-Incremental_Granular_Modeling_for_Predicting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rough Approximations for Incomplete Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031202</link>
        <id>10.14569/IJARAI.2014.031202</id>
        <doi>10.14569/IJARAI.2014.031202</doi>
        <lastModDate>2014-12-10T17:05:25.6030000+00:00</lastModDate>
        
        <creator>Jun-Fang LUO</creator>
        
        <creator>Ke-Yun QIN</creator>
        
        <subject>Rough set; tolerance relation; valued tolerance relation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>Rough sets under incomplete information have been extensively studied. Based on the valued tolerance relation for incomplete information systems, several approaches have been presented for dealing with attribute reduction and rule extraction. We point out some drawbacks in the existing work on valued-tolerance-relation-based rough approximations and propose a new kind of rough approximation operator, which generalizes the Pawlak approximation operators for complete information systems. Some basic properties of the approximation operators are investigated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_2-Rough_Approximations_for.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>From the Perspective of Artificial Intelligence: A New Approach to the Nature of Consciousness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031201</link>
        <id>10.14569/IJARAI.2014.031201</id>
        <doi>10.14569/IJARAI.2014.031201</doi>
        <lastModDate>2014-12-10T17:05:25.5430000+00:00</lastModDate>
        
        <creator>Riccardo Manzotti</creator>
        
        <creator>Sabina Jeschke</creator>
        
        <subject>consciousness; machine consciousness; multi agent system; genetic algorithms; externalism</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(12), 2014</description>
        <description>Consciousness is not only a philosophical but also a technological issue, since a conscious agent has evolutionary advantages. Thus, to replicate a biological level of intelligence in a machine, concepts of machine consciousness have to be considered. The widespread internalistic assumption that humans do not experience the world as it is, but through an internal ‘3D virtual reality model’, hinders this construction.
To overcome this obstacle for machine consciousness a new theoretical approach to consciousness is sketched between internalism and externalism to address the gap between experience and physical world. The ‘internal interpreter concept’ is replaced by a ‘key-lock approach’. Here, consciousness is not an image of the external world but the world itself.
A possible technological design for a conscious machine is drafted, taking advantage of an architecture that exploits self-development of new goals, intrinsic motivation, and situated cognition. The proposed cognitive architecture does not pretend to be conclusive or experimentally satisfying, but rather forms the theoretical first step toward a full architecture model on which the authors are currently working, which will enable conscious agents, e.g., for robotics or software applications.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No12/Paper_1-From_the_Perspective_of_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051126</link>
        <id>10.14569/IJACSA.2014.051126</id>
        <doi>10.14569/IJACSA.2014.051126</doi>
        <lastModDate>2014-12-01T19:16:42.9900000+00:00</lastModDate>
        
        <creator>Hayden Wimmer</creator>
        
        <creator>Loreen Powell</creator>
        
        <subject>Privacy Preserving, Data Mining, Machine Learning, Decision Tree, Neural Network, Logistic Regression, Bayesian Classifier</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>While research has been conducted on machine learning algorithms and on privacy preserving data mining (PPDM), a gap exists in the literature combining the aforementioned areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Bayesian classifiers. This applied research reveals practical implications of applying PPDM to data mining and machine learning and serves as a critical first step in learning how to apply PPDM to machine learning algorithms and in understanding the effects of PPDM on machine learning. Results indicate that certain machine learning algorithms are more suited for use with PPDM techniques.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_26-A_Comparison_of_the_Effects_of_K-Anonymity_on_Machine_Learning_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Timed-Release Hierarchical Identity-Based Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051125</link>
        <id>10.14569/IJACSA.2014.051125</id>
        <doi>10.14569/IJACSA.2014.051125</doi>
        <lastModDate>2014-12-01T12:49:41.1300000+00:00</lastModDate>
        
        <creator>Toru Oshikiri</creator>
        
        <creator>Taiichi Saito</creator>
        
        <subject>timed-release encryption, hierarchical identity-based encryption, one-time signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>We propose a notion of hierarchical identity-based encryption (HIBE) scheme with timed-release encryption (TRE) mechanism, timed-release hierarchical identity-based encryption (TRHIBE), and define its security models. We also show a generic construction of TRHIBE from HIBE and one-time signature, and discuss the security of the constructed scheme.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_25-Timed-Release_Hierarchical_Identity-Based_Encryption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Bioinformatics Workflow Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051124</link>
        <id>10.14569/IJACSA.2014.051124</id>
        <doi>10.14569/IJACSA.2014.051124</doi>
        <lastModDate>2014-12-01T12:49:41.1000000+00:00</lastModDate>
        
        <creator>Walaa Nagy</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <subject>Web services; In-silico Workflows; Quality of Services (QoS); Web services for bioinformatics; Bioinformatics services</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Workflow systems are a natural fit for the explorative research of bioinformaticians. These systems can help bioinformaticians design and run their experiments and automatically capture and store the data generated at runtime. On the other hand, Web services are increasingly used as the preferred method for accessing and processing the information coming from the diverse life science sources. In this work we provide an efficient approach for creating bioinformatics workflows for all-service architecture systems (i.e., systems in which all components are services). This architecture style simplifies the user's interaction with workflow systems and facilitates both the change of individual components and the addition of new components to adapt to other workflow tasks if required. We finally present a case study for the bioinformatics domain to elaborate the applicability of our proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_24-A_Novel_Approach_for_Bioinformatics_Workflow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Multi-Tenant Database Schema for Multi-Level Quality of Service</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051123</link>
        <id>10.14569/IJACSA.2014.051123</id>
        <doi>10.14569/IJACSA.2014.051123</doi>
        <lastModDate>2014-12-01T12:49:41.0830000+00:00</lastModDate>
        
        <creator>Ahmed I. Saleh</creator>
        
        <creator>Mohammed A. Fouad</creator>
        
        <creator>Mervat Abu-Elkheir</creator>
        
        <subject>Multi-Tenancy; Flexible Database Schema Design; Data Customization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Software as a Service (SaaS) providers can serve hundreds of thousands of customers using sharable resources to reduce costs. Multi-tenancy architecture allows SaaS providers to run a single application and database instance, which supports multiple tenants with various business needs and priorities. Until now, database management systems (DBMSs) have not had the notion of multi-tenancy, and they have not been equipped to handle the customization or scalability requirements that are typical in multi-tenant applications. Multi-tenant database performance should adapt to tenants' workloads and fit their special requirements. In this paper, we propose a new multi-tenant database schema design approach that adapts to multi-tenant application requirements, in addition to tenants' needs for data security, isolation, query performance and response time. Our proposed methodology provides a trade-off between performance and storage space. This proposal caters for the diversity among tenants by defining multi-level quality of service for the different types of tenants, depending on tenant rate and system rate. The proposal presents a new technique to distribute data in a multi-tenant database horizontally to a set of allotment tables using an isolation point, and vertically to a set of extension tables using a merger point. Finally, we present a prototype implementation of our method using a real-world case study, showing that the proposed solution can achieve high scalability and increase performance for tenants who need speedy performance, while economizing storage space for tenants who do not have demanding quality-of-service requirements.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_23-A_Hybrid_Multi-Tenant_Database_Schema_for_Multi-Level_Quality_of_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Increased Capacity Image Steganographic Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051122</link>
        <id>10.14569/IJACSA.2014.051122</id>
        <doi>10.14569/IJACSA.2014.051122</doi>
        <lastModDate>2014-12-01T12:49:41.0530000+00:00</lastModDate>
        
        <creator>M. Khurrum Rahim Rashid</creator>
        
        <creator>Nadeem Salamat</creator>
        
        <creator>Saad Missen</creator>
        
        <creator>Aqsa Rashid</creator>
        
        <subject>Image steganography, LSB, Security Analysis, Robustness Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>With the rising tempo of unauthorized access and attacks, protection of secret information is of extreme value. Steganography is a vital matter in information hiding. Steganography refers to the technology of hiding data in digital media without arousing any suspicion. Many techniques have been proposed in past years. In this paper, a new steganography approach for hiding data in digital images is presented, with the special feature that it increases the capacity of hiding data in digital images while changing the image's perceptual appearance and statistical properties so little that detection becomes very difficult.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_22-Robust_Increased_Capacity_Image_Steganographic_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A System Supporting Qualitative Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051121</link>
        <id>10.14569/IJACSA.2014.051121</id>
        <doi>10.14569/IJACSA.2014.051121</doi>
        <lastModDate>2014-12-01T12:49:41.0200000+00:00</lastModDate>
        
        <creator>Emilia Todorova</creator>
        
        <creator>Dimo Milev</creator>
        
        <creator>Ivaylo Donchev</creator>
        
        <subject>Information systems (IS); Qualitative data analysis (QDA); NoSQL Database; Software Design; Computer Assisted Qualitative Data Analysis Software (CAQDAS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>This paper presents a system aimed to be located entirely on the Internet. The database, server-side logic and user interface are accessible regardless of the location and equipment of the working team members. The system supports qualitative analysis of large datasets, and the work can be distributed among several team members or teams. A modified mixed method for developing software projects is used, according to the peculiarities of our case, splitting the phase of code writing into two sub-phases, each with a different number of iterations.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_21-A_System_Supporting_Qualitative_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Security of Audit Trail Logs in Multi-Tenant Cloud Using ABE Schemes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051120</link>
        <id>10.14569/IJACSA.2014.051120</id>
        <doi>10.14569/IJACSA.2014.051120</doi>
        <lastModDate>2014-12-01T12:49:40.9730000+00:00</lastModDate>
        
        <creator>Bhanu Prakash Gopularam</creator>
        
        <creator>Nalini N</creator>
        
        <subject>multi-tenancy; audit-trail log; attribute-based encryption; reverse proxy security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Cloud computing is the delivery of services rather than a product, and among the different cloud deployment models, the public cloud provides improved scalability and cost reduction compared to the others. Security and privacy of data is one of the key factors in transitioning to the cloud. Typically, cloud providers have a demilitarized zone protecting the data center, along with a reverse proxy setup. The reverse proxy gateway acts as the initial access point and provides additional capabilities like load balancing, caching, and security monitoring, capturing events and syslogs related to hosts residing in the cloud. The audit-trail logs captured by the reverse proxy server comprise important information related to all the tenants. While PKI infrastructure works in the cloud scenario, it becomes cumbersome from a manageability point of view and lacks flexibility in providing controlled access to data. In this paper we evaluate the risks associated with the security and privacy of audit logs produced by a reverse proxy server. We provide a two-phase approach for sharing the audit logs with users, allowing fine-grained access. We also evaluate certain Identity-Based and Attribute-Based Encryption schemes and provide a detailed analysis of their performance.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_20-Improved_Security_of_Audit_Trail_logs_in_Multi-Tenant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fundamental Study to New Evaluation Method Based on Physical and Psychological Load in Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051119</link>
        <id>10.14569/IJACSA.2014.051119</id>
        <doi>10.14569/IJACSA.2014.051119</doi>
        <lastModDate>2014-12-01T12:49:40.9600000+00:00</lastModDate>
        
        <creator>Hiroaki Inoue</creator>
        
        <creator>Shunji Shimizu</creator>
        
        <creator>Hirotaka Ishihara</creator>
        
        <creator>Yuuki Nakata</creator>
        
        <creator>Takeshi Tsuruga</creator>
        
        <creator>Fumikazu Miwakeichi</creator>
        
        <creator>Nobuhide Hirai</creator>
        
        <creator>Senichiro Kikuchi</creator>
        
        <creator>Satoshi Kato</creator>
        
        <creator>Eiju Watanabe</creator>
        
        <subject>component; Evaluation; Movement; Exercise; NIRS; Care; Welfare Technology; Useful welfare device evaluation; Evaluation method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Japan and other developed countries have become aged societies, and a wide variety of welfare devices and systems have been developed. However, the evaluation methods for these welfare devices and systems are limited to stability, intensity and partial operability, so no clear standard exists for evaluating their usefulness. Therefore, we attempt to establish a standard for evaluating usefulness objectively and quantitatively, including non-verbal cognition. We examine the relationship between human movements and brain activity, and consider an evaluation method for welfare devices and systems that measures the load and fatigue felt by humans. In this paper, we measure the load of sitting and standing movements using NIRS (Near-Infrared Spectroscopy). We tried to confirm the possibility of quantitatively estimating physical or psychological load or fatigue by measuring brain activity using NIRS. As a result, when subjects performed the movement task, a statistically significant difference was shown in a specific part of the brain region.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_19-Fundamental_Study_to_New_Evaluation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering of Slow Learners Behavior for Discovery of Optimal Patterns of Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051118</link>
        <id>10.14569/IJACSA.2014.051118</id>
        <doi>10.14569/IJACSA.2014.051118</doi>
        <lastModDate>2014-12-01T12:49:40.9430000+00:00</lastModDate>
        
        <creator>Thakaa Z. Mohammad</creator>
        
        <creator>Abeer M.Mahmoud</creator>
        
        <subject>Data mining; E-learning; Slow Learners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>With the increased rates of slow learners (SL) enrolled in schools nowadays, schools have realized that the traditional academic curriculum is inadequate. Some schools have developed special curricula that are particularly suited to slow learners, while others are focusing their efforts on devising better and more effective teaching methods and techniques. On the other hand, knowledge discovery and data mining techniques can certainly help in understanding more about these students and their educational behaviors. This paper discusses the clustering of elementary school slow learner students' behavior for the discovery of optimal learning patterns that enhance their learning capabilities. The development stages of an integrated E-Learning and mining system are briefed. The results show that after applying the Expectation Maximization and K-Means clustering algorithms to the slow learners' data, a reduced set of five optimal patterns (RSWG, RWSG, RWGS, GRSW, and SGWR) is reached. Students who followed these five patterns achieved grades higher than 75%. Therefore, the proposed system is significant for slow learners, teachers and schools.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_18-Clustering_of_Slow_Learners_Behavior_for_Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Acute Lymphoblastic Leukemia Using C-Y Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051117</link>
        <id>10.14569/IJACSA.2014.051117</id>
        <doi>10.14569/IJACSA.2014.051117</doi>
        <lastModDate>2014-12-01T12:49:40.9270000+00:00</lastModDate>
        
        <creator>Reham Mohammed</creator>
        
        <creator>Omima Nomir</creator>
        
        <creator>Iraky Khalifa</creator>
        
        <subject>Image Segmentation; acute lymphoblastic leukemia; RGB; C-Y color space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>The medical image analysis process usually starts with a segmentation step, which aims to separate the different objects in the image scene. This is achieved mainly by dividing the image into two parts: the region of interest (ROI) and the background. Segmentation of acute lymphoblastic leukemia (ALL) blood cells in microscope color images is one of the important steps in the recognition process. This paper proposes a technique that aims to segment color images of acute leukemia by transforming the RGB color space to the C-Y color space. In the C-Y color space, the luminance component is used to segment ALL. The proposed algorithm was run on 100 microscopic ALL images, and the experimental results show that the proposed system can provide a good segmentation of ALL from its complicated background, with a segmentation accuracy of 98.38% compared to the result of manual segmentation by an expert.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_17-Segmentation_of_Acute_lymphoblastic_leukemia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051116</link>
        <id>10.14569/IJACSA.2014.051116</id>
        <doi>10.14569/IJACSA.2014.051116</doi>
        <lastModDate>2014-12-01T12:49:40.8800000+00:00</lastModDate>
        
        <creator>Ezekiel U. Okike</creator>
        
        <subject>computer programs; program quality; class cohesion; programmers; personality traits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>This study presents a code-level measurement of computer programs developed by programmers, using the Chidamber and Kemerer Java Metrics (CKJM) tool and the Myers-Briggs Type Indicator (MBTI) instrument. Identifying potential computer programmers using personality trait factors alone does not seem to be the best approach without a code-level measurement of program quality. Hence the need for a metric approach that measures both the personality traits of programmers and the code-level quality of the programs they develop, which is the focus of this study. In this experiment, a set of Java-based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The code developed by these students was analyzed for quality using the CKJM tool. Cohesion, coupling and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful in measuring well-designed code. By examining the cohesion values of classes, high cohesion in the range [0,1] and low coupling imply well-designed code. Also, the number of public methods (NPM) in a well-designed class is always less than 5 when cohesion is in the range [0,1]. Results from this study show that 19 of the 33 programmers developed good and cohesive programs while 14 did not. Further analysis revealed the personality traits of the programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ).</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_16-A_Code_Level_Based_Programmer_Assessment_and_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Protection Control and Learning Conducted Via Electronic Media I.E. Internet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051115</link>
        <id>10.14569/IJACSA.2014.051115</id>
        <doi>10.14569/IJACSA.2014.051115</doi>
        <lastModDate>2014-12-01T12:49:40.8670000+00:00</lastModDate>
        
        <creator>Mohamed F. AlAjmi</creator>
        
        <creator>Shakir Khan</creator>
        
        <creator>Abdulkadir Alaydarous</creator>
        
        <subject>e-Learning; e-taking; ICT; data security component</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Numerous e-learning institutions are rushing to adopt ICT without carefully planning for, or understanding, the associated security concerns. E-learning is a new method of learning that ultimately relies on the Internet for its execution. The Internet has become the venue for a new set of unlawful activities, and the e-learning environment is likewise exposed to such threats. In this paper, the e-learning context, definition, aspects, improvement, development, benefits and challenges are all elaborated upon. The paper examines the security components that need to be implemented within e-learning environments. Moreover, it clarifies the circumstances of, and existing research relating to, security in e-learning. Finally, information security management is discussed to help establish a secure e-learning environment.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_15-Data_Protection_Control_and_Learning_Conducted_Via_Electronic_Media.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Towards a Transformation Model of a Pedagogy Collaborative Project (PCP) Scenario</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051114</link>
        <id>10.14569/IJACSA.2014.051114</id>
        <doi>10.14569/IJACSA.2014.051114</doi>
        <lastModDate>2014-12-01T12:49:40.8500000+00:00</lastModDate>
        
        <creator>Maroua AULAD BEN TAHAR</creator>
        
        <creator>Az-eddine NASSEH</creator>
        
        <creator>Mohamed KHALDI</creator>
        
        <subject>Pedagogy Collaborative Project (PCP); MDA; Transformation rules; Atlas Transformation Language (ATL); Moodle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2014.051114.retraction</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_14-Towards_a_transformation_model_of_a_Pedagogy_Collaborative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing in Wireless Sensor Networks based on Generalized Data Stack Programming Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051113</link>
        <id>10.14569/IJACSA.2014.051113</id>
        <doi>10.14569/IJACSA.2014.051113</doi>
        <lastModDate>2014-12-01T12:49:40.8200000+00:00</lastModDate>
        
        <creator>Hala Elhadidy</creator>
        
        <creator>Rawya Rizk</creator>
        
        <creator>Hassan T. Dorrah</creator>
        
        <subject>GDSP; history matrix; multi-stacking network; network topology; routing table; WSN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>The Generalized Data Stack Programming (GDSP) model describes how any affected activity or varying environment is intelligently self-recorded inside the system in the form of stack-based layering types, where a stack is redefined to be one of six classes. The multi-stacking network is presented on the premise that any system connects to other systems to work properly. This offers a novel way to investigate and analyze any system. Wireless Sensor Networks (WSNs) monitor the environment and take action accordingly. However, WSNs suffer from weaknesses due to node failure or interference, which affect the network topology and the routing table at each node. In this paper, the GDSP model is applied to routing problems in WSNs. A history matrix at the user side is proposed to retrieve and trace back the events that affected the network.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_13-Routing_in_Wireless_Sensor_Networks_based_on_Generalized_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Arabic to English Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051112</link>
        <id>10.14569/IJACSA.2014.051112</id>
        <doi>10.14569/IJACSA.2014.051112</doi>
        <lastModDate>2014-12-01T12:49:40.8030000+00:00</lastModDate>
        
        <creator>Laith S. Hadla</creator>
        
        <creator>Taghreed M. Hailat</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <subject>component; Machine Translation; Arabic-English Corpus; Google Translator; Babylon Translator; BLEU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Online text machine translation systems are widely and freely used throughout the world. Most of these systems use statistical machine translation (SMT), which is based on a corpus full of translation examples from which the system learns how to translate correctly. Online text machine translation systems differ widely in their effectiveness, and therefore we have to evaluate their effectiveness fairly. Generally, manual (human) evaluation of machine translation (MT) systems is better than automatic evaluation, but it is not always feasible. Many MT evaluation approaches use the distance or similarity of MT candidate output to a set of reference translations. This study presents a comparison of the effectiveness of two free online machine translation systems (Google Translate and the Babylon machine translation system) in translating Arabic to English. Among the many automatic methods used to evaluate machine translators is the Bilingual Evaluation Understudy (BLEU) method, which is used here to evaluate the translation quality of the two systems under consideration. A corpus consisting of more than 1000 Arabic sentences, with two reference English translations for each Arabic sentence, is used in this study. This corpus of Arabic sentences and their English translations contains 4169 Arabic words, of which 2539 are unique. The corpus is released online for use by researchers. The Arabic sentences are distributed among four basic sentence functions (declarative, interrogative, exclamatory, and imperative). The experimental results show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from Arabic to English.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_12-Evaluating_Arabic_to_English_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Experience-Based Safety Training System Using VR Technology for Chemical Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051111</link>
        <id>10.14569/IJACSA.2014.051111</id>
        <doi>10.14569/IJACSA.2014.051111</doi>
        <lastModDate>2014-12-01T12:49:40.7730000+00:00</lastModDate>
        
        <creator>Atsuko Nakai</creator>
        
        <creator>Yuta Kaihata</creator>
        
        <creator>Kazuhiko Suzuki</creator>
        
        <subject>safety education; training system; virtual reality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>In chemical plants, safety measures are needed to minimize the impact of severe accidents and natural disasters. At the same time, it is essential to educate and train workers in the correct operations for non-stationary situations. However, non-stationary conditions cannot be reproduced on actual equipment or mock-ups because doing so is dangerous. Using virtual reality (VR) technology, a virtual chemical plant can be built at lower cost than a real plant, and operators can experience fire and explosion accidents in the virtual space. In this paper, we therefore propose an experience-based safety training system that implements education and training for non-stationary situations on a computer. The proposed system is linked to a dynamic plant simulator, so a trainee can learn the correct operations for preventing an accident through simulated experience. Such experiential learning improves the safety awareness of workers, making the proposed system useful for safety education in chemical plants.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_11-The_Experience-based_Safety_Training_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data and Knowledge Extraction Based on Structure Analysis of Homogeneous Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051110</link>
        <id>10.14569/IJACSA.2014.051110</id>
        <doi>10.14569/IJACSA.2014.051110</doi>
        <lastModDate>2014-12-01T12:49:40.7570000+00:00</lastModDate>
        
        <creator>Mohammed Abdullah Hassan Al-Hagery</creator>
        
        <subject>Hyperlinks Analysis Tools; Features Extraction; Oriented Data Sets generation; Knowledge Discovery in Oriented Data Sets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>The World Wide Web includes several types of website applications, mainly related to business, organizations, companies, and other domains. Raw data sets for studying the behavior of the internal structure of each type of website are scarce, even though website structures contain a wealth of links and sub-links, along with embedded features associated with each website's internal structure. The objective of this paper is to analyze a set of homogeneous websites in order to establish raw data sets. These data sets can be employed for several research purposes and can also be used to extract otherwise invisible aspects and features of the structure. Several steps are required to accomplish this objective: first, to propose an algorithm for structure analysis; second, to implement the proposed algorithm as a software tool for extracting and establishing raw (real) data sets; and third, to extrapolate a set of rules or relations from these data sets. The resulting data sets can be employed for research in the field of web structure mining and for estimating important factors related to website development processes and website ranking. The results comprise the creation of Oriented Data Sets (ODS) for research purposes and the deduction of a set of features representing newly discovered knowledge in these ODS.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_10-Data_and_Knowledge_Extraction_Based_on_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Three-Dimensional Motion Analysis of Horse Rider in Wireless Sensor Network Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051109</link>
        <id>10.14569/IJACSA.2014.051109</id>
        <doi>10.14569/IJACSA.2014.051109</doi>
        <lastModDate>2014-12-01T12:49:40.7270000+00:00</lastModDate>
        
        <creator>Jae-Neung Lee</creator>
        
        <creator>Keun-Chang Kwak</creator>
        
        <subject>3D motion capture and analysis; inertial sensor; wireless network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>This paper constructs a database of a national-representative-level professional horse rider wearing a motion-capture suit fitted with 16 inertial sensors in an inertial-sensor-based wireless network environment. Motion features (elbow angle, knee angle, knee-elbow distance, backbone angle, and hip position) were classified by horse type (two horses, a Warm-blood and a Thoroughbred) and footpace type (trot and canter), and were computed using Euclidean distance, the second cosine, and maximum and minimum values. A visual comparative analysis of these motion features was then performed with graphical and statistical methods using MVN Studio software. The experimental results confirm the validity of the proposed method for building a motion-feature database of a horse rider in a wireless sensor network environment and for constructing an analysis system.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_9-A_Three-Dimensional_Motion_Anlaysis_of_Horse_Rider.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vector Autoregression (VAR) Model for Rainfall Forecast and Isohyet Mapping in Semarang – Central Java – Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051108</link>
        <id>10.14569/IJACSA.2014.051108</id>
        <doi>10.14569/IJACSA.2014.051108</doi>
        <lastModDate>2014-12-01T12:49:40.6930000+00:00</lastModDate>
        
        <creator>Adi Nugroho</creator>
        
        <creator>Subanar</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Khabib Mustofa</creator>
        
        <subject>Rainfall Forecast; VAR; Multivariate Time Series; Isohyet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Agricultural and plantation activities in Indonesia, especially in Semarang, Central Java, rely on water supplied by rainfall. Future rainfall is influenced by past rainfall patterns, humidity, and temperature. In this study, a multivariate Vector Autoregression (VAR) model is applied to forecast future rainfall, whereas the Indonesian Agency for Meteorology, Climatology and Geophysics (BMKG) generally uses the ARIMA (Autoregressive Integrated Moving Average) model for the same task. The study uses monthly rainfall, humidity, and temperature data from five measurement stations for the period 2001-2013. The VAR rainfall forecasts are plotted as an isohyet contour map to examine the correlation between rainfall and the coordinates of the rainfall area. The results show that the VAR method is sufficiently accurate for rainfall forecasting in the study area and outperforms the ARIMA method, yielding smaller Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE).</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_8-Vector_Autoregression_Var_Model_for_Rainfall.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Privatized Synthetic Data Generation Using Discrete Cosine Transforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051107</link>
        <id>10.14569/IJACSA.2014.051107</id>
        <doi>10.14569/IJACSA.2014.051107</doi>
        <lastModDate>2014-12-01T12:49:40.6470000+00:00</lastModDate>
        
        <creator>Kato Mivule</creator>
        
        <subject>Privatized synthetic data; Signal processing; Data privacy; Discrete cosine transforms; Moving average filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>In order to comply with data confidentiality requirements while meeting the usability needs of researchers, entities face the challenge of publishing privatized data sets that preserve the statistical traits of the original data. One solution to this problem is the generation of privatized synthetic data sets. However, during the data privatization process, the usefulness of the data tends to diminish even as privacy is guaranteed. Furthermore, researchers have documented that finding an equilibrium between privacy and utility is intractable, often requiring trade-offs. As a contribution, this paper presents the Filtered Classification Error Gauge heuristic, a data privacy and usability model that employs data privacy, signal processing, and machine learning techniques to generate privatized synthetic data sets with acceptable levels of usability. Preliminary results from this study show that it might be possible to generate privacy-compliant synthetic data sets using a combination of data privacy, signal processing, and machine learning techniques, while preserving acceptable levels of data usability.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_7-A_Study_of_Privatized_Synthetic_Data_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Identification of Common Subsequences from Big Data Streams Using Sliding Window Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051106</link>
        <id>10.14569/IJACSA.2014.051106</id>
        <doi>10.14569/IJACSA.2014.051106</doi>
        <lastModDate>2014-12-01T12:49:40.6000000+00:00</lastModDate>
        
        <creator>Adi Alhudhaif</creator>
        
        <subject>Frequent subsequence; Stream processing; Periodic pattern; Pattern recognition; Big data processing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>We propose an efficient Frequent Sequence Stream algorithm for identifying the top-k most frequent subsequences over big data streams. The algorithm gains its efficiency from its linear time complexity and very limited space complexity. Given a pre-specified subsequence window size S and a value of k, the algorithm retrieves, with very high probability, the top-k most frequent subsequences of size S. It also estimates the number of occurrences of each promoted subsequence with high accuracy. Our experiments indicate several factors that influence the accuracy of the results: the stream size, the subsequence size S, and the frequency of the subsequence.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_6-Efficient_identification_of_common_subsequences_from_big_data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feedback Optimal Control for Inverted Pendulum Problem by Using the Generating Function Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051105</link>
        <id>10.14569/IJACSA.2014.051105</id>
        <doi>10.14569/IJACSA.2014.051105</doi>
        <lastModDate>2014-12-01T12:49:40.5700000+00:00</lastModDate>
        
        <creator>Hany R. Dwidar</creator>
        
        <subject>Inverted pendulum; Feedback control; Stability analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>In this paper, a model is described for a system consisting of an inverted pendulum attached to a cart. For this model we design a feedback optimal control based on the Linear Quadratic Regulator (LQR) using the generating function technique. This design, with hard and soft constraints, helps the pendulum stabilize in the upright position. A solution of the continuous low-thrust optimal control problem based on the LQR method is implemented, and an example of this control design is given for a hard-constraint boundary condition.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_5-Feedback_optimal_control_for_Inverted_Pendulum_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Broadband Access for All: Strategies and Tactics of Wireless Traffic Sharing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051104</link>
        <id>10.14569/IJACSA.2014.051104</id>
        <doi>10.14569/IJACSA.2014.051104</doi>
        <lastModDate>2014-12-01T12:49:40.5530000+00:00</lastModDate>
        
        <creator>Jinadu Olayinka T.</creator>
        
        <creator>Owa Victor K.</creator>
        
        <subject>ad-hoc; spectrum commons; etiquettes; software defined radio (SDR); MIMO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Network engineers have designed an array of protocols that enable shared access in various wired and wireless contexts at different layers of the protocol stack [1]. One approach to managing unlicensed spectrum is to rely on a technical protocol to allocate and manage shared access (Lehr [2]). This paper addresses the benefits of carrying unlicensed wireless traffic within licensed traffic (anticipated as cognitive router-based networking). For effective management of the 'spectrum commons', we suggest a focus on shared, non-exclusive use of the spectrum, with a holistic view of its technical and institutional features. Using the adapted cognitive radio architectural model and its associated multi-hop ad-hoc networking strategies to implement the spectrum commons, mobility is enhanced with each node acting as a router and packet forwarder. We formulate management frameworks that can integrate well with liquid protocols for mobile nodes. These frameworks also incorporate new strategies for intelligently adapting nodes to participate dynamically in setting bandwidth capacity stochastically. The projected use of a dynamic bandwidth-shaping algorithm for the cognitive radio-based network (CRN), once implemented, will make broadband access more economical for users and the spectrum more effectively used.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_4-Broadband_Access_for_All_Strategies_and_Tactis_of_Wireless_Traffic_Sharing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Smartphone Intervention for Cycle Commuting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051103</link>
        <id>10.14569/IJACSA.2014.051103</id>
        <doi>10.14569/IJACSA.2014.051103</doi>
        <lastModDate>2014-12-01T12:49:40.4900000+00:00</lastModDate>
        
        <creator>Yun-Maw Cheng</creator>
        
        <creator>Chao-Lung Lee</creator>
        
        <subject>copresence; behavioral nudge; persuasive computing; smartphone app</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>For those who are new to cycling to and from work, how do you stay motivated to keep it up? Previous research has identified that having a partner is crucial to creating and maintaining a new habit. However, rapidly changing work dynamics pose a challenge to this basis: repeatedly failing to show up at the appointed or expected time can break down motivation. In this paper, we introduce BikeTogether, a smartphone app that encourages and supports its users in cycling home with each other over the Internet. The app employs the metaphor of a bicycle flashlight to represent closeness, leading, and following between the two sides. Cycling performance is also recorded so that users can track how they are doing over time. Ten participants were instructed and randomly paired to take a two-phase test ride on different routes. The results indicate that the app can help create the sense of riding with someone else and can promote not only accompanied but also competitive rides. In addition, the measured desirability of the app implies a higher chance that it will lead to behavior change. This provides a new way to remain motivated.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_3-A_Smartphone_Intervention_for_Cycle_Commuting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Basic Study for New Assistive Technology Based on Brain Activity during Car Driving</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051102</link>
        <id>10.14569/IJACSA.2014.051102</id>
        <doi>10.14569/IJACSA.2014.051102</doi>
        <lastModDate>2014-12-01T12:49:40.4600000+00:00</lastModDate>
        
        <creator>Hiroaki Inoue</creator>
        
        <creator>Fumikazu Miwakeichi</creator>
        
        <creator>Shunji Shimizu</creator>
        
        <creator>Noboru Takahashi</creator>
        
        <creator>Yasuhito Yoshizawa</creator>
        
        <creator>Nobuhide Hirai</creator>
        
        <creator>Hiroyuki Nara</creator>
        
        <creator>Senichiro Kikuchi</creator>
        
        <creator>Satoshi Kato</creator>
        
        <creator>Eiju Watanabe</creator>
        
        <subject>brain information processing during driving task; spatial cognitive task; determining direction; NIRS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Recently, with the aging of society, it has become necessary to develop new systems that assist in driving cars and wheelchairs. The ultimate purpose of this research is to contribute to the development of assistive robots and related apparatus. In developing such a system, we considered it important to examine behaviors as well as spatial recognition. Therefore, experiments were performed to examine human spatial perception, especially right-left recognition, during car driving using NIRS. Previous research documented significant differences at the dorsolateral prefrontal cortex in the left hemisphere during virtual driving tasks and actual driving. In this paper, we measured brain activity during car driving using NIRS and performed a statistical analysis of that activity. The purpose of this paper is to discover the brain regions involved in decision making when a human drives a car and to consider the relationship between human movement and brain activity during driving.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_2-Basic_Study_for_New_Assistive_Technology_Based_on_Brain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Pattern Recognition Using Kohonen Neural Network Based on Pixel Character</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051101</link>
        <id>10.14569/IJACSA.2014.051101</id>
        <doi>10.14569/IJACSA.2014.051101</doi>
        <lastModDate>2014-12-01T12:49:40.3830000+00:00</lastModDate>
        
        <creator>Lulu C. Munggaran</creator>
        
        <creator>Suryarini Widodo</creator>
        
        <creator>Cipta A.M</creator>
        
        <creator>Nuryuliani</creator>
        
        <subject>handwriting; recognition; Kohonen neural network; similarity; character</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(11), 2014</description>
        <description>Handwriting is a human way of communicating using written media. With advances in technology and the development of science, there have been many changes in how people communicate with computers through handwriting. Computers therefore need to be able to receive handwriting data as input and to recognize that handwriting. This research focuses on handwritten character recognition using a Kohonen neural network. The purpose of this research is to find a handwriting recognition algorithm that can receive handwriting input and recognize handwritten characters entered directly into the computer using a Kohonen neural network. This method studies the distribution of a set of patterns without any class information. The basic idea of the technique comes from how the human brain stores images or patterns recognized through the eyes and is then able to recall them. This research has successfully developed and tested an application that recognizes handwritten characters using the Kohonen neural network method. The application is personal-computer based and uses a canvas as the input medium. The recognition process consists of three layers: an input layer, a training layer, and a hidden layer. In the character-mapping process, the Kohonen neural network method achieves a good level of similarity between character patterns.</description>
        <description>http://thesai.org/Downloads/Volume5No11/Paper_1-Handwritten_Pattern_Recognition_Using_Kohonen_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing the Correlation Processing Time by Using a Novel Intrusion Alert Correlation Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040315</link>
        <id>10.14569/SpecialIssue.2014.040315</id>
        <doi>10.14569/SpecialIssue.2014.040315</doi>
        <lastModDate>2014-11-21T13:25:33.4830000+00:00</lastModDate>
        
        <creator>Huwaida Tagelsir Ibrahim Elshoush</creator>
        
        <subject>Alert Correlation; Alert Reduction; Intrusion Detection Systems; False Alarm Rate</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Alert correlation analyzes the alerts from one or more Collaborative Intrusion Detection Systems (CIDSs) to produce a concise overview of security-related activity on the network. The correlation process consists of multiple components, each responsible for a different aspect of the overall correlation goal. The sequential order of the correlation components affects the performance of the correlation process. Furthermore, the total time needed for the whole process depends on the number of alerts processed in each component. This paper presents an innovative alert correlation framework that minimizes the number of alerts processed by each component and thus reduces the correlation processing time. By reordering the components, the introduced correlation model reduces the number of processed alerts as early as possible, discarding irrelevant, unreal, and false alerts in the early phases of the correlation process. A new component, shushing the alerts, is added to deal with unrelated and false positive alerts, and a modified algorithm for fusing the alerts is outlined. The intruders’ intentions are grouped into attack scenarios, which are then used to detect future attacks. DARPA 2000 intrusion detection scenario-specific datasets and a testbed network were used to evaluate the innovative alert correlation model, and comparisons with a previous correlation system were performed. The results of processing these datasets and recognizing the attack patterns demonstrate the potential of the improved correlation model and gave favorable results.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_15-Reducing_the_Correlation_Processing_Time_by_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SMP-Based Approach for Intelligent Service Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040314</link>
        <id>10.14569/SpecialIssue.2014.040314</id>
        <doi>10.14569/SpecialIssue.2014.040314</doi>
        <lastModDate>2014-11-21T13:25:33.4530000+00:00</lastModDate>
        
        <creator>Hosam AlHakami</creator>
        
        <creator>Feng Chen</creator>
        
        <creator>Helge Janicke</creator>
        
        <subject>Service Selection; Stable Marriage Problem (SMP); Selective Strategy</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Service-oriented computing is establishing itself as a new computing paradigm in which services advertise their capabilities within a network and are then used, composed, and orchestrated by other services and end-users. While many approaches to matching service providers with consumers of their services have been developed in the past, the approach proposed in this paper takes a different view of the problem: rather than seeking the fittest individual utility, it treats matching as a constrained optimisation problem between a set of services and a set of service requests. Our approach addresses this problem using an adaptation of the well-known stable marriage problem and demonstrates how matching between services and requests to a certain threshold can be expressed. This contributes a fair assignment between services and requests based on their preferences. Whereas the current state of the service selection process considers only the view of the requests, the proposed approach can ensure several features for the services, such as service protection and service quality; for example, it can preserve service availability by redirecting an incoming request to a similar service if the current service is busy.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_14-SMP-Based_Approach_for_Intelligent_Service.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended 4-Dimensional OpenGL e-book associated with electric material</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040313</link>
        <id>10.14569/SpecialIssue.2014.040313</id>
        <doi>10.14569/SpecialIssue.2014.040313</doi>
        <lastModDate>2014-11-21T13:25:33.4200000+00:00</lastModDate>
        
        <creator>Kazu-masa YAMADA</creator>
        
        <creator>Nobuaki MATSUHASHI</creator>
        
        <subject>OpenGL; e-textbook; Perovskite structure; dielectric material; Barium Titanate</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>The aim of this research is to develop a four-dimensional electronic textbook (4D-Text) for a typical dielectric material structure, the Perovskite crystalline formation of Barium Titanate, where 4D means combined free-direction viewing and 3D animation, rendered continuously while the viewpoint is changed and scaled as the user chooses. Specifically, a crystallographic 4D structural animation of Barium Titanium (IV) Oxide is discussed, with relevance to virtual learning environments, e-learning tools, educational systems design, and organizational issues in e-learning. The crystalline Perovskite structure, with chemical formula ABX3, is constructed as follows: one type of sphere represents the X anions (usually oxygen, O[2-]); another type represents the B atoms (a smaller metal cation, such as Ti[4+]); and the third type represents the A atoms (a larger metal cation, such as Ba[2+]). The thermal transformation between the lower-temperature ferroelectric phase and the higher-temperature paraelectric phase is then discussed. The crystallographic 4D-Text covers the undistorted isometric regular cubic structure as well as the symmetry-lowered orthorhombic, tetragonal, and trigonal Perovskite structures. We conclude that a free-OpenGL-assisted 4D animation approach is a good way to organize free 4D-Text e-learning, using free 3D viewing and manipulation of the 4D animation data files created via the MGF (MicroAVS Geometry File) approach. Crystallographic numerical data for the Perovskite structure are processed and extracted with the free AWK language to prepare the 4D animation with MGF. The 4D-OpenGL free e-Text thus enables studying the Perovskite structure while changing the viewpoint to follow the instructor's guidance and accessing content transcribed from captured screens and supplementary references with alphanumeric characters and relevant information. Consequently, for the presented aim of free 4D e-learning, free OpenGL technology is indispensable.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_13-Extended_4-Dimensional_OpenGL_e-book.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach of Using Structural Modelling for Personalized Study Planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040312</link>
        <id>10.14569/SpecialIssue.2014.040312</id>
        <doi>10.14569/SpecialIssue.2014.040312</doi>
        <lastModDate>2014-11-21T13:25:33.4070000+00:00</lastModDate>
        
        <creator>Raita Rollande</creator>
        
        <creator>Janis Grundspenkis</creator>
        
        <creator>Antons Mislevics</creator>
        
        <subject>intelligent tutoring system; personalized education; graphs; structural modelling; element ranking and structural analysis</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>The planning of individual studies is becoming more and more topical. Given the fast and diverse rhythm of life, people need an individual approach to study planning, which ensures wider availability of learning for different social groups. The use of information and communication technologies (ICT) in the study process makes it more attractive and interesting, as well as more adequate to the demands of the 21st century. The authors have developed a graph-based framework for the personalization of the education process, based on a set of four graphs representing the structure of the study programme, the structure of the study courses, the concept maps, and the learning objects [1, 2, 3, 4, 5]. A special tool is needed that allows learners themselves to design a learning strategy corresponding to their interests. The authors have already implemented the personalized study planning process in a prototype, which allows the creation of a personalized study programme and the planning of course learning, setting the courses in the required sequence [3, 4, 5]. In this paper the authors propose the use of structural analysis methods to calculate ranks for the nodes of the graphs, thus detecting the most significant nodes in the graph structure. Rank calculations are performed for the graph of the study programme, the graphs of the study courses, and the concept maps. Calculating ranks for the graph nodes allows detecting the most significant courses in the study programme, the most important topics in a study course, and the most essential concepts in a concept map.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_12-New_Approach_of_Using_Structural_Modelling_for_Personalized_Study_Planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benchmarking the Higher Education Institutions in Egypt using Composite Index Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040311</link>
        <id>10.14569/SpecialIssue.2014.040311</id>
        <doi>10.14569/SpecialIssue.2014.040311</doi>
        <lastModDate>2014-11-21T13:25:33.3730000+00:00</lastModDate>
        
        <creator>Mohamed Rashad M El-Hefnawy</creator>
        
        <creator>Ali Hamed El-Bastawissy</creator>
        
        <creator>Mona Ahmed Kadry</creator>
        
        <subject>Key Performance Indicators; Composite Index; Analytic Hierarchy Process; Performance Measurement; Higher Education Institutions; Ranking Systems; Benchmark Models</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Egypt has the largest and most significant higher education system in the Middle East and North Africa, but it has continuously faced serious, accumulated challenges. The Higher Education Institutions in Egypt are undergoing important changes involving the development of performance: they are implementing strategies to enhance the overall performance of their universities using ICT, but the gap between what exists and what is required for self-regulation and improvement processes is still not entirely clear in the face of these challenges. Using strategic comparative analysis models and tools to evaluate current and future states will affect the overall performance of universities and shape new paradigms in the development of the Higher Education System (HES); several studies have investigated the evaluation of universities through the development and use of ranking and benchmark systems.
In this paper, we provide a model to construct a unified Composite Index (CI) based on a set of SMART indicators that emulates the nature of higher education systems in Egypt. The outcomes of the proposed model aim to measure the overall performance of universities and provide a unified benchmarking method in this context. The model is discussed from theoretical and technical perspectives. The research study was conducted with 40 professors from 19 renowned universities in Egypt as education domain experts.
</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_11-Benchmarking_the_Higher_Education_Institutions_in_Egypt_using_Composite_Index_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Safety Analysis Approach to Clinical Workflows: Application and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040310</link>
        <id>10.14569/SpecialIssue.2014.040310</id>
        <doi>10.14569/SpecialIssue.2014.040310</doi>
        <lastModDate>2014-11-21T13:25:33.3430000+00:00</lastModDate>
        
        <creator>Lamis Al-Qora’n</creator>
        
        <creator>Neil Gordon</creator>
        
        <creator>Martin Walker</creator>
        
        <creator>Septavera Sharvia</creator>
        
        <creator>Sohag Kabir</creator>
        
        <subject>clinical workflows; safety analysis; radiology; HiP-HOPS</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Clinical workflows are safety critical workflows as they have the potential to cause harm or death to patients. Their safety needs to be considered as early as possible in the development process. Effective safety analysis methods are required to ensure the safety of these high-risk workflows, because errors that may happen through routine workflow could propagate within the workflow to result in harmful failures of the system’s output. This paper shows how to apply an approach for safety analysis of clinical workflows to analyse the safety of the workflow within a radiology department and evaluates the approach in terms of usability and benefits. The outcomes of using this approach include identification of the root causes of hazardous workflow failures that may put patients’ lives at risk. We show that the approach is applicable to this area of healthcare and is able to present added value through the detailed information on possible failures, of both their causes and effects; therefore, it has the potential to improve the safety of radiology and other clinical workflows.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_10-A_Safety_Analysis_Approach_to_Clinical_Workflows_Application_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Framework for Examining the use of Mobile Transactions in Saudi Arabia: the User’s Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040309</link>
        <id>10.14569/SpecialIssue.2014.040309</id>
        <doi>10.14569/SpecialIssue.2014.040309</doi>
        <lastModDate>2014-11-21T13:25:33.3130000+00:00</lastModDate>
        
        <creator>Mohammed A. Alqahtani</creator>
        
        <creator>Pam J. Mayhew</creator>
        
        <subject>Mobile Technologies; E-Commerce; M-Transactions; Conceptual Framework; Acceptance; Developing Countries</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Both the recent advances in mobile technologies and the high penetration rate of mobile communication services have had a profound impact on our daily lives, and are beginning to offer interesting and advantageous new services. In particular, the mobile transaction (m-transaction) system has emerged, enabling users to pay for physical and digital goods and services using their mobile devices whenever they want, regardless of their location. Although it is anticipated that m-transactions will enjoy a bright future, there is apparently still reluctance among users to accept mobile transactions. Besides analyzing the literature, the authors have conducted five empirical studies to develop a robust comprehensive framework that encompasses the key factors which could affect Saudi users’ intentions to use m-transactions. This paper aims to summarize and discuss these studies, show how they have evolved in several stages, aiming to reach a satisfactory level of maturity and finally shed light on interesting results.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_9-Developing_a_Framework_for_Examining_the_use_of_Mobile_Transactions_in_Saudi_Arabia_the_User’s_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Gravity Sensor by Curvature Energy: their Encoding and Computational Models*</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040308</link>
        <id>10.14569/SpecialIssue.2014.040308</id>
        <doi>10.14569/SpecialIssue.2014.040308</doi>
        <lastModDate>2014-11-21T13:25:33.2970000+00:00</lastModDate>
        
        <creator>Francisco Bulnes</creator>
        
        <subject>Curvature energy; magnetic dilaton; quantum gravity sensor; strong interactions; quantum computing</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Using the concept of curvature energy encoded in non-harmonic signals, due to the effect that characterizes curvature as a deformation of the field in the corresponding resonance space (and as an obstruction to the displacement of the corresponding shape operator), a quantum gravity sensor is developed and designed. It considers the quantized version of curvature as an observable of the gravitational field, where space is distorted by the strong interactions between particles, interpreting this observable as light-field deformations obtained on the space-time background. For this measurement we use a hypothetical graviton particle modeled as a magnetic dilaton, which must be a gauge graviton (gauge boson). Several computational models of these photonic measurements are also obtained, along with their prototype photonic devices.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_8-Quantum_Gravity_Sensor_by_Curvature_Energy_their_Encoding_and_Computational_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing Concurrency to Workflows: Theory and A Real-World Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040307</link>
        <id>10.14569/SpecialIssue.2014.040307</id>
        <doi>10.14569/SpecialIssue.2014.040307</doi>
        <lastModDate>2014-11-21T13:25:33.2670000+00:00</lastModDate>
        
        <creator>Laird Burns</creator>
        
        <creator>Wai Yin Mok</creator>
        
        <creator>Wes N. Colley</creator>
        
        <subject>Business process re-engineering; Unified Modeling Language; activity diagrams; flow independent workflows; concurrency</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Making use of the concepts of the activity diagrams of the Unified Modeling Language, this paper defines an important class of workflows, called flow independent workflows, which are deterministic in the sense that if a flow independent workflow is given the same multi-set of resources as input over and over again, it will produce the same output every time. This paper then provides a methodology, and its accompanying algorithms, that introduces concurrency to flow independent workflows by rearranging the action nodes in every flow of control. It then applies the methodology to a real-world case to demonstrate its usefulness. It concludes with future research possibilities that might extend the methodology to general workflows, i.e., not necessarily flow independent workflows.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_7-Introducing_Concurrency_to_Workflows_Theory_and_A_Real-World_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new algorithm for detecting SQL injection attack in Web application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040306</link>
        <id>10.14569/SpecialIssue.2014.040306</id>
        <doi>10.14569/SpecialIssue.2014.040306</doi>
        <lastModDate>2014-11-21T13:25:33.2330000+00:00</lastModDate>
        
        <creator>Ouarda Lounis</creator>
        
        <creator>Salah Eddine Bouhouita Guermeche</creator>
        
        <creator>Lalia Saoudi</creator>
        
        <creator>Salah Eddine Benaicha</creator>
        
        <subject>SQL injection attack; Web scanner; Web application; Web vulnerabilities; security</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Nowadays, the security of Web applications and servers is a growing concern on the Web. The number of vulnerabilities identified in this type of application is constantly increasing, especially SQL injection attacks.
It is therefore necessary to regularly audit Web applications to verify the presence of exploitable vulnerabilities. The Web vulnerability scanner WASAPY is one such audit tool; it uses an algorithm based on classification techniques applied to pages obtained by sending specially formatted HTTP requests.
We propose in this paper a new algorithm built with the aim of improving, rather than merely supplementing, the logic followed in modeling the WASAPY tool. The tool was supplemented with a new class reflecting the legitimate or reference behaviour; the detection mechanism is therefore solidly built on statistics within a fairly clear mathematical framework, described by a simple geometric representation or interpretation.
</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_6-A_new_algorithm_for_detecting_SQL_injection_attack_in_Web_application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Process Knowledge Discovery in Social BPM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040305</link>
        <id>10.14569/SpecialIssue.2014.040305</id>
        <doi>10.14569/SpecialIssue.2014.040305</doi>
        <lastModDate>2014-11-21T13:25:33.2030000+00:00</lastModDate>
        
        <creator>Mohammad Ehson Rangiha</creator>
        
        <creator>Bill Karakostas</creator>
        
        <subject>Social BPM; BPM; Goal-Based Modelling; Process Recommendation; Social Tagging</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>The utilization of process knowledge for future executions is an effective way of improving the efficiency of business processes and benefiting from the knowledge captured in previous executions. This paper discusses how social tagging can be used in the context of social business process management to assist and support the execution of business processes in a social environment. We believe such an approach is a step forward towards producing a comprehensive model for social business process management.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_5-Process_Knowledge_Discovery_in_Social_BPM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dissecting Genuine and Deceptive Kudos: The Case of Online Hotel Reviews</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040304</link>
        <id>10.14569/SpecialIssue.2014.040304</id>
        <doi>10.14569/SpecialIssue.2014.040304</doi>
        <lastModDate>2014-11-21T13:25:33.1870000+00:00</lastModDate>
        
        <creator>Snehasish Banerjee</creator>
        
        <creator>Alton Y. K. Chua</creator>
        
        <subject>e-business; user-generated content; online reviews; opinion spam; readability; genre; writing style</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>As users continue to rely on online hotel reviews for making purchase decisions, the trend of posting deceptive reviews to heap praises and kudos is gradually becoming a well-established e-business malpractice. Conceivably, it is not trivial for users to distinguish between genuine and deceptive kudos in reviews. Hence, this paper identifies three linguistic cues that could offer telltale signs to distinguish between genuine and deceptive reviews. These linguistic cues include readability, genre and writing style. Drawing data from a publicly available secondary dataset, results indicate that readability and writing style of reviews offer useful clues to distinguish between genuine and deceptive reviews. Specifically, genuine reviews could be more readable and less hyperbolic compared with deceptive entries. With respect to review genre however, the differences were largely blurred. The implications of the findings for theory and practice are highlighted.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_4-Dissecting_Genuine_and_Deceptive_Kudos_The_Case_of_Online_Hotel_Reviews.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User Satisfaction Determinants for Digital Culture Heritage Online Collections</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040303</link>
        <id>10.14569/SpecialIssue.2014.040303</id>
        <doi>10.14569/SpecialIssue.2014.040303</doi>
        <lastModDate>2014-11-21T13:25:33.1570000+00:00</lastModDate>
        
        <creator>Zaihasriah Zahidi</creator>
        
        <creator>Yan Peng Lim</creator>
        
        <creator>Peter Charles Woods</creator>
        
        <subject>User experience; user satisfaction; digital culture heritage online collections</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>The aim of this paper is to identify the possible determinants that influence user satisfaction in the context of digital cultural heritage (DCH) online collections. The data was collected in three stages. In the first stage, literature studies were conducted to form a general overview of user satisfaction across various web domains. Next, a think-aloud protocol was conducted with a group of general users with no background in cultural heritage; two existing DCH online collections were used as vehicles for obtaining the findings. Lastly, existing studies on Herzberg's Two-Factor Theory in the web-environment context were adapted and adopted to identify the possible hygiene and motivator factors that influence user satisfaction in this context of study.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_3-User_Satisfaction_Determinants_for_Digital_Culture_Heritage_Online_Collections.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a Content Addressable Memory-based Parallel Processor implementing (-1+j)-based Binary Number System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040302</link>
        <id>10.14569/SpecialIssue.2014.040302</id>
        <doi>10.14569/SpecialIssue.2014.040302</doi>
        <lastModDate>2014-11-21T13:25:33.1230000+00:00</lastModDate>
        
        <creator>Tariq Jamil</creator>
        
        <subject>binary number; complex binary; parallel processing; content-addressable; memory; associative dataflow; compiler; operating system</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>Contrary to the traditional base 2 binary number system, used in today’s computers, in which a complex number is represented by two separate binary entities, one for the real part and one for the imaginary part, Complex Binary Number System (CBNS), a binary number system with base (-1+j), is used to represent a given complex number in single binary string format. In this paper, CBNS is reviewed and arithmetic algorithms for this number system are presented. The design of a CBNS-based parallel processor utilizing content-addressable memory for implementation of associative dataflow concept has been described and software-related issues have also been explained.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_2-Design_of_a_Content_Addressable_Memory-based_Parallel_Processor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Version of the MCACC to Augment the Computing Capabilities of Mobile Devices Using Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040301</link>
        <id>10.14569/SpecialIssue.2014.040301</id>
        <doi>10.14569/SpecialIssue.2014.040301</doi>
        <lastModDate>2014-11-21T13:25:33.0630000+00:00</lastModDate>
        
        <creator>Mostafa A. Elgendy</creator>
        
        <creator>Ahmed Shawish</creator>
        
        <creator>Mahmoud I. Moussa</creator>
        
        <subject>smartphones; android; offloading; mobile Cloud computing; battery; security</subject>
        <description>Special Issue(SpecialIssue), 4(3), 2014</description>
        <description>As smartphones now have a wide range of capabilities, many heavy applications such as gaming, video editing, and face recognition are available. However, these applications require intensive computational power, memory, and battery. Much research addresses this problem by offloading applications to run on the Cloud, due to its extensive storage and computation resources. Later techniques choose to offload part of an application while leaving the rest to be processed on the smartphone, but base the decision on only one or two metrics, such as power and CPU consumption, without considering other important metrics. Our previously proposed MCACC framework introduced a new generation of offloading frameworks that handle this problem by smartly merging a group of real-time metrics, including total execution time, energy consumption, remaining battery, memory, and security, into the offloading decision. In this paper, we introduce an enhanced version of the MCACC framework that can also operate smartly under low-bandwidth network scenarios, in addition to its existing capabilities. In this framework, a mobile application is divided into a group of services, each of which is either executed locally on the mobile device or remotely on the Cloud through a dynamic offloading decision model. Extensive simulation studies show that both heavy and light applications can benefit from the proposed framework, saving energy and improving performance compared to previous counterparts. The enhanced MCACC makes smartphones smarter, as the offloading decision is taken without any user intervention.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo11/Paper_1-An_Enhanced_Version_of_the_MCACC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Realising Dynamism in MediaSense Publish/Subscribe Model for Logical-Clustering in Crowdsourcing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031106</link>
        <id>10.14569/IJARAI.2014.031106</id>
        <doi>10.14569/IJARAI.2014.031106</doi>
        <lastModDate>2014-11-10T07:38:44.5600000+00:00</lastModDate>
        
        <creator>Hasibur Rahman</creator>
        
        <creator>Rahim Rahmani</creator>
        
        <creator>Theo Kanter</creator>
        
        <subject>Internet; crowdsourcing; pervasive computing; context information; dynamism; context-ID; logical-clustering; Publish/Subscribe; MediaSense</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>The upsurge of social networks, mobile devices, and Internet- or Web-enabled services has enabled an unprecedented level of human participation in pervasive computing, which is coined crowdsourcing. The pervasiveness of computing devices leads to a fast-varying computing environment, where it is imperative to have a model that caters for this dynamism. The challenge of efficiently distributing context information in logical-clustering in crowdsourcing scenarios can be countered by the scalable MediaSense PubSub model. MediaSense is a proven scalable PubSub model for static environments. However, the scalability of MediaSense as a PubSub model is further challenged by its viability to adjust to the dynamic nature of crowdsourcing, which involves not only fast-varying pervasive devices but also dynamic, distributed, and heterogeneous context information. In light of this, the paper extends the current MediaSense PubSub model to handle dynamic logical-clustering in crowdsourcing. The results suggest that the extended MediaSense is viable for catering for the dynamic nature of crowdsourcing; moreover, it is possible to predict the near-optimal subscription matching time, to predict the time it takes to update (insert or delete) context-IDs alongside existing published context-IDs, and to foretell the memory usage of the MediaSense PubSub model.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_6-Realising_Dynamism_in_MediaSense_PublishSubscribe_Model_for_Logical-Clustering_in_Crowdsourcing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Learning-system usage: Scale development and empirical tests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031105</link>
        <id>10.14569/IJARAI.2014.031105</id>
        <doi>10.14569/IJARAI.2014.031105</doi>
        <lastModDate>2014-11-10T07:38:44.5270000+00:00</lastModDate>
        
        <creator>Saleh Alharbi</creator>
        
        <creator>Steve Drew</creator>
        
        <subject>Mobile learning; Higher education; UTAUT; IS Success</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>Mobile technologies have changed the shape of learning for learners, society, and education providers. Consequently, mobile learning has become a core component in modern education. Nevertheless, introducing mobile learning systems does not automatically guarantee that learners will develop a positive behavioural intention to use it and therefore use it. Thus, acceptance-of-technology and system-success studies have increased. As yet, however, much of the research regarding understanding students’ behavioural intention to use mobile learning systems seems to suffer from several shortcomings. On top of that, there is no common cognitive theoretical foundation. This study introduces a theoretical framework that combines the Unified Theory of Acceptance and Use of Technology (UTAUT) and Information System (IS) Success Model. This integration resulted in three success measures and two acceptance constructs. The success measures included the following: a) information quality, b) system quality, and c) user satisfaction; whilst the following were the acceptance measures: a) effort expectancy, b) performance expectancy, and c) social influence. Further, this study introduces lecture attitude as a new construct that is believed to moderate students’ behavioural intention. The relationships between the different factors form the research hypotheses.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_5-Mobile_Learning-system_usage_Scale_development_and_empirical_tests.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Double Competition for Information-Theoretic SOM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031104</link>
        <id>10.14569/IJARAI.2014.031104</id>
        <doi>10.14569/IJARAI.2014.031104</doi>
        <lastModDate>2014-11-10T07:38:44.5130000+00:00</lastModDate>
        
        <creator>Ryotaro Kamimura</creator>
        
        <subject>double competition, self-organizing maps, mutual information, class structure</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>In this paper, we propose a new type of information-theoretic method for self-organizing maps (SOM), taking into account competition between competitive (output) neurons as well as input neurons. The method is called "double competition", as it considers competition between output as well as input neurons. By increasing information in input neurons, we expect to obtain more detailed information on input patterns through the information-theoretic method. We applied the information-theoretic methods to two well-known data sets from the machine learning database, namely, the glass and dermatology data sets. We found that the information-theoretic method with double competition explicitly separated the different classes. On the other hand, without considering input neurons, class boundaries could not be explicitly identified. In addition, without considering input neurons, quantization and topographic errors were inversely related: when the quantization errors decreased, topographic errors increased. However, with double competition, this inverse relation between quantization and topographic errors was neutralized. Experimental results show that by incorporating information in input neurons, class structure could be clearly identified without degrading the map quality too severely.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_4-Double_Competition_for_Information-Theoretic_SOM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multistage Feature Selection Model for Document Classification Using Information Gain and Rough Set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031103</link>
        <id>10.14569/IJARAI.2014.031103</id>
        <doi>10.14569/IJARAI.2014.031103</doi>
        <lastModDate>2014-11-10T07:38:44.4970000+00:00</lastModDate>
        
        <creator>Mrs. Leena H. Patil</creator>
        
        <creator>Dr. Mohammed Atique</creator>
        
        <subject>Introduction; Document Preprocessing; Information Gain; Rough Set; Classifiers</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>The number of documents is increasing rapidly; therefore, organizing them in digitized form makes text categorization a challenging issue. A major issue for text categorization is its large number of features. Most of the features are noisy, irrelevant and redundant, which may mislead the classifier. Hence, it is important to reduce the dimensionality of the data to obtain a smaller subset that provides the most information gain. Feature selection techniques reduce the dimensionality of the feature space and also improve overall accuracy and performance. Hence, feature selection is considered an efficient technique for overcoming the issues of text categorization. We therefore propose a multistage feature selection model to improve the overall accuracy and performance of classification. In the first stage, document preprocessing is performed. Secondly, each term within the documents is ranked according to its importance for classification using information gain. Thirdly, the rough set technique is applied to the highly ranked terms and feature reduction is carried out. Finally, document classification is performed on the core features using Naive Bayes and KNN classifiers. Experiments are carried out on three UCI datasets: Reuters 21578, Classic 04 and Newsgroup 20. Results show the better accuracy and performance of the proposed model.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_3-A_Multistage_Feature_Selection_Model_for_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Information-Theoretic Measure for Face Recognition: Comparison with Structural Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031102</link>
        <id>10.14569/IJARAI.2014.031102</id>
        <doi>10.14569/IJARAI.2014.031102</doi>
        <lastModDate>2014-11-10T07:38:44.4670000+00:00</lastModDate>
        
        <creator>Asmhan Flieh Hassan</creator>
        
        <creator>Zahir M. Hussain</creator>
        
        <creator>Dong Cai-lin</creator>
        
        <subject>Information Theoretic Similarity; Joint Histogram; Structural Similarity (SSIM); face recognition; Image Processing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>Automatic recognition of people's faces is a challenging problem that has received significant attention from signal processing researchers in recent years, due to its several applications in different fields, including security and forensic analysis. Despite this attention, face recognition is still one of the most challenging problems, and up to this moment there is no technique that provides a reliable solution in all situations. In this paper a novel technique for face recognition is presented. This technique, called ISSIM, is derived from our recently published information-theoretic similarity measure HSSIM, which was based on the joint histogram. Face recognition with ISSIM is still based on the joint histogram of a test image and database images. Performance evaluation was performed in MATLAB using part of the well-known AT&amp;T image database, consisting of 49 face images: seven subjects were chosen, and for each subject seven views (poses) with different facial expressions. The goal of this paper is to present a simplified approach for face recognition that may work in real-time environments. The performance of our information-theoretic face recognition method (ISSIM) has been demonstrated experimentally and is shown to outperform the well-known statistics-based method (SSIM).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_2-An_Information-Theoretic_Measure_for_Face.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Sentiment Analysis Methods and Identifying Scope of Negation in Newspaper Articles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031101</link>
        <id>10.14569/IJARAI.2014.031101</id>
        <doi>10.14569/IJARAI.2014.031101</doi>
        <lastModDate>2014-11-10T07:38:44.3730000+00:00</lastModDate>
        
        <creator>S Padmaja</creator>
        
        <creator>Prof. S Sameen Fatima</creator>
        
        <creator>Sasidhar Bandu</creator>
        
        <subject>Sentiment Analysis; Negation Identification; News Articles</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(11), 2014</description>
        <description>Automatic detection of linguistic negation in free text is a demanding need for many text processing applications, including Sentiment Analysis. Our system uses online news archives from two different sources, namely NDTV and The Hindu. While dealing with news articles, we performed three subtasks: identifying the target; separating good and bad news content from the good and bad sentiment expressed on the target; and analyzing clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. In this paper, our main focus was on evaluating and comparing three sentiment analysis methods (two machine learning based and one lexicon based) and also identifying the scope of negation in news articles for two political parties, namely BJP and UPA, by using three existing methodologies: Rest of the Sentence (RoS), Fixed Window Length (FWL) and Dependency Analysis (DA). Among the sentiment methods, SVM achieved the best F-measures, with values of 0.688 and 0.657 for BJP and UPA respectively. On the other hand, the F-measures for RoS, FWL and DA were 0.58, 0.69 and 0.75 respectively; we observed that DA performed better than the other two. Among the 1,675 sentences in the corpus, according to annotator I, 1,137 were positive and 538 were negative, whereas according to annotator II, 1,130 were positive and 545 were negative. Further, we identified the score of each sentence and calculated the accuracy on the basis of the average score of both annotators.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No11/Paper_1-Evaluating_Sentiment_Analysis_Methods_and_Identifying_Scope.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transfer Learning Method Using Ontology for Heterogeneous Multi-agent Reinforcement Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051022</link>
        <id>10.14569/IJACSA.2014.051022</id>
        <doi>10.14569/IJACSA.2014.051022</doi>
        <lastModDate>2014-10-31T07:39:09.0230000+00:00</lastModDate>
        
        <creator>Hitoshi Kono</creator>
        
        <creator>Akiya Kamimura</creator>
        
        <creator>Kohji Tomita</creator>
        
        <creator>Yuta Murata</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Transfer learning; Multi-agent reinforcement learning; Multi-agent robot systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>This paper presents a framework, called the knowledge co-creation framework (KCF), for heterogeneous multi-agent robot systems that use a transfer learning method. Multi-agent robot systems (MARS) that utilize reinforcement learning and transfer learning have recently been studied in real-world situations. In MARS, autonomous agents obtain behavior autonomously through multi-agent reinforcement learning, and the transfer learning method enables the reuse of the knowledge of other robots' behavior, such as for cooperative behavior. Those methods, however, have not been fully and systematically discussed. To address this, KCF leverages the transfer learning method and cloud-computing resources. In prior research, we developed ontology-based inter-task mapping as a core technology for the hierarchical transfer learning (HTL) method and investigated its effectiveness in a dynamic multi-agent environment. The HTL method hierarchically abstracts obtained knowledge by ontological methods. Here, we evaluate the effectiveness of HTL with a basic experimental setup that considers two types of ontology: action and state.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_22-Transfer_Learning_Method_Using_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tracking of Multiple objects Using 3D Scatter Plot Reconstructed by Linear Stereo Vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051021</link>
        <id>10.14569/IJACSA.2014.051021</id>
        <doi>10.14569/IJACSA.2014.051021</doi>
        <lastModDate>2014-10-31T07:39:08.9770000+00:00</lastModDate>
        
        <creator>Safaa Moqqaddem</creator>
        
        <creator>Yassine Ruichek</creator>
        
        <creator>Raja Touahni</creator>
        
        <creator>Abderrahmane Sbihi</creator>
        
        <subject>Linear stereo vision; Spectral clustering; Objects detection and tracking; Kalman filter; Data association</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>This paper presents a new method for tracking objects using stereo vision with linear cameras. Edge points extracted from the stereo linear images are first matched to reconstruct points that represent the objects in the scene. To detect the objects, a clustering process based on spectral analysis is then applied to the reconstructed points. The obtained clusters are finally tracked through their centers of gravity using a Kalman filter and a Nearest-Neighbour-based data association algorithm. Experimental results using real stereo linear images demonstrate the effectiveness of the proposed method for obstacle tracking in front of a vehicle.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_21-Tracking_of_Multiple_objects_Using_3D_Scatter_Plot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal and Evaluation of Toilet Timing Suggestion Methods for the Elderly</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051020</link>
        <id>10.14569/IJACSA.2014.051020</id>
        <doi>10.14569/IJACSA.2014.051020</doi>
        <lastModDate>2014-10-31T07:39:08.9300000+00:00</lastModDate>
        
        <creator>Airi Tsuji</creator>
        
        <creator>Tomoko Yonezawa</creator>
        
        <creator>Hirotake Yamazoe</creator>
        
        <creator>Shinji Abe</creator>
        
        <creator>Noriaki Kuwahara</creator>
        
        <creator>Kazunari Morimoto</creator>
        
        <subject>Elderly; Suggestion-method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Elderly people need to urinate frequently, and when they go on outings they often have a difficult time finding restrooms. A body water management system is therefore needed. Our proposed system calculates the timing of trips to the toilet, considering both the user's schedule and the amount of body water needing to be expelled, and recommends using the restroom with sufficient time to spare before the need to urinate. In this paper, we describe the suggestion methods of this system and show experimental results for the toilet timing suggestion methods.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_20-Proposal_and_Evaluation_of_Toilet_Timing_Suggestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Middleware to integrate heterogeneous Learning Management Systems and initial results</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051019</link>
        <id>10.14569/IJACSA.2014.051019</id>
        <doi>10.14569/IJACSA.2014.051019</doi>
        <lastModDate>2014-10-31T07:39:08.8800000+00:00</lastModDate>
        
        <creator>J. A. Hijar Miranda</creator>
        
        <creator>Daniel Vázquez Sánchez</creator>
        
        <creator>Dario Emmanuel Vázquez Ceballos</creator>
        
        <creator>Erika Hernández Rubio</creator>
        
        <creator>Amilcar Meneses Viveros</creator>
        
        <creator>Elena Fabiola Ruiz Ledezma</creator>
        
        <subject>Middleware; Learning Management Systems; Application Program Interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>The use of Learning Management Systems (LMS) has increased. It is desirable to access multiple learning objects that are managed by different Learning Management Systems. The diversity of LMS allows us to consider them as heterogeneous systems, each with its own interface to the functionality it provides. These interfaces can be Web services or calls to remote objects. The functionalities offered by an LMS depend on its user roles. A solution for integrating diverse heterogeneous platforms is based on a middleware architecture. In this paper, a middleware architecture is presented to integrate different Learning Management Systems, together with an implementation of the proposed middleware. This implementation integrates two different Learning Management Systems, using Web services and XML-RPC protocols to access the capabilities of student-role users. The result is a transparent layer that provides access to LMS contents.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_19-Middleware_to_integrate_heterogeneous_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Criminal Investigation EIDSS Based on Cooperative Mapping Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051018</link>
        <id>10.14569/IJACSA.2014.051018</id>
        <doi>10.14569/IJACSA.2014.051018</doi>
        <lastModDate>2014-10-31T07:39:08.8500000+00:00</lastModDate>
        
        <creator>Ping He</creator>
        
        <subject>IDSS; EIDSS; intuition evidence; object hypotheses; criminal investigation; cooperative reasoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>To improve research on extension intelligence systems when the knowledge at hand is insufficient, an intuition evidence model (IEM) based on human-computer cooperation is presented. From the initial intuition process space defined by primitive experience, a series of interactive mapping learning systems (IMLS) with various reduction levels are created. For each IMLS, the rule sets with their respective belief degrees are induced and saved. The paper introduces the cooperative mapping of intuition evidence and the object hypotheses method into criminal investigation, and proposes a skeleton of cooperative reasoning. The paper holds that the reliability of cooperative reasoning depends on the results of human-computer interaction. At the same time, choosing the case-cracking clue should be determined by comprehensive evaluations, and self-learning of intuition-formal judgments is essentially needed. When applying the model to reasoning and decision making, one can match the intuition judgment of the given object to the rule sets of the relevant nodes, and then draw a conclusion using some kind of evaluation algorithm. A simple example of how to create and apply the model is given.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_18-Criminal_Investigation_EIDSS_Based_on_Cooperative_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Analysis and Modified Level Set Method for Automatic Detection of Bone Boundaries in Hand Radiographs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051017</link>
        <id>10.14569/IJACSA.2014.051017</id>
        <doi>10.14569/IJACSA.2014.051017</doi>
        <lastModDate>2014-10-31T07:39:08.8200000+00:00</lastModDate>
        
        <creator>Syaiful Anam</creator>
        
        <creator>Eiji Uchino</creator>
        
        <creator>Hideaki Misawa</creator>
        
        <creator>Noriaki Suetake</creator>
        
        <subject>hand bones radiograph; boundary detection; modified level set method; diffusion filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Rheumatoid Arthritis (RA) is a chronic inflammatory joint disease characterized by a distinctive pattern of bone and joint destruction. To make an RA diagnosis, hand bone radiographs are taken and analyzed. A hand bone radiograph analysis starts with bone boundary detection, which is, however, an extremely exhausting and time-consuming task for radiologists. Automatic bone boundary detection in hand radiographs is thus strongly required. Garcia et al. proposed a method for automatic bone boundary detection in hand radiographs using an adaptive snake method, but it does not work for radiographs affected by RA. The level set method has advantages over the snake method; however, it often leads to either a complete breakdown or a premature termination of the curve evolution process, resulting in unsatisfactory results. For those reasons, we propose a modified level set method for detecting bone boundaries in hand radiographs affected by RA. Texture analysis is also applied to distinguish the hand bones from other areas. Experiments using a particular set of hand bone radiographs demonstrate the effectiveness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_17-Texture_Analysis_and_Modified_Level_Set_Method_for_Automatic_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Java Based Computer Algorithms for the Solution of a Business Mathematics Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051016</link>
        <id>10.14569/IJACSA.2014.051016</id>
        <doi>10.14569/IJACSA.2014.051016</doi>
        <lastModDate>2014-10-31T07:39:08.7870000+00:00</lastModDate>
        
        <creator>A. D. Chinedu</creator>
        
        <creator>A. B. Adeoye</creator>
        
        <subject>Actuarial Mathematics; Java-programming Language; Leasing; Procurement; Capital Assets; Uncertainties</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>A novel approach is proposed as a framework for working out the uncertainties associated with decisions between leasing and procurement of capital assets in a manufacturing industry. The mathematical concept of the tool is discussed, and the technique adopted is simple to implement and initialize. The code was developed in the Java programming language, then test-run and executed on a computer running the Windows 7 operating system, in order to solve a model that illustrates a case study in actuarial mathematics. The solution obtained proves to be stable and suits the growing demand for software for similar recurring cases in business. In addition, it speeds up the computational results. The results obtained using the empirical method are compared with the program output and adjudged excellent in terms of accuracy and adoption.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_16-Java_Based_Computer_Algorithms_for_the_Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Learning by Using Content Management System (CMS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051015</link>
        <id>10.14569/IJACSA.2014.051015</id>
        <doi>10.14569/IJACSA.2014.051015</doi>
        <lastModDate>2014-10-31T07:39:08.7400000+00:00</lastModDate>
        
        <creator>Reem Razzaq Abdul Hussein</creator>
        
        <creator>Afaf Badie Al-Kaddo</creator>
        
        <subject>E-learning; Content Management Systems; Joomla; Lecturer; Student</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>A Content Management System (CMS) is a system for managing content in order to improve the educational process and to create an interactive environment in which the content management system plays a role in e-learning. The CMS software Joomla relies on commercial extensions; the contribution of this paper is to replace the commercial extensions with a range of free extension applications and to employ them in the field of e-learning, adding new features to the program that do not exist in the original version of Joomla. The paper takes advantage of these new features in building a system used by lecturers to develop the skills and capabilities of students through an electronic portal and to raise their educational level.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_15-E-Learning_by_Using_Content_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physiological Response Measurement to Identify Online Visual Representation Designs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051014</link>
        <id>10.14569/IJACSA.2014.051014</id>
        <doi>10.14569/IJACSA.2014.051014</doi>
        <lastModDate>2014-10-31T07:39:08.7100000+00:00</lastModDate>
        
        <creator>Yu-Ping Hsu</creator>
        
        <creator>Edward Meyen</creator>
        
        <creator>Richard Branham</creator>
        
        <subject>electrodermal activity measurement; digital visual representation design; affective learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>This research involved the identification and validation of text-related visual display design principles from the literature. Representations were designed and developed that illustrated the intent of each visual display design principle included in the study. The representations were embedded in a research intervention and included validated examples of accurate displays of each principle and examples with varying degrees of inaccuracies. The representations were created based on design theories of human cognition: perception, attention, memory, and mental models [1][2][3][4][5], and presented via a monitor in a controlled research environment. The environmental controls included space appropriate to the experiment, constant temperature, consistent lighting, management of distractions including sound, monitoring of the operation of the measurement device, and the use of standardized instructions. Bertin's seven visual variables (position, size, color, shape, value, orientation and texture) were also examined within the design principles [6]. The independent-samples t test did not find significant differences between good and poor visual designs across all images and subjects. However, the paired-samples t test found significant mean differences between Bertin's principles for color, value and orientation of visual designs across subjects. The findings support future online instructional design and suggest implications for the design of online instruction.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_14-Physiological_Responese_Measrement_to_Identify.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Assessment System Based on IMS QTI for the Arabic Grammar</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051013</link>
        <id>10.14569/IJACSA.2014.051013</id>
        <doi>10.14569/IJACSA.2014.051013</doi>
        <lastModDate>2014-10-31T07:39:08.6630000+00:00</lastModDate>
        
        <creator>Abdelkarim Abdelkader</creator>
        
        <creator>Dalila Souilem Boumiza</creator>
        
        <creator>Rafik Braham</creator>
        
        <subject>Arabic Grammar; E-assessment; IMS QTI; ANLP QTI-Based Tools</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Nowadays e-learning has become a fundamental stream of learning. E-assessment is an important and essential phase of the e-learning process because of all the decisions we make about learners when teaching them. In this paper, we describe an e-assessment system for Arabic grammar. Our system is based, on the one hand, on linguistic tools; on the other hand, it integrates the Question and Test Interoperability (QTI) specification proposed by the IMS Global Learning Consortium. We adopt the IMS-QTI specification to build an interoperable, reusable and sharable e-assessment system. This system is composed of three main components. The first component is a set of linguistic tools and resources. The second is an authoring tool which allows teachers to create questions and tests in accordance with the IMS-QTI specification. The third component is an Arabic test player for parsing and interpreting QTI XML files.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_13-E-assessment_System_Based_on_IMS_QTI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prolonging Network Lifetime in Wireless Sensor Networks with Path-Constrained Mobile Sink</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051012</link>
        <id>10.14569/IJACSA.2014.051012</id>
        <doi>10.14569/IJACSA.2014.051012</doi>
        <lastModDate>2014-10-31T07:39:08.6170000+00:00</lastModDate>
        
        <creator>Basilis G. Mamalis</creator>
        
        <subject>wireless sensor; mobile sink; node clustering; data gathering; network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Many studies in recent years have considered the use of mobile sinks (MS) for data gathering in wireless sensor networks (WSN), so as to reduce the need for data forwarding among the sensor nodes (SN) and thereby prolong the network lifetime. Moreover, in practice, the MS tour length often has to be kept below a threshold, usually due to timeliness constraints on the sensor data (delay-critical applications). This paper presents a modified clustering and data forwarding protocol combined with an MS solution for efficient data gathering in wireless sensor networks with delay constraints. The adopted cluster formation method is based on the &#39;residual energy&#39; of the SNs and is appropriately modified to fit the requirement of a length-constrained MS tour, which involves, among other things, the need for inter-cluster communication and increased data forwarding. In addition, a suitable data gathering protocol is designed, based on an approximated TSP route that satisfies the given length constraint, whereas the proper application of reclustering phases guarantees the effective handling of the &#39;energy holes&#39; caused around the CHs involved in the MS route. Extensive simulation experiments show the stable and energy-efficient behavior of the proposed scheme (thus leading to increased network lifetime) as well as its higher performance in comparison to other competent approaches from the literature.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_12-Prolonging_Network_Lifetime_in_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Credible Fuzzy Classification based Technique on Self Organized Features Maps and FRANTIC-RL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051011</link>
        <id>10.14569/IJACSA.2014.051011</id>
        <doi>10.14569/IJACSA.2014.051011</doi>
        <lastModDate>2014-10-31T07:39:08.5700000+00:00</lastModDate>
        
        <creator>Mona Gamal</creator>
        
        <creator>Elsayed Radwan</creator>
        
        <creator>Adel M.A. Assiri</creator>
        
        <subject>Fuzzy Rule; Classification; Self-Organized Feature Map; Credibility Measure; Ant Colony Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Handling uncertainty and vagueness in the real world is a necessity for developing intelligent and efficient systems. Based on credibility theory, this work targets a fuzzy clustering approach that improves classification accuracy. This paper introduces the design of an efficient set of fuzzy rules inferred by a hybrid model of SOFM (Self-Organized Feature Maps) and FRANTIC-SRL (Fuzzy Rules from ANT-Inspired Computation – Simultaneous Rule Learning). Self-Organized Feature Maps cluster inputs using self-adaptation techniques. They are useful in generating fuzzy membership functions for the subsets of the fuzzy variables. The generated fuzzy variables are ranked by means of the credibility measure, whereby the weighted average of their confidence levels is determined. FRANTIC-SRL builds the fuzzy classification rule set using the ranked credibility variables in a simultaneous process. Moreover, the whole fuzzy system is evaluated based on the credibility value. The details and limitations of the proposed model are illustrated. Also, the experimental results and a comparison with previous techniques for generating fuzzy classification rules from medical data sets are presented.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_11-Credible_Fuzzy_Classification_basedTechnique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Edge Detection in Satellite Image Using Cellular Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051010</link>
        <id>10.14569/IJACSA.2014.051010</id>
        <doi>10.14569/IJACSA.2014.051010</doi>
        <lastModDate>2014-10-31T07:39:08.5230000+00:00</lastModDate>
        
        <creator>Osama Basil Gazi</creator>
        
        <creator>Dr. Mohamed Belal</creator>
        
        <creator>Dr. Hala Abdel-Galil</creator>
        
        <subject>cellular neural network; liner matrix inequality; differential evolution; genetic algorithm; Field-programmable gate array; partial differential equation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>The present paper proposes a novel approach for edge detection in satellite images based on cellular neural networks (CNN). The CNN-based edge detector is used in conjunction with image enhancement and noise removal techniques in order to deliver accurate edge detection results compared with state-of-the-art approaches. Accordingly, considering the obtained results, a comparison with the optimal Canny edge detector is performed. The proposed image processing chain delivers more detail regarding edges than the Canny edge detector. The proposed method aims to preserve salient information, due to its importance in all satellite image processing applications.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_10-Edge_Detection_in_Satellite_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An SSVEP-Based BCI System and its Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051009</link>
        <id>10.14569/IJACSA.2014.051009</id>
        <doi>10.14569/IJACSA.2014.051009</doi>
        <lastModDate>2014-10-31T07:39:08.4900000+00:00</lastModDate>
        
        <creator>Jzau-Sheng Lin</creator>
        
        <creator>Cheng-Hung Shieh</creator>
        
        <subject>Brain-Computer-Interface (BCI); Steady-State Visually Evoked Potentials (SSVEP); Electroencephalogram (EEG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>A Brain-Computer-Interface (BCI) based system on a System on a Programmable Chip (SOPC) platform, using Steady-State Visually Evoked Potentials (SSVEP) through a Bluetooth interface, is proposed in this paper. The proposed BCI system can help patients with Amyotrophic Lateral Sclerosis (ALS) or other paralyses to easily control an electric wheelchair in their daily lives. The electroencephalogram (EEG) signals are detected by electrodes and an extraction chip when the patient gazes at a target flickering at a specific frequency. These signals are then transformed by FFT into the frequency domain and transmitted to the hardware of the electric wheelchair via the Bluetooth interface. Finally, the electric wheelchair can be moved smoothly in accordance with commands converted from the frequencies of the EEG signal. The experimental results show that the proposed system can easily control electric wheelchairs.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_9-An_SSVEP-Based_BCI_System_and_its_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment System for Verifying the Safeguards Based on the HAZOP Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051008</link>
        <id>10.14569/IJACSA.2014.051008</id>
        <doi>10.14569/IJACSA.2014.051008</doi>
        <lastModDate>2014-10-31T07:39:08.4430000+00:00</lastModDate>
        
        <creator>Atsuko Nakai</creator>
        
        <creator>Kazuhiko Suzuki</creator>
        
        <subject>risk assessment; HAZOP analysis; safeguards</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>In recent years, serious accidents have occurred frequently in chemical plants in Japan. In order to prevent accidents and to mitigate process risks, it is necessary to re-evaluate risks while considering the reliability of the existing safeguards in chemical plants. The chemical plant is obligated to provide and maintain a safe environment for the people that live in its vicinity. Plant safety is provided through inherently safe design and various safeguards, such as instrumented systems, procedures, and training. HAZOP (Hazard and Operability Study) is used as one of the effective measures to identify hazards in chemical plants. In this paper, a method is proposed to calculate the probability of occurrence of hazards in chemical plants while taking existing safeguards into account. The developed system is based on the HAZOP analysis and the reliability of the safety equipment arrangement. The system can verify whether the safeguards are adequate, and it produces recommendations for further risk reduction. This system is valuable for risk management and presents useful information to support plant operation.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_8-Risk_assessment_system_for_verifying_the_safeguards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Pedestrian Detection in Video</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051007</link>
        <id>10.14569/IJACSA.2014.051007</id>
        <doi>10.14569/IJACSA.2014.051007</doi>
        <lastModDate>2014-10-31T07:39:08.3970000+00:00</lastModDate>
        
        <creator>Achmad Solichin</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Agfianto Eko Putra</creator>
        
        <subject>pedestrian detection; video; paper review</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Pedestrian detection is one of the important topics in computer vision, with key applications in various fields of human life such as intelligent vehicles, surveillance and advanced robotics. In recent years, research related to pedestrian detection has become commonplace. This paper aims to review the papers related to pedestrian detection in order to provide an overview of the recent research. The main contribution of this paper is to provide a general overview of the pedestrian detection process, viewed from different sides of the discussion. We divide the discussion into three stages: input, process and output. This paper does not attempt to select the best or optimal technique, because the best technique depends on the needs, concerns and existing environment. However, this paper is useful for future researchers who want to know the current research related to pedestrian detection.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_7-A_Survey_of_Pedestrian_Detection_in_Video.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Content-Based Approach in Research Paper Recommendation System for a Digital Library</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051006</link>
        <id>10.14569/IJACSA.2014.051006</id>
        <doi>10.14569/IJACSA.2014.051006</doi>
        <lastModDate>2014-10-31T07:39:08.3500000+00:00</lastModDate>
        
        <creator>Simon Philip</creator>
        
        <creator>P.B. Shola</creator>
        
        <creator>Abari Ovye John</creator>
        
        <subject>Recommender Systems; Content-Based Filtering; Digital Library; TF-IDF; Cosine Similarity; Vector Space Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations, the major ones being collaborative-based filtering, content-based filtering, and hybrid algorithms. The motivation for this work came from the need to integrate a recommendation feature into digital libraries in order to reduce information overload. The content-based technique is adopted because of its suitability in domains or situations where items outnumber users. TF-IDF (Term Frequency Inverse Document Frequency) and cosine similarity were used to determine how relevant or similar a research paper is to a user&#39;s query or profile of interest. Research papers and the user&#39;s query were represented as vectors of weights using a keyword-based Vector Space Model. The weights indicate the degree of association between a research paper and a user&#39;s query. This paper also presents an algorithm that provides or suggests recommendations based on the user&#39;s query. The algorithm employs both the TF-IDF weighting scheme and the cosine similarity measure. Based on the output of the system, integrating a recommendation feature into digital libraries will help library users find the research papers most relevant to their needs.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_6-Application_of_Content-Based_Approach_in_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Phrase-Level Contextual Polarity Recognition to Enhance Sentiment Arabic Lexical Semantic Database Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051005</link>
        <id>10.14569/IJACSA.2014.051005</id>
        <doi>10.14569/IJACSA.2014.051005</doi>
        <lastModDate>2014-10-31T07:39:08.2570000+00:00</lastModDate>
        
        <creator>Samir E. Abdelrahman</creator>
        
        <creator>Hanaa Mobarz</creator>
        
        <creator>Ibrahim Farag</creator>
        
        <creator>Mohsen Rashwan</creator>
        
        <subject>Sentiment Arabic Lexical Semantic Database; Support Vector Machine; Contextual Polarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Most opinion mining works need lexical resources for opinion that recognize the polarity of words (positive/negative) regardless of their contexts, which is called prior polarity. A word&#39;s prior polarity may change when the word is considered in its context; for example, positive words may be used in phrases expressing negative sentiments, or vice versa. In this paper, we aim at generating a sentiment Arabic lexical semantic database that couples each word&#39;s prior polarity with its contextual polarities and the related phrases. To do so, we first study the prior polarity effects of each word, using our Sentiment Arabic Lexical Semantic Database, on sentence-level subjectivity with a Support Vector Machine classifier. We then use the seminal English two-step phrase-level contextual polarity recognition approach to refine word polarities within their contexts. Our results achieve significant improvement over the baselines.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_5-Arabic_Phrase-Level_Contextual_Polarity_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Cloud-Based NFC Mobile Payment Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051004</link>
        <id>10.14569/IJACSA.2014.051004</id>
        <doi>10.14569/IJACSA.2014.051004</doi>
        <lastModDate>2014-10-31T07:39:08.1970000+00:00</lastModDate>
        
        <creator>Pardis Pourghomi</creator>
        
        <creator>Muhammad Qasim Saeed</creator>
        
        <creator>Gheorghita Ghinea</creator>
        
        <subject>Near Field Communication; Security; Mobile transaction; Cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Near Field Communication (NFC) is one of the most recent technologies in the area of application development and service delivery via mobile phone. NFC enables the mobile phone to act as identification and as a credit card for customers. The dynamic relationships of NFC ecosystem players in an NFC transaction process make them partners, in the sense that they sometimes have to share their access permissions on the applications running in the service environment. One of the technologies that can be used to ensure secure NFC transactions is cloud computing, which offers a wide range of advantages compared to the use of a Secure Element (SE) as a single entity in an NFC-enabled mobile phone. In this paper, we propose a protocol based on the concept of NFC mobile payments. Accordingly, we present an extended version of the NFC cloud Wallet model [14], in which the Secure Element in the mobile device is used for customer authentication, whereas the customer&#39;s banking credentials are stored in a cloud under the control of the Mobile Network Operator (MNO). In this setting, the Mobile Network Operator plays the role of the network carrier, which is responsible for controlling all the credentials transferred to the end user. The proposed protocol eliminates the requirement of a shared secret between the Point-of-Sale (POS) and the Mobile Network Operator before execution of the protocol, a mandatory requirement in the earlier version of this protocol [16]. This makes it more practicable and user friendly. Finally, we provide a detailed analysis of the protocol, where we discuss multiple attack scenarios.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_4-A_Secure_Cloud-Based_NFC_Mobile_Payment_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Female Under-Representation in Computing Education and Industry - A Survey of Issues and Interventions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051003</link>
        <id>10.14569/IJACSA.2014.051003</id>
        <doi>10.14569/IJACSA.2014.051003</doi>
        <lastModDate>2014-10-31T07:39:08.1800000+00:00</lastModDate>
        
        <creator>Joseph Osunde</creator>
        
        <creator>Gill Windall</creator>
        
        <creator>Professor Liz Bacon</creator>
        
        <creator>Professor Lachlan Mackinnon</creator>
        
        <subject>Female under-representation; Structural factors; Biological factors; Socio-cultural factors; User Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>This survey paper examines the issue of female under-representation in computing education and industry, which empirical studies have shown to be a problem for over two decades. While various measures and intervention strategies have been implemented to increase the interest of girls in computing education and industry, the level of success has been discouraging. The primary contribution of this paper is to provide an analysis of the extensive research work in this area. It outlines the progressive decline in female representation in computing education. It also presents the key arguments that attempt to explain the decline, as well as intervention strategies. We conclude that there is a need to further explore strategies that will encourage young female learners to interact more with computer educational games.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_3-Female_under-representation_in_computing_education_and_industry.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Reality of Applying Security in Web Applications in Academia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051002</link>
        <id>10.14569/IJACSA.2014.051002</id>
        <doi>10.14569/IJACSA.2014.051002</doi>
        <lastModDate>2014-10-31T07:39:08.1000000+00:00</lastModDate>
        
        <creator>Mohamed Al-Ibrahim</creator>
        
        <creator>Yousef Shams Al-Deen</creator>
        
        <subject>Web applications; Security; Education systems</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Web applications are used in academic institutions, such as universities, for a variety of purposes. Since these web pages contain critical information, securing educational systems is as important as securing any banking system. It has been found that many academic institutions have not fully secured their web pages against certain classes of vulnerabilities. This empirical study focuses on these vulnerabilities and shows their existence in the websites of academic institutions. The degree to which web pages in education systems are secured is measured, the differences among academic institutions in protecting their web applications are discussed, and recommendations on ways of protecting websites are provided.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_2-The_Reality_of_Applying_Security_in_Web_Applications_in_Academia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A GA-Based Replica Placement Mechanism for Data Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.051001</link>
        <id>10.14569/IJACSA.2014.051001</id>
        <doi>10.14569/IJACSA.2014.051001</doi>
        <lastModDate>2014-10-31T07:39:08.0070000+00:00</lastModDate>
        
        <creator>Omar Almomani</creator>
        
        <creator>Mohammad Madi</creator>
        
        <subject>Data Grid; Data replication; distributed systems; Replica placement mechanism; GA-Based Replica Placement Mechanism</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(10), 2014</description>
        <description>Data Grid is an infrastructure that manages huge amounts of data files and provides intensive computational resources across geographically distributed collaborations. To increase resource availability and to ease resource sharing in such an environment, there is a need for replication services. Data replication is one of the methods used to improve the performance of data access in distributed systems, by placing multiple copies of data files at the distributed sites. A replica placement mechanism is the process of identifying where to place copies of replicated data files in a Grid system. Choosing the best location is not an easy task. Current works find the best location based on the number of requests and the read cost of a certain file. As a result, a large bandwidth is consumed and the computational time increases. The authors propose a GA-Based Replica Placement Mechanism (DBRPM) that finds the best locations to store replicas based on four criteria, namely: 1) Read Cost, 2) Storage Cost, 3) Sites’ Workload, and 4) Replication Site.</description>
        <description>http://thesai.org/Downloads/Volume5No10/Paper_1-A_GA-Based_Replica_Placement_Mechanism_for_Data_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Mathematical Model to Detect Diabetes Using Multigene Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031007</link>
        <id>10.14569/IJARAI.2014.031007</id>
        <doi>10.14569/IJARAI.2014.031007</doi>
        <lastModDate>2014-10-11T07:21:00.7000000+00:00</lastModDate>
        
        <creator>Ahlam A Sharief</creator>
        
        <creator>Alaa Sheta</creator>
        
        <subject>Diabetes; Classification; Genetic Programming; Pima Indian data</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>Diabetes Mellitus is one of the deadly diseases growing at a rapid rate in developing countries, and it is one of the major contributors to the mortality rate: it is the sixth leading cause of death worldwide. Early detection of the disease is therefore highly recommended. This paper attempts to enhance the detection of diabetes, based on a set of attributes collected from patients, by developing a mathematical model using the Multigene Symbolic Regression Genetic Programming technique. Genetic Programming (GP) shows significant advantages in evolving nonlinear models that can be used for prediction. The developed GP model is evaluated using the Pima Indian data set and shows high capability and accuracy in the detection and diagnosis of diabetes.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_7-Developing_a_Mathematical_Model_to_Detect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A real time OCSVM Intrusion Detection module with low overhead for SCADA systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031006</link>
        <id>10.14569/IJARAI.2014.031006</id>
        <doi>10.14569/IJARAI.2014.031006</doi>
        <lastModDate>2014-10-10T11:21:31.5330000+00:00</lastModDate>
        
        <creator>Leandros A. Maglaras</creator>
        
        <creator>Jianmin Jiang</creator>
        
        <subject>SCADA systems; OCSVM; intrusion detection</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>In this paper we present an intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training, nor any information about the kind of anomaly expected in the detection process. This feature makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The developed OCSVM module is trained offline on network traces and detects anomalies in the system in real time. In order to decrease the overhead induced by communicated alarms, we propose a new detection mechanism based on the combination of OCSVM with a recursive k-means clustering procedure. The proposed intrusion detection module, K-OCSVM, is capable of distinguishing severe alarms from possible attacks regardless of the values of its parameters, making it ideal for real-time intrusion detection mechanisms for SCADA systems. The most severe alarms are then communicated, with the use of IDMEF files, to an IDS (Intrusion Detection System) developed under the CockpitCI project. Alarm messages carry information about the source of the incident, the time of the intrusion and a classification of the alarm.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_6-A_real_time_OCSVM_Intrusion_Detection_module.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling and Simulation of a Biometric Identity-Based Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031005</link>
        <id>10.14569/IJARAI.2014.031005</id>
        <doi>10.14569/IJARAI.2014.031005</doi>
        <lastModDate>2014-10-10T11:17:40.5070000+00:00</lastModDate>
        
        <creator>Dania Aljeaid</creator>
        
        <creator>Xiaoqi Ma</creator>
        
        <creator>Caroline Langensiepen</creator>
        
        <subject>e-Government; identity-based cryptosystem; biometrics; mutual authentication; finite-state machine; Petri net.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>Government information is a vital asset that must be kept in a trusted environment and efficiently managed by authorised parties. Even though e-Government provides a number of advantages, it also introduces a range of new security risks. Sharing confidential and top-secret information in a secure manner among government sectors tends to be the main capability that government agencies look for. Thus, developing an effective methodology is essential, and it is a key factor for e-Government success. The e-Government scheme proposed in this paper is a combination of identity-based encryption and biometric technology. This new scheme can effectively improve the security of authentication systems, providing a reliable identity with a high degree of assurance. This paper also demonstrates the feasibility of using finite-state machines as a formal method to analyse the proposed protocols. Finally, we show how Petri Nets can be used to simulate the communication patterns between the server and the client, as well as to validate the protocol functionality.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_5-Modelling_and_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FlexRFID: A Security and Service Control Policy-Based Middleware for Context-Aware Pervasive Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031004</link>
        <id>10.14569/IJARAI.2014.031004</id>
        <doi>10.14569/IJARAI.2014.031004</doi>
        <lastModDate>2014-10-10T11:17:40.4600000+00:00</lastModDate>
        
        <creator>Mehdia Ajana El Khaddar</creator>
        
        <creator>Mhammed Chraibi</creator>
        
        <creator>Hamid Harroud</creator>
        
        <creator>Mohammed Boulmalf</creator>
        
        <creator>Mohammed Elkoutbi</creator>
        
        <creator>Abdelilah Maach</creator>
        
        <subject>RFID; Middleware; WSN; Ubiquitous; Pervasive Computing; FlexRFID; Policy-Based; Security; Healthcare; access control</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>Ubiquitous computing targets the provision of seamless services and applications in an environment that involves a variety of devices with different capabilities. The design of applications in these environments needs to consider the heterogeneous devices, application preferences, and rapidly changing contexts. RFID and WSN technologies are widely used in today’s ubiquitous computing. In Wireless Sensor Networks, sensor nodes sense the physical environment and send the sensed data to the sink over multiple hops. WSNs are used in many applications such as military and environment monitoring. In Radio Frequency Identification, a unique ID is assigned to an RFID tag which is associated with a real-world object. RFID applications cover many areas such as Supply Chain Management (SCM), healthcare, library management, automatic toll collection, etc. The integration of both technologies will bring many advantages to the future of ubiquitous computing, through the provision of real-world tracking and context information about the objects, considerably increasing the automation of an information system. In order to process the large volume of data captured by sensors and RFID readers in real time, a middleware solution is needed. This middleware should be designed to allow the aggregation, filtering and grouping of the data captured by the hardware devices before sending them to the backend applications. In this paper we demonstrate how our middleware solution, called FlexRFID, handles large amounts of RFID and sensor scan data, and executes applications’ business rules in real time through its policy-based Business Rules layer. The FlexRFID middleware allows easy addition and removal of the hardware devices that capture data, and uses the business rules of the applications to control all of its services. We demonstrate how the middleware controls some defined healthcare scenarios, and how it addresses the access control security concern for sensitive healthcare data through the use of policies. We present hereafter the design of the FlexRFID middleware along with its evaluation results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_4-FlexRFID_A_Security_and_Service_Control_Policy-Based_Middleware.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parameter optimization for intelligent phishing detection using Adaptive Neuro-Fuzzy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031003</link>
        <id>10.14569/IJARAI.2014.031003</id>
        <doi>10.14569/IJARAI.2014.031003</doi>
        <lastModDate>2014-10-10T11:17:40.4270000+00:00</lastModDate>
        
        <creator>P. A. Barraclough</creator>
        
        <creator>G. Sexton</creator>
        
        <creator>M.A. Hossain</creator>
        
        <creator>N. Aslam</creator>
        
        <subject>FIS; Intelligent phishing detection; fuzzy inference system; neuro-fuzzy</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>Phishing attacks have been growing rapidly in the past few years. As a result, a number of approaches have been proposed to address the problem. Despite the various approaches proposed, such as feature-based and blacklist-based methods via machine learning techniques, there is still a lack of accurate, real-time solutions. Most approaches applying machine learning techniques require parameters to be tuned to solve a problem, but parameters are difficult to tune to a desirable output. This study presents a parameter tuning framework, using an adaptive neuro-fuzzy inference system with comprehensive data, to maximize system performance. Extensive experiments were conducted. During ten-fold cross-validation, the data is split into training and testing pairs and parameters are set according to the desirable output, achieving 98.74% accuracy. Our results demonstrated higher performance compared to other results in the field. This paper contributes new comprehensive data and a novel parameter tuning method, and applies a new algorithm in a new field. The implication is that an adaptive neuro-fuzzy system with effective data and proper parameter tuning can enhance system performance. The outcome will provide new knowledge in the field.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_3-Parameter_optimization_for_intelligent_phishing_detection_using_Adaptive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discrimination of EEG-Based Motor Imagery Tasks by Means of a Simple Phase Information Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031002</link>
        <id>10.14569/IJARAI.2014.031002</id>
        <doi>10.14569/IJARAI.2014.031002</doi>
        <lastModDate>2014-10-10T11:17:40.3970000+00:00</lastModDate>
        
        <creator>Ana Loboda</creator>
        
        <creator>Alexandra Margineanu</creator>
        
        <creator>Gabriela Rotariu</creator>
        
        <creator>Anca Mihaela Lazar</creator>
        
        <subject>brain computer interface; motor imagery task; electroencephalogram; phase locking value</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>We propose an off-line analysis method to discriminate between the motor imagery tasks performed in a brain computer interface system. A measure of large-scale synchronization based on the phase locking value is established. The results indicate that the method can take advantage of the phase synchrony between scalp-recorded EEG activity in the supplementary motor area and in the sensorimotor area, computing the differences between the active and the relaxation states. Phase locking value features are more discriminative in the &#946; rhythm than in the &#181; rhythm. The proposed method is simple and computationally efficient, and yields good results on the EEG Motor Movement/Imagery Dataset available from the PhysioNet research resource for physiologic signals.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_2-Discrimination_of_EEG-Based_Motor_Imagery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Reduction Approach for Enhancing Cancer Classification of Microarray Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.031001</link>
        <id>10.14569/IJARAI.2014.031001</id>
        <doi>10.14569/IJARAI.2014.031001</doi>
        <lastModDate>2014-10-10T11:17:40.3200000+00:00</lastModDate>
        
        <creator>Abeer M. Mahmoud</creator>
        
        <creator>Basma A.Maher</creator>
        
        <subject>Mining Microarray data; Cancer classification; SVM</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(10), 2014</description>
        <description>This paper presents a novel hybrid machine learning (ML) reduction approach to enhance the cancer classification accuracy of microarray data, based on two ML gene ranking techniques (T-test and Class Separability (CS)). The proposed approach is integrated with two ML classifiers, K-nearest neighbor (KNN) and support vector machine (SVM), for mining microarray gene expression profiles. Four public cancer microarray databases, Lymphoma, Leukemia, SRBCT, and Lung Cancer, are used to evaluate the proposed approach and successfully accomplish the mining process. The strategy of selecting genes only from the training samples and totally excluding the testing samples from the classifier building process is utilized for more accurate and validated results. Also, the computational experiments are illustrated in detail and comprehensively presented alongside related results from the literature. The results showed that the proposed reduction approach achieved promising results in terms of the number of genes supplied to the classifiers as well as the classification accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No10/Paper_1-A_Hybrid_Reduction_Approach_for_Enhancing_Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Shared Cache Misses via dynamic Grouping and Scheduling on Multicores</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050920</link>
        <id>10.14569/IJACSA.2014.050920</id>
        <doi>10.14569/IJACSA.2014.050920</doi>
        <lastModDate>2014-10-01T06:00:52.5730000+00:00</lastModDate>
        
        <creator>Wael Amr Hossam El Din</creator>
        
        <creator>Hany Mohamed ElSayed</creator>
        
        <creator>Ihab ElSayed Talkhan</creator>
        
        <subject>Shared Cache Miss Rate; Dynamic Scheduling; Multicore; Symbiosis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Multicore technology enables a system to perform more tasks with higher overall system performance. However, this performance cannot be exploited well due to the high miss rate in the second-level shared cache among the cores, which represents one of multicore’s challenges.
This paper addresses the dynamic co-scheduling of tasks in multicore real-time systems. The focus is on the basic idea of the megatask technique for grouping the tasks that may affect the shared cache miss rate, and the Pfair scheduling that is then used for reducing the concurrency within the grouped tasks while ensuring the real-time constraints. Consequently, the shared cache miss rate is reduced. The dynamic co-scheduling is proposed through the combination of the symbiotic technique with the megatask technique, co-scheduling the tasks based on information collected using two schemes. The first scheme measures the temporal working set size of each running task at run time, while the second scheme collects the shared cache miss rate of each running task at run time.
Experiments show that the proposed dynamic co-scheduling can decrease the shared cache miss rate by 52% compared to the static one. This indicates that dynamic co-scheduling is important for achieving high performance with shared cache memory when running heavy workloads like multimedia applications that require real-time response and continuous-media data types.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_20-Reducing_Shared_Cache_Misses_via_dynamic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Preserving Data Publishing: A Classification Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050919</link>
        <id>10.14569/IJACSA.2014.050919</id>
        <doi>10.14569/IJACSA.2014.050919</doi>
        <lastModDate>2014-09-30T12:41:31.5770000+00:00</lastModDate>
        
        <creator>A N K Zaman</creator>
        
        <creator>Charlie Obimbo</creator>
        
        <subject>privacy preserving data publishing; differential privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>The concept of privacy is expressed as the release of information in a controlled way. Privacy can also be defined as deciding what type of personal information should be released and which group or person can access and use it. Privacy Preserving Data Publishing (PPDP) is a way to allow one to share anonymous data while ensuring protection against identity disclosure of an individual. Data anonymization is a technique for PPDP which makes sure the published data is practically useful for processing (mining) while preserving individuals’ sensitive information. Most works reported in the literature on privacy preserving data publishing for classification tasks handle numerical data. However, most real-life data contains both numerical and non-numerical data. Another shortcoming is the use of a distributed model called Secure Multiparty Computation (SMC). For this research, a centralized model is used for independent data publication by a single data owner. The key challenge for PPDP is to ensure privacy as well as to keep the data usable for research. Differential privacy is a technique that ensures the highest level of privacy for a record owner while providing actual information about the data set. The aim of this research is to develop a framework that satisfies differential privacy standards and ensures maximum data usability for classification tasks such as classifying patient data in terms of blood pressure.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_19-Privacy_Preserving_Data_Publishing_A_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-mail use by the faculty members, students and staff of Saudi Arabian and Gulf states Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050918</link>
        <id>10.14569/IJACSA.2014.050918</id>
        <doi>10.14569/IJACSA.2014.050918</doi>
        <lastModDate>2014-09-30T12:41:31.4830000+00:00</lastModDate>
        
        <creator>Fahad Alturise</creator>
        
        <creator>Paul R. Calder</creator>
        
        <creator>Brett Wilkinson</creator>
        
        <subject>Email; Saudi Arabia; Gulf States</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Electronic mail (email) systems constitute one of the most important communication and business tools that people employ. Email in the workplace can help a business improve its productivity. Many organisations now rely on email to manage internal communications as well as other communication and business processes and procedures. This paper compares email use by university stakeholders (i.e., faculty members, staff, and students) in Saudi Arabia on one hand, and in the Gulf States (Qatar, Oman, the United Arab Emirates (UAE), and Bahrain) on the other. A questionnaire that was expert-reviewed and pilot-tested was used to collect data from ten universities in Saudi Arabia and five universities in the Gulf States. Slight differences emerged between the Saudi Arabian and Gulf States universities’ stakeholders’ use of email in terms of having email, the frequency of checking email, and skills in using email. The Saudi Arabian universities must improve their IT infrastructure, including the provision of suitable connection networks and formal training of staff in utilising IT resources. This study’s findings aim to advise the Saudi Arabian and Gulf States’ universities on their plans and programmes for e-learning and the consolidation of required resources.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_18-E-mail_use_by_the_faculty_members_students_and_staff.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Literature Survey of previous research work in Models and Methodologies in Project Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050917</link>
        <id>10.14569/IJACSA.2014.050917</id>
        <doi>10.14569/IJACSA.2014.050917</doi>
        <lastModDate>2014-09-30T12:41:31.4530000+00:00</lastModDate>
        
        <creator>Ravinder Singh</creator>
        
        <creator>Dr. Kevin Lano</creator>
        
        <subject>Programme/ Program Management; Project Management; Maturity Models; Onshore-offshore Management; Leadership and Management; Global Distributed Projects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>This paper provides a survey of the existing literature and research carried out in the area of project management using different models, methodologies, and frameworks. Project Management (PM) broadly encompasses programme management, portfolio management, practice management, the project management office, etc. A project management system has a set of processes, procedures, frameworks, methods, tools, methodologies, techniques, resources, etc., which are used to manage the full life cycle of projects. It also involves creating risk, quality, performance, and other management plans to monitor and manage projects efficiently and effectively.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_17-Literature_Survey_of_previous_research_work_in_Models_and_Methodologies_in_Project_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Frequency Estimation of Single-Tone Sinusoids Under Additive and Phase Noise</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050916</link>
        <id>10.14569/IJACSA.2014.050916</id>
        <doi>10.14569/IJACSA.2014.050916</doi>
        <lastModDate>2014-09-30T12:41:31.4200000+00:00</lastModDate>
        
        <creator>Asmaa Nazar Almoosawy</creator>
        
        <creator>Zahir M. Hussain</creator>
        
        <creator>Fadel A. Murad</creator>
        
        <subject>Frequency Estimation; Correlation; Cramer-Rao Bound; Phase Noise; Maximum Likelihood Estimator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>We investigate the performance of the main frequency estimation methods for a single-component complex sinusoid under complex additive white Gaussian noise (AWGN) as well as phase noise (PN). Two methods are under test: the Maximum Likelihood (ML) method using the Fast Fourier Transform (FFT), and the autocorrelation method (Corr). Simulation results showed that the FFT method has superior performance compared to the Corr method in the presence of additive white Gaussian noise (affecting the amplitude) and phase noise, with almost a 20 dB difference.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_16-Frequency_Estimation_of_Single-Tone_Sinusoids_under_Additive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Text Classifier Model for Categorizing Feed Contents Consumed by a Web Aggregator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050915</link>
        <id>10.14569/IJACSA.2014.050915</id>
        <doi>10.14569/IJACSA.2014.050915</doi>
        <lastModDate>2014-09-30T12:41:31.3900000+00:00</lastModDate>
        
        <creator>H.O.D. Longe</creator>
        
        <creator>Fatai Salami</creator>
        
        <subject>feed; aggregator; text classifier; svm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>This paper presents a method of using a text classifier to automatically categorize the content of web feeds consumed by a web aggregator. The pre-defined category of the feed consumed by the aggregator does not always match the content being consumed, and categorizing the content using the pre-defined category of the feed curtails the user experience, as users would not see all the content belonging to their category of interest. A web aggregator was developed and integrated with an SVM classifier to automatically categorize the feed content being consumed. The experimental results showed that the text classifier performs well in categorizing the content of the feeds being consumed, and also affirmed the disparity between the pre-defined category of the source feed and the appropriate category of the consumed content.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_15-A_Text_Classifier_Model_for_Categorizing_Feed_Contents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A High Performance Biometric System Based on Image Morphological Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050914</link>
        <id>10.14569/IJACSA.2014.050914</id>
        <doi>10.14569/IJACSA.2014.050914</doi>
        <lastModDate>2014-09-30T12:41:31.3600000+00:00</lastModDate>
        
        <creator>Marco Augusto Rocchietti</creator>
        
        <creator>Alejandro Luis Angel Scerbo</creator>
        
        <creator>Silvia M. Ojeda</creator>
        
        <subject>Biometric System; Digital Image Processing; Pupil and Iris Segmentation; Iris Matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>At present, many of the algorithms used and proposed for digital imaging biometric systems are based on complex mathematical models, and this fact is directly related to the performance of any computer implementation of these algorithms. On the other hand, as they are conceived for general-purpose digital imaging, these algorithms do not take advantage of common morphological features of their given domains.
In this paper we developed a novel algorithm for the segmentation of the pupil and iris in human eye images, whose expected improvement lies in the use of morphological features of images of the human eye. Based on the basic structure of a standard biometric system, we developed and implemented an innovation for each phase of the system, avoiding the use of complex mathematical models and exploiting common features of any digital image of the human eye from the dataset that we used. Finally, we compared the testing results against other known state-of-the-art works developed over the same dataset.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_14-A_High_Performance_Biometric_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing and Building a Framework for DNA Sequence Alignment Using Grid Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050913</link>
        <id>10.14569/IJACSA.2014.050913</id>
        <doi>10.14569/IJACSA.2014.050913</doi>
        <lastModDate>2014-09-30T12:41:31.3270000+00:00</lastModDate>
        
        <creator>EL-Sayed Orabi</creator>
        
        <creator>Mohamed A. Assal</creator>
        
        <creator>Mustafa Abdel Azim</creator>
        
        <creator>Yasser Kamal</creator>
        
        <subject>DNA fingerprint; Smith waterman algorithm; Needleman-Wunsch; Grid computing; Coordinator and Agent computers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Deoxyribonucleic acid (DNA) is a molecule that encodes the unique genetic instructions used in the development and functioning of all known living organisms and many viruses. This genetic information is encoded as a sequence of nucleotides (adenine, cytosine, guanine, and thymine) recorded using the letters A, C, G, and T. Querying or aligning these sequences requires dynamic programming tools, very complex matrices, and heuristic methods like FASTA and BLAST, which use massive processing power and are highly time-consuming. We present a parallel solution to reduce the processing time. The Smith-Waterman algorithm, the Needleman-Wunsch algorithm, some weighting matrices, and a grid of computers are used to find regions of similarity between these sequences in large DNA datasets. This grid consists of a master computer and an unlimited number of agents. The master computer is the user interface for entering the queried sequence, and it coordinates the processing among the grid agents.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_13-Designing_and_Building_a_Framework_for_DNA_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>XML Schema-Based Minification for Communication of Security Information and Event Management (SIEM) Systems in Cloud Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050912</link>
        <id>10.14569/IJACSA.2014.050912</id>
        <doi>10.14569/IJACSA.2014.050912</doi>
        <lastModDate>2014-09-30T12:41:31.2970000+00:00</lastModDate>
        
        <creator>Bishoy Moussa</creator>
        
        <creator>Mahmoud Mostafa</creator>
        
        <creator>Mahmoud El-Khouly</creator>
        
        <subject>XML; JSON; Minification; XML Schema; Cloud; Log; Communication; Compression; XMill; GZip; Code Generation; Code Readability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>XML-based communication governs most of today’s systems communication, due to its capability of representing complex structural and hierarchical data. However, XML document structure is huge and bulky, and it can be reduced to minimize bandwidth usage and transmission time and to maximize performance. This contributes to more efficient and better-utilized resource usage. In cloud environments, this affects the amount of money the consumer pays. Several techniques are used to achieve this goal. This paper discusses these techniques and proposes a new XML Schema-based minification technique. The proposed technique works on XML structure reduction using minification. It provides a separation between the meaningful names and the underlying minified names, which enhances software/code readability. The technique is applied to Intrusion Detection Message Exchange Format (IDMEF) messages, as part of Security Information and Event Management (SIEM) system communication hosted on the Microsoft Azure Cloud. Test results show message size reductions ranging from 8.15% to 50.34% in the raw message, without using time-consuming compression techniques. Adding GZip compression to the proposed technique produces messages 66.1% shorter than the original XML messages.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_12-XML_Schema-Based_Minification_for_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case Study: the Use of Agile on Mortgage Application: Evidence from Thailand</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050911</link>
        <id>10.14569/IJACSA.2014.050911</id>
        <doi>10.14569/IJACSA.2014.050911</doi>
        <lastModDate>2014-09-30T12:41:31.2630000+00:00</lastModDate>
        
        <creator>Kreecha Puphaiboon</creator>
        
        <subject>SCRUM; BPM; BRE; Mortgage Loan; LOS; Usability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>This paper presents a case study of a mortgage loan origination project using the SCRUM Agile model and a Business Process Management and Business Rule Management System (BPMS and BRMS). Under the Waterfall model (Stage 1), a self-developed web-based system had been built using the open-source frameworks Spring and Sarasvati. However, several problems were detected, and the project failed due to insufficient project management, rapid requirement changes, and developer coding skills. The project was continued (Stage 2) by selecting a BPMS and BRMS tool. Later, in Stage 3, SCRUM was executed with proper project management and the new tool, which better suited rapid business needs and required minimal coding. Efficient team communication and the frequent delivery of code releases increasingly contributed to the sponsor’s and users’ satisfaction. However, due to a politically influenced timeline, inexperienced project management, and requirement changes, the budget was exceeded and SCRUM was not favored. Nonetheless, open-ended questionnaire and interview results from core team members, both business users and developers, as well as a Software Usability Measurement Inventory (SUMI) conducted with 14 users, show that SCRUM and the new tool rescued the project. Empirically, this paper demonstrates to the Agile development community a method for evaluating the use of Agile augmented with usability measurement.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_11-Case_Study_The_Use_of_Agile_on_Mortgage_Application_Evidence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Chaos in the Traffic of Computer Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050910</link>
        <id>10.14569/IJACSA.2014.050910</id>
        <doi>10.14569/IJACSA.2014.050910</doi>
        <lastModDate>2014-09-30T12:41:31.2170000+00:00</lastModDate>
        
        <creator>Evgeny Nikulchev</creator>
        
        <creator>Evgeniy Pluzhnik</creator>
        
        <subject>chaos; traffic of computer networks; nonlinear dynamics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>The development of telecommunications technology currently drives the growth of research aimed at finding new solutions and innovative approaches to the mathematical description of the underlying processes. One of the directions in the description of traffic in computer networks focuses on studying the properties of chaotic traffic. We offer a complex method for determining dynamic chaos. It is suggested to introduce additional indicators based on the absence of trivial conservation laws and on weak symmetry breaking. The conclusion is that dynamic chaos is present in the example of computer network traffic.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_10-Study_of_Chaos_in_the_Traffic_of_Computer_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Social Signal of Hesitation in Multimedia Content Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050909</link>
        <id>10.14569/IJACSA.2014.050909</id>
        <doi>10.14569/IJACSA.2014.050909</doi>
        <lastModDate>2014-09-30T12:41:31.1870000+00:00</lastModDate>
        
        <creator>Tomaž Vodlan</creator>
        
        <creator>Andrej Košir</creator>
        
        <subject>Human-computer Interaction; Social Signals; Hesitation; Matrix Factorization; Video-on-Demand; Graphical Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>This paper presents a graphical analysis of selection traces in the matrix-factorization space of multimedia items. A trace consists of links (lines) between points that represent the items selected during interaction between a user and a video-on-demand (VoD) system. Users used gestures to select from among the videos on screen (VoD service), while additional user-produced social signal (SS) information was used to recommend more suitable new videos during the selection process. We used a sample of 42 users, equally split into a test group (SS considered) and control and random groups (SS not considered). We assumed that, for each user, there are areas of multimedia items in the matrix-factorization space that include preferred user items, called preferred areas. The results showed that user selection traces in the space of multimedia items (matrix-factorization space) better covered the user’s preferred areas of items if the SS of hesitation was considered.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_9-Using_Social_Signal_of_Hesitation_in_Multimedia_Content_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecture of a Mediation System for Mobile Payment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050908</link>
        <id>10.14569/IJACSA.2014.050908</id>
        <doi>10.14569/IJACSA.2014.050908</doi>
        <lastModDate>2014-09-30T12:41:31.1400000+00:00</lastModDate>
        
        <creator>Boutahar Jaouad</creator>
        
        <creator>El Hillali Wadii</creator>
        
        <creator>El Ghazi El Houssa&#239;ni Souha&#239;l</creator>
        
        <subject>M-payment; M-Commerce; Android; Web services; NFC; RFID</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Nowadays, the mobile phone has become an indispensable part of our daily lives. Going beyond its role as a communication device, and benefiting from the evolution of technology, it can be used for many purposes other than telephony, from photography and geolocation to monitoring health conditions and air quality by measuring cardiac pulsations, temperature, and ambient humidity.
In this context, financial institutions wishing to take advantage of this wave of technological change, and of the existing robust and secure telecom infrastructure, have begun to innovate and offer a new range of payment services based on the mobile phone.
Thus, in this article we present a proposal for the implementation of a mediation system for mobile payment based on Web services technology.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_8-Architecture_of_a_Mediation_System_for_Mobile_Payment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitive Data Protection on Mobile Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050907</link>
        <id>10.14569/IJACSA.2014.050907</id>
        <doi>10.14569/IJACSA.2014.050907</doi>
        <lastModDate>2014-09-30T12:41:31.1100000+00:00</lastModDate>
        
        <creator>Fan Wu</creator>
        
        <creator>Chung-han Chen</creator>
        
        <creator>Dwayne Clarke</creator>
        
        <subject>Mobile Security; Sensitive Data; Data Protection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Nowadays, many mobile devices such as phones and tablets are used in the workplace, and a large amount of data is transferred from one person to another across many different fields. Many companies and institutions are focusing on research and development to further protect sensitive data. However, sensitive data still leaks from mobile devices. To analyze how sensitive data gets leaked, we developed a simulation of sensitive data transfer. In this paper, we present an analysis of the mobile security problem of keeping sensitive data from getting out. The goals of our research are to give users a greater understanding of how data is transferred and to prevent sensitive data from being stolen. Our work will benefit mobile device users and help prevent sensitive data theft. Our experiments show different ways to safely transfer information on mobile devices by testing three types of methods: back-up, encryption, and lock plus wipe data.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_7-Sensitive_Data_Protection_on_Mobile_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hardware Segmentation on Digital Microscope Images for Acute Lymphoblastic Leukemia Diagnosis Using Xilinx System Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050906</link>
        <id>10.14569/IJACSA.2014.050906</id>
        <doi>10.14569/IJACSA.2014.050906</doi>
        <lastModDate>2014-09-30T12:41:31.0770000+00:00</lastModDate>
        
        <creator>Prof. Kamal A. ElDahshan</creator>
        
        <creator>Dr. Emad H. Masameer</creator>
        
        <creator>Prof. Mohammed I. Youssef</creator>
        
        <creator>Mohammed A. Mustafa</creator>
        
        <subject>Medical Image Processing; FPGA; Image Segmentation; Xilinx System Generator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Image segmentation is considered the most critical step in image processing; it helps to analyze, infer, and make decisions, especially in the medical field. Analyzing digital microscope images for early diagnosis and treatment of acute lymphoblastic leukemia requires sophisticated software and hardware systems. These systems must provide both highly accurate and extremely fast processing of large amounts of image data. In this work, a hardware segmentation framework for Acute Lymphoblastic Leukemia (ALL) images, based on the color histogram of the Hue channel of the HSV color space, is proposed to segment each leukemia image into blasts and background using a Field Programmable Gate Array (FPGA). The main purpose of this work is to implement the image segmentation framework on an FPGA with minimal hardware resources and low execution time, making it suitable for medical applications. The hardware segmentation framework is designed using Xilinx System Generator (XSG) as the DSP design tool, which enables the use of Simulink models; it is implemented in VHDL and synthesized for the Xilinx SPARTAN-3E Starter Kit (XC3S500E-FG320) FPGA.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_6-Hardware_Segmentation_on_Digital_Microscope_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Relationship between Modularity and Diffusion Dynamics in Networks from Spectral Analysis Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050905</link>
        <id>10.14569/IJACSA.2014.050905</id>
        <doi>10.14569/IJACSA.2014.050905</doi>
        <lastModDate>2014-09-30T12:41:31.0470000+00:00</lastModDate>
        
        <creator>Kiyotaka Ide</creator>
        
        <creator>Akira Namatame</creator>
        
        <creator>Loganathan Ponnambalam</creator>
        
        <creator>Fu Xiuju</creator>
        
        <creator>Rick Siow Mong Goh</creator>
        
        <subject>complex network; modularity; diffusion; SIS model; graph spectra; eigenvalue and eigenvector</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Modular structure is typical of most real networks. Diffusion dynamics in networks are attracting much attention because of the dramatic increase in data flows via the Web. Diffusion dynamics in networks have been well analysed as probabilistic processes, but the proposed frameworks still show differences from real observations. In this paper, we analysed the spectral properties of networks and their diffusion dynamics. In particular, we focused on studying the relationship between modularity and diffusion dynamics. Our analysis, as well as simulation results, shows that the relative influence of the non-largest eigenvalues and the corresponding eigenvectors increases as the modularity of the network increases. These results imply that, although network dynamics have often been analysed with an approximation utilizing only the largest eigenvalue, consideration of the other eigenvalues is necessary when analysing network dynamics on real networks. We also investigated the Node-level Eigenvalue Influence Index (NEII), which quantifies the relative influence of each eigenvalue on each node. This investigation indicates that the influence of each eigenvalue is confined within the modular structures of the network. These findings should be taken into consideration by researchers interested in deeper analysis of diffusion dynamics on real networks.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_5-A_Study_on_Relationship_between_Modularity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Simulation on Damage Mode Evolution in Composite Laminate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050904</link>
        <id>10.14569/IJACSA.2014.050904</id>
        <doi>10.14569/IJACSA.2014.050904</doi>
        <lastModDate>2014-09-30T12:41:31.0170000+00:00</lastModDate>
        
        <creator>Jean-Luc Rebi&#232;re</creator>
        
        <subject>numerical simulation; damage mechanism; transverse cracking; longitudinal cracking; delamination; criterion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>The present work follows numerous numerical simulations of the stress field in a cracked cross-ply laminate. These results led us to elaborate an energy criterion based on the computation of the partial strain energy release rate associated with each of the three damage types: transverse cracking, longitudinal cracking, and delamination. The criterion, a linear-fracture-based approach, is used to predict and describe the initiation of the different damage mechanisms. With this approach, the influence of the nature of the material constituents on the damage mechanisms is computed. We also give an assessment of the strain energy release rates associated with each damage mode. This criterion, validated on glass-epoxy and graphite-epoxy composite materials, will now be used in future research on new bio-based composites in the laboratory.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_4-Numerical_Simulation_on_Damage_Mode_Evolution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Multi-Signal Monitoring System Establishment for Hemodynamic Energy Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050903</link>
        <id>10.14569/IJACSA.2014.050903</id>
        <doi>10.14569/IJACSA.2014.050903</doi>
        <lastModDate>2014-09-30T12:41:30.9700000+00:00</lastModDate>
        
        <creator>YeonSoo Shin</creator>
        
        <creator>DuckHee Lee</creator>
        
        <creator>ChiBum Ahn</creator>
        
        <creator>SeungJoon Song</creator>
        
        <creator>Kyung Sun</creator>
        
        <subject>Cardiovascular disease; hemodynamic energy; monitoring system; Electrocardiogram; Photoplethysmography; Pressure; Flow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Deaths due to cardiovascular diseases are increasing worldwide, and multi-signal monitoring systems to diagnose such diseases are under development. However, little research is underway on devices that monitor hemodynamic energy, a marker of the pulsatile flow generated by the contraction and relaxation of the heart. Therefore, this study aimed to integrate multiple monitoring devices into a single device, while also incorporating hemodynamic energy monitoring. Blood pressure and flow were measured with two channels each, while electrocardiogram (ECG), photoplethysmography (PPG), and temperature were measured with one channel each. All signals were processed at the hardware level and then converted into analog voltages. The seven signals were then converted into digital signals with a data acquisition (DAQ) board. The software was developed with LabVIEW™ (National Instruments, U.S.A.) to provide a graphical user interface (GUI) on a tablet computer (ACER, U.S.A.) through USB 2.0, allowing monitoring and analysis of the acquired signals. The result is a multi-signal monitoring system that successfully integrates multiple signals into one device. Future directions include the development of a cardiovascular diagnosis algorithm and evaluation of the system via preclinical animal experiments.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_3-A_Study_of_Multi-Signal_Monitoring_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Cluster-Based Intrusion Detection System for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050902</link>
        <id>10.14569/IJACSA.2014.050902</id>
        <doi>10.14569/IJACSA.2014.050902</doi>
        <lastModDate>2014-09-30T12:41:30.9370000+00:00</lastModDate>
        
        <creator>Manal Abdullah</creator>
        
        <creator>Ebtesam Alsanee</creator>
        
        <creator>Nada Alseheymi</creator>
        
        <subject>wireless sensor networks WSN; intrusion detection ID; clustering protocols; stable election protocol SEP; KDD cup’99; KNN</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Wireless sensor networks (WSNs) are a type of network in which sensors are used to collect physical measurements. They have many application areas, such as healthcare, weather monitoring, and even military applications. Security in this kind of network is a big concern, especially in applications that require confidentiality and privacy. Therefore, providing a WSN with an intrusion detection system is essential to protect it from different types of intrusions, cyber-attacks, and random faults. Clustering has proven its efficiency in prolonging the lifetime of individual nodes as well as of the whole WSN. In this paper we design an Intrusion Detection (ID) system based on the Stable Election Protocol (SEP) for clustered heterogeneous WSNs. The benefit of using SEP is that it is a heterogeneity-aware protocol that prolongs the time interval before the death of the first node. The KDD Cup’99 data set is used as both the training and test data. After normalizing the dataset, we trained the system to detect four types of attacks, namely Probe, DoS, U2R, and R2L, using 18 of the 42 features available in the KDD Cup&#39;99 dataset. The research used the K-nearest neighbour (KNN) classifier for anomaly detection. The experiments determine that K = 5 gives the best classification, yielding an attack recognition rate of 75%. Results are compared with a KNN classifier for anomaly detection without a clustering algorithm.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_2-Energy_Efficient_Cluster-based_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Based Public Collaboration System in Developing Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050901</link>
        <id>10.14569/IJACSA.2014.050901</id>
        <doi>10.14569/IJACSA.2014.050901</doi>
        <lastModDate>2014-09-30T12:41:30.8600000+00:00</lastModDate>
        
        <creator>Sherif M. Badr</creator>
        
        <creator>Sherif E. Hussein</creator>
        
        <subject>e-government; m-government; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(9), 2014</description>
        <description>Governments in developing countries are increasingly making efforts to provide more access to information and services for citizens, businesses, and civil servants through smart devices. However, providing strategically high-impact m-services faces numerous challenges, such as the complexity of different mobile technologies, creating secure networks that deliver reliable services, and identifying the types of services that can easily be provided on mobile devices. Those problems could be solved by applying the cloud computing model to the business processes of e-government to build a government cloud. This research proposes an environment for citizens to have greater access to their government and, in theory, makes citizen-to-government contact more inclusive. In addition, it examines an application that allows anyone to report and track non-emergency issues via the Internet. It can also encourage citizens to become active in improving and taking care of their community by reporting issues in their neighborhood, in order to improve the Egyptian e-government development index.</description>
        <description>http://thesai.org/Downloads/Volume5No9/Paper_1-Cloud_Based_Public_Collaboration_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Programming Method Applied in Vietnamese Word Segmentation Based on Mutual Information among Syllables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030904</link>
        <id>10.14569/IJARAI.2014.030904</id>
        <doi>10.14569/IJARAI.2014.030904</doi>
        <lastModDate>2014-09-10T07:20:38.1030000+00:00</lastModDate>
        
        <creator>Nguyen Thi Uyen</creator>
        
        <creator>Tran Xuan Sang</creator>
        
        <subject>Vietnamese word segmentation; dynamic programming; mutual information; Vietnamese syllables</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(9), 2014</description>
        <description>Vietnamese word segmentation is an important step in Vietnamese natural language processing tasks such as text categorization, text summarization, and automated machine translation. The problem of Vietnamese word segmentation is complicated because Vietnamese words are not always separated by a space: one word can include one or more syllables depending on the context. This paper proposes a method for Vietnamese word segmentation based on the mutual information among syllables combined with dynamic programming. With this method, we can achieve an accuracy rate of about 90% on a raw text corpus.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No9/Paper_4-Dynamic_Programming_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of Rough Set Algorithms on FPGA: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030903</link>
        <id>10.14569/IJARAI.2014.030903</id>
        <doi>10.14569/IJARAI.2014.030903</doi>
        <lastModDate>2014-09-10T07:20:38.0700000+00:00</lastModDate>
        
        <creator>Kanchan Shailendra Tiwari</creator>
        
        <creator>Ashwin. G. Kothari</creator>
        
        <subject>Rough set theory; Discernibility matrix; reduct; Core; FPGA; classification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(9), 2014</description>
        <description>Rough set theory, developed by Z. Pawlak, is a powerful soft computing tool for extracting meaningful patterns from vague, imprecise, inconsistent, and large chunks of data. It classifies a given knowledge base approximately into suitable decision classes by removing irrelevant and redundant data using an attribute reduction algorithm. Conventional rough set information processing, such as discovering data dependencies, data reduction, and approximate set classification, involves software running on a general-purpose processor. Over the last decade, researchers have started exploring the feasibility of these algorithms on FPGAs. Algorithms implemented on a conventional processor using standard software routines offer high flexibility, but their performance deteriorates when handling larger real-time databases. With the tremendous growth in FPGAs, a new area of research has emerged: FPGAs offer a promising solution in terms of speed, power, and cost, and researchers have demonstrated the benefits of mapping rough set algorithms onto FPGAs. In this paper, a survey of hardware implementations of rough set algorithms by various researchers is presented.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No9/Paper_3-Design_and_Implementation_of_Rough_Set_Algorithms_on_FPGA_a_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Concurrent Object Oriented  Expert System for Fault Diagnosis in 8085 Microprocessor  Based System Board</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030902</link>
        <id>10.14569/IJARAI.2014.030902</id>
        <doi>10.14569/IJARAI.2014.030902</doi>
        <lastModDate>2014-09-10T07:20:38.0570000+00:00</lastModDate>
        
        <creator>Mr.D. V. Kodavade</creator>
        
        <creator>Dr. Mrs.S.D.Apte</creator>
        
        <subject>Expert Systems; fuzzy; Inference; Knowledge base</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(9), 2014</description>
        <description>With the acceptance of the artificial intelligence paradigm, a number of successful artificial intelligence systems have been created. Fault diagnosis in microprocessor-based boards needs a great deal of empirical knowledge and expertise and is a true artificial intelligence problem. Research on fault diagnosis in microprocessor-based system boards using a new fuzzy object-oriented approach is presented in this paper. Many uncertain situations are observed during fault diagnosis; these were handled using properties of fuzzy mathematics. The fuzzy inference mechanism is demonstrated using a case study. Some typical faults in the 8085 microprocessor board and the diagnostic procedures used are also presented.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No9/Paper_2-Fuzzy_Concurrent_Object_Oriented_Expert_System_for_Fault_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Inference Mechanism Framework for Association Rule Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030901</link>
        <id>10.14569/IJARAI.2014.030901</id>
        <doi>10.14569/IJARAI.2014.030901</doi>
        <lastModDate>2014-09-10T07:20:37.9930000+00:00</lastModDate>
        
        <creator>Kapil Chaturvedi</creator>
        
        <creator>Dr. Ravindra Patel</creator>
        
        <creator>Dr. D.K. Swami</creator>
        
        <subject>Inference rules; ARM; Knowledgebase; Expert System</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(9), 2014</description>
        <description>Available approaches for Association Rule Mining (ARM) generate a large number of association rules. These rules may be trivial and redundant, and they are difficult for users to manage and understand. Considering their complexity, they consume a lot of time and memory, and sometimes decision making based on such association rules is impossible. An inference approach is required to resolve this kind of problem and to produce interesting knowledge for the user. In this paper, we present an inference mechanism framework for ARM that is capable of resolving such problems; it can also predict future possibilities using a Markov predictor by analyzing the available facts and inference rules.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No9/Paper_1-An_Inference_Mechanism_Framework_for_Association_Rule_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Term Specificity Information for Assessing Sentiment Orientation of Documents in a Bayesian Learning Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050829</link>
        <id>10.14569/IJACSA.2014.050829</id>
        <doi>10.14569/IJACSA.2014.050829</doi>
        <lastModDate>2014-08-31T07:41:52.4470000+00:00</lastModDate>
        
        <creator>D. Cai</creator>
        
        <subject>term specificity information; specificity measure; naive Bayes classifier; sentiment classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The assessment of document sentiment orientation using term specificity information is advocated in this study. An interpretation of the mathematical meaning of term specificity information is given based on Shannon’s entropy. A general form of a specificity measure is introduced in terms of the interpretation. Sentiment classification using the specificity measures is proposed within a Bayesian learning framework, and some potential problems are clarified and solutions suggested when the specificity measures are applied to the estimation of posterior probabilities for the NB classifier. A novel method is proposed which allows each document to have multiple representations, each of which corresponds to a sentiment class. Our experimental results show that, while both the proposed method and IR techniques can produce high performance for sentiment classification, our method outperforms the IR techniques.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_29-Measuring_Term_Specificity_Information_for.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>cFireworks: a Tool for Measuring the Communication Costs in Collective I/O</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050828</link>
        <id>10.14569/IJACSA.2014.050828</id>
        <doi>10.14569/IJACSA.2014.050828</doi>
        <lastModDate>2014-08-31T07:41:52.4300000+00:00</lastModDate>
        
        <creator>Kwangho Cha</creator>
        
        <subject>Collective I/O; Communication Costs; Parallel Computing; Parallel I/O</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Nowadays, many HPC systems use multi-core systems as computational nodes. Predicting the communication performance of multi-core cluster systems is a complicated job, but it is important for using multi-core systems efficiently. In a previous study, we introduced simple linear regression models for predicting the communication costs in collective I/O. Because these models require the communication characteristics of the given system, we designed cFireworks, an MPI application that measures the communication costs of HPC systems. In this paper, we explain the detailed concept and experimental results of cFireworks. The performance evaluation showed that the communication costs predicted by the linear regression models generated from the output of cFireworks are accurate enough to be useful.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_28-cFireworks_a_Tool_for_Measuring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multilabel Learning for Automatic Web Services Tagging</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050827</link>
        <id>10.14569/IJACSA.2014.050827</id>
        <doi>10.14569/IJACSA.2014.050827</doi>
        <lastModDate>2014-08-31T07:41:52.4000000+00:00</lastModDate>
        
        <creator>Mustapha AZNAG</creator>
        
        <creator>Mohamed QUAFAFOU</creator>
        
        <creator>Zahi JARIR</creator>
        
        <subject>Web services, Tags, Automatic, Recommendation, Machine Learning, Topic Models.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Recently, some web services portals and search engines, such as Biocatalogue and Seekda!, have allowed users to manually annotate Web services using tags. User tags provide meaningful descriptions of services and allow users to index and organize their contents. The tagging technique is widely used to annotate objects in Web 2.0 applications. In this paper we propose a novel probabilistic topic model (which extends the CorrLDA model, Correspondence Latent Dirichlet Allocation) to automatically tag web services according to existing manual tags. Our probabilistic topic model is a latent variable model that exploits local label correlations. Indeed, exploiting label correlations is a challenging and crucial problem, especially in the multi-label learning context. Moreover, several existing systems can recommend tags for web services based on existing manual tags, which in most cases have better quality. We also develop three strategies to automatically recommend the best tags for web services. In addition, we propose WS-Portal, an enriched Web services search engine containing 7063 providers, 115 sub-classes of categories, and 22236 web services crawled from the Internet. In WS-Portal, several technologies are employed to improve the effectiveness of web service discovery (i.e., web services clustering, tag recommendation, and service rating and monitoring). Our experiments are performed on real-world web services. Comparisons of Precision@n and Normalised Discounted Cumulative Gain (NDCGn) values indicate that the method presented in this paper outperforms the CorrLDA-based method in terms of ranking and the quality of generated tags.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_27-Multilabel_Learning_for_Automatic_Web_Services_Tagging.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reconsideration of Potential Problems of Applying EMIM for Text Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050826</link>
        <id>10.14569/IJACSA.2014.050826</id>
        <doi>10.14569/IJACSA.2014.050826</doi>
        <lastModDate>2014-08-31T07:41:52.3830000+00:00</lastModDate>
        
        <creator>D. Cai</creator>
        
        <subject>text analysis; term dependence; term co-occurrence; the expected mutual information measure (EMIM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>It seems that the term dependence methods developed using the expected mutual information measure (EMIM) have not achieved their potential in many areas of science involving statistical text analysis or document processing. This study examines the reasons for this failure and highlights potential problems of application. Several interesting questions arise, including: does a term provide any information if it occurs in all the sample documents? How does the mutual information of two terms, under their status values, contribute to EMIM? Are two terms highly dependent in their co-occurrence if they receive a high positive EMIM value? What is implied about the dependence of two term pairs when they receive the same EMIM value? How can one properly verify that two terms are highly dependent in their co-occurrence? How can EMIM be properly applied? Does the size of the sample set matter? This study attempts to answer these questions in order to clarify confusion caused by the problems and/or suggest solutions to them. Some interesting examples are provided to clarify our viewpoints.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_26-Reconsideration_of_Potential_Problems_of_Applying.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Activity Based Learning Kits for Children in a Disadvantaged Community According to the Project “Vocational Teachers Teach Children to Create Virtuous Robots from Garbage”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050825</link>
        <id>10.14569/IJACSA.2014.050825</id>
        <doi>10.14569/IJACSA.2014.050825</doi>
        <lastModDate>2014-08-31T07:41:52.3670000+00:00</lastModDate>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Pornpapatsorn Princhankol</creator>
        
        <creator>Thanakarn Khumphai</creator>
        
        <creator>Vitsanu Sudsangket</creator>
        
        <subject>Activity Based Learning Kits; Children in a Disadvantaged Community; Virtuous Robots; Garbage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This research was aimed to develop and evaluate activity based learning kits for children in a disadvantaged community according to the project “Vocational Teachers Teach Children to Create Virtuous Robots from Garbage”, to examine the learning achievement, to measure the satisfaction and to carry out an authentic assessment of the children as regards the learning kits. The researchers chose the sampling group purposively from children aged 4-14 years in the community under bridge zone 1 who could participate in the summer activity in the academic year 2013. The sampling group consisted of 40 children. Statistical tools in this research included mean and standard deviation. The results showed that the content quality was good (x&#175; = 4.40, S.D. = 0.55) and the presentation quality was good (x&#175; = 4.46, S.D. = 0.68). After learning with the learning kits, the children in the disadvantaged community achieved higher post-test than pre-test scores, with statistical significance at the .05 level. The children expressed the highest level of satisfaction towards the learning kits (x&#175; = 4.57, S.D. = 0.58). The authentic assessment of the children as regards the learning kits was at a good level (x&#175; = 4.43, S.D. = 0.51), and this complied with the hypotheses. Therefore, the activity based learning kits are useful and could be used in other nearby communities.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_25-Activity_Based_Learning_Kits_for_Children.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local and Semi-Global Feature-Correlative Techniques for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050824</link>
        <id>10.14569/IJACSA.2014.050824</id>
        <doi>10.14569/IJACSA.2014.050824</doi>
        <lastModDate>2014-08-31T07:41:52.3670000+00:00</lastModDate>
        
        <creator>Asaad Noori Hashim</creator>
        
        <creator>Zahir M. Hussain</creator>
        
        <subject>Zernike Moments; Face Recognition; Structural Similarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Face recognition is an interesting field of computer vision with many commercial and scientific applications. It is considered a very hot topic and a challenging problem at the moment. Many methods and techniques have been proposed and applied for this purpose, such as neural networks, PCA, Gabor filtering, etc. Each approach has its weaknesses as well as its points of strength. This paper introduces a highly efficient method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in different views (poses) of facial images. Feature extraction techniques are applied to the images (faces) based on Zernike moments and the structural similarity measure (SSIM) with local and semi-global blocks. Pre-processing is carried out whenever needed, and a number of measurements are derived. More specifically, instead of the usual approach of applying statistical or structural methods only, the proposed methodology integrates higher-order representation patterns extracted by Zernike moments with a modified version of SSIM (M-SSIM). Individual measurements and metrics resulting from the mixed SSIM and Zernike-based approaches give a powerful recognition tool with great results. Experiments reveal that correlative Zernike vectors give a better discriminant compared with using 2D correlation of the image itself. The recognition rate using the ORL Database of Faces reaches 98.75%, while using the FEI (Brazilian) Face Database we obtained 96.57%. The proposed approach is robust against rotation and noise.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_24-Local_and_Semi-Global_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving TCP Throughput Using Modified Packet Reordering Technique (MPRT) Over Manets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050823</link>
        <id>10.14569/IJACSA.2014.050823</id>
        <doi>10.14569/IJACSA.2014.050823</doi>
        <lastModDate>2014-08-31T07:41:52.3370000+00:00</lastModDate>
        
        <creator>Prakash B. Khelage</creator>
        
        <creator>Dr. Uttam D. Kolekar</creator>
        
        <subject>TCP; MANET; RTT; RTO; Congestion; Network validation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>At the beginning of the development of network technology, TCP transport agents were designed under the assumption that communication takes place over wired networks; recently, however, there has been huge demand for and use of wireless networks for communication. The TCP variants which are successful in wired networks can neither detect the exact causes of packet losses nor avoid unnecessary transmission delays over wireless networks. The biggest challenge over MANETs is the design of a robust and reliable TCP variant which gives the best performance in different network scenarios. To date, dozens of TCP variants have been designed and modified by the research and scientific communities, yet TCP performance has to remain optimal in different scenarios, such as congestion, link failure, signal loss and interference, as well as over rod, grid and bulk network models. Some TCP variants perform well in particular network scenarios but degrade in others. The objective of this research work is to modify the packet reordering technique based TCP variant, implement it, and compare its performance with other variants. Validation of the basic and main network models was done using the network simulator (NS2), and throughput, delay and packet drops were calculated by processing trace files. The simulation results show that the proposed technique performs outstandingly in almost all network scenarios, with minimum packet losses and minimum delay.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_23-Improving_TCP_Throughput_Using_Modified_Packet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Information Communication Technology Adoption in Higher Education Sector of Botswana: a Case of Botho University</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050822</link>
        <id>10.14569/IJACSA.2014.050822</id>
        <doi>10.14569/IJACSA.2014.050822</doi>
        <lastModDate>2014-08-31T07:41:52.3200000+00:00</lastModDate>
        
        <creator>Clifford Matsoga Lekopanye</creator>
        
        <creator>Alpheus Mogwe</creator>
        
        <subject>ICT; Adoption; e-learning; Education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Opportunities, benefits and achievements are emerging for institutions, lecturers and learners from the increasing availability of Information Communication Technologies (ICT). These factors are especially relevant for new and growing higher educational institutions (HEI) whose survival depends on, among other factors, the use of ICT to develop new organizational models, enhance their internal and external communication relationships and produce quality graduates.
Given the relevance of the topic, the researchers studied the positive impact of the adoption of ICT by higher educational institutions in an attempt to justify the use of ICT. In this paper, adoption refers to institutions migrating from traditional modes of paper-based school management and student engagement to a computerized environment. This shift is hoped to enhance academic development and flexibility, increase the level of student engagement, enhance cost-effectiveness, and create a sustainable environment through interactive learning resources.
Although the study was conducted at a single institution (i.e. Botho University), it restricts its focus exclusively to the educational motivations for institutions to adopt ICT. In order to ascertain the current state of knowledge, an extensive review, analysis and synthesis of the collected data and literature have been undertaken. The authors conclude the paper by identifying and examining potential benefits and achievements of institutions in adopting ICT.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_22-Information_Communication_Technology_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost-Effective Smart Metering System for the Power Consumption Analysis of Household</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050821</link>
        <id>10.14569/IJACSA.2014.050821</id>
        <doi>10.14569/IJACSA.2014.050821</doi>
        <lastModDate>2014-08-31T07:41:52.3070000+00:00</lastModDate>
        
        <creator>Michal Kovalc&#237;k</creator>
        
        <creator>Peter Fecilak</creator>
        
        <creator>František Jakab</creator>
        
        <creator>Jozef Dudiak</creator>
        
        <creator>Michal Kolcun</creator>
        
        <subject>electrical voltage; electrical current; active power; measurement principles; intelligent metering system; smart meter; Atmel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This paper deals with the design, calibration, experimental implementation and validation of a cost-effective smart metering system. The goal was to analyse the power consumption of a household with immediate availability of the measured information, utilizing modern networked and mobile technologies. The paper outlines essential principles of the measurement process and documents theoretical and practical aspects which are important for the construction of such a smart meter; finally, results from the experimental implementation have been evaluated and validated.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_21-Cost-Effective_Smart_Metering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Deletion of Data from SSD</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050820</link>
        <id>10.14569/IJACSA.2014.050820</id>
        <doi>10.14569/IJACSA.2014.050820</doi>
        <lastModDate>2014-08-31T07:41:52.2730000+00:00</lastModDate>
        
        <creator>Akli Fundo</creator>
        
        <creator>Aitenka Hysi</creator>
        
        <creator>Igli Tafa</creator>
        
        <subject>deletion-data; SSD; FTL layer; logic address; physical address</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The deletion of data from storage is an important component of data security. The deletion of an entire disc or of specific files is well understood on hard drives, but it is quite different on SSDs, because they have a different internal architecture, and the main question is whether the same deletion and erasing methods used for hard drives apply to them. Built-in operations are used to do this on SSDs. The purpose of this review is to analyse some methods proposed to erase data from SSDs, together with their results, to see which of them offers the best choice. In general, we will see that the techniques for erasing an entire disc on hard drives can also be used on SSDs, but there is a problem with bugs. On the other hand, we cannot use the same techniques for erasing a single file on both hard drives and SSDs. To make this possible, changes are required in the FTL layer, which is responsible for the mapping between logical addresses and physical addresses.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_20-Secure_Deletion_of_Data_from_SSD.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discovering a Secure Path in MANET by Avoiding Black Hole Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050819</link>
        <id>10.14569/IJACSA.2014.050819</id>
        <doi>10.14569/IJACSA.2014.050819</doi>
        <lastModDate>2014-08-31T07:41:52.2600000+00:00</lastModDate>
        
        <creator>Hicham Zougagh</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Y. Elmourabit</creator>
        
        <creator>Noureddine Idboufker</creator>
        
        <subject>MANET; OLSR; Security; Routing Protocol; Cooperative black hole attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>In a mobile ad hoc network (MANET), a source node must rely on intermediate nodes to forward its packets along multi-hop routes to the destination node. Due to the lack of infrastructure in such networks, secure and reliable packet delivery is challenging. The performance of a MANET is closely related to the capability of the implemented routing protocol to adapt itself to unpredictable changes of network topology and link status. One such routing protocol is OLSR [1] (Optimized Link State Routing Protocol), which assumes that all nodes are trusted. However, in hostile environments, OLSR is known to be vulnerable to various kinds of malicious attacks. This paper proposes a cooperative black hole attack against MANETs exploiting vulnerabilities of OLSR. In this attack, two attacking nodes cooperate in order to disrupt topology discovery and prevent routes to a target node from being established in the network.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_19-Discovering_A_Secure_Path_in_MANET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regression Testing Cost Reduction Suite</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050818</link>
        <id>10.14569/IJACSA.2014.050818</id>
        <doi>10.14569/IJACSA.2014.050818</doi>
        <lastModDate>2014-08-31T07:41:52.2430000+00:00</lastModDate>
        
        <creator>Mohamed Alaa El-Din</creator>
        
        <creator>Ismail Abd El-Hamid Taha</creator>
        
        <creator>Hesham El-Deeb</creator>
        
        <subject>Software maintenance cost; reduced test suite; reduced regression test suite; regression testing cost reduction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The estimated cost of software maintenance exceeds 70 percent of total software costs [1], and a large portion of this maintenance expense is devoted to regression testing. Regression testing is an expensive and frequently executed maintenance activity used to revalidate modified software. Any reduction in the cost of regression testing would help to reduce the software maintenance cost.
Test suites, once developed, are reused and updated frequently as the software evolves. As a result, some test cases in the test suite may become redundant when the software is modified over time, since the requirements covered by them are also covered by other test cases.
Due to the resource and time constraints of re-executing large test suites, it is important to develop techniques to minimize available test suites by removing redundant test cases. In general, the test suite minimization problem is NP-complete. This paper focuses on proposing an effective approach for reducing the cost of the regression testing process. The proposed approach is applied to a real-time case study. It was found that the reduction in the cost of regression testing for each regression testing cycle is highly improved in the case of programs containing a high number of selected statements, which in turn maximizes the benefits of using the approach in regression testing of complex software systems. The reduction in regression test suite size will reduce the effort and time required by testing teams to execute the regression test suite. Since regression testing is done frequently in the software maintenance phase, the overall software maintenance cost can be reduced considerably by applying the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_18-Regression_Testing_Cost_Reduction_Suite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Social Learners’ Profiles in a Distance Learning System Powered by a Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050817</link>
        <id>10.14569/IJACSA.2014.050817</id>
        <doi>10.14569/IJACSA.2014.050817</doi>
        <lastModDate>2014-08-31T07:41:52.2130000+00:00</lastModDate>
        
        <creator>HROR Naoual</creator>
        
        <creator>OUMAIRA Ilham</creator>
        
        <creator>MESSOUSSI Rochdi</creator>
        
        <subject>Distance learning system; Social profile; Traces; fuzzy; social network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This work addresses the general problem of distance learning systems, and more particularly the monitoring and positioning of learners' social profiles in a distance learning system powered by a social network. In this article, we propose a multi-entry model to determine the learner's profile and position his social type. This model exploits the different traces generated by a learner on the platform and produces indicators on his profile.
Our approach leverages tools from fuzzy set theory and from modal and temporal logic. The motivation of this research is to create a tool which helps the tutor to better observe and follow the actions of learners on the learning platform, and to anticipate potential discouragement or abandonment by the learner.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_17-Social_Learners_Profiles_in_a_Distance_Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating Usability of E-Learning Systems in Universities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050815</link>
        <id>10.14569/IJACSA.2014.050815</id>
        <doi>10.14569/IJACSA.2014.050815</doi>
        <lastModDate>2014-08-31T07:41:52.1800000+00:00</lastModDate>
        
        <creator>Nicholas Kipkurui Kiget</creator>
        
        <creator>Professor G. Wanyembi</creator>
        
        <creator>Anselemo Ikoha Peters</creator>
        
        <subject>e-learning; moodle; usability; learnability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The use of e-learning systems has increased significantly in recent times. E-learning systems are supplementing teaching and learning in universities globally. Kenyan universities have adopted e-learning technologies as a means of delivering course content. However, despite the adoption of these systems, there are considerable challenges facing their usability. Lecturers and students have different perceptions regarding the usability of e-learning systems. The aim of this study was to evaluate usability attributes that affect e-learning systems in Kenyan universities. The study had twofold objectives: determining the status of e-learning platforms and evaluating usability issues affecting e-learning adoption in Kenyan universities. The research took as a case study one of the public universities which has implemented the Moodle e-learning system. The usability attributes evaluated were user-friendliness, learnability, technological infrastructure and policy. The research made recommendations which could help universities accelerate the adoption of e-learning systems.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_15-Evaluating_Usability_of_E-Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent Based Personalized Semantic Web Information Retrieval System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050816</link>
        <id>10.14569/IJACSA.2014.050816</id>
        <doi>10.14569/IJACSA.2014.050816</doi>
        <lastModDate>2014-08-31T07:41:52.1800000+00:00</lastModDate>
        
        <creator>Dr.M. Thangaraj</creator>
        
        <creator>Mrs.Mchamundeeswari</creator>
        
        <subject>Agent; Personalization; Semantic web; information retrieval; ranking algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Every user has an individual background and a precise goal when searching for information. The goal of personalized search is to tailor search results to a particular user based on the user's interests and preferences. Effective personalization of information access involves two important challenges: accurately identifying the user context and organizing the information to match that particular context. In this paper, the system uses an ontology as the knowledge base for the information retrieval process. It operates one layer above search engines, which retrieve results by analyzing just the keywords. Here, the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user's query. The level of accuracy is enhanced since the query is analyzed semantically. The results are re-ranked and optimized to provide the relevant links. Based on the user's information access behavior, an ontological profile is created, which is also used for personalization. If the system is deployed for web information gathering, search performance can be improved and accurate results can be retrieved.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_16-Agent_Based_Personalized_Semantc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Online Monitoring System Design of Intelligent Circuit Breaker Based on DSP and ARM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050814</link>
        <id>10.14569/IJACSA.2014.050814</id>
        <doi>10.14569/IJACSA.2014.050814</doi>
        <lastModDate>2014-08-31T07:41:52.1500000+00:00</lastModDate>
        
        <creator>Meng Song</creator>
        
        <creator>Liping Zhang</creator>
        
        <creator>Yuchen Chen</creator>
        
        <creator>Weijin Zheng</creator>
        
        <subject>circuit breaker; online monitoring; ARM; DSP</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>In order to accurately analyze the dynamic characteristics of the vacuum circuit breaker, a dual-core master-slave processor structure for an online monitoring system based on DSP and ARM is proposed. This structure consists of a host computer, a lower computer and signal processing modules. The lower computer uses a DSP as its core and completes the acquisition and preprocessing of the circuit breaker's dynamic characteristics data through sensors and signal conditioning circuits. The host computer uses an ARM as its core and is responsible for task management and for analyzing, processing and displaying the collected data via Ethernet. The communication between the DSP and the ARM is conducted via HPI. This design improves the reliability of the intelligent control unit for the circuit breaker. The experiment showed that this system works stably and accurately.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_14-Online_Monitoring_System_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Planning in a Dynamic Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050813</link>
        <id>10.14569/IJACSA.2014.050813</id>
        <doi>10.14569/IJACSA.2014.050813</doi>
        <lastModDate>2014-08-31T07:41:52.1330000+00:00</lastModDate>
        
        <creator>Mohamed EL KHAILI</creator>
        
        <subject>component; path planning; navigation; robotics; visibility graph; obstacles contours; moving obstacles; space-time representation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Path planning is an important area in the control of autonomous mobile robots. Recent work has focused more on reducing processing time than on memory requirements. A dynamic environment uses a lot of memory, and hence the processing time increases too. Our approach is to reduce the processing time by using a pictorial approach that reduces the amount of data used. In this paper, we present a path planning approach that operates in three steps. First, a visibility tree is constructed. The following treatments are performed not on the original image but on the resulting tree, whose elements are specific points of the environment linked by the visibility relationship. We thereafter construct a visibility graph in which the shortest path is sought. This approach is of great interest because of its fast execution speed. The path search is also extended to the case where obstacles can move. The moving obstacles may be other mobile robots whose trajectories and speeds are known initially. Finally, some applications to similar problems are provided, such as guiding planes in civil aviation to avoid collisions.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_13-Path_Planning_in_a_Dynamic_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent Architecture for Implementation of ITIL Processes: Case of Incident Management Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050812</link>
        <id>10.14569/IJACSA.2014.050812</id>
        <doi>10.14569/IJACSA.2014.050812</doi>
        <lastModDate>2014-08-31T07:41:52.1030000+00:00</lastModDate>
        
        <creator>Youssef SEKHARA</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <creator>Adil SAYOUTI</creator>
        
        <subject>ITIL; Process; Multi-agent system; Incident Management Process</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>ITIL (Information Technology Infrastructure Library) is the most widely accepted approach to IT service management in the world. Upon adopting ITIL processes, organizations face many challenges that can lead to increased complexity. In this paper we use the advantages of agent technology to make the implementation and use of ITIL processes more efficient, starting with the incident management process.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_12-Multi-Agent_Architecture_for_Implementation_of_ITIL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Based Lna Design for Mobile Satellite Receiver</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050811</link>
        <id>10.14569/IJACSA.2014.050811</id>
        <doi>10.14569/IJACSA.2014.050811</doi>
        <lastModDate>2014-08-31T07:41:52.0730000+00:00</lastModDate>
        
        <creator>Abhijeet Upadhya</creator>
        
        <creator>Prof. P. K. Chopra</creator>
        
        <subject>Satellite; Mobile; Artificial Neural Networks; Scattering Parameters; Noise Figure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This paper presents a neural network modelling approach to microwave LNA design. To establish the amplifier's specifications, mobile satellite systems are analyzed. Scattering parameters of the LNA in the frequency range 0.5 to 18 GHz are calculated using a multilayer perceptron artificial neural network model, and the corresponding Smith charts and polar charts are plotted as the model's output. From these plots, the microwave scattering parameter description of the LNA is obtained. The model is efficiently trained using the Agilent ATF 331M4 InGaAs/InP low-noise pHEMT amplifier datasheet, and the neural model's output follows the various device characteristic curves with high regression. Next, the maximum allowable gain and noise figure of the device are modelled and plotted for the same frequency range. Finally, the optimized model is utilized as an interpolator, and the amplifying capability and noise characteristics are resolved for the L band of MSS operation.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_11-Neural_Network_Based_Lna_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Role of Knowledge Reusability in Technological Environment During Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050810</link>
        <id>10.14569/IJACSA.2014.050810</id>
        <doi>10.14569/IJACSA.2014.050810</doi>
        <lastModDate>2014-08-31T07:41:52.0400000+00:00</lastModDate>
        
        <creator>O. K. Harsh</creator>
        
        <subject>Knowledge; Reuse; Tacit Knowledge; Explicit Knowledge; Technology; ADRI model; Quality</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The role of technology and reusability in knowledge management and knowledge transformation has been analyzed by considering the extended model of Nonaka and Takeuchi, which includes knowledge reuse in a three-dimensional environment. Knowledge transformation has been further refined (and boosted) to obtain more qualitative and quantitative knowledge by applying the concepts of knowledge reification, indexing, and adaptation. By extending these concepts and related processes, the ADRI quality model for higher-education learning has been analyzed. The present work suggests that reusability, along with the above-mentioned concepts, can boost qualitative knowledge during the ADRI cycle in a higher-education setting, and it is observed that the ADRI model shows trends similar to those of the Nonaka model in three-dimensional environments. In addition, it is argued that the best practices required in a higher-education setting correspond to the ADRI model.
It has been suggested that time, along with reusability, supports tacit as well as explicit knowledge management during learning. The knowledge transformation achieved this way is more qualitative. Finally, it can be concluded that reusing tacit and explicit knowledge is nowadays an important aspect of managing higher-education knowledge in a fast-growing contemporary environment, provided the knowledge is exploited appropriately.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_10-Role_of_Knowledge_Reusability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Efficient Method for Calculating Similarity Between Web Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050809</link>
        <id>10.14569/IJACSA.2014.050809</id>
        <doi>10.14569/IJACSA.2014.050809</doi>
        <lastModDate>2014-08-31T07:41:51.9770000+00:00</lastModDate>
        
        <creator>T. RACHAD</creator>
        
        <creator>J.Boutahar</creator>
        
        <creator>S.El ghazi</creator>
        
        <subject>web service; semantic similarity; syntactic similarity; WordNet; word sense disambiguation; Hausdorff distance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Web services allow communication between heterogeneous systems in a distributed environment. Their enormous success and increased use have led to thousands of Web services being present on the Internet. This significant and ever-increasing number of Web services has made locating and classifying them difficult, problems encountered mainly during Web service discovery and substitution.
Traditional keyword-based search is not successful in this context: its results do not account for the structure of Web services, and it considers only the identifiers of the Web Service Description Language (WSDL) interface elements.
Methods based on semantics (WSDLS, OWLS, SAWSDL…), which augment the WSDL description of a Web service with a semantic description, partially address this problem, but their complexity and difficulty delay their adoption in real cases.
Measuring the similarity between Web service interfaces is the most suitable solution for this kind of problem: it classifies the available Web services so as to identify those that best match the searched profile and those that do not. Thus, the main goal of this work is to study the degree of similarity between any two Web services by offering a new method that is more effective than existing work.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_9-A_New_Efficient_Method_for_Calculating_Similarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Accurate Feature Selection Based on BSS-GRF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050808</link>
        <id>10.14569/IJACSA.2014.050808</id>
        <doi>10.14569/IJACSA.2014.050808</doi>
        <lastModDate>2014-08-31T07:41:51.9470000+00:00</lastModDate>
        
        <creator>S. M. Elseuofi</creator>
        
        <creator>Samy Abd El -Hafeez</creator>
        
        <creator>Wael Awad</creator>
        
        <creator>R. M. El-Awady</creator>
        
        <subject>rough set; Genetic; blind source separation; E-mail Filtering; Machine Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Feature extraction plays a vital role in e-mail classification, yet many feature extraction algorithms fall short in terms of accuracy. To improve classifier accuracy and speed up classification, a hybrid algorithm is proposed that combines a genetic rough set with a blind source separation approach (BSS-GRF). The main aim of this hybrid algorithm is to improve classifier accuracy when classifying incoming e-mails.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_8-Toward_accurate_Feature_selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Duck Diseases Expert System with Applying Alliance Method at Bali Provincial Livestock Office</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050807</link>
        <id>10.14569/IJACSA.2014.050807</id>
        <doi>10.14569/IJACSA.2014.050807</doi>
        <lastModDate>2014-08-31T07:41:51.9170000+00:00</lastModDate>
        
        <creator>Dewa Gede Hendra Divayana</creator>
        
        <subject>Expert System; Forward Chaining; Backward Chaining; Weighted Product; Alliance Method; Duck Diseases</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>Farming offers a number of business opportunities, one of which is raising ducks. The main products of duck breeding are meat and eggs, used both for consumption and in praying ceremonies in Bali, as well as duck egg shells that can be used for jewelry. Since the avian influenza outbreak began in 2008, consumer demand for duck has decreased, and consumers have become more careful in choosing and consuming duck. The avian influenza virus has spread not only across China, Thailand and Vietnam, but also in Indonesia, and Bali is no exception. This is evidenced by the discovery of deaths due to the bird flu virus in several areas of Bali, among others the regencies of Karangasem, Badung, Tabanan, Klungkung and Jembrana. In response, the Bali Provincial Livestock Office took steps to develop an expert system for detecting duck diseases. This expert system uses an alliance method, a combination of forward chaining, backward chaining and weighted product, to match a duck's physical and behavioral symptoms against known diseases and to determine the percentage attack level of each disease. In this study, the alliance method is the analytical technique used to validate the duck disease expert system. Data and information supporting the research were collected through, among others, literature studies, interviews and observations.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_7-Development_of_Duck_Diseases_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managing Open Educational Resources on the Web of Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050806</link>
        <id>10.14569/IJACSA.2014.050806</id>
        <doi>10.14569/IJACSA.2014.050806</doi>
        <lastModDate>2014-08-31T07:41:51.8830000+00:00</lastModDate>
        
        <creator>Gilbert Paquette</creator>
        
        <creator>Alexis Miara</creator>
        
        <subject>resource management; resource repositories; OER; open resources; MOOCS; RDF; Web of data; Semantic Web; IEEE-LOM; ISO-MLR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>In the last few years, the international work on Massive Open On-line Courses (MOOCS) has underlined new needs for open educational resource (OER) management within the context of the Web of Data. First, within MOOCs, all (or at least most) resources must be open and available on the Web through URIs, including the MOOCs themselves. Second, the evolution of research and practice in the field of OER repositories, notably the focus of international e-learning standards, has recently been moving from OER metadata stored in relational databases towards RDF-based descriptions of resources stored in triple stores. Third, new resource management tools like COMETE provide more intelligent search capabilities within the Web of Data, both for designers who are building MOOCs and for students, who should be equipped with friendly tools to personalize their environment. We present some COMETE use cases to illustrate these new possibilities and advocate for their integration within MOOC platforms.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_6-Managing_Open_Educational_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal Network Design for Consensus Formation: Wisdom of Networked Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050805</link>
        <id>10.14569/IJACSA.2014.050805</id>
        <doi>10.14569/IJACSA.2014.050805</doi>
        <lastModDate>2014-08-31T07:41:51.8700000+00:00</lastModDate>
        
        <creator>Eugene S. Kitamura</creator>
        
        <creator>Akira Namatame</creator>
        
        <subject>wisdom of crowds; consensus problem; Laplacian matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>The wisdom of crowds refers to the phenomenon in which the collective knowledge of a community is greater than the knowledge of any individual. This paper proposes a network design for the fastest and slowest consensus formation under average node degree restrictions, which is one aspect of the wisdom-of-crowds concept. Consensus and synchronization problems are closely related to a variety of issues, such as collective behavior in nature, the interaction among agents in robot control, and the building of efficient wireless sensor networks. However, designing networks with desirable properties is complex and may pose a multi-constraint, multi-criterion optimization problem. To realize such efficient network topologies, this paper presents an optimization approach to designing networks for better consensus formation by focusing on the eigenvalue spectrum of the Laplacian matrix. In both the fastest and slowest networks presented, consensus is formed among local structures first, then on a global scale. This suggests that both local and global topology influence network dynamics. These findings are useful for those who seek to manage efficient consensus and synchronization in a setting that can be modeled as a multi-agent system.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_5-Optimal_Network_Design_for_Consensus_Formation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>WOLF: a Research Platform to Write NFC Secure Applications on Top of Multiple Secure Elements (With an Original SQL-Like Interface)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050804</link>
        <id>10.14569/IJACSA.2014.050804</id>
        <doi>10.14569/IJACSA.2014.050804</doi>
        <lastModDate>2014-08-31T07:41:51.8370000+00:00</lastModDate>
        
        <creator>Anne Marie Lesas</creator>
        
        <creator>Benjamin Renaut</creator>
        
        <creator>Pr. Serge Miranda</creator>
        
        <creator>Amosse Edouard</creator>
        
        <subject>Mobiquitous services; Near Field Communication (NFC); Secure Element (SE); Smart card; Structured (English as a) Query Language (SQL); Digital Wallet; TSM / OTA; UICC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This article presents the WOLF (Wallet Open Library Framework) platform, which supports an original interface for NFC developers called “SE-QL”. SE-QL is a SQL-like interface that eases and optimizes NFC secure application development by making the heterogeneity of the Secure Element (SE) transparent. An SE implementation could be “embedded” (eSE) in the mobile device, inside the SIM card (UICC), “on-host” software-based, or in the cloud (e.g. through HCE); every SE implementation has its own interface(s), making NFC secure-application development extremely cumbersome and complex. The proposed SE-QL solves this problem. This article demonstrates the feasibility and attractiveness of our approach based upon an original high-level API.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_4-WOLF_a_research_Platform_to_write_NFC.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact and Challenges of Cloud Computing Adoption on Public Universities in Southwestern Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050803</link>
        <id>10.14569/IJACSA.2014.050803</id>
        <doi>10.14569/IJACSA.2014.050803</doi>
        <lastModDate>2014-08-31T07:41:51.8070000+00:00</lastModDate>
        
        <creator>Oyeleye Christopher Akin</creator>
        
        <creator>Fagbola Temitayo Matthew</creator>
        
        <creator>Daramola Comfort Y.</creator>
        
        <subject>cloud computing; cloud adoption; information-communication-technology; public-universities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This study investigates the impact and challenges of the adoption of cloud computing by public universities in the southwestern part of Nigeria. A sample of 100 IT staff, 50 para-IT staff and 50 students was selected in each university using stratified sampling techniques with the aid of well-structured questionnaires. Microsoft Excel was used to capture the data, while frequency and percentage distributions were used to analyze it. In all, 2,000 copies of the questionnaire were administered to the ten (10) public universities in the southwestern part of Nigeria, and 1,742 copies were returned, representing a response rate of 87.1%. The findings revealed that the adoption of cloud computing has a significant impact on cost effectiveness, enhanced availability, low environmental impact, reduced IT complexity, mobility, scalability, increased operability and reduced investment in physical assets. However, the major challenges confronting cloud adoption are data insecurity, regulatory compliance concerns, lock-in and privacy concerns. This paper concludes by recommending strategies to manage the identified challenges in the study area.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_3-The_Impact_and_Challenges_of_Cloud_Computing_Adoption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Similarity Calculation of Chinese Word</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050802</link>
        <id>10.14569/IJACSA.2014.050802</id>
        <doi>10.14569/IJACSA.2014.050802</doi>
        <lastModDate>2014-08-31T07:41:51.7770000+00:00</lastModDate>
        
        <creator>Liqiang Pan</creator>
        
        <creator>Pu Zhang</creator>
        
        <creator>Anping Xiong</creator>
        
        <subject>semantic similarity; LDA; subject model; HowNet</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This paper puts forward a two-layer computing method to calculate the semantic similarity of Chinese words. First, a Latent Dirichlet Allocation (LDA) topic model is used to generate the topic space. Words are then mapped into the topic space to form topic distributions, which are used to calculate word semantic similarity (the first-layer computation). Finally, the semantic dictionary &quot;HowNet&quot; is used to mine word semantic similarity more deeply (the second-layer computation). This method not only overcomes the lack of specificity of using LDA alone to calculate word semantic similarity, but also addresses problems such as new words (not yet added to the dictionary) and the neglect of specific context when calculating semantic similarity based solely on &quot;HowNet&quot;. Experimental comparison demonstrates the feasibility, availability and advantages of the proposed calculation method.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_2-Semantic_Similarity_Calculation_of_Chinese_Word.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Information Hiding Scheme Based on Pixel-Value-Ordering and Prediction-Error Expansion with Reversibility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050801</link>
        <id>10.14569/IJACSA.2014.050801</id>
        <doi>10.14569/IJACSA.2014.050801</doi>
        <lastModDate>2014-08-31T07:41:51.7130000+00:00</lastModDate>
        
        <creator>Ching Chiuan Lin</creator>
        
        <creator>Shih-Chieh Chen</creator>
        
        <creator>Kuo Feng Hwang</creator>
        
        <subject>Reversible data hiding; Pixel-value-ordering; Prediction-error expansion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(8), 2014</description>
        <description>This paper proposes a data hiding scheme based on pixel-value-ordering and prediction-error expansion. In a natural image, most neighboring pixels have similar pixel values, i.e. the difference between neighboring pixels is small. Based on this observation, we may predict a pixel’s value according to its neighboring pixels. The proposed scheme divides an image into non-overlapping blocks, each of which consists of three pixels, and the pixels in a block are sorted in descending order. Messages are embedded into two difference values: one between the largest and medium pixels, and the other between the smallest and medium pixels. In the embedding process, difference values equal to 0 or greater than 1 are left unchanged or increased by 1, respectively, and those equal to 1 are likewise left unchanged or increased by 1 if the message bit to be embedded is 0 or 1, respectively. By calculating the difference value, one may extract a message bit of 0 or 1 if the value is 1 or 2, respectively. Pixels are recovered by decreasing difference values by 1 if they are equal to or larger than 2. Experimental results demonstrate that the proposed scheme provides much larger embedding capacity than existing studies while maintaining satisfactory image quality.</description>
        <description>http://thesai.org/Downloads/Volume5No8/Paper_1-An_Information_Hiding_Scheme_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Natural Gradient Descent for Training Stochastic Complex-Valued Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050729</link>
        <id>10.14569/IJACSA.2014.050729</id>
        <doi>10.14569/IJACSA.2014.050729</doi>
        <lastModDate>2014-07-31T11:10:10.1430000+00:00</lastModDate>
        
        <creator>Tohru Nitta</creator>
        
        <subject>Neural network; Complex number; Learning; Singular point</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>In this paper, the natural gradient descent method for multilayer stochastic complex-valued neural networks is considered, and the natural gradient is given for a single stochastic complex-valued neuron as an example. Since the space of the learnable parameters of stochastic complex-valued neural networks is not a Euclidean space but a curved manifold, the complex-valued natural gradient method is expected to exhibit excellent learning performance.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_29-Natural_Gradient_Descent_for_Training_Stochastic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating the Number of Test Workers Necessary for a Software Testing Process Using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050728</link>
        <id>10.14569/IJACSA.2014.050728</id>
        <doi>10.14569/IJACSA.2014.050728</doi>
        <lastModDate>2014-07-31T11:10:10.1300000+00:00</lastModDate>
        
        <creator>Alaa F. Sheta</creator>
        
        <creator>Sofian Kassaymeh</creator>
        
        <creator>David Rine</creator>
        
        <subject>Staff Management; Neural Networks; Software Testing; Estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>On-time and within-budget software project development represents a challenge for software project managers. Software management activities include, but are not limited to: estimation of project cost, development of schedules and budgets, meeting user requirements and complying with standards. Recruiting development team members is a complex problem for a software project manager. Since the largest cost in software development is manpower, software project effort and its associated cost estimation models are used to estimate the effort required to complete a project. This effort estimate can then be converted into dollars based on the appropriate labor rates. An initial development team needs to be selected not only at the beginning of the project but also during the development process. It is important to allocate the necessary team to a project and efficiently distribute its effort during the development life cycle. In this paper, we provide our initial idea for a prediction model that estimates the required number of test workers for a software project during the software testing process. The developed models use the test instance and the number of observed faults as inputs. Artificial Neural Networks (ANNs) successfully capture the dynamic relationships between the inputs and output and produce accurate prediction estimates.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_28-Estimating_the_Number_of_Test_Workers_Necessary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A parallel line sieve for the GNFS Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050727</link>
        <id>10.14569/IJACSA.2014.050727</id>
        <doi>10.14569/IJACSA.2014.050727</doi>
        <lastModDate>2014-07-31T11:10:10.1130000+00:00</lastModDate>
        
        <creator>Sameh Daoud</creator>
        
        <creator>Ibrahim Gad</creator>
        
        <subject>parallel Algorithm; Public Key Cryptosystem; GNFS Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>RSA is one of the most important public key cryptosystems for information security. The security of RSA depends on the integer factorization problem: it relies on the difficulty of factoring large integers. Much research has gone into the problem of factoring large numbers. Due to advances in factoring algorithms and in computing hardware, the size of the numbers that can be factorized increases exponentially year by year. The General Number Field Sieve (GNFS) algorithm is currently the best known method for factoring large numbers of more than 110 digits. In this paper, a parallel GNFS implementation on a BA-cluster is presented. The study begins with a discussion of the serial algorithm in general and covers its five steps. It then discusses the parallel algorithm for the sieving step. The experimental results show that the algorithm achieves good speedup and can be used to factor large integers.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_27-A_parallel_line_sieve_for_the_GNFS_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of an Interpreter Using Software Engineering Concepts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050726</link>
        <id>10.14569/IJACSA.2014.050726</id>
        <doi>10.14569/IJACSA.2014.050726</doi>
        <lastModDate>2014-07-31T11:10:10.0970000+00:00</lastModDate>
        
        <creator>Fan Wu</creator>
        
        <creator>Hira Narang</creator>
        
        <creator>Miguel Cabral</creator>
        
        <subject>Interpreter; Software Engineering; Computer Science Curricula</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>In this paper, the design and implementation of an interpreter for a small subset of the C language using software engineering concepts is presented. The paper not only reinforces an argument for the application of software engineering concepts in the area of interpreter design but also focuses on its relevance to undergraduate computer science curricula. The design and development of the interpreter is also important to software engineering, as some of its components form the basis for different engineering tools. This paper also demonstrates that standard software engineering concepts such as object-oriented design, design patterns, and UML diagrams can provide a useful track of the evolution of an interpreter, as well as enhancing confidence in its correctness.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_26-Design_and_Implementation_of_an_Interpreter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying and Extracting Named Entities from Wikipedia Database Using Entity Infoboxes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050725</link>
        <id>10.14569/IJACSA.2014.050725</id>
        <doi>10.14569/IJACSA.2014.050725</doi>
        <lastModDate>2014-07-31T11:10:10.0670000+00:00</lastModDate>
        
        <creator>Muhidin Mohamed</creator>
        
        <creator>Mourad Oussalah</creator>
        
        <subject>named entity identification; Wikipedia infobox; infobox templates; Named Entity Classification (NEC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>An approach for named entity classification based on Wikipedia article infoboxes is described in this paper. It identifies the three fundamental named entity types, namely Person, Location, and Organization. Entity classification is accomplished by matching entity attributes extracted from the relevant entity article's infobox against core entity attributes built from Wikipedia infobox templates. Experimental results showed that the classifier can achieve high accuracy and F-measure scores of 97%. Based on this approach, a database of around 1.6 million 3-typed named entities was created from the 20140203 Wikipedia dump. Experiments on the CoNLL2003 shared task named entity recognition (NER) dataset showed the system's outstanding performance in comparison to three different state-of-the-art systems.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_25-Identifying_and_Extracting_Named_Entities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering of Image Data Using K-Means and Fuzzy K-Means</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050724</link>
        <id>10.14569/IJACSA.2014.050724</id>
        <doi>10.14569/IJACSA.2014.050724</doi>
        <lastModDate>2014-07-31T11:10:10.0330000+00:00</lastModDate>
        
        <creator>Md. Khalid Imam Rahmani</creator>
        
        <creator>Naina Pal</creator>
        
        <creator>Kamiya Arora</creator>
        
        <subject>Clustering; Segmentation; K-Means Clustering; Fuzzy K-Means</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Clustering is a major technique used for grouping numerical and image data in data mining and image processing applications. Clustering makes the job of image retrieval easier by finding images similar to the one given as the query image. The images are grouped together into a given number of clusters on the basis of features such as color, texture, and shape contained in the images in the form of pixels. For efficiency and better results, image data are segmented before applying clustering. The techniques used here are K-Means and Fuzzy K-Means, which are very time-saving and efficient.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_24-Clustering_of_Image_Data_Using_K-Means_and_Fuzzy_K-Means.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An ECN Approach to Congestion Control Mechanisms in Mobile Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050723</link>
        <id>10.14569/IJACSA.2014.050723</id>
        <doi>10.14569/IJACSA.2014.050723</doi>
        <lastModDate>2014-07-31T11:10:10.0200000+00:00</lastModDate>
        
        <creator>Som Kant Tiwari</creator>
        
        <creator>Dr. Y. K. Rana</creator>
        
        <creator>Prof. Anurag Jain</creator>
        
        <subject>Explicit Congestion Notification (ECN); Mobile ad hoc Networks (MANET); Congestion control; Congestion window; Transmission Control Protocol (TCP)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>When the nodes/links of a network are subjected to overloading, network performance deteriorates substantially due to network congestion. Network congestion can be mitigated with the help of the Explicit Congestion Notification (ECN) technique. ECN notification is carried out by setting the ECN bit in the TCP header. This allows for end-to-end notification of network congestion without dropping packets: the ECN bit notifies TCP sources of incipient congestion before losses occur. However, ECN is a binary indicator (1 bit) which does not reflect the congestion level completely, so its convergence speed is relatively low. In our work, we use an extra ECN bit (2-bit ECN). The extra bit allows additional congestion feedback to be passed to the source node, enabling it to determine the level of congestion and take steps to ensure faster convergence. In comparison to single-bit ECN, the additional information afforded by double-bit ECN allows more flexibility in adjusting the window size to handle congestion. Simulation results show that the proposed method improves the overall performance of the network by over 12%.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_23-An_ECN_Approach_to_Congestion_Control_Mechanisms_in_Mobile_Adhoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Crypto-Steganography: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050722</link>
        <id>10.14569/IJACSA.2014.050722</id>
        <doi>10.14569/IJACSA.2014.050722</doi>
        <lastModDate>2014-07-31T11:10:09.9570000+00:00</lastModDate>
        
        <creator>Md. Khalid Imam Rahmani</creator>
        
        <creator>Kamiya Arora</creator>
        
        <creator>Naina Pal</creator>
        
        <subject>Steganography; Image Steganography; Cryptography; Least Significant Bit (LSB); Enhanced Least Significant Bit (ELSB); Compression; Decompression; Advanced Encryption Standard (AES); Data Encryption Standard (DES); Hashing algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>The two important aspects of security that deal with transmitting information or data over a medium like the Internet are steganography and cryptography. Steganography deals with hiding the presence of a message, while cryptography deals with hiding its contents. Both are used to ensure security, but neither alone can fulfill the basic requirements of security, i.e., features such as robustness, undetectability, and capacity. So a new method based on the combination of both, known as crypto-steganography, which overcomes each one's weaknesses and makes it difficult for intruders to attack or steal sensitive information, is proposed. This paper also describes the basic concepts of steganography and cryptography on the basis of the previous literature available on the topic.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_22-A_Crypto-Steganography_A_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Scala Repositories on Github</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050721</link>
        <id>10.14569/IJACSA.2014.050721</id>
        <doi>10.14569/IJACSA.2014.050721</doi>
        <lastModDate>2014-07-31T11:10:09.9400000+00:00</lastModDate>
        
        <creator>Ron Coleman</creator>
        
        <creator>Matthew A. Johnson</creator>
        
        <subject>Functional programming; Scala; GitHub.com</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Functional programming appears to be enjoying a renaissance of interest for developing practical, “real-world” applications. Proponents have long maintained that the functional style is a better way to modularize programs and reduce complexity. What is new in this paper is that we test this claim by studying the complexity of open source codes written in Scala, a modern language that unifies functional and object programming. We downloaded from GitHub, Inc., a portfolio of mostly “trending” Scala repositories that included the Scala compiler and standard library, much of it written in Scala; the Twitter, Inc., server and its support libraries; and many other repositories, several of them production-oriented and commercially inspired. In total we investigated approximately 22,000 source files with 2 million lines of code and 223,000 methods written by hundreds of programmers. To analyze these sources, we developed a novel compiler kit that measures lines of code and adaptively learns to estimate the cyclomatic complexity of functional-object codes. The data show, first, that lines of code and cyclomatic complexity are positively correlated, as we expected, but only weakly, which we did not expect, with Kendall’s τ = 0.258–0.274. Second, 75% of the Scala methods are straight-line, that is, they have the lowest possible cyclomatic complexity. Third, nearly 70% of methods have three or fewer lines. Fourth, the distributions of lines of code and cyclomatic complexity are both non-Gaussian (P&lt;0.01), which is as surprising as it is interesting. These data may offer new insights into software complexity and the large-scale structure of applications including but not necessarily limited to Scala.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_21-A_Study_of_Scala_Repositories_on_GitHub.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Client Side Phishing Websites Detection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050720</link>
        <id>10.14569/IJACSA.2014.050720</id>
        <doi>10.14569/IJACSA.2014.050720</doi>
        <lastModDate>2014-07-31T11:10:09.9270000+00:00</lastModDate>
        
        <creator>Firdous Kausar</creator>
        
        <creator>Bushra Al-Otaibi</creator>
        
        <creator>Asma Al-Qadi</creator>
        
        <creator>Nwayer Al-Dossari</creator>
        
        <subject>Phishing Attacks; Browser Plugin; Anti Phishing;  Security; Firefox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Phishing steals personal or credential information by luring victims into a forged website similar to the original site and urging them to enter their information in the belief that the site is legitimate. The number of Internet users who become victims of phishing attacks is increasing, and phishing attacks have become more sophisticated. In this paper we propose a client-side solution to protect against phishing attacks: a Firefox extension, integrated as a toolbar, that is responsible for checking whether the recipient website is trusted or not by inspecting the URL of each requested webpage. If the site is suspicious, the toolbar blocks it. Every URL is evaluated according to features extracted from it. Three heuristics (primary domain, sub-domain, and path) and Na&#239;ve Bayes classification using four lexical features, combined with page rankings received from two different services (Alexa and Google PageRank), are used to classify a URL. The proposed method requires no server changes and will protect Internet users from fraudulent sites, especially phishing attacks based on deceptive URLs. Experimental results show that our approach can achieve a 48% accuracy ratio using a test set of 246 URLs, and an 87.5% accuracy ratio, excluding the NB addition, tested over 162 URLs.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_20-Hybrid_Client_Side_Phishing_Websites_Detection_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Descriptor Based on Normalized Corners and Moment Invariant for Panoramic Scene Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050719</link>
        <id>10.14569/IJACSA.2014.050719</id>
        <doi>10.14569/IJACSA.2014.050719</doi>
        <lastModDate>2014-07-31T11:10:09.8930000+00:00</lastModDate>
        
        <creator>Kawther Abbas Sallal</creator>
        
        <creator>Abdul-Monem Saleh Rahma</creator>
        
        <subject>Feature extraction; feature description; motion estimation; registration; panoramic scene</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Panorama generation systems aim at creating a wide-view image by aligning and stitching a sequence of images. The technology is extensively used in many fields such as virtual reality, medical image analysis, and geological engineering. This research is concerned with combining multiple images with a region of overlap to produce a wide field of view, by detecting feature points for images with different camera motion in an efficient and fast way. Feature extraction and description are important and critical steps in panorama construction. This study presents techniques of corner detection, moment invariants, and random sampling to locate the important features and build descriptors for the images under noise, transformation, lighting, small viewpoint changes, blurring, and compression circumstances. Corner detection and normalization are used to extract features in the image, while the descriptors are built efficiently from moment invariants. Finally, matching and motion estimation are implemented based on the random sampling method. Experiments were conducted on images and video sequences taken by a handheld camera and on images taken from the Internet. The results show that the proposed algorithm generates panoramic images and panoramic video of good quality in a fast and efficient way.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_19-Feature_Descriptor_Based_on_Normalized_Corners.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Optic Disc Boundary Extraction from Color Fundus Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050718</link>
        <id>10.14569/IJACSA.2014.050718</id>
        <doi>10.14569/IJACSA.2014.050718</doi>
        <lastModDate>2014-07-31T11:10:09.8630000+00:00</lastModDate>
        
        <creator>Thresiamma Devasia</creator>
        
        <creator>Paulose Jacob</creator>
        
        <creator>Tessamma Thomas</creator>
        
        <subject>fundus image; optic nerve head; optic disc; Fuzzy C-Means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Efficient optic disc segmentation is an important task in automated retinal screening. Optic disc detection is fundamental for medical references and is important for retinal image analysis applications. The most difficult problem of optic disc extraction is locating the region of interest; moreover, it is a time-consuming task. This paper tries to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means clustering combined with thresholding. The discs determined by the new method agree relatively well with those determined by the experts. The present method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the difference in horizontal and vertical diameters between the obtained disc boundary and the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground truth diameters with the diameters detected by the new method are 0.946 and 0.958, and 0.94 and 0.974, respectively. The scatter plot shows that the ground truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_18-Automatic_Optic_Disc_Boundary_Extraction_from_Color.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Fuzzy Self-Optimizing Control Based on Differential Evolution Algorithm for the Ratio of Wind to Coal Adjustment of Boiler in the Thermal Power Plant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050717</link>
        <id>10.14569/IJACSA.2014.050717</id>
        <doi>10.14569/IJACSA.2014.050717</doi>
        <lastModDate>2014-07-31T11:10:09.8330000+00:00</lastModDate>
        
        <creator>Ting Hou</creator>
        
        <creator>Liping Zhang</creator>
        
        <creator>Yuchen Chen</creator>
        
        <subject>fuzzy self-optimizing control; differential evolution algorithm; best ratio of wind to coal; boiler efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>The types of coal used in domestic small and medium-sized boilers are diverse and their ingredients unstable, so maintaining the wind and coal supply in a fixed proportion does not always ensure the most economical boiler combustion; the key to optimizing combustion is to adjust the ratio of wind to coal online. In this paper, a fuzzy self-optimizing control based on the differential evolution algorithm is proposed and applied to a power plant boiler system, and boiler combustion efficiency is significantly improved over previous indirect control. Taking a thermal power plant as the research object, and in the case of determining the optimum system performance, the unit efficiency can be increased significantly using this method, and the important issue of the energy efficiency of power plants can be successfully addressed.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_17-Application_of_Fuzzy_Self-optimizing_Control_Based_on_Differential.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Cache Replacement: A Novel Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050716</link>
        <id>10.14569/IJACSA.2014.050716</id>
        <doi>10.14569/IJACSA.2014.050716</doi>
        <lastModDate>2014-07-31T11:10:09.8000000+00:00</lastModDate>
        
        <creator>Sherif Elfayoumy</creator>
        
        <creator>Sean Warden</creator>
        
        <subject>cache replacement policy; high performance computing; adaptive caching; Web caching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Cache replacement policies are developed to help ensure optimal use of limited resources. A variety of such algorithms exist, with relatively few that dynamically adapt to traffic patterns. Algorithms that are tunable typically utilize off-line training mechanisms or trial and error to determine optimal characteristics. Utilizing multiple algorithms to establish an efficient replacement policy that dynamically adapts to changes in traffic load and access patterns is a novel option that is introduced in this article. A simulation of this approach utilizing two existing, simple, and effective policies, namely LRU and LFU, was studied to assess the potential of the adaptive policy. This policy is compared and contrasted with other cache replacement policies using public traffic samples mentioned in the literature as well as a synthetic model created from existing samples. Simulation results suggest that the adaptive cache replacement policy is beneficial, primarily at smaller cache sizes.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_16-Adaptive_Cache_Replacement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ontology Mapping of Business Process Modeling Based on Formal Temporal Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050715</link>
        <id>10.14569/IJACSA.2014.050715</id>
        <doi>10.14569/IJACSA.2014.050715</doi>
        <lastModDate>2014-07-31T11:10:09.7870000+00:00</lastModDate>
        
        <creator>Irfan Chishti</creator>
        
        <creator>Jixin Ma</creator>
        
        <creator>Brian Knight</creator>
        
        <subject>Business Process Modeling techniques; Ontology; Temporal Logic; Semantics; Mapping</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>A business process is the combination of a set of activities with logical order and dependence, whose objective is to produce a desired goal. Business process modeling (BPM), using knowledge of the available process modeling techniques, enables a common understanding and analysis of a business process. Industry and academia use informal and formal techniques, respectively, to represent business processes (BP), with the main objective of supporting an organization. Although both aim at BPM, the techniques used are quite different in their semantics. Our literature research found that no general representation of business process modeling is available that is more expressive than the commercial modeling tools and techniques. Therefore, this work primarily provides an ontology mapping of the modeling terms of Business Process Modeling Notation (BPMN), Unified Modeling Language (UML) Activity Diagrams (AD), and Event-Driven Process Chains (EPC) to temporal logic. Being a formal system, first-order logic assists in a thorough understanding of process modeling and its application. Our contribution is to devise a versatile conceptual categorization of modeling terms/constructs and to formalize them based on well-accepted business notions, such as action, event, process, connector, and flow. It is demonstrated that the new categorization of modeling terms, mapped to formal temporal logic, provides the expressive power to subsume the business process modeling techniques BPMN, UML AD, and EPC.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_15-Ontology_Mapping_of_Business_Process_Modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Tool Design of Cobit Roadmap Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050714</link>
        <id>10.14569/IJACSA.2014.050714</id>
        <doi>10.14569/IJACSA.2014.050714</doi>
        <lastModDate>2014-07-31T11:10:09.7700000+00:00</lastModDate>
        
        <creator>Karim Youssfi</creator>
        
        <creator>Jaouad Boutahar</creator>
        
        <creator>Souhail Elghazi</creator>
        
        <subject>IT governance; COBIT; Tool design; Roadmap; Implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Over the last two decades, the role of information technology in organizations has changed from a primarily supportive and transactional function to an essential prerequisite for strategic value generation. Organizations base their operational services on their Information Systems (IS), which need to be managed, controlled, and monitored constantly. IT governance (ITG), i.e. the way organizations manage IT resources, has become a key factor for enterprise success due to the increasing enterprise dependency on IT solutions. Several approaches are available to deal with ITG. These methods are diverse and, in some cases, long and complicated to implement. One well-accepted ITG framework is COBIT, designed as a global approach. This paper describes the design of a tool for COBIT roadmap implementation. The model is being developed in the course of ongoing PhD research.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_14-A_Tool_Design_of_Cobit_Roadmap_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Ethics in the Semantic Web Age</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050713</link>
        <id>10.14569/IJACSA.2014.050713</id>
        <doi>10.14569/IJACSA.2014.050713</doi>
        <lastModDate>2014-07-31T11:10:09.7400000+00:00</lastModDate>
        
        <creator>Aziz Alotaibi</creator>
        
        <subject>Computer ethics; semantic web; privacy concerns; global information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Computer ethics can be defined as a set of moral principles that govern the use of computers, with rules required for both programmers and users. Issues that were not anticipated in the past have arisen due to the introduction of newer platforms such as the Semantic Web. Both programmers and users are now obliged to consider phenomena such as informed consent. In this paper, I explore the ethical problems that arise for professionals and users with the advent of new technologies, especially with regard to privacy concerns and global information.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_13-Computer_Ethics_in_the_Semantic_Web_Age.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Shape Based Image Search Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050712</link>
        <id>10.14569/IJACSA.2014.050712</id>
        <doi>10.14569/IJACSA.2014.050712</doi>
        <lastModDate>2014-07-31T11:10:09.7070000+00:00</lastModDate>
        
        <creator>Aratrika Sarkar</creator>
        
        <creator>Pallabi Bhattacharjee</creator>
        
        <subject>shape-based image retrieval; contour matching; edge matching; pixel matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: i) matching of images based on contour matching; ii) matching of images based on edge matching; iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant to transformations such as i) translation, ii) rotation, and iii) scaling. A key feature of the system is that it graphically shows the percentage of the uploaded image left unmatched with respect to the images already existing in the database, while the integrity of the system lies in the unique matching techniques used for optimum results, which increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_12-A_Shape_Based_Image_Search_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Menu Recommendation System Based on Past Preferences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050711</link>
        <id>10.14569/IJACSA.2014.050711</id>
        <doi>10.14569/IJACSA.2014.050711</doi>
        <lastModDate>2014-07-31T11:10:09.6900000+00:00</lastModDate>
        
        <creator>Daniel Simon Sanz</creator>
        
        <creator>Ankur Agrawal</creator>
        
        <subject>data mining; Apriori; Android; restaurant; recommendation system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Data mining plays an important role in e-commerce in today’s world. Time is critical when it comes to shopping, as options are unlimited and making a choice can be tedious. This study presents an application of data mining in the form of an Android application that provides the user with automated suggestions based on past preferences. The application helps a person choose what food they might want to order in a specific restaurant. The application learns user behavior with each order: what they order in each kind of meal and which products they select together. After gathering enough information, the application can suggest to the user the most frequently selected dishes, both in the recent past and since the application started learning. Applications such as these can play a major role in helping make decisions based on past preferences, thereby reducing user involvement in decision making.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_11-Automated_menu_recommendation_system_based_on_past_preferences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Approach for Image Fusion Based on Curvelet Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050710</link>
        <id>10.14569/IJACSA.2014.050710</id>
        <doi>10.14569/IJACSA.2014.050710</doi>
        <lastModDate>2014-07-31T11:10:09.6600000+00:00</lastModDate>
        
        <creator>Gehad Mohamed Taher</creator>
        
        <creator>Mohamed ElSayed Wahed</creator>
        
        <creator>Ghada EL Taweal</creator>
        
        <subject>Image fusion; visual colored image; monochrome images</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Most image fusion work has been limited to monochrome images. Algorithms which utilize human colour perception are attracting great interest in the image fusion community, mainly because the use of colour greatly expands the amount of information that can be conveyed in an image. Since the human visual system is very sensitive to colours, research was undertaken in mapping three individual monochrome multispectral images to the respective channels of an RGB image to produce a false-colour fused image. Producing a fused colour output image which maintains the original chromaticity of the input visual image is highly tricky.
The focus of this paper is developing a new approach to fuse a colour image (visual image) and a corresponding grayscale one (infrared image, IR) using the curvelet approach with different fusion rules in new fields. The fused image obtained by the proposed approach maintains the high resolution of the coloured visual image, incorporates any hidden object given by the IR sensor, for example, or complements the two input images, and keeps the natural colour of the visual image.
</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_10-New_Approach_for_Image_Fusion_based_on_Curvelet_approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Object-Oriented Smartphone Application for Structural Finite Element Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050709</link>
        <id>10.14569/IJACSA.2014.050709</id>
        <doi>10.14569/IJACSA.2014.050709</doi>
        <lastModDate>2014-07-31T11:10:09.6300000+00:00</lastModDate>
        
        <creator>B.J. Mac Donald</creator>
        
        <subject>Object-oriented programming; Finite Element Method; Java; Android</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Smartphones are becoming increasingly ubiquitous both in general society and in the workplace. Recent increases in mobile processing power mean that the current generation of smartphones has processing power equivalent to a supercomputer from the early 1990s. Many industries have abandoned desktop computing and are now entirely reliant on mobile devices. Given these facts, it is logical to consider smartphones as the next platform for finite element analysis (FEA). This paper presents an architecture for a smartphone FEA application using object-oriented programming. An MVC design pattern is adopted and a demonstration FEA application for the Android smartphone platform is presented.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_9-An_Object-oriented_Smartphone_Application_for_Structural.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modification of CFCM in The Presence of Heavy AWGN for Bayesian Blind Channel Equalizer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050708</link>
        <id>10.14569/IJACSA.2014.050708</id>
        <doi>10.14569/IJACSA.2014.050708</doi>
        <lastModDate>2014-07-31T11:10:09.6130000+00:00</lastModDate>
        
        <creator>Changkyu Kim</creator>
        
        <creator>Soowhan Han</creator>
        
        <subject>Gaussian Partition Matrix; Conditional Fuzzy C-Means; Channel States; Bayesian Blind Equalizer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>In this paper, a modification of conditional Fuzzy C-Means (CFCM) aimed at the estimation of unknown desired channel states is presented for a Bayesian blind channel equalizer in the presence of heavy additive white Gaussian noise (AWGN). To modify CFCM to search for the optimal channel states of a heavily noise-corrupted communication channel, a Gaussian weighted partition matrix, along with the Bayesian likelihood fitness function and the conditional constraint of ordinary CFCM, is developed and exploited. In the experiments, binary signals are generated at random and transmitted through both linear and nonlinear channels corrupted with various degrees of AWGN, and the modified CFCM estimates the channel states of these unknown channels. The simulation results, including a comparison with the previously developed algorithm exploiting ordinary CFCM, demonstrate the effectiveness of the proposed modification in terms of accuracy and speed, especially in the presence of heavy AWGN. Therefore, the proposed modification can constitute a search algorithm for the optimal channel states of a Bayesian blind channel equalizer in severely noise-corrupted communication environments.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_8-Modification_of_CFCM_in_the_presence_of_heavy_AWGN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applicability of the Maturity Model for IT Service Outsourcing in Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050707</link>
        <id>10.14569/IJACSA.2014.050707</id>
        <doi>10.14569/IJACSA.2014.050707</doi>
        <lastModDate>2014-07-31T11:10:09.5830000+00:00</lastModDate>
        
        <creator>Victoriano Valencia Garc&#237;a</creator>
        
        <creator>Dr. Eugenio J. Fern&#225;ndez Vicente</creator>
        
        <creator>Dr. Luis Usero Aragon&#233;s</creator>
        
        <subject>IT governance; IT management; Outsourcing; IT services; Maturity model; Maturity measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Outsourcing is a strategic option which complements IT services provided internally in organizations. This study proposes the applicability of a new holistic maturity model based on the standards ISO/IEC 20000 and ISO/IEC 38500, and the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing.
This model allows independent validation and practical application in the field of higher education. In addition, this study makes it possible to achieve an effective transition to a model of good governance and management of outsourced IT services which, aligned with the core business of universities, affects the effectiveness and efficiency of their management, optimizes their value and minimizes risks.
</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_7-Applicability_of_the_Maturity_Model_for_IT_Service_Outsourcing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigation of Cascading Failures with Link Weight Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050706</link>
        <id>10.14569/IJACSA.2014.050706</id>
        <doi>10.14569/IJACSA.2014.050706</doi>
        <lastModDate>2014-07-31T11:10:09.5670000+00:00</lastModDate>
        
        <creator>Hoang Anh Tran Quang</creator>
        
        <creator>Akira Namatame</creator>
        
        <subject>cascading failures; link’s weight; network robustness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Cascading failures are crucial issues for the study of the survivability and resilience of our infrastructures and have attracted much interest in complex networks research. In this paper, we study the overload-based cascading failure model and propose a soft defense strategy to mitigate the damage from such cascading failures. In particular, we assign adjustable weights to individual links of a network and control the weight parameter. The information flow and the routing patterns in a network are then controlled based on the assigned weights. The main idea of this work is to control the traffic on the network, and we verify the effectiveness of the load redistribution for mitigating cascading failures. Numerical results imply that network robustness can be enhanced significantly using the relevant smart routing strategy, in which loads in the network are properly redistributed.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_6-Mitigation_of_Cascading_Failures_with.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Second Correlation Method for Multivariate Exchange Rates Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050705</link>
        <id>10.14569/IJACSA.2014.050705</id>
        <doi>10.14569/IJACSA.2014.050705</doi>
        <lastModDate>2014-07-31T11:10:09.5370000+00:00</lastModDate>
        
        <creator>Agus Sihabuddin</creator>
        
        <creator>Subanar</creator>
        
        <creator>Dedi Rosadi</creator>
        
        <creator>Edi Winarko</creator>
        
        <subject>forecasting; foreign exchange; NARX; second correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>The foreign exchange market is one of the most complex dynamic markets, with high volatility, nonlinearity and irregularity. As globalization spreads across the world, exchange rate forecasting becomes more important and complicated. Many external factors influence its volatility. To forecast exchange rates, these external variables can be used, and they are usually chosen based on their correlation with the predicted variable. A new second correlation method to improve forecasting accuracy is proposed. The second correlation is used to choose the external variable with a different time interval. The proposed method is tested on six major monthly exchange rates using Nonlinear Autoregressive with eXogenous input (NARX), compared with Nonlinear Autoregressive (NAR) for model benchmarking. We found that the forecasting accuracy of the proposed method increases by 16.8% compared to the univariate NAR model, is slightly better than linear correlation on average for the Dstat parameter, and gives almost no improvement for MSE.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_5-A_Second_Correlation_Method_for_Multivariate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Wavelet-Based Approach for Ultrasound Image Restoration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050704</link>
        <id>10.14569/IJACSA.2014.050704</id>
        <doi>10.14569/IJACSA.2014.050704</doi>
        <lastModDate>2014-07-31T11:10:09.5030000+00:00</lastModDate>
        
        <creator>Mohammed Tarek GadAllah</creator>
        
        <creator>Samir Mohammed Badawy</creator>
        
        <subject>Ultrasound Medical Imaging; Curvelet Based Image Denoising; Wavelet Based Image Fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Ultrasound images are generally affected by speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle filtration is accompanied by a loss of diagnostic features. In this paper, a new approach is introduced to remove speckles while keeping the fine features of the tissue under diagnosis by enhancing the image’s edges, via curvelet denoising and wavelet-based image fusion. Performance evaluation of our work is done by four quantitative measures: the peak signal to noise ratio (PSNR), the root mean square error (RMSE), a universal image quality index (Q), and Pratt’s figure of merit (FOM) as a quantitative measure of edge preservation. In addition, a Canny edge map is extracted as a qualitative measure of edge preservation. The measurements of the proposed approach confirmed its qualitative and quantitative success in image denoising while maintaining edges as far as possible. A gray phantom was designed to test our proposed enhancement method. The phantom results confirm the success and applicability of this paper’s approach not only to this research but also to gray-scale diagnostic scan images, including ultrasound B-scans.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_4-A_Wavelet-Based_Approach_for_Ultrasound_Image_Restoration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework to Improve Communication and Reliability Between Cloud Consumer and Provider in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050703</link>
        <id>10.14569/IJACSA.2014.050703</id>
        <doi>10.14569/IJACSA.2014.050703</doi>
        <lastModDate>2014-07-31T11:10:09.4730000+00:00</lastModDate>
        
        <creator>Vivek Sridhar</creator>
        
        <subject>cloud computing; requirement communication; requirement engineering; cloud service; cloud discoverability; data Mining; artificial intelligence in Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>Cloud service consumers demand reliable methods for choosing an appropriate cloud service provider for their requirements. The number of cloud consumers is increasing day by day, and so is the number of cloud providers; hence, the requirement for a common platform for interaction between cloud providers and cloud consumers is also on the rise. This paper introduces the Cloud Providers Market Platform Dashboard. It not only supports cloud provider discoverability, but also provides timely reports to consumers on cloud service usage, derives new requirements based on consumer cloud usage, and reports the associated cost. The dashboard is also responsible for obtaining the cost from each service provider for a particular requirement. Our solution learns from requirements and provides the details consumers need for effective usage of cloud services. It also enables service providers to understand requirements, provide quality of service, and understand and deliver on new requirements. This framework also deals with how best we can use before-and-after usage of cloud services to choose the right service provider for a particular requirement in a community.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_3-A_framework_to_improve_communication_and_reliability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Segmentation Via Color Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050702</link>
        <id>10.14569/IJACSA.2014.050702</id>
        <doi>10.14569/IJACSA.2014.050702</doi>
        <lastModDate>2014-07-31T11:10:09.4100000+00:00</lastModDate>
        
        <creator>Kaveh Heidary</creator>
        
        <subject>Clustering; Classification; Image Segmentation; Machine Vision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>This paper develops a computationally efficient process for the segmentation of color images. The input image is partitioned into a set of output images in accordance with the color characteristics of various image regions. The algorithm is based on random sampling of the input image and fuzzy clustering of the training data, followed by crisp classification of the input image. The user prescribes the number of randomly selected pixels comprising the trainer set and the number of color classes characterizing the image compartments. The algorithm developed here constitutes an effective preprocessing technique with various applications in machine vision systems. Spectral segmentation of the sensor image can potentially lead to enhanced performance of the object detection, classification, recognition, authentication and tracking modules of an autonomous vision system.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_2-Image_Segmentation_Via_Color_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Benefits Management of Cloud Computing Investments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050701</link>
        <id>10.14569/IJACSA.2014.050701</id>
        <doi>10.14569/IJACSA.2014.050701</doi>
        <lastModDate>2014-07-31T11:10:09.3500000+00:00</lastModDate>
        
        <creator>Richard Greenwell</creator>
        
        <creator>Xiaodong Liu</creator>
        
        <creator>Kevin Chalmers</creator>
        
        <subject>Cloud Computing; Benefits Management; Information Systems Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(7), 2014</description>
        <description>This paper examines investments in cloud computing using the Benefits Management approach. The major contribution of the paper is to provide a unique insight into how organizations derive value from cloud computing investments. The motivation for writing this paper is to consider the business benefits generated from utilizing cloud computing in a range of organizations. Case studies are used to describe a number of organizations' approaches to benefits exploitation using cloud computing. It was found that smaller organizations can generate rapid growth using strategies based on cloud computing, while larger organizations have used utility approaches to reduce the costs of IT infrastructure.</description>
        <description>http://thesai.org/Downloads/Volume5No7/Paper_1-Benefits_Management_of_Cloud_Computing_Investments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Breast Cancer Diagnosis using Artificial Neural Networks with Extreme Learning Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030703</link>
        <id>10.14569/IJARAI.2014.030703</id>
        <doi>10.14569/IJARAI.2014.030703</doi>
        <lastModDate>2014-07-21T08:22:10.9870000+00:00</lastModDate>
        
        <creator>Chandra Prasetyo Utomo</creator>
        
        <creator>Aan Kardiana</creator>
        
        <creator>Rika Yuliwulandari</creator>
        
        <subject>breast cancer; artificial neural networks; extreme learning machine; medical decision support systems</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(7), 2014</description>
        <description>Breast cancer is the second leading cause of death among women. Early detection followed by appropriate cancer treatment can reduce the risk of death. Medical professionals can make mistakes while identifying a disease, and the help of technology such as data mining and machine learning can substantially improve diagnosis accuracy. Artificial Neural Networks (ANN) have been widely used in intelligent breast cancer diagnosis. However, the standard Gradient-Based Back Propagation Artificial Neural Network (BP ANN) has some limitations: parameters must be set at the beginning, the training process takes a long time, and it may become trapped in local minima. In this research, we implemented an ANN with extreme learning techniques for diagnosing breast cancer based on the Breast Cancer Wisconsin Dataset. Results showed that the Extreme Learning Machine Neural Network (ELM ANN) yields a classifier model with better generalization than BP ANN. The development of this technique is promising as an intelligent component in medical decision support systems.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No7/Paper_3-Breast_Cancer_Diagnosis_using_Artificial_Neural_Networks_with_Extreme_Learning_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifications of Motor Imagery Tasks in Brain Computer Interface Using Linear Discriminant Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030702</link>
        <id>10.14569/IJARAI.2014.030702</id>
        <doi>10.14569/IJARAI.2014.030702</doi>
        <lastModDate>2014-07-10T08:37:06.7000000+00:00</lastModDate>
        
        <creator>Roxana Aldea</creator>
        
        <creator>Monica Fira</creator>
        
        <subject>Brain computer interface; motor imagery; wavelet; linear discriminant analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(7), 2014</description>
        <description>In this paper, we address a method for motor imagery feature extraction for a brain computer interface (BCI). The wavelet coefficients were used to extract the features from the motor imagery EEG, and linear discriminant analysis was utilized to classify the pattern of left or right hand imagery movement and rest. The performance of the proposed method was evaluated using EEG data that we recorded with 8 g.tec active electrodes via the g.MOBIlab+ module. The maximum classification accuracy is 91%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No7/Paper_2-Classifications_of_motor_imagery_tasks_in_brain_computer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A More Intelligent Literature Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030701</link>
        <id>10.14569/IJARAI.2014.030701</id>
        <doi>10.14569/IJARAI.2014.030701</doi>
        <lastModDate>2014-07-10T08:37:06.6230000+00:00</lastModDate>
        
        <creator>Michael G King</creator>
        
        <creator>Alison Van Bree</creator>
        
        <subject>automated literature search; database; search algorithm; Craniosynostosis; fibroblast growth factor receptor</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(7), 2014</description>
        <description>Although the topic of study relates to an environmental/health issue, it is the methodology described which serves to showcase an embryonic form of a new, more intelligent protocol of search algorithm. Through the implementation of this algorithm, an extensive automated literature base yielded a single credible solution to a previously unsolved problem. Faced with a distressing but entirely unexplained incidence of birth defects, the proposed model of knowledge scavenging worked through acknowledged gaps in the understanding of increased (phosphate) fertilizer use, drew on the template of known facts regarding the interactions of phosphates with the processes of mammal (and other animal) growth, of metabolic function, and of neurological development, and delivered a causal model which would not, at least not easily, derive from current literature search methods. Illustrating the practical value of a step forward in the design of intelligent literature search, the present study provides a candidate cause to explain a cluster of bovine deformities.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No7/Paper_1-A_More_Intelligent_Literature_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>XCS with an internal action table for non-Markov environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050626</link>
        <id>10.14569/IJACSA.2014.050626</id>
        <doi>10.14569/IJACSA.2014.050626</doi>
        <lastModDate>2014-07-01T06:22:55.1600000+00:00</lastModDate>
        
        <creator>Tomohiro Hayashida</creator>
        
        <creator>Ichiro Nishizaki</creator>
        
        <creator>Keita Moriwake</creator>
        
        <subject>Learning classifier systems; Non-Markov environments; XCS; Internal register</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>To cope with sequential decision problems in non-Markov environments, learning classifier systems using an internal register have been proposed. Since these systems control the internal register by utilizing the action part of classifiers, in the same way as choosing actions in the environment, they do not always work well. In this paper, we develop an effective learning classifier system with two different rule sets for internal and external actions. The first is used for determining internal actions, that is, rules for controlling the internal register. It provides stable performance by separating control of the internal register from the action part of classifiers, and it is represented by “If [external state] &amp; [internal state] then [internal action]”; we call this set of rules the internal action table. The second is for selecting external actions, as in the classical classifier system, but its structure is slightly different from the classical one; it is represented by “If [external state] &amp; [internal state] &amp; [internal action] then [external action].” In the proposed system, aliased states in the environment are identified by observing the payoffs of a classifier and referring to the internal action table. To demonstrate the efficiency and effectiveness of the proposed system, we apply it to the woods environments used in related works and compare its performance to those of existing classifier systems.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_26-XCS_with_an_internal_action_table.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Compound Generic Quantitative Framework for Measuring Digital Divide</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050625</link>
        <id>10.14569/IJACSA.2014.050625</id>
        <doi>10.14569/IJACSA.2014.050625</doi>
        <lastModDate>2014-07-01T06:22:55.1300000+00:00</lastModDate>
        
        <creator>Noureldien A. Noureldien</creator>
        
        <subject>Digital Divide; Digital Divide Indicator; E-inclusion; Inclusion Factors; Inclusion Activities</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>The term digital divide has been used in the literature to conceptualize the gap in using and utilizing information and communication technologies. The digital divide can be identified at different levels, such as individuals, groups, societies, organizations and countries. On the other hand, the concept of e-Inclusion has been coined to define the activities needed to bridge the digital divide.
One of the most challenging research areas concerning the digital divide, and the subject of exhaustive study, is measuring it. Researchers have proposed many metrics and indices to measure the digital divide. However, most of the proposed measures are bivariate comparisons that reduce measurement to comparisons of Internet penetration rates or the like.
This paper proposes a compound generic framework for the quantitative measurement of the digital divide at the individual or group level. The proposed framework takes into account the context of the digital divide in each society.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_25-A_Compound_Generic_Quantitative_Framework_for_Measuring_Digital_Divide.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Water Quality Parameters Using the Regression Model with Fuzzy K-Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050624</link>
        <id>10.14569/IJACSA.2014.050624</id>
        <doi>10.14569/IJACSA.2014.050624</doi>
        <lastModDate>2014-07-01T06:22:55.1130000+00:00</lastModDate>
        
        <creator>Muntadher A. SHAREEF</creator>
        
        <creator>Abdelmalek TOUMI</creator>
        
        <creator>Ali KHENCHAF</creator>
        
        <subject>In situ data measurements; IKONOS data; water quality parameters; GLCM; empirical models; fuzzy K-means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>The traditional remote sensing methods used for monitoring and estimating pollutants generally rely on the spectral response or scattering reflected from water. In this work, a new method is proposed to detect contaminants and determine Water Quality Parameters (WQPs) based on theories of texture analysis. Empirical statistical models have been developed to estimate and classify contaminants in the water. The Gray Level Co-occurrence Matrix (GLCM) is used to estimate six texture parameters: contrast, correlation, energy, homogeneity, entropy, and variance. These parameters are used to estimate the regression model with three WQPs. Finally, fuzzy K-means clustering is used to generalize the water quality estimation over the whole segmented image. Using the in situ measurements and IKONOS data, the obtained results show that texture parameters and high-resolution remote sensing are able to monitor and predict the distribution of WQPs in large rivers.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_24-Estimation_of_Water_Quality_Parameters_Using_The_Regression_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting Rainfall Time Series with stochastic output approximated by neural networks Bayesian approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050623</link>
        <id>10.14569/IJACSA.2014.050623</id>
        <doi>10.14569/IJACSA.2014.050623</doi>
        <lastModDate>2014-07-01T06:22:55.0800000+00:00</lastModDate>
        
        <creator>Cristian Rodriguez Rivero</creator>
        
        <creator>Julian Antonio Pucheta</creator>
        
        <subject>rainfall time series; stochastic method; bayesian approach; computational intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>The annual estimate of the amount of water available to the agricultural sector has become a lifeline in places where rainfall is scarce, as is the case in northwestern Argentina. This work proposes to model and simulate monthly rainfall time series from one geographical location in Catamarca, Valle El Viejo Portezuelo. In this sense, the prediction task is the mathematical and computational modelling of the monthly cumulative rainfall series, whose stochastic output is approximated by a Bayesian neural network approach. We propose to use an algorithm based on artificial neural networks (ANNs) employing Bayesian inference. The prediction is evaluated on 20% of the provided data, which covers 2000 to 2010. A new analysis for the modelling, simulation, and computational prediction of cumulative rainfall from one geographical location is presented. Only the historical time series of daily flows, measured in mmH2O, is used as input data. Preliminary results of the annual forecast in mmH2O with a prediction horizon of one and a half years (18 months) are presented. The methodology employs artificial-neural-network-based tools, statistical analysis, and computation to complete missing information and to characterize the qualitative and quantitative behavior. Preliminary results for different prediction horizons of the proposed filter are also shown, together with a comparison against the Gaussian process filter used in the literature.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_23-Forecasting_Rainfall_Time_Series.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Solution Structure and Error Estimation for The Generalized Linear Complementarity Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050622</link>
        <id>10.14569/IJACSA.2014.050622</id>
        <doi>10.14569/IJACSA.2014.050622</doi>
        <lastModDate>2014-07-01T06:22:55.0500000+00:00</lastModDate>
        
        <creator>Tingfa Yan</creator>
        
        <subject>GLCP; solution structure; error estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>In this paper, we consider the generalized linear complementarity problem (GLCP). Firstly, we develop some equivalent reformulations of the problem under milder conditions and then characterize the solution of the GLCP. Secondly, we establish a global error estimation for the GLCP by weakening the assumptions. The results obtained in this paper can be taken as an extension of results for the classical linear complementarity problem.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_22-The_Solution_Structure_and_Error_Estimation_for_the_Generalized.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Watermarking Digital Image Using Fuzzy Matrix Compositions and Rough Set</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050621</link>
        <id>10.14569/IJACSA.2014.050621</id>
        <doi>10.14569/IJACSA.2014.050621</doi>
        <lastModDate>2014-07-01T06:22:55.0330000+00:00</lastModDate>
        
        <creator>Sharbani Bhattacharya</creator>
        
        <subject>Fuzzy Product-Mod-Minus Matrix; Fuzzy Compliment-Product-Minus Matrix; Fuzzy Rough set; Watermarking; Encrypting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Watermarking is applied to digital images for authentication and to restrict their unauthorized use. A watermark is sometimes invisible and can be extracted only by an authenticated party. Text or other information is encrypted with a public-private key derived from two fuzzy matrices and embedded in the image as a watermark. In this paper we propose two fuzzy compositions, Product-Mod-Minus and Compliment-Product-Minus. The watermark is embedded using a fuzzy rough set created from the fuzzy matrix compositions.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_21-Watermarking_Digital_Image_Using_Fuzzy_Matrix_Compositions_and_Rough_Set.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Encrypted With Fuzzy Compliment-Max-Product Matrix in Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050620</link>
        <id>10.14569/IJACSA.2014.050620</id>
        <doi>10.14569/IJACSA.2014.050620</doi>
        <lastModDate>2014-07-01T06:22:55.0200000+00:00</lastModDate>
        
        <creator>Sharbani Bhattacharya</creator>
        
        <subject>Watermarking; Fuzzy Compliment-Max-Product Matrix; Fuzzification; Encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>A watermark is used to protect copyright and to authenticate images. In today’s digital media, images are in electronic form and available on the Internet. For their protection and authentication, invisible watermarks in encrypted form are used. In this paper, encryption is done using a fuzzy Compliment-Max-Product matrix, and the encrypted watermark is then embedded in the digital media at the desired places using a fuzzy rule. The Region of Interest (ROI) is decided by fuzzification. Then, the watermark is inserted at the respective positions in the image. The robustness of the watermark is judged for the ROI. This method of watermarking works on all image file formats and is resistant to geometric, noise, and compression attacks.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_20-Encrypted_with_Fuzzy_Compliment-Max-Product_Matrix_in_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast Efficient Clustering Algorithm for Balanced Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050619</link>
        <id>10.14569/IJACSA.2014.050619</id>
        <doi>10.14569/IJACSA.2014.050619</doi>
        <lastModDate>2014-07-01T06:22:54.9870000+00:00</lastModDate>
        
        <creator>Adel A. Sewisy</creator>
        
        <creator>M. H. Marghny</creator>
        
        <creator>Rasha M. Abd ElAziz</creator>
        
        <creator>Ahmed I. Taloba</creator>
        
        <subject>Clustering; K-means algorithm; Bee algorithm; GA algorithm; FBK-means algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Cluster analysis is a major technique for statistical analysis, machine learning, pattern recognition, data mining, image analysis, and bioinformatics. The k-means algorithm is one of the most important clustering algorithms. However, the k-means algorithm needs a large amount of computational time for handling large data sets. In this paper, we develop a more efficient clustering algorithm, named Fast Balanced k-means (FBK-means), to overcome this deficiency. This algorithm not only yields clustering results as good as those of the k-means algorithm but also requires less computational time. The algorithm works well in the case of balanced data.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_19-Fast_Efficient_Clustering_Algorithm_for_Balanced_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Cloud Computing Security Model to Detect and Prevent DoS and DDoS Attack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050618</link>
        <id>10.14569/IJACSA.2014.050618</id>
        <doi>10.14569/IJACSA.2014.050618</doi>
        <lastModDate>2014-07-01T06:22:54.9570000+00:00</lastModDate>
        
        <creator>Masudur Rahman</creator>
        
        <creator>Wah Man Cheung</creator>
        
        <subject>Denial of Service attack; Distributed Denial of Service Attack; mechanism of DoS and DDoS attack; framework to prevent DDoS attack; hardware based watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Cloud computing has been considered one of the crucial emerging networking technologies, and it has changed the architecture of computing in the last few years. Despite the security concerns of protecting data and providing continuous service over the cloud, many organisations are considering different types of cloud services as potential solutions for their business. We are researching cloud computing security issues and potential cost-effective solutions for cloud service providers. In our first paper we revealed a number of security risks for the cloud computing environment, focusing on the lack of awareness of cloud service providers. In our second paper, we investigated the technical security issues involved in the cloud service environment, where it was revealed that DoS and DDoS attacks are among the most common and significant dangers for the cloud computing environment. In this paper, we investigate the different techniques that can be used to mount DoS or DDoS attacks and recommend a hardware-based watermarking framework to protect organisations from these threats.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_18-A_Novel_Cloud_Computing_Security_Model_to_Detect_and_Prevent_DoS_and_DDoS_Attack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward an Effective Information Security Risk Management of Universities’ Information Systems Using Multi Agent Systems, Itil, Iso 27002,Iso 27005</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050617</link>
        <id>10.14569/IJACSA.2014.050617</id>
        <doi>10.14569/IJACSA.2014.050617</doi>
        <lastModDate>2014-07-01T06:22:54.9400000+00:00</lastModDate>
        
        <creator>S. FARIS</creator>
        
        <creator>S.EL HASNAOUI</creator>
        
        <creator>H.MEDROMI</creator>
        
        <creator>H.IGUER</creator>
        
        <creator>A.SAYOUTI</creator>
        
        <subject>Information security; information systems; multi agent systems; ITIL V3; ISO 27002; ISO 27005</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Universities in the public and private sectors depend on information technology and information systems to successfully carry out their missions and business functions. Information systems are subject to serious threats that can have adverse effects on organizational operations, assets, and individuals by exploiting both known and unknown vulnerabilities to compromise the confidentiality, integrity, or availability of the information being processed, stored, or transmitted by those systems. Threats to information systems include purposeful attacks, environmental disruptions, and human/machine errors, and they can result in harm to the integrity of data. Therefore, it is imperative that all actors at all levels in a university information system understand their responsibilities and are held accountable for managing information security risk, that is, the risk associated with the operation and use of the information systems that support the missions and business functions of their university.
The purpose of this paper is to propose an information security toolkit named URMIS (University Risk Management Information System), based on multi agent systems and integrated with existing information security frameworks and standards, to enhance the security of universities’ information systems.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_17-Toward_an_Effective_Information_Security_Risk_Management_of_Universities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Performance Execution Procedure to Improve Seamless Vertical Handover in Heterogeneous Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050616</link>
        <id>10.14569/IJACSA.2014.050616</id>
        <doi>10.14569/IJACSA.2014.050616</doi>
        <lastModDate>2014-07-01T06:22:54.9100000+00:00</lastModDate>
        
        <creator>Omar Khattab</creator>
        
        <creator>Omar Alani</creator>
        
        <subject>Vertical Handover (VHO); Media Independent Handover (MIH); Interworking Architectures; Mobile IPv4 (MIPv4); Heterogeneous Wireless Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>One challenge of wireless network integration is ubiquitous wireless access: providing seamless handover for any moving communication device between different types of technologies (3GPP and non-3GPP), such as the Global System for Mobile Communication (GSM), Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), the Universal Mobile Telecommunications System (UMTS), and Long Term Evolution (LTE). This challenge is important as Mobile Users (MUs) are becoming increasingly demanding of services regardless of the technological complexities associated with them. To fulfill these requirements for seamless Vertical Handover (VHO), two main interworking architectures have been proposed by the European Telecommunication Standards Institute (ETSI) for integration between different types of technologies, namely loose and tight coupling. On the other hand, Media Independent Handover IEEE 802.21 (MIH) is a framework proposed by the IEEE to provide seamless VHO between the aforementioned technologies by utilizing these interworking architectures to facilitate and complement their work. The paper presents the design and simulation of a Mobile IPv4 (MIPv4) based procedure for the loose coupling architecture with MIH to optimize performance in heterogeneous wireless networks. The simulation results show that the proposed procedure provides seamless VHO with minimal latency and a zero packet loss ratio.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_16-Simulation_of_Performance_Execution_Procedure_to_Improve.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using an MPI Cluster in the Control of a Mobile Robots System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050615</link>
        <id>10.14569/IJACSA.2014.050615</id>
        <doi>10.14569/IJACSA.2014.050615</doi>
        <lastModDate>2014-07-01T06:22:54.8800000+00:00</lastModDate>
        
        <creator>Mohamed Salim LMIMOUNI</creator>
        
        <creator>Sa&#239;d BENAISSA</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <creator>Adil SAYOUTI</creator>
        
        <subject>clusters; MPI; parallel programming; mobile systems; mobile robots</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Recently, HPC (High Performance Computing) systems have moved from supercomputers to clusters. Clusters are used in all tasks that require very high computing power, such as weather forecasting, climate research, molecular modeling, physical simulations, cryptanalysis, etc. The use of clusters is increasingly important in the scientific community, where the need for high performance computing is still growing. In this paper, we propose an improvement to the control of a mobile robot system by using an MPI (Message Passing Interface) cluster. This cluster launches, manipulates, and processes data from multiple robots simultaneously.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_15-Using_an_MPI_Cluster_in_the_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prototype of a Web ETL Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050614</link>
        <id>10.14569/IJACSA.2014.050614</id>
        <doi>10.14569/IJACSA.2014.050614</doi>
        <lastModDate>2014-07-01T06:22:54.8630000+00:00</lastModDate>
        
        <creator>Matija Novak</creator>
        
        <creator>Kornelije Rabuzin</creator>
        
        <subject>ETL; data warehouse; web; ETL tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Extract, transform and load (ETL) is a process that makes it possible to extract data from operational data sources, to transform the data in the way needed for data warehousing purposes, and to load the data into a data warehouse (DW). The ETL process is the most important part of building a data warehouse. Because the ETL process is very complex and time-consuming, this paper presents a prototype of a web ETL tool that offers the end user step-by-step guidance through the entire process. The ETL tool is designed as a web application so users can save the time (and space) required for installation.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_14-Prototype_of_a_web_ETL_Tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>System Autonomy Modeling During Early Concept Definition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050613</link>
        <id>10.14569/IJACSA.2014.050613</id>
        <doi>10.14569/IJACSA.2014.050613</doi>
        <lastModDate>2014-07-01T06:22:54.8330000+00:00</lastModDate>
        
        <creator>Rosteslaw M. Husar</creator>
        
        <creator>Jerrell Stracener</creator>
        
        <subject>Systems Engineering; Autonomous Systems; Requirements Engineering; System of Systems component; System Autonomy Modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Current rapid systems engineering design methods, such as Agile, significantly reduce development time. The resulting early availability of incremental capabilities increases the importance of accelerating and effectively performing early concept trade studies. Current system autonomy assessment tools are level-based and are used to report the levels of autonomy attained during field trials. These tools have limited applicability in the earlier design definition stages. An algorithmic system autonomy tool is needed to facilitate the trade-off studies, analyses of alternatives, and concept-of-operations work performed during those very early phases. We have developed our contribution to such a tool and describe it in this paper.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_13-System_Autonomy_Modeling_During_Early_Concept_Definition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development Process Patterns for Distributed Onshore/Offshore Software Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050612</link>
        <id>10.14569/IJACSA.2014.050612</id>
        <doi>10.14569/IJACSA.2014.050612</doi>
        <lastModDate>2014-07-01T06:22:54.8170000+00:00</lastModDate>
        
        <creator>Ravinder Singh</creator>
        
        <creator>Dr. Kevin Lano</creator>
        
        <subject>Distributed work; Onshore; Offshore; Software; IT projects; Programme and project Management</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>The globalisation of the commercial world and the use of distributed working practices (offshore/onshore/near-shore) have increased dramatically with the improvement of information and communication technologies. Many organisations, especially those that operate within knowledge-intensive industries, have turned to distributed work arrangements to facilitate information exchange and provide competitive advantage in terms of cost and quicker delivery of solutions. The information and communication technologies (ICT) must be able to provide services similar to face-to-face conditions. Additional organisational functions must be enhanced to overcome the shortcomings of ICT and to compensate for time gaps, cultural differences, and distributed teamwork. Our proposed model identifies four key work models or patterns that affect the operation of distributed work arrangements, and we also propose guidelines for managing distributed work efficiently and effectively.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_12-Development_Process_Patterns_for_Distributed_Onshore_Offshore_Software_Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Teaching Introductory Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050611</link>
        <id>10.14569/IJACSA.2014.050611</id>
        <doi>10.14569/IJACSA.2014.050611</doi>
        <lastModDate>2014-07-01T06:22:54.8000000+00:00</lastModDate>
        
        <creator>Ljubomir Jerinic</creator>
        
        <subject>Pedagogical Pattern; Pattern Design; Learning; Programming; Computer science education; Programming; Software agents; Electronic learning; Computer aided instruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>From the educational point of view, learning by mistake can be an influential teaching method, especially for teaching and learning Computer Science (CS) and/or Information Technologies (IT). As learning to program is a very difficult task, it is perhaps an even more difficult and demanding job to teach novices how to write correct computer programs. The concept of pedagogical design patterns has so far received surprisingly little attention from researchers in the field of pedagogy/didactics of Computer Science. Pedagogical design patterns are descriptions of successful solutions to common problems that occur in teaching/learning CS and IT. Good pedagogical patterns can help teachers when they have to design a new course, lessons, topics, examples, and assignments in a particular context. Pedagogical patterns capture best practice in teaching/learning CS and/or IT, and they can be very helpful to teachers in preparing their own lessons. In this paper, a brief description of a special class of pedagogical design patterns, the group of patterns for learning by mistakes, is presented. In addition, the paper describes the usage of helpful and misleading pedagogical agents, developed in the Agent-based E-learning System (AE-lS), based on the pedagogical pattern for explanation, Explain, and the pedagogical pattern for learning by mistakes, Wolf, Wolf, Mistake.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_11-Teaching_Introductory_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Domain Based Prefetching in Web Usage Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050610</link>
        <id>10.14569/IJACSA.2014.050610</id>
        <doi>10.14569/IJACSA.2014.050610</doi>
        <lastModDate>2014-07-01T06:22:54.7700000+00:00</lastModDate>
        
        <creator>Dr. M. Thangaraj</creator>
        
        <creator>Mrs. V. T. Meenatchi</creator>
        
        <subject>Latency; Domain; Prefetching; bandwidth; Network Traffic; Web Log File</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>In the current web scenario, Internet users expect the web to be more friendly and meaningful, with reduced network traffic, and every end user needs a channel with high bandwidth. In order to reduce the web server load and the access latency, and to relieve the network bandwidth from heavy network traffic, a model called Domain based Prefetching (DoP) is recommended, which uses the technique of General Access Pattern Tracking. DoP presents the user with several generic Domains containing the top visited web requests in each Domain, which are retrieved from the web log file for future web access.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_10-Domain_Based_Prefetching_in_Web_Usage_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Principle of Duality on Prognostics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050609</link>
        <id>10.14569/IJACSA.2014.050609</id>
        <doi>10.14569/IJACSA.2014.050609</doi>
        <lastModDate>2014-07-01T06:22:54.7530000+00:00</lastModDate>
        
        <creator>Mohammad Samie</creator>
        
        <creator>Amir M. S. Motlagh</creator>
        
        <creator>Alireza Alghassi</creator>
        
        <creator>Suresh Perinpanayagam</creator>
        
        <creator>Epaminondas Kapetanios</creator>
        
        <subject>Prognostic Model; Integrated System Health Management (ISHM); Degradation; Duality; Cuk Converter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>The accurate estimation of the remaining useful life (RUL) of the various components and devices used in complex systems, e.g., airplanes, remains to be addressed by scientists and engineers. Currently, there is a wide range of innovative proposals intended to solve this problem. Integrated System Health Management (ISHM) has thus far seen some growth in this sector as a result of the extensive progress shown in demonstrating feasible and viable techniques. The problem with these techniques is that they are often time-consuming and too expensive and resource-intensive to develop. In this paper we present a radically novel approach for building prognostic models that compensates for and improves on the inconsistencies and problems of current prognostic models. Broadly speaking, the new approach proposes a state-of-the-art technique that utilizes the physics of a system, rather than the physics of a component, to develop its prognostic model. A positive aspect of this approach is that the prognostic model can be generalized, such that the model for a new system can be developed on the basis and principles of the prognostic model of another system. This paper mainly explores single-switch dc-to-dc converters, which are used as an experiment to exemplify the potential of a novel prognostic model that can efficiently estimate the remaining useful life of one system based on the prognostics of its dual system.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_9-Principle_of_Duality_in_Prognostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis of Feature(S)-Classifier Combination for Devanagari Optical Character Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050608</link>
        <id>10.14569/IJACSA.2014.050608</id>
        <doi>10.14569/IJACSA.2014.050608</doi>
        <lastModDate>2014-07-01T06:22:54.7230000+00:00</lastModDate>
        
        <creator>Jasbir Singh</creator>
        
        <creator>Gurpreet Singh Lehal</creator>
        
        <subject>Artificial Neural Network; DCT; Directional Distance Distribution; Feature extraction, Gabor; k-Nearest Neighbour; Profile direction codes; Support Vector Machines; Transition; Zoning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>This paper presents a comparative performance analysis of feature(s)-classifier combinations for a Devanagari optical character recognition system. For performance evaluation, three classifiers, namely support vector machines, artificial neural networks and k-nearest neighbors, and seven feature extraction approaches, viz. profile direction codes, transition, zoning, directional distance distribution, Gabor filter, discrete cosine transform and gradient features, have been used. The first four features have been used jointly as statistical features. The performance has also been evaluated by using combinations of these feature extraction approaches. In addition, performance evaluation has also been done by varying the feature vector length of the Gabor and DCT features. For training the classifiers, 7000 samples of the first 70 classes (out of 942 classes), recognized in earlier work, have been used. Such a large number of classes is due to the horizontally and vertically fused/overlapping characters. We have chosen the first 70 classes as their percentage contribution out of the 942 classes has been found to be 96.69%. For testing, 1400 samples have been collected separately. A corpus of 25 books has been used for sample collection. Classifiers trained on different features have been compared for performance evaluation. It has been found that support vector machines trained with gradient features provide a classification correctness of 99.429%, and there is no significant increase in performance with an increase in the feature vector length.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_8-Comparative_Performance_Analysis_of_Feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Policies for Securing Cloud Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050607</link>
        <id>10.14569/IJACSA.2014.050607</id>
        <doi>10.14569/IJACSA.2014.050607</doi>
        <lastModDate>2014-07-01T06:22:54.6900000+00:00</lastModDate>
        
        <creator>Ingrid A. Buckley</creator>
        
        <creator>Fan Wu</creator>
        
        <subject>relational database; cloud; security; threats; hackers; security patterns; cloud database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Databases are an important and almost mandatory means of storing information for later use. Databases require effective security to protect the information stored within them. In particular, access control measures are especially important for cloud databases, because they can be accessed from anywhere in the world at any time via the Internet. The Internet has provided a plethora of advantages by increasing accessibility to various services, education, information and communication. The Internet also presents challenges and disadvantages, which include securing services, information and communication. Naturally, the Internet is used for good but also to carry out malicious attacks on cloud databases. In this paper we discuss approaches and techniques to protect cloud databases, including security policies which can be realized as security patterns.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_7-Security_Policies_for_Securing_Cloud_Databases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Individual Syllabus for Personalized Learner-Centric E-Courses in E-Learning and M-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050606</link>
        <id>10.14569/IJACSA.2014.050606</id>
        <doi>10.14569/IJACSA.2014.050606</doi>
        <lastModDate>2014-07-01T06:22:54.6770000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>AI; Agent; education; e-Learning; m-Learning; Semantic Net</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Most e-learning and m-learning systems are course-centric. These systems provide services that concentrate on course material and pedagogy. They do not take into account the variety of student levels, skills, interests or preferences. This paper provides the design of an approach for personalized and self-adaptive agent-based learning systems that enhances e-learning and mobile learning (m-learning) services to be learner-centric. It presents a modeling of the goals of different learners in corporate training on computer courses in an educational institute. It shows how to customize and personalize learning paths (course syllabi) for e-learning and m-learning platforms. The delivery of e-courses becomes personalized and learner-centric, which improves learning outcomes and learner satisfaction and enhances education.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_6-Individual_Syllabus_for_Personalized_Learner-Centric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Educational Data Mining Model Using Rattle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050605</link>
        <id>10.14569/IJACSA.2014.050605</id>
        <doi>10.14569/IJACSA.2014.050605</doi>
        <lastModDate>2014-07-01T06:22:54.6430000+00:00</lastModDate>
        
        <creator>Sadiq Hussain</creator>
        
        <creator>G.C. Hazarika</creator>
        
        <subject>Educational Data Mining; R Programming; Rattle; ROC Curve; Support Vector Machine; Random Forest</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Data mining is the extraction of knowledge from large databases. Data mining has affected all fields, from combating terror attacks to human genome databases. For different kinds of data analysis, the R programming language has a key role to play. Rattle, an effective GUI for R programming, is used extensively for generating reports based on several current models such as random forests and support vector machines. It is otherwise hard to decide which model to choose for the data that needs to be mined. This paper proposes a method using Rattle for the selection of an educational data mining model.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_5-Educational_Data_Mining_Model_Using_Rattle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Experience of Taiwan Policy Development To Accelerate Cloud Migration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050604</link>
        <id>10.14569/IJACSA.2014.050604</id>
        <doi>10.14569/IJACSA.2014.050604</doi>
        <lastModDate>2014-07-01T06:22:54.6300000+00:00</lastModDate>
        
        <creator>Sheng-Chi Chen</creator>
        
        <subject>Cloud Computing; Action Research; Project Management Office</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Developing cloud computing is a key policy for the government, while convenient services are an important issue for people's daily lives. At the beginning of 2010, the Taiwan Government launched a “Cloud Computing Development Project” and devoted itself to service planning and investment activities. At the end of 2012, after a three-year comprehensive review and the adoption of suggestions from the public and private sectors, the Taiwan Government adjusted the policy and renamed it the “Cloud Computing Application and Development Project”. From the perspectives of government application, industry development, and cloud open platforms, this study describes how the vision drives goals and thinking pushes strategies forward. In the process of government and industry collaboration, value is progressively created for cloud services. The Cloud Computing Project Management Office plays a key role as a policy advisor, matching platform, and technical supporter in achieving (1) policy assessment and strategy enhancement; (2) construction of a cloud open platform linking demand and supply; and (3) innovation and integration planning for government service applications, leading to industry development.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_4-An_Experience_of_Taiwan_Policy_Development_to_Accelerate_Cloud_Migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Null Values in Database Using CBR and Supervised Learning Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050603</link>
        <id>10.14569/IJACSA.2014.050603</id>
        <doi>10.14569/IJACSA.2014.050603</doi>
        <lastModDate>2014-07-01T06:22:54.5970000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>Database (DB); Data mining; Case-Based Reasoning (CBR); Classification; Null Values; Supervised Learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Databases and database systems are used widely in almost all life activities. Sometimes missing data items are discovered as null values in database tables. This paper proposes the design of a supervised learning system to estimate missing values found in a university database. The values of the estimated data items, and of the data items used in estimation, are numeric and not computed. The system performs data classification based on Case-Based Reasoning (CBR) to estimate missing marks of students. A data set is used to train the system under the supervision of an expert. After being trained to classify and estimate null values under expert supervision, the system starts classifying and estimating null data by itself.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_3-Estimating_Null_Values_in_Database_using_CBR_and_Supervised_Learning_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Coverage Analysis for Low Earth Orbiting Satellites at Low Elevation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050602</link>
        <id>10.14569/IJACSA.2014.050602</id>
        <doi>10.14569/IJACSA.2014.050602</doi>
        <lastModDate>2014-07-01T06:22:54.5830000+00:00</lastModDate>
        
        <creator>Shkelzen Cakaj</creator>
        
        <creator>Bexhet Kamo</creator>
        
        <creator>Algenti Lala</creator>
        
        <creator>Alban Rakipi</creator>
        
        <subject>LEO; satellite; coverage</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Low Earth Orbit (LEO) satellites are used for public networking and for scientific purposes. Communication via satellite begins when the satellite is positioned in its orbital position. Ground stations can communicate with LEO satellites only when the satellite is in their visibility region. The duration of visibility and communication varies for each LEO satellite pass over the station, since LEO satellites move very fast over the Earth. The satellite coverage area is defined as a region of the Earth where the satellite is seen at a minimum predefined elevation angle. The satellite’s coverage area on the Earth depends on its orbital parameters. Communication under low elevation angles can be hindered by natural barriers. For safe communication and for savings within a link budget, coverage under too low an elevation is not always provided. LEO satellites organized in constellations act as a convenient network solution for real-time global coverage. The global coverage model is in fact the complementary networking process of individual satellites’ coverage. Satellite coverage strongly depends on the elevation angle. To draw conclusions about the coverage variation for low orbiting satellites at low elevations up to 10°, simulations for altitudes from 600 km to 1200 km are presented in this paper.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_2-The_Coverage_Analysis_for_Low_Earth_Orbiting_Satellites_at_Low_Elevation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Open Source P2P Encrypted VoIP Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050601</link>
        <id>10.14569/IJACSA.2014.050601</id>
        <doi>10.14569/IJACSA.2014.050601</doi>
        <lastModDate>2014-07-01T06:22:54.5370000+00:00</lastModDate>
        
        <creator>Ajay Kulkarni</creator>
        
        <creator>Saurabh Kulkarni</creator>
        
        <subject>voip; softphone; java; open source</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(6), 2014</description>
        <description>Open source is the future of technology. This community is growing by the day, developing and improving existing frameworks and software for free. Open source replacements are coming up for almost all proprietary software nowadays. This paper proposes an open source application which could replace Skype, a popular VoIP softphone. The performance features of the developed software are analyzed and compared with Skype so that we can conclude that it can be an efficient replacement. This application is developed in pure Java using various APIs and packages and boasts features like voice calling, chatting, file sharing etc. The target audience for this software will initially only be organizations (for internal communication), and later it will be released on a larger scale.</description>
        <description>http://thesai.org/Downloads/Volume5No6/Paper_1-An_Open_Source_P2P_Encrypted_VoIP_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A novel hybrid genetic differential evolution algorithm for constrained optimization problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030602</link>
        <id>10.14569/IJARAI.2014.030602</id>
        <doi>10.14569/IJARAI.2014.030602</doi>
        <lastModDate>2014-06-11T11:18:56.2200000+00:00</lastModDate>
        
        <creator>Ahmed Fouad Ali</creator>
        
        <subject>Constrained optimization problems; Genetic algorithms; Differential evolution algorithm; Linear crossover</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(6), 2014</description>
        <description>Most real-life applications have many constraints and are considered constrained optimization problems (COPs). In this paper, we present a new hybrid genetic differential evolution algorithm to solve constrained optimization problems, called the hybrid genetic differential evolution algorithm for solving constrained optimization problems (HGDESCOP). The main purpose of the proposed algorithm is to improve the global search ability of the DE algorithm by combining genetic linear crossover with a DE algorithm to explore more solutions in the search space and to avoid trapping in local minima. In order to verify the general performance of the HGDESCOP algorithm, it has been compared with 4 evolutionary-based algorithms on 13 benchmark functions. The experimental results show that the HGDESCOP algorithm is a promising algorithm and outperforms the other algorithms.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No6/Paper_2-A_novel_hybrid_genetic_differential_evolution_algorithm_for_constrained_optimization_problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Silent Speech Recognition with Arabic and English Words for Vocally Disabled Persons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030601</link>
        <id>10.14569/IJARAI.2014.030601</id>
        <doi>10.14569/IJARAI.2014.030601</doi>
        <lastModDate>2014-06-11T11:18:56.1730000+00:00</lastModDate>
        
        <creator>Sami Nassimi</creator>
        
        <creator>Noora Mohamed</creator>
        
        <creator>Walaa AbuMoghli</creator>
        
        <creator>Mohamed Waleed Fakhr</creator>
        
        <subject>Surface Electromyography; Support Vector Machine; Hidden Markov Models; Silent Speech Recognition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(6), 2014</description>
        <description>This paper presents the results of our research in silent speech recognition (SSR) using surface electromyography (sEMG), which is the technology of recording the electric activation potentials of the human articulatory muscles by surface electrodes in order to recognize speech. Though SSR is still in the experimental stage, a number of potential applications seem evident. Persons who have undergone a laryngectomy, or older people for whom speaking requires a substantial effort, would be able to mouth (vocalize) words rather than actually pronouncing them. Our system was trained with 30 utterances from each of the three subjects on a test vocabulary of 4 phrases, and then tested on 15 new utterances that were not part of the training list. The system achieved an average of 91.11% word accuracy when using a Support Vector Machine (SVM) classifier with English as the base language, and an average of 89.44% word accuracy using Standard Arabic.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No6/Paper_1-Silent_Speech_Recognition_with_Arabic_and_English_Words_for_Vocally_Disabled_Persons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Black-Hole Attack on AODV Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040204</link>
        <id>10.14569/SpecialIssue.2014.040204</id>
        <doi>10.14569/SpecialIssue.2014.040204</doi>
        <lastModDate>2014-05-31T13:44:28.1530000+00:00</lastModDate>
        
        <creator>FIHRI Mohammed</creator>
        
        <creator>OTMANI Mohamed</creator>
        
        <creator>EZZATI Abdellah</creator>
        
        <subject>Black-Hole; AODV; Attack</subject>
        <description>Special Issue(SpecialIssue), 4(2), 2014</description>
        <description>In mobile ad-hoc networks, each node of the network must contribute to the process of communication and routing. However, this contribution can expose the network to several types of attackers. In this paper, we study the impact of one attack, called the BLACK-HOLE attack, on the Ad hoc On-Demand Distance Vector (AODV) routing protocol. In this attack, a malicious node can be placed between two or several nodes and begin dropping all packets from a source, breaking communications between nodes. The vulnerability of the route discovery packets is exploited by the attacker with a simple modification in the routing protocol, in order to control all the traffic between nodes. In this study we simulate the attack with NS2, taking into account the mobility of the network and the attacker, the position of the attacker and finally the number of attackers. We also show that this attack results in a higher number of lost packets compared with AODV in a normal situation.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo10/Paper_4-The_Impact_of_Black-Hole_Attack_on_AODV_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A comparative study of decision tree ID3 and C4.5</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040203</link>
        <id>10.14569/SpecialIssue.2014.040203</id>
        <doi>10.14569/SpecialIssue.2014.040203</doi>
        <lastModDate>2014-05-31T13:44:28.1230000+00:00</lastModDate>
        
        <creator>Badr HSSINA</creator>
        
        <creator>Abdelkarim MERBOUHA</creator>
        
        <creator>Hanane EZZIKOURI</creator>
        
        <creator>Mohammed ERRITALI</creator>
        
        <subject>Data mining; classification algorithm; decision tree; ID3 algorithm; C4.5 algorithm</subject>
        <description>Special Issue(SpecialIssue), 4(2), 2014</description>
        <description>Data mining is a useful tool for discovering knowledge from large data. Different methods &amp; algorithms are available in data mining. Classification is the most common method used for finding mining rules from a large database. The decision tree method is generally used for classification, because it is a simple hierarchical structure for user understanding &amp; decision making. Various data mining algorithms are available for classification, based on artificial neural networks, the nearest neighbour rule &amp; Bayesian classifiers, but decision tree mining is a simple one. The ID3 and C4.5 algorithms were introduced by J.R. Quinlan and produce reasonable decision trees. The objective of this paper is to present these algorithms. First we present the classical algorithm, ID3; then, as the highlight of this study, we discuss C4.5 in more detail, as it is a natural extension of the ID3 algorithm. Finally, we compare these two algorithms with other algorithms such as C5.0 and CART.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo10/Paper_3-A_comparative_study_of_decision_tree_ID3_and_C4.5.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Slots Investment of a dead node in LEACH protocol on Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040202</link>
        <id>10.14569/SpecialIssue.2014.040202</id>
        <doi>10.14569/SpecialIssue.2014.040202</doi>
        <lastModDate>2014-05-31T13:44:28.1070000+00:00</lastModDate>
        
        <creator>Abedelhalim HNINI</creator>
        
        <creator>Abdellah EZZATI</creator>
        
        <creator>Mohammed FIHRI</creator>
        
        <creator>Abdelmajid HAJAMI</creator>
        
        <subject>Network Clustering; routing protocol; Ad hoc Network; LEACH; WSN; Node dies</subject>
        <description>Special Issue(SpecialIssue), 4(2), 2014</description>
        <description>A wireless sensor network (WSN) is a wirelessly interconnected network. WSNs promise a wide range of potential applications, such as surveillance and military and civilian uses, to name just a few. A sensor node senses the environment and delivers data to the sink. Energy saving is one of the key challenges for network lifetime. The LEACH protocol has been incorporated to extend network lifetime. This protocol forms clusters of sensor nodes and elects one of them to become a Cluster Head (CH) to route the cluster's data to the sink. Within a cluster, communication uses the TDMA technique, which organizes the transmission time (time slot) corresponding to each member node of the cluster. When a node dies, the time slot corresponding to this node becomes free. In this paper, we present LEACH's behavior after a node dies, then propose some ideas for reusing its time slots among the alive member nodes to maximize the data reception time and thus the end-to-end throughput. This parameter is very important for real-time data. Finally, we simulate this idea with Network Simulator (NS2) to argue for our propositions.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo10/Paper_2-Time_Slots_Investment_of_a_dead_node_in_LEACH_protocol_on_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Automatic Traffic Signs Recognition Using Fast Polygonal Approximation of Digital Curves and Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040201</link>
        <id>10.14569/SpecialIssue.2014.040201</id>
        <doi>10.14569/SpecialIssue.2014.040201</doi>
        <lastModDate>2014-05-31T13:44:28.0600000+00:00</lastModDate>
        
        <creator>Abderrahim SALHI</creator>
        
        <creator>Brahim MINAOUI</creator>
        
        <creator>Mohamed FAKIR</creator>
        
        <subject>Traffic sign; recognition; detection; pattern matching; image processing; Polygonal Approximation of digital curves</subject>
        <description>Special Issue(SpecialIssue), 4(2), 2014</description>
        <description>Traffic Sign Detection and Recognition (TSDR) offers many features that help the driver improve safety and comfort; today it is widely used in the automotive manufacturing sector. A robust detection and recognition system is a good solution for driver assistance systems: it can warn the driver and control or prohibit certain actions, which significantly increases driving safety and comfort. This paper presents a study to design, implement and test a method for the detection and recognition of road signs based on computer vision. The approach adopted in this work consists of two main modules: a detection module, based on color segmentation and edge detection, which identifies areas of the scene that may contain road signs; and a recognition module, based on multilayer perceptrons, whose role is to match the detected patterns with the visual information of the corresponding road signs. These two modules are developed using the C/C++ language and the OpenCV library. The tests are performed on a set of real traffic images to show the performance of the developed system.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo10/Paper_1-Robust_Automatic_Traffic_Signs_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving for the RC4 stream cipher state register using a genetic algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050533</link>
        <id>10.14569/IJACSA.2014.050533</id>
        <doi>10.14569/IJACSA.2014.050533</doi>
        <lastModDate>2014-05-31T13:15:55.0470000+00:00</lastModDate>
        
        <creator>Benjamin Ferriman</creator>
        
        <creator>Charlie Obimbo</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The RC4 stream cipher has shown itself to be quite resilient to cryptanalysis for the 26 years it has been around. The algorithm is still one of the most widely used methods of encryption over the Internet today, being implemented through the Secure Socket Layer and Transport Layer Security protocols. Genetic algorithms are a sub-class of evolutionary algorithms that have been used to help solve many different optimization problems in a variety of disciplines. In this paper we examine the abilities of the genetic algorithm as a tool to help solve for the permutation that is stored as the state register of the RC4 stream cipher. Finally, we show that on average the genetic algorithm can solve 100% of the keystream in 2121.5 generations.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_33-Solving_for_the_RC4_stream_cipher_state_register.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Herbal Leave Recognition System Based on Dirichlet Laplacian Eigenvalues</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050532</link>
        <id>10.14569/IJACSA.2014.050532</id>
        <doi>10.14569/IJACSA.2014.050532</doi>
        <lastModDate>2014-05-31T13:15:55.0300000+00:00</lastModDate>
        
        <creator>Mahmoud Elgamal</creator>
        
        <creator>Mahmoud Youness R. Alaidy</creator>
        
        <subject>Eigenvalues; Finite difference method; Curve descriptor; Binary image classification; Noise; Leave recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The identification and recognition of herbal plant green leaves is essential in botanical study. In [8], a Thai herb leaf image recognition system was used to recognize leaves with an accuracy of 93.29%. In this paper, we propose a leaf recognition system based on the eigenvalues of the Dirichlet Laplacian, which are used to generate three different sets of features for shape analysis and classification in binary images [4]. First, leaf images are preprocessed to remove the unwanted background and converted to binary form, and are then used to build the image database; finally, queries are made on the system. The correct classification rate is 100% without noise and 90% with noise.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_32-Herbal_Leave_Recognition_System_Based_on.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A fast cryptosystem using reversible cellular automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050531</link>
        <id>10.14569/IJACSA.2014.050531</id>
        <doi>10.14569/IJACSA.2014.050531</doi>
        <lastModDate>2014-05-31T13:15:55.0170000+00:00</lastModDate>
        
        <creator>Said BOUCHKAREN</creator>
        
        <creator>Saiida LAZAAR</creator>
        
        <subject>AES; Cellular automata; Diffusion; Cryptosystem; MARGOLUS neighborhood</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>This article defines a new algorithm for a secret-key cryptosystem using cellular automata, which are a promising approach to cryptography. Our algorithm is based on cellular automata built on a set of reversible rules which have the ability to construct unpredictable secret keys using the MARGOLUS neighborhood. To prove the feasibility of the algorithm, we present some tests of encryption, decryption and diffusion; a CPU time comparison with a block encryption algorithm, such as AES-256, is established. Moreover, the security of the algorithm is proved, and the implemented algorithm resists brute-force attacks.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_31-A_fast_cryptosystem_using_reversible_cellular_automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Eye Blink Detection Method for disabled-helping domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050530</link>
        <id>10.14569/IJACSA.2014.050530</id>
        <doi>10.14569/IJACSA.2014.050530</doi>
        <lastModDate>2014-05-31T13:15:55.0000000+00:00</lastModDate>
        
        <creator>Assist. Prof. Aree A. Mohammed</creator>
        
        <creator>MSc. Student Shereen A. Anwer</creator>
        
        <subject>eye detection; eye tracking; eye blinking; smoothing filter; detection accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In this paper, we present a real-time method based on video and image processing algorithms for eye blink detection. The motivation of this research is the need of disabled people who cannot control calls through direct human-mobile interaction to do so without using their hands. A Haar Cascade Classifier is applied for face and eye detection to obtain eye and facial axis information. In addition, the same classifier is used, based on Haar-like features, to find the relationship between the eyes and the facial axis for positioning the eyes. An efficient eye tracking method is proposed which uses the position of the detected face. Finally, eye blink detection based on the eyelid state (closed or open) is used for controlling Android mobile phones. The method is applied with and without a smoothing filter to show the improvement in detection accuracy. The application is used in real time to study the effect of light and of the distance between the eyes and the mobile device, in order to evaluate the detection accuracy and overall accuracy of the system. Test results show that our proposed method provides 98% overall accuracy and 100% detection accuracy at a distance of 35 cm under artificial light.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_30-Efficient_Eye_Blink_Detection_Method_for_disabled-helping_domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud and Web Technologies: Technical Improvements and Their Implications on E-Governance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050529</link>
        <id>10.14569/IJACSA.2014.050529</id>
        <doi>10.14569/IJACSA.2014.050529</doi>
        <lastModDate>2014-05-31T13:15:54.9830000+00:00</lastModDate>
        
        <creator>Danish Manzoor</creator>
        
        <creator>Ashraf Ali</creator>
        
        <creator>Dr. Ateeq Ahmad</creator>
        
        <subject>Cloud; Web Semantics; Burst; SaaS; PaaS; IaaS; G-Cloud; M-Cloud; TCO; ICT, E-Governance; ITIL; Cloud legacy; Legacy cloud system silos; Lock-In; RDF; XML; URI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Cloud computing technology helps to improve ICT-based services such as e-governance, and creates new business opportunities and implementations. Cloud computing is an evolution of web-based Internet applications and describes an advanced consumption, supplement and delivery model for Information Technology and ICT services based on the global network. It enables the allocation of resources and costs across a large pool of users while providing on-demand services with dynamic scalability. We can therefore say that cloud computing is a technology with the capability and potential to offer solutions for e-governance. Cloud computing provides service-oriented access to users with the least compromise on security. In today&#39;s era, software and its services are the biggest cost concern in implementing an IT environment in an organization. The cloud has the capability to reduce this cost dramatically for all kinds of organizations, whether a small-scale industry or a big corporate organization. This makes the cloud an excellent platform to host e-governance services and applications. The basic intention of this paper is to describe, in sequence, the improvements happening in cloud and web technologies. If we apply them to existing e-governance applications running under various departments, we can minimize some of the most affected components of application software: the cost of executing the software, the time needed to use and run the application, the storage capacity for data, and the network infrastructure used for the functioning of the application software.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_29-Cloud_and_Web_Technologies_Technical_Improvements_and_Their_Implications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Opinion Mining and Analysis for Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050528</link>
        <id>10.14569/IJACSA.2014.050528</id>
        <doi>10.14569/IJACSA.2014.050528</doi>
        <lastModDate>2014-05-31T13:15:54.9700000+00:00</lastModDate>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Amal H. Gigieh</creator>
        
        <creator>Izzat M. Alsmadi</creator>
        
        <creator>Heider A. Wahsheh</creator>
        
        <creator>Mohamad M. Haidar</creator>
        
        <subject>Sentiment Analysis; Arabic Sentiment Analysis; Opinion mining; Opinion Subjectivity; Opinion Polarity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Social media constitutes a major component of Web 2.0 and includes social networks, blogs, forum discussions, micro-blogs, etc. Users of social media generate a huge volume of reviews and comments on a daily basis. These reviews and comments reflect the opinions of users about different issues, such as products, news, entertainment, or sports. Therefore, different establishments may need to analyze these reviews and comments. For example, it is essential for companies to know the pros and cons of their products or services in the eyes of customers, and governments may want to know the attitude of people towards certain decisions, services, etc. Although the manual analysis of textual reviews and comments can be more accurate than automatic methods, it is time consuming, expensive, and can be subjective. Furthermore, the huge amount of data contained in social networks can make it impractical to perform the analysis manually. This paper focuses on evaluating Arabic social content. Currently, the Middle East is an area rich in major political and social reforms, and social media can be a rich source of information for evaluating such contexts. In this research we developed an opinion mining and analysis tool that handles different forms of the Arabic language (i.e., Standard or MSA, and colloquial). The tool accepts comments and opinions as input and generates polarity-based outputs for the comments. Additionally, the tool can determine whether a comment or review is subjective or objective, positive or negative, and strong or weak. The evaluation of the performance of the developed tool showed that it yields more accurate results when applied to domain-based Arabic reviews rather than general Arabic reviews.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_28-Opinion_Mining_and_Analysis_for_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secure Electronic Transaction Payment Protocol Design and Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050527</link>
        <id>10.14569/IJACSA.2014.050527</id>
        <doi>10.14569/IJACSA.2014.050527</doi>
        <lastModDate>2014-05-31T13:15:54.9530000+00:00</lastModDate>
        
        <creator>Houssam El Ismaili</creator>
        
        <creator>Hanane Houmani</creator>
        
        <creator>Hicham Madroumi</creator>
        
        <subject>E-commerce; Secure Socket Layer (SSL); Secure Electronic Transaction (SET); 3D-Secure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Electronic payment is a very important step in the electronic business system, and its security must be ensured. SSL/TLS and SET are two widely discussed means of securing online credit card payments. Because of implementation issues, SET has not really been adopted by e-commerce participants, whereas SSL/TLS, despite the fact that it does not address all security issues, is commonly used for Internet e-commerce security. The three-domain (3D) security schemes, including 3-D Secure and 3D SET, have recently been proposed as ways of improving e-commerce transaction security. Based on our research into SSL, SET, the 3D security schemes and the requirements of electronic payment, we designed a secure and efficient e-payment protocol. The new protocol offers an extra layer of protection for cardholders and merchants: customers are asked to enter an additional password after checkout completion to verify that they are truly the cardholder, and the authentication is done directly between the cardholder and the card issuer using the issuer security certificate, without involving a third party (Visa, MasterCard).</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_27-A_Secure_Electronic_Transaction_Payment_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm for Summarization of Paragraph Up to One Third with the Help of Cue Words Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050526</link>
        <id>10.14569/IJACSA.2014.050526</id>
        <doi>10.14569/IJACSA.2014.050526</doi>
        <lastModDate>2014-05-31T13:15:54.9370000+00:00</lastModDate>
        
        <creator>Noopur Srivastava</creator>
        
        <creator>Bineet Kumar Gupta</creator>
        
        <subject>Data Mining; Data Warehouse; Artificial Intelligence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In the fast-growing information era, the use of technology is more precise than completing an assignment manually. Digital information technology creates a knowledge-based society with a high-tech global economy that spreads over and influences the corporate and service sectors, enabling them to operate in a more efficient and convenient way. Here an attempt is made at research based on extract technology, in which data can be refined and sourced with certainty and relevance. The application of artificial intelligence matched with the theories of machine learning proves to be very effective. Sometimes the summarization of a paragraph is required rather than a page or pages. The auto-summarization model is a content-agnostic summarization technology that automatically parses news, information, documents and more into relevant and contextually accurate abbreviated summaries; the concept is to condense a whole paragraph to one third of its length. The auto-summarization technology reads a document much better than a manually prepared summary, with keywords and key phrases accurately weighted as they are found in the document, text or web page.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_26-An_Algorithm_for_Summarization_of_Paragraph_Up_to_One_Third_with_the_Help_of_Cue_Words_Comparison.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Digital Image Processing to Make an Intelligent Gate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050525</link>
        <id>10.14569/IJACSA.2014.050525</id>
        <doi>10.14569/IJACSA.2014.050525</doi>
        <lastModDate>2014-05-31T13:15:54.9200000+00:00</lastModDate>
        
        <creator>Sundus K. E.</creator>
        
        <creator>AL_Mamare S. H.</creator>
        
        <subject>image processing; color recognition; patch recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>This paper presents an automatic system for controlling a building gate based on digital image processing. The system begins with a digital camera, which captures a picture of a vehicle that intends to enter the building and sends the picture to the computer. Image analyses are performed to detect and recognize the vehicle and to match the vehicle’s image against the stored database of permissible vehicles. If the vehicle’s image matches any image in the database, the computer sends a signal to the electro-mechanical part that controls the gate to open and permit the vehicle to enter the building; otherwise, it plays an apology voice message. The system is regarded as empirical and was applied to various types of vehicles. The results obtained were accurate, and the system was successful for all vehicles used in the system test.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_25-Using_Digital_Image_Processing_to_Make_an_Intelligent_Gate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Multi Agent System Architecture for IT Governance Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050524</link>
        <id>10.14569/IJACSA.2014.050524</id>
        <doi>10.14569/IJACSA.2014.050524</doi>
        <lastModDate>2014-05-31T13:15:54.9070000+00:00</lastModDate>
        
        <creator>S. ELHASNAOUI</creator>
        
        <creator>H. MEDROMI</creator>
        
        <creator>S.FARIS</creator>
        
        <creator>H.IGUER</creator>
        
        <creator>A. SAYOUTI</creator>
        
        <subject>IT Governance; Multi Agent System; COBIT 5; ITIL V3; ISO/IEC 27001/27002; Process; Information System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>This paper presents a multi-agent architecture which facilitates the integration of three major IT governance frameworks, COBIT 5, ITIL V3 and ISO/IEC 27002, to optimize the construction of a distributed system. This architecture proposes a new and easier method to develop a distributed multi-agent system, in which the agents involved can communicate in a distributed way thanks to the functionalities offered by the system. Finally, it gives an overview of the implementation of a prototype of the proposed solution, limited for the moment to the integration of the processes most used in the majority of information systems.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_24-Designing_a_Multi_Agent_System_Architecture_for_IT_Governance_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Hybrid Controller for DBMS Performance Tuning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050523</link>
        <id>10.14569/IJACSA.2014.050523</id>
        <doi>10.14569/IJACSA.2014.050523</doi>
        <lastModDate>2014-05-31T13:15:54.8900000+00:00</lastModDate>
        
        <creator>Sherif Mosaad Abdel Fattah</creator>
        
        <creator>Maha Attia Mahmoud</creator>
        
        <creator>Laila Abd-Ellatif Abd-Elmegid</creator>
        
        <subject>automatic database tuning; fuzzy logic; adaptive controller; regression; self-tuning; DBMS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The performance tuning process of a database management system (DBMS) is an expensive, complex and time-consuming process to be handled by human experts. A proposed adaptive controller is developed that utilizes a hybrid model of fuzzy logic and regression analysis to tune the memory-resident data structures of the DBMS. The fuzzy logic module uses a flexible rule matrix with adaptation techniques to deal with fluctuations and abrupt changes in the operating environment. The regression module predicts fluctuations in the operating environment so that the controller can take action in advance. Experimental results on standard benchmarks showed significant performance enhancement compared to built-in self-tuning features.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_23-An_Adaptive_Hybrid_Controller_for_DBMS_Performance_Tuning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm Research  for Supply Chain Management Optimization Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050522</link>
        <id>10.14569/IJACSA.2014.050522</id>
        <doi>10.14569/IJACSA.2014.050522</doi>
        <lastModDate>2014-05-31T13:15:54.8900000+00:00</lastModDate>
        
        <creator>Ruomeng Kong</creator>
        
        <creator>Chengjiang Yin</creator>
        
        <subject>supply chain management optimization model; the extended linear complementarity problem; error bound; algorithm; quadratic convergence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In this paper, we consider the extended linear complementarity problem on supply chain management optimization model. We first give a global error bound for the extended linear complementarity problem, and then propose a new type of algorithm based on the error bound estimation.  Both the global and quadratic rate of convergence are established. These conclusions can be viewed as extensions of previously known results.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_22-An_Algorithm_Research_for_Supply_Chain_Management_Optimization_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Forecasting Accuracy in the Case of Intermittent Demand Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050521</link>
        <id>10.14569/IJACSA.2014.050521</id>
        <doi>10.14569/IJACSA.2014.050521</doi>
        <lastModDate>2014-05-31T13:15:54.8730000+00:00</lastModDate>
        
        <creator>Daisuke Takeyasu</creator>
        
        <creator>Asami Shitara</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>intermittent demand forecasting; minimum variance; exponential smoothing method; trend</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In forecasting, there are many kinds of data. Stationary time series data are relatively easy to forecast, but random data are very difficult to forecast. Intermittent data are often seen in industry, but they are rather difficult to forecast in general. In recent years, the need for intermittent demand forecasting has been increasing because of the constraints of strict Supply Chain Management, and how to improve forecasting accuracy is an important issue. Much research has been done on this, but there is room for improvement. In this paper, a new cumulative forecasting method is proposed. The data are accumulated, and the following method is applied to the cumulated time series to improve forecasting accuracy: trend removal by a combination of linear, 2nd-order non-linear and 3rd-order non-linear functions is applied to the production data of an X-ray image intensifier tube device and a diagnostic X-ray image processing apparatus. The forecasting results are compared with those of the non-cumulative forecasting method. The new method is shown to be useful for forecasting intermittent demand data, and its effectiveness should be examined in further cases.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_21-Improving_Forecasting_Accuracy_in_the_Case_of_Intermittent_Demand_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web and Telco Service Integration: A Dynamic and Adaptable Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050520</link>
        <id>10.14569/IJACSA.2014.050520</id>
        <doi>10.14569/IJACSA.2014.050520</doi>
        <lastModDate>2014-05-31T13:15:54.8600000+00:00</lastModDate>
        
        <creator>Juli&#225;n Rojas</creator>
        
        <creator>Leandro Ord&#243;&#241;ez-Ante</creator>
        
        <creator>Juan Carlos Corrales</creator>
        
        <subject>Web Services; Telco Services; JAIN SLEE; Integration; Adaptation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The current evolution of the Web, known as Web 2.0 and characterized by providing a diverse global service ecosystem, has marked a change in the role played by telecom operators. In order to maintain high competitiveness in a dynamic market and generate new revenue sources, many operators seek to leverage the wide variety of existing Web services and integrate them with their infrastructure capabilities. Such integration leads to various challenges from a technological perspective, among which the heterogeneity of networks and the need for highly qualified personnel to develop these services stand out. This paper proposes a mechanism for integrating Web and Telco services which facilitates and speeds up the development of new services, considering the dynamic conditions of their execution.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_20-Web_and_Telco_Service_Integration_A_Dynamic_and_Adaptable_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Text Messaging (SMS) as a Communication Tool for Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050519</link>
        <id>10.14569/IJACSA.2014.050519</id>
        <doi>10.14569/IJACSA.2014.050519</doi>
        <lastModDate>2014-05-31T13:15:54.8430000+00:00</lastModDate>
        
        <creator>Dr.Daragh Naughton</creator>
        
        <subject>SMS; text message; higher education; motivation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Since 2011, the Limerick Institute of Technology (School of Applied Science Engineering &amp; Technology) has actively engaged in a course of research to determine whether SMS can be used to (1) increase students’ preparation for class, (2) increase their motivational levels towards learning and (3) assist with memory retention. The purpose of this paper is to introduce the concept of the technology and to summarise the academic arguments that have been made both for and against the use of such technology for teaching and learning activities in higher education. A full quantitative and qualitative analysis of the research will take place in 2014. The project forms part of a school-wide Scholarship of Teaching &amp; Learning (SoTL) approach. This paper will be of interest to academic managers, program managers, e-learning support staff, administrators and lecturers.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_19-A_Review_of_Text_Messaging_SMS_as_a_Communication_Tool_for_Higher_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Telugu Bigram Splitting using Consonant-based and Phrase-based Splitting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050518</link>
        <id>10.14569/IJACSA.2014.050518</id>
        <doi>10.14569/IJACSA.2014.050518</doi>
        <lastModDate>2014-05-31T13:15:54.8130000+00:00</lastModDate>
        
        <creator>T. Kameswara Rao</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Bigram; n-gram; consonant based splitting; phrase based splitting</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Splitting is a conventional process in most Indian languages, governed by their grammar rules. It is called ‘pada vicchEdanam’ (a Sanskrit term for word splitting) and is widely used in most Indian languages. Splitting plays a key role in Machine Translation (MT), particularly when the source language (SL) is an Indian language. Though this splitting may not succeed completely in extracting the root words of which a compound is formed, it has a considerable impact on Natural Language Processing (NLP) as an important phase. Though there are many types of splitting, this paper considers only consonant-based and phrase-based splitting.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_18-Telugu_Bigram_Splitting_using_Consonant-based_and_Phrase-based_Splitting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Fuzzy Multi Criteria Decision Making Model with A proposed Polygon Fuzzy Number</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050517</link>
        <id>10.14569/IJACSA.2014.050517</id>
        <doi>10.14569/IJACSA.2014.050517</doi>
        <lastModDate>2014-05-31T13:15:54.7970000+00:00</lastModDate>
        
        <creator>Samah Bekheet</creator>
        
        <creator>Ammar Mohammed</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>Fuzzy multi criteria decision making; linguistic values; polygon fuzzy number; level set</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Decisions in real-world applications are often made in the presence of conflicting, uncertain, incomplete and imprecise information. The fuzzy multi-criteria decision making (FMCDM) approach provides a powerful framework for drawing rational decisions under uncertainty given in the form of linguistic values. Linguistic values are usually represented as fuzzy numbers, and most researchers adopt either triangular or trapezoidal fuzzy numbers. Since triangles, intervals, and even singletons are special cases of trapezoidal fuzzy numbers, most researchers consider trapezoidal fuzzy numbers to be generalized fuzzy numbers (GFN). In this paper, we introduce the polygon fuzzy number (PFN) as the actual form of the GFN. The proposed form of the PFN gives decision makers greater flexibility to express their own linguistic values than other forms of fuzzy numbers. The given illustrative example demonstrates this ability for better handling of FMCDM problems.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_17-An_Enhanced_Fuzzy_Multi_Criteria_Decision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Private Clouds Eucalyptus versus CloudStack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050516</link>
        <id>10.14569/IJACSA.2014.050516</id>
        <doi>10.14569/IJACSA.2014.050516</doi>
        <lastModDate>2014-05-31T13:15:54.7800000+00:00</lastModDate>
        
        <creator>Mumtaz M.Ali AL-Mukhtar</creator>
        
        <creator>Asraa Abdulrazak Ali Mardan</creator>
        
        <subject>Cloud Computing; CloudStack; Eucalyptus; IaaS; Virtual Machine; Performance Evaluation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The number of open-source cloud management platforms is increasing day by day. The features of these platforms vary significantly, which makes it difficult for cloud consumers to choose software based on their business and scientific requirements. This paper evaluates Eucalyptus and CloudStack, the two most popular open-source platforms used to build private Infrastructure as a Service (IaaS) clouds. The performance of virtual machines (VMs) initiated and managed by Eucalyptus and CloudStack is evaluated in terms of CPU utilization, memory bandwidth, disk I/O access speed, and network performance using suitable benchmarks. Different VM management operations such as add, delete, and live migration are also assessed to determine which cloud solution is more suitable for adoption as a private cloud. As a further performance test, a simple web application has been implemented on both clouds to evaluate their suitability for web application hosting.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_16-Performance_Evaluation_of_Private_Clouds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computation of Single Beam Echo Sounder Signal for Underwater Objects Detection and Quantification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050514</link>
        <id>10.14569/IJACSA.2014.050514</id>
        <doi>10.14569/IJACSA.2014.050514</doi>
        <lastModDate>2014-05-31T13:15:54.7670000+00:00</lastModDate>
        
        <creator>Henry M. Manik</creator>
        
        <creator>Asep Mamun</creator>
        
        <creator>Totok Hestirianoto</creator>
        
        <subject>single beam; echo sounder; backscattering; algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Underwater acoustic methods have been extensively used to locate and identify marine objects. These applications include locating underwater vehicles, finding shipwrecks, imaging sediments, and imaging bubble fields. The ocean is fairly transparent to sound and opaque to all other sources of radiation. Acoustic technology is the most effective tool for monitoring this environment because of sound&#39;s ability to propagate long distances in water. We used a single beam echo sounder to discriminate underwater objects, developed an algorithm, and applied it to detect and quantify underwater objects such as fish, sea grass, and the seabed. We found that the detected targets have different backscatter values.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_14-Computation_of_Single_Beam_Echo_Sounder_Signal_for_Underwater_Objects_Detection_and_Quantification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inter-organizational Workflow for Intelligent Audit of Information Technologies in terms of Entreprise Business Processes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050515</link>
        <id>10.14569/IJACSA.2014.050515</id>
        <doi>10.14569/IJACSA.2014.050515</doi>
        <lastModDate>2014-05-31T13:15:54.7670000+00:00</lastModDate>
        
        <creator>Meriyem Chergui</creator>
        
        <creator>Hicham Medromi</creator>
        
        <creator>Adil Sayouti</creator>
        
        <subject>Inter-organizational Workflow; COBIT; Audit; Information System; IT Governance; Business Processes; Multi-Agent System; Semantic Web; Ontology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>IT governance is critical to the success of enterprise governance, providing effective, efficient, and measurable improvements in business processes by ensuring that information technologies are in line with business objectives. Consequently, this paper provides an intelligent solution for auditing information system business processes using the IT governance framework COBIT. The particularity of this solution is the use of inter-organizational workflows (IOW), multi-agent systems, and the semantic web. An inter-organizational workflow is used to make autonomous, heterogeneous, and distributed organizational processes cooperate toward a common goal. In this paper's case, the goal is the dynamic alignment of every business process with the appropriate information system component, through permanent interaction with the different stakeholders. Multi-agent systems (MAS) are known as the natural solution for IOW modeling since they provide dynamic modification and execution of adaptive processes. In addition, MAS can describe the distribution and coordination of IOW organizations at the micro and macro levels, with high-level communication protocols. As for the semantic web, the proposed IT governance IOW based on COBIT has the principal role of matching the enterprise's real business goals with COBIT business goals, so the use of the semantic web is a way to share business terminology and avoid semantic conflicts for a correct and efficient audit operation.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_15-Inter-organizational_Workflow_for_Intelligent_Audit_of_Information_Technologies_in_terms_of_Entreprise_Business_Processes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Green Technology, Cloud Computing and Data Centers: the Need for Integrated Energy Efficiency Framework and Effective Metric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050513</link>
        <id>10.14569/IJACSA.2014.050513</id>
        <doi>10.14569/IJACSA.2014.050513</doi>
        <lastModDate>2014-05-31T13:15:54.7500000+00:00</lastModDate>
        
        <creator>Nader Nada</creator>
        
        <creator>Abusfian Elgelany</creator>
        
        <subject>Cloud Computing; green Cloud; Datacenter; Energy efficiency</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Energy efficiency (EE), energy consumption cost, and environmental impact are vibrant challenges for cloud computing and data centers. Reducing energy consumption and emissions of carbon dioxide (CO2) in data centers represents an open area and a driving force for future research on green data centers. Our literature review reveals that there are currently several energy efficiency frameworks for data centers that combine a green IT architecture with specific activities and procedures leading to decreased environmental impact and lower CO2 emissions. The currently available frameworks have pros and cons, which is why there is an urgent need for an integrated criterion for selecting and adopting an energy efficiency framework for data centers. The required framework criteria should also consider social network applications as a vital factor in elevating energy consumption, as well as a high potential for better energy efficiency in data centers. Additionally, in this paper we highlight the importance of identifying an efficient and effective energy efficiency metric that can be used to measure and determine data center efficiency and performance, combined with a sound and empirically validated integrated EE framework.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_13-Green_Technology_Cloud_Computing_and_Data_Centers_the_Need_for_Integrated_Energy_Efficiency_Framework_and_Effective_Metric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050512</link>
        <id>10.14569/IJACSA.2014.050512</id>
        <doi>10.14569/IJACSA.2014.050512</doi>
        <lastModDate>2014-05-31T13:15:54.7330000+00:00</lastModDate>
        
        <creator>Mostafa Baghouri</creator>
        
        <creator>Saad Chakkor</creator>
        
        <creator>Abderrahmane Hajraoui</creator>
        
        <subject>Heterogeneous wireless sensor networks; Clustering based algorithm; Energy-efficiency; TDEEC Protocol; Network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Ameliorating the lifetime of a heterogeneous wireless sensor network is an important task because the sensor nodes are limited in energy resources. The best way to improve a WSN's lifetime is through clustering-based algorithms, in which each cluster is managed by a leader called a cluster head (CH). Every other node must communicate with this CH to send its sensed data. The nodes nearest the base station must also send their data to their leaders, which causes a loss of energy. In this paper, we propose a new approach to ameliorate the threshold distributed energy-efficient clustering protocol for heterogeneous wireless sensor networks by excluding the nodes closest to the base station from the clustering process. We show by simulation in MATLAB that the proposed approach noticeably increases the number of received packet messages and prolongs the lifetime of the network compared to the TDEEC protocol.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_12-Ameliorate_Threshold_Distributed_Energy_Efficient_Clustering_Algorithm_for_Heterogeneous_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Effect of Diversity Implementation on Precision in Multicriteria Collaborative Filtering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050511</link>
        <id>10.14569/IJACSA.2014.050511</id>
        <doi>10.14569/IJACSA.2014.050511</doi>
        <lastModDate>2014-05-31T13:15:54.7200000+00:00</lastModDate>
        
        <creator>Wiranto</creator>
        
        <creator>Edi Winarko</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>Algorithms; multicriteria; content; collaborative; filtering; systems; similarity; diversity; precision</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>This research was triggered by criticism of the emergence of homogeneity in recommendations within collaborative filtering based recommender systems, which put similarity as the main principle of the algorithm. To overcome the problem of homogeneity, this study proposes a novelty: diversity of recommendations applied to multicriteria collaborative filtering based document recommender systems. The diversity of recommendations was developed with two techniques: the first compares the similarity of content, and the second uses a variation of the criteria. The application of diversity, both content- and criteria-based, was proven to have a sufficiently significant influence on the increase of recommendation precision.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_11-The_Effect_of_Diversity_Implementation_on_Precision_in_Multicriteria_Collaborative_Filtering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Game Tree Searching Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050510</link>
        <id>10.14569/IJACSA.2014.050510</id>
        <doi>10.14569/IJACSA.2014.050510</doi>
        <lastModDate>2014-05-31T13:15:54.7030000+00:00</lastModDate>
        
        <creator>Ahmed A. Elnaggar</creator>
        
        <creator>Mostafa Abdel Aziem</creator>
        
        <creator>Mahmoud Gadallah</creator>
        
        <creator>Hesham El-Deeb</creator>
        
        <subject>game tree search; searching; evaluation; parallel; distributed; GPU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In this paper, a comprehensive survey of game tree searching methods that can be used to find the best move in two-player zero-sum computer games is introduced. The purpose of this paper is to discuss, compare, and analyze various sequential and parallel game tree algorithms, including some enhancements to them. Furthermore, a number of open research areas and suggestions for future work in this field are mentioned.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_10-A_Comparative_Study_of_Game_Tree_Searching_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Method to Improve Forecasting Accuracy in the Case of Sanitary Materials Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050509</link>
        <id>10.14569/IJACSA.2014.050509</id>
        <doi>10.14569/IJACSA.2014.050509</doi>
        <lastModDate>2014-05-31T13:15:54.6870000+00:00</lastModDate>
        
        <creator>Daisuke Takeyasu</creator>
        
        <creator>Hirotake Yamashita</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>component; minimum variance; exponential smoothing method; forecasting; trend; sanitary materials</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Sales forecasting is a starting point of supply chain management, and its accuracy influences business management significantly. In industry, how to improve the accuracy of forecasts such as sales and shipping is an important issue. In this paper, a hybrid method is introduced and several methods are compared. Noting that the equation of the exponential smoothing method (ESM) is equivalent to the (1,1) order ARMA model equation, a new method for estimating the smoothing constant in the exponential smoothing method that satisfies minimum variance of forecasting error was previously proposed by Takeyasu et al. Firstly, we estimate the ARMA model parameters and then estimate the smoothing constants. In this paper, combining the trend removal method with this method, we aim to improve forecasting accuracy. Trend removal by the combination of linear, 2nd order non-linear, and 3rd order non-linear functions is carried out on the manufacturer’s data of sanitary materials. The new method is shown to be useful for time series that have various trend characteristics and a rather strong seasonal trend. The effectiveness of this method should be examined in various cases.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_9-A_Hybrid_Method_to_Improve_Forecasting_Accuracy_in_the_Case_of_Sanitary_Materials_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building BTO System in the Sanitary Materials Manufacturer with the Utilization of the High Accuracy Forecasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050508</link>
        <id>10.14569/IJACSA.2014.050508</id>
        <doi>10.14569/IJACSA.2014.050508</doi>
        <lastModDate>2014-05-31T13:15:54.6570000+00:00</lastModDate>
        
        <creator>Hirotake Yamashita</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>BTO; forecasting; lead time; stock; sanitary materials; AR model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>In recent years, the BTO (Build to Order) system has been prevailing. It pursues short lead times, minimum stock, and thereby minimum cost, but high-accuracy demand forecasting is inevitable for parts manufacturers. In this paper, a well-organized BTO system for a sanitary materials manufacturer is sought with the aid of the high-accuracy demand forecasting newly developed by us. Noting that the equation of the ESM is equivalent to the (1,1) order ARMA model equation, a new method for estimating the smoothing constant in the ESM was derived. A trend removal method was also devised. An AR model is also used for forecasting: after removing the trend, the AR model is utilized and forecasting is executed. The more accurate of the two forecasts was chosen as the final forecast. Thus, we could obtain high-accuracy demand forecasting. These methods are examined on the data of a sanitary materials manufacturer, and the BTO system is newly built utilizing this method. Further development of this system should be performed hereafter.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_8-Building_BTO_System_in_the_Sanitary_Materials_Manufacturer_with_the_Utilization_of_the_High_Accuracy_Forecasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal for Two Enhanced NTRU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050507</link>
        <id>10.14569/IJACSA.2014.050507</id>
        <doi>10.14569/IJACSA.2014.050507</doi>
        <lastModDate>2014-05-31T13:15:54.5630000+00:00</lastModDate>
        
        <creator>Ahmed Tariq Sadiq</creator>
        
        <creator>Najlaa Mohammad Hussein</creator>
        
        <creator>Suha Abdul Raheem Khoja</creator>
        
        <subject>NTRU; security; sound</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Sound is very widely used in communication. To ensure secure communication, a cryptographic data scheme is used. Secure sound is needed in many fields such as the military, business, banking, and electronic commerce. There is also an increasing demand for secured sound in network communication. Several symmetric and asymmetric algorithms are used for sound encryption. In this work, NTRU, the latest in the line of public key cryptosystems, is enhanced in two methods and used for encrypting sound files after converting the sound into text. In the proposed methods the message is encrypted one character at a time; since NTRU encrypts only prime numbers, 7 bits of each character are encrypted and the eighth bit is left unencrypted. In method I, the NTRU algorithm is enhanced by adding the result of a mathematical equation of one variable to the message; the resulting encrypted bit is then fed back and added to the next bit of the message, and this procedure is repeated for the subsequent bits of the message. In method II, the NTRU algorithm is enhanced by adding the subsequent states of an LFSR (Linear Feedback Shift Register) to the subsequent bytes of the message. The proposed methods are tested on several sound files; the results show that methods I and II maintain approximately the same encryption and decryption time as the original method while generating more complex encryption.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_7-Proposal_for_Two_Enhanced_NTRU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OJADEAC: An Ontology Based Access Control Model for JADE Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050506</link>
        <id>10.14569/IJACSA.2014.050506</id>
        <doi>10.14569/IJACSA.2014.050506</doi>
        <lastModDate>2014-05-31T13:15:54.5470000+00:00</lastModDate>
        
        <creator>Ban Sharief Mustafa</creator>
        
        <creator>Najla Aldabagh</creator>
        
        <subject>Java Agent Development Framework (JADE); JADE-S; Ontology-Based Access Control Model; Web Ontology Language (OWL) </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Java Agent Development Framework (JADE) is a software framework that eases the development of multi-agent applications in compliance with the Foundation for Intelligent Physical Agents (FIPA) specifications. JADE proposes new infrastructure solutions to support the development of useful and convenient distributed applications. Security is one of the most important issues in implementing and deploying such applications. The JADE-S security add-on is one of the most popular security solutions for the JADE platform. It provides several security services, including authentication, authorization, signature, and encryption. The authorization service grants the authority to perform an action based on a set of permission objects attached to every authenticated user. This service has several drawbacks when implemented in scalable, distributed, context-aware applications. In this paper, an ontology-based access control model called OJADEAC is proposed for the JADE platform, combining Semantic Web technologies with a context-aware policy mechanism to overcome the shortcomings of this service. The access control model is represented by a semantic ontology and a set of two-level semantic rules representing platform- and application-specific policy rules. The OJADEAC model is distributed, intelligent, dynamic, and context-aware, and uses a reasoning engine to infer access decisions based on ontology knowledge.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_6-OJADEAC_An_Ontology_Based_Access_Control_Model_for_JADE_Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Satellite Motion under the Effects of the Earth’s Gravity, Drag Force and Solar Radiation Pressure in terms of the KS-regularized Variables</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050505</link>
        <id>10.14569/IJACSA.2014.050505</id>
        <doi>10.14569/IJACSA.2014.050505</doi>
        <lastModDate>2014-05-31T13:15:54.5300000+00:00</lastModDate>
        
        <creator>Hany R. Dwidar</creator>
        
        <subject>KS-regularized variables; orbit determination; Numerical Modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>This paper is concerned with orbit prediction using one of the best regular theories (KS-regularized variables). Perturbations due to the Earth’s gravitational field with axial symmetry up to the fourth-order zonal harmonic, atmospheric drag (a density model varying with height), and solar radiation pressure are considered. Applications of the problem, with a comparison between the perturbation effects, are illustrated by a numerical and graphical example.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_5-Prediction_of_Satellite_Motion_under_the_Effects_of_the_Earth’s_Gravity_Drag_Force_and_Solar_Radiation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel LVCSR Decoder Based on Perfect Hash Automata and Tuple Structures – SPREAD –</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050504</link>
        <id>10.14569/IJACSA.2014.050504</id>
        <doi>10.14569/IJACSA.2014.050504</doi>
        <lastModDate>2014-05-31T13:15:54.5170000+00:00</lastModDate>
        
        <creator>Matej Rojc</creator>
        
        <creator>Kacic Zdravko</creator>
        
        <subject>LVCSR decoder; tuple structure; finite automata; perfect hashing; Look-Ahead; language models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The paper presents the novel design of a one-pass large vocabulary continuous-speech recognition decoder engine, named SPREAD. The decoder is based on a time-synchronous beam-search approach, including statically expanded cross-word triphone contexts. An approach using efficient tuple structures is proposed for the construction of the complete search network. The foremost benefits are important space savings, higher processing speed, and the compact, reduced size of the tuple structure, especially when exploiting the structure of the key. In this way, the time needed to load the ASR search network into memory is also significantly reduced. Further, the paper proposes and presents a complete methodology for compiling general ASR knowledge sources into tuple structures. Additionally, the beam search is enhanced with a novel implementation of a bigram language model Look-Ahead technique, using tuple structures and a caching scheme. The SPREAD LVCSR decoder is based on a token-passing algorithm, capable of restricting its search space by several types of token pruning. By using the presented language model Look-Ahead technique, it is possible to increase the number of tokens that can be pruned without loss of decoding precision.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_4-Novel_LVCSR_Decoder_Based_on_Perfect_Hash_Automata_and_Tuple_Structures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Analysis of the Fault Tolerance of the PIM-SM IP Multicast Routing Protocol under GNS3</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050503</link>
        <id>10.14569/IJACSA.2014.050503</id>
        <doi>10.14569/IJACSA.2014.050503</doi>
        <lastModDate>2014-05-31T13:15:54.5000000+00:00</lastModDate>
        
        <creator>G&#225;bor Lencse</creator>
        
        <creator>Istv&#225;n Derka</creator>
        
        <subject>IP multicast protocols; PIM-SM; OSPF; fault tolerance; GNS3; modelling and simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>PIM-SM is the most commonly used IP multicast routing protocol in IPTV systems. Its fault tolerance is examined by experimenting on a mesh topology multicast test network built from Cisco routers under GNS3. Different fault scenarios are played out, and different parameters of the PIM-SM and OSPF protocols are examined to determine whether and how they influence the outage time of an IPTV service. The failure of the Rendezvous Point (RP) of a given IP multicast group, as well as the complete failure of a router in the media forwarding path of the multicast stream, are examined. A method is given for limiting the service outage time caused by the complete failure of a router through an appropriate choice of the Dead Interval parameter of OSPF.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_3-Experimental_Analysis_of_the_Fault_Tolerance_of_the_PIM-SM_IP_Multicast_Routing_Protocol_under_GNS3.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acceptance Factors and Current Level of Use of Web 2.0 Technologies for Learning in Higher Education: a Case Study of Two Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050502</link>
        <id>10.14569/IJACSA.2014.050502</id>
        <doi>10.14569/IJACSA.2014.050502</doi>
        <lastModDate>2014-05-31T13:15:54.5000000+00:00</lastModDate>
        
        <creator>Razep Echeng</creator>
        
        <creator>Abel Usoro</creator>
        
        <subject>Web 2.0 technologies; acceptance factors; adoption for learning; collaboration; participation; Nigerian higher education; Scotland</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>The use of Web 2.0 technology tools, or social media, in educational contexts has been emphasized in recent times in different parts of the world, and this has brought about a significant increase in the number of educational institutions that are aware of their usefulness, whether implementing them as separate systems or incorporating them into their learning management systems. However, there is little research on their acceptance and on how much these tools are currently being used for learning; hence the need for more empirical studies to investigate factors that would influence acceptance and increase the use of these technologies. This study developed hypotheses and a research model that was operationalized into a questionnaire administered to academics and students in Scotland and Nigeria. 317 responses were received from Nigeria and 279 from Scotland. The analysed data were used to validate the research model, which is aimed at explaining acceptance and the present level of use of Web 2.0 technology tools in learning environments.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_2-Acceptance_Factors_and_Current_Level_of_Use_of_Web_2.0_Technologies_for_Learning_in_Higher_Education_a_Case_Study_of_Two_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Android Devices into Network Management Systems based on SNMP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050501</link>
        <id>10.14569/IJACSA.2014.050501</id>
        <doi>10.14569/IJACSA.2014.050501</doi>
        <lastModDate>2014-05-31T13:15:54.4830000+00:00</lastModDate>
        
        <creator>Fernando Hidalgo</creator>
        
        <creator>Eric Gamess</creator>
        
        <subject>Network Management Systems; SNMP; Android; Performance Evaluation; Benchmarks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(5), 2014</description>
        <description>Mobile devices are becoming essential to daily life. In developed countries, about half of the population has a smartphone, resulting in millions of these electronic devices. Android is the most popular operating system for smartphones and other electronic devices such as tablets. Hence, for network administrators, it is essential to start managing all these Android-based devices. SNMP is the de facto standard for network administration, in which agents running on managed devices are polled by management stations. Some primitive tools have already been developed to turn an Android device into a basic management station. However, so far, there has been no SNMP agent for this operating system. In this paper, we develop the first SNMP agent for Android. We also propose an SNMP benchmark to study the SNMP traffic that can be supported by our SNMP agent on real Android devices. The results obtained show that it is realistic to integrate mobile Android devices into network management systems, since they can handle a high number of SNMP requests in a reasonable period of time.</description>
        <description>http://thesai.org/Downloads/Volume5No5/Paper_1-Integrating_Android_Devices_into_Network_Management_Systems_based_on_SNMP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>External analysis of strategic market management can be realized based upon different human mindset–A debate in the light of statistical perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030503</link>
        <id>10.14569/IJARAI.2014.030503</id>
        <doi>10.14569/IJARAI.2014.030503</doi>
        <lastModDate>2014-05-10T07:31:08.9070000+00:00</lastModDate>
        
        <creator>Prasun Chakrabarti</creator>
        
        <creator>Prasant Kumar Sahoo</creator>
        
        <subject>statistical correlation; fuzzy logic; optimistic; pessimistic; fickle-minded; business statistics</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(5), 2014</description>
        <description>The paper entails the statistical correlation of investigations carried out for sales and profit prediction and analysis by persons of different mindsets under strategic uncertainty. By virtue of statistical and fuzzy-logic-based justifications, the paper points out certain discovered facts in this perspective. The normal, optimistic, pessimistic, and fickle-minded individual mindsets contribute significantly to varying external analyses of business statistics.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No5/Paper_3-External_analysis_of_strategic_market_management_can_be_realized_based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of an Intelligent Course Advisory Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030502</link>
        <id>10.14569/IJARAI.2014.030502</id>
        <doi>10.14569/IJARAI.2014.030502</doi>
        <lastModDate>2014-05-10T07:31:08.8930000+00:00</lastModDate>
        
        <creator>Olawande Daramola</creator>
        
        <creator>Onyeka Emebo</creator>
        
        <creator>Ibukun Afolabi</creator>
        
        <creator>Charles Ayo</creator>
        
        <subject>Academic advising; expert system; case-based reasoning; JESS; rule-based reasoning; evaluation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(5), 2014</description>
        <description>Academic advising of students is an expert task that requires a lot of time and intellectual investment from the human agent saddled with such a responsibility. In addition, good-quality academic advising is subject to the availability of experienced and committed personnel to undertake the task. However, there are instances when there is a paucity of capable human advisers, or when qualified persons are not readily available because of other pressing commitments, which makes system-based decision support desirable and useful. In this work, we present the design and implementation of an intelligent Course Advisory Expert System (CAES) that uses a combination of rule-based reasoning (RBR) and case-based reasoning (CBR) to recommend the courses a student should register for in a specific semester, making recommendations based on the student’s academic history. The evaluation of CAES yielded satisfactory performance in terms of the credibility of its recommendations and its usability.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No5/Paper_2-Implementation_of_an_Intelligent_Course_Advisory_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi_Agent Advisor System for Maximizing E-Learning of an E-Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030501</link>
        <id>10.14569/IJARAI.2014.030501</id>
        <doi>10.14569/IJARAI.2014.030501</doi>
        <lastModDate>2014-05-10T07:31:08.8470000+00:00</lastModDate>
        
        <creator>Khaled Nasser ElSayed</creator>
        
        <subject>AI; Agent; Multi_Agents; distant learning; e-Learning; e-Teaching; Education; e-Course</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(5), 2014</description>
        <description>Web-based learning environments have become popular for e-teaching through the WWW as a form of distance learning. There is an urgent need to adapt e-learning to the level of learner knowledge. This paper uses intelligent multi-agent technology to advise and help learners maximize their learning of an offered e-course. It bases its advice on the performance and education level of learners, including past and current learning. Most of the advice guides the learner to attempt exercises such as quizzes, or to pass tests at different levels of difficulty.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No5/Paper_1-A_Multi_Agent_Advisor_system_for_Maximizing_e-Learning_of_an_e-Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Surface Texture Synthesis and Mixing Using Differential Colors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050433</link>
        <id>10.14569/IJACSA.2014.050433</id>
        <doi>10.14569/IJACSA.2014.050433</doi>
        <lastModDate>2014-04-30T18:25:23.8400000+00:00</lastModDate>
        
        <creator>Qing Wu</creator>
        
        <creator>Lin Shi</creator>
        
        <creator>Stephen Bond</creator>
        
        <creator>Yizhou Yu</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In neighborhood-based texture synthesis, adjacent local regions need to satisfy color continuity constraints in order to avoid visible seams. Such continuity constraints seriously restrict the variability of synthesized textures, making it impossible to generate new textures by mixing multiple input textures with very different base colors. In this paper, we propose to relax such restrictions and decompose synthesis into two relatively disjoint stages. In the first stage, an intermediate synthesized texture is generated by considering only the high-frequency details during region search and matching. Such a scheme broadens the search space during texture synthesis, but may produce obvious seams due to large discontinuities in low-frequency components. In the second stage, instead of performing local feathering along these discontinuities, we perform Laplacian texture reconstruction, which retains the high-frequency details but computes new consistent low-frequency components to eliminate the seams. It not only affects texels close to the discontinuities, but also modifies the rest of the texels. Therefore, it can be viewed as a global feature-preserving smoothing step, and is more effective than local feathering. Experiments indicate that our two-stage synthesis can produce desirable results for regular texture synthesis as well as for texture mixing from multiple sources.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_33-Surface_Texture_Synthesis_and_Mixing_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Incorporating Auxiliary Information in Collaborative Filtering Data Update with Privacy Preservation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050432</link>
        <id>10.14569/IJACSA.2014.050432</id>
        <doi>10.14569/IJACSA.2014.050432</doi>
        <lastModDate>2014-04-30T18:25:23.8270000+00:00</lastModDate>
        
        <creator>Xiwei Wang </creator>
        
        <creator>Jun Zhang</creator>
        
        <creator>Pengpeng Lin</creator>
        
        <creator>Nirmal Thapa</creator>
        
        <creator>Yin Wang</creator>
        
        <creator>Jie Wang</creator>
        
        <subject>auxiliary information; collaborative filtering; data growth; nonnegative matrix factorization; privacy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Online shopping has become increasingly popular in recent years. More and more people are willing to buy products through the Internet instead of physical stores. For promotional purposes, almost all online merchants provide product recommendations to their returning customers. Some of them ask professional recommendation service providers to help develop and maintain recommender systems, while others need to share their data with similar shops for better product recommendations. There are two issues: (1) how to protect customers’ privacy while retaining data utility before the data is released to third parties; (2) based on (1), how to handle data growth efficiently. In this paper, we propose an NMF (Nonnegative Matrix Factorization)-based data update approach in collaborative filtering (CF) that addresses these problems. The proposed approach utilizes the intrinsic property of NMF to distort the data in order to protect users’ privacy. In addition, user and item auxiliary information is taken into account in incremental nonnegative matrix tri-factorization to help improve data utility. Experiments on three different datasets (MovieLens, Sushi and LibimSeTi) are conducted to examine the proposed approach. The results show that our approach can quickly update new data and provide both a high level of privacy protection and good data utility.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_32-Incorporating_Auxiliary_Information_in_Collaborative.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Recognition System using Cepstral Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050431</link>
        <id>10.14569/IJACSA.2014.050431</id>
        <doi>10.14569/IJACSA.2014.050431</doi>
        <lastModDate>2014-04-30T18:25:23.8100000+00:00</lastModDate>
        
        <creator>Emna RABHI</creator>
        
        <creator>Zied Lachiri</creator>
        
        <subject>Electrocardiogram (ECG); Linear Frequency Cepstral Coefficients (LFCC); Hidden Markov Model (HMM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This paper presents a new method for human recognition using cepstral information. The proposed method consists of extracting the Linear Frequency Cepstral Coefficients (LFCC) from each heartbeat in the homomorphic domain. The Hidden Markov Model (HMM), under the Hidden Markov Model Toolkit (HTK), is then used for electrocardiogram (ECG) classification. To evaluate the performance of the classifier, the number of coefficients and the number of frequency bands are varied. Concerning the HMM topology, the numbers of Gaussians and states are also varied. The best rate is obtained with 32 coefficients, 24 frequency bands, 1 Gaussian and 5 states. Further, the method is improved by adding dynamic features: the first-order delta (Δ) and energy (E) of the coefficients. The approach is evaluated on 18 healthy signals from the MIT-BIH database. The obtained results reveal that LFCC with energy, forming a 33-dimensional feature vector, leads to the best human recognition rate, which is 99.33%.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_31-Human_Recognition_System_using_Cepstral_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new Hierarchical Group Key Management based on Clustering Scheme for Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050430</link>
        <id>10.14569/IJACSA.2014.050430</id>
        <doi>10.14569/IJACSA.2014.050430</doi>
        <lastModDate>2014-04-30T18:25:23.7800000+00:00</lastModDate>
        
        <creator>Ayman EL-SAYED</creator>
        
        <subject>Group Key management; Mobile Ad hoc network; MANET security; Unicast/Multicast protocols in MANET</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>The migration from wired to wireless networks has been a global trend in the past few decades, because wireless networks provide anytime-anywhere networking services. As wireless networks are rapidly deployed, a secure wireless environment will be mandatory in the future. Moreover, the mobility and scalability brought by wireless networks have made many applications possible. Among all contemporary wireless networks, the Mobile Ad hoc Network (MANET) is one of the most important and unique applications. A MANET is a collection of autonomous nodes or terminals that communicate with each other by forming a multihop radio network and maintaining connectivity in a decentralized manner. Due to the nature of the unreliable wireless medium, data transfer is a major problem in MANETs, which lack security and reliability of data. The most suitable solution to provide the expected level of security to these services is the provision of a key management protocol. Key management is a vital part of security, and this issue is even bigger in wireless networks than in wired networks. The distribution of keys in an authenticated manner is a difficult task in MANETs: when a member leaves or joins the group, a new key needs to be generated to maintain forward and backward secrecy. In this paper, we propose a new group key management scheme, namely a Hierarchical, Simple, Efficient and Scalable Group Key (HSESGK) scheme based on clustering management for MANETs, and different other schemes are classified. Group members deduce the group key in a distributed manner.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_30-A_new_Hierarchical_Group_Key_Management_based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Forecasting the Number of Pilgrims Coming from Outside the Kingdom of Saudi Arabia Using Bayesian and Box-Jenkins Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050429</link>
        <id>10.14569/IJACSA.2014.050429</id>
        <doi>10.14569/IJACSA.2014.050429</doi>
        <lastModDate>2014-04-30T18:25:23.7630000+00:00</lastModDate>
        
        <creator>SAMEER M. SHAARAWY</creator>
        
        <creator>ESAM A. KHAN</creator>
        
        <creator>MAHMOUD A. ELGAMAL</creator>
        
        <subject>autoregressive processes; identification; estimation; diagnostic checking; forecasting; Jeffreys&#39; prior; posterior probability mass function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Pilgrimage has received great attention from the government of Saudi Arabia. Of special interest is the yearly series of the Number of Pilgrims coming from Outside the kingdom (NPO), since it is one of the most important indicators in determining the planning mechanism for future hajj seasons. This study approaches the problems of identification, estimation, diagnostic checking and forecasting of the NPO series using Bayesian and Box-Jenkins approaches. The accuracy of the Bayesian and Box-Jenkins techniques has been checked by forecasting future observations, and the results were very satisfactory. Moreover, it has been shown that the Bayesian technique gives more accurate results than the Box-Jenkins technique.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_29-Modeling_and_Forecasting_the_Number_of_Pilgrims_Coming_from_Outside.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EEG Mouse: A Machine Learning-Based Brain Computer Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050428</link>
        <id>10.14569/IJACSA.2014.050428</id>
        <doi>10.14569/IJACSA.2014.050428</doi>
        <lastModDate>2014-04-30T18:25:23.7470000+00:00</lastModDate>
        
        <creator>Mohammad H. Alomari</creator>
        
        <creator>Ayman AbuBaker</creator>
        
        <creator>Aiman Turani</creator>
        
        <creator>Ali M. Baniyounes</creator>
        
        <creator>Adnan Manasreh</creator>
        
        <subject>EEG; BCI; Data Mining; Machine Learning; SVMs; NNs; DWT; Feature Extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>The main idea of the current work is to use a wireless Electroencephalography (EEG) headset as a remote control for the mouse cursor of a personal computer. The proposed system uses EEG signals as a communication link between brain and computer. Signal records obtained from the PhysioNet EEG dataset were analyzed using Coiflet wavelets, and many features were extracted using different amplitude estimators for the wavelet coefficients. The extracted features were input to machine learning algorithms to generate the decision rules required for our application. The suggested real-time implementation of the system was tested, and very good performance was achieved. This system could be helpful for disabled people, as they can control computer applications via the imagination of fist and foot movements, in addition to closing their eyes for a short period of time.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_28-EEG_Mouse_A_Machine_Learning-Based_Brain_Computer_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Malware Detection in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050427</link>
        <id>10.14569/IJACSA.2014.050427</id>
        <doi>10.14569/IJACSA.2014.050427</doi>
        <lastModDate>2014-04-30T18:25:23.7330000+00:00</lastModDate>
        
        <creator>Safaa Salam Hatem</creator>
        
        <creator>Dr. Maged H. Wafy</creator>
        
        <creator>Dr. Mahmoud M. El-Khouly</creator>
        
        <subject>Malware; Security; Cloud Computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Antivirus software is one of the most widely used tools for detecting and stopping malicious and unwanted files. However, the long-term effectiveness of traditional host-based antivirus is questionable. Antivirus software fails to detect many modern threats, and its increasing complexity has resulted in vulnerabilities that are being exploited by malware. This paper advocates a new model for malware detection on end hosts based on providing antivirus as an in-cloud network service. This model enables identification of malicious and unwanted software by multiple detection engines. This approach provides several important benefits, including better detection of malicious software, enhanced forensics capabilities and improved deployability. Malware detection in cloud computing includes a lightweight, cross-storage host agent and a network service. This paper combines detection techniques: static signature analysis and dynamic analysis detection. Using this mechanism, we find that cloud malware detection provides 35% better detection coverage against recent threats compared to a single antivirus engine, and a 98% detection rate across the cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_27-Malware_Detection_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating Students’ Achievements in Computing Science Using Human Metric</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050426</link>
        <id>10.14569/IJACSA.2014.050426</id>
        <doi>10.14569/IJACSA.2014.050426</doi>
        <lastModDate>2014-04-30T18:25:23.7000000+00:00</lastModDate>
        
        <creator>Ezekiel U. Okike</creator>
        
        <subject>academic achievement; personality traits; computing science; study habits</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This study investigates the role of personality traits, motivation for career choice and study habits in students’ academic achievements in the computing sciences. A quantitative research method was employed. Data was collected from 60 computing science students using the Myers-Briggs Type Indicator (MBTI) with additional questionnaires. A model of the form y_ij = β_0 + β_1·x_1j + β_2·x_2j + β_3·x_3j + β_4·x_4j + … + β_n·x_nj was used, where y_ij represents the dependent variable and x_1j, …, x_nj the independent variables. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS). Linear regression was carried out in order to fit the model and judge its significance or non-significance at the 0.05 level of significance. The result of the regression model was also used to determine the impact of the independent variables on students’ performance. Results from this study suggest that the strongest motivator for a choice of career in the computing sciences is the desire to become a computing professional. Students’ achievements, especially in the computing sciences, depend not only on students’ temperamental ability or personality traits, motivation for choice of course of study and reading habits, but also on the use of Internet-based sources more than on going to the university library to read the book materials available in all areas.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_26-Investigating_Students’_Achievements_in_Computing_Science_Using_Human_Metric.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Sharpness Metric Based on Algebraic Multi-grid Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050425</link>
        <id>10.14569/IJACSA.2014.050425</id>
        <doi>10.14569/IJACSA.2014.050425</doi>
        <lastModDate>2014-04-30T18:25:23.6870000+00:00</lastModDate>
        
        <creator>Qian Ying</creator>
        
        <creator>Ren Xue-mei</creator>
        
        <creator>Huang Ying</creator>
        
        <creator>Meng Li</creator>
        
        <subject>image sharpness; mean square error; algebraic multi-grid method; sharpness metric; image reconstruction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In order to overcome the Mean Square Error metric&#39;s reliance on reference images when evaluating image sharpness, a no-reference metric based on the algebraic multi-grid method is proposed. The proposed metric first reconstructs the original image by Algebraic Multi-grid (AMG), then computes the Mean Square Error between the original image and the reconstructed image; the result represents the image sharpness. Experiments show that the proposed sharpness metric has better practicability and monotonicity, and correlates well with perceived sharpness. The algorithm is thus advantageous as an image sharpness metric. </description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_25-Image_Sharpness_Metric_Based_on_Algebraic_Multi-grid_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On an Overlaid Hybrid Wire/Wireless Interconnection Architecture for Network-on-chip</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050424</link>
        <id>10.14569/IJACSA.2014.050424</id>
        <doi>10.14569/IJACSA.2014.050424</doi>
        <lastModDate>2014-04-30T18:25:23.6700000+00:00</lastModDate>
        
        <creator>Ling Wang</creator>
        
        <creator> Zhihai Guo</creator>
        
        <creator>Peng Lv</creator>
        
        <creator>Yingtao Jiang</creator>
        
        <subject>Network-on-Chip; wireless; subnet; 2-Level Hybrid Mesh topology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Networks-on-Chip (NoC) built upon metal/low-k interconnect wires are unlikely to meet the ever more stringent performance requirements of future technology nodes. In response to this interconnection crisis, the wireless network-on-chip (WNoC), enabled by the availability of miniaturized on-chip antennas and transceivers, is envisioned as one of the most promising alternatives. In this paper, we present a new WNoC architecture with a layered topology, where a metal/low-k based wired network is partitioned into several subnetworks, and these subnetworks are connected through a wireless network that is overlaid on top of them. Due to the limited transmission range, the wireless nodes in the wireless network actually communicate with each other in a multiple-hop fashion. As a large volume of traffic will go through the wireless nodes, a contention avoidance routing algorithm is adopted. In addition, two virtual channels have been introduced into the wireless router design to avoid any possible deadlocks that might otherwise occur. Experimental results show that the throughput of the proposed architecture is, on average, about 20% higher than that of the existing WNoC architectures, and its delay is about 30% lower.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_24-On_an_Overlaid_Hybrid_Wire_Wireless_Interconnection_Architecture_for_Network-on-chip.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Modular Recommender System for Research Papers written in Albanian</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050423</link>
        <id>10.14569/IJACSA.2014.050423</id>
        <doi>10.14569/IJACSA.2014.050423</doi>
        <lastModDate>2014-04-30T18:25:23.6400000+00:00</lastModDate>
        
        <creator>Klesti Hoxha</creator>
        
        <creator>Alda Kika</creator>
        
        <creator>Eriglen Gani</creator>
        
        <creator>Silvana Greca</creator>
        
        <subject>recommender system; Albanian; information retrieval; intelligent search; digital library</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In recent years there has been an increase in scientific paper publications in Albania and its neighboring countries that have large communities of Albanian-speaking researchers. Many of these papers are written in Albanian. Finding papers related to a researcher’s work is very time consuming, because there is no concrete system that facilitates this process. In this paper we present the design of a modular intelligent search system for articles written in Albanian. Its main part is the recommender module, which facilitates searching by providing articles relevant to a given one. We used a cosine-similarity-based heuristic that differentiates the importance of term frequencies based on their location in the article. We did not notice big differences in the recommendation results when using different combinations of the importance factors of the keywords, title, abstract and body. We got similar results when using only the title and abstract, compared with the other combinations. Because we obtained fairly good results in this initial approach, we believe that similar recommender systems for documents written in Albanian can also be built in contexts not related to scientific publishing.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_23-Towards_a_Modular_Recommender_System_for_Research_Papers_written_in_Albanian.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DUT Verification Through an Efficient and Reusable Environment with Optimum Assertion and Functional Coverage in SystemVerilog</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050422</link>
        <id>10.14569/IJACSA.2014.050422</id>
        <doi>10.14569/IJACSA.2014.050422</doi>
        <lastModDate>2014-04-30T18:25:23.6230000+00:00</lastModDate>
        
        <creator>Deepika Ahlawat</creator>
        
        <creator>Neeraj Kr. Shukla</creator>
        
        <subject>Assertions; Coverage; Environment; Mailbox; Randomization; SystemVerilog; Threads; Transactions</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Verification is an integral part of chip manufacturing and testing and is as important as the design itself. Verification exercises the actual implementation and functionality of a Design Under Test (DUT) and checks whether it meets the specifications. In this paper, a communication protocol has been verified against its design specifications. The environment created completely wraps the design under verification and achieves optimum functional and assertion-based coverage. The coverage obtained is 100% assertion-based coverage and 83.3% functional coverage using SystemVerilog (SV), for a total coverage of 91.66%.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_22-DUT_Verification_through_an_Efficient_and_Reusable_Environment_with_Optimum_Assertion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Unstructured Text Summarization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050421</link>
        <id>10.14569/IJACSA.2014.050421</id>
        <doi>10.14569/IJACSA.2014.050421</doi>
        <lastModDate>2014-04-30T18:25:23.6070000+00:00</lastModDate>
        
        <creator>Sherif Elfayoumy</creator>
        
        <creator>Jenny Thoppil</creator>
        
        <subject>text summarization; unstructured data; text mining; unstructured data analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Due to the explosive amounts of text data being created and organizations’ increased desire to leverage their data corpora, especially with the availability of Big Data platforms, there is usually not enough time to read and understand each document and make decisions based on its contents. Hence, there is great demand for summarizing text documents to provide representative substitutes for the originals. By improving summarization techniques, the precision of document retrieval through search queries against summarized documents is expected to improve compared with querying the full original documents. Several generic text summarization algorithms have been developed, each with its own advantages and disadvantages. For example, some algorithms are particularly good for summarizing short documents but not long ones; others perform well in identifying and summarizing single-topic documents, but their precision degrades sharply on multi-topic documents. In this article we present a survey of the text summarization literature. We also survey some of the most common methods for evaluating the quality of automated text summarization techniques. Last, we identify some challenging problems that remain open, in particular the need for a universal approach that yields good results for mixed types of documents.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_21-A_Survey_of_Unstructured_Text_Summarization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Faults Detection in Wind Turbine Generator Based on High-Resolution Frequency Estimation Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050420</link>
        <id>10.14569/IJACSA.2014.050420</id>
        <doi>10.14569/IJACSA.2014.050420</doi>
        <lastModDate>2014-04-30T18:25:23.5770000+00:00</lastModDate>
        
        <creator>CHAKKOR SAAD</creator>
        
        <creator>Baghouri Mostafa</creator>
        
        <creator>Hajraoui Abderrahmane</creator>
        
        <subject>Wind turbine Generator; Fault diagnosis; Frequency Estimation; Monitoring; Maintenance; High Resolution Methods; Current Signature Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Electrical energy production based on wind power has become one of the most popular renewable resources in recent years because it provides reliable, clean energy at minimum cost. The major challenge for wind turbines is electrical and mechanical failures, which can occur at any time, causing breakdowns and damage that lead to machine downtime and loss of energy production. To circumvent this problem, several tools and techniques have been developed to enhance fault detection and diagnosis based on the stator current signature of wind turbine generators. Among these methods, parametric or super-resolution frequency estimation methods, which provide accurate spectrum estimates, can be useful for this purpose. Given the plurality of these algorithms, a comparative performance analysis is made to evaluate their robustness on different metrics: accuracy, dispersion, computation cost, perturbations and fault severity. Finally, simulation results in Matlab with the most frequently occurring faults indicate that the ESPRIT and R-MUSIC algorithms have a high capability of correctly identifying the frequencies of fault characteristic components; a performance ranking has been carried out to demonstrate the efficiency of the studied methods in fault detection.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_20-Performance_Analysis_of_Faults_Detection_in_Wind_Turbine_Generator_Based_on_High-Resolution_Frequency_Estimation_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimating Traffic Intensity at Toll Gates Using Queueing Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050419</link>
        <id>10.14569/IJACSA.2014.050419</id>
        <doi>10.14569/IJACSA.2014.050419</doi>
        <lastModDate>2014-04-30T18:25:23.5600000+00:00</lastModDate>
        
        <creator>Vincent O. R</creator>
        
        <creator>Olayiwola O. E.</creator>
        
        <creator>Kosemani O. O.</creator>
        
        <subject>Toll Gates; Queueing Networks; Traffic intensity; Delay and Image detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Traffic information generation is a routine operation performed daily at any public gate. A toll gate is a point on a public roadway through which people enter and leave an organisation. Existing models give premium consideration to security over prompt service and are therefore associated with processes that have high implementation cost and inaccuracy from complex methods, and that pose other technical problems such as delay. This research presents an automated procedure for monitoring traffic at toll gates to give the best compromise among the conflicting objectives of payment, security and good service. The system gathers information about the traffic situation from the license plate number captured from each vehicle that passes through the toll gate, also captures data such as arrival speed, arrival time and date, and uses these data as input to generate a daily traffic report. Experiments show that the system can effectively capture vehicle video and detect license plates in daytime, with an accuracy of about 85% to 90%; practical results based on actual data are included.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_19-Estimating_Traffic_Intensity_at_Toll_Gates_Using_Queueing_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Parallel Design and Analysis for 3-D ADI Telegraph Problem with MPI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050418</link>
        <id>10.14569/IJACSA.2014.050418</id>
        <doi>10.14569/IJACSA.2014.050418</doi>
        <lastModDate>2014-04-30T18:25:23.5430000+00:00</lastModDate>
        
        <creator>Simon Uzezi Ewedafe</creator>
        
        <creator>Rio Hirowati Shariffudin</creator>
        
        <subject>3-DTEL; ADI; MPI; SPMD; DD; Parallel Design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In this paper we describe the 3-D Telegraph Equation (3-DTEL) with the use of Alternating Direction Implicit (ADI) method on Geranium Cadcam Cluster (GCC) with Message Passing Interface (MPI) parallel software. The algorithm is presented by the use of Single Program Multiple Data (SPMD) technique. The implementation is discussed by means of Parallel Design and Analysis with the use of Domain Decomposition (DD) strategy. The 3-DTEL with ADI scheme is implemented on the GCC cluster, with an objective to evaluate the overhead it introduces, with ability to exploit the inherent parallelism of the computation. Results of the parallel experiments are presented. The Speedup and Efficiency from the experiments on different block sizes agree with the theoretical analysis.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_18-On_the_Parallel_Design_and_Analysis_for_3-D_ADI_Telegraph_Problem_with_MPI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Coding Technique Based on the Frequency Evolution Creates with a Time Frequency Analysis a New Genome&#39;s Landscape</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050417</link>
        <id>10.14569/IJACSA.2014.050417</id>
        <doi>10.14569/IJACSA.2014.050417</doi>
        <lastModDate>2014-04-30T18:25:23.5300000+00:00</lastModDate>
        
        <creator>Imen MESSAOUDI</creator>
        
        <creator>Afef ELLOUMI</creator>
        
        <creator>Zied LACHIRI</creator>
        
        <subject>C.elegans; Complex Morlet wavelet; Frequency Chaos Game Signal; Genome exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In recent years, considerable effort has been devoted to studying biological data sets within the framework of genomic signal processing. However, the enormous amount of data deposited in public databases makes the search for useful information a difficult task; indeed, the choice of a convenient analysis approach is not at all obvious. In this work, we provide a new way to map genomes in the form of images. The mapping uses the complex Morlet wavelet as the analysis technique and the Frequency Chaos Game Signal (FCGS) as the digital dataset. Before performing the wavelet analysis, we build the FCGS in such a way that we can follow the frequency evolution of nucleotide occurrences along the genome. The time-frequency analysis of the FCGS signals constitutes a pertinent tool for exploring DNA structures in the C. elegans genome-wide landscape.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_17-A_Coding_Technique_Based_on_the_Frequency_Evolution_Creates_with_a_Time_Frequency_Analysis_a_New_Genome&#39;s_Landscape.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A web based Publish-Subscribe framework for mobile computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050416</link>
        <id>10.14569/IJACSA.2014.050416</id>
        <doi>10.14569/IJACSA.2014.050416</doi>
        <lastModDate>2014-04-30T18:25:23.4970000+00:00</lastModDate>
        
        <creator>Cosmina Ivan</creator>
        
        <subject>mobile computing; Websockets; publish-subscribe; REST</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>The growing popularity of mobile devices is permanently changing the Internet user’s computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with various information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is introducing a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques such as polling, long-polling and server-side push as client-server interaction mechanisms, and the latest web technology standard, WebSocket, as the communication protocol within a publish/subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid approach combining WebSocket and event-based publish/subscribe for operating in mobile environments.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_16-A_web_based_Publish-Subscribe_framework_for_mobile_computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Touch Gestures for Children’s Applications: Repeated Experiment to Increase Reliability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050415</link>
        <id>10.14569/IJACSA.2014.050415</id>
        <doi>10.14569/IJACSA.2014.050415</doi>
        <lastModDate>2014-04-30T18:25:23.4830000+00:00</lastModDate>
        
        <creator>Nor Azah Abdul Aziz</creator>
        
        <creator>Nur Syuhada Mat Sin</creator>
        
        <creator>Firat Batmaz</creator>
        
        <creator>Roger Stone</creator>
        
        <creator>Paul Wai Hing Chung</creator>
        
        <subject>Children; Gesture; Applications (Apps)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This paper discusses the selection of touch gestures for children’s applications. The research investigates the gestures that children aged between 2 and 4 years can manage on the iPad. Two experiments were conducted: the first in the United Kingdom and the second in Malaysia. The two similar experiments were carried out to increase reliability and refine the results. The study shows that children aged 4 years have no problem using the 7 common gestures found in iPad applications. Some children aged 3 years have problems with two of the gestures. A high percentage of children aged 2 years struggled with the free rotate, drag &amp; drop, pinch and spread gestures. The paper also discusses additional criteria for the use of gestures, interface design components and research on children using the iPad and applications.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_15-Selection_of_Touch_Gestures_for_Children’s_Applications_Repeated_Experiment_to_Increase_Reliability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Domain Modeling and Simulation of an Aircraft System for Advanced Vehicle-Level Reasoning Research and Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050414</link>
        <id>10.14569/IJACSA.2014.050414</id>
        <doi>10.14569/IJACSA.2014.050414</doi>
        <lastModDate>2014-04-30T18:25:23.4500000+00:00</lastModDate>
        
        <creator>F. Khan</creator>
        
        <creator>O. F. Eker</creator>
        
        <creator>T. Sreenuch</creator>
        
        <creator>A. Tsourdos</creator>
        
        <subject>Intelligent Reasoning; Finite State Machines; Aircraft System Simulation; Multi-Physics; Real-Time Simulation; VLRS (Vehicle Level Reasoning System); HM (Health Management)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In this paper, we describe a simulation-based health monitoring system test-bed for aircraft systems. The purpose of the test-bed is to provide a technology-neutral basis for implementing and evaluating vehicle-level reasoning systems and software architectures in support of the safety and maintenance process. The simulation test-bed provides sub-system-level results and data that can be fed to the VLRS to generate vehicle-level reasoning and achieve broader diagnoses. This paper describes the real-time system architecture and concept of operations for the aircraft’s major sub-systems. The four main components of the real-time test-bed are the aircraft sub-system simulation models (e.g. battery, fuel, engine, generator, heating and lighting systems), the fault insertion unit, health monitoring data processing and the user interface. We adopted a component-based modelling paradigm for the implementation of the virtual aircraft systems. All fault injections are currently implemented via software; the fault insertion unit allows repeatable injection of faults into the system. The simulation test-bed has been tested with many faults that were undetected at the system level but processed and detected by vehicle-level reasoning. The article also shows how one system fault can affect the overall health of the vehicle.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_14-Multi-Domain_Modeling_and_Simulation_of_an_Aircraft_System_for_Advanced_Vehicle-Level_Reasoning_Research_and_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Performance Analysis of Wireless Communication Protocols for Intelligent Sensors and Their Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050413</link>
        <id>10.14569/IJACSA.2014.050413</id>
        <doi>10.14569/IJACSA.2014.050413</doi>
        <lastModDate>2014-04-30T18:25:23.4370000+00:00</lastModDate>
        
        <creator>Chakkor Saad</creator>
        
        <creator>Baghouri Mostafa</creator>
        
        <creator>El Ahmadi Cheikh</creator>
        
        <creator>Hajraoui Abderrahmane</creator>
        
        <subject>Wireless communications; Performances; Energy; Protocols; Intelligent sensors; Applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Systems based on intelligent sensors are currently expanding, owing to their functions and intelligent capabilities: transmitting and receiving data in real time, computation and processing algorithms, remote metrology, diagnostics, automation and measurement storage. Radio-frequency wireless communication, in its many variants, offers a better solution for data traffic in this kind of system. The main objectives of this paper are to present a solution to the problem of selecting the wireless communication technology best suited to the constraints imposed by the intended application, and to evaluate its key features. The comparison between the different wireless technologies (Wi-Fi, WiMAX, UWB, Bluetooth, ZigBee, ZigBee IP, GSM/GPRS) focuses on their performance, which depends on the area of use, and shows the limits of their characteristics. The findings can be used by developers and engineers to deduce the optimal way to integrate and operate a system that guarantees communication quality while minimizing energy consumption, reducing implementation cost and avoiding timing constraints.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_13-Comparative_Performance_Analysis_of_Wireless_Communication_Protocols_for_Intelligent_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study on Method of Feature Selection in Speech Content Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050412</link>
        <id>10.14569/IJACSA.2014.050412</id>
        <doi>10.14569/IJACSA.2014.050412</doi>
        <lastModDate>2014-04-30T18:25:23.4030000+00:00</lastModDate>
        
        <creator>Si An</creator>
        
        <creator>Xinghua Fan</creator>
        
        <subject>acoustic features; orthogonal experiment; SVM classifier; CHI statistical methods; feature-level fusion; LBS vector quantization algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Information communication is developing rapidly, and voice communication over a distance is increasingly popular. In order to evaluate and classify speech content correctly, this paper first analyzes acoustic features. An orthogonal experiment [1] method is used to find the voice characteristics that contribute to speech content classification, which are then combined with the textual characteristics. The experimental results show that the combination of voice and content features yields better performance on speech content classification, improving its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_12-Study_on_Method_of_Feature_Selection_in_Speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Performance of the Predicted Energy Efficient Bee-Inspired Routing (PEEBR)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050411</link>
        <id>10.14569/IJACSA.2014.050411</id>
        <doi>10.14569/IJACSA.2014.050411</doi>
        <lastModDate>2014-04-30T18:25:23.3900000+00:00</lastModDate>
        
        <creator>Imane M. A. Fahmy</creator>
        
        <creator>Laila Nassef</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>PEEBR; Reactive protocol; path selection; MANET; ABC; energy consumption; battery residual power; AODV; DSDV</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>The Predictive Energy Efficient Bee Routing (PEEBR) is a swarm-intelligent reactive routing algorithm inspired by the food-search behavior of bees. PEEBR aims to optimize path selection in Mobile Ad-hoc Networks (MANETs) based on energy-consumption prediction. It uses the Artificial Bee Colony (ABC) optimization model and two types of bee agents: the scout bee for the exploration phase and the forager bee for the evaluation and exploitation phases. PEEBR considers the predicted battery residual power of the mobile nodes and the energy they are expected to consume for packet reception and relaying along each potential routing path between a source and destination pair. In this paper, the performance of the proposed and improved PEEBR algorithm is evaluated in terms of energy-consumption efficiency and throughput against two state-of-the-art ad-hoc routing protocols, the Ad-hoc On-demand Distance Vector (AODV) and the Destination-Sequenced Distance Vector (DSDV), for various MANET sizes.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_11-On_the_Performance_of_the_Predicted_Energy_Efficient_Bee-Inspired_Routing_PEEBR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Simulation and Analysis of the Induction Machine Performances Operating at Flux Constant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050410</link>
        <id>10.14569/IJACSA.2014.050410</id>
        <doi>10.14569/IJACSA.2014.050410</doi>
        <lastModDate>2014-04-30T18:25:23.3570000+00:00</lastModDate>
        
        <creator>Aziz Derouich</creator>
        
        <creator>Ahmed Lagrioui</creator>
        
        <subject>Induction Machine Modeling; Scalar-controlled induction machine; Experimental identification; Environment Matlab/Simulink/DSPACE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>In this paper we are interested, first, in the study and real-time implementation of a V/f control for an induction machine. We then compare simulation and experimental results for the speed, flux and current responses of the real machine, using a DSPACE card and a model established by classical identification (direct-current test, blocked-rotor test, no-load test, synchronous test), to validate the established model. The scalar-controlled induction motor allows operating the motor at maximum torque through simultaneous action on the frequency and amplitude of the stator voltage, keeping the V/f ratio constant. The speed reference imposes a frequency on the inverter supplying the voltages needed to power the motor, which determines the speed of rotation. The maximum torque of the machine is proportional to the square of the supply voltage and inversely proportional to the voltage frequency, so keeping V/f constant implies operating with constant maximum torque. The results obtained for the rotor flux and the stator currents are especially satisfactory in steady state.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_10-Real-Time_Simulation_and_Analysis_of_the_Induction_Machine_Performances_Operating_at_Flux_Constant.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Prediction Accuracy of Multicriteria Collaborative Filtering by Combination Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050409</link>
        <id>10.14569/IJACSA.2014.050409</id>
        <doi>10.14569/IJACSA.2014.050409</doi>
        <lastModDate>2014-04-30T18:25:23.3270000+00:00</lastModDate>
        
        <creator>Wiranto</creator>
        
        <creator>Edi Winarko</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>Algorithm; multicriteria collaborative filtering; document; recommendation; system; similarity; multidimensional distance; decomposition; combination; prediction; accuracy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This study focuses on developing a multicriteria collaborative filtering algorithm for improving prediction accuracy. The approaches applied were user-item multi-rating matrix decomposition, measurement of user similarity using the cosine formula and multidimensional distance, individual criteria weight calculation, and rating prediction for the overall criteria by a combination approach. The results show variations of the multicriteria collaborative filtering algorithm that were used to improve a document recommender system, with the two following characteristics. First, rating prediction for four individual criteria using a collaborative filtering algorithm with cosine-based user similarity and multidimensional-distance-based user similarity. Second, rating prediction for the overall criteria using a combination of algorithms. Based on the test results, it can be concluded that the models developed for multicriteria collaborative filtering had much better prediction accuracy than classic collaborative filtering, as shown by increasingly smaller values of the Mean Absolute Error. The best accuracy was achieved by the multicriteria collaborative filtering system with multidimensional-distance-based similarity.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_9-Improving_the_Prediction_Accuracy_of_Multicriteria_Collaborative_Filtering_by_Combination_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Service-Based Framework for Environmental Data Processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050408</link>
        <id>10.14569/IJACSA.2014.050408</id>
        <doi>10.14569/IJACSA.2014.050408</doi>
        <lastModDate>2014-04-30T18:25:23.2970000+00:00</lastModDate>
        
        <creator>Ivan Madjarov</creator>
        
        <creator>Juš Kocijan</creator>
        
        <creator>Alexandra Grancharova</creator>
        
        <creator>Bogdan Shishedjiev</creator>
        
        <subject>Scientific data; Environmental data; Web services; Data integration; Stochastic model; Gaussian process; Metadata</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Scientists are confronted with significant data management problems due to the huge volume and high complexity of environmental data. An important aspect of environmental data management is that the data needed for a process are not always in an adequate format. In this contribution, we analyze environmental data structure and model this data using a semantic-based method. Using this model, we design and implement a framework based on Web services for transformation between massive environmental text-based data and relational databases. We present a mapping model for environmental data transformation to be used in a scenario devoted to the methodology for developing stochastic models for the prediction of environmental parameters by application of Gaussian processes.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_8-Towards_a_Service-based_Framework_for_Environmental_Data_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Redesigning Educational Systems Using IJAZA Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050407</link>
        <id>10.14569/IJACSA.2014.050407</id>
        <doi>10.14569/IJACSA.2014.050407</doi>
        <lastModDate>2014-04-30T18:25:23.2800000+00:00</lastModDate>
        
        <creator>Yasser Bahjat</creator>
        
        <creator>Ibrahim Albidewi</creator>
        
        <subject>IJAZA system; Islamic educational system; Apprentice studies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This paper discusses the use of IJAZA in an education system. The IJAZA system was used in the earliest Islamic nation for education and accreditation. The system was designed with a few major cultural concepts in mind. The basic principle of the IJAZA system is to give people complete freedom to study what they want, when they want it, from whomever they want, without any restrictions or influence from outside parties. To ensure the effectiveness of the system, we think the IJAZA system requires complete documentation of everything that happens within it, from the documentation of an IJAZA when it is first established up to the job market's performance review of the certified. Therefore, this paper provides a brief methodology, evaluation and implementation of the IJAZA system.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_7-Redesigning_Educational_Systems_Using_IJAZA_Structure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Methods of Isolation for Application Traces Using Virtual Machines and Shadow Copies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050406</link>
        <id>10.14569/IJACSA.2014.050406</id>
        <doi>10.14569/IJACSA.2014.050406</doi>
        <lastModDate>2014-04-30T18:25:23.2500000+00:00</lastModDate>
        
        <creator>George Pecherle</creator>
        
        <creator>Cornelia Gyor&#246;di</creator>
        
        <creator>Robert Gyor&#246;di</creator>
        
        <subject>security; privacy; application traces; data wiping; virtual machines; shadow copies; sandbox</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>To improve the user&#39;s experience, almost all applications save usage data: web browsers save history and cookies, chat programs save message archives and so on. However, this data can be confidential and may compromise the user&#39;s privacy. There are third party solutions to automatically detect and wipe these traces, but they have two problems: they need a constantly updated database of files to target, and they wipe the data after it has been written to the disk. Our proposed solution does not need a database and it automatically reverts the application to its initial (clean) state, leaving no traces behind. This is done by using a monitoring process developed by us and the Volume Shadow Copy Service that takes snapshots when the application runs and restores them at the end of the run. </description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_6-Methods_of_Isolation_for_Application_Traces_Using_Virtual_Machines_and_Shadow_Copies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exon_Intron Separation Using Amino Acids Groups Frenquency Repartition as Coding Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050405</link>
        <id>10.14569/IJACSA.2014.050405</id>
        <doi>10.14569/IJACSA.2014.050405</doi>
        <lastModDate>2014-04-30T18:25:23.2330000+00:00</lastModDate>
        
        <creator>Afef Elloumi Oueslati</creator>
        
        <creator>Noureddine Ellouze</creator>
        
        <subject>exons; introns; amino acid coding technique; amino acid repartition; exons - introns separation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This paper presents a new coding technique based on amino acid repartition in the chromosome. The signal generated with this coding technique constitutes, after processing, a new way to separate exons from introns in a gene. The proposed algorithm is composed of six steps. We convert from ATCG to amino acids. We specify the amino acid order group. We construct the signal based on the groups' repartition. We apply a smoothing technique to the resulting signal. We invert the exon peaks from minima to maxima. We show the separation between exon and intron regions. We present here the results obtained on the reference gene G 56F11.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_5-Exon_Intron_Separation_Using_Amino_Acids_Groups_Frenquency_Repartition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impediments of Activating E-Learning in Higher Education Institutions in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050403</link>
        <id>10.14569/IJACSA.2014.050403</id>
        <doi>10.14569/IJACSA.2014.050403</doi>
        <lastModDate>2014-04-30T18:25:23.2000000+00:00</lastModDate>
        
        <creator>Ashraf M. H. Abdel Gawad</creator>
        
        <creator>Khalefah A. K. Al-Masaud</creator>
        
        <subject>E-Learning; Internet; curriculum; Saudi Arabia Universities; cultural aspects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>This paper presents the real reasons constraining the application of E-learning in higher education institutions in Saudi Arabia (case study: Qassim University), together with some suggested solutions. A questionnaire comprising 48 items was designed for the study, divided into 5 parts: the first collects basic information; the second defines how technology can be used in E-learning; the third deals with how to support the E-learning idea; the fourth asks about the difficulties and challenges facing the application of E-learning; and the fifth asks for suggestions for solving the problem. The study sampled 100 faculty members and undergraduate students in the male departments of Ar-Rass College of Science and Arts at Qassim University. The study indicates that the main factors obstructing E-learning are the lack of financial support for providing advanced PCs and labs and for establishing a strong computer network, in addition to the weak English language skills of some faculty members and students. The study focuses on suggested solutions to the problem, namely introducing electronic subjects and requiring all faculty members to prepare at least one course in electronic form.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_3-Impediments_of_Activating_E-Learning_in_Higher_Education_Institutions_in_Saudi_Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Allocation of Abundant Data Along Update Sub-Cycles To Support Update Transactions In Wireless Broadcasting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050404</link>
        <id>10.14569/IJACSA.2014.050404</id>
        <doi>10.14569/IJACSA.2014.050404</doi>
        <lastModDate>2014-04-30T18:25:23.2000000+00:00</lastModDate>
        
        <creator>Ahmad al-Qerem</creator>
        
        <subject>data broadcast; Dynamic Allocation; concurrency control; cycle decomposition;  Abundant Data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Supporting transaction processing over a wireless broadcasting environment has attracted a considerable amount of research in mobile computing systems. To allow more than one conflicting transaction to be committed within the same broadcast cycle, the main broadcast cycle needs to be decomposed into sub-cycles. This decomposition contains both the original data to be broadcast, in the Rcast cycle, and the updates coming from the transactions committed on these data items, in the Ucast cycle. The allocation of updated data items along Ucast cycles strongly affects the concurrency among conflicting transactions. Given the conflict degree of data items, one can design a proper data allocation along Ucast cycles to increase the concurrency among conflicting transactions. In this paper we explore the problem of adjusting abundant data allocations to respond effectively to changes in data conflict probability, and develop an efficient algorithm, ADDUcast, to solve this problem. The performance of our adjustment algorithms is analyzed, and a system simulator is developed to validate our results. Our results show that ADDUcast significantly increases the average number of committed transactions within the same broadcast cycle.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_4-Dynamic_Allocation_of_Abundant_Data_along_Update_Sub-cycles_to_Support_Update_Transactions_in_Wireless_Broadcasting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimized Analogy-Based Project Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050402</link>
        <id>10.14569/IJACSA.2014.050402</id>
        <doi>10.14569/IJACSA.2014.050402</doi>
        <lastModDate>2014-04-30T18:25:23.1700000+00:00</lastModDate>
        
        <creator>Mohammad Azzeh</creator>
        
        <creator>Yousef Elsheikh</creator>
        
        <creator>Marwan Alseid</creator>
        
        <subject>Cost Estimation; Effort Estimation by Analogy; Bees Optimization Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>Despite the predictive performance of Analogy-Based Estimation (ABE) in generating better effort estimates, there is no consensus on: (1) how to predetermine the appropriate number of analogies, and (2) which adjustment technique produces better estimates. Moreover, no prior work has attempted to optimize both the number of analogies and the feature distance weights for each test project. Rather than using a fixed number, it may be better to optimize this value for each project individually and then adjust the retrieved analogies by optimizing and approximating the complex relationships between features, reflecting that approximation in the final estimate. The Artificial Bees Algorithm is utilized to find, for each test project, the appropriate number of closest projects and the feature distance weights used to adjust those analogies' efforts. The proposed technique has been applied and validated on 8 publicly available datasets from the PROMISE repository. The results obtained show that: (1) the predictive performance of ABE has noticeably improved, and (2) the number of analogies varied remarkably across test projects. While there are many techniques to adjust ABE, using an optimization algorithm provides two solutions in one technique and proved useful for datasets with complex structure.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_2-An_Optimized_Analogy-Based_Project_Effort_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Architectural Model for an Automatic Adaptive E-Learning System Based on Users Learning Style</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050401</link>
        <id>10.14569/IJACSA.2014.050401</id>
        <doi>10.14569/IJACSA.2014.050401</doi>
        <lastModDate>2014-04-30T18:25:23.1230000+00:00</lastModDate>
        
        <creator>Adeniran Adetunji</creator>
        
        <creator>Akande Ademola</creator>
        
        <subject>E-Learning; Learning Style; Adaptation;   AAeL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(4), 2014</description>
        <description>It has been established in the literature that if an e-learning system could adapt to the learning characteristics of learners, it would increase their learning performance and content knowledge acquisition. This paper is basic research that lays a foundation for application and implementation. We review trends in adaptive e-learning system development, present an exposition of learning-style models of learners' learning characteristics, and propose an architectural model of an Automatic Adaptive E-learning System (AAeLS) based on learning-style concepts/models. The idea is to model an e-learning system that automatically adapts to users' learning preferences: the system learns about the user's learning style while the user learns the material content of the system, so the learning process runs in two directions, with the system learning while the user is learning. We recommend further work on implementation and testing of the model in applied research.</description>
        <description>http://thesai.org/Downloads/Volume5No4/Paper_1-A_Proposed_Architectural_Model_for_an_Automatic_Adaptive_E-Learning_System_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scale-Based Local Feature Selection for Scene Text Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030403</link>
        <id>10.14569/IJARAI.2014.030403</id>
        <doi>10.14569/IJARAI.2014.030403</doi>
        <lastModDate>2014-04-10T09:37:49.9170000+00:00</lastModDate>
        
        <creator>Boyu Zhang</creator>
        
        <creator>Jia Feng Liu</creator>
        
        <creator>XiangLong Tang</creator>
        
        <subject>Scene Text Recognition; Local Feature;  Stroke Width</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(4), 2014</description>
        <description>Scene text recognition has drawn increasing attention from the OCR community in recent years. Among the numerous methods that have been proposed, local feature based methods, represented by bag-of-features (BoFs), show notable robustness and efficiency. However, as existing detectors are based on assumptions about local saliency, a vast number of non-informative local features are detected in the feature detection stage. In this paper, we propose to remove non-informative local features by integrating feature scales with stroke width information. Experiments conducted on both synthetic data and real scene data show that the proposed feature selection method can effectively filter non-informative features and improve recognition accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No4/Paper_3-Scale-based_Local_Feature_Selection_for_Scene.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework for Knowledge–Based Intelligent Clinical Decisionsupport to Predict Comorbidity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030402</link>
        <id>10.14569/IJARAI.2014.030402</id>
        <doi>10.14569/IJARAI.2014.030402</doi>
        <lastModDate>2014-04-10T09:37:49.8570000+00:00</lastModDate>
        
        <creator>Ernest E. Onuiri</creator>
        
        <creator>Oludele Awodele</creator>
        
        <creator>Sunday A. Idowu</creator>
        
        <subject>Framework; Knowledge-based; Comorbidity; Clinical Decision Support System (CDSS)</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(4), 2014</description>
        <description>Research in medicine has shown that comorbidity is prevalent among chronic diseases. In ophthalmology, the term refers to the overlap of two or more ophthalmic disorders. The comorbidity of cataract and glaucoma has gained increasing prominence in ophthalmology within the past few decades and poses a major concern to practitioners. The situation is made worse by the dearth of ophthalmologists in Nigeria vis-&#224;-vis Sub-Saharan Africa, making it almost inevitable that patients will find themselves at the mercy of General Practitioners (GPs) who are not experts in this domain of interest. To stem the tide, we designed a framework that adopts a knowledge-based Clinical Decision Support System (CDSS) approach to predict ophthalmic comorbidity and to generate patient-specific care plans at the point of care. This research, which lies within the domain of medical/healthcare informatics, was carried out through an in-depth understanding of the intricacies associated with knowledge representation and preprocessing of relevant domain knowledge. Furthermore, we present the Comorbidity Ontological Framework for Intelligent Prediction (COFIP), in which Artificial Neural Networks and Decision Trees, both mechanisms of Artificial Intelligence (AI), were embedded to give the framework intelligent (predictive and adaptive) capability. This framework provides the platform for a CDSS that is diagnostic, predictive and preventive, because it was designed to predict, with satisfactory accuracy, the tendency of a patient with either cataract or glaucoma to degenerate into a state of comorbidity. Furthermore, because the framework is generic in outlook, it can be adapted for other chronic diseases of interest within the medical informatics research community.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No4/Paper_2-Framework_for_Knowledge–Based_Intelligent_Clinical_Decisionsupport_to_Predict_Comorbidity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Registration Method for Multimodal Medical Images Using Contours Mutual Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030401</link>
        <id>10.14569/IJARAI.2014.030401</id>
        <doi>10.14569/IJARAI.2014.030401</doi>
        <lastModDate>2014-04-10T09:37:49.7470000+00:00</lastModDate>
        
        <creator>Ying Qian</creator>
        
        <creator>Meng Li</creator>
        
        <creator>Qingjie Wei</creator>
        
        <creator>Xuemei Ren</creator>
        
        <subject>contour mutual information; mutual information; multimodal medical image; image registration; variational level set method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(4), 2014</description>
        <description>In recent years, mutual information has developed into a popular image registration measure, especially in multimodal image registration. For medical images of different modalities, the contours of tissues or organs are similar. In this paper, an effective new registration method for multimodal medical images based on contour mutual information is proposed. Firstly, the contour is obtained through the variational level set method. Secondly, the pixels within the contour are assigned the same grayscale value, yielding two contour images. Finally, the two contour images are registered using mutual information as the similarity measure. The experimental results show that the registration algorithm proposed in this paper works more effectively and accurately than normalized mutual information and gradient mutual information.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No4/Paper_1-A_Registration_Method_for_Multimodal_Medical_Images_Using_Contours_Mutual_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Automated Text Simplification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040109</link>
        <id>10.14569/SpecialIssue.2014.040109</id>
        <doi>10.14569/SpecialIssue.2014.040109</doi>
        <lastModDate>2014-04-04T17:27:10.7370000+00:00</lastModDate>
        
        <creator>Matthew Shardlow</creator>
        
        <subject>Text Simplification; Lexical Simplification; Syntactic Simplification; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>Text simplification modifies syntax and lexicon to improve the understandability of language for an end user. This survey identifies and classifies simplification research within the period 1998-2013. Simplification can be used for many applications, including second language learners, preprocessing in pipelines, and assistive technology. There are many approaches to the simplification task, including lexical, syntactic, statistical machine translation, and hybrid techniques. This survey also explores the current challenges which this field faces. Text simplification is a non-trivial task which is rapidly growing into its own field. This survey gives an overview of contemporary research whilst taking into account the history that has brought text simplification to its current state.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_9-A_Survey_of_Automated_Text_Simplification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Reasoning Algorithm for Question Answering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040108</link>
        <id>10.14569/SpecialIssue.2014.040108</id>
        <doi>10.14569/SpecialIssue.2014.040108</doi>
        <lastModDate>2014-04-04T17:27:10.7070000+00:00</lastModDate>
        
        <creator>Poonam Tanwar</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <creator>Dr. Kamlesh Datta</creator>
        
        <subject>Knowledge Representation (KR); Semantic Net; Script; Reasoning; QAS; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>Knowledge representation (KR) is one of the most desirable areas of research for making systems intelligent. Today is the era of knowledge, which requires articulation, semantics, syntax, etc. These requirements force the design of a general system that can represent declarative as well as procedural knowledge. Without an effective inference/reasoning mechanism, a knowledge representation technique fulfills only part of the requirements for an intelligent system. The objective of this research work is to present an effective and appropriate knowledge representation technique for representing general knowledge, together with a reasoning algorithm for a Question Answering System (QAS) that works as a story reader, so that appropriate knowledge can be inferred from the system. The architecture of the knowledge representation system is capable of integrating different types of knowledge and is also cost-effective.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_8-An_Effective_Reasoning_Algorithm_for_Question_Answering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simple Strategy to Start Domain Ontology from Scratch</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040107</link>
        <id>10.14569/SpecialIssue.2014.040107</id>
        <doi>10.14569/SpecialIssue.2014.040107</doi>
        <lastModDate>2014-04-04T17:27:10.6900000+00:00</lastModDate>
        
        <creator>Ivo Wolff Gersberg</creator>
        
        <creator>Nelson F. F. Ebecken</creator>
        
        <subject>domain ontology; contextual approach; ontology; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>Aiming at the use of domain ontologies as an educational tool for neophyte students, and focusing on a fast and easy way to start a domain ontology from scratch, semantics are set aside in favor of identifying the contexts of concepts (terms) used to build the ontology. Text mining, link analysis and graph analysis create an abstract rough sketch of the interactions between terms. This first rough sketch is presented to the expert, providing insights and inspiring him or her to communicate knowledge through assertive sentences. Those assertive sentences underpin the creation of the ontology. A web prototype tool to visualize the ontology and retrieve book contents is also presented.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_7-A_Simple_Strategy_to_Start_Domain_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards the Design of a Textile Chemical Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040106</link>
        <id>10.14569/SpecialIssue.2014.040106</id>
        <doi>10.14569/SpecialIssue.2014.040106</doi>
        <lastModDate>2014-04-04T17:27:10.6770000+00:00</lastModDate>
        
        <creator>Carolina Prieto Ferrero</creator>
        
        <creator>Elena Lloret</creator>
        
        <creator>Manuel Palomar</creator>
        
        <subject>Ontology; Chemical Textile; Chemical Ontology; Textile Ontology; Chemical Textile Ontology; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>The main goal of this paper is to present the initial version of a Textile Chemical Ontology, to be used by textile professionals for conceptualising and representing the harmful chemical substances that are banned in this domain. After analysing different methodologies and determining that “Methontology” is the most appropriate for this purpose, the methodology is explored and applied to the domain. In this manner, an initial set of concepts is defined, together with their hierarchy and the relationships between them. This paper shows the benefits of using the ontology through a real use case in the context of Information Retrieval. The potential shown by the proposed ontology in this preliminary evaluation encourages extending it with a greater number of concepts and relationships, and validating it within other Natural Language Processing applications.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_6-Towards_the_Design_of_a_Chemical_Textile_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating the Shaping of Metadata Extracted from a Company Website with Open Source Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040105</link>
        <id>10.14569/SpecialIssue.2014.040105</id>
        <doi>10.14569/SpecialIssue.2014.040105</doi>
        <lastModDate>2014-04-04T17:27:10.6600000+00:00</lastModDate>
        
        <creator>Dr Ir Robert VISEUR</creator>
        
        <subject>terminology extraction; named entities; NLP; tag cloud; market analysis</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>As part of a market analysis process, the objective was to automate the task of identifying the activities and skills of a collection of enterprises, namely Belgian and French open source companies. In order to avoid manual annotation through visual analysis of the websites' content, a tool chain was developed to collect the content of websites and extract the important terms. Standard software libraries were identified, allowing HTML documents to be cleaned up and the part-of-speech tagging process used for extracting terminology to be performed. This procedure is supplemented by the extraction and recognition of named entities. The terms extracted from the HTML pages of a company website were then merged and filtered, and a circular tag cloud was generated. This presentation facilitates the identification of important terms, which typically correspond to the activities and technologies supported by the company. Several changes are planned for this prototype, including, in particular, the extension to texts in French, the association of extracted terms with the vocabulary of a classification scheme, and the automatic generation of dashboards to facilitate the monitoring of the evolution of the industrial sector.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_5-Automating_the_Shaping_of_Metadata_Extracted_from_a_Company_Website_with_Open_Source_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing Opinions and Argumentation in News Editorials and Op-Eds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040104</link>
        <id>10.14569/SpecialIssue.2014.040104</id>
        <doi>10.14569/SpecialIssue.2014.040104</doi>
        <lastModDate>2014-04-04T17:27:10.6300000+00:00</lastModDate>
        
        <creator>Bal Krishna Bal</creator>
        
        <subject>editorials; opinions; arguments; persuasion; sentiment analysis; annotation; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>Analyzing opinions and arguments in news editorials and op-eds is an interesting and challenging task. The challenges lie at multiple levels – the text has to be analyzed at the discourse level (paragraphs and above) and also at the lower levels (sentence, phrase and word levels). The abundance of implicit opinions involving sarcasm, irony and biases adds further complexity to the task. The available methods and techniques in sentiment analysis and opinion mining are still largely focused on the lower levels, i.e., up to the sentence level. However, the given task requires the application of concepts from a number of closely related sub-disciplines – Sentiment Analysis, Argumentation Theory, Discourse Analysis, Computational Linguistics, Logic and Reasoning, etc. The primary argument of this paper is that partial solutions to the problem can be achieved by developing linguistic resources and using them to automatically annotate texts for opinions and arguments. This paper discusses the ongoing efforts in the development of linguistic resources for annotating opinionated texts, which are useful in the analysis of opinions and arguments in news editorials and op-eds.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_4-Analyzing_Opinions_and_Argumentation_in_News_Editorials_and_Op-Eds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving Semantic Problem of Phrases in NLP Using Universal Networking Language (UNL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040103</link>
        <id>10.14569/SpecialIssue.2014.040103</id>
        <doi>10.14569/SpecialIssue.2014.040103</doi>
        <lastModDate>2014-04-04T17:27:10.6300000+00:00</lastModDate>
        
        <creator>M. F Mridha</creator>
        
        <creator>Aloke Kumar Saha</creator>
        
        <creator>Jugal Krishna Das</creator>
        
        <subject>Syntax and Semantic analysis; Bangla Root Word; Morphology and UNL; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>This paper deals largely with the semantic problem and the generation of semantic relations, which are difficult problems in the field of natural language processing. In this work we approach them from a knowledge-based perspective, treating language phenomena in terms of rules and dictionary features. The work focuses on solving these problems for Bangla sentences, thereby improving the accuracy of the analysis process. The problem particularly needs a knowledge-intensive solution, and we have used insights from linguistics towards solving it. The usefulness of automatically extracting features for words in the dictionary also becomes evident through the work.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_3-Solving_Semantic_Problem_of_Phrases_in_NLP_Using_Universal_Networking_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Key Issues in Vowel Based Splitting of Telugu Bigrams</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040102</link>
        <id>10.14569/SpecialIssue.2014.040102</id>
        <doi>10.14569/SpecialIssue.2014.040102</doi>
        <lastModDate>2014-04-04T17:27:10.5970000+00:00</lastModDate>
        
        <creator>T. Kameswara Rao</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Telugu word splitting; vowel based splitting; compound word splitting; bigrams; trigrams; n-grams; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>Splitting compound Telugu words into their components or root words is one of the important, tedious and yet inaccurate tasks of Natural Language Processing (NLP). Except in a few special cases, at least one vowel is necessarily involved in Telugu conjunctions. As a result, vowels are often repeated as they are, or are converted into other vowels or consonants. This paper describes the issues involved in vowel-based splitting of a Telugu bigram into proper root words using Telugu grammar conjunction (‘sandhi’) rules for MT.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_2-Key_Issues_in_Vowel_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Representation of a Sentence using a Polar Fuzzy Neutrosophic Semantic Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2014.040101</link>
        <id>10.14569/SpecialIssue.2014.040101</id>
        <doi>10.14569/SpecialIssue.2014.040101</doi>
        <lastModDate>2014-04-04T17:27:10.5030000+00:00</lastModDate>
        
        <creator>Sachin Lakra</creator>
        
        <creator>T. V. Prasad</creator>
        
        <creator>G. Ramakrishna</creator>
        
        <subject>semantic net; polarity; neutrosophy; polar fuzzy neutrosophic semantic net; NLP</subject>
        <description>Special Issue(SpecialIssue), 4(1), 2014</description>
        <description>A semantic net can be used to represent a sentence. A sentence in a language contains semantics which are polar in nature, that is, semantics which are positive, neutral and negative. Neutrosophy is a relatively new field of science which can be used to mathematically represent triads of concepts. These triads include truth, indeterminacy and falsehood, and likewise positivity, neutrality and negativity. Thus a conventional semantic net has been extended in this paper, using neutrosophy, into a Polar Fuzzy Neutrosophic Semantic Net. A Polar Fuzzy Neutrosophic Semantic Net has been implemented in MATLAB and used to illustrate a polar sentence in the English language. The paper demonstrates a method for the representation of polarity in a computer’s memory. Thus, polar concepts can be applied to imbue a machine, such as a robot, with emotions, making machine emotion representation possible.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo9/Paper_1-Representation_of_a_Sentence_using_a_Polar_Fuzzy_Neutrosophic_Semantic_Net.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inverted Pendulum-type Personal Mobility Considering Human Vibration Sensitivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050325</link>
        <id>10.14569/IJACSA.2014.050325</id>
        <doi>10.14569/IJACSA.2014.050325</doi>
        <lastModDate>2014-04-01T11:46:07.3570000+00:00</lastModDate>
        
        <creator>Misaki Masuda</creator>
        
        <creator>Takuma Suzuki</creator>
        
        <creator>Kazuto Yokoyama</creator>
        
        <creator>Masaki Takahashi</creator>
        
        <subject>Inverted Pendulum-Type Personal Mobility; Ride Comfort; Human Vibration; Frequency Analysis; Vibration Control; Frequency Shaped LQG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>An inverted pendulum-type PM (personal mobility vehicle) has been attracting attention as a low-carbon vehicle. For the many people who like to use the PM, ride comfort is important. However, the ride comfort of the PM has not been a focus of previous studies. Vibration is one of the causes that make riders feel uncomfortable. The PM is an unstable system, and horizontal vibration may be caused by its stabilizing control. Additionally, vertical vibration may be caused by road disturbances. This study analyzes the vibration of the rider’s head in these two directions when the PM runs on a road with disturbances in numerical simulations, and evaluates ride comfort using the frequency characteristics of the vibration. To account for human vibration sensitivity, the frequency weighting proposed in ISO 2631-1 is used as the evaluation standard. Improvement methods are proposed for both software and hardware, and it is confirmed that the proposed methods can improve ride comfort.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_25-Inverted_Pendulum-type_Personal_Mobility_Considering_Human_Vibration_Sensitivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Web Services: State of the Art and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050324</link>
        <id>10.14569/IJACSA.2014.050324</id>
        <doi>10.14569/IJACSA.2014.050324</doi>
        <lastModDate>2014-03-31T11:46:36.4200000+00:00</lastModDate>
        
        <creator>Khalid Elgazzar</creator>
        
        <creator>Patrick Martin</creator>
        
        <subject>Mobile Web services, service provisioning, mobile devices, ubiquitous computing, mobile computing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>For many years mobile devices were commonly recognized as Web consumers. However, the advancements in mobile device manufacturing, coupled with the latest achievements in wireless communication developments, are key enablers for shifting the role of mobile devices from service consumers to service providers. This paradigm shift is a major step towards the realization of pervasive and ubiquitous computing. Mobile Web service provisioning is the art of hosting and offering Web services from mobile devices, which actively contributes towards the direction of the Mobile Internet. In this paper, we provide the state of the art of mobile service provisioning as it currently stands. We focus our discussions on its applicability and reliability, and on the challenges of mobile environments and resource constraints. We study the different provisioning architectures, enabler technologies, publishing and discovery mechanisms, and maintenance of up-to-date service registries. We point out the major open research issues in each provisioning aspect. Performance issues due to the resource constraints of mobile devices are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_24-Mobile_Web_Services_State_of_the_Art_and.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Variational Formulation of the Template-Based Quasi-Conformal Shape-from-Motion from Laparoscopic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050323</link>
        <id>10.14569/IJACSA.2014.050323</id>
        <doi>10.14569/IJACSA.2014.050323</doi>
        <lastModDate>2014-03-31T11:46:36.3900000+00:00</lastModDate>
        
        <creator>Abed Malti</creator>
        
        <subject>Laparoscopy, monocular 3D reconstruction, extensible surface.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>One of the current limits of laparosurgery is the absence of a 3D sensing facility for standard monocular laparoscopes. Significant progress has been made to acquire 3D from a single camera using Visual SLAM (Simultaneous Localization And Mapping); however, most current approaches rely on the assumption that the observed tissue is rigid or undergoes periodic deformations. In laparoscopic surgery, these assumptions do not apply due to the unpredictable and elastic deformation of the tissues. We propose a new sequential 3D reconstruction method adapted to reconstructing organs in the abdominal cavity. We draw on recent computer vision methods exploiting a known 3D view of the environment at rest position called a template. However, no such method has ever been attempted in-vivo. State-of-the-art methods assume that the environment can be modeled as an isometric developable surface: one which deforms isometrically to a plane. While this assumption holds for paper and cloth-like surfaces, it certainly does not fit human organs and tissue in general. Our method tackles these limits: it uses a non-developable template and copes with natural 3D deformations by introducing a quasi-conformal prior. Our method adopts a new two-phase approach. First, the 3D template is reconstructed in-vivo using RSfM (Rigid Shape-from-Motion) while the surgeon is exploring – but not deforming – structures in the abdominal cavity. Second, the surgeon manipulates and deforms the environment. Here, the 3D template is quasi-conformally deformed to match the 2D image data provided by the monocular laparoscope. This second phase relies on only a single image; therefore it copes with both sequential processing and self-recovery from tracking failures. The proposed approach has been validated using: (i) in-vivo animal data with ground truth, and (ii) in-vivo laparoscopic videos of a real patient’s uterus. Our experimental results illustrate the ability of our method to reconstruct natural 3D deformations typical in real surgery.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_23-Variational_Formulation_of_the_Template-Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Error Correction Mechanisms for Video Streaming over the Internet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050322</link>
        <id>10.14569/IJACSA.2014.050322</id>
        <doi>10.14569/IJACSA.2014.050322</doi>
        <lastModDate>2014-03-31T11:46:36.3730000+00:00</lastModDate>
        
        <creator>Jafar Ababneh</creator>
        
        <creator>Omar Almomani</creator>
        
        <subject>Forward Error Correction (FEC); Retransmission; Error Resilience; Error Concealment; video quality; Video Streaming over the Internet; networking quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>This overview is targeted at determining the state of the art of error control mechanisms for video streaming over the Internet. The aim of error control mechanisms is to protect data from errors caused by packet loss due to congestion and link failure. Error control is classified into two categories: error correction coding and error detection coding. Error control mechanisms for video applications can be classified into four types: forward error correction (FEC), retransmission, error resilience, and error concealment. In this paper, we provide a survey of existing error control mechanisms and of representative systems that use them. We describe the challenges and solutions of each error control mechanism. Finally, we show the factors that affect video quality during transmission over the Internet.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_22-Survey_of_Error_Correction_Mechanisms_for_Video_Streaming_over_the_Internet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LPA Beamformer for Tracking Nonstationary Accelerated Near-Field Sources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050321</link>
        <id>10.14569/IJACSA.2014.050321</id>
        <doi>10.14569/IJACSA.2014.050321</doi>
        <lastModDate>2014-03-31T11:46:36.3600000+00:00</lastModDate>
        
        <creator>Amira S. Ashour</creator>
        
        <subject>Near-field; range and DOA estimation, moving source tracking; LPA beamformer; REM (Recursive Expectation Maximization)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>In this paper, a computationally very efficient algorithm for direction of arrival (DOA) and range parameter estimation is proposed for near-field narrowband nonstationary accelerated moving sources. The proposed algorithm is based on the local polynomial approximation (LPA) beamformer, which has proven its efficiency in far-field applications. The LPA estimates the instantaneous values of the direction of arrival, angular velocity, acceleration, and range parameters of near-field sources using a weighted least squares approach based on the Taylor series. The performance of the LPA beamformer in estimating the DOAs of near-field sources is evaluated and compared with the Recursive Expectation-Maximization (REM) method. The comparison uses the standard deviation of the DOA estimation error, and of the range estimate, versus signal-to-noise ratio (SNR). The simulation results show that the LPA beamformer outperforms REM in signal-to-noise ratio requirements.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_21-LPA_Beamformer_for_Tracking_Nonstationary_Accelerated_Near-Field_Sources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending Access Management to maintain audit logs in cloud computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050320</link>
        <id>10.14569/IJACSA.2014.050320</id>
        <doi>10.14569/IJACSA.2014.050320</doi>
        <lastModDate>2014-03-31T11:46:36.3270000+00:00</lastModDate>
        
        <creator>Ajay Prasad</creator>
        
        <creator>Prasun Chakrabarti</creator>
        
        <subject>Cloud Computing; Access Management; Audit logs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Among the most often discussed security risks in cloud computing are security and compliance, viability, lack of transparency, and reliability and performance issues. Bringing strong auditability to cloud services can reduce these risks to a great extent. Moreover, auditing, both internal and external, is generally required and sometimes unavoidable given present-day competition in the business arena. Auditing in web-based and cloud-based usage environments focuses mainly on the cost of a service, which determines the overall expenditure of the user organization. However, the expenditure can be controlled by a collaborative approach between the provider company and the user organization that constantly monitors end-user access and usage of subscribed cloud services. Though many cloud providers claim to have a robust audit feature, generic verifiability with sustainable long-term recording of usage logs does not exist. Certain access management models can be naturally extended to maintain audit logs over the long term. However, maintaining long-term logs has storage implications, especially for larger organizations, and these implications need to be studied.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_20-Extending_Access_Management_to_maintain_audit_logs_in_cloud_computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Impact of Critical Factors in Agile Continuous Delivery Process: A System Dynamics Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050319</link>
        <id>10.14569/IJACSA.2014.050319</id>
        <doi>10.14569/IJACSA.2014.050319</doi>
        <lastModDate>2014-03-31T11:46:36.2970000+00:00</lastModDate>
        
        <creator>Olumide Akerele</creator>
        
        <creator>Muthu Ramachandran</creator>
        
        <creator>Mark Dixon</creator>
        
        <subject>Agile software development; Continuous Delivery; Delivery Pipeline; System Dynamics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Continuous Delivery is aimed at the frequent delivery of good quality software in a speedy, reliable and efficient fashion – with strong emphasis on automation and team collaboration. However, even with this new paradigm, repeatability of project outcome is still not guaranteed: project performance varies due to the various interacting and inter-related factors in the Continuous Delivery &#39;system&#39;. This paper presents results from the investigation of various factors, in particular agile practices, on the quality of the developed software in the Continuous Delivery process. Results show that customer involvement and the cognitive ability of the QA have the most significant individual effects on the quality of software in continuous delivery.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_19-Evaluating_the_Impact_of_Critical_Factors_in_Agile_Continuous_Delivery_Process_A_System_Dynamics_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Local Entomological Database for Education and Research using Simulation (Virtual) Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050318</link>
        <id>10.14569/IJACSA.2014.050318</id>
        <doi>10.14569/IJACSA.2014.050318</doi>
        <lastModDate>2014-03-31T11:46:36.2670000+00:00</lastModDate>
        
        <creator>Emad I. Khater</creator>
        
        <creator>Mona G. Mahmoud</creator>
        
        <creator>Enas H. Ghallab</creator>
        
        <creator>Magdi G. Shehata</creator>
        
        <creator>Yasser M. Abd El-Latif</creator>
        
        <subject>Bioinformatics; local database; entomology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Bioinformatics has been regarded as one of the rapidly-evolving fields with enormous impact on the life and biomedical sciences. It is an interdisciplinary science that integrates life sciences, mathematics and computer science in order to extract meaningful biological insights from large data sets of raw DNA and protein sequences. Our objective was the development of an entomogenomics database (provisionally named EntomDB) for education and research in entomology (the science of insects). This DB includes DNA/protein sequence data selected from genomes of major insect models of importance in biology and biomedical research. EntomDB will represent a customized, easy, interactive and self-learning resource tool for beginner users in poor-resource settings. This will enable users to learn basic skills in bioinformatics and genomics without needing to search through the numerous databases currently available on the World Wide Web, with their complex interfaces and contents. EntomDB will help students and young researchers study the primary structure, splicing, and translation of different genes and predict their function using simple simulation methods. It is also designed to work off-line in case no internet connection is available. EntomDB is primarily designed for the entomology discipline; however, it can easily be adapted for other disciplines in the life and biomedical sciences. EntomDB will have important educational and developmental outcomes in promoting bioinformatics learning in the developing world and will provide affordable first-level training at advanced degree and research levels.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_18-Development_of_a_Local_Entomological_Database_for_Education_and_Research_using_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050317</link>
        <id>10.14569/IJACSA.2014.050317</id>
        <doi>10.14569/IJACSA.2014.050317</doi>
        <lastModDate>2014-03-31T11:46:36.2500000+00:00</lastModDate>
        
        <creator>P May</creator>
        
        <creator>E Zhou</creator>
        
        <creator>C. W. Lee</creator>
        
        <subject>real time recurrent learning; tangent plane; generalization; weight elimination; temporal pattern recognition; non-linear process control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>The tangent plane algorithm for real time recurrent learning (TPA-RTRL) is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning process. Compared to the original gradient descent real time recurrent learning algorithm (GD-RTRL), it is very fast and avoids problems like local minima of the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values, which can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_17-Improved_Generalization_in_Recurrent_Neural_Networks_Using_the_Tangent_Plane_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Natural Language Processing Laboratory serving Islamic Sciences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050316</link>
        <id>10.14569/IJACSA.2014.050316</id>
        <doi>10.14569/IJACSA.2014.050316</doi>
        <lastModDate>2014-03-31T11:46:36.2330000+00:00</lastModDate>
        
        <creator>Moath M. Najeeb</creator>
        
        <creator>Abdelkarim A. Abdelkader</creator>
        
        <creator>Musab B. Al-Zghoul</creator>
        
        <subject>Natural Language Processing; Arabic Language; Islamic Sciences; Framework; Laboratory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Arabic Natural Language Processing (ANLP) has received great attention as a new research topic in the last few years. In this paper an ANLP laboratory has been created to serve the Islamic sciences, especially the science of Hadith. The main tasks of this laboratory are creating and using the necessary linguistic resources (corpora, lexicons, etc.) in developing or adapting the basic tools (parser, POS-tagger, etc.), developing an evaluation framework for Arabic Natural Language Processing systems, and defining research areas and services for universities. The laboratory can also adapt the important theories, resources, tools and applications of natural language processing for other languages such as English and French.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_16-Arabic_Natural_Language_Processing_Laboratory_serving_Islamic_Sciences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Pharmacy Student Learning Outline for Mobile Atmosphere</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050315</link>
        <id>10.14569/IJACSA.2014.050315</id>
        <doi>10.14569/IJACSA.2014.050315</doi>
        <lastModDate>2014-03-31T11:46:36.2030000+00:00</lastModDate>
        
        <creator>Dr. Mohamed F. AlAjmi</creator>
        
        <creator>Shakir Khan</creator>
        
        <subject>collaborative learning; mobile environment; real time approach; non-real time approach; mixture approach; lecture section; interface section; learner section; diagnose section</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>The idea of this research is collaborative learning in a mobile environment, applied to pharmacy students of the college. We focus on three features: computer-mediated mutual learning, the learning process module, and the student learning mode. In this paper, a student-focused instruction module, a student interface section, a teacher interface section, a learner section, a problem-solving section, a curriculum section, a control section, and a diagnosis section are planned. The system permits students to be supported with a real-time approach, a non-real-time approach, or a mixture approach. The devices used include smart phones, PDAs, mobile devices, portable computers and tablet PDAs. The aim is a more capable learning environment so that students can get their learning done more efficiently. The development of collaborative learning combines the advantages of an adaptive learning environment with the advantages of mobile telecommunication and the flexibility of mobile devices.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_15-Collaborative_Pharmacy_Student_Learning_Outline_for_Mobile_Atmosphere.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Machine Learning Tool for Weighted Regressions in Time, Discharge, and Season</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050314</link>
        <id>10.14569/IJACSA.2014.050314</id>
        <doi>10.14569/IJACSA.2014.050314</doi>
        <lastModDate>2014-03-31T11:46:36.1870000+00:00</lastModDate>
        
        <creator>Alexander Maestre</creator>
        
        <creator>Eman El-Sheikh</creator>
        
        <creator>Derek Williamson</creator>
        
        <creator>Amelia Ward</creator>
        
        <subject>Machine Learning; Boosted Regression Trees; Survival Parametric Regression; Water Quality Modeling; Weighted Regressions in Time; Discharge; and Season</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>A new machine learning tool has been developed to classify water stations with similar water quality trends. The tool is based on the statistical method Weighted Regressions in Time, Discharge, and Season (WRTDS), developed by the United States Geological Survey (USGS) to estimate daily concentrations of water constituents in rivers and streams based on continuous daily discharge data and discrete water quality samples collected at the same or nearby locations. WRTDS is based on parametric survival regressions using a jack-knife cross validation procedure that generates unbiased estimates of the prediction errors. One of the disadvantages of WRTDS is that it needs a large number of samples (n &gt; 200) collected during at least two decades. In this article, the tool is used to evaluate the use of Boosted Regression Trees (BRT) as an alternative to the parametric survival regressions for water quality stations with a small number of samples. We describe the development of the machine learning tool as well as a comparative evaluation of the two methods, WRTDS and BRT. The purpose of the tool is to evaluate the reduction in variability of the estimates by clustering data from nearby stations with similar concentration and discharge characteristics. The results indicate that, using clustering, the predicted concentrations using BRT are in general higher than the observed concentrations. In addition, it appears that BRT generates a higher sum of squared residuals than the parametric survival regressions.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_14-A_Machine_Learning_Tool_for_Weighted_Regressions_in_Time.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bipolar Factor and Systems Analysis skills of Student Computing Professionals at University of Botswana, Gaborone.</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050313</link>
        <id>10.14569/IJACSA.2014.050313</id>
        <doi>10.14569/IJACSA.2014.050313</doi>
        <lastModDate>2014-03-31T11:46:36.1570000+00:00</lastModDate>
        
        <creator>Ezekiel U. Okike</creator>
        
        <subject>personality trait; systems analysts; academic achievement; bipolar matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Temperamental suitability (personality traits) could be a factor to consider in career placement and professional development. It could also be an indicator of professional success or failure in any profession, such as Systems Analysis and Design (SAD). However, there is not sufficient empirical evidence in support of the personality traits to which systems analysts and designers may be categorized. The objective of this study is to empirically investigate the main personality traits to which systems analysts and designers may belong, and then propose a new approach to composing a personality matrix based on a sound computational model. The study employed a quantitative research approach to measure the personality traits of 60 student systems analysts and designers using a human metric tool, the Myers Briggs Type Indicator (MBTI), and some pre-designed additional questionnaires. A mathematical model of the form a_ij = β_0 + β_1 x_1j + β_2 x_2j + β_3 x_3j + β_4 x_4j + … + β_n x_nj was employed in order to measure achievement in the systems analysis and design examination. The Statistical Package for the Social Sciences (SPSS) was used to analyse the data. Using linear regression, the model was not significant, implying that achievement in the SAD examination does not depend only on the personality traits, motivation variables and study habit variables which were the independent variables. However, the R-squared value indicated that these variables account for 52% of the variation in the dependent variable, SAD score (achievement). The best achievers in the personality traits are ENFJ, ENTJ, ISFJ and INFJ, all scoring 70% each. Therefore, the best achievers possess the personality traits of Extroversion (E), iNtuition (N), Feeling (F), Judging (J), Thinking (T), Introversion (I), and Sensing (S).
Overall, the highest passes are students of the traits INFJ (11 passes), INTJ (11 passes), ENTJ (10 passes), ENFJ (10 passes), ESFP (3 passes), ISFJ (3 passes), ISTJ (3 passes), ESFP (2 passes), ENFP (1), ISTP (1), and ISFP (1).</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_13-Bipolar_Factor_and_Systems_Analysis_skills_of_Student_Computing_Professionals_at_University_of_Botswana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sample K-Means Clustering Method for Determining the Stage of Breast Cancer Malignancy Based on Cancer Size on Mammogram Image Basis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050312</link>
        <id>10.14569/IJACSA.2014.050312</id>
        <doi>10.14569/IJACSA.2014.050312</doi>
        <lastModDate>2014-03-31T11:46:36.1400000+00:00</lastModDate>
        
        <creator>Karmilasari </creator>
        
        <creator>Suryarini Widodo</creator>
        
        <creator>Matrissya Hermita</creator>
        
        <creator>Nur Putri Agustiyani</creator>
        
        <creator>Yuhilza Hanum</creator>
        
        <creator>Lussiana ETP</creator>
        
        <subject>classification; staging; breast cancer; mammogram;  k-means clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Breast cancer is a disease that arises due to abnormal growth of breast tissue cells. Detection of the breast cancer malignancy level (stage) relies heavily on the doctor's analysis. To assist this analysis, this research aims to develop software that can determine the stage of breast cancer based on the size of the cancerous tissue. The steps of the research consist of mammogram image acquisition, determining the ROI (Region of Interest) using the region growing segmentation method, measuring the area of suspected cancer, and determining the stage classification of the area on the mammogram image using the Sample K-Means Clustering method. Based on 33 malignant (abnormal) mammogram sample images taken from the MIAS mini mammography database, the proposed method can detect the stage of breast cancer in the malignant group.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_12-Sample_K-Means_Clustering_Method_for_Determining_the_Stage_of_Breast_Cancer_Malignancy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An autonomous intelligent gateway for wireless sensor network based on mobile node</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050311</link>
        <id>10.14569/IJACSA.2014.050311</id>
        <doi>10.14569/IJACSA.2014.050311</doi>
        <lastModDate>2014-03-31T11:46:36.1270000+00:00</lastModDate>
        
        <creator>Hajar Mansouri</creator>
        
        <creator>Fouad Moutaouakil</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>Mobile Wireless Sensor Network; Multi Agent Systems; gateway; mobile nodes</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>One of the recent tendencies for Wireless Sensor Networks (WSNs) that significantly increases their performance and functionality is the utilization of mobile nodes. This paper describes the software architecture of an intelligent autonomous gateway designed to provide the necessary middleware between locally deployed sensor networks based on mobile nodes and a remote location. The gateway provides hierarchical networking, automatic management of the mobile WSN (MWSN), alarm notification, and SMS/Internet access capabilities with user authentication. Our architecture includes three multi-agent system modules: an interface module, a management module and a treatment module. The management module consists of two agents, a control communication agent and a learning agent. The control communication agent interacts with the interface module and the treatment module in order to decide which data mule can reach the target. Several factors such as battery status, coverage issues, and communication conditions have been taken into consideration.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_11-An_autonomous_intelligent_gateway_for_wireless_sensor_network_based_on_mobile_node.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Students Attendance System Using QR Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050310</link>
        <id>10.14569/IJACSA.2014.050310</id>
        <doi>10.14569/IJACSA.2014.050310</doi>
        <lastModDate>2014-03-31T11:46:36.0930000+00:00</lastModDate>
        
        <creator>Fadi Masalha</creator>
        
        <creator>Nael Hirzallah</creator>
        
        <subject>Mobile Computing; Attendance System; Educational System; GPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Smartphones are becoming preferred companions to users, more so than desktops or notebooks. Knowing that smartphones are most popular with users around the age of 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system that is based on a QR code, which is displayed for students during or at the beginning of each lecture. The students need to scan the code in order to confirm their attendance. The paper explains the high-level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_10-A_Students_Attendance_System_Using_QR_Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feasibility of automated detection of HONcode conformity for health-related websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050309</link>
        <id>10.14569/IJACSA.2014.050309</id>
        <doi>10.14569/IJACSA.2014.050309</doi>
        <lastModDate>2014-03-31T11:46:36.0630000+00:00</lastModDate>
        
        <creator>C&#233;lia Boyer</creator>
        
        <creator>Ljiljana Dolamic</creator>
        
        <subject>internet content quality; health; machine learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>In this paper, the authors evaluate machine learning algorithms for detecting the trustworthiness of a website according to the HONcode criteria of conduct (detailed in the paper). To derive a baseline, we evaluated a Naive Bayes algorithm, using single words as features. We compared the baseline algorithm’s performance to that of the same algorithm employing different feature types, and to the SVM algorithm. The results demonstrate that the most basic configuration (Naive Bayes, single words) could produce a precision of 0.94 for “easy” HON criteria such as “Date”. Conversely, for the more difficult HON criterion “Justifiability”, we obtained a precision of 0.68 by adjusting system parameters such as the algorithm (SVM) and feature types (W2).</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_9-Feasibility_of_automated_detection_of_HONcode_conformity_for_health-related_websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Open Vehicle Routing Problem by Ant Colony Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050308</link>
        <id>10.14569/IJACSA.2014.050308</id>
        <doi>10.14569/IJACSA.2014.050308</doi>
        <lastModDate>2014-03-31T11:46:36.0300000+00:00</lastModDate>
        
        <creator>Er. Gurpreet Singh</creator>
        
        <creator>Dr. Vijay Dhir</creator>
        
        <subject>Ant Colony Optimization (ACO); Vehicle Routing Problem (VRP); Open Vehicle Routing Problem (OVRP); Travelling Salesman Problem (TSP); Swarm Intelligence (SI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>The vehicle routing problem (VRP) is a real-world combinatorial optimization problem which determines the optimal route of a vehicle. Generally, the aim is to provide efficient vehicle service to customers through different services by visiting a number of cities or stops. The VRP follows the Travelling Salesman Problem (TSP), in which each vehicle visits a set of cities such that every city is visited by exactly one vehicle only once. This work proposes the Ant Colony Optimization (ACO)-TSP algorithm to eliminate the tour loop for the Open Vehicle Routing Problem (OVRP). A key aspect of this algorithm is to plan the routes of buses that must pick up and deliver school students from various bus stops on time, especially in the case of far distances covered by the vehicle in a rural area, and to find an efficient and safe vehicle route.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_8-Open_Vehicle_Routing_Problem_by_Ant_Colony_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Content Classification Method Research Based on Two-step Strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050307</link>
        <id>10.14569/IJACSA.2014.050307</id>
        <doi>10.14569/IJACSA.2014.050307</doi>
        <lastModDate>2014-03-31T11:46:36.0000000+00:00</lastModDate>
        
        <creator>Sumei Liang</creator>
        
        <creator>Xinhua Fan</creator>
        
        <subject>Two-step Strategy; Audio classification; MFCC; Frame energy; Naive Bayes; Support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Audio content classification is an interesting and significant issue. Audio classification techniques have two basic parts: audio feature extraction and a classifier. In general, audio content classification methods first transcribe the original audio into text, then use the recognized text for classification. But the text recognition rate is not high; some words that are useful for classification are recognized by mistake, so the classification effect is not ideal. To solve these problems, this paper proposes a new, effective audio classification method based on a two-step strategy. In the first step, features are extracted using improved mutual information and classified with a Na&#239;ve Bayes classifier. After the first-step classification, an unreliable area is determined, and samples with features in this area go on to be classified in the second step. In the second step, textual features extracted with the CHI statistic method are used to build a text feature space model. Then audio features, including MFCC and frame energy, are combined with the text features to build a new feature vector space model. Finally, the new feature vector space model is classified using a Support Vector Machine (SVM) classifier. Experiments show that the two-step strategy classification method achieves great classification performance, with an accuracy rate of 97.2%.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_7-Audio_Content_Classification_Method_Research_Based_on_Two.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Web Mining Approach for Personalized E-Learning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050306</link>
        <id>10.14569/IJACSA.2014.050306</id>
        <doi>10.14569/IJACSA.2014.050306</doi>
        <lastModDate>2014-03-31T11:46:35.9700000+00:00</lastModDate>
        
        <creator>Manasi Chakurkar</creator>
        
        <creator>Prof.Deepa Adiga</creator>
        
        <subject>Web usage mining; web content mining; web personalization; e-learning system; Lingo; HITS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Web mining plays a very important role in E-learning systems. In a personalized E-learning system, users customize the learning environment based on personal choices. In a general search process, the hyperlink with the maximum number of hits is displayed first. To make a personalized system, the history of every user needs to be saved in the form of user logs. In this paper we present an architecture that uses Web mining for Web personalization. The proposed system provides a new approach combining web usage mining, the HITS algorithm and web content mining. It combines HITS results on user logs and web page contents with a clustering algorithm called the Lingo clustering algorithm. This combined approach gives better performance than a usage-based system. Further, the results are compared according to metrics computed from the previous and the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_6-A_Web_Mining_Approach_for_Personalized_E-Learning_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interventional Spasticity Management for Enhancing Patient – Physician Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050305</link>
        <id>10.14569/IJACSA.2014.050305</id>
        <doi>10.14569/IJACSA.2014.050305</doi>
        <lastModDate>2014-03-31T11:46:35.9370000+00:00</lastModDate>
        
        <creator>Chiyuri NAGAYAMA</creator>
        
        <subject>Electromyography; Spasticity; Internal Modeling;Viscoelasticity; Stroke Rehabilitation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>Stroke is the third most common cause of death in the Western world, behind heart disease and cancer, and accounts for over half of all neurologic admissions to community hospitals. Spasticity is commonly defined as excessive motor activity characterized by a velocity-dependent increase in tonic stretch reflexes. It is often associated with exaggerated tendon jerks, and is often accompanied by abnormal cutaneous and autonomic reflexes, muscle weakness, lack of dexterity, fatigability, and co-contraction of agonist and antagonist muscles. It is a common complication of central nervous system disorders, including stroke, traumatic brain injury, cerebral palsy, multiple sclerosis, anoxic brain injury, spinal cord injury, primary lateral sclerosis, and hereditary spastic hemiparesis. Leg muscle activation during locomotion is produced by spinal neuronal circuits within the spinal cord, the spinal pattern generator [central pattern generator (CPG)]. For the control of human locomotion, afferent information from a variety of sources within the visual, vestibular, and proprioceptive systems is utilized by the CPGs. Findings of this research can be applied to older adults in longitudinal home care who suffer spasticity caused by stroke.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_5-Interventional_Spasticity_Management_for_Enhancing_Patient.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed programming using AGAPIA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050304</link>
        <id>10.14569/IJACSA.2014.050304</id>
        <doi>10.14569/IJACSA.2014.050304</doi>
        <lastModDate>2014-03-31T11:46:35.9070000+00:00</lastModDate>
        
        <creator>Ciprian I. Paduraru</creator>
        
        <subject>patterns; parallel;  distributed; AGAPIA; fork; join; control; scan; wavefront;  map;  reduction;  pipeline;  scatter;  decomposition;  gather</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>As distributed applications became more commonplace and more sophisticated, new programming languages and models for distributed programming were created. The main goal of most of these languages was to simplify the development process by providing higher expressivity. This paper presents another programming language for distributed computing named AGAPIA. Its main purpose is to provide increased expressiveness while keeping performance close to that of a core programming language. To demonstrate its capabilities, the paper shows implementations of some well-known patterns specific to distributed programming, along with a comparison to the corresponding MPI implementations. A complete application is presented by combining a few patterns. By taking advantage of the transparent communication model and of high-level statements and patterns intended to simplify the development process, the implementation of distributed programs becomes modular, easier to write, clearer, and closer to the original solution formulation.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_4-Distributed_programming_using_AGAPIA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Framework for Improving Big Data Analysis Using Mobile Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050303</link>
        <id>10.14569/IJACSA.2014.050303</id>
        <doi>10.14569/IJACSA.2014.050303</doi>
        <lastModDate>2014-03-31T11:46:35.8900000+00:00</lastModDate>
        
        <creator>Youssef M. ESSA</creator>
        
        <creator>Gamal ATTIYA</creator>
        
        <creator>Ayman EL-SAYED</creator>
        
        <subject>Mobile Agent; JADE; Big Data Analysis; HDFS; Fault Tolerance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>The rising number of applications serving millions of users and dealing with terabytes of data need faster processing paradigms. Recently, there is growing enthusiasm for the notion of big data analysis. Big data analysis has become a very important aspect for growing productivity, reliability and quality of services (QoS). Processing big data using a single powerful machine is not an efficient solution. So, companies have focused on using Hadoop software for big data analysis. This is because Hadoop is designed to support parallel and distributed data processing. Hadoop provides a distributed file processing system that stores and processes a large scale of data. It enables fault tolerance by replicating data on three or more machines to avoid data loss. Hadoop is based on the client-server model and uses a single master machine called the NameNode. However, Hadoop has several drawbacks affecting its performance and reliability in big data analysis. In this paper, a new framework is proposed to improve big data analysis and overcome specified drawbacks of Hadoop. These drawbacks are task replication, the centralized node and node failures. The proposed framework is called MapReduce Agent Mobility (MRAM). MRAM is developed using mobile agents and the MapReduce paradigm under the Java Agent Development Framework (JADE).</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_3-New_Framework_for_Improving_Big_Data_Analysis_Using_Mobile_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Throughput Flow Constraint Theorem and its Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050302</link>
        <id>10.14569/IJACSA.2014.050302</id>
        <doi>10.14569/IJACSA.2014.050302</doi>
        <lastModDate>2014-03-31T11:46:35.8600000+00:00</lastModDate>
        
        <creator>Michael T. Todinov</creator>
        
        <subject>networks with disturbed flows; congestion; decongestion; maximum throughput flow; telecommunication networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>The paper states and proves an important result related to the theory of flow networks with disturbed flows: “the throughput flow constraint in any network is always equal to the throughput flow constraint in its dual network”.
After the failure or congestion of several edges in the network, the throughput flow constraint theorem provides the basis of a very efficient algorithm for determining the edge flows which correspond to the optimal throughput flow from sources to destinations, which is the throughput flow achieved with the smallest amount of generation shedding from the sources.
In the case where a failure of an edge causes a loss of the entire flow through the edge, the throughput flow constraint theorem permits the calculation of the new maximum throughput flow to be done in O(m) time, where m is the number of edges in the network. In this case, the new maximum throughput flow is calculated by inspecting the network only locally, in the vicinity of the failed edge, without inspecting the rest of the network.
The superior average running time of the presented algorithm makes it particularly suitable for decongesting overloaded transmission links of telecommunication networks in real time. In the paper, it is also shown that the deliberate choking of flows along overloaded edges, leading to a generation of momentary excess and deficit flow, provides a very efficient mechanism for decongesting overloaded branches.
</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_2-The_Throughput_Flow_Constraint_Theorem_and_its_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Auditing Hybrid IT Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050301</link>
        <id>10.14569/IJACSA.2014.050301</id>
        <doi>10.14569/IJACSA.2014.050301</doi>
        <lastModDate>2014-03-31T11:46:35.7970000+00:00</lastModDate>
        
        <creator>Georgiana Mateescu</creator>
        
        <creator>Marius Vladescu</creator>
        
        <subject>audit cloud computing; cloud service safety; cloud governance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(3), 2014</description>
        <description>This paper presents a personal approach to auditing hybrid IT environments consisting of both on-premise and on-demand services and systems. The analysis is performed from both safety and profitability perspectives, and it aims to offer strategy, technical and business teams a representation of the value added by the cloud programme within the company’s portfolio. Starting from the importance of IT Governance in actual business environments, we present in the first section the main principles that drive the technology strategy in order to maximize the value added by IT assets in business products. Section two summarizes the frameworks leveraged by our approach in order to implement the safety and profitability computation algorithms described in the third section. The paper concludes with the benefits of our frameworks and presents future developments.</description>
        <description>http://thesai.org/Downloads/Volume5No3/Paper_1-Auditing_Hybrid_IT_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Routing Path  Decision Method Using Mobile Robot Based on Distance Between Sensor Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030305</link>
        <id>10.14569/IJARAI.2014.030305</id>
        <doi>10.14569/IJARAI.2014.030305</doi>
        <lastModDate>2014-03-10T06:31:20.6900000+00:00</lastModDate>
        
        <creator>Yuta Koike</creator>
        
        <creator>Kei Sawai</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Wireless Sensor Networks; Mobile Robot Tele-Operation; Maintaining Throughput; Routing Path</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(3), 2014</description>
        <description>We propose a Robot Wireless Sensor Network (RWSN) management method for maintaining wireless communication connectivity during mobile robot teleoperation, taking into account the distance between sensor nodes. Recent studies on reducing disaster damage focus on gathering disaster-area information in underground spaces. Since information-gathering activities in post-disaster underground spaces present a high risk of personal injury from secondary disasters, many rescue workers have been injured or killed in the past. Against this background, gathering information with mobile robots has been widely discussed. However, maintaining wireless communication infrastructure for teleoperation of a mobile rescue robot in a post-disaster underground space is difficult for various reasons. We have therefore been investigating a method for constructing wireless communication infrastructure for rescue-robot teleoperation using an RWSN. In this paper, we evaluate the proposed routing-path-changing method using an RWSN in a field operation test, in order to confirm its communication connectivity and the throughput of End-to-End communications over the constructed network.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No3/Paper_5-A_Study_of_Routing_Path_Decision_Method_Using_Mobile_Robot_Based_on_Distance_Between_Sensor_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimum Band and Band Combination for Retrieving Total Nitrogen, Water, Fiber Content in Tealeaves Through Remote Sensing Based on Regressive Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030304</link>
        <id>10.14569/IJARAI.2014.030304</id>
        <doi>10.14569/IJARAI.2014.030304</doi>
        <lastModDate>2014-03-10T06:31:20.6730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>regressive analysis; total nitrogen content; tealeaves; fiber content; water content</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(3), 2014</description>
        <description>The optimum band and band combination for retrieving total nitrogen, water, and fiber content in tealeaves from remote sensing data are investigated based on regression analysis. Using actual measurements of total nitrogen, fiber, and water content in tealeaves, together with remotely sensed visible-to-near-infrared reflectance data at 5nm wavelength steps and ASTER/VNIR data from the Terra satellite, regression analysis is conducted. As a result, it is found that 1045nm is the best wavelength for retrieving total nitrogen content, 945nm is the best for fiber content, and 545nm is the best for water content. For band combinations, the 350 and 750nm pair is best for estimating total nitrogen content, the 535 and 720nm pair is best for fiber content, and the 545 and 760nm pair is best for water content.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No3/Paper_4-Optimum_Band_and_Band_Combination_for_Retrieving_Total_Nitrogen.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Protein Content in Rice Crop and Nitrogen Content in Rice Leaves Through Regression Analysis with NDVI Derived from Camera Mounted Radio-Control Helicopter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030303</link>
        <id>10.14569/IJARAI.2014.030303</id>
        <doi>10.14569/IJARAI.2014.030303</doi>
        <lastModDate>2014-03-10T06:31:20.6600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Masanori Sakashita</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Yuko Miura</creator>
        
        <subject>nitrogen content; NDVI; protein content; rice paddy field; remote sensing; regression analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(3), 2014</description>
        <description>Estimation of protein content in rice crops and nitrogen content in rice leaves through regression analysis with the Normalized Difference Vegetation Index (NDVI) derived from a camera mounted on a radio-controlled helicopter is proposed. Through experiments at rice paddy fields situated at the Saga Prefectural Research Institute of Agriculture (SPRIA) in Saga city, Japan, it is found that protein content in rice crops is highly correlated with NDVI acquired with a visible and Near Infrared (NIR) camera mounted on the radio-controlled helicopter. Nitrogen content in rice leaves is found to be correlated with NDVI as well. Protein content in rice crops is inversely related to rice taste; therefore, rice crop quality can be evaluated through NDVI observation of rice paddy fields.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No3/Paper_3-Estimation_of_Protein_Content_in_Rice_Crop_and_Nitrogen_Content_in_Rice_Leaves_Through_Regression_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposal of Tabu Search Algorithm Based on Cuckoo Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030302</link>
        <id>10.14569/IJARAI.2014.030302</id>
        <doi>10.14569/IJARAI.2014.030302</doi>
        <lastModDate>2014-03-10T06:31:20.6430000+00:00</lastModDate>
        
        <creator>Ahmed T. Sadiq Al-Obaidi</creator>
        
        <creator>Ahmed Badre Al-Deen Majeed</creator>
        
        <subject>Tabu Search; Cuckoo Search; Heuristic Search; Neighborhood Search; Optimization; 4-Color Map; TSP</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(3), 2014</description>
        <description>This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), that reduces the effect of known TS problems. The proposed algorithm provides more diversity among the candidate solutions of TS. Two case studies, the 4-Color Map and the Traveling Salesman Problem, have been solved using the proposed algorithm. The proposed algorithm gives good results compared with the original: it requires fewer iterations and settles on local-minimum or non-optimal solutions less often.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No3/Paper_2-Proposal_of_Tabu_Search_Algorithm_Based_on_Cuckoo.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Deviation Based Optimization Method for Estimation of Total Column CO2 Measured with Ground Based Fourier Transformation Spectrometer: FTS Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030301</link>
        <id>10.14569/IJARAI.2014.030301</id>
        <doi>10.14569/IJARAI.2014.030301</doi>
        <lastModDate>2014-03-10T06:31:20.5330000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Takuya Fukamachi</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Shuji Kawakami</creator>
        
        <creator>Hirofumi Ohyama</creator>
        
        <subject>FTS; carbon dioxide; methane; sensitivity analysis; error analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(3), 2014</description>
        <description>A numerical-derivative-based optimization method for estimating total column CO2 measured with ground-based Fourier Transform Spectrometer (FTS) data is proposed. Through experiments with aircraft-based sample return data and ground-based FTS data, it is found that the proposed method is superior to the conventional Levenberg-Marquardt nonlinear least squares method with analytic derivatives of the Jacobian and Hessian around the current solution. Moreover, the proposed method shows better accuracy and requires fewer computing resources than the internationally used TCCON method for estimating total column CO2 from FTS data. It is also found that total column CO2 depends on weather conditions, in particular wind speed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No3/Paper_1-Numerical_Deviation_Based_Optimization_Method_for_Estimation_of_Total_Column_CO2_Measured.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Receiver-Assisted Localization Based on Selective Coordinates in Approach to Estimating Proximity for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050224</link>
        <id>10.14569/IJACSA.2014.050224</id>
        <doi>10.14569/IJACSA.2014.050224</doi>
        <lastModDate>2014-02-28T16:41:46.1700000+00:00</lastModDate>
        
        <creator>Zulfazli Hussin</creator>
        
        <creator>Yukikazu Nakamoto</creator>
        
        <subject>Localization; proximity estimation; genetic algorithm; wireless sensor networks; received signal strength</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Received signal strength (RSS)-based mobile localization has become popular because it offers inexpensive localization solutions over large areas. Compared with other physical properties of radio signals, RSS is an attractive basis for localization because it can easily be obtained through existing wireless devices without any additional hardware. Although RSS is not considered a good choice for estimating physical distances, it provides useful distance-related information for inferring and indicating connectivity among neighboring nodes. RSS-based localization is generally divided into range-based and range-free approaches. Range-based localization can achieve excellent accuracy but is too costly to apply to large-scale networks, while range-free localization methods are regarded as cost-effective solutions for localization in sensor networks. However, these localizations are subject to radio-pattern effects that cause variations in the radial distance estimates between nodes, and it is challenging to select an efficient RSS value that yields small variations in the radial distance in wireless environments. We propose a method of Mobile Localization using the Proximities of Selective coordinates (MoLPS) that localizes target nodes by using information on the proximities between target nodes and mobile receivers as a metric for estimating the locations of target nodes. We ran a simulation experiment to assess the performance of MoLPS with 100 target nodes randomly deployed along a sensory field boundary, and found that the localization error was reduced to below 2m for more than 80% of the target nodes.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_24-Mobile_Receiver-Assisted_Localization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Saving EDF Scheduling for Wireless Sensors on Variable Voltage Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050223</link>
        <id>10.14569/IJACSA.2014.050223</id>
        <doi>10.14569/IJACSA.2014.050223</doi>
        <lastModDate>2014-02-28T16:41:44.0830000+00:00</lastModDate>
        
        <creator>Hussein EL Ghor</creator>
        
        <creator>El-Hadi M Aggoune</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Advances in micro technology have led to the development of miniaturized sensor nodes with wireless communication that perform several real-time computations. These systems are deployed wherever it is not possible to maintain a wired network infrastructure or to recharge/replace batteries, and the goal is then to prolong the lifetime of the system as much as possible. In this work, we aim to modify the Earliest Deadline First (EDF) scheduling algorithm to minimize energy consumption using Dynamic Voltage and Frequency Selection. To this end, we propose an Energy Saving EDF (ES-EDF) algorithm that is capable of stretching the worst-case execution time of tasks as much as possible without violating deadlines. We prove that ES-EDF is optimal in minimizing processor energy consumption and maximum lateness, and we derive an upper bound on the processor energy saving. In order to demonstrate the benefits of our algorithm, we evaluate it by means of simulation. Experimental results show that ES-EDF outperforms the EDF and Enhanced EDF (E-EDF) algorithms in terms of both the percentage of feasible task sets and energy savings.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_23-Energy_Saving_EDF_Scheduling_for_Wireless_Sensors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Complexity of Network Design for Private Communication and the P-vs-NP Question</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050222</link>
        <id>10.14569/IJACSA.2014.050222</id>
        <doi>10.14569/IJACSA.2014.050222</doi>
        <lastModDate>2014-02-28T16:41:41.9730000+00:00</lastModDate>
        
        <creator>Stefan Rass</creator>
        
        <subject>Complexity; NP; Privacy; Security; Game Theory; Graph Augmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>We investigate infeasibility issues arising in network design for information-theoretically secure cryptography. In particular, we consider the problem of perfectly private communication and formally relate it to graph augmentation problems and the P-vs-NP question. Based on a game-theoretic privacy measure, we consider two optimization problems related to secure infrastructure design with constraints on computational effort and a limited budget to build a transmission network. It turns out that information-theoretic security, although not drawing its strength from computational infeasibility, can still run into complexity-theoretic difficulties at the stage of physical network design. Even worse, if we measure (quantify) secrecy by the probability of information leakage, we can prove that approximating a network design of maximal security is computationally equivalent to solving the same problem exactly, both of which are in turn equivalent to asserting that P = NP. In other words, the death of public-key cryptosystems upon P = NP may become the birth of feasible network design algorithms for information-theoretically confidential communication.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_22-Complexity_of_Network_Design_for_Private.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TCP I-Vegas in Mobile-IP Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050221</link>
        <id>10.14569/IJACSA.2014.050221</id>
        <doi>10.14569/IJACSA.2014.050221</doi>
        <lastModDate>2014-02-28T16:41:40.8400000+00:00</lastModDate>
        
        <creator>Nitin Jain</creator>
        
        <creator>Dr. Neelam Srivastava</creator>
        
        <subject>TCP Vegas; Mobile-IP; NS-2</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Mobile Internet Protocol (Mobile-IP or MIP) provides hosts with the ability to change their point of attachment to the network without compromising their ability to communicate. However, when TCP Vegas is used over a MIP network, its performance degrades because it may respond to a handoff by invoking its congestion control algorithm. TCP Vegas is sensitive to changes in Round-Trip Time (RTT) and may interpret an increased RTT as a sign of network congestion, because it cannot differentiate whether the increase is due to a route change or to actual congestion. This paper presents a new and improved version of conventional TCP Vegas, which we name TCP I-Vegas (where “I” stands for Improved). Vegas performs well compared with Reno, but its performance degrades when sharing bandwidth with Reno. I-Vegas has been designed so that whenever TCP variants such as Reno must share bandwidth with a Vegas-style protocol, using I-Vegas instead of Vegas avoids the losses Vegas would otherwise have to bear. We compared the performance of I-Vegas with Vegas in a MIP environment using the Network Simulator (NS-2). Simulation results show that I-Vegas performs better than Vegas in terms of throughput and congestion window behavior.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_21-TCP_I-Vegas_in_Mobile-IP_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Tentative Analysis of the Rectangular Horizontal-slot Microstrip Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050220</link>
        <id>10.14569/IJACSA.2014.050220</id>
        <doi>10.14569/IJACSA.2014.050220</doi>
        <lastModDate>2014-02-28T16:41:38.7500000+00:00</lastModDate>
        
        <creator>Md. Tanvir Ishtaique ul Huque</creator>
        
        <creator>Md. Imran Hasan</creator>
        
        <subject>GEMS; Microstrip antenna; Rectangular horizontal slot antenna; Return loss; Slot antenna; antenna </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>In this paper, we present a new type of microstrip antenna, termed the rectangular horizontal-slot patch antenna. Our main aim is to design a novel antenna that combines structural simplicity with higher return loss. We followed a tentative approach that led to an exceptional result, better than the conventional design, and the experimental outcomes yield guidelines for further practice. All of the antennas were analyzed using the GEMS (General Electro-Magnetic Solver) commercial software from 2COMU (Computer and Communication Unlimited).</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_20-A_Tentative_Analysis_of_the_Rectangular_Horizontal-slot_Microstrip_Antenna_.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Search Based on Keyword Spotting in Arabic Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050219</link>
        <id>10.14569/IJACSA.2014.050219</id>
        <doi>10.14569/IJACSA.2014.050219</doi>
        <lastModDate>2014-02-28T16:41:37.6170000+00:00</lastModDate>
        
        <creator>Mostafa Awaid</creator>
        
        <creator>Sahar A. Fawzi</creator>
        
        <creator>Ahmed H. Kandil</creator>
        
        <subject>Speech Recognition; Keyword Spotting; Template Matching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Keyword spotting is an important application of speech recognition. This research introduces a keyword spotting approach to perform audio searching for uttered words in Arabic speech. The matching process depends on the utterance nucleus, which is insensitive to its context. To spot the targeted utterances, the matched nuclei are expanded to cover the whole utterance. Applying this approach to the Quran and standard Arabic yields promising results. To improve this spotting approach, it is combined with a text search when a transcript exists. This can be applied to the Quran, as there is exact correspondence between the audio and text files of each verse. The developed approach starts with a text search to identify the verses that include the target utterance(s). For each located verse, the occurrence(s) of the target utterance are determined. The targeted utterance (the reference) is manually segmented from a located verse, and keyword spotting is then performed for the extracted reference against the corresponding audio file. The accuracy of the spotted utterances reached 97%. The experiments showed that the combined text and audio search reduced the search time by 90% compared with audio search alone on the same content. The developed approach has also been applied to non-transcribed audio files (sermons and news) to search for chosen utterances, with promising results: spotting accuracy was around 84% for sermons and 88% for news.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_19-Audio_Search_Based_on_Keyword_Spotting_in_Arabic_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigating the combination of structural and textual information for multimedia retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050218</link>
        <id>10.14569/IJACSA.2014.050218</id>
        <doi>10.14569/IJACSA.2014.050218</doi>
        <lastModDate>2014-02-28T16:41:35.5400000+00:00</lastModDate>
        
        <creator>Sana FAKHFAKH</creator>
        
        <creator>Mohamed TMAR</creator>
        
        <creator>Walid MAHDI</creator>
        
        <subject>Geometric distance; multimedia retrieval; element; structure; document modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>The expansion of structured information in different applications introduces new ambiguity into multimedia retrieval in semi-structured documents. In this paper we investigate the combination of textual and structural context for multimedia retrieval in XML documents, and we present an indexing model that combines textual and structural information. We propose a geometric method that implicitly uses the textual and structural context of XML elements, and we are particularly interested in improving the effectiveness of various structural factors for multimedia retrieval. Using a geometric metric, we can represent the structural information of an XML document with a vector for each element. Given a textual query, our model combines the scores obtained from each source of evidence and returns a list of relevant retrieved multimedia elements. Experimental evaluation is carried out using the INEX Ad Hoc Task 2007 and the ImageCLEF Wikipedia Retrieval Task 2010. The results show that combining the scores of the textual and structural modalities significantly improves results compared with using a single modality.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_18-Investigating_the_combination_of_structural_and_textual_information_about_multimedia_retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RPOA Model-Based Optimal Resource Provisioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050217</link>
        <id>10.14569/IJACSA.2014.050217</id>
        <doi>10.14569/IJACSA.2014.050217</doi>
        <lastModDate>2014-02-28T16:41:33.4100000+00:00</lastModDate>
        
        <creator>Noha El. Attar</creator>
        
        <creator>Samy Abd El -Hafeez</creator>
        
        <creator>Wael Awad</creator>
        
        <creator>Fatma Omara</creator>
        
        <subject>Cloud Computing; Resource Provision Data Communication; Particle Swarm Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Optimal utilization of resources is the core of the provisioning process in cloud computing. Sometimes the local resources of a data center are not adequate to satisfy users’ requirements, so providers need to create several data centers in different geographical areas around the world and spread users’ applications across these resources to satisfy both the service providers’ and the customers’ QoS requirements. Considering the expansion of resources and applications, transmission cost and time have to be treated as significant factors in the allocation process. In our previous paper, a Resource Provision Optimal Algorithm (RPOA) based on Particle Swarm Optimization (PSO) was introduced to find near-optimal resource utilization subject to the customer’s budget and deadline. This paper presents an enhancement to the RPOA algorithm that additionally incorporates data transfer time and cost, alongside the customer’s budget and deadline, into the performance measurement.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_17-RPOA_Model-Based_Optimal_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OLAWSDS: An Online Arabic Web Spam Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050216</link>
        <id>10.14569/IJACSA.2014.050216</id>
        <doi>10.14569/IJACSA.2014.050216</doi>
        <lastModDate>2014-02-28T16:41:31.2630000+00:00</lastModDate>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Heider A. Wahsheh</creator>
        
        <creator>Izzat M. Alsmadi</creator>
        
        <subject>Arabic Web spam; content-based; link-based; Information Retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>For marketing purposes, some Website designers and administrators use illegal Search Engine Optimization (SEO) techniques to optimize the ranking of their Web pages and mislead search engines. Some Arabic Web pages use both content and link features to artificially increase the rank of their Web pages in the Search Engine Results Pages (SERPs). This study enhances previous work in this field. It includes the design and implementation of an online Arabic Web spam detection system, based on algorithms and mathematical foundations, which detects Arabic content and link Web spam using a tree of spam detection conditions, in addition to users’ feedback gathered through a custom Web browser. Users can participate in the decision about any Web page through their feedback, judging whether the Arabic Web pages shown in the browser are relevant to their particular queries. The proposed system uses the content and link features extracted from Arabic Web pages to determine whether to label each Web page as spam or non-spam, and it attempts to learn from the users’ feedback to automatically improve its performance. Statistical analysis is adopted in this study to evaluate the proposed system. The Statistical Package for the Social Sciences (SPSS) software is used for the evaluation, treating the users’ feedback as the dependent variable and the Arabic content and link features as independent variables. The statistical analysis with SPSS applies a variety of tests, such as the analysis of variance (ANOVA), which shows the relationships between the dependent and independent variables in the dataset and thereby supports sound decisions and results.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_16-OLAWSDS_An_Online_Arabic_Web_Spam_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Software Architecture for Medical Domain Using Pop Counts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050215</link>
        <id>10.14569/IJACSA.2014.050215</id>
        <doi>10.14569/IJACSA.2014.050215</doi>
        <lastModDate>2014-02-28T16:41:29.1600000+00:00</lastModDate>
        
        <creator>UMESH BANODHA</creator>
        
        <creator>KANAK SAXENA</creator>
        
        <subject>Software Architecture; Quality Attributes; Pop count; medical process reengineering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Over the past few decades, the complexity of software in almost every domain has increased significantly. The aim of this paper is to provide an approach that is not only feasible but also decision-oriented in the medical domain. It focuses on careful planning and organization amid continuous process improvements in software and hardware technology, since such change brings considerable trouble to system development and maintenance. We have used the pop count method to develop a dynamic software architecture, incorporating quality attributes, in order to determine the severity level of patients with any disease according to the specialist’s perception. This is useful for prioritizing treatment and maintaining regular monitoring of patients even in the specialist’s absence. The method (model) was further tested on 25 symptoms of 100 patients, containing no dichotomous data, and statistical evaluation (yielding an almost perfect classification) showed that the architecture conforms to medical software architecture quality requirements.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_15-Dynamic_Software_Architecture_for_Medical_Domain_Using_Pop_Counts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Method Based on Multi-Threshold of Edges Detection in Digital Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050214</link>
        <id>10.14569/IJACSA.2014.050214</id>
        <doi>10.14569/IJACSA.2014.050214</doi>
        <lastModDate>2014-02-28T16:41:26.2370000+00:00</lastModDate>
        
        <creator>Amira S. Ashour</creator>
        
        <creator>Mohamed A. El-Sayed</creator>
        
        <creator>Shimaa E. Waheed</creator>
        
        <creator>S. Abdel-Khalek</creator>
        
        <subject>image processing; multi-threshold; edges detection; clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Edges characterize object boundaries in images and are therefore useful for segmentation, registration, feature extraction, and identification of objects in a scene. Edge detection is used to classify, interpret and analyze digital images in various fields of application, such as robotics, sensitive military applications, optical character recognition, infrared gait recognition, automatic target recognition, detection of video changes, real-time video surveillance, medical images, and scientific research images. There are different methods of edge detection in digital images, each suited to a particular type of image, but most of these methods have defects in the resulting quality. Reduced computation time is needed in most time-critical applications, especially with large images, which require more processing time. Thresholding is one of the powerful methods used for edge detection. In this paper, we propose a new method based on different multi-threshold values using Shannon entropy to solve the problems of the traditional methods. It minimizes computation time while producing high-quality output edge images. A further benefit is that the method is easy to implement.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_14-New_Method_Based_on_Multi-Threshold_of_Edges_Detection_in_Digital_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Rest Facility Information Exchange System by Utilizing Delay Tolerant Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050213</link>
        <id>10.14569/IJACSA.2014.050213</id>
        <doi>10.14569/IJACSA.2014.050213</doi>
        <lastModDate>2014-02-28T16:41:25.0930000+00:00</lastModDate>
        
        <creator>Masahiro Ono</creator>
        
        <creator>Kei Sawai</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Delay Tolerant Network; rest facility; disaster; communication infrastructure; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>In this paper, we propose a system for exchanging information about temporary rest facilities among the many people unable to get home after a disaster, utilizing a Delay Tolerant Network (DTN). When public transportation services are interrupted by a disaster, these people try to get home on foot while resting at such facilities. However, it is difficult for them to obtain information about temporary rest facilities set up hurriedly, because communication infrastructures in the disaster area are disconnected by the damage. Therefore, we propose a method for people to exchange this information mutually using mobile devices via DTN in order to diffuse the information. With DTN, people can communicate with one another using mobile devices and use a rest facility on the basis of the exchanged information even when the communication infrastructures are disconnected. We then develop mobile application software to exchange rest facility information among people via DTN. To evaluate the application, we verified its communication performance in practical experiments. The experimental results showed that the developed application had sufficient performance to exchange rest facility information via DTN. We then verified the diffusivity of the rest facility information in a network simulation. The simulation results showed that the rest facility information was diffused widely and effectively.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_13-Development_of_Rest_Facility_Information_Exchange_System_by_Utilizing_Delay_Tolerant_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Early Development of UVM based Verification Environment of Image Signal Processing Designs using TLM Reference Model of RTL</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050212</link>
        <id>10.14569/IJACSA.2014.050212</id>
        <doi>10.14569/IJACSA.2014.050212</doi>
        <lastModDate>2014-02-28T16:41:22.9800000+00:00</lastModDate>
        
        <creator>Abhishek Jain</creator>
        
        <creator>Sandeep Jana</creator>
        
        <creator>Dr. Hima Gupta</creator>
        
        <creator>Krishna Kumar</creator>
        
        <subject>SystemVerilog; SystemC; Transaction Level Modeling; Universal Verification Methodology (UVM); Processor model; Universal Verification Component (UVC); Reference Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>With the semiconductor industry trend of “smaller is better”, taking an idea to a final product, adding more innovation to the product portfolio, and yet remaining competitive and profitable are criteria that create pressure and drive the need for ever more innovation in CAD flow, process management and the project execution cycle. Project schedules are very tight, and achieving first-silicon success is key for projects. This necessitates quicker verification with a better coverage matrix. Quicker verification requires early development of the verification environment with wider test vectors, without waiting for RTL to be available.
In this paper, we present a novel approach to early development of a reusable multi-language verification flow by addressing four major verification activities:

1. Early creation of an executable specification
2. Early creation of the verification environment
3. Early development of test vectors
4. Better and increased re-use of blocks

Although this paper focuses on early development of a UVM-based verification environment for image signal processing designs using a TLM reference model of RTL, the same concept can be extended to non-image-signal-processing designs.
</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_12-Early_Development_of_UVM_based_Verification_Environment_of_Image_Signal_Processing_Designs_using_TLM_Reference_Model_of_RTL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TCP- Costco Reno: New Variant by Improving Bandwidth Estimation to adapt over MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050211</link>
        <id>10.14569/IJACSA.2014.050211</id>
        <doi>10.14569/IJACSA.2014.050211</doi>
        <lastModDate>2014-02-28T16:41:20.8570000+00:00</lastModDate>
        
        <creator>Prakash B. Khelage</creator>
        
        <creator>Dr. Uttam D. Kolekar</creator>
        
        <subject>mobile ad hoc network (MANET); congestion control; link failure; signal loss; interference; retransmission timeout; RTT and BW estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>The Transmission Control Protocol (TCP) is the traditional, dominant, de facto standard transport-layer protocol of the TCP/IP protocol suite. It is designed to provide reliability and guarantee end-to-end delivery of data over unreliable networks. In practice, most TCP deployments have been carefully designed in the context of wired networks; ignoring the properties of wireless ad hoc networks can therefore lead to TCP implementations with poor performance. The problem with TCP and all its existing variants within MANETs lies in their inability to distinguish between the different causes of packet loss: whenever loss occurs, the traditional TCP congestion control algorithm assumes it is due to a congestion episode and reduces the sending parameters unnecessarily. Thus, TCP does not always behave optimally in the face of packet losses, which can cause network performance degradation and wasted resources. To adapt TCP to the mobile ad hoc environment, improvements based on RTT and bandwidth estimation techniques have been proposed in the literature to help TCP differentiate accurately between the different types of losses, but these still do not handle all the problems accurately and effectively. In this paper, the proposed TCP-Costco Reno, a new variant, accurately estimates the available bandwidth over mobile ad hoc networks and sets the sending rate accordingly to maximize utilization of available resources, thereby improving TCP performance over mobile ad hoc networks. The simulation results indicate an improvement in throughput in the interference, link failure and signal loss validation scenarios. Further, it shows the highest average throughput among the variants that are most successful over MANETs.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_11-TCP-Costco_Reno_New_Variant_by_Improving_Bandwidth_Estimation_to_adapt_over_MANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Different Hypervisors Performance in the Private Cloud with SIGAR Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050210</link>
        <id>10.14569/IJACSA.2014.050210</id>
        <doi>10.14569/IJACSA.2014.050210</doi>
        <lastModDate>2014-02-28T16:41:18.7500000+00:00</lastModDate>
        
        <creator>P. Vijaya Vardhan Reddy</creator>
        
        <creator>Dr. Lakshmi Rajamani</creator>
        
        <subject>CloudStack; Hypervisor; Management Server; Private Cloud; Virtualization Technology; SIGAR; Passmark </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>To make the cloud computing model practical and to provide essential characteristics such as rapid elasticity, resource pooling, on-demand access and measured service, two prominent technologies are required: the Internet and virtualization technology. Virtualization technology plays a major role in the success of cloud computing. A virtualization layer that supports multiple virtual machines above it by virtualizing hardware resources such as CPU, memory, disk and NIC is called a hypervisor. It is interesting to study how different hypervisors perform in the private cloud. Hypervisors come in paravirtualized, fully virtualized and hybrid flavors, and comparing them in a private cloud environment is a novel idea. This paper conducts different performance tests on three hypervisors, XenServer, ESXi and KVM, with results gathered using the SIGAR API (System Information Gatherer and Reporter) along with the Passmark benchmark suite. In the experiment, CloudStack 4.0.2 (open-source cloud computing software) is used to create a private cloud, in which the management server is installed on the Ubuntu 12.04 64-bit operating system. The hypervisors XenServer 6.0, ESXi 4.1 and KVM (Ubuntu 12.04) are installed as hosts in their respective clusters, and their performance has been evaluated in detail using the SIGAR framework, Passmark and NetPerf.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_10-Evaluation_of_Different_Hypervisors_Performance_in_the_Private_Cloud_with_SIGAR_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Greedy Algorithm for Load Balancing Jobs with Deadlines in a Distributed Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050209</link>
        <id>10.14569/IJACSA.2014.050209</id>
        <doi>10.14569/IJACSA.2014.050209</doi>
        <lastModDate>2014-02-28T16:41:16.6530000+00:00</lastModDate>
        
        <creator>Ciprian I. Paduraru</creator>
        
        <subject>scheduling; greedy; coordination; network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>One of the most challenging issues when dealing with distributed networks is the efficiency of job load balancing. This paper presents a novel algorithm for load balancing jobs that have a given deadline in a distributed network, assuming central coordination. The algorithm uses a greedy strategy for both global and local decision making: schedule a job as late as possible. It has increased overhead compared with other well-known methods, but its load balancing policy provides a better fit for jobs.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_9-A_Greedy_Algorithm_for_Load_Balancing_Jobs_with_Deadlines_in_a_Distributed_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New technique to insure data integrity for archival files storage (DIFCS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050207</link>
        <id>10.14569/IJACSA.2014.050207</id>
        <doi>10.14569/IJACSA.2014.050207</doi>
        <lastModDate>2014-02-28T16:41:12.4530000+00:00</lastModDate>
        
        <creator>Mohannad Najjar</creator>
        
        <subject>cryptography; hash functions; data integrity; authentication;  HMAC; file archiving</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>In this paper we develop an algorithm that increases the security of using the HMAC function (keyed-hash message authentication code) to ensure data integrity for exchanging archival files. The hash function is a very strong tool in information security. The algorithm we develop is safe and quick, and will allow the University of Tabuk (UT) authorities to be sure that archival document data cannot be changed or modified by unauthorized personnel during transfer over the network; it will also increase the efficiency of the network in which archived files are exchanged. The basic issues of hash functions and data integrity are presented as well.
In this research, the developed algorithm is shown to be effective and easy to implement, using the HMAC algorithm to guarantee data integrity for archival scanned documents in the document management system.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_7-New_technique_to_insure_data_integrity_for_archival_files_storage_(DIFCS).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SentiTFIDF – Sentiment Classification using Relative Term Frequency Inverse Document Frequency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050206</link>
        <id>10.14569/IJACSA.2014.050206</id>
        <doi>10.14569/IJACSA.2014.050206</doi>
        <lastModDate>2014-02-28T16:41:10.3430000+00:00</lastModDate>
        
        <creator>Kranti Ghag</creator>
        
        <creator>Ketan Shah</creator>
        
        <subject>Sentiment Classification; Term Weighting; Term Frequency; Term Presence; Document Vectors</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Sentiment classification refers to the computational techniques for classifying whether the sentiments of a text are positive or negative. Statistical techniques based on term presence and term frequency, using Support Vector Machines, are popularly used for sentiment classification. This paper presents an approach for classifying a term as positive or negative based on its proportional frequency count distribution and proportional presence count distribution across positively tagged documents in comparison with negatively tagged documents. Our approach is based on term weighting techniques that are used for information retrieval and sentiment classification. It differs significantly from these traditional methods due to our model of logarithmic differential term frequency and term presence distribution for sentiment classification. Terms with nearly equal distribution in positively tagged and negatively tagged documents were classified as Senti-stop-words and discarded. The proportional distribution at which a term is classified as a Senti-stop-word was determined experimentally. We evaluated the SentiTFIDF model by comparing it with state-of-the-art techniques for sentiment classification using the movie dataset.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_6-SentiTFIDF_Sentiment_Classification_using_Relative_Term_Frequency_Inverse_Document_Frequency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of window synchronisation in coherent optical ofdm system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050205</link>
        <id>10.14569/IJACSA.2014.050205</id>
        <doi>10.14569/IJACSA.2014.050205</doi>
        <lastModDate>2014-02-28T16:41:08.2500000+00:00</lastModDate>
        
        <creator>Sofien Mhatli</creator>
        
        <creator>Bechir Nsiri</creator>
        
        <creator>Mutasam Jarajreh</creator>
        
        <creator>Basma Hammami</creator>
        
        <creator>Rabah Attia</creator>
        
        <subject>COFDM; timing offset; time synchronization; training symbol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>In this paper we investigate the performance of a robust and efficient technique for frame/symbol timing synchronization in coherent optical OFDM. It uses a preamble consisting of only two training symbols, each with two identical parts, to achieve a reliable synchronization scheme. The performance of the timing offset estimator at correct and incorrect timing in coherent optical OFDM is compared in terms of the mean and variance of the timing offset, and finally we study the influence of the number of subcarriers and of chromatic dispersion.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_5-Performance_of_window_synchronisation_in_coherent_optical_ofdm_system.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bimodal Emotion Recognition from Speech and Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050204</link>
        <id>10.14569/IJACSA.2014.050204</id>
        <doi>10.14569/IJACSA.2014.050204</doi>
        <lastModDate>2014-02-28T16:41:05.9370000+00:00</lastModDate>
        
        <creator>Weilin Ye</creator>
        
        <creator>Xinghua Fan</creator>
        
        <subject>emotion recognition; acoustic features; textual features; decision level fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-seven acoustic features are extracted from the speech input. Two different classifiers, Support Vector Machines (SVMs) and a BP neural network, are adopted to classify the emotional states. In text analysis, we use a two-step classification method to recognize the emotional states. The final emotional state is determined based on the emotion outputs from the acoustic and textual analyses. In this paper we have two parallel classifiers for acoustic information and two serial classifiers for textual information, and the final decision is made by combining these classifiers via decision-level fusion. Experimental results show that the emotion recognition accuracy of the integrated system is better than that of either of the two individual approaches.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_4-Bimodal_Emotion_Recognition_from_Speech_and_Text.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secured Framework for Geographical Information Applications on Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050203</link>
        <id>10.14569/IJACSA.2014.050203</id>
        <doi>10.14569/IJACSA.2014.050203</doi>
        <lastModDate>2014-02-28T16:41:04.8170000+00:00</lastModDate>
        
        <creator>Mennatallah H. Ibrahim</creator>
        
        <creator>Hesham A. Hefny</creator>
        
        <subject>spatial data; geographic information systems; access control; authorization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>Current geographical information applications increasingly require managing spatial data through the Web. Users of geographical information applications need not only to display spatial data but also to modify them interactively. As a result, the security risks facing geographical information applications are also increasing. In this paper, a secured framework is proposed whose goal is to provide fine-grained access control for web-based geographic information applications. A case study is finally presented to demonstrate the proposed framework&#39;s feasibility and effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_3-A_Secured_Framework_for_Geographical_Information_Applications_on_Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction Strategy of Wireless Sensor Networks with Throughput Stability by Using Mobile Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050202</link>
        <id>10.14569/IJACSA.2014.050202</id>
        <doi>10.14569/IJACSA.2014.050202</doi>
        <lastModDate>2014-02-28T16:41:03.6600000+00:00</lastModDate>
        
        <creator>Kei Sawai</creator>
        
        <creator>Shigeaki Tanabe</creator>
        
        <creator>Hitoshi Kono</creator>
        
        <creator>Yuta Koike</creator>
        
        <creator>Ryuta Kunimoto</creator>
        
        <creator>Tsuyoshi Suzuki</creator>
        
        <subject>Wireless Sensor Networks; Rescue Robot Tele-Operation; Maintaining Throughput</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>We propose a wireless sensor network deployment strategy for constructing wireless communication infrastructures for a rescue robot, taking into account the throughput between sensor nodes (SNs). Recent studies on reducing disaster damage focus on gathering disaster-area information in underground spaces. Since information-gathering activities in post-disaster underground spaces present a high risk of personal injury from secondary disasters, many rescue workers have been injured or killed in the past. Against this background, gathering information by means of a rescue robot is widely discussed. However, no wireless communication infrastructure exists for tele-operation of a rescue robot in a post-disaster environment such as an underground space. Therefore, we have been discussing a method for constructing wireless communication infrastructures for remote operation of the rescue robot, built by the rescue robot itself. In this paper, we evaluate the proposed method in a field operation test and confirm that the constructed network maintains end-to-end communication connectivity and throughput.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_2-Construction_Strategy_of_Wireless_Sensor_Networks_with_Throughput_Stability_by_Using_Mobile_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Cellular Automata for Simulating and Assessing Urban Growth Scenario Based in Nairobi, Kenya</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050201</link>
        <id>10.14569/IJACSA.2014.050201</id>
        <doi>10.14569/IJACSA.2014.050201</doi>
        <lastModDate>2014-02-28T16:41:02.4300000+00:00</lastModDate>
        
        <creator>Kenneth Mubea</creator>
        
        <creator>Roland Goetzke</creator>
        
        <creator>Gunter Menz</creator>
        
        <subject>Urban Growth; Scenarios; Nairobi; Cellular automata; Simulation; sustainable development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(2), 2014</description>
        <description>This research explores urban growth scenarios for the city of Nairobi using a cellular-automata urban growth model (UGM). African cities have experienced rapid urbanization over the last decade due to increased population growth and high economic activity. We used multi-temporal Landsat imagery from 1976, 1986, 2000 and 2010 to investigate urban land-use changes in Nairobi. Our UGM used urban land-use data from 1986 and 2010, road data, slope data and an exclusion layer. A Monte Carlo technique was used for model calibration and the Multi Resolution Validation (MRV) technique for validation. Urban land-use was simulated up to the year 2030, when Kenya plans to attain Vision 2030. Three scenarios were explored in the urban modelling process: unmanaged growth with no restriction on environmental areas, managed growth with moderate protection, and managed growth with maximum protection of forest, agricultural areas and urban green space. Such alternative scenario development using a UGM is useful for planning purposes, helping ensure that sustainable development is achieved. The UGM provides quantitative, visual, spatial and temporal information that helps policy and decision makers make informed decisions.</description>
        <description>http://thesai.org/Downloads/Volume5No2/Paper_1-Applying_Cellular_Automata_for_Simulating_and_Assessing_Urban_Growth_Scenario_Based_in_Nairobi_Kenya.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Domain Image Steganography based on Security and Randomization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050121</link>
        <id>10.14569/IJACSA.2014.050121</id>
        <doi>10.14569/IJACSA.2014.050121</doi>
        <lastModDate>2014-02-21T06:33:11.7870000+00:00</lastModDate>
        
        <creator>Namita Tiwari</creator>
        
        <creator>Dr. Madhu Sandilya</creator>
        
        <creator>Dr. Meenu Chawla</creator>
        
        <subject>Spatial domain; PSNR; MSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>In the present digital scenario, secure communication is a prime requirement. Commonly, cryptography is used for this purpose. Another method related to cryptography that serves the same objective is steganography, the art of hiding information in some medium. Here we use an image as the cover medium. Spatial-domain image steganography has been used for this work because of its compatibility with images. The objective of the paper is to increase the capacity of hidden data in a way that maintains security. In the current work, the MSB of a randomly selected pixel is used as an indicator. Result analysis has been performed on the basis of different parameters such as PSNR, MSE and capacity.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_21-Spatial_Domain_Image_Steganography_based_on_Security_and_Randomization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using the Technology Acceptance Model in Understanding Academics’ Behavioural Intention to Use Learning Management Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050120</link>
        <id>10.14569/IJACSA.2014.050120</id>
        <doi>10.14569/IJACSA.2014.050120</doi>
        <lastModDate>2014-02-21T06:33:11.7730000+00:00</lastModDate>
        
        <creator>Saleh Alharbi</creator>
        
        <creator>Steve Drew</creator>
        
        <subject>Technology Acceptance Model; Higher education; Learning management systems; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Although e-learning is in its infancy in Saudi Arabia, most of the public universities in the country show a great interest in the adoption of learning and teaching tools. Determining the significance of a particular tool and predicting the success of its implementation is essential prior to its adoption. This paper presents and modifies the technology acceptance model (TAM) in an attempt to assist public universities, particularly in Saudi Arabia, in predicting the behavioural intention to use learning management systems (LMS). This study proposed a theoretical framework that includes the core constructs in TAM: namely, perceived ease of use, perceived usefulness, and attitude toward usage. Additional external variables were also adopted—namely, the lack of LMS availability, prior experience (LMS usage experience), and job relevance. The overall research model suggests that all mentioned variables either directly or indirectly affect the overall behavioural intention to use an LMS. Initial findings suggest the applicability of using TAM to measure the behavioural intention to use an LMS. Further, the results confirm the original TAM’s findings.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_20-Using_the_Technology_Acceptance_Model_in_Understanding_Academics’_Behavioural_Intention_to_Use_Learning_Management_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On A New Competitive Measure For Oblivious Routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050116</link>
        <id>10.14569/IJACSA.2014.050116</id>
        <doi>10.14569/IJACSA.2014.050116</doi>
        <lastModDate>2014-02-21T06:33:11.7570000+00:00</lastModDate>
        
        <creator>G&#225;bor N&#233;meth</creator>
        
        <subject>competitive ratio; oblivious routing; l^p norm; L^p norm; throughput polytope; feasible region; probability of congestion; hyper-spherical coordinates</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Oblivious routing algorithms use only locally available information at network nodes to forward traffic and are, as such, a plausible choice for distributed implementations. It is a natural desire to quantify the performance penalty we pay for this distributedness. Recently, R&#228;cke has shown that for general undirected graphs the competitive ratio is only O(log n); that is, the maximum congestion caused by the oblivious algorithm is within a logarithmic factor of the best possible congestion. And while the performance penalty is larger for directed networks (Azar gives an O(&#8730;n) lower bound), experiments on many real-world topologies show that it usually remains under 2. These competitive measures, however, are of worst-case type and therefore do not always give an adequate characterization.
The more different combinations of demands a routing algorithm can accommodate in the network without congestion, the better. Driven by this observation, in this paper we introduce a new competitive measure, the volumetric competitive ratio, as the measure of all admissible demands compared to the measure of demands routed without congestion. The main result of the paper is a general lower bound on the volumetric ratio; we also show a directed graph with O(1) competitive ratio that exhibits O(n) volumetric ratio.
Our numerical evaluations show that the competitivity of oblivious routing in terms of the new measure quickly vanishes even in relatively small commonplace topologies.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_16-On_A_New_Competitive_Measure_For_Oblivious_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Automated approach for Preventing ARP Spoofing Attack using Static ARP Entries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050114</link>
        <id>10.14569/IJACSA.2014.050114</id>
        <doi>10.14569/IJACSA.2014.050114</doi>
        <lastModDate>2014-02-21T06:33:11.7400000+00:00</lastModDate>
        
        <creator>Ahmed M. AbdelSalam</creator>
        
        <creator>Wail S.Elkilani</creator>
        
        <creator>Khalid M.Amin</creator>
        
        <subject>component; layer two attacks; ARP spoofing; ARP cache poisoning; Static ARP entries</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>ARP spoofing is one of the most dangerous attacks that threaten LANs. This attack stems from the way the ARP protocol works, since it is a stateless protocol. The ARP spoofing attack may be used to launch either denial of service (DoS) attacks or man-in-the-middle (MITM) attacks. Using static ARP entries is considered the most effective way to prevent ARP spoofing; yet, ARP spoofing mitigation methods depending on static ARP entries have major drawbacks. In this paper, we propose a scalable technique to prevent ARP spoofing attacks that automatically configures static ARP entries. Every host in the local network will have a protected, non-spoofed ARP cache. The technique operates in both static and DHCP-based addressing schemes, and its scalability allows protecting a large number of users without any overhead on the administrator. A performance study of the technique has been conducted using a real network. The measurement results show that a client needs no more than one millisecond to register itself for a protected ARP cache, and that the server can block any attacker in just a few microseconds under heavy traffic.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_14-An_Automated_approach_for_Preventing_ARP_Spoofing_Attack_using_Static_ARP_Entries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spectrum Sharing Security and Attacks in CRNs: a Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050111</link>
        <id>10.14569/IJACSA.2014.050111</id>
        <doi>10.14569/IJACSA.2014.050111</doi>
        <lastModDate>2014-02-21T06:33:11.7230000+00:00</lastModDate>
        
        <creator>Wajdi Alhakami</creator>
        
        <creator>Ali Mansour</creator>
        
        <creator>Ghazanfar A. Safdar</creator>
        
        <subject>Dynamic Spectrum Access; Spectrum Sharing; Common Control Channel; Cognitive Radio Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Cognitive Radio plays a major part in communication technology by resolving the shortage of the spectrum through usage of dynamic spectrum access and artificial intelligence characteristics. The element of spectrum sharing in cognitive radio is a fundamental approach in utilising free channels. Cooperatively communicating cognitive radio devices use the common control channel of the cognitive radio medium access control to achieve spectrum sharing. Thus, the common control channel and consequently spectrum sharing security are vital to ensuring security in the subsequent data communication among cognitive radio nodes. In addition to well known security problems in wireless networks, cognitive radio networks introduce new classes of security threats and challenges, such as licensed user emulation attacks in spectrum sensing and misbehaviours in the common control channel transactions, which degrade the overall network operation and performance. This review paper briefly presents the known threats and attacks in wireless networks before it looks into the concept of cognitive radio and its main functionality. The paper then mainly focuses on spectrum sharing security and its related challenges. Since spectrum sharing is enabled through usage of the common control channel, more attention is paid to the security of the common control channel by looking into its security threats as well as protection and detection mechanisms. Finally, the pros and cons as well as the comparisons of different CR-specific security mechanisms are presented with some open research issues and challenges.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_11-Spectrum_Sharing_Security_and_Attacks_in_CRNs_a_Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study in Performance for Subcarrier Mapping in Uplink 4G-LTE under Different Channel Cases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050107</link>
        <id>10.14569/IJACSA.2014.050107</id>
        <doi>10.14569/IJACSA.2014.050107</doi>
        <lastModDate>2014-02-21T06:33:11.7100000+00:00</lastModDate>
        
        <creator>Raad Farhood Chisab</creator>
        
        <creator>Prof. (Dr.) C. K. Shukla</creator>
        
        <subject>LTE; SCFDMA; 4G; PAPR; BER; channel model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>In recent years, wireless communication has experienced rapid growth and promises to become a globally important infrastructure. One common design approach in fourth-generation (4G) systems is Single Carrier Frequency Division Multiple Access (SC-FDMA), a single-carrier communication technique on the air interface. It has become broadly accepted mainly because of its high resistance to frequency-selective fading channels. The Third Generation Partnership Project Long Term Evolution (3GPP-LTE) uses this technique in the uplink direction because of its lower peak-to-average power ratio (PAPR) compared to Orthogonal Frequency Division Multiple Access (OFDMA), which is used in the downlink direction. In this paper, LTE in general and SC-FDMA in particular are discussed in detail, and performance is studied under two types of subcarrier mapping, localized and distributed mode, as well as under different channel cases. The results show that localized subcarrier mapping gives a lower bit error rate (BER) than the distributed mode and behaves differently under the various channel cases.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_7-Comparative_Study_in_Performance_for_Subcarrier_Mapping_in_Uplink_4G-LTE_under_Different_Channel_Cases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Classification Accuracy of Heart Sound Signals Using Hierarchical MLP Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050104</link>
        <id>10.14569/IJACSA.2014.050104</id>
        <doi>10.14569/IJACSA.2014.050104</doi>
        <lastModDate>2014-02-21T06:33:11.6930000+00:00</lastModDate>
        
        <creator>Mohd Zubir Suboh</creator>
        
        <creator>Md. Yid M.S.</creator>
        
        <creator>Muhyi Yaakob</creator>
        
        <creator>Mohd Shaiful Aziz Rashid Ali</creator>
        
        <subject>Hierarchical MLP network; Multi-layer Peceptron Network; heart sound signals</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Classification of heart sound signals as normal or into their classes of disease is very important in screening and diagnosis systems, since various applications and devices fulfilling this purpose are rapidly being designed and developed these days. This paper presents an alternative method for improving the classification accuracy of heart sound signals. Standard and improvised Multi-Layer Perceptron (MLP) networks in hierarchical form were used to obtain the best classification results. Two data sets of normal and four abnormal heart sound signals from heart valve diseases were used to train and test the MLP networks. It is found that the hierarchical MLP network could significantly increase the classification accuracy to 100%, compared to the standard MLP network with an accuracy of only 85.71%.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_4-Improving_Classification_Accuracy_of_Heart_Sound_Signals_Using_Hierarchical_MLP_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Meta-heuristic Algorithms for Solving Quadratic Assignment Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050101</link>
        <id>10.14569/IJACSA.2014.050101</id>
        <doi>10.14569/IJACSA.2014.050101</doi>
        <lastModDate>2014-02-21T06:33:11.6770000+00:00</lastModDate>
        
        <creator>Gamal Abd El-Nasser A. Said</creator>
        
        <creator>Abeer M. Mahmoud</creator>
        
        <creator>El-Sayed M. El-Horbaty</creator>
        
        <subject>Quadratic Assignment Problem (QAP); Genetic Algorithm (GA); Tabu Search (TS); Simulated Annealing (SA); Performance Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>The Quadratic Assignment Problem (QAP) is an NP-hard combinatorial optimization problem; therefore, solving the QAP requires applying one or more meta-heuristic algorithms. This paper presents a comparative study of the meta-heuristic algorithms Genetic Algorithm, Tabu Search, and Simulated Annealing for solving a real-life QAP, and analyzes their performance in terms of both runtime efficiency and solution quality. The results show that the Genetic Algorithm achieves better solution quality, while Tabu Search has a faster execution time in comparison with the other meta-heuristic algorithms for solving the QAP.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_1-A_Comparative_Study_of_Meta-heuristic_Algorithms_for_Solving_Quadratic_Assignment_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Scripting Techniques Used in Automated Software Testing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050128</link>
        <id>10.14569/IJACSA.2014.050128</id>
        <doi>10.14569/IJACSA.2014.050128</doi>
        <lastModDate>2014-02-21T06:33:11.6630000+00:00</lastModDate>
        
        <creator>Milad Hanna</creator>
        
        <creator>Nahla El-Haggar</creator>
        
        <creator>Mostafa Sami</creator>
        
        <subject>Software Testing; Automated Software Testing; Test Data; Test Case; Test Script; Manual Testing; Software Under Test; Graphical User Interface</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Software testing is the process of evaluating the developed system to assess the quality of the final product. Unfortunately, the software testing process is expensive and consumes a lot of time throughout the software development life cycle. As software systems grow, manual software testing becomes more and more difficult; therefore, there has always been a need to decrease the testing time. Recently, automation has been regarded by many researchers as a major factor in reducing the testing effort, and automating the software testing process is therefore vital to its success. This study aims to compare the main features of different scripting techniques used in automating the execution phase of the software testing process. In addition, an overview of different scripting techniques is presented to show the state of the art of this study.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_28-A_Review_of_Scripting_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Fractional Wavelet Transform with Closed-Form Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050126</link>
        <id>10.14569/IJACSA.2014.050126</id>
        <doi>10.14569/IJACSA.2014.050126</doi>
        <lastModDate>2014-02-21T06:33:11.6470000+00:00</lastModDate>
        
        <creator>K. O. O. Anoh</creator>
        
        <creator>R. A. A. Abd-Alhameed</creator>
        
        <creator>S. M. R. Jones</creator>
        
        <creator>O. Ochonogor</creator>
        
        <subject>Fractional Fourier transform; wavelet; fractional wavelet transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>A new wavelet transform (WT) is introduced based on the fractional properties of the traditional Fourier transform. The new wavelet follows from the fractional Fourier order, which uniquely identifies the representation of an input function in a fractional domain. It exploits the combined advantages of the WT and the fractional Fourier transform (FrFT). The transform permits the identification of a transformed function based on its fractional rotation in the time-frequency plane. The fractional rotation is then used to identify individual fractional daughter wavelets. This study is, for convenience, limited to one dimension; an approach for extending the treatment to two or more dimensions is shown.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_26-Novel_Fractional_Wavelet_Transform_with_Closed-Form_Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Effective System Modeling of Active Micro- Module Solar Tracker</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050127</link>
        <id>10.14569/IJACSA.2014.050127</id>
        <doi>10.14569/IJACSA.2014.050127</doi>
        <lastModDate>2014-02-21T06:33:11.6470000+00:00</lastModDate>
        
        <creator>Md. Faisal Shuvo</creator>
        
        <creator>Md. Abu Saleh Ovi</creator>
        
        <creator>Md. Mehedi Hasan</creator>
        
        <creator>Arifur Rahman</creator>
        
        <subject>micro-module; active solar tracker; LDR sensor; microcontroller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Increasing interest in using renewable energy is extending from solar thermal energy and solar photovoltaic systems to the micro production of electricity. Solar tracking has usually been considered in large-scale applications such as power plants and satellites, but most small-scale applications do not have any solar tracker system, mainly because of its high cost and complex circuit design. From that perspective, this paper discusses a microcontroller-based, one-dimensional, active micro-module solar tracking system, in which an inexpensive LDR is used to generate the reference voltage that drives the microcontroller operating the tracking system. This system provides a fast tracking response to parameters such as changes in light intensity as well as temperature variations. This micro-module tracking system can be used in small-scale applications such as portable electronic devices and moving vehicles.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_27-Cost_Effective_System_Modeling_of_Active_Micro.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wireless LAN Security Threats &amp; Vulnerabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050125</link>
        <id>10.14569/IJACSA.2014.050125</id>
        <doi>10.14569/IJACSA.2014.050125</doi>
        <lastModDate>2014-02-21T06:33:11.6300000+00:00</lastModDate>
        
        <creator>Md. Waliullah</creator>
        
        <creator>Diane Gan</creator>
        
        <subject>WLAN; IEEE 802.11i; WIDS/WIPS; MITM; DoS; SSID; AP; WEP; WPA/WPA2 </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Wireless LANs are everywhere these days, from homes to large enterprise corporate networks, due to their ease of installation, employee convenience, avoidance of wiring costs and constant mobility support. However, the greater availability of wireless LANs means increased danger from attacks and increased challenges to an organisation, its IT staff and IT security professionals. This paper discusses the various security issues and vulnerabilities related to the IEEE 802.11 wireless LAN encryption standard and common threats/attacks pertaining to home and enterprise wireless LAN systems, and provides overall guidelines and recommendations to home users and organizations.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_25-Wireless_LAN_Security_Threats_Vulnerabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>De Jong’s Sphere Model Test for a Human Community Based Genetic Algorithm Model (HCBGA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050123</link>
        <id>10.14569/IJACSA.2014.050123</id>
        <doi>10.14569/IJACSA.2014.050123</doi>
        <lastModDate>2014-02-21T06:33:11.6170000+00:00</lastModDate>
        
        <creator>Nagham Azmi AL-Madi</creator>
        
        <subject>Genetic Algorithms (GAs); Evolutionary Algorithms (EA); Simple Standard Genetic Algorithms (SGA); Human Community Based Genetic Algorithm (HCBGA) model; De Jong’s functions; the Sphere model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>A new structured-population approach for the genetic algorithm, based on the customs, behaviour and patterns of a human community, is provided. This model is named the Human Community Based Genetic Algorithm (HCBGA) model. It includes gender, age, generation, marriage, birth and death. Using De Jong’s first function, “The Sphere Model”, comparisons between the values and results concerning the averages and best fits of both the Simple Standard Genetic Algorithm (SGA) and the HCBGA model are obtained. These results are encouraging in that the HCBGA model performs better than the SGA in finding best-fit solutions across generations in different populations. The HCBGA model is an evolution of the simple genetic algorithm (SGA).
The result of this paper extends the result concerning the algorithm in [6].</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_23-De_Jong’s_Sphere_Model_Test_for_a_Human_Community_Based_Genetic_Algorithm_Model_(HCBGA).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Interesting Positive and Negative Association Rule Based on Improved Genetic Algorithm (MIPNAR_GA)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050122</link>
        <id>10.14569/IJACSA.2014.050122</id>
        <doi>10.14569/IJACSA.2014.050122</doi>
        <lastModDate>2014-02-21T06:33:11.6000000+00:00</lastModDate>
        
        <creator>Nikky Suryawanshi Rai</creator>
        
        <creator>Susheel Jain</creator>
        
        <creator>Anurag Jain</creator>
        
        <subject>Association rule mining; negative rule and positive rules; frequent and infrequent pattern set; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Association rule mining is a very efficient technique for finding strong relations between correlated data, and this correlation makes the extraction process meaningful. For mining positive and negative rules, a variety of algorithms are used, such as the Apriori algorithm and tree-based algorithms. A number of these algorithms perform well but produce a large number of negative association rules and also suffer from the multi-scan problem. The aim of this paper is to eliminate these problems and reduce the large number of negative rules. Hence, we propose an improved approach to mine interesting positive and negative rules based on genetic and MLMS algorithms. In this method we use a multi-level multiple-support data table coded as 0 and 1; this division reduces the scanning time of the database. The proposed algorithm (MIPNAR_GA), a combination of the MLMS and genetic algorithms, mines interesting positive and negative rules from frequent and infrequent pattern sets. The algorithm is accomplished in three phases: a) extract frequent and infrequent pattern sets using the Apriori method; b) efficiently generate positive and negative rules; c) prune redundant rules by applying interestingness measures. Rule optimization is performed by the genetic algorithm, and for the evaluation of the algorithm real-world datasets were used, such as heart disease data and some standard datasets from the UCI machine learning repository.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_22-Mining_Interesting_Positive_and_Negative_Association_Rule_Based_on_Improved_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining of Web Server Logs in a Distributed Cluster Using Big Data Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050119</link>
        <id>10.14569/IJACSA.2014.050119</id>
        <doi>10.14569/IJACSA.2014.050119</doi>
        <lastModDate>2014-02-21T06:33:11.5830000+00:00</lastModDate>
        
        <creator>Savitha K</creator>
        
        <creator>Vijaya MS</creator>
        
        <subject>Big data; Hadoop; Mapreduce; Session identification; Analytics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Big data refers to emerging, growing datasets beyond the ability of traditional database tools. Hadoop handles big data by processing massive quantities of information on a cluster of commodity hardware. Web server logs are semi-structured files generated by computers in large volumes, usually as flat text files; they are utilized efficiently by MapReduce, as it processes one line at a time. This paper performs session identification in log files using Hadoop in a distributed cluster. Apache Hadoop MapReduce, a data processing platform, is used in pseudo-distributed mode and in fully distributed mode. The framework effectively identifies the sessions of web surfers, recognizing the unique users and the pages accessed by them. The identified sessions are analyzed in R to produce a statistical report based on the total count of visits per day. The results are compared with a non-Hadoop approach in a Java environment, and the proposed work achieves better time efficiency, storage and processing speed.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_19-Mining_of_Web_Server_Logs_in_a_Distributed_Cluster_Using_Big_Data_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling and Output Power Evaluation of Series-Parallel Photovoltaic Modules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050118</link>
        <id>10.14569/IJACSA.2014.050118</id>
        <doi>10.14569/IJACSA.2014.050118</doi>
        <lastModDate>2014-02-21T06:33:11.5700000+00:00</lastModDate>
        
        <creator>Dr. Fadi N. Sibai</creator>
        
        <subject>solar energy; series-parallel photovoltaic modules; mathematical modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Solar energy has received attention in the Middle East given the abundant and free irradiance and extended sunny weather. Although photovoltaic panels were introduced decades ago, they have recently become economical and gained traction. We present a mathematical model for series-parallel photovoltaic modules, evaluate the model, and present the I-V and P-V characteristic plots for various temperatures, irradiance, and diode ideality factors. The power performance results are then analyzed and recommendations are made. Unlike other related work, our evaluation uses standard spreadsheet software avoiding commercial simulation packages and application programming. Results indicate improved power performance with irradiance and parallel connections of cell series branches. Given the hot weather in our region, increasing the number of cells connected in series from 36 to 72 is not recommended.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_18-Modelling_and_Output_Power_Evaluation_of_Series-Parallel_Photovoltaic_Modules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Reversible Counter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050117</link>
        <id>10.14569/IJACSA.2014.050117</id>
        <doi>10.14569/IJACSA.2014.050117</doi>
        <lastModDate>2014-02-21T06:33:11.5530000+00:00</lastModDate>
        
        <creator>Md. Selim Al Mamun</creator>
        
        <creator>B. K. Karmaker</creator>
        
        <subject>Flip-flop; Counter; Garbage Output; Reversible Logic; Quantum Cost </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>This article presents research on the design and synthesis of the sequential circuits and flip-flops available in the digital arena, and describes a new synthesis design of a reversible counter that is optimized in terms of quantum cost, delay and garbage outputs compared to existing designs. We propose a new model of a reversible T flip-flop for designing the reversible counter.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_17-Design_of_Reversible_Counter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>For an Independent Spell-Checking System from the Arabic Language Vocabulary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050115</link>
        <id>10.14569/IJACSA.2014.050115</id>
        <doi>10.14569/IJACSA.2014.050115</doi>
        <lastModDate>2014-02-21T06:33:11.5370000+00:00</lastModDate>
        
        <creator>Bakkali Hamza</creator>
        
        <creator>Yousfi Abdellah</creator>
        
        <creator>Gueddah Hicham</creator>
        
        <creator>Belkasmi Mostafa</creator>
        
        <subject>Arabic language; Lexicon; Misspelled word; Error model; Spell-checking; Edit distance; Morphological analysis; Prefix; Stem; Suffix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>In this paper, we propose a new approach for spell-checking errors committed in the Arabic language.
This approach is almost independent of the dictionary used, owing to the fact that we introduce the concept of morphological analysis into the spell-checking process. Hence, our new system uses a stems dictionary of reduced size rather than exploiting a large dictionary that still does not cover all Arabic words.
The obtained results are highly positive and satisfactory; this has allowed us to validate our concept and shows the importance of our new approach.
</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_15-For_an_Independent_Spell-Checking_System_from_the_Arabic_Language_Vocabulary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Probabilistic Monte-Carlo Method for Modelling and  Prediction of Electronics Component Life</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050113</link>
        <id>10.14569/IJACSA.2014.050113</id>
        <doi>10.14569/IJACSA.2014.050113</doi>
        <lastModDate>2014-02-21T06:33:11.5230000+00:00</lastModDate>
        
        <creator>T. Sreenuch</creator>
        
        <creator>A. Alghassi</creator>
        
        <creator>S. Perinpanayagam</creator>
        
        <creator>Y. Xie</creator>
        
        <subject>Prognostics; Monte-Carlo Simulation; Remaining Useful Life</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Power electronics are widely used in electric vehicles, railway locomotives and new-generation aircraft. The reliability of these components directly affects the reliability and performance of these vehicular platforms. In recent years, extensive research on reliability, failure modes and aging analysis has been carried out. There is a need for an efficient algorithm able to predict the life of power electronics components. In this paper, a probabilistic Monte-Carlo framework is developed and applied to predict the remaining useful life of a component. Probability distributions are used to model the component’s degradation process. The model parameters are learned using Maximum Likelihood Estimation. The prognostic is carried out by means of simulation in this paper. Monte-Carlo simulation is used to propagate multiple possible degradation paths based on the current health state of the component. The remaining useful life and confidence bounds are calculated by estimating the mean, median and percentile descriptive statistics of the simulated degradation paths. Results from different probabilistic models are compared and their prognostic performances are evaluated.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_13-Probabilistic_Monte_Carlo_Method_for_Modelling_and_Prediction_of_Electronics_Component_Life.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reduced Complexity Divide and Conquer Algorithm for Large Scale TSPs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050110</link>
        <id>10.14569/IJACSA.2014.050110</id>
        <doi>10.14569/IJACSA.2014.050110</doi>
        <lastModDate>2014-02-21T06:33:11.5070000+00:00</lastModDate>
        
        <creator>Hoda A. Darwish</creator>
        
        <creator>Ihab Talkhan</creator>
        
        <subject>Traveling Salesman Problem; Computational Geometry; Heuristic Algorithms; Divide and Conquer; Hashing; Nearest Neighbor 2-opt Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>The Traveling Salesman Problem (TSP) is the problem of finding the shortest path that passes through all given cities, visiting each city only once and finishing at the starting city. This problem has NP-hard complexity, making it extremely impractical to obtain the optimal path even for problems as small as 20 cities, since the number of permutations becomes too high. Many heuristic methods have been devised to reach “good” solutions in reasonable time. In this paper, we present the idea of utilizing a spatial “geographical” Divide and Conquer technique in conjunction with heuristic TSP algorithms, specifically the Nearest Neighbor 2-opt algorithm. We have found that the proposed algorithm has lower complexity than algorithms published in the literature. This comes at an accuracy expense of around 9%. It is our belief that the presented approach will be welcomed by the community, especially for large problems where a reasonable solution can be reached in a fraction of the time.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_10-Reduced_Complexity_Divide_and_Conquer_Algorithm_for_Large_Scale_TSPs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Competency-Based Ontology for Learning Design Repositories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050108</link>
        <id>10.14569/IJACSA.2014.050108</id>
        <doi>10.14569/IJACSA.2014.050108</doi>
        <lastModDate>2014-02-21T06:33:11.4900000+00:00</lastModDate>
        
        <creator>Gilbert Paquette</creator>
        
        <subject>Learning Designs; Learning Objects Repository; Competency Referencing; Generic skills; Learning Design Ontology; Metadata for Learning Designs</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Learning designs are central resources for educational environments because they provide the organizational structure of learning activities; they are concrete instructional methods. We characterize each learning design by the competencies they target. We define competencies at the meta-knowledge level, as generic processes acting on domain-specific knowledge. We summarize a functional taxonomy of generic skills that draws upon three fields of knowledge: education, software engineering and artificial intelligence. This taxonomy provides the backbone of an ontology for learning designs, enabling the creation of a library of learning designs based on their cognitive and meta-cognitive properties.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_8-A_Competency_Based_Ontology_for_Learning_Design_Repositories.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computerized Kymograph for Muscle Contraction Measurement Using Ultrasonic Distance Sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050106</link>
        <id>10.14569/IJACSA.2014.050106</id>
        <doi>10.14569/IJACSA.2014.050106</doi>
        <lastModDate>2014-02-21T06:33:11.4600000+00:00</lastModDate>
        
        <creator>Suhaeri </creator>
        
        <creator>Vitri Tundjungsari</creator>
        
        <subject>kymograph; computerized; distance sensor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>A kymograph is a device that records the magnitude of physiological variables, such as muscle contraction. However, we observe some shortcomings of conventional kymographs, such as limited visualisation and accuracy of results. Hence, we propose a computerized kymograph which can automatically measure, record, and display graphical data of muscle contraction on the computer by using an ultrasonic distance sensor. We develop hardware and software systems to support the computerized kymograph and then test our device with a live frog. The results show that the device works well, displaying better visualization than the conventional kymograph.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_6-Computerized_Kymograph.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Generic Framework for Automated Quality Assurance of Software Models –Implementation of an Abstract Syntax Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050105</link>
        <id>10.14569/IJACSA.2014.050105</id>
        <doi>10.14569/IJACSA.2014.050105</doi>
        <lastModDate>2014-02-21T06:33:11.4430000+00:00</lastModDate>
        
        <creator>Darryl Owens</creator>
        
        <creator>Dr Mark Anderson</creator>
        
        <subject>software quality assurance; software testing; automated software engineering; programming language paradigms; language independence; abstract syntax tree; static analysis; dynamic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Abstract Syntax Trees (ASTs) are used in language tools such as compilers, language translators, transformers and analysers to abstract away syntax, and are therefore an ideal construct for a language-independent tool. ASTs are also commonly used in static analysis. This increases the value of ASTs for use within a universal Quality Assurance (QA) tool. The Object Management Group (OMG) has outlined a Generic AST Meta-model (GASTM) which may be used to implement the internal representation (IR) for this tool. This paper discusses the implementation of, and modifications made to, the previously published proposal to use the OMG-developed Generic Abstract Syntax Tree Meta-model core components as an internal representation for an automated quality assurance framework.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_5-A_Generic_Framework_for_Automated_Quality_Assurance_of_Software_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Laguerre Kernels –Based SVM for Image Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050103</link>
        <id>10.14569/IJACSA.2014.050103</id>
        <doi>10.14569/IJACSA.2014.050103</doi>
        <lastModDate>2014-02-21T06:33:11.4300000+00:00</lastModDate>
        
        <creator>Ashraf Afifi</creator>
        
        <subject>Laguerre polynomials; kernel functions; functional analysis; SVMs; classification problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Support vector machines (SVMs) have been promising methods for classification and regression analysis because of their solid mathematical foundations, which convey several salient properties that other methods hardly provide. However, the performance of SVMs is very sensitive to how the kernel function is selected; the challenge is to choose the kernel function that yields accurate data classification. In this paper, we introduce a set of new kernel functions derived from the generalized Laguerre polynomials. The proposed kernels could improve the classification accuracy of SVMs for both linear and nonlinear data sets. The proposed kernel functions satisfy Mercer’s condition and orthogonality properties, which are important and useful in applications where the number of support vectors is needed, as in feature selection. The performance of the generalized Laguerre kernels is evaluated in comparison with existing kernels. It was found that the choice of the kernel function and the values of its parameters are critical for a given amount of data. The proposed kernels give good classification accuracy on nearly all the data sets, especially those of high dimension.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_3-Laguerre_Kernels_Based_SVM_for_Image_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generic Packing Detection Using Several Complexity Analysis for Accurate Malware Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050102</link>
        <id>10.14569/IJACSA.2014.050102</id>
        <doi>10.14569/IJACSA.2014.050102</doi>
        <lastModDate>2014-02-21T06:33:11.4130000+00:00</lastModDate>
        
        <creator>Dr. Mafaz Mohsin Khalil Al-Anezi</creator>
        
        <subject>Packed Executables; Malware Detection; compression algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Attackers do not want their malicious software (malware) to be revealed by anti-virus analyzers. In order to conceal their malware, malware programmers utilize anti-reverse-engineering and code-transformation techniques such as packing, encoding and encryption. Malware writers have learned that signature-based detectors can be easily evaded by “packing” the malicious payload in layers of compression or encryption. State-of-the-art malware detectors have adopted both static and dynamic techniques to recover the payload of packed malware, but unfortunately such techniques are highly ineffective. If the malware is packed or encrypted, it is very difficult to analyze. Therefore, to prevent the harmful effects of malware and to generate signatures for malware detection, packed and encrypted executable code must first be unpacked. The first step of unpacking is to detect packed executable files.
The objective is to efficiently and accurately distinguish between packed and non-packed executables, so that only executables detected as packed are sent to a generic unpacker, thus saving a significant amount of processing time. The generic method of this paper achieves very high detection accuracy of packed executables with a low average processing time.
In this paper, a packed-file detection technique based on complexity, as measured by several algorithms, is presented and tested using a dataset of packed and unpacked .exe files. The preliminary results are very promising: high accuracy is achieved with adequate performance, with about a 96% detection rate on packed files and a 93% detection rate on unpacked files. The experiments also demonstrate that this generic technique can effectively detect unknown, obfuscated malware and cannot be evaded by known evasion techniques.
</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_2-Generic_Packing_Detection_Using_Several_Complexity_Analysis_for_Accurate_Malware_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Usability Study on the Use of Auditory Icons to Support Virtual Lecturers in E-Learning Interfaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050112</link>
        <id>10.14569/IJACSA.2014.050112</id>
        <doi>10.14569/IJACSA.2014.050112</doi>
        <lastModDate>2014-02-21T06:33:11.3970000+00:00</lastModDate>
        
        <creator>Marwan Alseid</creator>
        
        <creator>Mohammad Azzeh</creator>
        
        <creator>Yousef El Sheikh</creator>
        
        <subject>auditory icons; virtual lecturer; e-learning; usability; speech; avatar; multimodal interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Prior research revealed that auditory icons can contribute to supporting virtual lecturers, in the presence of full-body animation, while delivering learning content in e-learning interfaces. This paper presents a further empirical investigation into the use of these supportive auditory icons by comparing three different e-learning interfaces in terms of usability aspects: effectiveness, user satisfaction and memorability. The aim is to find out which combination of the tested multimodal metaphors best utilizes auditory icons to supplement the presentation of learning material by a virtual lecturer. The first experimental e-learning interface incorporates a speaking virtual lecturer with full-body gestures along with supportive auditory icons. The second includes the virtual lecturer&#39;s speech in the absence of his body, accompanied by the same auditory icons used in the first interface. The third interface is similar to the second in using the virtual lecturer&#39;s speech, but without any additional auditory icons. The obtained results show that the inclusion of auditory icons can enhance the usability and learning performance of e-learning interfaces much more when combined with a speaking virtual lecturer in the absence of any body animation.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_12-A_Comparative_Usability_Study_on_the_Use_of_Auditory_Icons_to_Support_Virtual_Lecturers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DES: Dynamic and Elastic Scalability in Cloud Computing Database Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050124</link>
        <id>10.14569/IJACSA.2014.050124</id>
        <doi>10.14569/IJACSA.2014.050124</doi>
        <lastModDate>2014-02-21T06:33:11.3830000+00:00</lastModDate>
        
        <creator>Dr. K. Chitra</creator>
        
        <creator>B.Jeeva Rani</creator>
        
        <subject>Cloud computing; Availability; Scalability; Shared-disk; Shared-nothing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>Nowadays, companies are becoming global organizations. Such organizations do not limit themselves to conducting business in one country. They need a dynamic, elastic, scalable cloud computing platform that operates around the clock. Full functionality, adaptability, non-stop availability and reduced cost are the major requirements expected from cloud computing services. Planned or unplanned system outages are the enemies of successful business in a cloud computing environment; hence it requires highly available, elastically scalable systems. In this paper, we analyze the benefits of cloud computing and evaluate two database architectures, namely the shared-disk and shared-nothing architectures, for high availability and dynamic, elastic scalability.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_24-DES_Dynamic_and_Elastic_Scalability_in_Cloud _Computing_Database_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Undeniable Threshold Proxy Signature Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2014.050109</link>
        <id>10.14569/IJACSA.2014.050109</id>
        <doi>10.14569/IJACSA.2014.050109</doi>
        <lastModDate>2014-02-21T06:33:11.3330000+00:00</lastModDate>
        
        <creator>Sattar J. Aboud</creator>
        
        <subject>cryptography; digital signature; proxy signature; threshold proxy signature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 5(1), 2014</description>
        <description>The threshold proxy signature scheme allows the original signer to delegate signature authority to a proxy group, which cooperatively signs messages on behalf of the original signer. In this paper, we propose a new scheme which includes the features and benefits of the RSA scheme. We also evaluate the security of an undeniable threshold proxy signature scheme with known signers. We find that the existing threshold proxy scheme is insecure against original-signer forgery, and we show the cryptanalysis of the existing scheme. Additionally, we propose a secure, undeniable, known-signers threshold proxy signature scheme which addresses the drawbacks of the existing scheme. We also demonstrate that the existing threshold proxy signature suffers from a conspiracy between the original signer and the secret-share dealer, that the scheme is generally forgeable, and that it cannot offer undeniability. We claim that the proposed scheme offers the undeniability characteristic.</description>
        <description>http://thesai.org/Downloads/Volume5No1/Paper_9-Undeniable_Threshold_Proxy_Signature_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Migration Dynamics in Artificial Agent Societies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030208</link>
        <id>10.14569/IJARAI.2014.030208</id>
        <doi>10.14569/IJARAI.2014.030208</doi>
        <lastModDate>2014-02-10T16:13:59.9170000+00:00</lastModDate>
        
        <creator>Harjot Kaur</creator>
        
        <creator>Karanjeet Singh Kahlon</creator>
        
        <creator>Rajinder Singh Virk</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>An Artificial Agent Society can be defined as a collection of agents interacting with each other for some purpose and/or inhabiting a specific locality, possibly in accordance with some common norms/rules. These societies are analogous to human and ecological societies, and are an expanding and emerging field in research about social systems. Social networks, electronic markets and disaster management organizations can be viewed as such artificial (open) agent societies and can be best understood as computational societies. Members of such artificial agent societies are heterogeneous intelligent software agents which operate locally and cooperate and coordinate with each other in order to achieve the goals of the agent society. These artificial agent societies exhibit several kinds of dynamics, in terms of agent migration, role assignment, norm emergence, security and agent interaction. In this paper, we describe the dynamics of the agent migration process, covering the various types of agent migration, the causes or reasons for agent migration, and the consequences of agent migration, and we present an agent migration framework to model the behavior of agents migrating between societies.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_8-Migration_Dynamics_in_Artificial_Agent_Societies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Unlabeled Data to Improve Inductive Models by Incorporating Transductive Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030207</link>
        <id>10.14569/IJARAI.2014.030207</id>
        <doi>10.14569/IJARAI.2014.030207</doi>
        <lastModDate>2014-02-10T16:13:59.8870000+00:00</lastModDate>
        
        <creator>ShengJun Cheng</creator>
        
        <creator>Jiafeng Liu</creator>
        
        <creator>XiangLong Tang</creator>
        
        <subject>Inductive model; Transductive model; Semi-supervised learning; Markov random walk</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>This paper shows how to use labeled and unlabeled data to improve inductive models with the help of transductive models. We propose a solution for the self-training scenario. Self-training is an effective semi-supervised wrapper method which can generalize any type of supervised inductive model to the semi-supervised setting. It iteratively refines an inductive model by bootstrapping from unlabeled data. Standard self-training uses the classifier model (trained on labeled examples) to label and select candidates from the unlabeled training set, which may be problematic since the initial classifier may not be able to provide highly confident predictions while labeled training data is scarce. As a result, it can introduce too many wrongly labeled candidates into the labeled training set, which may severely degrade performance. To tackle this problem, we propose a novel self-training-style algorithm which incorporates a graph-based transductive model in the self-labeling process. Unlike standard self-training, our algorithm utilizes labeled and unlabeled data as a whole to label and select unlabeled examples for training-set augmentation. A robust transductive model based on graph Markov random walks is proposed, which exploits the manifold assumption to output reliable predictions on unlabeled data using noisy labeled examples. The proposed algorithm can greatly minimize the risk of performance degradation due to accumulated noise in the training set. Experiments show that the proposed algorithm can effectively utilize unlabeled data to improve classification performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_7-Using_Unlabeled_Data_to_Improve_Inductive_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preliminary Study on Phytoplankton Distribution Changes Monitoring for the Intensive Study Area of the Ariake Sea, Japan Based on Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030206</link>
        <id>10.14569/IJARAI.2014.030206</id>
        <doi>10.14569/IJARAI.2014.030206</doi>
        <lastModDate>2014-02-10T16:13:59.8570000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Toshiya Katano</creator>
        
        <subject>chlorophyll-a concentration; suspended solid; ocean winds</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>Phytoplankton distribution changes in the Ariake Sea area, Japan, are studied based on remote sensing satellite data. Through experiments with Terra and AQUA MODIS-derived chlorophyll-a concentration and suspended solids, together with truth data of chlorophyll-a concentration as well as meteorological and tidal data acquired over the 7 months from October 2012 to April 2013, a strong correlation is found between the truth data of chlorophyll-a and the MODIS-derived chlorophyll-a concentrations, with R-squared values ranging from 0.677 to 0.791. Relations are also found between ocean wind speed and chlorophyll-a concentration, as well as between tidal effects and chlorophyll-a concentration. Meanwhile, there is a relatively high correlation between daily sunshine duration and chlorophyll-a concentration.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_6-Preliminary_Study_on_Phytoplankton_Distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Quality of Answer in Collaborative Question Answer Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030205</link>
        <id>10.14569/IJARAI.2014.030205</id>
        <doi>10.14569/IJARAI.2014.030205</doi>
        <lastModDate>2014-02-10T16:13:59.8230000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>ANIK Nur Handayani</creator>
        
        <subject>collaborative question answer learning; domain knowledge; answer quality predictor; recommender</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>Studies over the years have shown that students become actively and more interactively involved in classroom discussions to gain knowledge. By posting questions for other participants to answer, students can obtain several answers to their question. The problem is that the answer chosen by a student as the best answer is not necessarily the best-quality answer. Therefore, an automatic recommender system based on student activity may improve this situation, as it chooses the best answer objectively. On the other hand, in the implementation of collaborative learning, in addition to sharing information, students sometimes also need a reference or domain knowledge relevant to the topic. In this paper, we propose an answer quality predictor for collaborative question answer (CQA) learning, to predict the quality of an answer either from a recommender system based on user activity or from domain knowledge as reference information.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_5-Predicting_Quality_of_Answer_in_Collaborative_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Traffic Flow Estimation using On-dashboard Camera Image</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030204</link>
        <id>10.14569/IJARAI.2014.030204</id>
        <doi>10.14569/IJARAI.2014.030204</doi>
        <lastModDate>2014-02-10T16:13:59.8230000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Steven Ray Sentinuwo</creator>
        
        <subject>traffic flow estimation; on-dashboard camera; computer vision.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>This paper presents a method for estimating traffic flow on urban roadways using images from a car’s on-dashboard camera. The described system is novel in that it uses only road traffic photo images to obtain information about urban roadway traffic flow automatically.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_4-Method_for_traffic_flow_estimation_using_on-dashboard_camera.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Gait Gender Classification using 3D Discrete Wavelet Transform Feature Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030203</link>
        <id>10.14569/IJARAI.2014.030203</id>
        <doi>10.14569/IJARAI.2014.030203</doi>
        <lastModDate>2014-02-10T16:13:59.8070000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Rosa Andrie Asmara</creator>
        
        <subject>gender gait classification; 3D Skeleton Model; SVM; Biometrics; 3D DWT</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>Feature extraction methods for gait recognition have been studied widely. They fall into two categories: model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters by modeling or tracking body components such as limbs, legs, arms, and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes, and their advantage is a low computational cost compared to model-based approaches; however, they are usually not robust to changes in viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors are expensive and can typically only be afforded by large animation studios. Fortunately, the Kinect camera, equipped with a depth image sensor, is now available on the market at a very low price compared to any mocap device. Its accuracy is not as good as the expensive devices, but with some preprocessing methods we can remove the jitter and noise in the 3D skeleton points. Our proposed method analyzes the effectiveness of 3D skeleton feature extraction using the 3D Discrete Wavelet Transform (3D DWT). We use a Kinect camera to obtain the depth data and Ipisoft mocap software to extract a 3D skeleton model from the Kinect video. The experimental results show 83.75% correctly classified instances using SVM.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_3-Human_Gait_Gender_Classification_using_3D_Discrete_Wavelet_Transform_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Feature Extraction Components from Several Wavelet Transformations for Ornamental Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030202</link>
        <id>10.14569/IJARAI.2014.030202</id>
        <doi>10.14569/IJARAI.2014.030202</doi>
        <lastModDate>2014-02-10T16:13:59.7930000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>wavelet transformation; shift invariant; rotation invariant; feature extraction; leaf identification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>Humans have a duty to preserve nature, and preserving plants is one example. This research focuses on ornamental plants that function not only as ornaments but also as medicinal plants. The purpose of this research is to find the best feature extraction components from several wavelet transformations: Daubechies, Dyadic, and Dual-tree complex wavelet transformation. The Dyadic and Dual-tree complex wavelet transformations have the shift-invariance property, while Daubechies is a standard wavelet transform widely used in many applications. The comparison uses leaf image datasets from ornamental plants. The experiments show that the best classification performance is attained by the Dual-tree complex wavelet transformation, with an overall result of 96.66%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_2-Comparative_Study_of_Feature_Extraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Static Gesture Recognition Combining Graph and Appearance Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030201</link>
        <id>10.14569/IJARAI.2014.030201</id>
        <doi>10.14569/IJARAI.2014.030201</doi>
        <lastModDate>2014-02-10T16:13:59.7300000+00:00</lastModDate>
        
        <creator>Marimpis Avraam</creator>
        
        <subject>Gesture Recognition; Neural Gas; Linear Discriminant Analysis; Bayes Rule; Laplacian Matrix</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(2), 2014</description>
        <description>In this paper we propose combining graph-based characteristics with appearance-based descriptors, such as detected edges, for modeling static gestures. Initially we convolve the original image with a Gaussian kernel to blur the image, and Canny edges are then extracted. The blurring is performed in order to enhance characteristics of the image that are crucial to the topology of the gesture (especially when the fingers are overlapping). Among the many properties that can describe a graph is the adjacency matrix, which describes the topology of the graph itself. We approximate the topology of the hand using Neural Gas with Competitive Hebbian Learning, generating a graph. From the graph we extract the Laplacian matrix and calculate its spectrum. Both the Canny edges and the Laplacian spectrum are used as features. As a classifier we employ Linear Discriminant Analysis with Bayes’ Rule. We apply our method to a published American Sign Language dataset (ten classes); the results are very promising, and further study of this approach is planned by the authors.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No2/Paper_1-Static_Gesture_Recognition_Combining_Graph_and_Appearance_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Zernike Moment Feature Extraction for Handwritten Devanagari (Marathi) Compound Character Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030110</link>
        <id>10.14569/IJARAI.2014.030110</id>
        <doi>10.14569/IJARAI.2014.030110</doi>
        <lastModDate>2014-01-09T19:24:51.6370000+00:00</lastModDate>
        
        <creator>Karbhari V. Kale</creator>
        
        <creator>Prapti D. Deshmukh</creator>
        
        <creator>Shriniwas V. Chavan</creator>
        
        <creator>Majharoddin M. Kazi</creator>
        
        <creator>Yogesh S. Rode</creator>
        
        <subject>Handwritten Character, Devanagari Compound, Zernike, SVM, k-NN.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>Compound character recognition of the Devanagari script is a challenging task, since the characters are complex in structure and can be modified by writing combinations of two or more characters. These compound characters occur at a rate of 12 to 15% in the Devanagari script. Moment-based techniques have been successfully applied to several image processing problems and represent a fundamental tool for generating feature descriptors; the Zernike moment technique in particular has a rotation-invariance property that is desirable for handwritten character recognition. This paper discusses the extraction of features from handwritten compound characters using the Zernike moment feature descriptor and proposes an SVM- and k-NN-based classification system. The proposed system preprocesses and normalizes 27000 handwritten character images into 30x30 pixel images and divides them into zones. A pre-classification produces three classes depending on the presence or absence of a vertical bar, after which Zernike moment feature extraction is performed on each zone. The overall recognition rates of the proposed system using the SVM and k-NN classifiers are up to 98.37% and 95.82%, respectively.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_10-Zernike_Moment_Feature_Extraction_for_Handwritten.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Some more results on fuzzy k-competition graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030109</link>
        <id>10.14569/IJARAI.2014.030109</id>
        <doi>10.14569/IJARAI.2014.030109</doi>
        <lastModDate>2014-01-09T19:24:51.6230000+00:00</lastModDate>
        
        <creator>Sovan Samanta</creator>
        
        <creator>Madhumangal Pal</creator>
        
        <creator>Anita Pal</creator>
        
        <subject>Fuzzy graphs, fuzzy competition graphs, fuzzy k-competition graphs, fuzzy competition number</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>The fuzzy competition graph is introduced here as a generalization of the competition graph. The fuzzy k-competition graph, a special type of fuzzy competition graph, is defined along with fuzzy isolated vertices. The fuzzy competition number is also introduced, and several of its properties are investigated. Isomorphism properties of fuzzy competition graphs are discussed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_9-Some_more_results_on_fuzzy_k-competition_graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New concepts of fuzzy planar graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030108</link>
        <id>10.14569/IJARAI.2014.030108</id>
        <doi>10.14569/IJARAI.2014.030108</doi>
        <lastModDate>2014-01-09T19:24:51.6070000+00:00</lastModDate>
        
        <creator>Sovan Samanta</creator>
        
        <creator>Anita Pal</creator>
        
        <creator>Madhumangal Pal</creator>
        
        <subject>Fuzzy graphs, fuzzy planar graphs, fuzzy dual graphs, isomorphism.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>The fuzzy planar graph is an important subclass of the fuzzy graph. Fuzzy planar graphs and several of their properties are presented. Closely associated with the fuzzy planar graph is the fuzzy dual graph, which is also defined and several of whose properties are studied. Isomorphism on fuzzy graphs is well defined in the literature; isomorphic relations between a fuzzy planar graph and its dual graph are established.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_8-New_concepts_of_fuzzy_planar_graphs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Lips-Contour Recognition and Tracing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030107</link>
        <id>10.14569/IJARAI.2014.030107</id>
        <doi>10.14569/IJARAI.2014.030107</doi>
        <lastModDate>2014-01-09T19:24:51.5770000+00:00</lastModDate>
        
        <creator>Md. Hasan Tareque</creator>
        
        <creator>Ahmed Shoeb Al Hasan</creator>
        
        <subject>Watershed Model; Active contour models (ACM); H8 filter Contour model; Lypunov stability theory</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>Human-lip detection is an important component of many modern automated systems: applications such as computerized speech reading and face recognition can work more precisely if the human lip is detected accurately. There are many processes for detecting human lips. In this paper an approach is developed to detect the region of a human lip, which we call the lip contour. For this, a region-based Active Contour Model (ACM) is introduced with watershed segmentation. In this model we use global energy terms instead of local energy terms because global energy gives a better convergence rate in adverse environments. When the ACM is initialized using an H8 filter based on Lyapunov stability theory, the system gives more accurate and stable results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_7-Human_Lips-Contour_Recognition_and_Tracing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Trust Evaluation for Trust-based RS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030106</link>
        <id>10.14569/IJARAI.2014.030106</id>
        <doi>10.14569/IJARAI.2014.030106</doi>
        <lastModDate>2014-01-09T19:24:51.5600000+00:00</lastModDate>
        
        <creator>Sajjawat Charoenrien</creator>
        
        <creator>Saranya Maneeroj</creator>
        
        <subject>trust-based recommender systems; trust values; similarity values</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>Trust-based recommender systems provide recommendations of the most suitable items for individual users by using trust values from their trusted friends. Usually, the trust values are obtained directly from the users or calculated from the similarity values between pairs of users. However, current trust value evaluation can cause the following three problems. First, it is difficult to identify the co-rated items needed to calculate the similarity values between users. Second, current trust value evaluation has a symmetry property, producing the same trust value in both directions (trustor and trustee). Finally, current trust value evaluation does not address how to adjust the trust values for remote users. To eliminate these problems, our proposed method consists of three new factors. First, the similarity values between users are calculated using a latent factor model instead of co-rated items. Second, in order to quantify the trustworthiness of every user in the trust network, degrees of reliability are calculated. Finally, we use the number of hops to adjust the trust values for remote users, who are expected to have low trust, reflecting real-world behavior. This trust evaluation leads to better predicted ratings and to more ratings being predictable. Consequently, our experiments show that a more efficient trust-based recommender system is obtained compared with the classical method in terms of both accuracy and coverage.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_6-A_New_Trust_Evaluation_for_Trust-based_RS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method and System for Human Action Detections with Acceleration Sensors for the Proposed Rescue System for Disabled and Elderly Persons Who Need a Help in Evacuation from Disaster Area</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030105</link>
        <id>10.14569/IJARAI.2014.030105</id>
        <doi>10.14569/IJARAI.2014.030105</doi>
        <lastModDate>2014-01-09T19:24:51.4970000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>vital sign; heart beat pulse rate; body temperature; blood pressure; breathing; consciousness; sensor network</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>A method and system for human action detection with acceleration sensors is proposed for the rescue system for disabled and elderly persons who need help evacuating from disaster areas. Not only the vital signs (blood pressure, heart beat pulse rate, body temperature, breathing, and consciousness) but also the location and attitude of the persons have to be monitored for the proposed rescue system. The attitude can be measured with acceleration sensors; in particular, it is useful to discriminate among the attitudes of sitting, standing up, and lying down. Action speed also has to be detected. Experimental results show that this attitude monitoring can be done with acceleration sensors.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_5-Method_and_System_for_Human_Action_Detections_with_Acceleration_Sensors_for_the_Proposed_Rescue_System_for_Disabled.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vital Sign and Location/Attitude Monitoring with Sensor Networks for the Proposed Rescue System for Disabled and Elderly Persons Who Need a Help in Evacuation from Disaster Areas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030104</link>
        <id>10.14569/IJARAI.2014.030104</id>
        <doi>10.14569/IJARAI.2014.030104</doi>
        <lastModDate>2014-01-09T19:24:51.4830000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>vital sign; heart beat pulse rate; body temperature; blood pressure; breathing; consciousness; sensor network</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>A method and system for vital sign monitoring (body temperature, blood pressure, breathing, heart beat pulse rate, and consciousness) and location/attitude monitoring with sensor networks is proposed for the rescue system for disabled and elderly persons who need help evacuating from disaster areas. Experimental results show that all of the vital signs, as well as the location and attitude of the disabled and elderly persons, are monitored with the proposed sensor networks.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_4-Vital_Sign_and_Location_Attitude_Monitoring_with_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis for Aerosol Refractive Index and Size Distribution Estimation Methods Based on Polarized Atmospheric Irradiance Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030103</link>
        <id>10.14569/IJARAI.2014.030103</id>
        <doi>10.14569/IJARAI.2014.030103</doi>
        <lastModDate>2014-01-09T19:24:51.4830000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Degree of Polarization; aerosol refractive index; size distribution </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>Aerosol refractive index and size distribution estimation based on polarized atmospheric irradiance measurements is proposed, together with its application to reflectance-based vicarious calibration. A method for reflectance-based vicarious calibration, with the aerosol refractive index and size distribution estimated from atmospheric polarization irradiance data, is proposed. It is possible to estimate the aerosol refractive index and size distribution from atmospheric polarization irradiance measured at different observation angles (scattering angles). The Top of the Atmosphere (TOA), or at-sensor, radiance is estimated with atmospheric codes using the estimated refractive index and size distribution; the vicarious calibration coefficient can then be calculated by comparison with the data acquired by visible to near-infrared instruments onboard satellites. The TOA radiance estimated by the proposed method is compared to that of an aureole-meter based approach, in which the refractive index and size distribution are estimated from solar direct, diffuse, and aureole measurements (the conventional AERONET approach). Notably, an aureole-meter is heavy, large, and not portable, while polarization irradiance measurement instruments are light and small.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_3-Sensitivity_Analysis_for_Aerosol_Refractive_Index_and_Size_Distribution_Estimation_Methods_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speed and Vibration Performance as well as Obstacle Avoidance Performance of Electric Wheel Chair Controlled by Human Eyes Only</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030102</link>
        <id>10.14569/IJARAI.2014.030102</id>
        <doi>10.14569/IJARAI.2014.030102</doi>
        <lastModDate>2014-01-09T19:24:51.4500000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>Human Computer Interaction; Gaze; Obstacle Avoidance; Electric Wheel Chair Control</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>An evaluation of the speed and vibration performance, as well as the obstacle avoidance performance, of the previously proposed Electric Wheel Chair (EWC) controlled by human eyes only is conducted. Experimental results show acceptable speed and vibration performance as well as obstacle avoidance performance for disabled persons. More importantly, disabled persons are satisfied with the proposed EWC because it works by their eyes only: without hands and fingers, they can control the EWC freely.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_2-Speed_and_Vibration_Performance_as_well_as_Obstacle_Avoidance_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Association Rule Based Flexible Machine Learning Module for Embedded System Platforms like Android</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2014</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2014.030101</link>
        <id>10.14569/IJARAI.2014.030101</id>
        <doi>10.14569/IJARAI.2014.030101</doi>
        <lastModDate>2014-01-09T19:24:51.4030000+00:00</lastModDate>
        
        <creator>Amiraj Dhawan</creator>
        
        <creator>Shruti Bhave</creator>
        
        <creator>Amrita Aurora</creator>
        
        <creator>Vishwanathan Iyer</creator>
        
        <subject>machine learning; association rules; machine learning in embedded systems; android, ID3; Apriori; Max-Miner</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 3(1), 2014</description>
        <description>The past few years have seen a tremendous growth in the popularity of smartphones. As newer features continue to be added to smartphones to increase their utility, their significance will only increase in future. Combining machine learning with mobile computing can enable smartphones to become ‘intelligent’ devices, a feature which is hitherto unseen in them. Also, the combination of machine learning and context aware computing can enable smartphones to gauge users’ requirements proactively, depending upon their environment and context. Accordingly, necessary services can be provided to users.
In this paper, we have explored the methods and applications of integrating machine learning and context aware computing on the Android platform, to provide higher utility to the users. To achieve this, we define a Machine Learning (ML) module which is incorporated in the basic Android architecture. Firstly, we have outlined two major functionalities that the ML module should provide. Then, we have presented three architectures, each of which incorporates the ML module at a different level in the Android architecture. The advantages and shortcomings of each of these architectures have been evaluated. Lastly, we have explained a few applications in which our proposed system can be incorporated such that their functionality is improved.
</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume3No1/Paper_1-Association_Rule_Based_Flexible_Machine_Learning_Module_for_Embedded_System_Platforms_like_Android.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Constructing and monitoring processes in BPM using hybrid architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030410</link>
        <id>10.14569/SpecialIssue.2013.030410</id>
        <doi>10.14569/SpecialIssue.2013.030410</doi>
        <lastModDate>2014-01-02T18:06:20.6300000+00:00</lastModDate>
        
        <creator>Jos&#233; Martinez Garro</creator>
        
        <creator>Patricia Baz&#225;n</creator>
        
        <subject>BPM; Cloud Computing; Execution; Monitoring</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>With the entrance of BPM into the Cloud, a change in the conception and design of business processes has occurred. Distributed environments in this context offer computing possibilities that are advantageous for processes, especially in a decomposition context. This concept has been introduced into BPM, allowing processes to be executed in a cloud environment as well as in an embedded one. This takes advantage of both approaches under criteria such as sensitive data, high computing performance, and system portability. An unexplored aspect in the current literature is process monitoring over a decomposed environment. In the present article we analyze some concepts presented in the current literature, and we also propose an architecture for a distributed process monitoring system. In this architecture we consider different design factors, such as location transparency and the data needed for instance tracking over a cloud system.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_10-Constructing_and_Monitoring_Processes_in_BPM_using_Hybrid_Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ultrafast Scalable Embedded DCT Image Coding for Tele-immersive Delay-Sensitive Collaboration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041230</link>
        <id>10.14569/IJACSA.2013.041230</id>
        <doi>10.14569/IJACSA.2013.041230</doi>
        <lastModDate>2013-12-31T10:36:09.8800000+00:00</lastModDate>
        
        <creator>Mauritz Panggabean</creator>
        
        <creator>Maciej Wielgosz</creator>
        
        <creator>Harald &#216;verby</creator>
        
        <creator>Leif Arne R&#248;nningen</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>A delay-sensitive, real-time, tele-immersive collaboration for the future requires much lower end-to-end delay (EED) for good synchronization than that for existing teleconference systems. Hence, the maximum EED must be guaranteed, and the visual-quality degradation must be graceful. The Distributed Multimedia Plays (DMP) architecture addresses the envisioned collaboration and the challenges. We propose a DCT-based, embedded, ultrafast, quality-scalable image-compression scheme for the collaboration on the DMP architecture. A parallel FPGA implementation is also designed to show the technical feasibility.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_30-Ultrafast_Scalable_Embedded_DCT_Image_Coding_for.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Architecture for Kriging Image Interpolation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041229</link>
        <id>10.14569/IJACSA.2013.041229</id>
        <doi>10.14569/IJACSA.2013.041229</doi>
        <lastModDate>2013-12-30T18:32:47.0770000+00:00</lastModDate>
        
        <creator>Maciej Wielgosz</creator>
        
        <creator>Mauritz Panggabean</creator>
        
        <creator>Leif Arne R&#248;nningen</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This paper proposes an ultrafast scalable embedded image compression scheme based on the discrete cosine transform. It is designed for a general network architecture that guarantees maximum end-to-end delay (EED), in particular the Distributed Multimedia Plays (DMP) architecture. DMP is designed to enable people to perform delay-sensitive real-time collaboration from remote places via their own collaboration space (CS). It requires much lower EED to achieve good synchronization than that in existing teleconference systems. A DMP node can drop packets from networked CSs intelligently to guarantee its local delay and degrade visual quality gracefully. The transmitter classifies the visual information in an input image into priority ranks. Included in the bitstream as side information, the ranks enable intelligent packet dropping. The receiver reconstructs the image from the remaining packets. Four priority ranks for dropping are provided. Our promising results reveal that, with the proposed compression technique, maximum EED can be guaranteed with graceful degradation of image quality. The given parallel designs for its hardware implementation in FPGA show its technical feasibility as a module in the DMP architecture.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_29-FPGA_Architecture_for_Kriging_Image_Interpolation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Intelligent Surveillance System Focused on Comprehensive Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041228</link>
        <id>10.14569/IJACSA.2013.041228</id>
        <doi>10.14569/IJACSA.2013.041228</doi>
        <lastModDate>2013-12-30T18:32:45.9770000+00:00</lastModDate>
        
        <creator>Shigeki Aoki</creator>
        
        <creator>Tatsuya Gibo</creator>
        
        <creator>Eri Kuzumoto</creator>
        
        <creator>Takao Miyamoto</creator>
        
        <subject>Intelligent surveillance system; comprehensive flow; optical flow; Shannon’s information theory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Surveillance cameras are today a common sight in public spaces and thoroughfares, where they are used to prevent crime and monitor traffic. However, human operators have limited attention spans and may miss anomalies. Here, we develop an intelligent surveillance system on the basis of spatio-temporal information in the comprehensive flow of human traffic. The comprehensive flow is extracted from optical flows, and anomalies are identified on the basis of the spatio-temporal distribution. Because our system extracts only a few anomalies from many surveillance cameras, operators will not miss the important scenes. In experiments, we confirmed the effectiveness of our intelligent surveillance system.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_28-Development_of_Intelligent_Surveillance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Powerful Online Search Expert System Based on Semantic Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041227</link>
        <id>10.14569/IJACSA.2013.041227</id>
        <doi>10.14569/IJACSA.2013.041227</doi>
        <lastModDate>2013-12-30T18:32:44.8500000+00:00</lastModDate>
        
        <creator>Yasser A. Nada</creator>
        
        <subject>Expert System; Semantic Web; Online Search; XML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>In this paper we intend to build an expert system based on the Semantic Web for online search using XML, to help users find the desired software and read about its features and specifications. The expert system saves the user&#39;s time and the effort of web searching or of buying software from available libraries. Building an online search expert system is ideal for capturing support knowledge to produce interactive online systems that provide search details and situation-specific advice, exactly like holding a session with an expert. Any person can access this interactive system from a web browser and get answers to questions in addition to precise advice provided by an expert. The system can provide some troubleshooting diagnoses, find the right products, … etc. The proposed system further combines aspects of three research topics (Semantic Web, Expert Systems and XML). The Semantic Web ontology is considered as a set of directed graphs where each node represents an item and the edges denote a term which is related to another term. Organizations can now leverage their most valuable expert knowledge through a powerful interactive Web-enabled knowledge-automation expert system. Online sessions emulate a conversation with a human expert, asking focused questions and producing customized recommendations and advice. Hence, the main strength of the proposed expert system is that the skills of any domain expert will be available to everyone.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_27-Construction_of_Powerful_Online_Search_Expert_System_Based_on_Semantic_Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulating Cooperative Systems Applications: a New Complete Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041226</link>
        <id>10.14569/IJACSA.2013.041226</id>
        <doi>10.14569/IJACSA.2013.041226</doi>
        <lastModDate>2013-12-30T18:32:43.6930000+00:00</lastModDate>
        
        <creator>Dominique Gruyer</creator>
        
        <creator>S&#233;bastien Demmel</creator>
        
        <creator>Brigitte d’Andr&#233;a-Novel</creator>
        
        <creator>Gr&#233;goire S. Larue</creator>
        
        <creator>Andry Rakotonirainy</creator>
        
        <subject>Cooperative Systems; IEEE 802.11p; Inter-vehicular Communications; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>For a decade, embedded driving-assistance systems were mainly dedicated to the management of short-time events (lane departure, collision avoidance, collision mitigation). Recently, a great number of projects have focused on cooperative embedded devices in order to extend environment perception. Handling an extended perception range is important in order to provide enough information for both path-planning and co-pilot algorithms, which need to anticipate events. To carry out such applications, simulation has been widely used. Simulation is efficient for estimating the benefits of Cooperative Systems (CS) based on Inter-Vehicular Communications (IVC). This paper presents a new and modular architecture built with the SiVIC simulator and the RTMaps™ multi-sensor prototyping platform. A set of improvements, implemented in SiVIC, is introduced in order to take into account IVC modelling and vehicle control. These two aspects have been tuned with on-road measurements to improve the realism of the scenarios. The results obtained from a freeway emergency-braking scenario are discussed.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_26-Simulating_Cooperative_Systems_Applications_a_New_Complete_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pre-Eminance of Open Source Eda Tools and Its Types in The Arena of Commercial Electronics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041225</link>
        <id>10.14569/IJACSA.2013.041225</id>
        <doi>10.14569/IJACSA.2013.041225</doi>
        <lastModDate>2013-12-30T18:32:42.5670000+00:00</lastModDate>
        
        <creator>Geeta Yadav</creator>
        
        <creator>Neeraj Kr. Shukla</creator>
        
        <subject>Open source EDA; MentorGraphics; Cadence; Icarus Verilog; GTKwave viewer; Verilator; GHDL VHDL simulator;  gEDA; Linux; Github; VPI</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Digital synthesis with the goal of chip design in the commercial electronics arena is dominated by large EDA software providers such as Synopsys, Cadence, and MentorGraphics. These commercial tools are expensive and have closed file structures. They can also be a financial constraint for startup companies with limited budgets. Bug fixes or added features cannot be made with ease; in such a scenario a company is forced to opt for an alternative, cost-effective EDA software. This paper deals with the advantages of using open-source EDA tools such as Icarus Verilog, Verilator, the GTKWave viewer, the GHDL VHDL simulator, gEDA, etc., which are available free of charge, and focuses on the Icarus Verilog simulator tool. This can be seen as a big encouragement for startups in the semiconductor domain. Thereby, these open-source EDA tools make the design process more cost-effective, less time-consuming and affordable.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_25-Pre-Eminance_of_Open_Source_Eda_Tools_and_Its_Types_in_The_Arena_of_Commercial_Electronics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Achieving Regulatory Compliance for Data Protection in the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041224</link>
        <id>10.14569/IJACSA.2013.041224</id>
        <doi>10.14569/IJACSA.2013.041224</doi>
        <lastModDate>2013-12-30T18:32:41.4030000+00:00</lastModDate>
        
        <creator>Mark Rivis</creator>
        
        <creator>Shao Ying Zhu</creator>
        
        <subject>cloud computing; data protection legislation; Data Protection Act 1998; homomorphic encryption; data privacy; symmetric encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>The advent of cloud computing has enabled organizations to take advantage of cost-effective, scalable and reliable computing platforms.  However, entrusting data hosting to third parties has inherent risks.  Where the data in question can be used to identify living individuals in the UK, the Data Protection Act 1998 (DPA) must be adhered to.  In this case, adequate security controls must be in place to ensure privacy of the data.  Transgressions may be met with severe penalties.  This paper outlines the data controller’s obligations under the DPA and, with respect to cloud computing, presents solutions for possible encryption schemes.  Using traditional encryption can lead to key management challenges and limit the type of processing which the cloud service can fulfill.  Improving on this, the evolving area of homomorphic encryption is presented which promises to enable useful processing of data whilst it is encrypted.  Current approaches in this field have limited scope and an impractical processing overhead.  We conclude that organizations must thoroughly evaluate and manage the risks associated with processing personal data in the cloud.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_24-Achieving_Regulatory_Compliance_for_Data_Protection_in_the_Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Failure of E-government in Jordan to Fulfill Potential</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041223</link>
        <id>10.14569/IJACSA.2013.041223</id>
        <doi>10.14569/IJACSA.2013.041223</doi>
        <lastModDate>2013-12-30T18:32:40.2830000+00:00</lastModDate>
        
        <creator>Raed Kanaan</creator>
        
        <creator>Ghassan Kanaan</creator>
        
        <subject>e-government; grounded theory; Jordan; total failure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>The aim of this paper is to uncover the reasons behind the so-called total failure of the e-government project in Jordan. Reviewing the published papers in this context revealed that both citizens and employees do not understand the current status of this program. The majority of these papers measure the quality of e-services presented by e-government. However, according to the Minister of Communication and Information Technologies (MOCIT), only three e-services had been provided by this program as of the writing of this paper. Moreover, he decided to freeze the current work on the e-government programme. These facts drove the authors to conduct this research. A general review of the existing literature concerning e-government implementation in Jordan was conducted, and then qualitative research was used to uncover the reasons behind the failure of the e-government program in Jordan. The collected data were then analysed using Strauss and Corbin&#39;s method of grounded theory. This paper illustrates that the Jordanian government needs to exert strenuous efforts to move from the first stage of e-government implementation into an interactive one after fourteen years of launching the program, considering that only three e-services were available as of October 2013. Reasons behind the failure of e-government in Jordan have also been identified.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_23-The_Failure_of_E-government_in_Jordan_to_Fulfill_Potential.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model of an E-Learning Web Site for Teaching and Evaluating Online</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041222</link>
        <id>10.14569/IJACSA.2013.041222</id>
        <doi>10.14569/IJACSA.2013.041222</doi>
        <lastModDate>2013-12-30T18:32:39.0670000+00:00</lastModDate>
        
        <creator>Mohammed A. Amasha</creator>
        
        <creator>Salem Alkhalaf</creator>
        
        <subject>e-learning; evaluation; designing a website; university performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This research endeavors to design an e-learning web site for the course &quot;Object Oriented Programming&quot; (OOP) for level-four students of the Computer Science Department (CSD). The course is taught online (through the web), and a programme is then designed to evaluate students’ performance electronically, introducing a comparison between online teaching with e-evaluation and traditional methods of evaluation. The research also seeks to lay out a futuristic perception of how future online teaching and electronic evaluation should be, which highlights the importance of this research.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_22-A_Model_of_an_E-Learning_Web_Site_for_Teaching_and_Evaluating_Online.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey: Risk Assessment for Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041221</link>
        <id>10.14569/IJACSA.2013.041221</id>
        <doi>10.14569/IJACSA.2013.041221</doi>
        <lastModDate>2013-12-30T18:32:37.8630000+00:00</lastModDate>
        
        <creator>Drissi S.</creator>
        
        <creator>Houmani H.</creator>
        
        <creator>Medromi H.</creator>
        
        <subject>cloud computing; risk; risk assessment approach; survey; cloud consumers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>With the increase in the growth of cloud computing and the changes in technology that have resulted in new ways for cloud providers to deliver their services to cloud consumers, cloud consumers should be aware of the risks and vulnerabilities present in the current cloud computing environment. An information security risk assessment is designed specifically for that task. However, there is a lack of a structured risk assessment approach for doing so. This paper aims to survey existing knowledge regarding risk assessment for cloud computing, analyze existing use cases from cloud computing to identify the level of risk assessment realization in state-of-the-art systems, and identify emerging challenges for future research.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_21-Survey_Risk_Assessment_for_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Route Optimization in Network Mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041220</link>
        <id>10.14569/IJACSA.2013.041220</id>
        <doi>10.14569/IJACSA.2013.041220</doi>
        <lastModDate>2013-12-30T18:32:36.7300000+00:00</lastModDate>
        
        <creator>Md. Hasan Tareque</creator>
        
        <creator>Ahmed Shoeb Al Hasan</creator>
        
        <subject>Delegation; Hierarchical; Source Routing; BGP- assisted; Network Mobility; Route Optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>NEtwork MObility (NEMO) controls the mobility of a number of mobile nodes in a comprehensive way using one or more mobile routers. To choose a route optimization scheme, it is very important to have a quantitative comparison of the available route optimization schemes. The focus of this paper is to analyze the degree of Route Optimization (RO), the deployability, and the type of RO supported by each class in general. The comparison shows the differences among the schemes in terms of issues such as additional headers, signaling and memory requirements. We classify the schemes based on the basic method used for route optimization, and compare the schemes based on protocol overhead, such as header overhead, amount of signaling, and memory requirements. Lastly, the performance of the classes of different schemes is estimated under criteria such as available bandwidth, topology of the mobile network and mobility type.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_20-Route_Optimization_in_Network_Mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Objects by Using Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041219</link>
        <id>10.14569/IJACSA.2013.041219</id>
        <doi>10.14569/IJACSA.2013.041219</doi>
        <lastModDate>2013-12-30T18:32:35.6170000+00:00</lastModDate>
        
        <creator>Nerses Safaryan</creator>
        
        <creator>Hakob Sarukhanyan</creator>
        
        <subject>Terminals; Fitness; Selection; Crossover; Mutation; Ground Truth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This paper is devoted to the task of object detection and recognition in digital images using genetic programming. The goal was to improve and simplify existing approaches. Detection and recognition are achieved by means of feature extraction: a genetic program is used to extract and classify the features of objects, and simple features and primitive operators are processed in the genetic programming operations. We aim to detect and recognize objects in SAR images. With the new approach described in this article, five and seven types of objects were recognized with good recognition results.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_19-Recognition_of_Objects_by_Using_Genetic_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Consumer Acceptance of Location Based Services in the Retail Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041218</link>
        <id>10.14569/IJACSA.2013.041218</id>
        <doi>10.14569/IJACSA.2013.041218</doi>
        <lastModDate>2013-12-30T18:32:34.5130000+00:00</lastModDate>
        
        <creator>Iris Uitz</creator>
        
        <creator>Roxane Koitz</creator>
        
        <subject>Location Based Services; Usability-Testing; Apps; Shopping-Apps; Mobile; Consumer Acceptance; Mobile Marketing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Smartphones have become a commodity item. In combination with their seemingly infinite extensions through mobile applications, they hold great economic potential for businesses. Location Based Services (LBS) take advantage of their portability by providing relevant information to the user regarding their location. Utilizing the user’s position to create personalized location-specific marketing messages enables businesses to yield value for their customers. The main objective of this paper is to identify factors, which influence the acceptance of LBS apps in the context of retail since there is a lack of research in this field. A qualitative research approach was chosen to investigate the relevant variables. Based on the conducted interviews, theories were derived and verified against further data retrievals. Similar to findings of previous research the factors ease of use (usability) and usefulness were confirmed as being crucial in forming consumers’ attitudes.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_18-Consumer_Acceptance_of_Location_Based_Services_in_the_Retail_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Internet Forensics Framework Based-on Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041217</link>
        <id>10.14569/IJACSA.2013.041217</id>
        <doi>10.14569/IJACSA.2013.041217</doi>
        <lastModDate>2013-12-30T18:32:33.2300000+00:00</lastModDate>
        
        <creator>Imam Riadi</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Subanar</creator>
        
        <subject>framework; forensics; Internet; log; clustering; Denial of Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Internet network attacks are complicated and worth studying. Such attacks include Denial of Service (DoS). DoS attacks exploit vulnerabilities found in operating systems, network services and applications. An indicator of a DoS attack is that legitimate users cannot access the system. This paper proposes a framework for Internet forensics based on logs that aims to assist the investigation process in revealing DoS attacks. The framework in this study consists of several steps, among others: logging into a text file and a database, as well as identifying an attack based on the packet header length. After the identification process, logs are grouped using the k-means clustering algorithm into three levels of attack (dangerous, rather dangerous and not dangerous) based on the port numbers and TCP flags of the packets. Based on the test results, the proposed framework can group attacks into three levels and found the attackers with a success rate of 89.02%, so it can be concluded that the proposed framework meets the goals set in this research.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_17-Internet_Forensics_Framework_Based-on_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Timetabling Using Stochastic Free-Context Grammar Based on Influence-Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041216</link>
        <id>10.14569/IJACSA.2013.041216</id>
        <doi>10.14569/IJACSA.2013.041216</doi>
        <lastModDate>2013-12-30T18:32:32.1300000+00:00</lastModDate>
        
        <creator>Hany Mahgoub</creator>
        
        <creator>Mohamed Altaher</creator>
        
        <subject>Heuristic Search; Automated Timetabling; Stochastic Context-Free Grammar; Influence Map</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This paper presents a new system that solves the problem of finding a suitable class schedule using a strongly-typed heuristic search technique. The system is called Automated Timetabling Solver (ATTSolver). The system uses Stochastic Context-Free Grammar rules to build schedules and makes use of influence maps to assign the fittest slot (place &amp; time) for each lecture in the timetable. This system is very useful when there is a need to find a valid, diverse, suitable and on-the-fly timetable that takes into account the soft constraints imposed by the user of the system. The performance of the proposed system is compared with the aSc system in terms of the number of tested schedules and the execution time. The results show that the number of tested schedules in the proposed system is always less than that in the aSc system. Moreover, the execution time of the proposed system is much better than that of the aSc system in all cases of the sequential runs.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_16-Automated_Timetabling_Using_Stochastic_Free-Context_Grammar_Based_on_Influence-Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A particle swarm optimization algorithm for the continuous absolute p-center location problem with Euclidean distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041215</link>
        <id>10.14569/IJACSA.2013.041215</id>
        <doi>10.14569/IJACSA.2013.041215</doi>
        <lastModDate>2013-12-30T18:32:31.0070000+00:00</lastModDate>
        
        <creator>Hassan M. Rabie</creator>
        
        <creator>Ihab A. El-Khodary</creator>
        
        <creator>Assem A. Tharwat</creator>
        
        <subject>absolute p-center; location problem; particle swarm optimization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>The p-center location problem is concerned with determining the locations of p centers in a plane/space to serve n demand points having fixed locations. The continuous absolute p-center location problem attempts to locate facilities anywhere in a space/plane with Euclidean distance, seeking to locate p facilities so that the maximum Euclidean distance to a set of n demand points is minimized. A particle swarm optimization (PSO) algorithm previously proposed for the solution of the absolute p-center problem on a network is extended here to solve the absolute p-center problem in a space/plane with Euclidean distance. In this paper we develop a PSO algorithm, called “PSO-ED”, for the continuous absolute p-center location problem that minimizes the maximum Euclidean distance from each customer to his/her nearest facility. This problem is proven to be NP-hard. We tested the proposed algorithm “PSO-ED” on a set of 2D and 3D problems and compared the results with a branch-and-bound algorithm. The numerical experiments show that the PSO-ED algorithm can optimally solve location problems with Euclidean distance involving up to 1,904,711 points.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_15-A_particle_swarm_optimization_algorithm_for_the_continuous_absolute_p-center_location_problem_with_Euclidean_distance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Evaluation of Spatial Multi Interaction Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041214</link>
        <id>10.14569/IJACSA.2013.041214</id>
        <doi>10.14569/IJACSA.2013.041214</doi>
        <lastModDate>2013-12-30T18:32:29.8630000+00:00</lastModDate>
        
        <creator>Chang Ok Yun</creator>
        
        <creator>Tae Soo Yun</creator>
        
        <creator>YoSeph Choi</creator>
        
        <subject>Interactive display; Ambient environment; Interaction surface; Spatial interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Nowadays, interactive displays are capable of offering a great variety of interactions to users thanks to advances in ubiquitous computing technologies. Although many methods of interaction have been researched, the usability of these devices is still limited and they are offered only to a single user at a time. This paper proposes a spatial multi-interaction interface that can provide various interactions to many users in an ambient environment. An interaction surface is created for users to interact with through an IR-LED array bar. The coordinate information of the hand is extracted by detecting the area of a user’s hand on the interaction surface. Users can then experience various interactions through “spatial touches” on the interaction surface. In our paper, a usability evaluation is carried out for our new interface, with emphasis on the interaction interface. The usability of our new interface is shown to be significantly better through statistical testing using t-tests. Finally, users can perform various interactions with natural hand motions alone, without the aid of devices that have to be operated manually.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_14-Design_and_Evaluation_of_Spatial_Multi_Interaction_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Loop Modeling Forward and Feedback Analysis in Cerebral Arteriovenous Malformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041213</link>
        <id>10.14569/IJACSA.2013.041213</id>
        <doi>10.14569/IJACSA.2013.041213</doi>
        <lastModDate>2013-12-30T18:32:28.7100000+00:00</lastModDate>
        
        <creator>Y. Kiran Kumar</creator>
        
        <creator>Shashi B. Mehta</creator>
        
        <creator>Manjunath Ramachandra</creator>
        
        <subject>Vessel Loops; AVM; Lumped Model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Cerebral Arteriovenous Malformation (CAVM) hemodynamics in disease conditions result in changes in the flow and pressure levels in blood vessels. A CAVM is an abnormal shunting of vessels between arteries and veins, and it is one of the most common brain disorders. In general, blood in the cerebral region flows from arteries to veins through the capillary bed. A malformation can cause rupture or decreased blood supply to the tissue through the capillaries, causing infarcts. Measuring flow and pressure along the vessel without intervention is a big challenge due to the loop structures of feedback and forward flows in Arteriovenous Malformation patients. This paper focuses on the creation of a new electrical model for spiral loop structures that simulates the pressure at various locations of the complex CAVM blood vessels. We propose a lumped model for the spiral loops in CAVM structures that will help doctors obtain pressure and velocity measurements non-invasively, supporting diagnostic and treatment planning.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_13-Loop_Modeling_Forward_and_Feedback_Analysis_in_Cerebral_Arteriovenous_Malformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Phenomenon of Enterprise Systems in Higher Education: Insights From Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041212</link>
        <id>10.14569/IJACSA.2013.041212</id>
        <doi>10.14569/IJACSA.2013.041212</doi>
        <lastModDate>2013-12-30T18:32:27.6030000+00:00</lastModDate>
        
        <creator>Ahed Abugabah</creator>
        
        <creator>Louis Sansogni</creator>
        
        <creator>Osama Abdulaziz Alfarraj</creator>
        
        <subject>ERP systems; user performance; system quality; higher education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Higher education has been strongly influenced by global trends to adopt new technologies. There have been calls by governments for universities worldwide to improve their performance and efficiency. In response, higher education institutions have turned to Enterprise Resource Planning (ERP) systems in order to cope with the changing environment and overcome the limitations of legacy systems, as a means of integration and performance improvement. However, the failure rate of ERP implementations is high, and debate still exists regarding the various contributions of ERP systems to performance, especially at the user level, where the core values of information systems are represented and the actual benefits and impacts are created.
As a consequence, this study evaluates the impacts of ERP systems on user performance in higher education institutions, with a view to better understanding the ERP phenomenon in these institutions and determining whether or not these systems work well in such a complex environment. The study also developed and statistically validated a new model suggesting a more inclusive view for examining ERP utilization and impacts, combining the key ideas of three well-known and widely used information systems models.
</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_12-The_Phenomenon_of_Enterprise_Systems_in_Higher_EducationInsights_From_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The cybercrime process: an overview of scientific challenges and methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041211</link>
        <id>10.14569/IJACSA.2013.041211</id>
        <doi>10.14569/IJACSA.2013.041211</doi>
        <lastModDate>2013-12-30T18:32:26.4570000+00:00</lastModDate>
        
        <creator>Patrick Lallement</creator>
        
        <subject>cybercrime; detection; forensic analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>The aim of this article is to describe the cybercrime process and to identify the issues that appear at its different steps, from the detection of an incident to the final report, which must be exploitable by a judge, together with methods to address them.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_11-The_cybercrime_process_an_overview_of_scientific_challenges_and_methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Image-Based Model For Predicting Cracks In Sewer Pipes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041210</link>
        <id>10.14569/IJACSA.2013.041210</id>
        <doi>10.14569/IJACSA.2013.041210</doi>
        <lastModDate>2013-12-30T18:32:25.1700000+00:00</lastModDate>
        
        <creator>Iraky Khalifa</creator>
        
        <creator>Amal Elsayed Aboutabl</creator>
        
        <creator>Gamal Sayed AbdelAziz Barakat</creator>
        
        <subject>Visual inspection; Sewer pipes; Canny algorithm; Crack detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Visual inspection by a human operator has mostly been used until now to detect cracks in sewer pipes. In this paper, we address the problem of automated detection of such cracks. We propose a model which detects crack fractures that may occur in weak areas of a network of pipes. The model also predicts the level of dangerousness of the detected cracks among five crack levels. We evaluate our results by comparing them with those obtained by using the Canny algorithm. The accuracy of this model exceeds 90% and outperforms other approaches.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_10-A_New_Image-Based_Model_For_Predicting_Cracks_In_Sewer_Pipes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Simulation Method of New HV Power Supply for Industrial Microwave Generators with N=2 Magnetrons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041209</link>
        <id>10.14569/IJACSA.2013.041209</id>
        <doi>10.14569/IJACSA.2013.041209</doi>
        <lastModDate>2013-12-30T18:32:24.0230000+00:00</lastModDate>
        
        <creator>N. El Ghazal</creator>
        
        <creator>A. Belhaiba</creator>
        
        <creator>M. Chraygane</creator>
        
        <creator>B. Bahani</creator>
        
        <creator>M. Ferfra</creator>
        
        <subject>Modeling; New Power Supply; Magnetron; Microwave Generator; Matlab-Simulink; High Voltage (HV)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This original work presents a new simulation method for a new type of high-voltage power supply for microwave generators with N magnetrons (treated case: N=2 magnetrons), used as a source of energy in industrial applications. This new power supply is composed of a single-phase HV transformer with magnetic leakage flux supplying two parallel voltage-doubling, current-stabilizing cells; each doubler supplies one magnetron. The transformer is represented by its π equivalent circuit. Each inductance of the model is characterized by its &quot;flux-current&quot; relation. In this paper, we present a new approach to validating the π model of this special transformer using Matlab-Simulink. The theoretical results are in good agreement with the experimental measurements. The use of Matlab-Simulink has allowed us to confirm that this new system can operate without interaction between the magnetrons, with a view to a possible optimization that would reduce the weight, volume and cost of implementation while ensuring the process of regulating the current in each magnetron.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_9-New_Simulation_Method_of_New_HV_Power_Supply_for_Industrial_Microwave_Generators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FIR Filter Design Using the Signed-Digit Number System and Carry Save Adders – A Comparison</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041208</link>
        <id>10.14569/IJACSA.2013.041208</id>
        <doi>10.14569/IJACSA.2013.041208</doi>
        <lastModDate>2013-12-30T18:32:22.9370000+00:00</lastModDate>
        
        <creator>Hesham Altwaijry</creator>
        
        <creator>Yasser Mohammad Seddiq</creator>
        
        <subject>FIR Filters; Signed Digit; Carry-Save ; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>This work looks at optimizing finite impulse response (FIR) filters from an arithmetic perspective. Since the two main arithmetic operations in the convolution equations are addition and multiplication, they are the targets of the optimization. Considering carry-propagate-free addition techniques should therefore enhance the addition operation of the filter. The signed-digit number system is utilized to speed up addition in the filter. An alternative carry-propagate-free fast adder, the carry-save adder, is also used here to compare its performance to the signed-digit adder. For multiplication, Booth encoding is used to reduce the number of partial products. The two filters are modeled in VHDL, synthesized, and placed-and-routed. The filters are deployed on a development board to filter digital images. The resultant hardware is analyzed for speed and logic utilization.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_8-Fir_Filter_Design_Using_The_Signed-Digit_Number_System_and_Carry_Save_Adders.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Performance Color Image Processing in Multicore CPU using MFC Multithreading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041207</link>
        <id>10.14569/IJACSA.2013.041207</id>
        <doi>10.14569/IJACSA.2013.041207</doi>
        <lastModDate>2013-12-30T18:32:21.8230000+00:00</lastModDate>
        
        <creator>Anandhanarayanan Kamalakannan</creator>
        
        <creator>Govindaraj Rajamanickam</creator>
        
        <subject>Color image; fuzzy contrast intensification; edge detection; lock-free multithreading; MFC thread; block-data; multicore programming </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Image processing is an engineering field where stored image data is readily available for parallel processing. Data processing algorithms developed in a sequential approach are not capable of harnessing the computing power of the individual cores present in a single-chip multicore processor. To utilize the multicore processor efficiently on the Windows platform for color image processing applications, a lock-free multithreading approach was developed using Visual C++ with Microsoft Foundation Class (MFC) support. This approach distributes the image data processing task on a multicore Central Processing Unit (CPU) without using a parallel programming framework like Open Multi-Processing (OpenMP) and reduces algorithm execution time. In image processing, each pixel is processed using the same set of high-level instructions, which is time consuming. Therefore, to increase the processing speed of the algorithm on a multicore CPU, the entire image data is partitioned into equal blocks and a copy of the algorithm is applied to each block using a separate worker thread. In this paper, multithreaded color image processing algorithms, namely contrast enhancement using a fuzzy technique and edge detection, were implemented. Both algorithms were tested on an Intel Core i5 quad-core processor with ten different images of varying pixel size, and their performance results are presented. A maximum computing performance improvement of 71% and a speedup of about 3.4 times over the sequential approach were obtained for large-size images using the four-thread model.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_7-High_Performance_Color_Image_Processing_in_Multicore_CPU_using_MFC_Multithreading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anonymous Broadcast Messages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041206</link>
        <id>10.14569/IJACSA.2013.041206</id>
        <doi>10.14569/IJACSA.2013.041206</doi>
        <lastModDate>2013-12-30T18:32:20.7100000+00:00</lastModDate>
        
        <creator>Dragan Lazic</creator>
        
        <creator>Charlie Obimbo</creator>
        
        <subject>Dining Cryptographer network; Privacy; sender-untraceability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>The Dining Cryptographer network (or DC-net) is a privacy-preserving communication protocol devised by David Chaum for anonymous message publication. A very attractive feature of DC-nets is the strength of their security, which is inherent in the protocol and is not dependent on other schemes, like encryption. Unfortunately, the DC-net protocol has a level of complexity that causes it to suffer from exceptional communication overhead and implementation difficulty, which precludes its use in many real-world use-cases. We have designed and created a DC-net implementation that uses a pure client-server model, which successfully avoids much of the complexity inherent in the DC-net protocol. We describe the theory of DC-nets and our pure client-server implementation, as well as the compromises that were made to reduce the protocol’s level of complexity. Discussion centers on the details of our implementation of DC-net.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_6-Anonymous_Broadcast_Messages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building an Artificial Idiotopic Immune Model Based on Artificial Neural Network Ideology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041205</link>
        <id>10.14569/IJACSA.2013.041205</id>
        <doi>10.14569/IJACSA.2013.041205</doi>
        <lastModDate>2013-12-30T18:32:19.5630000+00:00</lastModDate>
        
        <creator>Hossam Meshref</creator>
        
        <subject>Artificial Immune Systems; Artificial Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>In the literature, many research efforts have utilized artificial immune networks to model their designed applications, but these models were considerably complicated and restricted to a few areas, such as computer security applications. The objective of this research is to introduce a new model for artificial immune networks that adopts features from other successful biological models, such as artificial neural networks, to overcome this complexity. Common concepts between the two systems were investigated to design a simple, yet robust, model of artificial immune networks. Three artificial neural network learning models were available to choose from in the research design: supervised, unsupervised, and reinforcement learning. The reinforcement model was found to be the most suitable. Research results examined network parameters and the appropriate relations between concentration ranges and their dependent parameters, as well as the expected reward during network learning. In conclusion, we recommend the use of the designed model by other researchers in different applications, such as controlling robots in hazardous environments to save human lives, as well as in image retrieval in general to help police departments identify suspects.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_5-Building_an_Artificial_Idiotopic_Immune_Model_Based_on_Artificial_Neural_Network_Ideology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color, texture and shape descriptor fusion with Bayesian network classifier for automatic image annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041204</link>
        <id>10.14569/IJACSA.2013.041204</id>
        <doi>10.14569/IJACSA.2013.041204</doi>
        <lastModDate>2013-12-30T18:32:18.4400000+00:00</lastModDate>
        
        <creator>Mustapha OUJAOURA</creator>
        
        <creator>Brahim MINAOUI</creator>
        
        <creator>Mohammed FAKIR</creator>
        
        <subject>image annotation; k-means segmentation; Bayesian networks; color histograms; Legendre moments; Texture; ETH-80 database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Due to the large amounts of multimedia data prevalent on the Web, some images present textural motifs while others may be recognized by the colors or shapes of their content. The use of descriptors based on a single feature extraction method, such as color, texture or shape, for automatic image annotation is not efficient in some situations or in the absence of the chosen feature type. The proposed approach is to fuse some efficient color, texture and shape descriptors with a Bayesian network classifier to allow automatic annotation of different image types. This document presents an automatic image annotation system that merges these descriptors in a parallel manner to obtain a vector that represents the various types of image characteristics, which increases the rate and accuracy of the annotation system. Texture features, color histograms and Legendre moments are merged in parallel as the texture, color and shape feature extraction methods, respectively, and used with a Bayesian network classifier to annotate the image content with the appropriate keywords. The accuracy of the proposed approach is supported by the good experimental results obtained on the ETH-80 database.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_4-Color_texture_and_shape_descriptor_fusion_with_Bayesian_network_classifier_for_automatic_image_annotation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantum Cost Optimization for Reversible Sequential Circuit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041203</link>
        <id>10.14569/IJACSA.2013.041203</id>
        <doi>10.14569/IJACSA.2013.041203</doi>
        <lastModDate>2013-12-30T18:32:17.3330000+00:00</lastModDate>
        
        <creator>Md. Selim Al Mamun</creator>
        
        <creator>David Menville</creator>
        
        <subject>Flip-flop; Garbage Output; Reversible Logic; Quantum Cost</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Reversible sequential circuits are going to be significant memory blocks for forthcoming computing devices due to their ultra-low power consumption. Therefore, the design of various types of latches has been a major objective for researchers for quite a long time. In this paper we propose efficient designs of reversible sequential circuits that are optimized in terms of quantum cost, delay and garbage outputs. For this we propose a new 3×3 reversible gate called the SAM gate, and we then design efficient sequential circuits using the SAM gate along with some of the basic reversible logic gates.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_3-Quantum_Cost_Optimization_for_Reversible_Sequential_Circuit.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attacking Misaligned Power Tracks Using Fourth-Order Cumulant</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041202</link>
        <id>10.14569/IJACSA.2013.041202</id>
        <doi>10.14569/IJACSA.2013.041202</doi>
        <lastModDate>2013-12-30T18:32:16.2130000+00:00</lastModDate>
        
        <creator>Eng. Mustafa M. Shiple</creator>
        
        <creator>Prof. Dr. Iman S. Ashour</creator>
        
        <creator>Prof. Dr. Abdelhady A. Ammar</creator>
        
        <subject>Correlation power analysis (CPA); Differential power analysis (DPA); side channel attack; FPGA; AES; cryptography; cipher; fourth-order cumulant; Gaussian noise; higher order statistics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Side channel attacks (SCA) use leaked confidential data to reveal the cipher key. The power consumption, electromagnetic emissions, and operation timing of cryptographic hardware are examples of measurable parameters affected by internal confidential data. To prevent such attacks, SCA countermeasures are implemented. Misaligning power tracks is a considerable countermeasure which directly affects the effectiveness of SCA. Added to that, SCA suffer from numerous kinds of noise problems. This paper proposes fourth-order cumulant analysis as a preprocessing step to align power tracks dynamically and partially. Moreover, this paper illustrates that the proposed analysis can efficiently deal with Gaussian noise and misaligned tracks through a comprehensive analysis of an AES 128-bit block cipher.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_2-Attacking_Misaligned_Power_Tracks_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalizing of Content Dissemination in Online Social Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041201</link>
        <id>10.14569/IJACSA.2013.041201</id>
        <doi>10.14569/IJACSA.2013.041201</doi>
        <lastModDate>2013-12-30T18:32:14.0800000+00:00</lastModDate>
        
        <creator>Abeer ElKorany</creator>
        
        <creator>Khaled ElBahnasy</creator>
        
        <subject>social network; content similarity measurement; Information retrieval; Information dissemination</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(12), 2013</description>
        <description>Online social networks have seen rapid growth in recent years. A key aspect of many such networks is that they are rich in content and social interactions. Users of social networks connect with each other and form their own communities. With the evolution of huge communities hosted by such websites, users suffer from information overload and it has become hard to extract useful information. Thus, users need a mechanism to filter the online social streams they receive, as well as to interact with the most similar users. In this paper, we address the problem of personalizing the dissemination of relevant information in a knowledge-sharing social network. The proposed framework identifies the most appropriate user(s) to receive a specific post by calculating the similarity between the target user and others. Similarity between users within an OSN is calculated based on users’ social activity, which integrates the content they publish as well as their social patterns. Application of this framework to a representative subset of a large real-world social network, the user/community network of the service Stack Overflow, is illustrated here. Experiments show that the proposed model outperforms traditional similarity methods.</description>
        <description>http://thesai.org/Downloads/Volume4No12/Paper_1-Personalizing_of_Content_Dissemination_in_Online_Social_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modifying the IEEE 802.11 MAC to improve performance of multiple broadcasting of multimedia data in wireless ad-hoc networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030409</link>
        <id>10.14569/SpecialIssue.2013.030409</id>
        <doi>10.14569/SpecialIssue.2013.030409</doi>
        <lastModDate>2013-12-26T18:19:22.8100000+00:00</lastModDate>
        
        <creator>Christos Chousidis</creator>
        
        <creator>Rajagopal Nilavalan</creator>
        
        <subject>wireless networks; broadcasting; multimedia; linear increase of CW; EBNA; CTS-to-Self</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>Multimedia applications over wireless networks have increased dramatically over the past years. Numerous new devices and applications that distribute audio and video over wireless networks are introduced every day, and all of them demand a reliable and efficient wireless standard. Whether operating as independent ad-hoc networks or as terminal parts of wired networks or the Internet, wireless networks frequently face the need to broadcast multimedia data from multiple sources to multiple users. The IEEE 802.11 standard (Wi-Fi) is the primary technology in wireless networking today. However, it has some inherent problems when it comes to broadcasting, caused mainly by the lack of an acknowledgment mechanism. These problems do not allow the standard to take full advantage of the bandwidth offered by its latest amendments. In this paper, two independent modifications of the medium access control (MAC) mechanism of the standard are proposed, along with the expanded use of the CTS-to-Self protection mechanism. The main objective of this study is to explore the ability of the modified MAC mechanisms to improve broadcasting performance while operating in conjunction with a regular wireless network, and also to define the cases where the use of the CTS-to-Self protection mechanism can improve the overall performance of the network. The results show that the overall performance can be improved using these alternative MAC methods. The cases where the CTS-to-Self technique can additionally contribute to network performance are also defined and analyzed.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_9-Modifying_the_IEEE_802.11_MAC_to_improve_performance_of_multiple_broadcasting_of_multimedia_data_in_wireless_ad-hoc_networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advancing Research Infrastructure Using OpenStack</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030408</link>
        <id>10.14569/SpecialIssue.2013.030408</id>
        <doi>10.14569/SpecialIssue.2013.030408</doi>
        <lastModDate>2013-12-25T18:44:00.6800000+00:00</lastModDate>
        
        <creator>Ibad Kureshi</creator>
        
        <creator>Carl Pulley</creator>
        
        <creator>John Brennan</creator>
        
        <creator>Violeta Holmes</creator>
        
        <creator>Stephen Bonner</creator>
        
        <creator>Yvonne James</creator>
        
        <subject></subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>Cloud computing, which evolved from grid computing, virtualisation and automation, has the potential to deliver a variety of services to the end user via the Internet. Using the Web to deliver Infrastructure, Software and Platform as a Service (IaaS/SaaS/PaaS) has the benefit of reducing the cost of investment in the internal resources of an organisation. It also provides greater flexibility and scalability in the utilisation of the resources. There are different cloud deployment models: public, private, community and hybrid clouds. This paper presents the results of research and development work in deploying a private cloud using OpenStack at the University of Huddersfield, UK, integrated into the University campus Grid QGG. The aim of our research is to use a private cloud to improve the High Performance Computing (HPC) research infrastructure. This will lead to a flexible and scalable resource for research, teaching and assessment. As a result of our work we have deployed the private QGG-cloud and devised a decision matrix and the mechanisms required to expand HPC clusters into the cloud, maximising the resource utilisation efficiency of the cloud. As part of the teaching and assessment of computing courses, an Automated Formative Assessment (AFA) system was implemented in the QGG-Cloud. The system utilises the cloud’s flexibility and scalability to assign and reconfigure the required resources for different tasks in the AFA. Furthermore, the throughput characteristics of assessment workflows were investigated and analysed so that the requirements for cloud-based provisioning can be adequately made.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_8-Advancing_Research_Infrastructure_Using_OpenStack.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lightweight Symmetric Encryption Algorithm for Secure Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030407</link>
        <id>10.14569/SpecialIssue.2013.030407</id>
        <doi>10.14569/SpecialIssue.2013.030407</doi>
        <lastModDate>2013-12-25T18:43:55.0400000+00:00</lastModDate>
        
        <creator>Hanan A. Al-Souly</creator>
        
        <creator>Abeer S. Al-Sheddi</creator>
        
        <creator>Heba A. Kurdi</creator>
        
        <subject>Encryption; Security; Protection; Transposition; Substitution; Folding; Shifting</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>Virtually all of today’s organizations store their data in huge databases to retrieve, manipulate and share them in an efficient way. Due to the popularity of databases for storing important and critical data, they are becoming subject to an overwhelming range of threats, such as unauthorized access. Such a threat can result in severe financial or privacy problems, as well as other corruptions. To tackle possible threats, numerous security mechanisms have emerged to protect data housed in databases. Among the most successful database security mechanisms is database encryption. This has the potential to secure the data at rest by converting the data into a form that cannot be easily understood by unauthorized persons. Many encryption algorithms have been proposed, such as Transposition-Substitution-Folding-Shifting encryption algorithm (TSFS), Data Encryption Standard (DES), and Advanced Encryption Standard (AES) algorithms. Each algorithm has advantages and disadvantages, leaving room for optimization in different ways. This paper proposes enhancing the TSFS algorithm by extending its data set to special characters, as well as correcting its substitution and shifting steps to avoid the errors occurring during the decryption process. Experimental results demonstrate the superiority of the proposed algorithm, as it has outperformed the well-established benchmark algorithms, DES and AES, in terms of query execution time and database added size.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_7-Lightweight_Symmetric_Encryption_Algorithm_for_Secure_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strategic Analysis towards the Formulation of Micro Sourcing Strategic Trusts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030406</link>
        <id>10.14569/SpecialIssue.2013.030406</id>
        <doi>10.14569/SpecialIssue.2013.030406</doi>
        <lastModDate>2013-12-25T18:43:49.0530000+00:00</lastModDate>
        
        <creator>Noor Habibah Arshad</creator>
        
        <creator>Siti Salwa Salleh</creator>
        
        <creator>Syaripah Ruzaini Syed Aris</creator>
        
        <creator>Norjansalika Janom</creator>
        
        <subject>micro workers; job providers; crowd sourcing; B40 group; SWOT analysis; Gap analysis </subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>The Malaysian government, realising its responsibility to upgrade the quality of life, has identified the micro sourcing industry as one of the potential industries to elevate the livelihoods of the poor, especially the B40 group. The B40 in Malaysia is defined as households with an income level of less than RM 2,300 per month. The huge potential impact of the micro sourcing industry provides the motivation for this research. In determining the best way for Malaysia to implement the micro sourcing industry using the available resources, a strategic analysis was conducted, using tools such as SWOT and Gap analysis. Thus, the objective of the paper is to develop a full awareness of the situation through Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis. In order to determine the factors that define the current state, gap analysis was used to list the factors needed to reach the target state and to fill the gap between these two states. These analyses help with both strategic planning and decision-making. Workshops were held to gather information from stakeholders and to discuss the internal Strengths and Weaknesses and the external Opportunities and Threats of micro sourcing. The discussions reveal the gap between where we are and where we want to be, and also reveal areas that must improve to meet the micro sourcing goals. The findings from the SWOT and Gap analyses provide perspective, reveal connections and areas for action, and identify deficiencies. The analyses also build on the strengths, minimize the weaknesses, seize opportunities, counteract threats and fine-tune processes. Finally, micro sourcing strategic trusts will be formulated.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_6-Strategic_Analysis_towards_the_Formulation_of_Micro_Sourcing_Strategic_Trusts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sociomaterial analysis of Music Notation Lessons: Virtual work and digital materialities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030405</link>
        <id>10.14569/SpecialIssue.2013.030405</id>
        <doi>10.14569/SpecialIssue.2013.030405</doi>
        <lastModDate>2013-12-25T18:43:45.9000000+00:00</lastModDate>
        
        <creator>Demosthenes Akoumianakis</creator>
        
        <subject>Virtual work; affordances; design qualities; case study research</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>The present research rests and elaborates on sociomaterial aspects of virtual practices, as manifested through distributed and collaborative work. This is approached through an interpretive case study of music notation lessons (MNLs) using the DIAMOUSES system. Our empirical data suggest that sociomateriality shifts the focus of designing interactive technologies from mere considerations of digital manifestation (i.e., forms of representation) towards explicit accounts of the representational practices (i.e., the particular material properties of these forms) and the quality attributes to be embedded in technology.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_5-Sociomaterial_analysis_of_Music_Notation_Lessons_Virtual_work_and_digital_materialities_.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating a Domain Specific Checklist through an Adaptive Framework for Evaluating Social Networking Websites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030404</link>
        <id>10.14569/SpecialIssue.2013.030404</id>
        <doi>10.14569/SpecialIssue.2013.030404</doi>
        <lastModDate>2013-12-25T18:43:39.9470000+00:00</lastModDate>
        
        <creator>Roobaea AlRoobaea</creator>
        
        <creator>Ali H. Al-Badi</creator>
        
        <creator>Pam J. Mayhew</creator>
        
        <subject>Heuristic evaluation (HE); User Testing (UT); Domain Specific Inspection (DSI); social networks domain; social networks checklist</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites and applications that are growing rapidly in use and that have had a great impact on many businesses. These websites need to be continuously evaluated and monitored to measure their efficiency and effectiveness, to assess user satisfaction, and ultimately to improve their quality. The lack of an adaptive usability evaluation checklist for improving the usability assessment process for social network sites (SNSs) represents a missing piece in usability testing. This paper presents an adaptive Domain Specific Inspection (DSI) checklist as a tool for evaluating the usability of SNSs. The results show that the adaptive social network usability checklist helped evaluators to facilitate the evaluation process, and it helped website owners to choose the specific-context usability areas that they feel are important to their usability evaluations. Moreover, it was more efficient and effective than user testing and heuristic evaluation methods.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_4-Generating_a_Domain_Specific_Checklist_through_an_Adaptive_Framework_for_Evaluating_Social_Networking_Websites.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Homophily in Social Network: Identification of Flow of Inspiring Influence under New Vistas of Evolutionary Dynamics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030403</link>
        <id>10.14569/SpecialIssue.2013.030403</id>
        <doi>10.14569/SpecialIssue.2013.030403</doi>
        <lastModDate>2013-12-25T18:43:36.7200000+00:00</lastModDate>
        
        <creator>Hameed Al-Qaheri</creator>
        
        <creator>Soumya Banerjee</creator>
        
        <subject>Homophily; Affiliation; Embeddedness; Betweenness; Graph occupancy; Evolutionary dynamics </subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>Interaction with different persons leads to different kinds of ideas, sharing, and nourishing effects that might influence others to believe, trust, or even join some association and subsequently become a member of that community, enabling them to enjoy all kinds of social privileges. These concepts of grouping similar objects can be experienced, as well as implemented, on any social network. The concept of homophily can assist in designing the affiliation graph (of similar and closely similar entities) of every member of any social network, thus identifying the most popular community. In this paper we propose and discuss three-tier data-mining algorithms for a social network and evolutionary dynamics from a graph-properties perspective (embeddedness, betweenness and graph occupancy). A novel contribution is made in the proposal by incorporating the principle of evolutionary dynamics to investigate the graph properties. The work has also been extended towards certain specific introspection into the distribution of the impact and incentives of evolutionary algorithms for social-network-based events. The experiments demonstrate the interplay between on-line strategies and social network occupancy to maximize individual profit levels.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_3-Measuring_Homophily_in_Social_Network_Identification_of_Flow_of_Inspiring_Influence_under_New_Vistas_of_Evolutionary_Dynamics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Socially Driven, Goal-Oriented Approach to Business Process Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030402</link>
        <id>10.14569/SpecialIssue.2013.030402</id>
        <doi>10.14569/SpecialIssue.2013.030402</doi>
        <lastModDate>2013-12-25T18:43:30.7870000+00:00</lastModDate>
        
        <creator>Mohammad Ehson Rangiha</creator>
        
        <creator>Bill Karakostas</creator>
        
        <subject>BPM; Social BPM; Goal-Based Modeling; Social Goals; Process Goals</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>Over recent years, there has been much discussion about the concept of Social Business Process Management (SBPM) and how it is able to overcome some of the limitations of traditional BPM systems. This paper aims to address gaps in social BPM research by working towards a goal-driven SBPM meta-model that seamlessly integrates the process design and enactment stages. This approach also makes use of a process recommendation system to guide the activities of the users based on their social behavior and social goals. We argue that this approach will lead to truly socially driven process enactment environments.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_2-A_Socially_Driven_Goal-Oriented_Approach_to_Business_Process_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Approximate SER for M-PSK using MRC and STTD Techniques over Fading Channels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030401</link>
        <id>10.14569/SpecialIssue.2013.030401</id>
        <doi>10.14569/SpecialIssue.2013.030401</doi>
        <lastModDate>2013-12-25T18:43:24.8630000+00:00</lastModDate>
        
        <creator>Mahmoud A. Khodeir</creator>
        
        <creator>Muteeah A. Jawarneh </creator>
        
        <subject>Rician fading channel; Rayleigh fading channel; Nakagami-m fading channel; maximum ratio combining; space diversity; space time transmit diversity;  symbol error rate.</subject>
        <description>Special Issue(SpecialIssue), 3(4), 2013</description>
        <description>In this study, approximate symbol error rate (SER) expressions for the M-ary phase shift keying (M-PSK) modulation scheme over independent and identically distributed (i.i.d.) slow-flat Rician and Rayleigh fading channels are derived. Simulation results show the superior impact of using the maximum ratio combining (MRC) space diversity technique on the overall performance. In particular, the communication reliability (i.e., capacity and coverage) increases by increasing the diversity order (i.e., the number of the combiner’s branches), where less power is needed to achieve the same probability of error. Then, a comparison between the approximate and exact probability of symbol error is performed, and the results are shown to be comparable (within 1–2 dB). Next, an approximate SER expression is derived over i.i.d. slow-flat Nakagami-m fading channels. In particular, the space-time transmit diversity (STTD) technique is used to enhance the reliability of the proposed model using two transmit antennas and one receive antenna. The simulation results show the effect of the Nakagami-m parameter, m, on the SER, where the performance improves by increasing the value of m, since fading is less severe in this case. Furthermore, the SER decreases for higher values of SNR and worsens for higher-order PSK modulation schemes.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo8/Paper_1-Approximate_SER_for_M-PSK_using_MRC_and_STTD_Techniques_over_Fading_Channels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolving Software Effort Estimation Models Using Multigene Symbolic Regression Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021207</link>
        <id>10.14569/IJARAI.2013.021207</id>
        <doi>10.14569/IJARAI.2013.021207</doi>
        <lastModDate>2013-12-10T05:58:10.5700000+00:00</lastModDate>
        
        <creator>Sultan Aljahdali</creator>
        
        <creator>Alaa Sheta</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>Software has played an essential role in engineering, economic development, stock market growth and military applications. A mature software industry counts on highly predictive software effort estimation models. Correct estimation of software effort leads to correct estimation of budget and development time. It also allows companies to develop an appropriate time plan for marketing campaigns. Nowadays it has become a great challenge to obtain these estimates due to the increasing number of attributes which affect the software development life cycle. Software cost estimation models should be able to provide sufficient confidence in their prediction capabilities. Recently, Computational Intelligence (CI) paradigms were explored to handle the software effort estimation problem, with promising results. In this paper we evolve two new models for software effort estimation using Multigene Symbolic Regression Genetic Programming (GP). One model utilizes the Source Lines Of Code (SLOC) as the input variable to estimate the Effort (E), while the second model utilizes the Inputs, Outputs, Files, and User Inquiries to estimate the Function Point (FP). The proposed GP models show better estimation capabilities compared to other models reported in the literature. The validation results are based on the Albrecht data set.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_7-Evolving_Software_Effort_Estimation_Models_Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contradiction Resolution of Competitive and Input Neurons to Improve Prediction and Visualization Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021206</link>
        <id>10.14569/IJARAI.2013.021206</id>
        <doi>10.14569/IJARAI.2013.021206</doi>
        <lastModDate>2013-12-10T05:58:07.0130000+00:00</lastModDate>
        
        <creator>Ryotaro Kamimura</creator>
        
        <subject>contradiction resolution; self- and outer-evaluation; visualization; self-organizing maps; dependent input neuron selection</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>In this paper, we propose a new type of information-theoretic method to resolve the contradiction observed in competitive and input neurons. For competitive neurons, a contradiction between self-evaluation (individuality) and outer-evaluation (collectivity) exists, which is reduced to realize the self-organizing maps. For input neurons, there exists a contradiction between the use of many and of few input neurons. We try to realize a situation where as many input neurons as possible are used, and at the same time, another where only a few input neurons are used. This contradictory situation can be resolved by viewing input neurons on different levels, namely, the individual and average levels. We applied contradiction resolution to two data sets, namely, the Japanese short-term economy survey (Tankan) and Dollar-Yen exchange rates. In both data sets, we succeeded in improving the prediction performance. Many input neurons were used on average, but only a few input neurons were taken for each input pattern. In addition, connection weights were condensed into a small number of distinct groups for better prediction and interpretation performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_6-Contradiction_Resolution_of_Competitive_and_Input_Neurons_to_Improve_Prediction_and_Visualization_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis and Error Analysis of Reflectance Based Vicarious Calibration with Estimated Aerosol Refractive Index and Size Distribution Derived from Measured Solar Direct and Diffuse Irradiance as well as Measured Surface Reflectance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021205</link>
        <id>10.14569/IJARAI.2013.021205</id>
        <doi>10.14569/IJARAI.2013.021205</doi>
        <lastModDate>2013-12-10T05:58:05.1730000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Vicarious calibration; Top of Atmosphere Radiance; At sensor Radiance; refractive index; size distribution; Junge parameter; optical depth</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>Sensitivity and error analysis of reflectance-based vicarious calibration with estimated aerosol refractive index and size distribution, derived from measured solar direct and diffuse irradiance as well as measured surface reflectance, is conducted for the solar reflective channels of mission instruments onboard remote sensing satellites. Through these error analyses, it is found that the most influential factor is surface reflectance: between 75% and 91% of the vicarious calibration coefficient error is due to surface reflectance, followed by atmospheric optical depth and the Junge parameter. Therefore, care must be taken over surface reflectance measurement accuracy, followed by atmospheric optical depth (aerosol refractive index, and water vapor and ozone absorption) and the Junge parameter (aerosol size distribution). In conclusion, it is confirmed that surface reflectance is the most influential factor on TOA radiance; when the atmospheric optical depth is small, the Junge parameter becomes influential.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_5-Sensitivity_Analysis_and_Error_Analysis_of_Reflectance_Based_Vicarious.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Approach of Reflectance Based Vicarious Calibration Method for Solar Reflectance Wavelength Region of Sensor Onboard Remote Sensing Satellites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021204</link>
        <id>10.14569/IJARAI.2013.021204</id>
        <doi>10.14569/IJARAI.2013.021204</doi>
        <lastModDate>2013-12-10T05:58:03.3300000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Vicarious calibration; Top of Atmosphere Radiance, At sensor Radiance; refractive index; size distribution; Junge parameter; optical depth</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>An experimental approach to reflectance-based vicarious calibration for the solar reflectance wavelength region of mission instruments onboard remote sensing satellites is conducted. As an example, vicarious calibration of ASTER/VNIR with estimated aerosol refractive index and size distribution, which depend on atmospheric conditions, is discussed. Strange solutions for the estimated refractive index and size distribution may occur because the solution falls into one of the local minima in the inversion process for phase function fitting between the measured phase function and that estimated with an assumed refractive index and size distribution. This paper describes the atmospheric conditions that may induce such a situation; namely, it may occur when the atmospheric optical depth is too thin and/or the Junge parameter is too small. In such cases, the refractive index and size distribution estimation accuracy is poor. A relation between the refractive index and size distribution estimation accuracy and the estimation accuracy of the Top of the Atmosphere (TOA) radiance (vicarious calibration accuracy) is also clarified, in particular for ASTER/VNIR vicarious calibration. It is found that a 10% error in refractive index and size distribution estimation causes approximately a 1.3% error in TOA radiance estimation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_4-Experimental_Approach_of_Reflectance_Based_Vicarious_Calibration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Discrimination Method between Prolate and Oblate Shapes of Leaves Based on Polarization Characteristics Measured with Polarization Film Attached Cameras</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021203</link>
        <id>10.14569/IJARAI.2013.021203</id>
        <doi>10.14569/IJARAI.2013.021203</doi>
        <lastModDate>2013-12-10T05:58:01.5070000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>polarization; Monte Carlo Ray Tracing; prolate and oblate shapes of leaves</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>A method for discrimination between prolate and oblate shapes of leaves based on polarization characteristics is proposed. A method for investigation of the polarization characteristics of leaves by means of Monte Carlo Ray Tracing (MCRT) is also proposed. The validity of the proposed discrimination method is confirmed with MCRT simulations. Field experiments are also conducted: through field experiments at tea estates situated in Saga prefecture, the validity of the proposed method is confirmed, and discrimination between prolate and oblate shapes of leaves is attempted. The results show that the proposed method is valid and that discrimination can be performed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_3-Discrimination_Method_between_Prolate_and_Oblate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multifidus Muscle Volume Estimation Based on Three Dimensional Wavelet Multi Resolution Analysis: MRA with Buttocks Computer-Tomography: CT Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021202</link>
        <id>10.14569/IJARAI.2013.021202</id>
        <doi>10.14569/IJARAI.2013.021202</doi>
        <lastModDate>2013-12-10T05:57:59.6970000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Edge detection; MRA; Multifidus muscle </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>A Multi-Resolution Analysis (MRA) based edge detection algorithm is proposed for estimating the volume of the multifidus muscle in Computer Tomography (CT) scanned images. The volume of the multifidus muscle would be a better measure for metabolic syndrome than internal fat from the point of view of processing complexity. The proposed measure shows an R-square of 0.178, which corresponds to the mutual correlation between internal fat and the volume of the multifidus muscle. It is also found that the R-square between internal fat and the other possible measures is smaller than that of the multifidus muscle.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_2-Multifidus_Muscle_Volume_Estimation_Based_on_Three_Dimensional_Wavelet_Multi_Resolution_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Aureole Estimation Refinement Through Comparisons Between Observed Aureole and Estimated Aureole Based on Monte Carlo Ray Tracing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021201</link>
        <id>10.14569/IJARAI.2013.021201</id>
        <doi>10.14569/IJARAI.2013.021201</doi>
        <lastModDate>2013-12-10T05:57:57.8230000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Monte Carlo method; Ray tracing method; Aureole; aerosol; optical depth; solar diffuse; solar direct </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(12), 2013</description>
        <description>A method for aureole estimation refinement through comparisons between the observed aureole and the aureole estimated based on Monte Carlo Ray Tracing (MCRT) is proposed. Through some experiments, it is found that the proposed method does work for refinement of aureole estimation. The experimental results also show that the proposed method is validated through comparison with the empirical aureole estimation equation proposed by the Sherbrooke University research group.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No12/Paper_1-Method_for_Aureole_Estimation_Refinement_Through_Comparisons.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A General Framework of Generating Estimation Functions for Computing the Mutual Information of Terms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041128</link>
        <id>10.14569/IJACSA.2013.041128</id>
        <doi>10.14569/IJACSA.2013.041128</doi>
        <lastModDate>2013-11-30T18:21:58.1170000+00:00</lastModDate>
        
        <creator>D. Cai</creator>
        
        <creator>T.L. McCluskey</creator>
        
        <subject>mutual information of terms (MIT); term dependence; statistical semantic analysis; probability estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Computing statistical dependence of terms in textual documents is a widely studied subject and a core problem in many areas of science. This study focuses on such a problem and explores the techniques of estimation using the expected mutual information measure. A general framework is established for tackling a variety of estimations: (i) general forms of estimation functions are introduced; (ii) a set of constraints for the estimation functions is discussed; (iii) general forms of probability distributions are defined; (iv) general forms of the measures for calculating mutual information of terms (MIT) are formalised; (v) properties of the MIT measures are studied; and (vi) relations between the MIT measures are revealed. Four estimation methods, as examples, are proposed and the mathematical meanings of the individual methods are respectively interpreted. The methods may be directly applied to practical problems for computing dependence values of individual term pairs. Due to its generality, our method is applicable to various areas involving statistical semantic analysis of textual data.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_28-A_General_Framework_of_Generating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Effort Estimation Inspired by COCOMO and FP Models: A Fuzzy Logic Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041127</link>
        <id>10.14569/IJACSA.2013.041127</id>
        <doi>10.14569/IJACSA.2013.041127</doi>
        <lastModDate>2013-11-30T18:21:56.3070000+00:00</lastModDate>
        
        <creator>Alaa F. Sheta</creator>
        
        <creator>Sultan Aljahdali</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Budgeting, bidding and planning of software project effort, time and cost are essential elements of any software development process. The massive size and complexity of present-day software systems pose a substantial risk to the development process. Inadequate and inefficient information about size and complexity results in ambiguous estimates that cause many losses. Project managers cannot adequately provide good estimates for both the effort and time needed; thus, no clear release day to the market can be defined. This paper presents two new models for software effort estimation using fuzzy logic. One model is developed based on the well-known COnstructive COst Model (COCOMO) and utilizes the Source Lines Of Code (SLOC) as the input variable to estimate the Effort (E), while the second model utilizes the Inputs, Outputs, Files, and User Inquiries to estimate the Function Points (FP). The proposed fuzzy models show better estimation capabilities compared to other models reported in the literature and better assist the project manager in computing the required software development effort. The validation results are carried out using the Albrecht data set.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_27-Software_Effort_Estimation_Inspired_by_COCOMO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Real Time Weather Forecasting With Recommendations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041126</link>
        <id>10.14569/IJACSA.2013.041126</id>
        <doi>10.14569/IJACSA.2013.041126</doi>
        <lastModDate>2013-11-30T18:21:54.5130000+00:00</lastModDate>
        
        <creator>Abhishek Kumar Singh</creator>
        
        <creator>Aditi Sharma</creator>
        
        <creator>Rahul Mishra</creator>
        
        <subject>Weather Forecasting; NOAA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Temperature and rain forecasting plays a major role in many fields in today's environment, such as transportation, tour planning and agriculture. The purpose of this paper is to provide real-time forecasting to users according to their current position and requirements. The simplest method of forecasting the weather, persistence, relies upon today's conditions to forecast the conditions tomorrow, i.e., analyzing historical data to predict future weather conditions. The weather data used for the data mining research include daily temperature, daily pressure and monthly rainfall.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_26-Personalized_Real_Time_Weather_Forecasting_With_Recommendations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>pGBbBShift - Perceptual Generalized Bitplane-by-Bitplane Shift</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041125</link>
        <id>10.14569/IJACSA.2013.041125</id>
        <doi>10.14569/IJACSA.2013.041125</doi>
        <lastModDate>2013-11-30T18:21:52.6570000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <subject>Image Coding; JPEG2000; Hi-SET; region of interest (ROI); bitplane coding; wavelet coding; maximum shift (MaxShift); bitplane-by-bitplane shift (BbBShift); generalized bitplane-by-bitplane shift (GBbBShift)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>In this paper we present pGBbBShift. This algorithm makes it possible to code any Region of Interest (ROI) in a perceptual way, i.e., it introduces some characteristics of the Human Visual System. Furthermore, it introduces features of chromatic induction into the GBbBShift method when bitplanes of ROI and background areas are coded. The included features balance the visual importance of some pixels regardless of their numerical importance; namely, we avoid using Information Theory criteria. Visual criteria are applied using the CIWaM, a contrast band-pass filter that predicts color perception. pGBbBShift is compared against classical ROI algorithms, such as the MaxShift method of JPEG2000, and results show that there is no perceptual difference. The pGBbBShift method is an open algorithm that can be applied in any wavelet-based image coder such as JPEG2000, SPIHT or SPECK. Finally, we applied pGBbBShift to the Hi-SET coder and obtained the best results when the overall visual image quality is assessed.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_25-pGBbBShift.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building future generation service-oriented information broker networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041124</link>
        <id>10.14569/IJACSA.2013.041124</id>
        <doi>10.14569/IJACSA.2013.041124</doi>
        <lastModDate>2013-11-30T18:21:50.8130000+00:00</lastModDate>
        
        <creator>Sophie Wrobel</creator>
        
        <creator>Eleni Kosta</creator>
        
        <creator>Mohamed Bourimi</creator>
        
        <creator>Rafael Gim&#233;nez</creator>
        
        <creator>Simon Scerri</creator>
        
        <subject>di.me; online privacy; social media; software design; legal and ethical issues; broker platform; context-aware web services; user data</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Future generation networks target collecting intelligence from multiple sources based on end-users&#39; data and their social interactions in order to draw useful conclusions that enable users to exercise their rights to online privacy. These networks form a rising class of service-oriented broker platforms. During the design and development of their systems, designers and providers of such network platforms focus primarily on technical specifications and issues. However, given the importance and richness of the user information collected, they should take legal and ethical requirements into account already at the design phase. Failure to do so may result in privacy violations, which may in turn affect the success of the network, due to increasing awareness of users’ privacy and security concerns, and may incur future costs. In this paper, we show how the di.me system balanced technical and legal requirements through both its design and implementation, while building a decentralized social networking platform. We report on our advances and experiences through a prototypical technology realizing such a platform, analyze the legal implications within the EU legal framework, and provide recommendations and conclusions for user-friendly service-oriented broker platforms.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_24-Building_future_generation_service-oriented.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Robots in Teaching Programming for IT Engineers and its Effects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041123</link>
        <id>10.14569/IJACSA.2013.041123</id>
        <doi>10.14569/IJACSA.2013.041123</doi>
        <lastModDate>2013-11-30T18:21:47.4130000+00:00</lastModDate>
        
        <creator>Attila P&#225;sztor</creator>
        
        <creator>R&#243;bert Pap-Szigeti</creator>
        
        <creator>Erika T&#246;r&#246;k</creator>
        
        <subject>programmable mobile robots; project method; positive effects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>In this paper, the new methods and devices introduced into the learning process of programming for IT engineers at our college are described. Based on our previous research results, we supposed that project methods and some new devices can reduce programming problems during the first term. These problems are rooted in the difficulties of abstract thinking, and they can cause a decrease in programming self-concept and other learning motives.
We redesigned the traditional learning environment. As a constructive approach, the project method was used. Our students worked in groups of two or three, and small problems were solved after every lesson. In the problem-solving process, students used programmable robots (e.g. Surveyor, LEGO NXT and RCX). They had to plan their program, solve some technical problems and test their solution.
The usability of mobile robots in the learning process and the short-term efficiency of our teaching method were checked with a control group after a semester (n = 149). We examined the effects on our students’ programming skills and on their motives, mainly on their attitudes and programming self-concept. After a two-year-long period we could measure some positive long-term effects.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_23-Mobile_Robots_in_Teaching_Programming_for_IT_Engineers_and_its_Effects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inverted Indexing In Big Data Using Hadoop Multiple Node Cluster</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041122</link>
        <id>10.14569/IJACSA.2013.041122</id>
        <doi>10.14569/IJACSA.2013.041122</doi>
        <lastModDate>2013-11-30T18:21:43.9970000+00:00</lastModDate>
        
        <creator>Kaushik Velusamy</creator>
        
        <creator>Deepthi Venkitaramanan</creator>
        
        <creator>Nivetha Vijayaraju</creator>
        
        <creator>Greeshma Suresh</creator>
        
        <creator>Divya Madhu</creator>
        
        <subject>Hadoop; Big data; inverted indexing; data structure</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Inverted indexing is an efficient, standard data structure, well suited for search operations over an exhaustive set of data. The huge set of data is mostly unstructured and does not fit into traditional database categories. Large-scale processing of such data needs a distributed framework such as Hadoop, where computational resources can easily be shared and accessed. An implementation of a search engine in Hadoop over millions of Wikipedia documents using an inverted index data structure is carried out to make search operations more effective. The inverted index data structure is used for mapping a word in a file or set of files to its corresponding locations. A hash table is used in this data structure, which stores each word as an index and its corresponding locations as values, thereby providing easy lookup and retrieval of data, making it suitable for search operations.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_22-Inverted_Indexing_In_Big_Data_Using_Hadoop_Multiple_Node_Cluster.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pilot Study Examining the Online Behavior of Web Users with Visual Impairments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041121</link>
        <id>10.14569/IJACSA.2013.041121</id>
        <doi>10.14569/IJACSA.2013.041121</doi>
        <lastModDate>2013-11-30T18:21:40.5970000+00:00</lastModDate>
        
        <creator>Julian Brinkley</creator>
        
        <creator>Nasseh Tabrizi</creator>
        
        <subject>Web Accessibility; Social Networking; Human Computer Interaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This report presents the results of a pilot study on the online behavioral habits of 46 internet users, 26 of whom self-identified as having a visual impairment (either blind or low vision). While significant research exists documenting the degree of difficulty that users with visual impairments have in interacting with the Web relative to sighted users, few studies have addressed the degree to which this usability disparity impacts online behavior, particularly information seeking and online exploratory behaviors. Fewer still have addressed this usability disparity within the context of distinct website types; i.e., are usability issues more pronounced with certain categories of websites as opposed to others? This pilot study was effective both in exploring these issues and in identifying the accessibility of online social networks as a primary topic of investigation with respect to the formal study that is to follow.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_21-A_Pilot_Study_Examining_the_Online_Behavior_of_Web_Users_with_Visual_Impairments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CAPTCHA Based on Human Cognitive Factor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041120</link>
        <id>10.14569/IJACSA.2013.041120</id>
        <doi>10.14569/IJACSA.2013.041120</doi>
        <lastModDate>2013-11-30T18:21:37.3500000+00:00</lastModDate>
        
        <creator>Mohammad Jabed Morshed Chowdhury</creator>
        
        <creator>Narayan Ranjan Chakraborty</creator>
        
        <subject>CAPTCHA; Usability; Security; Cognitive; Psychology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is an automatic security mechanism used to determine whether the user is a human or a malicious computer program. It is a program that generates and grades tests that are human solvable but are intended to be beyond the capabilities of current computer programs. A CAPTCHA should be designed to be very easy for humans but very hard for machines. Unfortunately, existing CAPTCHA systems, in trying to maximize the difficulty for automated programs to pass tests by increasing distortion or noise, have consequently also made it very difficult for potential users. To address this issue, this paper proposes an alternative form of CAPTCHA that provides a variety of mathematical, logical and general questions which only a human can understand and answer correctly in a given time. The proposed framework supports diversity in choosing the questions to be answered and provides a user-friendly framework to the users. A user study is also conducted to judge the performance of the developed system with users from different backgrounds. The study shows the efficacy of the implemented system, with a good level of user satisfaction over the traditional CAPTCHAs available today.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_20-CAPTCHA_Based_on_Human_Cognitive_Factor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Assessment of 3G Mobile Service Acceptance in Bangladesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041119</link>
        <id>10.14569/IJACSA.2013.041119</id>
        <doi>10.14569/IJACSA.2013.041119</doi>
        <lastModDate>2013-11-30T18:21:33.9500000+00:00</lastModDate>
        
        <creator>Tajmary Mahfuz</creator>
        
        <creator>Subhenur Latif</creator>
        
        <subject>Awareness; Adoption; 3G mobile service; usage pattern</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This paper aims to find out the key factors influencing mobile users to adopt 3G technology and affecting subscribers’ feedback while using third generation (3G) mobile services, which have been available for one year in Bangladesh. An interesting fact that motivated this research was the significantly low rate of 3G service usage among mobile operators in Bangladesh, even though the completely opposite picture is seen worldwide. To examine user acceptance and to depict user behavioral patterns, data were collected from 200 respondents through a survey. The analysis was done in two categories: one in general and the other department-based. The results of the study revealed user intention, awareness, attitude, expectation, key 3G service usage, etc. The findings have future implications for existing as well as newly arrived service providers who have very recently started their journey. Considering these identified factors would provide directions for telecom operators to achieve a high rate of 3G service adoption and to provide more successful 3G services. However, the study covered a limited area where those findings are applicable. The results of this study might be helpful for telecom operators targeting the 3G subscriber market and also for future research in this field.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_19-An_Assessment_of_3G_Mobile_Service_Acceptance_in_Bangladesh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Resources Annotation for the Web of Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041118</link>
        <id>10.14569/IJACSA.2013.041118</id>
        <doi>10.14569/IJACSA.2013.041118</doi>
        <lastModDate>2013-11-30T18:21:30.4700000+00:00</lastModDate>
        
        <creator>Jawad Berri</creator>
        
        <subject>Web resources; semantic annotation; web of learning; contextual exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Semantic annotation of web resources is an essential ingredient in leveraging the web of information into the semantic web, where resources are easily shared and reused. In the education field, reusing hypermedia web resources can greatly support the design of modern instructional environments and the development of interactive and non-linear material for learning. Sharing and reusing these resources across different web applications and services presupposes that they are visible for retrieval through a semantic description of their content, function and relations with other resources. This paper presents the annotation and discovery of web resources to create learning objects that constitute the building blocks of learning sessions delivered to users in the Web of Learning. Semantic annotation is done by the contextual exploration method, which analyzes web resources’ text descriptions and metadata in order to annotate resources automatically. We present the system architecture and a case study that illustrates the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_18-Web_Resources_Annotation_for_the_Web_of_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Learning Analytics to Understand the Design of an Intelligent Language Tutor – Chatbot Lucy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041117</link>
        <id>10.14569/IJACSA.2013.041117</id>
        <doi>10.14569/IJACSA.2013.041117</doi>
        <lastModDate>2013-11-30T18:21:27.0400000+00:00</lastModDate>
        
        <creator>Yi Fei Wang</creator>
        
        <creator>Stephen Petrina</creator>
        
        <subject>learning analytics; intelligent tutor; chatbot; second language acquisition; learning design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>The goal of this article is to explore how learning analytics can be used to predict and advise the design of an intelligent language tutor, chatbot Lucy. With its focus on using student-produced data to understand the design of Lucy to assist English language learning, this research can be a valuable component for language-learning designers seeking to improve second language acquisition. In this article, we present students’ learning journeys and data trails, the chatting log architecture, and the resultant applications to the design of language learning systems.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_17-Using_Learning_Analytics_to_Understand_the_Design_of_an_Intelligent_Language_Tutor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An analysis of Internet Banking in Portugal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041116</link>
        <id>10.14569/IJACSA.2013.041116</id>
        <doi>10.14569/IJACSA.2013.041116</doi>
        <lastModDate>2013-11-30T18:21:23.6070000+00:00</lastModDate>
        
        <creator>Jo&#227;o Pedro Couto</creator>
        
        <creator>Teresa Tiago</creator>
        
        <creator>Fl&#225;vio Tiago</creator>
        
        <subject>internet banking; mobile banking; technology adoption models; Portugal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>In recent years, mobile operations have gained wide popularity among mainstream users, and banks tend to follow this trend. But are bank customers ready to move forward? Mobile banking appears to be a natural extension of Internet banking. Thus, to predict consumer decisions to adopt mobile banking, it is useful to understand the pattern of adoption of Internet banking (IB). This investigation seeks to contribute to an expansion of the knowledge regarding this matter by researching Portuguese consumers’ patterns and behaviors concerning the acceptance and intention to use IB as a foundation for establishing growth strategies for mobile banking. For data collection, we used an online “snowball” process. The statistical treatment included a factor analysis to allow examination of the interrelationships between the original variables. The analysis was made possible by developing a set of factors that expresses the common traits between them. The results revealed that the majority of respondents did not identify problems with the use of and interaction with the IB service. The study generated some interesting findings. First, the data generally support the conceptual framework presented. However, some points need to be made: (i) trust and convenience, of all the elements referenced in the literature as relevant from the client’s perspective, continue to be very important elements; (ii) the results did not support the paradigm that the characteristics of individuals affect their behavior as consumers; (iii) individual technological characteristics affect consumer adoption of the IB service; (iv) consumer perceptions about the IB service affect its use, as revealed by the existence of three types of customers that show different practices and perceptions of IB; and (v) intention to use IB is dependent upon attitudes and subjective norms regarding the use of IB.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_16-An_analysis_of_Internet_Banking_in_Portugal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Development Effort Estimation by Means of Genetic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041115</link>
        <id>10.14569/IJACSA.2013.041115</id>
        <doi>10.14569/IJACSA.2013.041115</doi>
        <lastModDate>2013-11-30T18:21:20.1730000+00:00</lastModDate>
        
        <creator>Arturo Chavoya</creator>
        
        <creator>Cuauhtemoc Lopez-Martin</creator>
        
        <creator>M.E. Meda-Campa&#241;a</creator>
        
        <subject>genetic programming; feedforward neural network; software effort estimation; statistical regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>In this study, a genetic programming technique was used with the goal of estimating the effort required in the development of individual projects. Results obtained were compared with those generated by a statistical regression and by a neural network that have already been used to estimate the development effort of individual software projects. A sample of 132 projects developed by 40 programmers was used for generating the three models and another sample of 77 projects developed by 24 programmers was used for validating the three models. Results in the accuracy of the model obtained from genetic programming suggest that it could be used to estimate software development effort of individual projects.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_15-Software_Development_Effort_Estimation_by_Means_of_Genetic_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Service Pyramid Concept of Knowledge Intensive Business Services in the Small and Medium Sized Enterprises Sector</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041114</link>
        <id>10.14569/IJACSA.2013.041114</id>
        <doi>10.14569/IJACSA.2013.041114</doi>
        <lastModDate>2013-11-30T18:21:16.7570000+00:00</lastModDate>
        
        <creator>S&#225;ndor Nagy</creator>
        
        <subject>knowledge intensive business services; information and communication technology; small and medium sized enterprises; productisation; service product; service pyramid</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>The activity of service companies is more complex today than before the use of Information and Communication Technology (ICT) became widespread in business. Fewer service companies based solely on physical customer interaction can be found on the market; they have been replaced with standardized and automated services. This structural change can be observed particularly among Knowledge Intensive Business Services (KIBS) companies, where the use of ICT is not simply a market requirement but a competitive tool as well. Without the application of any ICT instrument, a KIBS company could not meet market expectations; furthermore, business developments are mainly ICT-based. On the other hand, companies need to develop their businesses more and more, as KIBS companies that are not able to differentiate themselves as trustworthy on the business services market will disappear. ‘Just another business service company’ cannot really sell today. This is applicable to an even greater extent to Small and Medium Sized Enterprises (SME), where ICT has opened market opportunities that never existed for them before. The aim of this paper is to highlight the new features of KIBS business activities in the SME sector and the related research questions.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_14-Service_Pyramid_Concept_of_Knowledge_Intensive_Business_Services_in_the_Small_and_Medium_Sized_Enterprises_Sector.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross Correlation versus Mutual Information for Image Mosaicing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041113</link>
        <id>10.14569/IJACSA.2013.041113</id>
        <doi>10.14569/IJACSA.2013.041113</doi>
        <lastModDate>2013-11-30T18:21:13.3400000+00:00</lastModDate>
        
        <creator>Sherin Ghannam</creator>
        
        <creator>A. Lynn Abbott</creator>
        
        <subject>mosaicing; normalized cross correlation; mutual information</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This paper reviews the concept of image mosaicing and presents a comparison between two of the most common image mosaicing techniques. The first technique is based on normalized cross correlation (NCC) for registering overlapping 2D images of a 3D scene. The second is based on mutual information (MI). The experimental results demonstrate that the two techniques have similar performance in most cases, but there are some interesting differences. The choice of a distinctive template is critical when working with NCC. On the other hand, when using MI, the registration procedure was able to provide acceptable performance even without distinctive templates. In general, however, the performance when using MI with large rotation angles was not as accurate as with NCC.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_13-Cross_Correlation_versus_Mutual_Information_for_Image_Mosaicing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Parallel Application on Multi-core Mobile Phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041112</link>
        <id>10.14569/IJACSA.2013.041112</id>
        <doi>10.14569/IJACSA.2013.041112</doi>
        <lastModDate>2013-11-30T18:21:10.0670000+00:00</lastModDate>
        
        <creator>Dhuha Basheer Abdullah</creator>
        
        <creator>Mohammed M. Al-Hafidh</creator>
        
        <subject>Mobile phone; Parallel programming; Multi-core processor; Lane detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>One cannot imagine daily life today without mobile devices such as mobile phones or PDAs. They tend to become your mobile computer, offering all the features one might need on the way. As a result, devices are less expensive and include a huge number of high-end technological components, which also makes them attractive for scientific research. Today, multi-core mobile phones are taking all the attention. Relying on the principles of task and data parallelism, we propose in this paper a real-time mobile lane departure warning system (M-LDWS) based on a carefully designed parallel programming framework on a quad-core mobile phone, and show how to increase the utilization of processors to achieve an improvement in the system’s runtime.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_12-Developing_Parallel_Application_on_Multi-core_Mobile_Phone.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control of Single Axis Magnetic Levitation System Using Fuzzy Logic Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041111</link>
        <id>10.14569/IJACSA.2013.041111</id>
        <doi>10.14569/IJACSA.2013.041111</doi>
        <lastModDate>2013-11-30T18:21:06.6330000+00:00</lastModDate>
        
        <creator>Tania Tariq Salim</creator>
        
        <creator>Vedat Mehmet Karsli</creator>
        
        <subject>Magnetic Levitation; System MAGLEV; Fuzzy Control; Linear Quadratic Regulator Controller</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This paper presents a fuzzy logic controller design for the stabilization of a magnetic levitation (Maglev) system. Additionally, an investigation of the Linear Quadratic Regulator Controller (LQRC) is also presented. This paper compares the performance of fuzzy logic control (FLC) and LQRC for the same linear model of the magnetic levitation system. A magnetic levitation system is a nonlinear, unstable system, and the fuzzy logic controller brings it to a stable region by keeping a magnetic ball suspended in the air. The system is modeled and simulated using Matlab Simulink, connected to the Hilink platform and the maglev model of the Zeltom company. This paper presents a comparison of both LQRC and FLC for controlling a ball suspended in the air. The simulation results show that the fuzzy logic controller had better performance than the LQR controller.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_11-Control_of_Single_Axis_Magnetic_Levitation_System_Using_Fuzzy_Logic_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation for the DIPDAM scheme on the OLSR and the AODV MANETs Routing Protocols</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041110</link>
        <id>10.14569/IJACSA.2013.041110</id>
        <doi>10.14569/IJACSA.2013.041110</doi>
        <lastModDate>2013-11-30T18:21:03.2170000+00:00</lastModDate>
        
        <creator>Ahmad Almazeed</creator>
        
        <creator>Ahmed Mohamed Abdalla</creator>
        
        <subject>Ad hoc networks; AODV; Computer network management; IDS; MANETS; OLSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>The DIPDAM scheme is a fully-distributed message exchange framework designed to overcome the challenges caused by the decentralized and dynamic characteristics of mobile ad-hoc networks. The DIPDAM mechanism is based on three parts: a Path Validation Message (PVM), which enables an E2E feedback loop between the source and the destination; an Attacker Finder Message (AFM), to detect attacker nodes along the routing path; and an Attacker Isolation Message (AIM), to isolate the attacker from the routing path, update the black list for each node, and then notify neighbors with the updated information. The DIPDAM scheme was fully tested on the OLSR routing protocol. In order to prove the efficiency of the DIPDAM scheme in detecting and isolating packet-dropping attackers, DIPDAM is applied to another routing protocol category, AODV. AODV represents different concepts in routing path calculation and is widely adopted. The comparison between the two routing protocols is tested against smart attackers. The goal of this comparison is to prove that the DIPDAM scheme can be applied to a different routing protocol category.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_10-Performance_Evaluation_for_the_DIPDAM_scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grouping-based Scheduling with Load Balancing for Fine-Grained Jobs in Grid Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041109</link>
        <id>10.14569/IJACSA.2013.041109</id>
        <doi>10.14569/IJACSA.2013.041109</doi>
        <lastModDate>2013-11-30T18:20:59.8000000+00:00</lastModDate>
        
        <creator>Rabab Mohamed Ezzat</creator>
        
        <creator>Amal Elsayed Aboutabl</creator>
        
        <creator>Mostafa Sami Mostafa</creator>
        
        <subject>grid computing; job scheduling; job grouping; load balancing; resource utilization; Gridsim</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Grid computing is characterized by the existence of a collection of heterogeneous geographically distributed resources that are connected over high speed networks. Job scheduling and resource management have been a great challenge to researchers in the area of grid computing. Very often, there are applications having a large number of fine-grained jobs. Sending these fine-grained jobs individually to be executed on grid resources that have high processing power reduces resource utilization and is thus uneconomical. This paper presents efficient grouping-based scheduling models that group fine-grained jobs to form coarse-grained jobs which are sent for execution on grid resources. Our grouping strategy is based on the processing capability of resources and the processing requirements of grouped jobs. A load balancing approach is also presented to achieve efficient utilization of resources. Simulation experiments were conducted using the Gridsim toolkit. Results show that the total simulation time and the cost are improved by grouping. Furthermore, our load balancing approach enhances resource utilization and achieves load balancing among resources.   </description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_9-Grouping-based_Scheduling_with_Load_Balancing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Exploratory study of proposed factors to Adopt e-government Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041108</link>
        <id>10.14569/IJACSA.2013.041108</id>
        <doi>10.14569/IJACSA.2013.041108</doi>
        <lastModDate>2013-11-30T18:20:56.2600000+00:00</lastModDate>
        
        <creator>Sulaiman A. Alateyah</creator>
        
        <creator>Richard M Crowder</creator>
        
        <creator>Gary B Wills</creator>
        
        <subject>adoption; citizens’ intention; e-government; e-government models; influential factors; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This paper discusses e-government, in particular the challenges facing the adoption of e-government in Saudi Arabia. In this research we define e-government as a matrix of stakeholders: governments to governments, governments to business and governments to citizens, using information and communications technology to deliver and consume services. E-government services still face many challenges in their implementation and general adoption in many countries, including Saudi Arabia. In addition, the background and discussion identify the influential factors that affect citizens’ intention to adopt e-government services in Saudi Arabia. These factors have been defined and categorized, followed by an exploratory study to examine their importance. This research has therefore identified the factors that determine whether citizens will adopt e-government services, thereby aiding governments in assessing what is required to increase adoption.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_8-An_Exploratory_study_of_proposed_factors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Blind Turing-Machines: Arbitrary Private Computations from Group Homomorphic Encryption</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041107</link>
        <id>10.14569/IJACSA.2013.041107</id>
        <doi>10.14569/IJACSA.2013.041107</doi>
        <lastModDate>2013-11-30T18:20:53.0130000+00:00</lastModDate>
        
        <creator>Stefan Rass</creator>
        
        <subject>secure function evaluation; homomorphic encryption; chosen ciphertext security; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Secure function evaluation (SFE) is the process of computing a function (or running an algorithm) on some data, while keeping the input, output and intermediate results hidden from the environment in which the function is evaluated. This can be done using fully homomorphic encryption, Yao&#39;s garbled circuits or secure multiparty computation. Applications are manifold, most prominently the outsourcing of computations to cloud service providers, where data is to be manipulated and processed in full confidentiality. Today, one of the most intensively studied solutions to SFE is fully homomorphic encryption (FHE). Ever since the first such systems were discovered in 2009, and despite much progress, FHE still remains inefficient and difficult to implement practically. Similar concerns apply to garbled circuits and (generic) multiparty computation protocols. In this work, we introduce the concept of a blind Turing-machine, which uses simple homomorphic encryption (an extension of ElGamal encryption) to process ciphertexts in the same way as standard Turing-machines do, thus achieving computability of any function in total privacy. Remarkably, this shows that fully homomorphic encryption is indeed an overly strong primitive for SFE, as group homomorphic encryption with an equality check is already sufficient. Moreover, the technique is easy to implement and perhaps opens the door to efficient private computations on today&#39;s computing machinery, requiring only simple changes to well-established computer architectures.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_7-Blind_Turing-Machines_Arbitrary_Private_Computations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modification of Contract Net Protocol (CNP) : A Rule-Updation Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041106</link>
        <id>10.14569/IJACSA.2013.041106</id>
        <doi>10.14569/IJACSA.2013.041106</doi>
        <lastModDate>2013-11-30T18:20:51.2200000+00:00</lastModDate>
        
        <creator>Sandeep Kaur</creator>
        
        <creator>Harjot Kaur</creator>
        
        <creator>Sumeet Kaur Sehra</creator>
        
        <subject>Multi-Agent System; Coordination; Communication Language; Norm-based Contract Net Protocol; utility parameters</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Coordination in a multi-agent system (MAS) is essential in order to perform complex tasks and lead the MAS towards its goal. The member agents of a multi-agent system should be autonomous as well as collaborative to accomplish the complex task for which the system is designed. The Contract-Net Protocol (CNP) is one of the coordination mechanisms used by multi-agent systems that prefer coordination through interaction protocols. In order to overcome the limitations of conventional CNP, this paper proposes a modification of conventional CNP called updated-CNP. Updated-CNP is an effort to address CNP&#39;s limitations in modifiability and communication overhead. In our updated-CNP version, tasks allocated to contractor agents by manager agents can be modified if the task requirements change at any instance; this was not possible with conventional CNP, which has to be restarted in the case of task modification. This in turn reduces the communication overhead of CNP, i.e., the time taken by agents using CNP to pass messages to each other. For the illustration of the updated CNP, we have used a sound predator-prey case study.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_6-Modification_of_Contract_Net_Protocol_(CNP)_A_Rule-Updation_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between Na&#239;ve Bayes, Decision Tree and k-Nearest Neighbor in Searching Alternative Design in an Energy Simulation Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041105</link>
        <id>10.14569/IJACSA.2013.041105</id>
        <doi>10.14569/IJACSA.2013.041105</doi>
        <lastModDate>2013-11-30T18:20:47.7400000+00:00</lastModDate>
        
        <creator>Ahmad Ashari</creator>
        
        <creator>Iman Paryudi</creator>
        
        <creator>A Min Tjoa</creator>
        
        <subject>energy simulation tool; classification method; na&#239;ve bayes; decision tree; k-nearest neighbor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>An energy simulation tool simulates the energy use of a building prior to its construction. Commonly, such a tool has a feature providing alternative designs that are better than the user’s design. In this paper, we propose a novel method of searching for alternative designs by using classification methods. The classifiers we use are Na&#239;ve Bayes, Decision Tree, and k-Nearest Neighbor. Our experiment shows that Decision Tree has the fastest classification time, followed by Na&#239;ve Bayes and k-Nearest Neighbor. The differences between the classification times of Decision Tree and Na&#239;ve Bayes, and also between Na&#239;ve Bayes and k-NN, are about an order of magnitude. Based on Precision, Recall, F-measure, Accuracy, and AUC, the performance of Na&#239;ve Bayes is the best. It outperforms Decision Tree and k-Nearest Neighbor on all parameters but precision.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_5-Performance_Comparison_between_Na&#239;ve_Bayes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security of Mobile Phones and Their Usage in Business</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041104</link>
        <id>10.14569/IJACSA.2013.041104</id>
        <doi>10.14569/IJACSA.2013.041104</doi>
        <lastModDate>2013-11-30T18:20:44.2930000+00:00</lastModDate>
        
        <creator>Abdullah Saleh Alqahtani</creator>
        
        <subject>Smartphone; PDA; security; business-oriented; applications</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>The purpose of this document is to provide an overview of the growth of mobile phone and PDA devices and their use in the modern, business-oriented lifestyle. The explosion of smartphones in enterprise and personal computing heightens concerns about the security and privacy of users. Nowadays mobile phones are used in every walk of life, such as shopping, trading, paying bills and even internet banking; but with these facilities come some drawbacks. Recent studies have shown that applications can host new types of malware. This discussion explores smartphone security through several research works and shows how users themselves can guard against data hacking and other insecurities.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_4-Security_of_mobile_phones_and_their_usage_in_business.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image fusion approach with noise reduction using Genetic algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041103</link>
        <id>10.14569/IJACSA.2013.041103</id>
        <doi>10.14569/IJACSA.2013.041103</doi>
        <lastModDate>2013-11-30T18:20:42.4830000+00:00</lastModDate>
        
        <creator>Gehad Mohamed Taher</creator>
        
        <creator>Mohamed Elsayed Wahed</creator>
        
        <creator>Ghada El Taweal</creator>
        
        <creator>Ahmed Fouad</creator>
        
        <subject>Multi-focus image fusion; Curvelet transform; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>Image fusion is a challenging field owing to its importance in many applications. Multi-focus image fusion, a type of image fusion used in medical, surveillance and military applications, combines multiple images, each focused on a different part of the scene, into a single image that is in focus everywhere. To make the input images more accurate before fusion, we use a Genetic Algorithm (GA) for image de-noising as a preprocessing step. In this paper we introduce a new approach that begins with GA-based image de-noising and then applies the curvelet transform for image decomposition to obtain a multi-focus fused image that is focused in all of its parts. The results show that the curvelet transform is effective at detecting image activity along curves and increases the quality of the obtained fused images. Applying the mean fusion rule for fusing multi-focus images gives more accurate results than the PCA, contrast and mode fusion rules; GA also shows more accurate results in image de-noising when compared to the contourlet transform.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_3-Image_fusion_approach_with_noise_reduction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Learning Resource for the Saleng Community under the Bridge of Zone 1 Entitled “How to Repair Electrical Appliances”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041102</link>
        <id>10.14569/IJACSA.2013.041102</id>
        <doi>10.14569/IJACSA.2013.041102</doi>
        <lastModDate>2013-11-30T18:20:40.5970000+00:00</lastModDate>
        
        <creator>Pornpapatsorn Princhankol</creator>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Paveena Thambunharn</creator>
        
        <creator>Wuttipong Phansatarn</creator>
        
        <subject>learning resource; Saleng community under the bridge of zone 1; repairing electrical appliances</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>This research aimed to develop a learning resource for the Saleng community under the bridge of zone 1 entitled “How to Repair Electrical Appliances”, in order to determine the quality of the learning resource and to examine the satisfaction of the sampling group, which was chosen using a specified sampling method. The tools used in this study consisted of 1) the learning resource entitled “How to Repair Electrical Appliances”, including video clips, posters, leaflets, and exhibitions; 2) quality assessment forms; and 3) a satisfaction evaluation form. According to the quality assessment of the learning resource by a panel of 6 experts, the mean score for the quality of the contents was 4.83 with an S.D. of 0.29, a very good level. For the quality of the media, the mean score was 4.47 with an S.D. of 0.63, a good level. Afterwards, the learning resource was used by a sampling group of 30 persons. Satisfaction towards the learning resource was 4.71 with an S.D. of 0.64, the highest level. It can be concluded that the learning resource for the Saleng community under the bridge of zone 1 entitled “How to Repair Electrical Appliances” can be used as a practical instructional tool for the community.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_2-The_Development_of_Learning_Resource.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implicit sensitive text summarization based on data conveyed by connectives</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041101</link>
        <id>10.14569/IJACSA.2013.041101</id>
        <doi>10.14569/IJACSA.2013.041101</doi>
        <lastModDate>2013-11-30T18:20:38.7530000+00:00</lastModDate>
        
        <creator>Henda Chorfi Ouertani</creator>
        
        <subject>Automatic summarization; implicit data; topoi; topos; argumentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(11), 2013</description>
        <description>So far, in trying to reach human capabilities, research in automatic summarization has been based on hypotheses that are both enabling and limiting. Some of these limitations are: how to take into account and reflect (in the generated summary) the implicit information conveyed in the text, the author’s intention, the reader’s intention, the influence of context, general world knowledge, and so on. Thus, if we want machines to mimic human abilities, they will need access to this same large variety of knowledge. The implicit affects the orientation and the argumentation of the text and consequently its summary. Most text summarizers (TS) work by compressing the initial data and thus necessarily suffer from information loss. TS focus on features of the text only, not on what the author intended or why the reader is reading the text. In this paper, we address this problem and present a system focused on acquiring implicit knowledge. We principally spotlight the implicit information conveyed by argumentative connectives such as but, even and yet, and their effect on the summary.</description>
        <description>http://thesai.org/Downloads/Volume4No11/Paper_1-Implicit_sensitive_text_summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-learning as a Research Area: An Analytical Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020923</link>
        <id>10.14569/IJACSA.2011.020923</id>
        <doi>10.14569/IJACSA.2011.020923</doi>
        <lastModDate>2013-11-28T10:54:33.4930000+00:00</lastModDate>
        
        <creator>Sangeeta Kakoty</creator>
        
        <creator>Monohar Lal</creator>
        
        <creator>Shikhar Kr. Sarma</creator>
        
        <subject>E-learning; learning standards; technology enhanced learning procedure; e-education system; dimension of research areas.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>The concept of e-learning is very broad. The term was coined in the late 90s for technologically enhanced learning through the Internet. It now covers a broad range of electronic media, such as the Internet, intranets, extranets, satellite broadcast, audio/video tape, interactive TV and CD-ROM, which make the learning procedure more flexible and user friendly. Because of its flexible nature, e-learning is in growing demand among the people of our country, and the demand is increasing day by day. As demand increases, it is time to standardize the whole e-learning system properly and to raise the quality of existing standards. Though many standards already exist and have been accepted by academia, institutes and organisations, there are still gaps, and work is ongoing to make them more practicable and more systematic. This paper analyses the current e-learning procedure and shows new dimensions of research in this area, covering the important and most neglected research areas in this domain to date. It also analyses the importance of the e-education system and the recent market for e-learning.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 23 - E-learning as a Research Area An Analytical Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Room Impulse Response Modeling in the Sub-2kHz Band using 3-D Rectangular Digital Waveguide Mesh</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020922</link>
        <id>10.14569/IJACSA.2011.020922</id>
        <doi>10.14569/IJACSA.2011.020922</doi>
        <lastModDate>2013-11-28T10:54:31.6830000+00:00</lastModDate>
        
        <creator>Zhixin Chen</creator>
        
        <subject>Embedded Systems; Digital Signal Processing; Room Acoustics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Digital waveguide mesh has emerged as an efficient and straightforward way to model small room impulse response because it solves the wave equation directly in the time domain. In this paper, we investigate the performance of the 3-D rectangular digital waveguide mesh in modeling the low frequency portion of room impulse response. We find that it has similar performance compared to the popular image source method when modeling the low frequency portion of the early specular reflections of the room impulse responses.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 22 - Room Impulse Response Modeling in the Sub-2kHz Band using 3-D Rectangular Digital Waveguide Mesh.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology Based Reuse Algorithm towards Process Planning in Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020921</link>
        <id>10.14569/IJACSA.2011.020921</id>
        <doi>10.14569/IJACSA.2011.020921</doi>
        <lastModDate>2013-11-28T10:54:29.8730000+00:00</lastModDate>
        
        <creator>Shilpa Sharma</creator>
        
        <creator>Maya Ingle</creator>
        
        <subject>Ontological Knowledge Modeling; OntoReuseAlgo; Ontolayering Principle.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>The process planning task for specified design provisions in software development can be significantly improved by referencing a knowledge reuse scheme. Reuse is considered one of the most promising techniques for improving software quality and productivity. Reuse during software development depends heavily on the existing design knowledge in the meta-model, a “read only” repository of information. We have proposed an ontology-based reuse algorithm for process planning in software development. Based on the common conceptual base facilitated by the ontology and the characteristics of the knowledge, the concepts and entities are represented as meta-model and endeavor prospects. The relations between these prospects and their linkage knowledge are used to construct an ontology-based reuse algorithm. In addition, our experiment illustrates the realization of process planning in software development by applying this algorithm. Subsequently, its benefits are delineated.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 21 - An Ontology Based Reuse Algorithm towards Process Planning in Software Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020920</link>
        <id>10.14569/IJACSA.2011.020920</id>
        <doi>10.14569/IJACSA.2011.020920</doi>
        <lastModDate>2013-11-28T10:54:28.0800000+00:00</lastModDate>
        
        <creator>Igli Tafa</creator>
        
        <creator>Elinda Kajo</creator>
        
        <creator>Ariana Bejleri</creator>
        
        <creator>Olimpjon Shurdi</creator>
        
        <subject>Hypervisor; XEN-PV; XEN-HVM; Open-VZ; CPU Consumption; Memory Utilization; Downtime.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>The aim of this paper is to compare the performance of three hypervisors: XEN-PV, XEN-HVM and Open-VZ. We simulated the migration of a virtual machine using a warning-failure approach. Based on a series of experiments we compared CPU Consumption, Memory Utilization, Total Migration Time and Downtime. We also tested the hypervisors’ performance by changing the packet size from 1500 bytes to 64 bytes. From these tests we concluded that Open-VZ has a higher CPU Consumption than XEN-PV, but its Total Migration Time is smaller than that of XEN-PV. XEN-HVM performs worse than XEN-PV, especially with regard to the Downtime parameter.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 20 - The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Computer Aided Transport Monitoring System (CATRAMS) for Manufacturing Organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020919</link>
        <id>10.14569/IJACSA.2011.020919</id>
        <doi>10.14569/IJACSA.2011.020919</doi>
        <lastModDate>2013-11-28T10:54:26.2700000+00:00</lastModDate>
        
        <creator>Dipo Theophilus Akomolafe</creator>
        
        <subject>transport; manufacturing; database; hardware; communication, route; goods; passengers.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Presently, different types of monitoring systems and devices are used to monitor vehicles, products, processes and activities in manufacturing organizations. Each of these devices has its unique strengths and weaknesses, but one problem common to all of them is the lack of a close relationship between the devices and the parameters necessary for an effective and efficient monitoring system. There is therefore a need to develop a system that addresses this shortcoming.
CATRAMS integrates computer and communication facilities to monitor and control the movement of vehicles and goods. Its objectives are to provide detailed information on the movement of vehicles and to reduce the operational delays associated with the movement of vehicles and goods. The system was developed by studying existing devices to identify their limitations, designing a road transport database, specifying hardware requirements and integrating the hardware and software resources into a complete system. The system was tested using data collected from several manufacturing industries, and it was found that the system could provide detailed information about the movement of vehicles and goods.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 19 - Development of a Computer Aided Transport Monitoring System (CATRAMS) for Manufacturing Organizations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Audio Watermarking with Error Correction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020918</link>
        <id>10.14569/IJACSA.2011.020918</id>
        <doi>10.14569/IJACSA.2011.020918</doi>
        <lastModDate>2013-11-28T10:54:24.4600000+00:00</lastModDate>
        
        <creator>Aman Chadha</creator>
        
        <creator>Sandeep Gangundi</creator>
        
        <creator>Rishabh Goel</creator>
        
        <creator>Hiren Dave</creator>
        
        <subject>watermarking; audio watermarking; data hiding; data confidentiality.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>In recent times, communication through the Internet has tremendously facilitated the distribution of multimedia data. Although this is indubitably a boon, one of its repercussions is that it has also given impetus to the notorious issue of online music piracy. Unethical attempts can also be made to deliberately alter such copyrighted data and thus misuse it. Copyright violation through unauthorized distribution, as well as unauthorized tampering with copyrighted audio data, is an important technological and research issue. Audio watermarking has been proposed as a solution to this issue. The main purpose of audio watermarking is to protect the audio data against possible threats; in cases of copyright violation or unauthorized tampering, disputes over the authenticity of such data can be resolved by virtue of the watermark.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 18 - Audio Watermarking with Error Correction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>mSCTP Based Decentralized Mobility Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020917</link>
        <id>10.14569/IJACSA.2011.020917</id>
        <doi>10.14569/IJACSA.2011.020917</doi>
        <lastModDate>2013-11-28T10:54:22.6200000+00:00</lastModDate>
        
        <creator>Waqas A Imtiaz</creator>
        
        <creator>M. Afaq</creator>
        
        <creator>Mohammad A.U. Babar</creator>
        
        <subject>mSCTP; Chord; Multihoming; Seamless mobility.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>To realize the full potential of wireless IP services, Mobile Nodes (MNs) must be able to roam seamlessly across different networks. Mobile Stream Control Transmission Protocol (mSCTP) is a transport layer solution which, unlike Mobile IP (MIP), provides seamless mobility with minimum delay and negligible packet loss. However, mSCTP fails to locate the current IP address of a mobile node when a Correspondent Node (CN) wants to initiate a session. In this paper, we propose the Chord distributed hash table (DHT) to provide the required location management. Chord is a P2P algorithm which can efficiently provide the IP address of the called MN by using its key-value mapping. The proposed decentralized mobility framework jointly exploits the multihoming feature of mSCTP and the efficient key-value mapping of Chord to provide seamless mobility. The suitability of the framework is analyzed through a preliminary analysis of Chord lookup efficiency and the mSCTP handover procedure using Overlay Weaver and NS-2. Performance analysis shows that the mSCTP multihoming feature and Chord’s efficient key-value mapping can provide a non-delayed, reliable and efficient IP handover solution.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 17 - mSCTP Based Decentralized Mobility Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PSO Based Short-Term Hydrothermal Scheduling with Prohibited Discharge Zones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020916</link>
        <id>10.14569/IJACSA.2011.020916</id>
        <doi>10.14569/IJACSA.2011.020916</doi>
        <lastModDate>2013-11-28T10:54:20.8230000+00:00</lastModDate>
        
        <creator>G Sreenivasan</creator>
        
        <creator>C.H.Saibabu</creator>
        
        <creator>S.Sivanagaraju</creator>
        
        <subject>Hydrothermal scheduling; Particle Swarm Optimization; Valve - point loading effect; Prohibited Discharge Zones.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>This paper presents a new approach to determining the optimal hourly schedule of power generation in a hydrothermal power system using the PSO technique. The simulation results reveal that the proposed PSO approach is powerful in terms of convergence speed, computational time and minimum fuel cost.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 16 - PSO Based Short-Term Hydrothermal Scheduling with  Prohibited Discharge Zones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of k-Coverage in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020915</link>
        <id>10.14569/IJACSA.2011.020915</id>
        <doi>10.14569/IJACSA.2011.020915</doi>
        <lastModDate>2013-11-28T10:54:19.0130000+00:00</lastModDate>
        
        <creator>Rasmi Ranjan Patra</creator>
        
        <creator>Prashanta Kumar Patra</creator>
        
        <subject>Wireless sensor networks; coverage; k-coverage; connectivity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Recently, the concept of wireless sensor networks has attracted much attention due to its wide range of potential applications. Wireless sensor networks also pose a number of challenging optimization problems. One of the fundamental problems in sensor networks is the coverage problem, which reflects both the quality of service a particular sensor network can provide and how well the network is monitored or tracked by its sensors; the notion of coverage can be viewed from several perspectives, owing to the variety of sensors and the wide range of their applications. In this paper, we formulate this problem as a decision problem whose goal is to determine the degree of coverage of a sensor network, i.e., whether it is covered by at least k sensors, where k is a predefined value. The sensing ranges of the sensors can be the same or different. Performance evaluation of our protocol indicates that the degree of coverage of a wireless sensor network can be determined within a small period of time, and therefore the energy consumption of the sensor network can be minimized.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 15 - Analysis of k-Coverage in Wireless Sensor Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EGovernment Stage Model: Evaluating the Rate of Web Development Progress of Government Websites in Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020914</link>
        <id>10.14569/IJACSA.2011.020914</id>
        <doi>10.14569/IJACSA.2011.020914</doi>
        <lastModDate>2013-11-28T10:54:17.2200000+00:00</lastModDate>
        
        <creator>Osama Alfarraj</creator>
        
        <creator>Steve Drew</creator>
        
        <creator>Rayed Abdullah AlGhamdi</creator>
        
        <subject>eGovernment; Saudi Arabia government websites; web development progress; eGovernment stage model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>This paper contributes to the issue of eGovernment implementation in Saudi Arabia by discussing the current situation of ministry websites. It evaluates the rate of web development progress of vital government websites in Saudi Arabia using the eGovernment stage model. In 2010, Saudi Arabia ranked 58th in the world and 4th in the Gulf region in eGovernment readiness according to United Nations reports. In particular, Saudi Arabia has ranked 75th worldwide for its online service index and its components compared to the neighbouring Gulf country of Bahrain, which was ranked 8th for the same index. While this is still modest in relation to the Saudi government’s expectation concerning its vision for eGovernment implementation for 2010, and the results achieved by the neighbouring Gulf countries such as Bahrain and the United Arab Emirates on the eGovernment index, the Saudi government has endeavoured to meet the public needs concerning eGovernment and carry out the implementation of eGovernment properly. Governments may heed the importance of actively launching official government websites – the focus of this study – as the main portals for delivering their online services to all the different categories of eGovernment (including G2C, G2B, and G2G). However, certain Saudi ministries have not given due attention to this vital issue. This is evidenced by the fact that some of their websites are not fully developed or do not yet exist, which clearly impedes that particular ministry from appropriately delivering eServices.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 14 - EGovernment Stage Model Evaluating the Rate of Web Development Progress of Government Websites in Saudi Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Users Behavior through Correlation Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020913</link>
        <id>10.14569/IJACSA.2011.020913</id>
        <doi>10.14569/IJACSA.2011.020913</doi>
        <lastModDate>2013-11-28T10:54:15.4100000+00:00</lastModDate>
        
        <creator>Navin Kumar Tyagi</creator>
        
        <creator>A. K. Solanki</creator>
        
        <subject>Web usage mining; FPgrowth; Cosine measure; Usability; Association rules.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Web usage mining is an application of Web mining that focuses on the extraction of useful information from the usage data in server logs. In order to improve the usability of a Web site, so that users can more easily find and retrieve the information they are looking for, we propose a recommendation methodology based on correlation rules. A correlation rule is measured not only by its support and confidence but also by the correlation between itemsets. The proposed methodology recommends interesting Web pages to users on the basis of their behavior as discovered from Web log data. Association rules are generated using the FP-growth approach, and we use two criteria for selecting interesting rules: confidence and the cosine measure. We also propose an algorithm for the recommendation process.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 13 - Prediction of Users Behavior through Correlation Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Retrieval Approach for Web Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020912</link>
        <id>10.14569/IJACSA.2011.020912</id>
        <doi>10.14569/IJACSA.2011.020912</doi>
        <lastModDate>2013-11-28T10:54:13.6170000+00:00</lastModDate>
        
        <creator>Hany M Harb</creator>
        
        <creator>Khaled M. Fouad</creator>
        
        <creator>Nagdy M. Nagdy</creator>
        
        <subject>Semantic Web; information retrieval; Semantic Retrieval; Semantic Similarity; WordNet.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Because of the explosive growth of resources on the Internet, information retrieval technology has become particularly important. However, current retrieval methods are essentially based on full-text keyword matching; they lack semantic information and cannot understand the user&#39;s query intent very well. These methods return a large amount of irrelevant information and are unable to meet the user&#39;s request. The systems established so far have failed to fully overcome the limitations of keyword-based search, as they are built from variations of classic models that represent information by keywords. Using the Semantic Web is one way to increase the precision of information retrieval systems.
In this paper, we propose a semantic information retrieval approach that extracts information from web documents in a certain domain (jaundice diseases) by collecting the domain-relevant documents with a focused crawler based on a domain ontology, and by matching semantically similar content against a given user’s query. The semantic retrieval approach aims to discover semantically similar terms in documents and query terms using WordNet.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 12 - Semantic Retrieval Approach for Web Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Query by Humming and Metadata Search System (HQMS) Analysis over Diverse Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020911</link>
        <id>10.14569/IJACSA.2011.020911</id>
        <doi>10.14569/IJACSA.2011.020911</doi>
        <lastModDate>2013-11-28T10:54:11.8070000+00:00</lastModDate>
        
        <creator>Nauman Ali Khan</creator>
        
        <creator>Mubashar Mushtaq</creator>
        
        <subject>Query by humming; Metadata; MIR; Hybrid system; Pipe and Filter; Evaluation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Retrieval of music content over the web is a difficult task that poses several significant challenges. Song retrieval over the web is an emerging problem in the field of Music Information Retrieval. Several search techniques based on a song’s metadata and content have been developed and implemented. In this paper we propose and evaluate a new hybrid technique for song retrieval, named the “hybrid query by humming and metadata search system” (HQMS). HQMS is a hybrid model based on metadata and query by humming. HQMS narrows down the relevant results and thus improves the overall accuracy of the system: two filters, metadata and query by humming, work together to enhance accuracy. The system is evaluated song-wise, metadata-wise and age-wise. Issues related to query reformulation are also highlighted in the paper. Moreover, the evaluation validates the hypothesis and the proposed architecture.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 11 - Hybrid Query by Humming and Metadata Search System (HQMS) Analysis over Diverse Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Cellular Structures in High-Capacity Cellular Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020910</link>
        <id>10.14569/IJACSA.2011.020910</id>
        <doi>10.14569/IJACSA.2011.020910</doi>
        <lastModDate>2013-11-28T10:54:09.9970000+00:00</lastModDate>
        
        <creator>R K Jain</creator>
        
        <creator>Sumit Katiyar</creator>
        
        <creator>N. K. Agrawal Sr Sr M IEEE</creator>
        
        <subject>Hierarchical cellular structures; Total frequency hopping; Adaptive frequency allocation; Communication system traffic; Code division multiple access (CDMA).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>In the prevailing cellular environment, it is important to provide resources for the fluctuating traffic demand exactly in the place and at the time where and when they are needed. In this paper, we explore the ability of hierarchical cellular structures with inter-layer reuse to increase the capacity of a mobile communication network by applying total frequency hopping (T-FH) and adaptive frequency allocation (AFA) as a strategy to reuse macro- and micro-cell resources without frequency planning in indoor pico cells [11]. The practical aspects of designing macro–micro cellular overlays in existing large urban areas are also explained [4]. Femto cells are introduced into the macro/micro/pico cell hierarchical structure to achieve the required QoS cost-effectively.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 10 - Hierarchical Cellular Structures in High-Capacity Cellular Communication Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Some New Results about The Period of Recurring Decimal</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020909</link>
        <id>10.14569/IJACSA.2011.020909</id>
        <doi>10.14569/IJACSA.2011.020909</doi>
        <lastModDate>2013-11-28T10:54:08.0170000+00:00</lastModDate>
        
        <creator>Bo Zhang</creator>
        
        <subject>Recurring decimal; Period; Prime.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>This study mainly discusses the period problem of recurring decimals. Based on Euler’s theorem, this paper gives a formula for computing the period of a recurring decimal, the relation between the period and the least positive period, and a necessary and sufficient condition for the period to be equal to the least positive period.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 9 - Some New Results about The Period of Recurring Decimal .pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Reliability Model for Evaluating Trustworthiness of Intelligent Agents in Vertical Handover</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020908</link>
        <id>10.14569/IJACSA.2011.020908</id>
        <doi>10.14569/IJACSA.2011.020908</doi>
        <lastModDate>2013-11-28T10:54:06.0500000+00:00</lastModDate>
        
        <creator>Kailash Chander</creator>
        
        <creator>Dimple Juneja</creator>
        
        <subject>Trust Certificate; Mobile Agents; Vertical Handover; Secure Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Our previous works have proposed the deployment of mobile agents to assist vertical handover decisions in 4G. Adding a mobile agent in 4G can bring many advantages, such as reduced network bandwidth consumption, lower network latency and a reduction in the time taken to complete a particular task. However, this deployment demands that the deployed collection of agents be secure and trustworthy. Security of a mobile agent includes maintaining the confidentiality, authenticity and integrity not only of the agent employed but also of the system in which it is deployed. In fact, many conventional security solutions exist; however, very few of them address the challenge of introducing trusted computing in mobile agents, deployed in 4G in particular. This paper proposes a new reliability model that implements trust certificates for mobile agents in vertical handover.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 8 - A New Reliability Model for Evaluating Trustworthiness of Intelligent Agents in Vertical Handover.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polarimetric SAR Image Classification with High Frequency Component Derived from Wavelet Multi Resolution Analysis: MRA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020907</link>
        <id>10.14569/IJACSA.2011.020907</id>
        <doi>10.14569/IJACSA.2011.020907</doi>
        <lastModDate>2013-11-28T10:54:04.2400000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>polarimetric SAR; polarization signature; wavelet MRA.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>A method for polarimetric Synthetic Aperture Radar (SAR) image classification using the high frequency component derived from wavelet Multi-Resolution Analysis (MRA) is proposed. Although it is well known that the polarization signature derived from fully polarized SAR data is useful for SAR image classification, how to utilize the polarization signature in image classification remains an open question. The high frequency component of the polarization signature calculated from the fully polarized SAR data is taken into account as one of the features used in the classification. An improvement in classification performance is thus achieved by the proposed method, in which this feature is included in the feature space of a Maximum Likelihood based classification method.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 7 - Polarimetric SAR Image Classification with High Frequency Component Derived from Wavelet Multi Resolution Analysis MRA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An improved Approach for Document Retrieval Using Suffix Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020906</link>
        <id>10.14569/IJACSA.2011.020906</id>
        <doi>10.14569/IJACSA.2011.020906</doi>
        <lastModDate>2013-11-28T10:54:02.4300000+00:00</lastModDate>
        
        <creator>N Sandhya</creator>
        
        <creator>A. Govardhan</creator>
        
        <creator>Y. Sri Lalitha</creator>
        
        <creator>K. Anuradha</creator>
        
        <subject>Document retrieval; Frequent Word Sequences; Suffix tree; Traversal technique.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Huge collections of documents are available within a few mouse clicks. The current World Wide Web is a web of pages. Users have to guess possible keywords that might lead, through search engines, to the pages that contain the information of interest, and then browse hundreds or even thousands of the returned pages in order to obtain what they want. In our work we build a generalized suffix tree for our documents and propose a search technique for retrieving documents based on phrases called word sequences. Our proposed method efficiently searches for a given phrase (with missing or additional words in between) with better performance.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 6 - An improved Approach for Document Retrieval Using Suffix Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Embedded Object Detection with Radar Echo Data by Means of Wavelet Analysis of MRA: Multi-Resolution Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020905</link>
        <id>10.14569/IJACSA.2011.020905</id>
        <doi>10.14569/IJACSA.2011.020905</doi>
        <lastModDate>2013-11-28T10:54:00.6370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Three dimensional wavelet transformation; Radar echo data; wavelet MRA; Edge detection.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>A method for embedded object detection with radar echo data by means of wavelet Multi-Resolution Analysis (MRA), in particular three-dimensional wavelet transformation, is proposed. In order to improve embedded object detection capability, not only one-dimensional radar echo data but also three-dimensional data are used. Through a comparison between one-dimensional edge detection with the Sobel operator and three-dimensional wavelet-transformation-based edge detection, it is found that the proposed method is superior to the Sobel-operator-based method.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 5 - Embedded Object Detection with Radar Echo Data by Means of Wavelet Analysis of MRA Multi-Resolution Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Backup Communication Routing Through Internet Satellite, WINDS for Transmitting of Disaster Relief Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020904</link>
        <id>10.14569/IJACSA.2011.020904</id>
        <doi>10.14569/IJACSA.2011.020904</doi>
        <lastModDate>2013-11-28T10:53:58.7500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>internet satellite; disaster mitigation; TCP/IP protocol; accelerator.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>A countermeasure for the round-trip delay that occurs between satellite and ground is investigated using a network accelerator, together with the operating-system dependency of the accelerator's effectiveness. Disaster relief data transmission experiments are also conducted for disaster mitigation, accelerating disaster-related data transmission between local governments and a disaster prevention center. Disaster relief information, including remote sensing satellite images and information from the disaster-affected areas, is sent to the local government for the creation of evacuation information; this transmission is accelerated so that the information can be delivered to residents of the affected areas through data broadcasting on the digital TV channel.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 4 - Backup Communication Routing Through Internet Satellite, WINDS for Transmitting of Disaster Relief Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Scheme for MANET Domain Formation (ESMDF)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020903</link>
        <id>10.14569/IJACSA.2011.020903</id>
        <doi>10.14569/IJACSA.2011.020903</doi>
        <lastModDate>2013-11-28T10:53:56.9230000+00:00</lastModDate>
        
        <creator>Adwan AbdelFattah</creator>
        
        <subject>MANET; ESMDF; Domain; Domain Router; Domain Formation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>Mobile Ad hoc Networks (MANETs) have a random topology, as MANET devices may leave or join the network at any time. The dynamic nature of MANETs makes achieving secrecy, connectivity and high performance a big challenge and a complex task. In this paper, we propose an efficient technique for the dynamic construction of large MANETs based on dividing the network into interoperable domains. This technique is a hybrid of centralized and distributed control of packet forwarding that balances power consumption, minimizes the routing tables and improves the security features. The principle of domain formation is based on joining adjacent devices into one group controlled by one capable device called the domain controller. The presented scheme enhances the throughput and the stability of large MANETs by minimizing the flooding of messages for keeping track of devices and during domain formation.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 3 - An Efficient Scheme for MANET Domain Formation (ESMDF).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards an Adaptive Learning System Based on a New Learning Object Granularity Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020902</link>
        <id>10.14569/IJACSA.2011.020902</id>
        <doi>10.14569/IJACSA.2011.020902</doi>
        <lastModDate>2013-11-28T10:53:55.1300000+00:00</lastModDate>
        
        <creator>Amal Battou</creator>
        
        <creator>Ali El Mezouary</creator>
        
        <creator>Chihab Cherkaoui</creator>
        
        <creator>Driss Mammass</creator>
        
        <subject>adaptability; learning content; adaptive learning systems; learning object; granularity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>To achieve the required adaptability, adaptive learning systems (ALS) take advantage of granular and reusable content. The main goal of this paper is to examine the learning object granularity issue, which is directly related to Learning Object (LO) reusability and the adaptability process required in an ALS. For that purpose, we present the learning objects approach and the related technologies. Then, we discuss fine granularity as a fundamental characteristic for reaching the adaptability and individualization required in an ALS. After that, we review some learning object granularity approaches from the literature before presenting our own granularity approach. Finally, we propose an example implementation of our approach to test its ability to meet the properties associated with fine granularity and adaptability.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 2 - Towards an Adaptive Learning System Based on a New Learning Object Granularity Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of the Segmentation by Multispectral Fusion Approach with Adaptive Operators: Application to Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020901</link>
        <id>10.14569/IJACSA.2011.020901</id>
        <doi>10.14569/IJACSA.2011.020901</doi>
        <lastModDate>2013-11-28T10:53:53.2730000+00:00</lastModDate>
        
        <creator>Lamiche Chaabane</creator>
        
        <creator>Moussaoui Abdelouahab</creator>
        
        <subject>image fusion; possibility theory; segmentation; MR images.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(9), 2011</description>
        <description>With the development of image acquisition techniques, more and more image data from different sources have become available. Multi-modality image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single modality. In medical-imaging-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents the evaluation of the segmentation of MR images using the multispectral fusion approach in the possibility theory context. Some results are presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume2No9/Paper 1 - Evaluation of the Segmentation by Multispectral Fusion Approach with Adaptive Operators Application to Medical Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Editorial</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.03030</link>
        <id>10.14569/SpecialIssue.2013.03030</id>
        <doi>10.14569/SpecialIssue.2013.03030</doi>
        <lastModDate>2013-11-25T14:48:49.5200000+00:00</lastModDate>
        
        <creator>Gregory Giuliani</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <subject>enviroGRIDS; Observation System; Spatial Data Infrastructure; Grid computing; Black Sea; Remote sensing; Hydrological modeling; GEOSS</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>The Black Sea Catchment Observation System has been developed in the frame of the EU/FP7 enviroGRIDS project to inform about crucial regional environmental issues. This system is now making resources accessible to a large community of users for data management and publishing, for hydrological models calibration and execution, for satellite image processing, for report generation and visualization, and for decision support. In this special issue, we present the different components that were developed as well as the encountered challenges in order to bring innovative contributions into the Global Earth Observation System of Systems. One of the major issues was to enable data exchange across different heterogeneous components and infrastructures, more specifically Spatial Data and Grid infrastructures. The interoperability standards proposed by the Open Geospatial Consortium (OGC) support the scalability and the efficient combination of the complex specialized functionalities and the computation potential of these platforms. Another important issue was to build the human, institutional and infrastructure capacities to contribute and use this new observation system.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper-Editorial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OWS4SWAT: Publishing and Sharing SWAT Outputs with OGC standards</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030311</link>
        <id>10.14569/SpecialIssue.2013.030311</id>
        <doi>10.14569/SpecialIssue.2013.030311</doi>
        <lastModDate>2013-11-20T19:49:13.4870000+00:00</lastModDate>
        
        <creator>Gregory Giuliani</creator>
        
        <creator>Kazi Rahman</creator>
        
        <creator>Nicolas Ray</creator>
        
        <creator>Anthony Lehmann</creator>
        
        <subject>enviroGRIDS; SWAT; Water; Interoperability; WPS; OGC; GEOSS; Black Sea</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>The Soil and Water Assessment Tool (SWAT) is a widely used hydrological model that produces several useful outputs (e.g. evapotranspiration, soil moisture, aquifer recharge, river discharge) as text files. Currently, visualizing and publishing SWAT outputs as geospatial data requires a lot of time and repetitive processing steps. Moreover, the data used and produced are often not interoperable and are restricted to software like ArcGIS or MapWindow. Consequently, integrating SWAT outputs with other datasets and/or models is difficult. To solve these issues, we propose an innovative, scalable and interoperable framework allowing (1) the automation of post-processing tasks using orchestrated Web Processing Services (WPS) and (2) the publishing of SWAT outputs using interoperable data services (e.g. Web Feature Service, Web Map Service). The proposed framework simplifies map/data production and facilitates the exchange/integration of hydrological data with other sources.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_11-OWS4SWAT_Publishing_and_Sharing_SWAT_Outputs_with_OGC_standards.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Based Access to Water Related Data Using OGC WaterML 2.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030310</link>
        <id>10.14569/SpecialIssue.2013.030310</id>
        <doi>10.14569/SpecialIssue.2013.030310</doi>
        <lastModDate>2013-11-20T19:49:11.6770000+00:00</lastModDate>
        
        <creator>Adrian Almoradie</creator>
        
        <creator>Ioana Popescu</creator>
        
        <creator>Andreja Jonoski</creator>
        
        <creator>Dimitri Solomatine</creator>
        
        <subject>WaterML 2.0; GeoServer; Flood information dissemination; web-based; Somes Mare</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>As the Open Geospatial Consortium (OGC) has adopted WaterML 2.0 as the encoding standard for representing hydro-meteorological time series data, the water community is in need of tools and methods for delivering such data over the web. This article presents experiences with one approach for publishing water-related data over the web, based on the OGC WaterML 2.0-GeoServer framework and methods developed by Australia&#39;s Commonwealth Scientific and Industrial Research Organisation (CSIRO). This implementation is one component of a web-based flood information system for the Somes Mare basin in Romania, which has been developed within the enviroGRIDS EU FP7 research project.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_10-Web_Based_Access_to_Water_Related_Data_Using_OGC_WaterML_2.0.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interoperable, GIS-oriented, Information and Support System for Water Resources Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030309</link>
        <id>10.14569/SpecialIssue.2013.030309</id>
        <doi>10.14569/SpecialIssue.2013.030309</doi>
        <lastModDate>2013-11-20T19:49:09.8670000+00:00</lastModDate>
        
        <creator>Pierluigi Cau</creator>
        
        <creator>Simone Manca</creator>
        
        <creator>Costantino Soru</creator>
        
        <creator>Davide Muroni</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <creator>Victor Bacu</creator>
        
        <creator>Anthony Lehman</creator>
        
        <creator>Nicolas Ray</creator>
        
        <creator>Gregory Giuliani</creator>
        
        <subject>enviroGRIDS; GIS; SWAT; DSS; Argilla; Black Sea Catchment; Mapserver; BASHYT;  Hydrology; Interoperability; OGC; Portal</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Important objectives of the four-year enviroGRIDS project encompass the improvement of transnational cooperation, the use of state-of-the-art Information and Communication Technologies for data analysis and sharing, and the application of environmental models for monitoring present and predicting future states of the environment in the Black Sea region. In such a transnational context, there is a dire need for the environmental sciences to evolve from a simple, local-scale vision toward a complex, multi-user, multilayered holistic approach. BASHYT (http://swat.crs4.it/) is a Web-based, GIS-oriented information and support tool, part of the Black Sea Catchment Observation System (BSC-OS). It exposes a set of applications for data management, analysis and visualization, and a complete server- and client-side development framework (wiki-like) to create Web contents. The core of the portal relies on the semi-distributed hydrological SWAT code to model the water cycle and predict the effect of management decisions on water, sediment, nutrient and pesticide yields in large river basins. Furthermore, BASHYT aims at quantifying the interconnectedness between (human and natural) pressures and the states of water body receptors at different space and time scales. The aim is to enhance environmental management capacity to assess water resources and to share and process large amounts of key environmental information. Within an experimental and innovative programming environment, modules have been developed to run near real-time applications based on numerical solvers (SWAT is just one example), run pre- and post-processing codes, and query and map results through the Web browser. A set of OGC web services and a complete Application Programming Interface (API) are also exposed by the portal. We expect to improve the ways in which land management systems can operate, and to improve model usability to aid in making management decisions and in watershed-scale modeling.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_9-An_Interoperable,_GIS-oriented_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Calibration of SWAT Hydrological Models in a Distributed Environment Using the gSWAT Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030308</link>
        <id>10.14569/SpecialIssue.2013.030308</id>
        <doi>10.14569/SpecialIssue.2013.030308</doi>
        <lastModDate>2013-11-20T19:49:06.4500000+00:00</lastModDate>
        
        <creator>Victor Bacu</creator>
        
        <creator>Danut Mihon</creator>
        
        <creator>Teodor Stefanut</creator>
        
        <creator>Denisa Rodila</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <subject></subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Topics such as the sustainability and vulnerability of land management practices with respect to water quality and quantity are very important these days, both for decision makers and for citizens. The enviroGRIDS FP7 project addresses some of these topics in the Black Sea Catchment area. One of the software tools developed in this project is gSWAT. It allows the calibration of SWAT hydrological models in a flexible development environment and uses distributed computational infrastructures to speed up the simulations. The development of SWAT (Soil and Water Assessment Tool) hydrological models is a well-known procedure for hydrological specialists, and this paper highlights, from the end-users' point of view, the scenarios related to the calibration procedures available in the gSWAT application.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_8-Calibration_of_SWAT_Hydrological_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remotely Sensed Data Processing on Grids by Using GreenLand Web Based Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030307</link>
        <id>10.14569/SpecialIssue.2013.030307</id>
        <doi>10.14569/SpecialIssue.2013.030307</doi>
        <lastModDate>2013-11-20T19:49:03.0330000+00:00</lastModDate>
        
        <creator>Filiz Bektas Balcik</creator>
        
        <creator>Danut Mihon</creator>
        
        <creator>Vlad Colceriu</creator>
        
        <creator>Karin Allenbach</creator>
        
        <creator>Cigdem Goksel</creator>
        
        <creator>A. Ozgur Dogru</creator>
        
        <creator>Gregory Giuliani</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <subject>GreenLand; Landsat; MODIS; image processing; Grid computing</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Developing applications for analyzing and processing different remotely sensed data is very important for environmental predictions and management strategies. Applications focusing on environmental and natural-resource monitoring need large data sets to be processed and a fast response to actions. These requirements mostly imply high computing power, which can be achieved through the parallel and distributed capabilities provided by the Grid infrastructure. This paper presents the GreenLand application, a user-friendly web-based platform that enables environmental specialists to run remote sensing applications using Grid computing technology. Theoretical concepts and basic functionalities of the GreenLand platform were tested in two detailed case studies: a land cover/use determination analysis in Istanbul (Turkey), conducted with vegetation indices and density-slice classification on Landsat 5 Thematic Mapper (TM) imagery, and the retrieval of large remote sensing product datasets (Moderate Resolution Imaging Spectroradiometer, MODIS) for the entire Black Sea Catchment. All the image processing scenarios used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea Catchment (BSC) area.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_7-Remotely_Sensed_Data_Processing_on_Grids.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mathematical Modeling of Distributed Image Processing Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030306</link>
        <id>10.14569/SpecialIssue.2013.030306</id>
        <doi>10.14569/SpecialIssue.2013.030306</doi>
        <lastModDate>2013-11-20T19:48:59.6030000+00:00</lastModDate>
        
        <creator>Vlad Colceriu</creator>
        
        <creator>Danut Mihon</creator>
        
        <creator>Angela Minculescu</creator>
        
        <creator>Victor Bacu</creator>
        
        <creator>Denisa Rodila</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <subject></subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Satellite images play an important role in developing Geographical Information System software applications that prove useful for the analysis of different Earth Science phenomena. Accurate results are obtained from high-resolution images, or by applying the same algorithm multiple times over a specific input data set. In both cases the data volume that needs to be processed is large, and usually involves distributed infrastructures. In order for non-technical users to use these algorithms, the algorithms should be described in a flexible manner, using workflow structure models. This paper highlights the main achievements within the GreenLand platform regarding the scheduling, execution, and monitoring of Grid processes. Its development is based on the simple but powerful notion of mathematical directed acyclic graphs, which are used in parallel and distributed executions over the Grid infrastructure.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_6-Mathematical_Modeling_of_Distributed_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grid Based Processing of Satellite Images in GreenLand Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030305</link>
        <id>10.14569/SpecialIssue.2013.030305</id>
        <doi>10.14569/SpecialIssue.2013.030305</doi>
        <lastModDate>2013-11-20T19:48:56.1870000+00:00</lastModDate>
        
        <creator>Danut Mihon</creator>
        
        <creator>Vlad Colceriu</creator>
        
        <creator>Victor Bacu</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <subject></subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Geographical Information System (GIS) applications that process large amounts of data require intensive usage of the hardware capabilities provided by distributed platforms, such as the Grid infrastructure. Due to the constant demand for data availability and data sharing, regardless of format and size, a new software solution is needed. GreenLand is a system capable of providing such a solution, based on its constituent modules: GreenLandGUI, gProcess, ESIP, WorkflowEditor, and OperatorEditor. This paper highlights each of them and how they interact in order to create a platform capable of fetching, processing, and visualizing large amounts of data exposed in a uniform and standardized manner.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_5-Grid_Based_Processing_of_Satellite_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OGC Compliant Services for Remote Sensing Processing over the Grid Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030304</link>
        <id>10.14569/SpecialIssue.2013.030304</id>
        <doi>10.14569/SpecialIssue.2013.030304</doi>
        <lastModDate>2013-11-20T19:48:54.3770000+00:00</lastModDate>
        
        <creator>Danut Mihon</creator>
        
        <creator>Vlad Colceriu</creator>
        
        <creator>Victor Bacu</creator>
        
        <creator>Denisa Rodila</creator>
        
        <creator>Dorian Gorgan</creator>
        
        <creator>Karin Allenbach</creator>
        
        <creator>Gregory Giuliani</creator>
        
        <subject></subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>The latest issues in simulating and analyzing different Earth Science phenomena require the development of complex algorithms based on satellite images in different formats. The main goal of this paper is to process these data in a standard manner and to automatically publish the execution results by using the latest Web Processing Services (WPS). The development of these services needs to be slightly different when large volumes of data are processed over the Grid infrastructure, as opposed to standalone machines. This paper provides an implementation solution for the WPS standard within the GreenLand platform, and exemplifies it on the Black Sea catchment hydrologic modeling use case.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_4-OGC_Compliant_Services_for_Remote_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enabling Efficient Discovery of and Access to Spatial Data Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030303</link>
        <id>10.14569/SpecialIssue.2013.030303</id>
        <doi>10.14569/SpecialIssue.2013.030303</doi>
        <lastModDate>2013-11-20T19:48:50.9600000+00:00</lastModDate>
        
        <creator>Karel Charvat</creator>
        
        <creator>Premysl Vohnout</creator>
        
        <creator>Michal Sredl</creator>
        
        <creator>Stepan Kafka</creator>
        
        <creator>Tomas Mildorf</creator>
        
        <creator>Andrea De Bono</creator>
        
        <creator>Gregory Giuliani</creator>
        
        <subject>EnviroGRIDS; web services; discovery; metadata; geoportal; SDI; INSPIRE; OGC; SuperCAT</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>Spatial data represent valuable information and a basis for decision-making processes in society. The number of specialisms that use spatial data for such purposes is increasing, as is the number of services enabling users to search, access, process, analyse or visualise spatial data. Standardisation activities of the Open Geospatial Consortium (OGC) support the standardised sharing of services through the Web. However, many services declared as OGC compliant do not respond or are not available. The paper introduces an innovative solution for efficient discovery of and access to spatial data services compliant with OGC specifications. The research was performed in the context of the EnviroGRIDS geoportal. Several thousand harvested services were quality checked, and a summary of the testing, including the identified problems, is presented.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_3-Enabling_Efficient_Discovery_of_and_Access_to_Spatial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Regional Capacities for GEOSS and INSPIRE: a journey in the Black Sea Catchment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030302</link>
        <id>10.14569/SpecialIssue.2013.030302</id>
        <doi>10.14569/SpecialIssue.2013.030302</doi>
        <lastModDate>2013-11-20T19:48:47.5430000+00:00</lastModDate>
        
        <creator>Gregory Giuliani</creator>
        
        <creator>Nicolas Ray</creator>
        
        <creator>Anthony Lehmann</creator>
        
        <subject>enviroGRIDS; Capacity Building; GEOSS; Black Sea; Spatial Data Infrastructure; Grid computing</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>To understand environmental systems like the Black Sea catchment, it is necessary to gather and integrate different datasets. However, data discoverability, accessibility and integration are among the most frequent difficulties that scientists regularly face. To tackle these issues, capacity building (at the human, institutional, and technical levels) is recognized as a key enabler to raise awareness and create commitments on the benefits of data sharing and publication using interoperable services. In this paper, we present experiences and lessons learnt in the frame of the EU FP7 project enviroGRIDS in developing a network of GEO partners and an efficient strategy to build the capacities of scientists from different countries in the Black Sea region. As a result, 27 services, providing access to more than 300 (local or regional) environmental datasets corresponding to around 300’000 layers, are currently registered in the Global Earth Observation System of Systems (GEOSS). Finally, we discuss the added value for stakeholders in the region of participating in GEOSS and the European data-sharing directive INSPIRE, and how to improve its visibility and credibility in the research community and among potential end users.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_2-Building_Regional_Capacities_for_GEOSS_and_INSPIRE_a_journey_in_the_Black_Sea_Catchment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Black Sea Catchment Observation System as a Portal for GEOSS Community</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030301</link>
        <id>10.14569/SpecialIssue.2013.030301</id>
        <doi>10.14569/SpecialIssue.2013.030301</doi>
        <lastModDate>2013-11-20T19:48:44.0000000+00:00</lastModDate>
        
        <creator>Dorian Gorgan</creator>
        
        <creator>Gregory Giuliani</creator>
        
        <creator>Nicolas Ray</creator>
        
        <creator>Anthony Lehmann</creator>
        
        <creator>Pierluigi Cau</creator>
        
        <creator>Karim Abbaspour</creator>
        
        <creator>Karel Charvat</creator>
        
        <creator>Andreja Jonoski</creator>
        
        <subject>Grid computing; geospatial; SWAT hydrological model; satellite image processing; spatial data processing; distributed computing</subject>
        <description>Special Issue(SpecialIssue), 3(3), 2013</description>
        <description>The resources of the enviroGRIDS system are accessible to the large community of users through the BSCOS Portal, which provides Web applications for data management, hydrological model calibration and execution, satellite image processing, report generation and visualization, citizen-oriented applications, and a virtual training center. The portal publishes over the Internet both the geospatial functionality provided by Web technologies and the high-power computation resources supported by Grid technologies. The paper highlights issues in the implementation of the portal with heterogeneous technologies, in order to support control flow, processing, and visualization of spatial data for the GEOSS community, Earth Science specialists, and Web users in general.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo7/Paper_1-Black_Sea_Catchment_Observation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of assets behavior in financial series using machine learning algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021107</link>
        <id>10.14569/IJARAI.2013.021107</id>
        <doi>10.14569/IJARAI.2013.021107</doi>
        <lastModDate>2013-11-09T14:35:03.7270000+00:00</lastModDate>
        
        <creator>Diego Esteves da Silva</creator>
        
        <creator>Julio Cesar Duarte</creator>
        
        <subject>Forecasting; Stock Market; Machine Learning; Financial Series</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>The prediction of financial assets using either classification or regression models is a challenge that has been growing in recent years, despite the large number of published forecasting models for this task. Basically, the non-linear tendency of the series and the unexpected behavior of assets (compared to forecasts generated in studies of fundamental or technical analysis) make this problem very hard to solve. In this work, we present some modeling techniques for this task using Support Vector Machines (SVM) and a comparative performance analysis against other basic machine learning approaches, such as Logistic Regression and Naive Bayes. We use an evaluation set based on company stocks of the BVM&amp;F, the official stock market in Brazil, the third largest in the world. We show good prediction results, and we conclude that it is not possible to find a single model that generates good results for every asset. We also present how to evaluate such parameters for each model. The generated model can also provide additional information to other approaches, such as regression models.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_7-Prediction_of_assets_behavior_in_financial_series_using_machine_learning_algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sub-goal based Robot Visual Navigation through Sensorial Space Tesselation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021106</link>
        <id>10.14569/IJARAI.2013.021106</id>
        <doi>10.14569/IJARAI.2013.021106</doi>
        <lastModDate>2013-11-09T14:35:00.3100000+00:00</lastModDate>
        
        <creator>George Palamas</creator>
        
        <creator>J. Andrew Ware</creator>
        
        <subject>Self Organization; Growing Neural Gas; Genetic Algorithm; Robot Map Building; Visual Servoing; Recurrent Neural Network</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>In this paper, we propose an evolutionary cognitive architecture to enable a mobile robot to cope with the task of visual navigation. Initially, a graph-based world representation is used to build a map, prior to navigation, through an appearance-based scheme using only features associated with color information. In the next step, a genetic algorithm evolves a navigation controller that the robot uses for visual servoing, driving through a set of nodes on the topological map. Experiments in simulation show that an evolved robot, adapted to both exteroceptive and proprioceptive data, is able to successfully drive through a list of sub-goals, minimizing the problem of local minima in which the evolutionary process can sometimes get trapped. We also show that this approach allows a simplistic fitness formula that is nevertheless descriptive enough for targeting specific goals.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_6-Sub-goal_based_Robot_Visual_Navigation_through.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fisher Distance Based GA Clustering Taking Into Account Overlapped Space Among Probability Density Functions of Clusters in Feature Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021105</link>
        <id>10.14569/IJARAI.2013.021105</id>
        <doi>10.14569/IJARAI.2013.021105</doi>
        <lastModDate>2013-11-09T14:34:56.8470000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>GA clustering; Fisher distance; crossover; mutation; overlapped space among probability density functions of clusters</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>A Fisher distance based Genetic Algorithm (GA) clustering method which takes into account overlapped space among probability density functions of clusters in feature space is proposed. Through experiments with simulated 2D and 3D feature-space data generated by a random number generator, it is found that clustering performance depends on the overlapped space among the probability density functions of clusters. A relation is also found between clustering performance and the GA parameters, crossover and mutation probability, as well as the number of features and the number of clusters.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_5-Fisher_Distance_Based_GA_Clustering_Taking_Into.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Frequent Physical Health Monitoring as Vital Signs with Psychological Status Monitoring for Search and Rescue of Handicapped, Diseased and Eldery Persons</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021104</link>
        <id>10.14569/IJARAI.2013.021104</id>
        <doi>10.14569/IJARAI.2013.021104</doi>
        <lastModDate>2013-11-09T14:34:55.0530000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>vital sign; heart beat pulse rate; body temperature; blood pressure; breathing; consciousness; sensor network</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>A method and system for frequent health monitoring of vital signs, together with psychological status monitoring, for search and rescue of handicapped persons is proposed. Heart beat pulse rate, body temperature, blood pressure, breathing and consciousness are well-known vital signs. In particular for persons with Alzheimer's disease, handicapped persons, etc., it is very important to monitor vital signs, especially in the event of evacuation from disaster-affected areas, together with location and attitude information. Then such persons who need help with evacuation can survive. Through experiments wearing the proposed sensors with three healthy persons, including male and female, young and elderly persons, and one diseased person, it is found that the proposed system is useful. It is also found that the proposed system can be used for frequent health condition monitoring. Furthermore, physical health monitoring errors due to psychological condition can be corrected with the proposed system.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_4-Frequent_Physical_Health_Monitoring_as_Vital_Signs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Tealeaves Quality Estimation Through Measurements of Degree of Polazation, Leaf Area Index, Photosynthesis Available Radiance and Normalized Difference Vegetation Index for Characterization of Tealeaves</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021103</link>
        <id>10.14569/IJARAI.2013.021103</id>
        <doi>10.14569/IJARAI.2013.021103</doi>
        <lastModDate>2013-11-09T14:34:53.2430000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>adjacency effect; nonlinear mixed pixel model; Monte Carlo method; Ray tracing method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>A method for tealeaves quality estimation through measurements of Degree of Polarization: DP, Leaf Area Index: LAI, Photosynthesis Available Radiance: PAR and Normalized Difference Vegetation Index: NDVI for characterization of tealeaves is proposed. The method allows estimation of PAR, NDVI, and Grow Index: GI by using the measured Degree of Polarization of tealeaves. Through experiments at tea farm areas, the proposed method is validated. The method is also validated through Monte Carlo Ray Tracing: MCRT simulations for discrimination between prolate and oblate shapes of tealeaves. As tealeaves grow, their shape changes from prolate to oblate; therefore, the growing stage can be estimated with DP measurements.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_3-Method_for_Tealeaves_Quality_Estimation_Through.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Free Open Source Software: FOSS Based e-learning, Mobile Learning Systems Together with Blended Learning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021102</link>
        <id>10.14569/IJARAI.2013.021102</id>
        <doi>10.14569/IJARAI.2013.021102</doi>
        <lastModDate>2013-11-09T14:34:51.4470000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>e-learning; blended learning; free open source software; mashup; mobile learning</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>A Free Open Source Software: FOSS based e-learning system is proposed, together with blended learning and mobile learning. A mashup search engine for e-learning content search, and content adaptation from e-learning content to mobile learning content, are also implemented. Through implementation of the proposed system, it is found that the system works well for improving learning efficiency.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_2-Free_Open_Source_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case Study on Effectiveness Evaluation of Buisiness Procedure Reengineering: BPR for Local Government in Saga Prefecture, Japan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021101</link>
        <id>10.14569/IJARAI.2013.021101</id>
        <doi>10.14569/IJARAI.2013.021101</doi>
        <lastModDate>2013-11-09T14:34:49.5600000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <creator>Youm Jong Sun</creator>
        
        <subject>Business Procedure Reengineering: BPR; local government</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(11), 2013</description>
        <description>A case study on validation of the effectiveness of Business Procedure Reengineering: BPR for a local government in Saga prefecture, Japan, is conducted. As a result, it is found that BPR is effective. The local government established a government CIO room, and the introduction of a number system, a long-cherished wish, was decided in 2013, making it possible to promote e-government and e-municipality under the banner of an ambitious plan called &quot;world-leading creative nation&quot;. We hope to take this opportunity to realize a municipal cloud that can enhance administrative services and give a good impression to residents, while improving the operational efficiency of the civil service and greatly reducing IT costs.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No11/Paper_1-Case_Study_on_Effectiveness_Evaluation_of_Buisiness_Procedure_Reengineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Terrain Coverage Ant Algorithms: The Random Kick Effect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041025</link>
        <id>10.14569/IJACSA.2013.041025</id>
        <doi>10.14569/IJACSA.2013.041025</doi>
        <lastModDate>2013-10-31T10:53:07.3000000+00:00</lastModDate>
        
        <creator>M. Dervisi</creator>
        
        <creator>O.B. Efremides</creator>
        
        <creator>D.P. Iracleous</creator>
        
        <subject>Terrain coverage; ant agents; performance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In this work, the effect of random repositioning of ant robots/agents on the performance of terrain coverage algorithms is investigated. A number of well-known terrain coverage algorithms are implemented and studied in a simulated environment. We prove that agent repositioning imposes only small variations on the performance of the algorithms when random or controlled jumps occur and evaporation and failures are allowed.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_25-Terrain_Coverage_Ant_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Determining Public Structure Crowd Evacuation Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041024</link>
        <id>10.14569/IJACSA.2013.041024</id>
        <doi>10.14569/IJACSA.2013.041024</doi>
        <lastModDate>2013-10-31T10:53:04.0700000+00:00</lastModDate>
        
        <creator>Pejman Kamkarian</creator>
        
        <creator>Henry Hexmoor</creator>
        
        <subject>Networks of Bayesian Belief Revision; Public Space Safety; Crowd Evacuation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>This paper explores a strategy for determining public space safety. Due to its varied purpose and location, each public space has a distinct architecture as well as facilities. A generalized analysis of capacities for public spaces is therefore essential. The method we propose examines a public space with a given architecture. We used a Bayesian Belief Network to determine the level of safety and identify points of weakness in public spaces.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_24-Determining_Public_Structure_Crowd_Evacuation_Capacity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Black Hole attack in MANET by Extending Network Knowledge</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041023</link>
        <id>10.14569/IJACSA.2013.041023</id>
        <doi>10.14569/IJACSA.2013.041023</doi>
        <lastModDate>2013-10-31T10:53:00.8400000+00:00</lastModDate>
        
        <creator>Hicham Zougagh</creator>
        
        <creator>Ahmed Toumanari</creator>
        
        <creator>Rachid Latif</creator>
        
        <creator>Noureddine Idboufker</creator>
        
        <subject>MANET; OLSR; Security; Routing Protocol; Black Hole attack</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>The Optimized Link State Routing (OLSR) protocol is developed for Mobile Ad Hoc Networks. It operates as a table-driven, proactive protocol. The core of the OLSR protocol is the selection of Multipoint Relays (MPRs), used as a flooding mechanism for distributing control traffic messages in the network and reducing redundancy in the flooding process. A node in an OLSR network selects its MPR set so that all two-hop neighbors are reachable by the minimum number of MPRs. However, if an MPR misbehaves during the execution of the protocol, the connectivity of the network is compromised. This paper introduces a new algorithm for the selection of Multipoint Relays with additional coverage, whose aim is to enable each node to select alternative paths to reach any destination two hops away. This technique helps avoid the effect of malicious attacks, and the corresponding algorithm is easy to implement.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_23-Mitigating_Black_Hole_attack_in_MANET_by_Extending.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acceptance of Web 2.0 in learning in higher education: a case study Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041022</link>
        <id>10.14569/IJACSA.2013.041022</id>
        <doi>10.14569/IJACSA.2013.041022</doi>
        <lastModDate>2013-10-31T10:52:57.4230000+00:00</lastModDate>
        
        <creator>Razep Echeng</creator>
        
        <creator>Abel Usoro</creator>
        
        <creator>Grzegorz Majewski</creator>
        
        <subject>Web 2.0; collaboration; active participation; enhanced learning; Web 2.0 acceptance; learning; higher education; technology based learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Technology acceptance has been studied from different perspectives. Though a few empirical studies on acceptance of Web 2.0 as a social networking tool in teaching and learning exist, no such study exists for Nigeria, which is the focus of this study. This paper reports on a pilot study that begins to fill this gap by investigating the perceptions, attitude and acceptance of Web 2.0 in e-learning in this country. Based on a literature review and an initial primary study, a conceptual model of 9 variables and associated hypotheses was designed. The model was operationalised into a questionnaire that was used to collect data from 317 students from 5 universities. The findings from the data analysis indicate that all the variables affect intention to use Web 2.0 in e-learning in Nigeria, except motivation via learning management systems, which are not presently used in these universities. Among the validated variables are perceived usefulness and prior knowledge. The major conclusions and recommendations include the utilisation of Web 2.0 facilities to stimulate participation in learning. This work will contribute to the body of knowledge on acceptance of Web 2.0 social networking tools in teaching and learning. It will aid management decisions toward investing better in technology so as to improve the educational sector. This research will also benefit the social development of individuals, local communities, and national and international communities.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_22-Acceptance_of_Web_2.0_in_learning_in_higher_education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Load Balancing with Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041021</link>
        <id>10.14569/IJACSA.2013.041021</id>
        <doi>10.14569/IJACSA.2013.041021</doi>
        <lastModDate>2013-10-31T10:52:54.0070000+00:00</lastModDate>
        
        <creator>Nada M. Al Sallami</creator>
        
        <creator>Ali Al daoud</creator>
        
        <creator>Sarmad A. Al Alousi</creator>
        
        <subject>Green Cloud Computing; Load Balancing; Artificial Neural Networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>This paper discusses a proposed load balancing technique based on an artificial neural network. It distributes workload equally across all the nodes by using the back-propagation learning algorithm to train a feed-forward Artificial Neural Network (ANN). The proposed technique is simple, and it can work efficiently when effective training sets are used. The ANN predicts the demand and thus allocates resources according to that demand. It therefore always maintains the active servers according to current demand, which results in lower energy consumption than the conservative approach of over-provisioning. Furthermore, although a server running at higher utilization consumes more power, it can process more workload with similar power usage. Finally, the existing load balancing techniques in cloud computing are discussed and compared with the proposed technique based on various parameters such as performance, scalability, and associated overhead. In addition, energy consumption and carbon emission perspectives are also considered to satisfy green computing.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_21-Load_Balancing_with_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geo-visual Approach for Spatial Scan Statistics: An Analysis of Dengue Fever Outbreaks in Delhi</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041020</link>
        <id>10.14569/IJACSA.2013.041020</id>
        <doi>10.14569/IJACSA.2013.041020</doi>
        <lastModDate>2013-10-31T10:52:50.6070000+00:00</lastModDate>
        
        <creator>Shuchi Mala</creator>
        
        <creator>Raja Sengupta</creator>
        
        <subject>Disease Surveillance; p-value; clusters; statistically significant; outbreaks and visualization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>There are very few surveillance systems in use to detect disease outbreaks at present. In a disease surveillance system, data related to cases and various risk factors are collected, and the collected data is then transformed into meaningful information for effective disease control using statistical analysis tools. Disease outbreaks can be detected, but for effective disease control a visualization approach is required; without appropriate visualization, it is very difficult to interpret the results of analysis. In this work, a method has been developed for geographical representation of the disease surveillance and response system for early detection of disease outbreaks using SaTScan and open source Geographic Information System software. Maps that combine the geographical locations of diseases and clusters to enhance the understanding of the results of the statistical analysis tool are developed using the QGIS library, which provides many spatial algorithms and native GIS functions. This library is accessed through PyQGIS and PyQt using Python.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_20-Geo-visual_Approach_for_Spatial_Scan_Statistics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Thyroid Diagnosis based Technique on Rough Sets with Modified Similarity Relation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041019</link>
        <id>10.14569/IJACSA.2013.041019</id>
        <doi>10.14569/IJACSA.2013.041019</doi>
        <lastModDate>2013-10-31T10:52:47.1600000+00:00</lastModDate>
        
        <creator>Elsayed Radwan</creator>
        
        <creator>Adel M.A. Assiri</creator>
        
        <subject>Thyroid Disease; Rough Sets; Data Discretization; Knowledge Reduction; Modified Similarity Relation (MSIM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Because of patients’ inconsistent data, an uncertain Thyroid Disease dataset appears in the learning process, with irrelevant, redundant, missing, and very numerous features. In this paper, Rough Sets theory is used for data discretization of continuous attribute values, data reduction and rule induction. Rough Sets are also used to cluster the Thyroid relation attributes in the presence of missing attribute values and to build the Modified Similarity Relation, which depends on the number of missing values with respect to the number of defined attributes for each rule. The discernibility matrix is constructed to compute the minimal sets of reducts, which are used to extract the minimal sets of decision rules that describe similarity relations among rules. Thus, the strength associated with each rule is measured.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_19-Thyroid_Diagnosis_based_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Application of Queue-Buffer Communication Model in Pneumatic Conveying</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041018</link>
        <id>10.14569/IJACSA.2013.041018</id>
        <doi>10.14569/IJACSA.2013.041018</doi>
        <lastModDate>2013-10-31T10:52:45.3500000+00:00</lastModDate>
        
        <creator>Liping Zhang</creator>
        
        <creator>Haomin Hu</creator>
        
        <subject>Queue-Buffer; Programmable Logic Controller; Freeport Communication; Critical resource; Mutex</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In order to communicate with a PLC (Programmable Logic Controller) flexibly and freely, a data communication model based on the PLC&#39;s free port is designed. In the structure of the model, a distributed data communication environment is constructed by using Ethernet and serial adapters. In the communication algorithm, a multi-threaded queue-buffer method is used to improve the real-time performance of the control. This communication model has good scalability and portability because its realization is not restricted by the number of PLC slave stations or the type of operating system of the host computer. The corresponding communication algorithm is applied to data collection and device monitoring for a pipe pneumatic conveying system. Practice shows that not only can the stability and reliability of the model meet the needs of automatic control, but the communication performance and efficiency of the model are also outstanding.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_18-Design_and_Application_of_Queue-Buffer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the use of an SPI Flash PROM in Microblaze-Based Embedded Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041017</link>
        <id>10.14569/IJACSA.2013.041017</id>
        <doi>10.14569/IJACSA.2013.041017</doi>
        <lastModDate>2013-10-31T10:52:41.9500000+00:00</lastModDate>
        
        <creator>Ahmed Hanafi</creator>
        
        <creator>Mohammed Karim</creator>
        
        <subject>Microblaze; ISF; Bootloader; SREC; Configuration; Bitstream SPI Flash</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>This paper aims to simplify FPGA designs that incorporate embedded software systems using a soft-core processor. It describes a simple solution to reduce the need for multiple non-volatile memory devices by using one SPI (Serial Peripheral Interface) Flash PROM for FPGA configuration data, software code (processor applications), and miscellaneous user data. We have thus developed a design based on a MicroBlaze soft processor implemented on a Xilinx Spartan-6 FPGA SP605 Evaluation Kit. The hardware architecture with SPI flash was designed using Xilinx Platform Studio (XPS), and the software applications, including the bootloader, were developed with the Xilinx Software Development Kit (SDK). The Xilinx ISE Design Tools are employed to create the files used to program the flash memory: an SREC (S-record) file for the software code, a hexadecimal file for user data, and a bootloader file that configures the FPGA and allows software applications stored in flash memory to be executed when the system is powered on. Read access to the SPI Flash memory is simplified by the use of the Xilinx In-System Flash (ISF) library.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_17-Optimizing_the_use_of_an_SPI_Flash_PROM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Practical Feasibility of Secure Multipath Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041016</link>
        <id>10.14569/IJACSA.2013.041016</id>
        <doi>10.14569/IJACSA.2013.041016</doi>
        <lastModDate>2013-10-31T10:52:38.4530000+00:00</lastModDate>
        
        <creator>Stefan Rass</creator>
        
        <creator>Benjamin Rainer</creator>
        
        <creator>Stefan Schauer</creator>
        
        <subject>communication system security; multipath channels; privacy; risk analysis; security by design</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Secure multipath transmission (MPT) uses network path redundancy to achieve privacy in the absence of public-key encryption or any shared secrets for symmetric encryption. Since this form of secret communication works without secret keys, the risk of human failure in key management naturally vanishes, leaving security to rest only on the network management. Consequently, MPT allows for secure communication even under hacker attacks, on the condition that at least some parts of the network remain intact (unconquered) at all times. This feature is, however, bought at the price of high network connectivity (densely meshed structures) that is hardly found in real-life networks. Based on a game-theoretic treatment of multipath transmission, we present theoretical results for judging a network’s suitability for secure communication. In particular, as MPT uses non-intersecting and reliable paths, we present algorithms to compute these in a way that is especially suited for subsequent secure and reliable communication. Our treatment uses MPT as a motivating and illustrating example; however, the results obtained are not limited to any particular application of multipath transmission or security.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_16-On_the_Practical_Feasibility_of_Secure_Multipath_Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quaternionic Wigner-Ville distribution of analytical signal in hyperspectral imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041015</link>
        <id>10.14569/IJACSA.2013.041015</id>
        <doi>10.14569/IJACSA.2013.041015</doi>
        <lastModDate>2013-10-31T10:52:35.0070000+00:00</lastModDate>
        
        <creator>Yang LIU</creator>
        
        <creator>Robert GOUTTE</creator>
        
        <subject>Analytical; signal; hyperspectral imagery; quaternionic distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>The 2D Quaternionic Fourier Transform (QFT), applied to a real 2D image, produces an invertible quaternionic spectrum. If we keep only the first quadrant of this spectrum, it is possible, after inverse transformation, to obtain not the original image but a 2D quaternion image, which generalizes to 2D the classical notion of the 1D analytical signal. From this quaternion image, we compute the corresponding correlation product; then, by applying the direct QFT, we obtain the 4D Wigner-Ville distribution of this analytical signal. With reference to the shift variables τ1, τ2 used in the computation of the correlation product, we obtain a local quaternion Wigner-Ville distribution spectrum.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_15-Quaternionic_Wigner-Ville_distribution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Reasoning Model for Strengthening the problem solving capability of Expert Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041014</link>
        <id>10.14569/IJACSA.2013.041014</id>
        <doi>10.14569/IJACSA.2013.041014</doi>
        <lastModDate>2013-10-31T10:52:31.7600000+00:00</lastModDate>
        
        <creator>Kapil Khandelwal</creator>
        
        <creator>Durga Prasad Sharma</creator>
        
        <subject>knowledge based systems; KBS; sustained learning; problem solving; hybrid reasoning models; case based reasoning; CBR; model based reasoning; MBR; rule based reasoning; RBR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In this paper, we briefly outline popular case-based reasoning combinations. More specifically, we focus on combinations of case-based reasoning with rule-based reasoning and model-based reasoning. We then examine the strengths and weaknesses of the various reasoning models (case-based, rule-based, and model-based reasoning) and discuss how they can be combined to form a more robust and better-performing hybrid. In a decision support system, a single type of knowledge and reasoning method is often not sufficient to address the variety of tasks a user performs. It is often necessary to determine which reasoning method would be most appropriate for each task, and a combination of different methods has often shown the best results. In this study, CBR was combined with RBR and MBR approaches to promote synergies and benefits beyond those achievable using CBR or any other individual reasoning approach alone. Each approach has advantages and disadvantages, which prove to be complementary to a large degree, so it is well justified to combine them into effective hybrid approaches that overcome the disadvantages of each component method. The “KNAPS-CR” model integrates problem solving with learning from experience within an extensive model of different knowledge types. “KNAPS-CR” has a reasoning strategy that first attempts case-based reasoning, then rule-based reasoning, and finally model-based reasoning. It learns from each problem-solving session by updating its collection of cases, irrespective of which reasoning method succeeded in solving the problem.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_14-Hybrid_Reasoning_Model_for_Strengthening_the_problem_solving_capability_of_Expert_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Architectural-model for Context aware Adaptive Delivery of Learning Material</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041013</link>
        <id>10.14569/IJACSA.2013.041013</id>
        <doi>10.14569/IJACSA.2013.041013</doi>
        <lastModDate>2013-10-31T10:52:29.9370000+00:00</lastModDate>
        
        <creator>Kalla. Madhu Sudhana</creator>
        
        <creator>V. Cyril Raj</creator>
        
        <creator>T.Ravi</creator>
        
        <subject>MVC design pattern; context-aware adaptive learning; architectural model; Usability Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In web-based learning, searching for required learning resources has become more complex as continuously growing digital learning content becomes entangled in structural and semantic interrelationships. Meanwhile, the rapid development of communication technology has led to greater heterogeneity of learning devices than in the early stages. The context-aware adaptive learning environment has become a promising solution to these search and presentation problems in the educational domain. To solve this context-aware learning content delivery problem, we propose a novel architectural model based on the MVC (Model–View–Controller) design pattern that is able to perform personalized adaptive delivery of course content according to learner contextual information, such as learning style and the characteristics of the learning device, using an ontological approach.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_13-An_Architectural-model_for_Context_aware_Adaptive.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Risk Assessment of Network Security Based on Non-Optimum Characteristics Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041012</link>
        <id>10.14569/IJACSA.2013.041012</id>
        <doi>10.14569/IJACSA.2013.041012</doi>
        <lastModDate>2013-10-31T10:52:26.5030000+00:00</lastModDate>
        
        <creator>Ping He</creator>
        
        <subject>network security; non-optimum analysis; risk assessment; maximum and minimum limitation; ARS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>This paper discusses in detail the theory of non-optimum analysis of network systems. It points out that the main problem in exploring the optimum of indefinite networks lies in the lack of non-optimum analysis of the network system. The paper establishes syndrome and empirical analysis based on the non-optimum category of network security. At the same time, it also puts forward a non-optimum measurement of network security, along with non-optimum tracing and self-organization of network systems. The formation of a non-optimum network serves as the basis for the existence of an optimum network, and the level of network security can be measured through non-optimum characteristics analysis of network systems. By summarizing practice, this paper also arrives at the minimum non-optimum principle of network security optimization, establishes the relationship between risk and non-optimum, and puts forward an evaluation method for the trust degree of network security. Finally, drawing on previous practice in network security risk management, a network security optimization approach has been developed to address the relationship between non-optimum and risk.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_12-Risk_Assessment_of_Network_Security_Based_on_Non-Optimum.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Overall Sensitivity Analysis Utilizing Bayesian Network for the Questionnaire Investigation on SNS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041011</link>
        <id>10.14569/IJACSA.2013.041011</id>
        <doi>10.14569/IJACSA.2013.041011</doi>
        <lastModDate>2013-10-31T10:52:24.6930000+00:00</lastModDate>
        
        <creator>Tsuyoshi Aburai</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>SNS; Questionnaire Investigation; Bayesian Network; Sensitivity Analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Social Networking Services (SNS) have been spreading rapidly in Japan in recent years. The most popular ones are Facebook, mixi, and Twitter, which are utilized in various fields of life together with convenient tools such as smartphones. In this work, a questionnaire investigation is carried out in order to clarify current usage conditions, issues, and desired functions. More than 1,000 samples were gathered. A Bayesian network is utilized for the analysis, and sensitivity analysis is carried out by setting evidence on all items, which enables an overall analysis of each item. We analyzed the data by sensitivity analysis and obtained some useful results. We have previously presented a paper on this work; because its volume became too large, we split the results, and this paper presents the latter half of the investigation, obtained by setting evidence on the Bayesian network parameters. Differences in usage objectives and SNS sites are made clear by the attributes and preferences of SNS users, which can be utilized effectively for marketing by identifying target customers through sensitivity analysis.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_11-Overall_Sensitivity_Analysis_Utilizing_Bayesian.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Location Determination Technique for Traffic Control and Surveillance using Stratospheric Platforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041010</link>
        <id>10.14569/IJACSA.2013.041010</id>
        <doi>10.14569/IJACSA.2013.041010</doi>
        <lastModDate>2013-10-31T10:52:21.2630000+00:00</lastModDate>
        
        <creator>Yasser Albagory</creator>
        
        <creator>Mostafa Nofal</creator>
        
        <creator>Said El-Zoghdy</creator>
        
        <subject>stratospheric platforms; mobile communications; DOA techniques; GPS Technique</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>This paper presents a new technique for location determination using the promising stratospheric platform (SP) technology, flying at altitudes of 17-22 km, together with a suitable Direction-of-Arrival (DOA) technique. The SP system is preferable due to its superior communication performance compared to conventional terrestrial and satellite systems. The proposed technique provides central information about accurate locations of mobile stations, which is very important for traffic control and for rescue operations in emergency situations. The DOA estimation in this technique determines the user location using a high-resolution DOA method such as MUSIC, which provides accuracy comparable to the Global Positioning System (GPS) but without the need for GPS receivers. Several scenarios for user location determination are examined to establish the robustness of the proposed technique.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_10-A_Novel_Location_Determination_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Link Redirection Interface for Secured Browsing using Web Browser Extensions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041009</link>
        <id>10.14569/IJACSA.2013.041009</id>
        <doi>10.14569/IJACSA.2013.041009</doi>
        <lastModDate>2013-10-31T10:52:18.0500000+00:00</lastModDate>
        
        <creator>Mrinal Purohit Y</creator>
        
        <creator>Kaushik Velusamy</creator>
        
        <creator>Shriram K Vasudevan</creator>
        
        <subject>Browsers; HTTPS; SSL</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In the present world scenario, where data must be protected from intruders and crackers, everyone fears for the safety of their private data. As data is stored on servers and accessed through websites, it is the browser that acts as the medium between a user and the server for sending and receiving data. Because browsers send data in plain text, any transmitted data could easily be intercepted and used against someone. This led to the use of Transport Layer Security (TLS) and Secure Sockets Layer (SSL), cryptographic protocols designed to provide communication security over the Internet. HTTP layered on top of SSL/TLS supports an encrypted mode, also known as HTTPS (HTTP Secure). Therefore, one of the main aspects of security lies in whether a website supports HTTPS. Most websites support this encrypted mode, yet we still use the unencrypted mode of websites because the common user is unaware of advancements in the field of technology. To help us, browsers offer extensions or plug-ins to ease our lives. This paper proposes an idea for implementing such security measures in web browsers.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_9-Enhanced_Link_Redirection_Interface_for_Secured.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Hiding Data Using B-box </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041008</link>
        <id>10.14569/IJACSA.2013.041008</id>
        <doi>10.14569/IJACSA.2013.041008</doi>
        <lastModDate>2013-10-31T10:52:14.6330000+00:00</lastModDate>
        
        <creator>Dr. Saad Abdual azize AL_ani</creator>
        
        <creator>Bilal Sadeq Obaid Obaid</creator>
        
        <subject>ASCII; Binary; Cryptosystem; Decimal; Decryption; Encryption; Image; Plaintext; Video </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Digital image and video encryption plays an important role in today’s multimedia world, and many encryption schemes have been proposed to provide security for digital images. This paper designs an efficient cryptosystem for video. Our method achieves two goals: the first is high security for hiding data in video, and the second is a cryptosystem with low computational complexity.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_8-A_New_Approach_for_Hiding_Data_Using_B-box.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maturity Model for IT Service Outsourcing in Higher Education Institutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041007</link>
        <id>10.14569/IJACSA.2013.041007</id>
        <doi>10.14569/IJACSA.2013.041007</doi>
        <lastModDate>2013-10-31T10:52:12.8230000+00:00</lastModDate>
        
        <creator>Victoriano Valencia Garc&#237;a</creator>
        
        <creator>Dr. Eugenio J. Fern&#225;ndez Vicente</creator>
        
        <creator>Dr. Luis Usero Aragon&#233;s</creator>
        
        <subject>IT Governance; IT Management; Outsourcing; IT Services; Maturity Models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>The current success of organizations depends on the successful implementation of Information and Communication Technologies (ICTs). Good governance and ICT management are essential for delivering value, managing technological risks, managing resources, and measuring performance. In addition, outsourcing is a strategic option which complements IT services provided internally in organizations.
This paper proposes the design of a new holistic maturity model based on the standards ISO/IEC 20000 and ISO/IEC 38500 and on the frameworks and best practices of ITIL and COBIT, with a specific focus on IT outsourcing.
The model is validated by practices in the field of higher education, using a questionnaire and a metrics table among other measurement tools. Models, standards, and guidelines are proposed within the model to facilitate adaptation to universities and to achieve excellence in the outsourcing of IT services. The applicability of the model allows an effective transition to a model of good governance and management of outsourced IT services which, aligned with the core business of universities (teaching, research, and innovation), affects the effectiveness and efficiency of their management, optimizes their value, and minimizes risks.
</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_7-Maturity_Model_for_IT_Service_Outsourcing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Penalized Regression with Parallel Coordinates for Visualization of Significance in High Dimensional Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041006</link>
        <id>10.14569/IJACSA.2013.041006</id>
        <doi>10.14569/IJACSA.2013.041006</doi>
        <lastModDate>2013-10-31T10:52:09.3730000+00:00</lastModDate>
        
        <creator>Shengwen Wang</creator>
        
        <creator>Yi Yang</creator>
        
        <creator>Jih-Sheng Chang</creator>
        
        <creator>Fang-Pang Lin</creator>
        
        <subject>Parallel Coordinates; High-Dimensional Data; Multivariate Visualization; LASSO Regression; Penalized Regression; Dimension Reordering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>In recent years, there has been an exponential increase in the amount of data being produced and disseminated by diverse applications, intensifying the need for the development of effective methods for the interactive visual and analytical exploration of large, high-dimensional datasets. In this paper, we describe the development of a novel tool for multivariate data visualization and exploration based on the integrated use of regression analysis and advanced parallel coordinates visualization. Conventional parallel-coordinates visualization is a classical method for presenting raw multivariate data on a 2D screen. However, current tools suffer from a variety of problems when applied to massively high-dimensional datasets. Our system tackles these issues through the combined use of regression analysis and a variety of enhancements to traditional parallel-coordinates display capabilities, including new techniques to handle visual clutter, and intuitive solutions for selecting, ordering, and grouping dimensions. We demonstrate the effectiveness of our system through two case-studies.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_6-Using_Penalized_Regression_with_Parallel_Coordinates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Synthetic template: effective tool for target classification and machine vision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041005</link>
        <id>10.14569/IJACSA.2013.041005</id>
        <doi>10.14569/IJACSA.2013.041005</doi>
        <lastModDate>2013-10-31T10:52:05.9430000+00:00</lastModDate>
        
        <creator>Kaveh Heidary</creator>
        
        <subject>machine vision; image processing; target classification; correlation filter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>A process for replacing a voluminous image dictionary, which characterizes a certain target of interest in a constrained zone of effectiveness representing controlled states including scale and view angle, with a synthetic template has been developed. Synthetic template (ST) is a spatial map (grayscale image) obtained by combining the set of zone-specific training images that are ascribed to the target of interest. It has been shown that the solo-template ST correlation filter outperforms filter banks comprised of multiple target-class training images. A geometric interpretation of the basic ST concept is employed in order to further explain and substantiate its properties. </description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_5-Synthetic_template_effective_tool_for_target_classification_and_machine_vision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hiding an Image inside another Image using Variable-Rate Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041004</link>
        <id>10.14569/IJACSA.2013.041004</id>
        <doi>10.14569/IJACSA.2013.041004</doi>
        <lastModDate>2013-10-31T10:52:02.6970000+00:00</lastModDate>
        
        <creator>Abdelfatah A. Tamimi</creator>
        
        <creator>Ayman M. Abdalla</creator>
        
        <creator>Omaima Al-Allaf</creator>
        
        <subject>image steganography; information hiding; LSB method</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>A new algorithm is presented for hiding a secret image in the least significant bits of a cover image. The images used may be color or grayscale images. The number of bits used for hiding changes according to pixel neighborhood information of the cover image. The exclusive-or (XOR) of a pixel’s neighbors is used to determine the smoothness of the neighborhood. A higher XOR value indicates less smoothness and leads to using more bits for hiding without causing noticeable degradation to the cover image. Experimental results are presented to show that the algorithm generally hides images without significant changes to the cover image, where the results are sensitive to the smoothness of the cover image.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_4-Hiding_an_Image_inside_another_Image.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Edge Detection Using Convolutional Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041003</link>
        <id>10.14569/IJACSA.2013.041003</id>
        <doi>10.14569/IJACSA.2013.041003</doi>
        <lastModDate>2013-10-31T10:51:59.2500000+00:00</lastModDate>
        
        <creator>Mohamed A. El-Sayed</creator>
        
        <creator>Yarub A. Estaitia</creator>
        
        <creator>Mohamed A. Khafagy</creator>
        
        <subject>Edge detection; Convolutional Neural Networks; Max Pooling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Edge detection in images is very important for image processing. It is used in various fields of application, ranging from real-time video surveillance and traffic management to medical imaging. Currently, no single edge detector offers both efficiency and reliability. Traditional differential filter-based algorithms have the advantage of theoretical strictness but require excessive post-processing. The proposed CNN technique is used to realize the edge detection task; it takes advantage of momentum feature extraction and can process an input image of any size with no further training required. The results are very promising when compared to both classical methods and other ANN-based methods.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_3-Automated_Edge_Detection_Using_Convolutional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Broadcast Technique for Improved Video Applications over Constrained Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041002</link>
        <id>10.14569/IJACSA.2013.041002</id>
        <doi>10.14569/IJACSA.2013.041002</doi>
        <lastModDate>2013-10-31T10:51:57.3930000+00:00</lastModDate>
        
        <creator>U. Ukommi</creator>
        
        <subject>Video communication; broadcast; video quality; wireless network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>Improved wireless video communication is challenging, since video streams are vulnerable to channel distortions; hence the need to investigate efficient schemes for improved video communications. This research work investigates broadcast schemes and proposes a smart broadcast technique as a solution for improved video quality over constrained networks, such as wireless networks under tight constraints. The scheme exploits the concepts of video analysis and adaptation in the optimization process. Experimental results obtained under different channel conditions demonstrate the capability of the proposed scheme to improve the average received video quality over a constrained network.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_2-Smart_Broadcast_Technique_for_Improved_Video_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Employees Using RFID in IE-NTUA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.041001</link>
        <id>10.14569/IJACSA.2013.041001</id>
        <doi>10.14569/IJACSA.2013.041001</doi>
        <lastModDate>2013-10-31T10:51:53.7430000+00:00</lastModDate>
        
        <creator>Rashid Ahmed</creator>
        
        <creator>John N. Avaritsiotis</creator>
        
        <subject>RFID; Employees In National Technical University; Global Positioning System; Non-Line of Site</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(10), 2013</description>
        <description>During the last decade, with the rapid increase in indoor wireless communications, location-aware services have received a great deal of attention for commercial, public-safety, and military applications. The greatest challenge associated with indoor positioning methods is moving-object data and identification. Mobility tracking and localization are multifaceted problems that have been studied for a long time in different contexts, and many potential applications in the domain of WSNs require such capabilities; mobility tracking is inherent in many surveillance, security, and logistics applications. This paper presents the identification of employees at the National Technical University in Athens (IE-NTUA) when employees access a certain area of the building (entering or leaving the college): Radio Frequency Identification (RFID) is applied for identification by issuing special badges containing RFID tags.</description>
        <description>http://thesai.org/Downloads/Volume4No10/Paper_1-Identification_of_Employees_Using_RFID.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Numerical Representation of Web Sites of Remote Sensing Satellite Data Providers and Its Application to Knowledge Based Information Retrievals with Natural Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021005</link>
        <id>10.14569/IJARAI.2013.021005</id>
        <doi>10.14569/IJARAI.2013.021005</doi>
        <lastModDate>2013-10-10T17:54:01.3530000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Information retrieval; Knowledge based system; Natural language; Feature mapping </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(10), 2013</description>
        <description>A method for the numerical expression of web sites related to satellite remote sensing, and its application to a knowledge-based information retrieval system that allows retrievals with natural language, is proposed and implemented. Through experiments with remote sensing related information, it is found that the proposed information retrieval system works well, in particular for retrievals of remote sensing satellite data with natural language.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No10/Paper_5-Numerical_Representation_of_Web_Sites_of_Remote_Sensing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval and Classification Method Based on Euclidian Distance Between Normalized Features Including Wavelet Descriptor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021004</link>
        <id>10.14569/IJARAI.2013.021004</id>
        <doi>10.14569/IJARAI.2013.021004</doi>
        <lastModDate>2013-10-10T17:53:59.5600000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>hue feature; texture information; wavelet descriptor; red tide; phytoplankton identification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(10), 2013</description>
        <description>An image retrieval method based on the Euclidean distance between features normalized by their mean and variance in feature space is proposed. The effectiveness of the normalization is evaluated together with a validation of the proposed image retrieval method. The proposed method is applied to discriminating and identifying dangerous red tide species using wavelet-based classification methods together with texture and color features. Through experiments, it is found that the proposed wavelet-derived shape information extracted from microscopic views of phytoplankton is more effective for identifying dangerous red tide species among other red tide species than conventional texture and color information. Moreover, it is also found that the proposed normalization of features is effective in improving identification performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No10/Paper_4-Image_Retrieval_and_Classification_Method_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Among Cross, Onboard and Vicarious Calibrations for Terra/ASTER/VNIR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021003</link>
        <id>10.14569/IJARAI.2013.021003</id>
        <doi>10.14569/IJARAI.2013.021003</doi>
        <lastModDate>2013-10-10T17:53:57.7670000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>vicarious calibration; cross calibration; visible to near infrared radiometer; earth observation satellite; remote sensing; radiative transfer equation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(10), 2013</description>
        <description>A comparative study of radiometric calibration methods, among onboard, cross, and vicarious calibration, for visible to near-infrared radiometers onboard satellites is conducted. The data sources of the three calibration methods are different and independent; therefore, the most reliable Radiometric Calibration Accuracy (RCA) may be taken to be the one for which two of the three RCAs resemble each other. The experimental results show that vicarious and cross calibration are more reliable than onboard calibration. A vicarious-calibration-based cross calibration method is also proposed here, which should be superior to the conventional cross calibration method based on band-to-band data comparison. Through experiments, it is found that the proposed cross calibration is indeed better than the conventional cross calibration, and the radiometric calibration accuracy of the conventional cross calibration method can be evaluated by using the proposed method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No10/Paper_3-Comparison_Among_Cross_Onboard_and_Vicarious_Calibrations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Noise Suppressing Edge Enhancement Based on Genetic Algorithm Taking Into Account Complexity of Target Images Measured with Fractal Dimension</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021002</link>
        <id>10.14569/IJARAI.2013.021002</id>
        <doi>10.14569/IJARAI.2013.021002</doi>
        <lastModDate>2013-10-10T17:53:55.9700000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>edge enhancement; fractal dimension; genetic algorithm; simulated annealing; remote sensing satellite imagery</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(10), 2013</description>
        <description>A method for noise-suppressing edge enhancement based on a genetic algorithm, taking into account the complexity of target images measured with the fractal dimension, is proposed. Through experiments with satellite remote sensing imagery containing additive noise, it is found that the proposed method shows appropriate edge-enhancing performance while suppressing the additive noise in accordance with the complexity of the target images. It is also found that the proposed method requires fewer computational resources than the method based on Simulated Annealing (SA).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No10/Paper_2-Noise_Suppressing_Edge_Enhancement_Based_on_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cheap and Effective System for Parking Avoidance of the Car Without Permission at Disabled Parking Permit Spaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.021001</link>
        <id>10.14569/IJARAI.2013.021001</id>
        <doi>10.14569/IJARAI.2013.021001</doi>
        <lastModDate>2013-10-10T17:53:54.0200000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Disabled Parking Permit; ultrasound sensor; Near Infrared cameras;  RFID writer and reader with IC card; IC chip and IC tag; ETC system; GPS receiver</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(10), 2013</description>
        <description>A cheap and effective system for preventing cars without permission from parking in Disabled Parking Permit (DPP) spaces is proposed and validated through experiments. Multiple methods for detecting a car in a DPP space are proposed, using ultrasound sensors, Near Infrared (NIR) cameras, an RFID writer and reader with IC card, IC chip and IC tag, as well as an ETC system and a GPS receiver. It is found that these proposed car detection systems work well. Furthermore, it is effective to use more than two of the proposed car detection systems together to prevent parking without permission.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No10/Paper_1-Cheap_and_Effective_System_for_Parking_Avoidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Seamless Future Generation Network for High Speed Wireless Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040935</link>
        <id>10.14569/IJACSA.2013.040935</id>
        <doi>10.14569/IJACSA.2013.040935</doi>
        <lastModDate>2013-09-30T18:05:23.6230000+00:00</lastModDate>
        
        <creator>Kelvin O. O. Anoh</creator>
        
        <creator>Raed A. A. Abd-Alhameed</creator>
        
        <creator>Michael Chukwu</creator>
        
        <creator>Mohammed Buhari</creator>
        
        <creator>Steve M. R. Jones</creator>
        
        <subject>Doppler Effect, Doubly selective fading, frequencyselective fading, OFDM, Wavelet, MIMO</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The MIMO technology for achieving future generation broadband network design criteria is presented, and typical next-generation scenarios are investigated. The MIMO technology is integrated with OFDM for effective space, time, and frequency diversity exploitation in high-speed outdoor environments. Two different OFDM design kernels (the fast Fourier transform (FFT) and the wavelet packet transform (WPT)) are used at the baseband for an OFDM system travelling at terrestrial high speed, for 800 MHz and 2.6 GHz operating frequencies. Results show that the wavelet kernel for designing OFDM systems can withstand doubly selective channel fading at mobile speeds of up to 280 km/h, outperforming the traditional OFDM design kernel, the fast Fourier transform.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_35-Towards_a_Seamless_Future_Generation_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Bayesian framework for glaucoma progression detection using Heidelberg Retina Tomograph images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040934</link>
        <id>10.14569/IJACSA.2013.040934</id>
        <doi>10.14569/IJACSA.2013.040934</doi>
        <lastModDate>2013-09-30T18:05:21.3300000+00:00</lastModDate>
        
        <creator>Akram Belghith</creator>
        
        <creator>Christopher Bowd</creator>
        
        <creator>Madhusudhanan Balasubramanian</creator>
        
        <creator>Robert N. Weinreb</creator>
        
        <creator>Linda M. Zangwill</creator>
        
        <subject>Glaucoma, Markov random field, change detection, Bayesian estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Glaucoma, the second leading cause of blindness in the United States, is an ocular disease characterized by structural changes of the optic nerve head (ONH) and changes in visual function. Early detection is therefore of high importance to preserve remaining visual function. In this context, the Heidelberg Retina Tomograph (HRT), a confocal scanning laser tomograph, is widely used as a research tool as well as a clinical diagnostic tool for imaging the optic nerve head to detect glaucoma and monitor its progression. In this paper, a glaucoma progression detection technique using HRT images is proposed. Contrary to existing methods, which do not integrate spatial pixel dependency in the change detection map, we propose the use of a Markov Random Field (MRF) to handle such dependency. In order to estimate the model parameters, a Markov Chain Monte Carlo procedure is used. We then compare the diagnostic performance of the proposed framework to existing methods of glaucoma progression detection.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_34-A_Bayesian_framework_for_glaucoma_progression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Classification based on Bidimensional Empirical Mode Decomposition and Local Binary Pattern</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040933</link>
        <id>10.14569/IJACSA.2013.040933</id>
        <doi>10.14569/IJACSA.2013.040933</doi>
        <lastModDate>2013-09-30T18:05:19.5200000+00:00</lastModDate>
        
        <creator>JianJia Pan</creator>
        
        <creator>YuanYan Tang</creator>
        
        <subject>Texture classification, Empirical Mode Decomposition, Local Binary Pattern, Invariant feature</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>This paper presents a new, simple, and robust texture analysis feature based on Bidimensional Empirical Mode Decomposition (BEMD) and the Local Binary Pattern (LBP). BEMD is a locally adaptive decomposition method suitable for the analysis of nonlinear or nonstationary signals. Texture images are decomposed by BEMD into several Bidimensional Intrinsic Mode Functions (BIMFs), which represent a new set of multi-scale components of the images. In our approach, saddle points are first added as supporting points for interpolation to improve the original BEMD, and images are then decomposed by the new BEMD into several components (BIMFs). Next, the Local Binary Pattern (LBP) at different sizes is used to extract features from the different BIMFs. Finally, normalization and a BIMF selection method are adopted for feature selection. The proposed feature is invariant while preserving LBP’s simplicity. Our method has been evaluated on the CUReT and KTH-TIPS2a texture image databases. It is experimentally demonstrated that the proposed feature achieves higher classification accuracy than other state-of-the-art texture representation methods, especially under small training sample conditions.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_33-Texture_Classification_based_on_Bidimensional.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy-Preserving Clustering Using Representatives over Arbitrarily Partitioned Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040932</link>
        <id>10.14569/IJACSA.2013.040932</id>
        <doi>10.14569/IJACSA.2013.040932</doi>
        <lastModDate>2013-09-30T18:05:16.3200000+00:00</lastModDate>
        
        <creator>Yu Li</creator>
        
        <creator>Sheng Zhong</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The challenge in privacy-preserving data mining is avoiding the invasion of personal data privacy, and secure computation provides a solution to this problem. With the development of this technique, fully homomorphic encryption has been realized after decades of research; this encryption enables computing on encrypted data and obtaining results without accessing any plaintext or private key information. In this paper, we propose a privacy-preserving clustering using representatives (CURE) algorithm over arbitrarily partitioned data using fully homomorphic encryption. Our privacy-preserving CURE algorithm allows cooperative computation without revealing users’ individual data. The method used in our algorithm enables the data to be arbitrarily distributed among different parties while still obtaining accurate clustering results.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_32-Privacy-Preserving_Clustering_Using_Representatives.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Limit Cycle Generation for Multi-Modal and 2-Dimensional Piecewise Affine Control Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040931</link>
        <id>10.14569/IJACSA.2013.040931</id>
        <doi>10.14569/IJACSA.2013.040931</doi>
        <lastModDate>2013-09-30T18:05:12.2970000+00:00</lastModDate>
        
        <creator>Tatsuya Kai</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>This paper considers a limit cycle control problem for multi-modal and 2-dimensional piecewise affine control systems. Limit cycle control refers to a controller design method that generates a limit cycle for a given piecewise affine control system. First, we deal with a limit cycle synthesis problem and derive a new solution to it; in addition, a theoretical analysis of the rotational direction and the period of a limit cycle is given. Next, the limit cycle control problem for piecewise affine control systems is formulated. Then, we obtain matching conditions such that the piecewise affine control system with the state feedback law corresponds to the reference system that generates a desired limit cycle. Finally, in order to indicate the effectiveness of the new method, a numerical simulation is presented.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_31-Limit_Cycle_Generation_for_Multi-Modal.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detecting Linkedin Spammers and its Spam Nets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040930</link>
        <id>10.14569/IJACSA.2013.040930</id>
        <doi>10.14569/IJACSA.2013.040930</doi>
        <lastModDate>2013-09-30T18:05:10.5030000+00:00</lastModDate>
        
        <creator>Víctor M. Prieto</creator>
        
        <creator>Manuel Álvarez</creator>
        
        <creator>Fidel Cacheda</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Spam is one of the main problems of the WWW. Many studies exist on characterising and detecting several types of Spam (mainly Web Spam, Email Spam, Forum/Blog Spam, and Social Networking Spam). Nevertheless, to the best of our knowledge, there are no studies about the detection of Spam in Linkedin. In this article, we propose a method for detecting Spammers and Spam nets in the Linkedin social network. As there are no public or private Linkedin datasets in the state of the art, we have manually built a dataset of real Linkedin users, classifying them as Spammers or legitimate users. The proposed method for detecting Linkedin Spammers consists of a set of new heuristics and their combinations using a kNN classifier. Moreover, we propose a method for detecting Spam nets (fake companies) in Linkedin, based on the idea that the profiles of these companies share content similarities. We have found the proposed methods to be very effective: we achieved an F-Measure of 0.971 and an AUC close to 1 in the detection of Spammer profiles, and an F-Measure of 1 in the detection of Spam nets.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_30-Detecting_Linkedin_Spammersand_its_Spam_Nets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Remote Health Care System Combining a Fall Down Alarm and Biomedical Signal Monitor System in an Android Smart-Phone</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040929</link>
        <id>10.14569/IJACSA.2013.040929</id>
        <doi>10.14569/IJACSA.2013.040929</doi>
        <lastModDate>2013-09-30T18:05:07.0870000+00:00</lastModDate>
        
        <creator>Ching-Sung Wang</creator>
        
        <creator>Chien-Wei Li</creator>
        
        <creator>Teng-Hui Wang</creator>
        
        <subject>Health care; Biomedical signal; Fall down alarm; Real-time; Android smart phone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>First aid and immediate help are very important following an accident: the earlier detection and treatment are carried out, the better the prognosis and chance of recovery for the patient. This is even more important for the elderly. Once elderly people have an accident, they not only injure their body physically, but may also impair their mental and social abilities and develop severe sequelae. In the last few years, the continuously developing Android phone has been applied to many fields. However, despite the GPS positioning capability that mobile phones provide, most applications are limited to SMS and file transfers, and biomedical measurement signals passing through a transfer interface and uploaded to the mobile have so far produced few truly successful cases of feasible remote health care. This research develops an Android phone application that combines the functionality of an ECG, pulsimeter, SpO2, and BAD (Body Activity Detector) for real-time monitoring of the activity of a body. When an accident occurs, the signals go through the Android smart phone, immediately notifying the remote end and providing timely help.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_29-A_Remote_Health_Care_System_Combining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facing the challenges of the One-Tablet-Per-Child policy in Thai primary school education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040928</link>
        <id>10.14569/IJACSA.2013.040928</id>
        <doi>10.14569/IJACSA.2013.040928</doi>
        <lastModDate>2013-09-30T18:05:03.1370000+00:00</lastModDate>
        
        <creator>Ratchada Viriyapong</creator>
        
        <creator>Antony Harfield</creator>
        
        <subject>educational technology; m-learning; mobile computing; tablet-based education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The Ministry of Education in Thailand is currently distributing tablets to all first year primary (Prathom 1) school children across the country as part of the government’s “One Tablet Per Child” (OTPC) project to improve education. Early indications suggest that there are many unexplored issues in designing and implementing tablet activities for such a large and varied group of students, and so far there is a lack of evaluation of the effectiveness of the tablet activities. In this article, the authors propose four challenges for improving Thailand’s OTPC project: developing contextualised content, ensuring usability, providing teacher support, and assessing learning outcomes. A case study on developing science activities for first year primary school children on the OTPC devices is the basis for presenting possible solutions to the four challenges. In presenting a solution to the challenge of providing teacher support, an architecture is described for collecting data from student interactions with the tablet in order to analyse the current progress of students in a live classroom setting. From tests in three local Thai schools, the authors evaluate the case study from both student and teacher perspectives. In concluding the paper, a framework for guiding mobile learning innovation is used to review the qualities and shortcomings of the case study.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_28-Facing_the_challenges_of_the_One-Tablet-Per-Child.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EICT Based Diagnostic Tool and Monitoring System for EMF Radiation to Sustain Environmental Safety</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040927</link>
        <id>10.14569/IJACSA.2013.040927</id>
        <doi>10.14569/IJACSA.2013.040927</doi>
        <lastModDate>2013-09-30T18:04:59.7070000+00:00</lastModDate>
        
        <creator>K Parandham</creator>
        
        <creator>RP Gupta</creator>
        
        <creator>Amitab Saxena</creator>
        
        <subject>EICT Based Diagnostic tool; Electromagnetic Fields(EMF) Radiation; Mobile Telephony; Data Mining; Data Warehousing; Electronics; Information and Communication Technologies(EICT); International Commission on Non-Ionizing Radiation Protection(ICNIRP); Compressed Natural Gas(CNG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The adverse effects of electromagnetic radiation from mobile phones and communication towers on health are well documented today. However, the radiation levels of communication towers, and their correlation with these effects, are not monitored.
The aim of this paper is to study, analyze, and apply networking and data mining technologies to develop an EICT-based diagnostic tool and monitoring system for electromagnetic radiation levels in the environment. The system networks all mobile towers of each service provider as a single entity and then connects all service providers online to a central monitoring agency for continuous monitoring. Since a very large number of mobile towers exist in India, each state can have its own regional network, which is further networked into a national central network; this can be enlarged to the entire world for monitoring the EMF radiation levels near every mobile tower. For these regional, national, and international networks, the connectivity is to be instituted by the respective service provider.
In this paper an attempt is made to logically apply data mining and networking technologies to develop a central EICT-based diagnostic tool and monitoring system for EMF radiation from each transmission tower. With this system, regional, national, and international agencies/authorities can continuously monitor the EMF radiation in each transmission tower area and verify it against exposure standards. It is proposed to display this information on an Integrated Display System in front of the monitoring authority at appropriate levels.
</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_27-EICT_Based_Diagnostic_Tool_and_Monitoring.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multithreading Image Processing in Single-core and Multi-core CPU using Java</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040926</link>
        <id>10.14569/IJACSA.2013.040926</id>
        <doi>10.14569/IJACSA.2013.040926</doi>
        <lastModDate>2013-09-30T18:04:57.9130000+00:00</lastModDate>
        
        <creator>Alda Kika</creator>
        
        <creator>Silvana Greca</creator>
        
        <subject>multithreading; image processing; multi-core; Java</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Multithreading has been shown to be a powerful approach for boosting system performance, and image processing is a good example of an application that benefits from it. Image processing requires many resources and much processing time because the calculations are often performed on a matrix of pixels. The Java programming language supports multithreaded programming as part of the language itself, instead of treating threads through the operating system. In this paper we explore the performance of Java image processing applications designed with a multithreading approach. In order to test how multithreading influences the performance of a program, we tested several image processing algorithms implemented in Java using both a sequential single-threaded and a multithreaded approach on single-core and multi-core CPUs. The experiments were based not only on different platforms and algorithms of differing complexity, but also on varying the sizes of the images and the number of threads when the multithreading approach is applied. Performance increases on single-core and multi-core CPUs in different ways depending on image size, the complexity of the algorithm, and the platform.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_26-Multithreading_Image_Processing_in_Single-core.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of the Software Metrics for the Complexity and Maintainability of Software Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040925</link>
        <id>10.14569/IJACSA.2013.040925</id>
        <doi>10.14569/IJACSA.2013.040925</doi>
        <lastModDate>2013-09-30T18:04:56.0870000+00:00</lastModDate>
        
        <creator>Dr Sonal Chawla</creator>
        
        <creator>Gagandeep Kaur</creator>
        
        <subject>Static metrics; OO metrics; MOOD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Software metrics are one of the well-known research topics in software engineering. Metrics are used to improve the quality and validity of software systems. Research in this area focuses mainly on static metrics obtained by static analysis of the software. However, modern software systems are incomplete without object-oriented design. Every system has its own complexity, which should be measured to improve the quality of the system. This paper describes the different types of metrics, covering both static code metrics and object-oriented metrics. The metrics are then summarized on the basis of their relevance to measuring complexity, and hence their contribution to better maintainability of the software code, retained quality and cost-effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_25-Comparative_Study_of_the_Software_Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Link-Budget Design and Analysis showing Impulse-based UWB Performance Trade-Off flexibility as Integrator Solution for Different Wireless Short-Range Infrastructures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040924</link>
        <id>10.14569/IJACSA.2013.040924</id>
        <doi>10.14569/IJACSA.2013.040924</doi>
        <lastModDate>2013-09-30T18:04:54.2770000+00:00</lastModDate>
        
        <creator>M. S Jawad</creator>
        
        <creator>Othman A. R</creator>
        
        <creator>Z. Zakaria</creator>
        
        <creator>A.A.lsa</creator>
        
        <creator>W. Ismail</creator>
        
        <subject>Ultra Wideband; Time Hopping-pulse position Modulation; Radio Frequency Identification; Wireless Sensors Networks; RAKE Receiver; Bit Error rate</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Future wireless indoor scenarios are expected to be complex, requiring wireless nodes to respond adaptively to dynamic changes in channel conditions. Interacting with neighboring nodes to achieve optimized performance in terms of data rates, distance and BER is a main concern in designing future wireless solutions. IR-UWB enters the picture as the missing puzzle piece that meets these requirements and glues the different existing wireless indoor infrastructures into a global platform solution. This paper shows the flexibility of IR-UWB signal design at the physical layer as a cross-layer architecture for optimized performance. A detailed performance analysis is presented as a mathematical model of the proposed wireless solution and described in a proposed link-budget design template. The performance evaluation shows the proposed system to be a good candidate in different wireless scenarios with different data-rate and distance requirements at a specified target BER. Simulation results as well as experimental statistical analysis of the received signal under different channel models and conditions are carried out as a proof of concept of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_24-Link-Budget_Design_and_Analysis_showing_Impulse-based_UWB_Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Positive and Negative Association Rules Using FII-Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040923</link>
        <id>10.14569/IJACSA.2013.040923</id>
        <doi>10.14569/IJACSA.2013.040923</doi>
        <lastModDate>2013-09-30T18:04:52.4670000+00:00</lastModDate>
        
        <creator>T Ramakrishnudu</creator>
        
        <creator>R B V Sbramanyam</creator>
        
        <subject>data mining; association rule; frequent itemset; positive association rule; negative association rule</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Positive and negative association rules are important for finding useful information hidden in large datasets; negative association rules in particular can reflect mutually exclusive correlations among items. Association rule mining among frequent items has been extensively studied in data mining research. In recent years, however, there has been an increasing demand for mining infrequent items. In this paper, we propose a tree-based approach that stores both frequent and infrequent itemsets in order to mine both positive and negative association rules from them. It minimizes I/O overhead by scanning the database only once. The performance study shows that the proposed method is more efficient than the previously proposed method.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_23-Mining_Positive_and_Negative_Association_Rules_using_FII-tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Security of the Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040922</link>
        <id>10.14569/IJACSA.2013.040922</id>
        <doi>10.14569/IJACSA.2013.040922</doi>
        <lastModDate>2013-09-30T18:04:50.6570000+00:00</lastModDate>
        
        <creator>Ahmed Mahmood</creator>
        
        <creator>Tarfa Hamed</creator>
        
        <creator>Charlie Obimbo</creator>
        
        <creator>Robert Dony</creator>
        
        <subject>Medical Imaging Security; Telemedicine Security; Chinese remainder theorem; Watermarking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Applying security to transmitted medical images is important to protect the privacy of patients. Secure transmission requires cryptography and watermarking to achieve confidentiality and data integrity. Improving the cryptography part requires an encryption algorithm that withstands different attacks for a long time. The proposed method is based on number theory and uses the Chinese remainder theorem as its backbone. This approach achieves a high level of security and withstands different attacks for a long time. For the watermarking part, the medical image is divided into two regions: a region of interest (ROI) and a region of background (ROB). The pixel values of the ROI contain the important information, so this region must not experience any change. The proposed watermarking technique divides the medical image into blocks and inserts the watermark into the ROI by shifting the blocks. An equivalent number of blocks in the ROB are then removed. This approach can be considered lossless since it does not affect the ROI, and it does not increase the image size. In addition, it can withstand some watermarking attacks such as cropping and noise.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_22-Improving_the_Security_of_the_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A System for Multimodal Context-Awareness</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040921</link>
        <id>10.14569/IJACSA.2013.040921</id>
        <doi>10.14569/IJACSA.2013.040921</doi>
        <lastModDate>2013-09-30T18:04:47.2400000+00:00</lastModDate>
        
        <creator>Georgios Galatas</creator>
        
        <creator>Fillia Makedon</creator>
        
        <subject>Multimodal; Context-awareness; Microsoft Kinect; RFID; Localization; Identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>In this paper we present an improvement of our novel localization system: introducing radio-frequency identification (RFID), which adds person-identification capabilities and increases multi-person localization robustness. Our system aims at achieving multimodal context-awareness in an assistive, ambient intelligence environment. The unintrusive devices used are RFID and 3-D audio-visual information from two Kinect sensors deployed at various locations of a simulated apartment to continuously track and identify its occupants, thus enabling activity monitoring. More specifically, we use skeletal tracking on the depth images and sound source localization on the audio signals captured by the Kinect sensors to accurately localize and track multiple people. RFID information is used mainly for identification but also for rough location estimation, enabling the mapping of location information from the Kinect sensors to the identification events of the RFID. Our system was evaluated in a real-world scenario and attained promising results with high accuracy, showing the great promise of using RFID and Kinect sensors jointly to solve the simultaneous identification and localization problem.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_21-A_System_for_Multimodal_Context-Awareness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Cognitive Tools on the Development of the Inquiry Skills of High School Students in Physics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040920</link>
        <id>10.14569/IJACSA.2013.040920</id>
        <doi>10.14569/IJACSA.2013.040920</doi>
        <lastModDate>2013-09-30T18:04:45.4470000+00:00</lastModDate>
        
        <creator>Mohamed I. Mustafa</creator>
        
        <creator>Louis Trudel</creator>
        
        <subject>inquiry skills; teaching strategy; cognitive tools; high school physics; simulation; science laboratory</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The purpose of the study was to compare the effectiveness of two teaching strategies that utilize two different cognitive tools for developing students’ inquiry skills in mechanics. The strategies were used to help students formulate Newton’s 2nd law of motion. Two cognitive tools were used: a computer simulation and the manipulation of concrete objects in a physics laboratory. A quasi-experimental method employing a 2 Cognitive Tools × 2 Time of Learning split-plot factorial design was applied in the study. The sample consisted of 54 Grade 11 students from two physics classes of the university preparation section in a high school in the province of Ontario (Canada). One class was assigned to interactive computer simulations (treatment) and the other to concrete objects in the physics laboratory (control). Both tools were embedded in the general framework of the guided-inquiry cycle approach. The results showed that the interaction effect of Cognitive Tools × Time of Learning was not statistically significant. However, the results also showed a significant effect on the development of students’ inquiry skills regardless of the type of cognitive tool used. Although the findings suggest that both strategies are effective in developing students’ inquiry skills in mechanics, students in the computer simulation group showed a larger gain on their inquiry skills test than their counterparts in the laboratory group.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_20-The_Impact_of_Cognitive_Tools_on_the_Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A DIY Approach to Uni-Temporal Database Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040919</link>
        <id>10.14569/IJACSA.2013.040919</id>
        <doi>10.14569/IJACSA.2013.040919</doi>
        <lastModDate>2013-09-30T18:04:42.0300000+00:00</lastModDate>
        
        <creator>Haitao Yang</creator>
        
        <creator>Fei Xu</creator>
        
        <creator>Lating Xia</creator>
        
        <subject>DIY solution; recurrence; historical snapshot; uni-temporal database; MIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>When historical versions of data are a concern for an MIS (Management Information System), we might naturally resort to temporal database products. These bi-temporal products, however, are often extravagant and not easily mastered by most MIS practitioners. Hence we present a plain DIY (do-it-yourself) solution, the Audit &amp; Change Logs Mechanism-based approach (ACLM), to meet the uni-temporal requirement of restoring historical versions of data. With ACLM, programmers can code SQL scripts on demand to trace and replay any snapshot of a historical data version via RDBMS built-in functions; they need not shift away from their usual way of coding stored procedures for data maintenance. Moreover, the ACLM approach is compatible with mega-data changes, and its added overhead proved imperceptible for routine access throughput in a typical scenario.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_19-A_Diy_Approach_to_Uni-Temporal_Database_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Role Assignment Scheme for Multichannel Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040918</link>
        <id>10.14569/IJACSA.2013.040918</id>
        <doi>10.14569/IJACSA.2013.040918</doi>
        <lastModDate>2013-09-30T18:04:39.0200000+00:00</lastModDate>
        
        <creator>Sunmyeng Kim</creator>
        
        <creator>Hyun Ah Lee</creator>
        
        <subject>role assignment; multichannel; mesh network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>A wireless mesh network (WMN) is a cost-effective access network architecture. The performance of multi-hop communication degrades quickly as the number of hops grows. Nassiri et al. proposed the Molecular MAC protocol for the autonomic assignment and use of multiple channels to improve network performance. In the Molecular MAC protocol, each node joins a shortest-path spanning tree rooted at a gateway node linked to the wired Internet. After the tree is formed, nodes at even-numbered and odd-numbered depths are assigned the roles of nucleus and electron, respectively. Each nucleus then selects an idle channel. However, this protocol has the following drawback: since all nodes at even-numbered depths are assigned the nucleus role, there are many nuclei in the topology. The number of assigned channels tends to increase, since each nucleus selects an idle channel not currently occupied by its neighboring nuclei. In wireless communication networks, channels are very important resources, so it is necessary to assign as few channels as possible. To this end, this paper proposes an efficient role assignment scheme that reduces the number of assigned channels by reducing the number of nodes assigned as nuclei and preventing nodes within each other's transmission range from becoming nuclei. The proposed scheme was verified through various simulation results.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_18-Efficient_Role_Assignment_Scheme_for_Multichannel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Bitwise Operations Related to a Fast Sorting Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040917</link>
        <id>10.14569/IJACSA.2013.040917</id>
        <doi>10.14569/IJACSA.2013.040917</doi>
        <lastModDate>2013-09-30T18:04:35.8070000+00:00</lastModDate>
        
        <creator>Krasimir Yordzhev</creator>
        
        <subject>bitwise operations; programming languages C/C++ and Java; sorting algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>In this work we discuss the benefits of using bitwise operations in programming, and show some interesting examples in this respect. An algorithm for sorting an integer array that makes substantial use of bitwise operations is described in detail. Besides its correctness, we strictly prove that the described algorithm runs in O(n) time. Throughout the realization of each of the examined algorithms, we use the apparatus of object-oriented programming with the syntax and semantics of the programming language C++.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_17-The_Bitwise_Operations_Related_to_a_Fast_Sorting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A quadratic convergence method for the management equilibrium model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040916</link>
        <id>10.14569/IJACSA.2013.040916</id>
        <doi>10.14569/IJACSA.2013.040916</doi>
        <lastModDate>2013-09-30T18:04:34.0100000+00:00</lastModDate>
        
        <creator>Jiayi Zhang</creator>
        
        <subject>Management equilibrium model; estimation of error bound; algorithm; quadratic convergence</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>In this paper, we study a class of methods for solving the management equilibrium model. We first give an estimate of the error bound for the model and then, based on this estimate, propose a method for solving the model. We prove that our algorithm is quadratically convergent without requiring the existence of a non-degenerate solution.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_16-A_quadratic_convergence_method_for_the_management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Algorithm for Improving the ESP Game</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040915</link>
        <id>10.14569/IJACSA.2013.040915</id>
        <doi>10.14569/IJACSA.2013.040915</doi>
        <lastModDate>2013-09-30T18:04:31.8270000+00:00</lastModDate>
        
        <creator>Mohamed Sakr</creator>
        
        <creator>Hany Mahgoub</creator>
        
        <creator>Arabi Keshk</creator>
        
        <subject>ESP game; Games with a purpose; Human computation; crowdsourcing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Games with a purpose (GWAP) and microtask crowdsourcing are human-computation techniques. They can help make image retrieval (IR) more accurate and helpful by enriching the IR system's database with additional descriptions and annotations for images. One such human-computation system is the ESP Game, a game with a purpose. Much work has been proposed to solve the ESP Game's many problems and to get the most benefit from it. One of these problems is that the ESP Game neglects players' answers for the same image that do not match. This paper presents a new algorithm that uses this neglected data to generate new labels for images. We deployed our algorithm at the University of Menoufia for evaluation. In this trial, we first focused on measuring the total number of labels generated by our Recycle Unused Answers for Images (RUAI) algorithm. In our evaluation of the RUAI algorithm we also present a new evaluation measure, the quality-of-labels measure, which quantifies the quality of the generated labels compared to pre-qualified labels. The results reveal that the proposed algorithm improved on the ESP Game in all cases.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_15-A_Novel_Algorithm_for_Improving_the_ESP_Game.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Open Cloud Model for Expanding Healthcare Infrastructure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040914</link>
        <id>10.14569/IJACSA.2013.040914</id>
        <doi>10.14569/IJACSA.2013.040914</doi>
        <lastModDate>2013-09-30T18:04:27.9130000+00:00</lastModDate>
        
        <creator>Sherif E. Hussein</creator>
        
        <creator>Hesham Arafat</creator>
        
        <subject>Cloud Computing; OpenStack; OpenShift; CloudSim; e-health</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Despite the rapid improvement of computation facilities, healthcare still suffers from limited storage space and incomplete utilization of computing infrastructure. This not only adds to the cost burden but also limits the possibility of expansion and integration with other healthcare services. Cloud computing, based on virtualization, elastic allocation of resources, and pay-as-you-go pricing for used services, has opened the way for fully integrated and distributed healthcare systems that can expand globally. However, cloud computing, with its ability to virtualize resources, does not come cheap or safe from the healthcare perspective. The main objective of this paper is to introduce a new strategy for healthcare infrastructure implementation using a private cloud based on OpenStack, with the ability to expand over a public cloud in a hybrid cloud architecture. This research proposes the migration of legacy software and medical data to a secured private cloud, with the possibility of integrating with arbitrary public clouds for services that might be needed in the future. The tools used are mainly OpenStack, DeltaCloud, and OpenShift, open-source tools adopted by major cloud computing companies. Their optimized integration can give increased performance with a considerable reduction in cost without sacrificing security. Simulation was then performed using CloudSim to measure the design's performance.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_14-An_Open_Cloud_Model_for_Expanding_Healthcare.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Key Based Security Framework for Private Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040913</link>
        <id>10.14569/IJACSA.2013.040913</id>
        <doi>10.14569/IJACSA.2013.040913</doi>
        <lastModDate>2013-09-30T18:04:26.1170000+00:00</lastModDate>
        
        <creator>Ali Shahbazi</creator>
        
        <creator>Julian Brinkley</creator>
        
        <creator>Ali Karahroudy</creator>
        
        <creator>Nasseh Tabrizi</creator>
        
        <subject>private cloud security framework; distributed key; dynamic metadata reconstruction; cloud security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Cloud computing in its various forms continues to grow in popularity as organizations of all sizes seek to capitalize on the cloud’s scalability, externalization of infrastructure and administration, and generally reduced application deployment costs. But while the attractiveness of these public cloud services is obvious, the ability to capitalize on these benefits is significantly limited for those organizations requiring high levels of data security. It is often difficult if not impossible, from a legal or regulatory perspective, for government agencies or health services organizations, for instance, to use these cloud services given their many documented data security issues. As a middle ground between the benefits and security concerns of public clouds, hybrid clouds have emerged as an attractive alternative, limiting access, conceptually, to users within an organization or within a specific subset of users within an organization. Private clouds, a significant option within hybrid clouds, are still susceptible to security vulnerabilities, a fact which points to the necessity of security frameworks capable of addressing these issues. In this paper we introduce the Treasure Island Security Framework (TISF), a conceptual security framework designed specifically to address the security needs of private clouds. We base our framework on a Distributed Key and Sequentially Addressing Distributed file system (DKASA), itself borrowing heavily from the Google File System and Hadoop. Our approach utilizes a distributed key methodology combined with sequential chunk addressing and dynamic reconstruction of metadata to produce a more secure private cloud. The goal of this work is not to evaluate the framework from an operational perspective but to provide the conceptual underpinning for the TISF. Experimental findings from our evaluation of the framework within a pilot project will be provided in a subsequent work.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_13-A_Distributed_Key_Based_Security_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Acoustic Strength of Green Turtle and Fish Based on FFT Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040912</link>
        <id>10.14569/IJACSA.2013.040912</id>
        <doi>10.14569/IJACSA.2013.040912</doi>
        <lastModDate>2013-09-30T18:04:22.9030000+00:00</lastModDate>
        
        <creator>Azrul Mahfurdz</creator>
        
        <creator>Sunardi</creator>
        
        <creator>Hamzah Ahmad</creator>
        
        <creator>Syed Abdullah Syed Abdul Kadir</creator>
        
        <creator>Nazuki Sulong</creator>
        
        <subject>Echosounder; Green Turtle; acoustic power; TED</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The acoustic power at different angles and distances was measured for green turtles of four different ages and three species of fish using a modified echo sounder (V1082). The echo signal from the TVG output was digitized at a 1 MHz sampling rate using an analog-to-digital converter (Measurement Computing USB1208HS). Animals were tied to a wooden frame to ensure they could not move away from the sound beam. The scatter values for fish demonstrate that echo strength differs with the angle of measurement; the lowest acoustic power for fish was recorded from the tail. The findings show a significant difference between fish and turtles aged 12 to 18 years at 4.5 and 5 meters. The carapace and plastron of the sea turtle give higher backscattering strength compared to the other sides, probably because of their hard surfaces. This result is considered important in determining the best method of separating sea turtles from fish, and reveals that size, surface and animal angle play an important role in determining the acoustic strength value.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_12-Acoustic_Strength_of_Green_Turtle_and_Fish_Based_on_FFT_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Construction of Neural Networks that Do Not Have Critical Points Based on Hierarchical Structure</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040911</link>
        <id>10.14569/IJACSA.2013.040911</id>
        <doi>10.14569/IJACSA.2013.040911</doi>
        <lastModDate>2013-09-30T18:04:19.5030000+00:00</lastModDate>
        
        <creator>Tohru Nitta</creator>
        
        <subject>critical point; singular point; redundancy; complex number; quaternion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>A critical point is a point at which the derivatives of an error function are all zero. It has been shown in the literature that critical points caused by the hierarchical structure of a real-valued neural network (NN) can be local minima or saddle points, whereas most critical points caused by the hierarchical structure are saddle points in the case of complex-valued neural networks. Several studies have demonstrated that singularities of these kinds have a negative effect on learning dynamics in neural networks. As described in this paper, decomposing high-dimensional neural networks into low-dimensional neural networks equivalent to the originals yields neural networks that have no critical points based on the hierarchical structure. Concretely, the following three cases are shown: (a) a 2-2-2 real-valued NN is constructed from a 1-1-1 complex-valued NN; (b) a 4-4-4 real-valued NN is constructed from a 1-1-1 quaternionic NN; (c) a 2-2-2 complex-valued NN is constructed from a 1-1-1 quaternionic NN. The NNs described above comparatively do not suffer negative effects from singular points during learning because they have no critical points based on a hierarchical structure.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_11-Construction_of_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Network-On-Chip Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040910</link>
        <id>10.14569/IJACSA.2013.040910</id>
        <doi>10.14569/IJACSA.2013.040910</doi>
        <lastModDate>2013-09-30T18:04:16.0400000+00:00</lastModDate>
        
        <creator>Ahmed Ben Achballah</creator>
        
        <creator>Slim Ben Saoud</creator>
        
        <subject>Embedded Systems; Network-On-Chip; CAD Tools; Performance Analysis; Verification and Measurement</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Nowadays, Systems-on-Chip (SoCs) have evolved considerably in terms of performance, reliability and integration capacity. The last advantage has driven growth in the number of cores or Intellectual Properties (IPs) on a single chip. Unfortunately, this large number of IPs has caused a new issue: intra-chip communication between the elements of the same chip. To resolve this problem, a new paradigm has been introduced, the Network-on-Chip (NoC). Since the introduction of the NoC paradigm in the last decade, new methodologies and approaches have been presented by the research community, and many of them have been adopted by industry. The literature contains many relevant studies and surveys discussing NoC proposals and contributions. However, few of them have discussed or proposed a comparative study of NoC tools. The objective of this work is to establish a reliable survey of available NoC design, simulation and implementation tools. We collected a substantial amount of information and characteristics about NoC-dedicated tools, which we present throughout this survey. This study is built on a considerable number of references, and we hope it will help scientists.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_10-A_Survey_of_Network-On-Chip_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recommender System for Personalised Wellness Therapy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040909</link>
        <id>10.14569/IJACSA.2013.040909</id>
        <doi>10.14569/IJACSA.2013.040909</doi>
        <lastModDate>2013-09-30T18:04:12.6400000+00:00</lastModDate>
        
        <creator>Thean Pheng Lim</creator>
        
        <creator>Wahidah Husain</creator>
        
        <creator>Nasriah Zakaria</creator>
        
        <subject>recommender system; rule-based reasoning; case-based reasoning; Wellness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Rising costs and risks in health care have shifted the preference of individuals from health treatment to disease prevention. This preventive treatment is known as wellness. In recent years, the Internet has become a popular place for wellness-conscious users to search for wellness-related information and solutions. As the user community becomes more wellness conscious, service improvement is needed to help users find relevant personalised wellness solutions. Due to rapid development in the wellness market, users value convenient access to wellness services. Most wellness websites reflect common health informatics approaches; these amount to more than 70,000 sites worldwide. Thus, the wellness industry should improve its Internet services in order to provide better and more convenient customer service. This paper discusses the development of a wellness recommender system that helps users find and adapt suitable personalised wellness therapy treatments based on their individual needs, and introduces new approaches that enhance the convenience and quality of wellness information delivery on the Internet. The wellness recommendation task is performed using the Artificial Intelligence technique of hybrid case-based reasoning (HCBR), which solves users’ current wellness problems by applying solutions from similar cases in the past. From the evaluation results for our prototype wellness recommendation system, we conclude that wellness consultants use consistent wellness knowledge to recommend solutions for sample wellness cases generated through an online consultation form. Thus, the proposed model can be integrated into wellness websites to enable users to search for suitable personalised wellness therapy treatments based on their health condition.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_9-Recommender_System_for_Personalised_Wellness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Soft Processor MicroBlaze-Based Embedded System for Cardiac Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040908</link>
        <id>10.14569/IJACSA.2013.040908</id>
        <doi>10.14569/IJACSA.2013.040908</doi>
        <lastModDate>2013-09-30T18:04:09.2230000+00:00</lastModDate>
        
        <creator>El Hassan El Mimouni </creator>
        
        <creator>Mohammed Karim</creator>
        
        <subject>ECG; FPGA; Heart Rate Variability; MicroBlaze; QRS detection; SoC; Xilkernel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>This paper aims to contribute to the efforts of the design community to demonstrate the effectiveness of state-of-the-art Field Programmable Gate Arrays (FPGAs) in embedded systems development, taking a case study in the biomedical field. With this design approach, we have developed a System on Chip (SoC) for cardiac monitoring based on the soft processor MicroBlaze and the Xilkernel Real Time Operating System (RTOS), both from Xilinx. The system permits the acquisition and digitizing of the Electrocardiogram (ECG) analog signal, displaying the heart rate on a seven-segment module and the ECG on a Video Graphics Adapter (VGA) screen, tracing the heart rate variability (HRV) tachogram, and communicating with a Personal Computer (PC) via the serial port. We used the MIT-BIH Database records to test and evaluate our implementation's performance. In terms of resource utilization, the implementation occupies around 70% of the target FPGA, namely the Xilinx Spartan 6 XC6SLX16. The accuracy of the QRS detection exceeds 96%.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_8-A_Soft_Processor_MicroBlaze-Based_Embedded.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pedagogy: Instructivism to Socio-Constructivism through Virtual Reality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040907</link>
        <id>10.14569/IJACSA.2013.040907</id>
        <doi>10.14569/IJACSA.2013.040907</doi>
        <lastModDate>2013-09-30T18:04:05.8200000+00:00</lastModDate>
        
        <creator>Moses O. Onyesolu</creator>
        
        <creator>Victor C. Nwasor</creator>
        
        <creator>Obiajulu E. Ositanwosu</creator>
        
        <creator>Obinna N. Iwegbuna</creator>
        
        <subject>learning theory; virtual reality; simulated environment; education; pedagogy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Learning theories evolved over time, from instructivism through constructivism to social constructivism. These theories were no doubt applied in education and had their effects on learners. Advancing technology created a paradigm shift by enabling new ways of teaching and learning, as found in virtual reality (VR). VR provides creative ways for students to learn and the opportunity to achieve learning goals by presenting artificial environments. We developed and simulated a desktop virtual reality system using Visual Basic.NET, Java and Macromedia Flash. This simulated environment enhanced students’ understanding by providing a degree of reality unattainable in a traditional two-dimensional interface, creating a sensory-rich interactive learning environment.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_7-Pedagogy_Instructivism_to_Socio-Constructivism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Concept-to-Product Knowledge Management Framework: Towards a Cloud-based Enterprise 2.0 Environment at a Multinational Corporation in Penang</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040906</link>
        <id>10.14569/IJACSA.2013.040906</id>
        <doi>10.14569/IJACSA.2013.040906</doi>
        <lastModDate>2013-09-30T18:04:03.6370000+00:00</lastModDate>
        
        <creator>Yu-N Cheah</creator>
        
        <creator>Soo Beng Khoh</creator>
        
        <subject>knowledge management framework; organizational memory; Enterprise 2.0; workflow management; cloud computing; concept to product</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Knowledge management initiatives of a multinational corporation in Penang are currently deployed via its enterprise-wide portal and Intranet. To improve knowledge management initiatives from its current strength, efforts could now be focused on synergizing organizational workflow as well as computing resources and repositories. This paper proposes a concept-to-product knowledge management framework to be deployed in a cloud-based environment. It aims to provide effective support for collaborative knowledge management efforts at all stages of projects. The multi-layered framework is built upon the organizational memory which drives relevant processors and applications. The framework manifests itself in the form of a cloud-based concept-to-product dashboard from which employees can access applications and tools that facilitate their day-to-day tasks in a seamless manner.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_6-A_Concept-to-Product_Knowledge_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fingerprint Image Segmentation Using Haar Wavelet and Self Organizing Map</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040905</link>
        <id>10.14569/IJACSA.2013.040905</id>
        <doi>10.14569/IJACSA.2013.040905</doi>
        <lastModDate>2013-09-30T18:03:59.7370000+00:00</lastModDate>
        
        <creator>Sri Suwarno</creator>
        
        <creator>Subanar</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Sri Hartati</creator>
        
        <subject>Fingerprint Segmentation; AFIS; background image; foreground image; Haar wavelet; SOM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Fingerprint image segmentation is one of the important preprocessing steps in Automatic Fingerprint Identification Systems (AFIS). Segmentation separates the image background from the image foreground, removing unnecessary information from the image. This paper proposes a new fingerprint segmentation method using the Haar wavelet and Kohonen’s Self Organizing Map (SOM). The fingerprint image was decomposed using a 2D Haar wavelet at two levels. To generate feature vectors, the decomposed image was divided into non-overlapping blocks of 2x2 pixels and converted into four-element vectors. These vectors were then fed into a SOM network that grouped them into foreground and background clusters. Finally, blocks in the background area were removed based on the indexes of blocks in the background cluster. From the research that has been carried out, we conclude that the proposed method is effective in segmenting the background from fingerprint images.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_5-Fingerprint_Image_Segmentation_Using_Haar_Wavelet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review on Aspect Oriented Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040904</link>
        <id>10.14569/IJACSA.2013.040904</id>
        <doi>10.14569/IJACSA.2013.040904</doi>
        <lastModDate>2013-09-30T18:03:57.9100000+00:00</lastModDate>
        
        <creator>Heba A. Kurdi</creator>
        
        <subject>Aspect Oriented Programming; software engineering; AspectJ</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Aspect-oriented programming (AOP) has been introduced as a potential programming approach for the specification of non-functional component properties, such as fault tolerance, logging and exception handling. Such properties are referred to as crosscutting concerns and represent critical issues that conventional programming approaches could not modularize effectively, leading to complex code. This paper discusses the AOP concept, the necessity that led to it, and how it provides better results in code quality and software development efficiency, followed by the challenges that developers and researchers face when dealing with this approach. It is concluded that AOP is promising and deserves more attention from developers and researchers. However, more systematic evaluation studies should be conducted to better understand its implications.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_4-Review_on_Aspect_Oriented_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis and Comparison of 6to4 Relay Implementations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040903</link>
        <id>10.14569/IJACSA.2013.040903</id>
        <doi>10.14569/IJACSA.2013.040903</doi>
        <lastModDate>2013-09-30T18:03:56.0400000+00:00</lastModDate>
        
        <creator>G&#225;bor Lencse</creator>
        
        <creator>S&#225;ndor R&#233;p&#225;s</creator>
        
        <subject>IPv6 deployment; IPv6 transition solutions; 6to4; performance analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>The depletion of the public IPv4 address pool may speed up the deployment of IPv6. The coexistence of the two versions of IP requires some transition mechanisms. One of them is 6to4, which provides a solution for the problem of an IPv6-capable device in an IPv4-only environment. From among the several 6to4 relay implementations, the following ones were selected for testing: sit under Linux, stf under FreeBSD and stf under NetBSD. Their stability and performance were investigated in a test network. The load on the 6to4 relay implementations was increased by incrementing the number of client computers generating the traffic. The packet loss and the response time of the 6to4 relay, as well as the CPU utilization and the memory consumption of the computer running the tested 6to4 relay implementations, were measured. The implementations were also tested under very heavy load conditions to see whether they are safe to use in production systems.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_3-Performance_Analysis_and_Comparison_of_6to4_Relay.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Partition based Graph Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040902</link>
        <id>10.14569/IJACSA.2013.040902</id>
        <doi>10.14569/IJACSA.2013.040902</doi>
        <lastModDate>2013-09-30T18:03:52.6230000+00:00</lastModDate>
        
        <creator>Meera Dhabu</creator>
        
        <creator>Dr. P. S. Deshpande</creator>
        
        <creator>Siyaram Vishwakarma</creator>
        
        <subject>graph compression; subgraph; partitioning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>Graphs are used in a diverse set of disciplines ranging from computer networks to biological networks, social networks, the World Wide Web, etc. With advances in technology and the discovery of new knowledge, the size of graphs is increasing exponentially. A graph containing millions of nodes and billions of edges can be terabytes in size. At the same time, the size of graphs presents a big obstacle to understanding the essential information they contain, and with current main-memory sizes it is often impossible to load a whole graph into main memory. Hence the need for graph compression techniques arises. In this paper, we present a graph compression technique which partitions a graph into subgraphs, each of which can then be compressed individually. For partitioning, the proposed approach identifies weak links present in the graph and partitions the graph at those weak links. During query processing, only the required partitions need to be decompressed, eliminating decompression of the whole graph.</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_2-Partition_based_Graph_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Autonomic Auto-scaling Controller for Cloud Based Applications </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040901</link>
        <id>10.14569/IJACSA.2013.040901</id>
        <doi>10.14569/IJACSA.2013.040901</doi>
        <lastModDate>2013-09-30T18:03:50.7670000+00:00</lastModDate>
        
        <creator>Jorge M. Londo&#241;o-Pel&#225;ez</creator>
        
        <creator>Carlos A. Florez-Samur</creator>
        
        <subject>autonomic resource management; cloud computing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(9), 2013</description>
        <description>One of the key promises of Cloud Computing is elasticity: applications have at their disposal a very large pool of resources from which they can allocate whatever they need. For any fair-sized application the amount of resources is significant, and both overprovisioning and underprovisioning have a negative impact on the customer: the first leads to excess costs, and the second to poor application performance with negative business repercussions as well. It is therefore an important problem to provision resources appropriately.
In addition, it is well known that application workloads exhibit high variability over short time periods. This creates the need for autonomic mechanisms that make resource management decisions in real time, optimizing both cost and performance. To address these problems we present an autonomic auto-scaling controller that, based on the stream of measurements from the system, maintains the optimal number of resources and responds efficiently to workload variations, without incurring excess costs from high churn of resources or short-duration peaks in the workload.
To evaluate the performance of our system we conducted extensive evaluations based on traces of real applications deployed in the cloud. Our results show significant improvements over existing techniques.
</description>
        <description>http://thesai.org/Downloads/Volume4No9/Paper_1-An_Autonomic_Auto-scaling_Controller_for_Cloud_Based_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Paradox of the Fuzzy Disambiguation in the Information Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020909</link>
        <id>10.14569/IJARAI.2013.020909</id>
        <doi>10.14569/IJARAI.2013.020909</doi>
        <lastModDate>2013-09-09T14:15:42.1700000+00:00</lastModDate>
        
        <creator>Anna Bryniarska</creator>
        
        <subject>fuzzy disambiguation paradox; Description Logic; FuzzyDL; Information Retrieval Logic; Semantic Web</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>Current methods of data mining, word sense disambiguation in information retrieval, semantic relations, fuzzy set theory, fuzzy description logic, fuzzy ontologies and their implementations overlook the existence of a paradox, called here the paradox of fuzzy disambiguation. The paradox lies in the fact that precise knowledge can be obtained from fuzzy data and expert knowledge. To describe this paradox, this paper introduces a conceptual apparatus and formulates an information retrieval logic. Certain applications of this logic to searching for information on the Web are suggested.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_9-The_Paradox_of_the_Fuzzy_Disambiguation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Discrete Mechanics Approach to Gait Generation on Periodically Unlevel Grounds for the Compass-type Biped Robot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020908</link>
        <id>10.14569/IJARAI.2013.020908</id>
        <doi>10.14569/IJARAI.2013.020908</doi>
        <lastModDate>2013-09-09T14:15:38.6930000+00:00</lastModDate>
        
        <creator>Tatsuya Kai</creator>
        
        <creator>Takeshi Shintani</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>This paper addresses a gait generation problem for the compass-type biped robot on periodically unlevel grounds. We first derive the continuous/discrete compass-type biped robots (CCBR/DCBR) via continuous/discrete mechanics, respectively. Next, we formulate an optimal gait generation problem on periodically unlevel grounds for the DCBR as a finite-dimensional nonlinear optimization problem, and show that a discrete control input can be obtained by solving the optimization problem with sequential quadratic programming. Then, we develop a method for transforming a discrete control input into a continuous zero-order-hold input based on the discrete Lagrange-d’Alembert principle. Finally, we show numerical simulations, and it turns out that our new method can generate stable gaits on a periodically unlevel ground for the CCBR.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_8-A_Discrete_Mechanics_Approach_to_Gait_Generation_on_Periodically_Unlevel_Grounds_for_the_Compass-type_Biped_Robot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach with Support Vector Machine using Variable Features Selection on Breast Cancer Prognosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020907</link>
        <id>10.14569/IJARAI.2013.020907</id>
        <doi>10.14569/IJARAI.2013.020907</doi>
        <lastModDate>2013-09-09T14:15:32.2800000+00:00</lastModDate>
        
        <creator>Sandeep Chaurasia</creator>
        
        <creator>Dr. P Chakrabarti</creator>
        
        <subject>Breast cancer; feature selection; Support vectors; Support Vector Machine; Wisconsin Breast Cancer Dataset.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of machine learning. In this paper we use a support vector machine classifier to construct a model for breast cancer survivability prediction. We apply both 5-fold and 10-fold cross-validation with variable selection on the input feature vectors, and measure classification performance in terms of AUC, specificity and sensitivity. The performance of the SVM is much better than that of the other machine learning classifiers.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_7-An_Approach_with_Support_Vector_Machine_using_Variable_Features_Selection_on_Breast_Cancer_Prognosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study Among Lease Square Method, Steepest Descent Method, and Conjugate Gradient Method for Atmopsheric Sounder Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020906</link>
        <id>10.14569/IJARAI.2013.020906</id>
        <doi>10.14569/IJARAI.2013.020906</doi>
        <lastModDate>2013-09-09T14:15:30.4070000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>nonlinear optimization theory; solution space; atmospheric sounder</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A comparative study among the Least Square Method: LSM, Steepest Descent Method: SDM, and Conjugate Gradient Method: CGM for atmospheric sounder data analysis (estimation of vertical profiles of water vapor) is conducted. Through simulation studies, it is found that CGM shows the best estimation accuracy, followed by SDM and LSM. Method dependency on atmospheric models is also clarified.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_6-Comparative_Study_Among_Lease_Square_Method,_Steepest_Descent_Method,_and_Conjugate_Gradient_Method_for_Atmopsheric_Sounder_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of 5D Assimilation Data for Meteorological Forecasting and Its Related Disaster Mitigations Utilizing Vis5D of Software Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020905</link>
        <id>10.14569/IJARAI.2013.020905</id>
        <doi>10.14569/IJARAI.2013.020905</doi>
        <lastModDate>2013-09-09T14:15:28.1000000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>animation; assimilation data; weather related disaster; Vis5D; NCEP/GDAS</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A method for visualizing 5D assimilation data for meteorological forecasting and related disaster mitigation utilizing the Vis5D software tool is proposed. In order to mitigate severe weather-related disasters, meteorological forecasting and prediction are needed. Numerical weather forecasting data, in particular assimilation data, are available, and time series of three-dimensional geophysical parameters have to be represented visually on a computer display in a comprehensible manner. Among the available visualization software tools, Vis5D in particular can display animations of three-dimensional imagery data. Through experiments with NCEP/GDAS assimilation data, it is found that the proposed method is appropriate for representing 5D assimilation data in a comprehensible manner.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_5-Visualization_of_5D_Assimilation_Data_for_Meteorological_Forecasting_and_Its_Related_Disaster_Mitigations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bi-Directional Reflectance Distribution Function: BRDF Effect on Un-mixing, Category Decomposition of the Mixed Pixel (MIXEL) of Remote Sensing Satellite Imagery Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020904</link>
        <id>10.14569/IJARAI.2013.020904</id>
        <doi>10.14569/IJARAI.2013.020904</doi>
        <lastModDate>2013-09-09T14:15:26.2570000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Unmixing; BRDF; Mixel; Lambertian surface; Category decomposition </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A method for unmixing, the category decomposition of the mixed pixel (MIXEL) of remote sensing satellite imagery data, taking into account the effect of the Bi-Directional Reflectance Distribution Function: BRDF, is proposed. Although the BRDF effect on the estimation of mixing ratios is not negligible, conventional unmixing methods do not take it into account. Through experiments, the effect is clarified, and the proposed unmixing method with consideration of the BRDF effect is validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_4-Bi-Directional_Reflectance_Distribution_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Copier Which Allows Reuse Copied Documents with Sorting Capability in Accordance with Document Types</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020903</link>
        <id>10.14569/IJARAI.2013.020903</id>
        <doi>10.14569/IJARAI.2013.020903</doi>
        <lastModDate>2013-09-09T14:15:24.4330000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>data hiding; MPA; wavelet; secure copy</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A secure copy machine that allows the reuse of copied documents, with sorting capability in accordance with document type, is proposed. Through experiments with a variety of document types, it is found that copied documents can be securely shared and stored in a database according to automatically classified document types. The copied documents are protected by data hiding based on wavelet Multi-Resolution Analysis (MRA).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_3-Secure_Copier_Which_Allows_Reuse_Copied_Documents_with_Sorting_Capability_in_Accordance_with_Document_Types.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm Utilizing Image Clustering with Merge and Split Processes Which Allows Minimizing Fisher Distance Between Clusters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020902</link>
        <id>10.14569/IJARAI.2013.020902</id>
        <doi>10.14569/IJARAI.2013.020902</doi>
        <lastModDate>2013-09-09T14:15:22.6070000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>image clustering; clustering; genetic algorithm; Fisher distance</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A genetic algorithm for image clustering with merge and split processes, which minimizes the Fisher distance between clusters, is proposed. Through experiments with simulated and real remote sensing satellite imagery data, it is found that the proposed clustering method is superior to the conventional k-means and ISODATA clustering methods, as judged by comparison with geographic maps and with classification results from the Maximum Likelihood classification method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_2-Genetic_Algorithm_Utilizing_Image_Clustering_with_Merge_and_Split_Processes_Which_Allows_Minimizing_Fisher_Distance_Between_Clusters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Routing Protocol under Noisy Environment for Mobile Ad Hoc Networks using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020901</link>
        <id>10.14569/IJARAI.2013.020901</id>
        <doi>10.14569/IJARAI.2013.020901</doi>
        <lastModDate>2013-09-09T14:15:19.9400000+00:00</lastModDate>
        
        <creator>Supriya Srivastava</creator>
        
        <creator>A. K. Daniel</creator>
        
        <subject>Fuzzy Logic; Noise; Signal Strength; MANET</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(9), 2013</description>
        <description>A MANET is a collection of mobile nodes communicating and cooperating with each other to route packets from sources to their destinations. A MANET supports dynamic routing strategies in the absence of wired infrastructure and centralized administration. In this paper, we propose a routing algorithm for mobile ad hoc networks based on fuzzy logic to discover an optimal route for transmitting data packets to the destination. This protocol helps every node in a MANET choose the next efficient successor node on the basis of channel parameters such as environmental noise and signal strength. The protocol improves route performance by increasing network lifetime, reducing link failures, and selecting the best node for forwarding the data packet to the next node.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No9/Paper_1-An_Efficient_Routing_Protocol_under_Noisy_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Steganography Method for Hiding BW Images into Gray Bitmap Images via k-Modulus Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040836</link>
        <id>10.14569/IJACSA.2013.040836</id>
        <doi>10.14569/IJACSA.2013.040836</doi>
        <lastModDate>2013-09-01T05:19:05.3970000+00:00</lastModDate>
        
        <creator>Firas A. Jassim</creator>
        
        <subject>Image steganography, Information hiding, Security, k-Modulus Method.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This paper creates a pragmatic steganographic implementation to hide a black and white image, known as the stego image, inside another gray bitmap image, known as the cover image. First of all, the proposed technique uses the k-Modulus Method (K-MM) to convert all pixels within the cover image into multiples of a positive integer named k. Since black and white images can be represented in binary, i.e., as 0 or 1, the suitable value for the positive integer k in this article is two. Therefore, each pixel inside the cover image is divisible by two, which produces a remainder of either 0 or 1. Subsequently, the black and white representation of the stego image can be hidden inside the cover image. The ocular differences between the cover image before and after adding the stego image are insignificant. The experimental results show that the PSNR values for the cover image are very high, with a very small Mean Square Error.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_36-A_Novel_Steganography_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pilot Study of Industry Perspective on Requirement Engineering Education: Measurement of Rasch Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040835</link>
        <id>10.14569/IJACSA.2013.040835</id>
        <doi>10.14569/IJACSA.2013.040835</doi>
        <lastModDate>2013-09-01T05:19:03.6030000+00:00</lastModDate>
        
        <creator>NOR AZLIANA AKMAL JAMALUDIN</creator>
        
        <creator>SHAMSUL SAHIBUDDIN</creator>
        
        <subject>Higher Learning Education; Requirement Engineering; Education; Rasch Measurement Model; Employability Skill; Undergraduate Problem; Rasch Analysis; Unidimensionality;  Human-based Problem.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The software development industry identifies human-based issues as a significant problem in Requirement Engineering. For that reason, education has a substantial impact on delivering skilled workers and should be a medium for reducing the problem. For this pilot study, a survey questionnaire was distributed among ICT organizations with MSC status in Malaysia. 15.53% (N = 32) of respondents successfully returned their responses. The results show that only 27 persons were analyzed, owing to the misfit data identified by the Rasch Measurement Model. Unidimensionality, the person-item map, and misfit data are discussed. The research objective, to identify undergraduate problems in Requirement Engineering education, is achieved. Future work will involve further analysis of the actual survey to improve employability skills among software engineering undergraduate students.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_35-Pilot_Study_of_Industry_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved QO-STBC OFDM System Using Null Interference Elimination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040834</link>
        <id>10.14569/IJACSA.2013.040834</id>
        <doi>10.14569/IJACSA.2013.040834</doi>
        <lastModDate>2013-09-01T05:19:00.1700000+00:00</lastModDate>
        
        <creator>K. O. O. Anoh</creator>
        
        <creator>R. A. Abd-Alhameed</creator>
        
        <creator>Y. A. S. Dama</creator>
        
        <creator>S. M. R. Jones</creator>
        
        <creator>T. S. Ghazaany</creator>
        
        <creator>J. Rodrigues</creator>
        
        <creator>K. N. Voudouris</creator>
        
        <subject>QO-STBC; STBC; OFDM; Decoding matrix; Null-Interference; Interference;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The quasi-orthogonal space time block coding (QO-STBC) over orthogonal frequency division multiplexing (OFDM) is investigated. Traditionally, QO-STBC does not achieve full diversity, since the detection matrix of the QO-STBC scheme is not a diagonal matrix. In STBC, the decoding matrix is a diagonal matrix, which enables linear decoding, whereas the decoding matrix in traditional QO-STBC does not. In this paper it is shown that the decoding process produces some interfering terms, in the form of non-diagonal elements, which limit linear decoding. As a result, interference from the application of the QO-STBC decoding matrix degrades the performance of the scheme such that full diversity is not attained. A method of eliminating this interference in QO-STBC is investigated by nulling the interfering terms to approach full diversity for an OFDM system. It was found that the interference reduction technique yields approximately a 2 dB BER performance gain in QO-STBC. Theoretical and simulation results are presented for both traditional QO-STBC and interference-free QO-STBC applying OFDM.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_34-Improved_QO-STBC_OFDM_System_Using_Null_Interference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Ecosystem: Features, Benefits and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040833</link>
        <id>10.14569/IJACSA.2013.040833</id>
        <doi>10.14569/IJACSA.2013.040833</doi>
        <lastModDate>2013-09-01T05:18:56.9730000+00:00</lastModDate>
        
        <creator>J. V. Joshua</creator>
        
        <creator>D.O. Alao</creator>
        
        <creator>S.O. Okolie</creator>
        
        <creator>O. Awodele</creator>
        
        <subject>Software ecosystem; Open source; closed system</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Software Ecosystem (SECO) is a new and rapidly evolving phenomenon in the field of software engineering. It is an approach through which complex relationships among companies in the software industry can be resolved. SECOs are gaining importance with the advent of the Google Android, Apple iOS, Microsoft and Salesforce.com ecosystems. A SECO is a co-innovation approach by developers, software organisations, and third parties that share a common interest in the development of the software technology. Limited research has been done on SECOs; hence, researchers and practitioners are still eager to elucidate this concept.
A systematic study was undertaken to present a review of software ecosystems that addresses the features, benefits and challenges of SECOs.
This paper shows that the open source development model and innovative process development are key features of SECOs, and that the main challenges of SECOs are security, evolution management, and infrastructure tools for fostering interaction. Finally, SECOs foster co-innovation, increase attractiveness for new players, and decrease costs.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_33-Software_Ecosystem_Features,_Benefits_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Keccak f-[1600]</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040832</link>
        <id>10.14569/IJACSA.2013.040832</id>
        <doi>10.14569/IJACSA.2013.040832</doi>
        <lastModDate>2013-09-01T05:18:53.0730000+00:00</lastModDate>
        
        <creator>Ananya Chowdhury</creator>
        
        <creator>Utpal Kumar Ray</creator>
        
        <subject>Sponge Construction; State; Rounds; Bitrate(r); Capacity(c); Diversifier(d); Plane; Slice; Sheet; Row; Column; Lane; Bit</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Keccak is the latest hash function selected as the winner of the NIST Hash Function Competition. SHA-3 is not meant to replace SHA-2, as no significant attacks on SHA-2 have been demonstrated; rather, it was designed in response to the need for an alternative, dissimilar construction for cryptographic hashing that is more fortified against attacks. In this paper we present an analysis of the software implementation of Keccak-f[1600] based on disk space utilization and the time required to compute digests of desired sizes.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_32-Performance_Analysis_of_Keccak_f-[1600].pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Determination of Affirmative and Negative Intentions for Indirect Speech Acts by a Recommendation Tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040831</link>
        <id>10.14569/IJACSA.2013.040831</id>
        <doi>10.14569/IJACSA.2013.040831</doi>
        <lastModDate>2013-09-01T05:18:49.1570000+00:00</lastModDate>
        
        <creator>Takuki Ogawa</creator>
        
        <creator>Kazuhiro Morita</creator>
        
        <creator>Masao Fuketa</creator>
        
        <creator>Jun-ichi Aoe</creator>
        
        <subject>recommendation system; indirect speech acts; affirmative intention; negative intention</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>For context-based recommendation systems, it is necessary to detect affirmative and negative intentions from answers. However, traditional studies cannot determine these intentions from indirect speech acts.
In order to determine these intentions from indirect speech acts, this paper defines a recommendation tree and proposes an algorithm for deriving the intentions of indirect speech acts from the tree. In the proposed method, a recommendation condition (RC) is introduced and classified into required, selectable, and not-selectable RCs. The recommendation tree is constructed from nodes and edges corresponding to these three conditions. The deriving algorithm determines the affirmative and negative intentions of indirect speech acts by tracing the tree.
Experimental results verify that the accuracy of the proposed method is about 40 points higher than that of the traditional method.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_31-The_Determination_of_Affirmative_and_Negative_Intentions_for_Indirect_Speech_Acts_by_a_Recommendation_Tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on the Conception of Generic Fuzzy Expert System for Surveillance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040830</link>
        <id>10.14569/IJACSA.2013.040830</id>
        <doi>10.14569/IJACSA.2013.040830</doi>
        <lastModDate>2013-09-01T05:18:45.7400000+00:00</lastModDate>
        
        <creator>Najar Yousra</creator>
        
        <creator>Ketata Raouf</creator>
        
        <creator>Ksouri Mekki</creator>
        
        <subject>Generic Fuzzy expert system; surveillance; uncertainty/error analysis; three tanks; ECG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This paper deals with the use of fuzzy logic to minimize uncertainty effects in surveillance. It studies the conception of an efficient fuzzy expert system with two characteristics: it is generic and robust to uncertainties. Analyzing the distance between the optimal and real values of variables is the main idea of the research. A fuzzy inference system then decides the state of significant variables: normal or abnormal. A comparison between three proposed fuzzy expert systems is presented to highlight the effect of the number and type of membership functions. Besides being generic, this system can be applied in three fields: industrial surveillance, camera surveillance, and medical surveillance. To demonstrate results in these fields, MATLAB is used to realize this approach and to simulate system responses, which revealed interesting conclusions.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_30-A_Study_on_the_Conception_of_Generic_Fuzzy_Expert_System_for_Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Representation Modeling Persona by using Ontologies: Vocabulary Persona</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040829</link>
        <id>10.14569/IJACSA.2013.040829</id>
        <doi>10.14569/IJACSA.2013.040829</doi>
        <lastModDate>2013-09-01T05:18:42.3400000+00:00</lastModDate>
        
        <creator>GAOU Salma</creator>
        
        <creator>EL KADIRI Kamal Eddine</creator>
        
        <creator>CORNELIU BURAGA Sabin</creator>
        
        <subject>Semantic Web; FOAF; Persona; Vocabulary; Ontology;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The Semantic Web aims to add semantics to all these resources, allowing computer systems to &quot;understand&quot; their meaning by accessing structured collections of information and inference rules that can be used to drive automated reasoning to better satisfy user requirements. The standard description of Web resources proposed by the W3C, RDF (Resource Description Framework), is, as the name implies, a metadata framework used to guide the description of resources, making the information more &quot;structured&quot; for search engines and, more generally, for any automated tool that analyzes web pages. The Semantic Web is a new web in which all Web resources are described by metadata, which allows machines to make better use of these resources. Taking the FOAF (Friend Of A Friend) specification as a foundation, we use semantic structures (RDFa) to create an ontology and the technologies in which it is implemented. We create a conceptual model (e.g., an ontology) for personas and their uses in the context of human-computer interaction, and we present some screenshots of the application in execution.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_29-Representation_Modeling_Persona_by_using_Ontologies_Vocabulary_Persona.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic, Automatic Image Annotation Based on Multi-Layered Active Contours and Decision Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040828</link>
        <id>10.14569/IJACSA.2013.040828</id>
        <doi>10.14569/IJACSA.2013.040828</doi>
        <lastModDate>2013-09-01T05:18:39.1230000+00:00</lastModDate>
        
        <creator>Joanna Isabelle OLSZEWSKA</creator>
        
        <subject>automatic image annotation; natural language tags; decision trees; semantic attributes; visual features; active contours; segmentation; image retrieval</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>In this paper, we propose a new approach for automatic image annotation (AIA) in order to automatically and efficiently assign linguistic concepts to visual data such as digital images, based on both numeric and semantic features. The presented method first computes multi-layered active contours. The first-layer active contour corresponds to the main object or foreground, while the next-layer active contours delineate the object’s subparts. Then, visual features are extracted within the regions segmented by these active contours and are mapped into semantic notions. Next, decision trees are trained based on these attributes, and the image is semantically annotated using the resulting decision rules. Experiments carried out on several standard datasets have demonstrated the reliability and the computational effectiveness of our AIA system.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_28-Semantic_Automatic_Image_Annotation_Based_on_Multi-Layered_Active_Contours_and_Decision_Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating an Educational Domain Checklist through an Adaptive Framework for Evaluating Educational Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040827</link>
        <id>10.14569/IJACSA.2013.040827</id>
        <doi>10.14569/IJACSA.2013.040827</doi>
        <lastModDate>2013-09-01T05:18:35.9100000+00:00</lastModDate>
        
        <creator>Roobaea S. AlRoobaea</creator>
        
        <creator>Ali H. Al-Badi</creator>
        
        <creator>Pam J. Mayhew</creator>
        
        <subject>Heuristic evaluation (HE);User Testing (UT);Domain Specific Inspection (DSI); Adaptive Framework; Adaptive Checklist</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites that is growing rapidly in use and has had a huge impact on many businesses. One type of website that has spread and been widely adopted is the educational website. Educational websites take many forms, such as free online websites and Web-based server software. This creates challenges for their continuing evaluation and monitoring in order to measure their efficiency and effectiveness, to assess user satisfaction and, ultimately, to improve their quality.
The lack of an adaptive usability checklist for improving the usability assessment process for educational systems represents a missing piece in ‘usability testing’. This paper presents an adaptive Domain-Specific Inspection (DSI) checklist as a tool for evaluating the usability of educational systems. The results show that the adaptive educational usability checklist helped evaluators to facilitate the evaluation process. It also provides an opportunity for website owners to choose the usability area(s) that they think need to be evaluated. Moreover, this method was more efficient and effective than the user testing (UT) and heuristic evaluation (HE) methods.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_27-Generating_an_Educational_Domain_Checklist_through_an_Adaptive_Framework_for_Evaluating_Educational_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Protein PocketViewer: A Web-Service Based Interface for Protein Pocket Extraction and Visualization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040826</link>
        <id>10.14569/IJACSA.2013.040826</id>
        <doi>10.14569/IJACSA.2013.040826</doi>
        <lastModDate>2013-09-01T05:18:34.1170000+00:00</lastModDate>
        
        <creator>Xiaoyu Zhang</creator>
        
        <creator>Martin Gordon</creator>
        
        <subject>protein structure; pockets; Model 2; AJAX; web service; visualization;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>One important problem in bioinformatics is the study of pockets or tunnels within the protein structure. These pocket or tunnel regions are significant because they indicate areas of ligand binding or enzymatic reactions, and tunnels are often solvent ion conductance areas. The Protein Pocket Viewer (PPV) is a web interface that allows the user to extract and visualize protein pockets in a browser, based on the algorithm in [1]. The PPV packages the pocket extraction executable as a web service, making it accessible to all users with Internet access and a modern Java-enabled browser. The PPV employs the Model 2 design pattern, which leads to a loosely coupled implementation that is more robust and easier to maintain. It consists of a client web interface for user inputs and visualization, a middle layer for controlling the flow, and backend web services performing the actual CPU-intensive computation. The PPV web client consists of multiple window regions, with each region providing a different view of the protein, pockets, and related information. For a more responsive user experience, the PPV web client employs AJAX for asynchronous execution of long-running tasks, such as protein pocket extraction.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_26-Protein_Pocket_Viewer_A_Web-Service_Based_Interface_for_Protein_Pocket_Extraction_and_Visualization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Brand Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040825</link>
        <id>10.14569/IJACSA.2013.040825</id>
        <doi>10.14569/IJACSA.2013.040825</doi>
        <lastModDate>2013-09-01T05:18:30.3400000+00:00</lastModDate>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <creator>Yuki Higuchi</creator>
        
        <subject>brand selection; matrix structure; brand position</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>It is often observed that consumers select an upper-class brand the next time they buy. Suppose that former and current buying data are gathered, and that upper brands are located higher in the variable array. Then the transition matrix becomes an upper triangular matrix under the supposition that former buying variables are set as input and current buying variables as output. Takeyasu et al. previously analyzed brand selection and its matrix structure; in that paper, products of one genre were analyzed. In this paper, brand selection among multiple genres and its matrix structure are analyzed. Taking automobiles as an example, customer brand shifts from company A to B or from company A to C can be made clear using the above-stated method. We can confirm not only the preference shift among brands but also the preference shift among companies, which makes building a marketing strategy for an automobile company much easier. Analyzing such a structure provides useful applications; thus, the proposed approach enables effective marketing planning and/or the establishment of a new brand.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_25-An_Analysis_of_Brand_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed NFC Payment Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040824</link>
        <id>10.14569/IJACSA.2013.040824</id>
        <doi>10.14569/IJACSA.2013.040824</doi>
        <lastModDate>2013-09-01T05:18:28.5170000+00:00</lastModDate>
        
        <creator>Pardis Pourghomi</creator>
        
        <creator>Muhammad Qasim Saeed</creator>
        
        <creator>Gheorghita Ghinea</creator>
        
        <subject>Near Field Communication; Security; Mobile transaction; GSM authentication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Near Field Communication (NFC) technology is based on a short-range radio communication channel that enables users to exchange data between devices. With NFC technology, mobile services establish a contactless transaction system that makes payment methods easier for people. Although NFC mobile services have great potential for growth, they have raised several issues which have concerned researchers and prevented the adoption of this technology within societies. Reorganizing and describing what is required for the success of this technology has motivated us to extend current NFC ecosystem models to accelerate the development of this business area. In this paper, we introduce a new NFC payment application, based on our previous “NFC Cloud Wallet” model [1], to demonstrate a reliable structure for the NFC ecosystem. We also describe the step-by-step execution of the proposed protocol in order to carefully analyse the payment application; our main focus is on the Mobile Network Operator (MNO) as the main player within the ecosystem.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_24-A_Proposed_NFC_Payment_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Management Strategy for SMEs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040823</link>
        <id>10.14569/IJACSA.2013.040823</id>
        <doi>10.14569/IJACSA.2013.040823</doi>
        <lastModDate>2013-09-01T05:18:25.3030000+00:00</lastModDate>
        
        <creator>Kitimaporn Choochote</creator>
        
        <subject>Knowledge Management; Small and Medium Enterprise; SME</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>In Thailand, as in other developing countries, the focus was on large industry first, since governments assumed that large enterprises could generate more employment. However, there has been a realization that SMEs are the biggest group in the country and are significantly important to the process of social and economic development. This realization has prompted Thailand to institute mechanisms to support and protect SMEs, which consist of manufacturing, merchandising (wholesale &amp; retail) and service businesses. Unfortunately, most of these SMEs lack capability in operational areas such as technology, management, marketing, and finance when compared to large enterprises. In order to adapt and survive, SMEs need full and proper support from the government. To aid in their adaptation and survival, SMEs and the government must develop their knowledge management framework to effectively harness their past and present experiences, and anticipate the future evolution of their commercial environment. In most countries, SMEs are the biggest source of exports even in normal circumstances. Consequently, the state and SMEs have to focus and work hard to ensure their survival.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_23-Knowledge_Management_Strategy_for_SMEs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workshop Session Recordings on Green Volunteering Activities of Students in a Disadvantaged Area According to the Good-Hearted Vocation Teacher to Support Itinerant Junk Buyers </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040822</link>
        <id>10.14569/IJACSA.2013.040822</id>
        <doi>10.14569/IJACSA.2013.040822</doi>
        <lastModDate>2013-09-01T05:18:21.9000000+00:00</lastModDate>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Thanakarn Kumphai</creator>
        
        <subject>Recordings; Workshop; Student Activities; Green Volunteering; Disadvantaged Community</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This project was aimed at providing workshop session recordings on green volunteering activities of students in one disadvantaged area under the bridge of zone 1, Pracha-Utit Road 76, Toong-kru District, Bangkok, where the majority worked as itinerant junk buyers. The students held workshop sessions with the aim of providing training on how to repair electrical appliances and engines so that the community members could use this knowledge to increase the value of the unwanted electrical appliances they bought. The project also discussed the risk and danger of certain junk products which might be mixed with rubbish, and taught how to classify recyclable products to increase the value of the junk. This project was the first of its kind, and it was carried out as a green volunteering activity of students. The research team provided 182 families from the community under the bridge of zone 1 with a number of workshop sessions. The sampling group of 20 persons was chosen from those who attended at least three times. The research results showed that the sampling group achieved a high level of knowledge (100.0%). They could fix fans as well as repair and maintain engines. They could classify junk. They expressed a high level of satisfaction with the workshop sessions (mean score of 4.18 with S.D. of 0.27). When the assessment was conducted regarding the operation and the recordings on green volunteering activities of 13 students, it was at the highest level (mean score of 4.68 with S.D. of 0.42). This workshop project was the first runner-up of the national SCB Challenge 2012 Community Project as organized by Siam Commercial Bank PLC.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_22-Workshop_Session_Recordings_on_Green_Volunteering_Activities_of_Students_in_a_Disadvantaged_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Directed and Almost-Directed Flow Loops in Real Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040821</link>
        <id>10.14569/IJACSA.2013.040821</id>
        <doi>10.14569/IJACSA.2013.040821</doi>
        <lastModDate>2013-09-01T05:18:19.2630000+00:00</lastModDate>
        
        <creator>M. Todinov</creator>
        
        <subject>directed flow loops; almost-directed flow loops; flow networks; optimization; classical algorithms; maximising the flow. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Directed flow loops are highly undesirable because they are associated with a wastage of energy for maintaining them and entail big losses to the world economy. It is shown that directed flow loops may appear in networks even if the dispatched commodity does not physically travel along a closed contour. Consequently, a theorem giving the necessary and sufficient condition for a directed flow loop on randomly oriented straight-line flow paths has been formulated, and a closed-form expression has been derived for the probability of a directed flow loop. The results show that even for a relatively small number of intersecting flow paths, the probability of a directed flow loop is very large, which means that the existence of directed flow loops in real networks is practically inevitable. Consequently, a theorem and an efficient algorithm have been proposed for discovering and removing directed flow loops in a network with feasible flows. The new concept of an ‘almost-directed flow loop’ has also been introduced for the first time. It is shown that the removal of an almost-directed flow loop also results in a significant decrease of the losses. It is also shown that if no directed flow loops exist in the network, the removal of an almost-directed flow loop cannot create a directed flow loop.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_21-Directed_and_Almost-Directed_Flow_Loops_in_Real_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigate the Performance of Document Clustering Approach Based on Association Rules Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040820</link>
        <id>10.14569/IJACSA.2013.040820</id>
        <doi>10.14569/IJACSA.2013.040820</doi>
        <lastModDate>2013-09-01T05:18:16.0670000+00:00</lastModDate>
        
        <creator>Noha Negm</creator>
        
        <creator>Mohamed Amin</creator>
        
        <creator>Passent Elkafrawy</creator>
        
        <creator>Abdel Badeeh M. Salem</creator>
        
        <subject>Web Document Clustering; Knowledge Discovery; Association Rules Mining; Frequent termsets; Apriori algorithm; Text Documents; Text Mining; Data Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The challenges of the standard clustering methods and the weaknesses of the Apriori algorithm in frequent termset clustering formulate the goal of our research. Based on Association Rules Mining, an efficient approach for Web Document Clustering (ARWDC) has been devised. An efficient Multi-Tier Hashing Frequent Termsets algorithm (MTHFT) has been used to improve the efficiency of mining association rules by targeting improvement in the mining of frequent termsets. Then, the documents are initially partitioned based on association rules. Since a document usually contains more than one frequent termset, the same document may appear in multiple initial partitions, i.e., the initial partitions are overlapping. After making the partitions disjoint, the documents are grouped within each partition using descriptive keywords, and the resultant clusters are obtained effectively. In this paper, we present an extensive analysis of the ARWDC approach for different sizes of Reuters datasets. Furthermore, the performance of our approach is evaluated with the help of evaluation measures such as Precision, Recall and F-measure, compared to existing clustering algorithms like Bisecting K-means and FIHC. The experimental results show that the efficiency, scalability and accuracy of the ARWDC approach have been improved significantly for Reuters datasets.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_20-Investigate_the_Performance_of_Document_Clustering_Approach_Based_on_Association_Rules_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Ambulance Traffic Assistance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040819</link>
        <id>10.14569/IJACSA.2013.040819</id>
        <doi>10.14569/IJACSA.2013.040819</doi>
        <lastModDate>2013-09-01T05:18:14.2730000+00:00</lastModDate>
        
        <creator>RONOJOY GHOSH</creator>
        
        <creator>VIVEK SHAH</creator>
        
        <creator>HITESH AGARWAL</creator>
        
        <creator>ASHUTOSH BHUSHAN</creator>
        
        <creator>PRASUN KANTI GHOSH</creator>
        
        <subject>python; ambulance; wireless; Bluetooth; cryptography;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>With the increase in road traffic density, several casualties occur due to delays in taking a patient to the hospital in an ambulance. In this paper, we have developed an algorithm to find the shortest path to reach the required destination. The software identifies the present location of the vehicle and asks the user for the destination. It then shows all the available paths, highlighting the shortest one or, in several cases, the most optimal one. Further, we made the traffic signals automated for special vehicles like an ambulance or a fire engine, such that the signal turns green for the ambulance as it comes into the vicinity of the traffic signal, thus providing a clear path to its destination. The original signal is restored as soon as the ambulance goes undetected by the Bluetooth scanner of the traffic signal.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_19-Intelligent_Ambulance_Traffic_Assistance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Actions for data warehouse success</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040818</link>
        <id>10.14569/IJACSA.2013.040818</id>
        <doi>10.14569/IJACSA.2013.040818</doi>
        <lastModDate>2013-09-01T05:18:11.0130000+00:00</lastModDate>
        
        <creator>Aziza CHAKIR</creator>
        
        <creator>Hicham MEDROMI</creator>
        
        <creator>Adil SAYOUTI</creator>
        
        <subject>Information Technology Infrastructure Library (ITIL); data warehouse; governance; insufficiencies of the data warehouse; multi-agent system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Problem statement: The data warehouse is a database dedicated to the storage of all data used in decision analysis. It must meet customer requirements, comply over time with the rules of construction, and manage the necessary evolutions of the information system (IS).
Results: According to the studies carried out, we see that a system based on a data warehouse governed by the best practices of the Information Technology Infrastructure Library (ITIL) and equipped with a multi-agent system will enable our directorate to ensure governance oriented towards optimizing the exploitation of the data warehouse.
</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_18-Actions_for_data_warehouse_success.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross Language Information Retrieval Model for Discovering WSDL Documents Using Arabic Language Query</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040817</link>
        <id>10.14569/IJACSA.2013.040817</id>
        <doi>10.14569/IJACSA.2013.040817</doi>
        <lastModDate>2013-09-01T05:18:09.2170000+00:00</lastModDate>
        
        <creator>Prof. Dr. Torkey I. Sultan</creator>
        
        <creator>Dr. Ayman E. Khedr</creator>
        
        <creator>Fahad Kamal Alsheref</creator>
        
        <subject>Web services discovery; Cross Language Information Retrieval; WSDL; Text Mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Web service discovery is the process of finding a suitable Web service for a given user’s query by analyzing the Web service’s WSDL content and finding the best match for the query. The service query should be written in the same language as the WSDL, for example English. Cross Language Information Retrieval (CLIR) techniques do not exist in the Web service discovery process. The absence of CLIR methods limits the search language to English keywords only, which raises the following question: “How do people who do not know English find a Web service?” This paper proposes the application of CLIR techniques and IR methods to support a bilingual Web service discovery process; the second language proposed here is Arabic. Text mining techniques were applied to the WSDL content and the user’s query to prepare them for CLIR methods. The proposed model was tested on a curated catalogue of Life Science Web Services (http://www.biocatalogue.org/) and solved the research problem with 99.87% accuracy and 95.06 precision.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_17-Cross_Language_Information_Retrieval_Model_For_Discovering_Wsdl_Documents_Using_Arabic_Language_Query.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Debranding in Fantasy Realms: Perceived Marketing Opportunities within the Virtual World</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040816</link>
        <id>10.14569/IJACSA.2013.040816</id>
        <doi>10.14569/IJACSA.2013.040816</doi>
        <lastModDate>2013-09-01T05:18:07.3930000+00:00</lastModDate>
        
        <creator>Kear Andrew</creator>
        
        <creator>Bown Gerald Robin</creator>
        
        <creator>Christidi Sofia</creator>
        
        <subject>Fantasy Realms; Debranding; Sacred Space; Marketing; 3D Virtual Worlds; Second Life</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This paper discusses the application of the concept of debranding within immersive virtual environments. In particular, the issue of the media richness and vividness of experience is considered in these experience realms, which may not be conducive to traditional invasive branding strategies. Brand equity is generally seen as the desired outcome of branding strategies, and the authors suggest that unless the virtual domains are considered as sacred spaces, brand equity may be compromised. The above concepts are applied to the differing social spaces that operate within the different experience realms. The ideas of resonance, presence and interactivity are considered here; they lead to the development of a constructed positioning by the participants. Through the process of debranding, marketers may be able to enter these sacred spaces without negative impact on the brand. Perception of these virtual spaces was found to be partially congruent with this approach to branding. This presents a number of challenges for the owners of such virtual spaces, and also of virtual worlds, in increasing the commercial utilization of investment in these environments.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_16-Debranding_in_Fantasy_Realms_Perceived_Marketing_Opportunities_within_the_Virtual_World.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Calibration of Cosmic Ray Sensor: Using Supervised Ensemble Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040815</link>
        <id>10.14569/IJACSA.2013.040815</id>
        <doi>10.14569/IJACSA.2013.040815</doi>
        <lastModDate>2013-09-01T05:18:04.1800000+00:00</lastModDate>
        
        <creator>Ritaban Dutta</creator>
        
        <creator>Claire D’Este</creator>
        
        <subject>Cosmic Ray sensor; Ensemble supervised machine learning; Area-wise bulk soil moisture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>In this paper an ensemble of supervised machine learning methods has been investigated to virtually and dynamically calibrate cosmic ray sensors measuring area-wise bulk soil moisture. The main focus of this study was to find an alternative to the currently available field calibration method, which is based on an expensive and time-consuming soil sample collection methodology. Data from the Australian Water Availability Project (AWAP) database was used as independent soil moisture ground truth, and results were compared against the soil moisture conventionally estimated using a Hydroinnova CRS-1000 cosmic ray probe deployed in Tullochgorum, Australia. The prediction performance of a complementary ensemble of four supervised estimators, namely the Sugeno-type Adaptive Neuro-Fuzzy Inference System (S-ANFIS), Cascade Forward Neural Network (CFNN), Elman Neural Network (ENN) and Learning Vector Quantization Neural Network (LVQN), was evaluated using training and testing paradigms. An AWAP-trained ensemble of the four estimators was able to predict bulk soil moisture directly from cosmic ray neutron counts with a best accuracy of 94.4%. The ensemble approach outperformed the individual performances of these networks. This result shows that an ensemble machine learning paradigm could be a valuable data-driven alternative calibration method for cosmic ray sensors, compared with the current expensive and hydrological-assumption-based field calibration method.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_15-Virtual_Calibration_of_Cosmic_Ray_Sensor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Evaluation and Visualisation of the Quality and Reliability of Sensor Data Sources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040814</link>
        <id>10.14569/IJACSA.2013.040814</id>
        <doi>10.14569/IJACSA.2013.040814</doi>
        <lastModDate>2013-09-01T05:18:02.3830000+00:00</lastModDate>
        
        <creator>Ritaban Dutta</creator>
        
        <creator>Claire D’Este</creator>
        
        <creator>Ahsan Morshed</creator>
        
        <creator>Daniel Smith</creator>
        
        <creator>Aruneema Das</creator>
        
        <creator>Jagannath Aryal</creator>
        
        <subject>Dynamic Visualisation; Sensor Web; Weather Station; Time Series Catalogue; Interpolated 3D Surface.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Before using remote data sources, or those from external organisations, it is important to establish if the source is fit for purpose. We have developed an approach to automatic sensor data annotation and visualisation that evaluates overall sensor network performance and data quality. The CSIRO’s South Esk hydrological sensor web combines data related to water management from five different organisations, which provides a suitable platform to explore the issues of reliability and uncertainty. An environmental gridded surface is generated based on the observations and evaluations of quality and reliability of the sensor node provider.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_14-Dynamic_Evaluation_and_Visualisation_of_the_Quality_and_Reliability_of_Sensor_Data_Sources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bootstrapping Domain Knowledge Exploration using Conceptual Mapping of Wikipedia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040813</link>
        <id>10.14569/IJACSA.2013.040813</id>
        <doi>10.14569/IJACSA.2013.040813</doi>
        <lastModDate>2013-09-01T05:18:00.0600000+00:00</lastModDate>
        
        <creator>Mai Eldefrawi</creator>
        
        <creator>Ahmed Sharaf eldin Ahmed</creator>
        
        <creator>Adel Elsayed</creator>
        
        <subject>Conceptual Mapping; Conceptual Representation; Domain knowledge; Wikipedia; Self-regulated learners</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Wikipedia is one of the largest online encyclopedias and exists in hypertext form. This nature prevents Wikipedia’s potential from being fully discovered. Therefore, the focus of this paper is on the role of domain knowledge in supporting the exploration of classical encyclopedic content, which in this case is Wikipedia. A main contribution of this work is a methodology for identifying the nature, form and role of domain knowledge expressed in conceptual form. It also provides a method of representation and analysis for describing the domain knowledge and for extracting a logical representation of a raw form of the domain knowledge. Such a logical representation is of limited value in describing the real nature of domain knowledge. Hence we transform it into an adequate graphical representation, mostly of an arc-node form, which is called a conceptual representation.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_13-Bootstrapping_Domain_Knowledge_Exploration_using_Conceptual_Mapping_of_Wikipedia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Human Emotion from Eye Motions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040812</link>
        <id>10.14569/IJACSA.2013.040812</id>
        <doi>10.14569/IJACSA.2013.040812</doi>
        <lastModDate>2013-09-01T05:17:56.6430000+00:00</lastModDate>
        
        <creator>Vidas Raudonis</creator>
        
        <creator>Gintaras Dervinis</creator>
        
        <creator>Andrius Vilkauskas</creator>
        
        <creator>Agne Paulauskaite - Taraseviciene</creator>
        
        <creator>Gintare Kersulyte - Raudone</creator>
        
        <subject>Emotional status; eye tracking; human computer interaction; virtual human</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The object of this paper is to develop an emotion recognition system that analyses the motion trajectory of the eye and responds with the appraised emotion. The emotion recognition solution is based on data gathered using a head-mounted eye tracking device. The participants in the experimental investigation were provided with a visual stimulus (PowerPoint slides), and the emotional feedback was determined by the combination of the eye tracking device and emotion recognition software. The stimuli were divided into four groups by the emotion they should trigger in the human, i.e., neutral, disgust, exhilaration and excitement. Some initial experiments and data on the accuracy of recognizing emotion from eye motion trajectories are provided, along with a description of the implemented algorithms.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_12-Evaluation_of_Human_Emotion_from_Eye_Motions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On The Performance of the Gravitational Search Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040811</link>
        <id>10.14569/IJACSA.2013.040811</id>
        <doi>10.14569/IJACSA.2013.040811</doi>
        <lastModDate>2013-09-01T05:17:53.2270000+00:00</lastModDate>
        
        <creator>Taisir Eldos</creator>
        
        <creator>Rose Al Qasim</creator>
        
        <subject>Optimization; Gravitational Search; Genetic Algorithms; Cell Placement </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Gravitational Search Algorithms (GSA) are heuristic evolutionary optimization algorithms based on Newton&#39;s law of universal gravitation and mass interactions. GSAs are among the most recently introduced techniques and are not yet heavily explored. An early work of the authors successfully adapted this technique to the cell placement problem and showed its efficiency in producing high-quality solutions in reasonable time. We extend this work by fine-tuning the algorithm parameters and transition functions towards a better balance between exploration and exploitation. To assess its performance and robustness, we compare it with Genetic Algorithms (GA), using the standard cell placement problem as a benchmark to evaluate solution quality, and a set of artificial instances to evaluate the capability of finding an optimal solution. Experimental results show that the proposed approach is competitive in terms of success rate, or likelihood of optimality, and solution quality. Although it is computationally more expensive due to its hefty mathematical evaluations, it is more fruitful in the long run.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_11-On_The_Performance_Of_The_Gravitational_Search_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Social Media GIS for Information Exchange between Regions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040810</link>
        <id>10.14569/IJACSA.2013.040810</id>
        <doi>10.14569/IJACSA.2013.040810</doi>
        <lastModDate>2013-09-01T05:17:49.8100000+00:00</lastModDate>
        
        <creator>Syuji YAMADA</creator>
        
        <creator>Kayoko YAMAMOTO</creator>
        
        <subject>Information Exchange; Web-GIS; Social Media; SNS; Twitter</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This study aims to develop a social media GIS (Geographic Information Systems) specially tailored to information exchange between regions. The conclusions of this study are summarized in the following three points. (1) The social media GIS is a geographic information system which integrates Web-GIS, SNS and Twitter into a single system. It was operated for the collection of regional information in the eastern part of Yamanashi Prefecture. The social media GIS uses a design which demonstrates its usefulness in multi-directional information transmission and in easing the limitations of space, time and continuity, making it possible to redesign systems in accordance with target cases. (2) During the operation of the social media GIS for about two months, most of the users were in their 20s. Users exchanged regional information using the comment and button functions. (3) The system was evaluated based on the results of a questionnaire to users and an access analysis of log data during operation, in order to identify measures for improvement of the system. Because of users’ high evaluations of its original functions, the overall operability of the system was highly evaluated. Most of the contributed information was only known to local residents, and it was evident that the system fulfilled its intended role.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_10-Development_of_Social_Media_GIS_for_Information_Exchange.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Integrating Mobile Applications into the Digital Forensic Investigative Process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040809</link>
        <id>10.14569/IJACSA.2013.040809</id>
        <doi>10.14569/IJACSA.2013.040809</doi>
        <lastModDate>2013-09-01T05:17:46.3930000+00:00</lastModDate>
        
        <creator>April Tanner</creator>
        
        <creator>Soniael Duncan</creator>
        
        <subject>mobile device forensics; digital forensics; forensic process, forensic models; MIT App Inventor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>What if a tool existed that allowed digital forensic investigators to create their own apps that would assist them with the evidence identification and collection process at crime scenes?  First responders are responsible for ensuring that digital evidence is examined in such a way that the integrity of the evidence is not jeopardized.  Furthermore, they play a pivotal part in preserving evidence during the collection of evidence at the crime scene and transport to the laboratory.   This paper proposes the development of a mobile application that can be developed for or created by a first responder to assist in the identification, acquisition, and preservation of digital evidence at a crime scene.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_9-On_Integrating_Mobile_Applications_into_the_Digital_Forensic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Expert System for Building House Cost Estimation: Design, Implementation, and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040808</link>
        <id>10.14569/IJACSA.2013.040808</id>
        <doi>10.14569/IJACSA.2013.040808</doi>
        <lastModDate>2013-09-01T05:17:42.6030000+00:00</lastModDate>
        
        <creator>Yasser A. Nada</creator>
        
        <subject>Expert System; Building House; CLIPS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>This paper introduces an expert system which demonstrates a new method for accurate estimation of building house costs. The system is simple and saves its beneficiaries time, effort, and money. In addition, the design and implementation of the proposed expert system are introduced. CLIPS 6.0 and C# are used in the implementation phase. The expert system is also packaged as a standalone, platform-independent application. Furthermore, the developed expert system is tested on several real cases. Finally, an initial evaluation of the expert system is carried out, and positive feedback is received from user samples, indicating that it is robust and efficient.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_8-A_Novel_Expert_System_for_Building_House_Cost_Estimation_Design,_Implementation,_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>User-Based Interaction for Content-Based Image Retrieval by Mining User Navigation Patterns.</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040807</link>
        <id>10.14569/IJACSA.2013.040807</id>
        <doi>10.14569/IJACSA.2013.040807</doi>
        <lastModDate>2013-09-01T05:17:38.5630000+00:00</lastModDate>
        
        <creator>A. Srinagesh</creator>
        
        <creator>Lavanya Thota</creator>
        
        <creator>G.P.Saradhi Varma</creator>
        
        <creator>A.Govardhan</creator>
        
        <subject>Image Retrieval; CBIR; Relevance Feedback; Navigation Patterns; Query Expansion; Query Reweighting; Query Point Movement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>In Internet, multimedia, and image databases, image searching is a necessity. Content-Based Image Retrieval (CBIR) is an approach for image retrieval. When user interaction is included in CBIR through Relevance Feedback (RF) techniques, obtaining results by giving a large number of iterative feedbacks on large databases is not efficient for real-time applications. So, we propose a new approach which converges rapidly and can aptly be called Navigation Pattern-Based Relevance Feedback (NPRF) with a user-based interaction mode. We combine NPRF with RF techniques using three concepts, viz., Query Re-weighting (QR), Query Expansion (QEX) and Query Point Movement (QPM). By using these three techniques, efficient results are obtained with only a small number of feedbacks. The efficiency of the proposed method is demonstrated by calculating Precision, Recall and Evaluation measures.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_7-User-Based_Interaction_for_Content-Based_Image_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Method to Improve Forecasting Accuracy Utilizing Genetic Algorithm –An Application to the Data of Operating equipment and supplies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040806</link>
        <id>10.14569/IJACSA.2013.040806</id>
        <doi>10.14569/IJACSA.2013.040806</doi>
        <lastModDate>2013-09-01T05:17:36.2070000+00:00</lastModDate>
        
        <creator>Daisuke Takeyasu</creator>
        
        <creator>Kazuhiro Takeyasu</creator>
        
        <subject>minimum variance; exponential smoothing method; forecasting; trend; operating equipment and supplies</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>In industries, improving forecasting accuracy for quantities such as sales and shipping is an important issue, and much research has been done on this. In this paper, a hybrid method is introduced and plural methods are compared. Noting that the equation of the exponential smoothing method (ESM) is equivalent to the (1,1)-order ARMA model equation, a new method for estimating the smoothing constant in the exponential smoothing method, which satisfies minimum variance of forecasting error, was previously proposed by us. Generally, the smoothing constant is selected arbitrarily, but in this paper we utilize the above-stated theoretical solution. Firstly, we estimate the ARMA model parameters and then estimate the smoothing constants. Thus the theoretical solution is derived in a simple way and may be utilized in various fields. Furthermore, by combining the trend removing method with this method, we aim to improve forecasting accuracy. The approach is executed as follows. Trend removal by a combination of linear, 2nd-order non-linear, and 3rd-order non-linear functions is applied to the data of operating equipment and supplies for three cases (an injection device and a puncture device, a sterilized hypodermic needle, and a sterilized syringe). The weights for these functions are first set to 0.5 for two patterns and then varied in 0.01 increments for three patterns to search for the optimal weights. A Genetic Algorithm is utilized to search for the optimal weighting parameters of the linear and non-linear functions. For comparison, the monthly trend is removed afterwards. The theoretical solution of the smoothing constant of ESM is calculated for both the monthly-trend-removed data and the non-monthly-trend-removed data, and forecasting is then executed on these data. The new method is shown to be useful for time series that have various trend characteristics and a rather strong seasonal trend. The effectiveness of this method should be examined in various cases.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_6-A_Hybrid_Method_to_Improve_Forecasting_Accuracy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>English to Creole and Creole to English Rule Based Machine Translation System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040805</link>
        <id>10.14569/IJACSA.2013.040805</id>
        <doi>10.14569/IJACSA.2013.040805</doi>
        <lastModDate>2013-09-01T05:17:34.3970000+00:00</lastModDate>
        
        <creator>Sameerchand Pudaruth</creator>
        
        <creator>Lallesh Sookun</creator>
        
        <creator>Arvind Kumar Ruchpaul</creator>
        
        <subject>rule-based; Mauritian Creole; English</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Machine translation is the process of translating a text from one language to another using computer software. A translation system is important to overcome language barriers and help people in different parts of the world communicate. Most popular online translation systems cater only for the most commonly used languages, and no research has been done so far concerning the translation of the Mauritian Creole language. In this paper, we present the first machine translation (MT) system that translates English sentences to Mauritian Creole and vice-versa. The system uses the rule-based machine translation approach to perform translation. It takes as input sentences in the source language, either English or Creole, and outputs the translation of the sentences in the target language. The results show that the system can provide translation of acceptable quality. This system can potentially benefit many categories of people, since it allows them to perform their translations quickly and with ease.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_5-English_to_Creole_and_Creole_to_English_Rule_Based_Machine_Translation_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Opinion in Online Messages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040804</link>
        <id>10.14569/IJACSA.2013.040804</id>
        <doi>10.14569/IJACSA.2013.040804</doi>
        <lastModDate>2013-09-01T05:17:32.6030000+00:00</lastModDate>
        
        <creator>Norlela Samsudin</creator>
        
        <creator>Abdul Razak Hamda</creator>
        
        <creator>Mazidah Puteh</creator>
        
        <creator>Mohd Zakree Ahmad Nazri</creator>
        
        <subject>Opinion mining; text normalization; feature selection.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>The number of messages that can be mined from online entries increases as the number of online application users increases. In Malaysia, online messages are written in mixed languages known as ‘Bahasa Rojak’. Therefore, mining opinion using natural language processing activities is difficult. This study introduces a Malay Mixed Text Normalization Approach (MyTNA) and a feature selection technique based on Immune Network System (FS-INS) in the opinion mining process using a machine learning approach. The purpose of MyTNA is to normalize noisy texts in online messages. In addition, FS-INS automatically selects relevant features for the opinion mining process. Several experiments involving 1000 positive movie feedback messages and 1000 negative movie feedback messages have been conducted. The results show that the accuracy of opinion mining using Na&#239;ve Bayes (NB), k-Nearest Neighbor (kNN) and Sequential Minimal Optimization (SMO) increases after the introduction of MyTNA and FS-INS.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_4-Mining_Opinion_in_Online_Messages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Development of AlgoWBIs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040803</link>
        <id>10.14569/IJACSA.2013.040803</id>
        <doi>10.14569/IJACSA.2013.040803</doi>
        <lastModDate>2013-09-01T05:17:29.1700000+00:00</lastModDate>
        
        <creator>Kavita </creator>
        
        <creator>Dr. Abdul Wahid</creator>
        
        <creator>Dr. G N Purohit</creator>
        
        <subject>Algorithm, Instructional design; Web-Based Instruction system (WBIs); Meta learning; Self-paced learning; Web-Based learning; Interactive learning; computer based learning; personalized learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>AlgoWBIs has been developed to support algorithm learning. The goal of this tool is to empower educators and learners with an interactive learning tool for improving algorithm skills. The paper focuses on how to trim down the challenges of algorithm learning, and also discusses how this tool will improve the effectiveness of computer-facilitated interactive learning and help reduce the stress of learning. It will aid computer science graduate and undergraduate learners. A perspective development model has been used for development to enhance the tool’s features.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_3-Design_and_Development_of_AlgoWBIs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel MIME Type and Extension Based Packet Classification Algorithm in WiMAX</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040802</link>
        <id>10.14569/IJACSA.2013.040802</id>
        <doi>10.14569/IJACSA.2013.040802</doi>
        <lastModDate>2013-09-01T05:17:27.3470000+00:00</lastModDate>
        
        <creator>Siddu P. Algur</creator>
        
        <creator>Niharika Kumar</creator>
        
        <subject>QoS; WiMAX; MAC; Packet Classification; IEEE802.16e </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>IEEE 802.16 provides quality of service through five different service classes. When a packet reaches the MAC layer, the packet classifier has to classify the packet such that it is associated with the appropriate QoS. In this paper, a packet classification algorithm is proposed that exploits the HTTP header of the application layer to determine the type of data and classify it. The extension and MIME type (content type) headers are used to classify the packets. Further, the algorithm is enhanced by considering the type of user as an additional parameter during packet classification. Simulations are done on variable bit rate video data. Based on the type of video, the packets are classified as RTPS or nRTPS, thereby providing differentiated QoS. Simulation results reveal that high-priority video classified as RTPS traffic receives higher throughput compared to video of lower priority. The delay faced by high-priority videos is correspondingly less compared to low-priority video.</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_2-Novel_MIME_Type_and_Extension_Based_Packet_Classification_Algorithm_in_WiMAX.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomic Computing for Business Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040801</link>
        <id>10.14569/IJACSA.2013.040801</id>
        <doi>10.14569/IJACSA.2013.040801</doi>
        <lastModDate>2013-09-01T05:17:23.7870000+00:00</lastModDate>
        
        <creator>Devasia Kurian</creator>
        
        <creator>Pethuru Raj</creator>
        
        <subject>Autonomic Computing Architecture; CRM; Service Oriented Architecture; Multi-Agent Architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(8), 2013</description>
        <description>Autonomic computing, a deployment technology introduced by IBM a decade ago to manage the ever-increasing complexity of IT systems, has become a part of many large-scale deployments today. Autonomic computing has made many inroads in the areas of networking, data centers, storage, and database management. But few attempts have been made in business applications such as ERP, SCM, CRM, and online retail. In this paper, we dive deeper to extract and explain where the pioneering autonomic computing paradigm stands today and the varied opportunities and possibilities in this area. A simplistic architecture for deployment of autonomic business applications is introduced and illustrated in this paper. A sample implementation of different management modules from various areas is described in order to invigorate the readers. This should form the basis for a newer and nimbler start, and the ubiquitous application of AC concepts to enable business transformation. This paper represents a solid extension of the paper presented in the World Congress on Information and Communication Technologies, 2012 [1].</description>
        <description>http://thesai.org/Downloads/Volume4No8/Paper_1-Autonomic_Computing_for_Business_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Memetic Algorithm with Filtering Scheme for the Minimum Weighted Edge Dominating Set Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020808</link>
        <id>10.14569/IJARAI.2013.020808</id>
        <doi>10.14569/IJARAI.2013.020808</doi>
        <lastModDate>2013-08-10T08:11:18.2500000+00:00</lastModDate>
        
        <creator>Abdel Rahman Hedar</creator>
        
        <creator>Shada N. Abdel-Aziz</creator>
        
        <creator>Adel A. Sewisy</creator>
        
        <subject>Minimum weight edge dominating set; graph theory; genetic algorithm; memetic algorithm; local search</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>The minimum weighted edge dominating set (MWEDS) problem generalizes both the weighted vertex cover problem and the problem of covering the edges of a graph by a minimum-cost set of both vertices and edges. In this paper, we propose a meta-heuristic approach based on a genetic algorithm and local search to solve the MWEDS problem. The proposed method is therefore considered a memetic search algorithm, called Memetic Algorithm with Filtering Scheme for the minimum weighted edge dominating set, or shortly MAFS. In the MAFS method, three new fitness functions are invoked to effectively measure solution qualities. The search process in the proposed method uses an intensification scheme, called “filtering”, beside the main genetic search operations in order to achieve faster performance. The experimental results prove that the proposed</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_8-Memetic_Algorithm_with_Filtering_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm for Design of Digital Notch Filter Using Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020807</link>
        <id>10.14569/IJARAI.2013.020807</id>
        <doi>10.14569/IJARAI.2013.020807</doi>
        <lastModDate>2013-08-10T08:11:16.3300000+00:00</lastModDate>
        
        <creator>Amit Verma</creator>
        
        <creator>Naina</creator>
        
        <creator>C.S. Vinitha </creator>
        
        <subject>Filters; design of filters; MATLAB</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A smooth waveform of a low-frequency signal can be generated with a digital notch filter. Noise can be easily eliminated from a speech signal by using a notch filter. In this paper, a notch filter is designed and implemented using MATLAB. The performance and characteristics of the filter are shown in the waveforms presented in the conclusion of the paper.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_7-Algorithm_for_Design_of_Digital_Notch_Filter_Using_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wearable Computing System with Input-Output Devices Based on Eye-Based Human Computer Interaction Allowing Location Based Web Services</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020806</link>
        <id>10.14569/IJARAI.2013.020806</id>
        <doi>10.14569/IJARAI.2013.020806</doi>
        <lastModDate>2013-08-10T08:11:13.6000000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>eye based human computer interaction; wearable computing; location based web services</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A wearable computing system with input-output devices based on Eye-Based Human Computer Interaction (EBHCI), which allows location-based web services including navigation and location/attitude/health-condition monitoring, is proposed. Through implementation of the proposed wearable computing system, all the functionality is confirmed, and it is found that the system works well. It is easy to use and inexpensive. Experimental results for EBHCI show excellent performance in terms of key-in accuracy as well as input speed. It is, of course, accessible to the internet and has search-engine capability.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_6-Wearable_Computing_System_with_Input-Output.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Access Fee Charging System for Information Contents Sharing Through P2P Communications </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020805</link>
        <id>10.14569/IJARAI.2013.020805</id>
        <doi>10.14569/IJARAI.2013.020805</doi>
        <lastModDate>2013-08-10T08:11:11.3070000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>P2P communication; steganography; data hiding; watermarking; charge system; content security</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A charging system for information content exchange through P2P communications is proposed. Security is the most important aspect of this charging system and is maintained with a data hiding method based on steganography and watermarking. The security level of this charging system is evaluated with image contents.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_5-Access_Fee_Charging_System_for_Information_Contents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi Spectral Image Classification Method with Selection of Independent Spectral Features through Correlation Analysis  </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020804</link>
        <id>10.14569/IJARAI.2013.020804</id>
        <doi>10.14569/IJARAI.2013.020804</doi>
        <lastModDate>2013-08-10T08:11:09.4800000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>image classification, polarimetric SAR, correlation analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A multi-spectral image classification method with selection of independent spectral features through correlation analysis is proposed. The proposed method is validated by applying it to polarimetric Synthetic Aperture Radar: SAR data. Also, the Probability Distribution Functions: PDFs of the features are checked, confirming that the most independent PDF allows the greatest classification performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_4-Multi_Spectral_Image_Classification_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction Method of El Nino Southern Oscillation: ENSO by Means of Wavelet Based Data Compression with Appropriate Support Length of Base Function  </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020803</link>
        <id>10.14569/IJARAI.2013.020803</id>
        <doi>10.14569/IJARAI.2013.020803</doi>
        <lastModDate>2013-08-10T08:11:06.8430000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Prediction, Time series analysis, wavelet, ENSO, support length of mother wavelet function, base function </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A method for prediction of El Nino/Southern Oscillation: ENSO by means of wavelet-based data compression with an appropriate support length of the base function is proposed. Through experiments with the observed southern oscillation index, the proposed method is validated. Also, a method for determination of the appropriate support length is proposed and validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_3-Prediction_Method_of_El_Nino_Southern_Oscillation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study on Sea Surface Temperature Estimation with Thermal Infrared Radiometer Data among Conventional MCSST, Split Window and Conjugate Gradient Based Methods  </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020802</link>
        <id>10.14569/IJARAI.2013.020802</id>
        <doi>10.14569/IJARAI.2013.020802</doi>
        <lastModDate>2013-08-10T08:11:05.0370000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Sea Surface Temperature; radiative transfer equation; regression; conjugate gradient; MCSST; split window</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A comparative study on Sea Surface Temperature: SST estimation among the conventional Multi-Channel Sea Surface Temperature: MCSST method, the split window method, and the proposed Conjugate Gradient based Method: CGM with Thermal Infrared Radiometer: TIR data is conducted through simulations. Utilizing the proposed linearized inversion of the radiative transfer equation, SST can be estimated. The SST estimation accuracy of the proposed method is compared to the conventional regression-based methods (the split window and MCSST methods). Through the simulation study, it is found that the proposed CGM-based method is superior to the conventional regression-based methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_2-Comparative_Study_on_Sea_Surface_Temperature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between Rayleigh and Mie Scattering Assumptions for Z-R Relation and Rainfall Rate Estimation with TRMM/PR Data </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020801</link>
        <id>10.14569/IJARAI.2013.020801</id>
        <doi>10.14569/IJARAI.2013.020801</doi>
        <lastModDate>2013-08-10T08:11:03.0700000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>TRMM/PR; Precipitation; Z-R Relation; Rainfall Rate; Rayleigh scattering; Mie scattering; Droplet size distribution; Radar Reflectivity Factor</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(8), 2013</description>
        <description>A comparison of the rain rate estimated under the assumptions of Rayleigh and Mie scattering is made. We analyzed the different relationships between the radar reflectivity factor and rain rate (the so-called Z-R relationship) with both scattering models for different DSDs (droplet size distributions) and rainfall types at a wavelength of 2.2 cm, which accords with the band of TRMM/PR. Meanwhile, we introduced a discrete ordinates method to retrieve the Z-R relationship under the Mie scattering assumption. It is found that the retrieval result can be represented as the sum of some simple Z-R relationships. By analyzing the Z-R relationships estimated from the Rayleigh and Mie scattering assumptions across rain types, we found that the difference between the Rayleigh and Mie Z-R relationships in thunderstorms, which represent larger raindrop sizes, is larger than that in drizzle, which represents smaller raindrop sizes.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No8/Paper_1-Comparison_between_Rayleigh_and_Mie_Scattering_Assumptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exact Output Rate of Generalized Peres Algorithm for Generating Random Bits from Loaded Dice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040731</link>
        <id>10.14569/IJACSA.2013.040731</id>
        <doi>10.14569/IJACSA.2013.040731</doi>
        <lastModDate>2013-07-30T19:12:56.4870000+00:00</lastModDate>
        
        <creator>Sung-il Pae</creator>
        
        <subject>Random number generation; Peres algorithm; exact output rate; random bits; loaded dice.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>We report a computation of the exact output rate of a recently discovered generalization of the Peres algorithm for generating random bits from loaded dice. Instead of resorting to brute-force computation over all possible inputs, which quickly becomes impractical as the input size increases, we compute the total output length on equiprobable sets of inputs by dynamic programming using a recursive formula.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_31-Exact_Output_Rate_of_Generalized_Peres_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Physical Activity Identification using Supervised Machine Learning and based on Pulse Rate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040730</link>
        <id>10.14569/IJACSA.2013.040730</id>
        <doi>10.14569/IJACSA.2013.040730</doi>
        <lastModDate>2013-07-30T19:12:53.2570000+00:00</lastModDate>
        
        <creator>Mobyen Uddin Ahmed</creator>
        
        <creator>Amy Loutfi</creator>
        
        <subject>Physical activity; Elderly; Pulse rate; Case-based Reasoning (CBR); Support Vector Machine (SVM) and Neural Network (NN)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Physical activity is one of the key components for the elderly in order to age actively. Pulse rate is a convenient physiological parameter for identifying the physical activity of the elderly, since it increases with activity and decreases with rest. However, analysis and classification of pulse rate are often difficult due to personal variation during activity. This paper proposes a Case-Based Reasoning (CBR) approach to identify the physical activity of the elderly based on pulse rate. The proposed CBR approach has been compared with two popular classification techniques, i.e. Support Vector Machine (SVM) and Neural Network (NN). The comparison was conducted through an empirical experimental study in which three experiments with 192 pulse rate measurements were used. The experimental results show that the proposed CBR approach outperforms the other two methods. Finally, the CBR approach identifies the physical activity of the elderly with 84% accuracy based on pulse rate.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_30-Physical_Activity_Identification_using_Supervised_Machine_Learning_and_based_on_Pulse_Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Model for Secure Data Transfer in Audio Signals using HCNN and DD DWT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040729</link>
        <id>10.14569/IJACSA.2013.040729</id>
        <doi>10.14569/IJACSA.2013.040729</doi>
        <lastModDate>2013-07-30T19:12:51.4300000+00:00</lastModDate>
        
        <creator>B. Geetha Vani</creator>
        
        <creator>Prof. E. V. Prasad</creator>
        
        <subject>Cryptography; Hopfield Chaotic Neural Network; Audio Steganography; Double Density Discrete Wavelet Transform.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>In today’s world, a number of cryptographic and steganographic techniques are used to secure data transfer between a sender and a receiver. In this paper, a new hybrid approach that integrates the merits of cryptography and audio steganography is presented. First, the original message is encrypted using a Hopfield chaotic neural network, and the resultant cipher text is embedded into a cover audio signal using the Double Density Discrete Wavelet Transform (DD DWT). The resultant stego audio is transmitted to the receiver, and the reverse process is performed to recover the original plain text. The proposed method combines a steganography scheme with a cryptography scheme, which enhances the security of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_29-A_Hybrid_Model_for_Secure_Data_Transfer_in_Audio_Signals_using_HCNN_and_DD_DWT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of MIMO Systems used in planning a 4G-WiMAX Network in Ghana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040728</link>
        <id>10.14569/IJACSA.2013.040728</id>
        <doi>10.14569/IJACSA.2013.040728</doi>
        <lastModDate>2013-07-30T19:12:48.0000000+00:00</lastModDate>
        
        <creator>E. T. Tchao</creator>
        
        <creator>K. Diawuo</creator>
        
        <creator>W.K. Ofosu</creator>
        
        <creator>E. Affum</creator>
        
        <subject>WiMAX; Performance; BER; MIMO; Interference</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>With the increasing demand for mobile data services, Broadband Wireless Access (BWA) is emerging as one of the fastest growing areas within mobile communications. Innovative wireless communication systems, such as WiMAX, are expected to offer highly reliable broadband radio access in order to meet the increasing demands of emerging high-speed data and multimedia services. In Ghana, deployment of WiMAX technology has recently begun. Planning these high-capacity networks in the presence of multiple interferers, so that users can enjoy cheap and reliable internet services, is a critical design issue. This paper uses a deterministic approach to simulate the Bit-Error-Rate (BER) of the initial MIMO antenna configurations considered in deploying a high-capacity 4G-WiMAX network in Ghana. The radiation pattern of the antenna used in deploying the network has been simulated with Genex-Unet and NEC, and the results are presented. An adaptive 4x4 MIMO antenna configuration with optimally suppressed sidelobes is suggested for future network deployment, since the adaptive 2x2 MIMO configuration used in the initial deployment provides poorer average BER performance than the 4x4 configuration, which seems less affected by the presence of multiple interferers.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_28-Analysis_of_MIMO_Systems_used_in_Planning_a_4G_WiMAX_Network_in_Ghana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Biometric Technology System Framework and E-Commerce in Emerging Markets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040727</link>
        <id>10.14569/IJACSA.2013.040727</id>
        <doi>10.14569/IJACSA.2013.040727</doi>
        <lastModDate>2013-07-30T19:12:44.7230000+00:00</lastModDate>
        
        <creator>Chike Obed-Emeribe</creator>
        
        <subject>Biometrics; Multimodal; Frameworks; e-commerce; Emerging Markets</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>It is self-evident that the game changer of our modern world, the “internet”, has endowed twenty-first-century man with enormous potentials and possibilities, ranging from enhanced capabilities in business (e-business) and governance (e-governance) to politics, social interaction, and information exchange. The internet has indeed shrunk the global distance that once posed a great barrier and limited man’s endeavours in the preceding centuries.
Amidst the great advantages derivable from the use of the internet for various purposes lie inherent security threats. To a large extent, these security hindrances have been addressed in the advanced nations of the world; as a result, the internet phenomenon has pervaded all aspects of those nations’ economies. This is evident in the different electronic platforms available for the delivery of various products and services. On the contrary, the application of the internet in various aspects of commerce in developing/emerging economies has been hampered by security limitations due to identity issues. Because of these security threats, business owners and the general public in the less-developed world demonstrate a great sense of apathy towards the use of available electronic options for commerce.
Against this backdrop, and given the poor infrastructure base of developing nations, this paper analyses and proposes the implementation of multimodal biometric technology frameworks with a novel server architecture to tackle the security threats inherent in e-commerce in the developing world.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_27-Multimodal_Biometric_Technology_System_Framework_and_E-Commerce_in_Emerging_Markets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification–Oriented Control Designs with Application to a Wind Turbine Benchmark</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040726</link>
        <id>10.14569/IJACSA.2013.040726</id>
        <doi>10.14569/IJACSA.2013.040726</doi>
        <lastModDate>2013-07-30T19:12:41.2900000+00:00</lastModDate>
        
        <creator>Silvio Simani</creator>
        
        <creator>Paolo Castaldi</creator>
        
        <subject>control algorithms; fuzzy modelling and control; recursive estimation; adaptive PI controllers; wind turbine model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Wind turbines are complex dynamic systems forced by stochastic wind disturbances and gravitational, centrifugal, and gyroscopic loads. Since their aerodynamics are nonlinear, wind turbine modelling is challenging, and the design of control algorithms for wind turbines must account for these complexities without being too complex and unwieldy. The main contribution of this study therefore consists of providing two examples of robust and viable control designs applied to a wind turbine simulator. Extensive simulations of this test case and Monte Carlo analysis are the tools for experimentally assessing the features of the suggested control schemes, in terms of reliability, robustness, and stability, in the presence of modelling and measurement errors. These control methods are finally compared with different approaches designed for the same benchmark, in order to evaluate the properties of the considered control techniques.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_26-Identification–Oriented_Control_Designs_with_Application_to_a_Wind_Turbine_Benchmark.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Concerns in E-payment and the Law in Jordan</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040725</link>
        <id>10.14569/IJACSA.2013.040725</id>
        <doi>10.14569/IJACSA.2013.040725</doi>
        <lastModDate>2013-07-30T19:12:37.8270000+00:00</lastModDate>
        
        <creator>Mohammad Atwah Al-ma&#39;aitah</creator>
        
        <subject>E-payment systems; Cyber crime; Web security; Law.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Recently, communications and information technology have become widely used in various aspects of life, and the internet has become the main network for information support. The use of the internet has enabled public and private organizations to develop their business and expand their activities. Private organizations have applied the principles of e-commerce to improve the quality of the services provided to customers, while public sector organizations have started to apply the principles of e-government in an effort to increase efficiency and effectiveness and achieve maximum equality among citizens. One of the major challenges raised by the widespread use of e-government and e-commerce applications is security, especially in e-payment. This paper discusses the present law in the Kingdom of Jordan that deals with fraud and with violations of consumers’ rights and privacy when they make e-payments. In addition, this paper attempts a comprehensive study of e-payments and the law in order to determine whether more legislation is needed.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_25-Security_Concerns_in_E-payment_and_the_Law_in_Jordan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Child Computer Interaction in Edutainment and Simulation Games Application on Android Platform in Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040724</link>
        <id>10.14569/IJACSA.2013.040724</id>
        <doi>10.14569/IJACSA.2013.040724</doi>
        <lastModDate>2013-07-30T19:12:34.3800000+00:00</lastModDate>
        
        <creator>Setia Wirawan</creator>
        
        <creator>Dewi Agushinta R.</creator>
        
        <creator>Faris Fajar Muhammad</creator>
        
        <creator>Lutfi Dwi Saifudin</creator>
        
        <creator>Mustafa Ibrahim</creator>
        
        <subject>Child Computer Interaction; Edutainment; Game application; Simulation Games</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Child Computer Interaction (CCI) has become a challenge in utilizing technology as an educational medium. The increasing number of children in Indonesia who use advanced gadgets such as smartphones and tablet PCs provides a new space for developing interactive educational game applications for kids. According to service providers, Indonesia is the country with the biggest number of Android-based game application downloaders in the world. Edutainment and Simulation Games are the serious game models chosen to deliver the concept. This paper analyzes and reviews the application of these two game concepts, using ranking data from one of the top application service providers in analytic applications. Game application developers are expected to understand CCI and develop application concepts that suit the needs of children in Indonesia. This will create market opportunities in the Indonesian game industry in the future.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_24-Analysis_of_Child_Computer_Interaction_in_Edutainment_and_Simulation_Games_Application_on_Android_Platform_in_Indonesia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interaction Protocols in Multi-Agent Systems based on Agent Petri Nets Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040723</link>
        <id>10.14569/IJACSA.2013.040723</id>
        <doi>10.14569/IJACSA.2013.040723</doi>
        <lastModDate>2013-07-30T19:12:30.9470000+00:00</lastModDate>
        
        <creator>Borhen Marzougui</creator>
        
        <creator>Kamel Barkaoui</creator>
        
        <subject>Interaction; MAS; APN; Model; formalism; FIPA; Protocols.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>This paper deals with the modeling of interaction between agents in Multi-Agent Systems (MAS) based on Agent Petri Nets (APN). Our models are built from communicating agents. Indeed, an agent initiating a conversation with another can specify the interaction protocol it wishes to follow. The combination of APN and FIPA protocol schemes leads to a set of formal deployment rules for the points where interaction models can be successfully implemented. We introduce models of some FIPA standard protocols.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_23-Interaction_Protocols_in_Multi-Agent_Systems_based_on_Agent_Petri_Nets_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive Multimodal Biometrics System using PSO</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040722</link>
        <id>10.14569/IJACSA.2013.040722</id>
        <doi>10.14569/IJACSA.2013.040722</doi>
        <lastModDate>2013-07-30T19:12:27.4830000+00:00</lastModDate>
        
        <creator>Ola M. Aly</creator>
        
        <creator>Tarek A. Mahmoud</creator>
        
        <creator>Gouda I. Salama</creator>
        
        <creator>Hoda M. Onsi</creator>
        
        <subject>multibiometric; match score fusion; PSO; Iris; Palmprint; Finger Knuckle</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Multimodal biometric systems, which fuse information from a number of biometrics, are gaining more attention lately because they are able to overcome the limitations of unimodal biometric systems. These systems are suited for high-security applications. Most of the proposed multibiometric systems offer one level of security. In this paper, a new approach for the adaptive combination of multiple biometrics is proposed to ensure multiple levels of security. The score-level fusion rule is adapted using Particle Swarm Optimization (PSO) to ensure the desired system performance corresponding to the desired level of security. The experimental results show that the proposed multimodal biometric system is appropriate for applications that require different levels of security.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_22-An_Adaptive_Multimodal_Biometrics_System_using_PSO.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Approach for Co-Channel Speech Segregation based on CASA, HMM Multipitch Tracking, and Medium Frame Harmonic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040721</link>
        <id>10.14569/IJACSA.2013.040721</id>
        <doi>10.14569/IJACSA.2013.040721</doi>
        <lastModDate>2013-07-30T19:12:24.0530000+00:00</lastModDate>
        
        <creator>Ashraf M. Mohy Eldin</creator>
        
        <creator>Aliaa A. A. Youssif</creator>
        
        <subject>CASA (computational auditory scene analysis); co-channel speech segregation; HMM (hidden Markov model) tracking; hybrid speech segregation approach; medium frame harmonic model; multipitch tracking; prominent pitch.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>This paper proposes a hybrid approach for co-channel speech segregation. An HMM (hidden Markov model) is used to track the pitches of two talkers. The resulting pitch tracks are then enriched with the prominent pitch, and the enriched tracks are correctly grouped using pitch continuity. Medium-frame harmonics are used to extract the second pitch for frames in which only one pitch was deduced in the previous steps. Finally, the pitch tracks are input to CASA (computational auditory scene analysis) to segregate the mixed speech. The center frequency range of the gammatone filter banks is maximized to reduce the overlap between the filtered channels for better segregation. Experiments were conducted using this hybrid approach on the speech separation challenge database and compared to the single (non-hybrid) approaches, i.e. signal processing and CASA. The results show that the hybrid approach outperforms the single approaches.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_21-A_Hybrid_Approach_for_Co-Channel_Speech_Segregation_based_on_CASA,_HMM_Multipitch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation on the Dental Periapical X-Ray Images for Osteoporosis Screening</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040720</link>
        <id>10.14569/IJACSA.2013.040720</id>
        <doi>10.14569/IJACSA.2013.040720</doi>
        <lastModDate>2013-07-30T19:12:20.6370000+00:00</lastModDate>
        
        <creator>Enny Itje Sela</creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Munakhir MS</creator>
        
        <subject>dental periapical X-Ray; osteoporosis; porous trabeculae;  segmentation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Segmentation of the trabeculae in dental periapical X-Ray images is very important for osteoporosis screening. Existing methods do not perform well in segmenting the trabeculae of dental periapical X-Ray images due to the presence of a large amount of spurious edges. This paper presents a combination of tophat-bothat filtering, histogram equalization contrasting, and local adaptive thresholding for the automatic segmentation of dental periapical X-Ray images. The qualitative evaluation, performed by a dentist, shows that the proposed segmentation algorithm captures the porous trabecular features of the dental periapical images well. The quantitative evaluation used fuzzy classification based on a neural network to classify these features. The accuracy rate was found to be 99.96% for the training set and around 65% for the testing set on a dataset of 60 subjects.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_20-Segmentation_on_The_Dental_Periapical_X-Ray_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Two-Hop Wireless Link under Nakagami-m Fading</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040719</link>
        <id>10.14569/IJACSA.2013.040719</id>
        <doi>10.14569/IJACSA.2013.040719</doi>
        <lastModDate>2013-07-30T19:12:17.2200000+00:00</lastModDate>
        
        <creator>Afsana Nadia</creator>
        
        <creator>Arifur Rahim Chowdhury</creator>
        
        <creator>Md. Shoayeb Hossain</creator>
        
        <creator>Md. Imdadul Islam</creator>
        
        <creator>M. R. Amin</creator>
        
        <subject>MIMO; MISO; SIMO; TAS; MRC; SER; SNR.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Nowadays, intense research is being conducted on two-hop wireless links under different fading conditions and their remedial measures. In this work, a two-hop link under three different configurations is considered: (i) MIMO on both hops, (ii) MISO in the first hop and SIMO in the second hop, and (iii) SIMO in the first hop and MISO in the second hop. The three models used here give the flexibility of using STBC (Space Time Block Coding) and a combining scheme on either the source-to-relay (S-R) or relay-to-destination (R-D) link. Even the incorporation of Transmitting Antenna Selection (TAS) is possible on any link. Here, the variation of SER (Symbol Error Rate) is determined against the mean SNR (Signal-to-Noise Ratio) of the R-D link for three different modulation schemes, BPSK, 8-PSK, and 16-PSK, taking the number of antennas and the SNR of the S-R link as parameters under Nakagami-m fading conditions.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_19-Performance_Evaluation_of_Two-Hop_Wireless_Link_under_Nakagami-m_Fading.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVM Classification of Urban High-Resolution Imagery Using Composite Kernels and Contour Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040718</link>
        <id>10.14569/IJACSA.2013.040718</id>
        <doi>10.14569/IJACSA.2013.040718</doi>
        <lastModDate>2013-07-30T19:12:13.8170000+00:00</lastModDate>
        
        <creator>Aissam Bekkari</creator>
        
        <creator>Mostafa El yassa</creator>
        
        <creator>Soufiane Idbraim</creator>
        
        <creator>Driss Mammass</creator>
        
        <creator>Azeddine Elhassouny</creator>
        
        <creator>Danielle Ducrot</creator>
        
        <subject>SVM; Contour information; Composite Kernels; Haralick features; Satellite image; Spectral and spatial information;  GLCM; Fourier descriptors; Hough transform; Zernike moments.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>The classification of remote sensing images has made great strides forward, given the availability of images at different resolutions as well as an abundance of very efficient classification algorithms. A number of works have shown promising results from the fusion of spatial and spectral information using Support Vector Machines (SVM), a group of supervised classification algorithms that have recently been used in the remote sensing field; however, the addition of contour information to both spectral and spatial information is still less explored.
For this purpose, we propose a methodology exploiting the properties of Mercer’s kernels to construct a family of composite kernels that easily combine multi-spectral features and Haralick texture features as data sources. The composite kernel that gives the best results is then used to introduce contour information into the classification process.
The proposed approach was tested on common scenes of urban imagery. The three different kernels tested allow a significant improvement in classification performance and the flexibility to balance the spatial and spectral information in the classifier. The experimental results indicate a global accuracy of 93.52%; the addition of contour information, described by Fourier descriptors, the Hough transform, and Zernike moments, increases the global accuracy by 1.61%, which is very promising.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_18-SVM_Classification_of_Urban_High-Resolution_Imagery_Using_Composite_Kernels_and_Contour_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Frequent Itemsets from Online Data Streams: Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040717</link>
        <id>10.14569/IJACSA.2013.040717</id>
        <doi>10.14569/IJACSA.2013.040717</doi>
        <lastModDate>2013-07-30T19:12:10.3870000+00:00</lastModDate>
        
        <creator>HebaTallah Mohamed Nabil</creator>
        
        <creator>Ahmed Sharaf Eldin</creator>
        
        <creator>Mohamed Abd El-Fattah Belal</creator>
        
        <subject>Data mining; frequent itemsets; data stream; sliding window model; landmark model; fading model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Online mining of data streams poses many more challenges than mining static databases. In addition to the one-scan nature, the unbounded memory requirement, the high data arrival rate of data streams, and the combinatorial explosion of itemsets exacerbate the mining task. The high complexity of the frequent itemset mining problem hinders the application of stream mining techniques. In this review, we present a comparative study of almost all, to our knowledge, of the algorithms for mining frequent itemsets from online data streams. All these techniques compromise the accuracy of the results due to the relatively limited storage, leading, in all cases, to approximate results.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_17-Mining_Frequent_Itemsets_from_Online_Data_Streams_Comparative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Image Database Using Independent Principal Component Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040716</link>
        <id>10.14569/IJACSA.2013.040716</id>
        <doi>10.14569/IJACSA.2013.040716</doi>
        <lastModDate>2013-07-30T19:12:06.9870000+00:00</lastModDate>
        
        <creator>H. B. Kekre</creator>
        
        <creator>Tanuja K. Sarode</creator>
        
        <creator>Jagruti K. Save</creator>
        
        <subject>Image classification; Principal Component Analysis (PCA); Eigenvalue; Eigenvector; Variance; Nearest neighbor classifier; Orthogonal transform; Feature vector; Covariance matrix.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>The paper presents a modified approach of Principal Component Analysis (PCA) for automatic classification of an image database. Principal components are the distinctive or peculiar features of an image; PCA also holds information regarding the structure of the data. PCA can be applied to all training images of the different classes together, forming a universal subspace, or to an individual image, forming an object subspace. But if PCA is applied independently to the different classes of objects, the principal directions will differ between them; thus they can be used to construct a classifier that makes decisions regarding the class, and dimension reduction of the feature vector is also possible. Initially, a training image set is chosen for each class. PCA, using eigenvector decomposition, is applied to each class individually, forming an independent eigenspace for that class; if there are n classes of training images, we get n eigenspaces. The dimension of an eigenspace depends upon the number of selected eigenvectors. Each training image is projected on the corresponding eigenspace, giving its feature vector; thus n sets of training feature vectors are produced. In the testing phase, a new image is projected on all eigenspaces, forming n feature vectors. These feature vectors are compared with the training feature vectors in the corresponding eigenspace, and the feature vector nearest to the new image in each eigenspace is found. Classification of the new image is accomplished by comparing the distances between the nearest feature vector and the training image feature vectors in each eigenspace. Two distance criteria, Euclidean and Manhattan distance, are used. The system is tested on the COIL-100 database, and performance is tabulated for different sizes of the training image database.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_16-Classification_of_Image_Database_Using_Independent_Principal_Component_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>POSIX.1 conformance For Android Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040715</link>
        <id>10.14569/IJACSA.2013.040715</id>
        <doi>10.14569/IJACSA.2013.040715</doi>
        <lastModDate>2013-07-30T19:12:05.1770000+00:00</lastModDate>
        
        <creator>Tayyaba Nafees</creator>
        
        <creator>Prof. Dr. Shoab Ahmad Khan</creator>
        
        <subject>Portable Operating System Interface (POSIX); Application Programming Interface (API); Operating system (OS); User interface (UI)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>The Android operating system is designed for use in mobile computing by the Open Handset Alliance. The Android market has hundreds of thousands of Android applications, and these applications are restricted to mobile devices. This restriction is mainly due to the portability and compatibility issues of the Android operating system, so the ability to run these countless Android applications on any POSIX desktop operating system without disturbing the internal structure of the applications is very desirable. We therefore need to resolve these standardization and portability concerns by using the POSIX (Portable Operating System Interface) standards. POSIX conformance for Android applications provides full-scale portability services and Android application reusability for any POSIX desktop operating system, so that Android applications become usable for all POSIX desktop users. This research introduces a POSIX.1 Android thin-layer model that provides POSIX conformance for Android applications. It uses the POSIX.1 APIs for Android applications, which maintains compatibility between POSIX desktop operating systems and Android applications. We have analyzed our work by implementing different applications in a standard POSIX environment and have verified the results. The results of the POSIX.1 model clearly show that it can boost Android application market revenue by up to 100% and add real-time standardization and reusability.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_15-POSIX.1_conformance_for_Android_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Diagnostic System for Congenital Heart Defects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040714</link>
        <id>10.14569/IJACSA.2013.040714</id>
        <doi>10.14569/IJACSA.2013.040714</doi>
        <lastModDate>2013-07-30T19:12:00.8870000+00:00</lastModDate>
        
        <creator>Amir Mohammad Amiri</creator>
        
        <creator>Giuliano Armano</creator>
        
        <subject>congenital heart defects; Heart murmurs; newborns; classification and regression trees;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Congenital heart disease is the most common birth defect. This article describes the detection and classification of congenital heart defects (CHD) using classification and regression trees. The ultimate goal of this research is to decrease the risk of cardiac arrest and mortality relative to healthy children. The proposed intelligent system automates diagnosis in three stages: (i) pre-processing, (ii) feature extraction, and (iii) classification of congenital heart defects using data mining tools. The intelligent diagnostic system has been validated with a representative dataset of 110 heart sound signals, taken from healthy and unhealthy medical cases. On the test dataset, the system was evaluated with the following performance measurements: global accuracy 98.18%, sensitivity 96.36%, and specificity 100%. These results show the feasibility of classification based on optimized feature extraction and an optimized classifier. This paper follows the recommendations of the Association for the Advancement of Medical Instrumentation.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_14-An_Intelligent_Diagnostic_System_for_Congential_Heart_Defects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Performance of Information Retrieval by using Domain Based Crawler</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040713</link>
        <id>10.14569/IJACSA.2013.040713</id>
        <doi>10.14569/IJACSA.2013.040713</doi>
        <lastModDate>2013-07-30T19:11:57.4370000+00:00</lastModDate>
        
        <creator>Sk. Abdul Nabi</creator>
        
        <creator>Dr. P. Premchand</creator>
        
        <subject>Domain Based Information Retrieval (DBIR); Storage Space (SS); Searching Time (ST); Master Crawler; Basic Crawler; EPOW.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>The World Wide Web continuously introduces new capabilities and attracts many people [1]. It consists of more than 60 billion pages online. Due to this explosion in size, information retrieval systems, or search engines, are being upgraded day by day so that information can be accessed effectively and efficiently. In this paper, we address a Domain Based Information Retrieval (DBIR) system. In this system we crawl information from the web and add to the database all links that are related to a specific domain, simply ignoring those that are not. Because of this we save Storage Space (SS) and Searching Time (ST), which as a result improves the performance of the system.
It is an extension of the Effective Performance of Web Crawler (EPOW) system [2], which has two crawler modules. The first is the Basic Crawler, which consists of multiple downloaders to achieve a parallelization policy. The second is the Master Crawler, which filters the URLs sent by the Basic Crawler based on the domain and sends them back to the Basic Crawler to extract the related links. All these related links are collectively stored in the database under a unique domain name.
</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_13-Effective_Performance_of_Information_Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Permutation Based Approach for Effective and Efficient Representation of Face Images under Varying Illuminations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040712</link>
        <id>10.14569/IJACSA.2013.040712</id>
        <doi>10.14569/IJACSA.2013.040712</doi>
        <lastModDate>2013-07-30T19:11:55.6130000+00:00</lastModDate>
        
        <creator>Shylaja S S</creator>
        
        <creator>K N Balasubramanya Murthy</creator>
        
        <creator> S Natarajan</creator>
        
        <subject>Biometrics; Face Recognition; Independent Component Analysis (ICA); Linear Discriminant Analysis (LDA); Locality Preserving Projections (LPP); Pattern Recognition; Permutation Matrix (PM); Singular Value Decomposition (SVD).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Of paramount importance for an automated face recognition system is the ability to enhance discriminatory power with a low-dimensional feature representation. Keeping this as a focal point, we present a novel approach to face recognition by formulating the problem of face tagging in terms of permutation. Using the fundamental concept that the dominant pixels of a person will remain dominant under varying illuminations, we develop a Permutation Matrix (PM) based approach for representing face images. The proposed method is extensively evaluated on several benchmark databases under different exemplary evaluation protocols reported in the literature. Experimental results and a comparative study with state-of-the-art methods suggest that the proposed approach provides a better representation of the face, thereby achieving higher efficacy and lower error rates.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_12-A_Novel_Permutation_Based_Approach_for_Effective_and_Efficient_Representation_of_Face_Images_under_Varying_Illuminations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contribution of the Computer Technologies in the Teaching of Physics: Critical Review and Conception of an Interactive Simulation Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040711</link>
        <id>10.14569/IJACSA.2013.040711</id>
        <doi>10.14569/IJACSA.2013.040711</doi>
        <lastModDate>2013-07-30T19:11:53.7870000+00:00</lastModDate>
        
        <creator>Abdeljalil M&#233;tioui</creator>
        
        <creator>Louis Trudel</creator>
        
        <subject>critical review; physical; conception; interactive environments; representations</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>In the present research, we synthesize the main research results on the development of interactive computer environments for physics teaching and learning. We will see that few types of software propose environments that take into account the user's erroneous representations in order to make the user aware of his or her mistakes. The majority of these software packages present modelling activities that are restricted to the automatic collection of experimental data and their analysis in graphical form. Consequently, we present the design of computer environments for learning the phenomena of absorption and diffusion of light which take into account the user's initial representations. The design of these environments is divided into five steps: (1) diagnosis of the user's initial representations; (2) confrontation of the user's initial representations by the simulation; (3) reconstruction by the user of his or her representations following completion of the simulation; (4) reconstruction of the user's representations following the presentation by the software of scientific information related to the case studied; and (5) assessment by the software of the user's current state of understanding.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_11-Contribution_of_the_Computer_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On an internal multimodel control for nonlinear multivariable systems - A comparative study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040710</link>
        <id>10.14569/IJACSA.2013.040710</id>
        <doi>10.14569/IJACSA.2013.040710</doi>
        <lastModDate>2013-07-30T19:11:50.3400000+00:00</lastModDate>
        
        <creator>Nahla Touati Karmani</creator>
        
        <creator>Dhaou Soudani</creator>
        
        <creator>Mongi Naceur</creator>
        
        <creator>Mohamed Benrejeb</creator>
        
        <subject>internal multimodel control; nonlinear multivariable systems; inverse model; switching models; residues techniques.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>An internal multimodel control designed for nonlinear multivariable systems is proposed in this paper. This approach is based on the multi-modeling of nonlinear systems and the realization of a specific inversion of each model. A comparative study is presented between two structures of the internal multimodel control: the first is based on switching models and the second on residue techniques as a fusion method. The case of a second-order nonlinear multivariable system shows the effectiveness of both structures.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_10-On_an_internal_multimodel_control_for_nonlinear_multivariable_systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Spectrum Sensing under Suburban Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040709</link>
        <id>10.14569/IJACSA.2013.040709</id>
        <doi>10.14569/IJACSA.2013.040709</doi>
        <lastModDate>2013-07-30T19:11:46.9070000+00:00</lastModDate>
        
        <creator>Aamir Zeb Shaikh</creator>
        
        <creator>Dr. Talat Altaf</creator>
        
        <subject>collaborative spectrum sensing; suburban environment; asymptotic analysis; cognitive radio; opportunistic access</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Collaborative spectrum sensing for the detection of white spaces helps in realizing reliable and efficient spectrum sensing algorithms, which results in efficient secondary usage of the primary spectrum. Collaboration among cognitive radios improves the probability of detecting a spectral hole as well as the sensing time. The available literature in this domain uses Gudmundson’s exponential correlation model for correlated lognormal shadowing under both urban and suburban environments. However, empirical measurements verify that the suburban environment is better modeled by a double exponential correlation model than by Gudmundson’s exponential correlation model. Collaboration among independent sensors provides diversity gains. The asymptotic detection probability for collaborating users under suburban environments using the double exponential correlation model has been derived. Also, the Region of Convergence performance of collaborative detection is presented, which agrees well with the analytical derivations.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_9-Collaborative_Spectrum_Sensing_under_Suburban_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication in Veil: Enhanced Paradigm for ASCII Text Files</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040708</link>
        <id>10.14569/IJACSA.2013.040708</id>
        <doi>10.14569/IJACSA.2013.040708</doi>
        <lastModDate>2013-07-30T19:11:43.4600000+00:00</lastModDate>
        
        <creator>Khan Farhan Rafat</creator>
        
        <creator>Muhammad Sher</creator>
        
        <subject>Embedded Secrets; Hide and Seek; Eccentric way of writing; ASCII Text Steganography; Communication in Veil; Stealth Communication</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Digitization has had a pervasive impact on the information and communication technology (ICT) field, which can be realized from the fact that today one seldom thinks of standing in a long queue just to deposit utility bills, buy a movie ticket, or dispatch private letters via the post office; these and other similar activities are now preferably done electronically over the internet, which has shattered geographical boundaries and tied the people of the world into a single logical unit called the global village. The efficacy and precision with which electronic transactions are made is commendable and is one of the reasons why more and more people are switching to e-commerce for their official and personal usage. Via social networking sites one can interact with family and friends at any time of his or her choosing. The darker side of this comforting aspect, however, is that contents sent on- or off-line may be monitored for active or passive intervention by antagonistic forces for illicit motives ranging from, but not limited to, password, ID, and social security number theft to impersonation, compromise of personal information, blackmail, etc. This necessitates hiding data or information of some significance in an oblivious manner in order to thwart its detection by the enemy.
This paper aims at evolving an avant-garde information hiding scheme for ASCII text files - a research area regarded as the most difficult for this purpose, in contrast to audio, video, or image file formats.
</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_8-Communication_in_Veil-Enhanced_Paradigm_for_ASCII_Text_Files.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Software Project Management Tools in Saudi Arabia: An Exploratory Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040707</link>
        <id>10.14569/IJACSA.2013.040707</id>
        <doi>10.14569/IJACSA.2013.040707</doi>
        <lastModDate>2013-07-30T19:11:40.0130000+00:00</lastModDate>
        
        <creator>Nouf AlMobarak</creator>
        
        <creator>Rawan AlAbdulrahman</creator>
        
        <creator>Shahad AlHarbi</creator>
        
        <creator>Wea’am AlRashed</creator>
        
        <subject>project management tools; survey; Hijri calendar; Arabic interface;  software engineering; Arabic documentation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>This paper reports the results of an online survey study, which was conducted to investigate the use of software project management tools in Saudi Arabia. The survey provides insights into project management in the local context of Saudi Arabia from the ten different companies which participated in this study. The aim is to explore and specify the project management tools used by software project management teams and their managers, and to understand the supported features that might influence their selection. Moreover, the availability of an Arabic interface, the Hijri calendar, and Arabic documentation has been specially considered, due to the nature of the local context in dealing with the Hijri calendar and the prolific use of Arabic as the formal language in communication with clients in the public sector.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_7-The_Use_of_Software_Project_Management_Tools_in_Saudi_Arabia_An_Exploratory_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The SNCD as a Metrics for Image Quality Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040706</link>
        <id>10.14569/IJACSA.2013.040706</id>
        <doi>10.14569/IJACSA.2013.040706</doi>
        <lastModDate>2013-07-30T19:11:36.5630000+00:00</lastModDate>
        
        <creator>Avid Roman-Gonzalez</creator>
        
        <subject>Quality metrics; NCD; SNCD; Kolmogorov complexity; image quality.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>In our era, we have many instruments for capturing digital images, and their resolution keeps increasing; the quality of images has therefore become very important for different applications, and the development of tools for quality assessment is a current issue. In this paper, we propose to use the Symmetric Normalized Compression Distance (SNCD) as a metric for the measurement of image quality, especially when analyzing residual errors. We also show performance comparisons between the SNCD and other metrics that can be found in the research literature, and we present an analysis of the performance of the SNCD depending on the type of distortion.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_6-The_SNCD_as_a_Metrics_for_Image_Quality_Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rule Based System for Recognizing Emotions Using Multimodal Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040705</link>
        <id>10.14569/IJACSA.2013.040705</id>
        <doi>10.14569/IJACSA.2013.040705</doi>
        <lastModDate>2013-07-30T19:11:34.7530000+00:00</lastModDate>
        
        <creator>Preeti Khanna</creator>
        
        <creator>Sasikumar M.</creator>
        
        <subject>Human Computer Interaction (HCI); Multimodal emotion recognition; Rule based system; Emotional state; Modalities.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Emotion is assuming increasing importance in human computer interaction (HCI), in general, with the growing feeling that emotion is central to human communication and intelligence. Users expect not just functionality as a factor of usability, but experiences matched to their expectations, emotional states, and interaction goals. Endowing computers with this kind of intelligence for HCI is a complex task. It becomes more complex with the fact that the interaction of humans with their environment (including other humans) is naturally multimodal: in reality, one uses a combination of modalities, and they are not treated independently. In an attempt to render HCI more similar to human-human communication and enhance its naturalness, research on multiple modalities of human expression has seen ongoing progress in the past few years. Compared to unimodal approaches, various problems arise in multimodal emotion recognition, especially concerning the fusion architecture of multimodal information. In this paper we propose a rule based hybrid approach to combine the information from various sources for recognizing the target emotions. The results presented in this paper show that it is feasible to recognize human affective states with reasonable accuracy by combining the modalities using a rule based system.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_5-Rule_Based_System_for_Recognizing_Emotions_Using_Multimodal_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LED to LED communication with WDM concept for flash light of Mobile phones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040704</link>
        <id>10.14569/IJACSA.2013.040704</id>
        <doi>10.14569/IJACSA.2013.040704</doi>
        <lastModDate>2013-07-30T19:11:32.9430000+00:00</lastModDate>
        
        <creator>Devendra J Varanva</creator>
        
        <creator>Kantipudi MVV Prasad</creator>
        
        <subject>Free Space Light Communication; Visible Light Communication; Solid State Device; LED as Sensor; LED-LED Communication; WDM; VLC with Mobile handset/Smartphone.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Observing recent developments in Free Space Optical Communication, especially Visible Light Communication, it is clear that the LED is the main component used as a source. The LED being a solid state device opens an endless list of possibilities. Here we examine its ability to sense light as well, and the use of Wavelength Division Multiplexing (WDM) in the mobile phone flash is also suggested; this opens the door to many applications.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_4-LED_to_LED_communication_with_WDM_concept_for_flash_light_of_Mobile_phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Exploration of Complex Network Data Using Affective Brain-Computer Interface</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040703</link>
        <id>10.14569/IJACSA.2013.040703</id>
        <doi>10.14569/IJACSA.2013.040703</doi>
        <lastModDate>2013-07-30T19:11:29.4970000+00:00</lastModDate>
        
        <creator>Sergey V. Kovalchuk</creator>
        
        <creator>Denis M. Terekhov</creator>
        
        <creator>Aleksey A. Bezgodov</creator>
        
        <creator>Alexander V. Boukhanovsky</creator>
        
        <subject>affective brain-computer interface; complex network data; visualisation; virtual reality; domain-specific knowledge; human-computer interaction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>This paper describes the current state of the work aimed towards an affective application of BCI to the task of complex data visual exploration. The developed technological approach exploits the idea of supporting tacit and complex domain-specific knowledge acquisition during the examination of visual images built using large input data sets. The presented experimental research on the complex network data exploration process shows the capabilities of the presented approach through the analysis of a user’s affective state estimation.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_3-Visual_Exploration_of_Complex_Network_Data_Using_Affective_Brain-Computer_Interface.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Genetic Algorithms to Test JUH DBs Exceptions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040702</link>
        <id>10.14569/IJACSA.2013.040702</id>
        <doi>10.14569/IJACSA.2013.040702</doi>
        <lastModDate>2013-07-30T19:11:26.0500000+00:00</lastModDate>
        
        <creator>Mohammad Alshraideh</creator>
        
        <creator>Ezdehar Jawabreh</creator>
        
        <creator>Basel A. Mahafzah</creator>
        
        <creator>Heba M. AL Harahsheh</creator>
        
        <subject>Database Application; Exception Handling; Mutation Testing; Genetic Algorithms; Select Statement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Databases represent an essential part of software applications. Many organizations use a database as a repository for large amounts of current and historical information. In this context, testing database applications is a key issue that deserves attention. The SQL exception handling mechanism can increase the reliability of a system and improve the robustness of the software, but the exception handling code that is used to respond to exceptional conditions tends to be a source of system failures, and it is difficult to test exception handling by traditional methods. This paper presents a new technique that combines mutation testing and a global-optimization-based search algorithm to test exception code in the Jordan University Hospital (JUH) database application. We use mutation testing to speed up the raising of exceptions and a global optimization technique to automatically generate test cases, with a fitness function that depends on the range of data related to each query. We try to achieve coverage of three types of PL/SQL exceptions: No_Data_Found (NDF), Too_Many_Rows (TMR), and Others. The results show that the TMR exception is not always covered, due to the existence of a primary key in the query; uncovered statuses also appear in nested exceptions.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_2-Applying_Genetic_Algorithms_to_Test_JUH_DBs_Exceptions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relation Inclusive Search for Hindi Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040701</link>
        <id>10.14569/IJACSA.2013.040701</id>
        <doi>10.14569/IJACSA.2013.040701</doi>
        <lastModDate>2013-07-30T19:11:24.1130000+00:00</lastModDate>
        
        <creator>Pooja Arora</creator>
        
        <creator>Om Vikas</creator>
        
        <subject>Relation inclusive search; RSearch; spatial &amp; temporal prepositions and postpositions; Hindi document retrieval; case roles.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(7), 2013</description>
        <description>Information retrieval (IR) techniques have become a challenge to researchers due to the huge growth of digital information. As a wide variety of Hindi data and literature is now available on the web, we have developed an information retrieval system for Hindi documents. This paper presents a new searching technique that shows promising results in terms of F-measure. Historically, there have been two major approaches to IR: keyword-based search and concept-based search. We introduce a new relation inclusive search, which searches documents using case role relations, spatial relations and temporal relations of query terms and gives better results than previously used approaches. In this method we use a new indexing technique that stores information about the relations between terms along with their positions. We compared four types of searching: keyword-based search without relation inclusion, keyword-based search with relation inclusion, concept-based search without relation inclusion, and concept-based search with relation inclusion. Our proposed searching method gave a significant improvement in terms of F-measure. For the experiments we used the Hindi document corpus Gyannidhi from C-DAC. This technique effectively improves search performance for documents in English as well.</description>
        <description>http://thesai.org/Downloads/Volume4No7/Paper_1-Relation_Inclusive_Search_for_Hindi_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Printed Arabic Character Classification Using Cadre of Level Feature Extraction Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030209</link>
        <id>10.14569/SpecialIssue.2013.030209</id>
        <doi>10.14569/SpecialIssue.2013.030209</doi>
        <lastModDate>2013-07-27T06:43:57.5730000+00:00</lastModDate>
        
        <creator>S. Nouri</creator>
        
        <creator>M.Fakir</creator>
        
        <subject>Arabic Character; Cadre of Level; Recognition; K-Nearest Neighbor; Digits.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>Feature extraction techniques are important in character recognition because they can enhance recognition efficacy in comparison to other approaches. This study investigates a novel feature extraction technique, called the Cadre of Level technique, for representing printed characters and digits. The technique gives both statistical and morphological information, i.e., the calculation is based on a statistical approach but at positions that convey information about the morphology of the character. The image of a character is divided into 100 zones; for each zone we average 5 extracted values (one value for each level) into 1 value, which gives 100 features for each character. The technique was applied to 105 different characters and 10 different digits of Arabic printed script. The K-Nearest Neighbor algorithm was used to classify the printed characters and digits.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_9-Printed_Arabic_Character_Classification_Using_Cadre_of_Level_Feature_Extraction_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Amazigh characters using SURF &amp; GIST descriptors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030208</link>
        <id>10.14569/SpecialIssue.2013.030208</id>
        <doi>10.14569/SpecialIssue.2013.030208</doi>
        <lastModDate>2013-07-27T06:43:51.7870000+00:00</lastModDate>
        
        <creator>H. Moudni</creator>
        
        <creator>M. Er-rouidi</creator>
        
        <creator>M. Oujaoura</creator>
        
        <creator>O. Bencharef</creator>
        
        <subject>SURF; GIST; Principal Component Analysis; Neural Network; Amazigh Characters.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>In this article, we describe a recognition system for Amazigh handwritten letters. The SURF descriptor, specifically SURF-36, and the GIST descriptor are used to extract feature vectors for each letter from our database, which consists of 25,740 isolated handwritten Amazigh characters. All the feature vectors of each letter form a training set used to train a neural network so that it can compute a single output from the information it receives. Finally, we present a comparative study between the SURF-36 descriptor and the GIST descriptor.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_8-Recognition_of_Amazigh_characters_using_SURF_GIST_descriptors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance evaluation of ad hoc routing protocols in VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030207</link>
        <id>10.14569/SpecialIssue.2013.030207</id>
        <doi>10.14569/SpecialIssue.2013.030207</doi>
        <lastModDate>2013-07-27T06:43:46.0070000+00:00</lastModDate>
        
        <creator>Mohammed ERRITALI</creator>
        
        <creator>Bouabid El Ouahidi</creator>
        
        <subject>Routing protocols; OPNET; Performance evaluation; DSR; AODV; OLSR; TORA; GRP.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>The objective of this work is to compare, by simulation using OPNET, the performance of five ad hoc routing protocols: DSR, AODV, OLSR, TORA and GRP, and to examine the impact of mobility and node density on the behavior of these protocols in a Vehicular Ad hoc NETwork (VANET). The results show that no single protocol is best for all evaluation criteria. Indeed, each protocol behaves differently with respect to the performance metrics considered, including the rate of routing packets sent, delay, and throughput.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_7-Performance_evaluation_of_ad_hoc_routing_protocols_in_VANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handwritten Tifinagh Text Recognition Using Fuzzy K-NN and Bi-gram Language Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030206</link>
        <id>10.14569/SpecialIssue.2013.030206</id>
        <doi>10.14569/SpecialIssue.2013.030206</doi>
        <lastModDate>2013-07-27T06:43:42.9330000+00:00</lastModDate>
        
        <creator>Said Gounane</creator>
        
        <creator>Mohammad Fakir</creator>
        
        <creator>Belaid Bouikhalen</creator>
        
        <subject>Tifinagh character recognition; fuzzy k-nearest neighbor; features extraction; bigram language model.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>In this paper we present a new approach to Tifinagh character recognition using a combination of the k-nearest neighbor algorithm and a bigram language model. After preprocessing of the text image and word segmentation, for each character image the k-NN algorithm proposes candidates weighted by their membership degree. We then use the bigram language model to choose the most appropriate sequence of characters. Results show that our method increases the recognition rate.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_6-Handwritten_Tifinagh_Text_Recognition_Using_Fuzzy_K-NN_and_Bi-gram_Language_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Invariant Descriptors and Classifiers Combination for Recognition of Isolated Printed Tifinagh Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030205</link>
        <id>10.14569/SpecialIssue.2013.030205</id>
        <doi>10.14569/SpecialIssue.2013.030205</doi>
        <lastModDate>2013-07-27T06:43:36.4570000+00:00</lastModDate>
        
        <creator>M. OUJAOURA</creator>
        
        <creator>R. EL AYACHI</creator>
        
        <creator>B. MINAOUI</creator>
        
        <creator>M. FAKIR</creator>
        
        <creator>B. BOUIKHALENE</creator>
        
        <creator>O. BENCHAREF</creator>
        
        <subject>Recognition system; Legendre moments; Zernike moment; Hu moments; Neural Networks; Multiclass SVM; nearest neighbour classifier</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>In order to improve the recognition rate, this document proposes an automatic system to recognize isolated printed Tifinagh characters using a fusion of 3 classifiers and a combination of several feature extraction methods. The Legendre moments, Zernike moments and Hu moments are used as descriptors in the feature extraction phase due to their invariance to translation, rotation and scaling. In the classification phase, the neural network, the multiclass SVM (Support Vector Machine) and the nearest neighbour classifiers are combined. The experimental results of each single feature extraction method and each single classification method are compared with our approach to show its robustness.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_5-Invariant_Descriptors_and_Classifiers_Combination_for_Recognition_of_Isolated_Printed_Tifinagh_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Color Image Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030204</link>
        <id>10.14569/SpecialIssue.2013.030204</id>
        <doi>10.14569/SpecialIssue.2013.030204</doi>
        <lastModDate>2013-07-27T06:43:33.3830000+00:00</lastModDate>
        
        <creator>Abderrahmane ELBALAOUI</creator>
        
        <creator>M.FAKIR</creator>
        
        <creator>N.IDRISSI</creator>
        
        <creator>A.MERBOUHA</creator>
        
        <subject>Image segmentation; k-means; region and Boundary.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>This paper provides a review of methods advanced in the past few years for the segmentation of color images. After a brief definition of segmentation, we outline the various existing techniques, classified according to their approaches. We identify five families: approaches based on contours, those relying on the notion of region, structural approaches, those based on shape, and those using notions of graphs. For each of these approaches, we explain and illustrate their most important methods. This review is not intended to be exhaustive, and the classification of certain methods may be debated, since they lie at the boundary between different approaches.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_4-Review_of_Color_Image_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Algorithm for Hidden Markov Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030203</link>
        <id>10.14569/SpecialIssue.2013.030203</id>
        <doi>10.14569/SpecialIssue.2013.030203</doi>
        <lastModDate>2013-07-27T06:43:27.5800000+00:00</lastModDate>
        
        <creator>SANAA CHAFIK </creator>
        
        <creator>DAOUI CHERKI</creator>
        
        <subject>Hidden Markov Model; Forward; Divide and Conquer;  Decomposition;  Communicating Class.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>The Forward algorithm is an inference algorithm for hidden Markov models, which often involves a very large hidden state space. The objective of this work is to reduce the cost of the Forward algorithm by offering a faster, improved algorithm based on a divide and conquer technique.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_3-Hierarchical_Algorithm_for_Hidden_Markov_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Annotation and research of pedagogical documents in a platform of e-learning based on Semantic Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030202</link>
        <id>10.14569/SpecialIssue.2013.030202</id>
        <doi>10.14569/SpecialIssue.2013.030202</doi>
        <lastModDate>2013-07-27T06:43:21.7570000+00:00</lastModDate>
        
        <creator>S. BOUKIL</creator>
        
        <creator>C. DAOUI</creator>
        
        <creator>B. BOUIKHALENE</creator>
        
        <creator>M.FAKIR</creator>
        
        <subject>Ontology; Semantic Web; XML; RDF; RDFS; OWL; metadata; e-learning.</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>E-learning is considered one of the areas in which the Semantic Web can make a real improvement, whether in finding information, reusing educational resources, or even personalizing learning paths. This paper aims to develop an educational ontology that will be used to annotate learning materials and pedagogical documents.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_2-Annotation_and_research_of_pedagogical_documents_in_a_platform_of_e-learning_based_on_Semantic_Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Data Mining Tools for Recognition of Tifinagh Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030201</link>
        <id>10.14569/SpecialIssue.2013.030201</id>
        <doi>10.14569/SpecialIssue.2013.030201</doi>
        <lastModDate>2013-07-27T06:43:15.2630000+00:00</lastModDate>
        
        <creator>M. OUJAOURA</creator>
        
        <creator>R. EL AYACHI</creator>
        
        <creator>O. BENCHAREF</creator>
        
        <creator>Y. CHIHAB</creator>
        
        <creator>B. JARMOUNI</creator>
        
        <subject>OCR; Data Mining; Classification; Recognition; Tifinagh; geodesic descriptors; Zernike Moments; CART; AdaBoost; KNN; SVM; RNA; ANFIS</subject>
        <description>Special Issue(SpecialIssue), 3(2), 2013</description>
        <description>The majority of Tifinagh OCR systems presented in the literature do not go beyond simulation software such as Matlab. In this work, the objective is to compare classification data mining tools for Tifinagh character recognition. This comparison is performed in a working environment using an Oracle database and Oracle Data Mining (ODM) tools, in order to determine the algorithm that gives the best recognition rates (rate/time).</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo6/Paper_1-Application_of_Data_Mining_Tools_for_Recognition_of_Tifinagh_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Weapon Target Assignment with Combinatorial Optimization Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020707</link>
        <id>10.14569/IJARAI.2013.020707</id>
        <doi>10.14569/IJARAI.2013.020707</doi>
        <lastModDate>2013-07-09T18:46:27.4400000+00:00</lastModDate>
        
        <creator>Asim Tokg&#246;z</creator>
        
        <creator>Serol Bulkan</creator>
        
        <subject>Weapon Target Assignment; Combinatorial Optimization; Genetic Algorithm; Tabu Search; Simulated Annealing; Variable Neighborhood Search; WTA; WASA; TEWASA</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>Weapon Target Assignment (WTA) is the assignment of friendly weapons to hostile targets in order to protect friendly assets or destroy the hostile targets, and it is considered an NP-complete problem. Thus, it is very hard to solve for real-time or near-real-time operational needs. In this study, the genetic algorithm (GA), tabu search (TS), simulated annealing (SA) and variable neighborhood search (VNS) combinatorial optimization techniques are applied to the WTA problem, and their results are compared with each other and with optimized GAMS solutions. The algorithms are tested on large-scale problem instances. It is found that all the algorithms effectively converge to near-global-optimum points (good quality), and that the efficiency of the solutions (solution speed) might be improved according to operational needs. The VNS and SA solution qualities are better than both GA and TS.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_7-Weapon_Target_Assignment_with_Combinatorial_Optimization_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contradiction Resolution between Self and Outer Evaluation for Supervised Multi-Layered Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020706</link>
        <id>10.14569/IJARAI.2013.020706</id>
        <doi>10.14569/IJARAI.2013.020706</doi>
        <lastModDate>2013-07-09T18:46:25.5030000+00:00</lastModDate>
        
        <creator>Ryotaro Kamimura</creator>
        
        <subject>contradiction resolution; self- and outer-evaluation; visualization; self-organizing maps</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>In this paper, we propose a new type of information-theoretic method. We suppose that a neuron should be evaluated from different points of view to precisely discern its properties. In this paper, we restrict ourselves to two types of evaluation methods for neurons, namely, self- and outer-evaluation. A neuron fires only as a result of evaluating itself, while the neuron can fire as a result of evaluation by all surrounding neurons. Self- and outer-evaluation should be equivalent to each other. When contradiction between the two types of evaluation exists, the contradiction should be as small as possible. Contradiction between self- and outer-evaluations is realized in terms of the Kullback-Leibler divergence between the two types of neurons. Contradiction between self- and outer-evaluation can be resolved by decreasing the contradiction ratio between the two types of evaluation in terms of KL divergence. This method is expected to extract the main features in input patterns, if those are shared by the two types of evaluation. We applied the method to two data sets, namely, the logistic and dollar-yen exchange rate data. In both problems, experimental results showed that visualization performance could be improved, leading to clearer class structure for both problems. In addition, when visualization was improved, generalization performance did not necessarily degrade, showing the possibility of networks with better visualization and prediction performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_6-Contradiction_Resolution_between_Self_and_Outer_Evaluation_for_Supervised_Multi-Layered_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Skeleton model derived from Kinect Depth Sensor Camera and its application to walking style quality evaluations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020705</link>
        <id>10.14569/IJARAI.2013.020705</id>
        <doi>10.14569/IJARAI.2013.020705</doi>
        <lastModDate>2013-07-09T18:46:21.9930000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Rosa Andrie Asmara</creator>
        
        <subject>Disable gait classification; 3D Skeleton Model; SVM; Biometrics</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>Feature extraction methods for gait recognition have been widely studied. They divide into two families: model-based and model-free. Model-based approaches obtain a set of static or dynamic skeleton parameters by modeling or tracking body components such as limbs, legs, arms and thighs. Model-free approaches focus on the shapes of silhouettes or the entire movement of physical bodies. Model-free approaches are insensitive to the quality of silhouettes, and their advantage is a low computational cost compared to model-based approaches; however, they are usually not robust to viewpoint and scale. Imaging technology has also developed quickly in recent decades. Motion capture (mocap) devices integrated with motion sensors are expensive and can typically be afforded only by large animation studios. Fortunately, the Kinect camera, equipped with a depth sensor, is now available on the market at a very low price compared to any mocap device. Its accuracy is not as good as that of the expensive devices, but with some preprocessing we can remove jitter and noise from the 3D skeleton points. Our proposed method is a model-based feature extraction approach, which we call the 3D skeleton model. Extracting gait with a 3D skeleton model is a new approach, considering that all previous models used 2D skeleton models; its advantage is obtaining accurate 3D coordinates for each skeleton point rather than only 2D points. We use Kinect to get the depth data and Ipisoft mocap software to extract the 3D skeleton model from Kinect video. The experimental results show 86.36% correctly classified instances using SVM.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_5-3D_Skeleton_model_derived_from_Kinect_Depth_Sensor_Camera_and_its_application_to_walking_style_quality_evaluations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recovering Method of Missing Data Based on Proposed Modified Kalman Filter When Time Series of Mean Data is Known</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020704</link>
        <id>10.14569/IJARAI.2013.020704</id>
        <doi>10.14569/IJARAI.2013.020704</doi>
        <lastModDate>2013-07-09T18:46:20.1530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Kalman filter; remote sensing satellite image; time series analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>A recovering method for missing data based on the proposed modified Kalman filter, for the case in which the time series of mean data is known, is proposed. There are cases in which, although a portion of the data is missing, the mean value of the time series of data is known. For instance, coarse-resolution imagery data are acquired every day, while fine-resolution imagery data are sometimes missing. In other words, a coarse-resolution imaging sensor has a wide swath width while a fine-resolution imaging sensor has a narrow swath, in general. Therefore, coarse-resolution sensor data can be acquired every day while fine-resolution sensor data cannot be acquired so frequently. It would be desirable to be able to create frequently acquired (every day) fine-resolution sensor data using previously acquired fine-resolution sensor data together with the coarse-resolution sensor data. The proposed method allows the creation of fine-resolution sensor data in this manner based on a modified Kalman filter. As an example of the proposed method, prediction of missing ASTER/VNIR data based on a Kalman filter, using simultaneously acquired MODIS data as the mean value of the time series data in the revision of the filter status, is attempted, together with a comparative study of the prediction errors of the conventional Kalman filter and the proposed modified Kalman filter, which utilizes the mean value of time series data derived from other sources. Experimental data show that a 4 to 111% prediction error reduction can be achieved by the proposed modified Kalman filter in comparison to the conventional Kalman filter. It is found that the reduction rate depends on the accuracy of the mean value of the time series data derived from the other data sources. The experimental results with remote sensing satellite imagery data show the validity of the proposed method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_4-Recovering_Method_of_Missing_Data_Based_on_Proposed_Modified_Kalman_Filter_When_Time_Series_of_Mean_Data_is_Known.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validity of Spontaneous Braking and Lane Changing with Scope of Awareness by Using Measured Traffic Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020703</link>
        <id>10.14569/IJARAI.2013.020703</id>
        <doi>10.14569/IJARAI.2013.020703</doi>
        <lastModDate>2013-07-09T18:46:18.3270000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Steven Ray Sentinuwo</creator>
        
        <subject>traffic model validation; spontaneous braking; scope awareness; traffic cellular automaton.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>This paper presents a validation method and its evaluation for spontaneous braking and lane changing with a scope-awareness parameter. Using real traffic flow data, the traffic cellular automaton model that accommodates these two driver behaviors, i.e., spontaneous braking and driver scope awareness, is compared and evaluated. The real traffic flow data were observed via video recording captured from a real traffic situation. The validation results show that, by accommodating the spontaneous braking and scope awareness parameters, the model can reproduce the real traffic flow with an accuracy of 83.9%.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_3-Validity_of_Spontaneous_Braking_and_Lane_Changing_with_Scope_of_Awareness_by_Using_Measured_Traffic_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Driver Scope Awareness in the Lane Changing Maneuvers Using Cellular Automaton Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020702</link>
        <id>10.14569/IJARAI.2013.020702</id>
        <doi>10.14569/IJARAI.2013.020702</doi>
        <lastModDate>2013-07-09T18:46:16.5030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Steven Ray Sentinuwo</creator>
        
        <subject>scope awareness; lane changing maneuver; speed estimation; spontaneous braking.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>This paper investigates the effect of drivers’ visibility and their perception (e.g., estimating the speed and arrival time of another vehicle) on the lane changing maneuver. The term scope awareness is used to describe the visibility required by the driver to form a perception of the road condition and the speed of the vehicles on that road. A computer simulation model was built to capture this driver awareness behavior. This study attempts to precisely capture lane changing behavior and to illustrate the scope awareness parameter that reflects driver behavior. The paper proposes a simple cellular automata model for studying the effects of driver visibility on the lane changing maneuver and of driver perception of estimated speed. Different values of scope awareness were examined to capture their effect on the traffic flow. Simulation results show the ability of this model to capture the important features of the lane changing maneuver, and reveal the appearance of short-thin solid-line jams and wide solid-line jams in the traffic flow as consequences of the lane changing maneuver.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_2-Effect_of_Driver_Scope_Awareness_in_the_Lane_Changing_Maneuvers_Using_Cellular_Automaton_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative study between the proposed shape independent clustering method and the conventional methods (K-means and the other)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020701</link>
        <id>10.14569/IJARAI.2013.020701</id>
        <doi>10.14569/IJARAI.2013.020701</doi>
        <lastModDate>2013-07-09T18:46:14.6470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Cahya Rahmad</creator>
        
        <subject>clustering algorithms; mlccd; shape independence clustering;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(7), 2013</description>
        <description>Cluster analysis aims at identifying groups of similar objects and therefore helps to discover the distribution of patterns and interesting correlations in data sets. In this paper, we propose to provide a consistent partitioning of a dataset which allows identifying cluster patterns of any shape in numerical clustering, convex or non-convex. The method is based on a layered structure representation obtained from the measured distance and angle of numerical data to the centroid, and on an iterative clustering construction utilizing the nearest-neighbor distance between clusters to merge them. Encouraging results show the effectiveness of the proposed technique.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No7/Paper_1-Comparative_study_between_the_proposed_shape_independent_clustering_method_and_the_conventional_methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network based Mobility aware Prefetch Caching and Replacement Strategies in Mobile Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040521</link>
        <id>10.14569/IJACSA.2013.040521</id>
        <doi>10.14569/IJACSA.2013.040521</doi>
        <lastModDate>2013-07-01T08:36:31.2630000+00:00</lastModDate>
        
        <creator>Hariram Chavan</creator>
        
        <creator>Suneeta Sane</creator>
        
        <creator>H. B. Kekre</creator>
        
        <subject>Location Based Services, Caching, backpropagation, cache-miss-initiated prefetch, cache replacement policy.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Location Based Services (LBS) have transformed the way mobile applications access and manage the Mobile Database System (MDS). Caching frequently accessed data in the mobile database environment is an effective technique to improve MDS performance. The cache size limitation calls for an optimized cache replacement algorithm to find a suitable subset of items for eviction from the cache. In a wireless environment, mobile clients move freely from one location to another and their access pattern exhibits temporal-spatial locality. To ensure efficient cache utilization, it is important to consider the movement direction, current and future location, cache invalidation and optimized prefetching for mobile clients when performing cache replacement. This paper proposes a Neural Network based Mobility aware Prefetch Caching and Replacement policy (NNMPCR) in a mobile environment to manage LBS data. The NNMPCR policy employs a neural network prediction system that is able to capture some of the spatial patterns exhibited by users moving in a wireless environment, and uses it to predict the future behavior of the mobile client. A cache-miss-initiated prefetch is used to reduce future misses, and a valid-scope invalidation technique is used for cache invalidation. This makes the policy adaptive to clients’ movement behavior and optimizes performance compared to earlier policies.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_21-Neural_Network_based_Mobility_aware_Prefetch.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Current Femto-Satellite Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040520</link>
        <id>10.14569/IJACSA.2013.040520</id>
        <doi>10.14569/IJACSA.2013.040520</doi>
        <lastModDate>2013-07-01T08:36:27.8000000+00:00</lastModDate>
        
        <creator>Nizar Tahri</creator>
        
        <creator>Chafaa Hamrouni</creator>
        
        <creator>Adel M.Alimi</creator>
        
        <subject>Femtosatellite; Communication; Spacecraft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>The success of space technology evolves with technological progress in the density of CMOS (Complementary Metal-Oxide-Semiconductor) integration and MEMS (Micro-Electro-Mechanical Systems) [4]. The need for spatial services has grown significantly due to several factors such as population increase, telecommunication applications, climate change, earth control and observation, military goals, and so on. To cover this need, generations of spatial vehicles, specific calculators and Femto-cell systems have been developed. More recently, Ultra-Small Satellites (USS) have been proposed, and different approaches to developing these kinds of spatial systems have been presented in the literature. This miniature satellite is capable of flying in space and providing different services such as imagery, measurements and communications [4, 9, 10]. This paper studies two different USS Femto-satellite architectures that exist in the literature in order to propose a future architecture that can optimize power supply consumption and improve communication service quality.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_20-Study_of_Current_Femto-Satellite_Approches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Revisit of Logistic Regression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040519</link>
        <id>10.14569/IJACSA.2013.040519</id>
        <doi>10.14569/IJACSA.2013.040519</doi>
        <lastModDate>2013-07-01T08:36:25.1970000+00:00</lastModDate>
        
        <creator>Takumi Kobayashi</creator>
        
        <creator>Kenji Watanabe</creator>
        
        <creator>Nobuyuki Otsu</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Logistic regression (LR) is widely applied as a powerful classification method in various fields, and a variety of optimization methods have been developed. To cope with large-scale problems, an efficient optimization method for LR is required in terms of computational cost and memory usage. In this paper, we propose an efficient optimization method using non-linear conjugate gradient (CG) descent. In each CG iteration, the proposed method employs the optimized step size without exhaustive line search, which significantly reduces the number of iterations, making the whole optimization process fast. In addition, on the basis of such a CG-based optimization scheme, a novel optimization method for kernel logistic regression (KLR) is proposed. Unlike ordinary KLR methods, the proposed method optimizes the kernel-based classifier, which is naturally formulated as the linear combination of sample kernel functions, directly in the reproducing kernel Hilbert space (RKHS), not the linear coefficients. Subsequently, we also propose the multiple-kernel logistic regression (MKLR) along with the optimization of KLR. The MKLR effectively combines multiple types of kernels while optimizing the weights for the kernels in the framework of logistic regression. These proposed methods are all based on CG-based optimization and matrix-matrix computation, which is easily parallelized, such as by using multi-thread programming. In experimental results on multi-class classifications using various datasets, the proposed methods exhibit favorable performance in terms of classification accuracy and computation time.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_19-Revisit_of_Logistic_Regression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Medical Images Sharing over Cloud Computing environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040518</link>
        <id>10.14569/IJACSA.2013.040518</id>
        <doi>10.14569/IJACSA.2013.040518</doi>
        <lastModDate>2013-07-01T08:36:23.3870000+00:00</lastModDate>
        
        <creator>Fatma E.-Z. A. Elgamal</creator>
        
        <creator>Noha A. Hikal</creator>
        
        <creator>F.E.Z. Abou-Chadi</creator>
        
        <subject>Cloud computing; Electronic Patients&#39; Records; Cloud drops; encryption; spatial synchronization; authentication; Hybrid image watermarking; spatial watermarking; Discrete cosine Transform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Nowadays, many applications have appeared due to the rapid development of telecommunication. One of these applications is telemedicine, where patients&#39; digital data can be transferred between doctors for further diagnosis. Therefore, the protection of the exchanged medical data is essential, especially when transferring these data over an insecure medium such as the cloud computing environment, where security is considered a major issue. In this paper, two security approaches are presented to guarantee secure sharing of medical images over the cloud computing environment by providing a means of trust management between the authorized parties and by allowing private sharing of the Electronic Patients&#39; Records string data between those parties while preserving the shared medical image from distortion. The first approach applies a spatial watermarking technique, while the second implements hybrid spatial and transform techniques to achieve the needed goal. The experimental results show the efficiency of the proposed approaches and their robustness against various types of attacks.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_18-Secure_Medical_Images_Sharing_over_Cloud_Computing_environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Research on Chinese University Students’ Media Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040517</link>
        <id>10.14569/IJACSA.2013.040517</id>
        <doi>10.14569/IJACSA.2013.040517</doi>
        <lastModDate>2013-07-01T08:36:21.0300000+00:00</lastModDate>
        
        <creator>Chengliang Zhang</creator>
        
        <creator>Haifei Yu</creator>
        
        <subject>University students’ media image; Content analytic method; the public opinion; Synergistic effect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>At present, university students, as the &quot;after 90&quot; generation of young intellectuals, are receiving widespread attention from the mass media. Nevertheless, university students’ public images are in decline as negative news about them appears ceaselessly. Contemporary university students are becoming a group that is gazed at fixedly by the media, which keeps watching them and thereby builds university students’ media images. However, this kind of media behavior affects public judgments of university students’ images; furthermore, in the eyes of the public, university students’ images have become seriously distorted.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_17-Research_on_Chinese_University_Students_Media_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LASyM: A Learning Analytics System for MOOCs </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040516</link>
        <id>10.14569/IJACSA.2013.040516</id>
        <doi>10.14569/IJACSA.2013.040516</doi>
        <lastModDate>2013-07-01T08:36:19.2200000+00:00</lastModDate>
        
        <creator>Yassine Tabaa</creator>
        
        <creator>Abdellatif Medouri</creator>
        
        <subject>Cloud Computing; MOOCs; Hadoop; Learning Analytics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Nowadays, the Web has revolutionized our vision of how to deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and aspirations, such as MOOCs (Massive Open Online Courses), a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices that lead to student success, massive open online courses, considered the “linux of education”, are increasingly developed by elite US institutions such as MIT, Harvard and Stanford, which supply open/distance learning to large online communities without any fees; MOOCs thus have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often raised about MOOCs is that only a very small proportion of learners complete a course while thousands enrol. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is Hadoop-based, and its main objective is to provide Learning Analytics for MOOC communities as a means to help them investigate the massive raw data generated by MOOC platforms around learning outcomes and assessments, and to reveal any useful information for designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system, we developed a method to identify, with low latency, online learners more likely to drop out.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_16-LASyM_A_Learning_Analytics_System_for_MOOCs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QOS Comparison of BNP Scheduling Algorithms with Expanded Fuzzy System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040515</link>
        <id>10.14569/IJACSA.2013.040515</id>
        <doi>10.14569/IJACSA.2013.040515</doi>
        <lastModDate>2013-07-01T08:36:17.4100000+00:00</lastModDate>
        
        <creator>Amita Sharma</creator>
        
        <creator>Harpreet Kaur</creator>
        
        <subject>Parallel Processing; DAG; BNP; Fuzzy logic; Multiprocessor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Parallel processing is a field in which different systems run together to save processing time and to increase system performance; it also relates closely to the load balancing concept. Previous algorithms such as HLFET, MCP, DLS and ETF have shown that they can reduce the burden on the processor by working as a simultaneous working system. In our research work, we have combined HLFET, MCP, DLS and ETF with fuzzy logic to examine the effect on the parameters taken from previous work, namely Makespan, SLR, Speedup and Processor Utilization. It has been found that the fuzzy logic system works better than a single algorithm.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_15-QOS_Comparison_of BNP_Scheduling_Algorithms_with_Expanded_Fuzzy_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An efficient user scheduling scheme for downlink Multiuser MIMO-OFDM systems with Block Diagonalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040514</link>
        <id>10.14569/IJACSA.2013.040514</id>
        <doi>10.14569/IJACSA.2013.040514</doi>
        <lastModDate>2013-07-01T08:36:15.6170000+00:00</lastModDate>
        
        <creator>Mounir Esslaoui</creator>
        
        <creator>Mohamed Essaaidi</creator>
        
        <subject>MU-MIMO; OFDM; scheduling; precoding; Block Diagonalization;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>The combination of multiuser multiple-input multiple-output (MU-MIMO) technology with orthogonal frequency division multiplexing (OFDM) is an attractive solution for next generation of wireless local area networks (WLANs), currently standardized within IEEE 802.11ac, and the fourth-generation (4G) mobile cellular wireless systems to achieve a very high system throughput while satisfying quality of service (QoS) constraints. In particular, Block Diagonalization (BD) scheme is a low-complexity precoding technique for MU-MIMO downlink channels, which completely pre-cancels the multiuser interference. The major issue of the BD scheme is that the number of users that can be simultaneously supported is limited by the ratio of the number of base station transmit antennas to the number of user receive antennas. When the number of users is large, a subset of users must be selected, and selection algorithms should be designed to maximize the total system throughput. In this paper, the BD technique is extended to MU-MIMO-OFDM systems and a low complexity user scheduling algorithm is proposed to find the optimal subset of users that should transmit simultaneously, in light of the instantaneous channel state information (CSI), such that the total system sum-rate capacity is maximized. Simulation results show that the proposed scheduling algorithm achieves a good trade-off between sum-rate capacity performance and computational complexity.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_14-An_efficient_user_scheduling_scheme_for_downlink_Multiuser_MIMO-OFDM_systems_with_Block_Diagonalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Jabber-based Cross-Domain Efficient and Privacy-Ensuring Context Management Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040513</link>
        <id>10.14569/IJACSA.2013.040513</id>
        <doi>10.14569/IJACSA.2013.040513</doi>
        <lastModDate>2013-07-01T08:36:13.8070000+00:00</lastModDate>
        
        <creator>Zakwan Jaroucheh</creator>
        
        <creator>Xiaodong Liu</creator>
        
        <creator>Sally Smith</creator>
        
        <subject>pervasive computing; cross-domain context management; context modeling; Jabber protocol; privacy. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>In pervasive environments, context-aware applications require global knowledge of the context information distributed across different spatial domains in order to establish context-based interactions. Therefore, the design of distributed storage, retrieval, and dissemination mechanisms for context information across domains becomes vital. In such environments, we envision the necessity of collaboration between context servers distributed in different domains, and thus the need for generic APIs and a protocol allowing context information exchange between different entities: context servers, context providers, and context consumers. As a solution, this paper proposes ubique, a distributed middleware for context-aware computing that allows applications to maintain domain-based context interests to access context information about users, places, events, and things - all made available by or brokered through the home domain server. This paper also proposes a new cross-domain protocol for context management which ensures the privacy and the efficiency of context information dissemination. It has been robustly built upon the Jabber protocol, a widely adopted open protocol for instant messaging designed for near real-time communication. Simulation and experimentation results show that the ubique framework supports robust cross-domain context management and collaboration.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_13-Jabber-based_Cross_Domain_Efficient_and_Privacy_Ensuring_Context_Management_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified clustering for LEACH algorithm in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040512</link>
        <id>10.14569/IJACSA.2013.040512</id>
        <doi>10.14569/IJACSA.2013.040512</doi>
        <lastModDate>2013-07-01T08:36:12.0130000+00:00</lastModDate>
        
        <creator>B. Brahma Reddy</creator>
        
        <creator>K.Kishan Rao</creator>
        
        <subject>Clustering Index; LEACH; Wireless Sensor Networks; Energy optimization; Network lifetime</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Node clustering and data aggregation are popular techniques to reduce energy consumption in large Wireless Sensor Networks (WSN). Cluster-based routing is a perennially active research area in wireless sensor networks. The classical LEACH protocol has many advantages in energy efficiency, data aggregation and so on. However, determining the number of clusters present in a network is an important problem. Conventional clustering techniques generally assume this parameter to be user supplied. Very few techniques can solve the problem of automatically detecting the number of clusters satisfactorily; some rely on user-supplied information, while others use cluster validity indices. In this paper, we propose a rather simple method to identify the number of clusters that gives satisfactory results. The proposed method is compared with the classical LEACH protocol and found to give better results.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_12-A_Modified_clustering_for_LEACH_algorithm_in_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Computational Model of Extrastriate Visual Area MT on Motion Perception</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040511</link>
        <id>10.14569/IJACSA.2013.040511</id>
        <doi>10.14569/IJACSA.2013.040511</doi>
        <lastModDate>2013-07-01T08:36:10.2030000+00:00</lastModDate>
        
        <creator>Jiawei Xu</creator>
        
        <creator>Shigang Yue</creator>
        
        <subject>Motion perception; daytime and nocturnal scenes; spatio-temporal phase </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>The human vision system is sensitive to motion perception under complex scenes. Building motion attention models similar to the human visual attention system should be very beneficial to computer vision and machine intelligence; meanwhile, it has been a challenging task due to the complexity of the human brain and limited understanding of the mechanisms underlying the human vision system. This paper computationally models the motion perception mechanisms in the human extrastriate visual middle temporal area (MT), an area that is particularly sensitive to motion. The model can explain the attention selection mechanism and visual motion perception to some extent. With the proposed model, we analyze motion perception in daytime scenes with single or multiple moving objects, and we then mimic the visual attention process, consisting of attention shifts and eye fixations, against a motion-feature map. The model produced similar gist perception outputs in our experiments when daytime images and nocturnal images from the same scene were processed. Finally, we outline the future direction of this research.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_11-A_Computational_Model_of_Extrastriate_Visual_Area_MT_on_Motion_Perception.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DCaaS: Data Consistency as a Service for Managing Data Uncertainty on the Clouds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040510</link>
        <id>10.14569/IJACSA.2013.040510</id>
        <doi>10.14569/IJACSA.2013.040510</doi>
        <lastModDate>2013-07-01T08:36:08.3930000+00:00</lastModDate>
        
        <creator>Islam Elgedawy</creator>
        
        <subject>clouds; cloudlet; cloud adapter; data uncertainty; DCaaS; SaaS; PaaS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Ensuring data correctness over partitioned distributed database systems is a classical problem. Classical solutions proposed to solve this problem mainly adopt locking or blocking techniques. These techniques are not suitable for cloud environments as they produce terrible response times due to the long latency and faultiness of wide area network connections among cloud datacenters. One way to improve performance is to restrict the access of user bases to specific datacenters and avoid data sharing between datacenters. However, conflicts might appear when data is replicated between datacenters; moreover, change propagation timeliness is not guaranteed. Such problems create data uncertainty in cloud environments. Managing data uncertainty is one of the main obstacles to supporting global distributed transactions on the clouds. To overcome this problem, this paper proposes a quota-based approach for managing data uncertainty on the clouds that guarantees global data correctness without global locking or blocking. To decouple service developers from the hassles of managing data uncertainty, we propose a new platform service, Data Consistency as a Service (DCaaS), to encapsulate the proposed approach. The DCaaS service also ensures the cloud portability of SaaS services, as it works as a cloud adapter between SaaS service instances. Experiments show that the proposed approach, realized by the DCaaS service, provides much better response times when compared with classical locking and blocking techniques.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_10-DCaaS_Data_Consistency_as_a_Service_for_Managing_Data_Uncertainty_on_the_Clouds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of multi regressive linear model and neural network for wear prediction of grinding mill liners</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040509</link>
        <id>10.14569/IJACSA.2013.040509</id>
        <doi>10.14569/IJACSA.2013.040509</doi>
        <lastModDate>2013-07-01T08:36:06.6000000+00:00</lastModDate>
        
        <creator>Farzaneh Ahmadzadeh</creator>
        
        <creator>Jan.Lundberg</creator>
        
        <subject>Wear prediction; Remaining useful life; Artificial neural network; Principal Component Analysis; Maintenance scheduling ; Condition Monitoring.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>The liner of an ore grinding mill is a critical component in the grinding process, necessary for both high metal recovery and shell protection. From an economic point of view, it is important to keep mill liners in operation as long as possible, minimising the downtime for maintenance or repair. Therefore, predicting their wear is crucial. This paper tests different methods of predicting wear in the context of remaining height and remaining life of the liners. The key concern is to make decisions on replacement and maintenance without stopping the mill for extra inspection as this leads to financial savings. The paper applies linear multiple regression and artificial neural networks (ANN) techniques to determine the most suitable methodology for predicting wear. The advantages of the ANN model over the traditional approach of multiple regression analysis include its high accuracy.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_9-Application_of_multi_regressive_linear_model_and_neural_network_for_wear_prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Low Cost Cloud Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040508</link>
        <id>10.14569/IJACSA.2013.040508</id>
        <doi>10.14569/IJACSA.2013.040508</doi>
        <lastModDate>2013-07-01T08:36:04.7730000+00:00</lastModDate>
        
        <creator>Carlos Antunes</creator>
        
        <creator>Ricardo Vardasca</creator>
        
        <subject>cloud computing; IAAS; jail environments; optimization; PAAS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Current models of cloud computing are based on oversized, expensive hardware solutions, making their implementation and maintenance unaffordable for the majority of service providers. The use of jail services is an alternative to current models of cloud computing based on virtualization. Models based on the utilization of jail environments instead of virtualization systems will provide large gains in the optimization of hardware resources at the computation level, as well as in storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, which allows the identification of areas where their application will be relevant and will make inevitable the redefinition of the models currently defined for cloud computing. In addition, it will bring new opportunities in the development of support features for jail environments in the majority of operating systems.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_8-Building_Low_Cost_Cloud_Computing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Compression Using Real Fourier Transform, Its Wavelet Transform And Hybrid Wavelet With DCT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040507</link>
        <id>10.14569/IJACSA.2013.040507</id>
        <doi>10.14569/IJACSA.2013.040507</doi>
        <lastModDate>2013-07-01T08:36:02.9630000+00:00</lastModDate>
        
        <creator>Dr. H. B. Kekre</creator>
        
        <creator>Dr. Tanuja Sarode</creator>
        
        <creator>Prachi Natu</creator>
        
        <subject>Real Fourier Transform; Hybrid Wavelet Transform; DCT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>This paper proposes a new image compression technique that uses the Real Fourier Transform. The Discrete Fourier Transform (DFT) contains complex exponentials, i.e., both cosine and sine functions, and therefore gives complex values in its output. To avoid these complex values, the complex terms in the Fourier Transform are eliminated. This can be done by using the coefficients of the Discrete Cosine Transform (DCT) and the Discrete Sine Transform (DST). Both DCT and DST are orthogonal even after sampling, and both are equivalent to the FFT of a data sequence of twice the length. DCT uses real and even functions, while DST uses real and odd functions, which are equivalent to the imaginary part of the Fourier Transform. Since the coefficients of both DCT and DST contain only real values, the Fourier Transform obtained using DCT and DST coefficients also contains only real values. This transform, called the Real Fourier Transform, is applied to colour images. RMSE values are computed for the column, row and full Real Fourier Transform. A wavelet transform of size N^2xN^2 is generated using the NxN Real Fourier Transform. A Hybrid Wavelet Transform is also generated by combining the Real Fourier Transform with the Discrete Cosine Transform. The performance of these three transforms is compared using RMSE as a performance measure. It has been observed that the full hybrid wavelet transform obtained by combining the Real Fourier Transform and DCT gives the best performance of all. It is compared with, and beats the performance of, the full DCT Wavelet Transform. The reconstructed image quality obtained with the Real Fourier-DCT full Hybrid Wavelet Transform is superior to that obtained with the DCT, DCT Wavelet and DCT Hybrid Wavelet Transforms.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_7-Image_Compression_Using_Real_Fourier_Transform_Its_Wavelet_Transform_And_Hybrid_Wavelet_With_DCT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Privacy Impacts of Data Encryption on the Efficiency of Digital Forensics Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040506</link>
        <id>10.14569/IJACSA.2013.040506</id>
        <doi>10.14569/IJACSA.2013.040506</doi>
        <lastModDate>2013-07-01T08:36:01.1530000+00:00</lastModDate>
        
        <creator>Adedayo M. Balogun</creator>
        
        <creator>Shao Ying Zhu</creator>
        
        <subject>Encryption; Information Security; Digital Forensics; Anti-Forensics; Cryptography; TrueCrypt</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Owing to a number of reasons, the deployment of encryption solutions is becoming ubiquitous at both organizational and individual levels. The most emphasized reason is the necessity to ensure the confidentiality of privileged information. Unfortunately, encryption is also popular as cyber-criminals&#39; escape route from the grasp of digital forensic investigations. The direct encryption of data or indirect encryption of storage devices, more often than not, prevents access to the information contained therein. This consequently leaves the forensics investigation team, and subsequently the prosecution, little or no evidence to work with, in sixty percent of such cases. However, it is unthinkable to jeopardize the successes brought by encryption technology to information security in favour of digital forensics technology. This paper examines what data encryption contributes to information security, and then highlights its impact on the digital forensics of disk drives. The paper also discusses the ways and tools available in digital forensics to work around the problems posed by encryption. Particular attention is paid to the TrueCrypt encryption solution to illustrate the ideas being discussed. It then compares encryption&#39;s contributions in both realms, to justify the need for new technologies that forensically defeat data encryption as the only solution, whilst maintaining the privacy goal of users.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_6-Privacy_Impacts_of_Data_Encryption_on_the_Efficiency_of_Digital_Forensics_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative study of Authorship Identification Techniques for Cyber Forensics Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040505</link>
        <id>10.14569/IJACSA.2013.040505</id>
        <doi>10.14569/IJACSA.2013.040505</doi>
        <lastModDate>2013-07-01T08:35:59.3430000+00:00</lastModDate>
        
        <creator>Smita Nirkhi</creator>
        
        <creator>Dr.R.V.Dharaskar</creator>
        
        <subject>cyber crime; Author Identification; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Authorship identification techniques are used to identify the most likely author of online messages from a group of potential suspects and to find evidence to support the conclusion. Cybercriminals misuse online communication for sending blackmail or spam email and then attempt to hide their true identities to avoid detection. Authorship identification of online messages is a contemporary research issue for identity tracing in cyber forensics. It is a highly interdisciplinary area, as it takes advantage of machine learning, information retrieval, and natural language processing. In this paper, a study of recent techniques and automated approaches to attributing authorship of online messages is presented. The focus of this review is to summarize the existing authorship identification techniques used in the literature to identify the authors of online messages. It also discusses evaluation criteria and parameters for authorship attribution studies and lists open questions that will attract future work in this area.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_5-Comparative_study_of_Authorship_Identification_Techniques_for_Cyber_Forensics_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Flow Sequences: A Revision of Data Flow Diagrams for Modelling Applications using XML</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040504</link>
        <id>10.14569/IJACSA.2013.040504</id>
        <doi>10.14569/IJACSA.2013.040504</doi>
        <lastModDate>2013-07-01T08:35:56.6770000+00:00</lastModDate>
        
        <creator>James PH Coleman</creator>
        
        <subject>Data Flow Diagrams; Modelling diagrams; XML; Data Flow Sequence Diagrams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Data Flow Diagrams (DFDs) were developed in the 1970s as a method of modelling data flow when developing information systems. While DFDs are still being used, modern web-based applications, which are client-server based, mean that DFDs are no longer as useful. This paper proposes a modified form of DFD that incorporates, amongst other features, sequences. The proposed system, called Data Flow Sequences (DFS), is better able to model real-world systems in a way that simplifies application development. The paper also proposes an XML implementation for DFS which allows analytical tools to be used to analyse DFS diagrams. The paper discusses a tool that is able to detect orphan data flow sequences and other potential problems.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_4-Data_Flow_Sequences_A_Revision_of_Data_Flow_Diagrams_for_Modelling_Applications_using_XML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine Particulate Matter Concentration Level Prediction by using Tree-based Ensemble Classification Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040503</link>
        <id>10.14569/IJACSA.2013.040503</id>
        <doi>10.14569/IJACSA.2013.040503</doi>
        <lastModDate>2013-07-01T08:35:54.8670000+00:00</lastModDate>
        
        <creator>Yin Zhao</creator>
        
        <creator>Yahya Abu Hasan</creator>
        
        <subject>Random Forest; C5.0; PM2.5 prediction; data mining.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Pollutant forecasting is an important problem in the environmental sciences. Data mining is an approach to discovering knowledge from large data sets. This paper uses data mining methods to forecast the concentration level of PM2.5, an important air pollutant. There are several tree-based classification algorithms available in data mining, such as CART, C4.5, Random Forest (RF) and C5.0. RF and C5.0 are popular ensemble methods: RF builds on CART with bagging, while C5.0 builds on C4.5 with boosting. This paper builds PM2.5 concentration level predictive models based on RF and C5.0 using R packages. The data set covers the period 2000-2011 in a new town of Hong Kong. The PM2.5 concentration is divided into two levels, with a critical point of 25&#181;g/m^3 (24-hour mean). According to 100 runs of 10-fold cross validation, the best testing accuracy is obtained with the RF model, at around 0.845~0.854.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_3-Fine_Particulate_Matter_Concentration_Level_Prediction_by_using_Tree-based_Ensemble_Classification_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Personnel Vetting Techniques in Critical Multi-Tennant Hosted Computing Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040502</link>
        <id>10.14569/IJACSA.2013.040502</id>
        <doi>10.14569/IJACSA.2013.040502</doi>
        <lastModDate>2013-07-01T08:35:53.0430000+00:00</lastModDate>
        
        <creator>Farhan Hyder Sahito</creator>
        
        <creator>Wolfgang Slany</creator>
        
        <subject>Cloud Computing; Human Threats; Multi Layered Security Strategy; Employee Screening; fMRI.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>The emergence of cloud computing presents a strategic direction for critical infrastructures and promises to have far-reaching effects on their systems and networks, delivering better outcomes to nations at a lower cost. However, when considering cloud computing, government entities must address a host of security issues (such as malicious insiders) beyond those of service cost and flexibility. The scope and objective of this paper is to analyze, evaluate and investigate the insider threat to cloud security in sensitive infrastructures, and to propose two proactive socio-technical solutions for securing commercial and governmental cloud infrastructures. Firstly, it proposes an actionable framework, techniques and practices to ensure that disruptions caused by human threats are infrequent, of minimal duration, manageable, and cause the least damage possible. Secondly, it aims for extreme security measures by analyzing and evaluating human-threat assessment methods for employee screening in certain high-risk situations using cognitive analysis technology, in particular functional Magnetic Resonance Imaging (fMRI). The significance of this research also lies in countering human rights and ethical dilemmas by presenting a set of ethical and professional guidelines. The main objective of this work is to analyze the related risks, identify countermeasures and present recommendations for developing a security awareness culture that will allow cloud providers to effectively utilize the benefits of these advanced techniques without sacrificing system security.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_2-Advanced_Personnel_Vetting_Techniques_in_Critical_Multi-Tennant_Hosted_Computing_Environments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Rough Rule Based System Enhanced By Fuzzy Cellular Automata</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040501</link>
        <id>10.14569/IJACSA.2013.040501</id>
        <doi>10.14569/IJACSA.2013.040501</doi>
        <lastModDate>2013-07-01T08:35:51.1870000+00:00</lastModDate>
        
        <creator>Mona Gamal</creator>
        
        <creator>Ahmed Abou El-Fetouh</creator>
        
        <creator>Shereef Barakat</creator>
        
        <subject>fuzzy rough reduction; fuzzy rough rules; fuzzy cellular automata; Self Organized Feature Maps (SOFM).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(5), 2013</description>
        <description>Handling uncertain knowledge is a very tricky problem in the current world, as the data we deal with is uncertain, incomplete and even inconsistent. Finding an efficient intelligent framework for this kind of knowledge is a challenging task. A knowledge-based framework can be represented by a rule based system that depends on a set of rules which deal with uncertainty in the data. Fuzzy rough rules are a good competitor for dealing with uncertain cases. They consist of fuzzy rough variables in both the propositions and the consequences. The fuzzy rough variables represent the lower and upper approximations of the subsets of a fuzzy variable. These fuzzy variables use labels (fuzzy subsets) instead of values. An efficient fuzzy rough rule based system must depend on good and accurate rules, and such a system needs to be enhanced to view future recommendations, or in other words the system in time sequence. This paper builds a rule based system for uncertain knowledge using fuzzy rough theory to generate the desired accurate rules, and then uses a fuzzy cellular automata parallel system to enhance the developed rule based system and find out what the system would look like in time sequence, so as to give good recommendations about the system in the future. The proposed model is illustrated along with experimental results and simulations of the rule based systems of different data sets in time sequence.</description>
        <description>http://thesai.org/Downloads/Volume4No5/Paper_1-A_Fuzzy_Rough_Rule_Based_System_Enhanced_By_Fuzzy_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image and Video based double watermark extraction spread spectrum watermarking in low variance region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040615</link>
        <id>10.14569/IJACSA.2013.040615</id>
        <doi>10.14569/IJACSA.2013.040615</doi>
        <lastModDate>2013-06-29T14:55:36.6570000+00:00</lastModDate>
        
        <creator>Mriganka Gogoi</creator>
        
        <creator>Koushik Mahanta</creator>
        
        <creator>H.M.Khalid Raihan Bhuyan</creator>
        
        <creator>Dibya Jyoti Das</creator>
        
        <creator>Ankita Dutta</creator>
        
        <subject>Watermark; Gold Code; Variance; Correlation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Digital watermarking plays a very important role in copyright protection. It is one of the techniques used for safeguarding the origins of images, audio and video by protecting them against piracy. This paper proposes a low variance based spread spectrum watermarking scheme for image and video in which the watermark is obtained twice at the receiver. The watermark to be added is a binary image of comparatively smaller size than the cover image. The cover image is divided into a number of 8x8 blocks and transformed into the frequency domain using the Discrete Cosine Transform. A gold sequence is added as well as subtracted in each block for each watermark bit. In most cases, researchers have used algorithms for extracting a single watermark, and finding the location of a watermark bit distorted by attacks is one of the most challenging tasks. In this paper, however, the same watermark is embedded as well as extracted twice with a gold code without much distortion of the image, and comparing these two watermarks helps in finding the distorted bit. Another feature is that, as this algorithm is based on embedding the watermark in a low variance region, proper extraction of the watermark is obtained at a smaller modulating factor. The proposed algorithm is very useful in applications like real-time broadcasting, image and video authentication, and secure camera systems. The experimental results show that the watermarking technique is robust against various attacks.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_15-Image_and_Video_based_double_watermark_extraction_spread_spectrum_watermarking_in_low_variance_region.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Algorithm to Represent Texture Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040614</link>
        <id>10.14569/IJACSA.2013.040614</id>
        <doi>10.14569/IJACSA.2013.040614</doi>
        <lastModDate>2013-06-29T14:55:34.3330000+00:00</lastModDate>
        
        <creator>Silvia Mar&#237;a Ojeda</creator>
        
        <creator>Grisel Maribel Britos</creator>
        
        <subject>Autoregressive Models; Texture Images; Similarity Measures.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>In recent times, spatial autoregressive models have been extensively used to represent images. In this paper we propose an algorithm to represent and reproduce texture images based on the estimation of spatial autoregressive processes. The image intensity is locally modeled by a first spatial autoregressive model with support in a strongly causal prediction region on the plane. A basic criterion to quantify the similarity between two images is used to locally select this region among four different possibilities, corresponding to the four strongly causal regions on the plane. Two global image similarity measures are used to evaluate the performance of our proposal.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_14-A_New_Algorithm_to_Represent_Texture_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Micro Sourcing Strategic Framework for Low Income Group</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040613</link>
        <id>10.14569/IJACSA.2013.040613</id>
        <doi>10.14569/IJACSA.2013.040613</doi>
        <lastModDate>2013-06-29T14:55:31.7130000+00:00</lastModDate>
        
        <creator>Noor Habibah Arshad</creator>
        
        <creator>Siti Salwa Salleh</creator>
        
        <creator>Syaripah Ruzaini Syed Aris</creator>
        
        <creator>Norjansalika Janom</creator>
        
        <creator>Norazam Mastuki</creator>
        
        <subject>Capability building; expedite growth; harnessing demand; platform capacity; strategic thrusts</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>The role of ICTs among poor people and communities has increased tremendously. One of the ICT industries, the micro sourcing industry, has been identified as a potential industry to help increase income for the poor in Malaysia. Micro sourcing is an effective way to accomplish tedious tasks at a faster rate. It involves large projects that are broken down into micro tasks. These micro tasks are well-defined and then distributed to a group of workers. The objective of this study is to develop a strategic framework for micro sourcing to generate income for the low income group. Four methods were used to gather information for this study: documentation and literature reviews, focus group meetings, workshops, and interviews. Based on the analysis of the current scenario of the local micro sourcing industry, the strategic framework was developed around five identified Strategic Thrusts: harnessing the demand side (job providers) of the domestic and international markets; platform capacity and capability building; leveraging and utilising existing infrastructure; uplifting and enhancing the capability of the supply side (micro workers); and instruments to expedite the growth of the local micro sourcing industry. The Strategic Framework is intended to provide strategic direction at the national level to all stakeholders; to highlight key areas that need to be addressed in order to grow a sustainable micro sourcing industry in the country; and to serve as a guideline in the implementation of programs and plans related to micro sourcing industry development.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_13-Micro_Sourcing_Strategic_Framework_for_Low_Income_Group.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Proposed Multi-Modal Palm Veins-Face Biometric Authentication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040612</link>
        <id>10.14569/IJACSA.2013.040612</id>
        <doi>10.14569/IJACSA.2013.040612</doi>
        <lastModDate>2013-06-29T14:55:29.9030000+00:00</lastModDate>
        
        <creator>S.F. Bahgat</creator>
        
        <creator>S. Ghoniemy</creator>
        
        <creator>M. Alotaibi</creator>
        
        <subject>Biometric authentication; Face Recognition; Feature Fusion;  Palm veins; Statistical features.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Biometric authentication technology identifies people by their unique biological information. An account holder’s body characteristics or behaviors are registered in a database and then compared with those of anyone who tries to access that account, to see if the attempt is legitimate. Since veins are internal to the human body, their information is hard to duplicate. Compared with a finger or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. However, a single biometric is not sufficient to meet the variety of requirements, including matching performance, imposed by several large-scale authentication systems. Multi-modal biometric systems seek to alleviate some of the drawbacks encountered by uni-modal biometric systems by consolidating the evidence presented by multiple biometric traits/sources. This paper proposes a multi-modal authentication technique based on palm veins as a personal identifying factor, augmented by face features to increase the accuracy of security recognition. The obtained results point to an increased authentication accuracy.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_12-Proposed_Multi-Modal_Palm_Veins-Face_Biometric_Authentication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating a Domain Specific Inspection Evaluation Method through an Adaptive Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040611</link>
        <id>10.14569/IJACSA.2013.040611</id>
        <doi>10.14569/IJACSA.2013.040611</doi>
        <lastModDate>2013-06-29T14:55:28.0770000+00:00</lastModDate>
        
        <creator>Roobaea AlRoobaea</creator>
        
        <creator>Ali H. Al-Badi</creator>
        
        <creator>Pam J. Mayhew</creator>
        
        <subject>Heuristic Evaluation (HE); User Testing (UT); Domain Specific Inspection (DSI); adaptive framework; social networks domain.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>The electronic information revolution and the use of computers as an essential part of everyday life are now more widespread than ever before, as the Internet is exploited for the speedy transfer of data and business. Social networking sites (SNSs), such as LinkedIn, Ecademy and Google+, are growing in use worldwide, and they present popular business channels on the Internet. However, they need to be continuously evaluated and monitored to measure their levels of efficiency, effectiveness and user satisfaction, and ultimately to improve quality. Nearly all previous studies have used the Heuristic Evaluation (HE) and User Testing (UT) methodologies, which have become the accepted methods for the usability evaluation of User Interface Design (UID); however, the former is general and unlikely to encompass all usability attributes for all website domains, while the latter is expensive, time consuming and misses consistency problems. To address this need, a new evaluation method is developed using traditional evaluations (HE and UT) in novel ways. The lack of an adaptive methodological framework that can be used to generate a domain-specific evaluation method, which can then be used to improve the usability assessment process for a product in any chosen domain, represents a missing area in usability testing. This paper proposes an adaptive framework that is readily capable of adaptation to any domain, and then evaluates it by generating an evaluation method for assessing and improving the usability of products in a particular domain. The evaluation method is called Domain Specific Inspection (DSI), and it is empirically, analytically and statistically tested by applying it to three websites in the social networks domain. Our experiments show that the adaptive framework is able to build a formative and summative evaluation method that provides optimal results with regard to our newly identified set of comprehensive usability problem areas, as well as relevant usability evaluation method (UEM) metrics, with minimum input in terms of the cost and time usually spent on employing traditional usability evaluation methods (UEMs).</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_11-Generating_a_Domain_Specific_Inspection_Evaluation_Method_through_an_Adaptive_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition as an Authentication Technique in Electronic Voting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040610</link>
        <id>10.14569/IJACSA.2013.040610</id>
        <doi>10.14569/IJACSA.2013.040610</doi>
        <lastModDate>2013-06-29T14:55:25.5670000+00:00</lastModDate>
        
        <creator>Noha E. El-Sayad</creator>
        
        <creator>Rabab Farouk Abdel-Kader</creator>
        
        <creator>Mahmoud Ibraheem Marie</creator>
        
        <subject>Electronic Voting; Face Recognition; Gabor Filter; Eigenface.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>In this research, a Face Detection and Recognition (FDR) system used as an authentication technique in online voting, one of the types of electronic voting, is proposed. Web based voting allows the voter to vote from any place, in state or out of state. The voter’s image is captured and passed to a face detection algorithm (Eigenface or Gabor filter), which detects his face in the image and saves it as the first matching point. The voter’s national identification card number is used to retrieve his saved photo from the database of the Supreme Council elections (SCE); this photo is passed to the same detection algorithm (Eigenface or Gabor filter) to detect the face in it and save it as the second matching point. A matching algorithm then checks whether the two matching points are identical. If the two points match, the system checks whether this person has the right to vote; if he does, a voting form is presented to him. The results show that the proposed algorithm is capable of finding over 90% of the faces in the database and allows a voter to vote in approximately 58 seconds.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_10-Face_Recognition_as_an_Authentication_Technique_in_Electronic_Voting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of the capacity of Optical Network On Chip based on MIMO (Multiple Input Multiple Output) system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040609</link>
        <id>10.14569/IJACSA.2013.040609</id>
        <doi>10.14569/IJACSA.2013.040609</doi>
        <lastModDate>2013-06-29T14:55:23.7400000+00:00</lastModDate>
        
        <creator>S. Mhatli</creator>
        
        <creator>B.Nsiri</creator>
        
        <creator>R.Attia</creator>
        
        <subject>λ-ROUTER; MIMO CHANNELS; CAPACITY; CDMA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>When designing Optical Networks-On-Chip, designers establish communication between emitters (lasers) and receivers (photo-detectors) through a waveguide based mainly on optical routers called λ-routers. In this paper, we propose a new method based on the Multiple Input Multiple Output concept, give a model of the propagation channel, and then study the influence of different parameters on the design of Optical Networks-On-Chip.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_9-Study_of_the_capacity_of_Optical_Network_on_Chip.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Strategy for Training Set Selection in Text Classification Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040608</link>
        <id>10.14569/IJACSA.2013.040608</id>
        <doi>10.14569/IJACSA.2013.040608</doi>
        <lastModDate>2013-06-29T14:55:21.9300000+00:00</lastModDate>
        
        <creator>Maria Luiza C. Passini</creator>
        
        <creator>Katiusca B. Est&#233;banez</creator>
        
        <creator>Grazziela P. Figueredo</creator>
        
        <creator>Nelson F. F. Ebecken</creator>
        
        <subject>text mining; data reduction; classification problems; feature selection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>An issue in text classification problems involves the choice of good samples on which to train the classifier. Training sets that properly represent the characteristics of each class have a better chance of establishing a successful predictor. Moreover, sometimes data are redundant or take large amounts of computing time for the learning process. To overcome this issue, data selection techniques have been proposed, including instance selection. Some data mining techniques are based on nearest neighbors, ordered removals, random sampling, particle swarms or evolutionary methods. The weaknesses of these methods usually involve a lack of accuracy, lack of robustness when the amount of data increases, overfitting and a high complexity. This work proposes a new immune-inspired suppressive mechanism that involves selection. As a result, data that are not relevant for a classifier’s final model are eliminated from the training process. Experiments show the effectiveness of this method, and the results are compared to other techniques; these results show that the proposed method has the advantage of being accurate and robust for large data sets, with less complexity in the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_8-A_Strategy_for_Training_Set_Selection_in_Text_Classification_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Blocks Model for Improving Accuracy in Identification Systems of Wood Type</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040607</link>
        <id>10.14569/IJACSA.2013.040607</id>
        <doi>10.14569/IJACSA.2013.040607</doi>
        <lastModDate>2013-06-29T14:55:20.1070000+00:00</lastModDate>
        
        <creator>Gasim </creator>
        
        <creator>Kudang Boro Seminar</creator>
        
        <creator>Agus Harjoko</creator>
        
        <creator>Sri Hartati</creator>
        
        <subject>image processing; pattern recognition; ANN; wood identification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Image-based recognition systems commonly use features extracted from an image of the target object using texture analysis. However, the wood type recognition systems proposed and implemented to date have not achieved adequate accuracy, efficiency, and feasible execution speed with respect to practicality. This paper discusses a new image-based recognition method for wood type identification that divides the wood image into several blocks, each of which is processed using gray image and edge detection techniques. The wood feature analysis concentrates on three parameters: entropy, standard deviation, and correlation. Our experimental results showed that our method can increase recognition accuracy up to 95%, faster and better than the previous existing method with 85% recognition accuracy. Moreover, our method needs to analyze only three feature parameters, whereas the previous existing method needs to analyze seven, implying a simpler and faster recognition process.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_7-Image_Blocks_Model_for_Improving_Accuracy_in_Identification_Systems_of_Wood_Type.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition System Based on Different Artificial Neural Networks Models and Training Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040606</link>
        <id>10.14569/IJACSA.2013.040606</id>
        <doi>10.14569/IJACSA.2013.040606</doi>
        <lastModDate>2013-06-29T14:55:18.3130000+00:00</lastModDate>
        
        <creator>Omaima N. A. AL-Allaf</creator>
        
        <creator>Abdelfatah Aref Tamimi</creator>
        
        <creator>Mohammad A. Alia</creator>
        
        <subject>Face Recognition; Backpropagation Neural Network (BPNN); Feed Forward Neural Network; Cascade Forward; Function Fitting; Pattern Recognition</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Face recognition is one of the biometric methods that is used to identify any given face image using the main features of this face. In this research, a face recognition system was suggested based on four Artificial Neural Network (ANN) models separately: feed forward backpropagation neural network (FFBPNN), cascade forward backpropagation neural network (CFBPNN), function fitting neural network (FitNet) and pattern recognition neural network (PatternNet). Each model was constructed separately with 7 layers (input layer, 5 hidden layers each with 15 hidden units and output layer). Six ANN training algorithms (TRAINLM, TRAINBFG, TRAINBR, TRAINCGF, TRAINGD, and TRAINGD) were used to train each model separately. Many experiments were conducted for each of the four models with the 6 different training algorithms. The performance results of these models were compared according to mean square error and recognition rate to identify the best ANN model. The results showed that the PatternNet model performed best. Finally, comparisons among the training algorithms used were performed; these showed that TRAINLM was the best training algorithm for the face recognition system.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_6-Face_Recognition_System_Based_on_Different_Artificial_Neural_Networks_Models_and_Training_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The quest towards a winning Enterprise 2.0 collaboration technology adoption strategy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040605</link>
        <id>10.14569/IJACSA.2013.040605</id>
        <doi>10.14569/IJACSA.2013.040605</doi>
        <lastModDate>2013-06-29T14:55:16.2070000+00:00</lastModDate>
        
        <creator>Robert Louw</creator>
        
        <creator>Jabu Mtsweni</creator>
        
        <subject>Web 2.0; Enterprise 2.0; collaboration; technology adoption; adoption strategy; critical adoption elements</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Although Enterprise 2.0 collaboration technologies present enterprises with significant business benefits, enterprises still face challenges in promoting and sustaining end-user adoption. The purpose of this paper is to provide a systematic review of Enterprise 2.0 collaboration technology adoption models and challenges, as well as emerging approaches that purport to address these challenges. The paper presents four critical Enterprise 2.0 adoption elements that need to form part of an Enterprise 2.0 collaboration technology adoption strategy. The four critical elements were derived from the ‘SHARE 2013 for business users’ conference held in Johannesburg, South Africa, in 2013, as well as a review of the existing literature. The four adoption elements are enterprise strategic alignment, adoption strategy, governance, and communication, training and support. These elements will allow enterprises to ensure strategic alignment between the chosen Enterprise 2.0 collaboration technology toolset and the chosen business strategies. In addition, by reviewing and selecting an appropriate adoption strategy that incorporates governance, communication, and a training and support system, the enterprise can improve its chances of a successful Enterprise 2.0 adoption campaign.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_5-The_quest_towards_a_winning_Enterprise_2.0_collaboration_technology_adoption_strategy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling the Cut-off Frequency of Acoustic Signal with an Adaptative Neuro-Fuzzy Inference System (ANFIS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040604</link>
        <id>10.14569/IJACSA.2013.040604</id>
        <doi>10.14569/IJACSA.2013.040604</doi>
        <lastModDate>2013-06-29T14:55:13.8670000+00:00</lastModDate>
        
        <creator>Y. NAHRAOUI</creator>
        
        <creator>E.H. AASSIF</creator>
        
        <creator>G.Maze</creator>
        
        <creator>R.LATIF</creator>
        
        <subject>ANFIS; time-frequency; SPWV; acoustic scattering; acoustic circumferential waves; cut-off frequency; cylindrical shell</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>An Adaptive Neuro-Fuzzy Inference System (ANFIS), a new flexible tool, is applied to predict the cut-off frequencies of the symmetric and anti-symmetric circumferential waves (Si and Ai, i=1,2) propagating around an elastic aluminum cylindrical shell of various radius ratios b/a (a: outer radius and b: inner radius). The Wigner-Ville time-frequency analysis and the proper modes theory are used in this study to compare and validate the frequency values predicted by the ANFIS model. The available data on the cut-off frequencies (ka)c are used to train and test the performance of the model. These data are determined from values calculated using the proper modes theory of resonances and from those determined using the Wigner-Ville time-frequency images. The material density, the radius ratio b/a, the index i of the symmetric and anti-symmetric circumferential waves, and the longitudinal and transverse velocities of the material constituting the tube are selected as the input parameters of the ANFIS model. This technique is able to model and predict the cut-off frequencies of the symmetric and anti-symmetric circumferential waves with high precision, as assessed by different error estimates such as the mean relative error (MRE), mean absolute error (MAE) and standard error (SE). Good agreement is obtained between the output values predicted by the proposed model and those computed by the proper modes theory.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_4-Modeling_the_Cut-off_Frequency_of_Acoustic_Signal_with_an_Adaptative_Neuro-Fuzzy_Inference_System_ANFIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expected Reliability of Everyday- and Ambient Assisted Living Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040603</link>
        <id>10.14569/IJACSA.2013.040603</id>
        <doi>10.14569/IJACSA.2013.040603</doi>
        <lastModDate>2013-06-29T14:55:12.0570000+00:00</lastModDate>
        
        <creator>Frederick Steinke</creator>
        
        <creator>Tobias Fritsch</creator>
        
        <creator>Andreas Hertzer</creator>
        
        <creator>Helmut Tautz</creator>
        
        <creator>Simon Zickwolf</creator>
        
        <subject>Ambient Assisted Living; Elderly People; Expected Reliability; Online Survey; Technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>To obtain valuable information about expected reliability of everyday technologies compared to Ambient Assisted Living (AAL) technologies, an online survey was conducted covering five everyday technologies (train, dishwasher, navigation system, computer, mobile phone) and three AAL technologies (stove, window, floor sensors). The age range of the 206 participants (109 male; 97 female) was from 14 to 88 years (mean=38.0). The descriptive analysis indicates expected reliabilities of more than 90% for most technologies. Only train punctuality is considered less reliable, with a mean expected reliability of 86%. Furthermore, t-tests show that the three AAL technologies are expected to have a higher reliability than the everyday technologies. Additionally, a sample split at the age of 50 years indicates that elderly participants expect technologies to have a higher reliability than younger participants do. Using these findings, in a next step an experiment with different reliability levels of AAL technologies will be designed. This differentiation will be used to measure the influence of reliability on trust and intention to use in the context of Ambient Assisted Living.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_3-Expected_Reliability_of_Everyday-_and_Ambient_Assisted_Living_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary approach to optimisation of the operation of electric power distribution networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040602</link>
        <id>10.14569/IJACSA.2013.040602</id>
        <doi>10.14569/IJACSA.2013.040602</doi>
        <lastModDate>2013-06-29T14:55:10.2630000+00:00</lastModDate>
        
        <creator>Jan Stepien</creator>
        
        <creator>Sylwester Filipiak</creator>
        
        <subject>evolutionary algorithms; distribution power networks; electric breakdown</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>The idea of using a classifying system and a co-evolutionary algorithm to support operators of electric power distribution systems is presented in the paper. The proposed method is typified by the short time needed to designate the most rational post-breakdown configurations in complex Medium Voltage electric power distribution network structures. It is the classifying system working with the co-evolutionary algorithm that enables the effective creation of substitute scenarios for the Medium Voltage electric power distribution network. The method drawn up may be used in current systems managing the work of distribution networks to assist network operators in taking decisions concerning connection actions in supervised electric power systems.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_2-Evolutionary_approach_to_optimisation_of_the_operation_of_electric_power_distribution_networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040601</link>
        <id>10.14569/IJACSA.2013.040601</id>
        <doi>10.14569/IJACSA.2013.040601</doi>
        <lastModDate>2013-06-29T14:55:08.4370000+00:00</lastModDate>
        
        <creator>Tung-Ying Wu</creator>
        
        <creator>Sheng-Fuu Lin</creator>
        
        <subject>cervical spine; active contour; curvature scale space; turning angle.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Localization of the dominant points of cervical spines in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically identify the dominant points of cervical vertebrae in neck CT images with precision, we propose a method based on multi-scale contour analysis to analyze the deformable shape of spines. To extract the spine contour, we introduce a method that automatically generates the initial contour of the spine shape, from which the distance field for level set active contour iterations can be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at zero-crossing points of the curvature, and the spine shape is modeled with curvature scale space analysis. Then, each segmented curve is analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are approximately derived by analysis at a coarse scale and then adjusted precisely at a fine scale. The experimental results show a success rate of 93.4% and an accuracy of 0.37 mm compared with manual results.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_1-A_multi-scale_method_for_automatically_extracting_the_dominant_features_of_cervical_vertebrae_in_CT_images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Grid Network Transmission Line RLC Modelling Using Random Power Line Synthesis Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040639</link>
        <id>10.14569/IJACSA.2013.040639</id>
        <doi>10.14569/IJACSA.2013.040639</doi>
        <lastModDate>2013-06-29T14:55:06.6270000+00:00</lastModDate>
        
        <creator>Ezennaya S.O</creator>
        
        <creator>Udeze C. C</creator>
        
        <creator>Okafor K .C</creator>
        
        <creator>Onyedikachi S.N</creator>
        
        <creator>Anierobi C.C</creator>
        
        <subject>RPLS; Smart Grid; Overhead; Conductor; RLC parameters.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>This work proposes Random Power Line Synthesis (RPLS) as a quicker computational approach to solving the RLC parameters of a modern smart grid transmission network. Since modern grid systems provide a holistic perspective of modern grid development, it is obvious that an ageing transmission network cannot serve the expanded load demand. The need to revolutionize the traditional transmission model while exploiting basic electrical theories and principles in the Smart Grid (SG) architecture necessitated this paper. This work seeks to address RLC parameter modelling for an SG template to provision dynamic power in the Nigerian context. Other schemes of transmission RLC modelling were studied and their limitations outlined. Consequently, we propose a fuzzy smart grid framework for RLC computation and develop a proposed SG overhead transmission line from its conductor characteristics and tower geometry, considering the RLC parameters of the conductor while applying RPLS to generate the parameter metrics.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_39-Smart_Grid_Network_Transmission_Line_RLC_Modelling_Using_Random_Power_Line_Synthesis_Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wideband Parameters Analysis and Validation for Indoor radio Channel at 60/70/80GHz for Gigabit Wireless Communication employing Isotropic, Horn and Omni directional Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040638</link>
        <id>10.14569/IJACSA.2013.040638</id>
        <doi>10.14569/IJACSA.2013.040638</doi>
        <lastModDate>2013-06-29T14:55:04.8000000+00:00</lastModDate>
        
        <creator>E. Affum</creator>
        
        <creator>E.T. Tchao</creator>
        
        <creator>K. Diawuo</creator>
        
        <creator>K. Agyekum</creator>
        
        <subject>Indoor; Wideband; Isotropic; rms Delay; Power delay Profile; Excess delay</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Recently, applications of millimeter (mm) waves for high-speed broadband wireless local area network communication systems in indoor environments have been gaining increasing recognition, as they provide gigabit-speed wireless communications with carrier-class performance over distances of a mile or more, owing to spectrum availability and wider bandwidth requirements. Collectively referred to as E-Band, millimeter wave wireless technology presents the potential to offer bandwidth delivery comparable to that of fiber optics, but without the financial and logistic challenges of deploying fiber. This paper investigates the wideband parameters using the ray tracing technique for indoor propagation systems, with the rms delay spread for the omni-directional and horn antennas in a bent tunnel at 80GHz. The results obtained were 2.03 and 1.95 respectively; in addition, the normalized received power at 0.55×10^8 excess delay at 70GHz for the isotropic antenna was 0.97.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_38-Wideband_Parameters_Analysis_and_Validation_for.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Correlated Topic Model for Web Services Ranking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040637</link>
        <id>10.14569/IJACSA.2013.040637</id>
        <doi>10.14569/IJACSA.2013.040637</doi>
        <lastModDate>2013-06-29T14:55:02.9930000+00:00</lastModDate>
        
        <creator>Mustapha AZNAG</creator>
        
        <creator>Mohamed QUAFAFOU</creator>
        
        <creator>Zahi JARIR</creator>
        
        <subject>Web service, Data Representation, Discovery, Ranking, Machine Learning, Topic Models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>With the increasing number of published Web services providing similar functionalities, it is very tedious for a service consumer to decide which one to select according to her/his needs. In this paper, we explore several probabilistic topic models: Probabilistic Latent Semantic Analysis (PLSA), Latent Dirichlet Allocation (LDA) and Correlated Topic Model (CTM) to extract latent factors from web service descriptions. In our approach, topic models are used as efficient dimension reduction techniques, which are able to capture semantic relationships between word-topic and topic-service interpreted in terms of probability distributions. To address the limitation of keyword-based queries, we represent web service descriptions as a vector space and introduce a new approach for discovering and ranking web services using latent factors. In our experiments, we evaluated our Service Discovery and Ranking approach by calculating the precision (P@n) and normalized discounted cumulative gain (NDCGn).</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_37-Correlated_Topic_Model_for_Web_Services_Ranking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Probabilistic Distributed Algorithm for Uniform Election in Triangular Grid Graphs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040636</link>
        <id>10.14569/IJACSA.2013.040636</id>
        <doi>10.14569/IJACSA.2013.040636</doi>
        <lastModDate>2013-06-29T14:55:01.1970000+00:00</lastModDate>
        
        <creator>El Mehdi Stouti</creator>
        
        <creator>Ismail Hind</creator>
        
        <creator>Abdelaaziz El Hibaoui</creator>
        
        <subject>Uniform Election, Distributed Algorithms, Probabilistic Election, Markov Process, Randomized Algorithm Analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Probabilistic algorithms are designed to handle problems that do not admit effective deterministic solutions. In the case of the election problem, many algorithms are available and applicable under appropriate assumptions, for example the uniform election in trees, k-trees and polyominoids. In this paper, we first introduce a probabilistic algorithm for uniform election in triangular grid graphs; we then expose the set of rules that generate the class of triangular grid graphs. The main part of this paper is devoted to the analysis of our algorithm. We show that our algorithm is totally fair insofar as it gives every vertex of the given graph the same probability of being elected.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_36-Probabilistic_Distributed_Algorithm_for_Uniform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Software Tool for Analysing NT&#174; File System Permissions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040635</link>
        <id>10.14569/IJACSA.2013.040635</id>
        <doi>10.14569/IJACSA.2013.040635</doi>
        <lastModDate>2013-06-29T14:54:59.4030000+00:00</lastModDate>
        
        <creator>Simon Parkinson</creator>
        
        <creator>Andrew Crampton</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Administrating and monitoring New Technology File System (NTFS) permissions can be a cumbersome and convoluted task. In today’s data-rich world there has never been a more important time to ensure that data is secured against unwanted access. This paper identifies the essential and fundamental requirements of access control, highlighting the main causes of their misconfiguration within NTFS. In response, a number of features are identified and an efficient, informative and intuitive software-based solution is proposed for examining file system permissions. In the first year that the software has been made freely available, it has been downloaded and installed by over four thousand users.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_35-A_Novel_Software_Tool_for_Analysing_NT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Watermarking in E-commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040634</link>
        <id>10.14569/IJACSA.2013.040634</id>
        <doi>10.14569/IJACSA.2013.040634</doi>
        <lastModDate>2013-06-29T14:54:57.5930000+00:00</lastModDate>
        
        <creator>Peyman Rahmati</creator>
        
        <creator>Andy Adler</creator>
        
        <creator>Thomas Tran</creator>
        
        <subject>Data hiding; geometric distortion; watermarking; print-and-scan; E-commerce</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>A major challenge for E-commerce and content-based businesses is the possibility of altering identity documents or other digital data. This paper presents a watermark-based approach to protect digital identity documents against a Print-Scan (PS) attack. We propose a secure ID card authentication system based on watermarking. For authentication purposes, a user/customer is asked to upload a scanned picture of a passport or ID card through the internet to fulfill a transaction online. To provide security in online ID card submission, we need to robustly encode personal information of the ID card’s holder into the card itself, and then extract the hidden information correctly in a decoder after the PS operation. The PS operation imposes several distortions, such as geometric, rotation, and histogram distortion, on the watermark location, which may cause the loss of information in the watermark. An online secure authentication system needs to first eliminate the distortion of the PS operation before decoding the hidden data. This study proposes five preprocessing blocks to remove the distortions of the PS operation: filtering, localization, binarization, undoing rotation, and cropping. Experimental results with 100 ID cards showed that the proposed online ID card authentication system has an average accuracy of 99% in detecting hidden information inside ID cards after the PS process. The innovation of this study is the implementation of an online watermark-based authentication system that uses a scanned ID card picture without any added frames around the watermark location, unlike previous systems.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_34-Watermarking_in_E-commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New electronic white cane for stair case detection and recognition using ultrasonic sensor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040633</link>
        <id>10.14569/IJACSA.2013.040633</id>
        <doi>10.14569/IJACSA.2013.040633</doi>
        <lastModDate>2013-06-29T14:54:54.9730000+00:00</lastModDate>
        
        <creator>Sonda Ammar Bouhamed</creator>
        
        <creator>Imene Khanfir Kallel</creator>
        
        <creator>Dorra Sellami Masmoudi</creator>
        
        <subject>Electronic white cane; ultrasonic signal processing; ground-stair classification; temporal representation of ultrasonic signal; frequency representation of ultrasonic signal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Blind people need assistance to interact with their environment more securely. We therefore propose a new device that enables them to perceive the world through their ears. Considering not only system requirements but also technology cost, our tool uses ultrasonic sensors and one monocular camera to make the user aware of the presence and nature of potential obstacles. In this paper, we focus on using only one ultrasonic sensor to detect staircases with the electronic cane; in this context, no previous work has considered such a challenge. Aware that the performance of an object recognition system depends on both object representation and classification algorithms, our system uses representations of the ultrasonic signal in the frequency domain: the spectrogram, which shows how the spectral density of the signal varies with time; the spectrum, which shows amplitude as a function of frequency; and the periodogram, which estimates the spectral density of the signal. Several features extracted from each representation contribute to the classification process. Our system was evaluated on a set of ultrasonic signals in which staircases occur with different shapes. Using a multiclass SVM approach, a recognition rate of 82.4% has been achieved.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_33-New_electronic_white_cane_for_stair_case_detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of Copeland Score Methods for Determine Group Decisions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040632</link>
        <id>10.14569/IJACSA.2013.040632</id>
        <doi>10.14569/IJACSA.2013.040632</doi>
        <lastModDate>2013-06-29T14:54:53.1630000+00:00</lastModDate>
        
        <creator>Ermatita </creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Agus Harjoko</creator>
        
        <subject>Group Decision Support System; Copeland Score.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Determining a group decision from the individual decisions of each decision maker requires a voting method. The Copeland score is one such voting method developed by previous researchers; however, it does not accommodate the weight of the expertise and interests of each decision maker. This paper proposes a voting method based on the Copeland score with added weighting. The method is developed by considering the weight of the expertise and interests of each decision maker, in accordance with the problems encountered in group decision making. The expertise and interests of decision makers are weighted according to how much each decision maker’s expertise contributes to the problems the group faces in reaching a decision.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_32-Development_of_Copeland_Score_Methods_for_Determine_Group_Decisions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OntoVerbal: a Generic Tool and Practical Application to SNOMED CT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040631</link>
        <id>10.14569/IJACSA.2013.040631</id>
        <doi>10.14569/IJACSA.2013.040631</doi>
        <lastModDate>2013-06-29T14:54:51.3530000+00:00</lastModDate>
        
        <creator>Shao Fen Liang</creator>
        
        <creator>Donia Scott</creator>
        
        <creator>Robert Stevens</creator>
        
        <creator>Alan Rector</creator>
        
        <subject>ontology verbalisation; natural language generation; OWL; SNOMED CT</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Ontology development is a non-trivial task requiring expertise in the chosen ontological language. We propose a method for making the content of ontologies more transparent by presenting, through the use of natural language generation, naturalistic descriptions of ontology classes as textual paragraphs. The method has been implemented in a proof-of-concept system, OntoVerbal, that automatically generates paragraph-sized textual descriptions of ontological classes expressed in OWL. OntoVerbal has been applied to ontologies that can be loaded into Prot&#233;g&#233; and been evaluated with SNOMED CT, showing that it provides coherent, well-structured and accurate textual descriptions of ontology classes.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_31-OntoVerbal_a_Generic_Tool_and_Practical_Application_to_SNOMED_CT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>TX-Kw: An Effective Temporal XML Keyword Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040630</link>
        <id>10.14569/IJACSA.2013.040630</id>
        <doi>10.14569/IJACSA.2013.040630</doi>
        <lastModDate>2013-06-29T14:54:48.9500000+00:00</lastModDate>
        
        <creator>Rasha Bin-Thalab</creator>
        
        <creator>Neamat El-Tazi</creator>
        
        <creator>Mohamed E.El-Sharkawi</creator>
        
        <subject>temporal XML; Keyword Search; ranking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Inspired by the great success of information retrieval (IR)-style keyword search on the web, keyword search on XML has emerged recently. Existing methods cannot resolve the challenges posed by keyword search over Temporal XML documents. We propose a way to evaluate temporal keyword search queries over Temporal XML documents. Moreover, we propose a new ranking method, based on time-aware IR ranking methods, to rank the results of temporal keyword search queries. Extensive experiments have been conducted to show the effectiveness of our approach.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_30-TX-Kw_An_Effective_Temporal_XML_Keyword_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Case Study of Named Entity Recognition in Odia Using Crf++ Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040629</link>
        <id>10.14569/IJACSA.2013.040629</id>
        <doi>10.14569/IJACSA.2013.040629</doi>
        <lastModDate>2013-06-29T14:54:46.7500000+00:00</lastModDate>
        
        <creator>Dr.Rakesh ch. Balabantaray</creator>
        
        <creator>Suprava Das</creator>
        
        <creator>Kshirabdhi Tanaya Mishra</creator>
        
        <subject>Named Entity Recognition; CRF++ Tool; Odia Named Entity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>NER has been regarded as an efficient strategy for extracting relevant entities for various purposes. The aim of this paper is to exploit a conventional method for NER in Odia by parameterizing the CRF++ tool in different ways. As a case study, we have used a gazetteer and POS tags to generate different feature sets in order to compare the performance of the NER task. The comparison study demonstrates how the proposed NER system performs on the different feature sets.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_29-Case_Study_of_Named_Entity_Recognition_in_Odia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Classification of L/R Hand Movement EEG Signals using Advanced Feature Extraction and Machine Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040628</link>
        <id>10.14569/IJACSA.2013.040628</id>
        <doi>10.14569/IJACSA.2013.040628</doi>
        <lastModDate>2013-06-29T14:54:43.9600000+00:00</lastModDate>
        
        <creator>Mohammad H. Alomari</creator>
        
        <creator>Aya Samaha</creator>
        
        <creator>Khaled AlKamha</creator>
        
        <subject>EEG; BCI; ICA; MRCP; ERD/ERS; machine learning; NN; SVM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>In this paper, we propose an automated computer platform for classifying Electroencephalography (EEG) signals associated with left and right hand movements, using a hybrid system that combines advanced feature extraction techniques and machine learning algorithms. EEG represents brain activity as electrical voltage fluctuations along the scalp, and a Brain-Computer Interface (BCI) is a device that enables the use of the brain’s neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movements. In this work, we aspired to find the feature extraction method that best enables differentiation between left and right executed fist movements across various classification algorithms. The EEG dataset used in this research was created and contributed to PhysioNet by the developers of the BCI2000 instrumentation system. Data were preprocessed using the EEGLAB MATLAB toolbox, and artifact removal was done using AAR. Data were epoched on the basis of Event-Related (De)Synchronization (ERD/ERS) and movement-related cortical potential (MRCP) features. Mu/beta rhythms were isolated for the ERD/ERS analysis, and delta rhythms were isolated for the MRCP analysis. An Independent Component Analysis (ICA) spatial filter was applied to the related channels for noise reduction and for isolating both artifactually and neurally generated EEG sources. The final feature vector included the ERD, ERS, and MRCP features in addition to the mean, power, and energy of the activations of the resulting Independent Components (ICs) of the epoched feature datasets. The datasets were fed into two machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines (SVMs). Intensive experiments were carried out, and optimum classification performances of 89.8% and 97.1% were obtained using NN and SVM, respectively. This research shows that this method of feature extraction holds some promise for the classification of various pairs of motor movements, which can be used in a BCI context to mentally control a computer or machine.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_28-Automated_Classification_of_LR_Hand_Movement_EEG_Signals_using_Advanced_Feature_Extraction_and_Machine_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A comparative study of Image Region-Based Segmentation Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040627</link>
        <id>10.14569/IJACSA.2013.040627</id>
        <doi>10.14569/IJACSA.2013.040627</doi>
        <lastModDate>2013-06-29T14:54:42.1500000+00:00</lastModDate>
        
        <creator>Lahouaoui LALAOUI</creator>
        
        <creator>Tayeb MOHAMADI</creator>
        
        <subject>Evaluation criteria; Martin’s; Rand Index; Image Segmentation; Magnetic resonance image.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Image segmentation has become an essential step in image processing, as it largely conditions the interpretation performed afterwards. It remains difficult to justify the accuracy of a segmentation algorithm regardless of the nature of the treated image. In this paper we perform an objective comparison of region-based segmentation techniques, including supervised and unsupervised deterministic classification as well as non-parametric and parametric probabilistic classification. Eight methods that are well known and widely used in the scientific community have been selected and compared. Martin’s criteria (GCE, LCE), the probabilistic Rand Index (RI), Variation of Information (VI), and Boundary Displacement Error (BDE) are used to evaluate the performance of these algorithms on Magnetic Resonance (MR) brain images, a synthetic MR image, and synthetic images. The MR brain images are composed of gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and other tissues, while the synthetic MR image contains the same classes plus edema and tumor. Results show that segmentation is an image-dependent process and that some of the evaluated methods are well suited to achieving a better segmentation.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_27-A_comparative_study_of_Image_Region-Based_Segmentation_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Rule Based Forensic Analysis of DDoS Attack in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040626</link>
        <id>10.14569/IJACSA.2013.040626</id>
        <doi>10.14569/IJACSA.2013.040626</doi>
        <lastModDate>2013-06-29T14:54:40.3570000+00:00</lastModDate>
        
        <creator>Ms. Sarah Ahmed</creator>
        
        <creator>Ms. S. M. Nirkhi</creator>
        
        <subject>DoS and DDoS attack; DSR; Fuzzy logic; MANET; Network forensic analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>A Mobile Ad Hoc Network (MANET) is a mobile, distributed wireless network. In a MANET, each node is self-capable, supporting routing functionality in an ad hoc scenario and forwarding data or exchanging topology information using wireless communications. These characteristics give the network better scalability, but this advantage also widens the scope for security compromise. One of the easiest forms of security compromise is the denial of service (DoS) attack, which may paralyze a node or the entire network; when coordinated by a group of attackers, it is considered a distributed denial of service (DDoS) attack. A typical DoS attack floods excessive volumes of traffic to deplete key resources of the target network. In a MANET, flooding can be done at the routing level. The ad hoc nature of MANETs calls for dynamic route management. Within the flat ad hoc routing category falls the reactive protocols subcategory, one of whose most prominent members is dynamic source routing (DSR), which works well for small numbers of nodes and low-mobility situations. DSR allows on-demand route discovery, for which nodes broadcast a route request message (RREQ). Intelligently flooding RREQ messages, thereby causing a DoS or DDoS attack that paralyzes the targeted network for a short duration, is not very difficult to launch and has the potential to damage the network. After an attack successful enough to crash or disrupt the MANET for some period of time, the breach triggers an investigation. Investigating and forensically analyzing the attack scenario provides digital proof against the attacker. In this paper, the parameters for RREQ flooding are identified; on the basis of these parameters, fuzzy-logic-based rules are deduced and described for both DoS and DDoS. We implemented a fuzzy forensic tool to determine flooding RREQ attacks of both the DoS and DDoS forms. Various experiments and results for this implementation are elaborated in this paper.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_26-A_Fuzzy_Rule_Based_Forensic_Analysis_of_DDoS_Attack_in_MANET.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Designing a Markov Model for the Analysis of 2-tier Cognitive Radio Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040625</link>
        <id>10.14569/IJACSA.2013.040625</id>
        <doi>10.14569/IJACSA.2013.040625</doi>
        <lastModDate>2013-06-29T14:54:38.3600000+00:00</lastModDate>
        
        <creator>Tamal Chakraborty</creator>
        
        <creator>Iti Saha Misra</creator>
        
        <subject>Cognitive Radio Network; 2-tier; Voice over IP; Markov Model; Spectrum Handoff</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Cognitive Radio Network (CRN) aims to reduce spectrum congestion by allowing secondary users to utilize idle spectrum bands in the absence of primary users. However, the overall user capacity and hence, the system throughput is bounded by the total number of available idle channels in the system. This paper aims to solve the problem of limited user capacity in basic CRN by proposing a 2-tier CRN that allows another tier (or layer) of secondary users to transmit, in addition to the already existing set of primary and secondary users in the system. Markov Models are designed step-wise to map the interaction between primary and secondary users in both tiers by including suitable traffic distribution models and system parameters. Spectrum handoff is also incorporated in the developed Markov Models. Performance analysis is carried out in terms of SU transmission, dropping, blocking and handoff probabilities along with mathematical formulation of the overall SU throughput in 2-tier CRN. It confirms better spectrum utilization in spectrum handoff enabled 2-tier CRN over basic CRN with enhancement in quality of service for secondary users in terms of reduced dropping and blocking probabilities.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_25-Designing_a_Markov_Model_for_the_Analysis_of_2-tier_Cognitive_Radio_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data fusion based framework for the recognition of Isolated Handwritten Kannada Numerals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040624</link>
        <id>10.14569/IJACSA.2013.040624</id>
        <doi>10.14569/IJACSA.2013.040624</doi>
        <lastModDate>2013-06-29T14:54:34.8970000+00:00</lastModDate>
        
        <creator>Mamatha. H.R</creator>
        
        <creator>Sucharitha Srirangaprasad</creator>
        
        <creator>Srikantamurthy K</creator>
        
        <subject>feature selection; feature fusion; decision fusion; Curvelet transform; K-NN classifier; data fusion; isolated handwritten Kannada numerals; OCR;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Combining classifiers appears to be a natural step forward once a critical mass of knowledge about single-classifier models has been accumulated. Although there are many unanswered questions about matching classifiers to real-life problems, classifier combination is growing rapidly and enjoying considerable attention from the pattern recognition and machine learning communities. For any pattern classification task, increases in data size, number of classes, dimension of the feature space, and interclass separability affect the performance of any classifier. It is essential to know the effect of training dataset size on the recognition performance of a feature extraction method and classifier. In this paper, an attempt is made to measure classifier performance by testing the classifier with two datasets of different sizes. In practical classification applications, given the number of classes and multiple feature sets for pattern samples, a desirable recognition performance can be achieved by data fusion. We propose a framework based on the combined concepts of decision fusion and feature fusion for the classification of isolated handwritten Kannada numerals. The proposed method improves the classification result: the experimental results show an increase of 13.95% in recognition accuracy.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_24-Data_fusion_based_framework_for_the_recognition_of_Isolated_Handwritten_Kannada_Numerals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Assessment Management Using Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040623</link>
        <id>10.14569/IJACSA.2013.040623</id>
        <doi>10.14569/IJACSA.2013.040623</doi>
        <lastModDate>2013-06-29T14:54:33.1000000+00:00</lastModDate>
        
        <creator>Shang Gao</creator>
        
        <creator>Jo Coldwell-Neilson</creator>
        
        <creator>Andrzej Goscinski</creator>
        
        <subject>assessment management; WebCT Vista; Desire2Learn; CloudDeakin; marking guide; personalized comment; Markers Assistant; On-line Grades System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>This paper first explains the importance of assessment management and then introduces two assessment tools currently used in the School of Information Technology at Deakin University. A comparison of assignment marking was conducted after collecting test data from three sets of assignments. The importance of providing detailed marking guides and personalized comments is emphasized, and possible future extensions to the tools are discussed at the end of the paper.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_23-Improving_assessment_management_using_tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting the Role of Hardware Prefetchers in Multicore Processors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040622</link>
        <id>10.14569/IJACSA.2013.040622</id>
        <doi>10.14569/IJACSA.2013.040622</doi>
        <lastModDate>2013-06-29T14:54:31.2900000+00:00</lastModDate>
        
        <creator>Hasina Khatoon</creator>
        
        <creator>Shahid Hafeez Mirza</creator>
        
        <creator>Talat Altaf</creator>
        
        <subject>Multicore; prefetchers; prefetch-sensitive; memory wall; aggressive prefetching; multiprogram workload; parallel workload.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>The processor-memory speed gap, referred to as the memory wall, has become much wider in multicore processors because a number of cores share the processor-memory interface. In addition to other cache optimization techniques, the mechanism of prefetching instructions and data has been used effectively to close the processor-memory speed gap and lower the memory wall. A number of issues emerge when prefetching is used aggressively in multicore processors. The results presented in this paper indicate the problems that need to be taken into consideration when using prefetching as a default technique. This paper also quantifies the amount of degradation that applications face with aggressive use of prefetching. Another aspect investigated is the performance of multicore processors using a multiprogram workload as compared to a single-program workload while varying the configuration of the built-in hardware prefetchers. Parallel workloads are also investigated to estimate the speedup and the effect of hardware prefetchers. This paper is the outcome of work forming part of a PhD research project currently in progress at NED University of Engineering and Technology, Karachi.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_22-Exploiting_the_Role_of_Hardware_Prefetchers_in_Multicore_Processors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Evaluation of Weight Growth and Weight Elimination Methods Using the Tangent Plane Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040621</link>
        <id>10.14569/IJACSA.2013.040621</id>
        <doi>10.14569/IJACSA.2013.040621</doi>
        <lastModDate>2013-06-29T14:54:29.4800000+00:00</lastModDate>
        
        <creator>P May</creator>
        
        <creator>E Zhou</creator>
        
        <creator>C. W. Lee</creator>
        
        <subject>neural networks; backpropagation; generalization; tangent plane; weight elimination; extreme learning machine</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>The tangent plane algorithm is a fast sequential learning method for multilayered feedforward neural networks that accepts almost-zero initial conditions for the connection weights, with the expectation that only the minimum number of weights will be activated. However, the inclusion of a tendency to move away from the origin in weight space can lead to large weights that are harmful to generalization. This paper evaluates two techniques used to limit the size of the weights in the tangent plane algorithm: weight growing and weight elimination. Comparative tests were carried out using the Extreme Learning Machine (ELM), a fast global minimiser giving good generalization. Experimental results show that the generalization performance of the tangent plane algorithm with weight elimination is at least as good as that of the ELM algorithm, making it a suitable alternative for problems that involve time-varying data such as EEG and ECG signals.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_21-A_Comprehensive_Evaluation_of_Weight_Growth_and_Weight_Elimination_Methods_Using_the_Tangent_Plane_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Format SPARQL Query Results into HTML Report</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040620</link>
        <id>10.14569/IJACSA.2013.040620</id>
        <doi>10.14569/IJACSA.2013.040620</doi>
        <lastModDate>2013-06-29T14:54:27.6730000+00:00</lastModDate>
        
        <creator>Dr Sunitha Abburu</creator>
        
        <creator>G.Suresh Babu</creator>
        
        <subject>SPARQL query; Oracle database 11g semantic store; Jena adapter; HTML report.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>SPARQL is a powerful language for querying semantic data and is recognized by the W3C as a query language for RDF. As an efficient query language for RDF, it defines several query result formats, such as CSV, TSV, and XML. These formats are not attractive, understandable, or readable; the results need to be converted into an appropriate format that the user can easily understand. The above formats require additional transformations or tool support to present the query result in a user-readable format. The main aim of this paper is to propose a method for building an HTML report dynamically from SPARQL query results. This enables SPARQL query results to be displayed easily in HTML report format, in an attractive, understandable form, without the support of any additional or external tools or transformations.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_20-Format_SPARQL_Query_Results_into.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of Three TDMA Digital Cellular Mobile Systems (GSM, IS-136 NA-TDMA and PDC) Based On Radio Aspect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040619</link>
        <id>10.14569/IJACSA.2013.040619</id>
        <doi>10.14569/IJACSA.2013.040619</doi>
        <lastModDate>2013-06-29T14:54:25.8630000+00:00</lastModDate>
        
        <creator>Laishram Prabhakar</creator>
        
        <subject>Comparative Study; GSM; IS-136 TDMA; PDC;Radio Aspect</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>As mobile and personal communication services and networks move toward seamless global roaming and improved quality of service for their users, the role of such networks in numbering, identification and quality of service will become increasingly important and well defined. All of this will enhance the performance of present as well as future mobile and personal communication networks, provide network management functions in mobile communication networks, and support national and international roaming. Moreover, these services require standardized subscriber identities; to meet these demands, mobile computing would rely on standard networks. Thus, in this study the researcher attempts to present a comparative picture of three standard digital cellular mobile communication systems: (i) Global System for Mobile (GSM) -- the European Time Division Multiple Access (TDMA) digital cellular standard, (ii) Interim Standard-136 (IS-136) -- the North American TDMA digital cellular standard (D-AMPS), and (iii) Personal Digital Cellular (PDC) -- the Japanese TDMA digital cellular standard.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_19-A_Comparative_Study_of_Three_TDMA_Digital_Cellular_Mobile_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Approach for Image Filtering by Using Neighbors pixels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040618</link>
        <id>10.14569/IJACSA.2013.040618</id>
        <doi>10.14569/IJACSA.2013.040618</doi>
        <lastModDate>2013-06-29T14:54:24.0670000+00:00</lastModDate>
        
        <creator>Smrity Prasad</creator>
        
        <creator>N.Ganesan</creator>
        
        <subject>Salt &amp; Pepper Noise; Filter; PSNR; MSE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>Image processing refers to the use of algorithms to perform operations on digital images. Microscopic images, such as images of microorganisms, contain different types of noise that reduce image quality. Removing this noise is a difficult but important processing task: unless noise is removed, the visual effect of an image will not be proper. This paper proposes a de-noising approach based on averaging the neighboring pixels in a 5x5 window.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_18-An_Efficient_Approach_for_Image_Filtering_by_Using_Neighbors_pixels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrating Social Network Services with Vehicle Tracking Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040617</link>
        <id>10.14569/IJACSA.2013.040617</id>
        <doi>10.14569/IJACSA.2013.040617</doi>
        <lastModDate>2013-06-29T14:54:22.2600000+00:00</lastModDate>
        
        <creator>Ahmed ElShafee</creator>
        
        <creator>Mahmoud ElMenshawi</creator>
        
        <creator>Mena Saeed</creator>
        
        <subject>Vehicle Tracking; GSM; GPS; Microcontrollers; Twitter; Google maps.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>This paper gives the design and implementation of a newly proposed vehicle tracking system that uses a popular social network as a value-added service for a traditional tracking system. The proposed tracking system makes use of the Google Maps service to trace the vehicle: each vehicle has an account containing Google Maps posts that display the vehicle location in real time. A hardware module inside the vehicle uses the Global Positioning System (GPS) to detect the vehicle location and the Global System for Mobile communication (GSM) to update the vehicle location in the vehicle's account on the social network. The system uses the well-known Arduino microcontroller to control the GSM-GPS modem. The proposed system can be used for a broad range of applications such as traffic management, vehicle tracking/anti-theft systems, and traffic routing and navigation. It can be applied in many business cases, such as public transportation, where passengers can track their buses and trains by following the vehicle's account on the social network. It can also be used in the private business sector as an easy and simple fleet tracking and management system, or by anyone who wants to track his car or find his way in case he gets lost.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_17-Integrating_Social_Network_Services_with_Vehicle_Tracking_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Creating a Distributed Rendering Environment on the Compute Clusters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040616</link>
        <id>10.14569/IJACSA.2013.040616</id>
        <doi>10.14569/IJACSA.2013.040616</doi>
        <lastModDate>2013-06-29T14:54:20.4330000+00:00</lastModDate>
        
        <creator>Ali Sheharyar</creator>
        
        <creator>Othmane Bouhali</creator>
        
        <subject>distributed; rendering; animation; render farm; cluster</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(6), 2013</description>
        <description>This paper discusses the deployment of an existing render farm manager in a typical compute cluster environment, such as a university. Usually, a render farm and a compute cluster use different queue managers, and each assumes total control over the physical resources. However, carving physical resources out of an existing compute cluster in a university-like environment, whose primary use is to run numerical simulations, may not be possible. It can potentially reduce overall resource utilization when compute tasks outnumber rendering tasks, and it can increase the system administration cost. In this paper, a framework is proposed that creates a dynamic distributed rendering environment on top of compute clusters using existing render farm managers, without requiring physical separation of the resources.</description>
        <description>http://thesai.org/Downloads/Volume4No6/Paper_16-A_Framework_for_Creating_a_Distributed_Rendering_Environment_on_the_Compute_Clusters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Relative Motion of Formation Flying with Elliptical Reference Orbit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020613</link>
        <id>10.14569/IJARAI.2013.020613</id>
        <doi>10.14569/IJARAI.2013.020613</doi>
        <lastModDate>2013-06-09T18:54:16.9870000+00:00</lastModDate>
        
        <creator>Hany R Dwidar</creator>
        
        <creator>Ashraf H. Owis</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>In this paper we present the optimal control of the relative motion of a formation flying mission consisting of two spacecraft. One spacecraft, the chief, orbits the Earth on a Highly Elliptical Orbit (HEO), and the other, the deputy, orbits the chief. The Keplerian relative dynamics of the formation as well as the second zonal harmonics of the Earth's gravitational field (J2) are studied. To study this perturbative effect, the linearized true-anomaly-varying Tschauner-Hempel equations are augmented to include the effect of J2. We solve the nonlinear feedback optimal control of the relative motion using the State-Dependent Riccati Equation (SDRE). The results are validated through a numerical example.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_13-Relative_Motion_of_Formation_Flying_with_Elliptical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Devices Based 3D Image Display Depending on User’s Actions and Movements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020612</link>
        <id>10.14569/IJARAI.2013.020612</id>
        <doi>10.14569/IJARAI.2013.020612</doi>
        <lastModDate>2013-06-09T18:54:15.0370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Herman Tolle</creator>
        
        <creator>Akihiro Serita</creator>
        
        <subject>3D image representation; augmented reality; virtual reality;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A method and system for 3D image display on a mobile phone and/or tablet terminal is proposed. The displayed 3D images change in accordance with the user's location, attitude and motions. 3D images can also be combined with real-world images and virtual images. Through implementation and experiments with virtual images, all functionalities are verified and validated. As a demonstration, the proposed system is applied to a virtual interior tour of a house: the user can look around the inside of the house in accordance with his or her movement, and can thus imagine what the interior of the house looks like.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_12-Mobile_Devices_Based_3D_Image_Display_Depending_on_User’s_Actions_and_Movements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Psychological Status Monitoring with Line of Sight Vector Changes (Human Eye Movements) Detected with Wearing Glass</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020611</link>
        <id>10.14569/IJARAI.2013.020611</id>
        <doi>10.14569/IJARAI.2013.020611</doi>
        <lastModDate>2013-06-09T18:54:13.1500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Kiyoshi Hasegawa</creator>
        
        <subject>psychological status; eye movement; eye detection and tracking; EEG signal;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A method for psychological status monitoring with line-of-sight vector changes (human eye movements) detected with a wearing glass is proposed. Saccadic eye movement should be an indicator of a human's psychological status; the probability of saccadic eye movement is therefore measured. Through experiments with simple and complicated documents, the relation between psychological status measured with EEG signals and the probability of saccadic eye movements is clarified. A strong relation is found between the two results, so psychological status can be estimated with eye movement measurements.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_11-Method_for_Psychological_Status_Monitoring_with_Line_of_Sight_Vector_Changes_Human_Eye_Movements_Detected_with_Wearing_Glass.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Ray Tracing Based Adjacency Effect and Nonlinear Mixture Pixel Model for Remote Sensing Satellite Imagery Data Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020610</link>
        <id>10.14569/IJARAI.2013.020610</id>
        <doi>10.14569/IJARAI.2013.020610</doi>
        <lastModDate>2013-06-09T18:54:11.2630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>adjacency effect; nonlinear mixed pixel model; Monte Carlo method; ray tracing method</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A Monte Carlo Ray Tracing (MCRT) based adjacency effect and nonlinear mixture pixel model is proposed for remote sensing satellite imagery data analysis. Through simulation, and through experiments utilizing actual data from a visible-to-near-infrared radiometer onboard a spacecraft, the proposed model is confirmed and validated. Influences due to the adjacency effect and the nonlinearity of mixed pixels can therefore be taken into account in remote sensing satellite imagery data analysis.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_10-Monte_Carlo_Ray_Tracing_Based_Adjacency_Effect_and_Nonlinear_Mixture_Pixel_Model_for_Remote_Sensing_Satellite_Imagery_Data_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Map Creation Based on Knowledgebase System for Texture Mapping Together with Height Estimation Using Objects’ Shadows with High Spatial Resolution Remote Sensing Satellite Imagery Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020609</link>
        <id>10.14569/IJARAI.2013.020609</id>
        <doi>10.14569/IJARAI.2013.020609</doi>
        <lastModDate>2013-06-09T18:54:09.3270000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>texture mapping; remote sensing satellite; 3D map; object height estimation;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A method for 3D map creation is proposed, based on a knowledgebase system for texture mapping together with object height estimation using objects’ shadows in high-spatial-resolution remote sensing satellite imagery data. Through experiments with IKONOS imagery data, the proposed method is validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_9-3D_Map_Creation_Based_on_Knowledgebase_System_for_Texture_Mapping_Together_with_Height.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020608</link>
        <id>10.14569/IJARAI.2013.020608</id>
        <doi>10.14569/IJARAI.2013.020608</doi>
        <lastModDate>2013-06-09T18:54:07.4230000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>3D object display; volume rendering; solid model; afterimage; line of sight vector </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which represents only the surface of 3D objects. By using afterimages, the internal structure can be displayed by exchanging the intersection images containing internal structure. Through experiments with CT scan images, the proposed method is validated. One further applicable area of the proposed method, the design of 3D patterns of Large Scale Integrated circuits (LSI), is also introduced. Layered LSI patterns can be displayed and switched using human eyes only. It is confirmed that the time required for displaying a layer pattern and switching to another layer using the eyes only is much shorter than that using hands and fingers.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_8-Method_for_3D_Rendering_Based_on_Intersection_Image_Display_Which_Allows_Representation_of_Internal_Structure_of_3D_objects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Lecturer’s e-Table (Server Terminal) Which Allows Monitoring the Location at Where Each Student is Looking During Lessons with e-Learning Contents Through Client Terminals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020607</link>
        <id>10.14569/IJARAI.2013.020607</id>
        <doi>10.14569/IJARAI.2013.020607</doi>
        <lastModDate>2013-06-09T18:54:05.5070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>e-learning; line of sight estimation; e-table for lecturers; eye based Human-Computer Interaction;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A Lecturer’s e-Table (server terminal), which allows monitoring of the location at which each student is looking during lessons with e-learning contents through client terminals, is proposed. During lessons with e-learning contents through a client terminal, students can clearly take the lessons much more efficiently when they are looking at the location at which the lecturer would like them to look. However, students do not always look at the appropriate location even when the system indicates it. The proposed lecturer's e-table server terminal allows monitoring of the location at which each student is looking, so the lecturer can caution a student to look at the appropriate location. Through experiments with students, it is found that the achievement test score with the indication of the appropriate location is much better than that without it. Also, learning efficiency with the caution is much greater than without it.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_7-Lecturer’s_e-Table_Server_Terminal)Which_Allows_Monitoring_the_Location_at_Where_Each_Student.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Routing Protocol under Noisy Environment for Mobile Ad Hoc Networks using Fuzzy Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020606</link>
        <id>10.14569/IJARAI.2013.020606</id>
        <doi>10.14569/IJARAI.2013.020606</doi>
        <lastModDate>2013-06-09T18:54:03.6170000+00:00</lastModDate>
        
        <creator>Supriya Srivastava</creator>
        
        <creator> A. K. Daniel</creator>
        
        <subject>Fuzzy Logic; Noise; Signal Strength; MANET</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A MANET is a collection of mobile nodes communicating and cooperating with each other to route packets from sources to their destinations. A MANET is used to support dynamic routing strategies in the absence of wired infrastructure and centralized administration. In this paper, we propose a routing algorithm for mobile ad hoc networks based on fuzzy logic to discover an optimal route for transmitting data packets to the destination. This protocol helps every node in a MANET choose the most efficient successor node on the basis of channel parameters such as environmental noise and signal strength. The protocol improves route performance by increasing network lifetime, reducing link failures and selecting the best node for forwarding the data packet to the next node.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_6-An_Efficient_Routing_Protocol_under_Noisy_Environment_for_Mobile_Ad_Hoc_Networks_using_Fuzzy_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallelization of 2-D IADE-DY Scheme on Geranium Cadcam Cluster for Heat Equation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020605</link>
        <id>10.14569/IJARAI.2013.020605</id>
        <doi>10.14569/IJARAI.2013.020605</doi>
        <lastModDate>2013-06-09T18:54:01.7130000+00:00</lastModDate>
        
        <creator>Simon Uzezi Ewedafe</creator>
        
        <creator>Rio Hirowati Shariffudin</creator>
        
        <subject>Parallel Implementation; Heat Equation; SPMD; IADE-DY; Domain Decomposition</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A parallel implementation of the Iterative Alternating Direction Explicit method by D’Yakonov (IADE-DY) for solving the 2-D heat equation on a distributed system, the Geranium Cadcam cluster (GCC), using the Message Passing Interface (MPI) is presented. The scheduling of n tri-diagonal systems of equations with the above method was used to show improvements in speedup, effectiveness and efficiency. The Master/Worker paradigm and the Single Program Multiple Data (SPMD) model are employed to manage the whole computation based on domain decomposition. Completion of the execution may require task recovery and a favorable configuration. The paper reports the numerical validation of the parallelization through simulation, demonstrating the effectiveness of the proposed method on the cluster system. It was found that the rate of convergence decreases as the number of processors increases. The results suggest that the 2-D IADE-DY scheme is a good approach to solving such problems, particularly when simulated with more processors.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_5-Parallelization_of_2-D_IADE-DY_Scheme_on_Geranium_Cadcam_Cluster_for_Heat_Equation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Category Decomposition Method Based on Matched Filter for Un-Mixing of Mixed Pixels Acquired with Spaceborne Based Hyperspectral Radiometers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020604</link>
        <id>10.14569/IJARAI.2013.020604</id>
        <doi>10.14569/IJARAI.2013.020604</doi>
        <lastModDate>2013-06-09T18:53:59.8100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>category decomposition; hyperspectral radiometer; mixed pixel; un-mixing; matched filter;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>A category decomposition method based on a matched filter for un-mixing of mixed pixels (mixels) acquired with spaceborne hyperspectral radiometers is proposed. Through simulation studies with mixed pixels created from spectral reflectance data in the USGS spectral library, as well as with actual airborne hyperspectral radiometer imagery data, it is found that the proposed method works well with acceptable decomposition accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_4-Category_Decomposition_Method_Based_on_Matched_Filter_for_Un-Mixing_of_Mixed_Pixels_Acquired_with_Spaceborne.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Structural Algorithm for Complex Natural Languages Parse Generation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020603</link>
        <id>10.14569/IJARAI.2013.020603</id>
        <doi>10.14569/IJARAI.2013.020603</doi>
        <lastModDate>2013-06-09T18:53:57.9070000+00:00</lastModDate>
        
        <creator>Enikuomehin, A. O. </creator>
        
        <creator>Rahman</creator>
        
        <creator>Ameen</creator>
        
        <subject>Natural Language; Syntax; Parsing; Meaning Representation </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>In artificial intelligence, the study of how humans understand natural languages is cognitively based, and such science is essential to the development of modern embedded robotic systems. Such systems should have the capability to process natural languages and generate meaningful output. Unlike machines, humans can understand a natural language sentence thanks to an in-born faculty they use to process it. Robotics requires appropriate parse systems in order to handle language-based operations. In this paper, we present a new method of generating parse structures for complex natural language using algorithmic processes. The paper explores the process of generating meaning via parse structures and improves on existing results using a well-established parsing scheme. The resulting algorithm was implemented in Java, and a natural language interface for parse generation is presented. The results further show that tokenizing sentences into their respective units affects the parse structure in the first instance and the semantic representation on a larger scale. Efforts were made to limit the rules used in generating the grammar, since natural language rules are almost infinite depending on the language set.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_3-A_Structural_Algorithm_for_Complex_Natural_Languages_Parse_Generation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Web Documents based on Efficient Multi-Tire Hashing Algorithm for Mining Frequent Termsets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020602</link>
        <id>10.14569/IJARAI.2013.020602</id>
        <doi>10.14569/IJARAI.2013.020602</doi>
        <lastModDate>2013-06-09T18:53:56.0030000+00:00</lastModDate>
        
        <creator>Noha Negm</creator>
        
        <creator>Passent Elkafrawy</creator>
        
        <creator>Mohamed Amin</creator>
        
        <creator>Abdel Badeeh M. Salem</creator>
        
        <subject>Document Clustering; Knowledge Discovery; Hashing; Frequent termsets; Apriori algorithm; Text Documents; Text Mining; Data Mining</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>Document clustering is one of the main themes in text mining. It refers to the process of grouping documents with similar contents or topics into clusters to improve both the availability and reliability of text mining applications. Some recent algorithms address the high dimensionality of text by using frequent termsets for clustering. Despite its drawbacks, the Apriori algorithm is still the basic algorithm for mining frequent termsets. This paper presents an approach for Clustering Web Documents based on a Hashing algorithm for mining Frequent Termsets (CWDHFT). It introduces an efficient Multi-Tire Hashing algorithm for mining Frequent Termsets (MTHFT) in place of the Apriori algorithm. The algorithm uses a new methodology for generating frequent termsets by building the multi-tire hash table during a single scan of the documents. To avoid hash collisions, the multi-tire technique is utilized in the proposed hashing algorithm. Based on the generated frequent termsets, the documents are partitioned, and clustering occurs by grouping the partitions through descriptive keywords. By using the MTHFT algorithm, the scanning and computational costs are reduced, performance is considerably increased, and the clustering process is sped up. The CWDHFT approach improved accuracy, scalability and efficiency when compared with existing clustering algorithms such as Bisecting K-means and FIHC.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_2-Clustering_Web_Documents_based_on_Efficient_Multi-Tire_Hashing_Algorithm_for_Mining_Frequent_Termsets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enterprise Architecture Model that Enables to Search for Patterns of Statistical Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020601</link>
        <id>10.14569/IJARAI.2013.020601</id>
        <doi>10.14569/IJARAI.2013.020601</doi>
        <lastModDate>2013-06-09T18:53:54.0870000+00:00</lastModDate>
        
        <creator>Oleg Chertov</creator>
        
        <subject>Enterprise Architecture; Pattern Search; Data Mining; Architecture Description Language </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(6), 2013</description>
        <description>Enterprise architecture is the stem from which the development of any departmental information system should grow and around which it should revolve. In this paper, a fragment of an enterprise architecture model is built using the ArchiMate language. This fragment enables searching for information in enterprises outside the productive industry, such as official statistics. The proposed model embraces all three architectural levels of the corresponding information systems, namely OLTP, OLAP, and Data Mining. In particular, the latter level enables searching for patterns of statistical information.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No6/Paper_1-Enterprise_Architecture_Model_that_Enables_to_Search_for_Patterns_of_Statistical_Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gesture Controlled Robot using Image Processing </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020511</link>
        <id>10.14569/IJARAI.2013.020511</id>
        <doi>10.14569/IJARAI.2013.020511</doi>
        <lastModDate>2013-05-10T07:27:30.5300000+00:00</lastModDate>
        
        <creator>Harish Kumar Kaura</creator>
        
        <creator>Vipul Honrao</creator>
        
        <creator>Sayali Patil</creator>
        
        <creator>Pravish Shetty</creator>
        
        <subject>Gestures; OpenCV; Arduino; WiFly;  L293D.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>Service robots directly interact with people, so finding a more natural and easy user interface is of fundamental importance. While earlier works have focused primarily on issues such as manipulation and navigation in the environment, few robotic systems offer user-friendly interfaces with the ability to control the robot by natural means. To provide a feasible solution to this requirement, we have implemented a system through which the user can give commands to a wireless robot using gestures. Through this method, the user can control or navigate the robot by using gestures of his/her palm, thereby interacting with the robotic system. The command signals are generated from these gestures using image processing. These signals are then passed to the robot to navigate it in the specified directions.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_11-Gesture_Controlled_Robot_using_Image_Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Optimization Algorithm For Combinatorial Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020510</link>
        <id>10.14569/IJARAI.2013.020510</id>
        <doi>10.14569/IJARAI.2013.020510</doi>
        <lastModDate>2013-05-10T07:27:28.7530000+00:00</lastModDate>
        
        <creator>Azmi Alazzam</creator>
        
        <creator>Harold W. Lewis III</creator>
        
        <subject>meta-heuristic; optimization; combinatorial problems</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>Combinatorial optimization problems are those problems that have a finite set of possible solutions. The best way to solve a combinatorial optimization problem is to check all the feasible solutions in the search space. However, checking all the feasible solutions is not always possible, especially when the search space is large. Thus, many meta-heuristic algorithms have been devised and modified to solve these problems. The meta-heuristic approaches are not guaranteed to find the optimal solution, since they evaluate only a subset of the feasible solutions, but they try to explore different areas of the search space in a smart way to get a near-optimal solution at lower cost and in less time. In this paper, we propose a new meta-heuristic algorithm that can be used for solving combinatorial optimization problems. The method introduced in this paper is named the Global Neighborhood Algorithm (GNA). The algorithm is principally based on a balance between global and local search. A set of random solutions is first generated from the global search space, and the best of them gives the current optimal value. After that, the algorithm iterates, and in each iteration two sets of solutions are generated: one from the global search space and the other from the neighborhood of the best solution. Throughout the paper, the algorithm is delineated with examples. In the final phase of the research, the results of GNA are discussed and compared with the results of the Genetic Algorithm (GA) as an example of another optimization method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_10-A_New_Optimization_Algorithm_For_Combinatorial_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Path Based Mapping Technique for Robots </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020509</link>
        <id>10.14569/IJARAI.2013.020509</id>
        <doi>10.14569/IJARAI.2013.020509</doi>
        <lastModDate>2013-05-10T07:27:27.0070000+00:00</lastModDate>
        
        <creator>Amiraj Dhawan</creator>
        
        <creator>Parag Oak</creator>
        
        <creator>Rahul Mishra</creator>
        
        <creator>George Puthanpurackal</creator>
        
        <subject>Robots; map generation; navigation; AI planning; path planning;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>The purpose of this paper is to explore a new way of autonomous mapping. Current systems using perception techniques like LASER or SONAR use probabilistic methods and have the drawback of allowing considerable uncertainty in the mapping process. Our approach is to break down the environment, specifically indoor environments, into reachable areas and objects, separated by boundaries, and to identify their shapes in order to render various navigable paths around them. This is a novel method that does away with uncertainties, as far as possible, at the cost of temporal efficiency. The system also demands only minimal and cheap hardware, as it relies only on infra-red sensors to do the job.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_9-Path_Based_Mapping_Technique_for_Robots.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis and Evaluation of Different Data Mining Algorithms used for Cancer Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020508</link>
        <id>10.14569/IJARAI.2013.020508</id>
        <doi>10.14569/IJARAI.2013.020508</doi>
        <lastModDate>2013-05-10T07:27:25.3070000+00:00</lastModDate>
        
        <creator>Gopala Krishna Murthy Nookala</creator>
        
        <creator>Bharath Kumar Pottumuthu</creator>
        
        <creator>Nagaraju Orsu</creator>
        
        <creator>Suresh B. Mudunuri</creator>
        
        <subject>Weka; Cancer Classification; Micro-array; Data-mining; Classification Algorithms; Gene Expression Data;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>Classification algorithms of data mining have been successfully applied in recent years to predict cancer based on gene expression data. The micro-array is a powerful diagnostic tool that can generate a wealth of gene expression information for all the human genes in a cell at once. Various classification algorithms can be applied to such micro-array data to devise methods that can predict the occurrence of tumors. However, the accuracy of such methods differs according to the classification algorithm used. Identifying the best classification algorithm among all those available is a challenging task. In this study, we have made a comprehensive comparative analysis of 14 different classification algorithms, and their performance has been evaluated using 3 different cancer data sets. The results indicate that none of the classifiers outperformed all others in terms of accuracy when applied to all 3 data sets. Most of the algorithms performed better as the size of the data set increased. We recommend that users not stick to a particular classification method, but evaluate different classification algorithms and select the better one.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_8-Performance_Analysis_and_Evaluation_of_Different_Data_Mining_Algorithms_used_for_Cancer_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The preliminary results of a force feedback control for Sensorized Medical Robotics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020507</link>
        <id>10.14569/IJARAI.2013.020507</id>
        <doi>10.14569/IJARAI.2013.020507</doi>
        <lastModDate>2013-05-10T07:27:23.6030000+00:00</lastModDate>
        
        <creator>Duck Hee Lee</creator>
        
        <creator>Seoung Joon Song</creator>
        
        <creator>Reza Fazel-Rezai</creator>
        
        <creator>Jaesoon Choi</creator>
        
        <subject>Laparoscopic surgery; force feedback control; degree of freedom (DOF).</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>Robot-assisted laparoscopic surgery systems still hold many problems. Among these, the inability to deliver a sense of touch to the surgeon is considered the biggest. This paper attempts to find a force feedback control method for movements performed using the one-degree-of-freedom (DOF) arm of a programming-based master and slave system. Two force feedback control methods were examined: one using force sensors, and one for the case when a sensor could not be used due to spatial and systemic limitations. The realization of the force feedback system was successful, and the experimental results of the sensor-based and current-based force feedback control modes on the one-DOF system indicate that the approach could be directly applied to another multi-DOF surgical robot system currently under development.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_7-The_preliminary_results_of_a_force_feedback_control_for_Sensorized_Medical_Robotics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mashup Based Content Search Engine for Mobile Devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020506</link>
        <id>10.14569/IJARAI.2013.020506</id>
        <doi>10.14569/IJARAI.2013.020506</doi>
        <lastModDate>2013-05-10T07:27:21.9030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>search engine; API; mashup; Android; e-learning content retrieval;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>A mashup based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web Search API, Yahoo!JAPAN Image Search API, YouTube Data API, and Amazon Product Advertising API. The retrieved results are merged and linked to each other, so that different types of content can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches across a variety of content types: images, documents, PDF files, and moving pictures.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_6-Mashup_Based_Content_Search_Engine_for_Mobile_Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Remote Sensing Satellite Image Database System Allowing Image Portion Retrievals Utilizing Principal Component Which Consists Spectral and Spatial Features Extracted from Imagery Data </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020505</link>
        <id>10.14569/IJARAI.2013.020505</id>
        <doi>10.14569/IJARAI.2013.020505</doi>
        <lastModDate>2013-05-10T07:27:20.2030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Image retrieval; Spectral and spatial features</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>A remote sensing satellite image database which allows image portion retrievals utilizing principal components consisting of spectral and spatial features extracted from the imagery data is proposed. Through experiments with actual remote sensing satellite images, it is found that the proposed image retrieval works well.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_5-Remote_Sensing_Satellite_Image_Database_System_Allowing_Image_Portion_Retrievals_Utilizing_Principal_Component.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Clustering Method Based on Self Organization Mapping: SOM Derived Density Maps and Its Application for Landsat Thematic Mapper Image </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020504</link>
        <id>10.14569/IJARAI.2013.020504</id>
        <doi>10.14569/IJARAI.2013.020504</doi>
        <lastModDate>2013-05-10T07:27:18.5030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>SOM clustering; Density map; Boundary image; Labeling algorithm </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>A new method for image clustering with density maps derived from Self-Organizing Maps (SOM) is proposed, together with a clarification of the learning processes during the construction of clusters. Simulation studies and experiments with remote sensing satellite derived imagery data are conducted. It is found that the proposed SOM based image clustering method shows much better clustering results for both the simulation and the real satellite imagery data, and that the separability among clusters of the proposed method is 16% greater than that of the existing k-means clustering. In accordance with the experimental results with the Landsat-5 TM image, it takes more than 20,000 iterations for the SOM learning processes to converge.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_4-Image_Clustering_Method_Based_on_Self_Organization_Mapping_SOM_Derived_Density_Maps.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis on Sea Surface Temperature Estimation Methods with Thermal Infrared Radiometer Data through Simulations </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020503</link>
        <id>10.14569/IJARAI.2013.020503</id>
        <doi>10.14569/IJARAI.2013.020503</doi>
        <lastModDate>2013-05-10T07:27:16.7870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>SST estimation; Split Window; Conjugate Gradient; MODTRAN; atmospheric model </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>A sensitivity analysis on Sea Surface Temperature: SST estimation with Thermal Infrared Radiometer: TIR data is conducted through simulations. A Conjugate Gradient Method: CGM based SST estimation method is also proposed. The SST estimation error of the proposed CGM based method is compared to that of the conventional Split Window Method: SWM under a variety of conditions, including different atmospheric models. The results show that the proposed CGM based method is superior to the SWM.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_3-Sensitivity_Analysis_on_Sea_Surface_Temperature_Estimation_Methods_with_Thermal_Infrared_Radiometer_Data_through_Simulations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sea Ice Concentration Estimation Method with Satellite Based Visible to Near Infrared Radiometer Data Based on Category Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020502</link>
        <id>10.14569/IJARAI.2013.020502</id>
        <doi>10.14569/IJARAI.2013.020502</doi>
        <lastModDate>2013-05-10T07:27:15.0870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Unmixing; Inversion theory; Category decomposition; Remote Sensing </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>An unmixing method, based on inversion theory, for estimating the mixing ratios of the components of which the pixel in question consists is proposed, together with its application to sea ice concentration estimation with satellite based visible to near infrared radiometer data. Through a comparative study of different unmixing methods with remote sensing satellite imagery data, it is found that the proposed inversion theory based unmixing method is superior to the other methods, and that it is applicable to sea ice concentration estimations.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_2-Sea_Ice_Concentration_Estimation_Method_with_Satellite_Based_Visible_to_Near_Infrared_Radiometer_Data_Based_on_Category_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rice Crop Quality Evaluation Method through Regressive Analysis between Nitrogen Content and Near Infrared Reflectance of Rice Leaves Measured from Near Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020501</link>
        <id>10.14569/IJARAI.2013.020501</id>
        <doi>10.14569/IJARAI.2013.020501</doi>
        <lastModDate>2013-05-10T07:27:13.3870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>rice crop; regressive analysis; nitrogen content; reflectance measurement;  </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(5), 2013</description>
        <description>A rice crop quality evaluation method through regressive analysis between the nitrogen content of rice leaves and near infrared reflectance measurement data, acquired from the near field from a radio wave controlled helicopter, is proposed. The dependency of rice quality on the nitrogen of chemical fertilizer and on the water supply condition is evaluated, as well as the homogeneity of the rice crop quality in the paddy fields. Furthermore, the influence of shadow on the near infrared reflectance of rice leaves measured from the near field is taken into account in the rice crop quality evaluation process.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No5/Paper_1-Rice_Crop_Quality_Evaluation_Method_through_Regressive_Analysis_between_Nitrogen_Content_and_Near_Infrared.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feedback Optimal Control of Low-thrust Orbit Transfer in Central Gravity Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040426</link>
        <id>10.14569/IJACSA.2013.040426</id>
        <doi>10.14569/IJACSA.2013.040426</doi>
        <lastModDate>2013-04-30T18:00:39.6530000+00:00</lastModDate>
        
        <creator>Ashraf H. Owis</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Low-thrust trajectories with variable radial thrust are studied in this paper. The problem is tackled by solving the Hamilton-Jacobi-Bellman equation via the State Dependent Riccati Equation (SDRE) technique devised for nonlinear systems. Instead of solving the two-point boundary value problem in which the classical optimal control is stated, this technique allows us to derive closed-loop solutions. The idea of the work consists in factorizing the original nonlinear dynamical system into a quasi-linear state dependent system of ordinary differential equations. The generating function technique is then applied to this new dynamical system, and the feedback optimal control is solved. In this way we circumvent the problem of expanding the vector field and truncating higher-order terms, because no remainders are lost in the undertaken approach. This technique can be applied to any planet-to-planet transfer; it has been applied here to the Earth-Mars low-thrust transfer.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_26-Feedback_Optimal_Control_of_Low-thrust_Orbit_Transfer_in_Central_Gravity_Field.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graph Mining Sub Domains and a Framework for Indexing – A Graphical Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040425</link>
        <id>10.14569/IJACSA.2013.040425</id>
        <doi>10.14569/IJACSA.2013.040425</doi>
        <lastModDate>2013-04-30T18:00:37.9530000+00:00</lastModDate>
        
        <creator>K. Vivekanandan</creator>
        
        <creator>A. Pankaj Moses Monickaraj</creator>
        
        <creator>D. Ramya Chithra</creator>
        
        <subject>Graph; Graph Mining; Graph Classification; Graph Searching; Graph Indexing; Graph Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Graphs are one of the popular models for the effective representation of complex, structured, huge data, and similarity search over graphs has become a fundamental research problem in Graph Mining. In this paper, the preliminary graph-related basic theorems are first reviewed, and various research sub-domains are showcased, such as Graph Classification, Graph Searching, Graph Indexing, and Graph Clustering. These are discussed with a few of the most dominant algorithms in their respective sub-domains. Finally, a model is proposed along with various algorithms and their future projections.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_25-GRAPH_MINING_SUB_DOMAINS_AND_A_FRAMEWORK_FOR_INDEXING_A_GRAPHICAL_APPROACH.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Medical Technology on Expansion in Healthcare Expenses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040424</link>
        <id>10.14569/IJACSA.2013.040424</id>
        <doi>10.14569/IJACSA.2013.040424</doi>
        <lastModDate>2013-04-30T18:00:35.1470000+00:00</lastModDate>
        
        <creator>Shakir Khan</creator>
        
        <creator>Dr. Mohamed Fahad AlAjmi</creator>
        
        <subject>medical technology; health costs; health care; research and development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>The impact of medical technology on the expansion of health care expenses has long been a subject of essential interest, mainly in the context of long-term projections of health spending, which must deal with the issue of the applicability of historical trends to future periods. The idea of this paper is to assess an approximate range for the contribution of technological change to growth in health spending, and to assess factors which might adjust this impact in the future. Based on the studies re-examined, we estimated that roughly half of the growth in actual per capita health care costs is attributable to the introduction and diffusion of new medical technology, within a plausible range of 38 to 62 percent of that growth.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_24-Impact_of_Medical_Technology_on_Expansion_in_Healthcare_Expenses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards a Fraud Prevention E-Voting System </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040423</link>
        <id>10.14569/IJACSA.2013.040423</id>
        <doi>10.14569/IJACSA.2013.040423</doi>
        <lastModDate>2013-04-30T18:00:33.4130000+00:00</lastModDate>
        
        <creator>Dr. Magdi Amer</creator>
        
        <creator>Dr. Hazem El-Gendy</creator>
        
        <subject>e-voting, security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Election falsification is one of the biggest problems facing third world countries, as well as developed countries, with respect to cost and time. In this paper, guidelines for building a legally binding, fraud-proof electronic voting system are presented, and its limitations are discussed.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_23-Towards_a_Fraud_Prevention_E-Voting_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Deployment Scheme for Homogeneous Distribution of Randomly Deployed Mobile Sensor Nodes in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040422</link>
        <id>10.14569/IJACSA.2013.040422</id>
        <doi>10.14569/IJACSA.2013.040422</doi>
        <lastModDate>2013-04-30T18:00:31.6830000+00:00</lastModDate>
        
        <creator>Ajay Kumar</creator>
        
        <creator>Vikrant Sharma</creator>
        
        <creator>D. Prasad</creator>
        
        <subject>Active MSNs, Desired location, Candidate location, Communication range, Sensing range etc.  </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>One of the most active research areas in wireless sensor networks is coverage. The efficiency of a sensor network is measured in terms of its coverage area and connectivity, so these factors must be considered during deployment. In this paper, we present a scheme for the homogeneous distribution of randomly distributed mobile sensor nodes (MSNs) in the deployment area. The deployment area is square in shape and is divided into a number of concentric regions centered at the Base Station (BS), separated by half of the communication range; the deployment area is further divided into a number of regular hexagons. To achieve maximum coverage and better connectivity, the MSNs position themselves at the centers of the hexagons on instructions provided by the BS, which is located at one corner of the deployment area. The simulation results show that the presented scheme is better than the CPVF and FLOOR schemes in terms of the number of MSNs required for the same coverage area, the average movement required by MSNs to reach their desired locations, and energy efficiency.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_22-Distributed_Deployment_Scheme_for_Homogeneous_Distribution_of_Randomly_Deployed_Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reducing Attributes in Rough Set Theory with the Viewpoint of Mining Frequent Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040421</link>
        <id>10.14569/IJACSA.2013.040421</id>
        <doi>10.14569/IJACSA.2013.040421</doi>
        <lastModDate>2013-04-30T18:00:29.5600000+00:00</lastModDate>
        
        <creator>Thanh-Trung Nguyen</creator>
        
        <creator>Phi-Khu Nguyen</creator>
        
        <subject>accumulating frequent patterns; attribute reduction; maximal set; maximal random prior set; mining frequent patterns; rough set</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>The main objective of the Attribute Reduction problem in Rough Set Theory is to find and retain the set of attributes whose values vary most between objects in an Information System or Decision System. Meanwhile, Mining Frequent Patterns aims at finding items that appear together in transactions more often than a given threshold. The two problems therefore have similarities. From this, the idea formed is to solve the Attribute Reduction problem from the viewpoint and with the methods of Mining Frequent Patterns. The main difficulty of the Attribute Reduction problem is its execution time, as the problem is NP-hard. This article proposes two new algorithms for Attribute Reduction: one with linear complexity, and one that achieves a global optimum using the concepts of Maximal Random Prior Set and Maximal Set.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_21-Reducing_Attributes_in_Rough_Set_Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time Variant Change Analysis in Satellite Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040420</link>
        <id>10.14569/IJACSA.2013.040420</id>
        <doi>10.14569/IJACSA.2013.040420</doi>
        <lastModDate>2013-04-30T18:00:27.8600000+00:00</lastModDate>
        
        <creator>Rachita Sharma</creator>
        
        <creator>Sanjay Kumar Dubey</creator>
        
        <subject>Satellite Images; SOFM; ANN; Supervised Classification; Unsupervised Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>This paper describes time variant changes in satellite images using the Self Organizing Feature Map (SOFM) technique, a form of Artificial Neural Network. We take a satellite image and find the time variant changes using this technique with the help of MATLAB. The paper reviews remotely sensed data analysis with neural networks. First, we present an overview of the main concepts underlying Artificial Neural Networks (ANNs), including the main architectures and learning algorithms. Then, the main tasks that involve ANNs in remote sensing are described. As an application, we explain the back propagation algorithm, since it is widely used and many other algorithms are derived from it. Two techniques are used for classification in pattern recognition: Supervised Classification and Unsupervised Classification. In the supervised learning technique, the network knows the target and has to change accordingly to produce the desired output for each presented input sample. Most previous work has been done on supervised classification. In this study we present the classification of satellite images using the unsupervised classification method of ANNs.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_20-Time_Variant_Change_Analysis_in_Satellite_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Framework using RBF and SVM for Direct Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040419</link>
        <id>10.14569/IJACSA.2013.040419</id>
        <doi>10.14569/IJACSA.2013.040419</doi>
        <lastModDate>2013-04-30T18:00:26.1430000+00:00</lastModDate>
        
        <creator>M. Govidarajan</creator>
        
        <subject>Direct Marketing; Ensemble; Radial Basis Function; Support Vector Machine; Classification Accuracy.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. This paper addresses the use of an ensemble of classification methods for direct marketing, which has become an important application field for data mining. In direct marketing, companies or organizations try to establish and maintain a direct relationship with their customers in order to target them individually for specific product offers or for fund raising. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. In this research work, a new hybrid classification method is proposed by combining classifiers in a heterogeneous environment using an arcing classifier, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. Modified training sets are formed by resampling from the original training set; classifiers are constructed using these training sets and then combined by voting. Empirical results illustrate that the proposed hybrid system provides a more accurate direct marketing system.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_19-A_Hybrid_Framework_using_RBF_and_SVM_for_Direct_Marketing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimization Query Process of Mediators Interrogation Based On Combinatorial Storage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040418</link>
        <id>10.14569/IJACSA.2013.040418</id>
        <doi>10.14569/IJACSA.2013.040418</doi>
        <lastModDate>2013-04-30T18:00:23.2730000+00:00</lastModDate>
        
        <creator>L. Cherrat</creator>
        
        <creator>M. Ezziyyani</creator>
        
        <creator>M. Essaaidi</creator>
        
        <subject>Mediation; Datawarehouse; Optimisation; Classification </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>In a distributed environment where a query involves several heterogeneous sources, communication costs must be taken into consideration. In this paper we describe a query optimization approach using dynamic programming for a set of integrated heterogeneous sources. The objective of the optimization is to minimize the total processing time, including load processing, query rewriting, and communication costs, to facilitate inter-site communication, and to optimize the time of data transfer between sites. Moreover, the ability to store data in more than one central site provides more flexibility in terms of security/safety and network overload. In contrast to optimizers that consider a restricted search space, the proposed optimizer searches the closed subsets of sources and independency relationships, which may be deep linear or hierarchical trees. In particular, query execution can start its traversal anywhere over any subset, and not only from a specific source.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_18-Optimization_Query_Process_of_Mediators_Interrogation_Based_On_Combinatorial_Storage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent mutli-object retrieval system for historical mosaics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040417</link>
        <id>10.14569/IJACSA.2013.040417</id>
        <doi>10.14569/IJACSA.2013.040417</doi>
        <lastModDate>2013-04-30T18:00:21.2130000+00:00</lastModDate>
        
        <creator>Wafa Maghrebi</creator>
        
        <creator>Anis B. Ammar</creator>
        
        <creator>Adel M. Alimi</creator>
        
        <creator>Mohamed A. Khabou</creator>
        
        <subject>retrieval; mosaics; metadata; classification; multi-objects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>In this work we present a Mosaics Intelligent Retrieval System (MIRS) for digital museums. The objective of this work is to attain a semantic interpretation of images of historical mosaics. We use fuzzy logic techniques and a semantic similarity measure to extract knowledge from the images for multi-object indexing. The extracted knowledge provides users (experts and laypersons) with an intuitive way to describe and to query the images in the database. Our contribution in this paper is, firstly, to define semantic fuzzy linguistic terms to encode the object position and the inter-object spatial relationships in a mosaic image; secondly, to present a fuzzy color quantization approach using the human perceptual HSV color space; and finally, to classify the mosaic images semantically using a semantic similarity measure. The automatically extracted knowledge is collected and translated into XML to create mosaic metadata. The system uses a simple Graphic User Interface (GUI) in natural language and applies the classification approach both to the mosaic image database and to user queries, to limit the image classes considered in the retrieval process. MIRS is tested on images from the exceptional Tunisian collection of complex mosaics. Experimental results are based on queries of various complexities, which yielded recall and precision rates of 86.6% and 87.1%, respectively, while the classification approach gives an average success rate of 76%.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_17-An_Intelligent_mutli-object_retrieval_system_for_historical_mosaics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Conflicts Reconciliation as a Viable Solution for Semantic Heterogeneity Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040416</link>
        <id>10.14569/IJACSA.2013.040416</id>
        <doi>10.14569/IJACSA.2013.040416</doi>
        <lastModDate>2013-04-30T18:00:18.3270000+00:00</lastModDate>
        
        <creator>Walaa S. Ismail</creator>
        
        <creator>Mona M. Nasr</creator>
        
        <creator>Torky I. Sultan</creator>
        
        <creator>Ayman E. Khedr</creator>
        
        <subject>Data Integration; Heterogeneous Sources; Interoperability; Semantic Conflicts; Context; Reconciliation Ontology.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Achieving semantic interoperability is a current challenge in the field of data integration, where semantic conflicts occur when the participating sources and receivers use different or implicit data assumptions. Providing a framework that automatically detects and resolves semantic conflicts is a daunting task for many reasons: it should preserve the local autonomy of the integrated sources and provide a standard query language for accessing the integrated data on a global basis. Many existing traditional and ontology-based approaches have tried to achieve semantic interoperability, but they have drawbacks that make them inappropriate for integrating data from a large number of participating sources. We propose the Semantic Conflicts Reconciliation (SCR) framework, an ontology-based system in which all data semantics are explicitly described in the knowledge representation phase and automatically taken into account through the interpretation mediation service phase, so that conflicts are detected and resolved automatically at query time.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_16-Semantic_Conflicts_Reconciliation_as_a_Viable_Solution_for_Semantic_Heterogeneity_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review of Computation Solutions by Mobile Agents in an Unsafe Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040415</link>
        <id>10.14569/IJACSA.2013.040415</id>
        <doi>10.14569/IJACSA.2013.040415</doi>
        <lastModDate>2013-04-30T18:00:16.6430000+00:00</lastModDate>
        
        <creator>Anis Zarrad</creator>
        
        <creator>Yassine Daadaa</creator>
        
        <subject>Distributed algorithm; Mobile Agent; Network Decontamination; Black Hole Search; Network Exploration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Exploration in an unsafe environment is one of the major problems that can be seen as a basic building block for many distributed mobile protocols. In such an environment, either the nodes (hosts) or the agents can pose a danger to the network. Two cases are considered. In the first case, the dangerous node is called a black hole, a node where incoming agents are trapped, and the problem is for the agents to locate it. In the second case, the dangerous agent is a virus, an agent that moves between nodes infecting them, and the problem is for the “good” agents to capture it and decontaminate the network. In this paper, we present several solutions to the black hole search and network decontamination problems and analyze their efficiency. Efficiency is evaluated in terms of complexity, with an emphasis on minimizing the number of simultaneous decontaminating elements active in the system while performing the decontamination.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_15-A_Review_of_Computation_Solutions_by_Mobile_Agents_in_an_Unsafe_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning by Modeling (LbM): Understanding Complex Systems by Articulating Structures, Behaviors, and Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040414</link>
        <id>10.14569/IJACSA.2013.040414</id>
        <doi>10.14569/IJACSA.2013.040414</doi>
        <lastModDate>2013-04-30T18:00:14.9270000+00:00</lastModDate>
        
        <creator>Kamel Hashem</creator>
        
        <creator>David Mioduser</creator>
        
        <subject>learning by modeling; simulation; complexity; mental models; educational technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Understanding the behavior of complex systems has become a focal issue for scientists in a wide range of disciplines. Making sense of a complex system requires that a student construct a network of concepts and principles about the complex phenomena being learned. This paper describes part of a project on Learning-by-Modeling (LbM). Many features of complex systems make it difficult for students to develop deep understanding. Previous research indicates that involvement with modeling scientific phenomena and complex systems can play a powerful role in science learning. Some researchers dispute this view, indicating that models and modeling do not contribute to understanding complexity concepts, since they increase the cognitive load on students. In this study we investigated the effect of different modes of involvement in exploring scientific phenomena using computer simulation tools on students&#39; mental models from the perspective of structure, behaviour and function. Quantitative and qualitative methods are used to report on 121 freshman students who engaged in participatory simulations about complex phenomena exhibiting emergent, self-organized and decentralized patterns. Results show that LbM plays a major role in students&#39; concept formation about complexity concepts.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_14-Learning_by_Modeling_LbM_Understanding_Complex_Systems_by_Articulating_Structures_Behaviors_and_Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Translation of Pronominal Anaphora from English to Telugu Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040413</link>
        <id>10.14569/IJACSA.2013.040413</id>
        <doi>10.14569/IJACSA.2013.040413</doi>
        <lastModDate>2013-04-30T18:00:13.2100000+00:00</lastModDate>
        
        <creator>T. Suryakanthi</creator>
        
        <creator>Dr. S.V.A.V. Prasad</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>GNP: gender, number, person; SL: source language (English); TL: target language (Telugu); S: singular; P: plural; M: masculine; F: feminine; N: neuter; VBD: past tense verb form; VBZ: 3rd person singular present verb form; VBP: non-3rd person singular present verb form; MD: modal</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Discourses are linguistic structures above the sentence level; a discourse is a coherent sequence of sentences. Discourse analysis is concerned with the coherent processing of text segments larger than the sentence, and this requires something more than just the interpretation of the individual sentences. Cohesion is one phenomenon that operates at the discourse level: a text is cohesive if its elements link together, either forward or backward. Pronominal referencing is one method for linking sentences. This paper presents the issues in translating pronominal references from English to the Telugu language. This work handles the resolution and generation of personal pronouns whose antecedents appear before the anaphora. An algorithm is developed for the translation of pronominal references.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_13-Translation_of_Pronominal_Anaphora_from_English_to_Telugu_Language.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hierarchical Low Power Consumption Technique with Location Information for Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040412</link>
        <id>10.14569/IJACSA.2013.040412</id>
        <doi>10.14569/IJACSA.2013.040412</doi>
        <lastModDate>2013-04-30T18:00:11.5100000+00:00</lastModDate>
        
        <creator>Susumu Matsumae</creator>
        
        <creator>Fukuhito Ooshita</creator>
        
        <subject>wireless sensor networks; geographical adaptive fidelity; energy conservation; network lifetime </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>In wireless sensor networks composed of battery-powered sensor nodes, one of the main issues is how to reduce power consumption at each node. The usual approach to this problem is to activate only the necessary nodes (e.g., those nodes which compose a backbone network) and to put the other nodes to sleep. One such algorithm using location information is GAF (Geographical Adaptive Fidelity), which has been enhanced to HGAF (Hierarchical Geographical Adaptive Fidelity). In this paper, we show that the energy efficiency of HGAF can be further improved by modifying the manner of dividing the sensor field. We also provide a theoretical bound for this problem.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_12-Hierarchical_Low_Power_Consumption_Technique_with_Location_Information_for_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of other-cell interferences on downlink capacity in WCDMA Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040411</link>
        <id>10.14569/IJACSA.2013.040411</id>
        <doi>10.14569/IJACSA.2013.040411</doi>
        <lastModDate>2013-04-30T18:00:09.7930000+00:00</lastModDate>
        
        <creator>Fadoua Thami Alami</creator>
        
        <creator>Noura Aknin</creator>
        
        <creator>Ahmed El Moussaoui</creator>
        
        <subject>WCDMA; planning process; downlink; capacity estimation; other-cell interferences; own-cell interferences; Node B total required power; admission control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Before establishing a UMTS network, operators must carry out a planning process to ensure a good quality of service (QoS) for mobile stations belonging to WCDMA cells. This process consists of estimating a set of parameters characterizing the radio cell in the downlink direction. Among them, we are interested in the Node B total required power PTot and the maximum cell capacity, in the case of voice only and in the case of voice/video. To study the effect of the other-cell interference power, modeled as a fraction fDL of the own-cell received power, on the various radio parameters described above, we focused on two different scenarios: the first based on an isolated cell and the second on multiple cells. In addition, when a WCDMA cell reaches its maximum capacity, the introduction of admission control algorithms is essential to maintain the QoS of the ongoing mobile stations. For this purpose, we propose an admission control algorithm based both on the Node B total required power and on the cell loading factor. This algorithm gives rigorous results compared to the existing ones in the literature.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_11-Impact_of_other-cell_interferences_on_downlink_capacity_in_WCDMA_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D CAD model reconstruction of a human femur from MRI images </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040410</link>
        <id>10.14569/IJACSA.2013.040410</id>
        <doi>10.14569/IJACSA.2013.040410</doi>
        <lastModDate>2013-04-30T18:00:08.0930000+00:00</lastModDate>
        
        <creator>Mohammed RADOUANI</creator>
        
        <creator>Youssef AOURA</creator>
        
        <creator>Benaissa EL FAHIME</creator>
        
        <creator>Latifa OUZIZI</creator>
        
        <subject>biomechanics; MRI imaging; 3D reconstruction; femur</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Medical practice and the life sciences take full advantage of progress in engineering disciplines, in particular the computer-assisted placement technique in hip surgery. This paper describes the three-dimensional model reconstruction of a human femur from MRI images. The developed program makes it possible to obtain a digital 3D femur shape recognized by all CAD software and allows accurate placement of the femoral component. This technique provides precise measurement of implant alignment during hip resurfacing or total hip arthroplasty, thereby reducing the risk of component mal-positioning and femoral neck notching.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_10-3D_CAD_model_reconstruction_of_a_human_femur_from_MRI_images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Steganography System using Lucas Sequence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040409</link>
        <id>10.14569/IJACSA.2013.040409</id>
        <doi>10.14569/IJACSA.2013.040409</doi>
        <lastModDate>2013-04-30T18:00:06.3930000+00:00</lastModDate>
        
        <creator>Fahd Alharbi</creator>
        
        <subject>Steganography; LSB; Lucas; PSNR; PRNG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Steganography is the process of embedding data into a media form such as image, voice, or video. The major methods used for data hiding operate in the frequency domain or the spatial domain. In the frequency domain, the secret data bits are inserted into the coefficients of the image pixels&#39; frequency representation, such as the Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT) and Discrete Wavelet Transform (DWT). In the spatial domain, on the other hand, the secret data bits are inserted directly into the decomposition of the image pixels&#39; values. The Least Significant Bit (LSB) method is the most widely used spatial domain method for data hiding. LSB embeds the secret message&#39;s bits into the least significant bit plane (binary decomposition) of the image in a sequential manner. The LSB method is simple, but it poses some critical issues: the secret message is easily detected and attacked due to the sequential embedding process, and embedding into a higher bit plane would degrade the image quality. In this paper, we propose a novel data hiding method based on the Lucas number system. We use the Lucas number system to decompose the images&#39; pixel values, allowing the use of a higher bit plane for embedding without degrading the image&#39;s quality. The experimental results show that the proposed method achieves a better Peak Signal to Noise Ratio (PSNR) than the LSB method for both grayscale and color images. Moreover, the security of the hidden data is enhanced by using Pseudo Random Number Generators (PRNG) to select the secret data bits to be embedded and the image&#39;s pixels used for embedding.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_9-Novel_Steganography_System_using_Lucas_Sequence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studies and a Method to Minimize and Control the Jitter in Optical Based Communication System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040408</link>
        <id>10.14569/IJACSA.2013.040408</id>
        <doi>10.14569/IJACSA.2013.040408</doi>
        <lastModDate>2013-04-30T18:00:04.6770000+00:00</lastModDate>
        
        <creator>N. Suresh Kumar</creator>
        
        <creator>R. Sridevi</creator>
        
        <creator>Dr. D.V. R. K. Reddy</creator>
        
        <creator>V. Sridevi</creator>
        
        <subject>Optics; Jitter; pipeline; Clock; propagation delay; high speed data.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>In recent years, optical communication systems have been used extensively as attractive solutions to the increasing data rates in telecommunication systems and various other applications. At present, two types of communication schemes are mostly used in data communication, namely asynchronous transmission and synchronous transmission, depending on their timing and frame format. Both transmission systems, however, face serious complications from jitter in data propagation. Jitter can degrade the performance of a transmission system by introducing bit errors and uncontrolled offsets or displacements in the digital signals, and it creates severe problems in high data rate systems. Jitter needs to be minimized in the communication system; otherwise it also degrades the performance of the systems interconnected with the main circuit. This happens due to improper synchronization or management of the clock scheme in the communication system: an improperly organized clock scheme propagates faulty data and clock signals to all other interconnected circuits. In the present work a new clock scheme is discussed to minimize jitter in data propagation.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_8-Studies_and_a_Method_to_Minimize_and_Control_the_Jitter_in_Optical_Based_Communication_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing a Stochastic Input Oriented Data Envelopment Analysis (SIODEA) Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040407</link>
        <id>10.14569/IJACSA.2013.040407</id>
        <doi>10.14569/IJACSA.2013.040407</doi>
        <lastModDate>2013-04-30T18:00:02.9770000+00:00</lastModDate>
        
        <creator>Basma E. El-Demerdash</creator>
        
        <creator>Ihab A. El-Khodary</creator>
        
        <creator>Assem A. Tharwat</creator>
        
        <subject>Data Envelopment Analysis; Stochastic Variables; Input Oriented; Performance Measure; Efficiency Measurement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Data Envelopment Analysis (DEA) is a powerful quantitative tool that provides a means to obtain useful information about the efficiency and performance of firms, organizations, and all sorts of functionally similar, relatively autonomous operating units, known as Decision Making Units (DMUs). Usually the investigated DMUs are characterized by a vector of multiple inputs and multiple outputs. Unfortunately, not all inputs and/or outputs are deterministic; some can be stochastic. The main concern of this paper is to develop an algorithm to help organizations evaluate their performance when some inputs are stochastic. The developed algorithm is for a Stochastic Input Oriented Model based on Chance Constrained Programming, where the stochastic inputs are normally distributed, while the remaining inputs and all outputs are deterministic.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_7-Developing_a_Stochastic_Input_Oriented_Data_Envelopment_Analysis_SIODEA_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of K-Means and Fuzzy C-Means Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040406</link>
        <id>10.14569/IJACSA.2013.040406</id>
        <doi>10.14569/IJACSA.2013.040406</doi>
        <lastModDate>2013-04-30T18:00:01.2770000+00:00</lastModDate>
        
        <creator>Soumi Ghosh</creator>
        
        <creator>Sanjay Kumar Dubey</creator>
        
        <subject>clustering; k-means; fuzzy c-means; time complexity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>In the software arena, data mining technology has been considered a useful means for identifying patterns and trends in large volumes of data. This approach is basically used to extract unknown patterns from large sets of data for business as well as real-time applications. It is a computational intelligence discipline which has emerged as a valuable tool for data analysis, new knowledge discovery and autonomous decision making. The raw, unlabeled data from a large dataset can be classified initially in an unsupervised fashion by using cluster analysis, i.e., clustering: the assignment of a set of observations into clusters so that observations in the same cluster may in some sense be treated as similar. The outcome of the clustering process and the efficiency of its domain application are generally determined by the algorithms used. There are various algorithms which can be used to solve this problem. In this research work, two important clustering algorithms, namely the centroid-based K-Means and the representative-object-based FCM (Fuzzy C-Means) clustering algorithms, are compared. These algorithms are applied and their performance is evaluated on the basis of the efficiency of the clustering output. The number of data points as well as the number of clusters are the factors upon which the behaviour patterns of both algorithms are analyzed. FCM produces results close to those of K-Means clustering, but it still requires more computation time than K-Means clustering.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_6-Comparative_Analysis_of_K-Means_and_Fuzzy_C_Means_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Isolation of Packet Dropping Attacker in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040405</link>
        <id>10.14569/IJACSA.2013.040405</id>
        <doi>10.14569/IJACSA.2013.040405</doi>
        <lastModDate>2013-04-30T17:59:59.5770000+00:00</lastModDate>
        
        <creator>Ahmed Mohamed Abdalla</creator>
        
        <creator>Ahmad H. Almazeed</creator>
        
        <creator>Imane Aly Saroit</creator>
        
        <creator>Amira Kotb</creator>
        
        <subject>MANETS; IDS; OLSR; DIPDAM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Several approaches have been proposed for Intrusion Detection Systems (IDS) in Mobile Ad hoc Networks (MANETs). Due to their lack of infrastructure and of a well-defined perimeter, MANETs are susceptible to a variety of attacker types. To develop a strong security mechanism, it is necessary to understand how malicious nodes can attack MANETs. A new IDS mechanism based on End-to-End (E2E) communication between the source and the destination is presented for securing the Optimized Link State Routing (OLSR) protocol. This mechanism is named Detection and Isolation of Packet Dropping Attackers in MANETs (DIPDAM). DIPDAM is based on three ID messages: Path Validation Message (PVM), Attacker Finder Message (AFM), and Attacker Isolation Message (AIM). The simulation results showed that the proposed mechanism is able to detect any number of attackers while keeping a reasonably low overhead in terms of network traffic.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_5-Detection_and_Isolation_of_Packet_Dropping_Attacker_in_MANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Narrowing Down Learning Research: Technical Documentation in Information Systems Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040404</link>
        <id>10.14569/IJACSA.2013.040404</id>
        <doi>10.14569/IJACSA.2013.040404</doi>
        <lastModDate>2013-04-30T17:59:57.8770000+00:00</lastModDate>
        
        <creator>Thomas Puchleitner</creator>
        
        <subject>problem-based learning; self-regulated learning; self-directed learning; product learning; customer learning; consumer learning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Learning how to use technical products is of high interest for customers as well as businesses. Besides product usability, technical documentation in its various forms plays a major role in the acceptance of innovative products. Software applications partly integrate personalized learning strategies, but recent developments in information and communication technology extend these potentials to the non-software sector too. Mobile devices such as smartphones allow the linking of the physical and virtual worlds and are thereby eligible instruments for product learning and the application of adequate learning theories. Very few scientific publications accurately addressing the learning of product features and functionalities can be identified. By applying a research profiling approach as a stepwise analysis of available publications, relevant learning paradigms and their corresponding scientific areas are identified. As this research topic relates to marketing as well as information systems research, the applied approach may also prove beneficial for other interdisciplinary endeavors.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_4-Narrowing_Down_Learning_Research_Technical_Documentation_in_Information_Systems_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of an Automatic Accessibility Evaluator to Validate a Virtual and Authenticated Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040403</link>
        <id>10.14569/IJACSA.2013.040403</id>
        <doi>10.14569/IJACSA.2013.040403</doi>
        <lastModDate>2013-04-30T17:59:56.1770000+00:00</lastModDate>
        
        <creator>Elisa Maria Pivetta</creator>
        
        <creator>Carla Flor</creator>
        
        <creator>Daniela Satomi Saito</creator>
        
        <creator>Vania Ribas Ulbricht</creator>
        
        <subject>automatic validation tool; WCAG 2.0; accessibility; WAVE</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>This article’s objective is to analyze automatic validation software compatible with the Web Content Accessibility Guidelines (WCAG) 2.0 in an authenticated environment. For the evaluation, the authenticated environment of Moodle, an open-source platform created for educational environments, was used as the test platform. Initially, a brief conceptualization of accessibility and the operation of these guidelines is given, and then the software to be tested, WAVE, was chosen. In the next step, the tool’s operation was evaluated and the study’s analysis was made, which allowed the comparison of the errors testable by WAVE with the guidelines of WCAG 2.0. As a result of the research, it was concluded that the WAVE tool performed well, even though it did not cover several WCAG 2.0 guidelines and did not classify its results within the accessibility principles of the Web Accessibility Initiative (WAI). The tool also showed itself more adequate for developers than for common users, who have no knowledge of Web programming languages.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_3-Analysis_of_an_Automatic_Accessibility_Evaluator_to_Validate_a_Virtual_and_Authenticated_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Efficient and Reliable Resource Allocation Model Based on Cellular Automaton Entropy for  Cloud Project Scheduling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040402</link>
        <id>10.14569/IJACSA.2013.040402</id>
        <doi>10.14569/IJACSA.2013.040402</doi>
        <lastModDate>2013-04-30T17:59:54.4730000+00:00</lastModDate>
        
        <creator>Huankai Chen</creator>
        
        <creator>Frank Wang</creator>
        
        <creator>Na Helian</creator>
        
        <subject>Resource Allocation; Cloud Project Scheduling; Entropy; Cellular Automaton; Cost-efficiency; Reliability; Complex System; Local Activity; Global Order; Disorder</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>Resource allocation optimization is a typical cloud project scheduling problem: a problem that limits a cloud system’s ability to execute and deliver a project as originally planned. Entropy, as a measure of the degree of disorder in a system, is an indicator of a system’s tendency to progress out of order and into a chaotic condition, and it can thus serve to measure a cloud system’s reliability for project scheduling. In this paper, a cellular automaton is used to model the complex cloud project scheduling system. Additionally, a method is presented to analyze the reliability of the cloud scheduling system by measuring the average resource entropy (ARE). Furthermore, a new cost-efficient and reliable resource allocation (CERRA) model is proposed based on cellular automaton entropy to aid decision makers in planning projects on the cloud. Finally, the proposed model is designed using the Matlab toolbox and simulated with three basic cloud scheduling algorithms: the First Come First Served (FCFS), Min-Min and Max-Min algorithms. The simulation results show that the proposed model can achieve a cost-efficient and reliable resource allocation strategy for running projects in the cloud environment.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_2-A_Cost-Efficient_and_Reliable_Resource_Allocation_Model_Based_on_Cellular_Automaton_Entropy_for_Cloud_Project_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-learning in Higher Educational Institutions in Kuwait: Experiences and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040401</link>
        <id>10.14569/IJACSA.2013.040401</id>
        <doi>10.14569/IJACSA.2013.040401</doi>
        <lastModDate>2013-04-30T17:59:52.7430000+00:00</lastModDate>
        
        <creator>Mubarak M Alkharang</creator>
        
        <creator>George Ghinea</creator>
        
        <subject>e-learning; higher education; adoption; Kuwait; developed countries; e-learning barriers.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(4), 2013</description>
        <description>E-learning as an organizational activity started in the developed countries, and as such, the adoption models and experiences in the developed countries are taken as a benchmark in the literature. This paper investigated the barriers that affect or prevent the adoption of e-learning in higher educational institutions in Kuwait as an example of a developing country, and compared them with those found in developed countries. Semi-structured interviews were used to collect the empirical data from academics and managers in higher educational institutions in Kuwait. The research findings showed that the main barriers in Kuwait were lack of management awareness and support, technological barriers, and language barriers. From those, two barriers were specific to Kuwait (lack of management awareness and language barriers) when compared with developed countries. Recommendations for decision makers and suggestions for further research are also considered in this study.</description>
        <description>http://thesai.org/Downloads/Volume4No4/Paper_1-E-learning_in_Higher_Educational_Institutions_in_Kuwait_Experiences_and_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Restaurant: Online Restaurant Management System for Android</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030108</link>
        <id>10.14569/SpecialIssue.2013.030108</id>
        <doi>10.14569/SpecialIssue.2013.030108</doi>
        <lastModDate>2013-04-13T16:10:36.8970000+00:00</lastModDate>
        
        <creator>Dr. Vinayak Ashok Bharadi</creator>
        
        <creator>Vivek Ranjan</creator>
        
        <creator>Nikesh Masiwal</creator>
        
        <creator>Nikita Verma</creator>
        
        <subject>Recommendation; Tablet; menu; Intelligent; Android application; restaurant </subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>The simplicity and ease of access of a menu are the main things that facilitate ordering food in a restaurant. A tablet menu completely revolutionizes the patron’s dining experience. Existing programs provide an app that restaurants can use to feed their menus into iOS &amp; Android based tablets and make it easier for diners to flip, swipe &amp; tap through the menu. Here we aim to provide restaurants with a tablet menu that recommends dishes based on a recommendation algorithm which has not been implemented elsewhere. In addition, we run the app on an Android based tablet &amp; not on an iOS based tablet, which is a more expensive alternative. We use a cloud-based server for storing the database, which makes the system inexpensive &amp; secure.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_8-E-Restaurant_Online_Restaurant_Management_System_for_Android.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance of Turbo Product Code in Wimax</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030107</link>
        <id>10.14569/SpecialIssue.2013.030107</id>
        <doi>10.14569/SpecialIssue.2013.030107</doi>
        <lastModDate>2013-04-13T16:10:34.9930000+00:00</lastModDate>
        
        <creator>Trushita Chaware</creator>
        
        <creator>Nileema Pathak</creator>
        
        <subject>CTC; eBCH; AWGN; Code Rate; OFDM</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>IEEE 802.16 is a standard for the Broadband Wireless Access (BWA) air interface. 802.16e supports mobile broadband wireless access, an additional feature over its predecessors, which support only fixed wireless access. Binary Convolutional Turbo Coding (CTC) is used as the mandatory Forward Error Correction method in 802.16e. In this paper, the performance of a simple and efficient optional coding scheme, namely Turbo Product Code (TPC), is proposed for the 802.16e system and compared with CTC.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_7-Performance_of_Turbo_Product_Code_in_Wimax.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GaAs, InP, InGaAs, GaInP, p+-i-n+ Multiplication measurements for Modeling of Semiconductor as photo detectors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030106</link>
        <id>10.14569/SpecialIssue.2013.030106</id>
        <doi>10.14569/SpecialIssue.2013.030106</doi>
        <lastModDate>2013-04-13T16:10:31.3900000+00:00</lastModDate>
        
        <creator>Sanjay. C. Patil</creator>
        
        <creator>B.K.Mishra</creator>
        
        <subject>Photo detectors; Impact ionization</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>Optoelectronics is one of the thrust areas of recent research activity. One of the key components of the optoelectronic family is the photo detector, widely used in broadband communication, optical computing, optical transformers, optical control, etc. The present paper reports multiplication measurements on GaAs, InP, InGaAs, and GaInP p+-i-n+ structures with varying i-region thicknesses, together with an investigation of the applicability of the local ionization theory. A local ionization coefficient is found to be increasingly unrepresentative of the position-dependent values in the device as the i-region thickness is reduced below 1 um.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_6-GaAs_InP_InGaAs_GaInP_Multiplication_measurements_for_Modeling_of_Semiconductor_as_photo_detectors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SWiFiNet : A Task Distributed System Architecture for WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030105</link>
        <id>10.14569/SpecialIssue.2013.030105</id>
        <doi>10.14569/SpecialIssue.2013.030105</doi>
        <lastModDate>2013-04-13T16:10:29.4870000+00:00</lastModDate>
        
        <creator>A. W. Rohankar</creator>
        
        <creator>Mrinal K. Naskar</creator>
        
        <creator>Amitava Mukherjee</creator>
        
        <subject>Wsn; Reusable; Reconfigurable; Network Architecture</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>WSN is a technology straddling many application areas of this millennium. Research and practice in WSNs have been constrained by many conflicting requirement specifications for WSN architecture. The paradox is that WSN characteristics and traditional approaches to network operation are diametrically opposed. Power optimization, resource constraints and a small form factor are the main characteristics of the WSN node. Taking these main characteristics into consideration, ‘SWiFiNet’, a task-distributed reusable architecture for WSNs, is proposed in this paper. The focus of the developed architecture is to identify and reuse system components while preserving the sensor node characteristics. The complex network functionality is pushed onto overlay second-tier devices, leaving the sensor node free for application development. This work demonstrates the implementation of ‘SWiFiNet’ on a hardware platform and a network simulator using complete portability of reusable system components. Simulation and hardware results are presented which illustrate that ‘SWiFiNet’ performs well and provides an application-independent generic framework for WSN application development.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_5-SWiFiNet_A_Task_Distributed_System_Architecture_for_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation and Statistical Analysis of MANET routing Protocols for RPGM and MG</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030104</link>
        <id>10.14569/SpecialIssue.2013.030104</id>
        <doi>10.14569/SpecialIssue.2013.030104</doi>
        <lastModDate>2013-04-13T16:10:24.9930000+00:00</lastModDate>
        
        <creator>Prajakta M. Dhamanskar</creator>
        
        <creator>Dr. Nupur Giri</creator>
        
        <subject>MANET; AODV; DSR; TORA; RPGM; MG.</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>A Mobile Ad Hoc Network (MANET) is a collection of mobile nodes forming a temporary network. In MANETs, routing protocols are classified as Proactive, Reactive and Hybrid. The work presented here evaluates the performance of three Reactive routing protocols, namely AODV, DSR and TORA, under six performance metrics: packet delivery ratio, routing overhead, packet loss, normalized routing load, throughput and end-to-end delay. The nodes follow two realistic mobility models, the Reference Point Group Mobility (RPGM) model and the Manhattan Grid (MG) model. This work also presents a statistical analysis of AODV, DSR and TORA in the MG and RPGM models under low load with low speed, average load with average speed, high load with high speed, and very high load with high speed. The contribution of this work is beneficial in deciding which protocol to choose for better QoS.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_4-Performance_Evaluation_and_Statistical_Analysis_of_MANET_routing_Protocols_for_RPGM_and_MG.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Satellite Image Retrieval using Modified Block Truncation Coding and Kekre Transform Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030103</link>
        <id>10.14569/SpecialIssue.2013.030103</id>
        <doi>10.14569/SpecialIssue.2013.030103</doi>
        <lastModDate>2013-04-13T16:10:21.4070000+00:00</lastModDate>
        
        <creator>A R Sawant</creator>
        
        <creator>Dr. Vinayak A Bharadi</creator>
        
        <creator>Dr. H B Kekre</creator>
        
        <subject>Image retrieval; CBIR; MBTC; Kekre’s pattern; Satellite image retrieval</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>Satellite images and Content Based Image Retrieval (CBIR) have been interesting topics of research for years. Specifically, CBIR focuses on developing technologies for bridging the semantic gap that currently prevents wide deployment of image content-based search engines. Image search engines currently in use mostly rely on human-generated data, such as text. The annotation of an image depends entirely on the perception of the person who stores it in the database. It is time-consuming as well as error-prone; therefore, search engines using text input return various non-relevant images. To overcome the drawbacks of text-based image retrieval, content-based image retrieval is introduced, where the retrieval of images depends entirely on the features of the images. Mostly, content-based methods are based on low-level descriptions, while high-level or semantic descriptions are beyond current capabilities. In this paper, we try to implement a technique to fill this gap. This technique can eventually be extended to allow for a content-based similarity type of search, to find different query blocks in a satellite image. We fire an object as a query and the most similar blocks are retrieved. This helps to trace a particular object in a large satellite image.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_3-Satellite_Image_Retrieval_using_Modified_Block_Truncation_Coding_and_Kekre_Transform_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Removal of False Negatives in Moving Object Detection Using RGB color space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030102</link>
        <id>10.14569/SpecialIssue.2013.030102</id>
        <doi>10.14569/SpecialIssue.2013.030102</doi>
        <lastModDate>2013-04-13T16:10:17.7870000+00:00</lastModDate>
        
        <creator>Shailaja Surkutlawar</creator>
        
        <creator>Prof. Ramesh Kulkarni</creator>
        
        <subject>Moving object detection; background subtraction; mixture of Gaussians (MoG); shadow detection; RGB color space</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>Moving object detection is a fundamental step in many vision-based applications, and background subtraction is the typical method. Many background models have been introduced to deal with different problems. The method based on a mixture of Gaussians offers a good balance between accuracy and complexity, and is used frequently by many researchers, but it still cannot provide satisfactory results in some cases. Video-surveillance and traffic analysis systems can be heavily improved using vision-based techniques to extract, manage and track objects in the scene. However, problems arise due to shadows: in particular, moving shadows can affect the correct localization, measurement and detection of moving objects. This work presents a technique for shadow detection and suppression used in a system based on the RGB color space for moving visual object detection and tracking. Experimental results show that the proposed approach can significantly enhance shadow suppression and that moving objects are correctly detected.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_2-Removal_of_False_Negatives_in_Moving_Object_Detection_Using_RGB_color_space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tuning of PID Controllers using Advanced Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2013.030101</link>
        <id>10.14569/SpecialIssue.2013.030101</id>
        <doi>10.14569/SpecialIssue.2013.030101</doi>
        <lastModDate>2013-04-13T16:10:15.6630000+00:00</lastModDate>
        
        <creator>Ms. Reshmi P. Pillai</creator>
        
        <creator>Sharad P. Jadhav</creator>
        
        <creator>Dr. Mukesh D. Patil</creator>
        
        <subject>PID tuning; Genetic Algorithm; Multi-parent crossover; Elite crossover; Discrete recombination.</subject>
        <description>Special Issue(SpecialIssue), 3(1), 2013</description>
        <description>Proportional-Integral-Derivative (PID) controllers have been widely used in the process industry for decades, from small industry to high-technology industry. Yet they often remain poorly tuned when conventional tuning methods are used: conventional techniques like the Ziegler-Nichols method do not give optimized values for the PID controller parameters. In this paper, we optimize the PID controller parameters using a Genetic Algorithm (GA), a stochastic global search method that replicates the process of evolution. Using a genetic algorithm, tuning the controller results in the optimum controller being evaluated for the system every time. The GA is based on an iterative process of selection, recombination, mutation and evaluation. The performance of the Advanced Genetic Algorithm (AGA) is compared with Guo Tao&#39;s Algorithm (GTA) and the Elite Multi-Parent Crossover Evolutionary Optimization Algorithm (EMPCOA). AGA has a different replacement strategy from EMPCOA, which helps maintain population diversity and thus reduces the computational time, as demonstrated by the results presented here. The effectiveness of the AGA is also verified for a system with an unstable plant. The PID controller is also tuned with different error criteria, viz. Integral Time Absolute Error (ITAE), Integral Square Error (ISE) and Integral Absolute Error (IAE).</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo5/Paper_1-Tuning_of_PID_Controllers_using_Advanced_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>k-Modulus Method for Image Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040340</link>
        <id>10.14569/IJACSA.2013.040340</id>
        <doi>10.14569/IJACSA.2013.040340</doi>
        <lastModDate>2013-04-09T16:24:51.9430000+00:00</lastModDate>
        
        <creator>Firas A. Jassim</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>In this paper, we propose a new algorithm for a novel spatial image transformation. The proposed approach aims to reduce the bit depth used for image storage. The basic technique of the proposed transformation is based on the modulus operator. The goal is to transform the whole image into multiples of a predefined integer. Dividing the whole image by that integer guarantees that the new image is surely smaller in size than the original image. The k-Modulus Method could not be used as a stand-alone transform for image compression because of its high compression ratio. It could, however, be used as a scheme embedded in other image processing fields, especially compression. Owing to its high PSNR value, it could be amalgamated with other methods to facilitate the redundancy criterion.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_40-k-Modulus_Method_for_Image_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Draft dynamic student learning in design and manufacturing of complex shape parts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040332</link>
        <id>10.14569/IJACSA.2013.040332</id>
        <doi>10.14569/IJACSA.2013.040332</doi>
        <lastModDate>2013-04-09T16:24:50.0530000+00:00</lastModDate>
        
        <creator>Ivana Kleinedlerov&#225;</creator>
        
        <creator>Peter Kleinedler</creator>
        
        <creator>Alexander Jan&#225;c</creator>
        
        <creator>Ivan Buransk&#253;</creator>
        
        <subject>dynamic education blended learning; e-learning; CA systems.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>The contribution deals with the dynamic teaching of students through blended learning and online distance teaching, which can nowadays be considered a very effective and dynamic form of student education. The article focuses on the programming of CNC machines and the use of CAx systems for the production of a particular shape-complex part - a shearing knife. The article also presents proposed effective teaching resources. The motivation for this project is that dynamic education leads students to gain experience and skills, identify issues individually, develop creativity, and suggest variations of problem solving. The achieved way of education and its confirmed and verified positive results can be applied to various target groups of students and their fields of study.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_32-Draft_dynamic_student_learning_in_design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey of Environment and Demands Along with a Marketing Communications Plan for WatPutthabucha Market to Promote Agricultural Tourism through Main Media and Online Social Network Media</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040331</link>
        <id>10.14569/IJACSA.2013.040331</id>
        <doi>10.14569/IJACSA.2013.040331</doi>
        <lastModDate>2013-04-09T16:24:48.1670000+00:00</lastModDate>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Nakorn Thamwipat</creator>
        
        <subject>Environment; Marketing Communications Plan; Demands; Agricultural Tourism; Main Media; Online Social Network Media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>This study aimed to examine the current environment and demands and to make a marketing communications plan for WatPuttabucha Market to promote agricultural tourism through main media and online social network media. Moreover, it aimed to build up working experience in research with communities near the campus through the integration of course instruction and community service. The data were collected in the second term of academic year 2012, between January and February 2013, in WatPuttabucha Market and nearby communities. There were two sampling groups: King Mongkut’s University of Technology Thonburi students (50 persons) and WatPuttabucha Market and nearby community members (50 persons). In total, there were 100 persons for the survey, collected on an accidental basis. According to the data concerning the environment, WatPuttabucha Market had 9 interesting shops for agricultural tourism and 4 major tourist attractions. As for the demands, it was found that 47 students (or 94%) would like WatPuttabucha Market to be open as a site for agricultural tourism mainly on Saturday and Sunday; 47 persons from WatPuttabucha Market and nearby communities (or 94%) also would like it to be open mainly on Saturday and Sunday. As for the communications plan, it was found that there were 7 kinds of main media and 5 kinds of online social network media for check-in special events. The majority of students (mean score of 4.89 and standard deviation of 0.86) agreed with the integration of research in their Marketing Communication course because it allowed them to get more familiar with communities near the campus, and they recommended continuing this similar project the next year.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_31-A_Survey_of_Environment_and_Demands_Along_with_a_Marketing_Communications_Plan.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Viewpoint for Mining Frequent Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040326</link>
        <id>10.14569/IJACSA.2013.040326</id>
        <doi>10.14569/IJACSA.2013.040326</doi>
        <lastModDate>2013-04-09T16:24:46.2800000+00:00</lastModDate>
        
        <creator>Thanh-Trung Nguyen</creator>
        
        <creator>Phi-Khu Nguyen</creator>
        
        <subject>accumulating frequent patterns; data mining; frequent pattern; horizontal parallelization; representative set; vertical parallelization</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>According to the traditional viewpoint of data mining, transactions are accumulated over a long period of time (in years) in order to find the frequent patterns associated with a given support threshold, which are then applied in business practice as important experience for subsequent business processes. From this point of view, many algorithms have been proposed to mine frequent patterns. However, the huge number of transactions accumulated over a long time, and the need to handle all the transactions at once, remain challenges for existing algorithms. In addition, new characteristics of the business market and regular changes to business databases, with a very high frequency of add-delete-alter operations, demand a new frequent-pattern mining algorithm to meet the above challenges. This article proposes a new perspective in the field of mining frequent patterns: accumulating frequent patterns, along with a mathematical model and algorithms to solve the existing challenges.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_26-A_New_Viewpoint_for_Mining_Frequent_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fresnelet-Based Encryption of Medical Images using Arnold Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040322</link>
        <id>10.14569/IJACSA.2013.040322</id>
        <doi>10.14569/IJACSA.2013.040322</doi>
        <lastModDate>2013-04-09T16:24:44.3770000+00:00</lastModDate>
        
        <creator>Muhammad Nazeer</creator>
        
        <creator>Dai-Gyoung Kim</creator>
        
        <creator>Bibi Nargis</creator>
        
        <creator>Yasir Mehmood Malik</creator>
        
<subject>Fresnelet transform; wavelet transform; Arnold transform; data hiding; image encryption</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Medical images are commonly stored in digital media and transmitted via the Internet for various uses. If a medical image is altered, this can lead to a wrong diagnosis, which may create a serious health problem. Moreover, medical images in digital form can easily be modified by wiping off or adding small pieces of information intentionally for illegal purposes. Hence, the reliability of medical images is an important criterion in a hospital information system. In this paper, the Fresnelet transform is employed along with appropriate handling of the Arnold transform and the discrete cosine transform to provide secure distribution of medical images. This method presents a new data hiding system in which steganography and cryptography are used to prevent unauthorized data access. The experimental results exhibit high imperceptibility for embedded images and significant encryption of information images.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_22-A_Fresnelet-Based_Encryption_of_Medical_Images_using_Arnold_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diagnosing Learning Disabilities in a Special Education By an Intelligent Agent Based System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040321</link>
        <id>10.14569/IJACSA.2013.040321</id>
        <doi>10.14569/IJACSA.2013.040321</doi>
        <lastModDate>2013-04-09T16:24:42.4730000+00:00</lastModDate>
        
        <creator>Khaled Nasser elSayed</creator>
        
        <subject>Intelligent Agent; Learning Disabilities; Special Education; Semantic Network; Psych Pedagogy Evaluation; Exemplar Based Classification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>The presented paper provides an intelligent agent-based classification system for diagnosing and evaluating learning disabilities in special education students. It provides psycho-pedagogy profiles for those students and offers solution strategies with the best educational activities. It provides tools that allow class teachers to discuss psycho functions and basic skills for learning, and then performs psycho-pedagogy evaluation by comprising a series of strategies in a semantic network knowledge base. The system’s agent classifies a student’s disabilities based on the past experience it gained from exemplars that were classified by an expert and acquired in its knowledge base.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_21-Diagnosing_Learning_Disabilities_in_a_Special_Education_By_an_Intelligent_Agent_Based_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transmission Control for Fast Recovery of Rateless Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040305</link>
        <id>10.14569/IJACSA.2013.040305</id>
        <doi>10.14569/IJACSA.2013.040305</doi>
        <lastModDate>2013-04-09T16:24:40.5830000+00:00</lastModDate>
        
        <creator>Jau-Wu Huang</creator>
        
        <creator>Kai-Chao Yang</creator>
        
        <creator>Han-Yu Hsieh</creator>
        
        <creator>Jia-Shung Wang</creator>
        
        <subject>LT codes; broadcasting channel; degree distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Luby Transform (LT) codes are increasingly important in communication applications due to their fast encoding/decoding process and low complexity. However, LT codes are optimal only when the number of input symbols approaches infinity. Some studies have modified the degree distribution of LT codes to make them suitable for short symbol lengths. In this article, we propose an optimal coding algorithm to quickly recover all of the encoded symbols for LT codes. The proposed algorithm observes the coding status of each client and increases the coding performance by changing the transmission sequence of low-degree and high-degree encoding packets. Simulation results show that the resulting decoding overhead of our algorithm is lower than that of traditional LT codes, and that our algorithm is well suited to serving various clients in a broadcasting channel environment.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_5-Transmission_Control_for_Fast_Recovery_of_Rateless_Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward Evolution Strategies Application in Automatic Polyphonic Music Transcription using Electronic Synthesis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040337</link>
        <id>10.14569/IJACSA.2013.040337</id>
        <doi>10.14569/IJACSA.2013.040337</doi>
        <lastModDate>2013-04-09T16:24:38.6970000+00:00</lastModDate>
        
        <creator>Herve Kabamba Mbikayi</creator>
        
        <subject>evolution strategy; polyphonic music transcription; FFT; electronic synthesis; MIDI; notes; frequency; audio; signal; fundamental frequency; pitch detection; F0; chords; monophonic; contours</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>We present in this paper a new approach to polyphonic music transcription using evolution strategies (ES). Automatic music transcription is a complex process that still remains an open challenge. Using an audio signal to be transcribed as the target for our ES, the information needed to generate a MIDI file can be extracted from it. Many techniques exist in the literature, and a few of them have applied evolutionary algorithms to this problem, treating it as a search-space problem. However, ES had never been applied until now. The experiments showed that by using these machine learning tools, some shortcomings of other evolutionary-algorithm-based approaches to transcription can be overcome, including the computation cost and the time to convergence. As evolution strategies use self-adapting parameters, we show in this paper that by correctly tuning the strategy parameter that controls the standard deviation, fast convergence toward the optima can be triggered, which, as the results show, performs the transcription of the music with good accuracy and in a short time. In the same context, the computation task is tackled using parallelization techniques, thus reducing the computation time and the overall transcription time.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_37-Toward_Evolution_Strategies_Application_in_Automatic_Polyphonic_Music_Transcription_using_Electronic_Synthesis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A novel optical network on chip design for future generation of multiprocessors system on chip</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040336</link>
        <id>10.14569/IJACSA.2013.040336</id>
        <doi>10.14569/IJACSA.2013.040336</doi>
        <lastModDate>2013-04-09T16:24:36.8100000+00:00</lastModDate>
        
        <creator>M. Channoufi</creator>
        
        <creator>P. Lecoy</creator>
        
        <creator>S. Le Beux</creator>
        
        <creator>R. Attia</creator>
        
        <creator>B. Delacressonniere</creator>
        
        <subject>3D-Optical network on chip, multi-level optical layer, optical control.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>The paper presents a novel Optical Network on Chip (ONoC), called “OMNoC”, relying on the multi-level optical layer design paradigm. The proposed ONoC relies on multi-level microring resonators allowing efficient light coupling between superposed waveguides. Such microring resonators avoid waveguide crossings, which contributes to reducing propagation losses. Preliminary experimental results demonstrate the potential of the multi-level optical layer for reducing power consumption and increasing scalability in the proposed ONoC.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_36-A_novel_optical_network_on_chip_design_for_future_generation_of_multiprocessors_system_on_chip.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic algorithms to optimize base station sitting in WCDMA networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040333</link>
        <id>10.14569/IJACSA.2013.040333</id>
        <doi>10.14569/IJACSA.2013.040333</doi>
        <lastModDate>2013-04-09T16:24:34.9200000+00:00</lastModDate>
        
        <creator>Najat Erradi</creator>
        
        <creator>Noura Aknin</creator>
        
        <creator>Fadoua Thami Alami</creator>
        
        <creator>Ahmed El Moussaoui</creator>
        
<subject>Okumura-Hata model; UMTS; W-CDMA; Genetic Algorithms</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>In a UMTS network, radio planning cannot be based only on signal predictions; it must also consider the traffic distribution, the power control mechanism, as well as the power limits and the signal quality constraints. The present work aims to optimize the number of base stations used in the WCDMA radio network. In this paper, we propose a mathematical programming model for optimizing the base station locations, considered for the uplink (mobile to BS) direction, which takes into account the typical power control mechanism of WCDMA. The two contrasting objectives of the optimization process are traffic coverage maximization and installation cost minimization. A memetic algorithm (MA) (genetic algorithm + local search) is proposed to find good approximate solutions to this NP-hard problem. Numerical results are obtained for real instances, generated using classical propagation models.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_33-Genetic_algorithms_to_optimize_base_station_sitting_in_WCDMA_networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-resolution Analysis of Multi-spectral Palmprints using Hybrid Wavelets for Identification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040329</link>
        <id>10.14569/IJACSA.2013.040329</id>
        <doi>10.14569/IJACSA.2013.040329</doi>
        <lastModDate>2013-04-09T16:24:32.5500000+00:00</lastModDate>
        
        <creator>Dr. H.B. Kekre</creator>
        
        <creator>Dr. Tanuja Sarode</creator>
        
        <creator>Rekha Vig</creator>
        
<subject>Hand-based biometrics; Hybrid wavelet; Multi-resolution; Energy compaction; fusion</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Palmprint is a relatively new physiological biometric used in identification systems due to its stable and unique characteristics. The vivid texture information of palmprint present at different resolutions offers abundant prospects in personal recognition. This paper describes a new method to authenticate individuals based on palmprint identification. In order to analyze the texture information at various resolutions, we introduce a new hybrid wavelet, which is generated using two or more component transforms incorporating both their properties.  A unique property of this wavelet is its flexibility to vary the number of components at each level of resolution and hence can be made suitable for various applications. Multi-spectral palmprints have been identified using energy compaction of the hybrid wavelet transform coefficients. The scores generated for each set of palmprint images under red, green and blue illuminations are combined using score-level fusion using AND and OR operators. Comparatively low values of equal error rate and high security index have been obtained for all fusion techniques. The experimental results demonstrate the effectiveness and accuracy of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_29-Multi_resolution_Analysis_of Multi_spectral_Palmprints_using_Hybrid_Wavelets_for_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pilot Study: The Use of Electroencephalogram to Measure Attentiveness towards Short Training Videos</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040327</link>
        <id>10.14569/IJACSA.2013.040327</id>
        <doi>10.14569/IJACSA.2013.040327</doi>
        <lastModDate>2013-04-09T16:24:30.6770000+00:00</lastModDate>
        
        <creator>Paul Alton Nussbaum</creator>
        
        <creator>Rosalyn Hobson Hargraves</creator>
        
        <subject>Electroencephalogram; EEG; Signal Analysis; Machine Learning; Attentiveness; Training; Videos</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Universities, schools, and training centers are seeking to improve their computer-based [3] and distance learning classes through the addition of short training videos, often referred to as podcasts [4]. As distance learning and computer based training become more popular, it is of great interest to measure if students are attentive to recorded lessons and short training videos.
The proposed research presents a novel approach to this issue. Signal processing of electroencephalogram (EEG) has proven useful in measuring attentiveness in a variety of applications such as vehicle operation and listening to sonar [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15]. Additionally, studies have shown that EEG data can be correlated to the ability of participants to remember television commercials days after they have seen them [16]. Electrical engineering presents a possible solution with recent advances in the use of biometric signal analysis for the detection of affective (emotional) response [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27].
Despite the wealth of literature on the use of EEG to determine attentiveness in a variety of applications, the use of EEG for the detection of attentiveness towards short training videos has not been studied, nor is there a great deal of consistency with regard to specific methods that would imply a single method for this new application. Indeed, there is great variety in EEG signal processing and machine learning methods described in the literature cited above and in other literature [28] [29] [30] [31] [32] [33] [34]. 
This paper presents a novel method which uses EEG as an input to an automated system that measures a participant’s attentiveness while watching a short training video. This paper provides the results of a pilot study, including a structured comparison of signal processing and machine learning methods to find optimal solutions which can be extended to other applications.
</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_27-Pilot_Study_The_Use_of_Electroencephalogram_to_Measure_Attentiveness_towards_Short_Training_Videos.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mitigating Cyber Identity Fraud using Advanced Multi Anti-Phishing Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040325</link>
        <id>10.14569/IJACSA.2013.040325</id>
        <doi>10.14569/IJACSA.2013.040325</doi>
        <lastModDate>2013-04-09T16:24:28.7730000+00:00</lastModDate>
        
        <creator>Yusuf Simon Enoch</creator>
        
        <creator>Adebayo Kolawole John</creator>
        
        <creator>Adetula Emmanuel Olumuyiwa</creator>
        
<subject>Security; authentication; attack; Cybercrime; Identity theft</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Developing countries are gradually transiting from a cash-based to an electronic-based economy by virtue of cashless policy implementation. With this development, cyber criminals and hackers who hitherto attacked businesses and individuals across the Atlantic now see it as a new venture for their criminal acts and are thus redirecting their energies towards exploiting possible loopholes in the electronic payment system in order to perpetrate fraud. In this paper, we propose an enhanced approach to detecting phishing attempts and preventing unauthorized online banking withdrawals and transfers. We employ Semantic Content Analysis, Earth Mover’s Distance, and biometric authentication with fingerprints to construct a model. We demonstrate the efficacy of the implemented model with the experiments conducted, and a good and considerable result was achieved.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_25-Mitigating_Cyber_Identity_Fraud_using_Advanced_Multi_Anti-Phishing_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Posteriori Error Estimator for Mixed Approximation of the Navier-Stokes Equations with the  Boundary Condition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040324</link>
        <id>10.14569/IJACSA.2013.040324</id>
        <doi>10.14569/IJACSA.2013.040324</doi>
        <lastModDate>2013-04-09T16:24:26.8730000+00:00</lastModDate>
        
        <creator>J. EL MEKKAOUI</creator>
        
        <creator>M A. BENNANI</creator>
        
        <creator>A.ELKHALFI</creator>
        
        <creator>A. ELAKKAD</creator>
        
        <subject>Navier-Stokes Equations;   boundary condition; Mixed Finite element method; Residual Error Estimator;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>In this paper, we introduce the Navier-Stokes equations with a new boundary condition. In this context, we show the existence and uniqueness of the solution of the weak formulation associated with the proposed problem. To solve the latter, we use a discretization by the mixed finite element method. In addition, two types of a posteriori error indicators are introduced and are shown to give global error estimates that are equivalent to the true error. In order to evaluate the performance of the method, the numerical results are compared with some previously published works and with others from commercial codes such as the ADINA system.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_24-A_Posteriori_Error_Estimator_for_Mixed_Approximation_of_the_Navier-Stokes_Equations_with_the_C_a_b_c_Boundary_Condition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Type Method for the Structured Variational Inequalities Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040323</link>
        <id>10.14569/IJACSA.2013.040323</id>
        <doi>10.14569/IJACSA.2013.040323</doi>
        <lastModDate>2013-04-09T16:24:24.9700000+00:00</lastModDate>
        
        <creator>Chengjiang Yin</creator>
        
        <subject>structured variational inequality problem; algorithm; globally convergent; R-linear convergent </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>In this paper, we present an algorithm for solving the structured variational inequality problem, prove the global convergence of the new method without carrying out any line search technique, and give the global R-linear convergence rate under suitable conditions.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_23-A_New_Type_Method_for_the_Structured_Variational_Inequalities_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Joint Operation in Public Key Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040320</link>
        <id>10.14569/IJACSA.2013.040320</id>
        <doi>10.14569/IJACSA.2013.040320</doi>
        <lastModDate>2013-04-09T16:24:23.0670000+00:00</lastModDate>
        
        <creator>Dragan Vidakovic</creator>
        
        <creator>Olivera Nikolic</creator>
        
        <creator>Jelena Kaljevic</creator>
        
        <creator>Dusko Parezanovic</creator>
        
        <subject>Public-key cryptography; RSA &amp; ECC Digital Signature; Inverse element; Code</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>We believe that there is no real data protection without our own tools. Therefore, our permanent aim is to have more of our own codes. In order to achieve that, it is necessary that a lot of young researchers become interested in cryptography. We believe that the encoding of cryptographic algorithms is an important step in that direction, and it is the main reason why in this paper we present a software implementation of finding the inverse element, the operation which is essentially related to both ECC (Elliptic Curve Cryptography) and the RSA schemes of digital signature.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_20-Joint_operation_in_public_key_cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of a WiMAX network to evaluate the performance of MAC IEEE 802.16 during the IR phase of Network Entry and Initialization </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040319</link>
        <id>10.14569/IJACSA.2013.040319</id>
        <doi>10.14569/IJACSA.2013.040319</doi>
        <lastModDate>2013-04-09T16:24:21.1770000+00:00</lastModDate>
        
        <creator>Namratha M</creator>
        
        <creator>Pradeep</creator>
        
        <creator>Manu G V</creator>
        
        <subject>Backoff delay; Circular topology; Linear topology; Markov Model; Pervasive computing; Ubiquitous computing; WiMAX</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Pervasive Computing, also called Ubiquitous Computing, means “being present everywhere at once” or “constantly encountered”. The main idea behind pervasive computing systems is that they improve living by performing computations on their own, without having to be monitored by anyone. These systems are intended to become invisible to the user, i.e., they perform their tasks without the user’s knowledge.
To achieve this environment, the underlying requirement is a network. One of the biggest constraints in achieving it is the “Last Mile” problem, which refers to the last leg of delivering connectivity from a communications provider to a customer. In recent years there has been increasing interest in wireless technologies for subscriber access, as an alternative to the traditional twisted-pair local loop.
WiMAX (Worldwide Interoperability for Microwave Access) is a telecommunications technology that provides wireless transmission of data and is based on the IEEE 802.16 standard (also called Broadband Wireless Access). 802.16 uses paired radio channels, an Up Link (UL) channel and a Down Link (DL) channel, to establish communication between a Base Station (BS) and a Subscriber Station (SS). When an SS wants to establish connectivity with a BS, it goes through the Network Entry and Initialization procedure, of which Initial Ranging (IR) is a very important part. IR is the process of acquiring the correct timing offset and power adjustments so that the SS’s transmissions are aligned to maintain the UL connection with the BS. All the SSs of a BS compete for the contention slots for their network entry. Whenever an SS has to transmit request packets, it performs the Truncated Binary Exponential Backoff procedure, the contention resolution method used in IEEE 802.16 networks.
Our focus here is to simulate a WiMAX network so as to evaluate the performance of MAC IEEE 802.16 during the IR phase of Network Entry and Initialization. We used Network Simulator-2 (NS-2) for our simulations, with a WiMAX “patch” that simulates the PHY and MAC features of a WiMAX network. We evaluated the performance of MAC IEEE 802.16 for various topologies.
</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_19-Simulation_of_a_WiMAX_network_to_evaluate_the_performance_of_MAC_IEEE_802.16_during_the_IR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sentiment Analyzer for Arabic Comments System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040317</link>
        <id>10.14569/IJACSA.2013.040317</id>
        <doi>10.14569/IJACSA.2013.040317</doi>
        <lastModDate>2013-04-09T16:24:19.2900000+00:00</lastModDate>
        
        <creator>Alaa El-Dine Ali Hamouda</creator>
        
        <creator>Fatma El-zahraa El-taher</creator>
        
        <subject>Analysis for Arabic comments; machine learning algorithms; sentiment analysis; opinion mining</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Today, the number of social network users is increasing, and millions of users share opinions on different aspects of life every day. Social networks are therefore rich sources of data for opinion mining and sentiment analysis. Users have also become more interested in following news pages on Facebook. Several posts, political ones for example, have thousands of user comments that agree or disagree with the post content. Such comments can be a good indicator of the community’s opinion about the post content. For politicians, marketers, decision makers, and others, sentiment analysis is required to know the percentage of users who agree, disagree, or are neutral with respect to a post. This raised the need to analyze users’ comments on Facebook. We focused on Arabic Facebook news pages for the task of sentiment analysis. We developed a corpus for sentiment analysis and opinion mining purposes, and then used different machine learning algorithms - decision trees, support vector machines, and Naive Bayes - to develop a sentiment analyzer. The performance of the system using each technique was evaluated and compared with the others.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_17-Sentiment_Analyzer_for_Arabic_Comments_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Technique for Suppression Four-Wave Mixing Effects in SAC-OCDMA Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040315</link>
        <id>10.14569/IJACSA.2013.040315</id>
        <doi>10.14569/IJACSA.2013.040315</doi>
        <lastModDate>2013-04-09T16:24:17.3870000+00:00</lastModDate>
        
        <creator>Ibrahim Fadhil Radhi</creator>
        
        <creator>S. A. Aljunid</creator>
        
        <creator>Hilal A. Fadhil</creator>
        
        <creator>Thanaa Hussein Abd</creator>
        
        <subject>Optical code division multiple access (OCDMA); Spectral amplitude coding (SAC); Multi diagonal (MD); Random Diagonal (RD); Four-Wave Mixing (FWM); Light Emitting Diode (LED)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>A new technique is presented for suppressing FWM in SAC-OCDMA systems, based on adding an idle code at the sideband of the code construction to generate virtual FWM power at the sideband of the signal, then subtracting this virtual FWM power from the original FWM power in the system and filtering the data part at the channel. The technique is applied to both SAC codes, the Random Diagonal (RD) code and the Multi Diagonal (MD) code. Moreover, in terms of cost, the reported technique is considered cost-effective, as an LED light source is used to generate the sideband codes. The results show that the FWM power is reduced by approximately 25 dBm after applying the technique. For example, for the RD code at 40 km fiber length and 15 dBm input power over SMF fiber, the FWM power is approximately -55 dBm before applying the technique and approximately -90 dBm afterwards. Similarly, for the MD code with the same parameter values, the FWM power is approximately -61 dBm before and approximately -81 dBm after applying the technique. These results also impact the Bit Error Rate (BER): for example, for the RD code at -10 dBm input power and 35 km fiber length, the BER is 1.6&#215;10^-23 before applying the technique and 4.05&#215;10^-28 afterwards. For the MD code, the BER is 9.4&#215;10^-22 before and 7.4&#215;10^-31 after applying the technique.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_15-New_Technique_for_Suppression_Four-Wave_Mixing_Effects_in_SAC-OCDMA_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Facial Expression Recognition via Sparse Representation and Multiple Gabor filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040314</link>
        <id>10.14569/IJACSA.2013.040314</id>
        <doi>10.14569/IJACSA.2013.040314</doi>
        <lastModDate>2013-04-09T16:24:15.4830000+00:00</lastModDate>
        
        <creator>Rania Salah El-Sayed</creator>
        
        <creator> Prof.Dr. Ahmed El Kholy</creator>
        
        <creator> Prof.Dr. Mohamed Youssri El-Nahas</creator>
        
        <subject>Facial expression recognition (FER); L1-minimization; sparse representation; Gabor filters; support vector machine (SVM).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Facial expression recognition plays an important role in human communication and has become one of the most challenging tasks in the pattern recognition field. It has many applications, such as human-computer interaction, video surveillance, forensic applications, and criminal investigations, among many other fields. In this paper we propose a method for facial expression recognition (FER) that provides new insights into two issues in FER: feature extraction and robustness. For feature extraction, we use a sparse representation approach after applying multiple Gabor filters, and then use a support vector machine (SVM) as the classifier. We conduct extensive experiments on a standard facial expression database to verify the performance of the proposed method, and we compare the results with another approach.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_14-Robust_Facial_Expression_Recognition_via_Sparse_Representation_And_Multiple_Gabor_filters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interference Aware Channel Assignment Scheme in Multichannel Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040313</link>
        <id>10.14569/IJACSA.2013.040313</id>
        <doi>10.14569/IJACSA.2013.040313</doi>
        <lastModDate>2013-04-09T16:24:12.9700000+00:00</lastModDate>
        
        <creator>Sunmyeng Kim</creator>
        
        <subject>interference; channel assignment; multichannel; mesh network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Wireless mesh networks (WMNs) are gaining significant attention as a way to provide wireless broadband service. Nodes in a wireless mesh network can communicate with each other directly or through one or more intermediate nodes. Because of multi-hop transmissions over multiple contending and competing channels, the performance of wireless mesh networks decreases, and supporting high performance is an important challenge in multi-hop mesh networks. Nassiri et al. proposed the Molecular MAC protocol for the autonomic assignment and use of multiple channels to improve network performance. In the Molecular MAC protocol, nodes are either nuclei or electrons in an atom. Neighboring atoms use orthogonal channels to carry out data transmissions in parallel. Each nucleus selects an idle channel not currently occupied by its neighboring atoms, with assistance from the electrons in the same atom. However, this protocol has the following drawback: since a nucleus allocates a channel with help only from the electrons in its own transmission range, it cannot recognize atoms in its interference range. Allocating the same channel to such neighboring atoms therefore deteriorates network performance. To resolve this problem, we propose a channel allocation scheme that takes interference into account. Various simulation results verify that the proposed scheme allocates different channels to neighboring atoms within the interference range.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_13-Interference_Aware_Channel_Assignment_Scheme_in_Multichannel_Wireless_Mesh_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative Learning Skills in Multi-touch Tables  for UML Software Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040311</link>
        <id>10.14569/IJACSA.2013.040311</id>
        <doi>10.14569/IJACSA.2013.040311</doi>
        <lastModDate>2013-04-09T16:24:11.0830000+00:00</lastModDate>
        
        <creator>Mohammed Basheri</creator>
        
        <creator>Malcolm Munro</creator>
        
        <creator>Liz Burd</creator>
        
        <creator>Nilufar Baghaei</creator>
        
        <subject>Collaborative Design; Multi-touch Table; PC-based; Collaborative Learning Skills</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>The use of Multi-touch interfaces for collaborative learning has received significant attention. Their ability to synchronously accommodate multiple users is an advantage in co-located collaborative design tasks. This paper explores the Multi-touch interface’s potential in collaborative Unified Modeling Language diagramming by comparing it to a PC-based tool, looking at the Collaborative Learning Skills and amount of physical interactions in both conditions. The results show that even though participants talked more in the PC-based condition, the use of the Multi-touch table increased the amount of physical interactions, and encouraged the “Creative Conflict” skills amongst the team members. </description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_11-Collaborative_Learning_Skills_in_Multi-touch_Tables_for_UML_Software_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resolution of Unsteady Navier-stokes Equations with the C a,b Boundary condition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040308</link>
        <id>10.14569/IJACSA.2013.040308</id>
        <doi>10.14569/IJACSA.2013.040308</doi>
        <lastModDate>2013-04-09T16:24:09.1670000+00:00</lastModDate>
        
        <creator>Jaouad EL-Mekkaoui</creator>
        
        <creator> Ahmed Elkhalfi</creator>
        
        <creator>Abdeslam Elakkad</creator>
        
        <subject>Unsteady Navier-Stokes Equations; Mixed Finite Element Method; C_(a,b,c) boundary condition; Adina system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>In this work, we introduce the unsteady incompressible Navier-Stokes equations with a new boundary condition that generalizes the Dirichlet and Neumann conditions. We derive an adequate variational formulation of the time-dependent Navier-Stokes equations, and then prove an existence theorem and a uniqueness result. A mixed finite-element discretization is used to generate the nonlinear system corresponding to the Navier-Stokes equations, and the linearized system is solved using the GMRES method. In order to evaluate the performance of the method, the numerical results are compared with others coming from commercial codes such as the Adina system.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_8-Resolution_of_Unsteady_Navier-stokes_Equations_with_the_ C_a_b_Boundary_condition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quadrant Based WSN Routing Technique By Shifting Of Origin</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040307</link>
        <id>10.14569/IJACSA.2013.040307</id>
        <doi>10.14569/IJACSA.2013.040307</doi>
        <lastModDate>2013-04-09T16:24:07.2630000+00:00</lastModDate>
        
        <creator>Nandan Banerji</creator>
        
        <creator>Uttam Kumar Kundu</creator>
        
        <creator>Pulak Majumder</creator>
        
        <creator>Debabrata Sarddar</creator>
        
        <subject>WSN; Quadrant; Packet; Hops</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>A sensor is a miniaturized, low-powered (typically battery-powered), limited-storage device which can sense natural phenomena or objects and convert them into electrical energy, or vice versa, using a transduction process. A Wireless Sensor Network (WSN) is a wireless network built from such sensors, which communicate with each other over a wireless medium. They can be deployed in environments that are inaccessible or difficult for humans to reach. There are vast applications in the automated world, such as robotics, avionics, oceanographic study, space, and satellites. The routing of a packet from a source node to a destination should be efficient in terms of energy, communication overhead, and the number of intermediate hops. The proposed scheme helps route packets through fewer intermediate nodes, as neighbors are selected based on their quadrant position.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_7-Quadrant_Based_WSN_Routing_Technique_By_Shifting_Of_Origin.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approach for Detection of Hard Exudates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040338</link>
        <id>10.14569/IJACSA.2013.040338</id>
        <doi>10.14569/IJACSA.2013.040338</doi>
        <lastModDate>2013-04-09T16:24:05.3430000+00:00</lastModDate>
        
        <creator>Dr.H. B. Kekre</creator>
        
        <creator>Dr. Tanuja K. Sarode</creator>
        
        <creator>Ms. Tarannum Parkar</creator>
        
        <subject>Diabetic Retinopathy; Hard Exudates; Clustering; Mathematical Morphology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>Diabetic Retinopathy is a severe and widespread eye disease which can lead to blindness; hence, its early detection is a must. Hard Exudates are the primary sign of Diabetic Retinopathy, and early treatment is possible if we detect Hard Exudates at the earliest stage. The main focus of this paper is techniques for efficient detection of Hard Exudates. The first technique detects Hard Exudates using mathematical morphology. The second technique proposes a hybrid approach for detection of Hard Exudates, consisting of three stages: preprocessing, clustering, and post-processing. In the preprocessing stage, we resize the image and apply morphological dilation. The clustering stage applies the Linde-Buzo-Gray and k-means algorithms to detect Hard Exudates. In the post-processing stage, we remove all unwanted feature components from the image to get accurate results. We evaluate the performance of the above techniques using the DIARETDB1 database, which provides ground truth. Optimal results are obtained when the number of clusters chosen is 8 in both clustering algorithms.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_38-Hybrid_Approach_for_Detection_of_Hard_Exudates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extended Standard Hough Transform for Analytical Line Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040339</link>
        <id>10.14569/IJACSA.2013.040339</id>
        <doi>10.14569/IJACSA.2013.040339</doi>
        <lastModDate>2013-04-09T16:24:03.4570000+00:00</lastModDate>
        
        <creator>Abdoulaye SERE</creator>
        
        <creator>Oumarou SIE</creator>
        
        <creator>Eric ANDRES</creator>
        
        <subject>hough; transform; recognition; discrete</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>This paper presents a new method which extends the Standard Hough Transform for the recognition of naive or standard lines in a noisy picture. The proposed idea preserves the power of the Standard Hough Transform, particularly a limited size of the parameter space and the recognition of vertical lines. The dual of a segment and the dual of a pixel have been proposed to lead to a new definition of the preimage. Many alternative approximations could be established for the sinusoid curves of the dual of a pixel to obtain new line recognition algorithms.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_39-Extended_Standard_Hough_Transform_for_Analytical.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Aware Fragmented Memory Architecture with a Switching Power Supply for Sensor Nodes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040335</link>
        <id>10.14569/IJACSA.2013.040335</id>
        <doi>10.14569/IJACSA.2013.040335</doi>
        <lastModDate>2013-04-09T16:24:01.5530000+00:00</lastModDate>
        
        <creator>Harish H Kenchannavar</creator>
        
        <creator>M.M.Math</creator>
        
        <creator>Umakant P.Kulkarni</creator>
        
        <subject>modified memory architecture; switching power supply; sensor node; energy conserve; idle energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>The basic sensor node architecture in a wireless sensor network contains sensing, transceiver, processing and memory units along with the power supply module. Because the typical sensor network application is surveillance, these networks may be deployed in a remote environment without human intervention. The sensor nodes are also battery-powered tiny devices with limited memory capacity. Given these limitations, the architecture can be modified to utilise energy efficiently during memory accesses by dividing the memory into multiple banks and including a memory switching controller unit and a power switching module. This modification conserves energy: power is supplied only to the bank or part of the memory being accessed instead of powering the entire memory module, leading to efficient energy consumption. Simulations have been performed on the fragmented memory architecture by incorporating the M/M/1 queuing model. When packets get queued up, energy utilisation and packet drops at the sensor node are observed. Energy consumption is reduced by an average of 70%, and there is significantly less packet drop compared to the normal memory architecture. This increases node and network lifetime and prevents information loss.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_35-Energy-Aware_Fragmented_Memory_Architecture_with_a_Switching_Power_Supply_for_Sensor_Nodes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm to Match Ontologies on the Semantic Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040334</link>
        <id>10.14569/IJACSA.2013.040334</id>
        <doi>10.14569/IJACSA.2013.040334</doi>
        <lastModDate>2013-04-09T16:23:59.6630000+00:00</lastModDate>
        
        <creator>Alaa Qassim Al-Namiy</creator>
        
        <subject>Semantic Web; Ontology matching; WordNet; Information retrieval; web service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>It has been recognized that semantic data and knowledge extraction will significantly improve the capability of natural language interfaces to the semantic search engine. Semantic Web technology offers a vast scale of sharing and integration of distributed data sources by combining information easily, enabling users to find information easily and efficiently.
In this paper, we explore some issues of developing algorithms for the Semantic Web. The first algorithm builds the semantic contextual meaning by scanning the text, extracting knowledge, and automatically inferring the meaning of the information from text that contains the search words in any sentence, correlating it with hierarchical classes defined in the Ontology as a result of input resources. The second discovers the hierarchical relationships among terms (i.e., the semantic relations across hierarchical classifications). The proposed algorithms rely on a number of resources, including Ontology and WordNet.
</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_34-Algorithm_to_Match_Ontologies_on_the_Semantic_Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Psychological Status Estimation by Gaze Location Monitoring Using Eye-Based Human- Computer Interaction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040330</link>
        <id>10.14569/IJACSA.2013.040330</id>
        <doi>10.14569/IJACSA.2013.040330</doi>
        <lastModDate>2013-04-09T16:23:57.7600000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>psychological status; gaze estimation; computer input by human eyes only</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>A method for psychological status estimation by gaze location monitoring using Eye-Based Human-Computer Interaction (EBHCI) is proposed. Through an experiment with English book reading of e-learning content, the relation between psychological status and the distance between the correct location of English sentence reading points and the corresponding location derived from EBHCI is clarified. Psychological status is estimated from the peak alpha frequency derived from EEG signals. It is concluded that psychological status can be estimated from gaze location monitoring.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_30-Method_for_Psychological_Status_Estimation_by_Gaze_Location_Monitoring_Using_Eye_Based_Human_Computer_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Camera Mouse Including “Ctrl-Alt-Del” Key Operation Using Gaze, Blink, and Mouth Shape</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040328</link>
        <id>10.14569/IJACSA.2013.040328</id>
        <doi>10.14569/IJACSA.2013.040328</doi>
        <lastModDate>2013-04-09T16:23:55.8730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>Camera Mouse; User Gaze; Combination keys function</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>This paper presents a camera mouse system with an additional feature: the &quot;CTRL - ALT - DEL&quot; key. Previous gaze-based camera mouse systems consider only how to obtain gaze and make selections. We propose a gaze-based camera mouse with a &quot;CTRL - ALT - DEL&quot; key. An infrared camera is placed on top of the display while the user looks ahead. User gaze is estimated based on eye gaze and head pose. Blink and mouth detection are used to operate the &quot;CTRL - ALT - DEL&quot; key. Pupil knowledge is used to improve the robustness of eye gaze estimation across different users, and a Gabor filter is used to extract face features. Skin color information and face features are used to estimate head pose. Experiments on each method have been conducted, and the results show that all methods work well. With this system, users can troubleshoot the camera mouse themselves, which makes the camera mouse more sophisticated.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_28-Camera_Mouse_Including_Ctrl-Alt-Del_Key_Operation_Using_Gaze_Blink_and_Mouth_Shape.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Routing Discovery Algorithm Using Parallel Chase Packet </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040318</link>
        <id>10.14569/IJACSA.2013.040318</id>
        <doi>10.14569/IJACSA.2013.040318</doi>
        <lastModDate>2013-04-09T16:23:53.9700000+00:00</lastModDate>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>Yaser M. Khamayseh</creator>
        
        <creator>Amera Al-Ameri</creator>
        
        <creator>Wail E. Mardini</creator>
        
        <subject>MANET; Chase Packets; AODV; Broadcast Storm Problem.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
        <description>On-demand routing protocols for ad hoc networks, such as Ad Hoc On Demand Distance Vector (AODV), initiate a route discovery process when a route is needed by flooding the network with a route request packet. The route discovery process in such protocols depends on simple flooding as a broadcast technique due to its simplicity. Simple flooding results in packet congestion, route request overhead, and excessive collisions, known as the broadcast storm problem. A number of routing techniques have been proposed to control simple flooding. Ideally, the broadcast of route requests, i.e., the route discovery process, should stop as soon as the destination node is found. This frees the network from many redundant packets that may cause collision and contention.
In this paper, the chase packet technique is used with the standard AODV routing protocol to terminate fulfilled route requests. The chase packet is initiated by the source node and is broadcast in parallel with the route request packet. As soon as the destination is found, the chase packet tries to catch and discard the route request at an early stage, before it propagates further in the network.
Performance evaluation is conducted using simulation to compare the proposed scheme against an existing approach that uses the chase packet technique, the Traffic Locality Route Discovery Algorithm with Chase (TLRDA-C). Results reveal that the proposed scheme minimizes end-to-end packet delays and achieves low route request overhead.
</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_18-Routing_Discovery_Algorithm_Using_Parallel_Chase_Packet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Selection of Eigenvectors for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040316</link>
        <id>10.14569/IJACSA.2013.040316</id>
        <doi>10.14569/IJACSA.2013.040316</doi>
        <lastModDate>2013-04-09T16:23:52.0500000+00:00</lastModDate>
        
        <creator>Manisha Satone</creator>
        
        <creator>G.K.Kharate</creator>
        
        <subject>face recognition; PCA; wavelet transform; genetic algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Face recognition has advantages over other biometric methods. Principal Component Analysis (PCA) has been widely used in face recognition algorithms, but it has limitations such as poor discriminatory power and a large computational load. Because of these limitations of the existing PCA based approach, we apply PCA to a wavelet subband of the face image, and two methods are proposed to select the best eigenvectors for recognition. The proposed methods select important eigenvectors using a genetic algorithm and the entropy of the eigenvectors. Results show that, compared to the traditional method of selecting the top eigenvectors, the proposed methods give better results with fewer eigenvectors.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_16-Selection_of_Eigenvectors_for_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of Semi-Adaptive 190-200 KHz Digital Band Pass Filters for SAR Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040310</link>
        <id>10.14569/IJACSA.2013.040310</id>
        <doi>10.14569/IJACSA.2013.040310</doi>
        <lastModDate>2013-04-09T16:23:50.1470000+00:00</lastModDate>
        
        <creator>P Yadav</creator>
        
        <creator>A Khare</creator>
        
        <creator>K Parandham Gowd</creator>
        
<subject>Digital filter; XILINX and MATLAB software; Field Programmable Gate Arrays (FPGA); SPARTAN-3E; DSP Chips.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Technologies have advanced rapidly in the field of digital signal processing due to advances in high speed, low cost digital integrated chips. These technologies have further stimulated the ever increasing use of signal representation in digital form for purposes of transmission, measurement, control and storage. The design of digital filters, especially adaptive or semi-adaptive ones, is a pressing need for SAR applications. The aim of this research work is the design and performance evaluation of 380-400 kHz Bartlett, Blackman and Chebyshev semi-adaptive digital filters. XILINX and MATLAB software were used for the design. As part of the practical research work, these designs were translated to FPGA hardware using a SPARTAN-3E kit. They were optimized, analyzed, compared and evaluated keeping the sampling frequency at 5 MHz for a filter order of 64. The filters designed in both software and hardware were tested by passing a sinusoidal test signal of 381 kHz along with noise, and the filtered output signals are presented.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_10-Design_of_Semi-Adaptive_190-200_KHz_Digital_Band_Pass_Filters_for_SAR_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Recent Machine Learning Strategies in Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040309</link>
        <id>10.14569/IJACSA.2013.040309</id>
        <doi>10.14569/IJACSA.2013.040309</doi>
        <lastModDate>2013-04-09T16:23:48.1970000+00:00</lastModDate>
        
        <creator>Bhanu Prakash Battula</creator>
        
        <creator>Dr. R. Satya Prasad</creator>
        
        <subject>Data mining; classification; supervised learning; unsupervised learning; learning strategies.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Most existing classification techniques learn a dataset as a single homogeneous unit, in spite of the many differentiating attributes and complexities involved. Such traditional techniques require analysis of the dataset prior to learning, and by omitting this step they lose performance in terms of accuracy and AUC. To this end, many machine learning problems can be solved by carefully observing how humans learn and train, and then mimicking the same in machine learning.
This paper presents an updated literature survey of current and novel machine learning strategies for efficiently inducing models for supervised and unsupervised learning in data mining.
</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_9-An_Overview_of_Recent_Machine_Learning_Strategies_in_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Project Based CS/IS-1 Course with an Active Learning Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040306</link>
        <id>10.14569/IJACSA.2013.040306</id>
        <doi>10.14569/IJACSA.2013.040306</doi>
        <lastModDate>2013-04-09T16:23:46.2470000+00:00</lastModDate>
        
        <creator>Dr. Suvineetha Herath</creator>
        
        <creator>Dr. Ajantha Herath</creator>
        
        <creator>Mr. Mohammed A.R. Siddiqui</creator>
        
        <creator>Mr. Khuzaima AH. El-Jallad</creator>
        
        <subject>I/O; arithmetic expressions; if-else and switch conditional operations; for-while iterative computation; inheritance; polymorphism; recursion; searching and sorting algorithms; IDE.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>High level programming languages use system-defined and user-defined data types in computations. We have developed a project-based CS/IS-1 course to replace the traditional lecture-based classroom, helping students design and use at least two user-defined data types in their computations to solve real world problems. Abstract data types and basic programming constructs are introduced to students efficiently in an active learning environment using games. To assess and evaluate the changes made, we distributed the course module among our students and other instructors. This paper describes our experience in developing the project-based active learning environment.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_6-A_Project_Based_CSIS-1_Course_with_an_Active_Learning_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial-Temporal Variations of Turbidity and Ocean Current Velocity of the Ariake Sea Area, Kyushu, Japan Through Regression Analysis with Remote Sensing Satellite Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040304</link>
        <id>10.14569/IJACSA.2013.040304</id>
        <doi>10.14569/IJACSA.2013.040304</doi>
        <lastModDate>2013-04-09T16:23:44.2970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yuichi Sarusawa</creator>
        
        <subject>turbidity; ocean current; remote sensing satellite; regressive analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>A regression analysis based method for estimating turbidity and ocean current velocity with remote sensing satellite data is proposed. Through regression analysis of MODIS data against measured turbidity and ocean current velocity data, a regression equation is obtained that allows estimation of turbidity and ocean current velocity. With this regression equation and long term MODIS data, turbidity and ocean current velocity trends in the Ariake Sea area are clarified. A negative correlation between ocean current velocity and turbidity is also confirmed.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_4-Spatial-Temporal_Variations_of_Turbidity_and_Ocean_Current_Velocity_of_the_Ariake_Sea_Area.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Application Development Policy Object 3D Virtual Tour History Pacitan District based Multimedia </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040303</link>
        <id>10.14569/IJACSA.2013.040303</id>
        <doi>10.14569/IJACSA.2013.040303</doi>
        <lastModDate>2013-04-09T16:23:42.3800000+00:00</lastModDate>
        
        <creator>Muga Linggar Famukhit</creator>
        
        <creator>Maryono</creator>
        
        <creator>Lies Yulianto</creator>
        
        <creator>Bambang Eka Purnama</creator>
        
<subject>Interactive application development; 3D virtual tour; Pacitan history tourism; multimedia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Pacitan has a wide range of tourism activity; among its attractions are historical sites. These sites carry educational, historical and cultural value, and must be maintained and preserved as a tourism asset of Pacitan Regency. However, the historical sites are now rarely visited, and some students do not understand the history behind each of them. We therefore created an interactive 3D virtual application presenting Pacitan's historical tour sites, delivered as an interactive CD application. The purpose of this interactive application is to introduce Pacitan's historical tours to students and the community, and to provide interactive information media giving an overview of the history of the existing tourist sites in Pacitan. The benefit of this research is that students and the public will get to know the history of Pacitan's historical attractions; the application also serves as a medium for introducing the sites and as an information medium to help preserve them. In building the multimedia-based interactive 3D virtual application of Pacitan's historical attractions, the authors used library research, observation and interviews. The design uses 3ds Max 2010, Adobe Director 11.5, Adobe Photoshop CS3 and Corel Draw. The result of this research is interactive information media that provides knowledge about the history of Pacitan.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_3-Interactive_Application_Development_Policy_Object_3D_Virtual_Tour_History.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis and Validation of Refractive Index Estimation Method with Ground Based Atmospheric Polarized Radiance Measurement Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040301</link>
        <id>10.14569/IJACSA.2013.040301</id>
        <doi>10.14569/IJACSA.2013.040301</doi>
        <lastModDate>2013-04-09T16:23:40.4770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>solar irradiance; refractive index; atmospheric polarized radiance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Sensitivity analysis and validation of the proposed refractive index estimation method with ground based atmospheric polarized radiance measurement data are conducted. Through the sensitivity analysis, it is found that the Degree of Polarization (DP) is highly dependent on surface reflectance, followed by the imaginary and real parts of the refractive index and the Junge parameter. DP at 550 nm is slightly greater than that at 870 nm. DP decreases with increasing real part and increases with increasing imaginary part, while DP increases with increasing Junge parameter. It is also found that the peak of DP appears not only at a 90 degree scattering angle but also at around 150 degrees, in particular when aerosol scattering is dominant. From the aforementioned characteristics, it may be concluded that it is possible to estimate the refractive index from ground based polarized radiance measurements.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_1-Sensitivity_Analysis_and_Validation_of_Refarctive_Index_Estimation_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Skin Cancer Images Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040342</link>
        <id>10.14569/IJACSA.2013.040342</id>
        <doi>10.14569/IJACSA.2013.040342</doi>
        <lastModDate>2013-04-09T16:23:38.5400000+00:00</lastModDate>
        
        <creator>Mahmoud Elgamal</creator>
        
<subject>Skin cancer images; wavelet transformation; principal component analysis; feed forward back-propagation network; k-nearest neighbor classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Early detection of skin cancer has the potential to reduce mortality and morbidity. This paper presents two hybrid techniques for classifying skin images to predict whether cancer is present. The proposed hybrid techniques consist of three stages, namely feature extraction, dimensionality reduction, and classification. In the first stage, features are extracted from the images using the discrete wavelet transform. In the second stage, the features of the skin images are reduced to the most essential ones using principal component analysis. In the classification stage, two classifiers based on supervised machine learning are developed: the first is a feed forward back-propagation artificial neural network, and the second is a k-nearest neighbor classifier. The classifiers are used to classify subjects as normal or abnormal skin cancer images. Classification success rates of 95% and 97.5% were obtained by the two proposed classifiers, respectively. These results show that the proposed hybrid techniques are robust and effective.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_42-Automatic_Skin_Cancer_Images_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vicarious Calibration Based Cross Calibration of Solar Reflective Channels of Radiometers Onboard Remote Sensing Satellite and Evaluation of Cross Calibration Accuracy through Band-to-Band Data Comparisons </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040302</link>
        <id>10.14569/IJACSA.2013.040302</id>
        <doi>10.14569/IJACSA.2013.040302</doi>
        <lastModDate>2013-04-09T16:23:34.7030000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>vicarious calibration; cross calibration; visible to near infrared radiometer; earth observation satellite; remote sensing; radiative transfer equation;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>Accuracy evaluation of cross calibration through band-to-band data comparison for visible and near infrared radiometers onboard earth observation satellites is conducted. Conventional cross calibration for such radiometers is conducted through comparisons of band-to-band data whose spectral response functions mostly overlap. The major error sources are observation time difference, spectral response function difference in conjunction with surface reflectance and atmospheric optical depth, and observation area difference. These error sources are assessed with datasets acquired through ground measurements of surface reflectance and optical depth. The accuracy of conventional cross calibration is then evaluated with vicarious calibration data. The results show that cross calibration can be performed more precisely if the influences of the aforementioned three major error sources are taken into account.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_2-Vicarious_Calibration_Based_Cross_Calibration_of_Solar_Reflective_Channels_of_Radiometers_Onboard_Remote.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
<title>Analytical Solution of the Perturbed Orbit-Attitude Motion of a Charged Spacecraft in the Geomagnetic Field</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040341</link>
        <id>10.14569/IJACSA.2013.040341</id>
        <doi>10.14569/IJACSA.2013.040341</doi>
        <lastModDate>2013-04-09T16:23:32.7830000+00:00</lastModDate>
        
        <creator>Hani M. Mohmmed</creator>
        
        <creator>Mostafa K. Ahmed</creator>
        
        <creator>Ashraf Owis</creator>
        
        <creator>Hany Dwidar</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(3), 2013</description>
<description>In this work we investigate the orbit-attitude perturbations of a rigid spacecraft due to the effects of several forces and torques. The spacecraft is assumed to be of a cylindrical shape and equipped with a charged screen with charge density σ. Clearly the main force affecting the motion of the spacecraft is the gravitational force of the Earth with uniform spherical mass. The effect of the oblate Earth up to J2 is considered as a perturbation on both the orbit and attitude of the spacecraft, where the attitude of the spacecraft is acted upon by what is called the gravity gradient torque. Another source of perturbation on the attitude of the spacecraft comes from the motion of the charged spacecraft in the geomagnetic field. This motion generates a force known as the Lorentz force, which is the source of the Lorentz force torque influencing the rotational motion of the spacecraft. In this work we give an analytical treatment of the orbital-rotational dynamics of the spacecraft. We first use the definitions of the Delaunay and Andoyer variables in order to formulate the Hamiltonian of the orbit-attitude motion under the effects of the forces and torques of interest. Since the Lorentz force is a non-conservative force, a potential-like function is introduced and added to the Hamiltonian. We solve the canonical equations of the Hamiltonian system by successive transformations using a technique proposed by Lie and modified by Deprit and Kamel. In this technique we make two successive transformations to eliminate the short and long periodic terms from the Hamiltonian.</description>
        <description>http://thesai.org/Downloads/Volume4No3/Paper_41-Analytical_Solution_of_the_Perturbed_Oribt-Attitude.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Category Decomposition Method for Un-Mixing of Mixels Acquired with Spaceborne Based Visible and Near Infrared Radiometers by Means of Maximum Entropy Method with Parameter Estimation Based on Simulated Annealing </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020410</link>
        <id>10.14569/IJARAI.2013.020410</id>
        <doi>10.14569/IJARAI.2013.020410</doi>
        <lastModDate>2013-04-09T16:14:23.1900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>category decomposition; mixed pixel; satellite remote sensing; maximum entropy method; simulated annealing; spectral library</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
<description>A category decomposition method for un-mixing of mixels (mixed pixels) acquired with spaceborne visible to near infrared radiometers by means of the Maximum Entropy Method (MEM), with parameter estimation based on Simulated Annealing (SA), is proposed. Through simulation studies with spectral characteristics of ground cover targets derived from a spectral library and actual remote sensing satellite imagery data, it is confirmed that the proposed method works well.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_10-Category_Decomposition_Method_for_Un-Mixing_of_Mixels_Acquired_with_Spaceborne_Based_Visible_and_Near_Infrared_Radiometers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web-based Expert Decision Support System for Tourism Destination Management in Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020409</link>
        <id>10.14569/IJARAI.2013.020409</id>
        <doi>10.14569/IJARAI.2013.020409</doi>
        <lastModDate>2013-04-09T16:14:21.2870000+00:00</lastModDate>
        
        <creator>Yekini Nureni Asafe</creator>
        
        <creator>Adetoba Bolaji</creator>
        
        <creator>Aigbokhan Edwin Enaholo</creator>
        
        <creator>Olufemi Olubukola</creator>
        
<subject>Tourism; Tourism Destination; Decision Support System; Web-Based; Lagos; Abuja; Nigeria</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
<description>Information technologies have played, and currently play, prominent roles in many organizations in areas such as business, education and commerce. The tourism industry has witnessed the use and application of various computer-based systems in carrying out one or more of its activities and operations. Currently, however, there is no computer-based tourist destination system that connects tourists from outside Nigeria with tourist centers within Nigeria, so that tourists can make pre-destination plans and decisions before venturing on a tourism journey to Nigeria. The authors of this paper propose the design of a Web-based Expert Decision Support System (WEDSS) to provide tourists to Nigeria and its environs with the essential data and tools to manage their tours, and to answer queries about tourist centers and hotels, covering the climate, road conditions, cultural aspects, lodging, health facilities, banking, etc. of the location to be visited, on sound and rational grounds. WEDSS will be developed to allow tourists to find their route in Nigeria and ask for information about nearby sights, accommodations and other places of interest, to improve the convenience, safety and efficiency of travel and enhance the attraction of tourism.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_9-Web-based_Expert_Decision_Support_System_for_Tourism_Destination_Management_in_Nigeria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of Saliency Maps for Visual Attention Selection in Dynamic Scenes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020408</link>
        <id>10.14569/IJARAI.2013.020408</id>
        <doi>10.14569/IJARAI.2013.020408</doi>
        <lastModDate>2013-04-09T16:14:19.3830000+00:00</lastModDate>
        
        <creator>Jiawei Xu</creator>
        
        <creator>Shigang Yue</creator>
        
        <subject>Global Attention Model; Saliency Map Fusion; Motion Vector Field</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
<description>The human vision system can selectively process visual information, balancing the contradiction between limited resources and the huge amount of visual information. Building attention models similar to the human visual attention system should be very beneficial to computer vision and machine intelligence; meanwhile, it has been a challenging task due to the complexity of the human brain and our limited understanding of the mechanisms underlying the human attention system. Previous studies emphasized static attention; however, motion features, which intuitively play key roles in the human attention system, have not been well integrated into previous models. Motion features such as motion direction are assumed to be processed within the dorsal visual and dorsal auditory pathways, and so far there is no systematic approach to extract motion cues well. In this paper, we propose a generic Global Attention Model (GAM) system based on visual attention analysis. The computational saliency map is superimposed from a set of saliency maps produced via different predefined approaches. We combine three saliency maps to reflect dominant motion features in the attention model, i.e., the fused saliency map at each frame is adjusted by the top-down, static and motion saliency maps. By doing this, the proposed attention model accommodates motion features so that it responds to real visual events in a manner similar to the human visual attention system in realistic circumstances. The visual challenges used in our experiments are selected from benchmark video sequences. We tested the GAM on several dynamic scenes, such as a traffic artery, a parachute landing and surfing, with high speed and cluttered backgrounds. The experimental results show that the GAM system demonstrates high robustness and real-time capability under complex dynamic scenes.
Extensive evaluations based on comparisons with other attention model approaches have verified the effectiveness of the proposed system.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_8-Fusion_of_Saliency_Maps_for_Visual_Attention_Selection_in_Dynamic_Scenes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
<title>Applying Inhomogeneous Probabilistic Cellular Automata Rules on Epidemic Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020407</link>
        <id>10.14569/IJARAI.2013.020407</id>
        <doi>10.14569/IJARAI.2013.020407</doi>
        <lastModDate>2013-04-09T16:14:17.4670000+00:00</lastModDate>
        
        <creator>Wesam M. Elsayed</creator>
        
        <creator>Ahmed H. El-bassiouny</creator>
        
        <creator>Elsayed F. Radwan</creator>
        
        <subject>Probabilistic Cellular Automata (PCA); Epidemic modeling; Optimization.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
<description>This paper presents some of the results of our probabilistic cellular automaton (PCA) based epidemic model. It is shown that the PCA performs better than deterministic ones. We consider two possible ways of interaction that rely on two-way split rules, with either horizontal or vertical interaction under two different probabilities, yielding more of the best possible choices for the behavior of the disease. Our results are a generalization of those obtained by Hawkins et al.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_7-Applying_Inhomogeneous_Probabilistic_Cellular_Automata.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simulated Multiagent-Based Architecture for Intrusion Detection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020406</link>
        <id>10.14569/IJARAI.2013.020406</id>
        <doi>10.14569/IJARAI.2013.020406</doi>
        <lastModDate>2013-04-09T16:14:15.5630000+00:00</lastModDate>
        
        <creator>Onashoga S. Adebukola</creator>
        
        <creator>Ajayi O. Bamidele</creator>
        
        <creator>Akinwale A. Taofik</creator>
        
        <subject>MIDS; CPM; Pattern-growth; Profiling </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
<description>In this work, a Multiagent-based architecture for an Intrusion Detection System (MIDS) is proposed to overcome the shortcomings of current Mobile Agent-based Intrusion Detection Systems. MIDS is divided into three major phases, namely the data gathering, detection and response phases. The data gathering stage involves data collection based on the features of the distributed system, together with profiling. The data collection components are distributed on both hosts and the network. A Closed Pattern Mining (CPM) algorithm is introduced for profiling users' activities in the network database. The CPM algorithm builds on the Frequent Pattern-growth algorithm by mining a prefix-tree called the CPM-tree, which contains only the closed itemsets and their associated support counts. According to the administrator's specified thresholds, the CPM-tree maintains only closed patterns online and incrementally outputs the current closed frequent patterns of users' activities in real time. MIDS makes use of mobile and static agents to carry out the functions of intrusion detection. Each of these agents is built with rule-based reasoning to autonomously detect intrusions. Java 1.1.8 was chosen as the implementation language, with IBM's Java-based mobile agent framework, Aglet 1.0.3, as the platform for running the mobile and static agents. In order to test the robustness of the system, a real-time simulation was carried out on a University of Agriculture, Abeokuta (UNAAB) network dataset; the results showed an accuracy of 99.94%, a False Positive Rate (FPR) of 0.13% and a False Negative Rate (FNR) of 0.04%. This shows an improved performance of MIDS when compared with other known MA-IDSs.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_6-A_Simulated_Multiagent-Based_Architecture_for_Intrusion_Detection_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Automated Detection Method for Clustered Microcalcification Based on Wavelet Transformation and Support Vector Machine </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020405</link>
        <id>10.14569/IJARAI.2013.020405</id>
        <doi>10.14569/IJARAI.2013.020405</doi>
        <lastModDate>2013-04-09T16:14:13.6430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Rie Kawakami</creator>
        
        <subject>Automated Detection Method; Mammogram; Clustered Microcalcification; Wavelet; SVM; Standard Deviation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
        <description>The main problem associated with breast cancer is how to deal with the small calcification parts inside the breast, called microcalcifications (MC). A breast screening examination called a mammogram is provided as a preventive measure. Mammogram images with a considerable amount of MC, called clustered MC, have been a problem for doctors and radiologists, particularly when they must correctly determine the region of interest. This work is an improvement upon our previous work. It utilizes the Daubechies D4 wavelet as a feature extractor and the SVM classifier as an effective binary classifier. The improved method achieves 84.44% classification performance, 90% sensitivity and 91.43% specificity.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_5-Improvement_of_Automated_Detection_Method_for_Clustered_Microcalcification_Based_on_Wavelet_Transformation_and_Support_Vector_Machine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Geography Markup Language: GML Based Representation of Time Serie of Assimilation Data and Its Application to Animation Content Creation and Representations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020404</link>
        <id>10.14569/IJARAI.2013.020404</id>
        <doi>10.14569/IJARAI.2013.020404</doi>
        <lastModDate>2013-04-09T16:14:11.7400000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>assimilation data; GML; NCEP/GDAS; tweening method of interpolation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
        <description>A method for Geography Markup Language (GML) based representation of time series of assimilation data, and its application to animation content creation and representation, is proposed. The proposed method is validated with NCEP/GDAS assimilation data. The usefulness of the proposed tweening-based interpolation method is also confirmed for time series data with unequal time intervals.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_4-Geography_Markup_Language_GML_Based_Representation_of_Time_Serie_of_Assimilation_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Car in Dangerous Action Detection by Means of Wavelet Multi Resolution Analysis Based on Appropriate Support Length of Base Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020403</link>
        <id>10.14569/IJARAI.2013.020403</id>
        <doi>10.14569/IJARAI.2013.020403</doi>
        <lastModDate>2013-04-09T16:14:09.8530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Tomoko Nishikawa</creator>
        
        <subject>Change detection, MRA, support length of mother wavelet </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
        <description>Multi-Resolution Analysis (MRA) based on mother wavelet functions with differing support lengths is applied to rear-view images of automobiles in motion in order to extract their driving characteristics. Speed, deflection, etc. are analyzed, and a method for detecting vehicles with a high risk of accident is proposed. The experimental results show that vehicles performing dangerous actions can be detected by the proposed method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_3-Method_for_Car_in_Dangerous_Action_Detection_by_Means_of_Wavelet_Multi_Resolution_Analysis_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Learning System Utilizing Learners’ Characteristics Recognized Through Learning Processes with Open Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020402</link>
        <id>10.14569/IJARAI.2013.020402</id>
        <doi>10.14569/IJARAI.2013.020402</doi>
        <lastModDate>2013-04-09T16:14:07.9500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Anik Nur Handayani</creator>
        
        <subject>e-learning system; Q/A System; Open Simulator; learners’ characteristics;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
        <description>An e-learning system utilizing learners’ characteristics, recognized through learning processes with Open Simulator, is proposed for overcoming weak points. Through dialogs with avatars in the Open Simulator, it is possible to identify learners’ weak points. Using these characteristics, the most appropriate subjects and achievement tests are provided by the proposed e-learning system. Experimental results show that the proposed e-learning system is much more effective than a conventional e-learning system that does not utilize learners’ characteristics.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_2-E-Learning_System_Utilizing_Learners_Characteristics_Recognized_Through_Learning_Processes_with_Open_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Between Linear and Nonlinear Models of Mixed Pixels in Remote Sensing Satellite Images Based on Cierniewski Surface BRDF Model by Means of Monte Carlo Ray Tracing Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020401</link>
        <id>10.14569/IJARAI.2013.020401</id>
        <doi>10.14569/IJARAI.2013.020401</doi>
        <lastModDate>2013-04-09T16:14:06.0470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Monte Carlo simulation; ray tracing method; mixed pixel model; surface model;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(4), 2013</description>
        <description>A comparative study on linear and nonlinear mixed pixel models, in which pixels of remote sensing satellite images are composed of several ground cover materials mixed together, is conducted for remote sensing satellite image analysis. The mixed pixel models are based on Cierniewski’s ground surface reflectance model. The comparative study is conducted using Monte Carlo Ray Tracing (MCRT) simulations. Through the simulation study, the difference between the linear and nonlinear mixed pixel models is clarified, and the simulation model is validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No4/Paper_1-Comparison_Between_Linear_and_Nonlinear_Models_of_Mixed_Pixels_in_Remote_Sensing_Satellite_Images_Based.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Making and Emergency Communication System in Rescue Simulation for People with Disabilities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020312</link>
        <id>10.14569/IJARAI.2013.020312</id>
        <doi>10.14569/IJARAI.2013.020312</doi>
        <lastModDate>2013-03-09T07:34:16.2230000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Tran Xuan Sang</creator>
        
        <subject>Rescue Simulation for people with disabilities; GIS Multi Agent-based Rescue Simulation; Auction-based Decision Making</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Decision making and emergency communication systems play important roles in the rescue process when emergency situations occur. The rescue process is more effective with an appropriate decision making method and an accessible emergency communication system. In this paper, we propose a centralized rescue model for people with disabilities. A decision making method that decides which volunteers should help which disabled persons is proposed, utilizing an auction mechanism. GIS data are used to represent the objects in a large-scale disaster simulation environment, such as roads, buildings, and humans. The Gama simulation platform is used to test our proposed rescue simulation model.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_12-Decision_Making_and_Emergency_Communication_System_in_Rescue_Simulation_for_People_with_Disabilities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Vibration Control of MR Damper Landing Gear</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020311</link>
        <id>10.14569/IJARAI.2013.020311</id>
        <doi>10.14569/IJARAI.2013.020311</doi>
        <lastModDate>2013-03-09T07:34:14.1500000+00:00</lastModDate>
        
        <creator>Disha Saxena</creator>
        
        <creator>Harsh Rathore</creator>
        
        <subject>Magneto-Rheological (MR) damper; Proportional Integral Derivative (PID) Controller; Fuzzy Logic Controller (FLC)</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>In the field of automation, fuzzy control has significant merits that are utilized in intelligent controllers, especially for vibration control systems; fuzzy control theory is a well-known technique for acquiring the desired response of various non-linear systems. This paper is concerned with the application aspects of the developed MR damper for a landing gear system, to attenuate the sustained vibrations during the landing phase. A comparative study is also made of the responses obtained from the MR damper landing gear utilizing PID and Fuzzy PID controllers.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_11-Vibration_Control_of_MR_Damper_Landing_Gear.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robot Path Planning using An Ant Colony Optimization Approach:A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020310</link>
        <id>10.14569/IJARAI.2013.020310</id>
        <doi>10.14569/IJARAI.2013.020310</doi>
        <lastModDate>2013-03-09T07:34:10.4200000+00:00</lastModDate>
        
        <creator>Alpa Reshamwala</creator>
        
        <creator>Deepika P Vinchurkar</creator>
        
        <subject>Path planning; Ant colony algorithm; collision avoidance.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>The path planning problem is a challenging topic in robotics, and a significant amount of research has been devoted to it in recent years. The ant colony optimization algorithm is one approach to solving this problem. Each ant drops a quantity of artificial pheromone on every point it passes through; this pheromone changes the probability that the next ant is attracted to a particular grid point. The techniques described in this paper adopt a global attraction term that guides ants toward the destination point. The paper describes various techniques for robot path planning using the ant colony algorithm, and also provides a brief comparison of the three techniques described.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_10-Robot_Path_Planning_using_An_Ant_Colony_Optimization_Approach_A_Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Ornamental Plant Functioned as Medicinal Plant Based on Redundant Discrete Wavelet Transformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020309</link>
        <id>10.14569/IJARAI.2013.020309</id>
        <doi>10.14569/IJARAI.2013.020309</doi>
        <lastModDate>2013-03-09T07:34:08.3300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Identification; ornamental plant; leaf; wavelet; DWT; Redundant DWT;  SVM. </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Humans have a duty to preserve nature, and preserving ornamental plants is one example. The huge economic value of plant trading, the esthetic value they add to a space, and the medicinal efficacy contained in a plant are some of the positive values of these plants. However, only few people know about their medicinal efficacy. Considering their easy availability and medicinal efficacy, these plants could serve as an initial treatment for simple diseases or as an option alongside chemical-based medicines. In order for people to become acquainted with them, a system that can properly identify these plants is needed. Therefore, we propose a system based on Redundant Discrete Wavelet Transformation (RDWT) applied to the plant’s leaf, since its translation-invariant character produces robust features for identifying ornamental plants. The system achieves a 95.83% correct classification rate.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_9-Identification_of_Ornamental_Plant_Functioned_as_Medicinal_Plant_Based_on_Redundant_Discrete_Wavelet_Transformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evolutionary Approaches to Expensive Optimisation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020308</link>
        <id>10.14569/IJARAI.2013.020308</id>
        <doi>10.14569/IJARAI.2013.020308</doi>
        <lastModDate>2013-03-09T07:34:06.2570000+00:00</lastModDate>
        
        <creator>Maumita Bhattacharya</creator>
        
        <subject>Optimization; Evolutionary Algorithm; Approximation Model; Fitness Approximation; Meta-model; Surrogate</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Surrogate assisted evolutionary algorithms (EAs) are rapidly gaining popularity where applications of EAs in complex real world problem domains are concerned. Although EAs are powerful global optimizers, finding optimal solutions to complex high dimensional, multimodal problems often requires very expensive fitness function evaluations. Needless to say, this could brand any population-based iterative optimization technique as a crippling choice for handling such problems. The use of approximate models, or surrogates, provides a much cheaper option; naturally, however, this cheaper option comes with its own price. This paper discusses some of the key issues involved with the use of approximation in evolutionary algorithms, along with possible best practices and solutions. Answers to the following questions have been sought: what type of fitness approximation should be used; which approximation model to use; how to integrate the approximation model into the EA; how much approximation to use; and how to ensure reliable approximation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_8-Evolutionary_Approaches_to_Expensive_Optimisation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Knowledge-Based System Approach for Extracting Abstractions from Service Oriented Architecture Artifacts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020307</link>
        <id>10.14569/IJARAI.2013.020307</id>
        <doi>10.14569/IJARAI.2013.020307</doi>
        <lastModDate>2013-03-09T07:34:04.1970000+00:00</lastModDate>
        
        <creator>George Goehring</creator>
        
        <creator>Thomas Reichherzer</creator>
        
        <creator>Eman El-Sheikh</creator>
        
        <creator>Dallas Snider</creator>
        
        <creator>Norman Wilde</creator>
        
        <creator>Sikha Bagui</creator>
        
        <creator>John Coffey</creator>
        
        <creator>Laura J. White</creator>
        
        <subject>expertise; rule-based system; knowledge-based system; service oriented architecture; SOA; software maintenance; search tool.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Rule-based methods have traditionally been applied to develop knowledge-based systems that replicate expert performance on a deep but narrow problem domain. Knowledge engineers capture expert knowledge and encode it as a set of rules for automating the expert’s reasoning process to solve problems in a variety of domains. We describe the development of a knowledge-based system approach to enhance program comprehension of Service Oriented Architecture (SOA) software. Our approach uses rule-based methods to automate the analysis of the set of artifacts involved in building and deploying a SOA composite application. The rules codify expert knowledge to abstract information from these artifacts to facilitate program comprehension and thus assist Software Engineers as they perform system maintenance activities. A main advantage of the knowledge-based approach is its adaptability to the heterogeneous and dynamically evolving nature of SOA environments.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_7-A_Knowledge-Based_System_Approach_for_Extracting_Abstractions_from_Service_Oriented_Architecture_Artifacts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Regressive Analysis on Leaf Nitrogen Content and Near Infrared Reflectance and Its Application for Agricultural Farm Monitoring with Helicopter Mounted Near Infrared Camera</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020306</link>
        <id>10.14569/IJARAI.2013.020306</id>
        <doi>10.14569/IJARAI.2013.020306</doi>
        <lastModDate>2013-03-09T07:34:00.4370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Sadayuki Akaisi</creator>
        
        <creator>Hideo Miyazaki</creator>
        
        <creator>Yasuyuki Watabe</creator>
        
        <subject>regressive analysis; nitrogen content; tealeaves; near infrared cameras</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>A method for the evaluation of the nitrogen richness of tealeaves using near infrared reflectance is proposed, together with tea farm monitoring with a helicopter-mounted near infrared camera. Through experiments and regressive analysis, the proposed method and monitoring system are validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_6-Regressive_Analysis_on_Leaf_Nitrogen_Content_and_Near_Infrared_Reflectance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rice Crop Field Monitoring System with Radio Controlled Helicopter Based Near Infrared Cameras Through Nitrogen Content Estimation and Its Distribution Monitoring</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020305</link>
        <id>10.14569/IJARAI.2013.020305</id>
        <doi>10.14569/IJARAI.2013.020305</doi>
        <lastModDate>2013-03-09T07:33:58.3470000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yuko Miura</creator>
        
        <creator>Osamu Shigetomi</creator>
        
        <creator>Hideaki Munemoto</creator>
        
        <subject>radio controlled helicopter; near infrared camera; nitrogen content in the rice crop leaves; remote sensing; </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>A rice crop field monitoring system with radio controlled helicopter based near infrared cameras is proposed, together with a nitrogen content estimation method for monitoring its distribution in the field of concern. Through experiments at the Saga Prefectural Agricultural Research Institute (SPARI), it is found that the proposed system works well for monitoring nitrogen content in the rice crop, which indicates the quality of the rice crop, and its distribution in the field of concern. Therefore, it becomes possible to maintain the rice crop fields in terms of quality control.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_5-Rice_Crop_Field_Monitoring_System_with_Radio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Quality of Answer in Collaborative Q/A Community</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020304</link>
        <id>10.14569/IJARAI.2013.020304</id>
        <doi>10.14569/IJARAI.2013.020304</doi>
        <lastModDate>2013-03-09T07:33:56.2570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>ANIK Nur Handayani</creator>
        
        <subject>Collaborative; diversity of information; questions; answers; predict; non-textual feature</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Community Question Answering (CQA) services have emerged that allow information seekers to pose their information needs as questions and receive answers from fellow users, as well as to participate in evaluating the questions and answers on a variety of topics. Within such a community, information seekers can interact with and get information from a wide range of users, forming a heterogeneous social network of user interactions. A question may receive multiple answers from multiple users, and the asker or fellow users can choose the best answer. Freedom and convenience of participation lead to diversity of information. In this paper, we present a general model to predict the quality of information in a CQA service using non-textual features. We apply and test our quality measurement on a collection of question and answer pairs. In the future, our models and predictions could be useful as a quality predictor in a recommender system to support collaborative learning.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_4-Predicting_Quality_of_Answer_in_Collaborative_QA_Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Content Based Image Retrieval by using Multi Layer Centroid Contour Distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020303</link>
        <id>10.14569/IJARAI.2013.020303</id>
        <doi>10.14569/IJARAI.2013.020303</doi>
        <lastModDate>2013-03-09T07:33:54.1630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Cahya Rahmad</creator>
        
        <subject>Content based Image Retrieval; CCD; MLCCD</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>In this paper we present a new approach to measuring the similarity between two object shapes. In the conventional method, the centroid contour distance (CCD) is formed by measuring the distance between the centroid (center) and the boundary of an object, but this method fails when an object has multiple boundaries at the same angle. We develop a novel shape feature that measures the distances between the centroid and all boundaries of the object, capturing multiple boundaries at the same angle: the multi-layer centroid contour distance (MLCCD). Experimental results on a simulation dataset and a plankton dataset show that the proposed method (MLCCD) performs better than the conventional method (CCD).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_3-Content_Based_Image_Retrieval_by_using_Multi_Layer_Centroid_Contour_Distance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color Radiomap Interpolation for Efficient Fingerprint WiFi-based Indoor Location Estimation </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020302</link>
        <id>10.14569/IJARAI.2013.020302</id>
        <doi>10.14569/IJARAI.2013.020302</doi>
        <lastModDate>2013-03-09T07:33:52.0730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Herman Tolle</creator>
        
        <subject>network; indoor localization; radio map; user monitoring; location estimation</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>Indoor location estimation systems based on existing 802.11 signal strength are becoming increasingly prevalent in the area of mobility and ubiquity. The user-based location determination system utilizes the Signal Strength (SS) received from the surrounding Access Points (APs) to determine the user position. In this paper, we focus on the development of user position estimation using an existing WiFi environment, for its low cost and ease of deployment, and study fingerprint-based deterministic techniques for their simplicity and effectiveness. We present a color radio map interpolation method that is easy to develop and reduces the calibration effort of creating the radio map while retaining the accuracy of user position estimation. An average accuracy error of 1.108 meters is achieved on a 1.25 meter x 2.5 meter cell grid.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_2-Color_Radiomap_Interpolation_for_Efficient_Fingerprint_WiFi-based_Indoor_Location_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Group Organization Cooperative Evolutionary Algorithm for TSK-type Neural Fuzzy Networks Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020301</link>
        <id>10.14569/IJARAI.2013.020301</id>
        <doi>10.14569/IJARAI.2013.020301</doi>
        <lastModDate>2013-03-09T07:33:49.8900000+00:00</lastModDate>
        
        <creator>Sheng-Fuu Lin</creator>
        
        <creator>Jyun-Wei Chang</creator>
        
        <subject>TSK-type Neural Fuzzy Networks; Evolutionary Algorithm; Group based Symbiotic; Self Organization; System Identification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(3), 2013</description>
        <description>This paper proposes a novel adaptive group organization cooperative evolutionary algorithm (AGOCEA) for TSK-type neural fuzzy network design. The proposed AGOCEA uses a group-based cooperative evolutionary algorithm and a self-organizing technique to automatically design neural fuzzy networks. The group-based evolutionary algorithm divides the population into several groups, and each group evolves itself. The proposed self-organizing technique can automatically determine the parameters of the neural fuzzy networks, so some critical parameters need not be assigned in advance. The simulation results show better performance of the proposed algorithm than other learning algorithms.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No3/Paper_1-Adaptive_Group_Organization_Cooperative_Evolutionary_Algorithm_for_TSK-type_Neural_Fuzzy_Networks_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Network Traffic Collection for Network Characterization and User Behavior</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040241</link>
        <id>10.14569/IJACSA.2013.040241</id>
        <doi>10.14569/IJACSA.2013.040241</doi>
        <lastModDate>2013-03-01T12:42:34.6870000+00:00</lastModDate>
        
        <creator>Ali Ismail Awad</creator>
        
        <creator>Hanafy Mahmud Ali</creator>
        
        <creator>Heshasm F. A. Hamed</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>This paper presents a reliable and complete traffic collection facility as a first and crucial step toward accurate traffic analysis for network characterization and user behavior. The key contribution is to produce accurate, reliable and high-fidelity traffic traces as a valuable source of information for the passive traffic analysis approach. In order to guarantee trace reliability, we first detect the bottlenecks of the collection facility, and then propose different monitoring probes, starting from the Ethernet network interface and ending at the packet trace. The proposed facility can run without interruption for long periods instead of one-shot periods; therefore, it can be used to draw a complete picture of network traffic that fully characterizes the network and user behavior. The laboratory experiments conclude that the system is highly reliable and stable, and produces reliable traces together with different statistics reports coming from the installed monitoring probes.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_41-Reliable_Network_Traffic_Collection_for_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Gaps Approach to Access the Efficiency and Effectiveness of IT-Initiatives In Rural Areas: case study of Samalta, a village in the central Himalayan Region of India</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040237</link>
        <id>10.14569/IJACSA.2013.040237</id>
        <doi>10.14569/IJACSA.2013.040237</doi>
        <lastModDate>2013-03-01T12:42:32.5670000+00:00</lastModDate>
        
        <creator>Kamal Kumar Ghanshala</creator>
        
        <creator>Durgesh Pant</creator>
        
        <creator>Jatin Pandey</creator>
        
        <subject>KDJ-Gaps Model; Efficiency; Effectiveness; IT-Initiatives</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>This paper focuses on the effectiveness and efficiency of IT initiatives in rural areas where the topography isolates communities from developmental activities. A village is selected for the study and information is gathered through interviews with village dwellers. The collected responses are then analyzed and a gaps model is proposed.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_37-A_Gaps_Approach_to_Access_the_Efficiency_and_Effectiveness.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-governance justified</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040233</link>
        <id>10.14569/IJACSA.2013.040233</id>
        <doi>10.14569/IJACSA.2013.040233</doi>
        <lastModDate>2013-03-01T12:42:30.4730000+00:00</lastModDate>
        
        <creator>William Akotam Agangiba</creator>
        
        <creator>Millicent Akotam Agangiba</creator>
        
        <subject>E-governance; Government; Human activities; Information Technology (IT); Service delivery; citizen.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Information and Communication Technology has today become an indispensable part of our lives, gaining wide application in human activities. This is because its use is less expensive and more secure, and it allows speedy information transmission and access. It serves as a good base for the development and success of today’s relatively young technologies. It has relieved people of manual day-to-day activities in areas such as business organizations, the transport industry, teaching and research, banking, broadcasting, and entertainment, amongst others. This paper takes an overview of e-governance, one of the most demanding applications of information and communication technology for public services. The paper summarizes the concept of e-governance, its major essence, and some ongoing e-governance activities in some parts of the world.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_33-Egovernance_justified.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GASolver-A Solution to Resource Constrained Project Scheduling by Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040231</link>
        <id>10.14569/IJACSA.2013.040231</id>
        <doi>10.14569/IJACSA.2013.040231</doi>
        <lastModDate>2013-03-01T12:42:28.3700000+00:00</lastModDate>
        
        <creator>Dr Mamta Madan</creator>
        
        <creator>Mr Rajneesh Madan</creator>
        
        <subject>Genetic Algorithm; Dependency Injection; GASolver.Core; Resource Constrained Scheduling.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>The Resource Constrained Project Scheduling Problem (RCPSP) represents an important research area. Both exact solutions and many heuristic methods have been proposed to solve the RCPSP, which is an NP-hard problem. Heuristic methods are designed to solve large and highly resource-constrained software projects. We have solved the resource constrained project scheduling problem with a solution named GASolver, implemented in C# on the .NET platform. We have used Dependency Injection to keep the solution loosely coupled, so that other areas of scheduling, such as Time Cost Tradeoff (TCT) and Payment Scheduling (PS), can be merged into the same solution in the future. GASolver is implemented using a Genetic Algorithm (GA).</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_31-GASolver-A_Solution_to_Resource_Constrained_Project_Scheduling_by_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Image Registration Technique of Remote Sensing Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040227</link>
        <id>10.14569/IJACSA.2013.040227</id>
        <doi>10.14569/IJACSA.2013.040227</doi>
        <lastModDate>2013-03-01T12:42:26.2770000+00:00</lastModDate>
        
        <creator>M. Wahed</creator>
        
        <creator>Gh.S.El-tawel</creator>
        
        <creator>A.Gad El-karim </creator>
        
        <subject>Image registration; Steerable Pyramid Transform; SIFT; RANSAC</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Image registration is a crucial step in most image processing tasks for which the final result is achieved from a combination of various resources. Automatic registration of remote-sensing images is a difficult task, as it must deal with intensity changes and variations of scale, rotation and illumination between the images. This paper proposes an image registration technique for multi-view, multi-temporal and multi-spectral remote sensing images. Firstly, a preprocessing step is performed by applying median filtering to enhance the images. Secondly, the Steerable Pyramid Transform is adopted to produce multi-resolution levels of the reference and sensed images; then, the Scale Invariant Feature Transform (SIFT) is utilized to extract feature points that can deal with the large variations of scale, rotation and illumination between images. Thirdly, the feature points are matched using the Euclidean distance ratio, and false matching pairs are removed using the RANdom SAmple Consensus (RANSAC) algorithm. Finally, the mapping function is obtained by affine transformation. Quantitative comparisons of our technique with related techniques show a significant improvement in the presence of large scale, rotation, and intensity changes. The effectiveness of the proposed technique is demonstrated by the experimental results.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_27-Automatic_Image_Registration_Technique_of_Remote_Sensing_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Visual Web User Interface Design in Augmented Reality Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040218</link>
        <id>10.14569/IJACSA.2013.040218</id>
        <doi>10.14569/IJACSA.2013.040218</doi>
        <lastModDate>2013-03-01T12:42:24.1870000+00:00</lastModDate>
        
        <creator>Chouyin Hsu</creator>
        
        <creator>Haui-Chih Shiau</creator>
        
        <subject>Visual Representation; User Interface Design; Augmented Reality; Google SketchUp</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>With the popularity of 3C devices, visual content is all around us, such as online games, touch pads, video and animation. Text-based web pages will therefore no longer satisfy users. With the popularity of webcams, digital cameras, stereoscopic glasses, and head-mounted displays, the user interface has become more visual and multi-dimensional. For the consideration of 3D and visual display in web user interface design research, Augmented Reality technology, which provides convenient tools and impressive effects, has become a hot topic. Augmented Reality enables users to represent digital objects on top of their physical surroundings, and its easy operation with a webcam, which greatly improves the visual representation of web pages, is the interest of our research. We therefore apply Augmented Reality technology to develop a city tour web site and collect the opinions of users, with website stickiness as an important measurement. The major tasks of the work include the exploration of Augmented Reality technology and the evaluation of its outputs. The feedback of users provides valuable references for improving AR applications. As a result, AR, by increasing the visual and interactive effects of web pages, encourages users to stay longer, and more than 80% of users are willing to return to the website soon. Moreover, several valuable conclusions about Augmented Reality technology in web user interface design are also provided for further practical reference.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_18-The_Visual_Web_User_Interface_Design_in_Augmented_Reality_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Solution For Service Level Agreement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040211</link>
        <id>10.14569/IJACSA.2013.040211</id>
        <doi>10.14569/IJACSA.2013.040211</doi>
        <lastModDate>2013-03-01T12:42:22.0670000+00:00</lastModDate>
        
        <creator>Sarmad Al-Aloussi</creator>
        
        <subject>Service Oriented Architecture; Service Level Agreements; QoS; and Neural Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Service Oriented Computing is playing an important role in shaping industry and the way business is conducted and services are delivered and managed. This paradigm is expected to have a major impact on the service economy; the service sector includes health services, financial services, government services, etc., and involves significant interaction between clients and service providers [1]. This paper addresses the problem of enabling Service Level Agreement (SLA) oriented resource allocation in data centers to satisfy competing applications’ demand for computing services. A QoS report is designed to compare performance variables to QoS parameters and indicate when a threshold has been crossed. This paper suggests a methodology which helps in SLA evaluation and comparison. The methodology is founded on the adoption of policies both for service behavior and SLA description, and on the definition of a metric function for the evaluation and comparison of policies. In addition, this paper contributes a new philosophy for evaluating the agreements between user and service provider by monitoring both measurable and immeasurable qualities to extract decisions using artificial neural networks (ANN).</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_11-Neural_Network_Solution_For_Service_Level_Agreement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Localization of Optic Disc in Retinal Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040210</link>
        <id>10.14569/IJACSA.2013.040210</id>
        <doi>10.14569/IJACSA.2013.040210</doi>
        <lastModDate>2013-03-01T12:42:19.9600000+00:00</lastModDate>
        
        <creator>Deepali A.Godse</creator>
        
        <creator>Dr.Dattatraya S.Bormane</creator>
        
        <subject>disease; healthy; optic disc; retinal image </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal, healthy retinal images. It is a challenging task to detect the OD in all types of retinal images, that is, in normal, healthy images as well as in abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. The ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique is developed and tested on standard databases provided for researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images), and a local database (194 images). The local database images are collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_10-Automated_Localization_of_Optic_Disc_in_Retinal_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of the Information Technology Development in Slovakia and Hungary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040209</link>
        <id>10.14569/IJACSA.2013.040209</id>
        <doi>10.14569/IJACSA.2013.040209</doi>
        <lastModDate>2013-03-01T12:42:17.8530000+00:00</lastModDate>
        
        <creator>Peter Sasvari</creator>
        
        <creator>Zsuzsa Majoros</creator>
        
        <subject>Information society; Information Technology; Slovakia; Hungary</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Nowadays the role of information is increasingly important, so every company has to provide efficient procurement, processing, storage and visualization of this special resource in order to stay competitive. More and more enterprises introduce Enterprise Resource Planning systems to be able to perform the listed functions. The article illustrates the usage of these systems in Hungary and Slovakia, and tests the following presumption: the level of Information Technology (IT) development is lower in Hungary than in its northern neighbor, Slovakia.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_9-Comparison_of_the_information_technology_development_in_Slovakia_and_Hungary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SS-SVM (3SVM): A New Classification Method for Hepatitis Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040208</link>
        <id>10.14569/IJACSA.2013.040208</id>
        <doi>10.14569/IJACSA.2013.040208</doi>
        <lastModDate>2013-03-01T12:42:15.7630000+00:00</lastModDate>
        
        <creator>Mohammed H. Afif</creator>
        
        <creator>Abdel-Rahman Hedar</creator>
        
        <creator>Taysir H. Abdel Hamid</creator>
        
        <creator>Yousef B. Mahdy</creator>
        
        <subject>Support Vector Machine; Scatter Search; Classification; Parameter tuning</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>In this paper, a new classification approach combining support vector machines with a scatter search approach for hepatitis disease diagnosis, called 3SVM, is presented. The scatter search approach is used to find near-optimal values of the SVM parameters and its kernel parameters. The hepatitis dataset is obtained from UCI. Experimental results and comparisons show that 3SVM gives better outcomes and has a competitive performance relative to other published methods found in the literature, with an average accuracy rate of 98.75%.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_8-SS-SVM_3SVM_A_New_Classification_Method_for_Hepatitis_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introducing SMART Table Technology in Saudi Arabia Education System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040207</link>
        <id>10.14569/IJACSA.2013.040207</id>
        <doi>10.14569/IJACSA.2013.040207</doi>
        <lastModDate>2013-03-01T12:42:13.6570000+00:00</lastModDate>
        
        <creator>Gafar Almalki</creator>
        
        <creator>Professor Glenn Finger</creator>
        
        <creator>Dr Jason Zagami</creator>
        
        <subject>ICT; Smart Table; education; barrier; implementation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Education remains one of the most important economic development indicators in Saudi Arabia. This is evident in the continuous priority given to the development and enhancement of education. The application of technology is crucial to the growth and improvement of the educational system in Saudi Arabia. This paper argues that introducing SMART Table technology into the Saudi Arabian education system can assist teachers and students in accommodating both technological change and new knowledge. SMART Tables can also enhance the level of flexibility in the educational system, thus improving the quality of education within a modern Saudi Arabia. It is crucial to integrate technology effectively and efficiently within the educational system to improve the quality of student outcomes. This study considers the potential benefits of, and recommendations associated with, the adoption of SMART Tables in the Saudi Arabian education system.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_7-Introducing_SMART_Table_Technology_in_Saudi_Arabia_Education_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Correction of Sinkhole Attack with Novel Method in WSN Using NS2 Tool</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040205</link>
        <id>10.14569/IJACSA.2013.040205</id>
        <doi>10.14569/IJACSA.2013.040205</doi>
        <lastModDate>2013-03-01T12:42:11.5670000+00:00</lastModDate>
        
        <creator>Tejinderdeep Singh</creator>
        
        <creator>Harpreet Kaur Arora</creator>
        
        <subject>Wireless Sensor Networks (WSN); Intrusion Detection (ID); Base Station (BS); Sinkhole (SH);Ad-hoc on demand Distance Vector (AoDV).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Wireless Sensor Networks (WSNs) are used in many applications in military, ecological, and health-related areas. These applications often include the monitoring of sensitive information such as enemy movement on the battlefield or the location of personnel in a building. Security is therefore important in WSNs. However, WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the use of insecure wireless communication channels. These constraints make security in WSNs a challenge. In this paper we discuss security issues in WSNs, focusing on the sinkhole attack, its implementation, and its correction.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_5-Detection_and_Correction_of_Sinkhole_attack_with_novel_method_in_WSN_using_NS2_tool.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Diabetes Monitoring System Using Mobile Computing Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040204</link>
        <id>10.14569/IJACSA.2013.040204</id>
        <doi>10.14569/IJACSA.2013.040204</doi>
        <lastModDate>2013-03-01T12:42:09.4470000+00:00</lastModDate>
        
        <creator>Mashael Saud Bin-Sabbar</creator>
        
        <creator>Mznah Abdullah Al-Rodhaan</creator>
        
        <subject>Diabetes; Electronic Monitoring; Remote Monitoring; Ubiquitous Healthcare; T1DM; T2DM; Android Monitoring System.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Diabetes is a chronic disease that needs to be monitored regularly to keep blood sugar levels within normal ranges. This monitoring depends on the diabetic treatment plan that is periodically reviewed by the endocrinologist. Frequent visits to the main hospital are tiring and time-consuming for both the endocrinologist and diabetes patients: the patient may have to travel to the main city, pay for a ticket and reserve a place to stay. Those expenses can be reduced by remotely monitoring diabetes patients with the help of mobile devices. In this paper, we introduce our implementation of an integrated monitoring tool for diabetes patients. The designed system provides daily monitoring and monthly services. The daily monitoring includes recording the results of daily analyses and activities, which are transmitted from a patient’s mobile device to a central database. The monthly services require the patient to visit a nearby care center in the patient’s home town for medical examinations and checkups. The results of this visit are entered into the system and then synchronized with the central database. Finally, the endocrinologist can remotely monitor the patient record and adjust the treatment plan and the insulin doses if needed.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_4-Diabetes_Monitoring_System_Using_Mobile_Computing_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm Selection for Constraint Optimization Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040240</link>
        <id>10.14569/IJACSA.2013.040240</id>
        <doi>10.14569/IJACSA.2013.040240</doi>
        <lastModDate>2013-03-01T12:42:07.3400000+00:00</lastModDate>
        
        <creator>Avi Rosenfeld</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>In this paper we investigate methods for selecting the best algorithms in classic distributed constraint optimization problems. While these are NP-complete problems, many heuristics have nonetheless been proposed. We found that the best method to use can change radically based on the specifics of a given problem instance. Thus, dynamic methods are needed that can choose the best approach for a given problem. We found that large differences typically exist in the expected utility between algorithms, allowing for a clear policy. We present a dynamic algorithm selection approach based on this realization. As support for this approach, we describe the results from thousands of trials on Distributed Constraint Optimization problems that demonstrate the strong statistical improvement of this dynamic approach over the static methods it is based on.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_40-Algorithm_Selection_for_Constraint_Optimization_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Global Navigation System using Flower Constellation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040239</link>
        <id>10.14569/IJACSA.2013.040239</id>
        <doi>10.14569/IJACSA.2013.040239</doi>
        <lastModDate>2013-03-01T12:42:05.2330000+00:00</lastModDate>
        
        <creator>Daniele Mortari</creator>
        
        <creator>Jeremy J.Davis</creator>
        
        <creator>Ashraf Owi</creator>
        
        <creator>Hany Dwidar</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>For many space missions using satellite constellations, symmetry of the satellite distribution usually plays a key role. Symmetry may be considered in the space and/or time distribution. Examples of required symmetry in space distribution occur in Earth observation missions (either local or global) as well as in navigation systems. It is intuitive that, to optimally observe the Earth, a satellite constellation should be synchronized with the Earth's rotation rate. If a satellite constellation must be designed to constitute a communication network between Earth and Jupiter, then the orbital period of the constellation satellites should be synchronized with both Earth's and Jupiter's periods of revolution around the Sun. Another example is to design satellite constellations to optimally observe specific Earth sites or regions. Again, such a constellation should be synchronized with Earth's rotational period and, since the time gap between two subsequent observations of the site should be constant, this also implies time symmetry in the satellite distribution. Obtaining this result will allow the design of operational constellations for observing targets (sites, borders, regions) with persistence or assigned revisit times, while minimizing the number of satellites required. Constellations of satellites for continuous global or zonal Earth coverage have been well studied over the last twenty years and are well documented [1], [2], [7], [8], [11], [13]. A symmetrical, inclined constellation, such as a Walker constellation [1], [2], provides excellent global coverage for remote sensing missions; however, applications where target revisit time or persistent observation are important lead to required variations on traditional designs [7], [8]. Also, few results are available that address other figures of merit, such as continuous regional coverage and the systematic use of eccentric orbit constellations to optimize "hang time" over regions of interest. Optimization of such constellations is a complex problem, and the general-purpose constellation design methodology used today is largely limited to Walker-like constellations. As opposed to Walker constellations [1], [2], which seek symmetries in an inertial reference frame, Flower Constellations [11] were devised to obtain symmetric distributions of satellites in rotating reference frames (e.g., Earth, Jupiter, satellite orbit). Since the theory of Flower Constellations has evolved with time, the next section is dedicated to a summary of the theory up to its current status. The FC solution space has recently been expanded with Lattice theory [13], [14], encompassing all possible symmetric solutions.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_39-Reliable_Global_Navigation_System_using_Flower.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Algorithm for Resource Allocation in Parallel and Distributed Computing Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040238</link>
        <id>10.14569/IJACSA.2013.040238</id>
        <doi>10.14569/IJACSA.2013.040238</doi>
        <lastModDate>2013-03-01T12:42:03.0970000+00:00</lastModDate>
        
        <creator>S.F. El-Zoghdy</creator>
        
        <creator>M. Nofal</creator>
        
        <creator>M. A. Shohla</creator>
        
        <creator>A. A. El-Sawy</creator>
        
        <subject>grid computing; resource management; load balancing; performance evaluation; queuing theory; simulation models</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Resource allocation in heterogeneous parallel and distributed computing systems is the process of allocating user tasks to processing elements for execution such that some performance objective is optimized. In this paper, a new resource allocation algorithm for the computing grid environment is proposed. It takes into account the heterogeneity of the computational resources and resolves the single-point-of-failure problem from which many current algorithms suffer. In this algorithm, any site manager receives two kinds of tasks: remote tasks arriving from its associated local grid manager, and local tasks submitted directly to the site manager by local users in its domain. It allocates the grid workload based on the resource occupation ratio and the communication cost. The grid overall mean task response time is considered the main performance metric to be minimized. The simulation results show that the proposed resource allocation algorithm improves the grid overall mean task response time.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_38-An_Efficient_Algorithm_for_Resource_Allocation_in_Parallel_and_Distributed_Computing_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Expensive Optimisation: A Metaheuristics Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040230</link>
        <id>10.14569/IJACSA.2013.040230</id>
        <doi>10.14569/IJACSA.2013.040230</doi>
        <lastModDate>2013-03-01T12:42:01.0070000+00:00</lastModDate>
        
        <creator>Maumita Bhattacharya</creator>
        
        <subject>Evolutionary Algorithm; Preference Learning; Surrogate Modeling; Surrogate Ranking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Stochastic, iterative search methods such as Evolutionary Algorithms (EAs) are proven to be efficient optimizers. However, they require evaluation of candidate solutions, which may be prohibitively expensive in many real-world optimization problems. The use of approximate models, or surrogates, is being explored as a way to reduce the number of such evaluations. In this paper we investigate three such methods. The first method (DAFHEA) partially replaces expensive function evaluations with an approximate model, realized with support vector machine (SVM) regression models. The second method (DAFHEA II) is an enhancement of DAFHEA that accommodates uncertain environments. The third uses surrogate ranking with preference learning, or ordinal regression, estimating the fitness of candidates by modeling their rank. The techniques’ performance on several benchmark numerical optimization problems is reported, and their comparative benefits and shortcomings are identified.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_30-Expensive_Optimisation_A_Metaheuristics_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040229</link>
        <id>10.14569/IJACSA.2013.040229</id>
        <doi>10.14569/IJACSA.2013.040229</doi>
        <lastModDate>2013-03-01T12:41:58.9130000+00:00</lastModDate>
        
        <creator>Jeemoni Kalita</creator>
        
        <creator>Karen Das</creator>
        
        <subject>Facial expression recognition; facial expressions; Eigenvectors; Eigenvalues</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>In this paper, an Eigenvector-based system is presented to recognize facial expressions from digital facial images. In this approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the Eigenvectors specific to the expressions. The Eigenvectors for the test images were also computed, and the input facial image was finally recognized by finding the minimum Euclidean distance between the test image and the different expressions.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_29-Recognition_of_Facial_Expression_Using_Eigenvector_Based_Distributed_Features_and_Euclidean_Distance_Based_Decision_Making_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Indian Sign Language Recognition Using Eigen Value Weighted Euclidean Distance Based Classification Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040228</link>
        <id>10.14569/IJACSA.2013.040228</id>
        <doi>10.14569/IJACSA.2013.040228</doi>
        <lastModDate>2013-03-01T12:41:56.8400000+00:00</lastModDate>
        
        <creator>Joyeeta Singha</creator>
        
        <creator>Karen Das</creator>
        
        <subject>Hand Gesture Recognition; Skin Filtering; Human Computer Interaction; Euclidean Distance (E.D.); Eigen value; Eigen vector.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Sign Language Recognition is one of the fastest-growing fields of research today, and many new techniques have been developed in this field recently. In this paper, we propose a system using Eigen value weighted Euclidean distance as a classification technique for recognition of various sign languages of India. The system comprises four parts: Skin Filtering, Hand Cropping, Feature Extraction, and Classification. 24 signs were considered in this paper, each having 10 samples; thus a total of 240 images was considered, for which the recognition rate obtained was 97%.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_28-Indian_Sign_Language_Recognition_Using_Eigen_Value_Weighted_Euclidean_Distance_Based_Classification_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of Automated Decision Support Systems with Data Mining Abstract: A Client Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040226</link>
        <id>10.14569/IJACSA.2013.040226</id>
        <doi>10.14569/IJACSA.2013.040226</doi>
        <lastModDate>2013-03-01T12:41:54.7670000+00:00</lastModDate>
        
        <creator>Abdullah Saad AL-Malaise</creator>
        
        <subject>Decision Support Systems; Knowledge Management; Data Mining; Customer’s Satisfaction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Customer behavior and satisfaction always play an important role in increasing an organization’s growth and market value, and customers are the top priority for growing organizations building up their businesses. This paper presents the architecture of a Decision Support System (DSS) for dealing with customers’ enquiries and requests. The main purpose of the proposed model is to enhance customer satisfaction and behavior using DSS. We extend traditional DSS concepts by integrating a Data Mining (DM) abstract. The model presented in this paper shows a comprehensive architecture for handling customer requests using DSS and knowledge management (KM) to improve customer behavior and satisfaction. Furthermore, the DM abstract provides methods and techniques to understand the contacted customers’ data, to classify the replied answers into a number of classes, to generate associations between queries of the same type, and finally to maintain the KM for future correspondence.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_26-Integration_of_Automated_Decision_Support_Systems_with_Data_Mining_Abstract_A_Client_Perspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Omega Model for Human Detection and Counting for application in Smart Surveillance System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040225</link>
        <id>10.14569/IJACSA.2013.040225</id>
        <doi>10.14569/IJACSA.2013.040225</doi>
        <lastModDate>2013-03-01T12:41:52.6900000+00:00</lastModDate>
        
        <creator>Subra Mukherjee</creator>
        
        <creator>Karen Das</creator>
        
        <subject>Omega Model; Human Detection; Surveillance; Background Subtraction; Gaussian Mixture Model (GMM); Mixture of Gaussians (MOG)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Driven by significant advancements in technology and social issues such as security management, there is a strong need for smart surveillance systems in our society today. One of the key features of a smart surveillance system is efficient human detection and counting, such that the system can decide and label events on its own. In this paper we propose a novel and robust model, the Omega Model, for detecting and counting the human beings present in a scene. The proposed model employs a set of four distinct descriptors for identifying the unique features of the head, neck, and shoulder regions of a person. This unique head-neck-shoulder signature given by the Omega Model addresses challenges such as inter-person variations in the size and shape of people’s head, neck, and shoulder regions to achieve robust detection of human beings even under partial occlusion, dynamically changing backgrounds, and varying illumination conditions. After experimentation we observe and analyze the influence of each of the four descriptors on system performance and computation speed, and conclude that a weight-based decision making system produces the best results. Evaluation results on a number of images validate our method in actual situations.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_25-Omega_Model_for_Human_Detection_and_Counting_for_application_in_Smart_Surveillance_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mixed Finite Element Method for Elasticity Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040224</link>
        <id>10.14569/IJACSA.2013.040224</id>
        <doi>10.14569/IJACSA.2013.040224</doi>
        <lastModDate>2013-03-01T12:41:50.5830000+00:00</lastModDate>
        
        <creator>A. Elakkad</creator>
        
        <creator>M.A.Bennani</creator>
        
        <creator>J.EL Mekkaoui</creator>
        
        <creator>A.Elkhalfi</creator>
        
        <subject>Elasticity problem; Mixed Finite element method; BDM1 approximation; ABAQUS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>This paper describes a numerical solution for the plane elasticity problem. It includes algorithms for discretization by mixed finite element methods. The discrete scheme allows the use of the Brezzi-Douglas-Marini element (BDM1) for the stress tensor and piecewise constant elements for the displacement. The numerical results are compared with some previously published works and with others coming from commercial codes such as ABAQUS.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_24-A_mixed_finite_element_method_for_elasticity_problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of DCT and Walsh Transforms for Watermarking using DWT-SVD</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040220</link>
        <id>10.14569/IJACSA.2013.040220</id>
        <doi>10.14569/IJACSA.2013.040220</doi>
        <lastModDate>2013-03-01T12:41:48.4930000+00:00</lastModDate>
        
        <creator>Dr. H. B. Kekre</creator>
        
        <creator>Dr. Tanuja Sarode</creator>
        
        <creator>Shachi Natu</creator>
        
        <subject>Discrete Wavelet Transform (DWT); Discrete Cosine Transform (DCT); Singular Value Decomposition (SVD); watermarking.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>This paper presents a DWT-DCT-SVD based hybrid watermarking method for color images. Robustness is achieved by applying DCT to specific wavelet sub-bands and then factorizing each quadrant of the frequency sub-band using singular value decomposition. The watermark is embedded in the host image by modifying the singular values of the host image. The performance of this technique is then compared by replacing DCT with the Walsh transform in the above combination, which yields a computationally faster method with acceptable performance. The imperceptibility of the method is tested by embedding the watermark in the HL2, HH2, and HH1 frequency sub-bands. Embedding the watermark in HH1 proves to be more robust and imperceptible than using the HL2 and HH2 sub-bands.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_20-Performance_Comparison_of_DCT_and_Walsh_Transforms_for_Watermarking_using_DWT-SVD.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Government Grid Services Topology Based On Province And Population In Indonesia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040219</link>
        <id>10.14569/IJACSA.2013.040219</id>
        <doi>10.14569/IJACSA.2013.040219</doi>
        <lastModDate>2013-03-01T12:41:46.4200000+00:00</lastModDate>
        
        <creator>Ummi Azizah Rachmawati</creator>
        
        <creator>Xue Li</creator>
        
        <creator>Heru Suhartanto</creator>
        
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>The e-Government Grid Service Model in Indonesia is an adjustment based on the framework of existing e-Government and the form of government in the country. Grid-based services for interoperability can be a solution for resource sharing and interoperability of e-Government systems. In a previous study, we designed and simulated the topology of Indonesian e-Government Grid services based on function groups from the e-Government application solution map, to connect the ministries/agencies/departments/institutions. In this paper we analyse the results of the e-Government services topology simulation based on the provinces and population of the country.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_19-E-Government_Grid_Services_Topology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coordinated Resource Management Models in Hierarchical Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040216</link>
        <id>10.14569/IJACSA.2013.040216</id>
        <doi>10.14569/IJACSA.2013.040216</doi>
        <lastModDate>2013-03-01T12:41:44.2970000+00:00</lastModDate>
        
        <creator>Gabsi Mounir</creator>
        
        <creator>Rekik Ali</creator>
        
        <creator>Temani Moncef</creator>
        
        <subject>Strategy models; logistic system; resources management; optimisation; routing;  transport</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>In response to the trend toward an efficient global economy, constructing a global logistics model has garnered much attention from industry. Location selection is an important issue for international companies interested in building a global logistics management system. Infrastructure in developing countries is based on the use of both classical and modern control technology, for which the most important components are professional levels of structural knowledge, dynamics and management processes, threats and interference, and external and internal attacks. The problem of controlling flows of energy and material resources in local and regional structures, under normal and marginal (emergency) operation provoked by information attacks or threats of flow failure, is all the more relevant given the low level of professional, psychological, and cognitive training of operational personnel and managers. Logistics strategies include the business goal requirements, allowable decision tactics, and a vision for designing and operating a logistics system. This paper describes the selection of a module for coordinating flow-management strategies based on the use of resources and logistics-system concepts.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_16-Coordinated_Resource_Management_Models_in_Hierarchical_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of Ultrasound Breast Images using Vector Neighborhood with Vector Sequencing on KMCG and augmented KMCG algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040214</link>
        <id>10.14569/IJACSA.2013.040214</id>
        <doi>10.14569/IJACSA.2013.040214</doi>
        <lastModDate>2013-03-01T12:41:42.2070000+00:00</lastModDate>
        
        <creator>Dr. H. B. Kekre</creator>
        
        <creator>Pravin Shrinath</creator>
        
        <subject>codebook; seed vector; training set; vector quantization </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>B-mode ultrasound (US) imaging is a popular and important modality for examining a range of clinical problems, and is also used as a complement to mammogram imaging to detect and diagnose the nature of breast tumors. To understand the nature (benign or malignant) of a tumor, most radiologists focus on its shape and boundary; the boundary is therefore as important a characteristic of the tumor as the shape. Tracing the contour manually is a time-consuming and tedious task, and an automated, efficient segmentation method also helps radiologists understand and observe the volume of a tumor (growth or shrinkage). Inherent artifacts present in US images, such as speckle, attenuation, and shadows, are major hurdles in achieving proper segmentation; along with these artifacts, the inhomogeneous texture present in the region of interest is also a major concern. Most of the algorithms studied in the literature include a noise-removal technique as a preprocessing step. Here we eliminate this step and directly handle images with a high degree of noise. A VQ-based clustering technique is proposed for US image segmentation with the KMCG and augmented KMCG codebook generation algorithms. Using these algorithms, images are divided into clusters, which are then merged sequentially; a novel technique of sequential cluster merging with vector sequencing has been used. We have also proposed a technique to find the region of interest from the selected cluster with seed vector acquisition. Results obtained by our method are compared with our earlier method and the Marker Controlled Watershed transform. In the opinion of an expert radiologist, our method gives better results.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_14-Segmentation_of_Ultrasound_Breast_Images_using_Vector_Neighborhood_with_Vector_Sequencing_on_KMCG_and_augmented_KMCG_algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IRS for Computer Character Sequences Filtration: a new software tool and algorithm to support the IRS at tokenization process</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040212</link>
        <id>10.14569/IJACSA.2013.040212</id>
        <doi>10.14569/IJACSA.2013.040212</doi>
        <lastModDate>2013-03-01T12:41:40.1330000+00:00</lastModDate>
        
        <creator>Ahmad Al Badawi</creator>
        
        <creator>Qasem Abu Al-Haija</creator>
        
        <subject>Information Retrieval; Tokenization; Pattern Matching; Sequences Filtration</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Tokenization is the task of chopping text up into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation. A token is an instance of a sequence of characters in some particular document that are grouped together as a useful semantic unit for processing. A new software tool and algorithm to support the IRS in the tokenization process are presented. Our proposed tool filters out four computer character sequences: IP addresses, Web URLs, dates, and email addresses, using pattern matching algorithms and filtration methods. After this process, the IRS can start a new tokenization process on the newly retrieved text, which will be free of these sequences.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_12-IRS_for_Computer_Character_Sequences_Filtration_a_new_software_tool_and_algorithm_to_support_the_IRS_at_tokenization_process.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Universal Learning System for Embedded System Education and Promotion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040203</link>
        <id>10.14569/IJACSA.2013.040203</id>
        <doi>10.14569/IJACSA.2013.040203</doi>
        <lastModDate>2013-03-01T12:41:38.0400000+00:00</lastModDate>
        
        <creator>Kai-Chao Yang</creator>
        
        <creator>Yu-Tsang Chang</creator>
        
        <creator>Chien-Ming Wu</creator>
        
        <creator>Chun-Ming Huang</creator>
        
        <subject>Embedded system; Distance learning; Computer science education</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>In this article, the idea of a universal learning system for embedded systems is presented. The proposed system provides a complete learning environment consisting of an information collection center, a preference estimation system, a Q&amp;A center, a forum, and a virtual classroom. The skeleton of the proposed system is the preference estimation system, which helps users understand the relationships between different hardware kits and suggests suitable hardware kits for learning embedded systems. The proposed system then provides the virtual classroom and Q&amp;A service for users to start their classes. In addition, users can share design samples and experience and join discussions through the forum. For demonstration, three embedded hardware platforms are introduced and applied by the proposed learning system. The results show that most students feel the proposed learning system can effectively help with their embedded software design.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_3-Universal_Learning_System_for_Embedded_System_Education_and_Promotion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Toward the Integration of Traditional and Agile Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040202</link>
        <id>10.14569/IJACSA.2013.040202</id>
        <doi>10.14569/IJACSA.2013.040202</doi>
        <lastModDate>2013-03-01T12:41:35.9670000+00:00</lastModDate>
        
        <creator>Hung-Fu Chang</creator>
        
        <creator>Stephen C-Y.Lu</creator>
        
        <subject>Source Code Analysis; Software Data Mining; Agile Development</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>The agile approach uses continuous delivery, instead of distinct procedures, to work more closely with customers and to respond faster to requirement changes, all of which contrasts with the traditional plan-driven approach. Due to the agile method’s characteristics and its success in real-world practice, a number of discussions of the differences between agile and traditional approaches have emerged recently, and many studies have attempted to integrate both methods to synthesize the benefits of the two sides. However, this type of research often draws its conclusions from observations of a development activity or from surveys after a project. To provide more objective supporting evidence for comparing these two approaches, our research analyzes source code, logs, and notes. We argue that the agile and traditional approaches share common characteristics, which can be considered the glue for integrating both methods. In our study, we collect all the submissions from the version control repository, along with meeting notes and discussions. By applying our suggested analysis method, we illustrate the shared properties between the agile and traditional approaches; thus, distinct development phases, such as implementation and test, can still be identified in an agile development history. This result not only supports our hypothesis but also offers a suggestion for better integration.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_2-Toward_the_Integration_of_Traditional_and_Agile_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Machine Learning for Bioclimatic Modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040201</link>
        <id>10.14569/IJACSA.2013.040201</id>
        <doi>10.14569/IJACSA.2013.040201</doi>
        <lastModDate>2013-03-01T12:41:33.8770000+00:00</lastModDate>
        
        <creator>Maumita Bhattacharya</creator>
        
        <subject>Machine Learning; Bioclimatic Modelling; Geographic Range; Artificial Neural Network; Evolutionary Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Many machine learning (ML) approaches are widely used to generate bioclimatic models for predicting the geographic range of organisms as a function of climate. Applications such as predicting range shifts in organisms and the ranges of invasive species influenced by climate change are important in understanding the impact of climate change. However, the success of machine learning-based approaches depends on a number of factors. While it can be safely said that no particular ML technique is effective in all applications, and the success of a technique is predominantly dependent on the application or the type of problem, it is useful to understand their behavior to ensure an informed choice of techniques. This paper presents a comprehensive review of machine learning-based bioclimatic model generation and analyses the factors influencing the success of such models. Considering the wide use of statistical techniques, our discussion also includes conventional statistical techniques used in bioclimatic modelling.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_1-Machine_Learning_for_Bioclimatic_Modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ZeroX Algorithms with Free crosstalk in Optical Multistage Interconnection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040223</link>
        <id>10.14569/IJACSA.2013.040223</id>
        <doi>10.14569/IJACSA.2013.040223</doi>
        <lastModDate>2013-03-01T12:41:31.8000000+00:00</lastModDate>
        
        <creator>M. A. Al-Shabi</creator>
        
        <subject>Optical multistage interconnection networks (MINs); ZeroX Algorithm; crosstalk in Omega network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Multistage interconnection networks (MINs) have been proposed as interconnecting structures in various types of communication applications, ranging from parallel systems and switching architectures to multicore systems. Optical technologies have drawn interest in optical implementations of MINs to achieve high bandwidth capacity at the rate of terabits per second. Crosstalk is the major problem with optical interconnections; it not only degrades the performance of the network but also disturbs the path of communication signals. To avoid crosstalk in optical MINs, many algorithms have been proposed, and some researchers have suggested solutions to improve the Zero algorithm. This paper illustrates that no crosstalk appears in the Zero-based algorithms (ZeroX, ZeroY, and ZeroXY) when using the refine and unique-case functions. Through simulation modeling, the Zero-based algorithm approach yields the best performance in terms of minimal routing time and number of passes in comparison to the previous algorithms tested in this paper.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_23-ZeroX_Algorithms_with_Free_crosstalk_in_Optical_Multistage_Interconnection_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling and Simulation Multi Motors Web Winding System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040217</link>
        <id>10.14569/IJACSA.2013.040217</id>
        <doi>10.14569/IJACSA.2013.040217</doi>
        <lastModDate>2013-03-01T12:41:29.7270000+00:00</lastModDate>
        
        <creator>Hachemi Glaoui</creator>
        
        <creator>Abdeldjebar Hazzab</creator>
        
        <creator>Bousmaha Bouchiba</creator>
        
        <creator>Isma&#239;l Khalil Bousserhane</creator>
        
        <subject>Multi motors web winding system; PI controller; tension control;   linear speed control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Web winding systems allow the operations of unwinding and rewinding of various products, including plastic films, sheets of paper, sheets, and fabrics. These operations are necessary for the development and the treatment of these products. Web winding systems generally consist of the same machine elements in spite of the diversity of the transported products. Due to the wide range of variation of the radius and inertia of the rollers, the system dynamics change considerably during the winding/unwinding process. A decentralized PI controller for web tension control and linear speed control is presented in this paper. The PI control method can be applied easily and is widely known, and it has an important place in control applications. Simulation results show the effectiveness of the proposed linear speed and tension controller for multi-motor web winding systems.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_17-Modeling_and_Simulation_Multi_Motors_Web_Winding_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Fusion Between Microwave and Thermal Infrared Radiometer Data and Its Application to Skin Sea Surface Temperature, Wind Speed and Salinity Retrievals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040236</link>
        <id>10.14569/IJACSA.2013.040236</id>
        <doi>10.14569/IJACSA.2013.040236</doi>
        <lastModDate>2013-03-01T12:41:27.6370000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>data fusion; simultaneous estimation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>A method for data fusion between Microwave Scanning Radiometer (MSR) and Thermal Infrared Radiometer (TIR) derived skin sea surface temperature (SSST), wind speed (WS) and salinity is proposed. SSST can be estimated with MSR and TIR radiometer data. Although the contributions of ocean depth to MSR and TIR radiometer data differ from each other, SSST estimation can be refined through comparisons between MSR- and TIR-derived SSST. WS and salinity can also be estimated with MSR data under the condition of the refined SSST. Simulation study results support the idea of the proposed data fusion method.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_36-Data_Fusion_Between_Microwave_and_Thermal_Infrared_Radiometer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Learning Processes for Back Propagation Neural Network Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040235</link>
        <id>10.14569/IJACSA.2013.040235</id>
        <doi>10.14569/IJACSA.2013.040235</doi>
        <lastModDate>2013-03-01T12:41:25.5770000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>neural network; error back propagation; convergence process; spatial correlation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>A method for visualization of the learning processes of a back propagation neural network is proposed. The proposed method allows monitoring the spatial correlations among the nodes as an image and also checking the convergence status. The method was applied, for monitoring correlations and checking convergence status, to spatially correlated satellite imagery data of AVHRR-derived sea surface temperature. It is found that the proposed method is useful for checking the convergence status and is also effective for monitoring the spatial correlations among the nodes in the hidden layer.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_35-Visualization_of_Learning_Processes_for_Back_Propagation_Neural_Network_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Estimation of Aerosol Parameters Based on Ground Based Atmospheric Polarization Irradiance Measurements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040234</link>
        <id>10.14569/IJACSA.2013.040234</id>
        <doi>10.14569/IJACSA.2013.040234</doi>
        <lastModDate>2013-03-01T12:41:23.4870000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Degree of Polarization; aerosol refractive index; size distribution </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>A method for aerosol refractive index estimation with ground-based polarization measurement data is proposed. The proposed method uses the dependency of the refractive index on p- and s-polarized down-welling solar diffuse irradiance. It is much easier to measure p- and s-polarized irradiance on the ground with a portable measuring instrument than to make solar direct, diffuse and aureole measurements. Through theoretical and simulation studies, it is found that the proposed method shows good estimation accuracy of the refractive index using measured down-welling p- and s-polarized irradiance data with a measuring instrument pointing in the direction perpendicular to the sun in the principal plane. Field experimental results also show the validity of the proposed method in comparison to the results estimated by the conventional method with solar direct, diffuse and aureole measurement data.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_34-Method_for_Estimation_of_Aerosol_Parameters_Based_on_Ground_Based_Atmospheric_Polarization_Irradiance_Measurements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Image Portion Retrieval and Display for Comparatively Large Scale of Imagery Data onto Relatively Small Size of Screen Which is Suitable to Block Coding of Image Data Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040232</link>
        <id>10.14569/IJACSA.2013.040232</id>
        <doi>10.14569/IJACSA.2013.040232</doi>
        <lastModDate>2013-03-01T12:41:21.3970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>data compression; image retrieval; block coding</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>A method for image portion retrieval and display of relatively large-scale imagery data onto a comparatively small display is proposed. The method is suitable for data compression methods based on block coding. Through experiments with satellite imagery data, it is found that the proposed method is useful for display onto small-sized screens such as mobile phone displays.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_32-Method_for_Image_Portion_Retrieval_and_Display_for_Comparatively.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Framework of Designing an Adaptive and Multi-Regime Prognostics and Health Management for Wind Turbine Reliability and Efficiency Improvement </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040221</link>
        <id>10.14569/IJACSA.2013.040221</id>
        <doi>10.14569/IJACSA.2013.040221</doi>
        <lastModDate>2013-03-01T12:41:19.3030000+00:00</lastModDate>
        
        <creator>B.L. Song</creator>
        
        <creator>J.Lee</creator>
        
        <subject>PHM; Adaptive tool selection; Multi-regime prognostics; Information reconstruction; Holo-coefficient</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Wind turbine systems are increasing in technical complexity, and are tasked with operating and degrading in highly dynamic and unpredictable conditions. Sustaining the reliability of such systems is a complex and difficult task. In spite of extensive efforts, current prognostics and health management (PHM) methodologies face many challenges, due to the complexity of the degradation process and the dynamic operating conditions of a wind turbine. This research proposes a novel adaptive and multi-regime prognostics and health management (PHM) approach with the aim of tackling the challenges of traditional methods. With this approach, a scientific and systematic solution is provided for health assessment, diagnosis and prognosis of critical components of wind turbines under varying environmental, operational and aging processes. The system is also capable of adaptively selecting the tools suitable for a component under a certain health status and a specific operating condition. The adopted health assessment, diagnosis and prognosis tools and techniques for wind turbines are warranted by the IMS Center's intensive research on PHM models for common rotary machinery components. Some sub-procedures, such as information reconstruction, the regime clustering approach and the prognostics of rotating elements, were validated by the best-score performance in the PHM Data Challenge 2008 (student group) and 2009 (professional group). The success of the proposed wind turbine PHM system would greatly benefit the current wind turbine industry.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_21-Framework_of_Designing_an_Adaptive_and_Multi-Regime_Prognostics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Scheme on Morphological Image Segmentation Using the Gradients</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040215</link>
        <id>10.14569/IJACSA.2013.040215</id>
        <doi>10.14569/IJACSA.2013.040215</doi>
        <lastModDate>2013-03-01T12:41:17.2000000+00:00</lastModDate>
        
        <creator>Pinaki Pratim Acharjya</creator>
        
        <creator>Santanu Santra</creator>
        
        <creator>Dibyendu Ghoshal</creator>
        
        <subject>Contour detection; gradients; watershed algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>An improved scheme for contour detection with a better performance measure is proposed. It is based on the response of the human visual system during visualization of any type of image. The scheme consists of two parts: finding the edges of the image using a modified mask of the Laplacian of Gaussian edge operator, and subsequently modulating the edges using the watershed algorithm. The method has been applied to a digital image, and a better performance measure of contour detection has been achieved.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_15-An_Improved_Scheme_On_Morphological_Image_Segmentation_Using_The_Gradients.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Simple Exercise-to-Play Proposal that would Reduce Games Addiction and Keep Players Healthy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040213</link>
        <id>10.14569/IJACSA.2013.040213</id>
        <doi>10.14569/IJACSA.2013.040213</doi>
        <lastModDate>2013-03-01T12:41:15.0930000+00:00</lastModDate>
        
        <creator>Nael Hirzallah</creator>
        
        <subject>Virtual Reality; Gaming; Video Game.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Game players usually get addicted to video games in general, and more specifically to those that are played over the Internet. These players prefer to stay at home and play games rather than playing sports or outdoor games. This paper presents a proposal that aims to implement a simple way to make video game players exercise in order to play. The proposal targets games where players virtually live inside a certain area such as a forest, a city or a war zone; their aim is to explore the area, capture, kill, and avoid being killed by something or someone. A custom-built treadmill acting as a movement capture device is proposed to capture players’ movement commands: running, walking, stopping, and turning. In that way, the players enjoy exercising as well as playing the game. However, sooner or later, the players get exhausted, driving them to exit the game. We believe that such a proposal would keep players healthy and reduce the chance of addiction.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_13-A_Simple_Exercise-to-Play_Proposal_that_would_Reduce_Games_Addiction_and_Keep_Players_Healthy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Collaborative Quality Assurance System for Wind Turbine Supply Chain Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040206</link>
        <id>10.14569/IJACSA.2013.040206</id>
        <doi>10.14569/IJACSA.2013.040206</doi>
        <lastModDate>2013-03-01T12:41:12.9870000+00:00</lastModDate>
        
        <creator>B.L. SONG</creator>
        
        <creator>W.LIAO</creator>
        
        <creator>J.LEE</creator>
        
        <subject>Wind turbine expert system; Supply chain management; Collaborative quality assurance; Prediction; Pattern recognition;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>Determining the root causes or sources of variance of bad quality in supply chains is usually difficult because multiple parties are involved in the current global manufacturing environment. Each component within a supply chain tends to focus on its own responsibilities and ignores possibilities for interconnectivity, and therefore the potential for systematic quality assurance and quality tracing. Rather than concentrating on assigning responsibility for “recall” incidents, it would be better to expend that energy on constructing a collaborative system to assure product quality by employing a systematic view of the entire supply chain. This paper presents a systematic framework for intelligent collaborative quality assurance throughout an entire supply chain, based on an expert system implementing two levels of quality assurance: the system level and the component level. The proposed system provides intelligent functions for quality prediction, pattern recognition and data mining. A case study for wind turbines is given to demonstrate this approach. The results show that such a system can assure that product quality is improved in a continuous process.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_6-Intelligent_Collaborative_Quality_Assurance_System_for_Wind_Turbine_Supply_Chain_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Detection Of Electrocardiogram ST Segment: Application In Ischemic Disease Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040222</link>
        <id>10.14569/IJACSA.2013.040222</id>
        <doi>10.14569/IJACSA.2013.040222</doi>
        <lastModDate>2013-03-01T12:41:10.7870000+00:00</lastModDate>
        
        <creator>Duck Hee Lee</creator>
        
        <creator>Jun Woo Park</creator>
        
        <creator>Jeasoon Choi</creator>
        
        <creator>Ahmed Rabbi</creator>
        
        <subject>Electrocardiogram (ECG); Ischemia; European ST-T database; QRS complex ; ST segment.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(2), 2013</description>
        <description>The analysis of the electrocardiogram (ECG) signal provides important clinical information for heart disease diagnosis. The ECG signal consists of the P wave, QRS complex, and T wave. These waves correspond to the fields induced by specific electrical phenomena on the cardiac surface. Among them, the detection of ischemia can be achieved by analyzing the ST segment. Ischemia is one of the most serious and prevalent heart diseases. In this paper, the European ST-T database was used for evaluation of automatic detection of the ST segment. The method comprises several steps: ECG signal loading from the database, signal preprocessing, detection of the QRS complex and R-peak, ST segment detection, and other related parameter measurements. The developed application displays the results of the analysis.</description>
        <description>http://thesai.org/Downloads/Volume4No2/Paper_22-Automatic_Detection_of_Electrocardiogram_ST_Segment_Application_in_Ischemic_Disease_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Based Image Retrieval Using Framelet Transform–Gray Level Co-occurrence Matrix(GLCM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020211</link>
        <id>10.14569/IJARAI.2013.020211</id>
        <doi>10.14569/IJARAI.2013.020211</doi>
        <lastModDate>2013-02-09T09:34:04.1170000+00:00</lastModDate>
        
        <creator>S. Sulochana</creator>
        
        <creator>R.Vidhya</creator>
        
        <subject>Content Based image Retrieval (CBIR); Discrete Wavelet transform (DWT); Framelet Transform; Gray level- co occurrence matrix(GLCM)</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>This paper presents a novel content based image retrieval (CBIR) system based on the Framelet Transform combined with the gray level co-occurrence matrix (GLCM). The proposed method is shift invariant, captures edge information more accurately than conventional transform domain methods, and is able to handle images of arbitrary size. The current system uses texture as the visual content for feature extraction. First, texture features are obtained by computing the energy, standard deviation and mean on each sub-band of the Framelet transform decomposed image. Then a new method combining the Framelet transform and the gray level co-occurrence matrix (GLCM) is applied. The results of the proposed methods are compared with conventional methods, and the retrieval results of the two methods are compared with each other. Euclidean distance, Canberra distance and city block distance are used as similarity measures in the proposed CBIR system.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_11-Texture_Based_Image_Retrieval_Using_Framelet_Transform_Gray_level_Co-occurrence_matrix_GLCM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Scatter Search Using Cuckoo Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020210</link>
        <id>10.14569/IJARAI.2013.020210</id>
        <doi>10.14569/IJARAI.2013.020210</doi>
        <lastModDate>2013-02-09T09:34:02.0100000+00:00</lastModDate>
        
        <creator>Ahmed T. Sadiq Al-Obaidi</creator>
        
        <subject>Metaheuristic; Scatter Search; Cuckoo Search; Combinatorial Problems; Traveling Salesman Problem</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>Scatter Search (SS) is a deterministic strategy that has been applied successfully to some combinatorial and continuous optimization problems. Cuckoo Search (CS) is a heuristic search algorithm inspired by the reproduction strategy of cuckoos. This paper presents an enhanced Scatter Search algorithm that uses the CS algorithm. The improvement provides Scatter Search with random exploration of the problem's search space and with more diversity and intensification of promising solutions. The original and improved Scatter Search have been tested on the Traveling Salesman Problem, and a computational experiment with benchmark instances is reported. The results demonstrate that the improved Scatter Search algorithm performs better than the original Scatter Search algorithm; the improvement in the value of average fitness is 23.2% compared with the original SS. The developed algorithm has also been compared with other algorithms for the same problem; the result was competitive with some algorithms and weaker than others.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_10-Improved_Scatter_Search_Using_Cuckoo_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AutoBeeConf : A swarm intelligence algorithm for MANET administration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020209</link>
        <id>10.14569/IJARAI.2013.020209</id>
        <doi>10.14569/IJARAI.2013.020209</doi>
        <lastModDate>2013-02-09T09:33:59.8400000+00:00</lastModDate>
        
        <creator>Luca Caputo</creator>
        
        <creator>Cristiano Davino</creator>
        
        <creator>Filomena de Santis</creator>
        
        <creator>Vincenzo Ferri</creator>
        
        <subject>MANET; Routing protocols; IP address auto-configuration; Swarm intelligence</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>In a mobile ad-hoc network (MANET), nodes are self-organized without any infrastructure support: they move arbitrarily, causing the network to experience quick and random topology changes; they have to act as routers as well as forwarding nodes; and some of them do not communicate directly with each other. Routing and IP address auto-configuration are among the most challenging tasks in the MANET domain. Swarm intelligence is a property of natural and artificial systems involving minimally skilled individuals that exhibit a collective intelligent behavior derived from interaction with each other by means of the environment. Colonies of ants and bees are the most prominent examples of swarm intelligence systems. Flexibility, robustness, and self-organization make swarm intelligence a successful design paradigm for difficult combinatorial optimization problems, such as routing and IP address allocation in MANETs. This paper proposes AutoBeeConf, a new IP address auto-configuration algorithm based on bee swarm labor that may be applied to large-scale MANETs with low complexity, low communication overhead, even address distribution, and low latency. Both the protocol description and simulation experiments are presented to demonstrate the advantages of AutoBeeConf over two known algorithms, namely the Buddy and Ant-based protocols. Finally, future research directions are established, especially toward the principle that swarm intelligence paradigms may be usefully employed in the redefinition or modification of each layer in the TCP/IP suite so that it can work efficiently even in the infrastructure-less and dynamic MANET environment.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_9-AutoBeeConf.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Voice Recognition Method with Mouth Movement Videos Based on Forward and Backward Optical Flow</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020208</link>
        <id>10.14569/IJARAI.2013.020208</id>
        <doi>10.14569/IJARAI.2013.020208</doi>
        <lastModDate>2013-02-09T09:33:55.8630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>lip reading; optical flow; Hidden Markov Model; mouth movement</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>A lip reading method with mouth movement videos based on forward and backward optical flow is proposed. Through experiments with 10 mouth movement videos, it is found that the proposed lip reading method is superior to the conventional optical flow based method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_8-Voice_Recognition_Method_with_Mouth_Movement_Videos_Based_on_Forward_and_Backward_Optical_Flow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid of Rough Neural Networks for Arabic/Farsi Handwriting Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020207</link>
        <id>10.14569/IJARAI.2013.020207</id>
        <doi>10.14569/IJARAI.2013.020207</doi>
        <lastModDate>2013-02-09T09:33:53.7730000+00:00</lastModDate>
        
        <creator>Elsayed Radwan</creator>
        
        <subject>Rough Sets; Rough Neural Network; Arabic/Farsi Digit Recognition; Dissimilarity Analysis; Classification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>Handwritten character recognition is one of the focused areas of research in the field of pattern recognition. In this paper, a hybrid model of a rough neural network has been developed for recognizing isolated Arabic/Farsi digits. It addresses the problems of neural networks, namely proneness to overfitting and the empirical nature of model development, using rough sets and dissimilarity analysis. Moreover, perturbation in the input data is handled using rough neurons. This paper describes an evolutionary rough neural network based technique to recognize isolated Arabic/Farsi handwritten digits. The method involves hierarchical feature extraction, data clustering and classification. A comparative study against a conventional neural network is presented, and the details and limitations are discussed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_7-Hybrid_of_Rough_Neural_Networks_for_Arabic_Farsi_Handwriting_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020206</link>
        <id>10.14569/IJARAI.2013.020206</id>
        <doi>10.14569/IJARAI.2013.020206</doi>
        <lastModDate>2013-02-09T09:33:49.7630000+00:00</lastModDate>
        
        <creator>R. Sathya</creator>
        
        <creator>Annamma Abraham</creator>
        
        <subject>Classification; Clustering; Learning; MLP; SOM; Supervised learning; Unsupervised learning; </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine learning algorithms, and in the present study we found that, though the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model offers an efficient solution and classification in the present study.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_6-Comparison_of_Supervised_and_Unsupervised_Learning_Algorithms_for_Pattern_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye-Base Domestic Robot Allowing Patient to Be Self-Services and Communications Remotely</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/</link>
        <id></id>
        <doi></doi>
        <lastModDate>2013-02-09T09:33:45.7870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>eye-based Human-Computer Interaction (HCI); domestic robot; communication aid; virtual travel</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>An eye-based domestic robot helper is proposed to help patients be self-sufficient in a hospital environment. Such a system benefits patients who cannot move around, especially stroke patients who must lie in bed all day because of body malfunction; the eyes are often the only channel of information that can still be retrieved from the user. In this research, we develop a new system in the form of a domestic robot helper controlled by the eyes, which allows a patient to be self-sufficient and to speak remotely. First, we estimate the user's gaze with a camera mounted on the user's glasses. Once the eye image is captured, several image-processing steps are used to estimate the gaze. The eye image is cropped from the source to simplify the area. We detect the centre of the eye by locating the pupil. The pupil and the other eye components can easily be distinguished by colour: because the pupil is darker than the rest, a simple adaptive threshold separates it. Using a simple eye model, we estimate the gaze from the pupil location. The obtained gaze value is then used as an input command to the domestic robot. The user can control the robot's movement by eye and can also send voice through text-to-speech functionality. We use a baby infant robot as our domestic robot and control its movement by sending commands over a serial link (utilizing a USB-to-serial adapter). Three commands, move forward, turn left, and turn right, are used to move the robot. On the robot, we mount another camera to capture the scene in front of it. The robot and the user are separated by distance and connected over a TCP/IP network, which allows the user to control the robot remotely. We set the robot as the server and the user's computer as the client: the robot streams the scene video and receives commands sent by the client, while the client receives the video stream and controls the robot's movement by sending commands over the network. The user can control the robot remotely even over long distances because the user can see the scene in front of the robot. We have tested the performance of our robot controlled over the TCP/IP network; an experiment measuring the robot's manoeuvrability from a start point while avoiding and passing obstacles has been conducted in our laboratory. By implementing our system, patients in hospital can serve themselves.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_5-Eye-Base_Domestic_Robot_Allowing_Patient_to_be_Self-Services_and_Communications_Remotely.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Moving Domestic Robotics Control Method Based on Creating and Sharing Maps with Shortest Path Findings and Obstacle Avoidance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020204</link>
        <id>10.14569/IJARAI.2013.020204</id>
        <doi>10.14569/IJARAI.2013.020204</doi>
        <lastModDate>2013-02-09T09:33:43.6630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>domestic robotics; obstacle avoidance; place identifier; ultrasonic sensor; web camera</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>A control method for moving robots in closed areas, based on creating and sharing maps with shortest-path finding and obstacle avoidance, is proposed. Through a simulation study, the validity of the proposed method is confirmed. Furthermore, the effect of map sharing among robots is also confirmed, together with obstacle avoidance using cameras and ultrasonic sensors.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_4-Moving_Domestic_Robotics_Control_Method_Based_on_Creating_and_Sharing_Maps_with_Shortest_Path_Findings_and_Obstacle_Avoidance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Programming for Document Segmentation and Region Classification Using Discipulus </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020203</link>
        <id>10.14569/IJARAI.2013.020203</id>
        <doi>10.14569/IJARAI.2013.020203</doi>
        <lastModDate>2013-02-09T09:33:41.5570000+00:00</lastModDate>
        
        <creator>Priyadharshini N</creator>
        
        <creator>Vijaya MS</creator>
        
        <subject>Document analysis; Information retrieval; Classification; Feature extraction; Document segmentation. </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>Document segmentation is a method of dividing a document into distinct regions. A document is an assortment of information and a standard means of conveying information to others. Retrieving data from documents involves a great deal of human effort, is time-intensive, and might severely limit the usability of information systems, so automatic information retrieval from documents has become an important issue. It has been shown that document segmentation can help overcome such problems. This paper proposes a new approach to segment and classify document regions as text, image, drawings, and tables. The document image is divided into blocks using the run-length smearing rule, and features are extracted from every block. The Discipulus tool has been used to construct a genetic-programming-based classifier model, which achieved 97.5% classification accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_3-Genetic_programming_for_Document_Segmentation_And_Region_Classification_Using_Discipulus.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Kabbalah System Theory of Ontological and Knowledge Engineering for Knowledge Based Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020202</link>
        <id>10.14569/IJARAI.2013.020202</id>
        <doi>10.14569/IJARAI.2013.020202</doi>
        <lastModDate>2013-02-09T09:33:37.5500000+00:00</lastModDate>
        
        <creator>Gabriel Burstein</creator>
        
        <creator>Constantin Virgil Negoita</creator>
        
        <subject>knowledge based system; knowledge representation; ontological engineering; knowledge modeling; knowledge engineering; artificial intelligence; Kabbalah; system theory; category theory</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>Using the Kabbalah system theory (KST) developed in [1], [2], we propose an ontological engineering for knowledge representation of domains in terms of concept systems in knowledge based systems in artificial intelligence. KST is also used for the knowledge engineering of knowledge model building based on ontology. KST thus provides an integrative, unifying, domain-independent framework for both knowledge representation via ontologies and knowledge model building via knowledge engineering in knowledge based systems.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_2-A_Kabbalah_System_Theory_Of_Ontological_and_Knowledge_Engineering_For_Knowledge_Based_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotional Belief-Desire-Intention Agent Model: Previous Work And Proposed Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020201</link>
        <id>10.14569/IJARAI.2013.020201</id>
        <doi>10.14569/IJARAI.2013.020201</doi>
        <lastModDate>2013-02-09T09:33:33.6800000+00:00</lastModDate>
        
        <creator>Mihaela- Alexandra Puica</creator>
        
        <creator>Adina-Magda Florea</creator>
        
        <subject>Affective Computing; Agent architecture; Belief-Desire-Intention model</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(2), 2013</description>
        <description>Research in affective computing shows that agents cannot be truly intelligent, believable, or realistic without emotions. In this paper, we present a model of emotional agents that is based on a BDI architecture. We show how we can integrate emotions, resources, and personality features into an artificially intelligent agent so as to obtain human-like behavior. We place our work in the general context of existing research on emotional agents, with emphasis on BDI emotional models.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No2/Paper_1-Emotional_Belief-Desire-Intention_Agent_Model_Previous_Work_and_Proposed_Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Block Cipher Involving a Key Bunch Matrix and an Additional Key Matrix, Supplemented with Xor Operation and Supported by Key-Based Permutation and Substitution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040129</link>
        <id>10.14569/IJACSA.2013.040129</id>
        <doi>10.14569/IJACSA.2013.040129</doi>
        <lastModDate>2013-02-04T17:00:33.2230000+00:00</lastModDate>
        
        <creator>Dr. V. U.K Sastry</creator>
        
        <creator>K. Shirisha</creator>
        
        <subject>Key; key bunch matrix; encryption; decryption; permutation; substitution; avalanche effect; cryptanalysis; xor operation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>In this paper, we have developed a block cipher by extending the analysis of a novel block cipher involving a key bunch matrix and key-based permutation and substitution. Here we have included an additional key matrix, which is supplemented with the xor operation. The cryptanalysis carried out in this investigation clearly indicates that this cipher cannot be broken by any attack.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_29-A_Block_Cipher_Involving_a_Key_Bunch_Matrix_and_an_Additional_key_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Personalized Semantic Retrieval and Summarization of Web Based Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040128</link>
        <id>10.14569/IJACSA.2013.040128</id>
        <doi>10.14569/IJACSA.2013.040128</doi>
        <lastModDate>2013-02-04T17:00:31.1330000+00:00</lastModDate>
        
        <creator>Salah T. Babek</creator>
        
        <creator>Khaled M. Fouad</creator>
        
        <creator>Naveed Arshad</creator>
        
        <subject>Semantic Web; WordNet; Personalization; User Model; Information Retrieval; Summarization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Current retrieval methods are essentially based on string matching; they lack semantic information, cannot understand the user's query intent and interests very well, and do not take the personalization of users into account. Semantic retrieval techniques work by interpreting the semantics of keywords. Text summarization allows a user to get a sense of the content of a full text, or to know its information content, without reading all of its sentences.
In this paper, a semantic personalized information retrieval (IR) system is proposed, oriented to the exploitation of Semantic Web technology and the WordNet ontology to support semantic IR capabilities over Web documents. In the proposed system, Web documents are represented in a concept vector model using WordNet. Personalization is achieved in the proposed system by building a user model (UM). Text summarization in the proposed system is based on extracting the most relevant sentences from the original document to form a summary, using WordNet.
The proposed system is examined through three experiments based on relevance-based evaluation. The results show that the proposed system, which is based on Semantic Web technology, can improve the accuracy and effectiveness of retrieving relevant Web documents.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_28-Personalized_Semantic_Retrieval_and_Summarization_of_Web_Based_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Integrated Approach for BI and GIS in Health Sector to Support Decision Makers (BIGIS-DSS)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040127</link>
        <id>10.14569/IJACSA.2013.040127</id>
        <doi>10.14569/IJACSA.2013.040127</doi>
        <lastModDate>2013-02-04T17:00:29.0570000+00:00</lastModDate>
        
        <creator>Torky Sultan</creator>
        
        <creator>Mona Nasr</creator>
        
        <creator>Ayman Khedr</creator>
        
        <creator>Randa Abdou</creator>
        
        <subject>Business Intelligence (BI); Geographic Information System (GIS); Decision Support System (DSS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This paper explores the possibilities of adopting Business Intelligence (BI) and Geographic Information Systems (GIS) to build a spatial-intelligence and predictive analytical approach. The proposed approach will help solve spatial problems that face decision makers in the health sector. The proposed spatial analytical approach covers three main health-planning issues: tackling health inequalities through geospatial monitoring of inequalities in the distribution of health units and their services; supporting decision-making with predictive analytics for common health indicators; and geoprocessing input layers through a dynamic health map and motion charts to support decision making.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_27-A_Proposed_Integrated_Approach_for_BI_and_GIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shadow Suppression using RGB and HSV Color Space in Moving Object Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040126</link>
        <id>10.14569/IJACSA.2013.040126</id>
        <doi>10.14569/IJACSA.2013.040126</doi>
        <lastModDate>2013-02-04T17:00:26.9500000+00:00</lastModDate>
        
        <creator>Shailaja Surkutlawar</creator>
        
        <creator>Prof. Ramesh K Kulkarni</creator>
        
        <subject>Shadow detection; HSV color space; RGB color space.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Video-surveillance and traffic-analysis systems can be greatly improved by using vision-based techniques to extract, manage, and track objects in the scene. However, problems arise due to shadows; in particular, moving shadows can affect the correct localization, measurement, and detection of moving objects. This work presents a technique for shadow detection and suppression used in a system for moving-object detection and tracking. The major novelty of the shadow detection technique is the analysis carried out in the HSV color space to improve the accuracy of detecting shadows. This paper compares shadow suppression using the RGB and HSV color spaces in moving object detection; the results are more encouraging for the HSV colour space than for the RGB colour space.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_26-Shadow_Suppression_using_RGB_and_HSV_Color_Space_in_Moving_Object_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Attribute Analysis for Bangla Words for Universal Networking Language(UNL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040125</link>
        <id>10.14569/IJACSA.2013.040125</id>
        <doi>10.14569/IJACSA.2013.040125</doi>
        <lastModDate>2013-02-04T17:00:24.8770000+00:00</lastModDate>
        
        <creator>Aloke Kumar Saha</creator>
        
        <creator>Muhammad Firoz Mridha</creator>
        
        <creator>Shammi Akhtar</creator>
        
        <creator>Jugal Krishna Das</creator>
        
        <subject>Universal Networking Language; morphology; Bangla part of speech; morphological rules.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>The Universal Networking Language (UNL) is an artificial, worldwide, generalized form of human interaction on a machine-independent digital platform for defining, recapitulating, amending, storing, and disseminating knowledge or information among people of different affiliations. The theoretical and practical research associated with this interdisciplinary endeavor facilitates a number of practical applications in most domains of human activity, such as the globalization of markets or geopolitical interdependence among nations. In our research work we have tried to develop analysis rules for the Bangla parts of speech, which will help create a doorway for converting the Bangla language to UNL and vice versa and overcome the barrier between Bangla and other languages.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_25-Attribute_Analysis_for_Bangla_Words_for_Universal_Networking_Language _UNL.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison and Analysis of Different Software Cost Estimation Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040124</link>
        <id>10.14569/IJACSA.2013.040124</id>
        <doi>10.14569/IJACSA.2013.040124</doi>
        <lastModDate>2013-02-04T17:00:22.7870000+00:00</lastModDate>
        
        <creator>Sweta Kumari </creator>
        
        <creator>Shashank Pushkar</creator>
        
        <subject>Support vector regression; PM - person-months; MOPSO - multiple-objective particle swarm optimization; COCOMO - Constructive Cost Model; Weka data mining tool.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Software cost estimation is the process of predicting the effort required to develop a software system. The basic inputs for software cost estimation are the code size and a set of cost drivers; the output is effort in terms of person-months (PMs). Here, the use of support vector regression (SVR) is proposed for estimating software project effort. We have used the COCOMO dataset, and our results are compared to those of Intermediate COCOMO as well as the MOPSO model on this dataset. It has been observed from the simulation that SVR outperforms the other estimation techniques. This paper provides a comparative study of support vector regression (SVR), Intermediate COCOMO, and the Multiple Objective Particle Swarm Optimization (MOPSO) model for estimating software project effort. We have analyzed them in terms of accuracy and error rate. The data mining tool Weka is used for the simulation.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_24-Comparison_and_Analysis_of_Different_Software_Cost_Estimation_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Ray Tracing Based Non-Linear Mixture Model of Mixed Pixels in Earth Observation Satellite Imagery Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040123</link>
        <id>10.14569/IJACSA.2013.040123</id>
        <doi>10.14569/IJACSA.2013.040123</doi>
        <lastModDate>2013-02-04T17:00:20.7100000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>remote sensing satellite; visible to near infrared radiometer; mixed pixel (mixel); Monte Carlo simulation model</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>A Monte Carlo based non-linear mixel (mixed pixel) model for the visible-to-near-infrared radiometer of earth observation satellite imagery is proposed. Through comparative studies on actual earth observation satellite imagery data, comparing the conventional linear mixel model with the proposed non-linear mixel model, it is found that the proposed mixel model represents the pixels in question much more precisely than the conventional linear mixel model.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_23-Monte_Carlo_Ray_Tracing_Based_Non-Linear_Mixture_Model_of_Mixed_Pixels_in_Earth_Observation_Satellite_Imagery_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Studying Data Mining and Data Warehousing with Different E-Learning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040122</link>
        <id>10.14569/IJACSA.2013.040122</id>
        <doi>10.14569/IJACSA.2013.040122</doi>
        <lastModDate>2013-02-04T17:00:18.6200000+00:00</lastModDate>
        
        <creator>Dr. Mohamed F. AlAjmi</creator>
        
        <creator>Shakir Khan</creator>
        
        <creator>Dr.Arun Sharma</creator>
        
        <subject>Data Mining; Data Warehousing; e-Learning; Moodle; LMS; LCMS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Data mining and data warehousing are two of the most significant techniques for pattern detection and concentrated data management in present technology. E-Learning is one of the most important applications of data mining. The foremost idea is to provide a proposal for a practical model and architecture. The standards and system architecture are analyzed here. This paper gives importance to the combination of Web Services in the e-Learning application domain, because Web Services are the most suitable choice for distance education these days. The process of e-Learning can be made more efficient by utilizing Web usage mining. Sophisticated tools have been developed to analyze internet customers' behaviour to boost sales and profit, but no such tools have been developed to recognize learners' performance in e-Learning. In this paper, some data mining techniques are examined that could be used to improve web-based learning environments.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_22-Studying_Data_Mining_and_Data_Warehousing_with_Different_e-Learning_system.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic E-Learn Services and Intelligent Systems using Web Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040121</link>
        <id>10.14569/IJACSA.2013.040121</id>
        <doi>10.14569/IJACSA.2013.040121</doi>
        <lastModDate>2013-02-04T17:00:16.5300000+00:00</lastModDate>
        
        <creator>K. VANITHA</creator>
        
        <creator>K.YASUDHA</creator>
        
        <creator>Dr.M.SRI VENKATESH</creator>
        
        <creator>K.N.Sowjanya</creator>
        
        <subject>Semantic web; e-learning; Ontology Web Language (OWL); Ontology; OWL-S Service Ontology.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>The present vision for the web is the Semantic Web, in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the web. It provides information precisely. Nowadays, ontology plays a major role in knowledge representation for the Semantic Web [1]. An ontology is a conceptualization of a domain into a human-understandable, machine-readable or machine-processable format consisting of entities, attributes, relationships, and axioms. The Ontology Web Language is designed for use by applications that need to process the content of information [22]. In this context, many e-learning systems have been proposed in the literature. Semantic Web technology may support more advanced artificial intelligence approaches to knowledge retrieval [20]. This paper aims at presenting an intelligent e-learning system from the literature.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_21-Semantic_E-Learn_Services_and_Intelligent_Systems_using_Web_Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Block Cipher Involving a Key and a Key Bunch Matrix, Supplemented with Key-Based Permutation and Substitution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040120</link>
        <id>10.14569/IJACSA.2013.040120</id>
        <doi>10.14569/IJACSA.2013.040120</doi>
        <lastModDate>2013-02-04T17:00:14.4570000+00:00</lastModDate>
        
        <creator>Dr. V. U.K Sastry</creator>
        
        <creator>K. Shirisha</creator>
        
        <subject>Key; key bunch matrix; encryption; decryption; permutation; substitution; avalanche effect; cryptanalysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>In this paper, we have developed a block cipher involving a key and a key bunch matrix. In this cipher, we have made use of key-based permutation and key-based substitution. The cryptanalysis carried out in this investigation shows very clearly that this cipher is a very strong one. This is all on account of the confusion and the diffusion created by the permutation and the substitution in each round of the iteration process.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_20-A_Block_Cipher_Involving_a_Key_and_a_Key_Bunch_Matrix,_Supplemented_with_Key-Based_Permutation_and_Substitution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards No-Reference of Peak Signal to Noise Ratio</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040119</link>
        <id>10.14569/IJACSA.2013.040119</id>
        <doi>10.14569/IJACSA.2013.040119</doi>
        <lastModDate>2013-02-04T17:00:12.3800000+00:00</lastModDate>
        
        <creator>Jaime Moreno</creator>
        
        <creator>Beatriz Jaime</creator>
        
        <creator>Salvador Saucedo</creator>
        
        <subject>Human Visual System; Contrast Sensitivity Function; Perceived Images; Wavelet Transform; Peak Signal-to-Noise Ratio;No-Reference Image Quality Assessment; JPEG2000.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>The aim of this work is to define a no-reference perceptual image quality estimator applying the perceptual concepts of the Chromatic Induction Model. The approach consists in comparing the received image, presumably degraded, against the perceptual versions (at different distances) of this image degraded by means of a model of chromatic induction, which uses some of the properties of the human visual system. We also compare our model with an original estimator in image quality assessment, PSNR. The results are highly correlated with the ones obtained by PSNR (99.32% for the image Lenna and 96.95% for the image Baboon), but this proposal does not need an original or reference image in order to give an estimate of the quality of the degraded image.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_19-Towards_No-Reference_of_Peak_Signal_to_Noise_Ratio.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Area-Efficient Carry Select Adder Design by using 180 nm Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040118</link>
        <id>10.14569/IJACSA.2013.040118</id>
        <doi>10.14569/IJACSA.2013.040118</doi>
        <lastModDate>2013-02-04T17:00:10.3200000+00:00</lastModDate>
        
        <creator>Garish Kumar Wadhwa</creator>
        
        <creator>Amit Grover</creator>
        
        <creator>Neeti Grover</creator>
        
        <creator>Gurpreet Singh</creator>
        
        <subject>Carry Select Adder; Area-Efficient; Hardware-Sharing; Boolean Logic</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>In this paper, we propose an area-efficient carry select adder that shares the common Boolean logic term. After logic simplification and sharing of the partial circuit, only one XOR gate and one inverter are needed for each summation operation, and one AND gate and one inverter for each carry-out operation. Through the multiplexer, the correct output is selected according to the logic state of the carry-in signal. In this way, the transistor count of a 32-bit carry select adder can be greatly reduced from 1947 to 960.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_18-An_Area-Efficient_Carry_Select_Adder_Design_by_using_180_nm_Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Inferring the Human Emotional State of Mind using Asymmetric Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040117</link>
        <id>10.14569/IJACSA.2013.040117</id>
        <doi>10.14569/IJACSA.2013.040117</doi>
        <lastModDate>2013-02-04T17:00:08.2300000+00:00</lastModDate>
        
        <creator>N. Murali Krishna</creator>
        
        <creator>P.V. Lakshmi </creator>
        
        <creator>Y. Srinivas </creator>
        
        <subject>Skew Gaussian Mixture Model; MFCC; SDC; Emotion Recognition; Confusion Matrix</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This paper presents a methodology for emotion recognition based on a Skew Symmetric Gaussian Mixture Model classifier, with MFCC-SDC cepstral coefficients as features, for recognizing various emotions from a dataset of emotional voices generated by students of both genders at GITAM University. For training and testing the developed methodology, data were collected from students at the GITAM University Visakhapatnam campus using acting sequences covering five different emotions, namely Happy, Sad, Angry, Neutral and Boredom, each uttering one short emotional base sentence. For training, we considered fifty speakers from different regions (30 male &amp; 20 female) and one long sentence containing emotional speech from each speaker. The experiments were conducted on text-dependent speech emotion recognition, and the results are tabulated in a Confusion Matrix and compared with existing methodologies such as the Gaussian mixture model.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_17-Inferring_the_Human_Emotional_State_of_Mind_using_Assymetric_Distrubution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Passive Clustering for Efficient Energy Conservation in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040116</link>
        <id>10.14569/IJACSA.2013.040116</id>
        <doi>10.14569/IJACSA.2013.040116</doi>
        <lastModDate>2013-02-04T17:00:06.1400000+00:00</lastModDate>
        
        <creator>Abderrahim MAIZATE</creator>
        
        <creator>Najib EL KAMOUN</creator>
        
        <subject>wireless sensor networks; self-organization; passive clustering; clustering; network lifetime; energy efficiency; fault tolerance; residual energy</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>A wireless sensor network is a set of miniature nodes that consume little energy and route information to a base station. It enables reliable monitoring of a wide variety of phenomena for civilian, military and medical applications. Almost any sensor network application requires some form of self-organisation to route information, and in recent years many protocols for network self-organization and management have been proposed and implemented. Hierarchical clustering algorithms are very important for increasing the network’s lifetime. The most important points in such algorithms are cluster head selection and cluster formation, because good clustering guarantees reliability, energy efficiency and load balancing in the network. In this paper, we use the principles of passive clustering to propose a new mechanism for selecting clusterheads. This mechanism elects an alternate for each cluster head and dynamically transfers the role of clusterhead to the alternate upon departure or failure. It thus provides several advantages: network reliability, stability of clusters and reduced energy consumption among the sensor nodes. Comparison with existing schemes such as Passive Clustering and GRIDS (Geographically Repulsive Insomnious Distributed Sensors) reveals that the mechanism for selecting an alternate for clusterhead nodes, which is the most important factor influencing clustering performance, can significantly improve the network lifetime.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_16-Passive_Clustering_for_Efficient_Energy_Conservation_in_Wireless_Sensor_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nonlinear Mixing Model of Mixed Pixels in Remote Sensing Satellite Images Taking Into Account Landscape</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040115</link>
        <id>10.14569/IJACSA.2013.040115</id>
        <doi>10.14569/IJACSA.2013.040115</doi>
        <lastModDate>2013-02-04T17:00:04.0500000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>nonlinearity; mixed pixels; Monte Carlo Ray Tracing; landscape</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>A nonlinear mixing model of mixed pixels in remote sensing satellite images that takes landscape into account is proposed. Most linear mixing models of mixed pixels do not work well, because a mixed pixel is essentially a nonlinear combination of several ground cover targets: reflected photons from a ground cover target are scattered by atmospheric constituents and then reflected by other, or the same, ground cover targets, so the mixing model has to be nonlinear. A Monte Carlo Ray Tracing based nonlinear mixing model is therefore proposed and simulated. Simulation results show the validity of the proposed nonlinear mixed pixel model.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_15-Nonlinear_Mixing_Model_of_Mixed_Pixels_in_Remote_Sensing_Satellite_Images_Taking_Into_Account_Landscape.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study on Discrimination Methods for Identifying Dangerous Red Tide Species Based on Wavelet Utilized Classification Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040114</link>
        <id>10.14569/IJACSA.2013.040114</id>
        <doi>10.14569/IJACSA.2013.040114</doi>
        <lastModDate>2013-02-04T17:00:01.9600000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>hue feature; texture information; wavelet descriptor; red tide; phytoplankton identification</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>A comparative study on discrimination methods for identifying dangerous red tide species based on wavelet-utilized classification methods is conducted. Through experiments, it is found that the proposed wavelet-derived shape information, extracted from microscopic views of the phytoplankton, is more effective for identifying dangerous red tide species among other red tide species than conventional texture and color information.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_14-Comparative_Study_on_Discrimination_Methods_for_Identifying_Dangerous_Red_Tide_Species_Based_on_Wavelet_Utilized_Classification_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of Influential Factors in the Adoption and Diffusion of B2C E-Commerce</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040113</link>
        <id>10.14569/IJACSA.2013.040113</id>
        <doi>10.14569/IJACSA.2013.040113</doi>
        <lastModDate>2013-02-04T16:59:59.8700000+00:00</lastModDate>
        
        <creator>Rayed AlGhamdi</creator>
        
        <creator>Ann Nguyen</creator>
        
        <creator>Vicki Jones</creator>
        
        <subject>e-commerce;adoption; B2C; Saudi Arabia</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This paper looks at the present standing of e-commerce in Saudi Arabia, as well as the challenges and strengths of Business-to-Consumer (B2C) electronic commerce. Many studies have been conducted around the world in order to gain a better understanding of the demands, needs and effectiveness of online commerce. A study was undertaken to review the literature and identify the factors influencing the adoption and diffusion of B2C e-commerce. It found four distinct categories: businesses, customers, the environment and governmental support, all of which must be considered when creating an e-commerce infrastructure. A concept matrix was used to compare important factors in different parts of the world. The study found that e-commerce in Saudi Arabia was lacking in governmental support as well as in relevant involvement by both customers and retailers.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_13-A_Study_of_Influential_Factors_in_the_Adoption_and_Diffusion_of_B2C_E-Commerce.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Collaborative System Model for Dynamic Planning of Supply Chain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040112</link>
        <id>10.14569/IJACSA.2013.040112</id>
        <doi>10.14569/IJACSA.2013.040112</doi>
        <lastModDate>2013-02-04T16:59:57.7770000+00:00</lastModDate>
        
        <creator>Latifa Ouzizi</creator>
        
        <creator>EL Moukhtar Zemmouri</creator>
        
        <creator>Youssef Aoura</creator>
        
        <creator>Hussain Ben-azza</creator>
        
        <subject>supply chain; negotiation; collaboration; dynamic planning; UML</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>The need for businesses to be structured as an integrated supply chain pushes companies toward a greater level of co-operation and coordination. In this work, negotiation has been chosen as the means of coordination. The object of this paper is to present a formalism for negotiation in dynamic planning of a supply chain, with the objective of maximizing the overall profit of each partner. To model the supply chain, we use a multi-agent approach in which each enterprise is represented by its negotiator agent. The negotiations are formalized using the UML language. The proposed negotiation process allows agents to develop a feasible production schedule.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_12-Collaborative_System_Model_for_Dynamic_Planning_of_Supply_Chain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validating Utility of TEIM: A Comparative Analysis </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040111</link>
        <id>10.14569/IJACSA.2013.040111</id>
        <doi>10.14569/IJACSA.2013.040111</doi>
        <lastModDate>2013-02-04T16:59:55.6870000+00:00</lastModDate>
        
        <creator>Rajesh Kulkarni</creator>
        
        <creator>P.Padmanabham</creator>
        
        <subject>SE; HCI; UGAM; IOI; PS; TEIM.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Concrete efforts to integrate Software Engineering (SE) and Human Computer Interaction (HCI) exist in the form of models by many researchers. We previously proposed an unconventional model of SE-HCI integration called TEIM (The Evolved Integrated Model). There is a need to establish its correlation with prior models as well as to validate the utility of TEIM. In this paper, the product PS, designed using the SE-HCI integration model TEIM, is evaluated through a comparative analysis. For the evaluation, the UGAM and IOI tools designed by Dr. Anirudha Joshi are used. Our analysis showed that TEIM correlates with prior models, and regression analysis showed that a high correlation exists between TEIM and a prior model.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_11-Validating_Utility_of_TEIM_A_Comparative_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parametric and Non Parametric Time-Frequency Analysis of Biomedical Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040110</link>
        <id>10.14569/IJACSA.2013.040110</id>
        <doi>10.14569/IJACSA.2013.040110</doi>
        <lastModDate>2013-02-04T16:59:53.5670000+00:00</lastModDate>
        
        <creator>S. Elouaham</creator>
        
        <creator>R. Latif</creator>
        
        <creator>A. Dliou</creator>
        
        <creator>M. LAABOUBI</creator>
        
        <creator>F. M. R. Maoulainie</creator>
        
        <subject>ECG; Time-frequency; Periodogram; Capon; Choi-Williams</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Due to the non-stationary, multicomponent nature of the electrocardiogram (ECG) signal, its analysis by one-dimensional temporal and frequency techniques can be very difficult, and the use of time-frequency techniques can be inevitable for reaching a correct diagnosis. Among the existing parametric and non-parametric time-frequency techniques, the Periodogram, Capon, Choi-Williams and Smoothed Pseudo Wigner-Ville methods were chosen for the analysis of this biomedical signal. First, these time-frequency techniques were compared by analyzing a modulated signal, to identify the technique that gives good resolution and a low level of cross-terms. Second, the Periodogram, which proved to be a powerful technique, was applied to normal and abnormal ECG signals. The results show the effectiveness of this time-frequency technique in analyzing this type of biomedical signal.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_10-Parametric_and_Non_Parametric_Time-Frequency_Analysis_of_Biomedical_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating English to Arabic Machine Translation Using BLEU</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040109</link>
        <id>10.14569/IJACSA.2013.040109</id>
        <doi>10.14569/IJACSA.2013.040109</doi>
        <lastModDate>2013-02-04T16:59:51.4600000+00:00</lastModDate>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Taghreed M. Hailat</creator>
        
        <creator>Emad M. Al-Shawakfa</creator>
        
        <creator>Izzat M. Alsmadi</creator>
        
        <subject>Machine Translation; Arabic; Google Translator; Babylon Translator; BLEU</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This study compares the effectiveness of two popular machine translation systems (Google Translate and the Babylon machine translation system) used to translate English sentences into Arabic, relative to the effectiveness of English-to-Arabic human translation. Among the many automatic methods used to evaluate machine translators, the Bilingual Evaluation Understudy (BLEU) method was adopted and implemented to achieve the main goal of this study. The BLEU method is based on automated measures that match machine translators&#39; output to human reference translations; the higher the score, the closer the translation is to the human translation. Well-known English sayings, in addition to sentences collected manually from different Internet web sites, were used for evaluation purposes. The results of this study show that the Google machine translation system is better than the Babylon machine translation system in terms of precision of translation from English to Arabic.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_9-Evaluating_English_to_Arabic_Machine_Translation_Using_BLEU.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Green ICT Readiness Model for Developing Economies: Case of Kenya</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040108</link>
        <id>10.14569/IJACSA.2013.040108</id>
        <doi>10.14569/IJACSA.2013.040108</doi>
        <lastModDate>2013-02-04T16:59:49.3700000+00:00</lastModDate>
        
        <creator>Mr. Franklin Wabwoba</creator>
        
        <creator>Dr. Stanley Omuterema</creator>
        
        <creator>Dr. Gregory W. Wanyembi</creator>
        
        <creator>Mr. Kelvin Kebati Omieno</creator>
        
        <subject>Developing economies; Extended G-readiness model; Green ICT; G-readiness model; Green ICT readiness</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>There have been growing concerns about the rising costs of doing business and environmental degradation the world over. Green ICT has been proposed to provide solutions to both issues, yet it is not being fully implemented in developing economies like Kenya. For its implementation, it is critical to establish the level of green ICT readiness of organisations, to inform where to start and where to put more emphasis. Over the past few years this has been done using Molla’s G-readiness model. However, this model assumes the basic level of G-readiness to be the same for both developed and developing economies with regard to ICT personnel preparedness. In the context of green ICT readiness in Kenya, the relationship between ICT personnel’s gender, age and training and the G-readiness variables proposed in Molla’s G-readiness model was investigated. The study surveyed ICT personnel in four cases using a seven-point Likert scale questionnaire. It established that a significant relationship exists between the ICT personnel related variables and the G-readiness variables. Based on these findings, the study extended Molla’s G-readiness model to include a sixth dimension of personnel readiness.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_8-Green_ICT_Readiness_Model_for_Developing_Economies_Case_of_Kenya.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applicability of Data Mining Technique Using Bayesians Network in Diagnosis of Genetic Diseases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040107</link>
        <id>10.14569/IJACSA.2013.040107</id>
        <doi>10.14569/IJACSA.2013.040107</doi>
        <lastModDate>2013-02-04T16:59:47.2470000+00:00</lastModDate>
        
        <creator>Hugo Pereira Leite Filho</creator>
        
        <subject>Turner Syndrome; probabilistic networks; classification techniques based in decision trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This study aims to identify a methodology to aid in the diagnosis of chromosomal abnormalities and genetic diseases, using Turner Syndrome as a tutorial model. Classification techniques based on decision trees, probabilistic networks (Na&#239;ve Bayes, TAN and BAN), and an MLP (Multi-Layer Perceptron) neural network trained by error back-propagation were used. We describe tools capable of propagating evidence and efficient inference techniques that combine expert knowledge with data defined in a database. We conclude that the best solution for the problem presented in this study was the Na&#239;ve Bayes model, because it achieved the greatest accuracy. The ID3 decision-tree, TAN and BAN models also produced solutions to the indicated problem, but these were not as satisfactory as Na&#239;ve Bayes, and the neural network did not provide a satisfactory solution.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_7-Applicability_of_Data_Mining_Technique_using_Bayesians_Network_in_Diagnosis_of_Genetic_Diseases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Analysis of Security Challenges in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040106</link>
        <id>10.14569/IJACSA.2013.040106</id>
        <doi>10.14569/IJACSA.2013.040106</doi>
        <lastModDate>2013-02-04T16:59:45.1570000+00:00</lastModDate>
        
        <creator>Ms. Disha H. Parekh</creator>
        
        <creator>Dr. R. Sridaran</creator>
        
        <subject>Security threats; SQL Injection; Malevolent users; Browser Security; Malicious Attacks; Data Leakage.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Vendors offer a pool of shared resources to their users through the cloud network. Nowadays, shifting to the cloud is often an optimal decision, as it provides pay-as-you-go services to users. The cloud has boomed in business and other industries thanks to advantages such as multi-tenancy, resource pooling and storage capacity. In spite of its vitality, it exhibits various security flaws, including loss of sensitive data, data leakage, and others related to cloning and resource pooling. As far as security issues are concerned, a wide body of work identifying threats to the service and deployment models of the cloud has been reviewed. In order to comprehend these threats, this study is presented so as to effectively refine the crude security issues under the various areas of the cloud. It also aims at revealing the different security threats under the cloud models, as well as the network concerns involved in containing threats within the cloud, facilitating noteworthy analysis of threats by researchers, cloud providers and end users.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_6-An_Analysis_of_Security_Challenges_in_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>VHDL Design and FPGA Implementation of a Parallel Reed-Solomon (15, K, D) Encoder/Decoder</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040105</link>
        <id>10.14569/IJACSA.2013.040105</id>
        <doi>10.14569/IJACSA.2013.040105</doi>
        <lastModDate>2013-02-04T16:59:43.0670000+00:00</lastModDate>
        
        <creator>Mustapha ELHAROUSSI</creator>
        
        <creator>Asmaa HAMYANI</creator>
        
        <creator>Mostafa BELKASMI</creator>
        
        <subject>Error detecting and correcting codes; Reed-Solomon encoder/decoder; VHDL language; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>In this article, we propose a Reed-Solomon error-correcting encoder/decoder, with the complete description of a concrete implementation starting from a VHDL description of the decoder. The FPGA design of the (15, k, d) Reed-Solomon decoder is studied and simulated in order to implement an encoder/decoder function. The proposed decoder architecture can achieve a high data rate (5 clock cycles in our case) with reasonable complexity (1010 CLBs).</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_5-VHDL_Design_and_FPGA_Implementation_of_a_Parallel_Reed-Solomon_15_K_D_EncoderDecoder.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FF-MAC: Fast Forward IEEE 802.15.4 MAC Protocol for Real-Time Data Transmission</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040104</link>
        <id>10.14569/IJACSA.2013.040104</id>
        <doi>10.14569/IJACSA.2013.040104</doi>
        <lastModDate>2013-02-04T16:59:40.9300000+00:00</lastModDate>
        
        <creator>Khalid El Gholami</creator>
        
        <creator>Najib ELKAMOUN</creator>
        
        <creator>Kun Mean HOU</creator>
        
        <subject>IEEE 802.15.4; WSN; Superframe; star topology; delay; Duty cycle; D-GTS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>This paper presents a Fast Forward MAC layer designed for hard real-time applications in wireless sensor networks. The protocol is an enhancement of the IEEE 802.15.4 standard MAC layer proposed for Low-Rate Wireless Personal Area Networks. The energy conservation mechanism proposed by the current standard is quite efficient and very flexible; this flexibility comes from the ability to configure different duty cycles to meet specific application requirements. However, the mechanism has a considerable impact on the end-to-end delay. Our approach resolves the energy-delay trade-off by avoiding storage of real-time data in the coordinator during sleep time: a new superframe structure is adopted and a deterministic reception schedule is used. All simulations were done using the network simulator 2 (NS-2). The outcomes show that the proposed protocol performs better than the current standard and considerably reduces the end-to-end delay, even in low-duty-cycle networks. The protocol can also provide a delay bound for all network configurations, which allows a better choice of duty cycle for the required delay.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_4-FF-MAC_Fast_Forward_IEEE_802_15_4_MAC_Protocol_for_Real_Time_Data_Transmission.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Stable Haptic Rendering For Physics Engines Using Inter-Process Communication and Remote Virtual Coupling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040103</link>
        <id>10.14569/IJACSA.2013.040103</id>
        <doi>10.14569/IJACSA.2013.040103</doi>
        <lastModDate>2013-02-04T16:59:38.8230000+00:00</lastModDate>
        
        <creator>Xue-Jian He</creator>
        
        <creator>Kup-Sze Choi </creator>
        
        <subject>haptic rendering; physics engine; inter-process communication; virtual coupling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>The availability of physics engines has significantly reduced the effort required to develop interactive applications that simulate the physical world. However, it becomes a problem when kinesthetic feedback is needed, since incorporating haptic rendering is non-trivial: fast haptic data updates are demanded for stable rendering. In this regard, a framework for integrating haptic rendering into physics simulation engines is proposed. It mediates the update-rate disparity between haptic rendering and the physics simulation engine by means of inter-process communication and remote virtual coupling, which fully decouples haptic rendering from complex physical simulation. Experimental results demonstrate that this framework can guarantee fast haptic rendering at 1 kHz even when the physical simulation system operates at a very low update rate. The remote virtual coupling algorithm shows better performance than interpolation-based methods in terms of stability and robustness.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_3-Stable_Haptic_Rendering_for_Physics_Engines_Using_Inter-Process_Communication_and_Remote_Virtual_Coupling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Formal Method to Derive Interoperability Requirements and Guarantees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040102</link>
        <id>10.14569/IJACSA.2013.040102</id>
        <doi>10.14569/IJACSA.2013.040102</doi>
        <lastModDate>2013-02-04T16:59:36.7330000+00:00</lastModDate>
        
        <creator>Hazem El-Gendy</creator>
        
        <creator>Magdi Amer</creator>
        
        <creator>Ihab Talkhan</creator>
        
        <subject>Computer/Communications Protocols and Standards; Conformance Requirements and Classes; Interoperability; Protocol Data Units (PDUs); Capabilities.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>Interoperability among telecommunications systems, possibly from different vendors, is essential both for the development of many telecommunications networks and for today&#39;s civilization. Interoperability testing is very costly, as it has a complexity of O(n**2) for n systems, and is somewhat informal. In this paper, we develop a &#39;Conformance Testing (CT)&#39;-based formal technique to determine interoperability requirements/guarantees. It allows automated derivation of the interoperability requirements of various networks as well as the interoperability guarantees among different telecommunications systems. This is achieved using static analysis of the conformance classes of the standard and knowledge of each telecommunications system&#39;s implementation degree of conformance (DoC). Consequently, it yields considerable cost savings in addition to being a formal technique.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_2-Formal_Method_to_Derive_Interoperability_Requirements_and_Guarantees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classifying Personalization Constraints in Digital Business Environments through Case Study Research</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2013.040101</link>
        <id>10.14569/IJACSA.2013.040101</id>
        <doi>10.14569/IJACSA.2013.040101</doi>
        <lastModDate>2013-02-04T16:59:34.6430000+00:00</lastModDate>
        
        <creator>Michael J. Harnisch</creator>
        
        <subject>personalization; case study research; corporate communication; personalization constraint</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 4(1), 2013</description>
        <description>To aid professionals in the early assessment of possible risks related to personalization activities in marketing, as well as to give academics a starting point for discovering not only the opportunities but also the risks of personalization, a ‘Classification Scheme of Personalization Constraints’ is established from the analysis of 24 case studies. The classification scheme comprises three dimensions: origin (internal, external), subject (technological, organizational) and time (data collection, matchmaking, delivery), and describes the different obstacles companies are confronted with when implementing personalization activities. Additionally, four ‘Standard Types of Personalization Environments’ are developed. They describe a set of business environments that entail different internal and external risks related to personalization activities in marketing. The standard types are termed Flow, Performance, Dependence and Risk.</description>
        <description>http://thesai.org/Downloads/Volume4No1/Paper_1-Classifying_Personalization_Constraints_in_Digital_Business_Environments_through_Case_Study_Research.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Pessimistic Fibonacci Back-off Algorithm (PFB)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030939</link>
        <id>10.14569/IJACSA.2012.030939</id>
        <doi>10.14569/IJACSA.2012.030939</doi>
        <lastModDate>2013-02-02T09:15:15.7000000+00:00</lastModDate>
        
        <creator>Muneer Bani Yassein</creator>
        
        <creator>Mohammed Ahmed Alomari</creator>
        
        <creator>Constandinos X. Mavromoustakis</creator>
        
        <subject>Back-off; collision; end-to-end delay; normalized routing load; packet delivery ratio; MANETs; PLEB; PFB; MAC.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>A MANET is a self-organizing system consisting of mobile nodes, which can act as routers and/or hosts. Nodes in a MANET are connected by wireless links without base stations. The backoff algorithm is considered a main element of the Media Access Control (MAC) protocol, used to avoid collisions in MANETs. The Fibonacci Backoff algorithm and the Pessimistic Linear-Exponential Backoff algorithm were proposed to improve network performance depending on the contention window size. This research introduces a new hybrid backoff algorithm, called the Pessimistic Fibonacci Backoff (PFB) algorithm, which merges the two previous algorithms in order to find the most appropriate contention window sizes that reduce collisions as much as possible. This research evaluates each of the following main measurements: packet delivery ratio, normalized routing load and end-to-end delay. Based on the extracted simulation results, the PFB algorithm outperforms Pessimistic Linear-Exponential Backoff (PLEB) by up to 76%, 40.41% and 31.88% in terms of packet delivery ratio, end-to-end delay and normalized routing load respectively, especially in sparse environments. All simulation results were obtained with the well-known NS-2 simulator, version 2.34, without any distance or location measurement devices.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_39-Optimized_Pessimistic_Fibonacci_Back-off_Algorithm_(PFB).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison Study of Commit Protocols for Mobile Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030938</link>
        <id>10.14569/IJACSA.2012.030938</id>
        <doi>10.14569/IJACSA.2012.030938</doi>
        <lastModDate>2013-02-02T09:15:13.5630000+00:00</lastModDate>
        
        <creator>Bharati Harsoor</creator>
        
        <creator>Dr. S. Ramachandram</creator>
        
        <subject>Mobile Transactions; Transaction Log; Transaction Recovery; Network disconnection; handoff; ACID properties.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper presents a study of protocols for committing transactions distributed over several mobile and fixed units and provides a method to handle mobility at the application layer. It describes solutions to overcome the difficulties related to implementing the Two Phase Commit (2PC) protocol, which is essential to ensure the consistent commitment of distributed transactions. The paper surveys different approaches proposed for mobile transactions and outlines how conventional commit protocols are revisited in order to fit the needs of the mobile environment. This approach deals with frequent disconnections and the movement of mobile devices. The paper also proposes the Single Phase Reliable Timeout Based Commit (SPRTBC) protocol, which preserves the 2PC principle and lessens the impact of unreliable wireless communication.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_38-Comparison_Study_of_Commit_Protocols_for_Mobile_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Backward Chaining Algorithm of Inference Engine in Ternary Grid Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030937</link>
        <id>10.14569/IJACSA.2012.030937</id>
        <doi>10.14569/IJACSA.2012.030937</doi>
        <lastModDate>2013-02-02T09:15:11.4730000+00:00</lastModDate>
        
        <creator>Yuliadi Erdani</creator>
        
        <subject>expert systems; ternary grid; inference engine; backward chaining.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The inference engine is one of the main components of an expert system and influences its performance. The task of the inference engine is to give answers and reasons to users by inferring over the expert system&#39;s knowledge. Since the idea of the ternary grid was introduced in 2004, only a few methods, techniques or engines working on the ternary grid knowledge model have been developed. The inference engine developed in 2010 is less efficient because it works based on an iterative process. The inference engine developed in 2011 works statically and is quite expensive to compute. In order to improve on these previous inference methods, a new inference engine has been developed. It works based on a backward chaining process in a ternary grid expert system.
This paper describes the development of an inference engine for expert systems that can work with the ternary grid knowledge model. The inference strategy uses backward chaining with a recursive process. The design is implemented in the form of software. The experimental results show that the inference process works properly and dynamically, and is more efficient to compute than the previously developed methods.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_37-Developing_Backward_Chaining_Algorithm_of_Inference_Engine_in_Ternary_Grid_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An integrated modular approach for Visual Analytic Systems in Electronic Health Records</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030936</link>
        <id>10.14569/IJACSA.2012.030936</id>
        <doi>10.14569/IJACSA.2012.030936</doi>
        <lastModDate>2013-02-02T09:15:09.4000000+00:00</lastModDate>
        
        <creator>Muhammad Sheraz Arshad Malik</creator>
        
        <creator>Dr. Suziah Sulaiman</creator>
        
        <subject>Visual analytic Systems; EHR; Information visualization; CARE 1.0.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The latest visual analytic tools help physicians visualize temporal data in medical health records. Existing systems lack broad support for generalized collaboration and for a single, user-centered, task-based design for Electronic Health Records (EHR). Existing frameworks are unable to address the interface gaps caused by problems such as the complexity of the data sets, increased temporal information density and lack of support for live databases. These are significant reasons for a single model to meet end-user requirements. We propose an integrated model, termed CARE 1.0, as a future visual analytic process model for resolving these kinds of issues, based on mixed-method studies. It draws on different disciplines: HCI, statistics and computer science. The proposed model encompasses the cognitive and behavioral requirements of its stakeholders, i.e., physicians, database administrators and visualization designers. It helps in presenting a more generalized and detailed visualization of the desired medical data sets.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_36-An_integrated_modular_approach_for_Visual_Analytic_Systems_in_Electronic_Health_Records.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Performance Evaluation of Multiple Input Queued (MIQ) Switch with Iterative Weighted Slip Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030935</link>
        <id>10.14569/IJACSA.2012.030935</id>
        <doi>10.14569/IJACSA.2012.030935</doi>
        <lastModDate>2013-02-02T09:15:07.3100000+00:00</lastModDate>
        
        <creator>S N Kore</creator>
        
        <creator>Sayali Kore</creator>
        
        <creator>Ajinkya Biradar</creator>
        
        <creator>Dr. P J Kulkarni</creator>
        
        <subject>Network communications; Packet-switching networks; routing protocols; Sequencing and scheduling.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Many researchers have evaluated the throughput and delay performance of virtual output queued (VOQ) packet switches using iterative weighted/un-weighted scheduling algorithms. Prof. Nick McKeown of Stanford University developed the iterative maximal matching (i-SLIP) scheme, which provides throughput near 100%. Prof. Kim suggested the multiple input queued (MIQ) architecture, which also provides more than 90% throughput for a small number of input queues per port (in VOQ, N queues per port are used). Our attempt is to use the MIQ architecture and evaluate its delay and throughput performance with the i-SLIP scheduling algorithm. In evaluating performance, we used Bernoulli and bursty (ON-OFF) traffic models.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_35-A_Performance_Evaluation_of_Multiple_Input_Queued_(MIQ)_Switch_with_Iterative_Weighted_Slip_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Billing System Design Based on Internet Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030934</link>
        <id>10.14569/IJACSA.2012.030934</id>
        <doi>10.14569/IJACSA.2012.030934</doi>
        <lastModDate>2013-02-02T09:15:05.2330000+00:00</lastModDate>
        
        <creator>Muzhir Shaban Al-Ani</creator>
        
        <creator>Rabah Noory</creator>
        
        <creator>Dua&#39;a Yaseen Al-Ani</creator>
        
        <subject>Billing System; Internet Billing system; E-Commerce; E-bank; bill payment; Authentication; Security.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper deals with the design of an Internet billing system in which it is possible to pay invoices electronically. The approach is implemented via virtual banks, through which money transfers can be carried out. In addition, many operations can be realized, such as depositing e-money, withdrawing e-money and determining the account balance. A gateway translator is used to apply authentication rules, security and privacy.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_34-Billing_System_Design_Based_on_Internet_Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Emergency System for Succoring Children using Mobile GIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030933</link>
        <id>10.14569/IJACSA.2012.030933</id>
        <doi>10.14569/IJACSA.2012.030933</doi>
        <lastModDate>2013-02-02T09:15:03.1430000+00:00</lastModDate>
        
        <creator>Ayad Ghany Ismaeel</creator>
        
        <subject>GPS; GPRS; Mobile GIS; SMS; Tracking device; Emergency System.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The large number of children suffering from different diseases is alarming, and when succor does not arrive at the proper time and in the form the sick child needs, the child may be lost. This paper proposes an emergency system for succoring a sick child locally when he requires it and no one nearby knows his disease. The proposed system is the first tracking system that works online (24 hours a day), but only when the sick child requests help, using mobile GIS. In this emergency system, the child sends an SMS (for ease, by clicking a single button) containing his ID and coordinates (longitude and latitude) via the GPRS network to the web server on which the child was previously registered. The server then locates the sick child on a Google map, retrieves the child&#39;s information from the database populated at the registration stage and, based on this information, sends a succoring facility while at the same time informing the hospital, the child&#39;s parents, doctor, etc. about the emergency case via SMS over the GPRS network. The design and implementation of the proposed system show a more effective cost than other systems, because it uses a minimal configuration (hardware and software) and works in an economic mode.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_33-An_Emergency_System_for_Succoring_Children_using_Mobile_GIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Automatic Method to Adjust Parameters for Object Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030932</link>
        <id>10.14569/IJACSA.2012.030932</id>
        <doi>10.14569/IJACSA.2012.030932</doi>
        <lastModDate>2013-02-02T09:15:01.0670000+00:00</lastModDate>
        
        <creator>Issam Qaffou</creator>
        
        <creator>Mohamed Sadgal</creator>
        
        <creator>Aziz Elfazziki</creator>
        
        <subject>Parameters adjustment; image segmentation; Q-learning; reinforcement learning.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>To recognize an object in an image, the user must apply a combination of operators, where each operator has a set of parameters. These parameters must be “well” adjusted in order to reach good results. Usually, this adjustment is made manually by the user. In this paper we propose a new method to automate the process of parameter adjustment for an object recognition task. Our method is based on reinforcement learning and uses two types of agents: a User Agent, which provides the necessary information, and a Parameter Agent, which adjusts the parameters of each operator. Due to the nature of reinforcement learning, the results depend not only on the system characteristics but also on the user’s preferred choices.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_32-A_New_Automatic_Method_to_Adjust_Parameters_for_Object_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact on Effectiveness and User Satisfaction of Menu Positioning on Web Pages</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030931</link>
        <id>10.14569/IJACSA.2012.030931</id>
        <doi>10.14569/IJACSA.2012.030931</doi>
        <lastModDate>2013-02-02T09:14:58.9770000+00:00</lastModDate>
        
        <creator>Dr Pietro Murano</creator>
        
        <creator>Kennedy K. Oenga</creator>
        
        <subject>Usability; menu design; menu positioning; affordances.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The authors of this paper are conducting research into the usability of menu positioning on web pages. Other researchers have also worked in this area, but the results are not conclusive and therefore more work still needs to be done. The design and results of an empirical experiment, investigating the usability of menu positioning on a supermarket web site, are presented in this paper. As a comparison, the authors tested a left vertical menu and a fisheye menu placed horizontally at the top of a page in a prototype supermarket web site against a real supermarket web site using a horizontal menu placed at the top of a page. Few significant results were observed, which gave rise to the conclusion that overall there were not many differences between the tested menu types. Furthermore, an explanation for the results observed is discussed in terms of cognitive, physical, functional and sensory affordances. It is suggested that attention to the affordances may be a more crucial aspect of menu design than the actual menu positioning.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_31-The_Impact_on_Effectiveness_and_User_Satisfaction_of_Menu_Positioning_on_Web_Pages.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Accuracy of PSO and DE using Normalization: an Application to Stock Price Prediction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030930</link>
        <id>10.14569/IJACSA.2012.030930</id>
        <doi>10.14569/IJACSA.2012.030930</doi>
        <lastModDate>2013-02-02T09:14:55.9830000+00:00</lastModDate>
        
        <creator>Savinderjit Kaur</creator>
        
        <creator>Veenu Mangat</creator>
        
        <subject>Differential evolution; Parameter optimization; Stock price prediction; Support vector Machines; Normalization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Data mining has been actively applied to the stock market since the 1980s. It has been used to predict stock prices and stock indexes, for portfolio management, for trend detection and for developing recommender systems. The algorithms used for these tasks include ANN, SVM, ARIMA, GARCH etc. Different hybrid models have been developed by combining these algorithms with others, such as rough sets, fuzzy logic, GA, PSO, DE, ACO etc., to improve efficiency. This paper proposes a DE-SVM model (Differential Evolution-Support Vector Machine) for stock price prediction. DE is used to select the best combination of free parameters for the SVM to improve results. The paper also compares the prediction results with the outputs of SVM alone and of a PSO-SVM model (Particle Swarm Optimization). The effect of data normalization on prediction accuracy has also been studied.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_30-Improved_Accuracy_of_PSO_and_DE_using_Normalization_an_Application_to_Stock_Price_Prediction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Denoising using Adaptive Thresholding in Framelet Transform Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030929</link>
        <id>10.14569/IJACSA.2012.030929</id>
        <doi>10.14569/IJACSA.2012.030929</doi>
        <lastModDate>2013-02-02T09:14:53.8770000+00:00</lastModDate>
        
        <creator>S. Sulochana</creator>
        
        <creator>R.Vidhya</creator>
        
        <subject>Discrete Wavelet Transform (DWT); Framelet Transform (FT); Peak signal to noise ratio (PSNR); Structural similarity index measure (SSIM).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Noise is unavoidable during the image acquisition process, and denoising is an essential step to improve image quality. Image denoising involves the manipulation of the image data to produce a visually high-quality image. Finding efficient image denoising methods is still a valid challenge in image processing. Wavelet denoising attempts to remove the noise present in the imagery while preserving the image characteristics, regardless of its frequency content. Many wavelet-based denoising algorithms use the DWT (Discrete Wavelet Transform) in the decomposition stage, which suffers from shift variance. To overcome this, in this paper we propose a denoising method that uses the Framelet transform to decompose the image and performs a shrinkage operation to eliminate the noise. The framework describes a comparative study of different thresholding techniques for image denoising in the Framelet transform domain. The idea is to transform the data into the Framelet basis, apply shrinkage and then apply the inverse transform. In this work, different shrinkage rules such as Universal shrink (US), Visu shrink (VS), Minimax shrink (MS), Sure shrink (SS), Bayes shrink (BS) and Normal shrink (NS) were incorporated. Results for different types of noise, such as Gaussian noise, Poisson noise, salt-and-pepper noise and speckle noise at (σ = 10, 20), are presented, and the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used as measures of denoising quality.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_29-Image_Denoising_using_Adaptive_Thresholding_in_Framelet_Transform_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A study on Security within public transit vehicles</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030928</link>
        <id>10.14569/IJACSA.2012.030928</id>
        <doi>10.14569/IJACSA.2012.030928</doi>
        <lastModDate>2013-02-02T09:14:51.7700000+00:00</lastModDate>
        
        <creator>A. N. Seshukumar</creator>
        
        <creator>Dr. S. Vasavi</creator>
        
        <creator>Dr. V. Srinivasa Rao</creator>
        
        <subject>Surveillance Cameras; Global Positioning System; Automation; public transit vehicles.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In public transit vehicles, security is a major concern for passengers. Surveillance systems provide security through surveillance cameras in the vehicles and storage that maintains the data. Applications for monitoring the data in surveillance systems of public transit vehicles provide different features for accessing the video and allow a number of operations to be performed, such as exporting video, generating snapshots at a particular time, and viewing live as well as playback video. This paper studies the automation process of a video surveillance system that can also be applied to the surveillance systems of public transit vehicles. A new feature enhances passenger security by tracking the vehicle through a GPS (Global Positioning System) tracking system and by providing vehicle information, such as acceleration and speed, on the application&#39;s user interface.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_28-A_study_on_Security_within_public_transit_vehicles.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ethernet Based Remote Monitoring And Control Of Temperature By Using Rabbit Processor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030927</link>
        <id>10.14569/IJACSA.2012.030927</id>
        <doi>10.14569/IJACSA.2012.030927</doi>
        <lastModDate>2013-02-02T09:14:49.6630000+00:00</lastModDate>
        
        <creator>U. Suneetha</creator>
        
        <creator>K. Tanveer Alam</creator>
        
        <creator>N. Anju Latha</creator>
        
        <creator>B. V. S. Goud</creator>
        
        <subject>RMACS; Control system; Ethernet.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Networking is a major component of process and control instrumentation systems, as a networked architecture solves many industrial automation problems. Adopting an Ethernet-based control system brings substantial benefits to the monitoring of industrial process parameters. Hence, an attempt has been made to develop Ethernet-based remote monitoring and control of temperature. The experimental results of the present work demonstrate the remote monitoring and control system (RMACS) operating over Ethernet.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_27-Ethernet_Based_Remote_Monitoring_And_Control_Of_Temperature_By_Using_Rabbit_Processor.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Image Encryption Supported by Compression Using Multilevel Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030926</link>
        <id>10.14569/IJACSA.2012.030926</id>
        <doi>10.14569/IJACSA.2012.030926</doi>
        <lastModDate>2013-02-02T09:14:47.5900000+00:00</lastModDate>
        
        <creator>Ch. Samson</creator>
        
        <creator>V. U. K. Sastry</creator>
        
        <subject>Image compression; wavelet transform; thresholding; image encryption; compression ratio.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper we propose a novel approach to image encryption supported by lossy compression using a multilevel wavelet transform. We first decompose the input image using a multilevel 2-D wavelet transform, and thresholding is applied to the decomposed structure to obtain the compressed image. We then carry out encryption by decomposing the compressed image with the multilevel 2-D Haar wavelet transform at the maximum allowed decomposition level. This results in the decomposition vector C and the corresponding bookkeeping matrix S. The decomposition vector C is reshaped to the size of the input image. The reshaped vector is rearranged by performing a permutation to produce the encrypted image. The vector C and the matrix S serve as the key in the process of both encryption and decryption. In this analysis, we have noticed that the reconstructed image is a close replica of the input image.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_26-A_Novel_Image_Encryption_Supported_by_Compression_Using_Multilevel_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mutual Exclusion Principle for Multithreaded Web Crawlers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030925</link>
        <id>10.14569/IJACSA.2012.030925</id>
        <doi>10.14569/IJACSA.2012.030925</doi>
        <lastModDate>2013-02-02T09:14:45.4670000+00:00</lastModDate>
        
        <creator>Kartik Kumar Perisetla</creator>
        
        <subject>Web Crawlers; Mutual Exclusion principle; Multithreading; Mutex locks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper describes a mutual exclusion principle for multithreaded web crawlers. Existing web crawlers use data structures to hold the frontier set in local address space. This space could instead be used to run more crawler threads for faster operation. All crawler threads fetch the URL to crawl from a centralized frontier. The mutual exclusion principle is used to give each crawler thread access to the frontier in a synchronized manner and thus avoid deadlock. An approach to utilizing the waiting time on the mutual exclusion lock efficiently is discussed in detail.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_25-Mutual_Exclusion_Principle_for_Multithreaded_Web_Crawlers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time-Domain Large Signal Investigation on Dynamic Responses of the GDCC Quarterly Wavelength Shifted Distributed Feedback Semiconductor Laser</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030924</link>
        <id>10.14569/IJACSA.2012.030924</id>
        <doi>10.14569/IJACSA.2012.030924</doi>
        <lastModDate>2013-02-02T09:14:43.4100000+00:00</lastModDate>
        
        <creator>Abdelkarim Moumen</creator>
        
        <creator>Abdelkarim Zatni</creator>
        
        <creator>Abdenabi Elyamani</creator>
        
        <creator>Hamza Bousseta</creator>
        
        <subject>Distributed feedback laser; optical communication systems; Dynamic large signal analysis; Time domain model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>A numerical investigation of the dynamic large-signal behavior, using a time-domain traveling-wave model of quarter-wave-shifted distributed feedback semiconductor laser diodes with a Gaussian distribution of the coupling coefficient (GDCC), is presented. It is found that the single-mode behavior and the hole-burning effect corrections of a quarter-wave-shifted distributed feedback laser with a large coupling coefficient can be improved significantly by this newly proposed light source.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_24-Time-Domain_Large_Signal_Investigation_on_Dynamic_Responses_of_the_GDCC_Quarterly_Wavelength_Shifted_Distributed_Feedback_Semiconductor_Laser.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Stage Optimization Model With Minimum Energy Consumption-Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030923</link>
        <id>10.14569/IJACSA.2012.030923</id>
        <doi>10.14569/IJACSA.2012.030923</doi>
        <lastModDate>2013-02-02T09:14:41.3170000+00:00</lastModDate>
        
        <creator>S. Krishnakumar</creator>
        
        <creator>Dr.R.Srinivasan</creator>
        
        <subject>optimization; breakthrough; transportation; maximization; superimposed; transshipment.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Optimization models related to routing, bandwidth utilization and power consumption are developed in the wireless mesh computing environment using operations research techniques such as the maximal flow model, the transshipment model and the minimax optimizing algorithm. The path creation algorithm is used to find multiple paths from source to destination. A multi-stage optimization model is developed by combining the multi-path optimization model, the capacity-utilization optimization model, the energy optimization model and the minimax optimizing algorithm. The input to the multi-stage optimization model is a network with many sources and destinations. The optimal solution obtained from this model is a minimum-energy-consuming path from source to destination along with the maximum data rate over each link. The performance is evaluated by comparing the data rate values of the superimposed algorithm and the minimax optimizing algorithm. The main advantage of this model is the reduction of traffic congestion in the network.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_23-A_Multi-Stage_Optimization_Model_With_Minimum_Energy_Consumption-Wireless_Mesh_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Analysis of Image Cryptosystem Using Stream Cipher Algorithm with Nonlinear Filtering Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030922</link>
        <id>10.14569/IJACSA.2012.030922</id>
        <doi>10.14569/IJACSA.2012.030922</doi>
        <lastModDate>2013-02-02T09:14:39.0400000+00:00</lastModDate>
        
        <creator>Belmeguena&#239; A&#239;ssa</creator>
        
        <creator>Derouiche Nadir</creator>
        
        <creator>Mansouri Khaled</creator>
        
        <subject>cipherImage; cryptosystem; key-stream; nonlinear filtering function; stream cipher.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this work a new algorithm for image encryption is introduced. This algorithm makes it possible to cipher and decipher images while guaranteeing maximum security. The algorithm is based on a stream cipher with a nonlinear filtering function. The Boolean function used in this algorithm is a resilient function satisfying all the necessary cryptographic criteria, achieving the best possible compromises. In order to evaluate its performance, the proposed algorithm was measured through a series of tests. Experimental results illustrate that the scheme is highly key-sensitive, highly resistant to noise, and shows good resistance against brute-force, Berlekamp-Massey and algebraic attacks.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_22-Security_Analysis_of_Image_Cryptosystem_Using_Stream_Cipher_Algorithm_with_Nonlinear_Filtering_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impacts of ICTs on Banks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030921</link>
        <id>10.14569/IJACSA.2012.030921</id>
        <doi>10.14569/IJACSA.2012.030921</doi>
        <lastModDate>2013-02-02T09:14:36.9330000+00:00</lastModDate>
        
        <creator>Matthew K Luka</creator>
        
        <creator>Ibikunle A. Frank</creator>
        
        <subject>Banking industry; CBN; Customers; Economic growth; ICT; productivity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>ICT has taken center stage in almost every aspect of human endeavor. ICT helps banks improve the efficiency and effectiveness of services offered to customers, and enhances business processes, managerial decision making, and workgroup collaborations, which strengthens their competitive positions in rapidly changing and emerging economies. This paper considers the impacts and trends of ICTs on the banking industry of the 21st century. Four (4) parameters, namely productivity, market structure, innovation and value chain, were used for benchmarking. Case studies of the IT platforms employed by two Nigerian banks were included for a more informed inference.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_21-The_Impacts_of_ICTs_on_Banks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Students’ Perceptions of the Effectiveness of Discussion Boards: What can we get from our students for a freebie point</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030920</link>
        <id>10.14569/IJACSA.2012.030920</id>
        <doi>10.14569/IJACSA.2012.030920</doi>
        <lastModDate>2013-02-02T09:14:34.8600000+00:00</lastModDate>
        
        <creator>Abdel-Hameed A. Badawy</creator>
        
        <subject>collaborative learning; cooperative learning; peer learning; teaching evaluation; pedagogy; discussion boards; Blackboard; survey; student feedback.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper, we investigate how students think of their experience in a junior (300-level) computer science course that uses Blackboard as the underlying course management system. Blackboard’s discussion boards are heavily used for programming project support and to foster cooperation among students in answering their questions and concerns. A survey was conducted through Blackboard as a voluntary quiz, and the students who participated were given a participation point for their effort. The results and the participation were very interesting. We obtained statistics from the answers to the questions. The students also gave us feedback in the form of comments on all questions except two. The students have shown understanding, maturity and willingness to participate in pedagogy-enhancing endeavors with the premise that it might help their education and others’ education as well.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_20-Students’_Perceptions_of_the_Effectiveness_of_Discussion_Boards;_What_can_we_get_from_our_students_for_a_freebie_point.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SAS: Implementation of scaled association rules on spatial multidimensional quantitative dataset</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030919</link>
        <id>10.14569/IJACSA.2012.030919</id>
        <doi>10.14569/IJACSA.2012.030919</doi>
        <lastModDate>2013-02-02T09:14:32.7230000+00:00</lastModDate>
        
        <creator>M. N. Doja</creator>
        
        <creator>Sapna Jain</creator>
        
        <creator>M Afshar Alam</creator>
        
        <subject>association rules; spatial dataset; X tree.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Mining spatial association rules is one of the most important branches in the field of Spatial Data Mining (SDM). Because of the complexity of spatial data, a traditional method for extracting spatial association rules is to transform the spatial database into a general transaction database. The Apriori algorithm is one of the most commonly used methods in mining association rules at present, but a shortcoming of the algorithm is that its performance on large databases is inefficient. The present paper proposes a new algorithm that extracts maximum frequent itemsets from a spatial multidimensional quantitative dataset. Algorithms for mining spatial association rules are similar to association rule mining except for the consideration of spatial data; the predicate generation and rule generation processes are based on Apriori. The proposed method, SAS (Scaled Apriori on a Spatial multidimensional quantitative dataset), reduces the number of itemsets generated and also improves the execution time of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_19-SAS_Implementation_of_scaled_association_rules_on_spatial_multidimensional_quantitative_dataset.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Probabilistic: A Fuzzy Logic-Based Distance Broadcasting Scheme For Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030918</link>
        <id>10.14569/IJACSA.2012.030918</id>
        <doi>10.14569/IJACSA.2012.030918</doi>
        <lastModDate>2013-02-02T09:14:30.6470000+00:00</lastModDate>
        
        <creator>Tasneem Bano</creator>
        
        <creator>Jyoti Singhai</creator>
        
        <subject>Broadcasting; Distance based broadcasting; Fuzzy; Optimization Technique; AODV.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>An on-demand route discovery method in mobile ad hoc networks (MANETs) uses the simple flooding method, whereby a mobile node blindly rebroadcasts received route request (RREQ) packets until a route to a particular destination is established. This leads to the broadcast storm problem. This paper presents a novel broadcasting scheme for wireless ad hoc networks that uses a fuzzy logic system at each node to determine its capability to rebroadcast route request packets, based on the node’s location. Our simulation analysis shows a significant improvement in performance in terms of routing overhead, MAC collisions and end-to-end delay while still achieving good throughput compared to traditional AODV.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_18-Probabilistic_A_Fuzzy_Logic-Based_Distance_Broadcasting_Scheme_For_Mobile_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Particle Swarm Optimization for Calibrating and Optimizing Xinanjiang Model Parameters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030917</link>
        <id>10.14569/IJACSA.2012.030917</id>
        <doi>10.14569/IJACSA.2012.030917</doi>
        <lastModDate>2013-02-02T09:14:28.5570000+00:00</lastModDate>
        
        <creator>Kuok King Kuok</creator>
        
        <creator>Chiu Po Chan</creator>
        
        <subject>Conceptual rainfall-runoff model; Particle Swarm Optimization; Xinanjiang model calibration.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The Xinanjiang model, a conceptual hydrological model, has been well known and widely used in China since the 1970s. Most of the parameters of the Xinanjiang model have therefore been calibrated and pre-set according to the climate, dryness, wetness, humidity and topography of various catchment areas in China. However, the Xinanjiang model has not yet been applied in Malaysia, and the optimal parameters are not known. Calibrating the Xinanjiang model parameters through trial and error requires much time and effort to obtain better results. Therefore, Particle Swarm Optimization (PSO) is adopted to calibrate the Xinanjiang model parameters automatically. In this paper, the PSO algorithm is used to find the best set of parameters for both daily and hourly models. The selected study area is the Bedup Basin, located in Samarahan Division, Sarawak, Malaysia. For the daily model, the input data used for calibration were daily rainfall data from the year 2001, validated with data from the years 1990, 1992, 2000, 2002 and 2003. A single storm event dated 9th to 12th October 2003 was used to calibrate the hourly model, validated with 12 different storm events. The accuracy of the simulation results is measured using the Coefficient of Correlation (R) and the Nash-Sutcliffe Coefficient (E2). Results show that PSO is able to optimize the 12 parameters of the Xinanjiang model accurately. For the daily model, the best R and E2 for calibration are found to be 0.775 and 0.715 respectively, with an average R=0.622 and E2=0.579 for the validation set. Meanwhile, R=0.859 and E2=0.892 are yielded when calibrating the hourly model, and the average R and E2 obtained are 0.705 and 0.647 respectively for the validation set.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_17-Particle_Swarm_Optimization_for_Calibrating_and_Optimizing_Xinanjiang_Model_Parameters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Localisation of Numerical Date Field in an Indian Handwritten Document</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030916</link>
        <id>10.14569/IJACSA.2012.030916</id>
        <doi>10.14569/IJACSA.2012.030916</doi>
        <lastModDate>2013-02-02T09:14:26.4500000+00:00</lastModDate>
        
        <creator>S Arunkumar</creator>
        
        <creator>Pallab Kumar Sahu</creator>
        
        <creator>Sudeep Gorai</creator>
        
        <creator>Kalyan Ghosh</creator>
        
        <subject>Connected Components; Feature Extraction; Spatial Arrangement; K-NN classifier.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper describes a method to localise all those areas which may constitute the date field in an Indian handwritten document. Spatial patterns of the date field are studied from various handwritten documents and an algorithm is developed through statistical analysis to identify those sets of connected components which may constitute the date. Common date patterns followed in India are considered to classify the date formats in different classes. Reported results demonstrate promising performance of the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_16-Localisation_of_Numerical_Date_Field_in_an_Indian_Handwritten_Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Sharing Protocol for Smart Spaces</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030915</link>
        <id>10.14569/IJACSA.2012.030915</id>
        <doi>10.14569/IJACSA.2012.030915</doi>
        <lastModDate>2013-02-02T09:14:24.3770000+00:00</lastModDate>
        
        <creator>Jussi Kiljander</creator>
        
        <creator>Francesco Morandi</creator>
        
        <creator>Juha-Pekka Soininen</creator>
        
        <subject>Semantic Web; SPARQL; Ambient Intelligence; Ubiquitous Computing; embedded system; M3.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper we present a novel knowledge sharing protocol (KSP) for semantic technology empowered ubiquitous computing systems. In particular, the protocol is designed for M3, which is a blackboard-based semantic interoperability solution for smart spaces. The main difference between the KSP and existing work is that the KSP provides SPARQL-like knowledge sharing mechanisms in a compact binary format designed to be suitable also for resource-restricted devices and networks. In order to evaluate the KSP in practice, we implemented a case study in a prototype smart space called Smart Greenhouse. In the case study, the KSP messages were on average 70.09% and 87.08% shorter than the messages in existing M3 communication protocols. Because the KSP provides a mechanism for automating interaction in smart spaces, it was also possible to implement the case study with fewer messages than with other M3 communication protocols. This makes the KSP a better alternative for resource-restricted devices in semantic technology empowered smart spaces.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_15-Knowledge_Sharing_Protocol_for_Smart_Spaces.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Free Open Source Software: FOSS Based GIS for Spatial Retrievals of Appropriate Locations for Ocean Energy Utilizing Electric Power Generation Plants</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030914</link>
        <id>10.14569/IJACSA.2012.030914</id>
        <doi>10.14569/IJACSA.2012.030914</doi>
        <lastModDate>2013-02-02T09:14:22.3000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>free open source software; postgres SQL; GIS; spatial retrieval.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>A Free Open Source Software (FOSS) based Geographic Information System (GIS) for spatial retrievals of appropriate locations for electric power generation plants utilizing ocean wind and tidal motion is proposed. Using scatterometers onboard earth observation satellites, coastal areas with strong wind are retrieved with the FOSS/GIS stack of PostgreSQL/PostGIS. PostGIS has to be modified together with the altimeter and scatterometer databases. These modifications and the database creation would be a good reference for users who would like to create a GIS system together with a database using FOSS.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_14-Free_Open_Source_Software_FOSS_Based_GIS_for_Spatial_Retrievals_of_Appropriate_Locations_for_Ocean_Energy_Utilizing_Electric_Power_Generation_Plants.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fast DC Mode Prediction Scheme For Intra 4x4 Block In H.264/AVC Video Coding Standard</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030913</link>
        <id>10.14569/IJACSA.2012.030913</id>
        <doi>10.14569/IJACSA.2012.030913</doi>
        <lastModDate>2013-02-02T09:14:20.2270000+00:00</lastModDate>
        
        <creator>Tajdid UI Alam</creator>
        
        <creator>Jafor Ikbal</creator>
        
        <creator>Touhid Ul Alam</creator>
        
        <subject>Intra; macroblock (MB); DC mode; luma; prediction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper, the researchers propose a new scheme for DC mode prediction in intra frames for 4x4 blocks. In this scheme, the upper and left neighboring pixels are used to predict each of the pixels in the 4x4 block with different weights. The prediction equations for each position of the 4x4 block in DC mode are static, with fixed weighting coefficients. As a result, the computational time for intra frames is decreased considerably, with an increase in PSNR.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_13-Fast_DC_Mode_Prediction_Scheme_For_Intra_4x4_Block_In_H.264AVC_Video_Coding_Standard.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Error Analysis of Air Temperature Profile Retrievals with Microwave Sounder Data Based on Minimization of Covariance Matrix of Estimation Error</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030912</link>
        <id>10.14569/IJACSA.2012.030912</id>
        <doi>10.14569/IJACSA.2012.030912</doi>
        <lastModDate>2013-02-02T09:14:18.1200000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Error analysis; least-square method; microwave sounder; air temperature profile.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Error analysis of air temperature profile retrievals with microwave sounder data, based on minimization of the covariance matrix of the estimation error, is conducted. Additive noise is taken into account in the observation data from the microwave sounder onboard the satellite. A method for air temperature profile retrieval based on minimizing the difference in brightness temperature between model-driven and actual microwave sounder data is also proposed. The experimental results show that reasonable air temperature retrieval accuracy can be achieved by the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_12-Error_Analysis_of_Air_Temperature_Profile_Retrievals_with_Microwave_Sounder_Data_Based_on_Minimization_of_Covariance_Matrix_of_Estimation_Error.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-learning System Which Allows Students’ Confidence Level Evaluation with Their Voice When They Answer to the Questions During Achievement Tests</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030911</link>
        <id>10.14569/IJACSA.2012.030911</id>
        <doi>10.14569/IJACSA.2012.030911</doi>
        <lastModDate>2013-02-02T09:14:16.0300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>e-learning system; confidence level evaluation; emotion recognition with voice.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>An e-learning system which allows students’ confidence levels to be evaluated from their voice when they answer questions during achievement tests is proposed. Through experiments comparing students’ confidence levels between the conventional system (without evaluation) and the proposed system (with evaluation), an improvement of 17-57% is confirmed for the proposed e-learning system.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_11-E-learning_System_Which_Allows_Students’_Confidence_Level_Evaluation_with_Their_Voice_When_They_Answer_to_the_Questions_During_Achievement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Driver Strength on Crosstalk in Global Interconnects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030910</link>
        <id>10.14569/IJACSA.2012.030910</id>
        <doi>10.14569/IJACSA.2012.030910</doi>
        <lastModDate>2013-02-02T09:14:13.9400000+00:00</lastModDate>
        
        <creator>Kalpana. A. B</creator>
        
        <creator>P.V.Hunagund</creator>
        
        <subject>Coupling; crosstalk; Interconnect; noise; victim.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>Noise estimation and avoidance are becoming critical in today’s high-performance IC design. An accurate yet efficient crosstalk noise model, which contains as many driver/interconnect parameters as possible, is necessary for any sensitivity-based noise avoidance approach. In this paper, we present an analysis of a crosstalk noise model which incorporates all physical properties, including victim and aggressor drivers, distributed RC characteristics of interconnects, and coupling locations in both victim and aggressor lines. It is also shown that crosstalk can be minimized by a driver-sizing optimization technique. These models are verified for various deep submicron technologies.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_10-Effect_of_Driver_Strength_on_Crosstalk_in_Global_Interconnects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A computational linguistic approach to natural language processing with applications to garden path sentences analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030909</link>
        <id>10.14569/IJACSA.2012.030909</id>
        <doi>10.14569/IJACSA.2012.030909</doi>
        <lastModDate>2013-02-02T09:14:11.8500000+00:00</lastModDate>
        
        <creator>DU Jia-li</creator>
        
        <creator>YU Ping-fang</creator>
        
        <subject>Natural language processing; computational linguistics; context free grammar; Backus–Naur Form; garden path sentences.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper discusses the computational parsing of garden path (GP) sentences. Using an approach that combines computational linguistic methods, e.g. CFG, ATN and BNF, we analyze the various syntactic structures of pre-grammatical, common, ambiguous and GP sentences. The evidence shows that both ambiguous and GP sentences have lexical or syntactic crossings. Any choice of the crossing in ambiguous sentences can yield a fully parsed structure. In GP sentences, the probability-based choice is the cognitive prototype of parsing. Once the partly parsed priority structure is replaced by the fully parsed structure of low probability, the distinctive feature of backtracking appears. The computational analysis supports Pritchett’s idea on the processing breakdown of GP sentences.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_9-A_computational_linguistic_approach_to_natural_language_processing_with_applications_to_garden_path_sentences_analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Convenience and Medical Patient Database Benefits and Elasticity for Accessibility Therapy in Different Locations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030908</link>
        <id>10.14569/IJACSA.2012.030908</id>
        <doi>10.14569/IJACSA.2012.030908</doi>
        <lastModDate>2013-02-02T09:14:09.7730000+00:00</lastModDate>
        
        <creator>Bambang Eka Purnama</creator>
        
        <creator>Sri Hartati</creator>
        
        <subject>Patient Medical Record.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>When a patient comes to a hospital, clinic, or physician practice, the enrollment section asks whether the patient has visited before. If the patient has, the officer asks for the Medication Patient Identification Card (KiB), which is used to locate the patient&#39;s records. In conventional health care, the officer then uses a tracer to locate the patient&#39;s records in a storage warehouse holding stacks of paper. If a hospital has only a few patients this is not problematic, but once the number of patients reaches the hundreds of thousands or even millions, it certainly causes problems.
Database records kept in one hospital remain largely untapped for exchange with another hospital when the patient arrives there for further treatment or for research purposes. This study aims to produce a computerized model of an inter-hospital Medical Information System. The benefits of this research are that patient information and history are properly stored in computerized medical records and that patient data can be retrieved more quickly. The expected outcome is that patients coming to a clinic are handled promptly, and when a patient visits a clinic in another place, the patient&#39;s medical resume database and its analysis can be found immediately.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_8-Convenience_and_Medical_Patient_Database_Benefits_and_Elasticity_for_Accessibility_Therapy_in_Different_Locations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Relevance Vector Machines in Real Time Intrusion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030907</link>
        <id>10.14569/IJACSA.2012.030907</id>
        <doi>10.14569/IJACSA.2012.030907</doi>
        <lastModDate>2013-02-02T09:14:07.7000000+00:00</lastModDate>
        
        <creator>Naveen N. C</creator>
        
        <creator>Dr Natarajan.S</creator>
        
        <creator>Dr Srinivasan.R</creator>
        
        <subject>Intrusion Detection; Change Point Detection; Relevance Vector Machine; Outlier Detection.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In recent years, there has been growing interest in the development of change detection techniques for the analysis of intrusion detection. This interest stems from the wide range of applications in which change detection methods can be used. Detecting changes by observing data collected at different times is one of the most important applications in network security because it can provide short-interval analysis on a global scale. Research exploring change detection techniques for medium/high-rate network data can be found for the new generation of very high resolution data. The advent of these technologies has greatly increased the ability to monitor and resolve the details of changes and makes such analysis possible. At the same time, they present a new challenge in that a relatively large amount of data must be analyzed and corrected for registration and classification errors to identify frequently changing trends. In this paper, an approach for an Intrusion Detection System (IDS) that embeds a Change Detection Algorithm within a Relevance Vector Machine (RVM) is proposed. Intrusion detection is a complex task that handles a huge amount of network-related data with different parameters. Current research has shown that kernel-learning-based methods are very effective in addressing these problems. In contrast to Support Vector Machines (SVM), the RVM provides a probabilistic output while preserving accuracy. The focus of this paper is to model an RVM that can work with large network data sets in a real environment and to develop an RVM classifier for IDS. The new model consists of Change Point (CP) detection and RVM, is competitive in processing time, and improves classification performance compared to other known classification models such as SVM. The goal is to make the system simple but efficient in detecting network intrusion in an actual real-time environment. Results show that the model learns more effectively, automatically adapts to changes, and adjusts the threshold while minimizing the false alarm rate with timely detection.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_7-Application_of_Relevance_Vector_Machines_in_Real_Time_Intrusion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feature Subsumption for Sentiment Classification of Dynamic Data in Social Networks using SCDDF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030906</link>
        <id>10.14569/IJACSA.2012.030906</id>
        <doi>10.14569/IJACSA.2012.030906</doi>
        <lastModDate>2013-02-02T09:14:05.5770000+00:00</lastModDate>
        
        <creator>Jayanag. B</creator>
        
        <creator>Vineela. K</creator>
        
        <creator>Dr. Vasavi. S</creator>
        
        <subject>Sentiment classification; Natural language processing (NLP); opinions; features; Quick Test Professional (QTP); feature identification; sentiment prediction; summary generation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>The analysis of opinions has so far been performed mostly on static rather than dynamic data. Opinions may vary over time. Earlier methods concentrated on opinions expressed in an individual site, but opinions on a given concept may vary from site to site. Past works also did not consider opinions at the aggregate level.
This paper proposes a novel method for Sentiment Classification that uses Dynamic Data Features (SCDDF). Experiments were conducted on various product reviews collected from different sites using QTP. Opinions were aggregated using Bayesian networks and Natural Language Processing techniques. A bulk amount of dynamic data is considered rather than static data. Our method takes as input a collection of comments from social networks, ranks the comments within each site, and finally classifies all comments irrespective of the site they belong to. Thus the user is presented with an overall evaluation of the product and its features.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_6-Feature_Subsumption_for_Sentiment_Classification_of_Dynamic_Data_in_Social_Networks_using_SCDDF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An RGB Image Encryption Supported by Wavelet-based Lossless Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030905</link>
        <id>10.14569/IJACSA.2012.030905</id>
        <doi>10.14569/IJACSA.2012.030905</doi>
        <lastModDate>2013-02-02T09:14:03.4870000+00:00</lastModDate>
        
        <creator>Ch. Samson</creator>
        
        <creator>V. U. K. Sastry</creator>
        
        <subject>Image compression; Wavelet Transform; image encryption; Lifting Wavelet ; Secure Advanced Hill Cipher.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper we have proposed a method for an RGB image encryption supported by lifting scheme based lossless compression. Firstly we have compressed the input color image using a 2-D integer wavelet transform. Then we have applied lossless predictive coding to achieve additional compression. The compressed image is encrypted by using Secure Advanced Hill Cipher (SAHC) involving a pair of involutory matrices, a function called Mix() and an operation called XOR. Decryption followed by reconstruction shows that there is no difference between the output image and the input image. The proposed method can be used for efficient and secure transmission of image data.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_5-An_RGB_Image_Encryption_Supported_by_Wavelet-based_Lossless_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Models and Query Languages for Temporally Annotated RDF</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030904</link>
        <id>10.14569/IJACSA.2012.030904</id>
        <doi>10.14569/IJACSA.2012.030904</doi>
        <lastModDate>2013-02-02T09:14:01.3970000+00:00</lastModDate>
        
        <creator>Anastasia Analyti</creator>
        
        <creator>Ioannis Pachoulakis</creator>
        
        <subject>Temporal RDF; provenance; semantics; query languages.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this paper, we provide a survey of the models and query languages for temporally annotated RDF. In most works, a temporally annotated RDF ontology is essentially a set of RDF triples associated with temporal constraints, where, in the simplest case, a temporal constraint is a validity temporal interval. However, a temporally annotated RDF ontology may also be a set of triples connecting resources with a specific lifespan, where each of these triples is also associated with a validity temporal interval. Further, a temporal RDF ontology may be a set of triples connecting resources as they stand at specific time points. Several query languages for temporally annotated RDF have been proposed, most of which extend SPARQL or translate to it. Some of the works provide experimental results, while the rest are purely theoretical.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_4-A_Survey_on_Models_and_Query_Languages_for_Temporally_Annotated_RDF.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Risk Management Strategy of Applying Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030903</link>
        <id>10.14569/IJACSA.2012.030903</id>
        <doi>10.14569/IJACSA.2012.030903</doi>
        <lastModDate>2013-02-02T09:13:59.3070000+00:00</lastModDate>
        
        <creator>Chiang Ku Fan</creator>
        
        <creator>Tien-Chun Chen</creator>
        
        <subject>Cloud Computing; Risk Assessment; Risk Management; Insurance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>It is inevitable that Cloud Computing will trigger some loss exposures. Unfortunately, little scientific and objective research has focused on the identification and evaluation of loss exposures stemming from applications of Cloud Computing. To fill this research gap, this study identifies and analyzes the loss exposures of Cloud Computing using scientific and objective methods, providing administrators with the information needed to support risk management decisions. In conclusion, this study identified “Social Engineering”, “Cross-Cloud Compatibility” and “Mistakes made by employees intentionally or accidentally” as high-priority risks to be treated. The findings also revealed that people who work in the field of information or Cloud Computing are somewhat unaware of where the risks in Cloud Computing lie, owing to its novelty and complexity.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_3-The_Risk_Management_Strategy_of_Applying_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Multi-Objective Optimization Approach Using Genetic Algorithms for Quick Response to Effects of Variability in Flow Manufacturing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030902</link>
        <id>10.14569/IJACSA.2012.030902</id>
        <doi>10.14569/IJACSA.2012.030902</doi>
        <lastModDate>2013-02-02T09:13:57.2000000+00:00</lastModDate>
        
        <creator>Riham Khalil</creator>
        
        <creator>David Stockton</creator>
        
        <creator>Parminder Singh Kang</creator>
        
        <creator>Lawrence Manyonge Mukhongo</creator>
        
        <subject>Synchronous Manufacturing; Drum-Buffer-Rope; Flow Lines; Multi-Objective Optimisation; Job Sequence.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>This paper presents a framework for the development of a multi-objective genetic algorithm based job sequencing method that accounts for multiple resource constraints. The Theory of Constraints based Drum-Buffer-Rope methodology is combined with a genetic algorithm to exploit the system constraints, which may affect lead times, throughput and inventory holding costs. A multi-objective genetic algorithm is introduced for job sequence optimization to minimize lead times and total inventory holding cost; this includes problem encoding, chromosome representation, selection, genetic operators and fitness measurement, where queuing times and throughput are used as fitness measures. The paper also provides a brief comparison of the proposed approach with other optimisation approaches. The algorithm generates a sequence that maximizes throughput and minimizes queuing time on the bottleneck/Capacity Constraint Resource (CCR). Finally, results are analysed to show the improvement achieved with the proposed framework.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_2-A_Multi-Objective_Optimization_Approach_Using_Genetic_Algorithms_for_Quick_Response_to_Effects_of_Variability_in_Flow_Manufacturing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>3D Face Compression and Recognition using Spherical Wavelet Parametrization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030901</link>
        <id>10.14569/IJACSA.2012.030901</id>
        <doi>10.14569/IJACSA.2012.030901</doi>
        <lastModDate>2013-02-02T09:13:55.0630000+00:00</lastModDate>
        
        <creator>Rabab M. Ramadan</creator>
        
        <creator>Rehab F. Abdel-Kader</creator>
        
        <subject>3D Face Recognition; Face Compression; Geometry coding; Nose tip detection; Spherical Wavelets.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(9), 2012</description>
        <description>In this research an innovative, fully automated 3D face compression and recognition system is presented. Several novelties are introduced to make the system robust and efficient. These include: first, an automatic pose correction and normalization process using curvature analysis for nose tip detection and iterative closest point (ICP) image registration; second, the use of spherical wavelet coefficients for efficient representation of the 3D face. The spherical wavelet transform decomposes the face image into multi-resolution sub-images characterizing the underlying functions locally in both the spatial and frequency domains. Two representation features based on spherical wavelet parameterization of the face image are proposed for 3D face compression and recognition. Principal component analysis (PCA) is used to project onto a low-resolution sub-band. To evaluate the performance of the proposed approach, experiments were performed on the GAVAB face database. Experimental results show that the spherical wavelet coefficients yield excellent compression capabilities with a minimal set of features. Haar wavelet coefficients extracted from the face geometry image were found to generate good recognition results that outperform other methods on the GAVAB database.</description>
        <description>http://thesai.org/Downloads/Volume3No9/Paper_1-3D_Face_Compression_and_Recognition_using_Spherical_Wavelet_Parametrization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-modal Person Localization And Emergency Detection Using The Kinect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020106</link>
        <id>10.14569/IJARAI.2013.020106</id>
        <doi>10.14569/IJARAI.2013.020106</doi>
        <lastModDate>2013-01-09T12:17:52.8600000+00:00</lastModDate>
        
        <creator>Georgios Galatas</creator>
        
        <creator>Shahina Ferdous</creator>
        
        <creator>Fillia Makedon</creator>
        
        <subject>localization; multi-modal; Kinect; speech recognition; context-awareness; 3-D interaction</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>Person localization is of paramount importance in an ambient intelligence environment since it is the first step towards context-awareness. In this work, we present the development of a novel system for multi-modal person localization and emergency detection in an assistive ambient intelligence environment for the elderly. Our system is based on the depth sensor and microphone array of 2 Kinect devices. We use skeletal tracking conducted on the depth images and sound source localization conducted on the captured audio signal to estimate the location of a person. In conjunction with the location information, automatic speech recognition is used as a natural and intuitive means of communication in order to detect emergencies and accidents, such as falls. Our system attained high accuracy for both the localization and speech recognition tasks, verifying its effectiveness. </description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_6-Multi-modal_Person_Localization_And_Emergency_Detection_Using_The_Kinect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An interactive Tool for Writer Identification based on Offline Text Dependent Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020105</link>
        <id>10.14569/IJARAI.2013.020105</id>
        <doi>10.14569/IJARAI.2013.020105</doi>
        <lastModDate>2013-01-09T12:17:49.0830000+00:00</lastModDate>
        
        <creator>Saranya K</creator>
        
        <creator>Vijaya MS</creator>
        
        <subject>Feature Extraction; Support Vector Machine; Training, Writer Identification.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>Writer identification is the process of identifying the writer of a document based on their handwriting. The growth of the computational engineering, artificial intelligence and pattern recognition fields owes much to the highly challenging problem of handwriting identification. This paper proposes a computational intelligence technique to develop a discriminative model for writer identification based on handwritten documents. Scanned images of handwritten documents are segmented into words, and these words are further segmented into characters for word-level and character-level writer identification. A set of features is extracted from the segmented words and characters. The feature vectors are trained using a support vector machine, achieving 94.27% accuracy at word level and 90.10% at character level. An interactive tool has been developed based on the word-level writer identification model.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_5-An_interactive_Tool_for_Writer_Identification_based_on_Offline_Text_Dependent_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for object motion characteristic estimation based on wavelet Multi-Resolution Analysis: MRA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020104</link>
        <id>10.14569/IJARAI.2013.020104</id>
        <doi>10.14569/IJARAI.2013.020104</doi>
        <lastModDate>2013-01-09T12:17:45.0570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>object motion characteristic; MRA; wavelet.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>A method for object motion characteristic estimation based on wavelet Multi-Resolution Analysis (MRA) is proposed. From moving pictures, the motion characteristics, direction of translation, and roll/pitch/yaw rotations can be estimated by MRA with an appropriate support length of the wavelet base function. Through a simulation study, a method for determining the appropriate support length of the Daubechies base function is clarified, and the proposed method for object motion characteristic estimation is validated.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_4-Method_for_object_motion_characteristic_estimation_based_on_wavelet_Multi-Resolution_Analysis_MRA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Prediction Method with Nonlinear Control Lines Derived from Kriging Method with Extracted Feature Points Based on Morphing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020103</link>
        <id>10.14569/IJARAI.2013.020103</id>
        <doi>10.14569/IJARAI.2013.020103</doi>
        <lastModDate>2013-01-09T12:17:42.9670000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Kriging; morphing; image prediction; interpolation; image feature extraction.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>A method for image prediction with nonlinear control lines, derived from feature points extracted from previously acquired imagery data using the Kriging and morphing methods, is proposed. Comparisons between the proposed method, conventional linear interpolation, and the widely used cubic spline interpolation show that the proposed method is superior to the conventional methods in terms of prediction accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_3-Image_Prediction_Method_with_Nonlinear_Control_Lines_Derived_from_Kriging_Method_with_Extracted_Feature_Points_Based_on_Morphing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction Method for Time Series of Imagery Data in Eigen Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020102</link>
        <id>10.14569/IJARAI.2013.020102</id>
        <doi>10.14569/IJARAI.2013.020102</doi>
        <lastModDate>2013-01-09T12:17:40.8770000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>prediction method; eigen value decomposition; eigen space; time series analysis.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>A prediction method for time series of imagery data in eigen space is proposed. Whereas conventional prediction methods are defined in the real-world space and time domains, the proposed method is defined in eigen space, where its prediction accuracy is expected to be superior to that of the conventional methods. Through experiments with time series of satellite imagery data, the validity of the proposed method is confirmed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_2-Prediction_Method_for_Time_Series_of_Imagery_Data_in_Eigen_Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CBR&#160;in the service&#160;of accident cases evaluating</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2013</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2013.020101</link>
        <id>10.14569/IJARAI.2013.020101</id>
        <doi>10.14569/IJARAI.2013.020101</doi>
        <lastModDate>2013-01-09T12:17:38.6930000+00:00</lastModDate>
        
        <creator>Lassa&#226;d Mejri</creator>
        
        <creator>Sofian Madi</creator>
        
        <creator>Henda Ben Gh&#233;zala</creator>
        
        <subject>Security of transport; Artificial Intelligence; Ontology; Case-Based Reasoning; Resolution scenario; Scenario of accident.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 2(1), 2013</description>
        <description>This paper introduces research aimed at developing a decision support system for the approval of automated railway transportation systems. The objective is to implement a method for evaluating the degree of compliance of an automated transportation system with a group of safety standards through the analysis of accident scenarios. To reach this target, we adopted a return-of-experience (Rex) approach that draws lessons from accidents/incidents experienced and/or imagined by the safety analysis experts at IFSTAAR. Our approach offers decision support to certification experts based on the reuse of accident scenarios already validated historically on other approved transportation systems. This Rex approach is very useful since it provides the experts with a class of accident scenarios similar to, and close in context to, the new case being treated. Case-based reasoning is then exploited as a mode of reasoning by analogy, allowing a subgroup of historical cases that can help resolve the new case introduced by the experts to be selected and recalled. Process-Oriented Case-Based Reasoning (PO-CBR) is a growing application area in which CBR is used to address problems involving process data in a variety of specialized domains; PO-CBR systems often use structured cases. Our approach is characterized by a two-phase retrieval strategy. The first phase retrieves a set of candidate cases (the class of cases most similar to the problem to be resolved). In the second phase, a finer-grained strategy based on similarity measures is applied to the pool of candidate cases already selected. This approach can enhance the case retrieval process compared to an exhaustive case-by-case comparison.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume2No1/Paper_1-CBR_in_the_service_of_accident_cases_evaluating.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Important Features Detection in Continuous Data </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031239</link>
        <id>10.14569/IJACSA.2012.031239</id>
        <doi>10.14569/IJACSA.2012.031239</doi>
        <lastModDate>2012-12-31T09:36:07.3130000+00:00</lastModDate>
        
        <creator>Piotr Fulmanski</creator>
        
        <creator>Alicja Miniak-G&#243;recka</creator>
        
        <subject>important features extraction; continuous data analysis; decision tree.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this paper, a method for calculating the importance factor of continuous features from a given set of patterns is presented. A real problem in many practical cases, such as medical data, is finding which parts of patterns are crucial for correct classification. This leads to the need to preprocess all data, which influences both the time and accuracy of the applied methods (when unimportant data hide those which are important). Some methods allow the selection of important features for binary and sometimes discrete data or, after some preprocessing, continuous data. Very often, however, such conversion carries the risk of losing important data, resulting from a lack of knowledge of the consequences of discretization. The proposed method avoids that problem because it operates on the original, non-transformed continuous data. Two factors, concentration and diversity, are defined and used to calculate the importance factor for each feature and pattern. Based on those factors, unimportant features can be identified to decrease the dimension of the input data, or &#39;&#39;bad&#39;&#39; patterns can be detected to improve classification. An example of how the proposed method can be used to improve a decision tree is also given.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_39-Important_Features_Detection_in_Continuous_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Masking Digital Image using a Novel technique based on a Transmission Chaotic System and SPIHT Coding Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031238</link>
        <id>10.14569/IJACSA.2012.031238</id>
        <doi>10.14569/IJACSA.2012.031238</doi>
        <lastModDate>2012-12-31T09:36:05.2400000+00:00</lastModDate>
        
        <creator>Hamiche Hamid </creator>
        
        <creator>Lahdir Mourad </creator>
        
        <creator>Tahanout Mohammed </creator>
        
        <creator>Djennoune Said </creator>
        
        <subject>Chaos; Modified Henon; Colpitts; SPIHT; Robustness.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this article, a new transmission system for encrypted images based on a novel chaotic system and the SPIHT technique is proposed. The chaotic system combines two previously developed chaotic systems: the discrete-time modified Henon system and the continuous-time Colpitts system. The transmission system is designed to exploit two advantages. The first is the use of a robust, standard algorithm (SPIHT) that is well suited to digital transmission. The second is the additional encryption complexity introduced by the chaotic system over a secure channel. Through these two advantages, our purpose is to obtain a system that is robust against pirate attacks. Cryptanalysis and various experiments have been carried out, and the results reported in this paper demonstrate the feasibility and flexibility of the proposed scheme.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_38-Masking_Digital_Image_using_a_Novel_technique_based_on_a_Transmission_Chaotic_System_and_SPIHT_Coding_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Test Case Prioritization Using Fuzzy Logic for GUI based Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031237</link>
        <id>10.14569/IJACSA.2012.031237</id>
        <doi>10.14569/IJACSA.2012.031237</doi>
        <lastModDate>2012-12-31T09:36:03.1630000+00:00</lastModDate>
        
        <creator>Neha Chaudhary</creator>
        
        <creator>Om Prakash Sangwan</creator>
        
        <creator>Yogesh Singh</creator>
        
        <subject>Graphical user Interface; Prioritization; Test Suite; Fuzzy Model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Testing of GUI (Graphical User Interface) applications poses many challenges due to their event-driven nature and infinite input domain. It is very difficult for any programmer to test each and every possible input. When test cases are generated using an automated testing tool, every possible combination is used, hence a very large number of test cases is generated for any GUI-based application. Within a defined time frame it is not possible to execute every test case, which is why test case prioritization is required. Test-case prioritization has been widely proposed and used in recent years, as it can improve the rate of fault detection during the testing phase. Very few methods are defined for GUI test case prioritization, and they usually consider a single criterion for assigning priority to a test case, which is not sufficient to consider that test case as more fault revealing. In this paper we propose a method for assigning a weight value on the basis of multiple factors as one of the criteria for test case prioritization for GUI-based software. These factors are: the type of event, event interaction, and parameter-value interaction coverage-based criteria. In the proposed approach, priority is assigned based upon these factors using a fuzzy logic model. Experimental results indicate that the proposed model is suitable for prioritizing the test cases of GUI-based software. </description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_37-Test_case_Prioritization_Using_Fuzzy_Logic_for_GUI_based_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Equalization: Analysis of MIMO Systems in Frequency Selective Channel</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031236</link>
        <id>10.14569/IJACSA.2012.031236</id>
        <doi>10.14569/IJACSA.2012.031236</doi>
        <lastModDate>2012-12-31T09:36:01.0900000+00:00</lastModDate>
        
        <creator>Amit Grover</creator>
        
        <subject>Quadrature Amplitude Modulation (QAM); Quadrature Phase Shift Key (QPSK); Binary Phase Shift Key (BPSK); Minimum mean-squared error (MMSE); Maximum likelihood (ML); Bit error rate (BER); Inter-symbol interference (ISI); Successive-interference-cancellation (SIC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Due to the increased demand for wireless communication systems, which provide wide coverage, high throughput and reliable services, MIMO systems have come into existence. These systems ensure improved coverage and increased data transmission rates by employing multiple transmitter and receiver antennas. In this paper, we consider equalization: a filtering approach that minimizes the error between actual and desired output by continuously updating its filter coefficients, applied to a Rayleigh frequency-selective fading channel. We conclude that MMSE and ZF give the worst performance in the Rayleigh frequency-selective channel compared to the Rayleigh flat fading channel [35], due to a constant BER at large SNRs. We also observe that successive interference cancellation methods provide better performance than the others, but their complexity is high. ML provides better performance than the other equalizers, and its BER does not remain constant at large SNR. Simulation results show that the ML equalizer with BPSK performs better than with QPSK. Finally, we conclude that the sphere decoder provides the best performance.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_36-Equalization_Analysis_of_MIMO_Systems_in_Frequency_Selective_Channel.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Cloud Detection and Retrieval System for Satellite Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031235</link>
        <id>10.14569/IJACSA.2012.031235</id>
        <doi>10.14569/IJACSA.2012.031235</doi>
        <lastModDate>2012-12-31T09:35:59.0130000+00:00</lastModDate>
        
        <creator>Noureldin Laban</creator>
        
        <creator>Ayman Nasr</creator>
        
        <creator>Motaz ElSaban</creator>
        
        <creator>Hoda Onsi</creator>
        
        <subject>Satellite images; Content based image retrieval; Query by polygon; Retrieval refinement; cloud detection; geographic information system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In the last decade we have witnessed a large increase in data generated by earth-observing satellites. Hence, intelligent processing of the huge amount of data received by hundreds of earth receiving stations, with approaches geared specifically toward satellite images, presents itself as a pressing need. One of the most important steps in the early stages of satellite image processing is cloud detection. Satellite images with a large percentage of cloud cover cannot be used in further analysis. While there are many approaches that deal with different semantic meanings, approaches that deal specifically with cloud detection and retrieval are rare. In this paper we introduce a novel approach that spatially detects and retrieves clouds in satellite images using their unique properties. Our approach is developed as a spatial cloud detection and retrieval system (SCDRS) that introduces a complete framework for a specific semantic retrieval system. It uses a query-by-polygon (QBP) paradigm for the content of interest instead of the more conventional rectangular query-by-image approach. First, we extract features from the satellite images at multiple tile sizes using spatial and textural properties of cloud regions. Second, we retrieve tiles using a parametric statistical approach within a multilevel refinement process. Our approach has been experimentally validated against conventional ones, yielding enhanced precision and recall rates while giving more precise detection of cloud coverage regions.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_35-Spatial_Cloud_Detection_and_Retrieval_System_for_Satellite_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of Gender and Age Group Recognition for Human-Robot Interaction </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031234</link>
        <id>10.14569/IJACSA.2012.031234</id>
        <doi>10.14569/IJACSA.2012.031234</doi>
        <lastModDate>2012-12-31T09:35:56.9230000+00:00</lastModDate>
        
        <creator>Myung Won Lee</creator>
        
        <creator>Keun-Chang Kwak</creator>
        
        <subject>gender recognition; age group recognition; human-robot interaction. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this paper, we focus on a performance comparison of gender and age group recognition for performing robot application services in Human-Robot Interaction (HRI). HRI is a core technology that enables natural interaction between human and robot. Among the various HRI components, we concentrate on audio-based techniques such as gender and age group recognition using the multichannel microphones and sound board equipped on robots. For comparative purposes, we compare the performance of Mel-Frequency Cepstral Coefficients (MFCC) and Linear Prediction Coding Coefficients (LPCC) in the feature extraction step, and Support Vector Machine (SVM) and C4.5 Decision Tree (DT) in the classification step. Finally, we discuss the usefulness of gender and age group recognition for human-robot interaction in home service robot environments.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_34-Performance_Comparison_of_Gender_and_Age_Group_Recognition_for_Human_Robot_Interaction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Subpixel Accuracy Analysis of Phase Correlation Shift Measurement Methods Applied to Satellite Imagery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031233</link>
        <id>10.14569/IJACSA.2012.031233</id>
        <doi>10.14569/IJACSA.2012.031233</doi>
        <lastModDate>2012-12-31T09:35:54.8030000+00:00</lastModDate>
        
        <creator>S. A. Mohamed</creator>
        
        <creator>A.K. Helmi</creator>
        
        <creator>M.A. Fkirin</creator>
        
        <creator>S.M. Badwai</creator>
        
        <subject>phase correlation (PC); high pass filter (HPF); window function; sub-pixel shift.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>The key point of the super-resolution process is the accurate measurement of sub-pixel shift. Any tiny error in measuring such a shift leads to incorrect image focusing. In this paper, the methodology of measuring sub-pixel shift using phase correlation (PC) is evaluated using different window functions, and then a modified version of the PC method using a high pass filter (HPF) is introduced. Comprehensive analysis and assessment of PC methods shows that different natural features yield different shift measurements. It is concluded that there is no universal window function for measuring shift; it mainly depends on the features in the satellite images. Even the question of which window is optimal for a particular feature generally remains open. This paper presents the design of a method for obtaining high-accuracy sub-pixel shift phase correlation using the HPF. The proposed method makes it easy to handle locations that lack edges.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_33-Subpixel_Accuracy_Analysis_of_Phase_Correlation_Shift_Measurement_Methods_Applied_to_Satellite_Imagery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Productisation of Service: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031232</link>
        <id>10.14569/IJACSA.2012.031232</id>
        <doi>10.14569/IJACSA.2012.031232</doi>
        <lastModDate>2012-12-31T09:35:50.8570000+00:00</lastModDate>
        
        <creator>Nilanjan Chattopadhyay </creator>
        
        <subject>Services management; productisation; service product; data sanitisation; data security; productised service.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>This paper discusses the issue of productisation of service, i.e. the development of systemic, scalable and replicable service offerings, as implemented by a multinational consulting organization, engaged in the business of outsourcing and consulting solutions, from its office in India. The literature is quite rich with discussions and debates related to products and services individually, but there seems to be an important deficiency in terms of ‘integration’ between product design and service elements to support a new service-product system. In today’s flat world, geographic boundaries are diminishing as firms expand seamlessly across the globe. This seamless expansion of the electronic data processing market uses outsourcing as one of its main ways to expand into various geographies. Data sanitization is one of the most sought-after service offerings made by a consulting firm to protect sensitive client data from misuse. The paper attempts to document the process followed by a firm to productise its data sanitization service offering. This documentation will not only help in the integration of product and service parameters, but will also be extremely helpful for organizations worldwide offering service as a business.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_32-Productisation_of_Service_A_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Error Analysis on Estimation Method for Air-Temperature, Atmospheric Pressure, and Relative Humidity Using Absorption Due to CO2, O2, and H2O Situated Around the Near Infrared Wavelength Region</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031231</link>
        <id>10.14569/IJACSA.2012.031231</id>
        <doi>10.14569/IJACSA.2012.031231</doi>
        <lastModDate>2012-12-31T09:35:46.9100000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>absorption band; regressive analysis; air-temperature; atmospheric pressure and relative humidity estimations.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>A method for estimating air-temperature, atmospheric pressure and relative humidity using absorption bands of CO2, O2 and H2O situated around the near infrared wavelength region is proposed and its validity evaluated. Simulation study results with MODTRAN show the validity of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_31-Error_Analysis_on_Estimation_Method_for_Air_Temperature_Atmopspheric_Pressure_and_Realtive_Humidity_Using_Absorption.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Financial Statement Fraud Detection using Text Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031230</link>
        <id>10.14569/IJACSA.2012.031230</id>
        <doi>10.14569/IJACSA.2012.031230</doi>
        <lastModDate>2012-12-31T09:35:44.8330000+00:00</lastModDate>
        
        <creator>Rajan Gupta</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <subject>Text Mining; Bag of words; Support Vector Machines.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Data mining techniques have been used extensively by the research community in detecting financial statement fraud. Most of the research in this direction has used numbers (quantitative information), i.e. financial ratios present in the financial statements, for detecting fraud. There is very little research on the analysis of text, such as auditor’s comments or notes present in published reports. In this study we propose a text mining approach for detecting financial statement fraud by analyzing the hidden clues in the qualitative information (text) present in financial statements.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_30-Financial_Statement_Fraud_Detection_using_Text_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Passing VBR in Mobile Ad Hoc Networks – for effective live video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031229</link>
        <id>10.14569/IJACSA.2012.031229</id>
        <doi>10.14569/IJACSA.2012.031229</doi>
        <lastModDate>2012-12-31T09:35:42.7430000+00:00</lastModDate>
        
        <creator>V. Saravanan</creator>
        
        <creator>Dr.C.Chandrasekar</creator>
        
        <subject>Variable Bit Rate; Mobile Ad Hoc; Machine Learning.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Mobile ad hoc networks (often referred to as MANETs) consist of wireless hosts that communicate with each other in the absence of a fixed infrastructure. This technique can be used effectively in disaster management, intellectual conferences and battlefield environments, and it has received significant attention in recent years. This research paper depicts the benefits of using mobility tracking for selecting energy-conserving routes in delay-tolerant applications with Variable Bit Rate delivery. The investigation builds on the earlier observation that delay can be traded for energy efficiency when selecting a path. The primary objective is to find an empirical upper bound on the energy savings by assuming that each node accurately knows or predicts its future path. It examines the effect of varying the amount of future information on routing. Such a bound may prove useful in deciding how far to look ahead, and thus how much complexity to provide in mobility tracking.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_29-Passing_VBR_in_Mobile_Ad_Hoc_Networks_for_effective_live_video_Streaming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic Algorithm Based Approach for Obtaining Alignment of Multiple Sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031228</link>
        <id>10.14569/IJACSA.2012.031228</id>
        <doi>10.14569/IJACSA.2012.031228</doi>
        <lastModDate>2012-12-31T09:35:40.6530000+00:00</lastModDate>
        
        <creator>Ruchi Gupta</creator>
        
        <creator>Dr. Pankaj Agarwal</creator>
        
        <creator>Dr. A. K. Soni</creator>
        
        <subject>DNA Sequences; alignment; Genetic Algorithm; Crossover; Mutation; Selection; Multiple Sequence Alignment etc.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>This paper presents a genetic algorithm based solution for determining the alignment of multiple molecular sequences. Two datasets, the DNA family Canis_familiaris and the galaxy dataset, have been considered for experimental work &amp; analysis. Genetic operators such as the crossover rate and mutation rate can be defined by the user. Experiments &amp; observations were recorded w.r.t. variable parameters such as fixed population size vs. variable number of generations &amp; vice versa, and variable crossover &amp; mutation rates. A comparative evaluation in terms of fitness accuracy is also carried out w.r.t. existing MSA tools such as Maft and Kalign. Experimental results show that the proposed solution does offer better fitness accuracy rates.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_28-Genetic_Algorithm_Based_Approach_for_Obtaining_Alignment_of_Multiple_Sequences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable and Flexible heterogeneous multi-core system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031227</link>
        <id>10.14569/IJACSA.2012.031227</id>
        <doi>10.14569/IJACSA.2012.031227</doi>
        <lastModDate>2012-12-31T09:35:38.5630000+00:00</lastModDate>
        
        <creator>Rashmi A Jain</creator>
        
        <creator>Dr. Dinesh V. Padole</creator>
        
        <subject>Flexible Heterogeneous Multi Core system (FMC); instruction level parallelism, thread-level parallelism; and memory-level parallelism; scalable; chip multiprocessors (CMP).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Multi-core systems have wide utility in today’s applications due to their low power consumption and high performance. Many researchers aim at improving the performance of these systems by providing a flexible multi-core architecture. Flexibility in multi-core processor systems provides high throughput for uniform parallel applications as well as high performance for more general workloads. This flexibility in the architecture can be achieved by a scalable, variable-size window microarchitecture. It uses the concept of execution locality to provide large-window capabilities. Use of high memory-level parallelism (MLP) reduces the memory wall. The microarchitecture contains a set of small and fast cache processors which execute high-locality code. A network of small in-order memory engines executes low-locality code to improve performance by using instruction-level parallelism (ILP). A dynamic heterogeneous multi-core architecture is capable of reconfiguring itself to fit application requirements. A study of different scalable and flexible architectures of heterogeneous multi-core systems has been carried out and is presented.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_27-Scalable_and_Flexible_heterogeneous_multi-core_system.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Certain Trust Model Using Fuzzy Logic and Probabilistic Logic theory </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031226</link>
        <id>10.14569/IJACSA.2012.031226</id>
        <doi>10.14569/IJACSA.2012.031226</doi>
        <lastModDate>2012-12-31T09:35:36.4870000+00:00</lastModDate>
        
        <creator>Kawser Wazed Nafi</creator>
        
        <creator>Tonny Shekha Kar</creator>
        
        <creator>Md. Amjad Hossain</creator>
        
        <creator> M.M.A Hashem</creator>
        
        <subject>Certain trust; Certain Logic; Fuzzy Logic; Probabilistic Logic; FAM rule; Fuzzification; Defuzzification; Inference Rules.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Trustworthiness, especially for service-oriented systems, is a very important topic nowadays in the IT field worldwide. The Certain Trust Model depends on certain values given by experts and developers; its main parameters for calculating trust are certainty and average rating. In this paper we propose an extension of the Certain Trust Model, mainly its representation portion, based on probabilistic logic and fuzzy logic. This extended model can be applied to systems such as cloud computing, the internet, websites, e-commerce, etc. to ensure the trustworthiness of these platforms. The model uses the concept of fuzzy logic to add fuzziness to certainty and average rating in order to calculate the trustworthiness of a system more accurately. We propose two new parameters - trust T and behavioral probability P - which will help both the users and the developers of the system to understand its present condition easily. Linguistic variables are defined for both T and P, and these variables are then implemented in our laboratory to verify the proposed trust model. We represent the trustworthiness of the test system for two cases of evidence values using Fuzzy Associative Memory (FAM). We use inference rules and a defuzzification method for verifying the model.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_26-An_Advanced_Certain_Trust_Model_Using_Fuzzy_Logic_and_Probabilistic_Logic_theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent Oriented Software Testing – Role Oriented approach </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031225</link>
        <id>10.14569/IJACSA.2012.031225</id>
        <doi>10.14569/IJACSA.2012.031225</doi>
        <lastModDate>2012-12-31T09:35:34.3970000+00:00</lastModDate>
        
        <creator>N Sivakumar</creator>
        
        <creator>K.Vivekanandan</creator>
        
        <subject>Agent oriented software enginerring; Multi-Agent System; Role oriented testing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Several Agent Oriented Software Engineering (AOSE) methodologies have been proposed to build open, heterogeneous and complex internet-based systems. AOSE methodologies offer different conceptual frameworks, notations and techniques, thereby providing a platform to make the system abstract, general, dynamic and autonomous. Lifecycle coverage is one of the important criteria for evaluating an AOSE methodology. Most of the existing AOSE methodologies focus only on analysis, design and implementation, and disregard testing, stating that testing can be done by extending existing object-oriented testing techniques. Though objects and agents have some similarities, they differ widely. Role is an important attribute of an agent that has a huge scope and support for the analysis, design and implementation of Multi-Agent Systems (MAS). The main objective of this paper is to extend the scope and support of roles towards testing, thereby filling the gap in software testing within AOSE. This paper presents an overview of role-based testing based on the V-Model, in order to add Agent-Oriented Software Testing as the next new component in the agent-oriented development life cycle.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_25-Agent_Oriented_Software_Testing_Role_Oriented_approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Peer Assignment Review Process for Collaborative E-learning: Is the Student Learning Process Changing?</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031224</link>
        <id>10.14569/IJACSA.2012.031224</id>
        <doi>10.14569/IJACSA.2012.031224</doi>
        <lastModDate>2012-12-31T09:35:32.3070000+00:00</lastModDate>
        
        <creator>Evelyn Kigozi Kahiigi</creator>
        
        <creator>Mikko Vesisenaho</creator>
        
        <creator>F.F Tusubira</creator>
        
        <creator>Henrik Hansson</creator>
        
        <creator>Mats Danielson</creator>
        
        <subject>Peer Review; Collaborative E-learning; Learning Process; Students; University; Uganda; Developing Country.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In recent years collaborative e-learning has been emphasized as a learning method that facilitates knowledge construction and supports student learning. However, some universities, especially in developing-country contexts, are struggling to attain even minimal educational benefits from its adoption and use. This paper investigates the application of a peer assignment review process for collaborative e-learning with third-year undergraduate students. The study aimed to evaluate the effect of the peer assignment review process on the student learning process. Data were collected using a survey questionnaire and analyzed using SPSS Version 16.0. While students reported a positive impact of the peer assignment review process, in terms of encouraging students to put in more effort and improve their work, quick feedback on their assignments, effective sharing and development of knowledge and information, and the need for computer competence to operate the peer assignment review system, analysis of the quantitative data indicated that the process had a limited effect on the learning process. This is attributed to a lack of review skills, absence of lecturer scaffolding, low ICT literacy levels and change management.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_24-Peer_Assignment_Review_Process_for_Collaborative_E_learning_Is_the_Student_Learning_Process_Changing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Water Vapor Profile Retrievals by Means of Minimizing Difference Between Estimated and Actual Brightness Temperatures Derived from AIRS Data and Radiative Transfer Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031223</link>
        <id>10.14569/IJACSA.2012.031223</id>
        <doi>10.14569/IJACSA.2012.031223</doi>
        <lastModDate>2012-12-31T09:35:30.2300000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>infrared sounder; non-linear optimization method; linearized inversion.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>A method for water vapor profile retrieval by means of minimizing the difference between estimated and actual brightness temperatures derived from AIRS data and a radiative transfer model is proposed. The initial value is determined by a linearized radiative transfer equation. It is found that this initial value determination method improves estimation accuracy while reducing convergence time.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_23-Method_for_Water_Vapor_Profile_Retievals_by_Means_of_Minimizing_Difference_Between_Estimated_and_Actual_Brightness_Temperatures_Derived.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimizing the Performance Evaluation of Robotic Arms with the Aid of Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031222</link>
        <id>10.14569/IJACSA.2012.031222</id>
        <doi>10.14569/IJACSA.2012.031222</doi>
        <lastModDate>2012-12-31T09:35:28.1570000+00:00</lastModDate>
        
        <creator>K Shivaprakash Reddy</creator>
        
        <creator>Dr.P V K Perumal</creator>
        
        <creator>Dr.B. Durgaprasad</creator>
        
        <creator>Dr.M.A Murtaza</creator>
        
        <subject>Particle Swarm Optimization; Robotic arm gear box; Static &amp; Dynamic parameters.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In the modern world, robotic evaluation plays an important role: it lets humans carry out unsafe tasks from a secure distance. To obtain effective results, the system that eases the human task must be handled carefully and the hold-ups behind it eliminated. In existing work, only static parameters are considered, and such parameters are not enough to obtain an optimized value. To attain an optimized value, our previous work considered both static and dynamic parameters in the robotic arm gearbox model using a genetic algorithm, and the result obtained surpassed the existing work. On the other hand, the genetic algorithm alone is not enough to attain an effective result, since its computation takes a massive amount of time and the value it produces is not very close to the true value. To eliminate the aforementioned issues, a proper algorithm needs to be utilized in order to achieve a more efficient result than the existing and our previous works. In this paper, we propose a Particle Swarm Optimization technique that reduces the computation time and brings the output much closer to the true, i.e., experimentally obtained, value.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_22-Optimizing_the_Performance_Evaluation_of_Robotic_Arms_with_the_Aid_of_Particle_Swarm_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing for Solving E-Learning Problems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031221</link>
        <id>10.14569/IJACSA.2012.031221</id>
        <doi>10.14569/IJACSA.2012.031221</doi>
        <lastModDate>2012-12-31T09:35:24.1800000+00:00</lastModDate>
        
        <creator>N S. Abu El Ala</creator>
        
        <creator>W. A. Awad</creator>
        
        <creator>H. M. El-Bakry</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In line with the global trend, the integration of information and communication technologies in education has attracted great interest in the Arab world through e-learning techniques: putting them into the form of services within a Service-Oriented Architecture (SOA), mixing their inputs and outputs with the components of Education Business Intelligence (EBI), and enhancing them to simulate reality through educational virtual worlds. This paper presents a creative environment derived from both virtual and personal learning environments based on cloud computing, which contains a variety of tools and techniques to enhance the educational process. The proposed environment focuses on designing and monitoring an educational environment that reuses existing web tools, techniques, and services to provide browser-based applications.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_21-Cloud_Computing_for_Solving_E-Learning_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Finding Association Rules through Efficient Knowledge Management Technique </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031220</link>
        <id>10.14569/IJACSA.2012.031220</id>
        <doi>10.14569/IJACSA.2012.031220</doi>
        <lastModDate>2012-12-31T09:35:22.0870000+00:00</lastModDate>
        
        <creator>Anwar M. A.</creator>
        
        <subject>Data Mining; Co-occurrences; Incremental association rules; Dynamic Databases.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>One of the recent research topics in databases is data mining: finding, extracting, and mining useful information from databases. When transactions in the database are updated, already discovered knowledge may become invalid, so we need efficient knowledge management techniques for deriving the updated knowledge from the database. There has been a lot of research in data mining, but knowledge management in databases has not been studied much. One data mining technique is finding association rules in databases, yet most association rule algorithms operate on transactional databases. Our research is a further step beyond the Tree Based Association Rule Mining (TBAR) algorithm, used in relational databases for finding association rules. In our approach to updating already discovered knowledge, the proposed Association Rule Update (ARU) algorithm updates the association rules previously found through the TBAR algorithm. Our algorithm can find incremental association rules in relational databases and efficiently manage previously found knowledge.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_20-Finding_Association_Rules_through_Efficient_Knowledge_Management_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Compression for Video-Conferencing using Half tone and Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031219</link>
        <id>10.14569/IJACSA.2012.031219</id>
        <doi>10.14569/IJACSA.2012.031219</doi>
        <lastModDate>2012-12-31T09:35:19.9830000+00:00</lastModDate>
        
        <creator>Dr H.B Kekre</creator>
        
        <creator>Sanjay R. Sange</creator>
        
        <creator>Dr. Tanuja K. Sarode</creator>
        
        <subject>Half tone; Low-Bit rate; video data compression; Wavelet Transform; Bandwidth optimization; Structural Similarity Index Measure (SSIM).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>The overhead of data transmission over the Internet is increasing exponentially every day, so optimizing the available bandwidth by compressing image data to the maximum extent is the basic motive. For this objective, a combination of lossy halftone and lossless Wavelet Transform techniques is proposed to obtain low-bit-rate video data transmission. In the halftoning process, decimal values of the bitmapped image are converted to either 1 or 0, which incurs pictorial loss and gives an 8:1 compression ratio (CR) irrespective of the image. The Wavelet Transform is then applied to the halftone image at various levels for higher compression. Experimental results show a higher CR with minimum Mean Square Error (MSE). Ten sample images of different people captured by a Nikon camera are used for experimentation; all images are 512 &#215; 512 bitmaps (.BMP). The proposed technique can be used for video conferencing, storage of movies, CCTV footage, etc.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_19-Data_Compression_for_Video_Conferencing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Block Cipher Involving a Key bunch Matrix and a Key-based Permutation and Substitution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031218</link>
        <id>10.14569/IJACSA.2012.031218</id>
        <doi>10.14569/IJACSA.2012.031218</doi>
        <lastModDate>2012-12-31T09:35:17.9070000+00:00</lastModDate>
        
        <creator>Dr V.U.K Sastry</creator>
        
        <creator>K. Shirisha</creator>
        
        <subject>Key bunch matrix; encryption; decryption; permutation; substitution; avalanche effect; cryptanalysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this paper, we have developed a novel block cipher involving a key bunch matrix supported by a key-based permutation and a key-based substitution. In this analysis, the decryption key bunch matrix is obtained by using the given encryption key bunch matrix and the concept of multiplicative inverse. From the cryptanalysis carried out in this investigation, we have seen that the strength of the cipher is remarkably good and it cannot be broken by any conventional attack.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_18-A_Novel_Block_Cipher_Involving_a_Key_bunch_Matrix_and_a_Key_based_Permutation_and_Substitution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Block Cipher Involving a Key Bunch Matrix and an Additional Key Matrix, Supplemented with Modular Arithmetic Addition and Supported by Key-based Substitution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031217</link>
        <id>10.14569/IJACSA.2012.031217</id>
        <doi>10.14569/IJACSA.2012.031217</doi>
        <lastModDate>2012-12-31T09:35:15.8170000+00:00</lastModDate>
        
        <creator>Dr V.U.K Sastry</creator>
        
        <creator>K. Shirisha</creator>
        
        <subject>key bunch matrix; additional key matrix; multiplicative inverse; encryption; decryption; permute; substitute.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this paper, we have devoted our attention to the development of a block cipher, which involves a key bunch matrix, an additional matrix, and a key matrix utilized in the development of a pair of functions called Permute() and Substitute(). These two functions are used for the creation of confusion and diffusion for each round of the iteration process of the encryption algorithm. The avalanche effect shows the strength of the cipher, and the cryptanalysis ensures that this cipher cannot be broken by any cryptanalytic attack generally available in the literature of cryptography.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_17-A_Block_Cipher_Involving_a_Key_Bunch_Matrix_and_an_Additional_Key_Matrix_Supplemented_with_Modular_Arithmetic_Addition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study of Proper Hierarchical Graphs on a Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031216</link>
        <id>10.14569/IJACSA.2012.031216</id>
        <doi>10.14569/IJACSA.2012.031216</doi>
        <lastModDate>2012-12-31T09:35:13.7430000+00:00</lastModDate>
        
        <creator>Mohamed A. El Sayed</creator>
        
        <creator>Ahmed A. A. Radwan</creator>
        
        <creator>Nahla F. Omran </creator>
        
        <subject>level graphs; hierarchical graphs; algorithms; graph drawing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Hierarchical planar graph embedding (sometimes called level planar graph embedding) is widely recognized as a very important task in diverse fields of research and development. Given a proper hierarchical planar graph G, we want to find a geometric position for every vertex (a layout) in a straight-line grid drawing without any edge intersection. An additional objective is to minimize the area of the rectangular grid in which G is drawn, with a more aesthetic embedding. In this paper we propose several ideas for finding an embedding of G in a rectangular grid with area (&#955; - 1) &#215; (k - 1), where &#955; is the number of vertices in the longest level and k is the number of levels in G.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_16-Study_of_Proper_Hierarchical_Graphs_on_a_Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computing the Exit Complexity of Knowledge in Distributed Quantum Computers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031215</link>
        <id>10.14569/IJACSA.2012.031215</id>
        <doi>10.14569/IJACSA.2012.031215</doi>
        <lastModDate>2012-12-31T09:35:11.6670000+00:00</lastModDate>
        
        <creator>M A Abbas</creator>
        
        <subject>Complexity; Quantum Computers; Knowledge acquisition; Graph theory; Egress;  Distributed systems.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Distributed quantum computers suffer from the exit complexity of knowledge. The exit complexity is the accumulation of the nodal information needed to characterize the total egress system with respect to a distinguished exit node. The core objective of this paper is to develop a methodology for assessing the exit complexity of knowledge in distributed quantum computers. The proposed methodology is based on modeling the knowledge using unlabeled binary trees, hence building a benchmarked, computer-based model that systematically calculates the exit complexity. The methodology consists of several stages: first detecting the dominant aspect of the tree of so-called express knowledge, then measuring the volume of information and the complexity of behavior arising from the exchange of information, then calculating the egress resulting from episodes that do not lead to the withdrawal of information, and finally computing the total egress complexity and appraising the total exit complexity of the system. Given the complexity of operations within distributed quantum computing, this research addresses effective transactions that could affect the three-dimensional behavior of knowledge. The results show that the best case, in which the total exit complexity is as small as possible, is a binary tree whose positive and negative cardinal points take a medium value; these cardinal points should not reach the upper or lower bound.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_15-Computing_the_Exit_Complexity_of_Knowledge_in_Distributed_Quantum_Computers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Feistel Cipher Involving a Bunch of Keys Supplemented with XOR Operation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031214</link>
        <id>10.14569/IJACSA.2012.031214</id>
        <doi>10.14569/IJACSA.2012.031214</doi>
        <lastModDate>2012-12-31T09:35:09.5930000+00:00</lastModDate>
        
        <creator>V U.K Sastry</creator>
        
        <creator>K. Anup Kumar</creator>
        
        <subject>encryption; decryption; cryptanalysis; avalanche effect; multiplicative inverse.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In this investigation, we have developed a novel block cipher by modifying the classical Feistel cipher. In it, we have used a key bunch wherein each key has a multiplicative inverse. The cryptanalysis carried out in this investigation clearly shows that this cipher cannot be broken by any attack.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_14-A_Novel_Feistel_Cipher_Involving_a_Bunch_of_Keys_Supplemented_with_XOR_Operation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Feistel Cipher Involving a Bunch of Keys supplemented with Modular Arithmetic Addition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031213</link>
        <id>10.14569/IJACSA.2012.031213</id>
        <doi>10.14569/IJACSA.2012.031213</doi>
        <lastModDate>2012-12-31T09:35:07.5170000+00:00</lastModDate>
        
        <creator>Dr V.U.K Sastry</creator>
        
        <creator>Mr. K. Anup Kumar</creator>
        
        <subject>encryption; decryption; cryptanalysis; avalanche effect; modular arithmetic addition.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>In the present investigation, we developed a novel Feistel cipher by dividing the plaintext into a pair of matrices. In the process of encryption, we have used a bunch of keys and  modular arithmetic addition. The avalanche effect shows that the cipher is a strong one.  The cryptanalysis carried out on this cipher indicates that this cipher cannot be broken by any cryptanalytic attack and it can be used for secured transmission of information.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_13-A_Novel_Feistel_Cipher_Involving_a_Bunch_of_Keys_supplemented_with_Modular_Arithmetic_Addition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS Routing Scheme and Route Repair in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031212</link>
        <id>10.14569/IJACSA.2012.031212</id>
        <doi>10.14569/IJACSA.2012.031212</doi>
        <lastModDate>2012-12-31T09:35:05.4430000+00:00</lastModDate>
        
        <creator>M Belghachi</creator>
        
        <creator>M. Feham</creator>
        
        <subject>WSNs; Quality of service; ACO; availability; global re-routing; local re-routing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>During the last decade, a new type of wireless network, the wireless sensor network (WSN), has evoked great interest in the scientific community. WSNs are used in various activities, such as industrial processes, military surveillance, and habitat observation and monitoring. This diversity of applications requires these networks to support different types of traffic and to provide services that are both generic and adaptive to applications, since quality of service (QoS) properties differ from one application to another. However, the need to minimize energy consumption has been the most important field of WSN research, and few studies are concerned with mechanisms for efficiently delivering application-level QoS from network-level connection metrics such as delay or bandwidth while minimizing the energy consumption of the sensor nodes in the network. The idea is to ensure QoS through a routing process that can detect paths meeting the QoS requirements, based on ant colony optimization (ACO), coupled with a reservation process for the detected routes. It is also necessary to integrate into this scheme the maintenance of routes disrupted during communication. We propose a method that aims to improve the probability of success of a local route repair, based on the density of nodes in the vicinity of a route as well as on the availability of that vicinity. Taking these parameters into account in the route selection phase (at the end of the routing process) allows selecting, among multiple routes, the one that is potentially the most easily repairable. In addition, we propose a method for early detection of the failure of a local route repair; this method can directly trigger a global re-routing process better suited to restoring communication between the source and the destination.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_12-QoS_Routing_scheme_and_route_repair_in_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Optical Internet: A Novel Attack Prevention Mechanism for an OBS node in TCP/OBS Networks </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031211</link>
        <id>10.14569/IJACSA.2012.031211</id>
        <doi>10.14569/IJACSA.2012.031211</doi>
        <lastModDate>2012-12-31T09:35:03.3530000+00:00</lastModDate>
        
        <creator>K. Muthuraj</creator>
        
        <creator>N. Sreenath</creator>
        
        <subject>optical internet security; burst hijacking attack; threats and vulnerabilities in TCP/OBS networks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>The Optical Internet has developed strongly and its commercial use is growing rapidly. Owing to their transparency and virtually shared infrastructure, optical networks provide ultra-fast data rates with the help of optical burst switching (OBS) technology, which transmits data in the form of bursts. From a security perspective, if one of the OBS nodes in the optical network is compromised, it causes a vulnerability. This paper identifies this vulnerability, named the burst hijacking attack, and provides a prevention mechanism for it. The NSFnet 14-node topology and the ns2 simulator with a modified nOBS patch are used to simulate and verify the security parameters.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_11-Secure_Optical_Internet_A_Novel_Attack_Prevention_Mechanism_for_an_OBS_node_in_TCPOBS_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design &amp; Analysis of Optical Lenses by using 2D Photonic Crystals for Sub-wavelength Focusing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031210</link>
        <id>10.14569/IJACSA.2012.031210</id>
        <doi>10.14569/IJACSA.2012.031210</doi>
        <lastModDate>2012-12-31T09:35:01.2630000+00:00</lastModDate>
        
        <creator>Rajib Ahmed</creator>
        
        <creator>Mahidul Haque Prodhan</creator>
        
        <creator>Rifat Ahmmed</creator>
        
        <subject>Photonic crystals; photonic lens; body centered cubic; face centered cubic.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>2D photonic lenses (Convex-Convex, Convex-Plane, Plane-Convex, Concave-Concave, Concave-Plane, and Plane-Concave) have been designed, simulated, and optimized for optical communication using the FDTD method. The effects of crystal structure (rectangular, hexagonal, face-centered cubic (FCC), body-centered cubic (BCC)), lattice constant, hole radius (r), and refractive index (n) are demonstrated to obtain optimized parameters. Finally, with the optimized parameters, the effect of varying the lens radius on focal length and electric field intensity (Ey) is analyzed. As with an optical lens, the focal length of a photonic lens increases with lens radius and depends on the optical axis. Moreover, with optimized parameters, the Concave-Concave lens has been found to be the optimal photonic lens, showing sub-wavelength focusing with spatial resolutions of 9.22439&#181;m (rectangular crystal), 7.379512&#181;m (hexagonal crystal), and 7.840732&#181;m (FCC, BCC).</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_10-Design_Analysis_of_Optical_Lenses_by_using_2D_Photonic_Crystals_for_Sub_wavelength_Focusing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Learning from Expressive Modeling Task </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031209</link>
        <id>10.14569/IJACSA.2012.031209</id>
        <doi>10.14569/IJACSA.2012.031209</doi>
        <lastModDate>2012-12-31T09:34:59.1870000+00:00</lastModDate>
        
        <creator>Tolga KABACA</creator>
        
        <subject>Computer Assisted Modeling; Electronic Spreadsheet; Mathematical Model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>This study aimed to present an authentic way of showing how computer-assisted mathematical modeling of a real-world situation helps to understand the mystery of that situation. To achieve this aim, a group of pre-service mathematics teachers was asked to think about how the trip computer of a car calculates values such as instant fuel consumption, average fuel consumption, and the distance that can be covered with the remaining fuel. The theoretical discussion of the mathematical structure was conducted as a semi-structured interview. The theoretical outcomes were then used to create the model in the electronic spreadsheet MS Excel. At the end of the study, it was observed that students easily understood the behavior of the trip computer with the help of the mathematical background of the spreadsheet model, and they also became aware of the role of mathematics in a real sense.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_9-Learning_from_Expressive_Modeling_Task.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between MPPT P&amp;O and MPPT Fuzzy Controls in Optimizing the Photovoltaic Generator </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031208</link>
        <id>10.14569/IJACSA.2012.031208</id>
        <doi>10.14569/IJACSA.2012.031208</doi>
        <lastModDate>2012-12-31T09:34:57.0800000+00:00</lastModDate>
        
        <creator>Messaouda AZZOUZI</creator>
        
        <subject>solar energy; photovoltaic; PV; MPPT; P&amp;O; Boost converter; fuzzy; optimization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>This paper presents a comparative study between two control methods for optimizing the efficiency of a solar generator. The simulation was built using Matlab/Simulink to apply the MPPT P&amp;O and MPPT Fuzzy controls to this system, which is supplied through a Boost converter. Results are illustrated under standard and then variable weather conditions, such as illumination and temperature. The voltage and power of the panel and the battery, as well as the duty cycle, are presented and analyzed for the two control methods. The obtained results show the effectiveness of the MPPT Fuzzy controller in optimizing the PV generator and can encourage the use of this control strategy on solar panels in real time to optimize their yield.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_8-Comparaison_between_MPPT_PO_and_MPPT_Fuzzy_Controls_in_Optimizing_the_Photovoltaic_Generator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Facebook as a tool to Enhance Team Based Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031207</link>
        <id>10.14569/IJACSA.2012.031207</id>
        <doi>10.14569/IJACSA.2012.031207</doi>
        <lastModDate>2012-12-31T09:34:54.9900000+00:00</lastModDate>
        
        <creator>Sami M. Alhomod</creator>
        
        <creator>Mohd Mudasir Shafi</creator>
        
        <subject>Social Networking; Facebook; Team Based Learning; Communication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>A growing number of educators are using social networking sites (SNS) to communicate with their students. Facebook is one such example, widely used by students and educators. Facebook has recently been adopted by many educational institutions, but mostly to provide information to a general audience. Not much study has been done to propose Facebook as an educational tool in a classroom scenario. The aim of this paper is to propose the idea of using Facebook in a team based learning (TBL) scenario. The paper demonstrates the use of Facebook at each level of TBL and shows how Facebook can be used by students and teachers to communicate with each other in a TBL system. The paper also explains teacher-team and teacher-student communication via Facebook.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_7-Facebook_as_a_tool_to_Enhance_Team_Based_Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simple Method for Ontology Automatic Extraction from Documents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031206</link>
        <id>10.14569/IJACSA.2012.031206</id>
        <doi>10.14569/IJACSA.2012.031206</doi>
        <lastModDate>2012-12-31T09:34:52.9000000+00:00</lastModDate>
        
        <creator>Andreia Dal Ponte Novelli</creator>
        
        <creator>Jos&#233; Maria Parente de Oliveira</creator>
        
        <subject>document ontology; ontology creation; ontology extraction; concept representation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>There are many situations where it is necessary to represent and analyze the concepts that describe a document or a collection of documents. One such situation is information retrieval, which is becoming more complex due to the growing number and variety of document types. One way to represent the concepts is through a formal structure using ontologies. Thus, this article presents a fast and simple method for the automatic extraction of ontologies from a document or a collection of documents that is independent of the document type and combines several theories and techniques, such as latent semantics for the extraction of initial concepts, and Wordnet and similarity measures to obtain the correlation between the concepts.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_6-Simple_Method_for_Ontology_Automatic_Extraction_from_Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Low cost approach to real-time vehicle to vehicle communication using parallel CPU and GPU processing </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031205</link>
        <id>10.14569/IJACSA.2012.031205</id>
        <doi>10.14569/IJACSA.2012.031205</doi>
        <lastModDate>2012-12-31T09:34:50.8100000+00:00</lastModDate>
        
        <creator>GOH CHIA CHIEH</creator>
        
        <creator>DINO ISA</creator>
        
        <subject>CUDA; Parallel processing; Vehicle to Vehicle Communication; WLAN; ZigBee.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>This paper proposes a novel Vehicle to Vehicle (V2V) communication system for collision avoidance which merges four different wireless devices (GPS, Wi-Fi, ZigBee&#174; and 3G) with a low power embedded Single Board Computer (SBC) in order to increase processing speed while maintaining a low cost. The three major technical challenges with such combinations are the limited system bandwidth, the high memory requirement and the slow response time during data processing when assessing various collision avoidance situations. Collision avoidance data processing includes processing data for vehicles on expressways, roads, tunnels, in traffic jams, and in indoor V2V communication such as required in car parks. Effective methods are proposed to address these technical challenges through parallel Central Processing Unit (CPU) and Graphics Processing Unit (GPU) processing. With this, together with parallel V2V trilateration and parallel bandwidth optimization, multi-dimensional real-time complex V2V data streaming can be attained in less than a second. The test results show at least a 4 to 10 times improvement in processing speed with parallel CPU and GPU processing used in V2V communication, depending on road safety conditions.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_5-Low_cost_approach_to_real_time_vehicle_to_vehicle_communication_using_parallel_CPU_and_GPU_processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern Recognition-Based Environment Identification for Robust Wireless Devices Positioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031204</link>
        <id>10.14569/IJACSA.2012.031204</id>
        <doi>10.14569/IJACSA.2012.031204</doi>
        <lastModDate>2012-12-31T09:34:48.7030000+00:00</lastModDate>
        
        <creator>Nesreen I. Ziedan</creator>
        
        <subject>GPS; GNSS; machine learning; pattern recognition; PCA; PNN; multipath.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>There has been a continuous increase in the demands for Global Navigation Satellite System (GNSS) receivers in a wide range of applications. More and more wireless and mobile devices are equipped with built-in GNSS receivers; their users’ mobility behavior can result in challenging signal conditions that have detrimental effects on the receivers’ tracking and positioning accuracy. A major error source is the multipath signals, which are signals that are reflected off different surfaces and propagated to the receiver&#39;s antenna via different paths. Analysis of the received multipath signals indicated that their characteristics depend on the surrounding environment. This paper introduces a machine-learning pattern recognition algorithm that utilizes the aforementioned dependency to classify the multipath signals’ characteristics and identify the surrounding environment. The identified environment is utilized in a novel adaptive tracking technique that enables a GNSS receiver to change its tracking strategy to best suit the current signal condition. This will lead to a robust positioning under challenging signal conditions. The algorithm is verified using real and simulated Global Positioning System (GPS) signals with accurate multipath models.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_4-Pattern_Recognition-Based_Environment_Identification_for_Robust_Wireless_Devices_Positioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Statistical Analysis of the Demographic Ageing Process in the EU Member States, Former Communist Countries</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031203</link>
        <id>10.14569/IJACSA.2012.031203</id>
        <doi>10.14569/IJACSA.2012.031203</doi>
        <lastModDate>2012-12-31T09:34:46.6300000+00:00</lastModDate>
        
        <creator>Dorel Savulea</creator>
        
        <creator>Nicolae Constantinescu</creator>
        
        <subject>statistical analysis; demographic analysis; statistical correlation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>The aim of this paper is to make an analysis and a comparative study of the demographic ageing process in the former communist countries which are currently EU member states. Taking into account the complexity of the phenomenon, the study approaches only a part of the indices involved in this process. Based on the structure of the population by age group and sex, the population pyramid is built, and in this context we carried out various analyses and comparisons among countries. We further determine demographic factors in the ageing process of the population, such as the evolution of the young population, the mortality rate and the life expectancy, and the changes in the population structure are pointed out with the help of the demographic dependency rate. A series of statistical correlations and predictions have been made, which allowed for a more concrete explanation of the evolution of the demographic ageing phenomenon.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_3-Statistical_Analysis_of_the_Demographic_Ageing_Process_in_the_EU_Member_States_Former_Communist_Countries.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Ray Tracing Based Sensitivity Analysis of the Atmospheric and the Ocean Parameters on Top of the Atmosphere Radiance </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031202</link>
        <id>10.14569/IJACSA.2012.031202</id>
        <doi>10.14569/IJACSA.2012.031202</doi>
        <lastModDate>2012-12-31T09:34:44.5370000+00:00</lastModDate>
        
        <creator>Kohei Arai </creator>
        
        <subject>Monte Carlo Ray Tracing; radiative transfer; scattering and absorption; geophysical parameters (the atmosphere and the ocean).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>A Monte Carlo Ray Tracing (MCRT) based sensitivity analysis of the geophysical parameters (the atmosphere and the ocean) on Top of the Atmosphere (TOA) radiance in the visible to near infrared wavelength regions is conducted. As a result, it is confirmed that the influence due to the atmosphere is greater than that of the ocean. Scattering and absorption due to aerosol particles and molecules in the atmosphere are the major contributions, followed by water vapor and ozone, while scattering due to suspended solids is the dominant contribution among the ocean parameters.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_2-Monte_Carlo_Ray_Tracing_Based_Sensitivity_Analysis_of_the_Atmospheric_and_the_Ocean_Parameters_on_Top_of_the_Atmosphere_Radiance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new approach towards the self-adaptability of Service-Oriented Architectures to the context based on workflow </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031201</link>
        <id>10.14569/IJACSA.2012.031201</id>
        <doi>10.14569/IJACSA.2012.031201</doi>
        <lastModDate>2012-12-31T09:34:42.4000000+00:00</lastModDate>
        
        <creator>Fa&#238;&#231;al Felhi</creator>
        
        <creator>Jalel Akaichi</creator>
        
        <subject>SOA; Webservices; self-adaptability; ubiquitous system; workflow.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(12), 2012</description>
        <description>Distributed information systems need to be autonomous, heterogeneous and adaptable to the context. This is why they resort to Web services based on SOA. Building on the advanced technology of SOA, these systems can evolve automatically in a dynamic environment, in a well-defined context and according to events such as time, temperature, location and authentication. This is what we call self-adaptability to the context. In this paper, we are interested in addressing the different needs of this criterion, and we propose a new approach towards the self-adaptability of SOA to the context based on workflow, showing the feasibility of this approach by integrating the workflow under a platform and testing this integration through a case study.</description>
        <description>http://thesai.org/Downloads/Volume3No12/Paper_1-A_new_approach_towards_the_self-adaptability_of_Service-Oriented_Architectures_to_the_context_based_on_workflow.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hand Gesture recognition and classification by Discriminant and Principal Component Analysis using Machine Learning techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010908</link>
        <id>10.14569/IJARAI.2012.010908</id>
        <doi>10.14569/IJARAI.2012.010908</doi>
        <lastModDate>2012-12-10T11:46:57.7400000+00:00</lastModDate>
        
        <creator>Sauvik Das Gupta</creator>
        
        <creator>Souvik Kundu</creator>
        
        <creator>Rick Pandey</creator>
        
        <creator>Rahul Ghosh</creator>
        
        <creator>Rajesh Bag</creator>
        
        <creator>Abhishek Mallik</creator>
        
        <subject>Surface EMG; Bio-medical; Principal Component Analysis; Discriminant Analysis</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>This paper deals with the recognition of different hand gestures through machine learning approaches and principal component analysis. A bio-medical signal amplifier is built after a software simulation with the help of NI Multisim. At first, a couple of surface electrodes are used to obtain the Electro-Myo-Gram (EMG) signals from the hands. These signals from the surface electrodes are amplified with the help of the bio-medical signal amplifier, which is basically an instrumentation amplifier built with the IC AD620. The output from the instrumentation amplifier is then filtered with a suitable band-pass filter, whose output is fed to an Analog to Digital Converter (ADC), in this case the NI USB 6008. The data from the ADC is then fed into a suitable algorithm which helps in the recognition of the different hand gestures. The algorithm analysis is done in MATLAB. The results shown in this paper demonstrate a close to one hundred percent (100%) classification result for three given hand gestures.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_8-Hand_Gesture_recognition_and_classification_by_Discriminantand_Principal_Component_Analysis_using_Machine_Learning_techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010907</link>
        <id>10.14569/IJARAI.2012.010907</id>
        <doi>10.14569/IJARAI.2012.010907</doi>
        <lastModDate>2012-12-10T11:46:55.6670000+00:00</lastModDate>
        
        <creator>Raj Kumar</creator>
        
        <creator>Ashwini Kumar Srivastava</creator>
        
        <creator>Vijay Kumar</creator>
        
        <subject>Probability density function; Bayes Estimation; Hazard Function; MLE; OpenBUGS; Uniform Priors.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>In this paper, we illustrate the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood based inferential procedures: classical as well as Bayesian. The quasi Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and the output analysis of MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_7-Analysis_of_Gumbel_Model_for_Software_Reliability_Using_Bayesian_Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Face Identification with Facial Action Coding System: FACS Based on Eigen Value Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010906</link>
        <id>10.14569/IJARAI.2012.010906</id>
        <doi>10.14569/IJARAI.2012.010906</doi>
        <lastModDate>2012-12-10T11:46:53.5900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>face recognition; action unit; face identification</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>A method for face identification based on eigen value decomposition, together with tracing trajectories in the eigen space after the eigen value decomposition, is proposed. The proposed method accounts for person-to-person differences in faces under different emotions. By using the well known action unit approach, the proposed method admits faces in different emotions. Experimental results show that recognition performance depends on the number of targeted people. The face identification rate is 80% for four targeted people, while 100% is achieved when the number of targeted people is two.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_6-Method_for_Face_Identification_with_Facial_Action_Coding_System_FACS_Based_on_Eigen_Value_Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measures for Testing the Reactivity Property of a Software Agent</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010905</link>
        <id>10.14569/IJARAI.2012.010905</id>
        <doi>10.14569/IJARAI.2012.010905</doi>
        <lastModDate>2012-12-10T11:46:51.5170000+00:00</lastModDate>
        
        <creator>N. Sivakumar</creator>
        
        <creator>K.Vivekanandan</creator>
        
        <subject>Software Agent; Multi-agent system; Software Testing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>Agent technology is meant for developing complex distributed applications. Software agents are the key building blocks of a Multi-Agent System (MAS). Software agents are unique in nature as they possess certain distinctive properties such as pro-activity, reactivity, social ability and mobility. An agent’s behavior might differ for the same input in different cases, and thus testing an agent and evaluating its quality is a tedious task. Measures to evaluate the quality characteristics of an agent and to evaluate agent behavior are lacking. The main objective of this paper is to come out with a set of measures to evaluate an agent’s characteristics, in particular the reactivity property, so that the quality of an agent can be determined.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_5-Measures_for_Testing_the_Reactivity_Property_of_a_Software_Agent.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LSVF: a New Search Heuristic to Reduce the Backtracking Calls for Solving Constraint Satisfaction Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010904</link>
        <id>10.14569/IJARAI.2012.010904</id>
        <doi>10.14569/IJARAI.2012.010904</doi>
        <lastModDate>2012-12-10T11:46:49.4270000+00:00</lastModDate>
        
        <creator>Cleyton Rodrigues</creator>
        
        <creator>Ryan Ribeiro de Azevedo</creator>
        
        <creator>Fred Freitas</creator>
        
        <creator>Eric Dantas</creator>
        
        <subject>Backtracking Call, Constraint Satisfaction Problems, Heuristic Search.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>Many researchers in Artificial Intelligence seek new algorithms to reduce the amount of memory and time consumed by general searches in Constraint Satisfaction Problems. These improvements are accomplished by the use of heuristics which either prune useless search tree branches or indicate the path to reach the (optimal) solution faster than the blind version of the search. Many heuristics have been proposed in the literature, like the Least Constraining Value (LCV). In this paper we propose a new pre-processing search heuristic to reduce the number of backtracking calls, namely the Least Suggested Value First (LSVF): a solution for cases where the LCV alone cannot measure how constrained a value is. We present a pedagogical example as well as preliminary results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_4-LSVF_a_New_Search_Heuristic_to_Reduce_the_Backtracking_Calls_for_Solving_Constraint__Satisfaction_Problems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for 3D Object Reconstruction Using Several Portion of 2D Images from the Different Aspects Acquired with Image Scopes Included in the Fiber Retractor</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010903</link>
        <id>10.14569/IJARAI.2012.010903</id>
        <doi>10.14569/IJARAI.2012.010903</doi>
        <lastModDate>2012-12-10T11:46:47.3500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>3D image reconstruction; fiber retractor; image scope</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>A method for 3D object reconstruction using several portions of 2D images from different aspects, acquired with image scopes included in the fiber retractor, is proposed. Experimental results show great potential for reconstructing a 3D object of acceptable quality on the computer from several 2D images viewed from different aspects.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_3-Method_for_3D_Object_Reconstruction_Using_Several_Portion_of_2D_Images_from_the_Different_Aspects_Acquired.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cumulative Multi-Niching Genetic Algorithm for Multimodal Function Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010902</link>
        <id>10.14569/IJARAI.2012.010902</id>
        <doi>10.14569/IJARAI.2012.010902</doi>
        <lastModDate>2012-12-10T11:46:45.2600000+00:00</lastModDate>
        
        <creator>Matthew Hall</creator>
        
        <subject>genetic algorithm; cumulative; memory; multi-niching; multi-modal; optimization; metaheuristic.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>This paper presents a cumulative multi-niching genetic algorithm (CMN GA), designed to expedite optimization problems that have computationally-expensive multimodal objective functions.  By never discarding individuals from the population, the CMN GA makes use of the information from every objective function evaluation as it explores the design space.  A fitness-related population density control over the design space reduces unnecessary objective function evaluations.  The algorithm’s novel arrangement of genetic operations provides fast and robust convergence to multiple local optima.  Benchmark tests alongside three other multi-niching algorithms show that the CMN GA has a greater convergence ability and provides an order-of-magnitude reduction in the number of objective function evaluations required to achieve a given level of convergence.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_2-A_Cumulative_Multi-Niching_Genetic_Algorithm_for_Multimodal_Function_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Optimization of Granular Networks Based on PSO and Two-Sided Gaussian Contexts   </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010901</link>
        <id>10.14569/IJARAI.2012.010901</id>
        <doi>10.14569/IJARAI.2012.010901</doi>
        <lastModDate>2012-12-10T11:46:43.1230000+00:00</lastModDate>
        
        <creator>Keun Chang Kwak</creator>
        
        <subject>granular networks; particle swarm optimization; linguistic model; two-sided Gaussian contexts</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(9), 2012</description>
        <description>This paper is concerned with an optimization of GN (Granular Networks) based on PSO (Particle Swarm Optimization) and information granulation. The GN is designed by the linguistic model using a context-based fuzzy c-means clustering algorithm that establishes the relationship between fuzzy sets defined in the input and output spaces. The contexts used in this paper are based on two-sided Gaussian membership functions. The main goal of the PSO-based optimization is to find the number of clusters obtained in each context and the weighting factor. Finally, we apply the approach to the coagulant dosing process in a water purification plant to evaluate the prediction performance and compare the proposed approach with other previous methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No9/Paper_1-An_optimization_of_Granular_Networks_based_on_PSO_and_two-sided_Gaussian_contexts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Database Creation for Storing Electronic Documents and Research of the Staff</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031133</link>
        <id>10.14569/IJACSA.2012.031133</id>
        <doi>10.14569/IJACSA.2012.031133</doi>
        <lastModDate>2012-12-01T13:23:21.4270000+00:00</lastModDate>
        
        <creator>Pornpapatsorn Princhankol</creator>
        
        <creator>Siriwan Phacharakreangchai</creator>
        
        <subject>database creation; electronic documents; research.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The research study aims at creating the database for storing electronic documents and research of the staff in the Department of Educational Communications and Technology, evaluating its quality, and measuring user satisfaction with the database. The sample subjects are 14 instructors and officers in the Department of Educational Communications and Technology, King Mongkut’s University of Technology Thonburi, in the second term of academic year 2012. The research tool is the database for storing electronic documents and research of the staff in the Department of Educational Communications and Technology. According to the evaluation by 3 experts in the area of Education Quality Assurance, the mean score and standard deviation are 4.51 and 0.38, respectively, implying that the assurance quality of the database is very good. The media quality evaluation by 3 media specialists shows a mean score of 4.58 and a standard deviation of 0.46, indicating a very good quality of media. In terms of satisfaction with the database, the results obtained using questionnaires show high satisfaction scores: the mean score is 4.15 and the standard deviation is 0.55, so it can be concluded that the database can be used effectively in the department.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_33-A_Database_Creation_for_Storing_Electronic_Documents_and_Research_of_the_Staff.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Analysis of the Components of Project-Based Learning on Social Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031132</link>
        <id>10.14569/IJACSA.2012.031132</id>
        <doi>10.14569/IJACSA.2012.031132</doi>
        <lastModDate>2012-12-01T13:23:19.3500000+00:00</lastModDate>
        
        <creator>Kuntida Thamwipat</creator>
        
        <creator>Napassawan Yookong</creator>
        
        <subject>analysis of the components; project-based learning; social network; Presentation Skill course.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This research aims to analyze the components of project-based learning on a social network, in a case study of a presentation skill course. The study was carried out with 98 representatives of the 2nd and 3rd year students of the Educational Communications and Technology Department, in the 2nd semester of academic year 2554, who were studying ETM 205 Presentation Skill. A 5-point rating scale questionnaire with a reliability of 0.94 was used as the research tool for data collection, and the mean, standard deviation (S.D.), factor analysis, and the varimax method were used in data analysis. The results can be concluded as follows. The ten components of project-based learning on a social network in the Presentation Skill course case study were divided into 3 stages following the Ministry of Education’s framework: 1) The Pre-Project Stage consists of the 3rd component: the introduction of presentation skill and sharing ideas through the social network; the 5th component: learner preparation in the pre-project stage; the 9th component: writing the schemes and skills development training; and the 10th component: presentation skill training in groups and individually. 2) The While/During-Project Stage consists of the 2nd component: roles, responsibilities, and communication during the project stage; the 6th component: readiness of individuals or groups performing duties who communicate through the social network; and the 8th component: guidance and agreements that learners submit voluntarily. 3) The Post-Project Stage consists of the 1st component: evaluations, document management system, and presentation; the 4th component: instructor evaluation, feedback, and congratulations to learners who complete their project successfully; and the 7th component: multi-channel evaluation by a regression (forecast) equation concerning the analysis of the components of project learning management on the social network.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_32-The_Analysis_of_the_Components_of_Project-Based_Learning_on_Social_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cashless Society: An Implication To Economic Growth &amp; Development In Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031131</link>
        <id>10.14569/IJACSA.2012.031131</id>
        <doi>10.14569/IJACSA.2012.031131</doi>
        <lastModDate>2012-12-01T13:23:17.2470000+00:00</lastModDate>
        
        <creator>N A YEKINI</creator>
        
        <creator>I. K. OYEYINKA</creator>
        
        <creator>O.O. AYANNUGA</creator>
        
        <creator>O. N. LAWAL</creator>
        
        <creator>A. K. AKINWOLE</creator>
        
        <subject>Cashless Society; Corruption; Development; Economic growth;  Electronic Card.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The Central Bank of Nigeria (CBN) announced a new cash policy (the Cashless Society), which began on the 1st of January 2012 in Lagos state and will be effective nationwide from June 1st 2012. The various technologies and issues involved in a cashless society are discussed. This research work sampled the opinions of the general public on issues surrounding the Cashless Society, using a structured questionnaire to gather information from people. The data obtained were tabulated and the results were analyzed using percentages. The analyzed results were presented graphically, and it was discovered that a large percentage of the sampled population is of the opinion that the Cashless Society will offer a solution to the high risk of cash-related crimes such as illegal immigration, financing of terrorism, the illegal drug trade, human trafficking, corruption, etc. It will enhance the effectiveness of monetary policy in managing inflation in Nigeria, thereby improving economic growth and development. Various issues such as the benefits and social implications of the Cashless Society are discussed. We propose a model cashless system for Nigeria that will be more convenient for people to appreciate, embrace, and use.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_31-Cashless_Society_An_Implication_To_Economic_Growth_Development_In_Nigeria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Modified Security Framework for Nigeria Cashless E-payment System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031130</link>
        <id>10.14569/IJACSA.2012.031130</id>
        <doi>10.14569/IJACSA.2012.031130</doi>
        <lastModDate>2012-12-01T13:23:15.1530000+00:00</lastModDate>
        
        <creator>Fidelis C Obodoeze</creator>
        
        <creator>Dr. Francis A. Okoye</creator>
        
        <creator>Samuel C. Asogwa</creator>
        
        <creator>Frank E. Ozioko</creator>
        
        <creator>Calister N. Mba</creator>
        
        <subject>Cashless economy; electronic payment security; CBN; Nigeria; PoS; ATM; Bank Server; PCI DSS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>In January 2012, Nigeria’s apex bank, the Central Bank of Nigeria (CBN), rolled out guidelines for the transition of Nigeria’s mainly cash-based economy and payment system to a cashless, electronic payment (e-payment) system, ending over 50 years of a mainly cash-operated economy and payment system. This announcement elicited mixed reactions: excitement, due to the enormous benefits this transition would bring to the Nigerian economy, and at the same time panic, due to the unpreparedness of the economy to transit successfully to electronic payment in a system hitherto filled with booby traps of security challenges. Ten months after the introduction of the policy, only a handful of the major stakeholders are fully compliant, mainly because of the complexity and the prohibitively high cost of implementing the CBN-adopted security framework, the Payment Card Industry Data Security Standard (PCI DSS). This paper surveys the security challenges facing the full implementation of the cashless e-payment policy of Nigeria and, after studying the loopholes in the current Nigerian e-payment system models, introduces an enhanced modified security framework for Nigeria’s cashless economy that may be easier and cheaper for the majority of the stakeholders to implement.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_30-Enhanced_Modified_Security_Framework_for_Nigeria_Cashless_E-payment_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modern and Digitalized USB Device With Extendable Memory Capacity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031129</link>
        <id>10.14569/IJACSA.2012.031129</id>
        <doi>10.14569/IJACSA.2012.031129</doi>
        <lastModDate>2012-12-01T13:23:13.0800000+00:00</lastModDate>
        
        <creator>J Nandini Meeraa</creator>
        
        <creator>S. Devi Abirami</creator>
        
        <creator>N. Indhuja</creator>
        
        <creator>R. Aravind</creator>
        
        <creator>C. Chithiraikkayalvizhi</creator>
        
        <creator>K. Rathina Kumar</creator>
        
        <subject>extendable memory capacity; in-built USB slots; computerized pen drive; operating system; processor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This paper proposes an advanced technology which is completely innovative and creative. The motivation for this proposal lies in the idea of giving a pen drive an extendable memory capacity with a modern and digitalized look. The device can operate without the use of a computer system or a mobile phone. The computerized pen drive has a display unit to show its contents and in-built USB slots to transfer data to other pen drives directly, without the use of a computer system. The addition of extendable memory slots to the computerized pen drive makes it a modern, digitalized, and extendable USB device. Implementing an operating system and a processor in the pen drive are the main challenges in the proposed system, whose design makes the device cost-efficient and user-friendly. The design process and the hardware and software structures of this modern and digitalized USB device with extendable memory capacity are explained in detail in this paper.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_29-Modern_and_Digitalized_USB_Device_With_Extendable_Memory_Capacity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Pheromone Based Model for Ant Based Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031128</link>
        <id>10.14569/IJACSA.2012.031128</id>
        <doi>10.14569/IJACSA.2012.031128</doi>
        <lastModDate>2012-12-01T13:23:10.9900000+00:00</lastModDate>
        
        <creator>Saroj Bala</creator>
        
        <creator>S. I. Ahson</creator>
        
        <creator>R. P. Agarwal</creator>
        
        <subject>Ant colony optimization; Clustering; Stigmergy; Pheromone.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Swarm intelligence is the collective effort of simple agents working locally but producing remarkable patterns. Labor division, decentralized control, stigmergy, and self-organization are the major components of swarm intelligence. Ant-based clustering is inspired by brood sorting in ant colonies, an example of decentralized and self-organized work. Stigmergy in ant colonies operates via a chemical pheromone. This paper proposes a pheromone-based ant clustering algorithm. In the proposed method, ant agents use pheromone to communicate the place to form clusters, rather than the path as real ants do. The pheromone concentration helps to decide the picking and dropping of objects with fewer parameters and calculations. The simulations have shown good and dense clustering results.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_28-A_Pheromone_Based_Model_for_Ant_Based_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Minimization of Call Blocking Probability using Mobile Node velocity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031127</link>
        <id>10.14569/IJACSA.2012.031127</id>
        <doi>10.14569/IJACSA.2012.031127</doi>
        <lastModDate>2012-12-01T13:23:08.4000000+00:00</lastModDate>
        
        <creator>Suman Kumar Sikdar</creator>
        
        <creator>Uttam Kumar Kundu</creator>
        
        <creator>Debabrata Sarddar</creator>
        
        <subject>Next Generation Wireless Systems (NGWS); Handoff; BS (Base Station); MN (Mobile Node); RSS (Received signal strength); IEEE802.11</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Due to the rapid growth of IEEE 802.11 based Wireless Local Area Networks (WLANs), handoff has become a burning issue. A mobile node (MN) requires a handoff when it travels out of the coverage area of its current access point (AP) and tries to associate with another AP. However, handoff delays present a serious barrier to making such services available on mobile platforms. Throughout the last few years there has been plenty of research aimed at reducing the handoff delay incurred at the various levels of wireless communication. In this article, we propose a received signal strength measurement based handoff technique to improve handoff decisions. By calculating the speed of the MN and using signaling delay information, we try to choose the right handoff initiation time. Our theoretical analysis and simulation results show that by taking the proper handoff decision we can effectively reduce the false handoff initiation probability and the unnecessary traffic load that causes packet loss and call blocking.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_27-Minimization_of_Call_Blocking_Probability_using_Mobile_Node_velocity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multilayer Neural Networks and Nearest Neighbor Classifier Performances for Image Annotation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031126</link>
        <id>10.14569/IJACSA.2012.031126</id>
        <doi>10.14569/IJACSA.2012.031126</doi>
        <lastModDate>2012-12-01T13:23:06.3100000+00:00</lastModDate>
        
        <creator>Mustapha OUJAOURA</creator>
        
        <creator>Brahim MINAOUI</creator>
        
        <creator>Mohammed FAKIR</creator>
        
        <subject>Image annotation; region growing segmentation; multilayer neural network classifier; nearest neighbour classifier; Zernike moments; Legendre moments; Hu moments; ETH-80 database.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The explosive growth of image data has led to the research and development of image content searching and indexing systems. Image annotation systems aim at automatically annotating an image with controlled keywords that can be used for indexing and retrieval of images. This paper presents a comparative evaluation of an image content annotation system using multilayer neural networks and the nearest neighbour classifier. Region growing segmentation is used to separate objects, while Hu moments, Legendre moments, and Zernike moments are used as feature descriptors for image content characterization and annotation. The ETH-80 image database is used in the experiments. The best annotation rate is achieved by using Legendre moments for feature extraction and the multilayer neural network as the classifier.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_26-Multilayer_Neural_Networks_and_Nearest_Neighbor_Classifier_Performances_for_Image_Annotation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Web Page Prediction Using Default Rule Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031125</link>
        <id>10.14569/IJACSA.2012.031125</id>
        <doi>10.14569/IJACSA.2012.031125</doi>
        <lastModDate>2012-12-01T13:23:04.2200000+00:00</lastModDate>
        
        <creator>Thanakorn Pamutha</creator>
        
        <creator>Chom Kimpan</creator>
        
        <creator>Siriporn Chimplee</creator>
        
        <creator>Parinya Sanguansat</creator>
        
        <subject>web mining; web usage mining; user navigation session; Markov model; association rules; Web page prediction; rule-selection methods.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Mining user patterns from web log files can provide significant and useful knowledge. A large amount of research has been devoted to correctly predicting the pages a user will most likely request next. Markov models are the most commonly used approaches for this type of web access prediction. Web page prediction requires the development of models that can predict a user’s next access to a web server. Many researchers have proposed approaches that integrate Markov models, association rules, and clustering for web site access prediction. Low-order Markov models provide higher coverage, but they are couched in ambiguous rules. In this paper, we introduce the use of a default rule to resolve ambiguous web access predictions. This method can provide better prediction than the individual traditional models. The results show that the default rule increases the accuracy and model-accuracy of web page access predictions. It also applies to association rules and the other combined models.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_25-Improving_Web_Page_Prediction_Using_Default_Rule_Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An agent based approach for simulating complex systems with spatial dynamics application in the land use planning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031124</link>
        <id>10.14569/IJACSA.2012.031124</id>
        <doi>10.14569/IJACSA.2012.031124</doi>
        <lastModDate>2012-12-01T13:23:02.1430000+00:00</lastModDate>
        
        <creator>Fatimazahra BARRAMOU</creator>
        
        <creator>Malika ADDOU</creator>
        
        <subject>Multi-agent system; Geographic information system; Modeling; Simulation; Complex system;  Land use planning.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>In this research, a new agent-based approach for simulating complex systems with spatial dynamics is presented. We propose an architecture based on a coupling between two systems: multi-agent systems and geographic information systems. We also propose a generic model of agent-oriented simulation, which we apply to the field of land use planning. Indeed, simulating the evolution of the urban system is key to helping decision makers anticipate the needs of the city in terms of installing new equipment and opening new urbanization areas to accommodate the new population.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_24-An_agent_based_approach_for_simulating_complex_systems_with_spatial_dynamics_application_in_the_land_use_planning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cost Analysis of Algorithm Based Billboard Manager Based Handover Method in LEO satellite Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031123</link>
        <id>10.14569/IJACSA.2012.031123</id>
        <doi>10.14569/IJACSA.2012.031123</doi>
        <lastModDate>2012-12-01T13:23:00.0530000+00:00</lastModDate>
        
        <creator>Suman Kumar Sikdar</creator>
        
        <creator>Soumaya Das</creator>
        
        <creator>Debabrata Sarddar</creator>
        
        <subject>Handover latency; LEO satellite; Mobile Node (MN); Billboard Manager (BM).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Nowadays LEO satellites play an important role in the global communication system. They have advantages over GEO and MEO satellites, such as low power requirements, low end-to-end delay, and more efficient frequency spectrum utilization between satellites and spot beams. So in the future they can be used as a replacement for modern terrestrial wireless networks. However, handovers occur more frequently due to the high speed of LEO satellites. Different protocols have been proposed for successful handover, among which BMBHO is the most efficient, but it had a problem during the selection of the mobile node for handover. In our previous work we proposed an algorithm so that the connection can be established easily with the appropriate satellite. In this paper we evaluate the mobility management cost of the algorithm based Billboard Manager Based Handover method (BMBHO). Simulation results show that the cost is lower than the Mobile IP cost of SeaHO-LEO and PatHO-LEO.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_23-Cost_Analysis_of_Algorithm_Based_Billboard_Manger_Based_Handover_Method_in_LEO_satellite_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Randomized Fully Polynomial-time Approximation Scheme for Weighted Perfect Matching in the Plane</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031122</link>
        <id>10.14569/IJACSA.2012.031122</id>
        <doi>10.14569/IJACSA.2012.031122</doi>
        <lastModDate>2012-12-01T13:22:57.9630000+00:00</lastModDate>
        
        <creator>Yasser M. Abd El-Latif</creator>
        
        <creator>Salwa M. Ali</creator>
        
        <creator>Hanaa A.E. Essa</creator>
        
        <creator>Soheir M. Khamis</creator>
        
        <subject>Perfect matching; approximation algorithm; Monte-Carlo technique; randomized fully polynomial-time approximation scheme; randomized algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>In the approximate Euclidean min-weight perfect matching problem, a set of n points in the plane and a real number ε &gt; 0 are given. A solution of this problem is a partition of the points into n/2 pairs such that the sum of the distances between the paired points is at most (1 + ε) times that of the optimal solution.
In this paper, the authors give a randomized algorithm which follows a Monte-Carlo method. This algorithm is a randomized fully polynomial-time approximation scheme for the given problem. Moreover, the suggested algorithm tackles the matching problem in both the Euclidean non-bipartite and bipartite cases.
The presented algorithm proceeds as follows. For a bounded number of repetitions, we choose a point from the input set to build a suitable pair satisfying the suggested condition on the distance. If this condition is achieved, then the points of the constructed pair are removed from the input set and the pair is placed in the output set of the solution. Then, a point and its nearest point among the remaining points are chosen to construct a pair, which is placed in the output set; the two points of the constructed pair are removed from the input set, and this process is repeated until the input set becomes empty. Obviously, this method is very simple. Furthermore, our algorithm can be applied without any modification to complete weighted graphs and complete weighted bipartite graphs with an even number of vertices.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_22-A_Randomized_Fully_Polynomial-time_Approximation_Scheme_for_Weighted_Perfect_Matching_in_the_Plane.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Continuous Bangla Speech Segmentation using Short-term Speech Features Extraction Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031121</link>
        <id>10.14569/IJACSA.2012.031121</id>
        <doi>10.14569/IJACSA.2012.031121</doi>
        <lastModDate>2012-12-01T13:22:55.8900000+00:00</lastModDate>
        
        <creator>Md Mijanur Rahman</creator>
        
        <creator>Md. Al-Amin Bhuiyan</creator>
        
        <subject>Speech Segmentation; Features Extraction; Short-time Energy; Spectral Centroid; Dynamic Thresholding.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This paper presents simple and novel feature extraction approaches for segmenting continuous Bangla speech sentences into words/sub-words. These methods are based on two kinds of simple speech features, namely time-domain and frequency-domain features. The time-domain features, such as short-time signal energy and short-time average zero crossing rate, and the frequency-domain features, such as spectral centroid and spectral flux, are extracted in this research work. After the feature sequences are extracted, a simple dynamic thresholding criterion is applied in order to detect the word boundaries and label the entire speech sentence as a sequence of words/sub-words. All the algorithms used in this research are implemented in Matlab, and the implemented automatic speech segmentation system achieved a segmentation accuracy of 96%.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_21-Continuous_Bangla_Speech_Segmentation_using_Short-term_Speech_Features_Extraction_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid intelligent system for Sale Forecasting using Delphi and adaptive Fuzzy Back-Propagation Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031120</link>
        <id>10.14569/IJACSA.2012.031120</id>
        <doi>10.14569/IJACSA.2012.031120</doi>
        <lastModDate>2012-12-01T13:22:53.7970000+00:00</lastModDate>
        
        <creator>Attariuas Hicham</creator>
        
        <creator>Bouhorma Mohammed</creator>
        
        <creator>Sofi Anas</creator>
        
        <subject>Hybrid intelligence approach; Delphi Method; Sales forecasting; fuzzy clustering; fuzzy system; back propagation network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Sales forecasting is one of the most crucial issues addressed in business. The control and evaluation of future sales still concern researchers as well as policy makers and company managers. This research proposes an intelligent hybrid sales forecasting system, Delphi-FCBPN, based on the Delphi method, fuzzy clustering, and back-propagation (BP) neural networks with an adaptive learning rate. The proposed model is constructed to integrate expert judgments, via the Delphi method, into the FCBPN model. Winter’s exponential smoothing method is utilized to take the trend effect into consideration. The data for this research come from an industrial company that manufactures packaging. Analysis of the results shows that the proposed model outperforms three other forecasting models on the MAPE and RMSE measures.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_20-Hybrid_intelligent_system_for_Sale_Forecasting_using_Delphi_and_adaptive_Fuzzy_Back-Propagation_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Short Answer Grading Using String Similarity And Corpus-Based Similarity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031119</link>
        <id>10.14569/IJACSA.2012.031119</id>
        <doi>10.14569/IJACSA.2012.031119</doi>
        <lastModDate>2012-12-01T13:22:51.7070000+00:00</lastModDate>
        
        <creator>Wael H Gomaa</creator>
        
        <creator>Aly A. Fahmy</creator>
        
        <subject>Automatic Scoring; Short Answer Grading; Semantic Similarity; String Similarity; Corpus-Based Similarity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Most automatic scoring systems use pattern-based methods that require a lot of hard and tedious work. These systems work in a supervised manner, where predefined patterns and scoring rules are generated. This paper presents a different, unsupervised approach which deals with students’ answers holistically using text-to-text similarity. Different string-based and corpus-based similarity measures were tested separately and then combined to achieve a maximum correlation value of 0.504. This correlation is the best value achieved by an unsupervised Bag of Words (BOW) approach when compared to previous work.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_19-Short_Answer_Grading_Using_String_Similarity_And_Corpus-Based_Similarity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of A high performance low-power consumption discrete time Second order Sigma-Delta modulator used for Analog to Digital Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031118</link>
        <id>10.14569/IJACSA.2012.031118</id>
        <doi>10.14569/IJACSA.2012.031118</doi>
        <lastModDate>2012-12-01T13:22:49.5870000+00:00</lastModDate>
        
        <creator>Radwene LAAJIMI</creator>
        
        <creator>Mohamed MASMOUDI</creator>
        
        <subject>CMOS technology; Analog-to-Digital conversion; Low power electronics; Sigma-Delta modulation; switched-capacitor circuits; transconductance operational amplifier.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This paper presents the design and simulation results of a switched-capacitor discrete-time second-order Sigma-Delta modulator used for a 14-bit Sigma-Delta analog-to-digital converter. The use of an operational amplifier is necessary for low power consumption; it is designed to provide a large bandwidth and moderate DC gain. In 0.35&#181;m CMOS technology, the Sigma-Delta modulator achieves an 86 dB dynamic range and an 85 dB signal-to-noise ratio (SNR) over an 80 kHz signal bandwidth with an oversampling ratio (OSR) of 88, while dissipating 9.8 mW at a &#177;1.5V supply voltage.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_18-Design_of_A_high_performance_low-power_consumption_discrete_time_Second_order_Sigma-Delta_modulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Facial Expression Recognition Based on Hybrid Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031117</link>
        <id>10.14569/IJACSA.2012.031117</id>
        <doi>10.14569/IJACSA.2012.031117</doi>
        <lastModDate>2012-12-01T13:22:47.5100000+00:00</lastModDate>
        
        <creator>Ali K. K. Bermani</creator>
        
        <creator>Atef Z. Ghalwash</creator>
        
        <creator>Aliaa A. A. Youssif</creator>
        
        <subject>Facial expression recognition; geometric and appearance modeling; expression recognition; Gabor wavelet; principal component analysis (PCA); neural network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The topic of automatic recognition of facial expressions attracted many researchers in the late last century and has gained great interest in the past few years. Several techniques have emerged to improve recognition efficiency by addressing problems in face detection and feature extraction for recognizing expressions. This paper proposes an automatic system for facial expression recognition whose feature extraction phase takes a hybrid approach, combining holistic and analytic methods to extract 307 facial expression features (19 geometric features and 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent database.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_17-Automatic_Facial_Expression_Recognition_Based_on_Hybrid_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Three Layer Hierarchical Model for Chord </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031116</link>
        <id>10.14569/IJACSA.2012.031116</id>
        <doi>10.14569/IJACSA.2012.031116</doi>
        <lastModDate>2012-12-01T13:22:45.4200000+00:00</lastModDate>
        
        <creator>Waqas A imtiaz</creator>
        
        <creator>Shimul Shil</creator>
        
        <creator>A.K.M Mahfuzur Rahman</creator>
        
        <subject>Hierarchy; DHT; CHORD; P2P.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The increasing popularity of decentralized Peer-to-Peer (P2P) architecture emphasizes the need for an overlay structure that can provide an efficient content discovery mechanism, accommodate high churn rates, and adapt to failures in the presence of heterogeneity among the peers. Traditional P2P systems incorporate distributed client-server communication, which efficiently finds the peers that store a desired data item, with minimum delay and reduced overhead. However, traditional models cannot solve the problems of scalability and high churn rates. Hierarchical models were introduced to provide better fault isolation, effective bandwidth utilization, superior adaptation to the underlying physical network, and a reduced lookup path length as additional advantages. They are more efficient and easier to manage than traditional P2P networks. This paper discusses a further step in P2P hierarchy via a 3-layer hierarchical model with a distributed database architecture in each layer, each of which is connected through its root. The peers are divided into three categories according to their physical stability and strength: Ultra Super-peers, Super-peers and Ordinary Peers, assigned to the first, second and third levels of the hierarchy respectively. Peers in a group in the lower layer have their own local database, which is held by the associated super-peer in the middle layer, and the database is accessed among the peers through user queries. In our 3-layer hierarchical model for DHT algorithms, we used an advanced Chord algorithm with an optimized finger table that removes redundant entries from the finger table in the upper layer, which reduces lookup latency. Our research finally showed that our model provides faster search, since network lookup latency is decreased by reducing the number of hops. The peers in such a network can then contribute improved functionality and perform well in P2P networks.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_16-Three_Layer_Hierarchical_Model_for_Chord.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Gender Effect Canonicalization for Bangla ASR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031115</link>
        <id>10.14569/IJACSA.2012.031115</id>
        <doi>10.14569/IJACSA.2012.031115</doi>
        <lastModDate>2012-12-01T13:22:43.3470000+00:00</lastModDate>
        
        <creator>B.K.M Mizanur Rahman</creator>
        
        <creator>Bulbul Ahamed</creator>
        
        <creator>Md. Asfak-Ur-Rahman</creator>
        
        <creator>Khaled Mahmud</creator>
        
        <creator>Mohammad Nurul Huda</creator>
        
        <subject>Acoustic Model; Automatic Speech Recognition; Gender Effects Suppression; Hidden Markov Model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This paper presents a Bangla (also widely known as Bengali) automatic speech recognition (ASR) system that suppresses gender effects. Gender characteristics play an important role in the performance of ASR. If a suppression process represses the decrease of differences in acoustic likelihood among categories caused by gender factors, a robust ASR system can be realized. In the proposed method, we have designed a new ASR for Bangla that incorporates Local Features (LFs) instead of standard mel frequency cepstral coefficients (MFCCs) as acoustic features while suppressing gender effects; it embeds three HMM-based classifiers for male, female and gender-independent (GI) characteristics. In experiments on a Bangla speech database prepared by us, the proposed system achieved a significant improvement in word correct rates (WCRs), word accuracies (WAs) and sentence correct rates (SCRs) in comparison with the method that incorporates standard MFCCs.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_15-Gender_Effect_Canonicalization_for_Bangla_ASR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multifinger Feature Level Fusion Based Fingerprint Identification </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031114</link>
        <id>10.14569/IJACSA.2012.031114</id>
        <doi>10.14569/IJACSA.2012.031114</doi>
        <lastModDate>2012-12-01T13:22:41.2570000+00:00</lastModDate>
        
        <creator>Praveen N</creator>
        
        <creator>Tessamma Thomas</creator>
        
        <subject>fingerprint; multimodal biometrics; gradient; orientation field;  singularity; matching score.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Fingerprint-based authentication systems are among the most cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength, and their local orientation vector is formulated with respect to the base line of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be fixed compared to single-finger-based identification.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_14-Multifinger_Feature_Level_Fusion_Based_Fingerprint_Identification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Image Source Separation by Means of Independent Component Analysis: ICA, Maximum Entropy Method: MEM, and Wavelet Based Method: WBM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031113</link>
        <id>10.14569/IJACSA.2012.031113</id>
        <doi>10.14569/IJACSA.2012.031113</doi>
        <lastModDate>2012-12-01T13:22:39.1630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Blind separation; image separation; cocktail party effect; ICA; MEM; wavelet analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>A method for image source separation based on Independent Component Analysis (ICA), the Maximum Entropy Method (MEM), and a Wavelet Based Method (WBM) is proposed. Experimental results show that image separation can be performed on combined different images by using the proposed method with an acceptable residual error.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_13-Method_for_Image_Source_Separation_by_Means_of_Independent_Component_Analysis_ICA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Based Change Detection for Four Dimensional Assimilation Data in Space and Time Domains</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031112</link>
        <id>10.14569/IJACSA.2012.031112</id>
        <doi>10.14569/IJACSA.2012.031112</doi>
        <lastModDate>2012-12-01T13:22:37.1070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Assimilation; change detection; wavelet analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>A method for time change detection of four-dimensional assimilation data by means of wavelet analysis is proposed, together with spatial change detection using satellite imagery data. The method is validated with assimilation data and satellite imagery data of the same scale. Experimental results show that the proposed method works well visually.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_12-Wavelet_Based_Change_Detection_for_Four_Dimensional_Assimilation_Data_in_Space_and_Time_Domains.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis for Water Vapor Profile Estimation with Infrared: IR Sounder Data Based on Inversion </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031111</link>
        <id>10.14569/IJACSA.2012.031111</id>
        <doi>10.14569/IJACSA.2012.031111</doi>
        <lastModDate>2012-12-01T13:22:35.0170000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>IR sounder; error budget analysis; MODTRAN; air temperature; relative humidity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Sensitivity analysis for water vapor profile estimation with infrared (IR) sounder data based on inversion is carried out. Through a simulation study, it is found that the influence of ground surface relative humidity estimation error on water vapor vertical profile retrievals is greater than that of sea surface temperature estimation error.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_11-Sensitivity_Analysis_for_Water_Vapor_Profile_Estimation_with_Infrared_IR_Sounder_Data_Based_on_Inversion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensitivity Analysis of Fourier Transformation Spectrometer: FTS Against Observation Noise on Retrievals of Carbon Dioxide and Methane</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031110</link>
        <id>10.14569/IJACSA.2012.031110</id>
        <doi>10.14569/IJACSA.2012.031110</doi>
        <lastModDate>2012-12-01T13:22:32.9570000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <creator>Takuya Fukamachi</creator>
        
        <creator>Shuji Kawakami</creator>
        
        <creator>Hirofumi Ohyama</creator>
        
        <subject>FTS; carbon dioxide; methane; sensitivity analysis; error analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Sensitivity analysis of a Fourier Transformation Spectrometer (FTS) against observation noise on retrievals of carbon dioxide and methane is conducted. Through experiments with real observed data and additive noise, it is found that the allowable noise on FTS observation data is less than 2.1x10^-5 if the estimation accuracy of total column carbon dioxide and methane is to be better than 1%.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_10-Sensitivity_Analysis_of_Fourier_Transformation_Spectrometer_FTS_Against_Observation_Noise_on_Retrievals_of_Carbon_Dioxide_and_Methane.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Modelling Process of a Paper Folding Problem in GeoGebra 3D</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031109</link>
        <id>10.14569/IJACSA.2012.031109</id>
        <doi>10.14569/IJACSA.2012.031109</doi>
        <lastModDate>2012-12-01T13:22:30.8800000+00:00</lastModDate>
        
        <creator>Muharrem Aktumen</creator>
        
        <creator>BekirKursat Doruk</creator>
        
        <creator>Tolga Kabaca</creator>
        
        <subject>Modelling; GeoGebra 3D; Paper Folding.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>In this research, a problem situation requiring the ability to think in three dimensions was developed by the researchers. As the purpose of this paper is to produce a modelling task suggestion, the problem was visualized and analyzed in the GeoGebra 3D environment. The visual solution was then also supported by an algebraic approach. Thus, the capability of creating relationships between geometric and algebraic representations in GeoGebra was also presented in a 3D sense.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_9-The_Modelling_Process_of_a_Paper_Folding_Problem_in_GeoGebra_3D.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Regressive Analysis Based Sea Surface Temperature Estimation Accuracy with NCEP/GDAS Data </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031108</link>
        <id>10.14569/IJACSA.2012.031108</id>
        <doi>10.14569/IJACSA.2012.031108</doi>
        <lastModDate>2012-12-01T13:22:28.7900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Thermal infrared radiometer; Skin sea surface temperature; Regressive analysis; Split window method; NCEP/GDAS; MODTRAN; Terra and AQUA/MODIS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>In order to evaluate skin sea surface temperature (SSST) estimation accuracy with MODIS data, 84 MODIS scenes together with match-up NCEP/GDAS data are used. Through regressive analysis, it is found that an RMSE of 0.305 to 0.417 K can be achieved. Furthermore, it is also found that band 29 is effective for atmospheric correction (30.6 to 38.8% improvement in estimation accuracy). If a single coefficient set for the regressive equation is used for all cases, SSST estimation accuracy is around 1.969 K, so specific coefficient sets for the five different cases have to be used.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_8-Evaluation_of_Regressive_Analysis_Based_Sea_Surface_Temperature_Estimation_Accuracy_with_NCEPGDAS_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>DNA Sequence Representation and Comparison Based on Quaternion Number System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031107</link>
        <id>10.14569/IJACSA.2012.031107</id>
        <doi>10.14569/IJACSA.2012.031107</doi>
        <lastModDate>2012-12-01T13:22:26.7170000+00:00</lastModDate>
        
        <creator>Hsuan T Chang</creator>
        
        <creator>Chung J. Kuo</creator>
        
        <creator>Neng-Wen Lo</creator>
        
        <creator>Wei-Z.Lv</creator>
        
        <subject>Bioinformatics; genomic signal processing; DNA sequence; quaternion number;  data visualization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Conventional schemes for DNA sequence representation, storage, and processing are usually developed based on character-based formats. We propose the quaternion number system for numerical representation and further processing of DNA sequences. In the proposed method, the quaternion cross-correlation operation can be used to obtain both the global and local matching/mismatching information between two DNA sequences, from the depicted one-dimensional curve and two-dimensional pattern, respectively. Simulation results on various DNA sequences and a comparison with the well-known BLAST method verify the effectiveness of the proposed method.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_7-DNA_Sequence_Representation_and_Comparison_Based_on_Quaternion_Number_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analyzing the Efficiency of Text-to-Image Encryption Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031106</link>
        <id>10.14569/IJACSA.2012.031106</id>
        <doi>10.14569/IJACSA.2012.031106</doi>
        <lastModDate>2012-12-01T13:22:24.6270000+00:00</lastModDate>
        
        <creator>Ahmad Abusukhon</creator>
        
        <creator>Mohammad Talib</creator>
        
        <creator>Maher A. Nabulsi</creator>
        
        <subject>Algorithm; Network; Secured Communication; Encryption &amp; Decryption; Private key; Encoding.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Today many activities are performed online through the Internet. One of the methods used to protect data while sending it through the Internet is cryptography. In previous work we proposed the Text-to-Image Encryption algorithm (TTIE) as a novel algorithm for network security. In this paper we investigate the efficiency of TTIE for large-scale collections.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_6-Analyzing_the_Efficiency_of_Text-to-Image_Encryption_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Virtual Enterprise Network based on IPSec VPN Solutions and Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031105</link>
        <id>10.14569/IJACSA.2012.031105</id>
        <doi>10.14569/IJACSA.2012.031105</doi>
        <lastModDate>2012-12-01T13:22:22.5330000+00:00</lastModDate>
        
        <creator>Sebastian Mariu Rosu</creator>
        
        <creator>Marius Marian Popescu</creator>
        
        <creator>George Dragoi</creator>
        
        <creator>Ioana Raluca Guica</creator>
        
        <subject>Enterprise network; IPSec; network architecture; VPN; network management.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>The construction of the information society cannot be realized without research and investment projects in Information and Communication Technologies (ICT). In the 21st century, all enterprises have a local area network, a virtual private network, an Intranet and Internet, and servers and workstations for operations, administration and management, working together for the same objective: profits. Internet Protocol Security is a framework of open standards for ensuring private communications over public networks. It has become the most common network layer security control, typically used to create a virtual private network. Network management comprises the activities, methods, procedures, and tools (software and hardware) that pertain to the operation, administration, maintenance, and provisioning of networked systems. Therefore, this work analyses the network architecture for a geographically dispersed enterprise (industrial holding). In addition, the paper presents a network management solution for a large enterprise us</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_5-The_Virtual_Enterprise_Network_based_on_IPSec_VPN_Solutions_and_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Genetic procedure for the Single Straddle Carrier Routing Problem </title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031104</link>
        <id>10.14569/IJACSA.2012.031104</id>
        <doi>10.14569/IJACSA.2012.031104</doi>
        <lastModDate>2012-12-01T13:22:20.4770000+00:00</lastModDate>
        
        <creator>Khaled MILI </creator>
        
        <creator>Faissal MILI</creator>
        
        <subject>Container terminal optimization; Straddle Carrier routing; heuristic; assignment strategy; Genetic Algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>This paper concentrates on minimizing the total travel time of the Straddle Carrier during the loading operations of outbound containers in a seaport container terminal. The Genetic Algorithm (GA) is a well-known meta-heuristic approach inspired by the natural evolution of living organisms. A heuristic procedure is developed to solve this problem by recourse to genetic operators. Numerical experimentation is carried out in order to evaluate the performance and efficiency of the solution.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_4-Genetic_procedure_for_the_Single_Straddle_Carrier_Routing_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimal itinerary planning for mobile multiple agents in WSN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031103</link>
        <id>10.14569/IJACSA.2012.031103</id>
        <doi>10.14569/IJACSA.2012.031103</doi>
        <lastModDate>2012-12-01T13:22:18.3700000+00:00</lastModDate>
        
        <creator>Mostefa BENDJIMA</creator>
        
        <creator>Mohamed FEHAM</creator>
        
        <subject>Wireless sensor network; mobile agent; optimal itinerary; energy efficiency.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Multiple mobile agents for data collection and aggregation have proved effective in wireless sensor networks (WSNs). However, the way in which these agents are deployed must be intelligently planned and adapted to the characteristics of wireless sensor networks. Most existing studies are based on itinerary planning algorithms for multiple agents, i.e., determining the number of mobile agents (MAs) to deploy, how to group source nodes for each MA, and attributing the itinerary of each MA for its source nodes. These algorithms determine the efficiency of aggregation. This paper presents the drawbacks of these approaches in large-scale networks and proposes a method for grouping network nodes to determine the optimal itinerary based on multiple agents, using a minimum amount of energy with efficient aggregation in a minimum time. Our approach is based on the principle of visiting central location (VCL). Simulation experiments show that our approach is more efficient than other approaches in terms of the amount of energy consumed relative to aggregation efficiency in a minimum time.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_3-Optimal_itinerary_planning_for_mobile_multiple_agents_in_WSN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extending UML for trajectory data warehouses conceptual modelling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031102</link>
        <id>10.14569/IJACSA.2012.031102</id>
        <doi>10.14569/IJACSA.2012.031102</doi>
        <lastModDate>2012-12-01T13:22:16.3100000+00:00</lastModDate>
        
        <creator>Wided Oueslati</creator>
        
        <creator>Jalel Akaichi</creator>
        
        <subject>Trajectory data; trajectory data warehouse; UML extension; trajectory UML profile.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>New positioning and information capture technologies are able to treat data related to moving objects involved in targeted phenomena. This gave birth to a new data source type called trajectory data (TD), which handles information related to moving objects. Trajectory data must be integrated into a new type of data warehouse called a trajectory data warehouse (TDW), which is essential to model and implement in order to analyze and understand the nature and behavior of object movements in various contexts. However, classical conceptual modeling does not incorporate the specificity of trajectory data due to the complexity of its components, which are spatial, temporal and thematic (semantic). For this reason, we focus in this paper on the conceptual modeling of the trajectory data warehouse by defining a new profile using the StarUML extensibility mechanism.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_2-Extending_UML_for_trajectory_data_warehouses_conceptual_modelling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Predicting Garden Path Sentences Based on Natural Language Understanding System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031101</link>
        <id>10.14569/IJACSA.2012.031101</id>
        <doi>10.14569/IJACSA.2012.031101</doi>
        <lastModDate>2012-12-01T13:22:14.1430000+00:00</lastModDate>
        
        <creator>DU Jia-li</creator>
        
        <creator>YU Ping-fang</creator>
        
        <subject>Natural language understanding; N-S flowchart; computational linguistics; context free grammar; garden path sentences.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(11), 2012</description>
        <description>Natural language understanding (NLU), which focuses on machine reading comprehension, is a branch of natural language processing (NLP). The domain of a developing NLU system ranges from sentence decoding to text understanding, and the automatic decoding of garden path (GP) sentences belongs to this domain. The GP sentence is a special linguistic phenomenon in which processing breakdown and backtracking are two key features. If a syntax-based system can present the special features of GP sentences and decode them completely and perfectly, the NLU system can greatly improve its effectiveness and understanding skill. On the one hand, by means of Octav Popescu’s model of an NLU system, we argue that the integration of syntactic, semantic and cognitive backgrounds in the system is necessary. On the other hand, we focus on the programming of the IF-THEN-ELSE statement used in the N-S flowchart and highlight the function of a context free grammar (CFG) created to decode GP sentences. On the basis of example-based analysis, we conclude that syntax-based machine comprehension is technically feasible and semantically acceptable, and that the N-S flowchart and CFG can help an NLU system present the decoding procedure of GP sentences successfully. In short, a syntax-based NLU system can bring a deeper understanding of GP sentences and thus paves the way for further development of syntax-based natural language processing and artificial intelligence.</description>
        <description>http://thesai.org/Downloads/Volume3No11/Paper_1-Predicting_Garden_Path_Sentences_Based_on_Natural_Language_Understanding_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimisation of Resource Scheduling in VCIM Systems Using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010809</link>
        <id>10.14569/IJARAI.2012.010809</id>
        <doi>10.14569/IJARAI.2012.010809</doi>
        <lastModDate>2012-11-10T05:08:48.6330000+00:00</lastModDate>
        
        <creator>Son Duy Dao</creator>
        
        <creator>Kazem Abhary</creator>
        
        <creator>Romeo Marian</creator>
        
        <subject>optimisation; genetic algorithm; resource scheduling; virtual computer-integrated manufacturing system.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>The concept of Virtual Computer-Integrated Manufacturing (VCIM) has been around for one and a half decades, with the purpose of overcoming the limitation of traditional Computer-Integrated Manufacturing (CIM), which only works within a single enterprise. The VCIM system is a promising solution for enterprises to survive in the globally competitive market because it can effectively exploit locally as well as globally distributed resources. In this paper, a Genetic Algorithm (GA) based approach for optimising resource scheduling in the VCIM system is proposed. Firstly, based on the latest concept of the VCIM system, a class of resource scheduling problems in the system is modelled using an agent-based approach. Secondly, a GA with new strategies for constraint handling, chromosome encoding, crossover and mutation is developed to search for an optimal solution to the problem. Finally, a case study is given to demonstrate the robustness of the proposed approach.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_9_Optimisation_of_Resource_Scheduling_in_VCIM_Systems_Using_Genetic_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Mechanism of Generating Joint Plans for Self-interested Agents, and by the Agents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010808</link>
        <id>10.14569/IJARAI.2012.010808</id>
        <doi>10.14569/IJARAI.2012.010808</doi>
        <lastModDate>2012-11-10T05:08:46.5270000+00:00</lastModDate>
        
        <creator>Wei HUANG</creator>
        
        <subject>joint plan; self-interested agent; private information; Pareto optimality; concession.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>Generating joint plans for multiple self-interested agents is one of the most challenging problems in AI, since complications arise when each agent brings into a multi-agent system its personal abilities and utilities. Some fully centralized approaches (which require agents to fully reveal their private information) have been proposed for the plan synthesis problem in the literature. However, in the real world, private information exists widely, and it is unacceptable for a self-interested agent to reveal its private information. In this paper, we define a class of multi-agent planning problems in which self-interested agents&#39; values are private information, and the agents are ready to cooperate with each other in order to cost-efficiently achieve their individual goals. We further propose a semi-distributed mechanism to deal with this kind of problem. In this mechanism, the involved agents bargain with each other to reach an agreement and do not need to reveal their private information. We show that this agreement is a possible joint plan which is Pareto optimal and entails minimal concessions.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_8_A_Mechanism_of_Generating_Joint_Plans_for_Self-interested_Agents_and_by_the_Agents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Local Feature based Gender Independent Bangla ASR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010807</link>
        <id>10.14569/IJARAI.2012.010807</id>
        <doi>10.14569/IJARAI.2012.010807</doi>
        <lastModDate>2012-11-10T05:08:44.4370000+00:00</lastModDate>
        
        <creator>Bulbul Ahamed</creator>
        
        <creator>Khaled Mahmud</creator>
        
        <creator>B.K.M. Mizanur Rahman</creator>
        
        <creator>Foyzul Hassan</creator>
        
        <creator>Rasel Ahmed</creator>
        
        <creator>Mohammad Nurul Huda</creator>
        
        <subject>Automatic speech recognition; local features; gender factor; word correct rates; word accuracies; sentence correct rates; hidden Markov model.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>This paper presents automatic speech recognition (ASR) for Bangla (widely used as Bengali) that suppresses speaker gender effects based on local features extracted from the input speech. Speaker-specific characteristics play an important role in the performance of Bangla ASR. The gender factor has an adverse effect on the classifier when recognizing speech of the opposite gender, for example, when a classifier trained on male speech is tested on female speech or vice versa. To obtain a robust ASR system in practice, it is necessary to build a system whose performance is independent of speaker gender. In this paper, we propose a gender-independent technique for ASR that focuses on the gender factor. The proposed method trains the classifier with both genders, male and female, and evaluates the classifier on both. For the experiments, we designed a medium-sized Bangla speech corpus for both male and female speakers. The proposed system shows a significant improvement in word correct rates, word accuracies and sentence correct rates in comparison with a method that suffers from gender effects. Moreover, it provides the highest recognition performance while requiring fewer mixture components in the hidden Markov models (HMMs).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_7_Local_Feature_based_Gender_Independent_Bangla_ASR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Systems for Knowledge Representation in Artificial Intelligence</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010806</link>
        <id>10.14569/IJARAI.2012.010806</id>
        <doi>10.14569/IJARAI.2012.010806</doi>
        <lastModDate>2012-11-10T05:08:42.3170000+00:00</lastModDate>
        
        <creator>Rajeswari P.V N</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Knowledge representation; hybrid system; hybrid schema structure.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>There are only a few knowledge representation (KR) techniques available for efficiently representing knowledge. However, with the increase in complexity, better methods are needed. Some researchers have devised hybrid mechanisms by combining two or more methods. In an effort to construct an intelligent computer system, a primary consideration is to represent large amounts of knowledge in a way that allows effective use and to organize information efficiently so that the recommended inferences can be made. Each combination has its merits and demerits, and a standardized method of KR is needed. In this paper, various hybrid schemes of KR are explored at length and their details presented.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_6_Hybrid_Systems_for_Knowledge_Representation_in_Artificial_Intelligence.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Interval-Based Context Reasoning Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010805</link>
        <id>10.14569/IJARAI.2012.010805</id>
        <doi>10.14569/IJARAI.2012.010805</doi>
        <lastModDate>2012-11-10T05:08:40.1630000+00:00</lastModDate>
        
        <creator>Tao Lu</creator>
        
        <creator>Xiaoling Liu</creator>
        
        <creator>Xiaogang Liu</creator>
        
        <creator>Shaokun Zhang</creator>
        
        <subject>context-aware; context reasoning; interval-based; activity recognition.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>Context-aware computing is an emerging computing paradigm that enables intelligent context-aware applications. Context reasoning is an important aspect of context awareness, by which high-level context can be derived from low-level context data. In this paper, we focus on the situation in a mobile workspace, where a worker performs a set of activities to achieve defined goals. The main part of being aware is being able to answer the question of “what is going on”. Therefore, the high-level context we need to derive is the current activity and its state. The approach we propose is a knowledge-driven technique. Temporal relations as well as semantic relations are integrated into the context model of activity, and recognition is performed based on this model. We first define the context model of activity, then analyze the characteristics of context change and propose a method of context reasoning.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_5_An_Interval_Based_Context_Reasoning_Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Exploration on Brain Computer Interface and Its Recent Trends</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010804</link>
        <id>10.14569/IJARAI.2012.010804</id>
        <doi>10.14569/IJARAI.2012.010804</doi>
        <lastModDate>2012-11-10T05:08:38.0400000+00:00</lastModDate>
        
        <creator>T. Kameswara Rao</creator>
        
        <creator>M. Rajyalakshmi</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>BCI; EEG; brain image reconstruction.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>A detailed exploration of the Brain Computer Interface (BCI) and its recent trends is presented in this paper. Work is being done to identify objects, images, videos and their color compositions. Efforts are underway to understand speech, words, emotions, feelings and moods. When humans watch the surrounding environment, the visual data is processed by the brain, and it is possible to reconstruct the same on a screen with appreciable accuracy by analyzing the physiological data. This data is acquired by using a non-invasive technique such as electroencephalography (EEG) in BCI. The acquired signal is then translated to produce the image on the screen. This paper also lays out suitable directions for future work.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_4_An_Exploration_on_Brain_Computer_Interface_and_Its_Recent_Trends.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visualization of Link Structures and URL Retrievals Utilizing Internal Structure of URLs Based on Brunch and Bound Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010803</link>
        <id>10.14569/IJARAI.2012.010803</id>
        <doi>10.14569/IJARAI.2012.010803</doi>
        <lastModDate>2012-11-10T05:08:35.9330000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>visualization of link structure; URL retrieval; brunch and bound method; search engine; information collection robot.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>Method for visualization of URL link structure and URL retrievals using internal structure of URLs based on brunch and bound method is proposed. Twisting link structure of URLs can be solved by the proposed visualization method. Also some improvements are observed for the proposed brunch and bound based method in comparison to the conventional URL retrieval methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_3_Visualization_of_Link_Structures_and_URL_Retrievals_Utilizing_Internal_Structure_of_URLs_Based_on_Brunch_and_Bound_Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Method Based on Messy Genetic Algorithm: GA for Remote Sensing Satellite Image Classifications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010802</link>
        <id>10.14569/IJARAI.2012.010802</id>
        <doi>10.14569/IJARAI.2012.010802</doi>
        <lastModDate>2012-11-10T05:08:33.8130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>clustering; classification; Genetic Algorithm: GA; Messy GA; simple GA.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>Clustering method for remote sensing satellite image classification based on Messy Genetic Algorithm: GA is proposed. Through simulation study and experiments with real remote sensing satellite images, the proposed method is validated in comparison to the conventional simple GA. It is also found that the proposed clustering method is useful for preprocessing of the classifications.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_2_Clustering_Method_Based_on_Messy_Genetic_Algorithm_GA_for_Remote_Sensing_Satellite_Image_Classifications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Research of the Relationship between Perceived Stress Level and Times of Vibration of Vocal Folds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010801</link>
        <id>10.14569/IJARAI.2012.010801</id>
        <doi>10.14569/IJARAI.2012.010801</doi>
        <lastModDate>2012-11-10T05:08:31.6430000+00:00</lastModDate>
        
        <creator>Yin Zhigang</creator>
        
        <subject>Times of vibration of vocal folds (TVVF); stress level; parameter.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(8), 2012</description>
        <description>Whether a syllable is perceived as stressed or not, and whether the stress is strong or weak, are hot issues in speech prosody research and speech recognition. A focus of stress study is the investigation of the acoustic factors which contribute to the perception of stress level. This study examined all possible acoustic/physiological cues to stress based on data from the Annotated Chinese Speech Corpus and proposed that the times of vibration of vocal folds (TVVF) reflects stress level best. It is traditionally held that pitch and duration are the most important acoustic parameters for stress. But for Chinese, which is a tone language and features a special strong-weak pattern in prosody, these two parameters might not be the best ones to represent stress degree. This paper proposes that TVVF, reflected as the number of wave pulses of the vocalic part of a syllable, is the ideal parameter for stress level. Since the number of pulses is the integral of pitch over duration (Pulse = ∫f(pitch)dt), TVVF can embody the effect of stress on both pitch and duration. The analyses revealed that TVVF is most correlated with the grades of stress. Therefore, it can be a more effective parameter indicating stress level.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No8/Paper_1_The_Research_of_the_Relationship_between_Perceived_Stress_Level_and_Times_of_Vibration_of_Vocal_Folds.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Request Analysis and Dynamic Queuing System for VANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031031</link>
        <id>10.14569/IJACSA.2012.031031</id>
        <doi>10.14569/IJACSA.2012.031031</doi>
        <lastModDate>2012-10-31T09:55:26.5900000+00:00</lastModDate>
        
        <creator>Ajay Guleria</creator>
        
        <creator>Narottam Chand</creator>
        
        <creator>Mohinder Kumar</creator>
        
        <creator>Lalit Awasthi</creator>
        
        <subject>Road Side Unit; Service Ratio; Propagation Delay; Service Queue.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>A Vehicular Ad hoc Network (VANET) is a kind of mobile ad hoc network that uses wireless communication for Vehicle-to-Vehicle and Vehicle-to-Roadside communication to provide safety and comfort to vehicles in a transportation system. People in vehicles want to access data of interest from a Road Side Unit (RSU). The RSU needs to schedule these requests in a way that maximizes the service ratio. In this paper, we propose new methods for careful analysis of incoming requests, to determine whether these requests can be completed within their deadlines, and to provide a dynamic service queue. Simulation results show that the proposed schemes increase the service ratio significantly.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_31-Request_Analysis_and_Dynamic_Queuing_System_for_VANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Implementation of 5/3 Integer DWT for Image Compression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031030</link>
        <id>10.14569/IJACSA.2012.031030</id>
        <doi>10.14569/IJACSA.2012.031030</doi>
        <lastModDate>2012-10-31T09:55:24.5000000+00:00</lastModDate>
        
        <creator>M Puttaraju</creator>
        
        <creator>Dr.A.R.Aswatha</creator>
        
        <subject>2D-DWT; Lifting; CCDS; wavelet transform; 1D-DWT.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>The wavelet transform has emerged as a cutting-edge technology in the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. In this paper, an approach is proposed for the compression of an image using the 5/3 (lossless) integer discrete wavelet transform (DWT). The proposed architecture is based on a new and fast lifting-scheme approach for the (5, 3) filter in the DWT. An attempt is made to establish a standard for a data compression algorithm applied to two-dimensional digital spatial image data from payload instruments.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_30-FPGA_Implementation_of_53_Integer_DWT_for_Image_Compression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Newer User Authentication, File encryption and Distributed Server Based Cloud Computing security architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031029</link>
        <id>10.14569/IJACSA.2012.031029</id>
        <doi>10.14569/IJACSA.2012.031029</doi>
        <lastModDate>2012-10-31T09:55:22.4100000+00:00</lastModDate>
        
        <creator>Kawser Wazed Nafi</creator>
        
        <creator>Tonny Shekha Kar</creator>
        
        <creator>Sayed Anisul Hoque</creator>
        
        <creator>Dr. M. M. A Hashem</creator>
        
        <subject>Cloud Computing; Security architecture; AES; RSA; one-time password; MD5 Hashing; Hardwire database encryption.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>The cloud computing platform gives people the opportunity to share resources, services and information among the people of the whole world. In a private cloud system, information is shared only among the persons who are in that cloud, which hampers security and the hiding of personal information. In this paper, we propose a new security architecture for the cloud computing platform. It ensures secure communication and hides information from others. AES-based file encryption and an asynchronous key system for exchanging information or data are included in this model. This structure can be easily applied with the main cloud computing features, e.g. PaaS, SaaS and IaaS. This model also includes a one-time password system for the user authentication process. Our work mainly deals with the security system of the whole cloud computing platform.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_29-A_Newer_User_Authentication,_File_encryption_and_Distributed_Server_Based_Cloud_Computing_security_architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantifiable Analysis of Energy Efficient Clustering Heuristic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031028</link>
        <id>10.14569/IJACSA.2012.031028</id>
        <doi>10.14569/IJACSA.2012.031028</doi>
        <lastModDate>2012-10-31T09:55:18.4800000+00:00</lastModDate>
        
        <creator>Anita Sethi</creator>
        
        <creator>J. P. Saini</creator>
        
        <creator>Manoj Bisht</creator>
        
        <subject>Cluster; ClusterHead; Gateway; Associated nodes; Energy Efficiency.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>One of the important aspects of a MANET is the limited quantity of available energy in the network nodes, which is the most critical factor in the operation of these networks. The tremendous amount of energy consumed by mobile nodes in the wireless communication medium makes energy efficiency a fundamental requirement for mobile ad hoc networks. Cluster-based routing protocols have been investigated in several research studies, as they encourage better-organized usage of resources in controlling large dynamic networks. Clustering can be done for different purposes, such as routing efficiency, transmission management and backbone management. Less flooding, distributed operation, local repair of broken routes and shorter sub-optimal routes are the main features of clustering protocols. In this paper, we present a quantifiable analysis of an energy-efficient cluster-based routing protocol for uninterrupted stream queries.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_28-Quantifiable_Analysis_of_Energy_Efficient_Clustering_Heuristic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm for Solving Natural Language Query Execution Problems on Relational Databases</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031027</link>
        <id>10.14569/IJACSA.2012.031027</id>
        <doi>10.14569/IJACSA.2012.031027</doi>
        <lastModDate>2012-10-31T09:55:16.3870000+00:00</lastModDate>
        
        <creator>Enikuomehin A. O</creator>
        
        <creator>Okwufulueze D.O</creator>
        
        <subject>Relational Database; Interface; Natural Language; Query; SQL.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>There continues to be an increasing need for non-expert interaction with databases, which is essential in users' quest to make appropriate business decisions. Researchers have, over the years, continued to search for a methodology that bridges the gap that exists between information need and user satisfaction. This has been the core of studies related to natural language information retrieval. In this paper, we study the existing methodology and develop a model to extend the propositions of (a) Bhardwaj et al., where a MAPPER was developed and implemented on a student database, and (b) Nihalani et al., where an integrated interface was used on relational databases. We present a time-saving executable algorithm that satisfies the conditions required to retrieve results of natural language based queries from relational databases. Results of the experiment show that the performance index of the algorithm is satisfactory and can be improved by increasing the metadata table of the relational database. This is a sharp departure from the keyword-based search that dominates most commercial databases in use today. The implementation was deployed in PHP, and the retrieval time compares favorably with earlier deployed models. We further propose extending this work by incorporating fuzzy constraints to handle the uncertainty and ambiguity inherent in human natural language.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_27-An_Algorithm_for_Solving_Natural_Language_Query_Execution_Problems_on_Relational_Databases.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Technique for Glitch and Leakage Power Reduction in CMOS VLSI Circuits</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031026</link>
        <id>10.14569/IJACSA.2012.031026</id>
        <doi>10.14569/IJACSA.2012.031026</doi>
        <lastModDate>2012-10-31T09:55:12.4570000+00:00</lastModDate>
        
        <creator>Pushpa Saini</creator>
        
        <creator>Rajesh Mehra</creator>
        
        <subject>Dynamic power; Leakage power; Multi-threshold; Variable body biasing;  Glitch.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Leakage power has become a serious concern in nanometer CMOS technologies. Dynamic power and leakage power are both main contributors to the total power consumption. In the past, dynamic power dominated the total power dissipation of CMOS devices. However, with the continuing trend of technology scaling, leakage power is becoming a major contributor to power consumption. In this paper, a technique is proposed which simultaneously reduces both glitch power and leakage power. The results are simulated in Microwind 3.1 in 90 nm and 250 nm technologies at room temperature.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_26-A_Novel_Technique_for_Glitch_and_Leakage_Power_Reduction_in_CMOS_VLSI_Circuits.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Harmony Search Based Algorithm for Detecting Distributed Predicates</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031025</link>
        <id>10.14569/IJACSA.2012.031025</id>
        <doi>10.14569/IJACSA.2012.031025</doi>
        <lastModDate>2012-10-31T09:55:08.5400000+00:00</lastModDate>
        
        <creator>Eslam Al Maghayreh</creator>
        
        <subject>Distributed Systems; Detection of Distributed Predicates; Runtime Verification; Harmony Search; Testing; Debugging.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Detection of distributed predicates (also referred to as runtime verification) can be used to verify that a particular run of a given distributed program satisfies certain properties (represented as predicates). Consequently, distributed predicate detection techniques can be used to effectively improve the dependability of a given distributed application. Due to concurrency, the detection of distributed predicates can incur significant overhead. Most of the effective techniques developed to solve this problem work efficiently for certain classes of predicates, like conjunctive predicates. In this paper, we present a technique based on harmony search to efficiently detect the satisfaction of a predicate under the possibly modality. We have implemented the proposed technique and conducted several experiments to demonstrate its effectiveness.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_25-A_Harmony_Search_Based_Algorithm_for_Detecting_Distributed_Predicates.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hindi Speech Actuated Computer Interface for Web Search</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031024</link>
        <id>10.14569/IJACSA.2012.031024</id>
        <doi>10.14569/IJACSA.2012.031024</doi>
        <lastModDate>2012-10-31T09:55:04.6270000+00:00</lastModDate>
        
        <creator>Kamlesh Sharma</creator>
        
        <creator>Dr. S.V.A.V. Prasad</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Web search; Hindi speech; HSACIWS; computer interface; human computer interaction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Aiming at increased system simplicity and flexibility, an audio-evoked system was developed by integrating a simplified headphone and a user-friendly software design. This paper describes a Hindi Speech Actuated Computer Interface for Web Search (HSACIWS), which accepts spoken queries in the Hindi language and provides the search results on the screen. The system recognizes spoken queries by large vocabulary continuous speech recognition (LVCSR), retrieves relevant documents by text retrieval, and provides the search results on the Web through the integration of the Web and voice systems. The LVCSR in this system showed sufficient performance levels for speech, with acoustic and language models derived from a query corpus with target contents.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_24-A_Hindi_Speech_Actuated_Computer_Interface_for_Web_Search.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secured Wireless Communication using Fuzzy Logic based High Speed Public-Key Cryptography (FLHSPKC)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031023</link>
        <id>10.14569/IJACSA.2012.031023</id>
        <doi>10.14569/IJACSA.2012.031023</doi>
        <lastModDate>2012-10-31T09:55:02.5500000+00:00</lastModDate>
        
        <creator>Arindam Sarkar</creator>
        
        <creator>J. K. Mandal</creator>
        
        <subject>Soft computing; Wireless Communication; High Speed; ECC.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>In this paper, secured wireless communication using fuzzy logic based high speed public-key cryptography (FLHSPKC) is proposed, addressing the major issues of computational safety, power management and restricted memory usage in wireless communication. A Wireless Sensor Network (WSN) has several major constraints: an inadequate source of energy, restricted computational potentiality and limited memory. Conventional Elliptic Curve Cryptography (ECC), a form of public-key cryptography used in wireless communication, provides a level of security equivalent to other existing public-key algorithms while using smaller parameters, but traditional ECC does not address all of these major limitations of WSNs. Given an elliptic curve point P, an arbitrary integer k and a modulus m, conventional ECC carries out the scalar multiplication kP mod m, which takes about 80% of the key computation time on a WSN. The proposed FLHSPKC scheme provides several novel strategies, including a soft computing based strategy to speed up scalar multiplication in conventional ECC, which in turn takes shorter computational time and satisfies the power consumption restraint and limited memory usage without hampering the security level. Performance analysis of the different strategies under the FLHSPKC scheme and a comparative study with existing conventional ECC methods have been carried out.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_23-Secured_Wireless_Communication_using_Fuzzy_Logic_based_High_Speed_Public-Key_Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NF-SAVO: Neuro-Fuzzy system for Arabic Video OCR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031022</link>
        <id>10.14569/IJACSA.2012.031022</id>
        <doi>10.14569/IJACSA.2012.031022</doi>
        <lastModDate>2012-10-31T09:55:00.4130000+00:00</lastModDate>
        
        <creator>Mohamed Ben Halima</creator>
        
        <creator>Hichem karray</creator>
        
        <creator>Adel. M. Alimi</creator>
        
        <creator>Ana Fern&#225;ndez Vila</creator>
        
        <subject>Arabic Video OCR; Text Localization; Text Detection; Text Extraction; Pattern Recognition; Neuro-Fuzzy.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>In this paper we propose a robust approach for text extraction and recognition from video clips, called the Neuro-Fuzzy system for Arabic Video OCR. In Arabic video text recognition, a number of noise components make the text relatively more complicated to separate from the background. Further, the characters can be moving or presented in a diversity of colors, sizes and fonts that are not uniform. Added to this is the fact that the background is usually moving, making text extraction a more intricate process.
Video includes two kinds of text: scene text and artificial text. Scene text is text that becomes part of the scene itself, as it is recorded at the time of filming. Artificial text, in contrast, is produced separately from the scene and is laid over it at a later stage or during post processing; its appearance is consequently carefully controlled. This type of text carries important information that helps in video referencing, indexing and retrieval.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_22-NF-SAVO_Neuro-Fuzzy_system_for_Arabic_Video_OCR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Card Based Integrated Electronic Health Record System For Clinical Practice</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031021</link>
        <id>10.14569/IJACSA.2012.031021</id>
        <doi>10.14569/IJACSA.2012.031021</doi>
        <lastModDate>2012-10-31T09:54:56.4370000+00:00</lastModDate>
        
        <creator>N. Anju Latha</creator>
        
        <creator>B. Rama Murthy</creator>
        
        <creator>U. Sunitha</creator>
        
        <subject>Electronic health record; Smart card technology; Healthcare using smart cards.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Smart cards are used in information technology as portable integrated devices with data storage and data processing capabilities. As in other fields, smart card use in health systems became popular due to their increased capacity and performance. Smart cards can serve as an Electronic Health Record (EHR); their efficient use, with easy and fast data access facilities, has led to implementations that are particularly widespread in hospitals. In this paper, a smart card based integrated Electronic Health Record system is developed. The system uses the smart card for personal identification and transfer of health data, and provides data communication. In addition to personal information, general health information about the patient is also loaded onto the patient smart card. Health care providers use smart cards to access the data on patient cards. Electronic health records have a number of advantages over paper records: they improve the accuracy and quality of patient care, reduce cost, and increase efficiency and productivity. In the present work we measure biomedical parameters such as blood pressure, diabetes mellitus and pulse oxygen measurements, among other clinical parameters of the patient, and store the health details in the Electronic Health Record. The system has been successfully tested and implemented.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_21-Smart_Card_Based_Integrated_Electronic_Health_Record_System_For_Clinical_Practice.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prediction of Compressive Strength of Self compacting Concrete with Flyash and Rice Husk Ash using Adaptive Neuro-fuzzy Inference System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031020</link>
        <id>10.14569/IJACSA.2012.031020</id>
        <doi>10.14569/IJACSA.2012.031020</doi>
        <lastModDate>2012-10-31T09:54:54.3430000+00:00</lastModDate>
        
        <creator>S. S. Pathak</creator>
        
        <creator>Dr. Sanjay Sharma</creator>
        
        <creator>Dr. Hemant Sood</creator>
        
        <creator>Dr. R. K. Khitoliya</creator>
        
        <subject>Self compacting concrete; ANFIS; Flyash.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Self-compacting concrete is an innovative concrete that does not require vibration for placing and compaction. It is able to flow under its own weight, completely filling formwork and achieving full compaction even in congested reinforcement, without segregation and bleeding. In the present study, self compacting concrete mixes were developed using a blend of fly ash and rice husk ash. The fresh properties of these mixes were tested using the standards recommended by EFNARC (European Federation for Specialist Construction Chemicals and Concrete Systems). Compressive strength at 28 days was obtained for these mixes. This paper presents the development of an Adaptive Neuro-fuzzy Inference System (ANFIS) model for predicting the compressive strength of self compacting concrete using fly ash and rice husk ash. The input parameters used for the model are cement, fly ash, rice husk ash and water content. The output parameter is compressive strength at 28 days. The results show that the implemented model is good at predicting compressive strength.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_20-Prediction_of_Compressive_Strength_of_Self_compacting_Concrete_with_Flyash_and_Rice_Husk_Ash_using_Adaptive_Neuro-fuzzy_Inference_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Performance Speed Sensorless Control of Three-Phase Induction Motor Based on Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031019</link>
        <id>10.14569/IJACSA.2012.031019</id>
        <doi>10.14569/IJACSA.2012.031019</doi>
        <lastModDate>2012-10-31T09:54:52.2530000+00:00</lastModDate>
        
        <creator>Z. M. Salem</creator>
        
        <creator>M.A.Abbas</creator>
        
        <subject>Induction motor; Cloud computing control; Sensorless control; Vector control; Observers; Modeling; Identification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>An induction motor is a type of alternating current motor in which power is transferred to the rotor by electromagnetic induction. These motors are broadly applied in industry because they are rugged and have no brush contacts. A speed controller for the three-phase induction motor is applied to reduce speed deviation. The central objective of this paper is to improve the performance of speed sensorless control of a three-phase induction motor. To increase its performance, this paper presents a modified method for the speed controller of an indirect vector-controlled induction motor drive using a cloud computing technique. Our methodology depends on a speed sensorless scheme to obtain the speed feedback signal; the speed estimator is based on model reference adaptive control and uses the stator current and rotor flux as state variables for estimating the speed. In this method, the stator current error is represented as a first-degree function of the estimated speed error. An analysis and simulation of the proposed algorithm is presented and implemented using a TMS320C31 floating-point digital signal processor. To improve the performance of the three-phase induction motor, we designed the controller based on cloud computing tactics; this intelligent policy applies the guidelines of the speed controller efficiently. Simulation and experimental results show that the motor speed tracks its reference value smoothly, without overshoot or undershoot and with nearly zero steady-state error. The estimated speed signal and its reference can be obtained offline from simulation. The results display good agreement between the estimated speed signal, its reference, and the simulated speed.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_19-High_Performance_Speed_Sensorless_Control_of_Three-Phase_Induction_Motor_Based_on_Cloud_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fusion of Biogeography based optimization and Artificial bee colony for identification of Natural Terrain Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031018</link>
        <id>10.14569/IJACSA.2012.031018</id>
        <doi>10.14569/IJACSA.2012.031018</doi>
        <lastModDate>2012-10-31T09:54:48.2770000+00:00</lastModDate>
        
        <creator>Priya Arora</creator>
        
        <creator>Harish Kundra</creator>
        
        <creator>Dr. V.K Panchal</creator>
        
        <subject>Biogeography-based Optimization; Artificial bee colony; Hybrid swarm intelligence; Image classification; Multi spectral dataset.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Swarm intelligence techniques exploit the remarkable ability of group members to reason and learn in an environment of uncertainty, correcting themselves with help from their peers by sharing information. This paper introduces a novel approach that fuses two intelligent techniques, generally to augment the performance of a single intelligent technique by means of information sharing. Biogeography-based optimization (BBO) is a recently developed heuristic algorithm that has proved to be a strong entrant in swarm intelligence, with encouraging and consistent performance. However, as BBO lacks an inbuilt clustering property, this behavior can be supplied by the honey bees of the artificial bee colony (ABC), a newer swarm intelligence technique. These two methods can be combined into a new method that is easy to implement and gives more optimized results than BBO alone. We have successfully applied this fusion of techniques to classifying diversified land cover areas in a multispectral remote sensing satellite image. The results illustrate that the proposed approach is more efficient than BBO and that highly accurate land cover features can be extracted using this approach.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_18-Fusion_of_Biogeography_based_optimization_and_Artificial_bee_colony_for_identification_of_Natural_Terrain_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Authentication Mechanisms for Desktop Platform and Smart Phones</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031017</link>
        <id>10.14569/IJACSA.2012.031017</id>
        <doi>10.14569/IJACSA.2012.031017</doi>
        <lastModDate>2012-10-31T09:54:44.2970000+00:00</lastModDate>
        
        <creator>Dina EL Menshawy</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <creator>Osman Hegazy</creator>
        
        <subject>Biometrics; Keystroke; Mouse; Authentication; Smart Phones; Touch Screens; Touch Pressure; Touch Contact Size.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>With hundreds of millions of people using computers and mobile devices all over the globe, these devices have an established position in modern society. Nevertheless, most of these devices use weak authentication techniques based on passwords and PINs, which can be easily hacked. Thus, stronger identification is needed to ensure data security and privacy. In this paper, we explain the application of biometrics to computer and mobile platforms. In addition, the possibility of using keystroke and mouse dynamics for computer authentication is examined. Finally, we propose an authentication scheme for smart phones that shows positive results.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_17-Enhanced_Authentication_Mechanisms_for_Desktop_Platform_and_Smart_Phones.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Secured Communication Based On Knowledge Engineering Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031016</link>
        <id>10.14569/IJACSA.2012.031016</id>
        <doi>10.14569/IJACSA.2012.031016</doi>
        <lastModDate>2012-10-31T09:54:40.2900000+00:00</lastModDate>
        
        <creator>M. W. Youssef</creator>
        
        <creator>Hazem El-Gendy</creator>
        
        <subject>Computer Communications; Computer/Communications Protocols; Network Security; Authentication; Scrambling; Encryption; Standard Protocols; ISO Open System Interconnections (OSI) Model; object behavior analysis; knowledge engineering.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Communication security has become the keynote of the &quot;e&quot; world. Industries like eComm and eGov were built on the technology of computer networks, and those industries cannot afford security breaches. This paper presents a methodology for securing computer communication by identifying the typical communication behavior of each system user, based on the dominant set of protocols utilized between the network nodes.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_16-A_Secured_Communication_Based_On_Knowledge_Engineering_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture for Network Coded Electronic Health Record Storage System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031015</link>
        <id>10.14569/IJACSA.2012.031015</id>
        <doi>10.14569/IJACSA.2012.031015</doi>
        <lastModDate>2012-10-31T09:54:36.3270000+00:00</lastModDate>
        
        <creator>B. Venkatalakshmi</creator>
        
        <creator>S. Shanmugavel</creator>
        
        <subject>Network coding; Electronic Health Record system; Distributed content storage; RFID; Linear network code.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>The use of network coding for large scale content distribution improves download time. This is demonstrated in this work through a network coded Electronic Health Record Storage System (EHR-SS). A novel 4-layer architecture is designed to build the EHR-SS. The application integrates the data captured for the patient from three modules, namely administrative data, medical records of consultations, and reports of medical tests. The lowest layer is the data capturing layer, using an RFID reader; data is captured at this level from different nodes and combined with linear coefficients using linear network coding. At the lowest level the data from different tags are combined and stored, while at level 2 the coding combines the data from multiple readers and a corresponding encoding vector is generated. This network coding is done at the server node through a small MATLAB network-coding interface. When accessing the stored data, the user data carries its data type represented in the form of a decoding vector. For storage and retrieval, the primary key is the patient id. The results obtained show a reduction in download time of about 12% for our case study setup.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_15-A_Novel_Architecture_for_Network_Coded_Electronic_Health_Record_Storage_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Keep Credentials Secured in Grid Computing Environment for the Safety of Vital Computing Resources</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031014</link>
        <id>10.14569/IJACSA.2012.031014</id>
        <doi>10.14569/IJACSA.2012.031014</doi>
        <lastModDate>2012-10-31T09:54:32.4270000+00:00</lastModDate>
        
        <creator>Avijit Bhowmick</creator>
        
        <creator>C T Bhunia</creator>
        
        <subject>Grid; security; LSB; authentication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Presently, security attacks target vulnerabilities in repetitive-use authentication secrets like static passwords. The passwords used by users on the client side are vulnerable, as attackers can gain access to a user&#39;s password using different types of viruses while it is being typed. These attacks are leading many Grid sites to explore one-time password solutions for authentication in Grid deployments. We present here a novel mechanism called N-LSB, in which Grid security is integrated with a modified LSB based steganographic technique in order to meet the higher security demands of Grid credentials.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_14-An_Approach_to_Keep_Credentials_Secured_in_Grid_Computing_Environment_for_the_Safety_of_Vital_Computing_Resources.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Analysis Over the Four Different Feature-Based Face and Iris Biometric Recognition Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031013</link>
        <id>10.14569/IJACSA.2012.031013</id>
        <doi>10.14569/IJACSA.2012.031013</doi>
        <lastModDate>2012-10-31T09:54:30.3370000+00:00</lastModDate>
        
        <creator>Deepak Sharma</creator>
        
        <creator>Dr. Ashok Kumar</creator>
        
        <subject>Multi-modal biometrics; Face Recognition; iris recognition; LBP operator (Local Binary Pattern); Local Gabor XOR Patterns; PCA and EMD.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Recently, multimodal biometric systems have been widely accepted, as they have shown increased accuracy and population coverage while reducing vulnerability to spoofing. The main feature of multimodal biometrics is the amalgamation of data from different biometric modalities at the feature extraction, matching score, or decision level. Many works on multi-modal biometric recognition have recently been presented in the literature. In this paper, we present a comparative analysis of four different feature extraction approaches: LBP, LGXP, EMD and PCA. The main steps involved in these four approaches are: 1) feature extraction from the face image, 2) feature extraction from the iris image and 3) fusion of face and iris features. The performance of the feature extraction methods in multi-modal recognition is analyzed using FMR and FNMR to study the recognition behavior of these approaches. An extensive analysis is then carried out to find the effectiveness of the different approaches using two different databases. The experimental results show the equal error rate of the different feature extraction approaches in multi-modal biometric recognition. From the ROC curves plotted, the performance of the LBP and LGXP methods is better than that of the PCA-based technique.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_13-An_Empirical_Analysis_Over_The_Four_Different_Feature-Based_Face_And_Iris_Biometric_Recognition_Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development of Mobile Client Application in Yogyakarta Tourism and Culinary Information System Based on Social Media Integration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031012</link>
        <id>10.14569/IJACSA.2012.031012</id>
        <doi>10.14569/IJACSA.2012.031012</doi>
        <lastModDate>2012-10-31T09:54:26.3900000+00:00</lastModDate>
        
        <creator>Novrian Fajar Hidayat</creator>
        
        <creator>Ridi Ferdiana</creator>
        
        <subject>Social Network; Information Service; Culinary; Tourism; Windows Phone 7.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Social networks are currently an important part of people&#39;s lives. Their many users make social networks an effective publication medium. One of the many things that can be published on social networks is tourism. Indonesia has a lot of tourism and culinary attractions, especially in the Special District of Yogyakarta. Tourism and culinary resources in Yogyakarta can be published and shared using social networks. In addition, the development of mobile technology and smartphones makes it easier to access social networks through the internet.
The release of Windows Phone 7 brings new color to the world of smartphones. Windows Phone 7 comes with an elegant interface, Metro Style. Besides that, its standardized specification makes Windows Phone 7 suitable for integrating social networks with tourism and culinary content for the Special District of Yogyakarta.
This research is expected to integrate social networks with tourism and culinary content in Yogyakarta. The research uses the ICONIX method, one of the methods that combines waterfall and agile approaches. The result of this study takes the form of an application that runs on Windows Phone 7 and consumes a web service. This application provides information especially for tourists, so that they can easily find culinary and tourism attractions in Yogyakarta.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_12-The_Development_of_Mobile_Client_Application_in_Yogyakarta_Tourism_and_Culinary_Information_System_Based_on_Social_Media_Integration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Shape Prediction Linear Algorithm Using Fuzzy</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031011</link>
        <id>10.14569/IJACSA.2012.031011</id>
        <doi>10.14569/IJACSA.2012.031011</doi>
        <lastModDate>2012-10-31T09:54:22.4570000+00:00</lastModDate>
        
        <creator>Navjot Kaur</creator>
        
        <creator>Sheetal Kundra</creator>
        
        <creator>Harish Kundra</creator>
        
        <subject>Shape prediction; Shape recognition; Feature extraction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>The goal of the proposed method is to develop a fuzzy-based shape prediction algorithm that is computationally fast and invariant. To predict overlapping and joined shapes accurately, a method of shape prediction based on erosion and over-segmentation is used to estimate values for dependent variables from previously unseen predictor values, based on the variation in an underlying learning data set.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_11-Shape_Prediction_Linear_Algorithm_Using_Fuzzy.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defending Polymorphic Worms in Computer Network using Honeypot</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031010</link>
        <id>10.14569/IJACSA.2012.031010</id>
        <doi>10.14569/IJACSA.2012.031010</doi>
        <lastModDate>2012-10-31T09:54:20.3830000+00:00</lastModDate>
        
        <creator>R. T. Goswamia</creator>
        
        <subject>Polymorphic worm; Honeypot; Honeynet; Sticky honeypot;  Cloud computing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Polymorphic worms are a major threat to internet infrastructure security. In this mechanism we use a gate-translator, a double honeypot, a sticky honeypot, an internal translator and the Cloud AV antivirus, which attract polymorphic worms. We propose an algorithm to detect and remove polymorphic worms among packets of innocuous traffic.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_10-Defending_Polymorphic_Worms_in_Computer_Network_using_Honeypot.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis Of Multi Source Fused Medical Images Using Multiresolution Transforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031009</link>
        <id>10.14569/IJACSA.2012.031009</id>
        <doi>10.14569/IJACSA.2012.031009</doi>
        <lastModDate>2012-10-31T09:54:16.4370000+00:00</lastModDate>
        
        <creator>Ch. Hima Bindu</creator>
        
        <creator>Dr. K. Satya Prasad</creator>
        
        <subject>Image Fusion; Discrete Wavelet Transform; Contourlet Transform; Fast Discrete Curvelet Transform; Nonsubsampled Contourlet Transform.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Image fusion combines information from multiple images of the same scene to get a composite image that is more suitable for human visual perception or further image-processing tasks. In this paper multi-source medical images such as MRI (magnetic resonance imaging), CT (computed tomography) &amp; PET (positron emission tomography) are fused using different multi-scale transforms. We compare various multi-resolution transform algorithms, especially the latest developed methods, such as the Nonsubsampled Contourlet Transform, Fast Discrete Curvelet, Contourlet, Discrete Wavelet Transform and a hybrid method (combination of DWT &amp; Contourlet), for image fusion. The fusion operations are performed with all multi-resolution transforms. Fusion rules such as local maxima and spatial frequency techniques are used for selecting the low-frequency and high-frequency subband coefficients, which preserves more information and quality in the fused image. The fused output is obtained after the inverse transform of the fused subband coefficients. The experimental results show the effectiveness of the fusion approaches in fusing multi-source images.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_9-Performance_Analysis_Of_Multi_Source_Fused_Medical_Images_Using_Multiresolution_Transforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Scheme for Fused Medical Image Segmentation with Nonsubsampled Contourlet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031008</link>
        <id>10.14569/IJACSA.2012.031008</id>
        <doi>10.14569/IJACSA.2012.031008</doi>
        <lastModDate>2012-10-31T09:54:14.3600000+00:00</lastModDate>
        
        <creator>Ch. Hima Bindu</creator>
        
        <creator>Dr. K. Satya Prasad</creator>
        
        <subject>Nonsubsampled Contourlet Transform; Image Fusion; Automatic Segmentation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Medical image segmentation has become an essential technique in clinical and research-oriented applications. Because manual segmentation methods are tedious and semi-automatic segmentation lacks flexibility, fully automatic methods have become the preferred type of medical image segmentation. This work proposes a robust fully automatic segmentation scheme based on a modified contouring technique. The entire scheme consists of three stages. In the first stage, the Nonsubsampled Contourlet Transform (NSCT) of the image is computed, followed by the fusion of coefficients using a fusion method; a local threshold is then computed for the fused image. In the second stage, the initial points are determined by computing a global threshold. Finally, in the third stage, a searching procedure is started from each initial point to obtain closed-loop contours. The whole process is fully automatic, which avoids the disadvantages of semi-automatic schemes such as manually selecting the initial contours and points.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_8-Automatic_Scheme_for_Fused_Medical_Image_Segmentation_with_Nonsubsampled_Contourlet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Strategy to Improve The Usage of ICT in The Kingdom of Saudi Arabia Primary School</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031007</link>
        <id>10.14569/IJACSA.2012.031007</id>
        <doi>10.14569/IJACSA.2012.031007</doi>
        <lastModDate>2012-10-31T09:54:12.3030000+00:00</lastModDate>
        
        <creator>Gafar Almalki</creator>
        
        <creator>Neville Williams</creator>
        
        <subject>ICT;  primary school;  barrier;  strategy.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Integration of ICT in education is a complex idea that requires practical interpretation to achieve significant outcomes. As a developing country, the Kingdom of Saudi Arabia (the KSA) does not have a technological infrastructure comparable to that of developed countries. Efficient strategies are vital for improving the application of ICT in the KSA’s primary schools effectively. Improving the usage of ICT in KSA primary schools entails integrating ICT into the classroom. However, some barriers that prevent successful ICT implementation in primary schools are still present. This paper proposes several strategies to overcome these challenges, together with several recommendations for ICT integration in primary schools applicable to the case of the KSA. These strategies are executable at the school and national scale.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_7-A_Strategy_to_Improve_The_Usage_of_ICT_in_The_Kingdom_of_Saudi_Arabia_Primary_School.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Distributed Method to Localization for Mobile Sensor Networks based on the convex hull</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031006</link>
        <id>10.14569/IJACSA.2012.031006</id>
        <doi>10.14569/IJACSA.2012.031006</doi>
        <lastModDate>2012-10-31T09:54:08.3570000+00:00</lastModDate>
        
        <creator>Yassine SABRI</creator>
        
        <creator>Najib EL KAMOUN</creator>
        
        <subject>wireless sensor network (WSN); Mobility; Localization; scalability.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>There has recently been a trend of exploiting the heterogeneity in WSNs and the mobility of either the sensor nodes or the sink nodes to facilitate data dissemination in WSNs. Recently, there has been much focus on mobile sensor networks, and we have even seen the development of small-profile sensing devices that are able to control their own movement. Although it has been shown that mobility alleviates several issues relating to sensor network coverage and connectivity, many challenges remain. Among these, the need for position estimation is perhaps the most important. Not only is localization required to understand sensor data in a spatial context, but also for navigation, a key feature of mobile sensors. This paper concerns the localization problem in the case where all nodes in the network (anchors and other sensors) are mobile. We propose a technique that follows the capabilities of the nodes: each node obtains either an exact position or an approximate position with knowledge of the maximal error bound. We also adapt the periods at which nodes invoke their localization. Simulation results show the performance of our method in terms of accuracy and determine the technique most adapted to the network configurations.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_6-A_Distributed_Method_to_Localization_for_Mobile_Sensor_Networks_based_on_the_convex_hull.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MDSA: Modified Distributed Storage Algorithm for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031005</link>
        <id>10.14569/IJACSA.2012.031005</id>
        <doi>10.14569/IJACSA.2012.031005</doi>
        <lastModDate>2012-10-31T09:54:04.4070000+00:00</lastModDate>
        
        <creator>Mohamed Labib Borham</creator>
        
        <creator>Mostafa-Sami Mostafa</creator>
        
        <creator>Hossam Eldeen Moustafa Shamardan</creator>
        
        <subject>Distributed storage; encoding; decoding; flooding; multicasting; unicasting.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>In this paper, we propose a modified distributed storage algorithm for wireless sensor networks (MDSA). Wireless sensor networks, as is well known, suffer from power limitations, small memory capacity, and limited processing capabilities. Therefore, every node may disappear temporarily or permanently from the network due to many different reasons, such as battery failure or physical damage. Since every node collects significant data about its region, it is important to find a methodology to recover these data in case of failure of the source node. Distributed storage algorithms provide reliable access to data through redundancy spread over individually unreliable nodes. The proposed algorithm uses flooding to spread data over the network and unicasting to provide controlled data redundancy through the network. We evaluate the performance of the proposed algorithm through implementation and simulation, and we show the results and the performance evaluation of the proposed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_5-MDSA_Modified_Distributed_Storage_Algorithm_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A semantic cache for enhancing Web services communities activities: Health care case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031004</link>
        <id>10.14569/IJACSA.2012.031004</id>
        <doi>10.14569/IJACSA.2012.031004</doi>
        <lastModDate>2012-10-31T09:54:01.4130000+00:00</lastModDate>
        
        <creator>Hela Limam</creator>
        
        <creator>Jalel Akaichi</creator>
        
        <subject>Web services community; semantic description; health care.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Collective memories are a strong support for enhancing the activities of capitalization, management and dissemination inside a Web services community. To take advantage of collective memory, we propose an approach for indexing a health care Web services community’s resources with semantic annotations explaining and formalizing their informative content. Then we show how the health care Web services community’s members exploit their collective memory by expressing queries that allow them to search for relevant resources in order to perform their activities.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_4-A_semantic_cache_for_enhancing_Web_services_communities_activities_Health_care_case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Tree Classification Model for University Admission System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031003</link>
        <id>10.14569/IJACSA.2012.031003</id>
        <doi>10.14569/IJACSA.2012.031003</doi>
        <lastModDate>2012-10-31T09:53:57.4830000+00:00</lastModDate>
        
        <creator>Abdul Fattah Mashat</creator>
        
        <creator>Mohammed M. Fouad</creator>
        
        <creator>Philip S. Yu</creator>
        
        <creator>Tarek F. Gharib</creator>
        
        <subject>Data Mining; Supervised Learning; Decision Tree; University Admission System;  Model Evaluation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>Data mining is the science and techniques used to analyze data to discover and extract previously unknown patterns. It is also considered a main part of the process of knowledge discovery in databases (KDD).  In this paper, we introduce a supervised learning technique of building a decision tree for King Abdulaziz University (KAU) admission system. The main objective is to build an efficient classification model with high recall under moderate precision to improve the efficiency and effectiveness of the admission process. We used ID3 algorithm for decision tree construction and the final model is evaluated using the common evaluation methods. This model provides an analytical view of the university admission system.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_3-A_Decision_Tree_Classification_Model_for_University_Admission_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Semantics for Concurrent Logic Programming Languages Based on Multiple-Valued Logic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031002</link>
        <id>10.14569/IJACSA.2012.031002</id>
        <doi>10.14569/IJACSA.2012.031002</doi>
        <lastModDate>2012-10-31T09:53:55.3130000+00:00</lastModDate>
        
        <creator>Marion Glazerman Ben-Jacob</creator>
        
        <subject>concurrent logic programming; multiple-valued logic; denotational semantics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>In order to obtain an understanding of parallel logic thought, it is necessary to establish a fully abstract model of the denotational semantics of logic programming languages. In this paper, a fixed point semantics for the committed-choice, non-deterministic family of parallel programming languages, i.e. the concurrent logic programming languages, is developed. The approach is from an order-theoretic viewpoint. We rigorously define a semantics for a Guarded Horn Clauses-type of language because of the minimal restrictions of the language. The extension to other concurrent logic programming languages would be direct and analogous, based on their specific rules of suspension. Today’s world is replete with multitasking and parallelism in general. The content of this paper reflects a paradigm of an application of multi-valued logic that mirrors this.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_2-A_Semantics_for_Concurrent_Logic_Programming_Languages_Based_on_Multiple-_Valued_Logic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Challenges of Future R&amp;D in Mobile Communications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.031001</link>
        <id>10.14569/IJACSA.2012.031001</id>
        <doi>10.14569/IJACSA.2012.031001</doi>
        <lastModDate>2012-10-31T09:53:51.5070000+00:00</lastModDate>
        
        <creator>Dr. Anwar M. Mousa</creator>
        
        <subject>R&amp;D; NGNs; re-configurability; VANET; nanotechnology; cloud computing; HAP; WWWW; WSNs; e-healthcare;  wearable devices.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(10), 2012</description>
        <description>This paper provides a survey of the main challenges of future research and development (R&amp;D) for next generation mobile networks (NGNs). It addresses software and hardware re-configurability with a focus on reconfigurable coupling and interworking amongst heterogeneous wireless access networks. It also explores promising technologies for NGNs such as nanotechnology, cloud computing and texting by thinking. The paper then highlights the challenging research areas for enhancing network performance and affordability, beginning with all-IP networks and security issues, Vehicular Ad Hoc Networks (VANET) and the necessity of high data rates at cell edges. Investigating the viability of cooperative networks and one unified global standard for NGNs, the paper analyzes macro-diversity and advanced multi-cell coordination for mitigating inter-cell interference. Direct device-to-device communication and global coverage using satellites and high-altitude platforms are presented as possible evolution paths for space roaming and extending NGNs’ radio access. Finally, the paper inspects the most developing applications for NGNs, such as the World-Wide Wireless Web (WWWW), machine-type communication, wireless sensor networks (WSNs), e-healthcare systems and wearable devices with AI capabilities.</description>
        <description>http://thesai.org/Downloads/Volume3No10/Paper_1-Challenges_of_Future_RandD_in_Mobile_Communications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification Filtering with fuzzy estimations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010706</link>
        <id>10.14569/IJARAI.2012.010706</id>
        <doi>10.14569/IJARAI.2012.010706</doi>
        <lastModDate>2012-10-09T11:48:47.9930000+00:00</lastModDate>
        
        <creator>J.J Medel J</creator>
        
        <creator>J. C. Garcia I</creator>
        
        <creator>J. C. Sanchez G</creator>
        
        <subject>Intelligent Identification;  Digital identification filter; Fuzzy estimation; Signal processing; Probability.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>A digital identification filter interacts with an output reference model signal known as a black-box output system. The identification technique commonly needs the transition and gain matrices. Both estimation cases are based on a mean square criterion, obtaining the minimum output error as the best estimation filtering. The evolution system exhibits adaptive properties that the identification mechanism includes, considering fuzzy logic strategies that affect, in a probabilistic sense, the evolution of the identification filter. The fuzzy estimation filter allows describing the transition and gain matrices in two forms, applying actions that affect the identification structure. Basically, the adaptive criterion conforms the set of inference mechanisms and the Knowledge and Rule bases, selecting the optimal coefficients in distribution form. This paper describes the fuzzy strategies applied to the Kalman filter transition function and gain matrices. The simulation results were developed using Matlab&#169;.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_6-Identification_Filtering_with_fuzzy_estimations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brain Computer Interface Boulevard of Smarter Thoughts</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010705</link>
        <id>10.14569/IJARAI.2012.010705</id>
        <doi>10.14569/IJARAI.2012.010705</doi>
        <lastModDate>2012-10-09T11:48:44.0130000+00:00</lastModDate>
        
        <creator>Sumit Ghulyani</creator>
        
        <creator>Yashasvi Pratap</creator>
        
        <creator>Sumit Bisht</creator>
        
        <creator>Ravideep Singh</creator>
        
        <subject>Brain Computer Interface (BCI); Blood Oxygen Level Dependent (BOLD); Electrocorticography (ECoG); Practical Electrical Stimulation (PES); Electroencephalograph (EEG); Magnetic Resonance Imaging (MRI); Functional Magnetic Resonance Imaging (fMRI); brain scythe; motor cortex.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>The Brain Computer Interface is a major breakthrough for the technical industry, the medical world, the military and society as a whole. It is concerned with the control of devices around us, such as computing gear and, in the near future, even automobiles, without the physical intervention of the user. It helps bridge the communication gap between society and the disabled, focusing mainly on people suffering from brainstem stroke, spinal cord injury or even blindness. Because of the high risk of paralysis under such circumstances, BCI helps such patients retain or restore communication with the outside world through intelligent signals from the brain. This is achieved by a signal acquisition technique that converts the signals available from sensors placed on the scalp into real-time computer commands that can be visually operated and understood. It has nothing to do with the natural neural transmission of brain signals but extracts them with the help of sensors to be processed, directing the outputs to an external device. This may also prove to be a major military gadget, allowing troops to communicate their thoughts in highly stressed situations without breaking the hush. But, as every technology has its merits and demerits, so does BCI.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_5-Brain_Computer_Interface_Boulevard_of_Smarter_Thoughts.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temporally Annotated Extended Logic Programs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010704</link>
        <id>10.14569/IJARAI.2012.010704</id>
        <doi>10.14569/IJARAI.2012.010704</doi>
        <lastModDate>2012-10-09T11:48:41.8770000+00:00</lastModDate>
        
        <creator>Anastasia Analyti</creator>
        
        <creator>Ioannis Pachoulakis</creator>
        
        <subject>Extended logic programs; validity temporal intervals; temporal inference; query answering.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>Extended logic programs (ELPs) are sets of logic rules in which strong negation is allowed in the bodies or heads of the rules and weak negation ~ is allowed in the bodies of the rules. ELPs enable various forms of reasoning that cannot be achieved by definite logic programs. Answer Set Programming provides a widely accepted semantics for ELPs. However, ELPs do not provide information regarding the temporal intervals during which derived ELP literals or weakly negated ELP literals are valid. In this paper, we associate ELP rules with their validity temporal interval, resulting in a temporally annotated logic program. A ground temporal literal has the form L:i, where L is a ground ELP literal or weakly negated ELP literal and i is a temporal interval. We define (simple) entailment and maximal entailment of a ground temporal literal L:i from a temporally annotated logic program C. Both kinds of entailment are based on Answer Set Programming. Additionally, we provide an algorithm that, for an ELP literal or a weakly negated ELP literal L, returns a list of all temporal intervals i such that a temporally annotated logic program C maximally entails L:i. Based on this algorithm, answers to various kinds of temporal queries can be provided.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_4-Temporally_Annotated_Extended_Logic_Programs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Provenance and Temporally Annotated Logic Programming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010703</link>
        <id>10.14569/IJARAI.2012.010703</id>
        <doi>10.14569/IJARAI.2012.010703</doi>
        <lastModDate>2012-10-09T11:48:39.7730000+00:00</lastModDate>
        
        <creator>Anastasia Analyti</creator>
        
        <creator>Ioannis Pachoulakis</creator>
        
        <subject>Annotated logic programming; provenance and temporal information; model theory; consequence operator.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>In this paper, we consider provenance and temporally annotated logic rules (pt-logic rules, for short), which are definite logic programming rules associated with the name of the source from which they originate and the temporal interval during which they are valid. A collection of pt-logic rules forms a provenance and temporally annotated logic program P, called a pt-logic program, for short. We develop a model theory for P and define its maximally temporal entailments of the form A:&lt;S, ti&gt;, indicating that atom A is derived from a set of sources S and holds at a maximal temporal interval ti, according to S. We define a consequence operator that derives exactly the maximally temporal entailments of P for a set of sources. We show that the complexity of the considered entailment is EXPTIME-complete.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_3-Provenance_and_Temporally_Annotated_Logic_Programming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Classification of the Real-Time Interaction-Based Behavior of Online Game Addiction in Children and Early Adolescents in Thailand</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010702</link>
        <id>10.14569/IJARAI.2012.010702</id>
        <doi>10.14569/IJARAI.2012.010702</doi>
        <lastModDate>2012-10-09T11:48:37.6800000+00:00</lastModDate>
        
        <creator>Kongkarn Vachirapanang</creator>
        
        <creator>Sakoontip Tuisima</creator>
        
        <creator>Sukree Sinthupinyo</creator>
        
        <creator>Puntip Sirivunnabood</creator>
        
        <subject>online game addiction; classification; intelligent agent; real time.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>This paper aims to study the actual behaviors of Thai children and early adolescents with different levels of game addiction while playing online games, from the angle of the interaction between a user and a computer. Real-time interaction-based behavior data from a program agent installed in personal computers in 20 sample houses were screened, with consent given by the children and their parents. Data about game-playing periods, frequency, game-playing times, text-based chatting, mouse clicks and keyboard typing during the game were collected over two months, along with four in-depth case-study interviews with addicted players and their parents. The results revealed a novel method to classify the online game addiction level of children and early adolescents from mouse click and keyboard typing data, and also found relationships between the recorded playing data and game addiction risk conditions and risk behaviors, as explained in the article.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_2-The_Classification_of_the_Real-Time_Interaction-Based_Behavior_of_Online_Game_Addiction_in_Children_and_Early_Adolescents_in_Thailand.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evacuation Path Selection for Firefighters Based on Dynamic Triangular Network Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010701</link>
        <id>10.14569/IJARAI.2012.010701</id>
        <doi>10.14569/IJARAI.2012.010701</doi>
        <lastModDate>2012-10-09T11:48:33.6570000+00:00</lastModDate>
        
        <creator>Qianyi Zhang</creator>
        
        <creator>Demin Li</creator>
        
        <creator>Yajuan An</creator>
        
        <subject>Evacuation Path Selection; Information Fusion; Dynamic Triangular Network Model.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(7), 2012</description>
        <description>Path selection is one of the critical aspects of emergency evacuation. In a fire scene, how to choose an optimal evacuation path for firefighters is a challenging problem. In this paper, firstly, a dynamic triangular network model formed by robots is presented. On the basis of this model, a directed graph is established in order to calculate direct paths. Then multi-parameter information fusion, which includes smoke density, temperature and oxygen density, is discussed in detail for environment safety evaluation. Based on these discussions, a new way is proposed for optimal path selection, taking into consideration the safety factor of the path. The objectives of the method are to minimize the path lengths and, at the same time, to protect firefighters from dangerous regions. In the end, numerical simulation results prove the feasibility and superiority of this method.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No7/Paper_1-Evacuation_Path_Selection_for_Firefighters_Based_on_Dynamic_Triangular_Network_Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Fault Location Method Research of Three-Layer Network System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010605</link>
        <id>10.14569/IJARAI.2012.010605</id>
        <doi>10.14569/IJARAI.2012.010605</doi>
        <lastModDate>2012-09-10T14:27:38.4170000+00:00</lastModDate>
        
        <creator>Hu Shaolin</creator>
        
        <creator>Li Ye</creator>
        
        <creator>Karl Meinke</creator>
        
        <subject>Three-layer network; fault propagation; fault location.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(6), 2012</description>
        <description>Research on fault location for three-layer network system structures has important theoretical value and clear engineering application value for exploring fault detection and localization in complex dynamic systems. In this article, the methods of failure propagation and adverse inference are adopted; a fault location algorithm for the three-layer dynamic network system is established on the basis of the association matrix concept; the corresponding calculation method is proposed; and simulation confirms the reliability of the approach. The results of this research can be used for the fault diagnosis of hierarchical control systems, the testing of engineering software, and the analysis of failure effects in layered networks of all kinds, among other fields.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No6/Paper_5-The_Fault_Location_Method_Research_of_Three-Layer_Network_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Visual Working Efficiency Analysis Method of Cockpit Based On ANN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010604</link>
        <id>10.14569/IJARAI.2012.010604</id>
        <doi>10.14569/IJARAI.2012.010604</doi>
        <lastModDate>2012-09-10T14:27:38.3870000+00:00</lastModDate>
        
        <creator>Yingchun CHEN</creator>
        
        <creator>Dongdong WEI</creator>
        
        <creator>Gang SUN</creator>
        
        <subject>Visual Working Efficiency; Artificial Neural Networks; Cockpit; BP; SOM.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(6), 2012</description>
        <description>An Artificial Neural Network method is applied to the visual working efficiency of a cockpit. A Self-Organizing Map (SOM) network is demonstrated for selecting materials with similar properties. Then a Back-Propagation (BP) network automatically learns the relationship between input and output. After training, the BP network is able to estimate material characteristics using the knowledge and criteria learned before. Results indicate that the trained network can give effective predictions for materials.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No6/Paper_4-Visual_Working_Efficiency_Analysis_Method_of_Cockpit_Based_On_ANN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mesopic Visual Performance of Cockpit’s Interior based on Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010603</link>
        <id>10.14569/IJARAI.2012.010603</id>
        <doi>10.14569/IJARAI.2012.010603</doi>
        <lastModDate>2012-09-10T14:27:38.3570000+00:00</lastModDate>
        
        <creator>Dongdong WEI</creator>
        
        <creator>Gang SUN</creator>
        
        <subject>Mesopic Vision; Cockpit; Artificial Neural Network; BP; SOM.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(6), 2012</description>
        <description>The ambient light of a cockpit is usually in the mesopic vision range, and it is mainly determined by the cockpit’s interior. In this paper, an SB model is proposed to simplify the relationship between mesopic luminous efficiency and the different photometric and colorimetric variables in the cockpit. A Self-Organizing Map (SOM) network is demonstrated for classifying and selecting samples. A Back-Propagation (BP) network can then automatically learn the relationship between material characteristics and mesopic luminous efficiency. Compared with the MOVE model, the SB model can quickly calculate mesopic luminous efficiency with a certain accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No6/Paper_3-Mesopic_Visual_Performance_of_Cockpit’s_Interior_based_on_Artificial_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel 9/7 Wavelet Filter banks For Texture Image Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010602</link>
        <id>10.14569/IJARAI.2012.010602</id>
        <doi>10.14569/IJARAI.2012.010602</doi>
        <lastModDate>2012-09-10T14:27:38.2470000+00:00</lastModDate>
        
        <creator>Songjun Zhang</creator>
        
        <creator>Guoan Yang</creator>
        
        <creator>Zhengxing Cheng</creator>
        
        <creator>Huub van de Wetering</creator>
        
        <subject>9/7 wavelet filter banks; image coding; lifting scheme; texture image; Brodatz database.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(6), 2012</description>
        <description>This paper proposes a novel 9/7 wavelet filter bank for texture image coding applications based on lifting a 5/3 filter to a 7/5 filter, and then to a 9/7 filter. Moreover, a one-dimensional optimization problem for the above 9/7 filter family is carried out according to the perfect reconstruction (PR) condition of wavelet transforms and wavelet properties. Finally, the optimal control parameter of the 9/7 filter family for image coding applications is determined by statistical analysis of compressibility tests applied on all the images in the Brodatz standard texture image database. Thus, a new 9/7 filter with only rational coefficients is determined. Compared to the design method of Cohen, Daubechies, and Feauveau, the design approach proposed in this paper is simpler and easier to implement. The experimental results show that the overall coding performances of the new 9/7 filter are superior to those of the CDF 9/7 filter banks in the JPEG2000 standard, with a maximum increase of 0.185315 dB at compression ratio 32:1. Therefore, this new 9/7 filter bank can be applied in image coding for texture images as the transform coding kernel.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No6/Paper_2-A_Novel_97_Wavelet_Filter_banks_For_Texture_Image_Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Gait Gender Classification in Spatial and Temporal Reasoning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010601</link>
        <id>10.14569/IJARAI.2012.010601</id>
        <doi>10.14569/IJARAI.2012.010601</doi>
        <lastModDate>2012-09-10T14:27:38.0730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Rosa Andrie Asmara</creator>
        
        <subject>Gait Gender Classification; Gait Energy Motion; CASIA Gait Dataset.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(6), 2012</description>
        <description>Biometric technology has already become one of the main approaches to identification. Almost every organ in the human body can be used as an identification unit because of its unique characteristics. Many researchers have focused on physical biometric characteristics such as fingerprints, the human face, palm prints, the eye iris, and DNA, as well as behavioral characteristics such as manner of speaking, voice, and gait. Human gait has recently become a popular object of biometric recognition. An important advantage of gait recognition compared to other biometrics is that it does not require the observed subject’s attention or assistance. This paper proposes gender classification using human gait video data. Many human gait datasets have been created within the last 10 years; some widely used databases are the University of South Florida (USF) Gait Dataset, the Chinese Academy of Sciences (CASIA) Gait Dataset, and the Southampton University (SOTON) Gait Dataset. This paper classifies human gender with spatial-temporal reasoning using the CASIA Gait Database. Using a Support Vector Machine as the classifier, the classification result is 97.63% accuracy.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No6/Paper_1-Human_Gait_Gender_Classification_in_Spatial_and_Temporal_Reasoning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Review of Remote Terminal Unit (RTU) and Gateways for Digital Oilfield deployments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030826</link>
        <id>10.14569/IJACSA.2012.030826</id>
        <doi>10.14569/IJACSA.2012.030826</doi>
        <lastModDate>2012-08-30T20:05:20.3670000+00:00</lastModDate>
        
        <creator>Francis Enejo Idachaba</creator>
        
        <creator>Ayobami Ogunrinde</creator>
        
        <subject>Digital Oilfield; Gateway; HMI; i-fields; RTU; Smartfields.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>The increasing decline in easy oil has led to a growing need for the optimization of oil and gas processes. Digital oilfields utilize remote operations to achieve these optimization goals, and remote terminal units and gateways are critical to realizing this objective. This paper presents a review of the RTUs and gateways utilized in digital oilfield architectures, covering their architecture, functionality, and selection criteria. It also provides a comparison of the specifications of some popular RTUs.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_26-Review_of_Remote_Terminal_Unit_(RTU)_and_Gateways_for_Digital_Oilfield_delpoyments.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prevention and Detection of Financial Statement Fraud – An Implementation of Data Mining Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030825</link>
        <id>10.14569/IJACSA.2012.030825</id>
        <doi>10.14569/IJACSA.2012.030825</doi>
        <lastModDate>2012-08-30T20:05:16.7570000+00:00</lastModDate>
        
        <creator>Rajan Gupta</creator>
        
        <creator>Nasib Singh Gill</creator>
        
        <subject>Data mining framework; Rule engine; Rule monitor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Every day, news of financial statement fraud adversely affects the economy worldwide. Considering the losses incurred due to fraud, effective measures and methods should be employed for the prevention and detection of financial statement fraud. Data mining methods could assist auditors in the prevention and detection of fraud, because data mining can use past cases of fraud to build models that identify and detect the risk of fraud, and can support new techniques for preventing fraudulent financial reporting. In this study we implement a data mining methodology for preventing fraudulent financial reporting in the first place and for detecting fraud that has been perpetrated. The association rules generated in this study are of great importance for both researchers and practitioners in preventing fraudulent financial reporting. The decision rules produced in this research complement the prevention mechanism by detecting financial statement fraud.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_25-Prevention_and_Detection_of_Financial_Statement_Fraud_–_An_Implementation_of_Data_Mining_Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach of Improving Student’s Academic Performance by using K-means clustering algorithm and Decision tree</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030824</link>
        <id>10.14569/IJACSA.2012.030824</id>
        <doi>10.14569/IJACSA.2012.030824</doi>
        <lastModDate>2012-08-30T20:05:12.9370000+00:00</lastModDate>
        
        <creator>Hedayetul Islam Shovon</creator>
        
        <creator>Mahfuza Haque</creator>
        
        <subject>Database; Data clustering; Data mining; classification; prediction; Assessments; Decision tree; academic performance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Improving students’ academic performance is not an easy task for the academic community of higher learning. The academic performance of engineering and science students during their first year at university is a turning point in their educational path and usually affects their Grade Point Average (GPA) in a decisive manner. Student evaluation factors such as class quizzes, mid-term and final exams, assignments, and lab work are studied. It is recommended that all of this correlated information be conveyed to the class teacher before the final exam is conducted. This study will help teachers reduce the dropout ratio to a significant degree and improve the performance of students. In this paper, we present a hybrid procedure based on the decision tree data mining method and data clustering that enables academicians to predict students’ GPA, based on which the instructor can take the necessary steps to improve student academic performance.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_24-An_Approach_of_Improving_Student’s_Academic_Performance_by_using_K-means_clustering_algorithm_and_Decision_tree.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the Projection Matrices Influence in the Classification of Compressed Sensed ECG Signals</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030823</link>
        <id>10.14569/IJACSA.2012.030823</id>
        <doi>10.14569/IJACSA.2012.030823</doi>
        <lastModDate>2012-08-30T20:05:09.2170000+00:00</lastModDate>
        
        <creator>Monica Fira</creator>
        
        <creator>Liviu Goras</creator>
        
        <creator>Nicolae Cleju</creator>
        
        <creator>Constantin Barabasa</creator>
        
        <subject>ECG; compressed sensing; projection matrix; classification; KNN.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>In this paper the classification results of compressed sensed ECG signals based on various types of projection matrices are investigated. The compressed signals are classified using the KNN (K-Nearest Neighbour) algorithm. A comparative analysis is made with respect to the projection matrices used, as well as to the results obtained for the original (uncompressed) signals at various compression ratios. For Bernoulli projection matrices it has been observed that the classification results for compressed cardiac cycles are comparable to those obtained for uncompressed cardiac cycles. Thus, for normal uncompressed cardiac cycles a classification rate of 91.33% was obtained, while for the signals compressed with a Bernoulli matrix, up to a compression ratio of 15:1, classification rates of approximately 93% were obtained. Significant improvements in classification in the compressed space take place up to a compression ratio of 30:1.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_23-On_the_Projection_Matrices_Influence_in_the_Classification_of_Compressed_Sensed_ECG_Signals.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clone Detection Using DIFF Algorithm For Aspect Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030822</link>
        <id>10.14569/IJACSA.2012.030822</id>
        <doi>10.14569/IJACSA.2012.030822</doi>
        <lastModDate>2012-08-30T20:05:05.5200000+00:00</lastModDate>
        
        <creator>Rowyda Mohammed Abd El-Aziz</creator>
        
        <creator>Amal Elsayed Aboutabl</creator>
        
        <creator>Mostafa-Sami Mostafa</creator>
        
        <subject>aspect mining; reverse engineering; clone detection; DIFF algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Aspect mining is a reverse engineering process that aims at mining legacy systems to discover crosscutting concerns to be refactored into aspects. This process improves system reusability and maintainability. However, locating crosscutting concerns in legacy systems manually is very difficult and error-prone, so there is a need for automated techniques that can discover crosscutting concerns in source code. Aspect mining approaches are automated techniques that vary according to the type of crosscutting-concern symptoms they search for. Code duplication is one such symptom, and it puts software maintenance and evolution at risk, so many code clone detection techniques have been proposed to find this duplicated code in legacy systems. In this paper, we present a clone detection technique that extracts exact clones from object-oriented source code using the Differential File Comparison Algorithm (DIFF), improving the system reusability and maintainability that are a major objective of aspect mining.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_22-Clone_Detection_Using_DIFF_Algorithm_For_Aspect_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>M-Commerce service systems implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030821</link>
        <id>10.14569/IJACSA.2012.030821</id>
        <doi>10.14569/IJACSA.2012.030821</doi>
        <lastModDate>2012-08-30T20:05:01.9230000+00:00</lastModDate>
        
        <creator>Asmahan Altaher</creator>
        
        <subject>M-commerce services; usefulness; ease of use; social interaction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Mobile commerce supports automated banking services, and the implementation of m-commerce service systems has become increasingly important in today’s dynamic banking environment. This research studied the relationships between the technology acceptance model and m-commerce services. The results of a survey of 249 respondents in several Jordanian banks revealed that the technology acceptance model had a significant impact on m-commerce services, leading to the recommendation that the technology acceptance model is a successful model for supporting the adoption of new electronic commerce services. In addition, managers play a significant role in influencing mobile services in banks through social interaction. Managers should focus on relative advantage, usefulness, and ease of use in order to develop the implementation of mobile commerce services.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_21-M-Commerce_service_systems_implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Techniques to improve the GPS precision</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030820</link>
        <id>10.14569/IJACSA.2012.030820</id>
        <doi>10.14569/IJACSA.2012.030820</doi>
        <lastModDate>2012-08-30T20:04:58.2400000+00:00</lastModDate>
        
        <creator>Nelson Acosta</creator>
        
        <creator>Juan Toloza</creator>
        
        <subject>GPS accuracy; relative positioning; DGPS; precision farming GPS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>The accuracy of a standard market GPS (Global Positioning System) receiver is about 10-15 meters 95% of the time. To reach a sub-metric level of accuracy, additional techniques must be used [1]. This article describes some of these procedures for improving positioning accuracy by using a low-cost GPS receiver in a differential relative positioning mode. The proposed techniques are variations of Kalman filtering, fuzzy logic, and information selection.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_20-Techniques_to_improve_the_GPS_precision.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Review On Cognitive Mismatch Between Computer and Information Technology And Physicians</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030819</link>
        <id>10.14569/IJACSA.2012.030819</id>
        <doi>10.14569/IJACSA.2012.030819</doi>
        <lastModDate>2012-08-30T20:04:54.5970000+00:00</lastModDate>
        
        <creator>Fozia Anwar</creator>
        
        <creator>Dr. Suziah Sulaiman</creator>
        
        <creator>Dr. P. D. D. Dominic</creator>
        
        <subject>cognitive mismatch; HIT; usability.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Health information technology has great potential to transform existing health care systems by making them safe, effective, and efficient. Multi-functionality and interoperability of health information systems are very important capabilities, yet they cannot be achieved without addressing the knowledge and skills of health care personnel. There is a great mismatch between the information technology knowledge and skills of physicians, as this discipline is completely missing from their education. The usability of health information technologies and systems, as well as evidence-based practice in the future, can therefore be improved by addressing this cognitive mismatch. This will result in a persistent partnership in HIS design between physicians and IT personnel, yielding maximum usability of the systems.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_19-A_Review_On_Cognitive_Mismatch_Between_Computer_and_Information_Technology_And_Physicians.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Brainstorming 2.0: Toward collaborative tool based on social networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030818</link>
        <id>10.14569/IJACSA.2012.030818</id>
        <doi>10.14569/IJACSA.2012.030818</doi>
        <lastModDate>2012-08-30T20:04:50.9830000+00:00</lastModDate>
        
        <creator>Mohamed Chrayah</creator>
        
        <creator>Kamal Eddine El Kadiri</creator>
        
        <creator>Boubker Sbihi</creator>
        
        <creator>Noura Aknin</creator>
        
        <subject>Web 2.0; brainstorming; social networks; UML.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Social networks are part of the Web 2.0 collaborative tools that have a major impact on enriching sharing and communication, enabling maximum collaboration and innovation globally among web users. It is in this context that this article is positioned, as part of a series of scientific research conducted by our research team that combines social networks with collaborative decision making on the net. It aims to provide a new open-source tool for solving various social problems posed by users in a collaborative Web 2.0 setting, bringing together the brainstorming method for generating ideas and social networks in order to gather the profiles best suited to a virtual brainstorming session. The tool is run by a user called the expert, accompanied by a number of users called validators, who drive the process of extracting ideas from the various users of the net. It then addresses the question of success by having the expert send a satisfaction questionnaire to the affected user, measuring both the user’s satisfaction and the success of the process launched. For its implementation, we propose a unified model using the UML language, followed by a realization in the Java language.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_18-Brainstorming_2.0_Toward_collaborative_tool_based_on_social_networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Effective Identification of Species from DNA Sequence: A Classification Technique by Integrating DM and ANN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030817</link>
        <id>10.14569/IJACSA.2012.030817</id>
        <doi>10.14569/IJACSA.2012.030817</doi>
        <lastModDate>2012-08-30T20:04:47.3930000+00:00</lastModDate>
        
        <creator>Sathish Kumar S</creator>
        
        <creator>Dr. N. Duraipandian</creator>
        
        <subject>Pattern Generation; DNA Sequence; Pattern Support; Mining; Neural Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Species classification from DNA sequences remains an open challenge in the area of bioinformatics, which deals with the collection, processing, and analysis of DNA and proteomic sequences. Though the incorporation of data mining can guide the process to perform well, the poor definition and heterogeneous nature of gene sequences remain a barrier. In this paper, an effective classification technique to identify an organism from its gene sequence is proposed. The proposed integrated technique is mainly based on pattern mining and neural network-based classification. In pattern mining, the technique mines nucleotide patterns and their support from the selected DNA sequence. The high dimension of the mined dataset is reduced using Multilinear Principal Component Analysis (MPCA). In classification, a well-trained neural network classifies the selected gene sequence, so the organism can be identified even from a part of the sequence. The proposed technique is evaluated by performing 10-fold cross validation, a statistical validation measure, and the obtained results prove the efficacy of the technique.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_17-An_Effective_Identification_of_Species_from_DNA_Sequence_A_Classification_Technique_by_Integrating_DM_and_ANN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Aircraft Target Recognition by ISAR Image Processing based on Neural Classifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030816</link>
        <id>10.14569/IJACSA.2012.030816</id>
        <doi>10.14569/IJACSA.2012.030816</doi>
        <lastModDate>2012-08-30T20:04:43.6970000+00:00</lastModDate>
        
        <creator>F Benedetto</creator>
        
        <creator>F. Riganti Fulginei</creator>
        
        <creator>A. Laudani</creator>
        
        <creator>G. Albanese</creator>
        
        <subject>Automatic target recognition; artificial intelligence; neural classifiers; ISAR image processing; shape extraction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>This work proposes a new automatic target classifier, based on a combined neural network system, for ISAR image processing. The novelty introduced in our work is twofold: we first present a novel automatic classification procedure, and then we discuss improved multimedia processing of ISAR images for automatic object detection. The classifier, composed of a combination of 20 feed-forward artificial neural networks, is used to recognize aircraft targets extracted from ISAR images. Multimedia processing based on two recently introduced image processing techniques is exploited to improve the shape and feature extraction process. Performance analysis is carried out in comparison with conventional multimedia techniques and standard detectors. Numerical results obtained from extensive simulation trials demonstrate the efficiency of the proposed method for automatic aircraft target recognition.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_16-Automatic_Aircraft_Target_Recognition_by_ISAR_Image_Processing_based_on_Neural_Classifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A hybrid Evolutionary Functional Link Artificial Neural Network for Data mining and Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030815</link>
        <id>10.14569/IJACSA.2012.030815</id>
        <doi>10.14569/IJACSA.2012.030815</doi>
        <lastModDate>2012-08-30T20:04:39.9600000+00:00</lastModDate>
        
        <creator>Faissal MILI</creator>
        
        <creator>Manel HAMDI</creator>
        
        <subject>Data mining; Classification; Functional link artificial neural network; genetic algorithms; Particle swarm; Differential evolution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>This paper presents a specific neural network structure, the functional link artificial neural network (FLANN). This technique has been employed for classification tasks in data mining; in fact, only a few studies have used this tool for solving classification problems. In this research, we propose a hybrid FLANN (HFLANN) model, where the optimization process is performed using three well-known population-based techniques: genetic algorithms, particle swarm optimization and differential evolution. This model is empirically compared to a FLANN trained with the back-propagation algorithm and to other classifiers such as decision trees, the multilayer perceptron trained with back-propagation, radial basis function networks, support vector machines, and K-nearest neighbor. Our results show that the proposed model outperforms the other single models.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_15-A_hybrid_Evolutionary_Functional_Link_Artificial_Neural_Network_for_Data_mining_and_Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Architecture- Evolution and Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030814</link>
        <id>10.14569/IJACSA.2012.030814</id>
        <doi>10.14569/IJACSA.2012.030814</doi>
        <lastModDate>2012-08-30T20:04:36.3600000+00:00</lastModDate>
        
        <creator>S Roselin Mary</creator>
        
        <creator>Dr. Paul Rodrigues</creator>
        
        <subject>Framework; Software Architecture; Views.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>The growth of various software architectural frameworks and models provides a standard governing structure for different types of organizations. Selecting a suitable framework for a particular environment requires much more detailed information on various aspects, and a reference guide of features should be provided. This paper traces the history of software architecture with a new evolution tree. It also technically analyses well-known frameworks used in industry and other governmental organizations and lists the supporting tools for them. The paper presents a comparative chart that can be used as a reference guide to understand top-level frameworks and as a basis for further research to enable and promote the utilization of these frameworks in various environments.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_14-Software_Architecture-_Evolution_and_Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managing Changes in Citizen-Centric Healthcare Service Platform using High Level Petri Net</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030813</link>
        <id>10.14569/IJACSA.2012.030813</id>
        <doi>10.14569/IJACSA.2012.030813</doi>
        <lastModDate>2012-08-30T20:04:32.7530000+00:00</lastModDate>
        
        <creator>Sabri MTIBAA</creator>
        
        <creator>Moncef TAGINA</creator>
        
        <subject>Healthcare; requirements changes; evolution; information technology; healthcare service platform; handle changes; reconfigurable Petri nets; consistency.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Healthcare organizations face a number of daunting challenges, pushing systems to deal with requirements changes and to benefit from modern technologies and telecom capabilities. System evolution through extension of the existing information technology infrastructure has become one of the most challenging aspects of healthcare, and adaptation to changes is a must. This paper presents a change management framework for a citizen-centric healthcare service platform. A combination of a Petri net model to handle changes and a reconfigurable Petri net model to react to these changes is introduced to fulfill healthcare goals. Thanks to this management framework model, the consistency and correctness of healthcare processes in the presence of frequent changes can be checked and guaranteed.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_13-Managing_Changes_in_Citizen-Centric_Healthcare_Service_Platform_using_High_Level_Petri_Net.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integration of data mining within a Strategic Knowledge Management framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/</link>
        <id></id>
        <doi></doi>
        <lastModDate>2012-08-30T20:04:29.1430000+00:00</lastModDate>
        
        <creator>Sanaz Moayer</creator>
        
        <creator>Scott Gardner</creator>
        
        <subject>Knowledge Management (KM); data mining; sustainable competitive advantage; Strategic Knowledge Management (SKM) framework; integration; hard and soft systems; Australian mining organisation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>In today’s globally interconnected economy, knowledge is recognised as a valuable intangible asset and source of competitive advantage for firms operating in both established and emerging industries. Within these contexts, Knowledge Management (KM) manifests as a set of organising principles and heuristics which shape management routines, structures, technologies and cultures within organisations. When employed as an integral part of business strategy, KM can blend and develop the expertise and capacity embedded in human and technological networks. This may improve processes or add value to products, services, brands and reputation. We argue that if located within a suitable strategic framework, KM can enable sustainable competitive advantage by mobilising the intangible value in networks to create products, processes or services with unique characteristics that are hard to substitute or replicate. Despite the promise of integrated knowledge strategies within high technology and professional service industries, there has been limited discussion of business strategies linked to Knowledge Management in traditional capital intensive industries such as mining and petroleum. Within these industries IT-centric Knowledge Management Systems (KMS) have dominated, with varying degrees of success, as business analysis, process improvement and cost reduction tools.
This paper aims to explore the opportunities and benefits arising from the application of a strategic KM and Data Mining framework within the local operations of large domestic or multinational mining companies located in Western Australia (WA). The paper presents a high-level conceptual framework for integrating so-called hard (ICT) and soft (human) systems, representing the explicit and tacit knowledge embedded within broader networks of mining activity. This Strategic Knowledge Management (SKM) framework is presented as a novel first step towards improving organisational performance and realisation of the human and technological capability captured in organisational networks. The SKM framework represents a unique combination of concepts and constructs from the Strategy, Knowledge Management, Information Systems, and Data Mining literatures. It was generated from Stage 1 (literature and industry documentation review) of a two-stage exploratory study. Stage 2 will comprise a quantitative case-based research approach employing clearly defined metrics to describe and compare SKM activity in designated mining companies.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_12-Integration_of_data_mining_within_a_Strategic_Knowledge_Management_framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SW-SDF Based Personal Privacy with QIDB-Anonymization Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030811</link>
        <id>10.14569/IJACSA.2012.030811</id>
        <doi>10.14569/IJACSA.2012.030811</doi>
        <lastModDate>2012-08-30T20:04:25.5370000+00:00</lastModDate>
        
        <creator>Kiran P</creator>
        
        <creator>Dr Kavya N P</creator>
        
        <subject>Privacy Preserving Data Mining (PPDM); Privacy Preserving Data Publishing (PPDP); Personal Anonymization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Personalized anonymization is a method in which a guarding node is used to indicate whether the record owner is ready to reveal its sensitivity, based on which anonymization will be performed. Most of the sensitive values present in a private database do not require privacy preservation, since the record owner’s sensitivity is a general one, so only a few records in the entire distribution require privacy. For example, a record owner having the flu does not mind revealing his identity, as compared to a record owner having cancer. Even then, some record owners who have cancer are ready to reveal their identity; this is the motivation for SW-SDF based personal privacy. In this paper we propose a novel personalized privacy preserving technique that overcomes the disadvantages of previous personalized privacy and other anonymization techniques. The core of this method can be divided into two major components. The first component deals with additional attributes used in the table, in the form of flags, which can be used to divide sensitive attributes. The Sensitive Disclosure Flag (SDF) determines whether the record owner’s sensitive information is to be disclosed or whether privacy should be maintained. The second flag, Sensitive Weight (SW), indicates how sensitive the attribute value is compared with the rest. The second component deals with a novel representation called the Frequency Distribution Block (FDB) and Quasi-Identifier Distribution Block (QIDB), which is used in anonymization. Experimental results show that it has lower information loss and faster execution time compared with existing methods.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_11-SW-SDF_Based_Personal_Privacy_with_QIDB-Anonymization_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-commerce Smartphone Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030810</link>
        <id>10.14569/IJACSA.2012.030810</id>
        <doi>10.14569/IJACSA.2012.030810</doi>
        <lastModDate>2012-08-30T20:04:21.8670000+00:00</lastModDate>
        
        <creator>Abdullah Saleh Alqahtani</creator>
        
        <creator>Robert Goodwin</creator>
        
        <subject>E-commerce; PhoneGap; M-commerce; Smartphones; Spree commerce; Ruby on Rails.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Mobile and e-commerce applications are tools for accessing the Internet and for buying products and services. These applications are constantly evolving due to the high rate of technological advances being made. This paper provides a new perspective on the types of applications that can be used. It describes and analyses device requirements, provides a literature review of important aspects of mobile devices that can use such applications and the requirements of websites designed for m-commerce. The design and security aspects of mobile devices are also investigated. As an alternative to existing m-commerce applications, this paper also investigates the characteristics and potential of the PhoneGap cross-mobile platform application. The results suggest that effective mobile applications do exist for various Smartphones, and web applications on mobile devices should be effective. PhoneGap and Spree applications can communicate using JSON instead of the XML language. Android simulators can be used for ensuring proper functionality and for compiling the applications.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_10-E-commerce_Smartphone_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing eHealth Information Systems for chronic diseases remote monitoring systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030809</link>
        <id>10.14569/IJACSA.2012.030809</id>
        <doi>10.14569/IJACSA.2012.030809</doi>
        <lastModDate>2012-08-30T20:04:18.2400000+00:00</lastModDate>
        
        <creator>Amir HAJJAM</creator>
        
        <subject>Ontologies; Web Semantic; Remote Monitoring; Chronic Diseases.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Statistics and demographics for the aging population in Europe are compelling. The stakes are in terms of disability and chronic diseases, whose proportions will increase because of increased life expectancy. Heart failure (HF), a serious chronic disease, induces frequent re-hospitalizations, some of which can be prevented by upstream actions. Managing HF is quite a complex process: long, often difficult and expensive. In France, nearly one million people suffer from HF and 120,000 new cases are diagnosed every year. In managing such patients, a telemedicine system with associated motivation and education tools can significantly reduce the number of days a patient is hospitalized for acute HF. Current development projects focus on prevention, human security, and remote monitoring of people in their day-to-day living spaces, from the perspective of health and wellness. These projects encompass gathering, organizing, structuring and sharing medical information. They also have to take into account the main aspects of interoperability. Different approaches have been used to capitalize on such information: the data warehouse approach, the mediation approach (or integration by views), and the integration-by-link approach (the so-called mashup).
In this paper, we focus on ontologies, which take a central place in the Semantic Web: on one hand, they rely on modeling from conceptual representations of the areas concerned and, on the other hand, they allow programs to make inferences over them.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_9-Enhancing_eHealth_Information_Systems_for_chronic_diseases_remote_monitoring_systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spontaneous-braking and lane-changing effect on traffic congestion using cellular automata model applied to the two-lane traffic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030808</link>
        <id>10.14569/IJACSA.2012.030808</id>
        <doi>10.14569/IJACSA.2012.030808</doi>
        <lastModDate>2012-08-30T20:04:14.5970000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Steven Ray Sentinuwo</creator>
        
        <subject>spontaneous-braking; traffic congestion; cellular automata; two-lane traffic.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>In real traffic situations, a vehicle brakes to avoid collision with another vehicle or to avoid an obstacle such as a pothole, snow, or a pedestrian crossing the road unexpectedly. However, in some cases spontaneous braking may occur even though there is no obstacle in front of the vehicle. In some countries, reckless driving behaviors such as sudden stops by public buses, motorcycles changing lanes too quickly, or tailgating increase the probability of braking. The new aspect of this paper is the simulation of driver braking behavior and the presentation of a new cellular automata model describing this characteristic. Moreover, this paper also examines the impact of lane-changing maneuvers in reducing the traffic congestion caused by the spontaneous braking behavior of vehicles.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_8-Spontaneous-braking_and_lane-changing_effect_on_traffic_congestion_using_cellular_automata_model_applied_to_the_two-lane_traffic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance model to predict overall defect density</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030807</link>
        <id>10.14569/IJACSA.2012.030807</id>
        <doi>10.14569/IJACSA.2012.030807</doi>
        <lastModDate>2012-08-30T20:04:10.9630000+00:00</lastModDate>
        
        <creator>J Venkatesh</creator>
        
        <creator>Mr. Priyesh Cherurveettil</creator>
        
        <creator>Mrs. Thenmozhi. S</creator>
        
        <creator>Dr. Balasubramanie. P</creator>
        
        <subject>process; performance; defect density; metrics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Management by metrics is expected of IT service providers who want to remain differentiated. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive and come too late in the life cycle: root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_7-Performance_model_to_predict_overall_defect_density.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Association of Strahler’s Order and Attributes with the Drainage System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030806</link>
        <id>10.14569/IJACSA.2012.030806</id>
        <doi>10.14569/IJACSA.2012.030806</doi>
        <lastModDate>2012-08-30T20:04:07.3570000+00:00</lastModDate>
        
        <creator>Mohan P Pradhan</creator>
        
        <creator>M. K. Ghose</creator>
        
        <creator>Yash R. Kharka</creator>
        
        <subject>Stream; digitization; Strahler’s order.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>A typical drainage pattern is an arrangement of river segments in a drainage basin and has several contributing identifiable features such as leaf segments, intermediate segments and bifurcations. In studies related to the morphological assessment of drainage patterns for estimating channel capacity, length, bifurcation ratio and the contribution of segments to the main stream, the association of order with identified segments and the creation of an attribute repository play a pivotal role. Strahler (1952) proposed an ordering technique that categorizes the identified stream segments into different classes based on their significance and contribution to the drainage pattern. This work aims at the implementation of procedures that efficiently associate order with the identified segments and create a repository that stores the attributes and estimates of different segments automatically. Implementation of such techniques not only reduces time and effort compared to manual procedures, but also improves the confidence and reliability of the results.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_6-Automatic_Association_of_Strahler’s_Order_and_Attributes_with_the_Drainage_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Feistel Cipher Involving Substitution, Shifting of rows, Mixing of columns, XOR operation with a Key and Shuffling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030805</link>
        <id>10.14569/IJACSA.2012.030805</id>
        <doi>10.14569/IJACSA.2012.030805</doi>
        <lastModDate>2012-08-30T20:04:03.7870000+00:00</lastModDate>
        
        <creator>V U.K Sastry</creator>
        
        <creator>K. Anup Kumar</creator>
        
        <subject>encryption; decryption; cryptanalysis; avalanche effect; XOR operation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>In this paper, we have developed a modification of the Feistel cipher by taking the plaintext in the form of a pair of matrices and introducing a set of functions, namely substitution, shifting of rows, mixing of columns and XOR operation with a key. Further, we have supplemented this process with another function, called shuffling, at the end of each round of the iteration process. The cryptanalysis clearly indicates that the strength of the cipher is quite significant, and this is achieved by the introduction of the aforementioned functions.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_5-A_Modified_Feistel_Cipher_Involving_Substitution,_Shifting_of_rows,_Mixing_of_columns,_XOR_operation_with_a_Key_and_Shuffling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Based Non-Linear Mixture Model of Earth Observation Satellite Imagery Pixel Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030804</link>
        <id>10.14569/IJACSA.2012.030804</id>
        <doi>10.14569/IJACSA.2012.030804</doi>
        <lastModDate>2012-08-30T20:04:00.2130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>remote sensing satellite; visible to near infrared radiometer; mixed pixel: mixel; Monte Carlo simulation model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>A Monte Carlo based non-linear mixel (mixed pixel) model for visible-to-near-infrared radiometer imagery from earth observation satellites is proposed. Through comparative studies on actual earth observation satellite imagery data between the conventional linear mixel model and the proposed non-linear mixel model, it is found that the proposed mixel model represents the pixels in question much more precisely than the conventional linear mixel model.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_4-Monte_Carlo_Based_Non-Linear_Mixture_Model_of_Earth_Observation_Satellite_Imagery_Pixel_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Algorithm for Data Compression Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030803</link>
        <id>10.14569/IJACSA.2012.030803</id>
        <doi>10.14569/IJACSA.2012.030803</doi>
        <lastModDate>2012-08-30T20:03:56.6430000+00:00</lastModDate>
        
        <creator>I Made Agus Dwi Suarjaya</creator>
        
        <subject>algorithms; data compression; j-bit encoding; JBE; lossless.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>People tend to store a lot of files in their storage. When the storage nears its limit, they try to reduce the size of those files using data compression software. In this paper we propose a new algorithm for data compression, called j-bit encoding (JBE). This algorithm manipulates each bit of data inside a file to minimize its size without losing any data after decoding, which classifies it as lossless compression. This basic algorithm is intended to be combined with other data compression algorithms to optimize the compression ratio. The performance of this algorithm is measured by comparing combinations of different data compression algorithms.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_3-A_New_Algorithm_for_Data_Compression_Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced  MPLS-TE For Transferring Multimedia packets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030802</link>
        <id>10.14569/IJACSA.2012.030802</id>
        <doi>10.14569/IJACSA.2012.030802</doi>
        <lastModDate>2012-08-30T20:03:53.0430000+00:00</lastModDate>
        
        <creator>Abdellah Jamali</creator>
        
        <creator>Najib Naja</creator>
        
        <creator>Driss El Ouadghiri</creator>
        
        <subject>Multi-Protocol Label Switching (MPLS); Multi-Protocol Label Switching Traffic Engineering (MPLS-TE); Forwarding Equivalence Class (FEC); Quality Of Service (QoS); Simulation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Multi-Protocol Label Switching is useful in managing multimedia traffic when some links are too congested; MPLS Traffic Engineering is a growing implementation in today&#39;s service provider networks. In this paper we propose an improvement of MPLS-TE, called EMPLS-TE, based on a modification of the operation of the Forwarding Equivalence Class (FEC) in order to provide quality of service to multimedia streams. The performance of EMPLS-TE is evaluated by a simulation model under a variety of network conditions. We also compare its performance with that of unmodified MPLS-TE and MPLS, and present a comparative analysis between MPLS, MPLS-TE and Enhanced MPLS-TE (EMPLS-TE). We demonstrate how a small change to the MPLS-TE protocol can lead to significantly improved performance results. Our proposed EMPLS-TE offers a performance advantage for multimedia applications moving through congested and dense environments, defining paths for network traffic based on quality-of-service criteria. The simulation study conducted in this paper illustrates the benefits of using this Enhanced MPLS-TE for multimedia applications.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_2-An_Enhanced__MPLS-TE_For_Transferring_Multimedia_packets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instruction Design Model for Self-Paced ICT System E-Learning in an Organization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030801</link>
        <id>10.14569/IJACSA.2012.030801</id>
        <doi>10.14569/IJACSA.2012.030801</doi>
        <lastModDate>2012-08-30T20:03:49.3570000+00:00</lastModDate>
        
        <creator>Ridi Ferdiana</creator>
        
        <creator>Obert Hoseanto</creator>
        
        <subject>ICT System Adoption; Learning Model; Multimedia Learning; E-learning; Instruction Design Model; Learning Plan.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(8), 2012</description>
        <description>Adopting an Information and Communication Technology (ICT) system in an organization is somewhat challenging. User diversity, heavy workloads, and differing skill gaps slow the ICT adoption process. This research starts from the observation that conventional ICT learning through short workshops and guidance books does not work well. It proposes a model called the ICT instruction design model (ICT-IDM), which provides fast-track learning through the integration of multimedia learning and self-paced hands-on e-learning. Through a case study, we found that the proposed model provides 27% faster learning adoption than the conventional learning model.</description>
        <description>http://thesai.org/Downloads/Volume3No8/Paper_1-Instruction_Design_Model_for_Self-Paced_ICT_System_E-Learning_in_an_Organization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Introduction of the weight edition errors in the Levenshtein distance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010506</link>
        <id>10.14569/IJARAI.2012.010506</id>
        <doi>10.14569/IJARAI.2012.010506</doi>
        <lastModDate>2012-08-10T11:59:53.4230000+00:00</lastModDate>
        
        <creator>Gueddah Hicham</creator>
        
        <creator>Yousfi Abdallah</creator>
        
        <creator>Belkasmi Mostapha</creator>
        
        <subject>Spelling errors; correction; Levenshtein distance; weight edition error.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(5), 2012</description>
        <description>In this paper, we present a new approach dedicated to correcting spelling errors in the Arabic language. This approach corrects typographical errors such as insertion, deletion, and permutation. Our method is inspired by the Levenshtein algorithm and allows a finer and better ranking of candidate corrections than Levenshtein. The results obtained are very satisfactory and encouraging, which shows the interest of our new approach.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No5/Paper_6-Introduction_of_the_weight_edition_errors_in_the_Levenshtein_distance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Information System of Monitoring and Management for Heart Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010504</link>
        <id>10.14569/IJARAI.2012.010504</id>
        <doi>10.14569/IJARAI.2012.010504</doi>
        <lastModDate>2012-08-10T11:59:47.2770000+00:00</lastModDate>
        
        <creator>Adrian Brezulianu</creator>
        
        <creator>Monica Fira</creator>
        
        <subject>ECG; Monitoring; Management; Integrated Information System.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(5), 2012</description>
        <description>The integrated information system presented in this paper is focused on increasing the quality of the medical services offered to patients by making activities more efficient and by coordinating the efforts of family doctors and cardiologists. In order to accomplish these objectives, we propose implementing an integrated information system by means of which the integrated management of medical data will be carried out. In addition, a patient-oriented collaborative instrument is offered that will be used both by the family doctor and by the specialist doctor.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No5/Paper_4-Integrated_Information_System_of_Monitoring_and_Management_for_Heart_Centers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study on different AI Techniques towards Performance Evaluation in RRM(Radar Resource Management)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010503</link>
        <id>10.14569/IJARAI.2012.010503</id>
        <doi>10.14569/IJARAI.2012.010503</doi>
        <lastModDate>2012-08-10T11:59:44.0130000+00:00</lastModDate>
        
        <creator>Madhusudhan H S</creator>
        
        <creator>Khalid Nazim S.A</creator>
        
        <subject>Radar; RRM (Radar Resource Management); Artificial Intelligence (AI); Neural Network (NN); Fuzzy Logic (FL).</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(5), 2012</description>
        <description>The multifunction radar (MFR) has to decide which functions are to be performed first, or which must be degraded or even not performed at all, when there are not enough resources to be allocated. The process of making these decisions and determining their allocation as a function of time is known as Radar Resource Management (RRM). RRM has two basic issues: task prioritization and task scheduling. Task prioritization is an important factor in the task scheduler. The other factor is the required scheduling time, which is decided by the environment, the target scenario, and the performance requirements of the radar functions. The required scheduling time could be improved by using advanced algorithms [1, 6].</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No5/Paper_3-A_Comparative_Study_on_different_AI_Techniques_towards_Performance_Evaluation_in_RRM(Radar_Resource_Management).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An improvement direction for filter selection techniques using information theory measures and quadratic optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010502</link>
        <id>10.14569/IJARAI.2012.010502</id>
        <doi>10.14569/IJARAI.2012.010502</doi>
        <lastModDate>2012-08-10T11:59:37.8430000+00:00</lastModDate>
        
        <creator>Waad Bouaguel</creator>
        
        <creator>Ghazi Bel Mufti</creator>
        
        <subject>Feature selection; mRMR; Quadratic mutual information; filter.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(5), 2012</description>
        <description>Filter selection techniques are known for their simplicity and efficiency. However, this kind of method does not take inter-feature redundancy into consideration. Consequently, the unremoved redundant features remain in the final classification model, giving lower generalization performance. In this paper we propose a mathematical optimization method that reduces inter-feature redundancy and maximizes the relevance between each feature and the target variable.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No5/Paper_2-An_improvement_direction_for_filter_selection_techniques_using_information_theory_measures_and_quadratic_optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A genetic algorithm approach for scheduling of resources in well-services companies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010501</link>
        <id>10.14569/IJARAI.2012.010501</id>
        <doi>10.14569/IJARAI.2012.010501</doi>
        <lastModDate>2012-08-10T11:59:31.7370000+00:00</lastModDate>
        
        <creator>Adrian Brezulianu</creator>
        
        <creator>Lucian Fira</creator>
        
        <creator>Monica Fira</creator>
        
        <subject>Genetic algorithm; optimization; well-services.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(5), 2012</description>
        <description>In this paper, two examples of resource scheduling in well-services companies are solved by means of genetic algorithms: resources for call solving, and people scheduling. The results demonstrate that the genetic algorithm approach can provide acceptable solutions to this type of scheduling problem in well-services companies. The suggested approach could easily be extended to various resource scheduling areas: process scheduling, transportation scheduling.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No5/Paper_1-A_genetic_algorithm_approach_for_scheduling_of_resources_in_well-services_companies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>BPM, Agile, and Virtualization Combine to Create Effective Solutions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030721</link>
        <id>10.14569/IJACSA.2012.030721</id>
        <doi>10.14569/IJACSA.2012.030721</doi>
        <lastModDate>2012-07-31T17:01:59.3770000+00:00</lastModDate>
        
        <creator>Steve Kruba</creator>
        
        <creator>Steven Baynes</creator>
        
        <creator>Robert Hyer</creator>
        
        <subject>Agile; BPM; business process management; Rapid Solutions Development; Virtualization; Workflow</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>The rate of change in business and government is accelerating. A number of techniques for addressing that change have emerged independently to provide for automated solutions in this environment. This paper will examine three of the most popular of these technologies (business process management, the agile software development movement, and infrastructure virtualization) to expose the commonalities in these approaches and how, when used together, their combined effect results in rapidly deployed, more successful solutions.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_21-BPM,_Agile,_and_Virtualization_Combine_to_Create_Effective_Solutions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Throughput Analysis of Ieee802.11b Wireless Lan With One Access Point Using Opnet Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030720</link>
        <id>10.14569/IJACSA.2012.030720</id>
        <doi>10.14569/IJACSA.2012.030720</doi>
        <lastModDate>2012-07-31T17:01:53.5030000+00:00</lastModDate>
        
        <creator>Isizoh A N</creator>
        
        <creator>Nwokoye A. O.C</creator>
        
        <creator>Okide S. O</creator>
        
        <creator>Ogu C. D</creator>
        
        <subject>Data-rate, buffer size, fragmentation threshold, throughput, Media Access Control (MAC).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>This paper analyzes the throughput performance of an IEEE 802.11b Wireless Local Area Network (WLAN) with one access point. IEEE 802.11b is a wireless protocol standard. In this paper, a wireless network with one access point was established. The OPNET IT Guru Simulator (Academic edition) was used to simulate the entire network. The effects of varying network parameters such as data rate, buffer size, and fragmentation threshold on the throughput performance metric were then observed. Several simulation graphs were obtained and used to analyze the network performance.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_20-Throughput_Analysis_of_Ieee802.11b_Wireless_Lan_With_One_Access_Point_Using_Opnet_Simulator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Rate of Convergence of Blind Adaptive Equalization for Fast Varying Digital Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030719</link>
        <id>10.14569/IJACSA.2012.030719</id>
        <doi>10.14569/IJACSA.2012.030719</doi>
        <lastModDate>2012-07-31T17:01:50.1800000+00:00</lastModDate>
        
        <creator>Iorkyase E.Tersoo</creator>
        
        <creator>Michael O. Kolawole</creator>
        
        <subject>Blind Equalization; Channel Equalizer; Constant Modulus Algorithm; Intersymbol interference; Variable Step Size.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>Recent digital transmission systems require the application of bandwidth-efficient channel equalizers, which mitigate the bottleneck of intersymbol interference for high-speed data transmission over communication channels. This leads to the exploration of blind equalization techniques that do not require the use of a training sequence. Blind equalization techniques, however, suffer from computational complexity and slow convergence rates. The Constant Modulus Algorithm (CMA) is a better technique for blind channel equalization. This paper examined three different error functions for fast convergence and proposed an adaptive blind equalization algorithm with variable step size based on the CMA criterion. A comparison of the speed of convergence of the existing and proposed algorithms shows that the proposed algorithm outperforms the other algorithms. The proposed algorithm can suitably be employed in blind equalization for rapidly changing channels as well as for high-data-rate applications.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_19-Improving_the_Rate_of_Convergence_of_Blind_Adaptive_Equalization_for_Fast_Varying_Digital_Communication_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study between the Proposed GA Based ISODAT Clustering and the Conventional Clustering Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030718</link>
        <id>10.14569/IJACSA.2012.030718</id>
        <doi>10.14569/IJACSA.2012.030718</doi>
        <lastModDate>2012-07-31T17:01:43.9730000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>GA; ISODATA; Optimization; Clustering.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>A method of Genetic Algorithm (GA) based ISODATA clustering is proposed. GA clustering is now widely available. One of the problems for GA clustering is poor clustering performance due to the assumption that clusters are represented as convex functions. The well-known ISODATA clustering has threshold parameters for merge and split. These parameters have to be determined without any assumption (convex functions). In order to determine the parameters, GA is utilized. Through comparative studies, with and without GA-based parameter estimation, on the well-known UCI Repository data for clustering performance evaluation, it is found that the proposed method is superior to the original ISODATA and also to the other conventional clustering methods.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_18-Comparative_Study_between_the_Proposed_GA_Based_ISODAT_Clustering_and_the_Conventional_Clustering_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GUI Database for the Equipment Store of the Department of Geomatic Engineering, KNUST</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030717</link>
        <id>10.14569/IJACSA.2012.030717</id>
        <doi>10.14569/IJACSA.2012.030717</doi>
        <lastModDate>2012-07-31T17:01:39.3930000+00:00</lastModDate>
        
        <creator>J A Quaye-Ballard</creator>
        
        <creator>R. An</creator>
        
        <creator>A. B. Agyemang</creator>
        
        <creator>N. Y. Oppong-Quayson</creator>
        
        <creator>J. E. N. Ablade</creator>
        
        <subject>Survey Beacons; Survey Instrument; Survey Computations; GIS; DBMS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>The geospatial analyst is required to apply art, science, and technology to measure relative positions of natural and man-made features above or beneath the earth’s surface, and to present this information either graphically or numerically. The reference positions for these measurements need to be well archived and managed to effectively sustain the activities of the spatial industry. The research described herein highlights the need for an information system for the Land Surveyor’s Equipment Store. Such a system is a database management system with a user-friendly graphical interface. This paper describes one such system that has been developed for the Equipment Store of the Department of Geomatic Engineering, Kwame Nkrumah University of Science and Technology (KNUST), Ghana. The system facilitates efficient management and location of instruments, as well as easy location of beacons together with their attribute information; it provides multimedia information about instruments in an Equipment Store. A digital camera was used to capture pictorial descriptions of the beacons. Geographic Information System (GIS) software was employed to visualize the spatial location of beacons and to publish the various layers for the Graphical User Interface (GUI). The aesthetics of the interface were developed with user interface design tools and implemented through programming. The developed Suite, powered by a reliable and fully scalable database, provides an efficient way of booking and analyzing transactions in an Equipment Store.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_17-GUI_Database_for_the_Equipment_Store_of_the_Department_of_Geomatic_Engineering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Technique Based on Combining Fuzzy K-means Clustering and Region Growing for Improving Gray Matter and White Matter Segmentation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030716</link>
        <id>10.14569/IJACSA.2012.030716</id>
        <doi>10.14569/IJACSA.2012.030716</doi>
        <lastModDate>2012-07-31T17:01:33.2270000+00:00</lastModDate>
        
        <creator>Ashraf Afifi</creator>
        
        <subject>Fuzzy clustering; seed region growing; performance measure; MRI brain database; sensitivity and specificity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this paper we present a hybrid approach based on combining fuzzy k-means clustering, seed region growing, and sensitivity and specificity algorithms to measure gray matter (GM) and white matter (WM) tissue. The proposed algorithm uses intensity and anatomic information for segmenting MRIs into different tissue classes, especially GM and WM. It starts by partitioning the image into different clusters using fuzzy k-means clustering. The centers of these clusters are the input to the seed region growing (SRG) method for creating the closed regions. The outputs of the SRG technique are fed to the sensitivity and specificity algorithm to merge similar regions into one segment. The proposed algorithm is applied to a challenging application: gray matter/white matter segmentation in magnetic resonance image (MRI) datasets. The experimental results show that the proposed technique produces accurate and stable results.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_16-A_Hybrid_Technique_Based_on_Combining_Fuzzy_K-means_Clustering_and_Region_Growing_for_Improving_Gray_Matter_and_White_Matter_Segmentation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Solution of Traveling Salesman Problem Using Genetic, Memetic Algorithm and Edge assembly Crossover</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030715</link>
        <id>10.14569/IJACSA.2012.030715</id>
        <doi>10.14569/IJACSA.2012.030715</doi>
        <lastModDate>2012-07-31T17:01:27.3930000+00:00</lastModDate>
        
        <creator>Mohd. Junedul Haque</creator>
        
        <creator>Khalid. W. Magld</creator>
        
        <subject>NP Hard; GA(Genetic algorithms); TSP(Traveling salesman problem); MA(Memetic algorithms); EAX(Edge Assembly Crossover).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>The Traveling Salesman Problem (TSP) is to find a tour of a given number of cities (visiting each city exactly once) such that the length of the tour is minimized. Testing every possibility for an N-city tour would require N! additions. Genetic algorithms (GA) and Memetic algorithms (MA) are relatively new optimization techniques which can be applied to various problems, including those that are NP-hard. The technique does not ensure an optimal solution; however, it usually gives good approximations in a reasonable amount of time. They would, therefore, be good algorithms to try on the traveling salesman problem, one of the most famous NP-hard problems. In this paper I have proposed an algorithm to solve the TSP using Genetic algorithms (GA) and Memetic algorithms (MA) with the crossover operator Edge Assembly Crossover (EAX), analyzed the results with respect to different parameters such as group size and mutation percentage, and compared the results with other solutions.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_15-Improving_the_Solution_of_Traveling_Salesman_Problem_Using_Genetic,_Memetic_Algorithm_and_Edge_assembly_Crossover.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Clustering Method Based on Density Maps Derived from Self-Organizing Mapping: SOM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030714</link>
        <id>10.14569/IJACSA.2012.030714</id>
        <doi>10.14569/IJACSA.2012.030714</doi>
        <lastModDate>2012-07-31T17:01:21.2200000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Clustering; self-organizing map; separability; Learning process; Density map; Pixel labeling; Un-supervised classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>A new method for image clustering with density maps derived from Self-Organizing Maps (SOM) is proposed, together with a clarification of the learning processes during the construction of clusters. It is found that the proposed SOM-based image clustering method shows much better clustering results for both simulation and real satellite imagery data. It is also found that the separability among clusters of the proposed method is 16% greater than that of the existing k-means clustering. In accordance with the experimental results with a Landsat-5 TM image, it takes more than 20000 iterations for convergence of the SOM learning processes.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_14-Image_Clustering_Method_Based_on_Density_Maps_Derived_from_Self-Organizing_Mapping_SOM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Task Allocation Model for Rescue Disabled Persons in Disaster Area with Help of Volunteers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030713</link>
        <id>10.14569/IJACSA.2012.030713</id>
        <doi>10.14569/IJACSA.2012.030713</doi>
        <lastModDate>2012-07-31T17:01:17.3630000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Tran Xuan Sang</creator>
        
        <creator>Nguyen Thi Uyen</creator>
        
        <subject>Task Allocation Model; Multi Agent-based Rescue Simulation;  Auction based Decision Making.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this paper, we present a task allocation model for the search and rescue of persons with disabilities in case of disaster. A multi-agent-based simulation model is used to simulate the rescue process. Volunteers and disabled persons are modeled as agents, each of which has its own attributes and behaviors. The task of volunteers is to help disabled persons in emergency situations. This task allocation problem is solved by using a combinatorial auction mechanism to decide which volunteers should help which disabled persons. The disaster space, road network, and rescue process are also described in detail. The RoboCup Rescue simulation platform is used to present the proposed model with different scenarios.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_13-Task_Allocation_Model_for_Rescue_Disabled_Persons_in_Disaster_Area_with_Help_of_Volunteers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simultaneous Estimation of Geophysical Parameters with Microwave Radiometer Data based on Accelerated Simulated Annealing: SA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030712</link>
        <id>10.14569/IJACSA.2012.030712</id>
        <doi>10.14569/IJACSA.2012.030712</doi>
        <lastModDate>2012-07-31T17:01:11.7130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>simulated annealing; microwave radiometer water vapor; air-temperature; wind speed.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>A method for geophysical parameter estimation with microwave radiometer data based on Simulated Annealing (SA) is proposed. The geophysical parameters which are estimated with microwave radiometer data are closely related to each other. Therefore, simultaneous estimation imposes constraints in accordance with these relations. On the other hand, SA requires huge computational resources for convergence. In order to accelerate the convergence process, an oscillating decreasing function is proposed as the cool-down function. Experimental results show that remarkable improvements are observed for geophysical parameter estimations.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_12-Simultaneous_Estimation_of_Geophysical_Parameters_with_Microwave_Radiometer_Data_based_on_Accelerated_Simulated_Annealing_SA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FHC-NCTSR: Node Centric Trust Based secure Hybrid Routing Protocol for Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030711</link>
        <id>10.14569/IJACSA.2012.030711</id>
        <doi>10.14569/IJACSA.2012.030711</doi>
        <lastModDate>2012-07-31T17:01:08.3970000+00:00</lastModDate>
        
        <creator>Prasuna V G</creator>
        
        <creator>Dr. S Madhusudahan Verma</creator>
        
        <subject>Ad hoc network; Dynamic source routing; Hash chains; MANET; Mobile ad hoc networks; NCTS-DSR; Routing Protocol; SEAD; Secure routing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>To effectively support communication in as dynamic a networking environment as ad hoc networks, the routing mechanisms should provide secure and trusted route discovery and service quality in data transmission. In this context, this paper proposes a routing protocol called Node Centric Trust based Secure Hybrid Routing Protocol [FHC-NCTSR] that opts for fixed hash chaining for data transmission and a node-centric trust strategy for secure route discovery. Route discovery is reactive in nature; in contrast, data transmission is proactive, hence FHC-NCTSR is termed a hybrid routing protocol. The performance results obtained from a simulation environment conclude that, due to its fixed hash chaining technique, FHC-NCTSR is more than one order of magnitude faster in packet delivery than other hash-chain-based routing protocols such as SEAD. Due to its node-centric strategy of route discovery, FHC-NCTSR is trusted against Rushing, Routing table modification, and Tunneling attacks, whereas other protocols fail to provide security against one or more of these attacks; an example is ARIADNE, which fails to protect against the tunneling attack.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_11-FHC-NCTSR_Node_Centric_Trust_Based_secure_Hybrid_Routing_Protocol_for_Ad_Hoc_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>LOQES: Model for Evaluation of Learning Object</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030710</link>
        <id>10.14569/IJACSA.2012.030710</id>
        <doi>10.14569/IJACSA.2012.030710</doi>
        <lastModDate>2012-07-31T17:01:02.2130000+00:00</lastModDate>
        
        <creator>Sonal Chawla</creator>
        
        <creator>Niti Gupta</creator>
        
        <creator>Prof. R.K. Singla</creator>
        
        <subject>Learning Objects, Learning Object Repository, Peer Review, Quality Metrics, Reusability Metrics, Ranking Metrics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>Learning Object Technology is a diverse and contentious area, which is constantly evolving and will inevitably play a major role in shaping the future of both teaching and learning. Learning Objects are small chunks of material which act as the basic building blocks of this technology-enhanced learning and education. Learning Objects are hosted by various repositories available online so that different users can use them in multiple contexts as per their requirements. The major bottleneck for end users is finding an appropriate learning object in terms of content quality and usage. Theorists and researchers have advocated various approaches for evaluating learning objects in the form of evaluation tools and metrics, but all these approaches are either qualitative, based on human review, or not supported by empirical evidence. The main objective of this paper is to study the impact of current evaluation tools and metrics on the quality of learning objects and to propose a new quantitative system, LOQES, that automatically evaluates a learning object in terms of defined parameters so as to give assurance regarding its quality and value.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_10-LOQES_Model_for_Evaluation_of_Learning_Object.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Online Character Recognition System to Convert Grantha Script to Malayalam</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030709</link>
        <id>10.14569/IJACSA.2012.030709</id>
        <doi>10.14569/IJACSA.2012.030709</doi>
        <lastModDate>2012-07-31T17:00:56.0330000+00:00</lastModDate>
        
        <creator>Sreeraj M</creator>
        
        <creator>Sumam Mary Idicula</creator>
        
        <subject>Grantha scripts; Malayalam; Online character recognition system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>This paper presents a novel approach to recognizing Grantha, an ancient script of South India, and converting it to Malayalam, a prevalent South Indian language, using an online character recognition mechanism. The motivation behind this work is (i) developing a mechanism to recognize the Grantha script in the modern world and (ii) affirming the strong connection between Grantha and Malayalam. A framework for the recognition of the Grantha script using online character recognition is designed and implemented. The features extracted from the Grantha script comprise mainly time-domain features based on writing direction and curvature. The recognized characters are mapped to corresponding Malayalam characters. The framework was tested on a bed of medium-length manuscripts containing 9-12 sample lines and on printed pages of a book titled Soundarya Lahari, written in Grantha by Sri Adi Shankara, to recognize the words and sentences. The manuscript recognition rates of the system are 92.11% for Grantha, 90.82% for Old Malayalam, and 89.56% for New Malayalam script. The recognition rates for pages of the printed book are 96.16% for Grantha, 95.22% for Old Malayalam script, and 92.32% for New Malayalam script, respectively. These results show the efficiency of the developed system.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_9-An_Online_Character_Recognition_System_to_Convert_Grantha_Script_to_Malayalam.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Smart Grids: A New Framework for Efficient Power Management in Datacenter Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030708</link>
        <id>10.14569/IJACSA.2012.030708</id>
        <doi>10.14569/IJACSA.2012.030708</doi>
        <lastModDate>2012-07-31T17:00:49.8170000+00:00</lastModDate>
        
        <creator>Okafor Kennedy C</creator>
        
        <creator>Udeze Chidiebele. C</creator>
        
        <creator>E. C. N. Okafor</creator>
        
        <creator>C. C. Okezie</creator>
        
        <subject>energy; density; enterprise; power budget; framework smart grids.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>The energy demand in the enterprise market segment calls for a supply format that accommodates all generation and storage options, with active participation by end users in demand response. Basically, with today’s high power computing (HPC), a highly reliable, scalable, and cost-effective energy solution that satisfies power demands and improves environmental sustainability will have broad acceptance. In a typical enterprise data center, power management is a major challenge impacting server density and the total cost of ownership (TCO). Storage uses a significant fraction of the power budget, and there are no widely deployed power-saving solutions for enterprise storage systems. This work presents Data Center Networks (DCNs) for efficient power management in the context of SMART Grids. A SMART DCN is modelled with OPNET 14.5 for Network, Process and Node models. Also, an Extended SMART Integration Module in the context of a SMART DCN is shown to be more cost-effective than the traditional distribution grid in DCNs. The implementation challenges are also discussed. This paper suggests that smartening the grid for DCNs will guarantee a sustainable energy future for the enterprise segments.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_8-Smart_Grids_A_New_Framework_for_Efficient_Power_Management_in_Datacenter_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the Role of Information and Communication Technology (ICT) Support towards Processes of Management in Institutions of Higher Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030707</link>
        <id>10.14569/IJACSA.2012.030707</id>
        <doi>10.14569/IJACSA.2012.030707</doi>
        <lastModDate>2012-07-31T17:00:46.5100000+00:00</lastModDate>
        
        <creator>Michael Okumu Ujunju</creator>
        
        <creator>G. Wanyembi</creator>
        
        <creator>Franklin Wabwoba</creator>
        
        <subject>ICT; Competitive advantage; Strategic management; Data.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>The role of Information and Communication Technology in achieving an organization’s strategic development goals has been an area of constant debate, and it is perceived differently across management dimensions. Most universities are therefore employing ICT as a tool for competitive advantage to support the accomplishment of their objectives. Universities are also known to have branches or campuses that need strong and steady strategic plans to facilitate their steady expansion and growth. Besides, the production of quality services from the various levels of management in these universities requires quality strategic plans and decisions. In addition, to realize steady growth and competitive advantage, ICT has to be not merely an additive but a critical component supporting management processes in the universities. This research sought to determine the role of ICT in supporting management processes in institutions of higher learning in Kenya. The research investigated how the different levels of management used ICT in their management processes and whether the use had any effect on those processes. The research further made recommendations to the universities on better use of ICTs in their management processes. A public university in Kenya was used as a case study in this research.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_7-Evaluating_the_Role_of_Information_and_Communication_Technology_(ICT)_Support_towards_Processes_of_Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Japanese Smart Grid Initiatives, Investments, and Collaborations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030706</link>
        <id>10.14569/IJACSA.2012.030706</id>
        <doi>10.14569/IJACSA.2012.030706</doi>
        <lastModDate>2012-07-31T17:00:40.6300000+00:00</lastModDate>
        
        <creator>Amy Poh Ai Ling</creator>
        
        <creator>Sugihara Kokichi</creator>
        
        <creator>Mukaidono Masao</creator>
        
        <subject>Japanese Smart Grid; Initiative; Investment; Collaboration; Low Carbon Emission.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>A smart grid delivers power around the country and has an intelligent monitoring system, which not only keeps track of all the energy coming in from diverse sources but also can detect where energy is needed through a two-way communication system that collects data about how and when consumers use power. It is safer in many ways, compared with the current one-directional power supply system that seems susceptible to either sabotage or natural disasters, including being more resistant to attack and power outages. In such an autonomic and advanced-grid environment, investing in a pilot study and knowing the nation’s readiness to adopt a smart grid absolves the government of complex intervention from any failure to bring Japan into the autonomic-grid environment. This paper looks closely into the concept of the Japanese government’s ‘go green’ effort, the objective of which is to make Japan a leading nation in environmental and energy sustainability through green innovation, such as creating a low-carbon society and embracing the natural grid community. This paper paints a clearer conceptual picture of how Japan’s smart grid effort compares with that of the US. The structure of Japan’s energy sources is described, including its major power generation plants and its photovoltaic power generation development, and the energy sources of Japan and the US are compared. Japan’s smart community initiatives are also highlighted, illustrating the Japanese government’s planned social security system, which focuses on a regional energy management system and lifestyle changes under such an energy supply structure. This paper also discusses Japan’s involvement in smart grid pilot projects for development and investment, and its aim of obtaining successful outcomes. Engagement in the pilot projects is undertaken in conjunction with Japan’s attempt to implement a fully smart grid city in the near future. In addition, the major bodies promoting smart grid awareness activities in Japan are discussed in this paper because of their important initiatives for influencing and shaping policy, architecture, standards, and traditional utility operations. Implementing a smart grid will not happen quickly; when Japan does adopt one, it will continue to undergo transformation and be updated to support new technologies and functionality.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_6-The_Japanese_Smart_Grid_Initiatives,_Investments,_and_Collaborations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Feistel Cipher Involving Modular Arithmetic Addition and Modular Arithmetic Inverse of a Key Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030705</link>
        <id>10.14569/IJACSA.2012.030705</id>
        <doi>10.14569/IJACSA.2012.030705</doi>
        <lastModDate>2012-07-31T17:00:34.3870000+00:00</lastModDate>
        
        <creator>Dr. V. U K Sastry</creator>
        
        <creator>K. Anup Kumar</creator>
        
        <subject>Encryption; Decryption; Key matrix; Modular Arithmetic Inverse.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this investigation, we have modified the Feistel cipher by taking the plaintext in the form of a pair of square matrices. Here we have introduced multiplication with the key matrices and modular arithmetic addition in encryption. The modular arithmetic inverse of the key matrix is introduced in decryption. The cryptanalysis carried out in this paper clearly indicates that this cipher cannot be broken by the brute force attack or the known plaintext attack.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_5-A_Modified_Feistel_Cipher_Involving_Modular_Arithmetic_Addition_and_Modular_Arithmetic_Inverse_of_a_Key_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Feistel Cipher Involving XOR Operation and Modular Arithmetic Inverse of a Key Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030704</link>
        <id>10.14569/IJACSA.2012.030704</id>
        <doi>10.14569/IJACSA.2012.030704</doi>
        <lastModDate>2012-07-31T17:00:31.0230000+00:00</lastModDate>
        
        <creator>Dr. V. U K Sastry</creator>
        
        <creator>K. Anup Kumar</creator>
        
        <subject>Encryption; Decryption; Key matrix; Modular Arithmetic Inverse.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this paper, we have developed a block cipher by modifying the Feistel cipher. In this, the plaintext is taken in the form of a pair of matrices. In one of the relations of encryption the plaintext is multiplied with the key matrix on both the sides. Consequently, we use the modular arithmetic inverse of the key matrix in the process of decryption. The cryptanalysis carried out in this investigation, clearly indicates that the cipher is a strong one, and it cannot be broken by any attack.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_4-A_Modified_Feistel_Cipher_Involving_XOR_Operation_and_Modular_Arithmetic_Inverse_of_a_Key_Matrix.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVD Based Image Processing Applications: State of The Art, Contributions and Research Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030703</link>
        <id>10.14569/IJACSA.2012.030703</id>
        <doi>10.14569/IJACSA.2012.030703</doi>
        <lastModDate>2012-07-31T17:00:27.7000000+00:00</lastModDate>
        
        <creator>Rowayda A Sadek</creator>
        
        <subject>SVD; Image Processing; Singular Value Decomposition; Perceptual;  Forensic.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>Singular Value Decomposition (SVD) has recently emerged as a new paradigm for processing different types of images. SVD is an attractive algebraic transform for image processing applications. This paper proposes an experimental survey of the SVD as an efficient transform in image processing applications. Despite the well-known fact that SVD offers attractive properties in imaging, the exploration of these properties in various image applications is currently in its infancy. Since many of the SVD's attractive properties have not yet been utilized, this paper contributes by applying these properties to new image applications and strongly recommends further research. In this paper, the SVD properties for images are experimentally presented for use in developing new SVD-based image processing applications. The paper offers a survey of the developed SVD-based image applications and also proposes some new contributions that originated from the analysis of SVD properties in different image processing tasks. The aim of this paper is to provide a better understanding of the SVD in image processing and to identify important applications and open research directions in this increasingly important area of future research.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_3-SVD_Based_Image_Processing_Applications_State_of_The_Art,_Contributions_and_Research_Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Core Backbone Convergence Mechanisms and Microloops Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030702</link>
        <id>10.14569/IJACSA.2012.030702</id>
        <doi>10.14569/IJACSA.2012.030702</doi>
        <lastModDate>2012-07-31T17:00:23.8470000+00:00</lastModDate>
        
        <creator>Abdelali Ala</creator>
        
        <creator>Driss El Ouadghiri</creator>
        
        <creator>Mohamed Essaaidi</creator>
        
        <subject>component; FC (Fast Convergence); RSVP (Resource Reservation Protocol); LDP (Label Distribution Protocol); VPN (Virtual Private Network); LFA (Loop-Free Alternate); MPLS (Multiprotocol Label Switching); PIC (Protocol Independent Convergence); PE (Provider Edge router); P (Provider core router).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this article we study approaches that can be used to minimise the convergence time, and we also focus on the microloop phenomenon, its analysis, and the means to mitigate it. The convergence time reflects the time required by a network to react to the failure of a link or of a router itself. When all nodes (routers) have updated their respective routing and forwarding databases, we can say the network has converged. This study will help in building real-time and resilient network infrastructure; the goal is to make any event in the core network as transparent as possible to sensitive and real-time flows. This study is also a deepening of earlier works presented in [10] and [11].</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_2-Core_backbone_convergence_mechanisms_and_microloops_analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bond Portfolio Analysis with Parallel Collections in Scala</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030701</link>
        <id>10.14569/IJACSA.2012.030701</id>
        <doi>10.14569/IJACSA.2012.030701</doi>
        <lastModDate>2012-07-31T17:00:16.0700000+00:00</lastModDate>
        
        <creator>Ron Coleman</creator>
        
        <creator>Udaya Ghattamanei</creator>
        
        <creator>Mark Logan</creator>
        
        <subject>parallel functional programming; parallel processing; multicore processors; Scala; computational finance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(7), 2012</description>
        <description>In this paper, we report the results of new experiments that test the performance of Scala parallel collections in finding the fair value of riskless bond portfolios on commodity multicore platforms. We developed four algorithms of two kinds in Scala and ran them on one to 1024 portfolios, each with a variable number of bonds with daily to yearly cash flows and terms of 1 to 30 years. We ran each algorithm 11 times at each workload size on three different multicore platforms. We systematically observed the differences and tested them for statistical significance. All the parallel algorithms exhibited super-linear speedup and super-efficiency consistent with maximum performance expectations for scientific computing workloads. The first-order effort or “na&#239;ve” parallel algorithms were easiest to write since they followed directly from the serial algorithms. We found we could improve upon the na&#239;ve approach with second-order efforts, namely, fine-grain parallel algorithms, which showed the overall best, statistically significant performance, followed by coarse-grain algorithms. To our knowledge these results have not been presented elsewhere.</description>
        <description>http://thesai.org/Downloads/Volume3No7/Paper_1-Bond_Portfolio_Analysis_with_Parallel_Collections_in_Scala.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Exploiting Communication Framework To Increase Usage Of SMIG Model Among Users</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020108</link>
        <id>10.14569/SpecialIssue.2012.020108</id>
        <doi>10.14569/SpecialIssue.2012.020108</doi>
        <lastModDate>2012-07-21T20:27:41.5330000+00:00</lastModDate>
        
        <creator>Seema Shah</creator>
        
        <creator>Sunita Mahajan</creator>
        
        <subject>DSM based Grid;Shared Memory integrated Grid- SMIG;Communication Framework.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>Today is the era of parallel and distributed computing models. Recent developments in DSM, Grids, and DSM-based Grids focus on high-end computation for parallelized applications. We have built SMIG (Shared Memory Integrated with Grid), which amalgamates the DSM and Grid computing paradigms, as part of our doctoral work. Literature citations have indicated a lacuna in how technological models are communicated to potential users to increase usage. Based on this lacuna, we have identified the potential users and prepared a communication framework to disseminate SMIG information in order to increase its usage. Hence, in this paper we compare various communication techniques used for disseminating DSM, Grid, and DSM-based Grid models, as surveyed from the literature. We have further designed and implemented a communication framework to percolate SMIG information to users. In the communication framework we have plugged in various tools for information dissemination and feedback (apart from those found in the survey) to promote the usage of the technology among volunteers and application developers. These include arranging overview sessions, passing on written documentation such as presentations, an installation handbook, and FAQs, and also providing an opportunity to use the SMIG model. The detailed responses received from the users after implementing the communication framework are encouraging and indicate that such a framework can be used for disseminating other technology developments to potential users. This will prove useful in today’s dynamic world, where technological developments happen on a day-to-day basis.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_8-Exploiting_Communication_Framework_To_Increase_Usage_Of_SMIG_Model_Among_Users.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EtranS- A Complete Framework for English To Sanskrit Machine Translation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020107</link>
        <id>10.14569/SpecialIssue.2012.020107</id>
        <doi>10.14569/SpecialIssue.2012.020107</doi>
        <lastModDate>2012-07-21T20:27:41.5030000+00:00</lastModDate>
        
        <creator>Promila Bahadur</creator>
        
        <creator>A.K.Jain</creator>
        
        <creator>D.S.Chauhan</creator>
        
        <subject>Analysis, Machine translation, translation theory, Interlingua, language divergence, Sanskrit, natural language processing.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>Machine translation has been a topic of research for many years. Many methods and techniques have been proposed and developed; however, the quality of translation has always been a matter of concern. In this paper, we outline a target-language generation mechanism for the English-Sanskrit language pair using the rule-based machine translation technique [1]. Rule-based machine translation provides high-quality translation and requires in-depth knowledge of the language, apart from real-world knowledge and the differences in cultural background and conceptual divisions. A string of English sentences can be translated into a string of Sanskrit ones. The methodology for design and development is implemented in the form of software named “EtranS”.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_7-EtranS-A_Complete_Framework_for_English_To_Sanskrit_Machine_Translation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study on Retrieved Images by Content Based Image Retrieval System based on Binary Tree, Color, Texture and Canny Edge Detection Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020106</link>
        <id>10.14569/SpecialIssue.2012.020106</id>
        <doi>10.14569/SpecialIssue.2012.020106</doi>
        <lastModDate>2012-07-21T20:27:41.4230000+00:00</lastModDate>
        
        <creator>Saroj A Shambharkar</creator>
        
        <creator>Shubhangi C. Tirpude</creator>
        
        <subject>Binary tree structure, Content Based Image Retrieval, canny edge detection, Mean, Correlation.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>Content based image retrieval (CBIR) is a technique in which images are indexed by extracting their low-level features, high-level features, or both. This paper presents a performance and comparative study of the results obtained by two different approaches of a CBIR system. Both approaches use two major processes: feature extraction, and search and retrieval. The first approach is based on extracting features using a binary tree structure, color &amp; texture, and the second approach is based on extracting shape features using Canny edge detection. In both approaches the feature vector of the query image is compared with the feature vector of each image present in the database. In the first approach the retrieved images are arranged according to their matching score or rank value, and in the second approach the retrieved images are arranged according to their mean and correlation values.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_6-A_Comparative_Study_on_Retrieved_Images_by_Content_Based_Image_Retrieval_System_based_on_Binary_Tree_Color_Texture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Feature Extraction For Biometric Authentication using Partitioned Complex Planes in Transform Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020105</link>
        <id>10.14569/SpecialIssue.2012.020105</id>
        <doi>10.14569/SpecialIssue.2012.020105</doi>
        <lastModDate>2012-07-21T20:27:41.3770000+00:00</lastModDate>
        
        <creator>Vinayak Ashok Bharadi</creator>
        
        <subject>Biometrics; Transforms; DCT; FFT; Kekre Transform; Hartley Transform; Kekre Wavelets.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>Feature vector generation is an important step in biometric authentication. Biometric traits such as fingerprints, finger-knuckle prints, palmprints, and irises are rich in texture. This texture is unique, and the feature vector extraction algorithm should correctly represent the texture pattern. In this paper a texture feature extraction methodology is proposed for these biometric traits. The method is based on a one-step transform of the two-dimensional images, then using the intermediate transformation data to generate complex planes for feature vector generation. The method is implemented using the Walsh, DCT, Hartley, and Kekre Transforms &amp; Kekre Wavelets. Results indicate the effectiveness of the feature vector for biometric authentication.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_5-Texture_Feature_Extraction_For_Biometric_Authentication_using_Partitioned_Complex_Planes_in_Transform_Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Modeling of Queuing Techniques for Enhance QoS Support for Uniform and Exponential VoIP &amp; Video based Traffic Distribution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020104</link>
        <id>10.14569/SpecialIssue.2012.020104</id>
        <doi>10.14569/SpecialIssue.2012.020104</doi>
        <lastModDate>2012-07-21T20:27:41.3300000+00:00</lastModDate>
        
        <creator>B K Mishra</creator>
        
        <creator>S.K Singh</creator>
        
        <creator>Kalpana patel</creator>
        
        <subject>QoS-Quality of service; NGN-Next Generation Network; FIFO- First-in-first-out; PQ- Priority queuing; WFQ- Weighted-Fair queuing; VoIP- Voice over Internet Protocol.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>In the near future there will be demand for seamless service across different types of networks, so how to guarantee quality of service (QoS) and support a variety of services is a significant issue. One important generalization of the Next Generation Network (NGN) is that it is a network of queues. It is expected that traffic in NGN will undergo both quantitative and qualitative changes. Such networks can model problems of contention that arise when a set of resources is shared. With the rapid transformation of the Internet into a commercial infrastructure, demands for service quality have rapidly developed. This paper gives a comparative analysis of three queuing systems, FIFO, PQ, and WFQ, under different traffic distributions, including constant, uniform, and exponential distributions. Packet end-to-end delay, traffic dropped, and packet delay variation are evaluated through simulation. Results have been evaluated for uniform and exponential traffic distributions. The results show that WFQ has better quality than the other techniques for voice-based services, with minimum traffic dropped, whereas the PQ technique is better for video-based services. Simulation is done using OPNET.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_4-Performance_Modeling_of_Queuing_Techniques_for_Enhance_QoS_Support_for_Uniform_and_Exponential_VoIP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fair Queuing Technique for Efficient Content Delivery Over 3G and 4 G Networks in Varying Load Condition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020103</link>
        <id>10.14569/SpecialIssue.2012.020103</id>
        <doi>10.14569/SpecialIssue.2012.020103</doi>
        <lastModDate>2012-07-21T20:27:41.2830000+00:00</lastModDate>
        
        <creator>B K Mishra</creator>
        
        <creator>S.K Singh</creator>
        
        <creator>Ruchi Shah</creator>
        
        <subject>QoS-Quality of service; NGN-Next Generation Network; FIFO- First-in-first-out; PQ- Priority queuing; WFQ- Weighted-Fair queuing; VoIP- Voice over Internet Protocol.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>The challenge of new communication architectures is to offer better quality of service (QoS) in the Internet. A large diversity of services based on packet switching in 3G networks and beyond leads to dramatic changes in the characteristics and parameters of data traffic. Deployment of application servers and resource servers has been proposed to support both high data rates and quality of service (QoS) for the Next Generation Network (NGN). One important generalization of the Next Generation Network is that it is a network of queues. It is expected that traffic in NGN will undergo both quantitative and qualitative changes. Such networks can model problems of contention that arise when a set of resources is shared. With the rapid transformation of the Internet into a commercial infrastructure, demands for service quality have rapidly developed. In this paper, a few components of the NGN reference architecture are taken and the system is evaluated in terms of queuing networks. This paper gives a comparative analysis of three queuing systems: FIFO, PQ and WFQ. Packet end-to-end delay, packet delay variation, and traffic dropped are evaluated through simulation. Results have been evaluated for light-load, intermediate-load, and heavy-load conditions under constant traffic distribution, and also for variable bandwidth conditions. The results show that WFQ offers better quality than the other techniques for voice-based services, whereas PQ is better for video-based services. Simulation is done using OPNET.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_3-A_Fair_Queuing_Technique_for_Efficient_Content_Delivery_Over_3G_and_4G_Networks_in_Varying_Load_Condition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Color &amp; Texture Based Image Retrieval using Fusion of Modified Block Truncation Coding (MBTC) and Kekre Transform Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020102</link>
        <id>10.14569/SpecialIssue.2012.020102</id>
        <doi>10.14569/SpecialIssue.2012.020102</doi>
        <lastModDate>2012-07-21T20:27:41.2530000+00:00</lastModDate>
        
        <creator>A R Sawant</creator>
        
        <creator>V A Bharadi</creator>
        
        <creator>Bijith Markarkandy</creator>
        
        <subject>Image retrieval, CBIR, MBTC, Kekre’s pattern.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>Content-Based Image Retrieval (CBIR) is an interesting topic of research. This paper is about image content-based search; specifically, it is on developing technologies for bridging the semantic gap that currently prevents wide deployment of image content-based search engines. Mostly, content-based methods are based on low-level descriptions, while high-level or semantic descriptions are beyond current capabilities. In this paper a CBIR system is proposed based on color and texture based search. Modified Block Truncation Coding (MBTC) is used for color information retrieval. To extract texture information we use patterns generated by transforms, currently the Kekre Transform. The feature vector is generated by fusion of the above-mentioned techniques.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_2-Colour_Texture_Based_Image_Retrieval_using_Fusion_of_Modified_Block_Truncation_Coding_(MBTC)_and_Kekre_Transform_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Time and Frequency Domain Analysis of the Linear Fractional-order Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2012.020101</link>
        <id>10.14569/SpecialIssue.2012.020101</id>
        <doi>10.14569/SpecialIssue.2012.020101</doi>
        <lastModDate>2012-07-21T20:27:41.1600000+00:00</lastModDate>
        
        <creator>Manisha M Bhole</creator>
        
        <creator>Mukesh D. Patil</creator>
        
        <creator>Vishwesh A. Vyawahare</creator>
        
        <subject>Fractional-order systems, fractional calculus, stability analysis.</subject>
        <description>Special Issue(SpecialIssue), 2(1), 2012</description>
        <description>related to the use of Fractional-order (FO) differential equations in modeling and control. FO differential equations are found to provide a more realistic, faithful, and compact representation of many real-world, natural and man-made systems. FO controllers, on the other hand, have been able to achieve better closed-loop performance and robustness than their integer-order counterparts. In this paper, we provide a systematic and rigorous time and frequency domain analysis of linear FO systems. Various concepts like stability, step response, and frequency response are discussed in detail for a variety of linear FO systems. We also give the state space representations for these systems and comment on their controllability and observability. The exercise presented here conveys the fact that the time and frequency domain analysis of FO linear systems is very similar to that of integer-order linear systems.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo4/Paper_1-Time_and_Frequency_Domain_Analysis_of_the_Linear_Fractional-order_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Analysis of Various Approaches Used in Frequent Pattern Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010323</link>
        <id>10.14569/SpecialIssue.2011.010323</id>
        <doi>10.14569/SpecialIssue.2011.010323</doi>
        <lastModDate>2012-07-21T20:27:41.0670000+00:00</lastModDate>
        
        <creator>Deepak Garg</creator>
        
        <creator>Hemant Sharma</creator>
        
        <subject>Data mining; Frequent patterns; Frequent pattern mining; association rules; support; confidence; Dynamic item set counting</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Frequent pattern mining has become an important data mining task and has been a focused theme in data mining research. Frequent patterns are patterns that appear in a data set frequently. Frequent pattern mining searches for recurring relationships in a given data set. Various techniques have been proposed to improve the performance of frequent pattern mining algorithms. This paper presents a review of different frequent pattern mining techniques, including apriori-based algorithms, partition-based algorithms, DFS and hybrid algorithms, pattern-based algorithms, SQL-based algorithms, and incremental apriori-based algorithms. A brief description of each technique is provided. Finally, the different frequent pattern mining techniques are compared based on various parameters of importance. Experimental results show that the FP-Tree based approach achieves better performance.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 23-Comparative Analysis of Various Approaches Used in Frequent Pattern Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Student Data to Characterize Performance Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010322</link>
        <id>10.14569/SpecialIssue.2011.010322</id>
        <doi>10.14569/SpecialIssue.2011.010322</doi>
        <lastModDate>2012-07-21T20:27:41.0330000+00:00</lastModDate>
        
        <creator>Bindiya M Varghese</creator>
        
        <creator>Jose Tomy J</creator>
        
        <creator>Unnikrishnan A</creator>
        
        <creator>Poulose Jacob K</creator>
        
        <subject>Data mining; k-means Clustering; Fuzzy C-means; Student performance analysis.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Over the years the academic records of thousands of students have accumulated in educational institutions and most of these data are available in digital format. Mining these huge volumes of data may gain a deeper insight and can throw some light on planning pedagogical approaches and strategies in the future. We propose to formulate this problem as a data mining task and use k-means clustering and fuzzy c-means clustering algorithms to evolve hidden patterns.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 22-Clustering Student Data to Characterize Performance Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Cancer Classification using Fast Adaptive Neuro-Fuzzy Inference System (FANFIS) based on Statistical Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010321</link>
        <id>10.14569/SpecialIssue.2011.010321</id>
        <doi>10.14569/SpecialIssue.2011.010321</doi>
        <lastModDate>2012-07-21T20:27:40.8930000+00:00</lastModDate>
        
        <creator>K Anandakumar</creator>
        
        <creator>M.Punithavalli</creator>
        
        <subject>Gene Expressions; Cancer Classification; Neural Networks; Neuro-Fuzzy Inference System; Analysis of Variance; Modified Levenberg-Marquardt Algorithm.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>An increase in the number of cancer cases is being detected throughout the world. This creates the need for new techniques that can detect the occurrence of cancer and thereby support better diagnosis. This paper aims at finding the smallest set of genes that can ensure highly accurate classification of cancer from microarray data using supervised machine learning algorithms. The significance of finding the minimum subset is threefold: a) the computational burden and noise arising from irrelevant genes are much reduced; b) the cost of cancer testing is reduced significantly, as the gene expression tests are simplified to include only a very small number of genes rather than thousands; c) it calls for more investigation into the probable biological relationship between these small numbers of genes and cancer development and treatment. The proposed method involves two steps. In the first step, some important genes are chosen with the help of an Analysis of Variance (ANOVA) ranking scheme. In the second step, the classification capability is tested for all simple combinations of those important genes using a better classifier. The proposed method uses a Fast Adaptive Neuro-Fuzzy Inference System (FANFIS) as the classification model, with a Modified Levenberg-Marquardt algorithm for the learning phase. The experimental results suggest that the proposed method achieves better accuracy and takes less time for classification compared to conventional techniques.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 21-Efficient Cancer Classification using Fast Adaptive Neuro-Fuzzy Inference System (FANFIS) based on Statistical Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Extracting Product Information from TV Commercial</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010320</link>
        <id>10.14569/SpecialIssue.2011.010320</id>
        <doi>10.14569/SpecialIssue.2011.010320</doi>
        <lastModDate>2012-07-21T20:27:40.7530000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Herman Tolle</creator>
        
        <subject>text detection; information extraction; rule based classifying; pattern matching.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>A television (TV) commercial contains important product information that is displayed for only a few seconds. People who need that information have insufficient time to note it down, or even just to read it. This research work focuses on automatically detecting text and extracting important information from a TV commercial, to provide information in real time and for video indexing. We propose a method for product information extraction from TV commercials using a knowledge-based system with a pattern-matching rule-based method. Implementation and experiments on 50 commercial screenshot images achieved highly accurate text extraction and information recognition.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 20-Method for Extracting Product Information .pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Volunteered Geographic Information datasets with heterogeneous spatial reference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010319</link>
        <id>10.14569/SpecialIssue.2011.010319</id>
        <doi>10.14569/SpecialIssue.2011.010319</doi>
        <lastModDate>2012-07-21T20:27:40.6900000+00:00</lastModDate>
        
        <creator>Sadiq Hussain</creator>
        
        <creator>G.C. Hazarika</creator>
        
        <subject>data mining; periodic patterns; spatiotemporal data; Volunteered Geographic Information.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>When information created online by users has a spatial reference, it is known as Volunteered Geographic Information (VGI). The increased availability of spatiotemporal data collected from satellite imagery and other remote sensors provides opportunities for enhanced analysis of spatiotemporal patterns. This area can be defined as efficiently discovering interesting patterns from large data sets. The discovery of hidden periodic patterns in spatiotemporal data could unveil important information to the data analyst. In many applications that track and analyze spatiotemporal data, movements obey periodic patterns; the objects follow (approximately) the same routes over regular time intervals. However, existing methods cannot be applied directly to a spatiotemporal sequence because of the fuzziness of spatial locations in the sequence. In this paper, we define the problem of mining VGI datasets with our previously established bottom-up algorithm for spatiotemporal data.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 19-Mining Volunteered Geographic Information datasets with heterogeneous spatial reference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Instant Human Face Attributes Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010318</link>
        <id>10.14569/SpecialIssue.2011.010318</id>
        <doi>10.14569/SpecialIssue.2011.010318</doi>
        <lastModDate>2012-07-21T20:27:40.5970000+00:00</lastModDate>
        
        <creator>N Bellustin</creator>
        
        <creator>Y. Kalafati</creator>
        
        <creator>Kovalchuck</creator>
        
        <creator>A. Telnykh</creator>
        
        <creator>O. Shemagina</creator>
        
        <creator>V.Yakhno</creator>
        
        <creator>Abhishek Vaish</creator>
        
        <creator>Pinki Sharma</creator>
        
        <subject>Gender recognition; Age recognition; Ethnicity recognition; MCT; AdaBoost; attributes classifier.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>The objective of this work is to provide a simple yet efficient tool for recognizing human attributes such as gender, age, and ethnicity from a human facial image in real time; as is well known, real-time frame rate is a vital factor for the practical deployment of computer vision systems. This paper presents progress towards a face detection and human attributes classification system. We have developed an algorithm for the classification of gender, age, and race from human frontal facial images. As its basis, the proposed algorithm uses a training set of neuron-like receptors that process visual information. A study of several variants of these classifiers shows the principal possibility of sex determination, assessment of a person&#39;s age on a scale (adult - children), and recognition of race using the neuron-like receptors.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 18-Instant Human Face Attributes Recognition System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Face Recognition Using Bacteria Foraging Optimization-Based Selected Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010317</link>
        <id>10.14569/SpecialIssue.2011.010317</id>
        <doi>10.14569/SpecialIssue.2011.010317</doi>
        <lastModDate>2012-07-21T20:27:40.5370000+00:00</lastModDate>
        
        <creator>Rasleen Jakhar</creator>
        
        <creator>Navdeep Kaur</creator>
        
        <creator>Ramandeep Singh</creator>
        
        <subject>Face Recognition; Bacteria Foraging Optimization; DCT; Feature Selection.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Feature selection (FS) is a global optimization problem in machine learning, which reduces the number of features, removes irrelevant, noisy and redundant data, and results in acceptable recognition accuracy. This paper presents a novel feature selection algorithm based on Bacteria Foraging Optimization (BFO). The algorithm is applied to coefficients extracted by the discrete cosine transform (DCT). Evolution is driven by a fitness function defined in terms of maximizing the class separation (scatter index). Performance is evaluated using the ORL face database.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 17-Face Recognition Using Bacteria Foraging Optimization-Based Selected Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Prototype Student Advising Expert System Supported with an Object-Oriented Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010316</link>
        <id>10.14569/SpecialIssue.2011.010316</id>
        <doi>10.14569/SpecialIssue.2011.010316</doi>
        <lastModDate>2012-07-21T20:27:40.4570000+00:00</lastModDate>
        
        <creator>M Ayman Ahmar</creator>
        
        <subject>academic advising; expert system; object-oriented database.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Using intelligent computer systems technology to support the academic advising process offers many advantages over the traditional student advising. The objective of this research is to develop a prototype student advising expert system that assists the students of Information Systems (IS) major in selecting their courses for each semester towards the academic degree. The system can also be used by academic advisors in their academic planning for students. The expert system is capable of advising students using prescriptive advising model and developmental advising model. The system is supported with an object-oriented database and provides a friendly graphical user interface. Academic advising cases tested using the system showed high matching (93%) between the automated advising provided by the expert system and the advising performed by human advisors. This proves that the developed prototype expert system is successful and promising.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 16-A Prototype Student Advising Expert System Supported with an Object-Oriented Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Extended Performance Comparison of Colour to Grey and Back using the Haar, Walsh, and Kekre Wavelet Transforms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010315</link>
        <id>10.14569/SpecialIssue.2011.010315</id>
        <doi>10.14569/SpecialIssue.2011.010315</doi>
        <lastModDate>2012-07-21T20:27:40.3800000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Sudeep D. Thepade</creator>
        
        <creator>Adib Parkar</creator>
        
        <subject>Colouring; Colour to Grey; Matted Greyscale; Grey to Colour; LUV Colour Space; YCbCr Colour Space; YCgCb Colour Space; YIQ Colour Space; YUV Colour Space; Haar Wavelets; Kekre’s Wavelets; Walsh Transform.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>The storage of colour information in a greyscale image is not a new idea. Various techniques have been proposed using different colour spaces including the standard RGB colour space, the YUV colour space, and the YCbCr colour space. This paper extends the results described in [1] and [2]. While [1] describes the storage of colour information in a greyscale image using Haar wavelets, and [2] adds a comparison with Kekre’s wavelets, this paper adds a third transform – the Walsh transform and presents a detailed comparison of the performance of all three transforms across the LUV, YCbCr, YCgCb, YIQ, and YUV colour spaces. The main aim remains the same as that in [1] and [2], which is the storage of colour information in a greyscale image known as the “matted” greyscale image.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 15-An Extended Performance Comparison of Colour to Grey and Back using the Haar, Walsh, and Kekre Wavelet Transforms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Decision Tree to Estimate Development Effort for Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010314</link>
        <id>10.14569/SpecialIssue.2011.010314</id>
        <doi>10.14569/SpecialIssue.2011.010314</doi>
        <lastModDate>2012-07-21T20:27:40.3330000+00:00</lastModDate>
        
        <creator>Ali Idri</creator>
        
        <creator>Sanaa Elyassami</creator>
        
        <subject>Fuzzy Logic; Effort Estimation; Decision Tree; Fuzzy ID3; Software project.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Web effort estimation is the process of predicting the effort and cost, in terms of money, schedule and staff, for any software project. Many estimation models have been proposed over the last three decades, and such estimation is considered essential for budgeting, risk analysis, project planning and control, and project improvement investment analysis. In this paper, we investigate the use of a Fuzzy ID3 decision tree for software cost estimation. It is designed by integrating the principles of the ID3 decision tree with fuzzy set-theoretic concepts, enabling the model to handle uncertain and imprecise data when describing software projects, which can greatly improve the accuracy of the obtained estimates. MMRE and Pred are used as measures of prediction accuracy for this study. A series of experiments is reported using the Tukutuku software projects dataset. The results are compared with those produced by three crisp versions of decision trees: ID3, C4.5 and CART.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 14-A Fuzzy Decision Tree to Estimate Development Effort for Web Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Biometric Person Authentication using Speech, Signature and Handwriting Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010313</link>
        <id>10.14569/SpecialIssue.2011.010313</id>
        <doi>10.14569/SpecialIssue.2011.010313</doi>
        <lastModDate>2012-07-21T20:27:40.2530000+00:00</lastModDate>
        
        <creator>Eshwarappa M N</creator>
        
        <creator>Mrityunjaya V. Latte</creator>
        
        <subject>Biometrics; Speaker recognition; Signature recognition; Handwriting recognition; Multimodal system.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>The objective of this work is to develop a multimodal biometric system using speech, signature and handwriting information. Unimodal biometric person authentication systems are initially developed for each of these biometric features. Methods are then explored for integrating them to obtain a multimodal system. Apart from implementing state-of-the-art systems, the major part of the work concerns new explorations at each level with the objective of improving performance and robustness. The latest research indicates that multimodal person authentication systems are more effective and more challenging. This work demonstrates that the fusion of multiple biometrics helps to minimize the system error rates. As a result, the identification performance is 100% and, for verification performance, the False Acceptance Rate (FAR) is 0% and the False Rejection Rate (FRR) is 0%.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 13-Multimodal Biometric Person Authentication using Speech, Signature and Handwriting Features .pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new vehicle detection method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010312</link>
        <id>10.14569/SpecialIssue.2011.010312</id>
        <doi>10.14569/SpecialIssue.2011.010312</doi>
        <lastModDate>2012-07-21T20:27:40.1600000+00:00</lastModDate>
        
        <creator>Zebbara Khalid</creator>
        
        <creator>Abdenbi Mazoul</creator>
        
        <creator>Mohamed El Ansari</creator>
        
        <subject>component; intelligent vehicle; vehicle detection; Association; Optical Flow; AdaBoost; Haar filter.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>This paper presents a new vehicle detection method from images acquired by cameras embedded in a moving vehicle. Given a sequence of images, the proposed algorithm should detect all cars in real time. With respect to the driving direction, the cars can be classified into two types: cars driving in the same direction as the intelligent vehicle (IV) and cars driving in the opposite direction. Due to the distinct features of these two types, we suggest achieving this method in two main steps. The first detects all obstacles in the images using the so-called association combined with a corner detector. The second step is applied to validate each vehicle using an AdaBoost classifier. The new method has been applied to different image data and the experimental results validate the efficacy of our method.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 12-A new vehicle detection method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust Face Detection Using Circular Multi Block Local Binary Pattern and Integral Haar Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010311</link>
        <id>10.14569/SpecialIssue.2011.010311</id>
        <doi>10.14569/SpecialIssue.2011.010311</doi>
        <lastModDate>2012-07-21T20:27:39.9730000+00:00</lastModDate>
        
        <creator>P K Suri</creator>
        
        <creator>Amit Verma</creator>
        
        <subject>CMBLBP; MBLBP; LBP; Gentle Boosting; Face Detection.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>In real-world applications, it is very challenging to implement a good detector which gives the best performance with great speed and accuracy. There is always a trade-off between speed and accuracy when considering the performance of a face detector. In the current work we have implemented a robust face detector which uses a new concept called integral Haar histograms with CMBLBP or CSMBLBP (circular multi block local binary pattern). Our detector runs in real-world applications and its performance is far better than that of existing detectors. It works with good speed and sufficient accuracy under varying face sizes, varying illumination, varying angles, different facial expressions, rotation, and scaling, challenges which are the main concerns in the domain of face detection. We used Matlab and the Image Processing Toolbox for the implementation of the above-mentioned technique.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 11-Robust Face Detection Using Circular Multi Block Local Binary Pattern and Integral Haar Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extraction of Line Features from Multifidus Muscle of CT Scanned Images with Morphologic Filter Together with Wavelet Multi Resolution Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010310</link>
        <id>10.14569/SpecialIssue.2011.010310</id>
        <doi>10.14569/SpecialIssue.2011.010310</doi>
        <lastModDate>2012-07-21T20:27:39.8800000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yuichiro Eguchi</creator>
        
        <creator>Yoichiro Kitajima</creator>
        
        <subject>multifidus muscle; Computed Tomography; wavelet; Multi Resolution Analysis; morphological filter.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>A method for line feature extraction from the multifidus muscle in Computed Tomography (CT) scanned images, using a morphologic filter together with wavelet-based Multi Resolution Analysis (MRA), is proposed. The contour of the multifidus muscle can be extracted from a hip CT image. The area of the multifidus muscle is then estimated and used as an index of belly fat, because there is a high correlation between belly fat and the multifidus muscle. When the area of the multifidus muscle is calculated from the CT image, MRA with Daubechies basis functions at decomposition level three is found to be appropriate. The wavelet transformation is applied to the original hip CT image three times, the LLL component (the low-frequency components in three dimensions) is filled with zeros, and the inverse wavelet transformation is then applied for reconstruction. The proposed method is validated with four patients.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 10-Extraction of Line Features from Multifidus Muscle of CT Scanned Images with Morphologic Filter Together with Wavelet Multi Resolution Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Motion Blobs as a Feature for Detection on Smoke</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010309</link>
        <id>10.14569/SpecialIssue.2011.010309</id>
        <doi>10.14569/SpecialIssue.2011.010309</doi>
        <lastModDate>2012-07-21T20:27:39.8330000+00:00</lastModDate>
        
        <creator>Khalid Nazim S. A</creator>
        
        <creator>M.B. Sanjay Pande</creator>
        
        <subject>Motion blobs; Blob Extraction; Feature Extraction.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Smoke is a disturbance of visual perception of the atmosphere, made up of small particles of carbonaceous matter suspended in the air, resulting mainly from the burning of organic material; the major problem is to quantify the detected smoke. The present work focuses on the detection of smoke, irrespective of whether it is accidental, arson, or deliberately created, and on raising an alarm through an electrical device that senses the presence of visible or invisible particles. In simple terms, a smoke detector issues a signal to a fire alarm system or sounds a local audible alarm from the detector itself.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 9-Motion Blobs as a Feature for Detection on Smoke.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Tifinagh Character Recognition Using Geodesic Distances, Decision Trees &amp; Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010308</link>
        <id>10.14569/SpecialIssue.2011.010308</id>
        <doi>10.14569/SpecialIssue.2011.010308</doi>
        <lastModDate>2012-07-21T20:27:39.7570000+00:00</lastModDate>
        
        <creator>O Bencharef</creator>
        
        <creator>M. Fakir</creator>
        
        <creator>B. Minaoui</creator>
        
        <creator>B. Bouikhalene</creator>
        
        <subject>Tifinagh character recognition; Neural networks; Decision trees; Riemannian geometry; Geodesic distances.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>The recognition of Tifinagh characters cannot be carried out perfectly using conventional invariance-based methods, owing to the similarity between some characters that differ from each other only by size or rotation; hence the need for new methods to remedy this shortcoming. In this paper we propose a direct method based on the calculation of so-called Geodesic Descriptors, which have shown significant reliability vis-&#224;-vis changes of scale, the presence of noise, and geometric distortions. For classification, we have opted for a method based on the hybridization of decision trees and neural networks.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 8-Tifinagh Character Recognition Using Geodesic Distances Decision Trees Neural Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Unsupervised Method of Object Retrieval Using Similar Region Merging and Flood Fill</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010307</link>
        <id>10.14569/SpecialIssue.2011.010307</id>
        <doi>10.14569/SpecialIssue.2011.010307</doi>
        <lastModDate>2012-07-21T20:27:39.6470000+00:00</lastModDate>
        
        <creator>Kanak Saxena</creator>
        
        <creator>Sanjeev Jain</creator>
        
        <creator>Uday Pratap Singh</creator>
        
        <subject>Image segmentation; similar regions; region merging; mean shift; flood fill.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>In this work, we address a novel interactive framework for object retrieval using unsupervised similar region merging and a flood fill method, which models the spatial and appearance relations among image pixels. Efficient and effective image segmentation is usually very hard for natural and complex images. This paper presents a new technique for similar region merging and object retrieval. The user only needs to provide rough indications, after which the desired object boundary is obtained during the merging of similar regions. A novel similarity-based region merging mechanism is proposed to guide the merging process with the help of the mean shift technique. A region R is merged with an adjacent region Q if Q has the highest similarity with R among all of Q’s adjacent regions. The proposed method automatically merges the regions that are initially segmented by the mean shift technique, and then effectively extracts the object contour by merging all similar regions. Extensive experiments performed on 22 object classes (524 images in total) show promising results.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 7-Unsupervised Method of Object Retrieval Using Similar Region Merging and Flood Fill.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010306</link>
        <id>10.14569/SpecialIssue.2011.010306</id>
        <doi>10.14569/SpecialIssue.2011.010306</doi>
        <lastModDate>2012-07-21T20:27:39.6130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yuji Yamada</creator>
        
        <subject>Dyadic wavelet; Lifting wavelet; Data hiding; Data compression.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>An attempt is made to improve secret image invisibility in circulation images with dyadic-wavelet-based data hiding using run-length coded secret images, the code locations of which are determined by random numbers. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance, in terms of the invisibility of the secret images embedded in the circulation images, is evaluated with the Root Mean Square difference between the original secret image and the one extracted from the circulation images. Meanwhile, conventional Multi-Resolution Analysis (MRA) based data hiding is attempted with a variety of parameters (the MRA level and the frequency component that the secret image replaces) and is compared to the proposed method. It is found that the proposed data hiding method is superior to the conventional method, and that the conventional method is not robust against circulation image processing.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 6-Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SOM Based Visualization Technique For Detection Of Cancerous Masses In Mammogram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010305</link>
        <id>10.14569/SpecialIssue.2011.010305</id>
        <doi>10.14569/SpecialIssue.2011.010305</doi>
        <lastModDate>2012-07-21T20:27:39.5830000+00:00</lastModDate>
        
        <creator>S. Pitchuman Angayarkanni</creator>
        
        <creator>V.Saravanan</creator>
        
        <subject>Image Enhancement; Gabor Filter; Texture Features; SOM; ROC.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Breast cancer is the most common form of cancer in women. An intelligent computer-aided diagnosis system can be very helpful for radiologists in detecting and diagnosing microcalcification patterns earlier and faster than typical screening programs. In this paper, we present a system based on a Gabor-filter-based enhancement technique and feature extraction using texture-based segmentation and a SOM (Self Organizing Map), a form of Artificial Neural Network (ANN) used to analyze the extracted texture features. The SOM determines which texture features have the ability to classify benign, malignant, and normal cases. The watershed segmentation technique is used to separate cancerous regions from non-cancerous regions. We have investigated and analyzed a number of feature extraction techniques and found that a combination of ten features, including Correlation, Cluster Prominence, Energy, Entropy, Homogeneity, Difference Variance, Difference Entropy, Information Measure, and Normalized features, is calculated. These features give the distribution of tonality information and were found to be the best combination to distinguish a benign microcalcification pattern from a malignant or normal one. The system was developed on a Windows platform. It is an easy-to-use intelligent system that gives the user options to diagnose, detect, enlarge, zoom, and measure distances of areas in digital mammograms. Further, using a linear filtering technique, the texture features as masks are convolved with the segmented image. The tumor is detected using the above method, and a fair segmentation is obtained using watershed segmentation. The accuracy and positive predictive value of each algorithm, combining the artificial neural network with unsupervised learning and the texture-based approach, were used as the evaluation indicators. 121 records were acquired from breast cancer patients in the MIAS database. The results revealed that texture-based unsupervised learning achieved an accuracy of 0.9534 (sensitivity 0.98716 and specificity 0.9582), as determined through the ROC. The results showed that the Gabor-based unsupervised learning described in the present study was able to produce accurate results in the classification of breast cancer data, and the classification rule identified was more acceptable and comprehensible.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 5-SOM Based Visualization Technique For Detection Of Cancerous Masses In Mammogram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparison Study between Data Mining Tools over some Classification Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010304</link>
        <id>10.14569/SpecialIssue.2011.010304</id>
        <doi>10.14569/SpecialIssue.2011.010304</doi>
        <lastModDate>2012-07-21T20:27:39.4900000+00:00</lastModDate>
        
        <creator>Abdullah H Wahbeh</creator>
        
        <creator>Qasem A. Al-Radaideh</creator>
        
        <creator>Mohammed N. Al-Kabi</creator>
        
        <creator>Emad M. Al-Shawakfa</creator>
        
        <subject>data mining tools; data classification; Weka; Orange; Tanagra; KNIME.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>Nowadays, huge amounts of data and information are available to everyone. Data can now be stored in many different kinds of databases and information repositories, besides being available on the Internet or in printed form. With such amounts of data, there is a need for powerful techniques for better interpretation of these data, which exceed the human ability for comprehension and better decision making. In order to reveal the best tools for the classification task, which helps in decision making, this paper conducts a comparative study between a number of the freely available data mining and knowledge discovery tools and software packages. Results have shown that tool performance on the classification task is affected by the kind of dataset used and by the way the classification algorithms are implemented within the toolkits. Regarding applicability, the WEKA toolkit achieved the highest applicability, followed by Orange, Tanagra, and KNIME respectively. Finally, the WEKA toolkit achieved the highest improvement in classification performance when moving from the percentage split test mode to the cross-validation test mode, followed by Orange, KNIME, and finally Tanagra.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 4-A Comparison Study between Data Mining Tools over some Classification Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forecasting the Tehran Stock Market by Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010303</link>
        <id>10.14569/SpecialIssue.2011.010303</id>
        <doi>10.14569/SpecialIssue.2011.010303</doi>
        <lastModDate>2012-07-21T20:27:39.4130000+00:00</lastModDate>
        
        <creator>Reza Aghababaeyan</creator>
        
        <creator>Tamanna Siddiqui</creator>
        
        <creator>Najeeb Ahmad Khan</creator>
        
        <subject>Data mining; Stock Exchange; Artificial Neural Network; Matlab.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>One of the most important problems in modern finance is finding efficient ways to summarize and visualize stock market data to give individuals or institutions useful information about market behavior for investment decisions. The enormous amount of valuable data generated by the stock market has attracted researchers to explore this problem domain using different methodologies, and the potential benefits of solving these problems have motivated extensive research for years. In this paper, a computational data mining methodology was used to predict seven major stock market indexes. Two learning algorithms, Linear Regression and a standard feed-forward backpropagation (FFB) Neural Network, were tested and compared. The models were trained on four years of historical data, from March 2007 to February 2011, in order to predict the major stock price indexes in Iran (the Tehran Stock Exchange). The performance of these prediction models was evaluated using two widely used statistical metrics. We show that the feed-forward backpropagation (FFB) Neural Network algorithm results in better prediction accuracy. In addition, conventional wisdom suggests that a longer training period with more training data could help to build a more accurate prediction model. However, as the stock market in Iran has been highly fluctuating in the past two years, this paper shows that data collected from a closer and shorter period can help to reduce the prediction error in such a highly speculative, fast-changing environment.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 3-Forecasting the Tehran Stock Market by Artificial Neural Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Speaker Identification using Row Mean of Haar and Kekre’s Transform on Spectrograms of Different Frame Sizes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010302</link>
        <id>10.14569/SpecialIssue.2011.010302</id>
        <doi>10.14569/SpecialIssue.2011.010302</doi>
        <lastModDate>2012-07-21T20:27:39.3330000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Vaishali Kulkarni</creator>
        
        <subject>Speaker Identification; Spectrogram; Haar Transform; Kekre’s Transform; Row Mean; Euclidean distance</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>In this paper, we propose speaker identification using two transforms, namely the Haar Transform and Kekre’s Transform. The speech signal spoken by a particular speaker is converted into a spectrogram using 25% and 50% overlap between consecutive sample vectors. The two transforms are applied to the spectrogram. The row mean of the transformed matrix forms the feature vector, which is used in the training as well as the matching phase. The results of the two transform techniques have been compared. The Haar Transform gives fairly good results, with a maximum accuracy of 69% for both 25% and 50% overlap. Kekre’s Transform shows much better performance, with a maximum accuracy of 85.7% for 25% overlap and 88.5% for 50% overlap.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 2-Speaker Identification using Row Mean of Haar and Kekre’s Transform on Spectrograms of Different Frame Sizes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parts of Speech Tagging for Afaan Oromo</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010301</link>
        <id>10.14569/SpecialIssue.2011.010301</id>
        <doi>10.14569/SpecialIssue.2011.010301</doi>
        <lastModDate>2012-07-21T20:27:39.2870000+00:00</lastModDate>
        
        <creator>Getachew Mamo Wegari</creator>
        
        <creator>Million Meshesha</creator>
        
        <subject>Natural Language processing; parts of speech tagging; Hidden Markov Model; N-Gram; Afaan Oromo.</subject>
        <description>Special Issue(SpecialIssue), 1(3), 2011</description>
        <description>The main aim of this study is to develop a part-of-speech tagger for the Afaan Oromo language. After reviewing the literature on Afaan Oromo grammar and identifying the tagset and word categories, the study adopted the Hidden Markov Model (HMM) approach and implemented unigram and bigram models with the Viterbi algorithm. The unigram model is used to handle word ambiguity in the language, while the bigram model is used for contextual analysis of words. For training and testing, 159 manually annotated sample sentences (with a total of 1621 words) are used. The corpus was collected from different public Afaan Oromo newspapers and bulletins to keep the sample balanced. Databases of lexical probabilities and transitional probabilities are built from the annotated corpus; from these two probability tables the tagger learns to tag sequences of words in sentences. The performance of the prototype Afaan Oromo tagger is tested using tenfold cross validation. The results show that the unigram and bigram models obtain 87.58% and 91.97% accuracy, respectively.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo3/Paper 1-Parts of Speech Tagging for Afaan Oromo.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>32 x 10 and 64 &#215; 10 Gb/s transmission using hybrid Raman-Erbium doped optical amplifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010213</link>
        <id>10.14569/SpecialIssue.2011.010213</id>
        <doi>10.14569/SpecialIssue.2011.010213</doi>
        <lastModDate>2012-07-21T20:27:39.2100000+00:00</lastModDate>
        
        <creator>Shveta Singh</creator>
        
        <creator>M.L.Sharma</creator>
        
        <creator>Ramandeep Kaur</creator>
        
        <subject>HOA; RAMAN; EDFA; BER; Q-FACTOR; EYEOPENING; DISPERSION; TRANSMISSION DISTANCE; WDM and DWDM.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>We have successfully demonstrated long-haul transmission of 32 &#215; 10 Gbit/s and 64 &#215; 10 Gbit/s over single-mode fiber of 650 km and 530 km respectively, using a hybrid Raman-EDFA optical amplifier as the inline amplifier and preamplifier. The measured Q-factors (16.99–17 dB) and BERs (10^-13) of the 32 and 64 channels after 650 and 530 km respectively were better than the standard acceptable values, which demonstrates the feasibility of hybrid amplifiers including EDFA optical amplifiers for long-haul transmission.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 13- Transmission using hybrid Raman-Erbium doped optical amplifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent based Congestion Control Performance in Mobile ad-hoc Network: A Survey paper</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010212</link>
        <id>10.14569/SpecialIssue.2011.010212</id>
        <doi>10.14569/SpecialIssue.2011.010212</doi>
        <lastModDate>2012-07-21T20:27:39.1170000+00:00</lastModDate>
        
        <creator>Vishnu Kumar Sharma</creator>
        
        <creator>Sarita Singh Bhadauria</creator>
        
        <subject>Mobile Ad hoc Networks (MANETs); mobile agents (MA); TCP.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Congestion control is a key problem in mobile ad hoc networks. The standard TCP congestion control mechanism is not able to handle the special properties of a shared wireless channel, and many approaches have been proposed to overcome these difficulties. A mobile-agent-based congestion control routing technique is proposed to avoid congestion in ad hoc networks. Mobile agents are added to the ad hoc network, carrying routing information and the congestion status of nodes. As a mobile agent travels through the network, it can select a less-loaded neighbor node as its next hop and update the routing table according to the node’s congestion status. With the aid of mobile agents, the nodes can obtain the dynamic network topology in time. In this paper, we give an overview of existing proposals, explain their key ideas and interrelations, and discuss TCP issues, congestion reduction, and delay in mobile ad hoc networks, together with proposed solutions.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 12-Agent based Congestion Control Performance in Mobile ad-hoc Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Architecture for Intrusion Detection in Mobile Ad hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010211</link>
        <id>10.14569/SpecialIssue.2011.010211</id>
        <doi>10.14569/SpecialIssue.2011.010211</doi>
        <lastModDate>2012-07-21T20:27:39.0700000+00:00</lastModDate>
        
        <creator>Atul Patel</creator>
        
        <creator>Ruchi Kansara</creator>
        
        <creator>Paresh Virparia</creator>
        
        <subject>Ad hoc network; Intrusion Detection System; Mobile Network.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Today’s wireless networks are vulnerable in many ways, including illegal use, unauthorized access, denial of service attacks, and eavesdropping (so-called war chalking). These problems are among the main obstacles to wider use of wireless networks. On a wired network an intruder needs physical access to a wire, but in a wireless network it is possible to access a computer from anywhere in the neighborhood. Securing MANETs is a highly challenging issue due to their inherent characteristics. Intrusion detection is an important security mechanism, but little effort has been directed towards efficient and effective architectures for Intrusion Detection Systems in the context of MANETs. We investigate existing intrusion detection architecture design issues and challenges, and propose a novel architecture based on a conceptual model for an IDS agent that leads to a secure collaboration environment integrating the mobile ad hoc network and the wired backbone. In wireless/mobile ad hoc networks, the limited power and weak computation capabilities of mobile nodes, and the restricted bandwidth of the open medium, impede the establishment of a secure collaborative environment.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 11-A Novel Architecture for Intrusion Detection in Mobile Ad hoc Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSS based Vertical Handoff algorithms for Heterogeneous wireless networks - A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010210</link>
        <id>10.14569/SpecialIssue.2011.010210</id>
        <doi>10.14569/SpecialIssue.2011.010210</doi>
        <lastModDate>2012-07-21T20:27:38.9900000+00:00</lastModDate>
        
        <creator>Abhijit Bijwe</creator>
        
        <creator>C.G.Dethe</creator>
        
        <subject>RSS; WLAN; 3G; VHD.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Heterogeneous networks are integrated in the fourth generation. To achieve seamless communication and mobility between these heterogeneous wireless access networks, support for vertical handoff is required. Vertical handover is handover across converged heterogeneous networks, e.g., between WLAN and cellular networks. In this paper, three RSS-based vertical handoff algorithms are discussed. The first algorithm is adaptive lifetime-based vertical handoff, which combines RSS with an estimated lifetime (the expected duration for which the MT will be able to maintain its connection with the WLAN) to decide the vertical handover. The second algorithm is based on a dynamic RSS threshold, which is more suitable for handover from WLAN to a 3G network. The third algorithm is a traveling distance prediction method, which works well for WLAN to cellular networks and vice versa. This avoids unnecessary handoffs and also minimizes failure probability.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 10-RSS based Vertical Handoff algorithms for Heterogeneous wireless network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>NIDS For Unsupervised Authentication Records of KDD Dataset in MATLAB</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010209</link>
        <id>10.14569/SpecialIssue.2011.010209</id>
        <doi>10.14569/SpecialIssue.2011.010209</doi>
        <lastModDate>2012-07-21T20:27:38.9130000+00:00</lastModDate>
        
        <creator>Bhawana Pillai</creator>
        
        <creator>Uday Pratap Singh</creator>
        
        <subject>Anomaly detection; Intrusion Detection; Expectation Maximization; MATLAB; UNSOUND authentication; UNFEIGNED; reduce false.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Most anomaly-based NIDS employ supervised algorithms, whose performance depends highly on attack-free training data. Moreover, with a changing network environment or services, the patterns of normal traffic will change. In this paper, we develop an intrusion detection system to analyze the authentication records and separate UNFEIGNED (genuine) and fraudulent authentication attempts for each user account in the system. Intrusions are detected by determining outliers relative to the built patterns, and we present a modification of the outlier detection algorithm. Increasing detection rates and reducing false positive rates are important problems in Intrusion Detection Systems. Although preventative techniques such as access control and authentication attempt to stop intruders, these can fail, and as a second line of defense intrusion detection has been introduced. Rare events are events that occur very infrequently, and the detection of rare events is a common problem in many domains. Support Vector Machines (SVM), as a classical pattern recognition tool, have been widely used for intrusion detection. However, conventional SVM methods do not consider the different characteristics of features in building an intrusion detection system. We also evaluate the performance of the K-Means algorithm by the detection rate and the false positive rate. All results are evaluated with the new model of the KDD dataset; results are generated as ROC curves, and the results of K-Means and SVM are compared in MATLAB.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 9-NIDS For Unsupervised Authentication Records of KDD Dataset in MATLAB.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Big Brother: A Road Map for Building Ubiquitous Surveillance System in Nigeria</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010208</link>
        <id>10.14569/SpecialIssue.2011.010208</id>
        <doi>10.14569/SpecialIssue.2011.010208</doi>
        <lastModDate>2012-07-21T20:27:38.8030000+00:00</lastModDate>
        
        <creator>Simon Enoch Yusuf</creator>
        
        <creator>Oluwakayode Osagbemi</creator>
        
        <subject>Ubiquitous; Surveillance; RFID; Security; Computing.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>In this paper, we propose a method to address the security challenges in Nigeria by embedding literally hundreds of invisible computers into the environment, with each computer performing its tasks without requiring human awareness or significant human intervention, in order to monitor human behaviour, detect natural disasters and search for stolen or lost items. Ubiquitous dynamic surveillance cameras embedded with Radio Frequency Identification (RFID) are proposed for this security system.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 8-Big Brother A Road Map for Building Ubiquitous Surveillance System in Nigeria.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Traducer Tracing System Using Traffic Volume Information</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010207</link>
        <id>10.14569/SpecialIssue.2011.010207</id>
        <doi>10.14569/SpecialIssue.2011.010207</doi>
        <lastModDate>2012-07-21T20:27:38.7270000+00:00</lastModDate>
        
        <creator>K V Ramana</creator>
        
        <creator>Raghu.B.Korrapati</creator>
        
        <creator>N. Praveen Kumar</creator>
        
        <creator>D. Prakash</creator>
        
        <subject>Content Streaming; Traffic Contours; Traducer Tracing; Digital Rights Management.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Many leading broadband access technologies and their accessing abilities have the capability to meet the future requirements of the broadband consumer. With the enormous growth in broadband technologies, there is a need to apply streaming technology in many applications such as video conference systems and content transmission systems. Streaming content enables users to access files quickly rather than having to wait until a file has finished downloading. Security remains one of the main challenges for content streaming. A Digital Rights Management (DRM) system must be implemented to avoid content spreading and unintentional content usage. Watermarking technology can also be used to implement a DRM system, but it has its own limitations and attacks. A control method for streaming content delivery is required to prevent abuse of the content. For this reason, the authors have proposed a contended methodology that uses traffic volume information obtained from routers. Traducer tracing is one of the essential technologies in designing DRM systems, and it empowers content distributors to notice and control content acquisition. This technology utilizes the concept of traffic contours, which helps to determine who is watching the streaming content and whether or not a secondary content delivery exists, i.e., it is mainly used to determine whether or not the network is being traced by an intruder.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 7-Efficient Traducer Tracing System Using Traffic Volume Information.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Controlling Home Appliances Remotely Through Voice Command</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010206</link>
        <id>10.14569/SpecialIssue.2011.010206</id>
        <doi>10.14569/SpecialIssue.2011.010206</doi>
        <lastModDate>2012-07-21T20:27:38.6470000+00:00</lastModDate>
        
        <creator>Marriam Butt</creator>
        
        <creator>Mamoona Khanam</creator>
        
        <creator>Aihab Khan</creator>
        
        <creator>Malik Sikandar Hayat Khiyal</creator>
        
        <subject>Voice GSM; voice message; radio frequency (RF); AT commands.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>The main concern in systems development is the integration of technologies to increase customer satisfaction. The research presented in this paper focuses mainly on three things: first, understanding the speech or voice of the user; second, controlling home appliances through a voice call; and third, detecting intrusion in the house. The user can make a voice call in order to perform certain actions such as switching lights on/off, getting the status of any appliance, etc. When the system detects an intrusion, it sends an alert voice message to a preconfigured cell phone while the user is away. The proposed system is implemented using voice over the Global System for Mobile Communications (GSM) and wireless technology based on the .NET framework and Attention (AT) commands. The Microsoft speech recognition engine, Speech SDK 5.1, is used to understand the user's voice commands. As the system is wireless, it is more cost-effective and easier to use, and the GSM technology provides access to the system from anywhere for security. Experimental results show that the system is more secure and cost-effective compared to existing systems. We conclude that this system solves problems faced by home owners in daily life and makes their lives easier and more comfortable through a cost-effective and reliable solution.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 6-Controlling Home Appliances Remotely Through Voice Command.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fidelity Based On Demand Secure(FBOD) Routing in Mobile Adhoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010205</link>
        <id>10.14569/SpecialIssue.2011.010205</id>
        <doi>10.14569/SpecialIssue.2011.010205</doi>
        <lastModDate>2012-07-21T20:27:38.5370000+00:00</lastModDate>
        
        <creator>Himadri Nath Saha</creator>
        
        <creator>Dr. Debika Bhattacharyya</creator>
        
        <creator>Dr. P. K.Banerjee</creator>
        
        <subject>fidelity; sequence number; hop destination; flooding attack; black hole attack; cooperative black hole attack; routing.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>In a mobile ad-hoc network (MANET), secure routing is a challenging issue due to its open nature, lack of infrastructure and mobility of nodes. Many mobile ad-hoc network routing schemes have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in mobile ad-hoc networks, an approach significantly different from the existing ones, in which data packets are routed based on a specific criterion of the nodes called “fidelity”. The approach reduces the computational overhead to a large extent. Our simulation results show how we have reduced the amount of network activity each node requires to route a data packet, and how this scheme prevents various attacks which may jeopardize any MANET.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 5-Fidelity Based On Demand Secure(FBOD) Routing in Mobile Adhoc Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of different hybrid amplifiers for different numbers of channels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010204</link>
        <id>10.14569/SpecialIssue.2011.010204</id>
        <doi>10.14569/SpecialIssue.2011.010204</doi>
        <lastModDate>2012-07-21T20:27:38.4130000+00:00</lastModDate>
        
        <creator>Sameksha Bhaskar</creator>
        
        <creator>M.L.Sharma</creator>
        
        <creator>Ramandeep Kaur</creator>
        
        <subject>RAMAN-EDFA; RAMAN-SOA; SOA-EDFA; EDFA-RAMAN-EDFA; quality factor; BER; eye opening; jitter.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>We have investigated the performance of different hybrid optical amplifiers (RAMAN-EDFA, RAMAN-SOA, SOA-EDFA and EDFA-RAMAN-EDFA). The proposed configuration consists of 16, 32 and 64 channels at a speed of 10 Gbps. We have realized the different hybrid amplifiers and evaluated parameters such as quality factor, BER, eye opening and jitter at different numbers of channels. The different combinations can provide better results and better feasibility for long-distance transmission. It is observed that SOA-EDFA shows good performance, as it can reach maximum distances of 220, 240 and 260 km at 16, 32 and 64 channels respectively. RAMAN-EDFA also shows good performance, with a high quality factor (24.27) and BER (1 × 10^-40) at 16 channels.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 4-Performance Comparison of different hybrid amplifiers for different numbers of channels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scalable TCP: Better Throughput in TCP Congestion Control Algorithms on MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010203</link>
        <id>10.14569/SpecialIssue.2011.010203</id>
        <doi>10.14569/SpecialIssue.2011.010203</doi>
        <lastModDate>2012-07-21T20:27:38.2730000+00:00</lastModDate>
        
        <creator>M Jehan</creator>
        
        <creator>Dr. G.Radhamani</creator>
        
        <subject>TCP Congestion Control Algorithms; MANET; BIC; Vegas; Scalable TCP.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>In the modern mobile communication world, the role of congestion control algorithms is vital to data transmission between mobile devices. They provide better and more reliable communication capabilities in all kinds of networking environments. Wireless networking technology and the new kinds of requirements in communication systems call for some extensions to the original design of TCP for upcoming technology development. This work aims to analyze some TCP congestion control algorithms and their performance on Mobile Ad-hoc Networks (MANET). More specifically, we describe the performance behavior of the BIC, Vegas and Scalable TCP congestion control algorithms. The evaluation is simulated through Network Simulator (NS2), and the performance of these algorithms is analyzed in terms of efficient data transmission in wireless and mobile environments.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 3-Scalable TCP Better Throughput in TCP Congestion Control Algorithms on MANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Transmission over Cognitive Radio TDMA Networks under Collision Errors</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010202</link>
        <id>10.14569/SpecialIssue.2011.010202</id>
        <doi>10.14569/SpecialIssue.2011.010202</doi>
        <lastModDate>2012-07-21T20:27:38.1800000+00:00</lastModDate>
        
        <creator>Abdelaali CHAOUB</creator>
        
        <creator>Elhassane IBN ELHAJ</creator>
        
        <creator>Jamal EL ABBADI</creator>
        
        <subject>Cognitive Radio network; video transmission; TDMA; progressive compression source coding; LT codes; collision; goodput.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>Cognitive Radio (CR) networks are emerging as a new paradigm of communication and channel sharing in multimedia and wireless networks. In this paper, we address the problem of video transmission over shared CR networks using progressive compression source coding associated with fountain codes. We consider a TDMA-based transmission where many subscribers share the same infrastructure. Each Secondary User (SU) is assigned one time slot in which it transmits with a certain probability. The given model allows each SU to transmit opportunistically in the remaining slots. Therefore, packets are corrupted not only by primary traffic interruptions; we also consider losses caused by collisions between several SUs due to Opportunistic Spectrum Sharing. We use a redundancy-based model for link maintenance to compensate for the loss of spectrum resources caused by primary traffic reclaims. Moreover, we set up many Secondary User Links to mitigate the collision effects. Numerical simulations are performed to evaluate the proposed approaches in terms of average goodput. We conduct a stability and performance analysis of the system and highlight the gains achieved when using our transmission model.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 2 - Video transmission over Cognitive Radio TDMA networks under collision errors.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation and Evaluation of a Simple Adaptive Antenna Array for a WCDMA Mobile Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010201</link>
        <id>10.14569/SpecialIssue.2011.010201</id>
        <doi>10.14569/SpecialIssue.2011.010201</doi>
        <lastModDate>2012-07-21T20:27:38.0700000+00:00</lastModDate>
        
        <creator>Idigo V E</creator>
        
        <creator>Ifeagwu E.N</creator>
        
        <creator>Azubogu A.C.O</creator>
        
        <creator>Akpado K.A</creator>
        
        <creator>Oguejiofor O.S</creator>
        
        <subject>Smart antenna; bandwidth; SINR; adaptive beamforming.</subject>
        <description>Special Issue(SpecialIssue), 1(2), 2011</description>
        <description>This paper presents a uniform linear array model of a simple adaptive antenna array based on signal-to-interference-and-noise ratio (SINR) maximization. The SINR using the adaptive antenna array was investigated for a conventional narrowband beamformer by varying the number of antenna array elements and the number of interfering signals or users. The results obtained were compared with those of an omni-directional antenna. The graphs obtained from the results showed significant improvement in SINR as the number of antenna elements increases in the presence of many interferers for odd-numbered arrays.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo2/Paper 1-Simulation and Evaluation of a Simple Adaptive Antenna Array for a WCDMA Mobile Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison of SVM and K-NN for Oriya Character Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010116</link>
        <id>10.14569/SpecialIssue.2011.010116</id>
        <doi>10.14569/SpecialIssue.2011.010116</doi>
        <lastModDate>2012-07-21T20:27:37.9130000+00:00</lastModDate>
        
        <creator>Sanghamitra Mohanty</creator>
        
        <creator>Himadri Nandini Das Bebartta</creator>
        
        <subject>Recognition; Features; Nearest Neighbors; Support Vectors.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Image classification is one of the most important branches of artificial intelligence; its application seems to be a promising direction in the development of character recognition in Optical Character Recognition (OCR). Character recognition (CR) has been extensively studied in the last half century and has progressed to a level sufficient to produce technology-driven applications. Now the rapidly growing computational power enables the implementation of the present CR methodologies and also creates an increasing demand from many emerging application domains, which require more advanced methodologies. Research on the recognition of Indic languages and scripts is comparatively limited compared with other languages. There are many different machine learning algorithms used for image classification nowadays. In this paper, we discuss the characteristics of classification methods such as Support Vector Machines (SVM) and K-Nearest Neighborhood (K-NN) that have been applied to Oriya characters. We discuss the performance of each algorithm for character classification by drawing their learning curves, selecting parameters and comparing their correct rates on different categories of Oriya characters. It has been observed that Support Vector Machines outperform K-NN among the two classifiers.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_16-Performance Comparison of SVM and K-NN for Oriya character recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic License Plate Localization Using Intrinsic Rules Saliency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010115</link>
        <id>10.14569/SpecialIssue.2011.010115</id>
        <doi>10.14569/SpecialIssue.2011.010115</doi>
        <lastModDate>2012-07-21T20:27:37.7270000+00:00</lastModDate>
        
        <creator>Chirag N Paunwala</creator>
        
        <creator>Dr. Suprava Patnaik</creator>
        
        <subject>License plate localization; Salient rules; Connected Region Analysis; statistical inconsistency; skew correction.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>This paper addresses an intrinsic rule-based license plate localization (LPL) algorithm. It first selects candidate regions, and then filters negative regions with statistical constraints. The key contribution is assigning image-inferred weights to the rules, leading to adaptability in selecting the saliency feature, which then overrules other features, and the collective measure decides the estimation. The saliency of rules is inherent to the frame under consideration, hence all inevitable negative effects present in the frame are nullified, incorporating a great deal of flexibility and more generalization. The situations considered in simulation, to claim that the algorithm generalizes well, are variations in illumination, skewness, aspect ratio (and hence the LP font size), vehicle size, pose, partial occlusion of vehicles and the presence of multiple plates. The proposed method allows parallel computation of rules, and is hence suitable for real-time application. The mixed data set has 697 images of almost all varieties. We achieve a Miss Rate (MR) of 4% and a False Detection Rate (FDR) of 5.95% on average. We have also implemented skew correction of the detected LPs, which is necessary for efficient character detection.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_15-Automatic License Plate Localization Using Intrinsic Rules Saliency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SUCCESeR: Simple and Useful Multi Color Concepts for Effective Search and Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010114</link>
        <id>10.14569/SpecialIssue.2011.010114</id>
        <doi>10.14569/SpecialIssue.2011.010114</doi>
        <lastModDate>2012-07-21T20:27:37.5700000+00:00</lastModDate>
        
        <creator>Satishkumar L Varma</creator>
        
        <creator>Sanjay N. Talbar</creator>
        
        <subject>color model; discrete cosine transform; image indexing; image retrieval.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Image quality depends on the levels of intensity used in an image. An image consists of various types of objects, which are distinguishable because of the various intensity levels used. The concentration of intensity levels, the so-called energy, can be extracted from an image using the discrete cosine transform (DCT). In this paper we apply DCT 8x8 block coefficients separately on the three color planes of three different color models, namely RGB, HSV and YCbCr. The elements of ten DCT coefficient matrices are used to form feature vectors, and these feature vectors are used to index all images in the database. The system was tested with the Corel image database containing 1000 natural images in 10 different classes. Image retrieval using these indices gives comparatively better results.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_14-SUCCESeR Simple and Useful Multi Color Concepts for Effective Search and Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Ear Recognition using Dual Tree Complex Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010113</link>
        <id>10.14569/SpecialIssue.2011.010113</id>
        <doi>10.14569/SpecialIssue.2011.010113</doi>
        <lastModDate>2012-07-21T20:27:37.4300000+00:00</lastModDate>
        
        <creator>Rajesh M Bodade</creator>
        
        <creator>Sanjay N Talbar</creator>
        
        <subject>Ear recognition; ear detection; ear biometrics; DT-CWT; complex wavelet transform; Biometrics; Pattern Recognition; Security; Image Processing; Bioinformatics; Computer vision.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Over the last 10 years, various methods have been used for ear recognition. This paper describes the automatic localization of an ear and its segmentation from side poses of face images. The authors propose a novel approach to feature extraction from ear images using the 2D Dual Tree Complex Wavelet Transform (2D-DT-CWT), which provides six sub-bands in six different orientations, as against three orientations in the DWT. The DT-CWT, being complex, exhibits the property of shift invariance. Ear feature vectors are obtained by computing the mean, standard deviation, energy and entropy of the six sub-bands of the DT-CWT and the three sub-bands of the DWT. Canberra distance and Euclidean distance are used for matching. The method is implemented and tested on two image databases: the UND database of 219 subjects from the University of Notre Dame, and a database of 40 subjects created at MCTE, which is also used for online ear testing of the system for access control at MCTE. False Acceptance Rate (FAR), False Rejection Rate (FRR), Equal Error Rate (EER) and Receiver Operating Characteristic (ROC) curves are compiled at various thresholds. The recognition accuracy achieved is above 97%.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_13-Ear Recognition using Dual Tree Complex Wavelet Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Human Face Detection under Complex Lighting Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010112</link>
        <id>10.14569/SpecialIssue.2011.010112</id>
        <doi>10.14569/SpecialIssue.2011.010112</doi>
        <lastModDate>2012-07-21T20:27:37.3070000+00:00</lastModDate>
        
        <creator>Golam Moazzam</creator>
        
        <creator>Ms. Rubayat Parveen</creator>
        
        <creator>Md. Al-Amin Bhuiyan</creator>
        
        <subject>Face Detection; Genetic Searching; Fitness Function; Cross-over; Mutation; Roulette Wheel Selection.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>This paper presents a novel method for detecting human faces in an image with complex backgrounds. The approach is based on visual information of the face from the template image and is commenced with the estimation of the face area in the given image. As the genetic algorithm is a computationally expensive process, the searching space for possible face regions is limited to possible facial features such as eyes, nose, mouth, and eyebrows so that the required timing is greatly reduced. In addition, the lighting effects and orientation of the faces are considered and solved in this method. Experimental results demonstrate that this face detector provides promising results for the images of individuals which contain quite a high degree of variability in expression, pose, and facial details.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_12-Human Face Detection under Complex Lighting Conditions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Image Registration Using Mexican Hat Wavelet, Invariant Moment, and Radon Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010111</link>
        <id>10.14569/SpecialIssue.2011.010111</id>
        <doi>10.14569/SpecialIssue.2011.010111</doi>
        <lastModDate>2012-07-21T20:27:37.2130000+00:00</lastModDate>
        
        <creator>Jignesh N Sarvaiya</creator>
        
        <creator>Dr. Suprava Patnaik</creator>
        
        <subject>Image Registration; Mexican-hat wavelet; Invariant Moments; Radon Transform.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Image registration is an important and fundamental task in image processing used to match two different images. Given two or more images to be registered, image registration estimates the parameters of the geometrical transformation model that maps the sensed images back to the reference image. A feature-based approach to automated image-to-image registration is presented. The characteristic of this approach is that it combines the Mexican-hat wavelet, invariant moments and the Radon transform. Feature points are extracted from both images using the Mexican-hat wavelet, and control-point correspondence is achieved with invariant moments. After detecting corresponding control points in the reference and sensed images, a line and a triangle are formed in both images respectively, and the Radon transform is applied to recover scaling and rotation and register the images.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_11-Automatic Image Registration Using Mexican Hat Wavelet, Invariant Moment, and Radon Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fine Facet Digital Watermark (FFDW) Mining From The Color Image Using Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010110</link>
        <id>10.14569/SpecialIssue.2011.010110</id>
        <doi>10.14569/SpecialIssue.2011.010110</doi>
        <lastModDate>2012-07-21T20:27:37.1030000+00:00</lastModDate>
        
        <creator>N. Chenthalir Indra</creator>
        
        <creator>Dr. E. Ramaraj</creator>
        
        <subject>Similarity based Superior SOM; Discrete Wavelet Transform; Digital watermark; embedding; PSNR.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Existing watermark methods employ selected neural network techniques for efficient watermark embedding. Similarity Based Superior Self Organizing Maps (SBS_SOM), a neural network algorithm, is used for watermark generation: the host image is learned by the SBS_SOM neurons, and the very fine RGB feature values are mined as the digital watermark. The Discrete Wavelet Transform (DWT) is used for watermark embedding. Similarity Ratio and PSNR values demonstrate the quality of the Fine Facet Digital Watermark (FFDW). The proposed system affords a complete digital watermarking system.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_10-Fine Facet Digital Watermark (FFDW) Mining From The Color Image Using Neural Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Component Localization in Face Alignment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010109</link>
        <id>10.14569/SpecialIssue.2011.010109</id>
        <doi>10.14569/SpecialIssue.2011.010109</doi>
        <lastModDate>2012-07-21T20:27:36.9470000+00:00</lastModDate>
        
        <creator>Yanyun Qu</creator>
        
        <creator>Tianzhu Fang</creator>
        
        <creator>Yanyun Cheng</creator>
        
        <creator>Han Liu</creator>
        
        <subject>face alignment; ASM; component localization; LBP; SURF</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Face alignment is a significant problem in face image processing, and the Active Shape Model (ASM) is a popular technique for it. However, the initialization of the alignment strongly affects the performance of ASM: if the initialization is poor, the ASM optimization becomes stuck in a local minimum and the alignment fails. In this paper, we propose a novel approach to improve ASM by building classifiers for the face components. We design SVM classifiers for the eyes, mouth and nose, and we use Speeded Up Robust Features (SURF) and the Local Binary Pattern (LBP) feature to describe the components, which are more discriminative for the components than Haar-like features. The face components are first located by the classifiers, and they determine the initialization of the alignment. Our approach makes the iterations of the ASM optimization converge faster and with fewer errors. We evaluate our approach on the frontal views of upright faces in the IMM dataset. The experimental results show that our approach outperforms the original ASM in terms of efficiency and accuracy.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_9-Component Localization in Face Alignment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ID Numbers Recognition by Local Similarity Voting</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010108</link>
        <id>10.14569/SpecialIssue.2011.010108</id>
        <doi>10.14569/SpecialIssue.2011.010108</doi>
        <lastModDate>2012-07-21T20:27:36.8370000+00:00</lastModDate>
        
        <creator>Shen Lu</creator>
        
        <creator>Yanyun Qu</creator>
        
        <creator>Yanyun Cheng</creator>
        
        <creator>Yi Xie</creator>
        
        <subject>template matching algorithm; ID number recognition; OCR</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>This paper aims to recognize ID numbers from three types of valid identification documents in China: the first-generation ID card, the second-generation ID card and the motor vehicle driver license. We propose an approach using local similarity voting to automatically recognize ID numbers. First, we extract the candidate region which contains the ID numbers and then locate the numbers and characters. Second, we recognize the numbers by an improved template matching method based on local similarity voting. Finally, we verify the ID numbers and characters. We have applied the proposed approach to a set of about 100 images shot by conventional digital cameras. The experimental results demonstrate that this approach is efficient and robust to changes in illumination and rotation. The recognition accuracy is up to 98%.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_8- ID Numbers Recognition by Local Similarity Voting.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Recombinant Skeleton Using Junction Points in Skeleton Based Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010107</link>
        <id>10.14569/SpecialIssue.2011.010107</id>
        <doi>10.14569/SpecialIssue.2011.010107</doi>
        <lastModDate>2012-07-21T20:27:36.7300000+00:00</lastModDate>
        
        <creator>Komala Lakshmi</creator>
        
        <creator>Dr. M. Punithavalli</creator>
        
        <subject>Recombinant skeleton; bamboo skeleton; valance skeleton point (VSP); core skeleton point(CSP); junction skeleton points (JSP).</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>We address the task of combining two skeleton images to produce a recombinant skeleton, and we propose the recombinant skeleton algorithm for this purpose. The existing skeleton representation is taken, and the merge vertex detection algorithm is applied before the recombinant skeleton algorithm. The recombinant skeleton can be applied in motion detection, image matching, tracking, panorama stitching, 3D modeling and object recognition, and a true real object can be generated or manufactured from the recombinant skeleton produced. The proposed method utilizes a local search algorithm for junction validation. Our framework suggests the range of possibilities in obtaining the recombinant skeleton. Since the boundary is essential for any transformation, the bamboo skeleton algorithm is deployed to compute the boundary and to store the skeleton together with the boundary; thus our representation is a skeleton with a border, or outline. From this new skeleton representation the proposed recombinant skeleton is achieved.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_7-Recombinant Skeleton Using Junction Points in Skeleton Based Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Skew correction for Chinese character using Hough transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010106</link>
        <id>10.14569/SpecialIssue.2011.010106</id>
        <doi>10.14569/SpecialIssue.2011.010106</doi>
        <lastModDate>2012-07-21T20:27:36.6030000+00:00</lastModDate>
        
        <creator>Tian Jipeng</creator>
        
        <creator>G.Hemantha Kumar</creator>
        
        <creator>H.K. Chethan</creator>
        
        <subject>Handwritten Chinese character; Computer Vision; Pattern Recognition; skew detection; Hough transforms.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Handwritten Chinese character recognition is an emerging field in Computer Vision and Pattern Recognition. Documents acquired through scanner, mobile or camera devices are often prone to skew, and skew correction for such documents is a major task and an important factor in optical character recognition. The goal of this work is to correct the skew of such documents. In this paper we propose a novel method for skew correction using the Hough transform. The proposed approach can detect skew over a large angle range (-90 to +90 degrees) with high precision, and the experimental results reveal that the proposed method is efficient compared to well-known existing methods.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_6-Skew correction for Chinese character using Hough transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering and Bayesian network for image of faces classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010105</link>
        <id>10.14569/SpecialIssue.2011.010105</id>
        <doi>10.14569/SpecialIssue.2011.010105</doi>
        <lastModDate>2012-07-21T20:27:36.5100000+00:00</lastModDate>
        
        <creator>Khlifia Jayech</creator>
        
        <creator>Mohamed Ali Mahjoub</creator>
        
        <subject>face recognition; clustering; Bayesian network; Na&#239;ve Bayes; TAN; FAN.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>In a content-based image classification system, target images are sorted by feature similarity with respect to the query (CBIR). In this paper, we propose a new approach combining tangent distance, the k-means algorithm and Bayesian networks for image classification. First, we use the tangent distance technique to calculate several tangent spaces representing the same image; the objective is to reduce the error in the classification phase. Second, we cut the image into a set of blocks and compute a vector of descriptors for each block. Then, we use k-means to cluster the low-level features, including color and texture information, to build a vector of labels for each image. Finally, we apply five variants of Bayesian network classifiers (Na&#239;ve Bayes (NB), Global Tree Augmented Na&#239;ve Bayes (GTAN), Global Forest Augmented Na&#239;ve Bayes (GFAN), Tree Augmented Na&#239;ve Bayes for each class (TAN), and Forest Augmented Na&#239;ve Bayes for each class (FAN)) to classify the face images using the vector of labels. To validate feasibility and effectiveness, we compare the results of GFAN to FAN and to the other classifiers (NB, GTAN, TAN). The results demonstrate that FAN outperforms GFAN, NB, GTAN and TAN in overall classification accuracy.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_5- Clustering and Bayesian network for image of faces classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of neural image compression using GA and BP: a comparative approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010104</link>
        <id>10.14569/SpecialIssue.2011.010104</id>
        <doi>10.14569/SpecialIssue.2011.010104</doi>
        <lastModDate>2012-07-21T20:27:36.3700000+00:00</lastModDate>
        
        <creator>G G Rajput</creator>
        
        <creator>Vrinda Shivashetty</creator>
        
        <creator>Manoj Kumar singh</creator>
        
        <subject>Image compression; genetic algorithm; neural network; back propagation.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>It is well known that classic image compression techniques such as JPEG and MPEG have serious limitations at high compression rates: the decompressed image becomes fuzzy or indistinguishable. To overcome the problems associated with conventional methods, artificial neural network (ANN) based methods can be used. The genetic algorithm (GA) is a very powerful method for solving real-life problems, as proven by its application to a number of different problems, and there is much interest in combining GA with ANN for various reasons at various levels. Trapping in local minima is one of the well-known problems of gradient-descent-based learning in ANN, and this problem can be addressed using GA. But no work has been done to evaluate the performance of both learning methods from the image compression point of view. In this paper, we investigate the performance of ANN with GA in the application of image compression for obtaining an optimal set of weights. A direct method of compression is applied with the neural network to gain the additional advantage of security for the compressed data. The experiments reveal that standard BP with proper parameters provides good generalization capability for compression and is much faster compared to earlier work in the literature based on the cumulative distribution function. Further, the results obtained show that the general belief about GA, namely that it performs better than gradient-descent-based learning, does not hold for image compression.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_4-Modeling of neural image compression using GA and BP a comparative approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image segmentation by adaptive distance based on EM algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010103</link>
        <id>10.14569/SpecialIssue.2011.010103</id>
        <doi>10.14569/SpecialIssue.2011.010103</doi>
        <lastModDate>2012-07-21T20:27:36.1970000+00:00</lastModDate>
        
        <creator>Mohamed Ali Mahjoub</creator>
        
        <creator>Karim Kalti</creator>
        
        <subject>EM algorithm; image segmentation; adaptive distance.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>This paper introduces a Bayesian image segmentation algorithm based on finite mixtures. An EM algorithm is developed to estimate the parameters of the Gaussian mixtures. The finite mixture is a flexible and powerful probabilistic modeling tool that can be used to provide model-based clustering in the field of pattern recognition. However, the application of finite mixtures to image segmentation presents some difficulties; in particular, it is sensitive to noise. In this paper we propose a variant of this method which aims to resolve this problem. Our approach proceeds by characterizing each pixel with two features: the first describes the intrinsic properties of the pixel, and the second characterizes its neighborhood. The classification is then made on the basis of an adaptive distance which privileges one or the other feature according to the spatial position of the pixel in the image. The obtained results show a significant improvement of our approach compared to the standard version of the EM algorithm.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_3-Image segmentation by adaptive distance based on EM algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new Optimization-Based Image Segmentation method By Particle Swarm Optimization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010102</link>
        <id>10.14569/SpecialIssue.2011.010102</id>
        <doi>10.14569/SpecialIssue.2011.010102</doi>
        <lastModDate>2012-07-21T20:27:36.0730000+00:00</lastModDate>
        
        <creator>Fahd M.A Mohsen</creator>
        
        <creator>Mohiy M. Hadhoud</creator>
        
        <creator>Khalid Amin</creator>
        
        <subject>Thresholding-based segmentation; Particle swarm optimization; Quantitative image segmentation evaluation.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>This paper proposes a new multilevel thresholding method for segmenting images based on particle swarm optimization (PSO). In the proposed method, the thresholding problem is treated as an optimization problem and solved using the principle of PSO. The PSO algorithm is used to find the threshold values that yield an appropriate partition of a target image according to a fitness function. In this paper, a new quantitative evaluation function based on information theory is proposed and used as the objective function for the PSO algorithm. Because quantitative evaluation functions deal with segmented images as a set of regions, the target image is divided into a set of regions, not a set of classes, during the different stages of our method (where a region is a group of connected pixels having the same range of gray levels). The proposed method has been tested on different images, and the experimental results demonstrate its effectiveness.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_2-A new Optimization-Based Image Segmentation method By Particle Swarm Optimization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sketch Recognition using Domain Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/SpecialIssue.2011.010101</link>
        <id>10.14569/SpecialIssue.2011.010101</id>
        <doi>10.14569/SpecialIssue.2011.010101</doi>
        <lastModDate>2012-07-21T20:27:35.8700000+00:00</lastModDate>
        
        <creator>Vasudha Vashisht</creator>
        
        <creator>Tanupriya Choudhury</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Sketch recognition; segmentation; domain classification.</subject>
        <description>Special Issue(SpecialIssue), 1(1), 2011</description>
        <description>Abstracting away the sketch processing details in a user interface will enable general users and domain experts to create more complex sketches. There are many domains for which sketch recognition systems are being developed, but such systems require image-processing skill if they are to handle the details of each domain, and they take a long time to build. The implemented system’s goal is to enable user interface designers and domain experts who may not have proficiency in sketch recognition to construct these sketch systems. This sketch recognition system takes as input rough sketches drawn by the user with a mouse. It then recognizes the sketch using segmentation and domain classification: the properties of the user-drawn sketch and its segments are searched heuristically across the domains and the figures of each domain, and finally the system reports the domain, the figure name and its properties. It also redraws the sketch smoothly. The work is the result of extensive research and study of many existing image processing and pattern matching algorithms.</description>
        <description>http://thesai.org/Downloads/SpecialIssueNo1/Paper_1-Sketch Recognition using Domain Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The result oriented process for students based on distributed datamining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010504</link>
        <id>10.14569/IJACSA.2010.010504</id>
        <doi>10.14569/IJACSA.2010.010504</doi>
        <lastModDate>2012-07-12T07:56:17.2900000+00:00</lastModDate>
        
        <creator>Pallamreddy Venkatasubbareddy</creator>
        
        <creator>Vuda Sreenivasarao</creator>
        
        <subject>Learning result evaluation system; distributed data mining; decision tree.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The student result oriented learning process evaluation system is an essential tool and approach for monitoring and controlling the quality of the learning process. From the perspective of data analysis, this paper conducts research on a student result oriented learning process evaluation system based on distributed data mining and a decision tree algorithm. Data mining technology has emerged as a means for identifying patterns and trends from large quantities of data. The work aims at putting forward a rule-discovery approach suitable for student learning result evaluation and applying it in practice, so as to improve the learning evaluation of communication skills and ultimately better serve learning practice.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%204-THE%20RESULT%20ORIENTED%20PROCESS%20FOR%20STUDENTS%20BASED%20ON%20DISTRIBUTED%20DATAMINING.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Agent based Bandwidth Reservation Routing Technique in Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021220</link>
        <id>10.14569/IJACSA.2011.021220</id>
        <doi>10.14569/IJACSA.2011.021220</doi>
        <lastModDate>2012-07-12T07:56:13.1070000+00:00</lastModDate>
        
        <creator>Vishnu Kumar Sharma</creator>
        
        <creator>Dr. Sarita Singh Bhadauria</creator>
        
        <subject>Mobile Ad hoc Networks (MANETs), Mobile Agents (MA), Total Congestion Metric (TCM), Enhanced Distributed Channel Access (EDCA), Transmission opportunity limit (TXOP). </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>In mobile ad hoc networks (MANETs), inefficient resource allocation causes heavy losses to service providers and results in inadequate user proficiency. Efficient resource allocation techniques are required to improve and automate the quality of service of MANETs. In this paper, we propose an agent based bandwidth reservation technique for MANETs. The mobile agent from the source starts forwarding the data packets through the path which has minimum cost, congestion and bandwidth. The status of every node, including the bottleneck bandwidth field, is collected, and each intermediate node computes the available bandwidth on the link. At the destination, after the new bottleneck bandwidth field is updated, the data packet is fed back to the source. In the resource reservation technique, if the available bandwidth is greater than the bottleneck bandwidth, then bandwidth is reserved for the flow. Using rate monitoring and adjustment methodologies, rate control is performed for the congested flows. Simulation results show that the resource allocation technique reduces losses and improves network performance.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2020-Agent%20based%20Bandwidth%20Reservation%20Routing%20Technique%20in%20Mobile%20Ad%20Hoc%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Information System for reserving rooms in Hotels</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021008</link>
        <id>10.14569/IJACSA.2011.021008</id>
        <doi>10.14569/IJACSA.2011.021008</doi>
        <lastModDate>2012-07-12T07:56:06.5670000+00:00</lastModDate>
        
        <creator>Safarini Osama</creator>
        
        <subject>IS - Information System-, MIS -Management Information System, DFD - Data Flow Diagram, ER- Entity Relationship, and DBMS - Database Management system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>It is very important to build new, modern, flexible, dynamic, effective and reusable information systems, including databases, to help handle different processes and to interoperate with the many parts around them. One such system manages room reservations for groups and individuals, addressing essential needs in hotels, and is integrated with the accounting system. This system provides many tools that can be used in decision making.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%208%20-%20Integrated%20Information%20System%20for%20reserving%20rooms%20in%20Hotels.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Corporate Feed Rectangular Patch Element and Circular Patch Element 4x2 Microstrip Array Antennas</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020711</link>
        <id>10.14569/IJACSA.2011.020711</id>
        <doi>10.14569/IJACSA.2011.020711</doi>
        <lastModDate>2012-07-12T07:56:03.1870000+00:00</lastModDate>
        
        <creator>Md. Tanvir Ishtaique-ul Huque</creator>
        
        <creator>Md. Al-Amin Chowdhury</creator>
        
        <creator>Md. Kamal Hosain</creator>
        
        <creator>Md. Shah Alam</creator>
        
        <subject> microstrip array antenna; rectangular patch; return loss; X band; circular patch.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>This paper presents simple, slim, low-cost and high-gain circular patch and rectangular patch microstrip array antennas, with the detailed steps of the design process, operating in the X-band (8 GHz to 12 GHz), and it provides a means to choose the more effective one based on a performance analysis of both array antennas. The method of analysis, design and development of these array antennas is explained completely here, and the analyses are carried out for 4x2 arrays. The simulation has been performed using a commercially available antenna simulator, SONNET version V12.56, to compute the current distribution, return loss response and radiation pattern. The proposed antennas are designed using a Taconic TLY-5 dielectric substrate with permittivity er = 2.2 and height h = 1.588 mm. In all cases we obtain return losses in the range -4.96 dB to -25.21 dB at frequencies around 10 GHz. The simulated gain of these antennas is above 6 dB, and the side lobe level is maintained lower than the main lobe. The operating frequency of these antennas is 10 GHz, so they are suitable for X-band applications.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2011-Performance%20Analysis%20of%20Corporate%20Feed%20Rectangular%20Patch%20Element%20and%20Circular%20Patch%20Element%204x2%20Microstrip%20Array%20Antennas.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New design of Robotics Remote lab</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030214</link>
        <id>10.14569/IJACSA.2012.030214</id>
        <doi>10.14569/IJACSA.2012.030214</doi>
        <lastModDate>2012-07-12T07:55:58.4870000+00:00</lastModDate>
        
        <creator>Mohammad R Kadhum</creator>
        
        <creator>Seifedine Kadry</creator>
        
        <subject>Robotic Remote Laboratory, Robot experiment, Light obstacle, Control algorithm, NXT 2.0Robot, Robot lab, Path planning algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>The Robotic Remote Laboratory (RRL) controls Robot labs via the Internet and runs Robot experiments in an easy and advanced way. To enhance the RRL system, we must study the requirements of the Robot experiment deeply. One of the key requirements is the Control algorithm, which includes all the important activities that affect the Robot, one of which relates to the path and obstacles. Our goal is to produce a new design of the RRL with a new treatment of the Control algorithm: the activity of the Control algorithm that relates to paths is isolated into a separate algorithm, i.e., the Path planning algorithm is designed independently of the original Control algorithm. This is achieved by using light to produce a Light obstacle. Applying the Light obstacle requires hardware (a Light control server and Light arms) and software (the Path planning algorithm). The NXT 2.0 Robot senses the Light obstacle using its Light sensor. The new design has two servers: one for the path (the Light control server) and another for the other activities of the Control algorithm (the Robot control server). The website of the new design includes three main parts (Lab Reservation, Open Lab, Download Simulation). We propose a set of scenarios for organizing the reservation of the Remote Lab. Additionally, we developed appropriate software to simulate the Robot and to practice with it before using the Remote lab.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2014%20-%20New%20design%20of%20Robotics%20Remote%20lab.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Self-regulating Message Throughput in Enterprise Messaging Servers – A Feedback Control Solution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030123</link>
        <id>10.14569/IJACSA.2012.030123</id>
        <doi>10.14569/IJACSA.2012.030123</doi>
        <lastModDate>2012-07-12T07:55:55.1170000+00:00</lastModDate>
        
        <creator>Ravi Kumar G</creator>
        
        <creator>C.Muthusamy</creator>
        
        <creator>A.Vinaya Babu</creator>
        
        <subject> Feedback control, Message Oriented Middleware, Enterprise Messaging, Java Messaging Service, JMS Providers, Adaptive Control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Enterprise Messaging is a very popular message exchange concept in asynchronous distributed computing environments. Enterprise Messaging Servers are heavily used in building business-critical enterprise applications such as Internet-based order processing systems, B2B pricing distribution, and geographically dispersed enterprise applications. It is always desirable that Messaging Servers exhibit high performance to meet Service Level Agreements (SLAs). There have been investigations into managing the performance of distributed computing systems in different ways, such as IT administrators configuring and tuning Messaging Server parameters or implementing complex conditional programming to handle workload dynamics. But in practice it is extremely difficult to handle such dynamics of changing workloads in order to meet the performance requirements. Additionally, it is challenging to cater to future resource requirements based on future workloads. Though there have been attempts to self-regulate the performance of Enterprise Messaging Servers, limited investigation has been done into exploring feedback control systems theory for managing Messaging Server performance. We propose an adaptive control based solution not only to manage the performance of the servers to meet SLAs but also to pro-actively self-regulate their performance such that the Messaging Servers are capable of meeting current and future workloads. We implemented and evaluated our solution and observed that the control theory based solution improves the performance of Enterprise Messaging Servers significantly.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2023-Self-regulating%20Message%20Throughput%20in%20Enterprise%20Messaging%20Servers%20%E2%80%93%20A%20Feedback%20Control%20Solution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Development Process of the Semantic Web and Web Ontology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020718</link>
        <id>10.14569/IJACSA.2011.020718</id>
        <doi>10.14569/IJACSA.2011.020718</doi>
        <lastModDate>2012-07-12T07:55:51.7800000+00:00</lastModDate>
        
        <creator>K Vanitha</creator>
        
        <creator>M.Sri Venkatesh</creator>
        
        <creator>K.Ravindra</creator>
        
        <creator>S.Venkata Lakshmi</creator>
        
        <subject>Toolkits; ontology; semantic web; language-based; web ontology.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>This paper deals with the semantic web and web ontology. Existing ontology development processes are not catered towards casual web ontology development, a notion analogous to standard web page development. Ontologies have become common on the World-Wide Web[2]. Key features of this process include easy and rapid creation of ontological skeletons, searching and linking to existing ontologies, and a natural language-based technique to improve the presentation of ontologies[6]. Ontologies, however, vary greatly in size, scope and semantics. They can range from generic upper-level ontologies to domain-specific schemas. The success of the Semantic Web depends on the existence of numerous distributed ontologies, using which users can annotate their data, thereby enabling shared machine-readable content. This paper elaborates the stages in a casual ontology development process.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2018-The%20Development%20Process%20of%20the%20Semantic%20Web%20and%20Web%20Ontology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> The threshold EM algorithm for parameter learning in bayesian network with incomplete data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020713</link>
        <id>10.14569/IJACSA.2011.020713</id>
        <doi>10.14569/IJACSA.2011.020713</doi>
        <lastModDate>2012-07-12T07:55:48.3270000+00:00</lastModDate>
        
        <creator>Fradj Ben Lamine</creator>
        
        <creator>Karim Kalti</creator>
        
        <creator>Mohamed Ali Mahjoub</creator>
        
        <subject>bayesian network; parameter learning; missing data; EM algorithm; Gibbs sampling; RBE algorithm; brain tumor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Bayesian networks (BN) are used in a wide range of applications, but they have one issue concerning parameter learning. In real applications, training data are always incomplete or some nodes are hidden. To deal with this problem, several parameter learning algorithms have been suggested, foremost the EM, Gibbs sampling and RBE algorithms. In order to limit the search space and escape from the local maxima produced by executing the EM algorithm, this paper presents a parameter learning algorithm that is a fusion of the EM and RBE algorithms. This algorithm incorporates the range of a parameter into the EM algorithm. This range is calculated by the first step of the RBE algorithm, allowing a regularization of each parameter in the Bayesian network after the maximization step of the EM algorithm. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2013-The%20threshold%20EM%20algorithm%20for%20parameter%20learning%20in%20bayesian%20network%20with%20incomplete%20data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithm design for a supply chain equilibrium management model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030510</link>
        <id>10.14569/IJACSA.2012.030510</id>
        <doi>10.14569/IJACSA.2012.030510</doi>
        <lastModDate>2012-07-12T07:55:43.7570000+00:00</lastModDate>
        
        <creator>Peimin Zhao</creator>
        
        <subject>Supply chain equilibrium management; complementary model; algorithm; quadratic convergence.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>In this paper, we consider a complementary model for the equilibrium management of supply chains. In order to give an optimal decision for the equilibrium management, we propose a new algorithm based on an estimate of the error bound. This algorithm requires neither the existence of a non-degenerate solution nor the non-singularity of the Jacobian matrix at the solution. We also prove the quadratic convergence of the given algorithm. The results can be viewed as extensions of previously known results.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_10-Algorithm_design_for_a_supply_chain_equilibrium_management_model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> A Globally Convergent Algorithm of Variational Inequality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030413</link>
        <id>10.14569/IJACSA.2012.030413</id>
        <doi>10.14569/IJACSA.2012.030413</doi>
        <lastModDate>2012-07-12T07:55:40.4130000+00:00</lastModDate>
        
        <creator>Chengjiang Yin</creator>
        
        <subject>Variational inequality; Uniformly positive bound from below; Global convergence.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>Algorithms for variational inequalities are an important and valuable question in many real-life settings. In this paper, a globally convergent algorithm for variational inequalities is proposed. The method ensures that the corrector step sizes have a uniformly positive bound from below. In order to prove the convergence of the algorithm, we first establish some definitions, properties and theorems, and then we prove its global convergence under appropriate conditions.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_13-A_Globally_Convergent_Algorithm_of_Variational_Inequality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Image Compression of MRI Image using Planar Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020705</link>
        <id>10.14569/IJACSA.2011.020705</id>
        <doi>10.14569/IJACSA.2011.020705</doi>
        <lastModDate>2012-07-12T07:55:37.0430000+00:00</lastModDate>
        
        <creator>Lalitha Y S</creator>
        
        <creator>Mrityunjaya V. Latte</creator>
        
        <subject> image coding; embedded block coding; context modeling; multi rate services.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>In this paper a hierarchical coding technique for variable bit rate services is developed using an embedded zero-block coding approach. The suggested approach enhances variable-rate coding with a zero-tree-based block-coding architecture with context modeling for low complexity and high performance. The proposed algorithm utilizes the significance state table that forms the context model to control the coding passes, with low memory requirements and low implementation complexity at nearly the same performance as existing coding techniques.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%205-Image%20Compression%20of%20MRI%20Image%20using%20Planar%20Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Empirical Study of the Applications of Web Mining Techniques in Health Care</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021015</link>
        <id>10.14569/IJACSA.2011.021015</id>
        <doi>10.14569/IJACSA.2011.021015</doi>
        <lastModDate>2012-07-12T07:55:33.6870000+00:00</lastModDate>
        
        <creator>Varun Kumar</creator>
        
        <creator>MD. Ezaz Ahmed</creator>
        
        <subject>Web mining; Health care management system; Data mining; Knowledge discovery; Classification; Association rules; Prediction; Outlier analysis, IDSS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>A few years ago, the information flow in the health care field was relatively simple and the application of technology was limited. However, as we progress into a more integrated world where technology has become an integral part of business processes, the transfer of information has become more complicated. There is already a long-standing tradition of computer-based decision support dealing with complex problems in medicine, such as diagnosing disease, making managerial decisions, and assisting in the prescription of appropriate treatment. Today, one of the biggest challenges that health care systems face is the explosive growth of data and how to use this data to improve the quality of managerial decisions. Web mining and data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper addresses the applications of web mining and data mining in health care management systems to extract useful information from huge data sets, providing an analytical tool to view and use this information in decision making processes, illustrated with real-life examples. Further, we propose an IDSS model for health care so that exact and accurate decisions can be made for the treatment of a particular disease.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2015-An%20Empirical%20Study%20of%20the%20Applications%20of%20Web%20Mining%20Techniques%20in%20Health%20Care.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> An Empirical Study of the Applications of Data Mining Techniques in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020314</link>
        <id>10.14569/IJACSA.2011.020314</id>
        <doi>10.14569/IJACSA.2011.020314</doi>
        <lastModDate>2012-07-12T07:55:30.3470000+00:00</lastModDate>
        
        <creator>Dr. Varun Kumar</creator>
        
        <creator>Anupama Chadha</creator>
        
        <subject>Higher education, Data mining, Knowledge discovery, Classification, Association rules, Prediction, Outlier analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>A few years ago, the information flow in the education field was relatively simple and the application of technology was limited. However, as we progress into a more integrated world where technology has become an integral part of business processes, the transfer of information has become more complicated. Today, one of the biggest challenges that educational institutions face is the explosive growth of educational data and how to use this data to improve the quality of managerial decisions. Data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper addresses the applications of data mining in educational institutions to extract useful information from huge data sets, providing an analytical tool to view and use this information in decision making processes, illustrated with real-life examples.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2014-%20An%20Empirical%20Study%20of%20the%20Applications%20of%20Data%20Mining%20Techniques%20in%20Higher%20Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Semantic Web to support Advanced Web-Based Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021218</link>
        <id>10.14569/IJACSA.2011.021218</id>
        <doi>10.14569/IJACSA.2011.021218</doi>
        <lastModDate>2012-07-12T07:55:26.3700000+00:00</lastModDate>
        
        <creator>Khaled M Fouad</creator>
        
        <creator>Mostafa A. Nofal</creator>
        
        <creator>Hany M. Harb</creator>
        
        <creator>Nagdy M. Nagdy</creator>
        
        <subject>Semantic Web; Domain Ontology; Learner Profile; Adaptive Learning; Semantic Search; Recommendation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>In learning environments, users would be helpless without the assistance of powerful searching and browsing tools to find their way. Web-based e-learning systems are normally used by a wide variety of learners with different skills, backgrounds, preferences, and learning styles. In this paper, we perform personalized semantic search and recommendation of learning contents in Web-based learning environments to enhance the learning experience. Semantic and personalized search of learning content is based on a comparison of the learner profile, which is based on learning style, with the learning objects' metadata. This approach needs to represent both the learner profile and the learning object description as certain data structures. Personalized recommendation of learning objects uses an approach that determines a more suitable relationship between learning objects and learner profiles; thus, it may advise a learner of the most suitable learning objects. Semantic search of learning objects is based on expansion of the user query and uses semantic similarity to retrieve semantically matched learning objects.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2018-Using%20Semantic%20Web%20to%20support%20Advanced%20Web-Based%20Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Resource Constrained Project Scheduling Problem to Minimize the Financial Failure Risk</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010108</link>
        <id>10.14569/IJARAI.2012.010108</id>
        <doi>10.14569/IJARAI.2012.010108</doi>
        <lastModDate>2012-07-12T07:55:21.6970000+00:00</lastModDate>
        
        <creator>Zhi Jie Chen</creator>
        
        <subject> RCPSP; cash availability; memetic algorithms; variable neighborhood search</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>In practice, a project usually involves cash in- and out-flows associated with each activity. This paper aims to minimize the payment failure risk during project execution for the resource-constrained project scheduling problem (RCPSP). In such models, the money-time value, which is the product of the net cash in-flow and the time length from the completion time of each activity to the project deadline, provides a financial evaluation of project cash availability. The cash availability of a project schedule is defined as the sum of these money-time values over all activities, which is mathematically equivalent to the minimization objective of total weighted completion time. This paper presents four memetic algorithms (MAs), which differ in the construction of the initial population and the restart strategy, and a double variable neighborhood search algorithm for solving the RCPSP. An experiment is conducted to evaluate the performance of these algorithms based on the same number of solutions calculated using ProGen-generated benchmark instances. The results indicate that the MAs with a regret-biased sampling rule for generating initial and restart populations outperform the other algorithms in terms of solution quality.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper8-Solving_the_Resource_Constrained_Project_Scheduling_Problem_to_Minimize_the_Financial_Failure_Risk.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis Method of Traffic Congestion Degree Based on Spatio-Temporal Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030403</link>
        <id>10.14569/IJACSA.2012.030403</id>
        <doi>10.14569/IJACSA.2012.030403</doi>
        <lastModDate>2012-07-12T07:55:18.3170000+00:00</lastModDate>
        
        <creator>Shulin He</creator>
        
        <subject> road traffic simulation; extension information model; degree of traffic congestion; traffic congestion entropy; traffic congestion control system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The purpose of this research is to design and implement a road traffic congestion and traffic patterns simulation (TPS) model and integrate it with an extension information model (EIM). The problems of road traffic simulation and control are studied according to the extension information model method and from a spatio-temporal analysis point of view. The rules of the traffic simulation, from existence to evolution, are analyzed theoretically. Based on this study, the concept of traffic system entropy is introduced, resulting in the establishment of a fundamental framework for a road traffic simulation system based on an extension spatio-temporal information system. Moreover, a practicable methodology is presented.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_3-Analysis_Method_of_Traffic_Congestion_degree_based_on_Spatio-temporal_Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Speaker Identification using Frequency Dsitribution in the Transform Domain</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030213</link>
        <id>10.14569/IJACSA.2012.030213</id>
        <doi>10.14569/IJACSA.2012.030213</doi>
        <lastModDate>2012-07-12T07:55:14.9030000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Vaishali Kulkarni</creator>
        
        <subject>Speaker Identification; DFT; DCT; DST; Hartley; Haar; Walsh; Kekre’s Transform. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>In this paper, we propose Speaker Identification using the frequency distribution of various transforms: DFT (Discrete Fourier Transform), DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), Hartley, Walsh, Haar and Kekre transforms. The speech signal spoken by a particular speaker is converted into the frequency domain by applying the different transform techniques. The distribution in the transform domain is utilized to extract the feature vectors in the training and matching phases. The results obtained using all seven transform techniques have been analyzed and compared. It can be seen that the DFT, DCT, DST and Hartley transforms give comparatively similar results (above 96%). The results obtained using the Haar and Kekre transforms are very poor. The best results are obtained using the DFT (97.19% for a feature vector of size 40).</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2013%20-%20Speaker%20Identification%20using%20Frequency%20Dsitribution%20in%20the%20Transform%20Domain.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Retrieval using DST and DST Wavelet Sectorization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020613</link>
        <id>10.14569/IJACSA.2011.020613</id>
        <doi>10.14569/IJACSA.2011.020613</doi>
        <lastModDate>2012-07-12T07:55:11.5400000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Dhirendra Mishra</creator>
        
        <subject>CBIR, Feature extraction; Precision; Recall; LIRS; LSRR; DST; DST Wavelet.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The concept of sectorization of transformed images for CBIR is an innovative idea. This paper introduces the concept of wavelet generation for the Discrete Sine Transform (DST). The sectorization of the DST-transformed images and the DST wavelet transforms has been done into various sector sizes, i.e., 4, 8, 12, and 16. The transformation of the images is tried and tested threefold, i.e., row-wise transformation, column-wise transformation and full transformation. We have formed two planes, i.e., plane 1 and plane 2, for sectorization in full transformation. The performance of all the approaches has been tested by means of three plots, namely the average precision-recall crossover point plot, the LIRS (Length of Initial Relevant String of images) plot, and the LSRR (Length of String to Recover all Relevant images in the database) plot. The algorithms are analyzed to check the effect of three parameters on retrieval: first, the way of transformation (row, column, full); second, the size of the sector generated; and third, the type of similarity measure used. With all these considered, an overall comparison has been performed.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2013-Image%20Retrieval%20using%20DST%20and%20DST%20Wavelet%20Sectorization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modeling of Traffic Accident Reporting System through UML Using GIS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030606</link>
        <id>10.14569/IJACSA.2012.030606</id>
        <doi>10.14569/IJACSA.2012.030606</doi>
        <lastModDate>2012-07-12T07:55:08.1270000+00:00</lastModDate>
        
        <creator>Dr. Gufran Ahmad Ansari</creator>
        
        <creator>Dr. M. Al-shabi</creator>
        
        <subject>UML, GIS, TARS, Sequence diagram, Activity Diagram.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Nowadays, vehicles on town and city roads are increasing day by day. Managing traffic on the roads of towns and cities is a well-known problem. Many accidents occur on the road due to careless driving and technical faults in vehicles. The main problem of traffic authorities is to manage the traffic on the road for the smooth movement of vehicles, which can reduce accidents and violations on the road. There is a tremendous demand from traffic authorities for a system that helps to avoid accidents and to record and maintain accident report data. The main objective of this paper is to model a Traffic Accident Reporting System (TARS) through UML using GIS to solve the above problem. The authors also propose the sequence and activity diagrams for the proposed model.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper%206-Modeling%20of%20Traffic%20Accident%20Reporting%20System%20through%20UML%20Using%20GIS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method of Genetic Algorithm (GA) for FIR Filter Construction: Design and Development with Newer Approaches in Neural Network Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010614</link>
        <id>10.14569/IJACSA.2010.010614</id>
        <doi>10.14569/IJACSA.2010.010614</doi>
        <lastModDate>2012-07-12T07:55:04.7400000+00:00</lastModDate>
        
        <creator>Ajoy Kumar Dey</creator>
        
        <creator>Avijit Saha</creator>
        
        <creator>Shibani Ghosh</creator>
        
        <subject>Genetic Algorithm; FIR filter design; optimization; neural network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The main focus of this paper is to describe a dynamic method of designing finite impulse response filters automatically, rapidly and with less computational complexity through an efficient genetic approach. To obtain such efficiency, a specific filter coefficient coding scheme has been studied and implemented. The algorithm generates a population of genomes that represent the filter coefficients, where new genomes are generated by crossover and mutation operations. Our proposed genetic technique is able to give better results compared to other methods.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_14%20A%20Method%20of%20Genetic%20Algorithm%20(GA)%20for%20FIR%20Filter%20Construction%20Design%20and%20Development%20with%20Newer%20Approaches%20in%20Neural%20Network%20Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algorithms for Content Distribution in Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020623</link>
        <id>10.14569/IJACSA.2011.020623</id>
        <doi>10.14569/IJACSA.2011.020623</doi>
        <lastModDate>2012-07-12T07:55:00.2470000+00:00</lastModDate>
        
        <creator>Deepak Garg</creator>
        
        <subject> Content Distribution; Efficient Algorithms; Capacitated</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>In this paper an algorithm is presented which helps to optimize the performance of content distribution servers in a network. If the network follows the pay-as-you-use model, then this algorithm results in significant cost reduction. The demand for different kinds of content varies over time, and based on that, the number of servers serving that demand will vary.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2023-Algorithms%20for%20Content%20Distribution%20in%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Crytosystem for Computer security using Iris patterns and Hetro correlators</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021101</link>
        <id>10.14569/IJACSA.2011.021101</id>
        <doi>10.14569/IJACSA.2011.021101</doi>
        <lastModDate>2012-07-12T07:54:54.9630000+00:00</lastModDate>
        
        <creator>R Bremananth</creator>
        
        <creator>Ahmad Sharieh</creator>
        
        <subject>Auto-correlators; Biometric; crytosystem; Hetro-correlators.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Biometric-based cryptography systems provide more efficient and secure data transmission compared to traditional encryption systems. However, it is a computationally challenging task to resolve the issues involved in incorporating biometrics into cryptography. In connection with our previous works, this paper reveals a robust cryptosystem using the iris biometric pattern as a crypto-key to resolve the issues in encryption. An error correction engine based on hetro-correlators has been used to recover the partially tarnished data produced by the decryption process. This process resolves the non-repudiation and key management problems. The experimental results show that the suggested algorithm can be implemented in real-life cryptosystems.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%201-%20Crytosystem%20for%20Computer%20security%20using%20Iris%20patterns%20and%20Hetro%20correlators.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect of Error Packetization on the Quality of Streaming Video in Wireless Broadband Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030303</link>
        <id>10.14569/IJACSA.2012.030303</id>
        <doi>10.14569/IJACSA.2012.030303</doi>
        <lastModDate>2012-07-12T07:54:50.0230000+00:00</lastModDate>
        
        <creator>Aderemi A Atayero</creator>
        
        <creator>Oleg I. Sheluhin</creator>
        
        <creator>Yury A. Ivanov</creator>
        
        <subject>Video streaming; Markov model; IEEE 802.16; Bit Error Rate; Burst Error Length; Packet Error Rate; Codec.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>A Markov model describing the duration of error intervals and error-free reception for streaming video transmission was developed, based on experimental data obtained by streaming video from a mobile source on an IEEE 802.16 standard network. The analysis of the experimental results shows that the average quality of video sequences when simulating the Markov model of error packetization is similar to that obtained when simulating single packet errors with a PER index in the range of 3&#215;10^(-3) to 1&#215;10^(-2). An algorithm for creating software to simulate error packetization was developed. In this paper we describe the algorithm, the software developed based on it, as well as the Markov model created for the modeling.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper3-Effect_Of_Error_Packetization_On_The_Quality_Of_Streaming_Video_In_Wireless_Broadband_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wideband Wireless Access Systems Interference Robustness: Its Effect on Quality of Video Streaming</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030120</link>
        <id>10.14569/IJACSA.2012.030120</id>
        <doi>10.14569/IJACSA.2012.030120</doi>
        <lastModDate>2012-07-12T07:54:46.2330000+00:00</lastModDate>
        
        <creator>Aderemi A Atayero</creator>
        
        <creator>Oleg I. Sheluhin</creator>
        
        <creator>Yuri A. Ivanov</creator>
        
        <creator>Julet O. Iruemi</creator>
        
        <subject>codec, H.264/AVC, polynomial approximation coding, signal-to-noise ratio, video streaming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>A necessary requirement incumbent on any information communication system and/or network is the capacity to transmit information with a predefined degree of accuracy in the presence of inevitable interference. The transmission of audio and video streaming services over different conduits (wireless access systems, Internet, etc.) is becoming ever more popular. As should be expected, this widespread increase is accompanied by the attendant new and difficult task of maintaining the quality of service of streaming video. The use of very accurate coding techniques for transmissions over wireless networks alone cannot guarantee a complete eradication of distortions characteristic of the video signal. A software-hardware composite system has been developed for investigating the effect of single bit error and bit packet errors in wideband wireless access systems on the quality of H.264/AVC standard bursty video streams. Numerical results of the modeling and analysis of the effect of interference robustness on quality of video streaming are presented and discussed.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2020-Wideband%20Wireless%20Access%20Systems%20Interference%20Robustness%20%20Its%20Effect%20on%20the%20Quality%20of%20Video%20Streaming.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Imputation And Classification Of Missing Data Using Least Square Support Vector Machines – A New Approach In Dementia Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010404</link>
        <id>10.14569/IJARAI.2012.010404</id>
        <doi>10.14569/IJARAI.2012.010404</doi>
        <lastModDate>2012-07-10T16:24:32.8500000+00:00</lastModDate>
        
        <creator>T R Sivapriya</creator>
        
        <creator>A.R.Nadira Banu Kamal</creator>
        
        <creator>V.Thavavel</creator>
        
        <subject>Least Square Support Vector Machine; z-score; Classification; KNN; Support Vector Machine.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>This paper presents a comparison of different data imputation approaches used in filling missing data and proposes a combined approach to accurately estimate missing attribute values in a patient database. The present study suggests a more robust technique that is likely to supply a value closer to the missing one, for effective classification and diagnosis. Initially, the data is clustered and the z-score method is used to select possible values of an instance with missing attribute values. Then a multiple imputation method using LSSVM (Least Squares Support Vector Machine) is applied to select the most appropriate values for the missing attributes. Five imputed datasets have been used to demonstrate the performance of the proposed method. Experimental results show that our method outperforms conventional methods of multiple imputation and mean substitution. Moreover, the proposed method, CZLSSVM (Clustered Z-score Least Square Support Vector Machine), has been evaluated in two classification problems for incomplete data. The efficacy of the imputation methods has been evaluated using the LSSVM classifier. Experimental results indicate that the accuracy of classification increases with CZLSSVM in the case of missing attribute value estimation. It is found that CZLSSVM outperforms other data imputation approaches such as decision trees, rough sets, artificial neural networks, K-NN (K-Nearest Neighbour) and SVM. Further, it is observed that CZLSSVM yields 95 per cent accuracy and higher prediction capability than the other methods tested in the study.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_4-Imputation_And_Classification_Of_Missing_Data_Using_Least_Square_Support_Vector_Machines_–_A_New_Approach_In_Dementia_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Melakarta Raaga Identification System: Carnatic Music</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010406</link>
        <id>10.14569/IJARAI.2012.010406</id>
        <doi>10.14569/IJARAI.2012.010406</doi>
        <lastModDate>2012-07-10T16:12:31.4870000+00:00</lastModDate>
        
        <creator>B Tarakeswara Rao</creator>
        
        <creator>Sivakoteswararao Chinnam</creator>
        
        <creator>P Lakshmi Kanth</creator>
        
        <creator>M.Gargi</creator>
        
        <subject>Music; Melakarta Raaga; Cosine Distance; Earth Movers Distance; K-NN;</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>It is through experience that one can ascertain that a key classifier in the arsenal of machine learning techniques is the Nearest Neighbour classifier. Automatic Melakarta raaga identification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance today because issues of poor run-time performance are less of a problem with the computational power now available. This paper presents an overview of techniques for Nearest Neighbour classification, focusing on: mechanisms for finding the distance between neighbours using Cosine Distance and Earth Movers Distance, the formulas used to identify nearest neighbours, and the classification algorithm used in training and testing for identifying Melakarta raagas in Carnatic music. From the derived results it is concluded that Earth Movers Distance produces better results than the Cosine Distance measure.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_6-Automatic_Melakarta_Raaga_Identification_Syste_Carnatic_Music.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Proposed Hybrid Technique for Recognizing Arabic Characters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010405</link>
        <id>10.14569/IJARAI.2012.010405</id>
        <doi>10.14569/IJARAI.2012.010405</doi>
        <lastModDate>2012-07-10T16:12:25.6730000+00:00</lastModDate>
        
        <creator>S F Bahgat</creator>
        
        <creator>S.Ghomiemy</creator>
        
        <creator>S. Aljahdali</creator>
        
        <creator>M. Alotaibi</creator>
        
        <subject>Optical Character Recognition; Feature Extraction; Dimensionality Reduction; Principal Component Analysis; Feature Fusion.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>Optical character recognition systems improve human-machine interaction and are urgently required for many governmental and commercial departments. Considerable progress has been achieved in the recognition techniques for Latin and Chinese characters. By contrast, Arabic Optical Character Recognition (AOCR) is still lagging, although interest and research in this area are becoming more intensive than before. This is because Arabic is a cursive language, written from right to left; each character has two to four different forms according to its position in the word, and most characters are associated with complementary parts above, below, or inside the character. The process of Arabic character recognition passes through several stages, the most serious and error-prone of which are segmentation, and feature extraction &amp; classification. This research focuses on the feature extraction and classification stage, which is as important as the segmentation stage. Features can be classified into two categories: local features, which are usually geometric, and global features, which are either topological or statistical. Four approaches in the statistical category are investigated, namely: Moment Invariants, Gray Level Co-occurrence Matrix, Run Length Matrix, and Statistical Properties of the Intensity Histogram. The paper aims at fusing the features of these methods to get the most representative feature vector that maximizes the recognition rate.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_5-A_Proposed_Hybrid_Technique_for_Recognizing_Arabic_Characters.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Agent based Flight Search and Booking System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010403</link>
        <id>10.14569/IJARAI.2012.010403</id>
        <doi>10.14569/IJARAI.2012.010403</doi>
        <lastModDate>2012-07-10T16:12:19.0030000+00:00</lastModDate>
        
        <creator>Floyd Garvey</creator>
        
        <creator>Suresh Sankaranarayanan</creator>
        
        <subject>Agents; Biometric; JADE-LEAP; Android.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>The word globalization is widely used, and there are several definitions that may fit it. The reality, however, remains that globalization has impacted and is impacting each individual on this planet. It is defined as the greater movement of people, goods, capital and ideas due to increased economic integration, which in turn is propelled by increased trade and investment; it is like moving towards living in a borderless world. With the reality of globalization, the travel industry has benefited significantly, and it could equally be said that globalization is benefiting from the flight industry. Regardless of how one looks at it, more people are traveling each day and exploring places that were once distant points on a map. Equally, technology has been growing at an increasingly rapid pace and is being utilized by people all over the world. With the combination of globalization, the growth of technology and the frequency of travel, there is a need for an intelligent application capable of meeting the needs of travelers who use mobile phones: a solution that fits a user’s busy lifestyle, offers ease of use, and provides enough intelligence to make the user’s experience worthwhile. Having recognized this need, the Agent based Mobile Airline Search and Booking System has been developed, built to work on Android to perform airline search and booking using biometrics. The system also possesses agent learning capability, performing airline searches based on previous search patterns. The development has been carried out using the JADE-LEAP agent development kit on Android.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_3-Intelligent_Agent_based_Flight_Search_and_Booking_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Need for a New Data Processing Interface for Digital Forensic Examination</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010402</link>
        <id>10.14569/IJARAI.2012.010402</id>
        <doi>10.14569/IJARAI.2012.010402</doi>
        <lastModDate>2012-07-10T16:12:12.8070000+00:00</lastModDate>
        
        <creator>Inikpi O ADEMU</creator>
        
        <creator>Dr Chris O. IMAFIDON</creator>
        
        <subject>autonomous coding; intellisense; visual studio; integrated development environment; relational reconstruction; data processing.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>Digital forensic science provides tools, techniques and scientifically proven methods that can be used to acquire and analyze digital evidence. There is a need for law enforcement agencies, government and private organisations to invest in the advancement and development of digital forensic technologies. Such an investment could potentially allow new forensic techniques to be developed more frequently. This research identifies techniques that can facilitate the process of digital forensic investigation, allowing digital investigators to use less time and fewer resources. In this paper, we identify the Visual Basic Integrated Development Environment as an environment that provides the set of rich features likely to be required for developing tools that can assist digital investigators during digital forensic investigation. Establishing a user-friendly interface and identifying structures and consistent processes for digital forensic investigation has been a major component of this research.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_2-The_Need_for_a_New_Data_Processing_Interface_for_Digital_Forensic_Examination.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Assessment of Software Design using Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010401</link>
        <id>10.14569/IJARAI.2012.010401</id>
        <doi>10.14569/IJARAI.2012.010401</doi>
        <lastModDate>2012-07-10T16:12:09.4530000+00:00</lastModDate>
        
        <creator>A Adebiyi</creator>
        
        <creator>Johnnes Arreymbi</creator>
        
        <creator>Chris Imafidon</creator>
        
        <subject>Neural Networks; Software security; Attack Patterns.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(4), 2012</description>
        <description>Security flaws in software applications today have been attributed mostly to design flaws. With limited budget and time to release software into the market, many developers often consider security as an afterthought. Previous research shows that integrating security into software applications at a later stage of the software development lifecycle (SDLC) is more costly than integrating it during the early stages. To assist in integrating security early in the SDLC, a new approach for assessing security during the design phase using a neural network is investigated in this paper. Our findings show that by training a back-propagation neural network to identify attack patterns, possible attacks can be identified from the design scenarios presented to it. The performance results of the neural network are presented in this paper.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No4/Paper_1-Security_Assessment_of_Software_Design_using_Neural_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An SMS-SQL based On-board system to manage and query a database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030622</link>
        <id>10.14569/IJACSA.2012.030622</id>
        <doi>10.14569/IJACSA.2012.030622</doi>
        <lastModDate>2012-07-02T13:16:26.5130000+00:00</lastModDate>
        
        <creator>Ahmed Sbaa</creator>
        
        <creator>Rachid El Bejjet</creator>
        
        <creator>Hicham Medromi</creator>
        
        <subject>SMS, SQL, UNIX, LINUX, Embedded System, Perl, GSM.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Technological advances of recent years have facilitated the use of embedded systems. They are part of our everyday life: thanks to them, electronic devices are increasingly present in our lives in many forms, as mobile phones, music players and managers have become essentials of modern life. Access to information anywhere at any time is an increasing daily challenge for embedded system technology. Following an innovative idea, this paper describes an embedded system that can query any database through SMS commands, extending data consultation to early-generation mobile networks. Based on a UNIX embedded system, the result of this work can serve as a standard for consulting databases through an SQL-SMS gateway that converts an SMS command into an SQL query. This system opens databases to consultation via mobile phone without exposing them to the risks of online publication. The first part of this article discusses the state of the art of multi-agent systems and embedded systems; the second part presents the architecture of our target system; the third part describes the realized prototype in detail. The article ends with a conclusion and an outlook.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 22-An SMS-SQL based On-board system to manage and query a database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Application of Intuitionistic Fuzzy in Routing Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030621</link>
        <id>10.14569/IJACSA.2012.030621</id>
        <doi>10.14569/IJACSA.2012.030621</doi>
        <lastModDate>2012-07-02T13:16:20.2700000+00:00</lastModDate>
        
        <creator>Ashit Kumar Dutta</creator>
        
        <creator>Abdul Rahaman Wahab Sait</creator>
        
        <subject>Intuitionistic fuzzy set; fuzzy set; intuitionistic fuzzy routing; fuzzy routing; GV; IFV; MEO; IMEV; IFLED.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Routing is an important functional aspect of networks, transporting packets from source to destination. A router sets up optimized paths among the different nodes in the network. In this paper the authors propose a new type of routing algorithm in which routers exchange routing information for a small amount of time and then halt for a few hours; during these hours each router makes its own routing decisions, based on intuitionistic fuzzy logic, using the exchanged information.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 21-An Application of Intuitionistic Fuzzy in Routing Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effects of Pronunciation Practice System Based on Personalized CG Animations of Mouth Movement Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030620</link>
        <id>10.14569/IJACSA.2012.030620</id>
        <doi>10.14569/IJACSA.2012.030620</doi>
        <lastModDate>2012-07-02T13:16:14.0700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Mariko Oda</creator>
        
        <subject>Pronunciation practice; mouth movement model; CG animation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>A pronunciation practice system based on personalized Computer Graphics (CG) animations of a mouth movement model is proposed. The system enables learners to practice pronunciation by looking at personalized CG animations of the mouth movement model and allows them to compare these with their own mouth movements. In order to evaluate the effectiveness of the system, Japanese vowel and consonant sounds were read by 8 infants before and after practicing with the proposed system, and their pronunciations were examined. A remarkable improvement in their pronunciation is confirmed through comparison with their pronunciation without the proposed system, based on a subjective identification test.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 20-Effects of Pronunciation Practice System Based on Personalized CG Animations of Mouth Movement Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Perception and Performance in ICT Related Courses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030619</link>
        <id>10.14569/IJACSA.2012.030619</id>
        <doi>10.14569/IJACSA.2012.030619</doi>
        <lastModDate>2012-07-02T13:16:10.7830000+00:00</lastModDate>
        
        <creator>Annie O EGWALI</creator>
        
        <creator>Efosa C. IGODAN</creator>
        
        <subject>Learning, ICT, Education, Perception, Performance, Examination.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Some teaching methods adopted for disseminating Information Communication Technology Related Courses (ICTRC) in institutions of learning have been observed to be inadequate in bringing about the right perception and performance in students. In order to qualitatively establish the efficiency of tutoring ICTRC, this study investigates the effect of ICT resources on students’ perception and performance in ICTRC. Two hundred and forty-eight (248) students in the Department of Computer Science/ICT from two universities, a Federal and a Private University from the South-South region of Nigeria, were used. A pre-test, post-test control group, quasi-experimental design was utilized. Findings revealed that teaching ICTRC without using ICT resources is not effective at empowering students with the right perception, and that students are not enabled to perform to the best of their ability. Some recommendations encouraging ICT-aided teaching strategies in Nigeria are proffered.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 19-Evaluation of Perception and Performance in ICT Related Courses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Semantic Searching and Ranking of Documents using Hybrid Learning System and WordNet</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030618</link>
        <id>10.14569/IJACSA.2012.030618</id>
        <doi>10.14569/IJACSA.2012.030618</doi>
        <lastModDate>2012-07-02T13:16:04.5400000+00:00</lastModDate>
        
        <creator>Pooja Arora</creator>
        
        <creator>Om Vikas</creator>
        
        <subject>Learning to Rank; English-Hindi Parallel Corpus; Hybrid Learning; Support Vector Machine (SVM); Neural Network (NN); Semantic Searching; WordNet; Search Engine.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Semantic searching seeks to improve search accuracy of the search engine by understanding searcher’s intent and the contextual meaning of the terms present in the query to retrieve more relevant results. To find out the semantic similarity between the query terms, WordNet is used as the underlying reference database. Various approaches of Learning to Rank are compared. A new hybrid learning system is introduced which combines learning using Neural Network and Support Vector Machine. As the size of the training set highly affects the performance of the Neural Network, we have used Support Vector Machine to reduce the size of the data set by extracting support vectors that are critical for the learning. The data set containing support vectors is then used for learning a ranking function using Neural Network. The proposed system is compared with RankNet. The experimental results demonstrated very promising performance improvements. For experiments, we have used English-Hindi parallel corpus, Gyannidhi from CDAC. F-measure and Average Interpolated Precision are used for evaluation.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 18-Semantic Searching and Ranking of Documents using Hybrid Learning System and WordNet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cluster-based Communication Protocol for Load-Balancing in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030617</link>
        <id>10.14569/IJACSA.2012.030617</id>
        <doi>10.14569/IJACSA.2012.030617</doi>
        <lastModDate>2012-07-02T13:16:01.2570000+00:00</lastModDate>
        
        <creator>Mohammed A Merzoug</creator>
        
        <creator>Abdallah Boukerram</creator>
        
        <subject>wireless sensor networks; clustering; load-balancing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>One of the main problems in wireless sensor networks is information collection: how can the sensor nodes efficiently send the sensed information to the sink when their number is very large and their resources are limited? Clustering provides a logical view, much more effective than the physical view, which extends the lifetime of the network and achieves the scalability objective. However, the assumption that all sensor nodes can communicate directly with each other or with the sink is not always valid, because of the limited transmission range of the sensor nodes and the remoteness of the sink. In this paper, we introduce a cluster-based communication protocol that uses a multi-hop communication mode between the cluster-heads. The protocol aims to reduce and evenly distribute the energy consumption among the sensor nodes, and to cover a large area of interest. The simulations show that the proposed protocol is efficient in terms of energy consumption, maximization of the network lifetime, data delivery to the sink, and scalability.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 17-Cluster-based Communication Protocol for Load-Balancing in Wireless Sensor Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Resource Allocation Strategies in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030616</link>
        <id>10.14569/IJACSA.2012.030616</id>
        <doi>10.14569/IJACSA.2012.030616</doi>
        <lastModDate>2012-07-02T13:15:57.9800000+00:00</lastModDate>
        
        <creator>V. Vinothina</creator>
        
        <creator>Dr.R.Sridaran</creator>
        
        <creator>Dr.Padmavathi Ganapathi</creator>
        
        <subject>Cloud Computing; Cloud Services; Resource Allocation; Infrastructure.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Cloud computing has become a new-age technology with huge potential in enterprises and markets. Clouds make it possible to access applications and associated data from anywhere. Companies can rent resources from the cloud for storage and other computational purposes, so that their infrastructure cost is reduced significantly. Further, they can make use of company-wide access to applications on a pay-as-you-go model, so there is no need to obtain licenses for individual products. However, one of the major pitfalls in cloud computing is optimizing the resources being allocated. Because of the uniqueness of the model, resource allocation is performed with the objective of minimizing the associated costs. Other challenges of resource allocation are meeting customer demands and application requirements. In this paper, various resource allocation strategies and their challenges are discussed in detail. It is believed that this paper will benefit both cloud users and researchers in overcoming the challenges faced.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 16-A Survey on Resource Allocation Strategies in Cloud Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Power Analysis Attacks on ECC: A Major Security Threat</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030615</link>
        <id>10.14569/IJACSA.2012.030615</id>
        <doi>10.14569/IJACSA.2012.030615</doi>
        <lastModDate>2012-07-02T13:15:54.7030000+00:00</lastModDate>
        
        <creator>Hilal Houssain</creator>
        
        <creator>Mohamad Badra</creator>
        
        <creator>Turki F. Al-Somani</creator>
        
        <subject>Wireless Sensor Networks (WSNs); Elliptic curve cryptosystems (ECC); Side-channel attacks (SCA); Scalar multiplication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Wireless sensor networks (WSNs) are widely deployed across different sectors and applications, and Elliptic Curve Cryptography (ECC) has proven to be the most feasible PKC for WSN security. ECC is believed to provide the same level of security as RSA with a much shorter key length, and thus seems ideal for applications with small resources such as sensor networks, smartcards, RFID, etc. However, like any other cryptographic primitive, ECC implementations are vulnerable to Power Analysis Attacks (PAAs), which may reveal the secret keys by exploiting the power consumption leaked by running cryptographic devices (e.g. smart cards, mobile phones, etc.). In this paper, we present a comprehensive study of major PAAs and their countermeasures on ECC cryptosystems. In addition, this paper describes critical concerns to be considered in designing PAAs on ECC, particularly for WSNs, and illustrates the need for intensive future research on the development of these specific PAAs.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 15-Power Analysis Attacks on ECC A Major Security Threat.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Monte Carlo Ray Tracing Simulation of Polarization Characteristics of Sea Water Which Contains Spherical and Non-Spherical Particles of Suspended Solid and Phytoplankton</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030614</link>
        <id>10.14569/IJACSA.2012.030614</id>
        <doi>10.14569/IJACSA.2012.030614</doi>
        <lastModDate>2012-07-02T13:15:51.4100000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yasunori Terayama</creator>
        
        <subject>Monte Carlo Ray Tracing; Phytoplankton; Polarization characteristics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>A simulation method for sea water containing spherical and non-spherical particles of suspended solids and phytoplankton, based on Monte Carlo Ray Tracing (MCRT), is proposed for identifying non-spherical species of phytoplankton. The simulation results validate the proposed MCRT model and show some possibility of distinguishing spherical from non-spherical particles contained in sea water. Meanwhile, simulations with particles of different shapes, prolate and oblate, show that the Degree of Polarization (DP) depends on shape. Therefore, non-spherical phytoplankton can be identified through polarization characteristics measurements of the ocean.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 14-Monte Carlo Ray Tracing Simulation of Polarization Characteristics of Sea Water Which Contains Spherical and Non-Spherical Particles of Suspended Solid and Phytoplankton.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Resolution Enhancement by Incorporating Segmentation-based Optical Flow Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030613</link>
        <id>10.14569/IJACSA.2012.030613</id>
        <doi>10.14569/IJACSA.2012.030613</doi>
        <lastModDate>2012-07-02T13:15:48.1070000+00:00</lastModDate>
        
        <creator>Osama A Omer</creator>
        
        <subject>Optical flow; image segmentation; Horn-Schunck; super resolution; resolution enhancement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>In this paper, the problem of recovering a high-resolution frame from a sequence of low-resolution frames is considered. The high-resolution reconstruction process depends heavily on the image registration step. Typical resolution enhancement techniques use a global motion estimation technique. In general, however, video frames cannot be related through global motion, due to the arbitrary movement of individual pixels between frame pairs. To overcome this problem, we propose to employ a segmentation-based optical flow estimation technique for motion estimation, with a modified model for frame alignment. To do this, we incorporate segmentation into a two-stage optical flow estimation. In the first stage, a reference image is segmented into homogeneous regions. In the second stage, the optical flow is estimated for each region rather than for individual pixels or blocks. The frame alignment is then accomplished by optimizing a cost function consisting of the L1-norm of the difference between the interpolated low-resolution (LR) frames and the simulated LR frames. The experimental results demonstrate that using segmentation-based optical flow estimation in the motion estimation step, with the modified alignment model, works better than other motion models such as affine and conventional optical flow models.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 13-Resolution Enhancement by Incorporating Segmentation-based Optical Flow Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Biometric Voice-Based Access Control in Automatic Teller Machine (ATM)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030612</link>
        <id>10.14569/IJACSA.2012.030612</id>
        <doi>10.14569/IJACSA.2012.030612</doi>
        <lastModDate>2012-07-02T13:15:44.7970000+00:00</lastModDate>
        
        <creator>Yekini N A</creator>
        
        <creator>Itegboje A.O</creator>
        
        <creator>Oyeyinka I.K.</creator>
        
        <creator>Akinwole A.K.</creator>
        
        <subject>Automatic Teller Machine (ATM), Biometric, Microphone, Voiced-Based Access Control, Smartcard Access Control, Voiced-Based Verification System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>An automatic teller machine requires a user to pass an identity test before any transaction can be granted. The current method of access control in ATMs is based on the smartcard. Interviews with structured questions were conducted among ATM users, and the results showed that many problems are associated with the ATM smartcard as a means of access control. Among these problems: it is very difficult to prevent another person from obtaining and using a legitimate person's card, and a conventional smartcard can be lost, duplicated, stolen or impersonated with accuracy. To address these problems, this paper proposes a biometric voice-based access control system for the automatic teller machine. In the proposed system, access will be authorized simply by an enrolled user speaking into a microphone attached to the automatic teller machine. The implementation of the proposed system has two phases, a training phase and a testing (operational) phase, as discussed in Section 4 of this paper.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 12-Automated Biometric Voice-Based Access Control in Automatic Teller Machine (ATM).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of the Strong Factor that Influence the Improvement of the Primary Education using the Evidential Reasoning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030611</link>
        <id>10.14569/IJACSA.2012.030611</id>
        <doi>10.14569/IJACSA.2012.030611</doi>
        <lastModDate>2012-07-02T13:15:41.5030000+00:00</lastModDate>
        
        <creator>Mohammad Salah Uddin Chowdury</creator>
        
        <creator>Smita Sarker</creator>
        
        <subject>Assessment; evidential reasoning; district-wise primary education; factors; key performance indicator (KPI); multiple attribute decision analysis (MADA); uncertainty; utility interval.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Primary education is one of the important social sectors in developing countries, playing a major role in promoting their social development. More specifically, it is not education alone but quality primary education, which depends on several factors, that contributes most to this development. If quality is maintained, the problems in primary education, i.e. enrollment, completion, drop-out and so on, will gradually be solved. The improvement of primary education is influenced by multiple qualitative and quantitative factors. This paper presents an evidential reasoning (ER) approach to find the significant factors that are aggregated in assessing the performance of primary education. A case study of the Dhaka, Chittagong, Rajshahi and Khulna districts in Bangladesh is provided to illustrate the implementation of the ER approach for finding the strong factors of primary education. We first assess the performance of the four districts and then determine the weaknesses and strengths of the specific factors of each district. We also show the relation of the lowest- and best-performing districts to their specific factors.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 11-Identification of the Strong  Factor that Influence the Improvement of the Primary Education using the Evidential Reasoning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ComEx Miner: Expert Mining in Virtual Communities</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030610</link>
        <id>10.14569/IJACSA.2012.030610</id>
        <doi>10.14569/IJACSA.2012.030610</doi>
        <lastModDate>2012-07-02T13:15:38.2270000+00:00</lastModDate>
        
        <creator>Akshi Kumar</creator>
        
        <creator>Nazia Ahmad</creator>
        
        <subject>expert; web 2.0; virtual community; sentiment analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>The utilization of Web 2.0 as a platform for the arduous task of expert identification is an upcoming trend. An open problem is how to objectively assess the level of expertise in the Web 2.0 communities formed. As a solution, we propose the “ComEx Miner System”, which realizes expert mining in virtual communities by quantifying the degree of agreement between the sentiment of a blog and the comments it receives, and finally ranking the blogs with the intention of mining the expert, the one with the highest rank score. In the proposed paradigm, it is the conformity &amp; proximity of the sentimental orientation of a community member’s blog &amp; the comments received on it that are used to rank the blogs and mine the expert on the basis of the evaluated blog ranks. The effectiveness of the paradigm is demonstrated through a partial view of the phenomenon, and the initial results show that it is a promising technique.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 10-ComEx Miner Expert Mining in Virtual Communities.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segment, Track, Extract, Recognize and Convert Sign Language Videos to Voice/Text</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030608</link>
        <id>10.14569/IJACSA.2012.030608</id>
        <doi>10.14569/IJACSA.2012.030608</doi>
        <lastModDate>2012-07-02T13:15:31.6330000+00:00</lastModDate>
        
        <creator>P. V. V. Kishore</creator>
        
        <creator>P.Rajesh Kumar</creator>
        
        <subject>Sign Language; Multiple Video Object Segmentations and Trackings; Active Contours; skin colour, texture, boundary and shape; Feature Vector; Artificial Neural Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>This paper summarizes the various algorithms used to design a sign language recognition system. Sign language is the language used by deaf people to communicate among themselves and with hearing people. We designed a real-time sign language recognition system that can recognize gestures of sign language from videos under complex backgrounds. Segmenting and tracking the non-rigid hands and head of the signer in sign language videos is achieved by using active contour models. Active contour energy minimization is driven by the signer's hand and head skin colour, texture, boundary and shape information. Classification of signs is done by an artificial neural network using the error back-propagation algorithm. Each sign in the video is converted into a voice and text command. The system has been implemented successfully for 351 signs of Indian Sign Language under different possible video environments, and the recognition rates are calculated for each video environment.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 8-Segment, Track, Extract, Recognize and Convert Sign Language Videos to VoiceText.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Multi Label Text Classification through Label Propagation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030607</link>
        <id>10.14569/IJACSA.2012.030607</id>
        <doi>10.14569/IJACSA.2012.030607</doi>
        <lastModDate>2012-07-02T13:15:28.3270000+00:00</lastModDate>
        
        <creator>Shweta C Dharmadhikari</creator>
        
        <creator>Maya Ingle</creator>
        
        <creator>Parag Kulkarni</creator>
        
        <subject>Label propagation, semi-supervised learning, multi-label text classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Classifying text data has been an active area of research for a long time. A text document is a multifaceted object, often inherently ambiguous by nature, and multi-label learning deals with such ambiguous objects. The classification of such ambiguous text objects makes the classifier's task of assigning relevant classes to an input document difficult. Traditional single-label and multi-class text classification paradigms cannot efficiently classify such a multifaceted text corpus. In this paper we propose a novel label propagation approach based on semi-supervised learning for multi-label text classification. Our proposed approach models the relationships between class labels and also effectively represents input text documents. We use a semi-supervised learning technique for the effective utilization of both labeled and unlabeled data for classification. Our proposed approach promises better classification accuracy and handling of complexity, and is elaborated on the basis of standard datasets such as Enron, Slashdot and Bibtex.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 7-Towards Multi Label Text Classification through Label Propagation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EVALUATION OF SPECTRAL EFFICIENCY, SYSTEM CAPACITY AND INTERFERENCE EFFECTS ON CDMA COMMUNICATION SYSTEM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030605</link>
        <id>10.14569/IJACSA.2012.030605</id>
        <doi>10.14569/IJACSA.2012.030605</doi>
        <lastModDate>2012-07-02T13:15:24.6730000+00:00</lastModDate>
        
        <creator>Ifeagwu E N</creator>
        
        <creator>Ekeh J</creator>
        
        <creator>Ohaneme C.O</creator>
        
        <creator>Okezie C.C</creator>
        
        <subject>Bandwidth; Capacity; Interference; Pseudo-noise.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Wireless communication technology has developed through the exploration of new mobile communication frequency bands, the reasonable use of frequency resources, and the miniaturization, portability and multifunctionality of mobile stations. The technology of wireless communications with duplex transmission is one of the fastest expanding in the world today. A very effective solution for achieving network performance in terms of system capacity and spectral efficiency in the CDMA wireless network is presented in this paper. The effect of how interference grows as the number of users increases is analyzed. This paper also identifies the major problem in CDMA as multiple-access interference, which arises from the deviation of the spreading codes from orthogonality.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 5-EVALUATION OF SPECTRAL EFFICIENCY, SYSTEM CAPACITY AND INTERFERENCE EFFECTS ON CDMA COMMUNICATION SYSTEM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study On OFDM In Mobile Ad Hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030604</link>
        <id>10.14569/IJACSA.2012.030604</id>
        <doi>10.14569/IJACSA.2012.030604</doi>
        <lastModDate>2012-07-02T13:15:21.3930000+00:00</lastModDate>
        
        <creator>Malik Nasereldin Ahmed</creator>
        
        <creator>Abdul Hanan Abdullah</creator>
        
        <creator>Satria Mandala</creator>
        
        <subject>Ad hoc; OFDM; MANET.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Orthogonal Frequency Division Multiplexing (OFDM) is the physical layer in emerging wireless local area networks that are also being targeted for ad hoc networking. OFDM can also be exploited in ad hoc networks to improve the energy performance of mobile devices. It is important in wireless networks because it can be used adaptively in a dynamically changing channel. This study gives a detailed view of OFDM and how it is useful for increasing bandwidth. This paper also gives an idea of how OFDM can be a promising technology for high-capacity wireless communication.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 4-A Study On OFDM In Mobile Ad Hoc Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of White Box, Black Box and Grey Box Testing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030603</link>
        <id>10.14569/IJACSA.2012.030603</id>
        <doi>10.14569/IJACSA.2012.030603</doi>
        <lastModDate>2012-07-02T13:15:18.1130000+00:00</lastModDate>
        
        <creator>Mohd Ehmer Khan</creator>
        
        <creator>Farmeena Khan</creator>
        
        <subject>Black Box; Grey Box; White Box.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Software testing is the process of uncovering requirement, design and coding errors in a program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software; it exposes the mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of a complex product is essentially a process of investigation, not merely a matter of creating and following rote procedure. It is not possible to find all the errors in a program, and this fundamental problem in testing raises an open question as to what strategy we should adopt for testing. In our paper, we describe and compare the three most prevalent and commonly used software testing techniques for detecting errors: white box testing, black box testing and grey box testing.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 3-A Comparative Study of White Box, Black Box and Grey Box Testing Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Listings of Function Stop words for Twitter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030602</link>
        <id>10.14569/IJACSA.2012.030602</id>
        <doi>10.14569/IJACSA.2012.030602</doi>
        <lastModDate>2012-07-02T13:15:14.7970000+00:00</lastModDate>
        
        <creator>Murphy Choy</creator>
        
        <subject>Stop words; Text mining; RAKE; ELFS; Twitter.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>Many words in documents recur very frequently but are essentially meaningless, as they are used only to join words together in a sentence. It is commonly understood that stop words do not contribute to the context or content of textual documents. Due to their high frequency of occurrence, their presence in text mining presents an obstacle to understanding the content of the documents. To eliminate these bias effects, most text mining software or approaches make use of a stop words list to identify and remove those words. However, the development of such stop words lists is difficult and inconsistent between textual sources. This problem is further aggravated by sources such as Twitter, which are highly repetitive or similar in nature. In this paper, we examine the original work using term frequency, inverse document frequency and term adjacency for developing a stop words list for the Twitter data source. We propose a new technique using combinatorial values as an alternative measure to effectively list stop words.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 2-Effective Listings of Function Stop words for Twitter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study of FR Video Quality Assessment of Real Time Video Stream</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030601</link>
        <id>10.14569/IJACSA.2012.030601</id>
        <doi>10.14569/IJACSA.2012.030601</doi>
        <lastModDate>2012-07-02T13:15:11.4800000+00:00</lastModDate>
        
        <creator>Xinchen Zhang</creator>
        
        <creator>Lihau Wu</creator>
        
        <creator>Yuan Fang</creator>
        
        <creator>Hao Jiang</creator>
        
        <subject>video quality assessment (VQA); full reference; objective performance; subjective score; BP neural network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(6), 2012</description>
        <description>To assess the quality of real-time transmission video, this paper presents an approach that uses a full-reference (FR) video quality assessment (VQA) model to satisfy both objective and subjective measurement requirements. To obtain the reference video at the measuring terminal and make an assessment, two problems must be solved: how to determine the reference video frame, and how to make the objective score close to the subjective assessment. We present a novel method of computing the order number of the video frame at the test point. To establish the relationship between the objective distortion and the subjective score, we use a “best-fit” regression curve model and a BP neural network to describe the prediction formula. The main aim of this work is to obtain assessment results that agree closely with human subjective feeling, so we selected a large set of video sources for testing and training. The experimental results show that the proposed approach is suitable for assessing video quality using the FR model and that the converted subjective score is usable.</description>
        <description>http://thesai.org/Downloads/Volume3No6/Paper 1-A Study of FR Video Quality Assessment of Real Time Video Stream.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neural Network Based Hausa Language Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010207</link>
        <id>10.14569/IJARAI.2012.010207</id>
        <doi>10.14569/IJARAI.2012.010207</doi>
        <lastModDate>2012-07-01T10:43:53.0030000+00:00</lastModDate>
        
        <creator>Matthew K Luka</creator>
        
        <creator>Ibikunle A. Frank</creator>
        
        <creator>Gregory Onwodi</creator>
        
        <subject>Artificial neural network, Hausa language, pattern recognition, speech recognition, speech processing</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>Speech recognition is a key element of diverse applications in communication systems, medical transcription systems, security systems, etc. However, there has been very little research in the domain of speech processing for African languages; hence the need to extend the frontier of research in order to bring in the diverse applications based on speech recognition. Hausa is an important indigenous lingua franca in West and Central Africa, spoken as a first or second language by about fifty million people. Speech recognition of the Hausa language is presented in this paper. A pattern recognition neural network was used to develop the system.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_7-Neural_Network_Based_Hausa_Language_Speech_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role Of Technology and Innovation In The Framework Of The Information Society</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010206</link>
        <id>10.14569/IJARAI.2012.010206</id>
        <doi>10.14569/IJARAI.2012.010206</doi>
        <lastModDate>2012-07-01T10:43:46.6530000+00:00</lastModDate>
        
        <creator>Peter Sasvari</creator>
        
        <subject>Information society; Social Construction of Technology; Actor-Network-Theory.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>The literature on the information society indicates that it is a still-developing field of research. This can be explained by the lack of consensus on basic definitions and research methods. There are also different judgments on the importance and significance of the information society. Some social scientists write about a change of era, while others emphasize parallels with the past. Some authors expect that the information society will solve the problems of social inequality, poverty and unemployment, while others blame it for the widening social gap between the information haves and have-nots. Various models of the information society have been developed so far, and they differ so much from country to country that it would be rather unwise to look for a single, all-encompassing definition. In our time a number of profound socio-economic changes are underway. Almost every field of our life is affected by the different phenomena of globalization; beside the growing role of the individual, another important characteristic of this process is the development of an organizing principle based on the free creation, distribution, access and use of knowledge and information. The 1990s and the 21st century are undoubtedly characterized by the world of the information society (as a form of the post-industrial society), which represents a different quality compared to the previous ones. The application of these theories and schools to ICT is problematic in many respects. First, as we stated above, there is not a single, widely used paradigm that has synthesized the various schools and theories dealing with technology and society. Second, these fragmented approaches do not have a fully-fledged mode of application to the relationship between ICT and (information) society. Third, SCOT, ANT, and the evolutionary or systems approaches to the history of technology, when dealing with the information society, do not take into account the results of the approaches (such as information science, the information systems literature, social informatics, information management and knowledge management, and communication and media studies) studying the very essence of the information age: information, communication and knowledge. The list of unnoticed or only partially incorporated sciences, which focus on the role of ICT in human information processing and other cognitive activities, is much longer.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_6-The_Role_Of_Technology_and_Innovation_In_The_Framework_Of_The_Information_Society.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensor Location Problems As Test Problems Of Nonsmooth Optimization And Test Results Of A Few Nonsmooth Optimization Solvers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010205</link>
        <id>10.14569/IJARAI.2012.010205</id>
        <doi>10.14569/IJARAI.2012.010205</doi>
        <lastModDate>2012-07-01T10:43:43.2670000+00:00</lastModDate>
        
        <creator>Fuchun Huang</creator>
        
        <subject>sensor location problems; mathematical programming; nonsmooth optimization solver; test problems.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>In this paper we address sensor location problems and advocate them as test problems for nonsmooth optimization. These problems have easy-to-understand practical meaning and importance, can easily be generated (even randomly), and their solutions can be displayed visually on a two-dimensional plane. For testing nonsmooth optimization solvers, we present a very simple sensor location problem of two sensors for four objects, whose optimal solutions are known from theoretical analysis. We tested several immediately ready-to-use optimization solvers on this problem and found that MATLAB’s ga() and VicSolver’s UNsolver can solve it, while some other optimization solvers, such as the Excel solver, Dr Frank Vanden Berghen’s CONDOR, R’s optim(), and MATLAB’s fminunc(), cannot.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_5-Sensor_Location_Problems_As_Test_Problems_Of_Nonsmooth_Optimization_And_Test_Results_Of_A_Few_Nonsmooth_Optimization_Solvers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for automatic generation of answers to conceptual questions in Frequently Asked Question (FAQ) based Question Answering System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010204</link>
        <id>10.14569/IJARAI.2012.010204</id>
        <doi>10.14569/IJARAI.2012.010204</doi>
        <lastModDate>2012-07-01T10:43:39.9130000+00:00</lastModDate>
        
        <creator>Saurabh Pal</creator>
        
        <creator>Sudipta Bhattacharya</creator>
        
        <creator>Indrani Datta</creator>
        
        <creator>Arindam Chakravorty</creator>
        
        
        <subject>Question Answering System, Frequently Asked Question, Frequently Asked Question base, knowledge Base, semantic net, semantic frame, tagging.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>A Question Answering System (QAS) generates answers to the various questions posed by users. A QAS uses documents or a knowledge base to extract the answers to factoid and conceptual questions. Using a Frequently Asked Question (FAQ) base gives satisfying results in a QAS, but the limitation of an FAQ-based system lies in the preparation of the question-and-answer set, as most questions cannot be predetermined. A QAS using an FAQ base fails if no questions semantically related to the input question are found in the base. This demands an automatic answering system as a backup, which generates answers especially to conceptual questions. The work presented here gives a framework for the automatic generation of answers to conceptual questions, especially of the “what”, “why” and “how” types, for an FAQ-based Question Answering System, and in turn gradually builds up the FAQ base.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_4-A_Framework_for_automatic_generation_of_answers_to_conceptual_questions_in_Frequently_Asked_Question_based_Question_Answering_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Experimental Validation for CRFNFP Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010203</link>
        <id>10.14569/IJARAI.2012.010203</id>
        <doi>10.14569/IJARAI.2012.010203</doi>
        <lastModDate>2012-07-01T10:43:33.6030000+00:00</lastModDate>
        
        <creator>Wang Mingjun</creator>
        
        <creator>Yi Xinhua</creator>
        
        <creator>Wang Xuefeng</creator>
        
        <creator>Tu Jun</creator>
        
        <subject>autonomous navigation; stereo vision; machine learning; conditional random fields; scene analysis.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>In 2010, we proposed the CRFNFP [1] algorithm to enhance long-range terrain perception for outdoor robots through the integration of both appearance features and spatial contexts. Our preliminary simulation results indicated the superiority of CRFNFP over other existing approaches in terms of accuracy, robustness and adaptability to dynamic unstructured outdoor environments. In this paper, we further study comparison experiments on the navigation behaviors of robotic systems with different scene perception algorithms in real outdoor scenes. We implemented 3 robotic systems and repeated the runs under various conditions. We also defined 3 criteria to facilitate comparison across all systems: Obstacle Response Distance (ORD), Time to Finish Job (TFJ), and Distance of the Whole Run (DWR). The comparative experiments indicate that the CRFNFP-based navigation system outperforms traditional local-map-based navigation systems in terms of all criteria. The results also show that the CRFNFP algorithm does enhance long-range perception for mobile robots and helps plan more efficient paths for navigation.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_3-Experimental_Validation_for_CRFNFP_Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Machine Learning Approach to Deblurring License Plate Using K-Means Clustering Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010202</link>
        <id>10.14569/IJARAI.2012.010202</id>
        <doi>10.14569/IJARAI.2012.010202</doi>
        <lastModDate>2012-07-01T10:43:26.4200000+00:00</lastModDate>
        
        <creator>Sanaz Aliyan</creator>
        
        <creator>Ali Broumandnia</creator>
        
        <subject>license plate recognition; K-Means clustering; deblurring; machine learning.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>Vehicle license plate recognition (LPR) is one of the important fields in Intelligent Transportation Systems (ITS). LPR systems aim to locate, segment and recognize the license plate from a captured car image. Despite the great progress of LPR systems in the last decade, many problems remain to be solved before reaching a robust LPR system adapted to different environments and conditions. Current license plate recognition systems do not work well for blurred plate images. In this paper, to overcome the blurring problem, a new machine learning approach to deblurring license plates using the K-Means clustering method is proposed. Experimental results demonstrate the effectiveness of K-Means clustering as a feature selection method for license plate images.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_2-A_New_Machine_Learning_Approach_to_Deblurring_License_Plate_Using_K-Means_Clustering_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Approach to Classify Learning Disability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010201</link>
        <id>10.14569/IJARAI.2012.010201</id>
        <doi>10.14569/IJARAI.2012.010201</doi>
        <lastModDate>2012-07-01T10:43:20.1370000+00:00</lastModDate>
        
        <creator>Pooja Manghirmalani</creator>
        
        <creator>Darshana More</creator>
        
        <creator>Kavita Jain</creator>
        
        <subject>Learning Disability, Dyslexia, Dysgraphia, Dyscalculia, Diagnosis, Classification, Fuzzy Expert System.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>The endeavor of this work is to support the special education community in its quest to join the mainstream. The initial segment of the paper gives an exhaustive study of the different mechanisms for diagnosing learning disability. After a learning disability is diagnosed, its further classification into dyslexia, dysgraphia or dyscalculia is fuzzy. Hence the paper proposes a model based on a Fuzzy Expert System which enables the classification of learning disability into its various types. This expert system facilitates the simulation of conditions which are otherwise imprecisely defined.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_1-A_Fuzzy_Approach_to_Classify_Learning_Disability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The study of prescriptive and descriptive models of decision making</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010112</link>
        <id>10.14569/IJARAI.2012.010112</id>
        <doi>10.14569/IJARAI.2012.010112</doi>
        <lastModDate>2012-07-01T10:43:13.8230000+00:00</lastModDate>
        
        <creator>Ashok A Divekar</creator>
        
        <creator>Sunita Bangal</creator>
        
        <creator>Sumangala D.</creator>
        
        <subject>Expected Value; Prospect; expected-value rule; risk-averse.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>The field of decision making can be loosely divided into two parts: the study of prescriptive models and the study of descriptive models. Prescriptive decision scientists are concerned with prescribing methods for making optimal decisions. Descriptive decision researchers are concerned with the bounded way in which decisions are actually made. Statistics courses treat risk from a prescriptive standpoint, by suggesting rational methods. This paper brings out the work done by many researchers in examining the psychological factors that explain how managers deviate from rationality in responding to uncertainty.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper12-The_Study_Of_Prescriptive_And_Descriptive_Models_Of_Decision_Making.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Solution of Machines’ Time Scheduling Problem Using Artificial Intelligence Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010111</link>
        <id>10.14569/IJARAI.2012.010111</id>
        <doi>10.14569/IJARAI.2012.010111</doi>
        <lastModDate>2012-07-01T10:43:06.7770000+00:00</lastModDate>
        
        <creator>Ghoniemy S</creator>
        
        <creator>El-sawy A. A.</creator>
        
        <creator>Shohla M. A.</creator>
        
        <creator>Gihan E. H. Ali</creator>
        
        <subject>Machine Time Scheduling; Particle swarm optimization; Genetic Algorithm; Time Window.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>The solution of the Machines’ Time Scheduling Problem (MTSP) is an active research topic that is not yet mature and needs further work. This paper presents two algorithms for the solution of the Machines’ Time Scheduling Problem that lead to the best starting time for each machine in each cycle. The first algorithm is genetic-based (GA) (with non-uniform mutation), and the second is based on particle swarm optimization (PSO) (with a constriction factor). A comparative analysis between the two algorithms is carried out. It was found that particle swarm optimization gives a better penalty cost than the GA and the max-separable technique with regard to the best starting time for each machine in each cycle.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper11-The_Solution_of_Machines_Time_Scheduling_Problem_Using_Artificial_Intelligence_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ELECTRE-Entropy method in Group Decision Support System Model to Gene Mutation Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010110</link>
        <id>10.14569/IJARAI.2012.010110</id>
        <doi>10.14569/IJARAI.2012.010110</doi>
        <lastModDate>2012-07-01T10:43:00.4330000+00:00</lastModDate>
        
        <creator>Ermatita </creator>
        
        <creator>Sri Hartati</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Agus Harjoko</creator>
        
        <subject>Group Decision Support System (GDSS); Multi-Attribute Decision Making (MADM); ELECTRE-entropy; preference.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>Application of a Group Decision Support System (GDSS) can assist in delivering a decision on cancer detection from the various opinions (preferences) of different experts. In this paper we propose ELECTRE-Entropy for GDSS modeling: entropy weighting is used for each criterion under the ELECTRE method, one of the methods in Multi-Attribute Decision Making (MADM). The Group Decision Support System model is applied to multiple criteria, where the simulation data are mutated genes that can cause cancer, and a solution is recommended.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper10-ELECTRE-Entropy_method%20_in_Group_Decision_Support_System_Model_to_Gene_Mutation_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Spatial Metrics based Landscape Structure and Dynamics Assessment for an emerging Indian Megalopolis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010109</link>
        <id>10.14569/IJARAI.2012.010109</id>
        <doi>10.14569/IJARAI.2012.010109</doi>
        <lastModDate>2012-07-01T10:42:54.0670000+00:00</lastModDate>
        
        <creator>Ramachandra T V</creator>
        
        <creator>Bharath H. Aithal</creator>
        
        <creator>Sreekantha S</creator>
        
        <subject>Landscape Metrics; Urbanisation; Urban Sprawl; Remote sensing;  Geoinformatics; Mysore City, India.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>Human-induced land use changes are considered the prime agents of global environmental change. Urbanisation and associated growth patterns (urban sprawl) are characteristic of the spatial and temporal changes that take place at regional levels. Unplanned urbanisation and its consequent impacts on natural resources, including basic amenities, have necessitated the investigation of the spatial patterns of urbanisation. A comprehensive assessment using quantitative and rigorous methods is required to understand the patterns of change that occur as human processes transform landscapes, and to help regional land use planners easily identify and understand the necessary requirements. Tier II cities in India are undergoing rapid changes in recent times and need to be planned to minimize the impacts of unplanned urbanisation. Mysore is one of the rapidly urbanizing traditional regions of Karnataka, India. In this study, an integrated approach of remote sensing and spatial metrics with gradient analysis was used to identify the trends of urban land changes. The spatial and temporal dynamic pattern of the urbanization process of the megalopolis region was studied, considering spatial data for five decades with a 3 km buffer from the city boundary, which helps in the implementation of location-specific mitigation measures. The time series of gradient analysis through landscape metrics helped in describing, quantifying and monitoring the spatial configuration of urbanization at landscape levels. Results indicated a significant increase of urban built-up area during the last four decades. Landscape metrics indicate that coalescence of urban areas occurred during the rapid urban growth from 2000 to 2009, with clumped growth at the center with simple shapes and dispersed growth in the boundary region with convoluted shapes.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper9-Spatial_Metrics_based_Landscape_Structure_and_Dynamics_Assessment_for_an_emerging_Indian_Megalopolis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified Genetic Algorithms Based Solution To Subset Sum Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010107</link>
        <id>10.14569/IJARAI.2012.010107</id>
        <doi>10.14569/IJARAI.2012.010107</doi>
        <lastModDate>2012-07-01T10:42:46.5670000+00:00</lastModDate>
        
        <creator>Harsh Bhasin</creator>
        
        <creator>Neha Singla</creator>
        
        <subject>subset sum problem; genetic algorithms; NP complete; heuristic search.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>The Subset Sum Problem (SSP) is an NP-Complete problem which finds application in diverse fields. This work suggests a solution to the above problem with the help of Genetic Algorithms (GAs). The work also takes into consideration the various attempts that have been made to solve this problem and other such problems. The intent is to develop a generic methodology for solving all NP-Complete problems via GAs, thus exploring their ability to find the optimal solution from among a huge set of solutions. The work has been implemented and analyzed with satisfactory results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper7-Modified_Genetic_Algorithms_Based_Solution_To_Subset_Sum_Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Location Management Approach in GSM Mobile Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010106</link>
        <id>10.14569/IJARAI.2012.010106</id>
        <doi>10.14569/IJARAI.2012.010106</doi>
        <lastModDate>2012-07-01T10:42:40.2430000+00:00</lastModDate>
        
        <creator>N Mallikharjuna Rao</creator>
        
        <creator>M. M. Naidu</creator>
        
        <creator>P.Seetharam</creator>
        
        <subject>Base Station; MSC; HLR; VLR; IMEI; MT; Fuzzy Logic; Fuzzy databases.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of the update costs of the location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. This study presents an intelligent location management approach in which an intelligent information system interacts with knowledge-base technologies, so that user patterns can be changed dynamically and the transitions between the VLR and HLR reduced. The study provides algorithms able to handle location registration and call delivery.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper6-An_Intelligent_Location_Management_approaches_in_GSM_Mobile_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Decision Support System Based on Bayesian Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010105</link>
        <id>10.14569/IJARAI.2012.010105</id>
        <doi>10.14569/IJARAI.2012.010105</doi>
        <lastModDate>2012-07-01T10:42:36.8800000+00:00</lastModDate>
        
        <creator>Hela Ltifi</creator>
        
        <creator>Ghada Trabelsi </creator>
        
        <creator>Mounir Ben Ayed </creator>
        
        <creator>Adel M. Alimi</creator>
        
        <subject> Dynamic Decision Support Systems; Nosocomial Infection; Bayesian Network.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>The improvement of medical care quality is a significant concern for the coming years; the fight against nosocomial infections (NI) in intensive care units (ICU) is a good example. We focus on a set of observations which reflect the dynamic aspect of the decision, resulting from the application of a Medical Decision Support System (MDSS). This system has to make dynamic decisions on temporal data. We use a dynamic Bayesian network (DBN) to model this dynamic process. As this involves temporal reasoning within a real-time environment, we are interested in Dynamic Decision Support Systems in the healthcare domain (MDDSS).</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper5-Dynamic_Decision_Support_System_Based_on_Bayesian_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of soil moisture in paddy field using Artificial Neural Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010104</link>
        <id>10.14569/IJARAI.2012.010104</id>
        <doi>10.14569/IJARAI.2012.010104</doi>
        <lastModDate>2012-07-01T10:42:30.5270000+00:00</lastModDate>
        
        <creator>Chusnul Arif</creator>
        
        <creator>Masaru Mizoguchi</creator>
        
        
        <creator>Ryoichi Doi</creator>
        
        <subject>soil moisture; paddy field; estimation method; artificial neural networks</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>In paddy fields, monitoring soil moisture is required for irrigation scheduling and for water resource allocation, management and planning. The current study proposes an Artificial Neural Network (ANN) model to estimate soil moisture in a paddy field with limited meteorological data. A dynamic ANN model was adopted to estimate soil moisture with reference evapotranspiration (ETo) and precipitation as inputs. ETo was first estimated using the maximum, average and minimum values of air temperature as model inputs. The models were run under the different weather conditions of two paddy cultivation periods. Training was carried out using the observation data from the first period, while validation was conducted on the observation data from the second period. The dynamic ANN model estimated soil moisture with R2 values of 0.80 and 0.73 for the training and validation processes, respectively, indicating tight linear correlations between observed and estimated values of soil moisture. Thus, the ANN model reliably estimates soil moisture with limited meteorological data.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper4-Estimation_of_soil_moisture_in_paddy_field_using_Artificial_Neural_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Neuro-Fuzzy Inference System for Dynamic Load Balancing in 3GPP LTE</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010103</link>
        <id>10.14569/IJARAI.2012.010103</id>
        <doi>10.14569/IJARAI.2012.010103</doi>
        <lastModDate>2012-07-01T10:42:24.2730000+00:00</lastModDate>
        
        <creator>Aderemi A. Atayero</creator>
        
        <creator>Matthew K. Luka</creator>
        
        <subject>ANFIS; 3GPP; LTE; Neural Network; Fuzzy Logic; Load balancing; Virtual load.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>ANFIS is applicable in modeling key parameters when investigating the performance and functionality of wireless networks. The need to save both capital and operational expenditure in the management of wireless networks cannot be over-emphasized. Automation of network operations is a veritable means of achieving the necessary reduction in CAPEX and OPEX. To this end, next-generation networks such as WiMAX, 3GPP LTE and LTE-Advanced provide support for self-optimization, self-configuration and self-healing to minimize human-to-system interaction and hence reap the attendant benefits of automation. One of the most important optimization tasks is load balancing, as it affects network operation right from planning through the lifespan of the network. Several methods for load balancing have been proposed. While some of them have a very sound theoretical basis, they are not practically implementable at the current state of technology. Furthermore, most of the proposed techniques employ iterative algorithms, which are not computationally efficient. This paper proposes the use of soft computing, specifically an adaptive neuro-fuzzy inference system, for dynamic QoS-aware load balancing in 3GPP LTE. Three key performance indicators (the number of satisfied users, virtual load and fairness distribution index) are used to adjust the hysteresis of the load balancing task.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper3-Adaptive_Neuro-Fuzzy_Inference_System_for_Dynamic_Load_Balancing_in_3GPP_LTE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Sparse Representation Method with Maximum Probability of Partial Ranking for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010102</link>
        <id>10.14569/IJARAI.2012.010102</id>
        <doi>10.14569/IJARAI.2012.010102</doi>
        <lastModDate>2012-07-01T10:42:17.9570000+00:00</lastModDate>
        
        <creator>Yi-Haur Shiau</creator>
        
        <creator>Chaur-Chin Chen</creator>
        
        <subject>Compressive sensing; Face recognition; Sparse representation classification; AdaBoost.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>Face recognition is a popular topic in computer vision applications. Compressive sensing is a novel sampling technique for finding sparse solutions to underdetermined linear systems. Recently, a sparse representation-based classification (SRC) method based on compressive sensing was presented and has been successfully applied to face recognition. In this paper, we propose a maximum probability of partial ranking method based on the framework of SRC, called SRC-MP, for face recognition. Eigenfaces, Fisherfaces, 2DPCA and 2DLDA are used for feature extraction. Experiments are implemented on two public face databases, Extended Yale B and ORL. To show that our proposed method is robust for face recognition in the real world, an experiment is also conducted on a web female album (WFA) face database. We utilize the AdaBoost method to automatically detect human faces from web album images with complex backgrounds, illumination variation and image misalignment to construct the WFA database. Furthermore, we compare our proposed method with classical projection-based methods such as principal component analysis (PCA), linear discriminant analysis (LDA), 2DPCA and 2DLDA. The experimental results demonstrate that our proposed method not only is robust to varied viewing angles, expressions and illumination, but also achieves higher recognition rates than the other methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper2-A_Sparse_Representation_Method_with_Maximum_Probability_of_Partial_Ranking_for_Face_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Method for Chinese Short Text Classification Considering Effective Feature Expansion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010101</link>
        <id>10.14569/IJARAI.2012.010101</id>
        <doi>10.14569/IJARAI.2012.010101</doi>
        <lastModDate>2012-07-01T10:42:14.6100000+00:00</lastModDate>
        
        <creator>Mingxuan Liu</creator>
        
        <creator>Xinghua Fan</creator>
        
        <subject>component; short text; classification; semantic relations; semantic constraints; statistical constraints; HowNet.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(1), 2012</description>
        <description>This paper presents a Chinese short text classification method that considers extended semantic constraints and statistical constraints. The method uses the “HowNet” tool to build the attribute set of concepts. In the feature expansion step, we judge the collocation between the attribute words of the original text and the features before and after expansion as the semantic constraint, and calculate the ratio between the mutual information of the original content with the features before expansion and the mutual information of the original content with the features after expansion as the statistical constraint, so as to judge with these two constraints whether a feature expansion is effective. This allows various semantic relation word-pairs to be used rationally in short text classification. Experiments show that this method can exploit semantic relations in Chinese short text classification effectively and improve classification performance.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No1/Paper1-A_Method_for_Chinese_Short_Text_Classification_Considering_Effective_Feature_Expansion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Growing Cloud Computing Efficiency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030526</link>
        <id>10.14569/IJACSA.2012.030526</id>
        <doi>10.14569/IJACSA.2012.030526</doi>
        <lastModDate>2012-07-01T10:42:08.3000000+00:00</lastModDate>
        
        <creator>Dr. Mohamed F AlAjmi</creator>
        
        <creator>Dr. Arun Sharma</creator>
        
        <creator>Shakir Khan</creator>
        
        <subject>Cloud Computing; Data centre; Distributed Computing; Virtualization; Data centre architecture.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the response time of the services they consume. Service developers want service providers to ensure or provide the ability to dynamically allocate and manage resources in response to changing demand patterns in real time. Ultimately, service providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, in order to reduce the total cost of ownership and improve agility. What is required is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data centre from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric data centre infrastructure management stack that makes use of, and validates, key ideas that have enabled dynamism, scalability, reliability and security in the telecommunications industry for computing engineering. Finally, the paper describes a proof-of-concept system that was implemented to show how dynamic resource management can be enforced to enable real-time service assurance for a network-centric data centre architecture.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_26-Growing_Cloud_Computing_Efficiency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Evaluation of IFC-CityGML Unidirectional Conversion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030525</link>
        <id>10.14569/IJACSA.2012.030525</id>
        <doi>10.14569/IJACSA.2012.030525</doi>
        <lastModDate>2012-07-01T10:42:01.9370000+00:00</lastModDate>
        
        <creator>Mohamed El-Mekawy</creator>
        
        <creator>Anders &#214;stman</creator>
        
        <creator>Ihab Hijazi</creator>
        
        <subject>IFC; CityGML; UBM; Evaluation; Unidirectional.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Interoperability between building information models (BIM) and geographic information models has a strong potential to benefit different demands in construction analysis, urban planning, homeland security and other applications. Therefore, different research and commercial efforts have been initiated to integrate the most prominent semantic models in BIM and geospatial applications: the Industry Foundation Classes (IFC) and the City Geography Markup Language (CityGML), respectively. However, these efforts mainly either a) use a unidirectional approach (mostly from IFC to CityGML) for converting data, or b) extend CityGML with conceptual requirements for converting CityGML to IFC models. The purpose of this paper is to investigate the potential of unidirectional conversion between IFC and CityGML. The different IFC concepts and their corresponding concepts in CityGML are studied and evaluated. The investigation goes beyond building objects to include other concepts that are represented implicitly in building schemas, such as relations between building objects, hierarchies of building objects, appearance and other building characteristics. Due to the large semantic differences between the IFC and CityGML standards, the schema mapping is based on a manual pragmatic approach without automatic procedures. The mappings are classified into three categories, namely ‘full matching’, ‘partial matching’ and ‘no matching’. The result of the study shows that only a few concepts are classified as ‘full matching’, a few as ‘no matching’, while most of the building concepts are classified as ‘partial matching’. It is concluded that unidirectional approaches cannot translate all the needed concepts between the IFC and CityGML standards. Instead, we propose a meta-based unified building model, based on both standards, which shows a high potential for overcoming the shortcomings of the unidirectional conversion approaches.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_25-An_Evaluation_of_IFC_CityGML_Unidirectional_Conversion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Rotation-Invariant Neural Pattern Recognition System Using Extracted Descriptive Symmetrical Patterns</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030524</link>
        <id>10.14569/IJACSA.2012.030524</id>
        <doi>10.14569/IJACSA.2012.030524</doi>
        <lastModDate>2012-07-01T10:41:55.6400000+00:00</lastModDate>
        
        <creator>Rehab F. Abdel-Kader</creator>
        
        <creator>Rabab M. Ramadan</creator>
        
        <creator>Fayez W. Zaki</creator>
        
        <creator>Emad El-Sayed</creator>
        
        <subject>Rotation-Invariant; Pattern Recognition; Edge Detection; Shape Orientation; Image Processing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>In this paper a novel rotation-invariant neural-based pattern recognition system is proposed. The system incorporates a new image preprocessing technique to extract rotation-invariant descriptive patterns from shapes. The proposed system applies a three-phase algorithm to the shape image to extract the rotation-invariant pattern. First, the orientation angle of the shape is calculated using a newly developed shape orientation technique. The technique is effective, computationally inexpensive and can be applied to shapes with several non-equally separated axes of symmetry. A simple method to calculate the average angle of the shape’s axes of symmetry is defined. In this technique, only the first moment of inertia is considered, to reduce the computational cost. In the second phase, the image is rotated using a simple rotation technique to adapt its orientation angle to any specific reference angle. Finally, in the third phase, the image preprocessor creates a symmetrical pattern about the axis with the calculated orientation angle and the axis perpendicular to it. Performing this operation in both the neural network training and application phases ensures that rotated test patterns enter the network in the same position as in training. Three different approaches were used to create the symmetrical patterns from the shapes. Experimental results indicate that the proposed approach is very effective and provides a recognition rate of up to 99.5%.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_24-Rotation_Invariant_Neural_Pattern_Recognition_System_Using_Extracted_Descriptive_Symmetrical_Patterns.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Creating a Complete Model of an Intrusion Detection System Effective on the LAN</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030523</link>
        <id>10.14569/IJACSA.2012.030523</id>
        <doi>10.14569/IJACSA.2012.030523</doi>
        <lastModDate>2012-07-01T10:41:49.3330000+00:00</lastModDate>
        
        <creator>Yousef FARHAOUI</creator>
        
        <creator>Ahmed ASIMI</creator>
        
        <subject>Intrusion Detection System; Characteristic; Architecture; Model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Intrusion Detection Systems (IDS) are now an essential component of network security architecture. Logs of connections and network activity, which contain a large amount of information, can be used to detect intrusions. Despite the development of new information and communication technologies following the advent of the Internet and networks, computer security has become a major challenge, and research in this area is growing. Various tools and mechanisms have been developed to ensure a level of security that meets the demands of modern life. Among these systems are intrusion detection systems, which identify abnormal behavior or suspicious activities that undermine the legitimate operation of the system. The objective of this paper is the design and implementation of a comprehensive IDS architecture within a network.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_23-Creating_a_Complete_Model_of_an_Intrusion_Detection_System_effective_on_the_LAN.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamics of Mandelbrot set with Transcendental Function</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030522</link>
        <id>10.14569/IJACSA.2012.030522</id>
        <doi>10.14569/IJACSA.2012.030522</doi>
        <lastModDate>2012-07-01T10:41:42.9970000+00:00</lastModDate>
        
        <creator>Shafali Agarwal</creator>
        
        <creator>Gunjan Srivastava</creator>
        
        <creator>Dr. Ashish Negi</creator>
        
        <subject>Superior Transcendental Mandelbrot Set; Mann iteration; fixed point; Escape Criteria.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>The Mandelbrot set with transcendental functions is currently an interesting area for mathematicians. New equations have been created for the Mandelbrot set using trigonometric, logarithmic and exponential functions. Earlier, Ishikawa iteration was applied to these equations, generating new fractals named the Relative Superior Mandelbrot Set with transcendental functions. In this paper, the Mann iteration is applied to the Mandelbrot set with the sine function, i.e. sin(z^n)+c, and new fractals based on the concept of the Superior Transcendental Mandelbrot Set are shown. Our goal is to focus on the smaller number of iterations required to obtain a fixed point of the function sin(z^n)+c.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_22-Dynamics_of_Mandelbrot_set_with_Transcendental_Function.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Hybrid Method for Arabic Query Disambiguation to Improve Relevance Calculation in IRS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030521</link>
        <id>10.14569/IJACSA.2012.030521</id>
        <doi>10.14569/IJACSA.2012.030521</doi>
        <lastModDate>2012-07-01T10:41:36.6770000+00:00</lastModDate>
        
        <creator>Adil ENAANAI</creator>
        
        <creator>Aziz SDIGUI DOUKKALI</creator>
        
        <creator>El habib BENLAHMER</creator>
        
        <subject>Relevance; Information retrieval; Similarity; Semantic gene; Arabic processing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>In information retrieval systems, query expansion brings benefits to the extraction of relevant documents. However, current expansion types focus on retrieving the maximum number of documents (reducing silence). In Arabic, queries are derived into many morpho-semantic variants, hence the diversity of semantic interpretations that often creates a problem of ambiguity. Our objective is to prepare the Arabic query before its introduction to the document retrieval system. This preparation is based on a pretreatment that makes morphological changes to the query by separating affixes from words. Then, all morpho-semantic derivatives are presented as a first step to the lexical audit agent, and the consistency between words is checked by the context parser. Finally, we present a new method of semantic similarity based on calculating the probability of equivalence between two words.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_21-An_hybrid_method_for_the_Arabic_queries_disambiguation_to_improve_the_relevance_calculation_in_the_IRS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Clustering and Cluster Head Rotation Scheme for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030520</link>
        <id>10.14569/IJACSA.2012.030520</id>
        <doi>10.14569/IJACSA.2012.030520</doi>
        <lastModDate>2012-07-01T10:41:29.5270000+00:00</lastModDate>
        
        <creator>Ashok Kumar</creator>
        
        <creator>Vinod Kumar</creator>
        
        <creator>Narottam Chand</creator>
        
        <subject>Corona; clusters; cluster head; sink; network lifetime.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Wireless sensor nodes are highly energy-constrained devices. They have limited battery life due to various constraints of sensor nodes such as size and cost. Moreover, most Wireless Sensor Network (WSN) applications render it impossible to charge or replace the battery of sensor nodes. Therefore, optimal use of node energy is a major challenge in wireless sensor networks. Clustering of sensor nodes is an effective method to use node energy optimally and prolong the lifetime of an energy-constrained wireless sensor network. In this paper, we propose a location-based protocol for WSNs supporting energy-efficient clustering, cluster head selection/rotation and a data routing method to prolong the lifetime of the sensor network. The proposed clustering protocol ensures the formation of balanced-size clusters within the sensing field with the least number of transmit-receive operations. The cluster head rotation protocol ensures balanced dissipation of node energy, despite the non-uniform energy requirements of the cluster head and sensor nodes within a cluster, thus prolonging the lifetime of the network. Simulation results demonstrate that the proposed protocol prolongs network lifetime due to the use of efficient clustering, cluster head selection/rotation and data routing.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_20-Energy_Efficient_Clustering_and_Cluster_Head_Rotation_Scheme_for_Wireless_Sensor_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bridging content’s quality between the participative web and the physical world</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030519</link>
        <id>10.14569/IJACSA.2012.030519</id>
        <doi>10.14569/IJACSA.2012.030519</doi>
        <lastModDate>2012-07-01T10:41:23.2030000+00:00</lastModDate>
        
        <creator>Anouar Abtoy</creator>
        
        <creator>Noura Aknin</creator>
        
        <creator>Boubker Sbihi</creator>
        
        <subject>Collaborative web; content management; digital content; physical content.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Web 2.0 and its applications have spread widely among Internet users, taking advantage of easy, friendly and rich user interfaces. As a consequence, the creation and production of content has become available to anyone. The ordinary user has stepped forward to become a producer rather than remaining a passive consumer. With this shift in usage, a new concern has emerged: the quality of User-Generated Content (UGC), or User-Created Content (UCC). Our team has developed a new concept of a framework for managing the quality of validated content in the participative web, based on the evaluation of content quality during its lifetime on the web. As a continuation of our work, we present a concept for extending the quality assessment of UGC or UCC into the real world by creating a bridge between digital content and its counterpart in the physical world (e.g. printed version …). We combine existing technologies such as QR codes and RFID to link the digital and the physical content. This approach offers the possibility of following the quality of UGC or UCC, and eventually evaluating it, even in hard copies. The evaluation and assessment of physical content originating from digital content generated through Web 2.0 applications is done in real time. The proposed approach is implemented in our framework by integrating the features into the UML diagrams in the blog case.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_19-Bridging_content%E2%80%99s_quality_between_the_participative_web_and_the_physical_world.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Countermeasure for Round Trip Delay Which Occurs in Between Satellite and Ground with Software Network Accelerator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030518</link>
        <id>10.14569/IJACSA.2012.030518</id>
        <doi>10.14569/IJACSA.2012.030518</doi>
        <lastModDate>2012-07-01T10:41:16.8700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>IP communication through Internet satellite; WINDS; software accelerator.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>A countermeasure for the round-trip delay that occurs between satellite and ground using a network accelerator is investigated, together with the operating system dependency of the accelerator’s effectiveness. Disaster relief data transmission experiments are also conducted for disaster mitigation, together with acceleration of disaster-related data transmission between local government and a disaster prevention center. Disaster relief information, including remote sensing satellite images and information from the disaster-affected areas, sent to local government for the creation of evacuation information, is accelerated so that it becomes possible to deliver it to residents in the affected areas through data broadcasting on the digital TV channel.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_18-Countermeasure_for_Round_Trip_Delay_Which_Occurs_in_Between_Satellite_and_Ground_with_Software_Network_Accelerator.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach for Troubleshooting Microprocessor-based Systems Using an Object-Oriented Expert System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030517</link>
        <id>10.14569/IJACSA.2012.030517</id>
        <doi>10.14569/IJACSA.2012.030517</doi>
        <lastModDate>2012-07-01T10:41:13.5130000+00:00</lastModDate>
        
        <creator>D V Kodavade</creator>
        
        <creator>S.D.Apte</creator>
        
        <subject>Inference mechanism; Object; Fault diagnosis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>This paper presents an object-oriented fault diagnostic expert system framework which analyses observations from the unit under test when a fault occurs and infers the causes of failures. The framework is characterized by two basic features. The first is a fault diagnostic strategy which utilizes fault classification and check knowledge about the unit under test. The fault classification knowledge reduces the complexity of fault diagnosis by partitioning the fault section. The second is an object-oriented inference mechanism using backward chaining with message passing between objects. The refractoriness and recency properties of the inference mechanism improve efficiency in fault diagnosis. The developed framework demonstrates its effectiveness and superiority compared to earlier approaches.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_17-A_Novel_Approach_for_Troubleshooting_Microprocessor_based_Systems_using_An_Object_Oriented_Expert_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Prioritizing Test Cases Using Business Criticality Test Value</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030516</link>
        <id>10.14569/IJACSA.2012.030516</id>
        <doi>10.14569/IJACSA.2012.030516</doi>
        <lastModDate>2012-07-01T10:41:07.2070000+00:00</lastModDate>
        
        <creator>Sonali Khandai</creator>
        
        <creator>Arup Abhinna Acharya</creator>
        
        <creator>Durga Prasad Mohapatra</creator>
        
        <subject>Software Maintenance; Regression Testing; Test case Prioritization; Business Criticality. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Software maintenance is an important and costly activity of the software development lifecycle. Regression testing is the process of validating modifications introduced in a system during software maintenance. It is very inefficient to re-execute every test case in regression testing for small changes. This issue of retesting software systems can be handled using a good test case prioritization technique. A prioritization technique schedules the test cases for execution so that test cases with higher priority are executed before those with lower priority. The objective of test case prioritization is to detect faults as early as possible. Early fault detection provides faster feedback, giving debuggers scope to carry out their task at an early stage. Model-based prioritization has an edge over code-based prioritization techniques. The issue of dynamic changes that occur during the maintenance phase of software development can only be addressed by maintaining statistical data for system models, change models and fault models. In this paper we present a novel approach for test case prioritization by evaluating the Business Criticality Value (BCV) of the various functions present in the software using the statistical data. The test cases are then prioritized according to the business criticality value of the various functions present in the change and fault models.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_16-Prioritizing_Test_Cases_Using_Business_Criticality_Test_Value.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Test Case Generation for Concurrent Object-Oriented Systems Using Combinational UML Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030515</link>
        <id>10.14569/IJACSA.2012.030515</id>
        <doi>10.14569/IJACSA.2012.030515</doi>
        <lastModDate>2012-07-01T10:41:00.9030000+00:00</lastModDate>
        
        <creator>Swagatika Dalai</creator>
        
        <creator>Arup Abhinna Acharya</creator>
        
        <creator>Durga Prasad Mohapatra</creator>
        
        <subject>Software Maintenance; Regression Testing; Test Case Prioritization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Software testing is an important phase of software development to ensure the quality and reliability of the software. Due to some limitations of code-based testing methods, researchers have turned to UML model-based testing. It has been found that different UML models have different coverage and are capable of detecting different kinds of faults. Here we use combinational UML models to achieve better coverage and fault detection capability. Testing a concurrent system is a difficult task because concurrent interactions among the threads and the system result in test case explosion. In this paper we present an approach for generating test cases for concurrent systems using combinational UML models, i.e. the sequence diagram and the activity diagram. A Sequence-Activity Graph (SAG) is constructed from these two diagrams, and that graph is then traversed to generate test cases that are able to minimize test case explosion.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_15-Test_Case_Generation_For_Concurrent_Object_Oriented_Systems_Using_Combinational_Uml_Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QoS Parameters Investigations and Load Intensity Analysis, (A Case for Reengineered DCN)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030514</link>
        <id>10.14569/IJACSA.2012.030514</id>
        <doi>10.14569/IJACSA.2012.030514</doi>
        <lastModDate>2012-07-01T10:40:54.0530000+00:00</lastModDate>
        
        <creator>Udeze Chidiebele C.</creator>
        
        <creator>H. C Inyiama</creator>
        
        <creator>Okafor Kennedy C.</creator>
        
        <creator>Dr C. C. Okezie</creator>
        
        <subject>QoS; DCN; Model; CAD; Homogenous; Process component.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>This paper presents simulation results on the Reengineered DCN model, considering Quality of Service (QoS) parameters in a homogeneous network for enterprise web application support. To make it feasible for the computer-aided design (CAD) tool, a complete and robust real performance assessment of simulation scenarios is considered. The paper deals with the performance measurement and evaluation of a number of homogeneous end-to-end (e2e) DCN subnets, taking into account a wide range of statistics. An active measurement approach to the QoS parameters has been adopted for studying the properties of what we call the concise process model QoS. The paper validates the performance of the Reengineered DCN through simulations with MATLAB SimEvents, which presents efficient performance metrics for its deployment. Considering the model hierarchy in context, the results show that with the design model a highly scalable DCN can be achieved.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_14-QoS_Parameters_Investigations_and_Load_Intensity_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effect Of A Video-Based Laboratory On The High School Pupils’ Understanding Of Constant Speed Motion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030513</link>
        <id>10.14569/IJACSA.2012.030513</id>
        <doi>10.14569/IJACSA.2012.030513</doi>
        <lastModDate>2012-07-01T10:40:50.7030000+00:00</lastModDate>
        
        <creator>Louis Trudel</creator>
        
        <creator>Abdeljalil M&#233;tioui</creator>
        
        <subject>speed; computer-assisted laboratory; understanding; high school; Rasch measurement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Among the physical phenomena studied in high school, the kinematical concepts are important because they constitute a precondition for the study of subsequent concepts of mechanics. Our research aims at studying the effect of a computer-assisted scientific investigation on high school pupils’ understanding of constant speed motion. Experimentation took place in a high school physics classroom. A repeated-measures analysis of variance shows that, during the implementation of this strategy, the pupils’ understanding of kinematical concepts increased significantly. In conclusion, we specify the advantages and limits of the study and give future research directions concerning the design of a computer-assisted laboratory in high school physics.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_13-Effect_of_a_Video-Based_Laboratory_on_the_High_School_Pupils_Understanding_of_Constant_Speed_Motion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Validation of the IS Impact Model for Measuring the Impact of e-Learning Systems in KSA Universities: Student Perspective</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030512</link>
        <id>10.14569/IJACSA.2012.030512</id>
        <doi>10.14569/IJACSA.2012.030512</doi>
        <lastModDate>2012-07-01T10:40:44.4230000+00:00</lastModDate>
        
        <creator>Salem Alkhalaf</creator>
        
        <creator>Steve Drew</creator>
        
        <creator>Anne Nguyen</creator>
        
        <subject>IS-Impact Model; e-learning systems; Saudi Arabia.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>The IS-Impact Measurement Model, developed by Gable, Sedera and Chan in 2008, represents the to-date and expected stream of net benefits from a given information system (IS), as perceived by all major user classes. Although this model has been stringently validated in previous studies, its generalisability and verified effectiveness are enhanced through this new application to e-learning. This paper focuses on re-validating the IS-Impact Model in two universities in the Kingdom of Saudi Arabia (KSA). A total of 528 students were recruited from among the users of the two universities’ e-learning systems. A formative measurement validation was performed with SmartPLS, a graphical structural equation modelling tool, to analyse the collected data. On the basis of the SmartPLS results, together with data-supported IS-impact measurements and dimensions, we confirm the validity of the IS-Impact Model for assessing the effect of e-learning systems in KSA universities. The resulting model is more understandable, and its use proved robust and applicable to various circumstances.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_12-Validation_of_the_IS_Impact_Model_for_Measuringthe_Impact_of_e-Learning_Systems_in_KSA_Universities_StudentPerspective.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2D Satellite Image Registration Using Transform Based and Correlation Based Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030511</link>
        <id>10.14569/IJACSA.2012.030511</id>
        <doi>10.14569/IJACSA.2012.030511</doi>
        <lastModDate>2012-07-01T10:40:38.1100000+00:00</lastModDate>
        
        <creator>H. B. Kekre</creator>
        
        <creator>Dr. Tanuja K. Sarode</creator>
        
        <creator>Ms. Ruhina B. Karani</creator>
        
        <subject>Discrete Cosine Transform (DCT); Discrete Wavelet Transform (DWT); HAAR Transform; Walsh Transform; Normalized Cross Correlation; Interest Point Area Extraction; Image Registration.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Image registration is the process of geometrically aligning one image to another image of the same scene taken from different viewpoints or by different sensors. It is a fundamental image processing technique and is very useful in integrating information from different sensors, finding changes in images taken at different times, and inferring three-dimensional information from stereo images. Image registration can be performed using two kinds of matching methods: transform-based methods and correlation-based methods. When image registration is done using correlation-based methods such as normalized cross correlation, the results are slow; these methods are also computationally complex and sensitive to image intensity changes caused by noise and varying illumination. In this paper, an alternative form of image registration is proposed that uses various transforms for fast and accurate registration. The data set can be a set of photographs, or data from various sensors, from different times, or from different viewpoints. Image registration is applied in computer vision, medical imaging, military automatic target recognition, and the analysis of images and data from satellites. The proposed technique works on satellite images. It finds the area of interest by comparing the unregistered image with the source image and locating the part with the highest similarity. The paper mainly works on the concept of seeking water or land in the stored image. The proposed technique uses different transforms, namely the Discrete Cosine Transform, Discrete Wavelet Transform, HAAR Transform and Walsh Transform, to achieve accurate image registration. The paper also uses normalized cross correlation, an area-based image registration technique, for comparison. The root mean square error is used as the similarity measure. Experimental results show that the proposed algorithm can successfully register the template and can also handle local distortion in high-resolution satellite images.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_11-2D_Satellite_Image_Registration_Using_Transform_Based_and_Correlation_Based_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Participation Modeling and Developing with Trust for Decision Making Supplement Purpose</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030509</link>
        <id>10.14569/IJACSA.2012.030509</id>
        <doi>10.14569/IJACSA.2012.030509</doi>
        <lastModDate>2012-07-01T10:40:34.3800000+00:00</lastModDate>
        
        <creator>Vitri Tundjungsari</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <creator>Edi Winarko</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <subject>e-Participation; trust; group decision making. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>ICT has been employed in various areas, including e-Participation, to support citizen participation and achieve the ideal of democracy. Trust, as a social behavior, can be used to model preferences and facilitate better participation and interaction in decision making within a group of decision makers. In this paper we present a survey of the literature on e-Participation and trust in computer science; we also propose a Group Decision Support System model and application that utilizes trust to support decisions in a collaborative team. In brief, the model is based on the synthesis of group members’ preferences following an appropriate aggregation procedure.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_9-E-Participation_Modeling_and_Developing_with_Trust_for_Decision_Making_Supplement_Purpose.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Discriminant Model of Network Anomaly Behavior Based on Fuzzy Temporal Inference</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030508</link>
        <id>10.14569/IJACSA.2012.030508</id>
        <doi>10.14569/IJACSA.2012.030508</doi>
        <lastModDate>2012-07-01T10:40:26.9130000+00:00</lastModDate>
        
        <creator>Ping He</creator>
        
        <subject>network anomaly behavior; anomalous omen; fuzzy temporal; set covering; hypothesis graph .</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>The aim of this paper is to provide an active inference algorithm for anomalous behavior. As a main concept we introduce the fuzzy temporal consistency covering set, and put forward a fuzzy temporal selection model based on temporal inference and covering technology. Fuzzy sets are used to describe the omens and characteristics of network anomaly behavior, as well as the relations between them. We set up a basic monitoring framework for anomalous behaviors by using causality inference over network behaviors, and then provide a recognition method for network anomaly behavior characteristics based on hypothesis-graph search. As shown in the example, the monitoring algorithm offers a degree of reliability and operability.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_8-A_Discriminant_Model_of_Network_Anomaly_Behavior_Based_on_Fuzzy_Temporal_Inference.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Denoising Method for Removal of Mixed Noise in Medical Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030507</link>
        <id>10.14569/IJACSA.2012.030507</id>
        <doi>10.14569/IJACSA.2012.030507</doi>
        <lastModDate>2012-07-01T10:40:23.5600000+00:00</lastModDate>
        
        <creator>J. Umamaheswari</creator>
        
        <creator>Dr. G. Radhamani</creator>
        
        <subject>Gaussian noise, impulse noise, UQI, Wavelet filter, CWM, and hybrid approach.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Digital image acquisition and processing techniques play a very important role in present-day medical diagnosis. During the acquisition process, distortions may be introduced into the images, which negatively affect diagnosis. In this paper, a new technique based on the hybridization of wavelet filters and center weighted median (CWM) filters is proposed for denoising images corrupted by mixed (Gaussian and impulse) noise. The model is evaluated on standard Digital Imaging and Communications in Medicine (DICOM) images, and performance is measured in terms of peak signal to noise ratio (PSNR), Mean Absolute Error (MAE), Universal Image Quality Index (UQI) and Evaluation Time (ET). Results show that simply applying center weighted median filters in combination with wavelet thresholding filters to DICOM images deteriorates the performance, whereas the proposed filter gives suitable results in terms of PSNR, MAE, UQI and ET. In addition, the proposed filter gives nearly uniform and consistent results on all the test images.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_7-Hybrid_Denoising_Method_for_Removal_of_Mixed_Noise_in_Medical_Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Defect Diagnosis in Rotors Systems by Vibrations Data Collectors Using Trending Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030506</link>
        <id>10.14569/IJACSA.2012.030506</id>
        <doi>10.14569/IJACSA.2012.030506</doi>
        <lastModDate>2012-07-01T10:40:17.2370000+00:00</lastModDate>
        
        <creator>Hisham A. H. Al-Khazali</creator>
        
        <creator>Mohamad R. Askari</creator>
        
        <subject>vibration data collectors; analysis software; rotating components. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Vibration measurements have been used to reliably diagnose performance problems in machinery and related mechanical products. A vibration data collector can be used effectively to measure and analyze the vibration content of machinery such as gearboxes, engines, turbines, fans, compressors, pumps and bearings. Ideally, a machine will have little or no vibration, indicating that the rotating components are appropriately balanced, aligned, and well maintained. Quick analysis and assessment of the vibration content can lead to fault diagnosis and a prognosis of a machine’s ability to continue running. This research uses vibration measurements to pinpoint mechanical defects such as unbalance, misalignment, resonance, and part loosening, and consequently to support the diagnosis process for engineers and technicians who wish to understand the vibration that exists in structures and machines.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_6-Defect_Diagnosis_in_Rotors_Systems_by_Vibrations_Data_Collectors_Using_Trending_Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bins Formation using CG based Partitioning of Histogram Modified Using Proposed Polynomial Transform ‘Y=2X-X²’ for CBIR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030505</link>
        <id>10.14569/IJACSA.2012.030505</id>
        <doi>10.14569/IJACSA.2012.030505</doi>
        <lastModDate>2012-07-01T10:40:11.2770000+00:00</lastModDate>
        
        <creator>H. B. Kekre</creator>
        
        <creator>Kavita Sonawane</creator>
        
        <subject>Polynomial Transform, Euclidean Distance, Cosine Correlation Distance, Absolute Distance, Modified Histogram, CG based partitioning, Bins formation, PRCP, LSRR, Longest String. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>This paper proposes a novel polynomial transform to modify the original histogram of an image, shifting pixel density evenly towards the higher intensity levels so that a uniform distribution of pixels is obtained and the image is enhanced. We show the effective use of this modified histogram for Content Based Image Retrieval. In the CBIR system described in this paper, each image is separated into R, G and B planes, and a modified histogram is calculated for each plane. This modified histogram is partitioned into two parts by calculating its center of gravity (CG), and 8 bins are formed on the basis of the R, G and B values. These 8 bins hold the count of pixels falling into particular ranges of intensity levels in the two parts of the histogram. This count of pixels in the 8 bins is used as a feature vector of dimension 8 for comparison, to facilitate the image retrieval process. Further, the bin data is used to form new variations of feature vectors: the total (sum) and mean of the intensities of all pixels counted in each of the 8 bins. These feature vector variations have also produced good image retrieval results. The paper compares the proposed system, designed using CG based partitioning of the original histogram and of the histogram modified by the polynomial transform, for forming the 8 bins holding the count of pixels and the total and mean of their intensities. The CBIR system is tested using 200 query images from 20 different classes over a database of 2000 BMP images. Query and database image feature vectors are compared using three similarity measures, namely Euclidean distance, Cosine Correlation distance and Absolute distance. Performance of the system is evaluated using three parameters: PRCP (Precision Recall Cross-over Point), LSRR (Length of String to Retrieve all Relevant images) and Longest String.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_5-Bins_Formation_using_CG_based_Partitioning_of_Histogram_Modified_Using_Proposed_Polynomial.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eye Detection Based on Color and Shape Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030504</link>
        <id>10.14569/IJACSA.2012.030504</id>
        <doi>10.14569/IJACSA.2012.030504</doi>
        <lastModDate>2012-07-01T10:40:07.4600000+00:00</lastModDate>
        
        <creator>Aryuanto Soetedjo</creator>
        
        <subject>eye detection; color thresholding; ellipse fitting.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>This paper presents an eye detection technique based on color and shape features. The approach consists of three steps: rough eye localization using a projection technique, white color thresholding to extract the white sclera, and ellipse fitting to fit the elliptical shape of the eye. The proposed white color thresholding utilizes the normalized RGB chromaticity diagram, where white objects are bounded by a small circle on the diagram. Experimental results show that the proposed technique achieves a high eye detection rate of 91%.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_4-Eye_Detection_Based-on_Color_and_Shape_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection Method for Clustered Microcalcification in Mammogram Image Based on Statistical Textural Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030503</link>
        <id>10.14569/IJACSA.2012.030503</id>
        <doi>10.14569/IJACSA.2012.030503</doi>
        <lastModDate>2012-07-01T10:40:00.4070000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Automated Detection Method; Mammogram; Microcalcification; Statistical Textural Features; Standard Deviation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Breast cancer is the most feared cancer for women in the world. A current problem closely related to this issue is how to deal with the small calcification spots inside the breast called microcalcifications (MC). As a preventive measure, a breast screening examination called a mammogram is provided. Mammogram images with a considerable amount of MC have been a problem for doctors and radiologists when they must correctly determine the region of interest, which in this study is clustered MC. We therefore propose an automated method to detect clustered MC using two main techniques: multi-branch standard deviation analysis for clustered MC detection and the surrounding region dependence method for individual MC detection. Our proposed method achieved a classification rate of 70.8%, with sensitivity and specificity of 79% and 87%, respectively. These results are sufficiently promising for further development.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_3-Automated_Detection_Method_for_Clustered_Microcalcification_in_Mammogram_Image_Based_on_Statistical_Textural_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Neural Network System for Detection of Obstructive Sleep Apnea Through SpO2 Signal Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030502</link>
        <id>10.14569/IJACSA.2012.030502</id>
        <doi>10.14569/IJACSA.2012.030502</doi>
        <lastModDate>2012-07-01T10:39:57.0170000+00:00</lastModDate>
        
        <creator>Laiali Almazaydeh</creator>
        
        <creator>Miad Faezipour</creator>
        
        <creator>Khaled Elleithy</creator>
        
        <subject>sleep apnea; PSG; SpO2; feature extraction; oximetry; neural networks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Obstructive sleep apnea (OSA) is a common disorder in which individuals stop breathing during their sleep. These episodes last 10 seconds or more and cause oxygen levels in the blood to drop. Most sleep apnea cases currently go undiagnosed because of the expense and practical limitations of overnight polysomnography (PSG) at sleep labs, where an expert human observer is required. New techniques for sleep apnea classification are being developed by bioengineers for more comfortable and timely detection. In this study, we develop and validate a neural network (NN) that uses SpO2 measurements obtained from pulse oximetry to predict OSA. The results show that the NN is useful as a predictive tool for OSA, with high performance and an accuracy of approximately 93.3%, which is better than techniques reported in the literature.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_2-A_Neural_Network_System_for_Detection_of_Obstructive_Sleep_Apnea_Through_SpO2_Signal_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data normalization and integration in Robotic Systems using Web Services Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030501</link>
        <id>10.14569/IJACSA.2012.030501</id>
        <doi>10.14569/IJACSA.2012.030501</doi>
        <lastModDate>2012-07-01T10:39:51.0800000+00:00</lastModDate>
        
        <creator>Jose Vicente Berna-Martinez</creator>
        
        <creator>Francisco Macia-Perez</creator>
        
        <subject>SOA; robots architecture; web services; management and integration.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(5), 2012</description>
        <description>Robotics is one of the most active research areas, and building robots requires bringing together a large number of disciplines. Under these premises, one problem is the management of information from multiple heterogeneous sources. Each component, hardware or software, produces data of a different nature: temporal frequency, processing needs, size, type, etc. Nowadays, technologies and software engineering paradigms such as service-oriented architectures are applied to solve this problem in other areas. This paper proposes the use of these technologies to implement a robotic control system based on services. Such a system allows the integration and collaborative work of the different elements that make up a robotic system.</description>
        <description>http://thesai.org/Downloads/Volume3No5/Paper_1-Data_normalization_and_integration_in_Robotic_Systems_using_Web_Services_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Scientific Data from Pub-Med Database</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030421</link>
        <id>10.14569/IJACSA.2012.030421</id>
        <doi>10.14569/IJACSA.2012.030421</doi>
        <lastModDate>2012-07-01T10:39:44.7670000+00:00</lastModDate>
        
        <creator>G. Charles Babu</creator>
        
        <creator>Dr. A. Govardhan</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The continuous, rapidly growing volume of scientific literature, the increasing diversification of inter-disciplinary fields of science, and their answers to unsolved problems in medical and allied fields present a major problem to scientists and librarians. It should be recalled in this respect that as many as 4800 scientific journals exist on the internet today, some of which are online only. The list of journals covered by the Thomson Reuters subject citation indexes can be obtained from its website. From a researcher’s point of view, the problem is amplified by today’s competition, where time may not be available for experimental work that merely reproduces already published information. Therefore, considering these facts on the one hand and the volume of serials on the other, a study has been initiated to evaluate the scientific literature published in various journal sources. The scope of the study does not permit inclusion of all periodicals in the extensive fields of biology, so a text mining routine was employed to extract data based on keywords such as bioinformatics, algorithms, genomics and proteomics. The wide availability of genome sequence data has created abundant opportunities, most notably in functional genomics and proteomics. This quiet revolution in the biological sciences has been enabled by our ability to collect, manage, analyze, and integrate large quantities of data.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_21-Mining_Scientific_Data_from_Pub-Med_Database.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge Discovery in Health Care Datasets Using Data Mining Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030420</link>
        <id>10.14569/IJACSA.2012.030420</id>
        <doi>10.14569/IJACSA.2012.030420</doi>
        <lastModDate>2012-07-01T10:39:38.7930000+00:00</lastModDate>
        
        <creator>MD. Ezaz Ahmed</creator>
        
        <creator>Dr. Y.K. Mathur</creator>
        
        <creator>Dr. Varun Kumar</creator>
        
        <subject>NCDs, Web, Web Data, Web Mining, Data Mining, Healthcare.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>Non-communicable diseases (NCDs) are the biggest global killers today. Sixty-three percent of all deaths in 2008 – 36 million people – were caused by NCDs. Nearly 80% of these deaths occurred in low- and middle-income countries, where the highest proportion of deaths under the age of 70 from NCDs occurs [1]. The prevalence of NCDs, and the resulting number of related deaths, is expected to increase substantially in the future, particularly in low- and middle-income countries, due to population growth and ageing, in conjunction with economic transition and the resulting changes in behavioral, occupational and environmental risk factors. NCDs already disproportionately affect low- and middle-income countries. Current projections indicate that by 2020 the largest increases in NCD mortality will occur in Africa, India and other low- and middle-income countries [2]. Computer-based support in healthcare is becoming ever more important; no other domain sees so many innovative changes with such a high social impact. There is already a long-standing tradition of computer-based decision support for complex problems in medicine, such as diagnosing disease, making managerial decisions and assisting in the prescription of appropriate treatment. Since research is for the people, not for oneself, we are pleased to work for healthcare and hence for society and, ultimately, mankind.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_20-Knowledge_Discovery_in_Health_Care_Datasets_Using_Data_Mining_Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Construction of a Web-Based Learning Platform from the Perspective of Computer Support for Collaborative Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030419</link>
        <id>10.14569/IJACSA.2012.030419</id>
        <doi>10.14569/IJACSA.2012.030419</doi>
        <lastModDate>2012-07-01T10:39:32.4730000+00:00</lastModDate>
        
        <creator>Hsu Cheng Mei</creator>
        
        <subject>computer support for collaborative design, computer support for collaborative learning, design education, mind-map, web-based learning platform.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The purpose of this study is to construct a web-based learning platform for Computer Support for Collaborative Design (CSCD) based on theories related to a constructivist learning environment model, mind mapping, and computer-supported collaborative learning. The platform conforms to the needs of design students and provides effective tools for interaction and collaborative learning by integrating mind-mapping tools into a learning environment that utilizes CSCD, a computer-assisted support system that can support and enhance group collaboration. The establishment of the CSCD learning platform represents a significant advance over the fixed functions and existing models of current online learning platforms, and it is the only learning platform in the world that focuses on learners in design departments. The platform stands out for its user-friendly functions and innovative technology. In terms of funding, technical ability, human resources, organizational strategies, and risk analysis and evaluation, the learning platform is also worthy of expansion and implementation.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_19-The_Construction_of_a_Web-Based_Learning_Platform_from_the_Perspective_of_Computer_Support_for_Collaborative_Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-learning Document Search Method with Supplemental Keywords Derived from Keywords in Meta-Tag and Descriptions which are Included in the Header of the First Search Result</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030418</link>
        <id>10.14569/IJACSA.2012.030418</id>
        <doi>10.14569/IJACSA.2012.030418</doi>
        <lastModDate>2012-07-01T10:39:26.1370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Herman Tolle</creator>
        
        <subject>Search engine, e-learning content, thesaurus engine.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>An optimization method for e-learning document search is proposed, using supplemental keywords derived from the keywords and descriptions in the meta-tags of web search results together with a thesaurus engine. An improvement of 15 to 20% in the hit rate of search performance is confirmed with the proposed search engine.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_18-E-learning_Document_Search_Method_with_Supplemental_Keywords_Derived_from_Keywords.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Balancing a Sphere in a Linear Oscillatory Movement through Fuzzy Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030417</link>
        <id>10.14569/IJACSA.2012.030417</id>
        <doi>10.14569/IJACSA.2012.030417</doi>
        <lastModDate>2012-07-01T10:39:22.7670000+00:00</lastModDate>
        
        <creator>Gustavo Ozuna</creator>
        
        <creator>German Figueroa</creator>
        
        <creator>Marek Wosniak</creator>
        
        <subject>ball beam system; balancing a metallic sphere; inclination; equilibrium forces.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>This paper describes an intelligent control problem: balancing a metallic sphere on a beam that oscillates about a single point located at the middle of the beam, and the fuzzy control system used to maintain this balance.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_17-Balancing_a_Sphere_in_a_Linear_Oscillatory_Movement_through_Fuzzy_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of 2D and 3D Local Binary Pattern in Lung Cancer Diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030416</link>
        <id>10.14569/IJACSA.2012.030416</id>
        <doi>10.14569/IJACSA.2012.030416</doi>
        <lastModDate>2012-07-01T10:39:16.3230000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Yeni Herdiyeni</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>lung cancer detection, local binary pattern, probabilistic neural network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>A comparative study between 2D and 3D Local Binary Pattern (LBP) methods for feature extraction from Computed Tomography (CT) imagery in lung cancer diagnosis is conducted. The lung image classification is performed using a probabilistic neural network (PNN) with histogram similarity as the distance measure. The technique is evaluated on a set of CT lung images from the Japan Society of Computer Aided Diagnosis of Medical Images. Experimental results show that 3D LBP has superior accuracy compared to 2D LBP: the 2D LBP and 3D LBP achieved classification accuracies of 43% and 78%, respectively.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_16-Comparison_of_2D_and_3D_Local_Binary_Pattern_in_Lung_Cancer_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secure Digital Cashless Transactions with Sequence Diagrams and Spatial Circuits to Enhance the Information Assurance and Security Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030415</link>
        <id>10.14569/IJACSA.2012.030415</id>
        <doi>10.14569/IJACSA.2012.030415</doi>
        <lastModDate>2012-07-01T10:39:12.9030000+00:00</lastModDate>
        
        <creator>Dr. Yousif AL Bastaki</creator>
        
        <creator>Dr. Ajantha Herath</creator>
        
        <subject>e-cashless; transactions; cryptographic; algorithms; Sequence diagrams, Spatial circuits.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>Students often have difficulty mastering cryptographic algorithms. For some time we have been developing methods for introducing important security concepts to both undergraduate and graduate students in Information Systems, Computer Science, and Engineering. To achieve this goal, sequence diagrams and the derivation of spatial circuits from equations are introduced to students. Sequence diagrams represent the progression of events over time. Students learn system security concepts more effectively if they know how to transform equations and high-level programming language constructs into spatial circuits or special-purpose hardware. This paper describes an active learning module developed to help students understand secure protocols, algorithms, and the modeling of web applications to prevent attacks, as well as both software and hardware implementations related to encryption. These course materials can also be used in computer organization and architecture classes to help students understand and develop special-purpose circuitry for cryptographic algorithms.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_15-Secure_Digital_Cashless_Transactions_with_Sequence_Diagrams_and_Spatial_Circuits_to_Enhance_the_Information_Assurance_and_Security_Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reversible Anonymization of DICOM Images using Cryptography and Digital Watermarking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030414</link>
        <id>10.14569/IJACSA.2012.030414</id>
        <doi>10.14569/IJACSA.2012.030414</doi>
        <lastModDate>2012-07-01T10:39:06.4900000+00:00</lastModDate>
        
        <creator>Youssef ZAZ</creator>
        
        <creator>Lhoussain ELFADIL</creator>
        
        <subject>DICOM images; watermarking; Huffman coding; reversible anonymization; public key cryptosystem.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical images. A DICOM file contains the image data and a number of attributes, including identifying patient data (name, age, insurance ID card,…) and non-identifying patient data (doctor’s interpretation, image type,…). Medical images serve not only for examination but can also be used for research and education purposes. To protect patient privacy, before authorizing researchers to use these images the medical staff deletes all data that would reveal the patient’s identity. This manipulation is called anonymization. In this paper, we propose a reversible anonymization of DICOM images. The identifying patient data, together with an image digest computed by the well-known SHA-256 hash function, are encrypted using the proposed probabilistic public-key cryptosystem. After compressing the Least Significant Bit (LSB) bitplane of the image using the Huffman coding algorithm, the encrypted data is inserted into the liberated zone of the LSB bitplane. The proposed method allows researchers to use anonymized DICOM images while giving authorized staff, if necessary, the possibility of recovering the original image with all related patient data.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_14-Reversible_Anonymization_of_DICOM_Images_using_Cryptography_and_Digital_Watermarking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ATM Security Using Fingerprint Biometric Identifer: An Investigative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030412</link>
        <id>10.14569/IJACSA.2012.030412</id>
        <doi>10.14569/IJACSA.2012.030412</doi>
        <lastModDate>2012-07-01T10:38:59.7270000+00:00</lastModDate>
        
        <creator>Moses Okechukwu Onyesolu</creator>
        
        <creator>Ignatius Majesty Ezeani</creator>
        
        <subject>ATM; PIN; Fingerprint; security; biometric.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The growth in electronic transactions has resulted in a greater demand for fast and accurate user identification and authentication. Access codes for buildings, bank accounts, and computer systems often use personal identification numbers (PINs) for identification and security clearance. Conventional methods of identification based on possession of an ID card or exclusive knowledge, such as a social security number or a password, are not altogether reliable. An embedded fingerprint biometric authentication scheme for automated teller machine (ATM) banking systems is proposed in this paper. In this scheme, a fingerprint biometric technique is fused with the ATM for person authentication to improve the security level.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_12-ATM_Security_Using_Fingerprint_Biometric_Identifer_An_Investigative_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Authentication Protocol Based on Combined RFID-Biometric System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030411</link>
        <id>10.14569/IJACSA.2012.030411</id>
        <doi>10.14569/IJACSA.2012.030411</doi>
        <lastModDate>2012-07-01T10:38:53.3100000+00:00</lastModDate>
        
        <creator>Noureddine Chikouche</creator>
        
        <creator>Foudil Cherif</creator>
        
        <creator>Mohamed Benmohammed</creator>
        
        <subject>RFID; authentication protocol; biometric; security.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>Radio Frequency Identification (RFID) and biometric technologies have evolved rapidly in recent years and are used in several applications, such as access control. Among the important characteristics of RFID tags is the limitation of resources (memory, energy, …). Our work focuses on the design of an RFID authentication protocol that uses biometric data and ensures secrecy, authentication, and privacy. Our protocol requires a PRNG (Pseudo-Random Number Generator), a robust hash function, and a biometric hash function. The biometric hash function is used to optimize and protect biometric data. For the security analysis of the proposed protocol, we use the AVISPA and SPAN tools to verify authentication and secrecy.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_11-An_Authentication_Protocol_Based_on_Combined_RFID-Biometric_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of OpenMP &amp; OpenCL Parallel Processing Technologies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030410</link>
        <id>10.14569/IJACSA.2012.030410</id>
        <doi>10.14569/IJACSA.2012.030410</doi>
        <lastModDate>2012-07-01T10:38:46.8700000+00:00</lastModDate>
        
        <creator>Krishnahari Thouti</creator>
        
        <creator>S.R.Sathe</creator>
        
        <subject>OpenMP; OpenCL; Multicore; Parallel Computing; Graphical processors.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>This paper presents a comparison of OpenMP and OpenCL based on the parallel implementation of algorithms from various fields of computer applications. The focus of our study is the benchmark performance of OpenMP versus OpenCL. We observed that the OpenCL programming model is a good option for mapping threads onto different processing cores. Balancing all available cores and allocating a sufficient amount of work among all computing units can lead to improved performance. In our experiments, we used the Fedora operating system on a system with an Intel Xeon dual-core processor with a thread count of 24, coupled with an NVIDIA Quadro FX 3800 graphical processing unit.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_10-Comparison_of_OpenMP_OpenCL_Parallel_Processing_Technologies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Intelligent Joint Admission Control for Next Generation Wireless Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030409</link>
        <id>10.14569/IJACSA.2012.030409</id>
        <doi>10.14569/IJACSA.2012.030409</doi>
        <lastModDate>2012-07-01T10:38:40.4070000+00:00</lastModDate>
        
        <creator>Mohammed M Alkhawlani</creator>
        
        <creator>Fadhl M. Al-Akwaa</creator>
        
        <creator>Abdulqader M. Mohsen</creator>
        
        <subject>Heterogeneous Wireless Network; Radio Access Network; PROMETHEE; Joint Admission Control</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The Heterogeneous Wireless Network (HWN) integrates different wireless networks into one common network. The integrated networks often have overlapping coverage in the same wireless service areas, leading to the availability of a great variety of innovative services based on user demands in a cost-efficient manner. Joint Admission Control (JAC) handles all new or handoff service requests in the HWN. It checks whether an incoming service request to the Radio Access Network (RAN) selected by the initial access network selection or the vertical handover module can be admitted and allocated suitable resources. In this paper, a decision support system is developed to address the JAC problem in modern HWNs. The system combines fuzzy logic with the PROMETHEE II multiple-criteria decision-making algorithm, decreasing the influence of the dissimilar, imprecise, and contradictory measurements for the JAC criteria coming from different sources. A performance analysis is carried out and the results are compared with traditional JAC algorithms, demonstrating a significant improvement with the developed algorithm.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_9-Intelligent_Joint_Admission_Control_for_Next_Generation_Wireless_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-Time Fish Observation and Fish Category Database Construction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030408</link>
        <id>10.14569/IJACSA.2012.030408</id>
        <doi>10.14569/IJACSA.2012.030408</doi>
        <lastModDate>2012-07-01T10:38:33.9700000+00:00</lastModDate>
        
        <creator>Yi-Haur Shiau</creator>
        
        <creator>Sun-In Lin</creator>
        
        <creator>Fang-Pang Lin</creator>
        
        <creator>Chaur-Chin Chen</creator>
        
        <subject>Real-time streaming; Fish observation; Fish detection; Distributed architecture.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>This paper proposes a distributed real-time video stream system for underwater fish observation in the real world. The system, based on a three-tier architecture, includes a capture device unit, a stream processor unit, and a display device unit. It supports a variety of capture source devices, such as HDV, DV, WebCam, TV Card, and Capture Card, and video compression formats such as WMV, FLV/SWF, MJPEG, and MPEG-2/4. The system has been deployed in Taiwan for long-term underwater fish observation, with CCTV cameras and high-definition cameras connected to it. Video compression and image processing methods are implemented to reduce network transfer load and data storage space. Marine ecologists and end users can browse these real-time video streams via the Internet to observe ecological changes immediately. The video data are preserved to form a resource base for marine ecologists. Based on the video data, fish detection is implemented. However, detection is complicated in the unconstrained underwater environment because water flow causes the water plants to sway severely. In this paper, a bounding-surrounding boxes method is proposed to overcome this problem. It efficiently classifies moving fish as foreground objects and the swaying water plants as background objects, and it enables the removal of irrelevant segments (without fish) to reduce the massive amount of video data. Moreover, fish tracking is implemented to acquire images of multiple fish species with varied angles, sizes, shapes, and illumination to construct a fish category database.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_8-Real-Time_Fish_Observation_and_Fish_Category_Database_Construction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-input Multi-output Beta Wavelet Network: Modeling of Acoustic Units for Speech Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030407</link>
        <id>10.14569/IJACSA.2012.030407</id>
        <doi>10.14569/IJACSA.2012.030407</doi>
        <lastModDate>2012-07-01T10:38:27.6230000+00:00</lastModDate>
        
        <creator>Ridha Ejbali</creator>
        
        <creator>Mourad Zaied</creator>
        
        <creator>Chokri Ben Amar</creator>
        
        <subject>wavelet network; multi-input multi-output wavelet network MIMOWN; speech recognition; modeling of acoustic units; wavelet network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>In this paper, we propose a novel wavelet network architecture called the Multi-input Multi-output Wavelet Network (MIMOWN) as a generalization of the classical wavelet network architecture. This novel prototype was applied to speech recognition, especially to modeling acoustic units of speech. The originality of our work is the proposal of MIMOWN to model acoustic units of speech, an approach intended to overcome the limitations of the classical wavelet network model. The multi-input multi-output architecture allows training the wavelet network on various examples of acoustic units.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_7-Multi-input_multi-output_Beta_wavelet_network_modeling_of_acoustic_units_for_speech_recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New 3D Model-Based Tracking Technique for Robust Camera Pose Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030406</link>
        <id>10.14569/IJACSA.2012.030406</id>
        <doi>10.14569/IJACSA.2012.030406</doi>
        <lastModDate>2012-07-01T10:38:21.3470000+00:00</lastModDate>
        
        <creator>Fakhreddine Ababsa</creator>
        
        <subject>Pose estimation; Line tracking; Kalman filtering; Augmented Reality.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>In this paper we present a new robust camera pose estimation approach based on 3D line features. The proposed method is well adapted to mobile augmented reality applications. We use an Extended Kalman Filter (EKF) to incrementally update the camera pose in real time. The principal contributions of our method are, first, the extension of the RANSAC scheme to achieve a robust matching algorithm that associates 2D edges from the image with the 3D line segments from the input model, and second, a new powerful framework for camera pose estimation using only 2D-3D straight lines within an EKF. Experimental results on real image sequences are presented to evaluate the performance and feasibility of the proposed approach in indoor and outdoor environments.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_6-A_New_3D_Model-Based_Tracking_Technique_for_Robust_Camera_Pose_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Nearest Neighbor Value Interpolation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030405</link>
        <id>10.14569/IJACSA.2012.030405</id>
        <doi>10.14569/IJACSA.2012.030405</doi>
        <lastModDate>2012-07-01T10:38:14.0270000+00:00</lastModDate>
        
        <creator>Olivier Rukundo</creator>
        
        <creator>Hanqiang Cao</creator>
        
        <subject>neighbor value; nearest; bilinear; bicubic; image interpolation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>This paper presents the nearest neighbor value (NNV) algorithm for high-resolution (H.R.) image interpolation. The difference between the proposed algorithm and the conventional nearest neighbor algorithm is that the estimation of the missing pixel value is guided by the nearest value rather than the distance. In other words, the proposed concept selects the one pixel, among the four directly surrounding the empty location, whose value is closest to the value generated by the conventional bilinear interpolation algorithm. The proposed method demonstrated higher performance in terms of H.R. when compared to the conventional interpolation algorithms mentioned.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_5-Nearest_Neighbor_Value_Interpolation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Separability Detection Cooperative Particle Swarm Optimizer based on Covariance Matrix Adaptation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030404</link>
        <id>10.14569/IJACSA.2012.030404</id>
        <doi>10.14569/IJACSA.2012.030404</doi>
        <lastModDate>2012-07-01T10:38:07.8270000+00:00</lastModDate>
        
        <creator>Sheng Fuu Lin</creator>
        
        <creator>Yi-Chang Cheng</creator>
        
        <creator>Jyun-Wei Chang</creator>
        
        <creator>Pei-Chia Hung</creator>
        
        <subject>cooperative behavior; particle swarm optimization; covariance matrix adaptation; separability.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>The particle swarm optimizer (PSO) is a population-based optimization technique that can be applied to many applications. The cooperative particle swarm optimizer (CPSO) applies cooperative behavior to improve the PSO at finding the global optimum in a high-dimensional space. This is achieved by employing multiple swarms to partition the search space. However, independent changes made by different swarms on correlated variables deteriorate the performance of the algorithm. This paper proposes a separability detection approach based on covariance matrix adaptation to find non-separable variables, so that they can be placed into the same swarm beforehand to address the difficulty that the original CPSO encounters.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_4-Separability_Detection_Cooperative_Particle_Swarm_Optimizer_based_on_Covariance_Matrix_Adaptation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Based Image Retrieval Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030402</link>
        <id>10.14569/IJACSA.2012.030402</id>
        <doi>10.14569/IJACSA.2012.030402</doi>
        <lastModDate>2012-07-01T10:38:01.2700000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Cahya Rahmad</creator>
        
        <subject>Image retrieval, DWT, Wavelet, Local feature, Color, Texture.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>A novel method for retrieving images based on color and texture extraction is proposed to improve accuracy. In this research, we develop an image retrieval method based on the wavelet transform to extract local features of an image, where a local feature consists of a color feature and a texture feature. Given an input image, we transform it with the wavelet transform into four sub-band frequency images: a low-frequency image most similar to the source, called the approximation (LL); a high-frequency image called the horizontal detail (LH); a high-frequency image called the vertical detail (HL); and an image containing both horizontal and vertical detail (HH). To enhance texture and strong edges, we combine the vertical and horizontal details into another matrix. The next step is to estimate important points, called significant points, by thresholding the high values. After the significant points have been extracted from the image, their coordinates are used to locate the most important information in the image and to define small regions. Based on these significant point coordinates, we extract the image texture and color locally. The experimental results on a standard dataset are encouraging and outperform the other existing methods, with an improvement of around 11%.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_2-Wavelet_Based_Image_Retrieval_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Application Programming Interface and a Fortran-like Modeling Language for Evaluating Functions and Specifying Optimization Problems at Runtime</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030401</link>
        <id>10.14569/IJACSA.2012.030401</id>
        <doi>10.14569/IJACSA.2012.030401</doi>
        <lastModDate>2012-07-01T10:37:57.9200000+00:00</lastModDate>
        
        <creator>Fuchun Huang</creator>
        
        <subject>programming language; Fortran computing language; Fortran subroutine; Application programming interface; Runtime function evaluation; Mathematical programming; Optimization problem; Optimization modeling language.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(4), 2012</description>
        <description>A new application programming interface for evaluating functions and specifying optimization problems at runtime has been developed. The new interface, named FEFAR, uses a simple language named LEFAR. Compared with other modeling languages such as AMPL or OSiL, LEFAR is Fortran-like and hence easy to learn and use, in particular for Fortran programmers. FEFAR itself is a Fortran subroutine and hence easy to link to users’ main programs written in Fortran. With FEFAR, a developer of optimization solvers can provide pre-compiled, self-executable, and directly usable software products. FEFAR and LEFAR are already used in some optimization solvers and should be a useful addition to the toolbox of programmers who develop solvers for optimization problems of any type, including constrained/unconstrained, linear/nonlinear, and smooth/nonsmooth optimization.</description>
        <description>http://thesai.org/Downloads/Volume3No4/Paper_1-A_New_Application_Programming_Interface_And_A_Fortran-like_Modeling_Language_For_Evaluating_Functions_and_Specifying_Optimization_Problems_At_Runtime.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Schema for Generating Update Semantics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030324</link>
        <id>10.14569/IJACSA.2012.030324</id>
        <doi>10.14569/IJACSA.2012.030324</doi>
        <lastModDate>2012-07-01T10:37:51.6800000+00:00</lastModDate>
        
        <creator>José Luis Carballido Carranza</creator>
        
        <creator>Claudia Zepeda</creator>
        
        <creator>Guillermo Flores</creator>
        
        <subject>Update semantics; Logic Programming semantics; update properties.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>In this paper, we present a general schema for defining new update semantics. This schema takes as input any basic logic programming semantics, such as the stable semantics, the p-stable semantics or the MMr semantics, and gives as output a new update semantics. The proposed schema is based on a concept called minimal generalized S models, where S is any of the logic programming semantics. Each update semantics is associated with an update operator. We also present some properties of these update operators.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper24-A_Schema_for_Generating_Update_Semantics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study on Temporal Mobile Access Pattern Mining Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030323</link>
        <id>10.14569/IJACSA.2012.030323</id>
        <doi>10.14569/IJACSA.2012.030323</doi>
        <lastModDate>2012-07-01T10:37:45.8370000+00:00</lastModDate>
        
        <creator>Hanan Fahmy</creator>
        
        <creator>Maha A.Hana</creator>
        
        <creator>Yahia K. Helmy</creator>
        
        <subject>mobile mining; temporal data mining; mobile services; access pattern.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Mobile users' behavior patterns are one of the most critical issues that need to be explored in mobile agent systems. Recently, algorithms for discovering frequent mobile users' behavior patterns have been studied extensively. Existing mining methods propose frequent mobile users' behavior patterns statistically, based on requested services and location information. Other studies have therefore considered that mobile users' dynamic behavior patterns are usually associated with temporal access patterns. In this paper, temporal mobile access pattern mining methods are studied and compared in terms of complexity and accuracy. The advantages and disadvantages of these methods are summarized as well.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper23-A_Comparative_Study_on_Temporal_Mobile_Access_Pattern_Mining_Methods.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Grunwald-Letnikov Fractional Differential Mask for Image Texture Enhancement</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030322</link>
        <id>10.14569/IJACSA.2012.030322</id>
        <doi>10.14569/IJACSA.2012.030322</doi>
        <lastModDate>2012-07-01T10:37:39.5800000+00:00</lastModDate>
        
        <creator>Vishwadeep Garg</creator>
        
        <creator>Kulbir Singh</creator>
        
        <subject> texture enhancement; fractional differential; information entropy; average gradient.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Texture plays an important role in the identification of objects or regions of interest in an image. In order to enhance this textural information and overcome the limitations of the classical derivative operators, a two-dimensional fractional differential operator is discussed, which is an improved version of the Grunwald-Letnikov (G-L) based fractional differential operator. A two-dimensional isotropic gradient operator mask based on the G-L fractional differential is constructed. This nonlinear filter mask is applied to various texture-enriched digital images, and the enhancement of image features is controlled by varying the intensity factor. To analyze the enhancement quantitatively, information entropy and average gradient are used as parameters. The results show that with the improved Grunwald-Letnikov fractional differential operator, the information entropy of the image is improved by 0.5.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper22-An_Improved_Grunwald-Letnikov_Fractional_Differential_Mask_for_Image_Texture_Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Anomaly Misuse Intrusion Detection Framework for SQL Injection Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030321</link>
        <id>10.14569/IJACSA.2012.030321</id>
        <doi>10.14569/IJACSA.2012.030321</doi>
        <lastModDate>2012-07-01T10:37:33.3870000+00:00</lastModDate>
        
        <creator>Shaimaa Ezzat Salama</creator>
        
        <creator>Mohamed I. Marie</creator>
        
        <creator>Laila M. El-Fangary</creator>
        
        <creator>Yehia K. Helmy</creator>
        
        <subject>SQL injection; association rule; anomaly detection; intrusion detection.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Databases behind e-commerce applications are vulnerable to SQL injection attack, which is considered one of the most dangerous web attacks. In this paper we propose a framework based on misuse and anomaly detection techniques to detect SQL injection attacks. The main idea of this framework is to create a profile of legitimate database behavior, extracted by applying association rules to an XML file containing the queries submitted from the application to the database. As a second step in the detection process, the structure of the query under observation is compared against the legitimate queries stored in the XML file, thus minimizing false positive alarms.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper21-Web_Anomaly_Misuse_Intrusion_Detection_Framework_For_SQL_Injection_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming Conceptual Model into Logical Model for Temporal Data Warehouse Security: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030320</link>
        <id>10.14569/IJACSA.2012.030320</id>
        <doi>10.14569/IJACSA.2012.030320</doi>
        <lastModDate>2012-07-01T10:37:27.1800000+00:00</lastModDate>
        
        <creator>Marwa S Farhan</creator>
        
        <creator>Mohamed E. Marie</creator>
        
        <creator>Laila M. El-Fangary</creator>
        
        <creator>Yehia K. Helmy</creator>
        
        <subject>ETL; temporal data warehouse; Data warehouse security; MDA; QVT; PIM; PSM.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Extraction–transformation–loading (ETL) processes are responsible for the extraction of data from several sources, and for their cleansing, customization and insertion into a data warehouse. Data warehouses often store historical information extracted from multiple, heterogeneous, autonomous and distributed data sources; the survival of organizations therefore depends on the correct management, security and confidentiality of this information. In this paper, we use the Model Driven Architecture (MDA) approach to represent logical model requirements for secure Temporal Data Warehouses (TDW). We use the Platform-Independent Model (PIM), which does not include information about specific platforms and technologies. Nowadays, the most crucial issue in MDA is the transformation between a PIM and Platform-Specific Models (PSM); thus, the OMG defines the Query/View/Transformation (QVT) language, an approach for expressing these MDA transformations. This paper proposes a set of rules to transform a PIM model for a secure temporal data warehouse (TDW) into a PSM model, and we apply the QVT language to the development of a secure data warehouse by means of a case study.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper20-Transforming_Conceptual_Model_into_Logical_Model_for_Temporal_Data_Warehouse_Security_A_Case_Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RC4 stream cipher and possible attacks on WEP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030319</link>
        <id>10.14569/IJACSA.2012.030319</id>
        <doi>10.14569/IJACSA.2012.030319</doi>
        <lastModDate>2012-07-01T10:37:20.9630000+00:00</lastModDate>
        
        <creator>Lazar Stošić</creator>
        
        <creator>Milena Bogdanovic</creator>
        
        <subject>RC4 stream cipher; KSA; WEP; security of WEP; WEP attack.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>In this paper we analyze and present some weaknesses of and possible attacks on the RC4 stream cipher that have been published in many journals. We review advantages and disadvantages reported by several authors, as well as similarities and differences that can be observed in the published results. We also analyze the Key Scheduling Algorithm (KSA), which derives the initial state from a variable-size key, and the strengths and weaknesses of the RC4 stream cipher. Using examples from other papers, we show that RC4 is completely insecure in a common mode of operation used in the widely deployed Wired Equivalent Privacy protocol (WEP, which is part of the 802.11 standard).</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper19-RC4_Stream_Cipher_And_Possible_Attacks_On_WEP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of Data Security Measures in a Network Environment Towards Developing Cooperate Data Security Guidelines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030318</link>
        <id>10.14569/IJACSA.2012.030318</id>
        <doi>10.14569/IJACSA.2012.030318</doi>
        <lastModDate>2012-07-01T10:37:14.6570000+00:00</lastModDate>
        
        <creator>Ayub Hussein Shirandula</creator>
        
        <subject>Data security; security measures; guidelines; computer users; Mumias Sugar Company.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Data security in a networked environment is a topic that has become significant in organizations. As companies and organizations rely more on technology to run their businesses, connecting systems in different departments to each other for efficiency, data security is a key concern for administrators. This research assessed the data security measures put in place at Mumias Sugar Company and the effort it was making to protect its data. The researcher also highlighted major security issues that were significantly impacting the operations of Mumias Sugar Company. The researcher used the case study method, in which both qualitative and quantitative data were collected by means of questionnaires, interviews and observation. From the findings, the researcher developed data security guidelines for Mumias Sugar Company. The information gained from an extensive literature review was tested and observed during the case study. The research revealed that data security lapses in the company were a result of system administrators' failure to update and train the company's computer users on how to implement the different data security measures that were in place. The final outcome of the research was a set of data security guidelines practical enough to be used at Mumias Sugar Company.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper18-Evaluation_Of_Data_Security_Measures_In_A_Network_Environment_Towards_Developing_Cooperate_Data_Security_Guidelines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Objective Intelligent Manufacturing System for Multi Machine Scheduling</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030317</link>
        <id>10.14569/IJACSA.2012.030317</id>
        <doi>10.14569/IJACSA.2012.030317</doi>
        <lastModDate>2012-07-01T10:37:08.4270000+00:00</lastModDate>
        
        <creator>Sunita Bansal</creator>
        
        <creator>Dr. Manuj Darbari</creator>
        
        <subject>Decision making; pareto; intelligent manufacturing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>This paper proposes a framework for Intelligent Manufacturing systems in which machine scheduling is achieved by MCDM and DRSA. The relationship between the perception/knowledge base and profit maximization is extended, and further developed for the production function.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper17-Multi-Objective_Intelligent_Manufacturing_System_For_Multi_Machine_Scheduling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OCC: Ordered congestion control with cross layer support in Manet routing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030316</link>
        <id>10.14569/IJACSA.2012.030316</id>
        <doi>10.14569/IJACSA.2012.030316</doi>
        <lastModDate>2012-07-01T10:37:02.1900000+00:00</lastModDate>
        
        <creator>T Suryaprakash Reddy</creator>
        
        <creator>Dr.P.Chenna Reddy</creator>
        
        <subject> manet; routing protocol; congestion control; zone; occ; cross layer.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Many existing congestion control procedures cannot differentiate between two major causes of packet loss: loss due to link failure and loss due to congestion. Consequently, these solutions waste resources because they focus only on packet drops caused by link failure, giving them unnecessary emphasis. A further drawback of most existing procedures is the energy and resources consumed in making the source node aware of congestion occurring along the routing path. Concentrating mainly on regulating the outgoing load at the source node is another limitation of present procedures. It is already known that packet loss in network routing largely occurs as a result of link failure and congestion. In this article a new cross-layer path restoration procedure is put forward, together with two algorithms: a path discovery algorithm and a congestion handling algorithm. The cross-layer approach comprises three layers: the network, MAC and transport layers. In the introduced approach, the MAC and network layers play active roles in identifying and regulating congestion, while the network and transport layers are tasked with enduring congestion, i.e., congestion endurance. The experimental results illustrate improved congestion management and endurance with this approach.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper16-OCC_Ordered_Congestion_Control_With_Cross_Layer_Support_In_Manet_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OFW-ITS-LSSVM: Weighted Classification by LS-SVM for Diabetes diagnosis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030315</link>
        <id>10.14569/IJACSA.2012.030315</id>
        <doi>10.14569/IJACSA.2012.030315</doi>
        <lastModDate>2012-07-01T10:36:55.9800000+00:00</lastModDate>
        
        <creator>Fawzi Elias Bekri</creator>
        
        <creator>Dr. A. Govardhan</creator>
        
        <subject>machine learning; SVM; Feature reduction; feature optimization; tabu search.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>In line with today's fast-developing technology, every field is benefiting from machines beyond human involvement, and much advancement is now possible. This technology is also gaining importance in bioinformatics, especially for data analysis. Diabetes is one of the most prevalent deadly diseases of the present day, so in this paper we introduce LS-SVM classification to identify which blood datasets indicate a risk of diabetes. Further, given a patient's details, we can predict whether the patient has a chance of developing diabetes and, if so, take measures to cure or prevent it. In this method, an optimal Tabu search model is suggested to reduce the chances of developing the disease in the future.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper15-OFW-ITS-LSSVM_Weighted_Classification_By_LS-SVM_For_Diabetes_Diagnosis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Digital Ecosystem-based Framework for Math Search Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030314</link>
        <id>10.14569/IJACSA.2012.030314</id>
        <doi>10.14569/IJACSA.2012.030314</doi>
        <lastModDate>2012-07-01T10:36:49.7530000+00:00</lastModDate>
        
        <creator>Mohammed Q Shatnawi</creator>
        
        <creator>Qusai Q. Abuein</creator>
        
        <subject>component; digital ecosystem; math search; information retrieval; text-based search engines; structured information; indexing approach; representation technique.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Text-based search engines fall short in retrieving structured information. When searching for x(y+z) using those search engines, for example Google, they retrieve documents that contain xyz, x+y=z, (x+y+z)=xyz or any other document that contains x, y, and/or z, but not x(y+z) as a standalone math expression. The reason behind this shortcoming is that text-based search engines ignore the structure of mathematical expressions. Several issues are associated with designing and implementing math-based search systems. Such systems must be able to differentiate between a user query that contains a mathematical expression and any other query that contains only a text term. A reliable indexing approach, along with a flexible and efficient representation technique, is highly required. Ultimately, such search systems must be able to process mathematical expressions that are well-structured and have properties that make them different from other forms of text. In this context we take advantage of the concept of digital ecosystems to refine the text search process so that it becomes applicable to searching for a mathematical expression. In this research, a framework that contains the basic building blocks of a math-based search system is designed.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper14-A%20Digital%20Ecosystem-based%20Framework%20for%20Math%20Search%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Use of Information and Communication Technologies (ICT) in Front Office Operations of Chain Hotels in Ghana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030313</link>
        <id>10.14569/IJACSA.2012.030313</id>
        <doi>10.14569/IJACSA.2012.030313</doi>
        <lastModDate>2012-07-01T10:36:43.5030000+00:00</lastModDate>
        
        <creator>Albert Kwansah Ansah</creator>
        
        <creator>Victoria S. Blankson</creator>
        
        <creator>Millicent Kontoh</creator>
        
        <subject>Front Office Operation; ICT; Chain Hotels; Electronic Point of Sale; Reservation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>The proliferation of Information and Communication Technologies (ICT), coupled with sophisticated network protocols, has unveiled new avenues for enterprises and organizations, and the hospitality industry cannot be left out. Technology-based systems stand in a pivotal position to offer better service to the populace. Hospitality businesses such as hotels can take advantage of the pervasiveness of ICT vis-à-vis technology-based systems to advance some of their operations. This paper seeks to assess the use of Information and Communication Technologies (ICT) in the front office operations of chain hotels in Ghana. The paper determines the extent of the use of information technology in the front office operations of chain hotels in Ghana, and assesses its effect, that is, whether the use of ICT has any effect on chain hotels' front office operations. The paper further makes recommendations on the use of information and communications technology in front office operations to chain hotel operators, the Ghana Tourist Authority (GTA) and policy makers. Three chain hotels in Ghana were assessed.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper13-The_Use_of_Information_and_Communication_Technologies(ICT)in_Front_Office_Operations_of_Chain_Hotels_in_Ghana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simple and Efficient Contract Signing Protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030312</link>
        <id>10.14569/IJACSA.2012.030312</id>
        <doi>10.14569/IJACSA.2012.030312</doi>
        <lastModDate>2012-07-01T10:36:36.4970000+00:00</lastModDate>
        
        <creator>Abdullah M Alaraj</creator>
        
        <subject>contract signing; fair exchange protocol; digital signature; protocols; security.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>In this paper, a new contract signing protocol is proposed based on the RSA signature scheme. The protocol allows two parties to sign the same contract and then exchange their digital signatures. The protocol ensures fairness in that it offers the parties greater security: either both parties receive each other's signatures or neither does. The protocol is based on an offline Trusted Third Party (TTP) that is brought into play only if one party fails to sign the contract; otherwise, the TTP remains inactive. The protocol consists of only three messages exchanged between the two parties.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper12-Simple_And_Efficient_Contract_Signing_Protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mobile Learning Environment System (MLES): The Case of Android-based Learning Application on Undergraduates’ Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030311</link>
        <id>10.14569/IJACSA.2012.030311</id>
        <doi>10.14569/IJACSA.2012.030311</doi>
        <lastModDate>2012-07-01T10:36:30.3070000+00:00</lastModDate>
        
        <creator>Hafizul Fahri Hanafi</creator>
        
        <creator>Khairulanuar Samsudin</creator>
        
        <subject>mobile learning; android learning; teaching and learning.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Of late, mobile technology has introduced a novel environment that can be capitalized on to further enrich the teaching and learning process in classrooms. Taking cognizance of this promising setting, a study was undertaken to investigate the impact of such an environment, enabled by the Android platform, on the learning process among undergraduates of Sultan Idris Education University, Malaysia; in particular, this paper discusses critical aspects of the design and implementation of the Android learning system. Data were collected through a survey involving 56 respondents and analyzed using SPSS 12.0. Findings showed that the respondents were very receptive to the interactivity, accessibility, and convenience of the system, but were quite frustrated with occasional interruptions due to Internet connectivity problems. Overall, the mobile learning system can be utilized as an inexpensive but potent learning tool that complements undergraduates’ learning process.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper11-Mobile_Learning_Environment_System_(MLES)_A_Case_Study_on_Teaching_And_Learning_Using_Android_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Message Segmentation to Enhance the Security of LSB Image Steganography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030310</link>
        <id>10.14569/IJACSA.2012.030310</id>
        <doi>10.14569/IJACSA.2012.030310</doi>
        <lastModDate>2012-07-01T10:36:24.0970000+00:00</lastModDate>
        
        <creator>Dr. Mohammed Abbas Fadhil Al-Husainy</creator>
        
        <subject>Security; Distortion; Embedding; Substitution.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>The classic Least Significant Bit (LSB) steganography technique is the most widely used technique for hiding secret information in the least significant bits of the pixels in a stego-image. This paper proposes a technique that splits the secret message into a set of segments of the same length (number of characters) and finds the best LSBs of pixels in the stego-image that match each segment. The main goal of this technique is to minimize the number of LSBs that are changed when substituting them with the bits of the characters in the secret message. This decreases the distortion (noise) introduced into the pixels of the stego-image and, as a result, increases the immunity of the stego-image against visual attack. The experiments show that the proposed technique gives a good enhancement over the classic LSB technique.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper10-Message_Segmentation_To_Enhance_The_Security_Of_LSB_Image_Steganography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Maximum-Bandwidth Node-Disjoint Paths</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030309</link>
        <id>10.14569/IJACSA.2012.030309</id>
        <doi>10.14569/IJACSA.2012.030309</doi>
        <lastModDate>2012-07-01T10:36:17.9030000+00:00</lastModDate>
        
        <creator>Mostafa H Dahshan</creator>
        
        <subject>Maximum Bandwidth; Disjoint Paths; Widest Pair; Linear Programming; ILP; NP-Complete; Dijkstra Algorithm; Multiple Constrained Path; MCP.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This is an NP-complete problem that can be solved optimally in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of the Dijkstra algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint paths. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our proposed method, running in polynomial time, produces results almost identical to those of ILP at a significantly lower execution time.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper9-Maximum_Bandwidth_Node_Disjoint_Paths.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of knowledge Base Expert System for Natural treatment of Diabetes disease</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030308</link>
        <id>10.14569/IJACSA.2012.030308</id>
        <doi>10.14569/IJACSA.2012.030308</doi>
        <lastModDate>2012-07-01T10:36:11.6270000+00:00</lastModDate>
        
        <creator>Sanjeev Kumar Jha</creator>
        
        <creator>D.K.Singh</creator>
        
        <subject>Expert System; ESTA; Natural treatment; Diabetes.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>This paper describes the development of an expert system for the treatment of diabetes by natural methods, a new information technology derived from Artificial Intelligence research, using the ESTA (Expert System shell for Text Animation) system. The proposed expert system contains knowledge about various natural treatment methods (massage, herbal/proper nutrition, acupuncture, gems) for diabetes in human beings. The system is developed in ESTA, which is a Visual Prolog 7.3 application. The knowledge for the system is acquired from domain experts, texts and other related sources.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper8-Development_Of_Knowledge_Base_Expert_System_For_Natural_Treatment_Of_Diabetes_Disease.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Mobile Phone Based e-Health Monitoring Application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030307</link>
        <id>10.14569/IJACSA.2012.030307</id>
        <doi>10.14569/IJACSA.2012.030307</doi>
        <lastModDate>2012-07-01T10:36:05.3830000+00:00</lastModDate>
        
        <creator>Duck Hee Lee</creator>
        
        <creator>Ahmed Rabbi</creator>
        
        <creator>Jaesoon Choi</creator>
        
        <creator>Reza Fazel-Rezai</creator>
        
        <subject>Electrocardiogram(ECG); mobile phone; MIT-BIH database; health monitoring system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>The use of an Electrocardiogram (ECG) system is important in the primary diagnosis and survival analysis of heart diseases. Growing portable mobile technologies have provided possibilities for monitoring human vital signs while allowing the patient to move around freely. In this paper, a mobile health monitoring application program is described. The system consists of the following sub-systems: real-time signal receiver, ECG signal processing, signal display on the mobile phone, and data management, as well as five user interface screens. We verified the signal feature detection using the MIT-BIH arrhythmia database. The detection algorithms were implemented in the mobile phone application program. This paper describes the application system, which was developed and tested successfully.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper7-Development_of_a_Mobile_Phone_Based_e-Health_Monitoring_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Contextual Modelling of Collaboration System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030306</link>
        <id>10.14569/IJACSA.2012.030306</id>
        <doi>10.14569/IJACSA.2012.030306</doi>
        <lastModDate>2012-07-01T10:35:58.7600000+00:00</lastModDate>
        
        <creator>Wafaa DACHRY</creator>
        
        <creator>Brahim AGHEZZAF</creator>
        
        <creator>Bahloul BENSASSI</creator>
        
        <creator>Adil SAYOUTI</creator>
        
        <subject>collaborative information system; business process; collaborative process; BPMN; CMCS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Faced with new environmental constraints, firms decide to collaborate in collective entities and adopt new patterns of behavior, so collaboration between firms becomes an unavoidable approach. Our aim in this study is to propose a collaborative information system for the supply chain. The proposed platform ensures cooperation and information sharing between partners in real time. Several questions have to be asked: What kind of information may be shared between partners? What processes are implemented between actors? What functional services are supported by the platform? In order to answer these questions, we present in this article our modelling approach, called CMCS (Contextual Modelling of Collaborative System).</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper6-Contextual_Modeling_Of_Collaboration_System_Purchasing_Process_Application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building Trust In Cloud Using Public Key Infrastructure - A Step Towards Cloud Trust</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030305</link>
        <id>10.14569/IJACSA.2012.030305</id>
        <doi>10.14569/IJACSA.2012.030305</doi>
        <lastModDate>2012-07-01T10:35:52.6070000+00:00</lastModDate>
        
        <creator>Heena Kharche</creator>
        
        <creator>Mr. Deepak Singh Chouhan</creator>
        
        <subject>Cloud Computing; Public Key infrastructure; Cryptography.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Cloud services have grown very quickly over the past couple of years, giving consumers and companies the chance to put services, resources and infrastructures in the hands of a provider. There are big security concerns when using cloud services. With the emergence of cloud computing, Public Key Infrastructure (PKI) technology has undergone a renaissance, enabling computer-to-computer communications. This study describes the use of PKI in cloud computing and provides insights into some of the challenges which cloud-based PKI systems face.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper5-Building_Trust_In_Cloud_Using_Public_Key_Infrastructure.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Overview of Video Allocation Algorithms for Flash-based SSD Storage Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030304</link>
        <id>10.14569/IJACSA.2012.030304</id>
        <doi>10.14569/IJACSA.2012.030304</doi>
        <lastModDate>2012-07-01T10:35:45.9700000+00:00</lastModDate>
        
        <creator>Jaafer Al-Sabateen</creator>
        
        <creator>Saleh Ali Alomari</creator>
        
        <creator>Putra Sumari</creator>
        
        <subject>SSDs; Allocation Algorithms; Data Management Systems; Garbage Collection; Storage Media.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>Although Solid State Disk (SSD) storage media have offered revolutionary properties to the storage community, the lack of a comprehensive allocation strategy in SSD storage media leads to wasted space, random writing processes, time-consuming reading processes, and consumption of system resources. In order to overcome these challenges, an efficient allocation algorithm is a desirable option. In this paper, we carry out an intensive investigation of the SSD-based allocation algorithms that have been proposed by the research community, and make an explanatory comparison between these algorithms. We review these algorithms in order to build an advanced body of knowledge that would help in devising new allocation algorithms for this type of storage media.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper4-An_Overview_Of_Video_Allocation_Algorithms_For_Flash_Based_SSD_Storage_Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Keyword Driven Framework for Testing Web Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030302</link>
        <id>10.14569/IJACSA.2012.030302</id>
        <doi>10.14569/IJACSA.2012.030302</doi>
        <lastModDate>2012-07-01T10:35:37.0600000+00:00</lastModDate>
        
        <creator>Rashmi </creator>
        
        <creator>Neha Bajpai</creator>
        
        <subject>Keyword driven testing; test automation; test script; test results; HTML; test reports; recording.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>The goal of this paper is to explore the use of keyword driven testing for the automated testing of web applications. In keyword driven testing, the functionality of the system under test is documented in a table as well as in step-by-step instructions for each test. It involves the creation of modular, reusable test components, which are then assembled into test scripts. These components can be parameterized to make them reusable across various test scripts. The test scripts can also be divided into various reusable actions, which saves a great deal of the recording procedure. Existing tools for this kind of testing use HTML, XML, spreadsheets, etc., to maintain the test steps. The test results are analyzed to create test reports.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper2-A_Keyword_Driven_Framework_for_Testing_Web_Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Arabic Handwritten Postal Addresses Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030301</link>
        <id>10.14569/IJACSA.2012.030301</id>
        <doi>10.14569/IJACSA.2012.030301</doi>
        <lastModDate>2012-07-01T10:35:30.8670000+00:00</lastModDate>
        
        <creator>Moncef Charfi</creator>
        
        <creator>Monji Kherallah</creator>
        
        <creator>Abdelkarim El Baati</creator>
        
        <creator>Adel M. Alimi</creator>
        
        <subject>Postal automation; handwritten postal address; address segmentation; beta-elliptical representation; graph matching.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(3), 2012</description>
        <description>In this paper, we propose an automatic analysis system for Arabic handwritten postal address recognition, using the beta-elliptical model. Our system is divided into different steps: analysis, pre-processing and classification. The first operation is the filtering of the image; in the second, we remove the border print, stamps and graphics. After locating the address on the envelope, address segmentation allows the extraction of the postal code and city name separately. The pre-processing system and the modeling approach are based on two basic steps. The first step is the extraction of the temporal order in the image of the handwritten trajectory. The second step is based on the use of the beta-elliptical model for the representation of handwritten script. The recognition system is based on a graph-matching algorithm. Our modeling and recognition approaches were validated using the postal codes and city names extracted from Tunisian postal envelope data. The recognition rate obtained is about 98%.</description>
        <description>http://thesai.org/Downloads/Volume3No3/Paper1-A_New_Approach_for_Arabic_Handwritten_Postal_Addresses_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Novel Techniques for Fair Rate Control in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030230</link>
        <id>10.14569/IJACSA.2012.030230</id>
        <doi>10.14569/IJACSA.2012.030230</doi>
        <lastModDate>2012-07-01T10:35:24.6730000+00:00</lastModDate>
        
        <creator>Mohiuddin Ahmed</creator>
        
        <creator>K.M.Arifur Rahman</creator>
        
        <subject>Wireless mesh network; Network Throughput; Congestion Control; Fairness; Airtime Allocation; Neighbourhood Phenomenon; Gateway.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>IEEE 802.11 based wireless mesh networks can exhibit a severe fairness problem in distributing throughput among flows originating from different nodes. Congestion control, throughput, and fairness are important factors to be considered in any wireless network. Flows originating from nodes that communicate directly with the gateway get higher throughput, while flows originating two or more hops away get very little throughput. For this reason, distributed fair scheduling is an ideal candidate for fair utilization of the gateway’s resources (i.e., bandwidth, airtime) and thereby achieving fairness among contending flows in WMNs. There are numerous solutions addressing the aforementioned factors in wireless mesh networks. We identified some problems in a few existing solutions and integrated them to address those problems. We considered the neighborhood phenomenon, airtime allocation and elastic rate control to design a novel technique that achieves fair rate control in wireless mesh networks. Finally, we introduce distributed fair scheduling to obtain fairness in the mesh network.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2030%20-%20Novel%20Techniques%20for%20Fair%20Rate%20Control%20in%20Wireless%20Mesh%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>GSM-Based Wireless Database Access For Food And Drug Administration And Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030229</link>
        <id>10.14569/IJACSA.2012.030229</id>
        <doi>10.14569/IJACSA.2012.030229</doi>
        <lastModDate>2012-07-01T10:35:18.4670000+00:00</lastModDate>
        
        <creator>Hyacinth C Inyiama</creator>
        
        <creator>Mrs Lois Nwobodo</creator>
        
        <creator>Dr. Mrs. Christiana C. Okezie</creator>
        
        <creator>Mrs. Nkolika O. Nwazor</creator>
        
        <subject>Web; GSM; SMS; AT commands; Database; NAFDAC.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>GSM (Global system for mobile communication) based wireless database access for food and drug administration and control is a system that enables one to send a query to the database using the short messaging system (SMS) for information about a particular food or drug. It works in such a way that a user needs only send an SMS in order to obtain information about a particular drug produced by a pharmaceutical industry. The system then receives the SMS, interprets it and uses its contents to query the database to obtain the required information. The system then sends back the required information to the user if available; otherwise it informs the user of the unavailability of the required information. The application database is accessed by the mobile device through a serial port using AT commands. This system will be of great help to the National Agency for Food and Drug Administration and Control (NAFDAC), Nigeria in food and drug administration and control. It will help give every consumer access to information about any product he/she wants to purchase, and it will also help in curbing counterfeiting. NAFDAC can also use this system to send SMS alerts on banned products to users of mobile phones. Its application can be extended to match the needs of any envisaged user in closely related applications involving wireless access to a remote database.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2029%20-%20GSM-Based%20Wireless%20Database%20Access%20For%20Food%20And%20Drug%20Administration%20And%20Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering as a Data Mining Technique in Health Hazards of High levels of Fluoride in Potable Water</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030228</link>
        <id>10.14569/IJACSA.2012.030228</id>
        <doi>10.14569/IJACSA.2012.030228</doi>
        <lastModDate>2012-07-01T10:35:15.1500000+00:00</lastModDate>
        
        <creator>T Balasubramanian</creator>
        
        <creator>R.Umarani</creator>
        
        <subject>Data mining, Fluoride affected people, Clustering, K-means, Skeletal.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This article explores data mining techniques in health care. In particular, it discusses data mining and its application in areas where people are severely affected by using underground drinking water that contains high levels of fluoride, in Krishnagiri District, Tamil Nadu State, India. This paper identifies the risk factors associated with the high level of fluoride content in water using clustering algorithms, and finds meaningful hidden patterns that support meaningful decision making for this socio-economic real-world health hazard.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2028%20-%20Clustering%20as%20a%20Data%20Mining%20Technique%20in%20Health%20Hazards%20of%20High%20levels%20of%20Fluoride%20in%20Potable%20Water.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Knowledge-Based Educational Module for Object-Oriented Programming &amp; The Efficacy of Web Based e-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030227</link>
        <id>10.14569/IJACSA.2012.030227</id>
        <doi>10.14569/IJACSA.2012.030227</doi>
        <lastModDate>2012-07-01T10:35:08.9700000+00:00</lastModDate>
        
        <creator>Ahmad S Mashhour</creator>
        
        <subject>Computer-Aided Learning (CAL); Educational module; e-Learning; Knowledge-Base; Object-Oriented Programming (OOP); Student performance; Traditional Learning.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>The purpose of this research was to explore the effectiveness of using computer-aided learning methods in teaching compared to traditional instruction. Moreover, the research proposed a knowledge-based educational module for teaching object-oriented languages. The results give teachers evidence of the positive outcomes of using computer technology in teaching. A sample of more than one hundred undergraduate students from two universities located in the Middle East participated in this empirical study. The findings of this study indicated that students using the e-learning style perform more efficiently, in terms of understandability, than with the traditional face-to-face learning approach. The research also indicated that female students’ performance is comparable to that of male students.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2027%20-%20A%20Knowledge-Based%20Educational%20Module%20for%20Object-Oriented%20Programming%20&amp;%20The%20Efficacy%20of%20Web%20Based%20e-Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web 2.0 Technologies and Social Networking Security Fears in Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030226</link>
        <id>10.14569/IJACSA.2012.030226</id>
        <doi>10.14569/IJACSA.2012.030226</doi>
        <lastModDate>2012-07-01T10:35:02.7570000+00:00</lastModDate>
        
        <creator>Fernando Almeida</creator>
        
        <subject>Web 2.0; security; social networking; management risks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Web 2.0 systems have drawn the attention of corporations, many of which now seek to adopt Web 2.0 technologies and transfer their benefits to their organizations. However, with the number of different social networking platforms appearing, privacy and security continuously have to be taken into account and looked at from different perspectives. This paper presents the most common security risks faced by the major Web 2.0 applications. Additionally, it introduces the most relevant paths and best practices to avoid these identified security risks in a corporate environment.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2026%20-%20Web%202.0%20Technologies%20and%20Social%20Networking%20Security%20Fears%20in%20Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Data Mining Techniques to Build a Classification Model for Predicting Employees Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030225</link>
        <id>10.14569/IJACSA.2012.030225</id>
        <doi>10.14569/IJACSA.2012.030225</doi>
        <lastModDate>2012-07-01T10:34:56.5230000+00:00</lastModDate>
        
        <creator>Qasem A Al-Radaideh</creator>
        
        <creator>Eman Al Nagi</creator>
        
        <subject>Data Mining, Classification, Decision Tree, Job Performance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Human capital is of high concern to companies’ management, whose main interest is in hiring highly qualified personnel who are expected to perform highly as well. Recently, there has been growing interest in the data mining area, where the objective is the discovery of knowledge that is correct and of high benefit for users. In this paper, data mining techniques were utilized to build a classification model to predict the performance of employees. To build the classification model, the CRISP-DM data mining methodology was adopted. A decision tree was the main data mining tool used to build the classification model, from which several classification rules were generated. To validate the generated model, several experiments were conducted using real data collected from several companies. The model is intended to be used for predicting new applicants’ performance.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2025%20-%20Using%20Data%20Mining%20Techniques%20to%20Build%20a%20Classification%20Model%20for%20Predicting%20Employees%20Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Role of ICT in Education: Focus on University Undergraduates taking Mathematics as a Course</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030224</link>
        <id>10.14569/IJACSA.2012.030224</id>
        <doi>10.14569/IJACSA.2012.030224</doi>
        <lastModDate>2012-07-01T10:34:50.3030000+00:00</lastModDate>
        
        <creator>Oye N D</creator>
        
        <creator>Z.K. Shallsuku</creator>
        
        <creator>A.Iahad, N.</creator>
        
        <subject>University undergraduates; ICT; Education; ICT tools; ICT infrastructure.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This paper examines the role of ICT in education, with a focus on university undergraduates taking mathematics as a course. The study was conducted at the Federal University of Technology Yola, Adamawa State, Nigeria. 150 questionnaires were administered to first-year students offering MA112, which were completed and returned in the lecture hall. The study shows that the majority of respondents (32.7%) use technology once or more a day, and that the majority (35%) said the greatest barrier to using ICT is technical. The survey shows a significant correlation between the students and the use of ICT in their studies. However, the difficulties facing ICT usage are also highly significant, which shows that students have negative attitudes towards using ICT in their academic work. This is a foundational problem which cannot be over-emphasized, as most of the students never practiced using ICT in their primary and secondary schools. Recommendations were made that the government should develop ICT policies and guidelines to support all levels of education, from primary school to university, and that ICT tools should be made more accessible to both academic staff and students.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2024%20-%20The%20Role%20of%20ICT%20in%20Education%20Focus%20on%20University%20Undergraduates%20taking%20Mathematics%20as%20a%20Course.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of a web-based courseware authoring and presentation system</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030223</link>
        <id>10.14569/IJACSA.2012.030223</id>
        <doi>10.14569/IJACSA.2012.030223</doi>
        <lastModDate>2012-07-01T10:34:47.0330000+00:00</lastModDate>
        
        <creator>Hyacinth C Inyiama</creator>
        
        <creator>Miss Chioma N. Anozie</creator>
        
        <creator>Dr. Mrs. Christiana C. Okezie</creator>
        
        <creator>Mrs. Nkolika O. Nwazor</creator>
        
        <subject>Web; Courseware; Authoring; Presentation; E-Learning; Multimedia.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>A Web-based Courseware Authoring and Presentation System is user-friendly, interactive e-learning software that can be used by both computer experts and non-experts to prepare courseware in any subject of interest and to present lectures to the target audience in full media. The resultant courseware is accessed online by the target audience. The software features automated assessment and grading of students. A top-down methodology was adopted in the development of the authoring software. Implementation technologies include ASP.NET, Visual Basic 9.0 and Microsoft Access 2007. An author can use the system to create web-based courseware on any topic of his choice, while the software platform still remains intact for yet another author. A major contribution of this work is that it eases courseware preparation and delivery for lecturers and trainers.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2023%20-%20Design%20of%20a%20web-based%20courseware%20authoring%20and%20presentation%20system.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Efficient Dynamic Query Routing Tree Algorithm for Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030222</link>
        <id>10.14569/IJACSA.2012.030222</id>
        <doi>10.14569/IJACSA.2012.030222</doi>
        <lastModDate>2012-07-01T10:34:40.8300000+00:00</lastModDate>
        
        <creator>Si Gwan Kim</creator>
        
        <creator>Hyong Soon Park</creator>
        
        <subject>sensor networks; routing trees; query processing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>To efficiently answer queries generated by the sink in sensor networks, we propose a routing protocol called the energy-efficient dynamic routing tree (EDRT) algorithm. The idea of EDRT is to maximize in-network processing opportunities using the parent nodes and sibling nodes. In-network processing reduces the number of message transmissions by partially aggregating the results of an aggregate query in intermediate nodes, or merging the results into one message, which reduces communication cost. Our experimental results, based on simulations, show that our proposed method can reduce message transmissions more than the query specific routing tree (QSRT) and flooding-based routing tree (FRT).</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2022%20-%20Energy-Efficient%20Dynamic%20Query%20Routing%20Tree%20Algorithm%20for%20Wireless%20Sensor%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Forks impacts and motivations in free and open source projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030221</link>
        <id>10.14569/IJACSA.2012.030221</id>
        <doi>10.14569/IJACSA.2012.030221</doi>
        <lastModDate>2012-07-01T10:34:34.9100000+00:00</lastModDate>
        
        <creator>R Viseur</creator>
        
        <subject>open source; free software; community; co-creation; fork.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Forking is a mechanism of splitting in a community and is typically found in the free and open source software field. As a failure of cooperation in a context of open innovation, forking is a practical and informative subject of study. In-depth research concerning the fork phenomenon is uncommon. We therefore conducted a detailed study of 26 forks of popular free and open source projects. We created fact sheets highlighting the impacts of and motivations for forking. We particularly point to the fact that the desire for greater technical differentiation and problems of project governance are major sources of conflict.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2021%20-%20Forks%20impacts%20and%20motivations%20in%20free%20and%20open%20source%20projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Improved Squaring Circuit for Binary Numbers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030220</link>
        <id>10.14569/IJACSA.2012.030220</id>
        <doi>10.14569/IJACSA.2012.030220</doi>
        <lastModDate>2012-07-01T10:34:28.6900000+00:00</lastModDate>
        
        <creator>Kabiraj Sethi</creator>
        
        <creator>Rutuparna Panda</creator>
        
        <subject> Vedic mathematics; VLSI; binary multiplication; hardware design; VHDL.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>In this paper, a high-speed squaring circuit for binary numbers is proposed. A high-speed Vedic multiplier is used in the design of the proposed squaring circuit. The key to our success is that only one Vedic multiplier is used instead of the four multipliers reported in the literature. In addition, one squaring circuit is used twice. Our proposed squaring circuit shows better performance in terms of speed.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2020%20-%20An%20Improved%20Squaring%20Circuit%20for%20Binary%20Numbers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>wavelet de-noising technique applied to the PLL of a GPS receiver embedded in an observation satellite</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030219</link>
        <id>10.14569/IJACSA.2012.030219</id>
        <doi>10.14569/IJACSA.2012.030219</doi>
        <lastModDate>2012-07-01T10:34:21.2870000+00:00</lastModDate>
        
        <creator>Dib Djamel Eddine</creator>
        
        <creator>Djebbouri Mohamed</creator>
        
        <creator>Taleb Ahmed Abddelmalik</creator>
        
        <subject>GPS; Doppler frequency; PLL; wavelet packet de-noising; </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>In this paper, we study the Doppler effect on a GPS (Global Positioning System) receiver on board an observation satellite that receives information on the L1 carrier wave at 1575.42 MHz. We simulated GPS signal acquisition, which allowed us to observe the behavior of this type of receiver in an additive white Gaussian noise (AWGN) channel, and we define a wavelet de-noising method to reduce the Doppler effect in the tracking loop.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2019%20-%20wavelet%20de-noising%20technique%20applied%20to%20the%20PLL%20of%20a%20GPS%20receiver%20embedded%20in%20an%20observation%20satellite.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Segmentation of the Breast Region in Digital Mammograms and Detection of Masses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030218</link>
        <id>10.14569/IJACSA.2012.030218</id>
        <doi>10.14569/IJACSA.2012.030218</doi>
        <lastModDate>2012-07-01T10:34:15.0700000+00:00</lastModDate>
        
        <creator>Armen Sahakyan</creator>
        
        <creator>Hakop Sarukhanyan</creator>
        
        <subject>mammography; image segmentation; ROI; masses detection; breast region</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Mammography is the most effective procedure for early diagnosis of breast cancer. Finding an accurate and efficient breast region segmentation technique remains a challenging problem in digital mammography. In this paper, we explore an automated technique for mammogram segmentation. The proposed algorithm uses a morphological preprocessing algorithm to remove digitization noise and to separate the background region from the breast profile region for further edge detection and region segmentation.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2018%20-%20Segmentation%20of%20the%20Breast%20Region%20in%20Digital%20Mammograms%20and%20Detection%20of%20Masses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Post-Editing Error Correction Algorithm For Speech Recognition using Bing Spelling Suggestion</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030217</link>
        <id>10.14569/IJACSA.2012.030217</id>
        <doi>10.14569/IJACSA.2012.030217</doi>
        <lastModDate>2012-07-01T10:34:08.8600000+00:00</lastModDate>
        
        <creator>Youssef Bassil</creator>
        
        <creator>Mohammad Alwani</creator>
        
        <subject>Speech Recognition; Error Correction; Bing Spelling Suggestion.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>ASR, short for Automatic Speech Recognition, is the process of converting spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise, especially when used in a harsh environment where the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the text recognized by the ASR is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word tokens that are submitted as search queries to the Bing search engine. A returned spelling suggestion implies that a query is misspelled, and it is thus replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens have been validated. Experiments carried out on various speeches in different languages showed a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so that it can be parallelized to take advantage of multiprocessor computers.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2017%20-%20Post-Editing%20Error%20Correction%20Algorithm%20for%20Speech%20Recognition%20using%20Bing%20Spelling%20Suggestion.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Polylogarithmic Gap between Meshes with Reconfigurable Row/Column Buses and Meshes with Statically Partitioned Buses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030216</link>
        <id>10.14569/IJACSA.2012.030216</id>
        <doi>10.14569/IJACSA.2012.030216</doi>
        <lastModDate>2012-07-01T10:34:02.6430000+00:00</lastModDate>
        
        <creator>Susumu Matsumae</creator>
        
        <subject>mesh-connected parallel computer; dynamically reconfigurable bus; statically partitioned bus; simulation algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This paper studies the difference in computational power between mesh-connected parallel computers equipped with dynamically reconfigurable bus systems and those with static ones. The mesh with separable buses (MSB) is the mesh-connected parallel computer with dynamically reconfigurable row/column buses. The broadcast buses of the MSB can be dynamically sectioned into smaller bus segments by program control. We show that the MSB of size n&#215;n can work within Θ(log^2 n) steps even if its dynamic reconfiguration function is disabled. Here, we assume word-model broadcast buses, and use the relation between the word-model bus and the bit-model bus.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2016%20-%20Polylogarithmic%20Gap%20between%20Meshes%20with%20Reconfigurable%20RowColumn%20Buses%20and%20Meshes%20with%20Statically%20Partitioned%20Buses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Adaptive Virtual Machine Load Balancing Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030215</link>
        <id>10.14569/IJACSA.2012.030215</id>
        <doi>10.14569/IJACSA.2012.030215</doi>
        <lastModDate>2012-07-01T10:33:56.4330000+00:00</lastModDate>
        
        <creator>Meenakshi Sharma</creator>
        
        <creator>Pankaj Sharma</creator>
        
        <subject> Virtual machine; load balancing; cloudsim.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>The concept of Cloud computing has not only reshaped the field of distributed systems but also extended business potential. Load balancing is a core and challenging issue in Cloud computing. How to use Cloud computing resources efficiently and gain maximum profit with an efficient load balancing algorithm is one of the Cloud computing service providers’ ultimate goals. In this paper, we first analyze different virtual machine (VM) load balancing algorithms; we then propose a new VM load balancing algorithm and implement it in the virtual machine environment of cloud computing in order to achieve better response time and cost.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2015%20-%20Performance%20Evaluation%20of%20Adaptive%20Virtual%20Machine%20Load%20Balancing%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Discrete Event Barber Shop Simulation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030212</link>
        <id>10.14569/IJACSA.2012.030212</id>
        <doi>10.14569/IJACSA.2012.030212</doi>
        <lastModDate>2012-07-01T10:33:49.4970000+00:00</lastModDate>
        
        <creator>Masud Rana</creator>
        
        <creator>Md. Jobayer Hossain</creator>
        
        <creator>Ratan Kumar Ghosh</creator>
        
        <creator>Md. Mostafizur Rahman</creator>
        
        <subject>Normal distribution; Simulation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>A simulation-based project is designed which can be practically implemented in a workspace (in this case, a barber shop). The design algorithm provides the user with different time-varying features such as the number of arrivals, the number of people being served, and the number of people waiting in the queue, depending upon the input criteria and system capability. The project is helpful for a modern barber shop in which close inspection is needed. The owner or manager of the shop can easily obtain a thorough and precise idea of the performance of the shop, and at the same time can easily decide to modify the arrangement and capability of the shop.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2012%20-%20A%20Discrete%20Event%20Barber%20Shop%20Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Seek Time for Column Store Using MMH Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030211</link>
        <id>10.14569/IJACSA.2012.030211</id>
        <doi>10.14569/IJACSA.2012.030211</doi>
        <lastModDate>2012-07-01T10:33:46.1900000+00:00</lastModDate>
        
        <creator>Tejaswini Apte</creator>
        
        <creator>Dr. Maya Ingle</creator>
        
        <creator>Dr. A.K.Goyal</creator>
        
        <subject>Load; Selectivity; Seek; TPC-H; Algorithms; Hash.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Hash-based search has proven excellent on large data warehouses stored in column stores. Data distribution has a significant impact on hash-based search. To reduce the impact of data distribution, we propose the Memory Managed Hash (MMH) algorithm, which uses a shift-XOR group for queries and transactions in a column store. Our experiments show that MMH improves read and write throughput by 22% for the TPC-H distribution.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2011%20-%20Improving%20Seek%20Time%20for%20Column%20Store%20Using%20MMH%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Feature Extraction Technique for Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030210</link>
        <id>10.14569/IJACSA.2012.030210</id>
        <doi>10.14569/IJACSA.2012.030210</doi>
        <lastModDate>2012-07-01T10:33:40.0170000+00:00</lastModDate>
        
        <creator>Sangeeta N Kakarwal</creator>
        
        <creator>Ratnadeep R. Deshmukh</creator>
        
        <subject> Biometric; Chi square test; Entropy; FFNN; SOM.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This paper presents a novel technique for recognizing faces. The proposed method combines hybrid feature extraction techniques, namely the chi-square test and entropy. Feed-forward and self-organizing neural networks are used for classification. We evaluate the proposed method on the FACE94 and ORL databases and achieve better performance.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%2010%20-%20Hybrid%20Feature%20Extraction%20Technique%20for%20Face%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fairness Enhancement Scheme for Multimedia Applications in IEEE 802.11e Wireless LANs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030209</link>
        <id>10.14569/IJACSA.2012.030209</id>
        <doi>10.14569/IJACSA.2012.030209</doi>
        <lastModDate>2012-07-01T10:33:33.8370000+00:00</lastModDate>
        
        <creator>Young-Woo Nam</creator>
        
        <creator>Sunmyeng Kim</creator>
        
        <creator>Si-Gwan Kim</creator>
        
        <subject>fairness; multimedia traffic; EDCA; QoS; TXOP.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Multimedia traffic should be transmitted to a receiver within its delay bound. Traffic is discarded when it breaks its delay bound; the QoS (Quality of Service) of the traffic and the network performance are then lowered. The IEEE 802.11e standard defines a TXOP (Transmission Opportunity) parameter. The TXOP is the time interval in which a station can continuously transmit multiple data packets. All stations use the same TXOP value in the IEEE 802.11e standard. Therefore, when stations transmit traffic generated by different multimedia applications, a fairness problem occurs. In order to alleviate the fairness problem, we propose a dynamic TXOP control scheme based on the channel utilization of the network and the quantity of multimedia traffic in the queue of a station. The simulation results show that the proposed scheme improves the fairness and QoS of multimedia traffic.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%209%20-%20Fairness%20Enhancement%20Scheme%20for%20Multimedia%20Applications%20in%20IEEE%20802.11e%20Wireless%20LANs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Learning Methodologies and Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030208</link>
        <id>10.14569/IJACSA.2012.030208</id>
        <doi>10.14569/IJACSA.2012.030208</doi>
        <lastModDate>2012-07-01T10:33:28.0070000+00:00</lastModDate>
        
        <creator>Oye N D</creator>
        
        <creator>Mazleena Salleh</creator>
        
        <creator>N. A. Iahad</creator>
        
        <subject>E-learning; Synchronous; Asynchronous; Tools; Methodology; Knowledge management.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>E-learning is among the most important developments propelled by the internet transformation. It allows users to fruitfully gather knowledge and education through both synchronous and asynchronous methodologies, to effectively meet the need to rapidly acquire up-to-date know-how within productive environments. This review paper discusses e-learning methodologies and tools. The different categories of e-learning include informal and blended learning, and networked and work-based learning. The main focus of e-learning methodologies is on both asynchronous and synchronous methodology. The paper also looks into the three major e-learning tools: (i) curriculum tools, (ii) digital library tools, and (iii) knowledge representation tools. The paper concludes that e-learning is a revolutionary way to empower a workforce with the skills and knowledge it needs to turn change into an advantage. Consequently, many corporations are discovering that e-learning can be used as a tool for knowledge management. Finally, the paper suggests that synchronous tools should be integrated into asynchronous environments to allow for an “any-time” learning model. This environment would be primarily asynchronous, with background discussion, assignments, and assessment taking place and being managed through synchronous tools.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%208%20-%20E-LEARNING%20METHODOLOGIES%20AND%20TOOLS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conception of a management tool of Technology Enhanced Learning Environments</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030207</link>
        <id>10.14569/IJACSA.2012.030207</id>
        <doi>10.14569/IJACSA.2012.030207</doi>
        <lastModDate>2012-07-01T10:33:22.1970000+00:00</lastModDate>
        
        <creator>S&#233;rgio Andr&#233; Ferreira</creator>
        
        <creator>Ant&#243;nio Andrade</creator>
        
        <subject>Learning Content Management System; Management Tool; Technology Enhanced Learning Environments.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This paper describes the process of conceiving a software tool for TELE management. The proposed management tool combines information from two sources: i) the automatic reports produced by the Learning Content Management System (LCMS) Blackboard, and ii) the views of students and teachers on the use of the LCMS in the process of teaching and learning. The results show that the proposed architecture has the features of a management tool, since it demonstrates the potential to control, reset, and enhance the use of an LCMS in the process of teaching and learning and in teacher training.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%207%20-%20CONCEPTION%20OF%20A%20MANAGEMENT%20TOOL%20OF%20TECHNOLOGY%20ENHANCED%20LEARNING%20ENVIRONMENTS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Social Network Analysis to Analyze a Web-Based Community</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030206</link>
        <id>10.14569/IJACSA.2012.030206</id>
        <doi>10.14569/IJACSA.2012.030206</doi>
        <lastModDate>2012-07-01T10:33:15.9800000+00:00</lastModDate>
        
        <creator>Mohammed Al-Taie</creator>
        
        <creator>Seifedine Kadry</creator>
        
        <subject>Affiliation Networks; Book-Crossing; Centrality Measures; Ego-Network; M-Slice Analysis; Pajek; Social Network Analysis; Social Networks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>This paper deals with a well-known website (Book-Crossing) from two angles. The first angle focuses on the direct relations between users and books. Many things can be inferred from this part of the analysis, such as who is more interested in book reading than others, and why; which books are most popular; and which users are most active, and why. The task requires the use of certain social network analysis measures (e.g. degree centrality). What does it mean when two users like the same book? Is it the same when two other users have one thousand books in common? Who is more likely to be a friend of whom, and why? Are there specific people in the community who are more qualified to establish large circles of social relations? These questions (and others) were answered through the other part of the analysis, which probes the potential social relations between users in this community. Although these relationships do not exist explicitly, they can be inferred with the help of affiliation network analysis and techniques such as m-slice. The Book-Crossing dataset, which covered four weeks of users&#39; activities during 2004, has long been a focus of investigation for researchers interested in discovering patterns of users&#39; preferences in order to offer the most accurate recommendations possible. However, the implicit social relationships among users that emerge (when putting users into groups based on similarity in book preferences) did not gain the same amount of attention. This could be due to the importance recommender systems attain these days (as compared to other research fields) as a result of the rapid spread of e-commerce websites that seek to market their products online. A social network analysis package, namely Pajek, was used to explore different structural aspects of this community, such as brokerage roles, triadic constraints, and levels of cohesion. Some overall statistics were also obtained, such as network density, average geodesic distance, and average degree.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%206%20-%20Applying%20Social%20Network%20Analysis%20to%20Analyze%20a%20Web-Based%20Community.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An incremental learning algorithm considering texts&#39; reliability</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030205</link>
        <id>10.14569/IJACSA.2012.030205</id>
        <doi>10.14569/IJACSA.2012.030205</doi>
        <lastModDate>2012-07-01T10:33:09.7770000+00:00</lastModDate>
        
        <creator>Xinghua Fan</creator>
        
        <creator>Shaozhu Wang</creator>
        
        <subject>text classification; incremental learning; reliability; text distribution; evaluation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>The sequence in which texts are selected obviously influences the accuracy of classification; some sequences may make classification performance poor. To overcome this problem, an incremental learning algorithm considering texts’ reliability, which finds reliable texts and selects them preferentially, is proposed in this paper. To find reliable texts, it uses two evaluation methods, FEM and SEM, which are proposed according to the text distribution of unlabeled texts. The experimental results not only verify the effectiveness of the two evaluation methods but also indicate that the proposed incremental learning algorithm has the advantages of fast training speed, high classification accuracy, and steady performance.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%205%20-%20An%20incremental%20learning%20algorithm%20considering%20texts&#39;%20reliability.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach of Trust Relationship Measurement Based on Graph Theory</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030204</link>
        <id>10.14569/IJACSA.2012.030204</id>
        <doi>10.14569/IJACSA.2012.030204</doi>
        <lastModDate>2012-07-01T10:33:03.5830000+00:00</lastModDate>
        
        <creator>Ping He</creator>
        
        <subject>Trust relations; trust relationship graph; trusted-order; random trust relationship.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>A certainty trust relationship for network node behavior is presented based on graph theory, and a measurement method for trusted-degree is proposed. Because of the uncertainty of trust relationships, this paper puts forward the random trusted-order and first introduces the construction of a trust relations space (TRS) based on trusted-order. Building on this, the paper describes a new method and strategy that monitor and measure node behavior against expected behavior characteristics for trusted computation of the node. According to the manifestation of node behavior and historical information, the trusted-order of node behavior is adjusted and predicted. The paper finally establishes a dynamic trust evaluation model based on node behavior characteristics, and then discusses the trusted measurement method, which measures connections and hyperlinks for node behavior in the trust relations space.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%204%20-%20A%20New%20Approach%20of%20Trust%20Relationship%20Measurement%20Based%20on%20Graph%20Theory.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Global Convergence Algorithm for the Supply Chain Network Equilibrium Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030203</link>
        <id>10.14569/IJACSA.2012.030203</id>
        <doi>10.14569/IJACSA.2012.030203</doi>
        <lastModDate>2012-07-01T10:32:57.4100000+00:00</lastModDate>
        
        <creator>Lei Wang</creator>
        
        <subject>Supply chain management; network equilibrium model; generalized variational inequalities; algorithm; globally convergent.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>In this paper, we first present an auxiliary problem method for solving the generalized variational inequality problem on the supply chain network equilibrium model (GVIP); its global convergence is then established under mild conditions.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%203%20-%20A%20Global%20Convergence%20Algorithm%20for%20the%20Supply%20Chain%20Network%20Equilibrium%20Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Flexible Approach to Modelling Adaptive Course Sequencing based on Graphs implemented using Xlink</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030202</link>
        <id>10.14569/IJACSA.2012.030202</id>
        <doi>10.14569/IJACSA.2012.030202</doi>
        <lastModDate>2012-07-01T10:32:51.2300000+00:00</lastModDate>
        
        <creator>Rachid ELOUAHBI</creator>
        
        <creator>Driss BOUZIDI</creator>
        
        <creator>Noreddine ABGHOUR</creator>
        
        <creator>Mohammed Adil NASSIR</creator>
        
        <subject>e-learning; pedagogical graph; adaptability; profile; learning units; XML; XLINK; XSL.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>A major challenge in developing distance learning systems is the ability to adapt learning to individual users. This adaptation requires a flexible scheme for sequencing the material to teach diverse learners. Our contribution is to model the personalized learning paths to be followed by the learner to achieve his/her educational objective. Our modelling approach to sequencing is based on a pedagogical graph called SMARTGraph. This graph expresses the totality of the pedagogic constraints to which the learner is subject in order to achieve his/her pedagogic objective. SMARTGraph is a graph in which the nodes are the learning units and the arcs are the pedagogic constraints between learning units. We show how it is possible to organize the learning units and the learning paths to meet expectations within the framework of individual courses, according to the learner profile, or within the framework of group courses. To implement our approach we exploit the strength of XLink (XML Linking Language) to define the sequencing graph.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%202%20-%20A%20flexible%20approach%20to%20modeling%20adaptive%20course%20sequencing%20based%20on%20graphs%20implemented%20using%20XLink.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of E-government services adoption in Saudi Arabia: Obstacles and Challenges</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030201</link>
        <id>10.14569/IJACSA.2012.030201</id>
        <doi>10.14569/IJACSA.2012.030201</doi>
        <lastModDate>2012-07-01T10:32:45.0600000+00:00</lastModDate>
        
        <creator>Mohammed Alshehri</creator>
        
        <creator>Steve Drew</creator>
        
        <creator>Osama Alfarraj</creator>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(2), 2012</description>
        <description>Many governments around the world are developing and utilizing ICT to provide information and services to their citizens, commonly referred to as Government-to-Citizen (G2C) e-government services. In Saudi Arabia (KSA), e-government projects have been identified as one of the top government priority areas. However, the adoption of e-government faces many challenges and barriers, including technological, cultural, and organizational ones, which must be considered and treated carefully. This paper explores the key factors in user adoption of e-government services through empirical evidence gathered by a survey of 460 Saudi citizens, including IT department employees from different public sectors. Based on the analysis of the collected data, the researchers were able to identify some of the important barriers and challenges from these different perspectives. As a result, this study has generated a list of recommendations for the public sector and policy-makers to move towards successful adoption of e-government services in Saudi Arabia.</description>
        <description>http://thesai.org/Downloads/Volume3No2/Paper%201%20-%20A%20Comprehensive%20Analysis%20of%20E-government%20services%20adoption%20in%20Saudi%20Arabia%20Obstacle%20and%20Challenges.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Security Architecture for Virtualized Data Center Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030131</link>
        <id>10.14569/IJACSA.2012.030131</id>
        <doi>10.14569/IJACSA.2012.030131</doi>
        <lastModDate>2012-07-01T10:32:38.8930000+00:00</lastModDate>
        
        <creator>Udeze Chidiebele. C</creator>
        
        <creator>Okafor Kennedy .C</creator>
        
        <creator>H. C Inyiama</creator>
        
        <creator>Dr C. C. Okezie</creator>
        
        <subject> Infrastructure; Virtualization; VDCN; OFSDN; VVSS; VLAN; Virtual Server.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>This work presents a candidate scheme for an effective security policy that defines the requirements for protecting network resources from internal and external security threats, while ensuring data privacy and integrity in a virtualized data center network (VDCN). An integration of OpenFlow Software Defined Networking (OFSDN) with a VLAN Virtual Server Security (VVSS) architecture is presented to address distinct security issues in virtualized data centers. The OFSDN with VVSS is proposed to create stronger protection and maintain the compliance integrity of servers and applications in the DCN. This proposal, though still in the prototype phase, calls for community-driven responses.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2031-Effective%20Security%20Architecture%20for%20Virtualized%20Data%20Center%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transform Domain Fingerprint Identification Based on DTCWT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030130</link>
        <id>10.14569/IJACSA.2012.030130</id>
        <doi>10.14569/IJACSA.2012.030130</doi>
        <lastModDate>2012-07-01T10:32:35.6370000+00:00</lastModDate>
        
        <creator>Jossy P George</creator>
        
        <creator>Abhilash S. K</creator>
        
        <creator>Raja K. B.</creator>
        
        <subject>Fingerprint; DTCWT; Euclidean Distance; Preprocessing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Physiological biometric characteristics are better suited than behavioral characteristics for identifying a person. In this paper, we propose Transform Domain Fingerprint Identification Based on DTCWT. The original fingerprint is cropped and resized to a suitable dimension before applying the DTCWT. The DTCWT is applied to the fingerprint to generate coefficients that form the features. The performance analysis is discussed for different levels of DTCWT and for different sizes of the fingerprint database. It is observed that the recognition rate is better at level 7 than at the other levels of DTCWT.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2030-Transform%20Domain%20Fingerprint%20Identification%20Based%20on%20DTCWT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Re-tooling Code Structure Based Analysis with Model-Driven Program Slicing for Software Maintenance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030129</link>
        <id>10.14569/IJACSA.2012.030129</id>
        <doi>10.14569/IJACSA.2012.030129</doi>
        <lastModDate>2012-07-01T10:32:28.8830000+00:00</lastModDate>
        
        <creator>Oladipo Onaolapo Francisca</creator>
        
        <subject>software maintenance; static analysis; syntactic program behavior; program slicing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Static code analysis is a methodology for detecting errors in program code based on the programmer&#39;s review of the code in areas of the program text where errors are likely to be found; since the process considers all syntactic program paths, a model-based approach with slicing is needed. This paper presents a model of high-level abstraction of code structure analysis for a large component-based software system. The work leveraged the most important advantage of static code structure analysis in re-tooling software maintenance for a developing economy. A program slicing technique was defined and deployed to partition the source text into manageable fragments to aid the analysis, and statecharts were deployed as a visual formalism for viewing the dynamic slices. The resulting model was a high-tech static analysis process aimed at determining and confirming the expected behaviour of a software system using slices of the source text presented in the statecharts.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2029-Re-tooling%20Code%20Structure%20Based%20Analysis%20with%20Model-Driven%20Program%20Slicing%20for%20Software%20Maintenance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Relationships of Soft Systems Methodology (SSM), Business Process Modeling and e-Government</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030128</link>
        <id>10.14569/IJACSA.2012.030128</id>
        <doi>10.14569/IJACSA.2012.030128</doi>
        <lastModDate>2012-07-01T10:32:22.5800000+00:00</lastModDate>
        
        <creator>Dana Indra Sensuse</creator>
        
        <creator>Arief Ramadhan</creator>
        
        <subject>Soft Systems Methodology; Business Process Modeling; e-Government.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>e-Government has emerged in several countries. Because many aspects must be considered, and because e-Government contains some soft components, the Soft Systems Methodology (SSM) can be considered for use in the e-Government systems development process. On the other hand, business process modeling is essential in many fields nowadays, including e-Government. Some researchers have used SSM in e-Government, and several studies relating business process modeling to e-Government have been conducted. This paper tries to reveal the relationship between SSM and business process modeling. Moreover, it also tries to explain how business process modeling is integrated within SSM, and further links that integration to e-Government.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2028-The%20Relationships%20of%20Soft%20Systems%20Methodology%20(SSM)%20Business%20Process%20Modeling%20and%20e-Government.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Secret Key Agreement Over Multipath Channels Exploiting a Variable-Directional Antenna</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030127</link>
        <id>10.14569/IJACSA.2012.030127</id>
        <doi>10.14569/IJACSA.2012.030127</doi>
        <lastModDate>2012-07-01T10:32:16.2970000+00:00</lastModDate>
        
        <creator>Valery Korzhik</creator>
        
        <creator>Viktor Yakovlev</creator>
        
        <creator>Yuri Kovajkin</creator>
        
        <creator>Guillermo Morales-Luna</creator>
        
        <subject>wireless communication, wave propagation, cryptography, key distribution</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>We develop an approach to the key distribution protocol (KDP) recently proposed by T. Aono et al., in which the security of the KDP was only partly estimated in terms of the eavesdropper&#39;s key bit errors. Instead, we calculate the Shannon information leaked to a wiretapper, and we apply a privacy amplification procedure on the side of the legal users. A more general mathematical model based on the use of a Variable-Directional Antenna (VDA) under conditions of multipath wave propagation is proposed. The new method can be used effectively even in noiseless interception channels, thus widening its area of practical application. Statistical characteristics of the VDA are investigated by simulation, allowing the model parameters to be specified. We prove that the proposed KDP provides both security and reliability of the shared keys even for very short distances between legal users and eavesdroppers. Antenna diversity is proposed as a means to enhance the KDP security. To provide a better performance evaluation of the KDP, the use of error-correcting codes is investigated.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2027-Secret%20Key%20Agreement%20Over%20Multipath%20Channels%20Exploiting%20a%20Variable-Directional%20Antenna.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification of Critical Node for the Efficient Performance in MANET</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030126</link>
        <id>10.14569/IJACSA.2012.030126</id>
        <doi>10.14569/IJACSA.2012.030126</doi>
        <lastModDate>2012-07-01T10:32:10.0130000+00:00</lastModDate>
        
        <creator>Shivashankar </creator>
        
        <creator>B.Sivakumar</creator>
        
        <creator>G.Varaprasad</creator>
        
        <subject>Critical node, malicious, residual battery power, reliability, bandwidth, Mobile Ad hoc Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>This paper considers a network where nodes are connected randomly and can fail at random times. The critical-node test detects nodes whose failure, through malicious behavior, disconnects the network or significantly degrades its performance. A critical node is an element, position, or control entity whose disruption immediately degrades the ability of a force to command, control, or effectively conduct combat operations; in short, it is the most important node within the network. If a node is critical, more attention must be paid to it to avoid its failure or its removal from the network. Confirming the critical nodes of an ad hoc network is therefore a prerequisite for predicting network partition. This paper suggests methods that find the critical nodes of a network based on residual battery power, reliability, bandwidth, availability, and service traffic type. Packet delivery ratio, end-to-end delay, and throughput are considered as evaluation metrics.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2026-Identification%20of%20Critical%20Node%20for%20the%20Efficient%20Performance%20in%20Manet.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Scenario-Based Software Reliability Testing Profile for Autonomous Control System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030125</link>
        <id>10.14569/IJACSA.2012.030125</id>
        <doi>10.14569/IJACSA.2012.030125</doi>
        <lastModDate>2012-07-01T10:32:03.7370000+00:00</lastModDate>
        
        <creator>Jun Ai</creator>
        
        <creator>Jingwei Shang</creator>
        
        <creator>Peng Wang</creator>
        
        <subject>software reliability testing; scenario-based testing profile; autonomous control system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>An operational profile is often used in software reliability testing, but it is of limited use for software without obvious operations, such as autonomous control systems. After analyzing autonomous control systems and scenario technology, a scenario-based profile construction method for software reliability testing is presented. Two levels of scenario-based profile are introduced: the system level and the software level; the scenario-based profile can be obtained by mapping between them. With this method, the test data for software reliability testing can be generated.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2025-Scenario-Based%20Software%20Reliability%20Testing%20Profile%20for%20Autonomous%20Control%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Face Recognition with Multilevel BTC using Kekre’s LUV Color Space</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030124</link>
        <id>10.14569/IJACSA.2012.030124</id>
        <doi>10.14569/IJACSA.2012.030124</doi>
        <lastModDate>2012-07-01T10:31:57.4300000+00:00</lastModDate>
        
        <creator>H.B. Kekre</creator>
        
        <creator>Dr. Sudeep Thepade</creator>
        
        <creator>Sanchit Khandelwal</creator>
        
        <creator>Karan Dhamejani</creator>
        
        <creator>Adnan Azmi</creator>
        
        
        <subject>Face recognition, BTC, RGB, K’LUV, Multilevel BTC, FAR, GAR.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The theme of the work presented in this paper is Multilevel Block Truncation Coding based face recognition using the Kekre’s LUV (K’LUV) color space. In [1], Multilevel Block Truncation Coding was applied on the RGB color space up to four levels for face recognition. The experimental results showed that Block Truncation Coding Level 4 (BTC-level 4) was better than the other BTC levels in the RGB color space. Results displaying a similar pattern are realized when the K’LUV color space is used. It is further observed that the K’LUV color space gives improved results on all four levels.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2024-Improved%20Face%20Recognition%20with%20Multilevel%20BTC%20using%20Kekre%E2%80%98s%20LUV%20Color%20Space.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Cost-Effective Approach to the Design and Implementation of Microcontroller-based Universal Process Control Trainer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030122</link>
        <id>10.14569/IJACSA.2012.030122</id>
        <doi>10.14569/IJACSA.2012.030122</doi>
        <lastModDate>2012-07-01T10:31:53.7000000+00:00</lastModDate>
        
        <creator>Udeze Chidiebele C</creator>
        
        <creator>Uzedeh Godwin</creator>
        
        <creator>H. C Inyiama</creator>
        
        <creator>Dr C. C. Okezie</creator>
        
        <subject> process control, control algorithm, Algorithmic State machine (ASM) chart, State Transition Table (STT), Fully-expanded STT, Control software etc</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>This paper presents a novel approach to the design and implementation of a low-cost universal digital process control trainer. The main objective of the work was to equip undergraduates studying Electronic Engineering and other related courses in higher institutions with fundamental practical knowledge of digital process control. A microcontroller-based design and implementation approach was used, in which a single AT59C81 with a few flip-flops serves all eight processes covered by the trainer, thereby justifying its low cost and versatility.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2022-A%20Cost-Effective%20Approach%20to%20the%20Design%20and%20Implementation%20of%20Microcontroller-based%20Universal%20Process%20Control%20Trainer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey on Impact of Software Metrics on Software Quality</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030121</link>
        <id>10.14569/IJACSA.2012.030121</id>
        <doi>10.14569/IJACSA.2012.030121</doi>
        <lastModDate>2012-07-01T10:31:50.3130000+00:00</lastModDate>
        
        <creator>Mrinal Singh Rawat</creator>
        
        <creator>Arpita Mittal</creator>
        
        <creator>Sanjay Kumar Dubey</creator>
        
        <subject>Software metrics; Software quality; Software reliability; Lines of code; Function points; object oriented metrics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Software metrics provide a quantitative basis for planning and predicting software development processes, so the quality of software can be controlled and improved more easily. Quality in fact aids higher productivity, which has brought software metrics to the forefront. This research paper focuses on different views on software quality. Many metrics and models have been developed, promoted, and utilized, resulting in remarkable successes. This paper examines the realm of software engineering to see why software metrics are needed and reviews their contribution to software quality and reliability. Results can be improved further as additional experience is acquired with a variety of software metrics; these experiences can yield tremendous benefits and improvements in quality and reliability.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2021-Survey%20on%20Impact%20of%20Software%20Metrics%20on%20Software%20Quality.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Different Protocols for High Speed Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030119</link>
        <id>10.14569/IJACSA.2012.030119</id>
        <doi>10.14569/IJACSA.2012.030119</doi>
        <lastModDate>2012-07-01T10:31:43.6100000+00:00</lastModDate>
        
        <creator>Srinivasa Rao Angajala</creator>
        
        <subject> TCP protocols; high speed networks; TCP congestion.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>New challenges arise from the presence of various types of physical links, such as wireless, high-speed, and satellite links, in today’s ever-changing networks. It is clear that TCP throughput deteriorates in high-speed networks with a large bandwidth-delay product, and new congestion control algorithms have been proposed to address this deterioration. Traditional TCP protocols treat all packet loss as a sign of congestion; their inability to recognize non-congestion-related packet loss significantly affects communication efficiency. Proposed protocols such as TCP Adaptive Westwood, Scalable TCP, HS-TCP, BIC-TCP, FAST-TCP, and H-TCP all improve in some way on the traditional TCP protocols. This survey summarizes these protocols for high-speed networks.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2019-Different%20Protocols%20for%20High%20Speed%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Viable Modifications to Improve Handover Latency in MIPv6</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030118</link>
        <id>10.14569/IJACSA.2012.030118</id>
        <doi>10.14569/IJACSA.2012.030118</doi>
        <lastModDate>2012-07-01T10:31:37.2530000+00:00</lastModDate>
        
        <creator>Purnendu Shekhar Pandey</creator>
        
        <creator>Dr.Neelendra Badal</creator>
        
        <subject>Handover, CoA, Home Agent, Foreign Agent, MIPv4, MIPv6.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Various handover techniques and modifications for handover in MIPv6 have come to light during the past few years. Still, problems remain, such as quality of service, better resource utilization during handover, and handover latency. This paper focuses on such problems within various handover techniques and proposes some modifications to reduce handover latency, which also improves the quality of service related to handover in MIPv6. Experimental results presented in this paper show that handover latency in MIPv6 is reduced by applying these proposed modifications. The paper is organized as follows: Section I gives the introduction, Section II presents the basic operations of MIPv6 handover, Section III focuses on related work and background, Section IV proposes the modifications, and Section V concludes the paper while underlining future prospects in this domain.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2018-Viable%20Modifications%20to%20Improve%20Handover%20Latency%20in%20MIPv6.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fault Tolerant Platform for Application Mobility across devices</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030117</link>
        <id>10.14569/IJACSA.2012.030117</id>
        <doi>10.14569/IJACSA.2012.030117</doi>
        <lastModDate>2012-07-01T10:31:30.9370000+00:00</lastModDate>
        
        <creator>T N Anitha</creator>
        
        <creator>Jayanth. A</creator>
        
        <subject>Fault tolerance; Middle ware; Midstore Manager.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>In the mobile era, users have adopted smartphones, tablets, and other handheld devices, and advances in telecom technologies such as 3G accelerate the migration towards smartphones. However, battery power and the frequent change of handsets remain constraints: users bear the burden of manually synchronizing their contacts and applications to new phones, and they lose whatever they are doing when the mobile powers down. In this paper, we propose a solution to this problem: a new fault-tolerant platform that provides application mobility across devices.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2017-Fault%20Tolerant%20Platform%20for%20Application%20Mobility%20across%20devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Threshold Signature Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030116</link>
        <id>10.14569/IJACSA.2012.030116</id>
        <doi>10.14569/IJACSA.2012.030116</doi>
        <lastModDate>2012-07-01T10:31:24.6730000+00:00</lastModDate>
        
        <creator>Sattar J Aboud</creator>
        
        <creator>Mohammad AL-Fayoumi</creator>
        
        <subject>Shamir secret sharing; threshold signature; random oracle model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>In this paper, we introduce a new RSA-type threshold signature scheme. The proposed scheme is unforgeable and robust in the random oracle model. Signature generation and verification are entirely non-interactive. In addition, the length of each participant’s signature share is bounded by a constant multiple of the length of the RSA signature modulus. The signing process of the proposed scheme is also more efficient in terms of time complexity and interaction.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2016-Efficient%20Threshold%20Signature%20Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Feasible Rural Education System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030115</link>
        <id>10.14569/IJACSA.2012.030115</id>
        <doi>10.14569/IJACSA.2012.030115</doi>
        <lastModDate>2012-07-01T10:31:21.3270000+00:00</lastModDate>
        
        <creator>Lincy Meera Mathews</creator>
        
        <creator>Dr Bandaru Rama Krishna Rao</creator>
        
        <subject>FRES; Education; FOSS; Semantic Web; Ontology; Natural language processing; Cloud computing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The education system in rural and semi-rural areas of developing and underdeveloped countries faces many challenges. The limited accessibility of education is attributed mainly to the political, economic, and social issues of these countries. We propose a “Feasible Rural Education System (FRES)”, based on ontology and supported by the cloud, to enhance access to education in rural areas. The proposed system incorporates the FOSS approach.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2015-A%20Feasible%20Rural%20Education%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Enhanced Scheme for Reducing Vertical Handover Latency</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030114</link>
        <id>10.14569/IJACSA.2012.030114</id>
        <doi>10.14569/IJACSA.2012.030114</doi>
        <lastModDate>2012-07-01T10:31:15.0500000+00:00</lastModDate>
        
        <creator>Mohammad Faisal</creator>
        
        <creator>Muhammad Nawaz Khan</creator>
        
        <subject>Vertical hand over, Authentication, FMIPv6, HMIPv6, reactive, proactive, latency and packet loss.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Authentication in vertical handover is a demanding research problem. Numerous methods have been introduced, but all of them have deficiencies in terms of latency and packet loss. Standard handover schemes (MIPv4, MIPv6, FMIPv6, and HMIPv6) also exhibit these shortcomings when a quick handover is desirable in genuine circumstances such as MANETs and VANETs. This paper evaluates the literature, past and present, on addressing authentication concerns in vertical handover, and lays a basis for making latency and packet loss handling more effective in large shared environments that can grow to an extremely large scale. The effort focuses mostly on existing trends in vertical handover, particularly authentication, latency, and packet loss issues.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2014-An%20enhanced%20Scheme%20for%20Reducing%20Vertical%20handover%20latency.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Design Model for High Performance Hotspot Network Infrastructure (GRID WLAN).</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030113</link>
        <id>10.14569/IJACSA.2012.030113</id>
        <doi>10.14569/IJACSA.2012.030113</doi>
        <lastModDate>2012-07-01T10:31:08.7530000+00:00</lastModDate>
        
        <creator>Udeze Chidiebele C</creator>
        
        <creator>Okafor Kennedy .C</creator>
        
        <creator>H. C Inyiama</creator>
        
        <creator>Dr C. C. Okezie</creator>
        
        <subject>Hotspot solutions, IP-based cellular phones, Buffer size dependencies, GRID WLAN.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The emergence of wireless networking technologies for large enterprises, operators (service providers), and small-to-medium organizations has put hotspot solutions for metropolitan area networks (MAN), last-mile wireless connectivity, mobile broadband solutions, IP-based cellular phones (VoIP), and other event-based wireless solutions in very high demand. Wireless radio gateways (routers and access points), with their widespread deployment, have made Wi-Fi an integral part of today’s hotspot access technology in organizational models. Despite its role in affecting performance for mobility market segments, previous research has focused on media access control (MAC) protocol techniques, carrier sense multiple access with collision avoidance (CSMA/CA), fair scheduling, and other traffic improvement methodologies without detailed consideration of virtual switch partitioning for load balancing, reliable fragmentation capacities, buffer size dependencies, load effects, queuing disciplines, and hotspot access control frameworks. This paper proposes a conceptual high performance hotspot solution (GRID WLAN) and, through simulations with the OPNET modeler, presents efficient performance metrics for its deployment. Considering the GRID WLAN access points in context, the results show that with the design model, a careful selection of buffer sizes, fragmentation threshold, network management framework, and network load intensity will guarantee an efficient hotspot solution.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2013-A%20Conceptual%20Design%20Model%20for%20High%20Performance%20Hotspot%20Network%20Infrastructure(GRID%20WLAN).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cross Layer QoS Support Architecture with Integrated CAC and Scheduling Algorithms for WiMAX BWA Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030112</link>
        <id>10.14569/IJACSA.2012.030112</id>
        <doi>10.14569/IJACSA.2012.030112</doi>
        <lastModDate>2012-07-01T10:31:02.4300000+00:00</lastModDate>
        
        <creator>Prasun Chowdhury</creator>
        
        <creator>Iti Saha Misra</creator>
        
        <creator>Salil K Sanyal</creator>
        
        <subject>Cross layer QoS support architecture; SINR based CAC algorithm; Queue based Scheduling algorithm; Adaptive Modulation and Coding; WiMAX BWA networks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>In this paper, a new technique for cross layer design, based on the present Eb/N0 (bit energy per noise density) ratio of the connections and target values of the Quality of Service (QoS) information parameters from the MAC layer, is proposed to dynamically select the Modulation and Coding Scheme (MCS) at the PHY layer for WiMAX Broadband Wireless Access (BWA) networks. The QoS information parameters include the New Connection Blocking Probability (NCBP), Handoff Connection Dropping Probability (HCDP) and Connection Outage Probability (COP). In addition, a Signal to Interference plus Noise Ratio (SINR) based Call Admission Control (CAC) algorithm and a Queue based Scheduling algorithm are integrated for the cross layer design. An analytical model using the Continuous Time Markov Chain (CTMC) is developed for performance evaluation of the algorithms under various MCSs. The effect of Eb/N0 on the QoS information parameters is observed in order to determine its optimum range. Simulation results show that the integrated CAC and packet Scheduling model maximizes bandwidth utilization and fair allocation of the system resources for all types of MCS and guarantees the QoS to the connections.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2012-Cross%20Layer%20QoS%20Support%20Architecture%20with%20Integrated%20CAC%20and%20Scheduling%20Algorithms%20for%20WiMAX%20BWA%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Congestion Avoidance Approach in Jumbo Frame-enabled IP Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030111</link>
        <id>10.14569/IJACSA.2012.030111</id>
        <doi>10.14569/IJACSA.2012.030111</doi>
        <lastModDate>2012-07-01T10:30:54.8070000+00:00</lastModDate>
        
        <creator>Aos Anas Mulahuwaish</creator>
        
        <creator>Kamalrulnizam Abu Bakar</creator>
        
        <creator>Kayhan Zrar Ghafoor</creator>
        
        <subject>Jumbo Frame; Queue Congestion; AQM; RED.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The jumbo frame is an approach that allows higher utilization of larger packet sizes on a domain-wise basis, decreasing the number of packets processed by core routers without any adverse impact on link utilization or fairness. The major problem faced by jumbo frame networks is packet loss during queue congestion inside routers: the RED mechanism recommended for use with jumbo frames treats a jumbo frame encapsulation as one packet, dropping the whole jumbo frame together with the packets it encapsulates during congestion. RED drops whole jumbo frame encapsulations randomly from the head, middle, and tail of the router queue during periods of congestion, which affects the scalability and performance of the network by decreasing throughput and increasing queue delay. This work proposes the use of two AQM techniques with jumbo frames, a modified Random Early Detection (MRED) algorithm and a developed drop-front (DDF) technique, which are used with the jumbo frame network to reduce packet drops and increase throughput by decreasing overhead in the network. For the purpose of evaluation, the network simulator NS-2.28 was set up with jumbo frame and AQM scenarios. Moreover, for justification purposes, the proposed AQM algorithm and technique for jumbo frames were compared against the existing AQM algorithms and techniques found in the literature using metrics such as packet drop and throughput.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2011-A%20Congestion%20Avoidance%20Approach%20in%20Jumbo%20Frame-enabled%20IP%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Method For Multichannel Wireless Mesh Networks With Pulse Coupled Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030110</link>
        <id>10.14569/IJACSA.2012.030110</id>
        <doi>10.14569/IJACSA.2012.030110</doi>
        <lastModDate>2012-07-01T10:30:48.4830000+00:00</lastModDate>
        
        <creator>S Sobana</creator>
        
        <creator>S.Krishna Prabha</creator>
        
        <subject>Wireless Mesh Networks; Multicast; Multichannel; Multiinterface; Shortest path; DSPCNN; Auto wave; Search space.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Multicast communication is a key technology for wireless mesh networks, as it provides efficient data distribution among a group of nodes. Generally, sensor networks and MANETs use multicast algorithms that are designed to be energy efficient and to achieve optimal route discovery among mobile nodes, whereas wireless mesh networks need to maximize throughput. Here we propose two multicast algorithms, the Level Channel Assignment (LCA) algorithm and the Multi-Channel Multicast (MCM) algorithm, to improve throughput for multichannel and multi-interface mesh networks. The algorithms build efficient multicast trees by minimizing the number of relay nodes and the total hop-count distance of the trees. Shortest-path computation is a classical combinatorial optimization problem, and neural networks have been used for processing path optimization problems. Because Pulse Coupled Neural Networks (PCNNs) suffer from high computational cost for very long paths, we propose a new PCNN model called the dual-source PCNN (DSPCNN), which improves computational efficiency. Two autowaves are produced by the DSPCNN, one originating from the source neuron and the other from the goal neuron; when the autowaves from these two sources meet, the DSPCNN stops, and the shortest path is found by backtracking the two autowaves.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%2010-An%20Efficient%20Method%20For%20Multichannel%20Wireless%20Mesh%20Networks%20With%20Pulse%20Coupled%20Neural%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Question Answering System for an Effective Collaborative Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030109</link>
        <id>10.14569/IJACSA.2012.030109</id>
        <doi>10.14569/IJACSA.2012.030109</doi>
        <lastModDate>2012-07-01T10:30:42.2000000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Anik Nur Handayani</creator>
        
        <subject>E-Learning; Collaborative Learning; Q&amp;A; Knowledge Base.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The increasing advances of Internet technologies in all application domains have changed life styles and interactions. With the rapid development of e-learning, collaborative learning has become important for teaching and learning methods and strategies, and interaction among students, and between students and the teacher, is important for students to gain knowledge. Among the four basic teaching styles (formal authority, demonstrator or personal model, facilitator, and delegator), a combination of the facilitator and delegator styles is nowadays responsible for student learning: it is student-centered, with the teacher facilitating the material and activities, and learning becomes most valuable and effective when students collaborate with each other while the teacher delegates the responsibility of learning to them. In this paper, we introduce an effective question answering (Q&amp;A) system for collaborative learning, which can act not just as a virtual teacher but also as a virtual discussion forum for students. With the proposed system, students can post their questions when they want to collaborate, capitalizing on one another’s resources and skills: they ask questions to the group, request information from one another, and evaluate one another’s ideas, and each answer is then compared against an encyclopedia database. In this research, the Q&amp;A system was implemented for the Information and Communication Technology subject in a senior high school in Indonesia. From 40 questions and 120 answers, the system achieved 90.48% precision and 50% recall.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%209-Question%20Answering%20System%20for%20an%20Effective%20Collaborative%20Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Adaptive parameter free data mining approach for healthcare application</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030108</link>
        <id>10.14569/IJACSA.2012.030108</id>
        <doi>10.14569/IJACSA.2012.030108</doi>
        <lastModDate>2012-07-01T10:30:38.8270000+00:00</lastModDate>
        
        <creator>Dipti Patil</creator>
        
        <creator>Bhagyashree Agrawal</creator>
        
        <creator>Snehal Andhalkar</creator>
        
        <creator>Richa Biyani</creator>
        
        <creator>Mayuri Gund</creator>
        
        <creator>Dr. V.M.Wadhai</creator>
        
        <subject>Data stream mining; clustering; healthcare applications; medical signal analysis.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>In today’s world, healthcare is the most important factor affecting human life, yet heavy workloads leave little room for personal healthcare. The proposed system acts as a preventive measure, determining whether a person is fit or unfit based on his or her historical and real-time data by applying clustering algorithms, namely K-means and D-Stream. Both clustering algorithms are applied to the patient’s historical biomedical database; to check their correctness, we apply them to the patient’s current biomedical data. The density-based clustering algorithm, i.e. the D-Stream algorithm, overcomes the drawbacks of the K-means algorithm. By calculating their performance measures, we finally determine the effectiveness and efficiency of both algorithms.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%208-An%20Adaptive%20parameter%20free%20data%20mining%20approach%20for%20healthcare%20application.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Communication and migration of an embeddable mobile agent platform supporting runtime code mobility</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030107</link>
        <id>10.14569/IJACSA.2012.030107</id>
        <doi>10.14569/IJACSA.2012.030107</doi>
        <lastModDate>2012-07-01T10:30:32.4730000+00:00</lastModDate>
        
        <creator>Mohamed BAHAJ</creator>
        
        <creator>Khaoula ADDAKIRI</creator>
        
        <creator>Noreddine GHERABI</creator>
        
        <subject>Mobile agent; Mobile agent platform;Agent communication.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>In this paper, we present the design and implementation of Mobile-C, an IEEE Foundation for Intelligent Physical Agents (FIPA) compliant agent platform for mobile C/C++ agents. Such compliance ensures interoperability between a Mobile-C agent and agents from heterogeneous FIPA-compliant mobile agent platforms. The Mobile-C library was also designed to support synchronization, in order to protect shared resources and provide a way of deterministically timing the execution of mobile agents and threads. The new contribution of this work is to combine the mechanisms of agent migration and synchronization.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%207-Communication%20%20and%20migration%C2%A0of%20an%20embeddable%20mobile%20agent%20platform%20supporting%20runtime%20code%20mobility.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Periodontal Diseases Classification System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030106</link>
        <id>10.14569/IJACSA.2012.030106</id>
        <doi>10.14569/IJACSA.2012.030106</doi>
        <lastModDate>2012-07-01T10:30:26.1830000+00:00</lastModDate>
        
        <creator>Aliaa A.A Youssif</creator>
        
        <creator>Abeer Saad Gawish</creator>
        
        <creator>Mohammed Elsaid Moussa</creator>
        
        <subject>Biomedical image processing; epithelium segmentation; feature extraction; nuclear segmentation; periodontal diseases classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>This paper presents an efficient and innovative system for the automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients&#39; clinical data along with the features automatically extracted from the Haematoxylin and Eosin (H&amp;E) stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium &amp; connective tissue segmentation, nuclear segmentation, and extraction of the microscopic immunohistochemical features of the nuclei, dilated blood vessels &amp; collagen fibers. Feedforward backpropagation artificial neural networks are used for the classification process. We report 100% classification accuracy in correctly identifying the different periodontal diseases observed in our 30-sample dataset.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%206-Automated%20Periodontal%20Diseases%20Classification%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A new graph based text segmentation using Wikipedia for automatic text summarization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030105</link>
        <id>10.14569/IJACSA.2012.030105</id>
        <doi>10.14569/IJACSA.2012.030105</doi>
        <lastModDate>2012-07-01T10:30:19.8700000+00:00</lastModDate>
        
        <creator>Mohsen Pourvali</creator>
        
        <creator>Ph.D. Mohammad Saniee Abadeh</creator>
        
        <subject>text Summarization; Data Mining; Word Sense Disambiguation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired ones. Document summarization is the process of automatically creating a compressed version of a given document that provides useful information to users, while multi-document summarization produces a summary delivering the majority of the information content from a set of documents about an explicit or implicit main topic. In this paper, we use the knowledge base of Wikipedia and the words of the main text to create independent graphs from the input text. We then determine the importance of each graph and identify the sentences whose topics have high importance. Finally, we extract the sentences with the highest importance. The experimental results on the open benchmark datasets DUC01 and DUC02 show that our proposed approach improves performance compared to state-of-the-art summarization approaches.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%205-%20A%20new%20graph%20based%20text%20segmentation%20using%20Wikipedia%20for%20automatic%20text%20summarization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Warehouse Requirements Analysis Framework: Business-Object Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030104</link>
        <id>10.14569/IJACSA.2012.030104</id>
        <doi>10.14569/IJACSA.2012.030104</doi>
        <lastModDate>2012-07-01T10:30:13.5670000+00:00</lastModDate>
        
        <creator>Anirban Sarkar</creator>
        
        <subject>Requirements analysis; Business objects; Conceptual Model; Graph Data Model; Data Warehouses.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Detailed requirements analysis plays a key role in the design of a successful Data Warehouse (DW) system. The requirements analysis specifications are used as the prime input for the construction of the conceptual-level multidimensional data model. This paper proposes a business-object-based requirements analysis framework for DW systems that is supported by an abstraction mechanism and reuse capability. It also facilitates the stepwise mapping of requirements descriptions into high-level design components of a graph-semantic-based, conceptual-level, object-oriented multidimensional data model. The proposed framework starts with the identification of the analytical requirements using a business-process-driven approach and finally refines the requirements in further detail to map them into the conceptual-level DW design model, using either a demand-driven or a mixed-driven approach for DW requirements analysis.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%204-Data%20Warehouse%20Requirements%20Analysis%20Framework%20Business-Object%20Based%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fingerprint Image Enhancement:Segmentation to Thinning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030103</link>
        <id>10.14569/IJACSA.2012.030103</id>
        <doi>10.14569/IJACSA.2012.030103</doi>
        <lastModDate>2012-07-01T10:30:10.2130000+00:00</lastModDate>
        
        <creator>Iwasokun Gabriel Babatunde</creator>
        
        <creator>Akinyokun Oluwole Charles</creator>
        
        <creator>Alese Boniface Kayode</creator>
        
        <creator>Olabode Olatubosun</creator>
        
        <subject>AFIS;Pattern recognition;pattern matching;fingerprint;minutiae; image enhancement.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Fingerprint has remained a very vital index for human recognition. In the field of security, a series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree to which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by the Windows Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%203-Fingerprint%20Image%20Enhancement.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Improving the Performance of Ontology Matching Techniques in Semantic Web</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030102</link>
        <id>10.14569/IJACSA.2012.030102</id>
        <doi>10.14569/IJACSA.2012.030102</doi>
        <lastModDate>2012-07-01T10:30:03.9400000+00:00</lastModDate>
        
        <creator>Kamel Hussein Shafa’amri</creator>
        
        <creator>Jalal Omer Atoum</creator>
        
        <subject>Ontology matching; RDF statements;Semantic web; Similarity Aggregation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>Ontology matching is the process of finding correspondences between semantically related entities of different ontologies. This process is needed to solve the heterogeneity problems between different ontologies. Some ontologies may contain thousands of entities, which makes the ontology matching process very complex in terms of space and time requirements. This paper presents a framework that reduces the search space by removing entities (classes, properties) that have a low probability of being matched. To achieve this goal, we have introduced a matching strategy that uses multiple matching techniques, specifically string, structure, and linguistic matching. The results obtained from this framework indicate good-quality matching outcomes with low time requirements and a small search space in comparison with other matching frameworks: it reduces the search space by 43% to 53% and the time requirement by 38% to 45%.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%202-A%20Framework%20for%20Improving%20the%20Performance%20of%20Ontology%20Matching%20Techniques%20in%20Semantic%20Web.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Selection of Features for Gesture Recognition Based on a Micro Wearable Device</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2012.030101</link>
        <id>10.14569/IJACSA.2012.030101</id>
        <doi>10.14569/IJACSA.2012.030101</doi>
        <lastModDate>2012-07-01T10:29:57.6070000+00:00</lastModDate>
        
        <creator>Yinghui Zhou</creator>
        
        <creator>Lei Jing</creator>
        
        <creator>Junbo Wang</creator>
        
        <creator>Zixue Cheng</creator>
        
        <subject>Internet of Things; Wearable Computing; Gesture Recognition; Feature analysis and selection; Accelerometer.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 3(1), 2012</description>
        <description>More and more researchers are concerned with designing health support systems for the elderly that are lightweight, unobtrusive to the user, and of low computational complexity. In this paper, we introduce a micro wearable device based on a tri-axis accelerometer, which can detect acceleration changes of the human body according to where the device is placed. Considering the flexibility of the human finger, we place the device on a finger to detect finger gestures, and define 12 kinds of one-stroke finger gestures according to the sensing characteristics of the accelerometer. Features are a paramount factor in the recognition task, since they directly determine recognition accuracy; in this paper, gesture features in both the time domain and the frequency domain are described. The feature generation method and selection process are analyzed in detail to obtain the optimal feature subset from the candidate feature set. Experimental results indicate that, considering recognition accuracy and the dimension of the feature set, the selected feature subset achieves satisfactory classification results of 90.08% accuracy using 12 features.</description>
        <description>http://thesai.org/Downloads/Volume3No1/Paper%201-Analysis%20and%20Selection%20of%20Features%20for%20Gesture%20Recognition%20Based%20on%20a%20Micro%20Wearable%20Device.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Flexible Tool for Web Service Selection in Service Oriented Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021229</link>
        <id>10.14569/IJACSA.2011.021229</id>
        <doi>10.14569/IJACSA.2011.021229</doi>
        <lastModDate>2012-07-01T10:29:49.7270000+00:00</lastModDate>
        
        <creator>Walaa Nagy</creator>
        
        <creator>Hoda M. O. Mokhtar</creator>
        
        <creator>Ali El-Bastawissy</creator>
        
        <subject> Semantic Web; Web services; Web services match-making; Data warehouses; Quality of Services (QoS); Web service ranking.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Web services are emerging technologies that enable application-to-application communication and reuse of services over the Web. The Semantic Web improves the quality of existing tasks, including Web service discovery, invocation, composition, monitoring, and recovery, by describing Web service capabilities and content in a computer-interpretable language. To provide most of the requested Web services, a Web service matchmaker is usually required. Web service matchmaking is the process of finding an appropriate provider for a requester through a middle agent. To provide the right service for the right user request, Quality of Service (QoS)-based Web service selection is widely used. Employing QoS in Web service selection helps to satisfy user requirements by discovering the best service(s) in terms of the required QoS. Inspired by the model of Internet search engines such as Yahoo and Google, in this paper we provide a QoS-based service selection algorithm that is able to identify the best candidate semantic Web service(s) given the description of the requested service(s) and the QoS criteria of user requirements. In addition, our approach proposes a ranking method for those services. We also show how we employ data warehousing techniques to model the service selection problem. The proposed algorithm integrates a traditional matchmaking mechanism with data warehousing techniques; this integration of methodologies enables us to employ the historical preferences of the user to provide better selection in future searches. The main result of the paper is a generic framework that is implemented to demonstrate the feasibility of the proposed algorithm for QoS-based Web applications. Our experimental results show that the algorithm indeed performs well and increases system reliability.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2029-A%20Flexible%20tool%20for%20web%20service%20Selection%20in%20Service%20Oriented%20Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancing Business Intelligence in a Smarter Computing Environment through Cost Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021228</link>
        <id>10.14569/IJACSA.2011.021228</id>
        <doi>10.14569/IJACSA.2011.021228</doi>
        <lastModDate>2012-07-01T10:29:43.3930000+00:00</lastModDate>
        
        <creator>Saurabh Kacker</creator>
        
        <creator>Vandana Choudhary</creator>
        
        <creator>Tanupriya Choudhury</creator>
        
        <creator>Vasudha Vashisht</creator>
        
        <subject>Smarter Computing; Business Intelligence; Cost Analysis; Virtualizations; Advanced Data Capabilities; Value Metrics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>The paper aims at improving Business Intelligence in a Smarter Computing environment through cost analysis. Smarter Computing is a new approach to designing IT infrastructures that creates new opportunities, such as creating new business models, finding new ways of delivering technology-based services, and generating new insights from IT to fuel innovation and dramatically improve the economics of IT. The paper looks at various performance metrics to lower the cost of implementing Business Intelligence in a Smarter Computing environment, so as to generate a cost-efficient system; to ensure this, smarter services are deployed in line with business strategy. The working principle is based on workload optimization and the corresponding performance metrics, such as value metrics, advanced data capabilities, and virtualization, so as to decrease the total IT cost.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2028-Enhancing%20Business%20Intelligence%20in%20a%20Smarter%20Computing%20Environment%20through%20Cost%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Data Mining Approach for the Prediction of Hepatitis C Virus protease Cleavage Sites</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021227</link>
        <id>10.14569/IJACSA.2011.021227</id>
        <doi>10.14569/IJACSA.2011.021227</doi>
        <lastModDate>2012-07-01T10:29:37.0470000+00:00</lastModDate>
        
        <creator>Ahmed Mohamed Samir Ali Gamal Eldin</creator>
        
        <subject>HCV polyprotein; decision tree; protease; decamers</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Summary: Several papers have been published on the prediction of hepatitis C virus (HCV) polyprotein cleavage sites using symbolic and non-symbolic machine learning techniques. The published papers achieved different levels of prediction accuracy; the achieved results depend on the technique used and on the availability of adequate and accurate HCV polyprotein sequences with known cleavage sites. We try here to achieve more accurate prediction results, and more informative knowledge about the HCV protease cleavage sites, using a decision tree algorithm. Several factors can affect the overall prediction accuracy; one of the most important is the availability of acceptable and accurate HCV polyprotein sequences with known cleavage sites. We collected the latest accurate data sets to build the prediction model, and a further dataset for model testing. Motivation: Hepatitis C virus is a global health problem affecting a significant portion of the world’s population. The World Health Organization estimated that, in 1999, 170 million hepatitis C virus (HCV) carriers were present worldwide, with 3 to 4 million new cases per year. Several approaches have been taken to analyze the HCV life cycle and identify the important factors of the viral replication process. HCV polyprotein processing by the viral protease has a vital role in virus replication, and the prediction of HCV protease cleavage sites can help biologists design suitable viral inhibitors. Results: The ease of use and interpretability of the decision tree enabled us to create a simple prediction model, built on the latest accurate viral datasets. The decision tree achieved acceptable prediction accuracy and also generated informative knowledge about the cleavage process itself. These results can help researchers in the development of effective viral inhibitors. Using a decision tree to predict HCV protease cleavage sites achieved high prediction accuracy.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2027-A%20Data%20Mining%20Approach%20for%20the%20Prediction%20of%20Hepatitis%20C%20Virus%20protease%20Cleavage%20Sites%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach of Digital Forensic Model for Digital Forensic Investigation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021226</link>
        <id>10.14569/IJACSA.2011.021226</id>
        <doi>10.14569/IJACSA.2011.021226</doi>
        <lastModDate>2012-07-01T10:29:30.7100000+00:00</lastModDate>
        
        <creator>Inikpi O Ademu</creator>
        
        <creator>Dr Chris O. Imafidon</creator>
        
        <creator>Dr David S. Preston</creator>
        
        <subject>Case Relevance; Exploratory Testing; Automated Collection; Pre-Analysis; Post-Analysis; Evidence Reliability.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>The research introduces a structured and consistent approach to digital forensic investigation. Digital forensic science provides tools, techniques, and scientifically proven methods that can be used to acquire and analyze digital evidence. Digital evidence must be retrieved in a manner that ensures it will be accepted in court. This research focuses on a structured and consistent approach to digital forensic investigation and aims at identifying activities that facilitate and improve the digital forensic investigation process. Existing digital forensic frameworks are reviewed and their analysis compiled; the result of the evaluation produces a new model to improve the whole investigation process.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2026-A%20New%20Approach%20of%20Digital%20Forensic%20Model%20for%20Digital%20Forensic%20Investigation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021225</link>
        <id>10.14569/IJACSA.2011.021225</id>
        <doi>10.14569/IJACSA.2011.021225</doi>
        <lastModDate>2012-07-01T10:29:24.7830000+00:00</lastModDate>
        
        <creator>Mohammad Rakibul Islam</creator>
        
        <creator>Dewan Siam Shafiullah</creator>
        
        <creator>Muhammad Mostafa Amir Faisal</creator>
        
        <creator>Imran Rahman</creator>
        
        <subject>LDPC codes; Min-sum algorithm; Normalized min-sum algorithm; Optimization factor.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes is degraded when the code word length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced in both the check nodes and bit nodes of the min-sum algorithm. The optimization factor is obtained before the decoding procedure, and the same factor is multiplied twice in one cycle, so the increase in complexity is fairly low. Simulation results show that the proposed optimized min-sum decoding algorithm performs very close to sum-product decoding while preserving the main features of min-sum decoding, namely low complexity and independence from noise variance estimation errors.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2025-Optimized%20Min-Sum%20Decoding%20Algorithm%20for%20Low%20Density%20Parity%20Check%20Codes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the MDBCS Problem Using the Metaheuric–Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021224</link>
        <id>10.14569/IJACSA.2011.021224</id>
        <doi>10.14569/IJACSA.2011.021224</doi>
        <lastModDate>2012-07-01T10:29:18.4500000+00:00</lastModDate>
        
        <creator>Milena Bogdanovic</creator>
        
        <subject>graph theory; NP-complete problems; degree-bounded graphs; integer linear programming; genetic algorithms.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>The MDBCS problem concerns degree-bounded subgraphs of a graph, considering the weights of the vertices or the weights of the edges, with the aim of finding an optimal weighted subgraph subject to certain restrictions on the degrees of the vertices in the subgraph. This class of combinatorial problems has been studied extensively because of its implementation and application in network design, network interconnection, and routing algorithms, and it is likely that the solution of the MDBCS problem will find its place and application in these areas. The paper gives an ILP model for solving the MDBCS problem, as well as a genetic algorithm that computes a sufficiently good solution for input graphs with a large number of nodes. An important feature of heuristic algorithms is that they can produce approximate, but still sufficiently good, solutions to problems of exponential complexity. However, heuristic algorithms may not lead to a satisfactory solution, and for some problems they give relatively poor results; this is particularly true of problems for which no exact algorithm of polynomial complexity is known. Heuristic algorithms also differ among themselves, because some of their parts vary depending on the situation and the problem to which they are applied. These parts are usually the objective function (transformation), and their definition significantly affects the efficiency of the algorithm. By their mode of action, genetic algorithms are among the directed random search methods that explore the solution space looking for a global optimum.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2024-Solving%20the%20MDBCS%20Problem%20Using%20the%20Metaheuric%E2%80%93Genetic%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach to Improve the Representation of the User Model in the Web-Based Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021223</link>
        <id>10.14569/IJACSA.2011.021223</id>
        <doi>10.14569/IJACSA.2011.021223</doi>
        <lastModDate>2012-07-01T10:29:12.1100000+00:00</lastModDate>
        
        <creator>Yasser A Nada</creator>
        
        <creator>Khaled M. Fouad</creator>
        
        <subject> User model; Domain ontology; Semantic Similarity; Wordnet.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>A major shortcoming of content-based approaches lies in the representation of the user model. Content-based approaches often employ term vectors to represent each user’s interests. In doing so, they ignore the semantic relations between terms of the vector space model, in which indexed terms are not orthogonal and often have semantic relatedness to one another. In this paper, we improve the representation of the user model built by content-based approaches through the following steps. First is domain concept filtering, in which concepts and items of interest are compared to the domain ontology, using ontology-based semantic similarity, to check which items are relevant to our domain. Second is incorporating semantic content into the term vectors: we use word definitions and relations provided by WordNet to perform word sense disambiguation and employ domain-specific concepts as category labels for the semantically enhanced user models. The implicit information pertaining to user behavior is extracted from click-stream data or web usage sessions captured in the web server logs. Our approach also aims to update the user model by analyzing the user&#39;s history of query keywords: for a given keyword, we extract the words that have semantic relationships with it and add them to the user interest model as nodes, according to their semantic relationships in WordNet.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2023-An%20Approach%20to%20Improve%20the%20Representation%20of%20the%20User%20Model%20in%20the%20Web-Based%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Extraction of Videos using Decision Trees</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021222</link>
        <id>10.14569/IJACSA.2011.021222</id>
        <doi>10.14569/IJACSA.2011.021222</doi>
        <lastModDate>2012-07-01T10:29:05.8170000+00:00</lastModDate>
        
        <creator>Sk Abdul Nabi</creator>
        
        <creator>Shaik Rasool</creator>
        
        <creator>Dr.P. Premchand</creator>
        
        <subject>DEVDT; Data Processing; Data Pre-Processing; Decision Tree and Training Data.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>This paper addresses a new multimedia data mining framework for the extraction of events in videos using decision tree logic. The aim of our DEVDT (Detection and Extraction of Videos using Decision Trees) system is to improve the indexing and retrieval of multimedia information; the extracted events can be used to index the videos. In this system we use the C4.5 decision tree algorithm [3], which can manage both continuous and discrete attributes. In this process, we first adopt an advanced video event detection method to produce event boundaries and important visual features. This rich multi-modal feature set is filtered by a pre-processing step to clean the noise and reduce irrelevant data, which improves both precision and recall. The cleaned data is then mined and classified using a decision tree model. The learning and classification steps of this decision tree are simple and fast, and the decision tree has good accuracy. Consequently, our system reaches maximum precision and recall, i.e., it extracts pure video events effectively and proficiently.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2022-Detection%20and%20Extraction%20of%20Videos%20using%20Decision%20Trees.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sensor Node Deployment Strategy for Maintaining Wireless Sensor Network Communication Connectivity</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021221</link>
        <id>10.14569/IJACSA.2011.021221</id>
        <doi>10.14569/IJACSA.2011.021221</doi>
        <lastModDate>2012-07-01T10:28:59.4800000+00:00</lastModDate>
        
        <creator>Shigeaki TANABE</creator>
        
        <creator>Kei SAWAI</creator>
        
        <creator>Tsuyoshi SUZUKI</creator>
        
        <subject>wireless sensor network; deployment strategy; communication connectivity</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>We propose a rescue robot sensor network system in which a teleoperated rescue robot sets up a wireless sensor network (WSN) to gather disaster information in post-disaster underground spaces. In this system, the rescue robot carries wireless sensor nodes (SNs) and deploys them between gateways in an underground space, on demand at the operator’s command, to establish a safe approach path before rescue workers enter. However, only a single communication path is set up, because the rescue robot deploys the SNs linearly between the gateways. Hence, the rescue robot cannot be operated remotely if the communication path is disconnected by, for example, SN failure or changes in the environmental conditions. Therefore, SNs must be deployed adaptively so as to maintain WSN communication connectivity and avoid such situations. This paper describes an SN deployment strategy for constructing a WSN that is robust to communication disconnection caused by SN failure or deterioration of communication quality, in order to maintain communication connectivity between SNs. We propose an SN deployment strategy that uses redundant communication connections and ensures communication conditions, such as throughput, between end-to-end communications in the WSN. Experimental results verifying the efficacy of the proposed method are also described.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2021-Sensor%20Node%20Deployment%20Strategy%20for%20Maintaining%20Wireless%20Sensor%20Network%20Communication%20Connectivity.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Virtual Environment Using Virtual Reality and Artificial Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021219</link>
        <id>10.14569/IJACSA.2011.021219</id>
        <doi>10.14569/IJACSA.2011.021219</doi>
        <lastModDate>2012-07-01T10:28:53.1830000+00:00</lastModDate>
        
        <creator>Abdul Rahaman Wahab Sait</creator>
        
        <creator>Mohammad Nazim Raza</creator>
        
        <subject>Model; Virtual environment; Immersible virtual reality; Internet; Artificial neural networks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>In this paper we describe a model that provides a virtual environment to a group of people who use it. The model integrates an Immersible Virtual Reality (IVR) design with an Artificial Neural Network (ANN) interface that runs over the Internet. A user who wants to participate in the virtual environment should have the hybrid IVR and ANN model with an Internet connection. IVR is the advanced technology used in the model to let people experience the virtual environment as a real one, and the ANN is used to give shape to the characters in the virtual environment (VE). This model gives users the illusion that they are in a real communication environment.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2019-A%20Virtual%20Environment%20Using%20Virtual%20Reality%20and%20Artificial%20Neural%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>SVD-EBP Algorithm for Iris Pattern Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021217</link>
        <id>10.14569/IJACSA.2011.021217</id>
        <doi>10.14569/IJACSA.2011.021217</doi>
        <lastModDate>2012-07-01T10:28:46.4670000+00:00</lastModDate>
        
        <creator>Babasaheb G Patil</creator>
        
        <creator>Dr. Mrs. Shaila Subbaraman</creator>
        
        <subject>Singular value decomposition (SVD); Error back propagation (EBP).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>This paper proposes a neural network approach based on Error Back Propagation (EBP) for the classification of different eye images. To reduce the complexity of the layered neural network, the dimensions of the input vectors are optimized using Singular Value Decomposition (SVD). The main objective of this work is to prove the usefulness of SVD in forming a compact set of features for classification by the EBP algorithm. The results of our work indicate that optimum classification values are obtained with an SVD dimension of 20 and a maximum number of classes of 9, using state-of-the-art computational resources. The details of this combined system, named the SVD-EBP system, for iris pattern recognition, and the results thereof, are presented in this paper.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2017-SVD-EBP%20Algorithm%20for%20Iris%20Patten%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Petri Nets for Human Behavior Verification and Validation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021216</link>
        <id>10.14569/IJACSA.2011.021216</id>
        <doi>10.14569/IJACSA.2011.021216</doi>
        <lastModDate>2012-07-01T10:28:40.1630000+00:00</lastModDate>
        
        <creator>M Kouzehgar</creator>
        
        <creator>M. A. Badamchizadeh</creator>
        
        <creator>S. Khanmohammadi</creator>
        
        <subject>human behavior; verification; validation; high-level fuzzy Petri nets; fuzzy rules.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Given the rapid growth in the size and complexity of simulation applications, designing applicable and affordable verification and validation (V&amp;V) structures is an important problem. Human behavior models are nowadays central to decision making in many simulations, and in order to make valid decisions based on a reliable human decision model, the model must first pass validation and verification criteria. Human behavior models are usually represented as fuzzy rule bases, and in all recent work the V&amp;V process has been applied to a ready-made rule base. In this work, we first construct a fuzzy rule base and then apply the V&amp;V process to it. Taking professor-student interaction as the case study, a questionnaire is designed in a special way so that it can be transformed into a hierarchical fuzzy rule base. The constructed fuzzy rule base is then mapped to a fuzzy Petri net, which is searched for probable structural and semantic errors within the verification (generating and searching the reachability graph) and validation (reasoning over the Petri net) processes.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2016-Fuzzy%20Petri%20Nets%20for%20Human%20Behavior%20Verification%20and%20Validation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Eyes-Based Electric Wheel Chair Control System - I (eye) Can Control an Electric Wheel Chair</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021215</link>
        <id>10.14569/IJACSA.2011.021215</id>
        <doi>10.14569/IJACSA.2011.021215</doi>
        <lastModDate>2012-07-01T10:28:33.8500000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>computer input by human eyes only; gaze estimation; electric wheelchair control.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>An Eyes-Based Electric Wheel Chair Control (EBEWC) system is proposed. The proposed EBEWC is controlled by human eyes only, so disabled persons can control it by themselves. Most computer input systems operated by human eyes only are designed for specific conditions and do not work on a real-time basis; moreover, they are not robust against various user races, illumination conditions, EWC vibration, and user movement. Through experiments, it is found that the proposed EBEWC is robust against the aforementioned influencing factors. Moreover, it is confirmed that the proposed EBEWC can be controlled by human eyes only, accurately and safely.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2015-Eyes%20Based%20Eletric%20Wheel%20Chair%20Control%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy Efficient Zone Division Multihop Hierarchical Clustering Algorithm for Load Balancing in Wireless Sensor Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021214</link>
        <id>10.14569/IJACSA.2011.021214</id>
        <doi>10.14569/IJACSA.2011.021214</doi>
        <lastModDate>2012-07-01T10:28:30.5030000+00:00</lastModDate>
        
        <creator>Ashim Kumar Ghosh</creator>
        
        <creator>Anupam Kumar Bairagi</creator>
        
        <creator>Dr. M. Abul Kashem</creator>
        
        <creator>Md. Rezwan-ul-Islam</creator>
        
        <creator>A J M Asraf Uddin</creator>
        
        <subject>routing protocol; WSN; multihop; load balancing; cluster based routing; zone division.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Wireless sensor nodes are used in most embedded computing applications. A multihop cluster hierarchy has been presented for large wireless sensor networks (WSNs) that can provide scalable routing, data aggregation, and querying. The energy consumption rate of sensors in a WSN varies greatly depending on the protocols the sensors use for communication. In this paper we present a cluster-based routing algorithm, with the main goal of designing an energy-efficient routing protocol that addresses the usual problems of WSNs. The efficiency of a WSN depends on the distance between the nodes and the base station and on the amount of data to be transferred, and the performance of clustering is greatly influenced by the selection of cluster heads, which are in charge of creating clusters and controlling member nodes. The algorithm makes the best use of nodes with a low number of cluster heads, known as super nodes. The full region is divided into four equal zones, and super nodes are selected from the centre area of the region. Each zone is considered separately and may or may not be divided further, depending on the density of nodes in that zone and the capability of the super node. The algorithm forms multilayer communication; the number of layers depends on the current network load and statistics. Our algorithm is easily extended to generate a hierarchy of cluster heads to obtain better network management and energy efficiency.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2014-Energy%20Efficient%20Zone%20Division%20Multihop%20Hierarchical%20Clustering%20Algorithm%20for%20Load%20Balancing%20in%20Wireless%20Sensor%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Text Independent Speaker Identification using Integrating Independent Component Analysis with Generalized Gaussian Mixture Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021213</link>
        <id>10.14569/IJACSA.2011.021213</id>
        <doi>10.14569/IJACSA.2011.021213</doi>
        <lastModDate>2012-07-01T10:28:24.1900000+00:00</lastModDate>
        
        <creator>N M Ramaligeswararao</creator>
        
        <creator>Dr.V Sailaja</creator>
        
        <creator>Dr.K. Srinivasa Rao</creator>
        
        <subject>Independent component analysis; Generalized Gaussian Mixture Model; Mel frequency cepstral coefficients; Bayesian classifier; EM algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Much work has recently been reported in the literature on text-independent speaker identification models. Sailaja et al. (2010) [34] developed a text-independent speaker identification model assuming that the speech spectra of each individual speaker can be modeled by Mel frequency cepstral coefficients and a Generalized Gaussian Mixture Model. The limitation of this model is that the feature vectors (Mel frequency cepstral coefficients) are high-dimensional and assumed to be independent; but the features represented by MFCCs are dependent, and chopping some of the MFCCs introduces falsification into the model. Hence, in this paper a novel text-independent speaker identification model is developed by integrating MFCCs with Independent Component Analysis (ICA) to obtain independence and achieve low dimensionality in feature vector extraction. Assuming that the new feature vectors follow a Generalized Gaussian Mixture Model (GGMM), the model parameters are estimated using the EM algorithm. A Bayesian classifier is used to identify each speaker. Experimental results with a 50-speaker database reveal that the proposed procedure outperforms existing methods.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2013-Text%20Independent%20Speaker%20Identification%20using%20Integrating%20Independent%20Component%20Analysis%20with%20Generalized%20Gaussian%20Mixture%20Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Preprocessor Agent Approach to Knowledge Discovery Using Zero-R Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021212</link>
        <id>10.14569/IJACSA.2011.021212</id>
        <doi>10.14569/IJACSA.2011.021212</doi>
        <lastModDate>2012-07-01T10:28:17.8900000+00:00</lastModDate>
        
        <creator>Inamdar S A</creator>
        
        <creator>Narangale S.M.</creator>
        
        <creator>G. N. Shinde</creator>
        
        <subject>Data mining; Zero-R algorithm; Lower Bound Value; Class values.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Data mining and multi-agent approaches have been used successfully in the development of large, complex systems. Agents are used to perform some action or activity on behalf of a user of a computer system. This study proposes an agent-based algorithm, PrePZero-r, built on the Zero-R algorithm in Weka. Such algorithms are a powerful technique for solving various combinatorial and optimization problems. Zero-R is a simple and trivial classifier, but it gives a lower bound on the performance on a given dataset, which should be significantly improved upon by more complex classifiers. The proposed algorithm, PrePZero-r, significantly reduces the time taken to build the model compared with the Zero-R algorithm by removing Lower Bound Values of 0 during preprocessing and comparing the result with the class values. The proposed study also introduces a new factor, “Accuracy (1-e)”, for each individual attribute.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2012-Preprocessor%20Agent%20Approach%20to%20Knowledge%20Discovery%20Using%20Zero-R%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The macroeconomic effect of the information and communication technology in Hungary</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021211</link>
        <id>10.14569/IJACSA.2011.021211</id>
        <doi>10.14569/IJACSA.2011.021211</doi>
        <lastModDate>2012-07-01T10:28:11.6130000+00:00</lastModDate>
        
        <creator>Peter Sasvari</creator>
        
        <subject>ICT; Economic sector; Profitability; Total Factor Productivity.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>It was not until the beginning of the 1990s that the effects of information and communication technology on economic growth, as well as on the profitability of enterprises, raised the interest of researchers. After giving a general description of the relationship between a more intense use of ICT devices and dynamic economic growth, the author identified and explained the four channels that had a robust influence on economic growth and productivity. When comparing the use of information technology devices in developed as well as in developing countries, the author highlighted the importance of the available additional human capital and the elimination of organizational inflexibilities in the attempt to narrow the productivity gap between developed and developing nations. By processing a large quantity of information gained from Hungarian enterprises operating in several economic sectors, the author attempted to find a strong correlation between the development level of ICT device use and profitability together with total factor productivity. Although the impact of using ICT devices cannot be measured unequivocally at the microeconomic level because of certain statistical and methodological imperfections, by applying such analytical methods as cluster analysis and correlation and regression calculation, the author managed to prove that both the correlation coefficient and the gradient of the regression trend line showed a positive relationship between the extensive use of information and communication technology and the profitability of enterprises.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2011-The%20macroeconomic%20effect%20of%20the%20information%20and%20communication%20technology%20in%20Hungary.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern Discovery Using Association Rules</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021210</link>
        <id>10.14569/IJACSA.2011.021210</id>
        <doi>10.14569/IJACSA.2011.021210</doi>
        <lastModDate>2012-07-01T10:28:05.3300000+00:00</lastModDate>
        
        <creator>Ms. Chandra M</creator>
        
        <creator>Mr Rahul Jadhav</creator>
        
        <creator>Ms Dipa Dixit</creator>
        
        <creator>Ms Rashmi J</creator>
        
        <creator>Ms Anjali Nehete</creator>
        
        <creator>Ms Trupti Khodkar</creator>
        
        <subject> Weblogs; Pattern discovery; Association rules.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>The explosive growth of the Internet has given rise to many websites which maintain large amounts of user information. To utilize this information, identifying the usage patterns of users is very important. Web usage mining is one of the processes for finding such usage patterns and has many practical applications. Our paper discusses how association rules can be used to discover patterns in web usage mining. Our discussion starts with preprocessing of the given weblog, followed by clustering and finding association rules. These rules provide knowledge that helps to improve website design, advertising, web personalization, etc.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%2010-Pattern%20Discovery%20Using%20Association%20Rules.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative study of Arabic handwritten characters invariant feature</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021209</link>
        <id>10.14569/IJACSA.2011.021209</id>
        <doi>10.14569/IJACSA.2011.021209</doi>
        <lastModDate>2012-07-01T10:27:59.0600000+00:00</lastModDate>
        
        <creator>Hamdi Hassen</creator>
        
        <creator>Maher Khemakhem</creator>
        
        <subject>Arabic handwritten character; invariant feature; Hough transform; Fourier transform; Wavelet transform; Gabor Filter.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>This paper is concerned with the invariant features of Arabic handwritten characters. It presents the results of a comparative study of several feature extraction techniques for handwritten characters, based on the Hough transform, the Fourier transform, the Wavelet transform, and the Gabor filter. The obtained results show that the Hough transform and the Gabor filter are insensitive to rotation and translation; the Fourier transform is sensitive to rotation but insensitive to translation; in contrast to the Hough transform and the Gabor filter, the Wavelet transform is sensitive to rotation as well as to translation.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%209-A%20Comparative%20study%20of%20%20Arabic%20handwritten%20characters%20invariant%20feature.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identifying Nursing Computer Training Requirements using Web-based Assessment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021208</link>
        <id>10.14569/IJACSA.2011.021208</id>
        <doi>10.14569/IJACSA.2011.021208</doi>
        <lastModDate>2012-07-01T10:27:52.7500000+00:00</lastModDate>
        
        <creator>Naser Ghazi</creator>
        
        <creator>Gitesh Raikundalia</creator>
        
        <creator>Janette Gogler</creator>
        
        <creator>Leslie Bell</creator>
        
        <subject>Training Needs Analysis (TNA); Nursing Computer Literacy; Web-based Questionnaire.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Our work addresses issues of inefficiency and ineffectiveness in the training of nurses in computer literacy by developing an adaptive questionnaire system. This system identifies the most effective training modules by evaluating applicants before and after training. Our system, the Systems Knowledge Assessment Tool (SKAT), aims to increase training proficiency, decrease training time and reduce costs associated with training by identifying the areas of training required, and those which are not, targeted to each individual. Based on the project’s requirements, a number of HTML documents were designed to be used as templates in the implementation stage. During this stage, the milestone principle was used, in which a series of coding and testing was performed to generate an error-free product. The decision-making process and its components, as well as the priority of each attribute in the application, are responsible for determining the required training for each applicant. Thus, the decision-making process is an essential aspect of system design and greatly affects the training results of the applicant. The SKAT system has been evaluated to ensure that it meets the project’s requirements. The evaluation stage was an important part of the project and required a number of nurses with different roles to evaluate the system. Based on their feedback, changes were made.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%208-Identifying%20Nursing%20Computer%20Training%20Requirements%20using%20web-base%20Assessment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Handsets Malware Threats and Facing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021207</link>
        <id>10.14569/IJACSA.2011.021207</id>
        <doi>10.14569/IJACSA.2011.021207</doi>
        <lastModDate>2012-07-01T10:27:46.4700000+00:00</lastModDate>
        
        <creator>Marwa M.A Elfattah</creator>
        
        <creator>Aliaa A.A Youssif</creator>
        
        <creator>Ebada Sarhan Ahmed</creator>
        
        <subject> mobile; malware; security; malicious programs.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>Nowadays, mobile handsets combine the functionality of mobile phones and PDAs. Unfortunately, the mobile handset development process has been driven by market demand, focusing on new features and neglecting security. It is therefore imperative to study the existing challenges facing the mobile handset threat containment process, and the different techniques and methodologies used to face those challenges and contain mobile handset malware. This paper also presents a new approach to grouping the different malware containment systems according to their typologies.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%207-Handsets%20Malware%20Threats%20and%20Facing%20Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Outlier-Tolerant Kalman Filter of State Vectors in Linear Stochastic System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021206</link>
        <id>10.14569/IJACSA.2011.021206</id>
        <doi>10.14569/IJACSA.2011.021206</doi>
        <lastModDate>2012-07-01T10:27:40.1700000+00:00</lastModDate>
        
        <creator>HU Shaolin</creator>
        
        <creator>Huajiang Ouyang</creator>
        
        <creator>Karl Meinke</creator>
        
        <creator>SUN Guoji</creator>
        
        <subject>Kalman filter; Outlier-tolerant; Outlier; Linear stochastic system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>The Kalman filter is widely used in many different fields. Many practical applications and theoretical results show that the Kalman filter is very sensitive to outliers in a measurement process. In this paper, some reasons why the Kalman filter is sensitive to outliers are analyzed, and a series of outlier-tolerant algorithms are designed to be used as substitutes for the Kalman filter. These outlier-tolerant filters are highly capable of preventing adverse effects from outliers, are similar to the Kalman filter in degree of complexity, and remain robust when outliers arise in the sampled data of linear stochastic systems. Simulation results show that these modified algorithms are safe and applicable.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%206-Outlier-Tolerant%20Kalman%20Filter%20of%20State%20Vectors%20in%20Linear%20Stochastic%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Very Low Power Viterbi Decoder Employing Minimum Transition and Exchangeless Algorithms for Multimedia Mobile Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021205</link>
        <id>10.14569/IJACSA.2011.021205</id>
        <doi>10.14569/IJACSA.2011.021205</doi>
        <lastModDate>2012-07-01T10:27:36.8130000+00:00</lastModDate>
        
        <creator>S L Haridas</creator>
        
        <creator>Dr. N. K. Choudhari</creator>
        
        <subject>Hybrid register exchange method; minimum transition register exchange method; minimum transition hybrid register exchange method; register exchangeless method; hybrid register exchangeless method.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>A very low power consumption Viterbi decoder has been developed using a low supply voltage and a 0.15 &#181;m CMOS process technology. Significant power reduction can be achieved by modifying the design and implementation of the Viterbi decoder from the conventional traceback and register exchange techniques to the Hybrid Register Exchange Method (HREM), the Minimum Transition Register Exchange Method (MTREM), the Minimum Transition Hybrid Register Exchange Method (MTHREM), the Register Exchangeless Method and the Hybrid Register Exchangeless Method. By employing these schemes, the Viterbi decoder achieves a drastic reduction in power consumption, below 100 &#181;W at a supply voltage of 1.62 V, at a data rate of 5 Mb/s and a bit error rate of less than 10^-3. This excellent performance paves the way for employing strong forward error correction in low-power-consumption portable terminals for personal communication, mobile multimedia communication and digital audio broadcasting. Implementation insights and general conclusions that can particularly benefit from this approach are given.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%205-Very%20Low%20Power%20Viterbi%20Decoder%20Employing%20Minimum%20Transition%20and%20Exchangeless%20Algorithms%20for%20Multimedia%20Mobile%20Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Eye-based Human Computer Interaction Allowing Phoning, Reading E-Book/E-Comic/E-Learning, Internet Browsing, and TV Information Extraction</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021204</link>
        <id>10.14569/IJACSA.2011.021204</id>
        <doi>10.14569/IJACSA.2011.021204</doi>
        <lastModDate>2012-07-01T10:27:30.5370000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject> Eye-based HCI; E-Learning; Interface; Keyboard.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>An eye-based Human-Computer Interaction (HCI) system which allows phoning, reading e-books/e-comics/e-learning content, internet browsing, and TV information extraction is proposed for handicapped students in e-learning applications. Conventional eye-based HCI applications face problems with accuracy and processing speed. We develop new interfaces for improving the key-in accuracy and processing speed of eye-based key-in, for e-learning applications in particular. We propose an eye-based HCI that utilizes camera-mounted glasses for gaze estimation. We use the gaze for controlling the user interface, such as navigation of e-comic/e-book/e-learning contents, phoning, internet browsing, and TV information extraction. We develop interfaces including a standard interface navigator with five keys, a single-line moving keyboard, and a multi-line moving keyboard, in order to enable the aforementioned functions without compromising accuracy. The experimental results show that the proposed system performs the aforementioned functions on a real-time basis.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%204-EyebasedHumanComputerInteractionAllowingPhoningReadingEBookEComicELearningInternetBrowsingandTVInformationExtraction.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Autonomous Control of Eye Based Electric Wheel Chair with Obstacle Avoidance and Shortest Path Findings Based on Dijkstra Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021203</link>
        <id>10.14569/IJACSA.2011.021203</id>
        <doi>10.14569/IJACSA.2011.021203</doi>
        <lastModDate>2012-07-01T10:27:27.1870000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Ronny Mardiyanto</creator>
        
        <subject>Human Computer Interaction; Gaze Estimation; Obstacle Avoidance; Electric Wheel Chair Control.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>An autonomous eye-based electric wheelchair (EBEWC) control system which allows a handicapped person (user) to control their electric wheelchair with their eyes only is proposed. Using EBEWC, users can move autonomously anywhere they want on the same floor of a hospital, with obstacle avoidance based on a visible camera and an ultrasonic sensor. Users can also control the EBEWC with their eyes. The most appropriate route has to be determined while avoiding obstacles, and then autonomous real-time control has to be performed. The processing time and autonomous obstacle avoidance, together with determination of the most appropriate route, are therefore important for the proposed EBEWC. All the required performances are evaluated and validated. Obstacles can be avoided using images acquired with a forward-looking camera. The proposed EBEWC system allows the creation of a floor layout map that contains obstacle locations on a real-time basis. The created and updated maps can be shared by the electric wheelchairs on the same floor of a hospital. Experimental data show that the system allows computer input (more than 80 keys) almost perfectly, and that the electric wheelchair can be controlled safely with human eyes only.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%203-Autonomous%20Control%20of%20Eye%20Based%20Electric%20Wheel%20Chair%20with%20Obstacle%20Avoidance%20and%20Shortest%20Path%20Findings%20Based%20on%20Dijkstra%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> A Novel Intra-Domain Continues Handover Solution for Inter-Domain Pmipv6 Based Vehicular Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021202</link>
        <id>10.14569/IJACSA.2011.021202</id>
        <doi>10.14569/IJACSA.2011.021202</doi>
        <lastModDate>2012-07-01T10:27:23.8430000+00:00</lastModDate>
        
        <creator>Haidar N Hussain</creator>
        
        <creator>Kamalrulnizam Abu Bakar</creator>
        
        <creator>Shaharuddin Salleh</creator>
        
        <subject> PMIPv6; MIH; MIIS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>IP mobility management protocols (e.g. host-based mobility protocols) incur significant handover latency and thus degrade QoS for end-user devices. Proxy Mobile IPv6 (PMIPv6) was proposed by the Internet Engineering Task Force (IETF) as a new network-based mobility protocol to reduce host-based handover latency. However, the current PMIPv6 cannot support the high mobility of vehicles while they move within a PMIPv6 domain. In this paper we introduce a novel intra-domain PMIPv6 handover technique for vehicular networks using Media Independent Handover (MIH). The novel intra-domain PMIPv6 handover for vehicular networks improves the handover performance of PMIPv6 by allowing the new PMIPv6 domain to obtain the MIIS information and estimate whether the handover is necessary before the vehicle moves to the second MAG of the new PMIPv6 domain. We evaluate the handover latency and data packet loss of the proposed handover process compared to PMIPv6. The conducted analysis results confirm that the novel handover process yields reduced handover latency compared to that of PMIPv6 and also prevents data packet loss.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%202-A%20Novel%20Intra-Domain%20Continues%20Handover%20Solution%20for%20Inter-Domain%20Pmipv6%20Based%20Vehicular%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of the Visual Quality of Video Streaming Under Desynchronization Conditions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021201</link>
        <id>10.14569/IJACSA.2011.021201</id>
        <doi>10.14569/IJACSA.2011.021201</doi>
        <lastModDate>2012-07-01T10:27:17.5600000+00:00</lastModDate>
        
        <creator>A A Atayero</creator>
        
        <creator>O.I. Sheluhin</creator>
        
        <creator>Y.A. Ivanov</creator>
        
        <creator>A.A. Alatishe</creator>
        
        <subject>Video streaming; encoder; decoder; video streaming quality; PSNR.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(12), 2011</description>
        <description>This paper presents a method for assessing desynchronized video with the aid of a software package specially developed for this purpose. A unique methodology for substituting values for lost frames was developed. It is shown that in the event of non-similarity between the sent and received sequences, caused by the loss of some frames in transit, the estimation of the quality indicator via traditional (existing) software is inaccurate. We present in this paper a novel method of estimating the quality of desynchronized video streams. The developed software application is able to estimate the quality of video sequences even when parts of the frame are missing, by searching out contextually similar frames and “gluing” them in place of the lost frames. Comparing the obtained results with those from existing software validates their accuracy. The difference in the results and methods of estimating video sequences of different subject groups is also discussed. The paper concludes with recommendations on the best methodology to adopt for specific estimation scenarios.</description>
        <description>http://thesai.org/Downloads/Volume2No12/Paper%201-Estimation%20of%20the%20Visual%20Quality%20of%20Video%20Streaming%20Under%20Desynchronization%20Conditions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> A new approach of designing Multi-Agent Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021126</link>
        <id>10.14569/IJACSA.2011.021126</id>
        <doi>10.14569/IJACSA.2011.021126</doi>
        <lastModDate>2012-07-01T10:27:11.2870000+00:00</lastModDate>
        
        <creator>Sara Maalal</creator>
        
        <creator>Malika Addou</creator>
        
        <subject> Software agents; Multi-agents Systems (MAS); Analysis; Software design; Modeling; Models; Diagrams; Architecture; Model Driven Architecture (MDA); Agent Unified Modeling Language (AUML); Agent Modeling Language (AML). </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Agent technology is a software paradigm that permits the implementation of large and complex distributed applications [1]. In order to assist the analysis, design and development or implementation phases of multi-agent systems, we present a practical application of a generic and scalable method for building a MAS with a component-oriented architecture and an agent-based approach that allows MDA to generate source code from a given model. We designed in AUML the class diagrams as a class meta-model of the different agents of a MAS. We then generated the source code of the developed models using an open source tool called AndroMDA. This agent-based and evolutive approach enhances the modularity and genericity of developments and promotes their reusability in future developments. This property distinguishes our design methodology from existing methodologies in that it is not constrained by any particular agent-based model while providing a library of generic models [2].</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2026-%20A%20new%20approach%20of%20designing%20Multi-Agent%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A novel approach for pre-processing of face detection system based on HSV color space and IWPT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021125</link>
        <id>10.14569/IJACSA.2011.021125</id>
        <doi>10.14569/IJACSA.2011.021125</doi>
        <lastModDate>2012-07-01T10:27:05.3730000+00:00</lastModDate>
        
        <creator>Megha Gupta</creator>
        
        <creator>Neetesh Gupta</creator>
        
        <subject>facial image, HSV color space, IWPT, SVM.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Face detection is a challenging area of research in the field of security surveillance. Preprocessing of facial image data is a very important part of a face detection system. Nowadays various methods of facial image data preprocessing are available, but these methods are not up to the mark. We present a novel approach for the preprocessing stage of a face detection system based on the HSV color space and the integer wavelet packet transform (IWPT) method. Finally, LBP (local binary pattern) features and an SVM classification process are used.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2025-%20A%20novel%20approach%20for%20pre-processing%20of%20face%20detection%20system%20based%20on%20HSV%20color%20space%20and%20IWPT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Towards Quranic reader controlled by speech</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021123</link>
        <id>10.14569/IJACSA.2011.021123</id>
        <doi>10.14569/IJACSA.2011.021123</doi>
        <lastModDate>2012-07-01T10:26:59.0600000+00:00</lastModDate>
        
        <creator>Yacine Yekache</creator>
        
        <creator>Yekhlef Mekelleche</creator>
        
        <creator>Belkacem Kouninef</creator>
        
        <subject>arabic speech recognition; quranic reader; speech corpus; HMM; acoustic model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>In this paper we describe the process of designing a task-oriented continuous speech recognition system for Arabic, based on CMU Sphinx4, to be used in the voice interface of a Quranic reader. The concept of the Quranic reader controlled by speech is presented, and the collection of the corpus and the creation of the acoustic model are described in detail, taking into account the specificities of the Arabic language and the desired application.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2023-%20Towards%20Quranic%20reader%20controlled%20by%20speech.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Outlier-tolerant Exponential Smoothing Prediction Algorithms with Applications to Predict the Temperature in Spacecraft</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021122</link>
        <id>10.14569/IJACSA.2011.021122</id>
        <doi>10.14569/IJACSA.2011.021122</doi>
        <lastModDate>2012-07-01T10:26:52.7770000+00:00</lastModDate>
        
        <creator>Hu Shaolin</creator>
        
        <creator>Li Ye</creator>
        
        <creator>Zhang Wei</creator>
        
        <creator>Fan Shunxi</creator>
        
        <subject> Exponential prediction; Adaptive smoothing prediction; Outlier-tolerance smoothing prediction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>The exponential smoothing prediction algorithm is widely used in spaceflight control and process monitoring as well as in economic forecasting. Two key problems remain open: the rule for selecting the parameter in the exponential smoothing prediction, and how to mitigate the bad influence of outliers on prediction. In this paper a new practical outlier-tolerant algorithm is built to select a proper parameter adaptively, and the exponential smoothing prediction algorithm is modified to prevent outliers in the sampled data from degrading the prediction. These two new algorithms are valid and effective in overcoming the two open problems stated above. Simulation and practical results on data sampled from temperature sensors in a spacecraft show that the new adaptive outlier-tolerant exponential smoothing prediction algorithm can eliminate the bad influence of outliers on the prediction of the future process state.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2022-%20Adaptive%20Outlier-tolerant%20Exponential%20Smoothing%20Prediction%20Algorithms%20with%20Applications%20to%20Predict%20the%20Temperature%20in%20Spacecraft.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Test Method on the Convergence and Divergence for Infinite Integral</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021121</link>
        <id>10.14569/IJACSA.2011.021121</id>
        <doi>10.14569/IJACSA.2011.021121</doi>
        <lastModDate>2012-07-01T10:26:46.4070000+00:00</lastModDate>
        
        <creator>Guocheng Li</creator>
        
        <subject>Infinite integral; Exponential integrating factor; Convergence and divergence.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Distinguishing the convergence or divergence of an infinite integral of a non-negative continuous function has always been an important and difficult question in mathematical teaching. Using the comparison of integrands to judge some exceptional infinite integrals is hard or even useless. In this paper, we establish the exponential integrating factor of a non-negative function and present a new convergence test based on this exponential integrating factor. These conclusions are convenient and valid additions to the previously known results.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2021-%20A%20New%20Test%20Method%20on%20the%20Convergence%20and%20Divergence%20for%20Infinite%20Integral.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Echo cancellation in VOIP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021120</link>
        <id>10.14569/IJACSA.2011.021120</id>
        <doi>10.14569/IJACSA.2011.021120</doi>
        <lastModDate>2012-07-01T10:26:40.1200000+00:00</lastModDate>
        
        <creator>Patrashiya Magdolina Halder</creator>
        
        <creator>A.K.M. Fazlul Haque</creator>
        
        <subject> PSTN; round trip delay; impedance; inverse filtering; denoise; histogram amplifier; repeater.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>VoIP (voice over Internet protocol) is a very popular communication technology of this century and has played a tremendous role in communication systems. It is widely preferred because of the many benefits it delivers: it uses Internet protocol (IP) networks to carry multimedia information, such as speech, over a data network. A VoIP system can be configured in three connection modes: PC to PC, telephony to telephony, and PC to telephony. Echo is a very annoying problem that occurs in VoIP and reduces its voice quality. It is not possible to remove echo 100% from the echoed signal, because an attempt to eliminate it completely may distort the main signal; echo therefore cannot be eliminated perfectly, only suppressed to a tolerable range. Clipping is not a good solution for suppressing echo, because parts of the speech may be erroneously removed. Besides, an NLP does not respond rapidly enough and also confuses the fading of the voice level at the end of a sentence with residual echo. This paper proposes an echo cancellation method for VoIP that has been tested and verified in MATLAB. The goal was to suppress echo without clipping or distorting the main signal. With the help of the MATLAB program, the echo is minimized to an endurable level so that the received signal appears echo free. The percentage of echo suppression varies with the amplitude of the main signal. With regard to the amplitude variation in the received (echo-free) signal, the proposed method performs better at recovering the echo-free signal than other conventional systems.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2020-%20Improved%20Echo%20cancellation%20in%20VOIP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> A Fuzzy Similarity Based Concept Mining Model for Text Classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021119</link>
        <id>10.14569/IJACSA.2011.021119</id>
        <doi>10.14569/IJACSA.2011.021119</doi>
        <lastModDate>2012-07-01T10:26:33.3470000+00:00</lastModDate>
        
        <creator>Shalini Puri</creator>
        
        <subject>Text Classification; Natural Language Processing; Feature Extraction; Concept Mining; Fuzzy Similarity Analyzer; Dimensionality Reduction; Sentence Level; Document Level; Integrated Corpora Level Processing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Text classification is a challenging and very active field at present and has great importance in text categorization applications. A lot of research has been done in this field, but there is still a need to categorize a collection of text documents into mutually exclusive categories by extracting concepts or features using the supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training and preparing them at the sentence, document, and integrated corpora levels, along with feature reduction and ambiguity removal at each level to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) against the corresponding categories or classes. The model uses a Support Vector Machine Classifier (SVMC) to classify the training data patterns correctly into two groups, i.e., +1 and -1, thereby producing accurate and correct results. The proposed model works efficiently and effectively, delivering high-accuracy results.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2019-%20A%20Fuzzy%20Similarity%20Based%20Concept%20Mining%20Model%20for%20Text%20Classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Plethora of Cyber Forensics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021118</link>
        <id>10.14569/IJACSA.2011.021118</id>
        <doi>10.14569/IJACSA.2011.021118</doi>
        <lastModDate>2012-07-01T10:26:26.3400000+00:00</lastModDate>
        
        <creator>N Sridhar</creator>
        
        <creator>Dr.D.Lalitha Bhaskari</creator>
        
        <creator>Dr. P.S. Avadhani</creator>
        
        <subject>Cyber Forensics; digital evidence; forensics tools; cyber crimes.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>As threats against digital assets have risen, there is a need to expose and eliminate hidden risks and threats; the capability for exposing them is called “cyber forensics.” Cyber penetrators have adopted more sophisticated tools and tactics that endanger operations on a global scale. These attackers also use anti-forensic techniques to hide evidence of a cyber crime. Cyber forensics tools must increase their robustness to counteract these advanced persistent threats. This paper gives a brief overview of cyber forensics, the various phases of cyber forensics, handy tools, and new research trends and issues in this fascinating area.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2018-%20Plethora%20of%20Cyber%20Forensics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Irrigation Fuzzy Controller Reduce Tomato Cracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021117</link>
        <id>10.14569/IJACSA.2011.021117</id>
        <doi>10.14569/IJACSA.2011.021117</doi>
        <lastModDate>2012-07-01T10:26:18.7670000+00:00</lastModDate>
        
        <creator>Federico Hahn</creator>
        
        <subject>component; Fuzzy controller; irrigation controller; tomato cracking.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Sunlight heats the greenhouse air, and tomato cracking can decrease the marketable product by up to 90%. A shade screen reduced incoming radiation during warm and sunny conditions to reduce tomato cracking. A fuzzy controller managed greenhouse irrigation to reduce tomato cracking, using solar radiation and substrate temperature as its variables. The embedded controller implemented 9 rules and three assumptions that made it operate better. Signal peaks were removed, and control actions could take place ten minutes after irrigation. Irrigation was increased during the peak hours from 12:00 to 15:00 h when required by the fuzzy controller; meanwhile, the water containing the nutrient solution was removed during very cloudy days with limited photosynthesis. After three continuous cloudy days, irrigation should be scheduled to avoid plant nutrient problems. Substrate temperature in volcanic rock can be used as a real-time irrigation sensor. Tomato cracking decreased to 29% using the fuzzy controller, and canopy temperature never exceeded 30&#186;C.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2017-%20Irrigation%20Fuzzy%20Controller%20Reduce%20Tomato%20Cracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering: Applied to Data Structuring and Retrieval</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021116</link>
        <id>10.14569/IJACSA.2011.021116</id>
        <doi>10.14569/IJACSA.2011.021116</doi>
        <lastModDate>2012-07-01T10:26:12.4830000+00:00</lastModDate>
        
        <creator> Ogechukwu N Iloanusi</creator>
        
        <creator>Charles C. Osuagwu</creator>
        
        <subject>component; Clustering; k-means; data retrieval; indexing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Clustering is a very useful scheme for data structuring and retrieval because it can handle large volumes of multi-dimensional data with a very fast algorithm. Other data structuring techniques include hashing and binary tree structures; however, clustering has the advantage of requiring little computational storage while employing a fast algorithm. In this paper, clustering, k-means clustering, and the approaches to effective clustering are extensively discussed. Clustering was employed as a data grouping and retrieval strategy in the filtering of fingerprints in the Fingerprint Verification Competition 2000 database 4(a). An average penetration of 7.41% obtained from the experiment shows clearly that the clustering scheme is an effective retrieval strategy for the filtering of fingerprints.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2016-%20Clustering%20Applied%20to%20Data%20Structuring%20and%20Retrieval.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CluSandra: A Framework and Algorithm for Data Stream Cluster Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021115</link>
        <id>10.14569/IJACSA.2011.021115</id>
        <doi>10.14569/IJACSA.2011.021115</doi>
        <lastModDate>2012-07-01T10:26:06.1700000+00:00</lastModDate>
        
        <creator>Josh R Fernandez</creator>
        
        <creator>Eman M. El-Sheikh</creator>
        
        <subject> data stream; data mining; cluster analysis; knowledge discovery; machine learning; Cassandra database; BIRCH; CluStream; distributed systems.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>The clustering or partitioning of a dataset’s records into groups of similar records is an important aspect of knowledge discovery from datasets. A considerable amount of research has been applied to the identification of clusters in very large multi-dimensional and static datasets. However, the traditional clustering and/or pattern recognition algorithms that have resulted from this research are inefficient for clustering data streams. A data stream is a dynamic dataset that is characterized by a sequence of data records that evolves over time, has extremely fast arrival rates, and is unbounded. Today, the world abounds with processes that generate high-speed evolving data streams. Examples include click streams, credit card transactions, and sensor networks. The data stream’s inherent characteristics present an interesting set of time- and space-related challenges for clustering algorithms. In particular, processing time is severely constrained and clustering must be performed in a single pass over the incoming data. This paper presents both a clustering framework and an algorithm that, combined, address these challenges and allow end-users to explore and gain knowledge from evolving data streams. Our approach includes the integration of open source products that are used to control the data stream and facilitate the harnessing of knowledge from it. Experimental results of testing the framework with various data streams are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2015-%20CluSandra%20A%20Framework%20and%20Algorithm%20for%20Data%20Stream%20Cluster%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Current Trends in Group Key Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021114</link>
        <id>10.14569/IJACSA.2011.021114</id>
        <doi>10.14569/IJACSA.2011.021114</doi>
        <lastModDate>2012-07-01T10:26:00.2700000+00:00</lastModDate>
        
        <creator>R Siva Ranjani</creator>
        
        <creator>Dr.D.Lalitha Bhaskari</creator>
        
        <creator>Dr. P.S. Avadhani</creator>
        
        <subject>Multicast; group key management; security; member driven; time driven.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Various network applications require sending data to one or many members, and maintaining security in large groups is one of the major obstacles to controlling access. Unfortunately, IP multicast does not provide any security for group communication. Group key management is a fundamental mechanism for secure multicast. This paper presents the relevant group key management protocols and compares them against some pertinent performance criteria. Finally, we discuss new research directions in group key management.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2014-%20Current%20Trends%20in%20Group%20Key%20Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Passwords Selected by Hospital Employees: An Investigative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021113</link>
        <id>10.14569/IJACSA.2011.021113</id>
        <doi>10.14569/IJACSA.2011.021113</doi>
        <lastModDate>2012-07-01T10:25:53.9730000+00:00</lastModDate>
        
        <creator>B Dawn Medlin</creator>
        
        <creator>Ken Corley</creator>
        
        <creator>B. Adriana Romaniello</creator>
        
        <subject>Passwords; Security; HIPAA; HITECH.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>The health care industry has benefitted from its employees’ ability to view patient data, but at the same time this access allows patients’ health care records and information to be easily tampered with or stolen. While access to and transmission of patient data may improve care, increase the delivery time of services, and reduce health care costs, the security of that information may be jeopardized by the innocent sharing of personal and non-personal data with the wrong person. In this study, we surveyed employees of different-sized hospitals in various regions of the state who were willing to share their passwords. Our findings indicate that employees need further training in their awareness surrounding password creation.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2013-%20Passwords%20Selected%20by%20Hospital%20Employees%20An%20Investigative%20Study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Error Filtering Schemes for Color Images in Visual Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021112</link>
        <id>10.14569/IJACSA.2011.021112</id>
        <doi>10.14569/IJACSA.2011.021112</doi>
        <lastModDate>2012-07-01T10:25:48.0030000+00:00</lastModDate>
        
        <creator>Shiny Malar F.R</creator>
        
        <creator>Jeya Kumar M.K</creator>
        
        <subject> Error Diffusion; Visual Cryptography; Fourier Filtering; Context Overlapping; Color Extended Visual Cryptography.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Color visual cryptography methods are free from the limitations of randomness on color images. The two basic ideas used are error diffusion and pixel synchronization. Error diffusion is a simple method in which the quantization error at each pixel is filtered and fed as input to the next pixel; in this way the low-frequency difference between the input and output image is minimized, which in turn gives quality images. Degradation of colors is avoided with the help of pixel synchronization. This work presents an efficient color image visual cryptic filtering scheme to improve the image quality of the original image restored from visual cryptic shares. The proposed scheme applies a deblurring effect to the non-uniform distribution of visual cryptic share pixels. After eliminating blurring effects on the pixels, a Fourier transform is applied to normalize the unevenly transformed share pixels in the restored original image. This in turn improves the quality of the restored visual cryptographic image towards optimality. In addition, the overlapping portions of two or multiple visual cryptic shares are filtered out using the homogeneity of the pixel texture property on the restored original image. Experiments are conducted with standard synthetic and real data set images, which show the better performance of the proposed color image visual cryptic filtering scheme, measured in terms of PSNR value (improved up to 3 times) and share pixel error rate (reduced by nearly 11%) compared with existing grey visual cryptic filters. The results showed that noise effects such as blurring on the restored original image are removed completely.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2012-%20Error%20Filtering%20Schemes%20for%20Color%20Images%20in%20Visual%20Cryptography.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Concurrent Edge Prevision and Rear Edge Pruning Approach for Frequent Closed Itemset Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021111</link>
        <id>10.14569/IJACSA.2011.021111</id>
        <doi>10.14569/IJACSA.2011.021111</doi>
        <lastModDate>2012-07-01T10:25:41.7200000+00:00</lastModDate>
        
        <creator>Anurag Choubey</creator>
        
        <creator>Dr. Ravindra Patel</creator>
        
        <creator>Dr. J.L. Rana</creator>
        
        <subject>Data mining; Closed Itemsets; Pattern Mining; sequence length; graph structure.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Past observations have shown that frequent itemset mining algorithms are expected to mine the closed itemsets, because the result provides a compact and complete result set and higher efficiency. However, the latest closed itemset mining algorithms rely on candidate maintenance combined with a test paradigm, which is expensive in both runtime and space usage when the support threshold is small or the itemsets get long. Here we present CEG&amp;REP, a capable algorithm for mining closed sequences without candidate maintenance. It implements a novel sequence-closure verification model by constructing a graph structure built by an approach labeled “Concurrent Edge Prevision and Rear Edge Pruning,” referred to in brief as CEG&amp;REP. A thorough evaluation on sparse and dense real-life data sets proved that CEG&amp;REP performs better than older algorithms, as it uses less memory and is faster than the algorithms frequently cited in the literature.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2011-%20Concurrent%20Edge%20Prevision%20and%20Rear%20Edge%20Pruning%20Approach%20for%20Frequent%20Closed%20Itemset%20Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Nearest Neighbor Condensing Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021110</link>
        <id>10.14569/IJACSA.2011.021110</id>
        <doi>10.14569/IJACSA.2011.021110</doi>
        <lastModDate>2012-07-01T10:25:35.3330000+00:00</lastModDate>
        
        <creator>MILOUD-AOUIDATE Amal</creator>
        
        <creator>BABA-ALI Ahmed Riadh</creator>
        
        <subject>Nearest neighbor (NN); kNN; Prototype selection; Condensed NN; Reduced NN; Condensing; Genetic algorithms; Tabu search.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>The nearest neighbor rule identifies the category of an unknown element according to the categories of its known nearest neighbors. This technique is efficient in many fields, such as event recognition, text categorization, and object recognition. Its prime advantage is its simplicity, but its main inconvenience is its computational complexity for large training sets. This drawback has been addressed by the research community as the problem of prototype selection, and several techniques presented as condensing techniques have been proposed to solve it. Condensing algorithms try to determine a significantly reduced set of prototypes that keeps the performance of the 1-NN rule on this set close to that reached on the complete training set. In this paper we present a survey of some condensing kNN techniques: CNN, RNN, FCNN, Drop1-5, DEL, IKNN, TRKNN, and CBP. All these techniques can improve efficiency in computation time, but they fail to prove the minimality of their resulting sets. One possibility is to hybridize them with other algorithms, called modern heuristics or metaheuristics, which can themselves improve the solution. The metaheuristics with proven results in attribute selection are principally genetic algorithms and tabu search. We also shed light on some recent techniques following this template.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%2010-%20Survey%20of%20Nearest%20Neighbor%20Condensing%20Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modularity Index Metrics for Java-Based Open Source Software Projects</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021109</link>
        <id>10.14569/IJACSA.2011.021109</id>
        <doi>10.14569/IJACSA.2011.021109</doi>
        <lastModDate>2012-07-01T10:25:29.0100000+00:00</lastModDate>
        
        <creator>Andi Wahju Rahardjo Emanuel</creator>
        
        <creator>Retantyo Wardoyo</creator>
        
        <creator>Jazi Eko Istiyanto</creator>
        
        <subject>Open source software projects; modularity; Java; sourceforge; software metrics; system architecture. </subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Open Source Software (OSS) projects are gaining popularity these days and have become alternatives for building software systems. Despite many failures among these projects, there are some success stories, with modularity identified as one of the success factors. This paper presents the first quantitative software metric to measure the modularity level of Java-based OSS projects, called the Modularity Index. This metric is formulated by analyzing modularity traits such as size, complexity, cohesion, and coupling of 59 Java-based OSS projects from sourceforge.net using the SONAR tool. These OSS projects were selected because they have been downloaded more than 100K times and are believed to have the modularity traits required to be successful. The software metrics related to modularity at the class, package, and system levels of these projects were extracted and analyzed. The similarities found were then analyzed to determine class quality and package quality, which were combined with a system architecture measure to formulate the Modularity Index. A case study measuring the Modularity Index during the evolution of the JFreeChart project has shown that this metric is able to identify strengths and potential problems of the project.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%209-%20Modularity%20Index%20Metrics%20for%20Java-Based%20Open%20Source%20Software%20Projects.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Sign Language (ArSL) Recognition System Using HMM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021108</link>
        <id>10.14569/IJACSA.2011.021108</id>
        <doi>10.14569/IJACSA.2011.021108</doi>
        <lastModDate>2012-07-01T10:25:22.7070000+00:00</lastModDate>
        
        <creator>Aliaa A.A Youssif</creator>
        
        <creator>Amal Elsayed Aboutabl</creator>
        
        <creator>Heba Hamdy Ali</creator>
        
        <subject>Hand Gesture; Hand Tracking; Arabic Sign Language (ArSL); HMM; Hand Features; Hand Contours.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>Hand gestures enable deaf people to communicate during their daily lives rather than by speaking. A sign language is a language which, instead of using sound, uses visually transmitted gesture signs which simultaneously combine hand shapes, orientation and movement of the hands, arms, lip patterns, body movements, and facial expressions to express the speaker&#39;s thoughts. Recognizing and documenting Arabic sign language has only recently received attention, and there have been few attempts to develop recognition systems that allow deaf people to interact with the rest of society. This paper introduces an automatic Arabic sign language (ArSL) recognition system based on Hidden Markov Models (HMMs). A large set of samples has been used to recognize 20 isolated words from the standard Arabic sign language. The proposed system is signer-independent. Experiments were conducted using real ArSL videos of deaf people in different clothes and with different skin colors. Our system achieves an overall recognition rate of up to 82.22%.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%208-%20Arabic%20Sign%20Language%20(ArSL)%20Recognition%20System%20Using%20HMM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The impact of competitive intelligence on products and services innovation in organizations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021107</link>
        <id>10.14569/IJACSA.2011.021107</id>
        <doi>10.14569/IJACSA.2011.021107</doi>
        <lastModDate>2012-07-01T10:25:16.3700000+00:00</lastModDate>
        
        <creator>Phathutshedzo Nemutanzhela</creator>
        
        <creator>Tiko Iyamu</creator>
        
        <subject>Competitive Intelligence (CI); Diffusion of Innovation (DoI); Innovation; Product; Services.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>This paper discusses the findings of a study aimed at establishing the effect of Competitive Intelligence (CI) on product and service innovation. A literature study revealed that CI and innovation are widely studied subjects, but that the impact of CI on innovation was not well documented, even though CI has been widely acclaimed as a panacea for many organizational problems. A case study was conducted using an information and communication technology (ICT) organisation, and the innovation-decision process was applied in the data analysis. The study found that while CI is often portrayed as revolutionary, delivering customer-focused information systems products and services remains challenging, and that not all organisations that deploy CI produce more innovative methods. A lack of knowledge-sharing and limitations within the organisational culture were found to be important factors for the deployment of CI in organisations. Conclusions based on these findings are presented.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%207-%20The%20impact%20of%20competitive%20intelligence%20on%20products%20and%20services%20innovation%20in%20organisations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Graphing emotional patterns by dilation of the iris in video sequences</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021106</link>
        <id>10.14569/IJACSA.2011.021106</id>
        <doi>10.14569/IJACSA.2011.021106</doi>
        <lastModDate>2012-07-01T10:25:10.0100000+00:00</lastModDate>
        
        <creator>Rodolfo Romero Herrera</creator>
        
        
        <creator>Saul De La O Torres</creator>
        
        <subject>Pupil dilation; joy; sadness; mathematical morphology; average interpolation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>For this paper, we recorded videos of people&#39;s irises while inducing a feeling of joy or sadness, using video clips to stimulate the affective states. We implemented a system that recognizes affective patterns from the dilation of the iris in images extracted from the videos, and the results are plotted to facilitate interpretation. A suitable processing stage locates the pupil and obtains its diameter, and the graphs are based on statistics over time intervals. The research found that the pupil diameter varies depending on the mood of the subject under study. Software already exists that can detect changes in pupil diameter; the main objective, and the main contribution of this study, is software that relates those changes to affective states. Joy and sadness were the emotional states that could be distinguished. The system presents graphs from which the dependence between feelings and the dilation present in the eye can be observed.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%206-%20Graphing%20emotional%20patterns%20by%20dilation%20of%20the%20iris%20in%20video%20sequences.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Strength of Quick Response Barcodes and Design of Secure Data Sharing System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021105</link>
        <id>10.14569/IJACSA.2011.021105</id>
        <doi>10.14569/IJACSA.2011.021105</doi>
        <lastModDate>2012-07-01T10:25:03.7130000+00:00</lastModDate>
        
        <creator>Sona Kaushik</creator>
        
        <subject>QR Barcode; Quick response technology; Data Security; Information Security; Image processing; Data Sharing Architecture</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>With the vast growth of the wireless world, exchanged information is now more prone to security attacks than ever. Barcodes are information carriers in the form of an image; their various applications, as well as the structure, symbology and properties of barcodes, are discussed in brief. This paper aims to provide an approach for sharing high-security information over the network using QR barcodes. QR barcodes are explained in detail to give a rigorous understanding of quick response technology. The design of a data security model for sharing data over the network is then explained. The model is a layered architecture that protects the data by transforming the structure of its content, so barcodes are used to obfuscate the information rather than merely carrying it in their usual role.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%205-%20Strength%20of%20Quick%20Response%20Barcodes%20and%20Design%20of%20Secure%20Data%20Sharing%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Implementation of RISI Controller Employing Adaptive Clock Gating Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021104</link>
        <id>10.14569/IJACSA.2011.021104</id>
        <doi>10.14569/IJACSA.2011.021104</doi>
        <lastModDate>2012-07-01T10:24:57.3070000+00:00</lastModDate>
        
        <creator>M Kamaraju</creator>
        
        <creator>Praveen V N Desu</creator>
        
        <subject>Interrupt; Controller; Clock gating; power.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>With the scaling of technology and the need for higher performance and more functionality, power dissipation is becoming a major issue in controller design. Interrupt-based programming is widely used for interfacing a processor with peripherals. The proposed architecture implements a mechanism which combines an interrupt controller and a RIS (Reduced Instruction Set) CPU (Central Processing Unit) on a single die. The RISI controller takes only one cycle for both interrupt request generation and acknowledgement. The architecture has a dynamic control unit which consists of a program flow controller, an interrupt controller and an I/O controller. An adaptive clock gating technique is used to reduce power consumption in the dynamic control unit. The controller consumes 174&#181;W @ 1MHz and is implemented in Verilog HDL using the Xilinx platform.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%204-%20A%20Novel%20Implementation%20of%20RISI%20Controller%20Employing%20Adaptive%20Clock%20Gating%20Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Confidential Deterministic Quantum Communication Using Three Quantum States</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021103</link>
        <id>10.14569/IJACSA.2011.021103</id>
        <doi>10.14569/IJACSA.2011.021103</doi>
        <lastModDate>2012-07-01T10:24:51.0000000+00:00</lastModDate>
        
        <creator>Piotr ZAWADZKI</creator>
        
        <subject>quantum cryptography; quantum secure direct communication; privacy amplification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>A secure quantum deterministic communication protocol is described. The protocol is based on the transmission of quantum states from unbiased bases and exploits no entanglement. It is composed of two main components: a quasi-secure quantum communication layer supported by a suitable classical message preprocessing layer. Contrary to many other propositions, it does not require large quantum registers. A security level comparable to classic block ciphers is achieved by a specially designed, purely classic, message pre- and post-processing. However, unlike classic communication, no key agreement is required. The protocol is also designed in such a way that noise in the quantum channel works to the advantage of legitimate users, improving the security level of the communication.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%203-%20Confidential%20Deterministic%20Quantum%20Communication%20Using%20Three%20Quantum%20States.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On the transmission capacity of quantum networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021102</link>
        <id>10.14569/IJACSA.2011.021102</id>
        <doi>10.14569/IJACSA.2011.021102</doi>
        <lastModDate>2012-07-01T10:24:44.6970000+00:00</lastModDate>
        
        <creator>Sandra K&#246;nig</creator>
        
        <creator>Stefan Rass</creator>
        
        <subject>Quantum network; Quantum cryptography; network transmission capacity; queuing network; system security.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(11), 2011</description>
        <description>We provide various results about the transmission capacity of quantum networks. Our primary focus is on algorithmic methods to efficiently compute upper-bounds to the traffic that the network can handle at most, and to compute lower-bounds on the likelihood that a customer has to wait for service due to network congestion. This establishes analogous assertions as derived from Erlang B or Erlang C models for standard telecommunications. Our proposed methods, while specifically designed for quantum networks, do neither hinge on a particular quantum key distribution technology nor on any particular routing scheme. We demonstrate the feasibility of our approach using a worked example. Moreover, we explicitly consider two different architectures for quantum key management, one of which employs individual key-buffers for each relay connection, the other using a shared key-buffer for every transmission. We devise specific methods for analyzing the network performance depending on the chosen key-buffer architecture, and our experiments led to quite different results for the two variants.</description>
        <description>http://thesai.org/Downloads/Volume2No11/Paper%202-%20On%20the%20transmission%20capacity%20of%20quantum%20networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Conceptual Level Design of Semi-structured Database System: Graph-semantic Based Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021019</link>
        <id>10.14569/IJACSA.2011.021019</id>
        <doi>10.14569/IJACSA.2011.021019</doi>
        <lastModDate>2012-07-01T10:24:38.0300000+00:00</lastModDate>
        
        <creator>Anirban Sarkar</creator>
        
        <subject>Semi-structured Data; XML; XSD; Conceptual Modeling; Semi-structured Data Modeling; XML Modeling.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>This paper proposes a graph-semantic based conceptual model for semi-structured database systems, called GOOSSDM, to conceptualize the different facets of such systems in the object-oriented paradigm. The model defines a set of graph-based formal constructs, a variety of relationship types with participation constraints, and a rich set of graphical notations to specify the conceptual-level design of a semi-structured database system. The proposed design approach facilitates modeling of irregular, heterogeneous, hierarchical and non-hierarchical semi-structured data at the conceptual level. Moreover, GOOSSDM is capable of modeling XML documents at the conceptual level with support for document-centric design, ordering and disjunction characteristics. A rule-based mechanism for transforming a GOOSSDM schema into an equivalent XML Schema Definition (XSD) is also proposed. The concepts of the proposed conceptual model have been implemented using the Generic Modeling Environment (GME).</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2019-Conceptual%20Level%20Design%20of%20Semi-structured%20Database%20System%20Graph-semantic%20Based%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cryptanalysis of An Advanced Authentication Scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021018</link>
        <id>10.14569/IJACSA.2011.021018</id>
        <doi>10.14569/IJACSA.2011.021018</doi>
        <lastModDate>2012-07-01T10:24:31.7230000+00:00</lastModDate>
        
        <creator>Sattar J Aboud</creator>
        
        <creator>Abid T. Al Ajeeli</creator>
        
        <subject>authentication protocol, offline password guessing attack, clogging attack.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>In this paper we cryptanalyse and improve the security of a password authentication protocol using smart cards proposed by Song. This protocol has already been shown to be prone to the offline password guessing attack. We perform an additional cryptanalysis on the scheme and find that it is also vulnerable to the clogging attack, a type of denial-of-service attack. We observe that all smart-card-based authentication schemes that, like the scheme by Song, require the server to perform computationally exhaustive modular exponentiation (such as the scheme by Xu et al.) are vulnerable to the clogging attack. We then propose an enhancement to the scheme to resist the clogging attack.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2018-Cryptanalysis%20of%20an%20Advanced%20Authentication%20Scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Retrieval of Images Using DCT and DCT Wavelet Over Image Blocks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021017</link>
        <id>10.14569/IJACSA.2011.021017</id>
        <doi>10.14569/IJACSA.2011.021017</doi>
        <lastModDate>2012-07-01T10:24:25.0700000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Kavita Sonawane</creator>
        
        <subject>DCT; DCT wavelet; Euclidean distance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>This paper introduces a new CBIR system based on two different approaches in order to achieve retrieval efficiency and accuracy. Color and texture information is extracted and used to form the feature vector. For texture feature extraction the system applies the DCT and the DCT wavelet transform to generate the feature vectors of the query and database images. Color information extraction includes separating the image into R, G and B planes. Each plane is further divided into 4 blocks, and row mean vectors are calculated for each block. DCT and DCT wavelet are applied over the row mean vector of each block separately, yielding 4 sets of DCT and DCT wavelet coefficients respectively. A few coefficients are selected from each block and arranged in consecutive order to form the feature vector of the image. Variable-size feature vectors are formed by changing the number of coefficients selected from each row vector; in total, 18 different sets are obtained by changing the number of coefficients selected from each block. The two feature databases obtained using DCT and DCT wavelet are then tested using 100 query images from 10 different categories. Euclidean distance is used as the similarity measure to compare image features: the computed distances are sorted into ascending order, and the cluster of the first 100 images is selected to count the images relevant to the query image. Results are further refined using second-level thresholding, which applies three criteria to the first-level results. The results show better performance for the DCT wavelet compared to the DCT transform.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2017-Retrieval%20of%20Images%20Using%20DCT%20and%20DCT%20Wavelet%20Over%20Image%20Blocks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality EContent Design using Reusability approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021016</link>
        <id>10.14569/IJACSA.2011.021016</id>
        <doi>10.14569/IJACSA.2011.021016</doi>
        <lastModDate>2012-07-01T10:24:20.8370000+00:00</lastModDate>
        
        <creator>Senthil Kumar J</creator>
        
        <creator>Dr. S. K. Srivatsa, Ph.D.</creator>
        
        <subject>Technology, paradigm, network, learning objects.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Technology is ever-changing, and major technological innovations can cause paradigm shifts. The computer network known as the Internet is one such innovation. After effecting sweeping changes in the way people communicate and do business, the Internet is poised to bring about a paradigm shift in the way people learn. Consequently, a major change may also be coming in the way educational materials are designed, developed, and delivered to the learner. An instructional technology called “learning objects” currently leads other candidates for the position of technology of choice in the next generation of instructional design, development, and delivery. This paper aims to address the reusability, generativity, adaptability, and scalability of content designed using learning objects. Object-orientation highly values the creation of components (called “objects”) that can be reused in multiple contexts; this is the fundamental idea behind learning objects.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2016-Quality%20EContent%20Design%20using%20Reusability%20approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Study Of Indian Banks Websites For Cyber Crime Safety Mechanism</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021014</link>
        <id>10.14569/IJACSA.2011.021014</id>
        <doi>10.14569/IJACSA.2011.021014</doi>
        <lastModDate>2012-07-01T10:24:14.1670000+00:00</lastModDate>
        
        <creator>Susheel Chandra Bhatt</creator>
        
        <creator>Durgesh Pant</creator>
        
        <subject>Cyber, Encryption, Phishing, Secure Socket Layer.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Human society has undergone tremendous changes at the social level from the beginning, and at the technological level ever since the rise of technologies. Technology has changed human life in every manner and in every sector, and banking is one of them. Banking in India originated in the last decades of the 18th century, and since that time the banking sector has applied different means of providing facilities and security to the common man with regard to money. Security issues play an extremely important role in the implementation of technologies, especially in the banking sector, and become even more critical when it comes to cyber security, which is at the core of banking. After the arrival of the Internet and the WWW, the banking sector has changed completely, especially in terms of security, because money is now in your hand at a single click and users have a number of choices for managing their money through different kinds of methods. In this paper an attempt has been made to put forward various issues concerning Indian banks&#39; websites with respect to cyber crime safety mechanisms.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2014-Study%20of%20Indian%20Banks%20Websites%20for%20Cyber%20Crime%20Safety%20Mechanism.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison of Workflow Scheduling Algorithms in Cloud Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021013</link>
        <id>10.14569/IJACSA.2011.021013</id>
        <doi>10.14569/IJACSA.2011.021013</doi>
        <lastModDate>2012-07-01T10:24:07.8630000+00:00</lastModDate>
        
        <creator>Navjot Kaur</creator>
        
        <creator>Taranjit Singh Aulakh</creator>
        
        <creator>Rajbir Singh Cheema</creator>
        
        <subject> Cloud Computing, Workflows, Scheduling, Makespan, Task ordering, Resource Allocation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Cloud computing has gained popularity in recent times. Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on demand, like a public utility; it uses the Internet and central remote servers to maintain data and applications, allowing consumers and businesses to use applications without installation and to access their personal files from any computer with Internet access. The main aim of this work is to study various problems, issues and types of scheduling algorithms for cloud workflows, as well as to design new workflow algorithms for a cloud workflow management system. The proposed algorithms are implemented on a real-time cloud developed using Microsoft .NET technologies. The algorithms are compared with each other on the basis of parameters such as total execution time, execution time for the algorithm, and estimated execution time. Experimental results generated via simulation show that Algorithm 2 is much better than Algorithm 1, as it reduces the makespan time.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2013-Comparison%20of%20Workflow%20Scheduling%20Algorithms%20in%20Cloud%20Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Experimental Improvement Analysis of Loss Tolerant TCP (LT-TCP) For Wireless Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021012</link>
        <id>10.14569/IJACSA.2011.021012</id>
        <doi>10.14569/IJACSA.2011.021012</doi>
        <lastModDate>2012-07-01T10:24:01.5530000+00:00</lastModDate>
        
        <creator>Abdullah Al Mamun</creator>
        
        <creator>Sumaya kazary</creator>
        
        <creator>Momotaz Begum</creator>
        
        <creator>Md. Rubel</creator>
        
        <subject> LT-TCP, Mix reliability, Timeouts, Congestion avoidance, end-to-end Algorithms, Compression technique, Packet erasure rate, TCP SACK, RFEC, PFEC, RAR, ECN, throughput, RTT, Goodput.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Nowadays TCP is a widely used Internet protocol, but its main problem is packet loss due to congestion. In this work we propose a new Loss Tolerant TCP (LT-TCP), an enhancement of TCP which makes it robust and applicable to extreme wireless environments. In the proposed LT-TCP, two additional techniques, data compression and data header compression, are added to the existing LT-TCP. Our adaptive method reduces the total volume of data and the packet size, which minimizes congestion and increases the reliability of wireless communication. ECN is used to signal random data packet loss and disruption. The overhead of Forward Error Control (FEC) is imposed in a just-in-time manner, with the goal of maximizing performance even when the path characteristics are uncertain. This proposal shows that LT-TCP performs better than regular TCP and that it is possible to reduce packet losses by up to 40-50%.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2012-An%20Experimental%20Improvement%20Analysis%20of%20Loss%20Tolerant%20TCP%20(LT-TCP)%20For%20Wireless%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Asynchronous Checkpointing And Optimistic Message Logging For Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021011</link>
        <id>10.14569/IJACSA.2011.021011</id>
        <doi>10.14569/IJACSA.2011.021011</doi>
        <lastModDate>2012-07-01T10:23:54.6070000+00:00</lastModDate>
        
        <creator>Ruchi Tuli</creator>
        
        <creator>Parveen Kumar</creator>
        
        <subject> MANETs, clusterhead, checkpointing, pessimistic logging, fault tolerance, Mobile Host.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>In recent years, advancements in wireless communication technology and mobile computing have fueled a steady increase in both the number and the types of applications for wireless networks. Wireless networks can roughly be classified into cellular networks, which use dedicated infrastructure (like base stations), and ad hoc networks without infrastructure. A Mobile Ad Hoc Network (MANET) is a collection of mobile nodes that can communicate with each other using multihop wireless links without any fixed infrastructure or centralized controller. Since this type of network exhibits a dynamic topology, that is, the nodes move very frequently, it is hard to maintain even intermittent connectivity in this scenario, and fault tolerance is one of the key issues for MANETs. In a cluster federation, clusters are gathered to provide huge computing power. Clustering methods allow fast connection as well as better routing and topology management of mobile ad hoc networks. To work efficiently on such systems, network characteristics have to be taken into account; e.g., the latency between two nodes of different clusters is much higher than the latency between two nodes of the same cluster. In this paper, we present a message logging protocol well suited to providing fault tolerance for cluster federations in mobile ad hoc networks. The proposed scheme is based on optimistic message logging.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2011-Asynchronous%20Checkpointing%20and%20Optimistic%20Message%20Logging%20for%20Mobile%20Ad%20Hoc%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Ontology- and Constraint-based Approach for Dynamic Personalized Planning in Renal Disease Management</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021010</link>
        <id>10.14569/IJACSA.2011.021010</id>
        <doi>10.14569/IJACSA.2011.021010</doi>
        <lastModDate>2012-07-01T10:23:48.3430000+00:00</lastModDate>
        
        <creator>Normadiah Mahiddin</creator>
        
        <creator>Yu-N Cheah</creator>
        
        <creator>Fazilah Haron</creator>
        
        <subject>patient care planning; treatment protocols; dynamic treatment planning; personal health services.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Healthcare service providers, including those involved in renal disease management, are concerned about the planning of their patients’ treatments. With efforts to automate the planning process, shortcomings are apparent due to the following reasons: (1) current plan representations or ontologies are too fine grained, and (2) current planning systems are often static. To address these issues, we introduce a planning system called Dynamic Personalized Planner (DP Planner) which consists of: (1) a suitably light-weight and generic plan representation, and (2) a constraint-based dynamic planning engine. The plan representation is based on existing plan ontologies, and developed in XML. With the available plans, the planning engine focuses on personalizing pre-existing (or generic) plans that can be dynamically changed as the condition of the patient changes over time. To illustrate our dynamic personalized planning approach, we present an example in renal disease management. In a comparative study, we observed that the resulting DP Planner possesses features that rival that of other planning systems, in particular that of Asgaard and O-Plan.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%2010-An%20Ontology-%20and%20Constraint-based%20Approach%20for%20Dynamic%20Personalized%20Planning%20in%20Renal%20Disease%20Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Classification and Segmentation of Brain Tumor in CT Images using Optimal Dominant Gray level Run length Texture Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021009</link>
        <id>10.14569/IJACSA.2011.021009</id>
        <doi>10.14569/IJACSA.2011.021009</doi>
        <lastModDate>2012-07-01T10:23:42.0000000+00:00</lastModDate>
        
        <creator>A. Padma</creator>
        
        <creator>R. Sukanesh</creator>
        
        <subject>Dominant Gray Level Run Length Matrix method (DGLRLM); Support Vector Machine (SVM); Spatial Gray Level Dependence Matrix method (SGLDM); Genetic Algorithm (GA).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>Tumor classification and segmentation from brain computed tomography (CT) image data is an important but time-consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, the similarity between tumor and normal tissue. This paper presents an efficient segmentation algorithm for extracting brain tumors in computed tomography images using a Support Vector Machine (SVM) classifier. The objective of this work is to compare the dominant gray level run length feature extraction method with the wavelet-based texture feature extraction method and the SGLDM method. A dominant gray level run length texture feature set is derived from the selected region of interest (ROI) of the image. The optimal texture features are selected using a Genetic Algorithm. The selected optimal run length texture features are fed to the SVM classifier to classify and segment the tumor from brain CT images. The method is applied to real CT data comprising 120 normal and abnormal tumor images. The results are compared with radiologist-labeled ground truth. Quantitative analysis between ground truth and the segmented tumor is presented in terms of classification accuracy. From the analysis and performance measures such as classification accuracy, it is inferred that brain tumor classification and segmentation is best done using SVM with the dominant run length feature extraction method rather than SVM with the wavelet-based texture feature extraction method or SVM with the SGLDM method. In this work, we have attempted to improve computing efficiency by selecting the most suitable feature extraction method that can be used for classification and segmentation of brain tumors in CT images efficiently and accurately. An average accuracy rate above 97% was obtained using this classification and segmentation algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%209-Automatic%20Classification%20and%20Segmentation%20of%20Brain%20Tumor%20in%20CT%20Images%20using%20Optimal%20Dominant%20Gray%20level%20Run%20length%20Texture%20Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Plant Leaf Recognition using Shape based Features and Neural Network classifiers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021007</link>
        <id>10.14569/IJACSA.2011.021007</id>
        <doi>10.14569/IJACSA.2011.021007</doi>
        <lastModDate>2012-07-01T10:23:35.3330000+00:00</lastModDate>
        
        <creator>Jyotismita Chaki</creator>
        
        <creator>Ranjan Parekh</creator>
        
        <subject>plant recognition; moment invariants; centroid-radii model; neural network; computer vision.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>This paper proposes an automated system for recognizing plant species based on leaf images. Plant leaf images corresponding to three plant types are analyzed using two different shape modeling techniques, the first based on the Moments-Invariant (M-I) model and the second on the Centroid-Radii (C-R) model. For the M-I model, the first four normalized central moments have been considered and studied in various combinations, viz. individually and in joint 2-D and 3-D feature spaces, to produce optimum results. For the C-R model, an edge detector has been used to identify the boundary of the leaf shape, and 36 radii at 10-degree angular separation have been used to build the feature vector. To further improve the accuracy, a hybrid set of features involving both the M-I and C-R models has been generated and explored to find whether the combined feature vector can lead to better performance. Neural networks are used as classifiers for discrimination. The data set consists of 180 images divided into three classes with 60 images each. Accuracies ranging from 90%-100% are obtained, which are comparable to the best figures reported in extant literature.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%207-Plant%20Leaf%20Recognition%20using%20Shape%20based%20Features%20and%20Neural%20Network%20classifiers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Statistical Approach For Latin Handwritten Digit Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021006</link>
        <id>10.14569/IJACSA.2011.021006</id>
        <doi>10.14569/IJACSA.2011.021006</doi>
        <lastModDate>2012-07-01T10:23:28.9900000+00:00</lastModDate>
        
        <creator>Ihab Zaqout</creator>
        
        <subject>Digit recognition; freeman chain coding; feature extraction; classification.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>A simple method based on statistical measurements for Latin handwritten digit recognition is proposed in this paper. Firstly, a preprocessing step starts with thresholding the gray-scale digit image into a binary image, after which noise removal, spur removal, and thinning are performed. Secondly, to reduce the search space, the region of interest (ROI) is cropped from the preprocessed image; a Freeman chain code template is then applied and five feature sets are extracted from each digit image. These include the number of termination points, their coordinates relative to the center of the ROI, Euclidean distances, orientations in terms of angles, and other statistical properties such as minor-to-major axis length ratio, area, and others. Finally, six categories are created based on the relation between the number of termination points and possible digits. The present method is applied and tested on the training set (60,000 images) and test set (10,000 images) of the MNIST handwritten digit database. Our experiments report a correct classification of 92.9041% for the testing set and 95.0953% for the training set.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%206-A%20Statistical%20Approach%20For%20Latin%20Handwritten%20Digit%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Web Service Architecture for a Meta Search Engine</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021005</link>
        <id>10.14569/IJACSA.2011.021005</id>
        <doi>10.14569/IJACSA.2011.021005</doi>
        <lastModDate>2012-07-01T10:23:23.0500000+00:00</lastModDate>
        
        <creator>K Srinivas</creator>
        
        <creator>P.V.S. Srinivas</creator>
        
        <creator>A. Govardhan</creator>
        
        <subject>Meta Search Engine; Search engine; Web Services.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>With the rapid advancements in Information Technology, information retrieval on the Internet is gaining importance day by day. Nowadays there are millions of websites and billions of homepages available on the Internet. Search engines are the essential tools for retrieving the required information from the Web. But existing search engines have many problems, such as limited scope and imbalance in accessing sites. So, the effectiveness of a search engine plays a vital role. Meta search engines are systems that can provide effective information by accessing multiple existing search engines, such as Dog Pile and Meta Crawler, but most of them cannot successfully operate in a heterogeneous and fully dynamic web environment. In this paper we propose a Web Service Architecture for a Meta Search Engine to cater to the needs of a heterogeneous and dynamic web environment. The objective of our proposal is to exploit most of the features offered by Web Services through the implementation of a Web Service Meta Search Engine.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%205-Web%20Service%20Architecture%20for%20a%20Meta%20Search%20Engine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Route Maintenance Approach For Link Breakage Prediction In Mobile Ad Hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021004</link>
        <id>10.14569/IJACSA.2011.021004</id>
        <doi>10.14569/IJACSA.2011.021004</doi>
        <lastModDate>2012-07-01T10:23:16.7300000+00:00</lastModDate>
        
        <creator>Khalid Zahedi</creator>
        
        <creator>Abdul Samad Ismail</creator>
        
        <subject>MANET; link breakage prediction; DSR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>A Mobile Ad hoc Network (MANET) consists of a group of mobile nodes that can communicate with each other without the need for infrastructure. The movement of nodes in a MANET is random; therefore MANETs have a dynamic topology. Because of this dynamic topology, link breakages in these networks are common. This problem causes high data loss and delay. In order to reduce these problems, the idea of link breakage prediction has emerged. In link breakage prediction, the availability of a link is evaluated, and a warning is issued if there is a possibility of imminent link breakage. In this paper a new approach to link breakage prediction in MANETs is proposed. This approach has been implemented on the well-known Dynamic Source Routing protocol (DSR). The new mechanism was able to decrease the packet loss and delay that occur in the original protocol.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%204-Route%20Maintenance%20Approach%20for%20Link%20Breakage%20Predicttion%20in%20Mobile%20Ad%20Hoc%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Healthcare Providers’ Perceptions towards Health Information Applications at King Abdul-Aziz Medical City, Saudi Arabia</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/</link>
        <id></id>
        <doi></doi>
        <lastModDate>2012-07-01T10:23:10.3700000+00:00</lastModDate>
        
        <creator>Abeer Al-Harbi</creator>
        
        <subject>Healthcare providers, Health Information Technology, Computerized Patient Record, King Abdul-Aziz Medical City.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>The purpose of this study was to assess the perceptions of healthcare providers towards health information technology applications at King Abdul-Aziz Medical City (KAMC) in terms of benefits, barriers, and motivation to use these applications. Data Collection: The study population consists of all healthcare providers working at KAMC. A sample size of 623 was drawn from a population of 7493 healthcare providers using a convenience random sampling method. Of 623 questionnaires distributed, 377 were returned, giving a response rate of 60.5 percent. Measurement: A self-administered questionnaire was developed based on an extended literature review. The questionnaire comprised 25 statements measuring benefits, barriers, and motivation to use health information applications, to be responded to on a five-point Likert scale. In addition, the questionnaire included questions on demographic and organizational variables. Results: The results show that the majority of healthcare providers had good knowledge and skills in information technology, as most of them use KAMC health information applications regularly and/or had training courses in the field. The results indicated that training has a significant positive effect on health providers&#39; IT knowledge and skills. The majority of healthcare providers perceived that the information technology applications in KAMC are valuable and beneficial to both patients and KAMC. However, the healthcare providers were split over the barriers to HIT use in KAMC. As for drivers, the results showed that healthcare providers generally would be motivated to use IT applications in KAMC by the provision of new applications and training, involvement in changing the hospital&#39;s work procedures, and the provision of technical support. Finally, the results showed that the perceptions of healthcare providers on benefits, barriers and motives were influenced by gender, occupation and training. However, the effect of these variables on healthcare providers&#39; perceptions of the benefits, barriers and motives of IT use was inconsistent. Conclusion: Despite the perceived benefits of and motives for health information applications, there were many barriers identified by healthcare providers: an insufficient number of computers, frequent system downtime, and the time-consuming nature of computerized systems. Furthermore, there were significant differences in the perceptions of healthcare providers towards benefits, barriers, and motives of health information technology with respect to gender, occupation, and training.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%203-Healthcare%20Providers_%20Perceptions%20towards%20Health%20Information%20Applications%20at%20King%20Abdul-Aziz%20Medical%20City,%20Saudi%20Arabia.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model of Temperature Dependence Shape of Ytterbium-doped Fiber Amplifier Operating at 915 nm Pumping Configuration</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021002</link>
        <id>10.14569/IJACSA.2011.021002</id>
        <doi>10.14569/IJACSA.2011.021002</doi>
        <lastModDate>2012-07-01T10:23:04.0770000+00:00</lastModDate>
        
        <creator>Abdel Hakeim M</creator>
        
        <creator>Fady I. EL-Nahal</creator>
        
        <subject>YDFA; Gain; Noise Figure.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>We numerically analyze the temperature dependence of an ytterbium-doped fiber amplifier (YDFA) operating at 915 nm, investigating the variation of its gain and noise figure with temperature. The temperature-dependent gain and noise figure variation with YDFA length are numerically obtained for the temperature range of +20 °C to +70 °C. The results show that good intrinsic output stability against temperature change can be achieved in ytterbium-doped fiber amplifiers even when operating in a high-gain regime with a small input signal. This result demonstrates great potential for stable high-power laser communication systems based on ytterbium.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%202-Model%20of%20Temperature%20Dependence%20Shape%20of%20Ytterbium%20-doped%20Fiber%20Amplifier%20Operating%20at%20915%20nm%20Pumping%20Configuration.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Data Mining for Engineering Schools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.021001</link>
        <id>10.14569/IJACSA.2011.021001</id>
        <doi>10.14569/IJACSA.2011.021001</doi>
        <lastModDate>2012-07-01T10:22:58.1570000+00:00</lastModDate>
        
        <creator>Chady El Moucary</creator>
        
        <subject>Educational Data Mining; Classification and Regression Trees (CART); Relieff tool; Neural Networks; Prediction; Engineering Students’ Performance; Engineering Students’ Enrollment in Masters’ Studies.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(10), 2011</description>
        <description>The supervision of the academic performance of engineering students is vital during the early stages of their curricula. Indeed, their grades in specific core/major courses as well as their cumulative Grade Point Average (GPA) are decisive when pertaining to their ability to pursue Masters’ studies or graduate from a five-year Bachelor-of-Engineering program. Furthermore, these compelling strict requirements not only significantly affect the attrition rates in engineering studies (on top of probation and suspension) but also decide on grant management, courseware development, and the scheduling of programs. In this paper, we present a study that has a twofold objective. First, it attempts to correlate the aforementioned issues with engineering students’ performance in some key courses taken at early stages of their curricula; then, a predictive model is presented and refined in order to endow advisors and administrators with a powerful decision-making tool when tackling such highly important issues. The Matlab Neural Networks Pattern Recognition tool as well as Classification and Regression Trees (CART) are fully deployed with extensive cross validation and testing. Simulation and prediction results demonstrated a high level of accuracy and offered efficient analysis and information pertinent to the management of engineering schools and programs within the aforementioned perspective.</description>
        <description>http://thesai.org/Downloads/Volume2No10/Paper%201-Data%20Mining%20for%20Engineering%20Schools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A rule-based Afan Oromo Grammar Checker</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020823</link>
        <id>10.14569/IJACSA.2011.020823</id>
        <doi>10.14569/IJACSA.2011.020823</doi>
        <lastModDate>2012-07-01T10:20:44.9670000+00:00</lastModDate>
        
        <creator>Debela Tesfaye</creator>
        
        <subject>Afan Oromo grammar checker; rule-based grammar checker.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Natural language processing (NLP) is a subfield of computer science with strong connections to artificial intelligence. One area of NLP is concerned with creating proofing systems, such as grammar checkers. A grammar checker determines the syntactical correctness of a sentence and is mostly used in word processors and compilers. For languages such as Afan Oromo, advanced tools have been lacking and are still in the early stages. In this paper a rule-based grammar checker is presented. The rule base is entirely developed from, and dependent on, the morphology of the language. The checker was evaluated and showed promising results.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2023-A%20rule-based%20Afan%20Oromo%20Grammar%20Checker%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Neutrosophic Relational Database Decomposition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020822</link>
        <id>10.14569/IJACSA.2011.020822</id>
        <doi>10.14569/IJACSA.2011.020822</doi>
        <lastModDate>2012-07-01T10:20:38.5770000+00:00</lastModDate>
        
        <creator>Meena Arora</creator>
        
        <creator>Ranjit Biswas</creator>
        
        <creator>Dr. U.S.Pandey</creator>
        
        <subject>Neutrosophic Logic; Ranking of intervals; &#223;-value of an interval; Neutrosophic set; Neutrosophic Relation; Rank Neutrosophic 1NF.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In this paper we present a method of decomposing a neutrosophic database relation with neutrosophic attributes into basic relational form. Our objective is to be capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or vague relations can only handle incomplete information. The authors use the Neutrosophic Relational database [8],[2] to show how imprecise data can be handled in a relational schema.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2022-Neutrosophic%20Relational%20Database%20Decomposition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multimodal Optimization using Self-Adaptive Real Coded Genetic Algorithm with K-means &amp; Fuzzy C-means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020821</link>
        <id>10.14569/IJACSA.2011.020821</id>
        <doi>10.14569/IJACSA.2011.020821</doi>
        <lastModDate>2012-07-01T10:20:32.1670000+00:00</lastModDate>
        
        <creator>Vrushali K Bongirwar</creator>
        
        <creator>Rahila Patel</creator>
        
        <subject>Genetic Algorithm (GA); self-adaptation; Multimodal Function Optimization; K-Means Clustering; Fuzzy C-Means Clustering.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Many engineering optimization tasks involve finding more than one optimum solution. These problems are considered multimodal function optimization problems. A genetic algorithm can be used to search for multiple optima, but some special mechanism is required to find all optimum points. Different genetic algorithms have been proposed, designed and implemented for multimodal function optimization. In this paper, we propose an innovative approach for multimodal function optimization. The proposed genetic algorithm is a self-adaptive genetic algorithm and uses a clustering algorithm for finding multiple optima. Experiments have been performed on various multimodal optimization functions. The results show that the proposed algorithm gives better performance on some multimodal functions.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2021-Multimodal%20Optimization%20using%20Self-Adaptive%20Real%20Coded%20Genetic%20Algorithm%20with%20K-means%20&amp;%20Fuzzy%20C-means%20Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Control Systems application in Java based Enterprise and Cloud Environments – A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020820</link>
        <id>10.14569/IJACSA.2011.020820</id>
        <doi>10.14569/IJACSA.2011.020820</doi>
        <lastModDate>2012-07-01T10:20:25.7600000+00:00</lastModDate>
        
        <creator>Ravi Kumar Gullapalli</creator>
        
        <creator>Dr. Chelliah Muthusamy</creator>
        
        <creator>Dr.A.Vinaya Babu</creator>
        
        <subject>Control Systems, Java, Web Servers, Application Servers, Web Services, Enterprise Service Bus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Classical feedback control systems have been a successful theory in many engineering applications such as the electrical power, process, and manufacturing industries. For more than a decade there has been active research exploring feedback control system applications in computing, and some of the results have been applied to commercial software products. A good number of review papers on this subject exist, giving a high-level overview and explaining specific applications such as load balancing or CPU utilization power management in data centers. We observe that the majority of control system applications are in Web and Application Server environments. We discuss how control systems are applied to Web and Application (JEE) Servers that are deployed in enterprise and cloud environments. Our paper presents this review with a specific emphasis on Java based Web, Application and Enterprise Service Bus environments. We conclude with future research directions in applying control systems to enterprise and cloud environments.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2020-Control%20Systems%20application%20in%20Java%20based%20Enterprise%20and%20Cloud%20Environments%20%E2%80%93%20A%20Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>AODV Robust (AODVR): An Analytic Approach to Shield Ad-hoc Networks from Black Holes</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020819</link>
        <id>10.14569/IJACSA.2011.020819</id>
        <doi>10.14569/IJACSA.2011.020819</doi>
        <lastModDate>2012-07-01T10:20:19.3470000+00:00</lastModDate>
        
        <creator>Mohammad Abu Obaida</creator>
        
        <creator>Shahnewaz Ahmed Faisal</creator>
        
        <creator>Md. Abu Horaira</creator>
        
        <creator>Tanay Kumar Roy</creator>
        
        <subject>Ad-hoc Networks; Wireless Networks; MANET; RT; AODV; Ad hoc On-Demand Distance Vector; Black-hole; OPNET.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Mobile ad-hoc networks are vulnerable to several types of malicious routing attacks; the black hole is one of these, where a malicious node advertises having the shortest path to all other nodes in the network by sending fake routing replies. As a result, the destinations are deprived of the desired information. In this paper, we propose a method called AODV Robust (AODVR), a revision of the AODV routing protocol, in which black holes are detected as soon as they emerge and other nodes are alerted, thereby isolating the black hole and protecting the network from such malicious threats. In the AODVR method, the routers formulate a range of acceptable sequence numbers and define a threshold. If a node exceeds the threshold several times, it is blacklisted, thereby increasing the network robustness.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2019-AODV%20Robust%20(AODVR)%20An%20Analytic%20Approach%20to%20Shield%20Ad-hoc%20Networks%20from%20Black%20Holes.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Bidirectional WDM-Radio over Fiber System with Sub Carrier Multiplexing Using a Reflective SOA and Cyclic AWGs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020818</link>
        <id>10.14569/IJACSA.2011.020818</id>
        <doi>10.14569/IJACSA.2011.020818</doi>
        <lastModDate>2012-07-01T10:20:12.9330000+00:00</lastModDate>
        
        <creator>Fady I El-Nahal</creator>
        
        <subject>Radio over Fiber (RoF), wavelength-division multiplexing (WDM), Sub-carrier modulation (SCM), arrayed waveguide grating (AWG).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>A bidirectional SCM-WDM RoF network using a reflective semiconductor optical amplifier (RSOA) and cyclic arrayed waveguide gratings (AWGs) is proposed and demonstrated. The proposed RoF network utilizes Sub-Carrier Multiplexed (SCM) signals for the down-link and re-modulated on-off keying (OOK) signals for the up-link. In this paper, a 50 km range colorless WDM-RoF link is demonstrated for both 1 Gbit/s downstream and upstream signals. The BER performance of our scheme shows that it is a practical solution for simultaneously meeting the data rate and cost-efficiency requirements of optical links in tomorrow’s RoF access networks.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2018-Bidirectional%20WDM-Radio%20over%20Fiber%20System%20with%20Sub%20Carrier%20Multiplexing%20Using%20a%20Reflective%20SOA%20and%20Cyclic%20AWGs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>CO2 Concentration Change Detection in Time and Space Domains by Means of Wavelet Analysis of MRA: Multi-Resolution Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020817</link>
        <id>10.14569/IJACSA.2011.020817</id>
        <doi>10.14569/IJACSA.2011.020817</doi>
        <lastModDate>2012-07-01T10:20:06.8930000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>wavelet analysis; carbon dioxide distribution; change detection.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>A method for change detection in time and space domains based on wavelet MRA (Multi-Resolution Analysis) is proposed. Measuring stations for carbon dioxide concentration are sparsely situated, and they monitor the concentration on an irregular basis, so interpolation techniques are required to create carbon dioxide concentration data at regular intervals in both time and space. MRA in time and space is then applied to the interpolated carbon dioxide concentration, and reconstruction is performed without the low-frequency LLL component, so that relatively large change locations and time periods are detected. Through an experiment with 11 years of carbon dioxide concentration data starting from 1990, provided by the WDCGG (World Data Centre for Greenhouse Gases), it is found that there is a seasonal change and that relatively large changes occur in El Niño years. It is also found that carbon dioxide is concentrated over the European continent.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2017-CO2%20Concentration%20Change%20Detection%20in%20Time%20and%20Space%20Domains%20by%20Means%20of%20Wavelet%20Analysis%20of%20MRA%20Multi-Resolution%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>e-Government Ethics: a Synergy of Computer Ethics, Information Ethics, and Cyber Ethics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020816</link>
        <id>10.14569/IJACSA.2011.020816</id>
        <doi>10.14569/IJACSA.2011.020816</doi>
        <lastModDate>2012-07-01T10:20:03.4800000+00:00</lastModDate>
        
        <creator>Arief Ramadhan</creator>
        
        <creator>Dana Indra Sensuse</creator>
        
        <creator>Aniati Murni Arymurthy</creator>
        
        <subject>e-Government; Ethics; Applied Ethics; Computer Ethics; Information Ethics; Cyber Ethics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Ethics has become an important part of interaction among human beings. This paper specifically discusses applied ethics as one type of ethics. Three applied ethics are reviewed in this paper: computer ethics, information ethics, and cyber ethics. Two aspects of each are reviewed, namely their definitions and the issues associated with them. The results of reviewing the three applied ethics are then used to define e-Government ethics and to formulate the issues of e-Government ethics. The position of e-Government ethics, based on the three preceding applied ethics, is also described in this paper. Computer ethics, information ethics, and cyber ethics are considered the foundations of e-Government ethics, and several other applied ethics could enrich e-Government ethics.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2016-e-Government%20Ethics%20a%20Synergy%20of%20Computer%20Ethics,%20Information%20Ethics,%20and%20Cyber%20Ethics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification Problem of Source Term of a Reaction Diffusion Equation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020815</link>
        <id>10.14569/IJACSA.2011.020815</id>
        <doi>10.14569/IJACSA.2011.020815</doi>
        <lastModDate>2012-07-01T10:19:57.0830000+00:00</lastModDate>
        
        <creator>Bo Zhang</creator>
        
        <subject>Fractional derivative; numerical difference scheme; gradient regularization method.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>This paper gives a numerical difference scheme with a Dirichlet boundary condition and proves the stability and convergence of the scheme; final numerical experiments also confirm the effectiveness of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2015-Identification%20Problem%20of%20Source%20Term%20of%20A%20Reaction%20Diffusion%20Equation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Guess and Determined Attack on Non Linear Modified SNOW 2.0 Using One LFSR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020814</link>
        <id>10.14569/IJACSA.2011.020814</id>
        <doi>10.14569/IJACSA.2011.020814</doi>
        <lastModDate>2012-07-01T10:19:50.6800000+00:00</lastModDate>
        
        <creator>Madiha Waris</creator>
        
        <creator>Malik Sikandar Hayat Khiyal</creator>
        
        <creator>Aihab Khan</creator>
        
        <subject>Linear Feedback Shift Register; Guess and Determined Attack; Finite State Machine.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Stream ciphers encrypt data bit by bit. In this research, a new model of the SNOW 2.0 stream cipher is proposed: non-linear modified SNOW 2.0 using one Linear Feedback Shift Register (LFSR), with a non-linear function embedded in the model. An analysis of the Guess and Determined (GD) attack has been carried out to check its security with respect to previous versions. The proposed model contains one LFSR along with the non-linear function, which increases the strength of the stream cipher and makes the static nature of modified SNOW 2.0 dynamic. Experimental analysis shows that the resulting mechanism provides more security than previous versions of modified SNOW 2.0, in which non-linearity was either not introduced or was introduced using two LFSRs. It is concluded that this version offers stronger protection of the plaintext against the Guess and Determined (GD) attack than the previous versions.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2014-Analysis%20of%20Guess%20and%20Determined%20Attack%20on%20Non%20Linear%20Modified%20SNOW%202.0%20Using%20One%20LFSR.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design of an Intelligent Combat Robot for war fields</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020813</link>
        <id>10.14569/IJACSA.2011.020813</id>
        <doi>10.14569/IJACSA.2011.020813</doi>
        <lastModDate>2012-07-01T10:19:44.3000000+00:00</lastModDate>
        
        <creator>S Bhargavi</creator>
        
        <creator>S.Manjunath</creator>
        
        <subject>Combat Robot; Wireless camera; Terror attack; Radio Operated; Self-Powered; Intruders</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>The objective of this paper is to minimize human casualties in terrorist attacks such as 26/11. The combat robot [1] has been designed to tackle such cruel terror attacks. This robot is radio operated, self-powered, and has all the controls of a normal car. A wireless camera has been installed on it so that it can monitor the enemy remotely when required. It can silently enter enemy territory and send back information through its tiny camera eyes. This spy robot can be used in star hotels, shopping malls, jewellery showrooms, and other places where there may be a threat from intruders or terrorists. Since human life is always precious, these robots can replace soldiers facing terrorists in war areas.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2013-Design%20of%20an%20Intelligent%20Combat%20Robot%20for%20war%20field.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Reliable Software Using SPRT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020812</link>
        <id>10.14569/IJACSA.2011.020812</id>
        <doi>10.14569/IJACSA.2011.020812</doi>
        <lastModDate>2012-07-01T10:19:40.8930000+00:00</lastModDate>
        
        <creator>R Satya Prasad</creator>
        
        <creator>N. Supriya</creator>
        
        <creator>G.Krishna Mohan</creator>
        
        <subject>Exponential imperfect debugging; Sequential Probability Ratio Test; Maximum Likelihood Estimation; Decision lines; Software Reliability; Software failure data.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In classical hypothesis testing, large volumes of data must be collected before conclusions are drawn, which may take considerable time. Sequential analysis, however, can be adopted to decide very quickly whether the developed software is reliable or unreliable. The procedure adopted for this is the Sequential Probability Ratio Test (SPRT). In the present paper, we evaluate the performance of SPRT on time domain data using an exponential imperfect debugging model and analyze the results by applying it to five data sets. The parameters are estimated using Maximum Likelihood Estimation.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2012-Detection%20of%20Reliable%20Software%20Using%20SPRT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Architecture of a Web Warehouse based on Quality Evaluation Framework to Incorporate Quality Aspects in Web Warehouse Creation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020811</link>
        <id>10.14569/IJACSA.2011.020811</id>
        <doi>10.14569/IJACSA.2011.020811</doi>
        <lastModDate>2012-07-01T10:19:34.5130000+00:00</lastModDate>
        
        <creator>Umm-e Mariya Shah</creator>
        
        <creator>Azra Shamim</creator>
        
        <creator>Madiha Kazmi</creator>
        
        <subject>Data Warehouse; Web Warehouse; Quality Assessment; Quality Evaluation Framework; Enhanced Web Warehouse Architecture; WWW</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In recent years, the World Wide Web (WWW) has become a vast source of information about all areas of interest. Retrieving relevant information from the web space is difficult because there is no universal configuration and organization of web data. Taking advantage of data warehouse functionality and integrating it with the web to retrieve relevant data is the core concept of a web warehouse: a repository that stores relevant web data for business decision making. The basic function of a web warehouse is to collect and store information for analysis by users. The quality of web warehouse data strongly affects data analysis, so to enhance the quality of decision making, different quality dimensions must be incorporated into the web warehouse architecture. In this paper, an enhanced web warehouse architecture is proposed and discussed. The enhancement of the existing architecture is based on a quality evaluation framework: three layers are added to the existing architecture to ensure quality at various phases of web warehouse system creation. The source assessment, query evaluation, and data quality layers enhance the quality of the data stored in the web warehouse.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2011-Enhanced%20Architecture%20of%20a%20Web%20Warehouse%20based%20on%20Quality%20Evaluation%20Framework%20to%20Incorporate%20Quality%20Aspects%20in%20Web%20Warehouse%20Creation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transforming Higher Educational Institution Administration through ICT</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020810</link>
        <id>10.14569/IJACSA.2011.020810</id>
        <doi>10.14569/IJACSA.2011.020810</doi>
        <lastModDate>2012-07-01T10:19:28.1400000+00:00</lastModDate>
        
        <creator>J Meenakumari</creator>
        
        <creator>Dr. R. Krishnaveni</creator>
        
        <subject>Higher education, Information and Communication Technology, Integration, Knowledge administration, Information administration, e-administration.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>The rapid development of the Indian higher education sector has increased the focus on reforms in higher educational institution administration. Efficiency and accountability have become important elements, and the integration of Information and Communication Technology (ICT) into the educational administration process has become a necessity. The objective of this study is to determine the current extent of ICT integration in Indian higher education institutions. The factors contributing to the successful integration of ICT into higher education administration (i.e., knowledge administration and information administration, which together constitute e-administration) are also discussed.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%2010-Transforming%20Higher%20educational%20institution%20administration%20through%20ICT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of Locally Weighted Projection Regression Network for Concurrency Control in Computer Aided Design</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020809</link>
        <id>10.14569/IJACSA.2011.020809</id>
        <doi>10.14569/IJACSA.2011.020809</doi>
        <lastModDate>2012-07-01T10:19:21.7230000+00:00</lastModDate>
        
        <creator>A Muthukumaravel</creator>
        
        <creator>Dr.S.Purushothaman</creator>
        
        <creator>Dr.A.Jothi</creator>
        
        <subject>Concurrency Control, locally weighted projection regression, Transaction Locks, Time Stamping.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>This paper presents an implementation of the locally weighted projection regression (LWPR) network method for concurrency control while developing the dial of a fork using Autodesk Inventor 2008. The LWPR learns the objects and the type of transactions to be performed based on which node in the output layer of the network exceeds a threshold value. Learning stops once all the objects have been exposed to the LWPR. During testing, performance metrics are analyzed. We have attempted to use LWPR to store lock information when multiple users are working in Computer Aided Design (CAD). The memory requirements of the proposed method for processing locks during transactions are minimal.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%209-Implementation%20of%20Locally%20Weighted%20Projection%20Regression%20Network%20for%20Concurrency%20Control%20In%20Computer%20Aided%20Design.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Barriers in Adoption of Health Information Technology in Developing Societies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020808</link>
        <id>10.14569/IJACSA.2011.020808</id>
        <doi>10.14569/IJACSA.2011.020808</doi>
        <lastModDate>2012-07-01T10:19:13.9470000+00:00</lastModDate>
        
        <creator>Fozia Anwar</creator>
        
        <creator>Azra Shamim</creator>
        
        <subject>Health Technology; Health Information System; Barriers in e-Health.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>This paper develops a conceptual framework of the barriers faced by decision makers and management personnel in the health sector. The main theme of this paper is to give a clear understanding of the health technology adoption barriers faced by developing societies. Information about these barriers would be useful for policy makers deciding on a particular technology, so that they can fulfill the defined missions of their organizations. Developing a conceptual framework is the first step in building organizational capacity. Information technology in the health sector is spreading globally, and the use of health information technology offers evidence-based practice to promote health and human prosperity. Globalization of health information systems is inevitable for the establishment and promotion of the healthcare sector in developing societies, yet present health systems in these societies are inadequate to meet the needs of the population. The health sector of developing societies faces many barriers in establishing and promoting health information systems, including lack of infrastructure, cost, technical sophistication, lack of skilled human resources, and lack of e-readiness among medical professionals. In this paper, the authors conducted a survey of hospitals in Pakistan to identify and categorize adoption barriers in health information technology. The existing health system should be transformed by using HIT to improve the health status of the population by eliminating the barriers identified in this paper.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%208-Barriers%20in%20Adoption%20of%20Health%20Information%20Technology%20in%20Developing%20Societies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IMPLEMENTATION OF NODE ENERGY BASED ON ENCRYPTION KEYING</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020807</link>
        <id>10.14569/IJACSA.2011.020807</id>
        <doi>10.14569/IJACSA.2011.020807</doi>
        <lastModDate>2012-07-01T10:19:07.5400000+00:00</lastModDate>
        
        <creator>S Bhargavi</creator>
        
        <creator>Ranjitha B.T</creator>
        
        <subject>NEBEK; Network; protocol; communication; RC4 encryption; dynamic key; virtual energy; Statistical mode; Operational mode; Forwarding Node Packets.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Designing cost-efficient, secure network protocols is a challenging problem because the nodes in a network are themselves resource-limited. Since communication cost is the most dominant factor in any network, we introduce an energy-efficient Node Energy-Based Encryption and Keying (NEBEK) scheme that significantly reduces the number of transmissions needed for rekeying to avoid stale keys. NEBEK is a secure communication framework in which sensed data is encoded using a permutation code generated via the RC4 encryption mechanism. The key to the RC4 encryption mechanism changes dynamically as a function of the residual energy of the node. Thus, a one-time dynamic key is employed for one packet only, and different keys are used for the successive packets of the stream. The intermediate nodes along the path to the sink are able to verify the authenticity and integrity of incoming packets using a value of the key predicted from the sender’s virtual energy, thus requiring no specific rekeying messages. NEBEK is able to efficiently detect and filter false data injected into the network by malicious outsiders. We have evaluated NEBEK’s feasibility and performance analytically and through software simulations. Our results show that NEBEK, without incurring transmission overhead (increasing packet size or sending control messages for rekeying), is able to eliminate malicious data from the network in an energy-efficient manner.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%207-IMPLEMENTATION%20OF%20NODE%20ENERGY%20BASED%20ON%20ENCRYPTION%20KEYING.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Spectral Subtraction Based Noise Reduction Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020806</link>
        <id>10.14569/IJACSA.2011.020806</id>
        <doi>10.14569/IJACSA.2011.020806</doi>
        <lastModDate>2012-07-01T10:19:01.1370000+00:00</lastModDate>
        
        <creator>Zhixin Chen</creator>
        
        <subject>Embedded Systems; Digital Signal Processing; Noise Reduction.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Noise reduction is a very meaningful but difficult task, and it has been a subject of intense research in recent years. This paper introduces two popular noise reduction techniques and presents our simulation results for a noise reduction system. It is shown that the system reduces the noise almost completely while keeping the enhanced speech signal very similar to the original speech signal.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%206-Simulation%20of%20Spectral%20Subtraction%20Based%20Noise%20Reduction%20Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Some Modification in ID-Based Public key Cryptosystem using IFP and DDLP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020805</link>
        <id>10.14569/IJACSA.2011.020805</id>
        <doi>10.14569/IJACSA.2011.020805</doi>
        <lastModDate>2012-07-01T10:18:57.7230000+00:00</lastModDate>
        
        <creator>Chandrashekhar Meshram</creator>
        
        <creator>S.A. Meshram</creator>
        
        <subject>Public key Cryptosystem, Identity based Cryptosystem, Discrete Logarithm Problem (DLP), Double Discrete Logarithm Problem (DDLP), and Integer Factorization Problem (IFP).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In 1984, Shamir [1] introduced the concept of an identity-based cryptosystem. In this system, each user needs to visit a key authentication center (KAC) and identify himself before joining a communication network. Once a user is accepted, the KAC provides him with a secret key. In this way, if a user wants to communicate with others, he only needs to know the “identity” of his communication partner and the public key of the KAC; no public file is required in this system. However, Shamir did not succeed in constructing an identity-based cryptosystem, but only an identity-based signature scheme. Meshram and Agrawal [5] have proposed an ID-based cryptosystem built on a public key cryptosystem based on the integer factorization and double discrete logarithm problems. In this paper, we propose a modification of this ID-based cryptosystem, consider its security against a conspiracy of some entities in the proposed system, and show the possibility of establishing a more secure system.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%205-Some%20Modification%20in%20ID-Based%20Public%20key%20Cryptosystem%20using%20IFP%20and%20DDLP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>XML Based Representation of DFD</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020804</link>
        <id>10.14569/IJACSA.2011.020804</id>
        <doi>10.14569/IJACSA.2011.020804</doi>
        <lastModDate>2012-07-01T10:18:54.3000000+00:00</lastModDate>
        
        <creator>Swapna Salil Kolhatkar</creator>
        
        <subject>data flow diagrams; XML; metadata; diagramming tools.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In the world of Information Technology, the working of an information system is well explained with the use of Data Flow Diagrams (DFDs). DFDs are one of the three essential perspectives of the Structured Systems Analysis and Design Method (SSADM) [3]. The sponsor of a project and the end users are briefed and consulted throughout all stages of a system&#39;s evolution. With a data flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. However, various practical problems exist with the representation of DFDs. Different tools are available in the market for representing DFDs; these tools are user friendly and based on object-oriented features, and the diagrams drawn with them can be sent over the network for communication with others. XML, on the other hand, is platform independent, textual, and totally extensible. XML is structured and is an excellent way to transfer information along with its metadata over a network. XML can be used to transfer the information related to DFDs, thereby removing the problems of understanding diagrammatic notations and concentrating more on the information flow in the system. This paper is aimed at understanding the problems related to DFDs and representing them in XML [4] format for storage and communication over the network. The discussion is divided into four main topics: an introduction to XML and DFDs, problems related to DFDs, an XML representation for DFDs, and finally the conclusion.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%204-XML%20Based%20Representation%20of%20DFD.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Security Provisions in Stream Ciphers Through Self Shrinking and Alternating Step Generator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020802</link>
        <id>10.14569/IJACSA.2011.020802</id>
        <doi>10.14569/IJACSA.2011.020802</doi>
        <lastModDate>2012-07-01T10:18:47.1100000+00:00</lastModDate>
        
        <creator>Hafsa Rafiq</creator>
        
        <creator>Malik Sikandar Hayat Kiyal</creator>
        
        <creator>Aihab Khan</creator>
        
        <subject>Alternating step generators; feedback with carry shift registers; self-shrinking generators.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>In cryptography, stream ciphers are used to encrypt plaintext data bits one by one. The security of a stream cipher depends upon the randomness of the key stream, a good linear span, and a low probability of finding the initial states of its pseudorandom generators. In this paper, we propose a new stream cipher model that uses feedback with carry shift registers (FCSRs) as building blocks, which are considered a source of long pseudorandom sequences. The proposed model is a combined structure of the alternating step generator and the self-shrinking generator, which are commonly used stream cipher generators. In this research, we compare the proposed generator against the self-shrinking generator (SSG), the alternating step generator (ASG), and the alternating step self-shrinking generator (ASSG), and we conclude that the proposed generator architecture is best suited for encryption because it satisfies all the necessary conditions of a good stream cipher.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%202-Security%20Provisions%20in%20Stream%20Ciphers%20Through%20Self%20Shrinking%20and%20Alternating%20Step%20Generator%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>MCMC Particle Filter Using New Data Association Technique with Viterbi Filtered Gate Method for Multi-Target Tracking in Heavy Clutter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020801</link>
        <id>10.14569/IJACSA.2011.020801</id>
        <doi>10.14569/IJACSA.2011.020801</doi>
        <lastModDate>2012-07-01T10:18:40.7270000+00:00</lastModDate>
        
        <creator>E M Saad</creator>
        
        <creator>El. Bardawiny</creator>
        
        <creator>H.I.ALI</creator>
        
        <creator>N.M.Shawky</creator>
        
        <subject>data association; multi-target tracking; particle filter; Viterbi algorithm; Markov chain Monte Carlo; filtering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(8), 2011</description>
        <description>Improving the data association technique in dense clutter environments for multi-target tracking with the Markov chain Monte Carlo based particle filter (MCMC-PF) is discussed in this paper. A new method named Viterbi filtered gate Markov chain Monte Carlo (VFG-MCMC) is introduced to avoid track swap and to overcome the issue of losing track of highly maneuvering targets in the presence of heavy background clutter and false signals. An adaptive search based on the Viterbi algorithm is then used to detect the valid filtered data point in each target gate. The detected valid point for each target is applied to the estimation algorithm of the MCMC-PF when calculating the sampling weights. The proposed method makes the MCMC interact only with the valid targets that are candidates from the filtered gate, and no further calculations are performed for invalid targets. Simulation results demonstrate the effectiveness and better performance of the method when compared to the conventional MCMC-PF algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No8/Paper%201-MCMC%20Particle%20Filter%20Using%20New%20Data%20Association%20Technique%20with%20Viterbi%20Filtered%20Gate%20Method%20for%20Multi-Target%20Tracking%20in%20Heavy%20Clutter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluation of SIGMA and SCTPmx for High Handover Rate Vehicle</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020725</link>
        <id>10.14569/IJACSA.2011.020725</id>
        <doi>10.14569/IJACSA.2011.020725</doi>
        <lastModDate>2012-07-01T10:18:34.3170000+00:00</lastModDate>
        
        <creator>Hala Eldaw Idris Jubara</creator>
        
        <creator>Sharifah Hafizah Syed Ariffin</creator>
        
        <subject>handover latency; speed; SCTP; mobility management; vehicle.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Rapid technological advances in wireless mobile communication have made Internet access available anytime and anywhere, including high-speed wireless environments such as high-speed trains and fast-moving cars. However, wireless Quality of Service (QoS) provisioning under such high-speed movement is more difficult and challenging than in a fixed-wired environment. This paper discusses the transport layer protocol SCTP as a means to support seamless handover that can guarantee and maintain high QoS in high-speed vehicles. The selected protocol is evaluated and analysed under two handover techniques, SIGMA and SCTPmx, to optimize its performance in a high-speed vehicular environment.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2026-Evaluation%20of%20SIGMA%20and%20SCTPmx%20for%20High%20Handover%20Rate%20Vehicle.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On Algebraic Spectrum of Ontology Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020724</link>
        <id>10.14569/IJACSA.2011.020724</id>
        <doi>10.14569/IJACSA.2011.020724</doi>
        <lastModDate>2012-07-01T10:18:27.9170000+00:00</lastModDate>
        
        <creator>Adekoya Adebayo Felix</creator>
        
        <creator>Akinwale Adio Taofiki</creator>
        
        <creator>Sofoluwe Adetokunbo</creator>
        
        <subject>Ontology evaluation; adjacency matrix; graph; algebraic spectrum.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Ontology evaluation remains an important open problem in the area of its application. An ontology structure evaluation framework for benchmarking internal graph structures was proposed and applied to a transport ontology and a biochemical ontology. The corresponding adjacency and incidence matrices, along with other structural properties arising from the class hierarchies of the two ontologies, were computed using MATLAB. The results showed that the choice of a suitable metric must depend on the purpose of the ontology structure evaluation.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2025-On%20Algebraic%20Spectrum%20of%20Ontology%20Evaluation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Compression Techniques Using Modified high quality Multi wavelets</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020723</link>
        <id>10.14569/IJACSA.2011.020723</id>
        <doi>10.14569/IJACSA.2011.020723</doi>
        <lastModDate>2012-07-01T10:18:24.5230000+00:00</lastModDate>
        
        <creator>M Ashok</creator>
        
        <creator>Dr.T.BhaskaraReddy</creator>
        
        <subject>Multi wavelets; DWT; Image coding; Quantizers; ANN.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. For best performance in image compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. Advances in wavelet transforms and quantization methods have produced algorithms capable of surpassing existing image compression standards such as the Joint Photographic Experts Group (JPEG) algorithm. This paper presents new multiwavelet transform and quantization methods and introduces multiwavelet packets. Extensive experimental results demonstrate that our techniques perform equal to, or in several cases superior to, current wavelet filters.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2024-Image%20Compression%20Techniques%20Using%20Modified%20high%20quality%20Multi%20wavelets.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: Status of Wireless Technologies Used For Designing Home Automation System - A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020722</link>
        <id>10.14569/IJACSA.2011.020722</id>
        <doi>10.14569/IJACSA.2011.020722</doi>
        <lastModDate>2012-07-01T10:18:18.1500000+00:00</lastModDate>
        
        <creator>Ashish J Ingle</creator>
        
        <creator>Bharti W. Gawali</creator>
        
        <subject>Automation; Technology; Reliable.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2011.020722.retraction</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2022-Status%20of%20Wireless%20Technologies%20Used%20For%20Designing%20Home%20Automation%20System%20-%20A%20Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Contrast Enhancement Techniques based on Histogram Equalization</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020721</link>
        <id>10.14569/IJACSA.2011.020721</id>
        <doi>10.14569/IJACSA.2011.020721</doi>
        <lastModDate>2012-07-01T10:18:11.7400000+00:00</lastModDate>
        
        <creator>Manpreet Kaur</creator>
        
        <creator>Jasdeep Kaur</creator>
        
        <creator>Jappreet Kaur</creator>
        
        <subject>image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement; histogram partition.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the most common methods for improving contrast in digital images and has proved to be a simple and effective technique. However, conventional histogram equalization methods usually result in excessive contrast enhancement, which gives the processed image an unnatural look and visual artifacts. This paper presents a review of new forms of histogram equalization for image contrast enhancement. The major difference among the methods in this family is the criterion used to divide the input histogram. Brightness preserving Bi-Histogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is a further improvement of BBHE. The Brightness Preserving Dynamic Histogram Equalization (BPDHE) method is an extension of both MPHEBP and DHE. The Weighting Mean-Separated Sub-Histogram Equalization (WMSHE) method aims to perform effective contrast enhancement of digital images.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2021-Survey%20of%20Contrast%20Enhancement%20Techniques%20based%20on%20Histogram%20Equalization.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reconfigurable Efficient Design of Viterbi Decoder for Wireless Communication Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020720</link>
        <id>10.14569/IJACSA.2011.020720</id>
        <doi>10.14569/IJACSA.2011.020720</doi>
        <lastModDate>2012-07-01T10:18:05.3330000+00:00</lastModDate>
        
        <creator>Swati Gupta</creator>
        
        <creator>Rajesh Mehra</creator>
        
        <subject>Clock Gating; FPGA; VHDL; Trace Back; Viterbi Decoder.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Viterbi decoders are employed in digital wireless communication systems to decode convolutional codes, which are forward error correction codes. These decoders are quite complex and dissipate a large amount of power. With the proliferation of battery-powered devices such as cellular phones and laptop computers, power dissipation, along with speed and area, is a major concern in VLSI design. In this paper, a low-power and high-speed Viterbi decoder has been designed. The proposed design has been designed using Matlab, synthesized using the Xilinx Synthesis Tool, and implemented on a Xilinx Virtex-II Pro based XC2vpx30 FPGA device. The results show that the proposed design can operate at an estimated frequency of 62.6 MHz while consuming fewer resources on the target device.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2020-Reconfigurable%20Efficient%20Design%20of%20Viterbi%20Decoder%20for%20Wireless%20Communication%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solving the Vehicle Routing Problem using Genetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020719</link>
        <id>10.14569/IJACSA.2011.020719</id>
        <doi>10.14569/IJACSA.2011.020719</doi>
        <lastModDate>2012-07-01T10:17:58.9130000+00:00</lastModDate>
        
        <creator>Abdul Kadar Muhammad Masum</creator>
        
        <creator>Mohammad Shahjalal</creator>
        
        <creator>Md. Faisal Faruque</creator>
        
        <creator>Md. Iqbal Hasan Sarker</creator>
        
        <subject>Vehicle Routing Problem (VRP); Genetic Algorithm; NP-complete; Heuristic.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>The main goal of this research is to find a solution to the Vehicle Routing Problem using genetic algorithms. The Vehicle Routing Problem (VRP) is a complex combinatorial optimization problem that belongs to the NP-complete class. Due to the nature of the problem, it is not possible to use exact methods for large instances of the VRP. Genetic algorithms provide a search technique used in computing to find true or approximate solutions to optimization and search problems. In addition, we applied some heuristics during crossover and mutation to tune the system and obtain better results.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2019-Solving%20the%20Vehicle%20Routing%20Problem%20using%20Genetic%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Security Requirements Gathering Instrument</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020717</link>
        <id>10.14569/IJACSA.2011.020717</id>
        <doi>10.14569/IJACSA.2011.020717</doi>
        <lastModDate>2012-07-01T10:17:52.1300000+00:00</lastModDate>
        
        <creator>Smriti Jain</creator>
        
        <creator>Maya Ingle</creator>
        
        <subject>Software Requirements Specification; Security Policy; Security Objectives; Security Requirements.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Security breaches are largely caused by vulnerable software. Since individuals and organizations mostly depend on software, it is important to produce it in a secure manner. The first step towards producing secure software is gathering security requirements. This paper describes the Software Security Requirements Gathering Instrument (SSRGI), which helps gather security requirements from the various stakeholders. It guides developers to gather security requirements along with the functional requirements and to further incorporate security during the other phases of software development. We subsequently present case studies that describe the integration of the SSRGI with the Software Requirements Specification (SRS) document as specified in the standard IEEE 830-1998. The proposed SSRGI will support software developers in gathering detailed security requirements during the requirements gathering phase.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2017-Software%20Security%20Requirements%20Gathering%20Instrument.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Approaches to Image Coding: A Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020716</link>
        <id>10.14569/IJACSA.2011.020716</id>
        <doi>10.14569/IJACSA.2011.020716</doi>
        <lastModDate>2012-07-01T10:17:45.7030000+00:00</lastModDate>
        
        <creator>Rehna V J</creator>
        
        <creator>Jeya Kumar. M. K</creator>
        
        <subject>Hybrid coding; Predictive coding; Segmentation; Vector Quantization; Compression ratio.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Nowadays, the digital world is strongly focused on storage space and speed. With the growing demand for better bandwidth utilization, efficient image data compression techniques have emerged as an important factor for image data transmission and storage. To date, different approaches to image compression have been developed, such as classical predictive coding, popular transform coding, and vector quantization. Several second-generation, segmentation-based coding schemes are also gaining popularity. Practically efficient compression systems based on hybrid coding, which combines the advantages of different traditional methods of image coding, have also been developed over the years. In this paper, different hybrid approaches to image compression are discussed. Hybrid coding of images, in this context, deals with combining two or more traditional approaches to enhance the individual methods and achieve better-quality reconstructed images with a higher compression ratio. Literature on hybrid techniques of image coding over the past years is also reviewed. An attempt is made to highlight the neuro-wavelet approach for enhancing coding efficiency.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2016-Hybrid%20Approaches%20to%20Image%20Coding%20A%20Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Workshare Process of Thread Programming and MPI Model on Multicore Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020715</link>
        <id>10.14569/IJACSA.2011.020715</id>
        <doi>10.14569/IJACSA.2011.020715</doi>
        <lastModDate>2012-07-01T10:17:38.6470000+00:00</lastModDate>
        
        <creator>R Refianti</creator>
        
        <creator>A.B. Mutiara</creator>
        
        <creator>D.T Hasta</creator>
        
        <subject>MPI; OpenMP; SMP; Multicore; Multithreading.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>A comparison between OpenMP for the thread programming model and MPI for the message passing programming model is conducted on multicore shared-memory machine architectures in order to find which has better performance in terms of speed and throughput. The application used to assess the scalability of the evaluated parallel programming solutions is matrix multiplication with a customizable matrix dimension. Much research has been done on large-scale parallel computing using large-scale benchmarks such as the NAS Parallel Benchmark (NPB) for testing standardization [2]. This research is conducted on small-scale parallel computing and emphasizes the performance evaluation of the MPI and OpenMP parallel programming models using a self-created benchmark. It also describes how workshare processes are carried out in the different parallel programming models, and gives comparative results between the message passing and shared memory programming models in runtime and throughput. The testing methodology is also simple and makes good use of the available resources.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2015-Workshare%20Process%20of%20Thread%20Programming%20and%20MPI%20Model%20on%20Multicore%20Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emotion Classification Using Facial Expression</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020714</link>
        <id>10.14569/IJACSA.2011.020714</id>
        <doi>10.14569/IJACSA.2011.020714</doi>
        <lastModDate>2012-07-01T10:17:32.2670000+00:00</lastModDate>
        
        <creator>Devi Arumugam</creator>
        
        <creator>Dr. S. Purushothaman B.E, M.E</creator>
        
        <subject>Universal Emotions; FLD; SVD; Eigen Values; Eigen Vector; RBF Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Human emotional facial expressions play an important role in interpersonal relations, because humans demonstrate and convey much evident information visually rather than verbally. Although humans recognize facial expressions virtually without effort or delay, reliable expression recognition by machine remains a challenge today. To automate recognition of emotional state, machines must be taught to understand facial gestures. In this paper we developed an algorithm to identify a person's emotional state through facial expressions such as anger, disgust, and happiness. This can be done for different age groups of people in different situations. We used a Radial Basis Function network (RBFN) for classification, and Fisher's Linear Discriminant (FLD) and Singular Value Decomposition (SVD) for feature selection.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2014-Emotion%20Classification%20Using%20Facial%20Expression.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comparative Study of various Secure Routing Protocols based on AODV</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020712</link>
        <id>10.14569/IJACSA.2011.020712</id>
        <doi>10.14569/IJACSA.2011.020712</doi>
        <lastModDate>2012-07-01T10:17:25.4970000+00:00</lastModDate>
        
        <creator>Dalip Kamboj</creator>
        
        <creator>Pankaj Kumar Sehgal</creator>
        
        
        <subject>ad hoc network; basic security services; security attacks.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>This paper surveys and compares various secure routing protocols for mobile ad hoc networks (MANETs). MANETs are vulnerable to various security threats because of their dynamic topology and self-configurable nature. Security attacks on ad hoc network routing protocols disrupt network performance and reliability. The paper is primarily based on the base routing protocol AODV and secure protocols built on AODV. The comparison between the various secure routing protocols has been made on the basis of security services and security attacks. From the survey it is quite clear that these protocols are vulnerable to various routing attacks. A multifence secure routing protocol is still required to fulfill the basic security services and provide a solution against various attacks.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2012-A%20Comparative%20Study%20of%20various%20Secure%20Routing%20Protocols%20based%20on%20AODV.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Investigation on Simulation and Measurement of Reverberation for Small Rooms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020710</link>
        <id>10.14569/IJACSA.2011.020710</id>
        <doi>10.14569/IJACSA.2011.020710</doi>
        <lastModDate>2012-07-01T10:17:18.7400000+00:00</lastModDate>
        
        <creator>Zhixin Chen</creator>
        
        <subject>Digital Signal Processing; Reverberaton; Room Acoustics; Embedded Systems.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>This paper investigates the simulation of reverberation in a small room using the image source method, the Schroeder model, and the Gardner model. It then compares the simulation results with measurements of reverberation in a real room. A large difference is observed between the simulated and measured reverberation, due to the over-simplified representation of the sound source, receiver, and room surfaces in the models. A reverberation simulation model that incorporates real measurements of the source, receiver, and room surfaces will be developed to better match the measured reverberation.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%2010-Investigation%20on%20Simulation%20and%20Measurement%20of%20Reverberation%20for%20Small%20Rooms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Triple SV: A Bit Level Symmetric Block-Cipher Having High Avalanche Effect</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020709</link>
        <id>10.14569/IJACSA.2011.020709</id>
        <doi>10.14569/IJACSA.2011.020709</doi>
        <lastModDate>2012-07-01T10:17:15.3530000+00:00</lastModDate>
        
        <creator>Rajdeep Chakraborty</creator>
        
        <creator>Sonam Agarwal</creator>
        
        <creator>Sridipta Misra</creator>
        
        <creator>Vineet Khemka</creator>
        
        <subject>Avalanche Effect; Block Cipher; Cipher Block Chaining (CBC) mode; Cryptography.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>The prolific growth of network communication systems entails a high risk of breaches in information security, which substantiates the need for stronger protection of electronic information. Cryptography is one of the ways to secure electronic documents. In this paper, we propose a new block cipher, TRIPLE SV (3SV), with a 256-bit block size and a 112-bit key length. Generally, stream ciphers produce a higher avalanche effect, but Triple SV shows a substantial rise in avalanche effect with a block cipher implementation. The CBC mode has been used to attain a higher avalanche effect. The technique is implemented in the C language and has been tested for feasibility.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%209-Triple%20SV%20A%20Bit%20Level%20Symmetric%20Block%20Cipher%20Having%20High%20Avalanche%20Effect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterization of Dynamic Bayesian Network - The Dynamic Bayesian Network as temporal network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020708</link>
        <id>10.14569/IJACSA.2011.020708</id>
        <doi>10.14569/IJACSA.2011.020708</doi>
        <lastModDate>2012-07-01T10:17:11.9630000+00:00</lastModDate>
        
        <creator>Nabil Ghanmi</creator>
        
        <creator>Mohamed Ali Mahjoub</creator>
        
        <creator>Najoua Essoukri Ben Amara</creator>
        
        <subject>DBN; DAG; Inference; Learning; HMM; EM Algorithm; SEM; MLE; coupled HMMs.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>In this report, we are interested in Dynamic Bayesian Networks (DBNs) as a model that incorporates the temporal dimension together with uncertainty. We start with the basics of DBNs, focusing in particular on inference and learning concepts and algorithms. We then present different levels and methods of creating DBNs, as well as approaches for incorporating the temporal dimension into a static Bayesian network.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%208-Characterization%20of%20Dynamic%20Bayesian%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Mesh - Based Multicast Routing Protocols in MANET’s</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020707</link>
        <id>10.14569/IJACSA.2011.020707</id>
        <doi>10.14569/IJACSA.2011.020707</doi>
        <lastModDate>2012-07-01T10:17:05.5400000+00:00</lastModDate>
        
        <creator>M Nagaratna</creator>
        
        <creator>V. Kamakshi Prasad</creator>
        
        <creator>Raghavendra Rao</creator>
        
        <subject>MANET; multicast; QoS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Multicasting is a challenging task that facilitates group communication among nodes using the most efficient strategy to deliver messages over each link of the network. In spite of significant research achievements in recent years, efficient and scalable multicast routing in Mobile Ad Hoc Networks (MANETs) remains a difficult issue. This paper presents a comparison of the ODMR and PUMA protocols. The simulation results show that PUMA performs better than ODMR.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%207-Performance%20Evaluation%20of%20Mesh%20-%20Based%20Multicast%20Routing%20Protocols%20in%20MANET%E2%80%99s.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Parallel and Concurrent Implementation of Lin-Kernighan Heuristic (LKH-2) for Solving Traveling Salesman Problem for Multi-Core Processors using SPC3 Programming Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020706</link>
        <id>10.14569/IJACSA.2011.020706</id>
        <doi>10.14569/IJACSA.2011.020706</doi>
        <lastModDate>2012-07-01T10:16:59.1170000+00:00</lastModDate>
        
        <creator>Muhammad Ali Ismail</creator>
        
        <creator>Dr. Shahid H. Mirza</creator>
        
        <creator>Dr. Talat Altaf</creator>
        
        <subject>TSP; Parallel Heuristics; Multi-core processors, parallel programming models.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>With the arrival of multi-core processors, every processor now has built-in parallel computational power, which can be fully utilized only if the program in execution is written accordingly. This study is part of on-going research into the design of a new parallel programming model for multi-core processors. In this paper we present a combined parallel and concurrent implementation of the Lin-Kernighan Heuristic (LKH-2) for solving the Travelling Salesman Problem (TSP) using a newly developed parallel programming model, SPC3 PM, for general purpose multi-core processors. This implementation is found to be very simple, highly efficient, scalable, and less time consuming compared to the existing serial LKH-2 implementations in a multi-core processing environment. We have tested our parallel implementation of LKH-2 with medium and large TSP instances from TSPLIB. For all these tests our proposed approach has shown much improved performance and scalability.</description>
        <description>http://thesai.org/downloads/Volume2No7/Paper%206-A%20Parallel%20and%20Concurrent%20Implementation%20of%20Lin-Kernighan%20Heuristic%20(LKH-2)%20for%20Solving%20Traveling%20Salesman%20Problem%20for%20Multi-Core%20Processors%20using%20SPC3%20Programming%20Model%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Developing Digital Control System Centrifugal Pumping Unit</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020704</link>
        <id>10.14569/IJACSA.2011.020704</id>
        <doi>10.14569/IJACSA.2011.020704</doi>
        <lastModDate>2012-07-01T10:16:52.3170000+00:00</lastModDate>
        
        <creator>Safarini Osama</creator>
        
        <subject>ACS - Automatic Control System; CPU - Centrifugal Pumping Unit; PID - Proportional–Integral–Derivative.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Currently, the leading Russian oil companies are engaged in the development of automatic control systems (ACS) for oil equipment in order to improve its efficiency. As an example of an object whose operation needs to be optimized, a centrifugal pumping unit (CPU) can be considered. Frequency actuators make it possible to run a CPU under any algorithm, and many industrial controllers, which are part of the automatic control system of the technological process (ACSTP), allow optimizing the operation of the centrifugal pumping unit and help determine the requirements for the development of a digital automatic control system for the CPU. The development of digital automatic control systems for such a complex nonlinear control object as a CPU has several features that should be considered when designing control systems for that object.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%204-DEVELOPING%20DIGITAL%20CONTROL%20SYSTEM%20CENTRIFUGAL%20PUMPING%20UNIT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Error Detection and Correction over Two-Dimensional and Two-Diagonal Model and Five-Dimensional Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020703</link>
        <id>10.14569/IJACSA.2011.020703</id>
        <doi>10.14569/IJACSA.2011.020703</doi>
        <lastModDate>2012-07-01T10:16:45.9130000+00:00</lastModDate>
        
        <creator>Danial Aflakian</creator>
        
        <creator>Dr.Tamanna Siddiqui</creator>
        
        <creator>Najeeb Ahmad Khan</creator>
        
        <creator>Davoud Aflakian</creator>
        
        <subject>Data Communication; Error detection and correction; Linear Block Coding; Parity check code.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>In this research paper we present two schemes of error detection and correction based on parity-check codes, aiming at optimal error detection and correction using check bits without degrading the data rate too much. In the first scheme, data is arranged in rows to form a square, and parity bits are then added. The second scheme builds upon the first: the data is arranged to form a cube, and the parity scheme is extended to this shape.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%203-Error%20Detection%20and%20Correction%20over%20Two-Dimensional%20and%20Two-Diagonal%20Model%20and%20Five-Dimensional%20Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automatic Queuing Model for Banking Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020702</link>
        <id>10.14569/IJACSA.2011.020702</id>
        <doi>10.14569/IJACSA.2011.020702</doi>
        <lastModDate>2012-07-01T10:16:39.5030000+00:00</lastModDate>
        
        <creator>Ahmed S.A AL-Jumaily</creator>
        
        <creator>Dr. Huda K. T. AL-Jobori</creator>
        
        <subject>Queuing Systems; Queuing System models; Queuing System Management; Scheduling Algorithms.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>Queuing is the process of moving customers in a specific sequence to a specific service according to the customer's need. The term scheduling stands for the process of computing a schedule, which may be done by a queuing-based scheduler. This paper focuses on bank line systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing a bank's queuing system that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time factor. The main innovation of this work is that the modeled average waiting time is taken into account during processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%202-Automatic%20Queuing%20Model%20for%20Banking%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Approach for Teaching of National Languages and Cultures through ICT in Cameroon</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020701</link>
        <id>10.14569/IJACSA.2011.020701</id>
        <doi>10.14569/IJACSA.2011.020701</doi>
        <lastModDate>2012-07-01T10:16:33.0600000+00:00</lastModDate>
        
        <creator>Marcellin Nkenlifack</creator>
        
        <creator>Bethin Demsong</creator>
        
        <creator>A. Teko Domche</creator>
        
        <creator>Raoul Nangue</creator>
        
        <subject>Local Languages; Culture; ICT; Learning Platform; TICELaCuN; Management Tools.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(7), 2011</description>
        <description>This article describes the contribution of ICT to the modernization of teaching national languages and cultures, in order to promote cultural diversity as well as the dissemination of scientific knowledge through national languages. This will also reinforce the understanding capacities of the population. The project will serve as a guideline for the development of scientific knowledge and know-how. It presents numerous psychological, pedagogic, scientific and social advantages, among which are the promotion of our languages and cultures, the deployment of a platform in some schools, the training of teachers in using ICT for language teaching, the distribution of self-learning aids, the development of a website for the analysis and dissemination of cultural data and the conservation of linguistic and cultural heritage, and the valorization of local prerequisites and predispositions for the emergence and development of technology. It will contribute to making concrete the introduction of national language teaching in the school curricula in Cameroon.</description>
        <description>http://thesai.org/Downloads/Volume2No7/Paper%201-An%20Approach%20for%20Teaching%20of%20National%20Languages%20and%20Cultures%20through%20ICT%20in%20Cameroon.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Modified Method for Order Reduction of Large Scale Discrete Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020625</link>
        <id>10.14569/IJACSA.2011.020625</id>
        <doi>10.14569/IJACSA.2011.020625</doi>
        <lastModDate>2012-07-01T10:16:29.6570000+00:00</lastModDate>
        
        <creator>G Saraswathi</creator>
        
        <subject>order reduction; eigen values; large scale discrete systems; modeling.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>In this paper, an effective procedure to determine the reduced order model of higher order linear time invariant discrete systems is discussed. A new procedure has been proposed for evaluating the time moments of the original high order system. The numerator and denominator polynomials of the reduced order model are obtained by considering the first few redefined time moments of the original high order system. The proposed method has been verified using numerical examples.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2025-A%20Modified%20Method%20for%20Order%20Reduction%20of%20Large%20Scale%20Discrete%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generation of Attributes for Bangla Words for Universal Networking Language(UNL)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020624</link>
        <id>10.14569/IJACSA.2011.020624</id>
        <doi>10.14569/IJACSA.2011.020624</doi>
        <lastModDate>2012-07-01T10:16:23.2470000+00:00</lastModDate>
        
        <creator>Muhammad Firoz Mridha</creator>
        
        <creator>Kamruddin Md. Nur</creator>
        
        <creator>Manoj Banik</creator>
        
        <creator>Mohammad Nurul Huda</creator>
        
        <subject>Morphology; Bangla Roots; Primary Suffix; Grammatical Attributes; Universal Networking Language.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The use of native languages on the Internet is in high demand nowadays due to the rapid increase of Internet-based applications in daily life. It is important to be able to read all information from the Internet in Bangla. The Universal Networking Language (UNL) addresses this issue for most languages. It helps to overcome the language barrier among people of different nations, solving problems emerging from current globalization trends and geopolitical interdependence. In this paper we propose a work that aims to contribute the morphological analysis of Bangla words, from which we obtain roots and primary suffixes, and the development of grammatical attributes for roots and primary suffixes that can be used to prepare a Bangla word dictionary and Enconversion/Deconversion rules for Natural Language Processing (NLP).</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2024-Generation%20of%20Attributes%20for%20Bangla%20Words%20for%20Universal%20Networking%20Language(UNL).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Impact of Cloud Computing on ERP implementations in Higher Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020622</link>
        <id>10.14569/IJACSA.2011.020622</id>
        <doi>10.14569/IJACSA.2011.020622</doi>
        <lastModDate>2012-07-01T10:16:16.4100000+00:00</lastModDate>
        
        <creator>Shivani Goel</creator>
        
        <creator>Dr Ravi Kiran</creator>
        
        <creator>Dr Deepak Garg</creator>
        
        <subject>Cloud Computing; ERP; Higher Technical Education; ERP implementation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The penetration of higher education in all regions is increasing all over the globe at a very fast pace. With the increase in the number of institutions offering higher education, ERP implementation has become one of the key ingredients for achieving competitiveness in the market. Many researchers have given inputs specifying the different nature of ERP implementation in educational institutions compared to corporate organizations. Recently, cloud computing has become a buzzword, with applications in many domains. Researchers have already started applying cloud computing to ERP implementations in higher education. This paper gives an insight into the nature of the impact of cloud computing on ERP implementations and discusses various related issues. The paper comes up with guidelines regarding the use of cloud computing technology in ERP implementations in higher technical education institutions.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2022-Impact%20of%20Cloud%20Computing%20on%20ERP%20implementations%20in%20Higher%20Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Video Compression by Memetic Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020621</link>
        <id>10.14569/IJACSA.2011.020621</id>
        <doi>10.14569/IJACSA.2011.020621</doi>
        <lastModDate>2012-07-01T10:16:10.0070000+00:00</lastModDate>
        
        <creator>Pooja Nagpal</creator>
        
        <creator>Seema Baghla</creator>
        
        <subject>Memetic Algorithm (MA); Standard Particle Swarm Optimization (PSO); Global Local Best Particle Swarm Optimization (GLBest PSO); Video Compression; Motion Vectors; Number of Computations; Peak Signal to Noise ratio (PSNR).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>A Memetic Algorithm formed by the hybridization of Standard Particle Swarm Optimization and Global Local Best Particle Swarm Optimization is proposed in this paper. The technique is used to reduce the number of computations of video compression while maintaining the same or better video quality. In the proposed technique, the position equation of Standard Particle Swarm Optimization is modified and used as a step size equation to find the best matching block in the current frame. To achieve an adaptive step size, a time-varying inertia weight, based on previous motion vectors, is used instead of a constant inertia weight to obtain the true motion vector dynamically. The step size equation is used to predict the best matching macro block in the reference frame with respect to the macro block in the current frame for which the motion vector is found. The results of the proposed technique are compared with existing block matching algorithms. The performance of the Memetic Algorithm is good compared to existing algorithms in terms of the number of computations and accuracy.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2021-Video%20Compression%20by%20Memetic%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multi-Agent System Testing: A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020620</link>
        <id>10.14569/IJACSA.2011.020620</id>
        <doi>10.14569/IJACSA.2011.020620</doi>
        <lastModDate>2012-07-01T10:16:03.6230000+00:00</lastModDate>
        
        <creator>Zina Houhamdi</creator>
        
        <subject>Software agent; Software testing; Multi-agent system testing.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>In recent years, agent-based systems have received considerable attention in both academia and industry. The agent-oriented paradigm can be considered a natural extension of the object-oriented (OO) paradigm. Agents differ from objects in many respects that require special modeling elements, but they also share some similarities. Although there is a well-defined OO testing technique, agent-oriented development has neither a standard development process nor a standard testing technique. In this paper, we give an introduction to the most recent work presented in the area of testing distributed systems composed of complex autonomous entities (agents). We provide pointers to work by major players in the field, and explain why this kind of system must be handled differently than less complex systems.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2020-Multi-Agent%20System%20Testing%20A%20Survey.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iris Recognition Using Modified Fuzzy Hypersphere Neural Network with different Distance Measures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020619</link>
        <id>10.14569/IJACSA.2011.020619</id>
        <doi>10.14569/IJACSA.2011.020619</doi>
        <lastModDate>2012-07-01T10:15:56.4100000+00:00</lastModDate>
        
        <creator>S Chowhan</creator>
        
        <creator>U. V. Kulkarni</creator>
        
        <creator>G. N. Shinde</creator>
        
        <subject>Bhattacharyya distance; Iris Segmentation; Fuzzy Hypersphere Neural Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>In this paper we describe iris recognition using a Modified Fuzzy Hypersphere Neural Network (MFHSNN) with its learning algorithm, which is an extension of the Fuzzy Hypersphere Neural Network (FHSNN) proposed by Kulkarni et al. We have evaluated the performance of the MFHSNN classifier using different distance measures. It is observed that the Bhattacharyya distance is superior in terms of training and recall time compared to the Euclidean and Manhattan distance measures. The feasibility of the MFHSNN has been successfully appraised on the CASIA database with 756 images, and found superior in terms of generalization and training time with equivalent recall time.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2019-Iris%20Recognition%20Using%20Modified%20Fuzzy%20Hypersphere.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Interactive Intranet Portal for effective Management in Tertiary Institution</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020618</link>
        <id>10.14569/IJACSA.2011.020618</id>
        <doi>10.14569/IJACSA.2011.020618</doi>
        <lastModDate>2012-07-01T10:15:49.9300000+00:00</lastModDate>
        
        <creator>Idogho O Philipa</creator>
        
        <creator>Akpado Kenneth</creator>
        
        <creator>James Agajo</creator>
        
        <subject>Portal; Database; webA; MYSQL; Intranet; Admin</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The Interactive Intranet Portal for effective management in tertiary institutions is an enhanced, interactive method of managing and processing key issues in a tertiary institution. Problems of result processing, tuition fee payment and library resources management are analyzed in this work, and an interactive software interface was developed to handle them. Several modules are involved in the paper, such as the LIBRARY CONSOLE, ADMIN, STAFF, COURSE REGISTRATION, CHECKING OF RESULTS and E-NEWS modules. The server computer shall run the portal as well as the open source Apache web server, MySQL Community Edition RDBMS and PHP engine, and shall be accessible by client computers on the intranet via a thin-client browser such as Microsoft Internet Explorer or Mozilla Firefox, through a well-secured authentication system. The project will be developed using open source technologies such as XAMPP, developed from WAMP (Windows, Apache, MySQL and PHP).</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2018-Interactive%20Intranet%20Portal%20for%20effective%20Management%20in%20Tertiary%20Institution.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Image Compression using Approximate Matching and Run Length</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020617</link>
        <id>10.14569/IJACSA.2011.020617</id>
        <doi>10.14569/IJACSA.2011.020617</doi>
        <lastModDate>2012-07-01T10:15:43.4630000+00:00</lastModDate>
        
        <creator>Samir Kumar Bandyopadhyay</creator>
        
        <creator>Tuhin Utsab Paul</creator>
        
        <creator>Avishek Raychoudhury</creator>
        
        <subject>lossless image compression; approximate matching; run length.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>Image compression is currently a prominent topic for both military and commercial researchers. Image compression is needed due to the rapid growth of digital media and the subsequent need for reduced storage and effective image transmission. It attempts to reduce the number of bits required to digitally represent an image while maintaining its perceived visual quality. This study concentrates on lossless image compression using an approximate matching technique and run length encoding. The performance of this method is compared with the available JPEG compression technique over a wide number of images, showing good agreement.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2017-Image%20Compression%20using%20Approximate%20Matching%20and%20Run%20Length.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation of ISS - IHAS (Information Security System-Information Hiding in Audio Signal) model with reference to proposed e-cipher Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020616</link>
        <id>10.14569/IJACSA.2011.020616</id>
        <doi>10.14569/IJACSA.2011.020616</doi>
        <lastModDate>2012-07-01T10:15:37.0400000+00:00</lastModDate>
        
        <creator>R Venkateswaran</creator>
        
        <creator>Dr. V. Sundaram</creator>
        
        <subject>Encryption; Decryption; Audio data hiding; Mono Substitution; Poly Substitution.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>This paper shows the possibility of exploiting the features of the E-cipher method by using both cryptography and information hiding in audio signal methods to send and receive messages in a more secure way. The proposed methodology shows that these poly-substitution methods (the proposed E-Cipher) can successfully be used to encode and decode messages, evolving a new method for encrypting and decrypting messages. Embedding secret messages in audio signals in digital format is now the area in focus, and numerous steganography techniques exist for hiding information in the audio medium. In our proposed scheme, a new model, ISS-IHAS (Embedding Text in Audio Signal), embeds the text like the existing system but with strong encryption that gains the full advantages of cryptography. Using steganography it is possible to conceal the very existence of the original text, and the results obtained from the proposed model are compared with other existing techniques and proved to be efficient for textual messages of minimum size, as the size of the embedded text is essentially the same as that of the encrypted text. This emphasizes the fact that we are able to ensure secrecy without the additional cost of extra space for the text to be communicated.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2016-Implementation%20of%20ISS%20-%20IHAS%20(Information%20Security%20System_Information%20Hiding%20in%20Audio%20Signal)%20model%20with%20reference%20to%20proposed%20e-cipher%20Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Architecture Aware Programming on Multi-Core Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020615</link>
        <id>10.14569/IJACSA.2011.020615</id>
        <doi>10.14569/IJACSA.2011.020615</doi>
        <lastModDate>2012-07-01T10:15:30.6630000+00:00</lastModDate>
        
        <creator>M R Pimple</creator>
        
        <creator>S.R. Sathe</creator>
        
        <subject>multi-core architecture; parallel programming; cache miss; blocking; OpenMP; linear algebra.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>In order to improve processor performance, the response of the industry has been to increase the number of cores on the die. One salient feature of multi-core architectures is that they have a varying degree of cache sharing at different levels. With the advent of multi-core architectures, we face a problem that is new to parallel computing, namely, the management of hierarchical caches. Data locality features need to be considered in order to reduce the variance in performance for different data sizes. In this paper, we propose a programming approach for algorithms running on shared memory multi-core systems by using blocking, a well-known optimization technique, coupled with the parallel programming paradigm OpenMP. We have chosen the sizes of various problems based on the architectural parameters of the system, such as cache level, cache size and cache line size. We studied the cache optimization scheme on commonly used linear algebra applications: matrix multiplication (MM), Gauss-Elimination (GE) and the LU Decomposition (LUD) algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2015-Architecture%20Aware%20Programming%20on%20Multi%20Core%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of UMTS Cellular Network using Sectorization Based on Capacity and Coverage</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020614</link>
        <id>10.14569/IJACSA.2011.020614</id>
        <doi>10.14569/IJACSA.2011.020614</doi>
        <lastModDate>2012-07-01T10:15:24.2630000+00:00</lastModDate>
        
        <creator>A.K.M Fazlul Haque</creator>
        
        <creator>Mir Mohammad Abu Kyum</creator>
        
        <creator>Md. Fokhray Hossain</creator>
        
        <subject>UMTS; Capacity; Coverage and data rates; sectoring.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>Universal Mobile Telecommunications System (UMTS) is one of the standards in the 3rd Generation Partnership Project (3GPP). Different data rates are offered by UMTS for voice, video conference and other services. This paper presents the performance of a UMTS cellular network using sectorization for capacity and coverage. The major contribution is to examine the impact of sectorization on capacity and cell coverage in a 3G UMTS cellular network, where coverage and capacity are vitally important issues. Capacity depends on different parameters such as sectorization, energy per bit to noise spectral density ratio, voice activity, inter-cell and intra-cell interference, soft handoff gain factor, etc., and coverage depends on frequency, chip rate, bit rate, mobile maximum power, MS antenna gain, EIRP, interference margin, noise figure, etc. The different parameters that influence the capacity and coverage of a UMTS cellular network are simulated using MATLAB 2009a. In this paper, the simulation outputs for increasing amounts of sectorization showed that the number of users gradually increased, and the coverage area also gradually increased.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2014-Performance%20Analysis%20of%20UMTS%20Cellular%20Network%20using%20Sectorization%20Based%20on%20Capacity%20and%20Coverage.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improvement of Brain Tissue Segmentation Using Information Fusion Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020612</link>
        <id>10.14569/IJACSA.2011.020612</id>
        <doi>10.14569/IJACSA.2011.020612</doi>
        <lastModDate>2012-07-01T10:15:20.4130000+00:00</lastModDate>
        
        <creator>Lamiche Chaabane</creator>
        
        <creator>Moussaoui Abdelouahab</creator>
        
        <subject>Information fusion; possibility theory; segmentation; FCM; MR images.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The fusion of information has been a very active research domain in recent years. Because of the increasing diversity of image acquisition techniques, the medical image segmentation applications in which we are interested usually require the fusion of various data sources to obtain high-quality information. In this paper we propose a data fusion system, within the framework of possibility theory, adapted for the segmentation of MR images. The fusion process proceeds in steps: fuzzy tissue maps are first computed on all images using the Fuzzy C-Means algorithm, and fusion is then achieved for all tissues with a fusion operator. Applications on a brain model show very promising results on simulated data and a great concordance between the true segmentation and the output of the proposed system.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2012-Improvement%20of%20Brain%20Tissue%20Segmentation%20Using%20Information%20Fusion%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Estimation of Dynamic Background and Object Detection in Noisy Visual Surveillance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020611</link>
        <id>10.14569/IJACSA.2011.020611</id>
        <doi>10.14569/IJACSA.2011.020611</doi>
        <lastModDate>2012-07-01T10:15:13.9300000+00:00</lastModDate>
        
        <creator>M Sankari</creator>
        
        <creator>C. Meena</creator>
        
        <subject>Background subtraction; Background updation; Binary segmentation mask; Kalman filter; Noise removal; Shadow removal; Traffic video sequences.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>Dynamic background subtraction for detecting objects in a noisy environment is a challenging process in computer vision. The proposed algorithm identifies moving objects from a sequence of video frames containing dynamically changing backgrounds in a noisy atmosphere. There are many challenges in achieving a robust background subtraction algorithm in an externally noisy environment. In connection with our previous work, in this paper we propose a methodology for performing background subtraction on moving vehicles in traffic video sequences that combines statistical assumptions about moving objects, using previous frames, in a dynamically varying noisy situation. The background image is frequently updated in order to achieve reliable motion detection. To that end, a binary moving-object hypothesis mask is constructed to classify any group of lattices as belonging to a moving object, based on an optimal threshold. The new incoming information is then integrated into the current background image using a Kalman filter. To improve performance, post-processing is carried out by shadow and noise removal algorithms operating at the lattice level, which identify object-level elements; the results of post-processing can be used to detect objects more efficiently. Experimental results and analysis show the prominence of the proposed approach, which achieved an average accuracy of 94% on real-time acquired images.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2011-Estimation%20of%20Dynamic%20Background%20and%20Object%20Detection%20%20in%20Noisy%20Visual%20Surveillance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparison between Traditional Approach and Object-Oriented Approach in Software Engineering Development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020610</link>
        <id>10.14569/IJACSA.2011.020610</id>
        <doi>10.14569/IJACSA.2011.020610</doi>
        <lastModDate>2012-07-01T10:15:07.5130000+00:00</lastModDate>
        
        <creator>Nabil Mohammed Ali Munassar</creator>
        
        <creator>Dr. A. Govardhan</creator>
        
        <subject>Software Engineering; Traditional Approach; Object-Oriented Approach; Analysis; Design; Deployment; Test; methodology; Comparison between Traditional Approach and Object-Oriented Approach</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>This paper discusses the comparison between the traditional approach and the object-oriented approach. The traditional approach offers many models for different types of projects, such as the waterfall, spiral, iterative, and V-shaped models, but all of them lack the flexibility to deal with other kinds of projects, such as object-oriented ones. Object-Oriented Software Engineering (OOSE) is an object modeling language and methodology. The approach of using object-oriented techniques for designing a system is referred to as object-oriented design. Object-oriented development approaches are best suited to projects that will implement systems using emerging object technologies to construct, manage, and assemble those objects into useful computer applications. Object-oriented design is the continuation of object-oriented analysis, continuing to center the development focus on object modeling techniques.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%2010-Comparison%20between%20Traditional%20Approach%20and%20Object-Oriented%20Approach%20in%20Software%20Engineering%20Development.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Mining Educational Data to Analyze Students Performance</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020609</link>
        <id>10.14569/IJACSA.2011.020609</id>
        <doi>10.14569/IJACSA.2011.020609</doi>
        <lastModDate>2012-07-01T10:15:01.0830000+00:00</lastModDate>
        
        <creator>Brijesh Kumar Baradwaj</creator>
        
        <creator>Saurabh Pal</creator>
        
        <subject>Educational Data Mining (EDM); Classification; Knowledge Discovery in Database (KDD); ID3 Algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The main objective of higher education institutions is to provide quality education to their students. One way to achieve the highest level of quality in a higher education system is to discover knowledge for prediction regarding the enrolment of students in a particular course, alienation from the traditional classroom teaching model, detection of unfair means used in online examinations, detection of abnormal values in students' result sheets, prediction of students' performance, and so on. This knowledge is hidden in the educational data set and is extractable through data mining techniques. The present paper sets out to justify the capabilities of data mining techniques in the context of higher education by offering a data mining model for a university's higher education system. In this research, the classification task is used to evaluate students' performance; since there are many approaches to data classification, the decision tree method is used here. Through this task we extract knowledge that describes students' performance in the end-semester examination. It helps in identifying dropouts and students who need special attention early, and allows the teacher to provide appropriate advising/counseling.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%209-Mining%20Educational%20Data%20to%20Analyze%20Students%20Performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Context Switching Semaphore with Data Security Issues using Self-healing Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020608</link>
        <id>10.14569/IJACSA.2011.020608</id>
        <doi>10.14569/IJACSA.2011.020608</doi>
        <lastModDate>2012-07-01T10:14:54.6430000+00:00</lastModDate>
        
        <creator>M Anand</creator>
        
        <creator>Dr. S. Ravi</creator>
        
        <creator>Kuldeep Chouhan</creator>
        
        <creator>Syed Musthak Ahmed</creator>
        
        <subject>Self-healing; Semaphore; Data Security.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The main objective of a self-healing scheme is to share and to secure the information of a system at the same time. Self-healing techniques are ultimately dependable-computing techniques: a self-healing system must reason for itself without human input and be able to boot up backup systems. However, sharing and protection are two contradictory goals. Programs can be completely isolated from each other by executing them on separate, non-networked computers; however, this precludes sharing.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%208-Context%20Switching%20Semaphore%20with%20Data%20Security%20Issues%20using%20Self-healing%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Efficient Density based Improved K-Medoids Clustering algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020607</link>
        <id>10.14569/IJACSA.2011.020607</id>
        <doi>10.14569/IJACSA.2011.020607</doi>
        <lastModDate>2012-07-01T10:14:48.2270000+00:00</lastModDate>
        
        <creator>Raghuvira Pratap A</creator>
        
        <creator>K Suvarna Vani</creator>
        
        <creator>J Rama Devi</creator>
        
        <creator>Dr.K Nageswara Rao</creator>
        
        <subject>Clustering; DBSCAN; Centroid; Medoid; k-medoids.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>Clustering is the process of classifying objects into different groups by partitioning a data set into a series of subsets called clusters. Clustering has its roots in algorithms like k-means and k-medoids. However, the conventional k-medoids clustering algorithm suffers from many limitations. Firstly, it needs prior knowledge of the cluster-count parameter k. Secondly, it initially makes a random selection of k representative objects, and if these initial k medoids are not selected properly then the natural clusters may not be obtained. Thirdly, it is also sensitive to the order of the input data set. Mining knowledge from large amounts of spatial data is known as spatial data mining. It has become a highly demanding field because huge amounts of spatial data have been collected in various applications, ranging from geo-spatial data to bio-medical knowledge. A database can be clustered in many ways depending on the clustering algorithm employed, the parameter settings used, and other factors. Multiple clusterings can be combined so that the final partitioning of the data provides better clustering. In this paper, an efficient density-based k-medoids clustering algorithm is proposed to overcome the drawbacks of the DBSCAN and k-medoids clustering algorithms. The result is an improved version of the k-medoids clustering algorithm, which performs better than DBSCAN while handling clusters of circularly distributed data points and slightly overlapped clusters.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%207-An%20Efficient%20Density%20based%20Improved%20K-%20Medoids%20Clustering%20algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation on a Sub-band based Acoustic Echo Cancellation Approach</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020606</link>
        <id>10.14569/IJACSA.2011.020606</id>
        <doi>10.14569/IJACSA.2011.020606</doi>
        <lastModDate>2012-07-01T10:14:41.8230000+00:00</lastModDate>
        
        <creator>Zhixin Chen</creator>
        
        <subject>Digital Signal Processing; Adaptive Filter; Acoustic Echo Cancellation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>This paper describes the design and implementation of a sub-band based acoustic echo cancellation approach, which incorporates the normalized least mean square algorithm and a double-talk detection algorithm. According to the simulation, the proposed approach works well in a moderately noisy linear environment. Since the proposed approach is implemented in fixed-point C, it can easily be ported to fixed-point DSPs to cancel acoustic echo in real systems.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%206-Design%20and%20Implementation%20on%20a%20Sub-band%20based%20Acoustic%20Echo%20Cancellation%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Authorization Mechanism for Access Control of Resources in the Web Services Paradigm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020605</link>
        <id>10.14569/IJACSA.2011.020605</id>
        <doi>10.14569/IJACSA.2011.020605</doi>
        <lastModDate>2012-07-01T10:14:35.7700000+00:00</lastModDate>
        
        <creator>Rajender Nath</creator>
        
        <creator>Gulshan Ahuja</creator>
        
        <subject>Web Services; Access Control; Authentication; Authorization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>With the increase in web-based enterprise services, there is an increasing trend among business enterprises to migrate to the web services platform. The web services paradigm poses a number of new security challenges, which can only be addressed by developing effective access control models. The fact that enterprises allow access to their resources through web services requires access control models that can capture relevant information about a service requester at the time of an access request and incorporate this information into effective access control decisions. Researchers have addressed many issues related to the authentication and authorization of web services requests for accessing resources, but the issues related to the amount of authorization work and identity-based access are still poorly addressed. The authors of this paper focus on providing an extended approach that captures relevant information about a service requester and establishes a certain level of trust, so that the amount of authorization work required for accessing any resource is reduced and service requests are served efficiently. Compared with existing access control mechanisms, the proposed mechanism reduces the amount of authorization work required for accessing resources across varied domains.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%205-An%20Authorization%20Mechanism%20for%20Access%20Control%20of%20Resources%20in%20the%20Web%20Services%20Paradigm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Decision Support Tool for Inferring Further Education Desires of Youth in Sri Lanka</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020604</link>
        <id>10.14569/IJACSA.2011.020604</id>
        <doi>10.14569/IJACSA.2011.020604</doi>
        <lastModDate>2012-07-01T10:14:29.2400000+00:00</lastModDate>
        
        <creator>M.F M Firdhous</creator>
        
        <creator>Ravindi Jayasundara</creator>
        
        <subject>Educational Desires of Youth; Univariate Analysis; Logit Model; Data Mining.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>This paper presents the results of a study carried out to identify the factors that influence the further-education desires of Sri Lankan youth. Statistical modeling was initially used to infer the desires of the youth, and a decision support tool was then developed based on the resulting statistical model. Data collected as part of the National Youth Survey was used to carry out the analysis and develop the model. The accuracy of the model and the decision support tool was tested using random data sets and was found to be well above 80 percent, which is sufficient for any policy-related decision making.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%204-A%20Decision%20Support%20Tool%20for%20Inferring%20Further%20Education%20Desires%20of%20Youth%20in%20Sri%20Lanka.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating the Collection of Object Relational Database Metrics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020603</link>
        <id>10.14569/IJACSA.2011.020603</id>
        <doi>10.14569/IJACSA.2011.020603</doi>
        <lastModDate>2012-07-01T10:14:25.8330000+00:00</lastModDate>
        
        <creator>Samer M Suleiman</creator>
        
        <creator>Qasem A. Al-Radaideh</creator>
        
        <creator>Bilal A. Abulhuda</creator>
        
        <creator>Izzat M. AlSmadi</creator>
        
        <subject>Object Oriented Relational Database; Metrics; Software Quality.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>The quality of software systems is the most important factor to consider when designing and using these systems. The quality of the database or the database management system is particularly important, as it is the backbone of all the systems whose data it holds. Many researchers have argued that software with high quality leads to an effective and secure system. Software quality can be assessed using software measurements, or metrics. Typically, metrics have several problems: they have no specific standards, they are sometimes hard to measure, and at the same time they are time and resource consuming. Metrics also need to be continuously updated. A possible solution to some of these problems is to automate the process of gathering and assessing the metrics. In this research, the metrics that evaluate the complexity of an Object Oriented Relational Database (ORDB) are composed of object-oriented metrics and relational database metrics. This research is based on common theoretical calculations and formulations of ORDB metrics proposed by database experts. A tool is developed that takes an ORDB schema as input and collects several database structural metrics. Based on the proposed and gathered metrics, a study is conducted which shows that such metric assessment can be very useful in evaluating database complexity.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%203-Automating%20the%20Collection%20of%20Object%20Relational%20Database%20Metrics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Assessing 3-D Uncertain System Stability by Using MATLAB Convex Hull Functions</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020602</link>
        <id>10.14569/IJACSA.2011.020602</id>
        <doi>10.14569/IJACSA.2011.020602</doi>
        <lastModDate>2012-07-01T10:14:19.3500000+00:00</lastModDate>
        
        <creator>Mohammed Tawfik Hussein</creator>
        
        <subject>Algorithm; 3-D convex hull; uncertainty; robust stability; root locus</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>This paper deals with the robust stability of an uncertain three-dimensional (3-D) system using existing MATLAB convex hull functions. The uncertain plant model is simulated with the INTLAB toolbox; furthermore, the root loci of the characteristic polynomials of the convex hull are obtained to judge whether the uncertain system is stable or not. A third-order design example with uncertain parameters is given to validate the proposed approach.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%202-Assessing%203-D%20Uncertain%20System%20Stability%20by%20Using%20MATLAB%20Convex%20Hull%20Functions.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Successful Transmission Rate of Mobile Terminals with Agents in Segmented Ad Hoc Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020601</link>
        <id>10.14569/IJACSA.2011.020601</id>
        <doi>10.14569/IJACSA.2011.020601</doi>
        <lastModDate>2012-07-01T10:14:13.2900000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Lipur Sugiyanta</creator>
        
        <subject>MANET; agents; movement; packet; routing; network area.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(6), 2011</description>
        <description>A Mobile Wireless Ad-Hoc Network (MANET) is a special kind of network in which all of the nodes move over time. Random movement is commonly used to model mobility in such networks. Besides acting as a host, each node is intended to help relay packets of neighboring nodes using a multi-hop routing mechanism, in order to solve the problem of dead communication. In recent years, a variety of routing protocols, such as DSDV, DSR, AODV, and extended OSPF, have been proposed specifically to accommodate this environment, followed by performance evaluations. Research on this kind of network is mostly simulation based, and research efforts have not focused much on evaluating network performance with variable numbers of nodes involving agents distributed over several network areas. This paper performs simulations using a MANET simulator built on the extended OSPF routing protocol. The OSPF protocol was modified to adapt its standard mechanisms for high-performance operation over wireless networks characterized by a broadcast-based transmission medium, in which the physical-layer connectivity is not fully meshed. Extensive simulation scenarios were conducted in the simulator with various numbers of nodes, random and uniform movement directions, and different numbers of agents with different sizes of network areas. In the performance evaluation of successfully transmitted (data) packets, the OSPF protocol with a throughput-weighted metric is tested under different combinations of scenario conditions.</description>
        <description>http://thesai.org/Downloads/Volume2No6/Paper%201-Successful%20Transmission%20Rate%20of%20Mobile%20Terminals%20with%20Agents%20in%20Segmented%20Ad%20Hoc%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Performance Study of Some Sophisticated Partitioning Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020523</link>
        <id>10.14569/IJACSA.2011.020523</id>
        <doi>10.14569/IJACSA.2011.020523</doi>
        <lastModDate>2012-07-01T10:14:09.8630000+00:00</lastModDate>
        
        <creator>D Abhyankar</creator>
        
        <creator>M.Ingle</creator>
        
        <subject>Quicksort; Hoare Partition; Lomuto Partition; AQTime.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Partitioning is a central component of Quicksort, an intriguing sorting algorithm that is part of the C, C++, and Java libraries, and it is the component on which the performance of Quicksort ultimately depends. There have been several elegant partitioning algorithms, and a profound understanding of each may be needed to choose among them. In this paper we undertake a careful study of these algorithms on modern machines with the help of state-of-the-art performance analyzers, and choose the best partitioning algorithm on the basis of some crucial performance indicators.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2023-A%20Performance%20Study%20of%20Some%20Sophisticated%20Partitioning%20Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radial Basis Function For Handwritten Devanagari Numeral Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020521</link>
        <id>10.14569/IJACSA.2011.020521</id>
        <doi>10.14569/IJACSA.2011.020521</doi>
        <lastModDate>2012-07-01T10:14:03.4330000+00:00</lastModDate>
        
        <creator>Prerna Singh</creator>
        
        <creator>Nidhi Tyagi</creator>
        
        <subject>Radial Basis Function; Devanagari Numeral Recognition; K-means clustering; Principal Component Analysis (PCA)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>The task of recognizing handwritten numerals using a classifier is of great importance. This paper applies the Radial Basis Function technique to handwritten numeral recognition for the Devanagari script. A lot of work has been done on Devanagari numeral recognition using different techniques to increase recognition accuracy. Since no globally available database exists, we first created a database by applying pre-processing to the set of training data. We then extracted the features of each image using Principal Component Analysis; some researchers have also used density-based feature extraction. Since different people have different writing styles, we aim to build a system in which numeral recognition becomes easy. Centers are then determined at the hidden layer, and the weights between the hidden layer and the output layer of each neuron are determined to calculate the output, where the output is the summed value of each neuron. In this paper we propose an algorithm for Devanagari numeral recognition using the above-mentioned system.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2021-Radial%20Basis%20Function%20For%20Handwritten%20Devanagari%20Numeral%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Deploying an Application on the Cloud</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020520</link>
        <id>10.14569/IJACSA.2011.020520</id>
        <doi>10.14569/IJACSA.2011.020520</doi>
        <lastModDate>2012-07-01T10:13:57.1730000+00:00</lastModDate>
        
        <creator>N. Ram Ganga Charan</creator>
        
        <creator>S. Tirupati Rao</creator>
        
        <creator>Dr .P.V.S Srinivas</creator>
        
        <subject>Cloud; Virtualization; EC2; IAAS; PAAS; SAAS; CAAS; DAAS; public cloud; private cloud; hybrid cloud; Community cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Cloud computing, the impending need for computing as an optimal utility, has the potential to take a gigantic leap forward in the IT industry when structured and put to optimal use with regard to contemporary trends. Developers with innovative ideas need not be apprehensive about committing costly resources to a service that may not cater to actual needs and expectations; cloud computing is like a panacea for overcoming these hurdles. It promises to increase the velocity with which applications are deployed, to increase creativity and innovation, and to lower costs, all while increasing business acumen. It calls for less investment and yields a harvest of benefits. End-users pay only for the amount of resources they use and can easily scale up as their needs grow. Service providers, on the other hand, can utilize virtualization technology to increase hardware utilization and simplify management. People want to move large-scale grid computations that they used to run on traditional clusters into a centrally managed environment, pay for use, and be done with it. This paper deals at length with the cloud, cloud computing, and its myriad applications.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2020-Deploying%20an%20Application%20on%20the%20Cloud.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Neural Approach for Reliable and Fault Tolerant Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020519</link>
        <id>10.14569/IJACSA.2011.020519</id>
        <doi>10.14569/IJACSA.2011.020519</doi>
        <lastModDate>2012-07-01T10:13:50.5700000+00:00</lastModDate>
        
        <creator>Vijay Kumar</creator>
        
        <creator>R. B. Patel</creator>
        
        <creator>Manpreet Singh</creator>
        
        <creator>Rohit Vaid</creator>
        
        <subject>Reliability; Fault tolerance; Bi-directional Associative Memory; Wireless Sensor Network</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>This paper presents a neural model for reliable and fault-tolerant transmission in Wireless Sensor Networks based on Bi-directional Associative Memory. The proposed model is an attempt to enhance the performance of both the cooperative and non-cooperative Automatic Repeat Request (ARQ) schemes in terms of reliability and fault tolerance. We also demonstrate the performance of both schemes with the help of suitable examples.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2019-A%20Neural%20Approach%20for%20Reliable%20and%20Fault%20Tolerant%20Wireless%20Sensor%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computerised Speech Processing in Hearing Aids using FPGA Architecture</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020518</link>
        <id>10.14569/IJACSA.2011.020518</id>
        <doi>10.14569/IJACSA.2011.020518</doi>
        <lastModDate>2012-07-01T10:13:43.5500000+00:00</lastModDate>
        
        <creator>V. Hanuman Kumar</creator>
        
        <creator>P. Seetha Ramaiah</creator>
        
        <subject>Speech processing system; VLSI; FPGA; CIS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Computerized speech processing systems aim to mimic the natural functionality of human hearing, enabled by Very Large Scale Integration (VLSI) devices such as Field Programmable Gate Arrays (FPGAs) that can meet the challenging requirement of restoring the full functionality of damaged parts of the human auditory system. Here, a computerized laboratory speech processor based on a Xilinx Spartan-3 FPGA was developed for hearing aids research, and a comparison with commercially available hearing aids is also presented. The hardware design and software implementation of the speech processor are described in detail. The FPGA-based speech processor is capable of providing high-rate stimulation with 12 electrodes, against the conventional 8 electrodes of earlier research, using short biphasic pulses presented either simultaneously or in an interleaved fashion. Different speech processing algorithms, including the Continuous Interleaved Sampling (CIS) strategy, were implemented in this processor and tested successfully.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2018-Computerised%20Speech%20Processing%20in%20Hearing%20Aids%20using%20FPGA%20Architecture.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generating PNS for Secret Key Cryptography Using Cellular Automaton</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020517</link>
        <id>10.14569/IJACSA.2011.020517</id>
        <doi>10.14569/IJACSA.2011.020517</doi>
        <lastModDate>2012-07-01T10:13:37.0970000+00:00</lastModDate>
        
        <creator>Bijayalaxmi Kar</creator>
        
        <creator>D.Chandrasekhra Rao</creator>
        
        <creator>Dr. Amiya Kumar Rath</creator>
        
        <subject>Cellular automata; Cellular programming; Random number generators; Symmetric key; cryptography; Vernam cipher</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>The paper presents new results concerning the application of cellular automata (CAs) to secret key cryptography using the Vernam cipher. CAs are applied to generate the pseudo-random number sequence (PNS) used during the encryption process. One-dimensional, non-uniform CAs are considered as generators of pseudo-random number sequences (PNSs) used in secret key cryptography. The quality of the PNSs depends highly on the set of applied CA rules. Rules of radius r = 1 and 2 for non-uniform one-dimensional CAs have been considered. The search for rules is performed using an evolutionary technique called cellular programming. As a result of the collective behavior of the discovered set of CA rules, very high quality PNSs are generated, outperforming known one-dimensional CA-based PNS generators used in secret key cryptography. The extended set of CA rules that was found makes the cryptographic system much more resistant to key-breaking attacks.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2017-Generating%20PNS%20for%20Secret%20Key%20Cryptography%20Using%20Cellular%20Automaton.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Load Balancing Policy for Heterogeneous Computational Grids</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020516</link>
        <id>10.14569/IJACSA.2011.020516</id>
        <doi>10.14569/IJACSA.2011.020516</doi>
        <lastModDate>2012-07-01T10:13:30.1600000+00:00</lastModDate>
        
        <creator>Said Fathy El-Zoghdy</creator>
        
        <subject>grid computing; resource management; load balancing; performance evaluation; queuing theory; simulation models.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Computational grids have the potential computing power for solving large-scale scientific computing applications. To improve the global throughput of these applications, the workload has to be evenly distributed among the available computational resources in the grid environment. This paper addresses the problem of scheduling and load balancing in heterogeneous computational grids. We propose a two-level load balancing policy for the multi-cluster grid environment, where computational resources are dispersed in different administrative domains or clusters located in different local area networks. The proposed load balancing policy takes into account the heterogeneity of the computational resources. It distributes the system workload based on the capacity of the processing elements, which minimizes the overall mean job response time and maximizes system utilization and throughput at steady state. An analytical model is developed to evaluate the performance of the proposed load balancing policy. The results obtained analytically are validated by simulating the model using the Arena simulation package. The results show that the overall mean job response time obtained by simulation is very close to that obtained analytically. The simulation results also show that the proposed load balancing policy outperforms the random and uniform-distribution load balancing policies in terms of mean job response time. The improvement ratio increases as the system workload increases, and the maximum improvement ratio obtained is about 72% within the range of system parameter values examined.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2016-A%20Load%20Balancing%20Policy%20for%20Heterogeneous%20Computational%20Grids.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Fuzzy Logic Approach to Software Effort Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020515</link>
        <id>10.14569/IJACSA.2011.020515</id>
        <doi>10.14569/IJACSA.2011.020515</doi>
        <lastModDate>2012-07-01T10:13:22.9400000+00:00</lastModDate>
        
        <creator>Prasad Reddy P.V.G.D.</creator>
        
        <creator>Sudha K. R</creator>
        
        <creator>Rama Sree P</creator>
        
        <subject>Development Effort; EAF; Cost Drivers; Fuzzy Identification; Membership Functions; Fuzzy Rules; NASA93 dataset.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>The most significant activity in software project management is software development effort prediction. The literature shows several algorithmic cost estimation models such as Boehm’s COCOMO, Albrecht&#39;s Function Point Analysis, Putnam’s SLIM, ESTIMACS, etc., but each model has its own pros and cons in estimating development cost and effort. This is because project data available in the initial stages of a project are often incomplete, inconsistent, uncertain and unclear. The need for accurate effort prediction in software project management is an ongoing challenge. A fuzzy model is more apt when the system is not suitable for analysis by conventional approaches or when the available data are uncertain, inaccurate or vague. Fuzzy logic is a convenient way to map an input space to an output space. Fuzzy logic is based on fuzzy set theory. A fuzzy set is a set without a crisp, clearly defined boundary. It is characterized by a membership function, which associates with each point in the fuzzy set a real number in the interval [0, 1], called the degree or grade of membership. The membership functions may be Triangular, GBell, Gaussian, Trapezoidal, etc. In the present paper, software development effort prediction using the Fuzzy Triangular Membership Function and the GBell Membership Function is implemented and compared with COCOMO. A case study based on the NASA93 dataset compares the proposed fuzzy model with the Intermediate COCOMO. The results were analyzed using different criteria such as VAF, MARE, VARE, MMRE, Prediction and Mean BRE. It is observed that the fuzzy logic model using the Triangular Membership Function provided better results than the other models.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2015-Application%20of%20Fuzzy%20Logic%20Approach%20to%20Software%20Effort%20Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Routing Scheme for a New Irregular Baseline Multistage Interconnection Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020514</link>
        <id>10.14569/IJACSA.2011.020514</id>
        <doi>10.14569/IJACSA.2011.020514</doi>
        <lastModDate>2012-07-01T10:13:16.5300000+00:00</lastModDate>
        
        <creator>Mamta Ghai</creator>
        
        <subject>Multistage Interconnection network; Fault-Tolerance; Augmented Baseline Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Parallel processing is an efficient form of information processing that emphasizes the exploitation of concurrent events in the computing process. Achieving parallel processing requires developing more capable and cost-effective systems, and a network is needed that can handle large amounts of traffic efficiently. The multistage interconnection network plays a vital role in the performance of these multiprocessor systems. In this paper an attempt has been made to analyze the characteristics of a new class of irregular fault-tolerant multistage interconnection network, named the Irregular Modified Baseline Multistage Interconnection Network (IMABN), and an efficient routing procedure has been defined to study the fault tolerance of the network. Fault tolerance in an interconnection network is very important for its continuous operation over a relatively long period of time; it is the ability of the network to operate in the presence of multiple faults. The behavior of the proposed IMABN has been analyzed and compared with the regular network MABN under fault-free conditions and in the presence of faults. In an IMABN there are six possible paths between a source and a destination, whereas a MABN has only four. Thus the proposed IMABN is more fault-tolerant than the existing regular Modified Augmented Baseline multistage interconnection network (MABN).</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2014-A%20Routing%20Scheme%20for%20a%20New%20Irregular%20Baseline%20Multistage%20Interconnection%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of Materialized Views in a Data Warehouse Environment</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020513</link>
        <id>10.14569/IJACSA.2011.020513</id>
        <doi>10.14569/IJACSA.2011.020513</doi>
        <lastModDate>2012-07-01T10:13:10.1200000+00:00</lastModDate>
        
        <creator>Garima Thakur</creator>
        
        <creator>Anjana Gosain</creator>
        
        <subject>Materialized views; view maintenance; view selection; view adaptation; view synchronization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Data in a warehouse can be perceived as a collection of materialized views that are generated as per the user requirements specified in the queries issued against the information contained in the warehouse. User requirements and constraints frequently change over time, which may dynamically evolve the data and view definitions stored in a data warehouse. Current requirements are modified and some novel requirements are added in order to deal with the latest business scenarios. In fact, the data preserved in a warehouse, along with these materialized views, must also be updated and maintained so that they can deal with changes in the data sources as well as in the requirements stated by users. Selection and maintenance of these views is one of the vital tasks in a data warehousing environment for providing optimal efficiency, by reducing query response time as well as query processing and maintenance costs. Another major issue related to materialized views is whether these views should be recomputed for every change in the definition or base relations, or adapted incrementally from existing views. In this paper, we have examined several ways of performing changes in materialized views, as well as their selection and maintenance, in data warehousing environments. We have also provided a comprehensive study of the research work of different authors on various parameters and presented the same in tabular form.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2013-A%20Comprehensive%20Analysis%20of%20Materialized%20Views%20in%20a%20Data%20Warehouse%20Environment.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A robust multi color lane marking detection approach for Indian scenario</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020512</link>
        <id>10.14569/IJACSA.2011.020512</id>
        <doi>10.14569/IJACSA.2011.020512</doi>
        <lastModDate>2012-07-01T10:12:58.2500000+00:00</lastModDate>
        
        <creator>L N P Boggavarapu</creator>
        
        <creator>R S Vaddi</creator>
        
        <creator>H D Vankayalapati</creator>
        
        <creator>J K Munagala</creator>
        
        <subject>Color segmentation; HSV; Edge orientation; connected components.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Lane detection is an essential component of an Advanced Driver Assistance System. Congestion on the roads is increasing day by day due to the growing number of four-wheelers on the road, and this congestion, coupled with ignorance of road rules, contributes to road accidents. Lane marking violation is one of the major causes of accidents on highways in India. In this work we have designed and implemented an automatic lane marking violation detection algorithm that operates in real time. The HSV color-segmentation based approach is verified for both white lanes and yellow lanes in the Indian context. Various comparative experimental results show that the proposed approach is very effective for lane detection and can be implemented in real time.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2012-A%20robust%20multi%20color%20lane%20marking%20detection%20approach%20for%20Indian%20scenario.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Implementation and Performance Analysis of Video Edge Detection System on Multiprocessor Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020511</link>
        <id>10.14569/IJACSA.2011.020511</id>
        <doi>10.14569/IJACSA.2011.020511</doi>
        <lastModDate>2012-07-01T10:12:51.8570000+00:00</lastModDate>
        
        <creator>Mandeep Kaur</creator>
        
        <creator>Kulbir Singh</creator>
        
        <subject>Multiprocessor platform; Edge detection; Performance evaluation; noise.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>This paper presents the agile development and implementation of edge detection on an SMT8039-based video and imaging module. As video processing techniques develop, their algorithms become more complicated, and high-resolution, real-time applications cannot be implemented on a single CPU or DSP. The proposed system offers a significant performance increase over current programmable DSP-based implementations. This paper shows that the considerable performance improvement of the FPGA solution results from the availability of high I/O resources and a pipelined architecture. FPGA technology provides an alternative way to obtain high performance. Prototyping a design with an FPGA offers advantages such as relatively low cost, reduced time to market, and flexibility. FPGAs also provide enough logic to implement complete systems or subsystems, along with reconfigurable logic for application-specific programming. Although DSPs continue to provide more and more power and nearly any function can be designed in a large enough FPGA, neither alone is usually the easiest or cheapest approach. This paper therefore designs and implements an edge detection method based on coordinated DSP-FPGA techniques, with the processing task divided between the DSP and the FPGA: the DSP is dedicated to data I/O functions, while the FPGA takes input video from the DSP, implements the processing logic, and returns the result to the DSP. The PSNR values of all the edge detection techniques are compared. On validating the system, it is observed that the Laplacian of Gaussian method appears to be the most sensitive even at low noise levels, while the Roberts, Canny and Prewitt methods appear to be barely perturbed. However, Sobel performs best with a median filter in the presence of Gaussian, salt-and-pepper, and speckle noise in the video signal.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2011-Implementation%20and%20performance%20analysis%20of%20Video%20Edge%20Detection%20system%20on%20Multiprocessor%20Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RETRACTED: FPGA-Based Design of High-Speed CIC Decimator for Wireless Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020510</link>
        <id>10.14569/IJACSA.2011.020510</id>
        <doi>10.14569/IJACSA.2011.020510</doi>
        <lastModDate>2012-07-01T10:12:44.9070000+00:00</lastModDate>
        
        <creator>Rakjesh Mehra</creator>
        
        <creator>Rashmi Arora</creator>
        
        <subject>CIC; FPGA; GSM; LUT; SDR</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. Retraction DOI: 10.14569/IJACSA.2011.020510.retraction</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%2010-FPGA-Based%20Design%20of%20High-Speed%20CIC%20Decimator%20for%20Wireless%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Main Stream Temperature Control System Based on Smith-PID Scheduling Network Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020509</link>
        <id>10.14569/IJACSA.2011.020509</id>
        <doi>10.14569/IJACSA.2011.020509</doi>
        <lastModDate>2012-07-01T10:12:37.5870000+00:00</lastModDate>
        
        <creator>Jianqiu Deng</creator>
        
        <creator>Haijun Li</creator>
        
        <creator>Zhengxia Zhang</creator>
        
        <subject>Network control systems (NCSs); Gain-scheduling based Smith-PID; main stream temperature system; time delay; packet loss.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>This paper is concerned with the controller design problem for a class of networked main stream temperature control system with long random time delay and packet losses. To compensate the effect of time delay and packet losses, a gain-scheduling based Smith-PID controller is proposed for the considered networked control systems (NCSs). Moreover, to further improve the control performance of NCSs, genetic algorithm is employed to obtain the optimal control parameters for gain-scheduling Smith-PID controller. Simulation results are given to demonstrate the effectiveness of the proposed methods.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%209-Main%20Stream%20Temperature%20Control%20System%20Based%20on%20Smith-PID%20Scheduling%20Network%20Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of GPU compared to Single-core and Multi-core CPU for Natural Language Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020508</link>
        <id>10.14569/IJACSA.2011.020508</id>
        <doi>10.14569/IJACSA.2011.020508</doi>
        <lastModDate>2012-07-01T10:12:31.1670000+00:00</lastModDate>
        
        <creator>Shubham Gupta</creator>
        
        <creator>M.Rajasekhara Babu</creator>
        
        <subject>NLP; Lexical Analysis; Lexicon; Shallow Parsing; GPU; GPGPU; CUDA; OpenMP.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>In Natural Language Processing (NLP) applications, the main time-consuming process is string matching, due to the large size of the lexicon. In string matching processes, data dependence is minimal and hence they are ideal for parallelization. A dedicated system with memory interleaving and parallel processing techniques for string matching can reduce this burden on the host CPU, thereby making the system more suitable for real-time applications. It is now possible to apply parallelism using multiple cores on a CPU, though they need to be used explicitly to achieve high performance. Recent GPUs hold a large number of cores and have potential for high performance in many general-purpose applications. Programming tools for multi-core CPUs and for the large number of cores on GPUs have been formulated, but it is still difficult to achieve high performance on these platforms. In this paper, we compare the performance of a single-core CPU, a multi-core CPU, and a GPU using such a Natural Language Processing application.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%208-Performance%20Analysis%20of%20GPU%20compared%20to%20Single-core%20and%20Multi-core%20CPU%20for%20Natural%20Language%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Analysis of a Novel Low-Power SRAM Bit-Cell Structure at Deep-Sub-Micron CMOS Technology for Mobile Multimedia Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020507</link>
        <id>10.14569/IJACSA.2011.020507</id>
        <doi>10.14569/IJACSA.2011.020507</doi>
        <lastModDate>2012-07-01T10:12:24.7530000+00:00</lastModDate>
        
        <creator>Neeraj kr Shukla</creator>
        
        <creator>R.K.Singh</creator>
        
        <creator>Manisha Pattanaik</creator>
        
        <subject>SRAM Bit-Cell; Gate Leakage; Sub-threshold Leakage; NC-SRAM; Asymmetric SRAM; PP-SRAM; Stacking Effect.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>The growing demand for high-density VLSI circuits and the exponential dependency of the leakage current on oxide thickness are becoming major challenges in deep-sub-micron CMOS technology. In this work, a novel Static Random Access Memory (SRAM) cell is proposed, targeting a reduction of the overall power requirements, i.e., dynamic and standby power, in the existing dual-bit-line architecture. The active power is reduced by reducing the supply voltage when the memory is functional, and the standby power is reduced by reducing the gate and sub-threshold leakage currents when the memory is idle. This paper explores an integrated approach at the architecture and circuit levels to reduce leakage power dissipation while maintaining high performance in deep-submicron cache memories. The proposed memory bit-cell makes use of pMOS pass transistors to lower the gate leakage currents, while a full-supply body-biasing scheme is used to reduce the sub-threshold leakage currents. To further reduce the leakage current, the stacking effect is exploited by switching off the stack transistors when the memory is idle. In comparison to the conventional 6T SRAM bit-cell, the total leakage power is reduced by 50% while the cell is storing data ‘1’ and by 46% while storing data ‘0’, at a very small area penalty. A total active power reduction of 89% is achieved when the cell is storing data ‘0’ or ‘1’. The design and simulation work was performed in deep-sub-micron CMOS technology at the 45nm node, at 25°C with a VDD of 0.7V.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%207-Design%20and%20Analysis%20of%20a%20Novel%20Low-Power%20SRAM%20Bit-Cell%20Structure%20at%20Deep-Sub-Micron%20CMOS%20Technology%20for%20Mobile%20Multimedia%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Conceptual Framework for an Ontology-Based Examination System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020506</link>
        <id>10.14569/IJACSA.2011.020506</id>
        <doi>10.14569/IJACSA.2011.020506</doi>
        <lastModDate>2012-07-01T10:12:18.3070000+00:00</lastModDate>
        
        <creator>Adekoya Adebayo Felix</creator>
        
        <creator>Akinwale Adio Taofiki</creator>
        
        <creator>Sofoluwe Adetokunbo</creator>
        
        <subject>semantic web; examination systems; ontology; knowledge bases.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>There is an increasing reliance on the web for many software application deployments. Millions of services ranging from commerce, education and tourism to entertainment are now available on the web, making the web the largest database in the world today. However, the information available on the web is syntactically structured, whereas the trend is to provide semantic linkage to it. The semantic web serves as a medium to enhance the current web so that computers can process information, interpret it, and connect it to enhance knowledge retrieval. The semantic web has encouraged the creation of ontologies in a great variety of domains. In this paper, the conceptual framework for an ontology-based examination system and the ontology required for such examination systems are described. The domain ontology was constructed based on the Methontology method proposed by Fern&#225;ndez (1997). The ontology can be used to design and create the metadata elements required for developing web-based examination applications, and it can interoperate with other applications. Taxonomic evaluation and the Guarino-Welty OntoClean techniques were used to assess and refine the domain ontology in order to ensure it is error-free.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%206-A%20Conceptual%20Framework%20for%20an%20Ontology-Based%20Examination%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Systematic and Integrative Analysis of Proteomic Data using Bioinformatics Tools</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020505</link>
        <id>10.14569/IJACSA.2011.020505</id>
        <doi>10.14569/IJACSA.2011.020505</doi>
        <lastModDate>2012-07-01T10:12:11.7700000+00:00</lastModDate>
        
        <creator>Rashmi Rameshwari</creator>
        
        <creator>Dr. T. V. Prasad</creator>
        
        <subject>Metabolic network; protein interaction network; visualization tools.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>The analysis and interpretation of relationships between biological molecules is done with the help of networks. Networks are used ubiquitously throughout biology to represent the relationships between genes and gene products. Network models have facilitated a shift from the study of evolutionary conservation between individual genes and gene products towards the study of conservation at the level of pathways and complexes. Recent work has revealed much about chemical reactions inside hundreds of organisms as well as universal characteristics of metabolic networks, which shed light on the evolution of the networks. However, characteristics of individual metabolites have been neglected in these networks. The current paper provides an overview of bioinformatics software used in the visualization of biological networks using proteomic data, their main functions, and the limitations of the software.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%205-Systematic%20and%20Integrative%20Analysis%20of%20Proteomic%20Data%20using%20Bioinformatics%20Tools.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Empirical Validation of Web Metrics for Improving the Quality of Web Page</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020504</link>
        <id>10.14569/IJACSA.2011.020504</id>
        <doi>10.14569/IJACSA.2011.020504</doi>
        <lastModDate>2012-07-01T10:12:05.3430000+00:00</lastModDate>
        
        <creator>Yogesh Singh</creator>
        
        <creator>Ruchika Malhotra</creator>
        
        <creator>Poonam Gupta</creator>
        
        <subject>Metrics; Web page; Website; Web page quality; Internet; Page composition Metrics; Page formatting Metrics.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Web page metrics are among the key elements in measuring the various attributes of a web site. Metrics give concrete values to the attributes of web sites, which may be used to compare different web pages. Web pages can be compared based on page size, information quality, screen coverage, content coverage, etc. The Internet and websites are emerging media and service avenues that require improvements in quality: for better customer service, for a wider user base, and for the betterment of humankind. E-business is emerging, and websites are not just a medium for communication but also products for providing services. Measurement is a key issue for the survival of any organization; therefore, measuring and evaluating websites for quality and better understanding is a key issue in website engineering. In this paper we collect data from the Webby Awards (2007-2010) and classify websites into good sites and bad sites on the basis of the assessed metrics. To achieve this aim we investigate 15 metrics proposed by various researchers. We present the findings of a quantitative analysis of web page attributes and how these attributes are calculated. The results of this paper can be used in quantitative studies of web site design. The metrics captured in the predicted model can be used to predict the goodness of a website design.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%204-Empirical%20Validation%20of%20Web%20Metrics%20for%20Improving%20the%20Quality%20of%20Web%20Page.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Particle Swarm Optimization with Simulated Annealing and Neighborhood Information Communication for Solving TSP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020503</link>
        <id>10.14569/IJACSA.2011.020503</id>
        <doi>10.14569/IJACSA.2011.020503</doi>
        <lastModDate>2012-07-01T10:11:58.9300000+00:00</lastModDate>
        
        <creator>Rehab F Abdel-Kader</creator>
        
        <subject>Information Communication; Particle Swarm Optimization; Simulated Annealing; TSP.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>In this paper, an effective hybrid algorithm based on Particle Swarm Optimization (PSO) is proposed for solving the Traveling Salesman Problem (TSP), a well-known NP-complete problem. The hybrid algorithm combines the high global search efficiency of fuzzy PSO with a powerful ability to avoid being trapped in local minima. In the fuzzy PSO system, fuzzy matrices are used to represent the position and velocity of the particles, and the operators in the original PSO position and velocity formulas are redefined. Two strategies are employed in the hybrid algorithm to strengthen the diversity of the particles and to speed up the convergence process. The first strategy is based on Neighborhood Information Communication (NIC) among the particles, where a particle absorbs the better historical experience of its neighboring particles. This strategy does not depend only on the individual experience of the particles, but also on the information about the current state shared by neighbors. The second strategy is the use of Simulated Annealing (SA), which randomizes the search in a way that allows occasional alterations that worsen the solution, in an attempt to increase the probability of escaping local optima. SA is used to slow down the degeneration of the PSO swarm and to increase the swarm’s diversity. In SA, a new solution in the neighborhood of the original one is generated using a purpose-designed search method. A new solution with fitness worse than the original solution is accepted with a probability that gradually decreases in the late stages of the search process. The hybrid algorithm is examined using a set of benchmark problems from the TSPLIB with various sizes and levels of hardness. Comparative experiments were made between the proposed algorithm and regular fuzzy PSO, SA, and basic ACO. The computational results demonstrate the effectiveness of the proposed algorithm for TSP in terms of the obtained solution quality and convergence speed.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%203-Fuzzy%20Particle%20Swarm%20Optimization%20with%20Simulated%20Annealing%20and%20Neighborhood%20Information%20Communication%20for%20Solving%20TSP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>QVT transformation by modelling - From UML Model to MD Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020502</link>
        <id>10.14569/IJACSA.2011.020502</id>
        <doi>10.14569/IJACSA.2011.020502</doi>
        <lastModDate>2012-07-01T10:11:52.5130000+00:00</lastModDate>
        
        <creator>I Arrassen</creator>
        
        <creator>A.Meziane</creator>
        
        <creator>R.Sbai</creator>
        
        <creator>M.Erramdani</creator>
        
        <subject>Datawarehouse; Model Driven Architecture; Multidimensional Modeling; Meta Model; Transformation rules; Query View Transformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>To provide a complete analysis of an organization, its business, and its needs, leaders must have data that support decision making. Data warehouses are designed to meet such needs; they are an analysis and data management technology. This article describes an MDA (Model Driven Architecture) process that we have used to automatically generate the multidimensional schema of a data warehouse. This process uses model transformation based on several standards, such as the Unified Modeling Language, the Meta-Object Facility, Query View Transformation, and the Object Constraint Language. From the UML model, especially the class diagram, a multidimensional model is generated as an XML file; the transformation is carried out by the QVT (Query View Transformation) language and the OCL (Object Constraint Language) language. To validate our approach, a case study is presented at the end of this work.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%202-QVT%20transformation%20by%20modeling.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Using Merkle Tree to Mitigate Cooperative Black-hole Attack in Wireless Mesh Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020501</link>
        <id>10.14569/IJACSA.2011.020501</id>
        <doi>10.14569/IJACSA.2011.020501</doi>
        <lastModDate>2012-07-01T10:11:46.0870000+00:00</lastModDate>
        
        <creator>Shree Om</creator>
        
        <creator>Mohammad Talib</creator>
        
        <subject>WMN; MANET; Cooperative black-hole attack; AODV; Merkle tree; malicious; OWHF</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(5), 2011</description>
        <description>Security is always a major concern and a topic of hot discussion among users of Wireless Mesh Networks (WMNs). The open architecture of WMNs makes it very easy for malicious attackers to exploit loopholes in the routing protocol. The cooperative black-hole attack is a type of denial-of-service attack that sabotages the routing functions of the network layer in WMNs. In this paper we focus on improving the security of one of the popular routing protocols in WMNs, the Ad-hoc On-demand Distance Vector (AODV) routing protocol, and present a probable solution to this attack using a Merkle hash tree.</description>
        <description>http://thesai.org/Downloads/Volume2No5/Paper%201-Using%20Merkle%20Tree%20to%20Mitigate%20Cooperative%20Black-hole%20Attack%20in%20Wireless%20Mesh%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application Of Extended Kalman Filter For A Free Falling Body Towards Earth</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020420</link>
        <id>10.14569/IJACSA.2011.020420</id>
        <doi>10.14569/IJACSA.2011.020420</doi>
        <lastModDate>2012-07-01T10:11:39.6230000+00:00</lastModDate>
        
        <creator>Leela Kumari. B</creator>
        
        <creator>Padma Raju. K</creator>
        
        <creator>Chandan .V.Y.V</creator>
        
        <creator>Sai Krishna. R</creator>
        
        <creator>V.M.J. Rao</creator>
        
        <subject>Kalman filter; Extended Kalman filter; free fall body; apriori information.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>State estimation theory is one of the best mathematical approaches to analyze variations in the states of a system or process. The state of the system is defined by a set of variables that provide a complete representation of its internal condition at any given instant of time. Filtering of random processes is referred to as estimation, a well-defined statistical technique. There are two types of state estimation processes, linear and nonlinear. Linear estimation of a system can easily be analyzed using the Kalman Filter (KF), which computes the target state parameters with a priori information in a noisy environment. But the traditional KF is optimal only when the model is linear, and its performance is well defined only under the assumptions that the system model and noise statistics are well known. Most state estimation problems are nonlinear, thereby limiting the practical applications of the KF. Modified forms of the KF, namely the Extended Kalman filter (EKF), the Unscented Kalman filter, and the Particle filter, are best known for nonlinear estimation. The EKF is the nonlinear version of the Kalman filter which linearizes about the current mean and covariance: the estimation can be linearized around the current estimate using partial derivatives, so that estimates can be computed even in the face of nonlinear relationships. The EKF has been considered the standard in the theory of nonlinear state estimation. This paper deals with how to estimate a nonlinear model with the EKF. The approach in this paper is to analyze the Extended Kalman filter, which provides a better probability of state estimation for a body falling freely towards earth.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2020-Application%20Of%20Extended%20Kalman%20Filter%20For%20A%20Free%20Falling%20Body%20Towards%20Earth.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>ICT for Education</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020419</link>
        <id>10.14569/IJACSA.2011.020419</id>
        <doi>10.14569/IJACSA.2011.020419</doi>
        <lastModDate>2012-07-01T10:11:33.2130000+00:00</lastModDate>
        
        <creator>Marcellin Nkenlifack</creator>
        
        <creator>Raoul Nangue</creator>
        
        <creator>Bethin Demsong</creator>
        
        <creator>Victor Kuate Fotso</creator>
        
        <subject>E-learning; ICT; Management Tools; Secondary school; Platform</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>This paper presents the modeling, design and implementation of a learning platform in Cameroon. This platform contains structured knowledge acquisition modules as well as teaching, learning and assessment modules to promote constructive learning. The objective of this project is to show how ICT can be used to improve teaching and learning with modern digital tools. This will enable the teaching of the official computer curriculum in secondary schools at the national level. A second objective of the project is to put in place a set of ICT-based administrative and managerial tools to guide the day-to-day activities of a secondary school. This will aid in generating information accessible from all levels of the educational system and all its related departments and partners. The project aims at promoting ICT accessibility in the educational system while contributing to reducing the digital divide.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2019-ICT%20for%20Education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>E-Shape Micro strip Patch Antenna on Different Thickness for pervasive Wireless Communication</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020418</link>
        <id>10.14569/IJACSA.2011.020418</id>
        <doi>10.14569/IJACSA.2011.020418</doi>
        <lastModDate>2012-07-01T10:11:26.7830000+00:00</lastModDate>
        
        <creator>Neenansha Jain</creator>
        
        <creator>Anubhuti Khare</creator>
        
        <creator>Rajesh Nema</creator>
        
        <subject>Micro strip antenna; IE3D SIMULATOR; Dielectric; Patch width; Patch Length; Losses; strip width; strip length.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>This paper presents results for different standard substrate thickness values; a thickness of 31 mil at the Ku-band frequency of 12 GHz gives the best result. The antenna has become a necessity for many applications in recent wireless communications, such as radar, microwave and space communication. The proposed antenna is designed on different thicknesses, and the results for each thickness are analyzed over the 1 GHz to 15 GHz frequency range. When the proposed antenna is designed on a 31 mil RT DUROID 5880 substrate from Rogers Corp. with a dielectric constant of 2.2 and a loss tangent of 0.0004, the results verified and tested at 12 GHz on the IE3D SIMULATOR are: return loss = -23.08 dB, VSWR = 1.151, directivity = 11 dBi, gain = 4 dBi, 3 dB beam width = 35.5575 degrees, mismatch loss = -0.0289842 dB (very low), and efficiency = 65.3547%. All results are shown in the simulation results; the return loss and VSWR results are shown in Table 1 and Table 2, respectively.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2018-E-Shape%20Micro%20strip%20Patch%20Antenna%20on%20Different%20Thickness%20for%20pervasive%20Wireless%20Communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Energy-Efficient, Noise-Tolerant CMOS Domino VLSI Circuits in VDSM Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020417</link>
        <id>10.14569/IJACSA.2011.020417</id>
        <doi>10.14569/IJACSA.2011.020417</doi>
        <lastModDate>2012-07-01T10:11:20.3600000+00:00</lastModDate>
        
        <creator>Salendra Govindarajulu</creator>
        
        <creator>Dr.T.Jayachandra Prasad</creator>
        
        <creator>C.Sreelakshmi</creator>
        
        <creator>Chandrakala</creator>
        
        <creator>U.Thirumalesh</creator>
        
        <subject> Dynamic; Domino; Noise Margin; Very Deep submicron technology; High speed; Power consumption; Power delay product (PDP).</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Compared to static CMOS logic, dynamic logic offers good performance. Wide fan-in dynamic logic such as domino is often used in performance-critical paths to achieve high speeds where static CMOS fails to meet performance objectives. However, domino gates typically consume more dynamic switching and leakage power and display weaker noise immunity than static CMOS gates. In view of these problems in previous designs, novel energy-efficient domino circuit techniques are proposed. The proposed techniques reduce the dynamic switching power consumption, short-circuit current overhead, and idle-mode leakage power consumption, and enhance evaluation speed and noise immunity in domino logic circuits. Regarding performance, these techniques also minimize the power-delay product (PDP) as compared to standard full-swing circuits in deep submicron CMOS technology. The noise immunity of CMOS domino circuits with various techniques and keepers is also analyzed. Various noise sources are considered and a noise-immune domino logic is proposed.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2017-Energy-Efficient,%20Noise-Tolerant%20CMOS%20Domino%20VLSI%20Circuits%20in%20VDSM%20Technology.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Advanced Technology Selection Model using Neuro Fuzzy Algorithm for Electronic Toll Collection System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020416</link>
        <id>10.14569/IJACSA.2011.020416</id>
        <doi>10.14569/IJACSA.2011.020416</doi>
        <lastModDate>2012-07-01T10:11:13.9400000+00:00</lastModDate>
        
        <creator>D R Kalbande</creator>
        
        <creator>Nilesh Deotale</creator>
        
        <creator>Priyank Singhal</creator>
        
        <creator>Sumiran Shah</creator>
        
        <creator>G.T.Thampi</creator>
        
        <subject> Neural Network; Fuzzy Logic; Technology Integration; Technology Evaluation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Selecting an optimum advanced technology system for an organization is one of the most crucial issues in any industry. Any technology system that makes business processes more efficient and business management simpler is an important Information System (IS) for the organization. The comprehensive framework is a three-phase approach that introduces two main ideas. One is the adoption of the McCall software quality model, drawn from technology management essentials, using the factors of the McCall model as some of the technology selection criteria. The other is the implementation of a model, proposed on the basis of this research, that uses a Neuro-Fuzzy algorithm to evaluate advanced technology selection. This paper introduces the concept of a new multi-attribute selection process which combines both the Fuzzy logic (linguistic) and Neural network (integral valuation) methodologies to evaluate a technology for an Electronic Toll Collection System. Managers will be able to use this model for selecting a new technology for their organization.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2016-An%20Advanced%20Technology%20Selection%20Model%20using%20Neuro%20Fuzzy%20Algorithm%20for%20Electronic%20Toll%20Collection%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A study on classification of EEG Data using the Filters</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020415</link>
        <id>10.14569/IJACSA.2011.020415</id>
        <doi>10.14569/IJACSA.2011.020415</doi>
        <lastModDate>2012-07-01T10:11:07.5430000+00:00</lastModDate>
        
        <creator>V Baby Deepa</creator>
        
        <creator>Dr. P. Thangaraj</creator>
        
        <subject>EEG (Electro-encephalogram); BCI (Brain Computer Interface); FHT (Fast Hartley Transform); Chebyshev filters; FT tree.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>In the field of data mining, the classification of data is a difficult task for further analysis, and classifying EEG data requires particularly efficient algorithms. In this paper, classification filters such as the Fast Hartley Transform (FHT) and Chebyshev filters are used to classify EEG data signals. In a bulk data set of EEG signals, the signals are classified into many channels. Although various filters are available for classification, only the FHT with Chebyshev filters and the FT tree are considered, in order to determine their efficiency in classifying EEG data signals. When these filters are applied to the data instances, the percentage of correctly classified instances is high. Based on the experimental results, it is suggested that these filters could be used to enhance the classification of EEG data.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2015-A%20study%20on%20classification%20of%20EEG%20Data%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Backpropagation with Vector Chaotic Learning Rate</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020414</link>
        <id>10.14569/IJACSA.2011.020414</id>
        <doi>10.14569/IJACSA.2011.020414</doi>
        <lastModDate>2012-07-01T10:11:01.4870000+00:00</lastModDate>
        
        <creator>A.M. Numan Al Mobin</creator>
        
        <creator>Mobarakol Islam</creator>
        
        <creator>Md. Rihab Rana</creator>
        
        <creator>Md. Masud Rana</creator>
        
        <creator>Kaustubh Dhar</creator>
        
        <creator>Tajul Islam</creator>
        
        <creator>Md. Rezwan</creator>
        
        <creator>M. Hossain</creator>
        
        <subject>Neural network; backpropagation; BPCL; BPVL; chaos; generalization ability; convergence rate.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>In Neural Network (NN) training, the local minimum is an inherent problem. In this paper, a modification of the standard backpropagation (BP) algorithm, called backpropagation with vector chaotic learning rate (BPVL), is proposed to improve the performance of NNs. The BPVL method generates chaotic time series in vector form from the Mackey-Glass and logistic maps, and a rescaled version of these series is used as the learning rate (LR). In BP training, the weights of the NN become inactive once a local minimum is reached during the training session. Using the integrated chaotic learning rate, weight updates are accelerated in the local-minimum region. BPVL is tested on six real-world benchmark classification problems: breast cancer, diabetes, heart disease, Australian credit card, horse, and glass. The proposed BPVL outperforms the existing BP and BPCL in terms of generalization ability as well as convergence rate.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2014-Backpropagation%20with%20Vector%20Chaotic%20Learning%20Rate.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Performance Analysis of Microstrip Array Antennas with Optimum Parameters for X-band Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020413</link>
        <id>10.14569/IJACSA.2011.020413</id>
        <doi>10.14569/IJACSA.2011.020413</doi>
        <lastModDate>2012-07-01T10:10:55.0770000+00:00</lastModDate>
        
        <creator>Tanvir Ishtaique ul Huque</creator>
        
        <creator>Md. Kamal Hosain</creator>
        
        <creator>Md. Shihabul Islam</creator>
        
        <creator>Md. Al-Amin Chowdhury</creator>
        
        <subject>microstrip antenna; array antenna; corporate-series feed array; corporate feed array; series feed array</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>This paper demonstrates simple, low-cost and high-gain microstrip array antennas with suitable feeding techniques and dielectric substrate for applications in the GHz frequency range. The optimum design parameters of the antenna are selected to achieve compact dimensions as well as the best possible characteristics, such as high radiation efficiency and high gain. In this paper, different microstrip array antennas, using series feed, corporate feed and corporate-series feed, are designed, simulated, analyzed and compared with regard to antenna performance. The designed antennas are 4x1, 4x1, and 4x2 arrays. The optimum feeding system is decided based on the various antenna parameters that are simulated. The simulation has been performed using the commercially available SONNET simulator, version V12.56. The designed antennas provide return losses in the range of -4.21 dB to -25.456 dB at frequencies around 10 GHz using a Taconic TLY-5 dielectric substrate with permittivity er = 2.2 and height h = 1.588 mm. The gain of these simulated antennas is found to be about 15 dB, and the side lobe level is maintained lower than the main lobe. Since the resonant frequency of these antennas is around 10 GHz, they are suitable for X-band applications such as satellite communication, radar, medical applications, and other wireless systems.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2013-Design%20and%20Performance%20Analysis%20of%20Microstrip%20Array%20Antennas%20with%20Optimum%20Parameters%20for%20X-band%20Appli.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Retrieval of Text for Biomedical Domain using Data Mining Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020412</link>
        <id>10.14569/IJACSA.2011.020412</id>
        <doi>10.14569/IJACSA.2011.020412</doi>
        <lastModDate>2012-07-01T10:10:48.1300000+00:00</lastModDate>
        
        <creator>Sumit Vashishta</creator>
        
        <creator>Dr. Yogendra Kumar Jain</creator>
        
        <subject>Data mining; Biomedical text extraction; Biomedical text mining.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Data mining, a branch of computer science [1], is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. Data mining is seen as an increasingly important tool by modern businesses for transforming data into business intelligence, giving an informational advantage. Biomedical text retrieval refers to text retrieval techniques applied to biomedical resources and literature available in the biomedical and molecular biology domains. The volume of published biomedical research, and therefore the underlying biomedical knowledge base, is expanding at an increasing rate. Biomedical text retrieval is a way to aid researchers in coping with information overload. By discovering predictive relationships between different pieces of extracted data, data-mining algorithms can be used to improve the accuracy of information extraction. However, textual variation due to typos, abbreviations, and other sources can prevent the productive discovery and utilization of hard-matching rules. Recent methods of soft clustering can exploit predictive relationships in textual data. This paper presents a technique for using a soft clustering data mining algorithm to increase the accuracy of biomedical text extraction. Experimental results demonstrate that this approach improves text extraction more effectively than hard keyword-matching rules.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2012-Efficient%20Retrieval%20of%20Text%20for%20Biomedical%20Domain%20using%20Data%20Mining%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Managing Knowledge in Development of Agile Software</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020411</link>
        <id>10.14569/IJACSA.2011.020411</id>
        <doi>10.14569/IJACSA.2011.020411</doi>
        <lastModDate>2012-07-01T10:10:41.7030000+00:00</lastModDate>
        
        <creator>Mohammed Abdul Bari</creator>
        
        <creator>Dr. Shahanawaj Ahamad</creator>
        
        <subject>Knowledge management; Agile software; Scaling factor; Agility; Knowledge capturing</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Software development is knowledge-intensive work, and the main concern is how to manage that knowledge. Systematic reviews of empirical studies show how knowledge management is used in software engineering and development work. This paper presents how knowledge is used in agile software development and how knowledge is transferred into agile software using the agile manifesto. It then argues for the need to scale agile development strategies with knowledge management to address full delivery. The paper explores eight agile software scaling factors in relation to knowledge management and their implications for successfully scaling agile software delivery to meet the real-world needs of software development organizations.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2011-Managing%20Knowledge%20in%20Development%20of%20Agile%20Software.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coordinate Rotation Digital Computer Algorithm: Design and Architectures</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020410</link>
        <id>10.14569/IJACSA.2011.020410</id>
        <doi>10.14569/IJACSA.2011.020410</doi>
        <lastModDate>2012-07-01T10:10:35.2430000+00:00</lastModDate>
        
        <creator>Naveen Kumar</creator>
        
        <creator>Amandeep Singh Sappal</creator>
        
        <subject>CORDIC Algorithms; CORDIC Architectures; FPGA.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>The COordinate Rotation DIgital Computer (CORDIC) algorithm has potential for efficient and low-cost implementation of a large class of applications, including the generation of trigonometric, logarithmic, and transcendental elementary functions, complex number multiplication, matrix inversion, the solution of linear systems, and general scientific computation. This paper presents a brief overview of developments in the CORDIC algorithm and its architectures.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%2010-Coordinate%20Rotation%20Digital%20Computer%20Algorithm%20Design%20and%20Architectures.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Create a Virtual Mannequin Through the 2-D Image-based Anthropometric Measurement and Radius Distance Free Form Deformation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020409</link>
        <id>10.14569/IJACSA.2011.020409</id>
        <doi>10.14569/IJACSA.2011.020409</doi>
        <lastModDate>2012-07-01T10:10:28.7870000+00:00</lastModDate>
        
        <creator>Sheng Fuu Lin</creator>
        
        <creator>Shih-Che Chien</creator>
        
        <subject>human body; anthropometric measurement; free form deformation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>3-D human body models are used in a wide spectrum of applications, such as the film and entertainment industries, that require images of human replicas, but computer-generated models of the human body generally do not adequately model the complex human morphology. These models do not reflect realistic anthropometric data and are not specific enough for commercial use. This paper presents an approach to adjusting virtual mannequins through the use of anthropometric measurement data obtained from 2-D image-based measurement. In this approach, a novel method for 2-D image-based anthropometric measurement is proposed, which uses Chinese medicine acupuncture theory for fast position locating and a human body slice model to approximate circumferences. The measurement data are used to group 3-D scanned body objects into clusters. The virtual mannequins are then adjusted using the measurement data of the standard model belonging to each cluster. In this way, realistic, accurate virtual mannequins are created.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%209-Create%20a%20Virtual%20Mannequin%20Through%20the%202-D%20Image-based%20Anthropometric%20Measurement%20and%20Radius%20Distance%20Free%20Form%20Deformation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>FPGA Based Cipher Design &amp; Implementation of Recursive Oriented Block Arithmetic and Substitution Technique (ROBAST)</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020408</link>
        <id>10.14569/IJACSA.2011.020408</id>
        <doi>10.14569/IJACSA.2011.020408</doi>
        <lastModDate>2012-07-01T10:10:22.3370000+00:00</lastModDate>
        
        <creator>Rajdeep Chakraborty</creator>
        
        <creator>JK Mandal, Professor</creator>
        
        <subject>VHDL; FPGA; RTL; Block Cipher; Session Key and Private Key; Cryptography; Symmetric/Private Key Cryptosystem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>The proposed FPGA-based technique considers a message as a binary string on which ROBAST is applied. A block of n bits is taken as an input stream, where n ranges from 8 to 256 bits; ROBAST is then applied to each block to generate intermediate streams, any one of which is considered as the cipher text. The same operation is performed repeatedly on various block sizes. The technique is a symmetric block cipher, hence decoding is done in a similar manner. This paper also presents an efficient hardware realization of the proposed technique using a state-of-the-art Field Programmable Gate Array (FPGA). The technique is also coded in the C programming language and Very High Speed Integrated Circuit Hardware Description Language (VHDL). Various results and comparisons have been obtained against the industrially accepted RSA and TDES, and satisfactory results are found.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%208-FPGA%20Based%20Cipher%20Design%20Implementation%20of%20Recursive%20Oriented%20Block%20Arithmetic%20and%20Substitution%20Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of MIMO-OFDM System Using Singular Value Decomposition and Water Filling Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020407</link>
        <id>10.14569/IJACSA.2011.020407</id>
        <doi>10.14569/IJACSA.2011.020407</doi>
        <lastModDate>2012-07-01T10:10:15.9370000+00:00</lastModDate>
        
        <creator>Md. Noor-A-Rahim</creator>
        
        <creator>Md. Saiful Islam</creator>
        
        <creator>Md. Nashid Anjum</creator>
        
        <creator>Md. Kamal Hosain</creator>
        
        <creator>Abbas Z. Kouzani</creator>
        
        <subject>MIMO; OFDM; ISI; SVD; Water filling algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>In this paper, MIMO is paired with OFDM to improve the performance of wireless transmission systems. Multiple antennas are employed at both the transmitting and the receiving ends. The performance of an OFDM system is measured considering multipath delay spread, channel noise, Rayleigh fading and distortion. Bits are generated and then mapped with modulation schemes such as QPSK, 8PSK, and QAM. The mapped data is divided into blocks of 120 modulated symbols, and a training sequence is inserted at both the beginning and the end of each block. Equalization is used to determine the variation across the rest of the data. Singular value decomposition (SVD) and the water filling algorithm have been employed to measure the performance of the integrated MIMO-OFDM system. Capacity is increased by transmitting different streams of data through different antennas at the same carrier frequency. Any intersymbol interference (ISI) produced after transmission is removed by using spatial sampling integrated with the signal processing algorithm. Furthermore, the performance remains the same with different combinations of transmitting and receiving antennas.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%207-Performance%20Analysis%20of%20MIMO-OFDM%20System%20Using%20Singular%20Value%20Decomposition%20and%20Water%20Filling%20Algorith.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel approach for Implementing Security over Vehicular Ad hoc network using Signcryption through Network Grid</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020406</link>
        <id>10.14569/IJACSA.2011.020406</id>
        <doi>10.14569/IJACSA.2011.020406</doi>
        <lastModDate>2012-07-01T10:10:12.5300000+00:00</lastModDate>
        
        <creator>Vijayan R</creator>
        
        <creator>Sumitkumar Singh</creator>
        
        <subject>Network Grid; Computation Server; Vehicular Node; Public/Private Keys</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Security over vehicular ad hoc networks and accurate identification of vehicle location have always been major challenges in VANETs. Even though GPS systems can be used to identify the location of a vehicle, they too suffer from major drawbacks. A novel approach is suggested by the authors wherein the VANET is made more secure by using the signcryption technique, and at the same time a unique approach of using a Network Grid to flawlessly identify the location of a vehicle is proposed.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%206-A%20Novel%20approach%20for%20Implementing%20Security%20over%20Vehicular%20Ad%20hoc%20network%20using%20Signcryption%20through%20Network%20Grid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dimensionality Reduction technique using Neural Networks – A Survey</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020405</link>
        <id>10.14569/IJACSA.2011.020405</id>
        <doi>10.14569/IJACSA.2011.020405</doi>
        <lastModDate>2012-07-01T10:10:06.0430000+00:00</lastModDate>
        
        <creator>Shamla Mantri</creator>
        
        <creator>Nikhil S. Tarale</creator>
        
        <creator>Sudip C. Mahajan</creator>
        
        <subject>Principal component analysis [PCA]; Independent component analysis [ICA]; self-organizing map [SOM]; Face recognition.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>A self-organizing map (SOM) is a classical neural network method for dimensionality reduction. It belongs to the unsupervised class: SOM is a neural network trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map. SOM uses a neighborhood function to preserve the topological properties of the input space. SOM operates in two modes: training and mapping. Using the input examples, training builds the map; this is also called vector quantization. In this paper, we first survey related dimension reduction methods and then examine their capabilities for face recognition. In this work, different dimensionality reduction techniques such as principal component analysis (PCA), independent component analysis (ICA), and the self-organizing map (SOM) are selected and applied in order to reduce the loss of classification performance due to changes in facial expression. The experiments were conducted on the ORL face database, and the results show that SOM is the better technique.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%205-Dimensionality%20Reduction%20technique%20using%20Neural%20Networks%20%E2%80%93%20A%20Survey%20(Autosaved).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Annotations, Collaborative Tagging, and Searching Mathematics in E-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020404</link>
        <id>10.14569/IJACSA.2011.020404</id>
        <doi>10.14569/IJACSA.2011.020404</doi>
        <lastModDate>2012-07-01T10:09:59.5770000+00:00</lastModDate>
        
        <creator>Iyad Abu Doush</creator>
        
        <creator>Faisal Alkhateeb</creator>
        
        <creator>Eslam Al Maghayreh</creator>
        
        <creator>Izzat Alsmadi</creator>
        
        <creator>Samer Samarah</creator>
        
        <subject>Semantic Web; MathML; Adaptive e-learning; Folksonomies; Collaborative tagging.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>This paper presents a new framework for adding semantics into an e-learning system. The proposed approach relies on two principles. The first is the automatic addition of semantic information when creating the mathematical contents. The second is the collaborative tagging and annotation of the e-learning contents and the use of an ontology to categorize them. The proposed system encodes the mathematical contents using presentation MathML with RDFa annotations. The system allows students to highlight and annotate specific parts of the e-learning contents. The objective is to add meaning to the e-learning contents, to add relationships between contents, and to create a framework that facilitates searching the contents. This semantic information can be used to answer semantic queries (e.g., SPARQL) to retrieve the information requested by a user. This work is implemented as embedded code in the Moodle e-learning system.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%204-Annotations,%20Collaborative%20Tagging,%20and%20Searching%20Mathematics%20in%20E-Learning%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Reliable Security Model Irrespective of Energy Constraints in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020403</link>
        <id>10.14569/IJACSA.2011.020403</id>
        <doi>10.14569/IJACSA.2011.020403</doi>
        <lastModDate>2012-07-01T10:09:53.1230000+00:00</lastModDate>
        
        <creator>D Prasad</creator>
        
        <creator>Manik Gupta</creator>
        
        <creator>R. B. Patel</creator>
        
        <subject>Wireless Sensor Network (WSN); Sensor Node (SN); Base Station (BS); Static Keys; Dynamic Keys; Real-Time MAC ID (RTMAC)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Wireless Sensor Networks (WSNs) are one of the most exciting and challenging research areas. They are an emerging technology with various applications for both public and military purposes. In order to operate these applications successfully, it is necessary to maintain the privacy and secrecy of the transmitted data. In this paper, we present a Reliable Security Model (RSM) for WSNs. To incorporate security, we use four keys, of which two are static and two are dynamic. One of the static keys is obtained by the composition of Q keys, and the other is the real-time MAC ID (RTMAC). The dynamic keys are computed on the fly and keep changing each time the network is synchronized. In RSM, the synchronization interval is less than the time required for an adversary to compromise a node, so that even if some nodes are compromised, their keying materials have already been changed.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%203-A%20Reliable%20Security%20Model%20Irrespective%20of%20Energy%20Constraints%20in%20Wireless%20Sensor%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Digital Image Watermarking Technique Based on Different Attacks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020402</link>
        <id>10.14569/IJACSA.2011.020402</id>
        <doi>10.14569/IJACSA.2011.020402</doi>
        <lastModDate>2012-07-01T10:09:46.6630000+00:00</lastModDate>
        
        <creator>Manjit Thapa</creator>
        
        <creator>Dr. Sandeep Kumar Sood</creator>
        
        <creator>A.P Meenakshi Sharma</creator>
        
        <subject>Digital image watermarking; copyright protection; Singular value decomposition; Watermark embedding procedure; Watermark extracting procedure.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>Digital watermarking is used to hide information inside a signal in such a way that it cannot be easily extracted by a third party. Its most widely used application is copyright protection of digital information. It differs from encryption in the sense that it allows the user to access, view, and interpret the signal but protects the ownership of the content. One current research area is protecting the digital watermark inside the information so that ownership of the information cannot be claimed by a third party. With a great deal of information available through various search engines, protecting the ownership of information is a crucial area of research. In recent years, several digital watermarking techniques have been presented based on the discrete cosine transform (DCT), discrete wavelet transform (DWT), and discrete Fourier transform (DFT). In this paper, we propose an algorithm for a digital image watermarking technique based on singular value decomposition; both the L and U components are explored for the watermarking algorithm. The technique comprises a watermark embedding procedure and a watermark extracting procedure. The proposed digital image watermarking technique for copyright protection is robust. The experimental results show that the quality of the watermarked image is good and that there is strong resistance against many attacks. Digital image watermarking is the most effective solution in this area, and its use to protect information is increasing exponentially day by day.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%202-Digital%20Image%20Watermarking%20Technique%20Based%20on%20Different%20Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Anthropomorphic User Interface Feedback in a Sewing Context and Affordances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020401</link>
        <id>10.14569/IJACSA.2011.020401</id>
        <doi>10.14569/IJACSA.2011.020401</doi>
        <lastModDate>2012-07-01T10:09:40.2630000+00:00</lastModDate>
        
        <creator>Dr Pietro Murano</creator>
        
        <creator>Tanvi Sethi</creator>
        
        <subject>anthropomorphism; affordances; user interface evaluation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(4), 2011</description>
        <description>The aim of the authors&#39; research is to gain better insights into the effectiveness and user satisfaction of anthropomorphism at the user interface. This paper therefore presents a between-users experiment and its results in the context of anthropomorphism at the user interface and the giving of instructions for learning sewing stitches. Two experimental conditions were used, in which the information for learning sewing stitches was the same but the manner of presentation was varied: one condition was anthropomorphic and the other non-anthropomorphic. The work is also closely linked with Hartson&#39;s theory of affordances applied to user interfaces. The results suggest that facilitation of the affordances in an anthropomorphic user interface leads to statistically significant results in terms of effectiveness and user satisfaction in the sewing context. Further, some violation of the affordances leads to an interface being less usable in terms of effectiveness and user satisfaction.</description>
        <description>http://thesai.org/Downloads/Volume2No4/Paper%201-Anthropomorphic%20User%20Interface%20Feedback%20in%20a%20Sewing%20Context%20and%20Affordances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computer Aided Design and Simulation of a Multiobjective Microstrip Patch Antenna for Wireless Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020319</link>
        <id>10.14569/IJACSA.2011.020319</id>
        <doi>10.14569/IJACSA.2011.020319</doi>
        <lastModDate>2012-07-01T10:09:32.8170000+00:00</lastModDate>
        
        <creator>Chitra Singh</creator>
        
        <creator>R. P. S. Gangwar</creator>
        
        <subject>Electromagnetic simulation; microstrip antenna; radiation pattern; IE3D.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>The utility and attractiveness of microstrip antennas have made it ever more important to find ways to precisely determine the radiation patterns of these antennas. Taking advantage of the added processing power of today’s computers, electromagnetic simulators are emerging to perform both planar and 3D analysis of high-frequency structures. One such tool is IE3D, a program that utilizes the method of moments. This paper investigates the method used by the program and then uses the IE3D software to construct microstrip antennas and analyze the simulation results. The antenna offers good electrical performance while preserving the advantages of microstrip antennas such as small size and ease of manufacture; as no lumped components are employed in the design, it is low cost; and most importantly, it serves multiple wireless applications.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2019-%20Computer%20Aided%20Design%20and%20Simulation%20of%20a%20Multiobjective%20Microstrip%20Patch%20Antenna%20for%20Wireless%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Feed Forward Neural Network Based Eye Localization and Recognition Using Hough Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020318</link>
        <id>10.14569/IJACSA.2011.020318</id>
        <doi>10.14569/IJACSA.2011.020318</doi>
        <lastModDate>2012-07-01T10:09:26.3470000+00:00</lastModDate>
        
        <creator>Shylaja S S</creator>
        
        <creator>K N Balasubramanya Murthy</creator>
        
        <creator>S Natarajan</creator>
        
        <subject>Hough Transform; Eye Detection; Accumulator Bin; Neural Network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Eye detection is a prerequisite stage for many applications such as face recognition, iris recognition, eye tracking, fatigue detection based on eye-blink count, and eye-directed instruction control. As the location of the eyes is a dominant feature of the face, it can be used as an input to a face recognition engine. In this direction, the paper proposed here localizes eye positions using Hough Transform (HT) coefficients, which are found to be good at extracting geometrical components from any given object. The proposed method uses the circular and elliptical features of eyes to localize them in a given face; such geometrical features can be extracted very efficiently using the HT technique. The HT is based on an evidence-gathering approach, where the evidence is the votes cast in an accumulator array. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. A feed-forward neural network has been used for the classification of eyes and non-eyes, as the dimension of the data is large in nature. Experiments have been carried out on standard databases as well as on a local DB consisting of gray-scale images. This technique has yielded very satisfactory results, with an accuracy of 98.68%.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2018-%20Feed%20Forward%20Neural%20Network%20Based%20Eye%20Localization%20and%20Recognition%20Using%20Hough%20Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of E-Media on Customer Purchase Intention</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020317</link>
        <id>10.14569/IJACSA.2011.020317</id>
        <doi>10.14569/IJACSA.2011.020317</doi>
        <lastModDate>2012-07-01T10:09:19.8770000+00:00</lastModDate>
        
        <creator>Mehmood Rehmani</creator>
        
        <creator>Muhammad Ishfaq Khan</creator>
        
        <subject>e-discussion; e-mail; website; online chat.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>In this research paper, the authors investigate the social media parameters (e-discussion, websites, online chat, e-mail, etc.) that affect customers’ buying decisions. The research focuses on the development of a research model to test the impact of social media on customer purchase intention. A literature review was conducted to explore the work done on social media. The authors identify the problem and define the objectives of the study; to achieve them, a research model is proposed, followed by the development of research hypotheses to test the model.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2017-%20The%20Impact%20of%20E-Media%20on%20Customer%20Purchase%20Intention.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Electronic Intelligent Hotel Management System for International Marketplace</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020316</link>
        <id>10.14569/IJACSA.2011.020316</id>
        <doi>10.14569/IJACSA.2011.020316</doi>
        <lastModDate>2012-07-01T10:09:13.4170000+00:00</lastModDate>
        
        <creator>Md. Noor A Rahim</creator>
        
        <creator>Md. Kamal Hosain</creator>
        
        <creator>Md. Saiful Islam</creator>
        
        <creator>Md. Nashid Anjum</creator>
        
        <creator>Md. Masud Rana</creator>
        
        <subject>E-marketplace; hotel management; intelligent search; intelligent system; image processing algorithms; web-based application</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>To compete in the international marketplace, it is crucial for the hotel industry to be able to continually improve its services for tourism. In order to construct an electronic marketplace (e-market), it is an inherent requirement to build a correct architecture with a proper approach to the intelligent systems embedded in it. This paper introduces a web-based intelligent system that helps in maintaining a hotel by reducing the immediate involvement of manpower. The hotel reception policy, room facilities, and intelligent personalized promotion are the main focuses of this paper. An intelligent search for existing boarders as well as room availability is incorporated into the system. For each of the facilities, a flow chart has been developed that confirms the techniques and relevant devices used in the system. By studying several scenarios, the paper outlines a number of techniques for the realization of the intelligent hotel management system. Special attention is paid to security and also to the prevention of power and water wastage. In the power-saving scenario, an image processing approach is taken to detect the presence of people and darkness in a particular room. Moreover, the proposed automated computerized scheme also takes account of the cost advantage. Considering the scarcity of manpower in several countries, the objective of this paper is to initiate discussion and research toward making the proposed systems more commercialized.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2016-%20An%20Electronic%20Intelligent%20Hotel%20Management%20System%20For%20International%20Marketplace.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Integrated Routing Protocol for Opportunistic Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020315</link>
        <id>10.14569/IJACSA.2011.020315</id>
        <doi>10.14569/IJACSA.2011.020315</doi>
        <lastModDate>2012-07-01T10:09:06.8730000+00:00</lastModDate>
        
        <creator>Anshul Verma</creator>
        
        <creator>Dr. Anurag Srivastava</creator>
        
        <subject>context aware routing; context information; context oblivious routing; MANET; opportunistic network.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>In opportunistic networks, the existence of a simultaneous path between a sender and a receiver is not assumed when transmitting a message. Information about the context in which users communicate is a key piece of knowledge for designing efficient routing protocols in opportunistic networks, but this kind of information is not always available. When users are very isolated, context information cannot be distributed and cannot be used for making efficient routing decisions. In such cases, context-oblivious schemes are the only way to enable communication between users. As soon as users become more social, context data spreads through the network, and context-based routing becomes an efficient solution. In this paper, we design an integrated routing protocol that is able to use context data as soon as it becomes available and falls back to dissemination-based routing when context information is not available. We then provide a comparison with Epidemic and PROPHET, which are representative of context-oblivious and context-aware routing protocols, respectively. Our results show that the integrated routing protocol provides better results in terms of message delivery probability and message delay, whether or not context information about users is available.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2015-%20Integrated%20Routing%20Protocol%20for%20Opportunistic%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Arabic Cursive Characters Distributed Recognition using the DTW Algorithm on BOINC: Performance Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020313</link>
        <id>10.14569/IJACSA.2011.020313</id>
        <doi>10.14569/IJACSA.2011.020313</doi>
        <lastModDate>2012-07-01T10:09:00.0000000+00:00</lastModDate>
        
        <creator>Zied TRIFA</creator>
        
        <creator>Mohamed LABIDI</creator>
        
        <creator>Maher KHEMAKHEM</creator>
        
        <subject>Volunteer Computing, BOINC, Arabic OCR, DTW algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Volunteer computing, or volunteer grid computing, constitutes a very promising infrastructure that provides ample computing and storage power without any prior cost or investment. Indeed, such infrastructures result from the federation of several geographically dispersed computers and/or LANs over the Internet. The Berkeley Open Infrastructure for Network Computing (BOINC) is considered the best-known volunteer computing infrastructure. In this paper, we are interested in distributing Arabic OCR (Optical Character Recognition) based on the DTW (Dynamic Time Warping) algorithm over BOINC, in order to prove again that volunteer computing provides very interesting and promising infrastructures to speed up, at will, several demanding algorithms and applications, especially Arabic OCR based on the DTW algorithm. What makes DTW-based Arabic OCR very attractive is, first, its ability to properly recognize words or sub-words, without any prior segmentation, from within a reference library of isolated characters, and second, its good immunity against a wide range of noise. Our first results confirm that BOINC indeed constitutes an interesting and promising framework for speeding up Arabic OCR based on the DTW algorithm.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2013-%20Arabic%20Cursive%20Characters%20Distributed%20Recognition%20using%20the%20DTW%20Algorithm%20on%20BOINC%20Performance%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pulse Shape Filtering in Wireless Communication-A Critical Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020312</link>
        <id>10.14569/IJACSA.2011.020312</id>
        <doi>10.14569/IJACSA.2011.020312</doi>
        <lastModDate>2012-07-01T10:08:53.9330000+00:00</lastModDate>
        
        <creator>A S Kang</creator>
        
        <creator>Vishal Sharma</creator>
        
        <subject>WCDMA, Pulse Shaping.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>The goal of the Third Generation (3G) of mobile communication systems is to seamlessly integrate a wide variety of communication services. The rapidly increasing popularity of mobile radio services has created a series of technological challenges. One of these is the need for power- and spectrally-efficient modulation schemes to meet the spectral requirements of mobile communications. Pulse shaping plays a crucial role in spectral shaping in modern wireless communication to reduce the spectral bandwidth. Pulse shaping is a spectral processing technique by which fractional out-of-band power is reduced for low-cost, reliable, power- and spectrally-efficient mobile radio communication systems. The pulse shaping filter not only reduces inter-symbol interference (ISI), but also reduces adjacent channel interference. The present paper provides a critical analysis of pulse shaping in wireless communication.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2012-%20Pulse%20Shape%20Filtering%20in%20Wireless%20Communication-A%20Critical%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Equalization Algorithms: An Overview</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020311</link>
        <id>10.14569/IJACSA.2011.020311</id>
        <doi>10.14569/IJACSA.2011.020311</doi>
        <lastModDate>2012-07-01T10:08:46.7370000+00:00</lastModDate>
        
        <creator>Garima Malik</creator>
        
        <creator>Amandeep Singh Sappal</creator>
        
        <subject>Channel Equalizer, Adaptive Equalizer, Least Mean Square, Recursive Least Squares</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Recent digital transmission systems impose the use of channel equalizers with short training time and high tracking rate. Equalization techniques compensate for the time dispersion introduced by communication channels and combat the resulting inter-symbol interference (ISI). Given a channel of unknown impulse response, the purpose of an adaptive equalizer is to operate on the channel output such that the cascade connection of the channel and the equalizer approximates an ideal transmission medium. Typically, adaptive equalizers used in digital communications require an initial training period, during which a known data sequence is transmitted. A replica of this sequence is made available at the receiver in proper synchronism with the transmitter, thereby making it possible for the equalizer coefficients to be adjusted in accordance with the adaptive filtering algorithm employed in the equalizer design. In this paper, an overview of the current state of the art in adaptive equalization techniques is presented.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2011-%20Adaptive%20Equalization%20Algorithms%20An%20Overview.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multicasting over Overlay Networks - A Critical Review</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020310</link>
        <id>10.14569/IJACSA.2011.020310</id>
        <doi>10.14569/IJACSA.2011.020310</doi>
        <lastModDate>2012-07-01T10:08:40.3930000+00:00</lastModDate>
        
        <creator>M.F M Firdhous</creator>
        
        <subject>Multicasting, overlay networks, streaming media</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Multicasting technology uses minimum network resources to serve multiple clients by duplicating data packets at the closest possible point to the clients. In this way, at most one copy of a data packet travels down a network link at any one time, irrespective of how many clients receive it. Traditionally, multicasting has been implemented over a specialized network built using multicast routers. This kind of network has the drawback of requiring the deployment of special routers that are more expensive than ordinary routers. Recently there has been new interest in delivering multicast traffic over application-layer overlay networks. Although built on top of the physical network, application-layer overlay networks behave like independent virtual networks made up of only logical links between nodes. Several authors have proposed systems, mechanisms, and protocols for implementing multicast media streaming over overlay networks. In this paper, the author takes a critical look at these systems and mechanisms, with special reference to their strengths and weaknesses.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%2010-%20Multicasting%20over%20Overlay%20Networks%20-%20A%20Critical%20Review.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Based Image Denoising Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020309</link>
        <id>10.14569/IJACSA.2011.020309</id>
        <doi>10.14569/IJACSA.2011.020309</doi>
        <lastModDate>2012-07-01T10:08:37.0100000+00:00</lastModDate>
        
        <creator>Sachin D Ruikar</creator>
        
        <creator>Dharmpal D Doye</creator>
        
        <subject>Image; Denoising; Wavelet Transform; Signal to Noise ratio; Kernel.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>This paper proposes different approaches to wavelet-based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing tasks such as image compression and denoising, and multiwavelets can be considered an extension of scalar wavelets. The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results are reported for different types of noise, such as Gaussian, Poisson, salt-and-pepper, and speckle. The signal-to-noise ratio was preferred as a measure of denoising quality.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%209-%20Wavelet%20Based%20Image%20Denoising%20Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Effective Implementation of Agile Practices - Ingenious and Organized Theoretical Framework</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020308</link>
        <id>10.14569/IJACSA.2011.020308</id>
        <doi>10.14569/IJACSA.2011.020308</doi>
        <lastModDate>2012-07-01T10:08:30.6270000+00:00</lastModDate>
        
        <creator>Veerapaneni Esther Jyothi</creator>
        
        <creator>K. Nageswara Rao</creator>
        
        <subject>Traceability; requirements; agile manifesto; framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Agile software development challenges traditional ways of delivering software by providing a very different approach to software development. Agile methods aim to be faster, lighter, and more efficient than any other rigorous method in developing software and supporting the customer&#39;s business without being chaotic. Agile software development methods claim to be people-oriented rather than process-oriented, and adaptive rather than predictive. Solid determination and dedicated effort are required in agile development to overcome the disadvantages of a predefined set of steps and of changing requirements, to achieve the desired outcome and avoid predictable results. These methods reach the target promptly by linking developers and stakeholders. The focus of this research paper is twofold. The first part studies different agile methodologies, identifies the practical difficulties in agile software development, and suggests possible solutions within a collaborative and innovative framework. The second part concentrates on the importance of handling traceability in agile software development, and finally proposes an ingenious and organized theoretical framework with a systematic approach to agile software development.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%208-%20Effective%20Implementation%20of%20Agile%20Practices%20-%20Ingenious%20and%20Organized%20Theoretical%20Framework.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Fuzzy Decision Support System for Management of Breast Cancer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020307</link>
        <id>10.14569/IJACSA.2011.020307</id>
        <doi>10.14569/IJACSA.2011.020307</doi>
        <lastModDate>2012-07-01T10:08:23.8800000+00:00</lastModDate>
        
        <creator>Ahmed Abou Elfetouh Saleh</creator>
        
        <creator>Sherif Ebrahim Barakat</creator>
        
        <creator>Ahmed Awad Ebrahim Awad</creator>
        
        <subject>Decision Support System; Breast Cancer; Fuzzy Logic; Mamdani Inference.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>In the molecular era, the management of cancer is no longer a plan based on simple guidelines. Clinical findings, tumor characteristics, and molecular markers are integrated to identify different risk categories, based on which treatment is planned for each individual case. This paper aims at developing a fuzzy decision support system (DSS) to guide doctors in the risk stratification of breast cancer, which is expected to have a great impact on treatment decisions and to minimize individual variation in selecting the optimal treatment for a particular case. The developed system was based on the clinical practice of the Oncology Center, Mansoura University (OCMU). The system has six input variables (Her2, hormone receptors, age, tumor grade, tumor size, and lymph node) and one output variable (risk status). The output variable takes a value from 1 to 4, representing low, intermediate, and high risk status. The system uses the Mamdani inference method, with simulation carried out in the MATLAB R2009b fuzzy logic toolbox.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%207-%20A%20Fuzzy%20Decision%20Support%20System%20for%20Management%20of%20Breast%20Cancer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge discovery from database using an integration of clustering and classification</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020306</link>
        <id>10.14569/IJACSA.2011.020306</id>
        <doi>10.14569/IJACSA.2011.020306</doi>
        <lastModDate>2012-07-01T10:08:17.2230000+00:00</lastModDate>
        
        <creator>Varun Kumar</creator>
        
        <creator>Nisha Rathee</creator>
        
        <subject>Data Mining; J48; KMEANS; WEKA; Fisher’s Iris dataset.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Clustering and classification are two important techniques of data mining. Classification is a supervised learning problem of assigning an object to one of several pre-defined categories based upon the attributes of the object, while clustering is an unsupervised learning problem that groups objects based upon distance or similarity; each group is known as a cluster. In this paper we use a large database, Fisher&#39;s Iris dataset, containing 5 attributes and 150 instances, to perform an integration of the clustering and classification techniques of data mining. We compared the results of a simple classification technique (using the J48 classifier) with the results of the integrated clustering and classification technique, based upon various parameters, using WEKA (Waikato Environment for Knowledge Analysis), a data mining tool. The results of the experiment show that the integration of clustering and classification gives promising results with high accuracy and robustness even when the data set contains missing values.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%206-%20Knowledge%20discovery%20from%20database%20using%20an%20integration%20of%20clustering%20and%20classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Transparent Data Encryption- Solution for Security of Database Contents</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020305</link>
        <id>10.14569/IJACSA.2011.020305</id>
        <doi>10.14569/IJACSA.2011.020305</doi>
        <lastModDate>2012-07-01T10:08:10.4130000+00:00</lastModDate>
        
        <creator>Dr. Anwar Pasha Abdul Gafoor Deshmukh</creator>
        
        <creator>Dr. Riyazuddin Qureshi</creator>
        
        <subject>Transparent Data Encryption, TDE, Encryption, Decryption, Microsoft SQL Server 2008</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>The present study deals with Transparent Data Encryption, a technology used to address the security of data. Transparent Data Encryption means encrypting databases on hard disk and on any backup media. The present-day global business environment presents numerous security threats and compliance challenges. To protect against data theft and fraud, we require security solutions that are transparent by design. Transparent Data Encryption provides transparent, standards-based security that protects data on the network, on disk, and on backup media. It is an easy and effective means of protecting stored data by transparently encrypting it. Transparent Data Encryption can be used to provide high levels of security to columns, tables, and tablespaces, that is, database files stored on hard drives, floppy disks, or CDs, and other information that requires protection. It is the technology used by Microsoft SQL Server 2008 to encrypt database contents. The term encryption means encoding a piece of information in such a way that it can only be decoded, read, and understood by the people for whom it is intended. The study deals with ways to create a master key, create a certificate protected by the master key, create a database master key protected by the certificate, and set the database to use encryption in Microsoft SQL Server 2008.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%205-%20Transparent%20Data%20Encryption-%20Solution%20for%20Security%20of%20Database%20Contents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Advanced Steganography Algorithm using Encrypted secret message</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020304</link>
        <id>10.14569/IJACSA.2011.020304</id>
        <doi>10.14569/IJACSA.2011.020304</doi>
        <lastModDate>2012-07-01T10:08:03.6600000+00:00</lastModDate>
        
        <creator>Joyshree Nath</creator>
        
        <creator>Asoke Nath</creator>
        
        <subject>Steganography; MSA algorithm; Encryption; Decryption.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>In the present work the authors introduce a new method for hiding any encrypted secret message inside a cover file. For encrypting the secret message the authors use a new algorithm proposed by Nath et al. (1); for hiding the secret message we use a method proposed by Nath et al. (2). In the MSA (1) method we modified the idea of the Playfair method into a new platform where we can encrypt or decrypt any file. We introduce a new randomization method for generating the randomized key matrix to encrypt a plain text file and to decrypt a cipher text file, and a new algorithm for encrypting the plain text multiple times. Our method is totally dependent on the random text_key, which is supplied by the user. The text_key can be up to 16 characters long and may contain any character (ASCII code 0 to 255). We have developed an algorithm to calculate the randomization number and the encryption number from the given text_key. The size of the encryption key matrix is 16x16, and the total number of such matrices is 256!, so a brute-force attack would have to try 256! possibilities, which is impractical. Moreover, the multiple encryption method makes the system further secured. For hiding the secret message in the cover file we insert the 8 bits of each character of the encrypted message file into 8 consecutive bytes of the cover file, and we introduce a password for hiding data in the cover file. We propose that our new method could be most appropriate for hiding any file in any standard cover file such as image, audio, or video files. Because the hidden message is encrypted, it will be almost impossible for an intruder to recover the actual secret message from the embedded cover file. This method may also be among the most secure methods for digital watermarking.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%204-%20Advanced%20Steganography%20Algorithm%20using%20Encrypted%20secret%20message.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Survey on Attacks and Defense Metrics of Routing Mechanism in Mobile Ad hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020302</link>
        <id>10.14569/IJACSA.2011.020302</id>
        <doi>10.14569/IJACSA.2011.020302</doi>
        <lastModDate>2012-07-01T10:07:56.9930000+00:00</lastModDate>
        
        <creator>K P Manikandan</creator>
        
        <creator>Dr.R.Satyaprasad</creator>
        
        <creator>Dr.K.Rajasekhararao</creator>
        
        <subject>MANET; Routing Protocol; Security Attacks; Routing Attacks and Defense Metrics</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>A Mobile Ad hoc Network (MANET) is a dynamic wireless network that can be formed without any infrastructure, in which each node can act as a router. The nodes in a MANET are themselves responsible for dynamically discovering other nodes to communicate with. Although the ongoing trend is to adopt ad hoc networks for commercial uses due to their unique properties, the main challenge is their vulnerability to security attacks. In the presence of malicious nodes, one of the main challenges in MANETs is to design a robust security solution that can protect them from various routing attacks. Different mechanisms have been proposed using various cryptographic techniques to counter routing attacks against MANETs; nevertheless, attacks with malicious intent have been and will be devised to exploit these vulnerabilities and to cripple MANET operations. Attack prevention measures, such as authentication and encryption, can be used as the first line of defense for reducing the possibility of attacks. However, these mechanisms are not well suited to MANET resource constraints, i.e., limited bandwidth and battery power, because they introduce a heavy traffic load to exchange and verify keys. In this paper, we identify the existing security threats an ad hoc network faces, the security services required, and the countermeasures for attacks on routing protocols. To accomplish our goal, we have conducted a literature survey gathering information on various types of attacks and solutions. Finally, we identify the remaining challenges and propose solutions to overcome them. Our survey focuses on findings and related works that provide secure protocols for MANETs. In short, a complete security solution requires prevention, detection, and reaction mechanisms applied in the MANET.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%202-%20A%20Survey%20on%20Attacks%20and%20Defense%20Metrics%20of%20Routing%20Mechanism%20in%20Mobile%20Ad%20hoc%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quality of Service Management on Multimedia Data Transformation into Serial Stories Using Movement Oriented Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020301</link>
        <id>10.14569/IJACSA.2011.020301</id>
        <doi>10.14569/IJACSA.2011.020301</doi>
        <lastModDate>2012-07-01T10:07:48.8670000+00:00</lastModDate>
        
        <creator>A Muslim</creator>
        
        <creator>A.B. Mutiara</creator>
        
        <subject>multimedia data; serial stories; movement oriented; standard level knowledge;quality of service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(3), 2011</description>
        <description>Transforming multimedia data into serial stories or storyboards helps reduce the consumption of storage media and simplifies indexing, sorting, and searching. The Movement Oriented Method being developed transforms multimedia data into serial stories. The method depends on the knowledge of each actor who uses it, and the differing knowledge of actors in the transformation process raises complex issues, such as the ordering of the sequence and which resulting story object should become the standard; most critically, the resulting stories may not match the original multimedia data. To address this, a Standard Level Knowledge (SLK) for maintaining the quality of the story is adopted. SLK is the minimum knowledge that must be possessed by each actor who performs this transformation process. Quality of Service management can be applied to assess and maintain the stability and validity of each system&#39;s level with respect to the SLK.</description>
        <description>http://thesai.org/Downloads/Volume2No3/Paper%201-%20Quality%20of%20Service%20Management%20on%20Multimedia%20Data%20Transformation%20into%20Serial%20Stories%20Using%20Movement%20Oriented%20Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Short Description of Social Networking Websites And Its Uses</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020220</link>
        <id>10.14569/IJACSA.2011.020220</id>
        <doi>10.14569/IJACSA.2011.020220</doi>
        <lastModDate>2012-07-01T10:07:31.5470000+00:00</lastModDate>
        
        <creator>Ateeq Ahmad</creator>
        
        <subject>Social Network, kinds, Definition, Social Networking web sites, Growth</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Nowadays, using the Internet for social networking is a popular practice among young people. The use of collaborative technologies and social networking sites leads to instant online communities in which people communicate rapidly and conveniently with each other. The basic aim of this research paper is to find out which kinds of social networks are commonly used by people.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2020-A%20Short%20Description%20of%20Social%20Networking%20Websites%20And%20Its%20Uses.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Query based Personalization in Semantic Web Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020219</link>
        <id>10.14569/IJACSA.2011.020219</id>
        <doi>10.14569/IJACSA.2011.020219</doi>
        <lastModDate>2012-07-01T10:07:25.1030000+00:00</lastModDate>
        
        <creator>Mahendra Thakur</creator>
        
        <creator>Yogendra Kumar Jain</creator>
        
        <creator>Geetika Silakari</creator>
        
        <subject>Semantic Web Mining; Personalized Recommendation; Recommender System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>To provide personalized support in an on-line course resources system, a semantic web-based personalized learning service is proposed to enhance the learner&#39;s learning efficiency. When a personalization system relies solely on usage-based results, however, valuable information conceptually related to what is finally recommended may be missed. Moreover, the structural properties of the web site are often disregarded. In this paper, we present a personalized Web search system that helps users get relevant web pages based on their selection from a domain list. In the first part of our work we present Semantic Web Personalization, a personalization system that integrates usage data with content semantics, expressed in ontology terms, in order to compute semantically enhanced navigational patterns and effectively generate useful recommendations. To the best of our knowledge, our proposed technique is the only semantic web personalization system that may be used by non-semantic web sites. In the second part of our work, we present a novel approach for enhancing the quality of recommendations based on the underlying structure of a web site. We introduce UPR (Usage-based PageRank), a PageRank-style algorithm that relies on recorded usage data and link analysis techniques, based on user-interested domains and user queries.</description>
        <description>http://thesai.org/Publication/IJACSA/Downloads/Volume2No2/Paper%2019-Query%20based%20Personalization%20in%20Semantic%20Web%20Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>To Generate the Ontology from Java Source Code</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020218</link>
        <id>10.14569/IJACSA.2011.020218</id>
        <doi>10.14569/IJACSA.2011.020218</doi>
        <lastModDate>2012-07-01T10:07:18.6370000+00:00</lastModDate>
        
        <creator>Gopinath Ganapathy</creator>
        
        <creator>S. Sagayaraj</creator>
        
        <subject>Metadata; QDox; Parser; Jena; Ontology; Web Ontology Language; Hadoop Distributed File System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Software development teams design new components and code by employing new developers for every new project. If the company archives the completed code and components, they can be reused with no further testing, unlike open source code and components. Program file components can be extracted from the application files and folders using APIs. The proposed framework extracts the metadata from the source code using QDox code generators and stores it in OWL using the Jena framework automatically. The source code is stored in the HDFS repository. Code stored in the repository can be reused for software development. Archiving all the project files into one ontology will enable developers to reuse the code efficiently.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2018-To%20Generate%20the%20Ontology%20from%20Java%20Source%20Code.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Architectural Decision Tool Based on Scenarios and Non-functional Requirements</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020217</link>
        <id>10.14569/IJACSA.2011.020217</id>
        <doi>10.14569/IJACSA.2011.020217</doi>
        <lastModDate>2012-07-01T10:07:15.1970000+00:00</lastModDate>
        
        <creator>Mahesh Parmar</creator>
        
        <creator>W.U. Khan</creator>
        
        <creator>Binod Kumar</creator>
        
        <subject>Software Architecture, Automated Design, Non-functional requirements, Design Principle.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Software architecture design is often based on architects’ intuition and previous experience. Little methodological support is available, and there are still no effective solutions to guide architectural design. The most difficult activity is the transformation from a non-functional requirement specification into a software architecture. To address this, we propose “An Architectural Decision Tool Based on Scenarios and Non-functional Requirements”. In this proposed tool, scenarios are first utilized to gather information from the user. Each scenario is created to have a positive or negative effect on a non-functional quality attribute. Each non-functional quality attribute is then computed and compared to the other quality attributes in order to relate it to a set of design principles that are relevant to the system. Finally, the optimal architecture is selected by finding the compatibility of the design principles.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2017-An%20Architectural%20Decision%20Tool%20Based%20on%20%20%20Scenarios%20and%20Nonfunctional%20Requirements.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Magneto-Hydrodynamic Antenna Design and Development Analysis with prototype</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020216</link>
        <id>10.14569/IJACSA.2011.020216</id>
        <doi>10.14569/IJACSA.2011.020216</doi>
        <lastModDate>2012-07-01T10:07:08.7470000+00:00</lastModDate>
        
        <creator>Rajveer S Yaduvanshi</creator>
        
        <creator>Harish Parthasarathy</creator>
        
        <creator>Asok De</creator>
        
        <subject>Frequency agility, reconfigurability, MHD, radiation pattern, saline water.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>A new class of antenna based on the magneto-hydrodynamic technique is presented. A magneto-hydrodynamic antenna, using an electrically conducting fluid such as NaCl solution under controlled electromagnetic fields, is formulated and developed. The fluid resonator volume and the electric and magnetic fields decide the resonant frequency and return loss, respectively, making the antenna tuneable in the frequency range 4.5 to 9 GHz. Maxwell’s equations, the Navier-Stokes equations, and the equations of mass conservation for the conducting fluid and field have been set up. These are expressed as partial differential equations for the stream function and the electric and magnetic fields; these equations are first order in time. By discretizing these equations, we are able to numerically evaluate the velocity field of the fluid in the near-field region and the electromagnetic field in the far-field region. We propose to design, develop, formulate and fabricate a prototype MHD antenna [1-3]. Formulations of a rotating fluid frame, evolution of the Poynting vector, and the permeability and permittivity of the MHD antenna have been worked out. The proposed work presents a tuning mechanism of resonant frequency and dielectric constant for frequency agility and reconfigurability. Measured results of the prototype antenna show a return loss of up to -51.1 dB at a resonant frequency of 8.59 GHz, while the simulated resonant frequency comes out to be 10.5 GHz.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2016-Magneto-Hydrodynamic%20Antenna%20Design%20and%20Development%20Analysis%20with%20prototype.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Extracting Code Resource from OWL by Matching Method Signatures using UML Design Document</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020215</link>
        <id>10.14569/IJACSA.2011.020215</id>
        <doi>10.14569/IJACSA.2011.020215</doi>
        <lastModDate>2012-07-01T10:07:05.3770000+00:00</lastModDate>
        
        <creator>Gopinath Ganapathy</creator>
        
        <creator>S. Sagayaraj</creator>
        
        <subject>Component: Unified Modeling language, XML, XMI Metadata Interchange, Metadata, Web Ontology Language, Jena framework</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Software companies develop projects in various domains but hardly archive the programs for future use. In this approach, method signatures are stored in OWL and the source code components are stored in HDFS; the OWL considerably minimizes software development cost. The design phase generates many artifacts. One such artifact is the UML class diagram for the project, which consists of classes, methods, attributes, relations, etc., as metadata. Methods needed for a project can be extracted from this OWL using UML metadata. The UML class diagram is given as input and the metadata about the method is extracted. The method signature is searched in OWL for similar method prototypes, and the appropriate code components are extracted from the HDFS and reused in the project. Through this process, the time, manpower, system resources, and cost of software development are reduced.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2015-Extracting%20Code%20Resource%20from%20OWL%20by%20Matching%20Method%20Signatures%20using%20UML%20Design%20Document.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Distributed Group Key Management with Cluster based Communication for Dynamic Peer Groups</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020214</link>
        <id>10.14569/IJACSA.2011.020214</id>
        <doi>10.14569/IJACSA.2011.020214</doi>
        <lastModDate>2012-07-01T10:06:57.7930000+00:00</lastModDate>
        
        <creator>Rajender Dharavath</creator>
        
        <creator>K Bhima</creator>
        
        <subject>Secure Group Communication; Key Agreement; Key Tree; Dynamic Peer Groups; Cluster.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Secure group communication is an increasingly popular research area, having received much attention in recent years. Group key management is a fundamental building block for secure group communication systems. This paper introduces a new family of protocols addressing cluster-based communication and distributed group key agreement for secure group communication in dynamic peer groups. In this scheme, group members are divided into subgroups called clusters. We propose three cluster-based communication protocols with tree-based group key management. The protocols (1) provide communication within a cluster by generating a common group key within the cluster, (2) provide communication between clusters by generating a common group key between the clusters, and (3) provide communication among all clusters by generating a common group key among all clusters. In our approach, the group key is updated for each session or whenever a user joins or leaves a cluster. Moreover, we use a Certificate Authority, which guarantees key authentication and protects our protocol from all types of attacks.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2014-Distributed%20Group%20Key%20management%20with%20Cluster%20based%20Communication%20for%20Dynamic%20peer%20Groups.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dominating Sets and Spanning Tree based Clustering Algorithms for Mobile Ad hoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020213</link>
        <id>10.14569/IJACSA.2011.020213</id>
        <doi>10.14569/IJACSA.2011.020213</doi>
        <lastModDate>2012-07-01T10:06:47.9670000+00:00</lastModDate>
        
        <creator>R Krishnam Raju Indukuri</creator>
        
        <creator>Suresh Varma Penumathsa</creator>
        
        <subject>mobile ad hoc networks, clustering, dominating set and spanning trees</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>The infrastructure-less and dynamic nature of mobile ad hoc networks (MANETs) calls for efficient clustering algorithms to improve network management and to design hierarchical routing protocols. Clustering algorithms in mobile ad hoc networks build a virtual backbone for network nodes. Dominating sets and spanning trees are widely used in clustering networks. Dominating-set and spanning-tree based MANET clustering algorithms are suitable for medium-size networks with respect to time and message complexities. This paper presents different clustering algorithms for mobile ad hoc networks based on dominating sets and spanning trees.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2013-Dominating%20Sets%20and%20Spanning%20Tree%20based%20Clustering%20Algorithms%20for%20Mobile%20Ad%20hoc%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Sectorization of Full Kekre’s Wavelet Transform for Feature extraction of Color Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020212</link>
        <id>10.14569/IJACSA.2011.020212</id>
        <doi>10.14569/IJACSA.2011.020212</doi>
        <lastModDate>2012-07-01T10:06:41.6430000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Dhirendra Mishra</creator>
        
        <subject>CBIR, Kekre’s Wavelet Transform (KWT), Euclidian Distance, Sum of Absolute Difference, LIRS, LSRR, Precision and Recall.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>An innovative idea of sectorization of full Kekre’s Wavelet Transform (KWT) [1] images for extracting features is proposed. The paper discusses two planes, i.e., the forward plane (even plane) and the backward plane (odd plane). These two planes are sectored into 4, 8, 12 and 16 sectors. An innovative concept, the sum of absolute differences (AD), is proposed as a similarity measure and compared with the well-known Euclidean distance (ED). The performance of sectorization of the two planes into different sector sizes, in combination with the two similarity measures, is checked. Class-wise retrieval performance of all sectors with respect to the similarity measures, i.e., ED and AD, is analyzed by means of class (randomly chosen 5 images) average precision-recall crossover points, overall average (average of class averages) precision-recall crossover points, and two new parameters, i.e., LIRS and LSRR.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2012-Sectorization%20of%20Full%20Kekres%20Wavelet%20Transform%20for%20Feature%20extraction%20of%20Color%20Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Approach To Enhance Performance Of Orthogonal Frequency Division Multiplexing (OFDM) In A Wireless Communication Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020211</link>
        <id>10.14569/IJACSA.2011.020211</id>
        <doi>10.14569/IJACSA.2011.020211</doi>
        <lastModDate>2012-07-01T10:06:38.2570000+00:00</lastModDate>
        
        <creator>James Agajo</creator>
        
        <creator>Isaac O. Avazi Omeiza</creator>
        
        <creator>Idigo Victor Eze</creator>
        
        <creator>Okhaifoh Joseph</creator>
        
        <subject>OFDM, Inter-Carrier Interference, IFFT, multipath, Signal.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>In the mobile radio environment, signals are usually impaired by fading and the multipath delay phenomenon. In such channels, severe fading of the signal amplitude and inter-symbol interference (ISI) due to the frequency selectivity of the channel cause an unacceptable degradation of error performance, and orthogonal frequency division multiplexing (OFDM) is an efficient scheme to mitigate the effect of the multipath channel. This work models and simulates OFDM in a wireless environment; it also illustrates adaptive modulation and coding over a dispersive multipath fading channel, whereby the simulation varies the result dynamically. The dynamic approach entails adopting a probabilistic approach to determining channel allocation. First, an OFDM network environment is modeled to get a clear picture of the OFDM concept. Next, disturbances such as noise are deliberately introduced into systems that are both OFDM modulated and non-OFDM modulated to see how each system reacts; this enables comparison of the effect of noise on OFDM signals and non-OFDM modulated signals. Finally, efforts are made using digital encoding schemes such as QAM and DPSK to reduce the effects of such disturbances on the transmitted signals.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2011-Dynamic%20Approach%20to%20Enhance%20Performance%20of%20Orthogonal%20Frequency%20Division%20Multiplexing%20(OFDM)%20In%20a%20Wireless%20Communication%20Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Expert System with Fuzzy Logic in Teachers’ Performance Evaluation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020210</link>
        <id>10.14569/IJACSA.2011.020210</id>
        <doi>10.14569/IJACSA.2011.020210</doi>
        <lastModDate>2012-07-01T10:06:31.8370000+00:00</lastModDate>
        
        <creator>Abdur Rashid Khan</creator>
        
        <creator>Hafeez Ullah Amin</creator>
        
        <creator>Zia Ur Rehman</creator>
        
        <subject>Expert System, Fuzzy Random Variables, Decision Making, Teachers’ Performance, Qualitative &amp; Uncertain Knowledge.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>This paper depicts the adaptation of expert systems technology using fuzzy logic to handle qualitative and uncertain facts in the decision-making process. Human behaviors are mostly based upon qualitative facts, which cannot be numerically measured and are hard to judge correctly. This approach is an attempt to cope with such problems in the scenario of teachers’ performance evaluation. An expert system was developed and applied to the acquired knowledge about the problem domain, showing interesting results and providing a sketch for students and researchers to find solutions to such types of problems. Through fuzzy logic we numerically weighted linguistic terms, such as very good, good, bad; high, medium, low; or satisfied, unsatisfied, by assigning priorities to these qualitative facts. During final decision making, key parameters were given weights according to their priorities by mapping numeric results from uncertain knowledge, and mathematical formulae were applied to calculate the final numeric results. In this way, this expert system will not only be useful for decision-makers to evaluate teachers’ abilities but may also be adopted in writing the Annual Confidential Reports (ACRs) of all the employees of an organization.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%2010-Application%20of%20Expert%20System%20with%20Fuzzy%20Logic%20in%20Teachers%20Performance%20Evaluation_Done.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Priority Based Dynamic Round Robin (PBDRR) Algorithm with Intelligent Time Slice for Soft Real Time Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020209</link>
        <id>10.14569/IJACSA.2011.020209</id>
        <doi>10.14569/IJACSA.2011.020209</doi>
        <lastModDate>2012-07-01T10:06:28.4170000+00:00</lastModDate>
        
        <creator>Rakesh Mohanty</creator>
        
        <creator>H. S. Behera</creator>
        
        <creator>Khusbu Patwari</creator>
        
        <creator>Monisha Dash</creator>
        
        <creator>Lakshmi Prasanna</creator>
        
        <subject>Real time system; Operating System; Scheduling; Round Robin Algorithm; Context switch; Waiting time; Turnaround time.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>In this paper, a new variant of the Round Robin (RR) algorithm is proposed which is suitable for soft real-time systems. The RR algorithm performs optimally in time-shared systems, but it is not suitable for soft real-time systems because it incurs a larger number of context switches, longer waiting times, and longer response times. We have proposed a novel algorithm, known as the Priority Based Dynamic Round Robin (PBDRR) algorithm, which calculates an intelligent time slice for individual processes and changes it after every round of execution. The proposed scheduling algorithm is developed by taking the dynamic time quantum concept into account. Our experimental results show that the proposed algorithm performs better than the algorithm in [8] in terms of reducing the number of context switches, average waiting time, and average turnaround time.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%209-Priority%20Based%20Dynamic%20Round%20Robin%20(PBDRR)%20Algorithm%20with%20Intelligent%20Time%20Slice%20for%20Soft%20Real%20Time%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis of Software Reliability Data using Exponential Power Model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020208</link>
        <id>10.14569/IJACSA.2011.020208</id>
        <doi>10.14569/IJACSA.2011.020208</doi>
        <lastModDate>2012-07-01T10:06:22.0730000+00:00</lastModDate>
        
        <creator>Ashwini Kumar Srivastava</creator>
        
        <creator>Vijay Kumar</creator>
        
        <subject>EP model; Probability density function; Cumulative distribution function; Hazard rate function; Reliability function; Parameter estimation; MLE; Bayesian estimation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>In this paper, the Exponential Power (EP) model is proposed to analyze software reliability data, and the present work is an attempt to show that it can serve as a software reliability model. The approximate MLE using the Artificial Neural Network (ANN) method and Markov chain Monte Carlo (MCMC) methods are used to estimate the parameters of the EP model. A procedure is developed to estimate the parameters of the EP model using the MCMC simulation method in OpenBUGS by incorporating a module into OpenBUGS. R functions are developed to study the various statistical properties of the proposed model and to analyze the output of MCMC samples generated from OpenBUGS. A real software reliability data set is considered to illustrate the proposed methodology under an informative set of priors.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%208-Analysis%20of%20Software%20Reliability%20Data%20using%20Exponential%20Power%20Model.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Algorithm to Reduce the Time Complexity of Earliest Deadline First Scheduling Algorithm in Real-Time System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020207</link>
        <id>10.14569/IJACSA.2011.020207</id>
        <doi>10.14569/IJACSA.2011.020207</doi>
        <lastModDate>2012-07-01T10:06:15.6300000+00:00</lastModDate>
        
        <creator>Jagbeer Singh</creator>
        
        <creator>Bichitrananda Patra</creator>
        
        <creator>Satyendra Prasad Singh</creator>
        
        <subject>Real-time system; task migration; earliest deadline first; earliest feasible deadline first.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>In this paper we study how to reduce the time complexity of Earliest Deadline First (EDF), a global scheduling scheme for real-time tasks on a multiprocessor system. Several admission control algorithms for earliest deadline first are presented, both for hard and soft real-time tasks. The average performance of these admission control algorithms is compared with the performance of known partitioning schemes. We have applied some modifications to the global earliest deadline first algorithm to decrease the number of task migrations and also to add predictability to its behavior. The aim of this work is to provide a sensitivity analysis for task deadlines in the context of a multiprocessor system by using a new approach, the EFDF (Earliest Feasible Deadline First) algorithm. In order to decrease the number of migrations, we prevent a job from moving from one processor to another if it is among the m highest-priority jobs; therefore, a job will continue its execution on the same processor if possible (processor affinity). The results of these comparisons outline some situations where one scheme is preferable over the other: partitioning schemes are better suited for hard real-time systems, while a global scheme is preferable for soft real-time systems.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%207-An%20Algorithm%20to%20Reduce%20the%20Time%20Complexity%20of%20Earliest%20Deadline%20First%20Scheduling%20Algorithm%20in%20%20%20Real-Time%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling &amp; Designing Land Record Information System Using Unified Modelling Language</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020206</link>
        <id>10.14569/IJACSA.2011.020206</id>
        <doi>10.14569/IJACSA.2011.020206</doi>
        <lastModDate>2012-07-01T10:06:08.0870000+00:00</lastModDate>
        
        <creator>Kanwalvir Singh Dhindsa</creator>
        
        <creator>Himanshu Aggarwal</creator>
        
        <subject>information system, Unified Modeling Language (UML), software modelling, software development process, UML tools.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Automation of land records is one of the most important initiatives undertaken by the revenue department to facilitate the landowners of the state of Punjab. A number of such initiatives have been taken in different states of the country. Recently, there has been a growing tendency to adopt UML (Unified Modeling Language) for different modeling needs and domains; it is widely used for designing and modelling information systems. UML diagramming practices have been applied for designing and modeling the land record information system so as to improve technical accuracy and understanding of the requirements related to this information system. We have applied a subset of UML diagrams for modeling the land record information system. The case study of Punjab state has been taken up for modelling the current scenario of the land record information system in the state, with UML used as the specification technique. This paper proposes a refined software development process combined with the modeled process of UML and presents a comparative study of the various tools used with UML.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%206-Modelling%20and%20Designing%20Land%20Record%20Information%20System%20Using%20UML.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Knowledge-Based System’s Modeling for Software Process Model Selection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020205</link>
        <id>10.14569/IJACSA.2011.020205</id>
        <doi>10.14569/IJACSA.2011.020205</doi>
        <lastModDate>2012-07-01T10:06:01.7470000+00:00</lastModDate>
        
        <creator>Abdur Rashid Khan</creator>
        
        <creator>Zia Ur Rehman</creator>
        
        <creator>Hafeez Ullah Amin</creator>
        
        <subject>ESPMS; Expert System; Analytical Hierarchy Process; Certainty Factors; Fuzzy Logic; Decision-Making</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>This paper depicts the knowledge-based system named ESPMS (Expert System for Process Model Selection) through various models. A questionnaire was developed to identify important parameters, which were evaluated by domain experts in nearly all the universities of Pakistan. No existing system was found that could guide software engineers in selecting a proper model during software development. This paper shows how various technologies, such as Fuzzy Logic, Certainty Factors, and the Analytical Hierarchy Process (AHP), can be adopted to develop the expert system. Priority assignments to critical factors are shown for decision making in model selection for a problem domain. This research work will be beneficial to both students and researchers for integrating soft computing techniques and software engineering.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%205-Knowledge%20Based%20Systems%20Modeling%20for%20Software%20Process%20Model%20Selection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Churn Prediction in Telecommunication Using Data Mining Technology</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020204</link>
        <id>10.14569/IJACSA.2011.020204</id>
        <doi>10.14569/IJACSA.2011.020204</doi>
        <lastModDate>2012-07-01T10:05:54.0530000+00:00</lastModDate>
        
        <creator>Rahul J Jadhav</creator>
        
        <creator>Usharani T. Pawar</creator>
        
        <subject>churn prediction, data mining, Decision support system, churn behavior.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>Since its inception, the field of Data Mining and Knowledge Discovery from Databases has been driven by the need to solve practical problems. In this paper an attempt is made to build a decision support system using data mining technology for churn prediction in a telecommunication company. Telecommunication companies face considerable loss of revenue because some of their customers are at risk of leaving. The growing number of such customers is becoming a crucial problem for any telecommunication company. As the size of the organization increases, such cases also increase, which makes such alarming conditions difficult to manage with a routine information system. Hence, a highly sophisticated, customized, and advanced decision support system is needed. In this paper, the process of designing such a decision support system through data mining techniques is described. The proposed model is capable of predicting customers&#39; churn behavior well in advance.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%204-Churn%20Prediction%20in%20Telecommunication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Cross Layer MAC design for performance optimization of routing protocols in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020203</link>
        <id>10.14569/IJACSA.2011.020203</id>
        <doi>10.14569/IJACSA.2011.020203</doi>
        <lastModDate>2012-07-01T10:05:47.7130000+00:00</lastModDate>
        
        <creator>P.K Alima Beebi</creator>
        
        <creator>Sulava Singha</creator>
        
        <creator>Ranjit Mane</creator>
        
        <subject>cross-layer, routing, MAC, MANET, multi-hop networks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>One of the most visible trends in today’s commercial communication market is the adoption of wireless technology. Wireless networks are expected to carry traffic that will be a mix of real-time traffic, such as voice, multimedia conferences and games, and data traffic, such as web browsing, messaging and file transfer. All of these applications require widely varying and very diverse Quality of Service (QoS) guarantees. In an effort to improve the performance of wireless networks, there has been increased interest in protocols that rely on interactions between different layers. Cross-Layer Design has become a key issue in wireless communication systems, as it seeks to enhance the capacity of wireless networks significantly through the joint optimization of multiple layers in the network. Wireless multi-hop ad-hoc networks have generated a lot of interest in the recent past due to their many potential applications. Multi-hopping implies the existence of many geographically distributed devices that share the wireless medium, which creates the need for efficient MAC and routing protocols to mitigate interference and take full advantage of spatial reuse. Cross-Layer Design is an emerging proposal to support flexible layer approaches in Mobile Ad-hoc Networks (MANETs). In this paper, we present a few Cross-Layer MAC design proposals by analyzing the ongoing research activities in this area for optimizing the performance of routing protocols in MANETs.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%203-A%20Study%20on%20Cross%20Layer%20MAC%20design%20for%20performance%20optimization%20of%20routing%20protocols%20in%20MANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design and Implementation of NoC architectures based on the SpaceWire protocol</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020202</link>
        <id>10.14569/IJACSA.2011.020202</id>
        <doi>10.14569/IJACSA.2011.020202</doi>
        <lastModDate>2012-07-01T10:05:41.3970000+00:00</lastModDate>
        
        <creator>Sami HACHED</creator>
        
        <creator>Mohamed GRAJA</creator>
        
        <creator>Slim BEN SAOUD</creator>
        
        <subject>NoC architectures; SpaceWire protocol; Codec IP; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>SpaceWire is a standard for high-speed links and networks used onboard spacecraft, designed by ESA and widely used on many space missions by multiple space agencies. SpaceWire has shown great flexibility by giving space missions a wide range of possible configurations and topologies. Nevertheless, each topology presents its own set of tradeoffs, such as hardware limitations, speed performance, power consumption and implementation cost. In order to compensate for these drawbacks and increase the efficiency of SpaceWire networks, many solutions are being considered. One of these is the Network-on-Chip (NoC) configuration, which resolves many of these drawbacks by reducing design complexity through a regular architecture, improving speed, power and reliability, and guaranteeing a controlled structure. This paper presents the main steps for building a Network-on-Chip based on the SpaceWire protocol. It describes the internal structure, the functioning and the communication mechanism of the network’s nodes. It also exposes the software development and validation strategy, and discusses the tests and results conducted on the adopted NoC topologies.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%202-Design%20and%20Implementation%20of%20NoC%20architectures%20based%20on%20the%20SpaceWire%20protocol.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Building XenoBuntu Linux Distribution for Teaching and Prototyping Real-Time Operating Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020201</link>
        <id>10.14569/IJACSA.2011.020201</id>
        <doi>10.14569/IJACSA.2011.020201</doi>
        <lastModDate>2012-07-01T10:05:35.0900000+00:00</lastModDate>
        
        <creator>Nabil LITAYEM</creator>
        
        <creator>Ahmed BEN ACHBALLAH</creator>
        
        <creator>Slim BEN SAOUD</creator>
        
        <subject>Real-time systems, Linux, Remastering, RTOS API, Xenomai</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(2), 2011</description>
        <description>This paper describes the realization of a new Linux distribution based on Ubuntu Linux and the Xenomai real-time framework. This realization is motivated by the pressing need for real-time systems in modern computer science courses. The majority of the technical choices were made after qualitative comparison. The main goal of this distribution is to offer a standard Operating System (OS) that includes the Xenomai infrastructure and the essential tools to begin hard real-time application development inside a convivial desktop environment. The released live/installable DVD can be adopted to emulate several classic RTOS Application Program Interfaces (APIs), to directly use and understand real-time Linux in a convivial desktop environment, and to prototype real-time embedded applications.</description>
        <description>http://thesai.org/Downloads/Volume2No2/Paper%201-Building%20XenoBuntu%20Linux%20Distribution%20for%20Teaching%20and%20Prototyping%20Real-Time%20Operating%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Software Effort Prediction using Statistical and Machine Learning Methods</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020122</link>
        <id>10.14569/IJACSA.2011.020122</id>
        <doi>10.14569/IJACSA.2011.020122</doi>
        <lastModDate>2012-07-01T10:05:28.7730000+00:00</lastModDate>
        
        <creator>Ruchika Malhotra</creator>
        
        <creator>Ankita Jain</creator>
        
        <subject>software effort estimation, machine learning, decision tree, linear regression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Accurate software effort estimation is an important part of the software process. Effort is measured in terms of person-months and duration. Both overestimation and underestimation of software effort may lead to risky consequences. Also, software project managers have to estimate how much a software development is going to cost. The dominant cost for any software is the cost of calculating effort. Thus, effort estimation is very crucial and there is always a need to improve its accuracy as much as possible. There are various effort estimation models, but it is difficult to determine which model gives more accurate estimates on which dataset. This paper empirically evaluates and compares the potential of Linear Regression, Artificial Neural Network, Decision Tree, Support Vector Machine and Bagging on a software project dataset obtained from 499 projects. The results show that the Mean Magnitude of Relative Error of the decision tree method is only 17.06%; thus, the decision tree method performs better than all the other compared methods.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2022-Software%20Effort%20Prediction%20using%20Statistical%20and%20Machine%20Learning%20Methods%20%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A study on Feature Selection Techniques in Bio-Informatics</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020121</link>
        <id>10.14569/IJACSA.2011.020121</id>
        <doi>10.14569/IJACSA.2011.020121</doi>
        <lastModDate>2012-07-01T10:05:22.4670000+00:00</lastModDate>
        
        <creator>S Nirmala Devi</creator>
        
        <creator>Dr. S.P Rajagopalan</creator>
        
        <subject> Bio-Informatics; Feature Selection; Text Mining; Literature Mining; Wrapper; Filter Embedded Methods.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>The availability of massive amounts of experimental data based on genome-wide studies has given impetus in recent years to a large effort in developing mathematical, statistical and computational techniques to infer biological models from data. In many bioinformatics problems the number of features is significantly larger than the number of samples (high feature-to-sample ratio datasets), and feature selection techniques have become an apparent need in many bioinformatics applications. This article makes the reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques and discussing their uses as well as common and upcoming bioinformatics applications.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2021-A%20study%20on%20Feature%20Selection%20Techniques%20in%20Bio%20Informatics.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Framework for Automatic Development of Type 2 Fuzzy, Neuro and Neuro-Fuzzy Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020120</link>
        <id>10.14569/IJACSA.2011.020120</id>
        <doi>10.14569/IJACSA.2011.020120</doi>
        <lastModDate>2012-07-01T10:05:16.1370000+00:00</lastModDate>
        
        <creator>Mr. Jeegar A Trivedi</creator>
        
        <creator>Dr. Priti Srinivas Sajja</creator>
        
        <subject>Artificial Neural Network; Fuzzy Logic; Type 2 Fuzzy Logic; Neuro Fuzzy Hybridization.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>This paper presents the design and development of a generic framework which aids the creation of fuzzy, neural network and neuro-fuzzy systems to provide expert advice in various fields. The proposed framework is based on neuro-fuzzy hybridization. The artificial neural network of the framework aids learning, and the fuzzy part helps in providing logical reasoning for making proper decisions based on inference of the domain expert’s knowledge. Hence, by hybridizing neural networks and fuzzy logic we obtain the advantages of both fields. Further, the framework considers type-2 fuzzy logic for a more human-like approach. Developing a neuro-fuzzy advisory system is a tedious and complex task; much of the time is wasted in developing computational logic and hybridizing the two methodologies. In order to generate a neuro-fuzzy advisory system quickly and efficiently, we have designed a generic framework that will generate the advisory system. The resulting advisory system for the given domain is interactive with its user and asks questions to generate fuzzy rules. The system also allows its users to provide training sets in order to train the neural network. The paper also describes a working prototype implemented based on the designed framework, which can create a fuzzy system, a neural network system or a hybrid neuro-fuzzy system according to the information provided. The working of the prototype is also discussed, with outputs, for developing a fuzzy system, a neural network system and a hybrid neuro-fuzzy system for a course selection advisory domain. The systems generated through this prototype can be used on the web or on the desktop as per the user requirement.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2020-Framework%20for%20Automatic%20Development%20of%20Type%202%20Fuzzy%20Neuro%20and%20Neuro-Fuzzy%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection of Routing Misbehavior in MANETs with 2ACK scheme</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020119</link>
        <id>10.14569/IJACSA.2011.020119</id>
        <doi>10.14569/IJACSA.2011.020119</doi>
        <lastModDate>2012-07-01T10:05:09.8530000+00:00</lastModDate>
        
        <creator>Chinmaya Kumar Nayak</creator>
        
        <creator>G K Abani Kumar Dash</creator>
        
        <creator>Kharabela parida</creator>
        
        <creator>Satyabrata Das</creator>
        
        <subject>MANET; routing in MANETs; misbehavior of nodes in MANETs; credit-based scheme; reputation-based scheme; the 2ACK scheme; network security</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Routing misbehavior in MANETs (Mobile Ad Hoc Networks) is considered in this paper. Routing protocols for MANETs [1] are commonly designed on the assumption that all participating nodes are fully cooperative. However, node misbehavior may take place due to the open structure and scarcely available battery-based energy. One such routing misbehavior is that some nodes take part in the route discovery and maintenance processes but refuse to forward data packets. In this paper, we propose the 2ACK [2] scheme, which serves as an add-on technique for routing schemes to detect routing misbehavior and mitigate its effect. The basic idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. To reduce extra routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2019-Detection%20of%20Routing%20Misbehavior%20in%20MANETs%20with%202ACK%20scheme.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Coalesced Quality Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020118</link>
        <id>10.14569/IJACSA.2011.020118</id>
        <doi>10.14569/IJACSA.2011.020118</doi>
        <lastModDate>2012-07-01T10:05:03.5230000+00:00</lastModDate>
        
        <creator>A Pathanjali Sastri</creator>
        
        <creator>K. Nageswara Rao</creator>
        
        <subject>Quality Assurance, Operational Excellence, Coalesced Quality Management System, Business Analyst, phase gate reviews</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Developing software within a stipulated time frame and budget is not good enough if the product developed is full of bugs, and today end users are demanding higher-quality software than ever before. The project lifecycle starts with pre-project work and runs all the way through to post-project; projects need to be set up correctly from the beginning to ensure success. As the software market matures, users want to be assured of quality. In order to check such unpleasant incidents or potential problems lurking around the corner for software development teams, we need a quality framework, not only to assess the common challenges that are likely to arise while executing projects but also to focus on winning the deal during the proposal stage. Our research paper is an honest appraisal of the reasons behind the failure of projects and an attempt to provide valuable pointers for the success of future projects. The Coalesced Quality Management Framework (CQMF) is a theoretical model that brings the best of quality to the work products developed, gathers firsthand knowledge of all projects, defects and quality metrics, and reports to management so that missed deadlines and budget overruns are avoided, providing an opportunity to deliver the end product to the satisfaction of the customer. With this framework the project stakeholders and the management constantly validate what is built and verify how it is being built.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2018-Coalesced%20Quality%20Management%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Automatic Facial Feature Extraction and Expression Recognition based on Neural Network</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020117</link>
        <id>10.14569/IJACSA.2011.020117</id>
        <doi>10.14569/IJACSA.2011.020117</doi>
        <lastModDate>2012-07-01T10:04:57.2130000+00:00</lastModDate>
        
        <creator>S P Khandait</creator>
        
        <creator>Dr. R.C.Thool</creator>
        
        <creator>P.D.Khandait</creator>
        
        <subject>Edge projection analysis, facial features, feature extraction, feed-forward neural network, segmentation, SUSAN edge detection operator</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2017-Automatic%20facial%20feature%20extraction%20and%20expression%20recognition%20based%20on%20neural%20network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Off-Line Intrusion Detection Using A Genetic Algorithm And RMI</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020116</link>
        <id>10.14569/IJACSA.2011.020116</id>
        <doi>10.14569/IJACSA.2011.020116</doi>
        <lastModDate>2012-07-01T10:04:50.8470000+00:00</lastModDate>
        
        <creator>Ahmed AHMIM</creator>
        
        <creator>Nacira GHOUALMI</creator>
        
        <creator>Noujoud KAHYA</creator>
        
        <subject>component; intrusion detection system; Genetic Algorithm; Off-Line Intrusion Detection; Misuse Detection;</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>This article proposes an optimization of the use of Genetic Algorithms for the Security Audit Trail Analysis Problem, which was proposed by L. M&#233; in 1995 and improved by Pedro A. Diaz-Gomez and Dean F. Hougen in 2005. The optimization consists in filtering the attacks: we classify attacks into a “certainly not existing attacks” class, a “certainly existing attacks” class and an “uncertainly existing attacks” class. The proposed idea is to divide the third class into independent sub-problems that are easier to solve. We also use remote method invocation (RMI) to reduce resolution time. The results are very significant: 0% false positives, 0% false negatives and a detection rate equal to 100%. We also present a comparative study to confirm the given improvement.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2016-%20Improved%20Off-Line%20Intrusion%20Detection%20Using%20A%20Genetic%20Algorithm%20And%20RMI.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>IPS: A new flexible framework for image processing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020115</link>
        <id>10.14569/IJACSA.2011.020115</id>
        <doi>10.14569/IJACSA.2011.020115</doi>
        <lastModDate>2012-07-01T10:04:44.5200000+00:00</lastModDate>
        
        <creator>Otman ABDOUN</creator>
        
        <creator>Jaafar ABOUCHABAKA</creator>
        
        <subject>Otman ABDOUN, Jaafar ABOUCHABAKA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Image processing is a discipline of great importance in various real applications; it encompasses many methods and many treatments. Yet this variety of methods and treatments, though desired, places a serious requirement on the users of image processing software: mastery of every single programming language is not attainable for every user. To overcome this difficulty, it was perceived that the development of a tool for image processing would help users understand the theoretical knowledge of the digital image. Thus, the idea of designing the software platform Image Processing Software (IPS), for applying a large number of treatments covering different themes depending on the type of analysis envisaged, becomes imperative. This software has not come to substitute the existing software; it is simply a contribution to the theoretical literature in the domain of image processing. It is implemented on the MATLAB platform: effective and simplified software specialized in image treatments, in addition to the creation of Graphical User Interfaces (GUI) [5][6]. IPS is aimed at allowing a quick illustration of the concepts introduced in the theoretical part. The developed software enables users to perform several operations: applying different types of noise to images, filtering image color and intensity, detecting the edges of an image, and applying image thresholding with a user-defined threshold, to cite only a few.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2015-IPS%20A%20new%20flexible%20framework%20for%20image%20processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel Approach to Implement Fixed to Mobile Convergence in Mobile Adhoc Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020114</link>
        <id>10.14569/IJACSA.2011.020114</id>
        <doi>10.14569/IJACSA.2011.020114</doi>
        <lastModDate>2012-07-01T10:04:38.1930000+00:00</lastModDate>
        
        <creator>Dr P.K Suri</creator>
        
        <creator>Sandeep Maan</creator>
        
        <subject>Mobile Ad-hoc Networks; Packet Telephony; Voice over Internet Protocol; Fixed to Mobile Convergence; Quality of Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Fixed to Mobile Convergence, FMC is one of the most celebrated applications of wireless networks, where a telephonic call from some fixed telephonic infrastructure is forwarded to a mobile device. Problem of extending the reach of fixed telephony over a mobile ad-hoc network working in license free ISM band has been eluding the research fraternity. Major hindrance to FMC implementation comes from very nature of mobile ad-hoc networks. Due to the dynamic nature and limited node range in mobile ad-hoc networks, it is very difficult to realize QoS dependent applications, like FMC, over them. In this work authors are proposing complete system architecture to implement fixed to mobile convergence in a mobile ad-hoc network. The mobile ad-hoc network in the problem can hold a number of telephonic calls simultaneously. The proposed system is then implemented using network simulator, ns2. The results obtained are then compared with the predefined standards for implementation of FMC.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2014-A%20Novel%20Approach%20to%20Implement%20Fixed%20to%20Mobile.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Simulation of Packet Telephony in Mobile Adhoc Networks Using Network Simulator</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020113</link>
        <id>10.14569/IJACSA.2011.020113</id>
        <doi>10.14569/IJACSA.2011.020113</doi>
        <lastModDate>2012-07-01T10:04:34.8530000+00:00</lastModDate>
        
        <creator>Dr P.K Suri</creator>
        
        <creator>Sandeep Maan</creator>
        
        <subject>Network Simulator; Mobile Ad-hoc Networks; Packet Telephony; Simulator; Voice over Internet Protocol</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Packet Telephony has been regarded as an alternative to existing circuit switched fixed telephony. To propagate new idea regarding Packet Telephony researchers need to test their ideas in real or simulated environment. Most of the research in mobile ad-hoc networks is based on simulation. Among all available simulation tools, Network Simulator (ns2) has been most widely used for simulation of mobile ad-hoc networks. Network Simulator does not directly support Packet Telephony. The authors are proposing a technique to simulate packet telephony over mobile ad-hoc network using network simulator, ns2.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2013-Simulation%20of%20Packet%20Telephony%20in%20Mobile%20Adhoc.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Universal Simplest possible PLC using Personal Computer</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020112</link>
        <id>10.14569/IJACSA.2011.020112</id>
        <doi>10.14569/IJACSA.2011.020112</doi>
        <lastModDate>2012-07-01T10:04:28.5330000+00:00</lastModDate>
        
        <creator>B K Rana</creator>
        
        <subject> PLC, prototype, Java, Visual Basic, education technology</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>The need for industrial automation and control has not yet been fully met. The PLC, the programmable logic controller as available in 2009 with all its standardized features, is discussed here concisely. This work presents the simplest form of PLC, built from any computer hardware and software, for the immediate, flexible needs of a small environment, with any number of input and output logic signals permitted by a computer. At first, the product logic is implemented on a simple, readily available computer using a PCMCIA card with an 8255 parallel I/O device, RS232, etc., and software such as Java, C, C++, and Visual Basic. Further work continues with newer generations of, and variations in, the mentioned technology. An expert engineer with these skills may fabricate a small operating PLC within one month. This is a view of preliminary work; further upgrades will come in the form of a visual programming tool (computer-aided design) and a universal build targeting all operating systems and computers available in the world. Sensors and modules for data-to-logic conversion are not emphasized in this work, and connectivity has to follow the relevant standards. An immediate use of this work is as an education technology kit. The work in this huge technology area should not be dismissed, because it answers a genuine need.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2012-Universal%20Simplest%20possible%20PLC%20using%20Personal%20Computer.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> L Band Propagation Measurements for DAB Service Planning in INDIA</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020111</link>
        <id>10.14569/IJACSA.2011.020111</id>
        <doi>10.14569/IJACSA.2011.020111</doi>
        <lastModDate>2012-07-01T10:04:22.2400000+00:00</lastModDate>
        
        <creator>P K Chopra</creator>
        
        <creator>S. Jain</creator>
        
        <creator>K.M. Paul</creator>
        
        <creator>S. Sharma</creator>
        
        <subject>Satellite, L-band, Signal, Mobile, Antenna, Attenuation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>The nature of the variations in L-band satellite signal strength for direct reception, in both fixed and mobile modes, is an important technical parameter for planning satellite broadcast and communication service networks. These parameters have been assessed through a field experiment using simulated satellite conditions. Variations in signal strength due to vegetation, urban structures, etc., as well as building penetration loss, along with the standard deviation of each of these variations, have been assessed from the data collected during fixed and mobile reception. This paper gives an insight into propagation in the L band under simulated satellite conditions.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2011-L%20Band%20Propagation%20Measurements%20for%20DAB%20planning%20in%20INDIA.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Virtualization Implementation Model for Cost Effective &amp; Efficient Data Centers</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020110</link>
        <id>10.14569/IJACSA.2011.020110</id>
        <doi>10.14569/IJACSA.2011.020110</doi>
        <lastModDate>2012-07-01T10:04:15.9530000+00:00</lastModDate>
        
        <creator>Mueen Uddin</creator>
        
        <creator>Azizah Abdul Rahman</creator>
        
        <subject>Virtualization; Energy Efficient Data Centre; Green IT; Carbon Footprints; Physical to Live Migration; Server Consolidation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Data centers form a key part of the infrastructure upon which a variety of information technology services are built. They provide the capabilities of a centralized repository for the storage, management, networking and dissemination of data. With the rapid increase in the capacity and size of data centers, there is a continuous increase in the demand for energy. These data centers not only consume a tremendous amount of energy but are also riddled with IT inefficiencies. Data centers house thousands of servers as their major components, and these servers consume huge amounts of energy without performing useful work: in an average server environment, 30% of the servers are “dead”, consuming energy without being properly utilized. This paper proposes a five-step model that uses an emerging technology, virtualization, to achieve energy-efficient data centers. The proposed model helps data center managers implement virtualization properly in their data centers to make them green and energy efficient, ensuring that the IT infrastructure contributes as little as possible to the emission of greenhouse gases, and helping to regain power and cooling capacity, recapture resilience, and dramatically reduce energy costs and total cost of ownership.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%2010-Virtualization%20Implementation%20Model%20for%20DataCenter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> An Efficient Resource Discovery Methodology for HPGRID Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020109</link>
        <id>10.14569/IJACSA.2011.020109</id>
        <doi>10.14569/IJACSA.2011.020109</doi>
        <lastModDate>2012-07-01T10:04:09.6600000+00:00</lastModDate>
        
        <creator>D.Doreen Hephzibah Miriam</creator>
        
        <creator>K.S.Easwarakumar</creator>
        
        <subject> Peer-to-Peer; Grid; Hypercube; Isomorphic partitioning; Resource Discovery</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>An efficient resource discovery mechanism is one of the fundamental requirements of grid computing systems, as it aids resource management and the scheduling of applications. Resource discovery involves searching for the resource types that match the user’s application requirements. Classical approaches to grid resource discovery are either centralized or hierarchical, and these become inefficient as the scale of the grid system increases rapidly. The Peer-to-Peer (P2P) paradigm, on the other hand, has emerged as a successful model for achieving scalability in distributed systems. A grid system using P2P technology can relax the central control of the traditional grid and avoid single points of failure. In this paper, we propose a new P2P-based approach for resource discovery in grids using a Hypercubic P2P Grid (HPGRID) topology to connect the grid nodes. A scalable, fault-tolerant, self-configuring search algorithm, the Parameterized HPGRID algorithm, is proposed using an isomorphic partitioning scheme. By design, the algorithm improves the probability of reaching all the working nodes in the system, even in the presence of non-alive nodes (inaccessible, crashed, or heavily loaded nodes). The scheme can adapt to the complex, heterogeneous and dynamic resources of a grid environment, and has better scalability.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%209-An%20Efficient%20Resource%20Discovery%20Methodology%20for%20HPGRID%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Survey of Wireless MANET Application in Battlefield Operations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020108</link>
        <id>10.14569/IJACSA.2011.020108</id>
        <doi>10.14569/IJACSA.2011.020108</doi>
        <lastModDate>2012-07-01T10:04:03.3270000+00:00</lastModDate>
        
        <creator>Dr C Rajabhushanam</creator>
        
        <creator>Dr. A. Kathirvel</creator>
        
        <subject>MANET; routing; protocols; wireless; simulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>In this paper, we present a framework for the performance analysis of wireless MANETs in combat/battlefield environments. The framework uses a cross-layer design approach in which four different routing protocols are compared and evaluated in the context of security operations. The resulting scenarios are carried out in a simulation environment using the NS-2 simulator. Research efforts also focus on issues such as Quality of Service (QoS), energy efficiency, and security, which already exist in wired networks and are worsened in MANETs. This paper examines the routing protocols and their newest improvements. The classification of routing protocols into source routing and hop-by-hop routing is described in detail, and four major categories of state routing are elaborated and compared. We discuss the metrics used to evaluate these protocols and highlight the essential problems in the evaluation process itself. The results show better performance, with respect to parameters such as network throughput, end-to-end delay and routing overhead, than a network architecture that uses a standard routing protocol. Owing to the nature of node distribution, the performance measure of path reliability, which distinguishes ad hoc networks from other types of networks under battlefield conditions, is given particular significance in our research work.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%208-Survey%20of%20Wireless%20MANET%20Application%20in%20Battlefield%20Operations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Solution of Electromagnetic and Velocity Fields for an Electrohydrodynamic Fluid Dynamical System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020107</link>
        <id>10.14569/IJACSA.2011.020107</id>
        <doi>10.14569/IJACSA.2011.020107</doi>
        <lastModDate>2012-07-01T10:03:57.0370000+00:00</lastModDate>
        
        <creator>Rajveer S Yaduvanshi</creator>
        
        <creator>Harish Parthasarathy</creator>
        
        <subject>Permittivity tuning, Incompressible fluid, Navier-Maxwell coupled equations, resonance frequency reconfigurability.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>We studied the temporal evolution of the electromagnetic and velocity fields in an incompressible conducting fluid by means of computer simulations based on the Navier-Stokes and Maxwell equations. We then derived the set of coupled partial differential equations for the stream function vector field and the electromagnetic field. These equations are first-order difference equations in time and bring simplicity to the discretization; the spatial partial derivatives are converted into partial difference equations. The fluid system of equations is thus approximated by a nonlinear state-variable system that makes use of the Kronecker tensor product. The final system takes account of anisotropic permittivity, while the conductivity and magnetic permeability of the fluid are assumed to be homogeneous and isotropic. The present work describes the characterization of a magnetohydrodynamic medium that is anisotropic in its permittivity. An efficient, modified numerical solution using the tensor product is also proposed; this numerical technique is potentially much faster and is well suited to matrix operations. Our characterization technique should be very useful for tuning the permittivity of liquid crystal polymer, plasma and dielectric lens antennas to obtain wide bandwidth, resonance frequency reconfigurability and better beam control.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%207-Solution%20of%20Electromagnetic%20and%20Velocity%20Fields%20for%20an%20Electrohydrodynamic%20Fluid%20Dynamical%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PAV: Parallel Average Voting Algorithm for Fault-Tolerant Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020106</link>
        <id>10.14569/IJACSA.2011.020106</id>
        <doi>10.14569/IJACSA.2011.020106</doi>
        <lastModDate>2012-07-01T10:03:53.6670000+00:00</lastModDate>
        
        <creator>Abbas Karimi</creator>
        
        <creator>Faraneh Zarafshan</creator>
        
        <creator>Adznan b. Jantan</creator>
        
        <subject>Fault-tolerant; Voting Algorithm; Parallel Algorithm; Divide and Conquer.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Fault-tolerant systems are systems that can continue their operation even in the presence of faults. Redundancy, one of the main techniques in the implementation of fault-tolerant control systems, uses voting algorithms to choose the most appropriate value among multiple redundant and possibly faulty results. The average (mean) voter is one of the most common voting methods and is suitable for decision making in highly available, long-mission applications in which the availability and speed of the system are critical. In this paper we introduce a new generation of average voter based on parallel algorithms, called the parallel average voter. Our analysis shows that this algorithm has a better time complexity, O(log n), than its sequential counterpart and is especially appropriate for applications where the input space is large.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%206-PAV%20Parallel%20Average%20Voting%20Algorithm%20for%20Fault%20Tolerant%20Systems.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Grid Approximation Based Inductive Charger Deployment Technique in Wireless Sensor Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020105</link>
        <id>10.14569/IJACSA.2011.020105</id>
        <doi>10.14569/IJACSA.2011.020105</doi>
        <lastModDate>2012-07-01T10:03:47.3200000+00:00</lastModDate>
        
        <creator>Fariha Tasmin Jaigirdar</creator>
        
        <creator>Mohammad Mahfuzul Islam</creator>
        
        <creator>Sikder Rezwanul Huq</creator>
        
        <subject>wireless sensor network; energy efficiency; network security; grid approximation; inductive charger.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>Ensuring sufficient power in a sensor node is nowadays a challenging problem in providing the level of security and data processing capability demanded by the various applications running in a wireless sensor network. The size of sensor nodes and the limitations of battery technologies do not allow a high-capacity energy source to be included in a sensor. Recent technologies suggest that deploying inductive chargers can solve the power problem of sensor nodes by recharging the sensors’ batteries in a complex and sensitive environment. This paper provides a novel grid approximation algorithm for the efficient, low-cost deployment of inductive chargers, so that the minimum number of chargers, along with their placement locations, can charge all the sensors in the network. The proposed algorithm is a generalized one and can also be used in various other applications, including measuring network security strength by estimating the minimum number of malicious nodes that can destroy the communication of all the sensors. Experimental results show the effectiveness of the proposed algorithm and the impact of its parameters on the performance measures.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%205-Grid%20Approximation%20Based%20Inductive%20Charger%20Deployment%20Technique%20in%20Wireless%20Sensor%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Genetic Algorithm for Solving Travelling Salesman Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020104</link>
        <id>10.14569/IJACSA.2011.020104</id>
        <doi>10.14569/IJACSA.2011.020104</doi>
        <lastModDate>2012-07-01T10:03:40.1970000+00:00</lastModDate>
        
        <creator>Adewole Philip</creator>
        
        <creator>Akinwale Adio Taofiki</creator>
        
        <creator>Otunbanowo Kehinde</creator>
        
        <subject>Genetic Algorithm, Generation, Mutation rate, Population, Travelling Salesman Problem</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>In this paper we present a Genetic Algorithm for solving the Travelling Salesman Problem (TSP). The Genetic Algorithm, an effective search heuristic, is employed to solve the TSP by generating a preset number of random tours and then improving the population until a stopping condition is satisfied, at which point the best chromosome, which represents a tour, is returned as the solution. The algorithmic parameters (population size, mutation rate and cut length) were analyzed to determine how to tune the algorithm for various problem instances.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%204-A%20Genetic%20Algorithm%20for%20Solving%20Travelling%20Salesman%20Problem.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Analyzing the Load Balance of Term-based Partitioning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020103</link>
        <id>10.14569/IJACSA.2011.020103</id>
        <doi>10.14569/IJACSA.2011.020103</doi>
        <lastModDate>2012-07-01T10:03:33.9070000+00:00</lastModDate>
        
        <creator>Ahmad Abusukhon</creator>
        
        <creator>Mohammad Talib</creator>
        
        <subject>Term-partitioning schemes, Term-frequency partitioning, Term-length partitioning, Node utilization, Load balance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>In parallel information retrieval (IR) systems, where a large-scale collection is indexed and searched, the query response time is limited by the time of the slowest node in the system, so distributing the load equally across the nodes is a very important issue. There are two main methods of collection indexing, namely document-based and term-based indexing. In term-based partitioning, the terms of the global index of a large-scale data collection are distributed, or partitioned, equally among the nodes; a given query is then divided into sub-queries, and each sub-query is directed to the relevant node. This provides high query throughput and concurrency but poor parallelism and load balance. In this paper, we introduce new methods of term partitioning and compare their results with those of previous work with respect to load balance and query response time.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%203-Analyzing%20the%20Load%20Balance%20of%20Term-based%20Partitioning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title> Open Source Software in Computer Science and IT Higher Education: A Case Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020102</link>
        <id>10.14569/IJACSA.2011.020102</id>
        <doi>10.14569/IJACSA.2011.020102</doi>
        <lastModDate>2012-07-01T10:03:27.9700000+00:00</lastModDate>
        
        <creator>Dan R Lipsa</creator>
        
        <creator>Robert S. Laramee</creator>
        
        <subject>open source software (OSS), free software</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>The importance and popularity of open source software have increased rapidly over the last 20 years, owing to the variety of advantages open source software has to offer and also to the wide availability of the Internet since the early nineties. We identify and describe important characteristics of open source software and then present a case study in which open source software was used to teach three Computer Science and IT courses for one academic year. We compare how well open source software and proprietary software fulfil our educational requirements and goals, and present some of the advantages of using Open Source Software (OSS). Finally, we report on our experiences of using open source software in the classroom and describe the benefits and drawbacks of this type of software relative to common proprietary software, from both a financial and an educational point of view.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%202-Open%20source%20software%20in%20computer%20science%20and%20IT%20higher%20education.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computing Knowledge and Skills Demand: A Content Analysis of Job Adverts in Botswana</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2011</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2011.020101</link>
        <id>10.14569/IJACSA.2011.020101</id>
        <doi>10.14569/IJACSA.2011.020101</doi>
        <lastModDate>2012-07-01T10:03:22.0070000+00:00</lastModDate>
        
        <creator>Y. Ayalew</creator>
        
        <creator>Z. A. Mbero</creator>
        
        <creator>T. Z. Nkgau</creator>
        
        <creator>P. Motlogelwa</creator>
        
        <creator>A. Masizana-Katongo</creator>
        
        <subject>computing job adverts; job adverts in Botswana; content analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 2(1), 2011</description>
        <description>This paper presents the results of a content analysis of computing job adverts, carried out to assess the types of skills required by employers in Botswana. By studying job adverts for computing professionals over one year (January 2008 to December 2008), we identified the types of skills employers require for early-career positions. The job adverts were collected from 7 major newspapers (published both daily and weekly) that are circulated throughout the country. The findings of the survey have been used in the revision and development of curricula for undergraduate degree programmes at the Department of Computer Science, University of Botswana. The content analysis focused on identifying the most sought-after types of qualification (i.e., degree types), job titles, skills, and industry certifications. Our analysis reveals that the majority of the adverts did not state a preference for a particular type of computing degree. Furthermore, our findings indicate that the job titles and computing skills in high demand are not consistent with previous studies carried out in developed countries. This calls for further investigation to identify the reasons for these differences from the perspective of IT industry practice, and to determine the degree of mismatch between employers’ computing skills demands and the knowledge and skills provided by academic programmes in the country.</description>
        <description>http://thesai.org/Downloads/Volume2No1/Paper%201-Computing%20knowledge%20and%20Skills%20Demand%20A%20Content%20Analysis%20of%20Job%20Adverts%20in%20Botswana.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Key Management Techniques for Controlling the Distribution and Update of Cryptographic keys</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010624</link>
        <id>10.14569/IJACSA.2010.010624</id>
        <doi>10.14569/IJACSA.2010.010624</doi>
        <lastModDate>2012-07-01T10:03:14.4130000+00:00</lastModDate>
        
        <creator>T Lalith</creator>
        
        <creator>R.Umarani</creator>
        
        <creator>G.M.Kadharnawaz</creator>
        
        <subject></subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Key management plays a fundamental role in cryptography as the basis for securing cryptographic techniques providing confidentiality, entity authentication, data origin authentication, data integrity, and digital signatures. The goal of a good cryptographic design is to reduce more complex problems to the proper management and safe-keeping of a small number of cryptographic keys, ultimately secured through trust in hardware or software by physical isolation or procedural controls. Reliance on physical and procedural security (e.g., secured rooms with isolated equipment), tamper-resistant hardware, and trust in a large number of individuals is minimized by concentrating trust in a small number of easily monitored, controlled, and trustworthy elements.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_24_Key_Management_Techniques_for_Controlling_the_Distribution_and_Update_of_Cryptographic_keys.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Comprehensive Analysis of Spoofing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010623</link>
        <id>10.14569/IJACSA.2010.010623</id>
        <doi>10.14569/IJACSA.2010.010623</doi>
        <lastModDate>2012-07-01T10:03:08.1300000+00:00</lastModDate>
        
        <creator>P Ramesh Babu</creator>
        
        <creator>D.Lalitha Bhaskari</creator>
        
        <creator>CH.Satyanarayana</creator>
        
        <subject>Spoofing, Filtering, Attacks, Information, Trust</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The main intention of this paper is to inform students, computer users and novice researchers about spoofing attacks. Spoofing means impersonating another person or computer, usually by providing false information (an e-mail name, URL or IP address). Spoofing can take many forms in the computer world, all of which involve some type of false representation of information. There is a variety of methods and types of spoofing; in this paper we introduce and explain the following spoofing attacks: IP, ARP, e-mail, Web, and DNS spoofing. There are no legal or constructive uses for spoofing of any type; the motives may be sport, theft, vindication or some other malicious goal. The magnitude of these attacks can be very severe and can cost millions of dollars. This paper describes the various spoofing types and gives a brief view of the detection and prevention of spoofing attacks.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_23_A_Comprehensive_Analysis_of_Spoofing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Face Replacement System Based on Face Pose Estimation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010622</link>
        <id>10.14569/IJACSA.2010.010622</id>
        <doi>10.14569/IJACSA.2010.010622</doi>
        <lastModDate>2012-07-01T10:03:01.8200000+00:00</lastModDate>
        
        <creator>Kuo Yu Chiu</creator>
        
        <creator>Shih-Che Chien</creator>
        
        <creator>Sheng-Fuu Lin</creator>
        
        <subject> Facial feature, Face replacement, Neural network, Support vector machine (SVM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Face replacement systems play an important role in the entertainment industries. However, most such systems nowadays are operated by hand with specialized tools. In this paper, a new face replacement system for automatically replacing a face using image processing techniques is described. The system is divided into two main parts: facial feature extraction and face pose estimation. In the first part, the face region is determined and the facial features are extracted and located: the eyes, mouth, and chin curve are extracted using their statistical and geometrical properties. These facial features provide the information for the second part, in which a neural network classifies the face pose according to feature vectors obtained from the different ratios of the facial features. Experiments and comparisons show that the system performs well across different poses, especially non-frontal face poses.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_22_A_Face_Replacement_System_Based_on_Face_Pose_Estimation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The Impact of Social Networking Websites to Facilitate the Effectiveness of Viral Marketing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010621</link>
        <id>10.14569/IJACSA.2010.010621</id>
        <doi>10.14569/IJACSA.2010.010621</doi>
        <lastModDate>2012-07-01T10:02:55.4730000+00:00</lastModDate>
        
        <creator>Abed Abedniya</creator>
        
        <creator>Sahar Sabbaghi Mahmouei</creator>
        
        <subject>Social networks website, viral marketing, structural equation modeling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The Internet and the World Wide Web have become two key components of today&#39;s technology-based organizations and businesses. As the Internet becomes more and more popular, it is making a big impact on people&#39;s day-to-day lives. As a result of this revolutionary transformation towards modern technology, social networking on the World Wide Web has become an integral part of a large number of people&#39;s lives. Social networks are websites that allow users to communicate, share knowledge about similar interests, discuss favorite topics, and review and rate products and services. These websites have become a powerful force in shaping public opinion on virtually every aspect of commerce. Marketers are challenged with identifying influential individuals in social networks and connecting with them in ways that encourage the movement of viral marketing content, yet there has been little empirical research on how these websites diffuse viral marketing content. In this article, we explore the influence of social network websites on viral marketing, and the characteristics of the users most influential in spreading viral content. Structural equation modeling is used to examine the patterns of inter-correlations among the constructs and to empirically test the hypotheses.</description>
        <description>http://thesai.org/Publication/IJACSA/Archives/Volume1No6.aspx</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Adaptive Channel Estimation Techniques for MIMO OFDM Systems</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010620</link>
        <id>10.14569/IJACSA.2010.010620</id>
        <doi>10.14569/IJACSA.2010.010620</doi>
        <lastModDate>2012-07-01T10:02:49.1800000+00:00</lastModDate>
        
        <creator>Md Masud Rana</creator>
        
        <creator>Md. Kamal Hosain</creator>
        
        <subject>MIMO; NLMS; OFDM; RLS</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>In this paper, normalized least mean square (NLMS) and recursive least squares (RLS) adaptive channel estimators are described for multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. These channel estimation (CE) methods use adaptive estimators that are able to update the estimator parameters continuously, so that knowledge of the channel and noise statistics is not necessary. The NLMS/RLS CE algorithms require knowledge of the received signal only. Simulation results demonstrate that the RLS CE method performs better than the NLMS CE method for MIMO OFDM systems. In addition, using more antennas at the transmitter and/or receiver provides much higher performance compared with fewer antennas. Furthermore, the RLS CE algorithm provides a faster convergence rate than the NLMS CE method. Therefore, to cope with more dynamic channels, the RLS CE algorithm is the better choice for MIMO OFDM systems.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_20_ADAPTIVE_CHANNEL_ESTIMATION_TECHNIQUES_FOR_MIMO_OFDM_SYSTEMS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Study on Associative Neural Memories</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010619</link>
        <id>10.14569/IJACSA.2010.010619</id>
        <doi>10.14569/IJACSA.2010.010619</doi>
        <lastModDate>2012-07-01T10:02:42.6000000+00:00</lastModDate>
        
        <creator>B.D C.N Prasad</creator>
        
        <creator>P E S N Krishna Prasad</creator>
        
        <creator>Sagar Yeruva</creator>
        
        <creator>P Sita Rama Murty</creator>
        
        <subject>Associative memories; SAM; DAM; Hopfield model; BAM; Holographic Associative Memory (HAM); Context-sensitive Auto-associative Memory (CSAM); Context-sensitive Asynchronous Memory (CSYM)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Memory plays a major role in artificial neural networks. Without memory, a neural network cannot learn by itself. One of the primary concepts of memory in neural networks is the associative neural memory. A survey has been made of associative neural memories such as simple associative memories (SAM), dynamic associative memories (DAM), bidirectional associative memories (BAM), Hopfield memories, context-sensitive auto-associative memories (CSAM) and so on. These memories can be applied in various fields to obtain effective outcomes. We present a study of these associative memories in artificial neural networks.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_19_A_Study_on_Associative_Neural_Memories.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>EM Wave Transport 2D and 3D Investigations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010618</link>
        <id>10.14569/IJACSA.2010.010618</id>
        <doi>10.14569/IJACSA.2010.010618</doi>
        <lastModDate>2012-07-01T10:02:36.2630000+00:00</lastModDate>
        
        <creator>Rajveer S Yaduvanshi</creator>
        
        <creator>Harish Parthasarathy</creator>
        
        <subject>Boltzmann Transport Equation, Probability Distribution Function, Coupled BTE-Maxwell’s</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The Boltzmann Transport Equation (BTE) [1-2] has been modelled in close conjunction with Maxwell’s equations, and investigations of 2D and 3D carrier transport are proposed. The exact solution of the Boltzmann equation remains a core field of research. We have worked towards evaluating 2D and 3D solutions of the BTE. Our work can be extended to study electromagnetic wave transport in the upper atmosphere, i.e. the ionosphere. We give a theoretical and numerical analysis of the probability density function and the collision integral under various initial and final conditions. Modelling of the coupled Boltzmann-Maxwell equations with binary-collision and multi-species collision terms has been evaluated. Solutions for the electric field (E) and magnetic field (B) under coupled conditions have been obtained. PDF convergence in the absence of an electric field has been sketched with an iterative approach and is shown in figure 1. A general 3D algorithm for the solution of the BTE is also suggested.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_18_EM__WAVE__TRANSPORT_2D3D_INVESTIGATIONS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Model Based Test Case Prioritization For Testing Component Dependency In CBSD Using UML Sequence Diagram</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010617</link>
        <id>10.14569/IJACSA.2010.010617</id>
        <doi>10.14569/IJACSA.2010.010617</doi>
        <lastModDate>2012-07-01T10:02:29.9030000+00:00</lastModDate>
        
        <creator>Arup Abhinna Acharya</creator>
        
        <creator>Vuda Sreenivasarao</creator>
        
        <creator>Namita Panda</creator>
        
        <subject>Regression Testing, Object Interaction Graph, Test Cases, CBSD</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Software maintenance is an important and costly activity of the software development lifecycle. To ensure proper maintenance, the software undergoes regression testing. It is very inefficient to re-execute every test case in regression testing for small changes. Hence, test case prioritization is a technique to schedule the test cases in an order that maximizes some objective function. A variety of objective functions are applicable; one such function involves the rate of fault detection, a measure of how quickly faults are detected within the testing process. Early fault detection can provide faster feedback, giving debuggers scope to carry out their task at an early stage. In this paper we propose a method to prioritize test cases for testing component dependency in a Component Based Software Development (CBSD) environment using a greedy approach. An Object Interaction Graph (OIG) is generated from the UML sequence diagrams for interdependent components. The OIG is traversed to calculate the total number of inter-component and intra-component object interactions. Depending upon the number of interactions, the objective function is calculated and the test cases are ordered accordingly. This technique is applied to components developed in Java for a software system and is found to be very effective for early fault detection compared to a non-prioritized approach.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_17_Model_Based_Test_Case_Prioritization_For_Testing_Component_Dependency_In_CBSD_Using_UML_Sequence_Diagram.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design Strategies for AODV Implementation in Linux</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010616</link>
        <id>10.14569/IJACSA.2010.010616</id>
        <doi>10.14569/IJACSA.2010.010616</doi>
        <lastModDate>2012-07-01T10:02:23.5930000+00:00</lastModDate>
        
        <creator>Ms Prinima Gupta</creator>
        
        <creator>Dr. R. K Tuteja</creator>
        
        <subject>Ad-hoc Networking, AODV, MANET.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>In a Mobile Ad hoc Network (MANET), mobile nodes construct the network, nodes may join and leave at any time, and the topology changes dynamically. Routing in a MANET is challenging because of the dynamic topology and the lack of an existing fixed infrastructure. In this paper, we explore the difficulties encountered in implementing MANET routing protocols in real operating systems, and study the common requirements imposed by MANET routing on the underlying operating system services. We also explain implementation techniques of the AODV protocol for determining the needed events, such as snooping, kernel modification, and Netfilter. In addition, this paper discusses the advantages and disadvantages of each implementation of this architecture in Linux.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_16_DESIGN_STRATEGIES_FOR_AODV_IMPLEMENTATION_IN_LINUX.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Technique for Human Face Emotion Detection</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010615</link>
        <id>10.14569/IJACSA.2010.010615</id>
        <doi>10.14569/IJACSA.2010.010615</doi>
        <lastModDate>2012-07-01T10:02:17.2600000+00:00</lastModDate>
        
        <creator>Renu Nagpal</creator>
        
        <creator>Pooja Nagpal</creator>
        
        <creator>Sumeet Kaur</creator>
        
        <subject>biometrics; adaptive median filter; bacteria foraging optimization; feature detection; facial expression</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>This paper presents a novel approach for the detection of emotions using the cascading of Mutation Bacteria Foraging Optimization (MBFO) and an Adaptive Median Filter (AMF) in a highly corrupted noisy environment. The approach first removes noise from the image using the combination of MBFO and AMF and then detects local, global and statistical features from the image. The Bacterial Foraging Optimization Algorithm (BFOA) is currently gaining popularity in the research community for its effectiveness in solving certain difficult real-world optimization problems. An automatic system for the recognition of facial expressions is based on a representation of the expression, learned from a training set of pre-selected meaningful features. In reality, however, the noise embedded in an image affects the performance of face recognition algorithms. As a first step, we investigate emotionally intelligent computers that can perceive human emotions. In this paper, four classes, namely anger, fear and happiness along with neutral, are tested from a database in a salt-and-pepper noise environment. A very high recognition rate has been achieved for all emotions along with neutral on the training dataset as well as a user-defined dataset. The proposed method uses the cascading of MBFO and AMF for noise removal and neural networks to classify the emotions. Our results so far show the approach to have a promising success rate.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_15_Hybrid_Technique_for_Human_Face_Emotion_Detection.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Randomized Algorithmic Approach for Biclustering of Gene Expression Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010613</link>
        <id>10.14569/IJACSA.2010.010613</id>
        <doi>10.14569/IJACSA.2010.010613</doi>
        <lastModDate>2012-07-01T10:02:04.3900000+00:00</lastModDate>
        
        <creator>Sradhanjali Nayak</creator>
        
        <creator>Debahuti Mishra</creator>
        
        <creator>Satyabrata Das</creator>
        
        <creator>Amiya Kumar Rath</creator>
        
        <subject>Bicluster; microarray data; gene expression; randomized algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Microarray data processing revolves around the pivotal issue of locating genes that alter their expression in response to pathogens, other organisms or other environmental conditions, as revealed by a comparison between infected and uninfected cells or tissues. A comprehensive analysis of the corollaries of certain treatments, diseases and developmental stages, embodied as a data matrix of gene expression data, is possible through simultaneous observation and monitoring of the expression levels of multiple genes. Clustering is the process of grouping genes into clusters based on different parameters, considering either one row at a time (row clustering) or one column at a time (column clustering). The application of this clustering approach is crippled by conditions that are unrelated to genes. To overcome these problems, a unique form of clustering has evolved that clusters rows and columns simultaneously, known as biclustering. A bicluster is a sub-matrix of data values, obtained by removing some of the rows and some of the columns of a given data matrix in such a fashion that each remaining row reads the same string. A fast, simple and efficient randomized algorithm is explored in this paper, which discovers the largest bicluster by random projections.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_13_Randomized_Algorithmic_Approach_for_Biclustering_of_Gene_Expression_Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Improvement by Changing Modulation Methods for Software Defined Radios</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010612</link>
        <id>10.14569/IJACSA.2010.010612</id>
        <doi>10.14569/IJACSA.2010.010612</doi>
        <lastModDate>2012-07-01T10:01:58.1270000+00:00</lastModDate>
        
        <creator>Bhalchandra B Godbole</creator>
        
        <creator>Dilip S. Aldar</creator>
        
        <subject>Wireless mobile communication, SDR, reconfigurability, modulation switching</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>This paper describes automatic switching of the modulation method to reconfigure the transceivers of a Software Defined Radio (SDR) based wireless communication system. The programmable architecture of software radio promotes a flexible implementation of modulation methods. This flexibility also translates into adaptivity, which is used here to optimize the throughput of a wireless network operating under varying channel conditions. It is robust and efficient, with a processing-time overhead that still allows the SDR to maintain its real-time operating objectives. This technique is studied for digital wireless communication systems. Tests and simulations using an AWGN channel show that the SNR threshold is 5dB for the case study.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_12_Performance_Improvement_by_Changing_Modulation_Met.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Characterization and Architecture of Component Based Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010611</link>
        <id>10.14569/IJACSA.2010.010611</id>
        <doi>10.14569/IJACSA.2010.010611</doi>
        <lastModDate>2012-07-01T10:01:51.8600000+00:00</lastModDate>
        
        <creator>Er Iqbaldeep kaur</creator>
        
        <creator>Dr.P.K.Suri</creator>
        
        <creator>Er.Amit Verma</creator>
        
        <subject>Components, CBSD, CORBA, Koala, EJB, Component retrieval, repositories etc.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Component-based software engineering (CBSE) is one of the most common terms nowadays in the field of software development. The CBSE approach is based on the principle of ‘select and use’ rather than ‘design and test’ as in traditional software development methods. Since this trend of using and reusing components is still in its developing stage, there are many advantages as well as problems that arise while using components. Presented here is a series of papers that covers various important and integral issues in the field concerned. This paper is an introductory study of the essential concepts, principles and steps that underlie the available commercialized models in CBD. This research work has a scope extending to component retrieval in repositories, their management, and verification of the results.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_11-CHARACTERIZATION%20AND%20ARCHITECTURE%20OF%20COMPONENT%20BASED%20MODELS.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Microcontroller Based Home Automation System with Security</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010610</link>
        <id>10.14569/IJACSA.2010.010610</id>
        <doi>10.14569/IJACSA.2010.010610</doi>
        <lastModDate>2012-07-01T10:01:45.5470000+00:00</lastModDate>
        
        <creator>Inderpreet Kaur</creator>
        
        <subject>Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>With the advancement of technology, things are becoming simpler and easier for us, and automatic systems are being preferred over manual systems. This paper covers the basic definitions needed to understand the project and further defines the technical criteria to be implemented as part of the project.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_10-Microcontroller_Based_Home_Automation_System_With_Security.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Evaluation of Node Failure Prediction QoS Routing Protocol (NFPQR) in Ad Hoc Networks
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010609</link>
        <id>10.14569/IJACSA.2010.010609</id>
        <doi>10.14569/IJACSA.2010.010609</doi>
        <lastModDate>2012-07-01T10:01:39.2170000+00:00</lastModDate>
        
        <creator>Dr D Srinivas Rao</creator>
        
        <creator>Sake Pothalaiah</creator>
        
        <subject>NFPQR; C-NFPQR; CEDAR; PLBQR; TDR; QoS; Routing Protocols</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The characteristics of ad hoc networks make QoS support a very complex process, unlike in traditional networks. Nodes in ad hoc wireless networks have limited power capabilities. Node failure in the network leads to problems such as network topology changes, network partitions, packet losses and low signal quality. Many QoS routing protocols, such as the Predictive Location Based QoS Routing (PLBQR) protocol, Ticket-based QoS routing, the Trigger-based Distributed QoS Routing (TDR) protocol, the Bandwidth Routing (BR) protocol and the Core Extracted Distributed Routing (CEDAR) protocol, have been proposed. However, these algorithms do not consider node failures and their consequences in routing, so most routing protocols do not perform well under frequent or unpredictable node failure conditions. The Node Failure Prediction QoS Routing (NFPQR) scheme provides optimal route selection by predicting the possibility of failure of a node through its power level. The NFPQR protocol has been modified into C-NFPQR (Clustered NFPQR) to provide power optimization using a cluster-based approach. The performance of NFPQR and C-NFPQR is evaluated through the same QoS parameters.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_9-Performance_Evaluation_of_Node_Failure_Prediction_QoS_Routing_Protocol_(NFPQR)_in_Ad_Hoc_Network.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Robust R Peak and QRS detection in Electrocardiogram using Wavelet Transform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010608</link>
        <id>10.14569/IJACSA.2010.010608</id>
        <doi>10.14569/IJACSA.2010.010608</doi>
        <lastModDate>2012-07-01T10:01:32.9230000+00:00</lastModDate>
        
        <creator>P Sasikala</creator>
        
        <creator>Dr. R.S.D. Wahidabanu</creator>
        
        <subject>Electrocardiogram, Wavelet Transform, QRS complex, Filters, Thresholds</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>In this paper, a robust R peak and QRS detection method using the Wavelet Transform has been developed. The Wavelet Transform provides efficient localization in both time and frequency. The Discrete Wavelet Transform (DWT) has been used to extract relevant information from the ECG signal in order to perform classification. Electrocardiogram (ECG) signal feature parameters are the basis for signal analysis, diagnosis, authentication and identification performance. These parameters can be extracted from the intervals and amplitudes of the signal. The first step in extracting ECG features is the exact detection of the R peak in the QRS complex. The accuracy of the determined temporal locations of the R peak and QRS complex is essential for the performance of other ECG processing stages. Individuals can be identified once their ECG signature is formulated. This is initial work towards establishing that the ECG signal is a signature, like a fingerprint or retinal signature, for individual identification. Analysis is carried out using MATLAB software. The correct detection rate of the peaks is up to 99% based on the MIT-BIH ECG database.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_8_Robust_R_Peak_and_QRS_detection_in_Electrocardiogram_using_Wavelet_Transform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Cloud Computing Through Mobile-Learning</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010607</link>
        <id>10.14569/IJACSA.2010.010607</id>
        <doi>10.14569/IJACSA.2010.010607</doi>
        <lastModDate>2012-07-01T10:01:26.6470000+00:00</lastModDate>
        
        <creator>N Mallikharjuna Rao</creator>
        
        <creator>C.Sasidhar</creator>
        
        <creator>V. Satyendra Kumar</creator>
        
        <subject>Cloud Computing, Education, SAAS, Quality Teaching, Cost effective Cloud, Mobile phone, Mobile Cloud</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Cloud computing is a new technology that has various advantages and is readily adoptable in the present scenario. Its main advantage is that it reduces the cost of implementing hardware, software and licenses for all. This is the right time to analyze the cloud and its implementation, and to use it for developing quality, low-cost education all over the world. In this paper, we discuss how to leverage cloud computing to take education to a wider mass of students across the country. We believe cloud computing will improve the current system of education and raise quality at an affordable cost.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_7_Cloud_Computing_Through_Mobile-Learning.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Efficient Implementation of Sample Rate Converter</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010606</link>
        <id>10.14569/IJACSA.2010.010606</id>
        <doi>10.14569/IJACSA.2010.010606</doi>
        <lastModDate>2012-07-01T10:01:23.3030000+00:00</lastModDate>
        
        <creator>Charanjit singh</creator>
        
        <creator>Manjeet Singh patterh</creator>
        
        <creator>Sanjay Sharma</creator>
        
        <subject>WiMAX; Half Band filter; FIR filter; CIC Filter; Farrow Filter; FPGA</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Within wireless base station system design, manufacturers continue to seek ways to add value and performance while increasing differentiation. Transmit/receive functionality has become an area of focus as designers attempt to address the need to move data from very high sample rates to chip processing rates. The Digital Up Converter (DUC) and Digital Down Converter (DDC) are used as sample rate converters. These are important blocks in every digital communication system; hence there is a need for effective implementation of the sample rate converter so that cost can be reduced. With recent advances in FPGA technology, more complex devices providing the high speed required in DSP applications are available. Filter implementation in an FPGA, utilizing the dedicated hardware resources, can effectively achieve application-specific integrated circuit (ASIC)-like performance while reducing development time, cost and risk. In this paper, a technique for the efficient design of a DDC for reducing the sample rate is suggested which meets the specifications of the WiMAX system. Its effective implementation also paves the way for efficient applications in VLSI designs. Different design configurations for the sample rate converter are explored: the converter can be designed using half-band filters, fixed FIR filters, poly-phase filters, CIC filters or even Farrow filters.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_6-Efficient_Implementation_of_Sample_Rate_Converter.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modified ID-Based Public key Cryptosystem using Double Discrete Logarithm Problem</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010605</link>
        <id>10.14569/IJACSA.2010.010605</id>
        <doi>10.14569/IJACSA.2010.010605</doi>
        <lastModDate>2012-07-01T10:01:17.0130000+00:00</lastModDate>
        
        <creator>Chandrashekhar Meshram</creator>
        
        <subject>Public key Cryptosystem, Identity based Cryptosystem, Discrete Logarithm Problem, Double Discrete Logarithm Problem.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>In 1984, Shamir [1] introduced the concept of an identity-based cryptosystem. In this system, each user needs to visit a key authentication center (KAC) and identify himself before joining a communication network. Once a user is accepted, the KAC provides him with a secret key. In this way, if a user wants to communicate with others, he only needs to know the “identity” of his communication partner and the public key of the KAC. No public file is required in this system. However, Shamir did not succeed in constructing an identity-based cryptosystem, but only an identity-based signature scheme. Meshram and Agrawal [4] have proposed an ID-based cryptosystem based on the double discrete logarithm problem, which uses the public key cryptosystem based on that problem. In this paper, we propose a modification of an ID-based cryptosystem based on the double discrete logarithm problem, consider its security against a conspiracy of some entities, and show the possibility of establishing a more secure system.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_5_Modified_ID-Based_Public_key_Cryptosystem_using_Do.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Quantization Table Estimation in JPEG Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010603</link>
        <id>10.14569/IJACSA.2010.010603</id>
        <doi>10.14569/IJACSA.2010.010603</doi>
        <lastModDate>2012-07-01T10:01:10.7100000+00:00</lastModDate>
        
        <creator>Salma Hamdy</creator>
        
        <creator>Haytham El-Messiry</creator>
        
        <creator>Mohamed Roushdy</creator>
        
        <creator>Essam Kahlifa</creator>
        
        <subject>Digital image forensics; forgery detection; compression history; Quantization tables</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>Most digital image forgery detection techniques require the suspect image to be uncompressed and of high quality. However, most image acquisition and editing tools use the JPEG standard for image compression. The histogram of Discrete Cosine Transform coefficients contains information on the compression parameters for JPEGs and previously compressed bitmaps. In this paper we present a straightforward method to estimate the quantization table from the peaks of the histogram of DCT coefficients. The estimated table is then used with two distortion measures to deem images untouched or forged. Testing the procedure on a large set of images gave a reasonable average estimation accuracy of 80% that increases up to 88% with increasing quality factors. Forgery detection tests on four different types of tampering resulted in average false negative rates of 7.95% and 4.35% for the two measures, respectively.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_3_Quantization_Table_Estimation_in_JPEG_Images_.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automating Legal Research through Data Mining</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010602</link>
        <id>10.14569/IJACSA.2010.010602</id>
        <doi>10.14569/IJACSA.2010.010602</doi>
        <lastModDate>2012-07-01T10:01:04.8230000+00:00</lastModDate>
        
        <creator>M F M Firdhous</creator>
        
        <subject>Text Mining; Legal Research; Term Weighting; Vector Space</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>The term legal research generally refers to the process of identifying and retrieving the information necessary to support legal decision-making from past case records. At present, the process is mostly manual, though traditional technologies such as keyword searching are commonly used to speed it up. However, a keyword search is not comprehensive enough to meet the requirements of legal research, as the results include too many false hits in the form of irrelevant case records. Hence, present generic tools cannot be used to automate legal research. This paper presents a framework developed by combining several ‘Text Mining’ techniques to automate the process, overcoming the difficulties in existing methods. Further, the research also identifies possible enhancements that could improve the effectiveness of the framework.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_2-Automating_Legal_Research_through_Data_Mining.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pause Time Optimal Setting for AODV Protocol on RPGM Mobility Model in MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010601</link>
        <id>10.14569/IJACSA.2010.010601</id>
        <doi>10.14569/IJACSA.2010.010601</doi>
        <lastModDate>2012-07-01T10:00:58.0670000+00:00</lastModDate>
        
        <creator>Sayid Mohamed Abdule</creator>
        
        <creator>Suhaidi Hassan</creator>
        
        <creator>Osman Ghazali</creator>
        
        <creator>Mohammed M. Kadhum</creator>
        
        <subject>MANETs, AODV, pause time, optimal setting, RPGM</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(6), 2010</description>
        <description>For the last few years, a number of routing protocols have been proposed and implemented for wireless mobile ad hoc networks. The motivation behind this paper is to study the effects of pause time on the Ad hoc On-Demand Distance Vector (AODV) protocol and to find the optimal node pause time setting for this protocol, with the Reference Point Group Mobility (RPGM) model used as the reference mobility model. To find the best performance of a particular routing protocol, a number of parameters need to be examined under different conditions and the optimal settings of the protocol and its network configuration environment analyzed. In this experiment, the node speed is fixed at 20 m/s in all scenarios, while the pause time varies from scenario to scenario to observe its effect on the protocol’s performance in this configuration. The outcomes of the experiment are analyzed for different parameters, such as a varying number of nodes, an increasing number of connections, and increasing pause time, and the effects of the pause time are discussed. The results show that the value of the pause time can affect the performance of the protocol; in particular, we found that lower pause times give better performance. This paper is part of ongoing research on the AODV protocol under link failure, so it is important to identify the factors that influence the protocol’s performance.</description>
        <description>http://thesai.org/Downloads/Volume1No6/Paper_1-PAUSE_TIME_OPTIMAL_SETTING_FOR_AODV_PROTOCOL_ON_RPGM_MOBILITY_MODEL_IN_MANETs.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>OFDM System Analysis for Reduction of Inter Symbol Interference Using the AWGN Channel Platform</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010520</link>
        <id>10.14569/IJACSA.2010.010520</id>
        <doi>10.14569/IJACSA.2010.010520</doi>
        <lastModDate>2012-07-01T10:00:50.3670000+00:00</lastModDate>
        
        <creator>D M Bappy</creator>
        
        <creator>Ajoy Kumar Dey</creator>
        
        <creator>Susmita Saha</creator>
        
        <creator>Avijit Saha</creator>
        
        <creator>Shibani Ghosh</creator>
        
        <subject>OFDM; Inter symbol Interference; AWGN; matlab; algorithm.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Orthogonal Frequency Division Multiplexing (OFDM) is emerging as an important modulation technique because of its capacity to ensure a high level of robustness against interference. This project is mainly concerned with how well an OFDM system performs in reducing Inter Symbol Interference (ISI) when transmission is made over an Additive White Gaussian Noise (AWGN) channel. Since OFDM is a low symbol rate, long symbol duration modulation scheme, it is sensible to insert a guard interval between OFDM symbols for the purpose of eliminating the effect of ISI with increasing Signal to Noise Ratio (SNR).</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2020%20OFDM%20System%20Analysis%20for%20reduction%20of%20Inter%20symbol%20Interference%20Using%20the%20AWGN%20Channel%20Platform.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Framework for Marketing Libraries in the Post-Liberalized Information and Communications Technology Era</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010519</link>
        <id>10.14569/IJACSA.2010.010519</id>
        <doi>10.14569/IJACSA.2010.010519</doi>
        <lastModDate>2012-07-01T10:00:44.0530000+00:00</lastModDate>
        
        <creator>Garimella Bhaskar Narasimha Rao</creator>
        
        <subject>Digital Libraries, University Learning, Adaptability</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The role of the library is professional and is rapidly adapting to changing technological platforms. Our subscribers’ perceptions of the nature of our libraries are also changing like never before, particularly in universities. In this dynamic environment, the effective promotion of the library is an essential survival tool. This holds true whether a library is contributing to an institution’s overall management effort or, sensitively and subtly, promoting rational attitudes in the minds of its local constituency. The citations, references, and annotations referred to here are intended to provide background, ideas, techniques, and inspiration for novice as well as experienced personnel. We hope readers will find this paper interesting and useful, and that libraries may gain useful collaborations and reputation from applying the information provided by the resources identified here. Result-oriented marketing is not, on its own, a substitute for a well-run service provider that meets the demands of its user population, but in the 21st century even the best-run centers of learning or information service will only prosper if effort and talent are devoted to growth orientation. Almost all of us can contribute to this effort, and this paper can help coax the roots of marketing talent into a complete, harvestable, and fragrant flower.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2019-A%20framework%20for%20Marketing%20Libraries%20in%20the%20Post-Liberalized%20Information%20and%20Communications%20Technology%20Era.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhancement of Passive MAC Spoofing Detection Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010518</link>
        <id>10.14569/IJACSA.2010.010518</id>
        <doi>10.14569/IJACSA.2010.010518</doi>
        <lastModDate>2012-07-01T10:00:37.7400000+00:00</lastModDate>
        
        <creator>Aiman Abu Samra</creator>
        
        <creator>Ramzi Abed</creator>
        
        <subject>Intrusion Detection, RSS, RTT, Denial of Service</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Failure to address all IEEE 802.11i Robust Security Network (RSN) vulnerabilities has forced many researchers to revise robust and reliable Wireless Intrusion Detection Techniques (WIDTs). In this paper we propose an algorithm to enhance the performance of correlating two WIDTs in detecting MAC spoofing Denial of Service (DoS) attacks: the Received Signal Strength Detection Technique (RSSDT) and the Round Trip Time Detection Technique (RTTDT). Two sets of experiments were conducted to evaluate the proposed algorithm. The absence of false negatives and the low number of false positives in all experiments demonstrated the effectiveness of these techniques.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2018-Enhancement%20of%20Passive%20MAC%20Spoofing%20Detection%20Techniques.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Texture Based Segmentation using Statistical Properties for Mammographic Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010517</link>
        <id>10.14569/IJACSA.2010.010517</id>
        <doi>10.14569/IJACSA.2010.010517</doi>
        <lastModDate>2012-07-01T10:00:31.3800000+00:00</lastModDate>
        
        <creator>H B Kekre</creator>
        
        <creator>Saylee Gharge</creator>
        
        <subject>H.B.Kekre,Saylee Gharge
</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Segmentation is a very basic and important step in computer vision and image processing. For medical images in particular, accuracy is much more important than computational complexity and hence the time required by the process. But as the volume of patient data keeps increasing, it becomes necessary to consider processing time along with accuracy. In this paper, a new algorithm is proposed for texture-based segmentation using statistical properties. The probability of each intensity value of the image is calculated directly, and a new image is formed by replacing each intensity with its probability. Variance is calculated in three different ways to extract the texture features of the mammographic images. The results of the proposed algorithm are compared with the well-known GLCM and Watershed algorithms.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2017-Texture%20Based%20Segmentation%20using%20Statistical%20Properties%20for%20Mammographic%20Images.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Decision Tree Classification of Remotely Sensed Satellite Data using Spectral Separability Matrix</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010516</link>
        <id>10.14569/IJACSA.2010.010516</id>
        <doi>10.14569/IJACSA.2010.010516</doi>
        <lastModDate>2012-07-01T10:00:25.0730000+00:00</lastModDate>
        
        <creator>M K Ghose</creator>
        
        <creator>Ratika Pradhan</creator>
        
        <creator>Sucheta Sushan Ghose</creator>
        
        <subject>Decision Tree Classifier (DTC), Separability Matrix, Maximum Likelihood Classifier (MLC), Stopping Criteria</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>In this paper an attempt has been made to develop a decision tree classification algorithm for remotely sensed satellite data using the separability matrix of the spectral distributions of probable classes in the respective bands. The spectral distance between any two classes is calculated from the difference between the minimum spectral value of a class and the maximum spectral value of its preceding class for a particular band. The decision tree is then constructed by recursively partitioning the spectral distribution in a top-down manner. Using the separability matrix, a threshold and a band are chosen to partition the training set optimally. The classified image is compared with the image classified using the classical Maximum Likelihood Classifier (MLC). The overall accuracy was found to be 98% using the decision tree method and 95% using the maximum likelihood method, with kappa values of 97% and 94% respectively.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2016-Decision%20Tree%20Classification%20Of%20Remotely%20Sensed%20Satellite%20Data%20Using%20Spectral%20Separability%20Matrixpaper.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Suitable Segmentation Methodology Based on Pixel Similarities for Landmine Detection in IR Images</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010515</link>
        <id>10.14569/IJACSA.2010.010515</id>
        <doi>10.14569/IJACSA.2010.010515</doi>
        <lastModDate>2012-07-01T10:00:18.7500000+00:00</lastModDate>
        
        <creator>Dr G Padmavathi</creator>
        
        <creator>Dr. P. Subashini</creator>
        
        <creator>Ms. M. Krishnaveni</creator>
        
        <subject> Segmentation, Global Consistency error, h-maxima, threshold, Landmine detection</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Identification of masked objects, especially in the detection of landmines, is always a difficult problem due to environmental interference. Here, the segmentation phase is the main focus: an initial spatial segmentation is performed to achieve a minimal number of segmented regions while preserving the homogeneity criteria of each region. This paper aims at evaluating similarity-based segmentation methods to compose the partition of objects in infrared images. The output is a set of non-overlapping homogeneous regions that compose the pixels of the image. These extracted regions are used as the initial data structure in the feature extraction process. Experimental results conclude that the h-maxima transformation provides better results for landmine detection by taking advantage of the threshold. The relative performance of different conventional methods and the proposed method is evaluated and compared using the Global Consistency Error and Structural Content. This shows that h-maxima gives significant results that facilitate the landmine classification system more effectively.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2015-A%20suitable%20segmentation%20methodology%20based%20on%20pixel%20similarities%20for%20landmine.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modular Neural Network Approach for Short Term Flood Forecasting: A Comparative Study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010514</link>
        <id>10.14569/IJACSA.2010.010514</id>
        <doi>10.14569/IJACSA.2010.010514</doi>
        <lastModDate>2012-07-01T10:00:11.2570000+00:00</lastModDate>
        
        <creator>Rahul P Deshmukh</creator>
        
        <creator>A. A. Ghatol</creator>
        
        <subject>Artificial neural network, Forecasting, Rainfall, Runoff, Models.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Artificial neural networks (ANNs) have recently been applied to various hydrologic problems. This research demonstrates a static neural approach by applying a modular feedforward neural network to rainfall-runoff modeling for the upper catchment of the Wardha River in India. The model is developed by processing online data over time using static modular neural network modeling. Methodologies and techniques for four models are presented in this paper, and a comparison of their short term runoff prediction results is also conducted. The prediction results of the modular feedforward neural network with model two indicate satisfactory performance in three-hours-ahead prediction. The conclusions also indicate that the modular feedforward neural network with model two is more versatile than the others and can be considered an alternative and practical tool for predicting short term flood flow.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2014-Modular%20neural%20network%20approach%20for%20short%20term%20flood%20forecasting%20a%20comparative%20study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Reliable Multicast Transport Protocol: RMTP</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010513</link>
        <id>10.14569/IJACSA.2010.010513</id>
        <doi>10.14569/IJACSA.2010.010513</doi>
        <lastModDate>2012-07-01T10:00:04.8970000+00:00</lastModDate>
        
        <creator>Pradip M Jawandhiya</creator>
        
        <creator>Sakina F.Husain</creator>
        
        <creator>M.R.Parate</creator>
        
        <creator>Dr. M. S. Ali</creator>
        
        <creator>Prof. J.S.Deshpande</creator>
        
        <subject>multicast routing, MANET, acknowledgement implosion, designated receiver</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>This paper presents the design, implementation, and performance of a reliable multicast transport protocol (RMTP). RMTP is based on a hierarchical structure in which receivers are grouped into local regions or domains, and in each domain there is a special receiver called a designated receiver (DR), which is responsible for sending acknowledgments periodically to the sender, for processing acknowledgments from receivers in its domain, and for retransmitting lost packets to the corresponding receivers. Since lost packets are recovered by local retransmissions as opposed to retransmissions from the original sender, end-to-end latency is significantly reduced, and the overall throughput is improved as well. Also, since only the DRs send their acknowledgments to the sender, instead of all receivers doing so, a single acknowledgment is generated per local region, which prevents acknowledgment implosion. Receivers in RMTP send their acknowledgments to the DRs periodically, thereby simplifying error recovery. In addition, lost packets are recovered by selective repeat retransmissions, leading to improved throughput at the cost of minimal additional buffering at the receivers. This paper also describes the implementation of RMTP and its performance on the Internet.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2013%20Reliable%20Multicast%20Transport%20Protocol%20RMTP.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Single Input Multiple Output (SIMO) Wireless Link with Turbo Coding</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010512</link>
        <id>10.14569/IJACSA.2010.010512</id>
        <doi>10.14569/IJACSA.2010.010512</doi>
        <lastModDate>2012-07-01T09:59:58.5670000+00:00</lastModDate>
        
        <creator>M M Kamruzzaman</creator>
        
        <creator>Dr. Mir Mohammad Azad</creator>
        
        <subject>Diversity; multipath fading; Rayleigh fading; Turbo coding; Maximal Ratio Combining (MRC); maximum-likelihood detector; multiple antennas</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The performance of a wireless link with turbo coding is evaluated in the presence of Rayleigh fading with a single transmitting antenna and multiple receiving antennas. A QAM modulator is considered with maximum likelihood decoding. Performance results show a significant improvement in the signal to noise ratio (SNR) required to achieve a given BER. It is found that the system attains coding gains of 14.5 dB and 13 dB for two receiving antennas and four receiving antennas respectively over the corresponding uncoded system. Further, there is an SNR improvement of 6.5 dB for four receiving antennas over two receiving antennas for the turbo coded system.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2012%20Single%20Input%20Multiple%20Output%20(SIMO)%20Wireless%20Link%20with%20Turbo%20Coding.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Generalized Two Axes Modeling, Order Reduction and Numerical Analysis of Squirrel Cage Induction Machine for Stability Studies</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010511</link>
        <id>10.14569/IJACSA.2010.010511</id>
        <doi>10.14569/IJACSA.2010.010511</doi>
        <lastModDate>2012-07-01T09:59:52.2230000+00:00</lastModDate>
        
        <creator>Sudhir kumar</creator>
        
        <creator>P. K. Ghosh</creator>
        
        <creator>S. Mukherjee</creator>
        
        <subject>Induction machine, Model order reduction, Stability, Transient analysis</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>A substantial amount of power system load is made up of a large number of three-phase induction machines. The transient phenomena of these machines play an important role in the behavior of the overall system, so modeling of the induction machine is an integral part of some power system transient studies. The analysis takes a detailed form only when the modeling becomes accurate to a greater degree. When the stator eddy current path is taken into account, the uniform air-gap theory in phase model analysis becomes inefficient in handling the transients. This drawback necessitates analyzing the machine in the d-q axis frame. A widely accepted induction machine model for stability studies is the fifth-order model, which considers the electrical transients in both rotor and stator windings as well as the mechanical transients. In practice, some flux transients can be ignored due to the quasi-stationary nature of the variables concerned; this philosophy leads to the formation of reduced order models. Model Order Reduction (MOR) encompasses a set of techniques whose goal is to generate reduced order models with lower complexity while ensuring that the I/O response and other characteristics of the original model (such as passivity) are maintained. This paper takes the above matter as its main point of research. The authors use the speed build-up of the induction machine to find the speed versus time profile for various load conditions and supply voltage disturbances, using the Runge-Kutta, Trapezoidal, and Euler numerical methods in the Matlab platform. The established fact of lower computation time for the reduced order model is verified, and an improvement in accuracy is observed.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2011%20A%20Generalized%20Two%20Axes%20Modeling%20,%20Order%20Reduction%20and%20Numerical%20Analysis%20of%20Squirrel%20Cage%20Induction%20Machine%20for%20Stability%20Studies.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Identification and Evaluation of Functional Dependency Analysis using Rough Sets for Knowledge Discovery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010510</link>
        <id>10.14569/IJACSA.2010.010510</id>
        <doi>10.14569/IJACSA.2010.010510</doi>
        <lastModDate>2012-07-01T09:59:45.8800000+00:00</lastModDate>
        
        <creator>Y V Sreevani</creator>
        
        <creator>Prof. T. Venkat Narayana Rao</creator>
        
        <subject>Rough set; knowledge base; data mining; functional dependency; core knowledge</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The process of data acquisition gained momentum due to the efficient representation of storage and retrieval systems. The commercial and application value of these stored data made database management essential, for reasons such as consistency and atomicity, giving birth to the DBMS. Existing database management systems cannot provide the needed information when the data are not consistent, which is why knowledge discovery in databases and data mining has become popular. The non-trivial extraction of implicit information can be classified as knowledge discovery, and the knowledge discovery process can be attempted with clustering tools. One of the upcoming tools for knowledge representation and knowledge acquisition is based on the concept of rough sets. This paper explores inconsistencies in existing databases by finding functional dependencies and extracting the required information or knowledge based on rough sets. It also discusses attribute reduction through cores and reducts, which helps in avoiding superfluous data. A method is suggested to solve the problem of data inconsistency, with an analysis in the medical domain.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%2010-Identification%20and%20Evaluation%20of%20Functional%20Dependency%20Analysis%20using%20%20Roughsets%20for%20Knowledge%20Discovery.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Pattern Discovery using Fuzzy FP-growth Algorithm from Gene Expression Data</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010509</link>
        <id>10.14569/IJACSA.2010.010509</id>
        <doi>10.14569/IJACSA.2010.010509</doi>
        <lastModDate>2012-07-01T09:59:38.7430000+00:00</lastModDate>
        
        <creator>Sabita Barik</creator>
        
        <creator>Debahuti Mishra</creator>
        
        <creator>Shruti Mishra</creator>
        
        <creator>Sandeep Ku. Satapathy</creator>
        
        <creator>Amiya Ku. Rath</creator>
        
        <creator>Milu Acharya</creator>
        
        <subject>Gene Expression Data; Association Rule Mining; Apriori Algorithm; Frequent Pattern Mining; FP-growth Algorithm</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The goal of microarray experiments is to identify genes that are differentially transcribed with respect to different biological conditions of cell cultures and samples. Hence, methods of data analysis such as clustering, classification, and prediction need to be carefully evaluated. In this paper, we propose an efficient frequent pattern based clustering to find the genes that form frequent patterns showing similar phenotypes, leading to specific symptoms for a specific disease. In the past, most approaches for finding frequent patterns were based on the Apriori algorithm, which generates and tests candidate itemsets (gene sets) level by level. This processing causes iterative database (dataset) scans and high computational costs. The Apriori algorithm also suffers from mapping the support and confidence framework to a crisp boundary. Our hybridized Fuzzy FP-growth approach not only outperforms Apriori with respect to computational costs, but also builds a tight tree structure to keep the membership values of fuzzy regions to overcome the sharp boundary problem, and it also takes care of scalability issues as the number of genes and conditions increases.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%209-Pattern%20Discovery%20using%20Fuzzy%20FP-growth%20Algorithm%20from%20Gene%20Expression%20Data.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Intelligent Software Workflow Process Design for Location Management on Mobile Devices
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010508</link>
        <id>10.14569/IJACSA.2010.010508</id>
        <doi>10.14569/IJACSA.2010.010508</doi>
        <lastModDate>2012-07-01T09:59:32.4370000+00:00</lastModDate>
        
        <creator>N Mallikharjuna Rao</creator>
        
        <creator>P.Seetharam</creator>
        
        <subject>Wireless system, mobile, BPR, software design, intelligent design, Fuzzy database</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Advances in networking and wireless communication technologies and the increasing compactness of computers have led to the rapid development of mobile communication infrastructure and have drastically changed information processing on mobile devices. Users carrying portable devices can move around freely while remaining connected to the network, which provides the flexibility to access information anywhere at any time. To further improve this flexibility, the new challenges in designing software systems for mobile networks include location and mobility management, channel allocation, power saving, and security. In this paper, we propose an intelligent software tool for software design on mobile devices to address the challenges of mobile location and mobility management. The proposed Business Process Redesign (BPR) concept aims to extend the capabilities of an existing, widely used industrial process modeling tool with ‘Intelligent’ capabilities that suggest favorable alternatives to an existing software workflow design, improving flexibility on mobile devices.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%208%20An%20Intelligent%20Software%20Workflow%20Process%20Design%20for%20Location%20Management%20on%20Mobile%20Devices.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis and Enhancement of BWR Mechanism in MAC 802.16 for WIMAX Networks
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010507</link>
        <id>10.14569/IJACSA.2010.010507</id>
        <doi>10.14569/IJACSA.2010.010507</doi>
        <lastModDate>2012-07-01T09:59:26.1230000+00:00</lastModDate>
        
        <creator>R Bhakthavathsalam</creator>
        
        <creator>R. ShashiKumar</creator>
        
        <creator>V. Kiran</creator>
        
        <creator>Y. R. Manjunath</creator>
        
        <subject>WiMAX, MAC 802.16, BWR, NS-2, Tcl</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>WiMAX (Worldwide Interoperability for Microwave Access), an IEEE 802.16 standard, is the latest contender as a last-mile solution for providing broadband wireless Internet access. In the IEEE 802.16 MAC protocol, Bandwidth Request (BWR) is the mechanism by which the Subscriber Station (SS) communicates its need for uplink bandwidth allocation to the Base Station (BS). System performance is affected by collisions of BWR packets in uplink transmission, which directly impact the size of the contention period in the uplink subframe, the uplink access delay, and the uplink throughput. This paper deals with the performance analysis and improvement of uplink throughput in MAC 802.16 through the application of a new mechanism of circularity. The implementation incorporates a generic simulation of the contention resolution mechanism at the BS. The total uplink access delay and the uplink throughput are analyzed. A new paradigm of circularity is employed, selectively dropping appropriate control packets in order to obviate bandwidth request collisions, which yields the minimum access delay and thereby effective utilization of the available bandwidth for uplink throughput. This paradigm improves contention resolution among the bandwidth request packets, reducing delay and increasing throughput regardless of the density and topological spread of the subscriber stations handled by the BS in the network.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%207%20Analysis%20and%20Enhancement%20of%20BWR%20Mechanism%20in%20MAC%20802.16%20for%20WIMAX%20Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>2-Input AND Gate For Fast Gating and Switching Based On XGM Properties of InGaAsP Semiconductor Optical Amplifier</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010506</link>
        <id>10.14569/IJACSA.2010.010506</id>
        <doi>10.14569/IJACSA.2010.010506</doi>
        <lastModDate>2012-07-01T09:59:19.8030000+00:00</lastModDate>
        
        <creator>Vikrant k Srivastava</creator>
        
        <creator>Devendra Chack</creator>
        
        <creator>Vishnu Priye</creator>
        
        <subject>Optical Logic Gates, Semiconductor Optical Amplifier, SOA, Four Wave Mixing, Cross Gain Modulation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>We report an all-optical AND gate using simultaneous Four-Wave Mixing (FWM) and Cross-Gain Modulation (XGM) in a semiconductor optical amplifier (SOA). The operation of the proposed AND gate is simulated, and the results demonstrate its effectiveness. This AND gate could provide a new possibility for all-optical computing and all-optical routing in future all-optical networks. In the AND (AB) gate, a Boolean intermediate is first obtained by using signal B as a pump beam and a clock signal as a probe beam in SOA-1. By passing signal A as a probe beam and that intermediate as a pump beam through SOA-2, the Boolean AB is acquired. The proposed optical logic unit is based on coupled nonlinear equations describing the XGM and FWM effects. These equations are first solved to generate the pump, probe, and conjugate pulses in an SOA. The pulse behavior is analyzed and applied to realize the behavior of the all-optical AND gate, and its function is verified with the help of waveforms and analytical assumptions.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%206%202-%20Input%20AND%20Gate%20For%20Fast%20Gating%20and%20Switching%20Based%20On%20XGM%20Properties%20of%20InGaAsP%20Semiconductor%20Optical%20Amplifier.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>New Filtering Methods in the Wavelet Domain for Bowel Sounds</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010505</link>
        <id>10.14569/IJACSA.2010.010505</id>
        <doi>10.14569/IJACSA.2010.010505</doi>
        <lastModDate>2012-07-01T09:59:12.6830000+00:00</lastModDate>
        
        <creator>Zhang xizheng</creator>
        
        <creator>Yin ling</creator>
        
        <creator>Wang weixiong</creator>
        
        <subject> bowel sounds; de-noising; wavelet transform; threshold</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>The bowel sounds (BS) signal is one of the important human physiological signals; analysis of the BS signal enables the study of gastrointestinal physiology and supports direct and effective diagnosis of gastrointestinal disorders. Different threshold de-noising methods were used to de-noise the original bowel sounds and were simulated in the MATLAB environment; the de-noising results were then compared and the advantages and disadvantages of these threshold de-noising methods analyzed.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%205%20An%20New%20Filtering%20Methods%20in%20the%20Wavelet%20Domain%20for%20Bowel%20Sounds%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Model for Enhancing Requirements Traceability and Analysis</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010503</link>
        <id>10.14569/IJACSA.2010.010503</id>
        <doi>10.14569/IJACSA.2010.010503</doi>
        <lastModDate>2012-07-01T09:59:08.9670000+00:00</lastModDate>
        
        <creator>Ahmed M Salem</creator>
        
        <subject> Requirements Traceability; Software Faults; Software Quality.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>Software quality has been a challenge since the inception of computer software. Software requirements gathering, analysis, and specification are viewed by many as the principal cause of many complex software problems. Requirements traceability is one of the most important and challenging tasks in ensuring clear and concise requirements. Requirements need to be specified and traced throughout the software life cycle in order to produce quality requirements. This paper describes a preliminary model to be used by software engineers to trace and verify requirements at the initial phase. The model is designed to adapt to requirement changes and to assess their impact.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%203-A%20Model%20for%20Enhancing%20Requirements%20Traceability%20and%20Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Requirements Analysis through Viewpoints Oriented Requirements Model (VORD)
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010502</link>
        <id>10.14569/IJACSA.2010.010502</id>
        <doi>10.14569/IJACSA.2010.010502</doi>
        <lastModDate>2012-07-01T09:59:05.6230000+00:00</lastModDate>
        
        <creator>Ahmed M Salem</creator>
        
        <subject>Software Requirements; Requirements Modeling; Functional Requirements
</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>This paper describes an extension to the Viewpoints Oriented Requirements Definition (VORD) model and attempts to resolve its lack of direct support for viewpoint interaction. Supporting the viewpoint interaction provides a useful tool for analyzing requirements changes and automating systems. It can also be used to indicate when multiple requirements are specified as a single requirement. The extension is demonstrated with the bank auto-teller system that was part of the original VORD proposal.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%202-Requirements%20Analysis%20through%20Viewpoints%20Oriented%20Requirements%20Model%20(VORD).pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Wavelet Time-frequency Analysis of Electro-encephalogram (EEG) Processing
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010501</link>
        <id>10.14569/IJACSA.2010.010501</id>
        <doi>10.14569/IJACSA.2010.010501</doi>
        <lastModDate>2012-07-01T09:58:59.7170000+00:00</lastModDate>
        
        <creator>Zhang xizheng</creator>
        
        <creator>Yin ling</creator>
        
        <creator>Wang weixiong</creator>
        
        <subject>EEG, time-frequency analysis, wavelet transform, de-noising.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(5), 2010</description>
        <description>This paper applies time-frequency analysis to the EEG spectrum and wavelet analysis to EEG de-noising. The basic idea is to exploit the multi-scale, multi-resolution characteristics of the wavelet transform, using four different thresholds to remove interference and noise after decomposition of the EEG signals. Analysis of the results and of the effects of the four methods leads to the conclusion that wavelet de-noising with a soft threshold performs best.</description>
        <description>http://thesai.org/Downloads/Volume1No5/Paper%201-Wavelet%20Time-frequency%20Analysis%20of%20Electro-encephalogram%20(EEG)%20Processing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Flow Controlling of Access at Edge Routers
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010417</link>
        <id>10.14569/IJACSA.2010.010417</id>
        <doi>10.14569/IJACSA.2010.010417</doi>
        <lastModDate>2012-07-01T09:58:53.4000000+00:00</lastModDate>
        
        <creator>S.C.V Ramana Rao</creator>
        
        <creator>S.Naga Mallik Raj</creator>
        
        <creator>S. Neeraja</creator>
        
        <creator>P.Prathusha</creator>
        
        <creator>J.David Sukeerthi Kumar</creator>
        
        <subject>bandwidth, traffic, edge routers, routers, decision, multimedia, Quality of Service, framework, algorithm, domain</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>It is very important to allocate and manage resources for multimedia data traffic flows with real-time performance requirements in order to guarantee quality-of-service (QoS). In this paper, we develop a scalable architecture and an algorithm for access control of real-time flows. Since individual management of each traffic flow on each transit router can cause a fundamental scalability problem in both the data and control planes, we consider that each flow is classified at the ingress router and data flows are aggregated according to class inside the core network, as in a DiffServ framework. In our approach, the access decision is made for each flow at the edge routers, yet it remains scalable because per-flow states are not maintained and the access algorithm is simple. In the proposed access control scheme, an admissible bandwidth, defined as the maximum rate of a flow that can be accommodated additionally while satisfying the delay performance requirements for both existing and new flows, is calculated based on the available bandwidth measured by the edge routers. The admissible bandwidth is the criterion for access control, and thus it is very important to estimate it accurately. The performance of the proposed algorithm is evaluated through a set of simulation experiments using bursty traffic flows.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2017-Flow%20Controlling%20of%20Access%20at%20Edge%20Routers.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Clustering Methods for Credit Card using Bayesian rules based on K-means classification
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010416</link>
        <id>10.14569/IJACSA.2010.010416</id>
        <doi>10.14569/IJACSA.2010.010416</doi>
        <lastModDate>2012-07-01T09:58:47.1100000+00:00</lastModDate>
        
        <creator>S Jessica Saritha</creator>
        
        <creator>Prof. P.Govindarajulu</creator>
        
        <creator>K. Rajendra Prasad</creator>
        
        <creator>S.C.V. Ramana Rao</creator>
        
        <creator>C.Lakshmi</creator>
        
        <subject>Clusters, Probability, K-Means, Bayes' rule, Credit Card, attributes, banking</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>The K-means clustering algorithm is a method of cluster analysis which aims to partition n observations into clusters in which each observation belongs to the cluster with the nearest mean. It is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that both attempt to find the centers of natural clusters in the data. Bayes' rule is a theorem in probability theory named after Thomas Bayes; it is used for updating probabilities by finding conditional probabilities given new data. In this paper, the K-means clustering algorithm and Bayesian classification are combined to analyze credit card data. The analysis results can be used to improve accuracy.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2016-Clustering%20Methods%20for%20Credit%20Card%20using%20Bayesian%20rules%20based%20on%20K-means%20classification.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Classification of Self-Organizing Hierarchical Mobile Adhoc Network Routing Protocols - A Summary
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010415</link>
        <id>10.14569/IJACSA.2010.010415</id>
        <doi>10.14569/IJACSA.2010.010415</doi>
        <lastModDate>2012-07-01T09:58:40.7900000+00:00</lastModDate>
        
        <creator>Udayachandran Ramasamy</creator>
        
        <creator>K. Sankaranarayanan</creator>
        
        <subject> MANET, Routing Protocols, Routing Topology , Routing Algorithms and QoS.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>A MANET is a special kind of wireless network: a collection of mobile nodes without the aid of an established infrastructure. A mobile ad hoc network removes the dependence on a fixed network infrastructure by treating every available mobile node as an intermediate switch, thereby extending the range of mobile nodes well beyond that of their base transceivers. Other advantages of MANETs include easy installation and upgrade, low cost and maintenance, more flexibility, and the ability to employ new and efficient routing protocols for wireless communication. In this paper we present four routing algorithm classifications and discuss their advantages and disadvantages.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2015-Classification%20of%20Self-Organizing%20Hierarchical%20Mobile%20Adhoc%20Network%20Routing%20Protocols%20-%20A%20Summary%20.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Measuring Semantic Similarity between Words Using Web Documents
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010414</link>
        <id>10.14569/IJACSA.2010.010414</id>
        <doi>10.14569/IJACSA.2010.010414</doi>
        <lastModDate>2012-07-01T09:58:34.4830000+00:00</lastModDate>
        
        <creator>Sheetal A Takale</creator>
        
        <creator>Sushma S. Nandgaonkar</creator>
        
        <subject> Semantic Similarity, Wikipedia, Web Search Engine, Natural Language Processing, Information Retrieval, Web Mining.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>Semantic similarity measures play an important role in the extraction of semantic relations and are widely used in Natural Language Processing (NLP) and Information Retrieval (IR). The work proposed here uses web-based metrics to compute the semantic similarity between words or terms and compares the results with the state of the art. For a computer to decide semantic similarity, it should understand the semantics of the words; since a computer is a syntactic machine, it cannot understand semantics directly, so an attempt is always made to represent semantics as syntax. Various methods have been proposed to find the semantic similarity between words. Some of these methods use precompiled databases such as WordNet and the Brown Corpus; some are based on web search engines. The approach presented here is altogether different: it makes use of snippets returned by Wikipedia or any encyclopedia such as the Encyclopaedia Britannica. The snippets are preprocessed for stop-word removal and stemming; for suffix removal, M. F. Porter's algorithm is used. Luhn's idea is used for the extraction of significant words from the preprocessed snippets. The similarity measures proposed here are based on five different association measures in information retrieval, namely simple matching and the Dice, Jaccard, Overlap, and Cosine coefficients. The performance of these methods is evaluated using the Miller and Charles benchmark dataset, on which they give a correlation value of 0.80, higher than some of the existing methods.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2014-Measuring%20Semantic%20Similarity%20Between%20Words%20Using%20Web%20Documents.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Search Technique Using Wildcards or Truncation: A Tolerance Rough Set Clustering Approach
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010413</link>
        <id>10.14569/IJACSA.2010.010413</id>
        <doi>10.14569/IJACSA.2010.010413</doi>
        <lastModDate>2012-07-01T09:58:28.1830000+00:00</lastModDate>
        
        <creator>Sandeep Kumar Satapathy</creator>
        
        <creator>Shruti Mishra</creator>
        
        <creator>Debahuti Mishra</creator>
        
        <subject>Clustering, Tolerance Rough Set, Search Engine, Wildcard Truncation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>Search engine technology plays an important role in web information retrieval. However, with the explosion of Internet information, traditional searching techniques cannot provide satisfactory results due to problems such as the huge number of result web pages and unintuitive ranking. Therefore, the reorganization and post-processing of web search results have been extensively studied to help users obtain useful information effectively. This paper has three parts. The first part is a review of how a keyword is expanded through truncation or wildcards (a little-known but powerful feature) using symbols such as * or !. The primary design goal is to restrict ourselves to stating the keyword with truncation or wildcard symbols rather than expanding it into sentential form. The second part gives a brief idea of the tolerance rough set approach to clustering search results. In this approach, a tolerance factor is used to cluster the information-rich search results and discard the rest; however, the discarded results, although below the tolerance level, may still contain some information regarding the query. The third part presents an algorithm based on the first two parts that solves this problem in the tolerance rough set approach. The main goal of this paper is to develop a search technique through which information retrieval is very fast, reducing the extra labor needed to expand the query.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2013-Search%20Technique%20Using%20Wildcards%20or%20Truncation%20A%20Tolerance%20Rough%20Set%20Clustering%20Approach.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Applying Intuitionistic Fuzzy Approach to Reduce Search Domain in an Accidental Case
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010412</link>
        <id>10.14569/IJACSA.2010.010412</id>
        <doi>10.14569/IJACSA.2010.010412</doi>
        <lastModDate>2012-07-01T09:58:21.8770000+00:00</lastModDate>
        
        <creator>Yasir Ahmad</creator>
        
        <creator>Sadia Husain</creator>
        
        <creator>Afshar Alam</creator>
        
        <subject>fuzzy sets, Intuitionistic fuzzy relation, Intuitionistic fuzzy database, Intuitionistic fuzzy tolerance</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>In this paper we use an intuitionistic fuzzy approach to minimize the search domain in an accidental case where the data collected for investigation are intuitionistic fuzzy in nature. To handle these types of imprecise information, we use an intuitionistic fuzzy tolerance relation and translate an intuitionistic fuzzy query to reach the conclusion. We present an example of a vehicle hit-and-run case where the accused fled the accident spot within seconds, leaving no clue behind.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2012-Applying%20Intuitionistic%20Fuzzy%20Approach%20to%20Reduce%20Search%20Domain%20in%20an%20Accidental%20Case.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Computing the Most Significant Solution from Pareto Front obtained in Multi-objective Evolutionary Algorithms</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010411</link>
        <id>10.14569/IJACSA.2010.010411</id>
        <doi>10.14569/IJACSA.2010.010411</doi>
        <lastModDate>2012-07-01T09:58:15.5400000+00:00</lastModDate>
        
        <creator>P.M Chaudhari</creator>
        
        <creator>Dr. R.V. Dharaskar</creator>
        
        <creator>Dr. V. M. Thakare</creator>
        
        <subject>Multiobjective, Pareto front, Clustering techniques</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>Problems with multiple objectives can be solved by using Pareto optimization techniques in evolutionary multi-objective optimization algorithms. Many applications involve multiple objective functions and the Pareto front may contain a very large number of points. Selecting a solution from such a large set is potentially intractable for a decision maker. Previous approaches to this problem aimed to find a representative subset of the solution set. Clustering techniques can be used to organize and classify the solutions. Implementation of this methodology for various applications and in a decision support system is also discussed.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%2011-Computing%20the%20Most%20Significant%20Solution%20from%20Pareto%20Front%20obtained%20in%20Multi-objective%20Evolutionary%20Algorithms.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Load based Performance Analysis of DSR, STAR &amp; AODV Adhoc Routing Protocol
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010410</link>
        <id>10.14569/IJACSA.2010.010410</id>
        <doi>10.14569/IJACSA.2010.010410</doi>
        <lastModDate>2012-07-01T09:58:09.2030000+00:00</lastModDate>
        
        <creator>Parma Nand</creator>
        
        <creator>Dr. S.C. Sharma</creator>
        
        <creator>Rani Astya</creator>
        
        <subject>Adhoc networks; wireless networks; CBR, routing protocols; route discovery; simulation; performance evaluation; MAC; IEEE 802.11; STAR; DSR; AODV.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>A wireless ad hoc network is comprised of nodes (static or mobile) with wireless radio interfaces. These nodes are connected among themselves without central infrastructure and are free to move. Communication is a multihop process because of the limited transmission range of energy-constrained wireless nodes. Thus, in such a multihop network each node (also acting as a router) is independent, self-reliant, and capable of routing packets over the dynamic network topology, so routing becomes a very important and basic operation of the ad hoc network. Many protocols have been reported in this field, but it is difficult to decide which one is best. In this paper, the table-driven protocol STAR and the on-demand routing protocols AODV and DSR, based on IEEE 802.11, are surveyed, and a summary of the characteristics of these routing protocols is presented. Their performance is analyzed on the throughput, jitter, packet delivery ratio, and end-to-end delay metrics by varying the CBR data traffic load, and their performance is then compared using the QualNet 5.0.2 network simulator.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper_10-Traffic_Load_based_Performance_Analysis.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Parallel Printer Port for Phase Measurement
</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010409</link>
        <id>10.14569/IJACSA.2010.010409</id>
        <doi>10.14569/IJACSA.2010.010409</doi>
        <lastModDate>2012-07-01T09:58:02.8930000+00:00</lastModDate>
        
        <creator>Dr R.Padma Suvarna</creator>
        
        <creator>Dr.M. Usha Rani</creator>
        
        <creator>P.M.Kalyani</creator>
        
        <creator>Dr.R.Seshadri</creator>
        
        <creator>Yaswanth Kumar.Avulapati</creator>
        
        <subject>Phase detector, voltage controlled oscillator, phase locked loop, parallel printer port</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>This white paper discusses the measurement of phase angle using a phase locked loop and the printer port. The phase detector compares the phase of a periodic input signal against the phase of the output of a voltage controlled oscillator and generates an average output voltage Vout which is linearly proportional to the phase difference, Δφ, between its two inputs. This output voltage is measured using the parallel printer port of a PC.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%209-PARALLEL%20PRINTER%20PORT%20FOR%20PHASE%20MEASUREMENT.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic Reduct and Its Properties In the Object-Oriented Rough Set Models</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010408</link>
        <id>10.14569/IJACSA.2010.010408</id>
        <doi>10.14569/IJACSA.2010.010408</doi>
        <lastModDate>2012-07-01T09:57:56.6130000+00:00</lastModDate>
        
        <creator>M Srivenkatesh</creator>
        
        <creator>P.V.G.D.Prasadreddy</creator>
        
        <creator>Y.Srinivas</creator>
        
        <subject>Rough Set, Dynamic Reduct, Feature Core, Indiscernibility Relations, Discernibility Matrices</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>This paper deals with a new type of reduct in the object-oriented rough set model, called the dynamic reduct. In object-oriented rough set models, objects are treated as instances of classes, and structural hierarchies among objects are illustrated based on the is-a and has-a relationships [6]. In this paper, we propose the dynamic reduct and the notion of core according to the dynamic reduct in the object-oriented rough set models. Various formal definitions of the core are described, and some properties of the dynamic core in the object-oriented rough set models are discussed.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%208-Dynamic%20reducts%20and%20its%20Properties%20in%20the%20Object-Oriented%20Rough%20Set%20Models.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Comparison between Ant Algorithm and Modified Ant Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010407</link>
        <id>10.14569/IJACSA.2010.010407</id>
        <doi>10.14569/IJACSA.2010.010407</doi>
        <lastModDate>2012-07-01T09:57:48.4800000+00:00</lastModDate>
        
        <creator>Shaveta Malik</creator>
        
        <subject>Ant Algorithm, Modified Ant Algorithm, Travelling Salesman Problem, Quadratic Problem, Ant System</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>This paper gives a brief overview of two meta-heuristic techniques that are used to find the best among the optimal solutions for complex problems such as the travelling salesman problem and the quadratic problem. Both of these techniques are based on the natural behaviour of ants. The ant algorithm finds good paths, but due to some of its shortcomings it is not able to yield the best of the good or optimal solutions; the modified ant algorithm, which is based on probability, finds the best among the optimal paths. We will also see that the modified ant algorithm can obtain a smaller number of hops, which helps us obtain the best solution to typical problems.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%207-Performance%20Comparison%20Between%20Ant%20Algorithm%20and%20Modified%20Ant%20Algorithm.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Performance Analysis of Indoor Positioning System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010406</link>
        <id>10.14569/IJACSA.2010.010406</id>
        <doi>10.14569/IJACSA.2010.010406</doi>
        <lastModDate>2012-07-01T09:57:41.4870000+00:00</lastModDate>
        
        <creator>Leena Arya</creator>
        
        <creator>S.C. Sharma</creator>
        
        <creator>Millie Pant</creator>
        
        <subject>WLAN, Access point, Path loss model, Qualnet 5.0.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>In the new era of wireless communication, the Wireless Local Area Network (WLAN) has emerged as one of the key players in the wireless communication family. It is now a trend to deploy WLANs across college and office campuses to increase productivity and the quality of goods. There are many obstacles when deploying a WLAN, which demands seamless indoor handover. The objective of the work reported here is to develop modeling tools, using the QualNet 5.0 simulation design tool, for performance optimization of WLAN access points. To predict the signal strength and interference in a WLAN system, a propagation model has been used.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%206-Performance%20Analysis%20of%20Indoor%20Positioning%20System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multipath Fading Channel Optimization for Wireless Medical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010405</link>
        <id>10.14569/IJACSA.2010.010405</id>
        <doi>10.14569/IJACSA.2010.010405</doi>
        <lastModDate>2012-07-01T09:57:35.2330000+00:00</lastModDate>
        
        <creator>A.K.M Fazlul Haque</creator>
        
        <creator>Md. Hanif Ali</creator>
        
        <creator>M Adnan Kiber</creator>
        
        <subject>ISI, ICI, OFDM, TTL, Channel Fading</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>In this paper, a new method is proposed to eliminate intersymbol interference (ISI) and interchannel interference (ICI) in discrete multitone/orthogonal frequency division multiplexing (DMT/OFDM) systems by considering the Time to Live (TTL) of multipath channel fading, especially for wireless medical applications. In this method, the existence time of the packet is taken as the maximum propagation time, and when the packet is sent to the receiver, the down count of the TTL starts. The existence of the packet in a network depends on the TTL in an Internet Protocol (IP) packet, which tells a network router whether or not the packet has been in the network too long and should be discarded. The proposed structure prevents ICI with a preprocessing method that utilizes a particular time equal to the continuation time of the packet, and removes ISI by canceling the replica at the receiver. The simulation results show that the proposed method reduces the BER/ISI under a multipath fading environment better than other existing systems.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%205-Multipath%20Fading%20Channel%20Optimization%20for%20Wireless%20Medical%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Design, Development and Simulations of MHD Equations with its Prototype Implementations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010404</link>
        <id>10.14569/IJACSA.2010.010404</id>
        <doi>10.14569/IJACSA.2010.010404</doi>
        <lastModDate>2012-07-01T09:57:31.8870000+00:00</lastModDate>
        
        <creator>Rajveer S Yaduvanshi</creator>
        
        <creator>Harish Parthasarathy</creator>
        
        <subject>Lorentz force, Navier Stokes Equation, Maxwell’s Equation, Iterative Solution, Prototype</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>The equations of motion of a conducting fluid in a magnetic field are formulated. These consist of three sets. The first is the mass conservation equation; the second is the Navier-Stokes equation, which is Newton’s second law taking into account the force of the magnetic field on moving charges. The electric field effects are neglected, as is usually done in MHD. The third set is Maxwell’s equations, especially the no-monopole condition, along with Ampere’s law with the current given by Ohm’s law in a moving frame (the frame in which the moving particles of fluid are at rest). The mass conservation equation, assuming the fluid to be incompressible, leads us to express the velocity field as the curl of a velocity vector potential. The curl of the Navier-Stokes equation leads to the elimination of pressure, thereby leaving an equation involving only the magnetic field and the fluid velocity field. The curl of the Ampere law equation leads us to another equation relating the magnetic field to the velocity field. A special case is considered in which the only non-vanishing components of the fluid velocity are the x and y components and the only non-vanishing component of the magnetic field is the z component. In this special case, the velocity vector potential has only one non-zero component, known as the stream function, which embeds the x and y velocity components. The MHD equations then reduce to three partial differential equations for the three functions in the 2D model. An application of the MHD system has been worked out and a prototype is presented.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%204-Design,%20Development%20and%20Simulations%20of%20MHD%20equations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Development of a Low-Cost GSM SMS-Based Humidity Remote Monitoring and Control system for Industrial Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010403</link>
        <id>10.14569/IJACSA.2010.010403</id>
        <doi>10.14569/IJACSA.2010.010403</doi>
        <lastModDate>2012-07-01T09:57:25.5830000+00:00</lastModDate>
        
        <creator>Dr B Ramamurthy</creator>
        
        <creator>S.Bhargavi</creator>
        
        <creator>Dr.R.ShashiKumar</creator>
        
        <subject>Automation, GSM, SMS, Humidity Sensor (HSM-20G), ARM Controller LPC2148, Remote Monitoring &amp; Control, AT Commands, Password Security, Mobile phone</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>The paper proposes a wireless solution, based on GSM (Global System for Mobile Communication) networks [1], for the monitoring and control of humidity in industries. This system provides an ideal solution for monitoring critical plant on unmanned sites. The system is wireless [2] and therefore more adaptable and cost-effective. Utilizing the HSM-20G humidity sensor, the ARM controller LPC2148 and GSM technology, this system offers a cost-effective solution for a wide range of remote monitoring and control applications. Historical and real-time data can be accessed worldwide using the GSM network. The system can also be configured to transmit data on alarm or at preset intervals to a mobile phone using SMS text messaging. The proposed system monitors and controls the humidity from the remote location, and whenever it crosses the set limit, the LPC2148 processor sends an SMS to the concerned plant authority's mobile phone via the GSM network. The concerned authority can control the system through his mobile phone by sending AT commands to the GSM modem and in turn to the processor. The system also provides password security against operator misuse/abuse. The system uses GSM technology [3], thus providing ubiquitous access to the system for security and automated monitoring and control of humidity.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%203-Development%20of%20a%20Low-Cost%20GSM%20SMS-Based%20Humidity%20Remote%20Monitoring%20and%20Control%20system%20for%20Industrial%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Organizational and collaborative knowledge management: a Virtual HRD model based on Web2.0</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010402</link>
        <id>10.14569/IJACSA.2010.010402</id>
        <doi>10.14569/IJACSA.2010.010402</doi>
        <lastModDate>2012-07-01T09:57:18.2870000+00:00</lastModDate>
        
        <creator>Musadaq Hanandi</creator>
        
        <creator>Michele Grimaldi</creator>
        
        <subject>Virtual Human Resource Development, Knowledge Management, Human Resource Management, Web2.0, Organizational model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>Knowledge development and utilization can be facilitated by human resource practices. At the organizational level, competitive advantage depends upon the firm's utilization of existing knowledge and its ability to generate new knowledge more efficiently. At the individual level, increased delegation of responsibility and freedom of creativity may better allow the discovery and utilization of local and dispersed knowledge in the organization. This paper aims at introducing an innovative organizational model to support enterprises, international companies and governments in developing their human resources, through virtual human resource development, as a tool for knowledge capturing and sharing inside the organization. The VHRD organizational model allows different actors (top management, employees and external experts) to interact and participate in the learning process by providing non-threatening self-evaluation and individualized feedback. In this way, the model, which is based on possible patterns and rules from existing learning systems, Web 2.0 and a homogeneous set of integrated systems and technologies, can support the enterprise human resource department. In addition, the paper presents an evaluation method to assess the knowledge management results inside the organization by connecting the financial impacts with the strategy map.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%202-Organizational%20and%20collaborative%20knowledge%20management%20a%20Virtual%20HRD%20model%20based%20on%20Web2.0.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving the Technical Aspects of Software Testing in Enterprises</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010401</link>
        <id>10.14569/IJACSA.2010.010401</id>
        <doi>10.14569/IJACSA.2010.010401</doi>
        <lastModDate>2012-07-01T09:57:11.9870000+00:00</lastModDate>
        
        <creator>Tim A Majchrzak</creator>
        
        <subject>Software testing, testing, software quality, design science, IT alignment, process optimization, technical aspects</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(4), 2010</description>
        <description>Many software development projects fail due to quality problems. Software testing enables the creation of high-quality software products. Since it is a cumbersome and expensive task, and often hard to manage, both its technical background and its organizational implementation have to be well founded. We worked with regional companies that develop software in order to learn about their distinct weaknesses and strengths with regard to testing. Analyzing and comparing the strengths, we derived best practices. In this paper we explain the project's background and sketch the design science research methodology used. We then introduce a graphical categorization framework that helps companies judge the applicability of recommendations. Eventually, we present details on five recommendations for technical aspects of testing. For each recommendation we give implementation advice based on the categorization framework.</description>
        <description>http://thesai.org/Downloads/Volume1No4/Paper%201-%20Improving%20the%20Technical%20Aspects%20of%20Software%20Testing%20in%20Enterprises.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Application of Locality Preserving Projections in Face Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010313</link>
        <id>10.14569/IJACSA.2010.010313</id>
        <doi>10.14569/IJACSA.2010.010313</doi>
        <lastModDate>2012-07-01T09:57:05.7030000+00:00</lastModDate>
        
        <creator>Shermina J</creator>
        
        <subject>Defect Prevention, ODC, Defect Trigger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>Face recognition technology has evolved as an enchanting solution to address contemporary needs for the identification and verification of identity claims. By advancing feature extraction methods and dimensionality reduction techniques in pattern recognition, a number of face recognition systems have been developed with distinct degrees of success. Locality preserving projection (LPP) is a recently proposed method for unsupervised linear dimensionality reduction. LPP preserves the local structure of the face image space, which is usually more significant than the global structure preserved by principal component analysis (PCA) and linear discriminant analysis (LDA). This paper focuses on a systematic analysis of locality preserving projections and the application of LPP in combination with an existing technique. This combined approach of LPP through MPCA can preserve both the global and the local structure of the face image, which proves very effective. The proposed approach is tested using the AT &amp; T face database. Experimental results show significant improvements in face recognition performance in comparison with some previous methods.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%2013-Application%20of%20Locality%20Preserving%20Projections%20in%20Face%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Enhanced Segmentation Procedure for Intima-Adventitial Layers of Common Carotid Artery</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010312</link>
        <id>10.14569/IJACSA.2010.010312</id>
        <doi>10.14569/IJACSA.2010.010312</doi>
        <lastModDate>2012-07-01T09:56:59.7570000+00:00</lastModDate>
        
        <creator>V Savithri</creator>
        
        <creator>S.Purushothaman</creator>
        
        <subject>Artery, boundary detection, imaging, Ultrasonic, parallel programming</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>This paper presents an enhanced segmentation technique for use on noisy B-mode ultrasound images of the carotid artery. The method is based on image enhancement, edge detection and morphological operations for boundary detection. This procedure may simplify the practitioner's job of analyzing the accuracy and variability of segmentation results. Possible plaque regions are also highlighted. A thorough evaluation of the method in the clinical environment shows that inter-observer variability is evidently decreased, and so is the overall analysis time. The results demonstrate that it has the potential to perform qualitatively better than existing methods for intima and adventitial layer detection on B-mode images.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%2012-Enhanced%20Segmentation%20Procedure%20for%20Intima%20Adventitial%20Layers%20of%20Common%20Carotid.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Modelling and Analysing of Software Defect Prevention Using ODC</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010311</link>
        <id>10.14569/IJACSA.2010.010311</id>
        <doi>10.14569/IJACSA.2010.010311</doi>
        <lastModDate>2012-07-01T09:56:52.2630000+00:00</lastModDate>
        
        <creator>Prakriti Trivedi</creator>
        
        <creator>Som Pachori</creator>
        
        <subject>Defect Prevention, ODC, Defect Trigger</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>As time passes, software complexity increases, and software reliability and quality are affected as a result. To measure software reliability and quality, various defect measurement and defect tracing mechanisms are used. Software defect prevention work typically focuses on individual inspection and testing techniques. ODC is a mechanism by which we exploit the software defects that occur during the software development life cycle. Orthogonal defect classification is a concept that enables developers, quality managers and project managers to evaluate the effectiveness and correctness of the software.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%2011-Modelling%20and%20Analysing%20of%20Software%20Defect.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Emerging Trends of Ubiquitous Computing</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010310</link>
        <id>10.14569/IJACSA.2010.010310</id>
        <doi>10.14569/IJACSA.2010.010310</doi>
        <lastModDate>2012-07-01T09:56:48.9100000+00:00</lastModDate>
        
        <creator>Prakriti Trivedi</creator>
        
        <creator>Kamal Kishore Sagar</creator>
        
        <creator>Vernon</creator>
        
        <subject>Braille, cell, vibration, dots, motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>Ubiquitous computing is a method of enhancing computer use by making many computers available throughout the physical environment while making them effectively invisible to the user. The background network supporting ubiquitous computing is the ubiquitous network, by which users can enjoy network services whenever and wherever they want (home, office, outdoors). In this paper, issues related to the ubiquitous network, smart objects and wide-area ubiquitous networks are discussed. We also discuss various elements used in ubiquitous computing along with the challenges of this computing environment.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%2010-Emerging_Trends_of_Ubiquitous_Computing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Test-Bed for Emergency Management Simulations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010309</link>
        <id>10.14569/IJACSA.2010.010309</id>
        <doi>10.14569/IJACSA.2010.010309</doi>
        <lastModDate>2012-07-01T09:56:42.0700000+00:00</lastModDate>
        
        <creator>Anu Vaidyanathan</creator>
        
        <subject>test-bed, Emergency Management, Live Call Records, PCMD, Proactive Crowd-Sourcing, Agents</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>We present a test-bed for emergency management simulations by contrasting two prototypes we have built, CAVIAR and Reverse 111. We outline the desirable design principles that guide our choices for simulating emergencies and implement these ideas in a modular system, which utilizes proactive crowd-sourcing to enable emergency response centers to contact civilians co-located with an emergency and obtain more information about the events. This aspect of proactive crowd-sourcing lets emergency response centers take into account that an emergency situation is inherently dynamic and that initial assumptions made while deploying resources may not hold as the emergency unfolds. A number of independent entities, governmental and non-governmental, are known to interact while mitigating emergencies. Our test-bed utilizes a number of agents to simulate various resource-sharing policies amongst different administrative domains and non-profit civilian organizations that might pool their resources at the time of an emergency. A common problem amongst first responders is the lack of interoperability amongst their devices. In our test-bed, we integrate live caller data obtained from traces generated by Telecom New Zealand, which tracks cell-phone users and their voice and data calls across the network, to identify co-located crowds. The test-bed has five important components, including means to select and simulate events, resources and crowds, and additionally provides a visual interface, as part of a massively multiplayer online game, to simulate emergencies in any part of the world. We also present our initial evaluation of some resource-sharing policies in the intelligent agents that are part of our test-bed.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%209-A%20Test-Bed%20for%20Emergency%20Management%20Simulation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>An Electronic Design of a Low Cost BRAILLE HANDGLOVE</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010308</link>
        <id>10.14569/IJACSA.2010.010308</id>
        <doi>10.14569/IJACSA.2010.010308</doi>
        <lastModDate>2012-07-01T09:56:33.5700000+00:00</lastModDate>
        
        <creator>M Rajasenathipathi</creator>
        
        <creator>M.Arthanari</creator>
        
        <creator>M.Sivakumar</creator>
        
        <subject>Braille, cell, vibration, dots, motor</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>This paper documents a new design for a Braille hand glove comprising mostly electrical components. The design aims to produce a product that generates vibrations at six positions on a blind person's right hand. A low-cost and robust design will provide the blind with an affordable and reliable tool, and it also introduces a new technique and communication method for blind persons.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%208-AN%20ELECTRONIC%20DESIGN%20OF%20A%20%20LOW%20%20COST%20%20BRAILLE%20%20%20HANDGLOVE.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>High Quality Integrated Data Reconstruction for Medical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010307</link>
        <id>10.14569/IJACSA.2010.010307</id>
        <doi>10.14569/IJACSA.2010.010307</doi>
        <lastModDate>2012-07-01T09:56:27.2800000+00:00</lastModDate>
        
        <creator>A.K.M Fazlul Haque</creator>
        
        <creator>Md. Hanif Ali</creator>
        
        <creator>M Adnan Kiber</creator>
        
        <subject>FFT, IFFT, ECG, Baseband, Reconstruction, Noise, FDA tool</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>In this paper, the implementation of a high-quality integrated data reconstruction model and algorithm is proposed, especially for medical applications. Patients' information is acquired at the sending end and reconstructed at the receiving end using a technique that ensures high quality in the signal reconstruction process. A method is proposed in which data such as ECG, audio and other patients' vital parameters are acquired in the time domain and operated on in the frequency domain. The data are then reconstructed in the time domain from the frequency domain where high-quality data are required. In this particular case, high quality ensures a distortionless and noiseless recovered baseband signal. This usually requires the application of the Fast Fourier Transform (FFT) and the Inverse Fast Fourier Transform (IFFT) to return the data to the spatial domain. The simulation is performed using Matlab. The composite baseband signal has been generated by developing a program as well as by acquiring it into the workspace. The feature of the method is that it can achieve high-quality integrated data reconstruction and can be associated easily with the spatial domain.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%207-High%20Quality%20Integrated%20Data%20Reconstruction%20for%20Medical%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improved Spectrogram Analysis for ECG Signal in Emergency Medical Applications</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010306</link>
        <id>10.14569/IJACSA.2010.010306</id>
        <doi>10.14569/IJACSA.2010.010306</doi>
        <lastModDate>2012-07-01T09:56:23.9230000+00:00</lastModDate>
        
        <creator>A.K.M Fazlul Haque</creator>
        
        <creator>Md. Hanif Ali</creator>
        
        <creator>M Adnan Kiber</creator>
        
        <subject>Spectrogram, ECG, PSD, Periodogram, Time-varying signal, FFT.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>This paper presents the spectrogram effect of biomedical signals, especially ECG. A simulation module was developed for the spectrogram implementation. A spectrogram of the ECG signal, together with its power spectral density, has been observed through off-line evaluation. The ECG contains very important clinical information about the cardiac activity of the heart. Small variations in the ECG signal with time-varying morphological characteristics need to be extracted by signal processing methods because they are not visible in the graphical ECG signal. Small variations of simulated normal and noise-corrupted ECG signals have been extracted using the spectrogram. The spectrogram was found to be more precise than the conventional FFT in finding small abnormalities in the ECG signal, as it forms a time-frequency representation for processing time-varying signals. Using the presented method, high-resolution time-varying spectrum estimation with no lag error can be produced. A further benefit of the method is the straightforward procedure for evaluating the statistics of the spectrum estimation.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%206-Improved%20Spectrogram%20Analysis%20for%20ECG%20Signal%20in%20Emergency%20Medical%20Applications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Council-based Distributed Key Management Scheme for MANETs</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010305</link>
        <id>10.14569/IJACSA.2010.010305</id>
        <doi>10.14569/IJACSA.2010.010305</doi>
        <lastModDate>2012-07-01T09:56:17.6100000+00:00</lastModDate>
        
        <creator>Abdelmajid HAJAMI</creator>
        
        <creator>Mohammed ELKOUTBI</creator>
        
        <subject>Key Management; MANET; Clustering</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>Mobile ad hoc networks (MANETs) have been proposed as an extremely flexible technology for establishing wireless communications. In comparison with fixed networks, some new security issues have arisen with the introduction of MANETs. Secure routing, in particular, is an important and complicated issue. Clustering is commonly used in order to limit the amount of secure routing information. In this work, we propose an enhanced solution for ad hoc key management based on a clustered architecture. This solution uses clusters as a framework to manage cryptographic keys in a distributed way. This paper sheds light on the key management algorithm for the OLSR protocol standard. Our algorithm takes node mobility into account and yields major improvements in the number of elected cluster heads that create a PKI council. Our objective is to distribute the certification authority functions to a reduced set of less mobile cluster heads that will serve for key exchange.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%205-A%20Council-based%20Distributed%20Key%20Management.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A threat risk modeling framework for Geospatial Weather Information System (GWIS) a DREAD based study</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010304</link>
        <id>10.14569/IJACSA.2010.010304</id>
        <doi>10.14569/IJACSA.2010.010304</doi>
        <lastModDate>2012-07-01T09:56:09.3870000+00:00</lastModDate>
        
        <creator>K Ram Mohan Rao</creator>
        
        <creator>Durgesh Pant</creator>
        
        <subject>Rapid Application Development, Risk rating, Security assessment.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>Over the years, the focus has been on protecting networks, hosts, databases and standard applications from internal and external threats. The Rapid Application Development (RAD) process makes the web application development cycle extremely short and makes it difficult to eliminate vulnerabilities. Here we study a web application risk assessment technique called threat risk modeling to improve the security of the application. We implement our proposed application risk assessment mechanism using Microsoft’s DREAD threat risk model to evaluate the application security risk against vulnerability parameters. The study led to quantifying different levels of risk for the Geospatial Weather Information System (GWIS) using the DREAD model.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%204-A%20threat%20risk%20modeling%20framework%20for%20Geospatial%20Weather%20Information%20System%20(GWIS)%20a%20DREAD%20based%20study.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Loss Reduction in Distribution System Using Fuzzy Techniques</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010303</link>
        <id>10.14569/IJACSA.2010.010303</id>
        <doi>10.14569/IJACSA.2010.010303</doi>
        <lastModDate>2012-07-01T09:56:03.0700000+00:00</lastModDate>
        
        <creator>Sheeraz kirmani</creator>
        
        <creator>Md. Farrukh Rahman</creator>
        
        <creator>Chakresh Kumar</creator>
        
        <subject>Capacitor placement, Distribution systems, Fuzzy expert system.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>In this paper, a novel approach using approximate reasoning is used to determine suitable candidate nodes in a distribution system for capacitor placement. Voltage and power loss reduction indices of distribution system nodes are modeled by fuzzy membership functions. A fuzzy expert system (FES) containing a set of heuristic rules is then used to determine the capacitor placement suitability of each node in the distribution system. Capacitors are placed on the nodes with the highest suitability. A new design methodology for determining the size, location, type and number of capacitors to be placed on a radial distribution system is presented. The objective is to minimize the peak power losses and the energy losses in the distribution system while considering the capacitor cost. Test results are presented along with a discussion of the algorithm.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%203-Loss%20Reduction%20in%20Distribution%20System%20Using.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Multiphase Scalable Grid Scheduler Based on Multi-QoS Using Min-Min Heuristic</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010302</link>
        <id>10.14569/IJACSA.2010.010302</id>
        <doi>10.14569/IJACSA.2010.010302</doi>
        <lastModDate>2012-07-01T09:55:55.5270000+00:00</lastModDate>
        
        <creator>Nawfal A Mehdi</creator>
        
        <creator>Ali Mamat</creator>
        
        <creator>Hamidah Ibrahim</creator>
        
        <creator>Shamala A/P K</creator>
        
        <subject>Multi-phase; QoS; Grid Scheduling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>In scheduling, the main factor that affects searching speed and mapping performance is the number of resources, i.e., the size of the search space. In grid computing, the scheduler's performance plays an essential role in the overall performance, so there is an obvious need for a scalable scheduler that can manage the growth in resources. Under the assumption that each resource has its own specifications and each job has its own requirements, searching the whole search space (all the resources) can waste plenty of scheduling time. In this paper, we propose a two-phase scheduler that uses the min-min algorithm to speed up the mapping time with almost the same efficiency. The scheduler is also based on the assumption that the resources in grid computing can be classified into clusters. The scheduler first tries to schedule the jobs to the suitable cluster (the first phase), and then each cluster schedules its incoming jobs to the suitable resources (the second phase). The scheduler is based on multidimensional QoS to enhance the mapping as much as possible. The simulation results show that the two-phase strategy can support a scalable scheduler.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%202-Multiphase%20Scalable%20Grid%20Scheduler%20Based%20on%20Multi-QoS%20Using%20Min-Min%20Heuristic.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Comparative Study of Gaussian Mixture Model and Radial Basis Function for Voice Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010301</link>
        <id>10.14569/IJACSA.2010.010301</id>
        <doi>10.14569/IJACSA.2010.010301</doi>
        <lastModDate>2012-07-01T09:55:49.1930000+00:00</lastModDate>
        
        <creator>Fatai Adesina Anifowose</creator>
        
        <subject>Gaussian Mixture Model, Radial Basis Function, Artificial Intelligence, Computational Intelligence, Biometrics, Optimal Parameters, Voice Pattern Recognition, DTREG</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(3), 2010</description>
        <description>A comparative study of the application of the Gaussian Mixture Model (GMM) and the Radial Basis Function (RBF) in biometric voice recognition has been carried out and is presented. The application of machine learning techniques to biometric authentication and recognition problems has gained widespread acceptance. In this research, a GMM model was trained, using the Expectation Maximization (EM) algorithm, on a dataset containing 10 classes of vowels, and the model was used to predict the appropriate classes on a validation dataset. For experimental validity, the model was compared to the performance of two different versions of the RBF model using the same learning and validation datasets. The results showed very close recognition accuracy between the GMM and the standard RBF model, with the GMM performing better than the standard RBF by less than 1%; both models outperformed similar models reported in the literature. The DTREG version of RBF outperformed the other two models by producing 94.8% recognition accuracy. In terms of recognition time, the standard RBF was found to be the fastest among the three models.</description>
        <description>http://thesai.org/Downloads/Volume1No3/Paper%201-A%20Comparative%20Study%20of%20Gaussian%20Mixture%20Model%20and%20Radial%20Basis%20Function%20for%20Voice%20Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>RSS-Crawler Enhancement for Blogosphere-Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010209</link>
        <id>10.14569/IJACSA.2010.010209</id>
        <doi>10.14569/IJACSA.2010.010209</doi>
        <lastModDate>2012-07-01T09:55:42.8430000+00:00</lastModDate>
        
        <creator>Justus Bross</creator>
        
        <creator>Patrick Hennig</creator>
        
        <creator>Philipp Berger</creator>
        
        <creator>Christoph Meinel</creator>
        
        <subject>weblogs, rss-feeds, data mining, knowledge discovery, blogosphere, crawler, information extraction</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>The massive adoption of social media has provided new ways for individuals to express their opinions online. The blogosphere, an inherent part of this trend, contains a vast array of information about a variety of topics. It is a huge think tank that creates an enormous and ever-changing archive of open source intelligence. Mining and modeling this vast pool of data to extract, exploit and describe meaningful knowledge, in order to leverage the structures and dynamics of emerging networks within the blogosphere, is the higher-level aim of the research presented here. Our proprietary development of a tailor-made feed-crawler framework meets exactly this need. While the main concept, as well as the basic techniques and implementation details of the crawler, have already been dealt with in earlier publications, this paper focuses on several recent optimization efforts made on the crawler framework that proved to be crucial for the performance of the overall framework.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_9-RSS-Crawler_Enhancement_for_Blogosphere-Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>On-line Rotation Invariant Estimation and Recognition</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010208</link>
        <id>10.14569/IJACSA.2010.010208</id>
        <doi>10.14569/IJACSA.2010.010208</doi>
        <lastModDate>2012-07-01T09:55:36.5300000+00:00</lastModDate>
        
        <creator>R Bremananth</creator>
        
        <creator>Andy W. H. Khong</creator>
        
        <creator>M. Sankari </creator>
        
        <subject>Feature extraction; Line integrals; Orientation-detection; Optimized Gabor filters; Rotation-invariant recognition; Radon transform.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>Rotation invariant estimation is an important and computationally difficult process in real-time human-computer interaction. We propose new methodologies for on-line image rotation angle estimation, correction and feature extraction based on line integrals. We show that a set of projection data of line integrals from single (fan-arc and fan-beam) or multiple point sources (Radon transform) can be employed for orientation estimation. After estimating the orientation, image angle variations are corrected to the principal direction. We further combine a Boltzmann machine and k-means clustering to obtain parameter-optimized Gabor filters, which are used to extract a non-redundant, compact set of features for classification. The proposed fan-line, fan-arc and Radon transform methods are compared for real-time image orientation detection. Classification accuracy is evaluated with several classifiers, viz. back propagation, Hamming neural network, Euclidean-norm distance, and k-nearest neighbors. Experiments were performed on a database of 535 images consisting of license plate and iris images, and the viability of the suggested algorithms has been tested with the different classifiers. Thus, this paper proposes an efficient rotation invariant recognition method for on-line image recognition.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_8-On-line_Rotation_Invariant_Estimation_and_Recognition.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Real-time Facial Emotion Detection using Support Vector Machines</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010207</link>
        <id>10.14569/IJACSA.2010.010207</id>
        <doi>10.14569/IJACSA.2010.010207</doi>
        <lastModDate>2012-07-01T09:55:30.2170000+00:00</lastModDate>
        
        <creator>Anvita Bajpai</creator>
        
        <subject>Emotion-Detection; Facial-expressions; libsvm; Support Vector Machines; Facial Action Coding System (FACS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>There has been continuous research in the field of emotion detection from the faces of biological species over the last few decades. This was further fuelled by the rise of artificial intelligence, which has added a new paradigm to this ongoing research. This paper discusses the role of one artificial intelligence technique, Support Vector Machines, in efficient emotion detection. The study comprised experiments conducted on the Java platform using libsvm. The coordinates of vital points of a face were used for training the SVM network, which finally led to proper identification of various emotions appearing on a human face.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_7-Real-time_Facial_Emotion_Detection_using_Support_Vector_Machines.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PATTERN BASED SUBSPACE CLUSTERING: A REVIEW</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010206</link>
        <id>10.14569/IJACSA.2010.010206</id>
        <doi>10.14569/IJACSA.2010.010206</doi>
        <lastModDate>2012-07-01T09:55:23.8530000+00:00</lastModDate>
        
        <creator>Debahuti Mishra</creator>
        
        <creator>Shruti Mishra</creator>
        
        <creator>Sandeep Satapathy</creator>
        
        <creator>Amiya Kumar Rath</creator>
        
        <creator>Milu Acharya</creator>
        
        <subject>Subspace clustering; Biclustering; p-cluster; z-cluster</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>The task of biclustering or subspace clustering is a data mining technique that allows simultaneous clustering of the rows and columns of a matrix. Though the definition of similarity varies from one biclustering model to another, in most of these models the concept of similarity is based on metrics such as the Manhattan distance, the Euclidean distance or other Lp distances. In other words, similar objects must have close values in at least a set of dimensions. Pattern-based clustering is important in many applications, such as DNA microarray data analysis, automatic recommendation systems and target marketing systems. However, pattern-based clustering in large databases is challenging. On the one hand, there can be a huge number of clusters, many of which can be redundant, which makes pattern-based clustering ineffective. On the other hand, previously proposed methods may not be efficient or scalable when mining large databases. The objective of this paper is to perform a comparative study of subspace clustering algorithms in terms of efficiency, accuracy and time complexity.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_6-PATTERN_BASED_SUBSPACE_CLUSTERING.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Dynamic path restoration for new call blocking versus handoff call blocking in heterogeneous network using buffers for QoS</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010205</link>
        <id>10.14569/IJACSA.2010.010205</id>
        <doi>10.14569/IJACSA.2010.010205</doi>
        <lastModDate>2012-07-01T09:55:16.9670000+00:00</lastModDate>
        
        <creator>Ajai Kumar Daniel</creator>
        
        <creator>R Singh</creator>
        
        <creator>J P Saini</creator>
        
        <subject>Handoff call, New call buffer, Congestion, Heterogeneous network, Quality of Service (QoS)</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>An ad hoc network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any existing heterogeneous network infrastructure or centralized administration. Routing protocols used inside ad hoc networks must be prepared to adjust automatically to an environment that can vary between the extremes of high mobility with low bandwidth and low mobility with high bandwidth. The tremendous growth of wireless networks creates the need to support the different multimedia applications (such as voice, audio, video and data) available over the network. This application demand could lead to congestion if the network has to maintain such high resources for the quality of service (QoS) requirements of the applications. In this paper, a new protocol is proposed for wireless mobile heterogeneous networks based on the use of path, traffic and bandwidth resource information at each node for the allocation of route paths and the handling of the handoff problem. The proposed protocol uses two buffers, one for new calls and another for handoff calls: if no channel is available, calls are stored in the buffer instead of being dropped (rejected), and whenever a channel becomes free it is allocated for communication. The protocol improved the performance of the network, especially through the effect of the dynamic threshold on the sizes of the new call buffer and the handoff call buffer. In a link failure situation, we provide another path for communication by applying a restoration mechanism for link survivability, improving the QoS of the mobile network.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_5-Dynamic_path_restoration_for_new_call_blocking_versus_handoff_call_blocking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A Novel and Efficient countermeasure against Power Analysis Attacks using Elliptic Curve Cryptography</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010204</link>
        <id>10.14569/IJACSA.2010.010204</id>
        <doi>10.14569/IJACSA.2010.010204</doi>
        <lastModDate>2012-07-01T09:55:08.9600000+00:00</lastModDate>
        
        <creator>M Prabu</creator>
        
        <creator>R.Shanmugalakshmi</creator>
        
        <subject>Simple Power Analysis, Differential Power Analysis, Security Analysis Model, Algorithm design, Side Channel Attacks</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>Recently, cryptographic processors have been shown to leak information over their communication channels, and overcoming this spreading of secure data is a major chore. In this paper, a new security analysis model for power analysis is constructed using Elliptic Curve Cryptography, and a number of side channel attacks and their countermeasures are explained in detail. An algorithm design based on power analysis is also described, making our countermeasure more secure against simple power analysis, differential power analysis and other attacks. A theoretical analysis based on these results is presented, showing how the algorithm design should vary to confront side channel attacks.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_4-A_Novel_and_Efficient_countermeasure_against_Power_Analysis_Attacks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Detection and Measurement of magnetic data for short length wireless communication using FT-IR</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010203</link>
        <id>10.14569/IJACSA.2010.010203</id>
        <doi>10.14569/IJACSA.2010.010203</doi>
        <lastModDate>2012-07-01T09:55:02.6000000+00:00</lastModDate>
        
        <creator>Abu Saleh</creator>
        
        <subject>FT; FT-IR; Spectrum; prism.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>Infrared (IR) radiation is a type of electromagnetic radiation. Infrared “light” has a longer wavelength than visible light. Red light has a longer wavelength than other colors of light, and infrared has even longer waves than red; so infrared is a sort of “redder-than-red” or “beyond red” light. Infrared radiation lies between visible light and radio waves on the electromagnetic spectrum. In this paper, infrared radiation is used for detecting magnetic data for high-speed, short-range wireless communication, though infrared radiation may be used in various other ways. This paper evaluates the performance of the FT-IR technique for multiplexing the transmissions of different users viewed at the same time.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_3-Detection_and_Measurement_of_magnetic_data_for_short_length_wireless_communication.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Personalized Recommendation Technique Based on the Modified TOPSIS Method</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010202</link>
        <id>10.14569/IJACSA.2010.010202</id>
        <doi>10.14569/IJACSA.2010.010202</doi>
        <lastModDate>2012-07-01T09:54:55.0130000+00:00</lastModDate>
        
        <creator>Guan-Dao Yang </creator>
        
        <creator>Lu Sun</creator>
        
        <subject>Personalized Recommendation Technique; Improved Gray Correlation Analysis; Modified TOPSIS Method.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>Personalized recommendation services, which help users target interesting information within an excessive information set, have drawn wide attention. In this paper, we first propose a new method, named the Modified TOPSIS Method, utilizing the Improved Gray Correlation Analysis Method. Then, we present a new personalized recommendation technique based on the Modified TOPSIS Method. Finally, verification using Spearman’s Rank Correlation Coefficient demonstrates that our new personalized recommendation technique is efficient.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_2-A_New_Personalized_Recommendation_Technique_Based_on_the_Modified_TOPSIS_Method.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>The effect of Knowledge Characteristics in students performances</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010201</link>
        <id>10.14569/IJACSA.2010.010201</id>
        <doi>10.14569/IJACSA.2010.010201</doi>
        <lastModDate>2012-07-01T09:54:48.6870000+00:00</lastModDate>
        
        <creator>Asmahan M. Altaher</creator>
        
        <subject>Codifiability, Explicitness, Availability, Teachability, student performance.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(2), 2010</description>
        <description>Identifying knowledge characteristics is an essential step in leveraging the value of knowledge in a university. Shared documents and contributed knowledge may not be useful without the context provided by experience. This paper focuses on the characteristics of knowledge at the Applied Science Private University and their effect on students’ performance, with the aim of examining the nature of knowledge and the quality of material. A questionnaire was designed and sent to MIS students at the university in order to improve the context of knowledge and facilitate knowledge usage, so as to improve the students’ knowledge level. The results lead to the recommendation that the university should understand the knowledge characteristics and the potential techniques that support knowledge sharing. In addition, the university should know which types of knowledge can be articulated and which can be taught to individuals through training, practice or apprenticeship, in order to improve student performance.</description>
        <description>http://thesai.org/Downloads/Volume1No2/Paper_1-The_effect_of_Knowledge_Characteristics_in_students_performances.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>[Survey Report] How WiMAX Will Deploy In India</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010109</link>
        <id>10.14569/IJACSA.2010.010109</id>
        <doi>10.14569/IJACSA.2010.010109</doi>
        <lastModDate>2012-07-01T09:54:42.3330000+00:00</lastModDate>
        
        <creator>Rakesh Kumar Jha</creator>
        
        <creator>Dr Upena D Dalal</creator>
        
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_9-How_WiMAX_will_deploy_in_India.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>PC And Speech Recognition Based Electrical Device Control</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010108</link>
        <id>10.14569/IJACSA.2010.010108</id>
        <doi>10.14569/IJACSA.2010.010108</doi>
        <lastModDate>2012-07-01T09:54:34.9430000+00:00</lastModDate>
        
        <creator>Sanchit Dua</creator>
        
        <subject>Electronic device controller, Diode, Transformer</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>This paper describes the controlling of various appliances using a PC. It consists of a circuit for using the printer port of a PC for control applications, using software and some interface hardware. The interface circuit, along with the given software, can be used with the printer port of any PC for controlling up to eight pieces of equipment. The parallel port is a simple and inexpensive tool for building computer-controlled devices and projects; its simplicity and ease of programming make it popular.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_8-PC_And_Speech_Recognition_Based_Electrical_Device_Control.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Radio Frequency Identification Based Library Management System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010107</link>
        <id>10.14569/IJACSA.2010.010107</id>
        <doi>10.14569/IJACSA.2010.010107</doi>
        <lastModDate>2012-07-01T09:54:28.5970000+00:00</lastModDate>
        
        <creator>Priyanka Grover</creator>
        
        <creator>Anshul Ahuja</creator>
        
        <subject>Radio frequency identification technology; RFID Readers; RFID Tags; Inductive Coupling</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>Radio frequency identification (RFID) is a term that is used to describe a system that transfers the identity of an object or person wirelessly, using radio waves. It falls under the category of automatic identification technologies. This paper proposes RFID Based Library Management System that would allow fast transaction flow and will make easy to handle the issue and return of books from the library without much intervention of manual book keeping. The proposed system is based on RFID readers and passive RFID tags that are able to electronically store information that can be read with the help of the RFID reader. This system would be able to issue and return books via RFID tags and also calculates the corresponding fine associated with the time period of the absence of the book from the library database.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_7-Radio_Frequency_Identification_Based_Library_Management_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Iris Recognition System</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010106</link>
        <id>10.14569/IJACSA.2010.010106</id>
        <doi>10.14569/IJACSA.2010.010106</doi>
        <lastModDate>2012-07-01T09:54:22.3030000+00:00</lastModDate>
        
        <creator>Neha Kak</creator>
        
        <creator>Rishi Gupta</creator>
        
        <creator>Sanchit Mahajan</creator>
        
        <subject>iris recognition, biometric identification, pattern recognition, segmentation</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>In a biometric system a person is identified automatically by processing the unique features possessed by that individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. In iris recognition a person is identified by the iris, a part of the eye, using pattern matching or image processing based on concepts from neural networks. The aim is to identify a person in real time, with high efficiency and accuracy, by analysing the random patterns visible within the iris of an eye from some distance, by implementing a modified Canny edge detector algorithm. The major applications of this technology so far have been: substituting for passports (automated international border crossing); aviation security and controlling access to restricted areas at airports; and database access and computer login.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_6-Iris_Recognition_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Network Anomaly Detection via Clustering and Custom Kernel in MSVM</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010105</link>
        <id>10.14569/IJACSA.2010.010105</id>
        <doi>10.14569/IJACSA.2010.010105</doi>
        <lastModDate>2012-07-01T09:54:15.9770000+00:00</lastModDate>
        
        <creator>Arvind Mewada</creator>
        
        <creator>Shamila Khan</creator>
        
        <creator>Prafful Gedam</creator>
        
        <subject>IDS; K-means; MSVM; RBF; KDD99; Custom Kernel</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>Multiclass Support Vector Machines (MSVM) have been applied to build classifiers that can help with network intrusion detection. Despite their high generalization accuracy, the learning time of MSVM classifiers is still a concern when they are applied in network intrusion detection systems. This paper speeds up the learning time of MSVM classifiers by reducing the number of support vectors. In this study, we propose the KMSVM method, which combines the K-means clustering technique with a custom kernel in MSVM. Experiments were performed on the KDD99 dataset using the KMSVM method, and the results show that it can speed up the learning time of classifiers by reducing the number of support vectors while improving the detection rate on the testing dataset.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_5-Network_Anomaly_Detection_via_Clustering_and_Custom_Kernel_in_MSVM.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Evaluating the impact of information systems on end user performance: A proposed model</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010104</link>
        <id>10.14569/IJACSA.2010.010104</id>
        <doi>10.14569/IJACSA.2010.010104</doi>
        <lastModDate>2012-07-01T09:54:09.6330000+00:00</lastModDate>
        
        <creator>Ahed Abugabah</creator>
        
        <creator>Louis Sanzogni</creator>
        
        <creator>Osama Alfarraj</creator>
        
        <subject>Information systems, user performance, task technology fit, technology acceptance model.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>In recent decades, information systems (IS) researchers have concentrated their efforts on developing and testing models that help with the investigation of IS and user performance in different environments. As a result, a number of models for studying end users’ systems utilization and other related issues, including system usefulness, system success and user aspects in business organizations, have appeared. A synthesized model consolidating three well-known and widely used models in IS research is proposed. Our model was empirically tested in a sophisticated IS environment, investigating the impacts of enterprise resource planning (ERP) systems on user perceived performance. Statistical analysis, including factor analysis and regression, was performed to test the model and establish its validity. The findings demonstrated that the proposed model performed well, as most factors had direct and/or indirect significant influences on user perceived performance, suggesting that the model possesses the ability to explain the main impacts of these factors on ERP users.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_4-Evaluating_the_impact_of_information_systems_on_end_user_performance.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Algebraic Specifications: Organised and focussed approach in software development</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010103</link>
        <id>10.14569/IJACSA.2010.010103</id>
        <doi>10.14569/IJACSA.2010.010103</doi>
        <lastModDate>2012-07-01T09:54:03.3000000+00:00</lastModDate>
        
        <creator>Rakesh L</creator>
        
        <creator>Dr. Manoranjan Kumar Singh</creator>
        
        <subject>Abstract data types (ADTs), Formal-Methods, Abstraction, Equational reasoning, Symbolic computation.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>Algebraic specification is a formal specification approach that deals with data structures in an implementation-independent way. It is a technique whereby an object is specified in terms of the relationships between the operations that act on that object. In this paper we are interested in proving facts about specifications, in general equations of the form t1 = t2, where t1 and t2 are members of Term(S), S being the signature of the specification. One way of executing the specification would be to compute the initial algebra for the specification and then check whether t1 and t2 belong to the same equivalence class. The use of formal specification techniques for software engineering has the advantage that we can reason about the correctness of the software before its construction. Algebraic specification methods can be used in software development to support verifiability, reliability, and usability. The main aim of this research work is to put such intuitive ideas into a concrete setting in order to obtain a better quality product.(...)</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_3-Algebraic_Specifications.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Traffic Classification – Packet-, Flow-, and Application-based Approaches</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010102</link>
        <id>10.14569/IJACSA.2010.010102</id>
        <doi>10.14569/IJACSA.2010.010102</doi>
        <lastModDate>2012-07-01T09:53:56.9900000+00:00</lastModDate>
        
        <creator>Sasan Adibi</creator>
        
        <subject>Traffic Classification; Packet; Flow; Applications; Delay; Payload Size.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>Traffic classification is a very important mathematical and statistical tool in communications and computer networking, used to find average and statistical information about the traffic passing through a certain pipe or hub. The results achieved from a proper deployment of a traffic analysis method provide valuable insights, including how busy a link is, the average end-to-end delays, and the average packet size. These valuable pieces of information help engineers design robust networks, avoid possible congestion, and foresee future growth. This paper is designed to capture the essence of traffic classification methods and consider them in packet-, flow-, and application-based contexts.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_2-Traffic_Classification-Packet-Flow-and_Application-based_Approaches.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Poultry Diseases Warning System using Dempster-Shafer Theory and Web Mapping</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010308</link>
        <id>10.14569/IJARAI.2012.010308</id>
        <doi>10.14569/IJARAI.2012.010308</doi>
        <lastModDate>2012-07-01T09:53:44.3470000+00:00</lastModDate>
        
        <creator>Andino Maseleno</creator>
        
        <creator>Md. Mahmud Hasan</creator>
        
        <subject>poultry diseases; early warning system; Dempster-Shafer theory, web mapping</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>In this research, the researchers built an early warning system for poultry diseases using Web Mapping and Dempster-Shafer theory. Early warning is the provision of timely and effective information, through identified institutions, that allows individuals exposed to a hazard to take action to avoid or reduce their risk and prepare for an effective response. In this paper, as an example, we use five major symptoms: depression; bluish combs, wattles and face region; swollen face region; narrowness of eyes; and balance disorders. The research location is Lampung Province, South Sumatera. The researchers chose Lampung Province in South Sumatera because it has a high poultry population. Our approach uses Dempster-Shafer theory to combine beliefs in certain hypotheses under conditions of uncertainty and ignorance, and allows quantitative measurement of the belief and plausibility in our identification result. Web Mapping is also used for displaying maps on a screen to visualize the result of the identification process. The results reveal that the Poultry Diseases Warning System has successfully identified the existence of poultry diseases, and the maps can be displayed as the visualization.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_8-Poultry_Diseases_Warning_System_using_Dempster_Shafer_Theory_and_Web_Mapping.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Leaf Image Segmentation Based On the Combination of Wavelet Transform and K Means Clustering</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010307</link>
        <id>10.14569/IJARAI.2012.010307</id>
        <doi>10.14569/IJARAI.2012.010307</doi>
        <lastModDate>2012-07-01T09:53:38.3700000+00:00</lastModDate>
        
        <creator>N Valliammal</creator>
        
        <creator>Dr. S. N. Geethalakshmi</creator>
        
        <subject>Image segmentation; Wavelet Transform; Haar Wavelet; K means clustering algorithm.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>This paper focuses on the Discrete Wavelet Transform (DWT) combined with K-means clustering for efficient plant leaf image segmentation. Segmentation is a basic pre-processing task in many image processing applications and is essential to separate plant leaves from the background. Locating and segmenting plants from the background in an automated way is a common challenge in the analysis of plant images. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images, and it is a fundamental task in agricultural computer vision. Although many methods have been proposed, it is still difficult to accurately segment an arbitrary image by one particular method. In recent years, more and more attention has been paid to combining segmentation algorithms with information from multiple feature spaces (e.g. color, texture, and pattern) in order to improve segmentation results. The performance of the segmentation is analyzed by the Jaccard index, Dice coefficient, variation of information and global consistency error methods. The proposed approach is verified with a real plant leaf database and gives better convergence when compared to existing segmentation methods.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_7-Leaf_Image_Segmentation_Based_On_the_Combination_of_Wavelet_Transform_and_K_Means_Clustering.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Genetic Algorithm Based Lane-By-Pass Approach for Smooth Traffic Flow on Road Networks</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010306</link>
        <id>10.14569/IJARAI.2012.010306</id>
        <doi>10.14569/IJARAI.2012.010306</doi>
        <lastModDate>2012-07-01T09:53:32.0970000+00:00</lastModDate>
        
        <creator>Shailendra Tahilyani</creator>
        
        <creator>Manuj Darbari</creator>
        
        <creator>Praveen Kumar Shukla</creator>
        
        <subject>Genetic Algorithms, Fuzzy Logic, Neural Network, Activity Theory.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>Traffic congestion in urban areas is a very critical problem and is increasing day by day due to the growing number of vehicles and non-expandable traffic infrastructure. Several intelligent control systems have been developed to deal with this issue. In this paper, a new lane-bypass algorithm is developed for route diversion, resulting in smooth traffic flow on urban road networks. Genetic algorithms are utilized for parameter optimization in this approach. Finally, the results of the proposed approach are found to be satisfactory.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_6-A_New_Genetic_Algorithm_Based_Lane-By-Pass_Approach_for_Smooth_Traffic_Flow_on_Road_Networks.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Temperature Control System Using Fuzzy Logic Technique</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010305</link>
        <id>10.14569/IJARAI.2012.010305</id>
        <doi>10.14569/IJARAI.2012.010305</doi>
        <lastModDate>2012-07-01T09:53:23.6700000+00:00</lastModDate>
        
        <creator>Isizoh A N</creator>
        
        <creator>Okide S.O</creator>
        
        <creator>Anazia A.E</creator>
        
        <creator>Ogu C.D</creator>
        
        <subject>Fuzzy logic; microcontroller; temperature sensor; Analogue to Digital Converter (ADC).</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>Fuzzy logic is an innovative technology used in designing solutions for multi-parameter and non-linear control models and for the definition of a control strategy. As a result, it delivers solutions faster than conventional control design techniques. This paper thus presents a fuzzy logic based temperature control system, which consists of a microcontroller, a temperature sensor, an operational amplifier, an Analogue-to-Digital Converter, a display interface circuit and an output interface circuit. It contains a design approach that uses the fuzzy logic technique to achieve a controlled temperature output function.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_5-Temperature_Control_System_Using_Fuzzy_Logic_Technique.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Detection Method for Clustered Microcalcification in Mammogram Image Based on Statistical Textural Features</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010304</link>
        <id>10.14569/IJARAI.2012.010304</id>
        <doi>10.14569/IJARAI.2012.010304</doi>
        <lastModDate>2012-07-01T09:53:17.2430000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <creator>Indra Nugraha Abdullah</creator>
        
        <creator>Hiroshi Okumura</creator>
        
        <subject>Automated Detection Method; Mammogram; Micro calcification; Statistical Textural Features; Standard Deviation.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>Breast cancer is the most frightening cancer for women in the world. A current problem closely related to this issue is how to deal with the small calcification parts inside the breast called microcalcifications (MC). As a preventive measure, a breast screening examination called a mammogram is provided. A mammogram image with a considerable amount of MC has been a problem for doctors and radiologists when they must correctly determine the region of interest, which in this study is clustered MC. Therefore, we propose an automated method to detect clustered MC utilizing two main methods: multi-branch standard deviation analysis for clustered MC detection and the surrounding region dependence method for individual MC detection. Our proposed method resulted in a classification rate of 70.8%, with sensitivity and specificity of 79% and 87%, respectively. These results are promising enough to warrant further development in some areas.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_4-Automated_Detection_Method_for_Clustered_Microcalcification_in_Mammogram_Image_Based_on_Statistical_Textural_Features.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Fuzzy Controller Design Using FPGA for Photovoltaic Maximum Power Point Tracking</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010303</link>
        <id>10.14569/IJARAI.2012.010303</id>
        <doi>10.14569/IJARAI.2012.010303</doi>
        <lastModDate>2012-07-01T09:53:13.8630000+00:00</lastModDate>
        
        <creator>Basil M Hamed</creator>
        
        <creator>Mohammed S. El-Moghany</creator>
        
        <subject>Fuzzy Control; MPPT; Photovoltaic System; FPGA.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>A photovoltaic cell has an optimum operating point at which it delivers maximum power. To obtain maximum power from a photovoltaic array, a photovoltaic power system usually requires a Maximum Power Point Tracking (MPPT) controller. This paper provides a small-power photovoltaic control system based on fuzzy control, with the MPPT controller designed and implemented in FPGA technology. The system is composed of a photovoltaic module, a buck converter and the fuzzy logic controller implemented on an FPGA for controlling the on/off time of the MOSFET switch of the buck converter. The proposed maximum power point tracking controller for the photovoltaic system is tested using a model designed in the Matlab/Simulink program, with a graphical user interface (GUI) for entering the parameters of any array model using information from its datasheet. Simulation and experimental results show that the performance of the fuzzy controller with FPGA in maximum power tracking of a photovoltaic array can be made use of in several photovoltaic products, with satisfactory results.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_3-Fuzzy_Controller_Design_Using_FPGA_for_Photovoltaic_Maximum_Power_Point_Tracking.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Hybrid Metaheuristics for the Unrelated Parallel Machine Scheduling to Minimize Makespan and Maximum Just-in-Time Deviations</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010302</link>
        <id>10.14569/IJARAI.2012.010302</id>
        <doi>10.14569/IJARAI.2012.010302</doi>
        <lastModDate>2012-07-01T09:53:07.8570000+00:00</lastModDate>
        
        <creator>Chiuh Cheng Chyu</creator>
        
        <creator>Wei-Shung Chang</creator>
        
        <subject>Greedy randomized adaptive search procedure; memetic algorithms; multi-objective combinatorial optimization; unrelated parallel machine scheduling; min-max matching</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>This paper studies the unrelated parallel machine scheduling problem with three minimization objectives – makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives combined relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. In order to improve the solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) x 3 (machines) and 200 x 5 problem instances with three combinations of two due date factors – tightness and range. The numerical results indicate that DAMA performs best and GRASP second best for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves the solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as benchmarks to evaluate the performance of other algorithms.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_2-Hybrid_Metaheuristics_for_the_Unrelated_Parallel_Machine_Scheduling_to_Minimize_Makespan_and_Maximum_Just-in-Time_Deviations.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Method for Learning Effciency Improvements Based on Gaze Location Notifications on e-learning Content Screen Display</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010301</link>
        <id>10.14569/IJARAI.2012.010301</id>
        <doi>10.14569/IJARAI.2012.010301</doi>
        <lastModDate>2012-07-01T09:52:59.8130000+00:00</lastModDate>
        
        <creator>Kohei Arai</creator>
        
        <subject>Gaze estimation; e-learning content; thesaurus engine.</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(3), 2012</description>
        <description>A method for learning efficiency improvement based on gaze location notifications on an e-learning content screen display is proposed. Experimental results with two types of e-learning content (content with relatively small motion, and content with moving pictures and annotation marks) show that R-squared values of 0.8038 to 0.9615 are observed between the duration of proper gaze location and achievement test scores.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No3/Paper_1-Method_for_Learning_Effciency_Improvements_Based_on_Gaze_Location_Notifications_on_e-learning_Content_Screen_Display.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Analysis, Design and Implementation of Human Fingerprint Patterns System “Towards Age &amp; Gender Determination, Ridge Thickness To Valley Thickness Ratio (RTVTR) &amp; Ridge Count On Gender Detection”</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010210</link>
        <id>10.14569/IJARAI.2012.010210</id>
        <doi>10.14569/IJARAI.2012.010210</doi>
        <lastModDate>2012-07-01T09:52:53.4700000+00:00</lastModDate>
        
        <creator>E O Omidiora</creator>
        
        <creator>O. Ojo</creator>
        
        <creator>N.A. Yekini</creator>
        
        <creator>T.O. Tubi</creator>
        
        <subject>Age, Gender, Fingerprint, Ridges Count, RTVTR</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>The aim of this research is to analyze human fingerprint texture in order to determine age &amp; gender, and the correlation of RTVTR and ridge count with gender detection. The study analyzes the effectiveness of physical biometrics (the thumbprint) in determining age and gender in humans. An application system was designed to capture the fingerprints of a sampled population through a fingerprint scanner device interfaced to the computer system via Universal Serial Bus (USB) and store them in a Microsoft SQL Server database, while a back-propagation neural network is used to train on the stored fingerprints. The specific objectives of this research are to: use a fingerprint sensor to collect different individuals' fingerprints alongside their age and gender; formulate a model and develop a fingerprint-based identification system to determine the age and gender of individuals; and evaluate the developed system.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_10-Analysis_Design_and_Implementation_of_Human_Fingerprint_Patterns_System.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Automated Marble Plate Classification System Based On Different Neural Network Input Training Sets And PLC Implementation</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010209</link>
        <id>10.14569/IJARAI.2012.010209</id>
        <doi>10.14569/IJARAI.2012.010209</doi>
        <lastModDate>2012-07-01T09:52:47.0530000+00:00</lastModDate>
        
        <creator>Irina Topalova</creator>
        
        <subject>Automated classification; DCT; DWT; Neural network; PLC</subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>Sorting marble plates according to their surface texture is an important task in automated marble plate production. Existing inspection systems in the marble industry that automate this classification are expensive and compatible only with specific technological equipment in the plant. In this paper a new approach to the design of an Automated Marble Plate Classification System (AMPCS), based on different neural network input training sets, is proposed, aiming at high classification accuracy using simple processing and only standard devices. It is based on training a classification MLP neural network with three different input training sets: extracted texture histograms, and the Discrete Cosine and Wavelet Transforms of those histograms. The algorithm is implemented on a PLC for real-time operation. The performance of the system is assessed with each of the input training sets, and the experimental results regarding classification accuracy and speed of operation are presented and discussed.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_9-Automated_Marble_Plate_Classification_System_Based_On_Different_Neural_Network_Input_Training_Sets_And_PLC_Implementation.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>Improving Performance Analysis of Routing Efficiency in Wireless Sensor Networks Using Greedy Algorithm</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2012</date>
        <link>http://dx.doi.org/10.14569/IJARAI.2012.010208</link>
        <id>10.14569/IJARAI.2012.010208</id>
        <doi>10.14569/IJARAI.2012.010208</doi>
        <lastModDate>2012-07-01T09:52:40.6270000+00:00</lastModDate>
        
        <creator>VUDA SREENIVASARAO</creator>
        
        <creator>CHANDRA SRINIVAS POTLURI</creator>
        
        <creator>SREEDEVI KADIYALA</creator>
        
        <creator>Capt. GENETU YOHANNES</creator>
        
        <subject>Greedy routing, void problem, localized algorithm, wireless sensor network. </subject>
        <description>International Journal of Advanced Research in Artificial Intelligence(IJARAI), 1(2), 2012</description>
        <description>The void problem, which causes routing failures, is the main challenge for greedy routing in wireless sensor networks. Existing work still cannot fully deal with the void problem, since excessive control overhead must be consumed to guarantee packet delivery. This paper addresses the void problem as it exists in current greedy routing algorithms for wireless sensor networks. The greedy anti-void routing (GAR) protocol proposed here guarantees packet delivery while resolving the excessive consumption of control overhead.</description>
        <description>http://thesai.org/Downloads/IJARAI/Volume1No2/Paper_8-Improve_Routing_Efficiency_in_Wireless_Sensor_Networks_Using_Greedy_Anti_void_Routing.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
    <record>
        <title>A New Approach for Handling Null Values in Web Server Log</title>
        <publisher>The Science and Information (SAI) Organization</publisher>
        <date>2010</date>
        <link>http://dx.doi.org/10.14569/IJACSA.2010.010101</link>
        <id>10.14569/IJACSA.2010.010101</id>
        <doi>10.14569/IJACSA.2010.010101</doi>
        <lastModDate>2010-08-01T00:00:00.0000000+00:00</lastModDate>
        
        <creator>Pradeep Ahirwar</creator>
        
        <creator>Deepak Singh Tomar</creator>
        
        <creator>Rajesh Wadhvani</creator>
        
        <subject>Null value, web mining, k-means clustering, fuzzy C-means clustering, log records, log parser.</subject>
        <description>International Journal of Advanced Computer Science and Applications(IJACSA), 1(1), 2010</description>
        <description>Web log data embed much of a user&#8217;s browsing behavior, and the operational data generated through end-user interaction on the Internet may contain noise, which affects knowledge-based decisions. Handling such noisy data is a major challenge. Null-value handling is an important noise-handling technique in relational database systems. In this work the issues related to null values are discussed, and a null-value handling concept based on a training data set is applied to a real MANIT web server log. A prototype system based on fuzzy C-means clustering with a trained data set is also proposed. The proposed method integrates the advantages of fuzzy systems and introduces a new criterion that enhances the estimated accuracy of the approximation. Comparisons between different methods for handling null values are presented. The results show the effectiveness of the methods empirically on realistic web logs and explore the accuracy, coverage, and performance of the proposed models.</description>
        <description>http://thesai.org/Downloads/Volume1No1/Paper_1-A_New_Approach_for_Handling_Null_Values_in_Web_Server_Log.pdf</description>
        <type>text</type>
        <language>eng</language>
    </record>
    
</records>